Dataset fields: halid (string, 8-12 chars), lang (string, 1 value), domain (sequence, 0-36 items), timestamp (string, 652 values), year (string, 55 values), url (string, 43-370 chars), text (string, 16-2.18M chars).
04095162
en
[ "math" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04095162/file/Brieskorn-cobordism.pdf
Vincent Blanloeil, email: [email protected]

COBORDANT ALGEBRAIC KNOTS DEFINED BY BRIESKORN POLYNOMIALS

Keywords: Knot cobordism, algebraic knot, Brieskorn singularity, Seifert form, Witt equivalence. Mathematics Subject Classification: Primary 57Q45; Secondary 57Q60, 32S55.

Abstract. We study the cobordism classes of algebraic knots defined by Brieskorn polynomials. We define new sets of invariants for these cobordism classes. Using these invariants we find more examples of distinct cobordism classes with distinct exponents.

Introduction

A Brieskorn polynomial is a polynomial of the form P(z) = z_1^{a_1} + z_2^{a_2} + ... + z_{n+1}^{a_{n+1}} with z = (z_1, z_2, ..., z_{n+1}), n ≥ 1, where the integers a_j ≥ 2, j = 1, 2, ..., n+1, are called the exponents. The complex hypersurface in C^{n+1} defined by P = 0 has an isolated singularity at the origin, which is called a Brieskorn singularity. To be more precise, let f : (C^{n+1}, 0) → (C, 0) be a holomorphic function germ with an isolated critical point at the origin. We denote by D^{2n+2}_ε the closed ball of radius ε > 0 centred at 0 in C^{n+1}, and by S^{2n+1}_ε its boundary. According to Milnor [START_REF] Milnor | Singular points of complex hypersurfaces[END_REF], the oriented homeomorphism class of the pair (D^{2n+2}_ε, f^{-1}(0) ∩ D^{2n+2}_ε) does not depend on the choice of a sufficiently small ε > 0, and by definition it is the topological type of f. The oriented diffeomorphism class of the pair (S^{2n+1}_ε, K_f), with K_f = f^{-1}(0) ∩ S^{2n+1}_ε, is the algebraic knot associated with f, where K_f is a closed oriented (2n-1)-dimensional manifold. According to Milnor's cone structure theorem [START_REF] Milnor | Singular points of complex hypersurfaces[END_REF], the algebraic knot K_f determines the topological type of f. In fact, it is known that the converse also holds.

Definition 1.1. An m-dimensional knot, or an m-knot, is a closed oriented m-dimensional submanifold of the oriented (m+2)-dimensional sphere S^{m+2}. When this submanifold is homeomorphic to a sphere we call it a spherical knot. Two m-knots K_0 and K_1 in S^{m+2} are said to be cobordant if there exists a properly embedded oriented (m+1)-dimensional submanifold X of S^{m+2} × [0, 1] such that (1) X is diffeomorphic to K_0 × [0, 1], and (2) ∂X = (K_0 × {0}) ∪ (-K_1 × {1}), where -K_1 × {1} denotes the manifold K_1 × {1} with the reversed orientation. A manifold X as above is called a cobordism between K_0 and K_1 (see Fig. 1).

[Figure 1. A cobordism between K_0 and K_1, drawn as a submanifold of S^{m+2} × [0, 1] joining K_0 in S^{m+2} × {0} to K_1 in S^{m+2} × {1}.]

In this paper, we study the cobordism classes of algebraic knots associated with Brieskorn singularities; we call such knots Brieskorn knots for short. Moreover, our goal is to study these cobordism classes in terms of the exponents of the Brieskorn polynomials. In [START_REF] Blanloeil | Cobordism of algebraic knots defined by Brieskorn polynomials[END_REF] we proved that when n equals 1 or 2, Brieskorn knots are cobordant if and only if they have the same exponents. We will therefore work with n ≥ 3. In [START_REF] Blanloeil | A theory of cobordism for non-spherical links[END_REF], for n ≥ 3, necessary and sufficient conditions for two algebraic (2n-1)-knots to be cobordant have been obtained in terms of Seifert forms (for the definition of the Seifert form, see §2). However, the computation of the Seifert form of a given algebraic knot is very difficult, and an explicit calculation of a Seifert form is known only for a very limited class of algebraic knots.
Furthermore, even if we know the Seifert forms explicitly, it is still difficult to decide whether two given such forms satisfy the algebraic conditions given in [START_REF] Blanloeil | A theory of cobordism for non-spherical links[END_REF] or not. When the knots are spherical, the algebraic condition of cobordism becomes much simpler. M. Kervaire [START_REF] Kervaire | Knot cobordism in codimension two[END_REF] and J. Levine [START_REF] Levine | Knot cobordism groups in codimension two[END_REF] proved that spherical knots are cobordant if and only if they have Witt equivalent Seifert forms, and J. Levine [START_REF] Levine | Invariants of knot cobordism[END_REF] gave a complete list of invariants for cobordism classes of spherical knots (for details, see §2). Recall that cobordism does not necessarily imply isotopy for algebraic knots in general. For details, see the survey article [START_REF] Blanloeil | Cobordism of fibered knots and related topics[END_REF].

In this paper, we first associate some spherical Brieskorn knots to a given Brieskorn knot. Since the exponent set of such an associated spherical knot is very close to that of the original Brieskorn knot, the study of the cobordism classes of these spherical Brieskorn knots imposes conditions on the exponent sets appearing in the initial Brieskorn cobordism class.

The paper is organized as follows. In §2 we give some definitions and classical results; then in §3 we give a new set of invariants for cobordism classes of Brieskorn knots. Throughout the paper we work in the smooth category. All homology groups are with integer coefficients unless otherwise specified.

Definitions and results on spherical knot cobordism

Let f(z) be a polynomial on C^{n+1} with an isolated critical point at the origin. We denote by F_f the Milnor fiber associated with f, i.e., F_f is the closure of a fiber of the Milnor fibration φ_f : S^{2n+1}_ε \ K_f → S^1 defined by φ_f(z) = f(z)/|f(z)|. According to Milnor [12], F_f is a compact 2n-dimensional submanifold of S^{2n+1}_ε which is homotopy equivalent to a bouquet of a finite number of copies of the n-dimensional sphere. The Seifert form L_f : H_n(F_f) × H_n(F_f) → Z associated with f is defined by L_f(α, β) = lk(a_+, b), where a and b are n-cycles representing α and β in H_n(F_f) respectively, a_+ is the n-cycle in S^{2n+1}_ε obtained by pushing a into the positive normal direction off F_f, and lk denotes the linking number of n-cycles in S^{2n+1}_ε. See Fig. 2 for a picture of the cycles needed to compute the trefoil knot's Seifert form.

[Figure 2. Computing a Seifert matrix for the trefoil knot: a Seifert surface F with cycles ξ, η and their push-offs ξ_+, η_+.]

It is known that the isomorphism class of the Seifert form is a topological invariant of f. Furthermore, two algebraic knots K_f and K_g associated with polynomials f and g on C^{n+1}, respectively, with isolated critical points at the origin are isotopic in S^{2n+1}_ε if and only if their Seifert forms L_f and L_g are isomorphic, provided that n ≥ 3. In fact, algebraic knots are simple fibered knots, as follows. We say that an oriented m-knot K is fibered if there exist a smooth fibration φ : S^{m+2} \ K → S^1 and a trivialization τ : N_K → K × D^2 of a closed tubular neighborhood N_K of K in S^{m+2} such that φ|_{N_K \ K} coincides with π ∘ τ|_{N_K \ K}, where π : K × (D^2 \ {0}) → S^1 is the composition of the projection to the second factor with the obvious projection D^2 \ {0} → S^1.
Note that then the closure of each fiber of φ in S^{m+2} is a compact (m+1)-dimensional oriented manifold whose boundary coincides with K. We shall often call the closure of a fiber simply a fiber. Moreover, for m = 2n-1 ≥ 1 we say that a fibered (2n-1)-knot K is simple if each fiber of φ is (n-1)-connected and K is (n-2)-connected. For details we refer the reader to [START_REF] Blanloeil | Cobordism of fibered knots and related topics[END_REF]. Note that two simple fibered (2n-1)-knots are isotopic if and only if they have isomorphic Seifert forms, provided n ≥ 3 (see [START_REF] Durfee | Fibered knots and algebraic singularities[END_REF][START_REF] Kato | A classification of simple spinnable structures on a 1-connected Alexander manifold[END_REF]).

Definition 2.1. Two bilinear forms L_i : G_i × G_i → Z, i = 0, 1, defined on free abelian groups G_i of finite rank are said to be Witt equivalent if there exists a direct summand M of G_0 ⊕ G_1 such that (L_0 ⊕ (-L_1))(x, y) = 0 for all x, y ∈ M and twice the rank of M is equal to the rank of G_0 ⊕ G_1. In this case, M is called a metabolizer.

2.1. Characterization of spherical Brieskorn knots. We refer to [START_REF] Brieskorn | Beispiele zue Diffenrentialtopologie von Singularitaten[END_REF] for the results of this subsection; the reader may consult the book [START_REF] Dimca | Singularities and topology of hypersurfaces[END_REF] for detailed proofs as well. Let P(z) = z_1^{a_1} + z_2^{a_2} + ... + z_{n+1}^{a_{n+1}} be a Brieskorn polynomial; we associate to this polynomial a graph G_P. This graph has n+1 vertices labeled by a_1, ..., a_{n+1}, and two vertices a_i and a_j are connected by an edge if and only if gcd(a_i, a_j) is strictly greater than 1. We denote by C_{ev,P} the connected component of G_P which contains all the even exponents; remark that C_{ev,P} may contain some odd-labeled vertices as well. We say that C_{ev,P} fulfills condition C̃ if it contains an odd number of vertices and gcd(a_i, a_j) = 2 for any two distinct vertices a_i and a_j in C_{ev,P}.

Theorem 2.2 ([4]). Let n ≥ 3. The Brieskorn knot associated to P is spherical if and only if the graph G_P contains at least two isolated vertices, or an isolated vertex with odd label and the component C_{ev,P} fulfills condition C̃.

Let K_P be the Brieskorn knot associated to the Brieskorn polynomial P(z) = z_1^{a_1} + z_2^{a_2} + ... + z_{n+1}^{a_{n+1}}. Let p and q be two distinct prime numbers such that for all i = 1, ..., n+1 we have gcd(p, a_i) = gcd(q, a_i) = gcd(p, q) = 1. Then we define the following polynomial: P_{p,q}(z) = z_1^{a_1} + z_2^{a_2} + ... + z_{n+1}^{a_{n+1}} + z_{n+2}^p + z_{n+3}^q. According to Theorem 2.2 and the h-cobordism theorem [START_REF] Smale | On the structure of 5-manifolds[END_REF], the Brieskorn knot K_{P_{p,q}} is spherical. Note that K_P and K_{P_{p,q}} are of dimension 2n-1 and 2n+3, respectively.
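The sphericity criterion of Theorem 2.2 is easy to test by machine. The following minimal sketch (ours, not from the paper; the function name and the sample exponents are illustrative assumptions) builds the graph G_P from a tuple of exponents and checks the two alternatives of the theorem, using the reading of condition C̃ given above.

```python
from math import gcd
from itertools import combinations

def is_spherical_brieskorn(exponents):
    """Test the criterion of Theorem 2.2 for the Brieskorn knot with the given
    exponents (a_1, ..., a_{n+1}); the theorem assumes n >= 3.
    Vertices are indices of exponents; i and j are joined when gcd(a_i, a_j) > 1."""
    idx = range(len(exponents))
    adj = {i: {j for j in idx if j != i and gcd(exponents[i], exponents[j]) > 1}
           for i in idx}
    isolated = [i for i in idx if not adj[i]]
    if len(isolated) >= 2:
        return True
    # connected component C_ev containing all the even exponents
    comp = {i for i in idx if exponents[i] % 2 == 0}
    frontier = set(comp)
    while frontier:
        frontier = {j for i in frontier for j in adj[i]} - comp
        comp |= frontier
    condition_c = (len(comp) % 2 == 1 and
                   all(gcd(exponents[i], exponents[j]) == 2
                       for i, j in combinations(comp, 2)))
    return bool(isolated) and exponents[isolated[0]] % 2 == 1 and condition_c

# Pairwise coprime exponents: every vertex of G_P is isolated, so the first alternative applies.
print(is_spherical_brieskorn((2, 3, 5, 7, 11)))
```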
2.2. Levine's complete invariant set of cobordant isometric structures. We refer to [START_REF] Levine | Invariants of knot cobordism[END_REF] for all results of this subsection. In order to study C_n, the cobordism group of knotted n-dimensional spheres in codimension two, J. Levine introduced isometric structures (⟨ , ⟩, T), where ⟨ , ⟩ is a non-degenerate symmetric bilinear form on a finite dimensional vector space V over Q, and T is an isometry of V. Recall that the group law of C_n is the connected sum; two such knots K_0 and K_1 are cobordant if the oriented connected sum K_0 # (-K_1) is null-cobordant, i.e. cobordant to the trivial embedding, and this happens exactly when the associated Seifert forms are Witt equivalent.

Following [START_REF] Levine | Invariants of knot cobordism[END_REF], an isometric structure (⟨ , ⟩, T) is null-cobordant when V contains a totally isotropic subspace, invariant under T, of half the dimension of V. Moreover, two isometric structures (⟨ , ⟩, T) and (⟨ , ⟩', T') are cobordant if the orthogonal sum (⟨ , ⟩, T) ⊥ (-⟨ , ⟩', T') is null-cobordant. As in the case of spherical knots, this equivalence relation gives an abelian group of cobordism classes of isometric structures. Let Δ_T(t) be the characteristic polynomial of T. Levine proved that the group of cobordism classes of isometric structures (⟨ , ⟩, T) satisfying Δ_T(1)Δ_T(-1) ≠ 0 is isomorphic to the Witt equivalence group of matrices A such that (A - ᵗA)(A + ᵗA) is non-singular. Under this isomorphism, to a square matrix A over Q such that (A - ᵗA)(A + ᵗA) is non-singular we associate the isometric structure (A + ᵗA, -A^{-1} ᵗA). This isomorphism allows us to study Witt equivalence of matrices in terms of cobordism classes of isometric structures.

Let (⟨ , ⟩, T) be an isometric structure over Q. Let Λ = Q[t, t^{-1}] be the ring of Laurent polynomials over Q. We consider V, the vector space on which ⟨ , ⟩ and T are defined, as a Λ-module, the action of t being given by T. Let Δ_T(t) = ∏_{i=1}^r λ_i(t)^{e_i} be the factorization of Δ_T(t) into irreducible factors over Q. To each irreducible factor λ_i we associate V_{λ_i} = Ker λ_i(T)^N for N a large integer; such a V_{λ_i} is called a primary component of V. Moreover, V is the direct sum V = ⊕_{i=1}^r V_{λ_i}. Let λ(t) be a symmetric irreducible factor of Δ_T, i.e. such that λ(t) = ±t^{deg(λ)} λ(t^{-1}); then Levine defined
(1) ε_λ(⟨ , ⟩, T), equal to the exponent of λ(t) in Δ_T(t) mod 2;
(2) σ_λ(⟨ , ⟩, T), equal to the signature of the restriction of ⟨ , ⟩ to V_λ over R;
(3) μ_λ(⟨ , ⟩, T) = (-1, -1)^{r(r+3)/2} (det⟨ , ⟩, -1)^r S(⟨ , ⟩), where ⟨ , ⟩ is of rank 2r, ( , ) denotes the Hilbert symbol for ⟨ , ⟩ over R, and S(⟨ , ⟩) is the Hasse symbol over Q.

We have the following theorem.

Theorem 2.3 ([11]). Two isometric structures α and β are in the same cobordism class if and only if ε_λ(α) = ε_λ(β), σ_λ(α) = σ_λ(β) and μ_λ(α) = μ_λ(β) for all λ(t) for which these invariants are defined.

Results

3.1. Cobordism classes of Brieskorn knots. First, we prove the following proposition.

Proposition 3.1. Let K_P and K_Q be two Brieskorn knots associated with two polynomials P and Q in n+1 variables. If there exist two distinct prime numbers p and q such that the spherical Brieskorn knots K_{P_{p,q}} and K_{Q_{p,q}} are not cobordant, then K_P and K_Q are not cobordant.

Proof. Since the polynomial P_{p,q} is obtained from P by adding a two-variable polynomial of the form z_i^p + z_{i+1}^q, according to K. Sakamoto [START_REF] Sakamoto | The Seifert matrices of Milnor fiberings defined by holomorphic functions[END_REF] the Seifert form of the Brieskorn knot K_{P_{p,q}} is obtained as the tensor product of the Seifert form of K_P with a square matrix A_{p,q}. By Durfee [START_REF] Durfee | Fibered knots and algebraic singularities[END_REF] we know that A_{p,q} is a matrix with only 0 and ±1 coefficients, which is determined by p and q only. If two Brieskorn knots K_P and K_Q are cobordant, then they have Witt equivalent Seifert forms A_{K_P} and A_{K_Q} defined on Z-modules G_P and G_Q. Let M, a submodule of G_P ⊕ G_Q, be a metabolizer for A_{K_P} ⊕ (-A_{K_Q}), and let d be the dimension of the square matrix A_{p,q}.
Then the submodule ⊕_{i=1}^d M_i of ⊕_{i=1}^d (G_{P,i} ⊕ G_{Q,i}), where M_i is a copy of M and G_{P,i} ⊕ G_{Q,i} is a copy of G_P ⊕ G_Q, is a metabolizer for (A_{K_P} ⊗ A_{p,q}) ⊕ (-A_{K_Q} ⊗ A_{p,q}). Hence the two Seifert forms A_{K_P} ⊗ A_{p,q} and A_{K_Q} ⊗ A_{p,q} are Witt equivalent. Since the Brieskorn knots K_{P_{p,q}} and K_{Q_{p,q}} are spherical, we know that they are cobordant. We have proved the proposition by contraposition.

Proposition 3.2. The Brieskorn knots K_P and K_Q associated to the polynomials P(z) = ∑_{j=1}^{n+1} z_j^{a_j} and Q(z) = ∑_{j=1}^{n+1} z_j^{b_j} are cobordant if and only if the Brieskorn knots K_{P_+} and K_{Q_+} associated to the polynomials P_+(z) = z_{n+2}^2 + ∑_{j=1}^{n+1} z_j^{a_j} and Q_+(z) = z_{n+2}^2 + ∑_{j=1}^{n+1} z_j^{b_j} are cobordant.

Proof. According to [START_REF] Sakamoto | The Seifert matrices of Milnor fiberings defined by holomorphic functions[END_REF] the knots K_P and K_{P_+} have the same Seifert forms, and the same is true for K_Q and K_{Q_+}. Since the cobordism class of a fibered knot is completely determined by the algebraic cobordism class of its Seifert form (see [START_REF] Blanloeil | A theory of cobordism for non-spherical links[END_REF]), K_P and K_Q have algebraically cobordant Seifert forms if and only if the same holds for K_{P_+} and K_{Q_+}. Hence, the Brieskorn knots K_P and K_Q are cobordant if and only if the Brieskorn knots K_{P_+} and K_{Q_+} are cobordant.

Remark 3.3. The knots K_P and K_{P_+} have the same Seifert forms, hence they have the same Alexander polynomials.

Recall that if A is a Seifert matrix of a Brieskorn knot K_P associated with a Brieskorn polynomial P(z) = z_1^{a_1} + z_2^{a_2} + ... + z_{n+1}^{a_{n+1}}, then its Alexander polynomial is defined by Δ_K(t) = det(t A + (-1)^n ᵗA). Since Brieskorn knots are fibered knots, we know that the Alexander polynomial is the characteristic polynomial of the monodromy.

Proposition 3.4. The set of irreducible factors over Q of the Alexander polynomial of a Brieskorn knot is an invariant of its cobordism class.

Proof. By the monodromy theorem proved by E. Brieskorn [START_REF] Brieskorn | Die Monodromie der isolierten Singularitäten von Hyperflächen[END_REF], the Alexander polynomials of Brieskorn knots are products of cyclotomic polynomials. Moreover, the Alexander polynomials of Brieskorn knots have been computed by F. Pham [START_REF] Pham | Formules de Picard-Lefschetz généralisées et ramification des intégrales[END_REF] and E. Brieskorn [START_REF] Brieskorn | Beispiele zue Diffenrentialtopologie von Singularitaten[END_REF]. More precisely, let P be the polynomial P(z) = z_1^{a_1} + ... + z_{n+1}^{a_{n+1}}; Brieskorn gave the following factorization over C for the Alexander polynomial of the Brieskorn knot K_P:

Δ_{K_P}(t) = ∏_{0 < i_k < a_k} ( t - ζ_{a_1}^{i_1} · ... · ζ_{a_{n+1}}^{i_{n+1}} )    (∗)

where ζ_{a_k} = e^{2πi/a_k}. Let K_P and K_Q be two Brieskorn knots with distinct sets of irreducible factors in their Alexander polynomials; then these Alexander polynomials have distinct sets of roots over C. The Brieskorn knots K_{P_{p,q}} and K_{Q_{p,q}} are defined with the same integers p and q, which are distinct prime numbers and are both coprime with all the exponents of P and Q. Then, according to (∗), the Alexander polynomials of K_{P_{p,q}} and K_{Q_{p,q}} have distinct irreducible factors, since they do not have the same complex roots. Moreover, Theorem 2.2 implies that K_{P_{p,q}} and K_{Q_{p,q}} are spherical Brieskorn knots. According to [START_REF] Durfee | Fibered knots and algebraic singularities[END_REF], if A is the Seifert form associated to the Milnor fibration, then its intersection form is defined by the relation S = A + (-1)^n ᵗA and its monodromy is defined by the relation h = (-1)^{n-1} A^{-1} ᵗA. Moreover, by Proposition 3.2 one can suppose that (-1)^n = 1. Hence (S, h) is an isometric structure on H_n(F; Q), where F is the Milnor fiber of the isolated singularity at 0 of a Brieskorn polynomial, S is the intersection form and h is the monodromy. On top of that, up to an invertible element, the characteristic polynomial of h is the Alexander polynomial defined using the Seifert form, which is a product of symmetric cyclotomic polynomials.
Finally, according to Theorem 2.3, the two spherical Brieskorn knots K_{P_{p,q}} and K_{Q_{p,q}} cannot be cobordant, since they do not have the same list of invariants. By Proposition 3.1, the knots K_P and K_Q are then not cobordant, which proves the proposition.

In [START_REF] Blanloeil | Cobordism of algebraic knots defined by Brieskorn polynomials[END_REF] we proved that cobordant Brieskorn knots must have the same exponents, up to order, provided no exponent is a multiple of another one. Now, we have the following result, which is an immediate corollary of Proposition 3.4.

Corollary 3.5. Let K_P and K_Q be the Brieskorn knots associated to the polynomials P(z) = ∑_{j=1}^{n+1} z_j^{a_j} and Q(z) = ∑_{j=1}^{n+1} z_j^{b_j}. Let P_P and P_Q be the sets of distinct irreducible factors of each of the polynomials P and Q. If P_P ≠ P_Q, then the knots K_P and K_Q are not cobordant.

Proof. Set ζ_{a_k} = e^{2πi/a_k}; as before we have the factorizations

Δ_{K_P}(t) = ∏_{0 < i_k < a_k} ( t - ζ_{a_1}^{i_1} · ... · ζ_{a_{n+1}}^{i_{n+1}} )  and  Δ_{K_Q}(t) = ∏_{0 < i_k < b_k} ( t - ζ_{b_1}^{i_1} · ... · ζ_{b_{n+1}}^{i_{n+1}} ).

When P_P ≠ P_Q the polynomials Δ_{K_P} and Δ_{K_Q} admit distinct complex roots. Hence they do not have the same irreducible factors over Q. By Proposition 3.4 the knots K_P and K_Q are not cobordant.

In the following, for each Brieskorn knot, we will define a list of integers which only depends on its exponents. Since these integers are related to the factorization of the Alexander polynomial as a product of cyclotomic polynomials, we will get an invariant of cobordism classes of Brieskorn knots. In the following, when α and β are two integers, we denote by α ∧ β the greatest common divisor of α and β, and by [α, β] the lowest common multiple of α and β. Let k and d be two integers. Let μ_k(d) be the greatest divisor of k ∧ d which is coprime with k/(k ∧ d). We then associate to the pair of integers (k, d) the following set of integers:

Ψ_k(d) = { [k,d]/l : l is a divisor of dμ_k(d)/(k ∧ d) } \ {k}   if μ_k(d) ≤ 2,
Ψ_k(d) = { [k,d]/l : l is a divisor of dμ_k(d)/(k ∧ d) }         otherwise.

Definition 3.6. When K_P is a Brieskorn knot associated with the polynomial P(z) = ∑_{j=1}^{n+1} z_j^{a_j}, for all distinct prime numbers p and q which are both coprime to each of a_1, ..., a_{n+1}, we define the set Ξ_{K_P,p,q} as follows:
(1) Ξ^0_{K_P,p,q} = {pq};
(2) for i = 1 to n+1 we set Ξ^i_{K_P,p,q} = ∪ { Ψ_{α_j}(a_i) | α_j ∈ Ξ^{i-1}_{K_P,p,q} };
(3) Ξ_{K_P,p,q} = Ξ^{n+1}_{K_P,p,q}.
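Since Definition 3.6 is purely arithmetic, the invariant Ξ_{K_P,p,q} can be computed directly from the exponents. The sketch below is ours and is intended only as an illustration: the displayed cases of Ψ_k(d) were partly garbled in the source, so the helper functions implement the reconstruction given above rather than a definitive version of the authors' definition.

```python
from math import gcd

def coprime_part(g, m):
    """Greatest divisor of g that is coprime with m."""
    while (c := gcd(g, m)) > 1:
        g //= c
    return g

def psi(k, d):
    """The set Psi_k(d), following the reconstruction in the text above."""
    g = gcd(k, d)
    mu = coprime_part(g, k // g)        # mu_k(d)
    lam = d * mu // g                   # d * mu_k(d) / (k ^ d)
    lcm = k * d // g                    # [k, d]
    s = {lcm // l for l in range(1, lam + 1) if lam % l == 0}
    if mu <= 2:
        s.discard(k)
    return s

def xi(exponents, p, q):
    """The set Xi_{K_P, p, q} of Definition 3.6 for a Brieskorn knot with the
    given exponents and two auxiliary primes p, q coprime to all exponents."""
    current = {p * q}
    for a in exponents:                 # one step for each exponent a_1, ..., a_{n+1}
        current = set().union(*(psi(alpha, a) for alpha in current))
    return sorted(current)

print(xi((3, 4, 5, 7), p=11, q=13))     # hypothetical exponents, chosen only for illustration
```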
We will prove that Ξ_{K_P,p,q} is an invariant of the cobordism class of a Brieskorn knot.

Proposition 3.7. Let K_P be a Brieskorn knot associated with the polynomial P(z) = ∑_{j=1}^{n+1} z_j^{a_j}. Let p and q be two distinct prime numbers which are both coprime to a_1, ..., a_{n+1}. Then the set of integers Ξ_{K_P,p,q} is an invariant of its cobordism class.

Proof. When P(z) is a Brieskorn polynomial with z = (z_1, z_2, ..., z_{n+1}) and n ≥ 1, I. Savel'ev [START_REF] Savel'ev | Cyclotomic polynomials and singularities of complex hypersurfaces (Russian) Uspekhi Mat. Nauk[END_REF] computed the Alexander polynomial of Q(z, z_{n+2}) = P(z) + z_{n+2}^d, where d is an integer. Let Φ_n(t) be the n-th cyclotomic polynomial. If the Alexander polynomial of P(z) is Δ_P(t) = ∏_{l=1}^N Φ_{k_l}(t)^{τ_l}, with τ_l > 0, then

Δ_Q(t) = ∏_{l=1}^N ( ∏_{ν | λ_{k_l,d}} Φ_{[k_l,d]/ν}(t)^{φ(μ_{k_l}(d))} ) Φ_{k_l}(t)^{τ_l}    (⋆)

where λ_{k_l,d} = d μ_{k_l}(d)/(k_l ∧ d). We see that {Φ_{k_1}, ..., Φ_{k_N}} is the set of irreducible factors of Δ_P(t) and {Φ_m : m ∈ Ψ_{α_j}(d), α_j ∈ {k_1, ..., k_N}} is the set of irreducible factors of Δ_Q(t). According to [START_REF] Brieskorn | Beispiele zue Diffenrentialtopologie von Singularitaten[END_REF], the Alexander polynomial of the Brieskorn knot K_{x^p+y^q} is

Δ_{p,q}(t) = (t^{pq/r} - 1)^r (t - 1) / ((t^p - 1)(t^q - 1)),  where r = p ∧ q.

If p and q are distinct prime numbers, then we have Δ_{p,q}(t) = Φ_{pq}(t)Φ_p(t)Φ_q(t)Φ_1(t)^2 / (Φ_p(t)Φ_q(t)Φ_1(t)^2) = Φ_{pq}(t). When μ_{k_l}(d) ≤ 2, φ(μ_{k_l}(d)) equals 1 and Φ_{k_l} is no longer a factor of Δ_Q in (⋆). Hence Ξ_{K_P,p,q} is the set of irreducible factors of Δ_{P_{p,q}}. According to Proposition 3.4, when p and q are two distinct prime numbers which are both coprime to each of a_1, ..., a_{n+1}, the set Ξ_{K_P,p,q} is an invariant of the cobordism class of K_P.

Remark 3.8. The last proposition is just a reformulation of Proposition 3.4, but it gives a computable list of integers which is an invariant of the cobordism class of a Brieskorn knot.

3.2. Examples. Recall that the Alexander polynomials of cobordant knots must satisfy a Fox-Milnor relation. In [START_REF] Blanloeil | Cobordism of fibered knots and related topics[END_REF] we only use this property to determine if knots cannot be cobordant. Moreover, in the same paper, example 3 [START_REF] Kato | A classification of simple spinnable structures on a 1-connected Alexander manifold[END_REF] gave the examples of Brieskorn knots K_f and K_g where n > 3, p_1, p_2, ..., p_{n-3} ≥ 2 and

f(z) = z_1^{p_1} + z_2^{p_2} + ... + z_{n-3}^{p_{n-3}} + z_{n-2}^8 + z_{n-1}^8 + z_n^4 + z_{n+1}^4  and
g(z) = z_1^{p_1} + z_2^{p_2} + ... + z_{n-3}^{p_{n-3}} + z_{n-2}^6 + z_{n-1}^6 + z_n^6 + z_{n+1}^6,

for which it was unknown whether they are in the same cobordism class, since the product of the Alexander polynomials fulfills a Fox-Milnor relation. With Proposition 3.4 we have a new method to determine whether such Brieskorn knots lie in the same cobordism class or not. More precisely, if 3 is coprime with all p_1, ..., p_{n-3}, then the sets of prime factors of the exponents of the polynomials f and g are distinct. Recall that when P(z) = ∑_{j=1}^{n+1} z_j^{a_j} we have Δ_{K_P}(t) = ∏_{0 < i_k < a_k} (t - ζ_{a_1}^{i_1} · ... · ζ_{a_{n+1}}^{i_{n+1}}). Hence, the polynomials Δ_{K_f} and Δ_{K_g} do not have the same set of complex roots. By Proposition 3.4 we see that in this case the Brieskorn knots K_f and K_g are not cobordant.

3.3. Conjecture. We found more examples of Brieskorn knots for which distinct exponents imply distinct cobordism classes. Moreover, according to Proposition 3.7 it seems difficult to find examples of Brieskorn knots which are cobordant and have distinct exponents up to order. Hence, we formulate the following conjecture.

Conjecture 3.9. The Brieskorn knots associated to the polynomials P(z) = ∑_{j=1}^{n+1} z_j^{a_j} and Q(z) = ∑_{j=1}^{n+1} z_j^{b_j} are cobordant if and only if a_j = b_j, j = 1, 2, ..., n+1, up to order.

Remark that if this conjecture is true, then the multiplicity of a Brieskorn knot will be an invariant of its cobordism class.
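As a quick symbolic sanity check of the identity Δ_{p,q}(t) = Φ_{pq}(t) for distinct primes p and q, used in the proof of Proposition 3.7, one can verify it with a computer algebra system. The snippet below is ours and is only an illustration for p = 2, q = 3.

```python
from sympy import symbols, simplify, cyclotomic_poly

t = symbols('t')
p, q = 2, 3  # distinct primes, so r = gcd(p, q) = 1

# Brieskorn's factorization: Delta_{p,q}(t) = (t^{pq} - 1)(t - 1) / ((t^p - 1)(t^q - 1))
delta = (t**(p * q) - 1) * (t - 1) / ((t**p - 1) * (t**q - 1))

# For distinct primes this should reduce to the pq-th cyclotomic polynomial
print(simplify(delta - cyclotomic_poly(p * q, t)))  # expected output: 0
```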
04090325
en
[ "shs.eco" ]
2024/03/04 16:41:18
1988
https://shs.hal.science/halshs-04090325/file/LET2109_TRANSPORTS_TERRESTRES_MARCHANDISES_PAYS_BAS.pdf
Maurice Bernadet. Les transports terrestres de marchandises aux Pays-Bas. Actualisation [Overland freight transport in the Netherlands: an update].
04095276
en
[ "nlin", "math" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04095276/file/Proposal_Fandio_article14.pdf
Rubin Fandio, Hamadjam Abboubakar (email: [email protected]), Henri Paul Ekobena Fouda (email: [email protected]), Anoop Kumar

Mathematical modelling and projection of Buruli ulcer transmission dynamics using classical and fractional derivatives: A case study of Cameroon

Keywords: Buruli ulcer, Fractional derivatives, Caputo derivative, Asymptotic stability, Parameter estimation, Strength number, Adams-Bashforth-Moulton scheme (ABM). MSC: 26A33, 34C60, 92C60, 92D30.

Abstract. The goal of this work is to derive and study a new model describing the transmission dynamics of Buruli ulcer (BU), in which we replace mass-action incidences with standard incidences, include a latent period, and use both integer and fractional derivatives. We first compute the basic reproduction number, denoted by R_0, and prove the asymptotic stability of the Buruli-free equilibrium whenever R_0 < 1. Then, for R_0 > 1, we prove the existence of a unique positive equilibrium point and its global stability using the general theory of Lyapunov. We perform parameter estimation to calibrate our model with real data from Cameroon, while sensitivity analysis is conducted to determine important parameters in the BU dynamics. From this model calibration, we obtain R_0 = 2.0843, which is greater than one and implies that BU is endemic in the country. To determine whether BU admits one or multiple waves, we compute the strength number, denoted by A_0. We find that A_0 ≈ 17 > 0, which means that BU admits multiple waves. We then replace the integer derivative with a fractional derivative and prove the existence of equilibrium points as well as their asymptotic stability. This is followed by the proof of existence and uniqueness of solutions of the fractional model. We then construct a numerical scheme based on the Adams-Bashforth-Moulton (ABM) method. Theoretical results are validated by numerical simulations, which also allow us to evaluate the impact of varying the fractional order α on the BU dynamics.

Introduction

For several decades, mathematical modelling has been positioned as the discipline that allows us to better understand the dynamics of disease transmission and the appropriate control mechanisms that lead to successful intervention. This modelling involves mathematical models constructed for infectious diseases to study the non-linear processes involved in these disease dynamics and determine the best strategy to control them. For examples of such mathematical models, see [START_REF] Abboubakar | Projections and fractional dynamics of the typhoid fever: A case study of mbandjock in the centre region of cameroon[END_REF][START_REF]New concept in calculus: Piecewise differential and integral operators[END_REF][START_REF] Ding | A fractional-order differential equation model of hiv infection of cd4+ t-cells[END_REF][START_REF] Singh | A fractional epidemiological model for computer viruses pertaining to a new fractional derivative[END_REF]. Most mathematical models are constructed using classical (integer-order) derivatives (partial and ordinary differential equations), and in recent years the concept of fractional derivatives, which generalizes classical derivatives, has been considered very effective in different scientific fields.
For example, modelling the typhoid fever disease with fractional derivative [START_REF] Abboubakar | Fractional dynamics of typhoid fever transmission models with mass vaccination perspectives[END_REF][START_REF] Abboubakar | Projections and fractional dynamics of the typhoid fever: A case study of mbandjock in the centre region of cameroon[END_REF], mathematical modelling of the computer virus spreading [START_REF] Singh | A fractional epidemiological model for computer viruses pertaining to a new fractional derivative[END_REF], the COVID-19 modelling [START_REF] Atangana | Modeling third waves of covid-19 spread with piecewise differential and integral operators: Turkey, spain and czechia[END_REF]. Since then, several works are appear in the literature for infectious diseases such as Malaria [START_REF] Chitnis | Bifurcation analysis of a mathematical model for malaria transmission[END_REF][START_REF] Teboh-Ewungkem | Models and proposals for malaria: a review[END_REF], Dengue [START_REF] Rodrigues | Vaccination models and optimal control strategies to dengue[END_REF], Chikungunya [START_REF] Abboubakar | Mathematical modeling and projections of a vector-borne disease with optimal control strategies: A case study of the chikungunya in chad[END_REF][START_REF] Dumont | Vector control for the chikungunya disease[END_REF][START_REF] Dumont | Mathematical studies on the sterile insect technique for the chikungunya disease and aedes albopictus[END_REF], HIV/AIDS [START_REF] Bryan | Mediational analysis in hiv/aids research: Estimating multivariate path analytic models in a structural equation modeling framework[END_REF][START_REF] Karrakchou | Optimal control and infectiology: application to an hiv/aids model[END_REF], Cholera [START_REF] Capasso | A mathematical model for the 1973 cholera epidemic in the european mediterranean region[END_REF][START_REF] Edward | A mathematical model for the dynamics of cholera with control measures[END_REF][START_REF] Sun | Transmission dynamics of cholera: Mathematical modeling and control strategies[END_REF][START_REF] Wang | Mathematical models for cholera dynamics-a review[END_REF], Tuberculosis [START_REF] Egonmwan | Analysis of a mathematical model for tuberculosis with diagnosis[END_REF][START_REF] Feng | On the role of variable latent periods in mathematical models for tuberculosis[END_REF][START_REF] Houben | The global burden of latent tuberculosis infection: a re-estimation using mathematical modelling[END_REF][START_REF] Waaler | The use of mathematical models in the study of the epidemiology of tuberculosis[END_REF], Measles [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF][START_REF] Qureshi | Modeling of measles epidemic with optimized fractional order under caputo differential operator[END_REF][START_REF] Thompson | Evolution and use of dynamic transmission models for measles and rubella risk and policy analysis[END_REF], Coronavirus (COVID-19) [START_REF] Djaoue | Mathematical modeling, analysis and numerical simulation of the covid-19 transmission with mitigation of control strategies used in cameroon[END_REF][START_REF] Khan | Mathematical modeling and analysis of covid-19: A study of new variant omicron[END_REF][START_REF] Nabi | Forecasting of covid-19 pandemic: From integer derivatives to fractional derivatives[END_REF], and Buruli ulcer [START_REF] Chu | Mathematical modeling and stability analysis of buruli ulcer in possum mammals[END_REF][START_REF] Cynthia | Modelling transmission of buruli ulcer in 
the central region of ghana[END_REF][START_REF] Edholm | A risk-structured mathematical model of buruli ulcer disease in ghana[END_REF][START_REF] Momoh | Modeling, optimal control of intervention strategies and cost effectiveness analysis for buruli ulcer model[END_REF][START_REF] Nyabadza | On the transmission dynamics of buruli ulcer in ghana: Insights through a mathematical model[END_REF][START_REF] Zhao | A mathematical model for the coinfection of buruli ulcer and cholera[END_REF]. Speaking of the latter, Buruli Ulcer (BU) is a disease that attacks the skin and sometimes the bones, and may also be known as necrotizing skin disease. The pathogen agent of BU is called Mycobacterium ulcerans. It is the most common mycobacterial infection after Hansen's disease (Leprosy) and tuberculosis [START_REF] Portaels | Buruli ulcer[END_REF][START_REF] Weir | Buruli ulcer: the third most common mycobacterial infection[END_REF]. Infection by Mycobacterium ulcerans causes chronic necrotizing ulcers [START_REF] Van Der Werf | Mycobacterium ulcerans disease[END_REF], implying in functional limitation, physical deformities, as well as social stigma. Long-term disabilities can be observed if the infection is not treated [START_REF] Agbenorku | Buruli-ulcer induced disability in ghana: a study at apromase in the ashanti region[END_REF][START_REF] Owusu-Sekyere | Perceptions and attitudes: The challenge of managing buruli ulcer morbidity in ghana[END_REF]. The disease is generally endemic in regions around slow-flowing or stagnant water bodies, and periodically flooded or continuously humid areas covered with a shallow water table which worldwide, has proven to be a cause of risk for infection [START_REF] Sopoh | Family relationship, water contact and occurrence of buruli ulcer in benin[END_REF][START_REF] Williamson | Distribution of mycobacterium ulcerans in buruli ulcer endemic and non-endemic aquatic sites in ghana[END_REF]. In addition, BU is age and sexes independent. The clinical evolution of BU in people co-infected with HIV is more aggressive. Indeed, complications due to the management of HIV imply a poor treatment outcome of BU, which has allowed WHO to publish a technical guide to help clinicians manage co-infection and especially children under 15 years old [START_REF]Buruli ulcer (Mycobacterium ulcerans infection)[END_REF]. However, children under the age of five are less affected to Mycobacterium ulcerans compared to older [START_REF] Bratschi | Geographic distribution, age pattern and sites of lesions in a cohort of buruli ulcer patients from the mapé basin of cameroon[END_REF][START_REF] Röltgen | Late onset of the serological response against the 18 kda small heat shock protein of mycobacterium ulcerans in children[END_REF]. Mycobacterium ulcerans is heat-sensitive, it mainly causes skin lesions. 90% of these lesions are observed on the limbs although the rest of the body can be affected [START_REF] Bratschi | Geographic distribution, age pattern and sites of lesions in a cohort of buruli ulcer patients from the mapé basin of cameroon[END_REF]. Nowadays, the treatment of BU is not limited to surgery alone [START_REF] Asiedu | Socioeconomic implications of buruli ulcer in ghana: a three-year review[END_REF], it is based on the combination of complementary treatments and antibiotics [START_REF]World Health Organisation, Treatment of Mycobacterium ulcerans disease (Buruli ulcer): Guidance for health workers[END_REF]. 
For health workers, a treatment guidance is available in [START_REF] Asiedu | Buruli ulcer: Mycobacterium ulcerans infection, tech. rep[END_REF]. The current recommended treatment consists of a combination of Clarithromycin and Rifampicin [START_REF]World Health Organisation, Treatment of Mycobacterium ulcerans disease (Buruli ulcer): Guidance for health workers[END_REF][START_REF]Buruli ulcer (Mycobacterium ulcerans infection)[END_REF]. Although BU disease has been reported worldwide, in several countries with high infection, 33 countries are in Africa, Asia, South American countries, and the Western Pacific, except for Australia, China, and Japan. According to the World Health Organization (WHO), only fourteen countries [START_REF] Butcher | Numerical methods for ordinary differential equations in the 20th century[END_REF] regularly report data on the disease. Until 2010, the number of suspected cases of UB reported each year worldwide was in the order of 5,000. The total number of BU cases then declined until 2016, when 1961 cases were reported, the lowest level on record. Subsequently, the number of cases increased again each year to 2,713 cases in 2018. And then in 2020, 1,258 cases were reported, compared to 2,271 in 2019 [START_REF]World Health Organization (WHO, Ulcère de Buruli (infection à Mycobacterium ulcerans)[END_REF]. This decrease was observed in 2020 following the onset of the infectious disease Coronavirus (COVID- [START_REF] Cynthia | Modelling transmission of buruli ulcer in the central region of ghana[END_REF], could be attributable to the effects of this infection on active case detection activities. In each endemic country, the disease is usually present in households, which unfortunately affects people living in these environments who are generally very poor and have serious difficulties in accessing quality medical care within communities [START_REF] Asiedu | Socioeconomic implications of buruli ulcer in ghana: a three-year review[END_REF]. Although there are some studies which prove that the pathogen can leave in others mammal reservoirs (animal), the transmission dynamics of BU remains unclear [START_REF] Johnson | Buruli ulcer (m. ulcerans infection): new insights, new hope for disease control[END_REF]. In Cameroon, for example, BU was first described in 1975. The 47 cases studied at the time all came from a very localized outbreak in the Nyong Valley between the cities of Ayos, and Akonolinga, in south-central Cameroon [START_REF] Alphonse | L'ulcère de Buruli au Cameroun[END_REF] but also from the different regions of the country such as the Far North, and the South-West. In addition, an epidemiological survey conducted by, among others, the Emmaus-Switzerland Leprosy Aid (ALES) in August 2001 in the Nyong basin and focusing on Buruli ulcer identified 438 cases of Buruli ulcer (active and inactive forms combined). Of these 438 cases, 97 were recorded in Ayos district and 331 cases in Akonolinga district. This survey has made it possible to classify Buruli ulcer as a health problem in Cameroon [START_REF] Alphonse | L'ulcère de Buruli au Cameroun[END_REF]. So far, health workers have been reporting suspected cases of Buruli ulcer for irregular time intervals without confirmation of diagnosis. These observations come from different regions of the country, but especially from the provinces of the Far North, South-West and Centre. In the latter province, the reported cases come mainly from the Ayos-Akonolinga area. 
The main contribution of this work is the formulation of a new BU compartmental model including a latent period and a standard force of infection, with both classical and fractional derivatives in the Caputo sense. We begin with the formulation of the proposed model with integer derivative. We then prove the positivity and boundedness of solutions, compute the equilibrium points and perform a stability analysis of these equilibrium points in terms of the basic reproduction number R_0. After that, we use real data to calibrate the model by performing parameter estimation, and obtain the value of the basic reproduction number. We also compute the strength number A_0, whose sign indicates the presence of more than one wave in the disease spread, and compute its numerical value; this threshold indicates whether the epidemic has one or multiple waves (epidemic peaks) [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF]. To determine the important parameters in the disease dynamics, we conduct a local sensitivity analysis. We then formulate our BU fractional model in the Caputo sense and perform theoretical analysis such as stability of the stationary points as well as existence and uniqueness of solutions. To validate our theoretical results, we perform several numerical simulations based on a numerical scheme constructed using the Adams-Bashforth-Moulton method.

The rest of the work is organized as follows: Section 2 deals with the model description and basic results (positivity and boundedness of solutions). Section 3 is devoted to the equilibrium points and their stability analysis. Parameter estimation, computation of the strength number, existence of multiple waves and sensitivity analysis are performed in Section 4. Preliminaries on fractional calculus are presented in Section 5. The BU model with Caputo derivative is formulated and analysed in Section 6. A numerical scheme, simulation results and discussions are presented in Section 7. The work finishes with a conclusion.

BU model and basic results

We present here the transmission dynamics of the Buruli ulcer disease model, using first the classical derivative. In what follows, N_h(t) denotes the total human population at any positive time t (t > 0). N_h(t) is divided into four subpopulations, also called compartments, namely S_h(t) for susceptible humans, E_b(t) for infected (but not yet infectious) humans, I_b(t) for infectious humans, and R_b(t) for recovered humans. The total vector population N_v(t) is divided into susceptible vectors S_v(t) and infectious vectors I_v(t). The model reads

dS_h/dt = π_h + ϕ R_b - β_h S_h I_v / N_h - μ_h S_h,
dE_b/dt = β_h S_h I_v / N_h - (μ_h + γ_1) E_b,
dI_b/dt = γ_1 E_b - (μ_h + κ + η) I_b,
dR_b/dt = η I_b - (μ_h + ϕ) R_b,
dS_v/dt = π_v - β_v S_v I_b / N_h - μ_v S_v,
dI_v/dt = β_v S_v I_b / N_h - μ_v I_v,    (1)

where we abbreviate k_1 = μ_h + γ_1, k_2 = μ_h + κ + η and k_3 = μ_h + ϕ, with the following initial conditions:

S_h(0) = S_{h0} > 0, E_b(0) = E_{b0} ≥ 0, I_b(0) = I_{b0} ≥ 0, R_b(0) = R_{b0} ≥ 0, S_v(0) = S_{v0} ≥ 0, I_v(0) = I_{v0} ≥ 0.    (2)

Remark 1. Model system (1) is an extension of the one proposed by Zhao et al. in [START_REF] Zhao | A mathematical model for the coinfection of buruli ulcer and cholera[END_REF], to which we added a compartment for latent individuals and a force of infection (FOI) with standard incidence.
By cons, we do not consider reservoir dynamics as done by Nyabadza and Bonyah in [START_REF] Nyabadza | On the transmission dynamics of buruli ulcer in ghana: Insights through a mathematical model[END_REF]. Others works concerning mathematical modeling and control of Buruli ulcer transmission dynamics can be found in [START_REF] Chu | Mathematical modeling and stability analysis of buruli ulcer in possum mammals[END_REF][START_REF] Cynthia | Modelling transmission of buruli ulcer in the central region of ghana[END_REF][START_REF] Edholm | A risk-structured mathematical model of buruli ulcer disease in ghana[END_REF][START_REF] Khan | Mathematical modeling and optimal control strategies of buruli ulcer in possum mammals[END_REF][START_REF] Momoh | Modeling, optimal control of intervention strategies and cost effectiveness analysis for buruli ulcer model[END_REF]. We first prove that state variables S h (t), E b (t), I b (t), R b (t), S v (t), I v (t), are positive for all t ≥ 0. Theorem 1. ∀t > 0, a solution x(t) = (S h (t), E b (t), I b (t), R(t), S v (t), I v (t)) ′ of model (1) with initial conditions x(0) = (S h (0), E b (0), I b (0), R(0), S v (0), I v (0)) ′ ∈ R 6 + is positive. Proof. From (1), we have                  dS h dt (t) S h =0,R b ≥0 = π h + ϕR b (t > 0, dE b dt (t) E b =0,S h ,I b ,R b ,Iv≥0 = β h S h (t)I v (t) S h + I b + R b ≥ 0, dI b dt (t) I b =0,E b ≥0 = γ 1 E b (t) ≥ 0, dR b dt (t) R b =0,I b ≥0 = ηI b (t) ≥ 0, dS v dt (t) Sv=0,N h ≥0 = π v > 0, dI v dt (t) Iv=0,N h ,Sv≥0 = β v S v (t)I b (t) N h ≥ 0. (3) So, the non-negativity of each state variable of system (1) come from the barrier theorem [START_REF] Gauthier | CIMPA lecture notes on Mathematical Epidemiology[END_REF]. Adding the first four equations of model [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF], we obtain dS h (t) dt + dE b (t) dt + dI b (t) dt + dR b (t) dt = π h -µ h N h (S h + E b + I b + R b ) -kI b , ≤ π h -µ h N h (t) Solving this inequality gives N (t) ≤ π h µ h + N h (0) - π h µ h exp(-µ h t), for all t ≥ 0, with N h (0) = S h (0) + E b (0) + I b (0) + R b (0) > 0. When t -→ ∞, we get lim sup t→∞ N h (t) ≤ π h µ h . Adding the last two equations of (1) gives dN v (t) dt = dS v (t) dt + dI v (t) dt = π v -µ v . Solving the above equality gives N v (t) = π v µ v + N v (0) - π v µ v exp(-µ v t), for all t ≥ 0, When t -→ ∞, we get lim sup t→∞ N v (t) = π v µ v . This prove that all solutions of the model [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF] lies in the following absorbing set Ψ = (S h , E b , I b , R b , S v , I v ) ∈ R 4 + × R 2 + : N h (t) ≤ π h µ h and N v (t) = π v µ v , ∀t ≥ 0 , (4) in which Buruli ulcer model ( 1) define a dynamical system. Equilibrium points and asymptotic stability Before determining the equilibrium points of the Buruli ulcer model (1), we first define the following threshold, called the "basic reproduction number," which drives the qualitative dynamics of the model. R 0 = β h µ h π v π h µ v γ 1 (µ h + γ 1 ) 1 (κ + µ h + η) R vh β v 1 µ v R hv = R vh R hv . (5) For model ( 1), the following result holds. Theorem 2. Let us define the following threshold: R 2 c := 2 1 - κγ 1 k 3 κγ 1 k 3 + µ h [(k 2 + γ 1 )ϕ + µ h (k 1 + κ) + ηk 1 ] + µ h β v k 3 γ 1 µ v {[µ h k 2 + (µ h + κ)γ 1 ] ϕ + µ h k 1 k 2 } . (6) 1. If R 0 > 1 or (R 0 = 1 and R c < 1) , then Buruli ulcer model (1) admits two feasible equilibrium points: the BU-free equilibrium M 0 and the endemic equilibrium M 1 ; 2. 
If R c < R 0 < 1. then Buruli ulcer model (1) has two positive equilibrium points in addition with the BU-free equilibrium; 3. No endemic equilibrium point otherwise. Proof. Let E b = (S * h , E * b , I * b , R * b , S * v , I * v ) ′ any arbitrary steady states of model [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF]. Setting the right-hand side of Eq. ( 1) to zero with X * = I * v N * h , Y * = I * b N * h and N * h = π h -κI * b µ h , we obtain                π h + ϕR * b -(β h X * + µ h ) S * h = 0, β h X * S * h -k 1 E * b = 0, γ 1 E * b -k 2 I * b = 0, ηI * b -k 3 R * b = 0, π v -β v Y * S * v -µ v S * v = 0, β v S * v Y * -µ v I * v = 0. (7) Resolution of the above system gives                                        R * b = ηγ 1 β h X * π h [k 1 k 2 k 3 (β h X * + µ h ) -ηγ 1 β h X * ϕ] , S * h = π h + ϕR * b (β h X * + µ h ) , E * b = β h X * S * h k 1 , I * b = γ 1 E * b k 2 , S * v = π v β v Y * + µ v , I * v = β v S * v Y * µ v . (8) Using [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF] in the expressions of X * and Y * gives X * = πv µv βvY * βvY * + µv µ h k 1 k 2 (β h X * + µ h ) [k 1 k 2 k 3 (β h X * + µ h ) -ηγ 1 β h X * ϕ] π h k 1 k 2 (β h X * + µ h ) [k 1 k 2 k 3 (β h X * + µ h ) -ηγ 1 β h X * ϕ] -κγ 1 β h X * (π h [k 1 k 2 k 3 (β h X * + µ h ) -ηγ 1 β h X * ϕ] + ϕηγ 1 β h X * π h ) and Y * = γ 1 (X * ) 2 ηβ h π h φ + γ 1 k 3 (X * ) 2 b h π h κ + -k 1 k 2 k 3 X * µ h -k 1 k 2 k 3 (X * ) 2 β h π h µ 2 v γ 1 X * ηβ h µ h φ -k 1 k 2 k 3 µ 2 h -k 1 k 2 k 3 X * β h µ h βvπv + (-γ 1 (X * ) 2 ηβ h π h φ -γ 1 k 3 (X * ) 2 β h π h κ + (k 1 k 2 k 3 X * µ h + k 1 k 2 k 3 (X * ) 2 b h ) π h ) βvµv . By substituting the expression of Y * in the expression of X * , it thus follows that X * is a nonnegative solution to the following equation X * a 2 (X * ) 2 + a 1 X * + a 0 = 0, ( 9 ) where a 2 = R 4 0 π 3 h µ 5 v (k 2 k 3 + γ 1 (k 3 + η)) (µ v (k 2 k 3 + γ 1 (k 3 + η)) + γ 1 β v k 3 ) > 0, a 1 = R 2 0 γ 1 k 3 µ 2 h π 2 h β v µ 4 v [(µ h k 2 + µ h γ 1 + γ 1 κ) ϕ + µ h k 1 k 2 ] π v (R 2 c -R 2 0 ), and a 0 = (1 -R 0 ) (R 0 + 1) γ 2 1 µ 4 h π h (ϕ + µ h ) 2 β 2 v µ 2 v π 2 v . Descartes' rule of signs is used to obtain each item of the theorem 2. Stability of the BU-free equilibrium The trivial solution of equation ( 9) is given by X * = 0, which corresponds to the BU-free equilibrium M = π h µ h , 0, 0, 0, π v µ v , 0 ′ . The Jacobian matrix of system (1) evaluated at M is given by: W =            -µ h 0 0 ϕ 0 -β h 0 -k 1 0 0 0 β h 0 γ 1 -k 2 0 0 0 0 0 η -k 3 0 0 0 0 -β v π v µ h µ v π h 0 -µ v 0 0 0 β v π v µ h µ v π h 0 0 -µ v            (10) The eigenvalues of W are w 1 = -µ h , w 2 = -k 3 , w 3 = -µ v , and those of the following sub-matrix W 1 =    -k 1 0 β h γ 1 -k 2 0 0 β v π v µ h µ v π h -µ v    . (11) The characteristic polynomial of W 1 is given by P(x) = x 3 + a 2 (µ v + k 2 + k 1 ) x 2 + a 1 ((k 2 + k 1 ) µ v + k 1 k 2 ) x + a 0 1 -R 2 0 k 1 k 2 µ v . ( 12 ) Coefficients a 2 and a 1 are always positive, and a 0 > 0 ⇐⇒ R 0 < 1. It follows that all root of P have negative real part. Thus, we conclude that the condition R 0 < 1 ensures the negativity of all eigenvalues of the matrix W. We resume the above analysis as follows: Lemma 1. The BU-free equilibrium M 0 = π h µ h , 0, 0, 0, π v µ v , 0 ′ is LAS whenever R 0 < 1. Theorem 3. 
If R 0 < 1, then M 0 = π h µ h , 0, 0, 0, π v µ v , 0 ′ is GAS in Ψ. Proof. By considering the equations of the model (1) reflecting the dynamics of the infected populations, the obtained system can be rewritten as follows:    Ėb (t) İb (t) İv (t)    = W 1   E b (t) I b (t) I v (t)   -K (S h , E b , I b , R b , S v , I v ) (13) where W 1 is given at Eq. ( 11), and K (S h , E b , I b , R b , S v , I v ) =       β h 1 - S h N h 0 β v I b S 0 v N 0 h - S v N h       . ( 14 ) It then follows that if S 0 v N 0 h - S v N h ≥ 0, then K (S h , E b , I b , R b , S v , I v ) ≥ 0 R 3 . We thus have    Ėb (t) İb (t) İv (t)    ≤ W 1   E b (t) I b (t) I v (t)   . (15) From Lemma 1, we obtained that W 1 has eigenvalues with real part. Then if R 0 < 1, BU model ( 1) is stable. We thus have (E b , I b , I v ) -→ (0, 0, 0) as t -→ ∞. Note that W 1 = F    0 0 β h 0 0 0 0 β v π v µ h µ v π h 0    - V   k 1 0 0 -γ 1 k 2 0 0 0 µ v   where matrices F and V are the matrices defined in [START_REF] Shuai | Global stability of infectious disease models using Lyapunov functions[END_REF][START_REF] Van Den Driessche | Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission[END_REF] to compute the next generation matrices (NGM). Since F ≥ O R 3 * 3 and V -1 =    1 k 1 0 0 γ 1 k 1 k 2 1 k 2 0 0 0 1 µv    ≥ O R 3 * 3 , it follows from [55, Theorem 2.1] , that there exists a Lyapunov function for system (1) expressed as L (S h , E b , I b , R b , S v , I v ) = w ′ V -1 (E b , I b , I v ) ′ where w ′ is the left eigenvector of V -1 Z corresponding to the eigenvalue R 0 . Thus, dL dt = (R 0 -1) w ′ (E b , I b , I v ) -w ′ V -1 K (S h , E b , I b , R b , S v , I v ) ≤ 0. Since K (S h , E b , I b , R b , S v , I v ) ≥ 0 R 6 , then dL dt < 0 if R 0 < 1, with dL dt = 0 if and only if (E b , I b , I v ) = 0 R 3 . Thus, {M 0 } is the largest invariant set contained in (S h , E b , I b , R b , S v , I v ) ∈ R 6 + : dL dt = 0 . From LaSalle Invariance Principle [START_REF] La Salle | The stability of dynamical systems[END_REF], every solution of (1) with initial conditions in Ψ converge to M 0 when t -→ +∞. That is (E b , I b , I v ) -→ 0 R 3 , S -→ S 0 h , S v -→ S 0 v when t -→ +∞, which is equivalent to (S h , E b , I b , R b , S v , I v ) -→ M 0 when t -→ +∞. Thus, if R 0 < 1, then M 0 is GAS in Ψ. Stability of the endemic equilibrium point From item 1 of Theorem 2, model (1) admits a unique endemic equilibrium whenever R 0 > 1. The global stability of this equilibrium is given as follows. Theorem 4. If R 0 > 1, then the unique positive equilibrium point of model (1), M 1 = (S * h , E * b , I * b , R * b , S * v , I * v ) ′ where the component is given by (8) with X * ̸ = 0 and Y * ̸ = 0, is globally asymptotically stable provided that 1 - S v I * v S * v I v 1 - Y Y * ≥ 0, 1 - S h S * h E * b E b 1 - X X * ≥ 0, S * h S h I * b I b R b R * b ≤ 1. ( 16 ) Proof. See Appendix A. Model calibration, multiples waves and sensitivity analysis Parameter estimations Here, we consider data from Cameroon during the period between 2001 to 2014 [START_REF] Tabah | Buruli ulcer in cameroon: the development and impact of the national control programme[END_REF] to calibrate our BU model [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF]. One parameter, namely µ h is estimated while others are fitted using real data. The beginning date 01/01/2001 and end date 31/12/2014 correspond to t = 0 and t = 14, respectively. 
The parameter estimation is performed using the routine lsqcurvefit of MATLAB software [START_REF] The Mathworks | MATLAB version 9[END_REF]. Calibrate BU model ( 1) is equivalent to solving the optimization problem min Σ ∥ I predict b -I data b ∥, (17) where Σ = (π h , π v , ϕ, µ v , β h , β v , κ, η, γ 1 ). The results are displayed in Table 2 and Figure 1 (panel (a)). From panel (b) of Figure 1, We can be led to conclude that, it is possible that the cumulative number of cases will be decreased after (t ≈ 16). In what follows we will perform another analysis which could confirm whatever if or not the BU epidemic model (1) has only a single wave or multiple waves of epidemic peaks. In what follows, we use the package cftool of MATLAB software [START_REF] The Mathworks | MATLAB version 9[END_REF] to predict the number of Buruli ulcer in Cameroon from 2015 to 2030. Before do this, we begin by choose the function which better fit the reported data. We obtain that the Gaussian function of order 4 given by f (x) = 4 i=1 a i exp - x -b i c i 2 ( 18 ) where x is the year, and a 1 = 758. Strength number and epidemic waves The concept of "basic reproduction number " is one of most important concept in the study of epidemiological models. Indeed, it drives the qualitative behaviour of epidemiological systems. From [START_REF] Van Den Driessche | Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission[END_REF], one must decompose the model into two component V and F , which represent the matrices of the transfer terms and infectious terms, respectively, evaluated at the disease-free equilibrium. The biological threshold R 0 is equal of the spectral radius of the matrix F V -1 , i.e. the maximal solution of (F.V -1 -λI 3 * 3 ) = 0, ( 19 ) where I 3 * 3 is a 3 * 3 identity matrix. To obtain the strength number [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF] of BU model (1), we need to compute the following quantity ∂ 2 ∂I b 2 β v S v I b N h = - β v S v (N h ) 2 - β v S v N h -2β v S v I b (N h ) 3 . ( 20 ) At the BU-free equilibrium point M 0 , we have ∂ 2 ∂I b 2 β v S v I b N h M 0 = -2 β v µ 2 h π v µ v π 2 h . ( 21 ) We have F A =     0 0 β h 0 0 0 0 -2 β v π v µ 2 h π 2 h µ v 0     and V =   µ h + γ 1 0 0 -γ 1 µ h + k + η 0 0 0 µ v   . By solving det((F A .V -1 ) -λI 3 * 3 ) = 0, (22) we thus obtain the strength number which is the spectral radius of F A V -1 A [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF]: A 0 = ρ(F A V -1 A ) = 2γ 1 µ 2 h β v π 2 v π 2 h µ 2 v > 0. ( 23 ) Using parameter values of Table 2, we obtain A 0 ≈ 17. Since A 0 > 0, it follows that the Bu model can have multiple waves [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF]. This is depicted in figures 4 where it is clear that the BU will have other waves (at least 2). Sensitivity analysis To determine important parameter in the BU dynamics, we perform here a local sensitivity analysis (LSA). 
To this aim, we compute for each model parameter its sensitivity index, which is obtained by applying the following formula [START_REF] Chitnis | Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model[END_REF]

ϖ_z = (z / R_0) ∂R_0/∂z,    (24)

where z represents each model parameter (z ∈ {π_h, π_v, ϕ, μ_h, μ_v, κ, β_h, β_v, η, γ_1}). Let us evaluate the sensitivity of R_0 to each parameter. We thus obtain:

ϖ_{β_h} = 1/2 > 0, ϖ_{β_v} = 1/2 > 0, ϖ_{π_v} = 1/2 > 0, ϖ_{μ_v} = -1 < 0, ϖ_{π_h} = -1/2 < 0,
ϖ_κ = -κ / (2(μ_h + κ + η)) < 0, ϖ_η = -η / (2(μ_h + κ + η)) < 0, ϖ_{γ_1} = μ_h / (2(μ_h + γ_1)) > 0,
ϖ_{μ_h} = (γ_1(κ + η) - μ_h²) / (2(μ_h + γ_1)(μ_h + κ + η)), ϖ_ϕ = 0.    (25)

Hence, R_0 increases with the increase of β_h, β_v, π_v and γ_1, and decreases with the increase of μ_v, π_h, κ and η. The sign of ϖ_{μ_h} depends on the parameter values. Using the parameter values in Table 2, the sensitivity indices of the model parameters are depicted in Table 3 and Figure 5. Note that increasing μ_v by 10% decreases the value of R_0 by 10%, while increasing the value of β_h by 10% (respectively β_v and π_v) increases the value of R_0 by 5%. The sensitivity index of ϕ is equal to zero since R_0 does not depend on this model parameter.

[Figure: contour plots of R_0 as a function of pairs of parameters: (a) (β_h, β_v), (b) (μ_v, β_h), (c) (μ_v, β_v), (d) (π_v, β_h), (e) (π_v, β_v), (f) (π_h, β_h), (g) (π_h, β_v), (h) (π_h, π_v).]

Preliminaries on fractional calculus

The beauty of fractional calculus is its ability to accurately capture the exact behaviour of many complex models in science, engineering, and finance [START_REF] Atangana | Mathematical model of survival of fractional calculus, critics and their impact: How singular is our world?[END_REF][START_REF] Machado | Recent history of fractional calculus[END_REF][START_REF] Richard | New analytical modelling of fractional generalized kuramoto-sivashinky equation via atangana-baleanu operator and j-transform method[END_REF]. Before formulating the fractional model of Buruli ulcer, we introduce here some useful definitions and results.

Definition 1 ([49]). Let ν ∈ R*_+, f ∈ C([a, b]) and a < t < b. The Riemann-Liouville fractional integral of f of order ν is defined by

I^ν_a f(t) := (1/Γ(ν)) ∫_a^t f(τ)(t - τ)^{ν-1} dτ,    (26)

where Γ is the gamma function, defined by Γ(τ) := ∫_0^∞ σ^{τ-1} exp(-σ) dσ for τ > 0.

Definition 2 ([49]). Given a function f : [a, b] → R of class C^m, the Caputo fractional derivative of f of order ν is defined by

^C_{t_0}D^ν_t f(t) = (1/Γ(m - ν)) ∫_{t_0}^t (t - θ)^{m-ν-1} f^{(m)}(θ) dθ    (27)

if ν ∉ N, with m = [ν] + 1.

Definition 3 ([46]). The generalized Caputo-type fractional derivative of order ν is defined by

(^C D^{ν,ρ}_a f)(s) = (ρ^{ν-m+1} / Γ(m - ν)) ∫_a^s (s^ρ - τ^ρ)^{m-ν-1} (τ^{1-ρ} d/dτ)^m f(τ) dτ,  s > a,    (28)

where ρ > 0, a ≥ 0 and m - 1 < ν ≤ m.

Proposition 1 (Linearity). Let f, g : [a, b] → R be such that ^C_{t_0}D^ν_t f(t) and ^C_{t_0}D^ν_t g(t) exist almost everywhere, and let ξ_1, ξ_2 ∈ R. Then ^C_{t_0}D^ν_t (ξ_1 f(t) + ξ_2 g(t)) exists almost everywhere, with ^C_{t_0}D^ν_t (ξ_1 f(t) + ξ_2 g(t)) = ξ_1 ^C_{t_0}D^ν_t f(t) + ξ_2 ^C_{t_0}D^ν_t g(t).

Lemma 2. Suppose that f(t) ∈ C([a, b]) and ^C_{t_0}D^ν_t f(t) ∈ C([a, b]) for 0 < ν ≤ 1. Then f(t) = f(t_0) + (1/Γ(ν)) ^C_{t_0}D^ν_t f(τ) (t - t_0)^ν for some t_0 ≤ τ ≤ t, for all t ∈ [a, b].
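To make Definition 2 concrete, the Caputo derivative can be approximated on a uniform grid with the standard L1 discretization and compared with a known closed form. The short sketch below is ours (it is not taken from the paper); it checks the approximation for f(t) = t², whose Caputo derivative of order α ∈ (0, 1) is 2 t^{2-α}/Γ(3-α).

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, alpha, h):
    """L1 approximation of the Caputo derivative of order alpha (0 < alpha < 1)
    of a function sampled on the uniform grid t_n = n*h (f_vals[n] = f(t_n))."""
    n_max = len(f_vals) - 1
    out = np.zeros(n_max + 1)
    c = h**(-alpha) / gamma(2 - alpha)
    for n in range(1, n_max + 1):
        k = np.arange(n)
        b = (k + 1)**(1 - alpha) - k**(1 - alpha)          # L1 weights
        out[n] = c * np.sum(b * (f_vals[n - k] - f_vals[n - k - 1]))
    return out

alpha, h = 0.7, 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = caputo_l1(t**2, alpha, h)
exact = 2 * t**(2 - alpha) / gamma(3 - alpha)              # closed-form Caputo derivative of t^2
print(np.max(np.abs(approx - exact)))                      # small discretization error
```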
The fractional model and its analysis The proposed BU fractional model in the Caputo sense is given as follows:                                χ α-1 × C D α t S h = π h + ϕR b -β h S h I v N h -µ h S h , χ α-1 × C D α t E b = β h S h I v N h -(µ h + γ 1 ) E b , χ α-1 × C D α t I b = γ 1 E b -(µ h + κ + η) I b , χ α-1 × C D α t R b = ηI b -(µ h + ϕ) R b , χ α-1 × C D α t S v = π v -β v S v I b N h -µ v S v , χ α-1 × C D α t I v = β v S v I b N h -µ v I v . (29) where C D α t denotes the Caputo fractional derivative of order α, and χ is use to balance the units [START_REF] Ullah | An efficient numerical technique for a new fractional tuberculosis model with nonsingular derivative operator[END_REF]. Without lost of generalities, the fractional model ( 29) define a dynamical system in the following absorbing set Ψ α = (S h , E b , I b , R b , S v , I v ) ∈ R 4 + × R 2 + : N h (t) ≤ π α h µ α h and N v (t) = π α v µ α v , ∀t ≥ 0 , (30) in which solutions of (29) are bounded. In what follows, we set k α 1 = µ α h + γ α 1 , k α 2 = µ α h + κ α + η α , k α 3 = µ α h + ϕ α . Equilibrium points and stability analysis For the fractional model ( 29), we define the fractional basic reproductive number R 0 := γ α 1 µ α h β α h β α v π α v (µ α h + γ α 1 ) (κ α + µ α h + η α ) π α h (µ α v ) 2 . (31) As for the model (1), BU model ( 29) admits always a BU-free equilibrium (DFE), M = (S 0 , 0, 0, 0, S 0 v , 0) ′ where S 0 h = π α h µ α h and S 0 v = π α v µ α v . The following result, obtained as a similar manner as which of Theorem 2, holds: Theorem 5. Let us define the following threshold: R 2 c : = 2 1 - κ α γ α 1 (µ α h + ϕ α ) κ α γ α 1 (µ α h + ϕ α ) + µ α h [(k α 2 + γ α 1 )ϕ α + µ α h (k α 1 + κ α ) + η α k α 1 ] + µ α h β α v k α 3 γ α 1 µ α v {[µ α h k α 2 + (µ α h + κ α )γ α 1 ] ϕ α + µ α h k α 1 k α 2 } , (32) 1. If R 0 > 1 or (R 0 = 1 and R c < 1), then Buruli ulcer model (29) admits two feasible equilibrium points: the BU-free equilibrium M 0 and the endemic equilibrium M 1 ; 2. If R c < R 0 < 1. then Buruli ulcer model [START_REF] Feng | On the role of variable latent periods in mathematical models for tuberculosis[END_REF] has two positive equilibrium points in addition with the BU-free equilibrium; 3. No endemic equilibrium point otherwise. We thus claim what follows (we omit the proof which is similar to the one from Theorem 1): Lemma 3. If R 0 < 1, then the BU-free equilibrium M of the fractional model (29) is LAS in Ψ α . We also claim the following result: Theorem 6. The BU-free equilibrium M 0 of the fractional model (29) is GAS in Ψ α whenever R 0 < 1. Proof. The proof of Theorem 6 follows the proof of Theorem 3. It is sufficient to consider the following Lyapunov function L (S h , E b , I b , R b , S v , I v ) = w ′ V -1 (E b , I b , I v ) ′ where w ′ is the left eigenvector of V -1 Z corresponding to the eigenvalue R 0 , and applying the LaSalle Invariance Principle [START_REF] La Salle | The stability of dynamical systems[END_REF], to conclude that (E b , I b , I v ) -→ 0 R 3 , S -→ S 0 h , S v -→ S 0 v when t -→ +∞, i.e. (S h , E b , I b , R b , S v , I v ) -→ M 0 = (S 0 h , 0, 0, 0, S 0 v , 0) ′ when t -→ +∞. Thus, if R 0 < 1, then the BU-free equilibrium M 0 is GAS in Ψ α . 
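(A minimal sketch of how the fractional threshold (31) varies with the order α, using the Table 2 baseline values; at α = 1 it reduces to the classical R_0 ≈ 2.08. The listed α values simply mirror those used in the simulations.)

```python
# Sketch: evaluate the fractional reproduction number R0(alpha) of Eq. (31) for several
# orders alpha, with the baseline parameter values of Table 2.
import numpy as np

p = dict(pi_h=58.660064, pi_v=172813.870585, mu_h=1 / 55.5, mu_v=0.589891,
         kappa=0.203789, beta_h=0.784684, beta_v=0.036985, eta=0.760527,
         gamma1=0.482389)

def R0_frac(alpha):
    a = lambda x: x ** alpha                      # every rate is raised to the power alpha
    num = a(p['gamma1']) * a(p['mu_h']) * a(p['beta_h']) * a(p['beta_v']) * a(p['pi_v'])
    den = (a(p['mu_h']) + a(p['gamma1'])) * (a(p['kappa']) + a(p['mu_h']) + a(p['eta'])) \
          * a(p['pi_h']) * a(p['mu_v']) ** 2
    return np.sqrt(num / den)

for alpha in (1.0, 0.9, 0.8, 0.7, 0.6):
    print(f"alpha = {alpha:.1f}  ->  R0 = {R0_frac(alpha):.4f}")
```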
From [START_REF] Li | Stability analysis of a fractionalorder linear system described by the caputo-fabrizio derivative[END_REF], it follows that the fractional model [START_REF] Feng | On the role of variable latent periods in mathematical models for tuberculosis[END_REF], as the ODE model ( 1), has a unique endemic equilibrium. Thank to Boukhouima et al. [START_REF] Boukhouima | Lyapunov functions for fractional-order systems in biology: Methods and applications[END_REF]Corollary 2], the Lyapunov function Z given by Eq. ( 44) can also use to prove the GAS of the unique endemic equilibrium of the the fractional model [START_REF] Abboubakar | Fractional dynamics of a measles epidemic model[END_REF]. Thus the following result holds. Theorem 7. If R 0 > 1, then M 1 of the fractional model (29) is GAS in Ψ α whenever 1 - S v I * v S * v I v 1 - Y Y * ≥ 0, 1 - S h S * h E * b E b 1 - X X * ≥ 0, S * h S h I * b I b R b R * b ≤ 1. (33) Existence and uniqueness of solutions of the BU fractional model Let us rewrite the system (29) in the following form: C D α t x(t) = d(t, x(t)), x l (0) = x l 0 , l = 1, 2..., 6, (34) where 0 < α ≤ 1, d(t, x) = (d 1 , d 2 , d 3 , d 4 , d 5 , d 6 ) ′ , x = (S h , E b , I b , R b , S v , I v ) ′ and x(0) = (S h (0), E b (0), I b (0), R b (0), S v (0), I v (0)) ′ . Here, d l , l = 1, 2, ...6 are right-hand side of system (29), i.e., d 1 = π α h + ϕ α R b -β α h S h I v N h -µ α h S h , etc. The function d(t, x) : R × R 6 → R 6 defines a vector field. Let us consider the well-defined function Z(t) : R + → R + differentiable on (0, t max ) such as: Z(t) = 6 i=1 d i (t). ( 35 ) We obtain Z(t) = π α h + ϕ α R b -β α h S h I v N h -µ α h S h + β α h S h I v N h -(µ α h + γ α 1 ) E b + γ α 1 E b -(µ α h + κ α + η α ) I b + η α I b -(µ α h + ϕ α ) R b + π α v -β α v S v I b N h -µ α v S v + β α v S v I b N h -µ α v I v =⇒ Z(t) = (π α h + π α v ) -µ α h S h + µ α h E b -(µ α h + κ α ) I b -µ α h R b -µ α v S v -µ α v I v =⇒ Z(t) ≤ (π α h + π α v ) -a 0 I b -(µ α h S h + µ α h E b + µ α h I b + µ α h R b + µ α v S v + µ α v I v ) , =⇒ Z(t) ≤ (π α h + π α v ) -a 1 (S h + E b + I b + R b + S v + I v ) , ( 36 ) where a 0 = κ α and a 1 = min {µ α h , µ α v }. Clearly, a 0 > 0. From [START_REF] La Salle | The stability of dynamical systems[END_REF] we have d(t) ≤ (π α h + π α v ) -a 1 x(t), ∀t ∈ (0, t max ). ( 37 ) Setting J = [t 0 -ϵ, t 0 + ϵ], B = {x ∈ R 6 : ∥ x -x 0 ∥≤ ζ} and K = {(t, x) ∈ R × R 6 : t ∈ R, x ∈ R 6 } with ϵ > 0 and ζ > 0, and considering that d : K -→ R 6 satisfies all conditions of [39, Theorem 2.1], we thus claim what follows: Theorem 8. Assume that there exists Q(t) ∈ L 6 (J) such that ∥ d(t, x) -d(t, y) ∥≤ Q(t) ∥ x -y ∥, ( 38 ) for t ∈ J and g, z ∈ B. The IVP (34) admits a unique solution on [t 0 -ϵ, t 0 + ϵ] with ϵ > 0. Proof. Note that d l , l = 1, 2, ..., 6, are continuous on B, measurable on J and bounded ∀t ∈ [t 0 -ϵ, t 0 + ϵ]. From Eq. ( 37), it follows that d(t, x) satisfies all the conditions of [39, Theorem 2.1] with m(t) = (π α h + π α v ) -a 1 x(t). Thus, there exists a solution of the fractional model ( 29) in (0, t max ). Using Eq. ( 37), we have ∥ d(t, x) -d(t, y) ∥≤ Q(t) π α h + π α v a 1 ∥ x -y ∥, (39) with a 1 = min {µ α h , µ α v }. This ends the proof. Numerical scheme In general, exact solutions of fractional disease models are not always available. 
Hence an iterative solution of model ( 29) is proposed using the Adams-Bashforth technique [START_REF] Nabi | Forecasting of covid-19 pandemic: From integer derivatives to fractional derivatives[END_REF][START_REF] Owolabi | Analysis and application of new fractional adams-bashforth scheme with caputo-fabrizio derivative[END_REF]. This method is generally based on the discretization of the independent variable and includes modifications of the integer order. The advantage of the Adams-Bashforth method is that it uses only one additional function evaluation per step yet achieves high-order accuracy [START_REF] Butcher | Numerical methods for ordinary differential equations in the 20th century[END_REF]. By considering the following fractional differential equation [START_REF] Li | On the fractional adams method[END_REF] C D α t h(t) = g(t, q(t)), 0 ≤ t ≤ T q k (0) = q k 0 , k = 0, 1, 2..., m -1, where m = [α], (40) which is equivalent to the following Volterra integral equation: q(t) = m-1 . k=0 q k t k k! + 1 Γ (α) t 0 (t -φ) α-1 q(φ, g(φ))dφ, (41) and using the Diethelm and Neville method [START_REF] Diethelm | Analysis of fractional differential equations[END_REF] based on the well know Adams-Bashforth-Moulton algorithm [START_REF] Diethelm | An algorithm for the numerical solution of differential equations of fractional order[END_REF], we obtain the following scheme of the BU fractional model ( 29): Setting α ∈ (0, 1], 0 ≤ t ≤ T , q = T N , t n = n.q, n = 0, 1, 2..., N ∈ N, and S h = S, E b = E, I b = I, R b = R, S v = V , I v = W as state variables, the solution of the fractional model ( 29) is given by                                                                          S n+1 = S 0 + q α Γ (α + 2) (π α h + ϕ α R p n+1 -β α h S p n+1 W p n+1 N h -µ α h S p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (π α h + ϕ α R j -β α h S j W j N h -µ α h S j ), E n+1 = E 0 + q α Γ (α + 2) (β α h S p n+1 W p n+1 N h -(µ α h + γ α 1 ) E p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (β α h S j W j N h -(µ α h + γ α 1 ) E j ), I n+1 = I 0 + q α Γ (α + 2) (γ α 1 E p n+1 -(µ α h + κ α + η α ) I p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (γ α 1 E j -(µ α h + κ α + η α ) I j ), R n+1 = R 0 + q α Γ (α + 2) (η α I p n+1 -(µ α h + ϕ α ) R p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (η α I j -(µ α h + ϕ α ) R j ), V n+1 = V 0 + q α Γ (α + 2) (π α v -β α v V p n+1 I p n+1 N h -µ α v V p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (π α v -β α v V j I j N h -µ α v V j ), W n+1 = W 0 + q α Γ (α + 2) (β α v V p n+1 I p n+1 N h -µ α v W p n+1 ) + q ι Γ (α + 2) n . j=0 a j,n+1 (β α v V j I j N h -µ α v W j ), (42) where S p n+1 = S 0 + 1 Γ (ι) n . j=0 b j,n+1 (π α h + ϕ α R j -β α h S j W j N h -µ α h S j ), E p n+1 = E 0 + 1 Γ (ι) n . j=0 b j,n+1 (β α h S j W j N h -(µ α h + γ α 1 ) E j ), I p n+1 = I 0 + 1 Γ (ι) n . j=0 b j,n+1 (γ α 1 E j -(µ α h + κ α + η α ) I j ), R p n+1 = R 0 + 1 Γ (ι) n . j=0 b j,n+1 (η α I j -(µ α h + ϕ α ) R j ), V p n+1 = V 0 + 1 Γ (ι) n . j=0 b j,n+1 (π α v -β α v V j I j N h -µ α v V j ), W p n+1 = W 0 + 1 Γ (ι) n . j=0 b j,n+1 (β α v V j I j N h -µ α v W j ). (43) and a j,1+n =      n 1+p -(-ι + n)(1 + n), if j = 0, (2 + n -j) 1+ι -2(n + 1 -j) 1+ι + (-j + n) 1+ι , if 1 ≤ j ≤ n, 1, if j = 1 + n. b j,1+n = q ι ι ((1 + n -j) ι -(-j + n) ι ), 0 ≤ j ≤ n. Numerical results and discussions The ODE model Let us consider the parameter values as listed in Table 2. We begin by illustrating the result of Lemma 1 (resp. 
Lemma 3) and Theorem 3 (resp. Theorem 4). Note that for the estimated parameters, we have R_0 = 2.0843 > 1 and R_c = 0.8795 < R_0. The numerical values of the coefficients of Eq. (9) are a_2 = 261.9718984431078 > 0, a_1 = -36.51673116398468 < 0, and a_0 = -2.414751617064077 < 0. It then follows that Eq. (9) admits X* = 0 and the unique positive solution X* = 0.1883345. Thus, model (1) admits as equilibrium points M_0 = (2090, 0, 0, 0, 69971, 0)′ and M_1 = (888, 262, 128, 521, 292620, 339) ∈ R^6_{>0}. Now, setting β_h = 0.1 such that R_0 = 0.744080618507066 < R_c = 0.8795695327518487, the coefficients of Eq. (9) are a_2 = 33.38675690832024 > 0, a_1 = 0.2867204474332617 > 0, and a_0 = 0.3222820573018668 > 0. It then follows that the unique nonnegative solution of Eq. (9) is X* = 0, which corresponds to the disease-free equilibrium M_0 = (2090, 0, 0, 0, 69971, 0)′. This validates item 3 of Theorem 2. Setting β_h = 0.18 such that R_c = 0.8795695327518487 < R_0 = 0.9982889062331324 < 1, the coefficients of Eq. (9) are a_2 = 60.09616243497641 > 0, a_1 = -0.5230213257082926 < 0, and a_0 = 0.002468871467002343 > 0. The discriminant of Eq. (9) is given by ∆ = a_1^2 - 4 a_2 a_0 = -319.9275 × 10^{-3} < 0. It then follows that the unique nonnegative solution of Eq. (9) is X* = 0, which corresponds to the disease-free equilibrium M_0 = (2090, 0, 0, 0, 69971, 0)′. This validates the fact that backward bifurcation does not occur in model (1). Setting β_h = 0.1806175784054825 such that R_0 = 1, the coefficients of Eq. (9) are a_2 = 60.30235183593316 > 0, a_1 = -0.5328650411460003 < 0, and a_0 = 0. It then follows that the nonnegative solutions of Eq. (9) are X* = 0 and X* = 0.0088366, which correspond to the disease-free equilibrium M_0 = (2090, 0, 0, 0, 69971, 0)′ and the unique endemic equilibrium M_1 = (3164, …, …, …, 292931, …), respectively. This validates the fact that R_0 ≥ 1 implies that the DFE M_0 becomes unstable and that a unique endemic equilibrium M_1 emerges, which is GAS. The above analysis is illustrated in Figures 7-8, which display the time series of model (2) with different initial conditions. In Figure 7, trajectories of the infected states tend to zero whenever R_0 < 1, while for R_0 > 1 trajectories of the infected states tend to the endemic equilibrium.

The fractional model

Figure 9 shows the influence of varying the fractional-order parameter α between 0.6 and 1 on the model dynamics. The blue curve in each of these figures represents the numerical results of model (29) when the fractional order is equal to 1. From the results of Figure 9, it follows that the variation of the fractional parameter has a great impact on the quantitative dynamics of the model. Indeed, in panels (b) and (c), the infected human classes peak after 3 years and decrease as the fractional parameter α decreases.
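(For reference, the equilibrium classification quoted earlier in this section amounts to locating the nonnegative roots of the quadratic part of Eq. (9), a_2 X^2 + a_1 X + a_0; a minimal sketch reusing the printed coefficients for three of the β_h scenarios follows. Only the coefficient values come from the text; the rest is illustrative.)

```python
# Sketch: nonnegative roots of the quadratic part of Eq. (9), a2*X**2 + a1*X + a0,
# for three of the scenarios quoted above.  X* = 0 always gives the BU-free equilibrium;
# positive roots correspond to endemic equilibria.
import numpy as np

scenarios = {
    "estimated parameters (R0 = 2.0843)": (261.9718984431078, -36.51673116398468, -2.414751617064077),
    "beta_h = 0.1  (R0 = 0.7441)":        (33.38675690832024,  0.2867204474332617,  0.3222820573018668),
    "beta_h = 0.18 (R0 = 0.9983)":        (60.09616243497641, -0.5230213257082926,  0.002468871467002343),
}

for name, (a2, a1, a0) in scenarios.items():
    roots = np.roots([a2, a1, a0])
    positive = sorted(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 1e-12)
    print(f"{name}: discriminant = {a1**2 - 4*a2*a0:+.4e}, positive roots = {positive}")
```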
Still with reference to Figure 9, we also note that the total number of susceptible humans decreases rapidly as the fractional parameter increases (panel (a)), while the total number of susceptible vectors increases as the fractional parameter α increases (panel (f)). Figure 10 validates Theorem 6. It is clear that varying the fractional-order parameter α does not influence the asymptotic model dynamics whenever R_0 < 1. Indeed, whatever the value of α, the infected compartments tend to zero asymptotically whenever R_0 < 1. This validates the fact that the Buruli-free equilibrium of the fractional model is GAS whenever R_0 < 1.

Conclusion

In this work, we formulated and studied a compartmental model for the Buruli-Ulcer transmission dynamics using classical and fractional (in the Caputo sense) derivatives. We first formulated and studied the model with integer derivative by proving the existence of equilibrium points, determining the basic reproduction number R_0, and proving the asymptotic stability of the Buruli-free equilibrium whenever R_0 < 1. Then, for R_0 > 1, we also proved the global stability of the unique endemic equilibrium point using the general theory of Lyapunov. We calibrated our model by performing parameter estimation using real data from Cameroon, while a sensitivity analysis was conducted to determine the important parameters in the disease dynamics. The estimated value of the basic reproduction number is R_0 = 2.0843, which implies that the system is in an endemic state since R_0 is greater than unity. To determine the number of waves (epidemic peaks), we computed the strength number A_0. We found that A_0 ≈ 17 > 0, which means that the BU epidemic has multiple waves. Numerically, we found that BU has at least 2 epidemic peaks. To determine which parameters play an important role in the model dynamics, we performed a sensitivity analysis of R_0 by computing the sensitivity index of each model parameter. The results of this sensitivity analysis showed that the mortality rate of vectors µ_v, the transmission probabilities β_h and β_v, as well as the recruitment rate of vectors, are the important parameters which govern the BU dynamics.

By replacing integer derivatives with the Caputo derivative, we obtained our Buruli ulcer fractional model. As for the classical model, we performed a stability analysis of the equilibrium points and established existence and uniqueness of solutions. Based on the Adams-Bashforth-Moulton (ABM) method, we constructed a numerical scheme. We then conducted several numerical simulations to validate the theoretical results and to see the role played by the variation of the fractional parameter α in the disease dynamics. Indeed, the simulation results confirmed the theoretical results (Theorems 6 and 7), which claim that the disease-free equilibrium (respectively the endemic equilibrium point) of both models is GAS whenever R_0 ≤ 1 (respectively R_0 > 1). Also, we found that, from a quantitative point of view, the fractional model allows more flexibility than the classical one; the epidemic peaks depend on the fractional order α.
The direct perspectives of this work consist to: (1) use another fractional derivatives like Caputo-Fabrizio, and Atangana-Baleanu derivatives and compare numerically the obtained results, and (2) implement optimal control which permits to determine which control strategy is better to control of BU disease in Cameroon. where e i for i = 1, ...7 are positive constants to be determined latter, its time-derivative gives Ż(S h , E b , I b , R b , S v , I v ) = e 1 1 - S * S Ṡh + e 2 1 - E * b E b Ėb + e 3 1 - I * b I b İb + e 4 1 - I * b I b İb + e 5 1 - R * b R b Ṙb + e 6 1 - S * v S v Ṡv + e 7 1 - I * v I v İv = e 1 1 - S * S [π h + ϕR b -β h S h X -µ h S h ] + e 2 1 - E * b E b [β h S h X -k 1 E b ] + e 3 1 - I * b I b [γ 1 E b -k 2 I b ] + e 4 1 - I * b I b [γ 1 E b -k 2 I b ] + e 5 1 - R * b R b [ηI b -k 3 R b ] + e 6 1 - S * v S v [π v -β v S v Y -µ v S v ] + e 7 1 - I * v I v [β v S v Y -µ v I v ] At equilibrium, we have the following relations                π h = -ϕR * b + β h S * h X * + µ h S * h , β h S * h X * = k 1 E * b , γ 1 E * b = k 2 I * b , ηI * b = k 3 R * b , π v = β v S * v Y * + µ v S * v , β v S * v Y * = µ v I * v . (45) We thus obtain Ż(S h , E b , I b , R b , S v , I v ) = e 1 µ h S * h 2 - S h S * h - S * h S h + e 6 β v S * v Y * S * v I * v 2 - S v S * v - S * v S v + R * S h S * h - S * h S h + β v S * v Y * S * v I * v 2 - S v S * v - S * v S v + β v S * v Y * 2 + Y Y * - S * v S v - I v I * v - S v Y S * v Y * I * v I v + 2µ h E * b 2 + X X * - S * h S h - S h X S * h X * E * b E b - E b E * b + 2k 2 I * b 3 + X X * - S * h S h - I * b I b E b E * b - I b I * b - S h X S * h X * E * b E b + 2ϕR * b I b I * b 1 - R * b R b + S * h S h 1 - R b R * b Since the arithmetic means is greater than the geometric means, we conclude that the first two terms of the right-hand side of above expression are negative. It then follows that Ż(S h , E b , I b , S v , I v ) ≤ 0 if the following conditions hold: 2 + Y Y * - S * v S v - I v I * v - S v Y S * v Y * I * v I v ≤ 0, 2 + X X * - S * h S h - E b E * b - S h X S * h X * E * b E b ≤ 0, I b I * b 1 - R * b R b + S * h S h 1 - R b R * b ≤ 0. ( 46 ) Using the function f (ξ) := 1 -ξ + ln(ξ) which is negative for all ξ > 0 and equal to zero whenever ξ = 1, we obtain Thanks to the LaSalle invariance principle [START_REF] La Salle | The stability of dynamical systems[END_REF], we conclude that if R 0 > 1, then M 1 is GAS in Ψ. This ends the proof. 2 + Y Y * - S * v S v - I v I * v - S v Y S * v Y * I * v I v = 2 + Y Y * - S * v S v - I v I * v -1 - S v I * v S * v I v 1 - Y Y * -1 + S v I * v S * v I v + Y Y * = -1 - S v I * v S * v I v 1 - Y Y * + 1 - S * v S v + 1 - I v I * v + 1 - S v I * v S * v I v ≤ -1 - S v I * v S * v I v 1 - Y Y * ≤ 0 2 + X X * - S * h S h - E b E * b - S h X S * h X * E * b E b = 2 + X X * - S * h S h - E b E * b -1 - X X * 1 - S h S * h E * b E b -1 + S h S * h E * b E b + X X * = -1 - X X * 1 - S h S * h E * b E b + 1 - S * h S h + 1 - E b E * b + 1 - S h S * h E * b E b ≤ -1 - S h S * h E * b E b 1 - X X * ≤ 0, 3 + X X * - S * h S h - I b I * b - I * b I b E b E * b - S h X S * h X * E * b E b = 3 + X X * - S * h S h - I b I * b - I * b I b E b E * b -1 - X X * 1 - S h S * h E * b E b -1 + S h S * h E * b E b + X X * = -1 - X X * 1 - S h S * h E * b E b + 1 - S * h S h + 1 - I b I * b + 1 - I * b I b E b E * b + 1 - S h S * h E * b E b ≤ -1 - X X * 1 - S h S * h E * b E b ≤ 0. 
Note that I b I * b 1 - R * b R b + S * h S h 1 - R b R * b < 0 ⇔ S * Figure 1: (a) Real data versus model simulation; (b) Forecasting of Buruli Ulcer in the next fifteen (15) years (from 2015 to 2030). 3; b 1 = 2004; c 1 = 0.6758; a 2 = -391.8; b 2 = 2014; c 2 = 3.04; a 3 = 64.25; b 3 = 2006; c 3 = 0.3512; a 4 = 552.9; b 4 = 2017 and c 4 = 11.5.The results are depicted in Figures2 and 3 Figure 2 : 2 Figure 2: Gaussian fitting of order 4 (red line) using the reported cases of Buruli Ulcer (BU) in Cameroon between 2001 and 2014 (blue dots). Figure 3 : 3 Figure 3: Gaussian fitting of order 4 (red line) using the reported cases of Buruli Ulcer (BU) in Cameroon between 2001 and 2014, and prediction of of BU in the next sixteen (16) years (blue line). Figure 4 : 4 Figure 4: Expecting number of cases with multiple waves. Figure 5 : 5 Figure 5: Sensitivity indices for R 0 against model parameters. Figures 6 6 Figures 6 depict how the basic reproduction number R 0 varies in term of most important model parameters. ( Figure 6 : 3 - 63 Figure 6: 3-D plots of the basic reproduction number R 0 in term of most important model parameters. Figure 7 : 7 Figure 7: Time-series of model (2) with β v = 0.18 and the other parameter values as in Table2such that R 0 = 0.9982889062331324 < 1. In this case the Buruli-free equilibrium M 0 = (2090, 0, 0, 0, 69971, 0)′ is globally asymptotically stable. Figure 8 : 8 Figure 8: Time-series of model (2) with the parameter values as in Table2such that R 0 = 2.0843 > 1.In this case the Buruli-free equilibrium is unstable and the unique endemic equilibrium point is globally asymptotically stable. Figure 9 :Figure 10 : 910 Figure 9: Time-series of fractional model[START_REF] Feng | On the role of variable latent periods in mathematical models for tuberculosis[END_REF] showing the impact of varying the fractional-order parameter α on model dynamics. The fractional-order being α = 1, α = 0.9, α = 0.8, α = 0.7, and α = 0.6. b e 5 k 3 -e 1 ϕ + e 5 k 3 + 1 X+ β v S * v Y * e 6 + e 7 + e 6 Y Y * -e 6 S 7 -e 6 ) 33166676 β h S * h X * e 1 + e 2 + e S v Y S * v Y * -e 2 µ h E b .Setting e 3 = e 4 = e 6 = 1, and e 1 = e 2 = (e 3 + e 4 ), e 5 = (e 3 + e 4 ) ϕ k 3 and e 6 = e 7 , We obtainŻ(S h , E b , I b , R b , S v , I v ) = 2µ h S * h 2 - (S h , E b , I b , S v , I v ) ≤ 0. So that lim t→∞ (S h (t), E b (t), I b (t), R b (t), S v (t), I v (t)) → M 1 := (S * h , E * b , I * b , R * b , S * v , I * v ) . Table 1 : 1 Description of state variables and model parameters. h is the human life mean expectancy while µ v is the mortality rate of vectors; κ is the disease-induced death rate; the transmission rate of infection between susceptible humans and infectious vectors is denoted by β h while β v denoted the transmission rate of infection between infectious humans and susceptible vectors; γ 1 is the transition rate between E b and I b ; η 1 is the recovered rate of infectious humans. We resume states variables and parameter descriptions in Table1. 
We thus express our new Buruli ulcer model using ordinary derivatives as follows: State variables Description S h Susceptible humans E b Infected humans in latent stage I b Infectious population R b Recovered population S v Susceptible vectors I v Infected vectors Parameters Description π h Recruitment rate of susceptible humans π v Recruitment rate of water bugs ϕ Rate of recovered people who loosed immunity after recovered from Buruli ulcer µ h Natural death rate of human population µ v Mortality rate of the vector population β h Transmission rate of Mycobacterium ulcerans from infectious vector to a susceptible human β v Transmission rate of Mycobacterium ulcerans from infectious human to a susceptible vector κ Disease-induced death rate η Recovery rate of Buruli Ulcer infected people γ 1 Contact rate of Mycobacterium Ulcer with the population is denoted by N v (t) and includes the susceptible vector population noted S v (t) and the infectious vector population noted I v (t), at any time t. So, N h (t) = S h (t)+E b (t)+I b (t)+R b (t) and N v (t) = S v (t) + I v (t). The following model parameters, which are nonnegative, have described as follows: π h represents the recruitment rate of population; π v denoted recruitment rate of water bugs while ϕ denotes the rate of recovering people who have loosed immunity after recovering from Buruli ulcer; 1/µ Table 2 : 2 Estimated parameters. The corresponding value of the basic reproduction number is R 0 = 2.0843. Parameters Value Source Parameters Value Source µ h 1/55.5 [2, 4] β h 0.784684 Fitted k 0.203789 Fitted β v 0.036985 Fitted π h 58.660064 Fitted η 0.760527 Fitted π v 172813.870585 Fitted γ 1 0.482389 Fitted ϕ 0.169881 Fitted µ v 0.589891 Fitted Table 3 : 3 Sensitivity indices of model parameters. Parameters Sensitivity index Parameters Sensitivity index π h -0.5000 β h 0.5000 π v 0.5000 β v 0.5000 ϕ 0 κ -0.1037 µ h 0.4728 η -0.3871 µ v -1 γ 1 0.0180 Acknowledgements HA and HPEF acknowledge the grants and support of the Cameroon Ministry of Higher Education through the initiative for the modernization of research in Cameroon's Higher Education. Declaration of Competing Interest The authors declare that they have no known competing interests. Author Contributions Conceptualization: HA & HPEF; Methodology: RF, HA, AK; Formal analysis and investigation: RF & HA; Writing-original draft preparation: RF & HA; Writing-review and editing: AH, HPEF, AK; Funding acquisition: HA; Resources: HA, HPEF, AK; Supervision: HA & HPEF. A. Proof of Theorem 4 Considering the function
04072076
en
[ "info", "phys" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04072076v2/file/AITQM.pdf
Samuel Epstein email: [email protected]

An Introduction to Algorithmic Information Theory and Quantum Mechanics

This article presents a survey of published and unpublished material on the intersection of Algorithmic Information Theory and Quantum Mechanics. It is, to the author's knowledge, the first of its type. Three different notions of the algorithmic content of quantum states are reviewed. Notions of algorithmic quantum typicality and mutual information are introduced. The relationship between algorithmic information and quantum measurements is explored. One of the surprising results is that an overwhelming majority of quantum (pure and mixed) states, when undergoing decoherence, will result in a classical probability measure with no algorithmic information. Thus most quantum states decohere into white noise. A quantum analogue of Martin-Löf random sequences is reviewed. Algorithmic Information Theory presents new complications for the Many Worlds Theory, as it conflicts with the Independence Postulate. When algorithmically complicated processes are ruled out, measurements are required to produce distributions over quantum states that have cloneable information.

Chapter 1 Introduction

Classical information theory studies the communication of bits across a noisy channel. Quantum Information Theory (QIT) studies the kind of information ("quantum information") which is transmitted by microparticles from a preparation device (sender) to a measuring apparatus (receiver) in a quantum mechanical experiment; in other words, the distinction between carriers of classical and quantum information becomes essential. The notion of a qubit can be defined at an abstract level, without giving preference to any particular physical system such as a spin-1/2 particle or a photon. Qubits behave very differently from bits. To start, qubits can be in a linear superposition of 0 and 1. Qubits can exhibit entanglement, where two objects at a distance become a single entity. The study of entanglement, and in particular the question of how it can be quantified, is therefore a central topic within quantum information theory. However, due to the no-cloning theorem [START_REF] Wootters | A single quantum cannot be cloned[END_REF], instant communication is not possible. Some other aspects of QIT are as follows.

1. Quantum Computing: includes hardware (quantum computers), software, algorithms such as Shor's factoring algorithm or Grover's algorithm, and applications.
2. Quantum Communication: quantum networking, quantum internet, quantum cryptography.
3. Applications in Physics: applications to convex optimization, black holes, and exotic quantum phases of matter.
4. Quantum Shannon Theory: quantum channels, quantum protocols, quantum information and entropy.

One aspect of Quantum Shannon Theory (QST) that has had relatively little study is its relationship to Algorithmic Information Theory (AIT). AIT, in part, is the study of the information content of individual strings. A string is random if it cannot be compressed with respect to a universal Turing machine. This paper surveys the current state of research on QST and AIT and provides unpublished results from the author. Hopefully it will convince the reader that there is a fruitful area of research between QST and AIT. Some areas of this intersection include the algorithmic content of quantum states, how typical a quantum state is with respect to a quantum source, and how to quantify the algorithmic content of a measurement.
One can also gain further insight into quantum transformations, such as purification, decoherence, and approximations to quantum cloning. As this survey will show, there are some aspects of AIT that directly transfer over to quantum mechanics. This includes comparable definitions of complexity, and conservation inequalities. In addition, there exist quantum versions of the EL Theorem, [START_REF] Levin | Occam bound on lowest complexity of elements[END_REF][START_REF] Epstein | On the algorithmic probability of sets[END_REF] and the Outlier Theorem, [START_REF] Epstein | All sampling methods produce outliers[END_REF]. However there are some aspects of AIT that are different in the context of quantum mechanics. This includes the fact the self information of most quantum pure states is zero, with I(|ψ : ψ) ≈ 0. This has implications on the algorithmic content of measurements and decoherence. The main areas covered in this article are • Chapter 2: This chapter covers the background material needed for the article. A new definition, the algorithmic information between probabilities, is introduced and shown to have information non-growth properties with respect to randomized processing. • Chapter 3: Three different algorithmic measures of quantum states are covered. Their properties are described, including an addition inequality, a Quantum EL Theorem, and a generalized no-cloning theorem. Multiple relationships between the complexities are proven. • Chapter 4: The notion of the algorithmic typicality of one quantum state with respect to another quantum state is introduced. Typicality is conserved with respect to quantum operations. A quantum outlier theorem is proven. This states that non-exotic projections must have atypical pure states in their images. • Chapter 5: The definition of quantum algorithmic information is introduced. Quantum information differs from classical algorithmic information in that an overwhelming majority of pure states have negligible self-information. Information is conserved over quantum operations, with implications to quantum cloning, quantum decoherence, and purification. • Chapter 6: Quantum algorithmic information upper bounds the amount of classical information produced by quantum measurements. Given a quantum measurement, for an overwhelming majority of pure states, the measurement will be random noise. An overwhelming majority of quantum pure states , when undertaking decoherence, will result in a classical probability with no algorithmic information. • Chapter 7: A quantum equivalent to Martin Löf random sequence is introduced. Such quantum random states have incompressible initial segments with respect to a new measure quantum complexity called Quantum Operation Complexity. This complexity term measures the cost of approximating a state with a classical and quantum component. • Chapter 8: This chapter shows the Many Worlds Theory and AIT are in conflict, as shown through the existence of a finite experiment that measures the spin of a large number of electrons. After the experiment there are branches of positive probability which contain forbidden sequences that break the Independence Postulate, a postulate in AIT. • Chapter 9: This chapter concludes the paper with a discussion of the boundary between quantum information and classical information. We show that measurements are necessary to produce distributions over quantum states that have cloneable information. • Appendix A: Properties of the quantum information of basis states are proven. 
Chapter 2 Background The reader is assumed to be familiar with both algorithmic information theory and quantum information theory, but we review the core terms. We use N, Z, Q, R, {0, 1}, {0, 1} * , and {0, 1} ∞ to denote natural numbers, integers, rational numbers, reals, bits, finite strings, and infinite sequences. {0, 1} * ∞ def = {0, 1} * ∪ {0, 1} ∞ . x denotes the length of the string. Let X ≥0 and X >0 be the sets of non-negative and of positive elements of X. [A] def = 1 if statement A holds, else [A] def = 0. The set of finite bit-strings is denoted by {0, 1} * . For set of strings A ⊆ {0, 1} * , A = {xα : x ∈ A, α ∈ {0, 1} ∞ }. When it is clear from the context, we will use natural numbers and other finite objects interchangeably with their binary representations. The ith bit of α ∈ {0, 1} * ∞ is denoted α i , and its n bit prefix is denoted α ≤n . x ∈ {0, 1} * for x ∈ {0, 1} * is the self-delimiting code that doubles every bit of x and changes the last bit of the result. For positive real functions f , by < + f , > + f , = + f , and < log f , > log f , ∼f we denote ≤ f +O(1), ≥ f -O(1), = f ±O(1) and ≤ f +O(log(f +1)), ≥f -O(log(f +1)), = f ±O(log(f +1)). Furthermore, * <f , * >f denotes < O(1)f and > f /O(1). The term and * =f is used to denote * >f and * <f . A probability measure Q over {0, 1} * is elementary if it has finite support and range that is a subset of rationals. Elementary probability measures can be encoded into finite strings Q in the standard way. Algorithmic Information Theory T y (x) is the output of algorithm T (or ⊥ if it does not halt) on input x ∈ {0, 1} * and auxiliary input y ∈ {0, 1} * ∞ . T is prefix-free if for all x, s ∈ {0, 1} * with s = ∅, either T y (x) = ⊥ or T y (xs) = ⊥ . The complexity of x ∈ {0, 1} * with respect to T y is K T (x|y) def = inf{ p : T y (p) = x}. There exist optimal for K prefix-free algorithms U , meaning that for all prefix-free algorithms T , there exists c T ∈ N, where K U (x|y) ≤ K T (x|y) + c T for all x ∈ {0, 1} * and y ∈ {0, 1} * ∞ . For example, one can take a universal prefix-free algorithm U , where for each prefix-free algorithm T , there exists t ∈ {0, 1} * , with U y (tx) = T y (x) for all x ∈ {0, 1} * and y ∈ {0, 1} * ∞ . K(x|y) def = K U (x|y) is the Kolmogorov complexity of x ∈ {0, 1} * relative to y ∈ {0, 1} * ∞ . The algorithmic probability is m(x|y) = {2 -p : U y (p) = x}. By the coding theorem K(x|y) = + -log m(x|y). The amount of mutual information between two strings x and y is I(x : y) = K(x) + K(y) -K(x, y). By the chain rule K(x, y) = + K(x) + K(y|x, K(x)). The halting sequence H ∈ {0, 1} ∞ is the infinite string where H i def = [U (i) halts] for all i ∈ N. The amount of information that H has about x ∈ {0, 1} * is I(x; H) = K(x) -K(x|H). The randomness deficiency of x ∈ {0, 1} * with respect to elementary probability P over {0, 1} * is d(x|P ) = -log P (x) -K(x| P ) . we say t : {0, 1} * → R ≥0 is a P -test, for some probability P , if x t(x)P (x) ≤ 1. Let t P be a universal lower computable P -test, where for any other lower computable P -test t, t P (x) * > m(t)t(x). Then by the universality of the deficiency of randomness, [G 01], d(x|P ) = + log t P (x). The transform of a probability Q by f : {0, 1} * → {0, 1} * , is the probability f Q, where f Q(x) = f (y)=x Q(y). Both randomness deficiency and information enjoy conservation inequalities. Theorem 1 (See [G 01]) d(f (x)|f Q) < + d(x|Q). 
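(K, m, and the universal tests underlying d are not computable, so no program can evaluate them exactly. Purely as an illustration of the shape of the randomness-deficiency definition above, the sketch below substitutes the zlib-compressed length of a string for K(x); this is a crude upper-bound heuristic, not the universal test used in the theorems.)

```python
# Illustration only: K(x) is uncomputable.  We use the zlib-compressed length of x as a
# crude stand-in and evaluate  -log2 P(x) - K_proxy(x), the shape of the deficiency d(x|P),
# for the uniform distribution P over 64-byte strings.
import os
import zlib

def K_proxy(x: bytes) -> float:
    return 8 * len(zlib.compress(x, 9))           # length in bits of a compressed description

n = 64
log2_P = -8 * n                                   # uniform P over byte strings of length n

samples = {
    "urandom bytes": os.urandom(n),               # incompressible, hence "typical" for P
    "all-zero bytes": bytes(n),                   # highly compressible, hence atypical for P
}
for name, x in samples.items():
    print(f"{name:>15}: K_proxy = {K_proxy(x):6.0f} bits, "
          f"deficiency_proxy = {-log2_P - K_proxy(x):6.0f} bits")
```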
Theorem 2 ( [START_REF] Levin | Randomness conservation inequalities; information and independence in mathematical theories[END_REF]) I(f (x) : y) < + I(x : y). Lemma 1 ( [START_REF] Epstein | The outlier theorem revisited[END_REF]) I(f (a); H) < + I(a; H) + K(f ). Lemma 2 Let ψ a be an enumerable semi-measure, semi-computable relative to a. Proof. This requires a slight modification of the proof of Proposition 2 in [START_REF] Levin | Randomness conservation inequalities; information and independence in mathematical theories[END_REF], by requiring ψ to have a as auxilliary information. For completeness, we reproduce the proof. We The stochasticity of a string x ∈ {0, 1} * is Ks(x) = min Elementary Q K(Q)+3 log max{d(x|Q), 1}. Strings with high stochsticity measures are exotic, in that they have high mutual information with the halting sequence. Lemma 3 ([Lev16, Eps21b]) Ks(x) < log I(x; H). Algorithmic Information Between Probabilities We can generalize from information from strings to information between arbitrary probability measures over strings. Definition 1 (Information, Probabilities) For semi-measures p and q over {0, 1} * , I Prob (p : q) = log x,y∈{0,1} * 2 I(x:y) p(x)q(y). Definition 2 (Channel) A channel f : {0, 1} * × {0, 1} * → R ≥0 has f (•|x) being a probability measure over {0, 1} * for each x ∈ {0, 1} * . For probability p, channel f , f p(x) = z f (x|z)p(z). Theorem 3 For probabilities p and q over {0, 1} * , computable channel f , I Prob (f p : q) < + I Prob (p : q). Proof. Using Lemma 1, I Prob (f p : q) = log x,y 2 I(x:y) z f (x|z)p(z)q(y) < + log y,z q(y)p(z) x 2 I((x,z):y) f (x|z). 6 Using Lemma 2, I Prob (f p : q) < + log z,y q(y)p(z)2 I(z:y) = + I Prob (p : q). Thus processing cannot increase information between two probabilities. If the the probability measure is concentrated at a single point, then it contains self-information equal to the complexity of that point. If the probability measure is spread out, then it is white noise, and contains no self-information. Some examples are as follows. Example 1 • In general, a probability p, will have low I Prob (p : p) if it has large measure on simple strings, or low measure on a large number of complex strings, or some combination of the two. • If probability p is concentrated on a single string x, then I Prob (p : p) = K(x). • The uniform distribution over strings of length n has self information equal to (up to an additive constant) K(n). • There are semi-measures that have infinite self information. Let α n be the n bit prefix of a Martin Löf random sequence α and n ∈ [2, ∞). Semi-measure p(x) = [x = α n ]n -2 has I Prob (p : p) = ∞. • The universal semi-measure m has no self information. Example 2 (Uniform Spread) An example channel f has f (•|x) be the uniform distribution over strings of length x . This is a cannonical spread function. Thus if p is a probability measure concentrated on a single string, then I Prob (p : p) = K(x), and I(f p : f p) = + K( x ). Thus f results in a decrease of self-information of p. This decrease of information occurs over all probabilities and computable channels. Quantum Mechanics We use the standard model of qubits used throughout quantum information theory. We deal with finite N dimensional Hilbert spaces H N , with bases |α 1 , |α 2 , . . . , |α n . We assume H n+1 ⊇ H n and the bases for H n are the beginning of that of H n+1 . An n qubit space is denoted by Q n = n i=1 Q 1 , where qubit space Q 1 has bases |0 and |1 . For x ∈ Σ n we use |x ∈ Q n to denote n i=1 |x[i] . 
The space Q n has 2 n dimensions and we identify it with H 2 n . A pure quantum state |φ of length n is represented as a unit vector in Q n . Its corresponding element in the dual space is denoted by φ|. p i |ψ i ψ i | is S(σ) = -p i log p i . A pure quantum state |φ and (semi)density matrix σ are called elementary if their real and imaginary components have rational coefficients. Elementary objects can be encoded into strings or integers and be the output of halting programs. Therefore one can use the terminology K(|φ ) and K(σ), and also m(|φ ) and m(σ). We say program q ∈ {0, 1} * lower computes positive semidefinite matrix σ if, given as input to universal Turing machine U , the machine U reads ≤ q bits and outputs, with or without halting, a sequence of elementary semi-density matrices {σ i } such that σ i ≤ σ i+1 and lim i→∞ σ i = σ. A matrix is lower computable if there is a program that lower computes it. Quantum Operations A quantum operation is the most general type of operation than can be applied to a quantum state. In Chapters 4 and 5, conservation inequalities will be proven with respect to quantum operations. A map transforming a quantum state σ to ε(σ) is a quantum operation if it satisfies the following three requirements 1. The map of ε is positive and trace preserving, with Tr(σ) = Tr(ε(σ)). The map is linear with ε( i p i σ i ) = i p i ε(σ i ). 3. The map is completely positive, were any map of the form ε ⊗ M acting on the extended Hilbert space is also positive. Another means to describe quantum operations is through a series of operators. A quantum operation ε on mixed state σ A can be seen as the appending of an ancillia state σ b , applying a unitary transform U , then tracing out the ancillia system with ε(σ A ) = Tr B (U (σ A ⊗ σ B )U * ) . (2.1) A third way to characterize a quantum operation is through Kraus operators, which can be derived using an algebraic reworking of Equation 2.1. Map ε is a quantum operation iff it can be represented (not necessarily uniquely) using a set of matrices {M i } where ε(σ) = i M i εM * i and i M * i M i ≤ I, where I is the identity matrix. A quantum operation ε is elementary iff it admits a represented of the form in Equation 2.1 where B, U , and σ B are each elementary, in that they each can be encoded with a finite string. The encoding of an elementary quantum operation is denoted by ε = B U σ B . Each elementary quantum operation admits an elementary Kraus operator representation {M i }, in that each M i is an elementary matrix. This elementary Kraus operator is computable from ε . Chapter 3 Quantum Complexity Three Meausres of the Algorithmic Content of Individual Quantum States The formal study of Algorithmic Information Theory and Quantum Mechanics began with the introduction of three independent measures of the algorithmic content of a mixed or pure quantum state, detailed in the papers [START_REF] Berthiaume | Quantum Kolmogorov Complexity[END_REF]G 01,[START_REF] Vitanyi | Quantum kolmogorov complexity based on classical descriptions[END_REF]. BVL complexity [START_REF] Berthiaume | Quantum Kolmogorov Complexity[END_REF] measures the complexity of a pure quantum state |ψ by the length of the smallest input to a universal quantum Turing machine that outputs a good approximation of |ψ . Vitányi complexity [START_REF] Vitanyi | Quantum kolmogorov complexity based on classical descriptions[END_REF] measures the entropy of a pure state |ψ as the amount of classical information needed to reproduce a good approximation of |ψ . 
Gács complexity measures the entropy of a pure or mixed quantum state by using a quantum analouge of of the universal semi-measure m. BVL Complexity BVL complexity, introduced in [BvL01], uses a universal quantum Turing machine to define the complexity of a pure quantum state. The input and output tape of this machine consists of symbols of the type 0, 1, and #. The input is an ensemble {p i } of pure states |ψ i of the same length n, where p i ≥ 0 and i p i = 1. This ensemble can be represented as a mixed state of n qubits. If, during the operation of the quantum Turing machine, all computational branches halt at a time t, then the contents on the output tape are considered the output of the quantum Turing machine. The BVL Complexity of a pure state, Hbvl[ ](|ψ ) is the size of the smallest (possibly mixed state) input to a universal quantum Turing machine such that fidelity between the output and |ψ is at least . The fidelity between a mixed state output σ and |ψ is ψ| σ |ψ . We require that the input quantum state be elementary. We also require that universal quantum Turing machine be conditioned on the number of qubits n, on a classical auxillary tape. Vitányi Complexity Vitányi complexity [START_REF] Vitanyi | Quantum kolmogorov complexity based on classical descriptions[END_REF] is a measure of the algorithmic information content of a pure state |ψ . It is equal to the minimum of the Kolmogorov complexity of an approximating elementary pure state |φ summed with a score of their closeness. We use a slightly different definition than the original [START_REF] Vitanyi | Quantum kolmogorov complexity based on classical descriptions[END_REF], in that we use a classical machine and not a quantum machine for the approximation. Let N be the dimension of the Hilbert space. Hv(|ψ ) = min Elementary |θ ∈H N K(|θ |N ) -log | ψ|θ | 2 . Gács Complexity Gács complexity [G 01] is defined using the following universal lower computable semi-density matrix, parametered by x ∈ {0, 1} * , with µ x = Eementary |φ ∈H N m(|φ |x, N ) |φ φ|. The parameter N represents the dimension of the Hilbert space. We use µ X to denote the matrix µ over the Hilbert space denoted by symbol X. The Gács entropy of a mixed state σ, conditioned on x ∈ {0, 1} * is defined by Hv(σ|x) = -log Trµ x σ . We use the following notation for pure states, with Hg(|φ |x) = Hg( |φ φ| |x). For empty x we use the notaion Hg(σ). This definition generalizes H in [G 01] to mixed states. Note than in [G 01], there is another measure of quantum algorithmic entropy, H, which we will not cover in this paper. An infinite version of algorithmic entropy can be found at [START_REF] Benatti | Gacs Quantum Algorithmic Entropy in Infinite Dimensional Hilbert Spaces[END_REF]. Properties of Universal Matrix µ and Gács Complexity The matrix µ is important in algorithmic information theory and quantum mechanics, as it is the foundation for the information term defined in Chapter 5. The following theorem shows that the lower computable semi-density matrix µ is universal. It is greater than any other lower computable matrix, weighted by their complexity. This parallels the classical case, where universal measure m majorizes lower computable semi measure p, with m(x) * > m(p)p(x). This theorem is used throughout the paper, and will not be explicitly cited. Theorem 4 ([G 01]) Let q ∈ {0, 1} * , and the dimension of the Hilbert space, N , compute lower compute semi-density matrix A. Then µ * > m(q|N )A. Proof. 
A can be composed into a sum i p(i) |ψ i ψ i |, where each |ψ i is elementary, p is a semi-measure, with i p(i) ≤ 1, and p is computable from q. Thus since p is computable from q and N , A = i p(i) |ψ i ψ i | * < m(p|N ) -1 i m(i|N ) |ψ i ψ i | * < m(q|N ) -1 i m(i|N ) |ψ i ψ i | * < µ/m(q|N ). Theorem 5 ([G 01]) µ ii * = m(i|N ). Proof. The matrix ρ = i m(i|N ) |i i| is lower computable, so ρ * < µ so µ ii * > m(i|N ). Fur- thermore, f (i) = |i µ i| is a lower computable semi-measure, so m(i|N ) * > µ ii . Theorem 6 ([G 01]) Tr Y µ XY * = µ X . Proof. Let ρ = Tr Y µ XY , which is a lower computable semi-density matrix because one can enumerate elementary pure states |ψ ψ| in the space XY , take their partial trace, Tr T |ψ ψ|, and add the resulting pure or mixed state to the sum ρ. Thus ρ * < µ X . Let σ = µ X ⊗ |ψ ψ|, where |ψ is a reference elementary state. Thus σ * < µ XY so µ X = Tr Y σ * < Tr Y µ XY . Theorem 7 ([G 01]) Hg(σ) < + Hg(σ ⊗ ρ). Proof. Note that this theorem is not less general than that of Theorem 9, because both σ and ρ can be non-elementary. Using Theorem 6 and the properties of partial trace, 2 -Hg(σ) * > Trσµ X * > TrσTr Y µ XY * > Tr(σ ⊗ I)µ XY * > Tr(σ ⊗ ρ)µ XY * = 2 -Hg(σ⊗ρ) . No Cloning Theorem In classical algorithmic information theory, one can easily reproduce a string x, with K(x) = + K(x, x). However the situation is much different in the quantum case. Due to the no-cloning theorem, [START_REF] Wootters | A single quantum cannot be cloned[END_REF] Hg(|ψ ) < + K(P S /M |N m ) -log ψ| 1 M P S /M |ψ < + K(m) + log m + N -1 m . For the lower bound, let c = max = m + N -1 m -1 c > + log m + N -1 m . Addition Inequality The addition theorem for classical entropy asserts that the joint entropy for a pair of random variables is equal to the entropy of one plus the conditional entropy of the other, with H(X ) + H(Y/X ) = H(X , Y). For algorithmic entropy, the chain rule is slightly more nuanced, with K(x) + K(y|x, K(x)) = + K(x, y). An analogous relationship cannot be true for Gács entropy, Hg, since as shown in Theorem 8, there exists elementary |φ where Hg(|φ |φ ) -Hg(|φ ) can be arbitrarily large, and Hg(|φ / |φ ) = + 0. However, the following theorem shows that a chain rule inequality does hold for Hg. For n 2 × n 2 matrix A, let A[i, j] be the n × n submatrix of A starting at position (n(i -1) + 1, n(j -1) + 1). For example for n = 2 the matrix A =     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16     has A[1, 1] = 1 2 5 6 , A[1, 2] = 3 4 7 8 , A[2, 1] = 9 10 13 14 , A[2, 2] = 11 12 15 16 . For n 2 × n 2 matrix A and n × n matrix B, let M AB be the n × n matrix whose (i, j) entry is equal to TrA[i, j]B. For any n × n matrix C, in can be seen that TrA(C ⊗ B) = TrM AB C. Furthermore if A is lower computable and B is elementary, then M AB is lower computable. For elementary semi density matrices ρ, we use ρ, Hg(ρ) to denote the encoding of the pair of an encoded ρ and an encoded natural number Hg(ρ). Theorem 9 ([Eps19a]) For semi-density matrices σ, ρ, elementary ρ, Hg(ρ) + Hg(σ| ρ, Hg(ρ) ) < + Hg(σ ⊗ ρ). Proof. Let µ 2n be the universal lower computable semi density matrix over the space of 2n qubits, Q 2n = Q n ⊗Q n = Q A ⊗ Q B . Let µ n be the universal matrix of the space over n qubits. We define the following bilinear function over complex matrixes of size n × n, with T (ν, δ) = Trµ 2n (ν ⊗ δ). For fixed ρ, T (ν, ρ) is of the form T (ν, ρ) = TrM µ 2n ρ ν. 
The matrix M µ 2n ρ has trace equal to TrM µ 2n ρ = T (ρ, I ) = Trµ 2n (ρ ⊗ I) = Tr ((Tr Q B µ 2n )ρ) * = Trµ n ρ * = 2 -Hg(ρ) , using Theorem 6, which states Tr Y µ XY * = µ X . By the definition of M , since µ 2n and ρ are positive semi-definite, it must be that M µ 2n ρ is positive semi-definite. Since the trace of M µ 2n ρ is * = 2 -Hg(ρ) , it must be that up to a multiplicative constant, 2 Hg(ρ) M µ 2n ρ is a semi-density matrix. Since µ is lower computable and ρ is elementary, by the definition of M , 2 Hg(ρ) M µ 2n ρ is lower computable relative to the string ρ, Hg(ρ) . Therefore we have that 2 Hg(ρ) M µ 2n ρ * < µ ρ,Hg(ρ) . So we have that -log Tr2 Hg(ρ) M µ 2n ρ σ = -Hg(ρ) -log T (σ, ρ) = + Hg(σ ⊗ ρ) -Hg(ρ) > + -log µ (ρ,Hg(ρ)) σ = + Hg(σ| ρ, Hg(ρ) ). Subadditivity, Strong Subadditivity, Strong Superadditivity Theorem 10 ([G 01]) Hg(σ) is subadditive, with Hg(σ ⊗ ρ) < + Hg(σ) + Hg(ρ). Proof. 2 -Hg(σ)-Hg(ρ) = (Trµ X σ) (Trµ Y ρ) =Tr(σ ⊗ ρ)(µ X ⊗ µ Y ) * >Tr(σ ⊗ ρ)(µ XY ) * =2 -Hg(σ⊗ρ) . A function L from quantum mixed states to whole numbers is strongly subadditive if there exists a constant c ∈ N such that for all mixed states ρ 123 , L(ρ 123 ) + L(ρ 2 ) < L(ρ 12 ) + L(ρ 23 ) + c. Similarly L is strongly superadditive if there exists a constant c ∈ N such that for all mixed states ρ 123 , L(ρ 12 ) + L(ρ 23 ) < L(ρ 123 ) + L(ρ 2 ) + c. Theorem 11 Hg is not strongly subadditive. Proof. We fix the number of qubits n, and for i ∈ [1.. So Hg(ρ 123 ) + Hg(ρ 2 ) > + n and Hg(ρ 12 ) + Hg(ρ 23 ) = + 0, proving that Hg is not strongly subadditive. Theorem 12 Hg is not strongly superadditive. Proof. We fix the number of qubits n, and for i ∈ [1..2 n ], |i is the ith basis state of the n qubit space. Let |φ = So Hg(σ 123 ) + Hg(σ 2 ) < + n, and Hg(σ 12 ) + Hg(σ 23 ) > + 2n, proving that Hg is not strongly superadditive. Relation Between Complexities Vitányi Complexity and Gács Complexity By definition Hg(|ψ ) < + Hv(|ψ ). In fact, as shown in the following theorem, Vitányi complexity is bounded with respect to Gács complexity. Theorem 13 ([G 01]) Hg(|ψ ) < + Hv(|ψ ) < log 4Hg(|ψ ). Proof. For semi-density matrix A with eigenvectors {|a i } and decreasing eigenvectors {a i } with ψ| A |ψ ≥ 2 -k and |ψ = c i |a i , let A m be a projector onto the m largest eigenvectors. Let m be the first i where a i ≤ 2 -k-1 . Since a i ≤ 1, we have m ≤ 2 k+1 . Since i≥m a i |c i | 2 < 2 -k-1 i |c i | 2 = 2 -k-1 , we have ψ| A m |ψ ≥ i<m |c i | 2 ≥ i<m a i |c i | 2 ≥ 2 -k - i≥m a i |c i | 2 > 2 -k-1 . Thus there is some i ≤ m such that | ψ|a i | 2 ≥ 2 -2k-2 . Let ν = Trµ and ν k ∈ Q be a rational created from the first k digits of ν. Let μ be a lower approximation of µ, with trace greater than ν k . So K( μ) < log k. Thus if ψ| µ |ψ ≥ 2 -k , then ψ| μ |ψ ≥ 2 -k-1 . Thus there is an eigenvector |u of μ of complexity K(|u |N ) < log 2k and | ψ|u | 2 * > 2 -2k , so Hv(|ψ ) ≤ K(|u |N ) -log | ψ|u | 2 < log 4k < log 4Hg(|ψ ). We now describe an infinite encoding scheme for an arbitrary (not necessarily elementary) quantum pure state |ψ . This scheme is defined as an injection between the set of pure states and {0, 1} ∞ . We define |ψ to be an ordered list of the encoded tuples |θ , q, [| ψ|θ | 2 ≥ q] , over all elementary states |θ and rational distances q ∈ Q >0 . We use the following definition of the mutual information between sequences. Definition 3 ([Lev74]) For α, β ∈ {0, 1} ∞ , I(α : β) = log x,y∈{0,1} * m(x|α)m(y|β)2 I(x:y) . 
The following theorem states that only exotic pure states will have a Vitányi complexity much greater than Gács complexity. States are exotic if they have high mutual information, I, with the halting sequence H ∈ {0, 1} ∞ . Theorem 14 ([Eps20]) Hg(|ψ ) < + Hv(|ψ ) < log Hg(|ψ ) + I( |ψ : H). The proof to this theorem is quite extensive. We encourage the readers to review [START_REF] Epstein | An extended coding theorem with application to quantum complexities[END_REF] if they are interested in learning it. BVL Complexity and Gács Complexity Theorem 15 ([Eps20]) For pure state |ψ ∈ Q n , Hg(|ψ ) < + Hbvl[ ](|ψ ) + K(Hbvl (|ψ ) - log . Proof. For each k and t in N, let H k,t be the smallest linear subspace spanning elementary kqubit inputs to the universal quantum Turing machine M of size k that halt in t steps, outputing a n qubit mixed state. As shown in [START_REF] Muller | Strongly Universal Quantum Turing Machines and Invariance of Kolmogorov Complexity[END_REF], if t = t , then H k,t is perpendicular to H k,t . Let P k,t be the projection onto H k,t . For each k and t, the universal quantum Turing machine defines a completely positive map Ψ k,t over H k,t , where Ψ k,t (ν) = ρ implies that the quantum Turing machine, with semi-density matrix ν of length k on the input tape will output the n qubit mixed state ρ and halt in time t. Let ρ be a k qubit mixed state that minimizes k = Hbvl[ ](|ψ ) in time t. ρ ≤ P k,t 2 -k ρ ≤ 2 -k P k,t Ψ k,t 2 -k ρ ≤ Ψ k,t 2 -k P k,t Ψ k,t 2 -k ρ ≤ t Ψ k,t 2 -k P k,t The semi density matrix t Ψ k,t 2 -k P k,t is lower computable relative to k, so m(k|N )2 -k Ψ k,t ρ ≤ m(k|N ) t Ψ k,t 2 -k P k,t * < µ m(k|N )2 -k ψ| Ψ k,t (ρ) |ψ * < ψ| µ |ψ k + K(k|N ) -log > + Hg(|ψ ). Theorem 16 ([Eps20]) Hbvl[2 -Hg(|ψ )-O(log Hg(|ψ )) ](|ψ ) < log Hg(|ψ ). Proof. We use reasoning from Theorem 7 in [G 01]. From Theorem 7 in [START_REF] Epstein | An extended coding theorem with application to quantum complexities[END_REF] there exists a ρ such that K(ρ|N ) -log ψ| ρ |ψ < log Hg(|ψ ). Let -log ψ| ρ |ψ = m. Let |u 1 , |u 2 , |u 3 , . . . be the eigenvectors of ρ with eigenvalues u 1 ≥ u 2 ≥ u 3 . . . For y ∈ N, let ρ y = y i=1 u i |u i u i |. We expand |ψ in the basis of {|u i } with |ψ = i c i |u i . So we have that i u i |c i | 2 ≥ 2 -m . Let s ∈ N be the first index i with u i < 2 -m-1 . Since i u i ≤ 1, it must be that s ≤ 2 m+2 . So i≥s u i |c i | 2 < 2 -m-1 i |c i | 2 ≤ 2 -m-1 , ψ| ρ 2 m+2 |ψ ≥ Tr ψ| ρ s |ψ > i<s u i |c i | 2 ≥ 2 -m - i≥s u i |c i | 2 > 2 -m-1 . We now describe a program to the universal quantum Turing machine that will construct ρ 2 m+2 . The input is an ensemble {u i } 2 m+2 i=1 of vectors {|cB(i) }, where B(i) is the binary encoding of index i ∈ N which is of length m + 2. Helper code c of size = + K(p|N ) transforms each |cB(i) into |u i . Thus the size of the input is < + K(p|N ) + m < log Hg(|ψ ). The fidelity of the approximation is ψ| ρ 2 m+2 |ψ > 2 -m-1 ≥ 2 -Hg(|ψ )-O(log Hg(|ψ )) . Quantum EL Theorem In this paper we prove a Quantum EL Theorem. In AIT, the EL Theorem [START_REF] Levin | Occam bound on lowest complexity of elements[END_REF][START_REF] Epstein | On the algorithmic probability of sets[END_REF] states that sets of strings that contain no simple member will have high mutual information with the halting sequence. Theorem 17 ([Lev16, Eps19c]) For finite set D ⊂ {0, 1} * , min x∈D K(x) < log -log x∈D m(x) + I(D; H). 
It has many applications, including that all sampling methods produce outliers [START_REF] Epstein | All sampling methods produce outliers[END_REF]. The Quantim EL Theorem states that non exotic projections of large rank must have simple quantum pure states in their images. By non exotic, we mean the coding of the projection has low information with the halting sequence. The Quantum EL Theorem has the following consequence. Claim. As the von Neumann entropy associated with the quantum source increases, the lossless quantum coding projectors have larger rank and thus must have simpler (in the algorithmic quantum complexity sense) pure states in their images. Theorem 18 (Quantum EL Theorem [START_REF] Epstein | A quantum el theorem[END_REF]) Fix an n qubit Hilbert space. Let P be a elementary projection of rank > 2 m . Then, relativized to (n, m), min |φ ∈Image(P ) Hv(|φ ) < log 3(n -m) + I( P ; H). Proof. We assume P has rank 2 m . Let Q be the elementary probability measure that realized the stochasticity, Ks(P ), of an encoding of P . We can assume that every string in the support of Q encodes a projection of rank 2 m . We sample N independent pure states according to the uniform distribution Λ on the n qubit space. For each pure state |ψ i and projection R in the support of Q, the expected value of ψ i | R |ψ i is ψ i | R |ψ i dΛ = TrR |ψ i ψ i | dΛ = 2 -n TrRI = 2 m-n . Let random variable X R = 1 N N i=1 ψ i | R |ψ i be the average projection size of the random pure states onto the projection R. Since ψ i | R |ψ i ∈ [0, 1] with expectation 2 m-n , by Hoeffding's inequality, Pr(X R ≤ 2 m-n-1 ) < exp -N 2 -2(m-n)-1 Let d = d(P |Q). Thus if we set N = d2 2(m-n)+1 , we can find N elementary n qubit states such that Q({R : X R ≤ 2 m-n-1 }) ≤ exp(-d) , where X R is now a fixed value and not a random variable. Thus X P > 2 m-n-1 otherwise one can create a Q-expectation test, t, such that t(R) = exp d. This is a contradiction because 1.44d < + log(P ) < + d(P |Q) < + d, for large enough d which we can assume without loss of generality. Thus there exists i such that ψ i | P |ψ i ≥ 2 m-n-1 . Thus |φ = P |ψ i / ψ i | P |ψ i is in the image of P and | ψ i |φ | 2 = ψ i | P |ψ i ≥ 2 m-n-1 . The elementary state |ψ i has classical Kolmogorov complexity K(|ψ i ) < log log N + K(Q, d) < log 2(m -n) + Ks(P ). Thus by Lemma 3, min{Hv(|ψ ) : |ψ ∈ Image(P )} ≤ Hv(|φ ) < log K(|ψ i ) + | ψ i |φ | 2 < log 3(n -m) + Ks(P ) < log 3(n -m) + I(P ; H). Computable Projections Theorem 23 is in terms of elementary described projecctions and can be generalized to arbitrarily computable projections. For a matrix M , let M = max i,j |M i,j | be the max norm. A program p ∈ {0, 1} * computes a projection P of rank if it outputs a series of rank projections {P i } ∞ i=1 such that P -P i ≤ 2 -i . For computable projection operator P , I(P ; H) = min{K(p) -K(p|H) : p is a program that computes P }. Corollary 1 ([Eps23b] ) Fix an n qubit Hilbert space. Let P be a computable projection of rank > 2 m . Then, relativized to (n, m), min |φ ∈Image(P ) Hv(|φ ) < log 3(n -m) + I(P ; H). Proof. Let p be a program that computes P . There is a simply defined algorithm A, that when given p, outputs P n such that min |ψ ∈Image(P ) Hv(|ψ ) = + min |ψ ∈Image(Pn) Hv(|ψ ). Thus by Lemma 1, one gets that I(P n ; H) < + I(P ; H). The corollary follows from Theorem 23. Quantum Data Compression A quantum source consists of a set of pure quantum states {|ψ i } and their corresponding probabilities {p i }, where i p i = 1. 
The pure states are not necessarily orthogonal. The sender, Alice wants to send the pure states to the receiver, Bob. Let ρ = i p i |ψ i ψ i | be the density matrix associated with the quantum source. Let S(ρ) be the von Neumann entropy of ρ. By Schumacher compression, [START_REF] Schumacher | Quantum coding[END_REF], in the limit of n → ∞, Alice can compress n qubits into S(ρ)n qubits and send these qubits to Bob with fidelity approaching 1. For example, if the message consists of n photon polarization states, we can compress the inital qubits to nS(ρ) photons. Alice cannot compress the initial qubits to n(S(ρ) -δ) qubits, as the fidelity will approach 0. The qubits are compressed by projecting the message onto a typical subspace of rank nS(ρ) using a projector P . The projection occurs by using a quantum measurement consisting of P and a second projector (I -P ), which projects onto a garbage state. The results of this paper says that as S(ρ) increases, there must be simple states in the range of P . There is no way to communicate a quantum source with large enough S(ρ) without using simple quantum states. Chapter 4 Quantum Typicality Definition of Quantum Randomness Deficiency In [G 01], the quantum notion of randomness deficiency was introduced. This quantum randomness deficiency measures the algorithmic atypicality of a pure or mixed quantum state ρ with respect to a second quantum mixed state σ. Mixed states σ are used to model random mixtures {p i } of pure states {|ψ i }, so quantum randomness deficiency is a score of how atypical a quantum state is with respect to a mixture. We first describe typicality with respect to computable σ, and then generalize to uncomputable σ. Given a density matrix σ, a σ-test is a lower computable matrx T such that TrT σ = 1. Let T σ be the set of all σ-tests. If σ is computable, there exists a universal σ test t σ , that is lower computable relative to the number of qubits n, Trσt σ ≤ 1, and for every lower computable σ test T , O(1)t σ > m(T |σ)T . This universal test can be computed the following manner, analagously to the classical case (see [G 21]). A program enumerates all strings p and lower computes m(p|σ). The program then runs p and continues with the outputs as long as p outputs a series of positive semi-definite matrices T i such that TrT i σ ≤ 1 and T i ≤ T i+1 . If p outputs something other than this sequence or does not halt, the sequence is frozen. t σ = p m(p|σ) lim i T i is the weighted sum of all such outputs of progams p. Definition 4 (Quantum Randomness Deficiency) For mixed states σ and ρ, computable σ, d(ρ|σ) = log Tr t σ ρ. The quantum randomness deficiency, among other interpretations, is score of how typical a pure state is with respect to an algorithmically generated quantum source. Indeed, suppose there is a computable probability P over encodings of elementary orthogonal pure states { |ψ i } of orthogonal pure states {|ψ i }, with corresponding density matrix σ = Proof. The positive semi-definite matrix i P ( |ψ i ) |ψ i ψ i |. Then there is a lower-computable σ-test T = i 2 d( |ψ i |P ) |ψ i ψ i | with O(1)t σ > T . Thus d(|ψ i |σ) > + d( |ψ i |P ), T = i 2 d(i|p) |i i| is a σ-test, so T * < t σ and thus d(|i |σ) > + log i| T |i = + d(i|p). The function t(i) = i| t σ |i is a lower computable p-test, so d(i|P ) > + d(|i |σ). The following theorem shows that randomness deficiency d(ρ|σ) parallels the classical definition of randomness decificiency, d(x|P ) = log m(x)/P (x). 
Theorem 20 ([G 01]) Relativized to elementary σ, log d(ρ|σ) = + log Trρσ -1/2 µσ -1/2 σ Proof. The matrix σ 1/2 t σ σ 1/2 is a lower-computable semi density matrix, so t σ * < σ -1/2 µσ -1/2 . This implies Trt σ ρ * < Trρσ -1/2 µσ -1/2 . Uncomputable Mixed States We now extend d to uncomputable σ. For uncomputable σ, T σ is not necessarily enumerable, and thus a universal lower computable randomness test does not necessarily exist, and cannot be used to define the σ deficiency of randomness. So in this case, the deficiency of randomness is instead defined using an aggregation of σ-tests, weighted by their lower algorithmic probabilities. The lower algorithmic probability of a lower computable matrix σ is m(σ|x) = {m(q|x) : q lower computes σ}. Let T σ = ν∈Tσ m(ν|n)ν. Proof. d(σ|ρ) + d(ν|ξ) = log Tr ρ ∈Tρ m(ρ )ρ σ + log Tr ξ ∈T ξ m(ξ )ξ ν = log Tr     ρ ∈Tρ m(ρ )ρ   ⊗   ξ ∈T ξ m(ξ )ξ     (σ ⊗ ν) = log Tr   ρ ∈Tρ,ξ ∈T ξ m(ρ )m(ξ )ρ ⊗ ξ   (σ ⊗ ν) < + log Tr   ρ ∈Tρ,ξ ∈T ξ m(ρ ⊗ ξ )ρ ⊗ ξ   (σ ⊗ ν) < + log Tr   κ∈T ρ⊗ξ m(κ)κ   (σ ⊗ ν) = + d(σ ⊗ ν|ρ ⊗ ξ). Conservation Over Quantum Operations Proposition 1 For semi-density matrix ν, relativized to a finite set of elementary matrices {M i }, m( i M * i νM i |N ) * > m(ν|N ). Proof. For every string q that lower computes ν, there is a string q M of the form rq, that lower computes i M * i νM i . This string q M uses the helper code r to take the intermediary outputs ξ i of q and outputs the intermediary output i M * i ξ i M i . Since q M has access to {M i } on the auxilliary tape, m(q M |N ) * > m(q|N ). m(ν|N ) = {m(q|N ) : q lower computes ν} * < {m(q M |N ) : q lower computes ν} * < {m(q |N ) : q lower computes i M * i νM i } * <m i M * i νM i /n . The following theorem shows conservation of randomness with respect to elementary quantum operations. It generalizes Theorems 2 and 3 from [START_REF] Epstein | On the algorithmic probability of sets[END_REF]. Theorem 22 (Randomness Conservation) Relativized to elementary quantum operation ε, for semi-density matrices ρ, σ, d(ε(ρ)|ε(σ)) < + d(ρ|σ). Proof. Since the universal Turing machine is relativized to ε, there is an elementary Kraus operator {M i } that can be computed from ε where ε(ξ ) = i M i ξM * i . If ν is a i M i ρM * i -test, with ν ∈ T i M i ρM * i , then i M * i νM i is a ρ-test, with i M * i νM i ∈ T ρ . This is because by assumption Trν i M i ρM * i ≤ 1. So by the cyclic property of trace Tr i M * i νM i ρ ≤ 1. Therefore since i M * i νM i is lower computable, i M * i νM i ∈ T ρ . From Proposition 1, m( i M * i νM i |n) * > m( ν|n). So we have the following inequality d i M i σM * i | i M i ρM * i = log ν∈T i M i ρM * i m(ν|N )Trν i M i σM * i < + log ν∈T i M i ρM * i m i M * i νM i |N Tr i M * i νM i σ < + d(σ|ρ). A Quantum Outlier Theorem One recent result in the classical randomness deficiency case is that sampling methods produce outliers [START_REF] Epstein | All sampling methods produce outliers[END_REF]. There are several proofs to this result, with one of them derived from the fact that large sets of natural numbers with low randomness deficiencies are exotic, in that they have high mutual information with the halting sequence. In this paper, we prove a quantum version of this result. Projections of large rank must contain pure quantum states in their images that are outlying states. Otherwise, the projections are exotic, in that they have high mutual information with the halting sequence. 
Thus quantum coding schemes that use projections, such as Schumacher compression, must communicate using outlier quantum states. The classical and quantum theorems are analogous, but their proofs are very different! Theorem 23 ( [START_REF] Epstein | A quantum outlier theorem[END_REF]) Relativized to an n qubit mixed state σ, for elementary 2 m rank projector P , 3m -2n < log max |φ ∈Image(P ) d(|φ |σ) + I( P ; H). Proof. We relativize the universal Turing machine to σ and (3m -2n). Thus it is effectively relativized to m, n, and σ. Let elementary probability measure Q and d ∈ N realize Ks(P ), where d = max{d(P |Q), 1}. Without loss of generality we can assume that the support of Q is elementary projections of rank 2 m . There are d2 n-m+2 rounds. For each round we select an σ-test T , that is of dimension 1, TrσT ≤ 1, and for a certain Q-probability of projections B, TrT B is large. We now describe the selection process. Select a random test T to be 2 m-2 |ψ ψ|, where |ψ is an n qubit state chosen uniformly from the unit sphere, with distribution Λ. E[TrT σ] = 2 m-2 Tr ψ| σ |ψ dΛ = 2 m-2 Trσ |ψ ψ| dΛ = 2 m-n-2 Trσ = 2 m-n-2 . Thus the probability that T is a σ-test is ≥ 1 -2 m-n-2 . Let I m be an n-qubit identity matrix with only the first 2 m diagonal elements being non-zero. Let K m = I -I m . Let p = 2 m-n and T = T /2 m-2 . For any projection B of rank 2 m , Pr(TrB T ≤ .5p) = Pr(TrI m T ≤ .5p) = Pr(TrK m T ≥ 1 -.5p) E[TrK m T ] = 1 -p Pr(TrK m T ≥ 1 -.5p) ≤ (1 -p)/(1 -.5p) Pr(TrB T ≥ .5p) = 1 -Pr(TrK m T ≥ 1 -.5p) ≥ 1 -(1 -p)/(1 -.5p) = .5p/(1 -.5p) ≥ .5p Pr(TrBT ≥ 2 2m-n-3 ) ≥ .5p. Let Ω be the space of all matrices of the form 2 m-2 |φ φ|. Let R be the uniform distribution over Ω. Let [A, B] be 1 if TrAB > 2 2m-n-3 , and 0 otherwise. By the above equations, for all A ∈ Support(Q), Ω [A, B]dR(B) ≥ .5p. So A Ω [A, B]Q(A)dR(B) ≥ .5p. For Hermitian matrix A, {A} is 1 if TrAσ ≤ 1, and 0 otherwise. So Ω {A}dR(A) ≥ (1 -p2 -2 ). Let f = max T {T } Q(A)[T, A]. So .5p ≤ A Ω [A, B]Q(A)dR(B) = A Ω {B}Q[A, B](A)dR(B) + A Ω (1 -{B})[A, B]Q(A)dR(B) ≤ A Ω {B}[A, B]Q(A)dR(B) + Ω (1 -{B})dR(B) ≤ A Ω {B}[A, B]Q(A)dR(B) + p2 -2 p/4 ≤ A Ω {B}[A, B]Q(A)dR(B) = Ω {B} A [A, B]Q(A) dR(B) ≤ Ω f dR(B) p/4 ≤ f. Thus for each round i, the lower bounds on f proves there exists a one dimensional matrix T i = 2 m-2 |ψ ψ| such that TrT i σ ≤ 1 and R {Q(R) : TrT i R ≥ 2 2m-n-3 } ≥ p/4 = 2 m-n-2 . Such a T i is selected, and the the Q probability is conditioned on those projections B for which [T i , B] = 0, and the next round starts. Assuming that there are d2 n-m+2 rounds, the Q measure of projections B such there does not exist a T i with [T i , B] = 1 is ≤ (1 -p/4) d2 n-m+2 ≤ e -d . Thus there exists a T i such that [T i , P ] = 1, otherwise one can create a Q test t that assigns e d to all projections B where there does not exist T i with [T i , B] = 1, and 0 otherwise. Then t(P ) = e d so 1.44d < log t(P ) < + d(P |Q) < + d. d(P |σ) + K(T i ) 2m -n < + max |φ ∈Image(P ) d(P |σ) + (n -m) + log d + K(d) + K(Q) 2m -n < + max |φ ∈Image(P ) d(P |σ) + (n -m) + Ks(P ) 3m -2n < log max |φ ∈Image(P ) d(P |σ) + I(P ; H). Note that due to the fact that the left hand side of the equation is (3m-2n) and it has log precision, this enables one to condition the universal Turing machine to (3m -2n). 
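Both Theorem 18 and Theorem 23 rest on the computation that a Haar-random n qubit pure state has expected overlap 2^{m-n} with any fixed projection of rank 2^m, after which Hoeffding's inequality controls the deviation over many samples. The following Python sketch checks this expectation numerically; the parameter values and the choice of projector are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim):
    """Sample a pure state uniformly (Haar measure) on the unit sphere of C^dim."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

n, m = 6, 3                                  # n qubit space, projection of rank 2^m
dim = 2 ** n
P = np.zeros((dim, dim))
P[: 2 ** m, : 2 ** m] = np.eye(2 ** m)       # projector onto the first 2^m basis states

overlaps = [np.real(np.vdot(psi, P @ psi))
            for psi in (haar_state(dim) for _ in range(20000))]
print(np.mean(overlaps), 2.0 ** (m - n))     # both close to 2^{m-n} = 1/8
```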
Chapter 5 Computable Projections Quantum Information Definition of Quantum Algorithmic Information For a pair of random variables, X , Y, their mutual information is defined to be I(X : Y) = H(X ) + H(Y) -H(X , Y) = H(X ) -H(X /Y) = x,y p(x, y) log p(x, y)/p(x)p(y). This represents the amount of correlation between X and Y. Another intrepretation is that the mutual information between X and Y is the reduction in uncertainty of X after being given access to Y. Quantum mutual information between two subsystems described by states ρ A and ρ B of a composite system described by a joint state ρ AB is I(A : B) = S(ρ A ) + S(ρ B ) -S(ρ AB ), where S is the Von Neumman entropy. Quantum mutual information measures the correlation between two quantum states. As stated in Chapter 2, The algorithmic information between two strings is defined to be I(x : y) = K(x) + K(y) -K(x, y). By definition, it measures the amount of compression two strings achieve when grouped together. The three definitions above are based off the difference between a joint aggregate and the separate parts. Another approach is to define information between two semi-density matrices as the deficiency of randomness over µ ⊗ µ, with the mutual information of σ and ρ being d(σ ⊗ ρ|µ ⊗ µ). This is a counter argument for the hypothesis that the states are independently chosen according to the universal semi-density matrix µ. This parallels the classical algorithmic case, where I(x : y) = + d((x, y)|m ⊗ m) = + K(x) + K(y) -K(x, y). However to achieve the conservation inequalities, a further refinement is needed, with the restriction of the form of the µ ⊗ µ tests. Let C C⊗D be the set of all lower computable matrices A ⊗ B, such that Tr (A ⊗ B)(C ⊗ D) ≤ 1. Let C C⊗D = A⊗B∈C C⊗D m(A ⊗ B|N )A ⊗ B. Definition 6 (Information) The mutual information between two semi-density matrices σ, ρ is defined to be I(σ : ρ) = log TrC µ⊗µ (σ ⊗ ρ). Up to an additive constant, information is symmetric. Theorem 24 I(σ : ρ) = + I(ρ : σ). Proof. This follows from the fact that for every A ⊗ B ∈ C µ⊗µ , the matrix B ⊗ A ∈ C µ⊗µ . Furthermore, since m(A⊗B|N ) * = m(B ⊗A|N ), this guarantees that TrC µ⊗µ (σ ⊗ρ) * = TrC µ⊗µ (ρ⊗ σ), thus proving the theorem. Paucity of Self-Information Pure States For classical algorithmic information, for all x ∈ {0, 1} * , I(x : x) = + K(x). As shown in this section, this property differs from the quantum case, where there exists quantum states with high descriptional complexity and negligible self information. In fact this is the case for most quantum states, where for most n qubit pure states |ψ , Hg(|ψ ) ≈ n, I(|ψ : |ψ ) ≈ 0. The following theorem states that the information between two elementary states is not more than the combined length of their descriptions. Theorem 25 For elementary ρ and σ, I(ρ : σ) < + K(ρ|N ) + K(σ|N ). < + 0. Mixed States The results of the previous section can be extended to mixed states. Given a uniform measure over mixed states, an overwhelming majority of such states contain no algorithmic self information. Let Λ be the uniform distribution of the unit sphere of H N . Fix any number M ∈ N. Let the M -simplex be ∆ M = {(p i ) 1≤i≤M |p i ≥ 0, p 1 + • • • + p M = 1}. Let η be any distribution over ∆ M . Let µ M i=1 p i |ψ i ψ i | = η(p 1 , . . . , p M ) M i=1 Λ(|ψ i ), Theorem 27 2 I(σ:σ) dµ(σ) < + 0. Proof. 2 I(σ:σ) dµ(σ) =TrC µ⊗µ ∆ M Λ 1 • • • Λ M M i=1 p i |ψ i ψ i | ⊗ M i=1 p i |ψ i ψ i | dΛ 1 . . . dΛ M dη(p 1 , . . . 
, p M ) =TrC µ⊗µ ∆ M Λ 1 • • • Λ M   M i,j=1 p i p j |ψ i ψ i | ⊗ |ψ j ψ j |   dΛ 1 . . . dΛ M dη(p 1 , . . . , p M ) =TrC µ⊗µ ∆ M Λ M i=1 p 2 i |ψψ ψψ| dΛdη(p 1 , . . . , p M ) + TrC µ⊗µ ∆ M Λ 1 Λ 2 i,j∈{1,...,M },i =j 2p i p j |ψ 1 ψ 1 | ⊗ |ψ 2 ψ 2 | dΛ 1 dΛ 2 dη(p 1 , . . . , p M ). The first term is not greater than TrC µ⊗µ ∆ M Λ M i=1 |ψψ ψψ| dΛdη(p 1 , . . . , p M ) =TrC µ⊗µ Λ M i=1 |ψψ ψψ| dΛ. At this point, reasoning from the proof of Theorem 26 can be used to show that this term is O(1). The second term is not greater than TrC µ⊗µ ∆ M Λ 1 Λ 2 i p i i p i |ψ 1 ψ 1 | ⊗ |ψ 2 ψ 2 | dη(p 1 , . . . , p M ) =TrC µ⊗µ ∆ M Λ 1 Λ 2 |ψ 1 ψ 1 | ⊗ |ψ 2 ψ 2 | dη(p 1 , . . . , p M ) =TrC µ⊗µ Λ 1 Λ 2 |ψ 1 ψ 1 | ⊗ |ψ 2 ψ 2 | =TrC µ⊗µ (I/N ⊗ I/N ). Again, at this point, reasoning from the proof of Theorem 26 can be used to show that this term is O(1). Information Nongrowth Classical algorithmic information non-growth laws asserts that the information between two strings cannot be increased by more than a constant depending on the computable transform f , with I(f (x) : y) < I(x : y) + O f (1) (Theorem 2). Conservation inequalities have been extended to probabilistic transforms and infinite sequences. The following theorem shows information non-growth in the quantum case; information cannot increase under quantum operations, the most general type of transformation that a mixed or pure quantum state can undergo. The following theorem shows information nongrowth with respect to elementary quantum operations. It generalizes Theorems 5 and 10 from [START_REF] Epstein | On the algorithmic probability of sets[END_REF]. Theorem 28 (Information Conservation) Relativized to elementary quantum operation ε, for semi-density matrices ρ, σ, I(ε(ρ) : σ) < + I(ρ : σ). Proof. Since the universal Turing machine is relativized to ε, there is an elementary Kraus operator {M i } that can be computed from ε where ε(ξ ) = i M i ξM * i . Given density matrices A, B, C and D, we define d (A ⊗ B|C ⊗ D) = log C C⊗D A ⊗ B. Thus I(σ : ρ) = d (σ ⊗ ρ|µ ⊗ µ). The semi-density matrix i M i µM * i is lower semicomputable, so therefore i M i µM * i * < µ and also ( i M i µM * i ⊗ µ) C ( i M i µM * i )⊗µ . So we have d i M i σM * i ⊗ ρ|µ ⊗ µ = log E⊗F ∈C µ⊗µ m(E ⊗ F |N )Tr(E ⊗ F )( i M i σM * i ⊗ ρ) < + log E⊗F ∈C µ⊗µ m(c(E ⊗ F )|N )Trc(E ⊗ F ) i M i σM * i ⊗ ρ < + d i M i σM i * ⊗ ρ| i M i µM * i ⊗ µ . Using the reasoning of the proof of Theorem 22 on the elementary Kraus operator {M i ⊗ I} and d , where C replaces T , we have that d i M i σM * i ⊗ ρ| i M i µM * i ⊗ µ < + d (σ ⊗ ρ|µ ⊗ µ). Therefore we have that I i M i σM * i : ρ = d i M i σM * i ⊗ ρ|µ ⊗ µ < + d i M i σM * i ⊗ ρ| i M i µM * i ⊗ µ < + d (σ ⊗ ρ|µ ⊗ µ) = + I(σ : ρ). Algorithmic No-Cloning Theorem The no-cloning theorem states that every unitary transform cannot clone an arbitrary quantum state. Hiowever some unitary transforms can clone a subset of pure quantum states. For example, given basis states |1 , |2 , |3 , . . . there is a unitary transform that transforms each |i |0 to |i |i . In addition, there exists several generalizations to the no-cloning theorem, showing that imperfect clones can be made. In [START_REF] Buzek | Quantum copying: Beyond the no-cloning theorem[END_REF], a universal cloning machine was introduced that can clone an arbitrary state with the fidelity of 5/6. Theorem 8 shows a generalization of the no-cloning theorem using Gács complexity. 
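The basis dependence behind the no-cloning theorem can be seen in a short numerical example. A CNOT gate copies the computational basis states |0⟩ and |1⟩ exactly, but applied to the superposition |+⟩ it produces an entangled state whose fidelity with the ideal clone |+⟩ ⊗ |+⟩ is only 1/2. This toy sketch only illustrates the remark above that some unitaries clone a subset of states; it is not the 5/6-fidelity universal cloner of [BH96].

```python
import numpy as np

# CNOT acts as a perfect copier on the computational basis: |i>|0> -> |i>|i>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def clone_attempt(psi):
    """Apply the basis copier to |psi>|0> and return the fidelity with |psi>|psi>."""
    out = CNOT @ np.kron(psi, np.array([1, 0], dtype=complex))
    ideal = np.kron(psi, psi)
    return abs(np.vdot(ideal, out)) ** 2

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(clone_attempt(zero))   # 1.0 -- basis states are cloned exactly
print(clone_attempt(plus))   # 0.5 -- superpositions are not
```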
Given the information function introduced in this chapter, a natural question to pose is whether a considerable portion of pure states can use a unitary transform to produce two states that share a large amount of shared information. The following theorem answers this question in the negative. It states that the amount of information created between states with a unitary transform is bounded by the self information of the original state. Proof. We have the inequalities I(|φ : |ϕ ) < + I(|φ |ϕ ) : |φ |ϕ ) < + I(|ψ |0 n : |ψ |0 n ) < + I(|ψ : |ψ ), where the first inequality is derived using partial trace, the second inequality is derived using the unitary transform C, and the third inequality is derived by appending of an environment, all constituting quantum operations, whose conservation of information is proven in Theoerem 28. Theorem 29, combined with the paucity of self-information in pure states (Theorem 26) shows that only a very sparse set of pure states can, given any unitary transform, duplicate algorithmic information. Purification Every mixed state can be considered a reduced state of a pure state. The purification process is considered physical, so the extended Hilbert space in which the purified state resides in can be considered the existing environment. It should therefore be possible to regard our system with its mixed state as part of a larger system in a pure state. In this section we proof that the purifications of two mixed states will contain more information than the reduced states. Purification occurs in the following manner, starting with a density matrix ρ = n i=1 p i |i i|. A copy of the space is defined with orthonormal basis {|i }. In this instance the purification of ρ is |ψ = n i=1 √ p i |i ⊗ |i . For a density matrix ρ of size n, let P m ρ be the set of purifications of ρ of dimension m ≥ 2n. Corollary 3 For all |ψ σ ∈ P n σ , |ψ ρ ∈ P n ρ , d(σ|ρ) < + d(|ψ σ | |ψ ρ ). Corollary 4 For all |ψ σ ∈ P n σ , |ψ ρ ∈ P n ρ , I(σ : ρ) < + I(|ψ σ : |ψ ρ ). This all follows from conservation of randomness (Theorem 22) and information (Theorem 28) over quantum operations, which includes the partial trace function. Decoherence In quantum decoherence, a quantum state becomes entangled with the environment, losing decoherence. The off diagonal elements of the mixed state become dampened, as the state becomes more like a classical mixture of states. The single qubit example is as follows. The system is in state |ψ Q = α |0 + β |1 and the environment is in state |ψ E . The initial state is |ψ QE |ψ Q ⊗ |ψ E = α |0, ψ E + β |1, ψ E . The combined system undergoes a unitary evolution U , becoming entangled, with the result U |ψ QE = α |0, E 1 +β |1, E 2 . The density matrix is ρ QE = |α| 2 |0, E 1 0, E 1 |+|β| 2 |1, E 2 1, E 2 |+ α * β |1, E 2 0, E 1 | + αβ * |0, E 1 1, E 2 |. The partial trace over the environment yields ρ Q = |α| 2 |0 0| E 1 |E 1 + |β| 2 |1 1| E 2 |E 2 + α * β |1 0| E 2 |E 1 + αβ * |0 1| E 1 |E 2 . We have E 1 |E 1 = E 2 |E 2 = 1. Two environment-related terms are time dependent and can be described by an exponential decay function E 1 |E 2 = e -γ(t) . The larger the decay, the more off diagonal terms are suppressed. So ρ Q ≈ |α| 2 α * βe -γ(t) αβ * e -γ(t) |β| 2 . The above example can be generalzied to n qubit density matrices. Let Decohere(σ, t) be a decoherence operation that dampens the off-diagonal elements of σ with decay t. By definition, Decohere is a quantum operation. Randomness is conserved over decoherence. 
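A minimal numerical sketch of the single-qubit example above, assuming for simplicity the decay function γ(t) = t: as the off-diagonal terms are damped, the state moves from the pure state |+⟩ toward the classical mixture, and its von Neumann entropy rises from 0 toward 1 bit. The function names are illustrative.

```python
import numpy as np

def dephased(alpha, beta, gamma_t):
    """Single-qubit state after decoherence: off-diagonals damped by exp(-gamma(t))."""
    d = np.exp(-gamma_t)
    return np.array([[abs(alpha) ** 2,           alpha * np.conj(beta) * d],
                     [np.conj(alpha) * beta * d, abs(beta) ** 2           ]])

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # the pure state |+>
for t in [0.0, 0.5, 1.0, 3.0, 10.0]:
    rho = dephased(alpha, beta, t)
    print(t, round(von_neumann_entropy(rho), 4))
# The entropy climbs from 0 (pure |+>) toward 1 bit (the classical mixture).
```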
Thus if two states decohere, the first state does not increase in algorithmic atypicality with respect to the second state. Corollary 5 d(Decohere(σ, t)|Decohere(ρ, t)) < + d(σ|ρ). This is a corollary to Theorem 22. When a state loses coherence into the environment will not gain information with any other state. Corollary 6 For semi-density matrices σ and ρ, I(Decohere(σ, t) : Decohere(ρ, t)) < + I(σ : ρ). Chapter 6 Quantum Measurements In quantum mechanics, measurements are modeled by POVMs. A POVM E is a finite or infinite set of positive definite matrices {E k } such that k E k = I. For a given semi-density matrix σ, a POVM E induces a semi measure over integers, where Eσ(k) = TrE k σ. This can be seen as the probability of seeing measurement k given quantum state σ and measurement E. An elementary POVM E has a program q such that U (q) outputs an enumeration of {E k }, where each E k is elementary. A quantum instrument with respect to POVM E, is a quantum operation Φ E that takes a state σ to a set of outcomes and their probabilities, Φ E (σ) = k E(σ(k)) |k k|. Typicality and Measurements Theorem 30 shows that measurements can increase only up to a constant factor, the deficiency of randomness of a quantum state with respect to another quantum state. The classical deficiency of randomness of a probability with respect to a another probability is denoted as follows. Definition 7 (Deficiency, probabilities (Folkflore)) For probabilities p and q over {0, 1} ∞ , d(q|p) = log x q(x)m(x)/p(x). Note that in the following theorem, d(Eσ|Eρ) term represents the classical deficiency of randomness of a semimeasure Eσ with respect to a computable probability measure Eρ,. Theorem 30 ( [START_REF] Epstein | Algorithmic no-cloning theorem[END_REF]) For density matrices σ, ρ, relativized to elementary ρ and POVM E, d(Eσ|Eρ) < + d(σ|ρ). Proof. 2 d(Eσ|Eρ) = k (TrE k σ)m(k|N )/(TrE k ρ) = Tr( k (m(k|N )/TrE k ρ)E k )σ = Trνσ, where the matrix ν = ( k (m(k|N )/TrE k ρ)E k ) has ν ∈ T ρ , since ν is lower computable and Trνρ ≤ 1. So 2 d(σ|ρ) ≥ m(ν|N )Trνσ = m(ν|N )2 d(Eσ|Eρ) . Since m(ν|N ) * > 1, d(Eσ|Eρ) < + d(σ|ρ). Information and Measurements Given two mixed states σ and ρ and POVM E, the mutual information between the probabilities of Eσ and Eρ, from Definition 1, is I Prob (Eσ : Eρ). The following theorem states that given two states, the classical (algorithmic) information between the probabilities generated by two quantum measurements is less, up to a logarithmic factor, than the information of the two states. Thus I represents an upper bound on the amount of classical algorithmic information that can be extracted between two states. Theorem 31 Relative to POVMS E and F , I Prob (Eσ : F ρ) < log I(σ : ρ). Note than since the universal Turing machine is relativized to E and F , all K and m are conditioned to the number of qubits N . Quantum instruments with respect to POVMs E and F produces two mixed states Ψ E (σ) = m i=1 E i (σ) |i i| and Ψ F (ρ) = m j=1 F j (ρ) |j j|, where, without loss of generality, m can be considered a power of 2. By Theorem 5, the (i, i)th entry of µ is * = m(i), so T ij = 2 K(i)+K(j)-O(1) |i i| |j j| is a µ ⊗ µ test, with TrT i,j (µ ⊗ µ) < 1. 
So, using the fact that x/ log x is convex, I(σ : ρ) > + I(Ψ E (σ) : Ψ F (ρ)) > + log i,j m(T i,j )T i,j Ψ E (σ) ⊗ Ψ F (ρ) > + log i,j 2 K(i)+K(j) m(i, j, K(i) + K(j))E i (σ)F j (ρ) > + log i,j 2 I(i:j)-K(I(i:j)) E i (σ)F j (ρ) > + log i,j 2 I(i:j) I(i : j) -O(1) E i (σ)F j (ρ) > log log i,j 2 I(i:j) E i (σ)F j (ρ) > log I Prob (Eσ : F ρ). Corollary 7 For density matrices ρ and σ, and i, j ∈ N, relativized to POVMS E and F , I(i : j) + log E i (ρ)F j (σ) < log I(ρ : σ). Algorithmic Contents of Measurements This sections shows the limitations of the algorithmic content of measurements of pure quantum states. Theorem 32 says that given a measurement apparatus E, the overwhelming majority of pure states, when measured, will produce classical probabilities with no self-information, i.e. random noise. Theorem 3 shows that there is no randomized way to process the probabilities to produce more self-information, i.e. process the random noise. This is independent of the number of measurement outcomes of E. To prove this result, we need to define an upper-information term I that is defined using upper computable tests. We say a semi-density matrix ρ is upper computable if there a program q ∈ {0, 1} * such that when given to the universal Turing machine U , outputs, with or without halting, a finite or infinite sequence of elementary matrices ρ i such that ρ i+1 ρ i and lim i→∞ ρ i = ρ. If U reads ≤ q bits on the input tape, then we say p upper computes ρ. The upper probability of an upper computable mixed state σ is defined by m(σ/x) = {m(q/x) : q upper computes σ}. Let G C⊗D be the set of all upper computable matrices (tests) of the form A ⊗ B, where Lemma 4 Tr(A ⊗ B)(C ⊗ D) ≤ 1. Let G C⊗D = A⊗B∈C C⊗D m(A ⊗ B/n)(A ⊗ B) • Let Λ be the uniform distribution on the unit sphere of an n qubit space. 2 I(|ψ : |ψ ) dΛ = O(1), • 2 I(σ : σ) dµ(σ) = O(1). Proof. The proof follows identically to that of Theorems 26 and 27, with reference to Proposition 2. Lemma 5 ([Eps21a]) Relativized to POVM E, I Prob (Eσ:Eσ) < + I(σ:σ). Proof. Note that all complexity terms are relativized to N , due to the relativization of E. Since z(k) = TrµE k is lower semi-computable and k z(k) < 1, m(k) * > TrµE k , and so 1 > 2 K(k)-O(1) TrµE k . So ν i,j = 2 K(i)+K(j)-O(1) (E i ⊗ E j ) ∈ G µ⊗µ , with m(ν i,j ) * > m(i, j). I(σ:σ) = log A⊗B∈G µ⊗µ m(A ⊗ B)(A ⊗ B)(σ ⊗ σ) > + log Tr ij ν i,j m(ν i,j )(σ ⊗ σ) > + log 2 K(i)+K(j) m(i, j)Eσ(i)Eσ( (j) > + I Prob (Eσ:Eσ). Theorem 32 ([Eps21a]) Let Λ be the uniform distribution on the unit sphere of an n qubit space. Relativized to POVM E, 2 I Prob (E|ψ :E|ψ ) dΛ = O(1). Proof. By Lemma 5, 2 I(|ψ :|ψ ) * > 2 I Prob (E|ψ :E|ψ ) . From Lemma 4, 2 I(|ψ :|ψ ) dΛ = O(1). The integral 2 I Prob (E|ψ :E|ψ ) dΛ is well defined because 2 I Prob (E|ψ :E|ψ ) = Tr i,j m(i, j)ν i,j (|ψ ψ| ⊗ |ψ ψ|), for some matrices ν i,j which can be integrated over Λ. Theorem 33 Relativized to POVM E, 2 I Prob (Eσ:Eσ) dµ(σ) = O(1). Proof. By Lemma 5, 2 I(σ:σ) * > 2 I Prob (Eσ:Eσ) . From Lemma 4, 2 I(σ:σ) dµ(σ) = O(1). An implication of Theorems 32 and 33 is that for an overwhelming majority of quantum states, the probabilities induced by a measurement will have negligible self information. Algorithmic Content of Decoherence Decoherence was explained in Section 5.3.3. In the idealized case, decoherence transforms an arbitrary density matrix σ into a classical probability, with the off-diagonal terms turned to 0. Let p σ be the classical probability that σ decoheres to, with p σ (i) = σ ii . 
The following corollary to Theorem 32, for an overwhelming majority of pure or mixed states σ, p σ is noise, that is, has negligible self-information. The corollary follows from the fact that there is a POVM E, where E i = |i i| with E i |ψ = p |ψ (i). Corollary 8 Let Λ be the uniform distribution on the unit sphere of an n qubit space. • 2 I Prob( p |ψ :p |ψ ) dΛ = O(1), • 2 I Prob (pσ:pσ) dµ(σ) = O(1). PVMs Quantum measurements is also of the form of PVMs, or projection value measures. A PVM P = {Π i } is a collection of projectors Π i with i Π i = I, and TrΠ i Π j = 0 when i = j. When a measurement occurs, with probability ψ| Π i |ψ , the value i is measured, and the state collapses to |ψ = Π i |ψ / ψ| Π i |ψ . Further measurements of |ψ by P will always result in the i measurement, so P |ψ (i) = 1. To look at the effects of a measurement operation on the algorithmic information theoretic properties of a state, take a PVM, F = {Π i } 2 n-c i=1 , where n is the number of qubits of the Hilbert space. Let Λ F be the distribution of pure states when F is measured over the uniform distribution Λ over n qubit spaces. Thus Λ F represents the F -collapsed states from Λ. Theorem 34 n -2c < + log 2 I Prob (F :|ψ :F |ψ ) dΛ F . Proof. Note that ψ| Π i |ψ dΛ = Dim(Π i )2 -n . Furthermore, let κ ⊂ {1, . . . , 2 n-c } be the set of numbers a ∈ κ such that K(a) > + n -c. So |κ| * > 2 n-c . We have that if ψ| Π i |ψ = 1 then I Prob (F |ψ : F |ψ ) = I Prob (j → [i = j] : j → [i = j]) = I(i : i) = + K(i). 2 I(F :|ψ :F |ψ ) dΛ F = 2 n-c i=1 Dim(Π i )2 -n 2 K(i) * > i∈κ Dim(Π i )2 -n 2 n-c * >|κ|2 -n 2 n-c * >2 n-2c . Chapter 7 Infinite Quantum Spin Chains A qubit abstracts the properties of a single spin 1/2 particle. A complex system can be described by the collection of qubits, which model properties of superposition and entanglement. It can be convenient to consider a system's thermodynamic limit, which is the limit of infinite system size. This model is an infinite quantum spin chain. In the study of infinite quantum spin chains one can make a distinction between local and global effects. In addition, one does not need to consider boundary conditions. A Martin Löf random sequence is the accepted definition in AIT for a random infinite sequence. Can one define a quantum Martin Löf infinite quantum state? This chapter shows that this can be answered in the affirmative, and even landmark theorems in AIT like the Levin-Schnorr theorem can transfer over to the quantum domain. We first review Martin Löf random sequences. A Martin Löf test is an effective null set of the form n G n , where the measure of open set G n of the Cantor space goes toward zero. An infinite sequence passes a Martin Löf test if it is not contained in its null set. A Martin Löf random infinite sequence passes all Martin Löf tests. Let MLR be the set of Martin Löf random sequences. In [START_REF] Nies | Martin-Löf Random Quantum States[END_REF], the set of random infinite quantum states was introduced, which we call a NS random state. Just like the classical setting, a NS random state passes allso-called NS tests. An NS test is a quantum analog to Martin Löf tests, and it is defined by projections instead of open sets. Infinite Quantum Bit Sequences Before we introduce NS random sequences, we revisit the notion of C * algebras and functional states. A C * algebra, M, is a Banach algebra and a function * : M → M such that • For every x ∈ M, x * * = x. • For every x, y ∈ M, (x + y) * = x * + y * and (xy) * = y * x * . 
• For every λ ∈ C and x ∈ M, (λx) * = λx * . • For all x ∈ M, x * x = x x * . A C * algebra M is unital if it admits a multiplicative identity 1. A state over unital M is a positive linear functional Z : M → C such that Z(1) = 1. States are used to define NS random sequences. The set of states of M is denoted by S(M). A state is tracial if Z(x * x) = Z(xx * ), for all x ∈ M. The C * algebra over matrices of size 2 k over C is denoted by M k . Each state ρ ∈ S(M k ), can be identified with a density matrix S such that ρ(X) = TrSX, for all X ∈ M. States that cannot be represented as the convex combination of other states are called pure states. Otherwise they are called mixed states. States are used interchangeably with density matrices, depending on the context. The tracial state τ n ∈ S(M n ) corresponds to the matrix 2 -n I 2 n . The algebra M ∞ is the direct limit of the ascending sequence of M n . . A state Z ∈ S(M ∞ ) over M ∞ can be seen as a sequence of density matrices {ρ n } that are coherent under partial traces, with Tr M n+1 ρ n+1 = ρ n . We use Z n to denote the restriction of state Z to the algebra M n . There is a unique tracial state τ ∈ S(M ∞ ), where τ n = τ n . A projection p ∈ M ∞ is a self adjoint positive element such that p = p 2 . A special projection p ∈ M n is a projection represented by an elementary matrix. NS Randomness An NS Σ 0 1 set is a computable sequence of special projections {p i } in M ∞ with p i ≤ p i+1 over all i. For state ρ and NS Σ 0 1 set G, ρ(G) = sup i ρ(p i ). We define NS tests. But initially, we will provide the definition for the classical Martin Löf random sequence, to provide a point of reference. A classical Martin Löf test, is a sequence {U n } of uniformly Σ 0 1 sets of infinite sequences U n ⊆ {0, 1} ∞ such that µ(U n ) ≤ 2 -n . An infinite sequence α ∈ {0, 1} ∞ is Martin-Löf random if there is no Martin Löf test {U n } such that α ∈ n U n . There is a universal Martin Löf test {V n } such that if α ∈ n V n , then α is random. Mirroring the classical case, a NS test is an effective sequence of NS Σ 0 1 sets G r such that τ (G r ) ≤ 2 -r . Unlike a classical test, which can either pass or fail a sequence, a NS test can pass a quantum state up to a particular order. For δ ∈ (0, 1), state Z ∈ S(M ∞ ) fails test G r at order δ if Z(G r ) > δ for all r. Otherwise Z passes the test at order δ. We says Z passes a NS test if it passes it at all orders δ ∈ (0, 1). A state is NS random if it passes every NS test at every order. Theorem 35 ([NS19] ) There exists a universal NS test R n , where for each NS test G k and each state Z and for each n there exists a k such that Z(R n ) ≥ Z(G k ). Proof. Let G k n ∞ n=1 be an enumeration of NS tests, performed analgously to the classical case (see [G 01]). Furthermore let G e m = p e m,r r∈N . For each k, n ∈ N, let q n k = e+n+1≤k p e e+n+1,k . Thus q n k ≤ q n k+1 and τ q n k ≤ e τ (p e e+n+1,k ) ≤ 2 -n . The universal NS test is R n = q n k k∈N . Since τ (R r ) ≤ 2 -n , R n is a NS test. For a set e, ρ(R n ) = sup k ρ(q n k ) ≥ sup k ρ(p e n+e+1,k ) = ρ(G e n+e+1 ). A state Z is NS random if it passes the test R n . . More information about R r can be found in [START_REF] Nies | Martin-Löf Random Quantum States[END_REF]. Closure Properties The set of NS random sequences has closure properties over (possibly noncomputable) convex combinations, as shown in the following theorem. Theorem 36 Every convex combination Z = i αZ i of NS random states Z i , with i α i = 1 and α i ≥ 0, is NS random. Proof. 
Given an NS test G r = p r t , there exists a NS test H r such that for all states Z, inf r Z(H r ) ≥ inf r Z(G r ) and H r ⊇ H r+1 . This is by setting H r equal to i≥r G i . More formally, H r = q r t , where q r t = t i=1 p r+i t . Thus there exists a universal NS test L r such that L r ⊇ L r+1 . Assume that Z is not NS random. Then lim r→∞ Z(L r ) > 0 lim r→∞ i α i Z i (L r ) > 0 i α i lim r i →∞ Z i (L r i ) > 0. So there exists an i such that lim r→∞ Z i (L r ) > 0, and thus Z i is not NS random. Gács Complexity and NS Random Sequences In this section, we characterize NS random states in terms of Gács complexity, Hg. Theorem 37 Given state Z ∈ M ∞ , and program p that enumerates infinite set A ⊆ N, then sup n∈N n -Hg(Z n) < + sup n∈A n -Hg(Z n) + K(p). Proof. There exists a program p of size p +O(1) that outputs a list {a n } ⊆ A such that n < a n . For a given a n , σ = 2 n-an µ n ⊗ I an-n is a lower computable 2 an × 2 an semi-density matrix. There is a program q = q a n , n that lower computes σ where q is helper code that uses the encodings of a n and n. By the universal properties of µ, we have the inequality m(q|a n )σ * < µ an . So, using properties of partial trace, a n + log m(q|a n )TrσZ a n < + a n + log Trµ(Z a n ) a n + log Tr2 n-an (µ n ⊗ I an-n )Z a n -K(q|a n ) < + a n + log Trµ(Z a n ) n + log Tr(µ n ⊗ I an-n )Z a n -K( n, a n |a n ) < + a n + log Trµ(Z a n ) n + log Tr(µ n Tr n Z a n ) -K(p |a n ) < + a n + log Trµ(Z a n ) n -Hg(Z n) < + a n -Hg(Z a n ) + K(p). So sup n∈N n -Hg(Z n) < + sup an∈{an} a n -Hg(Z a n ) + K(p) < + sup n∈A n -Hg(Z n) + K(p). Theorem 38 Suppose for state Z, and for infinite enumerable set A ⊆ N, sup n∈A n -Hg(Z n) < ∞. Then Z is NS random. Proof. Suppose Z is not NS random. Let L r = p r t be the universal NS test. So Rank(p r n ) ≤ 2 n-r . Thus inf r Z(L r ) = δ > 0. For each r, there exists an n such that Tr(p r n z n ) ≥ δ, where z n = Z n. Since 2 r-n p r n is a computable semi-density matrix given n and r, m(r|n)2 r-n p r n * < µ. So m(r|n)2 r-n δ * < Trµz n , which implies that Hg(Z n) < + n -r + K(r|n). Since this property holds for all r ∈ N, sup n n -Hg(Z n) = ∞. From Theorem 37, sup n∈A n -Hg(Z n) = ∞. Encodings of States Let [Z] ∈ {0, 1} ∞ be an encoding of the state Z described as follows. For each n, let e(n, m) be the mth enumeration of a pair (p, k) consisting of a special projection p of M n and a rational 0 ≤ k ≤ 1. For [Z], the ith bit, where i = 2 n m for maximum n, corresponds to 1 if and only if TrpZ n > k, where (p, k) is the pair enumerated by e(n, m). We say that state Z ∈ QH if and only if the halting sequence can be computed from [Z]. Quantum Operation Complexity In a canonical algorithmic information theory example, Alice wants to send a single text message x to Bob. Alice sends a program q to Bob such that x = U (q), where U is a fixed universal Turing machine. The cost of the transmission is the length of q. Alice can minimize cost by sending K(x) bits to Bob, where K is the Kolmogorov complexity function. We now look at the quantum case. Suppose that Alice wants to send a (possibly mixed) n qubit quantum state σ to Bob, represented as an density matrix over C 2 n , or an element of S(M n ). Alice has access to two channels, a quantum channel and a classical channel. Alice can choose to send m ≤ n qubits ρ on the quantum channel and classical bits q ∈ {0, 1} * on the classical channel, describing an elementary quantum operation η, where U (q) = [η]. Bob then applies η to ρ to produce σ = η(ρ). 
Bob is not required to produce σ exactly. Instead the fidelity of the attempt is measured by the trace distance between σ and σ . The trace distance D between two matrices A and B is D(A, B) = 1 2 A -B Tr , with A Tr = Tr|A|. We use O m,n to denote the set of elementary quantum operations that take m qubit quantum states to n ≥ m qubit quantum states. Definition 9 For n qubit density matrix σ, the quantum operation complexity at accuracy is Hoc (σ) = min{K([η]) + m : η ∈ O m,n , ξ ∈ S(M m ), D(σ, η(ξ)) < }. Initial Segment Incompressibility Due to Levin and Schnorr,[START_REF] Levin | Laws of Information Conservation (Non-growth) and Aspects of the Foundations of Probability Theory[END_REF][START_REF] Schnorr | A unified approach to the definition of random sequences[END_REF] α is random iff there is an r such that for all n, K(α ≤n ) ≥ n -r, where α ≤n is a prefix of α of size n, and K is prefix free Kolmogorov complexity. In this section, we prove a quantum analog to this result. We show that NS states that are NS random have incompressible prefixes with respect to quantum operation complexity. Theorem 39 builds upon the proof of the Theorem 4.4 in [NS19] using quantum operation complexity Hoc. Theorem 39 Let Z be a state on M ∞ . 1. Let 1 > > 0, and suppose Z passes each NS test at order 1 -. Then there is an r where for all n, Hoc (Z n ) > n -r. 2. Let 1 > > 0 be lower computable and Z fails some NS test at order 1 -. Then either Z ∈ QH or for all r, there is an n where Hoc √ (Z n ) < n -r. Proof. (1). Let K t (x) be the smallest program to produce x in time t. Let s(n, r, t) be the set of pure n qubit states ρ ∈ S(M n ) such that there exists a quantum operation η ∈ O z,n and pure state σ ∈ S(M z ) such that ρ = η(σ) and K t ([η]) + z ≤ n -r. Let p(n, r, t) be the orthogonal projection in M n with minimum τ (p(n, r, t)) such that ρ(p(n, r, t)) = 1 for all ρ ∈ s(n, r, t). Let p(r, t) = sup n≤t p(n, r, t). So p(r, t) is in M t and p(r, t) is computable from r and t and p(r, t) ≤ p(r, t + 1). Let b(y, n, z) be the number of programs of length y which outputs an encoding of an elementary quantum operation η ∈ O z,n . Let b(y, n) be the number of programs of length y which outputs an encoding of an elementary quantum operation η ∈ O z,n , for any z ≤ n. So Range(p(n, r, t)) ≤ y+z≤n-r b(y, n, z)2 z τ (p(n, r, t)) ≤ y+z≤n-r b(y, n, z)2 z-n ≤ y+z≤n-r b(y, n, z)2 -y-r ≤ n-r y=1 b(y, n)2 -y-r τ (p(r, t)) ≤ ∞ n=1 τ (p(n, r, t)) ≤ ∞ n=1 n-r y=1 b(y, n)2 -y-r = 2 -r ∞ n=1 n-r y=1 b(y, n)2 -y ≤ 2 -r . So for NS Σ 0 1 set G r enumerated by the sequence {p(r, t)} t , G r is a NS test. For each r suppose there is an n such that Hoc (Z n) ≤ n -r. So there is an elementary quantum operation η ∈ O z,n and input ρ ∈ S(M z ) such that K([η]) + z ≤ n -r and D(Z n , η(ρ)) < . So η(ρ) is in the range p(n, r, t) for some t and so Trη(ρ)p(n, r, t) = 1. This implies 1 -< Z(p(n, r, t)) ≤ Z(G r ). Since this is for all r, Z fails the test at order 1 -. (2). Let bb(n) be the longest running time of a halting program of length ≤n. Let L r be the universal NS test, where each L r is enumerated by {p r t }, with p r t ∈ M n(r,t) . Assume there is an infinite number of r where TrZ n(r, bb(r/2))p r bb(r/2) > 1 -. Fix one such r and let n = n(r, bb(r/2)), and p = p r bb(r/2) . Projection p has eigenvectors {u i } and kernel spanned by {v i }. Thus 2 -r ≥ τ (p). Let p ≥ p with p ∈ M n such that each u i is in the range of p and {v i } k i=1 is in the range of p such that k is minimized such that τ (p ) = 2 -r . 
Thus TrZ n(p ) > 1-. The eigenvectors of p are {w i } 2 n-r i=1 and its kernel is spanned by the vectors {y i } 2 n -2 n-r i=1 . Let z = Proj(Z n; p ) be a density matrix with eigenvalues v i ∈ R corresponding to eigenvectors w i . For i ∈ [1, 2 n ], let B(i) ∈ {0, 1} * be an encoding of n bits of the number i, with B(1) = 0 (n) , B(2) = 10 (n-1) , and B(2 n ) = 1 (n) . Let U be a 2 n × 2 n unitary matrix, of the form U = 2 n-r i=1 |B(i) w i | + 2 n -2 n-r i=1 |B(i + 2 n-r ) y i |. Proposition 3 ([NS19]) Let Proj(s; h) = 1 Tr[sh] hsh. Let p be a projection in M n and σ be a density matrix in M n . If α = Trpσ and σ = Proj(σ; p) then D(σ, σ ) ≤ √ 1 -α. Proof. Let |ψ σ be a purification of σ. Then α -1 2 p |ψ σ is a purification of σ . Uhlmann's theorem states F (σ, σ ) ≥ α -1 2 ψ σ | p |ψ σ = α 1 2 , where F is fidelity, with F (σ, σ ) = Tr √ σ σ √ σ . Thus the proposition follows from D(σ, σ ) ≤ 1 -F (σ, σ ). The quasilocal C * algebra A ∞ is defined to be the norm closure of Λ⊂Z A Λ . For states Ψ over A ∞ , we use Ψ n to denote Ψ restricted to the finite subalgebra A {1,...,n} of A ∞ . The right shift T is a * -automorphism on A ∞ uniquely defined by its actions on local observables T : a ∈ A {m,...,n} → A {m+1,...,n+1} . A quantum state Ψ is stationary if for all a ∈ A ∞ , Ψ(a) = Ψ(T (a)). The set of shift-invariant states on A ∞ is convex and compact in the weak* topology. The extremal points of this set are called ergodic states. Lemma 6 Let R j be the smallest subspace spanned by pure states produced by elementary quantum operations η ∈ O z,n with K(η) + z < j . Then Dim(R j ) < 2 j . Proof. Let b(y, z) be the number of programs of length y that outputs an elementary quantum operation η ∈ O x,z over the Hilbert space H 2 n . Let b(y) be the number of programs of length y that outputs an elementary quantum operation O z,n over the Hilbert space H 2 n . Dim(R j ) ≤ y+z<j b(y, z)2 z = 2 j y+z<j b(y, z)2 z-j < 2 j y,z b(y, z)2 -y = 2 j y b(y)2 -y ≤ 2 j . Theorem 40 Let Ψ be an ergodic state with mean entropy h. For all δ > 0, for almost all n, there is an orthogonal projector P n ∈ A n such that for all > 0, 1. Ψ n(P n ) > 1 -δ. 2. For all one dimensional projectors p ≤ P n , Hoc (p)/n ∈ (h -δ, h + δ). Proof. Let δ < δ < δ. From [BDK + 05], there is a sequence of projectors P n ∈ A n where for almost all n, Ψ n(P n ) > 1 -δ , for all one dimensional projectors p ≤ P n , 2 -n(h+δ ) < Ψ n(p ) < 2 -n(h-δ ) , and 2 n(h-δ ) < TrP n < 2 n(h+δ ) . Let S n be the subspace that P n projects onto. Let R n be the smallest subspace spanned by all pure states produced by an elementary quantum operation η ∈ O g,n , where K(η) + g < n(h -δ ). Let Q n be the projector onto R n . By Lemma 6, Dim(R n ) < 2 n(h-δ ) . Let S n be the largest subspace of S n that is orthogonal to R n . Let P n be the orthogonal projector onto S n . So for sufficiently large n, Ψ n(P n ) ≥ Ψ n(P n )-Dim(R n )2 -n(h-δ ) > 1 -δ -2 n(h-δ ) 2 -n(h-δ ) = 1 -δ -2 n(δ -δ ) > 1 -δ, for large enough n. By definition, since P n is orthogonal to R n , for all , for all one dimensional projectors p ≤ P n , Hoc (p) ≥ n(h -δ ) > n(h -δ). Furthermore, all such p can be produced from an elementary quantum operation η that maps n(h + δ ) length pure states into S n . Therefore for large enough n, Hoc (p) ≤ K(η) + n(h + δ ) < + K(n, h) + n(h + δ ) < n(h + δ). Measurement Systems We note that pre-measures are of the form γ : {0, 1} * → R ≥0 , where γ(x) = γ(x0) + γ(x1). 
By the Carathéodory's Extension Theorem, each such pre-measure can be uniquely extended to a measure Γ over {0, 1} ∞ . in Chapter 6, measurements of finite collections of qubits are studied. This section deals with measurement measurement systems, which can be applied to infinite quantum states. Definition 10 (Meaurement System ([Bho21])) An α-computable measurement system B = {(|b n 0 , |b n 1 )} is a sequence of orthonormal bases for Q 1 such that each |b n i is elementary and the sequence |b n 1 , |b n 0 ∞ n=1 is α-computable. Note that the above defiinition can be generalized to a sequence of PVMs. We now define the application of a measurement system B to an infinite quantum state Z which produces a premeasure p. Let ρ n be the density matrix associated with Z n. For the first bit, we use the standard definition of measurement, where p(i) = Tr |b 1 i b 1 i | ρ 1 . Given ρ 2 , if i is measured on the first bit, then the resulting state would be ρ i 2 = (|b 1 i b 1 i | ⊗ I)ρ 2 (|b 1 i b 1 i | ⊗ I) Tr(|b 1 i b 1 i | ⊗ I)ρ 2 So p(ij) = p(i)p(j|i) = Tr |b 1 i b 1 i | ρ 1 Tr I ⊗ |b 2 j b 2 j | |b 1 i b 1 i | ⊗ I ρ 2 |b 1 i b 1 i | ⊗ I |b 1 i b 1 i | ⊗ I ρ 2 Since Tr 2 ρ 2 = ρ 1 , Tr |b 1 i b 1 i | ρ 1 = Tr |b 1 i b 1 i | ⊗ I ρ 2 . Therefore p(ij) = Trρ 2 |b 1 i b 2 j b 1 i b 2 j | . More generally for x ∈ {0, 1} n , we define the pre-measure p to be p(x) = Trρ n |⊗ n i=1 b i x i ⊗ n i=1 b i x i | ; It is straightforward to see that p is a pre-measure, with p(x) = p(x0) + p(x1). Let µ B Z be the measure over {0, 1} ∞ derived from the described pre-measure, using measurement system B and state Z. We recall that MLR is the set of Martin Löf random sequences. S m = m≤i A m i , where A m i ⊆ A m i+1 , and A m i {τ m,i 1 , . . . , τ m,i k m,i } ⊂ {0, 1} i for some 0 ≤ k m-i ≤ 2 i-m . Thus µ(S m ) ≤ 2 -m , where µ is the uniform distribution over {0, 1} ∞ . We define an NS test as follows. For all m and i, with m ≤ i, let τ a = τ m,i a and define the special projection p m i = a≤k m,i |⊗ i q=1 b q τα[q] ⊗ i q=1 b q τα[q] | . We define P m = {p m i } m≤i we have that P m is an NS Test. The special tests p m i is uniformly computable in i and m since B and A m i are uniformly computable in i and m. Since A m i ⊆ A m i+1 , Range(p m i ) ⊆ Range(p m i+1 ) . So P m is an NS Σ 0 1 set for all m. Since k m,i ≤ 2 i-m for all m and i, this implies τ (P m ) ≤ 2 -m for all m. For all m, {0, 1} ∞ \ MLR ⊆ S m . Since by assumption µ B Z ({0, 1} ∞ \ MLR) > δ, for all m there exists i(m) > m such that µ B Z ( A m i(m ) > δ. Fix an m and i = i(m) and let A m i = {τ 1 , . . . , τ k m,i }, where k m,i ≤ 2 i-m . Let p be the pre-measure associated witth µ B Z . So we have δ < a≤k m,i p(τ a ) = a≤k m,i Trρ i |⊗ i q=1 b q τ [q] ⊗ i q=1 b q τ [q] | = Trρ i a≤k m,i |⊗ i q=1 b q τ [q] ⊗ i q=1 b q τ [q] | So we see that for all m there is an i such that δ < Trρ i p m i ≤ Z(P m ). So inf m Z(P m ) > δ, contradicting that Z is NS random. Theorem 42 ([Bho21] ) There are states that are Bhojraj random and not NS Random. NS Solovay States A NS Solovay test is a sequence of NS Σ 1 0 sets G n such that n τ (G n ) < ∞. A state Z fails a quantum NS test G r at order δ ∈ (0, 1) if there is an infinite number of r ∈ R such that inf r∈R Z(G r ) > δ. Otherwise state Z passes the quantum NS test at order δ. A quantum state Z is NS Solovay random if it passes all NS Solovay tests at all orders. The following theorem shows the equivalence of NS randomness and NS Solovay randomness with respect to every order δ. 
Given a special projection p, NS Σ 1 0 set Q = {q n }, and state Z, we define Z(p \ Q) = inf n Z(p \ q n ). In [START_REF] Bhojraj | Algorithmic randomness and kolmogorov complexity for qubits[END_REF], it was proven that NS randomness is equivalent to NS Solovay randomness. Proposition 4 Given a special projection p, NS Σ 1 0 set Q, and state Z, Z(p ) -Z(Q) ≤ Z(p \ Q) ≤ Z(p). The proof is straightforward. Theorem 43 If a state Z fails an NS test at order δ then it fails an NS Solovay test at order δ. Proof. Assume that state Z fails a NS test G r at order δ. Since r τ (G r ) ≤ 1, and each G r is an NS Σ 0 1 set, G r is a NS Solovay test. Furthermore since inf r Z(G r ) ≥ δ, there exists an infinite number of r such that Z(G r ) > δ. Thus Z fails a NS Solovay test at order δ. Theorem 44 For all δ < δ, if a state Z fails an NS Solovay test at order δ then it fails an NS test at order δ . Proof. Assume state Z fails NS Solovay test G r at order δ. Given G r , where G r = p r n n∈N , we construct an NS test H r as follows. There exists an m such that n>m τ (G n ) ≤ 1. Fix r. Enumerate all unordered sets of r + 1 natural numbers {D r n } n∈N , D r n ⊂ N, with infinite repetition. H r = {q r n }, q r n = m<n q r m   t∈D r n p t n   . Each H r can be seen to be an NS Σ 0 1 set. Furthermore τ (H r ) ≤ t>r Z(G t ) ≤ t>r 2 -t = 2 -r . So H r is an NS test. For each r, Z(H r ) > δ . Assume not. Then there exists a k such that Z(H k ) ≤ δ . Since Z fails G r at order δ, there exists an infinite number of r ∈ R and n r ∈ N such that Z(p r nr ) ≥ δ , for some δ < δ < δ. We reorder the NS Solovay test G r such that r ranges over solely R. Let z r = p r n,r . Let D n,k be the set of all unordered subsets of {1, . . . , n} of size k. For k > n let F n,k = ∅. Let F n,k =   A∈D n,k r∈A z r   \ s>k F n,s . So for all n ∈ N, using Proposition 4, n(δ -δ ) ≤ n r=1 (Z(z r ) -Z(H k )) ≤ n r=1 Z(z r \ H k ) ≤ n r=1 Z k s=1 F n,s ∧ z r (7.1) Equation 7.1 is due to the fact that for s > k there is a t where we have Range(F n,s ) ≤ Range(q k s ). Let F n,s,r = F n,s ∧ z r , with for a fixed s ≤ k, n i=1 Z(F n,s,i ) ≤ s. n(δ -δ ) ≤ n r=1 Z k s=1 F n,s,r = k s=1 n r=1 Z (F n,s,r ) ≤ k s=1 s = O(k 2 ). This is a contradiction for large enough n. Corollary 10 A quantum state is NS random if and only if it is NS Solovay random. Chapter 8 The Many Worlds Theory The Many Worlds Theory (MWT) was formulated by Hugh Everett [START_REF] Everett | relative state" formulation of quantum mechanics[END_REF] as a solution to the measurement problem of Quantum Mechanics. Branching (a.k.a splitting of worlds) occurs during any process that magnifies microscopic superpositions to the macro-scale. This occurs in events including human measurements such as the double slit experiments, or natural processes such as radiation resulting in cell mutations. One question is if MWT causes issues with the foundations of computer science. The physical Church Turing Thesis (PCTT) states that any functions computed by a physical system can be simulated by a Turing machine. A straw man argument for showing MWT and PCTT are in conflict is an experiment that measures the spin of an unending number of electrons, with each measurement bifurcating the current branch into two sub-branches. This results in a single branch in which the halting sequence is outputted. However this branch has Born probability converging to 0, and can be seen as a deviant, atypical branch. In fact, conflicts do emerge between MWT and Algorithmic Information Theory. 
In particular, the Independence Postulate (IP) is a finitary Church-Turing thesis, postulating that certain infinite and finite sequences cannot be found in nature, a.k.a. have high "addresses". If a forbidden sequence is found in nature, an information leak will occur. However MWT represents a theory in which such information leaks can occur. This blog entry covers the main arguments of this conflict. Many Worlds Theory Some researchers believe there is an inherent problem in quantum mechanics. On one hand, the dynamics of quantum states is prescribed by unitary evolution. This evolution is deterministic and linear. On the other hand, measurements result in the collapse of the wavefunction. This evolution is non-linear and nondeterministic. This conflict is called the measurement problem of quantum mechanics. The time of the collapse is undefined and the criteria for the kind of collapse are strange. The Born rule assigns probabilities to macroscopic outcomes. The projection postulate assigns new microscopic states to the system measured, depending on the the macroscopic outcome. One could argue that the apparatus itself should be modeled in quantum mechanics. However it's dynamics is deterministic. Probabilities only enter the conventional theory with the measurement postulates. MWT was proposed by Everett as a way to remove the measurement postulate from quantum mechanics. The theory consists of unitary evolutions of quantum states without measurement collapses. For MWT, the collapse of the wave function is the change in dynamical influence of one However, the Born rule and the projection postulate are not assumed by MWT. The dynamics are totally deterministic. Each branch is equally real to the observers in it. To address these issues, Everett first derived a typicality-measure that weights each branch of a state's superposition. Assuming a set of desirable constraints, Everett derived the typicality-measure to be equal to the norm-squared of the coefficients of each branch, i.e. the Born probability of each branch. Everett then drew a distinction between typical branches that have high typicality-measure and exotic atypical branches of decreasing typicality-measure. For the repeated measurements of the spin of an electron |φ = a |φ ↑ + b |φ ↓ , the relative frequencies of up and down spin measurements in a typical branch converge to |a| 2 and |b| 2 , respectively. The notion of typicality can be extended to measurements with many observables. In a more recent resolution to the relation between MWT and probability, Deutsch introduced a decision theoretic interpretation [START_REF] Deutsch | Quantum theory of probability and decisions[END_REF] that obtains the Born rule from the non-probabilistic axioms of quantum theory and non-probabilistic axioms of decision theory. Deutsch proved that rational actors are compelled to adopt the Born rule as the probability measure associated with their available actions. This approach is highly controversial, as some critics say the idea has circular logic. Another attempt uses subjective probability [START_REF] Vaidman | On schizophrenic experiences of the neutron or why we should believe in the many-worlds interpretation of quantum theory[END_REF]. The experimenter puts on a blindfold before he finishes performing the experiment. After he finishes the experiment, he has uncertainty about what world he is in. This uncertainty is the foundation of a probability measure over the measurements. 
However, the actual form of the probability measure needs to be postulated: Probability Postulate. An observer should set his subjective probability of the outcome of a quantum experiment in proportion to the total measure of existence of all worlds with that outcome. Whichever explanation of the Born rule one adopts, the following section shows there is an issue with MWT and IP. There exist branches of substantial Born probability where information leaks occur. Violating the Independence Postulate In [START_REF] Levin | Randomness conservation inequalities; information and independence in mathematical theories[END_REF][START_REF] Levin | Forbidden information[END_REF], the Independence Postulate, IP, was introduced: Let α ∈ {0,1}^{*∞} be a (finite or infinite) sequence defined with an n-bit mathematical statement (e.g., in Peano Arithmetic or Set Theory), and let β ∈ {0,1}^{*∞} be a sequence that can be located in the physical world with a k-bit instruction set (e.g., an ip-address). Then I(α : β) < k + n + c_IP, for some small absolute constant c_IP. The I term is an information measure in Algorithmic Information Theory. For this blog, the information term we use is I(x : y) = K(x) + K(y) - K(x, y), where K is the prefix-free Kolmogorov complexity. We can use this definition of I because we only deal with finite sequences. Let Ω_m be the first m bits of Chaitin's Omega (the halting probability of a universal Turing machine). We have m <^+ K(Ω_m). Furthermore, Ω_m can be described by a mathematical formula of size O(log m). Thus by IP, with Ω_m = α = β, Ω_m can only be found with addresses of size at least m - O(log m). IP can be violated in the following idealized experiment measuring the spins |φ_↑⟩ and |φ_↓⟩ of N isolated electrons. We denote |φ_0⟩ for |φ_↑⟩ and |φ_1⟩ for |φ_↓⟩. The "address" (in the sense of IP) of this experiment is of size O(log N). The measuring apparatus will measure the spin of N electrons, each in the state |φ⟩ = (1/√2)|φ_↑⟩ + (1/√2)|φ_↓⟩. There is a measuring apparatus A with initial state |ψ_A⟩; after reading the N spins of the N electrons, it is in the state |ψ_A[x]⟩, where x ∈ {0,1}^N is the string whose ith bit is 1 iff the ith measurement returns |φ_1⟩. The experiment evolves according to the following unitary transformation: (⊗_{i=1}^{N} |φ⟩) ⊗ |ψ_A⟩ → Σ_{(a_1,...,a_N) ∈ {0,1}^N} 2^{-N/2} (⊗_{i=1}^{N} |φ_{a_i}⟩) ⊗ |ψ_A[a_1 a_2 ... a_N]⟩. If the bits returned are Ω_N, then a memory leak of size N - O(log N) has occurred, because Ω_N has been located by the address of the experiment, which is O(log N). Thus Born-Probability(a memory leak of size N - O(log N) occurred) ≥ 2^{-N}. Conclusion There are multiple variations of MWT when it comes to consistency across universes. In one formulation, all universes conform to the same physical laws. In another model, each universe has its own laws, for example different values of gravity, etc. However, the experiment in the previous section shows that mathematics itself is different between universes, regardless of which model is used. In some universes, IP holds and there is no way to create information leaks. In other universes information leaks occur, and there are tasks where randomized algorithms fail but non-algorithmic physical methods succeed. One such task is finding new axioms of mathematics. This was envisioned as a possibility by Gödel [G 61], but there is a universal consensus on the impossibility of this task. Not any more! In addition, because information leaks are finite events, the Born probability of worlds containing them is not insignificant.
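Before moving on, here is a rough, hedged illustration of the information accounting used above: the following Python sketch approximates the (uncomputable) prefix-free complexity K with a compressor and evaluates I(x : y) = K(x) + K(y) - K(x, y) on toy strings. zlib is only a crude stand-in for K and concatenation a crude stand-in for the pair, so the numbers are qualitative at best; the helper names are my own and do not come from the source.

```python
import zlib

def c(x: bytes) -> int:
    """Crude stand-in for the prefix-free complexity K: length in bits of
    the zlib-compressed string. K itself is uncomputable; this is only a
    qualitative proxy."""
    return 8 * len(zlib.compress(x, 9))

def info(x: bytes, y: bytes) -> int:
    # Mirrors I(x : y) = K(x) + K(y) - K(x, y) from the text, with the
    # compressor standing in for K and concatenation for the pair (x, y).
    return c(x) + c(y) - c(x + y)

shared = b"some structured, repeated text " * 8
a = shared + bytes(40)                    # two strings with a common block
b = bytes(40) + shared
r = bytes((i * 97 + 31) % 256 for i in range(300))   # unrelated-looking bytes

print(info(a, b))   # noticeably larger: the common block is only "paid for" once
print(info(a, r))   # much smaller: little structure is shared
```

Strings sharing structure give a clearly larger value than unrelated strings, mirroring the idea that a short address (small k) should not be able to point at a sequence carrying much information about a mathematically defined α.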
In such worlds, IP cannot be formulated, and the foundations of Algorithmic Information Theory itself become detached from reality. Formulated another way, let us suppose the Born probability is derived from the probability postulate. We have a "blindfolded mathematician" who performs the experiment described above. Before the mathematician takes off her blindfold, she states the Independence Postulate. By the probability postulate, with measure 2^{-N} over all worlds, there is a memory leak of size N - O(log N) and the IP statement made by the mathematician is in error. As a rebuttal, one can, with non-zero probability, just flip a coin N times and get N bits of Chaitin's Omega. Or more generally, how does one account for a probability P over finite or infinite sequences learning information about a forbidden sequence β with good probability? Due to probabilistic conservation laws [Lev74, Lev84], we have Pr_{α∼P}[I(α : β) > I(⟨P⟩ : β) + m] *< 2^{-m}. Thus the probability of a single event creating a leak is very small. However, if many events occur, then the chance of a memory leak grows. But since there are many events, to locate one such leak one will probably need a long address, balancing out the IP inequality. This still leaves open the possibility of a memory leak occurring at an event with a small address. Since there are only a small number of events that have a small address, the probability of a significant memory leak is extremely small. In physics one can postulate away events with extremely small probabilities. For example, the second law of thermodynamics states that entropy is nondecreasing, postulating away the extremely unlikely event that a large system suddenly decreases in thermodynamic entropy. There is no way to postulate away such memory leaks in MWT. Assuming the probability postulate, probability is a measure over the space of possible worlds. Thus when Bob now threatens to measure the spin of N particles, Alice knows that worlds of total measure 2^{-N} will contain N bits of Chaitin's Omega, violating IP. Information non-growth laws say information about a target source cannot be increased by randomized processing. In classical information theory, we have I(g(X) : Y) ≤ I(X : Y), where g is a randomized function, X and Y are random variables, and I is the mutual information function. Thus processing a channel at its output will not increase its capacity. Information conservation carries over into the algorithmic domain, with the inequalities I(f(x) : y) <^+ I(x : y); I(f(a); H) <^+ I(a; H). These inequalities ensure target information cannot be obtained by processing. If, for example, the second inequality were not true, then one could potentially obtain information about the halting sequence H with simple functions. Obtaining information about H violates the Independence Postulate, discussed in Chapter 8. Information non-growth laws can be extended to signals [START_REF] Epstein | On the algorithmic information between probabilities[END_REF], which can be modeled as probabilities over N or over Euclidean space. The "signal strength" of a probability p over N is measured by its self-information, I_Prob(p : p) = log Σ_{i,j} 2^{I(i:j)} p(i) p(j). A signal, when undergoing randomized processing f (see Section 2.1.1), will lose its cohesion. Thus any signal going through a classical channel will become less coherent: I_Prob(f(p) : f(p)) <^+ I_Prob(p : p). In Euclidean space, probabilities that undergo convolutions with probability kernels will lose self information.
For example a signal spike at a random position will spread out when convoluted with the Gaussian function, and lose self information. The above inequalities deal with classical transformations. One can ask, is whether, quantum information processing can add new surprises to how information signals occur and evolve. One can start with the prepare-and-measure channel, also known as a Holevo-form channel. Alice starts with a random variable X that can take values {1, . . . , n} with corresponding probabilities {p 1 , . . . , p n }. Alice prepares a quantum state, corresponding to density matrix ρ X , chosen from {ρ 1 , . . . , ρ n } according to X. Bob performs a measurement on the state ρ X , getting a classical outcome, denoted by Y . Though it uses quantum mechanics, this is a classical channel X → Y . So using the above inequality, cohesion will deteriorate regardless of X s probability, with I Prob (Y : Y ) < + I Prob (X : X). There remains a second option, constructing a signal directly from a mixed state. This involves constructing a mixed state, i.e. density matrix σ, and then performing a measurement E on the state, inducing the probability Eσ(k) = TrσE k . However from [START_REF] Epstein | On the algorithmic information between probabilities[END_REF], for elementary (even enumerable) probabilities Eσ, I Prob (Eσ : Eσ) < + K(σ, E). Thus for simply defined density matrices and measurements, no signal will appear. So experiments that are simple will result in simple measurements, or white noise. However it could be that a larger number of uncomputable pure or mixed states produce coherent signals. Theorems 32 and 33 say otherwise, in that the POVM measurement E of a vast majority of pure and mixed states will have negligible self-information. Thus for uniform distributions Λ and µ over pure and mixed states (see Section 5.2.2), 2 I Prob (E|ψ :E|ψ ) dΛ = O(1); 2 I Prob (Eσ:Eσ) dµ(σ) = O(1). This can be seen as a consequence of the vastness of Hilbert spaces as opposed to the limited discriminatory power of quantum measurements. In addition, there could be non-uniform distributions of pure or mixed states that could be of research interest. In quantum decoherence, a quantum state becomes entangled with the environment, losing decoherence. The off diagonal elements of the mixed state become dampened, as the state becomes more like a classical mixture of states. Let p σ be the idealized classical probability that σ decoheres to, with p σ (i) = σ ii . Corollary 8 states that for an overwhelming majority of pure or mixed states σ, p σ is noise, that is, has negligible self-information. 2 I Prob( p |ψ :p |ψ ) dΛ = O(1); 2 I Prob (pσ:pσ) dµ(σ) = O(1). This is to be expected, with one supporting fact being for an n qubit space, i ∈ {1, . . . , 2 n }, E Λ [p |ψ (i)] = 2 -n . With Algorithmic Information Theory, we've taken this fact one step further, showing that p |ψ has no (in the exponential) self-algorithmic information and cannot be processed by deterministic or randomized means to produce a more coherent signal. In addition, it appears a more direct proof of the first decoherence inequality could be possible. However the measurement process has a surprising consquence, in that the wave function collapse causes an massive uptake in algorithmic signal strength. Let F be a PVM, of size 2 n-c , of an n qubit space and let Λ F be the distribution of pure states when F is measured over the uniform distribution Λ. Thus Λ F represents the F -collapsed states from Λ. 
Theorem 34 states n -2c < log log 2 I Prob (F |ψ :F |ψ ) dΛ F . Apriori Distributions To avoid the pitfall of a signalless distribution that only produces white noise, we can conjecture a new apriori distribution for quantum states that is not signalless. Note that we are dealing with measures over the density operator space and not directly with density operators because we are measuring properties, such as self-information, over all possible (pure or mixed) states. Properties of this apriori distribution can be discerned by working backwards. Indeed, suppose there are a set of (possibly infinite) systems {|ψ i }, where for each system |ψ i , a measurement occurs, producing a discernable signal. By Theorem 31, this implies the states |ψ i have high I(|ψ i : |ψ i ), where I is the information function between mixed states introduced in Definition 6. Thus any universal quantum apriori distribution over these systems must be weighted toward states with high self information. One candidate is an probability measure ξ over pure states where ξ(|ψ ) ∝ 2 I(|ψ :|ψ ) . However this area of research is still ongoing. Another clue to this universal quantum apriori distribution is the measurement operation, which as shown above, causes an uptake in signal strength. Take a PVM measurement F , which procures a value i from a state |ψ , projecting to a new state |ψ . P |ψ (i) = 1. By Corollary 7, this new state |ψ has self information K(i) < log I(|ψ : |ψ ). The error term is on the order of K(P ). Most of the measurement values i of P will be random, i.e. have large K(i) (just look at the Kolmorogov complexity of the first 2 n numbers!). Thus simple quantum measurements increase the self information of most measured quantum states (see Figure 9.1). So this fact, and Theorem 26, leads us to the following conclusion. Take a distribution over density operators, such as Λ, where an overwhelming majority of states have negligible self-information. When each such state in its support is measured with a simple apparatus, the result is new a distribution where most of the states have substantial self-information. However, the situation is reversed for quantum channels. A quantum state that is transformed by a quantum operation will not increase in self-information. So by Theorem 28, we get the following claim, where equality occurs if the quantum operation is a unitary transform. Given any distribution over density operators, if all the density matrices its support are transformed by a simple quantum operation, then the resultant distribution will give more measure to mixed states with less self-information. Thus simple measurements with many operators can only increase self-information, simple quantum operations can only decrease self-information, and simple unitary transforms leave the selfinformation unaltered. If the operation is complex, then nothing so far has been proven. Measurements Before Information Cloning The no-cloning theorem states that every unitary transform cannot clone an arbitrary quantum state. However there is the possibility of copying information from a subset of states. By "copying information", we mean that two measurements of two states will produce two values that are similar. More formally, the information cloned from a state |ψ relative to unitary transform U , and POVMs E and F is, The question is, given an initial distribution over density operators with low expected I Clone , what sort of transform is required to increase this expectation. 
In this section, we discuss necessary conditions of this transform. We require the following two assumptions. Assumption (1): The initial distribution has low expected self information. Theorem 26 shows there is a large set of natural distributions that have this property. Any distribution Ω that is less than 2 c Λ will have log 2 I(|ψ :|ψ ) dΩ < + c. Another way to intrepret this assumption is through parmeterized distributions. Let P be a probability over parameters θ over pure state distribution, Γ(|ψ |θ). Assumption (2): The universal Turing machine is relativized to all the transforms and operators. This assumption states that for a system, the operations are known quantities. This is congruent with quantum information theory, in which actors are seen to compute unitary transforms or quantum operations. It is asumed that these actors have knowledge of the transforms. How do you create a distribution with high expected I Clone , where most states can have cloneable information? Any transform that increases cloneable information must increase self-information. However Theorem 28, along with assumption (2) bars quantum operations as a means to create selfinformation, as the complexity of the quantum operations is O(1). Thus the only way to potentially increase self-information is to perform a measurement, which as Theorems 31 and 34 show, often times cause an uptake in self-information (see Figure 9.2). This is also discussed in the quotes of Section 9.2. Thus we get the following claim. Measurements are required to produce distributions over quantum states that have cloneable information. For example, take the starting distribution to be the uniform measure over pure states, Λ. Let E = F = {|i i|} be POVM measurements over projectors to the basis states and let U be any unitary transform such that U |i |0 = |i |i for i ∈ {1, . . . , 2 n }. By Theorems 26 and 29, we have that 2 I Clone (|ψ ) dΛ = O(1). Now suppose we apply the measurement G = E to Λ, producing a new distribution Λ G concentrated evenly among the basis states, where Λ G (|i ) = 2 -n . Thus we have that I Clone (|i ) = I Prob (E |i : F |i ) = K(i). Since there are 2 n-O(1) basis states |i where n < + K(i), we have the following uptake in cloneable information. n < + log 2 I Clone (|ψ ) dΛ G . Other such applications can be seen as generalizations from this extreme example. Future work involves determining how tightly self information covers cloneable information. (4) This follow as a special case of Theorem 31. (5) Let s(i, j) = m(i|N )m(j|N )2 I(|i :|j ) . The function s is lower semicomputable relative to H because m and T µ⊗µ are lower computable relative to H. Furthermore we have that where λ i are the eigenvalues of ρ. Due to concavity -2S(ρ) ≤ 2 log i λ 2 i . So I(ρ : ρ) > + 2Hg(ρ) -K(ρ, Hg(ρ)) -2S(ρ). The inequality follows from Hg(|i ) = + K(i) and S(|i i|) = 0. For I(|i : |i < + K(i) + I(i : H|n), we note that it is a special case of (5). (8) This follows from (5) and (6). Corollary 11 For elementary ρ, 2Hg(ρ) -K(ρ, Hg(ρ)) -2S(ρ) < + I(ρ : ρ). c 2 I( a,c :b) ψ a (c) * < 2 I(a:b) /m(ψ). need to show m(a, b)/(m(a)m(b)) * > c (m(a, b, c)/(m(b)m(a, c)))m(ψ)ψ a (c), or c (m(a, b, c)/m(a, c))m(c|a) * < m(a, b)/m(a), since m(c|a) * > m(ψ)ψ a (c). Rewrite it c m(c|a)m(a, b, c)/m(a, c) * < m(a, b)/m(a) or c m(c|a)m(a)m(a, b, c)/m(a, c) * < m(a, b). The latter is obvious since m(c|a)m(a) * < m(a, c) and c m(a, b, c) * < m(a, b). The tensor product of two vectors is denoted by |φ ⊗ |ψ = |φ |ψ = |φψ . 
The inner product of |ψ and φ| is denoted by ψ|φ . The symbol Tr denotes the trace operation. The conjugate transpose of a matrix M is denoted by M * . For Hermitian matrix with eigenvalue decomposition A = a i |ψ i ψ i |, |A| = |a i | |ψ i ψ i |. The tensor product of two matrices is denoted by A ⊗ B. Projection matrices are Hermitian matrices with eigenvalues in {0, 1}. For tensor product space G X ⊗ G Y , the partial trace is denoted by Tr Y . For B X = Tr Y B, Tr(A•B X ) = Tr((A⊗I)•B), which is used frequently throughout the paper. For positive positive semidefinite matrices, σ and ρ we say σ ≤ ρ if ρ -σ is positive semidefinite. For positive semidefinite matrices A, B, C, if A ≤ B then TrAC ≤ TrBC. Mixed states are represented by density matrices, which are, self adjoint, positive semidefinite, operators of trace 1. A semi-density matrix has non-negative trace less than or equal to 1. The von Neumann entropy of a density matrix σ with orthogonal decomposition |ψ ∈H N Hg(|ψ ⊗m ). We have for all |ψ ∈ H N Trµ |ψ m ψ| m * > 2 -c . (3.1) Let Λ be the uniform distribution on the unit sphere of H N . And let ρ = |ψ m ψ| m dΛ. Trρ = Tr |ψ m ψ| m dΛ = dΛ = 1. Furthermore for |φ m , |ν m ∈ Sym(H m N ), with unitary transform U such that U m |ψ m = |ρ m , we have ν| n ρ |ν n = φ| m (U * m |ψ m ψ| m U m ) |φ m dΛ = φ m |ψ m ψ m |φ m m dΛ = φ| n ρ |φ n . For any pure state |ψ ∈ H m N , such that ψ| P S |ψ = 0, then ψ| ρ |ψ = 0. Thus ρ = P S /M . Integrating Equation 3.1, by dΛ results in 2 -c * < Trµρ * = TrµP S /M * 2 n ], |i is the ith basis state of the n qubit space. Let |ψ = 2 n i=1 2 -n/2 |i |i . The pure state |ψ is elementary, with K(|ψ |2 2n ) = + 0. We define the the 3n qubit mixed state ρ 123 = .5 |ψ ψ| ⊗ |1 1| + .5 |1 1| ⊗ |ψ ψ|. ρ 12 = .5 |ψ ψ| + .5 |1 1| ⊗ 2 -n I. ρ 23 = .5 * 2 -n I ⊗ |1 1| + .5 |ψ ψ|. ρ 2 = 2 -n I. Hg(ρ 12 ) = + -log Trµ 2n ρ 12 < + -log Trµ 2n |ψ ψ| < + -log m(|ψ |2 2n )| ψ|ψ | 2 < + 0. Similarly, Hg(ρ 23 ) = + 0. Hg(ρ 2 ) = + n. 2 n i=1 2 2 -n/2 |i |i |i , with K(|φ |2 3n ) = 0. Let σ 123 = |φ φ|. σ 12 = σ 23 = 2 n i=1 2 -n |i i| ⊗ |i i|. Hg(σ 123 ) = + -log Trσ 123 µ 3n < + -log Tr m(|φ |2 3n )| φ|φ | 2 < + 0. Let D be a unitary transform where D |i |i = |i |1 and K(D|2 2n ) = + 0. So Hg(σ 12 ) = + Hg(Dσ 12 D * ) = + Hg(2 -n I ⊗ |1 1|) = + n -log Tr(I ⊗ |1 1|)µ 2n . By Theorem 5 and properties of partial trace, Hg(2 -n I ⊗ |1 1|) = + n -log Tr |1 1| µ n = + n. So Hg(σ 12 ) = Hg(σ 23 ) = + n. giving high scores to pure states |ψ i which are atypical of the source. In general the d(|φ |σ) score for arbitrary |φ will be greater than a combination of d(•|P ) scores, with d(|φ |σ) > + log 2 d( |ψ i |P ) | φ|ψ i | 2 . In fact d is equivalent to the classical definition of randomness deficiency when σ is purely classical, i.e. only diagonal. Theorem 19 For diagonal σ = i p(i) |i i|, d(|i |σ) = + d(i|p). Definition 5 ( 5 Quantum Randomness Deficiency (Uncomputable States)) The randomness deficiency of ρ with respect to σ is d(ρ|σ) = log TrT σ ρ. If σ is computable, then Definition 5 equals Definition 4. By definition, T σ is universal, since for every lower computable σ-test ν, m(ν)ν < T σ . Theorem 21 For n qbit density matrices σ, ρ, ν, and ξ, d(σ|ρ) + d(ν|ξ) < + d(σ ⊗ ν|ρ ⊗ ξ). < Proof. Assume not. Then for any positive constant c, there exists semi-density matrices ρ and σ, such that cm(ρ|N )m(σ|N )2I(ρ:σ) = cTrm(ρ|N )m(σ|N )C µ⊗µ (ρ ⊗ σ) > 1.By the definition of µ, m(ρ|N )ρ * < µ and m(σ|N )σ * < µ. 
Therefore by the definition of the Kronecker product, there is some positive constant d such that for all ρ and σ, dm(ρ|N )m(σ|N )(ρ⊗ σ) < (µ ⊗ µ), and similarlydTrm(ρ|N )m(σ|N )C µ⊗µ (ρ ⊗ σ) < TrC µ⊗µ (µ ⊗ µ).By the definition of C, it must be that TrC µ⊗µ µ ⊗ µ ≤ 1. However for c = d, there exists a ρ and a σ, such that TrC µ⊗µ µ ⊗ µ > dTrm(ρ|N )m(σ|N )C µ⊗µ (ρ ⊗ σ) > 1, causing a contradiction. Theorem 26 ([Eps19b]) Let Λ be the uniform distribution on the unit sphere of H N . 1. Hg(I/N ) = + log N , 2. I(I/N : I/N ) < + 0, 3. 2 -Hg(|ψ ) dΛ * = N -1 , 4. 2 I(|ψ : |ψ ) dΛ < + 0. Proof. (1) follows from Hg(I/N ) = + -log TrµI/N = + log N -log Trµ = + log N . (2) is due to Theorem 25, with I(I/N : I/N ) < + 2K(I/N |N ) < + 0. (3) uses the fact that ρ = |ψ ψ| dΛ = I/N , because Trρ = 1, and ψ| ρ |ψ = φ| ρ |φ . Thus 2 -Hg(|ψ ) dΛ * = Trµ |ψ ψ| dΛ * = Trµ |ψ ψ| dΛ * = N -1 . (4) uses the proof of Theorem 8, which states |ψ ψ| ⊗ |ψ ψ| dΛ = |ψψ ψψ| dΛ = N +1 2 -1 P , where P is the projection onto the space of pure states |ψψ . So 2 I(|ψ : |ψ ) dΛ = TrC µ⊗µ |ψ ψ| ⊗ |ψ ψ| dΛ = TrC µ⊗µ |ψ ψ| ⊗ |ψ ψ| dΛ TrC µ⊗µ N -2 I * = 2 I(I/N :I/N ) Theorem 29 ([Eps19b]) Let C |ψ |0 n = |φ |ϕ , where C is an elementary unitary transform. Relativized to C, I(|φ : |ϕ ) < + I(|ψ : |ψ ). be an aggregation of upper computable C ⊗ D tests of the form A ⊗ B, weighted by their upper probability. Definition 8 The upper information between semi-density matrices A and B is I(A : B) = log TrG µ⊗µ (A ⊗ B). Proposition 2 I(I/N : I/N ) = O(1). Proof. 1 ≥ TrG µ⊗µ (µ ⊗ µ) * > TrG µ⊗µ (I/N ⊗ I/N ) * > 2 I(I/N :I/N ) . to probabilistic conservation laws [Lev74, Lev84], we have Pr α∼P [I(α : β) > I( P : β) + m] * < 2 -m . Signals from Classical and Quantum Sources Figure 9 9 Figure 9.1: Each box on the top row represents an n qubit Hilbert space, with the shaded rectangles being the subspaces of the PVM projectors. Thus there are three PVMs. The self-information majorizes these subspaces, inversely weighted by the PVM's complexity. I Clone (|ψ ) = I Prob (E |φ 1 :F |φ 2 ), where U |ψ |0 = |φ 1 |φ 2 .This represents the shared signal strength between |1 and |2 when the states 2 were created from a unitary transform U of |ψ tensored with an ancillia state |0 . Note that by Theorems 29 and 31, cloneable information is less than self information, with I Clone (|ψ ) < log I(|ψ : |ψ ). The distribution is balanced, where Γ(|ψ |θ)dP (θ) = Λ(|ψ ). Then because of Theorem 26, P ({θ : E |ψ ∼Γ(•|θ) [I(|ψ : |ψ )] ≥ m}) ≤ 2 -m+1 . Figure 9 . 2 : 92 Figure 9.2: The intial distribution has low self information and cloneable information. A measurement increases the self information and potentially increases the cloneable information. From each T , let T = i,j ( k m(k|j, N ) i| k| T |i |k ) |i |j i| j|. Since m and T are lower computable, then so is T . In addition, O(1)T ∈ T µ⊗µ , becauseTrT µ ⊗ µ = i,j i| j| µ ⊗ µ |i |j k m(k|j, N )( i| k| T |i |k ) * = i,j m(i|N )m(j|N ) k m(k|j, N )( i| k| T |i |k ) * < i,j m(i|N ) k m(j, k|N )( i| k| T |i |k ) * = i,j m(i|N ) k m(k|N )m(j|k, K(k), N )( i| k| T |i |k ) * = i,k m(i|N )m(k|N ) j m(j|k, K(k), N ) i| k| T |i |k * < i,k m(i|N )m(k|N )( i| k| T |i |k ) * < TrT µ ⊗ µ < O(1). So I(|i : |j ) > log TrT m(O(1)T |N 2 )O(1)T |i |j i| j| > + log Tr T m(T |N 2 )T |i |j i| j| > + log T m(T |N 2 ) k m(k|j, N ) k| i| T |k |i > + log T m(T |N 2 )m(k|j, N ) k| i| T |k |i = + I(|k : |i ) -K(k|j). )m(j|N )TrT µ⊗µ |i i| ⊗ |j j| =TrT µ⊗µ i,j m(i|N )) |i i| ⊗ m(j|N ) |j j| <O(1)TrT µ⊗µ µ ⊗ µ < O(1). 
Therefore s(i, j) * < m(i, j|N, H) and so I(|i : |j ) < + log m(i, j|N, H)|(m(i|N )m(j|N )) = + I(i : j|N ) + I(i, j : H|N ). (6) For K(i|N ) < + I(|i : |i ), we prove the stronger statement: for elementary ρ, 2Hg(ρ) -K(ρ, Hg(ρ)) -2S(ρ) < + I(ρ : ρ). Let ν = 2 2Hg(ρ)-2 (ρ ⊗ ρ). The matrix ν ∈ T µ⊗µ because 57 Tr(µ ⊗ µ)ν ≤ 1. Therefore I(ρ : ρ) = log TrT µ⊗µ (ρ ⊗ ρ) ≥ log m(ν)Trν(ρ ⊗ ρ) > + log m(ν)2 2Hg(ρ) Tr(ρρ ⊗ ρρ) > + 2Hg(ρ) -K(ρ, Hg(ρ)) + 2 log i λ 2 i , ( 7 ) 7 If T ∈ T µ n ⊗µ n , then TrT * < 2 2n . This is because 1 ≥ TrT (µ n ⊗ µ n ) * > TrT (2 -n I ⊗ 2 -n I).Since the set of lower computable matrices of trace not more than 2 2n is enumerable, 2-2n T µ n ⊗µ n * < µ 2n . Assume I(|i : |i ) = log TrT µ n ⊗µ n |ii ii| = 2n -c. Then -log Tru 2n |ii ii| < + c. This means that K(ii|2 2n ) < + c. So K(i|2 n ) < + c. So 1 ≥ TrT µ n ⊗µ n µ⊗µ * > m(i|2 n ) 2 TrT µ n ⊗µ n |ii ii| * > 2 2n-3c. Thus c > + 2n/3. This implies that I(|i : |i ) = 2n -c < + 4n/3. one cannot clone a quantum state. The following theorem generalizes this no-go result, by showing there exist tensor products |ψ m that has signifigantly more Hg measure than |ψ . The following theorem presents a new proof to this result. ) is spanned by M basis vectors, where M is the number of multisets of size m from the set {1, . . . , N }. This is because for each such multiset S = {i 1 , . . . , i m }, one can construct a basis vector |ψ S that is the normalized superposition of all basis vectors of Sym(H m N ) that are permutations of S. If S = S, then |ψ S is orthogonal to |ψ S . Thus the dimension of Sym(H m Theorem 8 ([G 01]) log m+N -1 m < + max |ψ Hg(|ψ ⊗m ) < + K(m) + log m+N -1 m Proof. Let H N be an N dimensional Hilbert space and let H m N be an m-fold tensor space of H N . Let Sym(H m N ) be the subspace of H m N consisting of all pure states of the form |ψ ⊗m . The subspace Sym(H m N N ) M , is m+N -1 because choosing a multiset is the same as splitting an interval of size m into N m intervals. For the upper bounds, let P S be the projector onto Sym(H m N ). If |ψ ∈ Sym(H m N ), then ψ| P S |ψ = 1 so computes a projection P of rank if it outputs a series of rank projections {P i } ∞ i=1 such that P -P i ≤ 2 -i . For computable projection operator P , I(P ; H) = min{K(p) -K(p|H) : p is a program that computes P }. Relativized to an n qubit mixed state σ, for computable 2 m rank projector P , 3m -2n < log max |φ ∈Image(P ) d(|φ |σ) + I( P ; H).Proof. Let p be a program that computes P . There is a simply defined algorithm A, that when given p and σ, outputs P n such that max |ψ ∈Image(P ) d(|ψ |σ) = + max |ψ ∈Image(Pn) d(|ψ |σ). Thus by Lemma 1, one gets that I(P n ; H) < + I(P ; H). The corollary follows from Theorem 23. Corollary 2 ([Eps23c]) Theorem 23 is in terms of elementary described projecctions and can be generalized to arbitrarily computable projections. For a matrix M , let M = max i,j |M i,j | be the max norm. A program p ∈ {0, 1} * Proof. Let state Z be NS random. Let {ρ n } be the density matrices associated with Z. Suppse not. Then there is δ ∈ (0, 1) and computable measurement systemB = {|b n 0 , |b n 1 } ∞ n=1 where µ B Z ({0, 1} ∞ \ MLR) > δ.Let {S m } be a universal ML test. Without loss of generality, this test is of the form Definition 11 (Bhojraj Random) A state Z is Bhojraj Random if for any computable measure- ment system B, µ B Z (MLR) = 1. 
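As an illustration of the measure µ_B^Z appearing in Definition 11, here is a minimal numpy sketch restricted to the special case of a product state measured qubit by qubit; the general definition involves sequential measurements with post-measurement states, which this toy code does not attempt. All function and variable names are my own, hypothetical choices.

```python
import numpy as np

def outcome_probs(rho, basis):
    """P(bit = k) when the single-qubit density matrix rho is measured in
    the orthonormal basis `basis` (a list of two column vectors)."""
    return [((b.conj().T @ rho @ b).item()).real for b in basis]

def cylinder_measure(rhos, bases, bits):
    """Induced measure of the cylinder set given by `bits`, assuming the
    state is a product of the single-qubit states `rhos`; the general
    (entangled) case needs sequential post-measurement states and is not
    handled by this sketch."""
    p = 1.0
    for rho, basis, k in zip(rhos, bases, bits):
        p *= outcome_probs(rho, basis)[k]
    return p

plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho_plus = plus @ plus.conj().T                      # |+><+|
z_basis = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]

# Measuring copies of |+> in the computational basis induces the uniform
# measure: every length-4 cylinder gets measure 2**-4 = 0.0625.
print(cylinder_measure([rho_plus] * 4, [z_basis] * 4, [0, 1, 1, 0]))
```

For that simple product state the induced measure is the uniform (Lebesgue) measure on {0,1}^∞, under which the Martin-Löf random sequences have measure one, consistent with the requirement µ_B^Z(MLR) = 1 in the definition.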
Theorem 41 ([Bho21]) All NS Random states are Bhojraj Random states, In[START_REF] Epstein | On the algorithmic information between probabilities[END_REF] probabilities over {0, 1} ∞ and T0 second countable topologies were also studied. Note this definition can be generalized to arbitrary states, with I Prob (ETr2σ : F Tr1σ), where σ = ε(|ψ ), for quantum operation ε. Thus for the diagonal 2 n-r × 2 n-r matrix σ with entries {v i } 2 n-r i=1 , z = U (σ ⊗ |0 r 0 r |)U * . By Proposition 3, since 1 -< Tr(p Z n) and z = Proj(z n ; p ), it must be that D(z , Z n) < √ . Thus using quantum operation η = (U, |0 r 0 r |, ∅) and input σ, < + n -r + K(n, r) < + n -r + K(bb(r/2), r) < + n -r + r/2 + K(r) Thus for every r there exists an n where Hoc (Z n) < n -r. This is because the additive constant of the above equation is not dependent on r. Otherwise there is some R where for all r ≥ R, and q < bb(r/2), TrZ n(r,q) p r n(r,q) ≤ 1 -. Thus given R, L r , [Z], and a lower enumeration of , one can iterate through each r ≥ R and return an s such that TrZ n(r,s) p r n(r,s) > 1 -. This is because the set of rational numbers Q such that q > 1 -for all q ∈ Q can be enumerated and the set V = {v : TrZ n(r,v) p r n(r,v) > q, q ∈ Q} can be enumerated using the infinite encoding [Z] ∈ {0, 1} ∞ . The returned s is the first enumerated element of V . This number s has the property that s ≥ bb(r/2), and can be used to compute the prefix of the halting sequence over all programs of length ≤ r/2 as every such program that will halt will do so in less than s steps. Thus the halting sequence is computable relative to [Z] and thus Z ∈ QH. Corollary 9 Let state Z ∈ QH. Then Z is NS random iff for all 0 < < 1, there is an r, where for all n, Hoc (Z n) > n -r. Proof. Assume Z is NS random. Then for all 0 < < 1, Z passes each NS test at order 1 -. Then by Theorem 39 (1), for all 0 < < 1 there is an r where for all n, Hoc (Z n) > n-r. Assume Z is not NS random. Then there is some rational 0 < δ < 1 such that Z fails some NS test at order 1 -δ. Then by Theorem 39 (2), for = √ δ, for all r, there is an n where Hoc (Z n) < n -r. Quantum Ergodic Sources In [START_REF] Brudno | The Complexity of the trajectories of a Dynamical System[END_REF], Brudno proved that for ergodic measures η over bi-infinite sequences, for η-almost all sequences, the rate of the Kolmogorov complexity of their finite prefixes approaches the entropy rate of η. Therefore the average compression rate of sequences produced by η is not more than its entropy rate. In [BKM + 06], a quantum version of Brudno's theorem was introduced relating, in a similar fashion, Von Neumann entropy and BVL complexity (using the fidelity measure). The results provide two bounds with respect to two variants of Hbvl: approximate-scheme complexity and finite accuracy complexity. In this subsection we provide a quantum variant of Brudno's theorem with respect to quantum communication complexity R . Differently from the Hbvl results, the bounds provided below are for almost all n, invariant to the accuracy term . We define the quasilocal C * algebra A ∞ , which differs only from M ∞ in that it is a doubly infinite product space over Z. In particular, A is the C * algebra of qbits, i.e. 2 × 2 matrices acting on C 2 . For finite Λ ⊂ Z, A Λ = z∈Λ A z . part of the wavefunction over another, the decoherence of one part from the other. The result is a branching structure of the wavefunction and a collapse only in the phenomenological sense. 
Branching Worlds An example of the branching of universes can be seen in an idealized experiment with a single electron with spin |φ_↑⟩ and |φ_↓⟩. This description can be found in [START_REF] Saunders | Many Worlds?: Everett, Quantum Theory, & Reality[END_REF]. There is a measuring apparatus A, which is in an initial state |ψ_A ready⟩. After A reads spin-up or spin-down, it is in the state |ψ_A reads spin ↑⟩ or |ψ_A reads spin ↓⟩, respectively. The evolution when the electron is solely spin-up or spin-down is |φ_↑⟩ ⊗ |ψ_A ready⟩ → |φ_↑⟩ ⊗ |ψ_A reads spin ↑⟩ and |φ_↓⟩ ⊗ |ψ_A ready⟩ → |φ_↓⟩ ⊗ |ψ_A reads spin ↓⟩. Furthermore, one can model the entire quantum state of an observer O of the apparatus, with |φ_↑⟩ ⊗ |ψ_A reads spin ↑⟩ ⊗ |ψ_O sees A reads spin ↑⟩ and |φ_↓⟩ ⊗ |ψ_A reads spin ↓⟩ ⊗ |ψ_O sees A reads spin ↓⟩. For the general case, the electron is in a state |φ⟩ = a|φ_↑⟩ + b|φ_↓⟩, where |a|² + |b|² = 1. In this case, the final superposition would be of the form a|φ_↑⟩ ⊗ |ψ_A reads spin ↑⟩ ⊗ |ψ_O sees A reads spin ↑⟩ + b|φ_↓⟩ ⊗ |ψ_A reads spin ↓⟩ ⊗ |ψ_O sees A reads spin ↓⟩. This is a superposition of two branches, each of which describes a perfectly reasonable physical story. This bifurcation is one way in which the quantum state of the universe splits into two branches. Deriving the Born Rule In my opinion, one of the main problems of MWT is its reconciliation of the Born rule, for which no proposed solution has universal consensus. Proof. (
04095422
en
[ "shs.anthro-se" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04095422/file/Claveyrolas__Confluence.pdf
V. Goossaert, P. van der Veer (eds) Mathieu Claveyrolas email: [email protected] The Regulation of Plurality in Mauritian /p. 170/ After his return from a fieldwork trip in the 1950s, the British anthropologist Burton Benedict titled his invaluable ethnographic report on Mauritius, Indians in a Plural Society (1961). Indeed, plurality in Mauritius is both a cliché and an indisputable historical reality. Local representations, both old and contemporary, support national-census statistics estimating that a quarter of the Mauritian population is descended from African slaves (and is Catholic), while two thirds are descended from Indian indentured labourers (and are mostly Hindu, but also Muslim). On the small Mauritian island (1,800 km²), people speak Creole, but also Bhojpuri, French, or English, while Hindi, Urdu, Tamil, and Mandarin are recognised (and taught) as "ancestral languages". The plurality of the young Mauritian nation is not only inscribed in its history, it is also a founding and structuring feature. Mauritius, and the Indian Ocean with it, has always been at the heart of globalisation. The island was at first valued as a stopping place for sailors, both Arab (from the 10th century) and Portuguese (in the 16th century), on their way to the Indies. Having no indigenous population, it was subsequently populated thanks to more or less forced migrations driven by colonialism: the French (1715-1810) settled there by bringing in slaves from Africa and Madagascar (but also from India); and the British, who succeeded them, made up for the lack of labour caused by the abolition of slavery (1834) and the development of a plantation economy (1820s) by using indentured labour. 2 The raison d'être of the plantation society was in turn based on unequal globalised circuits: a monoculture (sugar cane) exclusively intended for export, which required a large and cheap workforce readily available on the sites of production. Mauritian plurality, therefore, relates first of all to Creoleness. By "Creole" I mean a society founded on slavery and the encounter, within the total (territorial, socio-economic, racial) and extremely restrictive context of the plantation, of uprooted populations of varied origins (European, African, and Asian). At the heart of the plantation's sugar-cane fields and camps, European settlers and masters, African slaves, Indian indentured labourers, and their descendants came one after the other and interacted. Since independence in 1968, the sugar industry has lost its dominant place in the national economy; 3 /p. 171/ but the "sugar barons" still play a key role in it thanks to diversification into the island's new core activities (textiles, tourism, offshore finance, real estate). If the plantation matrix can be found in most contemporary territorial, social, ethnic, and religious configurations, if it has survived the end of the sugar boom, 4 it is because it is rooted in the founding radical violence of colonisation and slavery, the long-term effects of which have shaped local plurality, including the identity-based policies that have pitted the subjugated groups against one another.
Without getting into the debate on the status of indenture as an extension of slavery, 5 it is worth emphasising the sharp divide that exists in contemporary Mauritius between the descendants of slaves (Blacks and Catholics, known as Creoles) 6 and the descendants of indentured labourers (Indians, invariably assumed to be Hindus). History partly echoes the identity narrative by contrasting the collective success of the descendants of indentured labourers with the persistent marginalisation of Creoles [START_REF] Boswell | Le malaise créole: Ethnic Identity in Mauritius[END_REF]. Having made up the demographic majority since the 1850s, 7 the "Indians" gradually gained access to (small) land ownership in the 1880s. 8 The Indo-Mauritian elite, formed in the 1930s, led the political struggle for an independence which the Creoles, who feared the Indianisation of the island, did not want and which, over the last 50 years, has resulted in the effective monopoly of political power by the Indian community (essentially Hindus from North India and the Vaish caste). In 2011, 48% of Mauritians identified as Hindus (compared to 27% Catholics and 17% Muslimsalso descended from indentured labourers). 9 /p. 172/ It should also be noted that Mauritian plurality is not limited to the Catholic Creole/Hindu Indian dichotomy; it also exists within local Hinduism itself. 10 Regional Hindu traditions that in India are separated by thousands of kilometres (Bhojpuri Hinduism, which makes up the majority, and Tamil Hinduism, but also Telugu and Marathi) interact and compete on the small island. 11 Other known pluralities within Indian Hinduism are also found in Mauritius, particularly along caste lines: in addition to preferential endogamy, today one can 3 In 1968, the sugar industry still accounted for 25% of the island's GDP, a figure that fell to 12% in 1990 and 2.1% in 2010 (Grégoire, Hookoomsing, Lemoine, 2011: 75). 4 In the early 1970s, the price of sugar exploded and Mauritius benefited from a "sugar protocol" (guaranteed outlets at preferential prices) signed in 1975 (and abolished in 2009). 5 See [START_REF] Tinker | A New System of Slavery: The Export of Indian Labour Overseas, 1830-1920[END_REF] and, for a more nuanced discussion focused on Mauritius, Carter, 1995. 6 In this article, "Creole" refers to this Mauritian community; from an emic point of view, in Mauritius, one is either Creole or Indian. The terms "Creoleness" and "creolisation" refer to historical processes common to Mauritius and other sugar colonies in the Caribbean and Indian Ocean; I am therefore speaking of "Creole Hinduism". 7 In 1861, Indians made up two thirds of the colony's population (Carter, 1995: 271); by the end of the 19 th century, Mauritian-born Indians outnumbered those living in Mauritius but born in India (Benedict, 1961: 27). 8 During the "Grand Morcellement" planters who found themselves in financial difficulty had to part with some of their (least profitable) lands, which were often bought by former indentured labourers (Allen, 1999). 9 The statistics quoted are from the Central Statistics Office of Mauritius (2011). 10 As well as (although this falls outside the scope of this article) within other religions, including local Catholicism (as practised by white "Franco-Mauritians", black Creoles, and "Indian"especially Tamilconverts) and local Islam. Evangelical churches, which bring together devotees from Hindu and Catholic backgrounds, also appear to be growing. 
11 The Bhojpuri are estimated to have made up two thirds of the Indian indentured labourers. find in the same village and sometimes side by side places of worship looked after and frequented by ti nasyon (low-caste) or gran nasyon (high-caste) devotees [START_REF] Mathieu | Au 'pays des Vaish' ? Structure et idéologie de caste à l'île Maurice[END_REF]. Locally, though, there is no lack of dynamics critical of plurality. Among the actors seeking both the internal homogenisation of Hinduism and the hardening of its boundaries with other religions, we can cite the Arya Samaj, whose missionaries (Indian and then Mauritian) have preached their Hindu reformismboth social (criticism of castes) and theological (criticism of image worship)in Mauritius from the early 20 th century, barely a few decades after its birth in India. Since the late 20 th century, another dynamic has framed the ongoing reform of Mauritian Hinduism towards greater orthodoxy. In Mauritius, this logic, which is also known in India under the effects of the Hindutva 12 political ideology, responds to partly distinct factors. The aim, in fact, is to standardise local Hinduism by Sanskritising it, which involves a joint purification of its folk village roots (like in India: ending blood sacrifices, Brahmanisation of the priesthood and pantheon) which 19 th -century indentured labourers brought with them, but also of its Catholic influences from the time of the plantation (architecture, practices, vocabulary) (Claveyrolas, 2017: 309-310). Here I will study the intersecting dynamics of the regulation of plurality in Mauritian society and of the redefinition of Hinduism, its internal and external boundaries, by drawing on the history and ethnography of places of worship, practices, and actors that construct it. I will consider chronologically the different phases of the export of Hinduism to and its plurality in Mauritius: what were the consequences when indentured labourers left the Indian territory, when together they crossed the ocean by boat, when they settled on the plural plantation, and when they became part of an independent secular state? /p. 173/ The effects of indenture on Hinduism Indian Hinduism in the 19 th century Hinduism is a term and a category whose contours have been defined only relatively recently (during the 19 th century). Without going into the details of this "invention of Hinduism" [START_REF] David | Who Invented Hinduism?[END_REF], let us simply note that it was concurrent with an attempt at setting external boundaries and achieving internal homogenisation. 13 On the one hand, the British, anxious to better understand Indian diversity, institutionalised categories of caste and religion, particularly through censuses, something which has been seen as a factor in the hardening of identities (Bayly, 1999;[START_REF] Bernard | Colonialism and Its Forms of Knowledge[END_REF][START_REF] Nicholas | Castes of Mind. Colonialism and the Making of Modern India[END_REF]. This desire to understand was certainly not exempt from a divide-and-rule strategy, promoting a communalism that undermined not least the long-standing coexistence between Hindus and Muslims (Pandey, 1990). 14 The plurality of the colonised, which contrasted with the claimed homogeneity of the colonising nations, even became synonymous with the absence of a shared 12 Nationalist ideology which defines India as an exclusively Hindu nation and territory. 
13 For a concise discussion of these issues, see [START_REF] Mathieu | L'altérité au coeur de l'hindouisme[END_REF]. 14 For examples of shared places and practices in the plural holy city of Benares, see Eck, 1983: 276;Parry, 1994: 35-36;[START_REF] Sunthar | Between Mecca and Banaras: Towards an acculturation-model of Muslim-Hindu relations[END_REF] social projecta potential source of chaos -, thereby legitimising the necessary regulation of Indian society from the outside by the colonial power, the only one capable of arbitrating and guaranteeing equal treatment for all (Bates, 2001). On the other hand, anti-British nationalism went hand in hand with the promotion of the social and territorial unity of India, taking up the terms and categories (India, Hinduism) invented by the colonists. Moreover, despite the coexistence of Hindu and Muslim leaders within the nationalist ranks, it was very much a Hinduised India that was promoted, being deified in the anthropomorphic guise of the Hindu goddess Bharat Mata (Mother India), who had been dismembered or raped by the non-Hindu "invaders" [START_REF] Mathieu | Les temples de Mère Inde, musées de la nation[END_REF][START_REF] Sumathi | Enshrining the Map of India: Cartography, Nationalism, and the Politics of Deity in Varanasi[END_REF]. However, the constitution of independent India sanctioned secularism, that is, the new state's neutrality in the regulation of its religious plurality. However, since the late 20 th century, Hindu nationalism, which was born in the same decades that preceded independence,15 has taken power in Delhi and challenged the constitutional foundations of this religious equality. 16 It is worth pointing out that contemporary Hindu nationalist ideology (Hindutva) /p. 174/ echoes the dynamics of Hindu reformism: both advocate a Hinduism free of any Muslim or Christian influence (denying the history of encounters between traditions and condemning the practices that bear witness to them), but also a homogenisation of Hinduism that favours its Brahmanic dimensions to the detriment of the diversity of practices based on regions, castes, and sects in particular. Hinduism during the time of indenture What happened to Hinduism, its internal diversity, and its relationship with other religions once it left India? It should first be noted that the period of indentured labour (1835-1907) coincided with the dynamics of the invention of Hinduism as described. Barely "invented" and already exported, the Hinduism of indentured labourers further posed the challenge of being tied to the territory of India (Bharat) where it had been born and developed. Being Hindu outside India was not without its dilemmas. The socio-religious taboos related to emigration (leaving the Hindu territorydharmabhumiand crossing the black waterskalapani) have been well studied [START_REF] Catherine | Kālāpānī ou les limites à ne pas franchir. Le voyage en Angleterre du maharaja de Jaipur[END_REF][START_REF] Mathieu | L'altérité au coeur de l'hindouisme[END_REF]. While they primarily concerned the orthodox high castes who had little to do with indentured labour, these taboos were part of the contemporary reinvention of the Indo-Mauritian identity narrative [START_REF] Mathieu | Indo-Mauritians and the Indian Ocean. Literacy accounts and anthropological readings[END_REF]. 
Along with the social consequences of emigration (losing one's caste), it was the doubt over the conditions in which practice could be continued that dominated indenture narratives: had the gods followed the indentured outside the territory of Hindu India? Combining orthodoxy and pragmatism, the concerns of indentured Hindus in Mauritius remind us that the internal plurality of Hinduism does not merely juxtapose distinct traditions without any structural link. Between leaving India and settling on the plantation, the crossing of the "black waters" (kalapani) posed the immediate challenge of cohabiting in the limited space of a boat, making it impossible to maintain the prohibitions on contact and commensality that regulated plurality in India, although accounts differ as to the compromises granted to the indentured labourers.17 But another paradigm recurs as a leitmotiv in the crossing narratives: that of a liminal passage ensuring rebirth under the new status of "ship brothers" (jahaji bhai). This community specific to the indenture system transcended exclusive categories and may have lasted for several generations in Mauritius, affecting /p. 175/ even marriage preferences as caste and religious endogamy was rivalled by endogamy within the jahaji community (Benoist, 1989). Thus, on the Mauritian plantation, the Indians may have become socially and collectively "indentured labourers" before being Muslims or Hindus. Alongside this paradigm of the jahahi bhai, ocean-crossing narratives join together continuity and rupture in the traces left by indenture in the porosity of Hinduism's external boundaries. This is exemplified by the role played by Nagour Mira. The town of Nagore (in Tamil Nadu) is the site of the dargah dedicated to this Muslim saint. 18 When a ship bringing indentured labourers from India to Mauritius was caught in a storm, the lascars (Muslim sailors) prayed to Nagour Mira to save them. They were joined in their prayers by the passengers, both Hindus and Muslims, who stretched their hands up to heaven imploring the saint's intercession. The indentured labourers, who made it safely to port, maintained a Hindu-Muslim brotherhood forged by this episode, as evidenced by the saint's presence in many Hindu temples on the plantation, in the form of a hand, a flag, a boat, or a sail. 19 Thus, the narratives of the initial crossing place religious plurality, and collaboration, at the heart of the identity and religiosity of indentured labourers. But what happened once the latter had settled on the plantation? The effects of the plantation on Hinduism Unlike slaves, whose religiosity was suppressed by the colonial authorities by banning their African practices and converting them to Catholicism, Indian indentured labourers had their religious practices denigrated but tolerated on the plantation (Carter, 1995: 260 et seq). The planters granted labourers a piece of land where they could house their deities. These plantation shrines (kalimai), at first made of simple stones found in sugar-cane fields and erected under trees, quickly developed into sanctuaries that were to become the countless, sometimes monumental, temples that now dot the island. The kalimais brought the Hinduism of Indian villages to the plantation: a pantheon of initially popular, female, and ferocious (carnivorous) deities. These goddesses were fundamentally ambivalent, both dangerous and protective: they healed diseases that they themselves sent. Some rituals for the protection /p. 
176/ of the community were celebrated at the kalimais, as well as rites of passage that brought together family and loved ones, but worship was essentially votive in nature and offered individually. 20 In Indian villages or on Mauritian plantations, popular Hinduism did not need an officiating Brahman; and the latter kept his distance from a practice he deemed inferior, and for which his scholarly knowledge and techniques were neither appropriate nor effective. Local discourses, too often reflected in research (Carter, 1995: 257), were rooted in a rhetoric of loss. However, Mauritian Hinduism was probably built more on loyalty to Indian village heritage than on the impossibility of reproducing rites due to the (likely) lack of Brahmans among the indentured labourers [START_REF] Mathieu | Quand l'hindouisme est créole. Plantation et indianité à l'île Maurice[END_REF]. Indian village Hinduism was based on pragmatic practices and deities that ensured protection against disease and misfortune. The indentured labourers' risky experience could only make this pragmatic function of religion all the more necessary, over and above any ontological or identity considerations. It is therefore not surprising that kalimais bespoke the ability of practices and pantheons to evolve in order to reflect the new natural, social, professional, and religious environment which Hinduism was meant to control and manipulate. Whilst both in Mauritius and India trees were indispensable in setting up a cult, the plant species mobilised in the founding narratives of kalimais were identified as typically Mauritian, including the mango tree which was meant to be better adapted to the local climate than socalled Indian species, such as the neem (nim) which, in fact, was widespread locally under the name of lila (Chazan-Gillig, Ramhota, 2009: 120 et seq). Pragmatism and adaptation characterised the inclusion of new deities into the Mauritian Hindu pantheon, especially because of their miraculous qualities. They were either added (a statue of Père Laval,21 the Virgin Mary, or Saint Anthony22 would join the "bondieux hindous" ("Hindu gods")) or included in a network of equivalences (Dhi and Saher were "Jesus and his secretary", Kali was Saint Expedite, and Madurai Viran was Saint Michael or Saint George). The therapeutic concern was often central, requiring recourse to entities chosen not for their Hindu identity but for their presence, significance, and effectiveness (beneficial or otherwise) in Mauritius. Thus, Bram Baba was specifically linked to the Mauritian context. Known in India, in Mauritius he /p. 177/ was the spirit of an indentured labourer who set himself on fire in order to destroy the sugar-cane fields, a type of sabotage which testifies to the Indians' resistance to the plantation order. The main festivals celebrated also show an adaptation to the concerns and status of the plantation workers. Indentured labourers were more concerned with individual risks than the state of the harvest, therefore, they quickly abandoned the Tamil harvest feast (pongal), while the baharia puja which protected against the many work accidents became the highlight of the ritual year around the kalimais [START_REF] Léo | Recognizing Mauritius's Unique Heritage: The Relevance of Estate Temples and Shrines[END_REF]. 
On the one hand, popular Hinduism was common to Indian villages and Mauritian plantations; on the other hand, its demand for pragmatic effectiveness encouraged adaptation to the Mauritian context and facilitated openness to non-Hindu pantheons. However, over and above this Hindu capacity to include the other [START_REF] Hacker | Inklusivismus[END_REF][START_REF] Wilhelm | Traditional Hindu xenology[END_REF], we should probably not overlook the role of the institution of the plantation in regulating Indian plurality in Mauritius. While planters tolerated their Indian workers' cults, they only granted one space where these could be established. It is the conditions set by planters as much as the initial (India) or founding (during the crossing) brotherhood between Muslims and Hindus that imposed the coexistence between Nagour Mira and Hindu deities. The same constraint also explains the coexistence, this time within Hinduism, between Bhojpuri and Tamil cults and deities inside the same kalimai. Kali (North Indian goddess) would be found side by side with Mariamman (Tamil goddess). The same stone would sometimes be worshipped as Mariamman by the Tamils and as Kali by the Bhojpuri. As indentured labourers or their descendants began leaving the plantation thanks to social advancement and access to land, the number of temples no longer dependent on the single space available to themincreased due to scissions that often reflected a more exclusive Tamil or Bhojpuri affiliation, while Nagour Mira became less prominent in Hindu places of worship. On the plantation, the authorities' relative tolerance towards Hinduism did not protect indentured labourers from the inevitable acculturation by the dominant Catholicism. The vocabulary used strongly testifies to this: in plantation Hinduism, a ritual specialist was a "priest", the deity a "bondieu", the temple was known as a "parish", "church", or "chapel", and worship was called "prayer" or "promise". Such linguistic borrowings are obviously not enough to determine the indentured Hindus' degree of integration of Catholic representations. However, they do go hand in hand with other signs of acculturation. Thus, the generalisation of a "sermon" closing a "priest's" weekly "prayer" is also a mark of the Catholic influence on the temporal organisation of worship. And Hindu places of worship would sometimes adopt an architecture (pitched roof) and spatial organisation ("nave" and /p. 178/ "choir"; rows of benches with prie-dieux) which replaced Hindu practices and symbols (particularly circumambulation). Finally, practices within the space of the temple itself (daily (kissing) or regular (the priest "blessing" a coffin)) attest to the loss of injunctions of purity central to Indian Hinduism (saliva and death are the vectors par excellence of ritual pollution). What are the drivers and meanings of such acculturation (injunctions, adaptations, diversions, syncretisms)? To better understand the role of the plantation institution in the foundation of Mauritian Hinduism and the regulation of its plurality, we need to ask one question that is central to Hinduism: who financially sustains a cult? Stories about the founding of places of worship and the structuring of local pantheons are enlightening in this respect. It was the white Catholic planter who systematically (although not always willingly at first) paid for the place of worship. When a deity "appeared" in a sugar-cane field, the planter was asked for permission to erect a shrine. 
The planter would then give away a piece of land and help to pay for the new forms of worship, and, if necessary, the erection of buildings. The white Mauritian planter would thus become the yajman, the Hindu sacrificer, who presided over and reaped the benefits of the cult. This key role in local Hinduism is confirmed by the white planter's deification as Dhi Baba in most kalimais. This deity who protected the boundaries of Indian villages and neighbourhoods [START_REF] Coccari | Protection and Identity: Banaras's Bir Babas as Neighborhood Guardian Deities[END_REF], in Mauritius, became the guardian of the plantation area (the fieldsplace of work, and the campplace of residence); its divine jurisdiction (kshetra) covered the white planter's land. The stone representing Dhi was often placed at the boundaries of the area of the kalimai, which signified the planter's relative exteriority, but also his interceding and primordial status: since the planter's permission had been asked to found a cult on his lands, his permission was also asked to enter the kalimai and pray to the Hindu gods. The relationship between the master's Catholicism and the workers' Hinduism was also manifested around ritual practices since Dhi was worshipped using candles (associated with Catholicism), and Kali using camphor (associated with Hinduism). In India, popular Hinduism was strongly anchored in a given areathe villagewithin which segregation (neighbourhoods of different castes and religions) and structural complementarity (hierarchical collaboration between all neighbourhoods during the village's ritual festivals - [START_REF] Marie-Louise | Les Dieux et les hommes. Étude des cultes d'un village du Tirunelveli[END_REF][START_REF] Pierre-Yves | Mapping the Management of Threatening Gods and Social Conflict: A Territorial Approach to Processions in a South Indian Village[END_REF] went hand in hand. This plural village area was ordered by socio-ritual logics of purity which prescribed everyone's place in the hierarchy and the prohibitions needed for its preservation. Such norms lost their significance when Hinduism was translated to Mauritius. Being again strongly anchored in a specific and plural area (the plantation), Mauritian Hinduism reflected socio-racial norms and hierarchies. Plantation employees lived in either the /p. 179/ "workers' camp"reserved mainly for Creole skilled factory workersor the "labourers' camp" -Indian unskilled sugar-cane-field workers -, while white staff lived separately. From India to Mauritius there was a shift from the Hindu order, which was imposed even on non-Hindus (the caste system), to the colonial plantation order, which was external to Hinduism and ascribed the latter a subordinate place. In Mauritian Hinduism, the planter was obviously not the only yajman or patron of the temples and cults. However, the other authority that played this role was also born of the plantation, namely, the sirdar or Indian foreman, who was often an intermediary between planter and workers. Within the plantation, he had at his disposal a network of workerswhose "gangs" he ledand staff, and also benefitted from greater capital. The sirdars were the first to have access to land and found places of worship in villages outside the plantation, where they often acted as administrators and ritual officiants; they could even join the temple pantheon after their death (Claveyrolas, 2017: 261). 
The recurrent rise of Indian foremen invites us to nuance the fate of plantation Hinduism between acculturation and resilience. This was the case of sirdar Gokoola. An indentured labourer who converted to Catholicism on the plantation, he founded a village (now named after him) and the island's first Bhojpuri temple in 1867. Far from being the sign of radical acculturation and a break with the Hindu identity, his conversion was probably guided by a strategy for professional advancement: we know that planters preferred choosing Catholics as their trusted men. We also know that sirdar Gokoola was careful to have his marriage to his wife declared legally valid only a few months before his death, so as to ensure the continuity of his fortune which Mauritian law would not have otherwise guaranteed simply by virtue of his Hindu marriage [START_REF] Marina | Gokoola: Family, Temple & Village[END_REF]. But no record remains of Gokoola's actual experience: was his conversion a case of insidious acculturation, opportunistic manipulation, or pragmatic management of religious plurality (all divine power was worth having on one's side)? As someone who was jointly affiliated to original Indian Hinduism, to a Catholicism (which he never disowned) connected to his position of authority on the plantation, and to Mauritian Hinduism, of which he was one of the early promoters thanks to his status as a sirdartemple patron, Gokoola illustrates both the little store set by exclusive religious affiliations and the ability, when needed, to take full advantage of them. Gokoola's story, which preceded dozens of other stories of foremen founding village temples in the decades that followed, illustrates the formation of the Indo-Mauritian elites who were born of the plantation and would later lead /p. 180/ the demands for national independence. 23 But what happened to Hinduism once independence was won and political power was promised to the Indian majority in a plural society and democratic state? The effects of the Mauritian state on Hinduism The gradual dismantling of the plantation was concurrent with national independence. Taking much of its inspiration from the Indian model of "unity in diversity", the Mauritius state enshrined secularism in its constitution. From the four-coloured flag (one colour for each religion in a "rainbow nation") to the banknotes (bearing the faces of representatives of each religiously identifiable community), no symbol of the independent nation escaped the ideal of harmonious coexistence between communities and religions. Mauritian identities have crystallised around religious affiliations: one is Muslim or Hindu before being "Indian", and "Creole" means Catholic. Mauritian Constitution has institutionalised four categories: "Hindu", "Muslim", "Sino-Mauritian",24 and, by default, "General Population" (meaning Christians). The fact that all candidates in an election have to declare to which one of these four categories they belong shows how state regulation of plurality encourages religious essentialism (inherited and exclusive) rather than the porous and multiple affiliations of creolisation. However, the debates and disputes that this requirement has given rise to 25 underline the fact that the constitutional categories do not cover all the issues and experiences of Mauritian religious plurality. 
Let us take the example of Mauritian cemeteries, which sheds light on local religious plurality and its regulation by three key institutions: the plantation, the state (colonial and then Mauritian), and the Catholic Church. Planters provided their Hindu labourers with crematoria, which are still in use today. However, while the scattering of ashes in the ocean is sometimes preferred /p. 181/ (so the deceased can rejoin and be reborn in "ancestral India"), most ashes (without counting non-cremated bodies) are buried in cemeteries. Apart from private cemeteries belonging to parishes where only Catholics are laid to rest, Mauritian cemeteries can be public (municipal) or belong to the plantation: they thus accommodate Catholics, Muslims, and Hindus side by side, albeit in separate "sections". The deceased's religious affiliation, which is framed by the two secular plural institutions of the plantation and the state, is on display while also reflecting the intricacies of the plurality of their private lives: first and last names, symbols (Om, trident, cross, pagoda roof, or minaret), and colours (yellow for the Tamils, blue for Catholics, and green for Muslims) take up codes common to other spheres (domestic or ritual architecture), without always displaying complete consistency (Catholic first name and Hindu symbol). Following the end of the plantation, national actors regulating the island's plurality did not stop at institutions external to Hinduism, while Mauritian Hindu institutions themselves did not value plurality. It is true that with independence, Mauritian Hinduism dissociated itself from the Creoleness shared with non-Indians. Anxious to legitimise its political hegemony and glorify a prestigious past, the Indo-Mauritian community began to reinvent Indianness, breaking with the history of the plantation [START_REF] Mathieu | With or Without Roots: the Compared and Conflicting Memories of Slavery and Indenture in the Mauritian Public Space[END_REF]. Thus, most of the religious festival celebrations lost their plural complexion and became vectors for exclusive ethnic identities. This was the case of the Muslim Ghoon: not only was this Muharram Shi'ite festival a high point of the ritual year for the Sunnis, who make up the vast majority of Muslims in Mauritius, up until the 1950s it was famously shared by the Hindus, who also helped to organise and pay for it (Benedict, 1961: 128). Procession, flagellation, and possession were linked to Tamil Hindu religiosity (body piercing, fire walking, possession during processions). The same goes for the Hindu pilgrimage to the Grand Bassin lake, in which Catholic Creoles stopped taking part when it became a Bhojpuri communalist political platform [START_REF] Mathieu | Indo-Mauritians and the Indian Ocean. Literacy accounts and anthropological readings[END_REF]; its organisation relies on the networks of the Voice of Hindu, known in Mauritius for its radicalism, even violence, in defending a Hinduness reminiscent of the Indian Hindutva movements. The decline in shared practice also concerns the plurality internal to Mauritian Hinduism: for the past twenty years, Cavadee and Timidee processions have become showcases for a Tamil identity that seeks to distinguish itself from Bhojpuri Hinduism (Trouillet, 2014). However, in terms of regulating plurality, the main dynamic has been the generalised funding of religious institutions by the Mauritian state. 
The number of these institutions, known as "socio-cultural associations", increased particularly as a result of scissions that reflect both assertions of identity and opportunities to better attract subsidies. They take the form of national /p. 182/ "temples federations" (Bhojpuri, Tamil, Arya Samaj), who redistribute the financial manna to their affiliated temples. In return, the latter are bound by theological requirements which are the main tool for bringing about "reform" and orthodoxy in Mauritian Hinduism. Thus, ferocious deities are proscribed, as are the practices associated with them, such as animal sacrifice. While the secularist principle does not exclude the funding of religion, the Hindu majority in Mauritius (and consequently the Hindus' capture of most of the subsidies) coupled with Hindu political hegemony have caused bitterness. It is worth noting that unlike India, where secularism has sometimes come with state interference in the administration of temples [START_REF] Franklin | Religion under Bureaucracy: Policy and Administration for Hindu Temples in South India[END_REF], Mauritian temples federations act more like effective lobbies, being largely outside state control. The federations have driven the Indianisation of Mauritian Hinduism. They sometimes import Indian communalist issues to Mauritius, such as when taking part in the campaign to send consecrated bricks to rebuild the Ram temple in Ayodhya, which is supposed to have been destroyed in order to be replaced by a mosque (Eisenlohr, 2006: 36-7). On a day-to-day basis, the federations provide temples with networks of priests, divine images, musicians, and architects from India. This Indianisation has ambiguous effects on local plurality. As far as the external boundaries of Hinduism are concerned, Indianisation is a type of ethnicisation which excludes Creoles from cults in which they often used to be integrated. Mauritian-style reformed Hinduism has replaced Creole with Indian languages and Indianised the names of gods and rituals [START_REF] Mathieu | Un prêtre tamoul dans le chantier de l'hindouisme mauricien. Orthodoxies et autorité religieuse[END_REF]. But Indianisation has also hardened boundaries within Hinduism. For example, temples federations subscribe to the Bhojpuri Sanskritised or Tamil Dravidian variants of orthodoxy. Generally speaking, far from homogenising Mauritian Indianness, the institutionalisation of Mauritian Hinduism has led to a growing number of federations, which underlines the diversity of Hindu sub-communities, particularly along caste-identity lines. This is the case of the organisation representing the Rajput caste (Gahlot Rajput Maha Sabha), or the Vaish caste (Vaish Mukhti Sangh). Arya Samaj, a reformist institution which in theory denounces the caste system, itself split in two along caste lines, leading to the creation of Arya Ravived Pracharini Sabha by the Chamar untouchables, who were renamed "Ravived". We should not forget that Hinduism, whether Indian or Mauritian, does not recognise a centralised authority able to impose unilateral reform "from above". The Hinduism of the Mauritian plantation also lacked major representatives of a Hindu religious authority, whether it be Brahmanic scholars, various /p. 183/ ritual specialists,26 or renouncers. It would, however, be a mistake to conclude that this kind of institutional vacuum leaves the national federations free to frame the regulation of local Hinduism. 
In Mauritius, the mainspring of the current evolution of local Hinduism is represented by the "societies" in charge of each place of worship. They often consist of at least one president, one secretary, and one treasurer (who are elected or appointed in rotation from among thirty members or so), and vote on crucial decisions such as hiring a priest or extending a temple. This democratic way of working does not prevent individuals from taking over power and thereby having long-lasting influence on the fate of a temple. However, it also gives them real freedom from the authority of the federations. Those in charge of the national temples federations constantly rail against the indiscipline and ego of these individuals, who are suspected of dividing the community for their own personal prestige. For their part, those in charge of temple societies criticise national federations for their elitism disconnected from the practices of Mauritian ancestors. They sometimes even go as far as to ignore instructions since "there in Moka [the seat of the Tamil federation], they don't know what it's like here [in the village]", and "our ancestors used to do this [healing cults of the popular goddesses Kateri and Tookay, who are not very well regarded in high places]". At the level of individual temples, there is also criticism of the federations' identitarian agenda, which hardens the internal and external boundaries of Hinduism, whereas locally it is pragmatic concerns that prevail. A Tamil priest who did not hesitate to extol the superiority of the "Tamil religion" and Dravidian civilisation over its North Indian counterpart, nevertheless criticised the Tamil federation's injunction to use only Tamil and exclude Sanskrit as a ritual language. He "like[s] speaking and hearing Sanskrit. Mantras are a powerful thing. Sanskrit is very powerful, it's what I prefer" (Claveyrolas, 2014). Ideological debates asidedebates which, in India, pit the Brahmanic tradition (which extols Sanskrit ritual) against reformist or devotional currents (which integrate lingua francas)it is the effectiveness attached to Sanskrit that is the decisive factor here. It is worth adding that one of the central activities of Hindu temples in Mauritius is not regulated by the federations, and even frustrates their ambitions to display an ethnic and scholarly religion: priests are often healers and their consultations (several half-days a week) attract devotees well beyond religious affiliations, even acting as a trigger for individual religious plurality (particularly in the case of Creoles who often /p. 184/ come to Hinduism via these healers). The Hindu temples that are most famous for their healers are recognisable by their plural pantheon: the one in Medine (whose officiants are Indian and Creole) groups together Bhojpuri and Tamil deities, but also the Virgin, Jesus, and the Buddha; the Queen Victoria ritual complex includes a Bhojpuri shrine (Shiv Parvati mandir), a Tamil shrine (Draupadi kovil), a Catholic "grotto", and Nagour Mira's hand around the healer's shrine dedicated to Kateri. In these circumstances, some temples give up subsidies, preferring to free themselves from the federations' oversight and rely exclusively on "the village people, who give a lot". 
It should be stressed that while federation subsidies are roundly criticised by non-Hindu Mauritians as a way to undermine the secularist spirit, which favours competition between exclusive religious affiliations, individual village donations are notably trans-religious, the rule being that anyone can and does contribute financially to the building of all places of worship and main rituals, irrespective of their personal religious affiliation, both within and outside Hinduism. In other words, the communalism crystalised around religious identities is often rejected in Mauritius as the result, even project, of national elites, whereas, at a village level, everyone stresses harmonious cross-community collaboration. There has been no radical change at this level since the 1950s when Burton Benedict attested to the joint funding of a kalimai's baharia puja by the whole village, "even the high caste villagers who disapproved of the ceremony and the Muslims contributed" (Benedict, 1961: 131). We can hardly stop at the first level of discourse and categorically state that "all [gods] are the same: Allah, Vishnu, Buddha" and that the only difference is the name by which each human community refers to them. Besides, taking part together in a ritual practice does not mean sharing all of its dimensions. The goats sacrificed to Nagour Mira have their throats cut according to the Muslim ritual while reciting Qur'anic verses, whereas those sacrificed to Kali are decapitated. During a baharia puja, pig sacrifices were sometimes abandoned in "deference" to Muslims sharing in the ceremony (Benedict, 1961: 131). On an intra-Hindu level, the divisions between high-caste and low-caste temples are aligned with the sacrifices practised: pigs for low castes (Shudra), goats for middle castes (Vaish), and vegetarian offerings for high castes (Babujee-Maraz) and Arya Samaj members. Everyone can in turn make an individual choice while taking part in village or family worship: when eating together, vegetarians substitute flour balls for the animal remains offered to the goddess [START_REF] Mathieu | Au 'pays des Vaish' ? Structure et idéologie de caste à l'île Maurice[END_REF]. Hindu practices show their pragmatism by joining the Mauritian plural context and opening up to powers whose effectiveness they value. Conversion (albeit by a minority) to Catholicism was probably facilitated by the undeniable /p. 185/ effectiveness of Catholic deities (Jesus, the Virgin, and the saints), as evidenced by their being worshipped by the dominants. However, the private experiences of religious plurality within Mauritian Hinduism stress a more conflictual and less strategic dimension. Individuals do not necessarily choose the deities they worship. Some of them come in dreams, apparitions, or possessions diagnosed by a religious specialist. Individuals then have no choice but to join the religious pluralitysomething which they can be apprehensive about since they are often wary of the less familiar modi operandi of the "gods of the others": Creoles are particularly fearful of the powernever far from witchcraftattributed to Hindu gods. Even worse than this intrusive plurality, most of the cults thus imposed by deities also extend to the devotees' descendants: the agreement into which they enter with a particular god requires not only to be sealed by a specific action (fasting, donation, worship), it further involves a lasting trans-generational obligation the breach of which would be dangerous. 
We can thus see why domestic shrines often embody Mauritian plurality by bringing side by side communal gods (Hanuman for a Bhojpuri Hindu) and deities that a family member has encountered in his or her life, often when making a vow (the Virgin for a Hindu). Not only is plurality inherited and accumulated, conversely, the radical acculturation of conversion is not always enough to do away with individuals' openness to religious plurality. Thus, a Tamil interlocutor, whose family converted to Catholicism in the early 20 th century on the plantation, was saddened and surprised that he was not "immunised" against "bondieux madras". 27 He could not prevent being possessed by "the goddess" as he passed by a Timidee procession (fire walking). His mother had to "do a system" [set of prayers] to expel this unwanted divine presence. Here, the plurality rooted in ancestrality shakes up the certainties of chosen identities and authoritarian regulation "from above", to the advantage of the exclusivism embodied by conversion. Conclusion Hinduism's integration in Mauritian plural society followed the main phases of indentured labourers' settlement on the plantation, and the influence that the latter continues to exert on the island's national construction. To conclude, let us take the example of one of the most famous Tamil temples, the one in Camp Diable dedicated to the goddess Amma Tookay. The story of its foundation takes us back to the beginnings of indentured /p. 186/ labour 150 years ago, at the heart of the Britannia sugar estate. An important part of the labourers' work was to clear the fields of all stones. One of them proved impossible to lift. Even the planter could not do it, but that did not prevent him from refusing to allow the Hindu labourers to erect a shrine there to the goddess identified behind this stone apparition. The planter was accused of having kicked the stone whose divine quality he denied and subsequently his leg started to rot. It only healed when he decided to allow the temple to be built, pay for it, and worship there himself. Thus, the Hindu goddess demanded and got a "place of prayer" (a Creole term for a place of worship) in the heart of the plantation. The Catholic planter and Hindu labourers worked together not without conflict or negotiations, reflecting the power relations between hierarchy and interdependence. And the sacrilegious Catholic planter became the Hindu temple's chief sacrificer. This episode of the planter constrained by the goddess is no longer central to local stories, which are often expurgated of all instances of conflictual foundation to better emphasise harmonious cooperation. Thus, at the entrance to the sanctuary, the plaque commemorating the temple's renovation in 2002 expresses gratitude for "the invaluable contribution of the sponsor Britannia Sugar Estate". Today, the temple benefits from unique infrastructure funded by the state (signposts, a wide paved road, parking), with a plaque thanking the district council and the ministries of environment, public infrastructure, and public utilities. Under its "Dravidian" appearance (a multi-coloured gopuram tower loaded with sculptures), it remains dedicated to a popular Hindu goddess which is renowned first of all for its therapeutic qualities, unlike Brahmanic goddesses such as Lakshmi or Sarasvati who are increasingly central to renovated temples. 
However, the ferocious iconography (raw black stone, red eyes, grimace, and fangs), which used to crystallise non-Hindus' fear and incomprehension,28 has been replaced by a more peaceful form: a pastel-coloured face with discreet fangs framing a near-smile. And the old idea that the stone keeps growing is now largely dismissed as superstition, including by the officiant. Apart from illustrating the history of the plantation and the ongoing renewal of local Hinduism, this temple bears witness to Hinduism's integration into contemporary plural Mauritian society. Each year in early June, the island celebrates with great pomp the start of the cane-cutting season, holding "prayers so the sugar harvest may go well."29 This Hindu ceremony is attended /p. 187/ by the state's highest authorities30 as well as a director of the sugar factory (Omnicane, replacing Britannia), Jacques d'Unienville. In 2019, Prime Minister Pravind Jugnauth took on the role of the sacrificer symbolised by the orange cloth which the priest wrapped around his head on his arrival. The (Bhojpuri) Prime Minister and (Catholic) planter led the celebrants throughout the ceremonies inside the Tamil sanctuary, in front of the cameras of the MBC national television channel, their foreheads adorned with sindur and ashes as a sign of Amma Tookay's blessing, which they received by performing the Hindu ritual salutation pressing their palms together. The media noted the ceremony's capacity to bring "all the communities" of the island together. It particularly stressed the prayers' economic concerns, which were to ensure a good national harvest (more than three million tons were expected that year). However, one of the temple's officiants focused on the propitiatory dimension of the ritualasking the goddess for permission to cut the cane in order to ensure her protection: "That is why we offer [the goddess] a cane [stalk]. Like this, all goes well for everyone in the fields and factories, no accidents, no misfortunes ... All these people ... the important people and the labourers ... the goddess, that's her children. She will protect them all year round." The sugar industry is a sector historically linked to Mauritian identity and still provides tens of thousands of jobs (Grégoire, Hookoomsing, and Lemoine, 2011: 75). Its economic difficulties, widely covered by the media, are at the heart of political debates and electoral stakes. The fact that cane workers, planters, and the highest authorities of the Mauritian state entrust their national prosperity into the hands of a popular Tamil deity during a shared propitiatory ritual sanctions the historical evolution of Hinduism in the local plural society. The popular Hinduism that emerged in Indian villages crossed the ocean and, while at first it was merely tolerated, it became part of the plurality, constraints, and power relations of the plantation. After a century and a half of being in Mauritius, and fifty years of national independence, Hinduism today enjoys a status as key interlocutor of the secular Mauritian state, and as the "unofficial religion" of the sugar industry. dans la société mauricienne et de la redéfinition de l'hindouisme, de ses frontières internes et externes, à partir de l'histoire et de l'ethnographie des lieux de culte, des pratiques et des acteurs qui la construisent. 
J'envisage chronologiquement les différentes phases de l'exportation de l'hindouisme et de sa pluralité à Maurice: quelles sont les conséquences lorsque les engagés quittent le territoire indien, lorsqu'ils partagent la traversée de l'océan en bateau, lorsqu'ils s'installent dans la plantation plurielle et lorsqu'ils s'inscrivent dans un État indépendant et séculier? Mots-clés: Maurice, hindouisme, plantation, pluralité, créolité. The Regulation of Plurality in Mauritian Hinduism. History, Actors, and Practices The plurality of the young Mauritian nation is not only inscribed in history, it is foundational and structural, rooted in the radical violence of colonisation, slavery, and plantation. I examine the intersecting dynamics of the regulation of plurality in Mauritian society and of the redefinition of Hinduism, its internal and external boundaries, by drawing on the history and ethnography of places of worship, practices, and actors that construct it. I will consider chronologically the different phases of the export of Hinduism to and its plurality in Mauritius: what were the consequences when indentured labourers left the Indian territory, when together they crossed the ocean by boat, when they settled on the plural plantation, and when they became part of an independent secular state? Keywords: Mauritius, Hinduism, plantation, plurality, Creoleness. La regulación de la pluralidad en el hinduismo de Mauricio. Historia, actores y prácticas La pluralidad de la joven nación mauriciana no sólo está inscrita en la historia, sino que es fundadora y estructuradora, enraizada en la violencia radical de la colonización, la esclavitud y la plantación. Estudio aqui la dinámica de intersección de la regulación de la pluralidad en la sociedad mauriciana y la redefinición del hinduismo, de sus fronteras internas y externas, sobre la base de la historia y la etnografía de los lugares de culto, las prácticas y los actores que lo construyen. Considero cronológicamente las diferentes fases de la exportación del hinduismo y su pluralidad a Mauricio: ¿cuáles son las consecuencias cuando los hombres alistados abandonan el territorio indio, cuando comparten la travesía del océano en barco, cuando se establecen en la plantación plural y cuando se unen a un estado independiente y secular? Palabras clave: Mauricio, Hinduismo, plantación, pluralidad, Creolidad. The formalisation of Hindu political ideology can be dated to the publication in 1923 of the book Hindutva. Who Is a Hindu? byVinayak Damodar Savarkar (1883-1966). See, for example, the recent and highly controversial Citizenship Amendment Act (2019) which excludes Muslim immigrants from regularisation measures. In some convoys, cooks were sometimes selected from among non-untouchables (Ponaman, 1989), and the higher castes are said to have been exempted from the more polluting tasks(Carter, 1995: 102-122). Dargahs (saints' tombs) are the main places of worship shared by Muslims and Hindus in India, as well as symbols of old interactions between Hinduism and other religions(Afsar, 2013; Assayag, [START_REF] Marina | Servants, Sirdars and Settlers: Indians in Mauritius, 1834-1874[END_REF] Assayag, Tarabout, 1997;[START_REF] Bigelow | Sharing the Sacred. Practicing Pluralism in Muslim North India[END_REF].19 I draw here on the story told to me by a Mauritian Tamil priest explaining the hand found in his temple. For an analysis of worship among indentured communities in the Caribbean and Indian Ocean, seeBenoist, 2013. 
20 For an analysis of kalimais, see[START_REF] Suzanne | L'hindouisme mauricien dans la mondialisation[END_REF][START_REF] Mathieu | Les territoires de l'hindouisme mauricien : l'Inde, la mer, la plantation et la nation[END_REF] French Catholic missionary (1803-1864) beatified in 1979, whose tomb in Sainte-Croix is the object of an important multi-faith annual pilgrimage. The therapeutic features of the cult of Saint Anthony are also known in Indian popular religion, including in Hindu[START_REF] Sébastia | Les Rondes de saint Antoine. Culte, possession et troubles psychiques en Inde du Sud[END_REF] and Muslim(Flueckiger, 2006: 3) contexts. Sir Seewoosagur Ramgoolam, the "Father of Independence" and first prime minister of Mauritius, was himself the son of a sirdar. Sino-Mauritians (2% of the population, but playing a considerable economic role) are usually identified as "Buddhists", but their "pagodas", their main forms of worship (offerings, divination rituals, ancestor worship) and deities (Kwan Tee) are more rooted in Chinese folk traditions, particularly Taoist and Confucian. Mauritian censuses include a "Chinese religion" category separate from "Buddhism". The official conversion of many Sino-Mauritians to Catholicism should not be seen as exclusive allegiance. 25 It is on the basis of the ethnic identity declared that the seats of "corrective deputies" are allocated; these are meant to reduce imbalances of ethnic representation in the national parliament. Candidates who refused to comply with this declaration requirement, rendering them ineligible, complained to the UN Human Rights Committee, but a ruling in their favour in 2012 has not yet led Mauritian authorities to change the electoral system. Temples' ritual officiants, who are present in Mauritius, enjoy very little prestige in Hinduism, unlike domestic priests (purohit), who are absent from Mauritius. "How can one pray to a scary bondieu?" asked a Catholic "Indian" woman who remembered her fear when, as a child, she came here to pray during Sunday picnics. https://defimedia.info/une-priere-pour-le-bon-deroulement-de-la-coupe, consulted 23/02/2021. In 2020, the ministries of agroindustry and finance.
04081280
en
[ "info.info-cy" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04081280v3/file/Towards%20a%20methodology%20to%20consider%20the%20environmental%20impacts%20of%20digital%20agriculture.pdf
Pierre La Rocca. Towards a methodology to consider the environmental impacts of digital agriculture. Keywords: Digital Agriculture, ICT infrastructures, Environmental footprint. 1. Research context. Smart farming technologies (SFT) still lack concrete examples proving their added value to farmers. In the view of [? ] and [? ], the production context needs to be taken into account, and ICT by itself cannot be a sufficient lever if it is not embedded in more sustainable means of production. SFT benefits could therefore depend on the production context in which they are used, the functions they have to address, and the technological paradigm chosen to address those functions. The authors of [? ] argue that digital solutions need to be used in agroecological contexts and centred on farmers' needs to become a potential sustainability lever. This point is also supported by the authors of [? ], who add that developing ICT solutions in isolation from agricultural realities runs the risk of hampering rather than advancing sustainability transitions in the food system. While digital agriculture may be seen as a sustainable solution, the environmental footprint of the ICT itself is rarely considered. Yet ICT device manufacturing has a significant environmental footprint [? ], and a global adoption of digital agriculture could lead to a massive deployment of complex electronic devices in the environment. Following the conclusions of [? ] regarding IoT, such a situation would not be sustainable, as it could cause digital rebound effects [? ] at a scale cancelling the expected benefits of digital agriculture. Research objectives. In this context, considering the particular environmental footprint of the ICT required by digital agriculture and the possible rebound effects it could generate, it seems relevant to ask what kind of ICT, deployed at which scale and for which agricultural systems, could bring a certain sustainability without threatening the sufficiency and resilience that agricultural systems also require. The main goal of this research is to set up a methodology that takes into account the environmental footprint of agricultural ICT systems and their consequences at a systemic scale. This methodology should help us develop present and prospective models and scenarios in order to evaluate and compare possible technological paths. The results obtained are expected to inform societal debates and political decisions. Currently at an early stage, this research in computer science is conducted at the Laboratoire Bordelais de Recherche en Informatique (LaBRI), under the supervision of Aurélie Bugeau (LaBRI), Gaël Guennebaud (Inria) and Anne-Laure Ligozat (LISN). It is carried out in the Image and Sound department, both because of the current disciplinary fields of its supervisors and because of the lack of a department dedicated to ICT sustainability. 2. Methods and first results. Research approach. Our general methodology is inspired by the works of [? ] and [? ]. It is adapted to the complexity of farms, where several companies can offer several ICT services operating on the same territory, relying on different infrastructures that are not always interoperable. The goal of this methodology is to lead to parametric models able to inform decisions on the adoption of digital agriculture paths. These models aim to go beyond individual life cycle assessment (LCA) studies and provide more systemic visions at different territorial scales.
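To make the parametric-model idea concrete, here is a minimal sketch in Python of one possible shape for such a model, using as an illustration the comparison between an RFID-plus-IoT system and a camera-plus-AI system for cattle identification discussed under Future works below. The device lists, the indicators (carbon plus a crude metal proxy) and every numerical value are placeholders invented for illustration only; a real model would take its inventory data from LCA databases and from the field surveys mentioned in this paper.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    count: int                # devices deployed for the function (placeholder)
    lifetime_years: float     # expected service life (placeholder)
    embodied: dict            # manufacturing impacts, per device (placeholder)
    use_per_year: dict        # use-phase impacts, per device and year (placeholder)

def annual_footprint(devices):
    """Annualised footprint of an ICT system fulfilling one agricultural function."""
    total = {}
    for d in devices:
        for k, v in d.embodied.items():
            total[k] = total.get(k, 0) + d.count * v / d.lifetime_years
        for k, v in d.use_per_year.items():
            total[k] = total.get(k, 0) + d.count * v
    return total

# Placeholder values, NOT measured data; only the structure of the model matters.
rfid_iot = [
    Device("RFID ear tag", 120, 5, {"kgCO2e": 0.3, "kg_metal": 0.01}, {"kgCO2e": 0.0}),
    Device("IoT collar",   120, 4, {"kgCO2e": 15,  "kg_metal": 0.2},  {"kgCO2e": 0.5}),
    Device("LoRa gateway",   1, 8, {"kgCO2e": 40,  "kg_metal": 0.5},  {"kgCO2e": 10}),
]
camera_ai = [
    Device("Barn camera",  4, 6, {"kgCO2e": 30,  "kg_metal": 0.3}, {"kgCO2e": 15}),
    Device("Edge GPU box", 1, 5, {"kgCO2e": 150, "kg_metal": 1.0}, {"kgCO2e": 120}),
]

for label, system in [("RFID + IoT", rfid_iot), ("Camera + AI", camera_ai)]:
    print(label, annual_footprint(system))
```

Comparing the dictionaries returned for each option gives, per indicator and per year, the order of magnitude at which one technological path dominates the other; prospective scenarios would then vary device counts, lifetimes, use-phase terms and deployment scales.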
Adopting a more systemic approach implies broadening the spectrum of environmental impacts considered, looking beyond carbon emissions and energy consumption. Relevant additional indicators include water consumption or the potential metal depletion linked to ICT infrastructures. Such models could help characterise the absolute environmental impacts of digital agriculture systems together with some of their indirect effects. The main steps of the proposed methodology are the definition of a baseline and its boundaries, the identification of relevant case studies, and the assessment of the effects of possible prospective trends. Although the digitization of agriculture is a matter of growing interest, the trend is far from new according to [? ]. The definition of a baseline therefore needs to take the existing level of digitization into account. We use as a general baseline the average ICT equipment of French farms in 2025. This baseline will be further refined according to the case studies considered. After a first environmental footprint assessment, it will serve as a reference against which different technological scenarios at the 2035 horizon can be compared. Inventory studies, such as the one proposed by [? ], as well as field surveys, will help us acquire the associated general data. The definition of boundaries specifies which ICT infrastructures are taken into account, the territory, the agricultural systems and production steps where they are deployed, and the temporal perspective over which they may evolve. Because different ICT systems can be used to address the same issue, we adopt a functionalist approach and compare different ICT systems addressing the same function. Approaching ICT by functions allows us to consider complex systems of different, interacting technologies that jointly fulfil a function, rather than isolated technologies. This approach seems to better reflect systems like those presented in the works of [? ] and [? ]. Looking at the ICT infrastructures a system requires to fulfil a function first gives us a way to assess the magnitude of its environmental impacts; it also lets us account for the different functions a single ICT infrastructure is expected to address. First results. The thesis started in October 2022. A first piece of work consisted of a global synthesis to map the multiplicity of digital agriculture systems. A first use case dedicated to systems for individual identification and health metrics of cattle is currently being carried out to build and test our general methodology. We have also conducted field surveys on a farm and at agricultural events to characterise current and expected trends in agricultural ICT. Future works. In the near future, we aim to propose a first model comparing an RFID-plus-IoT system with one using AI and computer vision. This model will address two complementary functions with respect to the possibilities offered by AI infrastructures, namely individual identification and health metrics. To do so, our preliminary work needs to be enriched with data on IoT collars. We would then like to consider crop spraying in order to test the method on another use case. Environmental metrics linked to software processes will also need to be considered: while current systems mainly rely on edge computing, our field studies showed that a future trend for digital agriculture companies is to externalise computing to their own cloud platforms. In the longer term, we plan to compare our 2025 baseline with different 2035 scenarios proposing technical systems at different scales.
These scenarios will be inspired by the European Commission's plans [? ] or the projections of [? ]. Conclusion. Our research aims to propose methods and models to better assess the environmental footprint of digital agriculture systems. Filling this gap contributes directly to a better understanding of how ICT can help mitigate the uncertainties at the interface between climate change and agriculture. To do so, we will rely on, and adapt to the agricultural context, methods used in the assessment of the environmental effects of ICT. Participating in the ICT4S doctoral symposium is an opportunity to present our research to people concerned with the role ICT can play in sustainability. Presenting our work there would allow us to formalise it for an informed audience and to receive relevant external feedback.
00019482
en
[ "spi.meca" ]
2024/03/04 16:41:18
2002
https://hal.science/hal-00019482/file/Patel2002.pdf
H. G. Patel, B. P. Patel, M. Ganapathi, M. Touratier. Dynamic Characteristics of Laminated Angle-Ply Noncircular Cylindrical Shells. Keywords: laminated shell, angle-ply, free vibration, non-circular, modes, finite element, elliptical cross section.

The study of the response of shells of revolution under static and dynamic loading situations has received considerable attention in the literature compared with the analysis of shells of noncircular cross section. This is possibly due to the difficulty introduced in the governing equations by the varying nature of the radius of curvature of noncircular cylinders with the circumferential coordinate. Although many structural components of noncircular cross section can be adequately treated as equivalent shells of revolution, this approach may be unacceptable for shells with significantly noncircular curvature. Studies of such shells made of advanced composite materials, which are preferred in the design of lightweight and efficient shell structures, are further limited because of the increased complexity due to the inherent directional properties of these materials.

The extensive studies available in the literature on the dynamic analysis of circular cylindrical shells and panels have been reviewed in the work of Leissa [1], Noor [2] and Soldatos [3]. Recently, the limited progress made in understanding the behaviour of noncircular cylinders has been reviewed by Soldatos [4]. It may be concluded that few contributions are available concerning the free vibration analysis of anisotropic laminated noncircular cylindrical shells compared with the isotropic case, and they are all concerned with the cross-ply case. Some of the available works on the dynamics of cross-ply noncircular shells are those of Soldatos and Tzivanidis [5] and Soldatos [6]. All these investigations have been carried out by various analytical methods. The analysis of angle-ply noncircular shells, however, appears to be scarce in the literature because of the increased complexity due to the various couplings arising from anisotropy.

The present paper deals with the dynamic behaviour of anisotropic laminated angle-ply noncircular cylindrical shells based on the finite element method. The element employed here is a C0-continuous, eight-noded serendipity quadrilateral shear-flexible shell element with five nodal degrees of freedom (the mid-surface displacements u0, v0, w0 and the rotations θx, θy), developed based on the field-consistency approach. A detailed study is carried out to highlight the effects of lay-up and ply-angle on the natural frequencies pertaining to different types of modes of vibration of layered elliptical cylinders. The radius of curvature of the middle surface of the elliptical cross section is described by 1/R(θ) = (A^3 / (a^2 b^2)) (1 + μ0 cos 2θ)^(3/2), where A = [(a^2 + b^2)/2]^(1/2), μ0 = (a^2 - b^2)/(a^2 + b^2), and θ denotes the angle between the tangent at the origin of the circumferential coordinate and the tangent at any point on the centre line. The parameters a and b are the lengths of the semi-major and semi-minor axes, respectively.

Results obtained for the first few natural frequencies of two- and eight-layered angle-ply [(θ/-θ) and (θ/-θ)4] laminated thin elliptical shells with simply supported boundary conditions are presented in the full paper. It is observed from the results that the lowest frequency values pertaining to the asymmetric modes of vibration, and to the next higher ones, are mostly yielded by the 45° ply-angle case, whereas the maximum values occur for either the 30° or the 60° ply-angle case, depending on the frequency order and the type of asymmetric vibration mode. The detailed studies also show that an increase in the eccentricity value affects the frequencies quantitatively, but the nature of variation of the frequency parameter does not change.

References: [1] Leissa, A.W., "Vibration of shells," NASA SP-288, 1973. [2] Noor, A.K., "Bibliography of monographs and surveys on shells," Applied Mechanics Reviews, 43, 223-234, 1990. [3] Soldatos, K.P., "Review of three-dimensional dynamic analysis of circular cylinders and cylindrical shells," Applied Mechanics Reviews, 47, 501-516, 1994. [4] Soldatos, K.P., "Mechanics of cylindrical shells with non-circular cross-section: A survey," Applied Mechanics Reviews, 52, 237-274, 1999. [5] Soldatos, K.P., Tzivanidis, G.J., "Buckling and vibration of cross-ply laminated non-circular cylindrical shells," Journal of Sound and Vibration, 82, 425-441, 1982. [6] Soldatos, K.P., "A Flügge-type theory for the analysis of anisotropic laminated non-circular cylindrical shells," Int. J. Solids Structures, 20, 107-120, 1984.
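The prefactor of the curvature relation cannot be read with certainty from the scanned source, so the form 1/R(θ) = (A^3/(a^2 b^2))(1 + μ0 cos 2θ)^(3/2) quoted above should be treated as an assumed reconstruction consistent with the stated definitions of A and μ0. Under that assumption, the limiting values below follow from standard ellipse geometry and reproduce the familiar radii of curvature at the ends of the axes.

```latex
% Assumed curvature relation for the elliptical middle surface:
%   1/R(\theta) = \frac{A^{3}}{a^{2}b^{2}}\,(1+\mu_{0}\cos 2\theta)^{3/2},
%   A^{2} = \tfrac{1}{2}(a^{2}+b^{2}), \qquad \mu_{0} = \frac{a^{2}-b^{2}}{a^{2}+b^{2}}.
\[
  \theta = 0:\qquad 1+\mu_{0} = \frac{a^{2}}{A^{2}}
  \;\Longrightarrow\;
  R = \frac{a^{2}b^{2}}{A^{3}}\cdot\frac{A^{3}}{a^{3}} = \frac{b^{2}}{a},
\]
\[
  \theta = \tfrac{\pi}{2}:\qquad 1-\mu_{0} = \frac{b^{2}}{A^{2}}
  \;\Longrightarrow\;
  R = \frac{a^{2}b^{2}}{A^{3}}\cdot\frac{A^{3}}{b^{3}} = \frac{a^{2}}{b}.
\]
```

For a circular cross section (a = b, hence μ0 = 0) the relation reduces to R = a, as expected.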
04053780
en
[ "info" ]
2024/03/04 16:41:18
2019
https://hal.science/hal-04053780v2/file/paper.pdf
Tobias Pape email: [email protected] Jakob Reschke email: [email protected] Patrick Rein email: [email protected] Fabio Niephaus email: [email protected] Marcel Taeumel email: [email protected] Robert 2019 Hirschfeld email: [email protected] Object Versioning in the Presence of File-based Version Control Systems Keywords: engineering → Software configuration management and version control systems, Classes and objects, Integrated and visual development environments Version control, Object-oriented programming, Exploratory programming, Serialization come L'archive ouverte pluridisciplinaire INTRODUCTION Version control is an established practice for source code since at least the 1970s [START_REF] Marc | The source code control system[END_REF]. Version control systems (vcss) typically manage files, as most programming systems use files to store the source code of programs. However, since source code files seldom convey the current state of a running system, such as its objects, those entities are hard to track with conventional vcss. Yet, using systems that built upon a model of long-living state, such as image-based programming systems like Smalltalk and Self, database-oriented systems, or live-and exploratory programming systems, can lead to a split experience. The system's source code can be put in a vcs, enabling safe changes, easy sharing, and recovery of code from earlier versions of a program. The systems's state, however, can hardly be versioned because it does not easily fit into files. Developers who wish for the benefits of vcss for state have to e.g. serialize state to source code or separate files, and manage the loading of it. When objects must be converted before they can be put under version control, sometimes there are multiple options of how this can be done. For any given purpose, there are typically multiple file formats in use and multiple perspectives from which an object could be modeled. Version control supports collaboration. When people make changes to their local copies of a system simultaneously, contemporary version control tools support users in managing different streams of development (branches). Internet platforms such as GitHub 1 have made collaborative, distributed development of software easier. We want to provide support for collaboration on artifacts that are not text, or files, but arbitrary objects built in an exploratory programming such as Squeak/Smalltalk. We propose a version control solution for arbitrary objects that works on top of an established vcs to promote platform reuse. Contributions. We present an architecture to put arbitrary objects under version control. It allows for customization of how versions of objects are represented and how they could be stored in files, so the most suitable representation can be chosen. The storage of versions is delegated to a backend version control system, so existing systems can be used for their maturity and familiarity to users, and their platforms can be reused for their utility and to save costs. Further, we present an overview of an implementation of this architecture in Squeak/Smalltalk and discuss its features and limitations. We do not propose a solution for synchronous collaborative editing of objects. That is, the proposed architecture assumes that the users of the exploratory programming environment need to synchronize their changes to objects explicitly. Structure of this work. In the following section (section 2) we illustrate basic considerations when providing vcs for objects. 
We present our architecture for putting objects under version control based on established vcss (section 3). We illustrate our implementation prototype in Squeak/Smalltalk (section 4) and discuss its merits and limitations (section 5). Finally, we discuss related approaches (section 6) and conclude the paper (section 7). 1 http://github.com (last accessed May 10, 2023). BACKGROUND In this section we will first illustrate the challenges of object version control by looking at two existing approaches. We then explicate our goals for a vcs for objects in general. Further, we motivate our approach of basing the vcs on an established system in order to ease collaboration. Finally, we describe a set of options the system should provide to users according to previous work. Due to the large variety of vcs today, a variety of terms and concepts exists. In order to clarify our interpretation of these terms we collected and explained them in Appendix A. Challenges of object versioning In Self, objects are transferred from one image to another via the Self Transporter [START_REF] Ungar | Annotating objects for transport to other worlds[END_REF]. There are no classes in Self. Objects implementations are reused as prototypes, like in JavaScript. The prototype objects are fully-capable objects themselves, which blurs the gap between meta-objects and "actual instances". For this reason, the Self Transporter can transport any object. It does so by traversing a graph of objects -guided by annotations to objects and slots to fill in information about user intentions -and writing files that contain Self source code expressions that rebuild the captured object graph when evaluated. The serialization format is therefore text-based and general enough to describe arbitrary objects, but it is specific to the Self language. Objects can be exchanged with file-based vcss, like Smalltalk file outs, but the version control system is not integrated in the Self environment. The Lively Kernel environment comes provides its users with an exploratory programming environment for JavaScript entirely inside the web browser [START_REF] Ingalls | The lively kernel a self-supporting system on a web page[END_REF]. Lively components can be shared via a parts bin that uses Subversion for version control and publishing. On top of that, version control features like difference detection and merging have been integrated into the Lively Kernel environment [START_REF] Calmez | Explorative authoring of active web content in a mobile environment[END_REF][START_REF] Lincke | Evolving Tools in a Collaborative Self-supporting Development Environment[END_REF]. It employs a serialization based on JavaScript Object Notation (json) with support for object graphs (with cycles) and instance-specific behavior (i.e. functions that do not belong to a class). This is an elaborate solution for version control of arbitrary objects, but the serialization format is still not adaptable. All of the systems described above have in common that objects and their meta-objects (e.g. classes and prototypes) and the programming tools live in the same environment. More technically, the tools and manipulated objects share a single execution environment. This is different from programming environments for languages like C, Java, or Python, where the execution of the program under development is usually short-lived in comparison to the programming environment: during a programming session, the source code is repeatedly compiled and run for testing or debugging. 
This entails a strong separation of software artifacts from the rest, and the ubiquitous use of the file system as a medium of data exchange between the two. In contrast, loading equivalent software artifacts into a programming environment such as Squeak/Smalltalk can have immediate side effects. Checking out a class for a Java program will replace a single file of source code and any running programs will not be affected by that change until restarted. Checking out a class in Smalltalk does imply changing an existing class definition and possibly compiling some methods, immediately affecting all existing instances of that class. This is a fundamental difference between tracking live objects and tracking source code. Our vcs for objects has to account for that difference. Goals for a new version control system for objects Our goal is to put arbitrary objects under version control, not only meta-objects related to source code. To achieve this, source code and meta-objects should be strictly regarded as a special case of objects to be versioned. The diversity of domain objects and their possible repertoire of suitable data exchange formats should be accounted for by separating the serialization of objects from their captured snapshot representation. Moreover, certain version control operations, such as the handling of differences, should also be under the influence of domain-specific types. They may have special requirements for an operation (e.g. to produce differences that are at all useful for the consuming users) or the nature of a type might offer opportunities for improvements over a domain-unspecific, fixed set of procedures and tools. Another reason is that the knowledge and code about the representation of domain objects can stay close to the domain objects themselves. In contrast to version control via import/export mechanisms version control for objects should be controlled from inside the exploratory programming environment. This should support the construction of wellintegrated tools. On the other hand, the new system should not prompt for version control specific specialization too eagerly: users should focus on their project domain and supply specifics for version control only later in the process. An adequate solution that is already sufficient for many objects must be found. Specializations should be optional, rather than required. One motivation for version control is collaboration which requires an agreement on tools and platforms for exchanging versions. In order to lower the entrance barrier, we want to base it on an existing wide-spread system, such as Git. Further, sharing versioned files (or objects) usually involves a central place to host and exchange the versioned data. While this is not technically necessary in distributed version control systems the presence of platforms such as SqueakSource2 , GitHub, or Atlassian Bitbucket3 indicates that central repositories are very much desired. Reusing an existing vcs also means reusing existing platforms. Reusing a file-based vcs benefits resources that are inherently stored as external files. These external files should also be version controlled along with the artifacts resident in the image. Support for external files becomes even more important when multiple programming languages are used in a software project, and the different parts are developed in different programming environments. 
Like resource files, the primarily file-based source code must be kept synchronized with the Smalltalk parts as part of the software configuration management. Diverse representations of objects There can be many ways to represent one kind of object. Smalltalk methods, for example, can be represented as byte codes or source text. Capturing the former may be brittle because the literals could change and it is specific to the byte code set employed by the virtual machine. But restoring a compiled method from such a snapshot could be much faster than compiling the source code again. Some common requirements for snapshot and serialization formats, among which a trade-off must be made, are performance, portability and interoperability, expressiveness and completeness (i.e. that no information is lost), and human-readability. The most suitable form of representation might not even depend only on the type of object, but also on the use case of the representation. In the compiled-method example, for collaboration on GitHub, classes and methods are best represented in the form edited by the developers (i.e. as source code). However, if the use case is to distribute the software it can be more beneficial to share a binary representation for performance reasons. Object graphs In general, a single object does not have an inherent meaning on its own. Instead, what makes an object meaningful is a graph of objects that is reachable from it. For example, given a user interface that contains a box for text entry and a button to accept the entered text, the form alone would not be meaningful without the contained text box and button. On the other hand, not all objects that are reachable from a given object might be relevant for the purpose of tracking this object for version control. For example, in Squeak's implementation of Morphic, each morph has a reference to its containing morph, the owner. If the user chooses to track one morph, its owner might not be relevant for versioning the morph; the owner can change whenever the user puts the morph into another space of the programming environment. Further, if the owner reference were always followed unconditionally, tracking any morph visible on the screen would mean to track the whole world of visible morphs. Consequently, the developers desire to manage clear boundaries in the object graph. User intentions missing from object graphs The generalized variant of the issue raised above with the example of morph owners has already been documented for the Self Transporter. According to [START_REF] Ungar | Annotating objects for transport to other worlds[END_REF], the following information is required when these objects should be transported: • To which package a part of an object belongs (different parts could belong to different packages): As packages are meta-objects also, we should actually assume the reverse which is that a package is just a special composed object which we can use as a starting point for capturing a graph. Further, the reach of an object must be expressible, i.e. which references should be captured. • Whether a reference from one object to another should be captured as is or whether the referent should be replaced by a different value in the transport representation: This applies for example to or strictly dynamic objects such as caches. 
• Whether a referenced object is a global object that is assumed to be already present in the target system, thus making it preferable to capture only a symbolic reference to the object: This is relevant as some environments provide a fixed context which can be assumed and does not have to be transported. • Whether the identity of an object matters when it is restored: This is especially relevant when multiple references to the same object must be transported. If the identity matters all these references have to point to the same instance again. This is not relevant for value objects for example geometric points in Squeak/Smalltalk. • Whether an object should be recreated from an abstract expression rather than from a complete snapshot representation: This information is particular to the Self transporter. Since we do not want to dictate a particular serialization format or even prescribe the in-memory representation of snapshots for all kinds of objects, this point can be reformulated into "[It must be defined] whether a special type of snapshot should be used to represent an object". All of these issues require that objects or whole object graphs must be complemented by additional information, which will be here referred to as object metadata (or only metadata for short). AN ARCHITECTURE FOR OBJECT VERSIONING In this section, we present our solution to track objects and store them in existing version control systems. We describe how object graphs can be captured and rematerialized, how object identity can be preserved in this process, how differences between two editions of object graphs can be handled, and how it can be supported to have different formats in which snapshots can be stored in an existing vcs. Finally, a generic way to capture any kind of object is presented, so that users do not have to provide own solutions for all types of objects they want to track. Storing objects in versions In order to put anything under version control from inside of an exploratory programming environment, we require a connector component to a version control system. It should be able to access the version history of a repository, create new versions, and possibly manage independent development streams, such as branches. How this connection can be established without focusing on a particular vcs has already been treated by previous work. However, some assumptions must be established on how the vcs deals with objects. This part of the architecture is based on a subset of an abstraction for vcs called Pur [START_REF] Kleine | An abstraction for version control systems[END_REF]. The basic concept is a version which describes a revision of a set of objects. Versions can have any number of parent versions. This relationship forms the version history in a repository. Each version contains a snapshot of object graphs and their associated metadata. The Pur architecture deliberately does not define what a snapshot consists of because it depends on the particular application. However, Pur defines an entity named store that can create and restore snapshots, updating the objects in the store. What "restore" means depends on the particular type of the store. The example implementation Pur for Newspeak [START_REF] Kleine | An abstraction for version control systems[END_REF] defines two stores: an image store to capture and load classes and methods with a snapshot object, and a file store to write and read snapshots to files. 
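To make the version/store split more tangible, here is a minimal sketch in Python, chosen as a neutral notation rather than the Smalltalk of the actual implementation. The class and method names are invented for illustration and do not correspond to the real Pur or Squeak/Smalltalk APIs; the point is only the shape of the abstraction: a version bundles a snapshot with links to its parent versions, a repository accumulates versions, and a store mediates between live objects and snapshots.

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    """One revision: a snapshot of object graphs plus links to parent versions."""
    snapshot: dict                          # key -> captured object graph
    metadata: dict = field(default_factory=dict)
    parents: list = field(default_factory=list)

class Repository:
    """Accumulates versions; the storage backend (e.g. a file-based vcs) is abstracted away."""
    def __init__(self):
        self.versions = []

    def commit(self, snapshot, metadata=None, parents=()):
        version = Version(snapshot, metadata or {}, list(parents))
        self.versions.append(version)
        return version

class ImageStore:
    """Tracks live start objects and converts them to and from snapshots."""
    def __init__(self, capture, restore):
        self.roots = {}            # key -> live start object being tracked
        self._capture = capture    # live start object -> snapshot graph
        self._restore = restore    # snapshot graph -> live objects, updated in place

    def track(self, key, start_object):
        self.roots[key] = start_object

    def take_snapshot(self):
        return {key: self._capture(obj) for key, obj in self.roots.items()}

    def load_snapshot(self, snapshot):
        for key, graph in snapshot.items():
            self._restore(graph, self.roots.get(key))
```

A file store would plug into the same Repository but implement take_snapshot and load_snapshot in terms of reading and writing files, which is what allows the backend vcs to remain file-based.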
In the following subsections, we will further define the snapshots of our version control solution for objects and what they must be able to do. Live objects, snapshot objects, storage objects The described version control objects can be categorized into three realms. This makes communication about the different version control objects and about crossing the boundaries of the realms easier. The three realms are shown in Figure 1. The objects that the users usually interact with are attributed to the live realm. Domain objects belong to this realm, as well as their meta-objects that define their types and behaviors, such as classes and methods in Smalltalk. A defining property of live objects is that they would still exist even if version control were never attempted. The snapshot realm contains objects that stand in for editions of objects that originally came from the live realm. Objects in the snapshot realm reside in the memory of the running programming environment, just like live objects, but they exist solely for the purpose of version control. Objects for differences between editions, and other objects that explicitly deal with snapshots, are also attributed to the snapshot realm. The third realm includes all forms of objects that are intended to be stored outside of the programming environment. This realm is called the storage realm. Even if snapshots from the version control were transferred directly between two programming environments, the representation "on the wire" between the two processes would belong to the storage realm. Converting objects between the storage and the live realm always goes via the snapshot realm. Further, there are additional version control infrastructure objects. They deal with objects of one particular realm or operate at the boundary between realms, converting objects from one realm into another. Examples of these are stores, versions, and repositories. They will be attributed to the realm on which they operate or, if they operate in multiple realms, to the live or the storage realm. For example, serializers that convert snapshots to storage data are attributed to the storage realm. Finally, stores, snapshots and versions contain objects in different forms (as indicated by the different realms), but ultimately they always contain graphs of objects. Therefore, at a higher level of abstraction, stores, versions and their snapshots can share a common set of operations, such as computing differences between each other. To generalize such shared behavior, stores, versions, and their snapshots can all be treated as object containers. Preserving object identity across system boundaries In addition to the requirements for transferring objects between environments (see subsection 2.5), there are further requirements for tracking objects for version control: if an object already exists in a target environment, it should be possible to rediscover it there, and different snapshots of the same object must share a common identification. Rediscovering objects in the target graph is needed to update objects in-place. An alternative would be to re-materialize the whole object graph with new objects. This would preserve object references inside the captured graph, but references from outside of the captured graph would point to the then obsolete instances (see Figure 2). To identify objects in different snapshots, objects are assigned names if their identity needs to be preserved, which is the default.
However some objects do not identification: value objects and objects that can be identified in the target environment based on their properties (e.g. a PackageInfo object in Squeak is fully identified by its name). Such names must be globally unique, even across the boundaries of the programming environment. They can take any suitable form but should be small, resilient to collisions, and independent of their object's state (i.e. no hash values). Capturing and materialization of object graphs In this section, we describe how graphs of live objects can be captured to create graphs of snapshot objects, and how the inverse operation, materializing live objects from snapshot objects, can be performed (see Figure 1). Composition of version snapshots and object graphs. The snapshot of a version or a store is a collection of object graphs. Each graph is associated with object metadata that saves decisions about the capturing or storage, which might also be needed to recover the graphs properly. In addition to that, a unique key is assigned to each graph to access it in a snapshot. Each object graph stores the bidirectional mapping between object snapshots and the names of their objects. Additionally, each stores a start object, from which all other objects in the graph can be reached. New graphs are introduced to a store by telling it to track an additional start object under a given key. The other objects in the graph are then derived from the relationships among objects, guided or restricted by the metadata that is configured in the store. To capture the snapshot of a store, the store must enumerate the live object graphs that are known to it, convert them to the snapshot realm and collect them into the overall snapshot, together with the metadata (see Figure 3). Abstract algorithm to capture object graphs. To capture an object graph from a given start object, the graph has to be traversed and a snapshot has to be created for each object. All objects encountered that do not already have names from a previous capture operation must have new names assigned unless their identity does not matter. These names should be persisted in the store that tracks live objects, so future operations on the same graph of objects can look up the named live objects if they still exist. The names are also assigned to the respective snapshot objects, so the live object of a snapshot object can be looked up. The graph traversal can be realized with an exhaustive search algorithm (e.g. breadth-first search for graphs). The algorithm should include multiple-path pruning, so each object is captured only once. The live objects being captured must be able to direct the graph traversal to related objects. When a live object is encountered during the traversal, a message is sent to the live object to convert it into its preferred type of snapshot. This is the opportunity for a live object to decide that it should be replaced in the snapshot graph by another object (e.g. by a symbolic reference to itself). Live objects that know that they are the root of a sufficiently independent subgraph could also decide to start another graph traversal that works differently (see Figure 4). They must also respect the multiple-path pruning of the main traversal. If the snapshots of the objects in this subgraph should be registered normally (with object names) in the snapshot graph that is built by the outer graph traversal, a way to pass the inner snapshots out to the main graph must be implemented. 
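A minimal sketch of the main capturing traversal just described may look as follows (Python, purely for illustration; helper names such as to_snapshot and related_objects are invented and not part of any real API). It shows the breadth-first traversal, the name assignment, and the multiple-path pruning; the delegation to differently working subgraph traversals discussed above is omitted.

from collections import deque
from uuid import uuid4

def capture_graph(start, names, needs_identity=lambda obj: True):
    # names: persistent mapping from live objects to previously assigned names
    snapshots = {}                            # id of live object -> snapshot (multiple-path pruning)
    queue = deque([start])
    while queue:
        obj = queue.popleft()
        if id(obj) in snapshots:
            continue                          # already captured once
        if needs_identity(obj) and obj not in names:
            names[obj] = uuid4()              # assign a new, globally unique name
        snapshot = obj.to_snapshot()          # the object chooses (or replaces) its own snapshot
        snapshot.name = names.get(obj)
        snapshots[id(obj)] = snapshot
        queue.extend(obj.related_objects())   # the live object directs the traversal
    return snapshots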
Alternatively, the aim of the separate traversal might be to encapsulate the results in a single snapshot object. Live objects (and objects involved in the capturing traversal) must further be able to access the object metadata, so gaps of missing information can be filled with this metadata. For example, when one attribute of an object should be captured with a default value instead of the actual value. There might be more than one way to capture one type of live object. For example, a CompiledMethod in Smalltalk could be captured through source code or as compiled byte code. To support choosing a different type of snapshot than the one the object prefers, it must be possible to specify this in the metadata associated with the object. Further, either the graph traversal or the live object itself has to respect the override. 3.4.3 Abstract algorithm to materialize object graphs. Converting snapshots to live objects can also be realized with a traversal of the snapshot object graph, starting from the snapshot of the start object. As during capturing, each object should only be rematerialized once. When a snapshot is to be materialized, a message is sent to it to convert it back to its original live object. Depending on the kind of snapshot this may involve, for example, creating a new instance of the type of the captured object, or compiling source code. If the live object had itself replaced by another during capturing, the replacement object will be rematerialized instead. To get the original live object back, another message is sent to the materialized object, essentially telling it to "bring itself back to live". A symbolic reference to a globally accessible object would at this point resolve itself and return this global object. Live objects can also perform other post-materialization tasks, such as notifying observers of changes. When snapshots are materialized, it is possible that a live object already exists (i.e. there is a live object with the same name ). In this case, the snapshot object should be instructed to materialize itself into the existing live object if possible. This ensures that references to the live object do not become stale by materializing a new live object. Differences between snapshots After having described how objects can be converted between the live realm and the snapshot realm, in this section we motivate the need for differences between graphs, describe how they can be detected independent of a particular application domain, and how these differences can be applied to object graphs. About the importance and granularity of differences. Differences are important, not only for the users to consume, but also for optimizing certain operations. For example, when a new version is to be saved, first computing the differences between the working copy and its parent version makes it possible to ignore all unchanged objects. The unchanged objects may not need to be serialized again and can be reused for caching of the newly created version. Assuming that only a small part of a system changes between versions, processing only the differences can mean that fewer objects must be processed overall. Implementing differences requires additional implementation effort (in comparison to building a purely snapshotbased system). But the effort can be worth it for both the user experience and the performance of the version control system. There are multiple levels at which differences can be computed. (1) At the object container level: Which object graphs have changed? 
(2) At the object graph level: Which objects have changed? (3) At the object level: Which parts of an object have changed? 3.5.2 Abstract detection of differences. The specific structure of differences depends on the structure of the snapshots. However, the structure of collections of differences in an object graph and the structure of differences between object containers can be generalized. The general principle of detecting changes is described in this section. An object graph is defined by its start object and contains a mapping between names and object snapshots. Thus, the differences between two object graphs can be expressed as the differences between the start objects plus the differences for any objects with the same name. To collect the differences for the snapshots, we can perform a simultaneous graph traversal in the the graphs of snapshots that should be compared. One of the graphs is the left-side graph, containing snapshots "before" certain changes that should be detected, the second is the right-side graph, which captures the situation "after" the changes, and an optional third graph would be the base graph, which contains snapshots from the base version of a merge, or more generally the base version of a three-way difference of graphs. Assume two graphs that contain different snapshot contents for some names. Beginning from the snapshots of the start objects in each graph, two snapshots are compared in each step. The determination of the local differences (i.e. changes that apply to one object) is up to the implementation of the snapshots being compared because they know their structure best. The result of the comparison must be some kind of differences object. We assume that the comparison will somehow iterate over the relevant relationships of the snapshot object (the "referrer") and that the other ends of these relationships (the "referents") may need to be compared among the two graphs again (see Figure 5). Depending on the names of the referents, three situations can arise: First, if the referent snapshots have the same object name, then the referrer object is in relation with the same object in both graphs. This means that this relationship has not changed. The referents must then be compared in a subsequent traversal step to detect differences deeper in the graph. Second, if the referents have different names, then the relationship has changed and, thus, a difference for the referrer exists. While users might be interested in the differences between these distinct referent objects, this comparison does not make sense in general, e.g. when two of the referents are of completely unrelated types. Third, if one of the referents has no object name, meaning that the identity of the captured object is not tracked, further information is needed. If it is a snapshot of an immutable value object, then the difference applies to the referrer. If it is a snapshot of an object that should be mutated and that supports fine-granular differences in both graphs then there is no difference for the referrer. If a three-way difference is computed and there are three different names for the snapshots in a set of referents (or two names and one nameless snapshot), then the relationship was changed both from the base to the left-side and from the base to the right-side, but to different objects. This is a conflict in the referrer and must be appropriately recorded. 
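The core of this simultaneous traversal can be sketched as follows (Python, illustrative only; compare_to and referents are invented names, not an actual API). The sketch covers only referents that are present in both graphs; the treatment of added and removed objects is described next.

from collections import deque

def diff_graphs(left_start, right_start):
    differences = {}                          # object name -> local differences
    seen = set()
    queue = deque([(left_start, right_start)])
    while queue:
        left, right = queue.popleft()
        if left.name in seen:
            continue                          # each named object is compared only once
        seen.add(left.name)
        local = left.compare_to(right)        # the snapshots know their own structure
        if local is not None:
            differences[left.name] = local
        for l_ref, r_ref in zip(left.referents(), right.referents()):
            if l_ref.name is not None and l_ref.name == r_ref.name:
                queue.append((l_ref, r_ref))  # same object on both sides: compare deeper
            # different or missing names mean a local change in the referrer,
            # which compare_to is expected to have recorded already
    return differences

Note that zipping the referents assumes both editions expose their relationships in a matching order, a simplification that a real implementation cannot rely on.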
When the referent from the right-side graph does not have a corresponding snapshot in the left-side graph, a new object has been introduced to the graph. This must be noted in the differences so it could be added to a graph to which these differences should be applied. If there is a difference for the referenced object, then it will be recorded under this name anyway. For the case when an existing object from the leftside graph can only be reached via the added object in the right-side graph, the relationships of the added object must be followed (see Figure 6). For each referent with a name that already exists in the left-side graph, the difference traversal must continue with these same-named snapshots from either side. If a three-way difference is computed and a name can be found via this mechanism in the left-side and right-side graph, but not in the base graph, then the same object was added in both changes from the base. Any differences between the two sides are then automatically conflicts. If the right-side graph does not contain an object from the left-side graph, that object has been removed from the graph. We can detect this by marking each name in the leftside graph which we have also seen in the right-side graph. All unmarked objects at the end of the traversal have been removed. Whether the removal should be noted in the differences depends on how the store to which these differences should be applied behaves. It does not need to be noted if simply removing references to the removed object is sufficient (e.g. when automatic garbage collection is used). It is also safe for uncaptured objects (outside of the graph) that refer to the object that has been "removed" from the captured graph. In case the store has to take care of the deletion itself (e.g. files on disk may not be deleted automatically), the removal of an object should be noted explicitly in the differences. When one snapshot is told to compare itself to another, the snapshot implementation can decide to start an own graph traversal for differences. Abstract application of differences. Applying differences relates to detecting differences like materialization relates to capturing. But since the differences between two snapshot graphs are made up of the collection of differences to named objects (as defined above), no further object graph traversal is needed this time. Instead, the individual differences must simply be applied to their respective objects in the left-side graph. Sometimes, objects appear in the right-side graph that do not exist in the left-side graph. In this case, these objects have been added and this addition must be reproduced when the differences are applied. If a store on live objects implements the application of differences and new objects must be created, they must be materialized similar as described in subsubsection 3.4.3, but with a variation. If a materialized object refers to another named object, this referent object must be looked up in the target graph as usual. But if it already exists, it does not need to be materialized (which would be the case in the abstract materialization algorithm above). Instead, only the reference to this object must be materialized in the referrer, while the existing referent object is only subject to a change if there is an own difference for it. Storing objects outside of the programming environment Objects eventually have to leave the programming environment to the storage realm, in order to be shared with other programmers or authors. 
When snapshots are exported from the programming environment, they must be converted into a representation that suits the target storage. Snapshots might have more than one form of representation. For example, a snapshot of a formatted text could be converted into Markdown or some specialized Extensible Markup Language (xml) format. For some objects, users may want to have control over the export format, but for others, they might not care. We note that there can be a variety of storage strategies for each snapshot type. These strategies are represented through serializers and deserializers. Serializers convert graphs of snapshots into storage objects (e.g. files), and deserializers should do the inverse. Snapshot types should define a preferred serializer that is generally suitable for the objects they represent. For example, character string snapshot could refer to a serializer that outputs the text in Unicode-encoded text files. The serializer used for a graph of objects must be recorded in the object metadata because users may have chosen a different serializer. Deserializers should be able to answer the question "Can you read the output of this serializer?", so a store can choose a suitable deserializer based on the information about the serializer. Because the object metadata must be accessible before the correct deserializer is known, the format of the metadata must be determined by the store. A store must therefore perform the following steps to serialize an object graph: (1) Look up the serializer according to the metadata, if none is defined use the preferred serializer of the start object and add that information to the metadata. (2) Instruct the serializer about the key of the object graph to be stored (the serializer may derive the storage location from the key). (3) Invoke the serializer with the object graph. (4) Write out the metadata to the storage medium. The steps to deserialize storage objects to snapshots are: (1) Read in the metadata from the storage medium. (2) Based on the metadata about the serializer, look up a suitable deserializer. (3) Instruct the deserializer about the location of the storage objects. (4) Invoke the deserializer to obtain a graph of snapshots. Finally, a store may need to find the storage locations of objects graphs in the first place. How it does that is basically implementation-defined, but a good strategy is to maintain a dictionary that connects graph keys with locations. The store would have to store this dictionary in a well-known location and format. Generic snapshot format for objects The previous sections described an abstract framework for version control of diverse types of objects. In this section, we describe how objects can be captured and compared when there are no specialized snapshots available for them. As a consequence, all objects should become trackable. 3.7.1 The structure of objects. Objects can be viewed as comprising a number of slots, as in Self or the Common Lisp Object System (clos) to denote "a component of an object that can store a value" [START_REF] Pitman | Common lisp hyperspec[END_REF]. There may be different types of slots, such as instance variables or unordered items of a collection. Slots can have an identifier, such as a variable name, symbol, or index, but they do not need to. However, it must be possible to look up a slot in an object. A slot can also refer to behavior, e.g. a method in case of Self or JavaScript. 
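A minimal data model for this slot view of objects might look as follows (Python, illustrative only; the class and function names are invented and do not correspond to the prototype's actual classes, which are described in section 4).

class Slot:
    # One captured component of an object, e.g. an instance variable or an indexed item.
    def __init__(self, kind, identifier, value):
        self.kind = kind                # e.g. "instance-variable" or "indexed"
        self.identifier = identifier    # variable name or index; may be absent for some slot kinds
        self.value = value              # a nested snapshot, or the name of a named snapshot

def slots_of(obj):
    # Enumerate the slots of a plain Python object; real systems would ask the object itself.
    if hasattr(obj, "__dict__"):
        for name, value in vars(obj).items():
            yield Slot("instance-variable", name, value)
    if isinstance(obj, (list, tuple)):
        for index, item in enumerate(obj):
            yield Slot("indexed", index, item)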
The snapshot of an object is the collection of its captured slots, and its assigned name if its identity should be kept. This schema can be specialized to accommodate specia objects, such as primitive values. Slots could refer to other object snapshots directly or by name (if the referent has one). In programming languages with strong typing, we have to keep a reference to the type of the original live object -or its replacement. Thus, when the snapshot is rematerialized into a live object, the correct type can be instantiated. The default implementation of the capturing message should enumerate all slots in the live object and add slots of the appropriate type to the snapshot. Objects referenced by the captured slots also have to be traversed. When the snapshot slots are created, we have to consult the object metadata whether the slot should be captured at all or whether there is a default value. To materialize an object snapshot, we create a new (uninitialized) instance of the type of the captured object (which is either the type of the original live object or the type of the replacement). Each slot must then be materialized into this object. What must be done to that object depends on the type of each slot. For example, a slot for an instance variable should assign its materialized value to the instance variable. Differences in single objects. How objects can be mutated depends on the programming language. Commonly a slot value can be changed to a different object. In some languages, we can also add or remove slots. The differences between two object snapshots can therefore be described by the changes to the slots of the object (reassignments, additions, removals). If an object is replaced by another one everywhere in the system, there might also be the difference type "object replacement". But in most cases, it is sufficient to record that all slots referring to the "replaced" object have changed. When we compare an object snapshot with another one (as described in subsubsection 3.5.2), we must match its slots with the slots of the other snapshot. If no matching slot can be found in the other snapshot, the slot was either added or removed. In both cases, this must be added to the collection of slot changes. If a matching slot is found, the two slots are compared. Should the slots be of a kind that references another snapshot, we determine the differences between the referents of the slots as described in subsubsection 3.5.2. Should it be determined that there is a local change to the referrer, which in this case means that a different object has been assigned to the slot, this reassignment is added to the collection of slot changes. OBJECT VERSION CONTROL IN SQUEAK/SMALLTALK WITH GIT A prototype implementation4 of our approach has been realized in Squeak/Smalltalk and connects to Git as the backend vcs. The implementation is hosted on GitHub5 . Object containers and object graphs Capturing live objects is the responsibility of the SquotImageStore. Given an object and a key, it will traverse the object's graph for capturing, providing a bidirectional mapping between paths and start objects. For the prototype, the only backend vcs at the moment is Git, which is a file-based vcs, the keys of graphs are also the paths to the files or directories in which the graphs will be stored. Because graph keys need to be unique only within a store, the double role as paths is unproblematic. 
In addition, an image store also tracks metadata for each path, e.g., the desired serializers or deserializers, or arbitrary data. For bookkeeping, the images store keeps an object registry that maps all known names to their live objects and a collection of SquotObjectGraphs, which in this case serve as local registries for object names. The object registry makes sure that object names are unique across the object graphs. UUIDs are used for names in this implementation. Certain metadata are only relevant to the image store and not to be persisted. This transient store info is available to all objects that are being captured or materialized. Such information might include which instance variables of an object not to capture. Capturing an image store's snapshot creates a SquotSnapshot. Snapshots of ordinary objects are called shadows in Squot. Each SquotSnapshot has a dictionary of SquotArtifacts. An artifact is the combination of an object graph, its key (or path), and the associated metadata. An artifact is an element of an object container, which can be queried for artifacts. The result is a mapping from paths (as above) to SquotArtifact. Generic object snapshots Snapshots for generic objects, implemented in SquotObjectShadow, contain a collection of slots, the live object's class and the class of a potential replacement for the live object, if desired. Slot types are provided for the two kinds of Smalltalk object contents: instance variables (SquotInstVarSlot) and indexable variables (SquotVariablePartSlot). A slot is key-value-pair, whose key is the variables' name or index, respectively. The value is the shadow of the variables contents. Certain kind of objects need special treatment. Objects with value semantics (i.e., SmallInteger, Character, and SmallFloat64) and certain well-known objects (true, false, and nil) are shadowed by SquotPrimitiveValue, a simple wrapper. This allows serialization by value or well-known name. CompiledCode is handled by a SquotCompiledCodeShadow, as its special layout (both indexable reference parts and indexable byte parts) does not fit the generic slot model. For efficiency, certain collections have their own shadows: indexable collections of well-known bit-or character-types (e.g. ByteArray, WordArray, ByteString, and WideString) are handled with aSquotBitsObjectShadow that wraps a copy of its contents rather than creating SquotPrimitiveValue wrapper for each slot. Capturing To start capturing live objects, a SquotObjectCapturer is handed to a start object (see Listing 1 for the default implementation). It can decide to be replaced (squotReplacement:), if necessary, and then hands the to-be-captured objects to the capturer. The capturer performs a breadth-first search to build a SquotObjectGraph. When new objects are encountered, the capturer (a) assigns a name to newly encountered objects, (b) registers them in the object graph and the snapshot's object registry and (c) recurses by sending captureWithSquot:. If necessary, the object can decide to be shadowed by something else than a SquotObjectShadow via squotShadowFactory. The capturer then creates a new shadow and in a second step instructs it to initialize itself based on the live object. This separation allows cyclic object graphs. Generally, initialization comprises the enumeration of a live object's variables, adding them to the shadow, creating and populating slots, and queueing them for capturing. 
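Why the two-step creation permits cyclic object graphs can be illustrated with a small sketch (Python; this is a simplified recursive stand-in, not Squot's actual code). The shadow is registered before it is filled, so a cycle encountered while filling it finds the already registered shadow instead of recursing forever.

def shadow_for(live_obj, registry):
    # registry maps ids of live objects to their shadows (illustrative only)
    if id(live_obj) in registry:
        return registry[id(live_obj)]              # cycle or shared reference: reuse the shadow
    shadow = {"type": type(live_obj).__name__, "slots": {}}
    registry[id(live_obj)] = shadow                # step 1: register the still empty shadow
    for name, value in vars(live_obj).items():     # step 2: initialize it from the live object
        if hasattr(value, "__dict__"):
            shadow["slots"][name] = shadow_for(value, registry)
        else:
            shadow["slots"][name] = value          # primitive-like values are stored directly
    return shadow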
Materialization The SquotShadowMaterializer provides snapshot traversal for materialization in a recursive, depth-first manner. Given a start object shadow, the materialization starts in rematerialize: if the object shadow has not been materialized already. If a shadow has an object name, and a live object of that name exists, the shadow is materialized in place in the live object (cf. Listing 2). If this is not possible, a new live object with the shadow's content will be created, which, if the object name is found, will replace the old one. During materialization, a shadow with slots will instruct its slots to materialize, recursively. The slots, in turn, use the materializer to rematerialize their values and put them at the appropriate places in the live object. Similar to capturing, the materialization is two-step, to allow cyclic references to materialize correctly (cf. Listing 3). Differences The implementation supports differences at three levels: SquotPatch handles the object container level, SquotObjectGraphDiff the object graph level, and SquotObjectDiff the object level (cf. subsubsection 3.5.1). The SquotDiffBuilder provides graph traversal for the differences computation, constructing a SquotObjectGraphDiff. In each step, a left-side shadow is compared with a right-side one, and optionally a base shadow. Their object names are compared as described in subsubsection 3.5.2. Cycles are dealt with by splitting the creation and initialization of difference objects (see Listing 4). The multiple-path pruning in this traversal works by only checking that the left-side shadow has not been encountered yet. The rationale is that the changes to a single object do not depend on the path that led to it. The object must have changed in the same way wherever it is referenced. Anything else means that a former reference to this object was in fact changed to another object with a different name. Analogous to capturing, the comparison of slot-based snapshots works by delegating the comparison to the contained slots, storing the result in a SquotObjectDiff. When the object names of the compared slot values are equal, no difference is recorded, but the referents are compared further. When the object names of the values of two compared slots are different or absent, a SquotSlotReassignment is recorded, which remembers both values. Differences can be applied to shadows using a SquotObjectPatcher, which eventually sends squotApplyTo:with: to each difference object, resulting in a modified object shadow: a SquotObjectDiff simply applies all its slot differences to the shadow, and a SquotSlotReassignment replaces the slot's value with the remembered right-side shadow. Differences can be applied to a SquotImageStore to change live objects, re-using the materialization process with a patcher instead of a materializer. Serialization and deserialization For de/serialization of artifacts, Squot includes the SquotFileSystemStore that operates on a file directory. To aid deserialization, it stores object metadata and a table-of-contents as Smalltalk Object Notation (ston) [START_REF] Van Caekenberghe | Ston: a smalltalk object notation[END_REF] in dedicated files. Preferred serializers and deserializers can be set in an artifact's metadata. Since Squot provides general purpose shadow objects, it also provides general purpose serializers for them: a binary serializer based on Squeak's SmartRefStream and a serializer based on ston.
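The serializer selection from subsection 3.6, as realized around the SquotFileSystemStore, can be summarized in a short sketch (Python, illustrative only; the attribute and method names, as well as the serializer registry, are invented, and JSON merely stands in for the ston metadata format used by the prototype).

import json
from pathlib import Path

def serialize_artifact(artifact, directory, serializer_registry):
    name = artifact.metadata.get("serializer")
    if name is None:
        serializer = artifact.graph.preferred_serializer()            # fall back to the preferred one
        artifact.metadata["serializer"] = type(serializer).__name__   # record the choice for later
    else:
        serializer = serializer_registry[name]                        # look up a serializer by its recorded name
    serializer.write(artifact.graph, Path(directory) / artifact.key)  # the key doubles as the path
    metadata_path = Path(directory) / (artifact.key + ".metadata")
    metadata_path.write_text(json.dumps(artifact.metadata, default=str))

Deserialization would read the metadata first, pick a deserializer that can read the output of the recorded serializer, and only then load the graph, mirroring the steps listed in subsection 3.6.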
DISCUSSION In this section, Squot and the architecture from section 3 are evaluated with regard to the goals described in section 2. Observations, issues and possible improvements are discussed. Squot in practice First of all, the architecture is in actual use through the Squot and Squit packages. The implementation as described in section 4 is capable of tracking, serializing, and restoring complex object graphs such as a workspace morph with variable bindings (see Listing 5). The example involves different kinds of objects and circular dependencies in the object graph, as well as metadata that influences the capturing. Further, the created file system structure can be managed through a filebased vcs. In the case of Squot, Git is used through the Squit Git connector component. Further, we have created specialized snapshot and serialization formats for Squeak packages, hashed collections, and string objects. These specializations show that: • the general (slot-based) approach to object snapshots is extensible to support types with special requirements, • domain objects can opt out of the general approach and implement their own, • existing classes for snapshots and differences (e.g. those of Monticello) can be reused and adapted for Squot, and • the format of snapshots and the format of the generated files can be customized separately -to use a different file format, it is not always required to change the kind of snapshot used for an object. Limitations The following limitations span from limitations arising from the current implementation to conceptual limitations which require further research. State migration. One problem associated with changing the structure of classes -such as adding or renaming instance variables -is migrating existing objects to the new schema. Any migrations that users apply to tracked objects in own environment will also be applied in other users' environments. These objects have been captured after receiving the treatment of the migration, after all. But objects that are not tracked with Squot cannot benefit from that. Therefore, it is still advisable to take other measures to migrate existing objects, such as migration code in pre-and post-load scripts of packages. Merging objects. While the merging of packages has already been simplified in Squot, merging for arbitrary objects (with slot-based shadows) is not implemented yet. The reason is that further research must be conducted first on how objects can be merged correctly. In particular some basic assumptions from line-based merging via three-way differences do not hold in object graphs. This problem is out of scope for this work. 5.2.3 Sharing objects between graphs. One potential problem with the current implementation is that objects cannot be shared among multiple object graphs. If one object is referenced in multiple graphs, these graphs will overlap and create redundant snapshots (and storage data) for the objects reachable from the common object. At least the redundant snapshots will have equal names due to the shared object registry in an image store. Shared objects would be "rejoined" when they are materialized. But it could also lead to inconsistencies when changes to common objects are committed for only one of the involved graphs. So far users have to ensure that graphs do not overlap. 5.2.4 Re-integrating materialized objects. 
If multiple parts of the system are tracked in separate object graphs, it could be confusing where they are composed again in the target environment and how this composition configuration should be tracked. More generally speaking, it is unclear where new live objects should be put when they are materialized from a snapshot. Materialized objects are added to an image store, which is probably not part of the software or document under development. The materialized objects might need to be assigned to some class variable or associated with content holders, such as graphical tools. Other approaches could be places and generalized references as found in Common Lisp [START_REF] Pitman | Common lisp hyperspec[END_REF]. Partial materialization. Another concern is that of materializing only parts of a version. Currently, Squot assumes that an object graph should be deleted (i.e. untracked) if it is absent in a working copy's store, but present in the parent version. If a repository contains packages for different platforms, so that not all of them can be loaded at the same time, support for unmaterialized object graphs without signifying their deletion becomes a requirement. It could be necessary to make the untracking of object graphs explicit in the state of a store to achieve this. RELATED WORK Version control has been investigated previously for both source code and object contents. Moreover, moving objects between systems as well as combining contents of different objects have received their share of attention in the literature. Version control approaches -code-centric. Pur [START_REF] Kleine | An abstraction for version control systems[END_REF] is a version control framework that abstracts from common vcs with a state-based history model and directed acyclic graphs as version history. Squot is a partial implementation of the Pur concepts. Image and file store, working copy and snapshot types make up the Pur frontend implementation in Squot. Most Smalltalk systems (VA Smalltalk: ENVY/Developer [START_REF] Pelrine | Mastering ENVY/Developer[END_REF], VisualWorks: Store [2], Dolphin source tracking system [START_REF]Source tracking system[END_REF], Squeak and Pharo: Monticello [18], among others) provide specialized version control systems for their meta-objects, usually classes and methods. They do not support tracking arbitrary objects, or do so only in a fixed serialization format. Orwell [START_REF] Thomas | Orwell-a configuration management system for team programming[END_REF] is a Smalltalk vcs and configuration management system that supports versioning of methods, classes, applications and configurations, all of which are stored in a single object database. It does not support tracking arbitrary objects. Iceberg [START_REF]Iceberg[END_REF] is a source code vcs tool for Pharo/Smalltalk to interact with Git and GitHub. Version control approaches -object-centric. CoVer [START_REF] Haake | Take cover: exploiting version support in cooperative systems[END_REF] adds version control to a collaborative hypermedia editing system with asynchronous editing. All versions of an object are combined into a multi-state object to track its identity. Since arbitrary objects can be mapped to hypertext [START_REF] Haake | Take cover: exploiting version support in cooperative systems[END_REF], they can be managed collaboratively using CoVer. Squot does not include any model for collaboration.
COOP/Orm [START_REF] Magnusson | Fine grained version control of configurations in coop/orm[END_REF][START_REF] Magnusson | Fine-grained revision control for collaborative software development[END_REF] provides fine-grained version control and configuration management for sets of documents. Each document comprises a tree (as opposed to Squot's general graphs) that might contain classes, methods, or textual paragraphs, among other. Versions are created for documents and multiple versions of the same document can occur in a configuration. In Squot, only one version of an object can be checked out at the same time. HistOOry [START_REF] Pluquet | Executing code in the past: efficient in-memory object graph versioning[END_REF] is an object versioning system to record the state of selected fields of objects. It is provided as a language extension for Squeak and Pharo and, hence, does not integrate with external-storage vcss. HistOOry can create views on the snapshots of an object that are polymorphic with the live object, which is not possible in Squot yet. CoExist [START_REF] Steinert | Coexist: overcoming aversion to change[END_REF][START_REF] Steinert | Built-in recovery support for explorative programming[END_REF] provides continuous object versioning with new snapshots being created automatically whenever a program is changed. To achieve short response times a state-based approach for methods and classes is used. It does not integrate with external-storage vcs, either The Lively Kernel [START_REF] Steinert | Object versioning to support recovery needs: using proxies to preserve previous development states in lively[END_REF] includes object versioning facilities: any object is aware of its previous versions, an global queries allow to identify following versions of objects. Lively Kernel includes a parts bin of reusable graphical objects [START_REF] Lincke | The lively partsbin-a cloud-based repository for collaborative development of active web content[END_REF], which is versioned using Subversion. Simulink [START_REF] Grace | Simulab, an integrated environment for simulation and control[END_REF] is a programming environment that does not primarily involve source code, rather, hierarchical blocks are used to model simulation systems. The vcs integration bears similarities to Squot's architecture: different vcs backends [START_REF]Set up svn source control -matlab & simulink[END_REF][START_REF]Set up git source control -matlab & simulink[END_REF] and different kinds of objects that can be tracked and merged [START_REF]Merge simulink models from the comparison report -matlab & simulink[END_REF]. Tracking and transportation of objects. Vegdahl [START_REF] Steven | Moving structures between smalltalk images[END_REF] describes the the challenges of moving objects between Smalltalk systems, including solutions to circular and symbolic references. The essence of this approach can be found in the standard serializer of Squeak as well as the more recent ston [START_REF] Van Caekenberghe | Ston: a smalltalk object notation[END_REF]: during serialization, objects are assigned unique identifiers which are used as references. Only the first occurrence of an object will serialize its contents. Certain objects, such as classes, are always serialized symbolically. Self uses the Transporter [START_REF] Ungar | Annotating objects for transport to other worlds[END_REF] to capture objects and transforms them into Self expressions that recreate the captured objects. 
In that course, annotations to the objects are use to preserve metadata such as categorization. The Transporter has no vcs capabilities. VisualWorks Parcels [START_REF] Miranda | Parcels: a fast and feature-rich binary deployment technology[END_REF] are a fast binary deployment mechanism for objects and source. It supports circular dependencies among and partial loading of parcels. Similarly, Fuel [START_REF] Dias | Fuel: a fast general purpose object graph serializer[END_REF] is a fast binary object serializer for Pharo and Squeak in the vein of parcels, but employs object clustering for efficiency. Fuel's object graph analysis is similar to capturing in Squot. Object merging. Operation-based and state-based merging of objects emerged in the setting of asynchronous collaborative editing of graphical objects [START_REF] Ignat | Operationbased versus state-based merging in asynchronous graphical collaborative editing[END_REF]. Operation-based merging has advantages in solving conflicts that would otherwise require manual resolution and is more efficient for large object sets. Nevertheless, Squot uses state-based merging because Squeak does not provide practical means tocapture all operations on arbitrary objects. Further, most contemporary vcss assume a state-based approach, too. Despite that, for efficiency, Squot employs parts of the operation-based approaches by computing the differences between snapshots first and applying those instead of processing whole snapshots. For xml documents, a three-way merge approach has been documented [START_REF] Lindholm | A three-way merge for xml documents[END_REF]. The approach might be applicable to Squot, as xml documents are essentially "ordered trees with labeled nodes", which matches to Squot snapshots. CONCLUSION AND OUTLOOK We set out to devise a solution to the problem that, in exploratory programming environments that are built around objects, regular (non-code) objects are precluded from specialized version control solutions. Existing version control technology should be reused to ease collaboration. Ultimately, version control for objects should be as practical and accessible as it is for files today. With the architecture presented, a first step towards that goal has been made. It provides a framework for modeling and comparing editions of objects. A generic solution to track arbitrary kinds of objects is provided, so the effort of versioning new objects is kept low. When a specialized way to handle editions of an object is needed, the architecture provides variation points, so that custom software can supply their own formats of snapshots and serializations. The prototype implementation Squot proves that the architecture is functional and that the status-quo on object version control could be improved in Squeak/Smalltalk. By building on the ideas of Pur [START_REF] Kleine | An abstraction for version control systems[END_REF], the architecture should be portable to various backend version control systems. The journey towards making version control for objects feel as natural as it has become for files has not come to an end yet. Besides the technical challenges illustrated in section 5, conceptual challenges remain. To provide a fullyfunctional version control experience for objects, merging facilities for arbitrary objects must still be implemented. In general, graphical tools would support working with snapshots and differences. 
To validate the generality of the proposed architecture, an implementation in another programming environment should be undertaken. While Lively Kernel already supports object versioning, it could be examined how our solution can extend their approach. A TERMS For an overview of selected terms associated with the lifecycle of objects under version control, have a look at Figure 7. The terms used in this work (with some deviations in section 4, which describes the prototype implementation of the proposed design) are as follows. live object An object that would exist even without any support for version control in the programming environment. snapshot object An object that represents a live object for the purpose of version control. to capture an object Convert a live object to a snapshot object. to materialize an object Convert a snapshot object to a live object. This may have side effects on other live objects, as noted earlier in this section. tracked object A live object that is currently considered for version control and that can be captured at some point. captured object Usually a live object that has been captured. to serialize an object Convert an object to a series of bytes, for storing the object to a stream, which can end up in a file. to deserialize an object Convert a series of bytes that was generated by serializing an object back to an object. version (without a referent noun) An object that describes a set of object graphs at some point in time, with metadata such as the author who created this version. Such versions form the version history in a repository. The Git equivalent would be a commit. version (of an object) An object as present in a version as defined above. edition (of an object) An object in a state that need not necessarily be present in any version. It could be a live object with changes not persisted in a version, or a modified snapshot object that was derived from applying only some of the differences between two versions to a version of an object. The set of versions of an object is always a subset of the set of editions of an object. to apply differences to an object Transform an object from one edition into another edition of itself. Synonym: to patch an object. merge The operation of combining three editions of each object in a set of objects into one edition, and the result of that operation. The color code used in Figure 7 will also be used for other figures in this report when applicable: live objects are green, snapshot objects are yellow, and objects that result from serialization or are involved in that process are gray. B SYSTEM ARCHITECTURE SUPPLEMENT This section supplements the implementation description in section 4. The relationships between the implementation entities of the Squot prototype as described in subsection 4.1 are depicted by the diagram in Figure 8. The relationships between the capturing-related classes and the difference-related classes as described in subsection 4.5 are depicted by the diagram in Figure 9. Figure 1: An overview of the three realms making up our architecture for object versioning. Figure 2: External reference into a captured object graph. Figure 3: Example setting for object graphs, start objects, and object names. The store captures all objects that are reachable from its known start objects. Figure 4: Different graph traversal strategy for a subgraph.
Line styles indicate possibly very different relationships among the objects in the subgraph (e.g. pointers, naming conventions). Figure 5: Two graphs being compared, currently inspecting a particular relationship from the two editions of the object named A. In graph 1, B is at the end of this relationship; in graph 2, it is a different object C. B from graph 1 and C from graph 2 are the referents in this relationship, while A from graph 1 and A from graph 2 are the referrers. The change in name from B to C means that the two captured editions of A relate to different objects. Figure 6: Two graphs being compared. In the right graph an object C has been added and it replaces B at the end of the reference x from A. B can only be reached from A via C in the right graph. The changes to B are "hidden" behind the added object. The difference detection graph traversal must therefore follow the relationships of C or it will not detect the changes in B. Listing 1 (excerpt, line numbers removed): | toCapture | self class isImmediateClass ifTrue: [^aCapturer capturePrimitiveValue: self]. self class isBits ifTrue: [^aCapturer captureBits: self]. toCapture := self squotReplacement: aCapturer. ^aCapturer capture: toCapture as: toCapture squotShadowFactory Figure 7: Common terms to describe operations of converting objects between different representations. Adopted variants in bold. Figure 8: Relationships among object containers, SquotSnapshot, SquotImageStore, artifacts, and graphs of shadow objects (snapshot objects). Key: UML. Figure 9: Relationships among object containers, artifacts, object graphs, and their respective difference classes. Key: UML. http://squeaksource.com (last accessed May 10, 2023). https://bitbucket.org (last accessed May 10, 2023). Refer to Appendix B for supplemental information on the system's architecture. https://github.com/hpi-swa/Squot (last accessed May 10, 2023). Listing 2 (excerpt, line numbers removed): materializedObject := (aShadow materializeAs: anObject with: self) squotReactivateWith: self. anObject becomeForward: materializedObject copyHash: false.
04095581
en
[ "scco.neur" ]
2024/03/04 16:41:18
2022
https://hal.science/hal-04095581/file/Poster_Corticodays_2022_vf.pdf
Perrine Rose Rose Seguin Emmanuel Maby Anatole Otman Mélodie Fouillen Dominique Morlet Jacques Luauté Pascal Giraux Jérémie Mattout Attentional markers during auditory BCI: healthy subjects and patients with locked-in syndrome ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Significant attentional modulation for each electrode Method: Threshold-free cluster enhancement (Smith and Nichols, 2009) Attentional markers during auditory BCI: healthy subjects and patients with locked-in syndrome P . SÉGUIN a, b , E. MABY a , A. OTMAN a , M. FOUILLEN a , D. MORLET a , J. LUAUTÉ a, c , P. GIRAUX a,b , J. MATTOUT a ca Centre de Recherche en Neurosciences de Lyon, INSERM U1028-CNRS, UMR5292, 69000 Lyon, France; Université Lyon 1, France b Médecine Physique et de Réadaptation, Hôpital Bellevue, CHU de Saint-Etienne, France c Médecine Physique et de Réadaptation, Hospices civils de Lyon, Hôpital Henry Gabrielle, Mouvement et Handicap, Saint-Genis Laval 69230, France To answer « Yes » : Please, pay attention to the right audio stream and try to count the deviant sound « Yes », without paying attention to the left. Protocol Methods : online data processing • Probabilistic binary classification (target/non-target) for each trial type (Standard/deviant x Yes/No) • Based on the combined posterior probabilities in each trial • Auditory feedback to the subject • Bandpass filter (0.5-20 Hz) • Spatial filter (xDAWN) • Averaged evoked responses drop of performance in conscious but severely paralyzed patients -Patients tend to show a very frontal activity that has to be explored -Attention abilities may be affected in case of severe (oculo-a steady-state potential arises at the frequency of the attended stimulus, its shape varies across subjects -Deviants: they elicit a P3b component Clinical results -Even when they control the BCI, patients show abnormal attentional modulation patterns -The impact of oculomotor impairment onto covert attention has to be further explored 94 % (17/18) 29 % (2/7) Attentional modulation on STD 83 % (15/18) 29 % (2/7) Attentional modulation on DEV 72 % (13/18) 42 % (3/7) pool of all stimuli STD: pool of all standards DEV: pool of all deviants YES: pool of all « yes » NO: pool of all « no » This work was financially supported by grants from the Fondation pour la Recherche Médicale (DEA20140629858 and FDM201906008524) and the following grant provided by the French government (ANR-17-CE40-0005, MindMadeClear).
04095592
en
[ "sdv.mhep", "info.info-et" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04095592/file/PrescribeBCI_VF1.pdf
P Seguin E Maby J Mattout DYCOG team Keywords: Model BESTBCI TM 1 cap, X electrodes, X gel tubes Introduction Since the early development of BCI, clinical applications are presented as a major objective. However, despite notable progress, they struggle to enter the clinical routine. Besides technological limits, we notice that most BCI studies involve healthy subjects, and when applied to patients, they are mainly proofof-concept studies. Recent « replace » or « restore » BCI studies showed noticeable changes in terms of design, to optimize clinical translation. We will highlight some of these pragmatic turns. We will also compare these new strategies with general clinical guidelines and practices to emphasize what is still missing to allow a rehabilitation physician to prescribe a BCI. Clinical expert centers, gathering multidisciplinary expertise Clinician trained in BCI prescription (or engineers trained in clinics ? ) Institutional frame, allowing financial support by health insurance, based on realistic cost-benefit studies A clear list of selected BCI products Benchmarking of usablity for each kind of patient (pathology, impairments): • comparing BCI and state of the art rehabilitation tools • comparing BCI systems BCI database including demographic data (age, gender, profession, …) and their impact on performance Longitudinal data Validation in ecological settings : home use, social use Aknowledgement This work was supported by the following grant from the French government (ANR-17-CE40-0005, MindMadeClear) and by a grant from FRM (DEA20140629858). Lyon Neuroscience Research Center Conclusion & Perspectives What is missing ? A clinician who would like to prescribe a BCI nowadays would face uncertainty at several levels, from the choice of the most adapted BCI for a given patient, to the effects that she/he could expect. For now, the field is not mature enough to ensure optimal BCI prescription. However, methodological changes that would favor the translation, such as user centered designs and longitudinal studies are spreading rapidly. On the other hand, the dominant engineering culture of the development of BCI is still perceptible in the lack of detailed clinical description of the patients, and in the lack of precise comparison with state of the art assistive devices or rehabilitation strategies. These steps are mandatory to realize realistic cost-benefit studies that are missing to allow the refunding of BCI solutions by health insurance. Finally, a strong "pragmatic" ethical frame would be required and should encompass all these aspects.
04095636
en
[ "shs" ]
2024/03/04 16:41:18
2014
https://hal.science/hal-04095636/file/Secrecy%20on%20PAR%20sheets%20red.pdf
Catalin Barboianu Is the secrecy of the parametric configuration of slot machines rationally justified? The Keywords: gambling mathematics, statistical recording, outcome recording, slot machines, slots probabilities, slots mathematics, gambling ethics exposure of the mathematical facts of games of chance as an ethical obligation Popularity of slot machines and their particularities in regard to exposure of the parametric configuration There is a relation between the popularity of a game and excessive gambling in it, which is also visible on any pathway model to problem gambling in its first phase, namely ecological factors involving availability and accessibility. For decades, the slots game has remained one of the most popular games of chance and the main elements that contribute to its position -from the perspective of both a problem gambler and a non-problem gambler -are these: a) variety; Players may choose among a wide palette of games with respect to rules, parameters, design, according to their profile, goals, or even hobbies [START_REF] Griffiths | Gambling Technologies: Prospects for Problem Gambling[END_REF][START_REF] Wood | The Structural Characteristics of Video Games: A Psycho-Structural Analysis[END_REF]. b) privacy: Slots games -played either in a public place or in front of the computer -assume a private space in which there is only the player and the machine [START_REF] Parke | Slot Machine Gamblers -Why Are They So Hard to Study[END_REF], with minimal exposure, unlike a roulette table, for example, where you place your bets together with other players. c) attractive design: From the graphics of the symbols to the design of the interface and even case, all is sparkling and brightly colored [START_REF] Wood | The Structural Characteristics of Video Games: A Psycho-Structural Analysis[END_REF]. d) brevity: A game timeline is short, lasting few seconds from credit insertion to the stop of the spin, and a high number of games played within a time unit is preferred by players. [START_REF] Griffiths | Gambling Technologies: Prospects for Problem Gambling[END_REF]. e) illusions and sensations: There are multiple design and configuration features that distort player's perceptions (e.g., just missing the jackpot by one symbol can cause players to think a big win is imminent; use of a "stop" button can foster the illusion of control over machine outcomes. [START_REF] Griffiths | Gambling Technologies: Prospects for Problem Gambling[END_REF]. Slot machines gained and maintained this popularity despite some specific elements that could limit their appeal: f) non-transparency: Players do not know the configurations of the machines they play at, as this information is not exposed. Blackjack players know the composition of the decks in play, roulette players know the numbers on the wheel, lottery players know the numbers from which the winning line is drawn, and so on. Slots remains the only game in which players are not aware of the essential parameters of the game, such as number of stops of the reels, number of symbols and their distribution on the reels. g) prevention from odds estimation: Obviously, the lack of data regarding the configuration of a machine prevents people from computing the odds of winning and other mathematical indicators. The so-called PAR sheets (Probability Accounting Reports), exposing the weighting of the reels, some of the probabilities associated with the winning combinations, and other statistical indicators, are kept secret by game producers. 
Practical questions arise on whether this popularity would decrease and the slot player's behavior would be influenced if these hidden probabilities were exposed, and whether there exists a rigorous method through which qualified persons studying slot games (statisticians, applied mathematicians, programmers, etc.) can retrieve the parameters of the configurations of slot machines in order to generate their own PAR sheets for the players. I answer the first question in the section The psychological argument and the second in the section Statistical methods for estimating the parameters of the configuration of a slot machine.
The parametric configuration of slot machines as a base for the probabilistic models for the slot games
Although less difficult than other games with respect to the ease of probability calculus (compared to card games, for example), slot games still fall into that category of games of chance for which the probability computations cannot be conducted by the average player, as such computations require medium to advanced probability theory knowledge and skills. Therefore, the final probability results for these games can be delivered to gamblers only by qualified persons, as numerical probabilities or formulas (or software programs/applets using those formulas) ready to be computed by inserting the parameters of the specific game design and of the event to be measured. For the applied mathematician, the hardest task in establishing the mathematical model for the probability calculus in slots is the optimal categorization of slot games, so as to be able to obtain general probability formulas with variables describing all possible parametric designs of the machines and all winning events. This difficulty is due to the wide variety of existing and possibly forthcoming slot games with respect to their parametric configuration, which consists of the configuration of the reels and the configuration of the display.
Configuration of a reel
The configuration of a reel refers to the distribution of the symbols over the stops of that reel and the arrangement of the symbols. Denoting by $t$ the number of stops and by $p$ the number of distinct symbols $S_1, S_2, \ldots, S_p$ on the reel, and denoting by $c_i$ the number of instances of symbol $S_i$ on the reel ($1 \le i \le p \le t$), the vector $(c_1, c_2, \ldots, c_p)$ is called the distribution of the symbols $S_1, S_2, \ldots, S_p$ on the reel, also known as the weighting of a reel. Each reel has its own distribution of symbols. Given the number of stops $t$, the number of distinct symbols $p$, and a distribution $(c_1, c_2, \ldots, c_p)$ of the symbols on a reel, there are several ways of arranging those symbols on the stops of that reel. Any function $a$ from the set of stops to the set of distinct symbols such that $\mathrm{card}\{x : a(x) = S_i\} = c_i$ for any $i$ from 1 to $p$ (that is, the number of stops having assigned symbol $S_i$ by the function $a$ is $c_i$) is called an arrangement of the symbols on the reel. The distribution and arrangement of the symbols on each reel identify a slot game and are determined by the game producer. The configuration parameters are essential for the probability computations. The numbers of stops of the reels and the symbol distributions on the reels stand as variables for any general formula for the probability of a winning event defined on a payline made of independent stops (stops belonging to independent reels -that is, a payline that crosses over the reels without overlapping them).
The symbol arrangements on the reels count toward the probability computations of winning events defined on paylines holding stops of the same reel, assuming the arrangement is known. Consequently, the parameters of the configuration of a slot machine should be present in any technical sheet describing that machine -for either internal or external use -and in its associated PAR sheet, along with the computed probabilities of the winning combinations and other statistical indicators, owing to ethical reasons for which I advocate in a further section.
Configuration of the display
The configuration of the display refers to the shape and structure of the set of windows showing the visible symbols of the reels, which produce the outcome of the game, and the shape, length, and position of the paylines. All these properties are described mathematically through geometrical and topological properties of those sets (a rigorous model for the configuration of the display is a rectangular grid in which the lines are defined as discrete paths linking neighboring points). From the whole configuration of a display, the length of a payline is the only parameter that counts toward the probability computations for events related to that line, regardless of its shape or other properties; however, for more complex events defined on several paylines (for instance, of the type 'a specific winning combination of symbols on any payline from a given group of paylines'), particular properties and parameters of that group, such as those related to intersection and independence, do also count. The mathematical model of the configuration and also the probabilistic models based on it are idealized models, not representing all types of slot machines on the market.
Variables for the general formulas of probability and expected value
Under the assumption that the reels spin independently (either physically or virtually, in the sense of probabilistic independence), we can obtain general formulas for the probability of the various winning events related to one or several paylines, having as variables the parameters of the configuration of the slot machine described in the previous sections. Most slot machines do not have the same number of stops on their reels, nor the same distribution of symbols on them (unfortunately for the ease of computations!). Yet we can assume the same number of distinct symbols on each reel (denoted by $p$) through a convention: if a symbol does not appear on a reel, we simply take its distribution on that reel as being zero. A blank is considered as a distinct symbol within the mathematical model. We distinguish two possible types of slot machines with regard to the parametric equality of their reels: machines whose reels all have the same number of stops and the same symbol distribution, and machines whose reels differ in these parameters; call the two situations case A and case B, respectively. Given a specific symbol $S_i$, the probability of $S_i$ occurring on a reel after a spin is $q_i = \frac{c_i}{t}$ in case A and $q_i^j = \frac{c_i^j}{t_j}$ in case B, where $j$ is the number of that reel. The numbers $q_i$, respectively $q_i^j$, are the basic probabilities in slots. For an event $E$ related to an independent-stops line of length $n$, the general formula of the probability of $E$ is:
$$P(E) = \frac{F(E)}{t^n} \ \text{in case A and} \ P(E) = \frac{F(E)}{\prod_{j=1}^{n} t_j} \ \text{in case B,} \qquad (1)$$
where $F(E)$ is the number of combinations of stops favorable for the event $E$ to occur.
For winning events $E$ defined in a cumulative manner (that is, through numbers (quantities) of specific symbols necessary for the payline to hold, regardless of their position on the payline), which is the case for most slot games, $F(E)$ has a polynomial expression, being a function of $t$ and $c_i$ in case A or of $t_j$ and $c_i^j$ in case B ($1 \le i \le p$ and $1 \le j \le n$). For instance, if the event $E$ is in particular Exactly one instance of a specific symbol S, formula (1) is written as:
$$P(E) = \frac{n\, c_S (t - c_S)^{n-1}}{t^n} \ \text{in case A and} \ P(E) = \frac{\sum_{i=1}^{n} c_S^i \prod_{\substack{1 \le j \le n \\ j \ne i}} \left(t_j - c_S^j\right)}{\prod_{j=1}^{n} t_j} \ \text{in case B} \qquad (2)$$
Still in particular, for an event $E$ expressed through the number of instances of each symbol on a payline in case A, formula (1) becomes the classical formula of probability in a polynomial field:
$$P(E) = \frac{n!}{a_1!\, a_2! \cdots a_p!}\; q_1^{a_1} q_2^{a_2} \cdots q_p^{a_p} \qquad (3)$$
where $a_1$ is the number of instances of $S_1$, and so on, $a_p$ is the number of instances of $S_p$ ($a_1 + a_2 + \cdots + a_p = n$). Parameters $a_1$ to $a_p$ characterize the winning combination, while $q_1$ to $q_p$ characterize the configuration of the reels. Formula (3) is used when the winning event is defined through an exact distribution of all symbols on the payline (even if some of them do not appear in the winning combination, having the distribution zero). Particularizing in case B, consider the event $E$ as Exactly m instances of a specific symbol S ($m \le n$). Formula (1) becomes in this particular case:
$$P(E) = \sum_{1 \le i_1 < i_2 < \cdots < i_m \le n} \;\; \prod_{j \in \{i_1, i_2, \ldots, i_m\}} q_S^j \prod_{\substack{1 \le k \le n \\ k \notin \{i_1, i_2, \ldots, i_m\}}} \left(1 - q_S^k\right) \qquad (4)$$
where $q_S^i$ is the basic probability of occurrence of symbol $S$ on reel no. $i$. The general formula (1) holds for simple events related to one payline. For more complex events like unions of winning events on one or several paylines, formula (1) is used along with other properties of probability and also methods of approximation for obtaining applicable (although overloaded) formulas for the probability of those events (Bărboianu, 2013a). From the expression of the probability formulas presented, one can see that these are functions of $n$ (the length of the payline), $t$ and $c_i$ in case A, or $t_j$ and $c_i^j$ in case B ($t$ or $t_j$ are the numbers of stops of the reels; $c_i$ or $c_i^j$ are the distributions of the symbols on the reels), and other variables describing the event to be measured. For events related to several paylines, other variables describing the event also appear in the formulas (the number of lines of the group, cardinalities of the intersections of those lines, etc.). The formulas can also be written in terms of the basic probabilities $q_i$ or $q_i^j$, which can replace the $c$ and $t$ variables as ratios between them. The same variables will also appear in the general formulas of the expected value of the slots bets, along with the payout rates from the payout schedule of the game. I present this overview of the base of the mathematical model necessary for probability calculus in slots, and also a few general results, to emphasize the necessity of having the data describing the configuration of a slot machine as inputs for the probability computations; a short computational sketch of formulas (2)-(4) is given below. By applying mathematics within a general model, we can obtain only general formulas for probability and expected value.
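To make these formulas concrete, the short Python sketch below evaluates the case-B versions of formulas (2) and (4) and the case-A multinomial formula (3). The stop counts and symbol weightings used here are purely hypothetical placeholders, not data from any real machine.

```python
from math import factorial, prod
from itertools import combinations

# Hypothetical case-B configuration of a 3-reel payline (illustrative only):
# reel j has t[j] stops and c[symbol][j] instances of that symbol on it.
t = [22, 24, 25]
c = {"Seven": [2, 1, 1], "Bar": [5, 6, 4], "Cherry": [6, 7, 8]}

def basic_prob(symbol, j):
    """q_S^j = c_S^j / t_j: probability that `symbol` shows on reel j after a spin."""
    return c[symbol][j] / t[j]

def p_exactly_m(symbol, m):
    """Formula (4): probability of exactly m instances of `symbol` on the payline (case B).
    With m = 1 this reduces to the case-B version of formula (2)."""
    n = len(t)
    total = 0.0
    for chosen in combinations(range(n), m):
        hit = prod(basic_prob(symbol, j) for j in chosen)
        miss = prod(1 - basic_prob(symbol, k) for k in range(n) if k not in chosen)
        total += hit * miss
    return total

def p_exact_distribution(counts, q):
    """Formula (3), case A: multinomial probability of an exact symbol distribution.
    counts = (a_1, ..., a_p) with a_1 + ... + a_p = n; q = (q_1, ..., q_p)."""
    n = sum(counts)
    coef = factorial(n) / prod(factorial(a) for a in counts)
    return coef * prod(qi ** a for qi, a in zip(q, counts))

print(p_exactly_m("Seven", 1))                           # one Seven anywhere on the line
print(p_exactly_m("Seven", 3))                           # Sevens on all three reels
print(p_exact_distribution((1, 2, 0), (0.1, 0.2, 0.7)))  # one S1, two S2, no S3 on a 3-stop line
```

For a case-A machine the same functions apply with all reels sharing a single stop count and weighting; the point of the sketch is simply that none of these numbers can be produced without the parameters that PAR sheets keep hidden.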
However, these formulas or even associated tables of values are useless for the player without final numerical computations for a given game, and these can be performed only if we are provided in advance with the numerical parameters of the configuration of that game. The secrecy of slots PAR sheets -facts, justifications, and implications The secrecy of game producers on PAR sheets is a certitude, verified by the lack of these files. One can see unsuccessful PAR sheet requests from slot players by browsing their forums, while game researchers can obtain them only through legal intervention. Since 2007, game researchers obtained the PAR sheets of some slot games through FIPPA (Freedom of Information and Protection of Privacy Act) in Canada [START_REF] Harrigan | PAR Sheets, probabilities, and slot machine play: Implications for problem and non-problem gambling[END_REF]. Most of these PAR sheets became public after researchers studied them; others were also available before 2007 through other channels (see Wilson 2004a[START_REF] Wilson | PAR excellance: Part 2[END_REF][START_REF] Wilson | PAR excellance: Part 3[END_REF]Wilson , 2004d, 2004e), 2004e). Still, these are the PAR sheets of a miniscule part of all slot games on the market. Slot Tech magazine has published articles dedicated to PAR sheets; however, they are limited in the number of games covered and in that the description is adapted to the audience of this magazine, who are technicians serving the machines [START_REF] Harrigan | PAR Sheets, probabilities, and slot machine play: Implications for problem and non-problem gambling[END_REF]. Browsing the appeal decisions of IPC (Information and Privacy Going to the webpage indicated as providing PAR sheets "for all IGT games," I found a link labeled PAR sheets hidden on a third-level page, for which the browser returned a 310 error when accessed. The problem of accessing that section might have been temporary; however, there is a disclaimer on that page that certain areas of the site are secured and require an active member account. I made no further investigation of whether opening such an account leads unconditionally to a positive answer regarding a PAR sheet request. Given all the facts presented above, it is clear that slot producers are strongly reticent in exposing the parametric configuration of their games. The psychological argument related to competition Coming now to the justification of this reticence, the slot producers' reasons for declining PAR sheet requests, shown in the IPC's appeal decisions, seem to be judicially formal rather than factual. I present the following arguments for this claim: • The psychological argument related to players The facts are: 1. PAR sheets are kept secret by slot producers, with isolated and rare exceptions. 2. Slot players continue to play slots in the absence of information regarding parametric configuration, probabilities, and statistical indicators of the games, maintaining the popularity of these games. When we talk about hiding a data sheet, obviously it is about the content that is hidden, and for a PAR sheet this means parametric configuration (numbers of stops and symbol weighting of the reels), probabilities for the prize-award combinations, other probabilities, frequencies, and other statistical indicators. 
With respect to the content of a PAR sheet: by putting myself in both the producer's and the player's position, I see two possible reasons for the secrecy,-one related to competition (hiding the parametric configuration and the statistical indicators), and the other related to players (hiding probabilities and statistical indicators). Further empirical study on slot players can confirm the existence of these two reasons and perhaps find others, which may be treated thereafter. The former reason cannot change players' behavior with regard to the willingness to play slots, as that is not related to any new unethical or fraudulent strategy of the producer against them -everything is the same as it was. In treating the latter reason, I will focus on the PAR sheet's content that is the most accessible (as knowledge) to and has the greatest impact on gamblers. Among all the mathematical data related to a game, the probabilities of the various winning events are the most important for a player with respect to the objective evaluation of winning/losing possibilities and general gaming knowledge, even though their influence might not lead to gaming decisions and changes in gaming behavior. This top status of probabilities is due to the fact that all players, regardless of their level of mathematical knowledge, have a basic understanding (although many times distorted or misconceived) of the notion of probability according to its common classical definition (as the ratio between the situations favorable for an event to occur and the number of all equally possible situations) and a basic interpretation of it as a degree of belief in the occurrence of an event. Moving then to expected value, this notion already requires a new level of mathematical education (as a mean of a discrete random variable), not available to the majority of slot players. Thus, it can be hypothesized that players perceive that the main reason for secrecy of PAR sheets (related to themselves) is to hide the odds/probabilities of winning. If further empirical research confirms this hypothesis, we shall arrive at the conclusion that slot players expect low to very low odds of winning without acquiring this information 1 , since there is no other reason for keeping them secret if they were high compared to other types of games on the market. Now, fact 2 along with the last conclusion leads to the prediction that slot gamblers will not stop playing in the event of exposure of the PAR sheets of the games they play, since they already expect low to very low odds of winning. There are other visible elements of the games that keep them attractive and popular [START_REF] Griffiths | Gambling Technologies: Prospects for Problem Gambling[END_REF][START_REF] Wood | The Structural Characteristics of Video Games: A Psycho-Structural Analysis[END_REF] despite their special status among games of chance (in respect to the exposure of their parametric configuration, which I talked about in the first section of this article). Obviously, there is also an addictive component of the other elements of attractiveness, and slots addiction should also be studied as a particular type of addiction, given the missing-parametric-configuration feature of the slot games. Further empirical study of slot players 2 can confirm this theoretical prediction of not quitting slots under the condition of low to very low odds of winning exposed through PAR sheets. 
Until then, we have an example, provided by lottery players, whose behavior seems to confirm my prediction in the slots case. 1 I have excluded the statistical indicators from this analysis and focused on probabilities, also given the possibility of their being misinterpreted by non-math gamblers. A proper interpretation of the mathematical data of a game with respect to favoring the player takes into account both probabilities and statistical indicators based on expected value, not one category or the other individually. For instance, a payback percentage of over 90% can be seen as high and therefore favorable by (and for) a player. However, there is mainly the probability of the biggest win (along with the payout of that win) that yields this high payback percentage and the calculation is a mean over the long run (infinity in mathematical terms). Since probability translates to frequency in the gaming experience, it won't be the same for the player if that winning event (balancing the computed payback percentage) occurs on average in a lifetime or less. Besides, payback percentage is an indicator usually exposed outside the PAR sheet. 2 possibly conducted together with those proposed earlier as one stand-alone research The lottery example. From all games of chance, lottery offers by far the lowest odds of winning for the top prizes, on the order of one to millions or tens of millions for the first prize, and one to tens or hundreds of thousands for the second prize, for the common lottery designs. With a history traced back to B.C. antiquity for its birth and to 15 th -16 th century for expansion, lottery remained the most stable and respected game of chance; contemporaneous studies of this game recognize its popularity and the fact that no decrease of this popularity has been reported (National Impact Gambling Study Commission [NIGSC], 2004). Most lottery players play regularly and are aware of the very low odds of winning. Even though most of them might not know the exact figures, all have a clue about the size order of these odds, knowing that they are very close to zero, because this information has spread widely enough in common communication between lottery players and through the media as well to become a proven well-known characteristic of lottery. Even knowing that the winning odds are very low, lottery players still continue to buy tickets on a regular basis and the lottery has never lacked for business. Why, then, should slot game producers worry about slots doing otherwise, since the odds of winning at slots are generally higher than those of lottery? Some will be quick to point out the following elements in which the two types of games differ at least with respect to financial expectations and player's options to estimate and manage these expectations: • Prize amount -Lottery offers first-to third-category prizes in amounts higher than the similar prizes in slots games, and the high prizes somehow compensate for the low probabilities of winning with respect to the decision of quitting the game. • Enhancing the probability of winning -The lottery games allow an increase in the overall winning probability 3 through buying several tickets or playing 3 "Overall" has the sense of a disjunction of winning events that is measured in probability, that is "winning with line no.1" or "winning with line no. 2" and so on in our lottery example. 
The increase in probability refers to moving from a single event to a disjunction systems with several lines in unlimited numbers for the same draw, while in slots the player may enable paylines only in limited number for such an increase. • Game frequency -Slot players can spin the reels of a slot machine thousands times in a day, while in lottery players must wait several days for a new game. In response, I would argue as follows to ignore these non-equivalences: • Prize amount -The objection argument assumes that the high prize amount is the dominant factor in the lottery player's behavior of playing against the minute odds of winning. My argument: If this factor is merely addictive, in parallel the slot games also have their specific addictive and entertaining components, and the existent balance between lottery and slots is inclined toward the latter due to its generally higher odds of winning. If the highprizes factor has a practical side in lottery player's mind, like "someday these [high prizes] will make me rich" given the assumed basic knowledge of the probabilities involved, the player can estimate and face the overall probability of this fortunate event happening in a lifetime as very low and the invested money as serious, which eliminates the practical expectation, contradicts the assumption, and thus reduces the factor to its addictive component, which has been already addressed. (Such probability estimation is at hand for everyone, by taking an average playing lifetime, assuming a weekly play, and multiplying the winning probability by the total number of plays.) • Enhancing the probability of winning -My argument: This increase is actually limited by the player's available funds for the tickets' total price, and therefore, increasing the probability of winning cannot change the size order of this probability from very low or low (for an one-line ticket) to medium (for a multi-line ticket or several tickets). For example, assume a player buys 500 one-line tickets at $2 each (a serious investment for one draw), or an of events that includes that single event. This convention applies in every instance where these terms were used in the article. equivalent line-system ticket, at a 6/49 lottery. The overall winning probability will increase a maximum of 500 times, leading to a maximum of 0.00003575 (from 0.0000000715) for the first-category prize and 0.0092 (from 0.0000184) for the second-category prize, which are very low and low respectively (these maximums apply if the played lines are independent, in the sense of a restriction on the number of the common numbers of those lines) [START_REF] Bărboianu | The probabilities of the winning events related to several lines[END_REF]. There is another limitation in playing large numbers of lines once, coming from the distribution of the prize fund -the prizes are not given by a fixed payout schedule, but the lottery company assigns a certain prize amount for each prize category as a fixed percentage from the ticket sales of that draw. Therefore, investing large amounts of money for increasing the chances for a draw might result at some point in winning less than invested. 
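As a quick numerical check of the lottery figures quoted above (assuming the standard 6/49 design mentioned in the example), the following few lines of Python reproduce the stated single-line probabilities and their at-most-500-fold increase.

```python
from math import comb

# 6/49 lottery: match all 6 drawn numbers (first-category prize)
p_first = 1 / comb(49, 6)                          # ~ 0.0000000715
# match exactly 5 of the 6 drawn numbers (second-category prize)
p_second = comb(6, 5) * comb(43, 1) / comb(49, 6)  # ~ 0.0000184

# Upper bounds for the overall probability when playing 500 independent lines
print(f"{500 * p_first:.8f}")   # ~ 0.0000358 (the text quotes 0.00003575)
print(f"{500 * p_second:.4f}")  # ~ 0.0092
```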
On the other hand, in slots the payout schedule of a machine is fixed for all games (spins), and the increase in the overall winning probability can be acquired not necessarily by enabling more paylines, but by running more spins -by the technical features of the slot game, a player can run hundreds of independent spins in a reasonable time, and so we get the equivalent effect as in the multiline play in the lottery case with respect to probability increasing.
• Game frequency -My argument: This conclusion was drawn based on the criterion of probability size order for one game, under the assumption that basic probability knowledge is accessible to most gamblers. Game frequency is not a criterion for, and does not change, the argument of the conclusion, as it does not change the probabilities attached to a game; game frequency can be a criterion for other comparative analyses of the two types of games -for instance, regarding how deviation from the expected value is directly perceived and accounted for by the math-inclined player.
In my analysis, when talking about probabilities of winning, I considered only the slots prizes as given by the winning combinations of a single machine and ignored the progressive-jackpot prizes, because including them would just incline the balance more toward slots in the lottery-slots comparison, with respect to the analyzed prediction of not quitting slots under the condition of PAR sheets exposure. With the above counter-arguments I conclude the lottery example and declare it relevant within the psychological argument for the prediction of not quitting slots under the condition of PAR sheets exposure. Of course, there are other characteristic features (except those related to financial expectation) that distinguish the two types of games from each other and might be responsible for a different behaviour of slot players than of lottery players. Further research is needed to confirm or refute that idea. I will also present another argument, this time merely mathematical, for the insubstantiality of the secrecy of slots PAR sheets, in the next section.
Statistical methods for estimating the parameters of the configuration of a slot machine
The methods I briefly describe here can be applied in an organized professional environment in order to estimate and expose the parametric configuration of any slot game whose PAR sheet is missing. In the next sections I use the same denotations used in section The parametric configuration of slot machines as a base for the probabilistic models for the slot games.
The raw approximation. This method is based on the well-known result from probability theory called Bernoulli's Theorem, which states that in a sequence of independent experiments performed under identical conditions, the sequence of the relative frequencies of the occurrence of an event converges toward the probability of that event. Applied to slots, that principle says that if $N$ is the number of spins of a reel with $t$ stops where we observe as an outcome a specific symbol $S$ that is placed on $c$ stops, and $n(N)$ is the number of occurrences of $S$ after the $N$ spins, then the sequence $n(N)/N$ is convergent toward the probability of occurrence of $S$, namely $P(S) = c/t$. The ratio $n(N)/N$ is the relative frequency of occurrence of $S$. It follows that for large values of $N$, the relative frequency of occurrence of $S$ approximates the probability of $S$ occurring. The higher $N$, the more accurate this approximation.
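A minimal simulation sketch in Python of this raw approximation is given below; the reel weighting is a made-up example, used only to show that the recorded relative frequencies recover the basic probabilities $c_i/t$.

```python
import random
from collections import Counter

# Hypothetical reel with t = 22 stops and a known weighting (illustrative only).
weighting = {"Seven": 1, "Bar": 4, "Cherry": 6, "Blank": 11}
t = sum(weighting.values())
reel = [s for s, cnt in weighting.items() for _ in range(cnt)]

N = 100_000                                              # number of recorded spins
counts = Counter(random.choice(reel) for _ in range(N))  # n_i(N) for each symbol

for symbol, cnt in weighting.items():
    estimate = counts[symbol] / N   # relative frequency n_i(N) / N
    true_q = cnt / t                # basic probability c_i / t
    print(f"{symbol:>7}: estimated {estimate:.4f}   true {true_q:.4f}")
```

In a real recording exercise the outcomes would of course be observed rather than simulated, and only the ratios $n_i(N)/N$ would be available to the observer.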
Obviously, the number of spins $N$ must be large enough for obtaining good approximations of the ratios $c_i/t$, and this is the main issue of this method. As theory does not provide us with tools for choosing $N$ for a given error range, all we have is the principle "the larger N, the better." As one can notice, this method of approximation based on statistical observation is subject to errors coming from idealizations and various assumptions, and the error ranges are not even quantifiable. Given these issues, the best way to use this method is not for individual records, but cumulating progressively the records coming from several sources and refining the estimations in correlation with the increase in total number of spins $N$. This principle is also common for the odds calculators based on partial simulations, used for various games. Note that the described method provides us with approximations of the ratios $c_i/t$ (the basic probabilities) for each reel and not the parameters of the configuration individually ($c_i$ and $t$). However, knowing the basic probabilities is enough for any probability computation for a slot game, as seen in section The parametric configuration of slot machines as a base for the probabilistic models for the slot games. A more accurate approximation of the ratios $c_i/t$ and even of $c_i$ and $t$ individually is still possible through statistical observation, using a method which can refine the raw estimations obtained through the previously described method. Such a method is briefly described in the next section.
Denominator-match method. Denote by $n_1(N), n_2(N), \ldots, n_p(N)$ the number of occurrences of symbols $S_1$ to $S_p$ respectively after $N$ spins of a reel. There is a slight correlation between the recorded values $n_1, n_2, \ldots, n_p$ for various large numbers of spins $N$. Based upon this correlation, we can refine the estimation of the ratios $c_i/t$ obtained through the previous method and also find estimations for $(c_1, c_2, \ldots, c_p)$ and $t$, by recognizing a numerical pattern across some sequences of fractions representing the ratios between possible values for $c_i$ and $t$. The denominator-match method is based on the numerical analysis of the fractions $n_i/N$ and on a five-step algorithm briefly explained below:
1. We write each fraction $n_i/N$ as a chain of equal fractions, having numerators from 1 upward and denominators not necessarily integers, for every $i$ from 1 to $p$.
2. Across the $p$ chains of equal fractions obtained, we choose that of the minimal length (let $m$ be the minimal length).
3. Then, across the $p$ chains of equal fractions, we extract $m$ sequences of fractions (one fraction from each equality chain), having the denominators the nearest to the denominators from the minimal equality chain respectively.
4. From the $m$ sequences of fractions obtained, we choose one sequence of $p$ fractions by applying progressively the following filtering criteria: having denominators as close to each other as possible, having the highest number of instances of the same denominator, and the repeating denominator with the largest share being an integer.
5. As final step, we adjust the numerators of the final sequence of fractions, as follows: If the sum of the numerators lies between the minimum and maximum of the denominators, then we take the numerators as the symbol distribution on the reel ($c_i$) and their sum as the number of stops of the reel ($t$); if their sum does not lie in that interval, then through addition or subtraction, we distribute, proportionally with their values, the difference between their sum and the integer nearest to the mean of the minimal and maximal denominator, rounding the added/subtracted quantities to integers. For our resulting estimation, we take the adjusted numerators as the symbol distribution on the reel ($c_i$), and the integer nearest to the mean of the minimal and maximal denominators as the number of stops of the reel ($t$).
This method provides us with the most probable number of stops $t$ and associated symbol distribution $(c_1, c_2, \ldots, c_p)$ of a reel in a certain probability field; the error range of this approximation is quantifiable in terms of probability [START_REF] Bărboianu | The probabilities of the winning events related to several lines[END_REF]. Regarding the practical application of the methods through statistical observation, it is obviously an arduous task, since we have to watch and record spins in numbers of thousands. For online games, software can be developed to help in such an endeavour. For physical machines, it is far more difficult to watch and note down thousands of outcomes just for one reel of a machine, not to mention that the slots operator might not allow this action. Of course, technology based on video capturing might help with such a task, but that is not the concern of the current study.
Physical measurements. Any information acquired on $t$ besides the presented statistical methods of estimation is useful with respect to the accuracy of the approximations because it can give a clue as to how high we should choose $N$ for avoiding irrelevant results (for example, if t = 100, we intuit that choosing N = 1,000 or lower would not be high enough for relevant results). Besides the methods based on statistical observation, there exists a method of estimating $t$ through physical measurements, applicable to some particular types of slot machines. This method exploits the information given by the appearance of the reel on the display. As we know, only a small part of the reel (either physical or virtual) is visible on the display and this part can be seen as one or several adjacent symbols (usually 3, up to 5). So we can view from 1 up to 5 consecutive symbols of the reel. If the appearance of this part of the reel is three-dimensional (which is possible for both physical and virtual reels), by measuring some parameters of this image, we can deduce an estimation for the number of stops of that reel ($t$). Basically, the apparent lengths of the visible symbols give full information on the curvature of the reel, which then leads to an estimation of the entire number of stops, since the number of visible symbols per the circular length of the visible reel is proportional to the total number of stops per the circular length of the entire reel [START_REF] Bărboianu | The probabilities of the winning events related to several lines[END_REF]. This method can be applied only to reels showing at least two consecutive symbols on the display in three-dimensional view. The method cannot be applied to virtual reels showing several consecutive symbols in flat image.
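Before moving on, here is a rough computational sketch of the parameter-recovery idea. This is not the author's denominator-match algorithm, but a cruder brute-force variant of the same idea: search over candidate stop counts and keep the integer weightings whose implied basic probabilities stay close to the observed relative frequencies. The observed frequencies and the search range below are illustrative assumptions.

```python
def candidate_configurations(freqs, t_min=16, t_max=64, tol=0.01):
    """Search for integer reel parameters (t, c_i) consistent with observed
    relative frequencies freqs = {symbol: n_i(N)/N}. For each candidate t,
    round c_i = t * freq_i and keep configurations whose implied probabilities
    c_i/t deviate from the observations by at most tol."""
    candidates = []
    for t in range(t_min, t_max + 1):
        weighting = {s: round(f * t) for s, f in freqs.items()}
        if sum(weighting.values()) != t or min(weighting.values()) == 0:
            continue
        error = max(abs(cnt / t - freqs[s]) for s, cnt in weighting.items())
        if error <= tol:
            candidates.append((error, t, weighting))
    return sorted(candidates)

# Hypothetical relative frequencies recorded for one reel via the raw approximation
observed = {"Seven": 0.046, "Bar": 0.182, "Cherry": 0.271, "Blank": 0.501}
for error, t, weighting in candidate_configurations(observed)[:3]:
    print(f"t = {t:2d}  weighting = {weighting}  max deviation = {error:.4f}")
```

Note that integer multiples of a fitting stop count fit the frequencies about equally well, an ambiguity that the extra filtering criteria of the denominator-match method, or independent information about t (such as the physical measurements just described), can help resolve.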
As in the case of the previous method through statistical observation, there are issues with the practical application of the method through physical measurements. There might be technical issues regarding acquiring the proper position for measurement or placing the measurement tool on the surface of the machine. Also for this method, an alternative would be for the observer to take photos and make the measurements on the photos. Of course, the slot machine operator might not allow the direct measurement and/or taking photos. With this incursion into the mathematical methods of approximating the parameters of the configuration of a slot machine through statistical observation, I conclude the analysis of the possible reasons for slot producers keeping secret their PAR sheets. Summarizing below the arguments against the justifications for these possible reasons, I draw the conclusion that the secrecy of slot producers on the parametric configuration of their slot machines is not rationally justified:
• Protection against competition fails against the generality of the math formulas and equations and the open possibility for all slot producers to configure any parametric design for their slot machines, also manipulating the game parameters and the payout schedule in unlimited ways, so as to obtain the desired statistical indicators for the house.
• The fear of losing players who face the real odds of winning attached to their games, thereby affecting the popularity of slot games, fails against the a priori expectation of the players for low and very low odds of winning induced by the experienced secrecy of PAR sheets, and against the lottery example, in which lottery players continue to play against the lowest odds of winning due to other addictive elements that slots also hold.
• The secrecy itself fails against the statistical methods that mathematics provides us with for retrieving the missing data through statistical observation, even if only as approximated results.
The exposure of the parametric configuration and the mathematical facts of a game as an ethical obligation
The requirement for the exposure of the parametric configuration applies only to slots, since it is the only existent game of chance for which such data is hidden. Indeed, while a slot machine displays only a part of each reel in stop position, for the other games all the configuration from which the outcome is produced is visible to the players -the roulette numbers are shown on the wheel and table, the deck composition is known for every card game, dice faces and the number of dice are visible for every dice game, lottery numbers are known for each lottery design, etc. The information to be exposed would be in the form of a technical/mathematical sheet specific to each slot game, either consisting only of the parametric configuration, or of the parametric configuration plus basic mathematical results such as the probabilities of the winning combinations shown on the payout schedule, the probability of any win, and the expected value. For the former variant, which is merely informative and provided either by the slot producer or retrieved through the methods I described in the previous section, it would remain for the player to inquire further for the mathematical results as an optional action. For the latter variant, the probability/statistics part that comes along with the parametric configuration would be completed by an assigned mathematical authority.
The goal of exposing parametric configurations for slot games is not so much to place slots in the same status as other games of chance as it is a matter of ethics. The exposure of the parametric configuration of a game to the player prior to playing is an ethical obligation in two ways -one commercial and the other humanitarian. The commercial way treats the game as any commercial service, for which full technical specifications are required from the producer to the customer -as a public coffee machine must show the coffee brand name and the coffee volume returned for the unit price, so the slot machine must show the reels' numbers of stops and symbol weighting; a bet is still a purchased service once the players inserts a non-returnable coin in the machine. A relevant example of unethical procedures would be having identical-looking machines yielding different payback percentages due to different (missing) parametric configurations or even the same machine changing its payback percentage with the replacement of a single chip. The humanitarian way is related first to the free will of thought and second, to the limitation of addiction risk. Being informed on all parameters of a game one plays is a condition for unconstrained (constrained through omission) personal thinking leading to personal actions. It is as if someone asks you to bet you can jump from a high place and land on your feet; of course, if you know in advance the height from which you will jump or measure it before you bet, you might decline the bet or propose another one for a certain measurement, and this means free decision. Such comparison can also be an argument for the sceptical slot player who could ask: "Given that slots is not a strategic game played against opponents (as with poker for instance, where the odds are essential in evaluating the advantage in a given situation), why do we need 'all the math stuff' associated with slots?" The answer is simple: information and strategy. The argument for information is expressed through the above comparison. Regarding strategy, in slots there is a trivial strategy, namely the strategy of choosing: choosing one game or another, choosing how many paylines to enable, choosing the parameters of time and money management, and of course, choosing to quit one game for another or choosing not to play at all. The only objective criteria for such a strategy are probabilities and expected value. Similar comparisons that also hold in some ethical aspects are the illness-danger warnings on cigarette packs and the "possible adverse effects" statistics in drug leaflets. Regarding the limitation of addiction risk, past and ongoing studies debating the issue of whether mathematical knowledge (as either provided pre-calculated results like winning odds and other statistical indicators or theoretical and applied probability theory basics learning) causes a decrease in gambling behavior have not yet reached a clear conclusion. Several empirical studies found no significant changes in college students' gambling activity after they received a scholastic intervention on gambling mathematics [START_REF] Hertwig | Decisions from experience and the effect of rare events in risky choice[END_REF][START_REF] Steenbergh | Impact of warning and brief intervention messages on knowledge of gambling risk, irrational beliefs and behaviour[END_REF][START_REF] Williams | Does learning about the mathematics of gambling change gambling behavior?[END_REF]. 
On the other hand, more theoretical studies proved that postsecondary statistics education developed critical thinking, which also applies to gambling, and the gamblers who get such education tend to have significantly lower rates of problem gambling (Abbot & Volberg, 2000;[START_REF] Gerstein | Gambling impact and behavior study[END_REF][START_REF] Gray | Critical abilities, graduate education, and belief in unsubstantiated phenomena[END_REF]. I am personally inclined to think that such a decrease follows an optimal mathematical learning, which can be devised and developed according to its scope, and this will be the focus of forthcoming research. Given the ethical obligation to expose the parametric configuration of the slot games, the question arises as to how this information can be technically exposed. Due to its relation with the risk factors, the exposure on a website would not be enough, because in a physical casino or slot room, there are specific physical addictive elements that might distract a player's mind from the mathematical facts seen earlier on the internet, not to mention that the player might encounter a slot machine for which he/she had not studied its technical/mathematical sheet beforehand. It follows that the technical/mathematical sheet must accompany each slot machine in a printed form or at least be available upon request from the slot operator. Instead, online sheets are applicable to the online slot games. Since slot operators, like slot producers, might consider that it is not to their advantage to provide the technical/mathematical sheets to their customers, such an action is imposable only by law, which can also certify as official the authority providing the mathematical facts of the games. The debate remains open as to whether the technical/mathematical sheets must contain only the parametric configuration (as sufficient information for someone to compute further mathematical results optionally) or in addition, basic mathematical results concerning probabilities and other statistical indicators, with respect to the ethical requirement. The former alternative raises the question of the usefulness of mathematical didactical intervention undergone to gamblers, and both alternatives raise the question of understanding and interpretation of the exposed or learned mathematical concepts and facts related to games of chance, since the simple acquisition of numerical probabilities and statistical indicators as mere quantities might not be enough toward the decisions made based upon them. These issues are treated in a forthcoming article, as conditions for an optimal mathematical didactical intervention in gambling. 
Commissioner) in Ontario, Canada, with respect to PAR sheets requests, one can see that game producers who declined the requests invoked the exemption set forth for scientific and technical information, through one or more of the facts that PAR sheets contain information routinely considered to be trade secrets in the gaming industry and consist of mathematical formulas and equations developed by their engineers.They further claim that information provided on PAR sheets significantly prejudices their competitive position and interferes significantly with the contractual obligations of the company (Information and Privacy Commissioner[IPC], 2009[IPC], , 2010)).Major slot companies have in circulation for players brochures presenting their games, in which mathematical aspects of the games are barely touched upon, thus being more promotional than informational materials. For instance, International Game Technology (IGT)'s brochure, called Introduction to Slots and Video Gaming, has a section titled Slot Math, which presents three examples of games, each one from a large category (3-reel, 4-reel, and 5-reel slot machines), for which the following configuration parameters and mathematical facts are shown: number of stops of each reel, the distribution of the top-award symbol (that symbol triggering the jackpot in a line-up combination) on each reel, hit frequency of the top-award combination, and the overall hit frequency of any winning combination. Summarizing, the math section provides the player with the numbers of stops of the reels, distributions of one symbol on the reels and two probabilities (one for the top-award combination and the other for any winning combination), for three specific games; there are no distributions of the other symbols, no probabilities for other winning combinations, and no expected value, which is a relevant indicator of the practical risk over the long run. Also, the use of the term symbol combination in probability context is confusing, as with this term the winning combinations shown on the payout panel are defined; the combinations involved in the probability computations are combinations of stops (holding those symbols), and not combinations of symbols. To "explain" the brevity of their math section, the editor wrote in its introduction (International Game Technology, 2009): [...] One such tool, par sheets, can be complicated to understand. However, investing the time learning to read them is time well spent. They offer important information for optimizing the revenue for each machine, as well as offering data for technicians. In this section, we provide examples of simple slot math that is found on par sheets for three types of games -spinning reel and video reel slots, video poker, and bonus games. These equations represent the most basic operations only. For more detailed information, please ask a gaming representative or attend training classes. Par sheets for all IGT games [...] are available online at [...] The trade secret and intellectual ownership reasons fail against the generality of the math formulas and equations. Although the parametric details vary from game to game, the mathematical results concerning probability, expected value, and other statistical indicators are just applications of general formulas that are publicly available in mathematics, thus common across all slot machines, and no individual or corporate body can claim ownership of such a pattern or formula. 
The argument also holds if talking only about the protection of the parametric configuration. Protecting a certain finite sequence (combination or arrangement) of symbols reverts to protecting the sequence of numbers that can be put in bijection with the former, since others may use it by just replacing the symbols with new ones through a new bijection on that sequence. But protecting a sequence of numbers falls within the same argument that mathematics is freely available.• There are three possible reasons for competitive prejudice against competition, two of them coming from the situation in which another producer copies and uses the revealed parametric configuration: a) the possibility of losing a share of the market to the infringer; b) the unethical development of the infringing company; c) the exposure to bad publicity from a competitor or neutral entities. The arguments as to why these reasons fail are the following:a) The infringing company would develop a slot machine different in its external (physical) design from the machine having the original parametric configuration and having this configuration in common with the original (because the entire machine of the original producer is patented, even though its parametric configuration in and of itself may not be). With the parametric configuration invisible, if the new machine is successful at and after launch, there may be possibly three types of elements responsible for this success, alone or in combination: its physical design, the marketing, and its statistical indicators.The first two types of elements have nothing to do with parametric configuration, so they do not apply to the original producer's competitiveprejudice argument. If statistical indicators like frequency of wins and payback percentage prove in time to be responsible for the new machine's success -which is very likely to happen -this is not a consequence of the use of that specific parametric configuration alone, but together with a given payout schedule. First, a producer can manipulate the game parameters, including the payout schedule, in unlimited ways so as to obtain the desired statistical indicators for the house; that is, if the goal is to have the payback percentage of another machine, a better alternative is to use a new parametric configuration yielding the same percentage instead of replicating one; besides, within the same goal, if copying the parametric configuration of the reels, the infringing company would need to copy the payout schedule also; while the parametric configuration itself may not fall within patent restrictions, parametric configuration plus payout schedule is likely to do so, therefore the infringing company should expect the original producer to recognize his own payout schedule on the new machine and take a legal action the potential (at that time) infringers.b) The infringing company would get a ready-to-use parametric configuration, possibly non-patented, which he can use in two ways: keeping also the original statistical indicators, including payback percentage, or adding a new payout schedule and getting different statistical indicators. The former alternative assumes copying the entire game parameters (and avoiding the expense of the mathematical work, which compounds the unethical behavior), which may be protected as a whole. 
If this happens, the infringing company exposes himself to lawsuit and bad publicity, since the parametric configuration used can be discovered through statistical observation (see the next section) by the original producer. The latter alternative gives no rational motive for the infringing company to use a copied parametric configuration instead of a new one, since he already must do the math work for finding a payout schedule which will yield the desired statistical indicators. In addition, using only the copied parametric configuration would again expose him to bad publicity when revealed through statistical observation. c) In the situation of bad publicity as result of revealing a PAR sheet (which would have as issue the fact that probabilities and/or statistical indicators are not favorable for the players), the producer would have the option of defending himself with an acceptable answer: the fact that all existent slot games have similar figures attached, not much different from those exposed, while he at least showed them to the public ("I am not the bad guy here, but those who keep the PAR sheets secret"). Such an answer may turn the negative publicity into a good marketing strategy. The hypothetical situations related to reasons a) and b) would actually apply to a start-up company in the role of the infringing company, as it is very unlikely that an established producer would risk his position just to avoid the expense of the math work. Thus, the slot producers' justification for the secrecy of PAR sheets related to competition is insubstantial, as I argue above, even though some of them claim the opposite in their judicial litigations. The previous arguments (related to competition) are part of what I called at large the psychological argument. The possible justification exclusively related to their players remains to be considered. That would mean that they are afraid of losing players who face the PAR sheets of their games, and so the popularity of slots would decrease. I argue in the next section that such a claim is likewise insubstantial and propose further research to confirm what it is hypothesized. Acknowledgements I want to thank to the reviewer who suggested relevant examples for the hypothetical situations presented in the paper: the start-up company example within the competitiveprejudice counterargument and the example of two identical-looking slot machines having different parametric configurations within the ethical argument.
04049425
en
[ "shs.phil" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04049425/file/Scheppers%20--%20Hocus%20Pocus.%20Wittgenstein%E2%80%99s%20critical%20philosophy%20of%20mathematical%20practice--20230321--SENT.pdf
In this study I interpret Wittgenstein's work on mathematics as an integral part of his philosophical endeavor at large, in terms of the stated aims and methods of this endeavor, and the (critical, ethical and aesthetical) agendas underlying it. My focus is on Wittgenstein's later work, but strong continuities in some of the lines of thought I am interested in, lead to many naturally occurring references to earlier material. The study is based on a close reading of extended passages of the manuscripts (as published in the online Bergen edition of the Nachlass), with special attention to the critical remarks that are mostly neglected if not shunned by Wittgenstein-scholarship. I show that Wittgenstein foreshadows many of the themes that are prevalent in 21 st century Philosophy of Mathematical Practice, 1 which however lacks the critical bias that is proper to Wittgenstein's work and argue that the Wittgensteinian themes focused on in the present 1 'Philosophy of Mathematical Practice' (PhilMathPract) here refers to the research tradition represented by, for instance, study still can offer worthwhile contributions to the Philosophy of Mathematics at large, and to Philosophy of Mathematical Practice in particular. Preface The aim of this study is to bundle some of the main points of my work in Philosophy of Mathematics, mostly dating from the period 2008-2017. 2 As my professional and personal circumstances did not appear to allow me to prepare a publishable text in the short term, I decided in the spring of 2022 to make available what I had in the form of -what became-the present rather rough draft. 3 As it turns out, this exercise did yield a publishable text focused mainly on Wittgenstein's critical remarks (see Part 2 of the present document). Still, as the process of distilling a small book out of this material may take some time and will involve trimming down at least some of the bulk, it seemed a good idea to consolidate the present draft version as it is, which would also allow me to solicit feedback from an expert readership. As the present text is intended to function as a stand-alone document but does build on other work of mine, I was forced to sometimes repeat myself. As it did not make sense in the present circumstances to rewrite passages of unpublished 4 work for the purposes of this study, equally not intended for publication in its present form, I allowed myself to simply cannibalize some of my previous work. I thank Joep Hoekstra, Michel Schorokoff and Sorin Bangu, who have -each in their own waygiven me the impetus to do the work necessary to produce this draft. I also thank Jip Van Besouw, Bart Van Kerkhove and Koen Vermeir for their comments, which helped improve the present draft considerably, but will have even more impact on future versions. This draft version (especially sections 1.1.3(C), 1.3 and part 2) is still a 'happy' text, in a way that eventual later versions will probably not be: this draft is the first time these contents are written out after my initial notes and the writing has not yet gone stale by successive rewrites and strategic considerations. If I were my readers, I would prefer to read this version, rather than whatever versions will follow, even if those versions will surely contain less mistakes and a more thorough interaction with the literature. My work on Wittgenstein's PhilMath is part of a larger, more long-term effort centered around the concept of 'practice', 5 which in its turn emerged from previous work in linguistics. 
6 In the somewhat longer run, this effort should give rise to a book-length study, of which my work on Wittgenstein, and my work on mathematics are only a small part.

Abbreviations

Throughout this draft I use the following abbreviations of my own:
• PhilMath = Philosophy of Mathematics
• PhilMathPract = Philosophy of Mathematical Practice
• LW = Ludwig Wittgenstein
As for references to LW's work on mathematics that is central to this study, I have tried to refer directly to the manuscripts as published in the online Bergen University edition of the Nachlass (http://wab.uib.no/transform/wab.php?modus=opsjoner). I use the numbering of the manuscripts and typescripts used there (but ultimately based on (von Wright 1997)). However, for practical reasons (mostly ease of reference, in those cases in which there is no added value in citing the manuscript, or when the reference to the standard edition is actually more meaningful than a reference to a manuscript), I still sometimes refer to the standard editions. I use the following abbreviated references to LW's published work:
• PhU = Philosophische Untersuchungen / Philosophical Investigations;
• PhPF = Philosophie der Psychologie -Ein Fragment / Philosophy of Psychology -A Fragment (as of the fourth edition of PhU (Wittgenstein 2009), PhPF is the title for what had -controversially-been published as Part II of PhU; I will also follow the new numbering into paragraphs);
• ÜG = Über Gewißheit / On Certainty;
• BGM = Bemerkungen über die Grundlagen der Mathematik / Remarks on the Foundations of Mathematics;
• BPhP = Bemerkungen über die Philosophie der Psychologie / Remarks on the Philosophy of Psychology;
• LSPhP = Letzte Schriften über die Philosophie der Psychologie / Last Writings on the Philosophy of Psychology;
• TLP = Logisch-philosophische Abhandlung / Tractatus Logico-Philosophicus.
Except if mentioned otherwise I quote these standard texts from the editions prepared by his literary heirs, as published in the Suhrkamp Werkausgabe: Wittgenstein 1989a; Wittgenstein 1989b; Wittgenstein 1989c; Wittgenstein 1989d; Wittgenstein 1989e. For PhU and PhPF I consulted the Blackwell 4th edition (Wittgenstein 2009). One last abbreviation that I will use is the following:
• LFM = Lectures on the Foundations of Mathematics, Cambridge, 1939 ((Wittgenstein 1976))

Introduction

LW's PhilMath has not been well received (see section 0.1 below). There are plenty of reasons why the reception of LW's writings on this matter (and others) may have gone wrong: the aims and the methods of LW's philosophy, as well as the way his work is presented, appear quite different from what is expected by both his contemporaries and many present-day scholars, including even those who claim to be inspired by LW's work (see section 0.2 below). As far as LW's aims and methods go, the scholarship focusing on LW's writings on mathematics has been particularly bad at taking these into account, apparently preferring to focus on what LW can contribute to existing issues in PhilMath at large, rather than reading the texts on their own terms. The question as to how LW's work on math fits in with the agenda underlying his work at large will therefore be a central concern in this study. Sections 0.1, 0.2 and 0.3 of the present introduction aim at briefly sketching a few aspects of the broader context of LW's PhilMath that are relevant to its interpretation. In section 0.4, I briefly explain how the present study is organized.

Wittgenstein's bizarre (?) philosophy of mathematics

In 1944, Wittgenstein asked John Wisdom to include a final sentence in a short biographical paragraph that Wisdom had written about him for a biographical dictionary: "Wittgenstein's chief contribution has been in the philosophy of mathematics". Whatever the value of this testimony, it does appear that Wittgenstein first got interested in philosophy through mathematics-related problems (cf. (McGuinness 1988), pp. 73-77) and that this was why he first contacted Frege and Russell. It is also true that after his return to philosophy he wrote and taught prolifically about mathematics, especially between 1929 and 1934 and again in the period September 1937-April 1944. Still, despite his own commitment to PhilMath and despite his reputation as one of the major philosophers in 20th century philosophy at large, Wittgenstein has -generally speaking-a bad reputation in PhilMath circles (and this includes -perhaps surprisingly-PhilMathPract circles). In some cases, even mentioning LW's name is almost religiously (though not always successfully) avoided. In any case, LW's work seems to have had remarkably little direct influence on mainstream PhilMath and his work is rarely quoted in the field, 9 with the exception perhaps of authors interested in aspects of mathematics that are closely related to logic, for instance, authors interested in Gödel, Tarski, Turing, or Russell. Thus, scholarship taking LW's work on mathematics (more or less) seriously is restricted to a productive but rather small niche, more or less isolated from what may be called the 'mainstream' of PhilMath (although some of the big names do have mainstream credibility through other work of theirs). 10

9 Thus, Bangu (Bangu n.d., §1) speaks of, as well as illustrates, "Wittgenstein-phobia" in Philosophy of Mathematics. Similarly, see also (Mühlhölzer 2010), p. 10, on the bad reputation of LW's work on mathematics within the field of PhilMath at large. As a case in point, handbooks or collections with a wide scope in PhilMath seem to often ignore Wittgenstein's contributions to the field:
• Linnebo's Philosophy of Mathematics (Linnebo 2017) mentions Wittgenstein a few times, but -bizarrely-not his work on math;
• Incurvati's 2020 Conceptions of Set and the Foundations of Mathematics (Incurvati 2020) appears to not mention LW at all;
• the volume Philosophy of mathematics within the series Handbook of the Philosophy of Science (Irvine 2009) mentions LW in 5 separate passages;
• in Heaton's A Brief History of Mathematical Thought (Heaton 2017), it is claimed on p. 5 that "This book is related to the work of various philosophers (particularly Ludwig Wittgenstein), [...]", and LW is mentioned a few times, but the book does not discuss LW's work on math in any detail;
• Friend's 2007 Introducing Philosophy of Mathematics (Friend 2014) explicitly states "There are several glaring omissions in this book, noticeably Wittgenstein's philosophy of mathematics. By way of excuse I can say that this is not meant as an encyclopaedia of the philosophy of mathematics, but only an introduction, so it is not intended to cover all philosophies. Nevertheless, the omission of Wittgenstein's philosophy of mathematics bears further justification. I am no expert on Wittgenstein, and I am not sure I would trust second-hand sources, since many disagree with each other profoundly.
I do not have the expertise to favour one interpretation over others, so I leave this to my more able colleagues". • Notable exceptions are (Shapiro 2005), (Shanker 1996). This also goes for contributions to the emerging field of PhilmathPract. References to LW in the seminal collection Philosophy of Mathematical Practice (Mancosu 2008) are limited to one contribution; in (Van Kerkhove and van Bendegem 2007), LW's PhU comes up in the bibliography of two contributions; in (Van Kerkhove 2009), one contribution (Desmet 2009) deals at some length with math-related lines of thought in LW's work (although other aspects of LW's work are exploited in a few other contributions). It is symptomatic for LW's reception in Philosophy of Mathematics that Ferreirós, in his 2016 primer on Philosophy of Mathematical Practice (Ferreirós 2016), mentions LW only for a 'colorful' quotation, according to Ferreirós "expressing views akin to logical positivism and strict formalism" (pp. 89-90). However, the context of the quote is an exposition of views very much akin to Ferreirós' own account, though LW is more radical in his turn towards practice. [[By the way, Ferreirós history of set theory (Ferreirós 2007) briefly mentions LW and his TLP on a few occasions (among which I like "The strange features of the famous Tractatus by his student Wittgenstein [1921] are thus more a symptom than a deviation."(footnote 1 on p. 332), which elucidates the following sentence in the body text: ""Russell's peculiar conflation of syntax and semantics has the effect that his work is dealing with philosophical logic, and even metaphysics, throughout"), but does not mention LW's remarks on set theory at all. ]] A notable exception is Ravn & Skovsmose's Connecting Humans to Equations: A Reinterpretation of the Philosophy of Mathematics (Ravn and Skovsmose 2019). 10 The following authors come to mind (list in alphabetical order, not necessarily the order in which they come to mind): Sorin Bangu; Juliet Floyd; Pasquale Frascolla; Jaakko Hintikka; Georg Kreisel; Timm Lampert; Penelope Maddy; Felix Mühlhölzer; Victor Rodych; S.G. Shanker; Mark Steiner; ... . (A) weird claims In part, this lack of appreciation for LW in PhilMath circles may be the result of the fact that LW's PhilMath was/is mainly known in the form of a few dogmatic claims that sounded, and to many still sound, distinctively weird: • "we don't know the meaning of a theorem unless we know the way to prove it (e.g. Fermat, '777 in π')"; • "continuing the decimal expansion of an irrational number is an expansion of math"; 11 • "math is a matter of grammar: mathematical statements are not propositions but instructions on how to use certain words"; • "mathematical advances are inventions, not discoveries" (which got LW lumped in with various forms of social-constructivism that bear very little resemblance to his own work, qua conceptual framework, but especially qua methods and aims); • "math is defined by its applications". All of these claims will be addressed in later sections within the present study. 
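The first two of these claims can be made concrete with a small computation. The sketch below is my own illustration, not anything to be found in LW: it computes an initial segment of the decimal expansion of π (via Machin's formula; the choice of formula, of the pattern '777' and of 2,000 digits are arbitrary implementation details) and searches that segment for the pattern. What it makes tangible is the asymmetry LW exploits: such a search can only ever settle the question positively, since not finding the pattern within the first N digits decides nothing; and producing further digits is itself a piece of mathematical activity rather than the reading-off of an already existing fact.

```python
# Illustration (author's own, not LW's): searching for a digit pattern in an
# initial segment of pi. The expansion is computed with Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), using integer arithmetic with guard digits.

def arctan_inv(x, one):
    """Return arctan(1/x) scaled by 'one', via the alternating Taylor series."""
    total = term = one // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(n):
    """Return the first n decimal digits of pi after the decimal point, as a string."""
    guard = 10                       # extra digits to absorb rounding in the last places
    one = 10 ** (n + guard)
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[1:n + 1]          # drop the leading '3', keep n decimals

if __name__ == "__main__":
    pattern, n = "777", 2000         # arbitrary choices for the illustration
    digits = pi_digits(n)
    pos = digits.find(pattern)
    if pos >= 0:
        print(f"'{pattern}' first occurs at decimal place {pos + 1} (within the first {n} digits).")
    else:
        # A negative outcome settles nothing: it only says the pattern was not
        # found in this initial segment -- which is exactly the point at issue.
        print(f"'{pattern}' does not occur within the first {n} digits.")
```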
11 To be fair: LW is aware of the bizarreness of this claim: "So seltsam es klingt: Die Weiterentwicklung einer irrationalen Zahl ist eine Weiterentwicklung der Mathematik" (Ms-126,133, d.d. 19421214).

(B) wild criticism

One of the reasons why LW's work has met with such resistance within PhilMath is that he appears to strongly reject contributions to mathematics that are universally (or almost universally) accepted as canonical parts of mainstream mathematics, sometimes using very strong language in the process:
-he calls Cantor's diagonal argument 'hocus pocus';
-he calls Gödel's famous paper 'unphilosophical' and his (and most mathematicians') concepts 'slimy';
-he calls the set-theoretical construction that leads to Russell's paradox a cancerous tumor and considers set theory in general "pernicious", a symptom of the "illness of our time", and the mathematics of the previous hundred years "instinctless".
Even among those commentators that are generally speaking sympathetic towards LW's PhilMath, nobody appears to be willing to defend LW's critical outbursts against such august and canonized parts of the mathematical mainstream as Cantor and Dedekind's diagonal methods, certain conceptions of infinity, set theory as a foundational theory, well-established interpretations of Gödel's results, etc. Most Wittgenstein apologists with respect to his PhilMath either (1) avoid, ignore and/or explain away or (2) explicitly disavow LW's more overtly critical rants and focus on a selection of what appear to be more technical lines of thought. 12 I will take the opposite route and try to show that LW's philosophical activity, including his PhilMath, is profoundly and essentially critical (in several senses of that word) and that these overtly critical remarks show a fundamental aspect of LW's work.

(C) continuing exegetical controversies

It is remarkable that many central aspects in the exegesis of LW's work on mathematics are still controversial, despite LW's high profile in the history of 20th century philosophy, despite the many years that have passed since his death, and even since the posthumous publication of his works, and despite the enormous exegetical efforts spent on his oeuvre. Thus, scholars are still debating such questions as "is LW a finitist?", "is LW a constructivist?", "is LW a formalist or an anti-formalist?", etc. 13 Interestingly, scholars are also still disagreeing on the issue as to whether LW is a revisionist or not, i.e. as to whether we have to take his claim that "philosophy leaves everything as it is" seriously, or the fact that LW does seem to criticize a number of things (see section 0.2 here below). Similarly, scholars are still debating on what exactly LW was objecting against in his apparent critique of Gödel, or in his overt criticism of Dedekind and Cantor. Many times, I've heard people blame LW's 'obscurity' for this lack of consensus (and use this as a justification for not having to engage with this body of work), but I don't think that this is fair: one of the main points of this study is that most of the misunderstandings are based on a refusal to read LW's texts on their own terms, i.e. in terms of the aims, problems, methods, etc. these texts claim for themselves.

12 For instance, Mühlhölzer refuses to defend LW's more critical rants, and calls them "engherzig" (narrowhearted) ((Mühlhölzer 2010), p. 15). Maddy speaks of LW's non-revisionism and the tension with the critical strands of his work in terms of 'false modesty': "Surely, one cannot deny the law of the excluded middle or rule out non-constructive existence proofs and at the same time leave "mathematics as it is". But what is the motivation for this prohibition? If philosophy provides compelling reasons to abandon the Platonistic picture, if current mathematical practice is based on that picture, why shouldn't the result of philosophical analysis be allowed to reform that practice? Mightn't Wittgenstein's reluctance be a form of false modesty? This reading of Wittgenstein's late views uncovers a tension between the upshot of his philosophical views and his insistence that philosophy alters nothing.(5) It tempts us to downplay the non-interference remarks in favor of the presumed payoffs of his contentful philosophical conclusions. A directly opposed approach -my focus in this paper -would give pride of place to the non-interference claims and adjust the reading of the rest to match." ((Maddy 1993), p. 55). For a fair-minded but sympathetic account of the historical background of LW's work on math and the way it was written and published, see Floyd 2015, pp. 9-12 (Floyd 2015); still, even Floyd does not seem to have any sympathy for LW's critical rants: "By 1939 Wittgenstein's knowledge of the foundations of mathematics as an ongoing mathematical pursuit was minimal, even by his own contemporaries' standards, as he himself emphasized with his Cambridge students [1989, pp. 13-14]. Yet he frequently dares to vituperate in his notebooks, especially on bad days when he is driving himself hard: he compares set theory (to choose only one among other famous examples) to a cancerous growth on mathematics (BGM VII, §7), as it were, sucking out its healthy marrow. The manuscript writings of this ambivalent philosopher are laced, far more even than the published versions, with continual expressions of ire, aspersion, hesitation, rejection, criticism, and revision." (p. 11).
13 To make things worse, one could even argue that some aspects on which there is little controversy, most notably LW's anti-Platonism, are less clear-cut than the literature seems to suggest.

Wittgenstein's work on mathematics in the context of his oeuvre as a whole

Most scholarship on LW's PhilMath has approached it from the point of view of PhilMath in general, focusing on the question as to what LW's work may or may not contribute to this or that issue in PhilMath at large. Even in the works of scholars that are obviously aware of the problem, the problem remains pervasive, perhaps not least because of external constraints (peer review oblige). 14 In the present study, I will radically adhere to the basic idea that Wittgenstein's PhilMath should be read as an integral part of his philosophy as a whole and that all of its aspects should be interpreted in terms of the same underlying agenda, biases, methods, and themes.
Trying to read isolated remarks taken from LW's manuscripts that appear to deal with a topic one happens to be interested in, in the hope that these remarks will shed light on that topic, is the standard hermeneutic in this field, but need not be the most fruitful approach, neither to obtain a proper understanding of LW's text for its own sake (obviously), nor to learn something from LW's work with respect to the topic one is interested in. (A) style and presentation The format and style in which LW wrote and was published is far removed from what is usual in contemporary academic publications: -the writings are sometimes presented as collections (LW called (what became) PhU an 'album') of separate paragraphs, as collections of fragments rather than as continuous lines of argument; -LW often asks his reader to consider some alternative way of looking at a familiar issue, sometimes a quite plausible way, sometimes a wildly counterfactual way; for instance, to imagine mathematical calculations as applied in accounting or engineering practices, or as part of a ceremony, or as a way to produce wallpaper, to communicate with ghosts, etc. (cf. section 1.3 below); -similarly, LW often switches from one opinion on a particular topic to one radically opposed to it in quick succession and without really resolving the tensions or explicitly coming to a conclusion. 15 On top of this, the editorial history of his works was a catastrophe, and the appearance of fragmentation and misunderstandings of what is and what isn't at stake in individual texts was enhanced by the butchering that the manuscripts received at the hands of the editors; 16 the original manuscripts, while often still not taking the shape of clearly stated theses which then are argued for, often do feel a lot more cohesive than the text presented in the standard editions. It may be worth mentioning that starting to read LW's work on mathematics directly from the Bergen edition of the Nachlass did make a big difference in the way I understood LW's work. 17 In what follows, I will present the results of a close reading of extended passages from MS-117, MS-121, MS-122, MS-122, MS-126 and MS-161. Furthermore, in section 2 of the 15 Cf. (Sass 2001), pp. 104-105: "Any attempt to correlate the life and thought of a major philosopher is likely to be difficult, of course, but several features of Wittgenstein's thinking and writing make such an attempt particularly hazardous. First there is the difficulty of identifying or recognizing precisely who is speaking in his texts. Understanding Wittgenstein requires that one read with an ear cocked to the dramatic ironies and other complexities that make up the ebb and flow of his argument. This means recognizing that we are confronted not with a textbook or treatise so much as with a series of conversational dialogues in which it can be difficult to discern the location, or even the existence, of a settled point of view attributable to Wittgenstein. What at first may seem the asser-tion of a philosophical view often turns out to be the provision of a target or a stalking horse for his criticism. A remark from 1951 is apropos: "But see, I write one sentence, and then I write another -just the opposite. And which shall stand?"". For the context of this last -very funny-remark, see (Bouwsma 1999), p. 122 Mühlhölzer (2010, pp. 
12-14) discusses this aspect of LW's style under the heading "Aber-Dialektik" ("the dialectic of but"), (disapprovingly) quoting (p.13) Hintikka's interpretation of this feature in terms of a "defensive" attitude, "insecurity" and even "paranoia" on the part of LW. I believe this kind of interpretation is based on a misunderstanding of LW's aims, his method and his inherently polyphonic style. The purpose, and the value, of this demarche resides in the experience of going through this process, internalizing the different points of view and reactions that emerge from working through the material one is working on. It is not necessarily fruitful to try and determine which voice is LW's "own voice". In the context of the original manuscript notebooks, all of these 'voices' (?) have to be taken seriously, qua participants in the debate at hand. The difficulties resulting from this quasi-dialogical / polyphonic style for the interpretation of these texts are very clear in Kripke's attempt (Kripke 1982) and the very rich ensuing debate about the correctness of Kripke's interpretation: Kripke goes as far as interpreting the PhU as a dialogue between LW and a character he calls "the sceptic"; a substantial part of the interpretative debate is about determining "who said what" and what statements are made by the real "LW". For an overview of the debates, see (Miller and Wright 2002); (Kusch 2006). I find it hard to comprehend why professional philosophers have difficulties recognizing what LW is doing in his writings: they may disapprove of the format for a publishable text, but I would have thought that the process of internalizing and working through opposing (or perpendicular) points of view would be instantly recognizable. In other words: to me, LW's manuscripts look like an adequate representation of the way philosophers normally (?) work. 16 For the history of the publication of LW's writings, see (Erbacher 2015), (Erbacher 2016); (Toynton 1997); (Venturinha 2010); (McDougall n.d.). Perhaps the time will come in which it becomes possible to write an appropriately critical account without generating unfruitfully emotional polemics. 17 LW's manuscripts, as opposed to the album-like standard editions, do give the impression of a philosopher thinking in a coherent way, going over various aspects of a limited set of topics for prolonged periods of time. Apparently, the editors decided that this kind of writing was not publishable, and perhaps that was the case back when they were supposed to deliver published materials, but with hindsight, it would perhaps have been better, if they had selected a few manuscripts that do present a sustained development of coherent lines of thought (even if they do not take the shape of 'thesis and arguments') and published them as they were. One day, someone should write a masterpiece about the real-life circumstances that lead the trustees to not trust in the quality of the legacy they were curating. present study, we will encounter a few spectacular cases of how the editorial practices that gave rise to the standard editions can lead scholars astray. (B) the aims and methods of philosophy: therapy and critique As for aims, LW makes a very clear distinction between the aims of (his) philosophy and the aims of scientific endeavors. Philosophy does not consist in constructing true propositional knowledge at all. 
Philosophical problems are not to be considered as questions one should answer, but rather as an undesirable or even pathological state of confusion, which should disappear completely (PhU §133; cf. already TLP 6.521). In PhU, LW uses a number of different metaphors expressing this basic idea: the philosophical problem as an illness and his philosophical method as a therapy (PhU §133; 18 §255: "Die Philosophie behandelt eine Frage; wie eine Krankheit"; cf. also PhU §593, where 'philosophische Krankheite' are said to be caused by an 'einseitiges Diät' and see section 2.2(C) below for the "Krankheit einer Zeit" remark in BGM 2, 23); 19 the philosophical problem as a case of being lost or trapped, and philosophy as pointing the way out. Cf. e.g. "Was ist dein Ziel in der Philosophie? -Der Fliege den Ausweg aus dem Fliegenglas zeigen." (PhU §309). Cf. also the philosophical problem as 'Glatteis' (PhU §107); language as a "labyrinth" (PhU §203); the dead-end street of doing philosophy (PhU §436: "Sackgasse des Philosophierens"), ... ; the philosophical problem as a case of enchantment ("Verhexung") of our minds (PhU §109), of 'superstition' due to 'grammatical illusions' that give rise to 'philosophical pathos' (PhU §110), 20 of being captured by a picture (PhU §115). 21 These aims are obviously at odds with the overtly scientistic objectives of mainstream (post-)analytic English-language philosophy in general and PhilMath in particular. 18 Wir wollen nicht das Regelsystem für die Verwendung unserer Worte in unerhörter Weise verfeinern oder vervollständigen. // Denn die Klarheit, die wir anstreben, ist allerdings eine vollkommene. Aber das heißt nur, daß die philosophischen Probleme vollkommen verschwinden sollen. // Die eigentliche Entdeckung ist die, die mich fähig macht, das Philosophieren abzubrechen, wann ich will. -Die die Philosophie zur Ruhe bringt, so daß sie nicht mehr von Fragen gepeitscht wird, die sie selbst in Frage stellen. -Sondern es wird nun an Beispielen eine Methode gezeigt, und die Reihe dieser Beispiele kann man abbrechen. -Es werden Probleme gelöst (Schwierigkeiten beseitigt), nicht ein Problem. // Es gibt nicht eine Methode der Philosophie, wohl aber gibt es Methoden, gleichsam verschiedene Therapien. 19 To these we could add the notion of 'mental cramp', which occurs in the first paragraph of (and on several other occasions in) the Blue Book (1933-1934). 20 »Die Sprache (oder das Denken) ist etwas Einzigartiges« -das erweist sich als ein Aberglaube (nicht Irrtum!), hervorgerufen selbst durch grammatische Täuschungen. // Und auf diese Täuschungen, auf die Probleme, fällt nun das Pathos zurück. (Cf. section 4.3 below). 21 Ein Bild hielt uns gefangen. Und heraus konnten wir nicht, denn es lag in unsrer Sprache, und sie schien es uns nur unerbittlich zu wiederholen. Interestingly, philosophy itself is sometimes seen as an undesirable obsession: we have to learn to stop philosophizing when we want to (PhU §133); 22 philosophical behavior can easily be confused for madness and should perhaps not be performed in public (ÜG §467), etc. This leads to an interesting paradox that I dealt with elsewhere (Scheppers 2017), Chapter 3, §1, and that is closely related to the problem of LW's overt anti-revisionism, but apparent criticism (cf. sections 0.2(D), 1.2.3(C) and 3.1.1(C8) below). LW's methods are closely related to his therapeutic aims. 
In a number of passages, LW speaks out more or less clearly against theorizing or even explanation (not even, or perhaps even especially not, scientific explanation) as a proper and adequate method of philosophy. LW's philosophy consists entirely in clearing up misunderstandings (or so he claims): time and time again, LW wants to show that this or that 'philosophical' proposition is based on an erroneous understanding of the meaning of the words used in that utterance (cf. e.g. PhU § §90-92). 23 In other words: philosophical problems originate when words are taken out of the everyday contexts in which they belong; putting these words back into their contexts is sufficient to 'dissolve' the problem; philosophical analysis consists in merely presenting the everyday use of certain words in an easily overseeable way (cf. e.g. PhU §122). Thus, the method consists in 'staying at the surface' (PhU §92) and avoiding to yield to the temptation of looking for 'depth' where there is no depth. 24 This stance is especially alien to the one taken in most PhilMath, which appears to cultivate exactly what LW is combating (cf. sections 2.0.3, 2.4.3(C) and Appendix 4.3 below). It follows from these considerations that LW did not have any ambitions to construct a systematic terminology either: although LW did post hoc contribute a number of seminal terms to the standard philosophical jargon (Language Game, Form of Life, Grammar, Hurly-Burly, ...), only the term Language Game (and perhaps Grammar) could be argued to fulfill the function of a terminus technicus within LW's own work. 25 22 The issue appears to have been a real-life one for LW. For instance, Rush Rhees reports that LW in conversation confessed: "In my book I say that I am able to leave off with a problem in philosophy when I want to. But that's a lie; I can't." (Wittgenstein and Rhees 2015, 54); I am not so sure that Baker and Hacker's deflationary comment "but this was transforming a metaphor into a literalism" (Baker and Hacker 2005, 252) is actually to the point. The concept 'to stop talking' / 'to be silent' is mentioned elsewhere as well (e.g. BPP2 §402, and of course TLP §7). Cf. also LW, Ms-127,82: Friede in den Gedanken. || Das ist das ersehnte Ziel dessen, der philosophiert. 23 Already in TLP (6.53) it was stated that the only method in philosophy consisted in pointing out that no meaning had been attributed to certain words in a 'metaphysical' claim. 24 Cf. Baker and Hacker's formula "The flatness of philosophical grammar" (Baker and Hacker 2009, 19-21). Cf. also ÜG §471: "Es ist so schwer, den Anfang zu finden. Oder besser: schwer, am Anfang anzufangen. Und nicht zu versuchen, weiter zurückzugehen", and BPP1 §509: "Das psychologische Phänomen nicht erklären, sondern hinnehmen, ist das schwere". 25 For an important remark on the terminology used in the present study, I refer to section 1.1.1(C). (C) LW's anthropological approach: "observing math from the outside" It has been said that LW's evolution between his early work, before he left philosophy for a number of years, and the classical later work, is mostly a shift away from a purely logical point of view and toward an anthropological point of view, 26 perhaps under the influence of Sraffa ( (Mühlhölzer 2010), p. 392, quoting (Monk 1990), 260 ff.). In the manuscripts studied here, LW explicitly states that he views math as an "anthropological phenomenon" (LW, Ms-124,116; cf. also Ms-162b,26v) and repeatedly says that he approaches mathematics "from the outside". 
27 This has the effect that certain aspects of math which may be trivial (and therefore invisible) from within become highlighted, which clarifies in what respects and to what extent the mathematical phenomena are similar or dissimilar to other anthropological phenomena. LW is very much aware of the fact that this means that he is not actually speaking about the things that mathematicians and philosophers of math like to talk about. 28 This basic attitude runs against the mathematical exceptionalism (i.e. the idea that math is unlike other human endeavors) that is prevalent within PhilMath (cf. sections 2.4.3(D) and 3.2.3(A) below), but fits in with LW's basic critical stance. For an example that nicely illustrates all this, see my analysis of LW, Ms-124,115-119, in which LW attacks Gödel's 'slimy' concepts in terms of his own vision of math as an "anthropological phenomenon" (cf. section 2.3(E) below).

26 From an approach "sub specie aeternitatis" to an approach "sub specie humanitatis" ((Gakis 2015) p. 928.). This coincides with the difference between logic and grammar: a grammar implies the details of how people actually speak and highlights the historical and cultural contingency and variation, whereas logic has the connotation of universality and a-temporality. Cf. also my account in terms of the shift from a semantic approach to a pragmatic approach to meaning.
27 Cf. for instance:
• Wer das Wesen der Mathematik verstehen will, muß nicht aus ihrem Fenster heraus, sondern von außen hinein schauen. (LW, Ms-123,17v-18r, 19401116)
• Meine Aufgabe ist es nicht über den Gödelschen Beweis (z.B.) || , z.B., zu reden; sondern an ihm vorbei zu reden. (LW, Ms-124, 84)

(D) philosophy as criticism and critique and the issue of LW's (anti-)revisionism

LW claims that his philosophy does not aim at criticizing language games and changing what people say ('non-revisionism', or 'anti-revisionism'); "philosophy leaves everything as it is" (PhU §124). Interestingly, LW refers in the same paragraph to the idea that philosophy cannot offer a foundation for everyday language either, which highlights the extent to which his engagement with the 'Grundlagen-debate' (cf. section 0.3 here below and section 3.1.3(B)) is intertwined with the very core of his later philosophy. 29 At the same time, a critical strand was present in LW's philosophy from the beginning (e.g. TLP §6.53) and LW is often interpreted in such a way that he apparently does criticize certain ways of speaking, most notably certain typically philosophical uses of language. For instance, in PhU §116, LW appears to be critical of philosophical/metaphysical language, and explicitly contrasts the way in which "the philosophers" use words such as 'knowing', 'being', 'object', etc. and his own philosophical practice, which consists in bringing those words back to their everyday use. 30 Both strands in LW's thought taken together immediately give rise to a paradox: if it is true that it is Wittgenstein's aim in philosophy to 'leave everything as it is' and merely describe existing language games as they are, why doesn't he seem to 'leave alone' a number of 'nonordinary' ('philosophical' / 'metaphysical' / 'theoretical' ...) ways of using language? In other words: why does LW claim to want to 'leave language games as they are' and at the same time condemn a number of types of language use? Elsewhere ((Scheppers 2017), Ch. 3), I deal with the philosophical significance of this paradox for its own sake; for the purposes of the present study, I am mostly concerned with its impact on the interpretation of LW's overtly critical remarks about certain types of mathematical discourse. In the literature, we encounter two ways of dealing with this tension (if it is dealt with at all): 31
• deflating the non-revisionist claim and accepting that LW actually claims that -say-set theory is not proper math (Steiner 2009; Maddy 1993);
• accepting non-revisionism as central to LW's purpose and trying to interpret the apparent criticism in that light (Dawson 2015).
It is one of the main claims of this study that LW's philosophy is primarily and pervasively critical, and that the same thing goes for his PhilMath (in actual fact, LW's PhilMath is a very good example of this aspect of LW's philosophy).

29 PhU §124: "Die Philosophie darf den tatsächlichen Gebrauch der Sprache in keiner Weise antasten, sie kann ihn am Ende also nur beschreiben. Denn sie kann ihn auch nicht begründen. Sie läßt alles, wie es ist. [...]".
30 PhU §116: "Wenn die Philosophen ein Wort gebrauchen -'Wissen', 'Sein', 'Gegenstand', 'Ich', 'Satz', 'Name' -und das Wesen des Dings zu erfassen trachten, muss man sich immer fragen: Wird denn dieses Wort in der Sprache, in der es seine Heimat hat, je tatsächlich so gebraucht? -Wir führen die Wörter von ihrer metaphysischen, wieder auf ihre alltägliche Verwendung zurück."
31 For the fact that most Wittgenstein-apologists with respect to his PhilMath either (1) avoid, ignore and/or explain away or (2) explicitly disavow LW's more overtly critical rants and focus on a selection of what appear to be more technical lines of thought, see section 0.2(B) above.

(E) LW's conceptual framework (LW's pragmatism, LW's holism, LW's structuralism, LW's everydayism)

Besides the fact that LW's conception of the aims and methods of philosophy is quite different from most of the mainstream of PhilMath from his day up until now (2022), LW also operates with a conceptual framework that may not be immediately palatable to the reader. Throughout Part 1, I will highlight a number of aspects of the conceptual framework that LW develops in his later work, and show how these aspects should inform our way of understanding LW's PhilMath.
-pragmatism: meaningfulness is ultimately a matter of embedding in practices, and practice is ontologically irreducible;
-holism and structuralism: practices are holistic structures, within which practical, linguistic, epistemic, biological, etc. dimensions cannot be reduced to each other;
-everydayism: the difference between everyday and non-everyday practices is a pervasive and fundamental one in LW's outlook.

(F) LW and his readers/commentators (conclusion)

The fact that LW's aims, methods and conceptual framework are very far removed from those of most people who work in the field of PhilMath (especially in the so-called 'analytic' Anglo-Saxon traditions) does not bode well for an easy reception of LW's work on mathematics. Furthermore, the fact that LW offers an 'anthropological' perspective "from the outside looking in" is directly at odds with the overtly exceptionalist attitude prevalent in PhilMath.
It is my contention that many of LW's commentators have looked at LW's texts (often in a more or less edited format that already aimed at making the texts more 'palatable') without necessarily taking the distance between LW's concerns and those of mainstream PhilMath into account and have tried to extract answers from the text to questions that were not necessarily relevant from the point of view of LW's own concerns. The omnipresent focus on -isms (finitism, constructivism, Platonism, normativism, ...) is a case in point: these stances are only relevant if one is interested in the questions that these -isms are supposed to be an answer to (cf. section 3.1.1 below).

Wittgenstein's philosophy of mathematics in the context of the Grundlagen-debates

The topic in PhilMath that attracted LW to philosophy in the first place ((McGuinness 1988) pp. 73-77) and that continued to be the backdrop for all of his work on mathematics is the issue of the foundations of math, heavily debated in the first part of the 20th century. All the topics that LW dealt with (set theory, formalism, Gödel, applications, ...) are relevant to him insofar as they are related to this topic. We should not forget that LW was not only present but was actively contributing when Bertrand Russell was still one of the major players in the field (though his most seminal work may have already been produced). By all accounts, LW's early contributions were taken very seriously by Russell, who appeared to have seen a worthy successor in LW, who would be able to continue the more technical work in logic and PhilMath that he felt himself less capable and/or inclined to pursue (cf. e.g. (McGuinness 1988), pp. 104-15 et passim). In this respect, it is interesting to observe that Kurt Gödel appears to have blamed LW's influence for Russell's leaving behind his earlier "epistemological" (i.e. correspondence-based) views on mathematics in favor of his 'classical' logic-based ones (Floyd and Kanamori 2016). In the later works that we mostly focus on in this study, LW's main contribution was pointing out that the things that were being proposed as foundations actually were not that in any real sense: according to LW, math in actual fact is not rooted in axiomatic systems, on the contrary. The roots (but this may not even be the right word in this context) of math are in the end not even propositional; historically/genetically as well as synchronically/structurally, math is rooted in (what was then called) 'applications', 32 a heterogeneous, contingent and unstable set of practical activities, which existed long before math as a coherent body of knowledge. Another aspect is the fact that Grundlagen are typically presented as more simple, universal and somehow a priori necessary, whereas LW argues that complexity and contingency are irreducible. Thus, in this study, I point out that LW's critique operates at a much more general level than the more technical issues that mainstream PhilMath (incl. most studies concerned with the exegesis of LW's PhilMath) focuses on, and that LW's stance antagonizes a number of different strands within mainstream PhilMath in a quite radical manner (radically anti-foundational; radically anti-unitarian/anti-monist; radically anti-teleological/'anti-naturalizing' 33 ).

32 Note that the term 'applications' presupposes that there is something pre-existing to be applied, which seems to be a bizarre choice if one is arguing that the applications are prior.
This terminological choice shows to what extent LW's contribution is rooted in contemporary debates about the foundations of math: applications is simply the default word that was available in this historical context to express the idea of practical math-like practices. 33 I have not been able to come up with a good term for the position that opposes claims to the effect that one's own ideological preferences are 'facts of nature'. Structure After the present introductory section "0", this study is organized as follows: 1. LW's philosophy of (mathematical) practice, in which I show how LW's PhilMath, including some of its more controversial aspects, fits in with his philosophical approach at large and a number of ways in which it still can contribute to present-day PhilMath and PhilMathPract; 2. LW's philosophy (of mathematics) as critique, in which I show how the infamous critical remarks on set theory, various diagonal methods, Gödel, etc. actually fit in with some of the core concerns and agendas underlying LW's world view at large; 3. Conclusions, in which I summarize the different lines of thought that came up throughout the study and try to show how they are relevant to present-day Wittgenstein-scholarship as well as to PhilMath at large; 4. Appendices, in which I present materials that are directly related to the subject matter (s) dealt with in this study but go beyond the interpretation of LW's text. Part 1. Wittgenstein's philosophy of (mathematical) practice In the first part of this study, I show how LW's PhilMath is best understood as an integral part of his philosophical approach to meaning and practice at large: -Section 1.1 gives an overview of some of the main features of LW's philosophical approach in general (his pragmatic account of meaning as embeddedness in practice, and his holistic and structuralist conception of practice) and how this informs a number of aspects of his PhilMath: his focus on 'applications' (incl. exotic ones), his opinions on formalism, the issue of the 'freedom' of pure math, etc. I also show how many of these lines of thought foreshadow much later developments in PhilMathPract. -Section 1.2 discusses the role of the notion of the "everyday" in LW's work (incl. his PhilMath). -Section 1.3 consists of a close reading of a longer passage that offers a good illustration of some of the lines of thought focused on in this study. -In section 1.4 I briefly summarize the main lines of thought that I developed in Part 1. The focus in this part of the study is not so much on the fine-grained exegesis of the Wittgensteinian corpus as on showing how some of the main features of LW's philosophy have not (not yet?) been picked up in present-day PhilMath (incl. PhilMathPract) and still could offer a powerful contribution to this field. Wittgenstein's pragmatic approach to meaning: meaning as embedding in a practice There is something to be said for the idea that 'meaning' is the central issue throughout LW's work (Rodych 1997; Floyd 2001): 34 the opposition sense vs. nonsense is a pervasive one throughout LW's works, and in Part 2 of this study, I will show how it is connected with the ethical, aesthetical and existential biases that underlie and motivate LW's philosophy as a whole. 35 34 Cf. also the title Understanding and Meaning that Baker and Hacker gave to the first volume of their classic commentary of the PhU (Baker and Hacker 2005). 
35 However, there is also a lot to be said for the view held by Paul Horwich: "I have been arguing that, early and late, it is Wittgenstein's view of philosophy, rather than his view of meaning, that plays the pivotal role in his thought" (Horwich 2004: 105). Nothing hinges on it for the present purposes. More precisely, LW's intervention regarding meaning coincides with the shift from semantics, as a correspondence-based approach to meaning, to a pragmatic approach to meaning, in terms of embedding in practices. It is not my aim to give a full account of LW's contribution to -say-the theory of meaning or even of LW's own conception on the issues in a technical way. But in order to be able to formulate what I have to say about LW's work on mathematics, and make this understandable to the reader, I will have to say something about these broader issues, though my account here will have to be very succinct and I can only briefly summarize a few lines of thought. 36 Wittgenstein's pragmatism Here (as elsewhere), I use the adjective 'pragmatic' in the very general sense of "based in (the study of) practice" or "focusing on practice" and 'pragmatism' for any pragmatic approach. No reference to the historical American pragmat(ic)ism of Peirce, James, etc. is intended (though their choice for the term was of course not gratuitous either). (A) LW's shift from a semantic approach to a pragmatic perspective on meaning Semantic approaches to meaning -and most (if not all) traditional approaches fit in this category-37 construe meaning as a matter of correspondence between several strata of structure. For instance, in linguistics, phonological structure (sound) is related to conceptual structure (depending on the theory one adheres to, passing through various layers of syntactic structure). 38 Relating linguistic structure to the real (or a virtual) world ("reference") is again a matter of correspondence. 36 I also believe it should not be controversial to state that the critical bias that is omnipresent throughout LW's work and life (see here above) is often expressed in terms of meaninglessness (nonsense) or loss of meaning (see also section 1.2 below). 37 Of course, pragmatics has its own traditional precursors in (for example) rhetoric, which also may have had an influence on argumentation theory, but these are not usually construed as theories of meaning and lack the direct link to core issues of logic and mathematics. 38 For instance, in linguistic approaches to natural language, a grammar would be construed as follows: a phonological structure is linked with a semantic/conceptual structure, perhaps mediated by a morphosytactic level; each of these levels is characterized by its own system of categorial distinction and its own combinatorial rules (its own syntax). For the sake of clarity, consider, for instance, the following differences/similarities: -at the phonological level: cat / mat / sat / flat / begat/… vs. flow / though / mow / doe …; -at the semantic level: cat / dog / bird / cow /… vs. crayon / pen / pencil / marker / …; -at the morphosyntactic level: I beat my horse / The train reaches Antwerp / This paper concerns entomology / … vs. the man on the moon / the best of the best / the road to the mountains / … . 
For an account in the tradition of generative grammar that highlights the 'correspondence' aspect in a particularly clear manner, see for instance (Jackendoff 1985), (Jackendoff 2002)) What LW tries to introduce, is what would nowadays be called the pragmatics of natural language (cf. Mey (Mey 2001), (Mey 1998); for the links and differences between semantics and pragmatics, see e.g. Scheppers 2011 (Scheppers 2011), §13.1.4 and §13.2(1))). From a pragmatic point of view the following may be functionally similar: "Close that window!"; "It's cold in here"; "Do you want to kill me?"; [Speaker shuts the window himself]. LW started out with a particular type of correspondence-theory about meaning, which has been described as the 'picture-theory' about truth (Johnston 2017). For the LW of TLP, there was no meaning outside the proposition (which is defined as something that has to be either true or false). Interestingly, the overall purpose of the TLP appears to be to push the correspondence approach to meaning (the "picture theory") to its limits: to show that this approach gives a coherent account of the full subject matter of logic, but that this does not amount to anything of real importance, as LW explicitly states in Preface to the TLP: Dagegen scheint mir die Wahrheit der hier mitgeteilten Gedanken unantastbar und definitiv. Ich bin also der Meinung, die Probleme im Wesentlichen endgültig gelöst zu haben. Und wenn ich mich hierin nicht irre, so besteht nun der Wert dieser Arbeit zweitens darin, daß sie zeigt, wie wenig damit getan ist, daß diese Probleme gelöst sind. 39 (Ts-202,IIr = TLP, 'Vorwort') A similar idea, emphasizing the ultimately nonsensical nature of the approach taken in the TLP, reappears in the famous 'ladder' image at the very end of the TLP: Meine Sätze erläutern dadurch, dass sie der, welcher mich ver-steht, am Ende als unsinnig erkennt, wenn er durch sie-auf ihnen-über sie hinausgestiegen ist. (Er muss sozusagen die Leiter wegwerfen, nachdem er auf ihr hinaufgestiegen ist.) 40 The main insight behind LW's later philosophy appears to be that he let go of the picturetheory as his main account of meaning. Already in 1931, LW had let go of the notion of elementary proposition and started experimenting with the notion of 'grammar', in the sense of 'a set of rules for the use of words', as an account of meaning (Manninen 2011). The notion 'Sprachspiel'/'Language Game' occurs first in the Blue Book (1933-1934) to refer to simplified games that LW introduces to make a specific point, but the denotation of the term soon enough expanded to cover real-life patterns, as well as imaginary, exotic, and/or counterfactual games. This is one of the few terms coined by LW that actually took on the function of a true terminus technicus within LW's own work. In the literature, this view is 39 On the other hand the truth of the thoughts communicated here seems to me unassailable and definitive. I am, therefore, of the opinion that the problems have in essentials been finally solved. And if I am not mistaken in this, then the value of this work secondly consists in the fact that it shows how little has been done when these problems have been solved. 40 My propositions are elucidatory in this way: he who understands me finally recognizes them as senseless, when he has climbed out through them, on them, over them. (He must so to speak throw away the ladder, after he has climbed up on it.) 
often summarized under the slogan "meaning is use", 41 but this is too reductive a summary to cover LW's understanding of pragmatic meaning and a less narrow conceptualization of meaning -for instance, as 'embedding in, or function within, an encompassing practice', would be more appropriate. 42 (B) meaning as embedding in practice: Language Games, Forms of Life .... In LW's later work, the shift from semantics to pragmatics is marked by the introduction of a number new concepts/terms in his vocabulary. The most obvious and most systematically used of these terminological innovations is perhaps 'Language Game'. The list of examples in the famous paragraph PhU §23 shows what kind of things LW has in mind when he speaks of Language Games. 43 This list on its own already allows us to see that most of these behavioral patterns cannot be understood as strictly verbal/linguistic: -some of the Language Games may be more or less purely linguistic: 'reporting an event', 'telling a joke', 'translating'; -verbal and non-verbal aspects are equally essential to 'giving and obeying orders'; -a number of these patterns do not necessarily imply language use at all: 'construct an object by means of a picture', 'solve an applied math problem', 'play-acting'. This observation already shows why the notion of Language Game can be viewed as a precursor of the notion of practice in general. 44 This list occurs in the paragraph in which the notion of Language Game takes its proper shape for the first time in PhU, and the way LW introduces this concept is very explicit about the point he intends to make. Let us try to reconstruct LW's line of thought. Starting point is the statement that an account of meaning in terms of a fixed number of sentence types (say: statements, questions, orders) should be abandoned in favor of an account in terms of large array of Language Games, that is furthermore subject to contingency, 41 Admittedly based on remarks by LW himself, such as PhU §43 ("Man kann für eine große Klasse von Fällen der Benützung des Wortes "Bedeutung" a wenn auch nicht für alle Fälle seiner Benützung a dieses Wort so erklären: Die Bedeutung eines Wortes ist sein Gebrauch in der Sprache."). 42 The main problem with the concept of 'use' is that it suggests that the locus of the meaning is ultimately the 'user', whereas this is not compatible with what LW says elsewhere, perhaps most famously from the 'private language argument' (see section 1.1.2 below, on LW's structuralism). 43 Befehlen, und nach Befehlen handeln -Beschreiben eines Gegenstands nach dem Ansehen, oder nach Messungen -Herstellen eines Gegenstands nach einer Beschreibung (Zeichnung) -Berichten eines Hergangs -Über den Hergang Vermutungen anstellen -Eine Hypothese aufstellen und prüfen -Darstellen der Ergebnisse eines Experiments durch Tabellen und Diagramme -Eine Geschichte erfinden; und lesen -Theater spielen -Reigen singen -Rätsel raten -Einen Witz machen; erzählen -Ein angewandtes Rechenexempel lösen -Aus einer Sprache in die andere übersetzen -Bitten, Danken, Fluchen, Grüßen, Beten. 44 It may be useful to remind ourselves at this point that LW had no ambition to develop a systematic terminological apparatus. variation and change. 45 In other words, LW no longer tries to account for the difference between meaningful discourse and nonsense in terms of the relation between a proposition and reality, but in terms of the relation between an utterance and the practice in which it occurs. 
Of course, once one has taken this step, one has to give up the hope to be able to reduce the possible ways for an utterance to be meaningful to a few universal types of sentences. It is remarkable that LW chooses mathematical practice as a good example of the historical contingency of practices. One wonders whether this is a deliberately provocative move: while historically and 'anthropologically' correct, it goes against the grain of what mainstream mathematicians and philosophers of mathematics seem to think of math. Interestingly, LW explicitly states that the very purpose of the word Language Game is to highlight the link with Form of Life (cf. also Whiting 2017). 46 It is suggested that language is a mere part of something more encompassing, that LW first calls 'an activity' and, in immediate apposition to 'activity', a Form of Life. It is as if he uses the term Form of Life to correct the use of the word 'activity', as if 'activity' was still not general enough a term. The point is that meaning cannot be reduced to a language-internal, local phenomenon, but should be approached holistically (see section 1.1.2 below). 47 The extensive list of examples of Language Games helps highlighting both the practical aspect and the heterogeneity. The paragraph closes by contrasting the 'new' approach with traditional logic (including the TLP), in which only propositional (true or false) sentences were considered relevant. 45 Wieviele Arten der Sätze gibt es aber? Etwa Behauptung, Frage und Befehl? -Es gibt unzählige solcher Arten: unzählige verschiedene Arten der Verwendung alles dessen, was wir »Zeichen«, »Worte«, »Sätze«, nennen. Und diese Mannigfaltigkeit ist nichts Festes, ein für allemal Gegebenes; sondern neue Typen der Sprache, neue Sprachspiele, wie wir sagen können, entstehen und andre veralten und werden vergessen. (Ein ungefähres Bild davon können uns die Wandlungen der Mathematik geben.) 46 Das Wort »Sprachspiel« soll hier hervorheben, daß das Sprechen der Sprache ein Teil ist einer Tätigkeit, oder einer Lebensform. Führe dir die Mannigfaltigkeit der Sprachspiele an diesen Beispielen, und anderen, vor Augen: [hereafter follows the list of Language Games already quoted above]. -Es ist interessant, die Mannigfaltigkeit der Werkzeuge der Sprache und ihrer Verwendungsweisen, die Mannigfaltigkeit der Wort-und Satzarten, mit dem zu vergleichen, was Logiker über den Bau der Sprache gesagt haben. (Und auch der Verfasser der Logisch-Philosophischen Abhandlung.). 47 Baker and Hacker's commentary on PhU §23 includes the following observation: "It is unclear what principle of classification (if any) is employed. It is not obvious, e.g., that requesting and thanking, which are speech-acts, are on the same level as forming and testing a hypothesis or as acting on-stage, which are not." ((Baker and Hacker 2005), p. 87). Searle's notion of 'speech act', which despite its currency in philosophical circles has never been a very fruitful way of analyzing actual discourse ( (Mey 2001), pp. 212-217), evidently cannot be projected onto LW's thought and is not only subject to a criticism very similar to the one LW applies to 'sentence types', but also is unable to account for the hierarchical nature of intentionality and for the similarity -or rather identity-of the role of intentionality in the case of verbal behavior and non-verbal behavior. For an approach to discourse coherence and pragmatic 'sense' that is more in tune with the Wittgensteinian lines of thought analyzed here, see (Scheppers 2003). 
(C) terminology

Some of LW's coinages have come to have a life on their own in the secondary literature, though they do not necessarily fulfill a singular terminological function within LW's work. Perhaps the most obvious quasi-technical term in LW's later work is Language Game / Sprachspiel. The term that most famously covers the notion of structural patterns beyond the simple Language Game and in the literature has been treated as a quasi-technical term and a key concept in LW's approach, despite its very low frequency in LW's oeuvre, is 'Lebensform'/'Form of Life' ((Baker and Hacker 2009, 218-23); (Hacker 2015); (Moyal-Sharrock 2015)). The word 'Lebensform' as such occurs only 7 times in LW's writings as represented in the standard editions (PhU §19, §23 and §241; PhPF i §1 and xi §345; BPhP1 §630; ÜG §358). 48 We might be tempted to try and demarcate/define LW's key concepts with respect to each other by saying things like: Form of Life designates larger scale and/or more general patterns than Language Game, or: Language Game denotes formal linguistic patterns, and Form of Life more holistic entities. This also appears to be the way in which these terms live on in the literature. But in fact, LW's ways of relating Language Games and Form of Life fluctuate a lot. A first aspect concerns the relative complexity of the patterns referred to: some passages suggest that a Form of Life is something 'larger' or 'more encompassing' than a Language Game (cf. PhU §19, PhU §23, PhU § §240-242); in other passages, more small-scale or limited patterns are also called Forms of Life: being able to speak or not (BPhP1 §630), the fact that we assume that certain physical aspects of the world remain stable (PhPF xi §345), a certain kind of certainty, expressed through the 'I know' game (ÜG §358). The term Form of Life accordingly refers to patterns that coincide with a language as a whole, or the way of life of an entire community/culture, as well as to patterns that are not 'larger' than a single Language Game. In the above we have seen that LW uses the term Language Game to denote a number of practices that do not even have to involve language. For the purposes of this study, I have therefore chosen to deviate from LW's own terminological practice and use the term "practice" as an umbrella term for many different terms within LW's text (including the very common "Language Game"). 49 When there is no emphasis on agency or activity, I will sometimes also use terms such as 'Forms of Life' or 'our lives', loosely, not unlike LW's own use of these terms.

48 Baker and Hacker add two cases from the manuscripts that do not appear in the standard editions, plus a reference to Philosophical Occasions, but do not mention BPP1 §630 (Baker and Hacker 2005:74-75; see also more recent contributions, e.g. by P.M.S. Hacker (Hacker 2015) and Anna Boncompagni (Boncompagni 2015), for a few more relevant excerpts from the Nachlaß).
49 Let me point out that this is not that different from using the Wittgensteinian term Form of Life as a common terminus technicus when discussing LW's work: Form of Life /Lebensform was not used systematically, as a terminus technicus, by LW either.
(D) LW's philosophy of mathematical practice: mathematical practices embedded in applications / Forms of Life So, within the context of the Grundlagen-debate, it is one of LW's most significant moves to attract the attention away from axiomatic 'foundations' to actual practices involving operations such as counting, measuring, drawing, proving, etc. Many times, LW also insists on applications at a very practical level, in the context of everyday buying and selling things, building things, engineering, accounting, graphic design, music. Any reader of the published later works (say: PhU) recognizes this as one of the most typical Wittgensteinian moves, and many of the standardly quoted examples involve math-related activities. Thus, PhU §1 already involves the example of buying five red apples and the use of number words in various practical contexts continues to be a recurrent example. Various types of measuring and calculating prices as part of buying and selling also are recurrent examples, as are examples related to engineering applications (building houses, bridges, machines, etc.). LW often goes to (relatively) great lengths to evoke the details of the practice, thus e.g. in the case of the sometimes rather exotic 50 measurement procedures in BGM 1, § §143-151. In this passage, LW makes us consider various ways in which woodsellers could determine the price of timber: • by piling up the wood, measuring height, width and length of the pile, multiplying the outcomes of the three measurements and calling the outcome of the multiplication the price in pennies; • by piling up the wood and measuring the surface it covers; • LW offers some alternative scenarios: pricing by weight, by labor time or by effort (take the age and skill of the woodsman into account!), or simply giving the wood away. 51 In this case, LW is quite explicit about some of the points he wants to make: • he insists on the fact that what these people do is part of a practical context (the buying/selling of wood for the purpose of building a house): the measurements and 50 For LW's practice of conjuring up imaginary practices, see section 1.3 below. 51 Comparing the standard edition of this passage with Ms-118,33v-36r would -by the way-be a good way to illustrate the impact of the editorial practices of LW's literary executioners: the published text does look a lot more continuous and smooth (the editors took out LW's many self-interruptions), but we also lose the connection with the more abstract issues that LW tried to come to grips with by means of this example, as well as the sense of intense involvement on the part of the author. The addition of "151. (A society acting in this way would perhaps remind us of the Wise Men of Gotham.)" in the standard edition, was taken from a previous version of this text (MS-117) and it remains to be seen whether it was a good idea to compile the two versions the way the editors did: referring to legendary fools in this context does make a difference for how one reads the text. calculations (if that is what they are) are a way to determine what the builder needs to pay to woodsalesman for his timber; • he points out that the propositional nature of equations (which some of us might be tempted to consider essential to calculation) need not play a role in such practices at all; • it is suggested that what counts as a correct calculation of a price (which criteria are considered relevant, e.g. the surface covered by the pile, the weight of the wood, the age of the woodsman, ...) 
depends entirely on how the calculation fits in with its wider practical context and that there is no a priori way to know what would count as correct/acceptable outside an actual practical context. NB: none of the scenarios that LW imagines is so outlandish that it could not be found in the historical or ethnographic record. (E) pragmatics first vs. the primacy of truth and propositional knowledge A pragmatic account of meaning implies that meaning in the case of linguistic behavior (Language Games in the more literal sense of the word) is conceived of as entirely parallel to meaning in non-communicative activities. 52 This appears to be one of the recurring themes in LW's later work and is already evident if you look at the list of examples at the beginning of PhU quoted above. Note that this is contrary to the assessment that LW was mainly a philosopher of language. Of course, LW was dealing with philosophy and logic as the main topic of his work, and philosophy is essentially a verbal endeavor, which means that he is mainly focusing on things that are essentially of a linguistic nature. But his main contribution to the study of language is clearly that he showed that meaning is not primarily a matter of language: what is meaningful about language is not primarily linguistic. 53 A well-documented aspect of LW's intervention in the history of philosophy is that the 'pragmatic first' idea displaces the primary status of truth. LW's later work documents the realization that conveying information in such a way that the truth of it is crucial to its meaning, is only one among many ways in which language can function; in other words, truth 52 In a completely different context, I have been arguing for the importance and correctness of the Pragmatics First claim (including the parallelism between communicative/linguistic meaning and non-linguistic/noncommunicative meaning), which I here attribute to LW, since the 1990s (most important publications (Scheppers 2003) and (Scheppers 2011), but see already (Scheppers 1997)). 53 LW's later philosophy was wrongly understood as a 'philosophy of language' from the beginning, as becomes clear from the following report of LW's lectures in the years 1930-1933 by G.E. Moore: "[...] he held that though the "new subject" must say a great deal about language, it was only necessary for it to deal with those points about language which have led, or are likely to lead, to definite philosophical puzzles or errors. I think he certainly thought that some philosophers now-a-days have been misled into dealing with linguistic points which have no such bearing, and the discussion of which therefore, in his view, forms no part of the proper business of a philosopher." (G. E. Moore 1955, 27) is now seen as only one of the possible effects (and in some cases only a side-effect) of the way language works. In the prolongation of that line of thought, lies the notion that at the bottom of propositional (i.e. truth-based) constructions, there is non-propositional, even not propositionalizable, stuff. Hence LW's persistent focus on phenomena like understanding gestures and music, physiognomy (recognizing faces and facial expressions) as a paradigm for meaning (see also paragraph (F) here below and section 2.0.2 below). This is one of the key notions that we have to keep in mind when we try to come to grips with LW's PhilMath. It seems that LW's emerging practice-based account of meaning, or at least the way this is expressed in his writings, is often mistaken for other "-isms". 
To make my point as simply as I can: LW's agenda is to show that meaning need not be a matter of correspondence; the other lines of thought are subordinate to this main move. Even in those cases in which language does refer to things out there, this still is always and irreducibly mediated by the practice which serves as the immediate context for this reference, as well as the encompassing structures (Form of Life, ...) of which this practice is a part.

(F) LW's shift from a semantic to a pragmatic account of meaning and the interpretation of his remarks on mathematics

I believe an adequate understanding of how exactly LW's view on meaning in general shifted from a semantic one to a pragmatic one helps account for a few apparently problematic issues concerning his (also evolving) views on mathematics. LW's apparent normativism in PhilMath is a case in point: at a certain stage of his development, LW experimented with the idea that mathematical utterances, like the sentences of logic (which already in the TLP were shown to be tautologies and therefore not really meaningful), are grammatical sentences, parts of a grammar, a set of rules for the use of words ((Frascolla 1994); (Rodych 2011)). This intermittent but long-lasting experiment with this concept of grammar as an account of meaning should be understood within his even longer-lasting and more fundamental concern with non-propositionality, starting with the realization in the TLP that logical (as well as ethical and aesthetical) sentences are tautological and therefore by definition transcendental. 54

54 If you define the world as the set of all facts and facts are the referents of propositions (defined in terms of binary truth values), then it follows that tautologies do not refer to the world, i.e. are by definition transcendent. Cf. also LW's use of the term 'mystical' to refer to the non-propositional (Breitenbach 2008); while this term does make sense in the context of TLP, in which propositional language is considered the only type of meaningful language, his later work is characterized by the realization that our everyday use of language makes sense without being rooted in propositional truth, and from this point of view, it would be strange to call all of these non-propositional aspects of our everyday lives 'mystical'.

One interesting instance of the non-propositional underlying the propositional is the way we should view axiomatic systems from this perspective, which also impacts our view on how natural language works and how mathematics works: the function of axiomatic systems within mathematics is completely displaced, in that it can no longer be seen as a foundation; on the contrary, formal math is now seen as grounded in everyday practices such as counting, measuring, selling and buying, building, etc. rather than the other way round (see also section 2.3 below, in which we analyze passages in which LW explicitly argues against the foundational function of axiomatic systems).

(G) LW's pragmatic account of mathematical meaning vs. model theory, proof theory, etc.
The move towards a pragmatic account of meaning also opposes LW to the prevailing views of meaning in math, which seemed to have moved in the opposite direction in the course of the 20th century: 55 instead of emphasizing those aspects that math has in common with other meaningful human behavior, there has been an ongoing tendency for math to try and incorporate more and more aspects of its own functioning within its own formalism: model theory, category theory, and proof theory are supposed to somehow express the self-understanding of math, by turning key aspects of its own functioning into mathematical objects. Thus, formal proof theory (as part of mathematical logic) views mathematical proofs as mathematical objects, the features of which can be studied by mathematical means. However, from a pragmatic and/or anthropological point of view, this approach has the obvious disadvantage that mathematical proofs are no longer studied qua proofs, in the normal 'human' sense of the word: as human actions that intend to convince one of the truth of some claim. Thus, LW's work on proofs asks basically the following question: how can something that does not convince a human mathematician of the truth of a claim be considered a proof? Other aspects of LW's work on proofs (e.g. his insistence on surveyability etc.) can readily be understood from this point of view. This part of LW's PhilMath has been thoroughly covered in the literature (cf. for instance Felix Mühlhölzer's monumental book (Mühlhölzer 2010); see also (Mühlhölzer 2006), (Floyd 2001)); suffice it here to point out that LW's opinions on the matter of proof are clearly related to his basic 'pragmatic' stance: the meaningfulness of something equals the way it functions within the practice(s) in which it actually functions. Similarly, model theory is an attempt to capture the meaning of formal theories in mathematics in terms of their relation with the mathematical structures for which their statements are true. Again, this approach moves in the opposite direction from LW's anthropological/pragmatic approach: whereas model theory tries to incorporate the semantics of mathematical formalism into mathematical formalism itself, LW emphasizes the fact that the meaningfulness of any mathematical entity ultimately depends on how it is embedded in 'our lives'. Whatever model theory may achieve otherwise, it will not be able to capture the fact that the meaningfulness of mathematics (or of items within mathematics) is fundamentally similar to meaningfulness in general, i.e. as it applies to any other human endeavor.

55 Interestingly, the 'pragmatics first' aspect of LW's approach also is completely different from Carnapian pragmatics, and from the way pragmatics is still construed in most grammatical approaches within linguistics.

(H) LW on "dead signs" and "mindless calculation"

A very similar idea is expressed in terms of the difference between live and dead signs. 56 One of the early occurrences of this idea in LW's work is in the seminal passage towards the beginning of the so-called Blue Book, in which we also find an instance of the idea that meaning is use:

Ts-309 [The Blue Book, 1933-1934], 6-7: Frege ridiculed the formalist conception of mathematics by saying that the formalists confused the unimportant thing, the sign, with the important, the meaning. Surely, one wishes to say, mathematics does not treat of dashes on a bit of paper.
Frege's idea could be expressed thus: the propositions of mathematics, if they were just complexes of dashes, would be dead and utterly uninteresting, whereas they obviously have a kind of life. And the same, of course, could be said of any proposition: Without a sense, or without the thought, a proposition would be an utterly dead and trivial thing. And further it seems clear that no adding of inorganic signs can make the proposition live. And the conclusion which one draws from this is that what must be added to the dead signs in order to make a live proposition is something immaterial with properties different from all mere signs. But if we had to name anything which is the life of the sign, we should have to say that it was its use. The idea is that mathematics requires a non-mechanical interpretation/use, in exactly the same way that the mere manipulation of forms (i.e. without any verbal semantics) is not logic, 56 The image is not limited to LW and scholars influenced by LW (e.g. Mühlhölzer's "On Live and Dead Signs in Mathematics" (Mühlhölzer 2014)), but is pervasive in all corners of the literature, see e.g. (Ferreirós 2016): p.42: "formal systems come to life"; p.45: "maths are not cold and bloodless"; (Livingston 2015) p. 204: "When professional provers read mathematical argumentation, they seem to always, unavoidably, seek to find and maintain the association of the text with proving's lived work. If they are not doing this, they are not fully engaged in the professional practice of doing mathematics.". A very similar idea is expressed in terms of "emptiness", see. e.g. Dieudonné's expression "mathématiques vides" in the title of his article "Mathématiques vides et mathématiques significatives" (Dieudonné 1982), and "les mathématiques non motivées ou le délayage" in the body of the text; in this article, the expression does (of course!) not refer to formalism in math as such, but to the fact that a large part of what is being published by professional mathematicians is not motivated by any genuine interest in anything genuinely mathematical. Timothy Lampert, admittedly acquainted with LW's work and admittedly in a preprint, dares to use the astounding formula "sound but empty proofs" (Lampert 2017). qua model of human reasoning. By stating that "no adding of inorganic signs can make the proposition live", LW appears to directly attack the model-theoretic approach, which does exactly that. A similar, if not the same, idea is developed in MS-126,30-32, d.d. 19421028, a.k.a. BGM V, §2, in which LW asks whether the purely formal, mindless, manipulation of signs, whether by a machine or by drilled humans, counts as calculating. 57 And his answer to the question is straightforward and completely consistent with the rest of his account of meaning (in math and in general), as analyzed above: [...] it is essential to mathematics that its signs are also employed in civilian clothing. It is the use outside mathematics, and so the meaning of the signs, that makes the sign-game into mathematics. In the next few pages, LW explores in great detail the idea that people might use calculating machines for a wide variety of purposes, even if they have no understanding of math whatsoever. It is important to understand that LW does not suggest that the difference between meaningful and meaningless depends on the involvement of a psychological subject, which would go directly against the holism expressed in e.g. his private language argument (see below). 
57 Es ist der Gebrauch außerhalb der Mathematik, also die Bedeutung der Zeichen, was das Zeichenspiel zur Mathematik macht. So wie es ja auch kein logischer Schluß ist, wenn ich ein Gebilde in ein anderes transformiere (eine Anordnung von Stühlen etwa in eine andere) wenn diese Anordnungen nicht außerhalb dieser Transformation einen sprachlichen Gebrauch haben. Aber ist nicht das wahr, daß Einer, der keine Ahnung von der Bedeutung der Russellschen Zeichen hätte, Russells Beweise nachrechnen könnte? || der nichts von der Bedeutung der Russellschen Zeichen wüßte, die Russellschen Beweise nachrechnen könnte? Und also in einem wichtigen Sinne prüfen könnte ob sie richtig seien oder falsch?

What secures meaningfulness is embedding in a practical, everyday situation. A normal human form of life -of course-does imply an agent with a number of psychological attributes, but the 'subject' is not the proper locus for meaning. This is one of the points that I want to cover in the following section.

Wittgenstein's holism and structuralism about practices and Forms of Life

Anti-reductionism is a well-recognized feature of LW's approach. Practices (incl. Language Games etc.) and Forms of Life are the embodiment of this anti-reductionism, in that they are multi-dimensional and resist reduction to any of the dimensions contained within them. I distinguish two separate aspects:
-holism: various dimensions that are relevant to a domain are considered parts of a single system, rather than viewing them as external to each other;
-structuralism: the identity of various items within a system is considered in terms of their place within the system, rather than as prior to the system.
In the context of the present study, I cannot show that -or rather to what extent-the 'ontological' picture that I sketch below accurately represents LW's own views, but I do want to claim that the picture emerges naturally from LW's text and for the purposes of this study, the following, admittedly somewhat blunt and dogmatic presentation will have to do. Beyond the matter of Wittgenstein-exegesis, I will also insist on the ways in which LW's holistic and structuralistic conceptualization of practice, which has not been picked up in present-day PhilMathPract, could still contribute to current issues.

(A) holism: practices as multidimensional structures

The way LW speaks of Language Games and Forms of Life already shows that he conceives of them as multidimensional. Elsewhere ((Scheppers 2017) §4), I distinguished the following dimensions:
• a linguistic dimension: obviously, LW insists a lot on the way words are used in the context of various practices, if only because his subject matter is philosophy, a mostly verbal kind of practice;
• a pragmatic dimension: what an agent does within the framework of a practice is an obviously relevant aspect of it;
• a social/cultural/historical dimension: LW often makes his readers picture various exotic populations or primitive cultures so as to highlight the variability and contingency that underlies our forms of life;
• a mental/cognitive/psychological dimension: what agents perceive, think, feel, etc. are major aspects of how they experience practices; 58
• a biological dimension: our biological constitution (e.g. the fact that we have hands, eyes, ... etc.), different from the constitution of other species, is a major factor in how our form(s) of life evolved (cf.
LW's occasional comparisons with dogs and lions or the evocation of societies in which everyone is color blind, ...); 58 LW's main points about psychology (not only in the works that have been edited under such titles as Bemerkungen über die Philosophie der Psychologie or Letzte Schriften über die Philosophie der Psychologie, but also in the famous 'private language argument' in PhU) intend to show how psychological aspects cannot be understood outside the context of the practices within which their expression occurs, but the structural relation (see below) goes both ways: psychological aspects are an integral part of the practices in which they occur as well.
• a physical/material dimension: the physical properties of our world are obviously constitutive of our forms of life (e.g. counting would not have been as important if the objects we would encounter in daily life had had less stable identities, like clouds; measuring would not be as viable if the materials at our disposal changed shape more readily; weighing cheese would not make sense if cheese expanded and shrunk without apparent cause (PhU §142); etc.);
• an epistemic dimension: although the net intended result of LW's contribution may have been to give epistemic aspects (knowledge, truth, certainty, ...) a much more peripheral role than they have in the philosophical tradition, especially in fields like PhilMath, epistemic considerations continue to play a role in LW's analysis of certain types of language use.
Thus, LW's approach foreshadows a number of developments that occurred much later in the development of some branches of PhilMath or were initiated outside PhilMath and turned out to be philosophically relevant, most notably the realization that mathematics shows other philosophically relevant aspects besides the epistemic ones (cf. section 3.2.1(C) below).

(A1) holism at work (1): practice-based vs. agent-based vs. community-based

In a large part of the philosophical tradition, the default locus for meaning has been the subject. In many avatars of the practice-turn, the subject is merely revamped as the agent and continues to function in exactly the same way as the subject did. One of the advantages of this version of the practice turn is that it remains compatible with various more or less commonsense versions of -what remains basically-reductionism/physicalism/naturalism: because the subject/agent coincides with a biological organism, we can continue to entertain the idea that science will at least in principle be able to deal with whatever we want to describe in terms of practice, in a unified manner, i.e. in terms of biology, chemistry, physics. As far as practice-based accounts of mathematics go, this is the point of view adopted, for instance, in José Ferreirós' primer Mathematical Knowledge and the Interplay of Practices (Ferreirós 2016), in which practice-based means mostly agent-based. However, for both practical/empirical reasons (complex practices can obviously not be reduced to the agency of a single human agent) and more theoretical reasons (cf. the 'private language argument'), most authors quickly come to the conclusion that practices cannot directly be reduced to individual agents. In that case, 'the community' is often what is invoked as an ersatz-subject. Kripke (Kripke 1982) -and in his defense of Kripke, Martin Kusch (Kusch 2006)-attribute this vision to LW.
However, this use of the notion of 'community' retains the disadvantages of reductionism (because it is a form of reductionism) and is not viable as an interpretation of LW's work: 59 the foundationalism and reductionism of Kripke's way of invoking 'community consensus' as the ultimate ground for meaning is incompatible with LW's holism, as embodied by -amongst others-the concept of Form of Life, as well as the anti-foundationalism he demonstrates throughout his work on math. 60 Furthermore, there are methodological objections to simply positing communities as prior to practices, 61 as well as empirical ones. If a relatively fixed group of individuals share many different aspects of life, as in nuclear families, a remote village, or perhaps an office in which colleagues work closely on a daily basis, then it seems completely natural to speak of a community, but it seems arbitrary to speak of the "community of mathematicians", let alone even larger and/or more heterogeneous collections of people that have very few real contacts with each other and/or very little in common. Thus, we should ask at least the following simple questions. In what sense is mathematics actually based in an actual 'community'? Is this always the case? In what sense are the interactions between mathematicians that we observe stable and intensive enough to be qualified as 'communities'? Are there perhaps historical cases in which we can see actual communities at work and cases in which we can't? Etc. A case in point is Netz's work on the social aspects of Ancient Greek math (Netz 1999) (Netz 2009). If Netz is right about Ancient Greek math, Ancient Greek elite mathematics was not based in communities in any real sense at all (whatever your interpretation of 'community' may be), in that most mathematicians worked (or is it 'played'?) in complete isolation, as far as real-life interaction goes (in some periods there simply may have been no mathematician active in the whole Greek world). 62 For the present Wittgenstein-related purposes, it suffices to point out that projecting the idea that communities could be construed as the ultimate locus of meaning does not make much sense if we consider LW's anti-reductionist (holist & structuralist) implementation of the notion of practice.

59 Severin Schroeder comes to the same conclusion ((Schroeder 2021), §7.1 'Rule-following and community').
60 See also Floyd (Floyd 2021), p. 56, for basically the same argument.
61 As a matter of method, if communities are a priori considered prior to practices, this parti pris will inform the way we will construe the basic phenomena, in that it will be impossible to even perceive human interactions that cannot easily be construed under this label. Of course, if it can be shown that certain practices in certain cases coincide with, or depend on, or give rise to, etc. the existence of a community (under some definition of that term), then this would be a noteworthy empirical fact. For instance, if you do find a community somewhere, for instance a group of monks living together in an abbey and practicing math together, then that would be an interesting result, especially if you can -for example-show that their math is different from the math done by other contemporary networks or groups. The notion of 'community' would be especially instructive if you can show that what makes their math distinctive is related to other distinctive features of other communal practices of theirs. That would be an interesting result. But that is not at all the same thing as positing a priori that the social aspect of human practices is always supported by a community, let alone that meaning ultimately is a matter of consensus in a community, as is often presupposed by both Wittgenstein-scholars (Kripke (Kripke 1982) and many others) and philosophers of mathematical practice ((Ferreirós 2016)). Therefore, it is not correct to equate practice with community from the outset: this would tend to make both terms ('practice' and 'community') almost void, and would preclude a proper analysis of the actual role of communities (if any) in concrete cases. The concept of 'community' as the somehow natural locus of human interaction is not politically/ideologically neutral either: the suggestion that communities are the 'natural' (with the connotation of 'desirable') way for people to experience their social nature has strong right-wing, anti-humanist implications. Alternative conceptualisations, without the undesirable connotations of 'community', are available. See, for instance, the notion of 'nexus of practice' in (Scollon 2001) and (Scollon and Scollon 2003).

(A2) holism at work (2): practice-based vs. knowledge-centered

In a similar fashion, holism about practice also displaces the status of propositional knowledge somewhat to the periphery of practices, as compared to the center-stage role knowledge always has played in the philosophical tradition, especially in PhilMath. Mainstream approaches to PhilMath are all basically epistemological, i.e. in these approaches, math is viewed as essentially a body of propositional contents, and mathematical practice is viewed as of only marginal interest to the understanding of the subject matter, at best. This epistemological emphasis takes several avatars, which do not all have the same consequences, but have in common that the propositional contents, whether they are construed as human knowledge or as objects independent of human cognition, are considered as essentially independent from human practice. LW's holism about practice, as applied to mathematics, opposes this epistemological bias on a very fundamental level. The idea that mathematical practice may have some philosophical relevance has become acceptable in at least some parts of the academic landscape, but almost invariably the underlying research agenda, the main issues about mathematics that one is interested in, continue to be formulated in entirely epistemological terms. 63 In the context of LW's pragmatic view of meaning and his holistic conception of practice, propositional knowledge loses its self-contained status, in that its meaning is now construed as a matter of how it functions within practices and how it is embedded in our forms of life, in the most general sense of this term (i.e. including the physical, biological, and cognitive aspects, but also the highly variable, contingent, historical, cultural and social aspects). In the texts that we focus on for the purposes of this study, we read several passages in which LW goes as far as pointing out that one can easily imagine instances in which activities that 62 Of course, Netz may be fundamentally underestimating the permeability of mathematics, i.e.
overestimating the impermeability of the boundaries between mathematics as a 'ludic', 'elite', intellectual endeavor on the one hand, and applied geometrical and arithmetic (accounting) practices on the other: the elites that could afford to engage in ludic (i.e. non-professional, non-applied) math were typically also landowners and must have had at least a passive acquaintance with professional geometry (in the etymological sense of the word) and accounting. 63 The title of Ferreirós' Mathematical Knowledge and the Interplay of Practices (Ferreirós 2016) is emblematic in this respect. are basically indistinguishable from normal calculations, can perfectly do without any propositional knowledge being involved at all (see BGM 1, § §143-151, quoted in section 1.1.1(D) above; cf. also section 1.3 below for a number of potential applications of math-like techniques in which propositional truth need not play a role at all). For instance, in the context of cutting wood to size for carpentry, calculation-like techniques need not at all involve strings of signs that are evaluated for truth, as long as one distinguishes between doing it right and doing it wrong. It follows that the status of knowledge within mathematical practice should be an empirical issue, especially in the present 'naturalist' era. The question should be: when and how does knowledge and truth play a role in mathematical practice? In any case, LW's work does not take the propositional/epistemic nature of mathematics for granted and in all seriousness asks the question as to what the exact role of mathematical propositions is amongst the other aspects that make up mathematical practice. 64 My claim is that within a coherently practice-based account, the role of knowledge will ipso facto be displaced as compared to an epistemological account: one can't coherently think in terms of 'practice' and then construe knowledge as external to the practice (nor vice versa, by the way), one can't coherently attribute an autonomous ontological status to knowledge, independent of the practice in which it occurs. So my Wittgensteinian criticism of the way 'practice' is construed in most of PhilMathPract, is one of lack of internal coherence, not one of lack of Wittgensteinian orthodoxy. (B) structuralism: identity as irreducibly relational I show elsewhere ((Scheppers 2017), Ch. 1) that the only charitable reading of LW's multidimensional account of practice implies that the relation between these dimensions is structural, internal to the structure of practices (otherwise the practice-based account would be vulnerable to the same objections as any other reductionist approach), which in its turn implies that none of these dimensions can be primary with respect to the other dimensions. This aspect is not clearly understood in mainstream, especially 'naturalist', versions of the practice turn, or has at least not (not yet?) been picked up. Again, this topic, in its most generic 'ontological' form, is as such not directly relevant to the subject matter of the present study (which is why I will not argue for it here at any length), but it has consequences for the interpretation of a few notable aspects of LW's PhilMath (which is why I have to briefly mention it). 64 Questioning the primacy of purely epistemic issues has been proven extremely fruitful in Philosophy of science and there is no reason why it should not be equally fruitful in PhilMath. 
structuralism at work: "we don't know the meaning of a theorem unless we know the way to prove it" One of the more controversial among LW's claims has always been that the meaning of a sentence (say: a conjecture, e.g. Fermat's) changes once it is proven, i.e. that a theorem has not the same meaning as the conjecture the same words/symbols used to express. I will here look at the way in which this idea is articulated in LW, Ms-126,59-61, d.d. 19421108-19421110. 65 Some of the ways in which LW tries to articulate his point in this excerpt should not be too hard to understand (or to swallow) at all: "The question [does '770' occur in the decimal development of pi, or not?] changes its status as soon as it becomes decidable. For a connection is made where there used to be none" should make sense to everyone, perhaps slightly trivially so: in actual practice, one can obviously do other things with the question and the concepts that are used as soon as one knows how to go about answering it, than was previously the case. LW then compares this with an author who has not yet decided whether one of the characters in his upcoming book has a sister. This comparison and the point it is supposed to make should be clear and understandable, even if some/many of us may not want to follow LW where this point leads us. LW's point is not that hard to understand from the point of view of his account of meaning in general: if there is literally no other way to establish the meaning of an utterance than by looking at how it is used within the practice in which it is actually used, then it follows that the meaning of the sentence is different when it is used in one context as compared to another 65 8.11. Wie seltsam die Frage ist ob in der unendlichen Entwicklung von π die Figur φ(eine gewisse Anordnung von Ziffern, z.B. '770') vorkommen wird, sieht man erst wenn man die Frage in einer ganz hausbackenen Weise zu stellen versucht: Menschen sind darauf abgerichtet worden nach gewissen Regeln Zeichen zu setzen. Sie verfahren nun dieser Abrichtung gemäß & wir sagen es sei ein Problem, ob sie der gegebenen Regel folgend jemals die Figur φ anschreiben werden. Was aber sagt der, der || welcher, wie Weyl, sagt, eines sei klar: man werde oder werde nicht, in der endlosen Entwicklung auf φ kommen? Mir scheint, wer dies sagt, stellt schon selbst eine Regel, oder ein Postulat auf. Wie, wenn man auf eine Frage hin erwiderte: 'Auf diese Frage gibt es bis jetzt noch keine Antwort'? So könnte etwa der Dichter antworten der gefragt wird ob der Held seiner Dichtung eine Schwester hat oder nicht -wenn er nämlich noch nichts darüber entschieden hat. Die Frage -will ich sagen -verändert ihren Status, wenn sie entscheidbar wird. Denn ein Zusammenhang wird dann gemacht, der früher nicht da war. Man kann von dem Abgerichteten fragen: 'wie wird er die Regel für diesen Fall deuten?', oder auch 'wie soll er die Regeln für diesen Fall deuten'. Wie aber, wenn über diese Frage keine Entscheidung getroffen wurde? -Nun, dann ist die Antwort nicht: 'er soll sie so deuten, daß φ in der Entwicklung vorkommt' oder: 'er soll sie so deuten daß es nicht vorkommt', sondern: 'darüber ist noch nichts entschieden'. Wir mathematisieren mit den Begriffen. -Und mit gewissen Begriffen mehr als mit andern. 10.11. Ich will sagen: Es scheint, als ob ein Entscheidungsgrund bereits vorläge; & er muß erst erfunden werden. 
Käme das darauf hinaus, zu sagen: Man benutzt beim Reden || Denken über die gelernte Technik des Entwickelns das falsche Bild einer vollendeten Entwicklung (dessen, was man für gewöhnlich 'Reihe' nennt) & wird dadurch gezwungen unbeantwortbare Fragen zu stellen.

context. 66 In the case of a conjecture, before one knows how to go about proving it, it is literally not clear yet how it will fit in with other things one says or does; so, one is literally not clear about its meaning yet. One does not necessarily need to agree with LW's assessment, but one should be able to see how it fits in with, and makes sense within the context of, LW's structuralism and pragmatism, i.e. it should be clear that this is not an outrageous, gratuitous statement, but a logical part of a wide-reaching and coherent account. It may at this point be interesting to anticipate Part 2 of this study and point out that what LW is targeting here is the naturalness of the idea that "in the infinite decimal development of pi, we must either find or not find <770>" (which he attributes here to Weyl 67): from the point of view of LW's holism and structuralism about meaning, there simply is no way to isolate these contents from the way they function within our actual practices, and as it stands, the contents are freewheeling, have no real function within any actual mathematical practice, and LW blames Weyl for pretending they do have a definite meaning. Again, one may or may not want to follow LW in this line of thought, but one cannot deny its coherence and consistency. 68

(C) holism and structuralism at work: objecthood and objectivity

This section is not really based on an analysis of any aspects of LW's work at all. The reason why I chose to integrate it at this point in the present study is that it shows a Wittgenstein-like holism-cum-structuralism at work, as was the case in the paragraphs here above, and that it shows how such a more radically pragmatic framework could contribute to present-day PhilMath, as was also the case here above. The ontological issues concerning mathematical objects are interesting in this context because it is perhaps the last domain in which ontological problems are still hotly debated, and because it allows for a very concrete demonstration of the potential of a pragmatic approach. The prototypical example of an object is the mesoscopic physical object: it is physical, it can be readily perceived/cognized and manipulated by humans, it is relatively permanent and apparently independent of our perception of it... Mathematical objects can be considered 'objects' in that they are perceived/cognized as relatively permanent and manipulated within mathematical practices, in the same ways that other objects are manipulated and perceived in other practices, but mathematical objects are traditionally considered "special", perhaps because (1) they are "not physical", but (2) at the same time they appear to be relatively independent of our perception and/or our imagination.

66 ("The door is open" may be an invitation to come in or a request to close the door)
67 For LW's references to Weyl (and other authors), see (Biesenbach 2008c); (Biesenbach 2008a); (Biesenbach 2008b).
68 It would be interesting to ask the question as to how conjectures actually function in real-life mathematical practices, e.g. as part of an ethnomethodological research project in the style of (Livingston 1986) (Livingston 2015).
But from a pragmatic (or even a more general phenomenological) point of view, this is not that special. 69 In the phenomenology of objecthood, i.e. the way in which objects occur in actual practice, 70 the physicality of physical objects does not really play a role. Not even in the case of eminently physical objects. When we use a table, it's the tablehood of the table that constitutes its objecthood, not its physicality, its molecular structure, its physical properties, etc. (unless these features become a part of what is wrong with the table, of course). 71 This point is made quite clearly by cases in which we perceive tablehood, without underlying physicality, as in representations (say: pictures, films, etc.) of tables, including even completely fictitious tables, as in cartoons (or even dreams): what makes these tables understandable as tables is their function, the way they are used as a table, despite their not being physical. A rich ontological taxonomy is needed, which shows how objects actually function within practices. We adopt a 'structural' approach to objecthood, i.e. objecthood and its different subcategories are defined in terms of their function within a practice: something is a certain kind of thing by virtue of the way it is manipulated. From a pragmatic point of view, is an object what functions (is manipulated, transformed, ...) as an object within a practice, for instance: as a tool, as an ingredient, as a product, as infrastructure. The details of such a typology should be an empirical matter (including not only pragmatic, ethnographic or otherwise phenomenological but perhaps also cognitive approaches) and is not the job of a philosopher, though of course the results of such empirical inquiries may turn out to be philosophically relevant. I am ready to believe that there are very specific features to mathematical objects, but I want to see that demonstrated by means of actual analysis of actual mathematical practices, not posited a priori. Interestingly, this line of argument puts us (again) on the opposite side of the argument from José Ferreirós, this time with respect to the 'objectivity without objects' claim, which is central to Ferreirós Mathematical Knowledge and the Interplay of Practice (Ferreirós 2016); Chapter 9, carrying the title 'Objectivity in Mathematical Knowledge' (pp. 247-290), is the penultimate and climactic chapter of the book. Ferreirós states his aims as follows: 69 A case in point would be 'fictional' stuff. NB I'm not a fictionalist about mathematics (but then again, I'm not particularly a fictionalist about Mickey Mouse either): the reality/fictionality of stuff is not necessarily a particularly relevant aspect. 70 This is more or less exactly what Martin Heidegger shows in § §15-10 of his seminal Sein und Zeit, by pointing out that 'Zuhandenheit' (i.e. being available in the context of an everyday practice), is the default way for things to 'be'. Cf. Scheppers 2017, Chapter 2, §2. 71 Cf. Heidegger's notions of Auffällichkeit, Aufdringlichtkeit, and Aufsässigkeit, in §16 of Sein und Zeit ((Heidegger 1967)). The celebrated objectivity of mathematical results led many authors to believe that the theorems of mathematics are apodictic, necessary, a priori. From the beginning, we have remained uncommitted to this aprioristic view of mathematics, moreover we have defended the position that advanced mathematics is marked by the presence of hypotheses at its roots. How can the idea of objectivity be rescued in this setting? 
Precisely by considering the interplay of knowledge and practices that takes place in mathematics. From the point of view developed in the present study, there is no reason to want to 'rescue objectivity' (on the contrary: we consistently emphasize the variability and contingency of mathematics in general), but there is also no problem with the objecthood of mathematical objects, which do function like objects within mathematical practices. In other words: we could adopt the slogan 'objects without objectivity', if necessary.

***

The purpose of the above paragraphs was to show (1) how LW's holism and structuralism about practice gives rise to a much more radically pragmatic approach to practice than is currently prevalent in PhilMathPract, and -in a preliminary way-(2) how this approach can contribute (relatively) novel ways to deal with more or less current issues in PhilMath and PhilMathPract.

Integration vs. fragmentation: the local-global dimension of embedding

In his account of meaning, LW insists time and time again on both of the following opposite poles of the same dimension:
-on the one hand, meaning depends on embedding in the very local, small-scale context of prototypical practices / Language Games such as buying something, measuring a piece of wood, etc.;
-on the other hand, the meaningfulness of these very local practices depends on their integration within much larger structures (ultimately "our lives" as a whole, in all their multidimensional glory). 72
So, on the one hand, certain small-scale / local aspects of practices can only be understood in terms of that particular small-scale practice itself. It is at the level of these small-scale patterns that the enormous variability of human practice is most visible (cf. the seminal expression "das ganze Gewimmel" 73 (standard translation: "the whole hurly-burly"), as introduced in BPhP2 §629 (= Zettel §567 = Ms-137,54b = Ts-232,754 = Ts-233b,38)). 74 On the other hand, LW also repeatedly 75 points out that the meaningfulness of practices depends on how they fit in with much larger patterns, whether cultural (cultures, tribes, historical eras, ...) or biological (species, ...) or even physical (the existence or not of entities that are stable enough to be reliably counted), etc. Think of smoking. Grab a matchbox, take out a match, light the match. In order to understand the why's of this pattern, it is important to know what you need the light for. If you light that match to smoke, this instance of smoking a cigarette perhaps coheres with many other times you smoked a cigarette, and it coheres more remotely with various aspects of the tobacco industry and the history of the tobacco trade (or whatever is relevant to your understanding), but it will not be cohesive with encompassing patterns including other adjacent activities of yours, in the way it would be if you needed the light to heat water to do the dishes. Smoking remains a lot more 'local' than doing the dishes: the chain (or rather the tree) of explanations climbs up in a lot more hierarchically articulated way in the case of doing the dishes. 76
(A) the fragmentation of math -the integration of math

And the same goes for math: it can be fruitfully argued that certain aspects of advanced 20th and 21st century math (say: theoretical set theory) are not directly linking up with elementary practices like counting in the same way that basic arithmetic is, that they can be seen as self-sufficient and self-supporting and don't need their historical links to more basic practices to be meaningful. And in a way, this is correct: the meaning of practices often is quite local, not all features derive from its embedding in encompassing practices. 77 In sections 1.1.3(C) and 1.3 below, we will see how LW insists at great length on the great variety of (what he calls) applications that can give meaning to mathematical or math-like techniques.

73 Cf. also Ms-171,4: "Unsere Begriffe, Urteile, Reaktionen erscheinen nie bloß in Verbindung mit einer einzelnen Handlung, sondern mit dem ganzen Gewimmel der menschlichen Handlungen".
74 "Wie könnte man die menschliche Handlungsweise beschreiben? Doch nur, indem man die Handlungen der verschiedenen Menschen, wie sie durcheinanderwimmeln, zeigte. Nicht, was Einer jetzt tut, sondern das ganze Gewimmel ist der Hintergrund, worauf wir eine Handlung sehen, und bestimmt unser Urteil, unsere Begriffe und Reaktionen." "Das ganze Gewimmel" is one of those memorable formulas that LW's work is full of, which also in English translation ("the whole hurly-burly") made history (see e.g. the title of chapter 3 of Lee Braver's (2012) Groundless Grounds: A Study of Wittgenstein and Heidegger, where it is supposed to summarily refer to both LW's and Heidegger's holisms).
75 Even in the very sentence in which he uses the expression "das ganze Gewimmel", the focus is on the hurly-burly as a whole, as opposed to single actions by single agents.
76 For the representation of intentionality by means of a tree structure, see e.g. (Scheppers 2003) and (Scheppers 2011). (By the way: in this context, one could try and experiment with pseudo-quantitative approaches in terms of concepts such as 'depth of embedding' or 'degrees of locality', but in my experience (i.e. I have tried...), that line of thought doesn't lead very far).
77 I clearly remember this reluctance was part of some mathematicians-philosophers' first reaction to Ferreirós' Mathematical Knowledge and the Interplay of Practices (Ferreirós 2016) in the reading group at the Vrije Universiteit Brussel's Centre for Logic and Philosophy of Science dedicated to that book in 2015-2016, before it came out.

On the other hand, in actual practice, contemporary mathematics is not that independent from its roots: there is a structural relationship between them, not only genetically, but also synchronically, through the fact that every mathematician is the product of an education that starts from more elementary numeracy training before embarking on advanced stuff. One could also argue that there is a more intricate structural relationship, as well, which would account for its 'magical' applicability: 78 all human endeavors have developed in a specific context, in which so much is already 'given': a physical environment with specific features (relatively stable mesoscopic objects are a prerequisite for counting), cultural practices (argumentation as part of judicial and political procedures are a prerequisite to proof), ... . Interestingly, Floyd (Floyd 2021) insists a number of times on the fragmentation of mathematical practice. For instance, on p.
53: "This allowance for plasticity in projecting concepts shows, not that our procedures are not rulegoverned, but rather that that notion itself requires parochial elements." 78 Excursus: applicability Apart from the idea that math as an academic discipline is deeply rooted in, or rather: is interconnected with, a complex web of heterogeneous everyday practices, there is also the niche issue of the 'miraculous' applicability of mathematics-internal developments to natural science. This niche, which -as a niche within present-day PhilMathwas famously started by Wigner (Wigner 1960), has had a lot of success and gave rise to remarkable pathos throughout its course. The applicability of basic geometry and arithmetic is -of course-not even an issue: the application existed before the math. But even in the case of more advanced mathematical developments, it can be pointed out that the math and its application (the engineering, the astronomy, ...) grew up together, are both aspects of the same Form of Life: it is no wonder at all that math is reflected in nature, if math and physics and engineering are deeply intertwined. All this has been explored by the late great Mark Steiner (cf. e.g. (Steiner 1989); (Steiner 2009); (Bangu 2006)). However, there also appear to be a number of cases in which the application occurred after the development of the math, and in unpredictable ways that "should not have worked". In reaction to the latter cases, I would like to point out the following: • First, it would be good to look at the details of how exactly the scientists that stumbled on the miraculous applicability went about: what methods were already in place, what exactly was new, etc. This usually mitigates any sense of miracle from the get-go. • Second, the sense of miraculousness may also result from the fact that one may underestimate the depth of the embedding, that one is blind to iceberg of givenness underlying any phenomenon: so much is already in common between our world and our mathematical practice that it is very hard to not overestimate the autonomy of our math. Below, in section 3.3(C), I will try and argue that this account of givenness is at the core of LW's critical philosophy, in the deeply Kantian sense of the word 'critical'. • Third, Whiggishness may also play a role here: there is a bias towards focusing on success stories, and this may result in a skewed view of what is "normal" and what is "miraculous". For every interesting item, there may occur billions of uninteresting items. We have an inherent tendency to look at phenomena that display regularity and to ignore the chaos that we cannot readily describe around those islands of regularity. If one's perception of nature is deeply influenced by math, i.e. if your science is mostly interested in these aspects of reality that can readily be quantified or otherwise represented in mathematical terms, and vice versa, if one's math is deeply intertwined with techniques that deal with our relationship to nature (engineering, physics, astronomy), it is no wonder that math and nature reflect each other. I have nothing more to say about the applicability of math as such. But I will have something to say about the verbiage ("awe", "miraculous", ...) employed by some of the authors dealing with the phenomenon (see section 3.2.3(D) and Appendix 4.3(A)). (B) the freedom/autonomy of pure math vs. its essential embeddedness in 'applications' Another interesting issue that may be captured under the integration vs. 
fragmentation heading is the inherent tension between the self-proclaimed freedom of pure math and its essential embeddedness in the 'applications' that make mathematics meaningful. In his work on math, LW tackles this issue more or less directly in his analysis of the idea that math is a 'game' (see section 1.3(B) below) and his insistence on the importance of applications is in direct opposition to this idea. Similarly, LW's critical remarks, analyzed in Part 2 of this study, including its Spenglerian overtones (cf. section (D) here below), mostly exploit the idea that meaningfulness depends on the integrated nature of a healthy culture (as opposed to the fragmentation of a culture in decline). Even if LW's polemical position in the context of the Grundlagen-debates made him emphasize the 'global' aspect of the embeddedness of math, the conceptual apparatus that emerges from his work is capable of describing how certain mathematical practices can function more or less autonomously at the local level (whether LW likes it or not). (C) Ein buntes Gemisch: the heterogeneity of math (LW, Ms-122, 68r-88r (19391231-19400108)) The heterogeneity of mathematics is one of the most recurrent points in LW's PhilMath: almost all of the texts that we read and analyze in the present study are characterized by a continuous insistence on the variety of techniques that make up math, the variability of the applications in which they are rooted and the precariousness of what is supposed to keep them together. To illustrate the topic, I have chosen to analyze the passage in which LW uses the colorful expression "ein buntes Gemisch". This passage consists mainly of a long struggle with the idea that mathematical proofs should prove something, that we should be able to use them as an example for correct applications, and that they therefore should be surveyable, etc. Felix Mühlhölzer wrote the book on this, literally: Braucht die Mathematik eine Grundlegung?: ein Kommentar des Teils III von Wittgensteins Bemerkungen über die Grundlagen der Mathematik (Mühlhölzer 2012). For our purposes, I would like to first briefly focus on the passage Ms-122, 69v-71v, in which LW first explores the idea that there is no fundamental reason why mathematics should operate with propositional axioms at all: it is easy to imagine that certain mathematical techniques only exist in the form of rules for building houses, without any theoretical underpinnings (?) 79 at all: truth need not play a role at all, but one can do it wrong, which is not 79 If one can even say that theory ever underpins anything... at all the same thing as conforming to some truth. LW then asks the somewhat rhetorical question as to whether this isn't a case of applied math without pure math. 80 LW then comes back to the specific topic of 'proving', struggling with what exactly happens in a "pure" mathematical proof (there are many more interesting things to unpack in LW's manuscripts than what we can afford to do here). 81 In his entry for 19400108, LW then articulates the idea that the techniques that make up math (in this context, he's focusing on proof techniques, specifically) do not form a unity, introducing the seminal formula "buntes Gemisch". 82 Ich will || möchte sagen: Die Mathematik ist ein buntes Gemisch ˓?˒ von Beweistechniken. --Und darauf beruht ihre mannigfache Anwendbarkeit & ihre Wichtigkeit. 
Here, he focuses specifically on the case of formal systems (the one he is most familiar with is Russell's PM, but the same would go for ZFC etc.), and points out that, when someone codes a mathematical system (say: differential calculus) into this formalism, he actually established a new piece of math. 83 So, LW's point is not that he is for or against formalisms, nor that he is for or against the way these formalisms are used as a technique in contemporary mathematics or PhilMath. His point is that these formalisms are expanding math, that they cannot be construed as simply unifying pre-existing math, that they are add-ons to pre-existing math, not a new expression of pre-existing math. 84 In this connection, LW makes another interesting point (which may -to some of us-seem somewhat at odds with the previous point): he compares the case of someone inventing a 82 Although the expression occurs only a few times, and in passing, in the context of MS122, it has been quoted quite often, as if it was an important terminus technicus (cf. also 'Forms of Life'). The expression reoccurs a little further on at Ms-122,96r-96v, 19400113: "Ich will die Buntheit der Mathematik erklären". 83 Und das kommt doch auf das Gleiche hinaus, wie zu sagen: Wer ein System, wie das R.sche, besäße & aus diesem 'durch entsprechende Definitionen' Systeme, wie den Differentialkalkül, erzeugte, der erfände || erzeugte ein neues Stück Mathematik. (Wie ich schon früher gesagt habe.) 84 Cf. Ms-122, 3v-4r: "Ich will der Formulierung entgehen: "ich weiß jetzt mehr über den Kalkül", & statt ihrer die setzen: "ich habe jetzt einen andern Kalkül". Der Sinn hiervon ist, die Kluft zwischen einem mathematischen Wissen & nicht-mathematischem Wissen immer in ihrer vollen Größe vor Augen zu behalten." formalism (like Russell did), with someone inventing a notation. 85 The point is: when you invent a notation, this is also an expansion of your math. 86 So, once again, LW confronts the idea that practical things like notations should be viewed as somehow external to math and thus displaces the idea that 'foundational' systems are somehow more at the core of mathematics than the techniques that make up actual mathematical practice. LW also addresses the idea that an axiomatic system (a "proving system" / "Beweissystem") can coordinate several pre-existing systems by translating them into a common code (which is basically the approach adopted by Russell and his successors; i.e. almost all mathematicians involved in foundational issues). LW asks whether such a coordinated system, consisting of many more or less independent sub-systems constitutes one system or several systems and suggests that the translation into a single code in the end does not change much about the heterogeneity of the sub-systems: even if we started doing trigonometry using ZFC formalism (which, by the way, we obviously don't!), it would -presumably, supposedly-still be -in a way-the same trigonometry we always did; and the same goes for all the other different subsystems. 
87 So, what is suggested in this extended passage as a whole is: (1) that what is presented as a foundation of mathematics is actually an add-on (that does not change anything about the fundamental heterogeneity of mathematical techniques), not something central to the actual mathematical practices it is supposed to unify, and (2) vice-versa, that certain things that we might want to consider extrinsic to pure math, such as notation systems, may have a much more pervasive importance with respect to our actual mathematical techniques. 85 Nun, man könnte doch einfach sagen: Wenn ein Mensch das Rechnen im Dezimalsystem erfunden hätte -der hätte doch eine mathematische Erfindung gemacht! -Auch wenn ihm Russell's Principia Mathematica bereits vorgelegen wären. -86 This notion that notation is not a peripheral, external aspect of mathematics, but an integral, intrinsic aspect of what mathematics is, anticipates Kenneth Manders' seminal and rightly celebrated paper (Manders 2008) by 55 years. 87 Wie ist es, wenn man ein Beweissystem einem anderen koordiniert? Es gibt dann eine Übersetzungsregel mittels derer man die in S1 bewiesenen Sätze in die in S2 || im einen bewiesenen Sätze in die im andern bewiesenen übersetzen kann. Man kann sich doch aber denken, daß einige-oder alle -Beweissysteme der heutigen˓﹖˒ Mathematik auf solche Weise einem System, etwa dem R.schen zugeordnet wären. So daß alle Beweise, wenn auch umständlich, in diesem System ausgeführt werden könnten. So gäbe es dann nur das eine System -& nicht mehr die vielen Systeme? -Aber es muß sich doch also von dem einen || einen System zeigen lassen, daß es sich in den vielen darstellen läßt. || , daß es sich in die vielen auflösen läßt. -Ein Teil des Systems wird die Eigentümlichkeiten der Trigonometrie besitzen, ein anderer die der Algebra, u.s.w. Man kann also sagen, daß in diesen Teilen verschiedene Techniken verwendet werden. LW's intervention in the Grundlagen-debates is not at the level of the various positions within these debates (finitism vs. infinitesimal approaches; formalism vs. anti-formalism; etc.), 88 but attacks presuppositions at a much more general level. LW is attacking: -the idea that math is a coherent/unitary system of propositions; -the idea that these propositional systems somehow underlie the heterogeneous collection of techniques that make up actual mathematical practice. (D) The heterogeneity of mathematical practice vs. historical grand narratives The concept of a "buntes Gemisch" and the lines of thought that imply it also antagonize encompassing grand narratives about the history of mathematics, e.g. Ferreirós' (Ferreirós 2016) proposal to conceptualize the relation of 'less advanced' to 'more advanced' mathematical practices (incl. the evolution from pre-mathematical techniques to proper math) in terms of the mechanism of 'abstraction'/'idealization', and in terms of a stratification of practices, i.e. the idea that advanced math is built on underlying 'layers' of less abstract math, which in their turn are grounded in technical practices, which are ultimately rooted in the most basic or elementary practices of counting, measuring, etc. The problem with this idea is that (1) basic counting is irreducibly complex too, and that (2) more 'advanced' practices acquire their own grounds for meaningfulness that may separate them from -say-basic counting.
A good illustration of the latter point is the fact that within the Frege & Russell-style logistic approaches to the Grundlagen debate, in which LW started out, defining the sequence of natural numbers becomes a complicated issue. So: it is obviously wrong that counting and other basic pre-, proto-or quasi-mathematical operations are rooted in axiomatic systems, but the alternative picture in which practices are neatly layered is equally wrong: complex networks of relations between various mathematical and non-mathematical practices exist at all levels, to such an extent that the idea of a level or layer becomes misleading. 89 88 As for the question as to whether LW was a formalist, an anti-formalist or something else, the issue may simply not be relevant in this context: LW does not advocate the use of formal systems in mathematics, but neither does he object to inventing formalisms: he merely objects to pretending formal systems are not invented, that they are somehow a fact of nature, that they reveal an underlying unity, rather than superimposing unity on an underlying heterogeneity. 89 Again (as was the case in my argument against positing 'communities' as the default locus for meaning; see section 1.1.2(A1)), I would like to argue that the idea of 'layers' should not be posited a priori but shown as an empirical result. If one starts from the idea that math as a ludic, autonomous endeavor is built upon more elementary, 'technical' practices, which in turn are based on even more elementary activities, then one prevents oneself from discovering how things are actually relate to each other. I would not object to the idea that something is 'built upon' something else in any particular case, if it is shown to me that this is the case in that case, but I am not ready to accept that this the default way in which math evolves and has evolved as a discipline. Stratification is a very specific concept, not something you can simply posit to be the default, because that would be a sure way to obscure the heterogeneity, the mess, one can expect in any historical process. (E) Sass on LW on fragmentation While working on this study, I stumbled on a few strands in LW's thinking that complicate the picture sketched here above, in that LW's emphasis on the need for -what I called-'local' patterns to be embedded in 'global patterns' in order to be meaningful, was much more prominent (and much more problematic) in LW than I previously thought. Suffice it here to follow what Louis Sass's 2001 article 'Deep Disquietudes: Reflections on Wittgenstein as Antiphilosopher' (also quoted throughout section 2.0) says about this topic ( (Sass 2001) Sass 2001, pp. 119-120). First of all, Sass points out "the antipathy Wittgenstein always felt toward the modern condition of cultural fragmentation and self-consciousness, in which basic cultural presuppositions come under scrutiny and can no longer serve as the taken-for-granted foundation of spontaneous thought and action", and refers to the following excerpt from a draft preface to the Philosophische Bermerkungen: Ms-109,204-207, d.d. 19301101 Es interessiert mich nicht ein Gebäude aufzuführen sondern die Grundlagen der möglichen Gebäude durchsichtig vor mir zu haben. Mein Ziel ist also ein anderes als das der Wissenschaftler & meine Denkbewegung von der ihrigen verschieden. 
90 Sass then makes the obvious link with Spengler (see also section 2.0.0 below): The distinction between culture and civilization is the central theme of The Decline of the West by Oswald Spengler, one of the handful of authors whom Wittgenstein repeatedly cited as a major influence and a writer whose thinking Wittgenstein described as "completely in touch with what I have often thought myself " (D 17, 6.5.30). Spengler contrasts Kultur and Zivilisation as "the living body of a soul" versus "the mummy of it." In his view, a crucial caesura in European history occurred around 1800. On one side of this frontier, Spengler sees "life in fullness and sureness of itself, formed by growth from within [i.e., culture] . . . on the other, the autumnal, artificial, rootless life of our great cities under forms fashioned by the intellect [civilization]." For what Spengler calls "the Gothic and Doric men, Ionic and Baroque men" of the earlier era, "the whole vast form world of art, religion, custom, state, knowledge, social life was easy. They could carry it and actualize it without 'knowing' it." For culture, writes Spengler, is "the selfevident." He remarks on the typical modern feelings of "strangeness" with regard to these cultural forms, 90 MS 109 200: 5.11.1930 Sketch for a Foreword †4 This book is written for those †e who are in sympathy with the spirit in which it is written. †f This spirit is, I believe, different from that of the †g prevailing European and American civilization. The spirit of this civilization the expression of which is the industry, architecture, music, of present day †h fascism & socialism, is a spirit that is alien & uncongenial †i to the author. This is not a value judgement. It is not as though I did not know that †j what today represents itself as architecture is not architecture & not †k as though he did not approach what is called modern music with the greatest mistrust (without understanding its language), but the disappearance of the arts does not justify a disparaging judgement on a whole segment of humanity. For in these times genuine & strong characters simply turn away from the field of the arts & towards other things & somehow the value of the individual finds expression. Not, to be sure, in the way it would at a time of Great Culture. Culture is like a great organization which assigns to each of its members his place, at which he can work in the spirit of the whole, and his strength can with a certain justice be measured by his success as understood within that whole. In a time without culture, however, forces are fragmented and the strength of the individual is wasted through the overcoming of opposing forces & frictional resistances; it is not manifest in the distance travelled but rather perhaps in the heat generated through the overcoming of frictional resistances. But energy is still energy & even if the spectacle afforded by this age is not the coming into being of a great work of culture in which the best contribute to the same great end, so much as the unimposing spectacle of a crowd whose best members pursue purely private ends, still we must not forget that the spectacle is not what matters. Even if it is clear to me then that the disappearance of a culture does not signify the disappearance of human value but simply of certain means of expressing this value, still the fact remains that I contemplate the current of European civilization without sympathy, without understanding its aims if any. 
So I am really writing for friends "the idea that they are a burden from which creative freedom requires to be relieved," and the "fatal imposition of thought upon the inscrutable quality of creativeness." All these, he says, are "symptoms of a soul that is beginning to tire. Only the sick man feels his limbs." In such a condition, we might say (paraphrasing Wittgenstein), life does not fit into a mold and hence what is problematic cannot disappear (CV 27/31). Sass then also refers to a conversation between O.K. Bouwsma and LW, as remembered by the former: 91 In a conversation on these issues recollected by O. K. Bouwsma, Wittgenstein spoke of changes in the kind of human beings we are in the modern world: "There was a time when our lives were furnished rather simply, a house, a place, tools so many, a beast, and a circle of people. In this simplicity and this stability one grew attached to a limited environment. This gave a life a certain quality -roots."63 This passage, as well as other passages out of LW's writings, sheds light on LW's discomfort with the fragmentation of modern culture and society. Sass points out the sharp contrast between simpler and more stable earlier times and the present era in LW's thought. A similar sentiment with respect to modernity is expressed in the following excerpts from one LW's notebooks: Aber haben wir nicht das Gefühl, daß der, welcher nicht darin ein Problem sieht für etwas Wichtiges, ja das Wichtigste, blind ist? Möchte ich nicht sagen, der lebe so dahin -eben blind, gleichsam wie ein Maulwurf, & wenn er bloß sehen || aufschauen könnte, so sähe er das Problem? Oder soll ich nicht sagen: daß wer richtig lebt, das Problem nicht als Traurigkeit, also doch nicht problematisch, empfindet, sondern vielmehr als eine Freude; also gleichsam als einen lichten Äther um sein Leben, nicht als einen fraglichen Hintergrund. [...] ( Ms-118,17r-17v, d.d. 19370827) Beinahe ähnlich, wie man sagt, daß die alten Physiker plötzlich gefunden haben, daß sie zu wenig Mathematik verstehen, um die Physik bewältigen zu können, kann man sagen, daß die jungen Menschen heutzutage plötzlich in der Lage sind, daß der normale, gute Verstand für die seltsamen Ansprüche des Lebens nicht mehr ausreicht. Es ist alles so verzwickt geworden, daß zu seiner Bewältigung || , es zu bewältigen, ein ausnahmsweiser Verstand gehörte. Denn es genügt nicht mehr, das Spiel gut spielen zu 91 I quote Sass's text, but can't find the passage in (Bouwsma 1999). können, sondern die Frage ist immer wieder: was für ein Spiel ist jetzt überhaupt zu spielen? || sondern immer wieder ist die Frage: ist dieses Spiel jetzt überhaupt zu spielen & welches ist das rechte Spiel? (Ms-118,20r-20v, d.d. 19370827) 92 Note that for LW, -here as elsewhere (cf. section 2.0 below)-culture-critical strands are intertwined with existential-biographical strancs. In the same vein, Sass also refers to the following diary passage (Ms-183,45-47, d.d. 19301008), in which LW -again-expresses his alienation with respect to modern society: In subsequent versions of this study, I will have to attempt to process this material a lot further, so as to come to an integrated account of LW's outlook on contemporary culture and society (see also section 2.0 below), as sketched in the present section, and his account of meaning as embedding in practice, as sketched in previous sections. What counts most in the present context, is that there is a direct link between the notion of 'embedding in practice', the 92 CV, p. 
27: Earlier physicists are said to have found suddenly that they had too little mathematical understanding to cope with physics; and in almost the same way young people today can be said to be in a situation where ordinary common sense no longer suffices to meet the strange demands life makes. Everything has become so intricate that mastering it would require an exceptional intellect. Because skill at playing the game is no longer enough; the question that keeps coming up is: can this game be played at all now and what would be the right game to play? The way to solve the problem you see in life is to live in a way that will make what is problematic disappear. The fact that life is problematic shows that the shape of your life does not fit into life's mould. So you must change the way you live and, once your life does fit into the mould, what is problematic will disappear. But don't we have the feeling that someone who sees no problem in life is blind to something important, even to the most important thing of all? Don't I feel like saying that a man like that is just living aimlessly -blindly, like a mole, and that if only he could see, he would see the problem? Or shouldn't I say rather: a man who lives rightly won't experience the problem as sorrow, so for him it will not be a problem, but a joy rather; in other words for him it will be a bright halo round his life, not a dubious background. NB: the standard edition with the title Culture and Value from which I quote the English translation, prints the two paragraphs, admittedly from the same day, but separated by several pages, in the opposite order from the one in which they appeared in the manuscript. 93 Translation Sass: "In the metropolitan civilization the spirit can only huddle in some corner. And yet it is for instance not atavist and superfluous but hovers above the ashes of culture as an (eternal) witness -as if an avenger of the deity. As if it were awaiting a new incarnation (in a new culture)" idea of 'local vs. global embedding', and a number of culture-critical and existential issues, which we will explore later on. (F) Integration vs. fragmentation: loose ends Our understanding of human practice (incl. discourse) in general depends on its local embedding in easily recognizable practices of the types that LW usually has in mind when he speaks of Language Games, but also on the fact that these small-scale structures in turn are embedded in much more encompassing structures (whether conceived of as 'our lives' or as a particular culture, or whatever). This idea is important because it allows us to conceptualize both (1) the fragmentation of actual practices that we observe as an essential aspect of our everyday realities, and (2) the fact that for these 'local' phenomena to be meaningful, they need to be somehow integrated in our lives at large. However, as a matter of Wittgenstein-exegesis, I am at present not yet able to accurately articulate the following tensions in LW's thought: • between (1) LW's insistence on the heterogeneity of everyday practice (cf. the positive valuation of the 'hurly-burly') and (2) LW's negative attitude towards fragmentation: how is the hurly-burly different from fragmentation?
• between (1) LW's understanding of the contingency and heterogeneity of the hurly-burly of everyday practice that serves as the "bedrock" for meaning and (2) LW's attachment to singular and rigid cultural affiliation as essential to meaning: on the one hand, LW is fully aware of the fact that there is no simple, unique or universal bedrock that gives meaning to all human behavior, but on the other hand, LW appears to believe in the detrimental consequences of the fragmentation of modern civilization. I must say that all this is not necessarily a real problem for my own thoughts on these matters: my personal sensibilities are at the opposite side of the spectrum from LW's with regard to the importance of cultural affiliation, the idea that there is something like 'the non-everyday', the importance of authenticity, etc. However, as a matter of Wittgenstein-scholarship, there are some loose ends here. In section 2.0, I will come back to some of these issues and some headway will be made. Sense = embedding in the everyday; lack of embedding in the everyday = nonsense Once we acquire the notion of meaningfulness as embedding, we immediately also gain access to the idea that nonsense can be defined as a lack of embedding, or rather that lack of embedding gives rise to nonsense. In this section, we explore the role that the notion of 'everydayness' plays in LW's work in general and his PhilMath in particular, which will lay the groundwork for our analysis of LW's critical remarks in Part 2. Wittgenstein's criticism of 'nonsense' in terms of 'lack of embedding' There is some immediate appeal to the notion that nonsense is a matter of lack of embedding, it seems to make sense at first sight. For instance, it is tempting to view metaphysical discourse (think of "What is time?" or other prototypical philosophical stuff) as divorced from actual language games, actual practices, etc., and LW has been interpreted as saying exactly that, not in the least by the first generation of "positivist" admirers of his in the Vienna Circle. (A) nonsense and senselessness in the Tractatus Logico-Philosophicus (TLP) In the TLP (and associated writings), 'nonsense' appears to be used almost as a technical term, at least according to some (standard/mainstream) interpretations. 94 Thus, for a proposition to be meaningful, it has to be 'bivalent', i.e. either true or false; otherwise, it is not a real proposition at all. Tautologies and contradictions do not refer to the world and are not strictly 'neither true nor false', but rather 'always true' resp. 'always false'. In some sense, tautologies have therefore no sense either. The TLP does not use the term 'unsinnig' (nonsensical) for this case, but 'sinnlos' (senseless). 95 This aligns well with the so-called 'picture theory' of truth (and also meaning) adopted in the TLP: propositions mean something by being a picture of reality (Johnston 2017). The following formula summarizes this conception: • meaningful = bivalent (true or false) = refers to reality; • meaningless = not bivalent (always true, never true, neither true nor false) = does not refer to reality. As gibberish, which does not even look or sound like normal language ("tweedly deedly"), is not a philosophical problem to LW (or anybody else, really) and was -accordingly-not discussed by him in TLP, 'meaninglessness' de facto only concerns pseudo-language, i.e. 
94 For overviews of the debates, see [START_REF] Bronzo | The Resolute Reading and Its Critics: An Introduction to the Literature[END_REF][START_REF] Conant | Resolute Readings of the Tractatus[END_REF], Cheung 2017. 95 TLP 4.461;TLP 4.4611;TLP 4.462. language that appears to refer to the world but actually doesn't. 96 For the present purposes, the main point about LW's talk about nonsense is that it concerns 'fake' utterances that appear to be propositions but in fact are not. This notion of 'fakeness' will become important in Part 2 below. (B) sense as embedding / nonsense as lack of embedding After his return to philosophy in the late 1920s, LW's vision of what language is and does had considerably expanded: he no longer focused only on truth and propositionality, but recognized a large array of types of language uses. The language-critical strand in his thought persisted, though, but in a different guise: whether an utterance makes sense or not now depends on whether or not it has a proper function within an (everyday) context, and words and sentences can have a wide array of possible functions (cf. e.g. the metaphor of the toolbox or the cabin of a locomotive in PhU § §11-12, and the explicit discussion of the issue in §23). This is what LW's use of the concept 'use' ("Gebrauch") comes down to. LW's mature formulation of this idea typically involved the notion of Language Game: a pattern in which linguistic and non-linguistic behavior, agent intentions and representations, and objects form a structural whole. These usually small-scale patterns are in turn part of more encompassing, holistic notions such as -famously but very infrequently-Form of Life, and similar holistic notions such as 'our lives'. Building on most notably Baker and Hacker's notion of 'internal relation' ( (Baker and Hacker 2009), p. 75), I have argued elsewhere ((Scheppers 2017); see also section 1.1.2 above) that the relations between the various variables within Language Games and within Forms of Life should be understood as 'internal' or 'structural' relations, i.e. that they should not be viewed as relations between pre-existing entities, but as relations that define the very identity of these entities. In this vein, we can formulate the notion of 'sense' in the following holistic and structuralistic way: sense = function within a context = embedding within a Language Game (and ultimately in a Form of Life (or: 'our lives')) A corresponding implementation of 'nonsense'/'senselessness' 97 immediately follows: 96 Still, I believe it would be correct to say that, if pressed, LW (at least at the time of TLP) would not have admitted that pseudo-language was any better than gibberish and that-in fact-logically speaking, there is only one type of nonsense, thus agreeing with the 'resolute readings'. 97 From here onwards I will simply use the word nonsense as the umbrella term, covering all forms of defective language use, and no longer bother with the TLP distinction 'senseless' vs. 'nonsense'. nonsense = lack of function within a context = lack of embedding in a Language Game / Form of Life This reading of the concept of nonsense is corroborated by a number of formulations in LW's later work: "language on holiday" (PhU §38), "a wheel that is not part of the engine" (PhU §271), an engine "idling" (PhU §132: "wenn die Sprache leerläuft"; cf. 
(Guetti 1993)), a sham corbel that supports nothing (PhU §217: "Unsere Forderung ist eine architektonische; die Erklärung eine Art Scheingesims, das nichts trägt"), a sham knob that turns out to be a mere ornament not connected with the mechanism (PhU §270). As we have seen above, LW famously and controversially claimed that conjectures have no meaning until we know how to prove them (e.g. Fermat's conjecture or the proposition that '770' will occur in the decimal development of pi). Even if common sense at first may make it hard for us to come to grips with the idea that these words literally mean nothing, this claim does make sense from the 'meaning is embedding' point of view. The problem with an unproven conjecture is the same as with a 'square circle' or 'North of the pole': the words sound like they mean something, but you can't do anything with them, there is no straightforward function for them within a bona fide practice or Language Game. Of course, they may acquire or be given such a role (as poetry, or as a mantra, or as a motivational slogan), but as it is that role will be not even remotely similar to the one of a theorem. Was LW right to call conjectures actually meaningless? I'm not sure that the issue is worth fighting over, but it is clear how the claim fits in with his general outlook on meaning, and it is clear that the role of the expression changes completely as soon as it acquires a specific place within a network of proofs. (C) paradoxes An interesting case in point is LW's view on paradoxes. In MS-118, 111v-, in LW clearly emphasizes the fact that this kind of language use is actually -in real life-(1) useless, and (2) harmless. At best, it is a childish game in which no result is ever reached and no point is made. Paradoxes are an intuitively very clear example of what is meant by 'lack of embedding in practice': the liar's game is never really part of any practical context, and never has any real-life use or any real-life consequences (except perhaps annoyance, or fun, or both). A similar approach is displayed in one of the lectures published as LFM, in which LW and Alan Turing discuss the nonsensicality of paradoxes (also quoted in (Fogelin 2009), p. 161): Think of the case of the Liar. It is very queer in a way that this should have puzzled anyone-much more extraordinary than you might think: that this should be the thing to worry human beings. Because the thing works like this: if a man says "I am lying" we say that it follows that he is not lying, from which it follows that he is lying and so on. Well, so what? . . . It doesn't matter. . . . Now suppose a man says "I am lying" and I say "Therefore you are not, therefore you are, therefore you are not . . ." -What is wrong? Nothing. Except that it is of no use; it is just a useless language-game, and why should anybody be excited? .... Turing: What puzzles one is that one usually uses a contradiction as a criterion for having done something wrong. But in this case one cannot find anything done wrong. Wittgenstein: Yes-and more: nothing has been done wrong. One may say, "This can only be explained by a theory of types." But what is there which needs to be explained? (LFM, lecture 20, pp. 206-7) Again, the point is that from a pragmatic point of view, paradoxes like the liar do not pose any real problem ever and should not get anybody excited (for this undeserved excitement, cf. sections 2.0.3, 2.4.3(C) and 3.2.3(B)), on LW's criticism of pathos and sensationalism). 
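A toy illustration of my own (nothing LW wrote): the mechanics of the liar inference that LW and Turing discuss here can simply be run. The following minimal Python sketch implements the single rule LW cites, namely that if the sentence "I am lying" is taken to be true, the speaker is lying, so the sentence is false, and vice versa.

```python
def liar_step(value: bool) -> bool:
    # "I am lying": if the sentence is assumed true, the speaker is lying,
    # so the sentence is false; if it is assumed false, he is not lying,
    # so the sentence is true.
    return not value

value = True
for step in range(6):
    print(f"step {step}: the sentence is {'true' if value else 'false'}")
    value = liar_step(value)
```

The loop alternates between true and false indefinitely: nothing in the machinery is broken, and nothing is produced that any further practice could take up, which is exactly the twofold point of the exchange quoted above: nothing has been done wrong, and it is of no use.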
If the criterion for making sense is embeddedness in practice, then paradoxes of this type are plain and obvious nonsense, a somewhat childish joke at best. Remains the question as to what to do with the enormously vast literature in logic that does appear to take paradoxes very seriously... (see section 2.3 below). 98 12. Is there harm in the contradiction that arises when someone says: "I am lying.--So I am not lying.--So I am lying.--etc."? I mean: does it make our language less usable if in this case, according to the ordinary rules, a proposition yields its contradictory, and vice versa?--the proposition itself is unusable, and these inferences equally; but why should they not be made?--It is a profitless performance!--It is a language-game with some similarity to the game of thumb-catching. (D) formalism as desemantization The idea that nonsense can be defined in terms of a lack of embedding in practice also leads us to interesting implications regarding formalism, in the general sense of 'the use of formal systems and theories', within mathematics. Formalism can at least in part be defined in terms of 'desemantization' ((Dutilh Novaes 2012)) and perhaps in a certain sense even in terms of depragmatization (in the sense of de-embedding), which makes it an interesting topic in the present context. If we take the definition of desemantization literally and we take the 'nonsense = lack of embedding' idea literally, formalism is strictly speaking nonsense. However, in actual practice, we can see that formal systems are embedded in rich and well-supported practices/discourses, with deep historical roots and wide expansions, and they seem to make sense to the agents that operate with them. I have nothing else to say about the concept of desemantization in the present context of LW's PhilMath, but will take up the topic in Appendix 4.1 below. The reason why I mention it here, is that it is a good way to summarily introduce the issue of the meaning of formal systems from the point of view of LW's pragmatic approach to meaning. Embedded in what? Everydayness Despite its prima facie intuitive appeal, the idea of 'lack of embedding' as what constitutes nonsense soon enough hits its limits: the idea that nonsense corresponds to lack of embedding in practices is open to an obvious and direct objection: in actual fact, nothing actually occurs without being somehow embedded. Thus, it's actually and obviously not true that philosophical discourse, not even the most esoteric metaphysical verbiage, lacks embedding: philosophical discourses are embedded in rich and wide networks of practices, within and outside philosophy, they have considerable cultural and historical depth, they are supported by relatively large networks of agents, ... Merely saying that this does not count as embedding would be disingenuous and would undermine the coherence and applicability of the notion of embedding. (A) embedding in everydayness And this takes us to the next step, which has a lot of textual support in Wittgenstein's work and its Nachleben: the key concept is not so much lack of embedding in general, but lack of embedding in 'everyday'/'ordinary' practices and forms of life. 99 So: the relevant notion here is everydayness. Throughout his work, LW routinely distinguished between 'normal' / 'ordinary' / 'everyday' ('normal' / 'gewöhnlich' / 'alltäglich') contexts and whatever their opposite may be. 
100 This idea of everydayness has a certain intuitive appeal and we can easily grasp what kind of thing LW means by looking at his many examples: buying apples; building houses; measuring, buying and selling timber; engineering and operating machines, etc. are straightforward everyday activities, and language use in the context of these activities is straightforwardly unproblematic, in a way that e.g. metaphysical talk, logical paradoxes and Gödel's code are not. I have quoted some of the more seminal expressions of this idea in the above (section 0.2(D)): LW's contrast between the 'metaphysical' use of words and the everyday language in which words have their home in PhU §116 and the idea that philosophy should leave everything as it is in PhU §124: philosophy can neither change actual language use, nor can it offer a foundation for it, so ultimately, it can only describe it. 101 This is perhaps the right place to also briefly address the following lines in §124: It also leaves mathematics as it is, and no mathematical discovery can advance it. A "leading problem of mathematical logic" is for us a problem of mathematics like any other. The inclusion of this remark in the context of the programmatic part of the PhU highlights the fact that for LW, there is no such thing as philosophically relevant mathematical problems: a problem may be a mathematical problem, but from a philosophical point of view, they are all the same. This goes directly against the grain of such luminaries as Cantor (for whom work on the transfinites was "a mission from God" with -to him-clear theological implications) and Gödel (whose most famous contributions were intended to defend mathematical Platonism). 102 Anticipating our analyses in sections 1.3, 2.1, 2.2 and 2.3 below, we can already conclude from this programmatic paragraph that (a) a critical stance towards some of the 100 The importance of this concept and its counterdistinction from the 'higher', the 'abstract', the 'sublime', etc. within LW's philosophy is rooted in the anti-rationalist, anti-positivist and generally anti-theoretical tendencies in the various brands of 'Lebensphilosophie' adopted by such philosophers as Arthur Schopenhauer (for the relations between LW and Schopenhauer, see e.g. (Jacquette 2017), Søren Kierkegaard, and Friedrich Nietzsche, a background also shared (and processed in his own distinctive way) by LW's contemporary Heidegger. Cf. also section 2.0.0 below. 101 124. Die Philosophie darf den tatsächlichen Gebrauch der Sprache in keiner Weise antasten, sie kann ihn am Ende also nur beschreiben. / Denn sie kann ihn auch nicht begründen. / Sie läßt alles wie es ist. / Sie läßt auch die Mathematik wie sie ist, und keine mathematische Entdeckung kann sie weiterbringen. Ein "führendes Problem der mathematischen Logik" ist für uns ein Problem der Mathematik, wie jedes andere. 102 See also (Rittberg 2016), in which it is argued that "that mathematics can actively influence metaphysics, i.e. that mathematicians can set up mathematics in such a way that by doing mathematics they can actively influence metaphysical debates." (p. 287), which goes even further in the direction that LW objects against. major voices in the Grundlagen-debate are -in LW's own mind-at the core of his philosophy, and (b) the everyday vs. non-everyday distinction is deeply intertwined with this agenda. (B) the mathematical everyday In the case of mathematics, LW makes abundantly clear what he means by everyday practices, by means of very extensive exemplification. 
LW's insistence on the importance of embedding in practices takes the shape of reflections on a very large number of examples in which calculations are put to use as an integral part of a wide variety of practical applications, which give meaning to these calculations (see section 1.3 below). LW also makes his point in more abstract terms. Thus, for instance, he famously states that mathematical concepts, in order to be meaningful, have to also be used in 'civilian clothes': BGM5 §2: Es ist der Mathematik wesentlich, daß ihre Zeichen auch im Zivil gebraucht werden. Es ist der Gebrauch außerhalb der Mathematik, also die Bedeutung der Zeichen, was das Zeichenspiel zur Mathematik macht. 103 For instance, in section 1.3 below, we will encounter the example of calculating the surface of a sphere (MS-126, 37-38), about which LW remarks that whatever it means 'to acquire a new understanding of the surface of sphere', this new conceptualization, for it to be a conceptualization of the surface of a sphere, should still be applicable to actual spheres (section 1.3(B) below). 104 In exactly the same way, whatever number theory you may want to affiliate with, for it to be a number theory, it will have to deal with what we do when we count apples. NB that the remark about the 'civilian clothes' occurs in a context in which our own 'lived' experience is contrasted with what a machine does (section 1.1.1(H)). Again, we can observe that the lines of thought concerning formalism in terms of 'live signs vs. dead signs', the lines of thought concerning the importance of applications, and the lines of thought in terms of embeddedness in the everyday are intertwined at a very fundamental level. 103 "I want to say: it is essential to mathematics that its signs are also employed in mufti. It is the use outside mathematics, and so the meaning of the signs, that makes the sign-game into mathematics. Just as it is not logical inference either, for me to make a change from one formation to another (say from one arrangement of chairs to another) if these arrangements have not a linguistic function apart from this transformation." (BGM/RFM, V, §2) The choice for the slightly slangy translation "in mufti" when there is perfectly neutral and transparent English equivalent of LW's "im Zivil" available ("in civilian clothes") is yet another example of an amateurish choice on the part of the editors/translators. 104 Was heißt es, einen neuen Begriff von der Oberfläche einer Kugel gewinnen? In wiefern ist das dann ein Begriff von der Oberfläche einer Kugel? Doch nur insofern er sich auf wirkliche Kugeln anwenden läßt. What is the non-everyday? (On everydayness as a moralistic concept) The distinction between the everyday and the non-everyday is at first intuitively attractive and there is no doubt that some version of it was supported by LW (and many others). However, there is problem with the contrast-class of the everyday:105 it is hard to articulate what constitutes the non-everyday. In LW's text, the non-everyday is always exemplified by metaphysics, mathematical logic, set theoretical discourse, or even philosophy in general (cf. sections 0.2(D) and 2.4.2(A)). (A) everyday calculations vs. parasitic prose The literature has paid a lot of attention to LW's distinction between mathematical operations (calculating, proving, constructing geometrical diagrams, etc.) on the one hand, and the "prose" that surrounds these operations on the other: LW supposedly only criticized the prose and left alone the operations. 
Juliet Floyd summarizes the topic as follows:106 Within mathematics-as Wittgenstein is the first to insist-there is often an important distinction to be drawn between an intuitive notion and a rigorous mathematical notion, between, for example, an intuitive mathematical argument and a formalized proof. The 'prose' surrounding a proof may be perfectly unobjectionable, even indispensable. But some prose has the tendency to mislead, and is mathematically inessential. 'There are true but unprovable propositions in mathematics' is misleading prose for the philosopher, according to Wittgenstein. It fools people into thinking that they understand Gödel's theorem simply in virtue of their grasp of the notions of mathematical proof and mathematical truth. And it fools them into thinking that Gödel's theorem supports or requires a particular metaphysical view. ( (Floyd 2001), p. 299) I have not much to add to the literature about this prose vs. calculation distinction, except perhaps that, independently of the question of how important the distinction is for LW's work on math, it participates in the obvious weaknesses surrounding the concept everydayness in general (see below) and I believe that the issues that LW is pointing at when using this distinction actually become more interesting (not less interesting), if we don't use the prosecalculation distinction: to my mind, one can view the talk about the practice as an integral part of the practice and study the relation between the talk and the other aspects of the practice (or the lack of such a relation) as a matter of the internal structure of the practice. But this disagreement would lead us beyond the topics at hand. (B) healthy working mathematicians' (or engineers' or accountants') math vs. degenerate philosophers' math The notion of everydayness in mathematics is not LW's invention: it occurs before and after LW, often opposing "working mathematicians" to logicians and other esotericists. For example, a number of publications since "Foundations of Mathematics for the Working Mathematician" ((Bourbaki 1949)) contain the expression "for the Working Mathematician". Dieudonné, a prominent member of the Bourbaki-collective, spends almost 5 pages of his 1982 article "Mathématiques vides et mathématiques significatives" on the subject matter of "Logique et mathématiques" and starts this section as follows: Les philosophes et les logiciens ont une tendance, parfaitement naturelle et excusable, à croire que les mathématiciens s'intéressent beaucoup à ce qu'ils font. Détrompez-les, ce n'est pas vrai : 95% des mathématiciens se moquent éperdument de ce que peuvent faire tous les logiciens et tous les philosophes. Cela ne les intéresse absolument pas. ( (Dieudonné 1982), p. 16) And on the next page, Dieudonné says: Alors, quand on vient nous parler de la logique du premier et du deuxième ordre, de fonctions récursives et de modèles, théories très gentilles et très belles qui ont obtenu des résultats remarquables, nous mathématiciens, nous ne voyons aucune objection à ce qu'on s'en occupe, mais cela nous laisse entièrement froids. 107 ( (Dieudonné 1982), p. 17) LW thus participates in a trope that is quite common, but for LW, the distinction is an obvious avatar of the distinction between the everyday and non-everyday and takes on a much more central place in his philosophical work than is usual. 
In this context, it may also be useful to point out that non-everyday mathematical discourse is invariably valuated in negative terms (sometimes hyperbolically negative terms) by LW: set-theoretical parlance is called "a tumor", "pernicious", "the illness of our time", etc. (see section 2.2 below) and in Ms-127,184-187 (= BGM V §46), he talks about "the curse of the invasion of mathematics by mathematical logic". 108 Again, we can point out that LW was not 107 For a similarly disdainful attitude towards sociological accounts of mathematics, see (Dieudonné 1982), pp. 22-23; see also section 3.2.3(A) below). 108 "Das ist der Fluch des Einbruchs der math. Logik in die Mathematik, daß nun jeder Satz sich in mathematischer Schreibung darstellen läßt & wir uns daher verpflichtet fühlen ihn zu verstehen. Obwohl ja diese Schreibweise nur die Übersetzung der vagen gewöhnlichen Prosa ist."(cf. Ms-126,108: "28.11. "Der unheilvolle Einbruch" der Logik in die Mathematik."). the only one to use that kind of language in similar mathematical contexts. Thus, Cantorbiographer Joseph W. Dauben says in a 1982 lecture: So shocking and counter-intuitive were Cantor's ideas at first that the eminent French mathematician Henri Poincaré condemned the theory of transfinite numbers as a "disease" from which he was certain mathematics would someday be cured. Leopold Kronecker, one of Cantor's teachers and among the most prominent members of the German mathematics establishment, even attacked Cantor personally, calling him a "scientific charlatan," a "renegade" and a "corrupter of youth." (Dauben 1989) So, whereas in the meantime set theoretical discourse has become part of the mainstream and -at the same time-the ways in which people normally voice their criticism in public may have changed, LW's opinions and the ways in which he expressed them, may appear more eccentric now than they used to be at the time. (C) the inherent weakness of the concept of "everydayness": everydayness as an agenda not a result Although the intuitive notion of 'everydayness' -as introduced here above-is the one we need to make sense of LW's critical remarks that we are going to look at in what follows, everydayness is also problematic: why should talk about "nothingness" not be part of the philosopher's everyday, and infinite sets part of the mathematician's everyday, in the same way that talk about eggplants is part of the greengrocer's and the cook's everyday? Why make the distinction between the everyday and the non-everyday and why make the cut-off at the level of the 'theorist'? Why are 'working mathematicians' (whatever that is) or engineers 'normal', and philosophers and people who practice mathematical logic not? Isn't one person's exotic, another person's everyday? Isn't it also true that once we get acquainted with a certain Form of Life, it will start to make sense (and look more 'normal') to us? Isn't this intuitive sort of relativism what LW's explorations instill in us? In other words: everydayness is inadequate as a neutral descriptive/empirical concept. I will not pursue this line of thought here, as I will not normally go beyond the limits of LW's own contributions in this part of my study. 109 But the observation that everydayness is not an 109 In (Scheppers 2009) and (Scheppers 2017), I make the case for a very similar analysis of Heidegger's use of the concept of everydayness. 
Heidegger was LW's contemporary (though Heidegger lived longer) and "everydayness", along with the emphasis on 'practice' and a preoccupation with 'authenticity', were obviously part of the cultural common ground shared by many German-speaking intellectuals of that era. For similarities between LW and his contemporary and fellow candidate for the title of 'greatest philosopher of the 20th century', see e.g. (Braver 2012); (Egan, Reynolds, and Wendland 2013); (Egan 2019). I will argue elsewhere that one of the main differences between both authors/thinkers can be illustrated beautifully by pointing out their different valuation of 'everydayness', as the locus of authenticity in LW's work, and as the source of inauthenticity in MH. empirical notion that emerged from the analysis of actual practices is important for our understanding of what is at stake in LW's philosophy at large, and his PhilMath in particular: it suggests that the distinction between the everyday and the non-everyday is part of the agenda underlying LW's work, part of the premises, not part of the results. Wittgenstein's emphasis on fringe-applications (MS-126, 37-81, 19421030-19421115) In this section, I present a reading of an extended passage that illustrates a few lines of thought that are at the core of the subject matter of this study. LW seems to insist a lot on atypical, even fringe, applications, often even including stuff that he made up himself. In some of his manuscripts, he goes on for dozens of pages on end, investigating one example after another of atypical applications of math-like calculations, or bizarre circumstances in which math-like activities could or could not have existed. In this section, we will focus on an extended passage from MS-126 (38-81), partially published in an edited form as BGM V, §§5-8. (A) a list of examples First, I would like to present a simple list of examples taken from this passage. Among the examples, there are a few that refer to 'normal' applications of normal mathematical techniques: • calculating the surface of a sphere (in theory and of an actual sphere) (MS-126, 37-38); • the construction of a force polygon (MS-126, 56-57). 110 But the majority of the examples LW presents in this passage are more exotic, in many cases fictional: 111 • what if arithmetic was only used as cipher (MS-126, 39-41; = Suhrkamp, p. 260); 112 110 6.11. Nimm die Konstruktion des Kräftepolygons: ist das nicht ein Stück angewandte Mathematik? & wo ist der Satz der reinen Mathematik der bei dieser graphischen Berechnung zu Hilfe genommen wird? 111 Cf. also the wonderful case of equations used for designing wallpaper (BGM VII, §41; LFM (Wittgenstein 1976), Lecture III, pp. 36-37). These passages in LW are -by the way-not the only interesting link between math and wallpaper: apparently, Sofia Kovalevskaya "taught herself calculus from wallpaper made from a calculus book" (Martin & Roitman 2014, p. 67). As for poetic anecdotes: LW apparently shared a pencil with Otto Neugebauer, the historian of ancient mathematics, while both were prisoners of war in Monte Cassino ((Floyd 2016), p. 57; (Swerdlow 1993), p. 139; (Høyrup 2017), p. 4). 112 2.11. Wenn die arithmetischen Operationen lediglich zur Konstruktion einer Chiffre dienten wäre ihre Verwendung natürlich grundlegend von der unsern verschieden. Wären diese Operationen dann aber überhaupt mathematische Operationen?
260); 113 • calculating with numbers above 1000 only used for studying ghosts (MS-126, 42); • fictional math: infinite numbers as part of a fairy tale (MS-126, 54-55); 114 • set theory as a parody of math (MS-126, 55-56); 115 • oracular math and ceremonial math (MS-126, 57-58 = Suhrkamp, p. 265); 116 • competitive math (MS126, 80 = Suhrkamp, p. 273); 117 • calculating in rhyme; 118 • math for studying ghosts (again). 119 The question is, every time: is this still math, or is it not? (Seriously: is it or is it not?) We will see later on that LW's insistence on the question of demarcation in this context is not aimed at promoting a clear-cut demarcation at all, but that asking the questions helps him and his readers to come to a more adequate attitude towards the issue. 118 LW makes us imagine that multiplication would be a lot harder than it actually is for us, e.g. because one only calculates orally, and has to construe a rhymed poem for each calculation. Oder das Multiplizieren könnte uns viel schwerer fallen, als es tut -wenn wir z.B. nur mündlich rechneten, & um uns eine Multiplikation zu merken, sie also zu erfassen, wäre es nötig sie in die Form eines gereimten Gedichts zu bringen. Wäre dies dann einem Menschen gelungen, so hätte er das Gefühl, eine große, wunderbare Wahrheit gefunden zu haben. Es wäre sozusagen für jede neue Multiplikation eine neue individuelle Arbeit nötig. for the purposes of this study, I can only highlight a few excerpts. (B) LW, Ms-126,37-39, d.d. 19421030-19421101: math as a game LW starts by exploring the idea that mathematics would be a formal game, 120 in which one never appeals to an extra-mathematical application, and asks what that would even mean: does it mean (1) that one exits and then reenters math proper, or (2) that one transits from one type of mathematical inference to another type of mathematical inference? 121 I think it is important to understand that it is not really important what answer to these questions (yes or no) one would ultimately want to give. The point of these questions is that they make us realize that the relation between 'pure' math and its application is by no means self-evident. In any case, this line of questioning already suggests that there is a tension between (1) the autonomy of an axiomatic system (its game-like nature) and (2) the more 'applied' math that it is supposed to formalize, and that (2) can do without (1), but (1) can't do without (2). This idea is illustrated by the first example in our excerpt, which is 'calculating the surface of a sphere'. LW asks what it would mean to 'gain a new conception of the calculation of the surface of a sphere' and points out that the very identity of what we would call "calculating the surface of a sphere" still depends on whether we can use the technique to actually calculate the actual surface of an actual sphere. 122 LW then transitions to the following question: "To what extent does one need to have a concept of 'proposition' in order to understand Russell's mathematical logic?". 123 Though somewhat abrupt, this transition need not be cryptic at all: again, LW points out that the 120 The theme of axiomatic systems had been introduced on p. 20 of the notebook, d.d. 19421025. 121 Zu sagen, die Math. sei ein Spiel, soll heißen: wir brauchen beim Beweisen nirgends an die Bedeutung der Zeichen appellieren, also an ihre außermathematische Anwendung. Aber was heißt es denn überhaupt, || : an diese appellieren? Wie kann so ein Appell etwas fruchten? 
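Before turning to the excerpts, a small sketch of my own (the numbers are made up for the illustration) may help to keep the two 'normal' items on the list above in view as a foil for the exotic cases: calculating the surface of an actual sphere, and constructing a force polygon. The second case is the more telling one for LW's question in the footnote above: numerically, the 'graphical calculation' is nothing but adding the force arrows head to tail, and no proposition of pure mathematics is visibly appealed to anywhere in the procedure.

```python
import math

# 'Normal' application 1: the surface of an actual sphere, say a ball of radius 0.12 m.
radius_m = 0.12
surface_m2 = 4 * math.pi * radius_m ** 2
print(f"surface of the sphere: {surface_m2:.4f} m^2")

# 'Normal' application 2: a force polygon. Graphically one places the force arrows
# head to tail and closes the polygon; numerically this amounts to componentwise
# addition of the force vectors (values in newtons, chosen arbitrarily here).
forces_n = [(3.0, 0.0), (0.0, 4.0), (-1.0, 2.0)]
resultant = (sum(fx for fx, _ in forces_n), sum(fy for _, fy in forces_n))
print(f"resultant force: {resultant} N, magnitude {math.hypot(*resultant):.2f} N")
```

Whether such a sketch counts as 'pure' or 'applied' math is of course precisely the kind of question LW keeps open; the point is only that both calculations can be carried out, and put to work in building or engineering, just as they stand.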
Heißt das, aus der Mathematik heraustreten & wieder in sie zurückkehren, oder heißt es aus einer math. Schlußweise in eine andre treten? 122 Was heißt es, einen neuen Begriff von der Oberfläche einer Kugel gewinnen? In wiefern ist das dann ein Begriff von der Oberfläche einer Kugel? Doch nur insofern er sich auf wirkliche Kugeln anwenden läßt. 123 Wieweit muß man einen Begriff vom 'Satz' haben, um die Russellsche mathem. Logik zu verstehen? formalized system probably cannot function without a link to a pre-formal notion of 'proposition': the pre-formal notion makes Russell's formalization meaningful. Again, the point is the primacy of the everyday application and the fact that the identity of a formalized mathematical technique still depends on its link with the applied technique it is intended to formalize. The next day, LW continues to think about the topic of the importance of an intended application for math and considers the borderline case of an application that is 'fantastic' (in the sense of 'pure fantasy'), i.e. that is not understood adequately by the agents themselves. LW cannot help himself and gives away his game by suggesting in passing that this is actually the case in set theory. 124 Those of us that would be offended by this suggestion may want to let it slide and look at the many other examples that follow instead: math as cipher, oracular math, math for the study of ghosts, ... (see list above). (C) LW, Ms-126,47-50, d.d. 19421104: "isn't math, accompanied by bullshit, still math?" Let's pick up the thread a few days later. LW presents a series of scenarios, asking in each case whether we would call what is being done, mathematics: • someone calculates competently with complex numbers and is able to apply these calculations in physics, while holding strange beliefs about the nature of √-1 and how it was discovered; 125 • someone expands math with new definitions and theorems in an apparently competent fashion, but seems to conceive of this expansion as the discovery of a new space (which he apparently thinks of as some kind of room) and talks a lot of nonsense when asked to explain; 126 124 1.11.42. Wenn die intendierte Anwendung der Math. wesentlich ist, wie steht es da mit Teilen der Mathematik, deren Anwendung -wenigstens || oder doch das, was Mathematiker für eine || die Anwendung hielten || halten,gänzlich phantastisch ist. So daß man, wie in der Mengenlehre, einen Zweig der Math. treibt, von dessen Anwendung man sich einen ganz falschen Begriff macht. Treibt man nun nicht doch Mathematik? 125 Wer glaubt, die Mathematiker haben ein seltsames Wesen, die √-1, entdeckt, die || das quadriert nun doch -1 ergebe || ergäbe, kann der nicht doch ganz gut mit komplexen Zahlen rechnen & solche Rechnungen in der Physik anwenden? Und sind's darum weniger Rechnungen? In einer Beziehung steht freilich sein Verständnis auf schwachen Füßen; aber er wird mit Sicherheit seine Schlüsse ziehen, & sein, Kalkül wird auf festen Füßen stehen. Wäre es nun nicht lächerlich, zu sagen, dieser triebe nicht Mathematik? • someone executes enormously large multiplications in order to "conquer gigantic new provinces of the land of numbers";127 • a jester invented calculating with √-1 as an absurdist joke, thinking that he is writing down and operating with impossible things. 128 LW then asks the following: Mit andern Worten: Wer an die mathematischen Gegenstände glaubt & ihre seltsamen Eigenschaften,kann der nicht doch Mathematik betreiben? Oder: -treibt der nicht auch Mathematik? 
129 What exactly does LW mean here? What does he refer to by "those who believe in mathematical objects"? More precisely: what does the word "die" in "die mathematischen Gegenstände" imply? Two potential interpretations: • does he mean the hypothetical people who believe in the bizarre objects mentioned above ("die" as a pronoun carrying emphasis, referring back to the previous sentence)? • or does he mean any actual person who believes in any mathematical object at all ("die" as the article carrying no emphasis and referring to objects in general)? The second reading seems more natural, especially without added emphasis in the manuscript (I checked), and this is also the reading reflected in the standard English translation (without an article). But does this description not apply to most mathematicians (both in the 1940s and now)? In that case, what LW says is not only provocative but also heavily ironic, or is LW now simply (innocently?) speaking from his own point of view according to which it is an established fact that believing in mathematical objects is as crazy as the beliefs of the people in his previous examples? 130 In any case, the above fictional examples are obviously intended to be compared to what contemporary practitioners of PhilMath were saying and are an integral part of LW's critical endeavors, which will make up the subject matter of Part 2 below. (D) LW, Ms-126,55-58, d.d. 19421105-19421107: "a kind of parody" Before turning to set theory, LW first asks us to imagine a scenario in which infinite numbers are used in a fairy tale: the dwarfs have accumulated as many gold coins as there are cardinal numbers. LW then remarks (in all seriousness?) that "what can occur in a fairy tale, has to make sense, doesn't it?". 131 Apart from the question as to whether this is funny or not, and (different question) as to whether it was intended to be funny or not, it is true and relevant that stuff that occurs within fairy tales has to make sense -at least, in a certain sense. LW then turns to one of his favorite topics/targets, set theory, and asks us to imagine that set theory was invented by a satirist, and that only later on, one had found something useful or reasonable in it and included it within normal math. And -with an undoubtedly snide reference to Hilbert's famous remark on Cantor-he adds in parentheses: "If one person can view set theory as the "mathematicians' paradise", why can't someone else see it as a joke?". 132 The implication is that neither explanation says anything worthwhile about the math qua math. LW then says: "The question is: is set theory as a joke not also evidently math?". 133 So, LW's point is not (at least not in this context) that set theory is a joke, a parody of math. 134 What he is saying is that even if it is a joke (or part of a fairy tale), it could still be viewed as math. LW then explores several options as to why set theory is evidently math, referring to various opinions that were current within the context of the Grundlagen-debates: (1) He first suggests that perhaps it is because it is a symbolic game following a set of rules (this option obviously refers to formalism). (2) Apparently as an objection to the formalist option (1), he then suggests that even in set-theory-qua-joke certain concepts are being constructed (reference to the anti-formalist / 'conceptualist' party in the Grundlagen-debates), even if one is confused about the application of the concepts. 133 Die Frage ist: ist sie nun als Scherz nicht auch offenbar Mathematik?
134 Of course, LW may actually have thought that set theory is a joke, coincidentally. But even if he was inspired by his actual opinion "set theory is a joke" to use the example "set theory is a joke" to make his point here, he is still not actually saying that "set theory is a joke" in the present context. (3) Apparently as an objection to (2), he then asks how it is possible to have a concept and not be clear about its application (which corresponds to one of the main points of his own work as a whole). 135 It appears that LW steers us away from both formalism and conceptualism as viable explanations of why set-theory-as-a-joke would still be math. Still, I don't think that LW tries to force the point that set-theory-as-a-joke is not -in some sense-math: let's not forget that this example occurs somewhere in the middle of a long list of examples that have in common (a) math-like operations and (b) a wide range of different practical contexts that make them somehow meaningful to practitioners. Within this context, the point seems to be that whatever you believe you're doing while you're calculating doesn't matter that much: you're still calculating and your calculations can still play a meaningful role in whatever practical context they occur in, even if you entertain completely nonsensical beliefs about the nature of your calculations. The snide remark in which he equates Hilbert's idea of set theory as "the mathematician's paradise" with his own fictional idea of set theory as a joke suggests that a lot of what is said within the context of PhilMath is talk of exactly this type: unfortunate nonsense that -fortunately-is external to (1) what makes mathematical technique mathematical technique and (2) what makes a mathematical application that mathematical application. (E) LW, Ms-126, 57-58, d.d. 19421106-07; 77-81, d.d. 19421115 "a family of activities, with a family of applications" The next day (November 6), LW goes on with his list of examples: the construction of a force polygon (an application for which no propositions of pure mathematics are needed), a tribe that uses calculations for divinatory purposes, a people that uses ceremonial calculations but otherwise doesn't calculate. The day after that (November 7), LW only writes the following: "7.11. Would it be any wonder if the technique of calculating had a family of applications?". 136 This sounds like a conclusion of sorts: calculating is not one thing, but a family of many different things. And LW was not done with this idea: a week later (Ms-126,77-81, d.d. 19421115), after writing about 20 notebook pages' worth of remarks on infinite decimal expansions and similar topics, 135 Und warum ist sie offenbar Mathematik? -Weil sie ein Zeichenspiel nach Regeln ist? Werden hier nicht doch offenbar Begriffe gebildet -auch wenn man sich über deren Anwendung nicht im Klaren ist? Aber wie kann man einen Begriff haben & sich über seine Anwendung nicht im Klaren sein? || nicht klar sein? 136 7.11. Wäre es ein Wunder wenn die Technik des Rechnens eine Familie von Anwendungen hätte?! he writes on November 15: "But where is the problem here? Why should I not say that what we call mathematics is a family of activities with a family of purposes?". 137 He illustrates what he means with another batch of imaginary examples: • calculating in rhyme: LW makes us imagine that multiplication would be a lot harder than it actually is for us, e.g.
because one only calculates orally, and has to compose a rhymed poem for each calculation; 138 • studying ghosts (again): LW asks whether it would be considered arithmetic if people thought that numbers were ghosts, and that calculations served the purpose of studying the spiritual plane, etc. 139 It is true that the problem of what still counts as mathematics ceases to be a real problem once one accepts that mathematics is not one single unitary thing, but a family of quite heterogeneous activities, with a quite heterogeneous set of applications. 140 (F) summary The text we just read does not contain many (if any) doctrinaire or dogmatic statements, but consists of a lengthy back and forth, working through the material, struggling with a large number of atypical (sometimes made up 141) applications, and a number of opposing potential 137 15.11. Aber wo ist hier das Problem? Warum soll ich nicht sagen, was wir Mathematik nennen sei eine Familie von Tätigkeiten zu einer Familie von Zwecken. 138 Oder das Multiplizieren könnte uns viel schwerer fallen, als es tut -wenn wir z.B. nur mündlich rechneten, & um uns eine Multiplikation zu merken, sie also zu erfassen, wäre es nötig sie in die Form eines gereimten Gedichts zu bringen. Wäre dies dann einem Menschen gelungen, so hätte er das Gefühl, eine große, wunderbare Wahrheit gefunden zu haben. Es wäre sozusagen für jede neue Multiplikation eine neue individuelle Arbeit nötig. 139 Wenn diese Leute nun glaubten, die Zahlen wären Geister & durch ihre Rechnungen erforschten sie das Geisterreich, oder zwängen die Geister, sich zu offenbaren -wäre dies nun Arithmetik? Oder -wäre es auch dann Arithmetik, wenn diese Menschen die Rechnungen zu nichts anderm gebrauchten? 140 After the remarks illustrating the idea that mathematics is a family of activities with a family of applications, LW writes somewhat cryptically, as a separate paragraph, and between parentheses: "(Ich suche einen Abstieg.)" ("I am looking for a way to get off"?), after which he embarks on a longish development of the idea of "mathematical alchemy", including coded remarks on his not-so-rosy mental state. 141 I would like to point out that LW could have chosen real historical or ethnographical materials to do the same work (there is a lot of freaky historical and ethnographic material out there!); see section 3.2.1(C). Of course, LW did not have access to this material and, more importantly, for the properly philosophical process he takes us through, it doesn't matter if the cases are real or not. Numerology may be a real-life case in point. Is numerology mathematics? It apparently has many things in common with mainstream math, some of the main objects (natural numbers) and operations involved in it appear to be identical, and in some historical contexts they were definitely intertwined, and historically, not all practitioners made a clear distinction between the two. But then again, numerology involves a number of (religious or at least ritual) aspects that do not normally play a role in (what we would call) proper math and sometimes this yields results that would be unacceptable in normal math... Cf. Burkert's ((Burkert 1972), p. 398) comment "number symbolism smothers mathematics" regarding the case of the Pythagorean mathematician Philolaus (some Pythagoreans were mathematicians by that time), who was precluded from finding the 'right' solution to a musical problem by his religious and numerological commitments.
The Pythagorean tenet "the whole tone / octave cannot be dissected" is nonsense from a mathematical point of view, but not so from the point of view of the numerical symbolism that underlies Pythagoreanism (number as such is important, not proportions). opinions about, or reactions to, this material. The text illustrates a few different aspects of LW's PhilMath that I would like to briefly focus on and make explicit. (a) pragmatism The above thought experiments presuppose that the calculations are always recognizable as calculations of some kind, whatever the prose that comes with them, and whatever the applications they are part of. So, LW operates basically with three different aspects of the various practices he is evoking: (1) some sort of calculation, qua operational technique; (2) a practical application that (1) is part of / embedded in; (3) optionally, a discourse or a set of beliefs that comes with (1) and/or (2). As opposed to mainstream approaches to PhilMath, LW emphasizes what people do (i.e. in all these cases, some kind of calculation) and how this activity fits in with an encompassing practical context ("applications"), which in its turn highlights the intertwining of the apparently mathematical and the clearly non-mathematical within each application. All of this fits in with the features of LW's approach to meaning and practice that we discussed in section 1.1 above. (b) comparative / anthropological approach Methodologically speaking, it is obvious that LW's approach is a comparative one: he wants to shed light on our normal mathematical practices by comparing them to a wide range of other activities that are in some respects like them. Demarcation is a recurrent theme in the passage we read: time and time again, LW asks: "Is this still mathematics or is it not?" It does not really matter if we answer the questions with yes or no in any particular case. The point is that there is never a clear and natural 142 line separating math from non-math. This idea is expressed (here as elsewhere) in terms of a 'family of techniques' and a 'family of applications'. The effect of working through the long series of examples (not unlike studying real ethnographic material) is that it forces us to make a comparison between the 'exotic' practices and our own 'normal' practices and to realize (1) that our 'normal' practice is only one option By the way, in Ms-116,247, LW ridicules philosophers who collect empirical facts 'as if the factuality of these things was important to us' (cf. section 3.2.1(C) below). 142 Most of us would probably agree that there is a cut-off at some point, but -as LW points out-it is not self-evident at all that there is a single 'natural' cut-off. As a matter of fact, LW appears to specifically target this notion of 'naturalness' as the object of his critique. And that takes us closer to the critical aspects of LW's philosophy, which we will discuss in Part 2 below. between many (and none of the options is more 'natural' than the next), and (2) there are also many structural analogies between the different options. Thus, the relation between the prose about math that is characteristic of PhilMath and actual math is not fundamentally different from the relation between the beliefs any other practitioner may entertain about his own techniques: ultimately, what counts is the way in which the technique is actually applied in actual everyday practice (as a way to build houses, to calculate prices, to predict the future, to study the lives of the ghosts, ...).
(c) anti-foundationalism The most recurrent explicit point that LW makes throughout his long series of thought experiments is that mathematical or math-like calculations (as we have discussed above, there is no natural way to draw a clear demarcation between the two) become meaningful by being embedded in a wide variety of different practical applications, not by what practitioners say or believe about them. Many (if not all) of the examples de-emphasize the link between calculation and 'foundational' talk by presenting practical applications that work perfectly fine without any foundational talk, or -interestingly-even function when accompanied by completely nonsensical, even moronic explanations. That last point about obviously nonsensical or stupid accompanying talk is interesting in that it pre-empts a possible objection to LW's criticism of mainstream PhilMath. If the question is: "Is it even imaginable that mathematicians talk bullshit about their own technique?", LW's reply would be: "Yes, very much so". What is the difference between believing in mathematical objects and believing that numbers are spirits, or between Cantor's set theoretical discourse and the discourse of a lunatic who believes that he has discovered a new room in the building of math? Well, if you think through the details of it, not that much, seems to be LW's point. LW's suggestion is clearly that this is exactly what is the case in the contemporary debates on the foundations of math: for LW, that kind of talk has nothing to do with either the mathematical techniques themselves or the applications that make them meaningful, and constitutes some kind of folkloristic practice on its own. (d) anti-unitarianism / anti-monism One of the most colorful aspects of the above passage is the variety of potential practical contexts for calculations that LW evokes throughout it. LW's constant insistence on the question of demarcation did not give rise to the articulation of criteria for distinguishing proper math from other, similar activities; on the contrary, it had the result that fool-proof criteria looked less and less plausible. The purpose, or at least the de facto result, of this exercise was to get us (and perhaps LW himself) ready for the idea that math is not a single and unique system of propositions, but rather "a family of techniques, with a family of applications", which puts radical heterogeneity at the core of mathematics. We will see in section 2.3 below that LW sometimes openly targets the idea of math as a coherent whole, and what we've seen in the above passage should be understood in connection with this agenda. Conclusions to Part 1 By way of conclusion to the first part of this study, let's recapitulate a few of the above lines of thought.
First, in section 1, I showed that for LW meaning is a matter of being embedded in practices and Forms of Life: • this 'pragmatism', when applied to math, takes the shape of LW's insistence on the primacy of a wide variety of heterogeneous practices that give meaning to the techniques used within them; • LW's holism and structuralism about practice yield a vision of math in which propositional knowledge is no longer at the core, and in which the meaningfulness of math can no longer be reduced to, or located in, the epistemic dimension (whether conceived of as reference to a mathematical universe, or as a part of human cognition), nor to the agent, nor to a community of agents; • this vision also emphasizes the variability and heterogeneity of math, in direct opposition to those participants in the Grundlagen-debates who view axiomatic systems as the foundations of a unified mathematics: according to LW, what gives meaning to mathematics is the underlying hurly-burly of heterogeneous practices ('applications'), whereas axiomatic systems are add-ons, additional mathematical techniques, alongside the old ones, which cannot serve as foundations for these techniques and cannot even unify them in any real sense of the word. Then, in section 1.2, we explored the idea that if sense equals embedding, nonsense could be construed as a 'lack of embedding': • this idea, in its naïve form, has a certain appeal, in that we intuitively understand that activities like buying apples or building bridges are somehow embedded in our lives in a way that metaphysics is perhaps not; • however, everything that occurs always is embedded somehow and it is therefore impossible to coherently characterize certain types of language use as being 'not embedded', in any real sense of these terms; • LW's notion of 'lack of embedding' turned out to heavily depend on the notion of 'everydayness': lack of embedding appeared to boil down to lack of embedding in everydayness; • everydayness, again, may have a lot of intuitive appeal, but there appears to be no empirical or rational reason to distinguish between 'everyday' activities and 'non-everyday' (?) activities. It is not part of my aim in this study to articulate a fundamental critique of LW's (and other philosophers') concept of "everydayness", but the simple observation that everydayness is not an adequate conceptual tool for the empirical analysis of practices is important in that it gives rise to the question as to why LW (and others) use it. And the answer has to be that everydayness is not a conclusion but a premise. In other words: the distinction between the everyday and the non-everyday is part of an agenda underlying LW's philosophical work as a whole, not the result of this work. And this immediately leads us to the critical agendas underlying LW's philosophy, addressed in Part 2 of this study. In sections 1.1.3(C) and 1.3, we read extended passages in which we observed a number of the above-mentioned aspects at work, which also allowed us to highlight how even the details of LW's work on mathematics fit in with his stance within (or rather: towards) the Grundlagen-debates that got him into philosophy in the first place: he dissociates himself from the epistemic bias in PhilMath (the idea that math is primarily a body of propositional knowledge), from the very idea that mathematics would need or even could have foundations, and from PhilMath's deep-rooted monism (i.e. the idea that mathematics forms a single coherent (unitary and unique) system).
Part 2. Wittgenstein's critical philosophy (of mathematics) The second part of this study is based on a close reading of extended passages taken from a number of LW's manuscripts dealing with mathematics: MS106, MS113, MS117, MS118, MS121, MS124, MS125, MS126, MS161, MS163. I focus on topics where LW appears to attack more or less universally well-accepted aspects of the (philosophy of) mathematics of his time. With respect to the critical remarks that are the main focus of this part of my study, scholars who have a vested interest in LW's status as a great philosopher seem to shy away from even looking at these passages -perhaps for fear of what they may find-, whereas for scholars who already dislike LW, the critical remarks serve as a readily available argument to simply dismiss LW's contribution as a whole: someone who objects to some of Cantor's, Dedekind's, Gödel's most revered contributions obviously doesn't know what he's talking about and must be a crank. Unfortunately, it may therefore be necessary to repeat the following platitude at the beginning of this section: there is a difference between (1) establishing what LW actually said or thought or meant and (2) determining whether he was right or not (or perhaps: acceptable, according to whichever criteria one chooses to apply). I will focus on (1), trying to show how the remarks in question are internally coherent and fit in with LW's work as a whole, and most of the time, I will not even bother with (2). In an introductory section (2.0), I very summarily sketch the general cultural and biographical background against which LW's philosophy developed, which allows me to identify a few modes of thought and expression that recur more or less systematically in LW's life and work, and -as I will show below-are crucial for our understanding of LW's PhilMath. Then, I will present a running commentary on a series of excerpts from LW's manuscripts. As LW deals with the same issues and topics over and over again, approaching them from different angles and exploiting them for apparently different purposes, it is impossible to (1) remain close to the dynamics of a longer stretch of text, and (2) deal with a single topic at the same time. This is why I chose to pick a number of excerpts that display similar lines of thought and present a close reading of those, as follows: • in section 2.1, I comment on 3 passages in which LW discusses various diagonal techniques and the ways in which they are exploited in contemporary philosophical or quasi-philosophical discourse about mathematics; • in section 2.2, I comment on 3 passages in which LW discusses set theory in broad culture-critical terms as "a sign of the times", which allows me to illustrate the link between the broader cultural tendencies discussed in section 2.0.0 and the details of LW's PhilMath; • in section 2.3, I present a number of excerpts that illustrate LW's account of contradictions in formal axiomatic systems (incl. some of the 'notorious' ones that are often interpreted as reactions to Gödel's work). Section 2.4 consists of a summary of some of the main results emerging from Part 2 of this study. Background: Wittgenstein's philosophy as critique: nonsense, fakeness, bad faith, and bad taste LW's philosophy contains a strong ethical bias, as well as a deep-rooted aesthetical bias, which also shows in his biography. In this section, I gather a number of heterogeneous elements that together shed light on some central aspects of LW's outlook on the world.
In this context, it is useful to point out that LW participates in a number of broad cultural tendencies, through his early readings and the general culture of the milieu in which he was raised. Political and cultural context There exists a body of work dealing with the influence on LW of classical German philosophers such as Kant and Schopenhauer, but also authors that were more contemporary to LW such as Fritz Mauthner, Karl Kraus, Oswald Spengler, Otto Weininger, etc. (see below, as well as other aspects of the cultural and general historical background from which LW and his philosophical work emerged. 143 A number of the more salient features of LW's outlook, such as his emphasis on everydayness (as opposed to sublimity, etc.) 144 and on practice (as opposed to thought), take part in -what 143 For instance: (Sass 2001); (Stern and Szabados 2004); (Jacquette 2017); (DeAngelis 2007); (Hanna 2017); (Nyíri 1982); (Nyíri 1992); (McGuinness 2002a) A lot of useful material can be found in the two major LW biographies (Monk 1990) and (McGuinness 1988). See also (Janik 1992); (Nyíri 1982); (Steinvorth 1979). For an analysis, see Biletzki 2003, 'Chapter 6. The Fifth Station: Over the Deep End, Or the Ethical Reading' (pp. 95-105) and 'Cultural and political readings' (pp. 181-186) (Biletzki 2003). 144 Cf. for instance, PhU §80: Wir stehen mit diesen Überlegungen an dem Ort, wo das Problem steht: Inwiefern ist die Logik etwas Sublimes? Denn es schien, daß ihr eine besondere Tiefe a allgemeine Bedeutung a zukomme. Sie liege, so schien es, am Grunde aller Wissenschaften. has been called-'Lebensphilosophie'. The 1999 edition of the Cambridge Dictionary of Philosophy introduces the term as follows (Surber 1999): Such philosophers as Dilthey and Eucken (1846 -1926) frequently applied it to a general philosophical approach or attitude that distinguished itself, on the one hand, from the construction of comprehensive systems by Hegel and his followers and, on the other, from the tendency of empiricism and early positivism to reduce human experience to epistemological questions about sensations or impressions. Rather, a Lebensphilosophie should begin from a recognition of the variety and complexity of concrete and already meaningful human experience as it is "lived"; it should acknowledge that all human beings, including the philosopher, are always immersed in historical processes and forms of organization; and it should seek to understand, describe, and sometimes even alter these and their various patterns of interrelation without abstraction or reduction. Such "philosophies of life" as those of Dilthey and Eucken provided much of the philosophical background for the conception of the social sciences as interpretive rather than explanatory disciplines. They also anticipated some central ideas of phenomenology, in particular the notion of the Life-World in Husserl, and certain closely related themes in Heidegger's version of existentialism. Note that many of the features mentioned in this quotation are applicable to LW's work, especially (or perhaps: more overtly) to his later work: the explicitly non-systematic character of the investigation, the emphasis on our "immersion" in a historical context, rather than on atemporal epistemological issues, etc. One of the most spectacular contributions regarding the cultural context of LW and his work is Allan Janik and Stephen Toulmin's 1973 classic Wittgenstein's Vienna (Janik and Toulmin 1973). 
145 Even if perhaps a bit overenthusiastic in its broad strokes and perhaps sometimes misguided in its technical-philosophical interpretation of LW's work, it is still an impressive account and especially an impressive collection of relevant materials, which it would be unfair and counter-productive to simply dismiss. For our purposes, one of the main points (if not the main point) of the book is immediately relevant. Throughout the book, Janik and Toulmin show that society during the last decades of the Imperial & Royal ("kaiserlich und königlich") regime in Austria (a.k.a. "Kakania") was characterized by an increasing malaise due to the discrepancy between social and political realities and public discourse about these realities: It was the consistent attempt to evade the social and political problems of Austria by the debasement of language -by the invention of "bogus language games," based on the pretense that the existing forms of life were other than they really were-that created the underlying occasion for men's universal confusions about the problems of expression and communication. This confusion found an outlet, both in the particular aesthetic critiques characteristic of all the different arts in late Habsburg Vienna, and also in the 145 See also (Steinvorth 1979); (Seldes 1996); (Molnar 1975). general philosophical critique of language as initiated by Mauthner and subsequently taken up by Wittgenstein himself. (Janik and Toulmin 1973), pp. 273-274 In what follows, I show that this notion of 'fakeness', applied to means of expression, as well as the negative assessment of the era he lived in, are recurring themes in LW's PhilMath. LW famously quoted the following authors as the ones that influenced him most: So haben mich Boltzmann, Hertz, Schopenhauer, Frege, Russell, Kraus, Loos, Weininger, Spengler, Sraffa beeinflußt. (LW, Ms-154,16r) The influence of Boltzmann and Hertz, who appear to have steered LW from engineering to philosophy of science, of Frege and Russell, as his main teachers in logic, of Loos, in the context of LW's own activity as an architect, of Sraffa, as a colleague with whom he had repeated conversations during a prolonged period of time, and perhaps even of Schopenhauer, as the leading philosopher for any German-speaking intellectual of LW's generation, is easy to fit in with the mainstream image of LW's work as a philosopher. However, the ones that are perhaps the most relevant for the purposes of this study are Kraus, Weininger and Spengler: • Karl Kraus was a Viennese journalist, mostly known for his critical attitude towards the way in which the contemporary mainstream press used language in such a way that it obscured the social and political realities of Austria under the waning Habsburgian regime; • Otto Weininger is mainly known for publishing Geschlecht und Charakter (Sex and Character), shortly before his suicide at age 23 on October 3, 1903 in the house where Beethoven died; the book consists of a wide-ranging broad-strokes psychological theory (?) centered around such concepts as 'character', 'sex' (as in 'the sexes'), and 'race', which impressed many a contemporary intellectual, including some prominent Nazis; 146 • Oswald Spengler was an amateur (I mean: non-academic) historian and philosopher of history, mostly known for his 1918 Der Untergang des Abendlandes (The Decline of the West), which offers a negative assessment of the state of Western civilization in the context of a grandiose theoretical framework.
146 The popularity of such public intellectuals as Jordan Peterson shows that Weininger's work may make a comeback soon. LW is said to have been critical of the contents of Weininger's opinions (LW wrote in a letter to Moore d.d. 19310831: "It isn't necessary or rather not possible to agree with him but the greatness lies in that with which we disagree. It is his enormous mistake which is great. I.e. roughly speaking if you just add a "∼" to the whole book it says an important truth" (Wittgenstein 2008)-which -by the way-does not make sense as a way to distance oneself from an ideology, at all), but was mainly impressed by the way in which Weininger addressed 'real problems' head-on. What these authors have in common (among other things) is their attempt to understand the era and the culture in which they lived as a whole and in terms of very general concepts, as well as their very negative assessment of the state of this culture and era. These features also characterize LW's way of thinking, as we will see below. Perhaps the single most important source for this idea is Spengler, whose main idea was that cultures are organic units having a quasi-biological lifespan; when cultures start to die, they turn into -still according to Spengler-civilizations, in which the cultural and social patterns of the old culture survive, but as empty forms,147 without whatever made them vital and meaningful previously. As Guter points out (cf. (Guter 2015), also quoted in section 2.0.2 below), LW shares with Spengler the idea that in an era of flourishing culture, there is an organic unity between the different aspects of such a culture (its literature, its music, its science, its politics, its patterns of everyday life and everyday discourse, ...), which makes all these aspects deeply meaningful. According to Spengler, Western culture entered the civilization phase of its lifespan in the 19 th century. The loss of unity between the various manifestations of the culture in its turn coincides with the loss of their intelligibility. For instance, whereas (according to Spengler, and according to LW) classical classical music was transparently linked to contemporary literature etc., modern music (in the classical tradition) had become increasingly unintelligible. 148 There is a lot more to be said about the conceptual links between the Spenglerian ideas of 'organic unity' and 'decline', the idea of meaningfulness by embedding in the everyday, the concept of 'authenticity' and the subtle or less subtle differences in the way variations on these ideas manifest themselves in authors such as LW and Martin Heidegger. Obviously, these aspects can't be developed in the context of the present study, but my analyses of LW's PhilMath below can be read as a case study in which LW's deep affiliation with Spenglerian themes shows up everywhere: much of LW's criticism of mathematical developments since the 19 th century boils down to the Spenglerian-sounding idea that mathematics has lost its organic connection to the everyday applications that make it meaningful. Not much of the mainstream English-language literature concerned with more technical aspects of LW's philosophical work seems to take on board the ethical and aesthetical and other more general aspects of LW's outlook on the world and on philosophy. For him style, the way something was put, was of enormous importance, and that not only in the artistic sphere. He said once, it wouldn't matter what a friend had done but rather how he talked about it. 
Similarly he used to insist on a careful reading of the dictum, Le style c'est l'homme même. One should note the word 'même': the thought is that the real man reveals himself in his style. The meaning of the words, the content, is something secondary, and so likewise is the brute action performed. Of course, it is an important philosophical observation that actions cannot be separated from the way in which they are judged by him who performs them. In this case, as in many others, 155 LW directs his wrath onto himself and develops the notion of superficiality vs. depth to an almost allegorical degree of detail. For our present purposes, I would like to draw attention to the fact that LW's objections target not so much a lack of depth, but rather fake depth: pretending that there is depth, where in fact there is none. Interestingly, this concept of fake depth is also applied in more technical contexts, for instance in PhU §89, §97, §111, and in the context of LW's PhilMath (Ms-126,133-138, discussed in section 2.1(C) below). (C) authenticity as an existential problem LW's biography also shows that LW suffered 'existentially' from his logico-philosophical problems, in the same way other -perhaps more 'normal'-156 people may suffer from problems of a moral order. For LW, philosophical problems are not a fun game to play, nor a 9 to 5 job, but deadly serious, existentially, as can also be seen in the following oft-quoted passage from Bertrand Russell's Autobiography: He used to come to see me every evening at midnight, and pace up and down my room like a wild beast for three hours in agitated silence. Once I said to him: 'Are you thinking about logic or about your sins?' 'Both', he replied, and continued his pacing. I did not like to suggest that it was time for bed, as it seemed probable both to him and me that on leaving me he would commit suicide. (Russell 2009) In (Edmonds and Eidinow 2001), LW is presented as "not quite human" (thus, at the beginning of chapter 21, interestingly opposed to the "all too human" Popper); cf. also the beginning of chapter 16: "While Popper remains recognizably human despite his aggressive approach to debate and disagreement, there is an unearthly, even alien, quality to Wittgenstein's dealings with others".
Thus, e.g., Sass links LW's ambivalence towards the everyday (both the complete absence of the everyday in LW's early work, and its central status in his later work), as well as his generally speaking negative attitude and the critical nature of his "antiphilosophy" 158 and his preoccupation with issues of self-reflexivity, to his psychological make-up. I do not endorse Sass' psychological approach, but the collection of materials he uses is relevant here, and I 157 The concept is absent from the main current Anglo-Saxon manuals (Kuusela and McGinn 2011) and (Glock and Hyman 2017). See however (Cahill 2004). As pointed out in section 2.0.0 above, "bogus" and similar notions are central to the classic (Janik and Toulmin 1973). More recently, David Egan published extensive work on the significance of the concept of authenticity for the interpretation of LW's work (Egan 2019; 2013). As also suggested by Egan's work and others ((Scheppers 2009) (Scheppers 2017) (Braver 2012) (Egan, Reynolds, and Wendland 2013) (Egan 2019)), I would like to insist that a comparison between LW and Martin Heidegger is enlightening. I can't go into this aspect here, but I would like to point out that the important differences between both thinkers can be pinpointed by looking at the way in which they value everydayness with respect to their shared concern with authenticity: whereas LW evaluates everydayness positively as the source of all meaningfulness, MH sees everydayness as the oppressive rule of "das Man" ("They"). On another occasion, I will articulate how this difference correlates with their very different cultural and ideological profiles: LW the conservative high-society snob, and MH the revolutionary middle-class Nazi. 158 Sass 2001 (Sass 2001), p. 122: "Yet Wittgenstein's strongest impulse, his own genius, if indeed that is the appropriate term, was for philosophizing, and philosophizing of a peculiarly negative sort. If genius is to be defined as creation and absorption -absorption in the service of creation -then one has to recognize the problematic status of Wittgenstein's own work, which both early and later in his career has a distinctively negative flavor. After all, it derives in large measure from a distantiated contemplation and critique of the philosophical discourse of others. The main goal of Wittgenstein's thinking may be the discouragement of philosophizing itself ("Philosophy is a tool which is useful only against philosophers and against the philosopher in us"), but, in some respects at least, it traffics in further alienation, merely recapitulating the condition of philosophy in a higher degree.70 Wittgenstein's own antiphilosophizing is, after all, grounded not in absorption but in a kind of alienated critical self-consciousness -or perhaps we should speak of absorption in a kind of alienated critical self-consciousness. The purpose, in any case, is deconstruction, discouragement, perhaps therapy, certainly not the construction of an alternative philosophical edifice. Wittgenstein writes, "The philosopher is not a citizen of any community of ideas. That is what makes him into a philosopher."71 This would, however, seem to be doubly true of the Wittgensteinian philosopher: alienated not only from the language of household, workshop, and marketplace, but from normal philosophical conversation as well. Perhaps this is part of what Wittgenstein had in mind when he wrote: "It's only by thinking even more crazily than philosophers do that you can solve their problems" (CV 75/86)." 
want to acknowledge that I got acquainted with some of the material that has turned out crucial for my understanding of LW's critical remarks via Sass' article. Sass adduces a lot of material showing LW's preoccupation with lack of sincerity, vanity and pretense. A case in point is the fact that in the mid-1930s, LW went through a period in which he felt a strong urge to confess some of his 'sins' to various friends and acquaintances; these sins turned out to be rather petty cases of insincerity, which however bothered LW greatly. 159 Of course, the very fact of desiring to confess in its turn can easily be interpreted as a kind of vanity and theatricality, which in its turn LW did not fail to blame himself for ( (Sass 2001), pp. 133-134). The same basic patterns, which Sass conveniently summarizes under the heading of 'inauthenticity', reoccur when it comes to LW's condemnation of other peoples' behavior: Similar attitudes and intuitions pervade Wittgenstein's more explicitly ethical or moral concerns. Perhaps the main object of his ethical condemnation was what he termed "vanity," a quality he associated largely with tendencies toward theatrical self-display -that is, with what he saw as the inauthenticity and lack of courage inherent in being overly concerned about the impression one makes on other people, and with the detached self-consciousness inherent in imagining oneself as a potential object of admiration for others (M 278).108 In Wittgenstein's diaries of the 1930s, "vanity" is a central theme; he despises "vanity" yet is constantly discovering it in himself: [...] (Sass 2001)p. 132 Interestingly, Sass also points out a link with LW's tastes in artistic expression, and more specifically his dislike for theatricality in this respect as well: Wittgenstein was steeped in these traditions, in romanticism, certainly, but also in the early modernist movements of fin de siècle and early 20th-century-Vienna.103 His own proclivities are apparent in his dislike of any kind of explicit moralizing and didacticism in literature and in his preference for works of art that refuse to betray the purity, authenticity, or integrity of their being through theatrical selfconsciousness or by attempting to say what can only be shown. Literary works that Wittgenstein appreciated for such qualities include Tolstoy's Hadji Murat (McG 33) and the detective stories of Norbert Davis and other American writers of the "hard-boiled" school. "A typical American film, naïve and silly, can -for all its silliness and even by means of it -be instructive," he wrote. "A fatuous, self-conscious English film can teach one nothing" (CV 57/65).104 Wittgenstein accepted an aesthetics (and an ethics) of authenticity -a view that would equate detached or theatrical self-consciousness with a diminishment of 159 This obsession with his own vanity and the social awkwardness that comes with it, perdured into the last years of LW's life. Cf. O.K.Bouwsma's account of a conversation between LW and himself d.d. 19490805 about a conversation they had had on 19490731: "He [sc. LW] hardly knew how to tell me [sc. OKB]. It was absurd, etc. 'I am a very vain person'. 'The talk wasn't good. Intellectually, it may have been, but that isn't the point'. 'My vanity, my vanity'" ( (Bouwsma 1999), p. 102). This quotation is also interesting, because LW, here as elsewhere, appears to condemn behavior (in this case a conversation between philosophers) for its style (if that is the word), while conceding that 'intellectually' it was OK. 
For this trope and its importance for LW's philosophy, see section 2.0.2 below. both the reality of one's existence and the distinctiveness of one's identity. "If I perform to myself, then it's this that the style expresses," wrote Wittgenstein. "And then the style cannot be my own."105 ((Sass 2001), pp. 131-132) We will see in our analyses below that the same aversion to pretense and theatricality comes back as a key component in LW's philosophical work. Whatever the validity and relevance of Sass' psychological assessments, Sass' material does show that, in LW's case, it is very hard to separate the man's existential worries from his philosophical worries, especially in the light of what he says on the topic himself. So, this material does strengthen our claim that LW's work on PhilMath (or any other topic) should not be artificially separated from the overall aims (and -dare I say-: 'spirit') of his philosophy. In what follows, I will be able to point out a recurrent concern with fakeness in LW's PhilMath, in the sense of a claim that things are presented as different from what they are.160 Wittgenstein on art in general and Mahler's music in particular We can also take a closer look at exactly what LW's aesthetical judgments in an art-related context amounted to. The following excerpt from LW's biography by Monk is worth quoting here: In discussing aesthetics, Wittgenstein was not attempting to contribute to the philosophical discipline that goes by that name. The very idea that there could be such a discipline was a consequence, or perhaps a symptom, of the 'other'. He was, instead, trying to rescue questions of artistic appreciation from that discipline, particularly from the idea that there could be a kind of science of aesthetics: Rather than trying to answer the traditional questions of aesthetics ('What is beauty?' etc.), Wittgenstein gives a succession of examples to show that artistic appreciation does not consist (as one might think from reading some philosophical discussion of aesthetics) in standing before a painting and saying: 'That's beautiful.' Appreciation takes a bewildering variety of forms, which differ from culture to culture, and quite often will not consist in saying anything. Appreciation will be shown, by actions as often as by words, by certain gestures of disgust or satisfaction, by the way we read a work of poetry or play a piece of music, by how often we read or listen to the same piece, and how we do so. These different forms of appreciation do not have any one thing in common that one can isolate in answer to the question: 'What is artistic appreciation?' They are, rather, linked by a complicated series of 'family resemblances'. Thus: It is not only difficult to describe what appreciation consists in, but impossible. To describe what it consists in we would have to describe the whole environment. (Monk 1990), pp. 404-405 This excerpt from Monk's biography covers a couple of themes that are important for our purposes. First, it reminds us of the not necessarily anti-scientific, but at least non-scientific nature of LW's approach to aesthetics (as to other topics) and especially the idea underlying the way he framed the whole TLP, i.e. that what is really valuable in life inherently falls outside the realm of propositional truth-values.
This is immediately linked to -second-the ultimately non-propositional, even non-verbal nature of aesthetic appreciation (as other ways in which we give meaning to our world), and -third-his thoroughly holistic view of the context that would be relevant to describe how we appreciate things aesthetically. Eran Guter's contribution '"A Surrogate for the Soul": Wittgenstein and Schoenberg' (Guter 2011) offers a number of interesting insights. First of all, Guter shows how LW links musical understanding to our ability to operate with an intuitive sense of human physiognomy, in a way that defies any mechanical conception of rule-following, let alone an epistemic grounding in propositional contents ( (Guter 2011), pp. 124-125); see also (Guter 2017)). Guter also insists on the very intricate ways in which LW sees a link between musical meaning and the way in which art is embedded in the surrounding culture. Guter's article 'The Good, the Bad, and the Vacuous: Wittgenstein on Modern and Future Musics' (Guter 2015) contains an extensive (but somewhat uneven) analysis of some of LW's remarks on music against the background of Spengler's work and musicologist Heinrich Schenker's equally (if not more) conservative work on the tonal and harmonic features of classical classical music. 161 Guter emphasizes the point that -according to Spengler and LW-the intelligibility of classical music was the result of the organic relation it had with other aspects of the culture it was a part of. The following excerpt from one of LW's manuscripts (a notebook that he intermittently used as some kind of a diary in 1930-1932 and 1936-1937, containing miscellaneous remarks, often but not always of a personal nature, and often but not always written in code), and the main 161 The work of Heinrich Schenker (1868-1935) represents a vision of what constitutes some of the values appreciated in the tradition of European 'classical' music (in short: a certain form of tonality and harmony), which is a highly reductive approach in that it de facto only applies to European classical music of a very brief period in time, but also in that it completely ignores most of the aspects that are important in most types of music (rhythm, phrasing, timbre, ..), including the European 'classical' music that is its subject matter. This vision was horribly backward even in its own time and especially in the Vienna of the 1920s, with its burgeoning artistic experimentation; on a somewhat larger scale, as a theory of music in general, it is criminally ethnocentrist and classist, in that it excludes most music made on the planet, including most music made in Europe, from its scope. For a recent internet controversy involving Schenker's theory, opposing (i.a.) far right internet guru Ben Shapiro and a number of musicologists, see the interesting video 'Music Theory and White Supremacy' on music educator Adam Neely's youtube-channel (Neely 2020). subject matter of Guter's above-mentioned 2015 article, illustrates this aspect of LW's thinking quite well: 27. Die Musik aller || der vergangenen Zeiten entspricht immer gewissen Maximen des guten & rechten der selben Zeit. So erkennen wir in Brahms die Grundsätze Kellers etc. etc. Und darum muß eine gute Musik die heute oder vor kurzem gefunden wurde, die also modern ist, absurd erscheinen, denn wenn sie irgend einer der heute ausgesprochenen Maximen entspricht so muß sie Dreck sein. 
Dieser Satz ist nicht leicht verständlich aber es ist so: Das Rechte heute zu formulieren dazu ist so gut wie niemand gescheit genug & alle Formeln, Maximen, die ausgesprochen werden sind Unsinn. Die Wahrheit würde allen Menschen ganz paradox klingen. Und der Komponist der sie in sich fühlt muß mit seinem Gefühl im Gegensatz stehen zu allem jetzt Ausgesprochenen & muß also nach den gegenwärtigen Maßstäben absurd, blödsinnig, erscheinen. Aber nicht anziehend absurd (denn das ist das was doch im Grunde der heutigen Auffassung entspricht) sondern nichtssagend. Labor ist dafür ein Beispiel dort wo er wirklich Bedeutendes geschaffen hat wie in einigen, wenigen, Stücken. 162 (Ms-183,-59-61, d.d. 19310127) According to LW, it is almost impossible to write -what he considers-good music in the era in which he wrote (an era which -I guess-is still going on, or -then again-maybe not) because good music corresponds to the conception of the good and the right of its time. If someone wrote music that corresponded to the slogans of LW's era (or just before or after), then it has to be trash (LW says "Dreck"). 162 Guter ((Guter 2015), p. 426) gives the following translation: "The music of all periods [the music of the past] always appropriates certain maxims of the good and the right of its own time. In this way we recognize the principles of Keller in Brahms etc etc. And for that reason [good] music, which is being conceived today or that has been conceived recently, which is therefore modern, seems absurd; for if it corresponds to any of the maxims that are articulated today, then it must be rubbish. This sentence is not easy to understand but it is so: no one is astute enough to formulate today what is correct, and all formulations, maxims, which are articulated are nonsense [Unsinn]. The truth would sound entirely paradoxical to all people. And the composer who feels this within him must confront with this feeling everything that is [now] articulated and therefore [his music] must appear by the present standards absurd, timid [blödsinnig]. But not absurd in a dressed-up sense (for after all, this is basically what corresponds to the present attitude) but vacuous [Nichtssagend]. Labor is an example of this where he created something really significant as in some few pieces." 
I underlined those words that I object to as translations of the German original: "seems absurd" is not the same thing as "has to appear absurd"; "it is so" is not a correct translation of "es ist so" (better: "that's the way it is", "it is true" (referring to (the contents of) the previous sentence), …); "correct" is a possible translation for "recht", but not if you emphatically and correctly translated a previous occurrence of "das Rechte" by "the right"; "formulations" is not the same thing as formulas, in the linguistic sense of "verbal expressions that are fixed in form", which is clearly what is meant here; "timid" is horribly wrong as a translation of "blödsinnig", which clearly and transparently means something like "feeble-minded" and is unambiguously synonymous with "stupid", not "timid" (one may even argue for "retarded", as an offensive but period-correct translation but that would require some further research); "anziehend" simply means 'attractive' (from the verb 'anziehen' in the sense of 'to attract', also in the physical sense of the word) and has nothing to do with 'dressing up' (Guter's misunderstanding is probably based on a mistaken use of a dictionary: 'anziehen' can also be used in the sense of 'to put on' with a piece of clothing as a direct object, but this usage has nothing to do with the meaning of the adjective "anziehend"); one could make an argument that "Nichtssagend" fits in with a broader semantic field involving 'empty' or 'vacuous' expressions but "vacuous" as a translation for a word that transparently means "saying nothing" is objectionable. It is important to understand that for LW, these aspects apply not only to aesthetical matters, but to meaning in general (see section 2.0.3 below). Throughout LW's philosophical work we find examples that point out analogies between propositional meaning and non-verbal types of meaning, and these analogies go both ways: LW applies concepts like 'gesture', 'sentence' [Satz] to music (as is traditional), but also invokes the use of these terms as applied to music to shed light on verbal meaning; similarly, he talks about the way musical phrases can be felt to follow each other 'logically', and uses that example to shed light on what it means for one proposition to 'follow logically' from another (cf. e.g. section 2.3(F) below). Thus, the idea that cultural artefacts are intelligible to the extent that they are an organic expression of the culture that they are part of, also applies to discourse in general and philosophy in particular. Throughout part 2 of this study (but see especially section 2.2), we will encounter passages in which LW applies exactly this line of thought to mathematical issues. (A) LW on Mahler LW's comments on Mahler are particularly revealing for the purposes of this study, so I would like to dwell on those for a moment. MS 136 110b-111a [19480114] After remarks about perceiving something as something, the following excerpt 163 is the end for the entry dated 14/1, 164 and the next day, LW returns to his line of thought about perception, so we can read this as a more or less self-contained piece. What is interesting here, is how categorical LW's judgment is and how it is not based on any formal features of Mahler's music, but on the idea that it only appears to be classical music and in the end is not that. 
Equally interesting are the terms that LW uses to articulate his judgement: the only concrete term is 'vanity' [Eitelkeit]; technical ability is definitely not the problem, nor is any other traditional aesthetic criterion. LW also interprets Mahler's case in very broad quasi-historical (Spenglerian?) terms: Mahler is of a different nature as the great composers of the past, and perhaps the circumstances have changed to such an extent that one can't even begin to compare the value of both types of works. 163 | Wenn es wahr ist, wie ich glaube, daß Mahlers Musik nichts wert ist, dann ist die Frage, was er, meines Erachtens, mit seinem Talent hätte tun sollen. Denn ganz offenbar gehörten doch eine Reihe sehr seltener Talente dazu, diese schlechte Musik zu machen. Hätte er z.B. seine Symphonien schreiben & verbrennen sollen? oder hätte er sich Gewalt antun, & sie nicht schreiben sollen? Hätte er sie schreiben & einsehen sollen daß sie sie nichts wert seien? Aber wie hätte er das einsehen können? Ich sehe es, weil ich seine Musik mit der der großen Komponisten vergleichen kann. Aber er konnte das nicht; denn wem das eingefallen ist, der mag wohl gegen den Wert des Produkts mißtrauisch sein, weil er ja wohl sieht, daß er nicht, sozusagen, die Natur der andern großen Komponisten habe, -aber die Wertlosigkeit wird er deswegen nicht einsehen, denn er kann sich immer sagen, daß er zwar anders ist, als die übrigen (die er aber bewundert) aber in einer anderen Art wertvoll. Man könnte vielleicht sagen: Wenn Keiner, den Du bewunderst, so ist wie Du, dann glaubst Du wohl nur darum an Deinen Wert, weil Du's bist. -Sogar wer gegen die Eitelkeit kämpft, aber darin nicht ganz erfolgreich ist, wird sich immer über den Wert seines Produktes täuschen. Am Gefährlichsten aber scheint es zu sein, wenn man seine Arbeit irgendwo in die Stellung bringt, wo sie, zuerst von einem selbst & dann, von Andern mit den alten großen Werken verglichen wird. An so einen Vergleich sollte man gar nicht denken. Denn wenn die Umstände heute wirklich so anders sind, als die frühern, daß man sein Werk der Art nach nicht mit den früheren Werken vergleichen kann, dann kann man auch den Wert nicht mit dem eines andern vergleichen. Ich selbst mache immer wieder den Fehler, von dem hier die Rede ist. Unbestechlichkeit ist alles! | Konglomerat: Nationalgefühl, z.B. 164 This remark is part of a manuscript that is written 17 years after the ones we will discuss here below, and the thematic continuity is remarkable as such. Note that LW did not necessarily become milder and less trenchant with age. 165 This is not to say that there is no link between the topic of 'aspect-seeing' and LW's critique of Mahler's music: both topics illustrate the theme of non-propositional meaning, which we have argued to be perhaps the most significant aspect of LW's later work, as compared to his earlier work (cf. section 1.1.1(E) above). 166 LW has always had a positive opinion towards Mahler's talent as a conductor and in this passage he appears to not even deny his technical ability as a composer either. MS 154 17v-19r [1931?] [=Zettel, p. 16-17] In the context of a long string of -in 2022-unpleasantly racist remarks about the "Jewish mind", 167 LW ends up talking about the derivative nature of his own work, and finally -and in passing-about the -I guess-"Jewish"/"derivative"/ (I would say:) inauthentic nature of Ma[h]ler's work. 168 Once again, LW applies this criticism to himself, which once again does not excuse its vulgarity and idiocy. 
169 Mahler's work is unlike a classical symphony, in the same way that a picture of an apple tree is unlike an apple tree: something altogether different. What is interesting for our purposes is LW's insistence on "organic", hard to formalize, aspects at the bottom of his judgement: not only does LW not point at any formal aspect of Mahler's work at all, he actually points away from the formal aspects, explicitly stating that the difference between Mahler and the classics is at its clearest exactly where he formally does resemble the classics. Again, LW objects to art that is not properly embedded because it pretends to be something else than it is, but it is -again-important to note that this embedding does not necessarily boil down to the absence or presence of this or that formal feature. Thus, LW approves of the simple tonality in (for instance) Josef Labor's work, 170 but disapproves of the occasional simply tonal passages in Mahler's work, where they sound somehow inauthentic (Guter 2011) p. 233). 171 167 Manuscript 154 contains a lot of remarks about Jewishness, as well as remarks on Brahms, Bruckner, Mendelssohn, ... 168 Es ist dem jüdischen Geiste typisch das Werk eines Andern besser zu verstehen als der es selbst versteht. Ich habe mich oft dabei ertappt wenn ich ein Bild entweder richtig hatte rahmen lassen oder in die richtige Umgebung gehangen hatte so stolz zu sein als hätte ich das Bild gemalt. Das ist eigentlich nicht richtig; nicht "so stolz als hätte ich es gemalt" sondern so stolz als hätte ich es malen geholfen, als hätte ich sozusagen einen kleinen Teil davon gemalt. Es ist so als würde der außerordentliche Arrangeur von Gräsern am Schluß denken daß er doch, wenigstens ein ganz winziges Gräschen, selbst erzeugt habe. Während er sich klar sein muß, daß seine Arbeit auf einem gänzlich andern Gebiet liegt. Der Vorgang der Entstehung auch des winzigsten & schäbigsten Gräschens ist ihm gänzlich fremd & unbekannt. Das genaueste Bild eines ganzen Apfelbaumes hat in gewissem Sinne unendlich viel weniger Ähnlichkeit mit ihm als das kleinste Maßliebchen mit dem Baum hat. Und in diesem Sinne ist eine Brucknersche Symphonie mit einer Symphonie der heroischen Zeit unendlich näher verwandt als eine Mahlerische. Wenn diese ein Kunstwerk ist, dann eines gänzlich andrer Art. (Diese Betrachtung aber selbst ist eigentlich Spenglerisch.) Als ich übrigens in Norwegen war, im Jahre 1913-14 hatte ich eigene Gedanken, so scheint es mir jetzt wenigstens. Ich meine, es kommt mir so vor, als hätte ich damals in mir neue Denkbewegungen geboren (Aber vielleicht irre ich mich). Während ich jetzt nur mehr alte anzuwenden scheine. 169 But that perhaps only means that I do not share these aspects of LW's Form of Life, just as LW apparently does not share certain aspects of Mahler's Form of Life. 170 Labor (1842-1924) was a renowned pianist, organist and piano teacher, and a protégé of the Wittgenstein family. His technically anachronistic compositions, for some reason appreciated by LW, sound highly unremarkable to me and appear to be all but forgotten. 171 The contents of the previous remark are fleshed out by the following quote out of LW's diaries of the same year: "Wenn die späten unter den großen Komponisten einmal in einfachen klaren harmonischen Verhältnissen schreiben, dann ist es als bekennten sie sich zu ihrer Stammmutter. 
Maler scheint mir gerade in diesen Momenten (wenn die Anderen am stärksten ergreifen) besonders unerträglich & ich möchte dann immer sagen: aber das hast Du ja nur von den Anderen gehört, das gehört ja nicht (wirklich) Dir". Translation by Klagge and Nordmann: "When for a change the later ones of the great composers write in simple harmonic progressions, they are showing allegiance to their ancestral mother. Especially in these moments (where the others are most moving) Mahler seems especially unbearable to me & I always want to say then: but you have only heard this from the others, that isn't (really) yours." LW did write "Maler", without the "h" (I checked the photograph of the manuscript).
The very preposterousness of LW's comments on Mahler, the very fact that the worthlessness of Mahler's compositions is a self-evident fact to LW (but not necessarily to us (or to Mahler)), does illustrate one of LW's important philosophical points: 172 these personal, cultural, historical, in any case obviously contingent perceptions, sensations and judgements are at the bottom, the bedrock of our worldview or our lived experience. The aesthetic which LW apparently grew up with was bedrock to him, as much as gravity, air to breathe, or 1+1=2. LW's aesthetic, as discussed here above, shows to what extent LW is the product of an aesthetically rigid culture (especially as compared to the burgeoning experiments happening in the Vienna he knew well). But the philosophical point we may take from this is that for all of us, at any point in time, the bedrock of our aesthetical appreciations -however open-minded we are- is ultimately as historically contingent, complex, and non-propositional as LW's. 173
172 If we wanted to be charitable, we could suggest that this is perhaps one of the reasons LW kept indulging in or even developing this type of remarks in the context of the notebooks in which he documented his work.
173 One can argue (Jip van Besouw, personal communication October 2022) that other cultural affiliations, less rigid than LW's, might allow for a more eclectic, permeable and changeable aesthetic bedrock. But that does not change the fact that the aesthetic appreciation itself, when it occurs, is bedrock.
(B) LW's aesthetics of authenticity
Again, LW's judgment (in this case of Mahler's music) involves some kind of deep and 'organic' embedding of the musical utterance in the culture that it is part of, not conformity to this or that formal criterion. What is objected to is that Mahler -supposedly, i.e. according to LW- in the passages referred to pretends to use certain idioms in a 'classical' way but in fact only apparently does so. It is interesting to observe that LW also objects to the 'modern' music of composers such as Schönberg that burgeoned in the Vienna that he grew up in (there are many 'external' sociobiographical links between LW and Schönberg, but they do not appear to lead to anything philosophically relevant, except perhaps by contrast ((Guter 2011), p. 209 and passim)). It may be interesting to consider how the formalism of Schönberg's serial approach might have been objectionable to LW in the same way that formalism in mathematics would irk him. Both are defined by the use of algorithm-like methods. Both break with more or less long-standing practices (or at least are perceived as such). Both are signs of the time, and for LW, as for a number of his contemporaries (most notably Spengler, as discussed in section 2.0.0 above), this time was not a good time.
For all practical purposes, this may also be interpreted as LW simply displaying his early 19 th century tastes. 174 LW did not advocate some kind of functionalist aesthetics either: his claim is not that form should only follow function and his objections against ornamentation are not based on functionalist concepts 175 but on the idea that -for instance-a bed should look like a bed and that people normally walk around a bed; the very functional idea to put wheels underneath it is ruled out for that reason (McGuinness 2002b) pp. 18-20: The two engineers discussed how Eccles's new house should be furnished and were agreed on the exclusion of ornament. Wittgenstein was, as usual, the critic and adviser: in July 1914 he wrote to Eccles, I can't see any drawing of a bed; or do you wish to take the one which the furniture manufacturers submitted? If so, do insist that they cut off all those beastly fancy ends. And why should the bed stand on rollers? You're not going to travel about with it in your house!? By all means [probably 'At all events' is meant] have the other things made after your design! [...] It does not seem that the subordination of design to function, in the sense of intended use, would be an accurate description of Wittgenstein's tastes. These were connected, very typically for him, with his views on the value of abstract education. He used to say that mathematics would promote good taste, 'since good taste is genuine taste and therefore is furthered by whatever makes people think truthfully'.4 Speaking to Russell he emphasized construction as the decisive feature. A thing must be fully the thing it 174 Cf. the following remarks on African art, as published in Lectures and Conversations on Aesthetics, Psychology and Religious Belief, pp. 8 seqq. (Wittgenstein 1967 They may be called 'appreciation'. To the extent that this account is accurate, LW again displays his own embeddedness, but it is interesting to see his insistence on the fact that he doesn't know how other people, with different aesthetico-cultural affiliations, appreciate art, which shows a certain awareness and lucidity that seems to be absent from the world-view of his peers when it comes to non-European art (or European art, for that matter), but also from his own remarks about Mahler. 175 A lot has been made of the lack of ornamentation in the house that LW designed for his sister. This design feature it shared with contemporary functionalist architecture. However, LW's own justification for this feature appears to have been very different from functionalist discourse. was; and life must go on around it in the way appropriate to that. Thus Eccles's bed, as we have just seen, was not to have rollers: it was to be a thing around which people moved. A first important aspect is that functionality is immediately dismissed as a relevant criterion. Next, the sentence "He used to say that mathematics would promote good taste, 'since good taste is genuine taste and therefore is furthered by whatever makes people think truthfully'" illustrates how, for LW, mathematics is fully intertwined with culture in general, as well as with the individual practitioner's ethics and aesthetics. And finally, I would like to highlight "A thing must be fully the thing it was; and life must go on around it in the way appropriate to that": to the extent that McGuiness' interpretation is correct, it illustrates once more the pervasiveness of various avatars the authenticity trope in LW's thought. So: we have seen that 'fakeness vs. 
authenticity' is recurring theme in LW's thoughts about aesthetical matters. This central aspect of LW's aesthetic could tentatively be summarized as follows: things should look/sound/appear as they are. Again, it appears that what LW finds most objectionable is that things are fake, i.e. that they look/sound/appear like A, but actually are B. This goes for people (see section 2.0.3 above), for art (see here above), this goes also for statements about math (see below). Epistemic authenticity: nonsense as fake sense and bad faith Here above I collected circumstantial evidence for the idea that authenticity is a core concept when it comes to understanding LW's modes of thought in general, as well as the general cultural milieu from which he emerged. Both in ethical and aesthetical matters LW's stern In what follows, I will try and formulate how the notions of authenticity/fakeness apply to the epistemic matters that make up the bulk of LW's work, and especially how it relates to his account of meaning, as discussed in the above. (von Ficker 1988) pp. 196-197 If taken seriously, this should be a key ["ein Schlüssel", says LW] to his whole philosophy (at least: at this, early, stage): his work consists of two parts: that what's in the book, on the one hand, everything he did not write, on the other; the second part is the most important part; the ethical is -so to speak-delimited [begrenzt] from the inside by the book; and strictly speaking, that is the only way it can be delimited. The question is then: how exactly does an ethical and/or aesthetical impetus generate LW's apparently technical work on meaning in a logical or logical-anthropological sense? I believe this issue should be taken seriously, and also as a 'technical' philosophical-analytical matter, not only at the 'meta-level' or as a matter of human interest. 176 The letter was intended to sell the TLP to a publisher and the way LW formulates the matter here is not that different from what is said in the preface and in the last few pages of the TLP: the contents are presented as a definitive solution for all logical problems, but the message is at the same time that 'not much is done' by solving these problems (TLP, 'Preface') and that these solutions are ultimately meaningless and function like a ladder that can be thrown away after one has climbed up on it ( §6.54). It remains to be seen to what extent this applies literally to later stages of LW's development, but in the light of the analyses below, I believe it is safe to say that even in his later work on PhilMath, LW was only focusing on technical details to the extent that they shed light on a small number of very general topics, of a definitely non-technical nature. 176 Most of the mainstream literature focusing on LW in connection with ethics does not seriously take into account LW's claim that the whole of his philosophical endeavor is motivated by ethical concerns. For instance the entry 'Wittgenstein on ethics' (Arrington 2017) in the most recent high-profile manual on LW does not mention the issue as to what LW might have meant when he said that the point of TLP is an ethical one. 
Although most of what LW has to say about 'ethics' in the TLP and the Lecture on Ethics (1929-1930 (?), published as (Wittgenstein 1965)), is not really relevant for our purposes, 177 it is important to note that LW's use of the term 'ethical' is somewhat idiosyncratic in that, for LW, ethics is the study of value in general and covers everything that is important in life: If we take this sense of the term 'ethics' literally, it becomes clear what LW meant when he wrote to Ficker that the key to his whole philosophy (at the time of the TLP) was ethical, i.e. the part that was not in the book: if the TLP is only concerned with the study of statements that are either true or false, and the TLP shows how little is achieved by 'solving all the issues' in that domain, i.e. how little these things really matter, then it actually makes sense to say that it attracts the attention to all that does matter and to the fact that the things that really matter are not reducible to matters of propositional truth or falsehood. Now, it also follows that presenting things that are a matter of values as if they are a matter of facts (i.e. inherently bivalent (true or false) propositions about the world) is also a case of fake sense: as propositions, these utterances are fake, even if they consist in a (misguided) attempt to express something of value to the speaker. 178 177 Sidenote on LW on the transcendentality of ethics/aesthetics. Without going into details, and mostly to preemptively put the issue aside as irrelevant in the present context, I feel I need to mention LW's remarks on the transcendental nature of ethics in TLP 6.4 and following subsections: "Es ist klar, daß sich die Ethik nicht aussprechen lässt. Die Ethik ist transzendental. (Ethik und Ästhetik sind Eins.)" (TLP 6.421). There are similar lines of thought in his Lecture on Ethics. The gist of LW's thoughts on this matter is quite clear: utterances on ethics (or aesthetics) are not propositional in the sense that they say something true or false about the world, and in that sense, they are transcendental (not a fact, therefore not part of the world) but ipso facto they are also nonsense in the quasi-technical sense of the term adopted in the TLP (neither true nor false, so not saying anything about the world, therefore nonsense). However, these lines of thought are not immediately relevant to the present subject matter, which is not the logical status of talk about ethics per se, but the actual ethical values that LW appears to endorse and enact in his life and his work. Of course, the apparent tension between LW's views on ethical discourse (especially the early ones) and his own ethical-aesthetical discourse remains an interesting and philosophically relevant issue (cf. the tension between his anti-revisionist views on the aims of philosophy and the critical remarks that are omnipresent throughout his philosophical work; cf. section 0.2(D) above and section 3.1.1(C8) below). 178 Cf. -of course-LW's famous admonishment to silence at the end of the TLP. Even if LW at the time of the TLP may have believed one should literally shut up unless one had something factual to say (which is an unteable position, of course, as the mere existence of the TLP self-consciously shows), the more realistic view of what human language use consists of, which LW started developing as soon as he came back to philosphy in the 1920s, implied ipso facto a less narrow-minded view on ethical talk. 
Already in the Lecture on ethics quoted above, LW expressed an opinion in which he backpedals the austerity of the end of the TLP: "Ethics so far as it springs from the desire to say something about the ultimate meaning of life, the absolute good, the absolute valuable, can be no science. What it says does not add to our knowledge in any sense. But it is a document of a tendency in the human mind (B) fakeness as the target of Wittgenstein's philosophical criticism: terms, concepts and argumentative patterns In section 1.2 above, we pointed out that for LW, meaninglessness/nonsense can be defined in terms of lack of embedding in everyday practices and that gibberish is not a problem in this respect: only things that sound like they make sense but don't, are a problem; fake sense is the problem. The above sheds new light on this notion of 'fake sense', giving it more width and depth than it had in the purely technical context in which we first introduced it: fakeness, pretense, fiction, etc. have become much more central concepts, with a much wider network of connections to other concepts, than they seemed to be, at first. In sections 2.1 through 2.4 below, I will show how these concepts are systematically exploited in a number of key passages in LW's work on mathematics and how they are key to our understanding of what is at issue in these pages. Here, I want to offer a quick illustration of the pervasiveness of terms that denote 'fakeness' in one form or another within LW's philosophical work. The purpose is to focus the readers' attention to a cluster of terms, concepts and argumentative patterns that will reoccur time and time again in the material I will analyze below. A nice sample to start with is the passage in PhU in which LW is most explicit about the aims and methods of his philosophy ( § §89-133 according to the commentary by Baker & Hacker (Baker and Hacker 2005)). Against the backdrop of the above, it is truly remarkable how much of the contents of these paragraphs consists of exactly the preoccupation with fiction, illusion, nebulousness, etc. Let us first analyze a few -for some readers-very familiar passages. the crystalline nature of logic as a nimbus surrounding thought (PhU §97) "Sprache", "Erfahrung", "Welt", wenn sie eine Verwendung haben, eine so niedrige haben müssen, wie die Worte "Tisch", "Lampe", "Tür". 179 LW evokes the traditional conception of logic as something in common to thought and the world, something a priori to all experience, something simple and pure, the most concrete thing, hard as crystal, and calls this "a nimbus that surrounds thought". The word 'nimbus' evokes something visible but not actually there, not actually real, a mere appearance. Similarly, he evokes the idea that the specialness, the depth, that what is essential to us about logical investigation resides in the 'incomparable' nature of language. LW calls this idea an 'illusion' [Täuschung]. Note the array of terms denoting what I will call sensationalism (and -following the next excerptpathos): the (fake) specialness that pervades the kind of discourse LW is criticizing here (in this case the self-description of logic) gets a negative connotation, in contrast to the humble, trivial nature of everyday applications of language, which is evaluated positively. der beunruhigt uns: "Es ist doch nicht so!" --sagen wir. "Aber es muß doch so sein!" 113. "Es ist doch so ------" sage ich wieder und wieder vor mich hin. 
Es ist mir, als müßte ich das Wesen der Sache erfassen, wenn ich meinen Blick nur ganz scharf auf dies Faktum einstellen, es in den Brennpunkt rücken könnte. 180
179 97. Thinking is surrounded by a nimbus. --Its essence, logic, presents an order: namely, the a priori order of the world; that is, the order of possibilities, which the world and thinking must have in common. But this order, it seems, must be utterly simple. It is prior to all experience, must run through all experience; no empirical cloudiness or uncertainty may attach to it. --It must rather be of the purest crystal. But this crystal does not appear as an abstraction, but as something concrete, indeed, as the most concrete, as it were the hardest thing there is (Tractatus Logico-Philosophicus 5.5563). We are under the illusion that what is peculiar, profound and essential to us in our investigation resides in its trying to grasp the incomparable essence of language. That is, the order existing between the concepts of proposition, word, inference, truth, experience, and so forth. This order is a super-order between -- so to speak -- super-concepts. Whereas, in fact, if the words "language", "experience", "world" have a use, it must be as humble a one as that of the words "table", "lamp", "door".
180 110. "Language (or thinking) is something unique" -- this proves to be a superstition (not a mistake!), itself produced by grammatical illusions. And now the impressiveness retreats to these illusions, to the problems. 111. The problems arising through a misinterpretation of our forms of language have the character of depth. They are deep disquietudes; they are as deeply rooted in us as the forms of our language, and their significance is as great as the importance of our language. --Let's ask ourselves: why do we feel a grammatical joke to be deep? (And that is what the depth of philosophy is.) 112. A simile that has been absorbed into the forms of our language produces a false appearance which disquiets us. "But this isn't how it is!" -- we say. "Yet this is how it has to be!" 113. "But this is how it is ------", I say to myself over and over again. I feel as though, if only I could fix my gaze absolutely sharply on this fact and get it into focus, I could not but grasp the essence of the matter.
The topic of these paragraphs is the idea that certain phenomena are sometimes conceived of as unique, special, deep. LW suggests that these ideas are based on illusions, false appearances, misinterpretations, superstition. We will see further on that this opposition of special vs. trivial (typically implying that what is conceived of as special, actually is trivial) is a recurring theme in LW's work. Very interesting is LW's use of the term 'pathos' in this context: he calls the idea that language or thought would be something special a 'superstition' based on illusions, and says that the 'pathos' of conceiving of trivial things as special things is based on these superstitious illusions. The term pathos evokes a -typically excessive- emotional involvement and expression, 181
181 The standard translation 'impressiveness' (retained in (Wittgenstein 2009)) is wrong as it completely misses the relevant connotations of 'pathos'.
182 118. Where does this investigation get its importance from, given that it seems only to destroy everything interesting: that is, all that is great and important? (As it were, all the buildings, leaving behind only bits of stone and rubble.)
But what we are destroying are only houses of cards, and we are clearing up the ground of language on which they stood. 119. The results of philosophy are the discovery of some piece of plain nonsense and the bumps that the understanding has got by running up against the limits of language. They -- these bumps -- make us see the value of that discovery.
The first thing of interest to us in this excerpt is the dichotomy between on the one hand "all that is interesting", i.e. "everything great and important", vs. on the other hand, the trivial everyday. The context gives a de facto negative connotation to 'everything great and important' which turns out to be 'buildings made of air' [Luftgebäude], 183 and a positive evaluation of the trivial, everyday language which is the bedrock on which we stand. By implication the coarseness and materiality of everyday language is evaluated positively. Although somewhat different from the other excerpts in this series, I decided to include this excerpt here because it also shows a number of features that will come back later. First of all, the presence of this excerpt in a context in which LW explains the aims and methods of the philosophical project underlying his later work, shows that mathematics/mathematical logic is still at the core of this project and that contradictions in formal systems are for LW among the core problems philosophy needs to deal with.
120. When I talk about language (word, sentence, etc.), I must speak the language of every day. So is this language too coarse, too material, for what we want to say? Well then, how is another one to be constructed? -- And how extraordinary that we should be able to do anything at all with the one we have! [...]
183 The standard translation 'houses of cards' is simply wrong: houses of cards are fragile but they are something, 'Luftgebäude' are nothing.
Next, LW also insists on the opposition philosophy vs. mathematics/mathematical logic. Philosophy looks at the problem from a 'civilian' (non-technical) 184 point of view and technical solutions to the problem of contradictions are not interesting from a philosophical point of view. Similarly, LW is not primarily interested in a 'solution' to the problems related to contradictions but in the 'unrest' that these problems cause, i.e. not a problem internal to the game of axiomatic formalisms, but a real-life issue (cf. the notion that philosophy is a kind of therapy, etc.). In the same vein of thought, LW diagnoses the problem of contradictions as a matter of us not having understood the consequences of the rules of a game we invented ourselves: the real problem -for LW, as opposed to most philosophers of mathematics- is a matter of the pragmatics of mathematical formalism (I mean: the way axiomatic systems are used in actual practice), not a matter of their syntax or their semantics. This excerpt thus previews one of LW's more important criticisms that we will study below: that mathematicians/philosophers of mathematics have a tendency to pretend that what is merely an aspect of the rules of a game they invented is instead a deep, mysterious, awe-inspiring fact of nature (cf. section 2.(A) below). the given is trivial (PhU §129) Anticipating the analyses in sections 2.1 through 2.3 below, without any claim to systematicity (let alone completeness), and for the purely practical purpose of guiding the readers' attention 184 See also BGM5 §2 / MS-126,30-32 (cf. sections 1.2.2(B) and 1.1.1(H) above), for the similar notion "im Zivil". 185 129.
The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice some-thing a because it is always before one's eyes.) The real foundations of their inquiry do not strike people at all. Unless that fact has at some time struck them. --And this means: we fail to be struck by what, once seen, is most striking and most powerful. towards the aspects I want to focus on below, 186 I now present a summary overview of some of the terms, concepts and argumentative patterns that will be omnipresent in the excerpts I will analyze below. • First, there are numerous occurrences of words that denote the cluster of concepts around mere appearance, illusion and delusion quite directly, for instance: Täuschung (PhU §80, § §96-97, §110; Ms-117, 105-110 = BGM2 §19-22, see section 2.1(A) below; MS 136 110b-111a, see section 2.0.2(A) above); vortäuschen (PhU §251); vorspiegelen (PhU §253); Illusion (PhU §311, §362; BPP2 §268); 187 mystification (Blue Book p. 3); PhU §270: "[...] daß die Annahme dieses Irrtums nur ein Schein war"); Einbildung (Ms-121,27r-28v, see section 2.2(C)) • In a number of passages, LW speaks of things that appear to be visible but are either not solid objects, or not there at all: Nimbus (PhU §97); Dunstkreis (PhU §117); Chimären (PhU §94); Luftgebäude (PhU §118); Atmosphäre von Gedankennebeln (Ms-113,93r-v, see section 2.2(B) below). • The notion of illusion can also take the shape of misunderstandings, misrepresentation, prejudice or superstition: Mißverständniss/Mißverstehen (PhU §90-93; PhU §100; PhU §120; PhU §132); Mißdeutung Ms-113,93r-v, see section 2.2(B) below); Vorurteil (PhU §108), Aberglaube/ abergläubisch (PhU §110; Ms-118,116r-116v, see section 2.3(B) below; Ms-125,66r-68r, see section 2.3(D) below). • Quite frequently, LW speaks of pretense in technical contexts as well as in everyday conversation (cf. section 2.0.1(C) above): 188 "Die Erklärung des Dedekindschen Schnittes tut so als wäre sie anschaulich" (Ms-106,245-255, see section 2.2(A) below); cf. also the use of "erscheinen lassen" ("make appear") and the adverb "angeblich" (allegedly, supposedly) in Ms-117, 105-110 (see section 2.1(A) below). LW's somewhat idiosyncratic but at the same time perfectly apt use of the term 'prude' in Ms-124,71-74 (section 2.3(C) for mathematical proofs in which one adheres to the strictest formal criteria, but allows complete nonsense fits in as well. • Sometimes, LW speaks of fiction: Fiktion (Ms-126,133-138, section 2.1(C) below); "einen der Mengenlehre zu Grunde liegenden fiktiven Symbolismus" and "In der Mathematik 186 I could have presented this material after my analyses, as part of my conclusions, but it is more useful here, so I can refer back to it. 187 Cf. also the title Insight and Illusion: Themes in the Philosophy of Wittgenstein (Hacker 1986)). 188 Methodological remark: some of these terms also occur quite often in other functions as well (e.g. terms related to 'pretending' occur in a large number of contexts in which LW analyzes language games that involve the concept of 'pretending': pretending to be in pain, pretending to read, pretending to play a game, pretending to be (un)conscious or unwell, etc. (e.g. in the context of arguments regarding the privateness of feelings); for typical examples, see e.g. Z § §568-571 or PhPF, xi, § §352-364. It is therefore impossible to automatize this kind of work. A close reading of the context remains necessary. 
können || dürfen wir alles fingieren nur nicht einen Teil unseres Kalküls" (Ms-113,93r-v, setction 2.2(B) below); "Der gebräuchliche Ausdruck fingiert einen Vorgang eine Methode des Ordnens" (Ms-117, 105-110, section 2.1(A) below). Cf. also the notion of a painting of a thing vs. a real thing: "wie der gemalte Fels die gemalte Burg trägt" (Ms-124,71-74, section 2.3(C)). • In some cases, the notion of 'appearance' indicates things that look like they are operational or functional, but are not, as in sham or merely ornamental parts in architecture (PhU §217), or machine building (PhU §132, PhU § §270-271); cf. section 1.2.1(B)). • In still other cases, we observe the notion of trickery and tricks: Taschenspielerkunststück (PhU §308), Hokus Pokus (Ms-117, 105-110 = BGM2 §19-22, cf. section 2.1(A) below); Kunststückchen Ms-118,116r-116v = BGM 1, Anhang III, §18-19, see section 2.3(B) below); I am not sure whether the notion of 'mathematical alchemy' (Ms-126,82-83) fits in with this category. • LW tends to criticize his various targets for being fake, in the sense of overstating the importance, interestingness of ultimately trivial things; what appears to irk him most is the unwarranted theatrical display of emotion: Pathos ( §110);189 "prahlerisch" (Ms-117, 105-110 = BGM2 §19-22, see section 2.1(A) below); "wovor einem schwindlig werden kann" (Ms-121,60r-64r = BGM II, § §40ff., see section 2.1(B) below); "Und dann wundert man sich z.B. darüber, daß...!" (Ms-106,245-255, see section 2.2(A) below). LW's mockery of mathematicians "fear and veneration" vis-à-vis contradictions fits in (Ms-118,116r-116v = BGM 1, Anhang III, §18-19, see section 2.3(B)), and so does his mockery of Hilbert's "Cantor's paradise" (MS-126, 55-56, see section 1.3(C) above). Another rhetorical device that LW's uses is the mis-en-scène of a panicky interlocutor that displays exactly the theatrical emotions he objects to (see sections 2.3(A), 2.3(D), 2.3(F)). Besides the above terms that express the 'fakeness vs. authenticity' trope as such, it may be worthwhile to briefly also mention the following patterns, which occur frequently in conjunction with occurrences of that trope: • a persistent dichotomy of the special vs. the trivial: discourse invoking the special (i.e. the non-everyday, interestingness, importance, mystery, depth, sublime, etc.) is typically negatively evaluated, whereas the trivial and everyday is typically positively evaluated: Sublimes (PhU §89), Tiefe (PhU §89, 110, 111; Ms-126,133-138, see section 2.1(C)), Merkwürdiges/Seltsames vs. Alltäglich (PhU §93); Einzigartiges (PhU §95, §110); das unvergleichliche Wesen, Kristall vs. niedrig (PhU §97); metaphysisch vs. alltäglich (PhU §116); Ein "führendes Problem der mathematischen Logik" vs. "ein Problem der Mathematik, wie jedes andere" (PhU §124); "interessant & merkwürdig", "die Geheimnisse der mathematischen Welt" (Ms-121,60r, see section 2.2(B) below; LW's mockery of Hilbert's pathos about 'Cantor's paradise' (MS-126, 55-56, see section 1.3(C) above) fits in with this trope; • a recurring evaluation of the objects of LW's criticism as ridiculous, comical or childish: "Scherzfrage" (Ms-121,27r-28v = BGM2 §23, see section 2.2(C) below); "komisch" (Ms-118,116r-116v = BGM 1, Anhang III, §18-19, see section 2.3(B) below);190 cf. also the recurrent references to childrens' games ("Daumenfangen" Ms-118, 111v-, see section 1.2.1(C) above and 2.1(A) below; "Fingerhut-Verstecken" (Ms-126,133-138, see section 2.1(C) below. 
This trope need not merely be a conventional way of being dismissive of an opponents opinion: in many cases, LW's use of this trope appears to be purposeful, in that it highlights features of the objectionable discourse that -from LW's point of view-actually are childish and/or absurd. • an emphasis on the specificity of philosophy (or at least LW's own brand of philosophy) as opposed to mathematics and logic: as pointed out elsewhere (cf. section 0.2(D) above and section 3.2.2 below), LW is aware of the fact that his point of view is different from the various stances he interacts with; remarks that refer to this contrast are recurrent features throughout LW's writings. In what follows I will try to show how the above concepts are omnipresent in LW's PhilMath, and in key functions, at that. Before embarking on the analyses that will make up the bulk of part 2 of this study, I would like to briefly summarize the conceptual framework I tried to sketch in the above and point out how the concept of fakeness shapes a few modes of thought / argumentative patterns that shape the critical aspect of LW's PhilMath: -The key concept in LW's criticism is (what I call) epistemic fakeness: 191 some utterances look/sound like they make sense but actually don't, i.e. 'fake' concepts do not really have a function within a real-life 'everyday' practice and are therefore strictly speaking nonsense / meaningless; this is where we ended up at the end of part 1 (cf. section 1.4), but throughout section 2.0, we have pointed out that this notion of fakeness has many more ramifications than we could expect at the point from which we started out. In some cases, LW calls out epistemic fiction, i.e. discourse about math that makes use of concepts that do not correspond to anything in the actual mathematical technique the discourse refers to. -When there appears to be an active effort to create such an illusion, LW calls out the epistemic pretense, often employing a tone of moral indignation. We'll see that many of the critical remarks we'll read below take this shape and use a vocabulary that evokes pretense, illusion, etc. We have seen that this concern is echoed by (rooted in?) a strong dislike for theatricality and pretense in a person's, including his own, behavior. -Particularly infuriating is (what I call) epistemic bad faith,192 i.e. discourse that makes the effort to satisfy all the formal criteria expected from it, but that is still nonsense, in the sense that it doesn't have a real-life function in a real-life practice. This is exactly what LW blames 'prudish proofs' for: attaching great importance to the syntax of the formalism but allowing complete nonsense in the contents that are expressed by the proof. A similar objection could be made towards Gödel's famous endeavor, when it pretends to be normal arithmetic, whereas the apparatus it uses is clearly designed to do only that particular trick. -There is an aesthetic aspect to epistemic fakeness, that I call epistemic bad taste (or: epistemic kitsch): we have seen above (section 2.0.1(A)) that 'style' was important in LW's ethics/aesthetics and LW's dislike for vanity, ostentatiousness, theatricality and inauthentic behavior in general (section 2.0.1(C)) has an epistemic counterpart in his objections to discourse that presents trivial things as interesting, special, mysterious, awe-inspiring, ... 193 We will encounter a few passages in which LW appears to be irked by this kind of kitsch in discourse about mathematics. 
194 What is hard to convey -hence the volume of the material referred to in this section-is the lack of separation between the existential/cultural dimension and the epistemic/technical dimension of LW's philosophical concerns: on the one hand, LW's 'technical' philosophical remarks are permeated and motivated by very general, non-technical concerns, incl. a certain existential urgency, but on the other hand, it should be understood that these remarks are still intended to be technical contributions to whatever technical issue is at hand. 195 I hope that my analyses here below will make this clear. 193 As pointed out above, LW quite often attaches a positive connotation to triviality and everydayness and a negative one to the special, interesting, etc. I am not claiming that LW is against genuine emotion when faced by phenomena that we experience as 'deep' per se, only that he is particularly annoyed at discourse that tries to conjure up such emotions in a 'cheap' way by misrepresenting things that are ultimately trivial. For the fact that LW does value genuine enthusiasm, see the fact that he speaks of Ramsey's 'ugly mind' in terms of the fact (?) that Ramsey is unable to be enthusiatstic about philosophical matters and that he reduces any philosophical issue very quickly from the paradoxical to the trivial: Ramseys (Ms-183,6-8, d.d. 19260427) 194 Let me add here one more -very funny-quote from LW, which didn't find a proper place in the above, in which LW writes -without any direct context-that mathematicians always like some kind of "haut-goût" (literally: "high taste", but in German and in English referring to the slight taste of decay that in certain circles is/was appreciated for game meat) about there propositions, which -says LW-in this case as always is the result of rot/putrefaction [Ms-126,105, d.d. 19421125: "Die Mathematiker lieben einen haut-goût an ihren Sätzen, der, wie überall, von der Fäulnis herrührt."]. 195 In the preface of his classic The Greek Particles (one of my main tools in a previous life), classical scholar J.D. Denniston writes: "The reader should be able to bathe in examples. If I have selected and arranged mine reasonably well, the mere process of semi-quiescent immersion may help him as much as hours of anxious thought" ( (Denniston 1954), p. vi)." I believe this is equally true for any endeavor that requires understanding a complex, multifaceted corpus. Wittgenstein's critical remarks on math I: diagonal stuff (hocus pocus, cheap thrills, prudish proofs) The issue of ordering numbers and the use of diagonal methods (the contributions of Dedekind and Cantor immediately come to mind, but LW also quotes other mathematicians in this context) are a recurring topic in LW's remarks on math. In this section, I discuss three passages in which LW focuses on diagonal techniques and the ways in which these are used in the context of various proofs. (A) hocus pocus: Ms-117, 105-110 (= BGM2 §19-22), not dated, but after 19370911 and before 19380627 Throughout this extended passage, 196 LW isolates the diagonal technique [Kalkül], as a perfectly normal mathematical technique, from its philosophical use. There is for LW no problem with the mathematical technique itself. But he goes to great lengths to point out that this technique is -at least potentially-independent of the way it is used for philosophical (theoretical? ideological?) 
purposes: one can use it to find a number that is different from all other numbers in a sequence; as such, one can teach it even to children. 197 LW would have no objections to any of this. 196 The excerpt looks like a more or less well-defined unit. The demarcation of the beginning is not 100% clear, but LW does seem to leave some extra space at the top of the page, before the first words of the excerpt and also stops writing "[Ansätze]" as a heading, which he did from page 97 toll 104 of the manuscript). Still, what precedes is also of interest to our excerpt and the editors' choice to print the remarks in the order in which they occur in MS 117 is a good one. The excerpt clearly ends with the "hocus pocus" sentence (after that LW uses the notebook for drafting a couple prefaces, so that is a clear ending. However, the way this excerpt is fitted in with a larger unit in the standard edition is a good illustration of the editorial practices that gave rise to the standard editions of LW's work: in the standard edition, these remarks (BGM2, §20-22), in which LW calls diagonal procedures "hocus pocus", are followed by ( §23) by a remark on the "Krankheit einer Zeit" and how the bad health of our philosophical problems can only be cured by a change in our way of life (see section 2.2(C)). This presentation has no roots in LW's own work, and the "Sickness of our time" and the "Hocus Pocus" remarks are from altogether different manuscripts. 197 For LW, the mistake [Fehler] begins when one tries to apply the notion of "ordering things in a series" [Reihe] to the cardinal numbers at large. LW denies categorically that the question "can one order the set R in a series?" has any clear sense at all, 198 and goes on to deny that it is obvious that the diagonal proof proves that "ordering in a series" does not work in this case, arguing instead that the problem can also be viewed as an issue with the concepts involved. 199 So, on pp. 61-64, LW is not contradicting the idea that "one cannot order R in a series", nor its opposite: he is saying that the question has no clear sense (cf. what he says about "north of the pole", Fermat's conjecture, etc.; see sections 1.1.2(B) and 1.2.1(B)). Then, immediately after this, follows a passage that displays a number of aesthetical and ethical terms, 200 LW makes it very explicit that his most important target is when people start to pretend that their preferred theoretical claims follow 'naturally' from this or that technique: 201 Das Gefährliche, Täuschende, der Fassung "Man kann die reellen Zahlen nicht in eine Reihe ordnen" oder gar "Die Menge … ist nicht abzählbar" liegt darin, daß sie das was eine Begriffsbestimmung Begriffsbildung ist als eine Naturtatsache erscheinen lassen. 198 Der Fehler beginnt damit daß man sagt die Kardinalzahlen ließen sich in eine Reihe ordnen. Welchen Begriff hat man denn von diesem Ordnen? Ja man hat natürlich einen von einer endlichen Reihe, aber das gibt uns ja hier höchstens eine vage Idee einen Leitstern für die Bildung eines Begriffs.) Der Begriff selbst ist ja von dieser & einigen andern Reihen abstrahiert; oder: der Ausdruck bezeichnet eine gewisse Analogie von Fällen & man kann ihn etwa dazu benützen um ein Gebiet, von dem man reden will vorläufig abzugrenzen. Damit ist aber nicht gesagt, daß die Frage einen klaren Sinn hat: "Ist die Menge R. in eine Reihe zu ordnen?" Denn diese Frage bedeutet nun etwa: Kann man mit diesen Gebilden etwas tun was dem Ordnen der Kardinalzahlen in eine Reihe entspricht. 
Wenn man also fragt: "Kann man die Reellen Zahlen in eine Reihe ordnen?" So könnte die gewissenhafte Antwort sein: "Ich kann mir vorläufig gar nichts Genaues darunter vorstellen". -"Aber Du kannst doch z.B. die Wurzeln & die algebraischen Zahlen in eine Reihe ordnen; also verstehst Du doch den Ausdruck!" -Richtiger gesagt ich habe hier gewisse analoge Gebilde, die ich mit dem gemeinsamen Namen "Reihen" benenne. Aber ich habe noch keine sichere Brücke von diesen Fällen zu dem 'aller reellen Zahlen'. Ich habe auch keine allgemeine Methode um zu versuchen ob sich die oder die Menge 'in eine Reihe ordnen läßt'.
199 Nun zeigt man mir das Diagonalverfahren & sagt: "hier hast Du nun den Beweis, daß dieses Ordnen hier nicht geht". Aber ich kann antworten: "Ich weiß -wie gesagt -nicht, was es ist, was hier nicht geht." Wohl aber sehe ich: Du willst einen Unterschied zeigen in der Verwendung von "Wurzel", "algebraische Zahl", etc. einerseits & "reelle Zahl" anderseits. Und zwar etwa so: Die Wurzeln nennen wir "reelle Zahlen" & die Diagonalzahl, die aus den Wurzeln gebildet ist auch. Und ähnlich mit allen Reihen reeller Zahlen. Daher hat es keinen Sinn von einer "Reihe aller reellen Zahlen" zu reden, weil man ja auch die Diagonalzahl der || jeder Reihe eine "reelle Zahl" nennt. -Wäre das nicht etwas ähnlich, wie wenn man gewöhnlich jede Reihe von Büchern selbst ein Buch nennte & nun sagte: "Es hat keinen Sinn von 'der Reihe aller Bücher' zu reden, da jede || diese Reihe selbst ein Buch ist || wäre."
200 The underlining of terms with an ethical or aesthetical connotation is mine.
201 19. The dangerous, deceptive thing about the idea: "The real numbers cannot be arranged in a series", or again "The set... is not denumerable" is that it makes the determination of a concept--concept formation--look like a fact of nature.
This element will become important later on (I believe it is one of the most important points to be gained from LW's work), so let's emphasize it here: what LW objects to as 'dangerous, misleading' is that it makes what actually is a case of concept-formation look like a fact of nature. LW then reformulates the conclusion one may want to draw from Cantor's technique: Modestly, the proposition would sound as follows: "If one calls something 'a series of real numbers', then its expansion via the diagonal procedure could also be called a 'real number', more specifically one that would be different from all the other members of the series". 202 This would be a conclusion/interpretation that LW could agree with. Then, LW makes explicit what he believes is wrong with the standard interpretation of Cantor's diagonal technique (i.e. the interpretation in terms of the non-denumerability of the reals): Unser Verdacht sollte immer rege sein, wenn ein Beweis mehr beweist, als seine Mittel ihm erlauben. Man könnte so etwas einen 'prahlerischen Beweis' nennen. Der gebräuchliche Ausdruck fingiert einen Vorgang eine Methode des Ordnens die hier zwar anwendbar ist aber nicht zum Ziele führt wegen der Zahl der Gegenstände die größer ist als selbst die der || aller Kardinalzahlen. 203 One should always be suspicious when a proof proves more than its means allow for. Such a proof LW calls an 'ostentatious' proof [einen 'prahlerischen Beweis']. Thus, the usual way the conclusion drawn from the diagonal technique is formulated creates the fiction of "a method of ordering that is applicable in this case, but fails to reach its goal, because of the number of objects that is larger than even the number of all cardinal numbers".
202 20. The following sentence sounds sober: "If something is called a series of real numbers, then the expansion given by the diagonal procedure is also called a 'real number', and is moreover said to be different from all members of the series". NB: this standard translation is wrong: (1) 'sober', while emphasizing the contrast with the concept of 'ostentatious' in what follows, misses the ethical connotation of "bescheiden"; (2) the translation not only distorts the grammar of the German text (which need not be a problem), but seems to be based on a misunderstanding of the grammar of the original and thus also distorts the point of the original: the German text says that, modestly, the proposition (i.e. the conclusion that could follow from Cantor's diagonal argument) would sound as follows (not that the following sentence sounds sober); the English translation may be read as suggesting that the sentence 'sounds sober', but perhaps is not actually sober (which is not meant at all), but -more importantly- completely misses the idea that what follows is a more modest reformulation of what precedes.
203 21. Our suspicion ought always to be aroused when a proof proves more than its means allow it. Something of this sort might be called 'a puffed-up proof'. 22. The usual expression creates the fiction of a procedure, a method of ordering which, though applicable here, nevertheless fails to reach its goal because of the number of objects involved, which is greater even than the number of all cardinal numbers.
So, LW believes that the standard interpretation is based on the idea that it is in principle possible to order the real numbers but that we never get there in practice because of the number of objects to be ordered. And that idea he calls a fiction: it is clear how we can go about ordering an indefinite amount of integers, and whatever the number of integers, the same method (or methods) will continue to work; but these methods are clearly and obviously not going to work when we are asked to order the reals as a whole; so, from LW's pragmatic point of view, it is literally unclear what 'ordering' could even mean in the case of the reals. LW then explains what he means, again by contrasting the usual conclusion with an alternative formulation of his own: Wenn gesagt würde: "Die Überlegung über das Diagonalverfahren zeigt Euch, daß der Begriff 'reelle Zahl' viel weniger Analogie mit dem Begriff Kardinalzahl hat, als man, durch gewisse Analogien verführt, zu glauben geneigt ist" so hätte das einen guten & ehrlichen Sinn. Es geschieht aber gerade das Gegenteil: indem die 'Menge' der reellen Zahlen angeblich der Größe nach mit der der Kardinalzahlen verglichen wird. Die Artverschiedenheit der beiden Konzeptionen wird durch eine schiefe Ausdrucksweise als Verschiedenheit der Ausdehnung dargestellt. 204 LW's reformulation is rather drastic: reflection on the diagonal procedure shows you that the concept of 'real number' is much less analogous to the concept of 'cardinal number' than one would be inclined to believe, tempted by certain analogies; this account would have a good and honest sense. However, in reality, exactly the opposite happens, in that the 'set' (scare quotes required) of real numbers -allegedly [angeblich]- is compared qua magnitude with the set of cardinal numbers. So, because of a skewed way of expressing oneself, what is actually a difference in kind between conceptions is falsely represented as a difference in extension.
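Before turning to the end of the passage, it may help to have the bare technique in front of us; the notation is mine, not LW's or his translators'. Given any list of infinite decimal expansions, the diagonal recipe writes down an expansion that differs from the n-th entry in the n-th place:

\[
\begin{aligned}
r_1 &= 0.\,d_{11}\,d_{12}\,d_{13}\ldots\\
r_2 &= 0.\,d_{21}\,d_{22}\,d_{23}\ldots\\
r_3 &= 0.\,d_{31}\,d_{32}\,d_{33}\ldots\\
d   &= 0.\,e_{1}\,e_{2}\,e_{3}\ldots,
\qquad
e_{n} =
\begin{cases}
5 & \text{if } d_{nn} \neq 5,\\
6 & \text{if } d_{nn} = 5.
\end{cases}
\end{aligned}
\]

By construction the new expansion differs from every entry on the list at some decimal place. This much is the kind of calculation that, as LW says, one could teach to children; everything he objects to lies in the prose gloss that is then attached to it.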
The passage ends as follows: "Ich glaube & hoffe eine künftige Generation wird über diesen Hokus Pokus lachen". 205 The prediction that future generations will laugh about Cantor's hocus pocus has not (not yet?) come true: Cantor's contributions appear to be firmly entrenched in the mainstream mathematical canon. For the recurring theme of ridiculousness in LW's evaluation of mathematical discourses, see also LW's assessment that most mathematicians' fear of contradictions is ridiculous (Ms-118,116r; see section 2.3(B) below); cf. also LW
204 If it were said: "Consideration of the diagonal procedure shews you that the concept 'real number' has much less analogy with the concept 'cardinal number' than we, being misled by certain analogies, are inclined to believe", that would have a good and honest sense. But just the opposite happens: one pretends to compare the 'set' of real numbers in magnitude with that of cardinal numbers. The difference in kind between the two conceptions is represented, by a skew form of expression, as difference of extension.
205 I believe, and hope, that a future generation will laugh at this hocus pocus.
So: LW does not object to Cantor's diagonal technique as such: as a technique to find a number that is different from a list of other numbers, it is perfectly fine. LW does object to its usual prose interpretation as a proof for the non-denumerability of the reals: for LW, the problem with this interpretation is that it pretends that the notion of 'ordering' would be applicable to the irrationals in the same way it is applied to the rationals, that the difference is a merely quantitative matter, whereas this is in practice -obviously- not the case: it is not at all clear what 'to order' would even mean in this context. Similarly, the interpretation pretends that nothing happened when the notion of 'number' (as in counting) is extended to the 'set' (scare quotes required) of real numbers, as if this extension is not the result of the decision to coin a new concept, but a fact of nature. In a very typical fashion, LW formulates his objections to this apparently 'technical' matter in clearly ethical/aesthetical terms (cf. the words that I underlined in the above quotations of the text): LW criticizes the standard interpretation of Cantor's technique because it is misleading and dangerous, lacks modesty, is ostentatious, contains fictitious aspects, lacks honesty, is ridiculous hocus pocus.
(B) cheap thrills: Ms-121,60r-64r (= BGM II, §§40ff.), d.d. 19381225
The following passage, written on Christmas day 1938, also deals with the problem of ordering numbers in a series and especially the idea that "one cannot order all fractions in a series": Gegenstände selbst zu betreffen scheint. Man möchte von ihm etwa sagen: er führe uns in die Geheimnisse der mathematischen Welt ein. Es ist dieser Aspekt vor dem || welchem ich warnen will. 207 From the outset, LW tells us quite clearly what he finds suspicious and wants to warn us about: the problem is that the sentence "one cannot order all fractions in a series" sounds interesting and mysterious, quite different from a formula out of a differential calculus book, as if it concerned "the natural history of mathematical objects".
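It may also help to keep a small worked instance of the relevant technique in view (the example is mine, not LW's): between any two fractions there is always a further fraction, for instance their mediant, and the step can be repeated indefinitely, so that the phrase 'the next larger fraction' gets no foothold in this calculus.

\[
\frac{a}{b} < \frac{c}{d}\ \ (b, d > 0)
\;\Longrightarrow\;
\frac{a}{b} < \frac{a+c}{b+d} < \frac{c}{d},
\qquad\text{for instance}\quad
\frac{1}{2} < \frac{3}{5} < \frac{2}{3}.
\]

The calculation itself is as trivial as LW insists; what he questions in the paragraphs discussed below is only the sensational picture that gets draped over it.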
A few paragraphs further on, LW argues that the image of trying to stuff ever more objects into the same space (implicit in the idea of ordering all fractions) is sensational (it "makes our heads spin" [wovor einem schwindlig werden kann]) but also inappropriate, in that the mathematical technique merely shows something trivial: 208 the issue is not that the technique goes on and on and on, the issue is that the idea of an end is simply not part of this technique: it does not make sense to speak of the "next larger fraction", the word "end" is simply not applicable (it is meaningless/nonsensical) in this context. 209

Then, LW appears to identify the root of the issue in terms of problems with the expansion of a practice: when one invents a new technique or a new framework, one needs new images, new means of expression to describe it; it would be absurd to want to describe the new scheme, the new kind of framework by means of the old expressions. 210 So, when one says "There is no next larger fraction, but there is a next larger cardinal number", it is not clear what the function of this sentence actually is.

207 40. "Fractions cannot be arranged in an order of magnitude."--First and foremost, this sounds extremely interesting and remarkable. It sounds interesting in a quite different way from, say, a proposition of the differential calculus. The difference, I think, resides in the fact that such a proposition is easily associated with an application to physics, whereas this proposition belongs simply and solely to mathematics, seems to concern as it were the natural history of mathematical objects themselves. One would like to say of it e.g.: it introduces us to the mysteries of the mathematical world. This is the aspect against which I want to give a warning.

208 Wenn ich mir bei dem Satz, die Brüche können nicht ihrer Größe nach in eine Reihe geordnet werden, das Bild einer unendlichen Reihe von Dingen mache, & zwischen je zwei Nachbarbäumen neue Bäume in die Höhe schießen & nun wieder zwischen jedem Baum & seinem Nachbar neue Bäume & so fort ohne Ende, so haben wir hier (sicher) etwas, wovor einem schwindlig werden kann. Sehen wir aber, daß dieses Bild zwar || wohl sensationell, aber ganz unzutreffend ist, daß wir uns nicht von den Worten "Reihe", "ordnen", "existieren" & anderen fangen lassen dürfen, so werden wir auf eine Darstellung des Sachverhalts zurückgehen, in der alles wieder trivial & gewöhnlich aussieht. so werden wir (wieder) auf die (Darstellung der) Technik des Bruchrechnens zurückgreifen an der nun nichts Seltsames mehr ist.

209 Daß wir eine Technik erfinden || bilden, in der der Ausdruck "der nächst größere Bruch" keinen Sinn hat, daß wir ihm keinen Sinn gegeben haben, ist nichts Erstaunliches. Wenn wir eine Technik des fortgesetzten Interpolierens von Brüchen anwenden, so werden wir keinen Bruch den "nächst größeren" nennen wollen. Von einer Technik zu sagen, sie sei unbegrenzt, heißt nicht, sie laufe ohne aufzuhören weiter -wachse ins Ungemessene; sondern, es fehle ihr die Institution des Endes, sie sei nicht abgeschlossen. Wie man () von einem Satz sagen könnte, es mangle ihm der Abschluß, wenn der Schlußpunkt fehlt oder von einem Spielfeld es sei nicht begrenzt, wenn ihm die Regeln des Spiels keine gezogene Grenze vorschreiben.,

210 Eine neue Rechentechnik soll uns ja eben ein neues Bild liefern, eine neue Ausdrucksweise; & wir können nichts Absurderes tun, als dieses neue Schema, diese neue Art von Gerüst, vermittels der alten Ausdrücke beschreiben zu wollen.
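The 'technique of continued interpolation of fractions' that remark 209 appeals to can be made concrete with a minimal sketch; the Python code and the two starting fractions below are my own illustration, not anything in LW's text. All it exhibits is the trivial fact that between any two distinct fractions another fraction can always be computed, which is exactly why the phrase "the next larger fraction" is given no use within this technique.

```python
from fractions import Fraction

# Continued interpolation: the arithmetic mean of two rationals is again a
# rational strictly between them, so the interpolation never runs out.
def between(a: Fraction, b: Fraction) -> Fraction:
    return (a + b) / 2

x, y = Fraction(1, 3), Fraction(1, 2)
for _ in range(5):
    m = between(x, y)
    print(x, "<", m, "<", y)
    y = m  # keep interpolating into the shrinking gap
```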
It is like comparing two board games: certain moves exist in checkers, but not in chess. Similarly, there is something we call "to construct the next larger cardinal number", but there is nothing we would call "to construct the next larger fraction". 211 LW applies exactly the same argument we already encountered in section 1.2.1(B), with respect to nonsensical pseudo-expressions ("north of the pole") in general, and in section 1.1.2(B) with respect to conjectures: the expression "the next larger fraction" has no actual function in an actual practice and is therefore meaningless. LW's point appears to be the following: the expression "constructing the next bigger number" works in the case of integers, but is simply not applicable to fractions; similarly, applying the image of a line to the order of numbers does not always work. Trying to apply the image of a line to represent the order of 'all fractions' is a sensational move, it makes people's heads spin, but this is not an appropriate representation of what's going on: that the image does not work well in this context is a trivial fact. So, the main conceptual opposition he operates with here is between (1) sensational/vertiginous vs. (2) trivial, and he clearly values (1) negatively. Again, LW has no objection against any technical aspect of the mathematical technique, but objects to the way it is represented in theoretical terms, more precisely to the way the concept of 'infinite' is used in a context in which the concept of an 'end' is simply not applicable; similarly, the concept 'next larger' is not impossible to apply merely for reasons of finite time, but because the idea of 'next larger' is simply meaningless in this context.

211 Was ist die Funktion eines solchen Satzes wie: "Es gibt zu einem Bruch nicht einen nächst größeren Bruch, aber zu einer Kardinalzahl eine nächst größere"? Es ist doch gleichsam ein Satz, der zwei Spiele vergleicht; [wie: im Damespiel gibt es ein Überspringen eines Steines, aber nicht im Schachspiel.] Wir nennen etwas "die nächst größere Kardinalzahl konstruieren" aber nichts "den nächst größeren Bruch konstruieren".

(C) prude proofs: Ms-126,133-138, d.d. 19411215-19411217

This excerpt is part of what looks like a nice collection of one-liners, which, however, lacks the cohesiveness and cogent development displayed by the other excerpts studied here. The reason why I do include it here is because it takes up a few themes that were highlighted in the previous excerpts and adds some nice new details to the picture, most notably the wonderful notion of a "prudish proof". 212 After a first try at articulating this idea on December 15 1941, 213 LW writes the following on December 16:

16.12. Eine Beweisführung ist prüde: wenn man ängstlich die geringste logische Zweideutigkeit vermeidet, aber groben Unsinn duldet.

This notion of prudishness expresses one of the core gestures of LW's critical attitude towards the mathematical discourses he usually targets (Dedekind's and Cantor's diagonal proofs, etc.) quite well: even if a proof is technically correct (even according to the strictest formal requirements), what mathematicians say about their proof, how they use the proof within a larger context, still can be nonsensical from a philosophical point of view. For the purposes of the present study, it would take too long an explanation and too much context to interpret the illustration that immediately follows, so I will skip it here.
214 However, the next day, LW offers some material that is up our alley: LW starts from a loose comparison between looking for mistakes in arguments and a children's game; 215 I am not entirely sure what LW means with this analogy, but perhaps he wants to highlight the childishness of trying to be puritanical about logical syntax (no doubt, "while allowing nonsense in the contents of one's argument" is understood; see above). LW then continues (without much connection, except perhaps the purely associative link between the game of Fingerhut-Verstecken and children in general) by asking what in Dedekind's proposition would be unintelligible to a 10-year-old, suggesting that that proof is easier than the calculations the child is supposed to master at that age; if someone would reply that the child would not understand the deeper contents of the proposition, LW would reply: "How does this law/proposition acquire such deeper contents?". We encounter exactly the same move in a similar context at the beginning of section 2.1(A) above. Here, as elsewhere, LW attacks this idea of depth by pointing out that all that is essential to the mathematical technique as such, can be understood and applied by a child.

212 I can't seem to find this wonderful idea anywhere in the standard editions.

213 Eine Beweisführung ist prüde, wenn die geringste logische || , wenn die lässigste logische Zweideutigkeit ängstlich vermieden wird, grober Unsinn aber geduldet. || vermieden wird, & grober Unsinn geduldet. Die Hauptunklarheit in der Mathematik ist die Unklarheit darüber, was entdeckt & was bestimmt wird.

214 Wie, wenn ich sagte, die allgemeine Theorie der reellen Zahlen bereitet eine Phraseologie vor, die dann im besondern Fall von großem Nutzen ist. -Aber indem || wenn sie diese Phraseologie vorbereitet ist sie entweder ein selbständiges Stück Mathematik, oder sie kann die reellen Zahlen in vager Allgemeinheit durch Beispiele behandeln. Dabei würde natürlich die Exaktheit nichts einbüßen, denn die Anwendung dieser allgemeinen Fingerzeige auf jeden besonderen Fall würde immer wieder vollkommene Bestimmtheit herstellen.

215 The game goes as follows. Someone hides a thimble in the room and sits down. The players go and look for the object with their hands behind their backs. As soon as someone has found it, (s)he sits down. The game is over when everyone sits down. See http://www.spieledatenbank.de/spiele/1181.html.

LW then starts a new paragraph, again only loosely connected to the previous one, on the status of irrationals as numbers. 216 LW first discusses the fact that irrational numbers do not really have a proper numeral (Zahlzeichen, literally "number sign"), a fact not mentioned in the handbook he apparently happens to be reading. Various familiar themes come back:

a) LW's critique in terms of the fictionality of some mathematical claims (cf. section 2.4.2(B) below) applies in this case to the idea that an irrational number would have a sign, but "an infinitely long sign". To LW, calling the rule for the development of the decimal expansion a 'number sign' would make some sense, but the notion of 'an infinite sign', readily accepted by most (?) mathematicians since Cantor, not at all. Of course, from LW's point of view, meaning depends on actual use, and in that case, the signs with which one actually calculates are not infinite. Hence, the 'fictitiousness' of the so-called 'infinite signs'.
b) LW's very mentioning the issue is an avatar of his idea that notation is a central aspect of mathematics, as opposed to mere packaging (cf. 1.1.3(C)); in this case, the lack of a proper sign is -according to LW-indicative of an "infinitely fundamental" difference.

c) Again (cf. the 'good and honest' alternative in section 2.1(A)), LW reinterprets Cantor's diagonal argument in a decidedly un-Cantor-like way: in a certain sense, Cantor's argument shows that irrational numbers can't have a proper sign.

d) LW then articulates his criticism in terms of a fundamental disagreement with the mainstream vision of the continuum: the image of the number line is a perfectly natural one up to a certain point, but does not function well as a general theory of the real numbers.

The latter point deserves a more thorough reading on our part. Just like the treatment he gave Cantor's arguments in other contexts that we encountered in paragraphs (A) and (B) here above, LW gives Cantor's diagonal argument a severely deflationary interpretation, restoring -as it were-its triviality. LW denies that the conclusions that are usually drawn from the argument are valid, and points at a much more simple and 'modest' conclusion: apparently, irrational numbers are fundamentally different from rational numbers, and apparently, the line does not function very well as a representation of the theory of the real numbers in general. So, rather than accepting a dubious expansion of the notion of 'line' for it to apply to the reals in general, he prefers to simply observe that the analogy between the notion of 'line' as we know it and the theory of the reals in general breaks down at a certain point. In other words: LW's account implies that the conceptual problems with the continuum are due to the fact that mathematicians have forced the image of a line onto aspects of number theory that it obviously does not apply to. This brings us to another fundamental aspect of LW's general attitude towards mathematics: although he doesn't attack the unity of mathematics explicitly in this case, his readiness to abandon the image underlying the idea of a continuum shows that he does not share the monism that is presupposed in mainstream PhilMath, as well as mainstream mathematical discourse and practice. As opposed to the 'totalitarian' tendencies of mainstream math (I mean: the pervasive desire to represent all of math as a single system, whether this vision comes naturally or has to be forced), LW readily abandons the concept of the number line, as soon as it appears to no longer apply as easily.

216 Es wird nirgends bei Hardy hervorgehoben, daß die irrationale Zahl nicht in dem Sinne wie die rationale Zahl ein Zahlzeichen besitzt. Die Fiktion ist wohl, daß sie ein unendlich langes hat. Am ehesten könnte natürlich noch das Zeichen der Entwicklungsregel als das Zahlzeichen gelten. -Aber dieses Fehlen des Zahlzeichens bedeutet einen unendlich fundamentalen Unterschied. Und in gewissem Sinne sagt ja der Cantorsche Diagonalbeweis, daß sie kein Zahlzeichen haben kann. Das Bild der Zahlengeraden ist ein absolut natürliches bis zu einem gewissen Punkt: nämlich, soweit man es nicht zu einer allgemeinen Theorie der reellen Zahlen gebraucht.
Finally, I would like to attract attention to the fact that LW evaluates this concept of the continuum in terms of its naturalness: he concedes that the analogy with a line is a perfectly natural one up to a certain point (which shows that he does not dogmatically object to the notion of 'naturalness' in general), but is quick to specify that this is no longer the case if one tries to use this analogy as a general theory of the reals. So, LW simply denies that the line is a 'natural' representation of the distribution of the reals. After these remarks, there are no more entries into this notebook for 6 days.

Wittgenstein's critical remarks on math II: set theory as a sign of the times

Set theory emerges as the main contender in the debates about the Grundlagen in the early 20th century. LW does not appear to object at all against the notion of "set" as such, but has difficulties accepting a number of expansions of this notion, which are essential for it to play a foundational role: the idea that a line is a set of points, the notion of infinite sets, the idea that the reals can be represented by a line / an infinite set, etc. The following three passages show a direct connection between LW's critical approach to PhilMath, and the cultural critique of Spengler, Weininger, Kraus as discussed in section 2.0.0 above, in that they directly link then-recent developments in mathematics to very broad cultural tendencies.

(A) the pest of the set-theoretical expression + an instinctless time: LW, Ms-106,245-255 [1929, entries not dated]

The following (very early) excerpt occurs after a number of as such interesting remarks that appear to have in common that they were inspired by Brouwer, but it would take us beyond the needs of the present study to try and interpret their mutual coherence. So, let us merely focus on the ethico-aesthetical wording of the most obviously critical parts. LW starts off with one of his harsher claims:

Die Mathematik ist ganz durch die perniziöse mengentheoretische Ausdrucksweise verseucht.

According to LW, set-theoretical parlance has had a thoroughly destructive impact on mathematics. The word "verseuchen" is a very strong term, typically applied to pests, pollution, radiation, or epidemics. 217 It is interesting to note that LW blames the set-theoretical "way-of-expressing-oneself" [Ausdrucksweise], not set theory as a piece of mathematical technique, which reminds us of the prose-calculation distinction that we discussed in section 1.2.3(A) above. LW illustrates this claim by attacking the idea that a line consists of points (is a set of points): this set-theoretical understanding of the concept of "line" is not coherent with the traditional concept of the line. 218 LW says that a line (in the traditional sense of that word) is a 'law' [Gesetz], a constructive procedure, and therefore doesn't consist of anything. One can presume LW means that a line is defined by the way it is constructed (say: the act of drawing a line). As a drawn line, it can perhaps consist of shorter lines, but not of points.

217 https://www.dwds.de/wb/verseuchen

218 Ein Beispiel dafür ist daß man sagt die Gerade bestehe aus Punkten. Die Gerade ist ein Gesetz & und besteht aus gar nichts. Die Gerade als farbiger Strich im visuellen Raum kann aus kürzeren farbigen Strichen bestehen (aber natürlich nicht aus Punkten). -Und dann wundert man sich z.B. darüber, daß "zwischen den überall dicht liegenden rationalen Punkten" noch die irrationalen Platz haben!
Was zeigt eine Konstruktion wie die des Punktes √2? Zeigt sie diesen Punkt wie er doch noch zwischen allen rationalen Punkten Platz hat? Sie zeigt einfach, daß der durch die Konstruktion erzeugte Punkt nicht rational ist. Und was entspricht dieser Konstruktion & diesem Punkt in der Arithmetik? Etwa eine Zahl, die sich doch noch zwischen die rationalen Zahlen hineinzwängt? Ein Gesetz das nicht vom Wesen der rationalen Zahl ist.

LW is specifically annoyed by the fact that people are apparently amazed [Und dann wundert man sich z.B. darüber, daß...] at the image that between the very dense succession of rationals there still is room for the irrationals. For LW, the whole demonstration only shows the trivial fact that the point that is constructed in this way is not rational; in arithmetical terms, this merely means that irrationals are a law [Gesetz] that is not of the same nature as the one that generates the rationals. For the present purposes, I highlight LW's annoyance at people's wonder about something he deems trivial, a recurrent theme in our explorations.

LW then turns towards the example of Dedekind's cut as a way to construct √2. Again, it's not necessary for our purposes to go deep into the technical details, or into the issue as to whether LW is right. Suffice it to observe that LW -again-articulates his criticism in terms of illusion and pretense. This time he attacks the idea that Dedekind's cut would be an insightful way to introduce the construction of √2: rather than being a construction of √2, the cut already presupposes the structure of square roots. 219 The introduction of √2 by the cut is therefore mere appearance [bloßer Schein] and its transparence [Anschaulichkeit] is mere pretense [Die Erklärung des Dedekindschen Schnittes tut so als wäre sie anschaulich]. 220 Again, LW does not object to the technique as such, but to the fact that -in this case-Dedekind pretends that it does things that in fact it doesn't. 221

219 If I read Mancosu correctly ((Mancosu 2003), p. 71), Bernays seems to have similar objections to a similar interpretation of the technique involving cuts: "The first standpoint consists in accepting as a real number anything that is given by a cut (say by the condition x3 < 2). The problem with this first method is that it does not delimit at the outset the domain of the real numbers ("der Begriff der reellen Zahlen wird nicht 'bestimmt' umgrenzt"). For this reason we truly have a vicious circle here, since real numbers are defined by partitions which in turn are defined by reference to what real numbers [partitions] exist possessing a specified property. But, according to Bernays, one does not always follow this standpoint".

220 Die Erklärung des Dedekindschen Schnittes tut so als wäre sie anschaulich, wenn nämlich gesagt wird: Es gibt nur 3 Fälle entweder hat R ein letztes Glied & L kein erstes oder etc. In Wahrheit läßt sich keiner dieser Fälle denken (oder vorstellen). Wenn man als Eigenschaft der Ober-& Unterklasse im Dedekindschen Schnitt x² ˂ 2 und x² ˃ 2 nimmt, warum nicht gleich x ˂ √2 und x ˃ √2? Man glaubt durch die erste Fassung einer Schwierigkeit ausgewichen zu sein. Wenn wir logisch vorgehen so müssen wir die rationalen Zahlen einteilen in solche deren Quadrat größer als 2 ist & solche deren Quadrat nicht größer als 2 ist. (Denn, daß, was nicht größer ist entweder gleich oder kleiner ist sagt die Logik nicht, sondern das sehen wir erst durch Inspektion eines Zahlenverhältnisses.) Gut, ich schneide also: Rechts vom Strich liegen alle Zahlen mit größeren Quadraten, links alle anderen. Aber wer sagt denn, daß das so ist? Das setzt ja eben die Kenntnis der Struktur von x² und 2 voraus. Die Einführung der √2 durch den Dedekindschen Schnitt ist bloßer Schein, der dadurch zustande kommt, daß der "Schnitt" eine räumliche Illustration ist der uns die Struktur vor Augen führt, die wir klassentheoretisch -amorph -nicht erfassen können.
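For readers who want the cut in front of them, here is a minimal sketch of the classification LW is discussing -the Python code and the sample fractions are my own illustration, not anything in LW's text or in Mancosu's. It sorts positive rationals into the lower and upper class by the condition x² < 2; LW's complaint is not that this classification fails, but that calling it an 'introduction' of √2 already presupposes the very structure it is supposed to introduce.

```python
from fractions import Fraction

# Dedekind-style classification of positive rationals by the condition x*x < 2.
def in_lower_class(x: Fraction) -> bool:
    return x * x < 2

for q in (Fraction(7, 5), Fraction(141, 100), Fraction(142, 100), Fraction(3, 2)):
    print(q, "->", "lower class" if in_lower_class(q) else "upper class")
```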
221 LW deals with the same material in Ms-126,131-132, d.d. 19421214.

223 When set theory appeals to the human impossibility of a direct symbolization of the infinite it thereby introduces the crudest imaginable misinterpretation of its own calculus. To be sure, it is this very misinterpretation that is responsible for the invention of that calculus. But of course that doesn't show the calculus to be something inherently incorrect (at most it shows it to be something uninteresting), and it's odd to believe that this part of mathematics is imperilled by any kind of philosophical (or mathematical) investigations. (With equal justification chess might be imperilled by the discovery that wars between two armies do not follow the same course as the battle on the chess board.) What set theory has to lose is rather the atmosphere of thought-fog surrounding the bare calculus, that is to say, the references to a fictional symbolism underlying set theory, a symbolism that isn't employed in its calculus, and the apparent description of which is really nonsense. (In mathematics we're allowed to make up everything, except for a part of our calculus.) (Wittgenstein 2005)

Let us try and unpack this quite dense paragraph. First, LW refers to the idea of the human incapacity of a direct symbolic representation of the infinite. It is not clear to me if LW intends to refer to a specific passage by a specific author, but the idea that "the finite human mind is incapable to represent the infinite" (as well as the follow-up idea of an infinite mind) is quite commonly heard even in present-day discussions involving mathematicians. 224 The idea that this is due to the finite nature of our minds, as opposed to infinite minds that would be able to calculate with them, sounds like gibberish indeed. LW ends this excerpt with one of his oracular sounding sentences (between brackets): "(In math we are allowed to make up [fingieren] everything, but not a part of our technique [Kalkül])". 227

224 Interestingly, a quick internet search shows that this slogan was (is?) also popular in some theological circles. It is tempting to -hypothetically-link the origins of this idea to Cantor, who was notoriously untidy about the boundaries between mathematics and theology.

225 Aber der Kalkül an sich ist natürlich dadurch nicht als etwas Falsches erwiesen (höchstens als etwas Uninteressantes) & es ist sonderbar, zu glauben daß dieser Teil der Mathematik durch irgend welche philosophische (oder mathematische) Untersuchungen gefährdet ist. (Ebenso könnte das Schachspiel durch die Entdeckung gefährdet werden daß sich Kriege zwischen zwei Armeen nicht so abspielen wie der Kampf auf dem Schachbrett.)

226 NB: apparent formalism about the technique is not (not necessarily) incompatible with pragmatism.

What does LW mean by the image of 'the atmosphere of fog'?
Fog is in itself not solid at all, but it does prevent us from seeing clearly; around set theory (which, as a technique, is not problematic), there is an 'athmosphere' of mental fog, perhaps implying that it is always there, like the athmosphere around the earth (or alternatively, implying a general mood of unclarity?). 228 So, again, the problem is not the actual mathematical technique/Kalkül behind set theory, which is OK, albeit not necessarily interesting. What is problematic to LW is the misrepresentation of set-theoretical technique in mainstream set-theoretical discourse, and specifically the fictionality of some of the concepts that are invoked: there is simply nothing infinite in the actual infinitesimal technique, the concept of 'infinite numerals' is pure fiction (they are not actually used), and the talk about 'infinite minds' comprehending 'infinite numerals' is nonsense (gibberish as such (wheter in set theory or outside it) 229 and not serving a purpose in the context of set theory). This excerpt lacks some of the Spenglerian apocalyptic tone of both the previous and the next excerpt, but is perhaps more specific as to the exact technical objections LW has: LW objects to discourse that is disconnected from what is actually done in the technique itself, and in the case of the notion of 'infinite mind' also disconnected from anything else. (C) the illness of an era: LW, Ms-121,27r-28v, d.d. 19380530 [// BGM2, §23] In the manuscript, the following remarks occur after a three day pause in the writing and do not seem to refer to anything in particular in the previous remarks (short, loose, sparse and not very good remarks about music and architecture, balloons, 230 For starters, LW points out that the math problem [Aufgabe] 'name a number that is greater than the number of all numbers' sounds like a joke, which is fair: even after more than a century of transfinites, the assignment on its own still sounds distinctively weird. LW then turns to a slightly less weird-sounding exercise, looking for numbers between 1/n and 1/m, which -as we know-may lead to similarly paradoxical results in the context of set theory, but makes sense and is even useful because it is systematically linked to other such problems. It is true that we could easily imagine a use for such an assignment, e.g. in a didactic context. Still, it is also true that the method(s) used to do this assignment woud not give rise to a theory 232 This is good illustration of the pitfalls of the editorial practices that gave rise to the standard editions of LW's work: in BGM 2, 23 these remarks follow the remarks in which LW calls diagonal procedures "hocus pocus" (section 2.1(A)). This juxtaposition has a striking effect, and in previous versions of the present study, I got some interpretative traction out of it. I don't exclude that in the course of a process of cutting and pasting of the kind that gave rise to the typescript that was used as a basis for PhU, LW could have made this move. But the thing is: he didn't, and the fact that I was misled by it, shows that it is de facto misleading. 233 23. The sickness of a time is cured by an alteration in the mode of life of human beings, and it was possible for the sickness of philosophical problems to get cured only through a changed mode of thought and of life, not through a medicine invented by an individual. 
Think of the use of the motor-car producing or encouraging certain sicknesses, and mankind being plagued by such sickness until, from some cause or other, as the result of some development or other, it abandons the habit of driving.

234 "Nenn' mir eine Zahl, die größer ist, als die Zahl aller ganzen Zahlen!" -diese || Diese Aufgabe hat den Charakter einer mathematischen Scherzfrage. Welcher Art wäre denn die Aufgabe: "Nenne mir eine Zahl zwischen 1/n und 1/m"? Nun es wäre eine Übung in der Bildung solcher Zahlen. Ihre Nützlichkeit liegt darin, daß es hier ein System solcher Aufgaben || Probleme gibt. Es ist nämlich eine ganz wesentliche Frage: Was ist denn die Anwendung dieses (neuen) Zahlenbegriffs außerhalb der Mathematik. -Denn mit 1, 2, 3, 4 … kann ich nicht nur Zahlen zählen, sondern auch Äpfel, & wenn nun ein Zahlwort nur in mathematischen Sätzen & in keinen andern vorkommen könnte, oder wir doch nicht wissen, welche Rolle es außerhalb der mathematischen Sätze spielen kann, so weist dies auf eine sehr wesentliche Unklarheit unsrerseits hin. Es ist nämlich nicht klar ob wir nicht bloß durch eine Einbildung verführt sind hier den Namen || das Wort "Zahl" zu gebrauchen.

of the order of 'all fractions', as it would inevitably lead to the same old problems that always show up in such contexts. LW then transitions to a simple and plain statement of his doctrine about applications: according to LW, the question as to the extra-mathematical use of a new concept of 'number' is an important one: "with the natural numbers, you can't just count numbers, you can also count apples"; when a 'number' occurs only in mathematical propositions and couldn't occur in any other sentences, there exists an 'essential unclarity' about these new numbers. It is not clear whether we are not using the word 'number' as the result of an illusion [Einbildung]. So: again, LW has no objections to the mathematical technique as such, but he does object to what mathematicians say about their own technique. In this case, he questions two (related) aspects of standard set-theoretical discourse:

- the idea that there is such a thing as infinite number signs (cf. paragraph (B) here above);

- using the word 'number' in such a way that it disrupts the link with counting, as applied in everyday applications (cf. section 1.2.2 above, in which we discussed the passage where LW says that mathematical terms should also be used in civilian clothing).

In the analyses we presented here above, we have explained LW's remarks in terms of 'the expansion' of the use of certain terms, but here LW thematizes the idea of the 'expansion' of a term very explicitly himself and articulates the consequences of the disruption of the connection with everyday practice quite clearly: the problem with expanding the use of the word 'number' in this way is a fundamental unclarity about its meaning: as long as our idea of a number remains directly connected to counting and the basic operations of everyday calculation with the integers, the notion of 'number' retains its transparency; once we start using the word 'number' in such a way that this connection gets disrupted, the meaning of the concept also loses its transparency. This line of thought should remind us of the way LW conceives of meaning in general in terms of embedding in everyday practice (cf.
section 1.2 above), but the critical nature of these remarks, in which he condemns real-life contemporary usage for being disconnected from normal, everyday, actual practice (in this case mathematical practice) should remind us of journalist Karl Kraus' criticism in terms of the disruption of the connection between public discourse on the one hand, and contemporary social and political reality on the other (cf. section 2.0.0 above). The general idea that set-theory had an entirely negative impact on math remained a constant throughout LW's work on mathematics, not only observed in the remarks from 1929, 1932 interesting to note that there is a substantial discussion in the literature about the question as to whether LW actually had read Gödel's famous papers and if so, if he had understood them. In agreement with my general strategy as declared at the outset of this study, I will not engage with the technicalities of this literature and following LW's lead (see below), I have no intention to speak about the technicalities of Gödel's proof: I believe they are perfectly fine, and I believe LW also thought they were perfectly fine. This being said, I also believe that the brunt of LW's critical thought in the context of PhilMath, even if not targeting anything specific to Gödel's proofs, does apply to Gödel's work as much as to any other contemporary exponent of PhilMath. This is what I will try to make clear in the present introductory remarks. Let me first take a step back and look at the material that we are talking about. Gödel is mentioned in LW's manuscripts at the following spots: -Ms-117 (not dated, after august 1938?): Gödel is mentioned twice (p. 147 and pp. 151-152), in very tentative remarks; historically interesting but nothing in it for us; -Ms-121 (pp. 71r-85v, d.d. 19381228-19390102): Gödel mentioned 7 times between p. 75v and p. 84r, including some interesting material; - Ms-122 (p. 28v, 19391118): Gödel mentioned in passing; - Ms-124 (83-96, d.d. 19410702 Trage! Also interesting are the interludes in code (italic in the above), in which LW admonishes himself to stand strong and not make scene, not be ironic, not be artificial, so as to be maximally useful to others. 240 For a detailed commentary, see (Kienzler and Grève 2016), including a point of view that is grosso modo congruent with mine here. (A) contradictions in foundational formal systems are actually harmless: MS-118, 111v-, dd. 19370923 [= BGM 1, Anhang III, §11-13] A good place to start is LW's claim that whatever problems may show up in so-called 'foundational' axiomatic systems are actually harmless and irrelevant. LW starts from the following hypothetical scenario, in a way a particularly naïve reinterpretation of Gödel's famous consistency vs. completeness arguments, though nothing is said about the particular ways in which sentence P ("this sentence is unprovable") has been proven or disproven: 241 • A has proven the unprovability of P ("this sentence is unprovable") in Russell's system, which means that he has proven P, which means that P belongs and does not belong to the system. • B: That's what happens when you construct this kind of propositions! • A: But that is a contradiction! • B: Yep.We've got a contradiction here. Does it do any harm? 242 So, basically, LW suggests that it remains to be seen whether a contradiction is harmful or not. LW then asks the same question about the Liar's paradox: does it do any harm? 
243 We have seen in section 1.2.1(C) above that LW's answer is that the paradox is obviously harmless and that it resembles pointless games one may play with very small children who have not yet understood that they will never be able to catch that thumb, because it is structurally impossible to do so. Then, LW makes the following interesting point: despite the obviously harmless and childish nature of the paradox, people have been seriously tormented by it. Which is true: logicians have been and are still struggling with paradoxes. The philosophical significance of this point is easy to underestimate: it shows that the fact that people are taking a problem seriously does not mean that it is actually a serious problem. 244 And this is something that needs to be 241 Let me repeat once more that LW makes no claim to talk about Gödel specifically (cf. Ms-163, 39v-40r, discussed above). 242 Nehmen wir an, ich beweise die Unbeweisbarkeit (in Russells System) von P; so habe ich mit diesem Beweis P bewiesen. Wenn nun dieser Beweis einer in Russells System wäre, -dann hätte ich also zugleicherzeit seine Zugehörigkeit & Unzugehörigkeit zum Russellschen System bewiesen. -Das kommt davon, wenn man solche Sätze bildet. -Aber hier ist || wäre ja ein Widerspruch! -Nun so ist hier ein Widerspruch. Schadet er hier etwas? 243 Schadet der Widerspruch der entsteht, wenn Einer sagt: "Ich lüge. -Also lüge ich nicht. -Also lüge ich etc." Ich meine: ist unsere Sprache dadurch weniger brauchbar, daß man in diesem Fall aus einem nach den gewöhnlichen Regeln sein Gegenteil & daraus wieder ihn folgern kann? -Der Satz (selbst) ist unbrauchbar, & ebenso dieses Schlüsseziehen; aber im übrigen kann man es tun, wenn man will. || warum soll man es nicht tun? Es ist (nur) eine brotlose Kunst. NB: nowhere it is said that Gödel's stuff -technically speaking-boils down to a paradox like the Liar's (again: Gödel is not mentioned and neither is anything related to the specifics of Gödel's proof), but one can safely infer that -according to LW-in both cases, the question one should ask is: is the contradiction a problem? And LW's suggestion is in both cases, that the contradiction is not harmful and for all practical purposes irrelevant, i.e. not really a problem. So: LW does not argue against any technical details internal to any specific argument but attacks one of the presuppositions of the debate: the idea that contradictions are a huge problem, or in other words: that consistency (in the sense of: avoiding contradictions within a formal system) would be a fundamental issue at all. It is also interesting to see that LW's little dialogue illustrates mostly a difference in attitude between the pathos of distress displayed by (the voice I called) A and the "so what?" attitude displayed by B: LW is not disagreeing with the observation that there is a contradiction here, he does not even mention any of the internal features of the proofs involved in his hypothetical scenario, he only attacks the panicky attitude towards contradictions is general. (B) the superstitious fear and veneration for contradictions: Ms-118,116r-116v, d.d. 19370924 [= BGM 1, Anhang III, §18-19] This excerpt, still from the 'notorious remarks', a few pages down the road from the previous excerpt, represents LW's anti-foundationalism at its most focused and most strident. 
Just before our excerpt starts we read the comment (between brackets) that "The mathematicians' superstitious fear and veneration when faced with a contradiction is very funny", 245 which is interesting in different ways: (1) LW identifies the problem at the level of fundamental beliefs and emotions (pathos) external to any proofs or arguments; (2) the recurring theme of ridiculousness manifests itself. As for (2), the "sehr komisch" disappeared in the typescript, and hence does not occur in the standard editions either; on the one hand, it is easy to understand why LW or his literary executors decided to cut this remark from a more public version of the text (laughing at one's colleagues' opinions has never been acceptable way of formulating criticism, let alone laughing at their fears and quasi-religious beliefs), but on the other hand, the idea that the standard attitude towards contradictions is funny, need not be merely dismissive: from LW's radically pragmatic point of view, this baseless fear and the ensuing behavior is literally comical, in a slapstick kind of way. 245 "(Sehr komisch ist die abergläubische Angst & Verehrung der Mathematiker vor dem Widerspruch.)". LW is still thinking about the Gödel-like sentence "P is unprovable" and now takes on the hypothetical scenario in which the proposition was false and therefore provable. True to his quasi-dialogical style, LW (?) immediately interjects: "But why do you call it 'false'? Because you have seen a proof? Or for other reasons? In that case, it's not a problem". So, LW is questioning what it could mean for some proposition within a formal system to be false without being proven. He then suggests that 'false, but not because it's proven to be false' could mean a number of different harmless or irrelevant things. For instance, one could argue that 'tertium non datur' is false, because 'yes and no' is heard quite frequently and makes perfect sense in these contexts, or one could say that the idea that 'the negation of a negation is an assertion' is false, because people sometimes use double negations as a strengthened negation. 246 Both examples are intended to show that you can call many things 'false from outside the axiomatic system', without them having any relevance to what can reasonably be done within the system. Again: the message is that for all practical purposes, the contradiction need not be a problem. LW then goes on to tackle the hypothetical -definitely Gödel-like-conclusion "... therefore P is true and unprovable" and suggests that this boils down to just writing "Therefore ⊢P". LW then compares the above scenario to the scenario in which someone has deduced from certain principles concerning natural forms and architectural style that Mount Everest, where nobody actually can live, would be an excellent location for a little castle [Schlößchen] in Baroque style. 247 The simile of the little baroque castle on Mount Everest is an adequate expression of the idea of a lack of embedding in anything real: you can can make formally correct deductions within you formal system all day, but when it comes to real-world conclusions with real-world consequences, the validity of the conclusions will be measured by means of real-world criteria, not by criteria internal to the formalism; if the real-world interpretation of your result is ridiculous (or otherwise undesirable) in real-world terms, it deserves to be rejected. 246 "Aber angenommen, der Satz wäre nun falsch -& daher beweisbar! -" -Warum nennst Du ihn 'falsch'? 
Weil Du einen Beweis siehst? -Oder aus andern Gründen? Dann macht es ja nichts. Man kann ja den Satz des Widerspruchs sehr wohl falsch nennen, mit der Begründung z.B., daß wir sehr oft mit gutem Sinn auf eine Frage antworten: "Ja -& nein." Und ebenso || desgleichen den Satz "p ≡ ~~p": weil wir die Verdoppelung der Verneinung als eine Verstärkung der Verneinung verwenden & nicht bloß als ihre Aufhebung. 247 The standard translation 'chalet', though funny in its own way, is wrong: (1) it is not clear what 'a chalet in baroque style' even could be (a chalet is by definition a rustic type of building that is -at its origins-specifically adapted to alpine circumstances, all of which is incompatible with the very notion of 'baroque'); (2) it misses the point that LW wants to make completely (whereas a chalet has some functional features that makes it suited for mountainous circumstances (though perhaps still not for the most inhabitable parts of the Everest, before it became part of the tourist industry), a little baroque castle, with its almost excusively ornamental character, is one of the most incongruous and unlikely choices). LW then concludes this excerpt by asking the following question: "How could you actually make the assertion [Behauptung] plausible to me, because you can't actually use it for any other purposes than for this little trick of yours?". 248 Let's first note that "that little magic trick" [jenen Kunststückchen] sounds very dismissive and echoes the recurrent theme of magic, slight of hand and other types of illusion. This is a very clear articulation of what LW's criticism of the foundationalist use of formal systems actually consists in: it problematizes the relation between the foundational system and 'real-life math' and asserts the primacy of the latter. This also illustrates what LW meant when he said that he 'talks past' the contents of Gödel's actual proof and says things that are applicable to Gödel's proof but to much simpler and generic aspects of math as well: the way LW's argument is formulated is intended (I guess) to apply to Gödel and the dismissive tone of the 'little magic trick' remark is perhaps rightly interpreted as a snide remark targetting Gödel (who is -it bears repeating-not mentioned throughout this text), but it is also very clear that what is being attacked here has nothing to do with the internal mechanics of Gödel's proofs, and applies to any type of proof involving formal systems. LW's line of thought in this excerpt, the way we read it here, has an important corollary: if it's true that contradictions in foundational systems need not be a problem because they are peripheric anyway, then that implies that LW does not believe that math needs to be a unified formal system to be valid. 249 This anti-monism about math is not overtly expressed in this excerpt, but is one of the things that recur time and time again throughout our analyses (cf. section 1.3(Fd) above and section 2.4.3(C) below), and LW does express it explicitely in connection with Gödel elsewhere (Ms-121,76r): Gödel zeigt uns eine Unklarheit im Begriff (der) 'Mathematik', die darin zum Ausdruck kam, daß man die Mathematik für ein System gehalten hat. LW says: Gödel shows us an unclarity in the concept of 'mathematics' that found its expression in the fact that one has taken mathematics for a system. 
So, LW does not believe that math needs to be a system (this is one of the clearest avatars of the anti-monism that we observed also elswhere 248 Du sagst: "… also ist P wahr & unbeweisbar." Das heißt wohl: "Also ⊢ P." Von mir aus-aber zu welchem Zweck schreibst Du diese 'Behauptung' hin? (Es || Das ist, als hätte man || jemand aus gewissen Prinzipien über Naturformen & Baustil abgeleitet, auf den Mount Everest, wo niemand wohnen kann, gehöre ein Schlößchen im Barockstile.) Und wie könntest Du mir die Wahrheit der Behauptung plausibel machen, da Du sie ja zu nichts weiter brauchen || verwenden kannst, als zu jenen Kunststückchen? 249 It is interesting to see to what an extent Gödel and LW take a similar path in this respect: both come to the conclusion that maths' validity does not depend on whatever is proven in a formal axiomatic system. But then they diverge to an extreme degree in their opinions as to what it does depend on, which is in its turn also very interesting. in LW's work) and he appears to believe that Gödel's results -whatever Gödel's own thoughts in this regards may be-actually can (or even should) be understood in such a way that they show that math is not a system. (C) ein guter Engel: LW, Ms-124,71-74, d.d. 19410623-19410624 [// BGM7, §16] In this excerpt LW asks his main question with respect to the Grundlagen issue at its most plain and simple: why does math need a foundation? And his answer shows how the issue of contradictions is intimately related to this question. For this reason, the excerpt deserves a close and detailed reading on our part. (1) LW's basic answer is simple: math does not need a foundation, no more than that propositions about physical objects or sensorial impressions need an analysis. LW then adds: What they do need, like the other kinds of sentences, is a clarification of their grammar. 250 In LW's work, 'grammar' means an account of the meaning of words in terms of the ways in which they are used, of their function within a practice (language game, etc.). So, -rather than participating in the debates about what could serve as a proper foundation for math, he simply denies that that would be a meaningful endeavour and proposes his own form of philosophy as an alternative. (2) For LW, mathematical problems concerning the so-called 'foundations' are as little fundamental to math as a painted rock carries a painted castle. 251 This implies that for LW formal systems are a mere picture of math, not math itself. 252 It also follows that whatever is shown by the use of such formal systems cannot be somehow more fundamental than what could be shown without them. (3) LW then asks: "But didn't that contradiction make Frege's logic useless for offering a foundation to arithmetic?". And the reply is: "Sure! But who has said that it had to be useful to that purpose in the first place?". 253 LW refers to Russell's observation that his paradox could be derived from Frege's logistic system. As opposed to Russell, Frege and -as far as I knowanyone else involved in the logistic approach to the Grundlagen, LW simply denies that this is a real problem. 250 Wozu braucht die Mathematik eine Grundlegung?! Sie braucht sie, glaube ich, ebensowenig, wie die Sätze über physikalische Gegenstände oder Sinnesdaten, || Sinnesempfindungen, eine Analyse. || wie die Sätze, die von physikalischen Gegenständen handeln, oder von Sinneseindrücken, eine Analyse. || wie die Sätze, die von physikalischen Gegenständen -oder die, welche von Sinneseindrücken handeln, eine Analyse. 
Wohl aber bedürfen die mathematischen, sowie jene andern Sätze einer Klarlegung ihrer Grammatik. 251 Die mathematischen Probleme der sogenannten Grundlagen liegen für uns der Math. sowenig zugrunde, wie der gemalte Fels einer gemalten Burg. || wie der gemalte Fels die gemalte Burg trägt. 252 Again, a move that is not that different from Gödel's basic move. 253 'Aber wurde die Fregesche Logik durch den Widerspruch zur Grundlegung der Arithmetik nicht untauglich? Doch! Aber wer sagte denn auch, daß sie zu diesem Zweck tauglich sein müsse?! (4) The next day, LW continues to think about the case of Frege's logic and -in a familiar fashion-conjures up a scenario in which 'a savage' has been given Frege's logic as a tool to derive arithmetical propositions. This scenario is a slightly mythologized version of the fantasy underlying formalism in general, i.e. that a purely mechanical execution of the rules of a formal system could adequately and completely represent (or even replace) mathematical reasoning. In this case we are asked to imagine the formalist's nightmare: suppose now that this 'savage' has derived the contradiction without knowing it is a contradiction, and is now deriving arbitrary true and false propositions. Someone reacts to this scenario by saying "Up until now, a good angel has saved us from going this way". To which, someone else (LW?) 254 replies, in familiar fashion: "Well, what more do you want?", after which we read the wonderful comment: "I believe one could say: a good angel will always be necessary, whatever you do". 255 The reference to the fact that 'a good angel' is always needed, expresses LW's belief that for things to work out foundations are actually irrelevant: whether our foundational system turns out to be consistent or not, the successful application of our mathematical techniques will not depend on it. It is important to understand that it is as a matter of fact not true that the so-called 'foundations' are what make math reliable: obviously (I would say, but perhaps there are people around that would disagree), elementary mathematics and geometry are -at an intuitive, immediate level, but also as an historical fact-a lot more secure than any set-theoretical (or otherwise foundational) theory -say ZFC or in LW's own experience Russell's Principia Mathematica or Frege's Grundgesetze-could ever be. Why? Because those applied techniques are basic aspects of our Forms of Life, intertwined with those activities that make up the bulk of our everyday lives (building stuff, buying and selling stuff, etc.), having deep historical, cultural, biological and physical roots, in a way that the 19 th -20 th century foundational axiomatic systems simply are not. Elsewhere (see section (D) here below), LW even plays with idea that it is perfectly imaginable that people operate with inconsistent systems. 254 This last reply -as opposed to the previous sentence-does not have quotation marks, which suggest that this 'voice' coincides with 'the auctorial voice', if that means anything in the case of LW's notebooks. 255 (D) why couldn't contradictions have a function? (LW, Ms-125,66r-68r, d.d. 19420923 [// BGM4, §59]; LW -Ms-121,74r, d.d. 19381228) In the following two excerpts, LW takes the idea that contradictions are not necessarily a problem in a slightly different direction (similar to his equally fictional scenarios we discussed in section 1.3), exploring the idea that contradictions could actually serve a function in logic. In the first excerpt (Ms-121,74r, d.d. 
19381228) what makes the formal system meaningful, is how it is actually used within the practices in which it occurs. I will develop this idea a little further in Appendix 4.1 below. 258 The following somewhat isolated 259 paragraph (LW, Ms-125,66r-68r, d.d. 19420923), shows a similar idea, in that LW explores another -somewhat more radical-way to make 257 "Aber aus einem Widerspruch folgt ja jeder Satz! Was würde dann aus der Logik?" Nun so folgere nichts aus einem Widerspruch! 258 LW tags this excerpt with a semi-comical and all in all rather superficial comment, suggesting that if some mathemathicians superstitiously fear contradictions as if confronted by the devil, others might want to celebrate 'black masses', indulging in contradictions: Wenn Mathematiker sich abergläubisch vor dem Widerspruch wie vor dem leibhaftigen Teufel gebärden, warum sollten nicht andere eine Art schwarze Messe feiern (&) sich in Widersprüchen ergehen? 259 The remark that interests us here occurs in the context of a series of reflections on imaginary scenarios in which mathematical results could be available through other means than calculations, e.g. through calculators occurring in nature, as the result of the chemical properties of paper, by making icecubes melt, as the result of unexplicable human behavior. For the present purposes, it is not necessary to delve into this -otherwise appealing-material. contradictions part of a formal system. In this case, LW proposes to construe Russell's contradiction as something supra-propositional: the self-contradicting proposition stands above the propositions, like a Janus-headed monument, looking in both directions. One could even start logic with this contradiction and -as it were-descend down from it to the actual propositions. 260 The move that LW makes in this excerpt is -in its effects-similar to all the other imaginary scenarios: it serves to make us think about how much is presupposed / 'given' before anything meaningful can even occur. (E) Gödel's unphilosophical paper + the slimy concepts of most mathematicians LW, Ms-124,115-119, d.d. 19440310 (// BGM7, § §32-34): What is interesting in this excerpt is that LW does name Gödel by name and that LW's overt criticism directly addresses a number of presuppositions that are generally accepted. The remark about Gödel and the comments on the relevance of contradictions that come with it, occur in the context of a reflection on the question what constitutes a calculation. LW insists on the fact that a lot of different criteria may be involved in determining whether something counts as 'calculating' (training, correctness, practical application, intentionality, ...). 261 This brings LW to the following remark, directly addressed at Gödel and 'most mathematicians': LW says that what is unphilosophical about Gödel's paper is that he doesn't see the relation between math and its application and that in this respect, he has the slimy concepts of most mathematicians. What does slimy mean, here? At face value, 'schleimig' would indicate a lack of solidity (or: of undue liquidity where solidity is expected), but perhaps there is also a connotation of slickness and obsequiousness (?). What is it that LW calls slimy? Apparently, the lack of clarity that most mathematicians have about the relations between math and its applications is either slimy in the more literal sense, or in the sense of insincere docility. 
263 LW then formulates the idea that every proof gives the mathematical construction a new leg, like the leg of a table. 264 The link with what precedes is probably that for LW, but not for Gödel and most mathematicians, every proof is equally fundamental, that there are not really 'foundations' that are unequivocally 'underlying' other parts of math. LW then turns to one of the now familiar themes that he also developed in the extended passage that we analyzed in section 1.3 above: the topic of demarcation and the issue of mathematical techniques with fringe or 'fantastic' applications. Previously (see e.g. section 1.3), LW had asked the question as to whether math with a purely fanciful application wasn't math anyway, suggesting that it was. He now formulates a potential objection: don't we call it 'math' only because there are many transitions, bridges from the fanciful to the non-fanciful? Would we still say that people were doing math who only calculated (operated with signs) for occult purposes? 265 And -in typical style-he formulates what looks like an objection to the objection: but wouldn't it then [i.e. even if we agree that operations with symbols for purely occult purposes do not count as proper math] still be incorrect to say that it is essential to proper math that it builds concepts? 266 LW then comes to the climax of his argument, the point to which this kind of line of questioning apparently always leads: 263 It could be an interesting exercise to try and think both potential interpretations through. In the case of the literal interpretation, what would the choice for this particular adjective imply: does it mean that they are not solid enough, that they are runny, and adapt their shape to any surface they happen to come into contact with, and/or that they are not fluid enough and stick to the hands of whoever tries to use them? Under the other interpretation, the question could be: towards whom is the obsequiousness of these concepts directed, whom are they supposed to please: religious or political authorities, perhaps? Or perhaps LW means that they avoid all conflict by merely confirming the consensus once it was established. Einfluß auf die sein, die die Mathematik nun so sehen lernen. Mathematik ist also eine Familie; aber das sagt nicht daß es uns also gleich sein wird, was alles in sie aufgenommen wird. 267 Math is an anthropological phenomenon, and like all other anthropological phenomena, it is not a homogeneous thing: what is essential in one area within what we call math, need not play a role in other areas. In other words: math is a family, which -as usual-suggests that it is not a single thing, but several things that more or less resemble each other, but in this case, LW emphasizes the fact that this does not mean that just anything can be accepted into it, either. LW also formulates an interesting little corollary to this last paragraph: the insight that math is not homogeneous should have a serious impact on those who learn to see math in this way. This remark connects back to the beginning of this excerpt, i.e. the idea that not only Gödel, but 'most mathematicians' have slimy ideas concerning the relation between math and its applications. It is also interesting to compare it to §644 The Big Typescript, in which LW also makes a link between how people view math and the way they are indoctrinated to see it through the education they are given (cf. section 3.2.3(C) below). 
LW then surmises that one could say that if there was no mathematical proposition that you understood better than the Axiom of Choice, then you didn't understand math at all. 268 This makes sense: I guess most people would agree that 'to understand math' would in the first place imply that one master basic arithmetic and geometrical techniques, algebra, trigonometry, calculus..., and even if we end up including set theory, understanding the Axiom of Choice would perhaps not be the most representative thing to focus on. 269 LW then comes back to one of his favorite scenarios: what if we deduce propositions from a hidden contradiction? This time, he makes us imagine a case in which there are real-life consequences: a bridge collapses. What would happen? LW surmises that we would attribute the collapse to other reasons, for instance in religious terms ("It was God's will"). LW then 267 --For mathematics is after all an anthropological phenomenon. Thus we can recognize it as the essential thing about a great part of mathematics (of what is called 'mathematics') and yet say that it plays no part in other regions. This insight by itself will of course have some influence on people once they learn to see mathematics in this way. Mathematics is, then, a family; but that is not to say that we shall not mind what is incorporated into it. 268 Man könnte sagen: verstündest Du keinen mathematischen Satz besser als Du das Mult. Ax. verstehst || das Mult. Ax., so verstündest Du Mathematik nicht. NB: "the Multiplicative Axiom" is an older term, used in the Principia Mathematica, which LW was familiar with, for what is better known as the Axiom of Choice (cf. Linsky (Linsky 2021), §11). 269 An interesting but rather technical paragraph follows in the manuscript, which would take too much space for too little benefit to comment on in the context of this study. asks the following, familiar question: is our calculation a mistaken one, or is it not a calculation at all? In an equally familiar way, LW then puts on his imaginary ethnographer's hat and explores what we would say if we observed such people from the outside. We would certainly have to acknowledge the differences with our own way of doing things, but we could not easily deny the fact that these people have some kind of mathematics. 270 LW then refers to the classical story of the king who decreed that all visitors to his city must state their business there and would be hanged if they lied, and the case of the visitor that states that he came to be hanged. 271 The king will try to make sure that this unpleasant situation could no longer occur. LW asks: what kind of measures could the king take? what kind of problem is this? And he suggests that the problem is similar to the question as to how one can change the rules of a game in such a way that this or this situation can no longer occur, and that that is a mathematical problem [Aufgabe]. It is hard not to make a direction connection with Gödel's work: what LW appears to suggest is that -rather than the dramatic conclusions that are usually inferred from Gödel's results (to begin with by Gödel himself)-one could also take note of the fact that this is the result of constructing this kind of propositions, and simply stop constructing this kind of propositions if one doesn't like this kind of results. 
272 So: whereas Gödel's own interpretation of his famous results remains within (or pretends to remain within) the syntax-cum-semantics of the formalism, LW's approach operates entirely at the level of the pragmatics of math. LW then points out that it would be weird to turn the issue of the demarcation of math into a mathematical matter. 273 As LW pointed out earlier, the identity of math is an anthropological matter: its meaningfulness depends on its being deeply embedded in real-life everyday practices, and whether something is considered part of math is not a simple question: it depends on many factors and many different answers are equally possible. It is simply true that Gödel has not understood this. Whereas Gödel and the later LW may -perhaps 270 -Hier ist ein Widerspruch: Aber wir sehen ihn nicht & ziehen Schlüsse aus ihm. Etwa auf mathematische Sätze; & auf falsche. Aber wir erkennen diese Schlüsse an. -Und bricht nun eine von uns berechnete Brücke zusammen, so finden wir dafür eine andere Ursache, oder sagen, Gott habe es so gewollt. War nun unsre Rechnung falsch; oder war es keine Rechnung? Gewiß, wenn wir als Forschungsreisende nun die Leute betrachten || beobachten, die es so machen, werden wir vielleicht sagen: diese Leute rechnen überhaupt nicht. Oder: in ihren Rechnungen sei ein Element der Willkür, welches das Wesen ihrer Mathematik von dem der unsern unterscheidet. Und doch würden wir nicht leugnen können daß die Leute eine Mathematik haben. Was für Regeln muß der König geben, damit er der unangenehmen Situation von nun an entgeht, in die ihn sein Gefangener gebracht hat? -Was für eine Art Problem ist das? -Es ist doch ähnlich diesem: Wie muß ich die Regeln dieses Spiels abändern, daß die & die Situation nicht eintreten kann. Und das ist eine mathematische Aufgabe. 271 Cf. a footnote in the standard edition Wittgenstein & von Wright (ed.) & Rhees (ed.) & Anscombe (ed.) 1978(3) --Remarks on the Foundation of Mathematics, p. 400. 272 In the same way, we stopped writing 0/x. See appendix 4.1(D). 273 Aber kann es denn eine mathematische Aufgabe sein, die Mathematik zur Mathematik zu machen? Kann man sagen: "Nachdem dies mathematische Problem gelöst war, begannen die Menschen eigentlich zu rechnen"? surprisingly?-agree on the fact that math cannot be reduced to its formal representation in an axiomatic system, they are diametrically opposed on what can be concluded from this observation and where to take it from there: for Gödel, the answer had to be the existence of a mathematical universe out there, but for LW, the answer is anthropological: the validity and value of mathematical results of mathematical practices, mathematical concepts, mathematical results are a matter of the way they are embedded in the heterogeneic mess of real-life practices. 274 (F) axiomatic formalism as a tumor: Ms-161,59v-63r, not dated, probably 1941 275 LW starts from the following imaginary scenario, very similar to the one we encountered in paragraph (E) hereabove: suppose some of the results of our calculations turn out to be based on a hidden contradiction. He then asks: well, does that make the results illegitimate? By now, it should be clear to the reader of the present study that this question should be interpreted in the context of LW's 'anthropological' approach to math and that it's perfectly imaginable that people operate with procedures that we would perhaps find inconsistent. 
This time, LW wants us to explore the case in which someone would want to avoid adopting such results and fears that some of them could sneak through. The reply to that could be: well, that is an idea that can serve as the example for a new technique [Kalkül], the way one can have an idea for a new game. 276 So: LW insists on the fact that making sure that the suspect type of propositions no longer seeps through would be a new game, different from the first one, in which the techniques aimed at filtering these out played no role. Again, LW shows that the way we do math is only one out of many different imaginable ways to perform calculations within a practical context and that insisting on consistency is not necessarily at the root of all types of calculation. This brings LW to the bombshell remark that made me include this excerpt here: LW bluntly claims that Russell's paradox is not disquieting because it is a contradiction but because the whole tumor of which it is the top appears to grow out of the normal body like a cancer, without a purpose and without sense. To be clear: the whole axiomatic system is the outgrowth, the cancer, not the tiny little contradiction that is a mere part of it. As in the excerpts we studied previously, LW denies that contradictions are inherently problematic. As in 2.3(A) and 2.3(D), LW gives room to the obvious panicky objection: "But this is a contradiction! You can't just let a contradiction stand!" And again, LW's reaction is: "Why not?", after which he points out a few examples of harmless interpretations of what looks like contradictions. 277 Immediately after this, LW illustrates his point with a characteristic analogy with music: we can sometimes immediately acknowledge that a certain musical phrase 'logically' follows another musical phrase. Just as in the case of the solution to a mathematical problem, we don't doubt that this is the correct solution. Still, it is easy to imagine, at least in the musical case, that other solutions would have worked equally well. Similarly, we can be convinced that two names go well together. 278 So, LW highlights the fact that what we accept as a 'logical' sequence need not necessarily be of a propositional nature. Then, he comes back to the theme of contradictions in formal systems in a way that highlights the pragmatism of his approach. LW evokes the following little dialogue: A: "We make inferences that respect all the rules, but suddenly a contradiction shows up. The conclusion must be that the set of rules is useless, because the contradiction wrecks (literally: topples) the whole game". 279 B: "Why do you allow the contradiction to wreck the game?" 280 This last reaction to that conclusion is simple but far-reaching in its radical pragmatism (cf. (D) above): it implies that the consequences of a contradiction in a formal system depend on a decision on the part of the user/player, that there is no logical 'must' here, no natural necessity at work. A insists that he nevertheless wants to be able to go on drawing inferences mechanically according to the rules, without ever arriving at contradictory results; 281 B replies: well, what kind of foresight do you want? 282 This last intervention points out that expectancies can be of different kinds: a formal mathematical proof offers a different kind of foreseeability than a proper understanding of whatever one needs foreseeability about. For instance: for most practical purposes, there are no foreseeable problems at all with the existence of Gödel-type inconsistencies.
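For readers who want the standard logical background to A's worry, the following minimal sketch, which is textbook material in my own notation and not anything LW spells out in this excerpt, shows why a contradiction is classically taken to 'topple the whole game': once both a proposition and its negation are derivable, every proposition whatsoever becomes derivable (ex falso quodlibet), Russell's set being the stock source of such a contradiction.

% A minimal sketch in standard classical notation (plain LaTeX, no packages needed;
% the letters P, Q, R are mine, not LW's).
% (1) Russell's set and the contradiction it generates:
\[
  R = \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \Longleftrightarrow R \notin R
\]
% (2) Why a contradiction classically yields an arbitrary proposition Q ('explosion'):
%     disjunction introduction gives P \lor Q from P; disjunctive syllogism gives Q
%     from P \lor Q together with \neg P.
\[
  \{P,\ \neg P\} \vdash P \lor Q,
  \qquad
  \{P \lor Q,\ \neg P\} \vdash Q,
  \qquad\mbox{hence}\qquad
  \{P,\ \neg P\} \vdash Q .
\]

Read against this background, B's "Why do you allow the contradiction to wreck the game?" targets not the derivation itself but our decision to keep playing by exactly these rules, disjunction introduction and disjunctive syllogism included, once the contradiction has turned up.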
Let us now briefly reconstruct LW's line of thought throughout this rather complex excerpt, paraphrasing: • the whole excerpt is about scenarios in which we operate with a hidden contradiction; • LW points out that it remains to be seen whether such an inconsistency matters and how we best deal with it; • LW then turns to the example of Russell's paradox and states that it is part of an outgrowth that is not part of the normal functioning of the body, which is good news in that the contradiction as such is harmless, but bad news in that there is a tumor; • LW then gives a few examples in which things follow each other 'logically' in an organic way, and then addresses the way in which he believes that the formalism that gives rise to Russell's paradox is not organic: for all practical purposes one can just decide to ignore it. LW then adds the following brief 'meta' observation that is suitable as a conclusion to this section: The philosophical approach [Betrachtung] to mathematics has a different point from the mathematical approach to mathematical propositions and proofs. This rings true, in several ways: 281 Aber ich will, daß man nach den Regeln soll mechanisch weiter schließen können, ohne je zu widersprechenden Resultaten zu gelangen. 282 Nun, welche Art der Voraussicht willst Du? -LW's approach notices and questions aspects that are simply presupposed and not noticed by mathematicians; -most mathematicians will continue to not see the relevance of LW's remarks when we point them out to them. What is it that interests the philosopher but not the mathematician (or any other technician)? I believe all the above is a good illustration of the fact that the philosopher (in this case LW) is interested in what is presupposed by the practice, what is given (cf. section 3.3(C) below), whereas the mathematicians are happy to unthinkingly work with whatever is given to them.
Conclusions to Part 2: LW's critical remarks
The present section 2.4 is the conclusion to Part 2 of this study. Given the fact that Part 3, the general conclusion to the whole study, follows immediately after this section, the scope of section 2.4 will be limited to the following paragraphs: -in section 2.4.1, I give a summary overview of the trajectory we covered in Part 2 and how it fits in with the overall structure of this study; -in section 2.4.2, I summarize a few recurrent topics in LW's PhilMath, as they appeared in the above: (A) the idea of expansions beyond everyday mathematical practices and the resulting loss of meaning. It may be useful to repeat that I am in principle dealing with LW's later work, but that the lines of thought that I am focusing on here show remarkable continuity, which made it possible to include passages from different periods in LW's development.
Part 2 (overview)
At the end of Part 1, we had established that for LW meaning/sense could be defined as 'embedding in everyday practices', and conversely nonsense equals a lack of embedding in everydayness. We also pointed out that the only way 'nonsense' can be problematic is when it appears to make sense, but doesn't (gibberish that sounds like gibberish is never a problem, at least not a philosophical one). In the case of mathematics, this notion of 'embedding in everyday practice' takes the shape of embedding in what is called "applications", i.e. everyday practices that involve mathematical techniques. 283 The following question immediately imposes itself: what counts as everyday?
We observed that, whereas the idea of everydayness has a certain intuitive appeal in terms of the naïvely prototypical (archetypical?) examples it certainly covers (buying apples, building a house, …), it turns out to be highly problematic at even the most obvious critical scrutiny: why can'tsay-metaphysical discourse be part of the philosopher's everyday? why isn't transfinite stuff part of Cantor's everyday? etc. We came to the conclusion that everydayness was not part of LW's results, but part of his agenda. In section 2.0 we found out that everydayness is part of a cluster of deep-rooted ideas, which helps explain why LW wants to work with this somewhat suspect concept in the first place: LW's concept of everydayness was a core ingredient of the culture-critical agenda that links LW's philosophical work with most notably Spengler, but also with what has been called Lebensphilosophie in general. We pointed out that authenticity (as opposed to fakeness) was a (perhaps 'the') core concept in LW's critical lines of thought, both in his philosophical work, including his PhilMath, and in his private life, as documented in the biographical literature. Authenticity, everydayness and sense/meaningfulness (and their opposites: nonsense, disruption of the link with the everyday, and fakeness) turned out to be structurally related and therefore mutually supportive concepts in LW's critical lines of thought. For an overview of the vocabulary LW uses in this context, see 2.0.3(C). The rest of Part 2 consisted of running commentaries based on close readings of three series of selected passages taken from LW's manuscripts: • in section 2.1, we read three passages in which LW discussed diagonal methods, which allowed us to illustrate a number of features of LW's critical remarks at large: the ethicalaesthetical vocabulary, the notions of pretense, fiction and (what I called) bad faith and bad taste; the idea of expanding beyond everydayness and resulting loss of meaning; the anti-monist and anti-foundationalist strands we already observed in section 1.3 (cf. 1.3(F)). • in section 2.2, we focused on three passages in which LW criticizes set-theoretical parlance; these passages illustrate the Spenglerian strand in LW's philosophy, as well as the degree of harshness his criticism sometimes reaches; • in section 2.3, we read a series of passages that discuss contradictions in formal systems and other Gödel-related (or seemingly Gödel-related) topics; many of the previously observed strands reappear, but the anti-monist and 'bad faith'-related strands acquire greater depth in these analyses. A few recurrent topics in Wittgenstein's PhilMath (A) mathematical meaning as embedding in everydayness vs. expansions of mathematics and disconnection from everydayness One of the core ideas in LW's work as a whole is the idea that meaning is a matter of embedding in everyday practice, and conversely that discourse that is not well-embedded in everyday practice lacks meaningfulness, i.e. does not make sense. In the case of math, LW starts from the idea that the meaningfulness of mathematics depends on its relation with everyday applications. Conversely, the problem with the use of certain terms in PhilMath is the discontinuity with the everyday use of these terms. Applied mathematical techniques are straightforwardly unproblematic: their meaningfulness is guaranteed by the fact that they are embedded in everyday practices and are an integral part of the Form of Life of the practitioners. LW goes to great lengths (cf. 
section 1.3) to point out that even very weird math-like techniques would make sense if they were part of real-life practices. In section 3.2.1(C) below, I argue that the historical and ethnographical records abundantly show that that is actually the case. Total discontinuity from everyday practice would not be a problem either (it would perhaps be uninteresting, but not wrong).284 Thus, radical formalism would be perfectly fine, as such (perhaps uninteresting, but not wrong). If math was really construed as 'the study of formal systems' (to borrow Haskell Curry's famous definition), the issues that LW objects against would not occur. But in reality, mathematics is not 'the study of formal systems': it's a matter of obvious fact that more "advanced" mathematical techniques and concepts are expansions of more basic techniques and concepts, which in turn are rooted in real-life 'applications', and it is understood that anything we would call math should at least cover the natural numbers and basic arithmetic operations on these numbers, as well as basic geometry. 285 So, purely formal math would not lead to any problems, and purely applied technique does not either. The trouble begins when one wants to have the cake and eat it, i.e. at the same time (1) redefine mathematical concepts in terms of a newly invented axiomatic formalized system, and (2) maintain that these are still the same concepts (see section (B) here below for a few examples), especially when one wants to (3) present these newly invented axiomatic systems as somehow underlying, or -even worse-as the foundations of, pre-existing techniques. For LW, the main problem is the pretense, i.e. the fact that these discourses within PhilMath pretend that these axiomatic formalisms are something they are not (see section (B) here below). There is a link between this line of thought about expansions as add-ons and LW's anti-monist and anti-foundationalist objection to the claim that math (as it is) by nature is and always has been a single system, such as the ones that the early 20 th century foundationalist efforts strove towards. As an alternative to this idea, LW introduces the idea that more advanced math is always an expansion of basic technique (in its turn deeply embedded in applications) and should not be presented as the discovery of the principles that somehow underlie the basic techniques. In other words: expansions are just that: expansions, add-ons, new techniques alongside (as opposed to above or beneath) the old ones. 286 This corresponds to PhU §124 (already discussed in section 1.2.2(A) above), in which LW states that a so-called 'leading problem of mathematical logic' is for him a problem of mathematics like any other. This reasoning underlies LW's criticism of set-theoretical verbiage, the interpretation of diagonal methods, but also the way in which the function of foundational systems in general is articulated in mainstream accounts. (B) fakeness (fictionality, pretense, fake depth, bad faith, .. .) in math The problem with expansions -according to LW's view-occurs when one pretends that what one means by a certain concept hasn't changed although one now attributes attributes [sic, fs] to that concept which are incompatible with what that concept used to mean: • 'number': for LW, the expansion of the term 'number' stops to be a natural one when it loses its connection with counting and basic arithmetical operations, i.e. 
when it is forced to include the artificial irrational numbers that only exist because of the theory of the continuum, not because they actually occur in actual calculations; • 'ordering': similarly, it is misleading to pretend that ordering fractions still means the same thing when one talks about 'ordering all fractions': it should be obvious that there is no such thing as 'ordering all fractions' in the normal sense of the word 'ordering': there is no possible real-life method to go about this; 287 the expression 'the next bigger fraction' is simply meaningless as long as it does not correspond to an actual technique; • 'line': LW objects to the set-theoretical idea that a line is a collection of points, as this conception disrupts the link with the constructive procedure ('law'/'Gesetz') that has always been the defining feature of a line; • 'set': there is no a priori problem with expanding the notion of set, but it is disingenuous to pretend that there is nothing weird about an infinite set or the set of all sets, or similar constructions. 288 In the same vein, LW also blames mainstream set-theoretical discourse for including completely fictitious elements: • fictitious symbolism: infinitely long numerals are not actually calculated with and are only posited as a way to fill out the theory of the continuum; • fictitious formal systems / fictitious code: formal systems such as Russell's (let alone Gödel's) are too unwieldy to actually be used for the purposes of actually proving actual theorems and are not used for that purpose in actual practice; • fictitious constructions: according to LW, Dedekind's cut has never been a way to construct √2, in that it presupposes the notion of √2 in order to be intelligible; therefore, as a constructive procedure, it is a fiction; • fictitious methods: a method for ordering infinite sets does not actually exist. There is something particularly infuriating about the conjunction of the idea of "infinite numerals" and the idea of "infinite minds" (cf. section 2.2(B)): from LW's point of view, the idea that irrationals are represented by a numeral that happens to be infinite and that we happen to not be able to comprehend because our minds happen to be finite, is gibberish in the sense of mere word salad: it serves no function within the actual technique, and it means nothing outside it, either (except perhaps in the context of very particular religious contexts, which would lead us to a whole other can of worms; see Appendix 4.3(B1)). Finally, LW also objects to the pretense that something is awe-inspiring, mysterious, deep, ... when it is actually trivial, for instance the vertiginous image of adding more and more reals 287 And no, this is not a matter of the finite amount of time we got to do it, or the finitude of our brain power. There simply is no algorithm or other method to go about this, unless one expands the notions of 'method' untill it doesn't mean anything. 288 The following examples could be included in this list of objectionable expansions of mathematical concepts, although they happened to not occur in the excerpts we analyzed above: "number", as applied to strings of symbols in Gödel's code, and "arithmetic", as applied to operations with Gödel's code, are a stretch:: it is prima facie plausible that Gödel's code did not arise from 'normal' arithmetical concerns but was invented for the sole purpose of making the logical, meta-mathematical, philosophical point Gödel wanted to make. 
NB that, in BGM 7 §22, LW problematizes the use of the word 'number' for coded strings such as : "But it must of course be said that that sign need not be regarded either as a propositional sign or as a number sign. in a smaller and smaller interval of the continuum, or the drama around paradox-like contradictions in axiomatic systems. I have pointed out that within the context of LW's oeuvre, this type of objection is an avatar of the same concern about authenticity that also shows in LW's existential aversion of (or struggle with) theatricality and vanity. (C) demarcation One of the recurrent rhetorical devices in the passages we analyzed in the above (1.3 and 2.1, 2.2, 2.3) involves the issue of the demarcation of math: LW describes an -often made-up, but not necessarily unrealistic-scenario in which this or that math-like technique is applied in this or that practical context, and then asks the question: 'Is this still math'? This appears to be a genuinely open question and it doesn't matter much whether each individual reader (or LW himself, for that matter) would be tempted to reply 'yes' or 'no' to it in any individual case. So, the point appears not to be that LW argues for this or that cut-off point, but rather that wherever one chooses to put that cut-off point, it will be a more or less arbitrary, or at the very least contingent, decision, not a fact of nature. The heterogeneity of the applications (and of the techniques, for that matter) shows that there is no single, principled, clear-cut, 'natural' way to demarcate math from other practices. LW's way of conceptualizing this heterogeneity is in terms of his famous notion of 'family' (as in 'family resemblances'): there is no single criterion (or set of criteria) that determines whether a certain item is included under the concept, but that does not mean that just anything can be included either. (D) set-theoretical parlance as a symptom of sick times In section 2.0.0 and section 2.0.2 above, we pointed out a few important resemblances between LW's work and Oswald Spengler's culture-critical ideas, whether by direct influence or shared cultural backgrounds. The most important of these Spenglerian strands is the notion that the dissolution of the organic unity of a culture leads to a lack of intelligibility of the products of the society in question. LW and Spengler shared the idea that Western culture had reached that point in the 19 th century: from the early 19 th century onwards, Western culture had started to decline into what Spengler called 'civilization', which showed in the arts, in politics, in the sciences, etc. This problem of 'loss of intelligibility' through a loss of organic embedding in a culture was perhaps the main problem for LW, both existentially and philosophically. It is important to understand that for LW, 19 th and 20 th century mathematics is a case in point and that LW's criticism of certain types of discourse on mathematics, most notably set-theoretical parlance, but also certain ways to interpret Gödel's results and foundationalist discourse in general (incl. the logistic approach to the Grundlagen issue, in which LW took part himself), should be understood in these terms. In LW's PhilMath, the Spenglerian strand within his critical approach is articulated in terms of the fact (?) that discourse about math had become disconnected from (1) actual mathematical technique (Kalkül) qua operations with symbols and (2) actual everyday applications. 
LW did not always clearly distinguish between both, sometimes mentioned only one of both aspects, and sometimes mentioned them both. 289 It is also worth mentioning that the Spenglerian strand in LW's PhilMath shows a remarkable continuity: the remarks about set theory that I quoted in section 2.2 date from 1929, 1931 and 1938 respectively (and for those who want to believe that LW grew out of this in later life, we can refer to extremely harsh and obviously Spenglerian remarks about Mahler dating from 1948; cf. section 2.0.2(A) above). And some time after March 4 1944, LW still wrote: "The curse of the invasion of mathematics by mathematical logic ... [der Fluch des Einbruchs der math. Logik in die Mathematik...]" (Ms-127,186 [= BGM 5 §46]). (E) LW on paradoxes, contradictions and the functions of axiomatic systems (a.k.a. "LW on Gödel") At the beginning of section 2.3, I felt the need to explain that most of what the literature interprets in terms of LW's critique of Gödel's famous proofs, is -according to LW himselfnot that: at no point does LW deal with the technicalities of Gödel's proof and most of the time his criticism is not even directed specifically at Gödel or his work (LW's 'notorious' remarks do not mention Gödel even once, only Russell and Frege) and in those contexts in which LW does mention Gödel, he often repeats that he is not interested in the specifics of those proofs and that the philosophical importance of Gödel's proof is that it attracted the attention towards features of mathematics that are much more general and apply equally well to much more basic areas of math than Gödel's work. 290 289 Cf. Severin Schroeder's analysis in terms of 'two strands' (see section 3.1.1(B) below). I am not sure if Schroeder is right in being so categorical about the distinction between these 'strands' and their chronological succession. Nothing much depends on it for the present purposes. I do believe that from early on in his 'intermediate period', i.e. even before his 'anthropological' approach acquired its more mature shape, LW conceived of 'Kalkül' as a reallife human activity, not as an abstract formal process. I mean: it appears to me that the distinction between a 'formal' strand or period and a -what I call-pragmatic strand or period in LW's thought is an artefact of the doxogrpahic approach, i.e. the result of wanting to project conceptual distinctions onto a text in which they do not play a real role. NB: the undeniable link between LW's criticism of set-theory and the Kraus-and Spengler-related 'lebensphilosophische' strands [sic, fs] in LW's thought may be an argument for the continuity between earlier and later stages in his thought with respect to the 'real-life' embedding of mathematical technique and hence for a nonformalist -or not really formalist-interpretation of 'Kalkül', even early on in his 'intermediate period. Pace not only Schroeder (Schroeder 2021), but also Rodych (Rodych 2018). 290 For a very similar result, see Floyd 2021 (Floyd 2021), pp. 71-72: "Often Wittgenstein is regarded as quarreling with Gödel, but that is because he is too often read as a radical finitist or conventionalist. His remarks certainly LW does not have a problem with Gödel's technique as such (it is an undeniably virtuoso piece of mathematical logic) and does not attack the internal workings of the proofs. 
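Purely as background, and emphatically not as part of LW's own exposition, the 'internal workings' at issue can be compressed into a couple of lines of standard notation: Gödel codes syntax into arithmetic so that a sentence can, via the diagonal lemma, 'say' that it is not provable, and he then shows that a consistent, sufficiently strong, effectively axiomatized system cannot prove that sentence, nor (given ω-consistency, or, via Rosser's refinement, mere consistency) refute it. The symbols F, G_F and Prov_F below are my shorthand for such a system, its Gödel sentence and its arithmetized provability predicate.

% Textbook statement of the first incompleteness theorem (background only; LW does not
% engage with these details). Requires amssymb for \ulcorner, \urcorner and \nvdash.
\[
  F \vdash G_F \Longleftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
  \qquad \mbox{(diagonal lemma: } G_F \mbox{ `says' that it is unprovable in } F\mbox{)}
\]
\[
  F \mbox{ consistent} \;\Rightarrow\; F \nvdash G_F ;
  \qquad
  F\ \omega\mbox{-consistent (Rosser: consistent)} \;\Rightarrow\; F \nvdash \neg G_F .
\]

None of this is what LW's remarks bear on: his questions concern what such a result is taken to mean for mathematics as a practice, not whether the derivation goes through.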
LW attacks at a very fundamental level, at which the specificity of Gödel's work is not the heart of the matter: • LW questions the legitimacy of coding back and forth from prose into a formal code and back (and Gödel-code is not necessarily any different from Russell-code in this context; though Gödel's insistence that his code is 'normal arithmetic' is particularly infuriating); 291 • LW objects against the fictionality of these formalisms: nobody actually uses them to do actual math; the idea that they underlie actual math is a fiction; • LW questions the importance of contradictions of the type in question (Gödel's result, but also Russell's paradox/contradiction, ...); 292 • LW attacks the idea of the unity of math (monism) which is presupposed in the usual interpretations: if math is not viewed as a single system, concerns about consistency lose their central status and a lot of their urgency and importance; • LW attacks the very idea that math would need a foundation and points out that whatever foundations are proposed, they would always be less secure than the everyday applications that they grew out of. LW comments on the relationship between formalism and practice in general, and the relation between the practice and the prose that surrounds it. His point is that the problem is never located in the syntax of the formalism, not even in its semantics, but in the pragmatics of the formalism, more precisely the way one chooses (yes, chooses) to integrate it in one's practices and the consequences one chooses to attach to it. 293 In the same way that the Liar's paradox appears to not fundamentally (or rather: not at all) imperil the usability of natural language and for all practical purposes can safely be ignored as irrelevant and therefore harmless, 294 it remains to be seen whether a contradiction in an axiomatic system actually has any consequences for real-life working mathematicians' math at all, and LW suggests that for most struggle to place Gödelian incompleteness into his way of thinking. But if we take seriously his Later views, we see what the struggle is about, and it is not about refuting Gödel". I come back to this quote in section 3.1.1(B) below. 291 Ms-124,89 : 'Der Satz sagt, daß diese Zahl aus diesen Zahlen auf diese Weise nicht erhältlich ist.' -Aber bist Du auch sicher, daß Du ihn recht ins Deutsche übersetzt hast? Ja gewiß, es scheint so. -Aber kann man da nicht fehlgehen? 292 Cf. also LW (and Turing) on the nonsensicality of paradoxes: LFM lecture 20, pp. 206-7 quoted in section 1.2.1(C) above. 293 It's like proofs of the existence of god: nobody ever has been convinced by one of those either. It's perhaps not a coincidence that Gödel was interested in this, as well. See (Floyd and Kanamori 2006); (Park 2018). 294 It is worth repeating that LW does not say that Gödel's proof somehow involves the Liar's paradox. The analogy is that both are not an integral part of the normal functioning of the language. purposes Gödel's results can safely be ignored. 295 Gödel's results are also similar to paradoxes in that neither has a function in real-life practice: the Liar does not serve any real purpose (except perhaps to entertain and/or annoy); Gödel's stuff did not emerge from proper arithmetic but is a little magic trick that serves only one purpose. 
LW's main point is not that there are or aren't actual contradictions in this or that axiomatic system, formally speaking; LW's point is that nothing that could occur in formal axiomatic systems, not even contradictions, could ever have any impact on what he considers the core of proper mathematics, i.e. the hurly-burly of actually applied techniques. It is important to understand that it is a fact, a hard empirical, anthropological, fact that nothing important needs to depend on problems with axiomatic systems: no set-theoretical problem will ever have any impact on basic (and not so basic) arithmetic, geometry, trigonometry, calculus, etc., not even contradictions in set-theory. And this is where LW does attack Gödel: for LW, the dramatic, far-reaching consequences that Gödel attaches to his results are unwarranted, in that they show a lack of understanding of the (relatively peripheral) status of axiomatic systems in general, as well as of the (central) role of applications. In one of the few passages in which he directly addresses Gödel (see section 2.3(E) above), LW's main objection is that Gödel's conceptualization of what his own work is about, does not take into account -what I call-the pragmatics of math, i.e. how math is defined by the way it is embedded in real-life practices, which leads him (KG) to wrongly consider math a single system and to a wrong appreciation of the importance of contradictions. 296 Thus, LW was not in the first place concerned with any formal proof (which is just a piece of Kalkül like any other piece of Kalkül, and thus without particular philosophical importance); for LW, the problem is that it is not clear to what extent what pretends to be a proof of a theorem concerning the completeness of consistent formal systems (assuming that these terms mean what they always mean), is not actually a revolutionary reinterpretation of what the concepts 'consistency', 'proof', 'theorem', etc. traditionally mean. 297 295 Except for limited applications in informatics and cryptography, it can be expected that Gödel's stuff never will have any consequences indeed. 296 In a certain sense, LW's alternative views on the functions of 'Widerspruch' or his alternative interpretations of Gödel-like endeavors could be interpreted like his other made-up examples, in the same way as what he does with mathematical applications: in the end, he only tries to attract our attention to the given: to what and how much is already given, before we even start to evaluate the truth of our theoretical conceptions. So, there is an intrinsic link between the critical aims behind his approach, his use of made-up examples and his emphasis on alternative, even fringe applications, 297 Viewing a proof as an object is an example of a radical departure from the everyday meaning that proof had in earlier stages of the practice. Dutilh Novaes 2012 (Dutilh Novaes 2012), p. 68: quoting Netz 1999 (Netz 1999), on the importance of persuasion at the heart of ancient Greek math and logic; similarly, p. 78: "In a slogan, an argument, proof, or demonstration is a discourse; a calculation is a procedure." It is important -for the purpose of this study-to keep in mind that LW's concern with aesthetic/ethical authenticity ("things should be what they appear to be", see sections 2.0.2(B) and 2.0.3(C)-(D) above) is one of the main points of LW's 'notorious' remarks. 
LW words his criticism in moral terms: he repeatedly seems to feel genuine moral indignation towards what he considers bad faith arguments: the idea that contradictions within axiomatic systems are presented as a genuine threat to the foundations of mathematics looks like a childish little trick to LW and he can't believe that it is actually taken seriously by those who propose it. 298 As opposed to most commentators, I take LW's word on his attitude towards Gödel seriously and literally: I believe he is genuinely 299 not interested in the specifics of Gödel's proof. So: even if it were true that LW has not understood (perhaps not even read) KG's work, it doesn't follow that the above objections are ipso facto irrelevant for what they are: they are not intended to attack any of the specifics of Gödel's work anyway. And from that, it does not follow that LW's remarks do not apply to Gödel's work, on the contrary: they apply to Gödel's work in the same way they apply to Frege's work or Russell's work or to any other work in mathematics that operates with a formal axiomatic system. And of course, they may still apply more clearly to Gödel's work than to any other work; after all, that was Gödel's main merit, according to LW: to have created a situation in which this aspect of formalism in general became obvious. The critical agenda underlying Wittgenstein's Philosophy of Mathematics Across the various topics discussed here above (as well as in sections 1.3), there are a small number of recurring lines of thought that systematically attack the same targets, and in that sense embody the agenda that underlies LW's PhilMath. (Berto 2009) Berto 2009 --The Godel Paradox and Wittgenstein's Reasons on 'naïve proof' and truth, p. 212: My strategy exploits an idea proposed by Richard Routley and Graham Priest in various influential essays,6 which allows us to interpret Gödel's proof precisely as a paradoxical derivation. The core thought is to see what happens when one applies G1 to the theory that captures our intuitive, or naïve, notion of proof. By 'naïve notion of proof' Routley and Priest mean the one underlying ordinary mathematical activity: 'proof, as understood by mathematicians (not logicians), is that process of deductive argumentation by which we establish certain mathematical claims to be true ' [Priest, 1987, p. 40]. Since Hilbert, formal logicians treat proofs as purely syntactic objects. However, proving something, for a working mathematician, amounts to establishing that some sentence is true. When we want to settle the question whether some mathematical sentence is true or false, we try to deduce it, or its negation, from other mathematical sentences which are already known to be true. 298 LW's diagnosis in terms of -what I call-bad faith is in a way corroborated by what has become known on the psychodrama of Gödel's development: --> see appendix 299 And even if it could be proved that there is an apologetic strand in what LW says (i.e. if he actually didn't understand Gödel's work and said what he said as a cover-up or justification), this would not in fact change anything. (A) anti-foundationalism LW very explicitly opposes the idea that the efforts of his contemporaries and the generation that came just before him (Russell, Frege, Hilbert, ...) 
-and let's not forget that his own early philosophical activity under (or perhaps rather 'with') Russell was an integral part of that effort-to ground math in a single coherent system were not only unsuccessful but also fundamentally misguided. LW frontally attacks the idea that axiomatic systems play a foundational role at all (this is presented as a matter of fact): 300 (1) De facto, axiomatic systems do not actually offer a foundation for actual mathematical technique, as applied in actual practice, nor does it actually unify the heterogeneous collection of mathematical techniques into a single system (cf. section (B) here below, on LW's anti-monism). (2) De facto, mathematical techniques don't need foundations; in actual fact, applied mathematical technique does not become less or more valid, legitimate or secure by having or not having such a 'foundation' (for things to go right, we need 'a good angel' anyway, says LW). 301 LW's anti-foundationalism has been picked up on by most if not all commentators (Schroeder (Schroeder 2021), §3.6; Rodych (Rodych 2018), §2.5.1 et passim). My only contribution here is to emphasize the links between this aspect on the one hand and (a) LW's underlying philosophical agenda in terms of authenticity and fakeness and the links with the culturecritical strands inspired by most notably Kraus and Spengler, as well as (b) LW's pervasive pragmatism , on the other hand. I also think the importance of this stance appears to be underestimated or not taken seriously enough in the literature, in that it is not a separate strand within LW's work that can be isolated from other strands, but an omnipresent part of what motivates LW's philosophical activity, even beyond his work on mathematics. Thus, for instance, LW has nothing against axiomatic systems qua technique per se (in other words: he is not an anti-formalist per se), his objections only target what is being said about their foundational function. 302 300 Cf. Schroeder, title of §3.6: "Even if we assume (for argument's sake) that all arithmetic could be reproduced in Russell's logical calculus, that would not make the latter a foundation of arithmetic". 301 LW's anti-foundationalism ties in with his pragmatism, structuralism and holism: if practice is ontologically irreducible, and if no dimension within the internal structure of practice has primacy over the other dimensions, it does not make sense to look for foundations. 302 Of course: if one rejects the foundational status of these formal systems, they may lose a lot of their appeal (cf. paragraph (C) below on 'sensationalism' vs. triviality). (B) anti-monism Several lines of thought in LW's work ultimately boil down to anti-monism. heterogeneity LW points out that as a matter of historical and anthropological fact, math is inherently heterogeneous. He does not believe that math needs to be a unified formal system to make sense; on the contrary, the various techniques one calls 'applications' do perfectly fine in isolation and -in principle, but also in actual fact-they can operate even if they are incompatible with other applications or techniques. 
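A toy illustration of this coexistence-without-a-common-foundation, mine rather than LW's, though very much in the spirit of his examples: ordinary addition and 'clock' addition give incompatible answers to what looks like the same question, yet each technique functions perfectly well within its own practice, and no practitioner feels the need for an arbitrating foundation.

% Two serviceable but mutually 'incompatible' techniques (illustrative only; plain LaTeX):
\[
  10 + 5 = 15
  \qquad \mbox{(ordinary arithmetic: counting, money, measuring)}
\]
\[
  10 + 5 \equiv 3 \pmod{12}
  \qquad \mbox{(clock arithmetic: telling time)}
\]
% Read as claims about one and the same operation, these conflict; read as distinct
% techniques embedded in distinct applications, no conflict ever arises in practice.

Whether one wants to call the second technique 'addition' at all is, of course, precisely the kind of demarcation question LW keeps raising.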
LW's preoccupation with demarcation also emphasizes heterogeneity and multiplicity, both implicitly through the abundance of examples that shows that the cut-off between what still counts as math and what doesn't is inherently unstable and more or less arbitrary, and explicitly, by pointing out that no single criterion or set of criteria really does the trick, a situation he conceptualizes in terms of the concept of a 'family' of techniques. LW's view of the continuum (?) of the reals is a case in point: LW denies that the mainstream set-theoretical conceptualization of the continuum as a line (and the line as a set of points) even makes basic sense. For LW, the continuum-approach to the reals does not represent a single system, but is a failed attempt to unify irreducibly heterogeneous techniques. axiomatic systems as post hoc add-ons LW also simply and explicitly denies that math is a systematic whole (this is again presented as a matter of fact). 303 LW insists on the fact that axiomatic foundational systems are not underlying the heterogeneous mess of actual mathematical techniques, but should be viewed as add-ons, as separate techniques alongside the old ones. According to LW, axiomatic systems also fail to really undo the underlying heterogeneity: even if one codes all the different techniques into a single formal system, they retain their identity (trigonometry, ....). consistency / contradictions LW's approach to consistency and contradictions also shows his fundamental anti-monism about math: he argues repeatedly and extensively that the presence of contradictions need not be a problem: whether the contradiction has an impact or not remains to be seen on a case by case basis. This attitude boils down to a de facto pluralism about math: consistency is an important "foundational" issue if (and only if) you believe that math is and has to be a unique and unitary system; the idea that contradictions can be dealt with locally and need not have consequences for math as a whole implies that math is not viewed as a single system. 304 (C) anti-sensationalism, anti-pathos, anti-kitsch Throughout our analyses, we encountered a positive valuation of the trivial vs. an aversion towards sensationalism, ostentatiousness, pathos and vertiginousness, which is clearly related to the emphasis on style in LW's personal ethics, which we focused on repeatedly throughout section 2.0. Thus, we pointed out several cases in which LW blames mathematicians and/or philosophers of mathematics for suggesting depth in their discourse that is actually not there, for presenting what are unforeseen complications with the rules of a game we invented, as awe-inspiringly deep facts of nature, and for going for cheap thrills by invoking mystery by the use of (inappropriate) vertiginous imagery, etc. LW often portrays these aspects of mathematical discourse as cheap, childish and ridiculous. The harsh, indignant terms that LW uses to express his irk with these phenomena can be understood if we take into account the ethical-aesthetical values that are at the bottom of LW's philosophical drive. LW blames the founding fathers of set-theory not merely for being childish and tasteless, but also for their willing participation in the decline of western culture. (D) anti-exceptionalism The later LW (on whom we focus here) argues extensively against the special (crystalline, pure, unique, unitary, interesting, mysterious, ...) 
status of math and logic which was prevalent in the logistic framework within which he worked when he started out. The alternative that the later LW offers is to view math as -in his own words-an "anthropological phenomenon", which implies that mathematical practices are like any other practices, intertwined with non-mathematical practice in real life; that mathematical words are like any other words, mathematical symbols are like any other symbols, mathematical agents are like any other agents. 305 LW also points out the reason why the outlook of 'most mathematicians' is so different ('unphilosophical'): their acceptance of prejudices, superstitious presuppositions about their own trade, enforced by their training. 304 Unless, of course, one is ready to let go of consistency as such. I don't seem to recall any such argument on the part of LW. 305 I am tempted to add: "And mathematical objects are like any other kind of objects" (cf. section 1.1.2(C)), but this is still supposed to be a conclusion to Part 2 of this study and no such claim was made in the passages from LW's work analyzed above. Part 3. General conclusions It bears repeating that we have to make a clear distinction between the following two questions: (1) what is LW actually saying? and ( 2) is he right / do I agree? The polarizing nature of LW's legacy, especially in PhilMath, has for a consequence that Wittgensteinscholarship is not very good at distinguishing between (1) and (2). Those who have a vested interest in LW's status as a great philosopher will probably not be inclined to spend time studying aspects of LW's work that they are not ready to defend; those who dislike LW a priori will be quick to dismiss whatever aspects that are easy to dismiss. I tried/try to point out the internal coherence of LW's lines of thought for what they are, not necessarily even bothering with the second question, trying to be charitable, but certainly not assuming that LW necessarily has to say things that I can agree with, let alone things that are readily acceptable to a 21 st century readership versed in present-day PhilMath. In the present 'General conclusions', as opposed to the conclusions to Part 1 and Part 2 in which I mostly remained focused on LW's actual text for its own sake, I attempt to articulate how (my reading of) LW's work may contribute to a few somewhat broader topics: • in section 3.1, I discuss the potential contribution of my work to various issues in Wittgenstein exegesis; • in section 3.2, I articulate a few reflections on the fact that LW should be viewed as a precursor of present-day PhilMathPract and on the ways in which his work can still contribute to that field, as well as to PhilMath at large, especially by the way he conceived of the role of PhilMath and its relation to mathematics itself and the history of mathematics, as well as of its place in society at large; • in section 3.3, I briefly discuss the very general question as to whether and to what extent Wittgenstein's philosophy of mathematics is inherently critical, suggesting that LW's philosophy, including and perhaps even especially his PhilMath, is a critique in the Kantian sense of the word, i.e. 
an approach that focuses on the question as to what is 'given', 'before' anything meaningful can even occur; • in section 3.4, I give a very brief summary of the main points I made in this study.306 Issues in Wittgenstein exegesis My interaction with the literature has been somewhat minimal throughout my analyses of LW's text, partly because of the unfinished nature of this draft, but also because of the specific focus of my analyses, which is quite different from most authors interested in PhilMath. In this section, I explore a few of the ways in which my contribution interacts with Wittgensteinscholarship at large. Remarks on exegesis: interpretation vs. use: "was LW an X-ist"? (A) introductory remarks When philosophers read philosophical texts they may want to do either or both of the following: (1) try and understand what the author's text means on its own terms; (2) see what we can do with the text, how we can learn from it, how we can use it for our own purposes. I believe there is nothing wrong with using an author's work for one's own purposes: an author can only hope that his work will be used and not just 'interpreted'. 307 However, I don't think that it is -at least in principle-controversial to say that in order to be able to do (2), it is a good idea (but not necessarily necessary) to first do (1). In any case, understanding a text does not merely imply understanding the words that are said (the semantics of the text), but also why they are said, for what purpose, serving what agenda (if any), in what polemical context (if any), etc. (in other words: one also needs to understand the pragmatics of the text). One of the more insidious ways in which one can distort the intentions behind an author's text is to make it speak out on issues it does not actually deal with. It appears that this happens a lot in LW exegesis, and particularly the exegesis of LW's remarks on mathematics. (B) the present study and existing scholarship on LW's PhilMath As already stated above, for various reasons my interaction with the literature has been somewhat problematic throughout this study, and not only due to the unfinished state of this manuscript. Let us first briefly look at Victor Rodych's entry 'Wittgenstein's Philosophy of Mathematics' in The Stanford Encyclopedia of Philosophy (Rodych 2018). A number of differences between Rodych's account and mine here are immediately noticeable: Often Wittgenstein is regarded as quarreling with Gödel, but that is because he is too often read as a radical finitist or conventionalist. His remarks certainly struggle to place Gödelian incompleteness into his way of thinking. But if we take seriously his Later views, we see what the struggle is about, and it is not about refuting Gödel. There is a richness to the ways we may articulate arithmetic, as Gödel proved, and a richness to the idea of "follows from a set of axioms in a formal language," as Church and Turing showed. Conventionalists, formalists, and logicists should admit and learn from this. For the mature Wittgenstein, the multidimensional play comes out in our articulations of mathematics in everyday phraseology and its embedding in life, in the techniques we establish and share. This is not an alternative to formalizing theories where we can, but it is not wholly reducible to that activity. Floyd arrived to this conclusion from extensive research of LW's PhilMath, working her way out to the wider context, whereas I worked my way into the PhilMath from the outside in. 
It should be reassuring that both trajectories ended up not too far from each other, at least on this point.

Another case in point would be Chapter 5, "The two strands in Wittgenstein's later philosophy of mathematics", of Severin Schroeder's 2021 monograph Wittgenstein on Mathematics (Schroeder 2021), in which Schroeder speaks of "two strands" in LW's view of math ("math as calculus" vs. "math as grammar"), between which he sees some tension (as well as a chronological aspect: the intermediate period mostly representing the "math as calculus" doctrine, and the transition towards the "math as grammar" doctrine occurring between 1937 and 1939). I have very little to say on this, except that if those are strands, then there are many more strands in LW's thinking on math, with a lot of tensions between them, and I would suggest -again-that both occur in the context of a research activity with a lot more continuity in its biases and agendas than Schroeder's account suggests. On the specific issue of Schroeder's two strands, I would like to insist that LW's 'math as Kalkül' idea should not be read in a way that ignores the general backdrop of LW's developing 'anthropological' approach, his developing everydayism, his developing pragmatism, etc.: for LW, Kalkül has never been an abstract, formal process, but has always been something that actual agents do, and it need not exclude the mature LW's full-blown pragmatism; similarly, LW's "math as grammar" can be read as one of the ways in which LW tried to articulate the ultimately non-propositional nature of whatever grounds the meaningfulness of math (see section (C3) below on LW's 'normativism').

These summary comparisons, as well as the remarks that follow below, illustrate the fact that the present study is not very topical within the context of the current scholarship on LW's PhilMath: there is very little overlap between my approach and the literature, and even in those cases in which the same passages are discussed, there is not much for me to comment on, except for a few quite generic observations.

(C) ismism: 8 isms

If we look at those contributions in the literature that aim to give an overview of LW's PhilMath as a whole (e.g. Frascolla (Frascolla 1994), Rodych (Rodych 2018), Schroeder (Schroeder 2021), Floyd (Floyd 2021)), as well as the literature we've already quoted in the above, we encounter an impressive array of -isms: logicism and anti-logicism, formalism and anti-formalism, revisionism and anti-revisionism; inferentialism, verificationism and empiricism; intensionalism and extensionalism; (anti-)Platonism, Cartesianism and Kantianism; finitism; (social-)constructivism, constructionism and conventionalism; realism, anti-realism and anti-anti-realism; ... . It may be tempting to mock or otherwise criticize this ismism308 and the decontextualization and simplification it entails, but the device is not only convenient: it also requires the commentators to commit to their interpretations, which is a good thing. So, my point is not to launch a frontal attack on the 'ismist'/doxographical approach as such, but to articulate a few concrete remarks on the ways in which the attribution of isms to LW impacts our understanding of the body of work under scrutiny and how the readings I present in the above may contribute to reframing this understanding.

(C1) logicism -anti-logicism

Logicism is one of the isms that is obviously relevant to understanding LW's PhilMath (cf. Schroeder, chapters 2 and 3: "Logicism" resp. "Wittgenstein's critique of logicism"). It is uncontroversial that there is a chronological aspect here: LW changed his mind on the viability of the logic-based framework within which he had worked up until the TLP and said so explicitly and unambiguously in his later work, most notably in the preface to his PhU (published in 1953, but largely written before the end of 1937, cf. e.g. Ms-117,110 ff. d.d. 19370627). Still, my contribution reminds us of the fact that there is a lot of continuity in LW's approach, as well. The language-critical and culture-critical aspect was there from the outset: even the TLP is primarily a critique of senseless verbiage, as is LW's later work. As we have repeatedly emphasized in the above, this critique of certain types of language use was deeply rooted in LW's modes of thought, and in those of many of his contemporaries. As far as direct influences are concerned (cf. section 2.0.0 above), the cultural criticism of journalist Karl Kraus was an early and determining influence on LW; and even if LW read Spengler later in life, he said that Spengler articulated many thoughts that he had entertained on his own. In this sense, both LW's early logicism and his later anti-logicism are avatars of the same underlying agenda. 309

309 Cf. Sass's -at least formally-similar claim that LW's early logicism and his later -what I called-everydayism are avatars of the same underlying psychologically/existentially uneasy relation to the everyday ((Sass 2001), p. 122; see also section 2.0.1(C) above). I am not very comfortable with this parallelism between my work and Sass's, but it's there.

Even if admirers of his early 'logistical' work (Russell, Carnap, ...) didn't see it this way and lost interest in LW's later work, 310 which seemed alien to them, and even if some later scholars miss this point as well, the later LW is -in a certain sense-still a logician, still concerned with the same foundational (?) questions, still struggling with the problem of meaning vs. nonsense.

310 Cf. Russell's scathing remarks in My philosophical development: I have not found in Wittgenstein's Philosophical Investigations anything that seemed to me interesting and I do not understand why a whole school finds important wisdom in its pages. Psychologically this is surprising. The earlier Wittgenstein, whom I knew intimately, was a man addicted to passionately intense thinking, profoundly aware of difficult problems of which I, like him, felt the importance, and possessed (or at least so I thought) of true philosophical genius. The later Wittgenstein, on the contrary, seems to have grown tired of serious thinking and to have invented a doctrine which would make such an activity unnecessary. I do not for one moment believe that the doctrine which has these lazy consequences is true. I realize, however, that I have an overpoweringly strong bias against it, for, if it is true, philosophy is, at best, a slight help to lexicographers, and at worst, an idle tea-table amusement. ((Russell 1959), pp. 216-217).

(C2) formalism -anti-formalism

On the one hand, LW has been read as a formalist, esp. in his 'middle/intermediate period' (for instance, Rodych (Rodych 2018), §2.1 "Wittgenstein's Intermediate Constructive Formalism"; (Ferreirós 2016) pp. 89-90). Presumably, these readers take LW's insistence on 'Kalkül' -which in LW's work by default refers to actual calculations done by actual people, as opposed to, for instance, idle philosophical 'prose'-for some kind of anti-conceptual and therefore 'formalist' stance. However, even if in his 'intermediate period' LW hadn't yet developed as rich a conceptual apparatus as he did later on, I do believe his notion of mathematical technique as a rule-based activity referred to what practitioners of mathematical techniques actually do, not to abstract formal systems in the formalist sense of formalism [sic! fs]. One may want to call that formalism (and in that case LW was a formalist about small-talk, or any other natural-language practice, as well), but I don't think that is what formalism usually means in PhilMath. On the other hand, LW's insistence on the embedding of Kalkül/mathematical technique in actual, real-life practice, and especially the idea that the meaningfulness of Kalkül ultimately depends on its actually being applied in actual everyday applications, is incompatible with formalism as it is usually understood in the PhilMath of the heyday of the Grundlagen-debates. I believe I have shown that this central aspect of LW's work was as much opposed to formalism (in the sense of the belief that math's validity can be (or should be) guaranteed by the fact that it can be presented as a formal system) as Kurt Gödel was, though for completely different reasons. Of course, there may be a chronological aspect to this matter (cf. Rodych (Rodych 2018)): LW's earlier work is easier to interpret as somehow formalist than LW's later work, in that LW's later work emphasizes more clearly the real-world, real-life, everyday character of the relevant encompassing structures that give meaning to math. The general impression that emerges from the readings presented above is that LW does not argue against the use of formal systems at all: the invention of new mathematical techniques is not a problem per se; what he does do is (1) problematize the semantic relation between the formal system and whatever the system is supposed to formalize or encode, and (2) emphasize the pragmatics of math, i.e. the actual techniques as performed by actual practitioners and their embedding in real-life applications. Again, the scope of the research that I present here -both the relative width of the overall topic of the encompassing research project ("practice and related concepts") and the relative narrowness of the specific focus of this study on a small number of lines of thought within LW's oeuvre-does not allow me to comment on the details of the more technical or more chronological aspects of these accounts. My only contribution is to point out that any interpretation of the chronology of LW's evolving views on this topic should take into account the links between the topic at hand and LW's culture-critical views on the fragmentation of modern society, everydayness, authenticity, etc. (these ideas are perhaps also evolving, but not that much...): for instance, it would not make sense to attribute a fully formalist stance (in the sense of the belief that math's validity is ultimately grounded in its being a formal system) to LW, if it can be shown that he held strong 'everydayist' beliefs at that time.

(C3) normativism

Some commentators have emphasized LW's 'normativism' about math (Schroeder (Schroeder 2021), Chapter 6 "Mathematics as Grammar"), referring to the prolonged period in which LW experimented with the idea that mathematical statements were not propositions (i.e. statements referring to facts about the world) but '(grammatical) rules' determining how the terms involved were to be used.
I am -of course-not arguing against the undeniable fact that there has been a period in which LW's use of the term 'grammar', as applied to math, was very prominent and that LW more or less lost interest in this term in favor of other lines of thought. But I think it is important to understand that this 'grammar' theme fits in with LW's continuing concern with the embedding of math in encompassing structures: as soon as LW understood that more could be said meaningfully than he had assumed in the TLP and that meaning is ultimately not a matter of propositions expressing facts, this boosted his interest in the non-propositional aspects of meaning, including the idea that language use is always part of encompassing practices (i.e. what I called LW's pragmatism and his holism and structuralism about practices). LW's (idiosyncratic) use of the concept of 'grammar' was one of the ways in which he tried to conceptualize the idea that words are meaningful by virtue of their being part of larger behavioral patterns rather than in terms of a correspondence with an outside world (cf. section 1.1.1(A)). Similarly, LW's use of the concept of 'grammar' should not be separated from his evolving views on rule-following in general: again, the main point appears to be that rule-following can't be understood if one takes the rules as the ultimate ground for this kind of behavioral pattern; instead, the mature LW presents a holistic account of rule-following as part of an irreducibly complex practice. In other words: LW's normativism about math should be understood as one of the concepts he experimented with within the context of his continuing concern with the idea that meaning cannot be reduced to propositional truth (i.e. reference to the world) and his gradually more coherent articulation of a holistic account. Again, it would not make sense to discuss LW's 'math as grammar' idea in isolation from his ideas on meaning as grammar and on rule-following in general. And the topic of rule-following in its turn should be read in connection with LW's developing holism.

(C4) constructivism / social-constructivism / conventionalism 311

LW often points out the cultural contingency of mathematical practices, sometimes invoking the practices of imaginary tribes or alternative decisions within an otherwise 'normal' math (cf. e.g. section 1.3 above, but also several examples throughout section 2.3). This has often been interpreted as "constructivism" and in a certain sense perhaps rightly so. But LW does not argue that practitioners could freely or arbitrarily choose to adopt any number of mathematical alternatives anytime: he is very much aware of the phenomenon of the logical 'must' and appears to suggest over and over again that a lot is given at any time in our history, even if this 'given' is the result of an accumulation of contingencies. Cf. (Steiner 2009); (Bangu 2006); (Bangu 2019).

311 I want to point out that the 'social' part of 'social-constructivism' is not necessarily self-evident either: whereas LW does often refer to communities, tribes, cultures etc., I don't think one can interpret these as the ultimate locus for meaning, as is often done: cf. what I had to say about LW's holism in section 1.1.2(A1) above and section 3.2.1(B) below.
Furthermore, social and cultural aspects make up only one dimension amongst many more within LW's holistic conception of practices (Language Games, Forms of Life): cognitive, biological, physical aspects are equally primordial, all of which constitute very strong constraints on any arbitrariness or freedom one may want to attribute to practitioners. 312

312 In that context, I believe it's worth pointing out that 'conventionalism' (insofar as it literally implies that a group of people have made a decision together) is not a viable account for mathematical meaning, let alone meaning in general. All the arguments against the idea that the 'community' would be the proper locus for the foundation of meaning (cf. section 1.1.2(A1) above and section 3.2.1(B) below) apply a fortiori to the idea of a convention: a group of people literally coming together ("convening") to decide over a certain matter is a very specific way of coming to an agreement, and it is simply not the case that the kind of wide-reaching agreement that underlies our ability to participate in collective practices is the result of any decision-making process of the kind that we could call a 'convention' without the term losing all of its specific meaning. Note that similar arguments can be / should be (and have been) brought in against 'social contract' theories in political philosophy and related fields.

So: as an interpretation of -at least the later-works of LW, 'conventionalism' would be incompatible with LW's holism, but more importantly, it doesn't account for LW's focus on the fact that at any given time and place, an accumulation of contingencies does constitute an ultimate given (cf. Bangu's term "contingently necessary" (Bangu 2019), p. 19).

(C5) finitism

In the above we have referred several times to LW's qualms about various uses of the concept of 'infinity', which -I guess-ipso facto makes him side with the cause of finitism. However, the readings presented above suggest that LW's finitist arguments are not finitism for the sake of finitism: the problems with certain uses of the concept of infinity that LW points out fit in with a much broader program. LW doesn't say that one should not calculate with infinite numbers for reasons X, Y or Z. LW does object to certain cases in which 'infinity' is used within discourse about math in such a way that it doesn't correspond to anything in the actual math. A case in point is the frivolous use of the word 'infinite' in the context of 'infinite minds calculating with infinite numerals' (cf. section 2.2(B) above). The present study emphasizes the non-math-internal aspect of this criticism: the use of the term 'infinite' is criticized for its lack of embedding in practice, in exactly the same way one could criticize various 'metaphysical' ways of using common terms.

• deflating the non-revisionist claim and accepting the consequence of the apparent criticism, for instance that according to LW, set theory is not proper math ((Maddy 1993); (Steiner 2009));
• accepting non-revisionism as central to LW's purpose and trying to interpret the apparent criticism in that light (Dawson 2015).

Our interpretation of the function of the everyday in LW's thought actually may offer a kind of solution to the underlying paradox: everydayism is a basic presupposition of LW's work, not a result; in that case, i.e.
if we accept the distinction between the everyday and its opposite, there need no longer be a tension: LW leaves the everyday as it is (because it is ipso facto meaningful) and only criticizes the non-everyday (because it is by definition fake). Again: LW's problem ultimately boils down to the disruption of the link with the everyday and the lack of authenticity that results. LW's remarks on mathematics actually illustrate this general aspect of his outlook quite well: count as 'everyday' those mathematical discourses that are embedded in actual practices using actual mathematical techniques. We have seen in the above that LW systematically criticizes various kinds of contemporary (set-theoretical or otherwise foundational) verbiage, while explicitly not criticizing any actual mathematical technique: LW does not criticize the Kalkül/mathematical technique, only what some people say about it. Cf. also the idea that for LW, all techniques are similar from a philosophical point of view: there is no such thing as 'a leading problem of mathematical logic' (PhU §124). 318 Of course, the very idea of "the everyday" is in its turn a very problematic concept that we may not necessarily want to accept, but it is one that is deeply rooted in LW's outlook on the world and has deep connections to other conceptual clusters (Lebensphilosophie, authenticity, pessimism about European civilization, ...) that LW shared to various extents with such contemporaries as Kraus, Spengler, Weininger, but also Heidegger.

(D) final remarks about LW's alleged -isms

So: focusing on those aspects of LW's work that appear to have some technical interest from the point of view of mainstream PhilMath at large tends to lead the interpreter away from LW's philosophical agenda at large and prevents us from understanding the remarks on this or that aspect as part of their wider context, including the objectives of philosophy according to LW. The reader's point of view / approach also impacts the issue of chronology vs. continuity: depending on one's methodology, one is bound to emphasize one or the other: chronological accounts will inherently emphasize discontinuity. 319

319 Chronological accounts also inherently imply a larger corpus.

What I have attempted to do in this study is to read LW's PhilMath in terms of the agendas and biases that underlie his philosophy as a whole. 320 In the above, I identified a number of such biases and agendas and suggested that the apparently technical issues should be read in the light of those biases and agendas. 321 The picture that emerges is one in which LW's contribution to PhilMath is critical at a very fundamental level, attacking aspects that are still prevalent in PhilMath today, 322 and viewing math as much more closely intertwined with other aspects of culture and society than is fashionable in present-day PhilMath. 323

Reading Wittgenstein's philosophy of mathematics in the context of his work at large

Throughout this study, I have read LW's work on mathematics as an integral part of his oeuvre as a whole, and have pointed out to what extent the work on math is also a central part of his oeuvre, displaying all the major themes, including the culture-critical ones, and in many cases earlier and/or more incisively than in other parts of his oeuvre.324

(A) LW's aims, methods and style

As pointed out in my introduction to the present study (section 0.2 above), the relative lack of success of LW's PhilMath within the mainstream of PhilMath at large is probably due to a
lack of common ground as to the aims of philosophy, its methods and the style in which one can and cannot write philosophical texts.

320 This focus on ultimately non-propositional aspects that underlie and give meaning to LW's philosophical discourse is a very Wittgensteinian move. You are what you eat.

321 Schroeder's "two strands" (cf. section 3.1.1(B)) are a case in point.

322 However, in my, admittedly anecdotal, experience, these opinions are not necessarily as prevalent with working mathematicians and even less so with engineers or others who use mathematical technique in a more applied fashion, although most of the latter have been 'indoctrinated' with the monist doctrine throughout their studies. There is also an interesting generational aspect: those who learnt most of their math starting from a set-theoretical framework, vs. those who first learnt the techniques separately on their own terms and only later were taught set theory. Perhaps someone should run a sociological survey investigating the attitudes of engineers, various kinds of working mathematicians and philosophers of math towards mathematical monism.

323 I hope the above also helps explain why my interaction with the literature is rather limited: the above has very little to add to the technical details that are the main focus in the literature. Of course, I could (and maybe will) include many more references to various texts I have read, pointing out that this or that aspect of my text corresponds to this or that aspect of the other text (for instance, Chapters 11 and 12 of Schroeder's recent monograph could be referred to a lot in section 2.3 above), but I don't think that would be a good use of my time at this point.

style

One of the difficulties that any commentator on LW's writings has to deal with is the fact that the texts seem to jump from one topic to the next in an apparently haphazard way, only to come back to the same topic later on, over and over again. 325 In the case of the manuscripts on mathematics-related topics (with a few exceptions), it seems to be impossible to give a systematic account of any of the topics at hand without jumping back and forth within and across several manuscripts. However, from the point of view developed here, this apparent lack of structure is less of a problem: for LW, the 'invention vs. discovery' theme, the 'surveyability of proof' theme, the 'infinite sets' theme, the 'diagonal proof' theme, the 'truth and provability' theme, etc. are not separate topics that are studied for their own sake; on the contrary, the reason why these themes recur throughout LW's work is that they all illustrate the major concerns that underlie his philosophical work in general. As discussed above, it is important to take the texts for what they are, i.e. a rather direct reflection of the actual process of a philosopher at work, and not try to make the texts answer questions that are alien to them.

aims & methods: criticism & non-scientism

It is a well-known fact that LW formulated the aims of (his) philosophy in terms of it being a therapy (etc.), as opposed to the accumulation of propositional knowledge. Despite the fact that this is a well-known aspect of LW's work, it is rarely taken seriously in the literature on LW's PhilMath, which remains mostly doxographic: despite the fact that LW repeatedly points out that he does not intend to articulate propositional truths about his subject matter, the scholarship on LW's PhilMath often still takes the shape of an inventory of such opinions.
326 We have also seen in the above that LW repeatedly opposes his own 'anthropological' approach of 'looking in from the outside' to the mathematical approach to mathematical issues, which is important in the following ways:
- LW's approach is inherently comparative in that it studies mathematical practice alongside other practices and in contexts in which mathematical and non-mathematical practices are intertwined;
- by virtue of being comparative, LW's approach also opposes mathematical exceptionalism, i.e. the idea that mathematics is unlike any other human endeavor (see section 3.2.2 below), almost universally presupposed in PhilMath.

I believe I have shown that these aspects should be taken seriously, as well: the critical stance towards discourse on mathematics is a fundamental aspect of LW's work and fits in with his approach to philosophy in general. In what follows, I will make an attempt at showing that present-day PhilMath, and especially PhilMathPract, could still learn something from LW's work in this regard.

(B) LW's conceptual apparatus: meaning as embedding in everyday practice

Part 1 of this study consisted mostly of an overview of the conceptual apparatus that the later LW developed and the way this translates into his PhilMath. So as to not repeat basically the same summary over and over again, I refer to section 1.4 above or section 3.4 below for a 326 Within the passages I focused on for the purposes of this study, I encountered several occasions on which LW reminded himself to stay away from dogmatism, for instance in Ms-122, 68r-88r, discussed everydayism, LW's pragmatism and LW's 'meta-philosophical' ideas (therapy, etc.) as these apply to LW's PhilMath. Despite the relatively clear and consistent account of these aspects achieved in the above, a number of loose ends remain: some work remains to be done in order to clearly articulate the links between (1) the above cluster of concepts, (2) LW's Spenglerian outlook, 329 (3) the fragmentation of the everyday into a hurly-burly of practices, and (4) LW's opinions on authenticity/fakeness and everydayness as compared to Martin Heidegger's work involving the very same concepts. 330 So as to avoid too many redundancies within the present Part 3 of this draft, I refer to section 3.3 for more substantial remarks on the critical nature of LW's philosophy in general and his PhilMath in particular.

Reading Wittgenstein's philosophy of mathematics in its biographical and historical context (incl. the Grundlagen-debates in contemporary philosophy of mathematics)

(A) cultural & biographical background

In section 2.0 we explored a few aspects of the culture-historical and the biographical context from which LW's philosophical work emerged, specifically the importance of a number of issues of authenticity and fakeness. Following Janik & Toulmin's Wittgenstein's Vienna (Janik and Toulmin 1973), I pointed out that LW's sensibilities with respect to meaningful vs. meaningless language use were from early on informed by journalist Karl Kraus' analysis of the problem of never saying what is actually going on politically, socially and culturally in the last decades of the Habsburg regime ("Kakania").
I also pointed out that LW's negative view of his own era and the 100 years immediately preceding it, which we encountered as an ingredient of LW's PhilMath in section 2.2, was also part of the cultural ambience in which LW grew up, and we can point at Oswald Spengler as a direct influence on LW's thought in this regard, even if LW read Spengler after he already had developed similar lines of thought on his own.

329 One of the problems I am not clear about is the reason why the deeply flawed ramblings of an amateur half-wit like Spengler could have made such an impression on people like LW and Martin Heidegger, both obviously of an altogether different intellectual caliber.

330 I already had the intention to write a separate paper on the issues surrounding everydayness and authenticity (cf. (McGinn 2021), Chapter 3), but the present study made an unexpected contribution to, and substantially enriched, my understanding of the issues. One of the results of this process was that it actually became less clear to me in what ways LW's views on the fragmentation of the everyday were different from those of -say-Heidegger, or most Nazis for that matter.

Furthermore, I referenced LW's existential concerns with vanity and theatricality and how they were in line with the concern with authenticity (as opposed to fakeness, pretense, bad faith, theatricality, vanity, ...) as an omnipresent aspect of the cultural ambience from which LW emerged: for LW, things should look/sound/appear the way they are. We focused -amongst other things-on LW's preposterous remarks on Gustav Mahler as a composer (cf. section 2.0.2) and highlighted the fact that LW's problem was not a matter of Mahler's not conforming to formal criteria. LW's most scathing objections against Mahler's music are directed against those moments in which Mahler does sound like old-timey tonal classical music, but -according to LW-is not really old-timey tonal classical music. I introduced the terms epistemic fakeness (epistemic pretense / epistemic bad faith and epistemic bad taste / epistemic kitsch, etc.; my terms). I avoided these terms in my running commentaries in sections 2.1, 2.2 and 2.3 so as not to project them onto LW's text, but they will come in handy here and in my own work building on LW's work.

***

This is perhaps the right time for me to express the fact that I am somewhat uncomfortable with the role that these biographical and otherwise circumstantial materials play within the present research. In a perhaps similar fashion, Sass feels inclined to justify his own use of these materials and his psychological interpretation in a very long apologetic note ((Sass 2001), ftn. 31, pp. 144-145), which -in my opinion-makes things worse. 331

331 Similarly, I am not at all convinced by the romantic narrative Monk tries to impose on his material in The Duty of Genius ((Monk 1990)). Monk's biography of Bertrand Russell ((Monk 1996); (Monk 2000)) has attracted some flak for its flagrant negative bias towards its subject matter, but I am not sure if I am not at least as uncomfortable with Monk's more subtle (but unabashedly positive) biases towards LW.

So, I hope it will suffice for me to simply state that what I'm trying to point out is not that something that is not philosophical (e.g. psychological or sociological) should somehow explain the philosophy; what I am pointing out is:

1. that there are a number of structural patterns that occur within both LW's private discourse and his work, and that the biographical material illustrating these patterns is part of the philosophically relevant context within which we should read LW's text; this case is strengthened by the fact that the lack of separation between philosophical work and private life in LW's case is well documented (cf. "logic and my sins", as well as the fact that there is considerable overlap between personal notes and philosophical remarks in LW's manuscript, etc.), as well as the fact that his conception of philosophy does not separate it from existential issues;

2. that the philosophically relevant context for LW's PhilMath does not coincide with the math-internal, technical issues that are specific to PhilMath (LW says so repeatedly and explicitly) and that therefore the not specifically mathematical aspects of LW's work in PhilMath should not be artificially sanitized out of it.

(B) LW & the Grundlagen-debates

One of the points I have repeatedly made throughout this study is that many (if not all) of the mathematical topics that LW focuses on are dealt with as examples of his main interest in the Grundlagen-related issues. Many of the central themes in LW's philosophy at large can be linked directly to the issue as to what grounds math and his reaction to the answers that were proposed in contemporary debates:
• the problem of meaning vs. nonsense: even if related culture-critical strands (most notably influenced by Karl Kraus' work) predated LW's career as a professional philosopher, this topic became a proper philosophical topic for LW to do work on, in the shape of the question as to what guaranteed mathematics' validity;
• the discrepancy between formal correctness and authentic meaningfulness: this discrepancy was not only an important aspect of the Krausian, culture-critical strand in LW's work, but is ultimately also what is at stake in the Grundlagen-debates; 332
• a preoccupation with vanity, insincerity and inauthenticity in general: this apparently driving factor behind LW's private and philosophical thought is illustrated most clearly by his irritation with the bad faith he thought to discern within some of the prevalent mathematical discourse of his time;
• pragmatism and everydayism: these central aspects of LW's philosophical approach are not only an avatar of Lebensphilosophie in general but can also be read fruitfully as reactions to logicism, intuitionism and Platonism in PhilMath.

332 Again: Gödel's contribution can be interpreted as highlighting the very same issue.

In any case, LW's contribution to (or interaction with) the foundational issues operates at a very fundamental [unfortunate pun!] level, as I pointed out repeatedly in the above:
- anti-monism: LW confronts the prevalent monism (or perhaps rather 'totalitarianism', in the relevant sense of 'the tendency to try and encompass everything within a single system') about math head-on;
- anti-foundationalism: LW thinks that the idea that math would need 'foundations' is wrong to begin with.

In other words: rather than endorsing one of the available positions within the debate (say: Platonism, formalism, intuitionism), LW contributes to the debate by attacking/questioning the presuppositions that those positions all share. For the link between the Grundlagen-related aspects in LW's PhilMath and the relation between the critical aspects of LW's philosophy and the Kantian tradition, I refer to section 3.3 here below.
One of the -for me-surprising results of the work presented in the above is the remarkable continuity between LW's early work and his later work with respect to (1) the extent to which the culture-critical strands intertwine with the more technical strands, and (2) its focus on the issues that defined the Grundlagen-debates. LW stuck with what interested him when he started out in philosophy: the problem of what ultimately grounds math. One should not forget that, even if he distanced himself somewhat from mainstream PhilMath in later life, LW was there as a player, in close contact with Russell, for a number of years at the beginning of his career. As a matter of fact, Russell appears to acknowledge LW's influence and various historians attribute this or that aspect of Russell's work to young LW's influence. 333 Interestingly, Kurt Gödel appears to have attributed what he doesn't like about Russell's later work to the nefarious influence of LW: Though Gödel (rightly) blamed Wittgenstein for causing Russell's retreat from the unified approach to truth as correspondence he pursued in Principia-as well as for the more constructivistic treatment of orders in its second edition-yet, having been so influenced by Russell's original treatment of "judgments of perception" and the MRTJ [= "multiple relation theory of judgment"] in Principia, as well as in Russell's subsequent publications, Gödel was, from 1932 onward, fascinated by Russell's long-lasting ambition to analyze the notions of experience and belief. 333 An interesting example is the following (Floyd & Kanamori (Floyd and Kanamori 2016), p. 270): In (1918: IV) Russell attributed the "discovery of this fact" about a "map-in-space" to Wittgenstein. Colorfully putting the point about "judgments of perception" in terms of his zoological metaphor, Russell wrote, of judgments involving willing, wishing, and so on that (1918: IV §3): "I have got on here to a new sort of thing, a new beast for our Zoo, not another member of our former species but a new species". Wittgenstein, of course, would have rejected this zoological gloss on his idea, for -as Gödel knew from reading the Tractatus during the heyday of its influence on the Schlick circle in Vienna-his point to Russell had been, not that there was a new form in view, but, rather, that a recasting of the whole notion of form as the possibility of structure would be needed, and this is just what Wittgenstein worked out in the Tractatus (1921: 2.033). Russell saw logic as a "skeletal" enterprise of classifying and identifying bones beneath beliefs, a framework "within which the test of coherence applies" (1912: ch. XII), whereas Wittgenstein took logic to be itself a "scaffolding" to be taken up and taken down, an aid in the construction of a true depiction of reality, but not itself in the business of studying what actually is (1921: 3.42, 4.023, 6.124). Russell's effort to eliminate "possibility" in favor of actual correspondence failed in the face of Wittgenstein's new conception of logic. And Russell granted this fact, ever after, not only for logic, but even for mathematics. Up until the TLP, LW operated within a logistic framework (Russell's), which he conceived of as transcendental, in the sense that the framework itself was not part of the world that the propositions expressed in it referred to. Interestingly, already in the TLP, LW expressed his conviction that this approach was ultimately unable to deal with anything of real importance. 
Wittgenstein as a philosopher of mathematical practice

Although LW's contributions may have been more provocative in the context of the Grundlagen-debate of the first half of the 20th century than they are now (esp. in PhilMathPract circles), 334 it is interesting to note that many of his main points can still sound fresh, new, and even controversial (in some -if not most-circles), 70 years later. In any case, the main points of LW's work on math appear to also be among the main points of the emerging research tradition of PhilMathPract, as represented -for instance-by (Van Kerkhove and van Bendegem 2007), (Mancosu 2008), (Van Kerkhove 2009), (Ferreirós 2016). For a historical overview, see (van Bendegem 2014). I want to make two separate points concerning the relation between LW's work and PhilMathPract: (1) that LW's work foreshadows the themes and attitudes that define PhilMathPract as it is practiced now in 2022; 335 (2) that the lack of a specifically philosophical attitude in present-day ("naturalist"(?) or more generally empirically-minded) PhilMath in general and PhilMathPract in particular is not necessarily a good thing and that LW's critical approach may offer avenues for further development.

334 Or perhaps the opposite is the case: maybe some of the presuppositions that were still somewhat liquid back then have solidified since... For instance, as noted elsewhere, learners up until the 1970s learnt their mathematical techniques separately and were introduced to set-theory only later (if at all), whereas from the mid 1970s (depending on one's country) many have started out within a set-theoretical framework from almost the very beginning of their education. This evolution must have had an impact on the extent to which the framework was felt to be a natural one.

335 For an apparently identical claim but with remarkably little overlap with my argument, see (Pérez-Escobar 2022).

Wittgenstein and present-day PhilMathPract

LW was doing PhilMathPract long before it was called that. On the one hand, LW was an indirect influence on the present generation of practitioners, in that he was one of the initiators of the practice turn, i.e. his work was part of the philosophical roots of work based on the concept of 'practice' in general, whether on mathematics or on other topics (cf. (Schatzki, Cetina, and von Savigny 2001); (Soler et al. 2014)); but on the other hand, his work on math also directly foreshadowed a lot of what is going on in PhilMathPract today.

(A) focus on actual practices

The obvious way in which LW foreshadows present-day PhilMathPract is that the later LW persistently emphasized the actual, real-life contexts in which mathematical phenomena occur, as the proper locus for mathematical meaning.
This shows in the following aspects:
- a focus on what practitioners of mathematics actually do and on practical applications: LW distinguished systematically between what practitioners of math actually do and what is said about their practice and technique; he also emphasized the fact that calculations ultimately originate in, and derive their meaningfulness from, their embedding in real-life practices;
- a focus on variability in mathematical practice, the accent on historicity, variation, social aspects, etc.: this aspect was being highlighted in the passage containing a long list of fringe applications that we analyzed in section 1.3, but also in the alternative mathematics imagined by LW in some passages analyzed in section 2.3;
- the inclusion of cognitive, biological, physical aspects in one's perspective on math finds a clear precursor in LW's holistic conception of what is a practice, a language game, a form of life.

(B) towards a more radically pragmatic concept of practice

As compared to neighboring fields such as Philosophy of Science, Science and Technology Studies, or Integrated History and Philosophy of Science, 336 PhilMathPract has been relatively slow in its exploitation of the potential of a practice-based approach, and LW's work may serve as a reminder that much more radical implementations of the basic concepts are available. Let me briefly enumerate a few ways in which standard present-day PhilMathPract has not picked up on conceptual resources available in LW's work:

(1) practice-based ≠ agent-based / community-based: Many scholars who operate with the concept of 'practice' appear to construe practices as ultimately reducible to agents and/or communities of agents, conceived of as the proper locus for meaningful behavior. However, this conception of practice is not compatible with LW's holistic view of practices / forms of life, etc., and it can be argued that much of the potential of a practice-based approach gets lost this way (see section 1.1.2(A1) above). 337

(2) pragmatic vs. epistemological perspectives: Whereas more resolutely practice-based approaches immediately lead to a displacement of the central role of epistemological concepts (knowledge, truth, ...), most approaches within PhilMathPract continue to try and maintain not only a basically epistemological perspective, but also (and this is important!) an epistemological agenda, while incorporating practice-related notions in a piecemeal and eclectic fashion. However, from a somewhat more coherently pragmatic perspective, one should be able to show to what extent 'knowledge', 'truth', etc. actually play a role in actual mathematical practice; as LW points out, one can easily imagine math-like techniques being executed without any reference to propositional truth whatsoever (see section 1.1.2(A2) above). 338

(3) objects, objecthood and objectivity: In section 1.1.2(C) above, I have argued that, whereas PhilMath, incl.
PhilMathPract, for the most part continues to obsess over the traditional idea of the "unique" ontological status of mathematical objects, from a pragmatic point of view, objecthood is always equally problematic or unproblematic and the relation between mathematical objects and mathematical practices is not different from the relation between other objects and the practices they occur in; as a corollary, the issue of the apparent 'objectivity' of mathematical objects could be reframed as well (cf. section 1.1.2(C) above).

337 Apart from this fundamental issue, there is also a more superficial, but not less important, methodological argument to be made against the idea that a community is the default format for collective practices (this argument is not directly related to our reading of LW's work, so I don't come back to it here and refer back to section 1.1.2(A1) above).

338 In light of the fact that a genuinely pragmatic approach tends to render traditional epistemological concerns marginal, I would suggest that the answer to van Bendegem's question "Foundations of Mathematics or Mathematical Practice: Is One Forced to Choose?" (van Bendegem 1989) should be a resounding "yes". Unless one is ready to construe the term 'foundations' in such a way that it allows for irreducible contingency, plurality, fuzziness, etc., which would make it lose its original charm, I believe.

(4) the heterogeneity of mathematical techniques and practices vs. abstraction and stratification as the mechanism behind the history of math: Encompassing grand narratives about the history of mathematics, such as the proposal to conceptualize the relation between 'less advanced' and 'more advanced' mathematics (incl. the evolution from pre-mathematical techniques to proper math) in terms of the single mechanism of 'abstraction' (Ferreirós 2016), can be misleading in that they tend to obscure the heterogeneity and contingency of the 'buntes Gemisch' and 'Gewimmel' ('hurly-burly') of practices co-present at each stage of the history; LW's image of a hurly-burly is thus directly opposed to Ferreirós' idea of a stratification of practices; again: if there is such a thing as a neat stratification in any particular case, then that should be shown, not presupposed (see section 1.1.3(D) above).

(5) intuition, conceptual thought, understanding: Whereas the tradition of PhilMath ever since Plato typically conceives of intuition and understanding as irreducible sources of knowledge, from a pragmatic point of view, intuitions, concepts, and understanding are part of a practice, and not (epistemologically or ontologically) prior with respect to practices: they are practice-specific, the result of training, in many cases historically and geographically variable, ...

(C) LW's exotic examples and the ethnographical and historical record

In section 1.3, but also throughout section 2. Suffice it here to briefly enumerate a few examples: 339

339 For an account of this technique, see also (Schroeder 2015).

340 Similarly, LW's holism about practices also anticipates the idea that the biological constitution of humans and their biological and physical context are philosophically relevant to PhilMath (cf. the fact that various cognitive approaches show the neurobiological roots of numeracy and geometry, cf. (Dehaene 2011) and (Dehaene and Brannon 2011); (Bangu 2018)).
• While remaining within an all in all very traditional, epistemological framework (see paragraph (B) here above), José Ferreirós' historical work on early set-theory (Ferreirós 2007) and on the irrationals ((Ferreirós 2016), Chapter 8, 'The invention of the reals') documents and pays attention to the contingent aspects of how mathematics is being shaped by the events that make up its history, and could give rise to much less conservative accounts than the one the author chose to present to his audience.
• Jens Høyrup has made a career out of collecting a huge treasure trove of bizarre mathematical and math-like practices (see i.a. (Høyrup 1990); (Høyrup 2001); (Høyrup 2006); (Høyrup 1983); (Høyrup 2008); (Høyrup and Damerow 2001)); for instance, Høyrup describes a number of historical algebraic (?) practices which one could be tempted to call 'fake math' (Høyrup 2009).
• Similarly, Karin Chemla's work on ancient Chinese mathematics shows a number of practices that are subtly or not so subtly different from ones we may be more familiar with ((Chemla 2012); (Chemla 2006); (Chemla and Shuchun 2004); (Chemla, Chorlay, and Rabouin 2016)), which invites us to reflect on what makes these practices what they are, not unlike LW's thought experiments, as discussed in section 1.3 above.
• Pythagorean numerology, despite its alleged importance for the emergence of mathematics as a theoretical discipline, for a long time operated with criteria that are definitely at odds with what we would consider proper mathematics, and even with contemporary non-Pythagorean practices. A case in point is Philolaus' music theory ((Burkert 1972), pp. 386-400), which -for example-held on to the Pythagorean tenet "the whole tone / octave cannot be dissected", which is nonsense from a mathematical point of view, but not so from the point of view of the numerical symbolism that underlies Pythagoreanism.
• Ethnographic research shows the rich variety in the human conception of quantity and other pre-mathematical concepts, which relativizes the universality of even counting, and hence the natural numbers (purportedly given by the godhead, though apparently not to everyone); for references, see for instance (Pinxten and François 2011), (François and Vandendriessche 2016), (Watson 1990), (Moltmann 2013);

In the end, it is not that important what LW's eccentric opinions on those matters were; what is much more important is the fact that the historical and ethnographic record that was developed in the last few decades offers a lot of material for philosophers to work with and that LW's remark reminds us of the fact that, despite a burgeoning research activity, very little is being done with this material at the properly philosophical level (see section 3.2.2 here below). 341

341 An important aspect is the focus on how these 'finished products' are used in actual practice and how they relate to the real-life research activities in which they originate.

342 I would be interested in reading up on a number of real-life 'fringe' applications: the use of math and math-like techniques in various practical settings (rules of thumb, shortcuts for calculations, ...); symbolic systems that are distinctly not mathematical; the historical interactions between mathematics and astrology, numerology, and other no-longer-academic disciplines; etc.
343 One could think of LW's interaction with Frazer's Golden Bough as an example of how LW could have reacted to the type of material under discussion, but also as an illustration of what he meant by "as if the factuality of these things was important to us".

Wittgenstein and the identity of Philosophy of Mathematics and Philosophy of Mathematical Practice

The biggest difference between recent work in PhilMathPract and LW's work is the complete lack of a critical attitude toward mathematics and mathematicians in the former, whereas the above suggests that LW's PhilMath is essentially critical. By virtue of the very fact of this difference, LW's work urges us to ask ourselves what the role of PhilMath actually is and/or should be. In this section, I try to articulate how LW's work can/could help present-day philosophers of mathematics to reconsider their identities as philosophers and their relationship with mathematics itself.

(A) LW on his position vis-à-vis the mainstream of PhilMath

LW was very aware of and quite outspoken about the differences between his own outlook on the role of philosophy and the one prevalent in mainstream PhilMath. Thus, he repeatedly and explicitly highlights the following features of his own (later) approach:
• LW approaches math from the outside looking in, whereas most PhilMath is practiced as a prolongation of mathematics itself. This insiders' perspective fosters exceptionalism about math (it is easy to maintain that something has unique (incomparable!) attributes if one never actually compares it to anything else). A consequence of this exceptionalism is the fact that -as opposed to LW's essentially critical stance-most PhilMath is highly deferential towards math and mathematicians.
• LW's own approach is anthropological, as opposed to the logical and/or epistemological approaches that are prevalent among mathematicians and mainstream practitioners of PhilMath, which means that he views math as a human endeavor among all other human endeavors. LW's anthropological approach to math as a practice among other practices is ipso facto comparative, as opposed to the exceptionalism of the mainstream.
• PhilMath is mostly practiced within the framework of the quasi-religious (or in some cases straightforwardly religious) ideology of unity and uniqueness (what I called monism) prevalent among mathematicians. LW's anthropological approach emphasizes the contingency and variability of math, thus directly opposing this monism.
• Mainstream PhilMath mostly operates within the conceptual framework within which mathematicians operate themselves; to this insiders' approach (see above), LW opposes what he considers to be a proper 'philosophical' approach, which focuses on the presuppositions underlying this conceptual framework (cf. paragraph (B) here below, for LW's criticism of Ramsey in these terms; see also section 3.3 below for the 'critical' nature (also in the Kantian sense) of LW's philosophy). 344

LW was also very conscious of the historicity of his work. Thus, LW says in Ms-126,133, d.d. 19411215-19411217 (the same passage in which he introduces the notion of a 'prudish proof', see section 2.1(C) above): Wir kämpfen jetzt gegen eine Richtung. Aber diese Richtung wird sterben, durch andere Richtungen verdrängt. Und dann wird man unsere Argumentation gegen sie nicht mehr verstehen; nicht begreifen, warum man all das hat sagen müssen. ["We are now fighting against a trend. But this trend will die, displaced by other trends. And then people will no longer understand our argumentation against it; they will not grasp why all of this had to be said."]
345 It is perhaps interesting to observe that LW's criticism -if anything-applies even more directly to present-day PhilMath than it did in its own time: even the 'naturalist turn' in PhilMath (which does not represent a majority position in the field anyway) has not changed much about those aspects of PhilMath that LW was most critical of.

In the same way, we read in section 2.1(A) about LW's hope and expectation that later generations will laugh at Cantor's "hocus pocus": Ich glaube & hoffe eine künftige Generation wird über diesen Hokus Pokus lachen. ["I believe & hope a future generation will laugh at this hocus pocus."] (Ms-117,110) Generally speaking, this prediction has not really come true, on the contrary: the set-theoretical outlook on math has become part and parcel of how people are taught math and this seems to have solidified and generalized this perspective, even with practitioners who in actual practice have no need for the axiomatic framework of set theory. Of course, LW wrote not even 100 years ago, and perhaps the emergence of PhilMathPract and the fact that LW's writing on PhilMath appears to be in fashion in the 2020s may be indications that his prediction will have a little more chance of coming true than appeared to be reasonable a decade ago. 346

LW is also very much aware of the inherently polemical relationship between his philosophy and the mainstream. In the following paragraph (Ms-113,117r-v d.d. 19320517 = "Big Typescript" (1933) §644), he acknowledges that mathematicians must be horrified at him: As opposed to the prevalent idea that mathematical truths are objectively and atemporally out there, LW points out that math as it is now is the result of a long sequence of historical decisions, educational formatting, 348 etc. 349 The way we are formatted as practitioners of math all along our educational trajectories teaches us to suppress certain lines of questioning as irrelevant. LW allows these questions to come up again. The critical attitude, or even the critical agenda (incl. the focus on presuppositions), that was underlying LW's philosophical work, incl. his PhilMath, remains unexplored and unparalleled in contemporary philosophy, esp. contemporary PhilMath. Perhaps it is about time for philosophy to reclaim that 'critical' aspect of its traditional role in academia (and society at large, for that matter). Although the arguments of scholars like Penelope Maddy against the speculative tradition of 'philosophia prima' certainly have had their merit in the context in which they intervened, and although various empirical approaches to mathematics as a phenomenon have yielded important results, it is worth paying attention to LW when he points out the specificity of philosophy, as opposed to empirical approaches, 350 as well. As pointed out above (section 3.2.1(C)), LW explicitly states numerous times (perhaps most explicitly Ms-116,247) that:
• history / ethnography / sociology ≠ philosophy: The factuality of our case studies is not what makes them philosophically relevant; I am not insisting that one should follow LW in his anti-empiricism in this regard, but rather that the empirical approaches are not ipso facto philosophy and that LW reminds us that the specifically philosophical approach is worth preserving.
• math ≠ philosophy: As opposed to current practice, LW repeatedly defines his own "from the outside in" approach in opposition to the approach to math displayed by mathematicians. In section 2.3(E), we even encountered the claim that most mathematicians have 'slimy' ideas about math.
This fits in with LW's views on the role and the status of a philosopher in general: "a philosopher is not a citizen of a community of thought; that's what makes him a philosopher" [Ms-112,72r: "(Der Philosoph ist nicht Bürger einer Denkgemeinde. Das ist, was ihn zum Philosophen macht.)"]. The context of this remark is interesting for its very heavy-handed criticism of Frank Ramsey (Ms-112,70v-71r, d.d. 19311101): his attitude towards what is presupposed disqualifies him as a philosopher. An interesting detail is Ramsey's tendency -according to LW-to push aside any properly philosophical reflection as 'trivial', which reminds us of the fact that LW appears to have systematically embraced triviality in a number of contexts (cf. section 2.0.3). 352 For LW, philosophy is essentially concerned with focusing on presuppositions / the given; in this sense LW remains fully within the Kantian tradition (see section 3.3 below).

Towards a critical agenda for Philosophy of Mathematical Practice

In this section, I attempt to articulate a number of ways in which our reading of LW's PhilMath could inspire present-day practitioners of PhilMathPract to adopt a more critical attitude towards their subject matter.

(A) against whiggism and exceptionalism in PhilMath

I would especially like to accentuate that a lot of what we are learning now from the history and anthropology (incl. sociology, ethnography, ethnomethodology, etc.) of mathematical practices is about the contingency and variability of mathematics. LW was exploring the philosophical importance of these aspects a long time (70+ years) before anyone else. 353 This chronological priority is only a minor scholarly detail, hardly worth mentioning, but what is important is that, whereas LW's work on the contingency and variability of mathematics was properly philosophical and was intended to contribute to the then current Grundlagen-debates, the philosophical consequences of the now abundant observations and insights in these matters are currently not being investigated at all. LW's 'anthropological' approach treats math in the context of human activity at large, which implies that it is an inherently comparative and 'neutral' approach: successful and unsuccessful, present and past, normal (?), exotic and fringe, official and unofficial, etc. 354 practices are approached in the same way as mathematical phenomena and as historical data. 355 LW's approach is thus in direct opposition to the exceptionalism and whiggism that overtly or covertly prevails in mainstream mathematical discourse. Thus, LW's work on math reminds us of the fact that, methodologically speaking, whiggism and exceptionalism are problematic:

352 NB: LW's work has also been dismissed as trivial or otherwise irrelevant.

353 For the fact that Spengler did emphasize the historicity and variability of math and the similarities and differences with LW's work, cf. Appendix 4.2.

354 What about 'fictional', 'counterfactual', 'impossible', ... ? Is this a good question? I don't think so.

355 We have seen that LW places Cantor's way of presenting certain set-theoretical notions in the same ballpark as prototypical crank discourse.

Needless to say, no criticism of Dedekind and Frege is implied: for their project of a reconstruction of arithmetic ab ovo, purely from logic (and set theory), it was necessary to try what they did.
On the basis of essentially the same kind of reasoning that Ferreirós presents in the above excerpt, LW criticizes the lack of understanding on the part of 'most mathematicians' (and that includes Dedekind) of exactly the kind of links that Ferreirós points out, and calls it "slimy" (cf. section 2.3(E) above). Note that Ferreirós' own explanation implies a direct and fundamental criticism of the very "project of a reconstruction of arithmetic ab ovo, purely from logic (and set theory)". I am not necessarily arguing in favor of LW's crude way of expressing this idea (if he had prepared that remark for publication, he probably would have chosen a different wording), but I do wish to point out that the hyperbolic deference towards the great geniuses in the history of math, as illustrated by the above quotation, deserves to be questioned. Major mathematicians can have truly ignorant opinions about their own field. 357 The difference in basic attitude between mathematics and philosophy, as pointed out by LW, may go a long way in explaining this.
A case in point are the remarks of the great Dieudonné (of Bourbaki fame) on the social aspects of mathematics. The following quote from Dieudonné (as quoted in van Bendegem 2014 (in Soler & al. (eds.) 2014), p. 215) was intended as an example to show that "sociological reasons [des raisons sociologiques]" never yield anything convincing ("Je veux bien, mais je n'ai jamais rien vu de très convaincant dans ce sens-là") and it is representative of the disdain towards sociological, and otherwise empirical, approaches in more traditional branches of PhilMath:
To the person who will explain to me why the social setting of the small German courts of the 18th century wherein Gauss lived forced him inevitably to occupy himself with the construction of a 17-sided regular polygon, well, to him I will give a chocolate medal. (Dieudonné 1982, p. 23) 358 (Dieudonné 1982)
Dieudonné's demand, as he formulates it in the above quote, is not so much unfair as it is misguided. There would be a lot of interesting things to say about how the construction of polygons ended up in the repertoire of mathematical problems and how Gauss's activity with respect to polygons fitted in with the context in which it occurred. The relevant context would not necessarily be limited to the courts in which Gauss circulated, but could include the people he corresponded with, what exactly his reputation and payment depended on, how exactly Gauss's and his contemporaries' research agenda in general was determined by its relevance to various stakeholders, etc. Thinking, writing, teaching, being taught, buying and selling books, and being paid for doing math are social phenomena, and as such have a lot in common with any number of non-mathematical social phenomena. I am not necessarily an advocate of sociological approaches to epistemological issues; I only wish to point out that Dieudonné's argument is a good example of exceptionalism and whiggism, and ultimately incoherent (see below).
In the same article (pp. 30-31), 359 Dieudonné offers more opinions on the social aspects of mathematics. First, he denigrates the practice of working on marginal topics that are not motivated by a proper research agenda (he calls this "non-motivated math" or "waffling" [délayage]) 360 and explains this practice in social terms, as a result of the pressure to publish in order to have an academic career.
What interests us here is the lack of symmetry or neutrality in Dieudonné's discourse: he applies 'sociological reasons' only to the cases of math he dislikes, and not the cases he does like, as if these were not equally social. • Gödel presents his completeness/consistency thing as a result in arithmetic and acts as if this is independent evidence for his Platonism, whereas in actual fact it is almost certainly the other way around (see below); • the unity of math is fake: de facto, math is not a unitary but a family of quite heterogeneous techniques, which retain their identities even if they are integrated within a superposed axiomatic formalism; • presenting Dedekind's cut as a construction, whereas it is not, is fake; • the drama about contradictions in axiomatic systems is fake in that it always remains to be seen whether they have any impact at all; • foundationalism is fake: de facto, mathematical technique is not founded by 'foundations': it is grounded (?) in an irreducibly complex, contingent, variable, messy web of practices; • some of the core images that are used to make the set-theoretical concept of the continuum seem intuitive and/or interesting, are actually fake. For LW, the notion of authenticity has thus a direct bearing on the technicalities of his account of meaning and is therefore not 'external to' logic and epistemology: it operates at the core of the rationality that logic and epistemology are supposed to embody. It points out that the very meaningfulness of mathematical concepts depends on the way they relate to the way they actually operate within mathematical practice and that the meaningfulness of mathematical practices is fundamentally linked to the more basic "applications" that they evolved out of. I have pointed out that this is coherent with the fact that he views the problems with these types of discourse as part of a larger cultural problem: LW views Cantor's verbiage as symptom of the "illness of our time", or -to use Spengler's phrase-the decline of the West (see also section 3.3(B) below). I believe no present-day philosopher should try and defend a Spenglerian view of culture and history (though many are already doing so and many more probably will in the near future), 362 361 Of course, these are aspects of mathematical practice that also deserve our attention, as philosophers. Cf. the contributions of Maurice Chiodo and the Cambridge Ethics in Mathematics group, e.g. (Chiodo and Bursill-Hall 2019), (Chiodo and Bursill-Hall 2018), (Chiodo and Vyas 2019). Cf. also the emerging field of virtue ethics in PhilMath, e.g. (Rittberg, Tanswell, and Van Bendegem 2018); (Tanswell and Rittberg 2020); (Aberdein, Rittberg, and Tanswell 2021). My point is that even among those philosophers of mathematics and philosophically inclined mathematicians that are willing to engage with the ethical or political issues involved in mathematical practice, many are not ready to accept the idea that there are ethical, political or more generally ideological issues at the core of mathematics itself. LW's work suggests that these lines of thought deserve to be explored. 
362 Let us not forget that not so long ago, Spengler-like arguments actually gave rise to such concepts as healthy Aryan science, which was differentiated from unhealthy, rootless, cosmopolitan science by being rooted in the everyday life of the healthy German Volk, and that the present-day political circumstances are sufficiently reminiscent to make it plausible that something similar may happen again.
but the way issues with the conceptual core of mainstream PhilMath are related to core ethical/aesthetical and societal/political issues in LW's work does at least suggest that we ask ourselves whether there is still room for such an ethical-aesthetical angle within present-day philosophy. 363
LW's remark reminds us of the fact that education shapes the 'given' within which mathematicians will operate. This emphasis on formatting through math education is an important aspect, not only at the philosophical level, but also for its potential application to policy. In the context of the present discussion, it is interesting to see that LW explicitly complains about the fact that math education actively stifles any critical attitude that pupils may have. It is interesting to note that some of the more interesting developments concerning the emerging research topic "Ethics in Mathematics" (EiM) developed as an educational issue (e.g. (Chiodo and Bursill-Hall 2019); (Chiodo and Vyas 2019)). Within the context of the present study, the following quote stands out:
Chiodo & Co's main problem is a straightforwardly educational one: how come so many mathematicians appear to behave irresponsibly in the professional contexts in which they end up, and shouldn't the educational institutions that train mathematicians do something about this? These remarks are not articulated by professional philosophers but by working mathematicians. Still, these remarks point at a link between (1) actual real-life problems and (2) the problems with some core concepts in mainstream PhilMath that I -following LW-have been pointing at throughout this study. In other words, there may be a link here between Chiodo & Co's remarks on the lack of moral awareness on the part of mathematicians and the fact that some of the core aspects of the practices they are educated in are encouraging an antihumanist world-view (?), in which human responsibility (incl. individual responsibility) is de-emphasized in favor of transcendental crystalline logic (?). Our argument makes a link between the inherent features of mathematics as they are presented by the mainstream (its autonomy, its uniqueness, its unity, ...) on the one hand, and the ethical aspects on the other. If you teach people that what they are studying is the language of God, that it is completely free of the sublunar vagaries and concerns, that their talent is unlike any other talent, then it should not come as a surprise that you end up with a bunch of dangerous fools as soon as you ask them to function in the context of the complexities of real life. The point is that presenting math as it is presented in mainstream math education is not innocent (see (Ravn and Skovsmose 2019), Part IV "How Good is Mathematics", and paragraph (D) here below).
365 365 Although education policy is not part of the subject matter of the present study, the above does suggest that policy choices with respect to math education have complex ramifications, both with respect to the ideological baggage that comes with the content conveyed, and with respect to the consequences of the curriculum in shaping the views and attitudes of future practitioners and the public at large. For instance, in the light of the above, one may want to think about including notions of the history of mathematics in the curriculum so as to mitigate the fundamentalism inherent in the monism and foundationalism of the standard approach to math education. (D) final remarks on the need for a critical approach to PhilMath and PhilMathPract: why would bad faith and bad taste in PhilMath matter? LW is part of a long tradition of philosophy conceiving of itself as critical, both in the sense of taking an outsider's point of view with respect to the society/networks within which they operate, as well as in the Kantian sense of being interested in the pre-meaningful presuppositions (see section 3.3 below). At the same time, LW's PhilMath emphasizes the link between the more technical aspects of PhilMath and very broad cultural and societal issues: LW's criticism was rooted in Kraus' social/political criticism and a Spengler-like vision of the decline of Western culture due to its fragmentation, and remained essentially a kind of cultural critique. One of the immediate implications of focusing on the pragmatics of math, which could be the most important contribution of LW to PhilMath, is that it provides strong evidence against the autonomy (its 'freedom', as Cantor called it) of math: in actual fact, (1) mathematical technique, from its inception and throughout its history, has been rooted in, and intertwined with, non-math-specific applications, and (2) mathematical discourse has been shaped by equally non-math-specific philosophical, religious, etc. concerns. Furthermore, whatever mathematicians may say about their own field, 366 the public to evaluate science by improper criteria, thus creating unreasonable and (which is worse) inapplicable expectations, which in its turn inevitably contributes to a loss of credibility of academia and thus creates room for really ugly alternatives and devalues the educational value that the sciences, but especially math, claim for themselves. 368 If a substantial part of the public feels that their pre-existing distrust of intellectuals and science is vindicated when it turns out that science sometimes gets it wrong or when scientists are waffling on TV or they hear that Heisenberg has proven that "scientific knowledge is never certain" or "Gödel has proven that even math is not always true", this shows that science has communicated a fundamentally misleading picture of how it operates. The current political climate in Europe and the U.S.A. fosters anti-scientific sentiments and improper communication (incl. education) about how science/research actually works has to be part of the cause. 
In that sense, whiggist or otherwise unrealistic representations of the workings of science do serious damage to the credibility of academia, by creating not only unrealistic, but outright misleading expectations about what science does: the emphasis should not be on the ultimate truth of the results (not even, or perhaps especially not, in terms of an 'approximation of the truth') but on the process of careful examination and permanent re-evaluation of the empirical data and reformulation of the models and theories involved, as well as on the inherently collective (incl. polemical and/or competitive) and historical nature of the long-term endeavor and its record of success, not only in its technological applications but also as an ideological crucible. 369
willing to discuss what motivates decisions that have an impact on the outcome of one's work, as it is presented to the public, how can that be a good thing? 368 In the same way that math has been presented as promoting rationality and even "good taste" (incl. by LW), epistemic kitsch has the opposite effect of destroying the potential hygienic/therapeutic value of rationality by replacing it with fantasy. 369 The study of the relations between science qua body of knowledge on the one hand and its technological 'applications' on the other, as well as of the practicalities of agenda setting, financing, etc., as conducted with great success in the 20th century in various academic fields (Philosophy and History of Science, Science and Technology Studies, etc.) and the resulting awareness of the fact that science/knowledge is a deeply social/societal phenomenon, should not obscure the historical importance of science as a model for our relationship as humans to our environment and as a model for our ability to deal with conflicting opinions in a rational and productive way, i.e. as an ideology.
Perhaps even more importantly, epistemic kitsch appears to have for a function to distract attention from real issues and impedes the kind of critical reflection that is being encouraged in the above. All of this goes for academia as a whole but perhaps the problem is more acute in the case of math: if math is where we investigate certain core aspects of our rationality and our interaction with nature, at their purest, if math is supposed to set the standards of intellectual rigor, both within the purely academic/scientific tradition and as an integral part of the educational system at large, if math wants to continue to claim to offer an educational standard for rationality, then we can't allow bad faith and bad taste to fester at its core, especially now that 'bad faith' (i.e. checking the boxes of formal procedures while bypassing what is their purpose) has become a major societal problem, in that it affects the rationality embodied in some of the core political and judicial institutions. 370 This being said, it bears repeating that these issues should not be viewed as external to actual math: not only is the demarcation between what is math-internal and what is math-external not that sharp once one adopts a pragmatic point of view, but -as we argued here above-the societal position of math and the way math construes its own identity as a practice are closely related aspects.
I have argued that the critical aspect that is at the core of LW's PhilMath is almost entirely lacking in present-day PhilMathPract and I have attempted to demonstrate that there should still be room for a properly critical and properly philosophical approach to PhilMath, not only at the margins of PhilMath but even -in a Wittgensteinian vein-at the core of what makes math math.
Wittgenstein's philosophy of mathematics as criticism and critique
The specific aim of this study was to focus on a few lines of thought within LW's PhilMath that are not often focused on in the literature, especially the critical remarks that nobody seems to like and that make most of us cringe (cf. section 0.1(B)). Looking at this specific corpus has uncovered a few strands within LW's philosophy at large for me, of which I didn't previously realize the importance.
370 It would be interesting to further reflect on the origins of logic (and formal reasoning in general) as part of ancient Greek legal practices (cf. Dutilh Novaes (Dutilh Novaes 2012) for the idea that formalism is a way to democratize knowledge -an idea that in its turn deserves further scrutiny) and -for that matter-the relation between formal reasoning and bad faith. On the one hand, philosophical logic appears to originate in Plato's attempt to make sense of the bad faith of the sophists (for an analysis of the passage in Plato's Sophista that dramatizes this event, see Hoekstra & Scheppers (Hoekstra and Scheppers 2003)). On the other hand, the above suggests that formalism generates its own kind of bad faith. 371 LW's objections to the inauthentic do not only refer to the ethical but also to the aesthetical side of 'fakeness'.
-a bed on wheels
-diagonal techniques as proofs or 'theories'
-the image of more and more things being crammed in an increasingly tiny space
-anachronistic ornamentation in architecture and interior design
-the claim that formal systems can actually be used to convince one of the reliability of basic arithmetic
-the idea that there can be something sensational about mathematical results
-the idea that contradictions in a formal system are the end of the world
This list -as well as most lists-is -as I said-'fun', but it is also interesting as an illustration of the following important facts: (1) there is a common thread in LW's evaluations, which I summarized under the heading "authenticity"; (2) the technical-philosophical aspects and the more personal-existential aspects of LW's thought converge in this regard.
(B) recap: from nonsense to fake sense to pretense
The problem of meaningfulness (as opposed to nonsense) has been a constant in LW's philosophical work from the beginning. As pointed out in section 1.2.1, nonsense/meaninglessness/senselessness was one of the central issues dealt with in the TLP. We also saw that, insofar as nonsense is a problem, nonsense is fake sense, something that looks like it is meaningful, but isn't, and in section 2.0, we saw that this problem of fake meaning was a problem LW inherited from -most notably-the culture-critical journalism of Karl Kraus.
What did change in LW's later work is how meaning/meaningfulness was construed, conceptually:
-in LW's early work, meaningfulness was construed in terms of being a picture of reality and anything that could not be analyzed as a combination of elementary bivalent (true or false) propositions was discarded as meaningless;
-in his later work, meaningfulness was construed in terms of embedding in everyday practices / our everyday lives: any discourse that has a function within a real-life everyday practice is ipso facto meaningful; discourse that fails to be embedded in that way is meaningless. 373
This evolution is also reflected in LW's work on math. Apparently, what brought LW to philosophy was his interest in the issue of the foundations/Grundlagen of mathematics: what is the hard bedrock underneath math? In line with his development as a philosopher in general, LW's approach to this basic issue changed dramatically:
-at first, LW participated in the logistic movement (?) that sought to find bedrock in the crystalline and unified system of logic, as represented by formal logistic systems;
-then, LW evolved towards an 'anthropological' approach, in which mathematics was no longer a unitary propositional system but a heterogeneous bunch of techniques, as applied in an even messier hurly-burly of everyday practices; in this context, the validity and meaningfulness of math (or any other type of practice) is not necessarily problematic; however, what mathematicians and philosophers say about math is often not embedded in everyday practice and (as we have seen throughout Part 2) LW does not hesitate to criticize this type of discourse in very harsh terms.
It is important to understand that 'everydayness' is a key concept in all of this: de facto, all and every discourse is embedded in a practice: Cantor's talk about math is deeply embedded in long-standing traditions of religious and philosophical discourse and so is the metaphysical talk LW is supposed to have railed against as nonsensical. I have repeatedly pointed out this inherent weakness in the concept of everydayness: what counts as 'everyday'? why would Gödel's stuff, or Cantor's stuff, or Heidegger's not be 'normal'? isn't their work part of their everyday? why would one choose to make that distinction? why wouldn't these practices be sufficient to give meaning to the discourses that come with them? what's the problem with these supposedly 'non-everyday' practices? The Spenglerian notion of the 'organic unity' of a healthy culture is politically suspect and descriptively not viable in that it is in direct contradiction with the obvious diversity (fragmentation?) of the everyday in general. 374 At the end of Part 1 of this study, we concluded that 'everydayness' was not a result of LW's work on mathematical practice but part of an agenda underlying his investigations. 375 This observation offers us a solution to the paradoxical tension between LW's stated antirevisionism and his blatantly revisionist critical remarks: LW lets everyday talk be and objects to talk that was not embedded in everyday practice. Whatever we may think of this concept ourselves, it is undoubtedly an inherent, bottom-line feature of LW's outlook. In section 2.0, I pointed out how 'everydayness' was part of a deep-rooted (?) and wide-ranging ideological (?) 376 construction that LW shared with many of his contemporaries:
-authenticity (vs. fakeness) is a core concept, applicable at the existential level, but also at the societal/political level, as well as -importantly-at the epistemic level;
-authenticity is related to everydayness in that, at least for LW, 377 the difference between the authentic/meaningful and the inauthentic/meaningless appears to coincide with the difference between the everyday and the non-everyday;
-embeddedness in everydayness is in its turn related to a Spenglerian vision of the decline of culture through fragmentation (?): in a healthy culture, all aspects of society form an organic whole; the disconnect between mathematical discourse and the techniques and applications that make math meaningful, or the (alleged) disconnect between modern music and the rest of the culture, are for LW symptoms of the 'illness of our era'.
In section 1.3 and throughout Part 2, I pointed out that LW's PhilMath consists essentially of a fundamental critique of what philosophers and mathematicians say about mathematics. LW's criticism targets the following concepts:
374 NB that Heidegger appears to view the fragmentation of the everyday as a reason to see it as the cause of inauthenticity, which I completely disagree with, but which is straightforwardly coherent with his Spenglerian views on culture, but not at all with the phenomenology of the everyday he starts out from in Sein und Zeit ((Heidegger 1967); cf. Scheppers (Scheppers 2017), Chapter 2). As already mentioned above, LW's processing of the same basic ingredients is much more tension-ridden than Heidegger's, in that LW's attitude vis-à-vis the diversity and fragmentation (?) that is characteristic of the everyday is harder to pinpoint. 375 Elsewhere ((Scheppers 2017), Chapter 2, §4), I argued that a very similar phenomenon can be observed in Martin Heidegger's work. 376 It is unfortunate that the term 'ideological' has gotten a negative connotation and if another term with less baggage was available I would have adopted it gladly, but the core meaning of the term fits exactly what I mean. 377 As mentioned before, I am not yet clear about the similarities and dissimilarities between LW's outlook and that of some of his contemporaries (Spengler, Heidegger, ...).
• foundationalism: the idea that the validity of mathematical technique ultimately depends on its being integrated into a foundational framework
• monism:
o the unity of mathematics: the idea that math, as it is, is (and has to be) a single system;
o totalitariness/completeness: the idea that everything mathematical (including all that mathematicians will come up with in the future, or anything that is deemed or will be deemed relevant to our understanding of math) can and should be integrated in a single system; 378
o the naturalness / objectivity of mathematics: the idea that mathematical results constitute facts about the world (which includes the Platonist variant of objectivism, according to which this world/nature is a separate realm of reality);
• exceptionalism: the idea of the uniqueness / specialness of mathematics, epistemologically (as a body of knowledge), ontologically (as a separate realm of reality), pragmatically (as a human endeavor).
It should be clear that these aspects are (1) definitely at the core of the standard view of mathematics, but (2) at the same time definitely go beyond the math-specific: monism, naturalism etc. are not ideas that are specifically mathematical and should not be treated that way.
Interestingly, LW also repeatedly insists on the fact that the main difference between him and most mathematicians / philosophers of mathematics is that the latter don't focus on the presuppositions 379 that underlie their mathematical discourse but simply remain within and perpetuate the framework they were trained in, whereas for LW, focus on what is presupposed in these frameworks needs to be the main focus of PhilMath (LW's remarks on Ramsey, quoted in section 3.2.2(B) above, illustrate this quite explicitly). In other words, for LW the identity of philosophy (as opposed to e.g. historical or intra-mathematical approaches to math) crucially depends on its focus on presuppositions. So: LW's work is inherently critical, not only in the 'vulgar' sense of 'expressing ethical/aesthetical judgments' (which we have seen he does do in his PhilMath), but also in
378 Case in point: the tendency for math to try and include 'meta'-aspects (i.e. aspects that need not be specific to math) within its own system, such as proof-theory, model-theory, category-theory, ... My claim is that these aspects are better understood outside the formal system, in that they are common to math and other endeavors, and including them within mathematics destroys all possibility of articulating the commonalities between math and other human activities in a meaningful way. 379 -LW criticizes the agendas underlying the Grundlagen-debates, Gödel's contribution, etc., rather than the technicalities of the arguments within the debates themselves; 380 Some authors downplay the Kantian aspect of LW's work, for more or less superficial reasons (Steinvorth (Steinvorth 1979), Sass ((Sass 2001), pp. 116-117), Egan ((Egan 2019), p. 150)). I am not interested in disputing these arguments, however important they may be from different perspectives. Some of the apparent differences may also be purely semantic. Penelope Maddy's interpretation of LW's work in terms of 'anti-philosophy', 'naturalism' and 'second philosophy' ((Maddy 1993), (Maddy 2007)) would be an extreme example of downplaying LW's belonging to the philosophical tradition. 381 For a number of years, Robert Hanna was the go-to reference for the topic of the relations between Wittgenstein and Kant (e.g. (Hanna 2017); (Hanna 2007); (Hanna 2001)). There is now an emerging body of recent work dealing with the topic (see e.g. (Pier 2022), (Ritter 2020), (Waxman 2019)). Again, for the purposes of the present study, my interaction with, let alone contribution to, this body of work will remain minimal.
- The interlocutor of Z §351 might well think she shares Wittgenstein's thought that concepts are only at home in the language-games in which they are used. But for Wittgenstein, this way of characterizing the relationship between concepts and language-games is still too loose because it supposes that the concepts are intelligible at all apart from the language-games in which they have a use. Although he clearly has many affinities with Kant and the post-Kantian tradition, it would miss the mark to characterize Wittgenstein as delineating the conditions for the possibility of the concepts that we have precisely because such delineation engages with the question of the conditions under which we could have such concepts. Also interesting is footnote 13: We find a similar remark at PI §142, where Wittgenstein imagines lumps of cheese regularly growing or shrinking without obvious cause.
In such cases, rather than saying that we could not measure the weight of cheese as we currently do, we should say simply that we would not: our current language-game of measuring cheese would simply have no point in such circumstances. Egan is right in pointing out that LW is not delineating conditions of possibility for concepts, but what he does have in common with any other Kantian project, is that he is engaging with the notion of what is given and the idea that meaningfulness is determined within the framework of what is given. By the way, a similar problem occurs within Martin Kusch's interpretation of LW's rule following stuff in terms of 'assertability criteria'. To me the term "conditions" sounds bizarre in this context, as if actions could occur without having "a point", as if pointless sentences are filtered out post hoc according to these criteria. Would one be similarly inclined to speak about "feasibility constraints" on non-verbal action? I think not. From a pragmatic point of view, the meaning of utterances should be viewed in the same way as the 'sense' of other actions (this is exactly what I meant when coining the 'Pragmatics first' slogan; cf. section 1.1.1(E) above). 383 Contra Sass, who appears to claim that LW's early work was not Kantian, but Cartesian Sass (Sass 2001, pp. 116-117): "What now become central are not Cartesian issues and themes -the sense of inwardness involved in identifying with a mental core set apart from body, the emotions, and the external and material world -but, rather, the sense of removal and remoteness that derives from adopting the position of an external observer who exists somewhere outside both the self and the entirety of its world. Such an observer presumes to be able to adopt a totalizing or transcendental stance in which it is possible to know, describe, or somehow intuit the knowable world as a whole in its most fundamental relationship with the human mind. These issues have a more Kantian or post-Kantian flavor, for they pertain to the issue of limits, to questions about the nature and the knowability of the boundaries of possible experience or of sensible discourse itself. Such issues were, in fact, central throughout the entire course of Wittgenstein's philosophical career. By considering them, it is possible to show that Wittgenstein, both early and late, was always driven both to express and simultaneously to deny his schizoid inclinations. In this paper, however, I will focus on the earlier period." • somewhat later, grammar becomes LW favorite conceptualization of what precedes meaningfulness: within this framework, grammatical rules are 'transcendental' with respect to meaningful language use; • in LW's mature work, this role was taken over by such concepts as Language Games, Forms of Life, 'our lives', etc. , i.e. the kind of holistic structures that I subsumed under the umbrella term 'practices'. Interestingly, but perhaps not surprisingly, one of the more explicit articulations of LW's engagement with the notion of 'the given', deals with math: in PPF §341-345, 384 LW reflects on the fact that people in general agree on the results of calculations, as the basis of what is called 'mathematical certainty'. 
In typical fashion, LW introduces a hypothetical circumstance in which people do often come to disagreements about calculations and points out that if such disagreements occurred more or less systematically (for instance, because people thought the signs on the paper changed, or their minds slipped, ...), mathematical certainty would not exist. LW refuses to offer a quasi-/pseudo-causal explanation and uses this example to drive home -what I called-the holism and the structuralism of his approach: even it is true that it would be impossible to use certain (unreliable) types of paper and ink as a support for calculations, it would still not be correct to say that mathematical certainty depends on the reliability of the paper and ink: the reliability of the support to that purpose only makes sense if one is already familiar with what a reliable calculation, using a reliable support, would be. This reasoning leads up to the following seminal one-liner: 385 345. Das Hinzunehmende, Gegebene --könnte man sagen --seien Lebensformen. 386 It is important to understand the dynamics, the 'information flow', in this one-liner. It occurs in the context of the question as to 'what grounds (?) mathematical certainty'. Various possible answers have been excluded. The question is now reformulated in the first part of this sentence (in linguistic terms the 'topic'): "what is it that we have to take along for the ride? what is it that is the given to us?". The answer is the second part (the 'comment'): "Forms of life". 387 In other words, the message appears to be: there is no answer to the question of the given, there is nothing to explain, beyond the Forms of Life. There is nothing to reduce them to: the very givenness, the sheer facticity, of Forms of Life is irreducible, i.e. it cannot be explained by reducing it to the physical aspects of the objects involved, nor to any cultural aspects of the practice, nor to the cognitive or biological aspects of the agents: what is given, are the Forms of Life in all their irreducible multidimensionality and complexity. Perhaps the most accomplished articulation of LW's mature views on the given is the wonderful image of the river and its bedrock in ÜG § §94-99, 388 in which LW expresses his 386 The translation given in (Wittgenstein 2009) is : "345. What has to be accepted, the given, is --one might say -forms of life." A number of things can be said about the language of this one-liner and its translation: • the translation has the copula "is" in the singular, whereas the German original has the copula "seien" in the plural (this difference is probably a mere matter of idiom); • the copula in the German is also in the 'conjunctive', probably triggered by the modal marker 'könnte man sagen'/'one could say', giving the original a less affirmative flavor than the translation; • more importantly, whereas 'to accept' certainly can be a translation for 'hinzunehmen', it is also potentially misleading in the present context: the verb literally means "to take along, along with other things", it does not necessarily imply an approval or even a choice; as a matter of fact, I believe it should be obvious from the context, that the idea of 'choosing to accept' would be completely impossible: one obviously does not choose to accept 'one's Form(s) of Life' (including the writing technology or the calculation techniques that are available, etc.) or not (for the fact that such interpretations do exist, cf. Boncompagni 2022, pp. 
42-43; for the record: note that I would not necessarily disagree with those who object to LW's 'quietist' (?) politics, only that the term 'hinzunehmend' in this passage is not a case in point). 387 I am using the term topic-comment and not subject-predicate, because grammatically speaking, Lebensformen is the subject. Just saying. 388 94. Aber mein Weltbild habe ich nicht, weil ich mich von seiner Richtigkeit überzeugt habe; auch nicht, weil ich von seiner Richtigkeit überzeugt bin. Sondern es ist der überkommene Hintergrund, auf welchem ich zwischen wahr und falsch unterscheide. 95. Die Sätze, die dies Weltbild beschreiben, könnten zu einer Art Mythologie gehören. Und ihre Rolle ist ähnlich der von Spielregeln, und das Spiel kann man auch rein praktisch, ohne ausgesprochene Regeln, lernen. 96. Man könnte sich vorstellen, daß gewisse Sätze von der Form der Erfahrungssätze erstarrt wären und als Leitung für die nicht erstarrten, flüssigen Erfahrungssätze funktionierten; und daß sich dies Verhältnis mit der Zeit änderte, indem flüssige Sätze erstarrten und feste flüssig würden. 97. Die Mythologie kann wieder in Fluß geraten, das Flußbett der Gedanken sich verschieben. Aber ich unterscheide zwischen der Bewegung des Wassers im Flußbett und der Verschiebung dieses; obwohl es eine scharfe Trennung der beiden nicht gibt. 98. Wenn aber Einer sagte »Also ist auch die Logik eine Erfahrungswissenschaft«, so hätte er unrecht. Aber dies ist richtig, daß der gleiche Satz einmal als von der Erfahrung zu prüfen, einmal als Regel der Prüfung behandelt werden kann. 99. Ja, das Ufer jenes Flusses besteht zum Teil aus hartem Gestein, das keiner oder einer unmerkbaren Änderung unterliegt, und teils aus Sand, der bald hier, bald dort weg-und angeschwemmt wird. view of the given by means of a comparison with the flow of water in a riverbed: there is at each point in time a difference between the bedrock and the river; there is also a difference between the flow of the water and the changes in the bedrock; even if the difference between the flow of the water and the change in the riverbed is not always very sharp, it still would be wrong to say that logic (qua study of the bedrock) is an empirical science (i.e. the study of the water). 389 Of course, the way in which the problem of the given is formulated, has been displaced in LW's (later) work as compared to Kant's, in several ways, and it could be argued that this is what determines LW's place in the history of philosophy. From a birds-eye, broad-brush, historical perspective, LW's specific contribution, beyond the original Kantian position, is to point out that what is 'given' in any given case is characterized by the following features: -holism: the given can no longer be conceived of as a relatively simple set of features of our cognition or of the world itself ('categories', etc.), nor as the logical structure of propositions, but only as an open-ended hurly-burly of irreducibly complex, multidimensional practices; -contingency, variation and change have been introduced at the heart of the given: practices are inherently and irreducibly variable and historical; -deflation of transcendence: to the extent that the traditional, Kantian use of the term 'transcendental' still carried an (undesirable?) 
metaphysical weight (and that remains to be seen), Wittgenstein's implementation of the concept of transcendence has no longer any connotation of belonging to a 'supernatural realm'; 390
-as a result, the given is no longer articulated in terms of conditions for meaningfulness, but in terms of the irreducible facticity of meaningfulness: the ultimate ground for meaning is that things -as a matter of fact-391 do make sense to us within the context of our everyday practices.
The way the problem of the given first manifested itself to LW was probably the way it was presented in the context of the Grundlagen-debates concerning math, and LW's contribution to these debates has been to point out that there is an iceberg of given stuff beneath every propositional system. Gödel may actually agree with this idea, put this way, but unlike Gödel and other Platonists, LW appears to not only oppose formal systems as the ultimate ground
389 Cf. Floyd's recent monograph on LW's PhilMath, in which she characterizes the last period in LW's work as "Fluid Simplicity". Also note that LW apparently continues to view what he does as 'logic'. 390 I don't know enough about Kant to make any claims regarding Kant, but I am not sure that it would be charitable to Kant to attribute such 'metaphysical heaviness' to his conception of the 'transcendental' (did he coin the term to distinguish it from 'transcendent'?). 391 I would personally be ready to call this a 'metaphysical fact'. The working title for my long-term project is 'The metaphysics of doing the dishes'.
for math, but any type of propositional truth, however conceived, at all: LW argues that the given underlying math is not a crystalline mathematical universe, but a messy bunch of ultimately non-propositional, heterogeneous, contingent and fluid human practices.
(D) criticism, critique and therapy
LW's interest in philosophy originated with the problem of the foundations of mathematics. Here above we have argued that the way LW conceived of this issue was steeped in the Kantian tradition, in that he framed the problem in terms of an investigation into the pre-meaningful ('transcendental') 'given'. His answer to the question ended up non-Kantian: instead of a neat closed set of categories, we are presented with an open-ended, variable, contingent mess of everyday practices as the irreducible 'given' ground (?) for meaning. We have also seen that there is an inherent link between (1) central elements of LW's conceptual apparatus (pragmatism, holism, everydayism), (2) his critical agenda (ultimately in terms of 'authenticity vs. fakeness') and (3) Spengler-like ideas about the decline of the culture he affiliated with. In the context of his work on math (as well as elsewhere), LW repeatedly articulated these links, quite explicitly. All of this is at the service of a culture-critical approach to meaning and nonsense: the loss of meaning in certain types of discourse is due to a lack of embedding in everyday practice, which is in turn a symptom of the disintegration of the culture of which these discourses are a part. The way LW formulates his critique involves the systematic use of ethical and aesthetical vocabulary, as well as the idea that epistemic issues are ultimately a matter of lifestyle and hygiene. And this brings us back to the stated 'medical' aims of LW's philosophy at large: philosophy is supposed to be a kind of therapy for various linguistic, conceptual or otherwise cultural illnesses.
I believe that -if anything-the above shows how LW's conception of philosophy can be made to make sense for us: what I have called 'a critical approach' in the above is indeed -in a certain sense-392 a matter of epistemic hygiene. 393
Let me add a final corollary aimed at Wittgenstein-scholars who want to participate in present-day PhilMath. It is tempting to want to recuperate LW as part of a quasi-empirical approach to math and mathematical practice (I have indulged in that in my own work in linguistics and will certainly indulge in that again, if I ever get to publish the research on the concept of practice that this study is a part of). However, the concept of 'everydayness' and the inherent value-judgment that is implied by it (let alone the Spengler-like strands) are an awkward fit with such an approach. Those of us who feel called to represent a Wittgensteinian voice within current debates within PhilMath have to own up to LW's own mission, as he appears to have understood it himself, or to make a clear distinction between their own agenda and LW's.
392 NB that the application of medical vocabulary to various cultural phenomena, especially as an expression of criticism or condemnation, is again a sign of the times: another not necessarily desirable (insalubrious?) feature that LW had in common with his contemporaries, including the Nazi ideologues. This remark illustrates a more general problem: how to deal with the fact that a number of concepts (authenticity, hygiene, the everyday, nature, ...) are systematically linked with a political ideology that is undesirable (at least to me), whereas one still would like to be able to point out that -for instance-certain types of behavior are in an important sense 'inauthentic' (bad faith, etc.)? This issue (including a reflection on the differences in the ways in which the same cluster of concepts is put to work by LW, the upper-class, politically ambiguous conservative, and Martin Heidegger, the revolutionary Nazi) will be the topic of further research on my part. 393 NB that none of this sounds like naturalism or Maddy's 'second philosophy'; on the contrary, LW's philosophy, in which his PhilMath occupies a central place, appears to be firmly rooted in 'philosophia prima', both in its focus on what is presupposed ('the given'), and its stated aims, including the fact that for LW, philosophy should operate at the same time at the existential level and at the level of society at large.
Summary
(A) Approach
This study focuses on LW's PhilMath and more specifically on a number of critical remarks targeting various aspects of early 20th century mathematics and PhilMath. Due to the fact that this study is part of an encompassing research activity focused on the concept of 'practice', I was most interested in LW's later work, but many of the lines of thought I focused on here showed remarkable continuity, which made it possible (and also natural) for me to include passages from LW's earlier work. As for the corpus of texts I focused on, I mostly started from a close reading of a number of excerpts that are not often focused on:
-in section 1.3, I analyzed a prolonged excerpt from MS-126, mainly consisting of a long list of often made-up examples of fringe applications;
-in sections 2.1 through 2.3, I focused on a number of excerpts in which LW objects against generally accepted aspects of mathematics.394
I read these selected passages from LW's work against the background of the following three aspects:
(1) LW's oeuvre as a whole, characterized by the following features:
pragmatism: meaning is conceived of in terms of embedding in practices; in the case of mathematics, this implies the primacy of applications, out of which mathematical techniques emerged and within which they function in a straightforwardly meaningful way;
holism: practices and forms of life are multidimensional structures involving not only the verbal and non-verbal activities of the agents, but also the biological, physiological and cognitive features of these agents, as well as the physical properties of the world they operate in; as a corollary to this holism, it follows that the propositional cannot be isolated from the non-propositional, which in its turn implies that no practice can be conceived of as mainly (let alone exclusively) epistemic;
structuralism: the idea that the relations between the various dimensions that make up a practice or form of life are internal to the structure of these practices or forms of life, i.e. that their identity is determined by their place within that structure; in the case of math, a few of LW's more weird-sounding claims start to make sense if we take into account his structuralism: if it is literally true that the meaning of a mathematical term equals the way it is related to an actual mathematical technique with a proper real-life application, then a conjecture about which one has no idea how to prove it literally has no meaning;
everydayism: the idea that meaningfulness ultimately depends on embedding in everydayness is a fundamental component of LW's agenda (as opposed to a result of his analysis);
criticism: a critical attitude and a critical agenda drive LW's philosophy; I identified the agenda underlying LW's criticism in terms of 'authenticity vs. fakeness' (see section (B) here below).
(2) the early 20th century Grundlagen-debates in which LW started his career (?) in philosophy: the immediate context for LW's philosophy is the hotly debated issue as to what makes mathematics valid and it is hard to overestimate the central role of the various aspects involved in these debates for LW's work, not only his work on math and logic, but his philosophy at large.
(3) the biographical and cultural context from which LW's work emerged:
-biographical data shedding light on patterns of thought that were predominant in both LW's private life and his work (perhaps harder to separate in LW's case than in other cases), most notably his aversion for vanity, theatricality, insincerity and other avatars of inauthenticity;
-the general cultural context in which LW grew up, incl. the problematization of inauthenticity in terms of a lack of embedding in an organically united culture and the negative evaluation of the perceived dissolution of Western culture since the 19th century (as most famously articulated by Oswald Spengler); this aspect was a major ingredient in (at least some of) LW's lines of thought on math that we analyzed above.
(B) LW's critical PhilMath
I believe I have shown at least the following:
1. LW's criticism of certain aspects of Cantor's, Dedekind's, Russell's or Gödel's work cannot fruitfully be separated from the rest of his PhilMath; it is a central part, perhaps the very core of his work on mathematics;
2.
LW's criticism must be understood within the context of the foundational issues that attracted LW to philosophy in the first place, i.e. LW's main concern remained the issue as to what makes mathematics valid, reliable, meaningful; 3. LW's criticism is consistent, in the sense that it always follows the same pattern, but also in the sense that it is coherent with other aspects of his PhilMath and his philosophy at large, fits in with patterns of thought that are also omnipresent in other aspects of LW's life, and -beyond that-in the cultural milieu from which he emerged; 395 for the sake of ease of reference, I have used the terms 'fakeness vs. authenticity' to cover the concerns underlying LW's criticism; 4. LW's criticism is closely related to -what I called above-his pragmatism and his everydayism, in that he criticizes those types of discourse that don't make sense (suffer from a lack of meaningfulness), because the connection between them and everyday practices has been disrupted; for LW, the worst sins (?) committed by philosophers of mathematics are (1) that they try to sell in the end trivial conceptual problems with games that they have created themselves, as awe-inspiringly deep facts of nature, whereas (2) they are completely oblivious to (a) the relevance of the iceberg of not strictly mathematical and outright non-mathematical presuppositions that underlies and gives meaning to mathematical discourse, and most notably (b) the hurly-burly of everyday practices and the mathematical techniques that emerged from those practices; 5. LW's criticism of discourse about math operates at a very fundamental level, not at the level of technical details: a. LW opposes the monism (totalitarianism/unitarianism) that characterizes most of mainstream PhilMath, i.e. the idea that mathematics consists in a single system/universe (formalizable or not), pointing out that mathematics is in actual practice heterogeneous and fragmented; in the same vein, he also opposes the idea that mathematical theorems are facts of nature rather than the consequences of the rules we set ourselves for some of the mathematical/logical games we play; b. LW frontally attacks foundationalism as such, by pointing out that the techniques that make up real-life mathematics, as they have been applied in everyday practical situations, do not need any axiomatic systems to ground them; as a matter of hard empirical/anthropological fact, they are much more secure than foundational systems ever could be; the axiomatic systems that are presented as foundational are actually at best new pieces of math, peripheral add-ons to the existing techniques; c. LW profoundly dislikes the sensationalism (I call it 'epistemic kitsch') that consists in presenting trivial consequences of the way our language games function, as something deep, mysterious, awe-inspiring ... or otherwise interesting; this dislike coincides with the values that LW advocated in his everyday life; d. many of the aspects of mainstream PhilMath that LW objects to ultimately boil down to its exceptionalism, i.e. the prevalent but baseless idea that mathematical practices are different from any other kind of practice, that mathematical objects are fundamentally different from any other kind of object, etc. 6. 
LW's critical remarks on mathematical topics are indistinguishable from the Spenglerian culture-critical strand in his work: his critique of set-theoretical parlance boils down to the idea that it participates in 'the decline of western culture' by virtue of its being disconnected from the organic unity of that culture; 7. LW's criticism is also a Kant-style critique, in that it is ultimately concerned with what is 'given', what is presupposed before anything can be meaningful; this is also the point at which LW situates the main difference between his own philosophical approach and the approach that mainstream PhilMath takes vis-à-vis math (cf. also item 4 in this list); 8. I have argued that it would be desirable for present-day PhilMath (and especially present-day PhilMathPract) to reconnect with the critical role that LW attributes to philosophy at large and PhilMath in particular; in any case, it would be disingenuous to claim to represent a Wittgensteinian voice in PhilMath without owning up to the inherently critical nature of LW's work on math.
(C) Final remarks
Though this study started out aiming to show how LW's critical remarks on math illustrate LW's general approach to meaning and nonsense in terms of embedding in the everyday, and how the critical aspect coincides with his general outlook on philosophy as therapy etc., this research ended up substantially enriching my understanding of these more general aspects, especially by highlighting the importance of the Spengler-like strands in LW's thought and the tension-filled nature of LW's views on the fragmentation of the everyday. The net result of reading my study may be that those who didn't like LW to begin with have even less sympathy for his work, and perhaps also that those readers who started out with a basically sympathetic attitude towards LW discovered aspects of his work that they find alien (or even repulsive) to be much more important than they previously thought. 396 On the other hand: sooner or later, a critical approach to mathematical monism and exceptionalism should be put on the agenda, with or without LW. In any case, I think I have shown that LW's PhilMath should be read as an integral part of his oeuvre as a whole, both ways:
-on the one hand, LW's PhilMath can't be understood outside the critical agendas that drive his philosophy as a whole (incl. the uncomfortable Spenglerian strands);
-on the other hand, one should not forget how central math was to LW as a philosopher: not only did his interest in philosophy emerge from the Grundlagen-issue, but some of the main themes LW encountered as part of the Grundlagen-debates remained his main focus throughout his oeuvre.
Part 4: Appendices
Whereas the main body of this manuscript consists of materials that could/should in the relatively short run give rise to a scholarly publication focusing on Wittgenstein's work on mathematics, the present appendices contain materials that emerged from the same research activity as the above, but that are not necessarily directly about Wittgenstein's work 397 and are not intended to be included in that publication, or in some cases, even published at all. As these lines of thought have not been written out with a clear picture of their destination in mind, they will be even more programmatic, even more the result of cannibalism, from an even wider set of (my own) source texts, an even rougher draft.
I sometimes adopt a tone and phrasing that came to me naturally at the moment I first wrote out my notes, but that I would not necessarily maintain should I rework this material for publication. Notes on the pragmatics of formal systems (A) Formalism as desemantization/depragmatization (A1) spectrum of formalism Formalism can be viewed as a spectrum (pace Dutilh Novaes) from unregulated, everyday use of natural language, over merely regulating the use of key words by means of a definition (what Dutilh Novaes would call a regimented language (Dutilh Novaes 2012, p. 58)), over a semi-formal axiomatized system like Euclid's, through a fully symbolic formalism like Principia Mathematica, to a system implemented on a computer. But even the computer exists in a physical context that determines its workings but is outside the formalism, and for the computer to embody a formalism it needs to be used as a formalism, i.e. in the meaning-giving context of an actual mathematical practice. LW argues that it would not be correct to equate what happens in the computer with math. And not because of the 397 Some of the lines of thought that I develop here do originate in my reading of LW's work, but I am no longer focusing on the way they function within the context of LW's work and in their historical context, but on how they still may interact with present-day (2022) Philosophy of Mathematical Practice and Philosophy of Mathematics at large, and how they contribute to my own long-term research project on the concept of practice. Note that I do not necessarily share LW's agenda(s) either: e.g., the everydayness theme and the prose-calculus distinction are crucial to, and deep-rooted in, LW's thought, whereas I am very critical of these aspects of LW's heritage. Other lines of thought developed here have no direct relation to LW's work at all. a new 'semantics': they may not continue to stand for the same things they used to stand for, but they will by virtue of the fact that they are manipulated, used, and talked about ipso facto acquire object-status and become meaningful in their own way within the practice they occur in. Similarly, depragmatization is ipso facto repragmatization: rather than having lost their function within a practice, they have acquired a different function than the one they had in the practice they originally belonged in. For instance: rather than being simply used as a straightforward tool, they have become an object of study. However, the fact that the tool acquires a new function does not mean that it is no longer a tool. The function of a formal system in mathematics can be compared to the function of an experimental set-up in physics or chemistry. In both cases, we have constructed objects that serve the specific purpose of being 'observed' within a highly specific practical context. These are perhaps complex tools, but still tools; and tools are objects. 399 Thus, depragmatization is an inherently problematic concept: in actual practice, all apparently depragmatized activities (say: ritual behaviors) do have clear pragmatic functions, constitute clearly recognizable action types, are embedded in a Form of Life. What makes them qualify as special, must be a value judgement in terms of normality and/or everydayness, whether negatively as nonsense, or positively as sacred. Which makes sense. 
In other words, "resemantization" (in a more general sense than the very narrow one used by Dutilh Novaes) and "repragmatization" may be misleading terms: desemantization is a sensible practice and at no point are symbols ever 'out there on their own'. (A5) Dutilh Novaes on the functions and origins of desemantization Perhaps the most important idea that is developed in Dutilh Novaes' Chapter 2 is that formal languages are a technology. p. 61: The basic idea is that formal languages can be fruitfully conceived of as a technology. Of course, this is not very informative unless we can provide a more precise meaning to the rather vague term 'technology'. As a first approximation, a technology can be described as a specific method, material, or device used to solve practical problems. Formal languages as such are not a method by themselves, but they are devices that allow for the implementation of certain methods. More precisely, formal languages are a cognitive technology p. 64: Formalism starts with the idea that form can be separated from meaning. I have argued elsewhere (Hoekstra and Scheppers 2003) that the precise moment in history when this happened, is staged in Plato's Sophista 261c-262e: here, the Stranger points out that words come in different kinds and that some fit and some don't, and that the ones who fit mean something, and the others don't. All of this is news to Theaetetus, the Stranger's interlocutor. Let's also note that this revolution also deeply impacted the issues surrounding the notion of truth: • In the pre-formalist situation, the main problem was: how is falsehood possible? • After the formalist revolution, the problem became: how is truth possible? So, initially, the sui generis relation between form and thing was simply a given and it was understood that -by default-speech 'spoke what is' (the direct object of verbs meaning 'to say' referred to -what we would call-reality. In this context, it is not so hard to understand that falsehood was a problem: after all, phenomenologically speaking, lies in the end do the same thing as truthful statements. Soon enough, i.e. as soon as the bottom-up view of meaning became standard (up till now, but things might be changing), the leading problem became the opposite: how is it possible that words actually say something about the world? All the problems related to formalism start when you look at words in isolation and try to describe their meaning bottom-up, starting from their own semantics, rather than originating in their pragmatic function ("point"), "in context". This development culminated in Frege and in LW's TLP, in which truth value and meaning ended up coinciding. 402 There is also a lot to say about the relation between formal reasoning and bad faith: • on the one hand, philosophical logic appears to originate in Plato's attempt to make sense of the alleged bad faith of the sophists (how is it possible to lie?); • on the other hand, the above suggests that formalism generates its own kind of bad faith. As for (2) bad faith generated by the use of formal systems, the main problem is the relation between what happens inside the black box of the formal system on the one hand, and its interpretation in everyday prose on the other: what is the criterion to determine whether the outcome of operations in a formal system, once translated back into normal prose, is acceptable? ("the computer says yes!"). 
The classical formulation of this issue is LW's account 402 I already quoted Ferreirós history of set theory (Ferreirós 2007) elsewhere: Russell's peculiar conflation of syntax and semantics has the effect that his work is dealing with philosophical logic, and even metaphysics, throughout", with in footnote: "The strange features of the famous Tractatus by his student Wittgenstein [1921] are thus more a symptom than a deviation."(footnote 1 on p. 332). The above suggests that this confation of syntax and semantics (and metaphysics, for that matter) is not so much a peculiar feature of the approach of Russell and his pupil, but a logical (?) outcome of the evolution of logic. of what is important about Gödel's incompleteness results (see section 2.3 above; see also appendix 4.3(B2) below). (B) Mathematical formalism in practice / the pragmatic functions of formal systems (B1) formal systems as objects If one is interested in the ontological status of mathematical objects and/or the epistemological status of formal systems, a good starting point would be observing the ways in which such objects are actually manipulated by actual mathematicians in their actual everyday work (cf. ethnomethodological work such as Livingston's (Livingston 1986); (Livingston 2015)). This seems to be the only reasonable way to achieve a viable account of their place in a taxonomy of objects. Martin Heidegger shows in § §15-10 of his seminal Sein und Zeit that 'Zuhandenheit' (i.e. being available in the context of an everyday practice), is the default way for things to 'be'. Cf. Scheppers 2017, Chapter 2, §2. Thus, one can envisage a taxonomy of objects along those lines, perhaps starting from broad categories such as these: -tools: more or less permanent object that serve at executing an activity but aren't its outcome; -ingredient: objects that serve as input to an activity and are transformed by it; -products: objects that are the intended outcome of an activity; -infrastructure: permanent objects that operate at the background of an activity. Ultimately, it remains to be seen to what extent it is even possible to formulate any regularities beyond the individual practice. So, a sensible answer to the question as to how desemanticized systems make sense may be: they are objects with a specific role within that practice. One can compare them with experimental set-ups in chemistry or mechanics: they are constructed and then used in very specific ways; in this case, the activity is 'to observe' (in the specific sense of the particular scientific practice at hand) and the experimental set-up has for a function to be observed (in that specific way). It is important to note that the ontological status of mathematical objects in general is not problematic at all, not in actual mathematical practice, and not from the pragmatic point of view adopted here, either. What is a problem from our point of view, is Platonic talk about mathematical objects as belonging to 'a separate realm', not because the 'separateness' of the 'realm' (this could be an innocent terminological choice for distinguishing the 'worlds' that go with practices), but because of the implication that there is only one, eternal world of mathematical objects that is not only separate from other realms (?) but also from mathematical practice. 
Thus, whenever mathematicians create and start using/studying a new formalism, a new ontology arises: a new set of types of objects that make sense as such. It remains to be seen whether each of these types is more like ingredients, or more like products, or more like tools, or more like infrastructure (furniture), or sui generis in the processes they are involved in; but there is no reason to proclaim their unique nature a priori.

(B2) formalism and conceptuality / meaning are not opposites / alternatives / complementary

My starting point is Ferreirós (Ferreirós 2016), Chapter 4, in which he defends the thesis of the "complementarity of symbolic means and thought". My main argument is that the idea that "conceptual thought" and formal systems are complementary ways to deal with the same subject matter is based on incoherent premises. The problem formulated in terms of conceptual thought vs. formal systems is framed the wrong way: formalism is NOT a different way of making sense of 'the same things', it is NOT an alternative to 'conceptual understanding'. I am not sure whether what Ferreirós calls "conceptual thought" is a viable concept for explaining the meaningfulness of human experience, but this is not a point I want to argue here. What I do want to argue is: whatever the functions of conceptual thought may be in non-formalist contexts (whether non-formalist mathematical practices or any other practice), it plays exactly the same role in formalist mathematical practices. Whatever one wants to make of "conceptual thought", it can't be construed as having a similar function as, let alone as being an alternative to, a formal system. It should be understood that conceptual thought plays exactly the same role in mathematical practices involving formal systems as in mathematical practices that don't, as in doing the dishes, as in watching Bugs Bunny, as in discussing Heidegger. One may object that the specific problem dealt with in this context only arises within the specific context of 20th century PhilMath (and couldn't have arisen elsewhere in that specific way): mathematical formalism (i.e. the manipulation of symbols according to explicit syntactic rules, without reference to models outside the syntax) is part of a very specific set of practices (incl. discourses) that occurred at a precise point in the history of mathematics and as such has its very specific, historically contingent set of features. 403 This historical remark is of course correct, but it does not imply that the tools used in the analysis have to be ad hoc; on the contrary, meaning in contemporary math has exactly the same infinitely complex contingency as any other practice: this specific practice -as a human practice-shares with all other human practices that it makes sense to practitioners and observers in exactly the same ways as any other practice. 404 In this sense, formalist types of mathematics have their own 'thought', as much as non-formalist ('conceptualistic') mathematics. The presence or absence of 'sense' is not what makes them different from each other. So: it would not make sense to try and solve the issue using only ad hoc, math-related resources. 405

(B3) thought experiment: a purely formal language

Imagine that agent Z construes a purely formal language, and then a number of expressions/strings in it. Imagine that Z insists that his strings have no meaning yet.
Then, he plays around with his language and adopts formal criteria which allow him to distinguish strings of expressions that are 'plurp' from expressions that are 'vlarve'. In what circumstances would we admit that Z's plurp & vlarve formalism makes sense? Perhaps it would suffice that Z seems to find this practice worthwhile and Z is an otherwise 'normal' person? Perhaps we would discover that the practice has its own internal coherence, which other people appear to be able to understand if they take the time to learn it... In any case, "to make sense" implies a relation to an encompassing context (a practice), but that relation and that context can be anything, incl. ludic or aesthetic functions, as well as industrial applications. This is exactly the point of LW's thought experiments involving fringe applications (cf. section 1.3 and section 3.2.1(C) above). However, such a purely formal language is ipso facto not mathematics as it is practiced now: what makes math math (as we know it) presupposes a link with counting, adding, subtracting, multiplying, dividing, etc. , as well as a link with elementary geometry, surfaces and circumferences, etc. It is easy to imagine the existence and successful application of these techniques without any axiomatization, even without any propositionalization (so without the notion of 'truth'), but we would not be ready to call whatever practice that doesn't have a strong link with the numerals we use in counting "mathematics". This is basically LW's main argument against the whole foundational endeavor that dominated the PhilMath of his era. (Bonfim 2015)). 407 These observations are relevant here, because they suggest that, perhaps, the impulse to produce repetitive forms is a basic human-reflex and not a cultural artefact that is developed on top of 'normal' language use. 408 An attractive way to model (?) in cognitive terms what happens when people appear to produce poetic effects is in terms of activation: when one item (say: a word) is activated neighboring words are activated as well. And neighboring may refer to several levels: -/screen/ is semantically a neighbor of /computer/, /television/, /cinema/, etc. -/screen/ is phonetically a neighbor of /scene/, /seen/, /mean/, /scream/, etc. -pragmatically, /screen/ is in my present case a neighbor of /wine/. Normal pragmatics inhibits these associations. Poetry turns them loose, and then repragmatizes them: select random words (e.g. to fit a meter or rhyming pattern); then, 407 For ludic' language use in schizophrenics, see Cardella 2018 (Cardella 2018), pp. 93?-95? p.93?: Even when they do not end up idolising words, schizophrenics often deconstruct the constitutive elements of language and string ideas together based on formal associations (rhymes, assonances, etymology, and so forth). In other words, they seem to play with language, slipping among different language levels. For example, asked to define contentment, a patient answers: Contentment? Well, uh, contentment, well the word contentment, hav-ing a book perhaps, perhaps your having a subject, perhaps you have a chapter of reading, but when you come to the word 'men' you wonder if you should be content with men in your life and then you get to the letter T and you wonder if you should be content having tea by yourself or be content with having it with a group and so forth. (Lorenz, 1961: 604) p. 
94?: As observed by Pennisi, the schizophrenic language is the elective ground of a continuous slippage among confused and overlapping metalinguistic levels. Each word can be-come the door of a parallel dimension which, usually inviolable, is made accessible on the ground of analogies only recognisable in different and distant universes of discourse. (1998: 228, author's translation) The following are examples of this ludic use of language: I was looking at you, the sweet boy that does not want sweet soap. You always work Harvard for the hardware store. Neatness of feet don't win feet, but feet win the neatness of men. Run don't run west, but west runs east. I like west strawberries best. Rebels don't shoot rebels at night. (Kraepelin, 1913: 39) [How are you?] To relate to people about new-found...talk about statistical ideology. Er, I find that it' like starting in respect of ideology, ideals change and ideals present ideology and...new entertainments...new, new attainments... (McKenna and Oh, 2005: 43) Does water saunter? As to protein, might one tote-it-in? Is it a hydro-car-boat or a carbohydrate? As to any vitamin, might one invite-them-in? Is the dinner-all there with mineral? Is the bulk cellulose or the hulk swell-you-host? Might the medicine have met-us-some? Is it a platypus or ad-ipose? Is the seasoning pleasing? Is food reserved to be preserved? Is one glad-to-give an additive? (McKenna and Oh, 2005: 49) 408 For the the impulsive ('pulsional') character of the poetic urge, see Julia Kristeva's early work on poetry. optionally, see what they could mean.409 Thus, meaning-deficient behavior in humans typically results from over-structuration, not from under-structuration. It is also interesting to observe that this type of ludic and/or ritual behavior is often conceived as coming from an outside source, not subjected to the will or the intensions of the agent (cf. the notion of inspiration' as an outside, involuntary (depragmatized) source; cf. the Muses or possession by spirits). Oracular practices are interesting, that way. For instance, in the case of the Delphic oracle, a woman in an 'altered of consciousness' produced inspired sounds, that were then transmitted by the priests in the form of (mostly hexametric?) verse. In the case of poetry, one could interpret the exploitation of this reflex as a skillful way to give the pre-semantic impulses free reign and allow the semantics to be an emerging side-effect of the production. A case in point would be the following verses from a song written by Serge In this case, it is very obvious that the semantics are a pure side-effect of the extremely rich formal play; 410 as a matter of fact, the semantics of the text are very tenuous, i.e. as a narrative it doesn't make sense at all, and even as a lyrical expression of emotion the meaning is precarious. However, as an anagrammatic play on both the phonology (the rhythm and the rime scheme), the morphology (trans-), and the lexical meaning of the words (Boeing-bateau; porte-exit), it is very attractive indeed. Similar effects can be studied in the context of oracular language and other types of sacred language use (mantra's etc.). This secondary activity of making sense of sounds that were not primarily meant to make sense also seems to be a human reflex. 
411 This is also reminiscent of what is going on in the case of the ambiguity about mathematicians' relationships with their formalisms: they want them to be 'free', purely formal, but then again, they can't help themselves and remain semi-attached to the initial standard model.

NB: ludic, ritual, poetic practices are obviously also practices, in the full sense of the word: they are perceived as meaningful by practitioners, they require training, there are criteria for success (one can do it right and one can do it wrong), they are social institutions and integral parts of the culture they occur in, etc.

Our provisional conclusion is: math is -in its mainstream interpretations-not interpreted as a poetic practice, but to the extent that it is a purely formal language, it is the result of a human reflex that manifests itself in other ritual/ludic activities as well. Formalism, poetry, and certain (other) types of ritual behavior have in common that they lack or bypass the standard/normal semantic, pragmatic and real-life primacy to focus on the formal features of the behavior itself. 412

(D) Meaning in a formal system: syntax, semantics and pragmatics

Let's focus on a slightly mythological account of what happens if someone discovers that she/he/they can write x/0. Imagine we start from our usual way of calculating with fractions: some of us learn the technique and use it for the purpose of accounting, some of us use it as part of certain engineering practices, etc. Now let's suppose that someone starts to play around with the symbols and stumbles upon the fact that he/she can write 10/0. It is important to note that this would only occur in the context of an already 'ludic'/'poetic' practice: in accounting, one never needs to divide a sum of money among zero beneficiaries; even in a scenario in which there happen to be no beneficiaries when a certain amount needs to be distributed, nobody would think of using a fraction with denominator 0 in order to know what to do.

Perhaps the same person would notice soon enough that this new symbol 10/0 is slightly more problematic than the other ones in the initial series. Perhaps that person would start from the observation that the smaller the denominator in a fraction is, the bigger the result is, and conclude as follows:
<4/2 = 2; 4/1 = 4; 4/0.5 = 8; …; 4/0 = ∞>
<1000/1000 = 1; 1000/100 = 10; 1000/10 = 100; 1000/1 = 1000; 1000/0 = ∞>
etc. 413
And eventually, one would also stumble on this sequence:
<4/4 = 1; 3/3 = 1; …; 0/0 = 1>

Once one has stumbled on this new type of symbol <x/0>, one may want to decide what to do with it, choosing between any of several options, including the following two very general types of options: (1) give it a proper place in your practice, (2) stop using it altogether. Option (1) could go down different paths: one could accept <4/0 = ∞> but not <0/0 = 1>, <0/0 = 1> but not <4/0 = ∞>, or both together. In the latter case, one may stumble on the following:
<10/0 = ∞; 9/0 = ∞; 8/0 = ∞; ...; 1/0 = ∞; 0/0 = ∞>
<10/10 = 1; 9/9 = 1; 3/3 = 1; 2/2 = 1; 1/1 = 1; 0/0 = 1>
And then one has to decide what to do about this. Again, one has different options: one could decide that this is an argument for (a) having to choose between the different, apparently mutually incompatible, ways to use these symbols, but one could also use this as (b) a (rather convincing?) argument for mathematical pluralism.
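To make the fork palpable, here is a minimal sketch of my own (not anything in LW or in the historical record; the function names, and the choice of writing it as a small Python fragment, are merely illustrative): the two 'natural' continuations can be written down as two toy rules, and they simply disagree about 0/0.

# Two hypothetical toy extensions of the ordinary division rules (illustration only).
def divide_by_shrinking_denominator(a, b):
    # continues the pattern <4/2 = 2; 4/1 = 4; 4/0.5 = 8; ...>:
    # as the denominator shrinks, the result grows, so x/0 is declared infinite.
    return float("inf") if b == 0 else a / b

def divide_by_self_identity(a, b):
    # continues the pattern <4/4 = 1; 3/3 = 1; ...>:
    # anything divided by itself is 1, so 0/0 is declared 1.
    if a == b:
        return 1
    return float("inf") if b == 0 else a / b

print(divide_by_shrinking_denominator(0, 0))  # inf
print(divide_by_self_identity(0, 0))          # 1

Nothing inside either rule adjudicates between them (nor against the further option of banning the form x/0 altogether): that decision can only be made in the surrounding practice, which is exactly the point of this section.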
(Martin & Roitman 2014, p.46): In 625 CE the great mathematician Brahmagupta knew quite a bit about negative numbers, and even considered division by 0, claiming that n/0 is infinity, since "In this quantity consisting of that which has zero for its divisor, there is no alteration, though many be inserted or extracted; as no change takes place in the infinite and immutable God, at the period of the destruction or creation of worlds, though numerous orders of beings are absorbed or put forth." But, apparently, mainstream Western (?) math has eventually decided for option (2), i.e. to stop using the form x/0. This option can be implemented in different ways: A. at the level of the syntax: you can modify your syntax by introducing rules that -directly or indirectly-exclude x/0 as not well-formed. B. at the level of the semantics: you can decide that nothing in your semantics corresponds to this type of sign. C. you can decide to do nothing about the problem at the level of your formal system, but solve the problem at the level of what you do with it, for instance: C1. you can decide to simply stop using the sign and explain: "this is useless nonsense", or: "there's nothing interesting there"; 414 C2. you can quarantine the sign by interpreting it (outside the formal system) as "the secret of Hermes Trismegistus of Lourdes", and explain that its use by humans would lead to a cataclysm, or make it taboo in any other way. In other words, one can say plenty of different things when confronted with such a problem: -"there is no solution to this" -"the solution is not a number" -"this proves the existence of spinal numbers" -"this proves mathematical pluralism" -"this proves the existence of the devil" The above argument based on 'imagined' or 'imaginable' cases are good enough to make our philosophical point (cf. section 3.2.1(C) for LW's opinion in this regard), but for those of us who prefer empirical stuff, the history of math provides us with materials that correspond to all of the above. The point is -of course-not that what was ultimately decided was wrong: it can easily be argued that what was chosen turned out to be a fruitful move. The point is: there was no necessity to choose that direction / make that move rather than any other direction/move at any point. Also note that it is perfectly imaginable that different policies coexisted, depending on their usefulness (or lack of usefulness) in different practical contexts. And as pointed out before: the example would fit perfectly within an argument for pluralism. 414 Case C1 may be indistinguishable from case B in actual practice. What is important as a conclusion from this, is the understanding that the links made between a formal system and a model are always a matter of pragmatics, and therefore always motivated by stuff that lies outside both the syntax and the model. What makes the relation between a formal system and a model meaningful is always non-formal and always lies outside that relationship. This relationship is always inherently contingent. This also means that these decision-making procedures are necessarily not math-specific. Notes on Spengler Interestingly, the first chapter of the Volume 1 of Decline of the West, only preceded by a longish introduction, is titled 'On the meaning of the numbers' [Vom Sinn der Zahlen] and deals with mathematics. It has to be said that Spengler did study math at a university level, enough to be licensed to teach math (among other subjects) in secondary schools. 
In this chapter, Spengler mostly compares the history of the mathematics of two 'cultures', the one of Ancient Greece (mostly from Pythagoras to Archimedes), and the one of the West (mostly from Descartes to Riemann), briefly mentioning other cultures (most notably -what he calls-"Arabic culture", which includes the latter part of the Roman empire). Spengler's text shows a number of features that we also encountered in LW's PhilMath: -Unsurprisingly, Spengler systematically links (supposed) features of the mathematics of a culture with (supposed) features of the artistic production of that culture (this is -after all-one of the key characteristics that make up Spengler's reputation), claiming that what defines (e.g.) Ancient Greek conception of number, also defines its sculpture. In the case of Antiquity, this would be a feeling of everything having clear limits [Begrenzung], of everything being accessible to perception, which shows in the concept of number essentially being a matter of physical size [Große] and relations between numbers are mostly a matter of proportions, in the sculptural representation of the human body being the main artistic expression, and political structures being limited to no more than what one can see from the top of the hill on which a typical polis is built. Similarly, the classical period of Western culture is characterized by an increasingly algebraic and functional math, in which the link with the ideas of size and proportion are no longer relevant and numbers are conceived of as relative positions (functions), which coincides with the victory of music (the most abstract of arts) over oil-painting. 415 NB that the picture as a whole may sound appealing at first sight, but that many of the individual claims that make up this picture are very shaky on their own. -Spengler strongly emphasizes non-propositional aspects and is ready to go very far into that direction, very often speaking of a 'feel' or a 'feeling' as constitutive of this or that aspect of a culture. Not only does he insist on the fact that math cannot be reduced to mathematical theory (something LW -and many of us, I guess-would have agreed with), but he goes as far as saying that number [Zahl] -as all other aspects of a culture-is ultimately grounded in a 'primary/primeval feel(ing)' [Urgefühl]: -Like LW, Spengler also points out the plurality and variability of math. In Spengler's case, the emphasis is on the fact (?) that every culture has its own math, but also on the existence of different 'styles' and fashions within math. As a whole, Spengler's prose is a continuous (seemingly endless) stream of interestingsounding polymathic babble, rich in quasi-insightful analogies (no bibliographical references, though) 416 and unverifiable factoids (not necessarily made-up, but hardly ever wellestablished), but completely devoid of scholarly or philosophical skill or even -I guessambition. What is missing is any friction between the discourse and not only empirical fact, but also alternative concepts, arguments or views. (we have seen that LW's polyphonic style/method is in that regard at the opposite end of the spectrum). For the purposes of the present study (on LW's work on math), I was particularly interested in Spengler's views on the relation between theoretical math and everyday application, as well as in the question as to whether he had any Wittgenstein-like criticism of 19 th and 20 th century math in that regard. 
Somewhat disappointingly, the chapter on the numbers, which I am discussing here, shows a decidedly un-wittgensteinian point of view. For Spengler, the number concept of Antiquity is basically the one that emerges from everyday practice, and the math used in everyday practice ("by children and untaught people") remains at that stage (?), even when the 'really-important' math has moved on: Like LW, Spengler observes that modern era Western math, evolves away from this numberas-size concept, but unlike LW, who sees this as a cause of loss of meaningfulness, he believes that the fact that modern/Western mathematicians remained somehow attached to this ancient conception, was not a good thing and, if anything, actually held modern math back from reaching its full potential and its mature expression: So: whatever else LW may have learnt from Spengler (or otherwise has in common with him), the crucial combination of pragmatism and everydayism that informs LW's critical stance towards contemporary math, he does not share with Spengler. that whatever happens in the poet's soul, (s)he is always first and foremost a craftsperson, using a certain set of techniques to achieve a certain product that is supposed to have a certain function within the lives of those who consume the product, not unlike a baker, or -indeed-a mathematician. Perhaps we should conclude that mathematicians and poets resemble each other in that both crafts have a penchant for surrounding themselves with obscurantist quasi-religious mystification. -Why would pure mathematics be religion? What kind of religion would mathematics be: a set of rituals to ensure cosmic continuity, a way towards personal salvation, a nonempirical account of the workings of the universe, ... ? If true, would it be a good thing that mathematics is religion or is that understood as self-evident, and if this is understood as self-evident, why is that the case? -Similarly, I am not convinced at all that infinity is a question that has actually moved many spirits, let alone profoundly (by the way: is infinity a question?),419 but even if 'infinity' in some sense did move the spirits of certain people (perhaps in the context of cosmological or theological speculation?), it is not at all self-evident that the concept of infinity as it is used in mathematical contexts is the same concept at all (cf. LW's critique of -what he considered-illegitimate uses of the concept of infinity, see section 2.2(B)). The question is: what is the function of these platitudes? why would a book about the relation between math and physics need such dubious literary devices anyway? why are these ugly, hollow, sentimental tropes so ubiquitous in the literature about math, much more so than in neighboring fields. The ways in which mathematical ideas are vulgarized and are received in popular culture offer many more illustrations of epistemic kitsch. For instance, exactly the same things that LW blamed Mahler's music for, apply to Hofstadter's Gödel Escher Bach (Hofstadter 1979): brilliant talent, excellent technician, great artistic achievement, but his amalgamation of the mathematical concept of recursion with physical feedback loops and Zen koans is ultimately nonsensical and therefore irresponsible in that it sells cheap analogies as genuine insights. Perhaps more pernicious is the popular idea that Gödel has proven that mathematics is riddled with uncertainty (NB that Gödel thought to have achieved the opposite, i.e. 
that his results show that mathematical truth is independent of any formal mathematical results), often mixed in with an equally inaccurate popular reinterpretation of Heisenberg's famous results, especially when this cocktail of semi-literate half-truths is then deployed for the consistent, it should be taken as mathematically legitimate, and the constructivist, finitist criticisms of Kronecker might be disregarded by most mathematicians for whom consistency alone should be the viable touchstone. It would be interesting to analyze Cantor's discourse on freedom, both as a metamathematical concept and as a political concept, using the tools used to analyze political discourse in general. What is particularly bizarre/ironic in this case, is that the principle of mathematical freedom, which proclaims that the only "viable touchstone" in math is internal consistency, is itself the product of the author's frustrations at the interpersonal and political level. Furthermore, Cantor's major contributions were definitely and demonstrably not a good example of the autonomy of math either: as mentioned above, Cantor himself famously saw his transfinite stuff as a discovery with theological implications and considered its promotion as a mission from his god. Cantor's Nachlass should be a fruitful field to study the exact ways in which he negotiated the technical requirements of the mathematical format and his religious impulses. Thus, Dauben mentions the following: For example, as early as his Grundlagen of 1883, Cantor referred to collections that were too large, he said, to be comprehended as well defined, completed, unified entities. Unfortunately, he wrote obscurely, with references to absolute sets in explicitly theological terms, explaining that "the true infinite or Absolute, which is in God, permits no determination." Later on, Cantor found another way to deal with the paradoxes, which was to simply not include the offending collections in set theory: By the mid-1890s Cantor could no longer be so vague about absolute entities, and was forced to be much more explicit about the paradoxes resulting from consideration of the sets of all transfinite ordinal or cardinal numbers. The solution Cantor then devised for dealing with such mathematical paradoxes was simply to bar them from set theory. Anything that was too large to be comprehended as a well defined, unified, consistent set was declared inconsistent. These were "absolute" collections, and lay beyond the possibility of mathematical determination. This, in essence, is what Cantor communicated first to Hilbert in 1897, and somewhat later to Dedekind in his letters of 1899. This narrative illustrates quite aptly the idea that decisions about mathematical innovation are never internal to the formal mathematical technique itself, and always a matter of the pragmatics, i.e. the way in which the technique is embedded in a practice, and beyond individual practices, large-scale, cultural, social, biological and/or physcial structures LW appears to suggest that the problem with "most mathematicians' slimy concepts" (see section 2.3(E) above) is their lack of understanding of the pragmatics of math, i.e. the fact that they seem to not understand the fact that the meaningfulness of their discourse depends on how it is embedded in their practice (and how their practice is embedded in their culture at large). defense of monism against pluralist threats throughout the 20 th and early 21 st centuries. 
Ever since formalism became an option as a theoretical approach to the Grundlagen-issue, a number of intra-mathematical developments offered challenges to mathematical monism: 425 -the study of non-standard models of standard axiomatic systems ever since Skolem's famous contributions; -the debates concerning the status of the Continuum Hypothesis, the various independence results, the playing around with alterative axiomatic systems that resulted from it, and the success of forcing as a mathematical technique; -reverse mathematics, an approach in which one starts from a theorem and works one's way up to the axioms that are needed to prove it. 426 These developments have in common that they exploit the possibility to tinker with expansions of, and variations on, axiomatic systems beyond the standard systems and their standard models, which ipso facto boils down to a de facto pluralism. The Gödel Program, and perhaps the Grundlagen-debates in general, can be understood as a reaction to -what 424 The term was apparently coined by Yourgrau ((Yourgrau 2006) - (Stachel 2007)), as follows: "Overarching much of his research in philosophy and logic was the 'Gödel program', the investigation of the limits of formal methods in capturing intuitive concepts" (p. 182; see also pp. 114, 127). This definition of the term is sympathetic and makes the endeavor look more innocent than it is. I will use a somewhat more polemical characterization of essentially the same thing: the Gödel Program is the systematic attempt at enforcing mathematical monism (especially Platonism) in the face of the challenges it faced since the beginning of the 20 th century, for instance by drawing philosophical conclusions from -what looks like-the study of formal systems. 425 NB that full-blown formalism would yield straightforward pluralism. 426 One may want to consider the potential of 'perverse reverse math', i.e. an approach to reverse math that actively tries to free itself from monism, by completely dropping the idea of 'completeness' and investigating the functionalities (and breakdown) of deliberately non-standard or incomplete axiomatic systems. NB that this approach would embody Cantor's idea of freedom (as he defined it, not as he intended it) in much more coherent way than is the case in mainstream approaches to the foundations of math. Also note that this would make math the 'study of formal systems in general' in a way that it is definitely not at this moment. 57 Rechnet die Rechenmaschine? Denk Dir, eine Rechenmaschine wäre durch Zufall entstanden; & nun drückt Einer durch Zufall auf ihre Knöpfe (oder ein Tier läuft über sie) & sie rechnet das Produkt 25 × 20. -Ich will sagen: Es ist der Mathematik wesentlich, daß ihre Zeichen auch im Zivil gebraucht werden. 27. 8 . 8 Etwas besser geschlafen. Lebendige Träume. Etwas niedergedrückt; Wetter & Befinden. Die Lösung des Problems, das Du im Leben siehst, ist eine Art zu leben, die das Problemhafte zum Verschwinden bringt. Daß das Leben problematisch ist, heißt, daß Dein Leben nicht in die Form des Lebens paßt. Du mußt dann Dein Leben verändern, & paßt es in die Form, dann verschwindet das Problematische. Kann man von Dem, der eine Regel des Entzifferns anwendet, sagen, er vollziehe mathem. Operationen? Und doch lassen sich seine Transformationen || Umformungen so auffassen. Denn er könnte doch sagen, er berechne, was bei der Entzifferung des Zeichens … nach der und der Regel herauskommen müsse. || des Zeichens … gemäß dem & dem Schlüssel herauskommen müsse. 
Und der Satz: daß die Zeichen … dieser Regel gemäß entziffert … ergeben ist ein mathematischer. Sowie auch der Satz: daß man beim Schachspiel von dieser Stellung zu jener kommen kann.113 Denke Dir die Geometrie des vierdimensionalen Raums zu dem Zweck betrieben, die Lebensbedingungen der Geister kennen zu lernen. Ist sie darum nicht Mathematik? Und kann ich nun sagen sie bestimme Begriffe? 114 Denke Dir unendliche Zahlen in: einem Märchen gebraucht. Die Zwerge haben soviele Goldstücke aufeinander gelegt || getürmt, als es Kardinalzahlen gibt, || -etc. Was in einem Märchen vorkommen kann, muß doch Sinn haben. -115 Denke Dir die Mengenlehre wäre als eine Art Parodie der || auf die Mathematik von einem Satiriker erfunden worden. -Später hätte man dann einen Nutzen || einen vernünftigen Sinn in ihr gesehen & sie in die Mathematik einbezogen. (Denn wenn der eine sie als das Paradies der Mathematiker ansehen kann, warum nicht ein andrer als einen Scherz || Witz?) Die Frage ist: ist sie nun als Scherz nicht auch offenbar Mathematik? -116 Ist dies nicht ein Fall wie der des Stammes, welcher eine rechnerische Technik zum Zweck gewisser Vorhersagungen hat, aber keine Sätze der reinen Mathematik? Die Rechnung die zur Ausführung einer Zeremonie dient. Es werde z.B. nach einer bestimmten Technik aus dem Alter des Vaters & der Mutter & der Anzahl ihrer Kinder die Anzahl der Worte einer Segensformel abgeleitet die auf das Haus der Familie anzuwenden ist. In einem Gesetz wie dem Mosaischen könnte man sich Rechenvorgänge beschrieben denken || solche Rechenvorschriften niedergelegt denken. Und könnte man sich nicht denken, daß das Volk das diese zeremoniellen Rechenvorschriften besitzt im praktischen Leben nie rechnet? Dies wäre zwar ein angewandtes Rechnen, aber es würde nicht dem Zweck einer || der Vorhersage dienen. 117 Die Menschen könnten z.B. Rechnungen zum Zweck einer Art von Wettrennen gebrauchen. Wie Kinder ja wirklich manchmal um die Wette rechnen; nur daß diese Verwendung bei uns keine große || eine ganz untergeordnete Rolle spielt. 126 Es erweitert Einer die Math., gibt neue Definitionen & findet neue Lehrsätze --& in gewisser Beziehung kann man sagen, er wisse nicht, was er tut. -Er hat eine vage Vorstellung, etwas entdeckt zu haben wie einen Raum (wobei er an ein || sein Zimmer denkt), ein Reich erschlossen zu haben, & würde, darüber gefragt, viel Unsinn reden. 131 Denke Dir unendliche Zahlen in: einem Märchen gebraucht. Die Zwerge haben soviele Goldstücke aufeinander gelegt || getürmt, als es Kardinalzahlen gibt, || -etc. Was in einem Märchen vorkommen kann, muß doch Sinn haben. -132 Denke Dir die Mengenlehre wäre als eine Art Parodie der || auf die Mathematik von einem Satiriker erfunden worden. -Später hätte man dann einen Nutzen || einen vernünftigen Sinn in ihr gesehen & sie in die Mathematik einbezogen. (Denn wenn der eine sie als das Paradies der Mathematiker ansehen kann, warum nicht ein andrer als einen Scherz || Witz?) You might think that Aesthetics is a science telling us what's beautiful --almost too ridiculous for words. I suppose it ought to include also what coffee tastes well. When Rhees asked Wittgenstein about his 'theory' of deterioration (referring to one of Wittgenstein's examples, which was the deterioration of the German musical tradition), Wittgenstein reacted with horror to the word: 'Do you think I have a theory? Do you think I'm saying what deterioration is? What I do is describe different things called deterioration.' 
criticism appears to have been primarily directed against various avatars of fakeness (in the very general sense defined above): classical music should sound like classical music and what sounds like classical music should also be classical music; a bed should look like a bed and what looks like a bed should be a bed; a decent person's behavior should show who (s)he is. ( A) ethical aspects of LW's philosophyApart from the anecdotal stories discussed above, there is evidence for an ethical/aesthetical aspect to LW's philosophy itself.The most explicit indication is perhaps LW's famous letter to Ludwig von Ficker, probably written at the end of October or the beginning of November 1919, in which LW states:Und da ist es Ihnen vielleicht eine Hilfe, wenn ich Ihnen ein paar Worte über mein Buch schreibe: Von seiner Lektüre werden Sie nämlich -wie ich bestimmt glaube -nicht allzuviel haben. Denn Sie werden es nicht verstehen; der Stoff wird Ihnen ganz fremd erscheinen. In Wirklichkeit ist er Ihnen nicht fremd, denn der Sinn des Buches ist ein Ethischer. Ich wollte einmal in das Vorwort einen Satz geben, der nun tatsächlich nicht darin steht, den ich Ihnen aber jetzt schreibe, weil er Ihnen vielleicht ein Schlüssel sein wird: Ich wollte nämlich schreiben, mein Werk bestehe aus zwei Teilen: aus dem, der hier vorliegt, und aus alledem, was ich nicht geschrieben habe. Und gerade dieser zweite Teil ist der Wichtige. Es wird nämlich das Ethische durch mein Buch gleichsam von Innen her begrenzt; und ich bin überzeugt, daß es, streng, nur so zu begrenzen ist. Kurz, ich glaube: Alles das, was viele heute schwefeln, habe ich in meinem Buch festgelegt, indem ich darüber schweige. Und darum wird das Buch, wenn ich mich nicht sehr irre, vieles sagen, was Sie selbst sagen wollen, aber Sie werden vielleicht nicht sehen, daß es darin gesagt ist.Ich würde Ihnen nun empfehlen das Vorwort und den Schluß zu lesen, da diese den Sinn am Unmittelbarsten zum Ausdruck bringen. - Now instead of saying "Ethics is the enquiry into what is good" I could have said Ethics is the enquiry into what is valuable, or, into what is really important, or I could have said Ethics is the enquiry into the meaning of life, or into what makes life worth living, or into the right way of living. I believe if you look at all these phrases you will get a rough idea as to what it is that Ethics is concerned with. (Wittgenstein 1965), p. 4 97. Das Denken ist mit einem Nimbus umgeben. --Sein Wesen, die Logik, stellt eine Ordnung dar, und zwar die Ordnung a priori der Welt, d. i. die Ordnung der Möglichkeiten, die Welt und Denken gemeinsam sein muß. Diese Ordnung aber, scheint es, muß höchst einfach sein. Sie ist vor aller Erfahrung; muß sich durch die ganze Erfahrung hindurchziehen; ihr selbst darf keine erfahrungsmäßige Trübe oder Unsicherheit anhaften. --Sie muß vielmehr vom reinsten Kristall sein. Dieser Kristall aber erscheint nicht als eine Abstraktion; sondern als etwas Konkretes, ja als das Konkreteste, gleichsam Härteste. (Log. Phil. Abh. 5.5563.)Wir sind in der Täuschung, das Besondere, Tiefe, das uns Wesentliche unserer Untersuchung liege darin, daß sie das unvergleichliche Wesen der Sprache zu begreifen trachtet. D. i., die Ordnung, die zwischen den Begriffen des Satzes, Wortes, Schließens, der Wahrheit, der Erfahrung, u. s. w. besteht. Diese Ordnung ist eine Über-Ordnung zwischen --sozusagen --Über-Begriffen. 
Während doch die Worte which I personally cannot help respecting deeply and I would not for my life ridicule it" ((Wittgenstein 1965), p. 12). pathos, illusion, fake depth, sensationalism (PhU §110-112) 110. "Die Sprache (oder das Denken) ist etwas Einzigartiges" --das erweist sich als ein Aberglaube (nicht Irrtum!) hervorgerufen selbst durch grammatische Täuschungen. Und auf diese Täuschungen, auf die Probleme, fällt nun das Pathos zurück. 111. Die Probleme, die durch ein Mißdeuten unserer Sprachformen entstehen, haben den Charakter der Tiefe. Es sind tiefe Beunruhigungen; sie wurzeln so tief in uns, wie die Formen unserer Sprache, und ihre Bedeutung ist so groß, wie die Wichtigkeit unserer Sprache. --Fragen wir uns: Warum empfinden wir einen grammatischen Witz als tief ? (Und das ist ja die philosophische Tiefe.) 112. Ein Gleichnis, das in die Formen unserer Sprache aufgenommen ist, bewirkt einen falschen Schein; which LW then illustrates in § §112-113 by means of the description of someone frantically pondering a 'deep' philosophical question, back and forth. This use of the term 'pathos' reminds us what Sass had to say about theatricality (cf. 2.0.1(C) here above), and we will see below that similar terms, related to ostentatiousness and sensationalism, are applied in other technical contexts. Luftgebäude (PhU §118) 118. Woher nimmt die Betrachtung ihre Wichtigkeit, da sie doch nur alles Interessante, d. h. alles Große und Wichtige, zu zerstören scheint? (Gleichsam alle Bauwerke; indem sie nur Steinbrocken und Schutt übrig läßt.) Aber es sind nur Luftgebäude, die wir zerstören, und wir legen den Grund der Sprache frei, auf dem sie standen. 119. Die Ergebnisse der Philosophie sind die Entdeckung irgend eines schlichten Unsinns und Beulen, die sich der Verstand beim Anrennen an die Grenze der Sprache geholt hat. Sie, die Beulen, lassen uns den Wert jener Entdeckung erkennen. 120. Wenn ich über Sprache (Wort, Satz, etc.) rede, muß ich die Sprache des Alltags reden. Ist diese Sprache etwa zu grob, materiell, für das, was wir sagen wollen? Und wie wird denn eine andere gebildet? --Und wie merkwürdig, daß wir dann mit der unsern überhaupt etwas anfangen können! [...] 182 Again, LW opposes (1) the special and the trivial, and (2) the solid and real vs. mere appearance.contradictions (PhU §125)Es ist nicht Sache der Philosophie, den Widerspruch durch eine mathematische, logisch-mathematische, Entdeckung zu lösen. Sondern den Zustand der Mathematik, der uns beunruhigt, den Zustand vor der Lösung des Widerspruchs, übersehbar zu machen. (Und damit geht man nicht etwa einer Schwierigkeit aus dem Wege.) Die fundamentale Tatsache ist hier: daß wir Regeln, eine Technik, für ein Spiel festlegen, und daß es dann, wenn wir den Regeln folgen, nicht so geht, wie wir angenommen hatten. Daß wir uns also gleichsam in unsern eigenen Regeln verfangen. Dieses Verfangen in unsern Regeln ist, was wir verstehen, d. h. übersehen wollen. Es wirft ein Licht auf unsern Begriff des Meinens. Denn es kommt also in jenen Fällen anders, als wir es gemeint, vorausgesehen, hatten. Wir sagen eben, wenn, z. B., der Widerspruch auftritt: "So hab' ich's nicht gemeint." Die bürgerliche Stellung des Widerspruchs, oder seine Stellung in der bürgerlichen Welt: das ist das philosophische Problem. 129. Die für uns wichtigsten Aspekte der Dinge sind durch ihre Einfachheit und Alltäglichkeit verborgen.(Man kann es nicht bemerken, a weil man es immer vor Augen hat.) 
Die eigentlichen Grundlagen seiner Forschung fallen dem Menschen gar nicht auf. Es sei denn, daß ihm dies einmal aufgefallen ist. --Und das heißt: das, was, einmal gesehen, das Auffallendste und Stärkste ist, fällt uns nicht auf.185 In this excerpt, LW points out that the -for his kind of philosophy-most important aspects are 'hidden' because they are simple and everyday. I included this excerpt because it is a very clear statement of the core difference between LW's approach and most contemporary (and present-day) philosophy of mathematics: LW is interested in what is presupposed by what most mathematicians and philosopher of mathematics would simply accept as 'given'.(C) LW's vocabulary and rhetoric of fakeness (summary overview) (D) epistemic fakeness: epistemic pretense, epistemic bad faith, epistemic bad taste Geist war mir sehr zuwider. Als ich vor 15 Monaten nach Cambridge kam da glaubte ich, ich würde nicht mit ihm verkehren können denn ich hatte ihn von unserer letzten Begegnung vor etwa 4 Jahren bei Keynes in Sussex in so schlechter Erinnerung. Keynes dem ich dies sagte sagte mir aber er glaube ich sollte sehr wohl mit ihm reden können & nicht bloß über Logik. Und ich fand Keynes' Meinung bestätigt. Denn ich konnte mich über manches ganz gut mit Ramsey verständigen. Aber auf die Dauer ging es doch nicht wirklich gut. Die Unfähigkeit Ramseys zu wirklichem Enthusiasmus oder zu wirklicher Verehrung was das Selbe ist widerte mich endlich mehr & mehr an. Andererseits hatte ich eine gewisse Scheu vor Ramsey. Er war ein sehr rascher & geschickter Kritiker wenn man ihm Ideen vorlegte. Aber seine Kritik half nicht weiter sondern hielt auf & ernüchterte. Der kurze Zeitraum wie Schopenhauer ihn nennt zwischen den beiden langen in denen eine Wahrheit den Menschen, zuerst paradox, & dann trivial erscheint war bei Ramsey zu einem Punkt geworden. Und so plagte man sich zuerst lange vergebens ihm etwas klar zu machen bis er plötzlich die Achsel darüber zuckte & sagte es sei ja selbstverständlich. Dabei war er aber nicht unaufrichtig. Er hatte einen häßlichen Geist. Aber keine häßliche Seele. Er genoß Musik wirklich & mit Verständnis. Und man sah ihm an welche Wirkung sie auf ihn ausübte. Von dem letzten Satz eines der letzten Beethovenschen Quartette den er mehr als vielleicht alles andere liebte sagte er mir er fühle dabei die Himmel seien offen. Und das bedeutete etwas von ihm. || wenn er es sagte. Es ist hier sehr nützlich sich vorzustellen, daß das Diagonalverfahren zur Erzeugung einer reellen Zahl längst vor der Erfindung der Mengenlehre bekannt & auch den Schulkindern geläufig gewesen wäre, wie es ja sehr wohl hätte sein können. So wird nämlich der Aspekt der Entdeckung Cantors geändert. Diese Entdeckung hätte sehr wohl bloß in der Interpretation || neuen Auffassung dieser altbekannten, elementaren Rechnung liegen können. Die Rechnung || Rechnungsart selbst ist ja nützlich. Die Aufgabe wäre etwa: Schreibe eine Dezimalzahl an die verschieden ist von den || allen Zahlen: 0˙1246798 0˙3469876 0˙0127649 0˙3426794 Man denke sich eine lange Reihe. Das Kind denkt sich: Wie soll ich das machen ich müßte ja auf alle die Zahlen zugleich schauen um zu vermeiden daß ich nicht doch eine von ihnen anschreibe || damit ich nicht doch irgend eine von ihnen aufschreibe. Die Methode sagt nun: durchaus nicht; ändere die erste Stelle der ersten Zahl, die zweite der zweiten, etc. etc. & Du bist sicher eine Zahl hingeschrieben zu haben, die mit keiner der gegebenen übereinstimmt. 
Die Zahl die man so erhält könnte immer die Diagonalzahl genannt werden. is particularly rich in interesting contrasts, and also shows how the notion of fakeness can take on an almost 'technical' function in what remains a discussion about what can be inferred from Cantor's diagonal argument (as it is usually understood). Bescheiden heißt || lautet der Satz: "Wenn man etwas eine Reihe reeller Zahlen nennt, so heißt die Entwicklung des Diagonalverfahrens auch eine 'reelle Zahl' & zwar eine die 'von allen Gliedern der Reihe verschieden' sei || ist. || & zwar sagt man, sie sei von allen Gliedern der Reihe verschieden. ( B) against vertiginous imagery: LW, Ms-121,60r-64r, d.d. 19381225 [// BGM II, § §40 ff.] " Man kann die Brüche nicht ihrer Größe nach ordnen. -Das klingt vor allem sehr || höchst interessant & merkwürdig. Es klingt interessant in ganz anderem Sinne || anderer Weise, als, etwa, ein Satz aus der Differentialrechnung. Der Unterschied liegt, glaube ich, darin, daß ein solcher sich leicht mit einer Anwendung auf Physikalisches assoziiert, während jener Satz ganz & gar || einzig & allein der Mathematik anzugehören gleichsam die Physik || die Naturgeschichte der mathematischen 206 Die Ausdrucksweise: m = 2n ordne eine Klasse einer ihrer echten Subklassen || Teilklassen zu, kleidet einen einfachen || trivialen Sinn durch Heranziehung einer irreführenden Analogie in eine paradoxe Form. (Und statt sich dieser paradoxen Form als etwas Lächerlichem zu schämen, brüstet man sich eines Sieges über alte Vorurteile des Verstandes.) Es ist genau so als stieße man die Regeln des Schach um & sagte, es habe sich gezeigt, daß man Schach auch ganz anders spielen könne. So verwechselt man erst das Wort "Zahl" mit einem Begriffswort wie "Apfel", spricht dann von einer "Anzahl der Anzahlen" & sieht nicht daß man in diesem Ausdruck nicht beidemal das gleiche Wort "Anzahl" gebrauchen sollte; & endlich hält man es für eine Entdeckung daß die Anzahl der geraden Zahlen die gleiche ist wie die der geraden & ungeraden. 17. 12 . 12 Den Fehler in einem schiefen Räsonnement suchen & Fingerhut-Verstecken. Man könnte fragen: Was könnte ein Kind von 10 Jahren am Beweis des Dedekindschen Satzes nicht verstehen? -Ist denn dieser Beweis nicht viel einfacher, als alle die Rechnungen die das Kind beherrschen muß? -Und wenn nun jemand sagte: den tieferen Inhalt des Satzes kann es nicht verstehen -dann frage ich: wie kommt dieses Gesetz || dieser Satz zu einem tiefen Inhalt? || ! -Es ist ein Sprachspiel das Ähnlichkeit mit dem Spiel des Daumenfangens hat. (Dies || Dieses wird so gespielt: Man hält den Daumen der rechten Hand mit der linken, so daß seine Spitze noch oben aus der linken hervorschaut. Nun entzieht man die rechte Hand rasch dem Griff der linken Hand & trachtet die rechte Daumenspitze noch mit der rechten Hand zu fangen, ehe sie sich zurückzieht.) 244 Interesse erhält jener || Widerspruch nur dadurch, daß er Menschen quält || gequält hat; & dadurch || so zeigt, wie die Sprache zu quälenden Problemen führen kann. || was für Dinge uns quälen können. || wie aus der Sprache quälende Probleme wachsen können. || wie aus der Sprache quälende Probleme wachsen können; & was für Dinge uns quälen können. || zeigt, was Menschen quälen kann. established and understood before LW's arguments even can play a role in a wider PhilMath context. Ms-125,64r: Wenn Rechenmaschinen in der Natur vorkämen & von den Menschen gefunden & benützt würden, so hätten wir eine Arithmetik ohne Sätze & ohne Beweise. [...] 
Ms-125,66r-Ms-125,67r: Unsre Rechenmaschine in der wir die Operationen verfolgen können -& eine Rechenmaschine, die auf einem besonderen Papier, worauf wir die Angabe schreiben, durch einen chemischen Vorgang das richtige Ergebnis erscheinen läßt. -So könnte man den Kubus einer Zahl finden indem man einen Eiswürfel von der betreffenden Kantenlänge abwägt. Und man könnte natürlich unser Rechnen auch als so einen Vorgang betrachten. In diesem Fall wäre die Rechnung ein Nebenprodukt bei der Erzeugung des Resultats. (Wie das Schnurren der || einer Maschine.) 262Das Unphilosophische an Gödels Aufsatz besteht || liegt darin, daß er das Verhältnis der Mathematik & ihrer Anwendung nicht sieht. Er hat hier die schleimigen Begriffe der meisten Mathematiker. Denke Dir den Fall, in welchem Menschen zwar immer gleiche Endresultate bei einer Rechnung erzeugten aber, sozusagen, unerforschliche Wege zu diesen gingen, d.h. Rechnungen hinschrieben, die wir nicht nachrechnen können & die sie selbst nicht erklären könnten. (Wie es bei schwierigen Problemen oft geschieht.) (Kunstrechnen) 260 Warum sollte man den Russellschen Widerspruch nicht als etwas Überpropositionales auffassen, etwas das über den Sätzen thront & nach beiden Seiten (wie ein Januskopf) || zugleich schaut. || nach beiden Richtungen schaut. N.B.: der Satz F(F) -in welchem F(ξ) = ~ξ(ξ) -enthält keine Variablen & könnte also als etwas Überlogisches, als etwas Unangreifbares, dessen Verneinung es nur wieder selber aussagt, gelten || dastehen. Ja könnte man nicht sogar die Logik mit diesem Widerspruch auffangen? Und von ihm gleichsam zu den Sätzen niedersteigen. Der sich selbst widersprechende Satz stünde wie ein Denkmal (mit einem Januskopf) über den Sätzen der Logik. 261 These paragraphs contain a lot of interesting contents, which -however-are not immediately relevant to the subject matter that concerns us in this study: 10.3.44. Man könnte sagen: Experiment -Rechnung sind Pole, zwischen welchen sich menschliche Handlungen bewegen. Wir konditionieren einen Menschen in dieser & dieser Weise; wirken dann auf ihn durch eine Frage ein; & erhalten ein Zahlzeichen. Dieses || eine Zahl. Diese verwenden wir weiter zu unsern Zwecken & es erweist sich als praktisch. Das ist das Rechnen. -Noch nicht! Dies könnte ein sehr zweckmäßiger Vorgang sein -muß aber nicht sein, was wir 'rechnen' nennen. Wie man sich denken könnte, daß zu Zwecken denen heute unsere Sprache dient Laute ausgestoßen würden, die doch keine Sprache bildeten.Zum Rechnen gehört, daß alle die richtig rechnen dasselbe Rechnungsbild produzieren || erzeugen. Und 'richtig rechnen' heißt nicht: bei klarem Verstande, oder ungestört rechnen, sondern so rechnen. 264 Jeder math. Beweis stellt das math. Regelgebäude || Gebäude auf einen || gibt dem mathematischen Regelgebäude || Gebäude einen neuen Fuß. [Ich dachte an die Füße eines Tisches] 265 Ich habe mich gefragt: Ist Mathematik mit rein phantastischer Anwendung nicht auch wirkliche Mathematik? -Aber es frägt sich: Nennen wir es 'Mathematik' nicht etwa nur darum weil es hier Übergänge, Brücken gibt von der phantastischen zur nichtphantastischen Anwendung? D.h.: würden wir sagen, Leute besäßen eine Mathematik, die das Rechnen, Operieren mit Zeichen, bloß zu okkulten Zwecken benützten? 266 Aber ist es dann doch nicht unrichtig zu sagen: das der Mathematik Wesentliche sei, daß sie Begriffe bilde? -Denn die Mathematik ist doch ein anthropologisches Phänomen. 
Wir können es also als das Wesentliche in einem großen Teil || Gebiet der Mathematik (dessen was 'Mathematik' genannt wird) erkennen & doch sagen, es spiele keine Rolle in anderen Gebieten. Diese Einsicht allein wird freilich nicht ohne 277 Aber Du kannst doch einen Widerspruch nicht gelten lassen! -Warum nicht? Wir gebrauchen ihn ja manchmal in unsrer Rede, freilich selten -aber man könnte sich eine Technik || Sprachtechnik denken in der er ein ständiges Implement ist. Man könnte z.B. von einem Objekt in Bewegung sagen es existiere an diesem Ort & existiere nicht an ihm. || es existiere & existiere nicht an diesem Ort; Veränderung könnte durch den Widerspruch ausgedrückt werden. 278 Nimm ein Thema wie das Haydnsche (Choräle S.A.) nimm den Teil einer der Brahmsschen Variationen, die dem ersten Teil des Themas entsprechen & stell die Aufgabe den zweiten Teil der Variation im Stil ihres ersten Teiles zu konstruieren. Das ist ein Problem sehr ähnlich einem mathematischen. Ist die Lösung gefunden, etwa wie sie Brahms gibt so zweifelt man nicht || so ist es uns klar daß dies die Lösung sei || ist. || so zweifelt man nicht -dies ist die Lösung. Mit diesem Weg sind wir einverstanden. Und doch ist es hier klar, daß es leicht verschiedene Wege geben kann mit deren jedem wir uns einverstanden erklären können, deren jeden wir konsequent nennen können. Ich könnte mir denken, daß Einer sagte || meinte die Namen 'Fortnum' & 'Mason' paßten zusammen. 279 'Wir machen lauter legitime -d.h. in den Regeln erlaubte -Schritte, & auf einmal kommt ein Widerspruch heraus.⇒ Also ist das Regelverzeichnis, wie es ist, nichts nutz, denn der Widerspruch wirft das ganze Spiel um.' 280 Warum läßt Du ihn es umwerfen? link here between anything in the formal system and what you decide to do with it (in Appendix 4.1(D) I will explore this aspect of the .primacy of the pragmatics of formalism a little further). The dialogue continues: A: "But what I want is that we can continue to mechanically make inferences without ever reaching contradictory results". 281 B: "Well, what kind of foreseeability [Voraussicht] do you want?". 282 | Die philosophische Betrachtung der Mathematik hat eine andere Pointe als die mathematische von math. Sätzen & Beweisen. | loss of meaning caused by disconnection from everydayness; (B) fakeness, pretense and fiction in mathematical discourse; (C) demarcation and heterogeneity; (D) set-theoretical discourse as a sign of sick times; (E) paradoxes, contradictions and the function of axiomatic systems; -in section 2.4.3, I discuss LW's critical agenda as it emerges from our analyses, identifying the following lines of attack: (A) anti-foundationalism; (B) anti-monism; (C) antisensationalism; (D) anti-exceptionalism. This should hit home with all of us, qua philosophers: to what extent has the consensus about -say-monism been manufactured? In other words: whether one agrees or not with LW's stance or the attitude underlying his stance, the following question really needs an answer, especially in the light of the results of the last couple of decades of research in PhilMathPract and the history and ethnography of mathematics: why would one want math to be unique and unitary, complete, objective, atemporal, ... etc.? (B) the identity of PhilMath ( C) on math education I would also like to briefly focus on the importance of math education with respect to the potential agenda of PhilMath and PhilMathPract in a Wittgensteinian vein.364 Let us come back to the following paragraph(Ms-113,117r-v d.d. 
19320517 = "Big Typescript" (1933) §644, already quoted in section 3.2.2(A) above), in which LW mentions the fact that what we experience as 'given' heavily depends on our training, and that in the case of mathematics, the way we are taught at school, we may have been trained to actively shun the philosophically most interesting questions about mathematics: Den Mathematiker muß es bei meinen mathematischen Ausführungen grausen, denn seine Schulung hat ihn immer davon abgelenkt sich Gedanken & Zweifeln, wie ich sie aufrolle, hinzugeben. Er hat sie als etwas verächtliches ansehen lernen & hat um eine Analogie aus der Psychoanalyse (dieser Absatz erinnert an Freud) zu gebrauchen einen Ekel vor diesen Dingen erhalten, wie vor etwas Infantilem. D.h., ich rolle alle jene Probleme auf, die etwa ein Knabe || Kind beim Lernen der Arithmetik, etc. als Schwierigkeiten empfindet & die der Unterricht unterdrückt ohne sie zu lösen. Ich sage also zu diesen unterdrückten Zweifeln: ihr habt ganz recht, fragt nur, & verlangt nach Aufklärung! Gainsbourg: Aucun Boeing sur mon transit Aucun bateau sous mon transat Je cherche en vain la porte exacte Je cherche en vain le mot exit Je chante pour les transistors Ce récit de l'étrange histoire De tes anamours transitoires De Belle au Bois Dormant qui dort Die geschriebene Mathematik reprasentiert so wenig wie die in theoretischen Werken niedergelegte Philosophie den ganzen Besitz dessen, was im Schöße einer Kultur an mathematischem und philosophischem Blick und Denken vorhanden war. Es gibt noch ganz andere Wege, das den Zahlen zugrunde liegende Urgefühl zu versinnlichen. pp. 101-102: Die Funktion ist nichts weniger als die Erweiterung irgend eines vorhandenen Zahlbegriffs; sie ist dessen völlige Überwindung. Nicht nur die euklidische und damit auch die "allgemein menschliche", auf alltäglicher Erfahrung beruhende Geometrie der Kinder und Ungelehrten, sondern auch die archimedische Sphäre des elementaren Rechnens, die Arithmetik, hört damit auf, für die wirklichbedeutende Mathematik Westeuropas Wert zu haben. Es gibt nur noch eine abstrakte Analysis. Für den antiken Menschen waren Geometrie und Arithmetik in sich geschlossene und vollständige Wissenschaften von höchstem Range, beide anschaulich, beide mit Größen zeichnerisch oder rechnerisch verfahrend; für uns sind sie nur noch praktische Hilfsmittel des alltäglichen Lebens. pp. 102-103: Jede der tiefsinnigen Schopfungen, welche von der Renaissance an rasch aufeinander folgen, [...] sind ebensoviel Siege uber das popular-sinnliche Zahlengefuḧl in uns, das aus dem Geiste der neuen Mathematik heraus, die ein neues Weltgefuḧl zu verwirklichen hatte, uberwunden werden mußte. Es gab bisher keine zweite Kul-tur, welche den Leistungen einer andern, langst erloschenen, so viel Verehrung entgegentrug und wissenschaftlich so viel Einfluß ge-stattete, wie die abendlandische gerade der antiken. Es dauerte lange, bevor wir den Mut fanden, unser eignes Denken zu denken. Auf dem Grunde lag der bestandige Wunsch, es der Antike gleichzutun. Trotzdem war jeder Schritt in diesem Sinne eine tatsachliche Ent-fernung von dem erstrebten Ideal. Deshalb ist die Geschichte des abendlandischen Wissens die einer fortschreitenden Emanzipation vom antiken Denken, einer Befreiung, die nicht einmal gewollt, die in den Tiefen des Unbewußten erzwungen wurde. 
So gestaltete sich die Entwicklung der neuen Mathematik zu einem heimlichen, langen, endlich siegreichen Kampf gegen den Großenbegriß ( B3) Bad faith 3: The Gödel Program On the basis of the recurrent arguments in LW's work against the pervasive and unexamined monism (my term) in mathematical discourse, I would like to suggest that it would be interesting to investigate to what extent the above analysis can be extended to -what I would like to callthe Gödel Program at large, 424 by which I mean the persistent and systematic größte Mißtrauen entgegenbrächte ohne ihre Sprache zu verstehen, aber das Verschwinden der Künste rechtfertigt kein absprechendes Urteil über eine Menschheit. Denn echte & starke Naturen wenden sich eben in dieser Zeit von dem Gebiet der Künste ab & anderen Dingen zu & der Wert des Einzelnen kommt irgendwie zum Ausdruck. Freilich nicht wie zur Zeit einer großen Kultur. Die Kultur ist gleichsam eine große Organisation die jedem der zu ihr gehört seinen Platz anweist an dem er im Geist des Ganzen arbeiten kann und seine Kraft kann mit gewissem Recht an seinem Erfolg im Sinne des Ganzen gemessen Ist es mir so klar daß das Verschwinden einer Kultur nicht das Verschwinden menschlichen Wertes bedeutet sondern bloß gewisser Ausdrucksmittel dieses Werts so bleibt dennoch die Tatsache bestehen daß ich dem Strom der europäischen Zivilisation ohne Sympathie zusehe, ohne Verständnis für die Ziele wenn sie welche hat. Ich schreibe also eigentlich für Freunde welche in Winkeln der Welt verstreut sind.Ob ich von dem typischen westlichen Wissenschaftler verstanden oder geschätzt werde ist mir gleichgültig weil er den Geist in dem ich schreibe doch nicht versteht. Unsere Zivilisation ist durch das Wort Fortschritt charakterisiert. Der Fortschritt ist ihre Form nicht eine ihrer Eigenschaften daß sie fortschreitet. Sie ist typisch aufbauend. Ihre ⋎ Tätigkeit ist es ein immer komplizierteres Gebilde zu konstruieren. Und auch die Klarheit dient doch nur wieder diesem Zweck & ist nicht Selbstzweck. Mir dagegen ist die Klarheit die Durchsichtigkeit Selbstzweck. : Zu einem Vorwort: Dieses Buch ist für diejenigen || die geschrieben, die dem Geist || seinem Geist in dem es geschrieben ist freundlich gegenüberstehn. Dieser Geist ist, glaube ich, ein anderer als der der || des Stromes der großen europäischen & amerikanischen Zivilisation. Der Geist dieser Zivilisation dessen Ausdruck die Industrie, Architektur, Musik, der Faschismus & Sozialismus der Jetztzeit || unserer Zeit ist, ist ein dem Verfasser fremder & unsympathischer Geist || dem Verfasser fremd & unsympathisch. Dies ist kein Werturteil. Nicht als ob ich nicht wüßte daß was sich heute als Architektur ausgibt nicht Architektur ist & nicht || er glaubte daß … Architektur wäre & nicht als ob er dem was moderne Musik heißt nicht das werden. Zur Zeit der Unkultur aber zersplittern sich die Kräfte und die Kraft des Einzelnen wird durch entgegengesetzte Kräfte & Reibungswiderstände verbraucht & kommt nicht in der Länge des durchlaufenen Weges zum Ausdruck sondern vielleicht nur in der Wärme die er beim Überwinden der Reibungswiderstände erzeugt hat. Aber Energie bleibt Energie & wenn so das Schauspiel das dieses Zeitalter bietet auch nicht das des Werdens eines großen Kulturwerkes ist in dem die Besten dem gleichen großen Ziele zuarbeiten sondern das wenig imposante Schauspiel einer Menge deren Beste nur privaten Zielen nachstreben so dürfen wir nicht vergessen, daß es auf das Schauspiel nicht ankommt. Es ist (nur) eine brotlose Kunst. || ! 
-Es ist ein Sprachspiel das Ähnlichkeit mit dem Spiel des Daumenfangens hat. (Dies || Dieses wird so gespielt: Man hält den Daumen der rechten Hand mit der linken, so daß seine Spitze noch oben aus der linken hervorschaut. Nun entzieht man die rechte Hand rasch dem Griff der linken Hand & trachtet die rechte Daumenspitze noch mit der rechten Hand zu fangen, ehe sie sich zurückzieht.) 98 the context of his 'notorious' account of contradictions in formal systems (see section 2.3 below), LW briefly but incisively talks about logical paradoxes, and the liar's paradox in particular: Schadet der Widerspruch der entsteht, wenn Einer sagt: "Ich lüge. -Also lüge ich nicht. -Also lüge ich etc." Ich meine: ist unsere Sprache dadurch weniger brauchbar, daß man in diesem Fall aus einem nach den gewöhnlichen Regeln sein Gegenteil & daraus wieder ihn folgern kann? -Der Satz (selbst) ist unbrauchbar, & ebenso dieses Schlüsseziehen; aber im übrigen kann man es tun, wenn man will. || warum soll man es nicht tun? These explorations are good example of LW's 'polyphonic' style (see section 0.2(A) above): he explores the subject matter with which he struggles by internalizing various potential opinions and reactions one may want to express in response to the examples he conjures up; he does not appear to use the examples to illustrate a pre-established point. This does not mean that he starts with a blank slate: of course, he has biases, agendas and opinions, which guide his process. The sheer quantity of the examples is part of the process that is enacted in the text: it resets the reader's sense of what is a 'normal' case, perhaps in the same way it did for LW. Working through the entire passage is a worthwhile exercise that I recommend, but that deal with properly philosophical topics, a fortiori not in the chapters dealing with PhilMath. In the present Part 2, I will try to do exactly this: show how LW's PhilMath fully participates in the culture-critical aspects of the worldview he grew up with, including his preoccupation with various avatars of the issue of fakeness.149 There is an obvious biographical or 'existential' side to the importance of aesthetics for LW. 2.0.1 Ethical and aesthetical biases underlying Wittgenstein's philosophical agenda: biographical and 'existential' aspects (A) aesthetics Thus, for instance, in mainstream handbooks about Wittgenstein ((Glock and Hyman 2017) ; (Sluga and Stern 2017); (Kuusela and McGinn 2011)), these aspects are typically relegated to separate, biographical or otherwise 'contextual', non-technical chapters, but not taken into account in the chapters Starting with his upbringing in a high-society family committed to sponsoring, entertaining, and savoring the crème de la crème of the Austrian art world, LW remained actively interested in aesthetic and artistic matters (music, literature, architecture, sculpture, etc.) throughout his life, as is -for instance-witnessed by the selection of notes published as Vermischte Bemerkungen / Culture and Value (VB), but also elsewhere in his oeuvre (for a quick overview of the data, see e.g. (Hagberg 2014) ; for an account of a few of LW's remarks on music, see section 2.0.2 below). However, there are also indications that LW's aesthetical bias ran deeper ((?) or is it 'wider'?)) than his preoccupation with art, and in a way that is relevant to our purposes in the present study. In the notorious sequence of paragraphs following TLP §6.4, after having pointed out that ethics is necessarily non-propositional (i.e. 
ethics cannot refer to things that are within the world), LW also says (in §6.421, as quoted above) that ethics and aesthetics are one, and transcendental (just like logic, for that matter). 150 See section 2.0.3 below. It has also been suggested that LW's approach to life was as much aesthetical as it was ethical: his moral objections often concerned the how rather than the what of people's behavior (cf. e.g. what Brian McGuiness observes about LW's use of the dictum "Le style c'est l'homme même" ((McGuinness 2002b), pp. 21-22) : These remarks deserve our attention, as they highlight aspects that are immediately relevant to the interpretation of LW's philosophy: LW's problematization of the relation between what someone does and what that person says about it, and LW's focus on the how rather than on the what. These aspects will come back in our analysis of LW's PhilMath below. Einem geschrieben der sich so gern tief wüßte. Das Gesicht ist zu faltenlos; aber Falten kommen vom Kummer, nicht von der Bequemlichkeit. Wer auf dem Kummer schwimmen will, um ja nie unterzutauchen, wie sollte der Tiefe kennen. Mein ganzes Leben (inneres & äußeres) ist darauf angelegt, auf sicherem || im sicheren Boot auf dem Meere, auf der Oberfläche, zu schwimmen. Ich will doch gar nicht zahlen; wie sollte ich erhalten?] Aufsagen oder Anschreiben einer Wortfolge || Zeichenfolge aus dem Gedächtnis als Kriterium der Zahlengleichheit, Mengengleichheit. (B) ethics A number of biographical anecdotes suggest that LW had very strong ethical (moralistic?) reflexes throughout his life and was always ready to disapprove of other people's or his own behavior in the strongest possible terms, 151 and his biographers mention his obsession with honesty and sincerity, and his lack of patience with lack of these qualities, with hypocrisy, with half-heartedness, with vanity, etc. 152 These characteristics occasionally show up in his manuscripts, often written in code. 153 See, for instance, his entry for November 25 1939 in Ms-122,36v-38r, in which we read a long parenthetic remark written in code (here below printed in italic), 154 literally in the middle of a sentence dealing with a more 'technical' philosophical topic. Aber ich verwende nun das [I'm much too slick & all I produce is pretty slick. Es hat nicht genug Falten im Gesicht sondern ist oberflächlich & von glatter Stirn. Zugleich macht es fälschlich den Eindruck der Tiefe, denn es ist von 165 LW first expresses his belief that Mahler's music is worthless and asks -apparently in all seriousness-what poor Mahler should have done with his obvious talent: 166 should he have written his worthless symphonies and then burnt them? should he have forced himself not to write them? Of course, poor Mahler was -out of vanity-not able to see what LW could see... Let us note the almost incredible confidence (if this is even the right word) that accompanies what ultimately boils down to an expression of personal taste, if not cultural affiliation. Suppose Lewy has what is called a cultured taste painting. This is something entirely different to what was called a cultured taste in the fifteenth century. An entirely different game was played. He does something entirely different with it to what a man did then. 30. There are lots of people, well-offish, who have been to good schools, who can afford to travel about and see the Louvre, etc., and who know a lot about and can talk fluently about dozens of painters. 
There is another person who has seen very few paintings, but who looks intensely at one or two paintings which make a profound impression on him. Another person who is broad, neither deep nor wide. Another person who is very narrow, concentrated and circumscribed. Are these different kinds of appreciation? 27. [Rhees: Is there tradition in Negro art? Could a European appreciate Negro art?] 28. What would tradition in Negro Art be? That women wear cut-grass skirts? etc., etc. I don't know. I don't know how Frank Dobson's appreciation of Negro Art compares with an educated Negro's. If you say he appreciates it, I don't yet know what this means. He may fill his room with objects of Negro Art. Does he just say: "Ah!"? Or does he do what the best Negro musicians do? Or does he agree or disagree with so and so about it? You may call this appreciation. Entirely different to educated Negro's. Though an educated Negro may also have Negro objects of art in his room. The Negro's and Frank Dobson's are different appreciations altogether. You do something different with them. Suppose Negroes dress in their own way and I say I appreciate a good Negro tunic. Does this mean I would have one made, or that I would say (as at the tailor's): "No ... this is too long", or does it mean I say: "How charming!"? 29. : Die geometrische Illustration der math. Analysis ist allerdings unwesentlich, nicht aber die geometrische Anwendung. Ursprünglich waren die geometrischen Illustrationen Anwendungen der Analysis. Wo sie aufhören dies zu sein, können sie leicht gänzlich irreführen. Hier haben wir dann die phantastische Anwendung. Die eingebildete Anwendung. Die Idee des 'Schnittes' ist so eine gefährliche Illustration. Nur soweit, als die Illustrationen auch Anwendungen sind, erzeugen sie nicht das || jenes gewisse Schwindelgefühl, das die Illustration erzeugt im Moment, wo sie aufhört eine mögliche Anwendung zu sein; wo sie also dumm wird. Immediately after these remarks, we read a paragraph written in LW's usual code, -easy to decipher, but not immediately readable by a casual passer-by: 222 Ich glaube die Mathematik hat im vorigen Jahrhundert eine ganz besonders instinktlose Zeit gehabt an der sie noch lange leiden wird. Ich glaube diese Instinktlosigkeit hängt mit dem Niedergang der Künste zusammen, sie entspringt der selben Ursache. According to LW, mathematics has had an especially instinctless time in the previous century, which will plague it for a long time to come, and this instinctlessness is the result of the same causes that also lead to the decline [Niedergang] of the arts. Perhaps we can try to explicate what LW may have meant by 'instinctlessness': against the backdrop of those aspects of LW's modes of thought that we highlight in this study, it seems fair to interpret 'instinctlessness' as referring to a lack of connection between mathematical discourse and that which gives mathematics its real-life meaning (the term 'forms of life' was not yet part of LW's vocabulary in 1929, but the biological connotations of 'instinct' do remind us of LW's later work). Also note the negative assessment of the state of mathematics and the arts, which -again- are presented as results of the same underlying historical cause. (B) set theory's self-misrepresentation + mental fog + fictional symbolism: LW, Ms-113,93r-Ms-113,93v, d.d. 19320508 [= Ts-213 (The Big Typescript), §750] This excerpt is part of a series of loosely connected paragraphs on various aspects of the set-theoretical approach to the continuum.
It doesn't look like there is a particular link to the immediately preceding or following context. Die Mengenlehre wenn sie sich auf die menschliche Unmöglichkeit eines direkten Symbolismus des Unendlichen beruft führt dadurch die denkbar krasseste Mißdeutung ihres eigenen Kalküls ein. Es ist freilich eben diese Mißdeutung die für die Erfindung dieses Kalküls verantwortlich ist. 223 222 Why did LW decide to write this not particularly personal or particularly scandalous remark in code? Good question. No idea. Just to practice? Fermat, 231 and the slogan227 Was der Mengenlehre verloren gehen muß ist vielmehr die Atmosphäre von Gedankennebeln die den bloßen Kalkül umgibt. Also die Hinweise auf einen der Mengenlehre zu Grunde liegenden fiktiven Symbolismus der nicht in ihrem Kalkül verwendet wird, & dessen scheinbare Beschreibung in Wirklichkeit Unsinn ist. (In der Mathematik können || dürfen wir alles fingieren nur nicht einen Teil unseres Kalküls.)228 Both meanings of the word 'Atmosphäre' were established in German long before LW wrote this paragraph (see https://www.dwds.de/wb/etymwb/Atmosphäre). The system of systems is a contradiction"). So, we can assume LW starts a new line of thought at the beginning of our excerpt. 232 30.5. Die Krankheit einer Zeit heilt sich durch eine || die Veränderung in der Lebensweise der Menschen & die Krankheit der philosophischen Probleme konnte nur durch eine veränderte Denkweise & Lebensweise geheilt werden nicht durch eine Medizin die ein Einzelner erfand. Denke, daß der Gebrauch des Wagens gewisse Krankheiten hervorruft oder begünstigt & die Menschheit von dieser Krankheit geplagt wird, bis sie sich, aus irgendwelchen Ursachen, als Resultat irgendeiner Entwickelung, das Fahren wieder abgewöhnt. 233 LW sounds like Weininger or Spengler, including the historical perspective and the negative assessment of the era he lives in. Then, making a direct link between cultural critique in the most general terms possible and an apparently quite technical subject matter in PhilMath, LW illustrates what he means by considering the task "Name a number that is bigger than the number of all numbers". 234 229 It takes some heavy formatting, whether within math of within theology, to make this idea even remotely palattable, I guess. A quick survey among some of my acquaintances shows that only people with a background in logic or PhilMath regard it as 'a thing', whereas working mathematicians and people with a more applied background may have heard of it, but tend to be dismissive of it. 230 Das Vergnügen, das wir an einem aufgeblasenen Gummiballon haben. Wir sind nicht gewöhnt mit Körpern zu hantieren, die so groß im Verhältnis zu ihrem Gewicht sind. 231 Es hilft wenn man sagt: der Beweis des Fermatschen Satzes ist nicht zu entdecken, sondern zu erfinden. " Gödelsche Beweis bringt eine Schwierigkeit auf, || entwickelt eine Schwierigkeit, die sich auch in viel elementarerer Weise zeigen muß. || die auch in viel elementarerer Weise erscheinen muß. (Und hierin liegt, scheint es mir, zugleich Gödels großes Verdienst um die Philosophie der Math., & zugleich der Grund, warum sein besonderer Beweis nicht das ist was uns interessiert.) 11.7. Ich könnte sagen: Der Gödelsche Beweis gibt uns die Anregung dazu die Perspektive zu ändern aus der wir die Mathematik sahen. Was er beweist, geht uns nichts an, aber wir müssen uns mit dieser mathematischen Beweisart auseinandersetzen. Trage! Stehst || Stündest Du fest & trägst, so wird es auch dem Andern am meisten nützen. 
Mach keine Scene, sei nicht ironisch, sei nicht unnatürlich. unphilosophical and his concepts slimy (Ms-124,115-119; see section (E) here below) and in another context (Ms-121,81v), he calls a 'Gödelian reason' to decide whether the proposition 'This proposition is self-evident' is true or not, "dumb" [dumm]. Conspicuously absent from the above list are the so-called "notorious" remarks, published as BGM1, Anhang 3. The published text was based on Ts-223, in its turn a somewhat sanitized version of Ms-118, 105v-116v, which LW wrote in Norway between 19370922 and 19370924?, in the presence of his lover Francis Skinner. This is one of the most cohesive pieces of prose in the whole Nachlass and a remarkable burst of creativity. 240 This is the text that most comments on 'LW and Gödel'-related topics are based on. What has prompted this text to be systematically interpreted as targeting Gödel is the fact that it discusses the concepts of provability vs. truth in the context of formal systems, but Gödel is not mentioned in these remarks, only Russell. -04): Gödel mentioned 6 times; interesting and coherent material; -Ms-126 (p. 131, d.d. 19421213): Gödel's casual prefatory proof mentioned in passing; -Ms-163: 4 mentions between p. 16r and p. 20v, d.d. 19410708, all of which are doublets of material also found in Ms-124; 9 more mentions between p. 24r and p. 42v, d.d. 19410708-11; interesting material, but very messy manuscript. 237 Ms-124,84 (=Ms-163,16r): "Meine Aufgabe ist es nicht über den Gödelschen Beweis (z.B.) || , z.B., zu reden; sondern an ihm vorbei zu reden". Ms-163,24r-24v (//Ms-124,94): "Man kann mit Recht fragen, welches Interesse || welche Wichtigkeit Gödels Beweis für unsre Arbeit habe. Denn er kann keines unserer Probleme lösen || löst keines unserer Probleme. -Die Antwort ist: daß die Situation uns interessiert || für uns von Interesse ist, in die ein solcher Beweis die Menschen bringt. 'Was sollen sie nun sagen?' -das ist unser Thema". Ms-163,37v-38v: "Nicht der Gödelsche Beweis interessiert mich, sondern die Möglichkeiten auf die Gödel durch seine Diskussion uns aufmerksam macht. Ms-163,39v-40v: Surprisingly (?), most of these passages are not overtly critical towards anything that is specific to Gödel's work at all. Part of these remarks simply assert that LW does not consider it his task as a philosopher to talk about any aspect of the proofs themselves. 237 Interestingly, LW actually credits Gödel with having invented a situation that creates a problem that makes us change our perspective on math: 238 the proof itself is of no philosophical interest, but the type of proof [Beweisart] is, and the problem it shows is applicable to much more elementary aspects of math as well. 239 To be fair, in other contexts, LW does call Gödel's articles / Die math. Tatsache daß hier ein arithmetischer Satz ist, der sich in P nicht beweisen noch als falsch erweisen läßt, interessiert mich nicht". 238 Gödel would probably agree on the fact that his work operates a change to our perspective on math (it probably was intended to do that from the outset), but he definitely would not endorse the change in perspective that LW has in mind. 
239 Of course, everything LW says on contradictions and axiomatic systems in general also applies -I would say a fortiori-to Gödel's work, but it is important to note that LW does not address most of these remarks to Gödel specifically, and that he quite clearly and repeatedly indicates in other writings that Gödel's specific proofs and what they prove specifically have no real philosophical interest. So, I believe it's fair to say that whereas Gödel's proof is not really a deep-running theme in LW's work, contradictions and what they show us about axiomatic systems do make up a recurrent and wide-reaching theme (cf. also PhU §125, discussed in section 2.0.3(B) above). In what follows, I will be focusing on a selection of passages dealing with paradoxes and contradictions and (following the indications in LW's text) I will not particularly focus on Gödel. My emphasis is on how LW's treatment of these matters is congruent with the rest of his philosophy and how they display the same critical themes and attitudes as the rest of his oeuvre. Der Russellsche Widerspruch ist nicht, weil er ein Widerspruch ist beunruhigend, sondern weil das ganze Gewächs deren Spitze er ist gleich einem Krebsgewächs ist welches zweck-& sinnlos aus dem normalen Körper herauszuwachsen scheint. 325 LW gives an adequate description of this in the Vorwort of his PhU: Nach manchen mißglückten Versuchen, meine Ergebnisse zu einem solchen Ganzen zusammenzuschweißen, sah ich ein, daß mir dies nie gelingen würde. Daß das Beste, was ich schreiben konnte, immer nur philosophische Bemerkungen bleiben würden; daß meine Gedanken bald erlahmten, wenn ich versuchte, sie, gegen ihre natürliche Neigung, in einer Richtung weiterzuzwingen. --Und dies hing freilich mit der Natur der Untersuchung selbst zusammen. Sie nämlich zwingt uns, ein weites Gedankengebiet, kreuz und quer, nach allen Richtungen hin zu durchreisen. --Die philosophischen Bemerkungen dieses Buches sind gleichsam eine Menge von Landschaftsskizzen, die auf diesen langen und verwickelten Fahrten entstanden sind. Die gleichen Punkte, oder beinahe die gleichen, wurden stets von neuem von verschiedenen Richtungen her berührt und immer neue Bilder entworfen. Eine Unzahl dieser war verzeichnet, oder uncharakteristisch, mit allen Mängeln eines schwachen Zeichners behaftet. Und wenn man diese ausschied, blieb eine Anzahl halbwegser übrig, die nun so angeordnet, cut down, in order to give the viewer an idea of the landscape. So this book is really just an album. in section 1.1.3 above: "Immer bin ich hier zum Dogmatismus geneigt!". From the same manuscript: Ms-122,27r: "(In dieser ganzen Untersuchung fühle ich mich nicht wohl: mir scheint, ich bin dogmatisch.)"; Ms-122,83v: "(Ich habe das bestimmte Gefühl, daß ich sehr unvorsichtig bin. Also irgendwie im seichten Wasser des Dogmatismus herumschwimme.)". Similarly: Ms-117,192: "(Sei aber hier nicht dogmatisch. Es gibt Übergänge, die die Betrachtung erschweren.)"; Ms-130,53: " "Es muß sich doch so verhalten" ist kein Satz der Philosophie. Dogmatismus."; Ms-163,55r-55v: "Wenn der Diagonalbeweis etwas tut, so ist es, daß er unsern Begriff vom System ändert. || so ändert er unsern Begriff vom System. Hier muß man aber unterscheiden zwischen dem Begriff in der Math. & außerhalb der Math. Nur von diesem müssen wir sagen er habe sich geändert. [Furchtbar unklar!] Hier darf man nicht dogmatisch sein wollen: Von manchem neuen Beweis wird man zu sagen geneigt sein, er ändere unsern Begriff, von manchem -sozusagen trivialen- nicht.
Aber für uns ist gerade der Übergang zwischen der Geneigtheit, das eine, & der, das andere zu sagen, das Wichtige || wichtig." See alsoMs-142,111-112 (also interesting because Spengler is mentioned): Nur so nämlich können wir der Ungerechtigkeit -oder Leere unserer Behauptungen entgehen, indem wir das Vorbild als das, was es ist, als Vergleichsobjekt -sozusagen als Maßstab -hinstellen; & nicht als das Vorurteil, dem die Wirklichkeit entsprechen müsse. (Ich denke an die Betrachtungsweise Spenglers.) Hierin nämlich liegt derjenige || ein gewisser Dogmatismus, in den unsre Philosophie so leicht verfallen kann. Es ist wahr: eine Maßeinheit ist gut gewählt, wenn sie viele der Längen, die wir mit ihr messen wollen, in ganzen Zahlen ausdrückt. Aber der Dogmatismus behauptet, jede Länge müsse ein ganzes Vielfaches der || unserer Maßeinheit sein. LW came back to philosophy in 1929, he had already let go of his earlier framework, but continued to ponder over basically the same issue: if math is ultimately not grounded in an axiomatic system, what does justify the validity that we attribute to it?Even if my research cannot really (or rather: not directly) contribute to the issues that most scholarship is mainly focused on, there is one point I can make that may indirectly contribute to the field: if it's true that LW's PhilMath is closely connected to his culture-critical and ethical concerns (and I believe I have shown that), any exegesis of LW's PhilMath (incl. chronological approaches) should take this connection into account. Thus, for instance, the undeniable facts about LW's early fascination with Kraus and the problems of disconnection from everydayness should inform any interpretation of LW's PhilMath of the so-called intermediate period and would make any overly formalist interpretation ipso facto unlikely. 3, we observed how LW employed one of his signature techniques: setting up a thought experiment by evoking a scenario in which mathematical or math-like techniques are used in contexts that are subtly or markedly different from ours. 339 It is interesting to observe that LW's made-up examples are not necessarily more exotic or 'fringe' than what the ethnographic and historical records show. 340 Da wir in diesen Untersuchungen immer fragen: "was müßte || sollte man sagen, wenn …", so genügt uns die Varietät der uns in der Wirklichkeit bekannten Fälle nicht || der in der Wirklichkeit existierenden Fälle nicht, sondern wir müssen eine Mannigfaltigkeit von Sachverhalten in die Erwägung ziehen, gleichgültig, ob sie wirklich oder erdichtete sind. Daher berührt es komisch, wenn wir einen Philosophen mit der Miene eines Naturforschers nach einzelnen entlegenen Fakten (seltsamen Geisteskrankheiten z.B. || etwa) fischen sehen. Als wäre das faktische dieser Dinge für uns von Wichtigkeit. mathematicians' activities (articles, textbooks, etc.) 341 and (2) looks similar to other human practices, if one actually looks at what mathematicians do from up close. 342 I have two observations regarding the relation between the research mentioned above and LW's work: (1) LW did not have access to any of these types of research and he may or may not have used it if he had; 343 (2) for LW's own purposes, real examples need not necessarily have much of an added value, though -of course-for a 21 st century audience, in the present 'naturalist' climate, the fact that most of the examples have real-life counterparts may add to the persuasiveness of the argument. 
As for (2): LW did speak out on this, in a way that will not help to endear him to your average 21 st century philosopher of mathematical practice: in Ms-116,247, LW ridicules philosophers who collect empirical facts 'as if the factuality of these things was important to us' (cf. also Ms-120,73r): • Ethnomethodological work ((Livingston 1986) (Livingston 2015)) shows to what an extent actual mathematical practice (1) looks different from what is talked about in mainstream PhilMath and from the impression one gets from looking at the finished products of Den Mathematiker muß es bei meinen mathematischen Ausführungen grausen, denn seine Schulung hat ihn immer davon abgelenkt sich Gedanken & Zweifeln, wie ich sie aufrolle, hinzugeben. Er hat sie als etwas verächtliches ansehen lernen & hat um eine Analogie aus der Psychoanalyse (dieser Absatz erinnert an Freud) zu gebrauchen einen Ekel vor diesen Dingen erhalten, wie vor etwas Infantilem. D.h., ich rolle alle jene Probleme auf, die etwa ein Knabe || Kind beim Lernen der Arithmetik, etc. als Schwierigkeiten empfindet & die der Unterricht unterdrückt ohne sie zu lösen. Ich sage also zu diesen unterdrückten Zweifeln: ihr habt ganz recht, fragt nur, & verlangt nach Aufklärung! 347 ).351 What is interesting here, is that LW's negative assessment of Ramsey's thought gives us a clear insight in what LW saw as the proper way to be a philosopher, which is clearly on the opposite side of the spectrum from the insiders' perspective that prevails in current PhilMath: for LW, Ramsey's insistence on sticking with the state/community he happened to be part of, unable or unwilling to think Ramsey war ein bürgerlicher Denker. D.h. seine Gedanken hatten den Zweck die Dinge in einer gegebenen Gemeinde zu ordnen. Er dachte nicht über das Wesen des Staates nach -oder doch nicht gerne -sondern darüber wie man diesen Staat vernünftig einrichten könne. Der Gedanke daß dieser Staat nicht der einzig mögliche sei beunruhigte ihn teils, teils langweilte er ihn. Er wollte so geschwind als möglich dahin kommen über die Grundlagen -dieses Staates nachzudenken. Hier lag seine Fähigkeit & sein eigentliches Interesse; während die eigentliche || eigentlich philosophische Überlegung ihn beunruhigte bis er ihr Resultat (wenn sie eins hatte) als trivial zur Seite schob.) [...] (Der Philosoph ist nicht Bürger einer Denkgemeinde. Das ist, was ihn zum Philosophen macht.) 350 Which -by the way-he disparages as unphilosophical; see section 3.2.1(C) above. 351 1.11.31. ( Dieudonné seems to believe -as many others do-that only -what he considers-failures require social explanations, but -what he considers-successes are sufficiently explained by the fact that they are -what he considers-right.359(Dieudonné 1982), pp. 30-31: Alors quand, après de longues années de patientes études, on arrive enfin à une théorie bien faite, bien enseignable, bien utilisable, il semble que les choses devraient s'arrêter là. Mais non ! Cela ne s'arrête pas, parce que certaines gens, pour des raisons variées, sociologiques ou autres, se disent : « Que se passerait-il si l'on modifiait l'un des axiomes de cette théorie ? ». Et les voilà à modifier l'axiome trente-six bis, ce qui à la fin produit une nouvelle théorie. Quand on leur en demande les raisons, ils répondent : « Comme ça ! Pour écrire un papier ». Si j'ai parlé de raisons sociologiques, c'est qu'il y a des pays, et il y en a de plus en plus, où la promotion d'un universitaire se fait au poids du papier. 
Alors, bien entendu, il faut en produire, et quand il n'y en a pas, on se met à modifier l'axiome trente-six bis. Quoiqu'il en soit, voilà ce qui se passe. C'est ce qu'on peut appeler les mathématiques non motivées ou le délayage. On m'objectera que, peut-être, l'axiome trente-six bis modifié sera un jour aussi fondamental que la notion de groupe. Effectivement, ce n'est pas exclu, et j'ai vu dans ma vie deux ou trois cas où une théorie considérée comme totalement dénuée d'intérêt s'est brusquement trouvée accrochée à quelque chose qui vous faisait comprendre le fond des choses. Mais c'est tout à fait exceptionnel et le reste n'est que du délayage qui s'accumule dans les innombrables papiers qu'on écrit, qu'on publie, dont on fait même des comptes rendus et dont personne, par la suite, ne parle plus jamais, sauf ceux, bien entendu, qui délayent ces délayages, ce qui, apparemment, se prolonge indéfiniment. Enfin, il existe des théories qui s'étiolent progressivement, qui se meurent doucement, non pas que les mathématiciens deviennent moins ingénieux -au contraire, ils le sont peut-être plus -, mais parce que les problèmes traités s'amenuisent, deviennent de plus en plus spéciaux, s'isolent et finissent par ne plus avoir de relation qu'avec la théorie elle-même. Alors que ce qui excite beaucoup les mathématiciens, c'est le fait qu'un problème ait des relations avec d'autres théories.360 Whereas in the present paragraph I emphasize an aspect of Dieudonné's text that illustrates a typical antipragmatic aspect of standard mathematical discourse, there are other passages in the same article that are remarkably close to a Wittgensteinian point of view. Thus, Dieudonné's criticism of "empty" mathematical work that ticks all the boxes for formal correctness but is nonetheless still worthless as mathematics, bears a strong resemblance to the 'bad faith' (my term) that LW blames Gödel for (cf. section 2.3 above).Whereas this kind of whiggism has been and continues to be addressed as a problem in most neighboring disciplines, such as Philosophy of Science or Science and Technology Studies (for a recent contribution from a leading scholar, see Chang 2021 (Chang 2021)), mainstream PhilMath and History of Mathematics continue to be whiggish to a remarkable extent: many heart-felt convictions about the universality and objectivity of math should shatter in the face of the historical, ethnographic and ethnomethodological record. And this should bring usagain-to the question as to why these whiggish and exceptionalist ideas are worth defending for those who defend them. Thus, what LW's remarks can do for us is remind us of the importance of a critical attitude, which may sometimes require a polemical relation with mainstream discourse: rather than merely assuming that what practitioners say is ipso facto right, we can conceive it as our tasks as philosophers to hold mathematicians and fellow-philosophers accountable for what they say and the consequences of what they say about their own activity, its products and their role in society at large. From the above, a number of philosophically interesting questions have emerged: what exactly is specific to math, and what aspects does it share with other practices? what is the epistemic status of mathematical monism, foundationalism and exceptionalism? 
to what extent are monism, foundationalism and exceptionalism specific to math, to what extent do they participate in broader philosophical, religious, ideological discourses? what drives the predilection for monism (etc.) in math? what does history teach us in this regard? etc. (B) against epistemic bad faith and epistemic bad taste Throughout Part 2 of this study, I pointed out that LW systematically uses ethical and/or aesthetical vocabulary referring to various avatars of the concept of inauthenticity (delusion, fraud, hocus pocus, pathos, ...) to criticize various aspects of mathematical discourse he objects to. LW's criticism of math (or rather: discourse about math), formulated in ethical and/or aesthetical terms, does not target bad things done by mathematicians in the margins of their mathematical work: it's not about (at least not directly about) sexism and/or ageism in the context of the recruitment and career management of researchers, or in the context of peer reviews; nor about bullying of the less gifted in an educational context or on the work floor; it's not about acting irresponsibly while working in data-management or applied artificial intelligence; etc. It's about problems with mathematical discourse, the ways in which mathematicians conceive of and speak about the nature of the discipline. Examples are: 361 One reason mathematicians shy away from ethical discussions is that mathematics seeks timeless, absolute truths. The apparent perfection of mathematical truth can be its primary attraction. But ethics doesn't have the same binary clarity or timelessness. Different people may come to different conclusions or hold different moral values which are all reasonable, and mathematicians facing profession-specific ethical challenges have no universally-agreed ethical framework to use, because there isn't one. Unsurprisingly, suggesting that mathematicians need to be aware of ethical issues sometimes gets the response that ethics is imperfect and a matter of opinion, and moreover "Whose ethics?" which we would answer with "Yours!" We do not suggest that teaching EiM should give all the answers to ethical problems, but we do suggest that it is our duty to educate our students about it. The hard work of solving the questions remains and is an individual's social responsibility. The political debate that follows is part of what informed citizens frequently do. (Chiodo & Bursill-Hall 2019 --Teaching Ethics in Mathematics, p. 40) 363 In Appendix 4.3, I would like to offer a few brief suggestions as to how LW's critical attitude could still apply to contemporary mathematical discourse, despite the fact that I may not be the right person to actually do the work. 364 Within the cluster of sub-topics and sub-disciplines that makes up PhilMathPract, a niche developed which focuses on math education (see, for instance, (François and van Bendegem 2010); (Ernest 2018)). epistemic kitsch and epistemic bad faith would be important. With this out of the way, the question remains as to why this matters. My claim is that even scientific kitsch, as a way of presenting one's activity, does do damage. Epistemic kitsch contributes to the demise of certain norms, values and criteria attached to scientific/academic practice, which may be inherent to the operationality of rationality at large.
367 Epistemic kitsch is detrimental in that it encourages it does function within a wider social context, in multiple ways: math is an important component of various educational systems; practitioners in many other fields need to learn mathematical technique up to various levels; professionals with mathematical training end up in many different domains, in which they will bear real-life responsibilities. For further reflection on how things go wrong when mathematicians hit the road of real life, I refer to Chiodo and friends (quoted in section 3.2.3(C) here above) and Part IV "How Good is Mathematics" of Ravn & Skovsmose's 2019 Connecting Humans to Equations. A Reinterpretation of the Philosophy of Mathematics. 366 For examples of mathematicians claiming absolute innocence, see Ravn & Skovsmose 2019 (Ravn and Skovsmose 2019), pp. 133-135 on the "Thesis of isolation" and the "Thesis of neutrality". 367 Of course, there are circumstances in which one can responsibly work towards the demise of certain norms, perhaps in order to promote other values and norms. But in that case one should be able and willing to explain how what one is doing is a good thing, in its intentions and in its effects. If one is not able or -more plausibly-not some of its core conceptual resources (pragmatism, everydayism, holism) are motivated by ethical/aesthetical or even cultural/political considerations. One of the claims I want to derive from the work presented above (perhaps overly polemical for the present climate, especially in PhilMath, as well as in Wittgenstein scholarship) is that even those philosophical claims within LW's work that don't look overtly critical cannot be properly understood without understanding the conceptual links between (1) central elements in his conceptual apparatus and (2) his critical agenda. If I am right, this context also explains why LW's criticism of apparently math-specific discourse takes the shape of genuinely ethical indignation: for LW, the problem with the philosophical/mathematical ideas he disagreed with was not that they were technically incorrect, but that they contributed to the decline of the cultural values he affiliated with. Sass 2001, p. 120: "Indeed, he doubted that anyone who lacked this capacity for getting outside normal presuppositions could really be called a philosopher at all." [with a reference to the paragraph about Ramsey, quoted here above]. (C) LW as Kantian critique: "bedrock" and "the given" LW's work is also 'critical' in a perhaps 'deeper' (?), more technical, Kantian, sense. LW's major concerns are an avatar of Kant's major concerns with the nature of 'the given'. The question as to what counts as 'given', what is our 'bedrock', continues to be LW's main problem, despite the spectacular evolution in LW's views on what constitutes ultimate bedrock. Again, I want to emphasize the continuity rather than the chronological succession 380 and I have no intention to do proper historiographical work here, 381 my only point being that LW's conception of what a philosopher is supposed to do is -in a certain sense- very much in the tradition of Kant's concept of 'critique'.
Let's start by recapitulating a few elements that we encountered in the above: -LW evaluates mathematical discourse in terms of the meaningfulness of what is being said, not only (and in many cases: not even) in terms of the correctness of what is being said; LW explicitly distinguishes his own philosophical approach from other approaches by emphasizing his interest in what is 'given' / presupposed before anything meaningful is being said. LW's work is Kantian in the sense that it distinguishes between the contents of any type of discourse and the 'given' that makes it meaningful and sets the limits of its meaningfulness.382 For LW, as for Kant, philosophy is ultimately all about what counts as 'the given', as 'bedrock', as 'transcendental' with respect to what is meaningful to us, i.e. what we perceive, say and do. The Kantian notion that philosophy should focus on the pre-meaningful, presupposed, transcendental (?) remains central to LW's conception of the aims of his philosophical activity, but how this pre-meaningful 'given' was conceived of, evolved significantly throughout his development: • in the TLP, LW discusses the function of his work in very overtly Kantian terms: (abgrenzen, etc.): 383 tautologies are presented as not saying anything about the world; therefore, logic is considered transcendental; 382 Let me briefly engage with the following excerpt from Egan 2019, p. 150 on LW and Kant: abendländische Mathematiker begibt sich, sobald er von antiken Vorurteilen frei sich selbst gehört, in die gänzlich abstrakte Region einer unendlichen Zahlenmannigfaltigkeit von n -nicht mehr von 3 -Dimensionen, innerhalb deren seine sogenannte Geometrie jeder anschaulichen Hilfe entbehren kann und meistern muß. Greift der antike Mensch zu künstlerischem Ausdruck seines Formgefühls, so sucht er dem menschlichen Körper in Tanz und Ringkampf, in Marmor und Bronze diejenige Haltung zu geben, in der Flächen und Konturen ein Maximum von Maß und Sinn haben. Der echte Künstler des Abendlandes aber schließt die Augen und verliert sich in den Bereich einer körperlosen Musik, in dem Harmonie und Polyphonie zu Bildungen von höchster "Jenseitigkeit" führen, die weitab von allen Möglichkeiten optischer Bestimmung liegen. Man denke daran, was ein athenischer Bildhauer und was ein nordischer Kontrapunktist unter einer Figur versteht, und man hat den Gegensatz beider Welten, beider Mathematiken unmittelbar vor sich. See also the table on p. 124.416 Another point in common with LW. But Spengler had his PhD thesis refused on that basis, LW's (the TLP) was accepted anyway. Versions of this material have been presented on numerous informal and formal occasions, among which I would like to mention the following more formal ones and thank those who offered me feedback: (1) lecture "Hocus Pocus 101. Wittgenstein's critical remarks on (meta-)mathematics: meaning, everydayness and epistemic authenticity", as part of the Masterclass on Mathematical Practices with Karine Chemla, Centrum voor Logica en Wetenschapsfilosofie (V.U.B.), 2018-05-18; (2) lecture "Hocus Pocus 2.0. 
Wittgenstein's philosophy of mathematics, epistemic authenticity, and the pragmatics of formalism", Centrum voor Logica en Wetenschapsfilosofie (V.U.B.), 2018-06-18; (3) lecture "Meaningfulness / meaninglessness and epistemic authenticity / fakeness in Wittgenstein's philosophy of mathematical practice", as part of the Conference on Virtue Epistemology of Mathematical Practice (V.U.B., July 13-14 2018), 2018-07-13; lecture "Wittgenstein's philosophy of mathematical practice and the ethics of formalism", Centre for Logic and Philosophy of Science, V.U.B., 2021-02-03. By rough draft I mean that some sections are underdeveloped and other sections are overdeveloped for the purpose they serve in the context of this text, that references to the literature and cross-references are incomplete and unequally distributed over the text, that some sections have been taken from other projects and have not been sufficiently integrated into their new context, that the references to, and quotations from, Wittgenstein's text have not been harmonized: although I refer to the on-line Bergen edition of the Nachlass in principle, I sometimes still refer to the standard editions; I have not yet decided on the way I should integrate LW's original text into mine (German original, English translation; in the body of the text, in footnote); this decision will depend on the venue the final version will eventually be published at. I use the terms 'published' and 'unpublished' in their currently prevailing commercial sense, as much of this 'unpublished' work has been readily available via academia.edu and other informal channels. (Scheppers 2009);(Scheppers 2017). The material in these texts gave rise to a few half-hearted attempts at publication, but personal circumstances prevented me from following through with the peer review process. It may be worth mentioning that this research activity also gave rise to a research project proposal "The ontology of the 'practice turn' in the philosophy of scientific and mathematical practice: towards a radically pragmatic framework", submitted to the FWO in 2018, not selected for funding. [START_REF] Scheppers | Kola. Antieke En Moderne Visies Op Taalkundige Geleding[END_REF];(Scheppers 1997);[START_REF] Hoekstra | Ὄνοµα, Ῥῆµα, et Λόγος Dans Le Cratyle et Le Sophiste de Platon. Analyse Du Lexique et Analyse Du Discours[END_REF];(Scheppers 2004);(Scheppers 2011);(Scheppers 2018) This story, told by Rush Rhees, was not corroborated by John Wisdom(Monk 1990, p. 466 and p. 628, note ad p. 466); Monk also mitigates the importance of this anecdote by pointing out that LW started to shift his focus to other topics only a few months after the moment of its supposed occurrence. Cf. e.g. (McGuinness 1988) pp. 73-77. The problem is also that LW adopts an outsider's perspective that is directly at odds with the mathematical exceptionalism that is prevalent in PhilMath and that inspires the idea that only mathematicians are qualified to say something about mathematics (cf. section 0.2(C) above and section 3.2.2(B) and 3.2.3(A) below). NB: it is not true that LW used 'Form of Life' for larger-scale patterns and 'Language Game' for small-scale practices. Cf. Scheppers 2017 (Scheppers 2017), §3.2. This aspect of LW's work was recognized early on and supposedly participated in the rise of the branch of analytical philosophy called 'ordinary language philosophy'. 
For the problematic nature of the contrast-class of the everyday in LW, see[START_REF] Baker | Wittgenstein on Metaphysical-Everyday Use[END_REF];[START_REF] Read | Ordinary/Everyday Language[END_REF];(Scheppers 2017), §3.1. See also e.g. (A. W. Moore 2017) p.322;[START_REF] Kienzler | Proofs and Mathematical Practice: Reading Remarks on the Foundations of Mathematics, Part I, Appendix III, Carefully[END_REF] p. 79, p. 81);[START_REF] Stenlund | On the Origin of Symbolic Mathematics and Its Significance for Wittgenstein's Thought[END_REF]. Wenn diese Leute nun glaubten, die Zahlen wären Geister & durch ihre Rechnungen erforschten sie das Geisterreich, oder zwängen die Geister, sich zu offenbaren -wäre dies nun Arithmetik? Oder -wäre es auch dann Arithmetik, wenn diese Menschen die Rechnungen zu nichts anderm gebrauchten? Denken wir uns den primitiven Fall, daß Einer ungeheure Multiplikationen ausführte um wie er sagt: dadurch neue riesige Provinzen des Zahlenreichs zu gewinnen. Denk Dir das Rechnen mit der √-1 wäre von einem Narren erfunden worden, der bloß vom Paradoxen der Idee angezogen die Rechnung als eine Art Gottesdienst || Gottes-oder Tempeldienst des Absurden treibt. Er bildet sich ein das Unmögliche || schlechthin Unmögliche aufzuschreiben & mit ihm zu operieren. In other words: if someone believes in mathematical objects and their queer properties--can't he nevertheless do mathematics? Or--isn't he also doing mathematics? Let us not forget that these are notebooks, not immediately intended for publication, and that it is not a priori clear who -besides himself-were the audience LW imagined writing for (if any). It should also be clear by now that working through different points of view is an integral part of LW's working method and writing style. This notion of an 'empty form' deserves a closer look, but not in the present context. For notes on Spengler's views on math and how they are similar to and different from LW's, see Appendix 4.2. In the present context, I can only briefly mention the fact that there are many philosophically relevant commonalities between LW and his contemporary (and fellow often-quoted candidate for 'greatest philosopher of the 20th century') Martin Heidegger. Cf.(Scheppers 2009),(Scheppers 2017),[START_REF] Braver | Groundless Grounds: A Study of Wittgenstein and Heidegger[END_REF],(Egan, Reynolds, and Wendland 2013),(Egan 2019). Of course, these theses (?) should be interpreted within the framework of the TLP, according to which the only valid type of speech was propositional, i.e. the kind that is either true or false; everything else was considered to be meaningless. A good example is the letter LW wrote to his friend and fellow-ex-POW Ludwig Hänsel in 1937, in which he comments on papers written by the latter, by calling them amongst other things "vomit"(Schulte 2001, 183); Schulte makes the following comment: "[…] what arouses Wittgenstein's interest is more the way one thinks or talks about a subject than the content of these thoughts or statements", which fits in nicely with some of the main points of the present study. See e.g.[START_REF] Monk | Ludwig Wittgenstein: The Duty of Genius[END_REF] pp. 44-45, but similar examples can be found throughout any biographical account. See also[START_REF] Sass | Deep Disquietudes: Reflections on Wittgenstein as Antiphilosopher[END_REF], p. 
110: "At times he felt profound disgust for average people ("I suffer much from the human, or rather inhuman, beings with whom I live"); their pettiness, greed, affectation, and general lack of honor was so overwhelming as to make them seem virtually subhuman -like "loathsome worms" or "one-quarter animal"(M 228, 212, also 89). Even Wittgenstein's best friends were likely to feel the force of his severity and ruthless judgments, which could turn suddenly upon them if they said or did something that Wittgenstein considered inauthentic, fatuous, or weak. Yet it was with himself that Wittgenstein was at his most severe." For LW's habit of using an easily decipherable code for writing some of his non-technical remarks in his notebooks and diaries, see e.g. Schulte[START_REF] Schulte | Letters from a Philosopher[END_REF]), p. 178. Cf.. Gorlée 2020[START_REF] Gorlée | Wittgenstein's Secret Diaries: Semiotic Writing in Cryptography[END_REF] The transcriptions of Wittgenstein's Nachlass available on-line at the website of the Wittgenstein Archives at the University of Bergen (http://wab.uib.no/transform/wab.php?modus=opsjoner, last consulted on January 6 2021) prints the whole parenthesis before the sentence "Aber ich verwende nun das Aufsagen oder Anschreiben einer Wortfolge || Zeichenfolge aus dem Gedächtnis als Kriterium der Zahlengleichheit, Mengengleichheit.", probably for the sake of readability, but at the same time obscuring a most remarkable (textual? literary? cognitive? psychological?) phenomenon. Note the similarity with Plato's way of articulating the notion of untruth, as applied to the sophist in the Sophist. See alsoMs-183,228, d.d. 19370330: "Hüte Dich vor einem billigen Pathos wenn Du über Philosophie schreibst! Das ist immer meine Gefahr, wenn mir wenig einfällt. Und so ist es jetzt. Ich bin zu einem seltsamen Stillstand gekommen & weiß nicht recht, was ich machen soll." Also note the notion of cheapness [billig] in this excerpt; LW uses this adjective quite frequently in this sense but it happens to not occur in the passages that interest us below. Another examples is LW, Ms-119,36: "(Sehr komisch Haldane über die Ewige Wahrheit eines arithmetischen Satzes. ganz ähnlich: die Seelen der Menschen die unsichtbar, also durchsichtig sind (Grabbe))". In coining these terms, my only purpose is to provide myself and the reader with a shorthand that helps me refer to these patterns without having to resort to cumbersome paraphrasis. I will avoid projecting these terms onto LW's text during the running commentary below, but will use them freely in my conclusions and the appendix. Jip Van Besouw (personal communication, October 2022) brings up the matter of intentionality, that seems to be semantically inherent in the concept of 'bad faith'. LW does not thematize intentionality in the excerpts that I analyze here, but the pretense, trickery, etc. do presuppose blameworthiness (if not intentionality per se). This paragraph has -perhaps unsurprisingly-not been retained in the standard editions, though most of the rest of this excerpt has been retained. Cf. Floyd & Kanamori 2016 (Floyd and Kanamori 2016), p. 290: "Of course, unbeknownst to Gödel, by 1934 Wittgenstein too had rejected the Tractatus idea of a "possible projection" in logical space, and refashioned the idea against the more anthropological backdrop of "language games" and "forms of life". But, as is clear in Max Phil IX-X, it is logical, and not anthropological ideas of meaning that interested Gödel." 
http://wab.uib.no/wab_nachlass-table.page Sagen wir, wir erhielten manche unsrer Rechenresultate durch einen versteckten Widerspruch. Nun -sind sie dadurch illegitim? -Aber wenn wir nun solche Resultate durchaus nicht anerkennen wollen & doch fürchten es könnten welche entschlüpfen || durchschlüpfen.-Nun dann haben wir also eine Idee die einem neuen Kalkül als Vorbild dienen soll. Wie man die Idee zu einem Spiel haben kann. This use of the word 'applications' shows to what extent a depragmatized view of mathematics is entrenched in the standard idea of math: as if 'pure' math somehow precedes the 'applied' math. Cf. the parallel with nonsense (section 1.2(A)): pure gibberish is not a problem; pseudo-propositional nonsense (gibberish that pretends to mean something) is problematic. It would be interesting to interrogate various philosophers of math and various practitioners of math on the question as to whether they would consider Spencer Brown's calculus[START_REF] Spencer Brown | Laws of Form[END_REF] to be a kind of math or not. LW pushes this idea to its limits -or perhaps beyond the limits of usefulness, when he claims that adding a couple of digits to the decimal development of -say-pi, or any other decimal expansion, is an expansion of math ("So seltsam es klingt: Die Weiterentwicklung einer irrationalen Zahl ist eine Weiterentwicklung der Mathematik"(Ms-126,133, d.d. 19421214)). This idea is coherent but perhaps not useful. Cf. Ms-121,76r: Gödel zeigt uns eine Unklarheit im Begriff (der) 'Mathematik', die darin zum Ausdruck kam, daß man die Mathematik für ein System gehalten hat. In this draft version, the distinction between the present Part 3 and the Appendices was never sharp, and many lines of thought presented here could have ended up in the Appendices, and vice versa. I am referring to the distinction between 'use' and 'interpretation' in[START_REF] Biletzki | Over)Interpreting Wittgenstein[END_REF]). By the ad hoc term 'ismism', I simply mean "the practice of using the suffix -ism", and explicitly not anything else. As I will point out in section 3.3(C) below, the Grundlagen issue (and the skepticism issues, for that matter) ultimately boils down to a Kantian problem: what is given? what is bedrock? I find the idea that anyone could attribute 'modesty' -whether false or genuine-to LW funny. It is interesting to note that in the 1936 manuscript which already resembles PhU in many ways, LW mentions Ramsey as the source of the idea he criticizes (Ms-142,108, not dated, but after November 1936: "Ein "führendes Problem der mathematischen Logik" (Ramsey) ist ein Problem der Mathematik, wie jedes andere" ). For LW's negative opinion of Ramsey as a philosopher see also section 3.2.2(B) below. It may appear somewhat ironic that after arguing against the projection of -isms onto LW's text in the previous section, I will introduce a few -isms of my own, but -as stated previously-the device is a convenient one and my point has never been that I am against -isms in general. Furthermore, within the context of this study I would like to commit to my interpretations by labeling them. (Soler et al. 2014);(Schatzki, Cetina, and von Savigny 2001);[START_REF] Schatzki | Spaces of Practices and of Large Social Phenomena[END_REF];(Hui, Schatzki, and Shove 2017) A current case in point would be Ferreirós declaring his need to 'rescue objectivity' from the pluralism his own work appears to lead towards ((Ferreirós 2016), Chapter 9).
My question is: why would a philosopher want to 'rescue' mathematicians' misguided claims? We now fight against one school of thought. But this school of thought will die, supplanted by other schools of thought. And then one will no longer understand our arguments; no longer comprehend why one had had to say all this. [quick translation fs] For indications that set-theory may be on its way out again in math education (at least in Belgium), see[START_REF] Bock | Rods, Sets and Arrows: The Rise and Fall of Modern Mathematics in Belgium[END_REF]. A mathematician is bound to be horrified when faced with my mathematical remarks, since his schooling has always diverted him from giving himself over to thoughts and doubts of the kind that I am bringing up. He has learned to regard them as something contemptible and, to use an analogy from psychoanalysis (this paragraph is reminiscent of Freud), he has acquired a revulsion against these things as against something infantile. That is to say, I'm bringing up all of those problems that a child learning arithmetic, etc., finds difficult, the problems that classroom instruction suppresses without solving. So I'm saying to those suppressed doubts: You are quite right, go ahead and ask -and demand clarification! [Translation quoted from(Wittgenstein & Luckhardt (ed.) & Aue (ed.) 2005 --The Big Typescript TS 213, German-English Scholars' Edition)] For the educational aspect in LW's text, see section 3.2.3(C) here below. This 'etc.' could include the relations between math and its 'applications' as well as surrounding religious and philosophical discourses. I disagree with the notion of 'strata of knowledge' (cf. 1.13(D)), as well as with the idea of an 'interplay' of practices (as if practices themselves have agency), but these aspects are not what is being discussed here. What we're focusing on here is what Ferreirós concludes from his explanation with respect to PhilMath in general and his distinctively and explicitly uncritical attitude towards his subject matter. Cf. Rota 1997[START_REF] Rota | Indiscrete Thoughts[END_REF], p. 107: "Like artists who fail to give an accurate description of how they work, like scientists who believe in unrealistic philosophies of science, mathematicians subscribe to a concept of mathematical truth that runs contrary to the truth." Celui qui m'expliquera pourquoi le milieu social des petites cours allemandes du XVIIIe siècle où vivait Gauss devait inévitablement le conduire à s'occuper de la construction du polygone régulier à 17 côtes, eh bien, je lui donnerai une médaille en chocolat. LW did maintain until the end that Russell in his prime was very good in conversation[START_REF] Bouwsma | Wittgenstein: Conversations 1949-1951[END_REF]. The chronology of LW's work on math may turn out to be a crucial ingredient if one wants to reconstruct the chronology of LW's philosophy at large. In order to avoid misunderstandings due to the sanitizing editorial practices of LW's literary heirs (of which we encountered a few examples in the course of our explorations), I read all these excerpts in the version directly taken from the manuscripts, as published in the Bergen online edition of the Nachlass. I have tried to be careful not to suggest that psychological or socio-cultural factors somehow explain LW's philosophy (pace scholars such as Sass[START_REF] Sass | Deep Disquietudes: Reflections on Wittgenstein as Antiphilosopher[END_REF] or even Monk[START_REF] Monk | Ludwig Wittgenstein: The Duty of Genius[END_REF]).
I am sure I already owned a hardcover copy of the TLP with W.F. Hermans' Dutch translation in 1986-1987 and I got my first copy of Über Gewißheit around the same time; I bought the whole Suhrkamp Werkausgabe around 1990 and I have been reading this or that part of LW's oeuvre ever since, while being very aware of a great cultural gap between the author and myself and without ever having the slightest urge to become a Wittgenstein-scholar, although I reluctantly admit that Part 2 of this study could count as Wittgenstein-scholarship. I must say that I feel even less kinship with LW after writing the present study, but also that I have gained perhaps even more respect for the technical rigor of the philosopher. The man and the philosopher had to deal with a lot of tensions (more than can be seen if one focuses on only a few isolated topics) and managed to retain a remarkable level of coherence and integrity through it all. All this being said, I want to repeat that I deeply disagree with many aspects of LW's outlook on the world. A taxonomy of objects, taking into account their function within practices, is one of the major contributions of Heidegger's early work (cf. section 4.1(B1) below). Larvor 2016 (p.c.) during a session of the Reading Group on Ferreirós' Mathematical Knowledge and the Interplay of Practices (Brussels, V.U.B., CLPS). This argument is very similar to the one against math's 'totalitarian' tendency to want to absorb not specifically mathematical aspects within its own formalism (e.g. model theory): By the way, this is especially true if one claims to adopt an agent-centered, cognitive, etc. approach! For the suggestion of a link between oracles and the way Ramanujan's work was received, see[START_REF] Rittberg | Epistemic Injustice in Mathematics[END_REF], §5. A particularly interesting case is 'songwriter's yoghurt', i.e. the improvised sounds that singers or songwriters produce before the lyrics of a song are written, as a placeholder for actual lyrics, and quite often the first stage of the process of writing actual lyrics. On pp. 111-112, Spengler summarizes this line of thought as follows: Der antike Mathematiker kennt nur das, was er sieht und greift. Wo die begrenzte, begrenzende Sichtbarkeit, das Thema seiner Gedankengänge, aufhört, findet seine Wissenschaft ein Ende. […] According to Roger Scruton[START_REF] Scruton | A Point of View: The Strangely Enduring Power of Kitsch[END_REF], "Kitsch is fake art, expressing fake emotions, whose purpose is to deceive the consumer into thinking he feels something deep and serious." (also quoted in https://en.m.wikipedia.org/wiki/Kitsch). This definition is exactly up the alley of what is meant here by 'epistemic kitsch'. I think the need for food and shelter would be a more accurate candidate for the most universally relevant soul-moving topic, with sexual desire perhaps a good second. and 1938 studied above, but in Ms-127,184-187, not dated, but written some time after 19440304, we still read "der Fluch des Einbruchs der math. Logik in die Mathematik". 235 What is important for our purposes is that this example (the disconnect between set-theoretical lingo and ordinary math) is supposed to illustrate LW's general assessment of the 'illness of an era' (cf. also "eine ganz instinktlose Zeit" (paragraph (A) here above), again in the context of set theory bashing).
It shows how LW's PhilMath is permeated by not only his conception of meaning as embeddedness in everyday practice, but also by the culture-critical concerns that he inherited from his maîtres à penser Karl Kraus and Oswald Spengler: it is remarkable to see how these different aspects coincide in this context.

Wittgenstein's critical remarks on math III: on paradoxes, on harmless contradictions and (maybe a little bit) on Gödel

In this section, I discuss a few excerpts in which LW discusses contradictions and paradoxes so as to criticize the way axiomatic systems are conceptualized in contemporary contributions to the Grundlagen debates. As many of these excerpts are often interpreted as criticism of Gödel's work, I will first briefly discuss the topic of 'LW on Gödel' in order to make clear what I will and will not deal with in this section.

introductory remarks: LW on Gödel

LW's remarks on Gödel (or rather: Gödel-related topics; see below) yielded a large, often somewhat technical literature. There were a few early, mostly dismissive, appraisals (e.g. (Kreisel 1958) 236 ), but since the 1990s, there has been specialized discussion involving such authors as Juliet Floyd, Mark Steiner, Victor Rodych, and more recently Timothy Lampert ((Floyd 1995); (Floyd 2001); (Floyd 2017); (Floyd and Putnam 2000); (Bays 2004); (Floyd and Putnam 2006); (Steiner 2001); (Rodych 2006); (Sayward 2005); (Rodych 2003); (Rodych 2002); (Rodych 1999); (Lampert 2013); (Lampert 2018)). Most of these contributions try to come to grips with the issue as to what exactly LW says about Gödel's proof and whether his apparently critical remarks are actually relevant criticism of Gödel's work or not. It is

235 It is worth repeating that LW was not the only one to disagree with Cantor's set theoretical innovation and that Poincaré and Kronecker also strongly disapproved (cf. section 1.2.3 above, where I also quoted Dieudonné, who, as a member of the Bourbaki-collective, was on board with the formalisation of math, but displayed the same very ambivalent attitude towards mathematical logic; for a very interesting remark on Bourbaki's skepticism towards the foundationalist project, see Schroeder (Schroeder 2021) p. 202).

236 For instance: "Here, for once, Wittgenstein also makes a justified objection (p. 130, 56) among all the wild shots which miss the mark: the one-sidedness of the consistency problem." ( (Kreisel 1958), p. 155).

(1) Rodych's account is doxographic (which is of course inherent to the genre of the encyclopedia entry), it is mainly a list of opinions on various topics (a list of 'isms'); therefore, it ignores completely LW's claim that that is not what he does; it also ignores the inherently polyphonic, exploratory style of LW's way of doing philosophy; for instance, concerning Gödel-like topics, Rodych simply assumes that LW is commenting on Gödel's famous proof, thereby completely ignoring LW's own claims to the contrary; the same goes for Rodych's account of LW's finitism, formalism, etc.
I don't think it is actually true that LW is an anti-finitist, in that he doesn't object to Cantor's Kalkül: he only objects to what is being said about that Kalkül; (2) Rodych's account is chronological, an aspect that is almost completely lacking in my account (mostly because this is outside the scope of my study): I focus mainly on LW's later work, and my account emphasizes the continuity between the later work and earlier work; I believe Rodych's account -by virtue of its doxographical design-inevitably underestimates the continuity in LW's work: what I see as various ways in which LW explores certain topics over time, will inevitably appear as totally different successive or competing opinions in a doxographical account; (3) Rodych's account does not really show how the specifically mathematical aspects of LW's work are a part of his philosophy at large, whereas my account emphasizes that the main themes in LW's PhilMath are the same themes he also addresses elsewhere. Next, let's look very briefly at Juliet Floyd's recent, short but dense, monograph Wittgenstein's Philosophy of Mathematics (Floyd 2021), which gives an account in which the evolution in LW's PhilMath is studied from the perspective of his views on aspect-seeing, a topic that is not limited to PhilMath, but reoccurs in different contexts within his work. From the even more general perspective developed here, aspect-seeing is one of the many avatars of LW's continuing focus on aspects of meaning that are not easily reducible to propositional truth, which is in its turn part of his Kantian preoccupation of what constitutes the 'given' for mathematics. This focus on the non-propositional aspects of the 'given' inherently ties in with his critique of the autonomy of math and his inherently anti-foundational stance in the Grundlagen-debates, which in turn is an expression of an underlying agenda with respect to the problem of meaning, in its turn a product of a culture-critical concern with authenticity. This does not mean that my reading of individual passages has to differ that much from Floyd's. Thus, for instance, I would agree with most of what Floyd says in the following passage about LW's reading of Gödel ( (Floyd 2021) 2021, pp. 71-72): Since a few decades, Philosophy of Science and related disciplines have made a "naturalistic turn", in that it is understood that the claims of these branches of philosophy (insofar as they have made this 'turn') should "at least be aligned with, if not explicitly grounded in, scientific practice" (Nersessian and MacLeod 2022). Even if some lines of thought in LW's work can be recuperated for the case of naturalism against Platonism, it could be argued that LW was as much an anti-anti-Platonist as an anti-Platonist: LW clearly acknowledged the intuitions that are expressed by Platonism; 313 LW would also agree -in some sense and to a certain extent-with, for instance, Gödel's Platonist arguments against formalism: for LW, axiomatic systems clearly do not exhaust math either, though of course he strongly disagrees with the validity of the consequences drawn by Platonists. Maddy's "second philosophy" does recuperate a number of aspects of LW's work, especially the apparently anti-Platonist strands, for the naturalist cause. However, our reading of LW suggests that her approach to philosophy as a prolongation of science, is not compatible with LW's overtly non-scientific view of philosophy: (1) LW's own vision of his task as a philosopher (therapy, critique, ...) 
are diametrically opposed to the scientism that underlies Maddy's naturalist stance (cf. also section 0.2(B) above, as well as section 3.2.2 below for the way LW differentiates philosophy from other approaches); (2) LW's own holism is directly opposed to the reductionism inherent in most types of scientism. Of course, this need not be a problem per se: "use" of a certain text for one's own purposes is OK, and "inspiration" works in mysterious ways. But this approach can't be sold as Wittgenstein-scholarship, i.e. it would be disingenuous to project these concerns onto LW's text. The issues are illustrated quite well in the confrontation between Lynch and Bloor: Lynch argues quite convincingly that Bloor's recuperation of LW for his own scientistic sociological approach deviates in important ways from LW's own outlook on the aims and methods of his philosophy (see (Lynch 1992), (Lynch 1997), (Kusch 2004b), (Kusch 2004a), (Bloor 2004), (Kusch 2006)). 313 I would personally argue -from a point of view that I would qualify as generally Wittgensteinian-that there is nothing wrong with talk about mathematical objects from a pragmatic point of view (if something functions as an object within a practice, it is an object within that practice); in that regard, mathematics is -again-like any other practice: there is no problem with the notion of mathematical objects, provided one thoroughly relativizes the notion of object, the way one would any other type of object; note that this does not preclude skepticism about the claims to the 'objectivity' of mathematics (cf. sections 1.1.2(C) above and 3.2;1(B) below). The materials collected and processed in the present study make us land squarely on the nonnaturalistic, anti-scientistic side of Wittgenstein-scholarship, and both Bloor's and Maddy's positions (however attractive they may be in the context of some of the debates they were devised to contribute to) do not appear to be viable as scholarly interpretations of LW's work. Especially the ways in which LW explicitly opposes his own work on mathematics to other approaches precludes any ambiguity in this regard and LW's Kantian focus on 'the given' (cf. section 3.3 below) suggests a deep continuity with 'philosophia prima '. 314 (C7) skepticism vs. anti-skepticism In the aftermath of Kripke's seminal work ((Kripke 1982)) on skepticism about rule-following, including the private language argument, LW's relation to skepticism has been at the forefront of Wittgenstein-scholarship: some authors claim that LW's arguments target skepticism; other authors claim that LW's argument are ultimately skeptical themselves (for an overview, see Kusch (Kusch 2006), already referred to above in connection with scientism and the debates between Bloor and Lynch). 315 I am ready to believe that Kripke's topic was an important one and that skepticism -in the general sense of asking the question as to whether the meaningfulness of our discourse is ultimately guaranteed or not-is a valid way to articulate a central preoccupation in LW's work, from the TLP (in which LW concedes the relative viability of solipsism, a related position) to ÜG (cf. the riverbed analogy in ÜG § §94-99, quoted in section 3.3(C) below), the last paragraphs of which were written in the very last weeks of LW's life: the problem of 'meaningfulness vs. nonsense' that runs throughout the present study coincides more or less completely with Kripke's 'skepticism' issue. 
In the context of LW's own development, the topic probably even originated in his involvement with the Grundlagen-issue in PhilMath: what is it that guarantees the reliability of math? Whereas LW started out within a logistic approach to the problem, the later LW's response to this question was that math is based in a messy hurly-burly of everyday practices, which most of the time does not lead to any problems at all. Among the passages studied in the above, one of the more interesting was the one in which LW pointed out that -with or without foundations-'a good angel' [ein guter Engel] is always needed in order for things to turn out 314 Of course, 'naturalism' is a label that can cover a very wide range of positions, some of which are very weak, and some of the above may not apply to some positions that do claim to be naturalistic. It is very hard to make any general arguments without arguing about semantics (in the vulgar sense of the word). For instance, the way the term 'naturalism' is applied to LW's work in McGinn 2021 (McGinn 2021), especially Chapter 7, to my mind has very little to do with 'naturalism' as intended in the present section. 315 NB: from the point of view developed here, Kripke's appeal to "community" as a substitute 'foundation' is simply wrong, both as an interpretation of LW's work and as such (cf. section 1.1.2(A1) above). OK (cf. section 2.3(C)). In other words: nothing actually guarantees that things will work out and there is nothing we can do about it, but this need not be a real problem in actual practice, and it typically isn't. Thus, the core of LW's anti-foundationalism about math coincides with the issues underlying the debates about skepticism and LW's PhilMath offers us a very clear way of approaching this issue: 316 the skeptic may be literally speaking 'right' within his own game, but his position is ultimately unhelpful and irrelevant for all practical purposes, in that everyday practice is always the ultimate criterion for meaningfulness, even in the case of math, despite the common claims to a special status of math on the part of practitioners. In other words: skepticism ultimately cannot be upheld in good faith. (C8) revisionism/criticism vs. anti-revisionism / "leave everything as it is" Many commentators (among many others: Maddy (Maddy 1993); Dawson (Dawson 2015);; Scheppers (Scheppers 2017) , Chapter 3, §1) have noted a paradoxical tension between (1) LW's outspoken anti-revisionism ("leave everything as it is") and ( 2) the critical aspects of LW's work in general, and the critical remarks that we focused on in Part 2 of this study in particular. Thus, Maddy says the following in her 1993 article "Wittgenstein's Anti-Philosophy of Mathematics" ((Maddy 1993), p. 55): Surely, one cannot deny the law of the excluded middle or rule out non-constructive existence proofs and at the same time leave "mathematics as it is". But what is the motivation for this prohibition? If philosophy provides compelling reasons to abandon the Platonist picture, if current mathematical practice is based on that picture, why shouldn't the result of philosophical analysis be allowed to reform that practice? Mightn't Wittgenstein's reluctance be a form of false modesty? [ 317 ] This reading of Wittgenstein's late views uncovers a tension between the upshot of his philosophical views and his insistence that philosophy alters nothing.( 5) It tempts us to downplay the non-interference remarks in favor of the presumed payoffs of his contentful philosophical conclusions. 
A directly opposed approach -my focus in this paper -would give pride of place to the non-interference claims and adjust the reading of the rest to match. So, as pointed out by Maddy, there are two ways of dealing with this tension: relatively brief overview. Suffice it here to merely remind the reader of the key concepts in my account: • LW's pragmatism, i.e. the fact that the locus of meaning is real-life practices, which implies that the relation between a symbolic system and "the world" is always mediated by a practice; in the case of math, this implies a focus on (1) mathematical technique [Kalkül] and (2) applications from which such techniques emerge and in which such techniques occur; • LW's holism about practice: a practice is a real-life multidimensional structure, involving not only the cognition and action of the agent, but also the cultural, biological and physical aspects of the context/world in which it takes place; this implies that propositional truth is ultimately rooted in non-propositional practices, which in its turn implies that LW does not share the idea of the ultimateness of the axioms, not even the ultimateness of propositional truth; 327 • LW's structuralism: for LW, the relations between the epistemic, the linguistic, the cognitive, the physical, the cultural, etc. dimensions of practice are internal to the structure of that practice, which means that it does not make sense to try and isolate -for instancethe epistemic aspect as meaningful in its own right, let alone as underlying the other aspects of practice; 328 • LW's everydayism: LW's insistence on the everyday (as opposed to whatever he finds undesirable) is a bottom-line aspect of his philosophical and existential outlook (as opposed to a result of his analyses) and is deeply intertwined with the culture-critical (Spenglerian) strands in his worldview. (C) philosophy as criticism: ethics & aesthetics: authenticity Throughout Part 2, I have illustrated the claim that LW's philosophy is essentially and pervasively critical and that 'authenticity' (as opposed to fakeness) is the core value that is at issue in this aspect of his work. To a certain extent I have been able to sketch the relation between the concept of everydayness and the concept of authenticity, and between LW's 327 Interestingly, LW shares with Kurt Gödel the idea that math cannot be reduced to an axiomatic system, but both authors then completely diverge on where this observation leads us: whereas Gödel interprets evidence for the limits of formalism as evidence for the existence of a Platonic mathematical universe 'out there', LW interprets the very same results in a deflationary way as evidence for the primacy of everyday practice as the irreducible locus of the meaningfulness of math. 328 At this point, it may be interesting to point out that there is a close conceptual link between LW's structuralism and his anti-foundationalism: if the epistemic aspect of a practice is inherently only one of several irreducible aspects of a practice, this obviously precludes any attempt to make any propositional content foundational. 
• in order to be able to speak of the 'uniqueness' of math (exceptionalism), one needs comparison, a broader, anthropological, perspective, without which the claim of uniqueness is a mere profession of faith; • in order to focus 'only on successful math' (whiggism) in a meaningful way, one needs to be able to put successfulness in general and successful math in particular in the context of all math-like alternatives and what it took for them to become successful: on the one hand, even ridiculously exotic practices can be very successful for centuries, for instance: various brands of numerology have been successful for thousands of years and continue to be successful; grossly imprecise rules of thumb and unsystematic ways of measuring and counting continue to be functional in a wide variety of technical contexts, etc.; on the other hand, i.e. conversely, those practices that made it as part of present-day academic math, are the result of an accumulation of highly contingent social/historical circumstances and processes (see below). To me this suggests -again-that the important question to be asked is: why are whiggism and exceptionalism -despite their self-evident and often commented on methodological and epistemological flaws-so popular among practitioners of PhilMath? A similar lack of perspective shows in the prevailing deference for mathematicians in general and the great geniuses in particular. For instance, José Ferreirós strikes a remarkably apologetic tone on pp. 96-97 of his (Ferreirós 2016), in which he points out that "Careful logicians such as Frege and Dedekind were very unhappy with the dots ". . ." in the expression {0, 1, 2, . . .}." He then explains: From our multilayered standpoint, which insists on the interplay of different practices and strata of knowledge, there is a natural way of understanding the epistemic role that the dots ". . ." play. I suggest understanding them as indicators of a systematic link, of an interplay within the web of practices. They indicate a systematic connection with perfectly well-known, antecedent practices: we know how to count since preschool, [...]. The receiver of that information may also have good knowledge of the systematic role that the successor function plays in a deductive presentation of arithmetic. In the context of Ferreirós' framework, this is a perfectly fine explanation, more or less in line with a Wittgensteinian one as presented in the above. 356 But then Ferreirós says the following in footnote 12, attached to the above excerpt: human vs. non-human distinction, but according to the meaningful-meaningless distinction, in terms of embedding vs. non-embedding in a practice (cf. section 1.1.1(H) above). (A2) desemantization One of the defining aspects of 'formalism' is that it is the result of 'desemantization' (in the sense of Dutilh Novaes 2012). 398 In a formal language, symbols are manipulated without taking into account any meanings that may have been associated with them. So: one construes a system of symbols that are manipulated without -in principle-paying attention to any type of semantics. We have a more or less clear idea of how it is possible to manipulate items according to rules, without attaching any meaning to the items (think of chess). To the extent that we operate with truly formal systems in a truly formal fashion in math, that would be a case desemantization. However, it remains to be seen whether that is what actually happens in the case of mathematical formalism (see below). (A3) depragmatization? 
There may also be an intuitive appeal to the notion of depragmatization in this context: desemantization (in the above sense) is ipso facto depragmatization, in the sense that the symbols are no longer seamlessly integrated in the practice they originally belonged in. In other words, the symbols that used to be informally used in deeply embedded practices, are taken out of these practices and are contemplated as stand-alone objects. In a way, LW thought in those terms: due to his inherent everydayism, LW did not make a clear distinction between everyday practice and practice is general. However, this intuitive notion of 'depragmatization' is inherently flawed: it is true that symbols in a formalized language are divorced from the particular practice in which they may have occurred having normal semantics and a normal use as a symbol; but the reason why symbols got depragmatized and/or desemanticized can only be understood in terms of a very concrete and specific practice, which is in its turn embedded in web of concrete and historically determined encompassing practices. (A4) desemantization-resemantization and depragmatization-repragmatization: problematic concepts So: desemantization and depragmatization are always immediately resemantization and repragmatization: as soon as a desemanticized set of symbols are used, they ipso facto acquire 398 Alongside computability, a concept that need not directly concern us in this study (but cf. the issue of 'dead vs. alive signs' and machine-generated math in section 1.1.1(H)). formal languages are a technology that allows us to reason in ways that are fundamentally different from how we spontaneously reason in more mundane circumstances. Specifically, they "allow us momentarily to 'turn off'" our tendency (our "computational bias") to automatically bring into play prior, contextual, knowledge whenever we try to solve a problem. It would be interesting to further reflect on the origins of logic (and formal reasoning in general) as part of ancient Greek legal practices. 400 According to Dutilh Novaes, one of the functions of desemantization is that it gives a degree of epistemic freedom, i.e. that one can manipulate symbols without interpretation, which allows for knowledge how without knowledge why. This in turn -according to Dutilh Novaesalso allows for a certain 'democratization' of knowledge in that the notation allows for solving problems by non-experts. Of course, one could also easily argue the opposite: depending on contexts (I mean: this would be an empirical/historical matter), the mastery of an abstract notation and the operations that allow for its actual application, require specialized training. 401 (A6) the history of formalism and bad faith We saw that for LW, nonsense / inauthentic use of language consists in divorcing utterances from their 'everyday' context (for instance: "metaphysical" language). This is also the problem with formalisms of all kinds, which -by definition-are 'desemanticized' to a certain degree. 400 Cf. Dutilh Novaes (Dutilh Novaes 2012), p. 68: quoting Netz 1999, on the importance of persuasion at the heart of ancient Greek math and logic ; similarly, p. 78: "In a slogan, an argument, proof, or demonstration is a discourse; a calculation is a procedure." 401 p. 200: a characteristic of processes of de-semantification is a certain degree of 'metaphysical freedom': one is allowed to use and manipulate signs even if it is not clear whether they in fact stand for any existing 'thing'. 
[…] Besides metaphysical freedom, de-semantification also seems to entail a certain degree of what could be described as 'epistemic freedom'. As noted by Kramer, and discussed in Chapter 3, signs can be manipulated without interpretation. This realm separates the knowledge of how to solve a problem from the knowledge of why this solution functions. (Kramer 2003: 532) Of course, this separation of knowledge-how from knowledge-why may give rise to suspicions concerning surveyability and reliability -recall the need for epistemic justification of the notational techniques developed within the abacus tradition discussed in Heeffer 2007 and mentioned in Chapter 3. But, despite these legitimate concerns, if the notation somehow manages to establish itself as reliable, then its application typically represents a cognitive boost for the agent, precisely in the senses often discussed in the extended cognition literature. Moreover, notice that an effective and reliable calculating notation may also represent a democratization of knowledge: cognitive tasks which would otherwise only be carried out by experts can now be carried out by a wider range of agents. p. 202: Long before the computer became a universal medium and a programmable machine, we developed the computer 'in ourselves', which is understood here as the cognitive use of algorithmic sign-languages that are freed of the constraints of interpretation. (Krämer 2003: 534; emphasis added) p. 202: the view defended here is that de-semantification is neither a necessary nor sufficient condition for the cognitive boost effect, but it may greatly enhance it (for reasons which will become clear shortly). (B4) two types of uses More concretely: it has been said that there are basically two ways in which you can use a formal system within the context of mathematics or something like mathematics: 1. as a formal representation of a pre-existing informal model: you operate it as a formal model and study it as such, but your intentions remain in actual practice connected to the underlying model; 2. as a stand-alone system: you can start to manipulate the symbols freely and creatively without regard to their representational potential; we could call this type of use "poetic" (see below; cf. also the idea of 'epistemic freedom' in Dutilh Novaes 2012, p. 200). Both types of use appear to be present within mathematical practice. (C) On poetic practices Everyday human activities are eminently purposeful, i.e. steered/driven by pragmatic coherence. For instance, an agent walks toward the cupboard, to take out a plastic tile with detergent and sponges inside, in order to the dishes... However, some of our actions do not seem to obey the same kind of purposefulness (cf. moving one's body to go from A to B in order to get object C for the purpose of using in as a D within the context of practice E vs. moving one's body as part of a sport, a dance, or a game). A number of our activities appear to be gratuitous and will be conceptualized in a variety of ways (NB: these options are not mutually exclusive): -as ritual behavior, including sacred contexts (Staal (Staal 1996); (Staal 1979)); -as ludic behavior (Huizinga (Huizinga 1938); Netz (Netz 2009)), as in games and sports; -as art, as in music, dance and poetry; 406 -or sometimes also as a symptom of mental illness. Thus, the desire to play around with symbolic systems is not unique to math: this aspect of math is not that different from the way in which natural languages are used in poetry, i.e. 
by exploiting the formal characteristics of symbols (their ability to rhyme or their ability to form meters) and viewing the meaning as emerging from that play; the parallel is interesting. An example would be repetitive rhythm, alliteration or rhyme, which can give rise to a specialized, skillful technique (poetry), but which is also found abundantly in spontaneous conversation (Sacks, Tannen, Jefferson (Tannen 1989); (Jefferson 1996)) and in the speech of schizophrenics and other mentally ill ( (Cardella 2018); see also the literature on glossolalia 406 Cf. also 'talim': formal codes as instructions for weaving complex patterns in shawls and carpets. Interestingly, the word talim has a similar etymology as mathematics: item of teaching. A quick preliminary search also shows that Spengler seems to not share with LW the positive valuation of everydayness. For instance, on p.180 he operates with the distinction between "the everyday person [der alltagliche Mensch]" vs. "the significant person [der bedeutende Mensch]", in a way that reminds us of Heidegger, but not at all of LW: Notes on epistemic bad faith and epistemic bad taste Throughout Part 2 of this study, I pointed out that LW systematically uses ethical and/or aesthetical vocabulary referring to various avatars of the concept of inauthenticity (delusion, fraud, hocus pocus, pathos, ...) to criticize various aspects of mathematical discourse he objects to. In this section, I would like to offer a few brief suggestions as to how LW's critical attitude could still apply to contemporary mathematical discourse (despite the fact that I may not be the right person to actually do the work), by developing the concepts 'epistemic bad taste' ('epistemic kitsch') and 'epistemic bad faith', introduced in section 2.0.3, a little further. (A) epistemic kitsch Epistemic bad taste / epistemic kitsch (my terms) occurs when the importance ('depth', 'mystery', ...) of a claim is proclaimed by appealing to the reader's emotions or ideological convictions, using cheap rhetorical devices, imagery or analogies, thus bypassing both empirical fact and rational thought. 417 In section 2.0.3, I referred to a number of cases in which LW objected to pathos, excitement, sensationalism, vertiginous imagery, theatrics, in various contexts, and throughout Part 2, we observed that this basically aesthetic evaluation was a recurrent theme in LW's criticism of set-theoretical verbiage, but also of the standard interpretation of paradoxes and contradictions in formal systems. In many cases, LW emphasized the ultimately trivial nature of the mathematical technique or result itself. Examples are: • dramatizing the consequences of paradoxes in mathematical formalisms (for instance the Gödel-like ones), whereas their actual status within actual math is often marginal at best; • the 'head-spinning' image of more and more reals crammed into a smaller and smaller space as an illustration of a theory of the reals, whereas one could also point out the trivial fact that the notion of 'ordering' on a line does not work for the reals in the same way that it works for natural numbers, or that the non-rational reals are not 'numbers' in the same sense as rationals, i.e. 
that rational and non-rational numbers have less in common with each other than one might think; etc.; • the fake depth attributed to diagonal arguments in standard mathematical discourse, as opposed to the trivial fact that even children can easily learn and comprehend the diagonal technique itself; • the outlandishness of Hilbert's hyperbolically exalted notion of 'Cantor's paradise', to which LW replied with the sardonic idea that that it would be equally legitimate to view Cantor's contributions as a satirical joke. Let me briefly demonstrate how doubtful aesthetics are very much part of present-day discourse about math by taking a brief look at a random example. 418 Chapter 1 of J.L. Schiff's 2020 popularizing book The Mathematical Universe. From Pythagoras to Planck starts with the title "The Mystery of Mathematics", under which he prints the following two mottos: "Pure Mathematics is religion... -Philosopher Friedrich von Hardenberg" and "It is impossible to be a mathematician without being a poet in soul..." -Mathematician Sofia Kovalevskaya". The title of Chapter 2, "From Here to Infinity", is followed by "The Infinite! No other question has ever moved so profoundly the spirit of man... -Mathematician David Hilbert" and "The interior of our skulls contains a portal to infinity... -Writer Grant Morrison". In none of the quoted cases, the exalted formulation contributes in any way to a proper understanding of the subject matter, on the contrary, the quotes are systematically mystifying the subject matter they are supposed to contribute to: -One may agree with Kovalevskaya's idea that a mathematician is -in a way-like a poet but one wonders what Kovalevskaya thinks a poet does: she would probably not agree 418 Another spectacular example is Hoffmann 2017(2) on Gödel's famous results ((Hoffmann 2017), p. 318): "Betrachten wir Gödels Satz VII im Lichte des Hauptresultats, so können wir daraus ein atemberaubendes Ergebnis ableiten. Aus ihm folgt, dass unentscheidbare Formeln keine scheuen Wesen sind, die sich ausschließlich in den schattigen Winkeln einer praxisfremden Mathematik tummeln. Das Gegenteil ist der Fall: Wir finden sie im Herzen der Mathematik, inmitten der elementaren Zahlentheorie". Whatever one may think of the merits of Gödel's results, it would be hard to maintain that Gödel's undecidable formula occurs "at the heart of math, in elementary arithmetic". purposes of "post truth"-styled attacks on science and/or academia as a whole. For the potential dangers of epistemic kitsch in science communication, see section 3.2.3(D). A particularly interesting case for further discussion (because it is not only part of the vulgarizing literature) would be the verbiage surrounding the applicability of math to physics, as introduced in modern PhilMath by Wigner (Wigner 1960), which has been called 'mysterious', 'unreasonable', 'awe-inspiring', 'a miracle', etc. (see also Hacking (Hacking 2014) for some nice Of course, the point is not to deny the existence of the phenomenon of awe and wonder, or to argue that it shouldn't exist. The point is that this kind of emotion is not an argument for anything: people get emotional for the silliest reasons. The emotion is a fact, it can serve as "raw materials" for philosophy to deal with, but it is as such no argument at all, let alone for the ontological status of mathematical objects or the epistemological status of math as such. 
420 (B) epistemic bad faith Epistemic bad faith / epistemic pretense occurs when someone apparently conforms to (or pretends to conform to) formal criteria for acceptable discourse and at the same time does not participate in those aspects of the encompassing practice that makes (or would make?) such discourse meaningful. 420 For this notion of "raw materials for philosophy", see Ms-124,35-36: Das Piedestal der Mathematik, ist die Rolle || ist eine bestimmte Rolle, welche ihre Sätze in unsern Sprachspielen spielen. Die Sätze, welche Hardy in seinem -elenden -Buch, "Apology of a Mathematician", als Ausdruck seiner Philosophie der Mathematik hinstellt, sind noch gar nicht Philosophie, sondern können wie alle ähnlichen Ergüsse, als Rohmaterial des Philosophierens dienen, & sollten dann nicht in der Form von Meinungen, Feststellungen, oder Axiomen, ausgesprochen werden, sondern in der Form: "Ich bin geneigt zu sagen: …", "Ich möchte immer sagen: …". Worauf das Philosophieren erst beginnen soll, (um) uns diese seltsame Neigung zu erklären. || ; uns diese …. || sondern können -wie alle ähnlichen Ergüsse -Rohmaterial des Philosophierens sein; & sollten … See also (Floyd 2012) Floyd 2012, p. 245, ftn. 33: "Hardy 1940 is mentioned at MS 124, p. 35-a draft remark for PI §254-where Wittgenstein writes that "the sentences that Hardy sets forth as expression of his philosophy of mathematics in his miserable book 'Apology of a Mathematician' are in no way philosophy, but could-like all similar outpourings-be conceived as raw material of philosophizing."". 421 Does anyone actually argue that actual mystical practices are essential to mathematics? I would not be surprised that this was actually the case in the context of the original Pythagorean/Platonist school(s), but are there any equivalents in modern mathematics? However interesting it would be to be able to study such individual cases (which must exist...), my main point still stands: for most authors indulging in (pseudo-)mystical talk, this does not correspond to anything practical, and mainstream math is not presented (let alone taught) as a mystical practice. Most importantly, no mystical insights are necessary to calculate and to apply calculations, or mathematical technique in general, in science, engineering, accounting, etc. (B1) Bad faith 1: Cantor on freedom A first obvious interesting topic is the relation between Cantor's work in math and his welldocumented theological endeavours (for his correspondence with theologians including the Pope see e.g. Dauben 1977 (Dauben 1977); and Newstead 2009 (Newstead 2009)). It is interesting to observe that even an apparently conservative and prudent (see section 3.2.3(A) above) historian like José Ferreirós admits that Cantor's transfinites perhaps might not have existed (ever), if not for Cantor's intervention: "[…] there is good reason to think that the results would have been established even if Cantor had never lived. Developments could have been much slower, and we may doubt whether the transfinite ordinals would have been introduced, but I do not see reasons to have similar doubts concerning the results in Cantor (1874)." ( (Ferreirós 2016) Ferreirós 2016, pp. 256-257). Cantor's transfinites are not his only contribution that is a case in point of definitely nonmathematical aspects determining apparently intra-mathematical decisions: the origins of his famous proclamation of the 'freedom of mathematics' is equally confusing. 
Interestingly, and perhaps somewhat bizarrely or at least ironically, Cantor's idea of the freedom of mathematics, meaning that the only criterion for the acceptance of mathematical theory should be its self-consistency, is a good example of the non-autonomous nature of mathematical practice and discourse in that it was at its origins a more political than an epistemological idea. According to Dauben 1989, Cantor admitted that when he first formulated his idea of 'mathematical freedom', he was also thinking of the oppression he experienced on the part of his nemesis Kronecker, who criticized Cantor's revolutionary contributions in the strongest possible terms (cf. Dauben 1989: " [...] the essence of mathematics is exactly its freedom. This was not simply an academic or philosophical message to his colleagues, for it carried as well a hidden and deeply personal subtext. It was, as he later admitted to David Hilbert, a plea for objectivity and openness among mathematicians." ). The idea retained its existential and political connotations throughout Cantor's career. Cantor's activities in academic politics included his involvement in the foundation of an independent union of mathematicians, which he intended to offer an open platform in which all mathematicians could 'freely' discuss mathematical results, without any political pressure from any academic establishment, and the notion of freedom retained its ambiguity as both an epistemological idea and a political/exitential one: Applications might eventually determine which mathematical theories were useful, but for mathematicians, Cantor insisted that the only real question was consistency. This of course was just the interpretation he needed to challenge an established mathematician like Kronecker. Cantor clearly felt obliged, early in his career, to plead as best he could for a fair hearing of his work. So long as it was self-("Forms of Life", "our lives", ...). The fact that Cantor came back on an earlier decision and chose for another option at his disposal, is particularly revealing for our purposes. 422 (B2) Bad faith 2: Gödel's little magic trick LW's criticism of Gödel's Kunststückchen ['little magic trick'] (see section 2.3(B)) is a good example: LW objects to KG's bad faith in presenting his completeness-consistency proof as a normal result in normal arithmetic, whereas it is almost obvious that the only reason one would want to even start trying to prove this, is in order to make the philosophical point that Gödel actually did make (I mean: Platonism). Historiographical work 423 shows that Gödel's account of his own intellectual biography is far from transparent, most notably when it comes to the origins of his Platonist convictions and the relationship between the latter and his work in mathematical logic: if it is true that Gödel contradicted himself about the chronological order and the direction of the causal relation between his philosophical-religious convictions and his work on the completeness and consistency of formal systems, a case can be made for Gödel's bad faith in a literal and very concrete sense. However, as is often the case with autobiographical accounts, it is not clear to what extent Gödel was strategic about the way he presented his development at various moments in his career, and to what extent he was genuinely enacting a psychodrama of which he was perhaps not really aware himself. 
Now that his notebooks have (partially) become available for further study, it has already become clear to what extent Gödel's work was driven by religious-philosophical-theological concerns and further study of this material will undoubtedly give rise to a better understanding of the issues at hand. Whatever the conclusions may turn out to be with respect to the biographical aspect of Gödel's particular case (however interesting they may be, not only for their own sake, but also because of the historical importance and iconic status of Gödel's work), the properly epistemic aspect of epistemic bad faith is what is of interest here: how should we deal with contents that are not incorrect, according to established (formal) criteria of correctness, but nonsensical in that they have no real connection with anything that mathematicians actually do, as mathematicians? or: why is it that mathematicians / philosophers of mathematics feel the need to make claims that do not follow from what they do and at the same time refuse to look into the obvious extra-mathematical ideological connections of their claims? 422 Cf ; what we said about the various options available to mathematicians if they stumble on undesirable situations/results. 423 For instance, even the materials collected by Floyd & Kanamori on their own would suffice to make the point; see (Floyd and Kanamori 2016), pp. 259-260, incl. footnotes 32 and 33. many mainstream mathematicians and philosophers felt as a threat to their monist convictions. An interesting case in point would be the relatively recent debates about pluralism vs. nonpluralism (a.k.a. the multiverse vs. the universe) in set-theory (for a nice overview, see e.g. Koellner (Koellner 2013b); (Koellner 2013a)). On the one hand, pluralism is de facto possible: one can start and play around with an indeterminate number of different mathematical systems, pluralists actually do so. On the other hand, many leading mathematicians appear to object to the idea, mostly in the name of convictions that can be reduced to the cluster of concepts that I called 'mathematical monism'. I believe a 'pragmatic' (as opposed to a 'semantic') approach to meaning in general can contribute to a better understanding of mathematical meaning. and thus may help shed light on the debates surrounding the pluralist vs. non-pluralist conceptions of set theory, by focusing on (1) ways in which a certain awareness of pragmatic / non-semantic aspects of mathematical meaning already play a role in these debates (focus on such concepts as 'choice', 'freedom', ... vs. 'intuitively correct', 'natural', ...); (2) how a systematically pragmatic rephrasing of the problem yields a better understanding of what is a stake in these debates. As suggested in the above, the problem cannot be understood from inside the axiomatic systems, not even in terms of the relation between a formal system and its intended model(s). What counts as a desirable solution is a matter of pragmatics and -apparently-crucially involves extra-mathematical concepts, values and/or practices. Again: these issues cannot easily be dismissed as external to mathematics in that they concern core issue in theoretical math and the very identity of the field.
04095697
en
[ "info.info-lo", "info.info-lo" ]
2024/03/04 16:41:18
2007
https://inria.hal.science/hal-04095697/file/truthvalues.pdf
Gilles Dowek email: [email protected] Truth values algebras and proof normalization come Introduction Proving that a theory has the cut elimination property has some similarities with proving that it has a model. These similarities appear, for instance, in the model theoretic proofs of cut elimination, where cut elimination is obtained as a corollary of a strengthening of the completeness theorem, expressing that if a formula is valid in all models of a theory, then it has a cut free proof in this theory. Such a method has been used, for instance, by Schütte, Kanger, Beth, Hintikka and Smullyan. It has then been used by Tait [START_REF] Tait | A non constructive proof of Gentzen's Hauptsatz for second order predicate logic[END_REF], Prawitz [START_REF] Prawitz | Hauptsatz for higher order logic[END_REF], Takahashi [START_REF] Takahashi | A proof of cut-elimination theorem in simple type theory[END_REF] and Andrews [START_REF] Andrews | Resolution in type theory[END_REF] to prove cut elimination for simple type theory. It has been generalized, more recently, by De Marco and Lipton [START_REF] Marco | Completeness and cut-elimination in the intuitionistic theory of types[END_REF] to prove cut elimination for an intuitionistic variant of simple type theory, by Hermant [START_REF] Hermant | A model based cut elimination proof[END_REF][START_REF] Hermant | Semantic cut elimination in the intuitionistic sequent calculus[END_REF] to prove cut elimination for classical and intuitionistic theories in deduction modulo and by Okada [START_REF] Okada | A uniform semantic proof for cut elimination and completeness of various first and higher order logics[END_REF] to prove cut elimination for intuitionistic linear logic. An alternative method to prove cut elimination is to prove that all proofs strongly normalize. Following Tait [START_REF] Tait | Intentional interpretations of functionals of finite type I[END_REF] and Girard [START_REF] Girard | Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination des coupures dans l'analyse et la théorie des types[END_REF], this is proved by assigning a set of proofs, called a reducibility candidate, to each formula. Here also, the proofs have some similarities with the construction of models, except that, in these models, the truth values 0 and 1 are replaced by reducibility candidates. This analogy has been exploited in a joint work with Werner [START_REF] Dowek | Proof normalization modulo[END_REF], where we have defined a notion of reducibility candidate valued models, called pre-models, and proved that if a theory in deduction modulo has such a model, then it has the strong normalization property. The fact that both cut elimination proofs and strong normalization proofs proceed by building models raises the problem of the difference between cut elimination and strong normalization. It is well-known that strong normalization implies cut elimination, but what about the converse ? This problem can be precisely stated in deduction modulo, where instead of using an ad hoc notion of cut for each theory of interest, we can formulate a general notion of cut for a large class of theories, that subsumes the usual ad hoc notions. This problem has been solved by Hermant [START_REF] Hermant | Méthodes sémantiques en déduction modulo[END_REF] and surprisingly the answer is negative: there are theories that have the cut elimination property, but not the strong normalization property and even not the weak normalization property. 
Thus, although the model theoretic cut elimination proofs and the strong normalization proofs both proceed by building models, these methods apply to different theories. In this paper, we focus on the model theoretic characterization of theories in deduction modulo that have the strong normalization property. It has been proved in [START_REF] Dowek | Proof normalization modulo[END_REF] that a theory has the strong normalization property if it has a reducibility candidate valued model. However, the usual model constructions use very little of the properties of reducibility candidates. In particular, these constructions seem to work independently of the chosen variant of the closure conditions defining reducibility candidates. This suggests that this notion of reducibility candidate valued model can be further generalized, by considering an abstract notion of reducibility candidate. Abstracting this way on the notion of reducibility candidate leads to introduce a class of algebras, called truth values algebras, that also generalize Heyting algebras. However there is an important difference between truth values algebras and Heyting algebras: in a Heyting algebra valued model the formula P ⇔ Q is valid if and only if the formulae P and Q have the same denotation. In particular, all theorems have the same denotation. This is not necessarily the case in truth values algebra valued models where two theorems may have different denotation. Thus, truth values algebra valued models are more "intentional" than Heyting algebra valued models. In particular, it is possible to distinguish in the model between the computational equivalence of formulae (the congruence of deduction modulo, or the definitional equality of Martin-Löf's type theory) and the provable equivalence: the denotations of two computationally equivalent formulae are the same, but not necessarily those of two logically equivalent formulae. Thus, independently of normalization, this generalization of Heyting algebras seems to be of interest for the model theory of deduction modulo and type theory. We shall first introduce the notion of truth values algebra and compare it with the notion of Heyting algebra. Then, we shall consider plain predicate logic, define a notion of model based on these truth values algebras and prove a soundness and a completeness theorem for this notion of model. We shall then show that this notion of model extends to deduction modulo. Finally, we shall strengthen the notion of consistency into a notion of super-consistency and prove that all super-consistent theories have the strong normalization property. Truth values algebras Definition Definition 1 (Truth values algebra). Let B be a set, whose elements are called truth values, B + be a subset of B, whose elements are called positive truth values, A and E be subsets of ℘(B), ⊤ and ⊥ be elements of B, ⇒, ∧, and ∨ be functions from B × B to B, ∀ be a function from A to B and ∃ be a function from E to B. The structure B = ⟨B, B + , A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ is said to be a truth value algebra if the set B + is closed by the intuitionistic deduction rules i.e. if for all a, b, c in B, A in A and E in E, 1. if a ⇒ b ∈ B + and a ∈ B + then b ∈ B + , 2. a ⇒ b ⇒ a ∈ B + , 3. (a ⇒ b ⇒ c) ⇒ (a ⇒ b) ⇒ a ⇒ c ∈ B + , 4. ⊤ ∈ B + , 5. ⊥ ⇒ a ∈ B + , 6. a ⇒ b ⇒ (a ∧ b) ∈ B + , 7. (a ∧ b) ⇒ a ∈ B + , 8. (a ∧ b) ⇒ b ∈ B + , 9. a ⇒ (a ∨ b) ∈ B + , 10. b ⇒ (a ∨ b) ∈ B + , 11. (a ∨ b) ⇒ (a ⇒ c) ⇒ (b ⇒ c) ⇒ c ∈ B + , 12. 
the set a ⇒ A = {a ⇒ e | e ∈ A} is in A and the set E ⇒ a = {e ⇒ a | e ∈ E} is in A, 13. if all elements of A are in B + then ∀ A ∈ B + , 14. ∀ (a ⇒ A) ⇒ a ⇒ ( ∀ A) ∈ B + , 15. if a ∈ A, then ( ∀ A) ⇒ a ∈ B + , 16. if a ∈ E, then a ⇒ ( ∃ E) ∈ B + , 17. ( ∃ E) ⇒ ∀ (E ⇒ a) ⇒ a ∈ B + . Definition 2 (Full). A truth values algebra is said to be full if A = E = ℘(B), i.e. if ∀ A and ∃ A exist for all subsets A of B. Definition 3 (Trivial). A truth values algebra is said to be trivial if B + = B. Example 1. Let B = {0, 1}. Let B + = {1}, A = E = ℘(B), ⊤ = 1, ⊥ = 0, ⇒, ∧, ∨ be the usual boolean operations, ∀ be the function mapping the sets {0} and {0, 1} to 0 and ∅ and {1} to 1 and ∃ be the function mapping the sets ∅ and {0} to 0 and {1} and {0, 1} to 1. Then ⟨B, B + , A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ is a truth value algebra. Example 2. Let B be an arbitrary set, B + = B, A = E = ℘(B) and ⊤, ⊥, ⇒, ∧, ∨, ∀ and ∃ be arbitrary operations. Then ⟨B, B + , A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ is a trivial truth value algebra. Pseudo-Heyting algebras In this section, we show that truth values algebras can alternatively be characterized as pseudo-Heyting algebras. Definition 4 (Pseudo-Heyting algebra). Let B be a set, ≤ be a relation on B, A and E be subsets of ℘(B), ⊤ and ⊥ be elements of B, ⇒, ∧, and ∨ be functions from B × B to B, ∀ be a function from A to B and ∃ be a function from E to B, the structure B = ⟨B, ≤, A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ is said to be a pseudo-Heyting algebra if for all a, b, c in B, A in A and E in E, (the relation ≤ is a pre-order) -a ≤ a, -if a ≤ b and b ≤ c then a ≤ c, ( ⊤ and ⊥ are maximum and minimum elements (notice that these need not be unique)) -a ≤ ⊤, -⊥ ≤ a, (a ∧ b is a greatest lower bound of a and b and and a ∨ b is a least upper bound of a and b (again, these need not be unique)) -a ∧ b ≤ a, -a ∧ b ≤ b, -if c ≤ a and c ≤ b then c ≤ a ∧ b, -a ≤ a ∨ b, -b ≤ a ∨ b, -if a ≤ c and b ≤ c then a ∨ b ≤ c, (the set A and E have closure conditions) -a ⇒ A and E ⇒ a are in A, ( ∀ and ∃ are infinite greatest lower bound and least upper bound) Proof. -Using 1. 2. and 3. we get that a ⇒ a ∈ B + . -if a ∈ A then ∀ A ≤ a, -if for all a in A, b ≤ a then b ≤ ∀ A, -if a ∈ E then a ≤ ∃ E, -if for all a in E, a ≤ b then ∃ E ≤ b, -Using 1. 2. and 3. we get that a ⇒ b ∈ B + and b ⇒ c ∈ B + then a ⇒ c ∈ B + . -Using 1. 2. and 4. we get that a ⇒ ⊤ ∈ B + . -⊥ ⇒ a ∈ B + is condition 5. -(a ∧ b) ⇒ a ∈ B + is condition 7. -(a ∧ b) ⇒ b ∈ B + is condition 8. -Using 1. 2. 3. and 6. we get that if c ⇒ a ∈ B + and c ⇒ b ∈ B + then c ⇒ (a ∧ b) ∈ B + , -a ⇒ (a ∨ b) ∈ B + is condition 9. -b ⇒ (a ∨ b) ∈ B + is ≤ a then ⊤ ≤ b. 2. We have ( ⊤ ∧ a) ∧ b ≤ ⊤ ∧ a ≤ a. Hence ⊤ ≤ a ⇒ b ⇒ a. 3. Let x = ⊤ ∧ (a ⇒ b ⇒ c) ∧ (a ⇒ b) ∧ a. We have x ≤ a ⇒ b ⇒ c, x ≤ a ⇒ b and x ≤ a. Using the lemma three times, we get 13. If all elements of A are in B + then ∀ A ∈ B + . Indeed, for all elements x of A, ⊤ ≤ x hence ⊤ ≤ ∀ A. 14. Let x = ⊤ ∧ ∀ (a ⇒ A) ∧ a. Let y be an arbitrary element of A. Notice that, by definition of a ⇒ A, a ⇒ y ∈ a ⇒ A. We have x ≤ ∀ (a ⇒ A) hence x ≤ a ⇒ y. We also have x ≤ a, hence using the lemma x ≤ y. For all y ∈ A, we have x ≤ y, x ≤ c. Hence ⊤ ≤ (a ⇒ b ⇒ c) ⇒ (a ⇒ b) ⇒ a ⇒ c. 4. We have ⊤ ≤ ⊤. 5. We have ⊤ ∧ ⊥ ≤ ⊥ ≤ a, thus ⊤ ∧ ⊥ ≤ a. Hence ⊤ ≤ ⊥ ⇒ a. thus x ≤ ∀ A. Hence ⊤ ≤ ∀ (a ⇒ A) ⇒ a ⇒ ( ∀ A). 15. If a ∈ A, then we have ⊤ ∧ ( ∀ A) ≤ ∀ A ≤ a. Hence ⊤ ≤ ( ∀ A) ⇒ a. 16. If a ∈ A, then we have ⊤ ∧ a ≤ a ≤ ∃ A. Hence ⊤ ≤ a ⇒ ( ∃ A). 17. Let x = ⊤ ∧ ( ∃ A) ∧ ∀ (A ⇒ a). We have x ≤ ( ∃ A) and x ≤ ∀ (A ⇒ a). 
We have x ≤ ( ∃ A) and x ≤ x thus x ≤ ( ∃ A) ∧ x. As x ≤ ∀ (A ⇒ a). For all y in A, x ≤ y ⇒ a, i.e (x ∧ y) ≤ a. For all y in A, we have y ∧ x ≤ x ∧ y ≤ a i.e. y ≤ x ⇒ a. Thus, ∃ A ≤ x ⇒ a, i.e. ∃ A ∧ x ≤ a. By transitivity, we get x ≤ a. Hence ⊤ ≤ ( ∃ A) ⇒ ∀ (A ⇒ a) ⇒ a. Definition 5 (Heyting algebra). A pseudo-Heyting algebra is said to be a Heyting algebra if the relation ≤ is antisymmetric -x ≤ y ⇒ y ≤ x ⇒ x = y. Remark. If the pseudo-Heyting algebra ⟨B, ≤, A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ is a Heyting algebra, then the set B + = {x | ⊤ ≤ x} is the singleton { ⊤}. Indeed, if a ∈ B + then ⊤ ≤ a and a ≤ ⊤. Hence a = ⊤. Definition 6. A function F from a truth value algebra B 1 to a truth value algebra B 2 is said to be a morphism of truth values algebras if -x ∈ B + 1 if and only if F (x) ∈ B + 2 , -if A ∈ A 1 then F (A) ∈ A 2 , if E ∈ E 1 then F (E) ∈ E 2 , -F ( ⊤1 ) = ⊤2 , F ( ⊥1 ) = ⊥2 , F (a ⇒1 b) = F (a) ⇒2 F (b), F (a ∧1 b) = F (a) ∧2 F (b), F (a ∨1 b) = F (a) ∨2 F (b), F ( ∀1 A) = ∀2 F (A), F ( ∃1 E) = ∃2 F (E). Morphisms of pseudo-Heyting algebras are defined in a similar way except that the first condition is replaced by Remark. We have proved that, in the definition of Heyting algebras, the antisymmetry is useless and can be dropped. The equivalence of truth values algebras and pseudo-Heyting algebras shows that antisymmetry is the only property that can be dropped and that truth values algebras are, in some sense, the best possible generalization of Heyting algebras, as we cannot require less than closure by intuitionistic deduction rules. -x ≤ 1 y if and only if F (x) ≤ 2 F (y). Examples of truth values algebras We have seen that the algebra {0, 1} is a truth value algebra and more generally that all Heyting algebras are truth values algebras. We give in this section two examples of truth values algebras that are not Heyting algebras. Example 3. The truth value algebra T 1 is defined as follows. The set T 1 is {0, I, 1} and the set T + 1 is {I, 1}. The sets A and E are ℘(T 1 ). The functions ⊤, ⊥, ∧, ∨, ∀ and ∃ are the same as in the algebra {0, 1}, except that their value on I is the same as their value on 1. For instance the table of the operation ∨ is 0 I 1 0 0 1 1 I 1 1 1 1 1 1 1 The function ⇒ is defined by the table 0 I 1 0 1 1 1 I 0 1 1 1 0 I I Notice that as I ⇒ 1 and 1 ⇒ I are both in T + 1 we have I ≤ 1 and 1 ≤ I. Hence the relation ≤ is not antisymmetric and the truth value algebra T 1 is not a Heyting algebra. Example 4. The truth value algebra T 2 is similar to T 1 , except that the function ⇒ is defined by the table 0 I 1 0 1 1 I I 0 1 I 1 0 1 I Ordered truth values algebras We consider truth values algebras extended with an order relation ⊑ on B. This order relation extends to sets of truth values in a trivial way: A ⊑ B if for all x in A there exists a y in B such that x ⊑ y. Definition 7 (Ordered truth values algebra). An ordered truth values algebra is a truth values algebra together with a relation ⊑ on B such that -⊑ is an order relation, -B + is upward closed, -⊤ is a maximal element, -∧, ∨, ∀ and ∃ are monotonous, ⇒ is left anti-monotonous and right monotonous. Definition 8 (Complete ordered truth values algebra). A ordered truth values algebra is said to be complete if every subset of B has a greatest lower bound for ⊑. Notice that this implies that every subset also has a least upper bound. We write glb(a, b) and lub(a, b) the greatest lower bound and the least upper bound of a and b for the order ⊑. Example 5. The algebra T 1 ordered by 0 ⊑ I ⊑ 1 is complete. 
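The tables of Examples 3 and 4 are small enough to be checked mechanically. The following Python sketch is not part of the paper: it encodes T 1 and T 2 (the intermediate truth value is written as the string "I", an encoding chosen only for this illustration) and tests some of the closure conditions of Definition 1, together with the failure of antisymmetry of ≤ in T 1 noted in Example 3.

# Finite truth values algebras T1 and T2 from Examples 3 and 4,
# restricted to the propositional operations (no quantifiers).
B = ["0", "I", "1"]
POS = {"I", "1"}                # B+ : positive truth values

def collapse(x):                # both algebras treat I like 1 for "and"/"or"
    return "1" if x == "I" else x

def meet(a, b):                 # conjunction, computed as in the {0,1} algebra
    return "1" if collapse(a) == "1" and collapse(b) == "1" else "0"

IMP_T1 = {("0","0"):"1", ("0","I"):"1", ("0","1"):"1",
          ("I","0"):"0", ("I","I"):"1", ("I","1"):"1",
          ("1","0"):"0", ("1","I"):"I", ("1","1"):"I"}

IMP_T2 = {("0","0"):"1", ("0","I"):"1", ("0","1"):"I",
          ("I","0"):"0", ("I","I"):"1", ("I","1"):"I",
          ("1","0"):"0", ("1","I"):"1", ("1","1"):"I"}

def check(imp):
    ok = True
    for a in B:
        for b in B:
            # condition 2: a => (b => a) is positive
            ok &= imp[(a, imp[(b, a)])] in POS
            # condition 1: closure of B+ under modus ponens
            if imp[(a, b)] in POS and a in POS:
                ok &= b in POS
            # conditions 7 and 8, via the meet
            ok &= imp[(meet(a, b), a)] in POS and imp[(meet(a, b), b)] in POS
            for c in B:
                # condition 3
                s = imp[(imp[(a, imp[(b, c)])],
                         imp[(imp[(a, b)], imp[(a, c)])])]
                ok &= s in POS
    return ok

print(check(IMP_T1), check(IMP_T2))        # True True
# The pre-order a <= b iff (a => b) in B+ is not antisymmetric in T1:
print(IMP_T1[("I","1")] in POS, IMP_T1[("1","I")] in POS)    # True True

Only the propositional conditions are tested here; the quantifier conditions would range over subsets of B, which is equally finite but omitted to keep the sketch short.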
Example 6. The algebra T 2 cannot be extended to a complete ordered algebra. Indeed the set {I, 1} would need to have a least upper bound. This least upper bound cannot be 0 because T + 2 would then not be upward closed. If it were 1 then we would have I ⊑ 1 and thus 1 ⇒ I ⊑ 1 ⇒ 1, i.e. 1 ⊑ I. Thus the relation ⊑ would not be antisymmetric. If it were I then we would have 1 ⊑ I and thus 1 ⇒ 1 ⊑ 1 ⇒ I, i.e. I ⊑ 1. Thus the relation ⊑ would not be antisymmetric. Proposition 4. The order relation ⊑ is finer than ≤, i.e. if a ⊑ b then a ≤ b. Proof. If a ⊑ b, then a ⇒ a ⊑ a ⇒ b, hence a ⇒ b ∈ B + , i.e. a ≤ b. Completion We now want to prove that for any truth value algebra B, there is another truth value algebra B C that is full, ordered and complete and a morphism Φ from B to B C . Notice that we do not require the morphism Φ to be injective. There are two ways to prove this, the first is to use Proposition 3 in a first step to build a truth value algebra B/B + that is a Heyting algebra and a morphism for B to B/B + and then apply in a second step MacNeille completion to the algebra B/B + to embed it into a full Heyting algebra. Together with its natural order, this algebra is a full, ordered and complete truth value algebra. The second is to apply MacNeille completion directly to B noticing that antisymmetry is not used in MacNeille completion, except to prove the injectivity of the morphism. To keep the paper self-contained we follow this second way. Definition 9 (Closure). Let X a subset of B, then the set of upper bounds of X is u(X) = {y | ∀x (x ∈ X ⇒ x ≤ y)} the set of lower bounds of X is l(X) = {y | ∀x (x ∈ X ⇒ y ≤ x)} and the closure of X is C(X) = l(u(X). It is easily checked that X ⊆ Y ⇒ u(Y ) ⊆ u(X) X ⊆ Y ⇒ l(Y ) ⊆ l(X) X ⊆ Y ⇒ C(X) ⊆ C(Y ) Proposition 7. X ⊆ C(X) Proof. Consider x ∈ X. For all y ∈ u(X), x ≤ y. Hence x ∈ l(u(X)). Proposition 8. If X ∈ E, then ∃ X ∈ C(X) Proof. For all y ∈ u(X), ∃ X ≤ y. Hence ∃ X ∈ l(u(X)). Definition 10 (Closed). A subset X of B is said to be closed if C(X) = Xor, equivalently, C(X) ⊆ X. Proposition 9. Any set of the form C(X) is closed. Proof. Consider an arbitrary set Z. Consider z ∈ Z. For all y ∈ l(Z), y ≤ z. Hence z ∈ u(l(Z)). Thus, for an arbitrary Z, Z ⊆ u(l(Z)) and in particular u(X) ⊆ u(l(u(X))). Thus l(u(l(u(X)))) ⊆ l(u(X)), i.e. C(C(X)) ⊆ C(X). Proposition 10. a ∈ C({b}) ⇔ a ≤ b C({a}) ⊆ C({b}) ⇔ a ≤ b If X is closed, x ∈ X ) a ≤ b. If C({a}) ⊆ C({b}) then {a} ⊆ C({a}) ⊆ C({b}). Hence a ∈ C({b}), i.e. a ≤ b. Conversely, if a ≤ b then a ∈ C({b}), {a} ⊆ C({b}), hence C({a}) ⊆ C(C({b})) = C({b}). If X is closed, x ∈ X and y ≤ x then let z be an element of u(X), we have x ≤ z and y ≤ x, thus, by transitivity, y ≤ z, i.e. y ∈ l(u(X)) = C(X) = X. Proposition 11. Let B be a pseudo-Heyting algebra. Let B C be the set of closed subsets of B, ≤ C be inclusion, A C = E C = ℘(B C ), ⊥C = C({ ⊥}), ⊤C = C({ ⊤}), ∧C be intersection, ∨C be defined by X ∨C Y = C(X ∪ Y ), ∀C be intersection, ∃C be defined by ∃C E = C( E), ⇒C be defined by X ⇒C Y = C({x})⊆X,Y ⊆C({y}) C({x ⇒ y}). Then the structure ⟨B C , ≤ C , A C , E, ⊥C , ⊤C , ⇒C , ∧C , ∨C , ∀C , ∃C ⟩ is a full, ordered and complete Heyting algebra. Proof. 1. Inclusion is trivially an order relation. The set C({ ⊥}) is closed. It is a minimum in B C , because if X is closed, it is a set of lower bounds and hence ⊥ ∈ X. Thus C({ ⊥}) ⊆ C(X) = X. 3. The set C({ ⊤}) is closed. It is a maximum in B C , because it is equal to B. 4. 
Let us check that binary and arbitrary intersections of closed sets are closed sets. We detail only the case of the operation ∀, the binary operation being a particular case. If all elements of A are closed sets, we have for every Example 7. The algebra T 2 cannot be extended to a complete ordered algebra, but it can be embedded with a non injective morphism in the full ordered and complete algebra {0, 1}. X in A, ∀C A ⊆ X, C( ∀C A) ⊆ C(X) = X. Thus C( ∀C A) ⊆ ∀C A. if X is an element of E, X ⊆ E ⊆ C( E). Then, it is the least as, if A is an element of B C such that for all X in E, X ⊆ Z then E ⊆ Z hence C( E) ⊆ C(Z) and C( E) ⊆ Z. 6. The set C({x})⊆X,Y ⊆C({y}) C({x ⇒ y}) is closed as it is an intersection of closed sets. Let us check that X ≤ C A ⇒C B if and only if X ∧C A ⊆ B. Assume X ⊆ A ⇒C B and let x ∈ X ∧C A. Let b ∈ u(B). We have x ∈ X, thus x ∈ A ⇒C B. We have x ∈ A, thus C({x}) ⊆ C(A) = A. We have b ∈ u(B), thus ∀y (y ∈ B ⇒ y ≤ b), ∀y (y ∈ B ⇒ y ∈ C({b})) and B ⊆ C({b}). We have x ∈ A ⇒C B, C({x}) ⊆ A, and B ⊆ C({b}) thus x ∈ C({x ⇒ b}), i.e. x ≤ x ⇒ b, x ∧ x ≤ b, x ≤ b. Thus, for all b in u(B), x ≤ b, i.e. x ∈ l(u(B)), x ∈ C(B) and x ∈ B. Conversely, assume X ∧C A ⊆ B. Let x ∈ X. Let a such that C({a}) ⊆ A and b such that B ⊆ C({b}). We have x ∧ a ≤ x and x ∈ X, hence x ∧ a ∈ X. We have x ∧ a ≤ a ∈ C({a}) ⊆ A, hence x ∧ a ∈ A. Thus x ∧ a ∈ X ∧C A ⊆ B ⊆ C({b}). Therefore x ∧ a ∈ C({b}), x ∧ a ≤ b, x ≤ a ⇒ b and x ∈ C({a ⇒ b}). Thus x ∈ A ⇒C B. ∈ C({a}) ∪ C({b}), thus if x ∈ u(C({a}) ∪ C({b})), then a ≤ x. In a similar way, if x ∈ u(C({a}) ∪ C({b})), then b ≤ x. Thus if x ∈ u(C({a}) ∪ C({b})), then a ∨ b ≤ x, i.e. if x ∈ u(C({a}) ∪ C({b})), then x ∈ u({a ∨ b}) i.e. u(C({a}) ∪ C({b})) ⊆ u({a ∨ b}). Hence l(u({a ∨ b})) ⊆ l(u(C({a}) ∪ C({b}))) i.e. C({a ∨ b}) ⊆ C(C({a}) ∪ C({b}))) i.e. C({a ∨ b}) ⊆ C({a}) ∨C C({b})). 3 Predicate Logic Models Definition 11 (B-valued structure). Let L = ⟨f i , P j ⟩ be a language in predicate logic and B be a truth values algebra, a B-valued structure for the language L, M = ⟨M, B, fi , Pj ⟩ is a structure such that fi is a function from M n to M where n is the arity of the symbol f i and Pj is a function from M n to B where n is the arity of the symbol P i . This definition extends trivially to many-sorted languages. Definition 12 (Denotation). Let B be a truth values algebra, M be a B-valued structure and ϕ be an assignment. The denotation A ϕ of a formula A in M is defined as follows -x ϕ = ϕ(x), -f (t 1 , ..., t n ) ϕ = f ( t 1 ϕ , ..., t n ϕ ), -P (t 1 , ..., t n ) ϕ = P ( t 1 ϕ , ..., t n ϕ ), -⊤ ϕ = ⊤, -⊥ ϕ = ⊥, -A ⇒ B ϕ = A ϕ ⇒ B ϕ , -A ∧ B ϕ = A ϕ ∧ B ϕ , -A ∨ B ϕ = A ϕ ∨ B ϕ , -∀x A ϕ = ∀ { A ϕ+⟨x,e⟩ | e ∈ M}, -∃x A ϕ = ∃ { A ϕ+⟨x,e⟩ | e ∈ M}. Notice that the denotation of a formula containing quantifiers may be undefined, but it is always defined if the truth value algebra is full. Definition 13 (Model). A formula A is said to be valid in a B-valued structure M, and the B-valued structure M is said to be a model of A, M |= A, if for all assignments ϕ, A ϕ is defined and is a positive truth value. The B-valued structure M is said to be a model of a theory T if it is a model of all the axioms of T . Soundness and completeness As the notion of truth values algebra extends that of Heyting algebra, the completeness theorem for the notion of model introduced above is a simple corollary of the completeness theorem for the notion of model based on Heyting algebras. But, it has a simpler direct proof. 
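As an aside, for a finite algebra and a finite domain the clauses of Definition 12 are directly executable. The following Python sketch is mine, not the paper's: it evaluates denotations over the two-element algebra of Example 1. The tuple encoding of formulae and the sample structure (a three-element domain with one unary predicate) are invented for the illustration, and terms are restricted to variables for brevity.

# Denotation (Definition 12) over the {0,1} algebra of Example 1.
# Formulas are nested tuples, e.g. ("forall", "x", ("imp", ("P","x"), ("P","x"))).
ALG = {"top": 1, "bot": 0,
       "imp": lambda a, b: max(1 - a, b),
       "and": lambda a, b: min(a, b),
       "or":  lambda a, b: max(a, b),
       "forall": min, "exists": max}

def denote(formula, struct, phi):
    """struct = (domain M, predicate interpretations), phi = assignment."""
    M, preds = struct
    op = formula[0]
    if op in ("top", "bot"):
        return ALG[op]
    if op in ("imp", "and", "or"):
        return ALG[op](denote(formula[1], struct, phi),
                       denote(formula[2], struct, phi))
    if op in ("forall", "exists"):
        _, x, body = formula
        values = {denote(body, struct, {**phi, x: e}) for e in M}
        return ALG[op](values)
    # atomic case: op is a predicate symbol applied to variables
    args = tuple(phi[v] for v in formula[1:])
    return preds[op](*args)

# A toy structure: M = {0,1,2}, P(e) holds iff e is even.
struct = ({0, 1, 2}, {"P": lambda e: 1 if e % 2 == 0 else 0})
print(denote(("forall", "x", ("imp", ("P", "x"), ("P", "x"))), struct, {}))   # 1
print(denote(("forall", "x", ("P", "x")), struct, {}))                         # 0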
It is well-known that completeness proofs for boolean algebra valued models and Heyting algebra valued models are simpler than for {0, 1}-valued models. For truth values algebra valued models, it is even simpler. We want to prove that if A is valid in all models of T where it has a denotation then T ⊢ A. To do so, we consider a theory T and we construct a model of T such that the formulae valid in this model are the intuitionistic theorems of T . Definition 14 (Lindenbaum model). Let T be a theory in a language L. Let S be an infinite set of constants and L ′ = L∪S. Let M be the set of closed terms of L ′ and B T be the set of closed formulae of L ′ . Let B + T be the set of elements A of B T , such that the sequent T ⊢ A is provable. Let A = E be the set of subsets of B T of the form {(t/x)A | t ∈ M} for some A. Notice that, in this case, the formula A is unique. The operations ⊤, ⊥, ⇒, ∧ and ∨ are ⊤, ⊥, ⇒, ∧ and ∨. The operations ∀ and ∃ are defined as follows -∀ {(t/x)A | t ∈ M} = (∀x A), -∃ {(t/x)A | t ∈ M} = (∃x A). If f is a function symbol, we let f be the function mapping t 1 , ..., t n to f (t 1 , ..., t n ). If P is a predicate symbol, we let P be the function mapping t 1 , ..., t n to P (t 1 , ..., t n ). Proposition 14. The algebra B T is a truth values algebra. Proof. The condition 1. to 11. are trivial recalling that B + T is the set of theorems of T . The condition 12. is a simple consequence of the definition of A and E. For condition 13., consider a set A = {(t/x)P | t ∈ M} and c a constant occurring neither in T nor in P . If all elements of A are in B + T , (c/x)P is in B + T thus T ⊢ (c/x)P is provable. Thus, T ⊢ ∀x P is provable and ∀x P is in B + T , i.e. ∀ A is in B + T . For condition 14. consider an element A of A, by definition there exists a formula P such that A = {(t/x)P | t ∈ M} and we have ∀ A = ∀x P and ∀ (a ⇒ A) = ∀x (a ⇒ P ). Thus, the condition rephrases T ⊢ (∀x (a ⇒ P )) ⇒ a ⇒ ∀x P which is obvious as the formula (∀x (a ⇒ P )) ⇒ a ⇒ ∀x P is intuitionisticaly provable. The conditions 15., 16. and 17. are checked in a similar way. Proposition 15. Let A be a formula and ϕ be an assignment mapping the free variables of A to elements of M. Notice that ϕ is also a substitution and that ϕA is a closed formula. Then A ϕ is always defined and A ϕ = ϕA Proof. By induction over the structure of A. We consider only the case where Proof. If A is valid in the Lindenbaum model then for every assignment ϕ, A ϕ ∈ B + T , i.e. T ⊢ A ϕ is provable. Thus, T ⊢ ϕA is provable and in particular T ⊢ A is provable. A = ∀x B. We have ∀x B ϕ = ∀ { (t/x)B ϕ | t ∈ M} = ∀ {ϕ((t/x)B) | t ∈ M} = ∀ {(t/x)(ϕB) | t ∈ M} = ∀x (ϕB) = ϕ(∀x B). Proposition 17 (Completeness). If A is valid in all the models of T where it is defined, then T ⊢ A. Proof. It is valid in the Lindenbaum model of T . Using Proposition 13, we can strengthen this completeness theorem. Proposition 18. If A is valid in all the models of T where the truth values algebra is full, ordered and complete then T ⊢ A. The converse is a simple induction over proof structure. Proposition 19 (Soundness). If T ⊢ A then A is valid in all the models of T where the truth value algebra is full, ordered and complete. We finally get the following theorem. Theorem 1. T ⊢ A if and only if A is valid in all the models of T where the truth values algebra is full, ordered and complete. Consistency Definition 15. A theory is said to be consistent if there exists a non provable formula in this theory. 
In the completeness theorem above, we did not assume the theory T to be consistent. If it is not, then the algebra of the Lindenbaum model is trivial, i.e. all truth values are positive and every formula is valid. But we have the following theorem. Proposition 20. The theory T is consistent if and only if it has a B-valued model, for some non trivial full, ordered and complete truth values algebra B. Proof. The algebra of the Lindenbaum model and its completion are non trivial. Deduction modulo Deduction modulo In Deduction modulo [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | Proof normalization modulo[END_REF], a theory is defined by a set of axioms T and a congruence ≡ defined by a confluent rewrite system rewriting terms to terms and atomic formulae to formulae. The deduction rules are modified to take the congruence ≡ into account. For instance, the modus ponens rule is not stated as usual Γ ⊢ A ⇒ B Γ ⊢ A Γ ⊢ B but Γ ⊢ ≡ C Γ ⊢ ≡ A C ≡ A ⇒ B Γ ⊢ ≡ B In deduction modulo, there are theories for which there exists proofs that do not normalize. For instance, in the theory formed with the rewrite rule P -→ (P ⇒ Q), the proof axiom P ⊢ ≡ P ⇒ Q axiom P ⊢ ≡ P ⇒-elim P ⊢ ≡ Q ⇒-intro ⊢ ≡ P ⇒ Q axiom P ⊢ ≡ P ⇒ Q axiom P ⊢ ≡ P ⇒-elim P ⊢ ≡ Q ⇒-intro ⊢ ≡ P ⇒-elim ⊢ ≡ Q does not normalize and, moreover, the formula Q has no normal proof. But, as we shall see, in some other theories, such as the theory formed with the rewrite rule P -→ (Q ⇒ P ), all proofs strongly normalize. In deduction modulo, like in predicate logic, normal proofs of a sequent of the form ⊢ ≡ A always end with an introduction rule. Thus, when a theory can be expressed in deduction modulo with rewrite rules only, i.e. with no axioms, in such a way that proofs modulo these rewrite rules strongly normalize, then the theory is consistent, it has the disjunction property and the witness property, various proof search methods for this theory are complete, ... Many theories can be expressed this way in deduction modulo, in particular arithmetic [START_REF] Dowek | Arithmetic as a theory modulo[END_REF] and simple type theory [START_REF] Dowek | HOL-lambda-sigma: an intentional firstorder expression of higher-order logic[END_REF] and the notion of cut of deduction modulo subsumes the ad hoc notions of cut defined for these theories. In the same way, if B be an arbitrary full, ordered and complete truth value algebra, then the theory P -→ (⊥ ⇒ P ) has a B-valued model. Example 10. The theory P -→ (P ⇒ Q) has a {0, 1}-valued model ( P = Q = 1), but no T 1 -valued model. Indeed there is no 0 in the line 0 of the table of the function ⇒ of T 1 , no I in the line I and no 1 in the line 1. Soundness and completeness To extend the completeness and the soundness theorem to deduction modulo, we replace terms by classes of congruent terms and formulae by classes of congruent formulae. Definition 17. Let T , ≡ be a theory in a language L. Let S be an infinite set of constants and L ′ = L ∪ S. We define an equivalence relation ∼ on formulae of L ′ inductively as the smallest congruence such that if A ≡ B then A ∼ B, if A ∼ B and A ′ ∼ B ′ then (A ∧ A ′ ) ∼ (B ∧ B ′ ), (A ∨ A ′ ) ∼ (B ∨ B ′ ), and (A ⇒ A ′ ) ∼ (B ⇒ B ′ ) , and if for each term t there exist a term u such that (t/x)A ∼ (u/x)B and for each term u there exist a term t such that (u/x)B ∼ (t/x)A then ∀x A ∼ ∀x B and ∃x A ∼ ∃x B. 
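The quantifier clause of this definition compares instances rather than bodies; the remark that follows gives the standard example, the rewrite rule f(f(x)) --> x, and that example can be checked mechanically. In the toy Python sketch below (not part of the paper) the closed term f^n(c) is represented by the integer n, and only finitely many instances are compared, which suffices here because the sets of normalized instances stabilize immediately.

# The congruence generated by f(f(x)) --> x identifies f^n(c) with f^(n mod 2)(c).
def nf(n):                       # normal form of f^n(c), encoded as an integer
    return n % 2

# Normalized instances (t/x)P(x) and (t/x)P(f(x)) for the first few closed terms t:
instances_Px  = {nf(n) for n in range(10)}        # P(t)    for t = f^n(c)
instances_Pfx = {nf(n + 1) for n in range(10)}    # P(f(t)) for t = f^n(c)
print(instances_Px == instances_Pfx)   # True: every instance of one formula is
                                       # congruent to an instance of the other,
                                       # so forall x P(x) ~ forall x P(f(x))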
Remark that if we consider the congruence ≡ defined by the rewrite rule f (f (x)) -→ x we have ∀x P (x) ̸ ≡ ∀x P (f (x)) but we have ∀x P (x) ∼ ∀x P (f (x)) as P (x) and P (f (x)) have the same instances (the instance t in one formula corresponds to the instance f (t) in the other). Proposition 21. If t ≡ u and A ∼ B then (t/x)A ∼ (u/x)B. Proof. By induction on the derivation of A ∼ B. Proposition 22. If A ∼ B then A ⇔ B is provable modulo ≡. Proof. By induction on the derivation of A ∼ B if A = ∀x A ′ and B = ∀x B ′ and for each term t there exist a term u such that (t/x)A ∼ (u/x)B and for each term u there exist a term t such that (u/x)B ∼ (t/x)A then let c be a constant of S occurring neither in A nor in B. We have to prove the sequent ∀x A ′ ⊢ (c/x)B ′ . Let t be a term such that (t/x)A ′ ∼ (c/x)B ′ , by induction hypothesis we get (t/x)A ′ ⇔ (c/x)B ′ , and as we have ∀x A ′ we can deduce (t/x)A ′ , and thus (c/x)B ′ . Definition 18 (The Lindenbaum model). Let T , ≡ be a theory in a language L. Let S be an infinite set of constants and L ′ = L ∪ S. Let M be the set of ≡-classes of closed terms of L ′ and B be the set of ∼-classes of closed formulae of L ′ . Let B + be the set of elements A of B, such that the sequent T ⊢ ≡ A is provable. Let A = E be the set of subsets of B of the form {(t/x)A/ ∼ | t ∈ M} for some A. The operations ⊤, ⊥, ⇒, ∧ and ∨ are ⊤, ⊥, ⇒, ∧ and ∨ extended to ∼classes. To define the operations ∀ and ∃, we choose for each element a of A and E a formula A such that a = {(t/x)A/ ∼ | t ∈ M} and we let -∀ a = (∀x A)/ ∼, -∃ a = (∃x A)/ ∼. Notice that the elements (∀x A)/ ∼ and (∃x A)/ ∼ are independent of the choice of A. If f is a function symbol, we let f be the function mapping the classes of t 1 , ..., t n to that of f (t 1 , ..., t n ). If P is a predicate symbol, we let P be the function mapping the classes of t 1 , ..., t n to that of P (t 1 , ..., t n ). Proposition 23. Let A be a formula and ϕ be a assignment mapping the free variables of A to elements of M. Notice that ϕ is also a substitution and that ϕA is a closed formula. Then A ϕ = ϕA/ ∼. Proof. By induction over the structure of A. We consider only the case where A = ∀x B. We have ∀x B ϕ = ∀ { B ϕ+(x=a) | a ∈ M}. By induction hypothesis, ∀x B ϕ = ∀ {(a/x)ϕB/ ∼ | a ∈ M} = (∀x ϕB)/ ∼= ϕ(∀x B)/ ∼. Proposition 24. The algebra B is a truth values algebra. Proof. The condition 1. to 11. are trivial using the fact that B + is the set of theorems of T , ≡. The condition 12. is a simple consequence of the definition of A and E. For condition 13., consider a set A in A and P the formula associated to this set, and c a constant occurring neither in T nor in the rewrite system defining the congruence ≡, nor in P . If all elements of A are in B + , (c/x)P is in B + thus T ⊢ ≡ (c/x)P is provable. Thus, T ⊢ ≡ ∀x P is provable and ∀x P is in B + , i.e. ∀ A is in B + . For condition 14., consider an element A of A and the formula P associated to this element and Q the formula associated to the set a ⇒ A. By Proposition 22, the formula (∀x (a ⇒ P )) ⇔ (∀x Q) is provable. We have ∀ A = ∀x P and ∀ (a ⇒ A) = ∀x Q. Thus, the condition rephrases T ⊢ ≡ ∀x Q ⇒ a ⇒ ∀x P which is a consequence of the fact that the formula (∀x (a ⇒ P )) ⇔ (∀x Q) is provable. The conditions 15., 16. and 17. are checked in a similar way. Proposition 25. The closed formulae valid in the Lindenbaum model of T , ≡ in L are intuitionistic theorems of T , ≡. Proof. If A is valid in the Lindenbaum model, then for every assignment ϕ, A ϕ ∈ B + , i.e. 
T ⊢ ≡ A ϕ is provable in deduction modulo. Thus, T ⊢ ≡ ϕA is provable in deduction modulo and in particular T ⊢ ≡ A is provable in deduction modulo. Proposition 26 (Completeness). If A is valid in all the models of T , ≡ where it is defined, then T ⊢ ≡ A. Proof. It is valid in the Lindenbaum model of T , ≡. Using Proposition 13, we can strengthen this completeness theorem. Proposition 27. If A is valid in all the models of T , ≡ where the truth values algebra is full, ordered and complete then T ⊢ ≡ A. The converse is a simple induction over proof structure. Proposition 28 (Soundness). If T ⊢ ≡ A then A is valid in all the models of T , ≡ where the truth value algebra is full, ordered and complete. We finally get the following theorem. Theorem 2. T ⊢ ≡ A if and only if A is valid in all the models of T , ≡ where the truth values algebra is full, ordered and complete. Consistency Proposition 29. The theory T , ≡ is consistent if and only if it has a B-valued model, for some non trivial full, ordered and complete truth values algebra B. Proof. The algebra of the Lindenbaum model and its completion are non trivial. Definition By Proposition 29, a theory is consistent if it has a B-valued model for some non trivial full, ordered and complete truth values algebra. We now strengthen this condition and require that the theory has a B-valued model for all full, ordered and complete truth values algebras B. Definition 19 (Super-consistent). A theory T , ≡ in deduction modulo is super-consistent if it has a B-valued model for all full, ordered and complete truth values algebras B. Notice that, as there exists non trivial full, ordered and complete truth values algebras (e.g. {0, 1}), super-consistent theories are consistent. Examples of super-consistent theories We have seen that the theories P -→ (Q ⇒ R) and P -→ (Q ⇒ P ) are super-consistent, but that the theory P -→ (P ⇒ Q) is not. We give other examples of super-consistent theory. In particular, we show that all the theories that have been proved to have the strong normalization property in [START_REF] Dowek | Proof normalization modulo[END_REF][START_REF] Dowek | Arithmetic as a theory modulo[END_REF] are super-consistent. Definition 20 (Simple type theory). Simple type theory is a many-sorted theory defined as follows. The sorts are inductively defined by ι and o are sorts and if T and U are sorts then T → U is a sort. The language contains the constants S T,U,V of sort Definition 21 (Arithmetic). Arithmetic is a many-sorted theory defined as follows. The sorts are ι and κ. The language contains the constant 0 of sort ι, the function symbols S and Pred of rank ⟨ι, ι⟩ and + and × of rank ⟨ι, ι, ι⟩, the predicate symbols = of rank ⟨ι, ι⟩, Null and N of rank ⟨ι⟩ and ∈ of rank ⟨ι, κ⟩ and for each formula P in the language 0, S, Pred , +, ×, =, Null and N and whose free variables are among x, y 1 , . . . , y n of sort ι, the function symbol f x,y1,...,yn,P of rank ⟨ι, . . . , ι, κ⟩. The rewrite rules are x ∈ f x,y1,...,yn,P (y 1 , . . . , y n ) (T → U → V ) → (T → U ) → T → V , K T,U of sort T → U → T , ⊤ of α(α(K T,U , x), y) -→ x ε( ⊤) -→ ⊤ ε( ⊥) -→ ⊥ ε(α(α( ⇒, x), y)) -→ ε(x) ⇒ ε(y) ε(α(α( ∧, x), y)) -→ ε(x) ∧ ε(y) ε(α(α( ∨, x), y)) -→ ε(x) ∨ ε(y) ε(α( ∀T , x)) -→ ∀y ε(α(x, y)) ε(α( ∃T , x)) -→ ∃y ε(α(x, -→ P y = z -→ ∀p (y ∈ p ⇒ z ∈ p) N (n) -→ ∀p (0 ∈ p ⇒ ∀y (N (y) ⇒ y ∈ p ⇒ S(y) ∈ p) ⇒ n ∈ p) Pred (0) -→ 0 Pred (S(x)) -→ x Null (0) -→ ⊤ Null (S(x)) -→ ⊥ 0 + y -→ y S(x) + y -→ S(x + y) 0 × y -→ 0 S(x) × y -→ x × y + y Proposition 31. 
Arithmetic is super-consistent. Proof. Let B be a full, ordered and complete truth value algebra. We take M ι = N, M κ = B N . The denotations of 0, S, +, ×, Pred are obvious. We take N ull (0) = ⊤, N ull (n) = ⊥ if n ̸ = 0. The denotation of ∈ is the function mapping n and f to f (n). Then, we can define the denotation of ∀p (y ∈ p ⇒ z ∈ p) and the denotation of = accordingly. To define the denotation of N , for each function f of B N we can define an interpretation M f of the language of the formula ∀p (0 ∈ p ⇒ ∀y (N (y) ⇒ y ∈ p ⇒ S(y) ∈ p) ⇒ n ∈ p) where the symbol N is interpreted by the function f . We define the function Φ from B N to B N mapping f to the function mapping the natural number x to the truth value ∀p (0 ∈ p ⇒ ∀y (N (y) ⇒ y ∈ p ⇒ S(y) ∈ p) ⇒ n ∈ p) M f x/n The order on B N defined by f ⊑ g if for all n, f (n) ⊑ g(n) is a complete order and the function Φ is monotonous as the occurrence of N is positive in ∀p (0 ∈ p ⇒ ∀y (N (y) ⇒ y ∈ p ⇒ S(y) ∈ p) ⇒ n ∈ p) Hence it has a fixed point g. We interpret the symbol N by the function g. Finally, the denotation of the symbols of the form f x,y1,...,yn,P is defined in the obvious way. Proposition 32 (Quantifier free). A theory defined by a confluent and terminating rewrite systems such that no quantifier appears in the rewrite rules is super-consistent. For instance, the theory defined by the rewrite system P -→ Q ⇒ R is super-consistent. Proof. Let B be an arbitrary full truth value algebra. We associate an element of B to each normal closed quantifier free formula as follows: if A is atomic then |A| = ⊤, |⊤| = ⊤, |⊥| = ⊥, |A ⇒ B| = |A| ⇒ |B|, |A ∧ B| = |A| ∧ |B|, |A ∨ B| = |A| ∨ |B|. We then define a B-valued model as follows: M is the set of normal closed terms, f (t 1 , . . . , t n ) = f (t 1 , . . . , t n ) ↓, P (t 1 , . . . , t n ) = |P (t 1 , . . . , t n ) ↓ | where a ↓ is the normal form of the a. Proposition 33 (Positive terminating). A theory defined by a confluent and terminating rewrite systems such that all atomic formulae appear at positive occurrences in the rewrite rules is super-consistent. For instance the theory defined by the rewrite system P (0) -→ ∀x P (x) is super-consistent. Proof. Consider a full, ordered and complete truth value algebra B. Let T be the set of closed terms and M = T / ≡. Let f be the function mapping the classes e 1 , ..., e n to the class of the term f (t 1 , . . . , t n ) where t 1 , ..., t n are elements of e 1 , ..., e n (since the relation ≡ is a congruence, this class does not depend of the choice of representatives). Let C be the set of models that have the domain M, the truth value algebra B and where the function symbols f are interpreted by the functions f . Two models of C differ only by the interpretation of predicate symbols. Let M 1 and M 2 be two models of the class C, we say that M 1 ⊑ M 2 if and only if for every predicate symbol P and sequence of elements of M e 1 , . . . , e n , we have P M1 (e 1 , . . . , e n ) ⊑ P M2 (e 1 , . . . , e n ) As the algebra B is complete, the set C is a complete lattice for the order ⊑. Let F be the function from C to C defined by P F (M) (e 1 , . . . , e n ) = P (e 1 , . . . , e n ) ↓ M ∅ As all atomic formulae appear at positive occurrences in the rewrite system, the function F is monotone. Hence, as C is a complete lattice, it has a fixed point. This fixed point is a B-valued model of the theory. Proposition 34 (Positive deterministic). 
A theory defined by a rewrite systems such that each atomic formula has at most one one-step reduct and all atomic formulae appear at positive occurrences in the rewrite rules is superconsistent. For instance, the theory defined by the rewrite system P -→ (P ∧ P ) is super-consistent. Proof. As in the proof of Proposition 33, we consider a full, ordered and complete truth value algebra B and we define the complete lattice C of models. Let F be the function from C to C defined by P F (M) (t 1 , . . . , t n ) = P (t 1 , . . . , t n )+ M ∅ where A+ is the unique one-step reduct of A if it exists and A otherwise. As all atomic formulae appear at positive occurrences in the rewrite system, the function F is monotone. Hence, as C is a complete lattice, it has a fixed point. This fixed point is a B-valued model of the theory. Normalization We have seen that the theory P -→ (P ⇒ Q), that does not have the strong normalization property, is consistent but not super-consistent, i.e. it has B-valued models for some non trivial, full, ordered and complete truth values algebras B, but not all. We prove now that, in contrast, all super-consistent theories have the strong normalization property. To prove this, we build a particular full, ordered and complete truth values algebra: the algebra of reducibility candidates. We refer, for instance, to [START_REF] Dowek | Proof normalization modulo[END_REF] for the definition of proof-terms, neutral proofterms and of proof-term reduction ▷ and we define the following operations on sets of proofs. Definition 22. -The set ⊤ is the set of strongly normalizing proof-terms. -The set ⊥ is the set of strongly normalizing proof-terms. -If a and b are two sets of proofs-terms, then a ⇒ b is the set of strongly normalizing proof-terms π such that if π reduces to λα π 1 then for every π ′ in a, (π ′ /α)π 1 is in b. -If a and b are two sets of proof-terms, then then a ∧ b is the set of strongly normalizing proof-terms π such that if π reduces to ⟨π 1 , π 2 ⟩ then π 1 is in a and π 2 is in b. -If a and b are two sets of proof-terms, then a ∨ b is the set of strongly normalizing proof-terms π such that if π reduces to i(π 1 ) (resp. j(π 2 )) then π 1 (resp. π 2 ) is in a (resp. b). -If A is a set of sets of proof-terms, then ∀ A is the set of strongly normalizing proof-terms π such that if π reduces to λx π 1 then for every term t and every element a of A, (t/x)π 1 is in a. -If A is a set of sets of proof-terms, then ∃ A is the set of strongly normalizing proof-terms π such that if π reduces to ⟨t, π 1 ⟩, there exists an element a of A such that π 1 is an element of a. Definition 23 (Reducibility candidate). A set R of proof-terms is a reducibility candidate if -if π ∈ R, then π is strongly normalizable, -if π ∈ R and π ▷ * π ′ then π ′ ∈ R, -if π is neutral and if for every π ′ such that π ▷ 1 π ′ , π ′ ∈ R then π ∈ R. Proposition 35. The set of reducibility candidates is closed by the operations of Definition 22. Proof. All the cases are similar. Let us detail, for instance, the case of the operation ∀. Let A be a set of candidates and let us prove that the set ∀ A is a candidate. By definition, all proof-terms of ∀ A are strongly normalizing. Closure by reduction is a simple consequence of the fact that if π ∈ ∀ A and π ▷ * π ′ then π is strongly normalizing and thus so is π ′ and if π ′ reduces to λx π 1 then so does π and thus for every term t and every a in A, (t/x)π 1 is in a. 
Now, assume that π is a neutral proof-term and that for every π ′ such that π ▷ 1 π ′ , π ′ ∈ ∀ A. We want to prove that π is in ∀ A. Following the definition of ∀ A, we first prove that π is strongly normalizing and then that if it reduces to λx π 1 then for every term t and every a in A, (t/x)π 1 is in a. Consider a reduction sequence issued from π. If it is empty it is finite. Otherwise it has the form π ▷ 1 π 2 ▷ 1 ... The proof-term π 2 is an element of ∀ A thus it is strongly normalizing and the reduction sequence is finite. If π reduces to λx π 1 then consider a reduction sequence from π to λx π 1 . As π is neutral and λx π 1 is not, this sequence is not empty. Thus, there exists a proof-term π 2 such that π ▷ 1 π 2 ▷ * λx π 1 . We have π 2 ∈ ∀ A and thus for every term t and every a in A, (t/x)π 1 is in a. Definition 24 (The algebra of reducibility candidates). The set B is the set of reducibility candidates. The set B + may be any set closed by intuitionistic deduction rules, e.g. the set of all candidates. The sets A and E are ℘(B). The operations are those of definition 22. The order ⊑ is inclusion. Theorem 3 (Normalization). If the theory T , ≡ is super-consistent, then all proofs strongly normalize in T , ≡. Proof. Consider the full, ordered and complete truth values algebra B of reducibility candidates. As it is super-consistent, the theory T , ≡ has a B-valued model. This model is a reducibility candidate valued model of ≡ [START_REF] Dowek | Proof normalization modulo[END_REF], called premodels there. Hence all proofs strongly normalize in T , ≡. An alternative would be to define the set of candidates directly as the smallest set of sets of proofs closed by the operations of definition 22 and arbitrary intersections, like [START_REF] Parigot | Strong normalization for the second orclassical natural deduction[END_REF]. Notice that the pre-order ≤ is trivial and thus not antisymmetric. Hence, the truth values algebra of reducibility candidates is not a Heyting algebra. The fact that the choice of the set B + is immaterial is due to the fact that B + matters for the interpretation of axioms but not for that of the congruence and cut elimination is a property of the congruence of a theory, not of its axioms. We have generalized the notion of Heyting algebra into a notion of truth values algebra and proved that a theory is consistent if and only if it has a B-valued model for some non trivial full, ordered and complete truth values algebra B. Unlike Heyting algebra valued models, truth values algebra valued models allow to distinguish computational equivalence from provable equivalence. When a theory has a B-valued model for all full, ordered and complete truth values algebras, it is said to be super-consistent and all proofs strongly normalize in this theory. Proving strong normalization by proving super-consistency is easier than proving strong normalization directly. For instance the proof that simple type theory is super-consistent (Proposition 30) takes only a few lines. All the technicalities related to the notion of reducibility candidate are now hidden in the proof that super-consistency implies strong normalization and are not used in the proof that the theory of interest is super-consistent. The notion of super-consistency is a model theoretic sufficient condition for strong normalization. It remains to understand if it also a necessary condition or if some theories have the strong normalization property without being super-consistent. 
To prove that strong normalization implies super-consistency, we might need to restrict further the notion of super-consistency. For instance, we have already restricted it by considering only ordered and complete truth values algebras. Indeed, without such a completeness property, we could not use the fixed point theorem to prove that the theory P -→ (⊥ ⇒ P ) had a Bvalued model for all B, and indeed, this theory does not have a T 2 -valued model. Thus, the fact that the algebra of reducibility candidates, ordered by inclusion, is complete seems to be an essential property that needs to be kept when abstracting on reducibility candidates. It remains to understand if there are other essential properties of candidates that need to be kept this way, so that strong normalization may imply super-consistency. and a ≤ b ⇒ c if and only if a ∧ b ≤ c. Proposition 1. Consider a truth values algebra ⟨B, B + , A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ then the algebra ⟨B, ≤, A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩ where the relation ≤ is defined by a ≤ b if and only if a ⇒ b ∈ B + is a pseudo-Heyting algebra. 6 . 6 We have ( ⊤ ∧ a) ∧ b ≤ ⊤ ∧ a ≤ a and ( ⊤ ∧ a) ∧ b ≤ b. Hence ( ⊤ ∧ a) ∧ b ≤ a ∧ b and ⊤ ≤ a ⇒ b ⇒ (a ∧ b). 7. We have ⊤ ∧ (a ∧ b) ≤ (a ∧ b) ≤ a. Hence ⊤ ≤ (a ∧ b) ⇒ a. 8. We have ⊤ ∧ (a ∧ b) ≤ (a ∧ b) ≤ b. Hence ⊤ ≤ (a ∧ b) ⇒ b. 9. We have ⊤ ∧ a ≤ a ≤ (a ∨ b). Hence ⊤ ≤ a ⇒ (a ∨ b). 10. We have ⊤ ∧ b ≤ b ≤ (a ∨ b). Hence ⊤ ≤ b ⇒ (a ∨ b). 11. Let x = ⊤ ∧ (a ∨ b) ∧ (a ⇒ c) ∧ (b ⇒ c). We have x ≤ a ∨ b, x ≤ a ⇒ c and x ≤ b ⇒ c. We have x ≤ a ∨ b and x ≤ x, hence x ≤ (a ∨ b) ∧ x. We have a ∧ x ≤ x ∧ a ≤ c, thus a ≤ x ⇒ c and, in a similar way, b ≤ x ⇒ c. Thus, we have a ∨ b ≤ x ⇒ c, i.e. (a ∨ b) ∧ x ≤ c. By transitivity, we conclude x ≤ c. Hence ⊤ ≤ (a ∨ b) ⇒ (a ⇒ c) ⇒ (b ⇒ c) ⇒ c. 12. The closure conditions of A and E are the same. Proposition 3 . 3 Let B be a pseudo-Heyting algebra, then there exists a pseudo-Heyting algebra B/B + that is a Heyting algebra and a morphism of pseudo-Heyting algebras Φ from B to B/B + . Proof. We define a relation ≃ on elements of B by a ≃ b if and only if a ≤ b and b ≤ a. It is routine to check that this relation is an equivalence relation and that all the operations of B are compatible with this relation. We define B/B + as the quotient B/ ≃ and the morphism Φ by Φ(a) = a/ ≃. Proposition 5 .Proposition 6 . 56 glb(a, b) ≤ a ∧ b ≤ glb(a ∧ ⊤, ⊤ ∧ b) Proof. We have glb(a, b) ⊑ a and glb(a, b) ⊑ b. Thus, by Proposition 4, glb(a, b) ≤ a and glb(a, b) ≤ b. Thus glb(a, b) ≤ a ∧ b. We have a ⊑ a and b ⊑ ⊤ thus a ∧ b ⊑ a ∧ ⊤. Similarly, a ∧ b ⊑ ⊤ ∧ b. Thus a ∧ b ⊑ glb(a ∧ ⊤, ⊤ ∧ b) and, by Proposition 4, a ∧ b ≤ glb(a ∧ ⊤, ⊤ ∧ b). In a Heyting algebra, ≤ and ⊑ are extensionally equal, i.e. a ⊑ b if and only if a ≤ b. Proof. In a Heyting algebra the relation ≤ is antisymmetric and a = a ∧ ⊤ = ⊤ ∧ a. Thus, from Proposition 5, we get glb(a, b) = a ∧ b. If a ≤ b, we have a ∧ b = a, thus glb(a, b) = a, thus a ⊑ b. Conversely, by Proposition 4, if a ⊑ b then a ≤ b. Proposition 12 . 12 The function a → C({a}) is a morphism of pseudo-Heyting algebras. Proof. 1. a ≤ b ⇔ C({a}) ⊆ C({b}). 2. C({ ⊤}) = ⊤C by definition. 3. C({ ⊥}) = ⊥C by definition. 4. C({a ∧ b}) = C({a}) ∧C C({b}). Indeed x ∈ C({a ∧ b}) if and only if x ≤ a ∧ b if and only if x ≤ a and x ≤ b if and only if x ∈ C({a}) and x ∈ C({b}) if and only if x ∈ C({a}) ∧C ({b}). 5. C({a ∨ b}) = C({a}) ∨C C({b}). Indeed we have a Conversely, we have a ≤ a ∨ b, thus C({a}) ⊆ C({a ∨ b}). In a similar way C({b}) ⊆ C({a ∨ b}). 
Thus C({a}) ∪ C({b}) ⊆ C({a ∨ b}) andC(C({a}) ∪ C({b})) ⊆ C(C({a ∨ b})) i.e. C({a}) ∨C C({b}) ⊆ C({a ∨ b}). 6. C({ ∀ A}) = ∀C {C({a}) | a ∈ A}. Indeed, x ∈ C({ ∀ A}) ifand only if x ≤ ∀ A if and only if for all a in A, x ≤ a if and only if for all a in A, x ∈ C({a}) if and only if x ∈ ∀C {C({a}) | a ∈ A}. 7. C({ ∃ E}) = ∃C {C({e}) | e ∈ E}. Indeed, for all e in E we have e ∈ C({e}) and thus E ⊆ {C({e}) | e ∈ E} and C(E) ⊆ C( {C({e}) | e ∈ E}) i.e. C(E) ⊆ ∃C {C({e}) | e ∈ E}. As, by Proposition 8, ∃ E ∈ C(E), we have C({ ∃ E}) ⊆ C(E), and thus C({ ∃ E}) ⊆ ∃C {C({e}) | e ∈ E}. Conversely, for all e in E, e ≤ ∃ E, hence C({e}) ⊆ C({ ∃ E}). Thus, {C({e}) | e ∈ E} ⊆ C({ ∃ E}) and C( {C({e}) | e ∈ E}) ⊆ C({ ∃ E}) i.e. ∃C {C({e}) | e ∈ E} ⊆ C({ ∃ E}). 8. C({a ⇒ b}) = C({a}) ⇒C C({b}). Indeed, z ∈ C({a}) ⇒C C({b}) if and only if for all x and y such that C({x}) ⊆ C({a}) and C({b}) ⊆ C({y}) we have z ∈ C({x ⇒ y}) if and only if for all x and y such that x ≤ a and b ≤ y, we have z ≤ x ⇒ y if and only if z ≤ a ⇒ b if and only if z ∈ C({a ⇒ b}). Proposition 13. Let B be a truth value algebra, then there exists a full, ordered and complete truth value algebra B C and a morphism of truth values algebras from B to B C . Proof. The Heyting algebra B C is a full, ordered and complete truth value algebra and the function a → C({a}) is a morphism of truth values algebras. Proposition 16 . 16 The closed formulae valid in the Lindenbaum model of T in L are intuitionistic theorems of T . 4. 2 2 ModelsDefinition 16 (Model). Let T , ≡ be a theory in deduction modulo. The Bvalued structure M is said to be a model of the theory T , ≡ if all axioms of T are valid in M and for all terms or formulae A and B such that A ≡ B and assignments ϕ, A ϕ and B ϕ are defined and A ϕ = B ϕ .Example 8. Let B be an arbitrary truth value algebra, then the theory P -→ (Q ⇒ R) has a B-valued model. We take P = ( ⊤ ⇒ ⊤) and Q = R = ⊤.Example 9. Let B be an arbitrary full, ordered and complete truth value algebra, then the theory P -→ (Q ⇒ P ) has a B-valued model. The function a → ( ⊥ ⇒ a) is monotonous for the order ⊑ and this order is complete. Hence, it has a fixed point b. We define a B-valued model of this theory by P = b and Q = ⊥. sort o and ⊥ of sort o, ⇒, ∧ and ∨ of sort o → o → o, ∀T and ∃T of sort (T → o) → o, the function symbols α T,U of rank ⟨T → U, T, U ⟩ and the predicate symbol ε of rank ⟨o⟩. The rules are α(α(α(S T,U,V , x), y), z) -→ α(α(x, z), α(y, z)) y)) Proposition 30. Simple type theory is super-consistent. Proof. Let B be a full truth values algebra. The model M ι = {0}, M o = B, M T →U = M M T U , ŜT,U,V = a → (b → (c → a(c)(b(c)))), KT,U = a → (b → a), α(a, b) = a(b), ε(a) = a, ⊤ = ⊤, ⊥ = ⊥, ⇒ = ⇒, ∧ = ∧, ∨ = ∨, ∀T = a → ∀(Range(a)), ∃T = a → ∃(Range(a)) where Range(a) is the range of the function a, is a B-valued model of simple type theory. condition 10. -Using 1. 2. 3. and 11. we get that if a ⇒ c ∈ B + and b ⇒ c ∈ B + then (a ∨ b) ⇒ c ∈ B + . -The closure conditions are 12. -If a ∈ A then ( ∀ A) ⇒ a ∈ B + is condition 15. -From 13. and 14. we get if for all a ∈ A b ⇒ a ∈ B + , then b ⇒ ∀ A ∈ B + , -If a ∈ A then a ⇒ ( ∃ A) ∈ B + is condition 16. Let us first prove the following lemma: if x ≤ a ⇒ b and x ≤ a then x ≤ b. From x ≤ x and x ≤ a, we get x ≤ x ∧ a and as x ≤ a ⇒ b, we have x ∧ a ≤ b. By transitivity, we get x ≤ b. 1. Using the lemma above, we get that if ⊤ ≤ a ⇒ b and ⊤ -From 1. 2. 3. 13. and 17. we get that if for all a ∈ A a ⇒ b ∈ B + , then ∃ A ⇒ b ∈ B + , -From 1. 2. 3. 6. 7. and 8. 
we get that a ⇒ (b ⇒ c) ∈ B + if and only if (a ∧ b) ⇒ c ∈ B + . Proposition 2. Consider a pseudo-Heyting algebra ⟨B, ≤, A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩, then the algebra ⟨B, B + , A, E, ⊤, ⊥, ⇒, ∧, ∨, ∀, ∃⟩, where B + = {x | ⊤ ≤ x} is a truth values algebra. Proof. Moreover, binary and arbitrary intersections are obviously greatest lower bounds in B C . 5. The sets C(X ∪ Y ) and C( E) are closed. Let us check that they are least upper bounds. Again, we detail case of the case of the operation ∃C , the binary operation being a particular case. The operation ∃C is an upper bound as Acknowledgments Thierry Coquand suggested to characterize truth values algebras as pseudo-Heyting algebras. Lisa Allali, Frédéric Blanqui, Olivier Hermant and Milly Maietti provided many helpful comments on a previous version of the paper.
02337266
en
[ "sdv.bbm.bp", "chim.theo", "spi.opti" ]
2024/03/04 16:41:18
1996
https://hal.science/hal-02337266/file/2917_001.pdf
Key words: Flash absorption spectroscopy; Photosynthesis Introduction Flash absorption spectroscopy is used to monitor the kinetics and spectra of flash-induced absorption changes, which may provide information on molecular events in the sample. Usually, the system is perturbed by a light pulse (actinic flash) which is short compared to the characteristic time of the reaction(s) to be studied. The absorption changes induced by the actinic flash are probed by a usually continuous light beam (measuring light) passing through the sample. This technique has proven particularly useful in photosynthesis research, because the stimulus, light, is the natural driving force of photosynthesis, and because many of the photosynthetic light reactions are accompanied by rather characteristic absorption changes in the visible, near UV and near infrared spectral regions. Flash absorption spectroscopy and its application in photosynthesis research have been reviewed in detail by Junge in 1976 [1]. In this contribution, I will first summarise the principle and the instrumentation of flash absorption spectroscopy and then focus on practical aspects, some of which are related to the use of lasers as measuring light sources and were not yet treated in the review by Junge [1]. 2. Principle Suppose that the sample to be studied is contained in a cuvette with an optical path l (Fig. 1) and that the actinic flash induces a reaction X → Y. This reaction will be accompanied by an absorbance change ΔA (at wavelength λ) ΔA = Δε · Δc · l (1) where Δε = ε_Y - ε_X is the difference of the molar decadic absorption coefficients (at wavelength λ) of Y and X, and Δc is the change in concentration of the product Y (Eq. 1 can easily be extended to more complicated reactions). [Fig. 1. Principle of measurement of flash-induced absorption changes. Labels: actinic flash; measuring beam P0(λ) → P, P′; photodetector; voltage V, ΔV.] The cuvette is crossed by a beam of continuous monochromatic light of wavelength λ (monitoring or measuring light) with a radiant power P0 at the entrance of the cuvette. The measuring light is attenuated by the absorption due to the molecules in the sample (including molecules which do not participate in the flash-induced reaction), the windows of the cuvette and possibly other optical elements between the cuvette and the photodetector. The radiant power seen by the photodetector is P prior to the actinic flash and P′(t) at time t after the actinic flash. ΔP = P′ - P can be positive or negative, depending on whether the reaction is accompanied by a bleaching or by an absorption increase. Using the definition of absorbance, A = lg(P0/P), we can write (before the flash) A_sample + A_other = lg(P0/P) (after the flash) A′_sample + A_other = lg(P0/P′) Substituting A′_sample - A_sample = ΔA, one obtains ΔA = lg(P/P′) = lg[P/(P + ΔP)] = -lg(1 + ΔP/P) (2) In the limit of small absorbance changes (i.e. ΔP/P << 1), Eq. 2 can be approximated by ΔA ≈ -(1/2.3) · ΔP/P (3) The photodetector usually produces a photocurrent proportional to the incident radiant power (at a given wavelength) which is afterwards converted to a voltage V (which varies by ΔV), so that ΔV/V = ΔP/P. Recording the time course of ΔV, the kinetics of the flash-induced reaction(s) can be measured. A difference spectrum of the absorbance changes at a given time can be recorded by repeating the measurements at various wavelengths λ of the monitoring light and then plotting -(1/2.3) · ΔP/P versus λ. 
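Eqs. 2 and 3, together with ΔV/V = ΔP/P, give the conversion from the recorded voltage transient to an absorbance change. The short Python/NumPy sketch below is purely illustrative (the voltage level, signal amplitude and time constant are invented) and simply checks that the linearized form (Eq. 3) agrees with the exact expression (Eq. 2) for such small signals.

# Converting a recorded voltage transient into an absorbance change (Eqs. 2-3).
import numpy as np

V0 = 2.0                                           # detector voltage before the flash (V)
t = np.linspace(0.0, 5.0e-6, 6)                    # time after the flash (s)
V = V0 * (1.0 + 1.0e-3 * np.exp(-t / 2.0e-6))      # a small transient bleaching (V rises)

dV = V - V0
dA_exact  = -np.log10(1.0 + dV / V0)               # Eq. 2
dA_approx = -dV / (2.3 * V0)                       # Eq. 3, valid for |dV/V| << 1
print(np.allclose(dA_exact, dA_approx, rtol=1e-2)) # True for such small signals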
These measurements directly give spectral and kinetic information. They often allow determination of the species involved (e.g. the electron carriers in photosynthesis) and of the sequence and kinetics of the flash-induced reaction(s). 3. Main elements of the experimental set-up. Actinic flash: its duration has to be short compared to the kinetics of the reaction to be studied. In the time range above a few microseconds, xenon flashes may be used, combined with optical filters if excitation at a special wavelength is required. For faster reactions, pulsed lasers have to be used. They are inherently monochromatic; with some of them (e.g. dye lasers) the emission wavelength can be tuned. Measuring light: conventional lamps (tungsten, xenon, mercury etc.) or continuous lasers may be used. As most conventional lamps have a broad emission spectrum, monochromators or optical filters have to be used to obtain (nearly) monochromatic light. Lasers have the advantage of emitting monochromatic light of high intensity. They can be focused on a small photodetector. Tuning of the wavelength is limited, however. Detection system: it consists in general of a photodetector (photodiode or photomultiplier), electronic amplifiers and a transient recorder (analog/digital converter and storage element). Photomultipliers provide internal, low-noise amplification of the photocurrent and are the best choice when the measuring light intensity is low. At higher intensities, they become saturated, and a photodiode with subsequent electronic amplifier should be used. The amplifier is often the main source of noise in such a detection system. In many cases it is necessary to place optical filters in front of the photodetector. They transmit the measuring light, but block scattered light from the actinic flash which otherwise could be misinterpreted as a transient bleaching of the sample. 4. Some practical aspects. 4.1. Time resolution. The time resolution of the apparatus can be limited by the duration of the actinic flash, the electronic bandwidth of the detection system or by the sampling rate of the transient recorder. The fastest commercially available transient recorders have an electronic bandwidth from DC to about 1 GHz and sampling rates of a few gigasamples per second, allowing a time resolution in the order of 1 ns. It should also be mentioned that the response of fast detection systems is often far from perfect and can falsify the kinetics considerably (e.g. overshooting upon a fast transition; see [2] for a method of detector testing). Faster reactions (down to the fs range) can be resolved by the "pump and probe" technique where the continuous measuring light is replaced by short laser pulses ("probe") at various delay times after the actinic flash ("pump"). As the time resolution is obtained by delaying the probe pulses, an integral measurement of the energy of the probe pulse behind the sample is sufficient and a fast detection system is not required (see [3] for a recent review on this technique; the same principle can also be used on slower time scales [4]). 4.2. Signal-to-noise ratio. As the flash-induced absorbance changes in photosynthetic samples from plants are usually rather small (rarely larger than 10^-3), monitoring by flash absorption spectroscopy is often hindered by an insufficient signal-to-noise ratio (S/N), expressed as the ratio of ΔV and the noise voltage Vn, i.e. S/N = ΔV/Vn.
When the noise is purely random ("white noise"), Vn is proportional to the square root of the electronic bandwidth of the detection system, so that the difficulty of obtaining a sufficient signal-to-noise ratio increases with increasing rate of the reaction to be studied. It is possible to improve the signal-to-noise ratio by adding up a certain number (n) of signals measured under identical conditions. Because the accumulated signal amplitude increases proportional to n, but the random noise only proportional to √n, S/N increases proportional to √n. However, due to practical limitations (time, quantity of sample) it is often not convenient to accumulate a too large number of signals. Therefore, one should first look for other possibilities to improve the signal-to-noise ratio. The noise originates from three major sources: (1) From the statistics of the photons of the measuring light and of the photoelectrons created in the detector (shot noise); the shot noise voltage is proportional to the square root of the primary photocurrent in the detector. (2) From additional fluctuations of the radiant power emitted by the measuring light source; the resulting noise voltage is proportional to the radiant power reaching the detector and hence proportional to the photocurrent. (3) From the thermal movements of the electrons in the detection system (thermal noise); the thermal noise is independent of the measuring light. In a widely used set-up, a tungsten incandescent lamp serves as the measuring light source, and a photomultiplier as the detector. In this case, noise sources (2) and (3) are usually negligible, and the signal-to-noise ratio increases proportional to the square root of the radiant power P reaching the detector (according to Eq. 3, ΔP ∝ P for a given absorbance change ΔA of the sample, while the shot noise is "only" proportional to √P). Because of the limited spectral radiant intensity of incandescent lamps, it is hardly possible to resolve reactions in the sub-microsecond range in photosynthetic samples from plants. Much higher spectral intensities of the measuring light are now available from laser sources. As photomultipliers support only weak light, one has to use a photodiode coupled to an electronic amplifier as detection system, which introduces additional (thermal) noise. This disadvantage is, however, largely overcompensated by the gain in signal amplitude due to the higher radiant power. Thus, it became possible to monitor some reactions in plant photosynthesis with a time resolution close to 1 ns (see section 4.5). Evidently, ΔA can be increased by increasing the concentration c of the sample or the optical path l (see Eq. 1). However, for a given measuring light source, an increase of c or l decreases the radiant power P arriving at the detector, so that the signal-to-noise ratio would decrease when the transmission of the sample becomes too low. In fact, there exists an optimum for the product c·l with respect to the signal-to-noise ratio. For the simplest case, when the shot noise is the only noise source, it has been shown [5] that the best signal-to-noise ratio is obtained when c·l is chosen such that the transmission of the sample is approx. 14% at the wavelength of the measuring light (trace a in Fig. 2). The general case, that all three noise sources mentioned above contribute, has also been analysed ([6], see Fig. 2 for examples) leading to the following conclusions.
When the thermal noise of the detection system becomes dominating (trace b), the optimum shifts to a higher transmission (i.e. a less concentrated sample or a shorter optical path). When, however, fluctuations of the radiant power of the measuring light source are dominating (trace c), the optimum occurs at a transmission below 14%. Whenever the signal-to-noise ratio is critical for a flash absorption experiment, it is worthwhile to check for the optimal concentration and optical path of the sample. 4.3. Actinic effect of the measuring light. The measuring light can have the undesirable effect of inducing the (photo)reaction to be studied already prior to the actinic flash. This effect can be diminished (at the expense of the signal-to-noise ratio) by decreasing the intensity of the measuring light. Another possibility is to switch on the measuring light only very shortly before the actinic flash. Sometimes the reaction can be studied at a wavelength where only the product(s) absorb, but not the dark adapted state of the sample. In this case, even high intensities of the measuring light do not disturb the experiment. 4.4. Fluorescence artefact. Similarly to scattered light (see section 3), flash-induced fluorescence reaching the photodetector would pretend a transient bleaching of the sample. If fluorescence occurs at the wavelength of the measuring light, it can not be blocked by optical filters. In this case, it can be helpful to increase the distance between sample and detector and to place a small diaphragm in front of the detector. This decreases the fraction of the fluorescence which is "seen" by the detector because the fluorescence of the sample is emitted in all directions in space. In addition, one may record the fluorescence artefact separately (in the absence of the measuring light) and subtract it from the signal recorded in the presence of the measuring light. 4.5. Examples of set-ups with sub-microsecond time resolution. The photoreactions of the chlorophyll-a type primary electron donors of photosystem I (P700) and photosystem II (P680) can be studied conveniently with a set-up using a laser diode emitting at 820 nm as the measuring light source and a fast (and small) photodiode as detector. As the chlorophyll-a cation absorbs at around 820 nm, but neutral chlorophyll a does not, photooxidation of P700 or P680 is accompanied by an absorption increase at 820 nm. High measuring light intensities can be used without actinic effect on the sample, because the absorbance at 820 nm of the photosynthetic apparatus is virtually zero in the dark adapted state (prior to the actinic flash). Furthermore, provided that the sample is optically clear, high concentrations of the sample and a long optical path l can be used without decreasing the radiant power P reaching the detector. As another advantage, the laser light can be readily focused on a small photodiode at a large distance from the sample, thus decreasing the fluorescence artefact. Such a set-up was introduced by van Best and Mathis [7] to study the rereduction kinetics of P680+.
With some more recent technical improvements, it is now possible to study P680+ [8-10] and P700+ [11, 12] with a time resolution close to 1 ns. To monitor the photoreactions of P680 in the red-most absorption band of its neutral form (around 680 nm; this band is bleached upon photooxidation of P680), a tuneable dye laser (pumped by an Ar ion laser) was used as measuring light source. The fluorescence artefact, which is very strong around 680 nm in a standard set-up, could be largely reduced by using a small photodiode at a large distance (approx. 2 m) from the sample. The measuring light was pulsed by means of an electro-optical modulator in order to diminish the actinic effect [13, 14, 9]. More recently, red laser diodes (which are rather cheap) were used as measuring light sources [10]. To study absorption changes in the UV and blue spectral regions with a time resolution of a few nanoseconds, a pulsed measuring light of high radiant power was realised using a Xe flash lamp emitting flashes of approx. 50 µs duration. The top of the flash is sufficiently flat to be used as a quasi-continuous measuring light during approx. 1 µs. Such a set-up was applied successfully to monitor the oxidation of the immediate electron donor to P680+ (which turned out to be a tyrosine) [15, 16], and electron transfer through the secondary electron acceptor A1 (phylloquinone) in photosystem I [17-19]. Fig. 2. Dependence of the signal-to-noise ratio on the transmission T of the sample. Different T values represent different optical paths l or different concentrations c of the same sample. The following assumptions about the noise sources were made. Trace a: shot noise is the only noise source. Trace b: thermal noise of the detection system contributes in addition to the shot noise; the ratio of thermal noise to shot noise is √2 : 1 at T = 0.14. Trace c: fluctuations of the radiant power of the measuring light source contribute in addition to the shot noise; the ratio of these fluctuations to the shot noise is √2 : 1 at T = 0.14. Trace d: all three noise sources contribute equally at T = 0.14. See text and Ref. [6] for further details. Redrawn from Ref. [6].
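As a numerical illustration of the transmission optimum discussed in section 4.2, the following short sketch (an editorial addition, not taken from the original article) reproduces the shot-noise-limited case of trace a. It assumes that the flash converts a fixed fraction of the molecules, so that ΔA is proportional to the total absorbance −lg T, that the signal scales as ΔP ∝ ΔA · P0 · T, and that the shot noise scales as √(P0 · T); under these assumptions S/N ∝ √T · ln(1/T), which peaks at T = e^-2 ≈ 13.5%, i.e. close to the 14% quoted above.

import numpy as np

# Relative S/N versus sample transmission T, shot-noise-limited case (trace a).
T = np.linspace(0.01, 0.99, 981)
snr = np.sqrt(T) * np.log(1.0 / T)   # proportional to dP / sqrt(P)

T_opt = T[np.argmax(snr)]
print(f"optimal transmission ~ {T_opt:.2f}")   # prints ~0.14 (exp(-2) ~ 13.5 %)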
04095803
en
[ "phys" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04095803/file/gravityessay.pdf
Daniele Funaro email: [email protected] Spacetime deformations of electromagnetic nature are far from negligible We would like to collect a series of considerations concerning the influence, which we believe is relevant, that phenomena of an electromagnetic nature have on the geometry of spacetime. The approach, supported by the critical observation of already wellknown properties and sustained by theoretical elements, leads us to attribute a primary role to electric and magnetic interactions. In fact, a seemingly small interpretative effort can help link theories elegantly and mathematically which at the moment appear independent. Einstein's equations have a validity that goes beyond the cosmological one and, with the appropriate corrections, they can clarify what really happens inside matter, starting with the bonds that dominate at the molecular level all the way up the description of the macroscopic characteristics. This analysis allows us to leave the context hitherto reserved exclusively for quantum mechanics and to place the study of the structure of matter in a more classic framework. Like any other form of energy, electromagnetic radiation is capable of warping spacetime geometry. From a purely mathematical viewpoint, this is obtained by placing to the righthand side of Einstein's equations an appropriate forcing term related to the electromagnetic stress tensor. Rainich's pioneering article [START_REF] Rainich | Electrodynamics in general relativity[END_REF], and the subsequent paper by Witten [START_REF] Witten | Geometry of gravitation and electromagnetism[END_REF] show how formal solutions can be derived from this type of approach. The results have been taken up and improved on some crucial issues ( [START_REF] Funaro | Electromagnetism and the Structure of Matter[END_REF], sections 4.5, 5.1; [START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF], appendix E) that we do not intend to discuss here in order not to make the exposition too technical. Unfortunately, the interest in this branch of general relativity has never attracted great interest, above all because the phenomenon of curvature variation associated with electromagnetic waves has never been considered quantitatively relevant. The theory was born in the gravitational and cosmological context and has always had applications in these fields as its main focus. However, as soon as you enter an atomic structure, where electric and magnetic forces are dominant, the gravitational contribution can become far from negligible. It is evident that here the term "gravitational" should not be correlated with the presence of significant masses in the way we commonly refer to. Although used incorrectly, the adjective testifies to the presence of nontrivial geometric deformations. Results and considerations in this direction have been brought to light in recent publications [START_REF] Sabín | Phonon creation by gravitational waves[END_REF][START_REF] Nicolis | Mutual interactions of phonons, rotons, and gravity[END_REF][START_REF] Kornreich | Light induced gravity phonons[END_REF][START_REF] Esposito | Gravitational mass carried by sound waves[END_REF]]. As will be better specified later on, electromagnetic-type phenomena evolve following specific geodesics "carved" in the inter-atomic lattice. 
Practical details are available in the preprints [START_REF] Funaro | The impact of a pervasive electrodynamical background on biological interactions[END_REF][START_REF] Funaro | Newtonian forces exerted by electromagnetic waves traveling into matter[END_REF], in part well formalized and in part only sketched out in a heuristic way. We will, however, remain in this Essay in a predominantly descriptive area. When we talk about the laws of gravitation, it is important to note that the concept of mass is linked to objects which, however small they may be, are nonetheless made up of a gigantic number of atoms. This means that in the classic gravitational context, the way matter is made does not come into play in the specification of the laws, implicitly treating the masses as volume integrals of a homogeneous density of mass. However, we know that this is not correct, as the actual mass is concentrated in the atomic nuclei, which occupy a much smaller portion of space than the entire atom. This disproportion radically changes the points of view, considering that an atom has an average diameter of the order of 10 -10 meters, while the nucleus has an average diameter of the order of 10 -15 meters. The ratio in terms of volume becomes 10 15 . Certainly not a trivial amount! In truth, the singular masses involved are extremely small and it is the enormous quantity of particles that generates the global mass. But it is also true that between one nucleus and the next the gravitational force density is decidedly negligible so that the volume integral is the result of a sum of an overwhelming number of peaks concentrated almost punctually. If instead we focus on quantities of an electromagnetic nature, things change. The intensity of the fields between one nucleus and another is very large, to the point of having, for example, magnetic fields of the order of Teslas. The stability of matter (which is actually almost devoid of "real" matter) is the consequence of considerable energies which, compared to almost zero average, retain strong chemical bonds. According to this interpretation, "solid" objects appear as a mixture of physical particles and energy in constant evolution. In this regard, we specify that the vacuum, whether internal or external to matter, is not actually "empty" but it is comparable to an electromagnetic background that permeates everything (see, e.g.: [START_REF] Milonni | The Quantum Vacuum: An Introduction to Quantum Electrodynamics[END_REF][START_REF] Meis | Light and Vacuum[END_REF]). This brings Einstein's equations back into play, and it is obvious that in the new context the constants involved will have to be rethought, and this will also depend on the type of material with which we are dealing. In a revised interpretation of the equations of general relativity the constants that multiply the right-forcing term will be much larger. We can even be more courageous and advance the guess that the intense spacetime deformations that take place inside the electromagnetic background, can also model with the same constants the phenomena classified up to now as essentially gravitational in nature. In fact, the very weak gravitational attraction between two large masses could be the result of more intense electromagnetic forces between the innumerable particles that make up the two bodies, whose nonlinear interactions cancel each other out, providing a result that is not null but definitely smaller. 
This would be an elegant way to address the chapter concerning the unification of gravitational and electromagnetic forces, about which we would not venture to say any more. At this point it is necessary to mention some experimental evidence to what has been stated above, namely that geometric changes take place inside matter which are not trivial at all. These deformations are not due to the presence of elementary particles as massive elements, but to an energy density which lives inside the material bodies and which is distributed over their entire volume. We claim that the tireless electromagnetic agitation of the vacuum is adapted by the presence of the atoms which display a non-stationary activity inside them, strongly dependent on the type of nuclei. When we were still talking about aether, many supporters believed that it was not at rest (in a reference system integral with the fixed stars), but that it was constantly evolving, being able to adapt dynamically by fluctuating ceaselessly among the atoms [START_REF] See | New theory of aether[END_REF]. Nowadays, there is no longer any need to suppose the existence of the aether; however, electromagnetic radiation and its associated geometry remain. The two interpenetrate each other, acting on each other and evolving like a fluid, which, although it does not contain masses, behaves as if some mass density really existed. As also underlined in [START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF][START_REF] Funaro | The impact of a pervasive electrodynamical background on biological interactions[END_REF], matter could be defined as the indissoluble union of small units of mass and charge that determine the scaffolding, plus something that could be given the name of pseudo-matter. Pseudo-matter cannot be isolated. It is mass density without a container. It is spread and takes both positive and negative values, the contribution of which, integrated in physical space and time, provides a null resultant. To say that pseudo-matter can somehow be assimilated to dark matter is perhaps a bit far-fetched, but we believe that the family it belongs to is the same, as we will analyze below. It is not necessary to choose sophisticated examples to defend the hypothesis of the existence of strong curvatures within matter. The case of diffraction is emblematic. From abstract rules of geometrical optics, mostly based on phenomenological descriptions, we move on to a relativistic interpretation, where the deviation of light is due to the presence of pre-existing geodesics. The phenomenon is nonlinear and back-reacting, i.e. the very passage of photons inside matter contributes to the variation of the geometry. We specify once again that there are no specific masses or black holes to bend the trajectories of the rays. The effect is instead due to the geometry induced by the pseudo-matter, which in turn originates from the presence of atomic nuclei and at the same time helps to keep them together. With these assumptions, even deterministic explanations are starting to come to life in a context hitherto relegated to quantum phenomena. In the double-slit experiment, the background radiation, due to the presence of closely spaced holes, assumes a conformation that already prefigures the existence of interference fringes. Particles (materials or photons) only show predetermined paths. Their dynamic behavior depends on how the geodesic system is approached. 
It is therefore not the interaction between particles (which in some experiments are only sent one at a time to the device) that produces wave interference, but the control that is carried out on them by the modified geometry around the peculiar electromagnetic framework surrounding the two slits (see figure 3.18 in [START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF]). This also explains why the observation of the phenomenon by masking (even partially) one of the slits disturbs the whole experiment. The act of "observing" has not influence on the single particle, but on the whole apparatus in such a way to reconfigure the preexisting interference fringes. The presence or absence of the particles is irrelevant. We realize we have taken a rather slippery path so we stop here with this topic and we refer to [START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF] for more explanations. The existence of electromagnetic waves (possibly mixed with particles to form a plasma) trapped by their own self-generated geodesics, may explain the ball-lightning phenomenon (see, e.g. [START_REF] Stenhoff | Ball Lightning: An Unsolved Problem in Atmospheric Physics[END_REF][START_REF] Shmatov | Advances in ball lightning research[END_REF]). The natural shape for this type of structures is the toroid, which dynamically behaves like a fluid dynamic vortex ring [START_REF] Funaro | Ball lightning as plasma vortexes: a reinforcement of the conjecture[END_REF]. If it is therefore true that electromagnetism acts on geometry in such a marked way, one can think that the action can be extended to phenomena of a classically gravitational type, such as the motion of the planets around our Sun. Magneto-hydrodynamics (see, e.g. [17,[START_REF] Davidson | An Introduction to Magnetohydrodynamics[END_REF]) studies the evolution of plasma at both the solar system and cosmological levels. A geometry altered by the presence of masses and combined with that coming from electromagnetic radiation provides new vistas. In [START_REF] Fatone | Electromagnetic fields simulating a rotating sphere and its exterior with implications to the modeling of the Heliosphere[END_REF], it is hypothesized that the Sun can impose its presence through three types of phenomena: the emission of photons, the solar wind, the generation of encapsulated and geometrically growing electromagnetic structures. Precise construction details based on the resolution of Maxwell's equations are provided for the latter. If the effect has some weight on the evolution of the planets, it would not be difficult to explain the coplanarity of their orbits and the geometric growth of their distances from the Sun (Titius-Bode law). It's time to introduce the model equations and we will do it as concisely as possible due to lack of space. However, all explanations are accessible in [START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF]. It should first be observed that the electromagnetic background contains regions where the divergence ρ of the electric field is non-zero at points where there may be no charges. This statement may seem very unconventional, but sooner or later becomes necessary to face the issue if we want to fully understand the dynamics of electromagnetic phenomena. To introduce the problem, the most basic example is that of a charged capacitor with parallel plates of infinite size. 
The charge or the distance of the plates is then altered in a uniform manner, always maintaining parallelism so as not to develop magnetic fields. Since information travels at a finite speed (that of light) it is inevitable that momentary zones are created between the plates with ρ ≠ 0, which compensate each other between positive and negative values, so that Gauss's theorem still applies to the whole system. A full discussion of this problem is available in [START_REF] Funaro | Charging capacitors according to Maxwell's equations: Impossible[END_REF]. Thus, the non-stationary electromagnetic vacuum also contains a pseudo-charge, which does not correspond to the presence of physical charges. On the other hand, in subatomic physics it is convenient to define virtual pairs of particles and anti-particles, in our opinion precisely to meet the need to give meaning to the presence of charge density without charge. We denote by E the electric field, by B the magnetic one, and by V a velocity vector field. In particular, V indicates the evolution of the electromagnetic information, which, as said, is not necessarily carried by real massive bodies like charged particles. The set of model equations in Minkowski space, coupling Maxwell's equations and Euler's equation for non-viscous fluids, reads as follows: ∂E/∂t = c² curl B − ρV, ∂B/∂t = −curl E, div B = 0 (1); ρ [ μ⁻¹ DV/Dt + (E + V × B) ] = −ε₀⁻¹ ∇p (2), with ρ = div E; c is the speed of light and ε₀ the vacuum permittivity constant. The setting is not dissimilar from those analyzed within the framework of plasma physics or magneto-hydrodynamics. The nontrivial exception is that we do not introduce any mass density here. The scalar p is a potential denoting pressure density per unit of surface, which, differently from fluid dynamics, can also attain negative values. It represents the link between electromagnetic and Newtonian-type forces. The term E + V × B recalls Lorentz's force. Finally, the constant μ is dimensionally equivalent to Coulomb/Kg. The set of equations extends the classical Maxwell's model in vacuum (just impose ρ = 0). The equations can be derived through an energy tensor, which, in addition to including the well-known electromagnetic stress, incorporates a mass tensor resembling that of a dust of particles for a perfect fluid (remember, no mass density though!). The same energy tensor is the one to plug on the right-hand side of Einstein's equation. Among the extraordinary properties of this system of equations there is the possibility of constructing explicit solutions of (1)-(2) which, despite being of the classical type, reflect all the prerogatives of photons, i.e. they travel without dispersion along straight lines at the speed of light; they behave like particles and electromagnetic waves at the same time; they respect the rules of geometrical optics. Although the theme has now become out of fashion, the result answers the age-old question of attributing a clear definition to the notion of wave-particle. Furthermore, associated with these waves, exact solutions of Einstein's equation are computable. It is also surprising to note that to construct the photon-waves it is necessary to hypothesize that inside them ρ ≠ 0; nevertheless, in the metric generated by their passage we find the condition ρ = 0, thus rediscovering the standard Maxwell's equations in vacuum.
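One consequence of system (1) is worth spelling out (this remark is an editorial addition, a short derivation from the equations exactly as written above): taking the divergence of the first equation and using div curl B = 0 gives ∂ρ/∂t = div(∂E/∂t) = c² div curl B − div(ρV) = −div(ρV), i.e. ∂ρ/∂t + div(ρV) = 0. The pseudo-charge ρ = div E therefore obeys a continuity equation and is transported along the velocity field V, consistent with reading V as the carrier of the electromagnetic information.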
All of this material is well documented and available in [START_REF] Funaro | Electromagnetism and the Structure of Matter[END_REF][START_REF] Funaro | From Photons to Atoms, The Electromagnetic Nature of Matter[END_REF]. We would like to dwell on the exposition but this is not possible in order to remain within the limits of this Essay. From these reasoning it follows that the actual matter is immersed in an electromagnetic soup which already contains the essence of the matter itself, and the laws that regulate its dynamics are the embryo of those that will become known laws in the presence of matter. In this primordial universe, the only law that is not established a priori is that of Coulomb, which will automatically derive from the basic model once elementary particles are introduced. There is no precise metric that describes this universe (bad news for cosmologists!), presenting itself as a predominantly calm ocean, but full of constantly evolving substructures at various scales of magnitude, which can be created or annihilated depending on the circumstances. The structures develop approximately respecting the basic rule that the frequencies of the vortexes involved are inversely proportional to their dimensions. Arrived at this point the reader, alongside the considerations made above which some may consider fanciful, will wish to know innovative extensions. A very recent experiment still awaiting replication would not only validate the above, but would pave the way for revolutionary applications. The recipe is to put pseudo-matter into rotation inside an asymmetrical cavity. In the specific case it is a dielectric ring with an appropriate design. The ring is stressed through a conductive winding to which a radio-frequency signal is applied. Via a high voltage power supply, a stationary electric field is also added. For an appropriate choice of the input frequency, internal Newtonian forces with non-zero average resultant begin to press on the entire apparatus, exerting a side thrust in violation of the action-reaction principle. The thrust is orders of magnitude greater than expected, somehow confirming that the constants involved need to be revisited. The laboratory test is documented in [START_REF] Funaro | An efficient ring-shaped electromagnetic thruster[END_REF], and some partial explanations in relativistic terms are provided in [START_REF] Funaro | Newtonian forces exerted by electromagnetic waves traveling into matter[END_REF], where Einstein's equations are exactly solved in simplified situations. Hopefully, future numerical approximations will provide more details. We remind that pseudo-mass and pseudo-charge are not relativistic invariants. This means that the reaction of the vacuum can be different depending on how we stress it, and therefore that we can exploit this property to our advantage (any relation to the Unruh effect [START_REF] Fulling | Nonuniqueness of canonical field quantization in Riemannian space-time[END_REF]? Or to the work of Woodward on capacitors [START_REF] Woodward | A new experimental approach to Mach's principle and relativistic graviation[END_REF]?). The distortion that the pseudo-matter in asymmetrical rotation brings to the molecular lattice is momentary. The possibility of "deceiving" the classical conservation principles may be based on the recovery delay of the initial molecular configurations and the repetition of the impulses at a rate of billions times per second. 
The ring is about ten centimeters in size and the information travels at speeds comparable to that of light. There are margins for hypothesizing that the "reaction" does not occur instantaneously but that there is room for a new "action", thus circumventing fundamental theories. The importance of this discovery is primary, for example, in the construction of the so-called "EM-thrusters" in the space sector. The moral of this article, which deliberately does not report theoretical mathematical passages, is that with common sense and a necessary correction of the gravitational constant in the case of electromagnetic fields, a sequence of practicable paths is obtained aimed at answering ancient questions without the need to introduce a "new physics". The revision requires a wide-ranging intervention that cannot be carried out by a single individual, and touches on many crucial aspects underlying theories that are considered already consolidated. However, we believe that an effort must be made in the directions indicated, in order to acquire a unified and deeper understanding of the matter that surrounds us.
04095848
en
[ "sdv.mhep" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04095848/file/These-2022-BS-Analyse_et_traitement_de_l_information_et_des_images_medicales-BRAHIM_Ikram.pdf
. Dry eye disease (DED) is also an independent multifactorial disorder with a prevalence of up to 50% [2]. The ocular surface inflammation causes discomfort, fatigue and overall, a lower quality of life [2, 3]. Traditional therapies help manage the symptoms and avoid permanent damage. Hence, it is pivotal to grade and follow the development of DED. A common drawback in existing methods that diagnose and quantify DED is reproducibility, invasivity and inaccuracy. We reviewed classical methods and those that incorporate automation to measure the extent of DED [4]. The study showed that DED has yet to benefit from what Artificial Intelligence (AI) has to offer. Using slit-lamp examinations of the ocular surface we aimed to improve the quantification of the Oxford score [5]. Our proposed method uses unsupervised learning to register frames from the examinations to a common coordinate system. By learning the camera motion and depth simultaneously we are able to track the ocular surface in 3-D, compensate for eye motion and visualise the full eye. The light source attached to the camera is a challenge and a disturbance when learning egomotion. This was solved through semantic segmentation and adding a new supervision signal: semantic reconstruction loss. We also used the advantage of estimating the shape of the eye as prior knowledge we could include as a constraint. This was implemented through a shape fitting loss; the shapes being two spheres intersecting each other. Our registration showed quantitative and qualitative improvement with each contribution. We also calculated the inter-rater reliability of the punctate dots (damaged areas) annotations. Our method came closest to what can be considered human error. The proposed registration method was also used for a pre-processing task, frame selection. Once applied to automated Oxford score classification, our method improved the results as well. The improvement validates that the strong color/illumination variances present in the examinations are a disturbance for any deep learning task. We overcame this via our contributions and proposed method. pour une tâche de prétraitement : la sélection des images à analyser. Une fois appliquée à la classification automatique du score d'Oxford, notre méthode a également permis une amélioration des résultats. Cette amélioration valide le fait que les fortes variations de couleur et d'illumination présentes dans les examens constituent une perturbation pour toute tâche d'apprentissage profond. Nous avons surmonté ce problème dans les deux tâches grâce à nos contributions et à la méthode proposée. 4 (Lymphocytes B et Autoimmunité). Also involved in a large European project, NECESSITY https://www.necessity-h2020.eu, where one of the goals is to automate the quantification of eye dryness. RÉSUMÉ Le syndrome de Sjögren est une maladie du système immunitaire dont les deux symptômes communs sont la sécheresse des yeux et celle de la bouche. La gêne occasionnée par les symptômes de sécheresse oculaire affecte la vie quotidienne, entraîne une diminution de 30% des activités et touche 95% des patients atteints du syndrome de Sjögren [START_REF] Nichols | Impact of dry eye disease on work productivity, and patients' satisfaction with over-the-counter dry eye treatments[END_REF]. Le syndrome de l'oeil sec (SOS) est également un trouble multifactoriel indépendant dont la prévalence peut atteindre 50% [2]. 
L'inflammation de la surface oculaire entraîne une gêne, une fatigue et, globalement, une baisse de la qualité de vie [2,3]. Les thérapies traditionnelles permettent de gérer les symptômes et d'éviter les dommages permanents. Il est donc essentiel de classer et de suivre l'évolution du SOS. Les méthodes existantes qui permettent de diagnostiquer et de quantifier le SOS présentent des inconvénients communs : la reproductibilité, l'invasivité et l'imprécision. Nous avons passé en revue les méthodes classiques et celles qui intègrent l'automatisation pour mesurer l'étendue du SOS : [4]. Cette étude a montré que le SOS n'a pas encore bénéficié de ce que l'intelligence artificielle (IA) a à offrir. En utilisant des examens de la surface oculaire à la lampe à fente, nous avons cherché à améliorer la quantification du score d'Oxford [5]. La méthode que nous proposons utilise l'apprentissage non supervisé pour recaler les images des examens dans un système de coordonnées commun. En apprenant simultanément le mouvement de la caméra et la profondeur, nous sommes en mesure de suivre la surface oculaire en 3D, de compenser le mouvement de l'oeil et de visualiser l'oeil entier. La source lumineuse fixée à la caméra constitue un défi et une perturbation lors de l'apprentissage du mouvement de l'observateur. Ce problème a été résolu par la segmentation sémantique et l'ajout d'un nouveau signal de supervision : la fonction de coût de reconstruction sémantique. Nous avons également utilisé la forme de l'oeil comme une connaissance a priori que nous pouvons inclure comme une contrainte. Ceci a été mis en oeuvre par une fonction de coût d'ajustement de forme ; les formes étant deux sphères se croisant l'une l'autre. Notre recalage a montré une amélioration quantitative et qualitative suite à chaque contribution. Nous avons également calculé la concordance inter-observateur des annotations de la kératite ponctuée (zones endommagées). Notre méthode est celle qui se rapproche le plus du niveau d'erreur humaine. La méthode de recalage proposée a également été utilisée TABLE OF CONTENTS Context Due to the evolution of our environment and our lifestyle, more and more people are suffering from dry eyes. According to a recent study, this syndrome affects nearly 7% of the American adult population. Among those with dry eye, some patients have an autoimmune disease called Sjögren's syndrome, which is characterized by inflammation of the salivary and lacrimal glands. In these patients, dry eye can be severe and requires symptomatic or specific treatments. However, no disease-modifying therapy has been proven to be effective in modifying the course of the disease and preventing ocular damage. The measurement of ocular dryness is mostly based on an examination of the surface of the eye (cornea and sclera) using a slit lamp (or biomicroscope). The ophthalmologist first applies a contrast medium to the cornea and then observes several signs of dryness. First, he measures the time it takes for the tear film to tear following a blink of the eye, a short time indicating a thin film. Second, it detects the areas of the ocular surface damaged by dryness. These areas appear as dots, specks or filaments and depending on their number and location, a degree of ocular dryness can be determined. The problem with these measurements is that they are not very accurate or reproducible. These limitations prevent a reliable quantification of the evolution of dry eye in a patient. 
In particular, they do not allow the effect of a treatment to be measured satisfactorily. Our objective is therefore to set up an artificial intelligence designed to perform these measurements in an accurate and reproducible manner. The objectives of this thesis are multiple and concern the automated processing of videos of the anterior segment of the eye using convolutional neural networks. A database of videos was acquired specifically for this work, during clinical studies including patients followed for a primary Sjögren's syndrome within the framework of the European project IMI2 NECESSITY. On these videos, the study of the quality of the blink and the determination of the tear time of the tear film will be the first objective. A classification taking into account the temporal context could be considered (RNN). It will then be necessary to propose a solution to automatically calculate the "Oxford score" corresponding to the location and density of ocular dryness injuries. This second part will require a recalibration between the different images of the video stream, an automatic determination of the appropriate moment for the calculation of the score and a segmentation of the visible lesions. Motivation Dry eye disease (DED) is a condition that affects the ocular surface and tear film, resulting in damage. The reason of the visual disturbance can be traced back to a range of medical disorders. The International Dry Eye WorkShop (DEWS) summarizes all of the findings with the goal of bringing existing dry eye disease knowledge up to date [2]. The most recent report creates a DED classification system. The aqueous-deficient, evaporative, or a combination of both causes the tear film to lose homeostasis, or equilibrium, as seen in this disease. DED is a global eye disorder, yet diagnostic approaches are still intrusive, and some grading is non-reproducible. The main goal of the thesis is to automate a quantification method in order to obtain a more accurate form of DED grading. We focused on providing a complete visual of the eye to help render the process more reproducible. Using artificial intelligence (AI) to its full potential given the scope of the positive results in other fields [6][START_REF] Ngiam | Multimodal deep learning[END_REF][START_REF] Litjens | A survey on deep learning in medical image analysis[END_REF]. The following thesis is a cotutelle between LaTIM (Laboratoire de traitement de l'information médicale) and LBAI Introduction existing but also at semi-automated, and fully automated methods. With this article we were able to validate that more methods have emerged in the last couple of years that incorporate automation and deep learning to evaluate and quantify DED. In the methodological background, Chapter 2, we introduce the framework we wanted to follow to help improve DED quantification. We present the methods we want to utilise, deep learning, projective geometry and lastly classification to predict the DED grade. Following this we present concepts that focus on Odometry, Visual odometry methods with deep learning and more importantly unsupervised methods. We follow this with a description of the materials in Chapter 3. Presenting data we utilised as well as the needed camera calibration and annotations. We extend some of the methods we wanted to utilise as baselines in our work, in Chapter 4. We investigate a few approaches and their essential components in line with the estimation of depth and camera motion prediction. 
Additionally, we obtained a few poor qualitative results that we wished to further explain. Chapter 5 first focuses on the development of a baseline method with the use of our evaluation metric. We make an effort to completely comprehend both the reasons why the baselines failed and how they correlate to the challenges in our problematic. To test our theories, we begin by enhancing the primary supervision signals by first adding a more significant loss than the photometric loss. We continue with our more unique loss that emphasizes the use of prior information and shape fitting and achieve better results. We move on to the prediction of the DED grade through classification in Chapter 6. Alongside work completed in an internship, we are able to investigate classical classification. We then incorporate our proposed method to improve the DED prediction. We finalize by reviewing the overall scope of the work proposed, carried out, and its obstacles. Additionally we consider the several directions and perspectives that could be taken. Chapter CLINICAL BACKGROUND "Wherever the art of Medicine is loved, there is also a love of Humanity." -Hippocrates T his chapter focuses on the definition, impact and the current state of the art of dry eye disease diagnosis. We look at current clinical diagnostic techniques as well as the rise of automation in the field. Finally, we point out the main issues in the current approaches that we will address in this work. Dry Eye Disease The international dry eye workshop updated definition of dry eye disease to the following [START_REF] Lemp | The Definition Classification of Dry Eye Disease[END_REF][START_REF] Wombat | The true meaning of 42[END_REF]: Dry eye is a multifactorial disease of the tears and ocular surface that results in symptoms of discomfort, visual disturbance, and tear film instability with potential damage to the ocular surface. It is accompanied by increased osmolarity of the tear film and inflammation of the ocular surface. With a prevalence of 5-50%, the disease impacts include discomfort, visual function and general quality of life . Representing almost 25% of the reasons for consultations in ophthalmology [START_REF] Hassani | Surface Oculaire[END_REF]. A study estimated the annual cost associated with the management of DED in six countries summarised in the table below [START_REF] Clegg | The annual cost of dry eye syndrome in France, Germany, Italy, Spain, Sweden and the United Kingdom among patients managed by ophthalmologists[END_REF]. The study notes that DED prevalence is difficult to measure because it is multifactorial with different definitions and sparse research. The lack of standardization in diagnostic tests also adds to the difficulty to conduct such analyses. It is estimated that the prevalence is expected to increase within the next 40 years [START_REF] Sergheraert | Le syndrome de l'oeil sec, une pathologie en forte progression[END_REF]. A common cause stems from the fact that we rely daily on computer screens and smartphones, which are reported to cause 30-50% reduction in blinking and therefore increasing the risk of DED. A recent cross-sectional study found that certain etiological subtypes of DED can be found certain demographic and lifestyle factors [START_REF] Wolffsohn | Demographic and lifestyle risk factors of dry eye disease subtypes: a cross-sectional study[END_REF]. DED can be classified into aqueous-deficient and evaporative. 
Evaporative DED is due to a high evaporative rate of the tear film, caused by Meibomian gland dysfunction (MGD) and lipid insufficiency. The changes in the components of the tear cause instability and goblet cell loss that are responsible for mucins production [START_REF]Tear film mucins: front line defenders of the ocular surface; comparison with airway and gastrointestinal tract mucins[END_REF]. Reduced tear production and lacrimal gland dysfunction, characterize aqueous deficient DED. Advancing age, stress and poorer health status were found to be associated to both subtypes. The association of gender was studied and showed that females were more at risk of aqueous deficient DED. Risk factors for evaporative dry eye were found to be contact lens wear, increased screen exposure, stress, age and east and south Asian ethnicity. There is a higher population prevalence for evaporative disease caused by MGD or contact lens wear [2,[START_REF] Stapleton | Tfos dews ii epidemiology report[END_REF][START_REF] Schaumberg | The international workshop on meibomian gland dysfunction: report of the subcommittee on the epidemiology of, and associated risk factors for, MGD[END_REF]. Symptoms including persistent unpleasant gravel sensation in the eye and the use of tear substitutes. Primary Sjögren's Syndrome Primary Sjögren Syndrome (pSS) is an autoimmune illness that causes dry eyes by targeting the lacrimal glands, mucous membranes, and moisture secreting glands. Various ocular surface diseases can co-exist with dry eye as well, but Dry eye disease (DED) can also be present as a separate condition as-well. One of the main causes of DED is pSS, which is an autoimmune disease that affects the lacrimal glands and results in DED, though not present in all pSS patients [START_REF] Shiboski | American College of Rheumatology classification criteria for Sjögren's syndrome: a data-driven, expert consensus approach in the Sjögren's International Collaborative Clinical Alliance cohort[END_REF]. Ten percent of patients diagnosed with DED also have pSS, this further complicates their detection as both have proven to be difficult to diagnose. NECESSITY Project The aim of the NECESSITY European project is to help identify measures that can be incorporated in clinical trials to test new pSS medicine. The goal is to help solidify clinical trials and quantify their efficacy to bring forward better developed treatments. The primary objective is aimed at evaluating drug treatments for high burden pSS patients. The second objective is to evaluate biomarkers for pSS stratification for organ involvement and disease progression, and the third and last objective is to set-up and execute clinical trials to help validate biomarkers, or newly identified clinical endpoints. With several work-packages as shown in the Figure 1.2 below, the following work takes part in the package title "Novel endpoints: Generation of Clinical End Points -WP5". WP5 also has multiple objectives and ours help develop secondary endpoints that will be innovative tools to better capture the disease progression. Clinical diagnostic methods Many clinical diagnostic methods have been developed over the past decades. 
These methods have been reviewed and categorized in recent surveys [START_REF] Van Bijsterveld | Diagnostic tests in the sicca syndrome[END_REF][START_REF] Senchyna | Quantitative assessment of tear production: A review of methods and utility in dry eye drug discovery[END_REF][START_REF] Savini | The challenge of dry eye diagnosis[END_REF][START_REF] Simpson | Dry Eye Symptoms Assessed by Four Questionnaires[END_REF][START_REF] Sweeney | Tear film stability: A review[END_REF][25][START_REF] Garaszczuk | The tear turnover and tear clearance tests -a review[END_REF][START_REF] Begley | Review and analysis of grading scales for ocular surface staining, The ocular surface[END_REF]. However, we noticed a lack of surveys that take into account the path of automation which was the main outline of our review article [4]. There are clinically used methods as well as methods that have been enhanced by using new equipment or automating the grading alongside new equipment. Given that lack of surveys we decided to review classical, semi-automated and automated methods that aim to diagnose and quantify eye dryness. Tear Secretion & Volume One way to quantify DED is to evaluate the decrease in tear secretion and volume. Dryness of the ocular surface appears through different signs and clinical tests that measure the tear meniscus shape and regularity. Reproducible tests that measure secretion and residual volume of tears are key indicators for dry eye. This is because 90% of the tear volume is found in either the superior or inferior tear menisci. A primary indicator of dry eye syndrome, is the reduction of tear secretion. A deficiency 20 in any of the layers, ultimately causes discomfort and disrupts the tear film. Estimating tear production precisely allows clinicians to follow through with a suitable course of treatment. Assessing tear secretion dates back to 1903 when Schirmer [START_REF] Schirmer | Studies on the physiology and pathology of the secretion and drainage of tears[END_REF] first presented his test. The test determines whether or not enough tears are being produced to maintain moisture in the eye. It measures both basal and reflex tears [START_REF] Senchyna | Quantitative assessment of tear production: A review of methods and utility in dry eye drug discovery[END_REF]. The test uses a small piece of filter paper measuring 35x5 mm that is placed over the lower eyelid. Timing five minutes and measuring the length of the wetted filter paper gives the tear secretion grade . Schirmer describes variations of the test including using a topical anaesthetic, and nasal stimulation to measure reflex tears. Despite the controversy and lack of reproducibility, sensitivity and specificity [START_REF] Savini | The challenge of dry eye diagnosis[END_REF][START_REF] Clinch | Schirmer's test: a closer look[END_REF][START_REF] Cho Pauline | Schirmer test. I. A review[END_REF], the test is still used frequently. The use of anaesthetic was investigated, as tear secretion was thought to decrease following its use,therefore, causing misclassification of the damaged ocular surface in staining tests if they are performed afterwards [START_REF] Senchyna | Quantitative assessment of tear production: A review of methods and utility in dry eye drug discovery[END_REF]. Limitations of the test include the testing time being too long, the discomfort it causes, and the lack of strict procedure regarding the placement of the paper strip. 
Modifications have been made to reduce the limitations without major improvements [START_REF] Clinch | Schirmer's test: a closer look[END_REF][START_REF] Henderson | Influence of age and sex on flow of tears[END_REF][START_REF] Prause | TEAR ABSORP-TION INTO THE FILTER-PAPER STRIP USED IN THE SCHIRMER-I-TEST: A methodological study and a critical survey[END_REF][START_REF] Pandher | Effect of meibomian oils on Schirmer tear test[END_REF][START_REF] Wright | A review of the Schirmer test for tear production[END_REF][START_REF] Hanson | Schirmer test of lacrimation: Its clinical importance[END_REF][START_REF] Patel | Reliability and variability of the Schirmer test[END_REF][START_REF] Halberg | Standardized Schirmer tear test kit[END_REF][START_REF] Jones | The lacrimal secretory system and its treatment[END_REF][START_REF] Doughman | Clinical tests[END_REF][START_REF] Shapiro | Schirmer test and break-up time of tear film in normal subjects[END_REF]. Variants of Schirmer's test are listed in Table 1.2. Given the limitations and lack of repeatability [START_REF] Nichols | The lack of association between signs and symptoms in patients with dry eye disease[END_REF], Nelson et al. [START_REF] Nelson | A shorter Schirmer tear test[END_REF] and Bawazeer et al. [START_REF] Bawazeer | One-minute Schirmer test with anesthesia[END_REF] both describe methods that shorten the procedure to one-minute tests in order to alleviate estimation, without and with topical anaesthetic respectively. Regarding the discomfort aspect, the filter paper was replaced by Kurihasni [START_REF] Kurihashi | A modified Schirmer test: the finethread method for measuring lacrimation[END_REF] with a fine thread that was stained with fluorescein at one end. This produced the ′ Phenol red thread test ′ [START_REF] Hamano | A new method for measuring tears[END_REF] where the wetted portion of the thread turns yellow due to the pH of the tears and the length is then measured. Further modifications to the Schirmer's test include the Fluorescein Clearance Test (FCT) that assesses the tear clearance or turnover rate. These values indicate both the tear secretion and drainage. Clearance is described as normal if the fluorescein dye is no longer detected after 20-minutes. Fluorescein clearance test consists of 1-minute Schirmer's tests performed consecutively for 30-minutes after the application of the dye [START_REF] Prabhasawat | Frequent association of delayed tear clearance in ocular irritation[END_REF][START_REF] Pflugfelder | Evaluation of subjective assessments and objective diagnostic tests for diagnosing tear-film disorders known to cause ocular irritation[END_REF]. Tear function index value was proposed by Xu et al. [START_REF] Xu | Tear function index: a new measure of dry eye[END_REF] and consisted of the Schirmer's test as well as measuring the Tear Clearance Rate (TCR). It is the rate at which the dye fades five minutes after instillation and is graded 1,1/2, 1/4, 1/8, 1/16 1/32, 1/64, 1/128 and 1/256. The tear interference device allows to non-invasively visualise the tear meniscus, first reported by Guillon et al. [START_REF] Guillon | Non-invasive tearscope plus routine for contact lens fitting[END_REF]. Strip meniscometry was investigated by Dogru et al. [START_REF] Dogru | Strip Meniscometry: A New and Simple Method of Tear Meniscus Evaluation[END_REF] that eliminates the use of fluorescein dye, or any touching of the eyelid and is performed in 5s. 
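Before turning to ocular surface damage, here is a minimal sketch (an editorial addition; it is not part of the thesis) of how a Schirmer I reading of the kind described above might be recorded and flagged. The cut-offs used (≤ 5 mm/5 min as frankly abnormal, values up to 10 mm as borderline) are commonly quoted in the dry eye literature but are given purely for illustration.

def classify_schirmer(wetting_mm, duration_min=5, anaesthetic=False):
    """Flag a Schirmer test reading (wetted strip length in mm).

    Thresholds are illustrative only; clinical criteria vary between
    studies and depend on whether topical anaesthetic was used.
    """
    if duration_min != 5:
        raise ValueError("illustrative thresholds assume the standard 5-minute test")
    label = "normal"
    if wetting_mm <= 5:
        label = "abnormal (suggests aqueous tear deficiency)"
    elif wetting_mm <= 10:
        label = "borderline"
    return {
        "wetting_mm": wetting_mm,
        "with_anaesthetic": anaesthetic,   # basal tears only if True
        "flag": label,
    }

print(classify_schirmer(4))    # abnormal
print(classify_schirmer(12))   # normal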
Ocular Surface Damage The ocular surface provides both anatomic and immunologic protection as well as functions to maintain clarity for the cornea [51]. Given the sensitivity of the structures that it helps and protects, any damage to the ocular surface can produce severe consequences. Instillation of a [START_REF] Bawazeer | One-minute Schirmer test with anesthesia[END_REF] dye causes the penetration of the lipid layer of the epithelium, and staining areas are where shed cells are highlighted. These areas show where the epithelium has been damaged. Although not a specific sign of dry eye, staining can quantify the damage done and its severity. The limitations of visual scoring and grading of ocular surface damage motivate the need for improvement. Damage to the corneal epithelial is stained and made more visible using fluorescein sodium. Instilled using paper strip or preserved doses, the staining is more visible if a yellow (blue-free) filter is placed in the slit-lamp [START_REF] Savini | The challenge of dry eye diagnosis[END_REF]. Damage to the conjunctival epithelium is more difficult to detect with fluorescein staining due to the poor scleral contrast. One examination is also not sufficient to evaluate the damaged ocular surface when using ocular staining. Another dye which is a derivative of fluorescein is Rose Bengal (RB), which is mainly used on the conjunctiva to detect damage to the epithelium [START_REF] Savini | The challenge of dry eye diagnosis[END_REF]. RB staining was shown to stain areas that lack membrane-associated mucins [START_REF] Argüeso | Mucin characteristics of human corneal-limbal epithelial cells that exclude the rose bengal anionic dye[END_REF]. Lastly, Lissamine green is a synthetic organic acid dye that is interchangeable with RB [25], though used more often since it has been proven to be less toxic and more easily tolerated. Dating back to 1882 when fluorescein was used to stain corneal abrasions [START_REF] Pflüger | Zu Ernahrung der cornea[END_REF]. Bron et al. present a more exhaustive coverage of clinical ocular surface staining [START_REF] Bron | Clinical staining of the ocular surface: Mechanisms and interpretations[END_REF]. Numerous grading scales for ocular staining have been developed as detailed by Begley et al. [START_REF] Begley | Review and analysis of grading scales for ocular surface staining, The ocular surface[END_REF]. Grading scales mentioned in Table 3.3 were studied on dry eye subjects. Another more detailed method of grading recently developed by Woods et al. [START_REF] Woods | A novel scale for describing corneal staining[END_REF] includes a grading scale of 0-100 for staining and extent (area) and 0-4 for depth. Named the CORE (Centre for Ocular Research & Education) staining scale, it can also be reported for each of the five zones (central (C), superior (S), nasal (N), inferior (I) and temporal (T)) which could aid in tracking the evolution of the damage. The Oxford scale [5] includes six grades (0-5) with the dots ordered on a log scale. Figure 2.7 is the Oxford scale that is referred to visually after the examination to determine the patient's grade. First a dye is instilled that stains the dryness on the ocular surface. The damage is then made more visible and in order to quantify it a grade can be obtained by referring to a scale during the examination. By examining the eye through a slit-lamp using standard settings the examiner starts to grade the eye, which for the Oxford score is decomposed into three panels. 
The three panels being: nasal conjunctiva, the side adjacent to the nose, the temporal that is adjacent to the temple for each of the eyes and lastly the cornea in the middle. For the examination the upper eyelid is raised to have a better complete visual of the cornea. Each of the panels, as shown in the figure are graded from 0-5 and the final Oxford score is the sum and ranges from 0-15. The grading scale is an estimated number of dots, that are of course impossible to count during the examination, and the log of the dot numbers is the final grade. Tear film stability The priority in assessing the tear film is to be non-invasive yet able to represent its instability. The quality of each layer within the tear film is important for its stability, accordingly, measuring it is essential to characterize DED. The most referenced and useful technique to assess the extent of tear evaporation is the tear break-up time (TBUT). Introduced by Norn et al. [START_REF] Norn | DESICCATION OF THE PRECORNEAL FILM: I. Corneal Wetting-Time[END_REF] the test aims to diagnose the tear films instability by instilling sodium fluoresein, and timing between the last blink and appearance of the first break or dry spot: less than 10s is abnormal, 5-10s being marginal and less than 5s suggests dry eye. By observing through a slit-lamp, the first appearance of a dry spot or a tear in the film indicates the TBUT. TBUT is performed in various ways and is being continuously modified. The main difference between the ways the test is performed is the degree of invasiveness. Current methods include instilling sodium fluorescein, and observed using a cobalt blue light with a yellow filter. On the other hand, non-invasive techniques do not involve instilling dye and instead use different diagnostic instruments. The temperature change mapped using an ocular thermogram allowed Morgan et al. [START_REF] Morgan | Infrared thermography of the tear film in dry eye[END_REF] to determine that the mean ocular surface temperature was greater in dry eye patients. Fujishima et al. [START_REF] Fujishima | Corneal temperature in patients with dry eye evaluated by infrared radiation thermometry[END_REF] determined a change in the corneal temperature using an infrared radiation thermometer. Changes in temperature with each blink was observed to be smaller in patients with dry eye. Another instrument is the keratometer which measures the corneal curvature. Observation mostly includes an illuminated grid pattern reflected from the tear surface [START_REF] Sweeney | Tear film stability: A review[END_REF]. A modified method of keratometer, proposed by Hirji et al. [START_REF] Hirji | Human tear film pre-rupture phase time (TP-RPT)-A non-invasive technique for evaluating the pre-corneal tear film using a novel keratometer mire[END_REF], includes adding a circular grid and the mean of five measurements to obtain the TBUT. Meibomian Gland Dysfunction Meibomian gland dysfunction (MGD) occurs when the Meibomian glands become blocked, resulting in a lack of oil production, disrupting homeostasis and causing the tear film to evaporate too rapidly. Tear Film and Ocular Surface Society (TFOS) completed a report on MGD in 2011, a leading cause of DED [START_REF] Nichols | The international workshop on meibomian gland dysfunction: executive summary[END_REF]. MGD treatments are often based on severity which only underlines the need for a precise diagnostic method. Xiao et al. 
[START_REF] Xiao | Diagnostic Test Efficacy of Meibomian Gland Morphology and Function[END_REF] recently addressed this through a study where patients were classified into dry eye severity level (1-4) [START_REF] Lemp | The definition and classification of dry eye disease[END_REF]. The study compares morphologic and functional parameters including length, density, thickness and quality from Meibography images. Results found that Meibograde, gland distortion and MG length differentiate between DED and non DED. Other diagnostic methods Different approaches that detect symptoms of dry eye can also be used to quantify the severity of the disease. A very simple method that is employed clinically is questionnaires: Dry Eye questionnaire, McMonnies questionnaire, and Ocular Surface Disease Index (OSDI) are assessed by Simpson et al. [START_REF] Simpson | Dry Eye Symptoms Assessed by Four Questionnaires[END_REF]. The study shows that responses from the three questionnaires met the Rasch analysis criterion of unidimensionality, where the Rasch analysis represents a certain structure for a data that enables a successful measurement. A detailed review of existing questionnaires in Japan providing their contents and characteristics is assessed by Shiraishi et al. [START_REF] Shiraishi | Assessment of Dry Eye Symptoms: Current Trends and Issues of Dry Eye Questionnaires in Japan[END_REF]. Several other reviews assess dry eye questionnaires including [START_REF] Smith | The epidemiology of dry eye disease[END_REF][START_REF] Begley | Use of the dry eye questionnaire to measure symptoms of ocular irritation in patients with aqueous tear deficient dry eye[END_REF][START_REF] Vitale | Comparison of the NEI-VFQ and OSDI questionnaires in patients with Sjögren's syndrome-related dry eye[END_REF][START_REF] Chalmers | Validation of the 5-Item Dry Eye Questionnaire (DEQ-5): Discrimination across self-assessed severity and aqueous tear deficient dry eye diagnoses[END_REF][START_REF] Grubbs | A review of quality of life measures in dry eye questionnaires[END_REF]. Another method is to estimate tear osmolarity and protein concentration in tear film to quantify dry eye. An osmolarity referent of 316 mOsmol/L found to be a good cutoff value to determine tear hyperosmolarity [START_REF] Tomlinson | Tear film osmolarity: determination of a referent for dry eye diagnosis[END_REF]. Automated osmolarity is measured often with a tear osmometer as presented by Suzuki et al. [START_REF] Suzuki | Tear Osmolarity as a Biomarker for Dry Eye Disease Severity[END_REF], where it correlated with Schirmer's test (r=-0.52) and a microfluidic approach by Karns et al. [START_REF] Karns | OPHTHALMOLOGIST-ON-A-CHIP: FULLY INTE-GRATED MICROFLUIDIC TEAR OSMOLARITY AND PROTEIN BIOMARKER QUANTIFICATION FOR DRY EYE STRATIFICATION[END_REF]. An automated evaluation has been made possible using the TearLab Osmolarity System (TearLab Corp., San Diego, CA) which facilitates tear sample collection and displays the result of the ion concentrations. A more precise method of analysis of the tear film focuses on the primary component of the external layer, the meibum. Various collection and quantification methods are addressed by Pucker et al. [START_REF] Pucker | Analysis of Meibum and Tear Lipids[END_REF]. The change in protein expression of tear film proteins was studied by Srinivasan et al. [START_REF] Srinivasan | iTRAQ Quantitative Proteomics in the Analysis of Tears in Dry Eye Patients[END_REF] in DED and non DED patients. 
Isobaric tags for relative and absolute quantitation (iTRAQ) were used, followed by the Protein Information Resource (PIR) to interpret pathways and protein functions. Differences were detected between the two groups that correlated with Schirmer's and OSDI scores.

Diagnosis problematics
Although clinical diagnostic tests vary and there are many ways to detect the disease, there is no standard test to this day [START_REF] Brewitt | Dry eye disease: the scale of the problem[END_REF]. Clinical diagnostic tests are limited in terms of quantifying dry eye, which inherently makes dry eye severity assessment a challenging task as well. Measuring the extent of dry eye has also proven to be difficult. Most classical methods lack efficiency in certain aspects, including repeatability, accuracy and reproducibility. The complexity of the disease causes varied signs and symptoms, and marked changes with seasons, time of day and, ultimately, eye care examinations [START_REF] Zeev | Diagnosis of dry eye disease and emerging technologies[END_REF]. There are also asymptomatic patients that are overlooked, and physical conditions, such as floppy eyelid syndrome, lid imbrication syndrome, and conjunctivochalasis, that cause misdiagnosis of DED [START_REF] Brooke Henson | Diagnosis and overlooking asymptomatic patients with DED June 2021[END_REF].

Conclusion
With the evident burden dry eye places on patients, it is becoming more critical to standardize the diagnosis. It is currently necessary to look at patient history and to use at least two clinical examinations to assess patients. There is a clear motivation for automating DED diagnosis in order to better evaluate current patient treatment. The review we conducted of classical, semi-automated and fully-automated diagnostic methods displayed a clear trend towards the incorporation of artificial intelligence [4]. We move on in the next chapter to detail the key elements we will use in this work.

Chapter METHODOLOGICAL BACKGROUND
This chapter discusses the main objectives of this research in a detailed manner. We also take a look at the key elements used to investigate a new method for DED quantification. The chapter also details the state of the art of these key elements.

Introduction
The objective of this research is to use the examinations for dry eye diagnosis together with artificial intelligence and obtain a new method that automates the quantification. The dataset is a collection of slit-lamp video examinations of DED along with annotations of the Oxford grade shown in Figure 2.7. To simplify the problem we process the frames from these videos to first register them to a common coordinate system. One of the ways we evaluate our proposed solution is by predicting the Oxford grade for each patient. There are three key elements that make up the following thesis: 1) artificial intelligence (deep learning), which is used for both 2) projective geometry (image registration) and 3) Oxford grading (classification).

Deep Learning
Deep learning (DL), the most popular branch of AI, is already a pillar of automated methods in various fields [6,[START_REF] Ngiam | Multimodal deep learning[END_REF] as well as medical image analysis [START_REF] Litjens | A survey on deep learning in medical image analysis[END_REF]. DL is a variant of the traditional neural network, but it performs much better than its foundations. Convolutional neural networks (CNNs) are widely used in the field of deep learning.
The main advantage of CNNs over classical methods, is that they identify relevant features without any human supervision. They have been applied in a range of different domains, including computer vision, speech processing, facial recognition, etc [START_REF] Alzubaidi | Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions[END_REF][START_REF] Young | Recent trends in deep learning based natural language processing[END_REF][START_REF] Adeel | Contextual deep learning-based audio-visual switching for speech enhancement in real-world environments[END_REF][START_REF] Tian | Evolutionary programming based deep learning feature selection and network construction for visual data classification[END_REF]. The global AI in healthcare market is projected to grow almost 46%, reaching USD 95.65 Billion by 2028, up from USD 6.60 Billion in 2021 [START_REF] Research | statistics: U.S. and Global Artificial Intelligence (AI) in healthcare market[END_REF]. Researchers have experimented on CNNs to help in disease detection, recognition and ultimately to optimise interpretation [START_REF] Serte | Deep learning in medical imaging: A brief review[END_REF]. Some of the applications that have been examined and enhanced in medical imaging using CNNs include: classification, object detection, image segmentation, image generation and transformation [START_REF] Kim | Deep learning in medical imaging[END_REF]. Convolutional, pooling, and fully linked layers make up CNN's architecture (Fig. 2.3). A convolutional layer's main objective is to find distinct local edges, lines, and other visual components. Convolutions are specialized filter operators whose parameters are learnt. This mathematical process denotes the multiplication of a particular pixel's immediate neighbors by a tiny set of previously learnt parameters known as a kernel. This procedure imitates the extraction of visual characteristics, such as edges and colors, similar to that observed for the visual cortex, by learning relevant kernels. Filter banks can be used to complete this procedure. Each filter can be seen as an item with a square form that moves over the provided input or image [START_REF] Lee | Deep learning in medical imaging: general overview[END_REF]. We believe that the application of artificial intelligence (AI) in medical imaging will operate as a collaborative tool for reducing the strain and distraction from numerous mundane and repetitive jobs. This is why our first and key element for this research is to accomplish the objective by employing deep learning techniques. Learning methods The most common learning method used in AI systems is supervised learning, which involves feeding the learning system "labeled" training data (data samples paired with the relevant class or label), in order to train the model. Finding a relation that converts each input of the training set (the data) into an output is the learning system's goal (the label). In medicine, using various imaging modalities, the output label might be anything from the disease diagnosis to the patient's state (such as the disease stage at a specific follow-up period) to the treatment outcome. The distinctions between supervised and unsupervised architectures are shown in Figure 2.5. In both approaches, the inputs are fed and then adjust the network weights to reduce errors between the produced output and the predicted result (supervised learning) or between the similarity of the input signals and the output. 
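To make the convolution, pooling and fully connected pipeline and the supervised weight-update loop described above concrete, here is a minimal, self-contained PyTorch sketch. It is illustrative only: the layer sizes, the six-class output and the random tensors standing in for labelled images are assumptions for the example, not the architecture or data used in this work.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution -> pooling -> fully connected, as described above."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # bank of learned 3x3 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                               # spatial pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One supervised step: labelled images -> prediction -> loss -> weight update.
model = TinyCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(4, 3, 224, 224)          # placeholder batch of RGB images
labels = torch.randint(0, 6, (4,))            # placeholder class labels
loss = nn.CrossEntropyLoss()(model(images), labels)
optim.zero_grad(); loss.backward(); optim.step()
```

Each convolution learns a small bank of 3x3 kernels, mirroring the filter-bank description above, and the single optimizer step shows how labelled data drive the weight update in supervised learning.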
Unsupervised learning minimizes error by comparing the various inputs, but supervised learning necessitates labels for error optimization. Additional methods can also be utilized, such as semi-supervised learning, which combines supervised and unsupervised learning by labeling only a portion of the training data [START_REF] Bishop | Pattern recognition and machine learning[END_REF][START_REF] Castiglioni | AI applications to medical images: From machine learning to deep learning[END_REF]. Semi-supervised learning is a third method where the input dataset is a mix of labelled and unlabelled inputs. To address their main problems, semi-supervised learning bridges supervised learning and unsupervised learning methods. A self-training method is a simple method of semi supervised learning where initially a model is trained on a small sample of labeled data before expanding it repeatedly to a larger sample of unlabeled data. An improved version is co-training [START_REF] Ning | A review of research on co-training[END_REF], a semi-supervised technique, where two individual models are trained based on two views of the data. This means that the datasets have different sets of features that can stand alone and reliably train a model. There are various learning methods, Figure 2.4, and a subcategory of unsupervised learning is self-supervised learning [START_REF] Cheplygina | Not-so-supervised: a survey of semisupervised, multi-instance, and transfer learning in medical image analysis[END_REF]. This refers to using data that doesn't have manually added labels 'pseudo-labels'. Models are trained to learn good representations of objects, this is called a 'pretext task'. A pretext task is one with artificially created labels, or 'pseudo-labels'. Finally a downstream task is a task that evaluates the quality of features learned by self-supervised learning. This is when a model that has been previously trained (self-supervised learning) or simply components are used to perform tasks such as image recognition or segmentation in a supervised learning pipeline. Projective Geometry We perceive our three dimensional world in 2-D, and what we perceive is a succession of 2-D projections. In the case of using a camera, as it moves we need to estimate this motion and this is possible through projective geometry. We are also able to model the scene through these 2-D acquisitions. In projective geometry, projective transformation depicts objects from different points of view. We have affine geometry which includes Euclidean geometry. In order to model a camera imaging process, we often refer to projective geometry since it includes a larger class of transformations. Projective transformations also preserve type, incidence and cross ratio but not size nor angles [START_REF] Wang | Guide to three dimensional structure and motion factorization[END_REF]. These are important properties in projective geometry, where incidence is the heterogeneous relation between a point and a line. In 3D space P 3 , the homogeneous coordinates of a point is represented by a 4-vector as X = [x1, x2, x3, x4] T which is defined up to a scale since X and sX(s ̸ = 0) represent the same point. A plane in 3D space P 3 can be formulated as : ΠX = π 1 x 1 + π 2 x 2 + π 3 x 3 + π 4 x 4 = 0 (2.1) Determining three-dimensional information from two-dimensional images is referred to as the reconstruction problem. 
We want to determine the 3D scene and the camera viewpoints from a series of images [START_REF] Laveau | Oriented projective geometry for computer vision[END_REF][START_REF] Wang | Guide to three dimensional structure and motion factorization[END_REF][START_REF] Rother | Multi-view reconstruction and camera recovery using a real or virtual reference plane[END_REF]. A single formula allows for a straightforward formulation of this problem. A scene is made up of a collection of 3D points. The projection of the 3D point X to the image point x, using a camera with Cartesian centre Q and matrix H, is:

x = H(X - Q) (2.2)

A simplification of a real-world camera is the pinhole camera, which we consider as a central projection device from the Euclidean space E^3 onto the image plane E^2. Mapping the 3D point X onto a point x on the image plane is written as:

X = (X, Y, Z, 1)^T ↦ x = (x, y, 1)^T (2.3)

x = fX/Z and y = fY/Z (2.4)

The centre of projection is the camera centre Q, and the focal length f is the distance between the camera centre and the image plane through Q. The aspect ratio r and the skew s of a pixel are also parameters of the camera model. The optical axis of the camera is defined as the line that is perpendicular to the image plane and passes through its centre. The principal point x_0 of the camera is where the optical axis and the image plane cross. The pinhole camera is shown in Figure 2.6(a), where the principal plane is z. The camera coordinate system can be assumed to align with the world coordinate system, but in general it is rotated and translated with respect to it (Figure 2.6(b): world coordinates to camera coordinates [START_REF] Rother | Multi-view reconstruction and camera recovery using a real or virtual reference plane[END_REF]).

$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \sim \begin{pmatrix} f & s & x_0 \\ 0 & rf & y_0 \\ 0 & 0 & 1 \end{pmatrix} \left( I_{3\times 3} \mid 0_{3\times 1} \right) \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \quad (2.5)$$

The matrix K is the calibration matrix that contains the intrinsic camera parameters, which are unique to each camera and constant over time. These parameters are determined by camera calibration, discussed in detail in Chapter 3. The extrinsic camera parameters are R and Q, which change over time.

x ∼ K (I_{3×3} | 0_{3×1}) X (2.6)

x = K[R|t] X_world_coord ∼ P X_world_coord (2.7)

Classification: Oxford grade
One of the ways to assess DED is through the damage caused, which is made visible by staining the ocular surface. Dyes highlight the damaged ocular surface of the conjunctiva and cornea. The damage on the surface, referred to as punctate dots, can be very hard to count, and the current grading scales that describe the patient's state are very broad. These were detailed in section 1.3.2. One of the methods we detailed is the Oxford scale [5], which includes six grades (0-5), where the numbers of dots are ordered on a log scale. Figure 2.7 shows the Oxford scale that is consulted visually during the examination to determine the patient's grade. Figure 2.7 - The Oxford grading scale [5]. In order to evaluate our improved method of quantification, we implement an Oxford grading classification as an evaluation method. Given the material and annotations detailed in Chapter 3, we are able to evaluate using deep learning methods. Ultimately, to accurately grade the damage to the ocular surface, a full 'eye map' is needed: a map where all the punctate dots are visible, to avoid disregarding some or overestimating the damage.
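Returning to the camera model above, the projection of Equations 2.3 to 2.7 can be sketched numerically as follows. This is a minimal NumPy illustration with placeholder intrinsics (f, s, r, x_0, y_0) and an identity pose; it is not the calibration of the slit-lamp camera used in this work.

```python
import numpy as np

# Illustrative intrinsics: focal length f, skew s, aspect ratio r,
# principal point (x0, y0). Values are placeholders, not calibrated ones.
f, s, r, x0, y0 = 800.0, 0.0, 1.0, 320.0, 240.0
K = np.array([[f, s,     x0],
              [0, r * f, y0],
              [0, 0,      1.0]])

R = np.eye(3)                                  # camera aligned with world frame
t = np.zeros((3, 1))
P = K @ np.hstack([R, t])                      # x = K [R|t] X  (Eq. 2.7)

X_world = np.array([0.1, -0.05, 2.0, 1.0])     # homogeneous 3D point
x_h = P @ X_world
u, v = x_h[:2] / x_h[2]                        # perspective division: u = fX/Z + x0
print(u, v)
```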
Acquiring a full mapping can also help in evaluating a patient's progress given the medical treatment they are following. The concept of classifying images involves taking an input, such as an image, and producing a class, or a probability that the input belongs to a specific In the rest of the chapter we will look at AI solutions that exist for image registration, and classification. First taking a look at the exact problem we want to solve given the assumptions we made: 1. Movement observed is due to camera motion relative to the eye ⇒ multiple view geometry 2. Only data available are the examination videos ⇒ unsupervised learning Problem : -Learn to register multiple camera views to one coordinate system in an unsupervised manner. More specifically, we focus on AI solutions that help learn image registration from multiple camera views in an unsupervised manner. Other AI solutions we look into are image segmentation and classification. Image segmentation is used to identify objects and boundaries (such as lines, curves, etc.) in images. Classification is the process of identifying to which predetermined categories images or pixels pertain to. We begin the methodological background by looking at approaches that aim to quantify DED using AI. Deep Learning-based Dry eye quantification As we mentioned in Chapter 1, our extensive review article includes clinical diagnostic methods, semi-automated and fully-automated methods. For DED diagnosis some of the fullyautomated methods were deep learning based. We go into more depth about some of the deep learning approaches we looked at below, since deep learning will be used throughout our work Both methods that are in line with the imaging system we have, cameras attached to slitlamps, as well as the method of diagnosis we wish to focus on, rely heavily on annotations. Unfortunately, they are both time and data intensive approaches. Although we only highlight deep learning-based solutions in this section, our review article [4] of DED diagnosis methods includes other fully automated approaches as well. Odometry Odometry is the process of estimating changes in location over time using data from motion sensors. Some wheeled or four-legged robots in robotics use it to calculate their position in relation to a starting point. We focus on visual odometry where the data inputs are visual elements from associated camera images. Firstly we take a look at the definition and aims of Visual Visual Odometry Many applications in computer vision depend on camera ego-motion estimation, also known as Visual Odometry (VO) [110]. Over the years VO has become a viable method for vehicle localisation using a stream of images captured by the camera attached to a vehicle. The applications of VO include not only autonomous driving but also medical robots, augmented and virtual reality. Optical cameras are one of many sensors employed and has been of great interest for visual-based localization [111]. It is also an inexpensive alternative, that is comparatively more accurate [112]. Given the nature of our problem we focused on optical camera vision based methods. Types of camera/data sensor used for VO estimations are: 1. Stereo-camera -Also referred to as a binocular camera, it has two lenses which separate image sensors for each lens. Monocular cameras -Only a single camera sensor or lens is present. Stereo or monocular omni-directional cameras -omni-directional cameras have a very wide field of vision, and therefore more information. 
RGB-D cameras -Both color and dense depth images can be obtained from RGB-D (color-depth) cameras in real time. Depth information is obtained through the 3D depth sensor. Each of these camera sensors have advantages and disadvantages. There are various VO estimation methods that can be implemented with any of the camera sensors. These are considered as a module for Visual-SLAM, which we will discuss in detail in the coming section. Feature-based methods These type of methods rely on extracted image features (corners, lines, curves) and either matching or tracking distinctive features throughout a set of frames. The images are matches by comparing the features and measuring the Euclidean distance of the vectors to match the features. The displacement is then measured by calculating the velocity vector between the matches feature points. Camera motion is also estimated by measuring the relative pose of the camera through geometry transformations between two images. A common computational method to match feature points is by determining nearest neighbour pairs from feature descriptors [110]. The extracted feature points are used to project two-dimensional points. Most VO implementations assume a calibrated camera, or require this to be performed beforehand. Algorithms that detect, describe, and match local features often have a high computational cost. Three main feature point extraction methods are detailed below : Recently, it seems that direct approaches could use the geometry and grayscale information directly from the image pixels to generate the error function through the graph optimization to minimize the cost function, thereby achieving the ideal camera pose. Large-scale map problems with pose graph are addressed using these techniques [122,129]. Hybrid of feature-and appearance-based methods Feature-based and direct based methods have great advantages, a hybrid semi-direct method was proposed named semi-direct visual odometry (SVO) [130]. This method uses direct methods to obtain the pose, but also depends on characteristics of consistency. It uses a probability model for depth estimation and the deep filtering is based on a mixed model of Gauss and homogenous distribution. In SVO, first the direct method is used to solve pose matching, then the Lucas-Kanade optical flow matching [131] to obtain subpixel accuracy and finally the reprojection error minimization is optimized by combining the point cloud map. The method relies on selecting key frames and does not directly match the whole image but instead extracts an image block to obtain the camera pose. In conclusion, VO is considered a building block of visual SLAM. Although direct based methods are popular the key issues remain the lack of speed and consistency. Feature-based and hybrid semi-direct based methods are able to build sparse maps and direct methods can build semi-dense maps. Visual-SLAM We now take a look at the next general concept that includes visual odometry algorithms, visual SLAM. Initialization is used to determine the global coordinate system to build an initial map which is then used to track and map. Tracking estimates the sensor's pose continuously and establishes a correspondences between the frame and the map. This problem is called perspective-n-point, which is to estimate the pose of a calibrated camera from a set of n 3D points in the world coordinate system and 2D projected points in an image. 
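As an illustration of the perspective-n-point problem mentioned above, the following OpenCV sketch recovers a camera pose from four coplanar 3D points and their 2D projections. The point coordinates and the intrinsic matrix are made-up values for the example, not measurements from our system.

```python
import cv2
import numpy as np

# Four known coplanar 3D points (world frame, z = 0) and their 2D projections.
object_points = np.array([[0, 0, 0], [1, 0, 0],
                          [1, 1, 0], [0, 1, 0]], dtype=np.float32)
image_points = np.array([[320, 240], [400, 242],
                         [398, 320], [322, 318]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5)                     # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)             # rotation vector -> rotation matrix
print(ok, R, tvec)                     # estimated camera pose w.r.t. the points
```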
Mapping is the process of computing and expanding the 3D environment, and can result in sparse, semi-dense, or dense 3D reconstruction. The table below summarizes the visual-SLAM methods discussed briefly in this section. The nature of the method lines up with our objective of using deep learning to obtain a reconstruction of the environment. CNN-SLAM [137] is a supervised method but it is the ideal framework for us to follow and the first algorithm that incorporates convolutional neural networks to solve the SLAM problem. Structure from Motion The field of structure from motion (SfM) which is a more general concept encloses visual-SLAM and visual odometry. SfM is the notion of creating a map using several images taken from different perspectives, or even different cameras. The 3D structure of the environment can be determined from a set of multiple overlapping images from a moving camera as shown in figure 2.25. The principle of SfM remains that the location and pose of the camera (s) must be known. In some cases the absence of GPS or sensors that give such information, triangulation is used to reconstruct scene geometry. This brings us back to the the requirement of projective geometry which is the basis of solving this problem. Some of the steps in an SfM offline algorithm often mimic the building blocks of a fundamental SLAM algorithm. and motion with the highest likelihood given just the 2D measurements entails combining all potential 3D feature mappings to 2D measurements. An technique that iteratively refines a probability distribution across the set of all correspondence assignments was used to get the desired result. The resulting method is quick, straightforward, and easy to use. An approach to estimate structure and motion using a series of photos gathered in a causal manner was given by Jin et al [127]. A class of geometric and photometric models for the scene that may be finitely parameterized is used by the algorithm to integrate visual input. A closed loop is created by monitoring the picture region and combining 3D motion estimates. They framed the SfM issue within the context of nonlinear filtering. By recreating the state of a nonlinear dynamical system using an extended Kalman filter, the unknown structure and velocity are estimated. Furthermore, they have demonstrated that the dynamical system is observable when the translational velocity is non-zero, the scene comprises at least two planar patches, each with a distinct normal direction. The algorithm's recursive structure makes real-time implementation possible. Agarwal et al. [144] developed a method to convert massive, disorganized collections of pictures into 3D geometry. The system uses image matching and 3D reconstruction methods that maximize parallelism at each pipeline level. The system is broken down into two parts: (1) Pre-processing, where photos are stored centrally and then delivered to cluster nodes as needed in chunks of a specific size. Conclusion More detailed surveys on all three fields, and interesting literature that details these offline algorithms can be found in [152-159]. These methods are not only more time-consuming, but also more complex to implement in order to fix our issue. Additionally, they rely significantly on offline knowledge and standardized equipment. Even in this field, there is a clear trend toward incorporating deep learning to improve the accuracy and automation of camera motion estimations. 
The tendency is also supported by enhanced performance, reproducibility, and accessibility. Therefore, as mentioned in the objectives of this thesis, we want to solve our VO problem with deep learning. We focused our attention on visual odometry with deep learning, looking at proposed methods in detail, their applications and what is prominent in this division of the field.

Visual Odometry with Deep Learning
As we have seen, the classical geometry-based VO field is impressive and has progressed over the years. The robustness of these methods continues to be a challenge in certain environments. A widely used evaluation protocol with five indicators (RMSE, RMSE log, Abs Rel, Sq Rel, and Accuracies) is provided in [166] in order to assess and compare the performance of various depth estimation networks. These indicators are as follows:

- RMSE = \sqrt{\frac{1}{|N|} \sum_{i \in N} \lVert d_i - d_i^* \rVert^2},
- RMSE log = \sqrt{\frac{1}{|N|} \sum_{i \in N} \lVert \log(d_i) - \log(d_i^*) \rVert^2},
- Abs Rel = \frac{1}{|N|} \sum_{i \in N} \frac{|d_i - d_i^*|}{d_i^*},
- Sq Rel = \frac{1}{|N|} \sum_{i \in N} \frac{\lVert d_i - d_i^* \rVert^2}{d_i^*},
- Accuracies: % of d_i s.t. max(d_i / d_i^*, d_i^* / d_i) = δ < thr,

where d_i is the predicted depth value of pixel i, and d_i^* stands for the ground truth depth. N is the set of pixels with real depth values, and thr is the threshold.

Unsupervised deep learning-based methods
The geometric restrictions between frames are used as the supervisory signal during training of the unsupervised methods, in place of the ground truth. Following a stereo-camera baseline, in the overlapping area between two stereo images each pixel in one image can find its correspondence in the other at the horizontal distance H [157]:

H = Bf / D, (2.9)

where B is the baseline of the stereo camera, f is the focal length, and D is the depth value of the corresponding pixel. This can be translated to monocular systems. A fundamental unsupervised model is as follows: the geometric constraints for unsupervised algorithms are based on the projection between adjacent frames, and they are learned from monocular image sequences:

p_{n-1} ∼ K T_{n→n-1} D_n(p_n) K^{-1} p_n, (2.10)

p_{n-1} ∼ K \hat{T}_{n→n-1} \hat{D}_n(p_n) K^{-1} p_n. (2.11)

The supervision signal, which serves as the geometric constraint, is calculated as the photometric error between corresponding pixels. Inspired by [178], the reconstruction loss is formulated as:

L_{vs} = \frac{1}{N} \sum_{p} |I_n(p) - \hat{I}_n(p)|, (2.12)

where p indexes over pixel coordinates and \hat{I}_n(p) is the reconstructed frame.

Conclusion
In this chapter we first presented the three main concepts we will exploit in this work. We also presented existing deep learning-based methods for DED quantification. There is a contrast between the existing methods and the concepts we wish to utilise, which have yet to be deployed for DED diagnosis. We detailed the existing methods for odometry and visual odometry, which is a very rich field. Lastly, we presented the most recent contributions in the field with deep learning. We observed a general tendency toward the use of deep learning-based methods, and we concentrate on unsupervised methods since we believe they are especially suitable for our work. Therefore we will learn to predict the camera movements from our data without the use of any sensors. We continue on to detail the materials in the following chapter to look at the resources we are using for this thesis.

This chapter details the materials collected and used for this thesis. We look at the different protocols and the acquisition methods used to obtain the database.
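For later reference, the five depth evaluation indicators introduced in the previous chapter can be written compactly as a single NumPy function. This is a sketch of the standard definitions, assuming strictly positive predicted depths; it is not the evaluation script used in our experiments.

```python
import numpy as np

def depth_metrics(d_pred, d_gt, thr=1.25):
    """RMSE, RMSE log, Abs Rel, Sq Rel and threshold accuracy for one depth map."""
    valid = d_gt > 0                            # pixels with a real depth value
    d, g = d_pred[valid], d_gt[valid]
    rmse = np.sqrt(np.mean((d - g) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(d) - np.log(g)) ** 2))
    abs_rel = np.mean(np.abs(d - g) / g)
    sq_rel = np.mean((d - g) ** 2 / g)
    delta = np.maximum(d / g, g / d)
    acc = np.mean(delta < thr)                  # fraction of pixels within the threshold
    return rmse, rmse_log, abs_rel, sq_rel, acc
```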
Calibration of the camera was also necessary for our framework. We discuss the methods and results of camera calibration and conclude with the finalised dataset. Introduction When LaTIM started studying the automation of DED diagnosis (Master 's thesis funded by Laboratoires Théa in early 2019), a preliminary protocol was set up and a dataset of multifactorial DED patients was collected. This initial dataset is referred to as Original database 'O'. In the framework of Necessity1 , there was a prospective cohort PEPSS (Recueil des symptômes et évaluation de la sécheresse oculaire : développement de nouveaux outils pour le syndrome de Sjögren primitif) that evaluated the ocular surface damages in patients with Sjogren's syndrome. This was a slightly improved protocol where we also collected a dataset of DED examinations from Sjogren's syndrome patients. This second dataset is referred to as PEPSS database 'P'. Both datasets have similarities, and so we were able to use them jointly in some experiments All examinations were conducted at the Service d'Opthalmologie, Brest University Hospital Centre (CHRU Brest) by Opthalmologists Dr. Anas-Alexis Benyoussef and Dr. Beatrice Cochner. We will go into further details on how we used each of these for our experiments and proposed method in Chapter 5. Acquisition Method The videos were recorded using the Haag Streit BQ 900 slit lamp2 and the camera module CM 900 (resolution : 1600 × 1200pixels, 12f rames/second). A Galilean microscope with a magnification range of 6.3 to 40 that is adjustable in 5 fixed steps is included as standard equipment with the BQ 900 as shown in Fig. Original Database 'O' The collection of the original database was performed at the CHRU Brest service. The dataset contains two magnification settings: x10, x16. During these examination multiple settings were tested until a satisfactory acquisition was obtained that allowed a complete DED grading. This was done by Dr. Benyoussef, Mathieu Lamard, and Bendy Latortue who worked giving a clinician's and an engineer's input. Patients that were examined had DED symptoms and so the protocol was the following : 1. Blue light yellow filtered fluroscein sodium staining : (a) Three blinks with a pause to view the tear film breaking (TBUT diagnosis) (b) Cornea grading (c) Temporal and nasal conjunctiva grading. 2. White light lissamine green staining: (a) Temporal and nasal conjunctiva grading. PEPSS Database 'P' For the examinations under the PEPSS study was set to x 10. This study included patients that had been diagnosed with Sjogren's syndrome. The study also included multiple diagnostic examinations specific to Sjogren's syndrome. Given that dry eye is also a principal symptom the ocular surface was analysed and the damage was graded through staining examinations. This included illumination with white light (lissamine green evaluation) followed by cobalt blue light and interposition of a yellow filter (fluorescein evaluation). There were two clinical diagnostic tests for DED, tear break up time measurement and superficial punctate keratitis (SPK) grading that can be done using various grading scales as mentioned in Chapter 1. The protocol followed was the following: 1. Schirmer's test The staining grade was given using the modified Oxford scale for both cornea and conjunctiva, shown in Figure 3.2. OSDI is a 12-item questionnaire that assesses dry eye symptoms where patients rate their responses on a 0 to 4 scale and the final score ranges from 0 to 100. 
Camera Calibration
We performed camera calibration using two different checkerboards, one with 8 x 7 squares and one with 9 x 10 squares, both with 1 mm squares. In order to compare the two methods, MATLAB and OpenCV, we calculated the re-projection error, which is commonly used to evaluate the calibration of a single camera. Camera calibration is done by measuring feature points with a known spatial relation to each other. Assuming a pinhole camera model, we use the checkerboard size to detect the points, the origin and, ultimately, the re-projected points. Once the feature points are detected we measure their spatial inter-relation; the more valid images we have, the better our estimates of the intrinsic and extrinsic parameters are. We then re-project the feature points onto the scene using the camera model and compute the re-projection error as an average L2 norm over all point correspondences:

Reprojection error = \frac{1}{N} \sum_{i=0}^{N-1} \lVert p_i - q_i \rVert^2 (3.1)

where p_i are the observed points and q_i are the feature point locations predicted on the image plane. For both image sizes the lowest re-projection errors we obtained were with the Python (OpenCV) calibration implementation, although the values were fairly close with the MATLAB toolbox.

Dataset
The original database of videos, which we labelled 'O', contains 79 videos of unique eyes. Although they were of poor quality and did not follow the same protocol as those acquired for the PEPSS study, they were still useful for various tasks that will be detailed in Chapter 5. They lack ground truth annotations, however.

Ground truth annotations: PEPSS Database 'P'
Considering the DED-specific diagnostic tests conducted under the PEPSS study, we obtained various ground truth annotations for all the data. Figure 3.9 shows a normalized box-plot of these grades. We wanted to focus on using the Oxford score for our application, where the data seemed to be variable. For more precise annotations we asked five experts and Dr. Benyoussef to manually annotate the punctate dots. The damaged areas that appear in fluorescent green are what the Oxford score aims to quantify. In order to make this more precise we obtained exact coordinate locations. After the first annotations of the areas on two frames, the following experts were given the annotations of the first frame and asked to annotate the second frame by finding the same areas. This allowed us to estimate what we considered a human error. Taking the primary coordinates as ground truth, we compared all the others to them and calculated the difference between all the graders through the Euclidean distance, Eq. 3.2. The results are shown below:

Euclidean distance = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} (3.2)

where x_1, y_1 are the ground truth points, and x_2, y_2 are the secondary grader's points we want to compare.

Conclusion
In this chapter we detailed the two databases we have and the camera calibration we implemented. Given that both databases use the same camera, the intrinsic parameters we obtained can be applied when using either database. Following this, we go on to a few main baselines and analyse their main contributions and how they relate to their objectives.

Introduction
The extensive work done by Alexandre Guerre [180] was the first to address augmentation of the field of view for an ophthalmology problematic. That thesis looked at both ocular endoscopy and retinal videos, which have the same light and motion disturbances as our data.
Alexandre showed that CNNs such as FlowNet were not very promising [181], given that the estimation of motion was very difficult and required ground truth data. A similar conclusion was reached by [182], given the light variations present in the dataset. Ultimately, successful methods all required the creation of an artificial dataset with ground truth for optical flow estimation. In our approach we wanted to implement a self-supervised or unsupervised method and avoid the creation of an artificial dataset. A simplified SfM learner [176] (Figure 4.1a) was more encouraging, but the depth estimation was also difficult with both datasets Alexandre experimented with (results in Figure 4.1b). As mentioned in Chapter 2, autonomous driving is a well-known topic that benefits from AI's potential. Just as the SfM learner used the KITTI database, methods that followed, such as Zhou's, rely on the warping relation:

p_s ∼ K \hat{T}_{t→s} \hat{D}_t(p_t) K^{-1} p_t (4.1)

where p_s are pixels in the source frame, p_t are pixels in the target frame, K is the intrinsic matrix, and \hat{D}_t(p_t) is the estimated relative depth. Using this warping operation, and given two frames, a source frame (s) and a target frame (t), we can translate the scene to the next frame and obtain the next image by projection. In order to solve for p_s, we need to solve for \hat{D}_t(p_t) and \hat{T}_{t→s}. This can be accomplished through two steps:

1. Depth estimation: D_i = θ(I_i), θ : R^(H×W×3) → R^(H×W)
2. Pose estimation: E_{1→2} = ψ_E(I_1, I_2) = (t_x, t_y, t_z, r_x, r_y, r_z), E_{2→3} = ψ_E(I_2, I_3) = (t_x, t_y, t_z, r_x, r_y, r_z)

where H is the height and W the width of the image. A dense depth map is estimated from a single RGB frame, while the pose estimation takes a sequence of two RGB images as input and produces the transformation between the frames, giving both the translation (t_x, t_y, t_z) and the rotation parameters (r_x, r_y, r_z).

Depth prediction for ego-motion
We looked at existing methods for image depth estimation, although most require real image-depth pairs or stereo images for training. A simple implementation is DenseDepth [184], with an encoder-decoder architecture. This served as a baseline for our depth map prediction task. Another proposed method T

Joint depth & egomotion prediction
Baseline Methods
Depth learning can also be joined with egomotion, and this is also a rich field that we took a closer look at. We started with a simple overview of SfM learner, which is a framework for unsupervised learning used to estimate monocular depth and camera motion.

Conclusion
In conclusion, our experiments with existing approaches validated our concerns with our data. We know that depth is difficult to estimate from single images, and the motion in the videos is not enough to allow for optical flow training. When looking at the baselines we tested, we could not qualitatively assess our depth maps. Therefore we move on with two of the methods mentioned in Section 4.1.2 that had obtained state-of-the-art results. These self-supervised methods join both depth and ego-motion learning, and we want to qualitatively assess their ability to learn and estimate the camera's position for our data. We continue with more in-depth investigations in Chapter 5 with implementations of these baselines in order to obtain qualitative results and further analyze their significance. We also decided to focus more on solving for the disturbances and to tailor the learning to our problematic.
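To illustrate how the warping relation of Equation 4.1 is applied in practice, the following NumPy sketch back-projects every target pixel with its depth, applies a rigid motion T_{t→s}, and re-projects into the source view. The intrinsics, the constant depth map and the identity pose are placeholder values for the example; the baselines implement the same operation as a differentiable layer.

```python
import numpy as np

def project_pixels(depth_t, K, T_t_to_s):
    """Map every pixel of the target frame into the source frame (Eq. 4.1 sketch)."""
    H, W = depth_t.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    p_t = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)   # homogeneous pixels
    cam_t = np.linalg.inv(K) @ p_t * depth_t.reshape(1, -1)          # back-project: D_t K^-1 p_t
    cam_t_h = np.vstack([cam_t, np.ones((1, cam_t.shape[1]))])
    cam_s = (T_t_to_s @ cam_t_h)[:3]                                 # rigid motion T_{t->s}
    p_s = K @ cam_s
    p_s = p_s[:2] / np.clip(p_s[2:], 1e-6, None)                     # perspective division
    return p_s.reshape(2, H, W)

# Example with placeholder values: identity pose and constant depth of 2 units.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
coords = project_pixels(np.full((480, 640), 2.0), K, np.eye(4))      # equals the pixel grid
```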
T his chapter explores the development of our proposed method for depth and egomotion prediction from slit-lamp eye examination videos. We focus on the reasons that cause the failure of the baseline methods on our data, mainly hindering the optimization. Starting with simple modifications to answer these problems, we obtain a stable training as a baseline to compare our proposed method to. Next, through two novel developments we obtain a successful, fully self-supervised image registration algorithm. Introduction Following our experiments presented in Chapter 4, moving forward we wanted to add semantic segmentation to extend two of the methods we detailed in chapter 4: Depth from videos in the wild and Struct2depth [183,188]. These baselines are able to incorporate both depth and camera motion estimation to better predict the environment captured by the camera. The incorporation of depth image-based rendering is the key to linking both estimations. This also allows the use of projective geometry and a self-supervised method of estimating camera motion with the least amount of external input data. This is beneficial as we have depth information simultaneously that we also think we can include in our improvements. The main drawbacks of all these methods is their strong dependence on the photometric loss. The main guidance for models to learn in these baseline models is that color appears similar from any camera view-point. For us, this is a drawback as we are aware that photometric consistency is lacking in our data. We measure the accuracy of pose estimation that is learned and predicted through the registration error. Therefore registration error is the first thing we wanted to address and improve in our proposed approach. We investigate three main limitations of the baselines: 1. Heterogeneity of information fed to the model (in section 5.4) 2. Training disturbances due to specular reflections (in section 5.4) 3. Overlooking the complete structure of the eye (in section 5.5) We addressed these three key issues by first alternating the use of the semantic segmentations in the pre-processing. We introduce a new loss that replaces the photometric loss, the semantic reconstruction loss, as a primary supervision signal and thus rids us of the specular reflections interfering with the training. We also benefit from the eye anatomy and look at shape fitting, expanding this to fit our problematic giving us an original loss, sphere fitting loss. In this chapter will take a look at these evolution in details. We start by presenting the preliminary task of semantic segmentation, used in all the proposed improvements. Semantic Segmentation Semantic segmentation is commonly implemented in visual odometry as it allows for better scene understanding. This is particularly relevant in applications where we don't have any sensor information, unlike autonomous driving for instance. To give our method additional information, we implemented state-of-the-art models for semantic segmentation. We manually segmented 200 randomly selected frames. This was done using the PixelAn-notationTool [190] which is a software that helps manual annotations of images. It uses the algorithm 'watershed marked' of OpenCV and the user has to provide a marker with the brush that produces a segmentation. This can also be corrected and refined if need be. Segmentation for disturbances We first created manual annotations for binary masks keeping only the region of interest to be: the eye parts illuminated by the light. 
We ignored any over-illuminated areas, light reflections, eyelashes and eyelids. Table 5.1 shows what we named the 'Mask for disturbances': disturbances that we ignore are in white, and the areas of the image we keep are in black. We also improved these masks to define a criterion for selecting frames. We decided to exclude the light used during the examination as a reference and to focus on the visible part of the eye. With these new masks, titled 'Binary mask ROI', all of the visible ocular surface is segmented (seen in black), although we continued to ignore the light reflections, which can be seen in Table 5.1 in white along with the eyelid. Only frames with a minimum of 40% of non-zero pixels are further analyzed and used for training. This rule was implemented as a pre-processing step, which allowed us to keep only viable frames that included a visible ocular surface.

Segmentation for ROI
Our final annotations are semantic segmentations with three regions defined: eyelid + eyelashes, cornea and conjunctiva. Following the same manual annotation method, we obtain masks with a distinct value for each region: eyelid & eyelashes = 0, cornea = 1 and conjunctiva = 2. In this case we decided to no longer highlight the disturbances but to benefit from knowing the anatomy of the eye. We wanted to include this information in our method to help us imitate how the Oxford grading is done by the ophthalmologist. As mentioned previously, in Chapter 3, the Oxford grading scale consists of three 0-5 grades, one for each of the sections: cornea, nasal conjunctiva and temporal conjunctiva. By distinguishing these areas in our frames we can treat the cornea and conjunctiva separately for future tasks, as they are our regions of interest (ROI). These masks, shown in Table 5.1 as 'Mask for ROI', were also used in a pre-processing step, by setting a condition that at least 40% of the ROI (the eye) must be visible in the frame.

We tested two of the nine available architectures: U-Net [192] and FPN (Feature Pyramid Network) [193]. We evaluated our models using the Sørensen-Dice coefficient, which is also the training metric. More commonly referred to as the Dice coefficient (F1 score), it is used to compare the pixel-wise agreement between a predicted segmentation and its corresponding ground truth. It ranges from 0 to 1, with 1 signifying the greatest similarity between prediction and ground truth.

Dice score = \frac{2\,|X \cap Y|}{|X| + |Y|} (5.1)

where X is the predicted set of pixels and Y is the ground truth. Given our small manually annotated training set, we included data augmentation. This technique is used to increase the amount of data by applying transformations and creating slightly modified versions of the data. We performed online augmentation, using random transformations from the Augmentor python package [194]. Online augmentation is a preferred method as it doesn't require pre-processing and saves memory. We applied the following methods of distortion: rotation, flipping left to right, flipping top to bottom, and random zoom. For the following task the experiments used a combination of different models and encoders with a fixed set of hyper-parameters. The hyper-parameters used are: 1. Model pre-trained: ImageNet. 2. Loss function: cross entropy. 3. Optimizer: Adam. 4. Learning rate: 10^-6. 5. Batch size: 32. 6. Epochs: 300.

Baseline assessment
As an extension of the two methods, we implemented a hybrid that took into account the main architectures of the proposed methods and the main losses [183,188].
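A minimal implementation of the Dice coefficient of Equation 5.1, extended to a mean over the three label values used in our masks (eyelid = 0, cornea = 1, conjunctiva = 2), could look as follows. This is an illustrative sketch rather than the exact evaluation code used in the experiments.

```python
import numpy as np

def dice_score(pred, gt):
    """Pixel-wise Dice coefficient between two binary masks (Eq. 5.1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

def mean_dice(pred_labels, gt_labels, n_classes=3):
    """Average Dice over the three regions of a label mask."""
    return np.mean([dice_score(pred_labels == c, gt_labels == c)
                    for c in range(n_classes)])
```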
We first detail the main elements from [183, 188] that we want to maintain and that make up our hybrid implementation. This includes the training framework and the weights attributed to three of the main losses. We demonstrate below how the semantic segmentations we obtained are used in the baseline framework and present different experiments and their limitations. We investigate the reasons for these limitations and show that the assumptions made in these baseline approaches are defied in our case and do not align with our problematic.

We start with the baseline method (hybrid of [183,188]) and, in order to train it, we first set up the input framework (which is similar for both methods). Although these methods detect object motion by using masks, our first experiment assumes no occlusion in our examinations. Given that our main reservation with these methods remains the quality of our images, we wanted to test the limits of what valuable information can be learned. We implemented the framework with a slightly different approach, excluding object-motion detection and instead guiding the egomotion training to focus on certain regions and ignore those that were poorly lit. The method is made up of two convolutional neural networks (CNNs); one predicts depth from a single image, while the other uses two images to predict the egomotion, the object motion field relative to the scene, and the camera intrinsics. The first CNN is a U-Net architecture with a ResNet-18 base. It has a softplus activation (z = log(1 + e^ℓ)) to convert the logits (ℓ) to depth (z). The second CNN, for the egomotion estimation, is inspired by FlowNet [195]. The final output channels predict the global rotation angles (r_0) and the translation vector (t_0). The complete method gives a depth map prediction and an estimate of the scene movement with respect to the camera. In this experiment we also used the masks, as done in [183], to ignore certain regions when training the egomotion CNN. When predicting egomotion we use the masks for disturbances m(x, y), as shown in Table 5.1, to ignore any disturbances. Following the implementation of [183], this was added as follows for the warping of a pair of frames Î_s - I_t:

I_s ∼ K \hat{T}_{t→s} \hat{D}_t(I_t) K^{-1} I_t (5.2)

Inverse Warping
The main supervision signal ties in Equation 5.2: the key is that the model learns from the difference between the target image reconstructed from source pixels and the original target frame. This is named inverse warping, or backward warping, and has been proven to be much easier to optimize than forward warping [196]. The warped frame is obtained by bilinear sampling:

\hat{I}_s(p_t) = \sum_{i,j} w^{ij} I_s(p_s^{ij}) (5.4)

where p_t are pixels that belong to the target frame I_t, p_s to the source frame I_s, and Î_s is the warped frame; w^{ij} is linearly proportional to the spatial proximity between p_s and p_s^{ij}, and \sum_{i,j} w^{ij} = 1. These mechanisms also include implicit assumptions, and therefore limitations, on the model learning: 1. A static scene with no moving objects. 2. No occlusion between camera views. 3. A Lambertian surface for a meaningful photo-consistency error, i.e. a surface that appears uniformly bright from all angles (the Lambertian reflectance property). A violation of these assumptions inhibits the training and can also corrupt the overall gradient. We assume that our videos consist of a static eye with the camera moving (left, right) during the examination.
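The bilinear sampling behind Equation 5.4 can be sketched as follows for a single-channel image: each warped pixel is a proximity-weighted sum of its four neighbouring source pixels, with the weights summing to one. This NumPy version is only illustrative; the baselines implement the same operation as a differentiable operator inside the network.

```python
import numpy as np

def bilinear_sample(img_s, px, py):
    """Sample a single-channel source image at continuous coordinates (Eq. 5.4)."""
    H, W = img_s.shape
    x0, y0 = np.floor(px).astype(int), np.floor(py).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    x0c, x1c = np.clip(x0, 0, W - 1), np.clip(x1, 0, W - 1)
    y0c, y1c = np.clip(y0, 0, H - 1), np.clip(y1, 0, H - 1)
    wx, wy = px - x0, py - y0
    w00 = (1 - wx) * (1 - wy)        # weights w^{ij}, summing to one
    w01 = (1 - wx) * wy
    w10 = wx * (1 - wy)
    w11 = wx * wy
    return (w00 * img_s[y0c, x0c] + w01 * img_s[y1c, x0c] +
            w10 * img_s[y0c, x1c] + w11 * img_s[y1c, x1c])
```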
Therefore our data complies with the first assumption, as for the second we use our semantic segmentation to exclude the only occlusion we have: eyelid. Our predicted masks allow us to pre-process the dataset and only keep frames were the eye is visible and so removing any blinking frames, or half open eye frames. Lastly, the eye is not a Lambertian surface and a main disturbance is the specular reflections. The cornea is also a transparent part of the eye making this another major disturbance when attempting to calculate an error based on consistency in the surface pixel value. Losses There are various losses, some for either CNNs specifically and a common main supervision signal that we previously mentioned. This equation shows the relation between two adjacent video frames using a depth map and the camera intrinsic parameters: z ′ p ′ = KRK -1 zp + Kt (5.5) where p and p ′ are pixel coordinates in homogeneous form before and after the transformation represented by the rotation matrix R and the translation vector t. z and z ′ are the respective depths. The losses used to obtain the warped image, using a differentiable image warping operator ϕ(I i , D j , E i→j ) → Îi→j , make up the total loss for every pair of frames Îi→j -I j . Two losses are used as image reconstruction guidance; photometric loss & SSIM, while depth smoothness is focused on smoothing the predicted depth maps. Both depth and pose estimations are optimized through the image reconstruction losses. The first, photometric loss, is the difference between corresponding pixels. This simple L1 loss is presented below Equation 5.6. Notations Photometric loss (RECON) frames are used as input to compare this reconstructed image Îi→j to the next frame I j [188]. L recon = ∥ Îi→j -I j ∥ (5.6) Structural similarity loss (SSIM) is used to assess the quality of the warping [198]. The introduction of structured similarity (SSIM) [198] included an evaluation of the quality of the predicted image as well. SSIM index measures the similarity between images in terms of luminance, contrast and structural information. We measure this between Îi→j and I j . Luminance of an image signal is estimated by mean intensity 5.7 & luminance of two images is then compared by 5.8: µ x =; 1 N N i=1 x i (5.7) l(x, y) = 2µ x µ y + l 1 µ 2 x + µ 2 y + l 1 (5.8) Contrast is measured by difference of the luminance between objects in the field of view. This is done by calculating the standard deviation of the image signal 5.9, and the contrast similarity is calculated by 5.10. σ x = ( 1 N -1 N i=1 (x i -µ x ) 2 ) σ xy = 1 N -1 N i=1 (x i -µ x )(y i -µ y ) (5.13) SSIM ( Îi→j , I j ) = l( Îi→j , I j ) * c( Îi→j , I j ) * s( Îi→j , I j ) (5.14) SSIM ( Îi→j , I j ) = (2µ Îi→j µ I j + l 1 )(2σ Îi→j I j + l 2 ) (µ 2 Îi→j + µ 2 I j + l 1 )(σ 2 Îi→j + σ 2 I j + l 2 ) (5.15) where µ Îi→j , µ I j are the average, σ 2 Îi→j , σ 2 I j the variance, and σ Îi→j I j covariance of Îi→j , I j , where 2 , l 3 = l 2 /2, L is the dynamic range of the pixel-values, k 1 = 0.01, l 1 = (k 1 L) 2 , l 2 = (k 2 L) k 2 = 0.03. The SSIM loss, which is used as part of the objective function, is given by: L SSIM = 1 -SSIM ( Îi→j , I j ) (5.16) Together these make up the main loss for the image reconstruction task as has been used in various methods [174,199]. To address the gradient-locality in motion estimation, a smoothness term is introduced to avoid discontinuity of the learned depth maps in regions with low-texture. 
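Before detailing that smoothness term, the two image-reconstruction losses above can be summarised in code. The sketch below assumes images normalised to [0, 1] (so that the dynamic range L equals 1) and uses a simple 3 x 3 averaging window for SSIM rather than the Gaussian window of [198]; the function names are ours.

```python
import torch
import torch.nn.functional as F

def photometric_loss(warped, target):
    """L1 reconstruction loss between the warped frame and the target frame (Eq. 5.6)."""
    return (warped - target).abs().mean()

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM map for (B, C, H, W) images in [0, 1], 3x3 averaging window."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return num / den

def ssim_loss(warped, target):
    """L_SSIM = 1 - SSIM (Eq. 5.16), averaged over all pixels."""
    return (1.0 - ssim(warped, target)).mean()
```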
The edge-aware depth smoothness loss used in [174] uses image gradient to weigh the depth gradient. Depth smoothness (DS) encourages smoothness by penalizing depth discontinuity if the image shows continuity in the same area [173]. L DS = |∇ x D i |e -∇xI i | + |∇ y D i |e -∇yI i | (5.17) where ∇ x , ∇ y are image gradients in the horizontal and vertical direction, respectively, ∇ denotes the 2D differential operator, and | • | is the element-wise absolute value. The total loss is made up of a sum of the losses mentioned where each is multiplied by a weight α a , α b , α c . L total = α a L recon + α b L SSIM + α c L DS (5.18) Following implementations [183,188] the weights were the following: α a = 0.85, α b = 0.15, and α c = 0.04. Experiments & Results We trained the models with the inputs shown in 1. Depth: trains with single frame from the triplet and produces a depth map. D i = θ(I i ), θ : R (H×W ×3) → R (H×W ) (5.19) 2. Egomotion: the network takes three frames (ex. [I t-n , I t , I t+n ]) and predicts transformations simultaneously ψ E (I i-n , I i ) = (t x 1 , t y 1 , t z 1 , r x 1 , r y 1 , r z 1 ) (5.20) ψ E (I i , I i+n ) = (t x 2 , t y 2 , t z 2 , r x 2 , r y 2 , r z 2 ) (5.21) The data was split into ; 35 eyes for the train, 39 for the validation and 14 for the test. All eyes from the same patient were assigned to the same set. We tested the hybrid implementation by first training with masks (experiment inputs 'a') that take our full frames into account and therefore ignore nothing. Followed by an experiment with inputs 'b' where we utilised our masks for disturbances to help the egomotion CNN focus only on the well lit areas. These are shown in Table 5.5. For both experiments we kept the same set-up. The pre-trained model we used to initiate the training was trained on the Cityscape dataset. With this set-up we noticed that the loss diverged constantly. Training with different parameters, and randomly initialised weights unfortunately did not help either. Neither set-ups or changes helped the models learn any valuable information. These changes include the mask changes as well as changing the weights of the losses or using randomly initialized weights. The main issue was also the exploding gradient after very few epochs where the loss diverged to L total → ∞. The optimization problem proved to be too difficult. Table 5.6 demonstrates two loss plots of the experiments that constantly required a form of 'restart training'. With no sign of convergence the training optimization seemed to be unsuccessful given the set-up we were using. We concluded that if we took into the full image (inputs 'a'), or masks for disturbances (inputs 'b') as shown in Table 5.5 for the equation 5.3, there was no improvement in the training. The difficulty we faced in these experiments demonstrated that the main losses do not work for our data and problematic. The optimization difficulty surpassed the constrains implemented when using our dataset. Not only were the losses not enough, we probably did not make the best use of the masks. Attempting to change the object-motion detection to ignore disruptions seemed to have limited effect in the current implementation. As we had theorised, our data defies the assumptions made and the main components that seemed to be common in several baseline methods were inadequate. 
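For later reference, the smoothness term and the weighting of Equation 5.18, which we modify in the next section, can be written as follows. This is a minimal sketch; the tensor shapes are assumptions of ours.

```python
import torch

def depth_smoothness_loss(depth, image):
    """
    Edge-aware smoothness (Eq. 5.17): penalise depth gradients, down-weighted where
    the image itself has strong gradients and a depth discontinuity is therefore plausible.
    depth : (B, 1, H, W), image : (B, 3, H, W)
    """
    dD_dx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
    dD_dy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()
    dI_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(dim=1, keepdim=True)
    dI_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(dim=1, keepdim=True)
    return (dD_dx * torch.exp(-dI_dx)).mean() + (dD_dy * torch.exp(-dI_dy)).mean()

# Baseline objective (Eq. 5.18), with the weights of [183, 188]:
# L_total = 0.85 * L_recon + 0.15 * L_SSIM + 0.04 * L_DS
```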
Semantic reconstruction loss Our previous experiments demonstrated the difficulty of training when relying on the photometric loss mainly, as it was given the highest weight to guide the depth egomotion learning. We decided to retain the use of the masks although we didn't see any improvement in the way we used them in our previous experiments. Instead we wanted to incorporate them to help us minimize our model taking in any trivial information. This is mainly the eyelid areas present in our frames, which doesn't contain any information of good quality. The second point that we were able to prove in the previous experiments is that the photometric loss can not be the highest weighted source of guidance to train the model, and therefore the strongest penalisation for the models. Our hypothesis that inconsistency present in our frames, which is due to the lighting, was confirmed by the experiments in Section 5.3.3. The constant movement of the light, being that it is attached to the camera, wasn't a minor but a major disturbance during the training. This also includes the artifacts of reflections and overly exposed areas due to the light. Before detailing the semantic reconstruction loss, a set of steps were taken that led to its proposal that we wish to discuss. We started by addressing the removal of non-essential region ; eyelid using the binary masks we named 'Binary mask ROI' shown in Table 5.1. We used these binary masks ROI and multiplied them by the frames as a pre-processing step for training. We then obtained frames that contain only the eye, examples shown in 5.4. This was easily incorporated into our code as a variable which would allow us to easily interchange the inputs for training. Another pre-processing step that we incorporated is the choice of a step of n frames as we believe the motion between two consecutive frames was limited. The baseline approaches do not utilise this pre-processing and rely on consecutive frames when creating the dataset. The interpretations of the previous results led us to focus on the main loss, the photometric loss(RECON), which had the highest weight in the total loss in our baseline assessment 5.3. The photometric loss compared the reconstructed image Îi→j to the next frame I j . This is the main supervision signal that utilises both predictions from the Depth and Egomotion CNNs (see Eq.5.6). To enable the Egomotion CNN to learn valuable camera motion information we had to find a way to get rid of the light seen on the frames. We wanted to be able to compare the estimated motion between frames without relying solely on the photometric consistency. Using the semantic segmentations we introduced earlier as 'Mask for ROI' in Table 5.1, referred to as m ROI , we had three binary masks that identified three regions in our frames ; eyelid, cornea and sclera, detailed in Eq 5. [START_REF] Simpson | Dry Eye Symptoms Assessed by Four Questionnaires[END_REF]. With each of these regions' pixels in the mask ROI having a value of ; 0, 1 and 2 respectfully we implemented a new semantic reconstruction loss L SRL .        m eyelid (x, y) = 1 ⇐⇒ m ROI (x, y) = 0 m cornea (x, y) = 1 ⇐⇒ m ROI (x, y) = 1 m sclera (x, y) = 1 ⇐⇒ m ROI (x, y) = 2 (5.23) The loss is based on Eq.5.6, in which we still utilise the predicted depth and camera motion. The inverse warping step remains the same and the predicted transformation matrix is now used to warp the mask ROI instead. 
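In code, the region masks of Equation 5.23 and the masked input frames are obtained as follows; this is a NumPy sketch and the function names are ours.

```python
import numpy as np

def split_roi_mask(m_roi: np.ndarray):
    """Split the three-valued mask ROI (eyelid=0, cornea=1, sclera=2, Eq. 5.23)
    into three binary masks."""
    m_eyelid = (m_roi == 0).astype(np.float32)
    m_cornea = (m_roi == 1).astype(np.float32)
    m_sclera = (m_roi == 2).astype(np.float32)
    return m_eyelid, m_cornea, m_sclera

def mask_frame(frame: np.ndarray, m_roi: np.ndarray) -> np.ndarray:
    """Pre-processing: keep only the visible eye by zeroing the eyelid/eyelash pixels."""
    keep = (m_roi > 0).astype(frame.dtype)   # cornea + sclera
    return frame * keep[..., None]           # broadcast over the RGB channels
```

It is these binary cornea and sclera masks that are warped with the predicted depth and egomotion and then compared, region by region, in the loss defined next.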
Although the model takes in our processed frames Fig5.4 to learn, we measure the error using the masks instead of the raw frame input, as done in the baselines [183, 188] using 5.6. This removes entirely the need for color consistency which is how both the depth and egomotion models were previously penalised via the L recon loss (Eq.5.6). We also calculate this loss per region which allows us to precisely measure the error of reconstruction for both the cornea and sclera. Semantic reconstruction loss (SRL) is the main supervision signal. L SRL = ∥ mcornea i→j -m cornea j ∥ + ∥ msclera i→j -m sclera j ∥ (5.24) where mi→j is the reconstructed mask ROI, m j the target mask ROI. This new addition to training was set as our main loss, while maintaining L recon 5.6, L SSIM 5.16, L DS 5.17, that continue to be calculated with frame inputs. The total loss is now: L total = α a L recon + α b L SSIM + α c L DS + α d L SRL (5.25) where α a = 0.85, α b = 0.15, α c = 0.04., and α d = 1. Training The overall training setup included fixed values for the weights, the learning rate = 0.0002, and the batch size = 8. We trained all models for 200 epochs and used the 'best model' for inference. The best model is defined as the model with the lowest total loss on the training set. Variables that were investigated to allow us to assess our newly modified approach is the inclusion of the frame step, the original photometric loss L recon . Lastly, we detail which dataset was used Results For the qualitative results we used the annotations we detailed in Chapter 3. One way we could ensure the egomotion estimation is correct is mark the coordinates of distinct points in various frames. These points were annotated on static parts of the eye. The annotated punctate dots (damaged area) were on the surface of the eye, and visible veins on the sclera. To visualise the accuracy of our predictions we warp a source frame into a target frame and track the marked points. We present our first results in Table 5.8. As mentioned we had tested changing various variables. Focusing on the Mean Euclidian distance (px,%) to evaluate these experiments we notice an improvement of ≈ 21px, 1.29% when comparing the baseline to our proposed method which includes the semantic reconstruction loss, L SRL (see Eq. 5.24). To review these results in more depth, we first started with the initial approach, detailed in 5.3, which uses the total loss (see Eq. 5.18), and consecutive frames for input, or a frame step of n = 1. The results for this setup up is referred to as Exp. No. A1 in Table 5.8. This gave us our baseline results where we trained with the database 'P', and used the model pre-trained on database 'C'. In order to verify that there was a lack of motion we added a frame step of n = 10 and obtained an improvement of ≈ 1.7px, 0.1%, referred to as Exp. No. A2 in Table 5.8. We then decided to continue our training with a frame step of n = 10, as it showed some promise. Adding our newly proposed semantic reconstruction loss L SRL , continued to improve results. The results of this setup, which included both a frame step of n = 10, and the semantic reconstruction loss L SRL is referred to as Exp. No. A3 in Table 5.8. Given that we had the database 'O', we also wanted to test the setup used for Exp. No. A3 and train the models first with database 'O' to be able to obtain a pre-trained model before moving forward with dataset 'P'. 
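Before discussing those results, the modified objective can be stated compactly. The sketch below writes Equations 5.24 and 5.25 directly; the norm is taken as a mean absolute error, mirroring the L1 form of Equation 5.6, and the function names are ours.

```python
def semantic_reconstruction_loss(warped_cornea, target_cornea,
                                 warped_sclera, target_sclera):
    """Semantic reconstruction loss (Eq. 5.24): per-region error between the warped
    and the target binary region masks (cornea and sclera only)."""
    return ((warped_cornea - target_cornea).abs().mean()
            + (warped_sclera - target_sclera).abs().mean())

def total_loss(l_recon, l_ssim, l_ds, l_srl,
               a=0.85, b=0.15, c=0.04, d=1.0):
    """Total objective after adding L_SRL (Eq. 5.25), with the weights used in our experiments."""
    return a * l_recon + b * l_ssim + c * l_ds + d * l_srl
```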
The results validated that although the model can be exposed to a dataset of lesser quality, dataset 'O', it resulted in a better training set-up than using a model pre-trained on a completely different set of images from database 'C'. 5.8 summarises the first set of experiments, and includes the grader error. The grader error was measured as a mean of all the annotations performed by experts, as described in 3.5. We deduced from these results that we wanted to maintain a frame step ̸ = 1 and the model that was pre-trained using dataset 'O'. As we mentioned previously we wanted to minimize any irrelevant information, which includes the eyelid and eyelashes, being fed to the models when training so we changed our input to the binary frames, as shown in For these set of experiments B, we continued to see improvements when keeping the photometric loss L recon of ≈ 5.7px, 0.36%, versus using only our proposed L SRL along with L DS and L SSIM . We increased the frame step from n = 10 to n = 30 but with these inputs, which seemed to have a negative effect. Experiments B3,B4 with n = 30 proved that there was a limitation to the motion between frames that we need to respect. Although B3,B4 remained an improvement to experiments without the L SRL (A1,A2), we deduced that n = 10 was an ideal frame step to train with. Lastly, the results from experiment B showed that both losses L SRL and L recon worked better together. We reverted to maintaining the same input for both CNNs and using the mask ROI only for the L SRL loss calculation. These results for experiment C are shown below in Table 5.10. Again, keeping the L recon continued to improve results by ≈ 1.18px, 0.07%. The final set of experiments gave us our best results, but with still a large margin when compared to the human error we obtained when annotating our test set. With the three set of experiments in Tables 5.8, 5.9, 5.10 we were able to finally stabilize training and improve results. We showed with various combinations that the new semantic reconstruction loss always enhanced results. We also saw visual improvements in the depth map estimations, Fig. ??, and this is validated with the fact that depth is utilised for inverse warping and hence the improvement in our results. As we discussed each experiment we concluded that a certain set of parameters worked best for our training: frame step n = 10, frame binary for both CNN inputs and using the model pre-trained on the database 'O'. Our best performing method C2 had a difference of ≈ 8.1px, 0.5% Euclidean distance when compared to the grader error. The photometric reconstruction loss L recon seemed to be less reliable, giving a noisy training with our data. The semantic reconstruction loss displayed a more global and more dependable supervision signal but still lacking in precision. We deduce this given the difference between our best results, Exp. No. C2, and the grader errpr. We found that the best results is giving a stronger weight to L SRL , weight = 1 , compared to L recon , weight = 0.85, but maintaining both. We obtain a more stable enforcement of frame consistency with both losses working together. Shape Fitting After the introduction of the semantic reconstruction loss we wanted to take advantage of the greatest difference our problematic has compared to others. We have concrete knowledge of what the camera is looking at, the eye. This knowledge can be incorporated into the model to help guide the learned depth to fit a certain shape. 
The closest to this proposed idea would be the Iterative Closest Point (ICP) algorithm [200]. The general idea in ICP is to constantly revise transformation predictions and minimize an error by measuring the distance between the source point cloud and the target point cloud. This inspired us to add a constraint to our point cloud. Although ICP has also been incorporated as a 3D Point Cloud Alignment Loss in [199], it is not differentiable and so the proposed method only approximated the gradient to allow for back-propagation. ICP also remains a computationally heavy calculation step. Sphere fitting loss Our final novel loss helps both the depth map and camera pose estimations. This loss is based on modelling the human eye as two intersecting spheres. A sphere for the cornea which is in a larger sphere representing the sclera Fig. To simplify the calculations we have a sphere fitting implemented for each region. We also apply a threshold before calculating this loss, by counting the number of pixels present for each region and ensuring there is at least > 50%. Although we previously applied a threshold of 40% ROI for all frames, as we mentioned in 5.2, this threshold is to make sure we have enough of each region visible before penalizing its sphericity. The frame is discarded for the sphere fitting loss calculation if this threshold is not met for either regions. We define our threshold as the count of non zero pixels pertaining to either regions and dividing that by the total number of pixels of the frame, Equation 5.27. m ROI =    m cornea m ROI (x, y) = 1 m sclera m ROI (x, y) = 2 (5.26) Region percentage r i = ∀(x,y)∈m ROI (x,y) 1 [m ROI (x,y)=i] ∀(x,y)∈m ROI (x, y) 1 × 100 (5.27) where i = 1(cornea) or i = 2(sclera) Once the region percentage r i for regions: cornea, sclera is > 50% we are able to use the respective frame to calculate the L SF L . Taking the predicted depth map D and the mask ROI m ROI we only keep the depth estimations for that region : Dcornea = m cornea × D Dsclera = m sclera × D (5.28) Using this regional depth map, and the associated frame we are able to obtain a 3-D point cloud using the inverse of the intrinsic matrix. This geometric projection can be explained by first looking at the simple conversion of world coordinates to image plane coordinates written with homogeneous coordinates. Every point (x, y) in a 2D Cartesian plane has a corresponding set of homogeneous coordinates in the 3D projective space. Once we obtain the pixel co-ordinates as homogeneous coordinates while keeping the last dimension = 1, we can perform any operation or transformation [203]. Equation 5.29 shows the conversion from image plane (pixels), as homogeneous coordinates to world co-ordinates (x, y, z, 1), where 1 z is proportional to disparity. Disparity is the difference in horizontal position of a point's projections between a left and right image, Eq ??.     u v 1     = 1 z   K 0 0 1   [R|t]        x y z 1        (5.29)        u v 1 1 z        = 1 z   K 0 0 1     R t 0 1          x y z 1        (5.30) neglecting R,t the Eq.5.30, the inverse of the camera matrix can be simplified to the following:        x y z 1        = z        1 fx 0 0 0 0 1 fy 0 0 0 0 1 0 0 0 0 1               u v 1 1 z        (5.31) We then use these estimated point cloud coordinates and apply a least squares sphere fitting. 
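Concretely, the region selection and back-projection described above (Equations 5.27 to 5.31) amount to the following sketch. It is written in NumPy for readability and the names are ours; during training the same operations are carried out with differentiable tensors.

```python
import numpy as np

def region_percentage(m_roi: np.ndarray, region_value: int) -> float:
    """Percentage of frame pixels belonging to one region of the mask ROI (Eq. 5.27)."""
    return 100.0 * np.count_nonzero(m_roi == region_value) / m_roi.size

def backproject_region(depth: np.ndarray, m_roi: np.ndarray,
                       region_value: int, K: np.ndarray) -> np.ndarray:
    """Back-project the depth estimates of one region (cornea=1 or sclera=2) into a
    3-D point cloud with the inverse intrinsics (Eq. 5.31). Returns an (N, 3) array."""
    ys, xs = np.nonzero(m_roi == region_value)
    z = depth[ys, xs]
    pixels = np.stack([xs, ys, np.ones_like(xs)], axis=0).astype(np.float64)  # homogeneous
    rays = np.linalg.inv(K) @ pixels       # directions K^-1 p
    return (rays * z).T                    # scale each ray by its estimated depth
```

A frame contributes to the sphere fitting loss only when the coverage returned by `region_percentage` exceeds the threshold discussed above.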
Following a method by Jekel [204] we are able to determine the best sphere center for the given data points. By rearranging the terms in Eq (5.32) we can express the equation in matrix notation and solve for ⃗ c (5.35). By fitting the data points x i , y i , z i we can solve for the centre coordinates of the sphere x 0 , y 0 , z 0 and the radius r. (x -x 0 ) 2 + (y -y 0 ) 2 + (z -z 0 ) 2 = r 2 (5.32) x 2 + y 2 + z 2 = 2xx 0 + 2yy 0 + 2zz 0 + r 2 -x 2 0 -y 2 0 -z 2 0 (5.33) ⃗ f = A⃗ c (5.34) ⃗ f =        x 2 i + y 2 i + z 2 i x 2 i+1 + y 2 i+1 + z 2 i+1 . . . x 2 n + y 2 n + z 2 n        A =        2x i 2y i 2z i 1 2x i+1 2y i+1 2z i+1 1 . . . . . . . . . . . . 2x n 2y n 2z n 1        ⃗ c =        x 0 y 0 z 0 r 2 -x 2 0 -y 2 0 -z 2 0        (5.35) ⃗ f =        x 2 k + y 2 k + z 2 k x 2 k+1 + y 2 k+1 + z 2 k+1 . . . x 2 n + y 2 n + z 2 n        A =        2x k 2y k 2z k 1 2x k+1 2y k+1 2z k+1 1 . . . . . . . . . . . . 2x n 2y n 2z n 1        ⃗ c =        x 0 y 0 z 0 r 2 -x 2 0 -y 2 0 -z 2 0        (5.36) Sphere fitting loss L SF L is a mean square error (MSE) between the fitted sphere and the data points. The sphericity for each of the corneal and scleral regions have a weight α e . With both regions fitted to a sphere we then calculate the loss for each pixel p. The threshold applied before calculating this loss ensures either regions are present in the frame. The final loss is a sum of the sphericity of both regions. L SF L = L cornea SF L + L sclera SF L (5.37) L cornea SF L = 1 p c pc k=1 ((x ck -x c0 ) -r c ) 2 , L sclera SF L = 1 p s ps k=1 ((x sk -x s0 ) -r s ) 2 (5.38) where p c are number of pixels, x ck data points, x c0 centre coordinate on the corneal surface, r c the cornea radius and p s are number of pixels, x sk data points, x s0 centre coordinate on the scleral surface, r s the estimated sclera radius. Our proposed method now has the following total loss: (5.39) where α a = 0.85, α b = 0.15, α c = 0.04., and α d = 1, and α e = 10k. L total = α a L SRL + α b L recon + α c L SSIM + α d L DS + α e L SF L Training Once we established this loss we first acknowledged that the calculated errors of sphericity on both regions were very minor. This required a large weight so we could be sure that the loss was making an impact in training. This was done through various tests of changing the α for L SF L until we saw a stable convergence in the loss. We focused on the total sphere loss, since the loss per region was sometimes ≈ 0 for certain frames. We also kept the frame step of n = 10, and the pre-trained model on dataset 'O' fixed during these experiments. Results The results are shown in Table 5.12 where we started with the baseline being our best result from the previous section. We then added our novel sphere fitting loss that gave us the best improvement yet with a difference of ≈ 7.21px, 0.45% between Exp. No. C2 and with an Euclidean of ≈ 12.92px, 0.81% and D4 of ≈ 4.77px, 0.29% Both these results are consistent showing that the addition of this new depth constraint to the training improved results. Most importantly the sphericity constraint seemed to have a bigger impact than the semantic reconstruction loss. Ultimately, this allowed us to obtain results as good as, and even a little better, than the grader errors with a difference of ≈ 0.04px, 0.01% Euclidean distance. By plotting the point cloud we are able to visualise if the constraint had enabled the depth maps to improve. 
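For completeness, the least-squares fit and the resulting penalty (Equations 5.32 to 5.38) can be sketched as follows. The NumPy version is only meant to make the algebra concrete; in training the same steps are implemented with differentiable tensor operations so that the penalty can back-propagate into the depth CNN.

```python
import numpy as np

def fit_sphere(points: np.ndarray):
    """Least-squares sphere fit (Eqs. 5.32-5.35): solve A c = f for the centre and radius.
    points : (N, 3) array of 3-D coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    f = x ** 2 + y ** 2 + z ** 2
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    c, *_ = np.linalg.lstsq(A, f, rcond=None)
    centre = c[:3]
    radius = np.sqrt(c[3] + centre @ centre)
    return centre, radius

def region_sphere_loss(points: np.ndarray) -> float:
    """Per-region term of Eq. 5.38: mean squared deviation of each point's distance
    to the fitted centre from the fitted radius."""
    centre, radius = fit_sphere(points)
    distances = np.linalg.norm(points - centre, axis=1)
    return float(np.mean((distances - radius) ** 2))

# L_SFL = region_sphere_loss(cornea_points) + region_sphere_loss(sclera_points)  (Eq. 5.37)
```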
We believe that the results in Section 5.4.2 Table 5.4.2, were enhanced through depth smoothness, alongside the semantic reconstruction loss L SRL but visually do not correspond perfectly to the shape of the eye, but are a great improvement. Figure Conclusion Our two main contributions have enabled us to obtain a fully self-supervised method we named 'SiGMoid: Semantic & geometric monocular visual odometry'. SiGMoid learns both depth and egomotion, which allows for a successful image registration. Our algorithm with the novel sphere fitting loss demonstrated results not only superior to baselines but also to human error. We implemented sphere shape fitting and did not consider the scale, which can also be included as an additional constraint. The human eye as we mentioned is estimated to have a radius of ≈ 12mm. By including the scale on the point cloud estimation we can measure how realistic the predictions are. This is a possible improvement on shape fitting that is made possible because of the use of prior knowledge. Sphere fitting loss and the semantic reconstruction loss improved our registration results considerably. We can envision that including the scale can produce a full up to scale 3D reconstruction of the eye. Figure 5.12 illustrates the SiGMoid framework and complete losses. Both CNNs although trained jointly can be used for inference separately. With the help of the manual annotations our evaluation method relies on the inverse warping which validates both the predictions. By finalising a state-of-the-art method to register the frames as well as select viable frames in the pre-processing we move on to applying this to its application in the DED Oxford grade prediction. Introduction With a finalised self-supervised algorithm, SiGMoid, we are able to register frames to a common coordinate system. A precise registration can avert over or under estimation of punctate dots grading. We relied on the grading scale, Oxford score, detailed in Chapter 1 to train a classifier to predict the scores. The grading scale annotations were given for the cornea, nasal and temporal conjunctiva with each having a score of 0-5 and the total Oxford score, the sum, can be 0-15 as shown in Figure 6.1. Given our completed semantic segmentation task, detailed in section 5.2, we are able to distinguish these three parts using the 'Mask ROI' m ROI . Figure 6.1 -Oxford score [START_REF] Bron | Clinical staining of the ocular surface: Mechanisms and interpretations[END_REF]. We set up some of the following classification tasks as part of an internship that I had the pleasure of supervising. The work that of this internship, by the intern Abdel OUEDRAOGO, is detailed in sections 6.1.1, 6.3.1, and lastly 6.3 which was also further developed later on. We first look over the database 'P' to evaluate the distribution of the videos and their Oxford score. With a total of 16,800 frames we noticed that the grade 5 was the lowest available data example. This indicated that our database is unbalanced for a multi-class classification problem. The automation of grading DED can be accomplished through a classical multi-class classification. We examined various ways to optimize the classification task before incorporating our selfsupervised algorithm. Incorporating SiGMoid would enable us to validate that registered frames facilitate the classification task. Pre-processing In order to follow the Oxford grading procedure we first implemented a pre-processing to distinguish the nasal and temporal conjunctiva. 
Although our m ROI gave us the information of which areas of the frames belonged to the cornea and conjunctiva, we needed an additional method to distinguish between the nasal and temporal conjunctiva. The videos were labelled with a right or left eye label. This facilitated the task the only supplemental information required was to detect the center of the cornea. With this we know that for left eye videos anything left to the cornea is the nasal conjunctiva and to the right was the temporal conjunctiva, and vice versa. A well known algorithm, Kalman filtering [205], predicts the position of a tracked object and updates the series of measurements over a period of time. By extracting only the cornea from m cornea , we predict the center for each and apply it to the frames. The example below is for a left eye video where the center, in pixels, was estimated to be x = 735, y = 81.25. where true positive (TP) is when the prediction is the positive class, true negative (TN) when the prediction is the correct negative class, false positive (FP) the correct class is predicted incorrectly, false negative (FN) where the negative class is predicted incorrectly. Top-1 accuracy will consider a prediction as correct if and only if the most probable prediction is the correct. Top-N Accuracy takes N predictions with highest probability, we calculated the accuracy for 3 highest probabilities for multi-class prediction. True Positive rate (TPR) = T P T P + F N (6.5) False Positive rate (FPR) = F P F P + T N (6.6) The ROC curve plots TPR or recall vs. FPR, and AUC is the measure of the area under that curve. It measures the performance of various classification thresholds. It also helps interpret the probability that the trained model predicts positives more randomly than negatives. It ranges from 0-1, where 0 is for a model whose predictions are all wrong and 1 are all correct. Both the validation and test evaluation were performed per eye by using a majority vote for the f frames per patient. Multi-class Classification Classification Methods We tested adding a voting method to compare the classification given our limited dataset. The three different methods were the following: (c) Choose the class corresponding to the highest value. Experiments & results We tested three commonly used backbones to simply evaluate the voting methods: ResNet50, EfficientNetB3, and EfficientNetB4. Given distribution of our complete database, shown in table 6.1, and following the poor results obtained for multi-class classification, we decided to focus on a binary classification (mild DED vs. severe DED) to evaluate our proposed method in a diagnostic aspect. Binary classification Now the model is trained to classify whether the patient has a mild DED or a severe DED. Mild DED (respectively severe DED) is defined as a corneal Oxford score ≤ 1 (respectively ≥ 2) [206]. We also decided to focus on the cornea grade classification to follow the Oxford score grading technique. As was done for the training, which used all frames from each patient's video, we continued to validate and test per patient. Once the model is trained, it predicts scores for a set of frames and the majority vote is then applied to obtain the final predicted grade. For the following classification evaluation we first had to outline a data split that would respect several rules. By training with only database 'P' we found that there wasn't enough frames per class or even overall. 
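As a short aside before completing the description of the data split, the per-eye decision rule used for validation and testing can be stated explicitly. The sketch below combines the mild/severe cut-off of [206] with the majority vote over a patient's frames; the function names are ours.

```python
from collections import Counter

def oxford_to_binary(cornea_grade: int) -> str:
    """Mild DED: corneal Oxford score <= 1; severe DED: score >= 2 [206]."""
    return "mild" if cornea_grade <= 1 else "severe"

def majority_vote(frame_predictions):
    """Per-eye decision: the class predicted most often over the f frames of one patient's video."""
    return Counter(frame_predictions).most_common(1)[0][0]
```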
Again we decided to utilise the original database 'O' to help in balancing and also take advantage of extra samples to train with. Given that database 'P' is our primary training set we first formed a data split where the database 'P' was the primary training set, keeping the patients originally placed and used as test for the SiGMoid implementation in the test set. We then placed all the data from database 'O', containing 32 patients, as the validation set, detailed in Table 6.4. An additional detail in the tables is the pre-processing: in one scenario, named 'Select best', frame selection is performed, using SiGMoid. This was realized using the warping error which the semantic registration loss L SRL we detailed in Chapter 5. Once the warping error is calculated for all the dataset, we set a threshold to only keep frames with a W arpingError < 5%. An average of 35% of frames were removed once this limitation was applied. Once we established the data splits the experiments were performed using two modes. Classical classification 'C' where the input was the frames of the cornea f rames × m cornea . SiGMoid classification 'SG' where the input was the fusion of two frames giving us a mosaic from the pair of frames. An example of the training frames used as an input for both modes is shown below in Figures 6.5, 6.6. Experiments & Results For the experiments we attempted to investigate various hyper-parameters including : - Classical input SiGMoid input We tested OneCycleLR which is an optimizer that changes the LR after every batch. This policy was described in [207] and it modifies the LR form an initial value to a maximum, which can be the initial LR set at the beginning of training, and then to a minimum value. This is done based on the "Div factor" which we also included as a hyper-parameter. This is determined by the following Equation 6.7. This policy allows for an online learning rate optimization. Initial LR = Maximum LR Div factor (6.7) Another optimizer is the StepLR that decays the learning rate by gamma every step epoch. We maintained the gamma at = 0.1. Lastly, the CosineAnnealingLR proposed in [208] that begins with a large LR and aggressively decreases it before increasing it. It also maintains information from the previous cycle each time it restarts, the Equation 6.8 below details the new LR which also relies on the step epoch. Classical input SiGMoid input η t = η min + 1 2 (η max -η min ) 1 + cos T cur T max π (6.8) We implemented for each of the backbones, while varying each of these hyper-parameters. This amounted to 179 results. For the full set of experiments we calculated the importance of each of the hyperparameters. Figure 6.4 displays the percentage importance of each of these parameters. We note that the most prominent being the backbone but that the scheduler and the mode are also important and the fourth being the 'Select best'. This shows that the Mode: 'SG' classification using frames registered using SiGMoid or 'C' classical classification using raw frames as input, along with 'Select best' have an impact on the classification results. It also validates the need for such a diverse set of experiments with different backbones and schedulers. The best results for both methods and their respected configurations are shown below in For the first set of experiments,'select best = False', the highest AUC was obtained with mode SG and an overall trend of the SG experiments can be seen to maintain an average AUC above 0.5 in Figure 6.5. 
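As a side note on reproducibility, the three learning-rate schedules compared in these experiments map directly onto standard PyTorch schedulers. The step counts in the sketch below are illustrative values of ours, not the tuned ones.

```python
import torch
from torch.optim import Adam, lr_scheduler

def make_scheduler(name, optimizer, steps_per_epoch, epochs):
    """Return one of the three learning-rate schedules compared in our experiments."""
    if name == "onecycle":
        # Initial LR = max LR / div factor (Eq. 6.7); stepped after every batch.
        return lr_scheduler.OneCycleLR(optimizer, max_lr=1e-4, epochs=epochs,
                                       steps_per_epoch=steps_per_epoch, div_factor=25.0)
    if name == "step":
        # Decay the LR by gamma = 0.1 every `step_size` epochs.
        return lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
    # Cosine annealing between the initial LR and eta_min (Eq. 6.8).
    return lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs, eta_min=1e-6)

backbone = torch.nn.Linear(16, 2)            # stand-in for the real classification backbone
opt = Adam(backbone.parameters(), lr=1e-4)
sched = make_scheduler("onecycle", opt, steps_per_epoch=100, epochs=50)
```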
There are also more experiments under an AUC of 0.3 with the mode C. As for the box plots the medians are close for two backbones: Densenet121 and NoisyStudent (EfficientNet-B4), but higher with mode SG for Resnet50, and Inception V3, shown in Figure 6.6. We then incorporated, 'select best = True' for the SiGMoid classification as it helps us reduce frames that were deemed to have been badly registered. As we mentioned, this was based on a warping error. With less data for the SiGMoid training we were still able to obtain four of the highest AUCs and also maintain an average that was higher than that of the classical classification mode C. The overall distribution for mode SG in Figure 6.7 is more consistent Conclusion In and obtain a grade without clinician interference. We found that although evolution towards automated quantification of DED is slow, promising results have been obtained. The next step seems to be the integration of artificial intelligence (AI). The focal point of our work was to obtain a method that can be deployed with minimal additional information and aid in DED diagnosis. Our main proposition was to utilise the videos obtained from DED staining examinations and try to improve the visualisation of the damaged areas. We believe that this could also help in avoiding over or under-estimation when grading. With this we prioritize the use of projective geometry and visual odometry. A field that exploited this area significantly is autonomous driving and robotics. We therefore explored the domain, and narrowed our interest to methods that can be implemented in an unsupervised or self-supervised manner. Along the investigation we performed various preliminary tasks before finalising our proposed method SiGMoid, and lastly applied our method to DED grade prediction for evaluation. With two main databases obtained we investigated methods that learn to register images to a common coordinate system. These methods focused on learning and estimating depth from images and camera motion between pairs of images. While keeping in mind the main limitations we have with our examination videos. These challenges included (1) the heterogeneity of our data, (2) disturbances in the data due to specular reflections. We introduced existing methods in Chapter 2, some of which we focused on and re-implemented in Chapter 5. The methods we focused on had a common supervision signal based on depth image based rendering, namely the photometric loss. With this we were able to validate that the disturbances present in our data hindered training. The main complication caused training to diverge to a loss of infinity or zero. To address these complications we integrated semantic segmentation masks in two different ways. We first introduced masks that alienated any disturbances and unwanted information, allowing the models to only focus on well lit areas. Our second masks highlighted the eye and concealed the eyelid. We then tested the baseline with both masks and were able to obtain a stable training, but still an unsuccessful one. With no sign that the total loss was improving or converging we then focused on the main supervision signals and the assumptions made when they are used. We found that we violated most assumptions leading us to our first contribution, the semantic reconstruction loss. 
We replaced the photometric loss as the main loss, with the semantic reconstruction loss seeing that it allowed us to overlook the major disturbances present in our data when learning the 3D camera motion (egomotion). With baseline results we were able to see improvement using manual annotations of punctate dots (damaged eye areas) and veins. Once we predicted a transformation between frames we were able to measure the error in Euclidean distance (pixels, percentage). Once we had a constraint that was more suitable to our data for the egomotion, we focused on the depth estimation. With this we introduced our more valuable contribution which was derived from the notion that we have greater knowledge about the object seen in our videos unlike in autonomous driving. We then considered how we may use the eye's anatomy as a restriction for the depth model. Therefore, we developed a loss that penalizes the estimated point cloud, obtained from the depth estimations, to fit a more spherical shape. Since the cornea and conjunctiva are two intersecting spheres that make up the human eye, we implemented a sphericity loss that penalized each. With this final incorporation our proposed method was finalised, which we named 'SiGMoid: Semantic & geometric monocular visual odometry'. We were also able to patent the key idea behind the shape fitting loss as a 'Method for modeling an anatomical structure of the human body'. Furthermore, once this was evaluated we obtained more noticeable improvements. Considering that we used a human error as a benchmark, which we obtained via interobserver variability of the annotations used, SiGMoid obtained marginally superior results. Lastly, we proposed an application to DED grading through classification of the Oxford score [START_REF] Bron | Clinical staining of the ocular surface: Mechanisms and interpretations[END_REF]. In this work the first tasks were carried out as a part of an internship that was completed including TBUT, Schirmer, and OSS. These can also be integrated for a Multi-modality approach. Along with the inclusion of demographic data we can obtain more reliable predictions. 5. Additionally having patient follow-up examinations could allow us to expand our longitudinal data. With this we can ensure the detection of even subtle changes, or response to treatment. In future studies we would aim to replicate results with a larger database. The possibility of obtaining a new grading scale warrants further investigation. We have demonstrated the great potential of this field, and our findings have been evaluated and found to support our theories. Abstract: Sjögren's syndrome is an immune system disorder with two common symptoms, dry eyes and a dry mouth. The discomfort of dry eye symptoms affects daily lives, results in 30% activity impairment and affects 95% of Sjögren patients [START_REF] Nichols | Impact of dry eye disease on work productivity, and patients' satisfaction with over-the-counter dry eye treatments[END_REF]. Dry eye disease (DED) is also an independent multifactorial disorder with a prevalence of up to 50% [2]. The ocular surface inflammation causes discomfort, fatigue and overall, a lower quality of life [2,3]. Traditional therapies help manage the symptoms and avoid permanent damage. Hence, it is pivotal to grade and follow the development of DED. A common drawback in existing methods that diagnose and quantify DED is reproducibility, invasivity and inaccuracy. 
We reviewed classical methods and those that incorporate automation to measure the extent of DED [4]. The study showed that DED has yet to benefit from what Artificial Intelligence (AI) has to offer. Using slit-lamp examinations of the ocular surface we aimed to improve the quantification of the Oxford score [5]. Our proposed method uses unsupervised learning to register frames from the examinations to a common coordinate system. By learning the camera motion and depth simultaneously we are able to track the ocular surface in 3-D, compensate for eye motion and visualise the full eye. The light source attached to the camera is a challenge and a disturbance when learning egomotion. This was solved through semantic segmentation and adding a new supervision signal: semantic reconstruction loss. We also used the advantage of estimating the shape of the eye as prior knowledge we could include as a constraint. This was implemented through a shape fitting loss; the shapes being two spheres intersecting each other. Our registration showed quantitative and qualitative improvement with each contribution. We also calculated the inter-rater reliability of the punctate dots (damaged areas) annotations. Our method came closest to what can be considered human error. The proposed registration method was also used for a pre-processing task, frame selection. Once applied to automated Oxford score classification, our method improved the results as well. The improvement validates that the strong color/illumination variances present in the examinations are a disturbance for any deep learning task. We overcame this in both tasks via our contributions and proposed method. Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Clinical Background 1.1 Dry Eye Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Primary Sjögren's Syndrome . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Clinical diagnostic methods . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.1 Tear Secretion & Volume . . . . . . . . . . . . . . . . . . . . . . . . 1.3.2 Ocular Surface Damage . . . . . . . . . . . . . . . . . . . . . . . . 1.3.3 Tear film stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.4 Meibomian Gland Dysfunction . . . . . . . . . . . . . . . . . . . . . 1.3.5 Other diagnostic methods . . . . . . . . . . . . . . . . . . . . . . . 1.4 Diagnosis problematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Methodological Background 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.1 Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.2 Projective Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.3 Classification: Oxford grade . . . . . . . . . . . . . . . . . . . . . . 2.2 Deep Learning-based Dry eye quantification . . . . . . . . . . . . . . . . . 2.3 Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 Visual Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 Visual-SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.3 Structure from Motion . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.4 Conclusion . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Visual Odometry with Deep Learning . . . . . . . . . . . . . . . . . . . . . (e) Sclera example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6 Visualisation of the L SRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Difference between source & target . . . . . . . . . . . . . . . . . . (b) Difference between registered mask ROI & target . . . . . . . . . . 5.7 Iterative Closest Point (ICP) algorithm [200]. . . . . . . . . . . . . . . . . 5.8 Point cloud plot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.9 Human eye anatomy [202]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.10 Modelling of the eye as two intersecting spheres. . . . . . . . . . . . . . . . 5.11 Point cloud plots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Point without L SF L , Exp. No. C2. . . . . . . . . . . . . . . . . . . . (b) Point cloud using L SF L , Exp. No. D4. . . . . . . . . . . . . . . . . . 5.12 SiGMoid: Semantic & geometric monocular visual odometry framework. . . 6.1 Oxford score [54]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Kalman filtering with left eye video [205]. . . . . . . . . . . . . . . . . . . . (a) m cornea × f rame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) m cornea with center prediction. . . . . . . . . . . . . . . . . . . . . . (c) Temporal m conjunctiva . . . . . . . . . . . . . . . . . . . . . . . . . . (d) Nasal m conjunctiva . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Classical multi-class classification voting method evaluation per patient. . . 6.4 Hyperparameters percentage importance for the classification results. . . . 6.5 Scatter plot with both modes having select best = False. . . . . . . . . . . 6.6 Boxplot with both modes having select best = False. . . . . . . . . . . . . 6.7 Scatter plot with SiGMoid select best = True. . . . . . . . . . . . . . . . . 6.8 Boxplot with only SiGMoid select best = True. . . . . . . . . . . . . . . . INTRODUCTION "Our intelligence is what makes us human, and AI is an extension of that quality." -Yann LeCun 1. 1 1 Dry Eye Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Primary Sjögren's Syndrome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Clinical diagnostic methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.1 Tear Secretion & Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.2 Ocular Surface Damage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.3 Tear film stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.4 Meibomian Gland Dysfunction . . . . . . . . . . . . . . . . . . . . . . . . 1.3.5 Other diagnostic methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Diagnosis problematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 1 . 1 - 11 Figure 1.1 -Annual Cost (2003/2004) of Ophthalmologists in France managing 1,000 Dry Eye Patients Figure 1 . 2 - 12 Figure 1.2 -Necessity Working Packages (WPs)[19] Figure 1 . 3 - 13 Figure 1.3 -The Oxford grading scale [5]. Figure 2 . 1 - 21 Figure 2.1 -Artificial Intelligence subsets [81]. A simple visualisation of classification training is shown on Fig 2.2. 
CNNs employ shared weights and local connections, in contrast to multilayer perceptrons (MLP), to fully exploit 2D input data structures like found in images. The training process is sped up, made simpler, and employs a very limited number of parameters in this design. Figure 2 . 2 - 22 Figure 2.2 -CNN training visual example [85]. Figure 2 . 3 - 23 Figure 2.3 -General CNN architecture pipeline[START_REF] Lee | Deep learning in medical imaging: general overview[END_REF] Figure 2 . 4 - 24 Figure 2.4 -Learning methods. Figure 2 . 5 - 25 Figure 2.5 -Supervised versus unsupervised learning . Figure 2 . 6 - 26 Figure 2.6 -Camera coordinates class. As a result, we want to finish by including DED grade classification in our proposed method. Focusing on the Oxford score, we can learn to classify frames from our examinations and predict the severity of the DED. A general classification pipeline is shown below in Figure2.8. Figure 2 . 8 - 28 Figure 2.8 -Classification pipeline [95]. Figure 2 . 9 - 29 Figure 2.9 -CNN-SPK measurement framework [101]. Figure 2 . 2 Figure 2.10 -CNN Tear film break-up time (CNN-BUT) measurement [105]. Our review article also includes an overview of other diagnosis methods, such as those that quantify Meibomian gland dysfunction. This area also included a few deep learning-based approaches. Fully automated image-based solutions rely on tasks enhanced by deep learning. One of which is automated segmentation, and it has been used to quantify Meibomian gland dropout rate. Wang et al.'s proposed model segments and computes atrophy percentage achieving an accuracy of 97.6% and 95.4%, respectively [106]. Using 706 annotated images the proposed method displayed an accurate evaluation of gland atrophy and ultimately DED diagnosis through Meibomian gland dysfunction. A more recent study by Prabhu et al.[107] uses CNNs to segment Meibomian glands. Comparing the p-value > 0.005 between the ground truth segmentation and various trained models, it showed that the model trained with data augmentation improved accuracy. These automated gland segmentation methods using deep learning pave the way for improvement within the field. Figure 2 . 11 - 211 Figure 2.11 -Deep learning-based Meibomian gland dysfunction diagnosis methods. (a) Meibomian gland dropout rate prediction pipeline [106] (b) Meibomian gland segmentation architecture [107] Odometry (VO), Visual Simultaneous Localization and Mapping (VSLAM), and Structure from Motion (SfM), and how they relate to each other. The main general concept is SfM that is an offline method that uses unordered sequences of images and it aims to map the environment. The images are from different perspectives and can even be from different cameras. SLAM's main goal is a global consistent estimate of a robot's path. Besides localization, loop detection and loop closure are the main issues in SLAM[108]. Lastly, VO is the key components of VSLAM, and the main process of detecting the orientation and location of the robot. Figure2.13 shows the relation between the three concepts. We go into more detail on visual odometry first, and the different data inputs and estimation methods. Figure 2 . 12 - 212 Figure 2.12 -SfM vs. V-SLAM vs. VO [109]. 1 . 1 Scale-invariant feature transform (SIFT)[113] 2. Speeded-up robust features (SURF)[114] 3. 
Oriented FAST and rotated BRIEF (ORB)[115, 116] : combines the advantages of features from accelerated segment test (FAST) [117] and binary robust independent elementary features (BRIEF)[118].Lastly, after detecting and matching features the final step is to calculate relative motion between frame. Depending on the available data one of the following approaches can be employed :-Perspective-three-point (P3P) -Iterative closest point (ICP) -Epipolar geometry More precision can be achieved through iterative optimization such as bundle adjustment. Bundle adjustment consists of reducing the reprojection error between observed image locations and predicted ones. Through this nonlinear least-squares method we achieve to minimize the error. Outliers are also addressed through the iterative process of Random sample consensus (RANSAC).Direct tracking based methodsDirect based methods, developed from optical flow, estimates camera motion and pixel spatial location mainly by minimizing the photometric error (reprojection error). This method does not rely on any feature extraction or feature description. Direct methods directly calculate the structure and motion based on the image's intensity data. The size and direction of the gradient are employed for optimization. In terms of robustness with insufficient textures or unfocused cameras, direct techniques perform better than feature-based methods [120]. Direct approaches work directly on the image's intensity values, which speeds up feature detection [121]. Figure 2 . 2 Figure 2.13 -ORB vs SIFT vs SURF feature extraction methods [119]. Figure 2 . 2 [START_REF] Wolffsohn | Demographic and lifestyle risk factors of dry eye disease subtypes: a cross-sectional study[END_REF] summarizes the two main approaches discussed, feature based versus direct methods. The feature point methods have been widely applied, although the description of feature points is mainly responsible for its accuracy. The direct methods are relatively recent technique that has strong robustness and may be applied to situations with very few features, such as hallways or smooth walls[132]. Figure 2 . 14 - 214 Figure 2.14 -Feature based vs Direct method pipelines [125]. Figure 2 . 2 [START_REF]Tear film mucins: front line defenders of the ocular surface; comparison with airway and gastrointestinal tract mucins[END_REF] shows the general visual SLAM pipeline where the data input ranges from 2D images, 2D images + Inertial Measurement Unit data or 2D image + depth data. The main three modules can be divided into input data, initialization, and tracking and mapping which are odometry algorithms that we looked at in the previous sections. Figure 2 . 2 Figure 2.15 -Visual SLAM general pipeline [133]. Focusing only on the Visual (only) SLAM algorithms, figure 2.16 is a timeline of the most representative ones. We will briefly discuss each of these algorithms that mostly precede the incorporation of deep learning until 2017 [133, 134]. Figure 2 . 2 Figure 2.16 -Visual SLAM algorithms timeline [133]. Figure 2 . 2 Figure 2.17 -MonoSLAM [133]. Figure 2 . 2 Figure 2.18 -PTAM SLAM [136]. Figure 2 . 2 Figure 2.19 -DTAM [121]. Figure 2 . 2 Figure 2.20 -SVO [130]. Figure 2 . 2 Figure 2.21 -LSD-SLAM [125]. Figure 2 . 2 Figure 2.22 -ORB-SLAM 2 [116]. Figure 2 . 2 Figure 2.23 -CNN-SLAM [137]. Figure 2 . 2 Figure 2.24 -DSO-SLAM [122]. Figure 2 . 2 Figure 2.25 -SfM [138]. and their heavy computational cost is a persisting disadvantage. 
Many researchers aim to use deep learning techniques to the VO problem in an effort to lower the high computing cost; their work can be separated into supervised and unsupervised methods. An example of what deep learning can replace is shown in Figure2.26, ultimately estimating the camera pose from data directly. As mentioned in Section 2.1.1 ground truth is required as a supervision signal for supervised methods while the output is used as supervision signal in unsupervised methods. Figure 2 . 50 Figure 2 . 27 - 250227 Figure 2.26 -Geometry-based VO & Deep learning-based VO [160]. Figure 2 . 2 Figure 2.28 -Cityscape dataset samples [177]. 51 51 .10) p n stands for the pixel on image I n , and p n-1 to the corresponding pixel of p n on image I n-1 . K is the camera intrinsics matrix, D n (p n ) the depth value at pixel p n , T n→n-1 represents the spatial transformation between the two images I n and I n-1 . Figure 2 . 29 - 229 Figure 2.29 -Warping process for view reconstruction in unsupervised methods [174-176]. Figure 2 . 2 Figure 2.29 illustrates the differentiable image warping process where for each point p t in the 'target' view, it is projected onto the 'source' view based on predicted depth and camera pose. Bilinear interpolation is then used to obtain the value of a new warped image Îs . Zhou et al.[176] estimates D n (p n ) and T n→n-1 using a depth network and a pose network. The networks predict the depth map Dn from a single image I n , and a pose network to regress the transformation Tn→n-1 between frames (I n and I n-1 ). Pixels correspondences are established by the projection function and based on estimations, the correspondences between I n and I n-1 Figure 2 . 30 - 230 Figure 2.30 -Unsupervised learning of depth and egomotion pipeline [176]. 3.3a. The slit lamp has an improved clinical vision through the light transmission and the optical quality. It also includes the imaging systems : IM 600 and IM 910 cameras. The EyeSuite software Fig.3.3b controls all Haag-Streit devices and is made to optimize patient care in busy practices. It allows for access to patient data, patient management system and more importantly for our objective image and video capture. Following the Haag-Streit image exposure guide three modules, shown in Fig.3.4, were used. 2 . 3 . 4 . 234 Blue light yellow filtered fluroscein sodium staining: (a) Three blinks with a pause to view the tear film breaking (TBUT diagnosis) (b) Cornea grading following the Oxford scale. (c) Temporal and nasal conjunctiva grading following the Oxford scale. White light lissamine green staining: (a) Temporal and nasal conjunctiva grading following the Oxford scale. Patient questionnaire Ocular Surface Disease Index (OSDI). Figure 3 . 1 - 31 Figure 3.1 -Oxford scale [5]. Figure 3 . 2 - 32 Figure 3.2 -Examples of DED grading. (a) Cornea Oxford grade = 1. (b) Cornea Oxford grade = 3. (c) Cornea Oxford grade = 4. Figure 3 . 3 - 33 Figure 3.3 -Acquisition method. Figure 3 . 4 - 34 Figure 3.4 -Haag-Streit image exposure guide. Figure 3 . 3 2 and 3.3 show both checkerboards photos and with a better visualisation of the 8 x 7 checkerboard we continued with the calibration using those set of images. The images of the 9 x 10 checkerboard often had areas that were covered which is mainly due to the size of the checkerboard and the small field of view of the camera. Given the protocol used we conducted all the camera calibration image acquisitions at a magnification of x10. 
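The OpenCV side of the calibration described next follows the standard checkerboard procedure. The sketch below is illustrative: the inner-corner pattern is assumed to be 7 x 6 for the 8 x 7 board, and the file pattern and square size are placeholders of ours.

```python
import glob
import cv2
import numpy as np

def calibrate_from_frames(frame_glob: str, pattern=(7, 6), square_size=1.0):
    """Checkerboard calibration: detect inner corners in each frame, then solve for
    the intrinsic matrix K and the distortion coefficients."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size
    obj_points, img_points, image_size = [], [], None
    for path in glob.glob(frame_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern, None)
        if not found:
            continue                      # blurred or partially hidden boards are rejected
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    return rms, K, dist
```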
We conducted two experiments to obtain the camera intrinsic parameters using MATLAB R2020a and a Python code using OpenCV. Using the Camera Calibrator application and frames of the filmed checkerboard. The videos filmed resulted in a number of extracted frames ranging from 1080 to 1800, depending on the length of the video. Both methods employed include frame selection, where only around 5% of the frames extracted were retained. This can mainly be attributed to the quality of the image aswell as the chessboard print being flat and non-spherical like the eye. The majority of the rejected frames had blurry parts of the chessboard, making it impossible to detect the corners. The experiments were also done using the raw image size obtained from the video camera (1600x1200 pixels) and resized images (192x256 pixels). The resized images were what we used for all the baseline experiments and our proposed method training which will be detailed in chapter 5. Figure 3 . 5 - 35 Figure 3.5 -9 x 10 checkerboard photos through Haag Streit BQ 900 slit lamp. Figure 3 . 6 - 36 Figure 3.6 -8 x 7 checkerboard photos through Haag Streit BQ 900 slit lamp. Figure 3 . 7 - 37 Figure 3.7 -Reprojection Error Figure 3 . 8 -Figure 3 . 9 - 3839 Figure 3.8 -Databases box-plots (a) Databases vs. Video Length(sec) (b) Databases vs. No. of frames Figure 3 . 10 - 310 Figure 3.10 -LabelImg tool [179]. Figure 4 . 1 - 41 Figure 4.1 -Depth estimations from Alexandre Guerre's thesis [180] et al.[176] andGordon et al.[183] with multiple improvements. Given the difficulty of estimating depth alone we decided to focus on such methods that jointly learn depth and egomotion. The main supervision signal and equation we wanted to solve was the following : Figure 4 . 2 -Figure 4 . 424 Figure 4.2 -Depth estimation methods (c) Dense Depth [184] (d) T 2 N et [185] Figure 4 . 3 - 43 Figure 4.3 -Deep learning SLAM methods. Depth from videos in the wild [183] (c) Endo-SfMLearner [187] Figure 5 . 1 - 51 Figure 5.1 -PixelAnnotationTool [190]. Figure 5 . 5 Figure 5.2 shows some of the examples of the data augmentation step. Details of the training and results are listed in Table 5.2. Figure 5 . 2 - 52 Figure 5.2 -Frame augmentation examples. Figure 5 . 3 - 53 Figure 5.3 -Differentiable depth image-based warping [176] - Triplet frames: I : [I t-n , I t , I t+n ] -Step between frames: n -Pair of frame from a triplet : I i , I j -Depth map: D -Egomotion transform of i→ j :E i→j 1 2 ( 5 2 x +; σ 2 y +; l 2 ( 5 . 10 ) 125222510 .9) c(x, y) = 2σ x σ y +; l 2 σ Strong inter-dependencies between relatively near pixels are used to represent structural information. Image signals are projected as unit vectors on hyperplanes defined by 5.11, the signals are normalize by subtracting the mean intensities and then dividing by respective standard deviation.The structural data is related to these unit vectors 5.12, which then gives the correlation between the two windows 5.13. , y) = σ xy + l 3 σ x σ y + l 3 (5.12) Figure 5 . 4 - 54 Figure 5.4 -Processed frames with binary mask ROI. for training and which was used for pre-training. Tables 5.8,5.9,5.10 detail the experiments and results. The notation used are the following: -L recon = Photometric loss -L SRL = Semantic reconstruction loss -Dataset 'P' = PEPPS dataset -Dataset 'O' = Original dataset -Pre-train Dataset 'C' = model trained on the Cityscapes dataset [172] Figure 5 . 
5 Figure 5.5 shows an example of this qualitative evaluation method that includes the main supervision signal that trains both the depth and egomotion CNN. Our evaluation uses the inverse warping which first requires the depth map Dt prediction of the target frame, and then the egomotion of source → target. Figure 5 . 5 - 55 Figure 5.5 -Registration evaluation. Figure 5 . 4 . 54 Our new main loss L SRL was promising, so we wanted to test training the egomotion CNN with only the 'Mask for ROI' as input, which is what is used to calculate the L SRL and shown in Fig. 5.1. These are the inputs we set for Experiments B, along with keeping L DS and L SSIM . Table 5 . 11 -Figure 5 . 6 - 51156 Figure 5.6 -Visualisation of the L SRL . (a) Difference between source & target (b) Difference between registered mask ROI & target Figure 5 . 7 - 57 Figure 5.7 -Iterative Closest Point (ICP) algorithm [200]. IterationA visual of a point cloud plot for the best performing proposed method, Exp. No. C2 in section 5.4, is shown below in Fig.5.8. Figure5.8 is a plot that ignores the eyelid and eyelashes, that are seen as a black plane plotted at depth = 0. By using our semantic segmentations we continue to ignore these areas that hold no valuable information for training. This helped us visualise the errors in the estimations of the current depth and confirmed the need for a stronger constraint to be applied to the depth CNN. The previous section demonstrated that better constraints that fitted our problematic, which was a new loss L SRL along with the correct setup improved training and results. Adding the frame step alone allowed us to stabilise training to obtain baseline results, allowing our egomotion CNN to finally learn valuable information. This led us to focus on improving the learning for the depth CNN, as both depth and egomotion estimations are utilised in the key supervision signals L SRL and L recon . Figure 5 . 8 - 58 Figure 5.8 -Point cloud plot. 5 . 10 . 510 Studies also show that human adult eye diameter is 24.2 mm (transverse) × 23.7 mm (sagittal) × 22.0-24.8 mm [201]. A smaller anterior transparent sphere is the cornea and a posterior sphere representing the sclera. In order to implement this loss we first estimate a depth map, and then using the semantic segmentation we calculate the sphericity of the two regions: cornea, sclera. Figure 5 . 9 - 59 Figure 5.9 -Human eye anatomy [202]. Figure 5 . 10 - 510 Figure 5.10 -Modelling of the eye as two intersecting spheres. 6. 2 2 shows point cloud plot examples from two experiments. Figure 5 . 11 - 511 Figure 5.11 -Point cloud plots. (a) Point without L SF L , Exp. No. C2. (b) Point cloud using L SF L , Exp. No. D4. Figure 5 . 5 Figure 5.12 -SiGMoid: Semantic & geometric monocular visual odometry framework. Figure 6 . 2 - 62 Figure 6.2 -Kalman filtering with left eye video [205]. 6 . 2 F 1 - 621 (a) m cornea × f rame . (b) m cornea with center prediction. (c) Temporal m conjunctiva . (d) Nasal m conjunctiva . Evaluation metrics Accuracy (ACC), precision and recall were used as metrics for all classification results with the Top-N (N=3) accuracy added for multi-class classification, and Area under the (receiver operating characteristic) curve (AUC) for the binary classification. Accuracy = T P + T N T P + T N + F P + F N (Score = 2 * P recision * Recall P recision + Recall = 2 * T P 2 * T P + F P + F N (6.4) 1 . 2 . 3 . 123 Majority voting: (a) Classify each of the frames of all the video. 
(b) Choose the class to which the majority of the frames belong to. Argmax of sum: (a) Mean of the probability vectors. (b) Choose the class corresponding to the highest probability. Sum of argmax: (a) First group the frames by class. (b) In each class, average the probabilities associated with each frame. Figure 6 . 6 3 presents these results, but Tables 6.2,6.3 are more detailed evaluations of the validation and test set. With this experiment we saw the best results using the majority voting method. We decided to use this for all upcoming experiments of classification. Figure 6 . 3 - 63 Figure 6.3 -Classical multi-class classification voting method evaluation per patient. Backbone including : Resnet50, Densenet121, Inception V3, NoisyStudent (EfficientNet-B4) -Learning rate: [2e -3 -2e -6 ] -LR schedulers : OneCycleLR, StepLR, CosineAnnealingLR -Weight decay: [0.1 -3e -4 ] -Select best (True/False) -Div factor (DF), Step Epoch (SE)These are a few of many hyper-parameters that can be explored when fine-tuning. We deemed these the most important and, solely due to time constraints, we based our experiments on changing them. To better clarify the backbone of the experiments includes existing architectures.The learning rate (LR) is a key parameter that determines the step size and manages how quickly the model can adapt to the problem. Weight decay is a regularization technique to better moderate how the weights of the CNN are obtained. Figure 6 . 4 - 64 Figure 6.4 -Hyperparameters percentage importance for the classification results. Figure 6 . 5 - 65 Figure 6.5 -Scatter plot with both modes having select best = False. Figure 6 . 6 - 66 Figure 6.6 -Boxplot with both modes having select best = False. Figure 6 . 7 - 67 Figure 6.7 -Scatter plot with SiGMoid select best = True. Figure 6 . 8 - 68 Figure 6.8 -Boxplot with only SiGMoid select best = True. Titre: Quantification automatique de la sècheresse oculaire par intelligence artificielle au cours du syndrome de Sjögren Mot clés : Sécheresse oculaire, Classification automatique, Apprentissage profond, Conformité de structure Résumé : Le syndrome de Sjögren est une maladie du système immunitaire dont les deux symptômes communs sont la sécheresse des yeux et de la bouche. La gêne occasionnée par les symptômes de sécheresse oculaire affecte la vie quotidienne, entraîne une diminution de 30% des activités et touche 95% des patients atteints du syndrome de Sjögren[START_REF] Nichols | Impact of dry eye disease on work productivity, and patients' satisfaction with over-the-counter dry eye treatments[END_REF]. La sécheresse oculaire est également un trouble multifactoriel indépendant dont la prévalence peut atteindre 50%[2]. L'inflammation de la surface oculaire entraîne une gêne, une fatigue et, globalement, une baisse de la qualité de vie[2, 3]. Les thérapies traditionnelles permettent de gérer les symptômes et d'éviter les dommages permanents. Il est donc essentiel de classer et de suivre l'évolution de la DED. Les méthodes existantes qui permettent de diagnostiquer et de quantifier les DED présentent des inconvénients communs : reproductibilité, invasivité et imprécision. Nous avons passé en revue les méthodes classiques et celles qui intègrent l'automatisation pour mesurer l'étendue de la DED :[4]. Cette étude a montré que la DED n'a pas encore bénéficier de ce que l'intelligence artificielle (IA) a à offrir. 
En utilisant des examens de la surface oculaire à la lampe à fente, nous avons cherché à améliorer la quantification du score d'Oxford[5]. La méthode que nous proposons utilise l'apprentissage non supervisé pour recaler les images des examens dans un système de coordonnées commun. En apprenant simultanément le mouvement de la caméra et la pro-fondeur, nous sommes en mesure de suivre la surface oculaire en 3D, de compenser le mouvement de l'oeil et de visualiser l'oeil entier. La source lumineuse fixée à la caméra constitue un défi et une perturbation lors de l'apprentissage de l'égomotion. Ce problème a été résolu par la segmentation sémantique et l'ajout d'un nouveau signal de supervision : la losse de reconstruction sémantique. Nous avons également utilisé la forme de l'oeil comme une connaissance préalable que nous pouvons inclure comme une contrainte. Ceci a été mis en oeuvre par une perte d'ajustement de forme ; les formes étant deux sphères se croisant l'une l'autre. Notre recallage a montré une amélioration quantitative et qualitative avec chaque contribution. Nous avons également calculé la fiabilité inter-juges des annotations des points ponctués (zones endommagées). Notre méthode s'est rapprochée le plus de ce qui peut être considéré comme une erreur humaine. La méthode de recalage proposée a également été utilisée pour une tâche de prétraitement, la sélection des images. Une fois appliquée à la classification automatique du score d'Oxford, notre méthode a également amélioré les résultats. Cette amélioration valide le fait que les fortes variations de couleur/illumination présentes dans les examens constituent une perturbation pour toute tâche d'apprentissage profond. Nous avons surmonté ce problème dans les deux tâches grâce à nos contributions et à la méthode proposée. Title: Automatic quantification of ocular dryness by artificial intelligence in the context of Sjögren's syndrome Keywords: Dry eye disease, Automated grading, Deep learning, Shape fitting Table 1 . 1 1 -Cost of managing cohort of 1,000 patients Country US$ million France 0.20 -0.38 Germany 0.41 -0.66 Italy 0.47 -0.88 Spain 0.60 -1.01 Sweden 0.28 -0.58 UK 0.70 -1.50 Table 1 . 2 12 Schirmer's test and its variations. 1903 Schirmer Test (Schirmer 1903) 1982 Short Schirmer I (Nelson 1982) 1983 Phenol red thread (Hamano et al. 1983) 1995 Tear Function Index (Xu et al. 1995) 1998 Fluorescein Clearance test (Prabhasawat & Tseng 1998) 1999 Fluorescein Clearance test (Fluorophotometry) (Afonso et al. 1999) 2003 Short Schirmer I (with anesthesia) Table 1 .3 -Ocular surface staining grading scales Scale Year Zone Grading description Cornea 1 Mitaya et al.(Miyata 2003) 2003 Whole N/A Combination score De paiva et al.(De Paiva & Pflugfelder 2004) (Baylor Scale) 2004 5 zones N/A 0-40 sum of corneal zones Whitcher et al.(Whitcher et al. 2010) (SICCA OSS group scale) 2010 Whole 2 zones 0-3 each zone based on counting dots Abelson et al.(Abelson et al. 2016) (ORA Calibra Scale) 2016 3 zones 2 zones 0-4 grade (none to severe staining) Woods et al.(Woods et al. 2018) (CORE) 2018 5 zones 2 zones 0-100 grade (area) Conjunctiva Van Bjisterveld and Utrect (Bijsterveld 1969) 1969 Whole 2 zones 0-3 each zone based on staining intensity Lemp (Lemp & Michael 1995) (NEI/Industry Workshopscale) 1995 5 zones 6 zones 0-3 each zone based on staining intensity Bron et al.(Bron 1997, Bron, Evans & Smith 2003) (Oxford Score) 1997 Whole 2 zones 0-5 each zone (log-linear increase) Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 2.1.1 Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.2 Projective Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking'-that, somehow, is much harder." -Donald Knuth 2.1 2.1.3 Classification: Oxford grade . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Deep Learning-based Dry eye quantification . . . . . . . . . . . . . . . . . . . . . 2.3 Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 Visual Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 Visual-SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.3 Structure from Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Visual Odometry with Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . 2.4.1 Unsupervised deep learning-based methods . . . . . . . . . . . . . . . . . 2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Table 2 . 2 1 -Visual-SLAM methods. No. Method Type Map Reference 1 MonoSLAM Feature-based Sparse [135] 2 PTAM Feature-based Sparse [136] 3 DTAM Direct Dense [121] 4 SVO Hybrid Sparse [130] 5 LSD Direct Semi-dense [125] 6 ORB-SLAM 2 Feature-based Sparse [116] 7 CNN-SLAM Direct Semi-dense [137] 8 DSO Direct Sparse [122] all of them by the geometry estimation algorithm. A group of computers (called nodes) make up the system, and one of them is designated as the master node, which manages work scheduling. A strategy for unstructured image collections that takes into account every photo at once rather than developing a solution piece by piece was provided by Crandall[145]. The method computes an initial estimate of the camera position from all available images and then uses bundle adjustment to improve that estimate for scene structure. The method employs a two-step procedure. Levenberg-Marquardt non-linear optimization, which is related to bundling adjustment but involves extra constraints, is used to estimate camera parameters in the second phase after the discrete belief propagation (BP) technique in the first step. When compared to current incremental bundle adjustment (IBA) techniques, the method provides Each node extracts SIFT features while downsampling its images to a set size. (2) Validation: suggest possible image pairs and validate them (using feature matching) (3) Track generation: aggregate these features so that a single 3D point may be estimated from superior reconstructions and is faster. Numerous SfM techniques have been proposed, including global [145-147], hierarchical [148], and incremental [144, 149-151]. Table 3 . 3 1 -Camera calibration results Method Image Size Fx Fy Cx Cy Reprojection Error Python 1600x1200 29,856.2 29,367.7 800.0 600.5 0.27 Matlab 1600x1200 33,172.8 32,981.5 622.7 539.9 1.37 Python 192x256 3,758.9 3,758.9 138.8 85.4 0.07 Matlab 192x256 5,919.2 5,793.8 76.8 89.5 0.17 quality present in the original database includes changes in lighting fixtures, zoom parameters and also more abrupt motions which were all necessary to finalise the protocol. Database 'O' was also used for an internship in 2019 with similar scope and achieved good results for classification of open/closed eye. 
As part of the PEPSS study 39 patients were evaluated but we only obtained 26 examinations. We named this database 'P' and it contained 52 videos of unique eyes. Table 3 .2 summarises the two databases and shows examples of frames taken from a few examinations of each. Figure 3 .8 shows the two box plots with the minimum, first quartile, median, third quartile, and maximum of both databases for the time and number of frames. Table 3 . 3 3 -Grader error results. Grader errors Mean Euclidian (px) Mean Euclidan (%) All annotations 5.30 0.33 Annotations on sclera 5.68 0.35 Annotations on cornea 3.33 0.21 Table 4 . 4 1 -Depth map predictions part I. Frame T 2 Net-Vanilla T 2 Net-Full DenseDepth IrisDepth Table 4 . 4 2 -Depth map predictions part II. Frame Depth from videos in the wild Struct2Depth Endo-SfMLearner Table 5 . 5 2 -Dice score of all experiments. Mask for disturbances Binary mask ROI Mask for ROI Model Encoder Dice Score Model Encoder Dice Score Model Encoder Dice Score U-Net resnet50 0.88 U-Net resnet50 0.83 U-Net resnet50 0.96 FPN resnet50 0.83 FPN resnet50 0.85 FPN resnet50 0.87 U-Net EfficientNet-B3 0.87 U-Net EfficientNet-B3 0.82 U-Net EfficientNet-B3 0.92 FPN EfficientNet-B3 0.90 FPN EfficientNet-B3 0.85 FPN EfficientNet-B3 0.95 Table 5 . 5 3 -Specification of DispNet architecture [164]. Name Kernel Str. Ch I/O InpRes OutRes Input conv1 7×7 2 6/64 768×384 384×192 Images conv2 5×5 2 64/128 384×192 192×96 conv1 conv3a 5×5 2 128/256 192×96 96×48 conv2 conv3b 3×3 1 256/256 96×48 96×48 conv3a conv4a 3×3 2 256/512 96×48 48×24 conv3b conv4b 3×3 1 512/512 48×24 48×24 conv4a conv5a 3×3 2 512/512 48×24 24×12 conv4b conv5b 3×3 1 512/512 24×12 24×12 conv5a conv6a 3×3 2 512/1024 24×12 12×6 conv5b conv6b 3×3 1 1024/1024 12×6 12×6 conv6a pr6+loss6 3×3 1 1024/1 12×6 12×6 conv6b upconv5 4×4 2 1024/512 12×6 24×12 conv6b iconv5 3×3 1 1025/512 24×12 24×12 upconv5+pr6+conv5b pr5+loss5 3×3 1 512/1 24×12 24×12 iconv5 upconv4 4×4 2 512/256 24×12 48×24 iconv5 iconv4 3×3 1 769/256 48×24 48×24 upconv4+pr5+conv4b pr4+loss4 3×3 1 256/1 48×24 48×24 iconv4 upconv3 4×4 2 256/128 48×24 96×48 iconv4 iconv3 3×3 1 385/128 96×48 96×48 upconv3+pr4+conv3b pr3+loss3 3×3 1 128/1 96×48 96×48 iconv3 upconv2 4×4 2 128/64 96×48 192×96 iconv3 iconv2 3×3 1 193/64 192×96 192×96 upconv2+pr3+conv2 pr2+loss2 3×3 1 64/1 192×96 192×96 iconv2 upconv1 4×4 2 64/32 192×96 384×192 iconv2 iconv1 3×3 1 97/32 384×192 384×192 upconv1+pr2+conv1 pr1+loss1 3×3 1 32/1 384×192 384×192 iconv1 where I s are pixels in the source frame, I t are pixels in the target frame t(x, y) = t 0 + m(x, y)δt(x, y). (5.3) where t 0 is the translation vector and m(x, y) equals one at pixels that could belong to distur- bances and zero otherwise. Architecture of DispNet [164] is detailed in Table 5.3, and of FlowNet [195] in 5.4. Table 5 . 5 4 -Specification of FlowNet architecture [195]. Name Kernel Str. Ch I/O InpRes OutRes Input conv1 7×7 2 6/64 384×512 192×256 Images conv2 5×5 2 64/128 192×256 96×128 conv1 conv3 5×5 2 128/256 96×128 48×64 conv2 conv3 1 3×3 2 256/256 48×64 48×64 conv3 conv4 3×3 2 256/512 48×64 24×32 conv3 1 conv4 1 3×3 2 512/512 24×32 24×32 conv4 conv5 3×3 2 512/512 24×32 12×16 conv4 1 conv5 1 3×3 2 512/512 24×32 12×16 conv5 conv6 3×3 2 512/1024 12×16 6×8 conv5 1 Table 5 5 .7. Both experiments used a set of triplet frames as input : I : [I t-n , I t , I t+n ]. Table 5 . 5 55 -Training input examples. Table 5 . 5 6 -Experiment loss & training details. Exp. Index Loss Training Time Loss plot a 21.91 21hrs b 19.93 72hrs Table 5 . 
5 7 -Experiment results. Exp. Index Frame Depth Prediction a b Table 5 . 5 [START_REF] Litjens | A survey on deep learning in medical image analysis[END_REF] -Experiment details and results A. Exp. No. Frame step DepthNet Input EgoNet Input L SRL L Recon Mean Euclidian (px) Mean Euclidan (%) Training dataset Pre-train dataset A1 n=1 Frames Frames No Yes 33.67 2.10 P C A2 n=10 Frames Frames No Yes 31.96 2.00 P C A3 n=10 Frames Frames Yes Yes 27.07 1.69 P C A4 n=10 Frames Frames Yes Yes 22.48 1.4 P O Grader errors - - - - - 5.30 0.33 - - Table Table 5 . 5 [START_REF] Lemp | The Definition Classification of Dry Eye Disease[END_REF] -Experiment details and results B. Exp. No. Frame step DepthNet Input EgoNet Input L SRL L Recon Mean Euclidian (px) Mean Euclidan (%) Training dataset Pre-train dataset B1 n=10 Frames Binary Segmentation Yes No 22.61 1.41 P O B2 n=10 Frames Binary Segmentation Yes Yes 17.64 1.10 P O B3 n=30 Frames Binary Segmentation Yes No 24.67 1.54 P O B4 n=30 Frames Binary Segmentation Yes Yes 25.12 1.57 P O Grader errors - - - - - 5.30 0.33 - - Table 5 . 5 [START_REF] Wombat | The true meaning of 42[END_REF] -Experiment details and results C. Exp. No. Frame step DepthNet Input EgoNet Input L SRL L Recon Mean Euclidian (px) Mean Euclidan (%) Training dataset Pre-train dataset C1 n=10 Frames Binary Frames Binary Yes No 14.10 0.88 P O C2 n=10 Frames Binary Frames Binary Yes Yes 12.92 0.81 P O C3 n=30 Frames Binary Frames Binary Yes No 16.73 1.05 P O C4 n=30 Frames Binary Frames Binary Yes Yes 12.98 0.81 P O Grader errors - - - - - 5.30 0.33 - - Table 5 . 5 [START_REF] Clegg | The annual cost of dry eye syndrome in France, Germany, Italy, Spain, Sweden and the United Kingdom among patients managed by ophthalmologists[END_REF] -Experiment details and results D. No. Frame step DepthNet Input EgoNet Input L SRL L SF L L Recon Mean Euclidian (px) Mean Euclidan (%) Training dataset Pre-train dataset D1 n=10 Frames Binary Frames Binary Yes No Yes 12.92 0.81 P O D2 n=10 Frames Binary Frames Binary Yes Yes Yes 5.71 0.36 P O D3 n=10 Frames Binary Segmentation Yes No Yes 17.64 1.10 P O Grader errors - - - - - - 5.30 0.33 - - D4 n=10 Frames Binary Segmentation Yes Yes Yes 4.77 0.29 P O Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 6.1.1 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 6.2 Evaluation metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 6.3 Multi-class Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 6.3.1 Classification Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 6.3.2 Experiments & results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 6.4 Binary classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.4.1 Experiments & Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 T his chapter explores the final task of DED grading prediction. A part of the work was implemented during an internship I supervised. The classification task focused on the Oxford score and we experimented with multi-class and binary classification. Ultimately, we utilise the classification framework to further evaluate SiGMoid, our proposed method. Chapter 6 OXFORD GRADE CLASSIFICATION USING SIGMOID "The key to artificial intelligence has always been the representation." 
-Jeff Hawkins 6.1 Table 6 . 6 1 -Database Oxford score grading. Oxford score Videos Frames Percentage 0 15 3000 17.1% 1 21 4200 25% 2 14 2800 16.7% 3 21 4200 25% 4 10 2000 11.9% 5 3 600 3.6% Table 6 . 6 2 -Validation set classical multi-class classification results. Backbone Top-1 Acc Top-3 Acc Top-3 Acc Precision Recall F1-Score ResNet50 0.93 0.99 0.99 0.31 0.27 0.29 EfficientNetB3 0.87 0.99 0.99 0.27 0.24 0.25 EfficientNetB4 0.60 0.81 0.97 0.25 0.12 0.13 Table 6 . 6 3 -Test set classical multi-class classification results. on the training set and is unable to generalize with unseen data. Some explanations include training the model for a long period of time or with few examples causes it to focus on irrelevant information. Once the model is trained, it unfortunately has memorized the training set in the wrong way and cannot perform the prediction or classification it was trained for effectively. Backbone Top-1 Acc Top-3 Acc Top-3 Acc Precision Recall F1-Score ResNet50 0.25 0.53 0.78 0.13 0.27 0.15 EfficientNetB3 0.32 0.47 0.54 0.25 0.26 0.23 EfficientNetB4 0.23 0.41 0.55 0.27 0.25 0.15 fits Table 6 . 6 4 -Data split description. Set No. Patients No. of eyes Frames Percentage Best Frames Best Percentage Train 18 35 9,998 38.1% 6,425 37.3% Validation 32 41 11,336 43.2% 7,327 42.5% Test 7 14 4,911 18.7% 3,487 20.2% Table 6 . 5 65 -Classification input examples. Table 6 . 6 6 -Classification input examples II . Table 6 . 6 [START_REF] Ngiam | Multimodal deep learning[END_REF] . For mode SG, an AUC of 0.93 was the highest; an AUC of 0.64 was obtained for mode C with the same configuration. For mode C, the highest AUC obtained was 0.87; an AUC Table 6 . 6 7 -Best configuration test set results. Mode Backbone LR Weight decay Select best DF Scheduler SE AUC Acc Precision Recall SG NoisyStudent (EfficientNet-B4) 0.0002 0.0003 True NA CosineAnnealingLR 15 0.93 0.64 0.75 0.72 C NoisyStudent (EfficientNet-B4) 0.0002 0.0003 False NA CosineAnnealingLR 15 0.64 0.43 0.70 0.56 C Inception V3 0.002 0.0003 False 20.0 OneCycleLR NA 0.87 0.36 0.18 0.50 SG Inception V3 0.002 0.0003 False 20.0 OneCycleLR NA 0.67 0.36 0.18 0.50 We display the detailed results tables in the appendix but plotted them for easier interpretation and for discussion purposes. Our first scatter plots display two sets of results, and for the experiment number (Exp. No.) the first 40 are using the SiGMoid input and the remaining 40-80 are using the classical frame input. We also added a boxplot to summarise the median, min and max obtained by both modes, SG and C, for each backbone. For the setup where both methods had 'select best = False'; Figures 6.5, 6.6 present the results and Figures 6.7, 6.8 for when we set 'select best = True' for the SiGMoid input classification. the previous chapter, we presented SiGMoid, a self-supervised image registration algorithm towards DED diagnosis and quantification from slit lamp videos. In this chapter, we demonstrated that obtaining an accurate reconstruction is beneficial to the classification of DED grading. It helps in two ways by enlarging the field of view via the registration and allows removal of poor quality frames. Both advantages put together lead to an improvement in classification performance. We demonstrated the generalizability of our method's classification improvement through a large number of experiments. CONCLUSION & PERSPECTIVES "If we knew what we were doing, it would not be called research, would it?" 
-Albert Einstein Conclusion Sjogren's syndrome (SS) is a disorder of the immune system that critically requires more optimized and precise grading of its symptoms. Two of the main symptoms being dry mouth and dry eye. In this work we focused on Dry eye disease (DED), which is also a disease that can be present outside of SS with a prevalence increasing with age. Found to be higher in women than men, prevalence is found to have no correlation with education or location [209] . A comprehensive study on existing methods that we published allowed us to better comprehend what has been done in the field and what was missing. Most existing diagnostic methods are unfortunately invasive, non-reproducible, lacking in accuracy and constantly subjective. Methods are expected to aid in evaluating the progress of the patient, the extent of the damage, and how well certain medications perform. Only a handful of recent methods are able to automate quantification Table 6 . 6 8 -Detailed classification Test set experiment results using S1 data split -Part I. Mode Backbone LR Weight decay Select Best DF Scheduler SE AUC Acc Precision Recall SG densenet121 0.0002 0.0003 True 15.0 OneCycleLR NA 0.91 0.57 0.73 0.67 SG densenet121 0.0002 0.0003 False 15.0 OneCycleLR NA 0.51 0.5 0.57 0.57 C densenet121 0.0002 0.0003 False 15.0 OneCycleLR NA 0.73 0.64 0.75 0.72 SG densenet121 0.002 0.0003 True 20.0 OneCycleLR NA 0.47 0.36 0.4 0.41 SG densenet121 0.002 0.0003 False 20.0 OneCycleLR NA 0.64 0.36 0.18 0.5 C densenet121 0.002 0.0003 False 20.0 OneCycleLR NA 0.51 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 True NA StepLR 20 0.8 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 False NA StepLR 20 0.49 0.36 0.18 0.5 C densenet121 0.0002 0.0003 False NA StepLR 20 0.73 0.5 0.71 0.61 SG densenet121 2,00E-06 0.0003 True NA StepLR 20 0.64 0.57 0.62 0.62 SG densenet121 2,00E-06 0.0003 False NA StepLR 20 0.69 0.57 0.62 0.62 C densenet121 2,00E-06 0.0003 False NA StepLR 20 0.53 0.64 0.75 0.72 SG densenet121 0.002 0.001 True NA LambdaLR 5 0.5 0.36 0.18 0.5 SG densenet121 0.002 0.001 False NA LambdaLR 5 0.38 0.5 0.29 0.39 C densenet121 0.002 0.001 False NA LambdaLR 5 0.5 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 True 5.0 OneCycleLR NA 0.78 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 False 5.0 OneCycleLR NA 0.67 0.57 0.73 0.67 C densenet121 0.0002 0.0003 False 5.0 OneCycleLR NA 0.67 0.57 0.73 0.67 SG densenet121 0.0002 0.0003 True 40.0 OneCycleLR NA 0.71 0.57 0.62 0.62 SG densenet121 0.0002 0.0003 False 40.0 OneCycleLR NA 0.33 0.36 0.18 0.5 C densenet121 0.0002 0.0003 False 40.0 OneCycleLR NA 0.67 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 True NA CosineAnnealingLR 15 0.82 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 False NA CosineAnnealingLR 15 0.58 0.36 0.18 0.5 C densenet121 0.0002 0.0003 False NA CosineAnnealingLR 15 0.73 0.64 0.75 0.72 SG densenet121 0.0002 0.001 True NA CosineAnnealingLR 15 0.82 0.36 0.18 0.5 SG densenet121 0.0002 0.001 False NA CosineAnnealingLR 15 0.4 0.43 0.52 0.51 C densenet121 0.0002 0.001 False NA CosineAnnealingLR 15 0.73 0.64 0.75 0.72 SG densenet121 0.0002 0.0003 True NA LambdaLR 5 0.53 0.36 0.18 0.5 SG densenet121 0.0002 0.0003 False NA LambdaLR 5 0.38 0.5 0.29 0.39 C densenet121 0.0002 0.0003 False NA LambdaLR 5 0.42 0.36 0.18 0.5 SG resnet50 0.0002 0.0003 True 15.0 OneCycleLR NA 0.29 0.21 0.23 0.21 SG resnet50 0.0002 0.0003 False 15.0 OneCycleLR NA 0.62 0.64 0.75 0.72 C resnet50 0.0002 0.0003 False 15.0 OneCycleLR NA 0.69 0.71 0.78 0.78 SG resnet50 0.002 0.0003 True 20.0 OneCycleLR NA 0.53 0.36 0.18 0.5 SG 
resnet50 0.002 0.0003 False 20.0 OneCycleLR NA 0.44 0.57 0.62 0.62 C resnet50 0.002 0.0003 False 20.0 OneCycleLR NA 0.47 0.36 0.18 0.5 SG resnet50 0.0002 0.0003 True NA StepLR 20 0.53 0.5 0.57 0.57 SG resnet50 0.0002 0.0003 False NA StepLR 20 0.64 0.57 0.73 0.67 C resnet50 0.0002 0.0003 False NA StepLR 20 0.51 0.64 0.75 0.72 SG resnet50 2,00E-06 0.0003 True NA StepL 20 0.53 0.5 0.71 0.61 SG resnet50 2,00E-06 0.0003 False NA StepLR 20 0.6 0.43 0.52 0.51 C resnet50 2,00E-06 0.0003 False NA StepLR 20 0.16 0.36 0.18 0.5 SG resnet50 0.002 0.001 True NA LambdaLR 5 0.5 0.36 0.18 0.5 SG resnet50 0.002 0.001 False NA LambdaLR 5 0.5 0.36 0.18 0.5 C resnet50 0.002 0.001 False NA LambdaLR 5 0.5 0.36 0.18 0.5 SG resnet50 0.0002 0.0003 True 5.0 OneCycleLR NA 0.71 0.57 0.73 0.67 SG resnet50 0.0002 0.0003 False 5.0 OneCycleLR NA 0.62 0.64 0.75 0.72 C resnet50 0.0002 0.0003 False 5.0 OneCycleLR NA 0.8 0.79 0.81 0.83 SG resnet50 0.0002 0.0003 True 40.0 OneCycleLR NA 0.53 0.5 0.57 0.57 SG resnet50 0.0002 0.0003 False 40.0 OneCycleLR NA 0.73 0.64 0.75 0.72 C resnet50 0.0002 0.0003 False 40.0 OneCycleLR NA 0.76 0.64 0.75 0.72 SG resnet50 0.0002 0.0003 True NA CosineAnnealingLR 15 0.76 0.57 0.73 0.67 SG resnet50 0.0002 0.0003 False NA CosineAnnealingLR 15 0.91 0.64 0.75 0.72 C resnet50 0.0002 0.0003 False NA CosineAnnealingLR 15 0.38 0.36 0.18 0.5 SG resnet50 0.0002 0.001 True NA CosineAnnealingLR 15 0.49 0.5 0.52 0.52 SG resnet50 0.0002 0.001 False NA CosineAnnealingLR 15 0.69 0.57 0.73 0.67 C resnet50 0.0002 0.001 False NA CosineAnnealingLR 15 0.76 0.64 0.75 0.72 SG resnet50 0.0002 0.0003 True NA LambdaLR 5 0.38 0.36 0.18 0.5 SG resnet50 0.0002 0.0003 False NA LambdaLR 5 0.2 0.36 0.18 0.5 C resnet50 0.0002 0.0003 False NA LambdaLR 5 0.18 0.36 0.18 0.5 Table 6 . 6 9 -Detailed classification Test set experiment results using S1 data split -Part II. 
Mode Backbone LR Weight decay Select Best DF Scheduler SE AUC Acc Precision Recall SG inception v3 0.0002 0.0003 True 15.0 OneCycleL NA 0.8 0.57 0.73 0.67 SG inception v3 0.0002 0.0003 False 15.0 OneCycleL NA 0.58 0.64 0.75 0.72 C inception v3 0.0002 0.0003 False 15.0 OneCycleL NA 0.8 0.71 0.78 0.78 SG inception v3 0.002 0.0003 True 20.0 OneCycleL NA 0.58 0.36 0.18 0.5 SG inception v3 0.002 0.0003 False 20.0 OneCycleL NA 0.67 0.36 0.18 0.5 C inception v3 0.002 0.0003 False 20.0 OneCycleL NA 0.87 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 True NA StepLR 20 0.47 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 False NA StepLR 20 0.67 0.36 0.18 0.5 C inception v3 0.0002 0.0003 False NA StepLR 20 0.18 0.36 0.18 0.5 SG inception v3 2,00E-06 0.0003 True NA StepLR 20 0.56 0.5 0.57 0.57 SG inception v3 2,00E-06 0.0003 False NA StepLR 20 0.6 0.43 0.52 0.51 C inception v3 2,00E-06 0.0003 False NA StepLR 20 0.71 0.5 0.71 0.61 SG inception v3 0.002 0.001 True NA LambdaLR 5 0.5 0.36 0.18 0.5 SG inception v3 0.002 0.001 False NA LambdaLR 5 0.5 0.36 0.18 0.5 C inception v3 0.002 0.001 False NA LambdaLR 5 0.5 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 True 5.0 OneCycleL NA 0.47 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 False 5.0 OneCycleL NA 0.4 0.36 0.18 0.5 C inception v3 0.0002 0.0003 False 5.0 OneCycleL NA 0.51 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 True 40.0 OneCycleL NA 0.44 0.5 0.48 0.48 SG inception v3 0.0002 0.0003 False 40.0 OneCycleL NA 0.58 0.57 0.73 0.67 C inception v3 0.0002 0.0003 False 40.0 OneCycleL NA 0.44 0.5 0.48 0.48 SG inception v3 0.0002 0.0003 True NA CosineAnnealingLR 15 0.47 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 False NA CosineAnnealingLR 15 0.67 0.36 0.18 0.5 C inception v3 0.0002 0.0003 False NA CosineAnnealingLR 15 0.18 0.36 0.18 0.5 SG inception v3 0.0002 0.001 True NA CosineAnnealingLR 15 0.47 0.36 0.18 0.5 SG inception v3 0.0002 0.001 False NA CosineAnnealingLR 15 0.67 0.36 0.18 0.5 C inception v3 0.0002 0.001 False NA CosineAnnealingLR 15 0.16 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 True NA LambdaLR 5 0.4 0.36 0.18 0.5 SG inception v3 0.0002 0.0003 False NA LambdaLR 5 0.43 0.36 0.18 0.5 C inception v3 0.0002 0.0003 False NA LambdaLR 5 0.61 0.36 0.18 0.5 SG NS Efficientnet b4 0.0002 0.0003 True 15.0 OneCycleL NA 0.77 0.57 0.73 0.67 SG NS Efficientnet b4 0.0002 0.0003 False 15.0 OneCycleL NA 0.67 0.57 0.73 0.67 C NS Efficientnet b4 0.0002 0.0003 False 15.0 OneCycleL NA 0.76 0.71 0.7 0.64 SG NS Efficientnet b4 0.002 0.0003 True 20.0 OneCycleL NA 0.58 0.5 0.71 0.61 SG NS Efficientnet b4 0.002 0.0003 False 20.0 OneCycleL NA 0.4 0.36 0.42 0.46 C NS Efficientnet b4 0.002 0.0003 False 20.0 OneCycleL NA 0.13 0.36 0.18 0.5 SG NS Efficientnet b4 0.0002 0.0003 True NA StepLR 20 0.44 0.5 0.71 0.61 SG NS Efficientnet b4 0.0002 0.0003 False NA StepLR 20 0.2 0.29 0.15 0.4 C NS Efficientnet b4 0.0002 0.0003 False NA StepLR 20 0.47 0.36 0.18 0.5 SG NS Efficientnet b4 2,00E-06 0.0003 True NA StepLR 20 0.6 0.29 0.15 0.4 SG NS Efficientnet b4 2,00E-06 0.0003 False NA StepLR 20 0.51 0.36 0.42 0.46 C NS Efficientnet b4 2,00E-06 0.0003 False NA StepLR 20 0.6 0.5 0.57 0.57 SG NS Efficientnet b4 0.002 0.001 True NA LambdaLR 5 0.71 0.64 0.58 0.54 SG NS Efficientnet b4 0.002 0.001 False NA LambdaLR 5 0.56 0.64 0.32 0.5 C NS Efficientnet b4 0.002 0.001 False NA LambdaLR 5 0.53 0.57 0.48 0.49 SG NS Efficientnet b4 0.0002 0.0003 True 5.0 OneCycleL NA 0.67 0.5 0.71 0.61 SG NS Efficientnet b4 0.0002 0.0003 False 5.0 OneCycleL NA 0.69 0.57 0.73 0.67 C NS Efficientnet b4 0.0002 0.0003 False 
5.0 OneCycleL NA 0.54 0.43 0.69 0.56 SG NS Efficientnet b4 0.0002 0.0003 True 40.0 OneCycleL NA 0.51 0.57 0.62 0.62 SG NS Efficientnet b4 0.0002 0.0003 False 40.0 OneCycleL NA 0.56 0.5 0.48 0.48 C NS Efficientnet b4 0.0002 0.0003 False 40.0 OneCycleL NA 0.6 0.57 0.57 0.58 SG NS Efficientnet b4 0.0002 0.0003 True NA CosineAnnealingLR 15 0.93 0.64 0.75 0.72 SG NS Efficientnet b4 0.0002 0.0003 False NA CosineAnnealingLR 15 0.44 0.43 0.69 0.56 C NS Efficientnet b4 0.0002 0.0003 False NA CosineAnnealingLR 15 0.64 0.43 0.69 0.56 SG NS Efficientnet b4 0.0002 0.001 True NA CosineAnnealingLR 15 0.51 0.43 0.47 0.47 SG NS Efficientnet b4 0.0002 0.001 False NA CosineAnnealingLR 15 0.29 0.36 0.18 0.5 C NS Efficientnet b4 0.0002 0.001 False NA CosineAnnealingLR 15 0.49 0.36 0.18 0.5 SG NS Efficientnet b4 0.0002 0.0003 True NA LambdaLR 5 0.71 0.64 0.58 0.54 SG NS Efficientnet b4 0.0002 0.0003 False NA LambdaLR 5 0.56 0.64 0.32 0.5 C NS Efficientnet b4 0.0002 0.0003 False NA LambdaLR 5 0.53 0.57 0.48 0.49 https://www.necessity-h2020.eu/ https://www.haag-streit.com/haag-streit-diagnostics/products/slit-lamps/bq-900/ Chapter BASELINE METHODS "Research means that you don't know, but are willing to find out" -Charles F. Kettering T his chapter looks at the key baseline methods that we wanted to focus on. We take a closer look at the state-of-the-art and the techniques they implemented. Naturally, we carefully examine any assumptions made and anything that does not line up with our research topic. Training set-up Experiments In order to obtain our fully trained model that could predict any of the three semantic segmentations we annotated in Table 5.1, we utilised the library Segmentation Models [191]. Segmentation models is a PyTorch Module that can be easily incorporated to any code. It facilitates the creation of models and contains pre-trained weights for faster and better convergence. by the intern: Abdel OUEDRAOGO. We first investigated classical classification where frames from the examinations were used to predict the Oxford score. We also looked at various classification methods, as well as experimented with different backbones and hyper-parameters. With a challenging and unbalanced dataset we focused on binary classification. We then extended Abdel's work and performed a more thorough analysis to evaluate the incorporation of SiGMoid to the classification. We saw improvement on with various backbones and hyper-parameters which validated the generalizability. Ultimately we were able to show that the classification of DED grading benefits from having an accurate reconstruction. Perspectives We believe that it is crucial to increase both the visual quality and visual field in order to have a basis for better DED grading. By applying deep learning methods that are less conventional we were able to find a self-supervised method to register frames to a common coordinate system. DED has yet to benefit from all types of advances we have seen grow over the years in AI. We think that this thesis has shown, through encouraging findings and original concepts, that there is still more to learn about this topic because it is slowly growing with AI infused interest. Some of the perspectives we wish to address in the future: 1. Recently, our team was granted access to a more modern database. This database was obtained utilizing a more sophisticated acquisition protocol, and the quality is evidently much better. 
With this, we would be able to train with examinations of better quality in addition to expanding our training set. From Qauntel medical, the ION imaging system 1 produces high resolution videos and images (4K). The ION system combines the slit-lamp with Apple's iPhone technology allowing for fully connected examinations. 2. We believe that the sphericity constraint can be expanded by including the scale. Once we input the scale we can further utilise the human eye estimate radius to also help the model predict more realistic depth maps. This can also ultimately lead to more accurate 3D reconstructions. 3. Another way to expand the sphericity loss is through a sphericity measure ψ constraint [210]. Defined as the ratio of the surface area and volume, it is 1 for a sphere and less for any other shape. A study in [211] focused on both sphericity and roundness which can also be incorporated to as a differentiation constraint. 4. Apply our proposed solutions to other DED diagnostic methods for a richer analysis. This is possible as we have annotations for various grades as we mentioned in Chapter 3, Conference papers: -
Stéphane Genaud email: [email protected] P2P-MPI: A Peer-to-Peer Framework for Robust Execution of Message Passing Parallel Programs on Grids Introduction Grid computing oers the perspective of solving massive computational problems using a large number of computers arranged as clusters embedded in a distributed telecommunication infrastructure. It involves sharing heterogeneous resources (based on dierent platforms, hardware/software architectures) located in dierent places, belonging to dierent administrative domains over a network. When speaking of computational grids, we must distinguish between grids involving stable resources (e.g. a supercomputer) and grids built upon versatile resources, that is computers whose conguration or state changes frequently. (e.g. computers in a students computer room which are frequently switched o and whose OS is regularly re-installed). The latter are often referred to as desktop grids and may in general involve any unused connected computer whose owner agrees to share its CPU. Thus, provided some magic middleware glue, a desktop grid may be seen as a large-scale computer cluster allowing to run parallel application traditionally executed on parallel computers. However, the question of how we may program such cluster of heterogeneous computing resources remains unclear. Most of the numerous diculties that people are trying to overcome today fall in two categories. -Middleware The middleware management of tens or hundreds grid nodes is a tedious task that should be alleviated by mechanisms integrated to the middleware itself. These can be fault diagnostics, auto-repair mechanisms, remote update, resource scheduling, data management, etc. -Programming model Many projects propose a client/server (or RPC) programming style for grid applications (e.g. JNGI [START_REF] Verbeke | Framework for Peer-to-Peer Distributed Computing in a Heterogeneous, Decentralized Environment[END_REF], DIET [START_REF] Caron | A Scalable Approach to Network Enabled Servers[END_REF] or XtremWeb [START_REF] Fedak | XtremWeb : A Generic Global Computing System[END_REF]) oer such a programming model. A major advantage of this paradigm lies in the ability for the client to easily cope with servers failures. However, the message passing and data parallel programming models are the two models traditionally used by parallel programmers. MPI [START_REF]MPI: A Message Passing Interface Standard[END_REF] is the de-facto standard for message passing programs. Most MPI implementations are designed for the development of highly ecient programs, preferably on dedicated, homogeneous and stable hardware such as supercomputers. Some projects have developed improved algorithms for communications in grids (MPICH-G2 [START_REF] Karonis | MPICH-G2: A Grid-enabled implementation of the Message Passing Interface[END_REF], PACX-MPI [START_REF] Gabriel | Distributed Computing in an Heterogeneous Computing Environment[END_REF], MagPIe [START_REF] Kielmann | MagPIe: MPI's collective communication operations for clustered wide area systems[END_REF] for instance) but still, assume hardware stability. This This design means no overhead in process management but makes fault handling dicult: one process failure causes the whole application to fail. This constraint makes traditional MPI applications unadapted to run on grids because failures of nodes are somehow frequent in this context. Moreover, MPI applications are OS-dependent binaries 2 which complicates execution in highly heterogeneous environments. 
If we put these constraints altogether, we believe a middleware should provide the following features: a) self-conguration (system maintenance autonomy, discovery), b) data management, c) robustness of hosted processes (fault detection and replication), and d) abstract computing capability. This paper introduces a new middleware called P2P-MPI designed to meet the above requirements. The contribution of P2P-MPI is its integrated approach: it oers simultaneously a middleware with the above characteristics and a general parallel programming model based on MPI. The integration allows the communication library to transparently handles robustness by relying on the internals of the middleware, relieving the programmer from the tedious task of explicitly specifying how faults are to be recovered. The rest of the paper shows how P2P-MPI fullls these requirements. We rst describe (section 2) the P2P-MPI middleware through its modules, so as to understand the protocols dened to gather collaborating nodes in order to form a platform suitable for a job request. In section 3 we explain the faultdetection and replication mechanisms in the context of message-passing programs. We nally discuss P2P-MPI behavior in section 4, at the light of experiments carried out on several benchmarks and on dierent hardware platforms. The P2P-MPI Middleware We describe in this section P2P-MPI 's structure and how its modules interact to gather requested resource to execute an MPI application. subset of the MPJ specication [START_REF] Carpenter | MPJ: MPIlike Message Passing for Java[END_REF]). The core behind the API implements appropriate message handling, and relies on three other modules. Modules Organization This task, as detailed in section 3 relies on processes liveliness detection. Three separate services (running as daemon processes) implement one of the desired feature listed in the introduction. -The Message Passing Daemon (MPD) is responsible for self-conguration, as its role is either to search for participating nodes or to act as a gate-keeper of the local resource. -The File Transfer Service (FT) module handles the data management by transferring executable code, input and output les between nodes. -The Fault Detection Service (FD) module is necessary for robustness as the application needs to be notied when nodes become unreachable during execution. In addition, we also rely on external pieces of software. The abstract computing capability is provided by a Java Virtual Machine and the MPD module uses JXTA [START_REF]JXTA[END_REF] for self-conguration. JXTA denes a set of protocols that may be used in any peer-to-peer applications. The specication formalizes the role of peers, how peers group are formed to achieve a common task, and the way they can discover other peers and communicate. Currently, two implementations of the specication are actively developed with the support of Sun Micro Systems. One is in Java and the other in C, the Java implementation being always ahead the C version. Discovery for an Execution Platform In the case of desktop grids, the task of maintaining an up-to-date directory of participating nodes is a so tedious task that it must be p2pmpi.tex; 24/11/2006; 17:03; p.4 automated. We believe one of the best options for this task is discovery, which has proved to work well in the many peer-to-peer systems developed over the last years for le sharing. P2P-MPI uses the discovery service of JXTA. The discovery is depicted in JXTA as an advertisement publication mechanism. 
A peer looking for a particular resource posts some public advertisement and then waits for answers. (The advertisements are actually handled transparently by a set of peers called rendez-vous, whose role is to cache, broadcast, search for advertisements.) The peers which discover the advertisement directly contact the requester peer. In P2P-MPI, we use the discovery service to nd the required number of participating nodes at each application execution request. Peers in P2P-MPI are the MPD processes. When a user starts up the middleware it launches a MPD process which publishes its pipe advertisement. This pipe can be seen as an open communication channel that will be used to transmit boot-strap information. When a user requests n processors for its application, the local MPD begins to search for some published pipe advertisements from other MPDs. Once enough peers have reported their availability, it connects to the remote MPDs via the pipe 3 to ask for their FT and FD services ports. The remote MPD acts as a gate-keeper in this situation and it may not return these service ports if the resource had changed its status to unavailable in the meantime. Once enough 4 hosts have sent their service ports, we have a set of hosts ready to execute a program. We call this set an execution platform since the platform lifetime is not longer than the application execution duration. Job submission scenario We now describe the steps following a user's job submission to a P2P-MPI grid. The steps listed below are illustrated on Figure 2. (1) The user must rst join the grid. By invoking mpiboot, it spawns the MPD process which makes the local node join the P2P-MPI group if it exists, or creates it otherwise. (2) The job is then submitted by invoking a run command which starts the process with rank 0 of the MPI application on local host. We call this process the root process. ( (3) (1) (1) (3) (4) (7) (8) (5) (6) P2P-MPI Peer Group FT FT FD Submitter Grid peer Figure 2. A submission where the submitter nds one collaborating peer. the local MPD sends into each discovered pipe, the socket where the MPI program can be contacted. (4) Hand-shake: the remote peer sends its FT and FD ports directly to the submitter MPI process. (5) File transfer: program and data are downloaded from the submitter host via the FT service. (9) Fault detection: MPI processes register in their local FD service and starts. Then FD will exchange their heart-beat message and will notify MPI processes if they become aware of a node failure. Replication for Robustness As stated in the introduction, the robustness of an execution is of tremendous importance for MPI application since a single faulty process p2pmpi.tex; 24/11/2006; 17:03; p.6 is very likely to make the whole application fail. There is no simple solution to that problem and many of the proposals we discuss in section 5, mainly based on check-pointing are incompatible with a peer-to-peer paradigm. We have chosen for P2P-MPI a solution based on process replication. Logical processes and replicas Though absolutely transparent for the programmer, P2P-MPI implements a replication mechanism to increase the robustness of an execution platform. When specifying a desired number of processors, the user can request that the system run for each process an arbitrary number of copies called replicas. An exception is made for the root process which is not replicated because we assume a failure on the submitter host is critical. 
In practice, it is shorter to request the same number of replicas per process, and we call this constant the replication degree. Currently, because we do not take into account hosts reliability, we map processes randomly on hosts. Therefore the interest of specifying how many replicas should be chosen per process is limited. In the following we name a usual MPI process a logical process, noted P i when it has rank i in the application. A logical process P i is thus implemented by one or several replicas, noted P 0 i , . . . , P n i . The replicas are run in parallel on dierent hosts since the goal is to allow the continuation of the execution even if one host fails. Mapping of logical processes We can now precise how the MPD determines if enough peers are available to satisfy the user request. If the system can not nd one host per process required by the user request, then it will map several processes per node. Consider a run command requesting n logical processes with a replication degree r. In addition to the root process, we need (n -1)r other nodes to run an application with one process per node. However, the minimum number of nodes we need without having replicas of the same rank residing on the same node is only r. Thus, if the MPD has found after a certain period, an insucient number m of nodes such that r ≤ m < (n -1)r, it distributes the MPI ranks {1, . . . , n-1} to these nodes in a round-robin way to ensure that no two replicas of the same rank run on the same node. Otherwise, the MPD will keep searching for other nodes until at least r nodes are found or a nal time-out is reached. p2pmpi.tex; 24/11/2006; 17:03; p.7 Replicas coordination protocol The coordination of replicas means in our case, that we must dene a protocol insuring that the communication scheme is kept coherent with the semantics of the original MPI program. Such protocols have been proposed and fall into two broad classes. Passive replication [START_REF] Budhiraja | The Primary-Backup Approach[END_REF] (or primary backup) makes senders send messages to only one process (the primary) in the group of receivers which in turns, retransmits the message to replicas of the group. In active replication [START_REF] Schneider | Replication Management Using State-Machine Approach[END_REF], all replicas of the destination group receive the sent message. Our protocol follows the latter strategy except that specic agreement protocols are added on both sender and receiver sides. Sending message agreement protocol On the sender side, we limit the number of sent messages by introducing the following agreement protocol: In each logical process, one replica is elected as master of the group for sending. Figure 3 illustrates a send instruction from P 0 to P 1 where replica P 0 0 is assigned the master's role. When a replica reaches a send instruction, two cases arise depending on the replica's status: if it is the master, it sends the message to all processes in the destination logical process. Once the message is sent, it noties the other replicas in its logical process to indicate that the message has been correctly transmitted. if the replica is not the master, it rst looks up a journal containing the identiers of messages sent so far (log on Figure 3) to know if the message has already been sent by the master. If it has already been sent, the replica just goes on with subsequent instructions. If not, the message to be sent is stored into a backup table and the execution continues. 
(Execution only stops in a waiting state on a receive instruction.) When a replica receives a commit, it writes the message identier in its log and if the message has been stored, removes it from its backup table. Reception message agreement protocol The communications in P2P-MPI are asynchronous and from an implementation point of view, the P2P-MPI library creates a thread for sending a message in the sender part. The MPI standard requires that when several messages with the same tag are sent, they are received in the same order they were sent. P2P-MPI implements that property using a unique identier for each MPI message. Each message has an identier mid constructed as follows: mid = (cid, midkey, msg) with midkey = (src, dest, tag) where cid is the identier of the communicator5 , src and dest are the MPI rank of the sender and receiver processes respectively, tag is a tag number of the message and msg is the number of calling MPI_Send/MPI_Recv of a message which has the same midkey. For example, in COMM_WORLD a process of rank 0 sends two messages with the same tag (tag = 1) to a process of rank 2. P2P-MPI constructs the identier of a rst message with cid=0, src=0, dest=2, tag=1 and msg = 0. Assume that this is the rst time that MPI_Send/MPI_Recv is called with midkey = (0, 2, 1). Thus, the identier of the rst message is (0, (0, 2, 1), 0) and (0, (0, 2, 1), 1) for the second message. Symmetrically in the receiver, the rst MPI_Recv call will wait for the message with the identier (0, (0, 2, 1), 0) and (0, (0, 2, 1), 1) for the second MPI_Recv call. If the second message arrives rst, it is put into the message buering table. Thus, P2P-MPI preserves the message order according to the MPI standard. Note that the state of replicas are not necessarily the same after the reception of a message. The only case where such a situation can happen is when using the speciers MPI_ANY_SOURCE or MPI_ANY_TAG as source and tag values respectively in the receive call. Suppose a process P 0 implemented by two replicas (P 0 0 and P 1 0 ), whose code executes one receive operation followed by another, both specifying MPI_ANY_SOURCE. Then, assume two other processes P 1 and P 2 send to P 0 nearly at the same time, messages m 1 and m 2 respectively. It can happen that P 0 0 receives m 1 before m 2 while P 1 0 receives m 2 before m 1 . However, this programming pattern denotes that the subsequent computations depending on the received values make no assumptions on the order of the receptions, and either sequence of reception is acceptable. A common example of such computation is the summation of values gathered in an unspecied order which is correct because sum is associative, commutative and has a neutral element. Yet, it remains particular situations where the consistency of results can not be guaranteed because failures can happen at any point. In the example above, consider P 0 0 sends its rst received message m 1 , commits its send at P 1 0 , then crashes. P 1 0 becomes the master and see that the rst message has been sent, so it goes on with sending its second message, which is again m 1 , yielding a duplication of m 1 and a loss of m 2 . A consensus on the received values could theoritically solve this problem, but given the rareness of such cases, is not implemented here for a matter of performances. Fault Detection and Recovery To become eective the replication mechanism needs to be notied of processes failures. 
The problem of failure detection has received much attention in the literature and many protocols mostly based on timeouts, have been proposed. The two classic models of fault detectors, discussed in [START_REF] Felber | Failure detectors as rst class objects[END_REF], are the push and pull models. However, if these models have proved to work well in small networks, better proposals have been made for large-scale systems, such as the gossip-style protocol [START_REF] Renesse | A Gossip-Style Failure Detection Service[END_REF], which avoids the combinatorial explosion of messages. This is the protocol adopted for P2P-MPI. Fault survival We now study the probability for an application to fail after k faults have occurred. Again, we consider the root process as reliable. This leads to simpler expressions in the following discussion while having a very small impact on quantitative results when tens of processors are involved. PROPOSITION 1. Consider an application with p logical processes (not counting the root process), with a replication degree of r, having all its processes distributed on dierent hosts. If we assume faults occurrences are identically distributed among hosts, the failure probability of the application after k faults is p i=1 (-1) i-1 p i pr-ir k-ir pr k Proof. Suppose a sample of k faulty nodes is drawn randomly. The application fails as soon as a process has its r replicas failed. Let A i be the event amongst the k faulty nodes drawn, r belong to process P i . The probability of failure for the application is thus the union of such events for all logical processes, noted P r( p i=1 A i ). From the inclusion-exclusion principle (also called Poincaré formula) we have : P r( p i=1 A i ) = p i=1   (-1) i-1 1≤j 1 <...<j i ≤p P r(A j 1 ∩ . . . ∩ A j i )   (1) where the inner summation is taken over the i-subsets of {A} p i=1 . (We call an i-subset a subset of a set on p elements containing exactly i elements. The number of i-subsets on p elements is p i ). The expression P r(A j 1 ∩ . . . ∩ A j i ) denotes the probability that the i logical processes j 1 , . . . j i fail, that is r faults occur in each process. Following the counting principle of an hypergeometric distribution, we have: P r(A j 1 ∩ . . . ∩ A j i ) = r r i • pr-ir k-ir pr k = pr-ir k-ir pr k (2) p2pmpi.tex; 24/11/2006; 17:03; p.11 Indeed, amongst the k faults we have r faults on each of these i processes, which is r r i = 1 possibility. There remains k -ir faults to be chosen from the pr -ir faults (on the p -i remaining processes), which is pr-ir k-ir . We now sum the expression of equation 2: 1≤j 1 <...<j i ≤p P r(A j 1 ∩ . . . ∩ A j i ) = pr-ir k-ir pr k • 1≤j 1 <...<j i ≤p 1 (3) and by denition of the i-subsets, 1≤j 1 <...<j i ≤p 1 =| {(j 1 , . . . , j i ) ∈ {1, . . . , p} p | j 1 < . . . < j i } |= p i replacing into equation 1, we have : P r( p i=1 A i ) = p i=1 (-1) i-1 p i pr-ir k-ir pr k (4) We illustrate on gure 4 how the probability of application failure evolves with the number of faulty nodes. We make the number of faults vary in a range bounded by the minimum that can crash the application, which is r, and the maximum faults that the application can tolerate, which is p(r -1). We truncate this range when probability is very close to 1. The top gure shows the behavior for an application with 50 processes and the lower gure for 100 processes. These data help clarify what kind of platform P2P-MPI could be used on, and with which replication degree. 
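Proposition 1 is straightforward to evaluate numerically. The following short Python sketch (using math.comb for the binomial coefficients; the function name p_fail is ours) computes the failure probability of equation (4) and can be used to reproduce the curves of Figure 4.

```python
from math import comb

def p_fail(p: int, r: int, k: int) -> float:
    """Failure probability of an application with p logical processes and
    replication degree r after k node faults (root process excluded), eq. (4)."""
    total = comb(p * r, k)
    s = 0.0
    for i in range(1, p + 1):
        if k - i * r < 0:
            break  # not enough faults left to wipe out i whole processes
        s += (-1) ** (i - 1) * comb(p, i) * comb(p * r - i * r, k - i * r) / total
    return s

# p = 50 processes, replication degree r = 2, k = 7 faults:
# the result is close to 0.20, i.e. about an 80% chance of survival (cf. Figure 4).
print(p_fail(50, 2, 7))
```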
A 50 processes application has 80% chance of surviving after 7 faults with a replication degree of 2. Even for a run that lasts more than an hour we feel the user can be condent if the execution platform consists of some pretty stable resources (e.g. computers owned by sta members in a university). On the contrary, on unstable resources (e.g. machines from student computer rooms) a replication degree of 3 would probably be more appropriate as the above probability is reached for 26 faults. present hereafter the experimental data concerning the fault resilience for a 50 processes application with replication degree r = 2 and r = 3. One trial in an experiment consists in killing every 50 seconds (a period long enough to make sure all processes get aware of that failure) one process chosen randomly out of the remaining application processes, until the application crashes. When the application crashes, we report the number of faults that led to the failure. The trials have been iterated a 100 times for each replication degree. Thus, the trial can be considered a random variable X whose values are the numbers of faults leading to a crash. In Figures 5 and6, we plot on the left sides all the outcomes from trials. Each graduation on the horizontal axis corresponds to one trial, and the corresponding cross on the vertical axis indicates how many faults led to the application failure for that trial. We compute the sample mean µ X and standard deviation σ X and we plot µ X on the gure as the horizontal continuous line, while µ X -σ X and µ X + σ X are represented as dashed lines. p2pmpi.tex; 24/11/2006; 17:03; p.13 In order to compare these data with the theoretical count, we are interested in the probability of the events the application has failed after k faults in the sample, noted P r( X ≤ k). If we let F k the number of outcomes where the application failed with exactly k faults, P r( X ≤ k) = (F 1 +F 2 +. . .+F k )/N , where N is the sample size. In our case, the above probabilities are well summarized by computing the percentile for each k in the sample. For instance in the sample of Figure 5, the event the application has failed after 5 faults is at the 16th percentile, that is in the lowest 16% of the dataset sorted ascending on fault numbers. On the right handsides of Figures 5 and6 are superimposed the percentiles and the curves obtained from the theoretical count. From this graphical point of view, we see that our experimental data have a distribution that closely follows the model. From a statistic point of view, we can state that the two samples follow our theoretical distribution thanks to a χ 2 test [START_REF] Chase | General Statistics[END_REF]. We test the hypothesis H 0 : the number of faults observed is equivalent to the one predicted by the theoretical distribution. The variable called χ 2 measures the distance between an observed distribution and a theoretically expected one. In our case, we obtain χ 2 = 23.19 and χ 2 = 78.34 for the rst and second experiment respectively. If we choose a typical condence level of 95%, the previous values are well below the critic probabilities (40.11 with 27 degrees of freedom and 95.08 with 74 degrees of freedom, for experiments 1 and 2 respectively). Hence the H 0 hypothesis must not be rejected. Experiments The objective of the experiments is to study how P2P-MPI behaves in terms of performances. The experiments test communication and computation performances, how applications scale, and the impact of replication on performance. 
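Before turning to these performance experiments, note that the fault-survival comparison of the previous subsection can also be reproduced purely in software. The sketch below reuses p_fail from the sketch above; in the real experiment processes were actually killed on Grid5000, whereas here faults are simply drawn at random, which matches the assumption of identically distributed faults. It compares the empirical proportion of crashed runs after k faults with the theoretical count.

```python
import random

def simulate_trial(p: int, r: int) -> int:
    """One fault-injection trial: kill distinct nodes in random order until some
    logical process has lost all of its r replicas; return the number of faults."""
    alive = [r] * p
    nodes = [i for i in range(p) for _ in range(r)]  # one node per replica
    random.shuffle(nodes)
    for k, proc in enumerate(nodes, start=1):
        alive[proc] -= 1
        if alive[proc] == 0:
            return k
    return p * r

p, r, n_trials = 50, 2, 100
outcomes = [simulate_trial(p, r) for _ in range(n_trials)]
for k in range(r, 15):
    empirical = sum(1 for x in outcomes if x <= k) / n_trials
    print(k, round(empirical, 2), round(p_fail(p, r, k), 2))
```

A chi-square goodness-of-fit test, as applied above to the real trials, can then be run on the simulated fault counts in the same way.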
These tests are carried out on a limited set of hardware platforms where we can reproduce the experiments. These are a) a student computer room in experiment 1, and b) a part of the Grid5000 testbed (detailed later) in experiment 2. We do not have tests in highly heterogeneous environments such as the usual situation where nodes are dynamically chosen from PCs around. A precise assessment of P2P-MPI 's behavior on such congurations is dicult because experiments are not easily reproducible. However, the two congurations chosen for the experiments may also reect real situations. Experiment 1 In the rst experiment, we measure the gap between P2P-MPI and some reference MPI implementations on a limited set of PCs. The hardware platform used is a student computers room (24 Intel P4 3GHz, 512MB RAM, 100 Mbps Ethernet, Linux kernel 2.6.10). We compare P2P-MPI using java J2SE-5.0, JXTA 2.3.3 to MPICH-1.2.6 (p4 device) and LAM/MPI-7.1.1 (both compiled with gcc/g77-3.4.3). We have chosen two test programs with opposite characteristics from the NAS benchmarks [START_REF] Bailey | The NAS Parallel Benchmarks[END_REF] (NPB3.2) 6 . The rst one is IS (Integer Sort- ing) which involves a lot of communications since a sequence of one MPI_Allreduce, MPI_Alltoall and MPI_Alltoallv occurs at each iteration. The second program is EP (Embarrassingly Parallel). It does independent computations with a nal collective communication. Thus, this problem is closer to the class of applications usually deployed on computational grids. Expected Behavior It is expected that our prototype achieves its goals at the expenses of an overhead incurred by several factors. First the robustness requires extracommunications: regular heart-beats are exchanged, and the number of message copies increase linearly with the replication degree as can be seen on Figure 3. Secondly, compared to ne-tuned optimizations of communications of MPI implementation (e.g. in MPICH-1.2.6 [START_REF] Thakur | Optimization of Collective Communication Operation in MPICH[END_REF] use four dierent algorithms depending on message size), P2P-MPI has simpler optimizations (e.g. binomial trees). Last, the use of a virtual machine (java) instead of processor native code leads to slower computations. Performances Figure 7 plots results for benchmarks IS (left) and EP (right) with replication degree 1. We have kept the same timers as in the original benchmarks. Values plotted are the average total execution time. For each benchmark, we have chosen two problem sizes (called class A and B) with a varying number of processors. Note that IS requires that the number of processors be a power of two and we could not go beyond 16 PCs. For IS, P2P-MPI shows an almost as good performance as LAM/MPI up to 16 processors. The heart-beat messages seem to have a negligible eect on overall communication times. Surprisingly, MPICH-1.2.6 is signicantly slower on this platform despite the sophisticated optimization of collective communications (e.g. uses four dierent algorithms depending on message size for MPI_Alltoall). It appears that the MPI_Alltoallv instruction is responsible for most of the communication time because it has not been optimized as well as the other collective operations. The EP benchmark clearly shows that P2P-MPI is slower for computations because it uses Java. In this test, we are always twice as slow as EP programs using Fortran. 
EP does independent computations with a nal set of three MPI_Allreduce communications to exchange results in short messages of constant size. When the number of processors increases, the share of computations assigned to each processor decreases, which makes the P2P-MPI performance curve tends to approach LAM and MPICH ones. Replication overhead Since replication multiplies communications, the EP test shows very little dierence with or without replication, and we only report measures for IS. Figure 8 shows the performances of P2P-MPI for IS when each logical process has one to four replicas. For example, for curve Class B, 8 processors, 3 replicas per logical process means 24 processors were involved. We have limited the number of logical processes so that we have at most one replica per processor to avoid load-imbalance or communications bottlenecks. As expected, the gure shows a linear increase of execution time with the replication degree, with a slope depending on the number of processors and messages sizes. p2pmpi.tex; 24/11/2006; 17:03; p.17 Experiment 2 In this second experiment we test the ability of P2P-MPI to handle an application using up to 128 processes, and the potential scalability of the application. The scalability test also analyzes the impact of having processors on distant sites. The experiment takes place on Grid5000 7 , a testbed designed as a tool for experimentation. By the end of 2005, it will allow a user to reserve a number of nodes across several geographical sites and reinstall a full system on each of the nodes. Currently under construction, the testbed will eventually gather 5000 processors across nine sites in France through a virtual private network on the research and education network Renater. Two thirds of the processors are homogeneous (Opteron) while the remaining third may be dierent (currently Xeon, Itanium 2 and G5). Though not yet fully functional (only a part of the processors are currently available and the multi-site reservation is not implemented yet) the testbed allowed us to test P2P-MPI with a number of processors we would not be able to gather otherwise. Performances The application we choose is the ray-tracer from the Java Grande Forum MPJ Benchmark because it was reported [START_REF] Bornemann | MPJ/Ibis: a Flexible and Ecient Message Passing Platform for Java[END_REF] to scale well with MPJ/Ibis. This program renders a 3D scene of 64 spheres into an image of 150x150 or 500x500 pixels in the original benchmark, but we have computations on its part of the scene for which a checksum is combined with a MPI_Reduce operation by the rank 0 process. In the end, each process sends its result to process 0. In the experiment we have run several series of executions of the application on dierent days of a week. A series consists in a set of executions using from 2 to 128 processes (one process per processor). We choose two sites of Grid5000 with homogeneous processors (AMD Opteron 2GHz, 2GB RAM) to isolate the impact of communications. The sites are Orsay located near Paris and Sophia near Nice, and are bound with a 2.5 Gbps network link. We observe how the application scales on a single site (all processes at Orsay) and then when processes are distributed half on each site. We report on gure 9 the highest and lowest speedups obtained in each case. The application scales well up to 64 processors on a single site, and in some occasions the execution involving the two sites is even as quick as the lowest execution on one site. 
With 128 processors, the scalability largely decreases on one site, and turns to a slowdown with distant sites. We reach here a computation to communication ratio that does not allow for more parallelism. However, the experiment conrms the good scalability of the appli- Fault-tolerance The deployment of message passing applications in unstable environments is a challenging area and numerous attempts have been made to tackle this problem. Most works are devoted to check-point and restart methods in which the application is restarted from a given recorded state. The rst such attempt has been the Co-Check [START_REF] Stellner | CoCheck: Checkpointing and Process Migration for MPI[END_REF] framework, which extends the single process checkpoint of Condor [START_REF] Litzkow | Condor: A Hunter of Idle Workstations[END_REF] to a distributed parallel application. The consistency of a saved state is insured by ushing all in ight messages before a check-point is created. However by saving the whole process states, Co-Check suers a large overhead. Further, the ushing of messages makes the check-pointing synchronous with a potential scaling problem. The Starsh [START_REF] Agbaria | Starsh: Fault-Tolerant Dynamic MPI programs on Clusters of Workstations[END_REF] project also provides check-pointing facilities through the Ensemble execution environment Starsh is built upon. Starsh provides an event model where processes register to be notied of changes in cluster conguration and nodes failures. When a failure is detected, the system can automatically recover the application from a previous checkpoint. Flushing message queues before check-points is avoided thanks to the atomic communication primitives provided by the Ensemble system. However, consistency of communicators is not addressed and any node failure implies to restart the entire MPI application to recover from a single failed process. The MPI-FT project [START_REF] Louca | MPI-FT: Portable Fault Tolerenace Scheme for MPI[END_REF] supports the development of master-slave like applications. During execution a monitoring process called observer, stores a copy of each sent message. The observer is notied when a node has been idle too long and then spawns a new process whose state is rebuild from previous messages. The drawback is the memory needed for the observer process in long running applications, as well as the bottleneck it induces. Also, the assumption that the state can be rebuilt using just the knowledge of any passed messages does not hold in the general case (e.g. non deterministic convergence of a solver). The MPICH-V1 [START_REF] Bosilca | MPICH-V: Toward a Scalable Fault Tolerant MPI for Volatile Nodes[END_REF] After a crash, a re-executed process retrieves all lost receptions in the correct order by requesting them to its associated channel memory. The logging has however a major impact on the performance (bandwidth divided by 2) and requires a large number of channel memories. MPICH-V2 [START_REF] Bouteiller | MPIch-V2: a Fault Tolerant MPI for Volatile Nodes based on the Pessimistic Sender Based Message Logging[END_REF] improves V1 by splitting into two parts the logging: on one hand, the message data is stored on the computing node, following a senderbased approach. On the other hand, the corresponding event (the date and the identier of the message reception) is stored on an event logger which is located on a reliable machine. 
Apart from these project based on check-pointing, some other approaches represent alternatives which does not require any specic reliable resource to store system states. FT-MPI [START_REF] Fagg | Harness and fault tolerant MPI[END_REF], actively developed at University of Tennessee, proposes an original approach since it is up to the application programmer to call the fault-tolerance services to manage application's behavior in case of failure. When a fault occurs, all MPI processes of the communicator are informed about the fault through the returning value of MPI calls, and specic actions can be taken by the application to restore the communicator in a coherent state and then to treat the error. The main advantage of FT-MPI is its performance since it does not checkpoint nor log and let the application take the minimum number of actions needed for recovery, but its main drawback is the lack of transparency for the programmer. MPI/FT [START_REF] Batchu | MPI/FT TM : Architecture and Taxonomies for Fault-Tolerant, Message-Passing Middleware for Performance-Portable Parallel[END_REF] is the project closest to our proposal since it provides fault-tolerance by introducing replication of processes. The library can detect erroneous messages by introducing a vote algorithm among the replicas and can survive process failures. The drawback of this project is the increasing resource requirement by replicating MPI processes but this drawback can be overcome by using large platforms. In comparison, we designed fault-tolerance in P2P-MPI so that it is totally transparent to the programmer and does not break the peer-topeer paradigm in which all hosts play the same role. In consequence, we cannot rely on particular hosts considered reliable and used in checkpointing. As we do not check-point, we do not replace failed processes. Our choice is to tolerate a certain number of faults, postponing an eventual application failure. p2pmpi.tex; 24/11/2006; 17:03; p.21 MPI and Java Java's ability to run on several platforms without re-compiling has raised its attractivity in today's heterogeneous environments and has motivated several attempts to adapt the MPI specication to Java. The early JavaMPI [START_REF] Mintchev | Towards portable message passing in Java: Binding MPI[END_REF] and mpiJava [START_REF] Baker | mpiJava: A Java interface to MPI[END_REF] projects were implemented as a set of JNI wrappers to native MPI packages. (It requires a native MPI library to be installed aside). The communication are hence almost as ecient as with the underlying MPI library, but the presence of an MPI library and system-dependent generated bindings reduce the portability advantage of Java programs. Later, MPJ [START_REF] Carpenter | MPJ: MPIlike Message Passing for Java[END_REF] has been adopted by the Java Grande forum as a common MPI language bindings to Java, merging the above proposals. However, we know of very few MPJ implementations: mpiJava claims to move towards this specication, and recently, an implementation in pure Java from Vrije University [START_REF] Bornemann | MPJ/Ibis: a Flexible and Ecient Message Passing Platform for Java[END_REF] called MPJ/Ibis has been proposed. This implementation benets from the ecient and various network drivers of the Ibis platform upon which MPJ is built. In comparison, P2P-MPI implements native Java TCP communications only since clusters or supercomputers with specic network devices (e.g. Myrinet) was not a goal. We rather focus on commodity hardware. 
The subset of MPJ we implement is growing regularly 8 and has shown to perform well in the experiments. Resource discovery and coordination To our knowledge, P3 (Personal Power Plant) [START_REF] Shudo | P3: P2P-based Middleware Enabling Transfer and Aggregation of Computational Resource[END_REF] is the only project that really compares to P2P-MPI as far as resource discovery and coordination is concerned. The two projects share the goal to dynamically gather computational resources without any centralized directory, in order to execute a message-passing parallel application. In P3, JXTA is also used for discovery: hosts entities automatically register in a peer group of workers and accept work requests according to the resource owner policy. Secondly, a master-worker and message passing paradigm, both built upon a common library called object passing, are proposed. Unlike P2P-MPI, P3 uses JXTA for its communications (JxtaSockets). This allows to communicate without consideration of the underlying network constraints (e.g. rewalls) but incurs performance overhead when the logical route established goes through several peers. In addition, P3 has no integrated fault-tolerance mechanism for message passing programs which we believe is important for long executions. Conclusion and Future Work This paper has introduced a new middleware called P2P-MPI. It has been designed to facilitate parallel programs deployment on grids by addressing the major issues that are currently real obstacles to grids usability. To the best of our knowledge, no other project brings simultaneously: (a) an automatic discovery of resources, (b) a messagepassing programming model, allowing any type of parallel program to be developed and run, and (c) a fault-detection system and replication mechanisms in order to increase robustness of applications execution. The paper has discussed in details the design of each these features. We have analyzed how the failure probability of applications evolves with the replication degree and we have discussed further these probabilities in real situations. It was also important to understand how P2P-MPI behaves in terms of performance. Two NAS parallel benchmarks with opposite behavior have been run on a small conguration and compared with performances obtained with LAM/MPI and MPICH. We have also tested a ray-tracing application with up to 128 processors, distributed on one site and then across two sites of the Grid5000 testbed to understand the impact of wide area communications. On the whole, the performances obtained are close to what we obtain with the other message passing libraries we referred to. Hence, we believe P2P-MPI is a good tool for more experiments on various categories of applications. Next developments should concern strategies for mapping processes onto resources. Though the peer-to-peer model abstracts the network topology, we could use some network metrics (e.g. ping time) to choose among available resources. Also, the mapping of replicas could be based on information about resources capability (e.g. CPU type, number of jobs accepted) and reliability (e.g. average uptime). assumption allows for a simple execution model where the number of c 2006 Kluwer Academic Publishers. Printed in the Netherlands. p2pmpi.tex; 24/11/2006; 17:03; p.2 processes is static from the beginning to the end of the application run 1 . Figure 1 Figure 1 . 11 Figure 1 depicts how P2P-MPI modules are organized in a running environment. P2P-MPI proper parts are grayed on the gure. 
On top of the diagram, a message-passing parallel program uses the MPI API. (6) Execution Notification: once the transfer is complete, the FT service on the remote host notifies its MPD to execute the downloaded program. (7) Remote executable launch: the MPD executes the downloaded program to join the execution platform. (8) Execution preamble: all processes in the execution platform exchange their IP addresses to construct their local communication table, register in their local FD service and start.

Figure 3. A message sent from logical process P0 to P1.

3.6 Fault survival experiments. A set of experiments has been conducted to validate some of the above analytical results. The experiments required a hundred processors during about 70 hours on the Grid5000 testbed detailed in section 4.2.

Figure 4. Probability that an application with 50 processes and 100 processes fails depending on the number of nodes and the replication degree r.
Figure 5. Number of faults leading to a crash for 100 experiments (p = 50, r = 2) (left); comparison of experimental numbers of faults with the theoretical count (right).
Figure 6. Number of faults leading to a crash for 100 experiments (p = 50, r = 3) (left); comparison of experimental numbers of faults with the theoretical count (right).
Figure 7. Comparison of MPI implementations performance for IS (left) and EP (right).
Figure 8. Performance (seconds) for IS depending on the replication degree.
Figure 9. Ray-tracer speedups when run on a single site and on two distant sites.

...cation provided the image to compute is big enough, even when two distant sites are involved. We should extend the number of sites in future experiments to better understand what platforms are suitable for such applications.

5. Related Work. P2P-MPI's features are linked to several issues that have been addressed previously: fault-tolerance using check-pointing or replication, parallel programming with Java, and resource discovery and coordination.

In this model, failure detectors are distributed and reside at each host on the network. Each detector maintains a table with one entry per detector known to it. This entry includes a counter called the heartbeat counter. During execution, each detector randomly picks a distant detector and sends it its table after incrementing its own heartbeat counter. The receiving failure detector merges its local table with the received table and adopts the maximum heartbeat counter for each entry. If the heartbeat counter for some entry has not increased after a certain time-out, the corresponding host is suspected to be down. When the local instance of the MPI program is notified of a node failure by its FD service, it marks the node as faulty and no more messages will be sent to it on subsequent send instructions. If the faulty node hosts a master process, then a new master is elected in the logical process. Once elected, it sends all messages left in its backup table.

Footnotes: (1) Except dynamic spawning of processes defined in MPI-2. (2) The MPI specification defines bindings for C, C++ and Fortran only. (3) Actually a JxtaSocket built upon the pipe. (4) We will detail in the next section what is considered a minimum number of hosts. (5) For instance, the default communicator created by MPI.Init() is COMM_WORLD and has cid = 0.
Footnotes (continued): (6) We have translated IS and EP in Java for P2P-MPI from C and Fortran respectively. (7) http://www.grid5000.fr (8) See http://grid.u-strasbg.fr/p2pmpi/documentation/javadoc/
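The gossip-style detection described above (each detector keeps one heartbeat counter per known peer, periodically gossips its table to a randomly chosen peer, merges incoming tables by taking the maximum counter for each entry, and suspects a host whose counter has not increased within a time-out) can be illustrated as follows. This is a Python sketch, not the Java FD service of P2P-MPI, and the class and method names are ours.

```python
import time

class GossipFailureDetector:
    def __init__(self, my_id, peers, timeout):
        self.my_id = my_id
        self.timeout = timeout
        # (heartbeat counter, time of last local increase) for every known detector
        self.table = {p: (0, time.time()) for p in list(peers) + [my_id]}

    def make_gossip(self):
        # increment our own counter, then ship the whole table to a random peer
        hb, _ = self.table[self.my_id]
        self.table[self.my_id] = (hb + 1, time.time())
        return dict(self.table)

    def merge(self, remote_table):
        # adopt the maximum heartbeat counter seen for each entry
        now = time.time()
        for peer, (hb, _) in remote_table.items():
            local_hb, _ = self.table.get(peer, (-1, now))
            if hb > local_hb:
                self.table[peer] = (hb, now)

    def suspected(self):
        # hosts whose counter has not increased for longer than the time-out
        now = time.time()
        return [p for p, (_, t) in self.table.items()
                if p != self.my_id and now - t > self.timeout]
```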
04096108
en
[ "math.math-na" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04096108/file/Density_estimation_McKean.pdf
Marc Hoffmann email: [email protected] Yating Liu email: [email protected] A statistical approach for simulating the density solution of a McKean-Vlasov equation Keywords: Mathematics Subject Classification (2010): 60J60, 65C30, 65C35 Interacting particle systems, McKean-Vlasov models, Euler scheme, Oracle inequalities, Lepski's method come A statistical approach for simulating the density solution of a McKean-Vlasov equation Introduction 1.Motivation Let P 1 denote the set of probability distributions on R d , d ≥ 1, with at least one moment. Given a time horizon T > 0, an initial condition µ 0 ∈ P 1 and two functions b : [0, T ] × R d × P 1 → R d and σ : [0, T ] × R d → R d ⊗ R d , a probability solution µ = (µ t (dx)) t∈[0,T ] of the nonlinear Fokker-Planck equation ∂ t µ + div b(•, •, µ)µ = 1 2 d k,k =1 ∂ 2 kk σσ µ µ t=0 = µ 0 (1) exists, under suitable regularity assumptions on the data (µ 0 , b, σ), see in particular Assumptions 2.1, 2.2 and (2.3 or 2.14) in Section 2.1 and 2.4 below. Moreover, for each t > 0, µ t (dx) has a smooth density, also denoted by µ t (x), which is a classical solution to (1). The objective of the paper is to construct a probabilistic numerical method to compute µ T (x) for every T > 0 and x ∈ R d , with accurate statistical guarantees, that improves on previous works on the topic. Model (1) includes in particular nonlinear transport terms of the form b(t, x, µ) = F f (x) + R d g(x -y)µ(dy) , for smooth functions F, f, g : R d → R d that account for evolution systems with common force f and mean-field interaction g under some nonlinear transformation F together with diffusion σ. Such models and their generalizations have inspired a myriad of application domains over the last decades, ranging from physics [START_REF] Martzel | Mean-field treatment of the many-body Fokker-Planck equation[END_REF][START_REF] Bossy | Instantaneous turbulent kinetic energy modelling based on Lagrangian stochastic approach in CFD and application to wind energy[END_REF] to neurosciences [START_REF] Baladron | Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons[END_REF][START_REF] Bossy | Clarification and complement to "Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons[END_REF], structured models in population dynamics (like in e.g. swarming and flocking models [BFFT12, BFT15, MEK99, BCM07]), together with finance and mean-field games [LL18, CL18, CD18], social sciences and opinion dynamics [START_REF] Chazelle | Well-posedness of the limiting equation of a noisy consensus model in opinion dynamics[END_REF], to cite but a few. The classical approach of Bossy and Talay [START_REF] Bossy | A stochastic particle method for the McKean-Vlasov and the Burgers equation[END_REF], subsequently improved by Antonelli and Kohatsu-Higa [START_REF] Antonelli | Rate of convergence of a particle method to the solution of the McKean-Vlasov equation[END_REF] paved the way: the probability µ t has a natural interpretation as the distribution L(X t ) of an R d -valued random variable X t , where the stochastic process (X t ) t∈[0,T ] solves a McKean-Vlasov equation dX t = b(t, X t , L(X t ))dt + σ(t, X t )dB t L(X 0 ) = µ 0 , (2) for some standard d-dimensional Brownian motion B = (B t ) t∈[0,T ] . The mean-field interpretation of (2) is the limit as N → ∞ of an interacting particle system dX n t = b(t, X n t , N -1 N m=1 δ X m t )dt + σ(t, X n t )dB n t , 1 ≤ n ≤ N, L(X 1 0 , . . . 
, X N 0 ) = µ ⊗N 0 , where the (B n t ) t∈[0,T ] are independent Brownian motions. Indeed, under fairly general assumptions, see e.g. [START_REF] Henry P Mckean | Propagation of chaos for a class of non-linear parabolic equations[END_REF][START_REF] Sznitman | Topics in propagation of chaos[END_REF][START_REF] Méléard | Asymptotic behaviour of some interacting particle systems; McKean-Vlasov and Boltzmann models[END_REF] we have N -1 N n=1 δ X n t → µ t weakly as N → ∞. One then simulates an approximation of (X 1 t , . . . , X N t ) t∈[0,T ] (3) by a stochastic system of particles ( X1,h t , . . . , XN,h t ) t∈[0,T ] (4) using an Euler scheme with mesh h, see (7) below for a rigorous definition. The synthesised data (4) are then considered as a proxy of the system (3) and a simulation of µ T (x) is obtained via the following nonparametric kernel estimator µ N,h,η T (x) := N -1 N n=1 η -d K η -1 (x -Xn,h T ) . (5) Here K : R d → R is kernel function that satisfies R d K(x)dx = 1 together with additional moment and localisation conditions, see Definition 2.4 below. When K is a Gaussian kernel, [START_REF] Bossy | A stochastic particle method for the McKean-Vlasov and the Burgers equation[END_REF] and [START_REF] Antonelli | Rate of convergence of a particle method to the solution of the McKean-Vlasov equation[END_REF] obtain some convergence rates that depend on N, η, h and the data (µ 0 , b, σ). See also the comprehensive review of Bossy [START_REF] Bossy | Some stochastic particle methods for nonlinear parabolic PDEs[END_REF]. Our objective is somehow to simplify and improve on these results, up to some limitations of course, by relying on adaptive statistical methods. In order to control the strong error E [| µ N,h,η T (x)-µ T (x)| 2 ], a crucial step is to select the optimal bandwidth η that exactly balances the bias and the variance of the statistical error and that depends on the smoothness of the map x → µ T (x). To do so, we rely on recent data-driven selection procedures for η based on the Goldenshluger-Lepski's method in statistics [START_REF] Goldenshluger | Universal pointwise selection rule in multivariate function estimation[END_REF][START_REF] Goldenshluger | Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality[END_REF][START_REF] Goldenshluger | On adaptive minimax density estimation on R d[END_REF]. They do not need any prior knowledge of x → µ T (x), and this is a major advantage since the exact smoothness of the solution map (µ 0 , b, σ) → (x → µ T (x)) may be difficult to obtain, even when µ 0 , b, σ are given analytically. At an abstract level, if we assume that we can observe (X 1 t , . . . , X N t ) t∈[0,T ] exactly, a statistical theory has been recently proposed to recover µ T (x) in [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF], a key reference for the paper. However, the fact that we need to simulate first a proxy of the idealised data via a system of interacting Euler schemes needs a significant adjustment in order to obtain precise error guarantees. For other recent and alternate approaches, we refer to [START_REF] Belomestny | Projected particle methods for solving McKean-Vlasov stochastic differential equations[END_REF][START_REF] Belomestny | Optimal stopping of McKean-Vlasov diffusions via regression on particle systems[END_REF] and the references therein. 
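Before stating the results, the whole pipeline described above, namely the interacting Euler scheme (7) followed by the kernel estimator (5), can be summarised in a few lines of code. The sketch below is written in Python/NumPy for dimension d = 1, with the linear-interaction drift b(t, x, µ) = -∫(x - y)µ(dy), unit diffusion and the Gaussian initial condition of the example treated in Section 3.1, a standard Gaussian kernel and a fixed bandwidth of order N^{-1/5}; the data-driven choice of the bandwidth is the object of Theorem 2.12 below.

```python
import numpy as np

def particle_euler_kde(N=10_000, M=100, T=1.0, eta=None, x_grid=None, rng=None):
    """Particle system (7) followed by the kernel estimator (5), d = 1,
    for b(t, x, mu) = -(x - mean(mu)), sigma = 1 and mu_0 = N(3, 1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    X = rng.normal(3.0, np.sqrt(0.5), size=N)            # X_0^n ~ mu_0
    for _ in range(M):
        drift = -(X - X.mean())                           # b(t_m, X^n, empirical measure)
        X = X + h * drift + np.sqrt(h) * rng.standard_normal(N)
    eta = N ** (-1.0 / 5.0) if eta is None else eta       # fixed bandwidth ~ N^{-1/5}
    x_grid = np.linspace(0.0, 6.0, 600) if x_grid is None else x_grid
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel
    # hat mu_T(x) = N^{-1} sum_n eta^{-1} K((x - X_T^n) / eta)
    return x_grid, K((x_grid[:, None] - X[None, :]) / eta).mean(axis=1) / eta

xs, mu_hat = particle_euler_kde()
```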
Results and organisation of the paper In Section 2.1 we give the precise conditions on the data (µ 0 , σ, b). We need a smooth and sub-Gaussian integrable initial condition µ 0 in order to guarantee the existence of a classical solution µ t (x) to the Fokker-Planck equation (1) and sub-Gaussianity of the process (X t ) t∈[0,T ] that solves the McKean-Vlasov equation (2). The coefficients b(t, x, µ t ), σ(t, x) are smooth functions of (t, x) and σ is uniformly elliptic, following the conditions of Gobet and Labart [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF]. In Section 2.2 we construct the estimator µ N,h,η T (x) of (5) via a regular kernel K of order , see Definition 2.4. It depends on the size N of the particle system, the step h > 0 of the Euler scheme and the statistical smoothing parameter η and the order of the kernel . Abbreviating K η = η -d K(η -1 •) for η > 0, we have the following decomposition for the simulation error: µ N,h,η T (x) -µ T (x) = R d K η (x -•)d(μ N,h T -μh T ) + K η (μ h T -µ T )(x) + K η µ T (x) -µ T (x), where denotes convolution, μN,h T = N -1 N n=1 δ Xn,h T and μh T is the probability distribution of the continuous Euler scheme at time t = T , constructed in (9) below. In turn, the study of the error splits into three terms, with only the first one being stochastic. We prove in Proposition 4.1 a Bernstein concentration inequality for the fluctuation of μN,h T (dy) -μh T (dy). In contrast to [START_REF] Malrieu | Concentration inequalities for Euler schemes[END_REF], it has the advantage to encompass test functions that may behave badly in Lipschitz norm, while they are stable in L 1 like a kernel K η for small values of η. It reads, for an arbitrary bounded test function ϕ: P R d ϕ d(μ N,h T -μh T ) ≥ ε ≤ κ 1 exp - κ 2 N ε 2 |ϕ| 2 L 2 (µ T ) + |ϕ| ∞ ε , for every ε ≥ 0, with explicitly computable constants κ 1 and κ 2 that only depend on T and the data (µ 0 , b, σ). The proof is based on a change of probability argument via Girsanov's theorem. It builds on ideas developed in [START_REF] Lacker | On a strong form of propagation of chaos for McKean-Vlasov equations[END_REF] and [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF], but it requires nontrivial improvements in the case of approximating schemes, in particular fine estimates in Wasserstein distance between the distributions μh T and µ T , thanks to Liu [START_REF] Liu | Optimal Quantization: Limit Theorem, Clustering and Simulation of the McKean-Vlasov Equation[END_REF]. When the measure argument in the drift is nonlinear, we impose some smoothness in terms of linear differentiability, see Assumption 2.14. One limitation of this approach is that we can only encompass a measure term in the drift but not in the diffusion coefficient. The second approximation term K η (μ h T (x) -µ T (x) is managed via the uniform estimates of Gobet and Labart [GL08a] of μh T (x) -µ T (x) so that we can ignore the effect of K η as η → 0 by the L 1 -stability |K η | L 1 = |K| L 1 . The final bias term K η µ T (x) -µ T (x) is controlled by standard kernel approximation. In Theorem 2.9 we give our main result, namely a deviation probability for the error µ N,h,η T (x)µ T (x), provided h and η are small enough. In Theorem 2.10, we give a quantitative bound for the expected error of order p ≥ 1 that is further optimised in Corollary 2.11 when x → µ T (x) is k-times continuously differentiable. 
We obtain a (normalised) error of order N -k/(2k+d) + h, which combines the optimal estimation error in nonparametric statistics with the optimal strong error of the Euler scheme. Finally, we construct in Theorem 2.12 a variant of the Goldenshluger-Lepski's method that automatically selects an optimal data-driven smoothing parameter η without requiring any prior knowledge on the smoothness of the solution. As explained in Section 1.1, our approach suggests a natural way to take advantage of adaptive statistical methods in numerical simulation, when even qualitative information about the object to simulate are difficult to obtain analytically. We numerically implement our method on several examples in Section 3. We show in particular that it seems beneficial to use high-order kernels (i.e. with many vanishing moments) rather than simply Gaussian ones or more generally even kernels that only have one vanishing moment. This is not a surprise from a statistical point of view, since x → µ T (x) is usually very smooth. Also, the data-driven choice of the bandwidth seems useful even in situations where our assumptions seem to fail (like for the Burgers equation, see in particular Section 3.3). The proofs are delayed until Section 4. Main results Notation and assumptions For x ∈ R d , |x| 2 = x • x denotes the Euclidean norm. We endow P 1 with the 1-Wasserstein metric W 1 (µ, ν) = inf ρ∈Γ(µ,ν) R d ×R d |x -y|ρ(dx, dy) = sup |ϕ| Lip≤1 R d ϕ d(µ -ν), where Γ(µ, ν) denotes the set of probability measures on R d × R d with marginals µ and ν. All R d -valued functions f defined on [0, T ], R d , P 1 (or product of those) are implicitly measurable for the Borel σ-field induced by the (product) topology and written componentwise as f = (f 1 , . . . , f d ), where the f i are real-valued. Following [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF], we say that a function f : [0, T ] × R d → R d is C k,l b for integers k, l ≥ 0 if the real-valued functions f i in the repre- sentation f = (f 1 , . . . , f d ) are all continuously differentiable with uniformly bounded derivatives w.r.t. t and x up to order k and l respectively. We write C k b (or C l b ) if f is C k,l b and only depends on the time (or space) variable. For ϕ : R d → R, a σ-finite measure ν on R d and 1 ≤ p < ∞, we set |ϕ| L p (ν) = R d |ϕ(x)| p ν(dx) 1/p , |ϕ| L p = R d |ϕ(x)| p dx 1/p , |ϕ| ∞ = sup x∈R d |ϕ(x)|. Assumption 2.1. The law L(X 0 ) = µ 0 (dx) in (2) or equivalently in (1) is sub-Gaussian, that is, R d exp(γ|x| 2 ) µ 0 (dx) < ∞ for some γ > 0. The strong integrability assumption of µ 0 (together with the subsequent smoothness properties of b and σ) will guarantee that µ t is sub-Gaussian for every t > 0. This is the gateway to our change of probability argument in the Bernstein inequality of the Euler scheme in Proposition 4.1 below. Assumption 2.2. The diffusion matrix σ : [0, T ] × R d → R d ⊗ R d is uniformly elliptic, C 1,3 b and ∂ t σ is C 0,1 b . By uniform ellipticity, we mean inf t,x ξ c(t, x)ξ ≥ δ|ξ| 2 for every ξ ∈ R d and some δ > 0, where c = σσ . As for the drift b : [0, T ] × R d × P 1 → R d , | b(t, 0, 0)| < ∞. Assumption 2.3 implies that the function (t, x, µ) → b(t, x, µ) is Lipschitz continuous, i.e. for every (t, x, µ), (s, y, ν) ∈ [0, T ] × R d × P 1 , there exists some constant |b| Lip > 0 such that |b(t, x, µ) -b(s, y, ν)| ≤ |b| Lip |t -s| + |x -y| + W 1 (µ, ν) (6) where W 1 denotes the L 1 -Wasserstein distance on P 1 (see e.g. 
[CD18, (5.4) and Corollary 5.4] for the definition of the Wasserstein distance). As detailed in the proof of Proposition 4.4 below, Assumption 2.3 also implies that (t, x) → b(t, x, µ t ) is C 1,3 b . This together with Assumption 2.2 on the diffusion coefficient enables us to obtain a sharp approximation of the density of the Euler scheme of (2), a key intermediate result. This is possible thanks to the result of Gobet and Labart [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF], which, to our knowledge, is the best result in that direction. See also [START_REF] Bally | The law of the Euler scheme for stochastic differential equations: Convergence rate of the density[END_REF][START_REF] Konakov | Edgeworth type expansions for Euler schemes for differential equations[END_REF][START_REF] Guyon | Euler scheme and tempered distributions[END_REF] for similar results under stronger assumptions. Construction of an estimator of µ T (x) We pick an integer M ∈ N \ {0} for time discretisation and h := T M as time step. We set t m := m • h, m ∈ {0, ..., M }. For 1 ≤ n ≤ N , let B n = (B n t ) t∈[0,T ] be N independent d-dimensional        Xn,h tm+1 = Xn,h tm + h • b(t m , Xn,h tm , μN,h tm ) + √ h • σ(t m , Xn,h tm )Z n m+1 , 1 ≤ n ≤ N, Z n m+1 := 1 √ h B n tm+1 -B n tm , μN,h tm := 1 N N n=1 δ Xn,h tm , L X1,h 0 , . . . , Xh,N 0 = µ ⊗N 0 , (7) with X1,h 0 , . . . , Xh,N 0 independent of (B n ) 1≤n≤N for safety. They are appended with their continuous version: for every 0 ≤ m ≤ M -1, t ∈ [t m , t m+1 ) and 1 ≤ n ≤ N : Xn,h t = Xn,h tm + (t -t m )b(t m , Xn,h tm , μN,h tm ) + σ(t m , Xn,h tm )(B n t -B n tm ). Definition 2.4. Let ≥ 0 be an integer. A -regular kernel is a bounded function K : R d → R such that for k = 0, . . . , , R d |x| k |K(x)|dx < +∞ and R d x k [1] K(x)dx = . . . = R d x k [d] K(x)dx = 1 {k=0} with x = (x [1] , . . . , x [d] ) ∈ R d . (8) Remark 2.5. A standard construction of -order kernels is discussed in [START_REF] Scott | Multivariate density estimation: theory, practice, and visualization[END_REF], see also the classical textbook by Tsybakov [START_REF] Alexandre | Introduction to Nonparametric Estimation[END_REF]. In the numerical examples of the paper, we implement in dimension d = 1 the Gaussian high-order kernels described in Wand and Schucany [START_REF] Matthew | Gaussian-based kernels[END_REF]. Given a -regular kernel h in dimension 1, a multivariate extension is readily obtained by tensorisation: for x = (x [1] , . . . , x [d] ) ∈ R d , set K(x) = h ⊗d (x) = d l=1 h(x [l] ), so that (8) is satisfied. Finally, we pick a -regular kernel K, a bandwidth η > 0 and set µ N,h,η T (x) := N -1 N n=1 η -d K η -1 (x -Xn,h T ) for our estimator of µ T (x). It is thus specified by the kernel K, the Euler scheme time step h, the number of particles N and the statistical smoothing parameter η. Convergence results Under Assumptions 2.1, 2.2 and (2.3 (or later 2.14)), (1) admits a unique classical solution (t, x) → µ t (x) see e.g. the classical textbook [START_REF] Vladimir I Bogachev | Fokker-Planck-Kolmogorov Equations[END_REF]. Our first result is a deviation inequality for the error µ N,h,η T (x) -µ T (x). We need some further notation. Definition 2.6. 
The abstract Euler scheme relative to the McKean-Vlasov equation (2) is defined as      Xh tm+1 = Xh tm + h • b(t m , Xh tm , µ tm ) + √ h • σ(t m , Xh tm )Z m+1 , Z m+1 = 1 √ h (B tm+1 -B tm ), L( Xh t0 ) = µ 0 , (9) appended with its continuous version: for every 0 ≤ m ≤ M -1, t ∈ [t m , t m+1 ) and 1 ≤ n ≤ N : Xh t = Xh tm + (t -t m )b(t m , Xh tm , µ tm ) + σ(t m , Xh tm )(B t -B tm ). We set μh t = L( Xh t ). The term abstract for this version of the Euler scheme comes from its explicit dependence upon µ tm , that prohibits its use in practical simulation. It however appears as a natural approximating quantity. Definition 2.7. For ε > 0, the ε-accuracy of the abstract Euler scheme is h(ε) = sup h > 0, sup x∈R d |μ h T (x) -µ T (x)| ≤ ε , (10) with the convention sup ∅ = ∞. Proposition 4.4 below, relying on the estimates of [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF], implies that sup x∈R d |μ h T (x)-µ T (x)| is of order h. Therefore h(ε) is well defined and positive for every ε > 0. We further abbreviate K η (x) = η -d K(η -1 x) for η > 0. Definition 2.8. The bias (relative to a -regular kernel K) of a function ϕ : R d → R at x and scale η > 0 is B η (ϕ, x) = sup 0<η ≤η R d K η (x -y)ϕ(y)dy -ϕ(x) and for ε > 0, the ε-accuracy of the bias of ϕ is η(ε) = sup η > 0, B η (ϕ, x) ≤ ε . (11) Note that whenever ϕ is continuous in a vicinity of x, we have B η (ϕ, x) → 0 as η → 0, and η(ε) is well-defined and positive for every ε > 0. Theorem 2.9. Work under Assumptions 2.1, 2.2 and 2. 3. Let x ∈ R d , N ≥ 2, ε > 0 and C > 0. For any 0 < h < min h( 1 3 |K| -1 L 1 ε ), CN -1 and 0 < η < η ( 1 3 ε), we have: P µ N,h,η T (x) -µ T (x) ≥ ε ≤ 2κ 1 exp -( 1 3 ) 2 κ 2 N ε 2 |K η (x -•)| 2 L 2 (µ T ) + |K η | ∞ ε , ( 12 ) where κ 1 , κ 2 are constants that only depend on C, x, T , the kernel K, the data (µ 0 , b, σ) and that are defined in Proposition 4.1 below. Several remarks are in order: 1) We have |K η (x -•)| 2 L 2 (µ T ) ≤ sup y,x-y∈Supp(K) µ T (y)|K η | 2 L 2 = C(µ T , x, K)η -d and |K η | ∞ = η -d |K| ∞ , therefore (12) rather reads P µ N,h,η T (x) -µ T (x) ≥ ε ≤ c 1 exp -c 2 N η d ε 2 1 + ε , up to constants c 1 and c 2 that only depend on C, x, T , the kernel K and the data (µ 0 , b, σ). 2) The factor 1 3 in the upper bound of h, η and the right-hand size of (12) can be replaced by an arbitrary constant in (0, 1) by modifying the union bound argument (31) in the proof. 3) The estimate (12) gives an exponential bound of the form exp(-cN η d ε 2 ) for the behaviour of the error µ N,h,η T (x) -µ T (x) for small ε, provided h and η are sufficiently small. This is quite satisfactory in terms of statistical accuracy, for instance if one wants to implement confidence bands: for any risk level α, with probability bigger than 1 -α, the error | µ N,h,η T (x) -µ T (x)| is controlled by a constant times » N -1 η d log 1 α . Theorem 2.9 does not give any quantitative information about the accuracy in terms of h and η. Our next result gives an explicit upper bound for the expected error of order p ≥ 1. Theorem 2.10. Work under Assumptions 2.1, 2.2 and 2.3. For every p ≥ 1 and x ∈ R d , we have E µ N,h,η T (x) -µ T (x) p ≤ κ 3 B η (µ T , x) p + (N η d ) -p/2 + h p . ( 13 ) for some κ 3 > 0 depending on p, x, T , |K| L 2 , |K| ∞ and the data (µ 0 , b, σ). 
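In the linear-interaction example of Section 3.1, where µ_T = N(3, 1/2) is known in closed form, the bound (13) can be illustrated numerically. The sketch below, which reuses the function particle_euler_kde of the sketch given at the end of the Introduction (p = 2, pointwise error at x = 3), is only meant to visualise the decay made explicit in Corollary 2.11 below, here of order N^{-4/5} + h^2 for a first-order Gaussian kernel.

```python
import numpy as np

def pointwise_mse(N, M=100, reps=30, x0=3.0, seed=0):
    """Monte-Carlo estimate of E|hat mu_T(x0) - mu_T(x0)|^2 when mu_T = N(3, 1/2)."""
    rng = np.random.default_rng(seed)
    target = np.exp(-(x0 - 3.0) ** 2) / np.sqrt(np.pi)   # density of N(3, 1/2) at x0
    errs = [
        (particle_euler_kde(N=N, M=M, x_grid=np.array([x0]), rng=rng)[1][0] - target) ** 2
        for _ in range(reps)
    ]
    return float(np.mean(errs))

for N in (1_000, 4_000, 16_000):
    print(N, pointwise_mse(N))
```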
If z → µ T (z) is k-times continuously differentiable in a vicinity of x, then, for a regular kernel of order (k -1) + , we have B η (µ T , x) η k , see for instance the proof of Corollary 2.11. This enables one to optimise the choice of the bandwidth η: Corollary 2.11. Assume that x → µ T (x) is C k b for some k ≥ 1 and that K is -regular, with ≥ 0. In the setting of Theorem 2.10, the optimisation of the bandwidth choice η = N -1/(2 min(k, +1)+d) yields the explicit error rate E µ N,h,η T (x) -µ T (x) p ≤ κ 4 N -min(k, +1)p/(2 min(k, +1)+d) + h p , (14) for some κ 4 > 0 depending on p, , x, T, |K| L 2 , |K| ∞ and the data (µ 0 , b, σ). Some remarks: 1) The dependence of κ 3 and κ 4 on (µ 0 , b, σ) is explicit via the bounds that appear in Assumptions 2.1, 2.2 and 2.3. 2) Corollary 2.11 improves the result of [AKH02, Theorem 3.1] for the strong error of the classical solution µ T (x) in the following way: in [START_REF] Antonelli | Rate of convergence of a particle method to the solution of the McKean-Vlasov equation[END_REF], the authors restrict themselves to the one-dimensional case d = 1 with p = 1 in the loss and take a Gaussian kernel for the density estimation, with bandwidth h = T M as in the time discretisation. They obtain the rate (in dimension d = 1) N -1/2 h -1/2-+ h, for arbitrary > 0 (up to an inflation in the constant when vanishes), and this has to be compared in our case to the rate (14) which gives N -k/(2k+1) + h = N -1/2 N 1/2(2k+1) + h, which yields possible improvement depending on the regime we pick for h, namely whenever N h -(2k+1) which is less demanding as k increases. One defect of the upper bound ( 14) is that the optimal choice of η depends on the analysis of the bias B η (µ T , x) p , and more specifically on the smoothness k, and that quantity is usually unknown or difficult to compute: Indeed, the information that x → µ T (x) is k-times continuously differentiable with the best possible k is hardly tractable from the data (µ 0 , b, σ), essentially due to the nonlinearity of the Fokker-Planck equation (1). Finding an optimal η without prior knowledge on the bias is a long standing issue in nonparametric statistics. One way to circumvent this issue is to select η depending on the stochastic particle system ( X1,h T , . . . , XN,h T ) itself, in order to optimise the trade-off between the bias and the variance term. While this is a customary approach in statistics, to the best of our knowledge, this is not the case in numerical simulations. It becomes particularly suitable in the case of nonlinear McKean-Vlasov type models. We adapt a variant of the classical Goldenshluger-Lepski's method [GL08b, GL11, GL14], and refer the reader to classical textbooks such as [START_REF] Giné | Mathematical foundations of infinite-dimensional statistical models[END_REF]. See also the illumating paper by [START_REF] Lacour | Estimator selection: a new method with applications to kernel density estimation[END_REF] that fives a good insight about ideas around bandwidth comparison and data driven selection. For simplicity, we focus on controlling the error for p = 2. Fix x ∈ R d . Pick a finite set H ⊂ [N -1/d (log N ) 2/d , 1], and define A N η (T, x) = max η ≤η,η ∈H µ N,h,η T (x) -µ N,h,η T (x) 2 -(V N η + V N η ) + , where {x} + = max(x, 0) and V N η = |K| 2 L 2 (log N )N -1 η -d , > 0. ( 15 ) Let η N (T, x) ∈ argmin η∈H (A N η (T, x) + V N η ). ( 16 ) We then have the following so-called oracle inequality: Theorem 2.12. Work under Assumptions 2.1, 2.2 and 2.3. 
For every x ∈ R d , we have E µ N,h, η N (t,x) T (x) -µ T (x) 2 ≤ κ 5 min η∈H B η (µ T , x) 2 + N -1 η -d log N + h 2 , ( 17 ) for some κ 5 > 0 depending on x, T , |K| ∞ , Supp(K) and (µ 0 , b, σ), as soon as Card(H) ≤ N , ≥ 12κ -1 2 max(sup y∈Supp(K) µ T (x -y), 4|K| -2 L 2 |K| 2 ∞ κ -1 2 ) and η d ≥ N -1 log N 3 log 2|K| 2 ∞ 2|K| 2 L 2 . Some remarks: 1) Up to an unavoidable log N -factor, known as the Lepski-Low phenomenon (see [START_REF] Lepskiȋ | A problem of adaptive estimation in Gaussian white noise[END_REF][START_REF] Mark | Nonexistence of an adaptive estimator for the value of an unknown probability density[END_REF]) in statistics, we thus have a data-driven smoothing parameter η N (T, x) that automatically achieves the optimal bias-variance balance, without affecting the effect of the Euler discretisation step h. 2) One limitation of the method is the choice of the pre-factor in the bandwidth selection procedure, which depends on upper bound of µ T locally around x, and more worryingly, on the constant κ -1 2 of Proposition 4.1 below which is quite large. In practice, and this is universal to all smoothing statistical methods, we have to adjust a specific numerical protocol, see Section 3 below. When we translate this result in terms of number of derivatives for x → µ T (x), we obtain the following adaptive estimation result. Corollary 2.13. Assume that x → µ T (x) is C k b for some k ≥ 1 and that K is -regular. Specify the oracle estimator µ N,h, η N (T,x) T (x) with H = {(N/ log N ) -1/(2m+d) , m = 1, . . . , + 1}. In the setting of Theorem 2.12 for every x ∈ R d , we have E µ N,h, η N (T,x) T (x) -µ T (x) 2 ≤ κ 6 log N N 2 min(k, +1) 2 min(k, +1)+d + h 2 , for some κ 6 > 0 depending on T , |K| ∞ , Supp(K) and the data (µ 0 , σ, b). In practice, see in particular Section 3 below, if x → µ T (x) is very smooth (and this is the case in particular if x → b(t, x, µ t ) is smooth and σ constant) we are limited in the rate of convergence by the order of the kernel. This shows in particular that it is probably not advisable in such cases to pick a Gaussian kernel for which we have the restriction = 1. Nonlinearity of the drift in the measure argument In the cas of a drift (t, x, µ) → b(t, x, µ) with a nonlinear dependence in the measure argument µ, the assumptions are a bit more involved. For a smooth real-valued test function ϕ defined on R d , we set A t ϕ(x) = b(t, x, µ t ) ∇ϕ(x) + 1 2 d k,k =1 c kk (t, x)∂ 2 kk ϕ(x), (18) that can also be interpreted as the generator of the associated nonlinear Markov process of (X t ) t∈[0,T ] defined in (2). Following [START_REF] Carmona | Probabilistic theory of mean field games with applications[END_REF], we say that a mapping f : P 1 → R d has a linear functional derivative if there exists δ µ f : R d × P 1 → R d such that f (µ ) -f (µ) = 1 0 dϑ R d δ µ f (y, (1 -ϑ)µ + ϑµ )(µ -µ)(dy), with |δ µ f (y , µ )-δ µ f (y, µ)| ≤ C(|y -y|+W 1 (µ , µ)) and |∂ y (δ µ f (y, µ )-δ µ f (y, µ))| ≤ CW 1 (µ , µ), for some C ≥ 0. We can iterate the process via mappings δ µ f : (R d ) × P 1 → R d for = 1, . . . , k defined recursively by δ µ f = δ µ • δ -1 µ f . Assumption 2.14. (Nonlinear representation of the drift.) For every µ ∈ P 1 , (t, x) → b(t, x, µ) is C 1,3 b . Moreover, for every x, (t, y) → A t δ µ b(t, x, y, µ t ) exists and is continuous and bounded. Finally, µ → b(t, x, µ) admits a k-linear functional derivative (with k ≥ d for d = 1, 2 and k ≥ d/2 for d ≥ 3) that admits the following representation δ k µ b(t, x, (y 1 , . . . 
, y k ), µ) = I,m j∈I (δ k µ b) I,j,m (t, x, y j , µ), (19) where the sum in I ranges over subsets of {1, . . . , k}, the sum in m is finite and the mappings (x, y) → (δ k µ b) I,j,m (t, x, y j , µ) are Lipschitz continuous, uniformly in t and µ. Assumption 2.14 has two objectives: first comply with the property that (t, x) → b(t, x, µ t ) is C 1,3 b in order to apply the result of Gobet and Labart [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF] in Proposition 4.4 below and second, provide a sufficiently smooth structure in (19) in order to implement the change of probability argument of Proposition 4.1. While (19) appears a bit ad-hoc and technical, it is sufficiently general to encompass drift of the form b(t, x, µ) = j F j t, x, R q 2 G j t, x, R d H j (t, x, z)µ(dz), z λ j (dz ) for smooth mappings F j (t, x, •) : R q3 → R d , G j (t, x, •) : R q1 × R q2 → R q3 , H j (t, x, •) : R d → R q1 and positive measures λ j on R q2 in some cases and combinations of these, see Jourdain and Tse [START_REF] Jourdain | Central limit theorem over non-linear functionals of empirical measures with applications to the mean-field fluctuation of interacting particle systems[END_REF]. Explicit examples where the structure of the drift is of the form 2.14 rather than 2.3 are given for instance in [CDFM18, Oel85, Mél96, JM98]. Theorem 2.15. Work under Assumptions 2.1, 2.2 and 2.14. The results of Theorems 2.9, 2.10, 2.12 and Corollary 2.11, 2.13 remain valid, up to a modification of the constants κ i , i = 1, . . . , 6. Numerical implementation We investigate our simulation method on three different examples. The simulation code is available via Google Colab. 1. A linear interaction as in Assumption 2.3 of the form b(t, x, y) = c(x-y). This case is central to several applications (see the references in the introduction for instance) and has moreover the advantage to yield an explicit solution for µ t (x), enabling us to accurately estimate the error of the simulation. See: bit.ly/3LID1rV. 2. A double layer potential with a possible singular shock in the common drift that "stresses" Assumption 2.3. Although the solution is not explicit, the singularity enables us to investigate the automated bandwidth choice on repeated samples and sheds some light on the effect of our statistical adaptive method. See: bit.ly/3yY332B and bit.ly/3ZnvhPj. 3. The Burgers equation in dimension d = 1. Although not formally within the reach Assumption 2.3, we may still implement our method. While we cannot provide with theoretical guarantees, we again have an explicit solution for µ t (x) and we can accurately measure the performance of our simulation method. See: bit.ly/3FEXMAM. For simplicity, all the numerical experiments are conducted in dimension d = 1. The common findings of our numerical investigations can be summarised as follows: when the density solution x → µ t (x) is smooth, high-order kernels (with vanishing moments beyond order 1) outperform classical kernels such as the Epanechnikov kernel or a standard Gaussian kernel as shown in Cases 1 and 3. Specifically, we implement high-order Gaussian based kernels as developed for instance in Wand and Schucany [START_REF] Matthew | Gaussian-based kernels[END_REF]. Table 1 below gives explicit formulae for K and = 1, 3, 5, 7, 9. 
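As a complement, the kernels of Table 1 below are simple to evaluate, and their moment conditions (8) can be checked numerically; the sketch below does both (the function name gaussian_kernel and the numerical quadrature are ours).

```python
import numpy as np

def gaussian_kernel(order):
    """Gaussian-based kernels of order 1, 3, 5, 7, 9 of Table 1 (dimension d = 1)."""
    poly = {
        1: lambda x: np.ones_like(x),
        3: lambda x: (3 - x**2) / 2,
        5: lambda x: (15 - 10 * x**2 + x**4) / 8,
        7: lambda x: (105 - 105 * x**2 + 21 * x**4 - x**6) / 48,
        9: lambda x: (945 - 1260 * x**2 + 378 * x**4 - 36 * x**6 + x**8) / 384,
    }[order]
    phi = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return lambda x: poly(np.asarray(x, float)) * phi(np.asarray(x, float))

# numerical check of the moment conditions (8) for the order-5 kernel
x = np.linspace(-12, 12, 100_001)
K = gaussian_kernel(5)
print(np.trapz(K(x), x))           # ~ 1
print(np.trapz(x**2 * K(x), x))    # ~ 0
print(np.trapz(x**4 * K(x), x))    # ~ 0
```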
Otherwise, for instance when the transport term has a common exactly Lipschitz continuous component (but no more) as in Case 1, the automated bandwidth method tends to adapt the window to prevent oversmoothing, even with high-order kernels. Overall, we find in all three examples that implementing high-order kernels is beneficial for the quality of the simulation. order kernel K(x) 1 φ(x) 3 1 2 (3 -x 2 )φ(x) 5 1 8 (15 -10x 2 + x 4 )φ(x) 7 1 48 (105 -105x 2 + 21x 4 -x 6 )φ(x) 9 1 384 (945 -1260x 2 + 378x 4 -36x 6 + x 8 )φ(x) Table 1: Construction of kernels K of order = 1, 3, 5, 7, 9, with φ(x) = (2π) -1/2 exp(-1 2 x 2 ), dimension d = 1, following [START_REF] Matthew | Gaussian-based kernels[END_REF]. The case of a linear interaction We consider the simplest McKean-Vlasov SDE with linear interaction dX t = - R (X t -x)µ t (dx)dt + dB t , L(X 0 ) = N (3, 1 2 ) where (B t ) t∈[0,T ] is a standard Brownian motion and N (ρ, σ 2 ) denotes the Gaussian distribution with mean ρ and variance σ 2 . We do not fulfill Assumption 2.3 here since b(t, x, y) = x -y is not bounded. We nevertheless choose this model for simulation since it has an explicit solution µ t as a stationary Ornstein-Uhlenbeck process, namely, abusing notation slightly, µ t = L(X t ) = N (3, 1 2 ). We implement the Euler scheme ( X1,h t , . . . , XN,h t ) t∈[0,T ] defined in (7) with b(t, x, µ) = -R (xy)µ(dy) and σ(t, x) = 1. We pick T = 1, h = 10 -2 T = 10 -2 , for several values of the system size N = 2 7 = 128, 2 8 = 256, . . . , 2 15 = 32768. We then compute µ N,h,η T (x) := N -1 N n=1 η -d K η -1 (x -Xn,h T ) as our proxy of µ T (x) for = 1, 3, 5, 7, 9 and the kernels given in Table 1. We pick η = N -1/(2( +1)+1) according to Corollary 2.11 since µ T ∈ C k for every k ≥ 1. We repeat the experiment 30 times to obtain independent Monte-Carlo proxies ( µ N,h,η T ) j (x) for j = 1, . . . , 30 and we finally compute the Monte-Carlo strong error E N = 1 30 30 j=1 max x∈D ( µ N,h,η T ) j (x) -µ T (x) 2 , where D is a uniform grid of 1000 points in [0, 6]. (The domain is dictated by our choice of initial condition µ 0 ∼ N (3, 1 2 ).) Figure 1: Monte-Carlo (for 30 repeated samples) strong error for different kernel orders: log 2 E N as a function of log 2 N for = 1 (purple), = 3 (blue), = 5 (red), = 7 (orange), = 9 (green). We see that a polynomial error in N is compatible with the data. Figure 2: Least-square estimates of the slope α of log 2 E N = α log 2 N + noise in a linear model representation. We plot α as a function of the order of the kernel. We see that a higher order for the choice of the kernel systematically improves on the error rate, as predicted by the statistical bias-variance analysis. In Figure 1, we display on a log-2 scale E N for = 1, 3, 5, 7, 9. In Figure 2 we display the least-square estimates of the slope of each curve of Figure 1 according to a linear model. We thus have a proxy of the rate α of the error E N ≈ N -α for different values of . We clearly see that higher-order kernels are better suited for estimating µ t . This is of course no surprise from a statistical point of view, but this may have been overlooked in numerical probability simulations. A double layer potential We consider an interaction consisting of a smooth long range attractive and small range repulsive force, obtained as the derivative of double-layer Morse potential. 
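Before turning to this model, we record for convenience a minimal Python/NumPy sketch of the simulation pipeline shared by all three examples: simulate the interacting particle Euler scheme (7) up to time T, then smooth the empirical measure at time T with a kernel of order ℓ. The sketch below is ours and only meant to fix ideas (all function names are illustrative); it is written for the linear-interaction case above, with the constants given there, and only the `drift` function has to be changed to handle the Morse force of this section or the Burgers drift below.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    # Linear interaction: b(t, x, mu) = -int (x - y) mu(dy) = -(x - mean(mu)).
    # Replace this single function to simulate the other examples.
    return -(x - x.mean())

def K5(u):
    # Order-5 Gaussian-based kernel of Table 1.
    return (15.0 - 10.0 * u**2 + u**4) / 8.0 * np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def particle_euler(N, T=1.0, h=1e-2, sigma=1.0):
    """Interacting particle Euler scheme (7); returns the N particles at time T."""
    x = rng.normal(3.0, np.sqrt(0.5), size=N)            # initial law N(3, 1/2)
    for _ in range(int(round(T / h))):
        x = x + h * drift(x) + np.sqrt(h) * sigma * rng.normal(size=N)
    return x

def density_proxy(particles, grid, order=5, kernel=K5):
    """Kernel proxy of mu_T(x) with eta = N^{-1/(2(l+1)+d)}, here d = 1."""
    N = particles.size
    eta = N ** (-1.0 / (2 * (order + 1) + 1))
    u = (grid[:, None] - particles[None, :]) / eta
    return kernel(u).mean(axis=1) / eta

if __name__ == "__main__":
    X_T = particle_euler(N=2**12)
    grid = np.linspace(0.0, 6.0, 1000)
    proxy = density_proxy(X_T, grid)
    truth = np.exp(-(grid - 3.0) ** 2) / np.sqrt(np.pi)  # stationary N(3, 1/2) law
    print("sup error on the grid:", np.abs(proxy - truth).max())
```

In the linear case the comparison with the exact stationary N(3, 1/2) density gives a direct measure of the simulation error, as exploited in the Monte-Carlo experiments above.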
Such models are commonly used (in their kinetic version) in swarming modelling, see for instance [START_REF] Bolley | Stochastic mean-field limit: nonlipschitz forces and swarming[END_REF]. The corresponding McKean-Vlasov equation is dX t = R U (X t -x)µ t (dx)dt + dB t , L(X 0 ) = N (0, 1), where we pick U (x) = -exp(-x 2 ) + 2 exp(-2x 2 ). The potential U and its derivative U are displayed in Figure 3. η N (T, x) -d K η N (T, x) -1 (x -Xn,h T ) as our proxy of µ T (x) for = 1, 3, 5, 7, 9 according to K as in Table 1. The data-driven bandwidth η N (T, x) is computed via the minimisation ( 16), for which one still needs to set the penalty parameter arising in the Goldenschluger-Lepski method, see (15) in particular. The grid is set as H = {(N/ log N ) -1/(2m+1) , m = 1, . . . , + 1}, see in particular Corollary 2.13 in order to mimick the oracle. In this setting, we do not have access to the exact solution µ T (x); we nevertheless explore several numerical aspects of our method via the following experiments: • Investigate the effect of the order of the kernel (for an ad-hoc choice of the penalty parameter ). We know beforehand that the mapping x → µ T (x) is smooth, and we obtain numerical evidence that a higher-order kernel gives better results by comparing the obtained µ N,h, η N (T,x) T (x) for different values of as N increases in the following sense: for N = 2 5 particles, the estimates for high order kernels are closer to the estimates obtained for N = 2 10 than lower order kernels. • Investigate the distribution of the data-driven bandwidth η N (T, x), for repeated samples as x varies and for different values of the penalty over the grid H = {(N/ log N ) -1/(2m+1) , m = 1, . . . , +1}. The estimator tends to pick the larger bandwidth with overwhelming probability, which is consistent with our prior knowledge that x → µ T (x) is smooth. • In order to exclude an artifact from the preceding experiment, we conduct a cross-experiment by perturbing the drift with an additional Lipschitz common force (but not smoother) that saturates our Assumption 2.3. This extra transport term lowers down the smoothness of x → µ T (x) and by repeating the preceding experiment, we obtain a different distribution of the data-driven bandwidth η N (T, x), advocating in favour of a coherent oracle procedure. The effect of the order of the kernel We display in Figure 4 the graph of x → µ N,h, η N (T,x) T (x) for T = 1, = 1, 3, 5, 7, 9 for N = 2 5 and N = 2 10 . The tuning parameter in the choice of η N (T, x) is set to [1] = 23. The same experiment is displayed in Figure 5 for N = 2 15 . The value N = 2 15 mimicks the asymptotic performance of the procedure as compared to N = 2 5 or N = 2 10 . We observe that the effect of the order of the kernel is less pronounced. A visual comparison of Figure 4 and 5 suggests that a computation with higher order kernels always perform better, since the shape obtained for = 9 for small values of N is closer to the asymptotic proxy N = 2 15 than the results obtained with smaller values of . The distribution of the data-driven bandwidth We pick several values for (namely = 20 -1 , 10 -1 , 1, 10, 20) and compute η N (T, x) accordingly for 1200 samples. Figure 6 displays the histogram of the η N (T, x) for x ∈ [-4, 4] (discrete grid with mesh 8 • 10 -2 ) for = 3 and = 5. We observe that the distribution is peaked around large bandwidths at the far right of the spectrum of the histogram, with comparable results for = 7 and = 9. 
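To make the selection rule concrete, here is a minimal one-dimensional sketch of our reading of the procedure (15)-(16) (the Goldenshluger-Lepski criterion). The bandwidth grid mimics Corollary 2.13, and the penalty term below is only a stand-in: we assume it takes the form (pre-factor) x |K|^2_{L^2} log N / (N eta), the pre-factor playing the role of the tuning parameter set to 23 in the experiments; the exact constants are those of (15) in the text, and all function names are ours.

```python
import numpy as np

def k5(u):
    # Order-5 Gaussian-based kernel of Table 1 (dimension d = 1).
    return (15.0 - 10.0 * u**2 + u**4) / 8.0 * np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def gl_bandwidth(particles, x, kernel=k5, max_order=6, penalty=23.0):
    """Data-driven bandwidth eta_N(T, x) in the spirit of (15)-(16), for d = 1."""
    N = particles.size
    # Grid H of Corollary 2.13: eta_m = (N / log N)^(-1/(2m+1)), m = 1, ..., max_order.
    grid = sorted((N / np.log(N)) ** (-1.0 / (2 * m + 1)) for m in range(1, max_order + 1))
    u = np.linspace(-10.0, 10.0, 20001)
    k_l2_sq = np.trapz(kernel(u) ** 2, u)                # |K|_{L^2}^2 by quadrature
    est = {eta: kernel((x - particles) / eta).mean() / eta for eta in grid}
    V = {eta: penalty * k_l2_sq * np.log(N) / (N * eta) for eta in grid}  # stand-in for (15)
    def A(eta):
        # Proxy of the bias at bandwidth eta, compared against all smaller bandwidths.
        diffs = [(est[e] - est[eta]) ** 2 - V[e] - V[eta] for e in grid if e <= eta]
        return max(0.0, max(diffs))
    # Criterion (16): minimise the bias proxy plus the variance penalty.
    return min(grid, key=lambda eta: A(eta) + V[eta])
```

As noted above, the output is rather insensitive to the pre-factor once it is large enough, although the theoretical constants of (15) also involve a local upper bound of mu_T around x.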
In Figure 7, we repeat the experiment for ℓ = 5 over the restricted domain [-1/2, 1/2] (discrete grid, mesh 10^{-2}), where we expect the solution µ_T(x) to be more concentrated, and see no significant difference. This result is of course in line with the statistical nonparametric estimation of a smooth signal (see e.g. the classical textbook [START_REF] Bernard W Silverman | Density estimation for statistics and data analysis[END_REF]). [1] This choice is quite arbitrary, the values of the data-driven bandwidths showing stability as soon as the penalty parameter is at least 20.

Cross experiment by reducing the smoothness of x → µ_T(x)

We repeat the previous experiment by adding a perturbation in the drift, via a common force V(x) = 2(1-|x|)1_{|x|≤1}. The drift now becomes b(x, µ) = V(x) + ∫_R U'(x-y)µ(dy). The common transport term V(x) is no smoother than Lipschitz continuous, thus reducing the smoothness of the solution x → µ_T(x). In the same experimental conditions as before, Figure 8 displays the distribution of η_N(T, x) and is to be compared with Figure 7. We observe that the distribution is modified, in accordance with the behaviour of the oracle bandwidth for a signal with a lower order of smoothness. This provides further empirical evidence of the coherence of the method.

The Burgers equation in dimension d = 1

We consider here the following McKean-Vlasov equation dX_t = ∫_R H(X_t - y)µ_t(dy)dt + σdB_t, L(X_0) = δ_0, with σ = √0.2, associated with the Burgers equation in dimension d = 1 for H(x - y) = 1_{y≤x}. Although the discontinuity at y = x rules out our Assumption 2.3, we may nevertheless implement our method, since a closed-form formula is available for µ_t(x). More specifically, for t > 0, the cumulative distribution function M_t of µ_t is explicitly given by
$$M_t(x) = \frac{\int_0^{+\infty} \exp\Big(-\frac{1}{\sigma^2}\Big(\frac{(x-y)^2}{2t} + y\Big)\Big)\,dy}{\int_{-\infty}^{0} \exp\Big(-\frac{1}{\sigma^2}\,\frac{(x-y)^2}{2t}\Big)\,dy + \int_0^{+\infty} \exp\Big(-\frac{1}{\sigma^2}\Big(\frac{(x-y)^2}{2t} + y\Big)\Big)\,dy}.$$
Figure 9 displays the graphs of x → M_T(x) and x → µ_T(x) for T = 1, and Figure 10 shows the reconstruction of x → µ_T(x) via the estimator µ^{N,h,η_N(T,x)}_T(x) for N = 2^{10} and several values of ℓ. The values ℓ = 3 or ℓ = 5 provide the best fit, showing again the benefit of higher-order kernels compared to a standard Gaussian kernel (with ℓ = 1). Similar results were obtained for other values of N. Yet, the distribution of η_N(T, x) displayed in Figure 11 shows that our method tends to undersmooth the true density. The effect is less pronounced for N = 2^{15}. In any case, our numerical results are comparable with [START_REF] Bossy | A stochastic particle method for the McKean-Vlasov and the Burgers equation[END_REF].

Proofs

We first establish a Bernstein inequality for the fluctuations of μ^{N,h}_T(dx) - μ^h_T(dx).

Proposition 4.1. Work under Assumptions 2.1, 2.2 and 2.3. Let N ≥ 2 and Nh ≤ C. For any real-valued bounded function ϕ defined on R^d and any ε ≥ 0, we have:
$$\mathbb{P}\Big(\int_{\mathbb{R}^d} \varphi\, \mathrm{d}(\mu^{N,h}_T - \mu^{h}_T) \ge \varepsilon\Big) \le \kappa_1 \exp\Big(-\frac{\kappa_2\, N\varepsilon^2}{|\varphi|^2_{L^2(\mu_T)} + |\varphi|_\infty\,\varepsilon}\Big), \qquad (20)$$
where κ_1, κ_2 are constants that only depend on C, T and the data (µ_0, b, σ) and that are explicitly given in the proof. The proof follows the strategy of Theorem 18 in [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF]. We repeat the main steps in order to remain self-contained and highlight the important modifications we need in the context of Euler scheme approximations.
Proof of Proposition 4.1. We work on a rich enough filtered probability space (Ω, F, (F t ) t≥0 , P) in order to accomodate all the random quantities needed. First, note that the abstract Euler scheme ( Xh t ) t∈[0,T ] defined by (9) solves d Xh t = b(t, Xh t , µ t )dt + σ(t, Xh t )dB t , L( X0 ) = µ 0 , ( 21 ) where t is defined by ∀ 1 ≤ m ≤ M and t ∈ [t m , t m+1 ), t := t m . Similarly, ( X1,h t , ..., XN,h t ) t∈[0,T ] defined by (7) solves        d Xn,h t = b(t, Xn,h t , μN,h t )dt + σ(t, Xn,h t )dB n t , 1 ≤ n ≤ N μN,h t = 1 N N n=1 δ Xn,h t , L( X1,h 0 , . . . , XN,h 0 ) = µ ⊗N 0 . ( 22 ) Step 1. We construct a system (Y 1,h t , ..., Y N,h t ) t∈[0,T ] of N independent copies of the abstract Euler schemes (21) (thus without interaction) via dY n,h t = b(t, Y n,h t , µ t )dt + σ(t, Y n,h t )dB n t , 1 ≤ n ≤ N (Y 1,h 0 , . . . , Y N,h 0 ) = ( X1,h 0 , . . . , XN,h 0 ), ( 23 ) For t ∈ [0, T ], let L N,h t := N n=1 t 0 (c -1/2 b)(s, Y n,h s , 1 N N n=1 δ Y n,h s ) -(c -1/2 b)(s, Y n,h s , µ s ) dB n s and E t (L N,h • ) := exp L N,h t - 1 2 L N,h • t , where c -1/2 is any square root of c -1 = (σσ ) -1 and L N,h • t denotes the predictable compensator of L N,h t . The following estimate is a key estimate to proceed to a change of probability. Its proof is delayed until the end of this section. Lemma 4.2. Work under Assumptions 2.1, 2.2 and 2.3. Assume that N h ≤ C for some constant C > 0. For every κ > 0, there exists δ(κ) > 0 such that ∀ δ ∈ [0, δ(κ)], sup t∈[0,T -δ(κ)] E P exp κ L N,h • t+δ -L N,h • t ≤ C ( 24 ) where C > 0 only depends on C, T and the data (µ 0 , b, σ). By taking κ = 1 2 and by applying Novikov's criterion (see e.g. [KS91, Proposition 3.5.12, Corollary 3.5.13 and 3.5.14]), Lemma 4.2 implies that E t (L N,h • ) t∈[0,T ] is a (F t , P) martingale as soon as N h ≤ C. We may then define another probability distribution Q on Ω, F T by setting Q := E T (L N,h • ) • P. ( 25 ) By Girsanov's Theorem (see e.g. [KS91, Theorem 3.5.1]), we have Q • (Y 1,h t , ..., Y N,h t ) -1 t∈[0,T ] = P • (X 1,h t , ..., X N,h t ) -1 t∈[0,T ] . ( 26 ) Step 2. We claim that for any subdivision 0 = t 0 < t 1 < ... < t K = T and for any F T -measurable event A N , we have E P Q A N F tj-1 ≤ E P Q A N F tj 1 4 E P exp 2 L N,h • tj -L N,h • tj-1 1 4 , 1 ≤ j ≤ K. ( 27 ) The proof is the same as in Step 3 of the proof of Theorem 18 in [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF] and is inspired from the estimate (4.2) in Theorem 2.6 in [START_REF] Lacker | On a strong form of propagation of chaos for McKean-Vlasov equations[END_REF]. We repeat the argument: we have E P Q A N F tj-1 = E P E Q Q A N F tj F tj-1 = E P E P E tj (L N,h • ) E tj-1 (L N,h • ) Q A N F tj F tj-1 = E P E tj (L N,h • ) E tj-1 (L N,h • ) Q A N F tj , where the second inequality follows from Bayes's rule in [KS91, Lemma 3.5.3]. Next, we have E tj (L N,h • ) E tj-1 (L N,h • ) = E tj 2(L N,h • -L N,h tj-1 ) 1 2 • exp L N,h • tj -L N,h • tj-1 1 2 . The process E tj 2(L N,h • -L N,h tj-1 ) t∈[tj-1,T ] is a (F t , P) martingale if N h ≤ C. Hence E P E tj 2(L N,h • -L N,h tj-1 ) = 1 for every t ∈ [t j-1 , T ]. 
It follows that E P Q A N F tj-1 = E P E tj 2(L N,h • -L N,h tj-1 ) 1 2 • exp L N,h • tj -L N,h • tj-1 1 2 Q A N F tj ≤ E P î exp L N,h • tj -L N,h • tj-1 Q A N F tj 2 ó 1 2 ≤ E P exp 2 L N,h • tj -L N,h • tj-1 1 4 E P î Q A N F tj 4 ó 1 4 ≤ E P exp 2 L N,h • tj -L N,h • tj-1 1 4 E P Q A N F tj 1 4 , where the first two inequalities follows from Cauchy-Schwarz's inequality and the last inequality follows from Jensen's inequality. Thus ( 27) is established. Step 3. Let A N ∈ F T . Since E t (L N,h • ) t∈[0,T ] is a (F t , P) Q(A N ) = E Q Q(A N F 0 ) = E P Q(A N F 0 ) . Now, let m ≥ 1 and take a subdivision 0 = t 0 < t 1 < . . . < t m = T such that ∀ 0 ≤ k ≤ m -1, t k+1 -t k ≤ δ(2) where δ(2) is the constant in Lemma 4.2 for κ = 2. It follows by ( 27) that E P Q(A N F 0 ) ≤ E P Q A N F T 4 -m m j=1 E P exp 2 L N,h • tj -L N,h • tj-1 j 4 ≤ P(A N ) 4 -m m j=1 E P exp 2 L N,h • tj -L N,h • tj-1 j 4 (since A N ∈ F T ) ≤ P(A N ) 4 -m sup t∈[0,T -δ(2)] E P exp 2 L N,h • t+ε -L N,h • t m(m+1) 8 ≤ P(A N ) 4 -m C m(m+1) 8 (28) by applying Lemma 4.2 with κ = 2. Step 4. We first recall the Bernstein's inequality: if Z 1 , . . . , Z N are real-valued centred independent random variables bounded by some constant Q, we have ∀ ε ≥ 0, P N n=1 Z n ≥ ε ≤ exp - ε 2 2 N n=1 E [Z 2 n ] + 2 3 Qε . (29) Now, for ε ≥ 0, we pick A N T := N n=1 ϕ Y n,h T - R d ϕ(y)μ h T (dy) ≥ N ε so that A N T ∈ F T . Since (Y 1,h T , . . . , Y N,h T ) are independent with common distribution μh T , we have P(A N T ) ≤ exp - N ε 2 2|ϕ| 2 L 2 (μ T ) + 2 3 |ϕ| ∞ ε . (30) by ( 29) with Z n = ϕ(Y n,h T ) - R d ϕ(x)μ h T (dx), having E [Z 2 n ] ≤ |ϕ| 2 L 2 (μ h T ) and Q = |ϕ| ∞ . It follows by (26) that P R d ϕ d(μ N,h T -μh T ) ≥ ε = P N n=1 ϕ Xn,h T - R ϕ(y)μ h T (dy) ≥ N ε = Q(A N T ). By Step 3, we infer Q(A N T ) ≤ P(A N T ) 4 -m C m(m+1) 8 ≤ C m(m+1) 8 exp -4 -m N ε 2 2|ϕ| 2 L 2 (μ h T ) + 2 3 |ϕ| ∞ ε , and we conclude by taking κ 1 = C m(m+1) 8 and κ 2 = 2 -1 4 -m . Completion of proof of Theorem 2.9 Recall the notation K η (x) = η -d K(η -1 x) and set ϕ ψ(x) = R d ϕ(x -y)ψ(y)dy for the convolution product between two integrable functions ϕ, ψ : R d → R. We have P µ N,h,η T (x) -µ T (x) ≥ ε ≤ P µ N,h,η T (x) -K η μh T (x) ≥ ε/3 + 1 {|Kη μh T (x)-Kη µ T (x)|≥ε/3} + 1 {|Kη µ T (x)-µ T (x)|≥ε/3} . (31) We have K η μh T (x) -K η µ T (x) ≤ |K| L 1 sup x∈R d μh T (x) -µ T (x) since |K η | L 1 = |K| L 1 and this last term is strictly smaller than ε/3 for h < h |K| -1 L 1 1 3 ε , therefore the second indicator is 0. Likewise K η µ T (x) -µ T (x) ≤ B η (µ T , x) < ε/3 for η < η( 1 3 ε) and the third indicator is 0 as well. Finally P µ N,h,η T (x) -K η μh T (x) ≥ 1 3 ε = P R d K η (x -•)d(μ N,h T -μh T ) ≥ 1 3 ε ≤ 2κ 1 exp - κ 2 N ( 1 3 ε) 2 |K η (x -•)| 2 L 2 (µ T ) + |K η | ∞ ε by Proposition 4.1 and Theorem 2.9 follows. 
Proof of Lemma 4.2 Writing µ N,h s = 1 N N n=1 δ Y n,h s , for κ > 0, we have κ L N,h • t+δ -L N,h • t = κ N n=1 t+δ t b(s, Y n,h s , µ N,h s ) -b(s, Y n,h s , µ s ) c(s, Y n,h s ) -1 b(s, Y n,h s , µ N,h s ) -b(s, Y n,h s , µ s ) ds ≤ κ sup t∈[0,T ] |Tr(c(t, •) -1 )| ∞ N n=1 t+δ t b(s, Y n,h s , µ N,h s ) -b(s, Y n,h s , µ s ) 2 ds ≤ 2κ sup t∈[0,T ] |Tr(c(t, •) -1 )| ∞ N n=1 t+δ t b(s, Y n,h s , µ N,h s ) -b(s, Y n,h s , μh s ) 2 ds + N | b| 2 Lip t+δ t W 1 (μ h s , µ s ) 2 ds , ∀ s, t ∈ [0, T ], s < t, E |X t -X s | 2 1/2 ≤ κ√ t -s for some κ that depends on (µ 0 , b, σ) and T , see [ W 1 (μ h s , µ s ) ≤ κ 7 h 1/2 , where κ 7 depends on the data (µ 0 , b, σ) and T , by rewriting (2) as a Brownian diffusion The random variables (ξ n,n s ) 1≤n ≤N,n =n are centered, identically distributed and conditionally independent given Y n,h s . Moreover, for every integer m ≥ 1, we have the following estimate Lemma 4.3. We have dX t = b µ (t, X t )dt + σ(t, X t sup s∈[0,T ] E |Y n,h s | 2m ≤ κ m 8 m! where κ 8 > 0 depends on T and the data (µ 0 , b, σ). The proof is standard (see for instance [START_REF] Méléard | Asymptotic behaviour of some interacting particle systems; McKean-Vlasov and Boltzmann models[END_REF][START_REF] Sznitman | Topics in propagation of chaos[END_REF] or [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF] for a control of growth in the constant m and we omit it). From Lemma 4.3, we infer, for every m ≥ 1: E ξ n,n s 2m | Y n,h s = E b(s, Y n,h s , Y n ,h s ) - R d b(s, Y n,h s , y)μ h s (dy) 2m Y n,h s ≤ E 2 2m-1 b(s, Y n,h s , Y n ,h s ) -b(s, Y n,h s , 0) 2m + b(s, Y n,h s , 0) - R d b(s, Y n,h s , y)μ h s (dy) 2m Y n,h s ≤ E 2 2m-1 | b| 2m Lip |Y n ,h s | 2m + | b| 2m Lip E |Y n ,h s | 2m Y n,h s = 2 2m | b| 2m Lip E |Y n ,h s | 2m ≤ 4| b| 2 Lip κ 8 m m! E exp (N -1)(S n,N s ) 2 [k] 8κ 9 = E E exp (N -1)(S n,N s ) 2 [k] 8κ 9 Y n,h s ≤ 2, and in turn E exp (N -1)|S n,N s | 2 8dκ 9 ≤ d -1 d k=1 E exp (N -1)(S n,N s ) 2 [k] 8κ 9 ≤ 2. ( 32 ) Likewise E exp |ξ n,n s | 2 8dκ 9 ≤ 2. ( 33 ) Abbreviating τ = 2κ sup t∈[0,T ] |Tr(c(t, •) -1 )| ∞ , it follows that 32) and (33). We obtain Lemma 4.2 with C = 2 exp(κ 10 C) with κ 10 = κ 2 7 384dκ8 . E exp κ( L N,h • t+δ -L N,h • t ) ≤ e τ | b| 2 Lip κ 2 7 δN h E exp τ t+δ t N n=1 N -1 N n =1 ξ n,n s 2 ds ≤ e τ | b| 2 Lip κ 2 7 δN h 1 2δ t+δ t N -1 N n=1 E exp 4δτ N N -1 ξ n,n s 2 + E exp 4τ δN N -1 N S n,N s 2 ds ≤ 2e τ | b| 2 Lip κ 2 7 δN h as soon as δ ≤ (32τ dκ 9 ) -1 N N -1 = (64κ sup t∈[0,T ] |Tr(c(t, •) -1 )| ∞ dκ 9 ) -1 N N -1 by ( Proof of Theorem 2.10 We first need a crucial approximation result of the density of a diffusion process by the density of its Euler scheme counterpart. We heavily rely on the sharp results of Gobet and Labart [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF]. Proposition 4.4. Work under Assumptions 2.1, 2.2 and 2.3. For any 0 < t min ≤ T , we have sup (t,x)∈[tmin,T ]×R d μh t (x) -µ t (x) ≤ κ 10 • h, (34) for some κ 10 > 0 depending on t min , T, d and the data (µ 0 , b, σ). Proof. For x 0 ∈ R d , let (ξ x0 t ) t∈[0,T ] be a diffusion process of the form ξ x0 t = x 0 + t 0 b(s, ξ x0 s )ds + t 0 σ(s, ξ x0 s )dB s , (35) where σ : [0, T ] × R d → R d ⊗ R d and b : [0, T ] × R d → R d are both C 1,3 b and σ is uniformly elliptic and ∂ t σ is C 0,1 b . 
We associate its companion Euler scheme ( ξx0,h t ) t∈[0,T ] :            ξx0,h tm+1 = ξx0,h tm + h • b(t m , ξx0,h tm ) + √ h σ(t m , ξx0,h tm )Z m+1 , Z n m+1 := 1 √ h B tm+1 -B tm , ξx0,h t0 = x 0 , ∀ t ∈ [t m , t m+1 ), ξx0,h t = ξx0,h tm + (t -t m ) b(t m , ξx0,h tm ) + σ(t m , ξx0,h tm )(B t -B tm ). For t > 0, both L(ξ x0 t ) and L( ξx0,h t ) are absolutely continuous, with density p t (x 0 , x) and ph t (x 0 , x) w.r.t. the Lebesgue measure dx on R d . By Theorem 2.3 in [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF], there exist two constants c 1 and c 2 depending on T , d and the data ( σ, b) such that ph t (x 0 , x) -p t (x 0 , x) ≤ c 1 h t -d+1 2 exp - c 2 |x -x 0 | 2 t . (36) Taking σ(t, x) = σ(t, x) and b(t, x) = b(t, x, µ t ), we can identify μt (x) with R d ph t (x 0 , x)µ 0 (dx 0 ) and µ t (x) with R d p t (x 0 , x)µ 0 (dx 0 ). For every t ∈ [t min , T ], we have μh t (x) -µ t (x) ≤ R d ph t (x 0 , x) -p t (x 0 , x) µ 0 (dx 0 ) ≤ c 1 h R d t -d+1 2 exp - c 2 |x -x 0 | 2 t µ 0 (dx 0 ) ≤ c 1 t -d+1 2 min • h =: κ 10 • h, by Theorem 2.3 in [START_REF] Gobet | Sharp estimates for the convergence of the density of the Euler scheme in small time[END_REF] since Assumption 2.3 implies (t, x) → b(t, x, µ t ) is C 1,3 b . More specifically, for (X t ) t∈[0,T ] solving (2), we have b(t, x, µ t ) = E b(t, x, X t ) . By dominated convergence, x → b(t, x, µ t ) is C 3 b thanks to the regularity assumptions on x → b(t, x, y). For the regularity in time, by Itô's formula E b(t, •, X t ) = E b(0, •, X 0 ) + t 0 E ∂ T b(s, •, X s ) ds + t 0 E A s b(s, •, X s ) ds, where (A s ) s∈[0,T ] is the family of generators defined in (18) applied to y → b(s, •, y) = ( b(s, •, y) [1] , . . . , b(s, •, y) [d] ) componentwise. Moreover, for each component 1 ≤ k ≤ d, A s b(s, •, X s ) [k] = E b(s, •, X s )] ∇ y b(s, •, X s ) [k] + 1 2 d l,l =1 c ll (s, X s )∂ y l y l b(s, •, X s ) [k] , hence E A s b(s, •, X s ) [k] = E b(s, •, X s )] E F k (s, X s ) + E G k (s, X s ) , with F k (s, y) = ∇ y b(s, •, y) [k] , and G k (s, y) = 1 2 d l,l =1 c ll (s, y)∂ y l y l b(s, •, y) [k] . All three functions (s, y) → b(s, •, y), (s, y) → F k (s, y) and (s, y) → G k (s, y) are continuous and bounded by Assumption 2.3, and so are s → E b(s, •, X s )], s → E F k (s, X s ) and s → E G k (s, X s ) by dominated convergence using in particular that s → X s is stochastically continuous. Hence s → E A s b(s, •, X s ) is continuous and s → E ∂ s b(s, •, X s ) too, using again Assumption 2.3. It follows that t → b(t, x, µ t ) is C 1 b on [0, T ]. Completion of proof of Theorem 2.10 We have E µ N,h,η T (x) -µ T (x) p ≤ 3 p-1 E µ N,h,η T (x) -K η μh T (x) p + K η μh T (x) -K η µ T (x) p + K η µ T (x) -µ T (x) p , and we plan to bound each term separately. First, by Proposition 4.1 E µ N,h,η T (x) -K η μh T (x) p = ∞ 0 P µ N,h,η T (x) -K η μh T (x) ≥ z 1/p dz ≤ 2κ 1 ∞ 0 exp - κ 2 N z 2/p |K η (x -•)| 2 L 2 (µ T ) + |K η | ∞ z 1/p dz ≤ 2κ 1 ∞ 0 exp - κ 2 N η d z 2/p sup y∈Supp(K) µ T (x -y)|K| 2 L 2 + |K| ∞ z 1/p dz ≤ 2κ 1 c p κ -p/2 2 sup y∈Supp(K) µ T (x -y) p/2 |K| p L 2 (N η d ) -p/2 , stemming from the estimate ∞ 0 exp - az 2/p b + cz 1/p dz ≤ c p max a b -p/2 , a c -p , valid for a, b, c, p > 0, with c p = 2 ∞ 0 exp -1 2 (min(z, √ z)) 2/p dz. Next, K η μh T (x) -K η µ T (x) p ≤ sup y∈R d μh T (y) -µ T (y) p R d K η (x -y) dy p ≤ κ p 10 |K| p L 1 h p by Proposition 4.4. 
Finally, by definition K η µ T (x) -µ T (x) p ≤ B η (µ T , x) p , and we obtain Theorem 2.10 with κ 3 = 3 p-1 max 2κ 1 c p κ -p/2 2 sup y∈Supp(K) µ T (x -y) p/2 |K| p L 2 , κ p 10 |K| p L 1 , 1 . Proof of Corollary 2.11 By Taylor's formula, we have, for x, y ∈ R d , η > 0 and any 1 ≤ k ≤ k: µ T (x + ηy) -µ T (x) = 1≤|α|≤k -1 ∂ α µ T (x) α! (ηy) α + k |α|=k (ηy) α α! 1 0 (1 -t) k -1 ∂ α µ T (x + tηy)dt, (37) with multi-index notation α = (α 1 , . . . , α d ), α i ∈ {0, 1, . . .}, α! = α 1 !α 2 ! • • • α d !, |α| = α 1 + α 2 + • • • + α d , and for x = (x 1 , . . . , x d ): x α = x α1 1 x α2 2 • • • x α d d , and ∂ α f = ∂ |α| f ∂ α1 x 1 ∂ α2 x 2 • • • ∂ α d x d . It follows that K η µ T (x) -µ T (x) = η -d R d K η -1 (x -y) µ T (y) -µ T (x) dy = R d K(-y) µ T (x + ηy) -µ T (x) dy = R d K(-y) min(k, + 1) |α|=min(k, +1) (ηy) α α! 1 0 (1 -t) min(k, +1)-1 ∂ α µ T (x + tηy)dt dy, thanks to (37) with k = min(k, + 1) and the cancellation property (8) of the kernel K that eliminates the polynomial term in the Taylor's expansion, hence K η µ T (x) -µ T (x) ≤ |µ T | min(k, +1),K η min(k, +1) , with |f | min(k, +1),K = |α|=min(k, +1) |∂ α f | ∞ α! R d |y| α |K(y)|dy. (38) It follows that B η (µ T , x) p ≤ |µ T | p min(k, +1),K η min(k, +1)p . (39) Applying now Theorem 2.10, we obtain E µ N,h,η T (x) -µ T (x) p ≤ κ 3 |µ T | p min(k, +1),K η min(k, +1)p + (N η d ) -p/2 + h p ≤ κ 3 |µ T | p min(k, +1),K + 1)N -min(k, +1)p/(2 min(k, +1)+1) + h p with the choice η = N -1/(2 min(k, +1)+d) hence the corollary with κ 4 = κ 3 (1 + |µ T | p min(k, +1),K ). Proof of Theorem 2.12 This is an adaptation of the classical Goldenshluger-Lepski method [GL08b, GL11, GL14]. We repeat the main arguments and highlight the differences due to the stochastic approximation and the presence of the Euler scheme. Abbreviating A N η (T, x) by A N η , we have, for every η ∈ H, E µ N,h, η N (T,x) T (x) -µ T (x) 2 ≤ 2E µ N,h, η N (T,x) T (x) -µ N,h,η T (x) 2 + 2E µ N,h,η T (x) -µ T (x) 2 ≤ 2E µ N,h, η N (T,x) T (x) -µ N,h,η T (x) 2 -V N η -V N η N (T,x) + + V N η + V N η N (T,x) + 2E µ N,h,η T (x) -µ T (x) 2 ≤ 2E A N max(η, η N (T,x)) + V N η + V N η N (T,x) + 2E µ N,h,η T (x) -µ T (x) 2 ≤ 2 E A N η + V N η + 2E A N η N (T,x) + V N η N (T,x) + 2E µ N,h,η T (x) -µ T (x) 2 ≤ 4E A N η + V N η + κ 3 (B η (x, µ T ) 2 + (N η d ) -1 + h 2 ) ≤ κ 11 E A N η + V N η + B η (x, µ T ) 2 + h 2 with κ 11 = 2 max(2 + κ3 |K| 2 L 2 , κ 3 ) by definition of η N (T, x) in ( 16), of V N η in (15) and using Theorem 2.10 with p = 2 to bound the last term. We next bound the term E A N η . For η, η ∈ H, with η ≤ η, we start with the decomposition µ N,h,η T (x) -µ N,h,η T (x) = µ N,h,η T (x) -K η µ h T (x) + K η µ h T (x) -K η µ T (x) + K η µ T (x) -µ T (x) -µ N,h,η T (x) -K η µ h T (x) -K η µ h T (x) -K η µ T (x) -K η µ T (x) -µ T (x) , and thus, by Proposition 4.4, we infer µ N,h,η T (x) -µ N,h,η T (x) ≤ µ N,h,η T (x) -K η µ h T (x) + µ N,h,η T (x) -K η µ h T (x) + 2κ 10 |K| L 1 h + 2 B η (µ T , x) using η ≤ η to bound the second bias term. µ N,h,η T (x) -µ N,h,η T (x) 2 -V N η -V N η ≤ 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + 3 2κ 10 |K| L 1 h + 2B η (x, µ T ) 2 . Taking maximum for η ≤ η, we thus obtain max η ≤η µ N,h,η T (x) -µ N,h,η T (x) 2 -V N η -V N η ≤ 3( µ N,h,η T (x) -K η µ T (x)) 2 -V N η + + max η ≤η 3( µ N,h,η T (x) -K η µ T (x)) 2 -V N η + + 24 κ 2 10 |K| 2 L 1 h 2 + B η (x, µ T ) 2 . We finally bound the expectation of each term. 
First, we have, by Proposition 4.1 E 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + = ∞ 0 P 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η ≥ z dz = ∞ 0 P µ N,h,η T (x) -K η µ h T (x) ≥ 3 -1/2 (z + V N η ) 1/2 dz ≤ 2κ 1 ∞ V N η exp - κ 2 N η d 1 3 z sup y∈Supp(K) µ T (x -y)|K| 2 L 2 + |K| ∞ 3 -1/2 z 1/2 dz ≤ 2κ 1 ∞ V N η exp - κ 2 N η d z 6 sup y∈Supp(K) µ T (x -y)|K| 2 L 2 dz + 2κ 1 ∞ V N η exp - κ 2 N η d z 1/2 2 √ 3|K| ∞ dz ≤ 12κ1 κ2 sup y∈Supp(K) µ T (x -y)|K| 2 L 2 (N η d ) -1 N -κ2 /6 sup y∈Supp(K) µ T (x-y) + 48 √ 3κ1 κ2 |K| ∞ |K| L 2 1/2 (N η d ) -3/2 (log N ) 1/2 exp - 1/2 κ2|K| L 2 2 √ 3|K|∞ (N η d ) 1/2 (log N ) 1/2 , where we used ∞ ν exp(-z 1/2 )dz ≤ 4ν 1/2 exp(-ν 1/2 ) for ν ≥ 16. The specifications of and η d of Theorem 2.12 entail E 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + ≤ κ 12 (log N ) 1/2 N -2 , with κ 12 = 2 max( 12κ1 κ2 sup y∈Supp(K) µ T (x -y)|K| 2 L 2 , 48 √ 3κ1 κ2 |K| ∞ |K| L 2 1/2 ). In the same way, we have the rough estimate E max η ≤η 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + ≤ η ∈H E 3 µ N,h,η T (x) -K η µ h T (x) 2 -V N η + ≤ Card(H)κ 12 (log N ) 1/2 N -2 ≤ κ 12 (log N ) 1/2 N -1 , using the previous bound. We thus have proved E A N η ≤ 2κ 12 (log N ) 1/2 N -1 + 24 κ 2 10 |K| 2 L 1 h 2 + B η (x, µ T ) 2 ≤ κ 13 V N η + B η (x, µ T ) 2 + h 2 , with κ 13 = max 2κ 12 /|K| 2 L 2 , 24κ 2 10 |K| 2 L 1 , 24 . Back to the first step of the proof, we infer E µ N,h, η N (T,x) T (x) -µ T (x) 2 ≤ κ 11 E A N η + V N η + B η (x, µ T ) 2 + h 2 ≤ κ 5 V N η + B η (x, µ T ) 2 + h 2 , where now κ 5 = κ 11 (1 + κ 13 ). Since η ∈ H is arbitrary, the proof of Theorem 2.12 is complete. Proof of Corollary 2.13 For every η ∈ H, we have B η (µ T , x) 2 ≤ |µ T | 2 k,K η 2 min(k, +1) , where |µ T | k,K is defined via (38), see (39) in the proof of Corollary 2.11. By Theorem 2.12, we thus obtain with κ 6 = 2κ 5 max(|µ T | 2 k,K , |K| 2 L 2 , 1), using the fact that the minimum in η ∈ H is attained for η = (N/ log N ) 1/(2 min(k, +1)+d) . Proof of Theorem 2.15 We briefly outline the changes that are necessary in the previous proofs to extend the results to a nonlinear drift in the measure argument that satisfies Assumption 2.14. Theorem 2.9 under Assumption 2.14 Only Lemma 4.2 needs to be extended to the case of a nonlinear drift in order to obtain Proposition 4.1, the rest of the proof remains unchanged. This amounts to prove that the random variables b(s, Y n,h 2 + min W 1 ( µ N,h s , μs ) 2 , W 1 ( µ N,h s , μs ) 2k , following the proof of Lemma 21 in [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF]. Next, min W 1 ( µ N,h s , μs ) 2 , W 1 ( µ N,h s , μs ) 2k is sub-Gaussian with the right order in N , thanks to the sharp deviation estimates of Theorem 2 in Fournier and Guillin [START_REF] Fournier | On the rate of convergence in Wasserstein distance of the empirical measure[END_REF]. As for the main part of the previous expansion, we use a bound of the form for some constant c depending on b, see Lemma 22 in [START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF], showing eventually that this term is sub-Gaussian with the right order in N . Lemma 4.2 follows. E (R d ) l Theorem Remaining proofs The proofs of Corollary 2.11, Theorem 2.12 and Corollary 2.13 remain unchanged under Assumption 2.14. Figure 3 : 3 Figure 3: Plot of the potential U (red) and its derivative U (blue). We implement the Euler scheme ( X1,h t , . . . 
, XN,h t ) t∈[0,T ] defined in (7) for coefficients b(x, µ) = R U (x -y)µ(dy) and σ(t, x) = 1. We pick T = 1, h = 10 -2 T = 10 -2 , for several values of the system size N = 2 5 = 32, 2 6 = 64, . . . , 2 16 = 65736. We then compute Figure 4 : 4 Figure 4: The graph of x → µ N,h, η N (t,x) t (x). The domain x ∈ [-4, 4] is computed over a discrete grid of 2000 points, i.e. mesh 4 • 10 -3 ) for N = 2 5 (Left) and N = 2 10 (Right). Figure 5 : 5 Figure 5: Same experiment as in Figure 4 mimicking an asymptotic behaviour of the procedure for N = 2 15 . Figure 6 : 6 Figure 6: Distribution of η N (T, x). The domain x ∈ [-4, 4] is computed over a discrete grid of 100 points, i.e. mesh 8 • 10 -2 ), N = 2 5 , 2 6 , • • • , 2 16 for = 3 (Top) and = 5 (Bottom). Figure 7 : 7 Figure 7: Same experiment as in Figure 6 (Bottom) on the restricted domain [-1 2 , 1 2 ] over a discrete grid of 100 points. The results are comparable with the experiment displayed in Figure 6 (Bottom). Figure 8 : 8 Figure8: Same experiment as in Figure7on the restricted domain [-1 2 , 1 2 ] over a discrete grid of 100 points, when adding a common perturbation force V (x) = 2(1 -|x|)1 {|x|≤1} in the drift. The effect on the empirical distribution of η N (T, x) is in line with the behaviour of an oracle bandwidth that adjusts smaller bandwidths when the signal is less smooth. Figure 9 : 9 Figure 9: Plots of x → M T (x) (Left) and x → µ T (x) (Right). Figure 9 9 Figure 9 displays the graph of x → M T (x) and x → µ T (x) for T = 1. We display in Figure 10 the reconstruction of x → µ T (x) via µ N,h, η N (T,x) T Figure 10 : 10 Figure 10: Reconstruction of µ T by µ N,h, η N (T,•) T with = 23, for different kernel orders for N = 2 10 (Top) and N = 2 15 (Bottom). Higher order kernels outperform the reconstruction provided with standard Gaussian kernels ( = 1) for N = 2 10 . Figure 11 : 11 Figure 11: Distribution of η N (T, x). The domain x ∈ [-3, 4] is computed for 1200 samples over a discrete grid of 100 points, i.e. mesh 8 • 10 -2 , N = 2 5 , 2 6 , . . . , 2 16 for = 7. The method tends pick the largest bandwidth. martingale, P and Q coincide on F 0 see e.g. [KS91, Section 3 -(5.5)] . It follows that )dB t and by applying the classical convergence result of the Euler scheme for a Brownian diffusion (see e.g. [Pag18, Theorem 7.2]) since for every s ∈ [0, T ], W 1 (μ h s , µ s ) ≤ E |X s -Xh s | . Writing b(s, Y n,h s , µ N,h s ) -b(s, Y n, ≤ exp κ9v 2 2 , 2 and in turn, by letting κ 9 = 24| b| 2 Lip κ 8 , we derive, for every v ∈ R,E exp(v(ξ n,n s ) [k] ) | Y n,h s k = 1, . . . , d.In other words, conditional onY n,h s , each component (ξ n,n s ) [k] of ξ n,n s = (ξ n,n s ) [1] , . . . , (ξ n,n s ) [d]is sub-Gaussian with variance proxy κ 9 (see e.g.[START_REF] Buldygin | Sub-Gaussian random variables[END_REF] and [Pau20, Theorem 2.1.1]). Consequently, conditional onY n,h s , each component (S n,N s ) [k] of S n,Ns is sub-Gaussian with variance proxy (N -1) -1 κ 9 , by the conditional independence of the random variables (ξ n,n s ) 1≤n ≤N,n =n . This implies in particular , η N (T,x) T (x) -µ T (x) 2 ≤ κ 5 max(|µ T | 2 k,K , |K| 2 L 2 , 1) × min η∈H (N η d ) -1 log N + η 2 min(k, +1) + h 2 ≤ κ 6 log N N 2 min(k, +1)2 min(k, +1)+d + h 2 s , µ N,h s ) -b(s, Y n,h s , μh s ) are sub-Gaussian, with the correct order in N . We may then repeat part of the proof of Proposition 19 of[START_REF] Della | Nonparametric estimation for interacting particle systems: McKean-Vlasov models[END_REF]. 
The smoothness assumption (19) for the nonlinear drift enables one to obtainb(s, Y n,h s , µ N,h s ) -b(s, Y n,h s , μh s ) 2 k-1 l=1 (R d ) l δ l µ b(s, Y n,h s , y l , μh s )( µ N,h s -μh s ) ⊗l (dy l ) δ l µ b(s, Y n,h s , y l , μh s )( µ N,h s -μh s ) ⊗l (dy l ) 2p ≤ c p N -p p! for every p ≥ 1, by the uniform ellipticity of c thanks to Assumption 2.2 and by triangle inequality for the last inequality, with | b(t, x, y) -b(t, x, y ) ≤ | b| Lip |y -y | in particular. Recall that under Assumption 2.1, 2.2 and 2.3, there exists a unique strong solution X = (X t ) t∈[0,T ] of the McKean-Vlasov equation (2) satisfying LP23, Proposition 2.1]. Hence, the function (t, x) → b µ (t, x) := b(t, x, µ t ) with µ t = L(X t ) is 1 2 -Hölder continuous in t and Lipschitz continuous in x. Thus, one can obtain sup s∈[0,T ] 2.10 under Assumption 2.14Again, only an extension of Proposition 4.4 is needed, the rest of the proof remains unchanged. This only amounts to show that (t,x) → b(t, x, µ t ) is C 1,3b . The smoothness in x is straightforward, and only the proof that t → b(t, x, µ t ) is C 1 b is required. By Itô's formula, we formally haved dt b(t, •, µ t ) = ∂ 1 b(t, •, µ t ) + E A t δ µ b(t, •, X t , µ t ) ,where (X t ) t∈[0,T ] is a solution to (2). Assumption 2.14 enables one to conclude that d dt b(t, •, µ t ) is continuous and bounded.
03856652
en
[ "shs", "sdv" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-03856652v2/file/Capocasa%20and%20Venier_Nutrition%2C%20evolution%2C%20misinformation.pdf
Marco Capocasa (email: [email protected]) and Davide Venier. Nutrition, evolution, misinformation. 2023. HAL-03856652v2. Keywords: nutrition, evolution, misinformation, eating habits, health, globalization, fake news, informed choices. Human evolution is strongly connected with environmental conditions, to which subsistence strategies and the availability and variety of food are related and on which they still largely depend ([START_REF] Boone | Subsistence Strategies and Early Human Population History: An Evolutionary Ecological Perspective[END_REF]; Erlich and Erlich 2008). Over their history, human populations have occupied almost every place on the planet, experiencing different possibilities of survival which have inevitably defined various eating habits. The diversity of environmental conditions and food resources led to different selective pressures, favouring adaptations at the local level and determining genetic variation among populations [START_REF] James | Nutrition and its Role in Human Evolution[END_REF]. The relationships among environment, dietary habits and human genetic variation have been extensively investigated, especially regarding the Neolithic transition from a lifestyle of hunting and gathering to agro-pastoralism [START_REF] Ammerman | The Neolithic Transition and the Genetics of Populations in Europe[END_REF][START_REF] Bickle | Stable Isotopes and Dynamic Diets: The Mesolithic-Neolithic Dietary Transition in Terrestrial Central Europe[END_REF][START_REF] Schulting | Dietary Shifts at the Mesolithic-Neolithic Transition in Europe: An Overview of the Stable Isotope Data[END_REF][START_REF] Bergström | From Domestication Genomics Towards Molecular Ecology of Human Environments[END_REF]. This cultural change led to a variety of dietary changes in Neolithic populations, in particular the introduction of a great quantity of starch-rich foods into the daily diet of the early agro-pastoral communities. The digestion of starch depends on specific enzymes called amylases. One of them, alpha-amylase, is present in the saliva, while others are released at the pancreatic level. The gene responsible for the synthesis of this enzyme (AMY1) has undergone duplications during human evolution, showing extensive copy number variation among populations [START_REF] Perry | Diet and the Evolution of Human Amylase Gene Copy Number Variation[END_REF]. This means that some people carry multiple copies of AMY1 in their genome, a condition contributing to a higher enzymatic availability for the digestion of starch. [START_REF] Perry | Diet and the Evolution of Human Amylase Gene Copy Number Variation[END_REF] highlighted a higher percentage of individuals carrying this genetic pattern among agro-pastoral societies than in communities of hunter-gatherers. More copies of this gene are significantly associated with higher levels of alpha-amylase [START_REF] Carpenter | Copy Number Variation of Human AMY1 is a Minor Contributor to Variation in Salivary Amylase Expression and Activity[END_REF], representing an improvement in the digestive process in populations with a starchy diet. The history of human eating habits does not depend only on the development of agricultural practices. Other, more recent historical dynamics have also contributed to this process of change.
Firstly, the economic and social shifts linked to the transition of human societies from pre-eminently agricultural to industrial, which has taken place from the 18th century. Secondly, the increase in movements of people, information and goods linked to globalization, especially starting from the second half of the 20th century and, even more, in the new millennium. All these changes influenced lifestyles and, consequently, food choices, as they relate to the exponential growth in the productivity of the food industry [START_REF] Mendez | Globalization, Urbanization and Nutritional Change in the Developing World[END_REF]. Eating habits in the westernized world One of the most discussed effects that globalization has on eating habits regards its large-scale "westernization". This transition is not taking place at the same level everywhere, with local contexts where cultural and gastronomic identities are so marked as to curb the success of other external models [START_REF] Haile | Evolution of Human Diet and Effect of Globalization on Regional Diet with Emphasis to the Mediterranean Diet[END_REF][START_REF] García-Dorado | Economic Globalization, Nutrition and Health: A Review of Quantitative Evidence[END_REF][START_REF] Azzam | Is the World Converging to a 'Western Diet'?[END_REF]). However, a great variety of lowcost packaged food products, pivotal representatives of the "Western diet", rich in sugar, salt, and saturated fats, are available virtually everywhere [START_REF] Kopp | How Western Diet and Lifestyle Drive the Pandemic of Obesity and Civilization Diseases[END_REF]. This rapid change could play a crucial role in influencing people's health and quality of life [START_REF] Cordain | Origins and Evolution of the Western Diet: Health Implications for the 21st Century[END_REF][START_REF] García-Montero | Nutritional Components in Western Diet Versus Mediterranean Diet at the Gut Microbiota-Immune System Interplay. Implications For Health and Disease[END_REF]. Understanding these dynamics is useful to identify their effects on the human body at a biological level. As pointed out by a large body of research evidence, the Western diet is involved in determining imbalance in the gut microbiota. These investigations showed that the gut microbial diversity of huntergatherers is richer and varied than those of members of agropastoral groups and, even more, urban western communities (see [START_REF] Gupta | Geography, Ethnicity or Subsistence-Specific Variations in Human Microbiome Composition and Diversity[END_REF] and related citations therein). The main reason of this differences seems to be that a greater diversity of microbial species would confer a better resistance to parasites and efficiency in coping with the not optimal intake of nutrients due to the scarcity of food. The gut microbiota of hunter-gatherers greatly differs from that of the urban societies in terms of the richness of several genera of bacteria, such as Prevotella and Treponema, very useful for diets rich in fibres [START_REF] Obregon-Tito | Subsistence Strategies in Traditional Societies Distinguish Gut Microbiomes[END_REF]. Comparative studies on gut microbiota explain how different lifestyles and eating habits influence the bacterial composition of the human intestinal tract and the impact of its impoverishment on human health. 
Western diet is also strongly associated with obesity and other diseases, such as cardiovascular diseases and diabetes, which are among the ten with the highest mortality rate worldwide (GBD 2017 Causes of Death Collaborators 2018; [START_REF] Capocasa | A Light in the Dark: Open Access to Medical Literature and the COVID-19 Pandemic[END_REF][START_REF] Bisol | How Can We Get More Open Access to Medical Studies? Simple, Let's Take the Green Road[END_REF]. It is worth noting that more than half of the daily intake of calories in several countries of the western world such as Canada, the United States and the United Kingdom derives from these foods [START_REF] Monteiro | Ultra-Processed Foods: What They Are and How to Identify Them[END_REF]). In the last two decades, the sale of these products is growing dizzyingly also in low-and middle-income countries [START_REF] Monteiro | Ultra-Processed Products Are Becoming Dominant in the Global Food System[END_REF][START_REF] Moodie | Ultra-Processed Profits: The Political Economy of Countering the Global Spread of Ultra-Processed Foods -A Synthesis Review on the Market and Political Practices of Transnational Food Corporations and Strategic Public Health Responses[END_REF]. In 2004, the Food and Agriculture Organization (FAO) published a report entitled "Globalization of Food Systems in Developing Countries: Impact of Food Security and Nutrition", in which authors highlighted how unhealthy food market in developing countries was supported by the television commercials and by the spread of mobile phones and text messages. Emblematic is the case of Brazil, where 58% of TV commercials related to food products concerned those with high fat and sugar contents, only 9% regarding foods based on meat, beans, or eggs, and no commercials dedicated to fresh fruit and vegetables. Nutrition science, between myths and lies Television and the Internet have allowed amplification of the promotion of food products. At the same time, they represent the main tools for the dissemination of nutrition and healthy eating contents. However, scientists and nutrition professionals are still struggling to find the right way to communicate efficiently and effectively (throughout these channels) the positive implications for society that their studies could produce. This is because the "translation" of scientific works from the technical language to one suitable for a wider audience is an undertaking that hides several traps, such as oversimplification, sensationalism, and inaccuracy [START_REF] Secko | Four Models of Science Journalism: A Synthesis and Practical Assessment[END_REF]. This is particularly true in the case of nutrition science as it involves aspects of the daily life of everyone [START_REF] Rowe | Nutrition Science Communication: "If It's Transparent, Is That Why We Cannot See It?[END_REF]. Mass media not only have been used to advertise products, but also to disseminate fake news concerning the nutritional values of foods and their effects on human health (e.g. the case of the "milk consumption-hip fractures" controversial relationship; see [START_REF] Michaëlsson | Milk Intake and Risk of Mortality and Fractures in Women and Men: Cohort Studies[END_REF], but see also [START_REF] Hidayat | Systematic Review and Meta-Analysis of the Association Between Dairy Consumption and the Risk of Hip Fracture: Critical Interpretation of the Currently Available Evidence[END_REF]. 
However, even long before the invention of television and the worldwide web, word of mouth worked well for "food mythology" to reach a large audience. An example above all: "spinach is the richest source of iron". This is one of the longest-lived scientific legends, as it is now two centuries old. One version of the story tells about an error brought since the dawn of biochemistry, caused by confusing the iron content in the dry weight of spinach with that of fresh weight (see Rekdal 2014 and related studies cited therein). Other reconstructions refer to a misplaced decimal point in calculation by two different "guilty": Emil von Wolff and Gustav von Bunge [START_REF] Mielewczik | Spinach in Blunderland: How the Myth that Spinach is Rich in Iron Became an Urban Academic Legend[END_REF]. None of these stories would be true, however it is true that spinach is one of the vegetables that contain only a fair amount of iron (2.9 grams per 100 grams of fresh product) and that we can assimilate only a small quantity of this micronutrient due to its interactions with other dietary factors (i.e. polyphenols, phytates and calcium; [START_REF] Rodriguez-Ramiro | Estimation of the Iron Bioavailability in Green Vegetables Using an in Vitro Digestion/Caco-2 Cell Model[END_REF]). This is one of the many examples of food myths that the nutrition sciences have helped to unravel. A particularly arduous task in the globalized world, literally haunted by the proliferation of fake news mainly on the world wide web. The internet is widely used to find out information on nutrition and diets, although it cannot be view as the most reliable source [START_REF] Wangberg | Use of the Internet for Health Purposes: Trends in Norway 2000-2010[END_REF][START_REF] Goodman | Use of Nutritional Information in Canada: National Trends Between 2004 and 2008[END_REF]. [START_REF] Vosoughi | The Spread of True and False News Online[END_REF] pointed out the greater speed with which fake information spread on the web compared to truths. They also discussed the role of social networks noting how, through Twitter, fake news are 70% more likely to be shared compared to reliable news. This study clarifies how the web is a context in which false, or at least inaccurate news, can reach people needing information, leading them to make wrong choices. It is the case of the significant number of people who follow restrictive dietary patterns, such as lactose and gluten-free diets, without having performed the proper medical exams to verify if they have any intolerance (e.g. see [START_REF] Araya | Living with Gluten and Other Food Intolerances: Self-Reported Diagnoses and Management[END_REF]. Towards truly informed choices The spread of fake news is due to the lack of scientific knowledge that is needed to distinguish which is a reliable information and which should be taken as a joke. As a matter of fact, people process information extrapolated from texts and videos differently, depending on their level of knowledge of the topics. Based on this elaboration, they may be vulnerable and become easily persuaded to make wrong and even harmful choices. The lower the knowledge, the greater the vulnerability and therefore the tendency to rely on the online pull of mixed contents, regardless of its quality. In an increasingly interconnected world, the popularity of social media contributed to make celebrities and influencers as role models. 
Followers may wonder how this well-known personality "has that physical aspect", desiring to resemble him/her, and thus wanting to know "what diet is he/she following", "what sporting activity is he/she practicing", or "which supplements are involved". In this way, popular personalities are seen as more reliable as experts, even if they do not have a specific scientific background. All these misconceptions bring people to follow them instead of searching for information from reliable sites, open access scientific papers or, even better, asking directly a nutritional or medical professional. How can we incentivize informed choices? First, we believe that nutrition professionals should play a primary and decisive role in providing reliable information to patients and their relatives. Particularly, we stress the importance of an accessible language to avoid technicalities that are perceived as hurdle by the patients. As pointed out by [START_REF] Mira | Barriers for an Effective Communication Around Clinical Decision Making: An Analysis of the Gaps Between Doctors' and Patients' Point of View[END_REF], communication barriers lead to a less involvement of patients and their reduced willingness to follow directions and suggestions coming from healthcare professionals. Furthermore, we claim that making a greater effort in the spread of scientific knowledge to the public, organizing public events and publishing easily readable articles on websites, blogs and magazines, could contribute to raise the awareness of the huge influence that eating habits have on human health.
04096259
en
[ "stat.ml" ]
2024/03/04 16:41:18
2020
https://hal.science/hal-04096259/file/ISMIR2020-LBD-422-abstract.pdf
Grigore Burloiu email: [email protected] Cinetic Unatc Adaptive Drum Machine Microtiming with Transfer Learning and RNNs We introduce rolypoly˜, the first drum machine for live performance that adapts its microtiming in relation to a human musician. We leverage state-of-the-art work in expressive performance modelling with recurrent nets, towards real-time application on the micro scale. Our models are pretrained on the Groove MIDI Dataset from Magenta, and then fine-tuned iteratively over several duet performances of a new piece. We propose a method for defining training targets based on previous performances, rather than a prior ground truth. The agent is able to adapt to human timing nuances, and can achieve effects such as morphing a rhythm from straight to swing. INTRODUCTION We are interested in the coupling between drum machine and human player, with a view to extending their mutual dynamics computationally [START_REF] Inverno | Heroic versus collaborative AI for the arts[END_REF]. We centre on the microtiming scale as a locus of expressive music interaction [START_REF] Leman | The expressive moment: How interaction (with music) shapes human empowerment[END_REF]. In studies on tempo and time-shift representation, [START_REF] Honing | Timing is tempo-specific[END_REF] posits that global tempo curves alone cannot account for the alterations observed in performances of the same material at different speeds. Nevertheless, score-driven automatic accompaniment has traditionally worked by computing such a curve to drive the warping of a backing track [START_REF] Raphael | Music plus one and machine learning[END_REF][START_REF] Cont | On the creative use of score following and its impact on research[END_REF]. Increasingly however, attention is also being paid to the interpretation of real-time accompaniment on the micro scale [START_REF] Xia | Spectral learning for expressive interactive ensemble music performance[END_REF][START_REF] Maezawa | Deep linear autoregressive model for interpretable prediction of expressive tempo[END_REF]. By leveraging advances in natural language processing, recurrent neural network (RNN)based machine learning architectures have been shown to produce state-of-the-art expressive outputs [START_REF] Jeong | Virtuosonet: A hierarchical attention rnn for generating expressive piano performance from music score[END_REF][START_REF] Gillick | Learning to groove with inverse sequence transformations[END_REF][START_REF] Oore | This time with feeling: Learning expressive musical performance[END_REF]. We delineate a set of design principles for a new musical agent. rolypoly˜is a score-driven drum machine with the following characteristics: lightweight (inference must be fast enough to run in real-time on an average, accessible computer), audio-interactive (must not only adapt to incoming sound from human musicians, but also "listen" to its own output, coupled with the former), progressive/scalable (must not rely on existing duet/ensemble ground truth; rather, it builds up a performance corpus, accruing learning along the way), and score-agnostic (does not require symbolic specification of the parts played by other musicians). AGENT ARCHITECTURE The piece to be interpreted is represented as a sequence of feature vector rows, encoding each drum hit and its context (tempo, time signature, beat phase) at timestep t, and the drum-target onset distance dif f t-1 (since this distance can only be measured in hindsight). 
The output is a drum microtiming offset, y t , with its corresponding estimation ŷt determining when in relation to the absolute notated time the t-th drum hit will actually be triggered. Since we do not rely on an existing drum-human duet ground truth (no such dataset suitable for deep learning exists), we define y after and as a function of a performance, as follows. We define a variable d t at timestep t as the cumulated realised offsets of score-to-drum and (variance-adjusted) drum-to-target: d t = ŷt + A • dif f t σ dif f /σ ŷ , (1) We then use d t to determine the ground truth drum offsets for training the next iteration of the model, by subtracting its mean (to keep outputs centred around zero) and again applying deviation normalisation: y t = B • d t -µ d σ d /σ ŷ . (2) A and B above are hyperparameters, controlling the weighting of the target offsets and the cumulated offsets, respectively. Their default setting is 1, allowing the timing output to adapt gradually, without major fluctuations. For rhythm morphing (see Section 3) they can be set to cancel out the effect of variance scaling. We propose two RNN-derived architectures that predict each following drum timing based on the input features received in real time. Both are defined and trained using the PyTorch library, and communicate via OSC to Max, which handles the audio playback and analysis. The first model is built on a 2-layer unidirectional LSTM network [START_REF] Hochreiter | Long short-term memory[END_REF] with 256 hidden units fed into tanh nonlinear activations. The result is passed through a linear layer with a single output, ŷ. The second model is a simplification of the Seq2Seq [START_REF] Sutskever | Sequence to sequence learning with neural networks[END_REF] architecture described in [START_REF] Gillick | Learning to groove with inverse sequence transformations[END_REF]. In our case the source sequence is the complete pre-performance input dataset (sans the unrealised audio-related features), and the target is the performance dataset up to the current timestep. The encoder is a bidirectional LSTM, and the decoder is a 2-layer unidirectional LSTM with 256 hidden units. As with the first model, a tanh nonlinearity and final linear layer project the decoder output to a one-dimensional activation, ŷ. To pretrain our models we process a drums-only performance dataset, the Groove MIDI Dataset (GMD) from Magenta [START_REF] Gillick | Learning to groove with inverse sequence transformations[END_REF], the largest existing dataset of expressive drumming. We predict residual drum offsets, 1 resulting in audio-agnostic expressive interpreter models, ready to be fine-tuned with subsequent performances. The source code and trained models 2 , notebook 3 and a demo video 4 are available EXPERIMENTS Figure 1 pictures the Seq2Seq model transitioning from a straight 4/4 beat to a "swing" shuffle, where offbeats are pushed slightly later-simply by "swinging" the respective notes on guitar for three iterations. Similarly we were able to morph the pattern x..x..x. to three equallydistanced triplets, as seen in the demo video. The typical rolypoly˜use case consists in a song being performed multiple times and the model learning incrementally after each take. Heuristically we found the agent is able to limit the drum-target variance, and adapt to structural patterns in the piece. FUTURE WORK We are exploring improvements to e.g. the target audio representation, via the learning of a latent feature space of Figure 1 . 1 Figure 1. 
Figure 1. Rhythm morphing over three training iterations. Score: dotted lines. ŷ: blue stems. diff: grey bars. Transition from straight (bottom) to swinging (top).
¹ Thus, y measures the distance to the drum hit from its quantised position. While [START_REF] Gillick | Learning to groove with inverse sequence transformations[END_REF][START_REF] Makris | Conditional neural sequence learners for generating drums' rhythms[END_REF] used 16 steps per bar, we chose a quantisation step of 24, to better account for triplets and swing.
² See https://github.com/RVirmoors/rolypoly/tree/master/py.
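To make the update in Eqs. (1)–(2) concrete, here is a minimal NumPy sketch of how the training targets for the next iteration could be derived from one recorded performance. The array names, the toy data and the exact placement of the normalisation terms are assumptions reconstructed from the description above, not the released rolypoly~ code.

```python
import numpy as np

def next_targets(est_y, diff, A=1.0, B=1.0, eps=1e-8):
    """Sketch of Eqs. (1)-(2): derive training targets for the next iteration.

    est_y : model-estimated drum offsets (y_hat) per timestep, in seconds
    diff  : measured drum-to-target onset distances per timestep, in seconds
    A, B  : weighting hyperparameters (default 1, as in the text)
    """
    est_y = np.asarray(est_y, dtype=float)
    diff = np.asarray(diff, dtype=float)

    sigma_yhat = est_y.std() + eps
    sigma_diff = diff.std() + eps

    # Eq. (1): cumulate score-to-drum and variance-adjusted drum-to-target offsets
    d = est_y + A * diff / (sigma_diff / sigma_yhat)

    # Eq. (2): centre around zero and normalise the deviation again
    y = B * (d - d.mean()) / ((d.std() + eps) / sigma_yhat)
    return y

# Toy usage: a bar of eighth notes where the offbeats were played late
est_y = np.array([0.00, 0.01, 0.00, 0.012, 0.00, 0.011, 0.00, 0.013])
diff = np.array([0.00, 0.03, 0.00, 0.028, 0.00, 0.032, 0.00, 0.030])
print(next_targets(est_y, diff))
```

With A = B = 1, repeatedly late offbeats pull the corresponding targets later, which is the mechanism behind the straight-to-swing morphing shown in Figure 1.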
04096334
en
[ "info.info-ai" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04096334/file/these.pdf
Worldwide, highway accidents have important social and financial impacts. To reduce their frequency and gravity, crash prediction models (CPM) are used to identify hazardous roadway segments and to provide actionable clues about the associated risk factors. CPM are either parametric statistical models, in particular generalized linear models (GLM), or machine learning models with a large number of parameters without associated uncertainty estimates (e.g., ensembles of decision trees, support-vector machines, . . . ). Simple parametric models tend to be more interpretable but less effective than highly flexible non-parametric models that work like black-boxes. When pondering high-stake decisions, such as in the context of highway safety, field experts expect predictive models to be both effective and glass-box interpretable. The models must assist them in conceiving and deploying preventive or remedial safety actions. As such, we contribute to enhancing the predictive performance of parametric models while maintaining their interpretability. In the first place, a well-chosen hierarchical structure can handle correlations among groups of observations and significantly improve the quality of the models' predictions and of their interpretation. We propose to learn it by leveraging the output of a post-hoc explainability framework (viz., SHAP) applied to a highly flexible black-box model (viz., XGBoost). In our first contribution, this hierarchical structure informs a Bayesian multilevel GLM. Moreover, in an effort to further improve the predictive performance of the model without deteriorating its interpretability, we propose to extend its linear functional form to account for major first-order interactions between explanatory variables. These interactions are learnt from the data by analyzing the results of a trained self-organized polynomial network. In our second contribution, we exploit the hierarchical structure even better by replacing the GLM with a simulated annealing based multi-objective symbolic regression algorithm to automate feature engineering and feature selection. Thus, by computing a cluster-specific ranking of expansions of regularized linear models ordered by increasing complexity, we facilitate a dynamic interpretative process which makes it possible to discover effective, efficient and interpretable predictive models. Experiments have been conducted on a highway safety dataset and on more than ten public datasets covering classification and regression tasks. They show promising results, with our two contributions outperforming traditional glass-box interpretable models while getting close to the best non-parametric models. Finally, we illustrate the benefits of our approach by introducing, on a realistic case study, an application we designed for highway safety experts.
Context and objectives Road safety is a socio-economic concern: according to the World Health Organization [START_REF]Road traffic injuries[END_REF], approximately 1.35 million people are killed each year on roadways around the world. Related expenses average 3% of the gross domestic product of a country. As stated by the French Road Safety Observatory [START_REF]Road safety annual report[END_REF], these costs grow exponentially with the severity of the accidents. In 2019, a property-damage-only accident incurred expenses up to 5 258 euros, while the average cost of an accident with at least one fatality was 3 429 000 euros.¹ Since the 1970s, road safety has become a major challenge for successive French governments. Many new safety policies have been engaged in, such as reductions of speed limits, the multiplication of fixed radars, and the increase in the amount of speeding fines. At the same time, vehicles became safer. Road safety was therefore continuously improved. However, since 2010, the positive trend was attenuated and the reduction in the fatality rate started to stagnate. For this reason, the French government, supported by the European Union, motivated new safety policies whose main issuers would be local authorities and road managers. Among them, the APRR² group, a subsidiary of Eiffage, finances, maintains and manages the infrastructure of a 2 323 km-long highway network (see Fig. 1.1) in return for toll collection. The objective of the company is to offer high-performance transport infrastructure and support road users at every stage of their travel, guaranteeing them the best conditions for traffic flow and safety. To secure their network, field experts of APRR leveraged an extension of the User's Safety on Existing Roads (SURE) approach, a comprehensive and global framework for network safety management initiated by the French government. First, they implemented a multi-criteria decision analysis method to compute a ranking of segments based on the values of different crash-related criteria (e.g., annual average daily traffic, fatality or injury rates, . . . ). Then, after studying crash reports and conducting on-site visits, field experts elaborated pluriannual action plans (operation, maintenance, . . . ) on segments where the safety improvements were potentially the greatest. In the end, this method contributed to a significant reduction in fatality and severity rates. Nevertheless, SURE does not work on reducing the crash likelihood and does not explain the mechanisms affecting crash-prone areas. Therefore, decision makers have opted for the development of new intelligent systems inspired by the latest advances in Artificial Intelligence (AI). A collaboration has been initiated between the LIRIS³ laboratory and Data New Road⁴, a subsidiary of APRR dedicated to the harnessing of the company's historical data. The main objective of this collaboration was to build an AI system to estimate accurately future hazardous segments and to identify risk factors. A system that meets these requirements will be a fundamental element in the implementation of new proactive safety policies, and will therefore have a beneficial and directly measurable impact.
¹ These amounts include the various costs incurred for territorial and local authorities (hospitalization, insurance, etc.) and for road managers (marking, road works, etc.).
² Autoroutes Paris-Rhin-Rhône: https://aprr.com/en
Crash prediction models In the field of road safety, research has focused on obtaining a better understanding of the mechanisms that affect the probability of an accident. Given the absence of precise data related to human behavior (lack of attention, fatigue, alcohol, . . . ), crash prediction models (CPM) do not provide cause-and-effect relationships that would explain the occurrence of an accident. Instead, they emphasize the use of factors that could influence the likelihood of an accident over a given period (e.g., year, month) and in a given geographical area. CPM are trained on historical data to discover hidden patterns between the crash-related variable and several explanatory variables (e.g., traffic, speed limit, altitude, . . . ). They are mainly used to identify risk factors in order to steer the evolution of safety policies. In their survey [START_REF] Lord | The statistical analysis of crashfrequency data: A review and assessment of methodological alternatives[END_REF], Lord and Mannering provide a broad perspective on the variety of data-related issues raised by crash count prediction: over-dispersion of count data, temporal and spatial correlations due to multiple measurements of a same location at different times, fixed parameters that cannot adapt from one roadway to the next, low sample-mean due to the sparsity of crashes, non-linear relationships between crash-frequencies and explanatory variables, etc. Most of theses issues are made more prominent with the use of parametric models, in particular generalized linear models (GLM). Indeed, GLM must undergo many transformations to adapt to the crash count prediction context (e.g., the choice of a non-normal likelihood distribution, the integration of random effects and hierarchical models, etc). Whereas non-parametric approaches (e.g., neural networks, tree-based algorithms, SVM...) will deal with most of these issues without the need for specific adaptations and will usually offer better predictive performances than parametric models. On the other hand, this improvement in predictive power comes at a cost. These models, which involve a large number of parameters, are often considered as black boxes that do not allow direct understanding of the mechanisms that led to a prediction. Thus, when designing CPM, both dimensions (viz., performance and interpretability) must be considered. However, while performance is easily measured with widely adopted metrics, quantifying interpretability remains more debated. Model interpretability: concepts, taxonomies In our study, we consider a predictive model to be glass-box interpretable when it quantifies explicitly (i.e., not by simulation) the marginal effects of the explanatory variables and also possibly of a few simple transformations of these variables (e.g., multiplicative interactions, log transforms, etc.). Thus, we stress the importance of favoring simple functional forms namely, expansions of linear models. On the contrary, non-parametric models allow for very flexible, but often complex, functional forms. For these models, some sense of the marginal effects can be gained by studying ceteris paribus profiles. They represent the influence of an explanatory variable by assuming that all the other variables remain constant. In the case of flexible functional forms, the choice of the fixed values for all the variables but the one for which the marginal effects are to be observed has a great influence on the interpretation of the effect. 
Often, different choices (if not all) should be considered to provide a more faithful view of the marginal effects. Of course, the number of configurations of fixed values grows quickly with the number of explanatory variables. Some heuristics have been used to manage this complexity. For example, Beck et al. [START_REF] Beck | Improving quantitative studies of international conflict: A conjecture[END_REF] chose to "hold constant the other variables at two values: high and low probability" of the target. To truly measure the marginal effects, one should, as stated in [START_REF] Hainmueller | Kernel regularized least squares: Reducing misspecification bias with a flexible and interpretable machine learning approach[END_REF], "estimate partial derivative with respect to each input variable and at each observation through repeated simulations". This strategy is computationally prohibitive when implemented by simulating data from a complex model. Among the oldest computationally convenient variants, we find the permutation based methods applied to variable importance, partial dependence plots or individual conditional expectation plots. However, Hooker and Mentch [START_REF] Hooker | Please stop permuting features: An explanation and alternatives[END_REF] have found that "when features in the training set exhibit statistical dependence, permute-and-predict methods can be highly misleading when applied to the original model". More refined approximation methods have been proposed, such as LIME or SHAP. They are also based on observing the effects of a perturbation of a given instance on the output of the model. These approaches are often referred to as model-agnostic post-hoc explanation methods. However, recent works, in particular [START_REF] Slack | Fooling lime and shap: Adversarial attacks on post hoc explanation methods[END_REF] tend to show that it can be difficult to assess their reliability and robustness. Also, Garreau and Luxburg [START_REF] Garreau | Explaining the explainer: A first theoretical analysis of lime[END_REF], in a theoretical study of LIME, verified by simulations, showed that, while this post-hoc approach "discovers interesting features, it might forget some important features and the surrogate model is not faithful". Generalized Additive Models (GAM) [START_REF] Trevor | Generalized additive models[END_REF] can be located halfway between complex flexible functional forms and simple glass-box models. They give some access to the marginal effects by enabling plots of the expectation of the target given the values taken by each explanatory variable. Lou et al. [START_REF] Lou | Intelligible models for classification and regression[END_REF] designed a GAM variant based on boosted trees, the Explainable Boosting Machine (EBM). It is often almost as accurate as flexible black-box models. However, Chang et al. [START_REF] Chang | How interpretable and trustworthy are gams[END_REF] observed that different GAM algorithms, while offering comparable results in terms of accuracy, can lead to very different interpretations of the predictions. Finally, we consider that simple linear models with the inclusion of some transforms of the original explanatory variables and the use of a regularization term (e.g., ridge or LASSO) to control the bias/variance trade-off remain promising among the intrinsically glass-box approaches. However, they come with key challenges. 
If one doesn't want the user to be solely responsible for deciding which functions of the original variables are to be considered, a strategy must be adopted to explore a combinatorial space of potential functional forms. Also, for the model to stay interpretable in presence of a large number of explanatory variables, the number of relevant coefficients shouldn't be too large. However, variable selection is not without drawbacks. In particular, it suffers from the risk of hiding the main effects behind more complex correlated terms. Machine learning formalization for crash prediction Data description Throughout this work, we will refer to x as the vector of explanatory variables describing a roadway segment during a given period. Each explanatory variable can be either continuous, discrete or categorical. Continuous variables can assume an infinite number of real values within a given interval. As opposed to a continuous variable, a discrete variable can assume only a finite number of numerical values within a given interval, such as the number of interchanges on a roadway segment in our context. The categorical variable refers to a variable that takes values in a discrete unordered set of categories {c 1 , ...c n }. Sometimes, categorical variables have only two categories and are therefore called binary variables. For instance, in our crash prediction problem, the variable presence of tunnel indicates whether a tunnel is present or not on a roadway segment. The dependent variable, also known as the target variable, will be denoted y. In the context of crash count prediction, y is a discrete positive integer that quantifies the number of crashes observed on a roadway segment for a given time interval. Note that in the literature, the problem can be tackled with a binary categorical variable. In this case, the focus is not to predict a crash count, but to know whether or not a roadway segment will be accidental given the values of the explanatory variables. These models require the definition of a threshold that is used to differentiate crashprone configurations from normal ones. Supervised learning In a supervised context, the predictive model must discover how to associate an observation x to a label y, based on a set of known examples {x i , y i } N i=1 . To model these dependencies, the process goes through two phases: • training phase: the model is trained to find patterns in the training data that map the vectors of explanatory variables x to the dependent variables y. To learn this mapping, the model goes through a stage called model training, where its parameters (e.g., weights, bias) are updated so the function it approximates captures the most relevant dependencies among the variables. Sometimes, models involve hyper-parameters i.e, parameters that are determined before training and remain fixed afterwards. Thus, an additional step called validation step can be performed in order to verify the correctness of their values and adjust them if necessary. • testing phase: the trained model is evaluated on a set of unknown samples with appropriate performance metrics (e.g., mean squared error for regression tasks, f 1-score for classification tasks). By doing so, one obtains a less biased estimate of the predictive performance of the model which will allow, firstly, to validate the generalization capacity of the model and secondly, to compare the predictive performance of the selected model with those of others. 
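As a concrete illustration of the two phases above, the following sketch trains a model on 80% of a synthetic crash-count-like dataset and evaluates it on the held-out 20%. The simulated features and the choice of a Poisson regression baseline are illustrative assumptions, not the actual highway dataset or the models developed in this work.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_poisson_deviance

rng = np.random.default_rng(0)

# Synthetic stand-in for the highway dataset: one row per (segment, year)
n = 2000
X = np.column_stack([
    rng.uniform(5_000, 80_000, n),   # annual average daily traffic (illustrative)
    rng.integers(2, 5, n),           # number of lanes (illustrative)
    rng.uniform(0.05, 0.30, n),      # share of heavy vehicles (illustrative)
])
true_rate = np.exp(-1.0 + 0.00002 * X[:, 0] + 0.15 * X[:, 1] + 2.0 * X[:, 2])
y = rng.poisson(true_rate)           # simulated crash counts

# Training phase: fit on 80% of the samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X_train, y_train)

# Testing phase: evaluate on the held-out 20% with an appropriate metric
pred = model.predict(X_test)
print("mean Poisson deviance:", mean_poisson_deviance(y_test, pred))
```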
However, doing a single train-test split will give, most of the time, an overoptimistic estimate of the predictive performance of the model. One way to mitigate this is to use k-fold cross-validation. In k-fold cross-validation, the dataset is split into k subsets of data (known as folds). The model is trained on k - 1 subsets, and then evaluated on the remaining subset that was not used for training. This process is repeated k times, with a different test subset each time. The average performance obtained on the k test subsets gives an estimate of the model's predictive performance. In practice, k-fold cross-validation can serve multiple purposes, such as defining the best set of hyper-parameters for a machine learning model or comparing multiple predictive models and selecting the one that has the best generalization ability. Application to the highway dataset In our work for highway safety analysis, the objective is to predict efficiently the crash count observed on highway segments, given the values of spatial explanatory variables (e.g., number of lanes, speed limit, . . . ) and temporal explanatory variables (annual average daily traffic, percentage of heavy vehicles). To learn these relationships, we leverage supervised learning: first, we start by generating a dataset with samples representing the crash count and the values of explanatory variables observed on a segment for a particular year (see table 1.1). The process of generating the dataset will be described in more detail in section 4.1. Then, we train a predictive model on a subset of randomly selected samples from the overall dataset (say, for example, 80%) to learn a function that maps the explanatory variables to the crash count. To efficiently help field experts, this function will have to meet different challenges presented in the next section. Finally, we evaluate the performance of the resulting function on the remaining samples. Challenges and research questions If we assume the observed data are generated from an unknown data-generating process f such that y = f(x) + ε, with ε being a zero-centered Gaussian noise of variance σ², our objective is to find a function f̂ that is as close as possible to the true but unknown process which generated the data. Given the available training data {(x_i, y_i)}_{i=1}^{N}, we estimate f̂ so it minimizes an objective function such as the mean squared error Σ_i (y_i − f̂(x_i))². Bias-variance trade-off To measure the ability of f̂ to generalize to points outside of the training data, we compute the expected prediction error (EPE) of the prediction y* = f(x*) + ε, with x* being an unseen sample. As shown by Hastie et al. [START_REF] Hastie | The elements of statistical learning: data mining, inference, and prediction[END_REF], EPE can be decomposed as:

EPE = E[(y* − f̂(x*))²]
    = (f(x*) − E[f̂(x*)])² + E[ε²] + E[f̂(x*)²] − E[f̂(x*)]²
    = bias(f̂)² + σ² + var(f̂)    (1.1)

We observe that EPE embeds three different sources of error: σ² is the irreducible error that cannot fundamentally be reduced by any model, the bias measures the average error of f̂(x*), and the variance var gives a measure of the variation of the prediction f̂(x*) from one training dataset to another. An ideal model minimizes both the bias and the variance. However, in a world with imperfect models and finite data, it is nearly impossible to do both simultaneously.
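The decomposition in Eq. (1.1) can be estimated empirically by refitting a model on many training sets resampled from a known generating process and measuring the spread of its predictions at a fixed test point. The generating function and the two models compared below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)          # assumed "true" process
sigma = 0.3                          # noise standard deviation
x_star = np.array([[0.5]])           # test point

def bias_variance(make_model, n_rep=500, n_train=50):
    preds = np.empty(n_rep)
    for r in range(n_rep):
        X = rng.uniform(0, 1, (n_train, 1))
        y = f(X.ravel()) + rng.normal(0, sigma, n_train)
        preds[r] = make_model().fit(X, y).predict(x_star)[0]
    bias2 = (f(x_star.ravel())[0] - preds.mean()) ** 2
    var = preds.var()
    return bias2, var

for name, make in [("linear (high bias)", LinearRegression),
                   ("deep tree (high variance)", DecisionTreeRegressor)]:
    b2, v = bias_variance(make)
    # EPE at x* is approximately bias² + σ² + variance, as in Eq. (1.1)
    print(f"{name}: bias²={b2:.3f}  var={v:.3f}  EPE≈{b2 + sigma**2 + v:.3f}")
```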
On the one hand, complex models (e.g., deep neural networks) approximate precisely training data but are exposed to over-fitting. On the other hand, simpler models with high bias, such as linear models, fail to capture important regularities in the data and are therefore prone to underfitting. Thus, efforts should be made to reach optimum model complexity for which the expected prediction error is generally close to its lowest level (see Fig. 1.2). One way of doing this is to use regularization to find a good trade-off between bias and variance. Likewise, dimensionality reduction and feature selection also decrease the variance of complex models. Finally, when designing predictive models in a high-stakes context, it is necessary to not focus only on mitigating the bias-variance trade-off, but also to consider model interpretability because the understanding of the mechanisms that lead to a prediction is as valuable as the prediction itself. Performance and interpretability Broadly speaking, non-parametric complex models, which cover big hypothesis spaces, often obtain very good performance on many predictive tasks. However, as they involve a large amount of parameters, they're often considered as black-boxes [START_REF] Barredo Arrieta | Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai[END_REF]. To gain insights into their underlying behaviors, post hoc explanation tools must be used. Nonetheless, these tools have several drawbacks w.r.t. the trust and veracity of the explanations they deliver [START_REF] Rudin | Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[END_REF][START_REF] Slack | Fooling lime and shap: Adversarial attacks on post hoc explanation methods[END_REF]. Indeed, the representation of the model can be inaccurate in parts of the feature space and may even be misleading in the under-represented parts [START_REF] Molnar | Interpretable Machine Learning[END_REF]. Moreover, even though these tools can reproduce accurately the predictions of the original model, they may use completely different features. A well-known example is the proprietary prediction model COMPAS, a recidivism-risk scoring model used widely throughout the U.S. criminal justice system for parole and bail decisions, which has been criticized as being dependent on race based on the interpretations of an explanation model [START_REF] Mattu | Machine bias[END_REF] whereas it has been demonstrated afterwards that COMPAS does not seem to "depend strongly on either criminal history or proxies for race" [START_REF] Rudin | The age of secrecy and unfairness in recidivism prediction[END_REF]. Moreover, when black-box models do not behave as expected (e.g., excellent accuracy during training, but very bad performance in production or with testing data), trying to gain insights into the model is required for troubleshooting (debugging). Post hoc explanations are commonly used. However, combining a black-box model and an explanation model (or tool) makes the process of troubleshooting very sensitive. As the explanation is not always correct, it can be difficult to tell whether or not the black-box model is wrong [START_REF] Rudin | Interpretable machine learning: Fundamental principles and 10 grand challenges[END_REF]. 
Furthermore, as stated by Rudin [START_REF] Rudin | Interpretable machine learning: Fundamental principles and 10 grand challenges[END_REF], "black-box models often predict the right answer for the wrong reason, leading to excellent performance in training but poor performance in practice". In psychology, this is known as the clever Hans phenomenon [START_REF] Pfungst | the horse of Mr. Von Osten.) a contribution to experimental animal and human psychology[END_REF], that comes from a horse (of the same name) claimed to perform arithmetic and other intellectual tasks, but actually was watching the reactions of his trainer. In the AI context, this phenomenon describes black-box models that can learn different relationships between variables than the ones of the true data generating process and, at the same time, provide very accurate predictions. Thus, efforts should be made to avoid this phenomenon in domains where high-stakes decisions can be made from the analysis of predictive models such as healthcare, road safety, or criminal justice. On the other hand, researchers working in the field of interpretable AI observe that linear models can perform very well on tabular data, especially when the full data science process is considered (e.g., variable selection, data preparation, model selection, . . . ) [START_REF] Rudin | Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[END_REF]. This could arise from the Rashomon Effect [START_REF] Breiman | Statistical modeling: The two cultures (with comments and a rejoinder by the author)[END_REF], which characterize problems where many accurate-but-different models exist to describe the same data [START_REF] Semenova | A study in rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning[END_REF]. In [START_REF] Semenova | A study in rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning[END_REF], the authors show empirically that when the Rashomon set (i.e. the set of almost-equally-accurate models) is large, "most machine learning methods tend to perform similarly, and also in these cases, interpretable or sparse (yet accurate) models exist". Lastly, interpretable models provide a simple and intelligible picture of the relationship between the explanatory variables and the dependent variable. This not only ease the process of elaborating new actions, but also allow end users to be confident in what they plan. Publications and outline Publications In this thesis, we try to meet these challenges by extracting, from data and without prior expert knowledge, the information necessary to build efficient and highly interpretable models. In our first proposal, we consider that data can often be partitioned so that refined predictive models can apply to different parts more efficiently and more meaningfully than a global model. We discover such a structure by clustering the instances based on the features' scores returned by the SHAP [START_REF] Scott | A unified approach to interpreting model predictions[END_REF] post-hoc analysis of a flexible black-box model. This clustering then informs a Bayesian multilevel -hierarchical -generalized linear model. Moreover, we propose to integrate nonlinearities by adding to its underlying functional form, the most important interactions between explanatory variables learnt by a self-organized polynomial network. 
This first contribution, described in section 3.2 and section 3.3, was published in: Crash prediction for a French highway network with an XAI-informed Bayesian hierarchical model, 2020 IEEE International Conference on Big Data (Big Data) [START_REF] Veran | Crash prediction for a french highway network with an xai-informed bayesian hierarchical model[END_REF] In our second proposal, we propose to exploit the hierarchical structure even better by computing a cluster-specific ranking of expansions of regularized linear models ordered by increasing complexity. We design a symbolic regression approach for the automatic discovery of a mapping from the original explanatory variables to a basis expansion which adds both sparsity and flexibility by including transforms of the variables. This exploration of the space of potential expansions of linear models is guided by simulated annealing. More precisely, given a predictive performance metric and a complexity metric, the meta-heuristic search conducts a multi-objective optimization to return a list of Pareto optimal models. Through a dynamic interpretative process, end users can navigate around these models and select the best model according to their needs, from the simplest one highlighting main effects to a more specific one depicting particular configurations related to few instances in the dataset. This contribution, described in section 3.4, was published in: Interpretable hierarchical symbolic regression for safety-critical systems with an application to highway crash prediction. Engineering Applications of Artificial Intelligence, 117:105534, 2023. [START_REF] Veran | Interpretable hierarchical symbolic regression for safety-critical systems with an application to highway crash prediction[END_REF] Outline In chapter 1, we started by introducing the context and stakes of road safety analysis and then described the numerous challenges crash prediction models have to face. In the next chapter, we will first provide more details on the different concepts of model interpretability through a review of the state of the art before introducing how crash predictions and explanations have been tackled in the community. In chapter 3, we present our contributions. More precisely, after giving an overview of our methodology in section 3.1, we will describe in section 3.2 the first module of our methodology, which consists in finding a hierarchical structure in the dataset. In section 3.3, we will explain how we integrate this hierarchical structure to enhance both the interpretability and efficiency of GLM. We will also explain how non-linearities are discovered and afterwards embedded in the GLM to enhance, yet again, its predictive performance. In section 3.4, we will describe how the aforementioned methodology can be improved by replacing the GLM by a symbolic regression approach that allows us to capture more relevant non-linearities while providing sparse and highly interpretable models. Then, in chapter 4, we will present the experiments carried out on the highway network dataset and on thirteen public datasets covering different tasks (viz., regression and classification) which underline very promising results as our framework outperforms fully interpretable models (e.g., linear models, shallow decision trees) while getting close to non-parametric models. 
Finally, in section 4.6, we unveil its potential for guiding a dynamic interpretative process with a realistic case study on the highway network dataset which highlights the benefit of using multi-objective optimization to help field experts develop new safety policies. Chapter 2 Related work Model interpretability Properties AI systems are used to assist field experts in making high stake decisions which may indirectly affect humans' lives. Thus, understanding how does the system behave is of paramount importance. In many areas, knowing the predictions made by a model is not enough. There is an emerging need that these models, besides being efficient, compel to numerous properties regarding their explainability [START_REF] Barredo Arrieta | Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai[END_REF]: • trustworthiness: the models will act as intended when facing a given problem • informativeness: to support decision making, they provide a great amount of information on the problem being tackled • confidence: they communicate on the confidence of their working regime • accessibility: they allow end users to get more involved in their development cycle • interactivity: they allow straightforward interactions with end users. The design of interpretable AI systems has been tackled with two distinctive approaches rallied around the field of explainable Artificial intelligence (XAI). On the one hand, there are models that are inherently interpretable as their complexity is restricted by their design (e.g., GLM, decision trees). On the other hand, complex models (e.g., deep Neural Network, ensemble of decision trees) act as black-boxes and do not provide, at least initially, any information on the relationship that connects the explanatory variables to the dependent variable. Knowledge is often obtained by using post hoc explainability techniques. Intrinsic interpretability Some models are inherently interpretable and indeed meet the above properties. They are often perceived as glass-box models as the relation between the explanatory variables and the dependent variable is explicit, thus simplifying the understanding of the marginal effects. Among them, we can find shallow decision trees, generalized linear models (GLM), generalized additive models (GAM). Decision trees Decision trees can be used to answer regression problems and classification problems. They are commonly found under the term Classification and Regression Trees (CART) introduced by Breiman [START_REF] Breiman | Classification and regression trees[END_REF]. These models aim at predicting the value of a dependent variable by learning simple decision rules from the features. They divide the input space by recursively splitting the data into subsets according to tests on features (see Fig. 2.1). The procedure to select the best split varies according to the problem at hand. For regression tasks, CART takes a feature and determines which cut-off minimizes the variance of the dependent variable. For classification tasks, CART seeks to minimize the Gini impurity of the class distribution defined as: Gini = 1 - C i=1 (p i ) 2 where p i is the probability of an observation to be classified to one of the C classes. The algorithm continues the search-and-split in the new nodes until a stopping criterion is reached (e.g., minimum number of observations in a node before splitting, minimum number of observations in a terminal node). 
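To illustrate the split-selection criterion just described, the sketch below scores candidate cut-offs on a single feature by the weighted Gini impurity of the two resulting child nodes; it is a didactic simplification, not the actual CART implementation, and the toy data are invented.

```python
import numpy as np

def gini(labels):
    # Gini = 1 - sum_i p_i^2 over the C classes present in `labels`
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Return the cut-off on feature x minimising the weighted child impurity."""
    best = (None, np.inf)
    for cut in np.unique(x)[:-1]:
        left, right = y[x <= cut], y[x > cut]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (cut, score)
    return best

# Toy example: "crash-prone" (1) vs "normal" (0) segments split on traffic volume
traffic = np.array([8, 12, 15, 22, 30, 41, 55, 63], dtype=float)
label = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(best_split(traffic, label))   # -> (22.0, 0.0): a perfect split
```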
Finally, to predict the outcome of a new observation, CART averages all training instances in the terminal node reached by the new observation after browsing the tree. Interpreting the prediction is straightforward as it combines, from the root node to the terminal node, simple decision rules. However, for deep trees, the high number of decision rules used for a prediction makes it hard to understand. Thus, in our study, we consider a CART model to be interpretable if its depth is, at most, equal to 5. Generalized linear model In a GLM [START_REF] Mccullagh | Generalized linear models[END_REF], the dependent variable Y is assumed to be generated from a particular distribution in an exponential family (e.g., normal, binomial, Poisson distributions). The mean µ of the distribution is connected to a linear predictor Xβ such that: where g is a link function introduced to map the non-linear transformed mean g(µ) g(E[Y |X]) = g(µ) = Xβ (2.1) to the definition space of the linear predictor, the real line. The choice of a relevant link function depends on the problem at hand. For instance, for a dependent variable generated from a binomial distribution, the preferable link function is the logit. For that model, Eq. 2.1 becomes: logit(µ) = ln( µ 1 -µ ) = Xβ which is also referred to as logistic regression. For a normally distributed dependent variable, the link function is simply the identity function id: id(µ) = µ = Xβ The coefficient β can be estimated by maximum likelihood estimation or Bayesian inference (see section 3.3.2 for a detail explanation of these techniques). Generalized additive model To model nonlinear but still interpretable relationships between the dependent variable and the explanatory variables, one can apply transformations (e.g., logarithm, categorization) to the original variables. Another option is to use GAM [START_REF] Trevor | Generalized additive models[END_REF], an extension of GLM where the linear predictor embeds unknown smooth functions of m explanatory variables: g(E[Y |X 1 , ..., X m ]) = β 0 + m i=1 f i (X i ) (2.2) To learn the nonlinear flexible functions f i , different approaches have been proposed: Spline basis The original one is based on spline basis and makes use of the backfitting algorithm [START_REF] Trevor | Generalized additive models[END_REF], which works by iterative smoothing of partial residuals. However, it is difficult to estimate the degree of smoothness of splines f i . This parameter can be either set by the users or estimated with cross-validation during model training. However, this original approach is computationally expensive (O(n 3 ) with n the number of observations). Thus, several methods try to reduce the size of the basis used for smoothing [START_REF] Fahrmeir | Bayesian inference for generalized additive mixed models based on markov random field priors[END_REF][START_REF] Kim | Smoothing spline gaussian regression: more scalable computation via efficient approximation[END_REF], or find sparse representations of the smooth functions using Markov random fields [START_REF] Rue | Approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations[END_REF]. Modern machine learning techniques In [START_REF] Tutz | Generalized additive modeling with implicit variable selection by likelihood-based boosting[END_REF], the authors proposed to train GAM with a likelihood-based boosting procedure. Lately, Lou et al. 
[START_REF] Lou | Intelligible models for classification and regression[END_REF], in a framework called Explainable Boosting Machine (EBM), proposed a tree-based extension of GAM that introduces a small number of two-dimensional interactions: g(E[Y |X 1 , ..., X m ]) = β 0 + f i (X i ) + f ij (x i , x j ) (2.3) To learn the nonlinear function f , they combine bagging [START_REF] Breiman | Bagging predictors[END_REF], gradient boosting [START_REF] Jerome H Friedman | Greedy function approximation: a gradient boosting machine[END_REF] (further explained in section 3.2.2) and automatic detection of pairwise interactions. During training, features are trained one at a time in round-robin fashion to mitigate the effects of multicollinearity. When applied to tabular data, EBM outperforms standard GAM based on regression splines and is often almost as accurate as flexible black-box models. Moreover, EBM is also considered to be interpretable because each feature contributes to the prediction in an additive way (see Eq. 2.3) thus simplifying the understanding of its influence on the dependent variable. Post hoc interpretability Highly flexible models usually reach better predictive performance on some datasets mainly because they manage to capture nonlinearities in the data. However, this comes along with more complex design that makes them act like black-boxes. Posthoc explanation tools are required to provide some degree of interpretability. Different approaches have been proposed such as global model-agnostic explanation methods, simulation-based methods or local explanation methods. Global model-agnostic explanation methods These post hoc methods are used to describe the average behavior of a machine learning model. They quantify the main effects and the interaction effects of the Partial Dependence plots First introduced by Friedman [START_REF] Jerome H Friedman | Greedy function approximation: a gradient boosting machine[END_REF], partial dependence plots (PDP) show the dependence between the dependent variable and a set of features of interest, marginalizing over the values of the complementary features: pd(x S ) = E X C [ f (x S , X C )] = f (x S , x C )p(x C )dx C where f is the ML model, x S are the features of interest and X C are the complementary features here treated as random variables. As we marginalize over the other features, we get a function that depends only on features in S [START_REF] Molnar | Interpretable Machine Learning[END_REF]. We obtain the PDP by computing the integral for various values of x S . This plot provides insights into the nature of the relationship between the target and a feature (see Fig. 2.2). However, one main limitation of PDP lies in its assumption of independence between features which may result in the transmission of misleading information. Sobol indices In [START_REF] Ilya | Global sensitivity indices for nonlinear mathematical models and their monte carlo estimates[END_REF], based on a finite expansion of a multivariate function f and under several constraints to maintain its unicity, Sobol was able to decompose the global variance V (Y ) into parts attributable to input features and combinations of features: V (Y ) = n i=1 V i + n i n j>i V ij + • • • + V 12...n (2.4) where V i = V ( fi (x i )) = V (E[Y |x i ]) V ij = V ( fij (x i , x j )) = V (E[Y |x i , x j ]) -V i -V j . . . From Eq. 
2.4, one can estimate the main effect of a feature x i with the first-order Sobol indices, which is the amount of variance in the output explained by that feature: S i = V (E[Y |X i ]) V (Y ) Simulation-based methods These methods seek to obtain a global understanding of a black-box model, not by using global post hoc analysis tools, but by simulating its prediction function with a simpler one. Tan et al. [START_REF] Tan | Learning global additive explanations for neural nets using model distillation[END_REF] proposed to leverage model distillation techniques [START_REF] Buciluǎ | Model compression[END_REF][START_REF] Hinton | Distilling the knowledge in a neural network[END_REF] to learn global additive explanations of the form: Ĥ(x) = h 0 + i h i (x i ) + i̸ =j h ij (x i , x j ) + i̸ =j j̸ =k h ijk (x i , x j , x k ) + . . . (2.5) where h are either splines or bagged trees. During training, they seek to minimize the mean squared error between the prediction function of the black-box model and Ĥ(x). If h are splines, they use penalized maximum likelihood and estimate the smoothing parameters with cross-validation. If h are bagged trees, cyclic gradient boosting [START_REF] Lou | Intelligible models for classification and regression[END_REF] is used. To understand the behavior of the model, they analyze the feature shapes by plotting the feature's contribution h i (x i ) against the domain of x i . Another approach, proposed by Lakkaraju et al. [START_REF] Lakkaraju | Faithful and customizable explanations of black box models[END_REF], use sub-space explanations to mimic the behavior of black-box models in classification problems. They leverage a two-level decision set representation, where the outer decision rules describe the subspaces (i.e. specific regions of the feature space), and the inner decision rules explain the decision logic of the black-box model within the related sub-spaces. Moreover, their approach allow end users to customize explanations by adding certain features of interest in the outer level representing the sub-spaces. Local model-agnostic explanation methods The objective of local explanation method is to learn local approximations to explain individual predictions. A broad variety of methods has been proposed and will be discussed in this section. First, we start by explaining how a contribution of a feature is computed on a simple linear model prediction and how it is extended to any model by means of Shapley values, a game theoretic approach. Then, we introduce LIME, a method to interpret individual model predictions based on locally approximating the model around a given prediction. Finally, we explain how Shapley values and LIME have been connected together to provide an efficient tool for local model-agnostic explanations. Simple computation of contributions in a linear model setting A simple linear model prediction for an instance x is computed with: f (x) = β 0 + p i=1 β i x i where β i is the weight corresponding to variable i, and x i the value of variable i on this instance. The contribution ϕ i of the i-th feature can be computed by: ϕ i ( f ) = β i x i -E[β i X i ] where E[β i X i ] is the average effect of feature i [START_REF] Molnar | Interpretable Machine Learning[END_REF]. Thus, with a linear model, computing a feature's contribution to a prediction is straightforward. To measure how much a feature affects the prediction of more complex nonlinear models, a game theoretic approach based on the computation of Shapley values has been developed. 
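The linear-model contributions above are straightforward to compute; the following sketch evaluates ϕ_i = β_i x_i − E[β_i X_i] for one instance, estimating the expectation on a training sample. The fitted model and the data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                      # illustrative training sample
beta_true = np.array([2.0, -1.0, 0.5])
y = 1.0 + X @ beta_true + rng.normal(0, 0.1, 500)

model = LinearRegression().fit(X, y)

def linear_contributions(model, X_train, x):
    """phi_i = beta_i * x_i - E[beta_i * X_i], with the expectation estimated on X_train."""
    return model.coef_ * x - model.coef_ * X_train.mean(axis=0)

x = np.array([1.2, -0.3, 0.8])                     # instance to explain
phi = linear_contributions(model, X, x)
print(phi)
# Sanity check: the contributions sum to f(x) minus the average prediction
print(phi.sum(), model.predict(x[None, :])[0] - model.predict(X).mean())
```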
[START_REF] Ls Shapley | Quota solutions op n-person games1[END_REF] measure how much feature i contributes to the prediction of any model f . Specifically, if S is a subset of the p features considered in the model, then the Shapley value ϕ i of a feature i is defined as: Model-agnostic computation of a contribution with Shapley regression value Shapley regression values ϕ i ( f ) = S⊆{1,...,p}\{i} |S|!(p -|S| -1)! p! (v(S ∪ {i}) -v(S)) (2.6) where, for a given instance x, v x (S) is the prediction of feature values in S that are marginalized over features not included in S [START_REF] Molnar | Interpretable Machine Learning[END_REF]: v x (S) = f (x 1 , ..., x p )dP x / ∈S -E X [ f (X)] Local interpretable model-agnostic explanations (LIME) Ribeiro et al. [START_REF] Tulio Ribeiro | why should i trust you?" explaining the predictions of any classifier[END_REF] introduced LIME to approximate the model's behavior in the neighborhood of a given instance. For an instance x, they build a locally faithful and interpretable surrogate model g by solving: ξ(x) = arg min g∈G L( f , g, π x ) + Ω(g) (2.7) with L a loss function, f the model being explained, G a set of interpretable models (e.g., decision trees, linear models), and Ω a complexity measure. In Eq. 2.7, Ω can be perceived as a regularization term that constrains the surrogate model g to provide short explanations. Moreover, to ensure local fidelity, Ribeiro et al. define L as being the locality-aware square loss: L( f , g, π x ) = z,z ′ π x (z)(f (z) -g(z ′ )) 2 (2.8) where z ′ are binary vectors sampled at random from x ′ ∈ {0, 1} pviz., the interpretable representation of the instance x ∈ R p , p being the number of input features, which indicates if the feature i is present (x ′ i = 1) or missing (x ′ i = 0) in the surrogate model -and z are the original representations of z ′ . In Eq. 2.8, π x is a proximity measure defined by the authors as being the exponential kernel: π x = exp(-d(x, z) 2 /σ 2 ), with d a distance function and σ the kernel width. LIME creates sparse linear explanations with a measure of faithfulness in the neighborhood of a given instance thanks to the introduction of the kernel π x . However, LIME highlights some limitations regarding the definition of this neighborhood. Defining a good neighborhood in an automatic manner with a proper kernel is still in development. For now, the selection of appropriate kernel settings is made heuristically. Also, as the neighborhood is computed based on points sampled uniformly at random, the analysis can be performed on unlikely data points. Moreover, recent work [START_REF] Slack | Fooling lime and shap: Adversarial attacks on post hoc explanation methods[END_REF] tend to show that it can be difficult to assess its reliability and robustness. In a theoretical study of LIME verified by simulations, Garreau and Luxburg [START_REF] Slack | Fooling lime and shap: Adversarial attacks on post hoc explanation methods[END_REF] showed that, while this post-hoc approach "discovers interesting features, it might forget some important features and the surrogate model is not faithful". Shapley additive explanation (SHAP) Lundberg and Lee [START_REF] Scott | A unified approach to interpreting model predictions[END_REF] unify existing approaches such as LIME [START_REF] Tulio Ribeiro | why should i trust you?" 
explaining the predictions of any classifier[END_REF], Shapley values [START_REF] Ls Shapley | Quota solutions op n-person games1[END_REF], DeepLIFT [START_REF] Shrikumar | Learning important features through propagating activation differences[END_REF], etc. SHAP quantifies the contribution of each original explanatory variable to each prediction. The authors proposed to represent Shapley value explanations as an additive feature attribution method. Let h x be an input mapping from simplified inputs x ′ , indicating feature presence, to original inputs x such that x = h x (x ′ ), and f a black-box model. An additive feature attribution model tends to reach: g(z ′ ) ≈ f (h x (z ′ )) whenever z ′ ≈ x ′ Moreover, a model g in this class is a linear function of binary variables: g(z ′ ) = ϕ 0 + M i=1 ϕ i z ′ i (2.9) where the z ′ i variables indicate if the feature is observed (z ′ i = 1) or unknown (z ′ i = 0), ϕ i ∈ R are the feature attribution values, and M is the number of input features. The authors state that there is a single unique solution that resolves this linear function while having the three following desirable properties [START_REF] Scott | A unified approach to interpreting model predictions[END_REF]: • local accuracy: the explanation model g shall match the output of the original model f for the simplified input x ′ • missingness: a missing feature has no impact • consistency: "if a model changes so that some simplified input's contribution increases or stays the same regardless of the other inputs, that input's attribution should not decrease" [START_REF] Scott | A unified approach to interpreting model predictions[END_REF] They prove that the coefficients ϕ of this linear function in fact corresponds to the Shapley values of the features: ϕ i ( f , x) = z ′ ⊆x ′ |z ′ |!(M -|z ′ | -1)! M ! [ fx (z ′ ) -fx (z ′ \ i)] with M the number of simplified input features, |z ′ | the number of non-zero entries in z ′ , and z ′ ⊆ x ′ represents all z ′ vectors where the non-zero entries are a subset of the non-zero entries in x ′ . In the same paper, the authors introduce KernelSHAP, a model-agnostic approximation method that connects linear LIME and Shapley values. They prove that, given an appropriate weight function π x ′ (z ′ ), for an instance x, the Shapley values of the original features are the coefficients of the linear model g mimimizing the loss function defined in eq. 2.7: L( f , g, π x ) = z ′ ∈Z [ f (h x (z ′ )) -g(z ′ )] 2 π x ′ (z ′ ) where: π x ′ (z ′ ) = M -1 M |z ′ | |z ′ |(M -|z ′ |) Model interpretability in the ML community The machine learning research community offers nuanced perspectives about the merits of post-hoc explanations. As a representative example, Lipton [START_REF] Zachary C Lipton | The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery[END_REF] suggests that post-hoc explanations should not be ruled-out as valid, although indirect, means of knowledge about the underlying data generating process. Lipton also underlines the potential risk of focusing on misleading information when relying on post-hoc explanations. Moreover, he considers that transparent linear models may not always be more interpretable than deep neural networks (DNN) because they often need heavily engineered features to obtain similar performances. Likewise, Poursabzi-Sangdeh et al. 
[START_REF] Poursabzi-Sangdeh | Manipulating and measuring model interpretability[END_REF] observe that practitioners can be affected by the information overload phenomenon [START_REF] Russell L Ackoff | Management misinformation systems[END_REF][START_REF] Keller | Effects of quality and quantity of information on decision effectiveness[END_REF] when the number of features becomes too large. Otherwise, Rudin [START_REF] Rudin | Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[END_REF] emphasizes the importance of taking into account the whole data analysis process, including the preprocessing steps: "when considering problems that have structured data with meaningful features, there is often no significant difference in performance between more complex classifiers (DNN, boosted decision trees, random forests) and much simpler classifiers (logistic regression, decision lists) after preprocessing". Rudin points out that there is not necessarily a trade-off between accuracy and interpretability: performance gaps can be reduced iteratively through better data processing and model understanding. The latter is facilitated by the use of interpretable models. Crash prediction models and their interpretability In their survey, Lord and Mannering [START_REF] Lord | The statistical analysis of crashfrequency data: A review and assessment of methodological alternatives[END_REF] describe the different methods used for longterm crash frequency analysis. These methods can be classified in two categories: on one side, parametric statistical models explicitly associate the crash related variable to a vector of input explanatory variables. On the other side, non-parametric models use more complex design to better fit the data at the expense of their interpretability. In this section, we describe both approaches, starting with the parametric statistical models. Parametric statistical models Poisson regressions Crash-count data observations y i being positive integers, they have originally been modelled by Poisson regressions [START_REF] Jones | Analysis of the frequency and duration of freeway accidents in seattle[END_REF][START_REF] Sarath | Estimating truck accident rate and involvements using linear and poisson regression models[END_REF][START_REF] Miaou | The relationship between truck accidents and geometric design of road sections: Poisson versus negative binomial regressions[END_REF]. Poisson distribution is a special shape of the binomial distribution with a small probability of an event and a large but unknown number of trials. GLM have been used to link a linear predictor made of p explanatory variables to the rate λ i of a Poisson likelihood. As explained in section 2.1.2, an appropriate canonical link function is chosen to map the definition space of crash data (viz., discrete and positive only) to the real line covered by the linear predictor. For Poisson regressions, a log-link function is selected. The model can be formulated as: y i ∼ P oisson(λ i ) log(λ i ) = β 0 + p j=1 β j x ij Negative Binomial regression The Poisson distribution has a unique free parameter, the rate λ i . The variance cannot be adjusted independently of the mean. Yet, the observed variance of crash count data often exceeds this amount. A negative binomial (NB) distribution can be proposed to better deal with this over-dispersion phenomenon. Given the probability of crash, the NB gives the probability of observing n crashes before the α-th non-crash. 
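Before the negative binomial parameterization is detailed below, the following sketch shows what the over-dispersion check and the two count regressions could look like with statsmodels; the simulated data and the dispersion value are assumptions for illustration, and statsmodels' dispersion parameter follows the NB2 convention (the inverse of the α used in the text).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
traffic = rng.uniform(5, 80, n)                   # illustrative exposure variable
X = sm.add_constant(traffic)

# Simulate over-dispersed counts: a Poisson rate mixed with a gamma term.
# `disp` is the dispersion in statsmodels' NB2 convention (variance = mu + disp*mu^2),
# i.e. the inverse of the alpha used in the text's parameterization.
disp = 0.8
lam = np.exp(-2.0 + 0.03 * traffic)
y = rng.poisson(lam * rng.gamma(1 / disp, disp, n))

print("mean:", y.mean(), " variance:", y.var())   # variance > mean -> over-dispersion

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=disp)).fit()

print("Poisson AIC:", poisson_fit.aic)
print("NB AIC     :", nb_fit.aic)                 # typically much lower on such data
```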
With λ representing the mean, its probability mass function can be parameterized as follows [START_REF] Walck | Hand-book on statistical distributions for experimentalists[END_REF]:

NB(n; \lambda, \alpha) = \binom{n + \alpha - 1}{n} \left( \frac{\lambda}{\lambda + \alpha} \right)^{n} \left( \frac{\alpha}{\lambda + \alpha} \right)^{\alpha}

The variance of the NB is \lambda + \lambda^2 / \alpha. Therefore, the parameter α controls the amount of over-dispersion. When α → ∞, the NB likelihood approaches a Poisson(λ) distribution. Moreover, as shown in [START_REF] Walck | Hand-book on statistical distributions for experimentalists[END_REF], a NB(λ, α) corresponds to a Poisson(λ) where λ comes from a gamma distribution Gamma(a = α/λ, b = α) whose probability density function is defined as:

Gamma(x; a, b) = \frac{a^{b} x^{b-1} e^{-ax}}{\Gamma(b)}

for x > 0, Γ(b) being the gamma function. The Poisson-gamma model, a continuous mixture of Poisson distributions with rates distributed as a gamma distribution, has been identified as a reference model by road safety experts (see for example the AASHTO Highway Safety Manual [START_REF]The highway safety manual[END_REF]). We find it at the core of many related works [START_REF] Miaou | Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and bayes versus empirical bayes methods[END_REF][START_REF] Lord | Modeling crash-flowdensity and crash-flow-v/c ratio relationships for rural and urban freeway segments[END_REF][START_REF] El | Comparison of two negative binomial regression techniques in developing accident prediction models[END_REF][START_REF] Lord | Examining the effects of site selection criteria for evaluating the effectiveness of traffic safety countermeasures[END_REF] where crash-count regression is done with a linear model attached to the λ parameter of the NB through a log-link function.

Generalized linear mixed model

In some studies, correlation among observations can be taken into account on the temporal level. This arises in the context of panel data analysis, where crash data are considered to be collected through a series of repeated observations of the same groups (e.g., roadway segments) over a time period. To account for the differences between segments, GLM are accommodated to introduce random effects in addition to the usual fixed effects. The resulting model is known as a generalized linear mixed model (GLMM). In this configuration, Poisson regressions are adapted as follows:

y_{ik} \sim Poisson(\lambda_{ik}), \qquad \log(\lambda_{ik}) = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + u_k

where u_k is the random effect for group k, supposed to be distributed according to a zero-centered Gaussian distribution with variance σ² (σ > 0). GLMM have found many applications in road safety analysis. Amongst them, Johansson et al. [START_REF] Johansson | Speed limitation and motorway casualties: a time series count data regression approach[END_REF] analyzed the effect of a lowered speed limit on the number of crashes on Swedish highways. In a study of crashes caused by median crossovers in Washington State, Shankar et al. [START_REF] Venkataraman | Evaluating median crossover likelihoods with clustered accident counts: An empirical inquiry using the random effects negative binomial model[END_REF] compared a standard NB and a NB modified to account for random effects relating to each site. However, they did not observe any benefits from using a NB with random effects.

Hierarchical model

GLM can also be refined into hierarchical models to take into account clusters of related observations.
For example, crashes occurring in a given geographical region may possess specific characteristics while not differing entirely from crashes in other regions. A simple Poisson regression, by pooling all the observations together, would assume an invariant population and couldn't benefit from regional peculiarities. Otherwise, k clusters could be modeled with the addition of k - 1 mutually exclusive binary variables to the linear model, but this would correspond to no pooling at all and the clusters would be assumed independent of one another. Contrariwise, hierarchical models, also known as multilevel models, offer partial pooling through an adaptive regularizing prior. Thus, in the following multilevel Poisson regression with k clusters, hyperpriors µ and σ will allow an adaptive shrinkage of the cluster-specific β_{jk} towards a common mean:

level 0 (model): y_{ik} \sim Poisson(\lambda_{ik}), \quad \log(\lambda_{ik}) = \beta_{0k} + \sum_{j} \beta_{jk} x_{ijk}
level 1 (priors): \beta_{jk} \sim N(\mu, \sigma)
level 2 (hyperpriors): \mu \sim N(0, 100), \quad \sigma \sim HN(100)

with HN being the half-normal distribution. Gelman indicates in [START_REF] Gelman | Prior distributions for variance parameters in hierarchical models (comment on article by browne and draper)[END_REF] that a half-normal distribution with high standard deviation is a non-informative but proper prior for the variance parameter σ. In this way, Jones and Jørgensen [START_REF] Andrew | The use of multilevel models for the prediction of road accident outcomes[END_REF] design a multilevel model to predict the severity of an incident given the involved casualties (level 1), their respective vehicles (level 2) and the accident location (level 3). Ahmed et al. [START_REF] Ahmed | Exploring a bayesian hierarchical approach for developing safety performance functions for a mountainous freeway[END_REF] conceive a multi-level model to predict crashes on a mountainous freeway by modeling both the dry or snow seasons and spatial correlation between adjacent sites. Deublein et al. [START_REF] Deublein | Prediction of road accidents: A bayesian hierarchical approach[END_REF] propose a multilevel model to manage simultaneously 4 response variables (resp. injury accidents, light injuries, severe injuries and fatalities) through gamma updating of the parameters. Finally, Fawcett et al. [START_REF] Fawcett | A novel bayesian hierarchical model for road safety hotspot prediction[END_REF] predict future safety hotspots with a multilevel model where, first, the variance of the rate of a NB increases with the timestamp of an observation and, second, a global trend effect is altered by site-specific ones.

Intrinsic interpretability

The aforementioned models are interpretable by nature. Indeed, the relation between the explanatory variables and the dependent one is explicit. Nevertheless, as Poisson regressions and their variants use a log-link function to map the linear predictor to the crash-related variable, the effects of the coefficients β are not as obvious as those of a linear model. To mitigate this, Kweon and Kockelman [START_REF] Kweon | Safety effects of speed limit changes: Use of panel models, including speed, use, and design variables[END_REF] proposed to use the incidence rate ratio, which measures the percentage of change in the crash-related variable when an explanatory variable is increased by one unit. Moreover, analyzing the effects of coefficients shall not be based solely on point estimates but must come along with the knowledge of the uncertainty associated with them.
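As a concrete illustration of the multilevel formulation above, and of how Bayesian estimation attaches uncertainty to every coefficient, the following is a minimal sketch assuming the PyMC probabilistic programming library; the data, shapes and hyper-parameter values are synthetic placeholders, not those of the study.

```python
import numpy as np
import pymc as pm

# Synthetic stand-in for the crash data (hypothetical shapes).
rng = np.random.default_rng(0)
n, p, k = 200, 3, 4
X = rng.normal(size=(n, p))                      # explanatory variables
cluster_idx = rng.integers(0, k, size=n)         # cluster of each segment
y = rng.poisson(lam=np.exp(0.3 + X @ np.array([0.4, -0.2, 0.1])))

with pm.Model() as multilevel_poisson:
    # Level 2: hyperpriors shared by all clusters.
    mu = pm.Normal("mu", mu=0.0, sigma=100.0)
    sigma = pm.HalfNormal("sigma", sigma=100.0)

    # Level 1: cluster-specific intercepts and slopes, partially pooled
    # towards the common mean mu.
    beta0 = pm.Normal("beta0", mu=mu, sigma=sigma, shape=k)
    beta = pm.Normal("beta", mu=mu, sigma=sigma, shape=(k, p))

    # Level 0: Poisson likelihood with a log link.
    eta = beta0[cluster_idx] + (beta[cluster_idx] * X).sum(axis=1)
    lam = pm.math.exp(eta)
    pm.Poisson("y_obs", mu=lam, observed=y)

    # MCMC sampling yields full posterior distributions, hence an
    # uncertainty estimate for every cluster-specific coefficient.
    trace = pm.sample(draws=1000, tune=1000)
```

Because the cluster-specific coefficients share the same hyperpriors, clusters with few observations are shrunk towards the common mean, which is exactly the partial pooling behaviour discussed above.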
Most studies mitigate this by using Bayesian inference. By associating a probability distribution to each coefficient, they can measure the underlying uncertainty of the coefficients.

Non-parametric Machine Learning models

Support vector machines

SVM were originally introduced by Cortes and Vapnik [START_REF] Cortes | Support-vector networks[END_REF] for classification tasks. A version for regression has been developed by Drucker et al. [START_REF] Drucker | Support vector regression machines[END_REF]. Among the variants proposed for regressions, the epsilon-insensitive SVM (ε-SVM) is the most common. Given a vector y ∈ R^n of dependent variables and X ∈ R^{n×p} the associated explanatory variables, ε-SVM aims to find a function f(x_i) that deviates from y_i by a value inferior to ε for each training point x_i, i ∈ {1, ..., n} (see Fig. 2.3). In addition, f has to be as flat as possible. To achieve this, ε-SVM solves the following problem:

\min_{w, b, \zeta, \zeta^*} \; \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\zeta_i + \zeta_i^*) \qquad (2.10)
subject to
y_i - w^T \phi(x_i) - b \leq \varepsilon + \zeta_i,
w^T \phi(x_i) + b - y_i \leq \varepsilon + \zeta_i^*,
\zeta_i, \zeta_i^* \geq 0, \quad i = 1, ..., n

where ζ_i, ζ_i^* penalize observations lying outside the ε margin and C is a positive regularization constant that controls the penalty imposed by ζ_i and ζ_i^*. In this problem, φ is a transformation function that maps x to a higher dimensional space where the problem resolution is linear. Without it, ε-SVM would not be able to model nonlinear relationships between the explanatory variables and the dependent variable in the original space. The optimization problem described above is computationally simpler to solve by constructing a Lagrangian function which introduces positive multipliers α_i and α_i^* for each observation x_i [START_REF] Drucker | Support vector regression machines[END_REF]:

\min_{\alpha, \alpha^*} \; \frac{1}{2} (\alpha - \alpha^*)^T G (\alpha - \alpha^*) + \varepsilon \mathbf{1}^T (\alpha + \alpha^*) - y^T (\alpha^* - \alpha) \qquad (2.11)
subject to
\mathbf{1}^T (\alpha - \alpha^*) = 0,
0 \leq \alpha_i, \alpha_i^* \leq C, \quad i = 1, ..., n

In eq. 2.11, G is a n × n positive semidefinite matrix whose elements are computed with the kernel function k: G_{ij} = k(x_i, x_j) where k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ (⟨·, ·⟩ being the inner product). Here, the explicit mapping φ can be avoided as we can obtain G by computing G_{ij} directly with the kernel function. This process is known as the kernel trick. By doing so, ε-SVM estimates f in the transformed input space and not the original one. To generate this transformation, various kernels have been proposed: linear (k(x_i, x_j) = x_i^T x_j), polynomial (k(x_i, x_j) = (1 + x_i^T x_j)^d, with d the degree of the polynomial), or based on radial basis functions (k(x_i, x_j) = exp(-||x_i - x_j||^2 / σ^2)). Finally, to make a prediction, ε-SVM computes the contribution of each support vector (i.e., a vector x_i for which α_i or α_i^* is different from zero):

f(x) = \sum_{i \in S} (\alpha_i - \alpha_i^*) k(x_i, x) + b

where S is the set of support vectors. In the binary classification problem, the objective is not to find a hyperplane that contains a maximum number of observations within its ε margin, but to determine the best decision boundary between the two classes. In this case, the goal is to maximize the margin between the closest vectors of each class and the hyperplane. In highway safety analysis, SVM have been applied to various problems. Li et al. [START_REF] Li | Predicting motor vehicle crashes using support vector machine models[END_REF] compare a NB and a SVM on crash-count predictions for rural roadway segments located in Texas, USA.
They observe that SVM predict crash data more effectively and accurately than traditional NB. In a study on crash data collected at 326 freeway diverge areas, Li et al. [START_REF] Li | Using support vector machine models for crash injury severity analysis[END_REF] apply SVM to the task of predicting crash severities. When compared to an ordered probit model, they found that SVM produces better prediction performance. Lately, Dong et al. [START_REF] Dong | Support vector machine in crash prediction at the level of traffic analysis zones: assessing the spatial proximity effects[END_REF] study the effects on crash-count predictions of spatial correlations at different scales on a dataset including more than 57 000 crashes in the Hillsborough county of Florida. Artificial neural networks Artificial neural networks (ANN) are biologically inspired computational networks first introduced by McCulloch [START_REF] Warren | A logical calculus of the ideas immanent in nervous activity[END_REF]. ANN are composed of artificial neurons typically organized into a multi-layer structure. Neurons of one layer are connected to neurons of the next layer. An ANN generally consists of three types of layers: the input layer receives the external data, then the hidden layer(s) process the data and finally, the output layer produces a result (e.g. crash-count prediction). In the well-known configuration of multilayer perceptrons (MLP), neurons are fully connected i.e., every neuron in one layer is connected to every neuron in the next layer (see Fig. 2.4a). Each neuron has inputs and produces a single output by first computing weighted sum of its inputs and then applying a nonlinear activation function σ such as sigmoid or hyperbolic tangent (see Fig. 2 .4b). When training an ANN, the objective is to adjust the connection weights to minimize the cost function that measures the overall performance of the network. One of the most common approach is backpropagation: first, it calculates the gradient of the cost function associated with a given state with respect to the weights, and then updates the weights in the opposite direction of the gradient, as it is the direction of the steepest descent. Several variants of gradient descent have been developed and are well explained in [START_REF] Ruder | An overview of gradient descent optimization algorithms[END_REF]. ANN have found numerous applications in highway safety analysis. Abdelwahab and Abdel-Aty [START_REF] Hassan | Development of artificial neural network models to predict driver injury severity in traffic accidents at signalized intersections[END_REF] compares a backpropagation neural network (BPNN) with an ordered logit statistical model to predict the severity of accidents at intersections. Chang [START_REF] Chang | Analysis of freeway accident frequencies: negative binomial regression versus artificial neural network[END_REF] compares a one hidden layer BPNN with NB regression for crash frequencies prediction. BPNN slightly outperforms the NB model with a difference of 0.6% of accuracy on testing data. Huang et al. Post-hoc interpretability SVM and ANN are opaque decision systems. The former, by introducing the kernel function, models complex relationships in a high-dimensional space. The later often involves a large number of parameters growing exponentially with the number of hidden layers. To reduce the complexity of the models, one can restrict the kernel of SVM to be linear or define a simpler architecture for the ANN (e.g. 
a perceptron with a single hidden layer with few neurons). However, trying to reduce complexity in this way will tend to deteriorate the predictive results. To balance performance and interpretability, all studies of the previous section opted instead for the use of posthoc explanation methods. Authors perform a sensitivity analysis of the black-box model: for each explanatory variable, while keeping all other variables unchanged, they record the effect on the output prediction of a perturbation of the current variable (see Fig. 2.5). However, as stated in [START_REF] Xie | Predicting motor vehicle collisions using bayesian neural network models: An empirical analysis[END_REF], the relationship between the current explanatory variable and crash frequency may vary due to correlated variables, making it difficult to interpret the sensitivity plots. Moreover, by generating simulated data while assuming the explanatory variables to be independent, sensitivity analysis will indiscriminately generate potentially misleading hypothetical predictions for unlikely data points. Finally, this method only provides global interpretations and does not allow domain experts to identify roadway segments where the predictive model behaves singularly. To mitigate this, some studies used local model-agnostic explanation methods. Mihaita et al. [START_REF] Mihaita | Arterial incident duration prediction using a bi-level framework of extreme gradient-tree boosting[END_REF] investigate with SHAP the impact of different features on arterial incident duration. They observe that the number of affected lanes and the hour of the day seem to be the most important features which increase the incident duration prediction. Parsa et al. [START_REF] Amir Bahador Parsa | Toward safer highways, application of xgboost and shap for real-time accident detection and feature analysis[END_REF] extend the use of SHAP by analyzing complex and nonlinear joint impacts of features on the output of a XGBoost model, in the context of real-time accident detection. Chapter 3 Contributions Overview Introduction In previous chapters, we stressed the need to be able to provide explanations in addition to predictions when actions are pondered based on the analysis of predictive models. To obtain such explanations, we saw that two approaches have been developed: intrinsically interpretable models that offer a direct understanding of their internal decision process, versus post-hoc analysis of black-box models. However, we have seen that using post-hoc explanation methods comes with some limitations. For data-driven high-stakes decisions, it is imperative that the explanations transparently and truthfully reflect what the model has learned. Moreover, as observed by Rudin [START_REF] Rudin | Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[END_REF], simple and interpretable models can achieve predictive performance similar to black-box models on well-structured tabular data. For these reasons, we favored the use of simple interpretable models. The focus of our work is to enhance the predictive power of simple models without deteriorating their high degree of interpretability. Our main contributions aim to achieve this goal in two steps. First, we introduce a supervised method to discover a partition of the original observations and build a hierarchical model above it. 
Second, we introduce two algorithmic approaches (viz., a polynomial neural network, and an extension of multi-objective symbolic regression) to discover highly discriminant non-linear transforms of the original variables. The former can handle correlations among groups of observations which usually lead to improvements in the quality of the models' predictions and of their interpretation. The latter, while remaining simple (e.g. first-order interactions), allow the models to capture more of the variability in the dependent variable. Bayesian hierarchical generalized linear model In our first proposal presented in Fig. 3.1, we describe how to extract, from data and without prior expert knowledge, the information necessary to build a hierarchical Bayesian model with first-order interactions between explanatory variables to efficiently solve regression or classification problems while preserving interpretability. First, in the hierarchical structure module described in section 3.2, we elucidate how Shapley values of the variables give rise to a clustering of the original observations that is likely to make sense in terms of the problem to be solved. This clustering then informs a multilevel Bayesian model (see section 3.3). Second, we explain how a self-organized neural network reveals the most important interactions between explanatory variables (viz., first-order interactions module). These interactions are then integrated to the functional form of the multilevel model (see section 3.3). Interpretable hierarchical symbolic regression In our second proposal (see Fig. 3.2), the discovered hierarchical structure is even better exploited by using symbolic regression to capture sparser, more complex but still interpretable relationships between the explanatory variables and the crash count. To do so, to the whole training dataset and to each cluster of the hierarchical structure, we apply a variant of the symbolic regression (SR) method to find expansions of linear models with effective and interpretable functional forms. To this end, presented in section 3.4, we designed a multi-objective simulated annealing algorithm to solve the SR problem. Thus, we can discover Pareto optimal predictive models1 with various trade-offs between accuracy and complexity. In our case, symbolic regression serves two purposes. First, it discovers global models, learned from the whole training 3.2 Supervised learning of a hierarchical structure Introduction As explained in section 2.2.1, a well-chosen hierarchical structure can handle correlations among groups of observations and significantly improve the quality of the models' predictions and of their interpretations. However, in the literature, this structure, which is dependent on the dataset, is the result of either expert knowledge or unsupervised clustering. The former is not always available, and the latter does not account for the task being solved (e.g., prediction of crash counts) and tries to group features without knowing if they're relevant for an outcome of interest [START_REF] Scott M Lundberg | From local explanations to global understanding with explainable ai for trees[END_REF]. Moreover, in our work, we would like to group roadway segments that share a similar crash severity for similar reasons. In other words, when computing clusters, we do not want to focus only on the distribution of the crash-related variable, but also want to account for the influence of each explanatory variable. 
Indeed, a similar crash severity between segments may be explained by different risk factors. Thus, clustering segments by considering only the dependent variable may result in segments in a same cluster that are in fact hazardous for different reasons. To obtain such a hierarchical structure, we propose to learn it from the data in two stages. First, we analyze the results of a black-box machine learning model with a local post hoc explanation tool (viz., SHAP). More specifically, for each instance of the dataset, SHAP computes the contribution of each explanatory variable to the prediction. Then, we apply a clustering on these SHAP explanation instances. In other terms, we make use of an unsupervised clustering where samples are grouped together based on their explanations (viz., the SHAP explanation instances), thus alleviating the aforementioned shortcomings of unsupervised clustering. In this section, we describe how we successfully retrieve a relevant hierarchical structure with an unsupervised clustering on labeled datasets2 . First, we start by presenting gradient boosting trees, the models we selected to discover a latent structure in the data. Then, we explain how the black-box effect induced by these models is mitigated by leveraging a local explanation tool (viz., SHAP) to measure the features' contributions to each observation. Finally, we explain how a hierarchical agglomerative clustering applied to these SHAP explanation instances help to discover a relevant structure made up of similarities of explanations between observations. Training of a non-parametric ML model Model selection We are interested in highly flexible, efficient models with fast training time on tabular data. To answer the first two requirements, numerous models can be considered: nonlinear SVM, ensemble of trees, deep learning models, . . . However, training a nonlinear SVM is computationally infeasible on large dataset (between O(n 2 ) and O(n 3 ), n being the number of samples, for a standard implementation of SVM [START_REF] Abdiansah | Time complexity analysis of support vector machines (svm) in libsvm[END_REF]) which make them poorly suited to the preliminary task of discovering a hierarchical structure in the data. On their side, deep learning models are better adapted to image recognition, natural language processing or speech recognition [START_REF] Scott M Lundberg | From local explanations to global understanding with explainable ai for trees[END_REF]. In their experiments, Chen et al. [START_REF] Chen | Xgboost: A scalable tree boosting system[END_REF] observe that tree-based models consistently outperform standard deep learning models on tabular datasets where "features are individually meaningful and do not have a strong multi-scale temporal or spatial structures". Recently, Borisov et al. [START_REF] Borisov | Deep neural networks and tabular data: A survey[END_REF] compare algorithms based on gradientboosted trees with a broad variety of deep learning models on three middle and large size datasets covering two classifications and one regression task. They observe that gradient-boosted trees outperform deep learning models on all datasets, while having lower training time. A balance of computational efficiency, ease of use, and high accuracy have made tree-based models the most popular non-linear model type. In 2021, 75% of most popular ML models used by data scientists and engineers are based on trees [START_REF] Kaggle | The state of ml and data science[END_REF]. 
Among ensembles of trees, random forests and gradient boosting trees are the most popular. However, the former tends to reduce only the variance with a bagging strategy, while the latter, by combining multiple models and gradient descent on residuals, reduces both the variance and the bias. Plus, gradient boosting trees often outperform random forests [START_REF] Borisov | Deep neural networks and tabular data: A survey[END_REF]. For these reasons, we make use of gradient boosting trees as a first step to discover the hierarchical structure.

Gradient tree boosting

Given a dataset {(x_i, y_i)}_{i=1}^{n}, gradient tree boosting aims to build, through additive training, an ensemble of trees T that minimizes a loss function L (e.g., mean squared error for regression tasks, logistic loss for classification tasks). The algorithm starts by initializing the model with the most accurate constant α:

T_0(x) = \arg\min_{\alpha} \sum_{i=1}^{n} L(y_i, \alpha)

Then, at each iteration m, the model is expanded in a greedy fashion such that:

T_m(x) = T_{m-1}(x) + \arg\min_{t_m} \sum_{i=1}^{n} L(y_i, T_{m-1}(x_i) + t_m(x_i))

where t_m is a decision tree. In other words, the model T_m tries to correct the error of its predecessor T_{m-1} by adding a new decision tree, also called a weak learner. However, choosing the decision tree that best reduces the loss function at each iteration has a high computational cost. Thus, assuming L is differentiable, the optimization process is simplified by leveraging gradient descent. The new decision tree t_m is trained on {(x_i, r_{im})}_{i=1}^{n}, with r_{im} the residuals indicating locally the steepest direction. Specifically, r_{im} are defined as:

r_{im} = - \left[ \frac{\partial L(y_i, T(x_i))}{\partial T(x_i)} \right]_{T(x) = T_{m-1}(x)}, \quad i = 1, \ldots, n

Then, t_m is added to the expansion such that:

T_m(x) = T_{m-1}(x) + \gamma_m t_m(x)

where γ_m is selected by minimizing the loss function of the new expansion:

\gamma_m = \arg\min_{\gamma} \sum_{i=1}^{n} L(y_i, T_{m-1}(x_i) + \gamma\, t_m(x_i))

The expansion process stops when a stopping criterion is reached (e.g., maximum number of iterations, satisfactory value of the loss).

Implementation

We select the XGBoost implementation of gradient tree boosting. The major benefits of using this library lie in its high portability, scalability, and high performance, as it provides state-of-the-art results on many problems [START_REF] Chen | Xgboost: A scalable tree boosting system[END_REF][START_REF] Borisov | Deep neural networks and tabular data: A survey[END_REF]. Moreover, the authors use three different methods to prevent overfitting [START_REF] Chen | Xgboost: A scalable tree boosting system[END_REF]. First, they penalize the complexity of the model when optimizing the loss function. Their definition of model complexity penalizes both the number of leaves in the trained trees and the leaf values by means of an l2-regularization term. Then, they add shrinkage, a technique proposed by Friedman [START_REF] Jerome H Friedman | Stochastic gradient boosting[END_REF], to control the influence of newly added trees during the update process. Finally, they use feature subsampling to lower the number of features available to each new tree and thereby increase the variance between trees.

Local post hoc explanations

The aforementioned model is an ensemble of trees which consists of hundreds, if not thousands, of trees grown sequentially (in our experiments, the number of trees is set to 100, the default value of the XGBoost package).
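A minimal sketch of this training step, using the scikit-learn style API of the xgboost package; the data are synthetic and the hyper-parameter values below are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np
from xgboost import XGBRegressor

# Synthetic stand-in for the tabular dataset (hypothetical shapes).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.poisson(lam=np.exp(0.5 + 0.3 * X[:, 0]))

# Gradient tree boosting with the three regularization mechanisms
# mentioned above: an l2 penalty on leaf values (reg_lambda),
# shrinkage (learning_rate) and feature subsampling (colsample_bytree).
model = XGBRegressor(
    n_estimators=100,      # number of trees (the default value)
    learning_rate=0.1,     # shrinkage
    reg_lambda=1.0,        # l2 penalty on leaf values
    colsample_bytree=0.8,  # feature subsampling
    max_depth=4,
)
model.fit(X, y)
preds = model.predict(X)
```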
Even though the authors penalize the complexity of each tree while training, the resulting model is hardly interpretable. To mitigate this black-box effect, we compute local explanations with SHAP [START_REF] Scott | A unified approach to interpreting model predictions[END_REF] (see section 2.1.3), to understand how the model maps the original explanatory variables to an outcome (e.g., a crash count in the case of highway safety). In addition to retaining local faithfulness, SHAP presents important properties described in section 2.1.3. To each observation, SHAP associates a linear function g:

g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i \qquad (3.1)

where M is the number of input features, z' ∈ {0, 1}^M, and \phi_i ∈ R are their contributions, which correspond to the game-theoretic concept of Shapley values. Among the different implementations of SHAP, we selected TreeSHAP, an efficient tree-based algorithm for fast and consistent computations of exact Shapley values [START_REF] Scott M Lundberg | Consistent individualized feature attribution for tree ensembles[END_REF][START_REF] Scott M Lundberg | From local explanations to global understanding with explainable ai for trees[END_REF]. Compared to KernelSHAP, TreeSHAP reduces the complexity of Shapley value computation from exponential to low-order polynomial time [START_REF] Scott M Lundberg | Consistent individualized feature attribution for tree ensembles[END_REF]. Aside from ensuring the properties defined in section 2.1.3 (viz., local accuracy, missingness and consistency), TreeSHAP is also able to account for feature dependence. Moreover, in [START_REF] Scott M Lundberg | From local explanations to global understanding with explainable ai for trees[END_REF], the authors observed that TreeSHAP consistently outperforms alternative methods across a benchmark of 21 different local explanation metrics. Furthermore, in Lundberg and Lee [START_REF] Scott | A unified approach to interpreting model predictions[END_REF], the authors propose a force plot visualization (see Fig. 3.3) to materialize how much each contribution shifts the output relatively to the overall expected value of the XGBoost model. These contributions can be perceived as forces that shift the output towards or away from the expected value of the model (hence the name of the plot): the higher the contribution, the larger the magnitude of the force in Fig. 3.3. In practice, these plots help us understand visually how the black-box model behaves on single instances of our dataset. In the next section, we will describe how they also reveal a hierarchical structure in the data.

Discovery of a hierarchical structure

When applied to all observations, the SHAP force plots can be clustered by similarities of their profiles. On the highway network dataset, we discover clusters of roadway segments which are similar based on their SHAP explanation instances (see Fig. 3.4): the left part of the figure characterizes roadway segments that are on average moderately hazardous, while the middle and far right parts represent highly hazardous segments. To obtain such a structure in our tabular dataset, we use a hierarchical agglomerative clustering (HAC) of the observations based on the explanatory variables' contributions as provided by the SHAP analysis. We select the squared Euclidean distance as a measure of distance between pairs of instances and the Ward linkage criterion to measure the dissimilarity of groups of instances.
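A minimal sketch of this explanation-then-clustering step, assuming the shap package and SciPy's hierarchical clustering utilities; it reuses the `model` and `X` of the previous sketch, and the number of clusters is an illustrative value.

```python
import shap
from scipy.cluster.hierarchy import linkage, fcluster

# Local explanations: one Shapley value per feature and per observation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # shape (n_samples, n_features)

# Hierarchical agglomerative clustering of the explanation instances
# with the Ward criterion (based on squared Euclidean distances).
Z = linkage(shap_values, method="ward")

# Cut the dendrogram into a chosen number of clusters (4 here, an
# illustrative value; the elbow criterion discussed below is one way
# to pick it automatically).
labels = fcluster(Z, t=4, criterion="maxclust")
```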
Unlike other distance measures, one characteristic of the Ward linkage criterion lies in the fact that it considers the inner variability of clusters when computing distances between two clusters. More specifically, given two clusters A and B, the distance d between them is defined as:

d(A, B) = \sum_{i \in A \cup B} |x_i - m_{A \cup B}|^2 - \sum_{i \in A} |x_i - m_A|^2 - \sum_{i \in B} |x_i - m_B|^2 = \frac{n_A n_B}{n_A + n_B} |m_A - m_B|^2

where | · | is the norm of the vector, m_J the center of cluster J and n_J the number of SHAP instances in it. At the beginning, each cluster consists of a unique SHAP instance and d is equal to zero. Then, at each iteration, clusters are merged together in a way that minimizes the increase in d. The procedure is repeated until a unique cluster is obtained, composed of the whole set of SHAP instances. Finally, we estimate the optimal number of clusters by detecting the greatest increase in the squared Euclidean distance between clusters when their number decreases (see Thorndike [START_REF] Robert | Who belongs in the family[END_REF] for the original presentation of this widely used method). Furthermore, in production, we do not have a straightforward way of knowing to which cluster a new observation belongs. However, we need this information as our underlying predictive models require this prior knowledge. Thus, to learn how to associate an observation of the test dataset to a cluster, we propose to train a decision tree classifier to approximate a good mapping between the original explanatory variables and the cluster of SHAP instances. In section 4.5.1, we report on cross-validation measures showing, on various datasets, that this association is very accurate.

Representation of the hierarchical structure in highway safety

In this section, we suppose that a hierarchical structure has been discovered by using ten years of data, from January 1st, 2008 to December 31st, 2017. Four clusters have been identified and are presented in Fig. 3.5. To obtain additional information on these automatically discovered clusters, we propose to visualize the distributions of the explanatory variables and of the dependent variable. In Fig. 3.6a and Fig. 3.6b, we illustrate this for the observed crash counts and the annual average daily traffic (AADT), which allows us to understand that clusters do not have the same amount of historical crash counts, and present as well differences regarding explanatory variables such as the traffic-related one. If necessary, experts can fine-tune the number of clusters with the help of a dendrogram to obtain clusters that better match their domain knowledge. They may be interested in a broader analysis (with fewer clusters) or a more detailed analysis (with more clusters) that highlights particular groups of roadway segments. When reducing the number of clusters from 4 to 2, one remains identical (viz., Cluster 1 of Fig. 3.7) while the others are merged together. The difference between the two resulting clusters mainly lies in the Shapley values of the AADT, which explains most of the critical nature of cluster 1. When looking at the historical data, we observe that this cluster is more hazardous than cluster 0 (see Fig. 3.8a) and is composed of roadway segments with high traffic (see Fig.
For instance, cluster 0 of Fig 3 .5, a moderately hazardous cluster composed mainly of rural and mountainous segments, is now divided into cluster 1 and 4 in Fig. 3.9. The most hazardous segments of the previous cluster (see Fig. 3.6a) are now grouped into cluster 4. These segments are located in the mountainous part of the network (see Fig. 3.10b). However, moving away from the initial clusters defined by the automatic process is not without risks, especially when the number of clusters is increased. Indeed, increasing the number of clusters will cut the explanation space in a way that some clusters will only be represented by a few samples which will bring a high source of uncertainty in the models. Moreover, we observed in our experiments that the performance is altered when the number of clusters moves away from the original value defined in the supervised way. We recommend to stay close to the original value as it represents a good trade-off between performance and interpretability. Finally, road safety experts confirm the relevance of these automatically discovered clusters. They also point out that the time saved can advantageously be spent on, for example, planning remedial actions. Bayesian hierarchical generalized linear model In our first proposal, the hierarchical structure discovered in the previous chapter is used to enhance the predictive performance and interpretability of generalized linear models (GLM). To account for such a structure, we use a Bayesian inferred hierarchical GLM. In section 3.3.1, we start by describing the multilevel structure of such models and then explain how Bayesian inference, with advances in probabilistic programming, infers efficiently the coefficients of these models (see section 3.3.2). Moreover, as the standard formulation of GLM generally attaches a linear functional form to a parameter of the likelihood distribution, interactions between the explanatory variables are not taken into account. In section 3.3.3, we also describe how we mitigate this by first learning simple interactions from the analysis of the results of a special kind of polynomial neural network, and then by adding them to the linear functional form. We finish this chapter by explaining how we evaluate and validate the models with a well-known technique, the Posterior Predictive Check. Model description To integrate the hierarchical structure, made of k clusters, into a GLM, we design the following multilevel model: level 0 -model Y ik ∼ L(Y ik |µ ik , Ω) µ ik = E[Y ik |X ik ] = g -1 (η ik ) η ik = β 0k + N j=1 β jk X ijk level 1 -priors β jk ∼ N (µ, σ) level 2 -hyperpriors µ ∼ N (0, 100) σ ∼ HN (100) According to this model, output Y ik , for observation i in cluster k, is generated from a likelihood distribution L parameterized with a set of specific parameters Ω and also µ ik , the expected value of Y ik conditioned on the observations. In section 2.2.1, we saw that, for crash prediction, L can be a N egativeBinomial(Y ik |λ, α) where λ is the expected number of crashes and α controls the amount of allowed overdispersion. Next, η ik is a linear transformation of the explanatory variables. The inverse link function g -1 is necessary to map the domain of η ik (viz., the real line) to the one of µ ik . For example, since the rate λ of a negative binomial must be positive, the exponential function is used for crash-count prediction. In case of binary classification, when the Bernouilli(p) likelihood is used, p being a probability, g can be set to the logit function. 
Indeed, the logit maps a parameter constrained between 0 and 1 onto the real line (the inverse link function g -1 is, in that case, the logistic function). Finally, as explained in the previous section, the cluster-specific coefficients β jk , for the N explanatory variables, depend on hyperpriors µ and σ, thus allowing an adaptive shrinkage to a mean common to all the observations. Bayesian inference Description Bayesian Inference (BI) treats the parameters as random variable and associates a probability distribution to each parameter. Specifically, BI derives the posterior probability from the prior p(θ) and the likelihood p(y | θ): p(θ | y) ∝ p(y | θ)p(θ) (3.2) with θ the set of parameters in the models. Based on the knowledge of observed crash-count data, the parameters are updated according to eq. 3.2. However, the posterior distribution is usually not obtained in a closed-form distribution but by means of approximation techniques. In our work, we estimate the distribution with a sampling procedure based on a Markov Chain Monte Carlo (MCMC) algorithm. Despite the use of computationally expensive sampling algorithms, BI becomes more and more efficient thanks to the advances of probabilistic programming [START_REF] Development | Stan: A c++ library for probability and sampling, version 2.5[END_REF][START_REF] Salvatier | Probabilistic programming in python using pymc3[END_REF]. Note that we could have used the frequentist alternative based on Maximum likelihood estimation (MLE). MLE use iterative algorithm, such as the Newton-Raphson method or gradient descent, to estimate coefficients β j . MLE is closely connected to the Maximum A Posteriori (MAP) estimate, which is the mode of the posterior distribution after BI. Indeed, given a uniform prior, MLE and MAP estimates are identical. However, with MLE, each coefficient has a unique point estimate, thus the introduction of prior knowledge is not possible. Moreover, with BI, we also obtain uncertainty knowledge for each MAP estimate. For these reasons, we use BI to infer the parameters of our models. Uncertainty knowledge The analysis of the posterior distributions (see Fig. 3.11) allows one to measure the influence of each variable on the crash count. Due to the inherent partial pooling nature of the model, posterior distributions are dissimilar among clusters thus revealing various impacts of the same explanatory variable on the crash count. For instance, the posterior's mean related to the percentage of heavy vehicles in Fig. 3.11b is positive for cluster 0 and 3, but negative for cluster 1. Moreover, we observe that the shapes of the posterior distributions are different from one cluster to another. This variability is linked to the size of the clusters: in general, the more samples, the more confidence in the estimates. For example, in Fig. 3.11a the posterior of the traffic related variable (log(AADT)) has a sharp probability distribution for cluster 0 and 3 but a flatter one for cluster 1 and 2. The latter highlights a greater uncertainty in estimating the coefficient associated with the traffic and calls for more vigilance when drawing conclusions for this risk factor. Supervised learning of interactions with a polynomial network Objectives When first-order interactions between explanatory variables are integrated into a GLM, the relationship between a variable and the target may depend on the value of another variable. 
This can lessen the gap in predictive power between Bayesian inferred GLM and ML algorithms inherently able to capture complex nonlinear relationships. Moreover, these simple interactions are interpretable while the potentially highly entangled ones discovered by ML algorithms will often remain inaccessible to human understanding and increase the risk of overfitting. Group Method of Data Handling algorithm In our approach, important first-order interactions are discovered with a variant of the Group Method of Data Handling (GMDH) family of supervised algorithms [START_REF] Grigorevich | Polynomial theory of complex systems[END_REF]. This GMDH algorithm is a self-organized multi-layered structure of nodes (see Fig. 3.12). Each node generates its output z by applying a linear function with a covariation term to a pair of inputs (x 1 , x 2 ) taken among either the nodes of the previous layer or the original explanatory variables: z = a 0 + a 1 x 1 + a 2 x 2 + a 3 x 1 x 2 Let n be the number of nodes of the previous layer and m be the number of explanatory variables. To build the next layer, for each of the n+m 2 polynomials, the a 0 , ..., a 3 parameters are set by minimizing through Ridge regression the least square error made by the polynomial when it approximates the target on a train dataset. Then, the fitted polynomials are evaluated on a validation dataset to select the top m constituting the new layer. When the score obtained on the validation dataset by the best node of the last layer added stops improving, the process terminates and the best node of the penultimate layer is the output of the network. Thus, this self-organized network discovers a polynomial that approximates the relationship observed on the training dataset between the explanatory variables and the target. In our approach, we use this polynomial to select the most important first-order interactions between explanatory variables. If the coefficient of a term involving the product of two variables exceeds a given percentage δ of the magnitude of the biggest coefficient, δ being a hyper-parameter of our framework, we add to our Bayesian hierarchical model an interaction between these two variables. Integration of interactions into the GLM For a given observation i, let {int i1 , int i2 , ...int iM } be the set of first-order interactions selected by our GMDH-based methodology. To integrate them into the Bayesian hierarchical model, we modify the linear predictor η ik (see section 3.3.1) such that: η ik = β 0k + N j=1 β jk X ijk + M m=1 β mk int im Visualizing the effects of interactions In our experiments, the GMDH polynomial highlights a major first-order interaction between speed limit and altitude. Triptych plots, introduced by Mc Elreath [83, p.234], are made to visualize such interactions. Thus, Fig. 3.13 depicts the bivariate relationship between annual average daily traffic (AADT) and predicted crash counts for cluster 1 from Fig. 3.5, depending on whether or not an interaction with the number of resting places is integrated into the Bayesian model. We observe that the slopes of the regression lines, constant and positive for both models, are steeper when considering an interaction. Thus, taking into account its interaction with the number of resting places, the positive influence of AADT on predicted crash counts gets more pronounced when the number of resting places increase. Implementation details We trained the polynomial network with GmdhPy5 , an open source Python implementation of the GMDH algorithm. 
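Independently of GmdhPy, the node-fitting and interaction-selection steps described above can be sketched as follows. This is a simplified illustration assuming scikit-learn's Ridge: it fits one candidate node z = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_1 x_2 per pair of variables and reads off the interaction coefficient a_3, whereas in the actual method the δ threshold is applied to the coefficients of the final polynomial obtained by parsing the trained network.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

# Synthetic stand-in for the explanatory variables and the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 1.0 + 0.5 * X[:, 0] + 0.8 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=300)

# Fit one quadratic GMDH-style node for every pair of inputs and keep
# the magnitude of its interaction coefficient a3.
interaction_strength = {}
for i, j in combinations(range(X.shape[1]), 2):
    design = np.column_stack([X[:, i], X[:, j], X[:, i] * X[:, j]])
    node = Ridge(alpha=1.0).fit(design, y)
    interaction_strength[(i, j)] = abs(node.coef_[2])

# Keep the interactions whose coefficient exceeds delta times the
# largest magnitude (delta = 0.01, as in the text).
delta = 0.01
largest = max(interaction_strength.values())
selected = [pair for pair, a3 in interaction_strength.items()
            if a3 >= delta * largest]
print(selected)
```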
For fast computation time, we restrict the maximum number of layers to 5. The number of selected best neurons is equal to the number of original features, its default value. Once the model is trained, the underlying polynomial is obtained by recursive parsing of the network, from the final node to the input layer containing the explanatory variables. Major first-order interactions are identified when the absolute value of their coefficients are superior to a threshold value defined as δ × max, with max being the coefficient with the highest amplitude, and δ = 0.01. Model evaluation and validation To use our model for point-estimate prediction, we must derive a Bayesian estimator from the posterior distributions. We use the mean of the posteriors which can be shown to minimize the mean squared error. In that way, we can compare our approach to black-box ML algorithms with standard quality metrics on a test dataset. Moreover, to check if the estimation of the posterior distributions converged well, we use the Posterior Predictive Check (PPC) graphical analysis method. Indeed, according to Gelman and Hill in [44, p. 158], PPC is a technique to "simulate replicated data under the fitted model and then compare these to the observed data". It allows one to look for systematic discrepancies between real and simulated data [START_REF] Gelman | Bayesian data analysis[END_REF]. To illustrate this, we compare in Fig. 3.14, for two examples of clusters, the histogram of observed crashes with that of samples drawn from the posterior distribution of crash-counts. Dashed green lines indicate the means of, respectively, the observed and replicated data. The two distributions being similar, our model fits adequately the data. Thus, the integration of latent structure and interactions do not bring skewed prior knowledge that would disturb the inference. Conclusion In this section, we have seen how a Bayesian learning of a hierarchical GLM can be enhanced by the introduction of a hierarchical structure and interactions identified by methods from the field of explainable artificial intelligence. In section 4.5, we report on results obtained on a 5-fold cross-validation, showing that these models improve the predictive performance of standard GLM (without multilevel formulation and interaction) and of the other simple interpretable models such as linear models or shallow decision trees. However, this first proposal underlines several limitations. First, if one does not want the user to be solely responsible for deciding which functions of the original variables are to be considered, a strategy must be adopted to explore a combinatorial space of potential functional forms. Also, for the model to stay interpretable in presence of a large number of explanatory variables, the number of relevant coefficient shouldn't be too large or ones can suffer from the information overload phenomenom [START_REF] Russell L Ackoff | Management misinformation systems[END_REF][START_REF] Keller | Effects of quality and quantity of information on decision effectiveness[END_REF][START_REF] Poursabzi-Sangdeh | Manipulating and measuring model interpretability[END_REF]. Thus, in our second proposal, we try to meet these challenges by computing a cluster-specific ranking of expansions of sparse regularized linear models ordered by increasing complexity thanks to symbolic regression. 
Interpretable hierarchical symbolic regression Throughout the previous sections, we described how a relevant hierarchical structure identified by analyzing the results of a post hoc explanation tool can enhance the predictive performance and interpretability of GLM by means of a multilevel formulation. In this section, we further exploit the discovery of this hierarchical structure by using symbolic regression to capture more complex but still interpretable relationships between the explanatory variables and the dependent variable. First, we detail the related work of symbolic regression (SR) in section 3.4.1. Then, in section 3.4.2, we introduce SR with a focus on the technique we use to conduct the search in the space of mathematical expressions, viz., a multi-objective optimization extension of the simulated annealing algorithm. Moreover, we present in section 3.4.3 how we successfully manage to build, through a partial pooling approach, a hierarchical symbolic regression to account for the structure previously discovered. Finally, in section 3.4.4, we explain how uncertainty knowledge is provided thanks to Bayesian Inference. Related work Symbolic regression consists in exploring a large space of functional forms to discover a predictive model with a good trade-off between accuracy and simplicity. Each element of this space is a parametric regression or classification model whose performance is measured (e.g., with cross-validation) on a given dataset after fitting its parameters. Both the parameters and the functional form of a predictive model are learned based on available data. A wide variety of approaches have been tried to effectively explore the space of functional forms. Evolutionary algorithms and other optimization techniques In genetic programming, population of a mathematical expressions evolves through selection, crossover and mutation to improve a fitness function [START_REF] Mckay | Using a tree structured genetic algorithm to perform symbolic regression[END_REF][START_REF] Adriano | Symbolic regression via genetic programming[END_REF][START_REF] Schmidt | Distilling free-form natural laws from experimental data[END_REF][START_REF] Amir Haeri | Statistical genetic programming for symbolic regression[END_REF][START_REF] William | Contemporary symbolic regression methods and their relative performance[END_REF]. Other SR are based on the metaheuristic algorithm of Pareto simulated annealing to discover a set of models which are optimal in terms of a balance of both accuracy and simplicity metrics [START_REF] Stinstra | Metamodeling by symbolic regression and pareto simulated annealing[END_REF]. Thanks to use of Meijer G-functions, SR can also be approached by algorithms based on gradient descent [START_REF] Ahmed | Demystifying black-box models with symbolic metamodels[END_REF]. Bayesian processes, with algorithms based on the Markov Chain Monte Carlo strategy have been used as well to solve the SR problem [START_REF] Jin | Bayesian symbolic regression[END_REF]. Symbolic regression based on Machine Learning Recent studies apply deep learning methods to symbolic regression in order to discover physical laws from experimental data. In [START_REF] Silviu | Ai feynman: A physics-inspired method for symbolic regression[END_REF], Udrescu et al. create a framework for symbolic regression based on neural network to discover hidden simplicity in the data (e.g., symmetry, separability) in order to decompose complex problems into simpler sub-problems. 
They apply their framework on 120 selected equations from classical mechanics, electromagnetism, quantum mechanics and obtain promising results on simulated data as they manage to discover more than 90% of the equations. Lately, this approach has been improved with the integration of an information complexity metric by means of Pareto optimization [START_REF] Silviu-Marian | Ai feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity[END_REF]. However, on a recent benchmark realised by La Cava et al. [START_REF] William | Contemporary symbolic regression methods and their relative performance[END_REF], this framework obtain poor results on real data and is even dominated by simple linear models. Recently, Petersen et al. [START_REF] Brenden K Petersen | Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients[END_REF] use a hybrid approach that combines genetic algorithms and a recurrent neural network (RNN) trained by reinforcement learning to generate better symbolic models at each iteration. Finally, Valipour et al. [START_REF] Valipour | Symbolicgpt: A generative transformer model for symbolic regression[END_REF] consider the problem as a sub task of language modelling and train a generative RNN model with reinforcement learning to produce symbolic equation skeletons whose constants are further adjusted by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [START_REF] Fletcher | Practical methods of optimization[END_REF]. Areas of application SR finds applications in numerous domains such as physics [START_REF] Schmidt | Distilling free-form natural laws from experimental data[END_REF], finance [START_REF] Chen | Genetic algorithms and genetic programming in computational finance[END_REF], climate modeling [START_REF] Stanislawska | Modeling global temperature changes with genetic programming[END_REF] or renewable energies [START_REF] William | Automatic identification of wind turbine models using evolutionary multiobjective optimization[END_REF]. So far, only few studies applied symbolic regression to safety analysis. Meier et al. [START_REF] Meier | Symbolic regression for precrash accident severity prediction[END_REF] use prioritized grammar enumeration, a dynamic programming version of symbolic regression, to predict crash severity a few milliseconds before collision. Patelli et al. [START_REF] Patelli | Traffic modelling and prediction via symbolic regression on road sensor data[END_REF] design a GP-based symbolic regression to predict the traffic flow. To the best of our knowledge, symbolic regression has not been applied to long term crash predictions. Symbolic regression with Pareto simulated annealing General description Most versions of symbolic regression (SR) discover an expansion of a linear model with the addition of non-linear effects by searching a space of functional forms. Section 3.4.1 gave an overview of the various methods that have been used to perform this search. Among them, we select simulated annealing, an effective metaheuristic known for its robustness in optimization problems involving a large search space [START_REF] Eren | Küçükdemiral, and İlker Üstoğlu. Chapter 2 -introduction to optimization[END_REF][START_REF] Daniel Delahaye | Simulated annealing: From basics to applications[END_REF]. Thus, we represent the problem as a local search. 
Moreover, we adopt a multi-objective extension of the simulated annealing algorithm to perform the search while optimizing both the complexity and the accuracy of the models [START_REF] Stinstra | Metamodeling by symbolic regression and pareto simulated annealing[END_REF]. The search ends on a set of mutually non-dominated predictive models, the Pareto front. Definition of a solution The functional form of a model is extracted from a set of expression trees. Each expression tree is perfect, binary and consists of internal operator nodes and leaves. Leaves are either represented by a constant or an explanatory variable. Operator nodes can be unary (e.g., cos, sin, tan, exp, ln, left, right) or binary (e.g., +, ×, /) and have two children. For unary operators, we indicate with the subscripts " l " (for "left") and " r " (for "right") to which child the operation is applied. For instance, if the operator is ln l , then the logarithm is applied to the left child. The left and right operators apply the identity function to the left and right child, respectively. We extract the symbolic expression by a breadth-first traversal of the expression tree. In practice, as operators, constants and input variables are defined with Sympy, an open-source Python library for symbolic computation [START_REF] Meurer | Sympy: symbolic computing in python[END_REF], the traversal returns a Sympy expression. Finally, the functional form S of a solution is obtained by the combination and algebraic simplification of the Sympy symbolic expressions of a set of expression trees (see Fig. 3.15). Initialization of a first solution Function initialize of algorithm 1 generates a first solution represented by a set M cur of random expression trees, with S cur the associated functional form. To this end, this function first creates balanced binary trees, each of the same depth. Then, for each tree, an inorder traversal associates an index to each node. Odd indices refer to internal nodes while even indices refer to the leaves (see Fig. 3.16a). In this way, we have an efficient means to search for a node in a tree and to know directly what type of node it is (see section Neighbourhood of a solution). At the same time, internal nodes are initialized with an operator chosen with equiprobability from the set of predefined operators introduced in section 3.4.2. Each leaf has a 50% probability of being initialized either to a constant or to one of the explanatory variables. In the latter case, each explanatory variable is equiprobable. Neighbourhood of a solution Function generate of algorithm 2 generates a new solution S new in the neighbourhood of the current solution S cur . It randomly selects an expression tree from M cur and a node index from {0, ..., 2 T -2}, T being the tree depth. Then, a recursive search finds the node with the selected index. When the node is an operator (viz., its index is odd), it is replaced by a randomly selected operator. Likewise, when a leaf is selected (viz., its index is even), it is replaced by a randomly selected constant or explanatory variable. Fig. 3.16 illustrates this process. If unchecked, function generate could lead to ill-defined operators. For instance, a logarithm could be applied to a potentially negative domain. Therefore, function integrityCheck infers recursively the domain of each operator node and, based on rules from interval arithmetic, checks its validity (table 3.1 introduces some of these rules). 
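A minimal sketch of this kind of domain propagation, covering only a small subset of the rules of table 3.1 and using a deliberately simplified, hypothetical tuple representation of expression trees (not the one used in the implementation):

```python
import math

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def interval_log(a):
    if a[0] <= 0:                      # ln undefined on (-inf, 0]
        raise ValueError("invalid domain for ln")
    return (math.log(a[0]), math.log(a[1]))

def integrity_check(node, domains):
    """Recursively infer the interval of a node and raise if an operator
    is applied outside its domain. Binary nodes are ('op', left, right),
    unary nodes ('op', child), leaves ('var', name) or ('const', value)."""
    kind = node[0]
    if kind == "const":
        return (node[1], node[1])
    if kind == "var":
        return domains[node[1]]
    if kind == "+":
        return interval_add(integrity_check(node[1], domains),
                            integrity_check(node[2], domains))
    if kind == "*":
        return interval_mul(integrity_check(node[1], domains),
                            integrity_check(node[2], domains))
    if kind == "ln_l":                 # unary operator applied to its child
        return interval_log(integrity_check(node[1], domains))
    raise ValueError(f"unknown operator {kind}")

# Example: ln(x1 * x2) with x1 in [1, 2] and x2 in [-1, 3] is rejected,
# since the product interval [-2, 6] contains non-positive values.
tree = ("ln_l", ("*", ("var", "x1"), ("var", "x2")))
try:
    integrity_check(tree, {"x1": (1.0, 2.0), "x2": (-1.0, 3.0)})
except ValueError as err:
    print(err)
```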
With interval arithmetic, we have an efficient way to ensure that the functional form generated from the random process does not contain any undefined values [START_REF] Keijzer | Improving symbolic regression with interval arithmetic and linear scaling[END_REF]. For more details on the integrity check, we refer to [START_REF] Stinstra | Metamodeling by symbolic regression and pareto simulated annealing[END_REF].

Cost of a solution
The cost of a solution is measured in terms of both the prediction error (see function measurePerformance of algorithm 2) and the complexity of the functional form (see function measureComplexity of algorithm 2).

Performance To obtain a robust estimate of the prediction error of S_new, we compute the average RMSE, for a regression task, or f1-score, for a classification task, on the validation subsets of a 5-fold cross-validation process. On each training subset, the coefficients β_i of S_new are learned by solving an l2-regularized linear regression, for regression tasks, or an l2-regularized logistic regression, for classification tasks. In either case, the regularization parameter is determined on each training subset of the aforementioned 5-fold cross-validation, either by an efficient generalized cross-validation [START_REF] Golub | Generalized cross-validation as a method for choosing a good ridge parameter[END_REF] for ridge regression or by a nested cross-validation process for logistic regression. Once the estimate of the prediction error is obtained, the coefficients β_i are fitted one last time on the whole training dataset.

Algorithm 1: Symbolic regression with Pareto simulated annealing
Require: m: number of expression trees; T_min = 0.0001: initial temperature (heating phase) and minimum temperature (cooling phase); λ_h = 1.15, λ_c = 0.85: ratios between two adjacent temperatures in the heating phase and cooling phase, respectively; s_h = s_c = 300: number of iterations between two updates of temperature; γ_c = 1.15: ratio that controls the growth of s_c; max: maximum number of iterations during the cooling phase.
Heating phase: while α ≤ 0.9 do T ← T × λ_h, α ← acc/(acc + rej), acc ← 0, rej ← 0.
Initialization: i ← 0, ζ ← ∅, (M_curr, S_curr) ← initialize(m).
Cooling phase: while T > T_min and i < max do generate a neighbour, accept it or not (M_curr ← M_new when accepted), i ← i + 1.
Return: ζ (the listing also returns i, M_curr, S_curr, acc, rej).
[Figure: expression-tree functional forms (a) x_1 x_2 + x_2, (b) x_1 + 2 x_2, (c) x_1 + 2 x_2, (d) x_1 + x_2 + x_0 x_2]

Complexity We improve the strategy introduced in [START_REF] Stinstra | Metamodeling by symbolic regression and pareto simulated annealing[END_REF] to propose a new measure of the complexity of a solution. We penalize both the collinearities and the number of terms present in the symbolic expression of the functional form. The complexity of a solution S composed of m terms is defined as:

$\mathrm{Complexity}(S) = \sum_{i=1}^{m} \Big(1 + \max\big\{\, |r_{ij}| \;;\; j \in \{1, 2, \ldots, m\} \setminus \{i\} \,\big\}\Big)\, C_i$  (3.3)

where r_ij is the Pearson's correlation coefficient, computed on the training dataset, between terms i and j, and C_i is the complexity of the term i. We use algebraic rules to compute the complexity of each term, some of which are presented in table 3.2. The complexity of a unary operator (e.g., the natural logarithm) is determined by approximating the operator, on its inferred domain, by a polynomial of increasing degree (at most 10) until the score of the fit, as measured on a validation set, is below a predefined threshold. The complexity of the unary operator is then defined as the degree of the best polynomial approximation.
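As a complement to equation 3.3, the short sketch below shows how the complexity of a candidate solution can be computed once its terms have been evaluated on the training samples. The per-term complexities C_i are assumed to be supplied by the algebraic rules of table 3.2; the function name and the toy terms are illustrative only.

import numpy as np

def solution_complexity(term_values, term_complexities):
    """Complexity(S) = sum_i (1 + max_{j != i} |r_ij|) * C_i, as in eq. 3.3."""
    m = term_values.shape[1]
    if m == 1:                                        # no collinearity possible
        return float(term_complexities[0])
    corr = np.corrcoef(term_values, rowvar=False)     # Pearson correlations r_ij
    total = 0.0
    for i in range(m):
        others = np.abs(np.delete(corr[i], i))        # |r_ij| for j != i
        total += (1.0 + others.max()) * term_complexities[i]
    return total

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
terms = np.column_stack([X[:, 0], X[:, 1] ** 2, X[:, 0] * X[:, 1]])
print(solution_complexity(terms, term_complexities=[1, 2, 2]))

Because each C_i is multiplied by one plus the largest absolute correlation with another term, strongly collinear terms are penalized even when they are individually simple.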
It should also be noted that, according to equation 3.3, the more terms a solution has, the more complex it is. We were able to confirm experimentally that the measured complexity represents well the complexity perceived by the safety experts.

Comparison of two solutions
The search ends with a set of Pareto optimal solutions that belong to the boundary beyond which neither the prediction error nor the complexity can be improved without deteriorating the other objective. This can be formally defined in terms of a dominance relation. Let U_1 be the prediction error and U_2 be the complexity metric. Then

S_a dominates S_b ≡ ∀i ∈ {1, 2} : U_i(S_a) ≤ U_i(S_b) and ∃j ∈ {1, 2} s.t. U_j(S_a) < U_j(S_b).

Thus, the search returns a set of non-dominated solutions called the Pareto front.

Table 3.2: Algebraic rules used to compute the complexity of each term, adapted from [119, p.320]

  Term i           Complexity C_i          Example       Computed C_i
  const            0                       2             0
  x                1                       x_4           1
  f(x)^n           n × C(f(x)) (a)         x_2^2         2
  f(x) × g(y)      C(f(x)) + C(g(y))       x_1 x_2^2     3
  f(g(y))          C(f(x)) × C(g(y))       ln(3 x_2^2)   C_unary(ln) × 2 (b)

  (a) C(.) is the complexity of the inner function
  (b) C_unary(.) is the complexity of the unary operator

Exploration by Pareto simulated annealing
Simulated annealing (SA) is an iterative local search process used to solve optimization problems for which a simple hill-climbing approach would most often converge on a poor local optimum. At each iteration, SA randomly generates a solution S_new in the neighborhood of the current solution S_cur. The probability P of accepting S_new as the new current solution is a function of both a temperature parameter T and the difference in cost ∆E between the two solutions:

P = e^{-∆E/T}  (3.4)

Annealing temperature T
SA mimics the physical process of annealing in metallurgy, where a material is first heated before being gradually cooled in order to reach an equilibrium state with increased ductility and hardness. SA follows a similar two-step process. The heating phase aims at discovering an initial temperature T_0 that favors exploration over exploitation at the beginning of the search. The heating process starts from a low temperature at which a deteriorating neighbour of the current solution is rarely accepted. Then, every s_h iterations, the temperature is increased according to a geometric series of ratio λ_h > 1. The process ends at a temperature T_0 at which at least 90% of the randomly generated neighbours are accepted. During the cooling phase, the annealing temperature is progressively decreased, every s_c iterations, according to a geometric series of ratio λ_c < 1. High temperatures favor the exploration of the space of functional forms by preventing the process from converging too early on a local optimum (see Fig. 3.17). On the contrary, the more the temperature decreases, the less likely it is for a deteriorating neighbour to replace the current solution (see Fig. 3.17). The value of λ_c controls the speed at which the annealing temperature decreases. If λ_c is too small, the optimization may get stuck too early in the neighborhood of a poor local optimum, whereas, if λ_c is too close to 1, the optimization may take too long to reach a good optimum. Moreover, the parameter s_c increases according to a geometric series of ratio γ_c > 1. Thus, more iterations are allocated to lower temperatures to favor the exploitation of promising functional forms. Finally, the search ends when either the temperature falls below a threshold or the number of iterations reaches a predefined maximum.
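The two building blocks of this exploration, namely the dominance test between solutions and the acceptance rule of equation 3.4, can be sketched in a few lines. The numeric values reused below (T_min = 0.0001, λ_h = 1.15, λ_c = 0.85) come from algorithm 1; everything else (function names, example costs, the number of temperature updates) is illustrative.

import math
import random

def dominates(cost_a, cost_b):
    """S_a dominates S_b iff it is no worse on every objective and strictly
    better on at least one (objectives: prediction error, complexity)."""
    return (all(a <= b for a, b in zip(cost_a, cost_b))
            and any(a < b for a, b in zip(cost_a, cost_b)))

def accept(delta_e, temperature):
    """Acceptance rule of eq. 3.4: improving moves always pass, deteriorating
    moves pass with probability exp(-dE / T)."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / temperature)

print(dominates((0.31, 4.0), (0.35, 6.0)))    # True: better on both objectives
print(dominates((0.31, 7.0), (0.35, 6.0)))    # False: the objectives conflict
print(accept(delta_e=0.2, temperature=1.0))   # often True at a high temperature
print(accept(delta_e=0.2, temperature=0.01))  # almost always False once cooled

# Geometric schedule quoted in algorithm 1 (50 updates chosen only for the example).
T, lambda_h, lambda_c = 0.0001, 1.15, 0.85
T_after_heating = T * lambda_h ** 50              # heating: multiply by lambda_h every s_h iterations
T_after_cooling = T_after_heating * lambda_c ** 50  # cooling: multiply by lambda_c every s_c iterations
print(T_after_heating, T_after_cooling)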
∆E and the acceptance of a new solution For a single-objective optimization problem, ∆E is simply the difference of the objective function evaluated at two neighbouring solutions. For our multi-objective optimization problem, we use the dominance-based performance metric introduced above. When a new solution S_new dominates, or is as good as, S_cur, it is accepted as the new solution (see function accept of algorithm 3). When S_new is less effective than S_cur, it has a probability P, defined by eq. 3.4, of being accepted. In that case, ∆E is defined as:

∆E(S_cur, S_new) = (|ζ̃_{S_new}| − |ζ̃_{S_cur}|) / |ζ̃|  (3.5)

Partial pooling approach for symbolic regression
To handle correlations among groups of observations in the dataset, we developed a two-stage partial pooling approach to build hierarchical symbolic regressions. First, we run a symbolic regression on the whole dataset to automatically discover global models. Then, for each cluster in the hierarchical structure discovered in section 3.2, we run a new symbolic regression to extract cluster-specific functional forms that are afterwards merged with the functional form of a previously selected global model.

Automatic discovery of global models In section 4.6, where we illustrate the dynamic interpretative process made possible by our framework, we emphasize the interest of letting the user choose a predictive model on the Pareto front. In that way, the end-user can precisely balance the predictive performance and the simplicity of the model. However, in our proposed methodology, we also need a principled way to automatically select a model on the Pareto front. To do this, we first consider the point Ω in the Pareto plane that, (i) on the performance axis, is at the level of the most efficient model encountered and, (ii) on the complexity axis, is at the level of the simplest model encountered. We then select the Pareto optimal model S_glob closest to Ω in terms of Euclidean distance. This model, located in the elbow of the Pareto front, is likely to offer a good trade-off between predictive performance and complexity. In the next stage of our approach, it is used as a starting point to build cluster-specific models.

Cluster-specific models To discover cluster-specific phenomena, for each cluster discovered by the approach introduced in section 3.2, a modified version of the symbolic regression search is conducted. It consists in merging the functional form built from the expression trees with the fixed functional form of S_glob (see Fig. 3.19): common terms are grouped together and new terms are added to the formula. Hence, the marginal effects already represented by S_glob can be reduced or amplified, and new cluster-specific effects can be discovered. This corresponds to a partial pooling approach where cluster-specific models can benefit from the effects already discovered by the global model. In classification tasks, the dependent variable can be highly imbalanced on some clusters only. For imbalanced clusters, we jointly apply the synthetic minority oversampling technique (SMOTE) [START_REF] Nitesh V Chawla | Smote: synthetic minority over-sampling technique[END_REF] and edited nearest neighbor (ENN) [START_REF] Dennis L Wilson | Asymptotic properties of nearest neighbor rules using edited data[END_REF]. The former generates samples from the minority class with interpolation. The latter applies under-sampling to clean the noisy samples generated with SMOTE.
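Coming back to the automatic selection of the global model a few paragraphs above, the following sketch picks the Pareto optimal solution closest to the ideal point Ω. Rescaling both objectives to [0, 1] before measuring the Euclidean distance is an assumption added here so that error and complexity are comparable; the example front and the function name are made up.

import numpy as np

def select_global_model(pareto_costs):
    """pareto_costs: array of shape (n_models, 2) with columns
    (prediction_error, complexity). Returns the index of the selected model."""
    costs = np.asarray(pareto_costs, dtype=float)
    lo, hi = costs.min(axis=0), costs.max(axis=0)
    scaled = (costs - lo) / np.where(hi > lo, hi - lo, 1.0)   # both axes in [0, 1]
    omega = scaled.min(axis=0)                 # ideal point: best value on each axis
    distances = np.linalg.norm(scaled - omega, axis=1)
    return int(np.argmin(distances))

front = [(0.50, 1.0), (0.41, 3.0), (0.33, 6.0), (0.32, 14.0)]
print(select_global_model(front))              # -> the 'elbow' model (index 2)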
Also, it should be noted that in the case of a homogeneous cluster, a cluster-specific model would not make sense and we propose to stay with the global model S glob . Uncertainty estimation Our approach results in global and cluster-specific expansions of linear models. Therefore, the marginal effects of the terms composing the models are readily interpretable. However, since the training set has been used to estimate the l2-regularization hyper-parameters, there is no simple linear relationship between uncertainty in the parameters and uncertainty in the target. Bootstrap and asymptotic statistics Bootstrap techniques could estimate the uncertainty in the parameters. Still, a standard bootstrap approach is not appropriate since the bias introduced by the penalty term would not be correctly estimated. Double bootstrap techniques have been proposed [START_REF] Hrishikesh | Double bootstrap for shrinkage estimators[END_REF][START_REF] Mccullough | Implementing the double bootstrap[END_REF] to take into account an estimation of the bias. Nonetheless, they are computationally expensive (O(n 3 ) where n is the number of samples). Also, asymptotic statistics have been derived to measure the uncertainty in the parameters under a fixed setting of the regularization parameter [START_REF] Firinguetti | Asymptotic confidence intervals in ridge regression based on the edgeworth expansion[END_REF]. They wouldn't be appropriate in our case since we estimate the regularization parameter by leave-one-out crossvalidation. Bayesian interpretation of Ridge regression In our work, we make use of a well-known equivalence [START_REF] Mehta | A high-bias, low-variance introduction to machine learning for physicists[END_REF] between the ridge regression regularization parameter and the parameters of a Gaussian prior for the Bayesian formulation of linear regression. It can be shown that the variance of the zero-centered Gaussian prior τ 2 must be defined as: τ 2 ≡ σ 2 λ where λ is the ridge regularization parameter and σ 2 is the variance of the likelihood that can be estimated by measuring the variance of the target on the training dataset. More specifically, in the frequentist approach, the coefficient estimate βRidge of Ridge regressions is solved from: βRidge = arg min[(Y -Xβ) ⊤ (Y -Xβ) + λβ ⊤ β] (3.6) In the Bayesian paradigm, no single coefficient estimate is found but a posterior distribution of β is inferred from the data. From the Bayes theorem, this posterior distribution can be written as: p(β|Y, X) ∝ p(β) • p(Y |X, β) (3.7) where Y | X, β ∼ N (Xβ, σ 2 I) with σ > 0 is the Bayesian formulation of linear regression. If we suppose zero-centered gaussian priors with variance τ 2 , then Eq. 3.7 becomes: p(β|Y, X) ∝ exp - 1 2 (β -0) ⊤ 1 τ 2 I(β -0) • exp - 1 2 (Y -Xβ) ⊤ 1 σ 2 (Y -Xβ) = exp - 1 2σ 2 (Y -Xβ) ⊤ (Y -Xβ) - 1 2τ 2 β ⊤ β 60 From this expression, we can compute the Maximum A Posteriori estimate βMAP : βMAP = arg max exp - 1 2σ 2 (Y -Xβ) ⊤ (Y -Xβ) - 1 2τ 2 β ⊤ β = arg min 1 σ 2 (Y -Xβ) ⊤ (Y -Xβ) + 1 τ 2 β ⊤ β = arg min (Y -Xβ) ⊤ (Y -Xβ) + σ 2 τ 2 β ⊤ β which is equivalent to the Ridge regression estimate found in Eq. 3.6 when λ ≡ σ 2 τ 2 . Thus, for each Pareto optimal solution, we start from the discovered functional form and the value of the ridge regularization hyper-parameter λ to infer again the coefficients, but this time, using Bayesian inference with the above prior. The resulting posterior distributions give an estimate of the parameters' uncertainty. 
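A minimal pymc3 sketch of this re-estimation step is given below: the prior scale is derived from the already-tuned ridge penalty through τ² = σ²/λ, and two chains of 3000 draws with 1000 tuning iterations are run, as in our setting. The toy data, the value of λ and the intercept prior are illustrative assumptions rather than the thesis code.

import numpy as np
import pymc3 as pm

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))                  # values of the discovered terms
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=1.0, size=300)

lam = 10.0                                     # optimal l2 penalty found earlier
sigma = y.std()                                # likelihood scale, estimated on the training data
tau = sigma / np.sqrt(lam)                     # prior scale implied by the penalty

with pm.Model():
    beta = pm.Normal("beta", mu=0.0, sigma=tau, shape=X.shape[1])
    intercept = pm.Normal("intercept", mu=0.0, sigma=10.0)
    mu = intercept + pm.math.dot(X, beta)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    trace = pm.sample(3000, tune=1000, chains=2)   # NUTS by default

print(pm.summary(trace, var_names=["beta"]))   # posterior means and credible intervals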
Implementation details We implement our model in Python. In order to converge towards interpretable models, we restrict the operators available to the symbolic regression to {left, right, ln} for the unary ones, and {×, +, -} for the binary ones. For the algebraic simplification of the expression trees by, e.g., grouping common terms together (see function simplify in algorithm 2), we use a module6 from the Sympy library ( [START_REF] Meurer | Sympy: symbolic computing in python[END_REF]). To fit the coefficients of a newly discovered functional form, we use the scikitlearn 7 [96] implementations of ridge regression (in the case of a regression task) and l2-regularized logistic regression (in the case of a classification task). The optimal coefficients of the linear models are computed with regularized least square algorithms. Indeed, with the introduction of a weight decay, better generalization performances can usually be achieved and the models are less prone to the negative effects of multicollinearities. We optimize the l2-regularization parameter by either an efficient form of leave-one-out cross-validation (viz., generalized cross validation) for regression tasks or by five-folds cross-validation for classification tasks. However, for even moderately large classification datasets (e.g., Adult in table 4.2), performing a k-fold cross-validation at each iteration of the symbolic regression can take a long time. In that case, we perform the search using the scikit-learn implementation of a ridge classifier to benefit from the efficient computation of the generalized cross validation. Only when a model is added to the Pareto front, do we fit its parameters by l2-regularized logistic regression with k-fold cross-validation to optimize the regularization parameter. Indeed, we observed experimentally that an l2-regularized logistic regression model tends to perform slightly better than a ridge classifier. By adopting this strategy, we reduce the computation time by a factor of 6 on the Adult dataset, while the final Pareto front remains nearly identical. For imbalanced classification tasks, we use the imbalanced-learn implementation8 of the SMOTE-ENN resampling technique [START_REF] Nitesh V Chawla | Smote: synthetic minority over-sampling technique[END_REF][START_REF] Dennis L Wilson | Asymptotic properties of nearest neighbor rules using edited data[END_REF]. As explained in section 3.4.4, in order to endow our final models with uncertainty estimates, we use Bayesian inference to compute the posteriors for the coefficients of each functional form on the Pareto front. We apply gaussian priors corresponding to the already known optimal value of the regularization hyper-parameter (see section 3.4.4). We rely on the pymc39 library with the No U-Turn Sampler ( [START_REF] Matthew D Hoffman | The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo[END_REF]) to run simultaneously two Markov chains for 3000 iterations, with a burn-in period of 1000 iterations. Chapter 4 Experiments In the thesis, we design intrinsically interpretable predictive models to improve highstakes decision-making processes. In our first contribution, we propose a methodology that combines Bayesian learning of hierarchical GLM with automatic detection of a hierarchical structure and interactions through methods borrowed from the field of explainable artificial intelligence (XAI). 
In our second contribution, we develop a novel methodology to exploit even better the hierarchical structure by combining symbolic regression and multi-objective optimization to obtain models that are sparser, more efficient while capturing more relevant interactions. To validate that our approaches improve the predictive capacities of interpretable models and get close to the black-box models, our experiments are carried out on the highway dataset and on more than ten public datasets covering different tasks from various domains. In this chapter, we begin by presenting in section 4.1 the different datasets on which the experiments are carried out and we introduce the different data processing in section 4.2. Then, we introduce in section 4.3 the evaluation metrics for regressions and classifications. In section 4.4, we describe all the parametric and non-parametric models that are used for comparison purposes. Then, we share the results in section 4.5. Finally, in section 4.6, we illustrate the benefits of our approach by introducing, on a realistic case study, an application we designed for highway safety experts 63 Description of the datasets French highway dataset To address the problem of crash prediction, we generate a tabular dataset from the different available data sources. Most of the data come from the APRR's relational databases, and the remaining data, in particular topographical surveys of the network infrastructure, was sent to us by field experts in the form of structured flat files. To generate the dataset, we first divide the highway network into segments of equal length, then for each segment, we calculate the different variables on a given time period. In this section, we describe in more detail the process that leads to the generation of this dataset, further referred to as the French Highway dataset. Highway network meshing In the first place, the network must be divided into segments in order to be able to identify afterwards more or less crash-prone configurations on the network and to understand the associated influential explanatory variables. For this, we rely on spatial surveys of reference points of each highway belonging to the network. Each highway is divided into one-way segments of equal length. In this study, we select a length equal to 10 kilometers, in order to be able to capture the variations of the explanatory variables on the vast network while having a crash count rarely equal to zero. Of course, at the ends of the network, some segments will have a length less than the predefined one. Thus, we decide not to consider them in the analysis. Calculation of variables Considering the data available to us, we chose to restrict the analysis to the APRR network only. The study therefore covers 1,894 km of network, since the 429 km of the AREA subsidiary's highway network (see Fig. 1.1) have been withdrawn. The data covers eleven years, from January 1, 2008 to December 31, 2018. In our work, we focus on predicting the annual crash count. The available temporal explanatory variables are therefore also aggregated to this time scale. The spatial variables remain fixed over all the years of the study. In this section, we describe how we calculate the different variables, starting with the accident count. Descriptive statistics of the computed variables are given in table 4.1. Crash count The accident data comes from police reports or descriptive sheets filled out by APRR staff present at the crash site. 
The information transmitted are very detailed: location, date and time, number of individuals involved, type of vehicle, etc. About 20% of accidents have no time indication because the protagonists did not wait for the arrival of help, police or group staff before leaving. However, they are identified by the patrols on the same day and are therefore also taken into account. Then, accident data are stored in the company's large databases. It is from these databases that we are able to retrieve all crash-related information needed for the study. For each highway segment, the accidents are aggregated over each year of the study. The number of crashes per segment varies greatly (see Figure 4.1 for the distribution of observed crash counts). In table 4.1, we observe that the variance of the crash count exceeds its mean. This phenomenon, called over-dispersion, is often witnessed with crash data [START_REF] Lord | The statistical analysis of crashfrequency data: A review and assessment of methodological alternatives[END_REF]. Traffic related variables Two traffic-related variables are considered in the study: the annual average daily traffic (AADT) and the percentage of heavy vehicles. Two data sources are available to estimate these variables. One, coming from the 797 counting loops integrated into the road pavement, represents the number of vehicles and the percentage of heavy vehicles having passed through the sensors during a fixed period of time (6 min). The other comes from transactions at toll gates, taking into account the class of vehicle (e.g., light vehicle, two-axled truck, etc.). While the former often has noisy data (e.g., outliers, long absence of data), mainly due to sensor failures, the latter provides a very reliable estimate of the AADT and percentage of heavy vehicles on highway segments. We therefore select toll data as a way to estimate the yearly traffic related variables on the network. We use the weighted mean to evaluate the traffic on our segments and to be able to capture traffic variations inside the segments: x = n i=1 w i x i n i=1 w i (4.1) with w i being the distance for which the variable has the value x i . 4.1. For the variables representing specific elements such as interchanges, tunnels, etc., we received from field experts a spatial statement of their coordinates. Variables that have very small variation on the network (viz., ramps, tunnels, toll barrier, structures) are binarized to indicate whether an element is present on a roadway segment or not. The others (viz., interchanges, resting places) are discrete variables referring to as a count of the element of interest on the roadway segment. The remaining variables (viz., right shoulder width, altitude and speed limit), representing continuous elements on the network, are calculated from the weighted mean defined in eq. 4.1. Name Other datasets As said before, we extend the study to 13 other datasets covering regression and classification tasks from various domains. The size of these datasets varies greatly, both in terms of volumes (from 546 to 116640 instances) and dimensionalities (from 7 to 117 features). A general description of the datasets is given in table Data preparation For all datasets, we do not apply any process of dimensionality reduction such as feature selection or principal component analysis. The original data are transformed in different ways depending on the ML model. Models that introduce regularization terms have standardized data as input. 
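For illustration, the weighted mean of eq. 4.1 amounts to the following computation, shown here on made-up sub-section lengths and AADT values for a single 10 km segment.

import numpy as np

aadt   = np.array([24000., 31000., 28000.])   # x_i: AADT measured on each sub-section
length = np.array([2.5, 4.0, 3.5])            # w_i: kilometres covered by each x_i

segment_aadt = np.sum(length * aadt) / np.sum(length)
# identical result with numpy's built-in weighted average
assert np.isclose(segment_aadt, np.average(aadt, weights=length))
print(round(segment_aadt))                    # -> 28200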
Standardization makes the values of each continuous explanatory variable have zero-mean and a unit-variance: x ′ = x -µ σ where µ is the mean value of the variable, and σ its standard deviation. For neural networks, continuous explanatory variables are re-scaled in the [0, 1] interval: x ′ = x -min(x) max(x) -min(x) Moreover, categorical explanatory variables are one-hot encoded. More precisely, if n is the number of different categorical values, then one-hot encoding transforms the variable into n -1 binary explanatory variables describing the values of the original explanatory variable. Predicted class Positive Negative Actual class Positive True positive (TP) False negative (FN) Negative False positive (FP) True negative (TN) Regression To measure the performance of predictive models on regression tasks, we compute the root mean square error and the mean absolute deviation. Given n the number of observations, y i the target and ŷi the predicted value: Root mean square error is a measure of how spread out are the prediction errors, in the context of numerical predictions. This metric is defined as: RM SE = 1 n n i=1 (ŷ i -y i ) 2 Mean Absolute Deviation is a measure of variability that indicates the average distance between the predictions and the average value of the observed variable: M AD = 1 n n i=1 |ŷ i -ȳ| with ȳ being the mean value of y. Classification To evaluate our models on binary classification tasks, we use metrics derived from the confusion matrix presented in table 4.3. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class. However, many machine learning and statistical models predict a probability. To know the associated predicted class, a classification threshold is used, most of the time equal to 0.5. Probability values greater than this threshold are mapped to one class, and the remaining are mapped to another. From the confusion matrix in table 4.3, we can compute: accuracy is the proportion of correct results that a classifier achieved: acc = T P + T N T P + T N + F P + F N This very intuitive metric loses interest when data are imbalanced, as it can give an over-optimistic measure of the model's predictive performance. For instance, on the Breastcancer dataset where the percentage of negative cases (viz., the tumor is benign) is 0.94 (see table 4.2), a model that has learned to detect only benign tumors will obtain a high accuracy. However, measuring the model's ability to predict malignancies can be equally useful, depending on the context. Thus, other metrics must be considered. precision is the number of correctly predicted positive observations over the total number of predicted positive observations: pre = T P T P + F P recall is the number of correctly predicted positive observations over the total number of observations in the actual class: rec = T P T P + F N f 1-score is the harmonic mean of the precision and recall: f 1 = 2 * pre * rec pre + rec Both the false positives and false negatives are considered which makes this metric very useful, especially when the data are imbalanced. Receiving Operator Characteristic (ROC) curve is designed by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold values (see Fig. 4.2a). 
TPR indicates how many positive predicted outcomes occur among all positive samples: T P R = T P T P + F N FPR defines how many incorrect positive outcomes occur among all negative samples: F P R = F P F P + T N ROC curve is appropriate when the observations are well balanced between each class. However, it can give an overly optimistic view of the model's performance if datasets are very imbalanced. Precision-recall curve illustrates the trade-off between precision and recall for various threshold values (see Fig. 4.2b). This curve is tailored for problems where classes are very imbalanced, but also when the positive class is more interesting than the negative one. The two aforementioned parametric curves are not performance measures by themselves but representations of how well classifiers can discriminate between the two classes for different threshold settings. Thus, to obtain a measure of performance across all possible classification thresholds, we compute the area under the curve. These metrics are further referred to as AUROC and AUPR for the ROC curve and the precision-recall curve, respectively. Models used for comparison Parametric interpretable models Among the most simple and interpretable models, we select the scikit-learn implementations of ordinary least square regression (OLS ), logistic regression (LR) and decision trees of depth no more than 5 (to preserve interpretability). We also consider a standard Bayesian inferred GLM (B-GLM ) implemented with the Pymc3 library. This model does not include the prior knowledge of a hierarchical structure nor interactions between variables. A variant with interactions discovered by the polynomial network presented in section 3.3.3 is also considered and is further referred to as B-GLM-int. On the French Highway dataset, we also train a generalized linear mixed model (GLMM ) implemented with the statsmodels1 library. Moreover, we include two variants of Generalized Additive Models (GAM). For the first one, GAM-splines, based on a spline basis, we use the PyGAM2 implementation. For the second one, explainable boosting machine (EBM ), based on gradient boosting with bagging, we use the implementation provided by the InterpretML framework [START_REF] Nori | Interpretml: A unified framework for machine learning interpretability[END_REF]. We also compare our approach to genetic programming based symbolic regressions with the reference implementations of the gplearn3 package (SR-GP ) and GP-GOMEA 4 (SR-Gomea) [START_REF] Virgolin | Improving model-based genetic programming for symbolic regression of small expressions[END_REF], the latter being known to perform well on many real world datasets [START_REF] William | Contemporary symbolic regression methods and their relative performance[END_REF]. For both implementations, the set of operators is restricted to the one we use in our approach (viz., {+, -, ×, ln}). We also consider SR-Gomea-op, the same model with a less restricted set of operators (viz., {+, -, ×, ln, cos, sin, √ }), the same as the one used by [START_REF] William | Contemporary symbolic regression methods and their relative performance[END_REF] in their recent survey. Note that GP-GOMEA does not currently have an implementation for classification problems. For all the aforementioned interpretable models, we apply a no pooling approach that accounts for the clusters discovered by our Hierarchical structure module (see section 3.2). 
This approach fits a separate model for each cluster and considers that no similarities exist between them. Non-parametric black-box models We select three highly flexible black-box models: (i) the scikit-learn implementation of Support Vector Machines (SVM) and (ii) Multilayer Perceptrons (MLP), and (iii) the XGBoost ( [START_REF] Chen | Xgboost: A scalable tree boosting system[END_REF]) gradient tree boosting library5 Models from our proposal We consider the first proposal of ours namely, the Bayesian hierarchical GLM (BH-GLM-int). Recall that the latter is based on the bayesian inference of a linear hierarchical model with data-driven discovery of objective priors in the form of i) a hierarchical structure (chapter 3.2) and ii) strong first-order interactions obtained through the analysis of the structure of a trained self-adaptive polynomial network (chapter 3.3). We also include the variant without interactions between variables (BH-GLM ). For each dataset, the associated parameters and priors are given in table 4.4. 4.4: Parameters and priors for the Bayesian hierarchical models Moreover, we consider several variants of our second proposal. SR-trad and SRmax use only the global model of section 3.4.3 while HSR-trad and HSR-max use the cluster-specific models of section 3.4.3 (prefix "H" stands for hierarchical ). We also train our hierarchical symbolic regression on clusters discovered with a hierarchical agglomerating clustering applied to the training data, including the dependent variable. These models are further referred to as HSR-naive-trad and HSR-naive-max. Finally, SR-NP-trad and SR-NP-max are cluster-specific models learned with a no pooling approach, meaning that they do not include the knowledge of the global model. The suffixes trad and max are used to distinguish models selected near the elbow of the Pareto front, that should have a good trade-off (whence trad ) between complexity and predictive performance, from models of maximum complexity (whence max ). Dataset Link a #clusters 1st level 2nd level 3rd level Regression FrenchHighway Log 4 Y ik ∼ NB(µ ik , α) b , α ∼ G(a = 0.5, b = 2.5) c β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(5) Insurance Id d 2 Y ik ∼ N (µ, 1000) β j ∼ N (µ, σ) µ ∼ N (0, 100), σ ∼ H(5) Airbnb Id 4 Y ik ∼ N (µ, 5) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) House Id 4 Y ik ∼ N (µ, 1000) β j ∼ N (µ, σ) µ ∼ N (0, 100), σ ∼ H(5) Puma Id 3 Y ik ∼ N (µ, 10) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) Satellite Id 3 Y ik ∼ N (µ, 20) β j ∼ N (µ, σ) µ ∼ N (0, 50), σ ∼ H(5) Wind Id 3 Y ik ∼ N (µ, 10) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) Breast tumor Id 2 Y ik ∼ N (µ, 20) β j ∼ N (µ, σ) µ ∼ N (0, 50), σ ∼ H(5) Music Id 3 Y ik ∼ N (µ, 10) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) Wine Id 3 Y ik ∼ N (µ, 10) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) Toxicity Id 2 Y ik ∼ N (µ, 5) β j ∼ N (µ, σ) µ ∼ N (0, 10), σ ∼ H(2.5) Gas Id 2 Y ik ∼ N (µ, 20) β j ∼ N (µ, σ) µ ∼ N (0, 50), σ ∼ H(5) Classification Breastcancer Logit 2 Y ik ∼ B(p = exp(µik) 1+exp(µik) ) e β j ∼ N (µ, σ) µ ∼ N (0, 100), σ ∼ H(5) Adult Logit 4 Y ik ∼ B(p = exp(µik) 1+exp(µik) ) β j ∼ N (µ, σ) µ ∼ N (0, Hyper-parameters tuning For fair comparisons, the hyper-parameters of the models presented in section 4.4.1 and section 4.4.2 are optimized by cross-validation with grid-search. For each model, the grid of hyper-parameters' values are given in table 4.5. 
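As an illustration of this tuning step, the sketch below runs a scikit-learn grid-search for one of the black-box comparison models (XGBoost). The grid values are stand-ins for those of table 4.5, which are not reproduced here, and the synthetic data only serves to make the example self-contained.

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold
from xgboost import XGBRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)

param_grid = {                      # illustrative grid, not the one of table 4.5
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1],
}
search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=0),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)   # chosen grid point and its cross-validated RMSE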
Moreover, we conducted a grid-search for the SR's hyper-parameters (viz., number of expression trees, depth of an expression tree, parameters controlling the annealing temperature in simulated annealing) on two datasets, Insurance and Adult. We obtained the following results: Expression tree depth Deeper trees lead to more complex models, mainly due to the possibility of deep compositions of functions. Motivated by finding a good com- promise between the complexity of the final models and their predictive performance, we restrict the tree depth to 4. In this way, each expression tree, being a perfect binary tree, has 8 leaves. Number of expression trees There is no noticeable improvement in predictive performance when the number of expression trees exceeds two-thirds of the number of features after data preparation (viz., one hot encoding). Parameters of the simulated annealing The grid-search is also conducted on the parameters of the simulated annealing. For the heating phase, we find that λ h = 1.15 is a good value for reaching quickly a suitable starting temperature T 0 . For the cooling phase, the ratio λ c = 0.85 is a good balance between algorithmic efficiency and not falling into local optimums too quickly. The parameter γ c , which controls the growth of s c (viz., the number of iterations between two updates of temperature) is set to 1.15. Results We apply a 5-fold cross-validation to measure the predictive performances of the models. Results obtained on regression datasets are reported in table 4.7 and table 4.8, the latter for cluster-specific interpretable models trained with a no pooling approach. Results for classification datasets are shown in table 4.9. Hierarchical structure module For each dataset, the optimal number of clusters computed in the Hierarchical structure module is given in table 4.6. Moreover, to validate the ability of this module to associate an unknown sample to a cluster, a train-test split approach is applied on each training subset of the 5-fold cross-validation. For each training subset, the decision tree classifier is trained, on 80% of the data, to predict, based on the explanatory variables, the cluster to which a new observation belongs. A f 1-score is computed on the remaining 20% of each training subset. The decision tree classifier is highly accurate on all datasets (see table 4.6). Regression datasets BH-GLM and BH-GLM-int obtain better RMSE and MAD than the fully interpretable models (viz., OLS, decision tree, B-GLM, B-GLM-int) on most datasets. Moreover, on the French Highway dataset, BH-GLM obtains a 6.64% decrease (6.77% As expected, we observe the superior predictive performance of modern tree-based algorithms (viz., XGBoost and EBM ). However, the black-box models, SVM and MLP, do not appear to have the expected high predictive performances as BH-GLM and BH-GLM-int obtain similar if not better RMSE and MAD on most datasets. SR-GP, the symbolic regression based on genetic programming, obtains poor results and is even dominated by the fully interpretable models on all datasets. The more recent approach SR-Gomea obtains better predictive performance than SR-GP but is still dominated by HSR-trad and HSR-max. SR-Gomea-op does not highlight significant predictive gains compared to SR-Gomea on most datasets. This validates that restricting the operators makes it possible to obtain interpretable functional forms with more than satisfactory predictive performance on real world datasets. 
HSR-trad, the model that, according to our second proposal, should offer a good trade-off between performance and complexity, obtains better RMSE and MAD than BH-GLM and BH-GLM-int on most datasets. As expected, HSR-max, the most complex model resulting from our approach, performs better than HSR-trad, except for the Insurance dataset where they obtain similar predictive performance. Moreover, the fact that these models have performance metrics with low standard deviations testifies to their robustness. Indeed, they are likely to discover similar solutions on similar datasets. HSR-trad and HSR-max, the cluster-specific models, often show a clear improvement when compared to the global models SR-trad and SR-max. The partial pooling approach has a clear interest given that HSR-trad and HSR-max outperform SR-NPtrad and SR-NP-max, their no pooling variants. Also, our approach to discover a hierarchical structure is robust, efficient and obtain better results on all datasets compared to the approach that considers more naive clusters. Indeed, on all datasets, HSRtrad and HSR-max are substantially better than HSR-naive-trad and HSR-naivemax. Moreover, incorporating the data-driven discovery of a hierarchical structure not only provides better predictive performances, it also offers better interpretability by capturing cluster-specific phenomena (see section 4.6). HSR-max and GAM-splines have similar performances on all datasets but the Insurance dataset where HSR-max discovers a significant interaction between the body mass index and being a smoker. However, the no pooling variant of GAMsplines is slightly better than HSR-max on the Insurance dataset. Finally, EBM performs well on all datasets. It obtains the best performances on the Insurance and Gas datasets and is similar, if not better, to XGBoost on the French Highway, Airbnb, and Wine datasets. Classification datasets For the breast cancer dataset, the search for a hierarchical structure (see section 3.2) identifies two homogeneous clusters and there is no need to learn cluster-specific models. Thus, for this dataset, we consider only the global models SR-trad and SRmax. Still for this same dataset, all models perform very well. Nonetheless, SR-max is as, or even more, efficient than LR and black-box models. Also, we observe that BH-GLM and BH-GLM-int, the Bayesian models that account for the hierarchical structure in the data, perform better than the standard Bayesian models B-GLM. Embedding major first-order interaction also improves the predictive performance. XGBoost and EBM achieve the best results on the adult dataset. HSR-trad obtains similar results to LR, thus suggesting that sparse models can have similar predictive performances than models that consider all explanatory variables, the latter being prone to the information overload phenomenon. HSR-max obtains similar, if not better, results than black-box models. Its performance is close to the one of GAM-splines. As observed previously for regression datasets, HSR-trad and HSRmax still perform better than BH-GLM and BH-GLM-int. Also, they are better than the global models (viz., SR-trad and SR-max ), thus confirming the benefits of incorporating a hierarchical structure in case of classification problems. 
Discussion Confirming previous studies [START_REF] Lou | Intelligible models for classification and regression[END_REF][START_REF] Caruana | Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission[END_REF], we observe that EBM, as a variant of GAM, is very efficient on both regression and classification tasks. Moreover, this model meets many of the expected criteria for interpretability enumerated in [START_REF] Barredo Arrieta | Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai[END_REF]. However, it also has limitations that can make it unsuitable for road safety analysis. First, different optimization strategies adopted to learn an EBM model, can lead to different interpretations of its predictions [START_REF] Chang | How interpretable and trustworthy are gams[END_REF]. However, for road safety analysis, trust in the identification of the main risk factors is required by experts when they elaborate remedial actions. Moreover, for satisfactory interpretability, it helps if a GAM has a small number of components and if each component function is relatively smooth. However, EBM, due to their reliance on boosted trees, can hardly maintain these constraints [START_REF] Rudin | Interpretable machine learning: Fundamental principles and 10 grand challenges[END_REF]. With our approach, field experts are more likely to be confident in models with cluster-specific behaviors and stable functional forms that highlight a selection of relevant factors and their interactions. Furthermore, for the French Highway dataset, as already observed in [START_REF] Veran | Crash prediction for a french highway network with an xai-informed bayesian hierarchical model[END_REF], the best known strategy to estimate the number of crash counts is to average, for each highway network segment, the number of accidents that occurred in previous years (c.f., the Local model in table 4.7). We also observe that the predictive performances of GLMM are equivalent to those of the Local model. When looking at the model in more detail, we understand that the introduced random effects adjust the predictions of fixed effects such that they match the averages of the historical crashes observed on the segments. However, these models (viz., GLMM and Local ) do not offer much insight about the associations between crash counts and risk factors. We observed that flexible models, such as EBM, are able to approach in performance these local models by discovering quasi-identifiers of road segments. For example, an EBM discovers a complex nonlinear relationship between the altitude and the number of accidents, see Fig. 4.3. Accidents appear more likely for the lowest altitudes. However, this phenomenon should not be interpreted as a potential risk factor linked to the altitude. In fact, the model is using the altitude as a proxy variable to identify a group of nearby road segments. Therefore, in that particular context, EBM, despite its good predictive performance, does not always provide relevant information to field experts. It can even, at times, mislead them. Although plots like the one of Fig. 4.3 make it possible to identify these potentially misleading models' behaviors, the EBM model does not provide alternative associations between the explanatory variables and the target. 
With our approach, the risk of misinterpretation is reduced thanks to the successive models on the Pareto front: from less complex, which capture only overall effects, to most complex, which are flexible enough to focus on hazardous configurations specific to a few roadway segments. Through such a dynamic interpretative process, field experts can use the model best suited to meet their needs. In the next section, we illustrate this process with an experimental study applied to the French Highway dataset. Dynamic interpretative process Introduction A glass-box interpretable crash prediction model able to highlight the marginal effects of the potential risk factors is a valuable tool to help design effective highway safety policies. Indeed, in such a context, the predictive model must limit as much as possible the risks of misinterpretation. In this section, we show how, on a realistic use case validated by field experts, the framework we propose responds to this challenge by making possible an efficient dynamic interpretative process that leads to selecting a predictive model well suited to the task at hand. Each functional block of our framework (see figure 3.2) plays an important role in the methodology we propose. The hierarchical structure module brings out a relevant partition of the highway network. This partitioning of the measured crash counts, with their associated explanatory variables, not only improves the performance of the models (see section 4.5) but also refines the analysis of marginal effects by capturing cluster-specific phenomena. Moreover, based on the multi-objective optimization leading to global and cluster-specific models, safety experts can explore a list of optimal models that offers a variety of alternatives along the performance/complexity trade-off axis. They will often start with the simpler ones that capture only the global effects, and move towards the more complex ones that can represent more localized phenomena. To support this methodology, we developed, in dialogue with field experts, a graphical user interface (see Fig 4.4). In the following description of a case study, we suppose that the partitioning of the highway network and the training of the global and cluster-specific models have already been done on ten years of data, from January 1st, 2008 to December 31th, 2017. Data from 2018 is used to validate that, based on out-of-sample predictions, the framework provides useful information to safety experts. Use case Partitioning the highway network From the hierarchical structure module of section 3.2, safety experts identified four relevant clusters (see Fig. 4.5). In table 4.10, we share some descriptive statistics of explanatory variables and crash count for each cluster. We observe that clusters are heterogeneous on most of the explanatory variables and on the crash count. For instance, cluster 1 is representative of hazardous segments with high traffic, whereas cluster 3 is made of less accidental secondary segments that connect the main highways. where the additional variables x 0 , x 1 and x 8 are, respectively, the speed limit, the number of interchanges and the presence of tunnels. Out-of-sample predictions from models 1 and 2 differ locally (see Fig 4.7a and Fig 4.7b). In particular, in mountainous areas, segments considered as moderately hazardous by the first model, are now associated with a high risk of accidents due to the discovery of a first-order interaction between the presence of tunnel and the speed limit. 
Safety experts, by combining prior knowledge of the network with the observed transition from model 1 to model 2, are confident that this interaction is one of the main reasons why a large number of accidents have occurred on these segments during the ten years covered by the training data. This discovery may support a proposal for reducing the authorized speed limit on these specific segments. Also, more refined models are sometimes able to better identify less hazardous 86 configurations. For instance, a segment close to Belfort was identified as highly hazardous by model 1 (between 20 and 25 predicted crashes) while model 2 considers it as moderately hazardous (between 15 and 20 predicted crashes). As remedial or preventive actions may involve significant costs, predicting less hazardous configurations is therefore beneficial because actions will be implemented at first on more prominent hazardous segments. So, thanks to this more complex model, field experts optimise the order of safety policies and funding shall be better allocated. Nonetheless, one of the major difficulties for the interpretation of more complex models is related to the introduction of collinearities and interactions between continuous variables. To illustrate this, consider model 3 from Fig. 4.6: Model 3: ŷ = -33.3 -0.00489x 10 + 1.57x 2 -9.37 • 10 -6 x 3 x 6 + 0.00286x 3 + 0.15x 6 Model 3 is characterized by an interaction between the averaged altitude x 6 and the traffic x 3 . By focusing on this interaction, model 3 better fits training data than model 2 but its effects plots (see Fig 4.11a) are arguably more difficult to interpret. However, our framework only produces differentiable closed-form expressions for which it is always possible to compute the partial analytical derivatives (PD) w.r.t. variables of interest, to quantify explicitly their partial effects (i.e., a measure of the conditional effect of a variable on the target) [START_REF] Seidyo | Measuring feature importance of symbolic regression models using partial effects[END_REF]. In this sense, we can understand how a unit change in an explanatory variable affects the crash count when other variables are held constant. For instance, the partial derivatives for the trafic x Histograms of the pointwise partial derivatives can be useful interpretative tools (see Fig. 4.10). Although the partial effects of the traffic x 3 are mostly positive, a few are negative for segments of high altitudes and above average traffic (see Table 4.11). Thus, these variations in partial derivatives emphasize that the relation between the crash count and x 3 is more complex than the linear dependency proposed by model 1 and model 2. By introducing this novel interaction, model 3 manages to capture more variability in the dependent variable than the previous models. Finally, model 4 is much more complex: Model 4: ŷ = -34.7 + 8.5 • 10 -6 x 2 1 -0.000671x 1 x 3 -8.5 • 10 -6 x 1 x 6 + 13.5x 1 + 0.0113x 10 + 1.92x 2 -8.5 • 10 -6 x 3 x 6 + 0.000679x 3 x 8 + 0.00269x 3 -0.0276x 4 + 1.61x 5 + 0.133x 6 + 0.12x 7 -0.276x 8 + 0.0472x 9 As we can see from Fig. 4.11b, the introduction of new correlated terms improves slightly the fit. To capture extra variability in the dependent variable, model 4 where many accurate-but-different models exist to describe the same data [START_REF] Semenova | A study in rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning[END_REF]. 
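Since every model on the Pareto front is a closed-form Sympy expression, the partial effects mentioned above can be obtained analytically. The sketch below differentiates model 3 with respect to the traffic x_3 and evaluates the derivative pointwise; the two altitude values are made up for illustration.

import numpy as np
import sympy as sp

x2, x3, x6, x10 = sp.symbols("x2 x3 x6 x10")
model3 = (-33.3 - 0.00489 * x10 + 1.57 * x2
          - 9.37e-6 * x3 * x6 + 0.00286 * x3 + 0.15 * x6)

pd_traffic = sp.diff(model3, x3)          # partial effect of the traffic: 0.00286 - 9.37e-6 * x6
print(pd_traffic)

# Pointwise evaluation, e.g. to build the histogram of partial derivatives
pd_fn = sp.lambdify(x6, pd_traffic, "numpy")
altitudes = np.array([210.0, 450.0])      # averaged altitude x6 on two segments
print(pd_fn(altitudes))

With these made-up inputs the partial effect of the traffic is positive at 210 m and negative at 450 m, in line with the sign change for high-altitude segments described above.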
As discussed by [START_REF] Rudin | Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[END_REF], we argue that the availability of multiple efficient predictive models is useful since field experts may have more flexibility in choosing a model that they find interpretable. Moreover, we help them in this process as our definition of the complexity warns them when models are likely to be difficult to understand. Finally, the dynamic interpretative process can be a useful tool to construct new handmade predictive models, based on the knowledge learnt by analyzing the Pareto optimal models. For instance, we have seen that the interactions introduced in model 2 and model 3 are both valuable. The user could consider building a new model with both of them. Towards causality A central question remains: among these different models, how can be distinguished the trustworthy ones from the ones based on spurious associations due to inductive bias? As illustrated above, one way is to rely on the diligence of the user equipped with expert knowledge and effective tools. This could also be partially automated when prior knowledge of the conditional independences between variables is formalized, e.g., as a causal graph [START_REF] Pearl | Causality[END_REF]. Such approaches are beyond the scope of our current work. However, our framework fosters a dynamic interpretative process that, combined with a clear quantification of uncertainty, is a useful tool to identify variables of interest and to understand how they interact. Therefore, we can surmise that our framework could facilitate the development of causal models. Chapter 5 Conclusion and perspectives Conclusion Predictive models are being used increasingly to make high stake decisions. For many applications, there is a need for both accuracy and interpretability. For instance, in highway safety analysis, we argue that a preference should be given to predictive models that are both accurate and glass-box interpretable in order to increase the confidence of safety experts in the identification of hazardous segments. Motivated by these requirements, in our first contribution, we introduced a framework to build efficient and interpretable Bayesian hierarchical models for regression or classification tasks. We proposed a data-driven discovery of objective priors in the form of a hierarchical structure and strong first-order interactions between explanatory variables. We start with a trained and usually efficient, albeit opaque, ML algorithm in order to compute for each observation the Shapley values of the explanatory variables. Then, a partition of the instances, related to how the ML algorithm predicts the target, emerges from the hierarchical agglomerative clustering of the observations described by the Shapley values. Furthermore, we analyze the structure of a trained self-adaptive polynomial network to discover important firstorder interactions. This prior knowledge is then embedded into the definition of a Bayesian hierarchical GLM with nonlinear functional form. In our second contribution, we proposed to exploit even better the discovery of the hierarchical structure by using symbolic regression to discover sparse interpretable models with rich interactions. We combined a multi-objective simulated-annealingbased symbolic regression and a partial pooling approach to discover models that capture global effects and cluster-specific effects. 
More specifically, we start by computing a Pareto front of global predictive models. We select among these models the one offering a good trade-off between its predictive performance and its complexity. Afterwards, for each cluster of the previously discovered hierarchical structure, the global model is used as the starting seed for a new multi-objective symbolic regression. Finally, the best models, i.e. the ones appearing on the Pareto fronts, are re-estimated through Bayesian inference in order to associate uncertainty estimates to their coefficients. On fourteen datasets, covering both regression and classification tasks, our two proposals outperform most interpretable models. On some datasets, we achieve performance comparable to that of non-parametric black-box models. Furthermore, we presented a case study based on the highway network dataset to validate the new dynamic interpretative process made possible by our second proposal. As our approach discovers transparent and parsimonious symbolic models, safety experts can be more confident in their understanding of the relations between the explanatory variables and the dependent variable. Moreover, thanks to Bayesian inference, the risk factors are associated with measures of uncertainty. In addition, the use of Pareto optimization allows field experts to build a multi-scale view of the risk factors, from the most general to the most specific. Perspectives First, we plan to improve the dynamic interpretative process by generating humanreadable explanations of the relationships between Pareto optimal models, such as: model X is an extension of model Y and improves the predictive performance on training data by 3% thanks to the introduction of a new interaction between variables x A and x B . In this way, field experts will have a direct understanding of the Pareto front. Then, we are going to test several divisions of the road network into segments. Currently, we generate the dataset for crash prediction, we first divide the network with segments of equal length and then compute the variables for each segment (see section 4.1.1). With this procedure, we can capture the heterogeneity of the explanatory and dependent variables along the network. However, other types of meshes can be used. For instance, we can use the segments identified by the SURE approach (see section 1.1). Also, Deublein et al. [START_REF] Deublein | A bayesian network model to predict accidents on swiss highways[END_REF] divide the network into homogeneous segments where explanatory variables are constant, assuming that the dependent variable is weighted by an exposure term that depends on the length of the section and traffic. Thus, future work will analyze the effects of different types of meshes. Also, the user interface will allow to choose the type of mesh. Thus, field experts will be more aware of the effects of this important prior. Moreover, our framework relies on a specific approach to discover a hierarchical structure. Even though we validated its robustness on numerous datasets, promis-ing next steps involve analyzing other methods that are compatible with ours. 
For instance, in [START_REF] Rengasamy | Towards a more reliable interpretation of machine learning outputs for safety-critical systems using feature importance fusion[END_REF][START_REF] Rengasamy | Mechanistic interpretation of machine learning inference: A fuzzy feature importance fusion approach[END_REF], the authors propose an efficient ensemble feature importance method where multiple feature importance approaches are applied to a set of ML models and their crisp importance values are combined to produce a final importance for each feature. Thus, we will constitute a benchmark of feature importance methods [START_REF] Barredo Arrieta | Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai[END_REF] and evaluate them based on their efficiency, scalability, and on the quality of computed clusters. Lastly, we will also extend the framework for near real-time crash risk assessment. New spatio-temporal explanatory variables, such as weather-related variables (e.g. rainfall, snow, wind, etc.) or the pavement quality, will be introduced in the analysis. In this context, since remedial actions will probably affect humans' lives even more directly, having both efficient and interpretable models will be all the more important to assist safety experts in their work. Les modèles paramétriques simples ont tendance à être plus interprétables mais moins performants que les modèles non paramétriques très flexibles qui fonctionnent comme des boîtes noires. Lorsqu'ils réfléchissent à des décisions à fort enjeu, comme dans le contexte de la sécurité routière, les experts métier s'attendent à ce que les modèles prédictifs soient à la fois performants et interprétables. En effet, les modèles doivent les aider à concevoir et à déployer des actions de sécurité préventives ou correctives. Dans ces travaux, nous contribuons à améliorer les performances prédictives des modèles paramétriques tout en conservant leur interprétabilité. En premier lieu, une structure hiérarchique bien choisie peut gérer les corrélations entre groupes d'observations et améliorer significativement la qualité des prédictions des modèles et leur interprétation. Nous proposons de l'apprendre en exploitant le résultat d'une méthode d'interprétabilité post-hoc (viz., SHAP) appliquée à un modèle boîte noire flexible (viz., XGBoost). Puis, nous introduisons deux approches algorithmiques (viz., un réseau de neurones polynomial, et une extension de la régression symbolique multi-objectif) pour découvrir des transformations non linéaires des variables explicatives. Ces dernières, tout en restant simple (e.g. interactions de premier ordre, logarithme), permettent aux modèles de capturer une plus grande partie de la variabilité dans la variable dépendante. De plus, avec la régression symbolique multiobjectif, nous calculons un classement spécifique à chaque cluster des expansions de modèles linéaires régularisés ordonnés par complexité croissante. Cela facilite un processus d'interprétation dynamique qui permet de découvrir des modèles prédictifs efficaces, efficients et interprétables. Des expériences ont été menées sur un jeu de données de sécurité routière et sur plus de dix jeux de données publics couvrant des problèmes de classification et de régression variés. Les résultats obtenus sont prometteurs étant donné que nos deux contributions surpassent les modèles interprétables traditionnels et se rapprochent des meilleurs modèles non paramétriques boîtes noires. 
Enfin, nous illustrons les bénéfices de notre approche en présentant, sur une étude réelle de cas, une application que nous avons conçue pour les experts de la sécurité routière. MOTS-CLÉS 4 Experiments 4 . 1 441 Description of the datasets . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 French highway dataset . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Other datasets . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Data preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Models used for comparison . . . . . . . . . . . . . . . . . . . . . . . 4.4.1 Parametric interpretable models . . . . . . . . . . . . . . . . . 4.4.2 Non-parametric black-box models . . . . . . . . . . . . . . . . 4.4.3 Models from our proposal . . . . . . . . . . . . . . . . . . . . 4.4.4 Hyper-parameters tuning . . . . . . . . . . . . . . . . . . . . . 4.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.5.1 Hierarchical structure module . . . . . . . . . . . . . . . . . . 4.5.2 Regression datasets . . . . . . . . . . . . . . . . . . . . . . . . 4.5.3 Classification datasets . . . . . . . . . . . . . . . . . . . . . . 4.5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Dynamic interpretative process . . . . . . . . . . . . . . . . . . . . . 4.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.6.2 Use case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi List of Figures 4 . 4 44 Parameters and priors for the Bayesian hierarchical models . . . . . . 4.5 Hyper-parameters tuning . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Performance of the prediction to associate a new observation to its cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.7 Results on the regression datasets . . . . . . . . . . . . . . . . . . . . 4.8 Results obtained by cluster-specific interpretable models . . . . . . . 4.9 Results on the classification datasets . . . . . . . . . . . . . . . . . . 4.10 Descriptive statistics of variables for each cluster . . . . . . . . . . . . 4.11 Description of explanatory variables for the overall cluster-specific data and for samples where partial derivatives w.r.t. the traffic variable are the lowest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 1 . 1 : 11 Figure 1.1: APRR's highway network[START_REF] Aprr | Cartography of the highway network[END_REF] Figure 1 . 2 : 12 Figure 1.2: Bias and variance contributing to total error[START_REF] Fortmann-Roe | Understanding the bias-variance tradeoff[END_REF] Figure 2 . 1 : 21 Figure 2.1: Example of a shallow decision tree trained on the French highway dataset Figure 2 . 2 : 22 Figure 2.2: Partial dependence of the annual average daily traffic for a gradient boosting tree model on the highway dataset Figure 2 . 3 : 23 Figure 2.3: Toy example of a linear SVM, with the red line being the optimal hyperplane. Figure 2 . 
4 : 24 Figure 2.4: Multilayer perceptron (a) simple architecture with one single hidden layer (b) toy example of the computation of an output [START_REF] Huang | Predicting crash frequency using an optimised radial basis function neural network model[END_REF] compare a radial basis functions neural network (RBFNN) with BPNN and NB regression for crash frequencies prediction. RBFNN obtains the best results. Xie et al.[START_REF] Xie | Predicting motor vehicle collisions using bayesian neural network models: An empirical analysis[END_REF] use a bayesian neural network for crash-count prediction. The neural network model outperforms BPNN and NB regression. Figure 2 . 2 Figure 2.5: Sensitivity analysis performed on a Bayesian neural network for the traffic variable, from Xie et al. [133, p.930] Figure 3 . 1 : 31 Figure 3.1: Description of a Bayesian hierarchical generalized linear model Figure 3 . 2 : 32 Figure 3.2: Proposed framework for an interpretable hierarchical symbolic regression Figure 3 . 3 : 33 Figure 3.3: Force plot of the explanatory variables' contributions to the estimated crash count for a specific observation. On this example, the traffic has the biggest positive contribution and explains most of the crash count shift from its overall expected value Figure 3 . 4 : 34 Figure 3.4: Observations grouped by similarity of their forceplots Figure 3 . 3 Figure 3.5: Clusters automatically discovered by the hierarchical structure module, for 2018 3.8b) between Lyon and Dijon and in the vicinity of Paris and Belfort. (a) Observed crash count (b) Annual average daily traffic Figure 3 . 6 :Figure 3 . 7 : 3637 Figure 3.6: Variables' distributions for 4 identified clusters (a) Observed crash count (b) Annual average daily traffic Figure 3 . 8 :Figure 3 . 9 : 3839 Figure 3.8: Variables' distributions for 2 identified clusters Figure 3 . 10 : 310 Figure 3.10: Variables' distributions for 6 identified clusters Figure 3 . 11 : 311 Figure 3.11: Two examples of posterior distributions Figure 3 . 12 : 312 Figure 3.12: Structure of the GMDH model (a) Without interaction effects (b) With interaction effects Figure 3 . 13 : 313 Figure 3.13: Triptych plots of predicted crash counts vs. annual average daily traffic. Note that explanatory variables are standardized (see section 4.2). Figure 3 . 14 : 314 Figure 3.14: Posterior Predictive Checks (PPC) on two clusters. Left column: observed data (cluster 0: 1491 samples; cluster 2: 965 samples). Right column: replicated data (2000 simulations) Figure 3 . 3 Figure 3.15: A functional form associated with a set of expression trees. 1 :M 1 function Simulated Annealing(T min , λ h , s h , λ c , s c , γ c , max) curr , S curr ← initialize (m) 8: , M curr , S curr , acc, rej ← explore(T, i, ζ, M curr , S curr , acc, rej) 17: if i mod s c = 0 then 18: T ← T × λ c , s c ← s c × γ c 19: Figure 3 . 3 Figure 3.16: A sequence of transformations applied to an expression tree. (a) An initial expression tree. Blue integers refer to node indices. (b) A transformation is applied to operator node 1, thus modifying the underlying functional form. (c) The transformation applied to leaf node 6 is muted due to its left operator parent. (d) Later in the process, node 6 can be reactivated when its parent is transformed. Figure 3 . 17 : 317 Figure 3.17: Effect of the annealing temperature on the acceptance probability. 
.5) with ζ the set of solutions that approximate the Pareto front, | ζ| the cardinality of ζ ∪ {S cur , S new }, and | ζS | the number of solutions in | ζ| that dominate S (see Fig. 3.18). Moreover, to smooth the estimated acceptance probability distribution, new artificial points are added to the attainment surface to get an evenly spread attainment surface over the two dimensions of the Pareto front [115]. Updates of the Pareto front Finally, when S new is accepted, the Pareto front ζ is updated (see function update of algorithm 3) by removing the solutions dominated by S new and then adding S new to ζ when it is not dominated by any other solution Figure 3 . 18 : 318 Figure 3.18: Example of an approximated Pareto front and its attainment surface, adapted from [119, p. 322]. From Eq. 3.5, ∆E(S cur , S new ) = (2 -4)/9 = -2/9 Figure 3 . 19 : 319 Figure 3.19: Extraction of a cluster-specific functional form from a set of expression trees and the fixed functional form of the global model Figure 4 . 1 : 41 Figure 4.1: Distribution of the observed crash counts Figure 4 . 2 : 42 Figure 4.2: Examples of (a) ROC curve and (b) precision-recall curve obtained on the FrenchHighway dataset 100), σ ∼ H(5) a Link function; b Negative Binomial distribution; c Gamma distribution; d Identity function; e Bernoulli distribution Table Figure 4 . 3 : 43 Figure 4.3: Global explanation plot provided by the InterpretML framework for an EBM model on the altitude variable on the French Highway dataset Figure 4 . 4 : 44 Figure 4.4: Screenshot of the web application 10 :Figure 4 . 6 : 85 ( 104685 Figure 4.5: Clusters identified for 2018 Figure 4 . 7 : 47 Figure 4.7: Crash count predictions for 2018 Figure 4 . 8 : 48 Figure 4.8: Effects plots and posteriors of the most influencing variables for model 1. Effects plots are obtained by computing for all observations the effect of a variable j on the crash count, defined by effect(i) j = β j x (i) j, where β j is the coefficient estimate of the j-th variable of the model and x (i) j is the value of variable j on the i-th observation[START_REF] Molnar | Interpretable Machine Learning[END_REF]. Figure 4 . 9 : 49 Figure 4.9: Effects plots and posteriors plots of the most influencing variables for model 2 3 and altitude x 6 6 are respectively: PD(x 3 ) = δ ŷ δx 3 = 0.00286 -9.37 • 10 -6 x 6 , PD(x 6 ) = δ ŷ δx 6 = 0.15 -9.37 • 10 -6 x 3 Figure 4 . 4 Figure 4.11: Effects plots 'INSA LYON, MEMBRE DE L'UNIVERSITE DE LYON NOM : VERAN DATE de SOUTENANCE : 04/11/2022 Prénoms : Thomas, Marcel, Simon, Nicolas TITRE : Efficient and interpretable crash prediction models : an application to a French highway network NATURE : Doctorat Numéro d'ordre : 2022ISAL0096 Ecole doctorale : INFOMATHS Spécialité : Informatique RESUME : Dans le monde entier, les accidents de la route ont des impacts sociaux et financiers importants. Pour réduire leur fréquence et leur gravité, les modèles de prédiction d'accidents (CPM) sont utilisés pour identifier les segments de route dangereux et fournir des indices exploitables sur les facteurs de risque associés. Les CPM sont soit des modèles statistiques paramétriques, en particulier des modèles linéaires généralisés (GLM), soit des modèles d'apprentissage automatique avec un nombre important de paramètres sans estimation d'incertitude associée (e.g., ensemble d'arbres de décision, machine à vecteurs de support…). 
: Machine Learning, model interpretability, SHAP, Bayesian hierarchical modeling, symbolic regression, multiobjective optimization Laboratoire (s) de recherche : LIRIS Directeur de thèse : Prof. Dr. Jean-Marc PETIT Président de jury : Prof. Dr. Florence SEDES Composition du jury : Prof. Dr. Pierre GANCARSKI, Prof. Dr. Gabriele GIANINI, Prof. Dr. Florence SEDES, Prof. Dr. Julien JACQUES, Prof. Dr. Jean-Marc PETIT, Dr. Pierre-Edouard PORTIER 3.2.2 Training of a non-parametric ML model . . . . . . . . . . . . 3.2.3 Local post hoc explanations . . . . . . . . . . . . . . . . . . . 3.2.4 Discovery of a hierarchical structure . . . . . . . . . . . . . . . 3.2.5 Representation of the hierarchical structure in highway safety 3.3 Bayesian hierarchical generalized linear model . . . . . . . . . . . . . 3.3.1 Model description . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.2 Bayesian inference . . . . . . . . . . . . . . . . . . . . . . . . 3.3.3 Supervised learning of interactions with a polynomial network 3.3.4 Model evaluation and validation . . . . . . . . . . . . . . . . . 3.3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Interpretable hierarchical symbolic regression . . . . . . . . . . . . . . 3.4.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4.2 Symbolic regression with Pareto simulated annealing . . . . . 3.4.3 Partial pooling approach for symbolic regression . . . . . . . . 3.4.4 Uncertainty estimation . . . . . . . . . . . . . . . . . . . . . . 3.4.5 Implementation details . . . . . . . . . . . . . . . . . . . . . . 1.1 APRR's highway network[START_REF] Aprr | Cartography of the highway network[END_REF] . . . . . . . . . . . . . . . . . . . . . . . 1.2 Bias and variance contributing to total error[START_REF] Fortmann-Roe | Understanding the bias-variance tradeoff[END_REF] . . . . . . . . . . . . 2.1 Example of a shallow decision tree . . . . . . . . . . . . . . . . . . . . 2.2 Partial dependence on the French highway dataset . . . . . . . . . . . 2.3 Toy example of a linear SVM . . . . . . . . . . . . . . . . . . . . . . 2.4 Multilayer perceptron and the computation of a neuron output . . . . 2.5 Sensitivity analysis performed on a Bayesian neural network . . . . . .15 A functional form associated with a set of expression trees. . . . . . . 3.16 A sequence of transformations applied to an expression tree . . . . . 3.17 Effect of the annealing temperature on the acceptance probability. . . 3.18 Example of an approximated Pareto front and its attainment surface 3.19 Extraction of a cluster-specific functional form . . . . . . . . . . . . . 4.1 Distribution of the observed crash counts . . . . . . . . . . . . . . . . 4.2 Examples of ROC curve and precision-recall curve . . . . . . . . . . . xiii 4.3 Global explanation plot provided by the InterpretML framework . . . 4.4 Screenshot of the web application . . . . . . . . . . . . . . . . . . . . 4.5 Clusters identified for 2018 . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Pareto front for cluster 0 specific models . . . . . . . . . . . . . . . . 4.7 Crash count predictions for 2018 . . . . . . . . . . . . . . . . . . . . . 4.8 Effects plots and posteriors of the most influencing variables for model 1 4.9 Effects plots and posteriors of the most influencing variables for model 2 4.10 Histograms of partial derivatives . . . . . . . . . . . . . . . . . . . . . 4.11 Effects plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
.1 Illustrative description of the french highway dataset . . . . . . . . . 3.1 Rules for interval arithmetic . . . . . . . . . . . . . . . . . . . . . . . 3.2 Algebraic rules used to compute the complexity of each term . . . . . 3.1 Description of a Bayesian hierarchical generalized linear model . . . . 3.2 Proposed framework for an interpretable hierarchical symbolic regression 3.3 Example of a forceplot for a specific observation . . . . . . . . . . . . 3.4 Observations grouped by similarity of their forceplots . . . . . . . . . 3.5 Clusters automatically discovered by the hierarchical structure module 3.6 Variables' distributions for 4 identified clusters . . . . . . . . . . . . . 3.7 Clusters when the number of clusters is reduced to 2 . . . . . . . . . 3.8 Variables' distributions for 2 identified clusters . . . . . . . . . . . . . 3.9 Clusters when the number of clusters is increased to 6 . . . . . . . . . 3.10 Variables' distributions for 6 identified clusters . . . . . . . . . . . . . 3.11 Two examples of posterior distributions . . . . . . . . . . . . . . . . . 3.12 Structure of the polynomial network . . . . . . . . . . . . . . . . . . 3.13 Triptych plots of predicted crash counts vs. annual average daily traffic 3.14 Posterior Predictive Checks on two clusters . . . . . . . . . . . . . . . 3xiv List of Tables 14.1 Description of the French Highway dataset . . . . . . . . . . . . . . . 4.2 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Confusion matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Table 1 . 1 1: Illustrative description of the french highway dataset Segment Year Crash count AADT a %HV b #Interchanges . . . Presence of tollgates A6-2-354-364 c 2008 20 26456 19.59 0 . . . 1 A6-2-354-364 2009 20 26343 16.9 0 . . . 1 A6-2-354-364 2010 27 . . . . . . . . . 26656 . . . 17.31 0 . . . . . . . . . 1 . . . . . . A40-1-102-112 2017 10 12701 14 0 . . . 0 A40-1-102-112 2018 7 13015 14 0 . . . 0 a annual average daily traffic; b percentage of heavy vehicles c highway name -direction -reference point begin -reference point end Table 3 . 3 1: Rules for interval arithmetic, from [119, p.318]. We suppose that an operator node has two children. The left one is defined on [a, b] and the right one on [c, d]. if accept(perf new , compl new , perf curr , compl curr , ζ, T ) then M curr , S curr , perf curr , compl curr ← update( ζ, M new , S new , perf new , compl new ) Algorithm 2 Pseudo code of function explore 3: if integrityCheck(M new ) then 4: S new ← simplify(M new ) 5: if S new ̸ = S curr then 6: perf new ← measurePerformance(S new ) 7: compl new ← measureComplexity (S new ) 8: 9: ζ, 10: acc ← acc + 1 11: else 12: rej ← rej + 1 13: else 14: 1: function explore(T, i, ζ, M curr , S curr , acc, rej) 2: M new ← generate(M curr ) : Then, we select the model S glob on the Pareto front closest to Ω in the sense of the 57 Algorithm 3 Pseudocode of accept and update functions 1: function accept(perf new , compl new , perf curr , compl curr , ζ, T ) if perf new ≤ perf curr and compl new ≤ compl curr then function update(ζ, M new , S new , perf new , compl new ) M new , S new , perf new , compl new 2: is_accepted ← False 3: 4: is_accepted ← True ▷ new solution dominates, or is as good as, the current one 5: else 6: compute P according to Eq. 
3.4 7: draw randomly j in [0, 1] 8: if P ≥ j then 9: is_accepted ← True 10: return is_accepted 1112: is_dominated = False 13: for solution S in the Pareto front ζ do 14: if S dominates S new then 15: is_dominated = True 16: if is_dominated = False then 17: remove solutions in ζ dominated by S new 18: add S new to ζ 19: return ζ, Table 4 . 1 : 41 Description of the French Highway datasetSpatial variables A list of spatial variables is given in table type Min Max Mean Std. Dependent variable Crash count discrete 0 78 11.43 8.06 Explanatory variables Temporal AADT continuous 659 37644 12828 6791 Percentage of heavy vehicles continuous 1 35.1 17.39 5.80 Spatial Speed limit continuous 104 130 128.37 4.91 Number of interchanges discrete 0 7 0.2 0.68 Number of resting places discrete 0 3 0.63 0.57 Right shoulder width continuous 0.6 3 2.93 0.25 Altitude continuous 71.97 615.8 250.83 105.71 Presence of ramps binary 0 1 0.24 0.43 Presence of tunnels binary 0 1 0.02 0.13 Presence of tollgates binary 0 1 0.43 0.5 Presence of bridges binary 0 1 0.18 0.39 Number of instances in the dataset: 4152 Table 4 . 4 4.2. Datasets taken from https://epistasislab.github.io/pmlb/index.html † Datasets taken from https://archive.ics.uci.edu/ml/index.php 66 * 2: Datasets Table 4 . 4 3: Confusion matrix Table 4 4 .5: Hyper-parameters tuning Table 4 . 4 6: For each dataset: number of clusters selected and performance of the prediction to associate a new observation to its cluster for BH-GLM-int) in RMSE when compared to B-GLM, the standard Bayesian model. Dataset #clusters f1 (std) Regression French Highway 4 0.995 (0.003) Insurance 2 1.0 (0.0) House 4 0.908 (0.026) Puma 3 0.975 (0.013) Satellite 3 0.964 (0.003) Wind 3 0.944 (0.022) Breast tumor 2 0.995 (0.004) Music 3 0.96 (0.029) Wine 3 0.957 (0.012) Toxicity 2 0.95 (0.039) Gas 2 0.99 (0.003) Airbnb 4 0.985 (0.005) Classification Breastcancer 2 0.961 (0.028) Adult 4 0.995 (0.002) Table 4 . 
4 number of trees for SR-* and HSR-* models b averages and standard deviations of performance metrics obtained on 5-fold cross-validation 7: Results obtained on 12 regression datasets French Highway (8 a ) Insurance (6) Airbnb (12) Puma (6) RMSE (std) b MAD (std) RMSE (std) MAD (std) RMSE (std) MAD (std) RMSE (std) MAD (std) Local 5.052 (0.250) 3.677 (0.120) - - - - - - GLMM 5.044 (0.287) 3.676 (0.140) - - - - - - OLS 6.213 (0.496) 4.458 (0.216) 6077 (287) 4203 (144) 0.50 (0.011) 0.360 (0.004) 4.471 (0.068) 3.649 (0.026) Decision tree 6.134 (0.422) 4.405 (0.196) 4739 (324) 2738 (125) 0.490 (0.011) 0.352 (0.004) 3.688 (0.05) 2.881 (0.011) GAM-splines 5.80 (0.344) 4.245 (0.183) 6021 (299) 4229 (158) 0.464 (0.011) 0.330 (0.004) 4.236 (0.065) 3.492 (0.03) EBM 5.363 (0.233) 3.968 (0.134) 4533 (339) 2528 (131) 0.450 (0.013) 0.321 (0.005) 3.283 (0.049) 2.553 (0.037) SR-GP 8.06 (0.730) 5.602 (0.408) 5168 (360) 2611 (95) 0.563 (0.026) 0.411 (0.014) 4.504 (0.086) 3.593 (0.045) SR-Gomea 6.309 (0.303) 4.539 (0.128) 4885 (251) 2931 (113) 0.502 (0.010) 0.363 (0.004) 3.362 (0.033) 2.626 (0.057) SR-Gomea-op 6.297 (0.267) 4.543 (0.155) 4815 (250) 2852 (160) 0.514 (0.01) 0.371 (0.004) 3.238 (0.059) 2.488 (0.044) B-GLM 6.468 (0.531) 4.58 (0.241) 6080 (266) 4228 (124) 0.498 (0.011) 0.360 (0.004) 4.464 (0.068) 3.648 (0.026) B-GLM-int 6.356 (0.513) 4.474 (0.235) 5151 (293) 2950 (140) 0.497 (0.01) 0.360 (0.004) 4.282 (0.065) 3.537 (0.026) XGBoost 5.571 (0.377) 4.084 (0.175) 4667 (346) 2673 (193) 0.440 (0.011) 0.312 (0.003) 3.257 (0.056) 2.545 (0.028) SVM 6.310 (0.645) 4.382 (0.268) 4953 (272) 2968 (147) 0.503 (0.011) 0.356 (0.004) 4.493 (0.089) 3.612 (0.037) MLP 6.140 (0.462) 4.392 (0.212) 4867 (347) 2916 (205) 0.463 (0.013) 0.330 (0.008) 3.170 (0.052) 2.445 (0.029) BH-GLM 6.102 (0.441) 4.380 (0.187) 4926 (300) 2930 (130) 0.476 (0.019) 0.354 (0.01) 3.873 (0.133) 3.057 (0.091) BH-GLM-int 6.004 (0.46) 4.315 (0.191) 4925 (301) 2929 (131) 0.474 (0.02) 0.352 (0.011) 3.871 (0.13) 3.056 (0.088) SR-trad 6.258 (0.509) 4.488 (0.208) 5219 (330) 3176 (209) 0.510 (0.018) 0.373 (0.011) 3.961 (0.031) 3.162 (0.026) SR-max 6.186 (0.469) 4.44 (0.224) 4889 (293) 3095 (366) 0.507 (0.011) 0.369 (0.008) 3.528 (0.042) 2.791 (0.041) HSR-naive-trad 6.472 (0.462) 4.644 (0.261) 6429 (307) 3304 (237) 0.584 (0.067) 0.410 (0.031) 4.156 (0.184) 3.064 (0.162) HSR-naive-max 6.386 (0.473) 4.582 (0.252) 6328 (307) 3170 (243) 0.545 (0.018) 0.401 (0.026) 4.062 (0.078) 3.002 (0.09) HSR-trad 5.921 (0.54) 4.250 (0.226) 4840 (308) 2930 (153) 0.475 (0.012) 0.343 (0.011) 3.30 (0.084) 2.547 (0.084) HSR-max 5.80 (0.507) 4.210 (0.282) 4844 (304) 2933 (149) 0.470 (0.011) 0.340 (0.011) 3.277 (0.082) 2.532 (0.087) Satellite (24) Wind (9) Breast tumor (6) Music (78) OLS 1.213 (0.009) 1.02 (0.011) 3.289 (0.104) 2.521 (0.077) 10.023 (0.036) 7.88 (0.027) 0.465 (0.039) 0.35 (0.03) Decision tree 1.061 (0.022) 0.621 (0.017) 3.839 (0.112) 2.980 (0.103) 9.844 (0.039) 7.632 (0.03) 0.705 (0.066) 0.497 (0.053) GAM-splines 0.90 (0.009) 0.624 (0.007) 3.082 (0.084) 2.366 (0.057) 9.663 (0.047) 7.537 (0.034) 0.898 (0.101) 0.678 (0.069) EBM 0.851 (0.035) 0.575 (0.021) 3.140 (0.089) 2.398 (0.061) 9.519 (0.049) 7.401 (0.04) 0.60 (0.036) 0.441 (0.035) SR-GP 1.628 (0.438) 1.055 (0.095) 3.858 (0.216) 2.979 (0.191) 10.441 (0.277) 8.151 (0.23) 0.71 (0.154) 0.51 (0.102) SR-Gomea 1.164 (0.028) 0.897 (0.036) 3.306 (0.128) 2.545 (0.099) 9.988 (0.056) 7.795 (0.044) 0.523 (0.081) 0.379 (0.056) SR-Gomea-op 1.102 (0.032) 0.826 (0.029) 3.296 (0.101) 2.534 (0.077) 9.973 (0.049) 7.788 (0.025) 
0.499 (0.036) 0.369 (0.036) B-GLM 1.213 (0.008) 1.019 (0.012) 3.308 (0.098) 2.527 (0.074) 10.018 (0.033) 7.776 (0.03) 0.473 (0.033) 0.353 (0.03) B-GLM-int 1.117 (0.044) 0.905 (0.069) 3.295 (0.098) 2.517 (0.073) 9.995 (0.034) 7.791 (0.026) 0.469 (0.045) 0.358 (0.031) XGBoost 0.667 (0.033) 0.35 (0.016) 3.084 (0.075) 2.365 (0.050) 9.435 (0.048) 7.288 (0.041) 0.507 (0.046) 0.367 (0.037) SVM 1.261 (0.028) 1.003 (0.022) 3.307 (0.102) 2.521 (0.073) 10.045 (0.036) 7.86 (0.029) 0.472 (0.042) 0.348 (0.03) MLP 0.789 (0.047) 0.448 (0.03) 3.076 (0.087) 2.367 (0.062) 9.67 (0.04) 7.519 (0.023) 0.498 (0.042) 0.36 (0.04) BH-GLM 0.986 (0.036) 0.598 (0.034) 3.297 (0.101) 2.538 (0.079) 9.76 (0.035) 7.611 (0.031) 0.472 (0.035) 0.356 (0.027) BH-GLM-int 0.952 (0.029) 0.553 (0.028) 3.291 (0.106) 2.531 (0.084) 9.751 (0.036) 7.598 (0.033) 0.467 (0.037) 0.353 (0.027) SR-trad 1.175 (0.05) 0.948 (0.059) 3.342 (0.104) 2.568 (0.080) 10.096 (0.064) 7.917 (0.051) 0.543 (0.081) 0.401 (0.049) SR-max 1.018 (0.04) 0.784 (0.035) 3.176 (0.081) 2.445 (0.059) 10.03 (0.079) 7.864 (0.06) 0.476 (0.045) 0.359 (0.037) HSR-naive-trad 1.012 (0.064) 0.645 (0.068) 3.547 (0.041) 2.731 (0.050) 11.932 (0.16) 9.289 (0.241) 0.524 (0.045) 0.372 (0.02) HSR-naive-max 0.991 (0.034) 0.631 (0.031) 3.562 (0.068) 2.711 (0.046) 11.915 (0.108) 9.267 (0.185) 0.507 (0.056) 0.372 (0.039) HSR-trad 0.95 (0.034) 0.554 (0.032) 3.205 (0.074) 2.457 (0.054) 9.727 (0.056) 7.595 (0.06) 0.497 (0.05) 0.366 (0.031) HSR-max 0.934 (0.056) 0.529 (0.062) 3.198 (0.074) 2.455 (0.054) 9.662 (0.061) 7.531 (0.049) 0.471 (0.053) 0.351 (0.027) House (5) Wine (8) Toxicity (6) Gas (7) OLS 41563 (1270) 24354 (202) 0.754 (0.02) 0.586 (0.012) 1.256 (0.097) 0.949 (0.072) 8.112 (0.133) 5.796 (0.081) Decision tree 35752 (1199) 20035 (401) 0.753 (0.015) 0.596 (0.013) 1.394 (0.12) 1.059 (0.086) 7.705 (0.127) 5.638 (0.068) GAM-splines 33460 (1405) 18634 (212) 0.728 (0.029) 0.565 (0.015) 1.245 (0.106) 0.92 (0.08) 5.993 (0.10) 4.187 (0.038) EBM 31062 (1192) 16921 (133) 0.689 (0.019) 0.537 (0.01) 1.20 (0.095) 0.899 (0.06) 5.476 (0.072) 3.763 (0.038) SR-GP 56615 (21299) 24687 (2204) 0.857 (0.068) 0.664 (0.055) 1.457 (0.264) 1.087 (0.152) 10.75 (1.107) 8.165 (0.775) SR-Gomea 36750 (1693) 20768 (976) 0.742 (0.022) 0.582 (0.015) 1.343 (0.189) 0.978 (0.088) 8.703 (0.144) 6.583 (0.176) SR-Gomea-op 36865 (1434) 20916 (788) 0.739 (0.021) 0.579 (0.015) 1.267 (0.116) 0.956 (0.086) 8.742 (0.278) 6.764 (0.227) B-GLM 42104 (1406) 22632 (220) 0.753 (0.02) 0.586 (0.012) 1.237 (0.099) 0.934 (0.073) 8.112 (0.134) 5.797 (0.081) B-GLM-int 39811 (1375) 20872 (335) 0.751 (0.019) 0.583 (0.012) 1.246 (0.089) 0.941 (0.056) 8.112 (0.137) 5.797 (0.081) XGBoost 29630 (1237) 15733 (120) 0.68 (0.014) 0.531 (0.006) 1.157 (0.129) 0.872 (0.092) 5.705 (0.190) 4.001 (0.126) SVM 44879 (1676) 21201 (228) 0.748 (0.018) 0.582 (0.01) 1.286 (0.195) 0.959 (0.132) 6.954 (0.372) 4.875 (0.453) MLP 36004 (851) 20174 (396) 0.757 (0.071) 0.587 (0.051) 1.280 (0.153) 0.973 (0.112) 6.023 (0.239) 4.223 (0.134) BH-GLM 35212 (1661) 19444 (708) 0.741 (0.017) 0.576 (0.013) 1.235 (0.095) 0.933 (0.069) 6.87 (0.282) 4.768 (0.159) BH-GLM-int 35039 (1820) 19079 (443) 0.737 (0.017) 0.574 (0.014) 1.237 (0.098) 0.934 (0.073) 6.87 (0.282) 4.769 (0.159) SR-trad 37412 (2150) 21067 (524) 0.744 (0.024) 0.584 (0.015) 1.253 (0.089) 0.953 (0.078) 8.153 (0.686) 6.131 (0.515) SR-max 34716 (2049) 18976 (491) 0.731 (0.019) 0.571 (0.016) 1.233 (0.106) 0.935 (0.071) 7.395 (0.553) 5.467 (0.565) HSR-naive-trad 37518 (1563) 19592 (433) 0.735 (0.021) 0.575 (0.011) 1.363 
(0.112) 0.989 (0.08) 7.46 (0.708) 5.557 (0.607) HSR-naive-max 37640 (1741) 19294 (472) 0.725 (0.019) 0.566 (0.009) 1.266 (0.071) 0.944 (0.057) 6.621 (0.15) 4.762 (0.152) HSR-trad 33542 (1127) 17964 (258) 0.724 (0.019) 0.566 (0.013) 1.243 (0.097) 0.933 (0.047) 7.181 (0.618) 5.32 (0.776) HSR-max 33102 (833) 17403 (155) 0.712 (0.014) 0.559 (0.011) 1.214 (0.065) 0.921 (0.038) 6.421 (0.150) 4.662 (0.153) a Table 4 . 4 8: Results obtained by cluster-specific interpretable models Table 4 . 9 49 : Results on the classification datasets 42 7685 14658 26606.63 3513.14 20949 37113 15920.91 1721.15 13382 20945 5627.28 1225.75 659 8438 1 35.06 16.28 4.67 1 34.13 1.03 3 2.97 0.07 2.6 3 84.03 521.02 188.75 100.25 79.76 549.63 5.35 0.34 96.58 5.2 31.4 16.84 2.68 3 2.89 71.97 427.78 286.83 4.54 0.06 70.51 6.73 2.08 30.3 17.13 0.27 1.15 3 2.97 104.72 79.76 615.78 210.17 0.46 0 1 0.17 %age heavy vehicles 18.22 Right shoulder width 2.91 Altitude 274.53 Presence of ramps 0.30 Cette thèse est accessible à l'adresse : https://theses.insa-lyon.fr/publication/2022ISAL0096/these.pdf © [T. Véran], [2022], INSA Lyon, tous droits réservés Laboratoire d'InfoRmatique en Image et Systèmes d'information: https://liris.cnrs.fr/ https://www.data-newroad.com/ A model is said to be Pareto optimal if there is no alternative model that can improve one of its objective function without deteriorating the others. Note that, in[START_REF] Scott M Lundberg | From local explanations to global understanding with explainable ai for trees[END_REF], Lundberg et al. call this process a supervised clustering. However, this usually refers to clustering with access to a "teacher" that knows the right cluster for some of the instances[START_REF] Awasthi | Supervised clustering[END_REF]. In our case, our approach for clustering is unsupervised. https://xgboost.readthedocs.io/en/stable/ The test dataset materializes a production context as our model did not use these data for training. https://github.com/kvoyager/GmdhPy https://docs.sympy.org/latest/modules/simplify/simplify.html https://scikit-learn.org/stable/ https://imbalanced-learn.org/stable/references/generated/ imblearn.combine.SMOTEENN.html https://docs.pymc.io/en/v3/ https://www.statsmodels.org/stable/mixed_glm.html https://pygam.readthedocs.io/en/latest/ https://gplearn.readthedocs.io/en/stable/ https://github.com/marcovirgolin/GP-GOMEA https://xgboost.readthedocs.io/en/latest/python/python_intro.html Acknowledgments The doctoral student was part of the research groups for distributed systems, information retrieval and mobility (DRIM) and data base (BDThis work has received financial support from the French Association for Research and Technology (ANRT). x 3 x 6 overall PD(x 3 ) < -0.0015 overall PD(x introduces highly correlated combinations of terms. From model 3 to model 4, a sharp increase in complexity for a small gain in performance should alert the user to the risk of no longer understanding the inner workings of model 4: time must be spent at studying the various partial effects before deciding if the model can still be trusted. Specificities and benefits of the ranking-by-complexity approach Thanks to our complexity metric (see Eq. 3.3), model 3 does not dominate model 2 even though they have the same number of terms and their terms have equal complexities. If we had not penalize collinearities, then model 2, which is of high interest to field experts, would not have been included in the Pareto front. 
In this sense, the ranking-by-complexity approach favors a progressive analysis of numerous instructive models. Among the models of Fig. 4.6, some attain similar predictive performance while bringing out different effects of the explanatory variables. This can be understood from the point of view of the Rashomon effect [START_REF] Breiman | Statistical modeling: The two cultures (with comments and a rejoinder by the author)[END_REF], which characterizes problems
04096339
en
[ "phys.meca.acou" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04096339/file/these.pdf
ABSTRACT The reproduction of the vibration and acoustic responses of structures under random excitation such as the Diffuse Acoustic Field (DAF) or the Turbulent Boundary Layer (TBL) is of particular interest to researchers and the transportation industry (automobile, aeronautics, etc.). In practice, the characterization of structures under random excitations requires making in-situ measurements or using test facilities such as anechoic wind tunnels or reverberant rooms, which are complex and costly methods. Based on the previous considerations, the necessity of finding simple, cost-efficient and reproducible alternative methods becomes obvious. In this thesis, an alternative method called the Source Scanning Technique (SST) is presented. This solution is part of the methods that employ acoustic source arrays. However, SST is based on the use of a single monopole source along with the synthetic array principle: the monopole source is spatially displaced to an arbitrary number of different positions following a predefined spatial step, thereby creating a virtual array of monopole sources and overcoming limitations due to the source array density, the upper frequency range of the synthesis and the geometry of the test structure. The first objective is to assess the validity of the proposed technique by comparing its results with numerical and experimental ones for simple structures such as Flat Rectangular Panels (FRPs). An academic case study consisting of a baffled and simply supported aluminum panel under diffuse acoustic field and turbulent boundary layer excitations is considered. After conducting parametric studies in order to define the optimal design parameters of the virtual array of monopoles, an experimental implementation of SST on FRPs is done. On one hand, the experimental vibration response of the panel as well as the transmission loss obtained using the proposed process are compared to results from random vibration theory. On the other hand, the same experimental results obtained using SST are compared with results obtained with measurements in a reverberant room (DAF) and an anechoic wind tunnel (TBL). These comparisons show good agreement that validates SST for the considered panel. In a second step, the goal is to extend SST to more complex structures such as Curved Rectangular Panels (CRPs). Although the principle of SST remains the same for these types of structures, one needs to determine the closed-form transfer functions between one source position and a point on the structure, which are not as simple as the ones for FRPs. The closed-form solutions are determined for the two-dimensional case and the three-dimensional case and numerically validated using the Boundary Element Method (BEM). Afterwards, parametric studies similar to the ones conducted in the case of FRPs are done in order to investigate the optimal design parameters of the array of monopoles. Two monopole-array/structure geometric configurations are studied: a conformal configuration (monopole array with the same geometry as the test structure) and a non-conformal one (monopole array with a different geometry from the test structure).
After defining an experimental setup for the application of SST on CRPs, the transfer functions are experimentally validated by comparing the measurements to the analytical solutions. The comparison shows very good agreement between both types of results. An experimental implementation of the source scanning technique is done for the two-dimensional case and the satisfying comparison between the target pressure fields and the synthesized ones show good promising applications of SST for convex curved structures. R É S U M É E N F R A N Ç A I S La reproduction des réponses vibratoires et acoustiques de structures sous excitations aléatoires telles que le champ acoustique diffus ou la couche limite Turbulente intéresse particulièrement le monde de la recherche et l'industrie des transports (automobile, aéronautique. . . ). En pratique, la caractérisation de structures sous excitations aléatoires nécessite de réaliser des mesures in-situ ou d'utiliser des moyens d'essais tels que les souffleries anéchoïques ou les salles réverbérantes, qui sont des méthodes complexes et coûteuses. Sur la base des considérations précédentes, la nécessité de trouver des méthodes alternatives simples, rentables et reproductibles, devient évidente. Dans cette thèse, une méthode alternative appelée la méthode de balayage de source ou Source Scanning Technique (en anglais) est présentée. Cette solution fait partie des procédés utilisant des réseaux de sources acoustiques. Cependant, contrairement aux autres techniques, la méthode de balayage de source est basée sur l'utilisation d'une seule source monopolaire en appliquant le principe du réseau synthétique : la source monopolaire est déplacée dans l'espace vers un nombre arbitraire de positions différentes suivant un pas spatial prédéfini, créant ainsi un réseau virtuel de sources, s'affranchissant des limitations dues à la densité du réseau de sources, à la gamme de fréquences supérieures de la synthèse et à la géométrie de la structure de test. Le premier objectif est d'évaluer la validité de la technique proposée en comparant ses résultats avec des résultats numériques et expérimentaux pour des structures simples telles que les panneaux plans. Une étude de cas consistant en un panneau plan en aluminium en appuis simples et soumis à une excitation de types champ acoustique diffus et couche limite turbulente est considérée. Après avoir mené des études paramétriques afin de définir les paramètres de conception optimaux du réseau virtuel de monopoles, une implémentation expérimentale de la méthode de balayage de source sur le panneau plan en aluminium est réalisée. La réponse expérimentale aux vibrations du panneau ainsi que la perte en transmission sont comparées aux résultats de la théorie des vibrations aléatoires d'une part. D'autre part, ces mêmes résultats expérimentaux obtenus avec la méthode de balayage de source sont comparés aux résultats obtenus avec des mesures directes en salle réverbérante (champ acoustique diffus) et en soufflerie anéchoïque (couche limite turbulente). Ces comparaisons montrent un bon accord qui valide la méthode de balayage de source pour le panneau considéré. Dans un second temps, le but est d'étendre la méthode de balayage de source à des structures plus complexes telles que les panneaux courbes. 
Bien que le principe de la méthode vii reste le même pour ces types de structures, il faut déterminer les fonctions de transfert analytiques entre une position source et un point d'un panneau courbe qui ne sont pas aussi simples que celles des panneaux plans. Les solutions analytiques sont déterminées pour le cas bidimensionnel et le cas tridimensionnel et validées numériquement à l'aide de la méthode des éléments de frontière. Ensuite, des études paramétriques similaires à celles menées dans le cas des panneaux plans sont réalisées afin de déterminer les paramètres de conception optimaux du réseau de monopoles. Deux configurations géométriques réseau monopolaire -structure sont étudiées : configuration conforme (réseau monopolaire avec la même géométrie que la structure test) et non conforme (réseau monopolaire avec une géométrie différente de la structure test). Après avoir défini un montage expérimental pour l'application de l'approche proposée à des panneaux courbes, les fonctions de transfert sont validées expérimentalement en comparant les mesures aux solutions analytiques. La comparaison montre un très bon accord entre les deux types de résultats. Une implémentation expérimentale de la technique de balayage de source est faite pour le cas bidimensionnel et la comparaison satisfaisante entre les champs de pression cibles et ceux synthétisés montre des applications prometteuses de la méthode de balayage de source pour les structures courbes convexes. The manuscript is organized as follows. After this general introduction and contextualization of the subject, a literature review on the excitations of interest and the existing methods used for the vibroacoustic characterization of structures is proposed in Part I. This first part also includes a chapter dedicated to a detailed description of the SST approach and its parameters. Part II is devoted to a numerical and experimental study of SST applied on flat rectangular panels. Then Part III proposes an extension of the method to curved rectangular panels. L I S T O F TA B L E S Part I presented. The chapter ends with a presentation of the outline for the remainder of the manuscript. B A C KG R O U N D A N D P R E S E N TAT I O N O F T H E S O U R C E S C A N N I N G T EC H N I Q U E P R O C E S S 1 L I T E R AT U R E R E V I E W the excitation fields In general, there are two stochastic excitations that are of interest in several industrial applications • the DAF excitation which is namely used in the building sector as an acoustic source for the characterization of the sound transmission loss of the walls, • the TBL excitation which is generally used in the transportation industry (aeronautics, naval and automobile sectors) in order to represent the pressure fluctuations induced by the flow on the vibrating structures. In the following, we propose an overview on the characterization of random pressure fields using stochastic tools before describing the two random pressure fields of interest. Characterization of spatially correlated random pressure field The following convention is used for Fourier temporal and spatial transforms. temporal fourier transform F [f (t)] = F (ω) = +∞ -∞ f (t) e -iωt dt (1.1) F -1 [F (ω)] = f (t) = 1 2π +∞ -∞ F (ω) e iωt dω (1.2) spatial fourier transform F [f (x)] = F (k) = +∞ -∞ f (x) e -ikx dx (1.3) F -1 [F (k)] = f (x) = 1 2π +∞ -∞ F (k) e ikx dk (1.4) where F represents the Fourier transform. The early studies on Wall-Pressure Fluctuations (WPF) were based on wind tunnel measurements or flight tests. 
Pressure transducers were used to measure the pressure time signal at one point in space, $p_b(\mathbf{x}, t)$, the statistics of which are usually assumed to be both stationary and homogeneous up to order 2 [START_REF] Smol | The Measurement of Turbulent Fluctuations: An Introduction to Hot-Wire Anemometry and Related Transducers[END_REF]. The autocorrelation function of this signal is defined as $R_{p_b p_b}(\mathbf{x}, \tau) = \langle p_b(\mathbf{x}, t)\, p_b(\mathbf{x}, t + \tau) \rangle$ (1.5), where $\langle \cdot \rangle$ represents an average over time. The wavenumber-frequency spectrum of the blocked pressure $p_b(\mathbf{x}, t)$, denoted $S_{p_b p_b}(\mathbf{k}, \omega)$, is defined as the space-time Fourier transform of the autocorrelation function $R_{p_b p_b}(\mathbf{x}, \tau)$, that is, in terms of the space-frequency spectrum $S_{p_b p_b}(\mathbf{x}, \omega)$ of the blocked pressure, $S_{p_b p_b}(\mathbf{k}, \omega) = \int_{\Sigma_s} S_{p_b p_b}(\mathbf{x}, \omega)\, e^{-i\mathbf{k}\cdot\mathbf{x}}\, \mathrm{d}\mathbf{x}$ (1.6) and $S_{p_b p_b}(\mathbf{x}, \omega) = \int_{-\infty}^{+\infty} R_{p_b p_b}(\mathbf{x}, \tau)\, e^{-i\omega\tau}\, \mathrm{d}\tau$ (1.7), where $\Sigma_s$ is the surface of the structure. Eq. (1.7) provides information on the energy of the TBL excitation at one specific point over the surface. By inferring that the convected pressure field is statistically homogeneous on the surface of the structure, the Fourier transform of the autocorrelation function of the WPF, also known as the wall-pressure auto-spectrum or single point wall-pressure spectrum, can be determined by integrating the wavenumber-frequency spectrum over the entire range of wavenumbers $k_x$, $k_y$: $S_{p_b p_b}(\omega) = \int_{-\infty}^{+\infty} S_{p_b p_b}(\mathbf{k}, \omega)\, \mathrm{d}\mathbf{k}$ (1.8). As spatial homogeneity is assumed, the dependence of the wall-pressure ASD function on $\mathbf{x}$ is no longer specified, as in Eq. (1.8). Diffuse acoustic field The term diffuse acoustic field has been defined from several approaches in the literature. However, the following formulation appears to have gained acceptance in the acoustical community: in a diffuse acoustic field, there should be equal probability of energy flow in all possible directions, which means that, at a given position, sound waves arrive with random phases (incoherent), and the probabilities of arriving from all possible directions are equal [START_REF]Acoustics -Measurement of sound absorption in a reverberation room. Standard. Geneva[END_REF]. It indicates that the time-averaged intensity is zero at all positions and, hence, the field is isotropic in nature. At the same time, there should be uniform energy density at all positions, ensuring the homogeneous character of the field [START_REF]Acoustics -Measurement of sound absorption in a reverberation room. Standard. Geneva[END_REF][START_REF] Schultz | Diffusion in reverberation rooms[END_REF]. Once the field is isotropic and homogeneous, another criterion of a diffuse acoustic field, constant irradiation from each volume element, is fulfilled automatically [START_REF] Stephenson | Different assumptions -Different reverberation formulae[END_REF]. A DAF is often described by a correlation function [START_REF] Cook | Measurement of Correlation Coefficients in Reverberant Sound Fields[END_REF][START_REF] Jacobsen | The coherence of reverberant sound fields[END_REF][START_REF] Nélisse | Characterization of a diffuse field in a reverberant room[END_REF], but in the following, the Cross-Spectral Density (CSD) function, which is the temporal Fourier transform of the correlation function and is defined by Eq. (1.9), will be used.
$S_{p_b p_b}(\mathbf{r}-\mathbf{r}', \omega) = S_{p_b p_b}(\omega)\, \dfrac{\sin\left(k_0 |\mathbf{r}-\mathbf{r}'|\right)}{k_0 |\mathbf{r}-\mathbf{r}'|}$ (1.9), where $\mathbf{r}$ and $\mathbf{r}'$ are the position vectors of two points, $\omega$ is the frequency, $k_0 = \omega/c_0$ is the acoustic wavenumber and $c_0$ the speed of sound in the medium. It is also possible to define the CSD function of the excitation in the frequency-wavenumber domain: $S_{p_b p_b}(\mathbf{k}, \omega) = S_{p_b p_b}(\omega)\, \dfrac{2\pi}{k_0}\, \dfrac{1}{\sqrt{k_0^2 - |\mathbf{k}|^2}}$ if $|\mathbf{k}| < k_0$ and $S_{p_b p_b}(\mathbf{k}, \omega) = 0$ if $|\mathbf{k}| \geq k_0$ (1.11), where $|\mathbf{k}| = \sqrt{k_x^2 + k_y^2}$, $k_x$ and $k_y$ being the wavenumbers in the x and y directions, respectively. $S_{p_b p_b}(\mathbf{k}, \omega)$ is simply a two-dimensional spatial Fourier transform of the CSD function written in the space-frequency domain. The DAF excitation is generally used to determine the sound transmission of a given structure or material in building acoustics for airborne sound insulation purposes [START_REF]Acoustics -Field measurement of sound insulation in buildings and of building elements -Part 1: Airborne sound insulation. en. Tech. rep[END_REF]. It is used in the transportation industry (aeronautics, automobile, etc.) as well. Turbulent boundary layer Description When a fluid flows along an object or when an object is moving through a fluid, as a consequence of the viscosity of the fluid, there is internal friction between adjacent layers starting from the wall(s) of the considered object. The layer below the top layer will be dragged in the forward direction by the layer above it, but at a velocity somewhat less than the velocity of the layer above. This layer, in turn, will exert a forward drag on the layer beneath it at reduced velocity and so on through the entire fluid layer up to the wall(s) of the object: a boundary layer is forming. The boundary layer is a well known concept in many fields (mechanical, aeronautical, naval, etc.) and has been the subject of increasing interest from the scientific community. This concept was initiated by Prandtl [START_REF] Prandtl | Über Flüssigkeits bewegung bei sehr kleiner Reibung[END_REF] in the beginning of the past century, where he built a bridge between the two major disciplines of fluid dynamics: hydrodynamics, which at that moment was developed from Euler's theory of inviscid flows, and hydraulics, which relied on a large amount of experimental data to tackle practical engineering problems. In his 1904 paper, Prandtl stated that however small the viscosity of a fluid in motion may be, it cannot be ignored. Over the years, this led to the development of the boundary layer theory. A simple definition of the boundary layer would be (according to Prandtl): the boundary layer is a region of the flow with non-negligible effects of viscosity, separating a solid body and a free flow which corresponds to the inviscid limiting solution. The flow within the boundary layer can be considered laminar or turbulent depending on the value of the Reynolds number $Re$, defined by the following equation: $Re = \dfrac{U L}{\nu}$ (1.12), where $\nu$ is the kinematic viscosity of the fluid, and $U$ and $L$ are the characteristic streamwise velocity and length, respectively. When the Reynolds number increases, the regime of the boundary layer transitions from laminar to turbulent, leading to much more complex flow patterns and a thickening of the boundary layer. TBL parameters The main parameters allowing the description of a TBL excitation are presented hereunder. This type of excitation is illustrated in Fig. 1.2 along with two parameters that describe the excitation.
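Before detailing the parameters of the TBL, the DAF model recalled above can be illustrated with a short numerical sketch that evaluates the CSD function of Eq. (1.9) between every pair of points of a small grid. The values (speed of sound, grid size, unit auto-spectrum) are assumptions chosen for illustration only and do not correspond to the test cases treated later in the manuscript.

import numpy as np

def daf_csd_matrix(points, omega, S_pp, c0=343.0):
    # points: (N, 3) array of positions in meters; omega: angular frequency in rad/s;
    # S_pp: wall-pressure auto-spectrum at omega. Returns the (N, N) CSD matrix of Eq. (1.9).
    k0 = omega / c0
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return S_pp * np.sinc(k0 * dist / np.pi)    # np.sinc(x) = sin(pi*x)/(pi*x)

# Example: 20 points of a 0.4 m x 0.3 m plane at 1 kHz
xg, yg = np.meshgrid(np.linspace(0.0, 0.4, 5), np.linspace(0.0, 0.3, 4))
pts = np.column_stack([xg.ravel(), yg.ravel(), np.zeros(xg.size)])
Spp_daf = daf_csd_matrix(pts, omega=2.0 * np.pi * 1000.0, S_pp=1.0)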
Physical domain When a fluid is flowing around a structure (or the structure is moving through a fluid) at a certain velocity $U_\infty$, an area with a velocity gradient can be observed: the fluid velocity $U(z)$ gradually increases away from the structure until it reaches the outer velocity $U_\infty$. This area is called the boundary layer and has a thickness $\delta$. The transition from boundary layer flow to outer flow takes place continuously (at least in the case of laminar flows), so that a precise boundary cannot, in principle, be given. However, the concept of boundary layer thickness is very often used in practice. Frequently the boundary layer thickness is arbitrarily given as being at the point where the velocity reaches a certain percentage of the outer velocity, e.g. 99%. This is translated by Eq. (1.13) below: $U(\delta) = 0.99\, U_\infty$ (1.13). The displacement thickness $\delta^\star$ is defined as $\delta^\star = \int_0^\infty \left(1 - \dfrac{U(z)}{U_\infty}\right) \mathrm{d}z$ (1.14). Following the analogy of the displacement thickness, a momentum thickness may be defined. The momentum thickness $\theta$ is the distance by which a surface would have to be moved parallel to itself towards the reference plane in an inviscid fluid stream of velocity $U_\infty$ to give the same total momentum as exists between the surface and the reference plane in a real fluid: $\theta = \int_0^\infty \dfrac{U(z)}{U_\infty}\left(1 - \dfrac{U(z)}{U_\infty}\right) \mathrm{d}z$ (1.15). The other parameters allowing the description of TBL excitation are given in Table 1.1: the wall shear stress $\tau_w = \mu\left(\dfrac{\partial u}{\partial z}\right)_w \approx \mu\dfrac{U_\infty}{\delta}$, according to Newton's law of friction (the index w denotes the value at the wall and $\mu$ is the dynamic viscosity); the friction velocity $u_\tau = \sqrt{\tau_w/\rho}$ ($\rho$ is the fluid density); and the shape factor $H = \delta^\star/\theta$ ($\delta^\star$ is the displacement thickness and $\theta$ is the momentum thickness) [START_REF] Schlichting | Boundary-Layer Theory[END_REF]. Wavenumber-frequency spectrum The WPF induced by a TBL excitation are usually modeled in the wavenumber-frequency domain, in which three regions can be distinguished: • The acoustic region: it is located in the low wavenumbers region ($|\mathbf{k}| \leq k_0$, $k_0$ being the acoustic wavenumber), has low energy levels and is associated with the compressibility effects of the fluid. However, if the fluid is considered incompressible, the wall-pressure CSD function tends toward 0 when the wavenumber tends toward 0. This property, known as the Kraichnan-Phillips theorem [START_REF] Kraichnan | Pressure Fluctuations in Turbulent Flow over a Flat Plate[END_REF][START_REF] Phillips | On the aerodynamic surface sound from a plane turbulent boundary layer[END_REF], means that the eddies constituting the boundary layer can only be of finite size as the boundary layer is of finite dimensions. • The convective region ($|\mathbf{k}| \approx k_c$, where $k_c$ represents the convective wavenumber and is defined by Eq. (1.16)): it corresponds to the region defined by the convective ridge in the high wavenumbers region, has substantial energy levels and corresponds to the mass convection of the fluid. • The transitional region ($k_0 < |\mathbf{k}| < k_c$): it is the region that connects the acoustic and the convective regions. Up until now, this region is not well known because the energy levels of the convective ridge are very high, making it difficult to measure the energy levels in this region.
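To give an order of magnitude of where these regions lie, the short sketch below evaluates the acoustic wavenumber $k_0 = \omega/c_0$ and the convective wavenumber $k_c$ (defined below by Eq. (1.16)), together with the -3 dB widths of the convective ridge of Eq. (1.17), for a few frequencies. The flow speed and the decay rates used here are illustrative assumptions only.

import numpy as np

c0, U_inf, K = 343.0, 50.0, 0.75          # speed of sound, free stream velocity, Uc = K * Uinf (assumed)
alpha_x, alpha_y = 0.116, 0.7             # assumed Corcos-type decay rates
U_c = K * U_inf

def tbl_wavenumbers(freq_hz):
    omega = 2.0 * np.pi * freq_hz
    k0 = omega / c0                       # acoustic wavenumber
    kc = omega / U_c                      # convective wavenumber, Eq. (1.16)
    dkx = 2.0 * omega * alpha_x / U_c     # -3 dB ridge length, Eq. (1.17)
    dky = 2.0 * omega * alpha_y / U_c     # -3 dB ridge width, Eq. (1.17)
    return k0, kc, dkx, dky

for f in (100.0, 500.0, 2000.0):
    k0, kc, dkx, dky = tbl_wavenumbers(f)
    print(f"{f:6.0f} Hz: k0 = {k0:7.2f} rad/m, kc = {kc:8.2f} rad/m, ridge = {dkx:6.2f} x {dky:6.2f} rad/m")

At low Mach numbers the convective wavenumber lies far above the acoustic one, which is precisely why these regions of the spectrum can be treated separately.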
The distinction between these different regions of the spectrum is fundamental, as Hwang and Maidanik showed in their paper [START_REF] Hwang | A wavenumber analysis of the coupling of a structural mode and flow turbulence[END_REF]: the contribution of the low wavenumber region to the response of the structure under TBL excitation can be significant in some cases. This phenomenon is known as the filtering effect of the structure under TBL forcing and will be discussed in Part II, Chapter 3. Let us develop a little more on the acoustic and convective regions of the WPF CSD function, starting with the former. The acoustic component of the TBL excitation corresponds to the pressure fluctuations of the acoustic radiation of the eddies in the fluid. It is characterized by the acoustic wavenumber $k_0$ defined previously. This acoustic component has lower energy levels compared to the convective component and is generally located in the low wavenumber domain. It is often represented in the wavenumber domain by a disk centered at $k_x = 0$, $k_y = 0$ and of radius $k_0$ (see Fig. 1.3). In fact, the acoustic component corresponds to waves propagating in all directions, which can be assimilated to a DAF [START_REF] Arguillat | Etude expérimentale et numérique de champs de pression pariétale dans l'espace des nombres d'onde, avec application aux vitrages automobiles[END_REF]. The convective component corresponds to the pressure fluctuations due to the mass convection. In the wavenumber domain, it is defined by the convective wavenumber given by the following equation: $k_c = \dfrac{\omega}{U_c}$ (1.16), where $\omega$ is the angular frequency and $U_c = K U_\infty$ is the convective velocity. $K$ is an experimentally determined coefficient [START_REF] Arguillat | Measured wavenumber: frequency spectrum associated with acoustic and aerodynamic wall pressure fluctuations[END_REF] and its values lie in the interval [0.7, 0.8]. This component of the TBL excitation is the most energetic part and is usually located in the high wavenumbers domain, depending on the free stream velocity $U_\infty$. It can be represented by an ellipse centered at $k_x = k_c$ (x being the streamwise direction) in the wavenumber domain, whose parameters (length $\Delta k_x$ and width $\Delta k_y$) are given by the bandwidth at -3 dB around the convective ridge along $k_x$ and $k_y$ [START_REF] Blake | Mechanics of Flow-Induced Sound and Vibration[END_REF], as illustrated in Fig. 1.3. These two lengths $\Delta k_x$ and $\Delta k_y$ are respectively given by the following equations: $\Delta k_x = \dfrac{2\omega\alpha_x}{U_c}$ and $\Delta k_y = \dfrac{2\omega\alpha_y}{U_c}$ (1.17), where $\alpha_x$ and $\alpha_y$ are called the Corcos coefficients and account for the spatial coherence of the TBL WPF along the streamwise and spanwise directions, respectively. TBL models Up to now, there is no closed-form analytical model that describes a typical TBL excitation. However, the TBL parameters can be estimated numerically through analytical models on simple structures or Reynolds-Averaged Navier-Stokes (RANS) equations for more complex structures. For example, Peltier and Hambric [START_REF] Peltier | Estimating turbulent-boundary-layer wall-pressure spectra from CFD RANS solutions[END_REF] developed a statistical model allowing to obtain the spatial correlation functions using data computed from RANS
Thus our interest concerns empirical and semi-empirical models that can describe the TBL fluctuations, provided that some adjustments are made. Historically, measuring the WPF induced by a TBL has not been straightforward [START_REF] Bull | Wall-Pressure Fluctuations Beneath Turbulent Boundary Layers: Some Reflections on Forty Years of Research[END_REF][START_REF] Hwang | Comparison of semi-empirical models for turbulent boundary layer wall pressure spectra[END_REF]. This fact led to the introduction of the normalized cross-spectrum, usually denoted Φ_pbpb(k, ω), which is the ratio between the cross-spectrum itself S_pbpb(k, ω) and the auto-spectrum S_pbpb(ω). In this section, the two types of spectra are discussed: on one hand, the auto-spectrum of the TBL WPF and, on the other hand, the cross-spectrum.

Single point wall-pressure spectrum

The single point wall-pressure spectrum has been the object of many investigations and numerous models came out of those studies. One can cite the model of Robertson [START_REF] Robertson | Prediction of in flight fluctuating pressure environments including protuberance induced flow[END_REF] based on data from supersonic NASA-Ames measurements, that of Willmarth-Amiet-Roos [START_REF] Amiet | Noise due to turbulent flow past a trailing edge[END_REF] based on measurements by Willmarth and Roos [START_REF] Willmarth | Resolution and structure of the wall pressure field beneath a turbulent boundary layer[END_REF] over a flat plate, the model of Goodwin [START_REF] Goodwin | An in-flight supersonic TBL surface pressure fluctuation model[END_REF] based on flight test data on three supersonic aircraft (XB-70, A3J and Concorde), etc. As an example, one can take a look at the model of Goody [START_REF] Goody | Empirical Spectral Model of Surface Pressure Fluctuations[END_REF], considered as the model that matches experimental measurements best in the case of a TBL excitation characterization, as it takes into account a large number of measurements, those of seven different research teams [START_REF] Hwang | Comparison of semi-empirical models for turbulent boundary layer wall pressure spectra[END_REF]. This model takes into account the influence of the Reynolds number through the parameter R_T

$$S_{p_b p_b}(\omega) = \frac{3\,\tau_w^2\,\delta\left(\dfrac{\omega\delta}{U_\infty}\right)^2}{U_\infty\left\{\left[0.5 + \left(\dfrac{\omega\delta}{U_\infty}\right)^{0.75}\right]^{3.7} + \left[1.1\,R_T^{-0.57}\,\dfrac{\omega\delta}{U_\infty}\right]^{7}\right\}} \qquad (1.18)$$

where R_T = δu_τ²/(U∞ν) is the ratio between two time scales: δ/U∞ and ν/u_τ². The model is valid for a large range of Reynolds numbers, 1400 < Re < 23400, and has inspired the models established by Rozenberg [START_REF] Rozenberg | Modélisation analytique du bruit aérodynamique à large bande des machines tournantes : utilisation de calculs moyennés de mécanique des fluides (Analytical modeling of the broadband aerodynamic noise of rotating machines: use of average calculations of fluid mechanics)[END_REF], Catlett [START_REF] Catlett | Empirical Spectral Model of Surface Pressure Fluctuations beneath Adverse Pressure Gradients[END_REF] and Klabes [START_REF] Klabes | Fuselage Excitation During Cruise Flight Conditions: A New CFD Based Pressure Point Spectra Model[END_REF] for the single point wall-pressure spectrum of a TBL excitation. A short numerical illustration of Eq. (1.18) is given below, before turning to some cross-spectrum models, non-exhaustively listed afterwards.
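The sketch below evaluates the Goody spectrum of Eq. (1.18) over a few frequencies. The flow parameters (U∞, δ, τ_w and the air properties) are assumed values chosen only to exercise the formula; they do not correspond to a specific measured configuration.

```python
import numpy as np

def goody_asd(omega, U_inf, delta, tau_w, u_tau, nu=1.5e-5):
    """Single-point wall-pressure ASD of Goody, Eq. (1.18)."""
    R_T = delta * u_tau**2 / (U_inf * nu)   # ratio of time scales delta/U_inf and nu/u_tau^2
    w = omega * delta / U_inf               # non-dimensional frequency
    num = 3.0 * tau_w**2 * delta * w**2
    den = U_inf * ((0.5 + w**0.75) ** 3.7 + (1.1 * R_T**-0.57 * w) ** 7)
    return num / den

# Assumed illustrative parameters (air at ambient conditions)
rho, nu = 1.2, 1.5e-5          # density [kg/m^3], kinematic viscosity [m^2/s]
U_inf, delta = 50.0, 0.02      # outer velocity [m/s], boundary layer thickness [m]
tau_w = 5.0                    # assumed wall shear stress [Pa]
u_tau = np.sqrt(tau_w / rho)   # friction velocity, Table 1.1

for f in (100.0, 1000.0, 5000.0):
    S = goody_asd(2 * np.pi * f, U_inf, delta, tau_w, u_tau)
    print(f"f = {f:6.0f} Hz  ->  S_pbpb = {S:.3e} Pa^2.s/rad")
```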
Two point wall-pressure spectrum

corcos (1963)

From the analysis of experimental results obtained by Willmarth and Wooldridge [START_REF] Willmarth | Measurements of the fluctuating pressure at the wall beneath a thick turbulent boundary layer[END_REF], Corcos proposed the following model for the WPF CSD function of a TBL [START_REF] Corcos | Resolution of Pressure in Turbulence[END_REF]:

$$S_{p_b p_b}(\xi_x, \xi_y, \omega) = S_{p_b p_b}(\omega)\, e^{-|\xi_x|/L_x}\, e^{-|\xi_y|/L_y}\, e^{-i\omega \xi_x/U_c} \qquad (1.19)$$

where ξ_x and ξ_y are streamwise and spanwise distances, respectively. L_x is the correlation length in the streamwise or x-direction and L_y is the correlation length in the spanwise or y-direction, U_c is the convection velocity and S_pbpb(ω) corresponds to the WPF ASD function and will be developed later. As suggested by Corcos, the correlation lengths are assumed to be inversely proportional to frequency, and have the form

$$L_x = \frac{U_c}{\omega\,\alpha_x} \quad \text{and} \quad L_y = \frac{U_c}{\omega\,\alpha_y} \qquad (1.20)$$

where α_x and α_y are the decay rates of the spatial coherence and correspond to empirical constants previously introduced as the Corcos coefficients. The spatial Fourier transform of Eq. (1.19) yields the CSD function in the wavenumber domain:

$$S_{p_b p_b}(\mathbf{k}, \omega) = S_{p_b p_b}(\omega) \left(\frac{U_c}{\omega}\right)^2 \frac{4\,\alpha_x\,\alpha_y}{\left[\alpha_x^2 + \left(1 - \dfrac{k_x}{k_c}\right)^2\right]\left[\alpha_y^2 + \left(\dfrac{k_y}{k_c}\right)^2\right]} \qquad (1.21)$$

where k_x and k_y are the streamwise and spanwise wavenumbers, respectively. This model describes the convective ridge accurately. However, the simplicity of the model proposed by Corcos limits its validity domain to the wavenumbers around the convective ridge [START_REF] Blake | Mechanics of Flow-Induced Sound and Vibration[END_REF]. In fact, on one hand, the model proposed by Corcos assumes that the correlation lengths L_x and L_y do not depend on the thickness δ of the TBL, which leads to an underestimation of the convective ridge width when the spatial separations ξ_x and ξ_y become large. On the other hand, this model also assumes a separation between the streamwise and spanwise directions, which constitutes a strong hypothesis. Other models (Efimtsov [31], Smol'yakov and Tkachenko [START_REF] Smol'yakov | Model of a field of pseudosonic turbulent wall pressures and experimental data[END_REF], etc.) which propose improvements of the Corcos model can be found in the literature.

mellen (1990)

The model proposed by Mellen [START_REF] Mellen | On modeling convective turbulence[END_REF] is given by

$$S_{p_b p_b}(k_x, k_y, \omega) = S_{p_b p_b}(\omega)\, \frac{2\pi \left(\alpha_x \alpha_y k_c^2\right)^2}{\left[\left(\alpha_x \alpha_y k_c^2\right)^2 + \left(\alpha_x k_c k_y\right)^2 + \left(\alpha_y k_c\right)^2 \left(k_c - k_x\right)^2\right]^{3/2}} \qquad (1.22)$$

where α_x = 0.116 and α_y = 0.7 [START_REF] Mellen | On modeling convective turbulence[END_REF]. Note that these exponential decay rates α_x and α_y can be modified to fit flow measurements.

Now, let us summarize this discussion on TBL models. It is important to note that the TBL ASD function sorts the energy into frequencies while the normalized wavenumber-frequency spectrum (CSD function normalized by the ASD function) sorts the energy into wavenumbers. There are numerous models available aiming at characterizing a TBL excitation, but none of them fits experimental measurements exactly without some adjustments. The list of models presented previously is far from being exhaustive. A brief numerical illustration of the Corcos and Mellen spectra is given below, followed by two references that discuss TBL models more thoroughly.
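The sketch below evaluates the convective wavenumber and ridge widths of Eqs. (1.16)-(1.17) and the Corcos and Mellen spectra of Eqs. (1.21)-(1.22) at a single frequency. The flow speed, the convection coefficient and the unit ASD level are assumed values used only to exercise the formulas.

```python
import numpy as np

# Assumed illustrative parameters
U_inf, K = 50.0, 0.7             # free-stream velocity [m/s] and convection coefficient
alpha_x, alpha_y = 0.116, 0.7    # Corcos decay rates (streamwise, spanwise)
S_w = 1.0                        # wall-pressure ASD S_pbpb(omega), set to 1 (normalized)

f = 1000.0
omega = 2.0 * np.pi * f
U_c = K * U_inf
k_c = omega / U_c                                                # Eq. (1.16)
dkx, dky = 2 * omega * alpha_x / U_c, 2 * omega * alpha_y / U_c  # Eq. (1.17)

def corcos(kx, ky):
    """Corcos wavenumber-frequency spectrum, Eq. (1.21)."""
    return (S_w * (U_c / omega) ** 2 * 4.0 * alpha_x * alpha_y
            / ((alpha_x**2 + (1.0 - kx / k_c) ** 2) * (alpha_y**2 + (ky / k_c) ** 2)))

def mellen(kx, ky):
    """Mellen wavenumber-frequency spectrum, Eq. (1.22)."""
    a = alpha_x * alpha_y * k_c**2
    den = (a**2 + (alpha_x * k_c * ky) ** 2 + (alpha_y * k_c) ** 2 * (k_c - kx) ** 2) ** 1.5
    return S_w * 2.0 * np.pi * a**2 / den

print(f"k_c = {k_c:.1f} rad/m, ridge widths: dkx = {dkx:.1f}, dky = {dky:.1f} rad/m")
print(f"Corcos on the ridge (kx=k_c, ky=0): {corcos(k_c, 0.0):.3e}")
print(f"Corcos at low wavenumbers (k=0):   {corcos(0.0, 0.0):.3e}")
print(f"Mellen on the ridge (kx=k_c, ky=0): {mellen(k_c, 0.0):.3e}")
```

Both models place their maximum on the convective ridge k_x = k_c; the finite level returned by the Corcos model at k = 0 is a reminder of the validity limitation discussed above, since the model is only intended to describe the vicinity of the ridge.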
Bull [START_REF] Bull | Wall-Pressure Fluctuations Beneath Turbulent Boundary Layers: Some Reflections on Forty Years of Research[END_REF] has done an extensive study on WPF beneath a TBL by looking at the statistical parameters inherent to the WPF of a TBL, its frequency spectrum and the mean square pressure from which one can define the ASD function of the WPF. Miller [66], in her PhD thesis, was also interested in the analysis of three types of TBL models from a theoretical to a practical point of view. literature review standard test means In this section, a brief overview on the standard experimental methods used to study the vibroacoustic properties of a given structure under stochastic excitation is presented for both the DAF and TBL cases. In order to assess the vibroacoustic behavior of a given structure, two quantities are considered: the structural or vibration response and the sound transmission of the structure. Diffuse acoustic field The experimental reproduction of a DAF excitation is generally done in a reverberant room also called reverberation room. The purpose of a reverberation room is to create a highly diffuse acoustic measurement environment, defined as a sound field in which the acoustic energy flows equally in all directions as stated earlier in Sec. 1.1.2. Reverberant rooms are involved in several standardized measurements such as the sound absorption of materials, the sound power of noise sources or the sound insulation of partitions [START_REF] Chazot | Diffuse Acoustic Field Produced in Reverberant Rooms: A Boundary Diffuse Field Index[END_REF]. They are also used for high-intensity noise level fatigue testing of, for example, aircraft and spacecraft components analysis [START_REF] Yeh | High intensity acoustic testing to determine structural fatigue life and to improve reliability in nuclear reactor and aerospace structures[END_REF]. In order to determine the sound transmission of a given structure using a reverberant room in which the DAF is experimentally created by a source emitting broadband noise (this room is referred to as the sending room), it is usually coupled to another reverberant room or an anechoic room which is referred to as the receiving room. The sound transmission of the structure is measured through the Transmission Loss (TL) if one is using the American Society for Testing and Materials (ASTM) standards [START_REF]Standard Test Method for Laboratory Measurement of Airborne Sound Transmission Loss of Building Partitions and Elements[END_REF] or the Sound Reduction Index (SRI) if one is using the European International Organization for Standardization (ISO) standards [START_REF]Acoustics -Measurement of sound insulation in buildings and of building elements using sound intensity -Part 1: Laboratory measurements[END_REF]. It should be noted that the main difference between these two quantities resides in the fact that the SRI value has been developed to approximate the performance of a material in reducing the transmission of speech, and 1.2 standard test means 23 therefore, does not give a true idea with respect to non-speech sounds such as music, traffic, trains, aircraft, etc. Although the use of reverberant rooms is widely spread, there are some limitations inherent to these types of installation. 
In fact, it has been experimentally proven that the field in the reverberant room can be considered as a DAF only above a cutoff frequency [START_REF] Jacobsen | The Diffuse Sound Field: Statistical Considerations Concerning the Reverberant Field in the Steady State[END_REF][START_REF] Schroeder | Statistical Parameters of the Frequency Response Curves of Large Rooms[END_REF] also known as the Schroeder frequency. It marks the transition from individual, well-separated resonances (that dominate the pressure field which is thus not diffuse) to many overlapping normal modes [START_REF] Schroeder | The "Schroeder frequency" revisited[END_REF]. Another limitation is the strong variability of the results obtained between different facilities. These differences are mainly due to test conditions, mounting conditions of test structures, geometry of the reverberant room, measurement strategies, etc. Turbulent boundary layer In Sec. 1.1.3.1, it is stated that a boundary layer arises when a fluid is moving around a structure or the structure is moving through a fluid at a certain speed. Naturally, there is mainly two experimental solutions for the characterization of the vibroacoustic behavior of a structure under TBL excitation: the wind tunnel (air flowing around an object) and the in situ or in-flight measurements (object moving through air). Wind tunnel The wind tunnel is the most commonly used method for the determination of the vibroacoustic behavior of structures under TBL excitation. Wind tunnels are large ducts with air flowing inside and are used to reproduce the interaction between a flow and a full or downscaled object. The measurements, which can be complex to perform, are done in a small section of the tunnel. In fact, while making measurements inside a wind tunnel, it is important to have control over the flow velocity and the pressure inside the tunnel on one hand. On the other, one must make sure to homogenize the flow with honeycomb grids and decouple the structure to be tested from the flow vein. literature review The experimental setup, the transducers used for the measurement of the parameters of interest as well as the flow quality have a direct impact on the measurements done in a wind tunnel [START_REF] Owen | Measurement and assessment of wind tunnel flow quality[END_REF][START_REF] Willmarth | Measurements of the fluctuating pressure at the wall beneath a thick turbulent boundary layer[END_REF], thus justifying the numerous TBL models present in the literature. As one can notice, wind tunnel measurements can be expensive and delicate to implement. In situ tests The other standard solution for the determination of the TBL induced noise and vibrations is in situ or in flight measurements where an aircraft is equipped with numerous transducers. These allow one to measure the turbulent flow and the vibrations at some regions of the aircraft as well as an estimation of the cabin noise. One can cite the work of Bhat [START_REF] Bhat | Flight test measurement of exterior turbulent boundary layer pressure fluctuations on Boeing model 737 airplane[END_REF] who carried out exterior WPF measurements on a Boeing model 737 airplane in two separate flight tests and at two different Mach numbers. 
Subsequently, he and Wilby [START_REF] Bhat | Interior noise radiated by an airplane fuselage subjected to turbulent boundary layer excitation and evaluation of noise reduction treatments[END_REF] studied the interior noise radiated by an airplane fuselage subjected to a TBL excitation and evaluated the effectiveness of some noise reduction treatments. However, the flight test data were in poor agreement with available wind tunnel measurements. Afterwards, Wilby and Gloyna [START_REF] Wilby | Vibration measurements of an airplane fuselage structure I. Turbulent boundary layer excitation[END_REF] measured the response of an airplane (Boeing model Wind tunnel and in situ measurements have been widely used over the years and almost all the TBL models that can be found in the literature were established using these measurements. However there are inherent drawbacks to these standard methods. Namely: cost, complexity, reproducibility of the results, etc. These reasons motivated researchers to propose alternatives to the standard test means and which will be discussed in the following section. 1.3 alternative approaches to standard test means 25 alternative approaches to standard test means The possibility of synthesizing the vibroacoustic response of structures under stochastic excitations using an array of acoustic sources was theoretically shown some decades ago [START_REF] Fahy | On simulating the transmission through structures of noise from turbulent boundary layer pressure fluctuations[END_REF]. But due to technical limitations, this method could not experimentally be validated. Since then, several techniques have been proposed over the years. Hereunder, a non-exhaustive list of alternative techniques for the reproduction of the vibroacoustic behavior of structures under DAF and TBL excitation are presented. Reciprocity method Recently, a new methodology based on the reciprocity principle was proposed by Marchetto et al. for the experimental characterization of flat rectangular panels under DAF [START_REF] Marchetto | Vibroacoustic response of panels under diffuse acoustic field excitation from sensitivity functions and reciprocity principles[END_REF] and TBL [START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF] excitations. This approach is based on the concept that the panel response at a given point on the panel to a random pressure field depends on two quantities in the wavenumber domain: first, the wall-pressure CSD function, which characterizes the excitation, and, second, the so-called "sensitivity function" determined at point, which characterize the dynamic behavior of the panel. Those sensitivity functions can be determined using the reciprocity principle, which states that they are equivalent to the panel velocity FRF when the panel is excited by a normal force at the point of interest, expressed in the wavenumber domain. The authors compared the vibration response and the transmission loss determined using the proposed approach and those determined using standard test means (reverberant room and wind tunnel). 
The results obtained in both papers [START_REF] Marchetto | Vibroacoustic response of panels under diffuse acoustic field excitation from sensitivity functions and reciprocity principles[END_REF][START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF] show good agreement, thus validating the proposed approach. Other than the limitations inherent to the mathematical formulation of the problem (linearity, time invariance, etc.), the authors stated that the methodology can be costly in terms of measurement time. However, this limitation can be overcome with the recently developed full-field vibration measuring techniques. literature review Wave Field Synthesis Wave Field Synthesis (WFS) is one of the most well-known sound field reproduction techniques. It was introduced at the Delft University of Technology by Berkhout in 1988 in a greatly cited paper [START_REF] Berkhout | A Holographic Approach to Acoustic Control[END_REF] where he suggested the term acoustic control system and not yet WFS. In fact, the term WFS was introduced later on as wave front synthesis [START_REF] Berkhout | Wave-front synthesis: A new direction in electroacoustics[END_REF] and subsequently as wave field synthesis [START_REF] Berkhout | Acoustic control by wave field synthesis[END_REF]. Since then, many researchers have investigated this new method and a large number of publications on WFS is now available. A substantial number of PhD theses have also been written in the context of WFS as well, see for example Ref. [START_REF] De Bruijn | Application of Wave Field Synthesis in Videoconferencing[END_REF][START_REF] Fazi | Sound Field Reproduction[END_REF][START_REF] Gauthier | Synthèse de champs sonores adaptative (Adaptive sound field synthesis)[END_REF][START_REF] Oldfield | The Analysis and Improvement of Focused Source Reproduction with Wave Field Synthesis[END_REF][START_REF] Robin | Experimental vibroacoustic testing of plane panels using synthesized random pressure fields[END_REF][START_REF] Sanalatii | Synthèse d'un champ acoustique avec contraste spatial élevé (Synthesis of an acoustic field with high spatial contrast)[END_REF][START_REF] Start | Direct Sound Enhancement by Wave Field Synthesis[END_REF][START_REF] Verheijen | Sound Reproduction by Wave Field Synthesis[END_REF][START_REF] Vogel | Application of Wave Field Synthesis in Room Acoustics[END_REF]. This sound reproduction technique is based on the wave theory concept of Huygens [START_REF] Huygens | Traité de la lumière. Les maîtres de la pensée scientifique[END_REF] which was originally described for water waves and optics. The construction principle postulated by Huygens states that each point on a wavefront can be regarded as a point source and that all these secondary sources combine to form subsequent wavefronts which are the same as the the wavefronts generated by the original source. This principle can be mathematically formulated using the Kirchhoff-Helmholtz integral representation which states that the sound pressure is completely determined in a volume if the sound pressure and the velocity are determined on the surface delimiting the volume. 
This integral is usually simplified via the first Rayleigh integral (for monopole-like sources) and the second Rayleigh integral (for dipole-like sources) which explicitly define the signals which are to be fed to the loudspeakers in terms of the normal derivative of the target pressure field to be synthesized. This representation formula is the basis of WFS where an array of equally spaced loudspeakers is used to reproduce a given pressure field. For practical reasons, the implementation of WFS is usually limited to sound field reproduction on a plane using a finite size of loudspeaker array, thus implying a discretization of the ideally continuous distribution of secondary sources from Huygens principle. Other methods have been developed in order to extend this reproduction technique to arrays of different shape: a relevant example is the so called 2.5 operator derived from 1.3 alternative approaches to standard test means 27 a stationary phase approximation of the three dimensional free-field Green function applied to two dimensional arrays, see for instance Ref. [START_REF] Start | Direct Sound Enhancement by Wave Field Synthesis[END_REF]. Berry et al. [START_REF] Berry | A wave field synthesis approach to reproduction of spatially correlated sound fields[END_REF] proposed a numerical study aiming at reproducing three types of excitation using WFS: propagating plane wave, diffuse acoustic field and wall-pressure in subsonic or supersonic TBL. They examined the accuracy of the reproduction in terms of the size of the source and reproduction planes, their separation and the number of reproduction sources required per acoustic wavelength. In their paper, they show that the approach is well suited to the reproduction of a DAF excitation or a supersonic TBL excitation (for which most of the wall-pressure energy is contained in the acoustic wavenumber domain) with a minimum of two reproduction monopoles per acoustic wavelength in the streamwise and crosswise directions. However, the reproduction approach has inherent limitations in the case of the small wavelength components of subsonic turbulent boundary layers, which are known to contribute to sound transmission. Planar Near-field Acoustical Holography Acoustical holography is a process by which the measurement of the pressure at a set of points on a surface in the vicinity of an acoustically radiating object is analyzed to yield the velocity vector field, the acoustic pressure, and the acoustic intensity map in three-dimensional space. This technique which appeared in the mid 1960s [START_REF] Hildebrand | An Introduction to Acoustical Holography[END_REF] is only an approximation of the inverse problem of reconstructing sound fields. If the measurement points are recorded on a surface located at a distance of at least one acoustic wavelength from the vibrating surface, then the processing of the data results in an acoustic image that can only resolve sources that are separated by at least one acoustic wavelength, in other words, only source details greater than the acoustic wavelength can be retrieved in this procedure. However, if the measurement points are taken in the nearfield of the source region, then the spatial resolution can be reduced to a fraction of a wavelength [START_REF] Hayek | Nearfield Acoustical Holography[END_REF], resulting in what is known as Nearfield Acoustic Holography (NAH) literature review which appeared in the 1980s [START_REF] Maynard | Nearfield acoustic holography: I. 
Theory of generalized holography and the development of NAH[END_REF][START_REF] Williams | Holographic Imaging without the Wavelength Resolution Limit[END_REF]. The surface containing the measurement points is called the hologram surface. For more details on the theoretical foundations, the theory in different coordinate systems (planar, cylindrical and spherical) and the applications of NAH, see Ref. [START_REF] Maynard | Nearfield acoustic holography: I. Theory of generalized holography and the development of NAH[END_REF]. Planar Nearfield Acoustical Holography (PNAH) is a method of imaging the source field of a planar or near planar vibrating structure and the three-dimensional half-space in front of the structure. The hologram is thus a plane of measurement points. PNAH is usually used to determine the pressure and velocity distributions on a plane knowing the pressure distribution on a parallel plane. However, it can also be used for the reproduction of a target pressure field on a plane as shown by Robin et al. [START_REF] Robin | Reproduction of random pressure fields based on planar nearfield acoustic holography[END_REF]. In their paper, they considered the reproduction of random pressure fields based on PNAH for the measurement of the vibroacoustic properties of plane panels. The numerical simulation showed a good reproduction of the wall-pressure fluctuations inside the acoustic circle for the considered three types of excitation: DAF, supersonic TBL and subsonic TBL. However, the system should involve a density of at least 400 monopoles per square meter for the test of a panel up to a frequency of 1 kHz in the case of a subsonic TBL. Thus PNAH is very well adapted to the synthesis of a DAF excitation or a supersonic TBL excitation. Least Squares Technique Bravo and Maury [START_REF] Maury | Turbulent Boundary-Layer Simulation with an Array of Loudspeakers[END_REF], Elliott et al. [START_REF] Elliott | The synthesis of spatially correlated random pressure fields[END_REF], Maury and Bravo [START_REF] Bravo | The experimental synthesis of random pressure fields: Methodology[END_REF] and Maury et al. [START_REF] Maury | The experimental synthesis of random pressure fields: Practical feasibility[END_REF] have widely discussed the reproduction of random excitations using an array of acoustic sources. This alternative method which uses a two-dimensional array of loudspeakers whose control filters are determined using the least squares method by minimizing the error between the desired target signals and the generated ones, hence the name least squares technique of this approach. It was established that this method works well when it comes to the reproduction of a DAF excitation but due to the limited number of sources in the array, it fails to simulate the WPF of a subsonic TBL excitation because of the high 1.3 alternative approaches to standard test means 29 wavenumbers involved meaning that a denser source array would be required. A criteria of approximately four sources per smallest wavelength was derived in the literature in order to reproduce the small correlation lengths of the surface pressure field induced by the TBL [START_REF] Aucejo | Experimental simulation of turbulent boundary layer induced vibrations by using a synthetic array[END_REF][START_REF] Maury | The experimental synthesis of random pressure fields: Practical feasibility[END_REF]. 
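To give an order of magnitude of this constraint, the short sketch below applies the four-sources-per-smallest-wavelength criterion when the smallest scale to reproduce is taken as the convective wavelength λ_c = U_c/f; the flow values are assumptions used only for illustration.

```python
import numpy as np

# Assumed illustrative flow (subsonic TBL) and acoustic parameters
U_inf, K, c0 = 50.0, 0.7, 343.0
U_c = K * U_inf

for f in (200.0, 500.0, 1000.0):
    lam_c = U_c / f            # convective wavelength, taken here as the smallest scale
    lam_0 = c0 / f             # acoustic wavelength, for comparison
    spacing = lam_c / 4.0      # four sources per smallest wavelength
    n_per_m2 = (1.0 / spacing) ** 2
    print(f"f = {f:5.0f} Hz: lambda_c = {lam_c*100:5.1f} cm, lambda_0 = {lam_0*100:5.1f} cm, "
          f"spacing = {spacing*100:4.1f} cm, ~{n_per_m2:6.0f} sources/m^2")
```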
As frequency increases, the number of required sources becomes very large and as one is limited by the size of the sources, the frequency range that can be studied with a given source is also limited. In order to circumvent this issue, Maury and Bravo [START_REF] Maury | Focussed Synthesis of a Turbulent Boundary Layer Excitation[END_REF] proposed a focused synthesis of the TBL excitation over a subdomain of the simulation surface. While this method allows to reach higher frequencies and ensures correct reproduction of the TBL excitation, it also limits the observation area to a fraction of the actual panel. Always in the framework of the least squares technique, Bravo and Maury [START_REF] Bravo | A synthesis approach for reproducing the response of aircraft panels to a turbulent boundary layer excitation[END_REF] proposed another alternative which consists in the reproduction of TBL-induced vibroacoustic response of a flat panel. However, this alternative requires a prior modal analysis of the panel. Source Scanning Technique We have seen that all of the methods using arrays of sources presented above are able to synthesize a DAF-like pressure fields but most of them fail to reproduce a TBL-like excitation due to the limited number of sources in the arrays. This results in a limitation in the wavenumber/frequency range that can be covered by these solutions. In order to circumvent this issue, Aucejo et al. introduced the Source Scanning Technique (SST) [START_REF] Aucejo | Experimental simulation of turbulent boundary layer induced vibrations by using a synthetic array[END_REF][START_REF] Aucejo | Source Scanning Technique for Simulating TBL-Induced Vibrations Measurements[END_REF][START_REF] Aucejo | Vibro-acoustique des structures immergées sous écoulement turbulent[END_REF] which involves the synthetic array principle and a monopole source that is spatially displaced at predefined positions in order to virtually create a full array of sources. However, the proposed approach was only partially validated through qualitative comparisons on a rectangular steel panel for a limited frequency range (up to 300 Hz). This partial validation of SST concerned only flat panels and the process was not automatized. Furthermore, only the vibration response of the steel panel was determined. literature review objectives of this thesis SST, as it was presented some years ago, is a promising which allows to overcome the limitations inherent to the other alternative techniques using full arrays of sources by using only one source and displacing it which means that one has much more flexibility on the spacing between two adjacent source positions. However, there were still several points to be clarified and improved for the approach to be used at industrial level. The first task at hand is therefore the validation of SST on flat rectangular panels through a parametric study on a wider frequency range for the DAF and the TBL excitations and automatizing the process using robots. The design parameters of the synthetic array will be studied in order to define optimal values for these parameters and determine accurately the vibration response as well as the power radiated by the panel (allowing to determine the transmission loss). The results obtained using SST will then be compared to results directly measured in a reverberant room in the case of the DAF excitation and in an anechoic wind tunnel for the TBL excitation. 
Once SST is completely validated for flat rectangular panels, an extension of the method to more complex structures will be proposed. This is motivated by the fact that the structures found in the transportation industry are seldom flat and that it is also interesting to explore the validity of SST for other geometrical configurations. The ultimate goal being for the method to be adopted as an alternative or complementary measurement process to the standard test means. dissertation outline The remainder of the thesis is organized as follows. Chapter 2 of Part I is dedicated to a general presentation of the Source Scanning Technique (SST) in a theoretical point of view. Then, Part II deals with the application of SST on FRPs: Chapter 3 starts with a presentation of the study case and the theoretical background concerning panels. dissertation outline 31 This is followed by a parametric study of the design parameters of SST. In Chapter 4, a presentation of the experimental SST setup followed by a comparison of the results to numerical and/or experimental results from direct measurement in standard facilities is proposed. In Part III, an extension of the SST process to Curved Rectangular Panels (CRPs) is proposed. First the study case and the theoretical background on CRPs are presented in Chapter 5. Subsequently is Chapter 6 where an experimental validation of SST on CRPs is proposed: the experimental setup is presented first before going through the results and discussion. Following this, a general conclusion and a discussion on future work on this topic close the main body of the manuscript. Following the main body of this thesis are several appendices with additional information and a bibliography. This chapter is dedicated to the presentation of the source scanning technique from a theoretical point of view. First we will talk about the synthetic array principle and the wavenumber formulation on which the SST approach is based. In particular, we will discuss how the response of structures under stochastic excitations are derived using random vibration theory along with the filtering effect. Then we will formerly introduce the SST process and its parameters. For the sake of simplicity, the structures illustrated in this chapter are flat and rectangular. 2 S O U R C E S C A N N I N G T EC H N I Q U E synthetic array principle The synthetic array principle is inspired by the concept of the synthetic aperture radar, invented in the 1950s by Carl A. Wiley [START_REF] Wiley | Pulsed Doppler Radar Methods and Apparatus[END_REF], and which consists in post-processing source scanning technique the signals received by a moving radar to produce fine resolution images from an intrinsically resolution-limited radar system in the along-track direction [START_REF] Autrey | Passive synthetic arrays[END_REF][START_REF] Curlander | Synthetic Aperture Radar: Systems and Signal Processing[END_REF]. Linearity is one of the most useful features that a system can have, and there is an entire branch of mathematics devoted to linear systems theory. With that in mind, what makes a system linear? Simply put, a system is considered linear if the equations that govern it follow the simple rule that the sum of the inputs to the system yields the sum of the outputs. Put mathematically, if g (x) describes our system, and a and b are our inputs, then g(a + b) = g(a) + g(b). 
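A minimal numerical illustration of this superposition property, in the spirit of the synthetic array, is given below; the free-field monopole transfer functions, the geometry and the source amplitudes are assumptions chosen purely for illustration.

```python
import numpy as np

c0, rho0, f = 343.0, 1.2, 500.0          # assumed air properties and frequency
k0 = 2 * np.pi * f / c0

def monopole_frf(src, obs):
    """Assumed free-field FRF (pressure per unit volume velocity) of a monopole."""
    r = np.linalg.norm(obs - src, axis=-1)
    return 1j * 2 * np.pi * f * rho0 * np.exp(-1j * k0 * r) / (4 * np.pi * r)

# One moving monopole occupying S positions (the synthetic array), one response point
S = 10
src = np.array([[0.03 * s, 0.0, 0.03] for s in range(S)])   # assumed 3 cm spacing
obs = np.array([0.20, 0.10, 0.0])

# "Measurements": the response to each source position, taken one at a time
gamma = monopole_frf(src, obs)                               # shape (S,)

# Post-processing: by linearity, the response to the whole array driven with amplitudes
# q is the weighted sum of the individual responses; several sets of amplitudes can be
# combined offline from the same S individual measurements.
q1 = np.ones(S)
q2 = np.exp(-1j * 30.0 * src[:, 0])    # amplitudes targeting a different pressure field
print(gamma @ q1, gamma @ q2)
```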
In our case, the acoustic medium and the structure are assumed to behave linearly, which means that the technique will not work for high sound levels or large displacements/deformations of the structure. The synthetic array principle, which assumes a linear system, consists in using one monopole source that is spatially displaced to the desired positions in order to virtually create an array of monopole sources. Hence, the response to a full array of monopole sources is obtained by adding up the individual responses due to each monopole position. Using an actual array allows one to simultaneously drive all of the sources and proceed with an online process, whereas in the case of the synthetic array, the process is an offline one. It is important to note that with the synthetic array approach, one needs to add a post-processing step in order to obtain the response of the structure due to the actual array, using the linearity assumption stated at the beginning of Sec. 2.1.

wavenumber formulation and quantities of interest

This analysis considers the response of structures to random pressure fields assumed to be stationary in time and homogeneous in space, as described in [START_REF] Maury | A Wavenumber Approach to Modelling the Response of a Randomly Excited Panel, Part I: General Theory[END_REF]. Basically, this means that our system is in a steady-state regime and that the statistical properties of the forcing do not change with time. Let us consider the structure of surface Σ_s illustrated in Fig. 2.2 and subjected to a random pressure field. In the following, we will assume that the wall-pressure fluctuations are not affected by the vibrations of the structure, which means that the excitation is not modified by the structural response. This assumption holds as long as the flow-induced displacements are much smaller than the characteristic length scales of the flow and as long as we are far enough downstream from the transitional boundary layer, so that the flow is robust to small perturbations due to the vibrations of the structure [START_REF] Vaicaitis | Noise transmission through stiffened panels[END_REF]. This approximation makes the problem more tractable, and suitable for the derivation of analytical expressions. Thus the random excitations considered here are modeled by the wall pressure fluctuations that would be observed on a smooth rigid wall, also known as the blocked pressure p_b [START_REF] Fahy | Sound and Structural Vibration: Radiation, Transmission and Response[END_REF].

The time response of a structure, when excited by the blocked pressure p_b, is denoted α(x, t) in general. This quantity designates either the structural velocity response v(x, t), the pressure radiated by the structure p(x, t), or the particle velocity response v_0(x, t). Note that, in the first case, x is located on Σ_s and, in the last two cases, x is located inside the fluid domain and on the radiating side of the structure. The time response α is given by the following convolution product [START_REF] Maury | A Wavenumber Approach to Modelling the Response of a Randomly Excited Panel, Part I: General Theory[END_REF]

$$\alpha(\mathbf{x},t) = \int_{-\infty}^{+\infty} \int_{\Sigma_s} \gamma_\alpha(\mathbf{x},\mathbf{y},t-\tau)\, p_b(\mathbf{y},\tau)\, \mathrm{d}\mathbf{y}\, \mathrm{d}\tau \qquad (2.1)$$

where γ_α(x, y, t) is the space-time impulse response of the structure at point x when excited by a normal unit force at point y. Considering the random process as ergodic, the cross-correlation function R_αα′(x, t) can be written

$$R_{\alpha\alpha'}(\mathbf{x},t) = \int_{-\infty}^{+\infty} \alpha(\mathbf{x},\tau)\, \alpha'(\mathbf{x},t+\tau)\, \mathrm{d}\tau \qquad (2.2)$$

where α′ also designates v, p or v_0.
Performing a time-Fourier transform of the crosscorrelation function after introducing Eq. (2.1) in Eq. (2.2) yields the following spacefrequency spectrum of the response of the structure [START_REF] Maury | A Wavenumber Approach to Modelling the Response of a Randomly Excited Panel, Part I: General Theory[END_REF][START_REF] Maxit | Prediction of flow induced sound and vibration of periodically stiffened plates[END_REF] S αα ′ (x, ω) = Σ s Σ s Γ α (x, y, ω) S p b p b (y, z, ω) Γ * α (x, z, ω) dydz (2.3) where Γ α (x, y, ω) is the time-Fourier transform of γ α (x, y, t) and corresponds to the FRF of the structure at point x when excited by a unit normal point force at point y; S p b p b (y, z, ω) is the time-Fourier transform of the blocked pressure cross-correlation function and the superscript " * " represents the complex conjugate. Defining the wavenumber-frequency spectrum of the blocked pressure S p b p b (k, ω) as the wavenumber transform of its space-frequency spectrum S p b p b (x, y, ω) yields S p b p b (x, y, ω) = 1 4π 2 +∞ -∞ S p b p b (k, ω) e ik(x-y) dk (2.4) Introducing Eq. (2.4) in Eq. ( 2.3) and re-arranging, one obtains the following expression of the response of the structure S αα ′ (x, ω) = 1 4π 2 +∞ -∞ H α (x, k, ω) S p b p b (k, ω) H * α ′ (x, k, ω) dk (2.5) where H α (x, k, ω) = Σ s Γ α (x, y, ω) e -iky dy (2.6) represents the sensitivity function and characterizes the vibroacoustic behavior of the structure. From Eq. (2.6), one can deduce that it corresponds to the response of the considered system at point x when it is excited by a unit WPPW of wavevector k at the angular frequency ω. In practice, the following discretized version of Eq. (2.5) is used S αα ′ (x, ω) ≈ 1 4π 2 k∈Ω k H α (x, k, ω) S p b p b (k, ω) H * α ′ (x, k, ω) δk (2.7) where Ω k is a set of properly chosen wavevectors. The two quantities of interest being the vibration response as well as the transmission loss of the structure, let us introduce them hereunder. vibration response The velocity response of the structure at a given point x is obtained by replacing α and α ′ with v in Eq. (2.7) S vv (x, ω) ≈ 1 4π 2 k∈Ω k S p b p b (k, ω) |H v (x, k, ω)| 2 δk (2.8) source scanning technique Looking at the previous equation, one can conclude that the velocity sensitivity functions H v (x, k, ω) have to be determined in order to estimate the velocity response ASD function S vv (x, ω) and the way to do so will be discussed in Sec. 2.4. transmission loss The sound transmission loss, denoted TL, is a measure of the performance of a structure in preventing transmission of sound from one space to another. It is defined as ten times the logarithm in base ten of the ratio of the sound power incident on the structure to the sound power radiated by the structure on the receiving side, as indicated by the following equation TL (ω) = 10 log 10 Π i (ω) Π r (ω) (2.9) where Π i and Π r denote the incident power and the radiated power by the structure, respectively. The incident power will be discussed in Chapter 3 and Chapter 5 for each of the considered random pressure fields. The radiated power is defined by the following equation Π r (ω) = Σ p I act (x, ω) dx (2.10) where dx is the surface element and I act (x, ω) is the normal component of the active sound intensity at point x. The active sound intensity is directly related to the CSD function S pv 0 (x, ω) between the sound pressure and the particle velocity at point x [36] I act (x, ω) = ℜ S pv 0 (x, ω) (2.11) where ℜ designates the real part and from Eq. 
(2.7), replacing α by p and α′ by v_0, one has

$$S_{pv_0}(\mathbf{x},\omega) \approx \frac{1}{4\pi^2} \sum_{\mathbf{k}\in\Omega_k} H_p(\mathbf{x},\mathbf{k},\omega)\, S_{pp}(\mathbf{k},\omega)\, H^*_{v_0}(\mathbf{x},\mathbf{k},\omega)\, \delta\mathbf{k} \qquad (2.12)$$

In practice, a discretized version of Eq. (2.10) will be used

$$\Pi_r(\omega) \approx \sum_{\mathbf{x}\in\Sigma_r} I_{act}(\mathbf{x},\omega)\, \delta\mathbf{x} \qquad (2.13)$$

where Σ_r is a virtual surface at a certain distance z on the radiating side of the structure. Here again, in order to compute the CSD function S_pv0(x, ω) between the pressure radiated by the structure and the particle velocity on the radiating side of the structure, one will need to determine the radiated pressure sensitivity function H_p(x, k, ω) and the particle velocity sensitivity function H_v0(x, k, ω). As for the velocity sensitivity function, these two other types of sensitivity functions will be determined using the SST approach.

wall-pressure plane waves

Here, we will answer the following questions: What is a Wall-Pressure Plane Wave (WPPW)? How can we use them in order to model a TBL pressure field through the CSD function? How can we experimentally generate them?

Definition

In room acoustics, a diffuse field can be approximated by adding up the effect of an infinite number of acoustic plane waves having the same amplitudes and emanating from all directions [START_REF] Pierce | Acoustics: An Introduction to Its Physical Principles and Applications[END_REF]. A WPPW is defined as the blocked pressure acting at the surface of a structure when excited by a pressure field due to a DAF or a TBL excitation, for instance. As for a plane wave, the pressure induced by a WPPW η is defined by an amplitude A_η and a two-dimensional wavevector k_η = (k_ηx, k_ηy). Hence, it can be defined as follows

$$p_\eta(\mathbf{x},t) = A_\eta(t)\, e^{-i\mathbf{k}_\eta\cdot\mathbf{x}} \qquad (2.14)$$

where A_η(t) is a random variable. It is important to note that WPPWs are only defined at the very surface of the structure and do not depend on the acoustic propagation, contrary to a simple plane wave. The pressure CSD function corresponding to the WPPW is therefore

$$S_{p_\eta p_\eta}(\boldsymbol{\xi},\omega) = S_{A_\eta A_\eta}(\omega)\, e^{-i\mathbf{k}_\eta\cdot\boldsymbol{\xi}} \qquad (2.15)$$

where ξ = (ξ_x, ξ_y) is the vector representing the spatial shift between two points on the surface of the structure and S_AηAη(ω) is the ASD function of the amplitude of the WPPWs. Let us suppose a rigid surface excited by a set of uncorrelated WPPWs. The total pressure p(x, t) is given by

$$p(\mathbf{x},t) = \sum_\eta p_\eta(\mathbf{x},t) \qquad (2.16)$$

As the WPPWs are supposed uncorrelated, one has S_AηAη′(ω) = 0 if η ≠ η′. Consequently, the CSD function of the total pressure p(x, t) is

$$S_{pp}(\boldsymbol{\xi},\omega) = \sum_\eta S_{A_\eta A_\eta}(\omega)\, e^{-i\mathbf{k}_\eta\cdot\boldsymbol{\xi}} \qquad (2.17)$$

Uncorrelated WPPWs and TBL excitation

The goal here is to establish a relationship between uncorrelated WPPWs and the TBL excitation through their CSD functions. The CSD function of the TBL excitation S_pp(ξ, ω) in the space-frequency domain is related to that in the wavenumber-frequency domain S_pp(k, ω) by the following inverse space-Fourier transform

$$S_{pp}(\boldsymbol{\xi},\omega) = \frac{1}{4\pi^2} \int_{-\infty}^{+\infty} S_{pp}(\mathbf{k},\omega)\, e^{-i\mathbf{k}\cdot\boldsymbol{\xi}}\, \mathrm{d}\mathbf{k} \qquad (2.18)$$

In practice, a discretized version of Eq. (2.18) is used

$$S_{pp}(\boldsymbol{\xi},\omega) = \frac{1}{4\pi^2} \sum_{\Omega_k} S_{pp}(\mathbf{k},\omega)\, e^{-i\mathbf{k}\cdot\boldsymbol{\xi}}\, \Delta\mathbf{k} \qquad (2.19)$$

where Ω_k represents a set of wavevectors and Δk = δk_x δk_y, with δk_x and δk_y corresponding to the wavenumber resolutions in the x and y directions, respectively. Equating Eq. (2.17) and Eq. (2.19), and assuming that the sums are both defined over the same set of wavevectors Ω_k, one can establish the following relation between the ASD function of the amplitude of the WPPW and the CSD function of the TBL excitation in the wavenumber-frequency domain

$$S_{A_\eta A_\eta}(\omega) = \frac{1}{4\pi^2}\, S_{pp}(\mathbf{k},\omega)\, \Delta\mathbf{k} \qquad (2.20)$$

where k ∈ Ω_k. This set Ω_k is chosen depending on what one wants to achieve by approximating the TBL excitation with uncorrelated WPPWs.

Generating WPPWs

In the following, the generation of WPPWs is discussed. To begin with, let us remember that a WPPW and an acoustic plane wave are two different concepts. Let us consider an acoustic plane wave of unit amplitude; the pressure induced by this wave can be written as

$$p_{ac}(\mathbf{x},t) = e^{-i\mathbf{k}\cdot\mathbf{x}} \qquad (2.21)$$

where k = (k_x, k_y, k_z) and x = (x, y, z). It is important to note that, in this case, the components of the three-dimensional wavevector k must satisfy the following dispersion relation of the Helmholtz equation

$$k_x^2 + k_y^2 + k_z^2 = k_0^2 \qquad (2.22)$$

where k_0 represents the acoustic wavenumber in the considered medium. In the case of a WPPW η, as the wavenumbers in the x and y directions are a priori known (as discussed in Sec. 2.3.2), the wavenumber in the direction perpendicular to the surface of the structure must satisfy

$$k_z^\eta = \begin{cases} \pm\sqrt{k_0^2 - (k_x^\eta)^2 - (k_y^\eta)^2} & \text{if } k_0 > \sqrt{(k_x^\eta)^2 + (k_y^\eta)^2} \\[2mm] \pm i\sqrt{(k_x^\eta)^2 + (k_y^\eta)^2 - k_0^2} & \text{if } k_0 < \sqrt{(k_x^\eta)^2 + (k_y^\eta)^2} \end{cases} \qquad (2.23)$$

The previous equation clearly indicates that wall-pressure fields corresponding to those of propagating (k_z^η is real) and evanescent (k_z^η is complex) acoustic plane waves must be generated to simulate a TBL wall-pressure field from uncorrelated WPPWs. It is very difficult to experimentally create evanescent plane waves normal to the surface of the structure. However, one can achieve this by using an array of monopole sources simulating the required near-field interferential conditions.

description of the sst approach

The proposed approach is based on the mathematical formulation of the problem in the wavenumber domain. This formulation is appropriate because it allows, through Eq. (2.5), an explicit separation of the contributions of the excitation, via the wall-pressure CSD function, from those of the vibroacoustic behavior of the structure, via the sensitivity functions discussed above. Let us consider a unit WPPW characterized by the wavevector k and the angular frequency ω. The pressure at the surface of the structure, which will be referred to as the reproduction surface, is simply given by p(x, k, ω) = e^{-ik·x}. The SST process can be divided into four main steps that will allow us to reproduce this target pressure field from S positions of the monopole source.

1. Definition of the target pressure at the observation points: one supposes that the reproduction surface is regularly discretized in P observation points and one defines the target pressure vector as the vector whose components correspond to the pressure of the unit WPPW at the P points. The spacing between the points should be sufficiently small to describe the spatial variation of the WPPW.

2. Determination of the transfer matrix G between the S source positions and the P observation points, whose component G_ps(ω) is the transfer function between the monopole source at position s and the observation point p.

3.
Computation of the vector of source amplitudes q by inverting the following matrix equation Gq = p (2.24) that can be rewritten explicitly as S s=1 G ps (ω) q s (k, ω) = p p (k, ω) , ∀p ∈ [1, P ] (2.25) where q s (k, ω) is the amplitude of the source at position s and p p (k, ω) = e -i(k x x p +k y y p ) represents the target pressure at the observation point p whose coordinates are x p and y p on the structure. When the number of observation points P is less than the number of source positions S, the system in Eq. (2.24) is underdetermined and has an infinite number of solutions. However, when P > S, the system is overdetermined and does not have one single exact solution. Nevertheless, a solution minimizing the error between the target pressure and the synthesized one can be determined. The matrix G is then rectangular, therefore Eq. (2.24) is solved in the least squares sense as q = G † p (2.26) The dagger symbol in Eq. (2.26) indicates the Moore-Penrose pseudo-inverse. The reproduction of a target pressure field using an array of acoustic sources is thus an inverse problem which leads to some issues that will be discussed later on. 4. Synthesis of the target pressure field and of the sensitivity function: in order to assess the quality of the reconstructed pressure field, one considers Q points on the reproduction surface. These Q points must be different from the P reference points in order to estimate the ability of the technique to reproduce correctly the pressure field between the reference points. After the transfer function matrix Ĝ between the S source positions and the Q reconstruction points is determined, the vector of the reconstructed pressure p can be computed with the following expression: p = Ĝq. One will use this expression in the following to estimate the efficiency of the SST and to define the optimal parameters of the synthetic array. However, in practice, it will not be used, as only the sensitivity functions are of interest to estimate the response of the structure to the stochastic excitation. The sensitivity functions are given by the following equation H α (x, k, ω) = S s=1 q s (k, ω) Γ s α (x, ω) (2.27) where α = (v, p, v 0 ) and Γ s α (x, ω) represents the FRF between point x and the source at position s and is defined as the response α at point x when the source is located at point s. Naturally, there are design parameters that are inherent to this approach and whose values need to be carefully chosen for a good reproduction process. These parameters are • the cutoff wavenumber k which allows one to define the set of wavevectors Ω k over which one has an accurate synthesis of the response of the structure, • the distance d m between two adjacent monopole positions (see Fig. 2.5), and • the distance d between the structure and the array of monopoles which will be determined through an analysis of the condition number of the transfer matrix G and the relative mean square error between the target WPPWs and the synthesized ones. See Fig. 2.5 for a depiction of this distance. 2.5 summary 47 summary This chapter was dedicated to the presentation of the SST approach and its theoretical background. The method is based on the synthetic array principle where one monopole source is spatially displaced in several positions in order to reconstruct the effect of a full array of monopoles. The technique relies on a formulation of the vibroacoustic problem in the wavenumber domain: this allows one to be able to simulate a TBL excitation through uncorrelated WPPWs. 
The same can be done for the DAF excitation. Apart from the CSD functions of the DAF and TBL, the other difference originating from the modeling of these two excitations through uncorrelated WPPWs is in the very nature of the waves. In the case of the DAF, only propagating waves are needed whereas in the TBL case, one also needs to produce evanescent waves. The main design parameters of the synthetic monopole array have been presented: a study of the influence of these parameters is done in Chapter 3 and Chapter 5 for rectangular panels and open circular cylindrical shells, respectively. proposed before closing the chapter with a brief summary. 3 S S T O N F L AT R E C TA N G U L A R PA N E L S : T H EO R Y A N D N U M E R I C A L S T U D I E S problem statement This analysis considers the response of two-dimensional rectangular planar structures to a random pressure field excitation. This pressure field is assumed to be stationary in time and homogeneous in space. We will be interested in two types of random excitations: the diffuse acoustic field and the turbulent boundary layer excitations. The quantities of interest will be the ASD function of the velocity at one point on the panel and the sound transmission loss of the considered panel. The geometric configuration of the studied structure of surface Σ p is shown in Fig. 3.1. The geometric parameters characterizing the panel are the length L x , the width L y and the thickness h. In the following, we will assume that the WPFs are not affected by the vibrations of the structure which means that the excitation is not modified by the structural response. Thus the random excitations considered in this paper are modeled by the WPFs that would be observed on a smooth rigid wall, also known as the blocked pressure p b [START_REF] Fahy | Sound and Structural Vibration: Radiation, Transmission and Response[END_REF]. The goal here is to determine the vibroacoustic responses of the panel by reproducing WPFs that are close enough to the target WPFs using SST. The transverse velocity v z (x, t), which will be simply written as v (x, t) throughout this dissertation, represents the vibration response of the panel when excited by a normal random pressure field p b (x, t). The transmission loss denoted TL (ω), introduced in Sec. 2.2, gives a measure in decibels ( dB) of the ability of the structure to reduce sound energy transmission. In a first step, let us briefly derive the expression of these two quantities of interest for the considered panel. 3.2 theoretical responses [START_REF] Klabes | Fuselage Excitation During Cruise Flight Conditions: A New CFD Based Pressure Point Spectra Model[END_REF] 3.2 theoretical responses Vibration response -Sensitivity functions The ASD function of the vibration response of a given structure under random excitation was derived in Chapter 2 and is given by Eq. (2.8) which is rewritten hereunder S vv (x, ω) ≈ 1 4π 2 k∈Ω k S p b p b (k, ω) |H v (x, k, ω)| 2 δk The previous equation clearly shows that one needs to estimate the velocity sensitivity functions H v (x, k, ω) in order to determine the ASD function of the vibration response. These functions are theoretically determined using a modal expansion method. In general, the standard expression of the velocity sensitivity functions H v (x, k, ω) written using a modal expansion is given by Eq. (A.1) and all the details about the computation of these functions are also given in Appendix A. 
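Although the actual sensitivity functions are computed from the modal expansion of Appendix A (or measured with SST), the way the discretized sum of Eq. (2.8) is evaluated in practice can be sketched with a few lines of code. In the sketch below the sensitivity function is a deliberately crude placeholder and the TBL is represented by the Corcos model of Eq. (1.21) with assumed flow parameters.

```python
import numpy as np

# Assumed flow parameters and wall-pressure level (illustration only)
U_c, alpha_x, alpha_y = 35.0, 0.116, 0.7
S_w = 1.0                                   # assumed ASD S_pbpb(omega) [Pa^2.s/rad]

def S_pbpb(kx, ky, omega):
    """Corcos wavenumber-frequency CSD, Eq. (1.21)."""
    kc = omega / U_c
    return (S_w * (U_c / omega) ** 2 * 4 * alpha_x * alpha_y
            / ((alpha_x**2 + (1 - kx / kc) ** 2) * (alpha_y**2 + (ky / kc) ** 2)))

def H_v(kx, ky, omega):
    # Placeholder velocity sensitivity function (arbitrary smooth function of k);
    # in practice it comes from Eq. (A.1) or from the SST synthesis, Eq. (2.27).
    return 1e-3 / (1.0 + (kx**2 + ky**2) / 50.0**2)

# Wavenumber set Omega_k: resolution 1 rad/m up to the cutoff wavenumber
k_max, dk = 50.0, 1.0
k = np.arange(-k_max, k_max + dk, dk)
KX, KY = np.meshgrid(k, k, indexing="ij")

omega = 2 * np.pi * 500.0                   # frequency of interest
S_vv = np.sum(S_pbpb(KX, KY, omega) * np.abs(H_v(KX, KY, omega)) ** 2) * dk * dk / (4 * np.pi**2)
print(f"S_vv estimate at 500 Hz: {S_vv:.3e} (m/s)^2.s/rad")
```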
Transmission loss

The transmission loss was also introduced in Chapter 2 through Eq. (2.9), rewritten below

$$TL(\omega) = 10 \log_{10} \frac{\Pi_i(\omega)}{\Pi_r(\omega)}$$

In order to determine the radiated power Π_r(ω) defined by Eq. (2.13), the radiated pressure and particle velocity sensitivity functions introduced in Chapter 2 have to be estimated. Concerning the incident power, for a DAF excitation it is given by

$$\Pi_i(\omega) = \frac{S^{DAF}_{p_b p_b}(\omega)\, \Sigma_p}{8 \rho_0 c_0} \qquad (3.1)$$

where Σ_p is the panel surface area and ρ_0 c_0 the characteristic impedance of the fluid. For a TBL, there is no incident acoustic pressure in the farfield. Nevertheless, in order to compare both excitation fields, we define an equivalent incident power Π_i^eq(ω), which will be the one of a DAF having a wall-pressure ASD function equal to S^TBL_pbpb(ω) on the plate surface. The equivalent incident power is thus given by the following equation

$$\Pi_i^{eq}(\omega) = \frac{S^{TBL}_{p_b p_b}(\omega)\, \Sigma_p}{8 \rho_0 c_0} \qquad (3.2)$$

where S^TBL_pbpb(ω) corresponds, this time, to the ASD function of the TBL WPF.

sst on flat rectangular panels: parametric studies

In the following, some parametric investigations on the SST are proposed. These studies aim at determining the optimal parameters of the array of sources for an accurate reproduction of the vibroacoustic response of the structure. The numerical simulations presented in this section concern a panel with the same geometrical and mechanical properties (see Table 3.1) as the one considered for the experimental work presented in Chapter 4. The panel is supposed simply supported on its four edges. The normal modes are then calculated analytically and the sensitivity functions can be estimated using the modal expansion method as described in Sec. 3.2.1 and more thoroughly in Appendix A.

Cutoff wavenumber and wavenumber resolution

The minimum separation between the source positions is derived from the maximum wavenumber or the minimum wavelength to be synthesized. For frequencies well above the hydrodynamic coincidence frequency and according to Marchetto et al. [START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF], the wavenumber domain Ω_k over which Eq. (2.7) is calculated must at least include the flexural wavenumber of the panel at the highest frequency of interest. The natural flexural wavenumber of a thin panel is given by the following equation

$$k_f(\omega) = \sqrt[4]{\frac{\omega^2 \rho h}{D}} \qquad (3.3)$$

where D = Eh³/[12(1 − ν²)] is the flexural rigidity of the panel. Thus the smallest admissible cutoff wavenumber, from which the spacing δ_s between the source positions is defined, is set to

$$k_{max} = \beta\, k_f(\omega_{max}) \qquad (3.4)$$

where β is a safety coefficient such that β > 1 and ω_max corresponds to the maximum frequency of interest. The spacing between two adjacent source positions is then defined using the criterion of four monopoles per smallest wavelength (as shown in previous studies [START_REF] Aucejo | Experimental simulation of turbulent boundary layer induced vibrations by using a synthetic array[END_REF][START_REF] Maury | The experimental synthesis of random pressure fields: Practical feasibility[END_REF])

$$\delta_s = \frac{\lambda_{min}}{4} \qquad (3.5)$$

where λ_min = 2π/k_max. Let us now talk about the wavenumber resolution. Numerical simulations show that the wavenumber resolution is not a critical parameter, as H_α(x, k, ω) and S_pbpb(k, ω) do not vary quickly with respect to the wavenumber. Moreover, SST can deal with fine resolutions as the wavenumber resolution only affects the post-processing steps. In the following, the wavenumber resolution is set to δk_x,y = 1 rad m⁻¹.

Interplanar distance

The interplanar distance represents the distance between the source array plane and the panel plane (or reproduction plane).
The study presented in this section aims at defining the optimal interplanar distance ensuring an accurate pressure synthesis. In order to assess the quality of the reproduction process, two different parameters are examined: • The condition number of the transfer matrix G, denoted κ(G), which is a measure of the sensitivity of the sought parameters (i.e. the amplitudes of the sources) with respect to perturbations in the input data and round-off errors made while solving Eq. (2.24) for q. The condition number of an m × n matrix A is defined as κ(A) = σ_1 / σ_n, with σ_1 ≥ σ_2 ≥ … ≥ σ_n > 0, where σ_1 and σ_n are the greatest and the smallest singular values of matrix A, determined from its singular value decomposition. When the condition number is large, the computed solution of the system may be in error. Values of the condition number near one indicate a well-conditioned matrix, whereas large values indicate an ill-conditioned matrix [START_REF] Cheney | Numerical Mathematics and Computing[END_REF]. • The Mean Square Error (MSE) on the synthesized pressure field, denoted e_p, defined by the following equation: e_p(k_x, k_y, ω) = E[ ‖p̂(x, k_x, k_y, ω) − p(x, k_x, k_y, ω)‖² / ‖p(x, k_x, k_y, ω)‖² ] (3.6) where p(x, k_x, k_y, ω) and p̂(x, k_x, k_y, ω) are the target and reconstructed pressure vectors, respectively, ‖•‖ represents the Euclidean norm and E[•] represents an ensemble average, also referred to as the mathematical expectation (or expected value) of the considered quantity. Note that the relative MSE is a spatial average over the panel. An arbitrary threshold of -10 dB (corresponding to a relative MSE of 10%) is chosen in order to gauge the accuracy of the reproduction process. As long as the relative MSE (which will be called the reproduction error in the following) is less than that threshold, the pressure field synthesis will be considered accurate. Fig. 3.2 shows the two quantities presented above plotted as functions of frequency and of the interplanar distance normalized with respect to λ_min. In Fig. 3.2a, one can notice that the condition number of the transfer matrix is almost frequency independent but increases when the interplanar distance increases. This means that the closer the array of acoustic sources is to the panel plane, the less sensitive the system is to noise. In order to avoid large condition numbers, an upper limit of the interplanar distance d is set to λ_min/2. Concerning the mean square error in Fig. 3.2b, one observes that it is roughly constant as a function of the frequency for a given interplanar distance. On the contrary, its evolution as a function of the interplanar distance for a given frequency is of interest: it defines an interval of distances, denoted I_opt in the following, over which the reproduction error remains below the chosen threshold, and this interval is consistent with the proposed criterion on the interplanar distance. This confirms the consistency of the proposed criterion: for an accurate synthesis, it is preferable to choose the interplanar distance such that d ∈ I_opt. Fig. 3.5 compares the target and synthesized pressure fields for several interplanar distances. Even when the interplanar distance is not taken within I_opt, the method succeeds relatively well in reconstructing the target plane wave: some errors can be noticed on the amplitude, but the shape of the wave is correctly described. In Fig. 3.5c, where the interplanar distance is taken from the interval I_opt, the synthesized pressure field is almost identical to the target pressure field. These observations thus validate the chosen interval I_opt, from which one can choose a value for the interplanar distance.
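The preceding criteria can be gathered in a short numerical sketch. The MATLAB snippet below is illustrative only: it assumes a simple baffled free-field monopole model for the transfer matrix G (source-strength scaling omitted, since only the condition number and relative errors are evaluated), assumed panel and material data, an assumed safety coefficient β and an arbitrary target WPPW; it computes the cutoff wavenumber and source spacing of Eqs. (3.3) to (3.5), then evaluates cond(G) and the reproduction error of Eq. (3.6) for a few candidate interplanar distances.
% Illustrative MATLAB sketch: source-array design and interplanar distance assessment
Lx = 0.48; Ly = 0.42; h = 3.17e-3; E = 68.9e9; nu = 0.3; rho = 2740;   % assumed data
D = E*h^3/(12*(1 - nu^2)); c0 = 343;
fmax = 2000; wmax = 2*pi*fmax; beta = 1.2;     % assumed safety coefficient
kf   = (wmax^2*rho*h/D)^(1/4);                 % Eq. (3.3)
kmax = beta*kf;                                % Eq. (3.4)
lmin = 2*pi/kmax;
ds   = lmin/4;                                 % Eq. (3.5)
[xs, ys] = meshgrid(0:ds:Lx, 0:ds:Ly);         % source positions (plane z = d)
[xr, yr] = meshgrid(linspace(0,Lx,25), linspace(0,Ly,21));   % points on the panel
Xs = [xs(:) ys(:)]; Xr = [xr(:) yr(:)];
k0 = 2*pi*fmax/c0;
kt = [40 0];                                   % target WPPW wavevector [rad/m]
pt = exp(-1i*(Xr*kt.'));                       % target blocked pressure
for d = lmin*[0.1 0.25 0.5 1]                  % candidate interplanar distances
    R = sqrt((Xr(:,1) - Xs(:,1).').^2 + (Xr(:,2) - Xs(:,2).').^2 + d^2);
    G = exp(-1i*k0*R)./(2*pi*R);               % baffled monopole transfer matrix
    q = pinv(G)*pt;                            % least-squares source amplitudes
    ep = norm(G*q - pt)^2/norm(pt)^2;          % relative MSE, cf. Eq. (3.6)
    fprintf('d = %5.3f m: cond(G) = %.2e, error = %5.1f dB\n', d, cond(G), 10*log10(ep));
end
With these assumed data, Eq. (3.5) yields a spacing of a few centimetres, of the same order as the values retained for the experimental study presented below.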
For the experimental study presented in the following chapter, the parameters of the array are set to the following values: k max = 50 rad m -1 , δ s = 3 cm and d = 3 cm. sst on flat rectangular panels: theory and numerical studies summary In this chapter, a brief review on the theoretical vibroacoustic responses of panels under random excitations was presented. The quantities of interest were the vibration/velocity response and the transmission loss of the panel subject to a DAF or TBL pressure field. Afterwards, parametric studies on the SST process were done in order to determine the optimal values of the design parameters of the synthetic array for an accurate synthesis process. These design parameters are the cutoff wavenumber k of and the interplanar distance d between the plane of the panel and the plane of the synthetic array. In this chapter, the vibroacoustic response of the aluminum flat panel is experimentally determined using SST. In a first step, the various devices used during the experiment as well as the experimental setup are presented in Sec. The excitation device used during the SST process is a mid-high frequency monopole source manufactured by Microflown Technologies [START_REF] De Bree | Microflown based monopole sound sources for reciprocal measurements[END_REF] and shown in Fig. 4.1. This monopole source is composed of a simple flexible hose at the end of which there is a 15 mm diameter nozzle with a reference sensor at the output and the whole is driven with a high impedance loudspeaker. The source (volume velocity) was calibrated by measuring the radiated sound pressure at a given distance in anechoic conditions for a given input voltage and using the theoretical model of a monopole in free field conditions in order to verify the relation between the radiated sound pressure and the volume velocity. Microphones In order to determine the transfer functions G ps between the S source positions and the P observation points, 20 1/4 ′′ microphones were used. They were all calibrated using a sound level calibrator. These microphones were also used to measure the transfer functions Γ s α (x, ω) (for α = p and α = v 0 ) needed in order to determine the sensitivity functions as described by Eq. (2.27) in Chapter 2. Accelerometer A z-axis Bruel & Kjaer accelerometer was used for the measurement of the transfer functions Γ s v (x, ω) allowing to determine the velocity sensitivity functions H v (x, k, ω). Experimental setup The SST process described in Chapter 2 has been applied on a simply supported aluminum panel. The characteristics of the panel are the same as the one considered in the numerical simulation in Chapter 3. To simulate the appropriate boundary conditions (i.e. simply supported), the panel was mounted using the protocol presented by Robin et al. [77] and was placed in a baffle consisting of a 2 cm thick square plywood with a 1 m side and in which there is an aperture the size of the panel, see Fig. source radiating in the semi-anechoic chamber with the baffle plane). This means that for the characterization of a second structure presenting dimensions smaller or equal to the present one, the same transfer matrix G could be used, albeit with a smaller number of microphone positions over its surface. • In step 4 of the SST process presented in Sec. 2.4, Γ s α (x, ω), the FRFs between point x and the source at position s are determined: they were measured when the test panel was mounted in the baffle as shown in Fig. 4.5. 
Two cases depending of the final quantities of interest were considered to evaluate the velocity sensitivity function H v (x, k, ω) at a point x on the panel, the acceleration response at point x was measured using one Bruel&Kjaer type 4508 accelerometer. The FRFs Γ s γ (x, ω) corresponding to the acceleration of the panel at point x for a monopole source at position s were measured for the S = 195 monopole positions. It took approximately 45 minutes to measure all the FRFs Γ s γ (x, ω). These latter were used with Eq. (2.27) to estimate the acceleration sensitivity functions of the panel at point x, H γ (x, k, ω) for k in Ω k defined in Sec. 2.2. Finally, the velocity sensitivity functions were de- note that the velocity sensitivity functions H v (x, k, ω) are determined here using the direct interpretation of Eq. (2.6) in contrast with the results in [START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF] where the reciprocity principle was used by exciting the panel with a shaker at the point of interest x and the response of the panel was measured on a grid of points using a laser vibrometer. In our case, the grid of monopole source positions constitute the excitation and the response is measured at only one point, the point of interest x, using one accelerometer. to evaluate the radiated power by the plate. In accordance with Eq. (2.10) to (2.13), the sensitivity functions in term of the radiated pressure and of the particle velocity along the z-axis should be estimated for the R = 9 × 20 = 180 points discretizing Σ r . A linear array of microphones close to the panel (at a distance of approximately 7 cm from the panel plane on the radiating side, corresponding to z = -7 cm as defined in Sec. 2.2) was used as shown in Fig. 4.7 to measure the pressure at each point r of Σ r . To evaluate the particle velocity, an estimation of the pressure gradient was considered. With the time dependence e iωt , the particle velocity at direction z can be written v z 0 = - i ρ 0 ω ∂p ∂z (4.1) sst on panels: experiments Thus, the particle velocity can be obtained by evaluating the pressure gradient ∂p/ ∂z [START_REF] Bai | Particle velocity estimation based on a two-microphone array and Kalman filter[END_REF][START_REF] Pierce | Acoustics: An Introduction to Its Physical Principles and Applications[END_REF] by using the two point finite difference method v z 0 ≈ - i ρ 0 ω p 2 -p 1 δz (4.2) where p 1 and p 2 are pressure measurements at two adjacent positions on surface S 1 and on surface S 2 , respectively, as shown in Fig. 4.6. δz is the spacing between the two surfaces S 1 and S 2 and is also the distance between two microphone positions on the grid. δz must be large enough to induce a sufficient pressure difference in order to determine the particle velocity but it also must be small enough for the approximation in Eq. (4.2) to be valid. Some trial and error tests with the monopole source were made in order to define an adequate spacing between the two planes where the pressure would be measured and it was found that a separation of δz ≈ 2 cm was ideal for these measurements. Normally, the finite difference method suffers from robustness issue against sensor noise and mismatch but the latter is avoided here as the particle velocity at one position of the grid in Fig. 4.6 is obtained using two pressure measurements of the same microphone at the designated position (see Fig. 
4.6 for an illustration of the proposed methodology). In the experimental setup, only one linear array of 20 microphones is used to accomplish these pressure measurements on the two surfaces S 1 and S 2 . In fact the linear array of microphone is mounted on a 2D Cartesian robot allowing us to sweep through a given surface on the radiating side of the panel. Measuring the transfer functions Γ s p (x, ω) (radiated pressure) and Γ s v 0 (x, ω) (particle velocity) at point x, now located on the discretized surface Σ r , allows us to determine the pressure sensitivity functions H p (x, k, ω) and the particle velocity sensitivity functions H v 0 (x, k, ω) used in Eq. (2.12). The measurement of all the transfer functions needed in the determination of the radiated power by the panel took approximately 46 hours. results and discussion Numerical validation In order to assess the accuracy of the SST process on the test panel, one proposes in this section to compare its results with numerical ones. As evoked in Sec. 3.2.1, these latter were achieved using the analytical normal modes and the modal expansion method as described in Appendix A. First, let us compare the velocity sensitivity functions experimentally estimated with the SST process with the ones estimated numerically at 4.2 results and discussion 73 point x = (0.06, 0.3, 0) m. The results are plotted in Fig. 4.8 as a function of the frequency and the wavenumber, k x for k y = 0. A good agreement between both results can be observed, even for wavenumbers above the acoustic wavenumbers (which are symbolized by the continuous white line in Fig. 4.8). This highlights that the SST approach is well adapted for reproducing subsonic plane waves, which is a limitation of past-developed reproduction techniques [START_REF] Berry | A wave field synthesis approach to reproduction of spatially correlated sound fields[END_REF][START_REF] Bravo | The experimental synthesis of random pressure fields: Methodology[END_REF][START_REF] Robin | Experimental vibroacoustic testing of plane panels using synthesized random pressure fields[END_REF]. Below 300 Hz, one can however observe that the SST results are noisy. For these frequencies, the monopole source was not efficient and the measurements of the transfer functions were polluted by the background noise. On the other hand, in the higher part of the frequency range, some discrepancies between the two approaches appear. They can be attributed to the difference between the model that supposed a panel, perfectly simply supported on its four edges and the experiment that approaches these conditions with thin blades. It can be observed that the vibration responses, in both the DAF and TBL excitation cases, determined using SST do not match the numerical ones between approximately 230 Hz and 300 Hz: this is due to the fact that the source is not efficient in that frequency range as stated before. The vertical offsets that can be observed at some frequencies are due to the fact that for the numerical case, the modal damping of the panel is taken constant over the entire frequency range (i.e. η = 0.005) whereas it is certainly dependent on the panel modes in real conditions (see the values measured by Marchetto for the first modes in Table II, Ref. [START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF]). 
It can also be noticed that the vibration response in the DAF case is higher than in the case of the TBL excitation: this is due to the fact that the modes that contribute to the panel response under a diffuse field excitation are more efficiently excited than in the case of a TBL excitation [START_REF] Maury | A Wavenumber Approach to Modelling the Response of a Randomly Excited Panel, Part I: General Theory[END_REF]. Now, let us focus on the panel radiation. The radiated power was determined using the two microphones method for the SST approach. For both cases (DAF and TBL), the experimental results do not match the theoretical results under approximately 700 Hz: this is probably due to the fact that the monopole source was not very efficient to induce a sufficient radiation amplitude for the measurement of the pressure and particle velocity sensitivity functions. Moreover, the estimation of the particle velocity with the two microphone measurements can amplify the uncertainties. From a practical point of view, one can conclude that an acoustic source more efficient in the low frequency range would be required. In general, this type of source (such as subwoofer loudspeakers) has a greater size than the considered one, that would require to define a coarser mesh for the source positions. With the present source, one can however observe a good agreement between the SST results and the numerical ones for both excitations. The two curves match very well above 700 Hz for the TBL excitation. That shows that the subsonic plane waves are well synthesized by the SST (as it has been already observed in Fig. 4.8). In conclusion of this section, there is globally a good agreement between the numerical results and those obtained from the proposed method in terms of the panel vibrations and the radiated sound power. For this latter quantity, the strength of the acoustic source did not permit however a satisfactory reproduction below 700 Hz. Validation with direct measurements in standard test facilities In this section, the experimental results obtained with the proposed SST method are compared to results obtained in standard test facilities. The results obtained in the standard test facilities have already been published [START_REF] Marchetto | Vibroacoustic response of panels under diffuse acoustic field excitation from sensitivity functions and reciprocity principles[END_REF][START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF] and were done at the University of Sherbrooke by Marchetto et al. They concern the vibrations of a similar panel. In Ref. [START_REF] Marchetto | Vibroacoustic response of panels under diffuse acoustic field excitation from sensitivity functions and reciprocity principles[END_REF], the panel is excited by a DAF experimentally generated within a reverberant room whereas as in Ref. [START_REF] Marchetto | Experimental prediction of the vibration response of panels under a turbulent boundary layer excitation from sensitivity functions[END_REF], it is excited by a turbulent flow generated in an anechoic wind tunnel at a flow speed of 20 m s -1 . More details on these measurements can be found in these references. There is globally a good agreement between the results obtained using SST and those measured in the test facilities. 
One can notice a shift of the responses along the frequency axis which is due to the fact that the panel used in the test facilities is not exactly the same as the one used in our experiment although both panels are made out of aluminum and have the same dimensions. This shift can also be explained by some minor differences in the boundary conditions. These results show that the proposed SST is able to reproduce the excitation of a standard test facility. summary This chapter presented an experimental implementation of the SST approach. This process can be seen as alternative or complementary to standard test facilities such as reverberant rooms or wind tunnels. In the present study, our attention was focused on the validation of the process with comparisons against numerical and experimental results obtained with test facilities such as the reverberant room and the anechoic wind tunnel. A parametric study based on numerical simulations aiming at defining the ideal design of the array of virtual monopoles was done. This study allowed to define an optimal interval I opt , for the distance between the panel and the source array, in which the synthesized pressure field is in good agreement with the target pressure field, allowing a good reproduction of the vibroacoustic response of the panel. The proposed method was applied on a simply supported aluminum panel which was subject to either DAF or TBL excitation. Both the velocity response at a given point and sst on panels: experiments the transmission loss of the panel were determined. A 3D Cartesian robot was used to move the acoustic source whereas a 2D Cartesian robot was used to move a linear array of microphones. This system was controlled by a MATLAB script that allows us to automatize the measurement process of the transfer functions between the source (at different positions) and the quantities of interest. To evaluate the radiated power for the computation of the transmission loss, the two microphone method was used to estimate the normal particle velocity. Apart from an overestimation of the panel responses between 230 and 300 Hz due to the low efficiency of the monopole source and the noisy transmission loss curve under approximately 600 Hz stemming from the two microphone method, there is a fairly good agreement between the three types of results (numerical, SST and direct measurements). The total measurement time of the transfer functions needed to determine the velocity response as well as the radiated power by the panel is approximately 60 hours. The measurement of the transfer functions needed to determine the radiated power by the panel are the most time consuming (approximately 46 hours) as it required to measure the radiated pressure and to estimate the particle velocity using a linear array of microphone. These measurement times are however not completely penalizing as the process is fully automatized. The use of acoustic velocity probes as well as a larger microphone array could greatly reduce these measurement times. Moreover, compared to standard facilities, the process supplies the sensitivity functions that can give some insights on how the structure filters out the random excitation. Table 4.1 summarizes the measurement times for each quantity of interest. Measured quantities Times Transfer matrix (G) Part III This chapter is dedicated to the theoretical and numerical study of SST applied on Curved Rectangular Panels (CRPs). 
In a first step, an analytical form of the transfer functions between a monopole source and an observation point on the structure of interest is established using Green's representation in the two-dimensional and three-dimensional cases. The established closed-form solutions for the 2D and 3D cases are then validated using numerical results obtained with OpenBEM1 . Afterwards, parametric studies on the design parameters of the SST process for a CRP are proposed and discussed before closing the chapter with a brief summary. E X T E N S I O N T O C U R V E D R EC TA N G U L A R PA N E L S 5 S S T O N C U R V E D R EC TA N G U L A R PA N E L S : T H EO R Y A N D N U M E R I C A L S T U D I E S 5 scattering by a rigid and curved rectangular panel: transfer functions A thorough search of the relevant literature did not yield any analytical closed-form transfer functions between a monopole source located at a given point x 0 and an observation point y located on the structure σ as depicted in Fig. 5.1. Thus, this section is dedicated to the resolution of the boundary value problem formulated in Eq. (5.1). We seek a function p (x), solution of the following Neumann problem              ∆p (x) + k 2 p (x) = -δ (x -x 0 ) , x ∈ Ω ∂ n(x) p (x) = 0, x ∈ σ (5.1) where ∂ n(x) • designates the partial normal derivative at point x in the direction n and n corresponds to an exterior unit vector on the boundary constituted by σ ∪ β. The domain of propagation Ω corresponds to the upper half-space delimited by the baffle β and the structure σ . x y n C R α O x (r, θ) x β r β , θ β (σ ) (β) ( β) x 0 (r 0 , θ 0 ) In a first step, the two-dimensional case is studied and then follows the general threedimensional case. x 0β r 0β Two-dimensional case In what follows, use will be made of the polar coordinates with the origin located at point O as shown in Fig. 5.1: the coordinates of a point x inside the propagation domain are denoted by (r, θ) while those of a point y on σ ∪ β are denoted by (r σ , θ σ ) in the polar coordinates. x β designates the image of point x with respect to the baffle plane (β) and x designates a point in Ω. Note that since x is closer to σ than x 0 in our application, it results that r < r 0 . From Eq. (3.51) in [START_REF] Filippi | Acoustics: Basic Physics, Theory, and Methods[END_REF], Green's representation of the exterior pressure field p at x due to the source at point x 0 and diffracted by the surface σ ∪ β reads p (x) = p 0 (x) + σ ∪β Tr ∂ n(y) p (y) G (x, y) -Tr [p (y)] ∂ n(y) G (x, y) dS (y) (5.2) where A priori, the Green's function G is chosen to be the following free-field Green's function G (x, y) = i 4 H (1) 0 [k 0 d (x, y)] , x ∈ Ω and y ∈ σ ∪ β where H (1) 0 is the Hankel function of the first kind and order 0. In the remainder of this chapter, the superscript "(1)" in the considered function will be omitted as only Hankel or Bessel functions of the first kind are used. sst on curved rectangular panels: theory and numerical studies It is important to note that ∂ n(y) G (x, y) 0 on β. One has a "certain" freedom of choice for the Green's function in the Green's representation in Eq. (5.3), see Eq. ( 5) of Ref. [84] for confirmation. Let us choose the half-space Green's function G β whose normal derivative vanishes on β(∪ β). It is given by G β (x, y) = i 4 H (1) 0 [k 0 d (x, y)] + i 4 H (1) 0 k 0 d x, y β (5.4) where x β is mirror-image point of x with respect to the plane baffle β(∪ β). Thus, one has ∂ n(y) G β (x, y) = 0 on β ∪ β . 
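As a side remark, this half-space Green's function is straightforward to evaluate numerically. The short MATLAB sketch below is only illustrative: the baffle is assumed to lie along the horizontal axis of the frame of Fig. 5.1, and besselh(0,1,·) denotes the Hankel function H_0 of the first kind.
% Illustrative sketch: numerical evaluation of G_beta, Eq. (5.4)
k0 = 2*pi*1000/343;                  % acoustic wavenumber at 1 kHz
x  = [0.30 0.40];                    % field point in the propagation domain
y  = [0.10 0.25];                    % point on sigma
yb = [y(1) -y(2)];                   % mirror image of y with respect to the baffle
dist = @(a,b) hypot(a(1)-b(1), a(2)-b(2));
Gb = 1i/4*(besselh(0,1,k0*dist(x,y)) + besselh(0,1,k0*dist(x,yb)));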
The integral representation in Eq. ( 5.3) thus reduces to p (x) = p 0β (x) - σ Tr [p (y)] ∂ n(y) G β (x, y) dS (y) (5.5) where the integral term, which represents the sound field diffracted by σ ∪ β is now defined on a finite domain σ compared to Eq. ( 5.3) where the the integration domain was composed of the structure σ and the infinite baffle β. Note that the free-field source term p 0 (x) has been replaced by p 0β (x) = p 0 (x, x 0 ) + p 0 x, x 0β such that ∂ n(x) p 0β (x) = 0 when x ∈ β(∪ β). This modification is reflected in the term φ I of Eq. ( 5) in [START_REF] Seybert | Modified Helmholtz integral equation for bodies sitting on an infinite plane[END_REF], but dealing with an incident plane wave instead of a point source as in our case. We now proceed with a cylindrical harmonic expansion of p 0β (x) and ∂ n(y) G β (x, y). • Source term p 0β (x) For r < r 0 , we have (see Appendix D of Ref. [START_REF] Chew | Waves and Fields in Inhomogeneous Media[END_REF]) i 4 H (1) 0 [k 0 d (x, x 0 )] = i 4 +∞ n=-∞ H n (k 0 r 0 ) J n (k 0 r) e in(θ-θ 0 ) where H n and J n are the Hankel function of the first kind and order n and the Bessel function of the first kind and order n, respectively. Note that H n J n = H -n J -n . 5.1 scattering by a rigid and curved rectangular panel: transfer functions 85 We also have i 4 H (1) 0 k 0 d x, x 0β = i 4 +∞ n=-∞ H n k 0 r 0β J n (k 0 r) e in(θ-θ 0β ) for r < r 0β Note that r 0β = r 0 and θ 0β = -θ 0 . It follows p 0β (x) = i 4 +∞ n=-∞ H n (k 0 r 0 ) e inθ J n (k 0 r) e -inθ 0 + J n (k 0 r) e inθ 0 (5.6) Using Euler's trigonometric formula yields p 0β (x) = i 2 +∞ n=-∞ H n (k 0 r 0 ) J n (k 0 r) e inθ cos (nθ 0 ) (5.7) • Derivative of the half-space Green's function ∂ n(y) G β (x, y) G β reads G β (x, y) = i 4 H (1) 0 [k 0 d (x, y)] + i 4 H (1) 0 k 0 d x, y β where y (r σ , θ σ ) is now on σ exclusively. Note that r > r σ β = r σ . G β can thus be written as G β (x, y) = i 4 +∞ n=-∞ H n (k 0 r) J n (k 0 r σ ) e in(θ-θ σ ) + H n (k 0 r) J n k 0 r σ β e in(θ σ β -θ) (5.8) and becomes G β (x, y) = i 2 +∞ n=-∞ H n (k 0 r) J n (k 0 r σ ) e -inθ cos (nθ σ ) (5.9) The partial normal derivative ∂ n(y) • corresponds to the radial derivative in the polar coordinates ∂ r σ • , that is ∂ n(y) G β (x, y) = ∂ r σ G β (x, y) (5.10) and ultimately, one obtains ∂ n(y) G β (x, y) = ik 0 2 +∞ n=-∞ H n (k 0 r) J ′ n (k 0 r σ ) e -inθ cos (nθ σ ) (5.11) where the prime denotes derivative with respect to the argument. Inserting Eqs. (5.7) and (5.11) into Eq. ( 5.5), one gets p (x) = i 2 +∞ n=-∞ H n (k 0 r 0 ) J n (k 0 r) e inθ cos (nθ 0 ) - ik 0 2 +∞ n=-∞ H n (k 0 r) e inθ σ Tr [p (y)] J ′ n (k 0 r σ ) cos (nθ σ ) r σ dθ σ Let A n = σ Tr [p (y)] J ′ n (k 0 r σ ) cos (nθ σ ) r σ dθ σ be the unknowns, one can rewrite the previous equation as follows p (x) = i 2 +∞ n=-∞ H n (k 0 r 0 ) J n (k 0 r) e inθ cos (nθ 0 ) - ik 0 2 +∞ n=-∞ H n (k 0 r) e inθ A n (5.12) Let us determine ∂ n(x) p (x) = ∂ r p (x) . It reads ∂ r p (x) = ik 0 2 +∞ n=-∞ H n (k 0 r 0 ) J ′ n (k 0 r) e inθ cos (nθ 0 ) - ik 2 0 2 +∞ n=-∞ H ′ n (k 0 r) e inθ A n Applying the Neumann boundary condition ∂ n(x) p (x) x=y = 0, one obtains ik 0 2 +∞ n=-∞ H n (k 0 r 0 ) J ′ n (k 0 r σ ) e inθ σ cos (nθ 0 ) - ik 2 0 2 +∞ n=-∞ H ′ n (k 0 r σ ) e inθ σ A n = 0 (5.13) A necessary condition for Eq. ( 5.13) to be satisfied requires A n = J ′ n (k 0 r σ ) k 0 H ′ n (k 0 r σ ) H n (k 0 r 0 ) cos (nθ 0 ) (5.14) 5.1 scattering by a rigid and curved rectangular panel: transfer functions 87 Substituting Eq. (5.14) into Eq. 
(5.12) one gets the pressure at any point x ∈ Ω p (x) = i 2 +∞ n=-∞ H n (k 0 r 0 ) J n (k 0 r) e inθ cos (nθ 0 ) - ik 0 2 +∞ n=-∞ H n (k 0 r) e inθ J ′ n (k 0 r σ ) k 0 H ′ n (k 0 r σ ) H n (k 0 r 0 ) cos (nθ 0 ) Hence, one gets the following expression p (x) = i 2 +∞ n=-∞ e inθ cos (nθ 0 ) H n (k 0 r 0 ) J n (k 0 r) - J ′ n (k 0 r σ ) H ′ n (k 0 r σ ) H n (k 0 r) (5.15) Eq. ( 5.15) also represents the transfer function between a field point x and a unit point source located at x 0 . However, we are interested in the pressure at the surface of the structure σ . Using one of the Wronskian relations J n (ξ) H ′ n (ξ) -H n (ξ) J ′ n (ξ) = 2i πξ (see Ref. [85], p. 21), one can establish the following expansion for the pressure on the surface of the structure itself at r = r σ p (x σ ) = - 1 π +∞ n=-∞ e inθ σ cos (nθ 0 ) 1 k 0 r σ H n (k 0 r 0 ) H ′ n (k 0 r σ ) (5.16) Three-dimensional case In this case, using Green's representation along with the cylindrical harmonics expansion would be rather complicated even though an expansion in cylindrical harmonics of the 3D Green's function is available [START_REF] Wang | Expansion of the Green's Funcion in a Cylindrical Coordinate System[END_REF]. In order to solve the 3D Helmholtz equation (see Eq. (5.1)), let us consider an infinite rigid half cylinder placed on a rigid infinite baffle plane and insonified by a monopole source located at x 0 as shown in Fig. 5.2. The total sound field of the acoustic excitation sst on curved rectangular panels: theory and numerical studies x y O x (r, θ, z) (σ ) (β) (σ β ) x 0 (r 0 , θ 0 , z 0 ) due to a point source on a long half cylinder above/on a hard ground can be expressed as [START_REF] Lui | The scattering of sound by a long cylinder above an impedance boundary[END_REF][START_REF] Skelton | Theoretical Acoustics of Underwater Structures[END_REF] p (x) = p 0 (x) + p 0β (x) + p σ (x) + p σ β (x) (5.17) where p 0 (x) and p 0β (x) represent respectively the pressure field due to the monopole source and its image with respect to the ground while p σ (x) and p σ β (x) correspond to the scattered pressure field by the infinite half cylinder and its image with respect to the ground, respectively. x 0β r 0β , θ 0β , z 0β Using the standard Fourier transform about the z variable, the free field pressure generated by the monopole source p 0 (x) = e ik 0 |x-x 0 | 4π |x -x 0 | (5.18) 5.1 scattering by a rigid and curved rectangular panel: transfer functions 89 can be expanded in terms of its Fourier integral representation as follows p 0 (x) = i 8π +∞ n=-∞ e in(θ-θ 0 ) +∞ -∞ J n (k r r) H n (k r r 0 ) e ik z (z-z 0 ) dk z (5.19) where k 2 r = k 2 0 -k 2 z corresponds to the axial wavenumber. The same is done for the image of the source with respect to the ground/baffle. Hence p 0β (x) = e ik 0| x-x 0β | 4π x -x 0β (5.20) and then, be expanded as p 0β (x) = i 8π +∞ n=-∞ e in(θ-θ 0β ) +∞ -∞ J n (k r r) H n k r r 0β e ik z (z-z 0β ) dk z (5.21) Note that r 0β = r 0 , θ 0β = -θ 0 and z 0β = z 0 . 
Hence, one gets p 0β (x) = i 8π +∞ n=-∞ e in(θ+θ 0 ) +∞ -∞ J n (k r r) H n (k r r 0 ) e ik z (z-z 0 ) dk z (5.22) The pressure field scattered by the structure σ and its image with respect to the ground are respectively given in convenient forms as [START_REF] Skelton | Theoretical Acoustics of Underwater Structures[END_REF] p σ (x) = +∞ n=-∞ e inθ +∞ -∞ A n H n (k r r) e ik z z dk z (5.23) and p σ β (x) = +∞ n=-∞ e in θ +∞ -∞ B n H n (k r r) e ik z z dk z (5.24) where r and θ represent the coordinates of the field point with respect to the centerline of the image structure σ β , which is the same as that of the structure σ itself. sst on curved rectangular panels: theory and numerical studies Noticing that the centerline of σ β is the same as that of σ , it comes r = r and θ = θ. Hence the total pressure scattered by the structure p s (x) = p σ (x) + p σ β (x) is p s (x) = +∞ n=-∞ e inθ +∞ -∞ (A n + B n ) H n (k r r) e ik z z dk z (5.25) which will be simply written as p s (x) = +∞ n=-∞ e inθ +∞ -∞ C n H n (k r r) e ik z z dk z (5.26) with C n = A n + B n . In the case where the centerline of the structure and its image are not superimposed as in the current configuration (see Fig. 5.2), one would have to use the Graf's translational addition theorem (see Chapter 9 of Ref. [START_REF] Abramowitz | Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables[END_REF]) in order to transpose the cylindrical wave functions of the image structure to the cylindrical coordinates of the real structure and vice versa. The total pressure field in Eq. ( 5.17) can now be expressed as p (x) = i 4π +∞ n=-∞ e inθ +∞ -∞ cos (nθ 0 ) J n (k r r) H n (k r r 0 ) e ik z (z-z 0 ) -4πiC n H n (k r r) e ik z z dk z (5.27) Applying the Neumann boundary condition ∂p (x) ∂r r=r σ = 0, one then obtain C n = 1 4πi J ′ n (k r r σ ) H ′ n (k r r σ ) H n (k r r 0 ) e -ik z z 0 cos (nθ 0 ) (5.28) Hence the sound pressure field of the acoustic excitation due to a monopole source on a long cylinder sector placed on rigid baffle can be written as p (x) = i 4π +∞ n=-∞ e inθ cos (nθ 0 ) +∞ -∞ H n (k r r 0 ) J n (k r r) - J ′ n (k r r σ ) H ′ n (k r r σ ) H n (k r r) e ik z (z-z 0 ) dk z (x σ ) = - 1 2π 2 +∞ n=-∞ e inθ σ cos (nθ 0 ) +∞ -∞ 1 k r r σ H n (k r r 0 ) H ′ n (k r r σ ) e ik z (z σ -z 0 ) dk z (5.30) which is naturally very similar to the expression obtained for the 2D case in Eq. (5.16). Validation of the solutions Now that Eq. ( 5.1) has been solved, we need to validate the solutions of the twodimensional and three-dimensional cases using numerical and/or experimental data. However, as we also need to validate the SST process for CRPs, the verification of these solutions will be done numerically using the Boundary Element Method (BEM) with OpenBEM [START_REF] Henríquez | OpenBEM -An open source Boundary Element Method software in Acoustics[END_REF] which is an open access code implemented in MATLAB. OpenBEM consists of three independent formulations: axisymmetrical (AxiBEM), bi-dimensional (2DBEM), and three-dimensional (3DBEM). They share a common structure and many of the features, but differ in the definition of the geometry and its implementation. Comparisons of both amplitude and phase of the surface pressure will be carried out between OpenBEM results and analytical ones. All these steps towards the validation of the solutions are included in Appendix D. For both cases (two-dimensional and three-dimensional), a first study on the number of harmonics that must be considered in Eq. 
(5.16) and Eq. (5.30) for an accurate estimation of the transfer functions is proposed. For the three-dimensional case, the infinite integral needed to be estimated in a finite domain: all the steps towards this estimation can be consulted in Sec. D.2.1. Then the proper validation of the transfer functions is done by comparing the surface pressure obtained using OpenBEM to those yielded by our solutions. sst on curved rectangular panels: theory and numerical studies Naturally, 2DBEM is used for the validation of the two-dimensional transfer functions and 3DBEM for that of the three-dimensional transfer functions. Comparing 2DBEM results to those provided by Eq. ( 5.16) show a very good agreement between both methods, as shown in Fig. 5.3. In the case of the three-dimensional transfer functions, the results are also very accurate as it can been seen in Fig. 5.4. Although there are some minor discrepancies due to the limits of numerical methods (OpenBEM in this case), we can effectively validate the solution established in Eq. (5.30). In the validation figures in Appendix D, one can notice that in the two-dimensional case, the match between the analytical solution and the numerical results is almost perfect. However, for the three-dimensional case, there are some minor discrepancies in the nu- merical results inherent to the BEM implementation. Indeed, it is important to note that although the boundary element method is nowadays a well established technique to solve acoustic problems, one of its most disadvantageous characteristics is the non-uniqueness of the solution. In fact, the exterior response is polluted by fictitious eigenfrequencies at the corresponding internal resonances. This purely mathematical issue is due to the ill-posed formulation of the method and strongly degrades the prediction accuracy. Several methodologies have been proposed to overcome this shortcoming. For Direct BEM, one of the most popular approaches is the Combined Helmholtz Integral Equation Formulation (CHIEF) method. It consists in applying internal over-determination points to improve the solution accuracy. Choosing these points is not straightforward since certain conditions have to be satisfied. In the past, methodologies have been developed to determine an efficient criterion. Nevertheless, practical applications are only limited to academic cases. Choosing the CHIEF points in an efficient way allows obtaining very accurate solutions with a limited increase of computational efforts. In our case, 5 CHIEF points were defined inside the mesh of the half cylinder shown in sst on curved rectangular panels: theory and numerical studies In the following table, a comparison of computation times between numerical (OpenBEM) and analytical solutions is proposed. The indicated computation times correspond to the time needed for each method to yield the transfer functions between one source position and a semi-circle of observation points (same number of points for each case) at one frequency. The computer processing times listed in Table 5.1 are all determined on the same computer with the following characteristics • Processor: Intel Core i7-8650U CPU at 1.90 GHz sst on curved rectangular panels: theory and numerical studies between two adjacent source positions is defined as a fraction of the lowest wavenumber to be reconstructed at the highest frequency of interest δ s = λ min n s (5.32) where n s is a positive integer representing the number of monopole positions per minimum wavelength. Eq. (??) 
is simply a generalization of Eq. (3.5). In these studies, the geometry of the virtual array of monopoles is the same as the structure of interest as shown in Fig. 5.5. The radius of this arc of sources is always greater than the radius of the structure. In comparison to the interplanar distance defined for rectangular structures, we define the radial height h as the difference between the radius of the arc of monopoles and that of the structure. In the following, the radial height is h = 5 cm. showed in these figures, the results for other WPPWs can be deduced by consulting the reproduction error maps in Fig. 5.6 and choosing one frequency and one wavenumber. In conclusion, the WPPWs are accurately synthesized when n s ≥ 3 which is consistent with the criteria that was defined for rectangular structures. Thus, we define the same criteria as in Sec. 3.3.1, that is n s = 4 for the remainder of this chapter. Let us now discuss the influence of the radial height on the conformal geometry setup of SST. sst on curved rectangular panels: theory and numerical studies Influence of the radial height Now that the criteria of four monopoles per smallest wavelength is verified, we will study the influence of the radial height on the reproduction process by modifying its value and discussing the results in terms of the indicators established in Chapter 2. Standard SST without added noise Fig. 5.9 shows the results of the parametric studies in determining the optimal radial height for an accurate reproduction process: one can respectively observe the reproduction error in the frequency-wavenumber domain, the condition number of the transfer matrix G, the real part and the imaginary part of the target and reconstructed pressure fields. Similar results as in Fig. 5.9 are obtained for the other two radial heights of interest: h = 10 cm and h = 20 cm. Let us now take a look at the condition number of the transfer matrix G for each radial height in Fig. 5.10. One can notice that the condition number does not vary that much as a function of the frequency: the same result was observed in Fig. 3.2a in Chapter 3 fro FRPs. However, comparing these results for each of the radial heights, we can notice a rapidly increasing condition number when the arc of monopoles is displaced farther from the structure: from ∼ 10 3 for h = 5 cm to ∼ 10 9 for h = 20 cm. It is important to note that this SST process is "numerical" and subject to only numerical errors. In practice the experimental process can be subject to multiple sources of errors and thus the less ill-posed the problem, the better. This means that the condition number of the transfer matrix must not be very high in practical applications. Otherwise, the errors induced by the hypothetical noise around the setup will be amplified due to the sensitivity of the system. In order to illustrate these observations, let us intentionally add some noise during the numerical reconstruction process. Adding noise to the standard SST process The reconstruction of the pressure field is done using the following equation p = Ĝq (5.33) where p is the reconstructed pressure field and q represents the monopole amplitude at each position. Eq. (??) can be written as follows p r = Ĝrs q s (5.34) where Ĝrs = G rs + ϵ rs , G rs the usual reconstruction transfer matrix computed using Eq. 500 (5.16) and ϵ rs ∈ [0, 0.01] is a normally distributed random noise. 
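For illustration, the MATLAB sketch below assembles the transfer matrix from Eq. (5.16) for a conformal arc of monopoles, synthesizes one target WPPW on the half cylinder, and then applies an additive perturbation of G in the spirit of Eq. (5.34). The harmonic truncation, the grids, the target wavenumber and the statistics of the added noise are simplifying assumptions made for this example; the pseudo-inverse is used for simplicity to compute the source amplitudes, and the actual resolution of Eq. (2.24) may differ.
% Illustrative sketch: 2D SST on the half cylinder with a perturbed transfer matrix
c0 = 343; f = 1000; k0 = 2*pi*f/c0;
R  = 0.5; hs = 0.05; r0 = R + hs;              % structure radius, radial height
thr = linspace(0, pi, 121).';                  % surface points theta_sigma
ths = linspace(0, pi, 60).';                   % monopole positions theta_0
N   = 60; n = 0:N;                             % harmonic truncation (convergence to be checked)
Hall = besselh(0:N+1, 1, k0*R);
HpR  = n./(k0*R).*Hall(1:N+1) - Hall(2:N+2);   % H'_n(k0*R) via the standard recurrence
Hn0  = besselh(n, 1, k0*r0);                   % H_n(k0*r0)
cn   = [1, 2*ones(1,N)];                       % terms for n and -n folded together
G = -(1/(pi*k0*R)) * cos(thr*n) * diag(cn.*Hn0./HpR) * cos(ths*n).';   % Eq. (5.16)
kt = 20;                                       % circumferential wavenumber [rad/m]
pt = exp(-1i*kt*R*thr);                        % target WPPW on the surface
q  = pinv(G)*pt;                               % source amplitudes from the nominal G
Ghat = G + 1e-2*randn(size(G));                % perturbed matrix, cf. Eq. (5.34)
ep = norm(Ghat*q - pt)^2/norm(pt)^2;           % reproduction error, Eq. (3.6)
fprintf('cond(G) = %.2e, error with added noise = %.1f dB\n', cond(G), 10*log10(ep));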
In the following, we discuss the influence of the background noise on the reproduction process of the target pressure field for the three radial heights that seemingly yielded the same results and choose the optimal radial height for the experimental SST on this type of geometry. In Fig. 5.11, we propose a comparison of the reproduction error in the frequencywavenumber domain and under the conditions presented above. One can now easily see the difference between the three radial heights: the further the arc of monopoles is from the structure, the more sensitive to background noise the system becomes. Fig. 5 .2 sst on curved rectangular panels: parametric studies 103 5.9c and Fig. 5.11a represent plots of the reproduction error during the SST process: one without added noise and the other with added noise. The comparison shows that the added noise induces a less accurate synthesis for frequencies less than 1500 Hz. In the wavenumber domain, it seems to affect high wavenumbers: this is understandable given that the higher the wavenumber, the more constrained the system is. However, it can be specifically designed to reproduce a target wavenumber. sst on curved rectangular panels: theory and numerical studies that the reconstructed pressure fields will not be successful. This is confirmed by Fig. 5 .12 and Fig. 5.13 which respectively compare the real part and the imaginary part of the reconstructed pressure field to the target values of both quantities. We notice that for a radial height h = 5 cm, the added noise barely affects the SST process. However, for h = 10 cm and specially for h = 20 cm, the added noise strongly disturbs the process. It is even more disturbed on the sides of the structure, i. e. for θ ∈ 0, π Considering all of the reasons discussed above, it is safe to state that the closer the arc of monopoles is to the structure, the less the condition number of the transfer matrix inducing a more accurate synthesis of the target pressure fields. Prior to presenting the three-dimensional case, let us consider a different structuremonopole array configuration than the conformal geometric configuration presented in Fig. 5.5. sst on curved rectangular panels: theory and numerical studies Non-conformal geometry Let us consider Fig. 5.14. In this new configuration, the array of monopole is organized as a line array in the two-dimensional case and in the three-dimensional case, it would be a rectangular array of monopoles. In the following, we will study the impact of the non-conformal structure-monopole array geometry in the results yielded by the SST process for the two-dimensional case: note that the length of the monopole line must be at least equal to the chord length of the structure. Note that in the new configuration, the radial height h is replaced by the height h ′ which corresponds to the distance between the ground where the structure is positioned and the line of monopoles. Let us take a look at the reconstructed WPPW for each of the arbitrarily chosen heights h ′ . For h ′ = 55 cm, the reconstructed pressure matches almost perfectly the target pressure field, see Fig. 5.15c and Fig. 5.15d. When h ′ = 60 cm, some discrepancies between the reconstructed WPPW and the target one start to appear on the sides of the structure, that is for θ ∈ 0, π 4 ∪ 3π 4 , π as can be noticed in Fig. 5.16c and Fig. 5.16d. This observation is even more visible when h ′ = 70 cm in Fig. 5.17c and Fig. 5.17d. 
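For reference, the two source layouts compared in this section can be generated as follows; this is a minimal sketch in which the cutoff wavenumber and the heights are assumed values used only for illustration.
% Illustrative sketch: conformal and non-conformal monopole layouts
kmax = 50; lmin = 2*pi/kmax; ns = 4; ds = lmin/ns;   % assumed spacing rule
R = 0.5;
% Conformal layout: arc of radius R + h
h  = 0.05; ra = R + h;
tha = (0:ds/ra:pi).';                          % angular step matching ds along the arc
arcXY = [ra*cos(tha), ra*sin(tha)];
% Non-conformal layout: line at height h' above the baffle, at least as long as the chord
hp = 0.55; L = 2*R;
xl = (-L/2:ds:L/2).';
lineXY = [xl, hp*ones(size(xl))];
fprintf('conformal arc: %d sources, line array: %d sources\n', size(arcXY,1), size(lineXY,1));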
sst on curved rectangular panels: theory and numerical studies In summary, the non conformal geometry allows a good reconstruction process in a reduced frequency-wavenumber domain when compared to the conformal geometry. We have also noticed that the condition number of the transfer matrix G in this case is very high and that better results are obtained with the conformal geometry configuration even though there is a good reproduction of some WPPWs in the current configuration. As one would expect, the conformal configuration is better suited for an accurate reconstruction of the target pressure field as the monopole source covers more area on the structure even though the line array of monopoles is denser in the non conformal geometry. In fact in the case of the conformal geometry, the monopole sources are placed along an arc of length πR and in the case of the non conformal geometry, the monopoles are placed along a line of length 2R. Fig. 5.18 shows a comparison between the reproduction error obtained using the conformal geometry and that obtained using a non-conformal geometry for "equivalent" setups, i. e. , a radial height h = 5 cm and a height h ′ = 55 cm, respectively. In conclusion, a conformal geometry configuration between the structure and the array of monopoles is better suited for an accurate SST process for these types of shells and will be considered for the remainder of this chapter. 5.2 sst on curved rectangular panels: parametric studies 111 Three-dimensional case Let us consider Fig. 5.5 with the z-axis as shown on the figure. In the three-dimensional case and considering the cylindrical coordinates, the WPPWs are of the following form p (x, k) = e -i(k θ Rθ+k z z) (5.35) where k θ is the circumferential wavenumber, k z is the longitudinal wavenumber and R is the curvature radius of the structure. The structure of interest is a rigid half-cylinder with the following dimensions Parameter Symbol Value Radius R 0.5 m Length L 1 m Influence of the density of the array of monopoles In this section, we want to study the number of sources n s needed per minimum wavelength in order to achieve an accurate reproduction process. Let us consider Eq. (5.32) where n s represents the number of monopoles per minimum wavelength. The goal is to study which values of n s allow an accurate reproduction of the target pressure field as was done in Sec. 5.2.1.1 for the two-dimensional case. As in previous parametric studies, the accuracy of the reproduction process will be assessed using the reproduction error defined in Eq. (3.6). The accuracy of the reproduction process is established by comparing the reproduction error to an arbitrarily set threshold of -10 dB corresponding to a relative error of 10% between the target and reconstructed pressure fields. Note that this reproduction error is actually a spatial average over the whole structure of the MSE between the target and reconstructed pressure fields at each point on the structure. Once the reproduction error is determined, one can verify the accuracy of the reproduction process by comparing the target wall-pressure field to the synthesized one. The condition number of the transfer matrix G allows one to choose which setup is more adequate for an experimental implementation of SST. We start by looking at the reproduction error when n s is given values in the set {1, 2, 3, 4}. 
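Before discussing the results, the sketch below illustrates how a target WPPW of Eq. (5.35) is sampled on the half-cylinder surface and how the spatially averaged reproduction error of Eq. (3.6) is evaluated; the "reconstructed" field is a mere placeholder here, since the actual reconstruction relies on the transfer functions of Eq. (5.30).
% Illustrative sketch: 3D target WPPW and averaged reproduction error
R = 0.5; L = 1;                                % half cylinder (cf. Table 5.2)
kth = 50; kz = 50;                             % target wavevector [rad/m]
[TH, Z] = meshgrid(linspace(0, pi, 80), linspace(0, L, 50));
pt = exp(-1i*(kth*R*TH + kz*Z));               % Eq. (5.35) on the surface grid
phat = pt.*(1 + 0.05*randn(size(pt)));         % placeholder reconstructed field
ep = mean(abs(phat(:) - pt(:)).^2)/mean(abs(pt(:)).^2);
fprintf('reproduction error = %.1f dB\n', 10*log10(ep));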
In conclusion, these parametric studies confirm that we need at least four monopoles per minimum wavelength in order to have an accurate SST process as was the case for the two-dimensional case and Flat Rectangular Panels (FRPs). These comparisons allow us to state that one needs at least four monopoles per minimum wavelength for an accurate reconstruction process. sst on curved rectangular panels: theory and numerical studies Reproduction error Studying the effect of the radial height Let us now study the influence of the radial height h depicted in Fig. 5.5 by considering the following two radial heights: h = 5 cm and h = 10 cm. In these parametric studies, we will compare the reproduction errors induced by both radial heights as well as the reconstructed wall-pressure fields on the surface of the structure. The target wall-pressure field is the one depicted in Fig. 5.22 and corresponds to the WPPW defined by the wavevector (50, 50) rad m -1 . We will also discuss the condition numbers of the transfer matrices obtained using Eq. (5.16) in order to assess which radial height is more suitable for an experimental implementation of the SST process. Observing these figures, one can clearly notice that the reproduction error is always below the threshold of -10 dB which is required in order to state that the synthesis technique is accurate for both radial heights and at all three studied frequencies. Reproduction error Condition number of the transfer matrix The condition numbers of the transfer matrix G are given in table 5.3. One can notice that the condition number decreases when the frequency increases. This tendency has already been observed in the two-dimensional case. However its evolution is slow with respect to frequency. Finally, one can notice that there is not much difference between the two studied radial heights of 5 cm and 10 cm. Nevertheless, the condition number for h = 5 cm is slightly better than that obtained for h = 10 cm for each of the three frequencies of interest which is in accordance with the results we have obtained up to this point when the height of the monopole array is increased. That being said, one must remember that the distance between the monopole array and the structure must be at least equal to the separation between two adjacent monopoles in the array. summary In this chapter, the feasibility of the extension of the Source Scanning Technique (SST) to Curved Rectangular Panels (CRPs) was studied. Analytical solutions to the Helmholtz problem defined by Eq. (5.1) have been established for the two-dimensional as well as the three dimensional case. These solutions were consequently validated using the Boundary Element Method (BEM) implemented in OpenBEM. Once the validation is done, parametric studies have been carried out in order to determine the optimal parameters for an implementation of the SST in the two-dimensional and three-dimensional cases. The parametric studies allowed us to establish the minimum number of monopole sources per smallest wavelength at four monopoles which is the same number that was established four Flat Rectangular Panels (FRPs). Two different structure-array geometry configurations were studied. It was found that the conformal geometry, where the array of monopoles has the same geometry as the structure, is more suitable for the SST process than the non-conformal geometry configuration. Finally, it was found that the closer the monopole array is to the structure, the more accurate the reproduction process. 
In the previous chapter, we introduced transfer functions between source positions and observation points located on a half-cylindrical-like structure. We also applied the SST process on CRPs in order to determine the optimal parameters using the established 2D and 3D transfer functions. However the ultimate goal is to implement experimental SST on CRPs: the current chapter is thus dedicated to this effect. In a first step, we will present the devices used during the experiment as well as the experimental setup. Then, for the two-dimensional, we will discuss the results concerning the experimental validation of the transfer functions as well as experimental SST results compared with analytical ones. 6 S S T O N C U R V E D R EC TA N G U L A R PA N E L S : E X P E R I M E N T S As for the three-dimensional case, only an experimental validation of the 3D transfer functions is carried out. The excitation device used during the SST process is a monopole source similar to the one used for FRPs manufactured by Siemens and shown in Fig. 6.1. This monopole source is composed of a simple flexible hose at the end of which there is a 15 mm diameter nozzle with a reference sensor at the output and the whole is driven with a high impedance loudspeaker. Automation of the process It is important to remember that one key feature of the SST process is that it uses only one monopole source which is spatially displaced to the desired locations. Naturally, the process is automatized as was done for FRPs. A 5 degrees-of-freedom robot arm manufactured by Igus and shown in different angles in Fig. 6.4 is used to displace the monopole source to different positions. Experimental setup In the following, the experimental setup is described and pictures of how each device is mounted will be shown. Let us consider Fig. 6.5 where chamber 1 designates the semi-anechoic room where the structure, the truss on which the monopole source is mounted and the monopole source itself are present. The robot arm is mounted upside down on a truss structure for the needs of the experiment. This robot arm can slide along a motorized rail of length 2 m which is parallel to the z axis on Fig. 6.5. The 2 m motorized rail is aligned with the centerline of the structure (half open cylinder) in order to have an accurate positioning of the robot arm with respect to the structure. The monopole source is mounted on the robot using a 6 m flexible hose attached to an aluminum hollow tube which is fixed to the free end of the robot arm as shown in Fig. In order to avoid using a plethora of microphones for the measurement of the transfer functions, the idea is to apply the same principle of invariance as it was done for FRPs, see Appendix C. Certainly, the structure of interest is more complex than the flat panel but this invariance of the measurements should remain valid in this new case study in view of the symmetry of the system. This principle consists in using only one semi-circular microphone array to perform transfer function measurements over the entire surface of the half cylinder. Instead of having to move the microphone array (which is impossible in our configuration) or having a multitude of microphones to manage over the entire surface of the half cylinder, one only needs to move the monopole source to the position facing the location of the "virtual" array of microphones where the transfer functions are to be measured. However, the ideal location of the semi-circular array of microphones would be the reference position shown in Fig. 
6.8. In this configuration, there would be no problem if the measurements were to be made on only one half of the structure (from the reference position to one edge or the other). However, one needs to measure the transfer functions over the entire surface of the structure. Thus, one will have some issues during the measurement of transfer functions that are located at a distance greater than half the length of the structure. We would leave the measurement area as shown in Fig. 6.8 and considering the 2 m stroke on the robot rail, one would not be able to measure these transfer functions. Hence the choice of the location shown in Fig. 6.8. The 63 microphones used during this experiment are flush-mounted on the rigid half open cylinder as shown in Fig. 6.9. They were assembled per sets of 8 microphones per external conditioner (see Fig. 6.3) except one which had 7 microphones. All the cables in chamber 1 and chamber 2 were plugged and retrieved in a control room where they were plugged in an OROS acquisition system. All the microphones were calibrated by taking into account the hypothetical attenuation due to the lengthy cables used to plug all the devices in the control room. In Fig. 6.10, one can observe the complete experimental setup for the measurement during the SST process on CRPs. 2) and ( 5). ( 6) corresponds to the robot arm control unit. results and discussion This section is dedicated to the experimental validation of the transfer functions studied in Chapter 5 as well as the experimental implementation of the SST process on Curved Rectangular Panels (CRPs). Naturally, we will study both the two-dimensional and the three-dimensional cases. 6.2.1 Two dimensional case: validation of the transfer functions and experimental SST We will start with the experimental validation of the transfer functions defined in Eq. (5.16) and then we will apply the SST process. Given the results obtained in this case, one can completely validate the two-dimensional theoretical transfer functions defined by Eq. (5.16) which had also been compared to numerical results using OpenBEM. Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the two-dimensional case. Experimental validation of the transfer functions Experimental SST The goal is to experimentally implement the SST process in the two-dimensional case using the same experimental setup introduced in Sec. 6.1.2. Three dimensional case: validation of the transfer functions As for the two-dimensional case, here we aim at validating the three-dimensional transfer functions defined in Eq. (5.30) using the experimental setup presented in Sec. 6.1.2. In this case, the structure of interest is the open rigid half-cylinder (4) in Fig. 5.30. Fig. 6.17 and Fig. 6.18 show comparisons between the theoretical transfer functions defined in Eq. (5.30) and the measured transfer functions through plots of the pressure at the surface of the structure at two different frequencies, a given radial height and for one position (r 0 , θ 0 , z 0 ) of the monopole source. Regarding the experimental setup, the arc of microphones is at the same location as for the two-dimensional case. However, the monopole source is no longer on the same plane as the microphone arc array. Indeed, it can now be in a different z-plane as we are now in the three-dimensional case. In Fig. 
6.17, the monopole source is located at r 0 = 55 cm, θ 0 = π 2 , z 0 = 50 cm and the microphone array is always at the same position. Observing this figure, one can notice that despite some discrepancies, there is a good agreement between both types of results. As for Fig. 6.18, the monopole source is at the position r 0 = 60 cm, θ 0 = π 4 , z 0 = 25 cm . One can also notice that the measurements match the theoretical results. In conclusion, one can safely validate the three-dimensional transfer functions. The measurement of the transfer matrix between each monopole source position and observation points on the surface of the half-cylinder for an experimental implementation of the SST process would be time consuming in the scope of this thesis. However, given the θ and z symmetry of the system and the results obtained in the two-dimensional case, we do not see any limitations to the implementation of the three-dimensional SST process over the cylindrical geometry. Moreover, the measured transfer functions agree well with the modeled ones and the simulated three-dimensional SST was assessed in Sec. 5.2.2 from modeled transfer functions. 6.3 summary 141 summary In this chapter, the experimental devices (microphones, monopole source, robot arm) as well as the experimental setup were presented. The measurements were done using the devices along with an OROS acquisition system installed in the control room. The experimental validation of the transfer functions studied in Chapter 5 (see Eq. (5.16) and Eq. (5.30)) was then carried out for the two-dimensional case as well as for the three-dimensional case. The experimental implementation of the SST process was done for the two-dimensional case. The measurements were compared to analytical results with a very good agreement. summary 145 This thesis was focused on the reproduction of the vibroacoustic responses of structures under random pressure fields using the Source Scanning Technique (SST) which uses a single monopole source, displaced to several positions in order to mimic the effect of a full array, as an alternative or complementary method to the usual test facilities such as reverberant rooms and anechoic wind tunnels which are costly, difficult to implement and sometimes lack reproducibility. The random excitations of interest, the Diffuse Acoustic Field (DAF) and the Turbulent Boundary Layer (TBL), were considered stationary in time and homogeneous in space. We were interested in two types of structures: Flat Rectangular Panels (FRPs) and Curved Rectangular Panels (CRPs). In a first step, a validation of SST on FRPs was done. Subsequently, the proposed method was extended to CRPs. Part II was concerned with the application of SST on FRPs. Our attention was focused on the validation of the process with comparisons against numerical and experimental results obtained in test facilities such as reverberant rooms and anechoic wind tunnels. A parametric study based on numerical simulations aiming at defining the ideal design of the array of virtual sources was done: it was established that for an accurate reproduction process, one needs at least 4 monopole positions per smallest wavelength to be synthesized accurately. This study allowed to define an optimal interval for the distance between the panel and the source array, in which the pressure field synthesis is in good agreement with the target pressure field, allowing a good reproduction of the vibroacoustic response of the considered structure. 
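To fix ideas on this spacing rule, a minimal Python sketch is given below; it simply counts the monopole positions implied by the criterion of at least 4 positions per smallest wavelength for an upper frequency of 2000 Hz, using illustrative panel dimensions that are assumptions for the example and not the dimensions of the tested panel.

```python
import math

# Assumed illustrative values (not the measured configuration):
c0 = 343.0            # speed of sound in air [m/s]
f_max = 2000.0        # highest frequency to be synthesized [Hz]
Lx, Ly = 0.48, 0.42   # assumed panel dimensions [m]

# Smallest acoustic wavelength to be reproduced
lambda_min = c0 / f_max

# Criterion established by the parametric study:
# at least 4 monopole positions per smallest wavelength
d_m = lambda_min / 4.0

# Number of positions of the synthetic array covering the panel
n_x = math.ceil(Lx / d_m) + 1
n_y = math.ceil(Ly / d_m) + 1

print(f"lambda_min = {100 * lambda_min:.1f} cm, spacing d_m = {100 * d_m:.1f} cm")
print(f"monopole positions: {n_x} x {n_y} = {n_x * n_y}")
```

The same counting argument, applied to the larger developed surface of the half-cylinder, is what leads to the large number of source positions reported in Table 5.2 for the three-dimensional case.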
A case study involving a simply supported aluminum FRP subjected to either a DAF or TBL excitation was proposed. Both the velocity response at a given point and the radiated power by the panel were estimated. A 3D Cartesian robot was used to move the acoustic source whereas a 2D Cartesian robot was used to move a linear array of microphones. This system was controlled by a MATLAB script that allowed us to automatize the measurement process of the transfer functions between the source (at different positions) and the microphone array to determine the sensitivity functions and subsequently, the vibroacoustic responses. To evaluate the radiated power for the computation of the transmission loss, the two-microphone method sst on curved rectangular panels: experiments was used to estimate the normal particle velocity. There was a fairly good agreement between the three types of results (numerical, SST and direct measurements) as can be visualized in Sec. 4.2. Part III was dedicated to the extension of the SST process to CRPs. To the best of our knowledge, the closed-form transfer functions between a monopole source position and a point on a CRP as presented in the problem description in Sec. 5.1 and Fig. 5. 1 was not yet established. Consequently, a preliminary work on the determination of these transfer functions was conducted to establish their analytical form. Once these transfer functions were numerically validated using OpenBEM, we investigated the optimal parameters of the monopole array for an accurate synthesis process through parametric studies. As for FRPs, it was also established that one needed at least 4 monopole positions per smallest wavelength to be reproduced. We also studied two different source array -structure geometric configurations: conformal configuration (monopole array with the same geometry as the test structure) and non-conformal (monopole array with a different geometry from the test structure). The results yielded that the conformal configuration is more suited for the proposed SST approach. Afterwards, we designed an experimental setup for the implementation of the SST process on CRPs. In order to automatize the process, the monopole source was mounted on an articulated robot arm. The measurements were done using an OROS acquisition system as was done for FRPs and the whole measurement operations were managed from a control room. The two-dimensional and three-dimensional measured transfer functions were compared to analytical ones. A good agreement between the two types of results was found thus validating each other. Finally, the SST process was implemented in the two-dimensional case and the results were very convincing as can be consulted in Sec. 6.2.1.2. Several perspectives and applications are now considered. As one can notice, the threedimensional experimental implementation of the broadband SST process in the case of CRPs was not done due to time consuming measurements induced by a large number of source positions over the size of the considered structure, see Table 5.2. Even though the experimental three-dimensional SST results should be, in theory, close to the analytical 6.3 summary 147 ones, it would be interesting to validate it in future works. Also, one needs to have a proper control on the mechanical boundary conditions of the structure of interest in order to accurately predict the vibroacoustic responses of CRPs to synthesized random pressure fields. 
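As a brief aside on the two-microphone estimation of the normal particle velocity recalled in the summary above, the following minimal sketch applies the finite-difference form of Euler's equation to the complex pressures measured at two closely spaced points; the spacing, frequency and pressure values are placeholders, and an e^{+j omega t} time convention is assumed here.

```python
import numpy as np

# Illustrative placeholder values (not measured data):
rho0 = 1.2      # air density [kg/m^3]
f = 1000.0      # frequency [Hz]
delta = 0.01    # microphone spacing along the normal n [m] (assumed)
omega = 2 * np.pi * f

# Complex pressure amplitudes at the two microphones (e^{+j omega t} convention assumed)
p1 = 1.0 + 0.2j   # microphone closest to the panel [Pa]
p2 = 0.8 + 0.4j   # microphone farther away along n [Pa]

# Euler's equation in finite-difference form:
#   j * omega * rho0 * u_n = -dp/dn  ~  -(p2 - p1) / delta
u_n = (p1 - p2) / (1j * omega * rho0 * delta)

# Pressure interpolated at the midpoint and normal active intensity
p_mid = 0.5 * (p1 + p2)
I_act = 0.5 * np.real(p_mid * np.conj(u_n))   # [W/m^2], for peak amplitudes

print(f"|u_n| = {abs(u_n):.3e} m/s, I_act = {I_act:.3e} W/m^2")
```

Summing the resulting active intensity over the measurement grid, as in Eq. (B.4), gives the radiated power used in the transmission loss.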
As the SST process has been validated and automatized, it can be used in the future to compare the responses of different complex panels (stiffened panels or multilayer panels) under DAF or TBL excitation. As the considered excitation is represented by a model, the comparison between different panels will not be perturbed by uncertainties and background noises related to the excitation. Moreover, the analysis of the measured sensitivity functions will be helpful to extract the physical phenomena contributing to the noise radiation of the panels. One can also compare different configurations (space between stiffeners, stiffeners damping, etc.) and analyze the impact on the results. Let us now consider a given panel for which all the sensitivity functions have already been measured and which is subjected to an unknown random pressure field. Using the measured sensitivity functions and measurements of the vibroacoustic response of the panel, one can estimate the wall-pressure fluctuations on the surface of the panel due to the unknown random excitation. For future work, one can also imagine extending the SST process in order to synthesize inhomogeneous pressure fields and study the dispersion of the vibroacoustic response due to the inhomogeneity of the excitation. Finally, one can also imagine extending SST to dome-like structures for naval an aeronautical applications. For instance, the study of the vibroacoustic behavior of hulls or SONAR domes. A V E L O C I T Y S E N S I T I V I T Y F U N C T I O N S The sensitivity functions characterize the dynamical behavior of a given structure and have been defined by Eq. (2.6). For the considered panel in this study, these sensitivity functions are calculated using the modal expansion method H v (x, k, ω) = iω m,n F mn (k) φ mn (x) M mn ω 2 mn -ω 2 + iη mn ω mn ω (A.1) where m and n are both positive integers related to the summations. • x = (x, y, 0) is the point of interest on the panel surface, • F mn (k) is called the generalized force of the plane wave, • φ mn (x) represents the mode shape, M mn the modal mass, ω mn the modal angular frequency and η mn the modal damping. For a simply supported panel on all edges (as shown in Fig. 3.1), the modal parameters are given in the following equations ω mn =       mπ L 1 2 + nπ L 2 2       D ρh (A.2) φ mn (x) = sin mπ L 1 x 1 sin nπ L 2 x 2 (A.3) B R A D I AT E D P O W E R B Y A PA N E L The radiated power is defined by Π r (ω) = Σ v I act (x, ω) dx (B.1) where dx is the surface element and I act is the normal component of the active sound intensity at point x. The active sound intensity is directly related to the CSD function S pv 0 (x, ω) between the sound pressure and the particle velocity at point x [36] I act (x, ω) = ℜ S pv 0 (x, ω) (B.2) with S pv 0 (x, ω) = 1 4π 2 k H p (x, k, ω) H * v 0 (x, k, ω) S pp (k, ω) dk (B.3) Theoretically, the radiated power is obtained by solving the formal integral in Eq. (B.1). A discretized version of this equation is Π r (ω) = σ v I act (x, ω) δx (B.4) where σ v represents the set of points defined on Σ v and δx is the elementary point area. The pressure and particle velocity sensitivity functions can respectively be written as H p (x, k, ω) = 1 4π 2 k Hp k, k, ω e i kx d k (B. 
Π r (ω) = ℜ       1 4π 2 3 k k k Hp k, k, ω Hv 0 k, k, ω Σ p e i k-k dx × d kd kS pp (k, ω) dk       (B.7) Note that 1 4π 2 +∞ -∞ e i k-k dx = δ k -k =            1 if k = k 0 if k k (B.8) The previous relation can be used in the case of a baffled panel as the outgoing intensity is null on on the baffle. The radiated power can now be written as The theoretical radiated power by a simply supported plate under random excitation is given by the following equation Π r (ω) = ℜ 1 4π 2 Π r (ω) = ℜ         1 4π 2 2 k k Hp k, k, ω H * v 0 k, k, ω ∆ kS pp (k, ω) ∆k         (B.12) The particle velocity over the wall of a panel excited by a WPPW of wavenumber k can be written as a function of the acoustic pressure Hp k, k, ω = ωρ 0 k z Hv 0 k, k, ω (B.13) where ρ 0 is the density of the fluid and k z = k 2 0 -k2 x -k2 y if k2 x + k2 y ≤ k 2 0 , otherwise k z = j k2 x + k2 y -k 2 0 . Using Equation (B.13), the radiated power can be written as follows The second step of the SST process requires the measurement of the FRFs G ps between the source position s and the observation p on the reconstruction surface. A linear flush-mounted microphone array was used to achieve this measurement. As this array does not cover the whole reconstruction surface, one used the invariance property in translation of the idealized considered system (i.e. source, flat baffle, semi-anechoic room) to deduce the required transfer functions. This is highlighted in Π r (ω) = ℜ         ωρ 0 1 4π 2 2 k k 1 k z Hv 0 k, k, ω 2 δ kS pp (k, ω) δk         (B. invariance principle The blue grid shows the primary positions occupied by the source if we had a rectangular array or a displaceable (with an actuator for instance) linear array of microphones. The secondary source positions in red are the additional source positions needed to measure the FRFs if one had a single non-displaceable linear array of microphones considering the invariance property in translations. Thus, instead of measuring, for instance, the blue (dashed) FRF (see Fig. the slower the convergence of the series. However, there is not much difference in the 1 Given a circle, the apothem is the perpendicular distance from the midpoint of a chord to the circle's center. For a regular polygon, the apothem simply is the distance from the center to a side. See circular segment. number of necessary harmonics for the series to converge for the three frequencies studied. In conclusion, for the considered frequency range ([100, 2000] Hz), it is safe to say that above approximately N h = 30 harmonics, the estimation of the surface pressure is accurate enough. Looking at these maps of the pressure for a varying angular position of the monopole source (θ 0 ), one can notice the same behavior of the series regarding the number of necessary harmonics for a good estimation of the surface pressure, i.e. above N h = 30. D.1 two dimensional case 163 validation of the transfer functions for curved panels validation of the transfer functions for curved panels As for the two dimensional case, we must determine the optimal number of harmonics allowing an accurate computation of the transfer functions defined in Eq. (5.30). One can notice that this equation is very similar to Eq. (5.16) and a study on the number of cylindrical harmonics N h shows an identical behavior of the three dimensional transfer functions when compared to the two dimensional ones. Hence, we will focus on the integration domain in Eq. 
(5.30) and their validation using the three dimensional functions of OpenBEM. d.2.1 Integration domain In order to determine the three dimensional transfer functions defined by Eq. (5.30), one must integrate the following expression over an infinite domain I (k z ) = +N h n=-N h e inθ σ cos (nθ 0 ) 1 k r r σ H n (k r r 0 ) H ′ n (k r r σ ) e ik z (z σ -z 0 ) (D.1) However, it is not necessary to integrate this expression over an infinite domain. 2 Gmsh is an open source 3D finite element mesh generator with a built-in CAD engine and post-processor [START_REF] Geuzaine | Gmsh: A 3-D finite element mesh generator with built-in pre-and post-processing facilities[END_REF]. (x σ ) ≈ - 1 2π 2 +∞ n=-∞ e inθ σ cos (nθ 0 ) +k 0 -k 0 1 k r r σ H n (k r r 0 ) H ′ n (k r r σ ) e ik z (z σ -z 0 ) dk z (D. I background and presentation of the source scanning technique process 1 literature review 7 1. 1 8 1. 1 . 1 8 1. 1 . 2 71811812 The excitation fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Characterization of spatially correlated random pressure field . . Diffuse acoustic field . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.1. 3 sst on flat rectangular panels: theory and numerical studies 3 . 1 3 . 2 3 . 2 . 1 3 . 2 . 2 3 . 3 . 1 3 . 3 . 2 4 . 1 4 . 1 . 1 4 . 1 . 2 4 . 2 4 . 2 . 1 1 2 2 . 1 1 Figure 1 . 1 313232132233133241411412424211221111 Figure 1.1 DAF description . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 1 . 2 12 Figure 1.2 TBL description . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 1 . 3 13 Figure 1.3 Schematic of the characteristic regions of a wavenumber-frequency spectrum at a constant frequency . . . . . . . . . . . . . . . . . . Figure 2 . 1 21 Figure 2.1 Difference between (a) an actual array of monopoles and (b) a synthetic array using only one mobile monopole. . . . . . . . . . Figure 2 . 2 22 Figure 2.2 Structure under random pressure field where p b represents the blocked pressure at the surface Σ s . . . . . . . . . . . . . . . . . . . Figure 2 . 3 23 Figure 2.3 Description of the transfer function G ps (ω) . . . . . . . . . . . . . Figure 2 . 4 24 Figure 2.4 Description of the velocity FRF Γ s v (x, ω) . . . . . . . . . . . . . . . Figure 2.5 Description of the distance d between the structure and the synthetic monopole array and that of the distance d m between two adjacent monopole positions in the x and y directions. . . . . . . Figure 3 . 1 31 Figure 3.1 Simply supported panel subject to an excitation p b . . . . . . . . . Figure 3 . 2 Figure 3 . 3 3233 Figure 3.2 Optimal interplanar distance: (a) logarithm of the condition number (log 10 (κ)) of the transfer matrix, (b) reproduction error e p ( dB, ref. 1) on the reconstructed pressure field according to Eq. (3.6) for k x = k max and k y = 0. Both quantities are plotted as functions of frequency and the interplanar distance normalized by the smallest wavelength to be synthesized. . . . . . . . . . . . . . Figure 3 . 4 34 Figure 3.4 Reproduction error e p ( dB, ref. 1) in the wavenumber domain at (a) f = 200 Hz, (b) f = 500 Hz, (c) f = 1000 Hz and (d) f = 2000 Hz for d = λ min 4 . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 3 . 5 35 Figure 3.5 Pressure fields ( Pa) in the spatial domain at a frequency f = 2000 Hz for the plane wave defined by k x = 50, k y = 50 rad m -1 : (a) target, (b) reconstructed for d = λ min 10 and (c) reconstructed for d = λ min 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 4 . 
1 41 Figure 4.1 Microflown mid-high frequency volume velocity source. (1) high impedance loudspeaker, (2) flexible hose and (3) nozzle with sensor. Figure 4 . 2 42 Figure 4.2 Roga RG-50 1/4" ICP microphone. . . . . . . . . . . . . . . . . . Figure 4 . 3 43 Figure 4.3 Bruel & Kjaer accelerometer. . . . . . . . . . . . . . . . . . . . . . Figure 4 . 4 44 Figure 4.4 Baffled simply supported panel setup for the SST process. (1) aluminum panel, (2) nozzle component of the monopole source mounted on the 3D Cartesian robot, (3) 5 cm thick plywood baffle and (4) sound absorbing foam. . . . . . . . . . . . . . . . . . . . . Figure 4 . 5 45 Figure 4.5 Measurement of the transfer functions G ps : experimental setup. (1) 3D Cartesian robot, (2) linear array of 20 microphones, (3) front view of the flush mounted microphones and (4) back view of the flush mounted microphones. . . . . . . . . . . . . . . . . . Figure 4 . 6 46 Figure 4.6 Measurement of the particle velocity with the two microphone method, S 1 and S 2 are discretized surfaces consisting of two identical grids of R points. . . . . . . . . . . . . . . . . . . . . . . Figure 4 . 7 47 Figure 4.7 Particle velocity measurements: experimental setup. (1) linear array of microphones mounted on the 2D Cartesian robot and (2) 2D Cartesian robot. . . . . . . . . . . . . . . . . . . . . . . . . Figure 4 . 8 72 Figure 4 . 9 73 Figure 4 . 10 74 Figure 4 . 11 75 Figure 4 . 12 75 Figure 4 . 13 76 Figure 4 . 14 77 Figure 5 . 1 82 Figure 5 . 2 Figure 5 . 3 48724973410744117541275413764147751825253 Figure 4.8 Velocity sensitivity functions H v x, k x , k y = 0, ω 2 ( dB, ref. 1 m 3 s -1 Pa -1 ): (a) numerical and (b) SST. Continuous white line: acoustic wavenumber k 0 . Dashed white line: panel flexural wavenumber k f . . . . . 72 Figure 4.9 Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a DAF excitation: numerical (thin black line), SST (thick gray line). . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Figure 4.10 Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a TBL excitation: numerical (thin black line), SST (thick gray line). . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Figure 4.11 Transmission loss TL (ω) ( dB, ref. 1) of the panel under DAF excitation: numerical (thin black line), SST (thick gray line). . . . . . . 75 Figure 4.12 Transmission loss TL (ω) ( dB, ref. 1) of the panel under TBL excitation: numerical (thin black line), SST (thick gray line). . . . . . . 75 Figure 4.13 Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a DAF excitation: reverberant chamber measurements at the University of Sherbrooke [57] (thin black line), SST (thick gray line). . . . . . . . . . . . . . . . . . . . . . . . . . . 76 Figure 4.14 Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a TBL excitation: wind tunnel measurements at the University of Sherbrooke [58] (thin black line), SST (thick gray line). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 Figure 5.1 Problem geometry and parameters. . . . . . . . . . . . . . . . . . 82 Figure 5.2 Problem geometry and parameters: section view of the 3D case. The z-axis is outgoing. . . . . . . . . . . . . . . . . . . . . . . . . . 88 Figure 5 . 
4 54 Figure 5.4 Comparison between the magnitudes and phases of the wallpressure obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution in Eq. (5.30) (dash-dotted black line) for one position of the monopole source r 0 = 0.6 m, θ 0 = π 4 , z 0 = 0.5 m and at a frequency f = 2000 Hz: three-dimensional case. The distance along the z-axis between the plane of the monopole source and that of the observation points is |z -z 0 | = 0.3 m. . . . . . . . . . . . . . . . . . . . . . . . . Figure 5 . 5 55 Figure 5.5 Section view of the SST setup for CRPs. . . . . . . . . . . . . . . . . Figure 5 . 6 56 Figure 5.6 Reproduction error in the frequency-wavenumber domain: (a) n s = 1, (b) n s = 2, (c) n s = 3 and (d) n s = 4. . . . . . . . . . . . . . . Figure 5 . 7 57 Figure 5.7 Real part of the target pressure field (continuous grey line) and the reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz: (a) n s = 1, (b) n s = 2, (c) n s = 3 and (d) n s = 4. . . . . . . . Figure 5 . 8 Figure 5 . 9 Figure 5 . 10 Figure 5 . 11 Figure 5 . 12 Figure 5 . 13 Figure 5 . 14 Figure 5 . 15 5859510511512513514515 Figure 5.8 Imaginary part of the target pressure field (continuous grey line) and the reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz: (a) n s = 1, (b) n s = 2, (c) n s = 3 and (d) n s = 4. . . . . . . . Figure 5 . 16 516 Figure 5.16 Parametric studies for a non conformal geometry configuration and for h ′ = 60 cm: (a) reproduction error, (b) logarithm of the condition number (log 10 κ), (c) real part and (d) imaginary part of the target pressure (continuous grey line) and reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz. . . . . . . . . . . . Figure 5 . 17 517 Figure 5.17 Parametric studies for a non conformal geometry configuration and for h ′ = 70 cm: (a) reproduction error, (b) logarithm of the condition number (log 10 κ), (c) real part and (d) imaginary part of the target pressure (continuous grey line) and reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz. . . . . . . . . . . . Figure 5 . 18 518 Figure 5.18 Comparison between the reproduction error induced by the (a) conformal geometry configuration and the (b) non-conformal configuration in the frequency-wavenumber domain. . . . . . . . Figure 5 . 19 519 Figure 5.19 Parametric studies: influence of the density of the array of monopoles on the reproduction error e p at a frequency f = 500 Hz. (a) n s = 1, (b) n s = 2, (c) n s = 3 and (a) n s = 4. . . . . . . . . . . . . . . . . . . Figure 5 . 20 Figure 5 . 21 520521 Figure 5.20 Parametric studies: influence of the density of the array of monopoles on the reproduction error e p at a frequency f = 1000 Hz. (a) n s = 1, (b) n s = 2, (c) n s = 3 and (a) n s = 4. . . . . . . . . . . . . . . Figure 5 . 22 522 Figure 5.22 Target WPPW defined by the wavevector (50, 50) rad m -1 mapped on the surface of the half-cylindrical structure. . . . . . . . . . . . Figure 5 . 23 523 Figure 5.23 Parametric studies: influence of the density of the array of monopoles on the reconstructed pressure fields at a frequency f = 500 Hz. (a) n s = 1, (b) n s = 2, (c) n s = 3 and (a) n s = 4. . . . . . . . . . . . . Figure 5 . 
24 524 Figure 5.24 Parametric studies: influence of the density of the array of monopoles on the reconstructed pressure fields at a frequency f = 1000 Hz. (a) n s = 1, (b) n s = 2, (c) n s = 3 and (a) n s = 4. . . . . . . . . . . . . Figure 5 . 25 525 Figure 5.25 Parametric studies: influence of the density of the array of monopoles on the reconstructed pressure fields at a frequency f = 2000 Hz. (a) n s = 1, (b) n s = 2, (c) n s = 3 and (a) n s = 4. . . . . . . . . . . . . Figure 5 . 26 526 Figure 5.26 Influence of the radial height on the reproduction error at a frequency f = 500 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 5 . 27 527 Figure 5.27 Influence of the radial height on the reproduction error at a frequency f = 1000 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . . . . . . . . . . . . . . . . . . . . . . . Figure 5 . 28 528 Figure 5.28 Influence of the radial height on the reproduction error at a frequency f = 2000 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . . . . . . . . . . . . . . . . . . . . . . . Figure 5 . 29 529 Figure 5.29 Reconstructed pressure fields using SST at a frequency f = 500 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . Figure 5 . 30 530 Figure 5.30 Reconstructed pressure fields using SST at a frequency f = 1000 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . Figure 5 . 31 Figure 6 . 1 53161 Figure 5.31 Reconstructed pressure fields using SST at a frequency f = 2000 Hz for two different radial heights: (a) h = 5 cm and (b) h = 10 cm. . . Figure 6 . 2 CTTM 1 / 4 6214 Figure 6.2 CTTM 1/4 ′′ microphones. . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 3 63 Figure 6.3 Eight channel conditioner with a Harting connector output. . . . Figure 6 . 4 64 Figure 6.4 five degrees-of-freedom robot arm. . . . . . . . . . . . . . . . . . Figure 6 . 5 65 Figure 6.5 Sketch of the experimental setup for the SST process on CRPs. . . Figure 6 . 6 66 Figure 6.6 Monopole source mounted on the robot arm. . . . . . . . . . . . . Figure 6 . 7 67 Figure 6.7 Chosen location of the semi-circular array of 63 microphones flush-mounted on the rigid half cylinder. The reference (green) position is where the microphones are actually located and the edge (red) corresponds to the farthest location from the reference where one needs to measure the transfer functions. . . . . . . . . Figure 6 . 8 68 Figure 6.8 Optimal location of the semi-circular array of 63 microphones on the rigid half cylinder which reduces the diffraction effects on the measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 9 69 Figure 6.9 Flush-mounted microphones on the rigid open half cylinder. Left: view in chamber 1 in Fig. 6.5. Right: view in chamber 2 in Fig. 6.5. Figure 6 . 10 Figure 6 . 11 610611 Figure 6.10 Experimental setup: simply supported rigid open half cylinder. The monopole source (7) is mounted on the robot arm (8) which can slide along the rail (9) and supported by the truss structure (1). The 63 microphones (3) are flush-mounted on the rigid open half cylinder (4) baffled by the half polystyrene cylinders (2) and (5). (6) corresponds to the robot arm control unit. . . . . . . . . . Figure 6 . 12 612 Figure 6.12 Pressure at the surface of the structure at two frequencies: (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 5 cm and source angular position θ 0 = π 4 . 
Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the two-dimensional case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 13 613 Figure 6.13 Pressure at the surface of the structure at two frequencies: (a) f = 1000 Hz and (b) f = 1500 Hz, for a radial height h = 10 cm and source angular position θ 0 = π 2 . Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the two-dimensional case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 14 Figure 6 . 15 614615 Figure 6.14 Pressure at the surface of the structure at two frequencies: (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 20 cm and source angular position θ 0 = π 2 . Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the two-dimensional case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 16 616 Figure 6.16 Comparison between the target pressure (dash-dotted black line) and the synthesized one (continuous gray line, the red dots correspond to the positions of the measurement points) using experimental SST at a frequency f = 2000 Hz for a radial height h = 5 cm and for three different Wall-Pressure Plane Waves (WPPWs): (a) and (b): k θ = 5 rad m -1 , (c) and (d): k θ = 25 rad m -1 , (e) and (f): k θ = 50 rad m -1 . Left column: real part of the pressure. Right column: imaginary part. . . . . . . . . . . . . . . . . . . . . . . . . Figure 6 . 17 617 Figure 6.17 Pressure at the surface of the structure at two frequencies: (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 5 cm and source angular position θ 0 = π 2 . Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the three-dimensional case. The observation points are on a different section from the source's plane/section, i.e. |z -z 0 | = 50 cm. . . . . . . . . . . . . . Figure 6 . 18 618 Figure 6.18 Pressure at the surface of the structure at two frequencies: (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 10 cm and source angular position θ 0 = π 4 . Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line, the red dots correspond to the positions of the measurement points) for the three-dimensional case. The observation points are on a different section from the source's plane/section, i.e. |z -z 0 | = 25 cm. . . . . . . . . . . . . . Figure C. 1 Frequency 1 Figure C.1 Frequency Response Functions (FRFs) measurements using a nondisplaceable microphone array . . . . . . . . . . . . . . . . . . . . Figure D. 2 2 Figure D.2 Evolution of the pressure at four different points (see Fig. D.1) on the structure as a function of the source position r 0 for θ 0 = π 2 and the number of harmonics N h used in Eq. (5.15) at a frequency f = 100 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 3 3 Figure D.3 Evolution of the pressure at four different points (see Fig. 
D.1) on the structure as a function of the source position r 0 for θ 0 = π 2 and the number of harmonics N h used in Eq. (5.15) at a frequency f = 1000 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 4 4 Figure D.4 Evolution of the pressure at four different points (see Fig. D.1) on the structure as a function of the source position r 0 for θ 0 = π 2 and the number of harmonics N h used in Eq. (5.15) at a frequency f = 2000 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 5 5 Figure D.5 Evolution of the pressure at four different points (see Fig. D.1) on the structure as a function of the source position θ 0 for r 0 = 0.8 m and the number of harmonics N h used in Eq. (5.15) at a frequency f = 100 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 6 6 Figure D.6 Evolution of the pressure at four different points (see Fig. D.1) on the structure as a function of the source position θ 0 for r 0 = 0.8 m and the number of harmonics N h used in Eq. (5.15) at a frequency f = 1000 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 7 7 Figure D.7 Evolution of the pressure at four different points (see Fig. D.1) on the structure as a function of the source position θ 0 for r 0 = 0.8 m and the number of harmonics N h used in Eq. (5.15) at a frequency f = 2000 Hz (a) x 1 , (b) x 2 , (c) x 3 , and (d) x 4 . . . . . . . . . . . . . . Figure D. 9 9 Figure D.9 Comparison between the amplitudes ((a), (c) and (e)) and phases ((b), (d) and (f)) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line) for one position of the unit amplitude monopole source r 0 = 1 m, θ 0 = π 4 and at three different frequencies. (a) and (b): f = 2000 Hz; (c) and (d): f = 1000 Hz and (e) and (f): f = 100 Hz. . . . . . . . . . . . . . . . Figure D. 10 10 Figure D.10 Comparison between the amplitudes ((a), (c) and (e)) and phases ((b), (d) and (f)) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line) for one position of the unit amplitude monopole source r 0 = 1 m, θ 0 = π 2 and at three different frequencies. (a) and (b): f = 2000 Hz; (c) and (d): f = 1000 Hz and (e) and (f): f = 100 Hz. . . . . . . . . . . . . . . . Figure D. 11 11 Figure D.11 Values of I (k z ) for k z ∈ [-k max z , k max z ]: (a) real part and (b) imaginary part at a frequency f = 500 Hz and for a source position defined by r 0 = 0.7 m, θ 0 = π 2 , z 0 = 0.5 m . The observation point located at (r = 0.5 m, θ = 0, z = 0.5 m) and the dashed black lines correspond to values of the acoustic wavenumber at the considered frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure D. 13 13 Figure D.13 Values of I (k z ) for k z ∈ [-k max z , k max z ]: (a) real part and (b) imaginary part at a frequency f = 500 Hz and for a source position defined by r 0 = 0.7 m, θ 0 = π 2 , z 0 = 0.5 m . The observation point located at r = 0.5 m, θ = π 2 , z = 0.5 m and the dashed black lines correspond to values of the acoustic wavenumber at the considered frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure D. 
14 14 Figure D.14 Values of I (k z ) for k z ∈ [-k max z , k max z ]: (a) real part and (b) imaginary part at a frequency f = 2000 Hz and for a source position defined by r 0 = 0.7 m, θ 0 = π 2 , z 0 = 0.5 m . The observation point located at r = 0.5 m, θ = π 2 , z = 0.5 m and the dashed black lines correspond to values of the acoustic wavenumber at the considered frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure D. 15 15 Figure D.15 Values of I (k z ) for k z ∈ [-k max z , k max z ]: (a) real part and (b) imaginary part at a frequency f = 2000 Hz and for a source position defined by r 0 = 0.7 m, θ 0 = π 3 , z 0 = 0.5 m . The observation point located at r = 0.5 m, θ = π 2 , z = 0.75 m and the dashed black lines correspond to values of the acoustic wavenumber at the considered frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure D. 16 16 Figure D.16 Generated mesh for OpenBEM computations. . . . . . . . . . . . Figure D. 17 17 Figure D.17 Comparison between the magnitudes ((a), (c) and (e)) and phases ((b), (d) and (f)) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line) for one position of the unit amplitude monopole source r 0 = 0.6 m, θ 0 = π 2 , z 0 = 0.5 m and at three different frequencies. (a) and (b): f = 500 Hz; (c) and (d): f = 1000 Hz and (e) and (f): f = 2000 Hz. The observation points are on the same section as the source, i.e. z = z 0 . . . . . . . Figure D. 18 18 Figure D.18 Comparison between the magnitudes (a) and phases (b) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line) for one position of the unit amplitude monopole source r 0 = 0.6 m, θ 0 = π 2 , z 0 = 0.5 m and at the frequency f = 2000 Hz. The observation points are on a different section from the source's plane/section, i.e. |z -z 0 | = 0.1 m. . . . . Figure D. 19 19 Figure D.19 Comparison between the magnitudes ((a), (c) and (e)) and phases ((b), (d) and (f)) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line) for one position of the unit amplitude monopole source r 0 = 0.6 m, θ 0 = π 4 , z 0 = 0.5 m and at two different frequencies. (a) and (b): f = 1000 Hz and (c) and (d): f = 2000 Hz. The observation points are on a different section from the source's, i.e. |z -z 0 | = 0.3 m. . . . . . . . . . . . . Figure D. 20 Comparison 20 Figure D.20 list of tables 3 3 Nowadays, in the framework of developing more sustainable transportation means, new materials which are usually very light with good mechanical properties are spreading very fast. However, these new materials often have very poor acoustic properties. In order to improve the acoustic properties of these new materials, it is necessary to experimentally determine their vibroacoustic behavior when subjected to complex excitations such as the turbulent boundary layer excitation in the aeronautical transportation industry. There are currently available experimental facilities that allow one to determine the vibroacoustic properties of such structures but these are often very expensive and the reproducibility of the measurements performed in these facilities can be questioned, making it difficult to compare different solutions. 
It is therefore of considerable interest to have at disposal an experimental tool or process that can be used during the design stage of the considered structures and that allows one to accurately and cost-efficiently assess the vibroacoustic properties of the structures of interest. All the above reasons gave birth to the VIRTECH (VIRTualization of Experimental facilities in structural aCoustics by wall pressure syntHesis) project funded by the French National Research Agency or Agence Nationale de la Recherche (in French) (ANR). One of the main objectives of VIRTECH is the optimization of source networks to synthesize a wall-pressure field having the characteristics of a Diffuse Acoustic Field (DAF) or a Turbulent Boundary Layer (TBL) over the surface of a given structure, and by leveraging the filtering phenomena of structures in the wavenumber domain, determine their vibroacoustic responses with respect to the excitation of interest. The optimization will focus on various criteria: speed of implementation, spatial resolution, error sensitivity, cost and reproducibility.Another goal of the VIRTECH project consists in identifying the useful components of the acoustic field through direct measurements using microphones and through indirect measurement by examining the vibroacoustic behavior of the tested structure supplemented by an analysis of a numerical model of the pressure field.The work presented in this thesis is part of the component of the VIRTECH project that aims at optimizing source networks for the synthesis of random excitations in order to determine the vibroacoustic behavior of structures under random pressure fields list of tables through the structural and acoustic responses for instance and with a particular focus on the DAF and TBL excitations. As stated earlier, the experimental characterization of structures under DAF and TBL excitations is of great interest to researchers as well as the transportation industry for acoustic comfort reasons mainly. The objective of this research work is to establish an experimental process allowing to determine the vibroacoustic responses of structures using the Source Scanning Technique (SST) that was introduced a few years ago. Figure 1 . 1 : 11 Figure 1.1: DAF description S p b p b (ω) designates the wall-pressure Auto-Spectral Density (ASD) function which is actually a two-sided ASD as a function of the angular frequency ω expressed in Pa 2 rad -1 s and related to the more physically meaningful one-sided ASD (or Power Spectral Density (PSD)) W p b p b as a function of the frequency f and expressed in Pa 2 Hz -1 through the following equation W p b p b (f ) = 4πS p b p b (ω) (1.10) literature review Figure 1 . 2 : 12 Figure 1.2: TBL description domain through their CSD function S p b p b (k, ω) which gives information about the wavenumber components of the pressure field. Generally, it is represented in the literature review wavenumber domain and three main regions can be distinguished from a schematic representation of S p b p b (k, ω) (see Fig. 1.3) Figure 1 . 3 : 13 Figure 1.3: Schematic of the characteristic regions of a wavenumber-frequency spectrum at a constant frequency is a readjustment of the Corcos model. The separation of the components along the streamwise axis x and the spanwise axis y induces the diamond shaped convective ridge of the Corcos model in the 1.1 the excitation fields 21 wavenumber domain. 
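For reference, before turning to the Mellen refinement, the separable Corcos form just described can be written down in a few lines; in the sketch below the decay coefficients, the convection velocity and the single-point spectrum level are typical literature values used as assumptions, not the values retained later in this work.

```python
import numpy as np

def corcos_csd(kx, ky, omega, Uc, alpha_x=0.125, alpha_y=0.7, S_pbpb=1.0):
    """Corcos wall-pressure CSD in the wavenumber domain (separable Lorentzian form).

    alpha_x, alpha_y are typical streamwise and spanwise decay coefficients,
    Uc the convection velocity and S_pbpb the single-point wall-pressure ASD;
    all values used here are assumptions for illustration.
    """
    kc = omega / Uc   # convective wavenumber: centre of the convective ridge
    Sx = (alpha_x * kc / np.pi) / ((alpha_x * kc) ** 2 + (kx - kc) ** 2)
    Sy = (alpha_y * kc / np.pi) / ((alpha_y * kc) ** 2 + ky ** 2)
    return S_pbpb * Sx * Sy

# Example: map of the CSD at f = 1000 Hz over a wavenumber domain of interest,
# assuming a free-stream velocity of 50 m/s and Uc = 0.7 * U_inf
f = 1000.0
omega = 2 * np.pi * f
Uc = 0.7 * 50.0
kx, ky = np.meshgrid(np.linspace(-200, 200, 201), np.linspace(-200, 200, 201))
S = corcos_csd(kx, ky, omega, Uc)
print(f"convective ridge near kx = {omega / Uc:.1f} rad/m, peak level = {S.max():.3e}")
```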
The Mellen model combines both components and as a result, the convective ridge is oval shaped which is a more appropriate representation with respect to actual experimental measurements. The level of the spectrum in the convective ridge is then estimated better or in other words, less overestimated compared to that of the Corcos model. The Mellen model is defined by the following equation: 737) fuselage structure to a TBL excitation for two locations on the fuselage and three flight Mach numbers. Although in flight measurements are popular, they are costly and complicated to set up. Also, as the measurements are done during flight, they are polluted by other noise sources present in the aircraft. Figure 2 . 1 : 21 Figure 2.1: Difference between (a) an actual array of monopoles and (b) a synthetic array using only one mobile monopole. Figure 2 . 2 : 22 Figure 2.2: Structure under random pressure field where p b represents the blocked pressure at the surface Σ s . 2 . 2 Characterization of the acoustic source: one determines the transfer functions (G ps ) between source positions s ∈ [1, S] and observation points p ∈ [1, P ] on the structure as shown in Fig. 2.3, and define the transfer function matrix G as the matrix having the transfer functions G ps as components. Figure 2 . 3 : 23 Figure 2.3: Description of the transfer function G ps (ω) Figure 2 . 4 : 24 Figure 2.4: Description of the velocity FRF Γ s v (x, ω) Figure 2 . 5 : 25 Figure 2.5: Description of the distance d between the structure and the synthetic monopole array and that of the distance d m between two adjacent monopole positions in the x and y directions. Figure 3 . 1 : 31 Figure 3.1: Simply supported panel subject to an excitation p b . ( 2 . 2 [START_REF] Berkhout | Acoustic control by wave field synthesis[END_REF], one needs to determine the pressure sensitivity function H p (x, k, ω) and the particle velocity sensitivity function H v 0 (x, k, ω). However, as can be observed in Appendix B, one can use only one of these two types of sensitivity functions as they are related through Eq. (B.13), in order to determine the CSD function S pv 0 (x, ω) between the pressure and the particle velocity sst on flat rectangular panels: theory and numerical studies at point x. Appendix B also shows the derivation of the theoretical power radiated by the panel. Now, let us talk about the incident power on the panel Π i (ω). Its expression depends on the type of excitation. For a panel excited by a DAF in a reverberant room, the incident power corresponds to the power in the reverberant room, far away from the surface of the panel. According to Sabine's theory, the incident power depends on the ASD function S p b p b (ω) of the acoustic pressure, the celerity of sound c 0 , the density of the medium ρ 0 and the surface of the panel Σ p through the following equation[START_REF] Fahy | Foundations of Engineering Acoustics[END_REF] Figure 3 . 2 :∈ λ min 10 ,λ min 2 .Figure 3 . 3 : 3210233 Figure 3.2: Optimal interplanar distance: (a) logarithm of the condition number (log 10 (κ)) of the transfer matrix, (b) reproduction error e p ( dB, ref. 1) on the reconstructed pressure field according to Eq. (3.6) for k x = k max and k y = 0. Both quantities are plotted as functions of frequency and the interplanar distance normalized by the smallest wavelength to be synthesized. Fig. 3 . 4 34 Fig. 3.4 shows the reproduction error for the same interplanar distance as in Fig. 3.3c and 3.3d but in the wavenumber domain and at four different frequencies. 
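To make the synthesis step behind such error maps concrete, the following minimal sketch solves the SST inversion with a Tikhonov-regularised pseudo-inverse of the transfer matrix G and evaluates the reproduction error as a relative mean-square error in dB, consistent with the -10 dB accuracy threshold used in this work; the transfer matrix, observation abscissae and regularisation factor below are synthetic placeholders, not the quantities computed in this chapter.

```python
import numpy as np

def sst_reproduce(G, p_target, reg=1e-3):
    """One SST synthesis step: find source strengths q such that G q ~ p_target.

    G        : (P, S) complex transfer matrix (observation points x source positions)
    p_target : (P,)   target wall-pressure plane wave sampled at the observation points
    reg      : Tikhonov regularisation factor relative to the largest singular value
               (assumed illustrative value)
    Returns the reconstructed pressure and the reproduction error in dB
    (relative mean-square error).
    """
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    s_reg = s / (s ** 2 + (reg * s[0]) ** 2)              # regularised inverse singular values
    q = (Vh.conj().T * s_reg) @ (U.conj().T @ p_target)   # source strengths
    p_rec = G @ q
    e_p = 10 * np.log10(np.mean(np.abs(p_rec - p_target) ** 2)
                        / np.mean(np.abs(p_target) ** 2))
    return p_rec, e_p

# Toy example with a synthetic transfer matrix (illustration only)
rng = np.random.default_rng(0)
P, S = 63, 50                                # e.g. 63 observation points, 50 source positions
G = rng.standard_normal((P, S)) + 1j * rng.standard_normal((P, S))
x = np.linspace(0.0, 0.5, P)                 # assumed observation abscissae [m]
p_target = np.exp(-1j * 50.0 * x)            # WPPW with k = 50 rad/m
p_rec, e_p = sst_reproduce(G, p_target)
print(f"condition number = {np.linalg.cond(G):.2e}, e_p = {e_p:.1f} dB")
```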
One observes that the reproduction error is always below the threshold (see Eq. (3.6) and discussion). Figure 3 . 4 : 4 .Fig. 3 .Figure 3 . 5 : 344335 Figure 3.4: Reproduction error e p ( dB, ref. 1) in the wavenumber domain at (a) f = 200 Hz, (b) f = 500 Hz, (c) f = 1000 Hz and (d) f = 2000 Hz for d = λ min 4 . 4 . 1 . 2 . 4 .1 experimental implementation 4 . 1 . 1 devices 4 . 1 . 1 . 1 41244114111 Afterwards, the experimental results in terms of vibration response and transmission loss of the panel are compared to theoretical results on one hand and to direct measurements in standard test facilities on the other hand for each of the two excitations of interest (DAF and TBL) in Sec. 4.The chapter is closed with a brief summary. sst on panels: experiments Measurement Monopole source Figure 4 . 1 : 41 Figure 4.1: Microflown mid-high frequency volume velocity source. (1) high impedance loudspeaker, (2) flexible hose and (3) nozzle with sensor. 4 . 4 . 44 The measurements were done in a room where the walls are covered with absorbing wedges and 10 cm thick absorbing foam panels were placed on the floor and around the structure inside the baffle in order to prevent the potential reflections and noises coming from the robot and acquisition system from polluting the measurements. Figure 4 . 2 : 42 Figure 4.2: Roga RG-50 1/4" ICP microphone. Figure 4 . 3 : 43 Figure 4.3: Bruel & Kjaer accelerometer. Figure 4 . 4 : 44 Figure 4.4: Baffled simply supported panel setup for the SST process. (1) aluminum panel, (2) nozzle component of the monopole source mounted on the 3D Cartesian robot, (3) 5 cm thick plywood baffle and (4) sound absorbing foam. Figure 4 . 5 : 45 Figure 4.5: Measurement of the transfer functions G ps : experimental setup. (1) 3D Cartesian robot, (2) linear array of 20 microphones, (3) front view of the flush mounted microphones and (4) back view of the flush mounted microphones. 2 Figure 4 . 6 : 246 Figure 4.6: Measurement of the particle velocity with the two microphone method, S 1 and S 2 are discretized surfaces consisting of two identical grids of R points. Figure 4 . 7 : 47 Figure 4.7: Particle velocity measurements: experimental setup. (1) linear array of microphones mounted on the 2D Cartesian robot and (2) 2D Cartesian robot. Figure 4 . 8 : 48 Figure 4.8: Velocity sensitivity functions H v x, k x , k y = 0, ω 2 ( dB, ref. 1 m 3 s -1 Pa -1 ): (a) numerical and (b) SST. Continuous white line: acoustic wavenumber k 0 . Dashed white line: panel flexural wavenumber k f . Fig. 4 . 4 Fig. 4.9 and Fig. 4.10 show the ASD function of the structural velocity response at the receiving point x = (0.06, 0.3, 0) m (in dB units) when excited by a DAF and a TBL pressure field, respectively, as described in Sec. 1.1.2 and Sec. 1.1.3. Figure 4 . 9 : 49 Figure 4.9: Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a DAF excitation: numerical (thin black line), SST (thick gray line). Figure 4 . 10 : 410 Figure 4.10: Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a TBL excitation: numerical (thin black line), SST (thick gray line). Fig. 4 . 4 [START_REF] Berkhout | A Holographic Approach to Acoustic Control[END_REF] and Fig.4.12 show the inverse of the radiated power (in dB units) by the panel when excited by a DAF and a TBL pressure field, as described in Sec. 1.1.2 and Sec. 1.1.3 respectively. Figure 4 . 11 : 411 Figure 4.11: Transmission loss TL (ω) ( dB, ref. 
1) of the panel under DAF excitation: numerical (thin black line), SST (thick gray line). Figure 4 . 12 : 412 Figure 4.12: Transmission loss TL (ω) ( dB, ref. 1) of the panel under TBL excitation: numerical (thin black line), SST (thick gray line). Fig. 4 . 4 Fig. 4.13 and Fig. 4.14 show for the DAF and the TBL, respectively, a comparison of the panel vibration responses obtained with the SST and the standard test facilities. Figure 4 . 13 : 413 Figure 4.13: Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a DAF excitation: reverberant chamber measurements at the University of Sherbrooke [57] (thin black line), SST (thick gray line). Figure 4 . 14 : 414 Figure 4.14: Velocity ASD function S vv (x, ω) ( dB, ref. 1 m 2 s -2 Hz -1 ) of the panel subjected to a TBL excitation: wind tunnel measurements at the University of Sherbrooke [58] (thin black line), SST (thick gray line). Figure 5 . 1 : 51 Figure 5.1: Problem geometry and parameters. 5. 1 1 scattering by a rigid and curved rectangular panel: transfer functions[START_REF] Schultz | Diffusion in reverberation rooms[END_REF] Tr [ • ] represents the Cauchy trace operator and ∂ n(y) • designates the partial normal derivative at point y in the direction n. Because the object is rigid, Tr ∂ n(y) p (y) = 0 on σ ∪ β and Eq. (5.2) reduces to p (x) = p 0 (x)σ ∪β Tr [p (y)] ∂ n(y) G (x, y) dS (y) (5.3) ΩFigure 5 . 2 : 52 Figure 5.2: Problem geometry and parameters: section view of the 3D case. The z-axis is outgoing. 5. 1 91 ( 5 . 29 ) 191529 scattering by a rigid and curved rectangular panel: transfer functions Using the same Wronskian relation as for the 2D case allows one to establish the following equation for the pressure on the surface of the structure itself at r = r σ p Fig. 5 . 3 Figure 5 . 3 : 5353 Fig. 5.3 and Fig. 5.4 show an example of comparisons between the pressure (amplitude and phase) at the surface of the considered structure obtained using OpenBEM with the analytical results for the two-dimensional case and the three-dimensional case, respectively. One can refer to Appendix D for more comparisons (Fig. D.8, Fig. D.9 and Fig. D.10 for the two-dimensional case; Fig. D.17, Fig. D.18, Fig. D.19 and Fig. D.20 for the three-dimensional case). Figure 5 . 4 : 54 Figure 5.4: Comparison between the magnitudes and phases of the wall-pressure obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution in Eq. (5.30) (dash-dotted black line) for one position of the monopole source r 0 = 0.6 m, θ 0 = π 4 , z 0 = 0.5 m and at a frequency f = 2000 Hz: three-dimensional case. The distance along the z-axis between the plane of the monopole source and that of the observation points is |z -z 0 | = 0.3 m. Fig. D.16. • RAM: 16.0 Go • Operating system: Windows 10 64 bits 5.2 sst on curved rectangular panels: parametric studies 5.2.1 Two-dimensional case As described in Chapter 2, SST aims at reproducing Wall-Pressure Plane Waves (WPPWs) of the form p (x, k) = e ikx which in turn will allow the synthesis of the vibroacoustic response of structures under random excitations. In the current geometry of the problem, illustrated in Fig. 5.5, these WPPWs are of the form p (x, k) = e -ik θ Rθ(5.31) Fig. 5 . 6 ,Figure 5 . 6 : 5656 Fig. 5.6, Fig. 5.7 and Fig. 5.8 respectively show the reproduction error defined in Eq. 
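To illustrate the structure of the cylindrical-harmonic series behind these transfer functions, a minimal sketch is given below for the textbook case of a full rigid circular cylinder insonified by a two-dimensional line source; it reproduces the ratio H_n(k r_0)/H_n'(k r_sigma) and the truncation at N_h harmonics discussed in Appendix D, but it is not the baffled half-cylinder solution of Eq. (5.16), and the radius, source position and frequency used are illustrative assumptions.

```python
import numpy as np
from scipy.special import hankel1, h1vp

def surface_pressure_rigid_cylinder(theta, a, r0, theta0, k, N_h=30):
    """Total pressure on a rigid circular cylinder of radius a insonified by a
    2D line source at (r0, theta0), with free-field incident field (i/4) H0^(1)(k |x - x0|).

    Textbook full-cylinder case, shown only to illustrate the series structure
    (ratio H_n(k r0) / H_n'(k a)) and its truncation at N_h harmonics; it is not
    the baffled half-cylinder solution derived in Chapter 5.
    """
    p = np.zeros_like(theta, dtype=complex)
    for n in range(N_h + 1):
        eps_n = 1.0 if n == 0 else 2.0    # Neumann factor
        p += (eps_n * np.cos(n * (theta - theta0))
              * hankel1(n, k * r0) / h1vp(n, k * a))
    # The Wronskian of the Bessel/Hankel functions yields the -1/(2 pi k a) prefactor
    return -p / (2.0 * np.pi * k * a)

# Illustrative evaluation (assumed values): radius 0.5 m, source at r0 = 0.7 m,
# theta0 = pi/2, frequency 1000 Hz, 63 angular observation points
c0, f = 343.0, 1000.0
k = 2 * np.pi * f / c0
theta = np.linspace(0.0, np.pi, 63)
p_surf = surface_pressure_rigid_cylinder(theta, a=0.5, r0=0.7, theta0=np.pi / 2, k=k)
print(np.abs(p_surf[:5]))
```

With roughly N_h = 30 terms the series is converged over the [100, 2000] Hz range considered here, in line with the convergence study reported in Appendix D.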
(3.6) in the frequency-wavenumber domain, the real part and the imaginary part of the target and synthesized pressure fields at the surface of the structure (θ designates the angular position of the considered point on the structure) for four different values of n s .It is important to remember the criteria for an accurate reproduction process. This criteria was discussed in Sec. 3.3.2 of Chapter 2: the synthesis is considered accurate when the reproduction error is less than -10 dB which corresponds to a relative MSE of less than 10%. Comparing Fig.5.6a, Fig.5.6b, Fig.5.6c and Fig.5.6d, one can notice that the reproduction error covers an increasing area in the frequency-wavenumber domain when n s varies from 1 to 4. This means that for an increasing number of monopoles per smallest wavelength, the reproduction process becomes more accurate. This observation is confirmed by comparing the target pressure fields and the synthesized ones. In Fig.5.7a,Fig. 5.8a, 5.7b and Fig. 5.8b, one can clearly see that the reproduced pressure fields do not match the target ones. However, in Fig. 5.7c, Fig. 5.8c, 5.7d and Fig. 5.8d, the reproduction process is very accurate and the synthesized pressure fields perfectly match with the target ones. It is important to notice that only one example of WPPW is Figure 5 . 7 :Figure 5 . 8 : 5758 Figure 5.7: Real part of the target pressure field (continuous grey line) and the reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz: (a) n s = 1, (b) n s = 2, (c) n s = 3 and (d) n s = 4. Figure 5 . 9 : 59 Figure 5.9: Parametric studies for h = 5 cm: (a) real part and (b) imaginary part of the target pressure (continuous grey line) and reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz and (c) reproduction error. Figure 5 . 10 : 510 Figure 5.10: Condition number (log 10 (κ)) of the transfer matrix G for three different radial heights: (a) h = 5 cm, (b) h = 10 cm and (c) h = 20 cm. Figure 5 . 11 : 511 Figure 5.11: Parametric studies for an added normally distributed noise which values vary between 0 and 0.01. Reproduction errors in the frequency-wavenumber domain for (a) h = 5 cm, (b) h = 10 cm and (c) h = 20 cm. Fig. 5 . 5 Fig. 5.11b and Fig. 5.11c respectively show the reproduction error for a radial height h = 10 cm and h = 20 cm. One can notice that, with the added noise, SST succeeds only in a narrow band around 0 rad m -1 and for all frequencies. Therefore, one can only expects Figure 5 . 12 :Figure 5 . 13 : 512513 Figure 5.12: Parametric studies for an added normally distributed noise which values vary between 0 and 0.01. Real part of the target pressure (continuous grey line) and reconstructed pressure (dashed black line) for (a) h = 5 cm, (b) h = 10 cm and (c) h = 20 cm. The WPPW is defined by the wavevector (-50, 50) rad m -1 at a frequency f = 2000 Hz. Figure 5 . 14 : 514 Figure 5.14: Section view of the non conformal SST geometry setup for CRPs. Fig. 5 . 15 , 515 Fig. 5.15, Fig. 5.16 and Fig. 5.17 show the SST results for three different heights h ′ . Comparing Fig. 5.15a, Fig. 5.16a and Fig. 5.17a, one can notice that the surface covered by the reproduction error in the frequency-wavenumber domain becomes smaller and smaller when the height h ′ is increased from 55 cm to 70 cm. 
This result was already pointed out in the case of the conformal geometry, which gives better results than the present non-conformal configuration. However, comparing Fig. 5.15b, Fig. 5.16b and Fig. 5.17b, one can notice that the condition number does not vary much between the three heights h′ (from 10^14 to 10^17) and remains of the same order as for the conformal configuration.

Figures 5.15 to 5.17: Parametric studies for the non-conformal geometry configuration with h′ = 55 cm, 60 cm and 70 cm respectively: (a) reproduction error, (b) logarithm of the condition number (log10 κ), (c) real part and (d) imaginary part of the target pressure (continuous grey line) and reconstructed pressure (dashed black line) of the WPPW defined by the wavevector (-50, 50) rad m-1 at f = 2000 Hz.
Figure 5.18: Comparison between the reproduction errors induced by (a) the conformal and (b) the non-conformal geometry configurations in the frequency-wavenumber domain.

Fig. 5.19, Fig. 5.20 and Fig. 5.21 show the reproduction error in the wavenumber domain for each value of n_s, at f = 500 Hz, f = 1000 Hz and f = 2000 Hz respectively. One should remember that, for an accurate reproduction process, the reproduction error must be below -10 dB. Fig. 5.19a, Fig. 5.20a and Fig. 5.21a represent the reproduction error for n_s = 1 at the three frequencies of interest. Observing Fig. 5.19a, one can notice that the synthesis is accurate when the wavevector of the WPPWs lies within a circle of radius 12 rad m-1; for all other wavevectors, and considering the -10 dB threshold, the synthesis is not accurate. In Fig. 5.20a and Fig. 5.21a, which correspond to the reproduction error for n_s = 1 at f = 1000 Hz and f = 2000 Hz respectively, the reproduction is clearly not accurate at all; for these maps the colorbar limits are intentionally left floating instead of being set to -10 dB and -25 dB as for the other reproduction-error maps in the wavenumber domain. Increasing the density of the monopole array from n_s = 1 to n_s = 2 improves the process: the reproduction error decreases and there is less white area on the error maps, as can be noticed in Fig. 5.19b, Fig. 5.20b and Fig. 5.21b for all three frequencies. From n_s = 3 onwards, Fig. 5.19c, Fig. 5.20c and Fig. 5.21c already show a good reproduction over almost all the wavevectors in the wavenumber domain of interest, [-50, 50] × [-50, 50] rad^2 m^-2.

Figures 5.19 to 5.21: Parametric studies: influence of the density of the array of monopoles on the reproduction error e_p at f = 500 Hz, 1000 Hz and 2000 Hz respectively, for (a) n_s = 1, (b) n_s = 2, (c) n_s = 3 and (d) n_s = 4.
Figures 5.23 to 5.25: Parametric studies: influence of the density of the array of monopoles on the reconstructed pressure fields at f = 500 Hz, 1000 Hz and 2000 Hz respectively, for (a) n_s = 1, (b) n_s = 2, (c) n_s = 3 and (d) n_s = 4.

Fig. 5.26, Fig. 5.27 and Fig. 5.28 show maps of the reproduction error defined in Eq. (3.6) for both radial heights and for f = 500 Hz, f = 1000 Hz and f = 2000 Hz respectively.

Figures 5.26 to 5.28: Influence of the radial height on the reproduction error at f = 500 Hz, 1000 Hz and 2000 Hz respectively, for two radial heights: (a) h = 5 cm and (b) h = 10 cm.
Figure 5.31: Reconstructed pressure fields using SST at f = 2000 Hz for two radial heights: (a) h = 5 cm and (b) h = 10 cm.

6.1 experimental implementation
6.1.1 measurement devices

Figure 6.1: Siemens mid-high frequency volume source used during the SST process for CRPs.
Figure 6.2: CTTM 1/4-inch microphones.
Figure 6.3: Eight-channel conditioner with a Harting connector output.
Figure 6.4: Five degrees-of-freedom robot arm.
Figure 6.5: Sketch of the experimental setup for the SST process on CRPs.
Figure 6.6: Monopole source mounted on the robot arm.
Figure 6.7: Chosen location of the semi-circular array of 63 microphones flush-mounted on the rigid half cylinder. The reference (green) position is where the microphones are actually located and the edge (red) corresponds to the farthest location from the reference where the transfer functions need to be measured.
Figure 6.8: Optimal location of the semi-circular array of 63 microphones on the rigid half cylinder, which reduces the diffraction effects on the measurements.
Figure 6.9: Flush-mounted microphones on the rigid open half cylinder. Left: view in chamber 1 in Fig. 6.5. Right: view in chamber 2 in Fig. 6.5.
Figure 6.10: Experimental setup: simply supported rigid open half cylinder. The monopole source (7) is mounted on the robot arm (8), which can slide along the rail (9) supported by the truss structure (1). The 63 microphones (3) are flush-mounted on the rigid open half cylinder (4), baffled by the half polystyrene cylinders (2) and (5); (6) corresponds to the robot arm control unit.

Fig. 6.11, Fig. 6.12, Fig. 6.13 and Fig. 6.14 show comparisons of the analytical (see Sec. 5.1.1) and experimental transfer functions at two different frequencies, for a given radial height and one position (r0, θ0) of the monopole source, through plots of the pressure at the surface of the structure (a semi-circular arc in the two-dimensional case), experimentally materialized by the arc of microphones (3) in Fig. 6.10 and Fig. 6.9.
Observing Fig. 6.11 to Fig. 6.14, one can notice a good agreement between the analytical results and the experimental measurements performed with the array of 63 microphones. Some discrepancies nevertheless appear in the pressure at structural points close to the monopole source position, especially when the radial height is increased. To illustrate this statement, one can compare Fig. 6.11 and Fig. 6.14, where the pressure fields are plotted for the same monopole position but for two different radial heights, h = 5 cm and h = 10 cm respectively: the measurements are more accurate when the monopole source is close to the structure, a result already discussed in Sec. 5.2.1.2 of Chapter 5.

Figure 6.13: Pressure at the surface of the structure at two frequencies, (a) f = 1000 Hz and (b) f = 1500 Hz, for a radial height h = 10 cm and a source angular position θ0 = π/2. Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line; the red dots correspond to the positions of the measurement points) for the two-dimensional case.

6.2 results and discussion

Fig. 6.15 shows the condition number of the transfer matrix G measured between the 50 monopole positions and the 63 microphones of the arc array. One can notice a relatively acceptable condition number with regard to the measurement conditions. Fig. 6.16 shows plots of the real and imaginary parts of the target pressure and of the pressure synthesized with the experimental SST process at f = 2000 Hz, for a radial height h = 5 cm and three different circumferential wavenumbers k_θ. Observing Fig. 6.16, one can verify a very good reproduction for the two-dimensional case.

Figure 6.17: Pressure at the surface of the structure at two frequencies, (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 5 cm and a source angular position θ0 = π/2. Comparison between the analytical results (dash-dotted black line) and the experimental measurements (continuous gray line; red dots: measurement points) for the three-dimensional case. The observation points are on a different section from the source's plane, i.e. |z - z0| = 50 cm.
Figure 6.18: Same comparison at (a) f = 500 Hz and (b) f = 2000 Hz, for a radial height h = 10 cm and a source angular position θ0 = π/4; the observation points are on a different section from the source's plane, i.e. |z - z0| = 25 cm.

[Appendix B: radiated power by a panel] The wall velocity is expanded as H_v0(x, k, ω) = (1/4π) ∫ H̃_v0(k̃, k, ω) e^{j k̃ x} dk̃ (B.6). Introducing Eq. (B.5) and Eq. (B.6) into Eq. (B.2), then the result into Eq. (B.3) and Eq. (B.4), and after rearranging, one obtains the radiated power of the panel as a double wavenumber integral involving H̃_p(k̃, k, ω), H̃_v0(k̃, k, ω) and the wall-pressure spectrum S_pp(k, ω) (the detailed expression is given in the appendix).

[Appendix C] Figure C.1: FRF measurements using a non-displaceable microphone array (see Fig. C.1). To measure the FRF G_ps for a point facing the linear microphone array, one displaces the source to the position with the same x coordinate but a different z coordinate and measures the corresponding FRF (solid green line in Fig. C.1). As the linear array is along the y axis, there is no need to displace the source along the y coordinate. With this methodology, one can measure the transfer function G_ps for any point on the reconstruction surface with the considered linear array.

appendix d: validation of the transfer functions for curved panels
d.1 two dimensional case (d.1.1 convergence of the harmonic expansion; d.1.2 comparison with OpenBEM); d.2 three dimensional case (d.2.1 integration domain; d.2.2 validation with OpenBEM)

This appendix contains supplementary material to Sec. 5.1.3, whose purpose is to validate the analytical solutions of the two-dimensional and three-dimensional formulations given by Eq. (5.16) and Eq. (5.30) respectively.

d.1 two dimensional case
d.1.1 Convergence of the harmonic expansion

The goal of this section is to study the convergence of the expansion in Eq. (5.15) as a function of the monopole coordinate (either r0 or θ0) and of the number of harmonics effectively used in that same equation. Four points have been arbitrarily chosen as checking points (see Fig. D.1). To this end, the surface pressure is determined for the structure depicted in Fig. 5.1 with the geometrical parameters of Table D.1. Considering these values, the structure corresponds to a half cylinder of radius R_s = sqrt(a^2 + b^2) = 500 mm and apothem a.

Figure D.1: Localization of the arbitrarily chosen test points x1, x2, x3 and x4 where the pressure is plotted in order to study the number of harmonics necessary for convergence of the pressure.
Table D.1: Geometrical properties of the structure.
Figures D.2 and D.4: Evolution of the pressure at the four test points of Fig. D.1 as a function of the source position r0 (for θ0 = π/2) and of the number of harmonics N_h used in Eq. (5.15), at f = 100 Hz and f = 2000 Hz respectively: (a) x1, (b) x2, (c) x3 and (d) x4.

Fig. D.5, Fig. D.6 and Fig. D.7 show maps of the evolution of the pressure at the same four checking points (x1 to x4); the only difference with the previous maps is that the pressure field is now plotted as a function of the coordinate θ0 ∈ [0, π] for r0 = 0.8 m.
Figures D.5 to D.7: Evolution of the pressure at the four test points of Fig. D.1 as a function of the source position θ0 (for r0 = 0.8 m) and of the number of harmonics N_h used in Eq. (5.15): (a) x1, (b) x2, (c) x3 and (d) x4 (Fig. D.5 at f = 100 Hz, Fig. D.7 at f = 2000 Hz).
Figures D.11 and D.12: Values of I(k_z) for k_z ∈ [-k_z^max, k_z^max], (a) real part and (b) imaginary part, at f = 500 Hz and f = 2000 Hz respectively, for a source position defined by r0 = 0.7 m, θ0 = π/2, z0 = 0.5 m. The observation point is located at (r = 0.5 m, θ = 0, z = 0.5 m) and the dashed black lines correspond to the values of the acoustic wavenumber at the considered frequency.
Figure D.13: Values of I(k_z) for k_z ∈ [-k_z^max, k_z^max], (a) real part and (b) imaginary part, at f = 500 Hz and for the same source position; the observation point is located at (r = 0.5 m, θ = π/2, z = 0.5 m).

d.2.2 Validation with OpenBEM

In this section, we aim at validating the three-dimensional transfer functions established in Chapter 5 by comparing the results to those yielded by OpenBEM. To do so, the magnitudes and phases of the analytical results are compared to the numerical (OpenBEM) ones for different source positions, frequencies and configurations. Fig. D.16 shows the mesh generated with Gmsh.

Figure D.16: Generated mesh for the OpenBEM computations.

Fig. D.17 shows the results for a configuration where the source and the observation points are in the same section/plane, for three different frequencies. In Fig. D.18, the source and the observation points are not in the same section. Fig. D.19 shows the results for two frequencies and a configuration where source and observation points are in different planes, and Fig. D.20 shows the results for one frequency and another source-observation configuration.

Figure D.18: Comparison between the magnitudes (a) and phases (b) of the transfer functions obtained with OpenBEM (gray continuous line) and those obtained using the developed analytical solution (dash-dotted black line), for one position of the unit-amplitude monopole source (r0 = 0.6 m, θ0 = π/2, z0 = 0.5 m) at f = 2000 Hz. The observation points are on a different section from the source's plane, i.e. |z - z0| = 0.1 m.

Résumé étendu en français (extended summary in French).

Contents (excerpt): 1.3.2 Wave Field Synthesis; 1.3.3 Planar Near-field Acoustical Holography; 1.3.4 Least Squares Technique; 1.3.5 Source Scanning Technique; 1.4 Objectives of this thesis; 1.5 Dissertation outline; 2 Source Scanning Technique (2.1 Synthetic array principle; 2.2 Wavenumber formulation and quantities of interest; 2.3 Wall-pressure plane waves: 2.3.1 Definition, 2.3.2 Uncorrelated WPPWs and TBL excitation, 2.3.3 Generating WPPWs; 2.4 Description of the SST approach; 2.5 Summary); II Application on flat rectangular panels.

List of tables: Table 1.1 Complementary TBL parameters; Table 3.1 Panel parameters; Table 4.1 Measurement times during the experimental SST process; Table 5.1 Transfer functions computation times; Table 5.2 Dimensions of the rigid half-cylinder; Table 5.3 Values of the condition number (log κ) of the transfer matrix G for each frequency and radial height setup; Table D.1 Geometrical properties of the structure.

Acronyms: ANR French National Research Agency (Agence Nationale de la Recherche); ASD Auto-Spectral Density; ASTM American Society for Testing and Materials; BEM Boundary Element Method; CHIEF Combined Helmholtz Integral Equation Formulation; CTTM Center for Transfer of Technologies of Le Mans (Centre de Transfert de Technologie du Mans); MSE Mean Square Error; NAH Nearfield Acoustic Holography; PNAH Planar Nearfield Acoustical Holography; PSD Power Spectral Density; RANS Reynolds-Averaged Navier-Stokes; SRI Sound Reduction Index; SST Source Scanning Technique; TBL Turbulent Boundary Layer; TL Transmission Loss; WFS Wave Field Synthesis; WPF Wall-Pressure Fluctuations; WPPW Wall-Pressure Plane Wave.
Afterwards, a brief overview of other experimental methods that have been suggested over the years, aiming at synthesizing the pressure fluctuations due to the two types of excitations of interest, is given.

1 literature review
1.1 The excitation fields (1.1.1 Characterization of spatially correlated random pressure fields; 1.1.2 Diffuse acoustic field; 1.1.3 Turbulent boundary layer); 1.2 Standard test means (1.2.1 Diffuse acoustic field; 1.2.2 Turbulent boundary layer); 1.3 Alternative approaches to standard test means (1.3.1 Reciprocity method; 1.3.2 Wave Field Synthesis; 1.3.3 Planar Near-field Acoustical Holography; 1.3.4 Least Squares Technique; 1.3.5 Source Scanning Technique); 1.4 Objectives of this thesis; 1.5 Dissertation outline

This chapter is dedicated to an overview of the different aspects related to the proposed testing method. It concerns the reproduction of random pressure fields and the determination of the vibroacoustic properties of the structure. The main quantities of interest are the vibration response and the sound transmission of a given structure. We start by presenting the two excitations of interest and how they are mathematically modeled. Then a discussion is given on the standard test means that are currently used to experimentally reproduce the DAF and TBL excitations for the assessment of the vibroacoustic properties of the considered structure.

Table 1.1: Complementary TBL parameters.

The higher the value of H, the stronger the adverse pressure gradient. A high adverse pressure gradient can greatly reduce the Reynolds number at which transition to turbulence may occur. Conventionally, H = 2.59 (Blasius boundary layer) is typical of laminar flows, while H = 1.3-1.4 is typical of turbulent flows. For more details concerning the TBL excitation, see the references cited in this section.

2 source scanning technique
2.1 Synthetic array principle; 2.2 Wavenumber formulation and quantities of interest; 2.3 Wall-pressure plane waves (2.3.1 Definition; 2.3.2 Uncorrelated WPPWs and TBL excitation; 2.3.3 Generating WPPWs); 2.4 Description of the SST approach; 2.5 Summary

Chapter 3 outline: 3.1 Problem statement; 3.2 Theoretical responses (3.2.1 Vibration response, sensitivity functions; 3.2.2 Transmission loss); 3.3 SST on flat rectangular panels: parametric studies (3.3.1 Cutoff wavenumber and wavenumber resolution; 3.3.2 Interplanar distance); 3.4 Summary

The next two chapters are dedicated to the application of SST on flat panels: theoretical background and parametric study in Chapter 3, and experimental validation in Chapter 4.
The case study concerns a rectangular aluminum flat panel simply supported on its edges. In this chapter, the problem of interest is described in a first step. Secondly, a theoretical background on the mechanics of panels and the computation of the quantities of interest (vibration response and transmission loss) is presented. Afterwards, a parametric study on the design parameters of the synthetic array of monopoles is carried out.

Table 3.1: Panel parameters
Parameter       Symbol   Value
Young modulus   E        68.9 GPa
Poisson ratio   ν        0.3
Mass density    ρ        2740 kg m-3
Length          L1       0.48 m
Width           L2       0.42 m
Thickness       h        3.17 mm

4 sst on panels: experiments
4.1 Experimental implementation (4.1.1 Measurement devices; 4.1.2 Experimental setup); 4.2 Results and discussion (4.2.1 Numerical validation; 4.2.2 Validation with direct measurements in standard test facilities); 4.3 Summary

Table 4.1: Measurement times during the experimental SST process.

Chapter 5 outline: 5.1 Scattering by a rigid and curved rectangular panel: transfer functions (5.1.1 Two-dimensional case; 5.1.2 Three-dimensional case; 5.1.3 Validation of the solutions); 5.2 SST on curved rectangular panels: parametric studies (5.2.1 Two-dimensional case; 5.2.2 Three-dimensional case); 5.3 Summary

Table 5.1: Transfer functions computation times.
            2D        3D
OpenBEM     ~75 s     ~163.3 s
Analytical  ~0.86 s   ~3.5 s

Table 5.2: Dimensions of the rigid half-cylinder.

Table 5.3: Values of the condition number (log κ) of the transfer matrix G for each frequency and radial height setup.
Frequency (Hz)   h = 5 cm   h = 10 cm
500              16.9       17
1500             ~16.6      16.7
2000             ~16.4      16.5

Chapter 6 outline: 6.1 Experimental implementation (6.1.1 Measurement devices; 6.1.2 Experimental setup); 6.2 Results and discussion (6.2.1 Two-dimensional case: validation of the transfer functions and experimental SST; 6.2.2 Three-dimensional case: validation of the transfer functions); 6.3 Summary
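As a side note, the panel parameters listed in Table 3.1 above are sufficient to evaluate classical thin-plate quantities such as the bending stiffness and the critical (coincidence) frequency. The short sketch below is only an illustrative calculation using the standard thin-plate formulas; it is not a computation taken from the thesis, and the speed of sound value is an assumption.

```python
import numpy as np

# Values from Table 3.1
E, nu, rho, h = 68.9e9, 0.3, 2740.0, 3.17e-3
c0 = 343.0                                     # speed of sound in air (assumed)

D = E * h**3 / (12 * (1 - nu**2))              # bending stiffness of the plate (N m)
m_s = rho * h                                  # surface mass density (kg/m^2)
f_c = c0**2 / (2 * np.pi) * np.sqrt(m_s / D)   # critical (coincidence) frequency (Hz)

print(f"D = {D:.0f} N m, m'' = {m_s:.2f} kg/m^2, f_c = {f_c:.0f} Hz")
```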
(OpenBEM is a collection of MATLAB codes which can be used to solve the Helmholtz equation with the Boundary Element Method; ANR stands for Agence Nationale de la Recherche, the French National Research Agency.)

where R is the radius of the structure and k_θ corresponds to the circumferential wavenumber, related to the circumferential order n through the relation n = k_θ R. In the following, we go through the same steps as for the parametric study on SST for FRPs in order to define the optimal design parameters of the virtual array of monopoles in this new geometry. For the sake of simplicity, the structure of interest is a half-circle with a radius R = 0.5 m.

Studying the effect of the density of the array of monopoles. From the results obtained with planar structures, a good reproduction of the target pressure field was obtained when the criterion of 4 monopoles per minimum wavelength is fulfilled [START_REF] Aucejo | Experimental simulation of turbulent boundary layer induced vibrations by using a synthetic array[END_REF][START_REF] Maury | The experimental synthesis of random pressure fields: Practical feasibility[END_REF]. However, the studied structure now has a curvature, which means that this criterion has to be validated or adapted for such structures. In this context, we propose to study the behavior of SST on these structures when all parameters of the process are fixed except one: n_s, the number of monopole sources required per minimum wavelength. We aim at synthesizing WPPWs whose wavenumbers lie in [-k_max, k_max], with k_max arbitrarily set to 50 rad m-1; the spacing of the array follows from this choice (a numerical illustration is given at the end of this section).

Reconstructed pressure fields. Let us now look at the reconstructed pressure fields when n_s is successively given the values {1, 2, 3, 4}, considering the WPPW depicted on the surface of the structure in Fig. 5.22. As expected after the analysis of the reproduction errors for n_s = 1 and n_s = 2, one can notice in Fig. 5.23c, Fig. 5.24c and Fig. 5.25c that the synthesized wall-pressure field matches the target one of Fig. 5.22 for all three frequencies, and there is no noticeable difference between the results obtained for n_s = 3 and those obtained for n_s = 4.

[Appendix A: velocity sensitivity functions] The modal force is given by an integral over the panel area Σ_p of the prescribed pressure field p(x, k) = e^{-ikx}, corresponding to a unit wall plane wave characterized by the wave-vector k. Equation (A.5) has a closed-form solution for ξ ∈ {x, y} and p ∈ {m, n}.
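As announced above, a minimal numerical illustration of the monopole-density rule (n_s sources per smallest wavelength to be reproduced) is given below. The relation λ_min = 2π/k_max and the printed spacings are purely indicative; they are not values quoted from the thesis.

```python
import numpy as np

k_max = 50.0                       # largest WPPW wavenumber to synthesize (rad/m)
lambda_min = 2 * np.pi / k_max     # smallest wavelength to reproduce (about 12.6 cm)

for n_s in (1, 2, 3, 4):
    spacing = lambda_min / n_s     # distance between adjacent monopole positions
    print(f"n_s = {n_s}: monopole spacing = {100 * spacing:.1f} cm")
```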
04096406
en
[ "spi.elec" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04096406/file/Wireless%20Complex%20Permittivity%20Measurement%20Using.pdf
Florian Requena, Student Member, IEEE; Nicolas Barbot, Member, IEEE; Darine Kaddour, Member, IEEE; Etienne Perret, Senior Member, IEEE

Wireless Complex Permittivity Measurement Using Resonant Scatterers and a Radar Approach

Keywords: Complex permittivity, Radar, Resonant, Scatterer, Wireless measurement

In this paper, a method to characterize the complex permittivity of dielectrics is presented. The signal back-scattered from a resonant scatterer placed in contact with the dielectric is used to estimate the dielectric properties. The proposed method was tested in simulation and validated in practice using different dielectric samples and different dielectric thicknesses. This method is wireless, non-destructive, places no restriction on the sample thickness, and only requires a VNA and an antenna. Discussions on the geometry of the resonator as well as on the calibration step are proposed in order to improve the sensing capability of the approach. A Monte-Carlo simulation has been performed to define a confidence interval for the values extracted with the proposed approach in the range 1 to 3.5 for the permittivity and 0 to 0.2 for tan δ with a SNR of 20 dB. For example, a permittivity εr = 3.54 ± 0.06 and tan δ = 0.0024 ± 0.003 has been measured for Rogers RO4003C, and εr = 2.31 ± 0.05 and tan δ = 0.004 ± 0.002 for Duroid RT5880, with different thicknesses. It is also shown in simulation that the approach is compatible with materials having loss tangents up to 0.5 and real permittivities up to 10.

I. INTRODUCTION

Microwave sensors have attracted increasing interest over the last decades. Various sensors able to measure quantities such as temperature [START_REF] Requena | Thermal modeling of resonant scatterers and reflectometry approach for remote temperature sensing[END_REF], humidity [START_REF] Feng | Low-cost printed chipless rfid humidity sensor tag for intelligent packaging[END_REF], gas [START_REF] Yang | A novel conformal rfid-enabled module utilizing inkjet-printed antennas and carbon nanotubes for gas-detection applications[END_REF] or strain [START_REF] Thai | Novel design of a highly sensitive rf strain transducer for passive and remote sensing in two dimensions[END_REF] without batteries or electronic chips have been published. This wireless approach based on radio-frequency (RF) waves has drawn attention for sensor applications due to its low profile (often planar, single-layer sensors), its accuracy, its price and, finally, its unlimited lifetime of operation due to the absence of electronic parts. RF waves also provide advantages such as wireless measurements, which allow reading multiple sensors at the same time, in real time, even with obstacles [START_REF] Requena | Chipless RFID temperature and humidity sensing[END_REF]. Such RF approaches have also been developed for metrology. We can cite for example the characterization of the thermal expansion of materials [START_REF]Contactless characterization of metals' thermal expansion coefficient by a free-space rf measurement[END_REF] or of the complex permittivity [START_REF] Alahnomi | Review of recent microwave planar resonator-based sensors: Techniques of complex permittivity extraction, applications, open challenges and future research directions[END_REF].
Concerning complex permittivity measurements, numerous works can be found in the literature [START_REF] Krupka | Frequency domain complex permittivity measurements at microwave frequencies[END_REF], [START_REF] Alahnomi | Review of recent microwave planar resonator-based sensors: Techniques of complex permittivity extraction, applications, open challenges and future research directions[END_REF], [START_REF] Kiani | Microwave sensor for detection of solid material permittivity in single/multilayer samples with high quality factor[END_REF], [START_REF] Massoni | Enhanced cavity sensor in SIW technology for material characterization[END_REF]. These methods can be divided into two main categories: resonant and non-resonant approaches. Among non-resonant approaches, coaxial measurements are commonly used and standardized [START_REF] Ute | Mesure de la permittivite et de la permeabilite de materiaux homogenes et isotropes a pertes dans le domaine des micro-ondes. methode de mesure en guide coaxial circulaire[END_REF]. The coaxial method consists in measuring the transmission and reflection coefficients of a coaxial transmission line filled with the dielectric under test [START_REF] Nicolson | Measurement of the intrinsic properties of materials by time-domain techniques[END_REF], [START_REF] Baker-Jarvis | Improved technique for determining complex permittivity with the transmission/reflection method[END_REF]. The transmission/reflection principle has also been transposed to other techniques such as waveguides [START_REF] Bakhtiari | Open-ended rectangular waveguide for nondestructive thickness measurement and variation detection of lossy dielectric slabs backed by a conducting plate[END_REF] or free-space measurements [START_REF] Shimabukuro | A quasioptical method for measuring the complex permittivity of materials[END_REF]. These techniques are simple to implement and work over wide frequency bands, but their major drawback is their low accuracy, especially for the extraction of the losses of low-loss dielectrics. On the other hand, resonant methods achieve higher accuracies and can measure lower losses, but are restricted in frequency coverage. Cavity measurements [START_REF] Li | Precise calculations and measurements on the complex dielectric constant of lossy materials using tm/sub 010/cavity perturbation techniques[END_REF], [START_REF] Dmowski | Contactless measurement of silicon resistivity in cylindrical te01n mode cavities[END_REF] are widely used for dielectric characterization. Besides, various resonator topologies have been published in the literature.

(Manuscript received X; revised X; accepted X. Date of publication X; date of current version X. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 772539). This work is also supported by Univ. Grenoble Alpes. Corresponding author: Requena Florian. The authors are with Univ. Grenoble Alpes, Grenoble INP, LCIS, F-26000 Valence, France. E. Perret is also with Institut Universitaire de France, Paris, France.)
We can cite open resonators [START_REF] Cullen | The accurate measurement of permittivity by means of an open resonator[END_REF], [START_REF] Hirvonen | Measurement of dielectrics at 100 ghz with an open resonator connected to a network analyzer[END_REF] and dielectric rod resonators short-circuited at both ends by two parallel conducting plates [START_REF] Hakki | A dielectric resonator method of measuring inductive capacities in the millimeter range[END_REF], [START_REF] Kobayashi | Microwave measurement of dielectric properties of low-loss materials by the dielectric rod resonator method[END_REF]. For the solid dielectric materials exclusively dealt with in this paper, a major problem is the accurate machining of a specimen of the material to fit closely into the resonator or waveguide. In recent years, planar microstrip resonators using the material under characterization as a substrate were introduced in [START_REF] Lobato-Morales | Wireless sensing of complex dielectric permittivity of liquids based on the RFID[END_REF], [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF], [START_REF] Perret | Permittivity characterization based on radar cross measurements[END_REF]. This technique enables a wireless measurement based on a radar principle and avoids the machining drawback. While in [START_REF] Lobato-Morales | Wireless sensing of complex dielectric permittivity of liquids based on the RFID[END_REF] the RF access lines conventionally used to connect to a VNA [START_REF] Krupka | Frequency domain complex permittivity measurements at microwave frequencies[END_REF] were replaced by antennas, [START_REF] Perret | Permittivity characterization based on radar cross measurements[END_REF] and [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF] present an approach where only a resonant target is present, without any antenna. In [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF], the resonant target (an open metal ring on a known dielectric substrate) is realized on PCB; the material to be characterized is a small dielectric slab, a few mm long, positioned in the opening of the ring. This method is easy to implement and low-cost, since wireless test benches only require one antenna and a VNA. In addition, it offers real-time and accurate measurements, making it a potential solution for dielectric characterization. Contrary to [START_REF] Hakki | A dielectric resonator method of measuring inductive capacities in the millimeter range[END_REF], [START_REF] Kobayashi | Microwave measurement of dielectric properties of low-loss materials by the dielectric rod resonator method[END_REF], the sample under test does not need to be precisely positioned in the test bench (e.g. the distance between the antenna and the resonator can easily be changed).
In these works, the permittivity [START_REF] Perret | Permittivity characterization based on radar cross measurements[END_REF], or the complex permittivity [START_REF] Lobato-Morales | Wireless sensing of complex dielectric permittivity of liquids based on the RFID[END_REF], [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF], is directly linked to the resonance, impacting both its frequency and its attenuation. However, in [START_REF] Lobato-Morales | Wireless sensing of complex dielectric permittivity of liquids based on the RFID[END_REF], [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF], measurements or simulations are used to fit a model which describes the resonance frequency behavior. In this paper, the complex permittivity is related to the resonance through a theoretical approach, and an analytical formula is obtained. In addition, the characterization frequency band can be increased by using different resonators operating at different frequencies. As in [START_REF] Perret | Permittivity characterization based on radar cross measurements[END_REF], the approach is based on the use of an independent metallic resonator which is attached to the dielectric plate to be measured and whose thickness is assumed to be known. However, a more general approach is introduced here in order to determine the dielectric losses. Compared to [START_REF] Wiltshire | Passive split ring resonator tag configuration for RFID-based wireless permittivity sensing[END_REF], the sensitivity is maximized by using no other dielectric than the one to be characterized. Besides, there is no need to machine the dielectric: its dimensions should simply largely cover the metallic resonator. In Section II the equations relating the resonance to the complex permittivity are introduced. Section III focuses on the complex permittivity extraction from radar measurements. Sections IV and V present simulations and measurements of several samples, as well as discussions on the accuracy and sensitivity of the proposed approach. Finally, Section VI concludes the paper.

II. COMPLEX RESONANCE FREQUENCY

The principle of the measurement is illustrated in Fig. 1a, which shows the measurement method based on the acquisition of the backscattered signal from a resonator in contact with the dielectric to be characterized. In this study, the loop resonator shown in Fig. 1b is used. Using the Singularity Expansion Method (SEM), it can be shown that, in the time domain, the signal u(t) back-scattered by a resonator can be written as an exponentially damped sinusoid:

$$u(t) = A\, e^{-\sigma (t-2\tau)} \cos\big(\omega (t-2\tau)\big)\, \Gamma(t-2\tau), \qquad (1)$$

where A is the amplitude of the back-scattered signal, Γ is the Heaviside function and τ the propagation delay. This signal is illustrated in Fig. 2. Let us define the complex resonance frequency s = σ + jω to analyze the radar response of a resonant target in terms of damping factor σ and angular resonance frequency ω. The back-scattered signal can then be rewritten as

$$u(t) = A \times \operatorname{Re}\!\left[e^{-s (t-2\tau)}\right] \Gamma(t-2\tau), \qquad (2)$$

where Re denotes the real part.
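To make the signal model of (1) concrete, the following short Python sketch generates such a damped sinusoid. It is illustrative only: the amplitude A, damping σ, angular frequency ω and delay τ are arbitrary placeholder values, not values taken from the paper.

```python
import numpy as np

# Illustrative parameters only (placeholders, not values from the paper)
A, sigma, omega, tau = 1.0, 2e8, 2 * np.pi * 2.9e9, 3.3e-9   # resonance near 2.9 GHz

t = np.linspace(0, 20e-9, 4001)                  # 20 ns observation window
step = (t >= 2 * tau).astype(float)              # Heaviside function Gamma(t - 2*tau)
u = A * np.exp(-sigma * (t - 2 * tau)) * np.cos(omega * (t - 2 * tau)) * step
```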
If a simple loop resonator (as illustrated in Fig. 1) is used as the radar target, its resonance frequency f_r is given by [START_REF] Rance | Contactless characterization of coplanar stripline discontinuities by rcs measurement[END_REF]:

$$f_r = \frac{c}{2L\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}}, \qquad (3)$$

where c is the speed of light in vacuum, L the length of the resonant scatterer, q the filling factor, a coefficient accounting for the support thickness [START_REF] Chen | Characteristics of coplanar transmission lines on multilayer substrates: Modeling and experiments[END_REF], and ε_r the permittivity of the dielectric under test. The complex natural resonance of a resonator placed on a lossless dielectric is therefore

$$s_r = \sigma_r + j\omega_r = \sigma_r + j\, 2\pi \frac{c}{2L\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}}. \qquad (4)$$

Even in the lossless case, σ_r is not zero, since it accounts for metallic and radiation losses. When a dielectric presents RF losses, its permittivity can be written as ε = ε_r(1 - j tan δ), where tan δ is the dielectric loss tangent; the complex resonance frequency then becomes

$$f_d = \frac{c}{2L\sqrt{1 + \frac{\varepsilon_r (1 - j\tan\delta) - 1}{2}\, q}}, \qquad (5)$$

which gives the expression of the complex natural resonance s_d in the lossy case:

$$s_d = \sigma_r + jB\, e^{-\frac{j}{2}\tan^{-1}\!\left(\frac{\tan\delta\, q\,\varepsilon_r}{2 + q(\varepsilon_r - 1)}\right)},
\qquad B = \frac{\omega_0}{\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}\,\left[1 + \left(\frac{\tan\delta\, q\,\varepsilon_r}{2 + q(\varepsilon_r - 1)}\right)^2\right]^{1/4}}. \qquad (6)$$

Note that the subscript d is used hereafter to designate expressions in which the losses of the dielectric to be characterized are taken into account. For low losses (tan δ ≪ 1), which is usually the case for RF substrates, the complex resonance frequency f_d can be written

$$f_d \simeq F_0 \left(1 - \frac{j}{2} X - \frac{3}{8} X^2\right), \qquad (7)$$

where

$$F_0 = \frac{c}{2L\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}} = \frac{f_0}{\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}}, \qquad X = \frac{\tan\delta\, q\,\varepsilon_r}{2 + q(\varepsilon_r - 1)}.$$

Therefore, the complex natural resonance s_d becomes

$$s_d = \sigma_d + j\omega_d = \sigma_0 + \frac{1}{2} W_0 X + j W_0\!\left(1 - \frac{3}{8} X^2\right), \qquad (8)$$

where W_0 = 2πF_0. After identification, the damping factor and resonance frequency for a lossy dielectric, (σ_d, ω_d), are

$$\sigma_d = \sigma_0 + \frac{\omega_0}{2\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}}\;\frac{\tan\delta\, q\,\varepsilon_r}{2 + q(\varepsilon_r - 1)}, \qquad (9a)$$

$$\omega_d = \frac{\omega_0}{\sqrt{1 + \frac{\varepsilon_r - 1}{2}\, q}}\left[1 - \frac{3}{8}\left(\frac{\tan\delta\, q\,\varepsilon_r}{2 + q(\varepsilon_r - 1)}\right)^2\right]. \qquad (9b)$$

Note that in (9) only ε_r and tan δ are unknown: (σ_0, ω_0) and (σ_d, ω_d) are the damping factor and resonance frequency measured for the loop resonator with no support and with the dielectric under test, respectively. The length L does not affect the measurement, and different lengths can be used to characterize the losses at different resonance frequencies. The coefficient q can be calculated directly using [START_REF] Chen | Characteristics of coplanar transmission lines on multilayer substrates: Modeling and experiments[END_REF] or obtained through simulations, as explained later in Section IV. From the measurement of the backscattered signal of a resonator (see Fig. 2), it is thus possible to extract the values of the damping factor and of the resonance frequency. Several techniques allow the extraction of these coefficients from the backscattered electromagnetic signature of a resonant scatterer; approaches applied to the time-domain representation of the signal [START_REF] Sarkar | Using the matrix pencil method to estimate the parameters of a sum of complex exponentials[END_REF] or to a frequency-domain representation [START_REF] Ali | Extraction of aspect-independent parameters using spectrogram method for chipless frequency-coded rfid[END_REF] are both considered in the literature.
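As a numerical illustration of (9a)-(9b), the short Python sketch below evaluates the damping factor and resonance frequency of the loop once it is placed on a lossy dielectric. It is only a sketch: the in-air angular resonance is approximated by 2πc/(2L), and the input values (q, σ_0 and the dielectric parameters) are placeholders chosen for illustration, not measured data.

```python
import numpy as np

def loop_on_dielectric(eps_r, tan_d, q, L, sigma_0):
    """Forward model of (9a)-(9b): complex natural resonance of the loop
    when placed on a dielectric (eps_r, tan_d) with filling factor q."""
    c = 299792458.0
    omega_0 = 2 * np.pi * c / (2 * L)                 # in-air angular resonance (approx.)
    X = tan_d * q * eps_r / (2 + q * (eps_r - 1))
    root = np.sqrt(1 + (eps_r - 1) * q / 2)
    sigma_d = sigma_0 + omega_0 / (2 * root) * X      # (9a)
    omega_d = omega_0 / root * (1 - 3 / 8 * X**2)     # (9b)
    return sigma_d, omega_d

# Example: L = 50 mm loop, q = 0.58, on a RO4003C-like substrate
sigma_d, omega_d = loop_on_dielectric(3.55, 0.0027, 0.58, 50e-3, 1e8)
print(omega_d / (2 * np.pi) / 1e9, "GHz")
```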
Fig. 2 presents an example of a time-domain back-scattered signal (in simulation) obtained from the loop illustrated in Fig. 1. In this figure, one can see both the backscattered signal obtained by simulation and the signal reconstructed from the extracted damping factor and resonance frequency based on [START_REF] Sarkar | Using the matrix pencil method to estimate the parameters of a sum of complex exponentials[END_REF]. For a rectangular loop of this type, there is a trade-off between the RCS level and the quality factor. This trade-off is primarily related to the width of the gap g between the two larger arms of the rectangular loop: the smaller the gap, the higher the quality factor and the lower the RCS. An optimization in CST allowed a good compromise to be obtained, the objective being to keep a quality factor above 100 in air.

III. PERMITTIVITY EXTRACTION

Equation (9) can be rewritten as

$$\tan\delta = \frac{2(\sigma_d - \sigma_0)\sqrt{1 + \frac{\varepsilon_r - 1}{2} q}}{\omega_0}\;\frac{2 + q(\varepsilon_r - 1)}{q\,\varepsilon_r},
\qquad
\bar\omega \sqrt{1 + \frac{\varepsilon_r - 1}{2} q} = 1 - \frac{3}{2}\,\bar\sigma^2 \left(1 + \frac{\varepsilon_r - 1}{2} q\right), \qquad (10)$$

where $\bar\omega = \omega_d/\omega_0$ and $\bar\sigma = (\sigma_d - \sigma_0)/\omega_0$. By letting $Y = \sqrt{1 + \frac{\varepsilon_r - 1}{2} q}$, the second equation becomes a quadratic equation $aY^2 + bY + c = 0$ with

$$a = \frac{3}{2}\bar\sigma^2, \qquad b = \bar\omega, \qquad c = -1. \qquad (11)$$

Its real positive solution is

$$Y = \frac{-\bar\omega + \sqrt{\bar\omega^2 + 6\bar\sigma^2}}{3\bar\sigma^2}. \qquad (12)$$

The real part of the permittivity then follows from the definition of Y:

$$\varepsilon_r = 1 + \frac{2}{q}\left(Y^2 - 1\right) = 1 + \frac{2}{q}\left[\left(\frac{-\bar\omega + \sqrt{\bar\omega^2 + 6\bar\sigma^2}}{3\bar\sigma^2}\right)^2 - 1\right], \qquad (13)$$

and the losses follow from (10) and (13):

$$\tan\delta = \frac{4\,\bar\sigma\, Y^3}{q + 2\left(Y^2 - 1\right)}, \qquad (14)$$

with Y given by (12). These formulas are used to extract ε_r and tan δ. The dielectric characterization therefore requires two measurements of the loop resonator: a first one with no support, and a second one with the dielectric under test.
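A direct implementation of (12)-(14) can be sketched as follows. Python is used here purely for illustration; the function name is arbitrary and the formulas assume a lossy sample (σ_d ≠ σ_0).

```python
import numpy as np

def extract_permittivity(sigma_0, omega_0, sigma_d, omega_d, q):
    """Complex permittivity from the in-air CNR (sigma_0, omega_0) and the
    on-dielectric CNR (sigma_d, omega_d), following (12)-(14).
    Assumes a lossy sample, i.e. sigma_d != sigma_0."""
    w = omega_d / omega_0                                  # normalized resonance frequency
    s = (sigma_d - sigma_0) / omega_0                      # normalized damping shift
    Y = (-w + np.sqrt(w**2 + 6 * s**2)) / (3 * s**2)       # positive root of (11), i.e. (12)
    eps_r = 1 + 2 * (Y**2 - 1) / q                         # (13)
    tan_d = 4 * s * Y**3 / (q + 2 * (Y**2 - 1))            # (14)
    return eps_r, tan_d
```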
IV. SIMULATIONS

Simulations have been carried out using CST MW to validate the introduced equations. A far-field E-probe with the time-domain solver is used, with the loop resonator configuration of Fig. 1. As previously explained, for a rectangular loop of this type there is a trade-off between the RCS level and the quality factor, and an optimization in CST allowed a good compromise to be obtained. The final dimensions are L = 50 mm, w = 1.4 mm and g = 2 mm, with a metal thickness of 1 mm. The loop is placed on the dielectric material to be characterized, of permittivity ε_r and loss tangent tan δ. The dielectric height is 1 mm, so q ≃ 0.58 based on [START_REF] Chen | Characteristics of coplanar transmission lines on multilayer substrates: Modeling and experiments[END_REF]. The excitation is a plane wave polarized along the y-axis (see Fig. 1b). The probe used to record the backscattered signal is placed 1 meter away from the resonator. The simulated back-scattered E-fields for different values of ε_r and tan δ are plotted in Fig. 3; these simulations show the impact of the variation of the dielectric parameters on both the resonance frequency and the damping factor. Based on these simulations, the Matrix Pencil method [START_REF] Sarkar | Using the matrix pencil method to estimate the parameters of a sum of complex exponentials[END_REF] is used to extract the complex natural resonance (CNR) s = σ + jω. For each configuration, the resonance frequency f = ω/2π as well as the damping factor σ are plotted in Fig. 4.

The correspondence between the values of the complex permittivity and the number (noted "Run ID") associated with each simulation is given in Table I. We can see that the resonance frequency decreases with an increase of the real part and also of the imaginary part of the permittivity, while the damping factor mostly increases with the imaginary part of the permittivity (the losses). Fig. 4 also presents the results obtained with (9), which show a good agreement with the simulations. The study of the error obtained from (9) will be carried out in the measurement section: in simulation, even if the structure is relatively simple, the error is directly linked to the loss models used in the EM simulator, which is not what we seek to study in this article. Based on the values of the simulated resonance frequency and damping factor, the complex permittivity can be extracted using (13) and (14). The estimated ε_r and tan δ are plotted in Fig. 5; again, the extracted values are in good agreement with the simulated ones.

A discussion can now be made concerning the sensitivity and accuracy of the proposed method based on the resonant scatterer. As shown in (9), the only factor that impacts the results is the coefficient q: the higher the value of q, the larger the shifts in the signal amplitude or in the resonance frequency, and thus the better the expected sensitivity of the complex permittivity extraction. To increase q, the loop geometry as well as the thickness of the dielectric under test should be considered when designing the resonator [START_REF] Chen | Characteristics of coplanar transmission lines on multilayer substrates: Modeling and experiments[END_REF]. A simulation with two different gap values for the loop is presented in Fig. 6: a smaller gap induces a higher value of q, and hence larger shifts in the resonance frequency are observed. The study of the variation of the resonance frequency as a function of q, more precisely the computation of d(ω_r/ω_0)/dq obtained by differentiating (9b) [see Fig. 8], shows analytically that the resonance frequency of the resonator on the dielectric varies the most when the value of q is high. This explains why the error obtained is smaller for thick substrates, where q is large. Similarly, the greater the thickness of the substrate, the lower the resonance frequency and therefore the greater the frequency variation compared to the case without the dielectric; measuring this larger offset leads to a smaller error. Furthermore, Fig. 7 presents the simulated filling factor q for different dielectric heights and different resonator metal thicknesses. We should notice that [26, eq. (21)] offers a good agreement with simulations for very thin metal (the metal thickness being considered infinitely thin); when the resonator's metal is thicker, [26, eq. (21)] is no longer valid to determine q accurately. For this reason, the values of q extracted from simulations and shown in Fig. 7 are used in Section V for the dielectric characterization. Contrary to the classical cavity method, this approach does not require a thin dielectric sample to work: as shown in Fig. 7, the thicker the dielectric, the larger the value of q, which improves the final estimation of the complex permittivity.
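The CNR extraction itself is performed in the paper with the Matrix Pencil method. As a simpler, illustrative stand-in (not the authors' algorithm), the sketch below estimates (σ, ω) of a single-mode transient by one-pole linear prediction on the analytic signal; it assumes a single dominant resonance and sampling well above Nyquist.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_cnr(u, dt):
    """Estimate the dominant complex natural resonance (sigma, omega) of a real,
    single-mode transient u[n] ~ A exp(-sigma t) cos(omega t + phi).
    Minimal stand-in for the Matrix Pencil method used in the paper."""
    x = hilbert(u)                                   # analytic signal ~ exp((-sigma + j*omega) t)
    # Least-squares one-pole linear prediction: x[n+1] = z * x[n]
    z = np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])
    s = np.log(z) / dt                               # s = -sigma + j*omega (omega*dt < pi required)
    return -s.real, s.imag                           # (sigma, omega)
```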
Fig. 5: ε_r and tan δ estimated with (13) and (14) respectively (estimated tan δ versus Run ID, for simulated ε ranging from 1 to 3.5). The different values used for the study, and the corresponding Run ID numbers, are shown in Table I.

V. MEASUREMENTS

The proposed approach is now validated by measurements on different samples. The samples and dielectric thicknesses evaluated in practice are given in Table II. The "Red" test sample was characterized using the cavity method for complex permittivity measurements (the Damaskos, Inc. Model 08 Thin Sheet Tester, which measures the dielectric properties of low-loss materials over the approximate band 800-4000 MHz in a non-destructive manner). The following protocol was used for the measurements. Firstly, the loop resonator is measured in the air using a monostatic configuration inside an anechoic chamber (see Fig. 1 and Fig. 9). An Agilent VNA 5222A was used, with a source power of 0 dBm and a frequency sweep from 2 to 3 GHz with 4001 points. A mono-static configuration with a Satimo QH800 quad-ridged open-boundary antenna (0.8-12 GHz) is used.

Fig. 7: Filling factor q as a function of the dielectric height and the metal thickness. × markers correspond to a metal thickness of 0.01 mm, + markers to a thickness of 0.5 mm and ○ markers to 1 mm; □ is the q calculated using [26, eq. (21)].

A dual-linear antenna was used for this measurement; however, any simpler (linear polarization) and less expensive antenna (e.g. a Vivaldi antenna realized on PCB) would achieve the same performance for this application. Indeed, to keep the same SNR with a less directive antenna, it is sufficient to decrease the antenna-resonator distance. Compared to [START_REF] Perret | Permittivity characterization based on radar cross measurements[END_REF], where a bistatic configuration was used for the measurement, the results presented here were obtained with a mono-static configuration, which is simpler to implement and is shown to be sufficient to achieve the required degree of accuracy for the introduced extraction. The loop resonators were obtained by cutting a 0.5 mm thick metal plate with a laser; the dimensions are L = 50.82 mm, w = 1.43 mm and g = 2.07 mm. In this case, high-purity copper (more than 99% purity) was chosen for the fabrication [29]. Tape was used to suspend the loop in the air (see Fig. 9a) so as to be as close as possible to a free-space condition. Tape was also used to attach the loop to the samples, as can be seen in Fig. 9b; the use of tape is very effective in minimizing the gap between the dielectric and the loop. This is especially important since these two elements are never perfectly planar (especially the metallic loop), and the tape therefore helps to keep them in contact during the measurement. The way in which the tape was used to clamp the loop was studied both in simulation and in practice so as to have as little impact as possible on the backscattered signal. The configuration chosen is shown in Fig. 9a and provides the best compromise between a very low impact on the resonance frequency and damping factor, and a very good hold of the loop on the dielectric. It is also important to note that, to account as much as possible for the presence of the tape in the method, tape was used both for the measurement in air (Fig. 9a) and when the loop is placed on the dielectric.
This also reduces the effect of the tape on the introduced measurement method. The Matrix Pencil method [START_REF] Sarkar | Using the matrix pencil method to estimate the parameters of a sum of complex exponentials[END_REF] is used on the measured S-parameters to evaluate (σ_0, ω_0). Secondly, the loop resonator is taped to the sample under test as illustrated in Fig. 9b, and Matrix Pencil is used once again on the S-parameters to evaluate (σ_d, ω_d). Finally, (13) and (14) are used to estimate the complex permittivity of the sample, the value of q being taken from the results of Fig. 7 for the corresponding dielectric thickness and for the 0.5 mm metal thickness of the resonator used in practice. An example of measured S-parameters is given in Fig. 10 for different substrates. For each of them, the extracted dielectric parameters are given in Table II. The estimated complex permittivity is close to the provider's data for the different samples and the different dielectric thicknesses. Only the losses of the Duroid RT5880 differ: for both heights, the losses measured with the proposed approach at 2.5 GHz are about 4 times larger than those given by the provider at 10 GHz. Although part of the difference may be attributable to the difference in frequency, it is also possible that the limitation of the approach regarding the minimum loss tangent that can be measured is observed here. We were also interested in evaluating the maximum loss tangent values that the method can extract: since the method relies on the extraction of the resonance frequency and the damping factor, this extraction is impacted by too high a loss tangent. In simulation, with the loop used here, it becomes difficult to characterize dielectrics with loss tangents greater than 0.4; reaching higher values would require working on the design of the loop resonator, or even opting for another resonator. It is also important to notice that these values are frequency dependent (especially tan δ in this frequency band). The proposed method measures the complex permittivity at the resonance frequency of the scatterer when it is placed on the sample to be characterized; designing resonators of different lengths is therefore a solution to determine the complex permittivity at multiple frequencies. The scatterers used in this paper were designed to have their fundamental resonance frequency close to 3 GHz in air.

Now that the approach has been validated in practice, a second procedure is presented in order to correct systematic errors. When the resonator is taped to the dielectric, a small air gap can always be present between the resonator and the dielectric, changing the effective permittivity seen by the resonator; the tape also induces a constant frequency shift. When resonators are taped several times to the same dielectric, we notice that the resonance frequency does not vary much, meaning that the air gap (with tape) is constant between measurements. The improvement proposed here is to use a known dielectric as a calibration standard to estimate the "in-air" resonator values, which are then used to characterize a second dielectric under test. For a known dielectric, i.e. known (ε_cal, tan δ_cal), inverting (9) gives

$$\omega_0 = \frac{\omega_{cal}\,\sqrt{1 + \frac{\varepsilon_{cal} - 1}{2}\, q}}{1 - \frac{3}{8}\left(\frac{\tan\delta_{cal}\, q\,\varepsilon_{cal}}{2 + q(\varepsilon_{cal} - 1)}\right)^2},
\qquad
\sigma_0 = \sigma_{cal} - \frac{\omega_0}{2\sqrt{1 + \frac{\varepsilon_{cal} - 1}{2}\, q}}\;\frac{\tan\delta_{cal}\, q\,\varepsilon_{cal}}{2 + q(\varepsilon_{cal} - 1)}. \qquad (15)$$
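A possible numerical form of this calibration step, reusing the extract_permittivity sketch given earlier, could look as follows. This is again an illustrative sketch only: the function names are arbitrary, and the filling factors q_cal and q_dut must correspond to the respective sample thicknesses.

```python
import numpy as np

def in_air_cnr_from_reference(sigma_cal, omega_cal, eps_cal, tan_cal, q_cal):
    """Recover the 'in-air' CNR (sigma_0, omega_0) from a measurement made on a
    known reference dielectric, by inverting (9a)-(9b) as in (15)."""
    X = tan_cal * q_cal * eps_cal / (2 + q_cal * (eps_cal - 1))
    root = np.sqrt(1 + (eps_cal - 1) * q_cal / 2)
    omega_0 = omega_cal * root / (1 - 3 / 8 * X**2)
    sigma_0 = sigma_cal - omega_0 / (2 * root) * X
    return sigma_0, omega_0

# Usage sketch: calibrate on a known substrate, then characterize the DUT
# sigma_0, omega_0 = in_air_cnr_from_reference(sigma_cal, omega_cal, 3.55, 0.0027, q_cal)
# eps_dut, tan_dut = extract_permittivity(sigma_0, omega_0, sigma_dut, omega_dut, q_dut)
```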
For a known dielectric and so its (ε cal , tan δ cal ), we can write :                      ω 0 = w cal 1 + ε cal -1 2 q 1 - 3 8 tan δ cal qε cal 2 + q(ε cal -1) 2 σ 0 = σ cal + ω 0 2 1 + ε cal -1 2 q tan δ cal qε cal 2 + q(ε cal -1) . [START_REF] Shimabukuro | A quasioptical method for measuring the complex permittivity of materials[END_REF] For this reason, the initial measurement in air can be replaced by a measurement on a known dielectric and (15) can be used to determine (σ 0 , tan δ 0 ). By using this procedure to estimate the RT5880 using the RO4003C (both with a thickness of around 0.8mm) as a calibration dielectric permits to measure ε RT = 2.32 and tan δ RT = 0.003 at 2.6GHz hence improving a little the results. Note that in such a case the two dielectrics don't need to have the same thickness. Indeed, the value corresponding to the measurement of the loop and the dielectric of reference is computed using [START_REF] Shimabukuro | A quasioptical method for measuring the complex permittivity of materials[END_REF] with the filling factor q corresponding to this dielectric and so its thickness. Then the permittivity is estimated with ( 13) and ( 14) with the filling factor of the dielectric under-test. VI. DISCUSSIONS The resonance frequency and damping factor are related to the geometry and materials constituting the resonator. Indeed, from a theoretical point of view, and this is also what makes this method so interesting, the quantities involved in the extraction (resonance frequency and quality factor) are aspect-independent parameters [START_REF] Ali | Extraction of aspect-independent parameters using spectrogram method for chipless frequency-coded rfid[END_REF], which means that they are not linked to the measurement bench (such as the gain of the antennas, the distance, the orientation of the antennas...). It is therefore possible to carry out measurements at two different distances (measurement without and with dielectric), but this is of very limited interest in practice. Note that this measurement can easily be implemented in controlled environment for metrology measurements. In this respect, the maximum distance that can be used with this technique will depend on the environment and the measurement time (linked in particular to the averaging used) and the precision of the measurement equipment. However, a notable limitation of the approach occurs at the level of dielectrics with significant losses. Indeed, we could see in simulation (not shown in the paper) that for loss tangent higher than 0.5, it is no longer possible to extract the quality factor. This means that it is no longer possible to access the material losses. As far as the range of values on the real part of the permittivity is concerned, we did not observe such limitations. Indeed, we were able to test that a material with a permittivity of 10 was perfectly compatible with the introduced approach. In practice, if the distance between the resonator and the antenna increases, the SNR will decrease, so the estimation of the poles and then the complex permittivity will be impacted. Monte-Carlo (MC) simulation for a SNR of 20dB on the Sparameters is shown in Fig. 11. Noise was added on the time response signal of the simulated loop on a lossy dielectric substrate. The simulated permittivity and losses range respectively from 1 to 3.5 and 0 to 0.1. This simulates the effect on complex permittivity extraction when the distance between the antenna and the resonator is increased. 
We can see in these numerical simulations shown in Fig. 11 the impacts in terms of errors and uncertainties on the extracted permittivity. We can see that a SNR of 20dB does not introduce errors or uncertainties sufficient to modify the extraction. However, as expected, it will introduce slight uncertainties on the loss tangent. By lowering the SNR to 5dB, similar results were obtained on the extracted permittivity. The measurement of the SNR in practice and the use of the presented MC simulation can allow the user to estimate the uncertainties on his measurement setup (noise, distance for antenna-resonator,...). Sensing is improved when the estimation is robust to the noise, which is illustrated by the MC simulations presented Fig. 11. Sensing is also improved with higher variations of the measurand (higher variation usually means peaks easier to measure but also less correlated to noise). These variations of the resonance frequency in term of sensibility can be compared to others works [START_REF] Kiani | Microwave sensor for detection of solid material permittivity in single/multilayer samples with high quality factor[END_REF]. Note that the present manuscript does not try to improve the sensitivity S [30, eq. ( 3)] but is rather on the theory and setup used to measure the complex permittivity. However, in Fig. 6, a simulation is made to illustrate how the ε = 1 ε = 1.5 ε = 2 ε = 2.5 ε = 3 ε = 3.5 Run ID Extracted tan δ Fig. 11. Simulated complex permittivity in red and extracted complex permittivity in blue with a MC simulation with a SNR of 20dB. The simulated permittivity and losses range respectively from 1 to 3.5 and 0 to 0.1. design of the loop can be improve to obtain higher values of S. In this simulation and for a gap g=0.5mm (plain line), a value of S = (f 1f 2 ) f 1 ∆ε = 8.19% is obtained. We can see that this design introduced a sensitivity 2.5 times higher than the most sensitive permittivity sensor presented in [START_REF] Kiani | Microwave sensor for detection of solid material permittivity in single/multilayer samples with high quality factor[END_REF]. Note also that, the present work allows in addition to measure the losses of the dielectric which is not done in [START_REF] Kiani | Microwave sensor for detection of solid material permittivity in single/multilayer samples with high quality factor[END_REF]. Sensitivities S obtained in measurements are given in Table II. A comparison of sensitivities S of different works is given in Table III. Table II also illustrates the impact of the resonator geometry already discussed in Section IV. Indeed, we can see that considering one dielectric, the sensitivity S increases with the thickness of the sample under-test as q increases. Lastly, in [START_REF] Requena | Thermal modeling of resonant scatterers and reflectometry approach for remote temperature sensing[END_REF], section X, it was shown that different resonator topologies can be used to improve even more this sensibility. The dipole over a ground plane shown in [START_REF] Requena | Thermal modeling of resonant scatterers and reflectometry approach for remote temperature sensing[END_REF] can be used to obtain even higher sensitivity but it was not presented in the current paper since its utilization as a complex permittivity sensor seems more difficult to implement. The dipole resonator needs to be printed on the dielectric under test while the loop can be manually placed as it is done in the manuscript. 
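The Monte-Carlo noise study summarised above (Fig. 11) can be reproduced in spirit with a short script: noise is added to a synthetic resonator response at a prescribed SNR, the poles are re-extracted, and the spread of the results gives the uncertainty before conversion to (εr, tan δ) through (13) and (14). Everything below (sampling step, "true" pole, number of draws) is an invented illustration that does not reproduce Fig. 11 quantitatively, and matrix_pencil_poles is the sketch given in Section V.

```python
# Monte-Carlo sketch of the pole-extraction uncertainty at a given SNR.
# Requires matrix_pencil_poles() from the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 40e9                                     # assumed sampling step (25 ps)
t = np.arange(512) * dt
sigma_true, f_true = -2.0e8, 2.6e9                  # assumed "true" damping and frequency
y0 = np.exp(sigma_true * t) * np.cos(2 * np.pi * f_true * t)

def with_noise(y, snr_db):
    p_sig = np.mean(np.abs(y) ** 2)
    return y + rng.normal(scale=np.sqrt(p_sig / 10 ** (snr_db / 10)), size=y.shape)

draws = []
for _ in range(200):                                # 200 draws at 20 dB SNR
    sig, omg = matrix_pencil_poles(with_noise(y0, 20.0), dt, n_modes=2)
    k = int(np.argmax(omg))                         # keep the positive-frequency mode
    draws.append((sig[k], omg[k] / (2 * np.pi)))
draws = np.array(draws)
print("f_res = %.4f GHz +/- %.2g MHz" % (draws[:, 1].mean() / 1e9, draws[:, 1].std() / 1e6))
print("sigma = %.3e 1/s +/- %.2g" % (draws[:, 0].mean(), draws[:, 0].std()))
# The spread would then be propagated through (13)-(14) to get the uncertainty
# on eps_r and tan_delta, which is what Fig. 11 reports.
```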
The MC simulation allows to define a confidence interval in the complex permittivity values extracted with the proposed approach. Indeed Fig. 12 presents the error between simulated values (entered in CST) and extracted values (with the introduced approach) for different simulated complex permittivity values. It is interesting to note that this error mainly characterises the accuracy of the model and indeed the values obtained here are very close to the deviation that could be calculated between the values entered on CST and extracted by the model presented in Fig. 5 (in this case no noise has been added contrary to the results in Fig. 11). Thus, further work on the model (for certain permittivity or loss ranges, for example through a calibration), could further improve the accuracy of the approach. It is also interesting to consider the uncertainties obtained through the MC simulation (see Fig. 11). Indeed, as can be seen in this figure, this uncertainty (for a confidence level of one standard deviation, i.e. approximately 68%) is very low compared to the errors shown in Fig. 12. This shows that the noise added in the MC simulation has very little effect on the value extracted by the approach, which shows a very good accuracy. Taking these elements into account, we can say that the real value (entered in CST) is located (in the worst-case scenario) in a confidence interval which can be estimated based on the errors plotted in Fig. 12 and the uncertainty directly computed with the MC simulation. As the uncertainty is very small compared to the error, a confidence interval equal to +/-the error can be used here to give an idea of the accuracy that can be expected with this approach. We note also that the results obtained in terms of measurement error are similar to those described in [START_REF] Orloff | Dielectric characterization by microwave cavity perturbation corrected for nonuniform fields[END_REF] where complex permittivity is sensed using a cavity method. Indeed, in [START_REF] Orloff | Dielectric characterization by microwave cavity perturbation corrected for nonuniform fields[END_REF], epoxy was characterized with a permittivity of 2.93±0.11 and losses tangent of 0.028 ± 0.002. We can notice that the proposed approach allows to have similar results on the losses (also ±0.002 in this work). The same applies to permittivity. The measurement results of the samples are collected in Table IV where the confidence interval has been added for each value in order to give a more precise idea on the quality of the results obtained. VII. CONCLUSION In this paper, a method based on a radar approach to measure the materials' complex permittivity is introduced. A loop resonator is used to derivate the complex permittivity expression from back-scattered signals. Simulations and measurements have been realized to validate the proposed non destructive method on different dielectric slabs in the frequency band 2-3 GHz. Discussions on the resonator and dielectric geometries have been done in order to improve the accuracy of the extraction. Also, an additional calibration step using already known dielectric has been proposed and validated in practice to improve the estimations. Fig. 1 . 1 Fig. 1. a) Principle of the measurement of the scatterer. b) Loop resonator considered in this study. The support is the dielectric under-test defined by ε = εr(1 -j tan δ). Fig. 2 . 2 Fig. 2.Time domain back-scattered signal from a loop resonator in simulation. Fig. 3 . 3 Fig. 3. 
Simulated back-scattered E-field amplitude for εr = [1 : 0.5 : 3.5] and tan δ ∈ [0, 0.005, 0.05, 0.2] for the loop resonator of Fig. 1 impinged by a plane wave. Dielectric height is 1mm.

Fig. 4. Comparison between simulations and equations: a) damping factor, b) resonance frequency.

Fig. 6. Influence of the gap g of the loop on the extraction of the complex permittivity: gap g = 0.5mm (q = 0.77) [plain line] and g = 2mm (q = 0.6) [dashed line]. The blue curves correspond to a permittivity of ε = 1 and the red ones to a permittivity of ε = 3.55 * (1 - j0.0021).

Fig. 8. Variation of the resonance angular frequency ω as a function of the filling factor q.

Fig. 9. Testbench used for the measurement. a) Initial measurement to evaluate (σ0, ω0). b) Measurement of the "Red" test sample where the loop has been positioned over the sample with tape.

Fig. 10. Measurement of the S11 parameters for a loop resonator in air (blue) and on the different substrates.

Fig. 12. Error for a) the real part and b) the losses between simulated and extracted values. The simulated permittivity and losses range respectively from 1 to 3.5 and 0 to 0.1.

TABLE I. Correspondence between the simulation "Run ID" and (εr, tan δ): Run IDs 1-24 scan εr = 1, 1.5, 2, 2.5, 3, 3.5 (four consecutive IDs per εr value), each value of εr being combined with tan δ = 0, 0.005, 0.05, 0.2.

TABLE II. Complex permittivity of the samples given by their provider and measured by this approach. Columns: thickness (mm); εr, tan δ (provider); εr, tan δ (cavity method); εr, tan δ (this approach); sensitivity S (this approach).
- Rogers RO4003C (0.813 / 1.524 mm): provider εr 3.55 / 3.55, tan δ 0.0021 @2.5GHz; cavity method εr 3.72, tan δ 0.0021 @2.5GHz and εr 3.49, tan δ 0.0029 @2.5GHz; this approach εr 3.54, tan δ 0.0014 @3GHz and εr 3.59, tan δ 0.0024 @2.4GHz / 0.0031 @2.2GHz; S = 5.15% / 6.45%.
- Duroid RT5880 (0.787 / 1.575 mm): provider εr 2.33 / 2.33, tan δ 0.0012 @10GHz; cavity method εr 2.29 / 2.29, tan δ 0.0018 @2.5GHz; this approach εr 2.31 / 2.31, tan δ 0.004 @2.6GHz / 0.004 @2.5GHz; S = 4.61% / 5.94%.
- Red (1.958 mm): provider/cavity columns 4.24, 0.0173 @3.1GHz and 4.25, 0.018 @3.1GHz; this approach εr 4.27, tan δ 0.012 @2.2GHz; S = 10.71%.

TABLE III. Comparison of the sensitivity S of different works. Works: [32], [33] | [34] | [35] | [9] | this work. Frequency (GHz): 3 | 7.6 | 2.1 | 4.5 | 5.65. Sensitivity S: 2.6% | 2.7% | 2.2% | 3.25% | 5.1% to 10.7%.

ACKNOWLEDGMENT
The authors are thankful to Selim Azzouni for his help with cavity measurements and to Nathalie Franck for her help in proofreading the paper.
01367780
en
[ "info.info-lo" ]
2024/03/04 16:41:18
2015
https://hal.science/hal-01367780/file/David2015.pdf
Amélie David email: [email protected] Deciding ATL * Satisfiability by Tableaux Keywords: Alternating-time temporal logic, ATL *, Automated theorem prover, Satisfiability, Tableaux ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction The logic ATL * is the full version of the Alternating-time Temporal Logic introduced in [START_REF] Alur | Alternating-time temporal logic[END_REF] in order to describe open systems, that is systems that can interact with their environment. Thus ATL and ATL * are the multi-agent versions of the branching-time temporal logics CTL and CTL * . Indeed, in ATL and ATL * the environment is modelled by an extra agent e interfering with the system components (the remaining agents) who need to succeed their task no matter how e responds to their actions. ATL * is an important extension of ATL and ATL + (an intermediate logic between ATL and ATL * ) since it allows one to express useful properties such as fairness constraints. Such properties can be expressed only if nesting of temporal operators is possible, which is not the case in ATL and ATL + . It is worth noting that ATL + only permits Boolean combination of unnested temporal operators. The problem studied in this paper is about deciding the satisfiability of ATL * formulae. Models for ATL * formulae are directed graphs called concurrent game structures where transitions between two states depend on the chosen action of each agent. In general, there exists two ways for deciding the satisfiability: using automata, as done in [8] or using tableaux; as we do here. In this paper, All proofs of lemmas, propositions and theorems, as well as complete examples can be found in the version with appendices at https://www.ibisc.univ-evry.fr/ ∼ adavid/ fichiers/cade15 tableaux atl star long.pdf. we propose the first tableau-based decision procedure for ATL * , as well as the first implementation of a decision procedure for ATL * , which is also the first implementation of a tableau-based decision procedure for ATL + . We extend our procedure for ATL + [START_REF] Cerrito | Optimal tableaux-based decision procedure for testing satisfiability in the alternating-time temporal logic ATL+[END_REF] following the natural basic idea: separate present and future. However, this extension is not trivial since the separation into present and future is more subtle than for ATL + and needs to keep track of path formulae so as to be able to check eventualities. We think that our tableau-based decision procedure for ATL * is easy to understand and therefore also provides a new tableau-based decision procedure for CTL * which is conceptually simple. We prove that our procedure runs in at most 3EXPTIME, which is suboptimal (the optimal worst case complexity has been proved to be 2EXPTIME in [8]). However, we do not know of any specific cases where our procedure runs in 3EXPTIME, which leaves the possibility that it is optimal, after all. This paper is organized as follows: Sect. 2 gives the syntax and semantics for ATL * . The general structure of the tableau-based decision procedure for ATL * that we propose can be found in Sect. 3 and details of the procedure in Sects. 4, 5 and 6. Theorems about soundness, completeness and complexity are given in Sect. 7 with their sketch of proof. The Sect. 8 is about the implementation of our procedure. The paper ends with some concluding remarks indicating also some possible directions of future research. 
2 Syntax and Semantics of ATL * ATL * can be seen as an extension of the computational tree logic (CTL * ) [START_REF] Emerson | Deciding full branching time logic[END_REF] where the path quantifiers E -there exists a path -and A -for all pathsare replaced by A and [[A]] where A is a coalition of agents. Intuitively A Φ means "There exists a strategy for the coalition A such that, no matter which strategy the remaining agents follow, Φ holds". On the other hand, [[A]] Φ means "For all strategies of the coalition A, there exists a strategy of the remaining agents such that Φ holds". Also, whereas transition systems or Kripke structures are used in order to evaluate CTL * formulae, concurrent game models (CGM), whose definition is given in the Sect. 2.2 are used to evaluate ATL* formulae. Syntax of ATL * Before giving the syntax of ATL * , we recall that, as for CTL * or LTL, , and U mean "Next", "Always " and "Until " respectively. In this paper, we give the syntax in negation normal form over a fixed set P of atomic propositions and primitive temporal operators "Next", "Always "and U "Until ". The syntax of ATL * in negation normal form is defined as follows: State formulae: ϕ := l | (ϕ ∨ ϕ) | (ϕ ∧ ϕ) | A Φ | [[ A]] Φ (1) Path formulae: Φ := ϕ | Φ | Φ | (Φ UΦ) | (Φ ∨ Φ) | (Φ ∧ Φ) (2) where l ∈ P ∪{¬p | p ∈ P} is a literal, A is a fixed finite set of agents and A ⊆ A is a coalition. Note that ⊤ := p ∨¬ p, ⊥ := ¬⊤, ¬ A Φ := [[A]] ¬Φ. The temporal operator "Sometimes" ♦ can be defined as ♦ϕ := ⊤Uϕ and the temporal operator "Release "a sψ Rϕ := ϕ ∨ ϕ U(ϕ ∧ ψ). When unnecessary, parentheses can be omitted. In this paper, we use ϕ, ψ, η to denote arbitrary state formulae and Φ, Ψ to denote path formulae. By an ATL * formula we will mean by default a state formula of ATL * . Concurrent Game Models As for ATL or ATL + , ATL * formulae are evaluated over concurrent game models. Concurrent games models are transition systems where each transition to a unique successor state results from the combination of actions chosen by all the agents (components and/or environment) of the system. Notation: Given a set X, P(X) denotes the power set of X. Definition 1 (Concurrent game model and structure). A concurrent game model (in short CGM) is a tuple M =(A, S, {Act a } a∈A , {act a } a∈A , out, P, L) where -A = {1,...,k} is a finite, non-empty set of players (agents), -S is a non-empty set of states, -for each agent a ∈ A, Act a is a non-empty set of actions. For any coalition A ⊆ A we denote Act A := a∈A Act a and use σ A to denote at u p l ef r o mAct A . In particular, Act A is the set of all possible action vectors in M. -for each agent a ∈ A, act a : S →P (Act a ) \ {∅} defines for each state s the actions available to a at s, -out is a transition function assigning to every state s ∈ S and every action vector σ A = {σ 1 ,...,σ k }∈Act A a state out(s, σ A ) ∈ S that results from s if every agent a ∈ A plays action σ a ,w h e r eσ a ∈ act a (s) for every a ∈ A. -P is a non-empty set of atomic propositions. -L : S →P(P) is a labelling function. The sub-tuple S =( A, S, {Act a } a∈A , {act a } a∈A , out) is cal led a concurrent game structure (CGS). Semantics of ATL * In order to give the semantics of ATL * , we use the following notions. Although they are the same as those in [START_REF] Cerrito | Optimal tableaux-based decision procedure for testing satisfiability in the alternating-time temporal logic ATL+[END_REF], we recall them here to make the paper selfcontained. Computations. 
A play,o rcomputation, is an infinite sequence s 0 s 1 s 2 ••• ∈ S ω of states such that for each i ≥ 0 there exists an action vector σ A = σ 1 ,...,σ k such that out(s i ,σ A )=s i+1 .Ahistory is a finite prefix of a play. We denote by Plays M and Hist M respectively the set of plays and set of histories in M. For a state s ∈ S,w eu s ePlays M (s)a n dHist M (s) as the set of plays and set of histories with initial state s. Given a sequence of states λ, we denote by λ 0 its initial state, by λ i its (i + 1)th state, by λ ≤ i the prefix λ 0 ...λ i of λ and by λ ≥i the suffix λ i λ i+1 ... of λ. When λ = λ 0 ...λ ℓ is finite, we say that it has length ℓ and write |λ| = ℓ. Also, we set last(λ)=λ ℓ . Strategies. A strategy for an agent a in M is a mapping F a : Hist M → Act a such that for all histories h ∈ Hist M ,w eh a v eF a (h) ∈ act a (last(h)). This kind of strategies is also known as "perfect recall" strategies. We denote by Strat M (a) the set of strategies of agent a. A collective strategy of a coalition A ⊆ A is a tuple (F a ) a∈A of strategies, one for each agent in A. We denote by Strat M (A)t h e set of collective strategies of coalition A.Ap l a yλ ∈ Plays M is consistent with a collective strategy F A ∈ Strat M (A) if for every i ≥ 0 there exists an action vector σ A = σ 1 ,...,σ k such that out(λ i ,σ A )=λ i+1 and σ a = F a (λ ≤i ) for all a ∈ A. The set of plays with initial state s that are consistent with F A is denoted Plays M (s, F A ). For any coalition A ⊆ A and a given state s ∈ S in a given CGM M,a nA -co-action at s in M is a mapping Act c A : Act A → Act A\A that assigns to every collective action of A at the state s a collective action at s for the complementary coalition A \ A. Likewise, an A -co-strategy in M is a mapping F c A : Strat M (A) × Hist M → Act A\A that assigns to every collective strategy of A and every history h a collective action at last(h)f o rA \ A,a n dPlays M (s, F c A )i s the set of plays with initial state s that are consistent with F c A . Semantics. The semantics of ATL * is the same as the one of CTL * [START_REF] Emerson | Deciding full branching time logic[END_REF] (modulo CGM as intended interpretations) with the exception of the two following items: -M,s |= A Φ iff there exists an A-strategy F A such that, for all computations λ ∈ Plays M (s, F A ), M,λ |= Φ -M,s |=[ [ A]] Φ iff there exists an A-co-strategy F c A such that, for all computations λ ∈ Plays M (s, F c A ), M,λ |= Φ Valid, satisfiable and equivalent formulae in ATL * are defined as usual. 3 Tableau-Based Decision Procedure for ATL * In this section, we give the general description of our tableau-based decision procedure for ATL * formulae. The different steps of the procedure are summarized in this section and Fig. 1 and then detailed in the next three sections. From an initial formula η, the tableau-based decision procedure for ATL * that we propose attempts to build step-by-step a directed graph from which it i spo s s i b l et oe x t r a c taC G Mf o rη. This attempt will lead to a failure if η is not satisfiable. Nodes of that graph are labelled by sets of state formulae and are partitioned into two categories: prestates and states. A prestate can be seen as a node where the information contained in its formulae is "implicit". When we decompose all the formulae of a prestate and saturate the prestate, we obtain one or several states as successor nodes. 
States have the particularity of containing formulae of the form A ϕ or [[A]] ϕ from which it is possible to compute the next steps of the tableau creation. All prestates have states as successors and directed edges between them are of the form =⇒; on the other hand, all states have prestates as successors and directed edges between them are of the form σ A -→ where σ A is an action vector. The procedure is in two phases: the construction phase and the elimination phase. First, we create an initial node, that is a prestate containing the initial formula η, and we construct the graph by expanding prestates into states via a rule called (SR) and by computing prestates from states with a rule called (Next).T h er u l e(SR) decomposes each ATL * formula of a prestate, and then saturates the prestate into new states. Explanation of rules (SR) and (Next) can be found in Sects. 4 and 5, respectively. The procedure avoids creation of duplicated nodes (a form of loop check), which ensures termination of the procedure. The construction phase ends when no new states can be added to the graph. The graph obtained at the end of the construction phase is called the initial tableau for η, also noted T η 0 . The second phase of the procedure eliminates via the rule (ER1) all nodes with missing successors, that is prestates with no more successors at all or states with at least one missing action vector on its outcome edges. Also, by means of a rule called (ER2) it eliminates all states with "unrealized eventualities", that is states that cannot ensure that all the objectives it contains will be eventually fulfilled. The graph obtained at the end of the elimination phase of the procedure is called the final tableau for η, also noted T η . Explanation of rules (ER1) and (ER2) can be found in Sect. 6. Our tableau-based decision procedure for ATL * deals with what [START_REF] Goranko | Tableau-based decision procedures for logics of strategic ability in multiagent systems[END_REF] calls "tight satisfiability": the set A of agents involved in the tableau (and the CGMs it tries to build) is the set of agents present in the input formula. Construction Phase: Decomposition and Saturation Decomposition of ATL * Formulae All ATL * formulae can be partitioned into four categories: primitive formulae, α-formulae, β-formulae and γ-formulae. Primitive formulae correspond to the "simplest formulae" in the sense that they cannot be decomposed. These formulae are ⊤, ⊥, the literals and all ATL * suc- cessor formulae,o ft h ef o r m A ψ or [[A]] ψ where ψ is called the successor component of A ψ or [[A] ] ψ respectively. Every non-primitive formula must be decomposed into primitive formulae. α-formulae are of the form ϕ ∧ ψ where ϕ and ψ are α-components while β-formulae are of the form ϕ ∨ ψ where ϕ and ψ are β-components. Their decomposition is classical. Other formulae, that is of the form A Φ or [[A]] Φ, where Φ = ψ,a r eγ-formulae. This notion firstly introduced in [START_REF] Cerrito | Optimal tableaux-based decision procedure for testing satisfiability in the alternating-time temporal logic ATL+[END_REF] reveals quite useful also in the more expressive context of ATL * . Decomposition of these formulae is trickier than for α-a n dβ-formulae. Indeed, we will need to extract all possibilities of truth encapsulated in γ-formula ξ, which concretely aims at defining one or several conjunctions of primitive formulae such that their disjunction is equivalent to the γ-formulae ξ (see lemma 1). Decomposition of γ-Formulae. 
This subsection contains the heart of the decision procedure for ATL * , indeed the main difference with our decision procedure for ATL + lies in the treatment of γ-formulae. The first difficulty is that quantifiers A or [[A]] cannot distribute over Boolean connectors as seen in [START_REF] Cerrito | Optimal tableaux-based decision procedure for testing satisfiability in the alternating-time temporal logic ATL+[END_REF]. An additional difficulty specific to ATL * is the fact that it is now necessary to also deal with nesting of temporal operators, resulting in a second level of recurrence when the temporal operators and U are encountered in the decomposition function described below. In temporal logics, e.g. LTL, the operator U is considered as an eventuality operator, that is an operator that promises to verify a given formula at some instant/state. When we write λ |= ϕ 1 Uϕ 2 , where ϕ 1 and ϕ 2 are state formulae, we mean that there is a state λ i of the computation λ where ϕ 2 holds and ϕ 1 holds for all the states of λ preceding λ i . So, once the property ϕ 2 is verified, we do not need to take care of ϕ 1 , ϕ 2 and ϕ 1 Uϕ 2 any more. We say that ϕ 1 Uϕ 2 is realized. However, if ϕ 1 and ϕ 2 are path formulae, e.g. Φ 1 and Φ 2 respectively, state λ i is such that from it Φ 2 must hold forever -we say that Φ 2 is "initiated" at λ i , in the sense that we start to make Φ 2 true at λ i -, and for every computation λ ≥j , where j<i , Φ 1 must hold. So Φ 1 has to be true forever, that is even after Φ 2 had been initiated. This explains the fact that at a possible state s the path formula ϕ 1 Uϕ 2 may become ϕ 1 Uϕ 2 ∧ ϕ 1 when ϕ 1 is a path formula and we postpone ϕ 2 . Note that ϕ 1 is then also initiated at s.W e now face the problem of memorizing the fact that a path formula Φ is initiated since path formulae cannot be stored directly in a state. That is why, during the decomposition of γ-formulae, we add a new set of path formulae linked to a γ-component and the current state. The definition and general treatment of eventualities in our procedure are given in Sect. [START_REF] Goranko | Tableau-based decision procedures for logics of strategic ability in multiagent systems[END_REF]. In order to decompose γ-formulae ϕ = A Φ or ϕ =[ [ A]] Φ, we analyse the path formula Φ in terms of present (current state) and future (next states). This analysis is done by a γ-decomposition function dec : ATL * p →P(ATL * s × ATL * p × P(ATL * p )) where ATL * p is the set of ATL * path formulae and ATL * s is the set of ATL * state formulae. Intuitively, the function dec assigns to the path formula Φ, a set of triples ψ, Ψ, S where ψ a state formula true at the current state, Ψ is a path formula expressing what must be true at next states and S is the set of path formulae initiated at the current state during the γ-decomposition This set S will be used during the elimination phase to determine if eventualities are realized or not, see Sect. 6. We first define two operators ⊗ and ⊕ between two sets S 1 and S 2 of triples. ⋆ S 1 ⊗ S 2 := { ψ i . ∧ ψ j ,Ψ i . ∧ Ψ j ,S i ∪ S j | ψ i ,Ψ i ,S i ∈S 1 , ψ j ,Ψ j ,S j ∈S 2 } ⋆ S 1 ⊕ S 2 := { ψ i . ∧ ψ j ,Ψ i . 
∨ Ψ j ,S i ∪ S j | ψ i ,Ψ i ,S i ∈S 1 , ψ j ,Ψ j ,S j ∈S 2 , Ψ i = ⊤,Ψ j = ⊤} The function dec is defined by induction on the structure of path formula Φ as follows: ⋆ dec(ϕ)={ ϕ, ⊤, ∅ } for any ATL * state formula ϕ ⋆ dec( Φ 1 )={ ⊤,Φ 1 , ∅ } for any path formula Φ 1 ⋆ dec( Φ 1 )={ ⊤, Φ 1 , {Φ 1 } } ⊗ dec(Φ 1 ) ⋆ dec(Φ 1 UΦ 2 )=({ ⊤,Φ 1 UΦ 2 , {Φ 1 } } ⊗ dec(Φ 1 )) ∪ ({ ⊤, ⊤, {Φ 2 } } ⊗ dec(Φ 2 )) ⋆ dec(Φ 1 ∧ Φ 2 )=dec(Φ 1 ) ⊗ dec(Φ 2 ) ⋆ dec(Φ 1 ∨ Φ 2 )=dec(Φ 1 ) ∪ dec(Φ 2 ) ∪ (dec(Φ 1 ) ⊕ dec(Φ 2 )) Note that the definition of the function dec is based on the fixed-point equivalences of LTL [START_REF] Emerson | Temporal and modal logics[END_REF]: ∨ is to automatically transform resultant formulae in conjunctive normal form without redundancy, and therefore ensures the termination of our tableau-based decision procedure. For instance, when applying the function dec on ♦Φ∧♦Φ we may obtain a path formula ♦Φ ∧ ♦Φ ∧ ♦Φ and applying again the function dec on the so-obtained path formula will return ♦Φ ∧ ♦Φ ∧ ♦Φ ∧ ♦Φ, and so on forever. Also when the formula is complicated with ∧ and ∨ embedded in temporal operators, we may not be able to define which part of a path formula is identical to another one. We avoid these unwanted behaviours with our use of ] Φ be a γ-formula to be decomposed. Each triple ψ, Ψ, S ∈dec(Φ) is then converted to a γ-component γ c (ψ, Ψ, S) as follows: Ψ ≡ Ψ ∧ Ψ and Φ UΨ ≡ Ψ ∨ (Φ ∧ (Φ UΨ γ c (ψ, Ψ, S)=ψ if Ψ = ⊤ (3) γ c (ψ, Ψ, S)=ψ ∧ A A Ψ if ζ is of the form A Φ, (4) γ c (ψ, Ψ, S)=ψ ∧ [[ A]] [[ A]] Ψ if ζ is of the form [[A]] Φ (5) and a γ-set γ s (ψ, Ψ, S)=S. The following key lemma claims that every γ-formula is equivalent to the disjunction of its γ-components. Lemma 1. For any ATL * γ-formula ζ = A Φ or ζ =[ [A]] Φ 1. Φ ≡ {ψ ∧ Ψ | ψ, Ψ, S ∈dec(Φ)} 2. A Φ ≡ { A (ψ ∧ Ψ ) | ψ, Ψ, S ∈dec(Φ)},a n d [[ A]] Φ ≡ {[[ A]] ( ψ ∧ Ψ ) | ψ, Ψ, S ∈dec(Φ)} 3. A Φ ≡ {γ c (ψ, Ψ, S) | ψ, Ψ, S ∈dec(Φ)} Example 1. (Decomposition of θ = 1 (( ♦q∨♦r)∧(♦q∨♦r))) . First, we apply the decomposition function to the path formula Φ =( ♦q ∨ ♦r) ∧ (♦q ∨ ♦r), see Fig. 2. We recall that ♦ϕ ≡⊤Uϕ. It is worth noting that p and r can be replaced by any state formulae without affecting the basic structure of the computation of the function dec. Then, for instance, from the triple r, ♦q ∧ ♦q, {r, ♦q} of dec(Φ), we obtain the γ-component γ c (r, ♦q∧♦q, {r, ♦q})=r∧ 1 1 ( ♦q∧♦q) and the γ-set γ s (r, ♦q ∧ ♦q, {r, ♦q})={r, ♦q}; from the triple ⊤, ♦q ∧ ♦q, {♦q} we obtain γ c (⊤, ♦q∧♦q, {♦q})= 1 1 ( ♦q∧♦q)andγ s (⊤, ♦q∧♦q, {♦q})={♦q}. Closure. The closure cl(ϕ)o fa nATL * state formula ϕ is the least set of ATL * formulae such that ϕ,⊤, ⊥∈cl(ϕ), and cl(ϕ) is closed under taking successor, α-, β-a n dγ-components of ϕ. For any set of state formulae Γ we define cl(Γ )= {cl(ψ) | ψ ∈ Γ } (6) We denote by |ψ| the length of ψ and by ||Γ || the cardinality of Γ . Lemma 2. For any ATL * state formula ϕ, ||cl(ϕ)|| < 2 2 2|ϕ| . Sketch of proof. The double exponent, in the size of the ϕ, of the closure comes from the fact that, during decomposition of γ-formulae, path formulae are put in disjunctive normal form. We recall that this form is necessary to ensure the termination of our procedure. ♦q ∨ ♦r) ∧ (♦q ∨ ♦r)) Full expansions of sets of ATL * formulae. Once we are able to decompose into components every non-primitive ATL * state formulae, it is possible to obtain full expansions of a given set of ATL * state formulae using the following definition: Definition 2. 
Let Γ , Δ be sets of ATL * state formulae and Γ ⊆ Δ ⊆ cl(Γ ). 1. Δ is patently inconsistent if it contains ⊥ or a pair of formulae ϕ and ¬ϕ. Δ is a full expansion of Γ if it is not patently inconsistent and satisfies the following closure conditions: -i fϕ ∧ ψ ∈ Δ then ϕ ∈ Δ and ψ ∈ Δ; -i fϕ ∨ ψ ∈ Δ then ϕ ∈ Δ or ψ ∈ Δ; -i fϕ ∈ Δ is a γ-formula, then at least one γ-component of ϕ is in Δ and exactly one of these γ-components, say γ c (ψ, Ψ, S),i nΔ, denoted γ l (ϕ, Δ), is designated as the γ-component in Δ linked to the γ-formula ϕ, as explained below. We also denote by γ sl (ϕ, Δ) the set of path formulae γ s (ψ, Ψ, S), which is linked to the γ-component γ l (ϕ, Δ) The set of all full expansions of Γ is denoted by FE(Γ ). Proposition 1. For any finite set of ATL * state formulae Γ : Γ ≡ Δ | Δ ∈ FE(Γ ) . The proof easily follows from Lemma 1. The rule (SR) adds to the tableau the set of full expansions of a prestate Γ as successor states of Γ . Rule (SR). Given a prestate Γ , do the following: 1. For each full expansion Δ of Γ add to the pretableau a state with label Δ. 2. For each of the added states Δ,ifΔ does not contain any formula of the form A ϕ or [[A]] ϕ, add the formula A ⊤ to it; 3. For each state Δ obtained at steps 1 and 2, link Γ to Δ v i aa= ⇒ edge; 4. If, however, the pretableau already contains a state Δ ′ with label Δ,d on o t create another copy of it but only link Γ to Δ ′ via a =⇒ edge. Construction Phase: Dynamic Analysis of Successor Formulae We recall that the considered agents are those explicitly mentioned in the initial formula η.T h er u l e(Next) creates successor prestates to a given state, say Δ, so that the satisfiability of Δ is equivalent to the satisfiability of all the prestates. In our tableau construction procedure, choosing one of the successor formulae contained in Δ is considered as a possible action for every agent. Then each possible action vector is given a set of formulae corresponding to the choice collectively made by every agent. More details about the rationale behind the rule (Next) can be found in [START_REF] Carral | EL-ifying ontologies[END_REF][START_REF] Goranko | Tableau-based decision procedures for logics of strategic ability in multiagent systems[END_REF]. Moreover, it is worthwhile noticing that the rule (Next) is done in such a way so that any created prestate contains at most one formula of the form [[A ′ ]] ψ, where A ′ = A. Rule (Next). Given a state Δ, do the following, where σ is a shorthand for σ A : 1. List all primitive successor formulae of Δ in such a way that all successor formulae of the form A Φ precede all formulae of the form [[A ′ ]] Φ where A ′ = A, which themselves precede all formulae of the form [[A]] Φ; let the result be the list L = A 0 ϕ 0 ,..., A m-1 ϕ m-1 , [[ A ′ 0 ]] ψ 0 ,...,[[ A ′ l-1 ]] ψ l-1 , [[ A]] μ 0 ,...,[[ A]] μ n-1 Let r Δ = max{m + l, 1}; we denote by D(Δ) the set {0,...,r Δ -1} |A| . Then, for every σ ∈ D(Δ), denote N (σ): ={i | σ i m}, where σ i is the ith component of the tuple σ, and let co(σ ):=[ i∈N (σ) (σ i -m)] mod l. 2. For each σ ∈ D(Δ) create a prestate: Γ σ = {ϕ p | A p ϕ p ∈ Δ and σ a = p for all a ∈ A p } ∪{ψ q | [[ A ′ q ]] ψ q ∈ Δ, co(σ)=q and A -A ′ q ⊆ N (σ)} ∪{μ r | [[ A]] μ r ∈ Δ} If Γ σ is empty, add ⊤ to it. Then connect Δ to Γ σ with σ -→ . If, however, Γ σ = Γ for some prestate Γ that has already been added to the initial tableau, only connect Δ to Γ with σ -→ . Example 2. 
We suppose a state containing the following successor formulae, that we arrange in the following way, where the first line of numbers corresponds to positions among negative successor formulae, and the second line corresponds to positions among successor formulae, with A = A. L = 0 1 1 ( ♦q ∧ ♦q), 0 1 [[1]] [[1]] ¬q, 1 2 [[2]] [[2]] ♦s, [[ 1 , 2]] ¬q The application of the rule (Next) on L gives the following results: σ N (σ) co(σ) Γ (σ) σ N (σ) co(σ) Γ (σ) 0, 0 ∅ 0 1 ( ♦q ∧ ♦q), ¬q 1, 2 {1, 2} 1 [[1]] ¬q, ¬q 0, 1 {2} 0 1 ( ♦q ∧ ♦q), [[2]] ♦s, ¬q 2, 0 {1} 1 [[1]] ¬q, ¬q 0, 2 {2} 1 1 ( ♦q ∧ ♦q), ¬q 2, 1 {1, 2} 1 [[1]] ¬q, ¬q 1, 0 {1} 0 ¬q 2, 2 {1, 2} 0 [[2]] ♦s, ¬q 1, 1 {1, 2} 0 [[2]] ♦s, ¬q 6 Elimination Phase The elimination phase works step-by-step. In order to go through one step to another we apply by turns two elimination rules, called (ER1) and (ER2), until no more nodes can be eliminated. The rule (ER1) detects and deletes nodes with missing successor, while the rule (ER2) detects and delete states that do not realize all their eventualities. At each step, we obtain a new intermediate tableau, denoted by T η n . We denote by S η n the set of nodes (states and prestates) of the intermediate tableau T η n . At the end of the elimination phase, we obtain the final tableau for η, denoted by T η . It is declared open if the initial node belongs to S η , otherwise closed. The procedure for deciding satisfiability of η returns "No" if T η is closed, "Yes" otherwise. Remark 1. Contrary to the tableau-based decision procedure for ATL + ,w ed o not eliminate all the prestates at the beginning of the elimination phase. We eliminate them with the rule (ER1) only if necessary. This does not have any effect on the result of the procedure, nor any relevant modification in the soundness and completeness proofs, but it makes implementation quicker and easier. Rule (ER1). Let Ξ ∈ S η n be a node (prestate or state). -In the case where Ξ is a prestate: if all nodes Δ with Ξ =⇒ Δ have been eliminated at earlier stages, then obtain T η n+1 by eliminating Ξ from T η n . -In the case where Ξ is a state: if, for some σ ∈ D(Ξ), the node Γ with Ξ σ -→ Γ has been eliminated at earlier stage, then obtain T η n+1 by eliminating Ξf r o mT η n . In order to define the rule (ER2), we first need to define what is an eventuality in the context of ATL * and then define how to check whether eventualities are realized or not. Eventualities. In our context, we consider all γ-formulae as potential eventualities. We recall that a γ-formula is of the form A Φ or [[A]] Φ where Φ = ϕ. When constructing a tableau step-by-step as we do in our procedure, it is possible to postpone forever promises encapsulated in operators such as U as far as we keep promising to satisfy them. We consider that a promise, which is a path formula, is satisfied (or realized) once it is initiated at the current state, which corresponds to an engagement to keep it true if necessary, for any computation starting at that state. So we want to know at a given state and for a given formula whether all promises (or eventualities) are realized. This is the role of the function Realized: ATL * p ×P(ATL * s ) ×P(ATL * p ) → B, where B is the set {true, f alse}. The first argument of the function Realized is the path formula to study, the second argument is a set of state formulae Θ, and the third argument is a set of path formulae on which one is "engaged". This third argument is exactly what is added with respect to ATL + treatment. 
For our purpose, to know whether a potential eventuality is realized, we use the set Θ to represent the state containing the γ-formula and the set S = γ sl (Φ, Θ) obtained during the decomposition of Φ and the full expansion of Θ. This last set S is computed in Sect. 4 and corresponds to the set of path formulae initiated in the current state Θ. The definition of Realized is given by recursion on the structure of Φ as follows: - Realized(ϕ, Θ, S)=true iff ϕ ∈ Θ -Realized(Φ 1 ∧ Φ 2 ,Θ,S)=Realized(Φ 1 ,Θ,S) ∧ Realized(Φ 2 ,Θ,S) -Realized(Φ 1 ∨ Φ 2 ,Θ,S)=Realized(Φ 1 ,Θ,S) ∨ Realized(Φ 2 ,Θ,S) -Realized( Φ 1 ,Θ,S)=true -Realized( Φ 1 ,Θ,S)=true iff Φ 1 ∈ Θ ∪ S -Realized(Φ 2 UΦ 1 ,Θ,S)=true iff Φ 1 ∈ Θ ∪ S Remark 2. In the two last items, we use the set Θ ∪ S to handle the particular case where Φ 1 is a state formula that is already in the set Θ because of the behaviour of another coalition of agents. We will see with Definition 4 that if the function Realized declares that an eventuality is not immediately realized at a given state, then we check in the corresponding successor states whether it is realized or not. But, because of the way γ-formulae are decomposed, an eventuality may change its form from one state to another. Therefore, we define the notion of Descendant potential eventuality in order to define a parent/child link between potential eventualities and keep track of not yet realized eventualities, and finally check whether the potential eventualities are realized at a given moment. Δ . The notion of descendant potential eventuality of ξ of degree d,ford>1, is defined inductively as follows: -any successor eventuality of ξ (w.r.t. some γ-component of ξ) is a descendant eventuality of ξ of degree 1; -any successor eventuality of a descendant eventuality ξ n of ξ of degree n is a descendant eventuality of ξ of degree n +1. We will also consider ξ to be a descendant eventuality of itself of degree 0. Realization of Potential Eventualities First, we give some notation: Notation: Let L = A 0 ϕ 0 ,..., A m-1 ϕ m-1 , [[ A ′ 0 ]] ψ 0 ,...,[[ A ′ l-1 ]] ψ l-1 , [[ A]] μ 0 ,...,[[ A]] μ n-1 be the list of all primitive successor formulae of Δ ∈ S η 0 , induced as part of application of (Next). Succ(Δ, A p ϕ p ):={Γ | Δ σ -→ Γ, σ a = p for every a ∈ A p } Succ(Δ, [[ A ′ q ]] ψ q ):={Γ | Δ σ -→ Γ, co(σ)=q and A -A ′ q ⊆ N (σ)} Succ(Δ, [[ A]] μ r ):={Γ | Δ σ -→ Γ } Definition 4. (Realization of potential eventualities) Let Δ ∈ S η n be a state and ξ ∈ Δ be a potential eventuality of the form A Φ or [[ A]] Φ.L e t S = γ sl (ξ, Δ).T h e n : 1. If Realized(Φ, Δ, S)=true then ξ is realized at Δ in T η n . 2. Else, let ξ 1 Δ be the successor potential eventuality of ξ w.r.t. γ l (ξ, Δ).I ff o r every Γ ∈ Succ(Δ, A ξ 1 Δ ) (resp. Γ ∈ Succ(Δ, [[ A]] ξ 1 Δ )), there exists Δ ′ ∈T η n with Γ =⇒ Δ ′ and ξ 1 Δ is realized at Δ ′ in T η n , then ξ is realized at Δ in T η n . Example 3. Let Δ = { 1 (( ♦q ∨ ♦r) ∧ (♦q ∨ ♦r)), [[1]] ¬q, [[2]] ♦s, [[ 1 , 2]] ¬q, ¬q, 1 1 ( ♦q ∧ ♦q), [[1]] [[1]] ¬q, s, [[2]] [[ 2 ]] ♦s} be a state. 
If we consider the potential eventuality ξ = 1 (( ♦q ∨ ♦r) ∧ (♦q ∨ ♦r)) ∈ Δ, Φ =( ♦q ∨ ♦r) ∧ (♦q ∨ ♦r)a n dS = γ sl (ξ, Δ)={♦q}, then we obtain the following result: Realized(Φ, Δ, S)=Realized( ♦q ∨ ♦r, Δ, S) ∧ Realized(♦q ∨ ♦r, Δ, S) = Realized( ♦q, Δ, S) ∨ Realized(♦r, Δ, S) ∧ Realized(♦q, Δ, S) ∨ Realized(♦r, Δ, S) =(true ∨ false) ∧ (false ∨ false)=false The call of the function Realized on (Φ, Δ, S) returns false, which means that the potential eventuality ξ is not immediately realized. Therefore, we must check in the future if ξ can be realized or not. Concretely, we must check that the descendant potential eventuality ξ 1 = 1 ( ♦q ∧ ♦q) is realized at the next states corresponding to the collective choices of all agents to satisfy the successor formula 1 ξ 1 , that is states resulting from the transitions (0, 0), (0, 1) and (0, 2), as seen in Example 2. Rule (ER2). If Δ ∈ S η n is a state and contains a potential eventuality that is not realized at Δ ∈T η n , then obtain T η n+1 by removing Δ from S η n . 7 Results and Sketches of Proofs Theorem 1. The tableau-based procedure for ATL * is sound with respect to unsatisfiability, that is if a formula is satisfiable then its final tableau is open. To prove soundness, we first prove that from any satisfiable prestate we obtain at least one satisfiable state, and we prove that from any satisfiable state we obtain only satisfiable prestates. Second, we prove that no satisfiable prestate or state can be eliminated via rule (ER1) or (ER2), and in particular, if the initial prestate is satisfiable, it cannot be removed, which means that the tableau is open. Theorem 2. The tableau-based procedure for ATL * is complete with respect to unsatisfiability, that is if a tableau for an input formula is open then this formula is satisfiable. To prove completeness, we construct step-by-step a special structure called Hintikka structure from the open tableau and then we prove that a CGM satisfying the initial formula can be obtained from that Hintikka structure. Theorem 3. The tableau-based procedure for ATL * runs in at most 3EXPTIME. We first argue that the number of formulae in the closure of the initial formula η is at most double exponential in the size of η (see Lemma 2). Then we have that the number of states is at most exponential in the size of the closure of η. Therefore the procedure runs in at most 3EXPTIME. Implementation of the Procedure We propose a prototype implementing our tableau-based decision procedure for ATL * , available on the following web site: http://atila.ibisc.univ-evry.fr/tableau ATL star/. This prototype aims at giving a user-friendly tool to the reader interested in checking satisfiability of ATL* formulae. This is why we provide our prototype as a web application directly ready to be used. The application allows one to enter a formula, or to select one from a predefined list of formulae, and then launch the computation of the corresponding tableau. It returns some statistics about the number of prestates and states generated as well as the initial and final tableaux for the input formula, therefore also an answer on its satisfiability. Explanation on how to use the application is given on the web site. Our prototype is developed in Ocaml for the computation, and in PHP and JavaScript for the web interface. Binaries of the application can be found on the same web page. 
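To make the two central ingredients of the procedure concrete, the following short sketch (in Python; the prototype itself is written in Ocaml and is not reproduced here) implements the γ-decomposition function dec of Sect. 4 and the Realized check of Sect. 6 on a toy encoding of path formulae. Conjunctions of state formulae and the conjunctive normal form of path formulae are represented as (frozen) sets, which provides the dotted, redundancy-free ∧ and ∨ for free. The encoding, the helper names and the toy formula are ours; moreover, since the extracted text above lost the temporal-operator glyphs, the membership tests in realized() follow one consistent reading of the definitions and should be taken as illustrative rather than authoritative.

```python
# Sketch only: a path formula is a tuple -- ("st", p) state formula, ("X", f) Next,
# ("G", f) Always, ("U", f, g) Until, ("and", f, g), ("or", f, g).
# A triple <psi, Psi, S> is (frozenset of state formulas,
#   CNF of path formulas encoded as a frozenset of clauses, frozenset of initiated path formulas).
TOP = frozenset()                                    # CNF of the trivial constraint

def cnf_and(a, b): return a | b
def cnf_or(a, b):  return frozenset(c1 | c2 for c1 in a for c2 in b)

def tensor(s1, s2):                                  # the tensor operator on sets of triples
    return {(p1 | p2, cnf_and(f1, f2), e1 | e2)
            for (p1, f1, e1) in s1 for (p2, f2, e2) in s2}

def oplus(s1, s2):                                   # the oplus operator (future parts must differ from TOP)
    return {(p1 | p2, cnf_or(f1, f2), e1 | e2)
            for (p1, f1, e1) in s1 for (p2, f2, e2) in s2
            if f1 != TOP and f2 != TOP}

def unit(future, initiated):
    return {(frozenset(), future, frozenset(initiated))}

def dec(phi):
    k = phi[0]
    if k == "st":  return {(frozenset([phi[1]]), TOP, frozenset())}
    if k == "X":   return unit(frozenset([frozenset([phi[1]])]), [])
    if k == "G":   return tensor(unit(frozenset([frozenset([phi])]), [phi]), dec(phi[1]))
    if k == "U":
        keep = tensor(unit(frozenset([frozenset([phi])]), [phi[1]]), dec(phi[1]))
        done = tensor(unit(TOP, [phi[2]]), dec(phi[2]))
        return keep | done
    if k == "and": return tensor(dec(phi[1]), dec(phi[2]))
    if k == "or":  return dec(phi[1]) | dec(phi[2]) | oplus(dec(phi[1]), dec(phi[2]))
    raise ValueError(phi)

def realized(phi, theta, initiated):
    """Immediate-realization check of a path formula at a state labelled theta."""
    k = phi[0]
    if k == "st":  return phi[1] in theta
    if k == "and": return realized(phi[1], theta, initiated) and realized(phi[2], theta, initiated)
    if k == "or":  return realized(phi[1], theta, initiated) or realized(phi[2], theta, initiated)
    if k == "X":   return True                       # Next promises nothing at the current state
    promised = phi[1] if k == "G" else phi[2]        # "G" or "U": what must be committed to now
    return (phi in initiated or promised in initiated
            or (promised[0] == "st" and promised[1] in theta))

def dia(f): return ("U", ("st", "true"), f)          # "sometimes f" written as true U f
toy = ("and", ("G", dia(("st", "q"))), dia(("st", "r")))     # toy formula: always-sometimes q, and sometimes r
for psi, fut, init in dec(toy):
    print(sorted(psi), "| future clauses:", len(fut), "| initiated:", len(init))
print(realized(dia(("st", "r")), {"r"}, frozenset()))        # True: the eventuality is realized since r holds now
```

The γ-sets returned by dec are exactly what the rule (ER2) consumes: an eventuality kept in a state is declared realized either immediately by realized(), or later through its descendant eventualities in the successor states.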
As the main difference between ATL and ATL * comes from path formulae, we mainly focus our test on that point and use the list of tests proposed by Reynolds for CTL * in [START_REF] Reynolds | A faster tableau for CTL[END_REF]. This allows us to check that our application gives the same results in term of satisfiability and that our running times for these examples are satisfactory. Moreover, other tests using formulae with non trivial coalitions have been done. Nevertheless a serious benchmark has still to be done, which is a non trivial work, left for the future. Also, we plan to compare theoretically and experimentally our approach with the automata-decision based procedure of [8]. Conclusion In this paper, we propose the first sound, complete and terminating tableaubased decision procedure for ATL * : it is easy to understand and conceptually simple. We also provide the first implementation to decide the satisfiability of ATL * formulae, among which ATL + formulae. future works, it would be worthwhile to implement the automata-based decision procedure proposed in [8]a n d be able to make some practical comparisons. Another perspective is to implement model synthesis with a minimal number of states for satisfiable ATL * formulae. Fig. 1 . 1 Fig. 1. Overview of the tableau-based decision procedure for ATL * ∨ correspond respectively to the operators ∧ and ∨ where the associativity, commutativity, idempotence and identity element properties are embedded in the syntax. The aim of both . ∧ and . ∨ and the transformation of any new path formula in conjunctive normal form without redundancies. Now, let ζ = A Φ or ζ =[ [A] Fig. 2 . 2 Fig.2. Function dec applied on the path formula 1 (( ♦q ∨ ♦r) ∧ (♦q ∨ ♦r)) Definition 3 . 3 (Descendant potential eventualities). Let Δ be a state and let ξ ∈ Δ be a potential eventuality of the formA Φ or [[ A]] Φ. Suppose the γcomponent γ l (ξ, Δ) in Δ linked to ξ is, respectively, of the form ψ ∧ A A Ψ or ψ ∧ [[ A]] [[ A]]Ψ . Then the successor potential eventuality of ξ w.r.t. γ l (ξ, Δ) is the γ-formula A Ψ (resp. [[ A]] Ψ ) and it will be denoted by ξ1 Acknowledgement. I would like to thank Serenella Cerrito and Valentin Goranko for their advices and proofreading. I also thank the anonymous referees for their helpful criticisms.
00409650
en
[ "sdu.envi", "sde.mcg" ]
2024/03/04 16:41:18
2009
https://insu.hal.science/insu-00409650/file/A.Pelfrene-AGechemistry-2009.pdf
A Pelfrêne N Gassama D Grimaud Mobility of major-, minor-and trace elements in solutions of a planosolic soil: distribution and controlling factors Keywords: Soil solutions, Planosol, trace metals, speciation, Al, Fe, controlling factors, mobility Subsurface waters circulating in an unpolluted soil of a planosolic horizon (Massif Central, France) were studied in order to determine their physico-chemical characteristics. Three water sampling sites were chosen along a toposequence according to topography. For each site, two piezometers were placed above and in the gravely and concretion-rich horizon (Fe-and Mn-oxyhydroxides). Concentrations of major-, minor-(cations, anions, iron, manganese, phosphorus, silica) and trace elements (Al, Cd, Co, Cr, Cu, Ni, Pb, Zn, U) were monitored on bulk and filtered water (0.45 µm) to study both the particulate and the dissolved components, from 2004 to 2006, during the soil saturation period (i.e. from November to May). Chemical characteristics of soil solutions provide evidence for various chemical water compositions and for time variations of the water quality, pointing out that the hydrodynamic and chemical reactivity in the solution is different for the three sites. Calculations of pe values allow to give an information about a range of the redox state of soil solutions. The pe ranges are different for each piezometer but correspond to anoxic solution. For all piezometers, distribution between the dissolved and the particulate fraction and correlations between the various elements in the soil solutions indicate that: (i) Al and Fe show similar behaviour, (ii) Al is mainly present as oxyhydroxides and (iii) some trace metals are mainly associated with particles which Introduction Trace metals are naturally present in soils reflecting their occurrence in the parent material. They are, however, also present as environmental contaminants, particularly since soil pollution has become a major problem due to anthropogenic activities. As a consequence, assessment of this pollution requires a good knowledge of the distribution of trace metals in unpolluted soils, the so-called pedogeochemical background level. The total concentration of trace metal is not the only factor to be considered to assess toxicity hazard. The speciation of trace metals (i.e., the distribution of their physicochemical forms) ultimately determines their bioavailability and their mobility in the soil [START_REF] Alloway | Soil processes and the behaviour of metals[END_REF]. These characteristics of trace metals in soils, sediments and aquatic systems depend largely upon their interaction with organic compounds and/or minerals like clays, hydrous oxides of Fe, Al and Mn [START_REF] Stumm | Aquatic chemistry: an introduction emphasizing chemical equilibria in natural waters[END_REF]. Adsorption of trace metals onto solid compounds and associated surface coatings is considered very important in controlling metal activity [START_REF] Sposito | Electrochemical phenomena[END_REF][START_REF] Sigg | Chimie des milieux aquatiques[END_REF][START_REF] Alloway | Soil processes and the behaviour of metals[END_REF]. Moreover, pH and redox conditions are factors that also affect the chemistry of metals in soils, and their uptake by organisms [START_REF] Singh | Soil and water contamination by heavy metals[END_REF]. 
Planosols are characterised by a vertical succession of clay-poor horizons overlying a clay-rich horizon that drastically restricts the vertical flow of water and induces the occurrence of a seasonal water table [START_REF] Baize | Planosols in the "Champagne Humide" region, France. A multiapproach study[END_REF]. According to the water hydrodynamic during the year, oxide concretions may develop at the base of the last horizon. In this horizon, trace elements can be accumulated or released depending on the dynamic of oxide formation/dissolution. The Planosol studied here is developed from metamorphic parent material. Naturally rich in trace elements due to the geology, this soil is subject to seasonal water saturation during winter and spring [START_REF] Salvador-Blanes | Déterminisme de la distribution spatiale des éléments majeurs et traces dans les sols en contexte métamorphique (Plateau d'Aigurande, nord du Massif Central[END_REF]. In this hydromorphic soil, water circulation, notably lateral underflow above clayey horizons, can involve specific chemical reactions and transfers [START_REF] Bourrié | Mise en évidence de deux dynamiques saisonnières du fer dans les sols hydromorphes en climat tempéré[END_REF]. Soil solutions were collected above and in a gravely and concretion-rich (Fe and Mn oxides) horizon, in order to assess its impact on the distribution of trace metals in the soil waters. The objectives of this paper are (i) to determine the range, distribution and behaviour of major-, minorand trace elements in these unpolluted soil solutions, (ii) to identify the main physical and chemical controlling factors of their distribution, and (iii) to assess the impact of the concretion-rich horizon on the trace metal distribution. The soil solutions have been monitored for three years. Concentrations both in the soluble and in the particulate fraction have been considered. Redox conditions have been assessed to provide evidence for particular mechanisms. The study of chemical speciation of some trace elements has been developed in a further paper [START_REF] Pelfrêne | Dissolved Cu(II) speciation in unpolluted soil solutions of a planosolic horizon[END_REF]. Environmental settings The study area is located on the Aigurande plateau in the northern part of the Massif Central (France). The substratum is composed of Paleozoic metamorphic formations (gneiss and amphibolite) and intrusive granitic rocks [START_REF] Quenardel | Paleozoic evolution of the Plateau d'Aigurande (NW Massif Central, France)[END_REF]. The area is a grassland in a hedged farmland district. The studied soil is a Planosol developed from gneissic parent material. Planosols are characterized by the vertical succession of horizons (Fig. 1a). The upper horizon (sandy and organic-rich horizon) is allochthonous and derived from colluvial materials of amphibolitic (site 1, piezometers 1a and 1b) and gneissic (sites 2 and 3, piezometers 2a, 2b, 3a and 3b) origin [START_REF] Salvador-Blanes | Influence des substrats et des formations de versant sur la variabilité spatiale des teneurs naturelles en chrome de sols issus de roches métamorphiques[END_REF] (Fig. 1b). 
The gravelly and concretion-rich horizon below comprises concretions which are mainly composed of several types of cements (Fe-rich, Si- and Al-rich, Mn-rich and Ti-rich) surrounding grains of quartz, feldspars, micas and accessory minerals [START_REF] Salvador-Blanes | Déterminisme de la distribution spatiale des éléments majeurs et traces dans les sols en contexte métamorphique (Plateau d'Aigurande, nord du Massif Central[END_REF][START_REF] Cornu | Trace element accumulation in Mn-Fe-oxide nodules of a planosolic horizon[END_REF]. The deeper horizons (a silty horizon and alterite) are developed in gneissic material (mainly quartz with feldspars, micas and clay minerals) [START_REF] Salvador-Blanes | Déterminisme de la distribution spatiale des éléments majeurs et traces dans les sols en contexte métamorphique (Plateau d'Aigurande, nord du Massif Central[END_REF]. The horizons are waterlogged during winter and spring. The mean annual rainfall in the area was 872, 667 and 772 mm for 2004, 2005 and 2006, respectively. The main rainfall occurs between December and March.

Sampling and analytical methods

Three water sampling sites were chosen along a toposequence according to topography (Fig. 1). Two piezometers were placed in each site, in November 2003: above (noted 1a, 2a, 3a respectively for sites 1, 2 and 3) and in (noted 1b, 2b and 3b respectively for sites 1, 2 and 3) the gravelly and concretion-rich horizon. Piezometers were equipped with a strainer (20 cm height) in order to collect free water circulating in the soil horizon. Soil solution samples were collected from March 2004 to May 2006 (every 15 days) during the soil saturation period (i.e. from November to May). In 2004, sampling began in March in order to allow the soil to reach equilibrium. Bulk and filtered water were sampled to study both the particulate and the dissolved phases. The concentrations in the particulate fraction were determined by subtracting the concentration measured in the dissolved fraction from that measured in the bulk solution. Soil waters were filtered in the field through 0.45 µm membrane filters (cellulose acetate) with a frontal-flow system in order to separate dissolved and particulate fractions. Prior to use, each membrane was rinsed with deionized water (18 MΩ) and then with the sample. Filtered and bulk water samples were acidified to pH ≈ 2 with Suprapur® grade nitric acid for aliquots intended for cation, phosphorus, dissolved silica, TOC, DOC and trace metal analysis. The samples were stored in previously washed polypropylene containers. Physico-chemical parameters (temperature, pH and electrical conductivity C) were measured in the field. Alkalinity was measured the same evening by the Gran method [START_REF] Gran | Determination of the equivalent point in potentiometric titrations[END_REF] with an HCl solution. Bicarbonate concentrations were calculated considering alkalinity as mainly due to carbonate. Total organic C (TOC, on unfiltered water samples) and dissolved organic C (DOC) were determined by using a C analyzer (Shimadzu TOC-V CSH, NPOC method). Particulate organic C (POC) was obtained by difference between TOC and DOC. Calcium and Mg, in the bulk and filtered waters (0.45 µm), were determined by flame atomic absorption spectrometry, and Na and K by flame atomic emission spectrometry. Chloride, F-, NO3- and SO4 2- in the filtered waters were measured by ionic chromatography. Total and dissolved P and Si were measured by molecular spectrometry.
Iron and Mn in the bulk and filtered waters were measured by graphite furnace atomic absorption spectrometry. Trace elements (Al, Cd, Co, Cr, Cu, Ni, Pb, Zn and U) in the different fractions were measured by ICP-MS. Bulk solutions were analyzed by ICP-MS after successive HNO3 digestions in order to destroy organic matter and dissolve amorphous particles. However, the clayey fraction and other silicates were not destroyed. Therefore, the particulate fraction corresponds to amorphous particles (plus carbonates) and ions adsorbed on the clay minerals. All samples exhibited a charge imbalance lower than 10%. Blank filters were run in the field to check for possible contamination. Deionized water passed through the filters was also analyzed. Blank tests showed that the filtration introduced insignificant levels of contamination for all the elements of interest.

Results

Major elements and nutrients

In the dissolved fraction, wide ranges of concentrations are recorded for the three sites from 2004 to 2006 for major and minor elements (Table 1). Concentrations of major cations do not differ significantly between sites except Ca2+, which is higher in 1a and 1b (Fig. 2). This Ca-rich content could be explained by the origin of the colluvial materials. Amphiboles, the main minerals of amphibolite, might cause the high Ca values at piezometers 1a and 1b. The low Ca values at sites 2 and 3 indicate a different circulation system, perhaps via lateral input of water from a low-Ca source (gneisses). For each piezometer, the cation concentrations do not show an important temporal evolution (less than 20%, Fig. 2). Results for the anion distribution are different (Table 2). Strong temporal variations are observed for (i) HCO3-, mainly in 3a and 3b, and (ii) NO3- and Σ(Cl- + SO4 2-) in 2a, 2b, 3a and 3b. For 3b in 2005, HCO3- concentrations were high (3.70 ± 0.23 mM), and no correlation exists between HCO3- and Ca2+. Because the waters circulate in crystalline rocks, HCO3- cannot originate from bedrock weathering. During this period, the concentrations of some other elements (Ca, Na, Mn and Fe) are also high. Calculations show that the soil solutions are in thermodynamic equilibrium with a pCO2 (5.11 × 10-2 atm in 2005) higher than the average pCO2 over 2004-2006 (1.72 × 10-2 atm). For sites 2 and 3, NO3- concentrations exhibit strong temporal variations (Table 2), with low values during spring, indicating vegetation assimilation, absorption and/or denitrification.

Fe, Mn, Al, Si and organic matter

For each piezometer, the results on soluble/particulate distributions (Table 3) show that Fe (except in 1a) and Al are mainly associated with particles, while Mn, Si and organic matter have a ubiquitous behaviour. The particulate phases in the soil waters are rich in Fe, Al and Si. The composition of the particles varies spatially and temporally. At sites 1 and 3, particles are richer in Si. The silicon percentage increases from March 2004 to May 2006, while Fe and Al decrease at all sites. In the dissolved and particulate fractions, Al is not linked to Si (no correlation between these elements is recorded), indicating that the source of Al may not be aluminosilicate minerals but rather Al oxyhydroxides. Correlations between Al, Fe, Mg and POC are observed in the particulate fraction. These relationships will be discussed site by site. In January 2005, soil waters at site 3b exhibited more reducing characteristics, with maximum recorded concentrations of dissolved Fe (4.31 × 10-5 mol/L) and dissolved Mn (3.82 × 10-5 mol/L) (Table 2).
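The fraction bookkeeping and quality checks described in the methods can be made explicit with a short sketch. The following Python snippet is only an illustration (it is not the authors' code; the element lists and example values are assumptions, roughly in the range of the reported concentrations): it computes the particulate fraction by difference between bulk and dissolved measurements, and applies the 10% charge-imbalance criterion.

```python
def particulate(bulk, dissolved):
    """Particulate concentration = bulk - dissolved (0.45 um filtrate), same units."""
    return {el: bulk[el] - dissolved[el] for el in bulk}

def charge_imbalance(cations_eq, anions_eq):
    """Relative charge imbalance; inputs are concentrations in charge equivalents."""
    pos, neg = sum(cations_eq.values()), sum(anions_eq.values())
    return abs(pos - neg) / (pos + neg)

# Illustrative sample (mol/L converted to equivalents by multiplying by |charge|).
cations = {"Ca2+": 2 * 2.4e-4, "Mg2+": 2 * 6.2e-5, "Na+": 1.5e-4, "K+": 7.0e-6}
anions = {"HCO3-": 7.1e-4, "Cl-": 2.9e-5, "SO42-": 2 * 2.2e-5, "NO3-": 1.4e-5}

imbalance = charge_imbalance(cations, anions)
print(f"charge imbalance: {imbalance:.1%}")  # accepted if below the 10% criterion
```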
Trace elements

Concentrations of trace elements are presented in Table 4.

Site 1

In the particulate fraction, three groups of trace elements can be distinguished. In 1a, Cr is linked to Al (R2 = 0.986); Co, Pb and Zn are strongly linked to Fe (R2 = 0.999, 0.992 and 0.992, respectively). In 1b, Co, Cr, Cu, Ni, Pb, U and Zn are linked to Mg, Fe and Al (Table 5). No relationship between trace metals and organic matter is recorded. In the dissolved fraction, it was observed that (1) in 1a and 1b, some elements are linked to HCO3-; the relationships are particularly significant for Ca (R2 = 0.726 in 1a and 0.813 in 1b), Mg (R2 = 0.831 in 1a and 0.902 in 1b), Sr (R2 = 0.868 in 1a and 0.907 in 1b) and Ba (R2 = 0.874 in 1a and 0.813 in 1b); and (2) in 1a and 1b, Cr is linked to Al (R2 = 0.986 and 0.906, respectively). The correlations between HCO3- and some elements (Ca, Mg, Sr and Ba) could indicate that these elements are linked to carbonate dissolution. Chromium is linked to Al in both the dissolved and the particulate fraction (1b). In 1b, the similar behaviour of Al, Fe and Mg suggests that these elements are controlled by the same mechanism. Since Al oxyhydroxides, Fe oxyhydroxides and clays are particles with different properties, if Al is present as Al oxyhydroxide, Fe as Fe oxyhydroxide and Mg as clay, this result could also suggest the presence of mixed particles, which is corroborated by a link between Al and Mg (R2 = 0.978). The absence of correlation between organic matter content and trace metals could be attributed to the fact that the reactive fraction of the organic matter could not be assessed in the relationship [START_REF] Harter | Effect of soil pH on adsorption of lead, copper, zinc, and nickel[END_REF][START_REF] Mcbride | Solubility control of Cu, Zn, Cd and Pb in contaminated soils[END_REF]. No evidence of mixed particles is recorded in 2a.

Site 3

In the particulate fraction, the correlation diagrams delineate several groups of elements. In 3a, Pb and U are correlated with Si (R2 = 0.850 and 0.888, respectively); Al, Co and Cr are correlated with Mg (R2 = 0.943, 0.900 and 0.919, respectively); Co, Cr, Pb and Zn are strongly correlated with Al (Table 8). In 3b, no correlation exists between Si, Mg and the metals; Co, Cr, Cu, Ni, U and Zn are correlated with Fe (Fig. 4) and with Al (except Ni) (Table 8); Cr and Cu are correlated with POC (R2 = 0.980 and 0.987, respectively). In the dissolved fraction, in 3a, Co and Mn are correlated with Fe (R2 = 0.947 and 0.964, respectively), and in 3b, Cu, Mn, Sr and U are correlated with HCO3- and Ca (Table 9). In 3a, some trace metals interact with clay and Al oxyhydroxide, as a relationship between Al and Mg (R2 = 0.943) is recorded in the particles. In 3b, the observed relationships between POC and Fe (R2 = 0.985), Al and Mg (R2 = 0.971), and Al and Fe (R2 = 0.940) suggest mixed particles made of a combination of clay + Al-Fe oxyhydroxides + POC, but here the trace metals interact mainly with the Al and Fe oxyhydroxides. Thus, at site 3, mixed particles can also be assumed, but with a composition and/or reactivity that differs between horizons (interaction with carbonate in solution for 3b).

Discussion

Chemical characteristics of soil solution and water circulation

Waters circulating at sites 2 and 3 are more mineralized than those at site 1. The soil solution composition seems to depend on the water residence time.
The Ca-rich waters at site 1 suggest that the water circulating at sites 2 and 3 does not come mainly from vertical infiltration but rather from lateral flow. The Cl- and SO4 2- variations could be linked to the hydrodynamics of the soil solution, which is controlled both by effective rainfall variations and by vegetation uptake. During high rainfall, transfer and dilution processes can occur. Autumn rainfall flushes the evaporated, and thus concentrated, superficial waters, whereas winter rainfall causes dilution. From November 2004 to May 2005, the high pCO2 recorded at site 3 may indicate that confined waters circulated in the studied horizon. This could be explained by a shoaling of the water table. This explanation has also been proposed by Alberic et al. (Unpublished study), based on redox data. The reduction in NO3- content recorded at sites 2 and 3 in spring 2005 and 2006 may be linked to denitrification, because the concentrations of dissolved Fe and, mainly, dissolved Mn increase at the same time. The subsurface water circulation pattern is not in agreement with the topography. Site 2 seems to receive mainly lateral waters. Site 3 water may originate from site 2, from the side, or from a confined aquifer. Also, an evolution of trace element behaviour from upstream to downstream cannot be discounted.

Range of the redox state of soil solutions

Redox reactions play an important role in the chemistry of natural water systems and influence the mobility and availability of many elements. In this work, the range of pe values in the soil solutions was calculated in order to determine the degree of oxidation of trace metals and to study the links between metals and Fe/Mn/Al oxyhydroxides. Temporary waterlogging of soils may cause the redox potential to decrease. In the soil solutions, Fe and Mn are redox-sensitive elements and can be used to evaluate the pe values, which were calculated using a pe-pH diagram, considering different redox couples and solids. Here, pe-pH diagrams were constructed considering the following species: Fe2+, Fe3+, FeOH2+, Fe(OH)2, FeOOH and Mn2+, MnO2. For the Fe pe-pH diagram, soluble Fe was considered to be at equilibrium with FeOOH (pKs = -1, [START_REF] Michard | Equilibres chimiques dans les eaux naturelles[END_REF]) rather than with Fe(OH)3, because of the Fe minerals (mainly goethite) present in the studied soil [START_REF] Cornu | Trace element accumulation in Mn-Fe-oxide nodules of a planosolic horizon[END_REF]. The highest calculated pe values for Fe and Mn are +1.3 and +2.2, respectively, and the lowest are -6.0 and -2.5, respectively. For all piezometers, the calculated Mn pe values are higher than the Fe pe values. Differences of up to 1.2-2.5 pe units are observed between the Fe and Mn redox couples, indicating a disequilibrium in the soil solutions. Redox reaction rates are typically slow, and in this dynamic system, where several redox reactions can occur simultaneously, the redox species can hardly reach equilibrium. Nevertheless, these calculations provide a range for the redox states of the soil solutions. The pe ranges are different for each piezometer (Fig. 5), but correspond to anoxic conditions (pe values < +2.5) from November to May of each year. When pe < 0, electrons are available for SO4 2- reduction. In that case, typical products in soil solution are H2S, bisulphide (HS-) or thiosulphate (S2O3 2-) ions. However, in the field, the typical smell of these products was only present at site 3.
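The form of this calculation can be illustrated with a short sketch. The snippet below is not the authors' computation: it only applies the Nernst-type relation for one Fe couple, the equilibrium constant is an assumed, illustrative textbook value (not the constant from Michard), activities are approximated by concentrations, and an analogous expression with its own constant would be used for the MnO2/Mn2+ couple.

```python
import math

# Illustrative half-reaction (assumed goethite-like constant, not the paper's value):
#   FeOOH(s) + 3 H+ + e-  ->  Fe2+ + 2 H2O,   log K ~ 13
LOGK_FEOOH = 13.0  # assumption, for illustration only

def pe_from_fe(fe2_mol_per_l, ph):
    """pe of the FeOOH/Fe2+ couple: pe = log K - 3*pH - log10[Fe2+]."""
    return LOGK_FEOOH - 3.0 * ph - math.log10(fe2_mol_per_l)

# Example with the dissolved Fe maximum quoted above for 3b and an assumed pH of 6.5.
print(round(pe_from_fe(4.31e-5, 6.5), 1))  # negative pe, i.e. reducing conditions
```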
The pe values for site 3 are lower than those for sites 2 and 1. Upstream (site 1), the waters have been recently infiltrated and contain little O2. Then, during water circulation in the soil (site 2 and then site 3), mineral reactions result in a further decrease in O2 content and in pe values.

Behaviour of trace elements

Binary correlations provide evidence when one mechanism prevails in the regulation of an element's behaviour; if several mechanisms with similar impacts are involved, they cannot be identified. At the scale of the three sites, various behaviours are highlighted, notably: (i) good correlations between Co and Mn in the soluble phase (R2 = 0.908; Fig. 6a), (ii) strong correlations between U and Fe in the particulate phase (R2 = 0.958; Fig. 6b), (iii) some trace metals are mainly associated with particles (Table 4) and (iv) very few elements are correlated in the dissolved fraction. Within the narrow range of calculated redox states, no real difference is noticed from site 1 to site 3, except perhaps for Cr and U in the particulate fraction. The relationship between Co and Mn reflects the well-known affinity and adsorption (or co-precipitation)/desorption of Co onto amorphous MnO2. Cobalt associated with Mn-oxide phases is probably released (or maybe reduced) when the oxides are reduced [START_REF] Spencer | Aspects of the distribution and trace element composition of suspended matter in the Black Sea[END_REF][START_REF] Murray | The interaction of cobalt with hydrous manganese dioxide[END_REF]. Significant correlations between dissolved Co and dissolved Mn in aquatic systems were also found by [START_REF] Lienemann | Association of cobalt and manganese in aquatic systems: chemical and microscopic evidence[END_REF]. Most of the Co in soils is contained in, or associated with, Mn in mineral form. Indeed, there is a large literature on the adsorption of cobalt by MnO2: in soils and sediments (Means et al., 1978b; [START_REF] Alloway | Soil processes and the behaviour of metals[END_REF]), in fresh waters [START_REF] Hem | Thermodynamic stability of CoOOH and its coprecipitation with manganese[END_REF], in lakes [START_REF] Balistrieri | The biogeochemical cycling of trace metals in the water column of lake Sammamish, Washington: response to seasonally anoxic conditions[END_REF] and in sea water [START_REF] Knauer | Cobalt in north-east Pacific waters[END_REF][START_REF] Hem | Thermodynamic stability of CoOOH and its coprecipitation with manganese[END_REF][START_REF] Santschi | Factors controlling the biogeochemical cycles of trace elements in fresh and coastal marine waters as revealed by artificial radioisotopes[END_REF][START_REF] Shaw | Early diagenesis in differing depositional environments: the response of transition metals in pore water[END_REF]. Studies of the adsorption of Co(II) on synthetic birnessite have also been carried out by Crowther et al. (1983), who proposed several mechanisms of Co incorporation into the Mn-rich phases of minerals. In the studied soil solutions, co-precipitation and adsorption seem to be the mechanisms that explain the link between Co and Mn. Some studies have shown a high sorptive capacity of Mn oxyhydroxides and MnO2 towards some trace metals.
In various natural settings [START_REF] Jenne | Controls on Mn, Fe, Co, Ni, Cu and Zn concentration in soils and water: the significant role of hydrous Mn and Fe oxides[END_REF][START_REF] Carpenter | Fe-Mn oxide coatings in stream sediment geochemical surveys[END_REF][START_REF] Robinson | Adsorption of Cu, Zn and Pb near sulfide deposits by hydrous manganese-iron oxide coatings on stream alluvium[END_REF][START_REF] Dillard | The oxidation states of cobalt and selected metals in Pacific ferromanganese nodules[END_REF][START_REF] Lind | Manganese minerals and associated fine particulates in the streambed of Pinal Creek, Arizona, U.S.A.: a mining-related acid drainage problem[END_REF] and in a series of laboratory experiments, many metals are closely associated with Mn oxyhydroxides, including: Zn [START_REF] Balistrieri | The surface chemistry of sediments from the Panama Basin: the influence of Mn oxides on metal adsorption[END_REF][START_REF] Catts | Adsorption of Cu, Pb and Zn by (delta)MnO 2 : applicability of the site binding-surface complexation model[END_REF][START_REF] Hem | Synthesis and stability of hetaerolite, ZnMn 2 O 4 , at 25°C[END_REF], Cu and Ni [START_REF] Balistrieri | The surface chemistry of sediments from the Panama Basin: the influence of Mn oxides on metal adsorption[END_REF][START_REF] Hem | Coprecipitation and redox reactions of manganese oxides with copper and nickel[END_REF], Cd [START_REF] Hem | Coprecipitation mechanisms and products in manganese oxidation in the presence of cadmium[END_REF], Pb [START_REF] Catts | Adsorption of Cu, Pb and Zn by (delta)MnO 2 : applicability of the site binding-surface complexation model[END_REF]; but also in deep-sea sediments (Cu, Ni, Pb and Zn) [START_REF] Thomson | Redox zonation of elements at an oxic / post-oxic boundary in deep-sea sediments[END_REF], in sea water (Ni, Cu and V) [START_REF] Shaw | Early diagenesis in differing depositional environments: the response of transition metals in pore water[END_REF] and as coating on quartz [START_REF] Manceau | Natural speciation of Ni, Zn, Ba and As in ferromanganese coatings on quartz using X-ray fluorescence, absorption, and diffraction[END_REF]. The importance of processes occurring at reaction sites on solid-phase surface in contact with Mn-bearing solutions has long been recognized, but these results are not observed here. In this work, redox equilibrium and oxidation kinetics of Mn are important factors in the speciation of trace metals. Significant correlations exist between U and Fe in the particulate fraction. Uranium transport is dominated by U sorbed to Fe oxyhydroxides which are >0.45 µm in size. In spite of the dynamic redox environment expected in soil solutions, this result shows that U comprised of both U(IV) and U(VI) species. Some authors [START_REF] Ragnarsdottir | Environmental mineralogy: microbial interactions, anthropogenic influences, contaminated land and waste management[END_REF] have shown that the most important species for near neutral waters are the uncharged UO 2 (OH) 2 0 and U(OH) 4 0 species. The trace metal concentration is governed by a number of interrelated processes, including inorganic and organic complexation, precipitation/dissolution reactions, adsorption/desorption reactions and redox reactions [START_REF] Evans | Chemistry of metal retention by soils[END_REF][START_REF] Singh | Soil and water contamination by heavy metals[END_REF]. 
These observed results indicate the high impact of particles on the regulation of trace metal mobility (Table 10). In addition, they point to the mixed nature of the particles. If Al is mainly present as oxyhydroxides (no correlation with Si), its links with Mg suggest an association with clay minerals. The same is observed for Fe. Adsorption of Fe and Al oxyhydroxides can occur on clay minerals and involves various types of binding between metals (Mn+) and oxyhydroxides and/or clay minerals: (i) Mn+ - Al/Fe oxyhydroxides, (ii) Mn+ - clay minerals, and/or (iii) Mn+ - Al/Fe oxyhydroxides - clay minerals. Some authors have provided evidence for similar mechanisms. [START_REF] Robinson | Adsorption of Cu, Zn and Pb near sulfide deposits by hydrous manganese-iron oxide coatings on stream alluvium[END_REF] showed that the chemical properties of Mn oxides are associated with clay minerals. [START_REF] Jenne | Controls on Mn, Fe, Co, Ni, Cu and Zn concentration in soils and water: the significant role of hydrous Mn and Fe oxides[END_REF] noted the probable occurrence of Fe and Mn oxides as partial coatings on other minerals (notably clay minerals). Clay minerals and oxyhydroxide minerals have long been recognized as the main metal sorbents in soils [START_REF] Jenne | Controls on Mn, Fe, Co, Ni, Cu and Zn concentration in soils and water: the significant role of hydrous Mn and Fe oxides[END_REF][START_REF] Evans | Chemistry of metal retention by soils[END_REF] and in aquatic systems [START_REF] Förstner | Metal pollution in the aquatic environment[END_REF][START_REF] Benjamin | Multiple-site adsorption of Cd, Cu, Zn and Pb on amorphous iron oxyhydroxide[END_REF][START_REF] Bilinski | Trace metal adsorption on inorganic solid phases under estuarine conditions[END_REF]. Soil solutions were collected above and in the gravelly and concretion-rich horizon in order to assess the impact of this horizon on the distribution of trace metals in the soil waters. In this horizon, the concretions are Fe- and Mn-rich coatings and can stabilize trace metals by adsorption or co-precipitation with Fe-Mn oxyhydroxides under oxidizing conditions [START_REF] Jenne | Controls on Mn, Fe, Co, Ni, Cu and Zn concentration in soils and water: the significant role of hydrous Mn and Fe oxides[END_REF][START_REF] Stumm | Aquatic chemistry: Chemical equilibria and rates in natural waters[END_REF]. When soils become reduced, trace metals (in soluble or particulate form) can be released into the soil solution by reductive dissolution of the Fe-Mn oxyhydroxides. Much higher correlations were found between Fe/Al/Mg and/or POC and trace metals in the concretion-rich horizon. This can be linked to (i) the concretion composition or (ii) the hydrodynamics (reaction time between water and soil, controlled by porosity) and the chemical reactivity of the circulating solution.

Conclusions

This study highlights the distribution, behaviour and controlling factors of some trace metals in unpolluted soil solutions of a planosolic horizon and improves knowledge of this kind of solution. The relationships between the different parameters and the trace metals are complex and cannot be related to a single parameter. However, the results show that Fe and Al oxyhydroxides, clay minerals and organic matter play an important role in trace metal distribution and mobility. In these weakly reducing waters, trace metals are mainly linked to particles.
In addition, the concretion-rich horizon has an impact both on the trace element composition and on the particle composition, although this impact cannot be linked to the chemical distribution of the horizon.

Table 1: Analytical results (mean, min, max and median) of T (°C), pH, electrical conductivity C (µS/cm), HCO3-, Na+, K+, Ca2+, Mg2+, H4SiO4, Cl-, SO4 2-, NO3-, ΣPO4 and F- for samples from 1a, 1b, 2a, 2b, 3a and 3b in the dissolved phase.

Figure 1: a) Schematic diagram of the studied toposequence (modified from Cornu et al., 2005). b) Study area and location of the three water sampling sites (piezometers 1a, 1b, 2a, 2b, 3a and 3b) (modified from Alberic et al., Unpublished study).

Figure 2: Cation and anion distribution (expressed in % mol/L) in the three soil solution sites from 2004 to 2006.

Figure 3: Correlations between some trace elements and Fe in 2b, in the particulate phase of the soil solutions.

Figure 4: Correlations between some trace metals and Fe in 3b, in the particulate phase of the soil solutions.

Figure 5: pe range calculated for the soil solutions from the six piezometers (see text for explanation).

Figure 6: a) Correlations between Co and Mn in the soluble phase (piezometers 1a, 1b, 2a, 2b, 3a and 3b). b) Correlations between U and Fe in the particulate phase (piezometers 1b, 2a, 2b and 3b).

From March 2004 to May 2006, for sites 2 and 3, some constituents (notably Fe and Mn) exhibit interesting temporal patterns.

Table 2: Concentrations of HCO3-, Cl-, SO4 2-, NO3-, Mn and Fe in the dissolved fraction for each year.
Piezometer  Period     HCO3- (mM)   Cl- (µM)       SO4 2- (µM)    NO3- (µM)      Mn (µM)       Fe (µM)
2a          2004       1.33 ±0.18   332.5 ±38.5    269.5 ±7.5     30.0 ±1.1      0.09 ±0.06    0.14 ±0.04
2a          2004-2005  1.35 ±0.36   597.3 ±375.0   256.4 ±50.7    395.4 ±381.2   0.96 ±0.88    1.41 ±0.98
2a          2005-2006  0.96 ±0.36   607.0 ±349.0   289.5 ±19.5    485.4 ±484.6   1.08 ±1.02    1.59 ±1.25
2b          2004       1.59 ±0.26   414.5 ±105.5   181.0 ±11.0    75.0 ±51.0     9.60 ±9.59    1.25 ±1.15
2b          2004-2005  1.84 ±0.27   361.7 ±252.3   229.7 ±36.7    121.2 ±101.6   2.65 ±2.50    1.32 ±0.96
2b          2005-2006  1.42 ±0.60   340.7 ±243.3   248.5 ±72.5    252.8 ±247.2   4.69 ±4.59    0.97 ±0.46
3a          2004       1.76 ±0.01   77.5 ±17.5     136.0 ±33.0    14.5 ±14.5     0.81 ±0.47    1.10 ±1.00
3a          2004-2005  3.48 ±0.41   323.6 ±289.6   85.4 ±48.4     19.1 ±0.4      14.62 ±4.09   7.02 ±1.78
3a          2005-2006  1.43 ±0.30   578.5 ±107.5   436.0 ±117.0   2.4 ±1.0       0.87 ±0.68    1.77 ±0.49
3b          2004       1.75 ±0.05   93.0 ±32.0     144.5 ±37.5    14.0 ±14.0     1.16 ±1.01    1.29 ±1.19
3b          2004-2005  3.70 ±0.23   337.4 ±239.4   122.8 ±66.7    83.7 ±65.1     29.71 ±8.47   21.60 ±21.50
3b          2005-2006  1.90 ±0.75   439.7 ±256.3   356.1 ±191.9   2.9 ±2.9       6.68 ±4.92    2.89 ±1.58

Table 3: Concentrations of Al, Mn, Fe, Si and organic matter in the dissolved and particulate fractions for each piezometer (expressed per litre of solution). Columns: Al diss (µM), Al part (µM), Al part (%), Mn diss (µM), Mn part (µM), Mn part (%), Fe diss (µM), Fe part (µM), Fe part (%), Si diss (mM), Si part (mM), Si part (%), DOC (mg/L), POC (mg/L), POC (%).

1a  mean     2.74   184.3    88.3   1.28    2.05    48.1   2.17    90.2     75.6   0.18   0.32   55.6   11.2   1.4    13.2
1a  min-max  0.28-9.99   5.6-693.8    62.5-100.0   0.08-2.60    0.08-8.74    6.5-97.8   0.10-5.20    1.53-341.0     37.4-99.9   0.16-0.22   0.03-0.60   12.5-77.0   4.9-17.5    0.0-6.1    0.0-55.8
1a  median   1.76   26.5     95.3   1.24    0.48    49.2   2.39    7.34     78.6   0.17   0.36   68.5   10.9   0.6    3.6
1b  mean     1.59   149.8    95.6   3.19    2.21    34.7   1.59    159.4    87.6   0.19   0.37   58.1   10.8   1.6    14.8
1b  min-max  0.27-4.89   13.9-769.1   83.7-100.0   0.25-8.10    0.06-11.00   1.6-76.5   0.10-2.82    2.26-1010.0    61.5-99.9   0.15-0.26   0.00-0.73   0.0-80.5    4.7-21.8    0.0-4.4    0.0-44.3
1b  median   1.06   74.3     98.5   2.42    1.80    36.5   1.40    29.71    95.4   0.18   0.41   68.8   10.5   1.6    14.0
2a  mean     1.89   101.5    95.7   0.53    0.77    61.0   1.18    39.81    93.3   0.17   0.24   41.1   9.8    3.2    22.6
2a  min-max  0.10-5.85   25.4-305.7   82.9-99.9    0.03-2.10    0.00-1.97    0.0-98.2   0.10-2.84    8.27-130.0     75.2-99.9   0.14-0.22   0.00-0.66   0.0-81.7    4.3-18.3    0.0-10.5   0.0-50.4
2a  median   1.09   77.3     98.4   0.12    0.52    72.8   1.18    29.8     96.2   0.18   0.10   38.4   8.5    2.5    26.6
2b  mean     0.96   1033.8   99.7   2.98    5.73    61.8   1.06    466.7    99.0   0.33   0.62   50.1   12.7   10.0   38.1
2b  min-max  0.10-2.88   61.7-3999.0  99.3-100.0   0.01-19.20   0.20-31.10   4.8-99.9   0.10-2.40    22.68-4185.0   96.5-100.0  0.22-0.54   0.01-2.45   3.7-90.8    5.4-27.2    0.0-41.0   0.0-75.7
2b  median   0.89   258.1    99.7   1.22    2.25    65.9   0.97    85.74    99.1   0.31   0.36   47.2   11.9   8.6    37.4
3a  mean     2.18   435.9    97.3   5.21    2.78    54.0   3.21    103.77   91.6   0.14   0.17   43.3   21.4   2.5    10.5
3a  min-max  1.31-3.64   32.3-3199.0  94.5-99.9    0.18-18.70   0.55-10.18   4.0-96.8   0.10-8.80    13.82-620.0    76.4-99.9   0.10-0.21   0.01-0.40   2.6-79.8    15.8-33.5   0.0-8.7    0.0-35.3
3a  median   1.81   73.4     97.7   1.28    2.31    67.1   2.09    38.36    91.8   0.12   0.14   40.2   19.9   2.0    10.0
3b  mean     1.53   719.5    97.7   14.32   8.39    26.0   6.06    974.9    88.4   0.24   0.28   51.5   18.3   3.6    14.1
3b  min-max  0.26-3.60   29.1-6466.4  91.8-99.9    0.15-38.18   0.00-64.88   0.0-97.6   0.10-43.10   3.85-11165.0   46.3-99.9   0.13-0.67   0.00-0.53   0.0-74.1    11.4-26.2   0.0-18.6   0.0-54.6
3b  median   1.07   64.0     99.2   7.63    2.31    16.5   2.25    57.52    90.0   0.21   0.25   57.3   17.0   1.6    8.2

Acknowledgments

This work was supported by Le Conseil Régional du Centre as part of the METALOE program.
We gratefully acknowledge Martine Bouhnik-Le-Coz (CAREN Géosciences Rennes) for ICP-MS analysis and Mrs Narbone (IUT Tours) for a part of AAS and AES analysis. Two anonymous reviewers supplied very useful comments. Pr. Gabriel Filippelli, Associate Editor, proposed important comments which improved the present manuscript.
04096517
en
[ "info" ]
2024/03/04 16:41:18
2023
https://inria.hal.science/hal-04096517/file/3544549.3583905.pdf
Ambre Assor email: [email protected] Arnaud Prouzeau email: [email protected] Pierre Dragicevic email: [email protected] Martin Hachet email: [email protected] Exploring Augmented Reality Waste Data Representations for Eco Feedback Keywords: Visualization, Concrete Scales, Augmented Reality, Embodied Interactions Figure 1: Illustrative examples of ARwavs we prototyped. (a) On the top left hand corner, a user looks at the amount of waste produced by his corporate restaurant in a week. The waste is represented by virtual trash bags which are displayed directly in the restaurant using augmented reality. (b) On the top right hand corner, 9 Litres of water represented using water bottle next to the toilets (average flush amount of water).(c) On the bottom left, the approximate amount of material displaced and emitted to manufacture 8 smartphones . (d) Bottom right, the quantity of single-use plastic cups accumulated for a given period of time. Captured with Hololens. The negative impact humans have on the environment is partly caused by thoughtless consumption leading to unnecessary waste. A likely contributing factor is the relative invisibility of waste: waste produced by individuals is either out of their sight or quickly taken away. Nevertheless, waste disposal systems sometimes break down, creating natural information displays of waste production that can have educational value. We take inspiration from such natural displays and introduce a class of situated visualizations we call augmented-reality waste accumulation visualizations or ARwavs, which are literal representations of waste data embedded in users' INTRODUCTION The adverse impact humans have on the environment (through, e.g., air pollution, plastic pollution, soil erosion, or damage to biodiversity) is one of the biggest challenges currently faced by our society. Although the causes are complex and numerous, individual behavior and lifestyles have been identified as among the key contributors [START_REF] Alpizar | A framework for selecting and designing policies to reduce marine plastic pollution in developing countries[END_REF][START_REF] Masson-Delmotte | Global warming of 1.5°C: an IPCC special report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways[END_REF]. A factor that likely contributes to the excessive consumption and unnecessary waste production is the invisibility of waste there: when we buy a new smartphone, we do not see the metal mines that went into its construction; Even when we directly create waste, it quickly goes away and becomes invisible (dirty water flow to the treatment plant after showering or flushing, trash is picked up by garbage collectors). We propose to use augmented reality to recreate these accumulations in the direct surroundings of users. These visualizations fill a gap in eco-feedback research [START_REF] Froehlich | The design of eco-feedback technology[END_REF], where most systems convey resource consumption and waste production using units and visual representations that are useful but often abstract and potentially difficult to grasp intuitively. Because ARwavs use literal representations of waste amounts [START_REF] Chevalier | Using Concrete Scales: A Practical Framework for Effective Visual Depiction of Complex Measures[END_REF] (e.g., 300 litres of garbage can be represented with ten 30-litres trash bags), we expect them to give a more visceral sense of quantities and stand as more engaging representations. 
However, ARwavs may not be self-sufficient in every situation, for example when quick value judgments are needed. Hence, we are starting to explore embodied transitions with classic 3D aggregates (barplots, tables...) and include them in our demonstration. We already conducted a study suggesting that ARwavs elicit stronger emotional responses than less immersive display modalities (3D on screen) and simpler information representations (numerals); the results are under review and available as a preprint [START_REF] Assor | Augmented Reality Waste Accumulation Visualizations[END_REF]. In this interactivity paper, we: (1) present the prototypes we plan to demonstrate, namely several 1:1 scale ARwavs pictured within scenarios and an embodied way of transitioning to 3D aggregates; and (2) explain the advantages of participating in this interactive experience and clarify the opportunity it offers for initiating novel discussions in the community.

BACKGROUND

In this section we give an overview of AR-HCI research on eco-feedback, i.e., "technology that provides feedback on individual or group behaviors with a goal of reducing environmental impact" [START_REF] Froehlich | The design of eco-feedback technology[END_REF], and present some related work in information visualization. A complete related work review is available at https://hal.inria.fr/hal-03907474.

AR Eco-Feedback interfaces

AR has already been explored for eco-feedback technologies, for example with AR overlays of energy consumption on physical building models [START_REF] Ens | Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics[END_REF] and heat cameras to visualize thermal loss [START_REF] Boomsma | Improving the visibility of energy use in home heating in England: Thermal images and the role of visual tailoring[END_REF]. But as far as we know, enhancing real-world environments with AR for eco-feedback purposes has only been mentioned as a possibility in past research (e.g., in [START_REF] Carneiro | Comprehensible and interactive visualizations of spatial building data in augmented reality[END_REF][START_REF] David Fredericks | Visualising the Invisible: Augmented Reality and Virtual Reality as Persuasive Technologies for Energy Feedback[END_REF][START_REF] Santos | Changing environmental behaviors through smartphone-based augmented experiences[END_REF]). The only example of an ARwav we currently know of is a mobile app where users can visualize a year's worth of trash bags in augmented reality [START_REF] Stark | Visualizing Impact Through Augmented Reality -Using Concrete Scales To Combat Waste[END_REF]. This prototype has directly inspired our work, but it has not been discussed in the academic literature, nor has it been generalized to other cases or evaluated. This paper fills this gap.

Information Visualization

Strategies for representing large quantities or unfamiliar units next to familiar landmarks have been formalized in information visualization under the concrete scale framework [START_REF] Chevalier | Using Concrete Scales: A Practical Framework for Effective Visual Depiction of Complex Measures[END_REF]. Inspired by this framework, Lee et al. [START_REF] Lee | Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality[END_REF] introduced the concept of data visceralization, focusing on VR experiences, defined as "a data-driven experience which evokes visceral feelings within a user to facilitate intuitive understanding of physical measurements and quantities".
These articles do not discuss eco-feedback as a possible application, and the latter identifies AR as a possible direction for future work. Following the definition of Drucker and Fernandez [START_REF] Drucker | A unifying framework for animated and interactive unit visualizations[END_REF], ARwavs are unit visualizations: "each data item becomes a unique visual mark; such visualizations strictly maintain the identity of each visual mark and its relation to a corresponding data item." 1 However, they may not be suited to every situation: aggregates have advantages for value judgments and for readability on large datasets. Animations and transitions on on-screen unit representations have already been explored to address these weaknesses [START_REF] Drucker | A unifying framework for animated and interactive unit visualizations[END_REF]. In our scope, the direct interactions enabled by AR units and 3D representations of classic aggregates 1 (barplots, graphs...) open a wide range of explorations toward embodied transitions.

IMPLEMENTATION

The ARwavs were implemented with a Microsoft Hololens 2. The see-through holographic display is equipped with 4 visible-light cameras for head tracking and map building. IR-reflectivity streams are used to compute depth, along with a depth camera that operates in two modes: high-frequency (45 FPS) near-depth sensing used for hand tracking and low-frequency (1-5 FPS) far-depth sensing used for spatial mapping. The display is equipped with speakers enabling built-in spatial sound. We developed with Unity 2020.3.18f1 and used the MRTK toolkit 2 . We enabled hand and head tracking, and set the spatial awareness module to scan the environment at start (update interval of 3.5 seconds). The objects making up the ARwavs (trash, bottle, mud and plastic cup) were either found on the Unity Asset Store or downloaded and adapted from free 3D model repositories 3 . The abstract representations were adapted from Unity packages 4 . They were modified to take their input from CSV files containing the data to show. We adapted the visualizations to make them AR-compatible, to enable manipulation of their elements, and to make them dynamic so that they can interact with units.

WALK-THROUGH

Augmented-reality waste accumulation visualizations, or ARwavs, aim to help people better perceive the quantitative impact of their actions and decisions in order to inform them, raise their awareness, or encourage them to change their habits. Figure 1 (a) illustrates a scenario where employees can perceive, directly in their corporate restaurant, a week of collective waste production represented with virtual trash bags. In the following sections we introduce the ARwavs we will present at the venue. The last scenario puts in context our current explorations on transitions with more classical representations such as 3D bar charts.

Scenario 1: Personal motivation, water use. Water is a resource that many of us abundantly use at home when we take a shower, wash our hands, flush our toilets, or use appliances like washing machines. As water immediately disappears through the pipes, it can be hard to get a good impression of the amount we use over time. Knowing numbers is a good start, but is likely not enough. For instance, it might not seem much to consume 5 m3 of water in a month, but it actually represents about 28 full bathtubs, or an average bathroom filled with water up to 2 m high. ARwavs can literally show these volumes of water to make abstract numbers more concrete.
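At the core of each scenario is a simple conversion of a measured quantity into a count of familiar reference objects. The snippet below is only a minimal illustration of this concrete-scale conversion, not the code of our Unity prototypes; the reference volumes are rounded assumptions chosen for the example.

```python
import math

# Assumed reference volumes in litres (illustrative, rounded values).
REFERENCE_VOLUMES = {
    "1.5 L water bottle": 1.5,
    "bathtub (~180 L)": 180.0,
    "30 L trash bag": 30.0,
}

def concrete_scale_count(quantity_litres, reference):
    """Number of reference objects needed to represent a given volume."""
    return math.ceil(quantity_litres / REFERENCE_VOLUMES[reference])

# 5 m^3 of water used in a month, shown as bathtubs (about 28 of them) ...
print(concrete_scale_count(5000, "bathtub (~180 L)"))
# ... or a 9 L toilet flush shown as water bottles (as in Figure 1b).
print(concrete_scale_count(9, "1.5 L water bottle"))
```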
Figure 1 (b) shows an example where a user sees the amount of water used for a flush next to the toilets that have just been used.

Scenario 2: Group dynamics, ecological rucksack. Computers and mobile devices are powerful tools with many useful applications, but they have a significant ecological footprint. One way to capture this footprint is the concept of the ecological rucksack, which is the amount of matter displaced to build an object. 5 A smartphone requires an ecological rucksack of about 70 kg, which includes the amount of soil mined to obtain the metals that go into its fabrication and the amount of CO2 emitted during transportation and manufacturing. 6 Since the metal mines and the CO2 emissions are typically situated far away from where the phone is purchased and used, it is hard for people to have a vivid picture of them in mind, even when they have heard about them. In addition, computing devices often look clean and beautiful, so looking at them does not bring about any association with mines, soil, or greenhouse gases. With ARwavs, we can make this waste more salient to users. For example, imagine a research team discussing whether to buy eight new smartphones for a project or use old ones. An eco-conscious member of the team could use AR to show them the size of the ecological rucksack necessary to build the new phones, as a pile of soil (Figure 1 (c)). Having this heap of soil appear as if it were in the room (as opposed to, e.g., on a computer screen or in a magazine) could give the team members a more visceral sense of quantity and size. It also adds a dramatic dimension, which can help make environmental issues more salient and more influential in the discussion.

Scenario 3: Support for policies, plastic cups. Single-use plastic cups have negative repercussions on the planet. It is estimated that more than 50 billion are used in the US in one year. 7 Yet manufacturing them is costly in energy and resources, they are hard to recycle, and they are one of the ten most commonly found waste items on European beaches 8 . However, plastic cups are still routinely given away in cafés, shops and companies, and thus it remains easier for many people to use them instead of bringing a reusable cup. In this third scenario, a company chooses to ban plastic cups as part of a program to reduce its ecological footprint, but is afraid that not all of its employees will agree. The company has already communicated about the issue through figures and charts, but they went largely ignored. Therefore, the company decides to organize an event, with a booth in front of the coffee machine, where employees can observe through an AR-HMD (Head-Mounted Display) an ARwav composed of all the plastic cups typically used over a week (see Figure 1 (d)). With this immersive experience, most employees get a much better sense of the amount of plastic cups accumulated over time, and many become more willing to make the effort to bring their own reusable cup.

Scenario 4: Transitioning with classic representations in the 3D space. In Scenario 3, a company organized an event to raise awareness of plastic cup usage. The organizers would like to repeat this kind of event to raise awareness about the company's overall waste. However, the charts they want to communicate cover large datasets, and the ARwavs they want to use (50 L trash bags) are quite voluminous. This makes the experience difficult to set up.
The marketing team chooses to keep charts, as they have advantages such as ease of value judgment and readability on large datasets; however, they do not want to drop the ARwavs, which were appreciated for the emotions they elicit. To keep both, they introduce a 3D barplot in the AR space and allow embodied transitions to ARwavs. Employees can grab a bar from the barplot and throw it on the floor to see the amount of trash bags it represents appear and get a real-scale sense of it. They can grab the trash bags and put them back into the chart to clean the space and continue navigating the dataset.

DEMONSTRATION

In the demo booth, we plan on showing the average amount of water used for a 6.5-minute shower represented with water bottles, the amount of soil needed to build 5 and 10 smartphones, the amount of plastic cups used in a day at our lab (before the recent no-plastic policy), and an example of transitions between a barplot about trash emission and ARwavs. ARwavs act as situated information displays that are unique in their ability to convey waste production in a way that is immediately understandable by a large audience and can carry emotional impact. Testers get to experience these with integrated, to-scale holograms that may raise some awareness and initiate self-reflection. They will also be able to appreciate several embodied ways of transitioning from more classic representations of the same data. To tackle potential layout issues and the high flow of visitors, we developed a 1:10 scale scene showing ARwavs, see Figure 2. This mini mode allows observers to see the entire ARwav without having to move their head or step back. This version will not be shown systematically, but only when it fits the discussion with the tester or when it is more convenient given how busy the layout is. Not only does this version address layout limits, it also fits within the Hololens 2's constraints; indeed, the display field of view (43° × 29°) does not allow users to see the full AR scene without moving. Through this scaled version, users can get a first appreciation of the possible trade-offs involved in such AR visualizations, and these elements will be even more relevant for HCI or visualization researchers. This large-scale expert conference is a great opportunity to shed light on the challenges of eco-feedback. More broadly, this interactive experience will potentially generate discussions in the community to imagine and design around environmental issues. The conference already shows an interest in sustainable HCI through workshops and talks, and this demo will suitably feed these exchanges. Also, expert feedback is precious to push our work in the best direction.

Figure 2: Miniature view of an ARwav, with a car and a standing man silhouette as reference points.

Footnotes:
1. They are distinguished from aggregate visualizations, i.e. visualizations in which multiple data items are merged into visual aggregates that cannot be separated and that do not keep identity properties.
2. https://docs.microsoft.com/fr-fr/windows/mixed-reality/develop/unreal/unrealmrtk-introduction
3. https://free3d.com/, https://www.turbosquid.com/fr/, https://sketchfab.com/store/3dmodels
4. https://assetstore.unity.com/packages/tools/gui/3d-interactive-barchart-157887
5. https://www.gdrc.org/sustdev/concepts/27-rucksacks.html
6. https://ec.europa.eu/environment/ecoap/about-eco-innovation/expertsinterviews/friedrich-schmidt-bleek_en
7. https://plastic.education/the-problem-with-disposable-cups/
8. https://ec.europa.eu/environment/pdf/waste/single-use_plastics_factsheet.pdf

CONCLUSION

In this paper, we presented augmented-reality waste accumulation visualizations, or ARwavs, which are literal representations of accumulated waste embedded in users' physical environments. We went through several examples of ARwavs to illustrate the variety of situations where they can be useful. Minimizing the negative impact of humans on the environment is an incredibly complex problem, and ARwavs could play a useful role if they inspire future research or prove successful at engaging visitors.
04096535
en
[ "spi.meca.msmeca", "phys.meca.geme" ]
2024/03/04 16:41:18
2017
https://hal.science/hal-04096535/file/Additive%20Manufacturing%202018.pdf
Guoying Dong Grace Wijaya Yunlong Tang Yaoyao Fiona Zhao Optimizing Process Parameters of Fused Deposition Modeling by Taguchi Method for the Fabrication of Lattice Structures Keywords: Additive Manufacturing, Fused Deposition Modeling, Lattice structure, Optimization, Taguchi method The lattice structure is a type of cellular material that can achieve a variety of promising physical properties. Additive Manufacturing (AM) has relieved the difficulty of fabricating lattice structures with complex geometries. However, the quality of the AM fabricated lattice structure still needs improvement. In this paper, the influence of parameters of the Fused Deposition Modeling (FDM) process on lattice structures was investigated by the Taguchi method. S/N ratio analysis was used to find the optimal process parameters that improve the printing quality, and ANOVA provided a significance ranking of the various factors analyzed in this paper. It was found that the optimum level and significance of each process parameter vary for horizontal and inclined struts. In addition, compression tests investigate the influence of process parameters on the mechanical properties of lattice structures. The results show that process parameters optimized by print quality can also improve the elastic modulus and the ultimate strength of these lattice structures. Introduction The lattice structure is a truss-like structure with intersecting struts and nodes with a certain repeated arrangement over a volumetric region. Compared to other cellular materials such as foams and honeycombs, lattice structures have more flexibility to achieve a variety of physical properties including high stiffness-weight ratio [START_REF] Deshpande | Effective properties of the octet-truss lattice material[END_REF], low thermal expansion coefficient [START_REF] Lehman | Stiff lattices with zero thermal expansion and enhanced stiffness via rib cross section optimization[END_REF], negative Poisson ratio [START_REF] Yang | Non-stochastic Ti-6Al-4V foam structures with negative Poisson's ratio[END_REF], and high heat dissipation rate through active cooling [START_REF] Zhang | Ultralight X-type lattice sandwich structure (I): Concept, fabrication and experimental characterization[END_REF]. Due to these excellent characteristics, lattice structures have been extensively implemented in engineering applications, including ultralight structures [START_REF] Tang | Bidirectional Evolutionary Structural Optimization (BESO) based design method for lattice structure to be fabricated by additive manufacturing[END_REF][START_REF] Rosen | Computer-aided design for additive manufacturing of cellular structures[END_REF], energy absorbers [START_REF] Ozdemir | Energy absorption in lattice structures in dynamics: Experiments[END_REF], low thermal expansion structures [START_REF] Lehman | Stiff, strong, zero thermal expansion lattices via material hierarchy[END_REF], and conformal cooling [START_REF] Brooks | Design of conformal cooling layers with self-supporting lattices for additively manufactured tooling[END_REF]. 
In addition, lattice structures can be used as biocompatible materials for orthopaedic implant [START_REF] Khanoki | Fatigue design of a mechanically biocompatible lattice for a proof-of-concept femoral stem[END_REF] and tissue engineering [START_REF] Masood | The design and manufacturing of porous scaffolds for tissue engineering using rapid prototyping[END_REF][START_REF] Cheah | Development of a Tissue Engineering Scaffold Structure Library for Rapid Prototyping. Part 1: Investigation and Classification[END_REF]. In the past, the complexity of lattice structure design was severely restricted by traditional manufacturing techniques such as casting, sheet metal forming and wire bonding [START_REF] Wadley | Multifunctional periodic cellular metals[END_REF]. Only simple topologies and structures on a macro scale could be fabricated, unable to fully exploit the considerable potential of lattice structures. However, Additive Manufacturing (AM) technology provides an alternative approach to directly or indirectly [START_REF] Lee | Rapid investment casting: Direct and indirect approaches via fused deposition modelling[END_REF] fabricate highly complex lattice structures via its layer by layer manufacturing principle. Powder bed fusion [START_REF] Yan | Evaluations of cellular lattice structures manufactured using selective laser melting[END_REF][START_REF] Parthasarathy | Mechanical evaluation of porous titanium (Ti6Al4V) structures with electron beam melting (EBM)[END_REF] and material extrusion [START_REF] Lee | Rapid investment casting: Direct and indirect approaches via fused deposition modelling[END_REF][START_REF] Karamooz Ravari | Numerical investigation on mechanical properties of cellular lattice structures fabricated by fused deposition modeling[END_REF] are the most common AM techniques for manufacturing lattice structures. However, each manufacturing technology has its limitations, and AM is no exception. A wellknown limitation of AM is the need for support structures on down-facing surfaces [START_REF] Atzeni | Study on unsupported overhangs of AlSi10Mg parts processed by Direct Metal Laser Sintering (DMLS)[END_REF]. Some support structures are difficult to remove, which leads to inferior surface quality or failed fabrication. To better understand the capability of AM to fabricate overhanging structures, the manufacturability and accuracy of self-supporting surfaces need further investigation. A previous analysis on the effects of the process parameters of SLM on the roughness of selfsupporting surfaces [START_REF] Fox | Effect of Process Parameters on the Surface Roughness of Overhanging Structures in Laser Powder Bed Fusion Additive Manufacturing[END_REF] had limited experimental results and did not show a clear dependency on process parameters. Therefore, the relationship between process parameters of AM and the manufacturing quality of overhangs needs further analysis. In addition, several attempts have been made to investigate process parameters of different AM approaches to enhance the print quality including surface roughness [START_REF] Pupo | Influence of process parameters on surface quality of CoCrMo produced by selective laser melting[END_REF], density [START_REF] Kamath | Density of additivelymanufactured, 316L SS parts using laser powder-bed fusion at powers up to 400 W[END_REF] and dimensional accuracy [START_REF] Wang | A model research for prototype warp deformation in the FDM process[END_REF]. 
However, they focus on simple geometry rather than complex structures with many overhangs. Overhangs include horizontal struts and inclined struts, which are found in lattice structures and challenge the manufacturability of AM processes. Due to the lack of research relating the effect of process parameters on overhanging structures, there are no guidelines for finding the optimal process parameters of AM for the fabrication of lattice structures. Therefore, it is imperative to investigate the relationship between process parameters and the manufacturing quality of lattice structures to improve manufacturability. In this paper, an optimization approach using the Taguchi method is proposed to find optimal parameters to fabricate lattice structures through Fused Deposition Modeling (FDM), a type of material extrusion AM process. The Taguchi method has been immensely used to optimize process parameters in product design through comprehensive experimental investigation. Since AM technology is influenced by many process parameters and currently has a high cost, the quantity and cost of a full factorial method experiment will be quite considerable. The design of experiment using Taguchi's method can significantly simplify the experimental plan. The Orthogonal Array (OA) is implemented here to give unique combinations between parameters and their levels to minimize the number of experiments while still investigating the entire parameter space. In the Taguchi method, the analyses of experimental results have three primary objectives [START_REF] Roy | A primer on the Taguchi method[END_REF]. Firstly, studying the main effects of each factor gives the general influence of each factor. Secondly, by knowing whether higher or lower values produce the preferred result, the best levels of factors can be predicted. Finally, the contribution of each factor is established by analysis of variance (ANOVA), a statistical treatment to determine the relative percent influence and significance of each factor. After predicting the optimum conditions and expected performance, a confirmation test is usually conducted for verification. This is because the OA represents only a small fraction of all possibilities, so the optimum condition may not be present among the experimental combinations already carried out. Additionally, Signal-to-Noise (S/N) ratio analysis are conducted over the replicates to determine the most robust set of operating conditions from variations within the results. In recent years, a lot of research have assessed the influence of the process parameters on the quality characteristics of FDM by the Taguchi method. For example, Anitha et al. [START_REF] Anitha | Critical parameters influencing the quality of prototypes in fused deposition modelling[END_REF] used the Taguchi method to investigate layer thickness, road width and the speed deposition to minimize the surface roughness. ANOVA revealed that layer thickness is the most significant parameter; and from S/N analysis, the optimum levels are 0.3556mm layer thickness, 0.537mm road width and 200mm speed of deposition. Another study used the Taguchi method to evaluate the elastic performance and optimize the throwing distance of an FDM fabricated bow made with flexible ABS [START_REF] Lee | Optimization of rapid prototyping parameters for production of flexible ABS object[END_REF][START_REF] Laeng | Optimizing Flexible Behaviour of Bow Prototype Using Taguchi Approach[END_REF]. 
It was found that air gap, slice height and raster angle most significantly impacted the elastic performance and optimum FDM parameter combinations were obtained. In addition, the Taguchi method with gray relational analysis has been used. One study used gray Taguchi to analyze the ultimate tensile strength, dimensional accuracy and surface roughness to optimize the FDM process [START_REF] Chung | Optimizing the rapid prototyping process by integrating the Taguchi method with the Gray relational analysis[END_REF]. The results demonstrate that for different objectives, the optimal combinations and most essential parameters may be different. Sood et al. [START_REF] Sood | Improving dimensional accuracy of Fused Deposition Modelling processed part using grey Taguchi method[END_REF] investigated the dimensional accuracy of FDM processed ABS400 part to minimize the percent change in length, width and height. The results indicated that optimum settings for each performance were different, so the gray relation grade combined three responses into a single response to simultaneously optimize parameters. The gray Taguchi method has also evaluated the influence of process parameters on the mechanical properties of FDM fabricated parts [START_REF] Liu | Mechanical property parametric appraisal of fused deposition modeling parts based on the gray Taguchi method[END_REF]. In this research, the design of experiment based on the Taguchi array evaluates the fabrication of lattice structures. S/N ratio analysis finds the best level of process parameters, the ANOVA method investigates the significance of each process parameter, and the compression test evaluates the correlation between the print quality and mechanical performance of the lattice structure. This proposed approach is not limited to the FDM process; rather, the flow of experiments and method of data analysis is applicable to other AM processes for the selection of appropriate manufacturing parameters. This paper is organized as follows: Section 2 introduces the basic concepts used in this approach, Section 3 illustrates the design of experiments, and Section 4 presents the results of data analyses and compression test, and results discussion. Finally, the paper ends with conclusions and future research directions. Basic concepts 2.1 Manufacturable Element (ME) To link the design and manufacturing process of lattice structures, a concept called the Manufacturable Element (ME) [START_REF] Tang | Lattice Structure Design and Optimization With Additive Manufacturing Constraints[END_REF] is used in this paper. This concept is defined by Rosen [START_REF] Rosen | Computer-Aided Design for Additive Manufacturing of Cellular Structures[END_REF] as a predefined, parametrized decomposition of a volumetric region of a part. Applied to the characteristics of lattice structures focused in this paper, a ME of lattice structure is defined as a lattice strut with its related geometry, material and process information. To parametrically represent each ME of lattice structures, a data structure of ME is proposed and its graphic view is shown in Figure 1. The ME of lattice structures consists of geometrical data, material data and process data. The objective of this paper is to find the best process parameters for lattice structures fabricated by FDM. From the perspective of ME, it is to find the relationship between the process and geometrical data, while the material data is set to be constant. 
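To make this decomposition concrete, the sketch below shows one possible way to encode the ME of a single strut in code. The field names and grouping are illustrative (they are not taken from the cited ME formulation or from any existing implementation); they simply mirror the geometry/material/process split described above.

```python
from dataclasses import dataclass

# Illustrative encoding of a Manufacturable Element (ME) for one lattice strut;
# field names are ours, the grouping follows the geometry / material / process
# decomposition described above.

@dataclass
class StrutGeometry:
    diameter_mm: float       # circular cross section
    length_mm: float         # overhanging length (relevant for horizontal struts)
    incline_deg: float       # 0 = horizontal, 90 = vertical

@dataclass
class ProcessData:
    nozzle_temperature_c: float
    print_speed_mm_s: float
    fan_speed_percent: float
    layer_height_mm: float

@dataclass
class ManufacturableElement:
    geometry: StrutGeometry
    material: str            # e.g. "ABS", held constant in this study
    process: ProcessData

# Example: an inclined strut printed with one candidate parameter set.
me = ManufacturableElement(
    geometry=StrutGeometry(diameter_mm=2.0, length_mm=10.0, incline_deg=45.0),
    material="ABS",
    process=ProcessData(230.0, 40.0, 50.0, 0.1),
)
```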
For better printing quality, the fabricated geometry should be as precise as the designed geometry. However, less error in the fabricated geometry does not necessarily mean better mechanical performance. As a result, a compression test is also conducted to investigate the relationship between the mechanical properties of the lattice structure and the process parameters.

Figure 1. Graphic view of the data structure of the ME for lattice structures [START_REF] Tang | Lattice Structure Design and Optimization With Additive Manufacturing Constraints[END_REF].

In this research, the samples are single struts, the building blocks of lattice structures. This is because it is difficult to evaluate the dimensional accuracy of the interior of as-built lattice structures. Based on the geometrical data of the ME, there are three types of lattice struts: horizontal struts, vertical struts and inclined struts. In addition, the cross section of the lattice strut will be circular, resulting in similar toolpaths for vertical struts and inclined struts [START_REF] Tang | Lattice Structure Design and Optimization With Additive Manufacturing Constraints[END_REF]. Therefore, the vertical strut can be regarded as a special case of an inclined strut with a 90 degree incline. Consequently, there are two lattice strut groups in this experiment: the horizontal group and the inclined group, as shown in Figure 2. The controlled geometrical data include the strut diameter for both groups, the strut overhanging length for the horizontal group only, and the strut incline angle for the inclined group.

Failure Method
In the evaluation of the printing quality, a thickness deviation is defined to represent the difference between the maximum and minimum thickness on a single strut. It is calculated as the maximum fabricated thickness minus the minimum fabricated thickness [START_REF] Deshpande | Effective properties of the octet-truss lattice material[END_REF]. The definitions of the maximum and minimum thickness for both types of struts are shown in Figure 3. In general, the parts with the smallest difference had geometries closest to the modeled shape. However, because the levels tested cover a wide range of printing parameters, there were runs in which the geometry was unprintable or so badly printed that it could not be measured, as shown in Figure 4, resulting in missing information. Four different failure methods account for the missing data: omit the missing information, or set the deviation to the maximum deviation measured among parts of the same geometry, to the part diameter, or to twice the part diameter. These failure methods may overestimate or underestimate the quality of the failed strut. Careful comparison between these methods is necessary to find the optimal process parameters in the fabrication of lattice struts.

Design of Experiment
In the experiment, the material data of the ME is Acrylonitrile Butadiene Styrene (ABS) of unspecified grade. The process data of the ME include an Ultimaker 2 Extended+ as the machine and the FDM process as the fabrication strategy. Based on the design parameters in Section 2.1, the lattice struts are divided into two groups: the inclined group and the horizontal group. Detailed information about the design parameters is summarized in Table 1. Before selecting the controlled process parameters, preliminary experiments are conducted to investigate the influence of process parameters on the manufacturing quality of lattice structures. During preliminary experimentation, inclined and horizontal struts are printed under various process conditions to identify the most significant parameters.
The parameters which created significant variations between parts are nozzle temperature, fan speed, print speed, and layer height. For each parameter, different values are tested to find a suitable range and set of levels. The ranges of the parameters extend to the limits of manufacturability, beyond which poor parts result. In some cases, the machine's limitations define these limits: the nozzle temperature has a maximum of 255°C, and the fan speed can only be set between 0 and 100%. The intervals are chosen to show a significant difference in part quality between levels. After preliminary testing, four printing parameters (two with 4 levels, one with 3 levels, and one with 2 levels) were considered. Interactions between these parameters were not considered. The control parameters and their levels are summarized in Table 2. Other parameters are held constant during the manufacturing process and are listed in Table 3; these parameters showed little impact on part geometry during preliminary testing. An orthogonal array is modified to test the design parameters. The initial array, L16, can hold a maximum of fifteen 2-level factors or five 4-level factors (full factorials of 2^15 or 4^5 combinations, respectively). The linear graph in Figure 5 is used to merge nine 2-level columns into three 4-level columns [START_REF] Taguchi | Taguchi's Quality Engineering Handbook[END_REF]. This is done by combining the 2-level columns specified at the nodes of the linear graph, forming one 4-level column located in the column number specified by the line joining the nodes. To create the 3-level factor, a dummy treatment changes 'level 4' to 'level 1*' [START_REF] Roy | A primer on the Taguchi method[END_REF]. Level 1 was chosen to be repeated for more runs because a fan speed of 0% is an absolute measure, whereas 50% and 100% are relative values that depend on the printer used. The orthogonal array is shown in Table 4. Each set of parameters is used to print two types of structures: bridges with varying diameter and length, and inclined parts with varying diameter and angle, as listed in Table 1. The design parameters are tested with the full factorial method. There are 9 horizontal samples and 12 inclined samples in each set of process parameters, and each set is printed 3 times.

Results and Discussion
4.1 Analysis of S/N Ratio
S/N ratios measure the variation in the data. Since the best case is to have no difference between the maximum and minimum dimensions of the data, the target value is 0. Therefore, the S/N ratio is calculated by the lower-the-better formula [START_REF] Roy | A primer on the Taguchi method[END_REF]:

S/N = -10 log10( (1/n) Σ_i yi² )    (2)

where n is the number of replicated observations in a run and yi are the measured deviations. The results are analyzed according to their geometry: inclines and horizontal struts. The process and design parameters varied for horizontal struts are temperature, print speed, fan speed, layer height, diameter, and length. The parameters varied for inclines are temperature, print speed, fan speed, layer height, angle, and diameter. The level response of each factor level is found by taking the average of the S/N ratios of all runs within that factor level:

Level response = (1/N_level) Σ (S/N)    (3)

In addition, the response range for each factor is calculated by

Response range = Level response_max - Level response_min    (4)

The main effects of the different process parameters at different levels are shown in Figure 6 for inclined struts and Figure 7 for horizontal struts. A higher S/N ratio indicates better printing quality.
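The short sketch below is not the authors' code; it strings together the failure-method treatment of Section 2 with Eqs. (2)-(4) under assumed function and variable names, using invented replicate data, purely to illustrate the bookkeeping.

```python
# Hedged sketch: failure-method handling plus the S/N bookkeeping of Eqs. (2)-(4).
# All names (fill_failures, sn_lower_the_better, ...) are assumptions for illustration.
import numpy as np

def fill_failures(deviations, diameter, method="max_deviation"):
    """Replace NaN entries (unmeasurable struts) according to the chosen failure method."""
    d = np.asarray(deviations, dtype=float)
    nan = np.isnan(d)
    if method == "omit":
        return d[~nan]
    fill = {"max_deviation": np.nanmax(d),      # worst measured part of the same geometry
            "diameter": diameter,               # deviation set to the strut diameter
            "double_diameter": 2.0 * diameter}[method]
    d = d.copy()
    d[nan] = fill
    return d

def sn_lower_the_better(y):
    """Eq. (2): S/N = -10*log10(mean(y^2)) over the replicates of one run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def level_responses(sn_by_run, levels_by_run):
    """Eq. (3): average S/N ratio of all runs sharing a factor level."""
    sn, lv = np.asarray(sn_by_run), np.asarray(levels_by_run)
    return {u: sn[lv == u].mean() for u in np.unique(lv)}

# Invented example: replicate deviations (mm) for four runs, one failed print
runs = [[0.12, 0.10], [0.30, np.nan], [0.22, 0.25], [0.15, 0.18]]
runs = [fill_failures(r, diameter=2.0, method="diameter") for r in runs]
sn = [sn_lower_the_better(r) for r in runs]
resp = level_responses(sn, levels_by_run=[1, 2, 3, 4])
print(resp, max(resp.values()) - min(resp.values()))   # level responses and the Eq. (4) range
```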
The main effects charts demonstrate that the failure methods of taking the maximum deviation, the strut diameter, and twice the strut diameter follow similar trends, while the omission method does not. This is because the failed parts have extremely poor printing quality, which is not reflected when results are omitted. The omission therefore underestimates the negative influence of the process parameters on those failed parts. Consequently, setting the deviation equal to the maximum deviation, the strut diameter, or twice the strut diameter are more reasonable approximations for treating missing data due to failed prints, and these methods are used to find the optimal process parameters. However, there are still some inconsistencies between these three failure methods for some parameters, so additional experiments are conducted to further investigate those parameters, as discussed in the next subsection. The optimal levels of the process parameters can be selected from Figure 6 and Figure 7, indicated by the levels which produce the highest S/N ratio values. For inclined struts, the trends for nozzle temperature, printing speed and fan speed are the same for all the failure methods, which means the optimal level for those parameters can be determined regardless of the failure method. The optimal levels are level 4 or 255°C for nozzle temperature, level 2 or 1200 mm/min for print speed, and 50% for fan speed. The best level for layer height cannot be determined because the trends of the S/N values differ between the three failure methods. This indicates that some failure methods may overestimate or underestimate the influence of the layer thickness on inclined struts. Print failures interfere with the determination of optimal process parameters; they occur because the OA cannot guarantee that every combination yields good printing quality. However, since the other three optimal process parameters have been found, those three settings can be used to re-fabricate the inclined struts with better printing quality regardless of layer thickness. This will eliminate the influence of the failed samples, and the optimal level of layer thickness for inclined struts can be found. This additional experiment is conducted in the confirmation test to further investigate the undetermined process parameters. As for horizontal struts, three parameters (nozzle temperature, layer height and printing speed) can be determined consistently across the failure methods, while the fan speed shows some inconsistency among the failure methods. By finding the highest S/N ratio for those three process parameters in Figure 7, it is found that the best level for nozzle temperature is level 3 or 245°C, for printing speed it is level 1 or the slowest speed, and for layer height it is level 2 or a 0.2 mm layer thickness. However, for the fan speed, different failure methods show different trends, for the reasons explained above. Therefore, additional confirmation testing is required to determine the optimal fan speed for horizontal struts.

Confirmation test
In Section 4.1, the main effects charts did not show consistency between the three failure methods for the layer thickness of inclined struts and the fan speed of horizontal struts. Additional tests are conducted to further investigate the optimal level of layer thickness for inclined struts and of fan speed for horizontal struts. Five additional tests are designed, and the process parameters of each test are shown in Table 5.
For inclined struts, the layer height is set to 0.1 mm and 0.2 mm, and for horizontal struts, the fan speed is set to 0%, 50% and 100%. The other parameters are set at the optimal levels for inclined and horizontal struts respectively, as obtained in the S/N ratio analysis; these optimal levels minimize print failures in these five tests. The results of these tests are shown in Figure 8. A higher S/N ratio indicates better print quality. It can be concluded that a 0.1 mm layer height is better than a 0.2 mm layer height for inclined struts. This is because slicing with a smaller layer height reduces the overhang length of each layer. Additionally, 0% fan speed is better than 50% and 100% for horizontal struts. During the print, the first layer is very weak because of the large overhang, and the fan's airflow in the printing chamber will cause oscillation of the unsolidified layer. Therefore, reducing airflow by turning the fan off results in better printing quality for the horizontal struts. Through the S/N ratio analysis and the confirmation test, the optimal level of each process parameter is obtained and summarized in Table 6. The results show that the best process parameters for the two types of lattice struts are quite different, owing to the differing geometrical characteristics of horizontal and inclined struts. Therefore, if the parameters are varied dynamically during the printing process to suit each type of lattice strut, the printing quality of the entire structure can be further improved. In addition, tests No. 1 and No. 3 can be regarded as the confirmation tests for inclined struts and horizontal struts, respectively, because all of their process parameters are at the optimal levels. It can also be concluded that the S/N ratios of the optimized process parameters are larger than the S/N ratios in the L16 array, which means that the optimization improves the printing quality of both horizontal and inclined struts.

ANOVA
Analysis of variance for means is performed on the data to find the most significant process parameter for both inclined and horizontal struts. Three sets of ANOVA tables are calculated for each geometry type based on the three failure methods, which are the maximum deviation, the strut diameter, and twice the strut diameter. Interactions are not considered, as the factors were assumed to be independent. The degrees of freedom of factor A are calculated by

dfA = m - 1    (5)

where m is the number of levels of A. The total degrees of freedom are the number of observations minus 1. The numbers of samples are 576 and 432 for inclined and horizontal struts respectively; therefore dfTot,inclines = 575 and dfTot,hor = 431. Finally, the degrees of freedom of the error are calculated by

dfE = dfTot - Σ_{i=1..n} dfAi    (6)

where n denotes the total number of factors and dfAi is the degrees of freedom of each factor. The mean is calculated for each set of replicated trials, where the levels of each factor are held constant:

ȳi. = (1/n) Σ_j y_ij    (7)

where n is the number of observations within the level. The grand mean is calculated by

ȳ.. = (1/N) Σ_i Σ_j y_ij    (8)

where the sums run over all levels i and observations j, and N is the total number of observations.
The sum of squares of factor A is obtained by [START_REF] Christensen | Analysis of variance, design, and regression: applied statistical methods[END_REF]

SSA = N Σ_i (ȳi. - ȳ..)²    (9)

In this case, N is the number of observations at each factor level. The fan speed parameter requires an unbalanced ANOVA (Eq. 10) because the dummy variable used doubles the number of observations for a fan speed of 0%:

SSA = Σ_i n_i (ȳi. - ȳ..)²    (10)

where n_i is the number of observations at factor level i. Then the mean sum of squares can be obtained:

MSA = SSA / dfA    (11)

The total sum of squares is calculated by Eq. (12) for each set of data, inclined and horizontal struts:

SST = Σ (y - ȳ..)²    (12)

The sum of squares of the error is given by

SSerr = SST - Σ_{i=1..n} SSAi    (13)

where n is the total number of factors and SSAi is the sum of squares of each factor. The F ratio of a factor is calculated by

F ratioA = MSA / MSerr    (14)

where MSerr = SSerr / dfE is the mean sum of squares of the error. The P-contribution is the mean sum of squares of a factor divided by the total mean sum of squares of all the factors, including the error. It is calculated by Eq. (15):

PA = MSA / (Σ_{i=1..n} MSAi + MSerr)    (15)

where n is the total number of factors and MSAi is the mean sum of squares of each factor. The results of the ANOVA for the three failure methods are shown in Figure 9 and Figure 10 for inclined struts and horizontal struts, respectively. The contributions of each factor for horizontal struts and inclined struts are different. Small error contributions indicate that it is reasonable to assume no interactions between factors. Although the P values vary slightly for the different failure methods, the process parameter which gives the maximum P value does not change with the failure method. Furthermore, the rank of the influence is almost the same, except for ranks 3 and 4 in the maximum deviation method for horizontal struts; however, these low-ranked parameters have little influence on the printing quality of horizontal and inclined struts. The average P values demonstrate that for inclines, the contribution of fan speed is 81.00%, nozzle temperature is 11.97%, print speed is 3.05%, and layer height is 1.42%. For horizontal struts, the contribution of layer height is 69.84%, temperature is 15.14%, print speed is 11.52%, and fan speed is 1.74%. Therefore, the most significant process parameters are fan speed for inclined struts and layer height for horizontal struts. The reason is that for inclined struts, the overhang area can be cooled down quickly by the fan, which maintains the overhang's shape. Without a fan, the overhang area remains soft and will sag due to gravity. As for horizontal struts, the first layer of the overhang is very important. With a larger layer thickness, the first layer of the overhang is more robust and better supports the following layers. Therefore, these two process parameters are significant for inclined struts and horizontal struts respectively. Figure 11 summarizes the average S/N value ranges, in which a higher value indicates greater influence on the printing quality.
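The bookkeeping in Eqs. (5)-(15) can be checked with a small script. The sketch below, which is only an illustration with invented data and assumed variable names, implements the unbalanced sum of squares for a single factor; the remaining quantities (SSerr, MS, F, P) follow the same formulas.

```python
# Rough sketch of the one-way ANOVA bookkeeping described above, for a single factor.
# The unbalanced form (Eq. 10) is used so that the dummy-treated fan-speed column works too.
import numpy as np

def anova_factor(y, levels):
    """y: all observations; levels: the factor level of each observation."""
    y = np.asarray(y, dtype=float)
    levels = np.asarray(levels)
    grand_mean = y.mean()                                    # Eq. (8)
    uniq = np.unique(levels)
    df_A = len(uniq) - 1                                     # Eq. (5)
    # Sum of squares of the factor: sum_i n_i * (mean_i - grand_mean)^2
    ss_A = sum(np.sum(levels == u) * (y[levels == u].mean() - grand_mean) ** 2
               for u in uniq)                                # Eq. (10)
    ss_T = np.sum((y - grand_mean) ** 2)                     # Eq. (12)
    return df_A, ss_A, ss_T

# With several factors: SS_err = SS_T - sum(SS_Ai), MS = SS/df,
# F_A = MS_A / MS_err, and the percent contribution follows Eq. (15).
df_A, ss_A, ss_T = anova_factor([0.10, 0.12, 0.30, 0.28, 0.20, 0.25],
                                [1, 1, 2, 2, 3, 3])
print(df_A, round(ss_A, 4), round(ss_T, 4))
```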
By comparing the ANOVA results and the S/N response ranges, the ranks for the S/N value range and the P contribution are similar, and the largest range between the lowest and highest S/N values over all levels of a factor corresponds to the largest contribution of that factor. It can also be concluded that the most significant process parameter for both types of struts obtained from the S/N value range is the same as that obtained from the ANOVA analysis. Therefore, both the S/N value range and the ANOVA analysis can be used to determine the significance of each parameter. The influence of these parameters on the mechanical properties of lattice structures will be discussed in the next subsection.

After the S/N ratio analysis and ANOVA, the optimal process parameters for both horizontal struts and inclined struts are obtained, and the most significant parameter for each type of strut is determined. However, these analyses only focus on single struts and dimensional accuracy. To understand the influence of the process parameters on the entire lattice structure, a compression test is conducted to evaluate the mechanical performance of lattice structures fabricated with different process parameters. This test also verifies whether process parameters optimized for dimensional accuracy improve mechanical performance. In this experiment, the geometrical data are set to be constant. All the lattice structures have the same dimensions, with a cubic-center topology containing horizontal, inclined, and vertical struts. The length of the unit cell is 15 mm and the strut thickness is 2 mm. The lattice structure has three unit cells along each direction, as shown in Figure 12(a). The relative density is 12.8%. The lattice structure is fabricated with the process parameters in Table 4, and each run is shown in Figure 12(b). Half of the fabrications failed due to the process parameters in the OA. If the optimization process had been applied directly to the entire lattice structure instead of to single lattice struts, half of the data would not have been obtainable for the S/N ratio analysis and ANOVA. This demonstrates the benefit of using single struts, as most struts are still measurable and the missing data are only a small portion of the whole. Via the failure methods proposed in this research, the S/N ratio and ANOVA methods can find the optimal process parameters and the most significant process parameters. In contrast, manufacturing error accumulates in an entire lattice structure at each layer, which leads to the failure of subsequent struts; it is therefore difficult to optimize the process parameters because of the many failed fabrications. To verify the improvement from the optimization process, the lattice structure is also fabricated with the optimized process parameters found from the S/N ratio analysis and ANOVA. Because the optimal levels for horizontal struts and inclined struts are different, overall optimal parameters have to be determined. From ANOVA, the most significant parameter is fan speed for inclined struts and layer thickness for horizontal struts. Therefore, the fan speed is set to 50%, the best level for inclines, and the layer thickness is set to 0.2 mm, the best level for horizontal struts. Printing speed has little influence on the inclined struts, so 600 mm/min is selected because it is the best level for horizontal struts. The temperature has equivalent significance for both types of struts, and an average value of the two optimal temperatures is chosen, which is 250°C.
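Before turning to the results, the following is a hedged sketch, not the authors' procedure, of how an effective Young's modulus and ultimate strength can be reduced from raw compression data for such a specimen. Only the overall size (a 45 mm cube, i.e. three 15 mm cells per side) comes from the description above; the fitting window and the dummy load-displacement data are assumptions.

```python
# Illustrative data reduction for a lattice compression test (assumed names and window).
import numpy as np

def effective_props(force_N, disp_mm, side_mm=45.0):
    """Engineering stress/strain on the lattice envelope; modulus from a linear fit
    over a small initial strain window (the window choice is an assumption)."""
    area = side_mm ** 2                       # loaded face of the 45 mm cube, mm^2
    stress = np.asarray(force_N) / area       # MPa
    strain = np.asarray(disp_mm) / side_mm    # dimensionless
    window = strain < 0.01                    # fit the initial linear-elastic regime
    E_eff = np.polyfit(strain[window], stress[window], 1)[0]   # effective modulus, MPa
    return E_eff, stress.max()                # effective modulus, ultimate strength

disp = np.linspace(0.0, 2.0, 50)              # crosshead displacement, mm (dummy data)
force = 900.0 * disp                          # dummy linear load response, N
print(effective_props(force, disp))
```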
The overall optimal parameters create a lattice structure whose mechanical performance will be compared to the lattice structures fabricated with the process parameters in the Taguchi array.

Test result and discussion
The compression test is conducted on a TestResources 313 Universal Test Machine, as shown in Figure 12(c), at a rate of 2 mm/min. The loading direction is perpendicular to the build direction of the FDM machine coordinate system for two reasons. The first is that the bonding strength between the layers of the lattice structure can be tested in this direction; loading along the build direction would compress each layer and would not expose the weak bonding strength between layers. The second reason is that the horizontal strut deflects during the fabrication process, which makes it vulnerable to buckling. The compression load perpendicular to the build direction causes buckling of the horizontal struts, so that manufacturing defects are seen more clearly. The results of the compression test are shown in Figure 13, with no test results for the failed lattice structures. The effective elastic modulus and the ultimate strength are summarized in Figure 14. It can be concluded that lattice structures fabricated with different process parameters are likely to exhibit different mechanical properties. However, the lattice structure fabricated with the optimized process parameters exhibits the highest effective Young's modulus and ultimate strength. Also, lattice structures from Runs 2, 3 and 13 demonstrate better mechanical performance compared to the other runs. This is because all these runs have a 0.2 mm layer thickness, the most significant parameter at the optimal level for horizontal struts. Furthermore, Runs 2 and 13 have 50% fan speed, which is the most significant parameter at the optimal level for inclined struts; Run 3, with its 100% fan speed, had a lower effective Young's modulus and ultimate strength compared to Runs 2 and 13. In addition, while lattice structures are successfully fabricated in Runs 6, 7, 9 and 12, they exhibit worse mechanical performance compared to Runs 2 and 13 because their 0% fan speed is the worst level for inclined struts. Therefore, the optimal levels of the process parameters from the S/N ratio analysis and the most significant process parameters from ANOVA can be used to improve the mechanical performance of as-built lattice structures. Another conclusion is that all the lattice structures fabricated with a 0.1 mm layer thickness fail except in Run 11, where the other process parameters are exactly at or close to the optimal levels. As a result, layer thickness is the most significant parameter in the fabrication of lattice structures by the FDM process. One main failure mechanism in the compression test of lattice structures is fracture between layers of the vertical strut, as shown in Figure 12(c). A higher nozzle temperature will strengthen the bond between layers in the FDM process. The Optimized Run has the same process parameter levels as Run 2, except for a nozzle temperature that is 25°C higher, which is reflected in a 12% higher ultimate strength in the Optimized Run compared to Run 2. Another important failure mechanism in Figure 12(c) is the buckling of the horizontal strut. Some horizontal struts are badly fabricated with large deflection, making them vulnerable to buckling when the compression force is along their orientation. This can be seen by comparing Run 13 and the Optimized Run.
While they both have the same layer thickness and fan speed, the Optimized Run's 600 mm/min printing speed is the best level for horizontal struts, whereas Run 13 uses 1200 mm/min. In addition, the nozzle temperature in the Optimized Run is 250°C, which is closer to the optimal nozzle temperature for horizontal struts. Consequently, the effective Young's modulus and ultimate strength in the Optimized Run are 13% and 8% higher, respectively, than those in Run 13.

Figure 13. The results of the compression test.
Figure 14. Comparison of the effective Young's modulus and ultimate strength of lattice structures fabricated with different process parameters.

Conclusions
In this paper, the process parameters of FDM have been optimized for lattice structures. The concept of the Manufacturable Element is used to link the geometrical information of lattice structures and the manufacturing process. The lattice structure is deconstructed into horizontal and inclined struts. The Taguchi method is implemented to investigate the optimal process parameters for both types of struts. Failure methods are used to represent the data of failed fabrications. It is shown that, except for the omission method, the other three failure methods represent the missing data well. S/N ratio analysis and ANOVA have been used to optimize the process parameters. The results show that the most significant process parameter for inclined struts is fan speed, while for horizontal struts it is layer height. The optimal values for inclined struts are a 255°C nozzle temperature, 1200 mm/min print speed, 50% fan speed and 0.1 mm layer height. For horizontal struts, however, the optimum is a 245°C nozzle temperature, 600 mm/min print speed, 0% fan speed and 0.2 mm layer height. Furthermore, compression tests have been conducted to investigate the mechanical performance of lattice structures fabricated with different process parameters. Experimental results show that the proposed optimization method can improve the mechanical performance of lattice structures, even though the optimization is based on the printing quality of individual lattice struts. All fabrications in this research use constant process parameters. Nevertheless, the S/N ratio analysis shows that the optimized process parameters are different for different features. Future work will investigate the printing quality of lattice structures fabricated by FDM via dynamic process parameters.

Figure 2. Examples of the horizontal group and the inclined group; L is the horizontal length, θ is the incline angle, D is the diameter of both struts [30].
Figure 3. The difference between the minimum thickness and the maximum fabricated thickness for (a) a horizontal strut and (b) an inclined strut [30].
Figure 4. Poorly fabricated struts that are not measurable.
Figure 5. Linear graph used to merge nine 2-level columns into three 4-level columns [START_REF] Taguchi | Taguchi's Quality Engineering Handbook[END_REF].
Figure 6. Main effects charts for the effect of process parameters on the average S/N ratio for inclined struts.
Figure 7. Main effects charts for the effect of process parameters on the average S/N ratio for horizontal struts.
Figure 9. P values for inclined struts for the three failure methods.
Figure 10. P values for horizontal struts for the three failure methods.
Figure 12. The lattice structure for the compression test: (a) the geometrical model, (b) the as-built lattice structures fabricated with the process parameters in the Taguchi array, (c) the failure of the lattice structure during the compression test.
Table 1. Design parameters and their levels
  Inclines - Angle from horizontal (deg): Level 1 = 15, Level 2 = 30, Level 3 = 60, Level 4 = 90
  Inclines - Diameter (mm): Level 1 = 2, Level 2 = 4, Level 3 = 6
  Horizontal struts - Length (mm): Level 1 = 10, Level 2 = 30, Level 3 = 50
  Horizontal struts - Diameter (mm): Level 1 = 2, Level 2 = 4, Level 3 = 6

Table 2. Controlled printing parameters and their levels
  A - Nozzle Temperature (°C): Level 1 = 225, Level 2 = 235, Level 3 = 245, Level 4 = 255
  B - Print Speed (mm/min): Level 1 = 600, Level 2 = 1200, Level 3 = 1800, Level 4 = 2400
  C - Fan Speed (%): Level 1 = 0, Level 2 = 50, Level 3 = 100
  D - Layer Height (mm): Level 1 = 0.1, Level 2 = 0.2

Table 3. Non-design process parameters
  Machine: Ultimaker 2 Extended+; Bed Temperature: 100; XY Travel Speed: 3900 mm/min; Z Speed: 1000 mm/min; Layer Width: 0.48 mm; Retraction Distance: 4.5 mm; Retraction Speed: 1800 mm/min

Table 4. Experimental runs with an L16 (4^2 x 3^1 x 2^1) orthogonal array (columns A, B, C, D)
  Run 1: 1 1 1 1
  Run 2: 1 2 2 2
  Run 3: 1 3 3 2
  Run 4: 1 4 1* 1
  Run 5: 2 1 3 1
  Run 6: 2 2 1* 2
  Run 7: 2 3 1 2
  Run 8: 2 4 2 1
  Run 9: 3 1 1* 2
  Run 10: 3 2 3 1
  Run 11: 3 3 2 1
  Run 12: 3 4 1 2
  Run 13: 4 1 2 2
  Run 14: 4 2 1 1
  Run 15: 4 3 1* 1
  Run 16: 4 4 3 2

Table 5. Process parameters in additional tests
  Test 1 (inclines): nozzle temperature 255, print speed 1200, fan speed 50, layer height 0.1
  Test 2 (inclines): nozzle temperature 255, print speed 1200, fan speed 50, layer height 0.2
  Test 3 (horizontals): nozzle temperature 245, print speed 600, fan speed 0, layer height 0.2
  Test 4 (horizontals): nozzle temperature 245, print speed 600, fan speed 50, layer height 0.2
  Test 5 (horizontals): nozzle temperature 245, print speed 600, fan speed 100, layer height 0.2

Table 6. Optimal process parameters for inclined struts and horizontal struts
  Inclined struts: nozzle temperature 255, print speed 1200, fan speed 50, layer height 0.1
  Horizontal struts: nozzle temperature 245, print speed 600, fan speed 0, layer height 0.2

Figure 8. The S/N results of the additional tests: (a) average S/N ratio versus layer height (mm) for the inclined strut, (b) average S/N ratio versus fan speed (%) for the horizontal strut.

Corresponding author, E-mail: [email protected], Tel: +1 (514) 398-2523

Acknowledgement
This research work is supported by Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN436055-2013.
04096539
en
[ "info.info-ia", "phys.meca.msmeca" ]
2024/03/04 16:41:18
2017
https://hal.science/hal-04096539/file/A%20Survey%20of%20Modeling%20of%20Lattice%20Structures%20Fabricated%20by%20Additive%20Manufacturing.pdf
Guoying Dong, Yunlong Tang, Yaoyao Fiona Zhao (email: [email protected])

A Survey of Modeling of Lattice Structures Fabricated by Additive Manufacturing

Keywords: Additive Manufacturing, Finite Element Analysis, Homogenization, Lattice Structure

The lattice structure is a type of cellular material with truss-like frames which can be optimized for specific loading conditions. The fabrication of its intricate architecture is restricted by traditional manufacturing technologies. However, Additive Manufacturing (AM) enables the fabrication of complex structures by the aggregation of material in a layer-by-layer fashion, which has unlocked the potential of lattice structures. In the last decade, lattice structures have received considerable research attention focusing on design, simulation and fabrication with AM techniques, and different modeling approaches have been proposed to predict the mechanical performance of lattice structures. This review introduces the aspects of the modeling of lattice structures and the correlation between them; summarizes the existing modeling approaches for simulation; and discusses the strengths and weaknesses of different simulation methods. This review also summarizes the characteristics of AM in manufacturing cellular materials and discusses their influence on the modeling of lattice structures.

Introduction
The word 'lattice' derives from the old French 'latte' and is defined as a structure consisting of strips of wood or metal crossed and fastened together with square or diamond-shaped spaces left between [START_REF] Merriam-Webster | lattice[END_REF]. In this review, the lattice structure refers to a type of cellular material [START_REF] Schaedler | Architected Cellular Materials[END_REF] that has a truss-like structure with interconnected struts and nodes in a three-dimensional (3D) space. Compared to other cellular materials such as random foams and honeycombs, the lattice structure exhibits better mechanical performance [START_REF] Evans | Multifunctionality of cellular metal systems[END_REF]. By tailoring the material, the lattice structure can be optimized to satisfy specific functional requirements, which means its mechanical properties can be controlled more flexibly. According to the literature [START_REF] Gibson | Cellular solids: Structure and properties[END_REF][START_REF] Tang | A survey of the design methods for additive manufacturing to improve functional performance[END_REF][START_REF] Wang | A hybrid geometric modeling method for large scale conformal cellular structures[END_REF], there is more than one way to classify lattice structures. In this paper, lattice structures are categorized based on the degree of order of the lattice frame. Generally, they can be classified into three categories. The first type is called disordered lattice structures or randomized lattice structures. The unit cells of this type of lattice structure are randomly distributed inside the design space with different topologies and cell sizes. An example of a disordered lattice structure is shown in Figure 1(a). The second type is called the periodic lattice structure, as shown in Figure 1(b). This type of lattice structure can be regarded as a structure created by a regular periodic repetition of a unit cell of a certain shape, topology and size in a three-dimensional Euclidean space. Thus, every cell in this type of lattice structure has the same topology and size.
Besides disordered lattice structures and periodic lattice structures, another type of cellular structure is the pseudo-periodic lattice structure. In this type of lattice structure, the lattice cells share the same topology but differ in size and shape. For example, the conformal lattice structure, first proposed by Wang and Rosen [START_REF] Wang | Parametric Modeling Method for Truss Structures[END_REF], is a typical pseudo-periodic lattice structure. Conformal lattice structures are capable of keeping the integrity of their unit cells on the boundary, as shown in Figure 1(c), and can therefore stiffen or strengthen a complex, curved surface better than periodic lattice structures [START_REF] Wang | A unit cell approach for lightweight structure and compliant mechanism[END_REF][START_REF] Nguyen | Conformal lattice structure design and fabrication[END_REF]. Both the periodic and the pseudo-periodic lattice structures can be further divided into two sub-types based on the uniformity of the strut thickness: homogeneous lattice structures and heterogeneous lattice structures. Due to their tailorable properties, the periodic and pseudo-periodic lattice structures are more widely used in engineering applications. In most cases, the lattice structure is applied as a lightweight core material in a sandwich structure to carry transverse shear and compression loads. Compared to honeycombs, the lattice structure has the potential to improve compressive and shear strengths when designed to suppress buckling [START_REF] Queheillalt | Cellular metal lattices with hollow trusses[END_REF][START_REF] Clough | Mechanical performance of hollow tetrahedral truss cores[END_REF]. Heterogeneous lattice structures with a spatially graded density matched to the local loads can further save weight compared with traditional cores of uniform density and properties. Lattice structures can also be used as energy-absorbing materials for protection from impact and shockwaves; Schaedler et al. [START_REF] Schaedler | Designing metallic microlattices for energy absorber applications[END_REF] have investigated different types of metallic microlattices for energy absorber applications.

In this paper, a concept called modeling of lattice structures for AM is presented, as shown in Figure 2. This concept is based on the design and fabrication methods for Additive Manufacturing proposed by Rosen [START_REF] Rosen | Computer-Aided Design for Additive Manufacturing of Cellular Structures[END_REF], Yang et al. [START_REF] Yang | Additive Manufacturing of Metal Cellular Structures: Design and Fabrication[END_REF], and Seepersad et al. [START_REF] Seepersad | Multiscale design for solid freeform fabrication[END_REF]. Rosen proposed a concept called Design for Additive Manufacturing (DFAM) that can support part modeling, process planning, and manufacturing simulations. This method includes the solution algorithms, analysis codes, libraries of materials and mesostructures, process planning and analysis of the as-manufactured model. Yang et al. proposed an approach that combines analytical modeling, experimentation and simulation for the design of lattice structures. It is concluded that manufacturing factors are coupled with the actual cellular design and need to be incorporated into the geometric models to enable more accurate designs. Based on existing theories, the concept in this paper is divided into three aspects: design and geometrical modeling; simulation modeling; and AM fabrication modeling.
Based on functional requirements, the proper lattice structure is designed and generated. The design result is then the geometrical input to the simulation modeling step. The simulation result then goes back to the geometrical modeling stage for the optimization of the initial design. The optimized geometrical model is the input to the AM fabrication modeling step. AM-fabricated lattice structures need to be examined to check their manufacturability [START_REF] Tang | Lattice Structure Design and Optimization With Additive Manufacturing Constraints[END_REF]. If the model is not manufacturable, the geometrical model should be modified to remove features unsuitable for fabrication. Besides, the manufacturing process may influence the mechanical properties of lattice structures through, for example, irregular surface roughness and anisotropic material properties, which also needs to be investigated. The influence of the AM process should then be imported into the simulation model to improve the accuracy of the simulation result. When the simulation results satisfy the design requirements and the design model is manufacturable, the lattice structure can be fabricated, which finalizes the modeling process. This paper attempts to provide a comprehensive review of the state-of-the-art modeling approaches of lattice structures for researchers who aim to investigate the mechanical performance of lattice structures by experimental methods or simulations. In Section 2, experimental approaches as well as simulation modeling methods such as homogenization and Finite Element (FE) methods used in the investigation of the mechanical performance of lattice structures will be discussed. Furthermore, the manufacturing characteristics of AM influence the mechanical performance of lattice structures. These characteristics, including surface irregularity and anisotropic material properties, will be discussed in Section 3. Finally, these modeling methods are summarized and some future research directions to improve the existing modeling techniques of lattice structures are given in the last section.

Mechanical performance modeling
The mechanical performance of cellular materials has been investigated for decades. In the early stage, restricted by the manufacturing techniques, only simple cellular structures were studied. Gibson et al. [START_REF] Gibson | The mechanics of twodimensional cellular materials[END_REF] investigated the mechanical properties of two-dimensional cellular materials with beam theory and compared the results with experimental data. The mechanical properties of 3D cellular materials were linked to the relative density ρ*/ρs, where ρ* is the density of the cellular material and ρs is the density of the solid of which the structure is made. The relationships between the linear elastic properties and the relative density for open-cell and closed-cell cellular materials are given by [START_REF] Gibson | Cellular solids: Structure and properties[END_REF][START_REF] Gibson | The mechanics of three-dimensional cellular materials[END_REF]

E*/Es = C1 (ρ*/ρs)²  (open cell),    E*/Es = C1 φ² (ρ*/ρs)² + C1' (1 - φ)(ρ*/ρs)  (closed cell)    (1)

where E* is the elastic modulus of the cellular material, Es is the elastic modulus of the solid material of which the cellular material is made, φ is the fraction of solid in the cell edges, and C1 and C1' are simply constants of proportionality.
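As a quick numerical illustration of the open-cell scaling in Eq. (1), the snippet below evaluates the relative modulus for a few relative densities; the proportionality constant is simply set to 1, which is an assumption for illustration rather than a fitted value.

```python
# Gibson-Ashby open-cell scaling: relative modulus grows roughly with the square
# of the relative density (C1 = 1 is only an illustrative assumption).
def relative_modulus_open_cell(relative_density, C1=1.0):
    return C1 * relative_density ** 2

for rho in (0.1, 0.2, 0.4):
    print(rho, relative_modulus_open_cell(rho))
```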
Other mechanical properties, including elastic collapse, plastic collapse, creep, brittle crushing and fracture toughness, were also correlated with the relative density [START_REF] Ashby | The mechanical properties of cellular solids[END_REF]. Compressive experiments on cellular solids were conducted to obtain the stress-strain curve, which is characterized by three regimes: a linear-elastic regime caused by elastic bending or stretching, a stress plateau caused by buckling, yielding or crushing, and densification caused by the cell edges and walls loading against one another [START_REF] Gibson | Cellular solids: Structure and properties[END_REF][START_REF] Gibson | Biomechanics of cellular solids[END_REF]. The lattice structure is a type of cellular material which provides more flexibility to control the mechanical properties than foam and honeycomb structures. Even though the relationship between the relative density and the mechanical performance of lattice structures can give designers a basic standard for choosing the porosity of the lattice, it is difficult to precisely determine the mechanical performance of lattice structures from the relative density alone. For instance, different topologies with the same porosity may have totally different mechanical properties such as the elastic modulus, shear modulus and Poisson's ratio. In order to predict the mechanical performance more accurately, experimental, homogenization and FE methods are widely used to simulate lattice structures with different topologies.

Experimental Method
The experimental method is the most direct way to obtain the mechanical properties of as-fabricated lattice structures. In the last decade, a number of experiments such as compression [START_REF] Labeas | Investigation on the Static Response and Failure Process of Metallic Open Lattice Cellular Structures[END_REF], bending [START_REF] Yang | Nonstochastic Ti-6Al-4V foam structures with negative Poisson's ratio[END_REF] and tensile [START_REF] Alsalla | Fracture toughness and tensile strength of 316L stainless steel cellular lattice structures manufactured using the selective laser melting technique[END_REF] tests, impact loading tests [START_REF] Winter | Plate-impact loading of cellular structures formed by selective laser melting[END_REF], dynamic tests [START_REF] Salehian | Dynamic analysis of a lattice structure by homogenization: Experimental validation[END_REF], and fatigue tests [START_REF] Jamshidinia | Fatigue properties of a dental implant produced by electron beam melting (EBM)[END_REF] have been carried out to investigate the performance of lattice structures fabricated by different types of AM techniques. The compressive elastic moduli of lattice structures obtained from experimental methods are summarized in Table 1. Many experiments have shown that the mechanical properties of lattice structures fabricated via AM cannot be determined simply by the relative density. Many researchers found that the compressive modulus of lattice structures is related to the strut dimensions, the cell size and the surface roughness as well. Parthasarathy et al. [START_REF] Parthasarathy | Mechanical evaluation of porous titanium (Ti6Al4V) structures with electron beam melting (EBM)[END_REF] studied the strength of Ti6Al4V cubic lattice structures made by EBM with porosities ranging from 49.75% to 70.32%.
It is found that for nearly identical porosities (49.75% and 50.75%) with different strut thicknesses, the compressive modulus of the lattice structure decreased dramatically from 2.92 GPa to 0.57 GPa, which indicates that the strength of the lattice structure depends not only on the porosity of the structure, but also on the geometrical dimensions of the lattice strut. Similar results were found by Yan et al. [START_REF] Yan | Evaluation of light-weight AlSi10Mg periodic cellular lattice structures fabricated via direct metal laser sintering[END_REF][START_REF] Yan | Microstructure and mechanical properties of aluminium alloy cellular lattice structures manufactured by direct metal laser sintering[END_REF]. They conducted compressive experiments on AlSi10Mg diamond lattice structures fabricated by SLM. The results showed that, for the same volume fraction, the compressive strength decreased as the size of the unit cell increased. This can be explained by the "effective volume ratio" of the lattice strut proposed by Suard et al. [START_REF] Suard | Towards stiffness prediction of cellular structures made by electron beam melting (EBM)[END_REF]: because the surface roughness of struts fabricated by EBM is high, it has more influence on the mechanical properties of thinner struts. Formanoir et al. [START_REF] De Formanoir | Improving the mechanical efficiency of electron beam melted titanium lattice structures by chemical etching[END_REF] investigated the influence of chemical etching on the mechanical properties of EBM-fabricated Ti-6Al-4V octet-truss lattice structures by compression tests. The results showed that, for the same relative density, the lattice structure after chemical etching had a higher stiffness than the as-built structure due to the decrease in surface roughness. It is also found by experiments that many types of defects are likely to occur in AM-fabricated lattice structures, such as irregular strut size, internal porosity and surface-breaking defects. Qiu et al. [START_REF] Qiu | Influence of processing conditions on strut structure and compressive properties of cellular lattice structures fabricated by selective laser melting[END_REF] used optical microscopy (OM), scanning electron microscopy (SEM) and micro-CT to investigate the as-fabricated strut size, morphology and internal porosity of AlSi10Mg diamond lattice structures fabricated by SLM and correlated them with the compressive properties. The primary conclusion is that the diameter of the strut increases monotonically with laser power, which improves the compressive properties but causes deviation from the designed geometry. The defects of Ti6Al4V cubic lattice structures fabricated by the EBM process have also been studied to understand their effects on the mechanical response [START_REF] Hernández-Nava | The effect of defects on the mechanical response of Ti-6Al-4V cubic lattice structures fabricated by electron beam melting[END_REF]. The results show that the compressive yield strength was not affected much by the horizontal struts even though there were large surface-breaking defects in them, because the load direction is along the vertical struts, which makes the horizontal struts redundant. However, the load direction along the horizontal struts is not discussed in that study, and the influence of the surface defects on the compressive yield strength still needs to be investigated.
Even though the experimental method can directly obtain the mechanical properties of lattice structures fabricated via different AM techniques and with different geometrical characteristics, there are obvious limitations to implementing it in engineering applications. Firstly, since the fabrication cost of AM techniques is still high, the experimental method requires a certain number of samples to minimize the error and improve the experimental accuracy, which is not economical for AM processes. Secondly, the speed of the AM manufacturing process is relatively slow: it can take hours or days to fabricate a component via most AM techniques. If a new lattice structure design needs to be verified for its mechanical performance, the experimental method is time-consuming and may delay the whole design process. Thirdly, at the conceptual design level, a comprehensive database is required to select the appropriate topology and relative density, and it is impractical to construct such a database by experimental methods because of the cost in time and money. Finally, to further improve the mechanical properties of lattice structures in engineering applications, the optimization process is vital at the design stage and may need many rounds of iteration to find the optimal design. The experimental method is impractical for optimizing the mechanical performance of lattice structures. Therefore, it is imperative to simulate the mechanical properties of lattice structures analytically and numerically. Two modeling methods have been widely investigated and applied to estimate the mechanical response of lattice structures: the homogenization method and the Finite Element (FE) method.

Homogenization Method
Homogenization usually refers to a way of replacing a composite with an equivalent material model, which can overcome the difficulty of analysing boundary value problems with high heterogeneity [START_REF] Hassani | A review of homogenization and topology optimization I-homogenization theory for media with periodic structure[END_REF]. The mathematical theory of homogenization was developed in the 1970s [START_REF] Bensoussan | Asymptotic analysis for periodic structures[END_REF][START_REF] Cioranescu | Homogenization in open sets with holes[END_REF]. It is used to obtain the effective properties of the homogenized material for periodic heterogeneous continuous media in many physical and engineering applications. It can be divided into two steps. Firstly, a local problem based on a unit cell is solved to obtain the effective material properties. Secondly, the overall problem is solved by substituting the periodic material with the homogenised material of equivalent properties. Compared with FE methods, the homogenization method can save much computational cost. As shown in Figure 3, the beam FE model comprises 240000 elements while the solid FE model comprises only 1000 elements; therefore, a radical reduction of the solution time and cost is achieved by applying the homogenised FE model [START_REF] Ptochos | Shear modulus determination of cuboid metallic open-lattice cellular structures by analytical, numerical and homogenisation methods[END_REF][START_REF] Ptochos | Elastic modulus and Poisson's ratio determination of micro-lattice cellular structures by analytical, numerical and homogenisation methods[END_REF].
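A toy illustration of the two-step idea, not taken from any of the cited works, is a 1D periodic two-phase bar: the unit-cell problem can be solved analytically (giving the harmonic-mean effective modulus), and the homogenized modulus then replaces the heterogeneous bar in the overall problem. The function and input values below are assumptions for illustration.

```python
# Two-step homogenization on a 1D periodic two-phase bar (illustrative sketch only).
def effective_modulus_1d(E1, E2, f1):
    """Step 1 (unit cell): exact effective modulus of phases in series = harmonic mean."""
    return 1.0 / (f1 / E1 + (1.0 - f1) / E2)

# Step 2 (overall problem): use the homogenized modulus for the whole bar.
E_hom = effective_modulus_1d(E1=200e3, E2=2e3, f1=0.5)   # MPa, made-up phase moduli
L, A, F = 100.0, 10.0, 500.0                              # mm, mm^2, N (made-up)
print(E_hom, F * L / (E_hom * A))                         # effective modulus, bar elongation
```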
The mechanical properties of lattice structures can be analysed by the homogenization method because they are periodic structures constructed from unit cells. Based on an asymptotic expansion using periodicity, a homogenization approach with a solid model of lattice structures was developed. This approach is implemented to obtain the effective elastic modulus for a given unit cell. It can also be used in a design procedure to find the optimal topology of a unit cell under a certain boundary condition [START_REF] Bendsøe | Generating optimal topologies in structural design using a homogenization method[END_REF]. Rabczuk et al. [START_REF] Rabczuk | Homogenization of sandwich structures[END_REF] implemented the homogenization method to represent the core cell of an impulsive-load-bearing sandwich panel by a continuum constitutive model with consideration of buckling. The homogenized model was derived from the core cell by making the continuum stored-energy density function equal to a discrete energy associated with a representative core cell. This approach was more computationally efficient than the fully discrete models, yet the results of both models showed very good agreement. Florence and Sab [START_REF] Florence | Overall ultimate yield surface of periodic tetrakaidecahedral lattice with non-symmetric material distribution[END_REF][START_REF] Florence | A rigorous homogenization method for the determination of the overall ultimate strength of periodic discrete media and an application to general hexagonal lattices of beams[END_REF] investigated the overall ultimate strength of general elastoplastic periodic 2D and 3D lattices by using a rigorous homogenization method to solve unit cell problems with a finite number of degrees of freedom. The method is restricted to the determination of overall linear elasticity constants and overall ultimate failure envelopes. The advantage of this method is that a non-uniform cell wall thickness and a non-symmetric material distribution in the cell edges can be considered. Arabnejad and Pasini [START_REF] Arabnejad | Mechanical properties of lattice materials via asymptotic homogenization and comparison with alternative homogenization methods[END_REF] investigated the mechanical properties of six different lattice topologies over a whole range of relative density by asymptotic homogenization. Discrete homogenization approaches are also widely used to simulate the mechanical properties of periodic lattice structures by using structural elements such as truss elements and beam elements. Tollenaere and Caillerie [START_REF] Tollenaere | Continuous modeling of lattice structures by homogenization[END_REF] applied the discrete homogenization method to model quasi-repetitive lattice structures. The constitutive relation of their equivalent continuum was obtained by utilizing truss elements and pin-jointed nodes. This approach was also used to construct the equivalent macroscopic model for periodic structures such as graphene sheets, which can be considered as lattices consisting of atoms and interatomic bonds [START_REF] Caillerie | Discrete Homogenization in Graphene Sheet Modeling[END_REF].
Reis and Ganghoffer [START_REF] Reis | Discrete homogenization of architectured materials: Implementation of the method in a simulation tool for the systematic prediction of their effective elastic properties[END_REF] improved the discrete homogenization method by applying beam lattices instead of truss lattices. They used this approach to investigate the equivalent mechanical properties of auxetic lattices with two main mechanisms: the re-entrant and the rolling-up mechanism [START_REF] Reis | Equivalent mechanical properties of auxetic lattices from discrete homogenization[END_REF]. Because the predicted homogenized properties depend on the slenderness of the beam and no simplifying assumptions are made, this approach provides more accurate results than those of Gibson and Ashby [START_REF] Gibson | Cellular solids: Structure and properties[END_REF]. To investigate the large deformations of extensible beams and lattice structures, a heuristic homogenization technique was proposed that considers a discrete spring model to formulate a continuum, fully nonlinear beam model [START_REF] Dell'isola | Large deformations of planar extensible beams and pantographic lattices: Heuristic homogenization, experimental and numerical examples of equilibrium[END_REF]. Recently, a new homogenization approach using the semi-rigid joint frame element shown in Figure 4(a) and the as-fabricated model shown in Figure 4(b) for periodic lattice structures was presented in order to incorporate the geometrical discrepancies that arise during the AM process [START_REF] Park | Homogenization of Mechanical Properties for Additively Manufactured Periodic Lattice Structures Considering Joint Stiffening Effects[END_REF]. In this work, three homogenization approaches were implemented to obtain the normalized elastic modulus: the proposed approach, discrete homogenization with conventional frame elements, and asymptotic homogenization. Two types of lattice structures, the cubic cell lattice and the diamond lattice, were investigated by these three methods, and the simulation results were compared to experimental results. The discrete homogenization with conventional frame elements led to larger errors than the proposed method. Furthermore, the asymptotic approach yields large errors in cubic cell specimens, but it gives relatively accurate estimates in diamond cell specimens. The proposed method can estimate the modulus more accurately for both types of lattice structures. These results showed that the geometrical degradation during the AM process has significant impacts on the mechanical properties of lattice structures, and the proposed method enables accurate estimation of the mechanical properties of the as-fabricated samples. In biomechanics, Assidi et al. [START_REF] Assidi | Equivalent mechanical properties of biological membranes from lattice homogenization[END_REF] implemented the discrete asymptotic homogenization method to calculate the mechanical properties of biological membranes with lattice configurations. The effective moduli are calculated and recorded versus the geometrical and mechanical lattice parameters. Goda et al.
[START_REF] Goda | A micropolar anisotropic constitutive model of cancellous bone from discrete homogenization[END_REF][START_REF] Goda | A 3D elastic micropolar model of vertebral trabecular bone from lattice homogenization of the bone microstructure[END_REF] proposed a quasi-periodic lattice model of cancellous bone which was discretely homogenized to generate the continuum model. The effective mechanical properties of the bone are directly related to the lattice micro-geometry and the micromechanical elastic properties. To evaluate the fracture of trabecular bone, the overall plastic yield and brittle failure behaviors of three-dimensional lattices were investigated by a microstructural modeling approach based on the homogenization of the initially discrete microstructure [START_REF] Goda | Limit analysis of lattices based on the asymptotic homogenization method and prediction of size effects in bone plastic collapse[END_REF]. Matrix methods of linear algebra have also been used to homogenize the structural mechanics of periodic lattices. Hutchinson and Fleck [START_REF] Hutchinson | The structural performance of the periodic truss[END_REF] applied Bloch's theorem in the matrix analysis and formulated a homogenized stiffness matrix by expressing the nodal deformation in a unit cell in terms of the macroscopic strain. This methodology was applied to the collapse mechanisms of Kagome and triangular-triangular lattice structures. Vigliotti and Pasini [START_REF] Vigliotti | Linear multiscale analysis and finite element validation of stretching and bending dominated lattice materials[END_REF] developed a general matrix-based homogenization approach for the linear analysis of components made of two-dimensional lattice materials with either pin joints or rigid joints. A linear multiscale procedure was described and validated by comparing the homogenized model of equivalent macroscopic stiffness with the discrete lattice model; the results illustrated that the homogenized model delivers a correct estimation of the stiffness of the lattice. This procedure was successfully extended to 3D lattice structures [START_REF] Vigliotti | Stiffness and strength of tridimensional periodic lattices[END_REF] and to non-linear constitutive models for lattice structures [START_REF] Vigliotti | Non linear constitutive models for lattice materials[END_REF]. Then, in order to consider the manufacturing influence of AM on the effective mechanical performance of lattice structures, a two-step homogenization method [START_REF] Park | Effective mechanical properties of lattice material fabricated by material extrusion additive manufacturing[END_REF] was proposed based on the approach of Vigliotti and Pasini [START_REF] Vigliotti | Stiffness and strength of tridimensional periodic lattices[END_REF]. In the first step, a voxel-based model was used to determine the effective structural element parameters by including the shape deviations of the fabricated struts. The structural element parameters obtained from the voxel-based model were then imported into the homogenization method to compute the mechanical properties. Compared with the experiment, the result estimated by the two-step approach showed smaller errors than the direct homogenization method, which indicates that it is important to consider the shape variation caused by the manufacturing process.
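As a rough sketch of the first, voxel-based step of such a two-step approach, the snippet below estimates the as-fabricated cross-sectional area of a single strut from a binary slice image and converts it into an equivalent diameter and an axial stiffness; the synthetic image, dimensions and material constants are invented for the example and are not taken from the cited study.

```python
import numpy as np

voxel_size = 0.02e-3                       # pixel edge length [m] (assumed)
yy, xx = np.mgrid[0:50, 0:50]
cross_section = ((xx - 25)**2 + (yy - 25)**2 <= 20**2).astype(int)  # nominal circular strut
cross_section[::4, :] = 0                  # crude stand-in for air gaps between deposited filaments

A_eff = cross_section.sum() * voxel_size**2        # as-fabricated cross-sectional area
d_eq = np.sqrt(4.0 * A_eff / np.pi)                # equivalent strut diameter

E_s, L = 2.0e9, 5.0e-3                             # material modulus and strut length (assumed)
k_axial = E_s * A_eff / L                          # axial stiffness fed to the beam model
print(f"A_eff = {A_eff*1e6:.3f} mm^2, d_eq = {d_eq*1e3:.3f} mm, k_axial = {k_axial:.3e} N/m")
```

In a second step, parameters such as the equivalent diameter would replace the nominal design dimensions in the beam-based homogenization.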
Numerical homogenization is an alternative homogenization method that determines effective mechanical properties over a unit cell with periodic boundary conditions using Finite Element Analysis (FEA). An educational description of this method, based on a short Matlab implementation, has been provided by Andreassen [START_REF] Andreassen | How to determine composite material properties using numerical homogenization[END_REF]. It was initially aimed at determining composite material properties, but single-phase lattice structures can be simulated by assigning a very soft second material (a minimal sketch of this soft-phase trick is given after this paragraph). This approach can be easily extended to the homogenization of conductivity, thermal expansion and fluid permeability. Dirrenberger et al. [START_REF] Dirrenberger | Homogenization of periodic auxetic materials[END_REF] used the numerical homogenization technique combined with FEA to compute the elastic moduli tensor and investigate the anisotropy of three auxetic periodic lattices. Van Dijk [START_REF] Van Dijk | Formulation and implementation of stress-driven and/or strain-driven computational homogenization for finite strain[END_REF] presented an approach in the geometrically nonlinear regime for stress- or strain-driven homogenization which is straightforward to combine with FEA. Schwerdtfeger et al. [START_REF] Schwerdtfeger | Mechanical characterisation of a periodic auxetic structure produced by SEBM[END_REF] used standard FEA in conjunction with a well-known pseudo-density approach to homogenize the linear elastic stiffness tensor of lattice structures with negative Poisson's ratio; it was found that the Poisson's ratio is strongly dependent on the relative density. Even though numerical homogenization is easier to implement, the computational cost can be considerable when the unit cell of the structure is relatively complicated. Finite Element Analysis The development of AM technology provides more freedom for designers to create lattice structures with complex geometries and versatile functions, whose properties could be difficult to simulate with homogenization methods. Instead, Finite Element Analysis (FEA) has the capability of estimating the mechanical performance of complex structures. Recently, FE modeling of lattice structures has attracted plenty of attention from researchers and has been implemented to investigate the mechanical performance of lattice structures. Generally, FE models of lattice structures are constructed with beam elements or 3D solid elements, as shown in Figure 5. It can be seen that the computation time of the beam element model is much lower than that of the 3D solid mesh model, because the number of beam elements is less than 1% of the number of 3D elements for the same structure. But in some cases the lattice strut cannot be modeled by beam elements, and 3D solid elements have to be applied to construct the FE model. To compare more clearly the methods that investigate the mechanical properties of lattice structures, the advantages and disadvantages of experimental and simulation methods are summarized in Table 2. In this subsection, the applications of FE modeling techniques to investigate the mechanical properties of lattice structures are discussed in detail.
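Returning to the soft-second-phase trick mentioned above for single-phase lattices, the short check below computes the classical Voigt and Reuss bounds for a voxelized cell in which the void phase is assigned a very small stiffness; these bounds bracket any directional modulus that a periodic FE homogenization of the same cell can return. All numbers are assumed for illustration.

```python
E_solid = 110e9            # solid phase modulus [Pa], e.g. a titanium-like metal (assumed)
E_void = 1e-6 * E_solid    # very soft second material standing in for air
f = 0.25                   # solid volume fraction of the unit cell (assumed)

E_voigt = f * E_solid + (1.0 - f) * E_void               # parallel (upper) bound
E_reuss = 1.0 / (f / E_solid + (1.0 - f) / E_void)       # series (lower) bound
print(f"Voigt bound {E_voigt/1e9:.2f} GPa, Reuss bound {E_reuss/1e9:.6f} GPa")
```

The Reuss bound collapses towards zero because the soft phase acts in series, which is precisely why the spatial arrangement of material, and hence a full FE homogenization, matters.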
Beam element model In the early stage, beam elements were prevalently used in FE modeling of lattice structures [START_REF] Luxner | Finite element modeling concepts and linear analyses of 3D regular open cell structures[END_REF][START_REF] Zhu | Effects of cell irregularity on the elastic properties of open-cell foams[END_REF][START_REF] Zhu | Effects of cell irregularity on the elastic properties of 2D Voronoi honeycombs[END_REF][START_REF] Zhou | On the deformation of aluminum lattice block structures: from struts to structures[END_REF][START_REF] Gan | Three-dimensional modeling of the mechanical property of linearly elastic open cell foams[END_REF][START_REF] Cuan-Urquizo | Mechanical characterisation of additively manufactured material having lattice microstructure[END_REF]. To determine the effective elastic properties of random lattice structures, the homogenization modeling approach is inappropriate because it does not account for the natural variation in microstructure of random lattices. To overcome this difficulty, Zhu et al. [START_REF] Zhu | Effects of cell irregularity on the elastic properties of open-cell foams[END_REF][START_REF] Zhu | Effects of cell irregularity on the elastic properties of 2D Voronoi honeycombs[END_REF] used FEA to determine the equivalent Young's modulus, shear modulus and bulk modulus of 2D Voronoi honeycombs and 3D open-cell foams. Each strut was meshed with one to five Timoshenko beam elements according to the length of the strut, and the effect of structural irregularity on the elastic properties was investigated. It was found that higher irregularity increases the Young's modulus and shear modulus but decreases the bulk modulus of the structure. The effects of the topological and microstructural irregularity of lattices were further investigated with FE models to understand the stretch- and bending-dominated mechanical response [START_REF] Alkhader | Effect of microstructure in cellular solids: Bending vs. stretch dominated topologies[END_REF][START_REF] Alkhader | Mechanical response of cellular solids: Role of cellular topology and microstructural irregularity[END_REF]. Then, a similar FE model was proposed to investigate the relationship between the elastic properties and the relative density of 3D Voronoi models [START_REF] Gan | Three-dimensional modeling of the mechanical property of linearly elastic open cell foams[END_REF]. It was found that Kelvin foams can represent Voronoi foams in the low-density regime, and that the elastic modulus is sensitive to imperfections while the compressive plateau stress is less sensitive. Furthermore, the elastic buckling of cell edges at the microscopic level is the dominant mechanism for compressive failure. Zhou et al. [START_REF] Zhou | On the deformation of aluminum lattice block structures: from struts to structures[END_REF] found that the tensile strengths of individual struts of a lattice structure exhibit significant scatter, which is attributed to the presence of defects or voids. They used Timoshenko beam elements to construct the finite element model with various mechanical property input data to reflect the measured strength of individual struts. The result showed that the variations in the input data strongly influence the predicted stress-strain behavior.
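The sensitivity of such beam models to the element formulation can be illustrated with a single tip-loaded cantilever strut: the Timoshenko deflection adds a shear term PL/(κGA) to the Euler-Bernoulli bending term PL³/(3EI), which becomes noticeable when the strut is stubby. The geometry and material values below are assumptions chosen only to show the trend.

```python
import numpy as np

E, nu = 70e9, 0.3                      # assumed solid properties
G = E / (2.0 * (1.0 + nu))
P = 1.0                                # unit tip load [N]
d = 0.5e-3                             # strut diameter [m] (assumed)
A = np.pi * d**2 / 4.0
I = np.pi * d**4 / 64.0
kappa = 0.9                            # common shear correction factor for circular sections

for L in (5.0e-3, 1.0e-3):             # slender strut vs stubby strut
    delta_eb = P * L**3 / (3.0 * E * I)            # Euler-Bernoulli bending deflection
    delta_ti = delta_eb + P * L / (kappa * G * A)  # Timoshenko adds the shear deflection
    print(f"L/d = {L/d:4.1f}: Timoshenko/Euler-Bernoulli deflection ratio = {delta_ti/delta_eb:.3f}")
```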
Solid element model Even though the FE model constructed with beam elements has a lower computational cost, not all types of lattice structures can be meshed with beam elements. For instance, if the strut of the lattice structure is stout, it does not satisfy the assumptions of the beam element in FEA. It has been shown that with decreasing porosity, which means the lattice struts get thicker, the mechanical properties obtained from the beam element model deviate from the experimental results [START_REF] Ahmadi | Mechanical behavior of regular opencell porous biomaterials made of diamond lattice unit cells[END_REF]. Besides, in some cases the lattice structure is connected with a skin [START_REF] Aremu | Effects of net and solid skins on self-supporting lattice structures[END_REF] or with solid parts to serve certain functions [START_REF] Malek | Critical evaluation on structural stiffness of porous cellular structure of cobalt chromium alloy[END_REF]. Therefore, 3D solid elements are an alternative choice in the FE modeling process to mesh the lattice structure. Chantarapanich et al. [START_REF] Chantarapanich | Scaffold library for tissue engineering: A geometric evaluation[END_REF] used ten-node tetrahedral elements to investigate the mechanical response and the connectivity of different types of lattice topologies; because the lattice cells were not connected by shared nodes, they could not be simulated with beam elements. Another advantage of the 3D element FE model is that it can capture the notch effect and the material concentration in the connection areas of lattice struts, which yields a more accurate prediction of the stress distribution at these locations [START_REF] Cerardi | Mechanical characterization of polyamide cellular structures fabricated using selective laser sintering technologies[END_REF]. It has been shown with solid element FE models that, for the same porosity, the effective Young's modulus of the lattice structure changes dramatically when material is shifted from the edges to the vertices [START_REF] Zargarian | Effect of solid distribution on elastic properties of open-cell cellular solids using numerical and experimental methods[END_REF]. Besides, for large-deformation and nonlinear explicit dynamic analyses, linear 3D elements with a lumped mass matrix are required. Tetrahedral, triangular prism and brick elements were used together by Ullah et al. [START_REF] Ullah | Performance of bioinspired Kagome truss core structures under compression and shear loading[END_REF] to minimize the stress jump at the transition points in the explicit dynamic analysis of Kagome lattice core structures. Salonitis et al. [START_REF] Salonitis | A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter[END_REF] implemented a hybrid FE model to simulate the mechanical performance of a lattice structure combined with solid material. First, the effective Young's modulus and Poisson's ratio of the lattice structure are calculated with a beam element FE model. Then the lattice cells are modeled as representative volume elements (RVEs) with the estimated mechanical properties, and the solid material is modeled with tetrahedral elements. The computational cost is reduced by using the RVEs for the lattice structure.
A similar approach was taken by Huo et al. [START_REF] Huo | Failure location prediction by finite element analysis for an additive manufactured mandible implant[END_REF], as shown in Figure 6. In the first step, the mechanical properties of the lattice structure were obtained from a local FE model with tetrahedral elements. The lattice structure was then substituted by a solid material with the homogenized mechanical properties. Finally, the homogenized lattice was connected to the other solid materials and the global FE model was constructed. This approach is similar to the homogenization method; the difference is that the homogenized properties are calculated by FE methods and the lattice strut is meshed with solid elements. The computational cost of the RVE model is higher than that of the homogenization method due to the fine mesh on the RVE, but this method is easier to implement because analytical homogenization is relatively complicated from a mathematical perspective. The authors of [47] implemented an FE model with tetrahedral elements to investigate the mechanical properties of cubical lattice structures made of medical-grade CoCr alloy with porosities ranging from 60% to 80%, fabricated by the SLM process. They compared the estimated elastic modulus to the experimental results, which showed good agreement. Improvement of FE models However, in some investigations discrepancies are found when comparing the numerical simulations to the experimental results [START_REF] Cahill | Finite element predictions compared to experimental results for the effective modulus of bone tissue engineering scaffolds fabricated by selective laser sintering[END_REF][START_REF] Hedayati | Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models[END_REF][START_REF] Hedayati | Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell[END_REF]. This is due to the differences between manufactured and designed lattice structures in shapes, sizes, micro-porosities [START_REF] Coelho | Bioresorbable scaffolds for bone tissue engineering: Optimal design, fabrication, mechanical testing and scale-size effects analysis[END_REF] and complex strut joint geometry [START_REF] Gümrük | Compressive behaviour of stainless steel micro-lattice structures[END_REF]. Therefore, existing FE models still need further refinements to obtain more accurate results. A common feature of AM fabricated lattice structures is the material concentration in the nodal regions [START_REF] Labeas | Investigation on the Static Response and Failure Process of Metallic Open Lattice Cellular Structures[END_REF]. There are two ways to consider this feature in a beam element model. One is to increase the cross-section of the beam elements in the vicinity of the joint to the real thickness of the lattice strut [START_REF] Smith | Finite element modelling of the compressive response of lattice structures manufactured using the selective laser melting technique[END_REF][START_REF] Labeas | Investigation on the Static Response and Failure Process of Metallic Open Lattice Cellular Structures[END_REF]. Another way is to increase the stiffness of the beam elements near the nodal region to compensate for the material aggregation [START_REF] Luxner | Finite element modeling concepts and linear analyses of 3D regular open cell structures[END_REF].
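The first correction strategy, thickening the beam elements next to the joints, can be mimicked with a strut modeled as axial springs in series; the diameters, segment count and material constant below are invented for illustration only.

```python
import numpy as np

E = 110e9                               # assumed solid modulus [Pa]
L_total, n_seg = 4.0e-3, 8              # strut length and number of segments (assumed)
d_mid, d_joint = 0.4e-3, 0.6e-3         # nominal diameter and thickened joint diameter (assumed)

def axial_stiffness(diameters, seg_len, E):
    """Springs in series: 1/k = sum(L_i / (E * A_i))."""
    areas = np.pi * np.asarray(diameters) ** 2 / 4.0
    return 1.0 / np.sum(seg_len / (E * areas))

seg_len = L_total / n_seg
uniform = [d_mid] * n_seg
thickened = [d_joint] + [d_mid] * (n_seg - 2) + [d_joint]   # enlarged end segments near the joints
k_uniform = axial_stiffness(uniform, seg_len, E)
k_thick = axial_stiffness(thickened, seg_len, E)
print(f"stiffness increase from thickened joint segments: {100.0*(k_thick/k_uniform - 1.0):.1f}%")
```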
Because the thickness of the strut near the nodal region can be directly measured, the first method is easier to implement. The presence of micro-voids in lattice struts and the non-uniform strut thickness caused by surface roughness also lead to inaccurate simulation results. The FE models of two scaffolds with the same porosity are shown in Figure 7. It is found that the effective modulus of the scaffold with smooth surfaces is 68% higher than that of the model with rough surfaces. Using the measured dimensions of fabricated samples to update the CAD model dimensions can improve the simulation accuracy of the FEA [START_REF] Yang | Mechanical properties of 3D re-entrant honeycomb auxetic structures realized via additive manufacturing[END_REF]. Another way to consider the variation of the strut thickness is to use beam elements [START_REF] Karamooz Ravari | Numerical investigation on mechanical properties of cellular lattice structures fabricated by fused deposition modeling[END_REF][START_REF] Campoli | Mechanical properties of open-cell metallic biomaterials manufactured using additive manufacturing[END_REF][START_REF] Zargarian | Numerical simulation of the fatigue behavior of additive manufactured titanium porous lattice structures[END_REF] with different diameters to discretize the lattice strut, as shown in Figure 8. Campoli et al. [START_REF] Campoli | Mechanical properties of open-cell metallic biomaterials manufactured using additive manufacturing[END_REF] used Scanning Electron Microscopy (SEM) and a Gaussian distribution to determine the diameter of each beam. The FE model was then run several times to obtain the mean and standard deviation of the mechanical properties, which means the lattice structure may exhibit a range of mechanical properties due to its structural irregularity. Image-based FEA has also been used to simulate the mechanical properties of lattice structures. This approach is the most accurate way to capture the as-fabricated geometry, because the FE model is built directly on micro-CT images, through which the manufacturing defects can be precisely reflected. Williams et al. [START_REF] Williams | Bone tissue engineering using polycaprolactone scaffolds fabricated via selective laser sintering[END_REF] experimentally and computationally tested lattice structures made of polycaprolactone (PCL), a bioresorbable polymer. It was concluded that the image-based FE model can estimate the mechanical properties of tissue lattice structures, bypassing the need for experimental testing. Suard et al. [START_REF] Suard | Towards stiffness prediction of cellular structures made by electron beam melting (EBM)[END_REF] used X-ray tomography to obtain images of single struts of the lattice structure and computed the effective stiffness of struts with different build orientations by FEA. A concept called "effective volume ratio" was defined to set the lower bound of the stiffness. The lattice structure was then modeled with struts having the effective volume ratio instead of the designed geometry. The result showed a 5% difference in Young's modulus between different build orientations.
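In the spirit of sampling strut diameters from a Gaussian distribution fitted to SEM measurements, the short Monte Carlo sketch below propagates an assumed diameter scatter to the bending stiffness of a single cantilever strut; because the second moment of area scales with d⁴, the stiffness scatter is roughly four times the diameter scatter. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
E = 110e9                                  # assumed solid modulus [Pa]
L = 2.0e-3                                 # strut length [m] (assumed)
d_mean, d_std = 0.45e-3, 0.05e-3           # assumed measured diameter distribution

d_samples = rng.normal(d_mean, d_std, size=20000)
I_samples = np.pi * d_samples**4 / 64.0
k_bend = 3.0 * E * I_samples / L**3        # tip stiffness of a cantilever strut

cov_d = d_std / d_mean
cov_k = k_bend.std() / k_bend.mean()
print(f"diameter CoV = {100*cov_d:.1f}%, bending stiffness CoV = {100*cov_k:.1f}%")
```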
Besides the stiffness, it was found from X-ray tomography and the corresponding FE model that the failure mechanism is also influenced by the poor build quality of the AM process [START_REF] Sercombe | Failure modes in high strength and stiffness to weight scaffolds produced by Selective Laser Melting[END_REF]. AM Fabrication modeling 3.1 Manufacturing Discrepancy To design and simulate lattice structures, the manufacturing influence of AM cannot be neglected. In the early stage, the design and simulation of lattice structures mainly focused on the geometrical parameters, and the design model was directly imported into the simulation process. However, manufacturing discrepancy is a critical issue in the fabrication process of AM techniques [START_REF] Arabnejad | High-strength porous biomaterials for bone replacement: A strategy to assess the interplay between cell morphology, mechanical properties, bone ingrowth and manufacturing constraints[END_REF]. Recently, an extensive literature has investigated the influence of the manufacturing discrepancy on the mechanical properties of lattice structures by experiments, and researchers have incorporated the manufacturing influence of AM in the simulation process, as summarized in Table 3. Therefore, the understanding of the manufacturing discrepancy is crucial in the design and simulation process of lattice structures. Table 3 (Manufacturing influence considered in the simulation process):
- Homogenization method: Park et al. [START_REF] Park | Effective mechanical properties of lattice material fabricated by material extrusion additive manufacturing[END_REF], layer deposition parameters and as-fabricated cross sections; Park and Rosen [START_REF] Park | Homogenization of Mechanical Properties for Additively Manufactured Periodic Lattice Structures Considering Joint Stiffening Effects[END_REF], semi-rigid joints and layer deposition pattern.
- FE model with beam elements: Zhou et al. [START_REF] Zhou | On the deformation of aluminum lattice block structures: from struts to structures[END_REF], various mechanical properties of individual lattice struts; Labeas and Sunaric [START_REF] Labeas | Investigation on the Static Response and Failure Process of Metallic Open Lattice Cellular Structures[END_REF] and Smith et al. [START_REF] Smith | Finite element modelling of the compressive response of lattice structures manufactured using the selective laser melting technique[END_REF], increased thickness of beam elements in the vicinity of the joint for material aggregation; Luxner et al. [START_REF] Luxner | Finite element modeling concepts and linear analyses of 3D regular open cell structures[END_REF], increased stiffness of beam elements in the vicinity of the joint for material aggregation; Campoli et al. [START_REF] Campoli | Mechanical properties of open-cell metallic biomaterials manufactured using additive manufacturing[END_REF], Zargarian et al. [START_REF] Zargarian | Numerical simulation of the fatigue behavior of additive manufactured titanium porous lattice structures[END_REF] and Ravari et al. [START_REF] Karamooz Ravari | Numerical investigation on mechanical properties of cellular lattice structures fabricated by fused deposition modeling[END_REF], irregular surface modeled by beam elements.
- FE model with 3D solid elements: Yang et al. [START_REF] Yang | Mechanical properties of 3D re-entrant honeycomb auxetic structures realized via additive manufacturing[END_REF], average value of the measured dimensions used to update the dimensions of the CAD model; Cansizoglu et al. [START_REF] Cansizoglu | Properties of Ti-6Al-4V non-stochastic lattice structures fabricated via electron beam melting[END_REF], influence of strut angle on the thickness; Williams et al. [START_REF] Williams | Bone tissue engineering using polycaprolactone scaffolds fabricated via selective laser sintering[END_REF] and Sercombe et al. [START_REF] Sercombe | Failure modes in high strength and stiffness to weight scaffolds produced by Selective Laser Melting[END_REF], image-based FE model to incorporate manufacturing defects; Suard et al. [START_REF] Suard | Towards stiffness prediction of cellular structures made by electron beam melting (EBM)[END_REF], X-ray tomography used to obtain the effective volume ratio of lattice struts; Cahill et al. [START_REF] Cahill | Finite element predictions compared to experimental results for the effective modulus of bone tissue engineering scaffolds fabricated by selective laser sintering[END_REF] and Ravari et al. [START_REF] Karamooz Ravari | Numerical investigation on mechanical properties of cellular lattice structures fabricated by fused deposition modeling[END_REF], irregular surface of lattice struts modeled by 3D solid elements; Park and Rosen [START_REF] Park | Quantifying effects of material extrusion additive manufacturing process on mechanical properties of lattice structures using as-fabricated voxel modeling[END_REF], stair-step irregularities between layers and the air gaps generated among the filaments.
The irregularity of the surface finish is one of the most common discrepancies of AM fabricated components. Because of the layer-by-layer principle, the AM fabricated part generally has stair-step irregularities corresponding to the slicing process. Recently, a voxel-based as-fabricated modeling technique was implemented to consider the stair-step irregularities between layers and the air gaps generated among the filaments in the AM process [START_REF] Park | Quantifying effects of material extrusion additive manufacturing process on mechanical properties of lattice structures using as-fabricated voxel modeling[END_REF]. Besides, the accuracy of the printing head is limited and the manufacturing process is unstable, which also causes discrepancies and defects on the surface of the as-fabricated part. For lattice structures, the manufacturing discrepancy problem becomes even more significant, because the dimensions of the lattice struts are relatively small and the stair-step irregularities appear on inclined struts. Cansizoglu et al. [START_REF] Cansizoglu | Properties of Ti-6Al-4V non-stochastic lattice structures fabricated via electron beam melting[END_REF] found that the fabricated strut thickness grows larger as the angle between the strut and the horizontal plane increases, and there can be a significant change in modulus for variations as small as 0.1 mm in the strut thickness. Consequently, the FEA-predicted stiffness based on the CAD model could be slightly lower than the actual stiffness.
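A back-of-the-envelope check shows why thickness deviations as small as 0.1 mm matter: the axial (stretch-dominated) stiffness of a strut scales with d², whereas its bending stiffness scales with d⁴, so the relative stiffness change is roughly twice and four times the relative thickness change, respectively. The nominal thickness below is an assumed value.

```python
d_nominal = 1.0e-3      # designed strut thickness [m] (assumed)
dd = 0.1e-3             # deviation of the order reported above

for label, power in (("stretch-dominated (A ~ d^2)", 2), ("bending-dominated (I ~ d^4)", 4)):
    change = (1.0 + dd / d_nominal) ** power - 1.0
    print(f"{label}: about {100.0*change:.0f}% stiffness change for +0.1 mm on a 1 mm strut")
```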
It is also found that struts lying normal to the build direction are more likely to have defects on their surface [START_REF] Hernández-Nava | The effect of defects on the mechanical response of Ti-6Al-4V cubic lattice structures fabricated by electron beam melting[END_REF]. Another cause of surface irregularity is the presence of unmelted powder attached to the surface of the AM fabricated component [START_REF] Yan | Evaluations of cellular lattice structures manufactured using selective laser melting[END_REF]. It has been found that the process parameters of laser-based AM techniques have a strong influence on the surface roughness [START_REF] Sachdeva | Investigating surface roughness of parts produced by SLS process[END_REF][114]. By optimizing the process parameters, the surface roughness of AM fabricated components can be reduced. Nevertheless, the influence of the surface roughness on the mechanical properties still cannot be neglected. Everhart et al. [START_REF] Everhart | The effect of surface finish on tensile behavior of additively manufactured tensile bars[END_REF] investigated the effect of surface roughness on the tensile behavior of bars fabricated by AM. The results showed that the unfinished tensile bar, which has a rougher surface, exhibited a lower yield strength than the machined one, and the elongation of the unfinished sample was much lower than that of the machined sample. It is also mentioned that additional data from the surface characterization process need to be implemented in the FE model to reduce the error in the plastic region. Because an AM fabricated lattice structure consists of unfinished single struts, it is difficult to accurately predict its mechanical properties if the effect of the unfinished surface is neglected. Post-processing is able to reduce the surface roughness and the defects of lattice structures fabricated by AM. Formanoir et al. [START_REF] De Formanoir | Improving the mechanical efficiency of electron beam melted titanium lattice structures by chemical etching[END_REF] used chemical etching to decrease the roughness of octet-truss lattice structures manufactured by EBM. This approach can improve the mechanical efficiency of the structure by removing the unmelted powders on the surface. It was also found that the elongation increased because the critical surface defects are removed during the etching process, making the structure more resistant to crack initiation. However, this process reduced the strut thickness by almost 30%, so the diameter of the strut in the simulation model should be modified accordingly. Anisotropic Mechanical Behavior Apart from the manufacturing discrepancy, the anisotropic material behavior of AM fabricated parts is another important aspect that needs to be considered in the design and simulation of lattice structures. It is found in the literature that, for most AM processes, the mechanical properties differ in the directions parallel and perpendicular to the building direction. The layer-by-layer manufacturing principle is one of the causes of the anisotropic material properties. Shanjani et al. [START_REF] Shanjani | Mechanical characteristics of solid-freeform-fabricated porous calcium polyphosphate structures with oriented stacked layers[END_REF] investigated the effect of the printing orientation on the mechanical characteristics of porous calcium polyphosphate structures fabricated by the BJ process.
It was found that samples with layers deposited parallel to the compressive loading direction were 48% stronger than those with layers deposited perpendicular to the load. Ladani et al. [START_REF] Ladani | Mechanical anisotropy and strain rate dependency behavior of Ti6Al4V produced using E-beam additive fabrication[END_REF] studied the anisotropic mechanical behavior of the EBM process. The results showed that the in-plane properties such as the elastic modulus, yield strength and ultimate tensile strength were much higher than those in the out-of-plane direction, due to defects or imperfect bonding between layers. Sridharan et al. [START_REF] Sridharan | Rationalization of anisotropic mechanical properties of Al-6061 fabricated using ultrasonic additive manufacturing[END_REF] analyzed the reasons for the anisotropic mechanical properties of Al-6061 fabricated via Ultrasonic Additive Manufacturing (UAM). When the load was perpendicular to the interfaces, a brittle failure was observed due to the onset of strain localization during the UAM process rather than a lack of bonding between layers. For the FDM process, the printing orientation also has a significant influence on mechanical properties such as the tensile and compressive strength [START_REF] Ahn | Anisotropic material properties of fused deposition modeling ABS[END_REF] and the deformation behavior [START_REF] Hambali | Determination of the effect of part orientation to the strength value on additive manufacturing FDM for end-use parts by physical testing and validation via three-dimensional finite element analysis[END_REF]. To predict such properties, an anisotropic FE model for the FDM process has been proposed, based on measuring the material properties of test specimens printed in multiple orientations [START_REF] Ogden | Anisotropic finite element modeling of the fused deposition modeling process[END_REF]. In some AM processes, the anisotropic mechanical performance is attributed to the anisotropic microstructure of the component. The anisotropic mechanical properties of Ti6Al4V components fabricated by Directed Energy Deposition (DED) have been studied [START_REF] Carroll | Anisotropic tensile behavior of Ti-6Al-4V components fabricated with directed energy deposition additive manufacturing[END_REF]. The results showed that the elongation in the longitudinal and transverse directions was 11% and 14%, respectively, due to the columnar prior-β grain morphology and the grain boundary α. Akerfeldt et al. [START_REF] Åkerfeldt | Influence of microstructure on mechanical properties of laser metal wire-deposited Ti-6Al-4V[END_REF] also found that the microstructural constituents influence the anisotropic elongation of Ti6Al4V specimens fabricated by the laser metal wire-deposition process. Zhang et al. [START_REF] Zhang | Microstructure and anisotropic tensile behavior of laser additive manufactured TC21 titanium alloy[END_REF] investigated the correlation between the microstructure and the anisotropic tensile behavior of TC21 alloy fabricated by the DED process. The results show that samples built vertical to the building platform exhibit better ductility due to the lack of continuous grain boundary α layers, whereas horizontal samples show inferior ductility because of the columnar β grain morphology and continuous grain boundary α layers.
For AM fabricated lattice structures, the anisotropic material behavior also has a significant influence on the mechanical properties. It was found that the ultimate tensile and yield strengths of SLM fabricated 316L stainless steel lattice structures were approximately 60% higher, and the elongation 40% higher, in the vertical building direction than in the horizontal building direction [START_REF] Alsalla | Fracture toughness and tensile strength of 316L stainless steel cellular lattice structures manufactured using the selective laser melting technique[END_REF]. A similar result was presented by Wauthle et al. [START_REF] Wauthle | Effects of build orientation and heat treatment on the microstructure and mechanical properties of selective laser melted Ti6Al4V lattice structures[END_REF]: horizontal struts are weak and should be avoided in the SLM process. Reinhart et al. [START_REF] Reinhart | Investigation of the Geometry-Dependent Anisotropic Material Behavior of Filigree Struts in ALM-Produced Lattice Structures[END_REF] investigated the relationship between the mechanical properties and the geometrical properties of single lattice struts manufactured by the SLM process. The results showed that the Young's modulus differs for different diameters and different polar angles of the strut, as shown in Figure 9. Therefore, it is not accurate to simulate the mechanical properties of the lattice structure with isotropic material properties. For EBM fabricated lattice structures, anisotropic mechanical properties were also observed by applying loads parallel and perpendicular to the build direction [START_REF] List | Properties of Inconel 625 mesh structures grown by electron beam additive manufacturing[END_REF]. For the BJ process, Galeta et al. [START_REF] Galeta | Impact of structure and building orientation on strentgh of 3D printed models[END_REF] investigated the effect of the building orientation on the mechanical properties of 2D lattice structures. The results showed that samples built along the Y axis provided better strength than those oriented along the X axis, and that 2D samples parallel to the printing bed exhibited slightly better strength than those perpendicular to it. Castilho et al. [129] also studied the influence of the printing direction in the BJ process and found that the mechanical behavior was highly influenced by the printing direction even though the dimensional variation was almost the same. Therefore, a correction factor is needed in the calculation of the mechanical properties to balance the effects of the building direction and the dimensional deviation. Recently, to simplify the treatment of the anisotropic material properties, a transversely isotropic FE model was proposed to consider the influence of the printing direction on the mechanical behavior of lattice structures [START_REF] Zhang | Transversely isotropic hyperelastic-viscoplastic model for glassy polymers with application to additive manufactured photopolymers[END_REF]. The simulation results were in good agreement with the experiments, which suggests that the model can estimate the deformation of the lattice structures quite well. However, it was also found that the failure initiation predicted by the simulation model is less accurate than the mechanical behavior before failure, which indicates that further investigation is still needed.
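One common way to encode such printing-direction dependence, without claiming that it reproduces the cited constitutive model, is a transversely isotropic elasticity matrix built from five engineering constants, with the symmetry axis taken along the build direction; the constants used below are hypothetical.

```python
import numpy as np

def transversely_isotropic_stiffness(Ep, Et, nu_p, nu_pt, Gt):
    """Stiffness matrix in Voigt order (11, 22, 33, 23, 13, 12) for a
    transversely isotropic solid with symmetry (build) axis 3."""
    S = np.zeros((6, 6))
    S[0, 0] = S[1, 1] = 1.0 / Ep                    # in-plane normal compliances
    S[2, 2] = 1.0 / Et                              # along the build direction
    S[0, 1] = S[1, 0] = -nu_p / Ep
    S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_pt / Ep
    S[3, 3] = S[4, 4] = 1.0 / Gt                    # out-of-plane shear
    S[5, 5] = 2.0 * (1.0 + nu_p) / Ep               # in-plane shear follows from Ep and nu_p
    return np.linalg.inv(S)

# Hypothetical constants for a polymer printed along axis 3 (assumed values)
C = transversely_isotropic_stiffness(Ep=2.2e9, Et=1.8e9, nu_p=0.35, nu_pt=0.33, Gt=0.7e9)
print(np.round(C / 1e9, 3))   # stiffness matrix in GPa
```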
An appropriate modeling approach should be implemented to improve the accuracy while minimizing the computational cost. During the simulation of lattice structures, the manufacturing influence of AM processes cannot be neglected by designers. It has been shown by many studies that the geometrical discrepancy is a critical issue which may cause inferior mechanical properties; the effect of irregular surfaces and shapes on the mechanical properties of lattice structures should therefore be considered in the design and simulation process. It is also acknowledged that the material properties of AM fabricated components exhibit anisotropy, which should likewise be considered in the simulation process. Finally, based on this review, several future works concerning the modeling of lattice structures for simulation are pointed out:
- A new simulation method for lattice-solid hybrid structures should be developed. To serve certain functions, lattice structures are usually connected with solid materials. Although a homogenized lattice structure can be represented by an equivalent material, this is not applicable to heterogeneous lattices. Besides, using solid elements to model the hybrid structure results in a high computational cost due to the small size of the mesh on the lattice structure. A new approach that can efficiently model lattice-solid hybrid structures should be proposed.
- Anisotropic material properties should be implemented in the modeling of lattice structures. It has been shown in many studies that the material properties of AM fabricated components are anisotropic, and lattice structures are no exception. It has also been found by experimental methods that the mechanical properties of lattice structures correlate with the printing orientation. Therefore, anisotropic material properties should be considered in the simulation of lattice structures.
- A multi-physics design and simulation model is needed for lattice structures, including heat transfer, fluid mechanics and solid mechanics. Lattice structures are promising as multi-functional materials in engineering applications. A multi-physics model that can design and simulate the mechanical properties, thermal properties and fluid dynamics of lattice structures simultaneously will promote the application of lattice structures to a higher level.
List of Figures
Figure 1. Examples of different types of lattice structures based on the degree of order: (a) disordered lattice structures, (b) periodic lattice structures, (c) conformal lattice structures.
Figure 2. The concept of modeling of lattice structures for the AM process.
Figure 3. (a) FE model with beam elements, (b) homogenized FE model with solid elements.
Figure 4. (a) The conceptual configuration of the semi-rigid joint frame element, (b) the as-fabricated voxel modeling procedure [76].
Figure 5. 3D tetrahedral elements compared with beam elements: (a) 3D solid mesh using 19830 elements and 2 h 44 min computational time, (b) 1D beam mesh using 160 elements and 51 s computational time, (c) 1D beam mesh using 96 elements and 12 s computational time.
Figure 6. Hybrid FE model to analyse the lattice structure connected to solid materials.
Figure 7. Contour plot of von Mises stress distribution (MPa) for two scaffolds with exactly the same porosity that were compressed in the y direction: (a) smooth surface, (b) irregular surface.
Figure 8. Beam elements with varied diameters to model the irregular strut: (a) implementation in the FE model, (b) actual irregularity.
Figure 9. Young's modulus and tensile strength as a function of polar angle and strut diameter.
List of Tables
Table 1a. A summary of compressive elastic modulus of lattice structures obtained from experiments (columns: AM process, material, elastic modulus of the bulk material (GPa), lattice topology, relative density (%), compressive elastic modulus (MPa), reference).
Table 1b. A summary of compressive elastic modulus of lattice structures obtained from experiments (continued; same columns as Table 1a).
Table 2. Advantages and disadvantages of experimental and simulation methods:
- Experimental method. Advantages: it reflects the as-fabricated mechanical properties; it can be used as a benchmark to verify simulation results. Disadvantages: high cost of the manufacturing process; it is difficult to test functional components with complex shapes on standard testing machines.
- Homogenization method. Advantages: low computational cost; it can be used in lattice-solid hybrid materials to represent the lattice structure. Disadvantages: it is not applicable to heterogeneous lattice structures; it is not easy to incorporate manufacturing defects; it is mathematically difficult to implement for a new topology.
- FE model with beam elements. Advantages: relatively low computational cost; it can model heterogeneous lattice structures; it can model irregular strut thickness by varying the diameter and stiffness of the beam elements. Disadvantages: the beam element assumption requires slender struts and is not applicable to stout struts; it cannot accurately model the manufacturing defects; the strut joints cannot be accurately modeled by beam elements.
- FE model with 3D solid elements. Disadvantages: high computational cost for large lattice structures; it is difficult to mesh thin struts; the mesh quality might be poor.
Table 3. Manufacturing influence considered in the simulation process.
03660200
en
[ "spi" ]
2024/03/04 16:41:18
2021
https://hal.science/hal-03660200/file/Ripart2021.pdf
V Ripard D Goncalves F Ville J H O Seabra J Cavoret P Charles Orcid Ids Grease composition influence on friction & starvation Keywords: Tribology, grease, lubrication, film thickness, starvation Nowadays, grease lubrication is frequently used in rolling element applications such as bearings or constant velocity joints. The advantage of grease is that it supplies lubrication to the application without leaking, thanks to its consistency. Nevertheless, starvation can occur, leading to damage such as scuffing. In the present study, starvation is analyzed using the Starvation Degree parameter through tests with different operating conditions and different types of greases. Introduction Greases are mainly used to lubricate components such as rolling element bearings or drivetrain components. The lubrication mechanism is quite complex. [START_REF] Lugt | A review on grease lubrication in rolling bearings[END_REF] It seems that grease lubrication is not a continuous process but is characterized by events caused by film breakdown and recovery. [START_REF] Lugt | On the chaotic behavior of grease lubrication in rolling bearings[END_REF] In order to study the lubrication mechanism and possible starvation, the film thickness is measured using different techniques such as electrical conductivity [START_REF] Muennich | Elastohydrodynamic lubrication of grease-lubricated rolling bearings[END_REF] or interferometry [START_REF] Gohar R | Optical measurement of Oil film thickness under elasto-hydrodynamic lubrication[END_REF], for example on a ball-on-disc tribometer. Using these techniques, the base oil viscosity appears not to be sufficient to evaluate the film thickness. The thickener must be taken into account, especially at low speed. [START_REF] Morales-Espejel | Film thickness in grease lubricated slow rotating rolling bearings[END_REF] The thickener structure in particular plays a key role in the film thickness. [START_REF] Kaneta | Effects of a thickener structure on grease elastohydrodynamic lubrication films[END_REF] Relubrication is a key parameter to extend bearing life at low temperature. In addition, the mechanical design of the component also plays an important role. [START_REF] Cann | Bearing performance limits with grease lubrication: the interaction of bearing design, operating conditions and grease properties[END_REF] If relubrication is poor, starvation can occur. To evaluate the starvation risk, a single dimensionless parameter, the Starvation Degree (SD), linking the operating parameters to the transition from the fully flooded to the starved regime in the case of an oil-lubricated contact was developed. [START_REF] Cann | The transition between fully flooded and starved regimes in EHL[END_REF] Its parameters are the surface tension σ_S, the amount of oil present in the vicinity of the track h_oil∞, the base oil viscosity η_0, the track width a and the velocity u. It gives the formula below: $SD = \frac{\eta_0\, u\, a}{h_{oil\infty}\, \sigma_S}$ For grease-lubricated contacts, the evaluation of SD is quite complicated due to the events mentioned before. [START_REF] Gonçalves | An experimental study on starved grease lubricated contacts[END_REF] Nevertheless, film recovery was often observed even when starvation occurs. [START_REF] Nagata | Track replenishment by lateral vibrations in grease-lubricated EHD contacts[END_REF] It seems to be due to contact oscillations that help the film thickness recover up to fully flooded conditions. In the present paper, different greases in terms of composition and physical or chemical properties were tested using different tribometers.
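As a purely numerical illustration of how the starvation degree is evaluated, the sketch below implements the formula above with order-of-magnitude input values that are assumed, not measured; a larger SD corresponds to a higher starvation risk.

```python
def starvation_degree(eta0, u, a, h_oil, sigma_s):
    """SD = eta0 * u * a / (h_oil * sigma_s), dimensionless."""
    return eta0 * u * a / (h_oil * sigma_s)

# Hypothetical order-of-magnitude inputs for a grease-lubricated point contact
eta0 = 0.1        # base oil viscosity [Pa.s]
u = 0.03          # mean velocity [m/s]
a = 1.0e-4        # track width [m]
h_oil = 5.0e-6    # oil layer available in the vicinity of the track [m]
sigma_s = 0.03    # surface tension [N/m]
print(f"SD = {starvation_degree(eta0, u, a, h_oil, sigma_s):.2f}")
```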
Starvation and the SD parameter are analysed using different operating conditions such as temperatures and/or speeds. Material and methods Grease properties Four greases were used. Their compositions vary (Table 1). Two base oils are used: the first one (A) is a mix of PAO and mineral oil, and the second one (B) is a synthetic oil. Two different thickener technologies are used: the first one is a complex soap using lithium and calcium, and the second one uses a polyurea thickener. However, even if G3 & G4 have the same thickener technology, the manufacturing process changes. Finally, two types of additives [START_REF] Dr | The Effect of Friction Modifier additives on CVJ Grease Performance[END_REF] are used: MoDTC (Molybdenum Dithiocarbamates) and MoDTP (Molybdenum Dithiophosphate). Concentrations were normalized to 1 for the maximum additive concentration. Moreover, the oil separation from the lubricating grease is determined using the IP121 method. [START_REF] Afnor | Produits pétroliers et graisses lubrifiantes -Séparation d'huile au stockage des graisses lubrifiantes -Méthode sous pression -Conditions statiques[END_REF] For 42 h at 40°C, the greases are placed inside cups in order to separate the oil from the grease; a mass is used to press the grease into the cup and the oil is collected in a container. As expected, G1 and G2 give the same oil separation results because only additives were changed between G1 & G2. On the contrary, a large gap is observed between G3 and G4: even if the components are the same, different oil separation results are obtained, which will influence their properties. This is due to the completely different manufacturing process of the thickener. High frequency reciprocating rig A PCS Instruments HFRR is used to study the different available greases. The HFRR allows the friction coefficient and the separation rate of a lubricant to be measured. The test principle is to study a reciprocating contact with pure sliding of a ball on a disc (Figure 1). It also allows the ability of the grease to supply oil to the contact to be evaluated. It is easy to use thanks to its design, and the balls and discs are inexpensive samples. The applied load can be set from 0.98 N (0.63 GPa) to 9.81 N (1.37 GPa), corresponding to a contact pressure denoted P_h-HFRR. The upper specimen is a 6 mm diameter ball and the bottom specimen a flat disc. They are made from polished 100Cr6 steel with Ra < 20 nm for the disc and Ra < 50 nm for the ball. The rig can be configured in terms of frequency f_HFRR (from 10 Hz to 200 Hz) and stroke d_HFRR (from 20 μm to 2 mm). The temperature can be set from ambient temperature up to 150°C thanks to a heater block placed under the lower specimen. The test conditions, chosen in order to maximize the speed on the HFRR, are the following: f_HFRR = 15 Hz; d_HFRR = 1 mm; P_h-HFRR = 1.32 GPa. To conduct this study, different test durations and operating temperatures are set (Table 2). For this study, it is possible to write SD as a function of SD_HFRR, which is assumed constant ($SD_{HFRR} = a/(h_{oil\infty}\,\sigma_S)$), and of η_0 and u, which vary between tests: $SD = \frac{\eta_0\, u\, a}{h_{oil\infty}\, \sigma_S} = \eta_0\, u\, SD_{HFRR}$ This condition gives a mean sliding velocity of 30 mm/s. A test example is available in Figure 2. For test condition 1 with G4 it is possible to observe the repeatability between two tests (curves G4-1 & G4-2) and also the film thickness corresponding to the first friction test. Considering a time t, it is possible to calculate the gap between the friction coefficients of G4-1 and G4-2 and divide it by the G4-1 value.
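The repeatability check described above can be written as a few lines of post-processing: compute the pointwise gap between the two friction traces, normalize it by the first trace and average over the test duration. The two traces below are synthetic stand-ins for the measured G4-1 and G4-2 curves, used only to show the calculation.

```python
import numpy as np

t = np.linspace(0.0, 3.0, 300)                          # test time [h]
cof_g4_1 = 0.06 + 0.01 * np.exp(-t)                     # hypothetical run G4-1
cof_g4_2 = cof_g4_1 * (1.0 + 0.02 * np.sin(4.0 * t))    # hypothetical run G4-2

relative_gap = np.abs(cof_g4_2 - cof_g4_1) / cof_g4_1   # gap divided by the G4-1 value
print(f"mean deviation between the two runs: {100.0 * relative_gap.mean():.1f}%")
```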
Taking the mean value, the deviation between these curves is 2%, so the HFRR test device is a very reliable rig for studying the friction coefficient with grease. In addition, to analyse the behaviour of the grease during the test, the electrical contact potential is measured to evaluate the surface separation. A second set of tests is considered in Table 3 in order to evaluate starvation. In order to increase the SD parameter, it is possible to increase the mean velocity on the test rig. Considering the minimum SD value as SD_G1-3, it is possible to calculate the variation of SD for each condition. It is presented in Table 4 as the ratio SD/SD_G1-3. It is important to note that a higher SD value is more prone to starvation. The induced wear scars are measured using a Sensofar PLu neox optical profiler with Nikon objectives. Its vertical resolution is less than 0.1 nm. It uses an interferometry technique in order to determine the topography. Ball-on-disc film thickness In order to explain the different behaviour of the greases in terms of wear and to complete the rheology study, an investigation is carried out using the EHD2 from PCS Instruments. It consists of a ball- or roller-on-disc contact with rolling and/or sliding conditions. This equipment allows the first HFRR results to be validated and the film thickness to be measured instead of only a separation rate. It is also possible to visualize the contact and the lubricant distribution inside the contact with a camera. The ball and the disc are mounted on independent rotating shafts with speed control S_EHD, so a slide-to-roll ratio SRR_EHD can be introduced inside the contact. The rig can measure the traction coefficient but also the film thickness down to 1 nm. The main goal of this study is to measure the grease film thickness using interferometry. [START_REF] Gohar R | Optical measurement of Oil film thickness under elasto-hydrodynamic lubrication[END_REF] The samples used are: • a roller made from 100Cr6 with a minor diameter of 19.05 mm and a crowned diameter of 133.35 mm, with a roughness down to 20 nm; • a disc made from glass (E = 75 GPa, ν = 0.2) with a roughness of approximately 5 nm. Tests are conducted without a grease scoop [9] to stay as close as possible to the real contact. To ensure repeatability, a 3D printed stencil is used (Figure 3(a)). When the stencil is filled, it allows 7.2 millilitres of grease to be placed on the disc (Figure 3(b)), ensuring that the same volume of grease is used for all the tests. The test conditions are available in Table 5. They are set in order to apply the maximum available load (and thus the maximum Hertzian contact pressure P_h-EHD) and a mean velocity equal to that of conditions 5 & 7. Compared to the HFRR, the speed is almost equivalent to that of tests 5 & 7. However, the EHD2 uses a rolling and sliding contact, as opposed to the HFRR which only uses a pure sliding contact. Another difference is that, even if the load is almost equivalent, one sample is made of glass, so the Young's modulus is very different; for that reason, the Hertzian pressure drops. Experimental results HFRR 2.1.1. Friction at 40°C. The first step is to use the HFRR in order to qualify the greases in terms of friction coefficient and surface separation. At 40°C, the first observation is the role of the additive concentrations. Indeed, the modification of the MoDTC concentration and the addition of MoDTP (G1 to G2) allow the friction coefficient to be decreased by 25%, once the COF is stabilized (around 16 h).
Moreover, the replacement of the thickener and base oil (G2 to G3& G4) allows the friction coefficient (COF) again to be decreased by 30% in order to give a low friction grease. These 2 major modifications give two types of grease: a low friction grease with a COF of 0.06 and a regular grease with a COF of 0.12 after stabilization. Figure 5 gives an indication of the separation during tests measuring the electrical resistance between the 2 surfaces. Separations for greases G2, G3& G4 vary during tests at the opposite of G1 which is stable from the beginning. The most important is to do not have separation equal to 0 which could mean a dry contact. In this case, separation rate is always above 20%. After the stabilization of the COF, a distinction can be established between the grease with a full additive package (G2, G3, G4) and grease G1 without the same MODTC concentration and MODTP. In addition, wear scars have been measured and the polyurea grease seems to better protect surfaces. 2.1.2. Friction at 80°C. In Figure 6, the coefficient of friction evolution is represented. For each grease, 2 different durations were set: 3 h and 18 h. The tests were carried out twice and the curves could be superposed. In order to clarify, only 18 h tests were plotted on Figure 6. The first observation is that the coefficient of friction of greases G1 & G2 converge around 0.08 to 0.09. However, the G1 friction coefficient seems unstable. An explanation can be that the tribofilm is heterogeneous. Using XPS measurements, [START_REF] Ripard | Tribological characterization of greased drive-shaft : Evaluation of constant velocity joint durability[END_REF] it has been proved that Mo intensity can vary all along the scar width depending on the grease composition Figure 7. At 80°C, the additive concentration does not have a significant influence on the coefficient of friction, when comparing grease G1 and G2. The low friction greases G3 and G4 have a coefficient of friction below 0.08 which is interesting for a drivetrain application. As an example, a low friction grease will lead to reduce generated axial force on a constant velocity joint. [START_REF] Serveto | Joint tripode coulissant de transmission automobile. Effort axial généré : essais et modèles[END_REF] In addition, a friction coefficient increase (G4) is noticed at around 3 h. This increase can be linked to the thickener on G4, the bleed oil from G3 & G4 being the same. Figure 8 presents the separations of each grease at 80°C which seem to be quite significant. At 80°C the performance gap between greases becomes less. The concentration of additives seems to be less relevant than at 40°C. However, it is important to notice the couple oil + thickener plays a key role on performance at this temperature. Test operating conditions evolution to provoke starvation. A first approach is proposed to explain the influence of grease composition on friction. It shows the important role of additives on friction coefficient but also the contribution of the couple thickener + oil. This role is amplified at 40°C as it is possible to observe. Also, the couple oil + thickener is involved in friction coefficient establishment duration. In order to induce starvation, test conditions available in Table 3 were used on HFRR. These conditions, increasing frequency from 15 to 100 Hz and distance from 1 to 2 mm, induce a mean sliding speed thirteen times higher which can lead to starvation. The first step was to study greases at 40°C using condition number 5. 
Indeed, 40°C seems to be important to understand the grease behaviour. The friction coefficients are available in Figure 9. Greases G1, G2 & G4 have exactly the same behaviour as in the previous conditions of Table 2 (Figure 4). Only a variation of friction coefficient between test conditions #1 and #5 is noticed due to speed. In addition, the surface separation is very satisfactory (Figure 10). However, for grease G3 a major friction coefficient is obtained. Moreover, when this jump occurs, the separation goes to 0%, meaning, there is no lubricant between contacting parts. Starvation seems to occur in that case. In order to go further, another test at 80 Hz is carried out (test number 6) which reduces the speed and so the SD parameter (Table 4). Also this test is repeated at 100 Hz. (see Figure 11). This phenomenon appears to be repeatable. A first jump appears, then the lubrication resumes and a second jump occurs. After these failures and due to surface degradation, the grease G3 is now the equivalent of a standard grease as G1. Indeed, it gives a friction coefficient equivalent or superior to G1 due to surface degradation. Also, it is frequency dependent. At 80 Hz, the phenomenon does not occur (Figure 11). Looking at it with the naked eye (figure 12), this major friction seems to involve a lot of particles inside the grease. The grease becomes black between the samples. Now, at 80°C the same tests are performed (Figures 13 and14). There is no starvation compared to Figures 9 and10. Indeed, here the friction coefficients are almost stable (Figure 13) and the film thickness is always above 50% (Figure 14). Finally, starvation occurs only at 40°C with grease G3. To explain this wear (Figure 15) a test with condition 5 and G3 is repeated and stopped when the jump in friction coefficient occurs. Wear scars were measured on the ball and disc after testing with grease G3 as shown in Figures 16 &17 It is possible to observe a material transfer from the disc (Figure 16) to the ball (Figure 17), which shows that, scuffing occurs with grease G3. The disc and ball weld and, with the movement imposed by the actuator, it transfers material onto the ball surface from the disc surface. Now, it could be interesting to study the influence of the grease composition directly on film thickness. Film thickness study On an EHL test rig, grease test repeatability is very difficult to obtain due to the chaotic behaviour of the grease 15- 17 even for exactly the same operating conditions and grease volumes. For conditions 7&8, the tests are repeated more than twice to obtain meaningful results. In this section, G1, G3 and G4 are studied as they have meaningful HFRR results. In order to explain scuffing, the film thickness is measured as a function of time. The condition (8&9) seems relevant compared to previous tests (Table 2) due to velocity which seems to be the key parameter in this study. It can be compared to conditions 5&7 from table 3 of the HFRR section. The results for G1, G3 and G4 are available in Figures 18, 19 First of all, it is interesting to note that for a same test a grease can have some chaotic but at the same time repeatable film thickness behaviour over time. For example, grease G1 at 40°C has 3 distinct behaviours (Figure 18). The first one separates correctly the surface around 250 nm. The second one is the opposite. The film thickness becomes smaller and approaches zero damaging the spaced layer coating of the test track on the disc and another one which oscillates between the 2 curves. 
This behaviour is very interesting. Indeed, it shows the ability of grease to replenish contact. The same ability is observable for G4 not only at 40°C but also at 80°C. In order to investigate phenomenon from Figures 18 to 20, interferometric pictures of the contact area have been done. In Figure 21, images from EHD illustrate the contact state before and after replenishment. The rolling direction is indicated by a yellow arrow. When starvation begins in the left of top Figure 21, it could be observed starvation induced vibrations which promoted the replenishment (the sudden rise in film thickness of Figures 18-20). However, this behaviour is not observed on G3. At 40°C , the film thickness decreases quickly and never recover unlike G4&G1. Finally, after some minutes, it goes to zero and damages the track. It is illustrated in Figure 22. It is possible to observe the beginning of starvation on the right and left of the contact. However, the grease never replenishes contact and tends to a dry contact. It is similar to the behaviour found by other authors. [START_REF] Huang | Film thickness decay and replenishment in point contact lubricated with different greases: a study into oil bleeding and the evolution of lubricant reservoir[END_REF] On the contrary, at 80°C, the replenishment of this grease G3 greatly improves (temperature drops oil viscosity which improves the oil separation) and the film thickness at 80°C can even be higher than at 40°C, as shown in Figure 19. Discussions Grease G1, G2 & G4 were tested by 2 experimental devices. These devices which are complementary allow to study greases having friction coefficient, separation but also film thickness variation information. It has been shown that G1 & G4 which have completely different composition in terms of additives concentration and couple oil + thickener have completely different friction behaviour. This composition variation which lead to double friction coefficient from G4 to G1, have no influence on scuffing. Indeed, for G1 & G4, when the contact is at low values of film thickness, the grease can replenish the contact thanks to its properties even against centrifugal effects. For G1 at 80°C, the grease does not show the same ability. Nevertheless, the film thickness is lower than previous values due to the increase of temperature but it is more stable during the test, and the surface separation is improved protecting the surfaces too. There are sporadic replenishments during tests. It is possible to explain: • Starvation induces vibrations: these vibrations will allow the grease to move back inside the contact area • Centrifugal effect: during tests, oil is stored on both sides of the contact. Due to centrifugal effect, the storage closer to the rotational shaft can supply the contact with the bleed oil. • Mechanical degradation of the thickener matrix: degraded grease thickener insures a residual film on the rolling track. All these phenomena improve re-flow until the next starvation event occurs. [START_REF] Cann | Starvation and reflow in a grease-lubricated elastohydrodynamic contact[END_REF][START_REF] Cann | Thin-film grease lubrication[END_REF] On the contrary, although G3 & G4 have close composition, same SD, and performance except at 40°C with condition #5, it has been noticed that G3 at 40°C induces starvation in HFRR. This starvation leads to scuffing and so surface degradation. 
On the EHD2 with G3 at 40°C, these replenishment events do not happen: the generated film shows a much smaller thickness, operating close to dry contact and hence damaging the surface, which can be the origin of the scuffing found in section 2.1.3. However, it is interesting to note that G3 and G4 have the same oil (B), and therefore the same viscosity. Also, in that case, the contact area and the velocity are the same. Finally, the film thickness after 500 s at 40°C is almost the same. The parameters entering the starvation degree SD defined for oil-lubricated contacts are therefore essentially constant (Table 4). This starvation criterion is thus not valid for grease, as suggested by other authors. 8 It could be interesting to consider oil separation measurements to study greased contact starvation, as these are highly different for G3 & G4. Nevertheless, for these last two greases the compositions and the chemical and physical properties are similar; only the manufacturing process is different.
Conclusions
In the present study, it has been shown that the grease composition, and in particular the additives, can decrease friction. Adding MODTC and MODTP in sufficient quantity can improve the friction coefficient by 25% at 40°C. Also, changing the oil-thickener couple from a lithium-calcium complex with a mineral & PAO base oil to a polyurea thickener with a synthetic ester + PAO base oil allows friction to be decreased by a further 30% at 40°C. However, if the contact frequency increases, it is possible to observe starvation, and thus scuffing of the surfaces, with greases sharing the same oil, thickener technology and additives. It is important to note that the manufacturing process of the polyurea thickener seems to play a key role in starvation in our case. Borrowing the SD parameter from oil-lubricated contact theory, it can be concluded that it is not sufficient to predict starvation in grease lubrication. Indeed, in this study, two greases with the same oil viscosity and contact parameters, leading to equivalent SD parameters, give different behaviours due to very different oil separation: while one grease ensures correct lubrication, the other one induces starvation and thus scuffing, according to the HFRR and EHD2 measurements. In a future study, it could be interesting, in order to predict starvation, to modify the SD parameter with a physical notion of oil bleed, using the IP121 method for example. To do this, it would be interesting to control more finely the parameters between the different greases. Also, it has been shown that temperature and contact frequency play a key role in starvation. An application of this study can be the lubrication of rolling elements in automotive drivetrains. A constant velocity joint uses grease in order to lubricate the contact between the roller and the housing. Drivetrain greases are important for automotive transverse drivetrains: they fulfil two main goals, lubricating the contact and extracting heat. In that case, the friction coefficient can be directly linked to the generated axial force of the tripod [START_REF] Serveto | Joint tripode coulissant de transmission automobile. Effort axial généré : essais et modèles[END_REF] . Friction between the rollers and the housing creates a force whose magnitude depends on the friction coefficient and induces vibration in the car. These essential functions ensure that the components have a long life, 21 avoiding starvation for example.
Figure 1. Schematic of the HFRR.
Figure 3. (a) EHD stencil, (b) stencil filled.
Figure 4. Grease friction coefficient results at 40°C.
Figure 5. Greases film thickness at 40°C.
Figure 6. Grease friction coefficient results at 80°C.
Figure 7. XPS line scan from wear scars with same types of grease. [START_REF] Ripard | Tribological characterization of greased drive-shaft : Evaluation of constant velocity joint durability[END_REF]
Figure 8. Grease film thickness at 80°C.
Figure 9. Friction coefficient for test number 5.
Figure 10. Film thickness for test number 5.
Figure 11. G3 friction coefficient for test numbers 5 and 6.
Figure 12. Wear G3 on HFRR.
Figure 13. Friction coefficient for test number 7.
Figure 14. Film thickness for test number 7.
Figure 15. HFRR test stopped with G3 when jump occurs.
Figure 16. Disc scar.
Figure 17. Ball scar.
Figure 18. G1 results for conditions 8 & 9.
Figure 19. G3 results for conditions 8 & 9.
Figure 20. G4 results for conditions 8 & 9.
Figure 21. Contact replenishment with G4 at 40°C.
Figure 22. Contact starvation with G3 at 40°C.

Table 1. Composition of greases tested (values for G1 / G2 / G3 / G4).
Oil: A (Mineral + PAO) / A (Mineral + PAO) / B (Synthetic ester + PAO) / B (Synthetic ester + PAO)
Thickener: Lithium/Calcium complex / Lithium/Calcium complex / Polyurea / Polyurea
Additives: MODTC 0.28 / MODTC 1 + MODTP 0.4 / MODTC 1 + MODTP 0.4 / MODTC 1 + MODTP 0.4
Base oil dynamic viscosity μ0 (cP) at 40°C: 58.2 / 58.2 / 64.2 / 64.2
Base oil dynamic viscosity μ0 (cP) at 80°C: 11.1 / 11.1 / 13.5 / 13.5
S_Oil (%) IP 121 12: 3.6 / 3.6 / 0.6 / 6
NLGI number 22: 2 / 2 / 1-2 / 1-2

Table 2. Initial HFRR conditions.
Test number   Duration [h]   Temperature [°C]
1             3              40
2             18             40
3             3              80
4             18             80

Figure 2. HFRR test example with condition 1.

Table 3. New HFRR conditions to discriminate grease starvation.
Test number   f_HFRR [Hz]   d_HFRR [mm]   P_h-HFRR [GPa]   Duration [h]   Temperature [°C]
5             100           2             1.32             3              40
6             80            2             1.32             3              40
7             100           2             1.32             3              80

Table 4. SD ratio variation as function of SD_G1-3.
Test conditions number   1&2    3&4    5      6      7
G1                       5.2    1      69.9   55.9   13.3
G2                       5.2    1      69.9   55.9   13.3
G3                       5.8    1.2    77.1   61.7   16.2
G4                       5.8    1.2    77.1   61.7   16.2

Table 5. EHD test conditions.
Test number   S_EHD [mm/s]   SRR_EHD [%]   P_h-EHD [GPa]   Duration [s]   Temperature [°C]
8             400            10            0.38            2100           40
9             400            10            0.38            2100           80

Acknowledgements
The authors would like to thank the ANRT (CIFRE N°2017/0094), STELLANTIS, the LaMCoS, in particular S. Wegeler, the Universidade do Porto (INEGI and FEUP) and the LTDS, in particular C. Minfray and F. Dassenoy, for their help and support.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
03659791
en
[ "spi" ]
2024/03/04 16:41:18
2022
https://hal.science/hal-03659791/file/Mollon2022.pdf
Guilhem Mollon email: [email protected] The soft discrete element method Keywords: Granular media, Discrete modelling, Soft grains, SDEM In order to accelerate simulations of assemblies of highly deformable grains, a novel numerical approach, called Soft Discrete Element Method (SDEM) is presented. It consists in extending the classical DEM by introducing the deformability of the grains in a simplified way. Simple kinematics are postulated in order to represent the ovalisation of the grains and their local deformations around interparticle contacts. Adequate equations of motion are derived, and a contact model is proposed in the framework of elastic frictionless grains. Comparisons are made with existing analytical and numerical solutions, and the ability of this method to simulate the compressibility and some micromechanical aspects of this class of materials is validated. Upon implementation in optimized DEM codes, the SDEM is expected to allow large scale simulations of samples of deformable particles. Introduction The Discrete Element Method (DEM, [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF]) has now become ubiquitous in the field of granular science, because it offers a large number of advantages when compared to a continuous modelling of granular samples: it is conceptually simple, versatile, and particularly informative about local quantities that may be out of reach for experimentalists. Modern refinements of this technique (e.g. use of complex/realistic grain shapes [START_REF] Mollon | 3D generation of realistic granular samples based on random fields theory and Fourier shape descriptors[END_REF][START_REF] Mollon | Discrete modelling of rock avalanches: sensitivity to block and slope geometries[END_REF][START_REF] Mollon | Can friction replace roughness in the numerical simulation of granular materials?[END_REF], enrichment of contact laws [START_REF] Scholtès | On the capillary stress tensor in wet granular materials[END_REF][START_REF] Luding | Cohesive, frictional powders: contact modes for tension[END_REF], coupling with different physics [START_REF] Tran | Numerical modelling of backward front propagation in piping erosion by DEM-LBM coupling[END_REF][START_REF] Renouf | Coupling electrical and mechanical effects in discrete element simulations[END_REF], etc.) allow it to deal with a broad range of scientific and technological applications. A major assumption of DEM, however, is the perfect rigidity of the grains (only slightly degraded by the small interpenetrations allowed at the contacts). This assumption is sensible in most applications, but becomes very questionable when the level of stress applied to the sample is of the same order of magnitude as the stiffness of the material composing the grains. If this material is ductile enough (i.e. if grain breakage is disregarded), such a situation should lead to large deformations of the grains (Fig. 1), which is prevented in the strict framework of DEM. Several numerical approaches have been proposed to deal with highly deformable grains [START_REF] Gethin | A discrete deformable element approach for the compaction of powder systems[END_REF][START_REF] Güner | Numerical modeling of cold powder compaction using multi particle and continuum media approaches[END_REF][START_REF] Nguyen | Compaction of granular materials composed of deformable particles[END_REF][START_REF] Boromand | Jamming of deformable polygons[END_REF], and all of them rely on some sort of discretization of the grains. 
This is notably the case of the Multibody Meshfree Approach [START_REF] Mollon | A multibody meshfree strategy for the simulation of highly deformable granular materials[END_REF][START_REF] Mollon | A unified numerical framework for rigid and compliant granular materials[END_REF], which was implemented in the open source code MELODY [START_REF] Mollon | D[END_REF]. In this framework, a large number of field nodes are positioned in each grain and on its contour. Each field node carries two degrees of freedom in displacement, and the displacement field is interpolated between the nodes using meshfree shape functions. This choice was made to improve the robustness of the code when grains are submitted to very large deformations. Grains contours are represented by a piecewise-linear frontier, and contacts are treated by a robust two-pass node-to-segment algorithm. This technique was successfully applied in granular physics [START_REF] Mollon | Mixture of hard and soft grains: micromechanical behavior at large strains[END_REF], tribology [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF][START_REF] Zhang | Significance of third body rheology in friction at a dry sliding interface observed by a multibody meshfree model: influence of cohesion between particles[END_REF], geomechanics [START_REF] Mollon | Can friction replace roughness in the numerical simulation of granular materials?[END_REF], and geophysics [START_REF] Mollon | Simulating melting in 2D seismic fault gouge[END_REF]. A major limitation of this tool, however, is its computational cost. The large number of degrees of freedom and the small time steps typically limit this approach to samples of a few thousands grains at the most, while classical DEM can nowadays deal with several millions of them. To go beyond this limitation, this paper presents a novel approach for the simulation of large samples of soft grains. This approach, called the Soft Discrete Element Method (SDEM), is based on a different philosophy inspired both by the field of Model Order Reduction [START_REF] Chinesta | A short review on model order reduction based on proper generalized decomposition[END_REF][START_REF] Amsallem | Nonlinear model order reduction based on local reduced-order bases[END_REF][START_REF] Corigliano | Model Order Reduction and domain decomposition strategies for the solution of the dynamic elastic-plastic structural problem[END_REF] and by Moveable Cellular Automatons [START_REF] Popov | Theoretical principles ofmodeling elastoplastic media by movable cellular automata method. I. Homogeneous media[END_REF][START_REF] Salman | Implementation of MCA in the framework of LIGGGHTS[END_REF]. It consists in simplifying considerably the kinematics of the soft grains by only considering a very limited number of deformation modes. It leads to a dramatic reduction in the computational cost, but keeps the main physics at work in samples of highly deformable grains. Section 2 of this paper presents these simplified kinematics, and Section 3 develops the associated equations of motions. In Section 4, the stress tensor and the contact forces are derived, as well as the techniques associated with the numerical solver. Section 5 proposes a validation of the method by comparing it with existing analytical and numerical solutions. Kinematics Degrees of freedom We first define the main frame O, ��� ⃗ e X , ��� ⃗ e Y of the problem, which remains fixed. 
For a given grain, an attached rigidbody motion frame is defined as C, � � ⃗ e x , � � ⃗ e y , with C being its center of mass. In the proposed numerical framework a 2D deformable grain is assumed to have only six degrees of freedom. Three of them are related to rigid-body motions: two translations of the grain center of mass x c (t) and y c (t) (which define the position of C with respect to O ) and one rotation (t) (which is the angle of the direction � � ⃗ e x with respect to ��� ⃗ e X ). The remaining three degrees of freedom are related to the deformations of the grain: they are the two principal strains 1 (t) and 2 (t) (with no specific ordering) and the angle (t) of orientation of the principal deforma- tion frame C, � � ⃗ e 1 , � � ⃗ e 2 with respect to the rigid-body frame C, � � ⃗ e x , � � ⃗ e y attached to the grain. Namely, (t) is the angle between the direction � � ⃗ e x and the direction � � ⃗ e 1 of the main stretching 1 (t) . This is summarized in Fig. 2, for a grain with an initial radius R = 0.5 . 1 (t) , 2 (t) and (t) actually define a strain tensor, which is assumed to be constant over the whole domain covered by the grain: When submitted to such a homogeneous strain field, a circular grain will take the shape of an ellipse with two orthogonal half-axes of respective lengths R ⋅ 1 + 1 (t) and R ⋅ 1 + 2 (t) , oriented along the frame C, � � ⃗ e 1 , � � ⃗ e 2 . These kinematics are thus more complex that those commonly encountered in 2D DEM, as they offer the possibility for each grain to gain a certain amount of ellipticity as a response to the loads it is submitted to. This is, obviously, a strong simplification of the real kinematics of deformable grains which, to be represented in detail, would require a much larger number of degrees of freedom (using for example shape functions attached to field nodes, like in the multibody meshfree approach, [START_REF] Mollon | A multibody meshfree strategy for the simulation of highly deformable granular materials[END_REF][START_REF] Mollon | A unified numerical framework for rigid and compliant granular materials[END_REF]). As we demonstrate in the remainder of this paper, it provides nevertheless a satisfactory description of the main physics of assemblies of deformable grains. The main question that arises is: how to determine the evolution in time of these six degrees of freedom? Newtonian dynamics can be readily applied to the three components of the rigid body motion (accounting for the possible changes in rotational inertia brought by the deformations), but the case of the three other degrees of freedom is less straightforward. The usual way to predict the deformation of a solid is to employ a constitutive model, i.e. a relation (usually based on thermodynamics and phenomenological observations) between the stress tensor and the strain tensor. Since the strain is assumed to be homogeneous in the grain, so must be the stress. We thus define a homogeneous field of stress tensor within a given grain as follows: (1) The principal directions of this stress tensor are aligned with those of the strain tensor if the chosen constitutive model is isotropic, which is postulated in the remainder of this paper for the sake of simplicity. 
𝜀 = 𝜀 1 0 0 𝜀 2 {� ⃗ e 1 , � ⃗ e 2 } = 𝜀 xx 𝜀 xy 𝜀 xy 𝜀 yy {� ⃗ e x , � ⃗ e y } = 𝜀 XX 𝜀 XY 𝜀 XY 𝜀 YY {� � ⃗ e X , � � ⃗ e Y } (2) 𝜎 = 𝜎 1 0 0 𝜎 2 {� ⃗ e 1 , � ⃗ e 2 } = 𝜎 xx 𝜎 xy 𝜎 xy 𝜎 yy {� ⃗ e x , � ⃗ e y } = 𝜎 XX 𝜎 XY 𝜎 XY 𝜎 YY {� � ⃗ e X , � � ⃗ e Y } A given grain is submitted to a number of external forces applied on its external contour. These forces can be summarized, as is often done in granular science, by an equivalent stress field called ext . If static equilibrium is reached, the resulting force and torque applied to the grain are null and the stress within the grain is equal to that induced by the external forces ext . However, DEM simulations are dynamic by nature, and so is the proposed method. Hence, static equilibrium is never perfectly enforced. There is always some amount of unbalanced force and, in the present case, of unbalanced stress. Such a system hence relies on equations of motion, which have to be written for each degree of freedom. Any such equation must establish a relation between a generalized force and a generalized mass (both associated to a given degree of freedom) on one hand, and the second time-derivative of the concerned degree of freedom, on the other hand. The six equations of motion are derived in Section 3 for the postulated kinematics. Deformations around contacts The general ovalisation of the grain shape captures well some features of the grains deformation shown in Fig. 1, but a major element is still missing. It is, indeed, impossible with such kinematics to represent properly the closure of the intergranular space under a sufficiently large confining stress. The missing ingredient is the local deformation of the grains in the direct neighbourhood of their contacts. We clearly observe in Fig. 1 that, if the grains are squeezed enough, they do not exhibit any more a point contact like in usual rigid granular materials. Instead of this, the matter around the contact deforms to accommodate the load, and the contact point becomes a contact area (or contact line in 2D). As the load increases, these contact lines occupy a larger proportion of the grains contours, until the whole contour of each grain is in contact with other grains and the porosity is closed. We also observe in Fig. 1 that, at least Fig. 2 Illustration of the six degrees of freedom for an initially circular grain. The path to the current configuration can be conceptualized either by a deformation (a) followed by a rigid body motion (b), or by a rigid body motion (c) followed by a deformation (d) in the case where the constitutive model is the same for all grains, the contact line between any two grains is never far from a linear segment. In classical DEM, a small interpenetration is allowed between contacting grains, in order to compute a repulsive force opposing to contact. In most cases, the normal stiffness used to compute this force is seen as a numerical parameter of regularization rather than as a physical quantity. In some contexts (e.g. small strain behaviours of sands), this interpenetration is seen as a proxy for the local contact-induced deformation. The classical Hertz theory ( [START_REF] Hertz | Uber die Beruhrung fester elastischer Korper[END_REF]) can for example be applied to compute the repulsive normal force between two spheres on physical grounds, and the Hertz-Mindlin model ( [START_REF] Mindlin | Compliance of elastic bodies in contact[END_REF]) can be applied the same way to derive both normal and tangential components of this force. 
These approaches, however, rely on rather restrictive assumptions, including the necessity for the deformation (and thus for the interpenetration) to remain small with respect to the grain size. For this reason, they cannot be directly applied in the present context. Precisely describing the large deformation of the matter composing the grains in the neighbourhood of a heavily loaded contact would require a much larger number of degrees of freedom than the six that were defined in the previous section, and would contradict the general spirit of the method. Alternatively, we propose here to account for this phenomenon in a simplified way. We accept that very large interpenetrations take place between the ellipses representing the contacting grains, but we then assume that the real contours of the grains in that case are not completely represented by the ellipses. In the contact area, the contour is replaced by a straight segment linking the two intersection points between the ellipses, as represented in Fig. 3. Hence, in the case of multiple contacts, a given grain contour is a succession of straight segments and of portions of the main ellipse defined by the kinematics of the previous section. Obviously, this assumption means that a large part of the grains surface areas (i.e. all the area of each ellipse located beyond the contact segment) is ignored, which could be seen as a problem in terms of conservation laws. A simple solution exists to solve this issue: since this loss of surface can be computed, it can be added to the strain tensor as a spherical (i.e. purely volumetric) term, and thus be accounted for when applying the constitutive model to compute the stress tensor. If we assume, for example, that the material composing the grains is incompressible (i.e. any loss of surface is prohibited), then the surface area lost beyond the contact segment will be regained by an increase of 1 and 2 in accordance with the chosen constitutive model. This approach is detailed in Section 4, as well as the calibration of the contact forces associated with such a contact segment. Equations of motion Degrees of freedom of the rigid-body part of the motion: In this paragraph, we consider that a grain with an initial radius R and a unit mass has acquired a certain deformation (defined by the degrees of freedom 1 , 2 , and ), and that it is subjected to a rigid-body motion while keeping a constant shape. The case of the translational degrees of freedom x c and y c is straightforward, as they obey to the classical New- tonian laws of dynamics, written as: where F x (t) and F y (t) are the resulting forces applied on the grain and: The case of the rotational rigid motion is just less straightforward, since it requires to account for the current shape of the grain. It writes: where C (t) is the resulting torque applied on the grain, and M (t) is the current rotational inertia of the deformed elliptic grain (Fig. 4a): Degrees of freedom " 1 and " 2 In this paragraph and the next one, we consider that the degrees of freedom related to the rigid motion of the grains (i.e. x c , y c , and ) are kept constant and equal to zero. We thus only consider a deformation of the grain defined by two principal strains 1 and 2 , oriented along an angle . 
Under these assumptions, a point of position (X, Y) in the reference configuration is located at a certain time t at the location (x(t), y(t)) given by: (3) We first consider that 2 and are constant in time, and we compute the velocity of this point as a function of 1 (t): ⎧ ⎪ ⎨ ⎪ ⎩ ẍc (t) = F x (t) M x ÿc (t) = F y (t) M y (4) M x = M y = R 2 (5) θ(t) = C 𝜃 (t) M 𝜃 (t) (6) M (t) = 1 + 1 (t) 2 ⋅ 1 + 2 (t) 2 ⋅ R 4 2 If we assign a mass dm to every such material point, the total kinetic energy corresponding at a certain time t to the variation of 1 (t) is thus given by: where Ω is, for example, the geometric domain occupied by the body in its reference configuration. Thus, in polar coordinates (r, ) , we have X = r ⋅ cos and Y = r ⋅ sin , giving: This energy can be reorganized under the form: where M 1 is the generalized mass associated with the degree of freedom 1 , and given by: Which yields: And finally: [START_REF] Tran | Numerical modelling of backward front propagation in piping erosion by DEM-LBM coupling[END_REF] x (t)= 1 + 1 (t) ⋅ X ⋅ cos 2 (t) + Y ⋅ cos (t) ⋅ sin (t) + 1 + 2 (t) ⋅ X ⋅ sin 2 (t) -Y ⋅ cos (t) ⋅ sin (t) y(t)= 1 + 1 (t) ⋅ X ⋅ cos (t) ⋅ sin (t) + Y ⋅ sin 2 (t) ⋅ sin (t) + 1 + 2 (t) ⋅ -X ⋅ cos (t) ⋅ sin (t) + Y ⋅ cos 2 (t) (8) ̇x(t) = ̇𝜀 1 (t) ⋅ X ⋅ cos 2 𝛼 + Y ⋅ cos 𝛼 ⋅ sin 𝛼 ̇y(t) = ̇𝜀 1 (t) ⋅ X ⋅ cos 𝛼 ⋅ sin 𝛼 + Y ⋅ sin 2 𝛼 (9) E c1 (t) = ∫ Ω ̇x2 (t) + ̇y2 (t) 2 dm (10) E c1 (t) = 2𝜋 ∫ 0 R ∫ 0 ̇𝜀 2 1 (t) ⋅ r ⋅ cos 𝜃 ⋅ cos 2 𝛼 + r ⋅ sin 𝜃 ⋅ cos 𝛼 ⋅ sin 𝛼 2 2 + ̇𝜀 2 1 (t) ⋅ r ⋅ cos 𝜃 ⋅ cos 𝛼 ⋅ sin 𝛼 + r ⋅ sin 𝜃 ⋅ sin 2 𝛼 2 2 𝜌 ⋅ r ⋅ drd𝜃 (11) E c1 (t) = 1 2 M 1 ⋅ ̇𝜀 2 1 (t) (12) M 1 = ⋅ 2 ∫ 0 R ∫ 0 r ⋅ cos ⋅ cos 2 + r ⋅ sin ⋅ cos ⋅ sin 2 + r ⋅ cos ⋅ cos ⋅ sin + r ⋅ sin ⋅ sin 2 2 ⋅ r ⋅ drd (13) M 1 = ⋅ 2 ∫ 0 R ∫ 0 r 3 ⋅ cos 2 ( -) ⋅ drd In order to compute the generalized force associated with the same degree of freedom, we postulate that there exists an unbalanced homogeneous stress field s(t) in the body at a certain time t: This unbalanced stress corresponds to the stress ext (t) induced by external forces applied on the grain, corrected by the stress (t) related to the current strain tensor of the body by the means of a given constitutive model (see section 4). Since this field is homogeneous in the grain, there is an associated resulting force field in each material point of initial coordinates (X, Y): Hence, the generalized force associated with the degree of freedom 1 is given by the general expression: where (x(t), y(t)) are the current coordinates of the point of reference coordinates (X, Y) . Still considering 2 and as constant in time, and using Eq. ( 7), we have: These expressions are then injected in F 1 (t) from Eq. ( 17): (14) M 1 = R 4 ∕4 (15) s(t) = s XX (t) s XY (t) s XY (t) s YY (t) {� � ⃗ e X , � � ⃗ e Y } = s 11 (t) s 12 (t) s 12 (t) s 22 (t) {� ⃗ e 1 , � ⃗ e 2 } (16) F x (X, Y, t) =- s XX (t) X - s XY (t) Y F y (X, Y, t) =- s YY (t) Y - s XY (t) X ( 17 ) F 1 (t) = ∫ Ω x 1 (X, Y, t) ⋅ F x (X, Y, t) + y 1 (X, Y, t) ⋅ F y (X, Y, t) dΩ (18) ⎧ ⎪ ⎨ ⎪ ⎩ x 1 (X, Y, t) = X ⋅ cos 2 + Y ⋅ cos ⋅ sin y 1 (X, Y, t) = X ⋅ cos ⋅ sin + Y ⋅ sin 2 (19) F 1 (t) =-∫ Ω X ⋅ cos 2 + Y ⋅ cos ⋅ sin ⋅ s XX (t) X + s XY (t) Y + X ⋅ cos ⋅ sin + Y ⋅ sin 2 ⋅ s YY (t) Y + s XY (t) X dΩ If the grain is considered as isotropic, the generalized force of the degree of freedom 1 (related to the extension/ compression in the direction � � ⃗ e 1 ) should be independent from the value of the angle , and we can thus choose = 0. 
In that case, the frames O, � � ⃗ e x , � � ⃗ e y and O, � � ⃗ e 1 , � � ⃗ e 2 are identical, and we can rewrite the generalized force: Integrating this simple expression on the reference domain finally leads to: This result provides the generalized expression for the dynamics of the degree of freedom 1 : Similar expressions are derived for 2 . We have: with M 2 = M 1 and: Degree of freedom We now consider that 1 and 2 are constant in time (Fig. 4b), and we derive the new expressions for the current velocity of a given point of initial coordinates (X, Y) and of current coordinates (x(t), y(t)) . For that purpose, we differentiate in time the Eq. ( 7): This expression simplifies to: (20) F 1 (t) =-∫ Ω X ⋅ s 11 (t) X + s 12 (t) Y dΩ (21) F 1 (t) =-R 2 ⋅ s 11 (t) (22) ̈𝜀 1 (t) = F 1 (t) M 1 (23) ̈𝜀 2 (t) = F 2 (t) M 2 (24) F 2 (t) =-R 2 ⋅ s 22 (t) (25 ) ̇x(t) =-2 1 + 𝜀 1 X ⋅ sin 𝛼(t) ⋅ cos 𝛼(t) ⋅ ̇𝛼 (t) + 2 1 + 𝜀 2 X ⋅ sin 𝛼(t) ⋅ cos 𝛼(t) ⋅ ̇𝛼 (t) + Y 𝜀 1 -𝜀 2 ⋅ cos 2 𝛼(t) -sin 2 𝛼(t) ⋅ ̇𝛼 (t) ̇y(t) =-2 1 + 𝜀 2 Y ⋅ sin 𝛼(t) ⋅ cos 𝛼(t) ⋅ ̇𝛼 (t) + 2 1 + 𝜀 1 Y ⋅ sin 𝛼(t) ⋅ cos 𝛼(t) ⋅ ̇𝛼 (t) + X 𝜀 1 -𝜀 2 ⋅ cos 2 𝛼(t) -sin 2 𝛼(t) ⋅ ̇𝛼 (t) (26 ) ̇x(t) = ̇𝛼 (t) ⋅ 𝜀 1 -𝜀 2 ⋅ -X ⋅ sin 2 𝛼(t) + Y ⋅ cos 2 𝛼(t) ̇y(t) = ̇𝛼 (t) ⋅ 𝜀 1 -𝜀 2 ⋅ Y ⋅ sin 2 𝛼(t) + X ⋅ cos 2 𝛼(t) The kinetic energy associated with this degree of freedom is thus equal to: As in the previous paragraph, this energy can be reorganized under the form: where M (t) is the generalized mass associated with this degree of freedom, and given by: This expression simplifies to: After a simple integration on the reference domain, the generalized mass is: It is interesting to observe that this mass is not constant in time, since it depends on the values of 1 and 2 (it vanishes if the grain is circular, i.e. if 1 = 2 , since the grain can- not deform along the degree of freedom in that particular case). When implemented numerically, M will thus have to be updated at each time step. To compute the corresponding generalized force, we use a formula analogous to Eq. ( 17): where the force field is given in Eq. ( 16). The partial derivatives are computed from Eq. ( 7) and give, at the end of the calculation: (27) E c𝛼 (t) = ∫ Ω ̇x2 (t) + ̇y2 (t) 2 dm (28) E c𝛼 (t) = 1 2 M 𝛼 (t) ⋅ ̇𝛼 2 (t) (29) M (t) = ∫ Ω 1 -2 2 ⋅ -X ⋅ sin 2 (t) + Y ⋅ cos 2 (t) 2 + 1 -2 2 ⋅ Y ⋅ sin 2 (t) + X ⋅ cos 2 (t) 2 dm (30) M (t) = 1 -2 2 ∫ Ω X 2 + Y 2 dm (31) M (t) = 1 (t) -2 (t) 2 R 4 2 (32) F (t) = ∫ Ω x (X, Y, t) ⋅ F x (X, Y, t) + y (X, Y, t) ⋅ F y (X, Y, t) dΩ Injecting them and the force field in Eq. ( 32) yields: For reasons similar to those used in the previous paragraph, this generalized force should be independent on the value of the angle (i.e. it should be frame-independent to remain objective), and we can use an arbitrary value for this parameter, such as = 0 ∶ After integration, we finally obtain: Just like the generalized mass, this force vanishes for a circular shape (i.e. 1 = 2 ) . The general expression of the time-dynamics of the degree of freedom is thus: Summary Based on the calculations derived in the previous paragraph, we can summarize the equations of motion of a given grain of initial radius R and of initial unit mass : Using these equations of motion requires to compute both the external forces applied on the grain and its internal stress field. 
[START_REF] Cantor | Compaction model for highly deformable particle assemblies[END_REF] x (X, Y, t) = 1 -2 ⋅ -X ⋅ sin 2 (t) + Y cos 2 (t) y (X, Y, t) = 1 -2 ⋅ Y ⋅ sin 2 (t) + X cos 2 (t) (34) F (t) =-∫ Ω 1 -2 ⋅ -X ⋅ sin 2 (t) + Y cos 2 (t) ⋅ s XX (t) X + s XY (t) Y + 1 -2 ⋅ Y ⋅ sin 2 (t) + X cos 2 (t) ⋅ s YY (t) Y + s XY (t) X dΩ (35) F (t) =-1 -2 ∫ Ω Y ⋅ s 11 (t) X + s 12 (t) Y + X ⋅ s 22 (t) Y + s 12 (t) X dΩ (36) F (t) =-2 R 2 ⋅ 1 (t) -2 (t) ⋅ s 12 (t) (37) ̈𝛼 (t) = F 𝛼 (t) M 𝛼 (t) (38) ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ ẍc (t) = F x (t) 𝜌𝜋R 2 ÿc (t) = F y (t) 𝜌𝜋R 2 θ(t) = 2⋅C 𝜃 (t) (1+𝜀1(t)) 2 ⋅(1+𝜀 2 (t)) 2 ⋅𝜌𝜋R 4 and ⎧ ⎪ ⎨ ⎪ ⎩ ̈𝜀 1 (t) =- 4⋅s 11 (t) 𝜌R 2 ̈𝜀 2 (t) =- 4⋅s 22 (t) 𝜌R 2 ̈𝛼 (t) =- 4⋅s 12 (t) (𝜀1(t)-𝜀2(t))𝜌R 2 4 Forces and stresses Stress tensors As exposed in the previous sections, two stress tensors need to be defined. The first one, noted , is related in a univocal way to the strain tensor by the means of the constitutive model, and thus represents the actual stress in the grain. The second one, noted ext , serves as a compact reduction of the internal efforts induced in the grain by all the contact forces applied on its contour. As a first step into this novel technique, very simple assumptions are to be made. We assume in this paper that a linear relationship exists between the stress and strain tensors. Since we wish to account for the surface loss at the contact segments between grains, we need to distinguish the volumetric and deviatoric parts of the strain tensor. The deviatoric part is: The volumetric part accounts both for the surface change related to the main deformations 1 and 2 , and for the surface losses S j related to the intersection of the corresponding grain with any other contacting grain j (Fig. 3c): It should be noted that this evaluation of the volumetric strain does not rely on the trace of , since this approach would only be licit in the limit of small strains. In the more general case of finite strains, which is more likely to be encountered in the present situation, it is necessary to compute directly the area of the deformed grain based on the lengths of its main axes. With this decomposition, we can thus deduce the following stress field: (39) 1dev (t) = 1 (t) -1 (t) + 2 (t) ∕2 (40) 2dev (t) = 2 (t) -1 (t) + 2 (t) ∕2 (41) V (t) = R 2 ⋅ � 1 + 1 (t) �� 1 + 2 (t) � - ∑ j S j R 2 -1 (42) 1 (t) = (2 + ) ⋅ 1dev (t) + ⋅ 2dev (t) + ⋅ V (t) (43) 2 (t) = ⋅ 1dev (t) + (2 + ) ⋅ 2dev (t) + ⋅ V (t) where and are the Lamé coefficients and is the bulk modulus, related to Young's modulus and Poisson's coefficient by: Hence, equations of linearized elasticity are applied even though finite strains are expected in the system. Future extensions of the method might include more complex constitutive relations, such as hyper-elasticity or elasto-plasticity. The external stress tensor associated to the contact forces applied on a given grain is given by the classical Love-Weber formula ( [START_REF] Nicot | On the definition of the stress tensor in granular media[END_REF]): (44) = E (1 + )(1 -2 ) (45) = E 2(1 + ) (46) = E (3 -6 ) (47) ext ij = 1 S ∑ contacts F i ⋅ r j where S is the particle surface area, F i are the components of the force ��� ⃗ F c associated to a given contact, and r j are the com- ponents of the corresponding branch vector (i.e. the vector linking the grain center of mass and a representative contact point). 
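As an illustration, the Love-Weber sum of Eq. (47) can be assembled in a few lines of code. The sketch below uses NumPy and illustrative names; it is not an excerpt of the MELODY or SDEM implementations, only a minimal transcription of the formula.

```python
import numpy as np

def external_stress(contact_forces, branch_vectors, grain_area):
    """External stress tensor of one grain from its contact forces (Eq. 47).

    contact_forces : list of 2D force vectors applied on the grain, one per contact
    branch_vectors : list of 2D vectors from the grain centre of mass to the
                     representative contact points
    grain_area     : current surface area S of the grain
    The sum follows the Love-Weber expression sigma_ext_ij = (1/S) * sum_c F_i * r_j.
    """
    sigma_ext = np.zeros((2, 2))
    for F, r in zip(contact_forces, branch_vectors):
        sigma_ext += np.outer(np.asarray(F), np.asarray(r))  # F_i * r_j for this contact
    return sigma_ext / grain_area
```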
When both and ext are determined, the unbalanced stress to be used in the equations of motion of the previous section is simply given by: Contact algorithm and contact law Applying Eq. ( 47) requires the computation of contact forces between two grains (and possibly between a single grain and a boundary wall). This force should depend on the local contact conditions (based on quantities such as interpenetration distance, etc.) and on the constitutive behavior of the contacting grains. A contact algorithm is therefore necessary. Several algorithms were proposed in the literature to characterize the intersection between arbitrary ellipses [START_REF] Rothenburg | Numerical simulation of idealized granular assemblies with plane elliptical particles[END_REF][START_REF] Ting | A robust algorithm for ellipse-based discrete element modelling of granular materials[END_REF][START_REF] Ting | An ellipsebased discrete element model for granular materials[END_REF]. they were proposed in a classical DEM framework, they were optimized for the case of small interpenetrations, (48) and relied on the fact that the shape and size of the ellipses were considered as constant. s(t) = ext (t) -(t) In order to bring more generality to the method and to its future extensions, the first implementation of SDEM that is presented here is based on a discretization of the contour of each ellipse, with an angular parameterization of the reference (circular and un-rotated) shape of each grain (Fig. 5). For a given grain i of initial radius R i , a number of nodes N n is positioned on the contour of reference, parameterized by the angle k = [0, ,2 , … ,2 -] with an angular step = 2 ∕N n (a value of = 1 • is used in the present work). At a given time in a simulation, if the degrees of freedom of the grains i take the values x ci , y ci , i , 1i , 2i , i , the coor- dinates of these contour nodes become: With the reference coordinates: And with: The transformation defined by Eq. ( 49) allows for a rapid evaluation of the current coordinates of each node on the (49) x ki y ki = A ⋅ X ki Y ki + B (50) X ki = R i ⋅ cos k Y ki = R i ⋅ sin k (51) B = x ci y ci (52) A = a 11 a 12 a 21 a 22 (53) ⎧ ⎪ ⎨ ⎪ ⎩ a 11 = � 1 cos 2 a + 2 sin 2 a + 1 � cos - � 1 -2 � cos sin sin a 12 = � 1 -2 � cos sin cos - � 1 sin 2 + 2 cos 2 + 1 � sin a 21 = � 1 cos 2 a + 2 sin 2 a + 1 � sin + � 1 -2 � cos sin cos a 22 = � 1 -2 � cos sin sin + � 1 sin 2 + 2 cos 2 + 1 � cos contour of the grains. A simple intersection algorithm based on a piecewise linear approximation of the current contour of each grain is then applied. For any pair of contacting grains i and j , this allows for a rapid computation of key quanti- ties such as the normal gaps ni and nj (taken as negative is there is an interpenetration of the grains), the interpenetration areas S ij and S ji , and the two intersection points P and P ′ . The contact length L c is then taken as the distance PP ′ , defining a tangential vector ⃗ t = ����� ⃗ PP � ∕L c and its associated normal vector (Fig. 5). A representative point of contact P ij is also defined as the mid-point between P and P ′ , and used as the point of application of the contact force (in particular for the computation of C and ext ). Similar calculations are done for contacts between a grain and an infinite boundary wall. At that point, it is necessary to establish a relation between the local contact parameters and the contact force. 
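The contour discretization described above lends itself to a compact vectorized implementation. The sketch below computes the current positions of the contour nodes of a grain from its six degrees of freedom; it builds the affine map of Eqs. (49)-(53) as the product of a rotation by θ with (I + ε), where ε is the homogeneous strain tensor expressed in the grain frame, and expanding this product gives the coefficients a_ij of Eq. (53). This is only an illustrative sketch with assumed names, not the code of the reference implementation.

```python
import numpy as np

def contour_nodes(x_c, y_c, theta, eps1, eps2, alpha, R, n_nodes=360):
    """Current positions of the contour nodes of a grain (Eqs. 49-53)."""
    phi = np.arange(n_nodes) * 2.0 * np.pi / n_nodes
    ref = np.vstack((R * np.cos(phi), R * np.sin(phi)))      # reference coordinates, Eq. (50)

    c, s = np.cos(alpha), np.sin(alpha)
    eps = np.array([[eps1 * c**2 + eps2 * s**2, (eps1 - eps2) * c * s],
                    [(eps1 - eps2) * c * s, eps1 * s**2 + eps2 * c**2]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    A = rot @ (np.eye(2) + eps)                               # affine map, Eqs. (52)-(53)
    B = np.array([[x_c], [y_c]])                              # translation, Eq. (51)
    return A @ ref + B                                        # Eq. (49), shape (2, n_nodes)
```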
In classical DEM, the standard contact model relies on the concept of contact stiffness, which is usually seen as a numerical regularization of the non-penetration condition. Since it does not bear any physical meaning (except in a limited number of situations, e.g. [START_REF] Gu | DEM simulations of the small strain stiffness of granular soils: effect of stress ratio[END_REF][START_REF] Richefeu | Dissipative contacts and realistic block shapes for modeling rock avalanches[END_REF]), its value is determined in order to ensure limited interpenetrations under the expected average normal load on each contact. In SDEM, however, the non-penetration condition is not enforced, since large overlaps between the deformed grains are expected. Besides, the contact force is seen in this framework as the result of local deformations of the grains, and should therefore be related in some way to their constitutive models. While the Hertzian model is licit at small interpenetrations, its general assumptions are too restrictive to be applied to the present case, especially because of the large deformations of the grains around the contacts. In the general case, it is expected that no closed-form solution can be found for this force, but a simple heuristic model can be proposed under the present assumptions (elastic grains, no friction, no cohesion). We postulate that the contact-related deformation of the grain remains local, and is limited to an area defined by (i) the contact segment and (ii) a curve which is defined as Fig. 6 Sketch of the proposed contact model the symmetrical curve to the initial contour of the grain (i.e. as it would be in the absence of contact) but with a certain scaling factor (Fig. 6). The exact expression of these curves (which are portions of ellipses) does not need to be known. In this area, the deformation is postulated to be constant, and to occur only in the direction normal to the contact. It is thus called n . We also define the normal gap n as the average between the gaps computed for the two grains concerned with the contact: We then assume that the maximum dimension of the deformed area in the normal direction is a linear composition of the two characteristic dimensions of the contact, namely its length L c and its normal gap n . With these assumptions, the normal strain n in the deformed area is related to the ratio between the initial dimensions of the deformed area (in the direction normal to the contact) to its current dimensions. Considering the point of maximum interpenetration (Fig. 6), we can write: Considering linear elasticity and integrating the normal stress on the whole contact length, we finally get a repulsive normal force equal to: With a secant normal stiffness k n given by: where a and b are two constants. It is interesting to notice that this expression is independent on the grain radius. To determine a and b , numerical simulations of the isotropic compression of a deformable grain between four walls are performed with the code MELODY [START_REF] Mollon | D[END_REF]. The grain has a diameter equal to 1 and a unit Young modulus, and several values of the Poisson Coefficient (ranging between 0 and 0.5) are tested. This process is summarized in Fig. 7a. As observed in Fig. 7b, the grain shape flattens at the contact, but the stress level remains rather constant on the contact segment, as postulated in the proposed contact model. 
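A minimal sketch of this heuristic contact law is given below. It follows Eqs. (54)-(57), with the convention that the normal gap δ_n is negative at an active contact, and uses the values a = 0.58 and b = 2 obtained from the calibration described in the next paragraph; the guard on the denominator and all names are our own illustrative additions.

```python
def normal_contact_force(E, L_c, delta_ni, delta_nj, a=0.58, b=2.0):
    """Normal force of the heuristic SDEM contact model (Eqs. 54-57).

    delta_ni / delta_nj are the normal gaps computed for the two contacting grains,
    negative when the ellipses overlap. The returned value is negative at a contact;
    its magnitude is the repulsive force E * eps_n * L_c of Eq. (56).
    """
    delta_n = 0.5 * (delta_ni + delta_nj)          # Eq. (54), negative at an active contact
    if delta_n >= 0.0 or L_c <= 0.0:
        return 0.0                                 # no overlap, no force
    denom = a * L_c + (b - 1.0) * delta_n          # denominator of the secant stiffness, Eq. (57)
    denom = max(denom, 1e-12 * L_c)                # guard: stiffness diverges for very deep overlaps
    k_n = E * L_c / denom
    return k_n * delta_n                           # Eq. (56), F_n = k_n * delta_n
```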
At each time of each simulation, it is thus possible to measure the value of L c and to compute the value of n (by consid- ering the virtual circle that crosses the square box at the extremities of the contact segments). For given values of a and b , it is thus possible to apply the Eqs. (56-57) and to compare the obtained force with that provided by the simulation. This is done in Fig. 7c with a = 0.58 and b = 2 (i.e. with the same parameters for all values of the Poisson coefficient). The agreement is fairly good, except for low Poisson Coefficients (i.e. 𝜈<0.2 at very large compressive strains, where the model underestimates the normal contact force by up to 15% (Fig. 7d). Additional tests performed with different values of the radius and of the Young modulus (not reported here) confirmed this agreement. This is quite remarkable because the proposed contact model is very (54) 𝛿 n = 𝛿 ni + 𝛿 nj 2 < 0 (55) n = n a ⋅ L c + b ⋅ n -n (56) F n = E ⋅ n ⋅ L c = k n ⋅ n (57) k n = E ⋅ L c a ⋅ L c + (b -1) ⋅ n simplistic and relies on assumptions which are obviously wrong. It nevertheless offers a simple way to estimate the repulsive force between deformable grains with large interpenetration, at a very small computational cost. Solver, stability, damping, and incompressibility In order to integrate in time the equations of motion summarized in Eq. ( 38), a time step Δt is chosen, and an explicit solver is employed in the same manner as classical DEM. Considering for example the degree of freedom x c , we apply at each time step the following algorithm: Analogous formulations are used for the five other degrees of freedom. Since this solver is explicit, the question of its stability is essential. A critical point is the degree of freedom , because its associated generalized mass M , provided in Eq. ( 29), depends on the current state of deformation of the grain. More importantly, in the circular state of reference of each grain, this generalized mass vanishes to 0. It means that, for very small (in absolute value) levels of deformation 1 -2 , this mass is extremely small, and requires a very small time step. This is clearly a waste of computational resources, since the parameter has only a very small influence on the grain shape when the deviatoric deformation is low. A simple way to bypass this limitation is to ignore the generalized force F is the mass M is too small compared to the generalized mass of the other deformationrelated degrees of freedom. Hence, at each time step and for each grain, the following condition is checked: where k is a user-defined constant. If this condition is not verified, the force F receives the value 0 before applying Eqs. (58-59), which prevents any instability. Typical values of k of the order of 10 -3 to provide satisfactory results. An alternative approach is to apply a constant mass scaling to M (i.e. to multiply it by a factor 100, for example), but it is only licit in quasi-static situations (i.e. when the dynamic response of this degree of freedom is unimportant), and it does not solve the singular case M = 0 (when 1 = 2 , for example in the initial circular state). In the illustrative simulations described in the next section, a combination of both techniques was implemented. Another condition to ensure stability is to introduce a certain amount of damping to dissipate kinetic energy. 
In classical DEM this is introduced in the contact law, and this (58) ̇xc ← ̇xc +Δt ⋅ F x (t) M x (59) x c ← x c +Δt ⋅ ̇xc (60) M 𝛼 > k 𝛼 ⋅ M 𝜀1 approach is reproduced in the present study. We thus add to each contact force a damping term with the expression: where is a damping coefficient, k n is the secant contact stiffness provided in Eq. ( 57), M i and M j are the masses of the two grains into contact, and δn is the time-derivative of the normal gap (computed by numerical derivation based on the previous time step). In addition, we must apply a damping on the degrees of freedom involving a deformation of the grains (i.e. 1 , 2 , and ) in order to attenuate their free oscillations in the absence of contact. Hence, we add the following terms to the generalized forces of 1 , 2 , and respectively (Eqs. ( 21) and ( 36)): quasi-static simulations, a value = 1 proved satisfactory. In many practical and academic situations, soft grains are considered as incompressible (or at least, their compressibility is negligible compared to their deformability). This is problematic, because incompressibility is notoriously difficult to introduce in simulated elastic media (may it be in a discrete or in a continuous framework), due to the divergence of the bulk modulus towards infinity. A common solution for this issue is to introduce a Poisson coefficient close to-but not to-0.5. Typical values could be 0.49, 0.499, etc. This is analogous to a numerical penalization of any shift from perfect incompressibility. This solution is but presents two drawbacks: incompressibility is only enforced in an way, and it strongly reduces the critical time step of explicit solvers (because the bulk modulus, although not infinite, is very high). In the SDEM framework introduced here, this drawback can be circumvented by applying a simple technique. It consists in applying a Poisson coefficient strictly equal to 0.5, but in computing the internal stress field based only on the deviatoric part of the strain tensor. Hence, Eqs. (42-43) are replaced by: (61) F damp =-𝛾 ⋅ √ √ √ √ k n 1 M i + 1 M j ⋅ δn (62) F 1,damp =-𝛾 ⋅ M 1 ⋅ ̇𝜀 1 (63) F 2,damp =-𝛾 ⋅ M 2 ⋅ ̇𝜀 2 (64) F 𝛼,damp =-𝛾 ⋅ M 𝛼 ⋅ ̇𝛼 (65) 1 (t) = 2 ⋅ 1dev (t) This avoids any problem related to the singularity of . To enforce incompressibility, though, it is necessary to keep the volume of the grains constant. Hence, after updating the deformation of the grain in Eqs. (58-59) based on the stress tensor of Eqs. (65-66), an additional isotropic deformation should be added to the grain. The added volumetric strain Vadd should verify (for a given grain i in contact with several grains j): Which yields: We then simply add Vadd ∕2 to 1dev and 2dev before starting a new time step. Validation Isotropic compression In order to check that the proposed method offers a satisfactory representation of the mechanical response of collections of deformable grains, we first consider the isotropic compaction of two samples of ~ 1000 grains. The sample A follows (66) 2 (t) = 2 ⋅ 2dev (t) (67) R 2 1 + 1dev + Vadd 2 1 + 2dev + Vadd 2 - ∑ j S j = R 2 (68) Vadd =-1dev -2dev -2 + � � � � � 1dev + 2dev + 2 � 2 -4 � � 1 + 1dev �� 1 + 2dev � -1 - ∑ j S j R 2 � Fig. 
8 Comparison with the analytical solution of [START_REF] Cantor | Compaction model for highly deformable particle assemblies[END_REF] for incompressible grains 13 a rather narrow Gaussian size distribution, with an average radius of 0.012, a standard deviation of 0.0024 (coefficient of variation of 0.2), and lower and upper size cut-offs at 0.008 and 0.016 respectively. The sample B follows a fractal distribution of fractal dimension 2, with lower and upper cut-offs at 0.004 and 0.038 respectively. Both samples are frictionless, with a Young's modulus equal to 1, and a unit density. They are positioned between four rigid walls forming a unit square box and submitted to a strain-driven isotropic compaction in quasi-static conditions (i.e. at a slow enough pace to avoid any inertial effect). It was recently shown in [START_REF] Cantor | Compaction model for highly deformable particle assemblies[END_REF] that an analytical solution could be inferred for this kind of physical system in the case of incompressible grains. The mean stress in the sample was indeed reported to be well described by the following expression, E is the Young's modulus of the grains, is the current solid fraction, Z 0 and 0 are the coordination number and the solid fraction at the jamming transition, and max is the maximum solid fraction that can be attained in the (this value is reported to depend mostly on the friction coefficient between the grains, and is usually very close to 1). The pre-factor k and the exponent a are inherited from Zrelations reported in various studies [START_REF] Andreotti | Granular media, between fluid and solid[END_REF][START_REF] Vu | Numerical simulations of the compaction of assemblies of rubberlike particles: a quantitative comparison with experiments[END_REF][START_REF] Nezamabadi | Parallel implicit contact algorithm for soft particle systems[END_REF] and take the values 5.1 and 0.5 respectively. The value of b was fitted in [START_REF] Cantor | Compaction model for highly deformable particle assemblies[END_REF] based on single-grain compaction tests and is equal to 0.12. Figure 8 shows a comparison between the n -curves obtained with SDEM and with this analytical solution. For this comparison, the values of Z 0 and 0 are first measured at the jamming point for both samples (equal to 4.20 and 0.833 respectively for sample A, 4.22 and 0.858 respectively for sample B), and the value of max is set to 1. The agreement both approaches is very satisfactory. To validate SDEM with different values of the Poisson's coefficient, however, a different approach must be used since the expression of Eq. ( 69) is only valid for = 0.5 . Simulations of isotropic compactions are thus performed on the same samples and in the same conditions with the code MELODY, using the Multibody Meshfree Approach. Qualitative results are provided in Fig. 9 for both samples, in the case = 0.3 . Four loading stages (provided in Fig. 10) are considered. Zoomed views on the same area of the sample are provided, and the grains are represented with a color scale corresponding to the local value of the mean (69) n =-E ⋅ b 2 ⋅ Z 0 + k -0 a ⋅ ln max - max -0 stress 1 + 2 ∕2 . The agreement between both numerical approaches is acceptable, both in terms of mean stress values (although this quantity is constant in each grain for SDEM while it is allowed to vary spatially in MELODY thanks to a much finer discretization) and in grains motions and deformations. 
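Two of the expressions used in this section can be made explicit in code. First, the volumetric correction of Eqs. (67)-(68) described earlier amounts to taking the positive root of a quadratic equation at the end of every time step. The sketch below is our reading of that step; in particular, the normalisation of the contact areas S_j by R**2 follows the garbled Eq. (67) literally and may need adjusting against the actual implementation.

```python
from math import sqrt

def volumetric_correction(eps_1dev, eps_2dev, S_contacts, R):
    """Added volumetric strain restoring the reference area of a grain (Eqs. 67-68).

    eps_1dev, eps_2dev : deviatoric principal strains of the grain
    S_contacts         : areas S_j removed by the contact segments, normalised
                         consistently with Eq. (67), i.e. relative to R**2
    R                  : radius of the circular reference state
    """
    s = sum(S_contacts) / R**2
    A = 1.0 + eps_1dev
    B = 1.0 + eps_2dev
    # Eq. (67) is quadratic in eps_Vadd/2; Eq. (68) is its positive root:
    return -(A + B) + sqrt((A + B) ** 2 - 4.0 * (A * B - 1.0 - s))

eps_Vadd = volumetric_correction(0.05, -0.05, S_contacts=[0.01, 0.02], R=1.0)
# the correction is then split equally between the two principal directions
print(eps_Vadd, 0.05 + eps_Vadd / 2.0, -0.05 + eps_Vadd / 2.0)
```

Second, the analytical solution of Eq. (69) used for the comparison of Fig. 8 can be evaluated directly once Z_0 and phi_0 have been measured at the jamming point. The snippet below encodes our reading of the garbled expression, in which the bracketed term is the coordination-number relation Z(phi) = Z_0 + k*(phi - phi_0)**a; a factor of the solid fraction may have been lost in extraction and should be checked against the original reference. Defaults correspond to sample A.

```python
import numpy as np

def compaction_stress(phi, E=1.0, b=0.12, k=5.1, a=0.5,
                      Z0=4.20, phi0=0.833, phi_max=1.0):
    """Analytical mean stress during isotropic compaction, as we read Eq. (69):
    sigma_n = -(E*b/2)*[Z0 + k*(phi-phi0)**a]*ln[(phi_max-phi)/(phi_max-phi0)]."""
    phi = np.asarray(phi, dtype=float)
    Z = Z0 + k * np.clip(phi - phi0, 0.0, None) ** a
    return -0.5 * E * b * Z * np.log((phi_max - phi) / (phi_max - phi0))

print(compaction_stress([0.85, 0.90, 0.95, 0.99]))
```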
The progressive closure of the void space and the appearance of straight contact segments between grains at larger compaction is also well-rendered, even when the contacting grains have very different radii (Sample B). At loading stage d. (i.e. for a mean stress applied on the sample close to 0.3), the saturation of the sample is almost complete, and the stress level in the grains is slightly over-estimated by SDEM when compared with MELODY. This is probably related to the decrease in accuracy of the simple contact law calibrated in Fig. 7 when the interpenetration becomes too large. It may be corrected in future versions of the method with more accurate contact models. Figure 10 covers the cases = 0.3 and = 0.5 , and pro- vides a quantitative view on this comparison. As shown in the upper part of the figure, SDEM is able to provide a good prediction of the compressibility of all samples, although this compressibility is slightly under-estimated at low pressures (i.e. 𝜎 n < 0.01 ) and slightly over-estimated at large pressures (i.e. 𝜎 n > 0.1 ). The lower part of the figure con- firms that SDEM can also bring fair estimates of microstructural features of the samples, as it provides good first-order evaluations of the evolution of the coordination number Z n during compaction. The prediction of Z is especially good for = 0.3 , but a bit too large for low values of n in the case of incompressible grains. It is interesting to notice that the quality of the SDEM approximation is not degraded by the presence of a very broad size distribution of the grains (Sample B). Oedometric compression In order to check the validity of SDEM under a less isotropic loading, samples A and B are submitted to an oedometric compression test. Starting from the isotropic jamming state, the right-hand wall is progressively moved leftwards in quasi-static conditions (i.e. slowly enough to avoid inertial effects in the samples). This operation is performed both with the SDEM and with the Multibody Meshfree Method in MELODY. The whole process is illustrated in Fig. 11 in the case of Sample B. SDEM and MELODY results are qualitatively very similar, and the magnitude of the mean stress inside the grains is well reproduced by SDEM. During loading, with both approaches, we observe short events of reorganization of the samples, and the grains mostly keep a roughly circular shape (apart from the contact segments). We observe in Fig. 11e, h that some grains seem slightly elongated in the vertical direction, both with SDEM Fig. 9 Zoomed views on isotropic compression tests as performed on samples A and B by MELODY (Multibody Meshfree Approach) and SDEM for ν=0.3 . Loading stages a to d correspond to marks in Fig. 10 ◂ and MELODY, but this effect remains limited. Hence, despite the strongly deviatoric character of the loading path, the stress state in the sample does not seem to deviate much from a spherical stress state. This observation is to be attributed to the absence of friction and to the deformability of the grains. These properties simplify their relative motions and allow them to relax their internal stress field through simple changes in the granular configuration of the sample. It is likely that the presence of intergranular friction will lead the grains to adopt shapes more similar to those represented in Fig. 1b, although this will have to be confirmed when a more advanced version of the method includes this feature. 
Figure 12 provides quantitative results provided by SDEM and MELODY during the simulations of samples A and B. The horizontal xx and yy in the sample are computed based on the forces applied on the rigid walls. This figure shows that the of these stresses during loading and their stabilization at the end of the loading are well-reproduced by SDEM when compared to MELODY, with a relative error which remains below 10%. This figure also shows that the horizontal (in the direction of loading) and vertical (perpendicular to loading) stresses have the same order of magnitude at any loading state, but that the horizontal stress remains slightly larger than the vertical one. It means that both samples can develop a certain amount of deviatoric stress. This is confirmed in Fig. 13, which plots the deviatoric stress xxyy ∕2 against the mean stress xx + yy ∕2 in both samples, as predicted by SDEM and MELODY. Data appear to be scattered, but some trends clearly appear. Both samples are able to develop a deviatoric stress, which increases with the mean stress. The relation between mean and deviatoric stress seems to follow a power law, although this would have to be confirmed by more careful and dedicated simulations. Both SDEM and MELODY predict that sample B presents a resistance to shearing slightly larger than sample A. In general, it appears that SDEM offers a satisfactory prediction of the oedometric response of soft granular samples, at least under the assumptions of the present study. Conclusion and perspectives The Soft Discrete Element Method introduced in this work is based on simplified kinematics of soft grains submitted to large contact forces. The main physical assumptions are a constant strain field within each grain (leading initially circular grains to deform into ellipses) and straight contact segments between contacting grains. To close the equations of motion corresponding to these kinematics, a simplistic contact model is proposed in the frictionless case, based on simulations of single-grain compaction. The whole problem is integrated in time by an explicit solver, and solutions are proposed to ensure stability and incompressibility and to introduce damping. The general approach is successfully validated in the cases of isotropic and oedometric compactions, by comparison with an analytical formula and a more accurate numerical approach, for various Poisson's coefficients and grains size distributions. The conceptual simplicity of this method might make it possible to implement it in existing DEM codes. The main interest of this novel technique when compared to the current state of the art in soft grains modelling lies in the computation time. If we consider a single grain, a reasonable discretization in the Multibody Meshfree Framework requires about 100-150 field nodes, which corresponds to 200-300 degrees of freedom. The critical time step in such a system is driven by the nodes located at the boundaries of the grains, which are characterized by a low mass (because discretization is generally refined in the vicinity of the grains contour) and large stiffness (because the non-interpenetration condition with any contacting body requires a large contact stiffness). It is therefore necessary to adopt very low time steps to ensure stability of the solver. On the other hand, the Soft Discrete Element Method only considers six degrees of freedom per grain, and can be integrated in time with much larger time steps since the contact stiffness is comparatively rather low (Eq. 57). 
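This cost argument can be made concrete with a back-of-envelope estimate. The numbers below are purely illustrative stand-ins (only the degree-of-freedom counts come from the text; the node masses and stiffnesses are placeholders of ours), and the result is an order-of-magnitude reasoning, not a measured benchmark.

```python
from math import sqrt

# Rough cost comparison per grain between a finely discretized grain and SDEM.
dof_meshfree, dof_sdem = 250, 6                 # ~200-300 DoF vs 6 DoF (from the text)
m_boundary_node, k_contact = 1.0e-2, 1.0e3      # light boundary node, stiff contact penalty (illustrative)
m_grain, k_secant = 1.0, 1.0                    # whole grain on the soft secant stiffness of Eq. (57) (illustrative)

dt_meshfree = sqrt(m_boundary_node / k_contact) # stability-limited time step, fine discretization
dt_sdem = sqrt(m_grain / k_secant)              # stability-limited time step, SDEM

cost_ratio = (dof_meshfree / dof_sdem) * (dt_sdem / dt_meshfree)
print(cost_ratio)   # several orders of magnitude, consistent with the estimate given next
```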
After implementation in an efficient and optimized code, CPU costs are thus expected to drop by 2 to 4 orders of magnitude when using SDEM instead of more finely discretized techniques. Obviously, this gain comes at the expense of a reduced accuracy at the local scale and of a questionable relevance in fully dynamic conditions. A basic version of the method was presented in this paper, but it is important to stress that this framework is particularly versatile. Future extensions of SDEM may indeed include more complex constitutive behaviours for the grains deformability (such as hyperelasticity, plasticity, etc.) and more complex contact models (friction, cohesion, capillary adhesion, etc.). Kinematics could be either enriched or degraded: if we call the present approach the D2F3-SDEM (2 dimensions, 3 DoF in deformation), standard DEM might then be called D2F0-SDEM. Other versions such as D2F1-SDEM (volumetric deformation only) or D3F6-SDEM could be imagined. A 3D extension seems indeed possible, since the computation of stress and strains tensors will be straightforward. It will however require specific algorithms for rapid computation of large interpenetrations between ellipsoids, and a calibration of contact laws with an accurate 3D code including contacts, in the manner of Fig. 7. A natural and simple extension would also consist in considering ellipses instead of discs, i.e. to modify the elastic state of reference for each body. With such features, SDEM can be expected to have an interesting potential for upscaling simulations of soft grains samples in granular science, complex fluids rheology, soft matter physics, geophysics, and tribology. Fig. 1 1 Fig. 1 Close views on numerical experiments performed on soft grains with the Multibody Meshfree Approach; a Frictionless isotropic compaction; b Isochoric deviatoric loading at maximum compaction with friction Fig. 3 a 3 Fig. 3 a, b Difference in contact conceptualization between DEM (rigid discs with "point" contact represented by a very limited interpenetration) and SDEM (deformed grains with large deformations Fig. 4 4 Fig.[START_REF] Mollon | Can friction replace roughness in the numerical simulation of granular materials?[END_REF] Differences between the degrees of freedom θ (rigidbody rotation) and α (change in the orientation of the principle directions of strain) for ε 1 = 1 and ε 2 = 0 . Note in particular the motion of a given material point when each degree of freedom is continuously increased from 0° to 90° (black arrows) Fig. 5 a 5 Fig. 5 a Change of coordinates from the reference to the current configuration =-30 • , = 75 • , 1 = 0.3, 1 = 0.1 ; b Local contact quantities computed based on piecewise-linear contours Fig. 7 7 Fig. 7 Calibration of the contact model; a Simulation of single grain isotropic compression; b Fields of mean stress in the grain at different compression stages and for Poisson Coefficients of 0.4, 0.2, and 0 (from left to right); c Normal force in each grain-wall contact as a Fig. 10 10 Fig. 10 Comparison with Multibody Meshfree Approach numerical solutions. Marks a to d are loading stages represented in Fig. 9 Fig. 11 11 Fig. 11 Oedometric compression of sample B with SDEM: a Initial state (isotropic jamming point); b Final state; c-e Zoomed views (black square area in a and b) at t = 5, t = 15, and t = 25, respectively; f-h Same views of the same system simulated with MELODY Fig. 12 Fig. 13 1213 Fig. 
Fig. 12 Oedometric compression test results: a Strain-driven loading history; b, c Horizontal and vertical stresses in sample A; d, e Horizontal and vertical stresses in sample B

Acknowledgements The authors acknowledge that this study contains original material, as a result of a purely academic study without any kind of private funding or conflict of interest. Its publication has been approved tacitly by the responsible authorities at the institutes where the work has been carried out.

Declarations

Conflict of interest The authors declare that they have no conflict of interest.
03660590
en
[ "spi" ]
2024/03/04 16:41:18
2020
https://hal.science/hal-03660590/file/Mollon2020.pdf
Guilhem Mollon email: [email protected] Periodic instationarities of granular flows in conical hoppers Keywords: Granular flow, Periodic variations, Hopper flow, DEM Granular flows through converging sections such as conical hoppers have been reported to be submitted to instationarities, which in certain circumstances can appear to be organized and periodic. In this paper, we explore this phenomenon by conducting discrete element modelling simulations of a 3D gravity-driven hopper flow and varying a large number of parameters such as hopper geometry and granular sample properties. Dedicated postprocessing techniques are developed and used to investigate the spatial and temporal patterns of these instationarities and to bring some understanding on the physics of this spontaneous phenomenon. Numerical results show that a clear structure appears for these instationarities, under the form of rapidly propagating waves relating variations in velocity magnitude and coordination number. While very faint, periodic variations of the sample density are also detected. The parametric study reveals that the self-organization of these variations requires a narrow set of conditions in terms of hopper geometry and intergranular contact friction coefficient. Introduction Hopper flow is a common problem in physics of granular flows [START_REF] Forterre | Flows of dense granular media[END_REF]. It consists in the gravity-driven flow of a granular assembly through a converging channel, and the classic hourglass problem is a good example of this class of situations. Besides its theoretical interest, this problem also has a large number of practical applications since converging flows are used in a wide variety of industries manipulating powders and grains [START_REF] Cleary | DEM modelling of industrial granular flows: 3D case studies and the effect of particle shape on discharge[END_REF][START_REF] Wan | Influence of geometrical and material parameters on flow rate in simplified ADS dense granular-flow target: a preliminary study[END_REF][START_REF] Jia | Fluctuation and arching formation of very dense and slow pebble flow in a silo bed[END_REF][START_REF] Pizette | DEM GPU studies of industrial scle particle simulations for granular flow civil engineering applications[END_REF]. It has therefore attracted a large attention from the scientific community, both from the practical and academic viewpoints. 
Theoretical [START_REF] Nguyen | Gravity flow of granular materials in conical hoppers[END_REF][START_REF] Bazant | A theory of cooperative diffusion in dense granular flows[END_REF][START_REF] Hendy | Instabilities in granular flows[END_REF][START_REF] Sun | Radial hopper flow prediction using constitutive model with microstructure evolution[END_REF], experimental [START_REF] Michalowski | Flow of granular material through a plane hopper[END_REF][START_REF] Baxter | Pattern formation in flowing sand[END_REF][START_REF] Choi | Velocity profile of granular flows inside silos and hoppers[END_REF][START_REF] Gardel | Forcevelocity correlations in a dense, collisional, granular flow[END_REF][START_REF] Gardel | Dynamical fluctuations in dense granular flows[END_REF][START_REF] Gentzler | Measurement of velocity and density profiles in discharging conical hoppers by NMR imaging[END_REF][START_REF] Vivanco | Dynamical arching in a two dimensional granular flow[END_REF][START_REF] Villagran Olivares | Towards a one parameter equation for a silo discharging model with inclined outlets[END_REF] and numerical [START_REF] Ristow | Density patterns in two-dimensional hoppers[END_REF][START_REF] Potapov | Computer simulation of hopper flow[END_REF][START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF][START_REF] Magalhães | Analysis of the velocity field of granular hopper flow[END_REF][START_REF] Mollon | Granular flow fluctuations in a conical hopper[END_REF] studies have been reported using a number of different tools. The most common quantity of interest that is extracted from such studies is the hopper discharge rate, i.e. the amount of matter passing through the hopper outlet in a certain amount of time. Contemporary studies use the hopper flow configuration to investigate such complex topics as avalanching [START_REF] Wang | Velocity profiles of avalanches during hopper discharge[END_REF], self-structuration of the granular fabric triggered by the converging flow [START_REF] Guo | Enhancing the linear flow of fine granules through the addition of elongated particles[END_REF], shape-induced flow regimes [START_REF] Szabo | Flow of anisometric particles in a quasitwo-dimensional hopper[END_REF], or phase change (i.e. jamming) of the granular sample [START_REF] Tang | How granular materials jam in a hopper[END_REF]. Despite the simplicity of its geometry, the hopper flow gives rise to fairly complex behaviors, and one of the most mysterious is the spontaneous emergence of periodic flow instationarities. Such instationarities, sometimes called fluctuations, have been reported experimentally [START_REF] Baxter | Pattern formation in flowing sand[END_REF][START_REF] Gardel | Forcevelocity correlations in a dense, collisional, granular flow[END_REF][START_REF] Gardel | Dynamical fluctuations in dense granular flows[END_REF][START_REF] Vivanco | Dynamical arching in a two dimensional granular flow[END_REF] and numerically [START_REF] Ristow | Density patterns in two-dimensional hoppers[END_REF][START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF][START_REF] Mollon | Granular flow fluctuations in a conical hopper[END_REF]. 
In particular, a numerical study accounting for realistic sand grains shapes but limited to 2D kinematics [START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF] showed that these periodic instationarities could be described as waves propagating upstream, triggered by the build-up of force chains from the hopper walls. In the present study, we enrich this numerical framework by adding a third dimension and deploying appropriate statistical methods to extract knowledge on this phenomenon. Section 2 presents the numerical framework, including the postprocessing techniques, and describes the main flow parameters for a reference case. Section 3 provides a detailed analysis of the instationarities detected for this reference case, and Sect. 4 adds a parametric study involving a large number of simulations and systematically investigating several parameters of the system. Discussion of the results and proposals for future research avenues are presented in Sect. 5. Numerical framework Simulations principles and parameters The Discrete Element Method (DEM, [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF]) has now become a very common investigation tool in granular science, both for academics and practitioners, and is used in a large variety of scientific fields such as geophysics [START_REF] Mollon | Discrete modelling of rock avalanches: sensitivity to block and slope geometries[END_REF] or tribology [START_REF] Mollon | A numerical framework for discrete modelling of friction and wear using Voronoi polyhedrons[END_REF]. It is based on an explicit integration in time of the Newtonian dynamics of each one of the bodies composing the granular assembly. In addition to a possible gravity field, these bodies are submitted to interaction forces that arise when they enter into contact with each other. Various phenomenological contact laws can be used, but we will stick in the present study to the standard contact model of DEM. It includes contact stiffnesses in the normal (k n ) and tangential ( k t ) directions, which can be seen either as purely numerical regularization parameters or as a proxy for the local deformations of the contacting bodies. In the latter case, non-linear stiffnesses reproducing Hetrz-Mindlin theory can be used. In the present study, however we remain in the first interpretation of these parameters, and use k n = k t = constant . The standard model also includes two modes of energy dissipation: a classical dashpot introducing a damping in the normal direction, and a Coulomb friction coefficient in the tangential direction. In this paper we perform DEM simulations of perfectly conical hopper discharge, with various sample properties and hopper geometries. All the granular samples are composed of 50,000 spherical grains with a mean diameter D = 1 . Since the study is dimensionless, this quantity will be used as a length unit. The size distribution of the grains around this mean value is homogeneous, with coefficients of variation COV D (i.e. ratio of the standard deviation to the mean value) of D varying from 0% (perfectly monodis- perse sample) to 80% (strongly polydisperse sample). The opening diameter L of the hopper varies between 6D and 14D , and the hopper angle to the horizontal varies from 40 • to 80 • (Fig. 1). The gravity is kept constant and equal to 10, and the density of the grains is equal to 1. 
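For readers unfamiliar with this "standard model", the sketch below shows a generic linear spring-dashpot-Coulomb contact law of the kind described above. It is not the exact code used in the study; the function signature and the critical-damping scaling of the dashpot are our assumptions.

```python
from math import sqrt

def dem_contact_force(delta_n, vn_rel, delta_t, kn, kt, mu, xi, m_eff):
    """Generic linear spring-dashpot-Coulomb contact law (illustrative sketch).

    delta_n : normal overlap (> 0 when grains interpenetrate)
    vn_rel  : normal relative velocity at the contact
    delta_t : accumulated tangential elongation of the contact spring
    kn, kt  : normal and tangential stiffnesses (equal in this study)
    mu      : Coulomb friction coefficient
    xi      : damping ratio (1.0 = critical damping); this scaling is assumed by us
    m_eff   : effective mass m_i*m_j/(m_i + m_j) of the contacting pair
    """
    if delta_n <= 0.0:
        return 0.0, 0.0
    c_n = 2.0 * xi * sqrt(kn * m_eff)             # normal dashpot coefficient
    F_n = max(kn * delta_n - c_n * vn_rel, 0.0)   # elastic repulsion + viscous term, no adhesion
    F_t = -kt * delta_t                           # tangential elastic force
    F_t = max(-mu * F_n, min(mu * F_n, F_t))      # Coulomb threshold |F_t| <= mu*F_n
    return F_n, F_t

print(dem_contact_force(1e-3, 0.0, 5e-4, kn=1e5, kt=1e5, mu=0.4, xi=0.2, m_eff=0.26))
```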
Hence, a given simulation is defined by 6 parameters, and the reference simulation corresponds to COV D = 40% , L = 10D , = 70 • , k n = 10 5 , = 0.4 , and = 0.2 . The results of this simulation are detailed in the next section. The time step is constant and equal to 10 -5 time units for all the simulations reported here. The only characteristic times of the system are related to free fall and to single-grain harmonic oscillations, and are respectively equal to: T g = √ g∕D ≈ 0.316 T k = √ 6k n ∕ D 3 ≈ 0.00229 Postprocessing techniques The main quantity of interest when studying hopper flow is the discharge rate, since it is of primary importance regarding the practical use of such a system for engineering applications. Figure 2 shows the evolution in time of the proportion of granular material passed through the hopper outlet, as predicted by the reference simulation. The passing rate is constant, in good agreement with existing theories and observations. This is also the case for all the other simulations described in this work. Since this paper is primarily interested in granular flow instationarities, a dedicated postprocessing technique is needed in order to follow the evolution of quantities which vary both in space and time. First, in order to avoid a drift in the average values of local flow parameters, the focus is put on a short time interval used as an observation window. As shown in Fig. 2, this interval runs from 5 to 7 time units after opening of the hopper, in order to make sure that the discharge flow has reached a steady state (in some simulations described in the next sections, a different starting time was chosen for the same reason). This time interval is short enough to consider that the mean flow parameters do not change much, but long enough to observe a statistically significant number of fluctuations. A major difficulty is the fact that relevant quantities are attached either to one grain (velocity and coordination number, i.e. number of contacting neighbors) or to a collection of grains (solid fraction). This is a logical consequence of the Lagrangian framework used to describe the problem, but renders direct interpretations difficult because these grains (or groups of grains) are in constant motion. This issue was already tackled in several papers with various mathematical approaches [START_REF] Goldhirsch | Stress, stress asymmetry and couple stress: from discrete particles to continuous fields[END_REF][START_REF] Weinhart | Closure relations for shallow granular flows from particle simulations[END_REF]. In this paper, a specific technique is used in order to transfer these quantities to fixed coordinates and study the flow from an Eulerian perspective. Grain-attached quantities (velocity and coordination number) are interpolated at fixed locations between grains using moving least squares [START_REF] Fasshauer | Meshfree approximation methods with matlab[END_REF], in a spherical radius equal to 5 distance units around the desired location (it is important to notice that a certain amount of spatial averaging is included in the process). 
And a local solid fraction at a given fixed location (x, y, z) of space is computed by a weighted averag- ing technique which complies with the principle of mass conservation: In this expression, V i corresponds to the volume of the grain i , r i (x, y, z) corresponds to the distance from the center of this grain to the location (x, y, z) , and the summation is executed on all the grains i located in a sphere of radius d = 8D around this location, which thus controls the cut-off interpolation distance. These post-processing techniques are used to extract and analyze the evolutions of velocity, coordination number and solid fraction at various locations, with a time step of 0.002 time units. A 3D Eulerian grid is used for that purpose, as shown in Fig. 2: 330 points are defined in a given vertical half-plane containing the symmetry axis of the hopper, and 8 such half-planes are used in order to cover the whole system. F s (x, y, z) = d 2 3∕2 ⋅ ∑ V i e -r i (x,y,z)∕d Mean fields Average fields of vertical and radial velocities, of coordination number and of solid fraction are plotted in Fig. 3 (in one half-plane, after averaging in the circumferential direction), as well as the corresponding time-related standard deviations of these quantities, for the reference simulation. In terms of time-average, the radial component of the velocity field is small compared to the vertical one, which progressively increases in norm when getting closer to the hopper outlet. The coordination number and the solid fraction are almost constant in the hopper (close to 3.5 and 0.45 respectively), except in the neighborhood of the inlet and of the outlet of the hopper where these values drop because of reduced local confining stress. These behaviors are consistent with common hopper flow observations (e.g. [START_REF] Vivanco | Dynamical arching in a two dimensional granular flow[END_REF][START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF]). The time-variability of the velocity field presents some interesting spatial patterns. Two areas are observed where velocity has important variations in time: the area located close to the outlet (because of local agitation related to the change of flow regime when the grains are released from the hopper confinement) and a broad area located in the upper half of the hopper, starting about 25 -30D above the outlet and extending up to the inlet. There is also a large time variability of the coordination number in the same area, while the solid fraction variability is very low in the whole sample. 3 Analysis of the instationarities Detection In order to describe quantitatively the flow instationarities, we perform the same postprocessing approach on a vertical profile corresponding to the axis of revolution of the hopper, in the reference case. The sampling time-step remains equal to 0.002 time units, and the sampling step in space along the vertical axis is equal to 0.01D . We therefore obtain data under the form Q(t, Y) where Q is velocity magnitude, coordination number, or solid fraction, and where t and Y denote time (with a zero corresponding by convention to the beginning of the observation time window, for the remainder of the present study) and vertical position (with a zero corresponding to the hopper outlet). These data are represented as space-time diagrams in the left-hand column of Fig. 4. Some organized fluctuations clearly appear in the diagrams of velocity and coordination number (but not in the diagram of solid fraction). 
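Stepping back to the postprocessing step for a moment, the mass-conserving weighted average used for the solid fraction can be implemented in a few lines. The sketch below assumes a Gaussian kernel exp(-r**2/d**2) normalised by (pi*d**2)**1.5, which is one plausible reading of the garbled expression for F_s (the exponent and the normalisation should be checked against the original); the function name and the brute-force loop are ours.

```python
import numpy as np

def solid_fraction_field(points, centers, volumes, d):
    """Eulerian solid-fraction field by mass-conserving Gaussian weighting.

    points  : (M, 3) fixed Eulerian locations
    centers : (N, 3) grain centres
    volumes : (N,)   grain volumes
    d       : kernel width / cut-off distance (8 D in the text)
    """
    norm = (np.pi * d**2) ** 1.5              # integral of the kernel over space
    fs = np.empty(len(points))
    for i, p in enumerate(points):
        r2 = np.sum((centers - p) ** 2, axis=1)
        fs[i] = np.sum(volumes * np.exp(-r2 / d**2)) / norm
    return fs

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 10.0, size=(500, 3))
volumes = np.full(500, np.pi / 6.0)           # unit-diameter spheres
print(solid_fraction_field(np.array([[5.0, 5.0, 5.0]]), centers, volumes, d=2.0))
```

With these Eulerian fields in hand, we can return to the organized fluctuations visible in Fig. 4.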
It is however difficult to read and quantify them, because they are superimposed to a mean field which is spatially varying (as well as its time-variability, as shown in Fig. 3). This is especially the case for the velocity, which progressively increases when getting closer to the outlet. For this reason, quantities are locally normalized, using the following formula: In this expression, Q(t, Y) is the normalized quantity, Q(t, Y) is the initial quantity, Q (Y) is the time-average of Q at the location Y , and Q (Y) is the corresponding time-related standard deviation. Normalized velocity, coordination number, and solid fraction are plotted on space-time diagrams on the right-hand column of Fig. 4. Fluctuation patterns are much clearer in that normalized form: for both velocity and coordination number, they take the form of periodic oscillations around the mean value, which seem to be strongly synchronized in the vertical directions. More precisely, the lower part of the hopper does not seem to be subjected to any organized pattern, but above a certain height (which is close to Y = 15D for the velocity and Y = 10D for the coordination number), fluctuations acquire a structure which remains the same up to the hopper inlet. The case of solid fraction is more complex: normalized solid fraction seems also to be submitted to periodic and synchronized oscillations in the upper part of the hopper, but these oscillations are fainter and are blurred by noisy descending patterns which may be related to the downward motion of undisturbed grains structures within the granular flow. Q(t, Y) = Q(t, Y) -𝜇 Q (Y) 𝜎 Q (Y) It is interesting to notice that the periods of oscillation of the three considered quantities are the same, indicating that these oscillations are part of the same physical phenomenon. In similar studies in 2D [START_REF] Vivanco | Dynamical arching in a two dimensional granular flow[END_REF][START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF], it was observed that such oscillations could be interpreted as waves emanating from the hopper outlet and travelling upwards in the hopper. The velocity of these waves was reported to be much greater than that of the flowing grains. This interpretation is debatable here since the oscillations described in Fig. 4 seem to be synchronized in space. At a first glance, indeed, they look like stationary oscillations and do not seem to exhibit any propagation pattern. However, this could be related to such waves travelling too fast (with respect to their time-period) for this propagation to be detectable visually, because the hopper is not high enough to allow for a sufficient propagation distance. A preliminary analysis of similar 3D simulations [START_REF] Mollon | Granular flow fluctuations in a conical hopper[END_REF] indeed revealed that the use of a lower value of the contact stiffness between grains was able to reduce the period of the oscillations and to slow-down their propagation, allowing a clear observation of the wave upwards travel. We can thus hypothesize that the oscillations which are described here result from the same phenomenon than in 2D [START_REF] Mollon | Characterization of fluctuations in granular hopper flow[END_REF], but with waves travelling much faster. Patterns in time Figure 5 shows an illustrative snapshot of the spatial fields of normalized velocity and coordination number in a vertical plane of symmetry of the hopper. 
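Before examining this snapshot, note that the local normalisation used throughout Figs. 4, 5, and 6 is a simple column-wise standardisation of each space-time diagram. A minimal sketch, assuming the diagram is stored as a (time x height) array:

```python
import numpy as np

def normalize_in_time(Q):
    """Column-wise normalisation of a space-time diagram Q(t, Y): each fixed-height
    signal is centred by its time average and scaled by its time standard deviation."""
    mu = Q.mean(axis=0, keepdims=True)       # mu_Q(Y)
    sigma = Q.std(axis=0, keepdims=True)     # sigma_Q(Y)
    return (Q - mu) / sigma

# Synthetic example: a mean velocity growing with height plus a small oscillation
t = np.arange(0.0, 2.0, 0.002)[:, None]
Y = np.linspace(0.0, 50.0, 101)[None, :]
Q = (1.0 + 0.1 * Y) * (1.0 + 0.05 * np.sin(2.0 * np.pi * t / 0.192))
Q_tilde = normalize_in_time(Q)
print(Q_tilde.shape, float(Q_tilde.mean()), float(Q_tilde.std()))
```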
This visualization reveals that such instantaneous fields exhibit some clear trends (for example in the vertical direction) but an important spatial variability. Hence, the extraction of meaningful patterns requires some specific procedures. For this purpose, we explore the autocorrelation structure of the time variations of the normalized quantities in Fig. 6. At a given location Y on the revolution axis of the hopper, a given quantity Q(t, Y) varies in time, and its autocorrela- tion function A Q (Δt, Y) can be computed as a function of the phase shift Δt by computing the correlation between Q(t, Y) and Q(t +Δt, Y) . Maps of this autocorrelation are plotted in Fig. 6 for velocity, coordination number, and solid fraction and reveal interesting patterns. For both velocity and coordination number, a very clear autocorrelation peak occurs for a phase shift Δt = T = 0.192 time units , indicating a strong periodicity. This is only the case, however, beyond a certain height above the hopper outlet (close to Y = 15D for the velocity and Y = 10D for the coordination number), in good accordance with the observations of Fig. 4. Below this height, the correlation structure disappears. Besides, a similar map of solid fraction does not show any clear autocorrelation peak, indicating that the faint periodic signal observed in Fig. 4 is not strong enough to be revealed by this procedure. It is interesting to notice that the measured In order to describe more precisely the waveform of the detected oscillations, we plot in Fig. 7 the evolution in time of the normalized velocity magnitude, coordination number, and solid fraction at selected illustrative locations. More specifically, we focus on eight different locations in the hopper which share the same Y-coordinate and the same distance to the hopper axis, but in eight different radial directions (corresponding to the eight half-planes described in Fig. 2). As expected, these signals are very well synchronized (over about ten periods) for velocity and coordination number but do not show any organized pattern for the solid fraction. These signals are then stacked (i.e. averaged on the 8 locations and on 10 successive periods, using the main period value T = 0.192 obtained in the previous paragraph), as is often done in seismology to de-noise earthquakes waveforms [START_REF] Grigoli | Automated microseismic event location using master-event waveform stacking[END_REF]. The purpose of this operation is to focus on the systematic variations of the quantities that follow the main oscillation period while getting rid (by averaging) of all the other non-systematic variations. The resulting waveforms are plotted in the lower-part of Fig. 7 for the three quantities (velocity, coordination number, and solid fraction). Two identical periods of this waveform are shown to ease the reading. Several interesting observations can be extracted from these curves. It appears that the velocity oscillations are quasi-harmonic, with an evolution which is very symmetric around the average value (equal to 0 for such normalized quantities). The coordination number waveform, on the other hand, is strongly skewed and far from being harmonic. The main pattern that this quantity follows in time is divided in a rather long period of values moderately larger than the average (up to + 1 standard deviation), followed by a shorter but stronger drop below the average (up to -2 standard deviations). 
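The two signal-processing steps used above reduce to a few lines each. The sketch below illustrates, on a synthetic signal with the reported period T = 0.192, first the autocorrelation analysis of Fig. 6 (correlating each time signal with a shifted copy of itself and locating the first strong peak; the peak-picking heuristic that skips the trivial zero-shift peak is ours), and then the stacking, i.e. phase-folding, borrowed from seismology (assuming T is an integer multiple of the sampling step, 0.192 = 96 x 0.002 here).

```python
import numpy as np

def autocorrelation_profile(q, dt, max_shift):
    """Autocorrelation of a single time signal as a function of the phase shift."""
    n_shift = int(max_shift / dt)
    A = np.empty(n_shift)
    for s in range(n_shift):
        A[s] = np.corrcoef(q[: len(q) - s] if s else q, q[s:])[0, 1]
    return A

def dominant_period(A, dt, min_shift=0.05):
    """Phase shift of the strongest autocorrelation peak, skipping zero shift."""
    start = int(min_shift / dt)
    return (start + int(np.argmax(A[start:]))) * dt

def stack_signal(q, dt, T, n_periods):
    """Average (stack) a time signal over n_periods successive periods of length T."""
    n_per = int(round(T / dt))
    q = np.asarray(q)[: n_per * n_periods]
    return q.reshape(n_periods, n_per).mean(axis=0)

# Synthetic normalised signal with period T = 0.192 plus noise
dt, T = 0.002, 0.192
t = np.arange(0.0, 10.0 * T, dt)
q = np.sin(2.0 * np.pi * t / T) + 0.3 * np.random.default_rng(0).normal(size=t.size)

A = autocorrelation_profile(q, dt, max_shift=0.3)
print(dominant_period(A, dt))                  # close to 0.192
print(stack_signal(q, dt, T, n_periods=10).shape)   # one averaged period of 96 samples
```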
The stacking operation also reveals a systematic evolution of the solid fraction which could not be inferred from autocorrelation patterns (Fig. 6) or from the raw timesignals (upper part of Fig. 7). While noisier than the other quantities, the waveform of the solid fraction is very clear, and consists in a symmetrical signal, somewhere between harmonic and triangular waveforms. Solid fraction and coordination number waveforms are in phase, indicating that these two quantities are directly related. As could be expected, a larger solid fraction correlates with an increase in the number of contacts, and vice versa. Their waveforms are however different because of the intrinsically asymmetrical character of mechanical contacts. Indeed, since the stiffness of the contacts is pretty large, the interpenetration distance of contacting grains is always quite small. A minor decrease of the solid fraction is hence sufficient to increase a little bit the distance between the centers of the grains, which leads to the loss of a large number of contacts. Conversely, when the solid fraction increases, some contacts can be created if the initial gap separating the grains is small enough for them to eventually touch, but this situation is made rarer by the large stiffness of the contact model. This is the reason why the coordination number oscillations are skewed towards the low values while the solid fraction follows a more harmonic evolution. In contrast, the waveforms of velocity and coordination number are shifted in time: the velocity oscillations are delayed by about 0.3T with respect to the variations of coor- dination number. Broadly speaking, it appears that velocity magnitude decreases when the coordination number is above its average, and increases when it is below its average. Likewise, coordination number decreases when velocity is below its average, and increases when velocity is above its average. Patterns in space This stacking technique is further applied to all locations in the hopper half-plane in order to analyze the spatial patterns of evolution of the three quantities of interest, and the resulting fields are plotted for one period T in Fig. 8. The lower part of the hopper appears to have no significant variation in this representation. It does not mean that the fields do not vary in time at these locations, it means that staking the signals on a T-period averages them and thus that the signals do not have any T-periodicity in this area. The upper row of Fig. 8 shows that the velocity oscillations only appear above a height of about 15 -20D (con- firming the observations of Figs. 6 and7), and that they are very homogeneous. This area can be described as the organized fluctuations area. The same observation applies to the coordination number, although it clearly appears that organized fluctuations do start at a lower height of 10 -15D , and that they are very slightly less homogeneous This view is confirmed by the lower part of Fig. 9, which shows maps of wave delay, provided by the stacked signals at every locations of the half-plane. More specifically, it shows a map of the arrival time of the maximum velocity (lower-left part) and of the minimum velocity (lower-right part). The time delay between two successive lines in the maps is constant and equal to 0.0005 time units. It clearly appears that the velocity increase initiates Fig. 7 Pattern extraction for the velocity, the coordination number, and the solid fraction, at an illustrative location in the hopper. 
Upper-part: time-signals of normalized quantities obtained on eight similar locations of the hopper (same height and distance to the axis, different radial directions as shown in Fig. 2); lower-part: stacked signals for the three quantities on the lateral wall of the hopper, propagates inwards, and then upwards. The velocity decrease, on the contrary, initiates at the walls but also at the inlet and below the organized oscillation area, and then propagates inwards. It also propagates about three times faster. These observations confirm that the detected instationarities are indeed waves propagating in the hopper, and not only stationary vibrations. In the considered geometry, the propagation time remains however very small when compared to the main period of the oscillations. Parametric study Influence of the particle size distribution In this section, we investigate the influence of the size variability of the granular sample on the flow instationarities. This variability is quantified by the coefficient of variation of the particles diameters COV D , i.e. the ratio of the stand- ard deviation to the mean value of the diameter (which is always equal to 1 length unit in this study). COV D takes the Fig. 8 A typical oscillation cycle of period T starting at time t 0 (Fig. 7), obtained by stacking for the three quantities of interest in a hopper halfplane values 0%, 20%, 40% (reference case), 60%, and 80%, while all the other simulation parameters are kept constant and equal to those of the reference case. Results are presented in Fig. 10, and include (1) the discharge rate of the hopper flow during the whole discharge (this rate is found to be constant throughout the whole simulation, for all simulations presented in this work), ( 2) the average and standard deviation of the velocity magnitude at a height Y = 40D (i.e. in the organized oscillation area) and the corresponding COV, (3) the same data for coordination number, and (4) the maximum velocity autocorrelation at Y = 40D and the corresponding time-period (obtained in a manner similar to Fig. 6). These results reveal that the flow rate through the hopper outlet is unaffected by the size distribution of the grains, as well as the average velocity in the organized oscillation area, while the average coordination number slightly increases with COV D . The intensity of the instationarities (i.e. the coefficient of variation of these quantities) seems to decrease when increasing the dispersion of the grain sizes in the sample, but the coherence of the periodicity (quantified by the value of the maximum autocorrelation number) and their time-period do not seem to be modified by this parameter. Hence, polydispersity seems to limit the instationarities, but neither to change their nature nor to prevent them. Following this trend, we can however speculate that a very strongly polydisperse sample (e.g. with a fractal distribution) may only experience limited amount of flow oscillations, but it would have to be confirmed by further studies. Influence of the hopper geometry Figure 11 presents the influence of the diameter of the hopper outlet, with values varying from 6D to 14D (all the other parameters being constant and equal to those of the reference experiment). Flow rate and average velocity both increase quite strongly with the outlet diameter, as predicted by Beverloo law, and the average coordination number is rather small (3.3-3.4) for intermediate values of the outlet diameter ( 8 -10D ) but larger (3.8-4.2) for extreme values. 
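The Beverloo law invoked here relates the discharge rate to the outlet and grain diameters; in its classical 3D form it reads W = C*rho_b*sqrt(g)*(L - k*d)**2.5, with empirical constants C and k. The snippet below uses typical literature values for these constants (they are not fitted to the present simulations) simply to illustrate the expected strong increase of the rate with the outlet diameter.

```python
from math import sqrt

def beverloo_rate(L, d, rho_bulk, g=10.0, C=0.58, k=1.5):
    """Classical 3D Beverloo correlation; C and k are typical literature values,
    not constants calibrated on the simulations of this study."""
    return C * rho_bulk * sqrt(g) * (L - k * d) ** 2.5

# Expected ratio of discharge rates between outlets of 14D and 6D (with d = D = 1)
print(beverloo_rate(14.0, 1.0, 0.6) / beverloo_rate(6.0, 1.0, 0.6))   # ~13
```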
We find the same kind of trend when analyzing the intensity of the instationarities (COV of velocity and Fig. 9 Typical velocity signals (obtained by stacking) at several locations in a hopper half-plane (with a small vertical shift to ease the reading), maps of signal delays, and corresponding propagation directions; upper-left: stacked signals on a vertical profile; upper-right: stacked signals on a radial profile; lower-left: arrival time of the maximum of velocity; lower-right: arrival time of the minimum velocity coordination number), the quality of their periodicity structure (maximum autocorrelation value) and their timeperiod. It is clear that hopper outlet diameters of 8 -10D favor the appearance and development of the organized flow fluctuations. For these simulations, the instationarities are more intense (with coefficients of variation of 30-50% for velocity and 30-40% for coordination number), and with clearer patterns (maximum autocorrelation close to 1), with a constant period. In contrast, when considering very small ( 6D ) or large ( ≥ 12D ) outlet diam- eters, the oscillations get less intense and with a much less clear periodicity. This is especially the case for large diameters, with coefficients of variation smaller than 10% for the velocity and the coordination number, a maximum autocorrelation lower than 0.8 (and as low as 0.5 in the case of 14D ), and a decreasing time-period. It is likely that considering even larger outlet diameters would lead these organized fluctuations to vanish. The influence of the hopper opening angle is presented in Fig. 12, from the case of a very flat hopper (angle to the horizontal = 40 • ) to a very steep one ( = 80 • ). Since the general geometry of the hopper changes in each simulation, local quantities (i.e. mean and standard deviation of velocity and coordination number) are computed at a varying height located 10D below the free surface at the hopper inlet, which corresponds to the location where organized fluctuations are the most prone to be observed. Influence of the contact model In a preliminary study performed in the same numerical framework [START_REF] Mollon | Granular flow fluctuations in a conical hopper[END_REF], the effect of the contact stiffness k n = k t was investigated. It was found that this parameter did not have any influence on the discharge rate, but that the periodicity of the flow oscillations was proportional to the inverse square root of this stiffness. It clearly indicated that the development of such oscillations relied on the elasticity of the contacts between the grains. In this paragraph, we evaluate the influence of the two other parameters of the standard contact model of DEM: friction and damping coefficients. Figure 13 presents numerical results obtained for friction coefficients varying from = 0.0 to = 0.8 . The hopper discharge rate slightly decreases with an increasing friction, as well as the average velocity and coordination number. There is however a clear change of oscillation regime between low and high friction values. For friction coefficients lower than 0.4, instationarities are very faint (coefficients of variation close to 5% for velocity and coordination number) and only poorly periodic. In the case of a null friction, the periodic structure completely vanishes. For larger friction coefficients, however, the intensity and coherence of the oscillations are high and rather constant. 
It is interesting to notice that large friction coefficients lead to larger time periods of the oscillations. The influence of the damping coefficient is detailed in Fig. 14, for damping values ranging from = 0.0 to = 0.4 (a value of 1.0 corresponding to critical damping). It appears that the influence of this parameter on the system is rather limited, except for a decrease of the intensity of the coordination number oscillations with a larger damping. For all the Discussion and conclusion The observations gathered during this numerical campaign reveal that 3D granular hopper flows are very prone to organized instationarities. They appear to occur as intense variations of the velocity field and of the contact structure of the granular sample, which are chaotic in the neighborhood of the hopper outlet but gain a clear periodicity and structure above a certain height. They take the shape of rapidly propagating waves originating from the hopper outlet and lateral walls and propagating upwards and inwards, leading to almost synchronized oscillations of the flow parameters in the upper part of the hopper. While velocity variations are quasi-harmonic, variations in the coordination number are more skewed, with short events of important contacts loss followed by longer stages of reconstruction of the contact network. The granular density of the sample, represented here by the solid fraction, is only submitted to very limited variations in amplitude, but can nevertheless be detected by appropriate procedures and appear to be perfectly in phase with the coordination number signal. The time delay between the velocity and coordination number variations and the propagation patterns of the waves transporting them draw a general pictures of the instationarities that follow several successive stages: 1. Force chains build up from the hopper outlet and from its lateral walls, because of the converging character of the flow. This leads to an increase in the coordination num- The systematic parametric study described in Sect. 4 suggests that some specific conditions are necessary for the initiation and continuance of such organized and periodic instationarities. A specific range of hopper outlet diameter (between 8D and 10D ) and of hopper open- ing angle (between 60° and 70° with the horizontal) are needed, otherwise the fluctuations do not gain intensity and periodic structure. A minimum value of the coefficient of friction of 0.4 is also necessary. The polydispersity of The reasons for these restricted conditions are still unclear, in particular because there is no apparent correlation between the discharge rate and the properties of the instationarities. For example, the change of oscillation regime is very clear between the cases = 0.2 and = 0.4 (Fig. 13), but no such drastic change is observed in terms of the discharge rate, which follows the same trend as for larger values of . Hence, a simplistic explanation relating the appearance of the fluctuations to the larger difficulty of the grains to flow through the hopper (which would lead them to collide more and would induce vibrations) does not seem to hold. Same observations can be made for the hopper outlet diameter and opening angle. This is confirmed by Fig. 15, which shows vertical profiles of the coefficient of variation of the velocity magnitude and of the coordination number along the revolution axis of the hopper, for the reference simulation, for the case of hopper opening diameter of 6D , and for the case of a fric- tion angle of = 0.2 . 
It clearly appears that the behaviors of these three systems in terms of flow variability are very similar in the lower part of the hopper, i.e. below a height of 10 -15D . It is only above this height that the reference sim- ulation starts to exhibit more intense instationarities, while the other two simulations do not. It would thus appear that organized fluctuations are not related to phenomena occurring in the close neighborhood of the outlet, such as the collisional regime transition. It can be speculated that their ability to self-organize may result from a complex interplay between the hopper geometry and the ability of the granular material (enhanced by a higher contact friction) to form strong and persistent force chains and thus to coordinate grains motions on a longer range. This interplay is yet to be understood. Further studies will focus on this assumption, and will explore the implications of a more realistic (i.e. non-linear) contact stiffness, of larger polydispersity, and of more complex grains shapes [START_REF] Mollon | 3D generation of realistic granular samples based on random fields theory and Fourier shape descriptors[END_REF]. Fig. 1 1 Fig. 1 Snapshot of particle velocities during hopper discharge Fig. 2 Fig. 3 23 Fig. 2 Left: Proportion of passed material in time in the reference simulation, and timewindow used for the observation of flow instationarities; right: 3D Eulerian grid used to map flow parameters in time Fig. 4 4 Fig. 4 Space-time diagram of the velocity magnitude, of the coordination number, and of the solid fraction, on the vertical revolution axis of the hopper; left: raw quantities; right: normalized quantities Fig. 5 5 Fig. 5 Snapshot of normalized flow parameters in a symmetry plane of the hopper Fig. 6 6 Fig. 6 Autocorrelation in time of normalized flow quantities on the revolution axis of the hopper, presented as functions of location Y and of phase shift; upper-left: Velocity autocorrelation map; upper- Figure 12 reveals that steep hoppers tend to increase the flow rate, but that fluctuations are favored by intermediate values of the opening angle. Flat hoppers ( = 40 -50 • ) indeed lead to very limited insta- tionarities (COV smaller that 8% for velocity and coordination number) with a much less clear periodic structure. The case = 40 • even leads to a complete disappearance of any kind of periodic fluctuations of the flow parameters. Similar observations are made for the steepest case ( = 80 • ). It is nevertheless interesting to observe that the shape of the Fig. 10 10 Fig. 10 Influence of the coefficient of variation of D ; upper-left: discharge rate; upper-right: velocity magnitude at Y = 40D (mean value ± one SD), and corresponding coefficient of variation; lower- Fig. 11 11 Fig. 11 Influence of the outlet diameter; upper-left: discharge rate; upper-right: velocity magnitude at Y = 40D (mean value ± one SD), and corresponding coefficient of variation; lower-left: same plot for Fig. 12 12 Fig. 12 Influence of the hopper angle; upper-left: discharge rate; upper-right: velocity magnitude at a distance of 10 below the hopper inlet (mean value ± one SD), and corresponding coefficient of varia- Fig. 13 13 Fig.[START_REF] Gardel | Forcevelocity correlations in a dense, collisional, granular flow[END_REF] Influence of the contact friction coefficient; upper-left: discharge rate; upper-right: velocity magnitude at Y = 40D (mean value ± one SD), and corresponding coefficient of variation; lower- Fig. 14 14 Fig. 
Fig. 14 Influence of the contact damping coefficient; upper-left: discharge rate; upper-right: velocity magnitude at Y = 40D (mean value ± one SD), and corresponding coefficient of variation; lower-

Fig. 15 Standard deviations of velocity and coordination number as functions of the location Y on the revolution axis of the hopper, for the reference simulation, the case of an opening diameter of 6D, and the case of a friction coefficient μ = 0.2
03659784
en
[ "spi" ]
2024/03/04 16:41:18
2022
https://hal.science/hal-03659784/file/Bouillanne2022.pdf
Olivier Bouillanne Guilhem Mollon email: [email protected] Aurélien Saulot Sylvie Descartes Nathalie Serres Guillaume Chassaing Karim Demmou How vorticity and agglomeration control shear strength in soft cohesive granular flows Keywords: Soft grains, Sheared flow, Agglomeration, Vorticity, Tribological contact, Third body Deformable granular flows present complex kinematics. These materials can have various flow regimes: plastic, agglomerated, rigid-like granular flow, etc. In this paper, a multibody meshfree model is used to investigate the consequences of cohesion, stiffness, and viscosity of the particles on their collective sheared flows in tribological contacts. An approach derived from fluid mechanics postprocessing tools, based on vortex detection, is employed to understand the links between these parameters and the emerging friction coefficient of the sheared interface. These results pave the way to complete kinematic studies of third body simulations in tribological contacts. Introduction Tribology and fretting Tribology is the science that describes the interaction of the surfaces of two solid bodies in contact and relative motion. This contact generates friction and can lead to various mechanical phenomena, such as wear and fatigue. Friction is at the center of many practical topics, from moving cut stones by the ancient Egyptians [START_REF] Dowson | History of Tribology[END_REF] to brushing teeth [START_REF] Descartes | Tribological study of oral care silica[END_REF]. Mechanical systems can be lubricated or dry. While the understanding of lubricated contacts phenomenology is now fairly accurate, that of dry contacts remains fragmentary. An approach popularized by Godet [START_REF] Godet | The third-body approach: a mechanical view of wear[END_REF] and then Berthier [START_REF] Berthier | Velocity accommodation in fretting[END_REF][START_REF] Berthier | Fretting fatigue and fretting wear[END_REF] is to consider that, at the interface between two rubbing bodies, there is a third body. This third body, composed of a combination of wear debris from the first bodies and particles from the outside, is located between the surfaces and allows loads to be transmitted while accommodating the relative velocity of the two rubbing bodies. This view is supported by countless experimental characterizations of worn surfaces. Hereinafter, these two rubbing bodies will be called "first bodies", following the conventional tribological terminology. Fretting is a tribological phenomenon in which two surfaces in contact undergo oscillatory movements of amplitude smaller than the size of the contact. This fretting can generate fatigue or wear, or a mixture of both. Many mechanical parts undergo fretting: splines, bolted or riveted assemblies [START_REF] Kounoudji | Role of third body on bolted joints' self-loosening[END_REF], etc. This affects many fields of engineering: aeronautics, nuclear, biomedical, civil engineering, etc. The concept of third body has led to a better understanding of fretting, especially following advances made by the fretting maps [START_REF] Vingsbo | On fretting maps[END_REF] and the study of the different fretting regimes (stick condition, stick-slip condition, and gross-slip condition) [START_REF] Vincent | Mechanics and materials in fretting[END_REF]. The third body may protect or damage surfaces and can show various aspects, as can be seen in Fig. 1. 
These SEM views are obtained as a result of two titanium alloy (Ti-6Al-4 V) surfaces rubbing against each other in a frettingtest tribometer, i.e. following a small amplitude reciprocating sliding motion during a large number of cycles. As in any such test, a third body is visible. On the left-hand micrograph, various third body behaviors are detectable: granular on the left, and rather plastic in the center. On the right-hand micrograph, the third body is granular and large agglomerated wear debris are visible. Experimental approaches can reveal this third body after the test, but traditional mechanical models tend to neglect the impact that the third body can have on wear. The main reason for this is the fact that state-of-the-art measurement techniques are not able to resolve in time and space the variability of the local third body behaviors and their influence on local friction and wear. There is therefore a lack of knowledge of such local effects, and the best way to circumvent this experimental bottleneck is to employ numerical simulations. Conventional numerical methods of modeling wear are descriptive and hardly predictive. This is due to the multitude of factors influencing interface behavior: the amplitude of the movement between the two surfaces, the pressure distribution on these surfaces, the number of accumulated cycles and their frequency, the properties of the materials and the condition of their surfaces, the contact temperature, the atmospheric environment… Collins [START_REF] Collins | Failure of Materials in Mechanical Design[END_REF] thus counts about fifty variables that play a role in the fretting process. The relationship between the third body and wear is established but not yet totally predictable [START_REF] Arnaud | Modeling adhesive and abrasive wear phenomena in fretting interfaces: a multiphysics approach coupling friction energy, third body and contact oxygenation concepts[END_REF][START_REF] Baydoun | Modeling contact size effect on fretting wear: a combined contact oxygenation-third body approach[END_REF]. Surface degradation leads to wear after a while, especially if the particles that have become detached from the surface come out of contact permanently. If these particles remain within the contact, they form the third body and can help to limit this wear. The stresses applied by the flow of the third body in the interface are probably one of the causes of surface degradation. It is therefore necessary to characterize in detail the stresses applied to the first bodies by the third body. Such forces are obviously related to the flow regime of the third body in the interface, especially on its shear strength and on its spatial and temporal variations. Kinematic indicators will allow us to characterize these regimes and to understand the link between the third body and the stresses, then with the degradation, and finally with the wear. The kinematics of third body flows must therefore be better understood, to grasp the interaction between the third body and stresses. 
Numerical [START_REF] Wang | Strain dependent vorticity in sheared granular media[END_REF] and experimental [START_REF] Forterre | Stability analysis of rapid granular chute flows: formation of longitudinal vortices[END_REF][START_REF] Abedi | Vortex formation and dissolution in sheared sands[END_REF][START_REF] Rognon | A circulation-based method for detecting vortices in granular materials[END_REF] approaches to calculate the vorticity of a granular flow have already been proposed, but they were limited to rigid granular materials. The terms agglomerate and vortex are complementary. The term agglomerate refers to a set of particles moving in a coordinated manner, while a vortex refers to a set of particles rotating around a certain geometrical point. Hereafter, in this study, the term agglomerates or structures will be used, because the vortices studied here have much longer duration than those described in the existing granular physics literature. Numerical tribology Different numerical approaches are used to describe the tribological contacts. The first one is based on finite elements. To model the wear track, one way to do this is to calculate the stresses undergone by the surfaces and to deduce a displacement of the nodes composing the mesh of these solids. [START_REF] Ding | The effect of slip regime on fretting wear-induced stress evolution[END_REF]. Another way, more radical, consists of recalculating a complete mesh as the wear progresses [START_REF] Basseville | An evaluation of the competition between wear and crack initiation in fretting conditions for Ti-6Al-4V alloy[END_REF]. The "wearbox" numerical tool, on the other hand, relies on a measurement of the amount of energy dissipated in contact to apply a deformation to the mesh [START_REF] Mary | Numerical prediction of fretting contact durability using energy wear approach: optimisation of finiteelement model[END_REF][START_REF] Paulin | Finite element modelling of fretting wear surface evolution: application to a Ti-6A1-4V contact[END_REF]. The shortcoming of these methods is that they do not consider the debris bed present in the interface, i.e. the third body. Ding [START_REF] Ding | A finite element based approach to simulating the effects of debris on fretting wear[END_REF] tried to take this into account in a new finite element model (without representing this third body), but the wear tracks that can be simulated are shallow. Semi-analytical approaches are also used [START_REF] Gallego | A fast and efficient contact algorithm for fretting problems applied to fretting modes I, II and III[END_REF][START_REF] Gallego | Multiscale computation of fretting wear at the blade/disk interface[END_REF], which allow reducing the computation time and simulate the full system. The reader can refer to the article [START_REF] Renouf | Numerical tribology of a dry contact[END_REF] for more information on the numerous methods used in finite elements. Although finite element models allow modeling the stress and displacement fields of bodies in contact, they do not allow considering the discontinuities of the material. The discrete element approaches used in tribology have their origins in the numerical modeling of discontinua of Cundall and Hart [START_REF] Cundall | Numerical modelling of dicontinua[END_REF]. These simulations, while quite varied, are based on three key principles: • The equations of motion of mechanics; • The detection of contacts between particles; • The calculation of the forces between the particles. 
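As an illustration of these three ingredients, the following minimal sketch (not taken from any of the cited codes) shows how a single explicit time step of a basic 2D discrete element scheme can be organized; real codes use much richer contact laws, damping, friction, and efficient neighbour searches.

```python
import numpy as np

def dem_step(pos, vel, radii, mass, dt, k_n=1.0e4):
    """One explicit time step of a minimal 2D DEM scheme: (1) contact
    detection, (2) force computation, (3) integration of the equations of
    motion.  Illustrative only: linear repulsive normal contacts, no damping,
    no friction, naive O(n^2) neighbour search."""
    n = len(radii)
    forces = np.zeros_like(pos)
    # (1) contact detection
    for i in range(n):
        for j in range(i + 1, n):
            branch = pos[j] - pos[i]
            dist = np.linalg.norm(branch)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0 and dist > 0.0:
                # (2) force computation: linear elastic normal repulsion
                normal = branch / dist
                f = k_n * overlap * normal
                forces[i] -= f
                forces[j] += f
    # (3) equations of motion: explicit Euler integration
    vel = vel + dt * forces / mass[:, None]
    pos = pos + dt * vel
    return pos, vel
```

In practice, production codes replace the quadratic pair search by neighbour lists or spatial hashing, and use more stable integration schemes.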
Discrete element models (DEM) are used in tribology to model different aspects of contact. For example, DEM can be used to model the contact and the degradation of materials between the wheel of a train and the rail [START_REF] Chapteuil | Influence of copper/graphite properties on the tribological and electrical behavior of copper-graphite third body layer[END_REF]. DEM simulations are also used to simulate the flow of the third body-as well as its degradation-in different contexts: thermo-mechanical response [START_REF] Rivière | Thermo-mechanical investigations of a tribological interface[END_REF], two-phase materials [START_REF] Champagne | Modeling wear for heterogeneous bi-phasic materials using discrete elements approach[END_REF], or electrical contacts [START_REF] Renouf | Coupling electrical and mechanical effects in discrete element simulations[END_REF][START_REF] Descartes | A new mechanical-electrical approach to the wheel-rail contact[END_REF]. Some authors tried to combine discrete elements with finite elements to combine the advantages of both methods [START_REF] Renouf | First-body versus third-nody: dialogue between an experiment and a combined discrete and finite element approach[END_REF][START_REF] Li | Fretting damage modeling of liner-bearing interaction by combined finite elementdiscrete element method[END_REF][START_REF] Leonard | Third body modeling in fretting using the combined finitediscrete element method[END_REF]. In particular, the particles of a discrete model are a priori rigid, which is not the case when combining these two methods. The field of deformable granular materials is of growing interest, both experimentally [START_REF] Dijksman | Refractive index matched scanning and detection of soft particles[END_REF][START_REF] Vu | Soft-grain compression: beyond the jamming point[END_REF] and numerically [START_REF] Cantor | Compaction model for highly deformable particle assemblies[END_REF][START_REF] Nguyen | Compaction of granular materials composed of deformable particles[END_REF][START_REF] Harthong | Contact impingement in packings of elastic-plastic spheres, application to powder compaction[END_REF][START_REF] Favier De Coulomb | Rheology of granular flows across the transition from soft to rigid particles[END_REF], and extends well beyond the purely tribological applications. Finally, the last type of approach concerns nanometric models. Several techniques exist, based on the displacement of atoms. The Monte Carlo method and the molecular dynamics method are the most widespread. These simulations are based on atomic interaction laws derived from quantum mechanics [START_REF] Doucet | Computer-Aided Molecular Design: Theory and Applications[END_REF]. Some authors focus on modeling adhesive wear and the formation of agglomerates in contact [START_REF] Aghababaei | Critical length scale controls adhesive wear mechanisms[END_REF][START_REF] Molinari | Adhesive wear mechanisms uncovered by atomistic simulations[END_REF]. These simulations are informative but timeconsuming, and only allow representing a tiny fraction of the contact. For more information on these simulations, the reader may refer to [START_REF] Robbins | Computer simulations of friction, lubrication, and wear[END_REF]. Methods Numerical model This work is largely based on a previous work [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF], where an extensive simulation campaign was performed, with various mechanical parameters. 
In what follows, we revisit these stored numerical results with dedicated techniques to extract relevant new kinematic data. It is therefore worth providing some general information about the model. The reader interested in more details may refer to [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF]. In that paper, the author describes a simulation of a flow of deformable particles in a dry sliding interface. These simulations are performed on the open-source code MELODY, a software developed in LaMCoS, INSA Lyon, France [START_REF] Mollon | A multibody meshfree strategy for the simulation of highly deformable granular materials[END_REF][START_REF] Mollon | A unified numerical framework for rigid and compliant granular materials[END_REF]. This software generalizes the conventional Discrete Element Method (DEM) and allows simulating complex granular materials, with either rigid or highly deformable grains with arbitrary shapes, within the framework of large deformation hyperelasticity in 2D. The versatility of this software allows it to be used in several fields, for example in mechanical engineering [START_REF] Zhang | Significance of third body rheology in friction at a dry sliding interface observed by a multibody meshfree model: influence of cohesion between particles[END_REF][START_REF] Quacquarelli | A dual numerical-experimental approach for modeling wear of Diamond Impregnated Tools[END_REF] or geomechanics [START_REF] Casas | DEM analyses of cemented granular fault gouges at the onset of seismic sliding: peak strength, development of shear zones and kinematics[END_REF]. Principles Figure 2 shows a sketch of the numerical model, which is purely dimensionless. Two rough rigid walls (length L = 100 ) are placed on either side of a collection of 2000 deformable particles with a unit average diameter, to represent two bodies rubbing against each other with a third body. The two first bodies have a self-affine roughness, with a R a equal to 1 space unit. They are rigid and non-degradable. Their cohesion with particles is very high, in order to prevent wall-slip effects, and to focus on velocity accommodation within the third body only. The deformable particles follow a hyperelastic law. This makes it possible to apply elasticity to bodies that undergo large deformations and rotations. The neo-Hookean law considers two main parameters, which are Young's modulus E and Poisson's ratio . The Poisson's ratio is fixed at 0.495 to impose a quasi-incompressibility. Finally, the density is equal to 1 (dimensionless model). To make the simulation stable, damping is added via the Rayleigh parameter: the damping matrix is proportional to the stiffness matrix via this parameter. The result is a viscoelastic-like behavior of the Kelvin-Voigt type. The contact law governing the interactions between the particles and with the walls is governed by a cohesive frictionless model with only one parameter c , which is the strength per length unit of contact between two surfaces. By using deformable particles that can move freely in space, the model combines the laws of dynamics with those of continuum mechanics. A gap (noted D ) between the two first bodies is imposed. Finally, a relative tangential velocity V = 1 is imposed on the two bodies: the upper body goes rightwards at V∕2 , and the lower one, leftwards at -V∕2 . 
This generates a shear within the collection of particles that will accommodate velocities while transmitting loads throughout the simulation, like the third body within a tribological contact. Periodic boundary conditions are applied to the extremities of the simulation. For each simulation, the simulated time corresponds to three sheared spatial periods, i.e. a time of 300 time units (the time unit, noted Δt, is equal to 1 in this dimensionless context). To ensure that the flow regime of the third body is well established, only the last 200Δt will be considered for quantitative interpretation. The normal and tangential forces experienced by the upper and lower bodies are measured over time and noted F_N(t) and F_T(t) respectively. Their time averages are noted F̄_N and F̄_T. The stresses, which are the ratio of the forces to the length L (2D simulation), are noted σ_N and σ_T. The ratio of these two quantities provides an average coefficient of friction at the scale of the whole interface, as an output quantity of the simulation. This approach is a good conceptualization of the experimental reality, where the friction coefficient of a given interface is a measured quantity that emerges from a large number of local phenomena and events. The elastic behavior of each grain can certainly be seen as a limitation when applied to plastic and ductile materials (e.g. metals), but constitutes an improvement with respect to perfectly rigid grains. Besides, irreversible deformations occur within the flow because of the cohesive character of the contact laws. It means that plasticity is present in the simulations, albeit at the scale of the flow instead of that of the grains. Parametric space Three dimensionless parameters are studied and lead to various flows of the third body. These three parameters are a normalized stiffness Ẽ (between 0.15 and 1.2), a normalized cohesion c (between -1 and 1.4) and a normalized viscosity α̃ (between -3.3 and -1.5). The first two parameters are normalized by the average normal stress experienced by the first bodies, σ_n. The third parameter is normalized by the mean strain rate in the interface, V∕D. The normalized stiffness characterizes the resistance to deformation of particles under stress. The normalized cohesion quantifies the cohesive force between the particles in the system. This cohesion therefore varies roughly between one order of magnitude below the normal stress (negligible cohesion) and one order of magnitude above it (dominating cohesion). Finally, the normalized viscosity characterizes the response of the grains to their rate of deformation. As demonstrated in [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF], the chosen range for α̃ is consistent with the solid viscosity of steel (for example) under typical tribological conditions. Consequences of stiffness, cohesion, and viscosity of third body The coefficient of friction, noted μ, is the ratio of the measured tangential force to the normal force:

μ = F̄_T ∕ F̄_N    (4)

This is an output value, not an input value. It characterizes the shear strength of this confined soft flowing granular material. The evolution of the friction coefficient as a function of the values of Ẽ, c and α̃ can be seen in Fig. 3. The coefficient of friction has a wide range, from 0.1 to over 1.0, but remains in the typical range of observed values for dry sliding. The increase in the α̃ parameter leads to a general increase in the coefficient of friction. The two maps of Fig. 3 show the coefficient of friction at imposed α̃.
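The sketch below illustrates how these output and input quantities can be post-processed from the stored force histories; the use of a log10 to obtain the quoted ranges of the normalized cohesion and viscosity, as well as the placeholder values of L, V and D, are assumptions made here for illustration, not statements about the original study.

```python
import numpy as np

def average_friction(F_N, F_T, t, t_established=100.0):
    """Average friction coefficient of Eq. (4), computed over the
    established regime only (last 200 time units out of 300)."""
    mask = t > t_established
    return np.mean(F_T[mask]) / np.mean(F_N[mask])

def normalized_parameters(E, c, alpha, mean_normal_force, L=100.0, V=1.0, D=10.0):
    """Dimensionless parameters described in the text.  The log10 applied to
    cohesion and viscosity is an assumption used to recover ranges such as
    -1 < c_tilde < 1.4; the gap D and wall length L are placeholder values."""
    sigma_n = mean_normal_force / L        # mean normal stress (2D: force / length)
    E_tilde = E / sigma_n                  # normalized stiffness
    c_tilde = np.log10(c / sigma_n)        # normalized cohesion (assumed log scale)
    alpha_tilde = np.log10(alpha * V / D)  # normalized viscosity (assumed log scale)
    return E_tilde, c_tilde, alpha_tilde
```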
In particular, when α̃ ≈ -1.8, a maximum coefficient of friction is observed, at Ẽ = 0.7 and c = 0.7 (that is, for a Young's modulus of the grains and an interparticle cohesion roughly one order of magnitude larger than the applied normal stress). These three parameters greatly influence the flow and the microstructures of the third body. Figure 4 shows different kinematic patterns of the third body as a function of the parameters Ẽ and c, at imposed α̃ = -1.8. For example, when Ẽ and c are low, the observed behavior is close to picture A. The particles deform a lot and fill the pore space. They adopt a structured pattern with a preferred horizontal direction which promotes a perfectly laminar flow: because of the low cohesion, each layer of particles slides over the others. On the other hand, in case B, the particles deform little. The observed regime can be described as granular. Cases D and F are also particularly interesting. In these cases, the particles have sufficient cohesion to form agglomerates, similar to those seen in Fig. 1. These third body agglomerates flow, shear and roll within the interface. The other cases (C, E, H, and I) are intermediate to those described above. These simulations are qualitatively consistent with the experimental variety of third body visual aspects [49]. Coherence In some simulations, agglomerates are observed. Grains form cohesive clusters of various sizes, in which the particles move in a motion similar to that of a rigid body. An example can be seen in Fig. 5. The agglomerates are difficult to observe in a succession of still images but are clearly visible in animations. Characterizing the size of these agglomerates would make it possible to analyze the consequences of the rheology of the third body particles on the first bodies. Tools developed to characterize the size of these agglomerates, derived from fluid mechanics, are documented and adapted hereafter. The agglomerate identification function is derived from vortex identification functions. These mathematical descriptors were proposed in the context of vortex studies in fluid mechanics and are called Γ1 and Γ2 [50, 51]. These functions compare the observed motion of points near a central point with that of particles forming an ideal vortex. The function proposed here, called "coherence", is based on the same concept but compares with the motion of an ideal rigid body. To do this at a given time step, a velocity field is first interpolated on a regular Eulerian grid from the motion of the particles, as can be seen in Fig. 6. The interpolation used is cubic, which allows C2 continuity. The velocity is interpolated with respect to the particle center-of-mass velocity, because the deformation-related velocity field within particles is small compared to the rigid-body velocity of these particles, as can be seen in Fig. 2. The time step used to extract grain positions and to compute velocities is equal to one time unit, and corresponds to an average strain level of 0.05 between two extractions. This is a good compromise between too small (producing a large amount of redundant data) and too large (overlooking complex trajectories of the grains) time steps. The velocity field is discretized into a square mesh, with a spatial step of Δx = Δy = 1, and each point of this mesh has a velocity V⃗.
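A possible implementation of this interpolation step is sketched below with SciPy's generic scattered-data interpolator rather than the specific scheme used in the study; it takes the grain centre-of-mass positions and velocities at one time step and returns the two velocity components on the regular grid.

```python
import numpy as np
from scipy.interpolate import griddata

def velocity_on_grid(centers, velocities, x_range, y_range, step=1.0):
    """Interpolate grain centre-of-mass velocities onto a regular Eulerian
    grid with spatial step 1 (illustrative sketch).  Cubic interpolation is
    used inside the convex hull of the grain centres; a nearest-neighbour
    fallback fills the few points lying outside of it."""
    xs = np.arange(x_range[0], x_range[1] + step, step)
    ys = np.arange(y_range[0], y_range[1] + step, step)
    X, Y = np.meshgrid(xs, ys)
    Vx = griddata(centers, velocities[:, 0], (X, Y), method="cubic")
    Vy = griddata(centers, velocities[:, 1], (X, Y), method="cubic")
    Vx_near = griddata(centers, velocities[:, 0], (X, Y), method="nearest")
    Vy_near = griddata(centers, velocities[:, 1], (X, Y), method="nearest")
    Vx = np.where(np.isnan(Vx), Vx_near, Vx)
    Vy = np.where(np.isnan(Vy), Vy_near, Vy)
    return X, Y, Vx, Vy
```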
We then define a local quantity for each point, called "coherence", which quantifies to what extent the velocity field in the neighborhood of this point corresponds to a rigid-body motion. Coherence at a given point P is therefore calculated by comparing the velocity of each point M located in a disk of radius R and center P with the velocity that an ideal rigid body of center P and radius R would have. The complete formula is as follows:

C(x, y, R) = ∫_{M∈S} (V⃗ − V⃗_ave) · V⃗_circ dS ∕ ( √(∫_{M∈S} ‖V⃗ − V⃗_ave‖² dS) · √(∫_{M∈S} ‖V⃗_circ‖² dS) )    (5)

where PM ≤ R, V⃗_circ is an ideal circumferential relative velocity of point M with respect to point P in a rigid-body rotation (the angular velocity is arbitrarily chosen to be equal to 1), and V⃗_ave is the average velocity of all points M within the radius R (cf. Fig. 7). By construction, -1 ≤ C ≤ 1. A value of C = 1 means the particles rotate in a coordinated rigid-body motion in the positive direction, and vice versa for C = -1. The radius R of the interrogation circle is a user-defined parameter. It seems that a radius of two to three times the typical size of a particle allows the detection of structures properly while keeping the computation time reasonable. By repeating the operation on the whole simulation, it is thus possible to determine a coherence field, as can be seen in Fig. 8A, computed with the velocity field extracted from Fig. 5. Red zones correspond to a coherence close to 1, and blue ones to -1. Expectedly, the coherence map is essentially negative because the natural direction of rotation due to the shearing of the interface is negative in the present sign convention. By applying a filter to select only the areas where |C| > 0.8, agglomerates are highlighted (Fig. 8B). It is finally possible to stack these detected areas to observe their evolution over time (Fig. 8C). Independent structures are easily identified and labeled based on connectivity relationships in the space-time domain. Each detected agglomerate is characterized by its coherence. An additional relevant property to be calculated is the average angular velocity, noted ω. Each agglomerate detected at an instant T can be associated with a certain number of grains belonging to it. The average angular velocity is calculated by averaging the circumferential component of the velocity of each grain relative to the barycenter of the agglomerate, divided by its distance to the barycenter:

ω = (1∕n) Σ_i (V_circ,i ∕ R_i)    (6)

with R_i the distance between each point in the structure and the barycenter, and n the number of points in the structure. To account for the difference in distance between the two first bodies from one simulation to another, this quantity is normalized by the large-scale shearing of the third body layer, V∕D, and noted ω̃:

ω̃ = ω ∕ (V∕D)    (7)

In this case, the V∕D shear rate is negative. Structures rotating in a positive direction will therefore have a positive value of ω̃. Normalization makes it easy to compare the angular velocity of the structures with the shear rate. If ω̃ = 1, the structure rotates as a disk in a rolling movement without sliding in the contact. If ω̃ > 1, it rotates faster, and if 0 < ω̃ < 1, the structure rotates slower. Finally, if ω̃ < 0, the structure rotates in the opposite direction to the shear.
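The following sketch is a direct, discrete transcription of Eqs. (5)-(7) on the gridded velocity field; it is meant to make the definitions concrete and is not the post-processing code of the original study (the radius R, the grid spacing, and the handling of empty neighbourhoods are illustrative choices).

```python
import numpy as np

def coherence_field(X, Y, Vx, Vy, R=2.5):
    """Discrete evaluation of the coherence C(x, y, R) of Eq. (5): at each
    grid point P, the velocity fluctuations of the points M within a radius
    R are projected onto an ideal rigid-body rotation around P."""
    C = np.zeros_like(Vx)
    for j in range(Vx.shape[0]):
        for i in range(Vx.shape[1]):
            dx, dy = X - X[j, i], Y - Y[j, i]
            mask = dx**2 + dy**2 <= R**2
            vx = Vx[mask] - Vx[mask].mean()      # V - V_ave
            vy = Vy[mask] - Vy[mask].mean()
            cx, cy = -dy[mask], dx[mask]         # V_circ (unit angular velocity)
            num = np.sum(vx * cx + vy * cy)
            den = np.sqrt(np.sum(vx**2 + vy**2)) * np.sqrt(np.sum(cx**2 + cy**2))
            C[j, i] = num / den if den > 0.0 else 0.0
    return C

def normalized_angular_velocity(v_circ, r_i, V=1.0, D=10.0):
    """Eqs. (6)-(7): average angular velocity of one agglomerate and its
    normalization by the large-scale shear rate V/D (placeholder values)."""
    omega = np.mean(v_circ / r_i)
    return omega / (V / D)
```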
Results 75 simulations were performed with different values of Ẽ, c and α̃ in [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF], and we revisit their stored numerical results. Coherence is applied to this series of simulations, used to evaluate the coefficient of friction as a function of the three parameters identified above. Seven notable simulations are retained; they are the same as those used in the previous study. They are noted from A to G and located in the parametric space in Fig. 4.
Fig. 7 Coherence calculation scheme
Stacked coherence In Fig. 9, the coherence is plotted over time and stacked, following the methodology shown in Fig. 8C. In case A, the particles are soft and hardly cohesive. There are very few agglomerates detected, and they all rotate in the negative direction. Because of the shear generated by the upper and lower bodies, this is naturally the preferred direction of rotation, while positive rotations correspond to agglomerates which rotate against the main shear direction. The lifespan of the structures is very short. Cases B, C, and D follow a path of decreasing stiffness. C is located at the maximum of the friction coefficient. The other parameters do not vary very much. First of all, for case B, structures are not very spatially extensive but persist for a fairly long time. No observed structure rotates in the positive direction. For case C, on the other hand, the structures, although numerous, are not very extensive spatially and temporally. Some structures rotating in the positive direction are to be noted, unlike case A. Finally, in case D, the negatively rotating structures are massive, both spatially and temporally. Structures that rotate in the positive direction are also observed, and their spatial extent is notable. We thus observe for these three cases, with decreasing Ẽ, first many structures, then a drastic reduction in their size, and finally the appearance of large, numerous, and persistent agglomerates. Simulations E, C, and F follow an increasing cohesion path. Case E is similar to case A, except that there are more structures, and they have a slightly larger spatial extent. Case F is very interesting and shows the impact of an important cohesion. The structures are very large, whatever their direction of rotation. In these three cases, the increase in cohesion results in an increase in structure size. Cases E and C are similar, while regime F has very large structures, but is seemingly shorter-lived than in case D. Case G, visible in Fig. 10, is similar to case C, except that the viscosity is much lower. The viscosity α̃ plays an important role in whether or not structures are formed, since the detected structures have a longer lifetime and a larger spatial extent. Size of structures Figure 11 shows the proportion of grains belonging to a structure compared to all the grains in the simulation. This proportion of grains is very dependent on the cohesion c, and to a lesser extent, on the viscosity α̃. On the other hand, the stiffness Ẽ seems to have very little influence. The maximum grain content in a structure, 11%, is rather low. This is due to the choice of a rather high detection threshold (|C| > 0.8). The data should be analyzed relatively rather than absolutely. The proportion of grains in the structures is an important parameter since it gives information on the extent of the agglomeration phenomenon.
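A compact way to perform the thresholding, the space-time labeling and the grain-count statistics described above is sketched below with SciPy's connected-component labeling; the 0.8 threshold follows the text, while the bookkeeping conventions are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_structures(C_stack, threshold=0.8):
    """Label agglomerates in a stack of coherence fields C(t, y, x).
    Cells with |C| > threshold are kept, and connected components are
    identified in the (t, y, x) space-time domain.  Positive and negative
    rotations are labeled separately so that each structure keeps a single
    sense of rotation."""
    labels_pos, n_pos = ndimage.label(C_stack > threshold)
    labels_neg, n_neg = ndimage.label(C_stack < -threshold)
    return labels_pos, n_pos, labels_neg, n_neg

def fraction_of_grains_in_structures(grain_cells, labels_pos, labels_neg):
    """Proportion of grains belonging to a structure.  grain_cells holds,
    for each grain, the (t, y, x) indices of the grid cell closest to its
    centre (an assumed convention for this sketch)."""
    t, y, x = grain_cells.T
    in_structure = (labels_pos[t, y, x] > 0) | (labels_neg[t, y, x] > 0)
    return np.mean(in_structure)
```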
Angular velocity of structures The probability density function (PDF) is used to extract statistical data from all the simulations, including the angular velocity of the agglomerates. These PDFs are expressed in terms of equivalent grains, i.e. they represent the density of probability for a given grain to belong to a structure with a certain diameter or a certain angular velocity. They are visible in Fig. 12, left column. The seven cases analyzed above have very different profiles. Path B (purple) → C (medium blue) → D (green), which follows a decreasing stiffness, is interesting because we observe that case C (maximum peak of the coefficient of friction) corresponds to a minimum angular velocity and a minimum of grains in the structures. For path E (dark blue) → C (medium blue) → F (light blue), which follows an increasing cohesion, case C is intermediate between E and F. Case G does not have very different characteristics from case C, which has the same stiffness and cohesion, but not the same viscosity. Case A, in orange, which has very few structures, shows only structures rotating in the shear direction, with very high dispersion. Thus, many structures rotate nearly twice as fast as the shear rate. To each of the 75 simulations can be attributed an angular velocity mode ω̃ corresponding to the peak of the angular velocity PDF. Some correlations are interesting to observe, for example between Ẽ and ω̃ in Fig. 13A. No correlation is noted between ω̃ and Ẽ over the whole set of simulations (Pearson's coefficient of 0.22; Pearson's coefficient is relevant to quantify the quality of a linear regression), but a correlation is visible when the data are binned according to the value of the normalized viscosity. The linear relationship is much stronger at low viscosity (Pearson's coefficient of 0.51 for α̃ = -2.2 and 0.55 for α̃ = -3.2); at α̃ = -1.8, the correlation is weaker. Another remarkable correlation, visible in Fig. 13B, is that between the average angular velocity of the structures ω̃ and the solid fraction S_F. The solid fraction is the ratio of the surface occupied by the particles in the simulation to the surface of the space that contains them. A solid fraction close to 1 indicates that the grains are very tightly packed and fill all the pore space, while a value close to 0 indicates the opposite. Diameter of structures One way to quantify the size of the structures is to use a cylindrical representation. For this purpose, each structure is associated with a lifetime L and a diameter D. This diameter D is based on the average area of the structure in each time step along the lifetime. It is thus possible to approximately represent each structure as a cylinder whose height is the lifetime and whose diameter characterizes the average spatial extent over time. An example of cylindrical representation can be found in Fig. 14. In the same way as for the normalized angular velocity, statistical data on the structure diameters can be extracted from the studied simulations. The probability density function and the cumulative distribution function of the diameter of structures can be seen in Fig. 12, right column. In path B → C → D, case C corresponds to a minimum size of agglomerates. In the same way as for the angular velocities, case C corresponds to an intermediate step in the E → C → F path. An interesting relationship can be observed between the cohesion and the characteristic diameter of the structures (i.e. the mode of the PDF) (cf. Fig. 15).
Fig. 13 A: ω̃ as a function of normalized rigidity Ẽ; B: ω̃ as a function of solid fraction S_F. The size of the dots is relative to the proportion of grains in agglomerates in the simulation (in %)
Fig. 14 Cylindrical representation of detected structures for case F (cf. Fig. 9). The structures rotating in the positive direction are in red, and in blue for those rotating in the negative direction. The intensity of the color depends on the angular velocity
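The statistics discussed above (PDFs in terms of equivalent grains, their modes, and the binned Pearson correlations) can be reproduced along the following lines; the bin count and the weighting convention are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def equivalent_grain_pdf(values, grain_counts, bins=50):
    """PDF of a structure property (e.g. omega_tilde or D) weighted by the
    number of grains in each structure, so that it reads as a probability
    density per grain; returns bin centres, the PDF, and its mode."""
    pdf, edges = np.histogram(values, bins=bins, weights=grain_counts, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, pdf, centres[np.argmax(pdf)]

def binned_pearson(x, y, group):
    """Pearson coefficient of y vs x computed separately for each group
    (here: simulations sharing the same normalized viscosity)."""
    return {g: stats.pearsonr(x[group == g], y[group == g])[0]
            for g in np.unique(group)}
```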
The Spearman's correlation coefficient (relevant for non-linear correlations) is quite high, over 0.67. This indicates that the larger the cohesion, the larger the diameter of the structures, which agrees with the observations that can be made in Fig. 9. The relation between these quantities is non-linear, with a sharp increase of the characteristic diameter of the agglomerates when the normalized cohesion goes beyond 0.5. Coefficient of friction A correlation can be found between the normalized angular velocity of the structures ω̃ and the coefficient of friction, and can be visualized in Fig. 16. The Pearson correlation coefficient between these two quantities is -0.70. When a large friction coefficient is observed, the average angular velocity of the field is very low. In contrast, the friction coefficient is found to be very low when the normalized angular velocity gets closer to 1, i.e. when the agglomerates rotate following the natural shear rate of the flow. As described in [START_REF] Mollon | Solid flow regimes within dry sliding contacts[END_REF], based on energetic arguments, the coefficient of friction can be expressed as a sum of two partial friction coefficients: a "surface-related" coefficient of friction μ_S, and a "bulk-related" coefficient of friction μ_B. The former describes the part of the energy dissipated during the creation and destruction of surfaces in the granular sample during shear, and the latter the energy dissipated by inelastic deformations within the deformable particles. These components are particularly complex and difficult to correlate with other parameters. However, two significant correlations can be observed. In Fig. 17A, a link can be made between the normalized angular velocity of structures ω̃ (cf. Sect. 3.2) and the bulk-related coefficient of friction, with a Pearson coefficient of -0.58. As the structures rotate more, there is less energy loss through dissipation in the bodies. In Fig. 17B, a clear link can be made between the size of the structures (cf. Sect. 3.4) and the surface-related coefficient of friction (Spearman coefficient of -0.83). This relationship is monotonic but non-linear. The higher the coefficient of friction associated with the interparticle energy dissipations, the smaller the structures are. In contrast, the appearance of large agglomerates (i.e. with D > 4) correlates well with a large drop of the surface-related energy dissipation in the system.
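To make the cylindrical representation and the quoted correlation coefficients concrete, a possible post-processing sketch is given below; taking the equivalent diameter as that of a disc of the same average area is an assumption about the exact definition used in the study.

```python
import numpy as np
from scipy import stats

def structure_cylinders(labels, n_structures, dt=1.0, cell_area=1.0):
    """Reduce each labeled space-time structure to a lifetime L and an
    equivalent diameter D (cylinder representation): D is obtained from the
    average area occupied by the structure per time step of its life."""
    lifetimes, diameters = [], []
    for k in range(1, n_structures + 1):
        t_idx, _, _ = np.nonzero(labels == k)
        frames = np.unique(t_idx)
        mean_area = cell_area * np.count_nonzero(labels == k) / len(frames)
        lifetimes.append((frames.max() - frames.min() + 1) * dt)
        diameters.append(2.0 * np.sqrt(mean_area / np.pi))
    return np.array(lifetimes), np.array(diameters)

# Correlation measures of the kind quoted above (illustrative calls):
# r_pearson, _ = stats.pearsonr(omega_tilde_modes, friction_coefficients)
# r_spearman, _ = stats.spearmanr(cohesion_values, diameter_modes)
```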
Discussion and conclusion Based on the quantitative results presented in the previous section, we can draw a general picture of the causes and consequences of the agglomeration phenomena in soft cohesive granular flows. It appears that the existence of persistent rigid-like agglomerates in this class of flows is promoted by the interparticle cohesion and, to a lesser extent, by the viscous response to shear deformation (cf. Fig. 11). Agglomeration is indeed negligible when the quantities c and α̃ are low. The size of the agglomerates is mostly related to cohesion (cf. Fig. 15), which is simply explained by the fact that more cohesive particles tend to pull more surrounding particles into the same motion. The normalized cohesion value c = 0.5 seems to be a threshold above which very large structures can develop. Viscosity also plays a role in the creation of large structures (cf. Fig. 11). The stiffness and the viscosity parameters play an important role in the angular velocity of the structures (cf. Fig. 13A). The softer and less viscous the grains are, the more they tend to form structures with an angular velocity corresponding to the natural shear rate of the interface (although a few outlying cases are noticed). When the stiffness is important, the structures are more reluctant to follow the natural shear-related rotation. The solid fraction also plays a role in the angular velocity of the structures (cf. Fig. 13B). When the solid fraction is high, the grains are closer to each other, their contacting areas are larger, and their motions are more coordinated, which seems to increase the angular velocity of agglomerates. Since a high stiffness and a low viscosity (to a lesser extent) decrease the solid fraction, we can speculate that this decrease in solid fraction provides less incentive for structures to rotate. Hence, a simplistic view of this class of granular flows could be that cohesion and viscosity favor the formation and the growth of agglomerates, and that stiffness and viscosity restrict their natural rotation within the sheared flow. The proposed results hence provide a contribution to the growing topic of the rheology of cohesive granular flows [START_REF] Macaulay | Viscosity of cohesive granular flows[END_REF][START_REF] Mandal | Rheology of cohesive granular media: shear banding, hysteresis, and nonlocal effects[END_REF], with the added complexity of a cohesive strength increasing with the contact area between the grains (and hence with their deformability and the applied stress). A link can be made with the surface-related coefficient of friction (cf. Fig. 17B). At a diameter greater than 4, the value of the surface-related coefficient of friction drops and becomes very low. This could be due to a geometrical effect linked to a specific surface limit beyond which the behavior changes radically. This value is approximate and could depend on the parameters chosen in the simulations. It is at the same value that a change of behavior is observed between the cohesion and the average diameter of structures. This decrease of the surface-related dissipation can be explained by the fact that the agglomerates become larger. The specific surface area thus decreases in the sheared sample and the contact area mobilized between the agglomerates diminishes. The energy dissipation therefore no longer occurs at the interfaces between the agglomerates, but within them, via the inelastic component of their deformation. Likewise, as structures rotate faster (i.e. closer and closer to the natural angular velocity associated with rolling in the sheared flow), they tend to endure less deformation, resulting in a decrease in the bulk-related coefficient of friction (cf. Fig. 17A). Finally, the friction coefficient strongly depends on the angular velocity of the agglomerates (cf. Fig. 16). It can be assumed that the agglomerates roll in the contact and decrease the coefficient of friction, like balls in bearings.
Fig. 17 A: Bulk-related coefficient of friction as a function of mean normalized angular velocity; B: Surface-related coefficient of friction as a function of mean diameter
This link between high cohesion, the generation of rolling agglomerates, and a limitation of the resulting friction coefficient at the scale of the interface is consistent with the hypothesis reported in a previous study [START_REF] Zhang | Significance of third body rheology in friction at a dry sliding interface observed by a multibody meshfree model: influence of cohesion between particles[END_REF], but is now supported by more quantitative data. The accommodation of the relative velocity of the two walls is more likely to occur via rolling at low stiffness and high cohesion, while it is more likely to occur via particle deformation when the cohesion is low. The methodological framework proposed in this paper for the detection and quantification of agglomerated structures is still imperfect. A blind spot exists for structures with zero angular velocities, although it seems minimal. The presence of large pore spaces is also ignored by the interpolation algorithm, and this could be corrected in future work. The main limitation, however, seems to be the rather short duration of the simulations, which is imposed by the large computational cost of the multibody meshfree approach but limits the statistical accuracy of the detection and characterization of the agglomerates. Much longer simulations could be made possible in future studies by the use of the Soft Discrete Element Method [START_REF] Mollon | The soft discrete element method[END_REF]. The thickness of the sheared layer, which provides an upper bound for the agglomerate diameters, might also play an important role and should be investigated in future studies. From a more applicative perspective, the tool developed here will be applied to fretting simulations in a tribological context. By applying this approach to realistic mechanical contacts, we expect to establish a link between the third body rheological properties, the formation and characteristics of agglomerates, the stress fluctuation patterns undergone by the surfaces bounding the third body flow, and finally their damage and wear.
Fig. 1 SEM views of different areas of mechanical contact after friction tests, with various third body accommodation regimes
Fig. 2 Sketch of the numerical model with two rigid first bodies and a third body represented by deformable grains. Colors are arbitrary and only serve to distinguish grains on the top and the left figure. Velocity field magnitude is plotted in the lower-right corner
Fig. 3 Coefficient of friction as a function of Ẽ and c, for α̃ = -1.8 and α̃ = -3.2
Fig. 4 Microstructure of third body as a function of Ẽ and c (from [43])
Fig. 8 A: Coherence field extracted from the velocity field; B: Filtered coherence field at the same time step at |C| > 0.8; C: Stacked filtered coherence over time
Fig. 9 A-F: Stacked filtered coherence over time for six characteristic simulations. The structures in red rotate in the positive direction, and vice versa for the blue ones
Fig. 10 Stacked filtered coherence over time for case G. The structures in red rotate in the positive direction, and vice versa for the blue ones
Fig. 12 Probability density function of structures for various cases of normalized angular velocity of structures ω̃ and diameter of structures D
Fig. 15 Average diameter of the structures as a function of the cohesion parameter
Safran Aircraft Engines, Moissy-Cramayel, France
Acknowledgements The authors gratefully acknowledge Safran Aircraft Engines and the French National Research and Technology Agency (ANRT) for financially supporting this research project. The authors acknowledge that this study contains original material. Its publication has been approved tacitly by the responsible authorities at the institutes where the work has been carried out.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
03660643
en
[ "spi" ]
2024/03/04 16:41:18
2020
https://hal.science/hal-03660643/file/GM2020.pdf
Guilhem Mollon email: [email protected] Adriana Quacquarelli Edward Andò Gioacchino Cinno Adriana Quacquarelli1 • Edward Andò Gioacchino Viggiani Can friction replace roughness in the numerical simulation of granular materials? Keywords: DEM, Friction, Roughness, Shearing, Localization niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Granular simulations based on various implementations of the Discrete Element Method (DEM) [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF] are nowadays commonly used in the various fields of science and technology interested in divided solid matter. This includes civil engineering [START_REF] Donze | Advances in discrete element method applied to soil, rock and concrete mechanics[END_REF], geomechanics [START_REF] Darve | DEM modelling in geomechanics: some recent breakthroughs[END_REF], environmental sciences [START_REF] Richards | Discreteelement modelling: methods and applications in the environmental sciences[END_REF], geophysics [START_REF] Mollon | Discrete modelling of rock avalanches: sensitivity to block and slope geometries[END_REF], granular physics [START_REF] Azema | Quasistatic rheology, force transmission and fabric properties of a packing of irregular polyhedral particles[END_REF], pharmaceutical processes [START_REF] Kodam | Discrete element mothod modeling of bi-convex pharmaceutical tablets: contact detection algorithms and validation[END_REF] and tribology [START_REF] Fillot | A granular dynamic model for the degradation of material[END_REF][START_REF] Mollon | A numerical framework for discrete modelling of friction and wear using Voronoi polyhedrons[END_REF], among others. While early developments of DEM were only considering discs or spheres (depending on the 2D or 3D context of the simulation), there is now a growing interest in the introduction of more complex, and maybe more realistic shapes. A part of this effort is dedicated to a better characterization of the shapes of various populations of grains, and to the extraction of relevant shape features using various experimental and analytical procedures [START_REF] Matsushima | 3D shape characterization and image-based DEM simulation of the Lunar soil simulant FJS-1[END_REF][START_REF] Blott | Particle shape: a review and new methods of characterization and classification[END_REF][START_REF] Wei | Generation of realistic sand particles with fractal nature using an improved spherical harmonic analysis[END_REF]. Another part, closely related to the first one, is dedicated to the generation of granular shapes and samples in order to reproduce morphological features of target grains populations [START_REF] Mollon | Fourier-Voronoi-based generation of realistic samples for discrete modelling of granular materials[END_REF][START_REF] Mollon | 3D generation of realistic granular samples based on random fields theory and Fourier shape descriptors[END_REF][START_REF] Ouhbi | 3D particle shape modelling and optimazation through proper orthogonal decomposition[END_REF]. 
Finally, the last part of this process is dedicated to the development of numerical techniques making it possible to introduce and simulate such generated shapes in DEM codes [START_REF] Azema | Quasistatic rheology, force transmission and fabric properties of a packing of irregular polyhedral particles[END_REF][START_REF] Lin | A three-dimensional discrete element model using arrays of ellipsoids[END_REF][START_REF] Richefeu | Dissipative contacts and realistic block shapes for modelling rock avalanches[END_REF][START_REF] Houlsby | Potential particles: a method for modelling non-circular particles in DEM[END_REF][START_REF] Kawamoto | All you need is shape: predicting shear banding in sand with LS-DEM[END_REF]. Meanwhile, an important question remains unanswered, which is related to the relevance of the geometrical features that are successively extracted from the real grains, used to generate synthetic grains, and simulated in a DEM framework. By relevance, we mean here relevance with respect to a certain simulation purpose. Indeed, each simulation is performed for a specific reason and with specific expectations regarding the output quantities to be extracted from it, be it in an academic or a practical framework. This question arises because, to date, a very large number of morphological descriptors were proposed based on real grains shapes analyses, but it is still very difficult for the practitioner to determine what should be the key morphological parameters to introduce in his/her simulation in order to reproduce some desired property of a given target granular sample (either mechanical, or thermal, electrical, etc.). Besides, each class of morphological properties comes with an associated numerical cost. For example, it is much less costly to simulate ellipsoids [START_REF] Lin | A three-dimensional discrete element model using arrays of ellipsoids[END_REF] than spheropolyhedra [START_REF] Richefeu | Dissipative contacts and realistic block shapes for modelling rock avalanches[END_REF], while both approaches are able to introduce an elongation of the grains. Hence, if the only relevant physical content needed for the success of a given simulation lies in the elongation, ellipsoids should be preferred. But how to be sure that this single property is sufficient in a given framework? In the present paper, we only address a small portion of this important question. More specifically, we focus our attention on the notions of roughness and of friction, with respect to strain localization in sheared granular materials [START_REF] Desrues | Strain localization in sand: an overview of the experimental results obtained in Grenoble using stereophotogrammetry[END_REF]. We wish to evaluate to what extent the roughness of the surface of the grains (which requires a fine discretization of their frontier and thus implies a large computational cost) may be replaced by an artificial increase of their friction coefficient (which comes at no additional cost). Two concepts need to be clearly defined in the context of this study: friction and roughness. Friction, on one hand, is a very complicated topic, and tribology is not a unified science yet. 
Establishing a direct link between roughness and friction has been tried many times (see for example [START_REF] Greenwood | Contact of nominally flat surfaces[END_REF][START_REF] Pöschel | Static friction phenomena in granular materials: coulomb law versus particle geometry[END_REF][START_REF] Pöschel | A simple geometrical model for solid friction[END_REF][START_REF] Ivkovich | The influence of the contact surface roughness on the static friction coefficient[END_REF]) and proved useful to enhance our understanding of contact mechanics. But the current state of the art is that there is no straightforward causality between these two concepts. Friction is now regarded, not as an intrinsic property of the bodies into contact, but as an emerging phenomenon [START_REF] Godet | The third-body approach: a mechanical view of wear[END_REF][START_REF] Colas | Describing third body flows to solve dry lubrication issue-MoS 2 case study under ultrahigh vacuum[END_REF] related to several physical properties and phenomena (surface adhesion, heat production, wear, acoustic wave emissions, local phase change, microstructural evolutions, interactions with the gaseous environment, etc.) occurring at several scales (the scale of the contacting bodies, the scale of the wear debris filling the contact interface, the scale of surface topography, the scale of surface chemistry, the scale of the mechanical system driving the motion of the contacting bodies, etc.). Roughness only plays a small part in this plot. With this complexity in mind, it is important to distinguish two concepts: (i) the complex and emerging physical phenomenon that we call friction, and (ii) the phenomenological quantity that is introduced in a granular simulation in order to prescribe satisfactory values to the force resisting to sliding motion. We clearly place our work in the context of this second definition of the friction. Since the physical complexity of a mechanical contact is out of reach in a DEM simulation, we mostly consider our coefficient of friction as a contact parameter leading to satisfactory granular behaviors. As such it is not different at all from those commonly used in DEM, where this shortcut is always tacitly used. Roughness, on the other hand, seems to be rather illdefined in the granular mechanics literature. There are several occurrences [START_REF] Jensen | DEM simulation of granular media-structure interface: effects of surface roughness and particle shape[END_REF][START_REF] Jensen | Effect of particle shape on interface behavior of DEM-simulated granular materials[END_REF][START_REF] Kozicki | Numerical simulations of sand behavior using DEM with two different descriptions of grain roughness[END_REF][START_REF] Kozicki | Effect of grain roughness on strength, volume changes, elastic and dissipated energies during quasi-static homogeneoue triaxial compression using DEM[END_REF][START_REF] Hare | The influence of aspect ratio and roughness on flowability[END_REF][START_REF] Mede | A medial axis based method for irregular grain shape representation in DEM simulations[END_REF] into which this word is used to describe the whole shape of the grain, at every scales, especially in DEM simulations using clumps, i.e. rigid clusters of discs. 
In some other papers, roughness is restricted to the topography of grains at very small scales, typically below the micrometer [START_REF] Santamarina | Effect of surface roughness on wave propagation parameters[END_REF][START_REF] Alshibli | Characterizing surface roughness and shape of sands using digital microscopy[END_REF][START_REF] Otsubo | Experimental and DEM assessment of the stress-dependency of surface roughness effects on shearmodulus[END_REF][START_REF] Nadimi | Effect of particle roughness on the bulk deformation using coupled boundary element and discrete element methods[END_REF]. Such a usage is common in scientific communities close to tribology or contact mechanics, where the very notion of "shape" is irrelevant at the scale of the contact. In this case, contacting solids are seen as planes with a superimposed irregularity, and roughness is used to describe this irregularity. Between these two extremes, it is often needed to consider in a single study the irregularities of the contacting objects at several scales: the scale of the object and the scale of its surface [START_REF] Gallier | Rheology of suspensions of rough frictional particles[END_REF][START_REF] Zheng | Traditional soil particle sphericity, roundness and surface roughness by computational geometry[END_REF][START_REF] Zheng | A corner preserving algorithm for realistic DEM soil particle generation[END_REF][START_REF] Wilson | The influence of surface roughness and adhesion on particle rolling[END_REF][START_REF] Hsu | Roughness-dependent tribology effects on discontinuous shear thicknening[END_REF]. A common approach for sand grains [START_REF] Blott | Particle shape: a review and new methods of characterization and classification[END_REF][START_REF] Jensen | Effect of particle shape on interface behavior of DEM-simulated granular materials[END_REF][START_REF] Zheng | Traditional soil particle sphericity, roundness and surface roughness by computational geometry[END_REF][START_REF] Zheng | A corner preserving algorithm for realistic DEM soil particle generation[END_REF][START_REF] Alshibli | Characterizing surface roughness and shape of sands using digital microscopy[END_REF][START_REF] Ehrlich | An exact method for characterization of grain shape[END_REF][START_REF] Bowman | Particle shape characterization using Fourier descriptor analysis[END_REF][START_REF] Andrade | Granular element method for computational particle mechanics[END_REF][START_REF] Chen | Numerical study of particle morphology effect on the angle of repose for coarse assemblies using DEM[END_REF] is to consider three independent scales of shape description: circularity/sphericity (main deviation of the object shape from a perfect disc/sphere), angularity/roundness (related to the sharpness of the corners and the variability of local curvatures around the solid frontiers), and roughness (related to irregularities in the surface). There is no general consensus, however, on the exact scale limits that separate these three descriptors, especially because the second one is ill-defined (what should be called a "corner" was never clearly stated on a geometrical basis). From a more mathematical point of view, self-affinity of surface roughness has been demonstrated in many studies [43, 46-49, 53, 54], for any kind of natural surface of polycrystalline materials (it often breaks down for glassy materials or for artificial machined surfaces, for different reasons). 
Hence, a rigorous way to define the scale where irregularities should be called "roughness" instead of "shape" could be: the scale beyond which the spectrum of the amplitudes of these irregularities becomes self-affine (i.e. follows a decreasing power-law). This is the definition that was used in several studies [START_REF] Mollon | Fourier-Voronoi-based generation of realistic samples for discrete modelling of granular materials[END_REF][START_REF] Mollon | 3D generation of realistic granular samples based on random fields theory and Fourier shape descriptors[END_REF][START_REF] Ehrlich | An exact method for characterization of grain shape[END_REF][START_REF] Bowman | Particle shape characterization using Fourier descriptor analysis[END_REF][START_REF] Meloy | Fast Fourier transforms applied to shape analysis of particle silhouettes to obtain morphological data[END_REF][START_REF] Das | Modeling three-dimensional shape of sand grains using discrete element method[END_REF][START_REF] Mollon | Generating realistic 3D sand particles using Fourier descriptors[END_REF], and that we use here. To investigate whether roughness can be replaced by friction in simulations of granular materials, we perform 2D granular simulations reproducing some published experimental results [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF], using an appropriate numerical framework into which it is possible to introduce complex shapes and realistic boundary conditions and to observe the appearance of strain localization. Two numerical campaigns are performed, the first one paying attention to the influence of surface roughness on the localization phenomenon, and the second one focusing on the consequences of an artificial replacement of the surface roughness by an increase in the friction coefficient. Methodology Description of the reference experiments The simulations presented in this paper are inspired by an experimental campaign performed some years ago at Laboratoire 3SR in Grenoble, and described in details in [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF][START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray μCT and volumetric digital image correlation[END_REF][START_REF] Andò | Grainscale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF] and in Fig. 1. In these experiments, sand samples underwent triaxial compression tests within a lab-size µCT scanner in order to follow at the grain scale the deformation and failure process in situ. The granular sample was a cylinder of diameter 11 mm and height 22 mm, containing many tens of thousands of sand grains. An appropriate amount of sand was first carefully deposited by dry pluviation inside a flexible membrane stretched within a mold and between two porous stones, and suction was then applied inside the membrane in order to create a confining pressure and to give the ability to the sample to hold steady. The mold was then removed, the geometric quality of the sample was checked, and the sample was positioned in the triaxial cell (which had to attenuate few x-rays) inside the scanner and submitted to a confining pressure applied on the membrane by pressurized water (vacuum in the sample was meanwhile released). 
The loading was applied from below by the hemispherical extremity of a ram, in a strain-controlled manner with a prescribed loading rate of 21 µm/min (Fig. 1). This loading was regularly stopped in order to perform a full 3D scan of the sample. From these experimental developments and the associated postprocessing of the 3D digital images, a large number of statements were made on the micromechanics of strain localization at the grain scale, and the reader is redirected to the large body of published work for more details on the technical challenges that had to be addressed and on the mechanical interpretations that could be derived [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF][START_REF] Hall | Discrete and continuum analysis of localised deformation in sand using X-ray μCT and volumetric digital image correlation[END_REF][START_REF] Andò | Grainscale experimental investigation of localised deformation in sand: a discrete particle tracking approach[END_REF]. Samples characterization and generation During the experimental campaign, several types of sands were tested and analyzed, but in the present paper we will only focus on one specific case, namely Ottawa sand. This is a sand of intermediate angularity, coming from sedimentary deposits, made up of quartz grains, with a D 50 of the order of 250 µm and a narrow grain size distribution (sieved between 210 and 300 µm). An SEM view of typical Ottawa sand grains is provided in Fig. 2a. In order to extract geometrical features, six Ottawa sand grains were scanned in a higher resolution machine at 1 µm/px. The voxels corresponding to the external surface of each grain were identified (their number was around 200,000 with the chosen scanning resolution), and these surfaces were then interpolated between these voxels on a set of 10,242 radial directions forming a geodesic structure, as described in more details in [START_REF] Mollon | Generating realistic 3D sand particles using Fourier descriptors[END_REF]. The resulting discretised surfaces of the six grains are shown in Fig. 2b, and are now defined as a discrete set of 10,242 radii R i i , i . Since the angular direction of each of these radii is well-known, it is possible to compute the angle they form with respect to each other, and thus to compute the angular autocorrelation function of the discrete random field R i . As explained in [START_REF] Mollon | 3D generation of realistic granular samples based on random fields theory and Fourier shape descriptors[END_REF], the Fourier spectrum of this field can then be computed in a univocal way, for each grain. The six spectra are plotted in Fig. 2 (their first modes in a Cartesian frame, and the distribution tails in a log-log frame), along with the mean spectrum obtained by averaging the six cases. There seems to be a good repeatability between the six spectra, indicating that their average bears some statistical significance despite the very limited size of the sample. 
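The spectrum computation just described can be made concrete with a short sketch. This is not the authors' code: the array of radii, the uniform angular sampling in 2D (the real grains are interpolated on a 3D geodesic structure), and the normalisation by the mean radius are all assumptions made purely for illustration.

```python
import numpy as np

def angular_spectrum(radii):
    """Amplitude spectrum of a closed 2D contour sampled as radii at equally
    spaced polar angles (illustrative 2D analogue of the 3D procedure)."""
    r = np.asarray(radii, dtype=float)
    r_mean = r.mean()
    # Normalise by the mean radius so that mode amplitudes are dimensionless
    fluct = (r - r_mean) / r_mean
    coeffs = np.fft.rfft(fluct) / len(r)
    amplitude = 2.0 * np.abs(coeffs)   # one-sided amplitudes; mode 0 is the mean
    return r_mean, amplitude

# Hypothetical grain: mean radius 125 um with a power-law (self-affine-like) perturbation
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
radii = 125e-6 * (1.0 + sum(0.05 * k**-1.5 * np.cos(k * theta + rng.uniform(0, 2 * np.pi))
                            for k in range(2, 65)))
r_mean, amp = angular_spectrum(radii)
print(amp[2:10])   # low-order shape modes; the tail should decay as a power law
```

In practice one such spectrum would be computed per grain and then averaged, as done for the six scanned grains in the text.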
The log-log chart indicates a power-law decrease of the spectra amplitude with their mode (up to mode 48, beyond which the noise created by the interpolation on the geodesic structure disturbs the high-frequency measurements), which is very consistent with the classical self-affine decay of roughness observed on sand grains [START_REF] Das | Modeling three-dimensional shape of sand grains using discrete element method[END_REF] and more generally on any natural surface at any scale [START_REF] Candela | Roughness of fault surfaces over nine decades of length scales[END_REF]. To improve the quality of the samples to be generated with this spectrum, the modes from 8 to 64 were replaced by a smooth power-law decay fitted on the modes 8 to 48 of the measured average spectrum, as proposed in [START_REF] Mollon | Fourier-Voronoi-based generation of realistic samples for discrete modelling of granular materials[END_REF]. Modes beyond 64 (i.e., corresponding to an angular period lower than 5.6°, and to a spatial distance smaller than about 12 µm for the average grain size considered here) were ignored, although it may theoretically be possible to extrapolate the power-law roughness decay to much smaller scales. 2D samples can then be generated based on this spectrum, using the Packing2D code described in [START_REF] Mollon | Fourier-Voronoi-based generation of realistic samples for discrete modelling of granular materials[END_REF], see Sect. 2.4. Numerical framework The simulations described in the next sections of this paper were performed with the code MELODY (described in [START_REF] Mollon | A multibody meshfree strategy for the simulation of highly deformable granular materials[END_REF][START_REF] Mollon | A unified numerical framework for rigid and compliant granular materials[END_REF] and available for download at http://guilh em.mollo n.free.fr). This code is based on a multibody meshfree approach, which is an extension of the classical DEM to the case of highly deformable grains. This framework is able to deal in a unified manner with a large number of rigid and deformable bodies, with arbitrary geometries. Both kinds of bodies are represented by polygons (i.e., by a set of boundary nodes linked by segments), and any kind of contact interaction between such two bodies (or self-contact of a deformable body) is handled by a robust penalty-based two-ways node-to-segment contact algorithm. The bulk of deformable bodies is discretized by a certain number of field nodes which carry the degrees-of-freedom in displacement, and the displacement field is interpolated between these nodes using Moving-Least-Square meshfree shape functions. The weak form so-obtained is then integrated in space (based on a Gaussian quadrature on a set of triangles discretizing the surface of each body), and in time (using an explicit adaptive time-stepping scheme). Newtonian dynamics of rigid bodies are also solved in time using the same solver, in a manner similar to classical DEM. The code is written in C++ and parallelized in an OpenMP framework, with a very good performance scaling up to 32 CPUs. A time-step close to 10 -7 s is reached by the adaptive solver for all the simulations presented hereafter. Numerical simulations, preprocessing and load phasing Since MELODY is currently restricted to plane-strain kinematics, we present here 2D simulations which aim to reproduce some features of the real 3D triaxial tests described in Sect. 2.1. 
Obviously, a large part of the physical reality is lost when dropping one dimension. A 2D sample is much more constrained from a kinematic point of view. However, 2D DEM is still used a lot in current scientific literature because some major aspects of the granular mechanical response (peak-plateau behavior, dilatancy, interlocking, etc.) appear to be well-reproduced in 2D. However, apart from implementation complexity and computational cost, nothing seems to prevent future extension of this work to 3D. The samples are composed of about 9600 grains, with a width of 22 mm and a height of about 45 mm after compaction. These samples are thus twice as large as the experimental one, but contain many fewer grains because they are in 2D. In the reference case, the full spectrum of Ottawa sand, measured and computed in Sect. 2.2, is used up to mode 64 in order to generate 2D grains with a contour discretized by 128 nodes with equal angular spacing. Although this is only an approximation, we will consider in the remainder of this paper that this case contains the full roughness of the grains (Fig. 3). However, since we are primarily interested in the effect of the roughness, we also generate several other granular samples with different properties. First, we introduce a smoothing of the surfaces of the grains, by cutting off the high angular frequencies of the Fourier spectrum used for their generation. This is done by considering only the N m first modes of the spectrum, with N m being equal respec- tively to 64 (reference case), 32, 16, and 8. Examples of such grains are provided in the first row of Fig. 3, where the smoothing effect of this operation is evident. This change in the spectrum, however, does not decrease the computational cost of the simulation because the number of nodes N n on the contour of each grain remains equal to 128. Hence, the number of proximity and contact detections and the number of contact resolutions at each time step do not change much. In order to investigate the effect of this parameter, several samples are generated with smaller numbers N n of nodes on the contour (namely 128 in the reference case, 64, 32, and 16). The number of nodes is kept larger than half the number of modes in order to avoid artificial disappearance of some amount of roughness by removing nodes while it is still included in the considered spectrum. The list of the 10 generated samples is provided in Fig. 3. As clearly appears in this figure, decreasing N m reduces the roughness of the surface, but decreasing N n seems to introduce a somewhat artificial angularity because of the coarseness of the description of the grains contours. A simulation is divided into a number of successive stages (Fig. 4) which allow a controlled and reproducible loading of the granular sample. At stage 0, the granular sample is generated using a Fourier-Voronoi technique, in such a way that the grains do not touch each other [START_REF] Mollon | Fourier-Voronoi-based generation of realistic samples for discrete modelling of granular materials[END_REF]. This sample is bounded by two porous stones (rigid rectangular bodies located above and below the sample) and the two walls of the mold (rigid rectangular bodies located on the left-and right-hand side of the sample). These four bodies are fixed in all directions, except for the lower porous stone which is allowed to move vertically, and which is in contact with the semi-circular extremity of the ram on its lower side. 
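To make the role of the parameters N_m and N_n more concrete, here is a minimal sketch of how a single 2D grain contour can be drawn from an amplitude spectrum with random phases. It is only a schematic stand-in for the Fourier-Voronoi generation performed by Packing2D: the spectrum values, the power-law exponent of the tail and the phase convention are illustrative assumptions, not the calibrated Ottawa values.

```python
import numpy as np

def grain_contour(mean_radius, spectrum, n_modes, n_nodes, rng):
    """Closed 2D contour from Fourier descriptors.
    spectrum[k] is the relative amplitude of angular mode k (k >= 2); only the
    first n_modes modes are kept, and the contour is discretised with n_nodes
    equally spaced boundary nodes."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_nodes, endpoint=False)
    r = np.ones_like(theta)
    for k in range(2, n_modes + 1):
        amp = spectrum.get(k, 0.0)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        r += amp * np.cos(k * theta + phase)
    r *= mean_radius
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

rng = np.random.default_rng(1)
# Illustrative spectrum: a few low modes for the overall shape,
# then a power-law tail standing in for the self-affine roughness
spectrum = {2: 0.08, 3: 0.05, 4: 0.03, 5: 0.02, 6: 0.015, 7: 0.012}
spectrum.update({k: 0.012 * (k / 8.0) ** -2.0 for k in range(8, 65)})

reference_grain = grain_contour(125e-6, spectrum, n_modes=64, n_nodes=128, rng=rng)
smooth_grain    = grain_contour(125e-6, spectrum, n_modes=8,  n_nodes=16,  rng=rng)
```

Truncating n_modes removes the surface roughness while keeping the overall shape, and reducing n_nodes coarsens the polygonal description, exactly the two effects explored in the campaign described below.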
During stage 1, an upward vertical force is applied to the ram, corresponding to a pressure of 100 kPa when distributed on the width of the lower porous stone. During this uniaxial compression stage (which mimics the pluviation process but with a better control on the obtained solid fraction), the coefficient of friction between the grains (and also between the rigid walls and the grains) is set to a certain value c , which controls the compacted solid fraction (the larger this friction coefficient, the looser the compacted sample). For all the simulations presented in this paper, this friction value is calibrated by trial and error in order to obtain a constant solid fraction in the final sample (i.e., before stage 4). Stage 1 lasts until the sample is compacted and stabilized. At the beginning of stage 2, the flexible membranes are activated in the simulation. These membranes are slender rectangles (0.3 mm thick), initially located slightly remote from the granular sample. They are deformable, and follow a hyperelastic Neo-Hookean constitutive law with a Young's modulus of 1 MPa and a Poisson coefficient of 0.49 (in agreement with the properties of the latex used experimentally [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF]). In order for them to transmit the load applied by a virtual fluid located in the pressurized chamber around them, they are submitted on their outer edge to a pressure of 100 kPa. This pressure is applied in a fluidlike manner, meaning that, at each border node of the outer edge of a given membrane, the outwards normal vector is computed at each time step, and the fluid pressure is applied inwards in that specific direction. Since the membranes are very flexible, they quickly come into contact with the grains Fig. 3 Illustrative 2D Ottawa sand grain submitted to smoothing by a reduction of the number of Fourier modes and of the number of contour nodes (no contact detection is activated between the mold and the membrane), and transmit the external pressure to the sample in a rather realistic way (as was done in [START_REF] Kawamoto | All you need is shape: predicting shear banding in sand with LS-DEM[END_REF][START_REF] De Bono | DEM of triaxial tests on crushable sand[END_REF][START_REF] Yang | Particle-scale mechanisms in undrained triaxial compression of biocemented sands: insights from 3D DEM simulations with flexible boundary[END_REF]). The coefficient of friction between the membranes and the grains is set at c during this stage. Stage 3 corresponds to the removal of the mold. The lower porous stone is completely freed from its kinematic constraints, the coefficient of friction in the whole sample is set to a new value d (which will be the one used for the deviatoric loading). The two lateral walls representing the mold are removed by giving them an outwards horizontal velocity. This means that the load from the grains is progressively taken up by the flexible membranes. At the end of this stage the sample is in an equilibrium state, the only fixed bodies being the upper porous stone (in all directions) and the loading ram (in all directions but the vertical one, where it is still submitted to an upwards force). Stage 4 corresponds to the deviatoric loading. In this stage, the loading ram becomes displacement-controlled, and moves upwards with a velocity of 0.01 m/s. 
This velocity is much higher than the experimental one, but this is necessary in order to be able to run the simulation in a reasonable amount of time. Careful calibration was performed in order to ensure that this loading rate did not introduce any inertial effects in the simulations (by checking that the norms of the total vertical contact forces transmitted by the lower and upper boundaries are very close along time, indicating quasi-static loading). The coefficient of friction between the loading ram and the lower porous stone is kept low (0.05) during the whole stage 4 in order to allow slip at this interface, since this feature was observed experimentally. Stage 4 is performed until a strain level of about 15% is reached, similar to the actual experiment. During this stage, all the forces applied to the different parts of the apparatus, and all the kinematics of the grains are recorded. We observe that, during simulations, the membrane is often strongly folded (Fig. 4), requiring a self-contact detection in order to avoid breakdown of the simulation. Postprocessing techniques In order to analyze the mechanical behavior of the sample under deviatoric loading, a specific postprocessing technique was implemented. It consists in mapping a given state of the sample (i.e., the geometric positions of the 9600 grains) on a pixel-based "image", with a size of 3500 * 4500 pixels. A typical grain has a dimension of about 30 pixels in such image. Geometric detection techniques are used to attribute Fig. 4 Description of the different stages of a simulation, from sample preparation to deviatoric loading the value "1" to each pixel with a center located inside a sand grain, and "0" otherwise. This forms the image A (Fig. 5). Then, an image B is created by "closing" image A (i.e. by first dilating the image A by 20 pixels and then eroding the result by 20 pixels). This process (dilation-erosion) is very common in image processing in order to get rid of fortuitous holes into homogeneous regions [START_REF] Russ | The Image Processing Handbook[END_REF]. The resulting image B provides the spatial extension of the whole sample without any remaining porosity inside it. An image C is created by convoluting image A with a Gaussian filter with a radius equal to 5R ( R being the aver- age sand grain radius). The consequence of this operation is to smooth the spatial of the matter in the image, with a spatial smoothing scale of the order of a representative elementary volume (REV) of about 15-25 grains. As shown in Fig. 5, the obtained image allows the observation of some local variations in the sample density, but is very inaccurate on the boundaries of the sample. More specifically, all the areas located inside the sample but within a distance of 5R from its boundary are affected by the fact that a portion of the Gaussian filter covers a region where there are no grains at all (and they are thus attributed a density smaller than it should be). To address this issue, an image D is created by applying the same Gaussian filtering to image B. The final image E is obtained after correcting image C by image D (i.e. dividing the value of each pixel of C by its value in D) and multiplying the result by image B (in order to cancel the density in the pixels located outside of the sample). The result, provided in Fig. 5, gives a very clear picture of the field of granular density in the sample. 
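The image pipeline described above (images A to E) can be summarised with a short sketch based on scipy.ndimage. The 20-pixel closing and the Gaussian radius of 5R follow the text, but the structuring element, the array sizes and the handling of the division are simplifying assumptions rather than the actual implementation used in the study.

```python
import numpy as np
from scipy import ndimage

def density_field(image_a, closing_px=20, gauss_sigma_px=75):
    """Images B-E of the postprocessing described in the text.
    image_a: binary array, 1 inside a grain, 0 elsewhere.
    gauss_sigma_px stands in for the 5R filter radius."""
    a = image_a.astype(float)
    # Image B: morphological closing (dilation then erosion) to fill the pores
    structure = ndimage.generate_binary_structure(2, 1)
    b = ndimage.binary_closing(image_a.astype(bool),
                               structure=structure, iterations=closing_px).astype(float)
    # Image C: Gaussian smoothing of the raw grain mask (local solid fraction)
    c = ndimage.gaussian_filter(a, sigma=gauss_sigma_px)
    # Image D: the same filter applied to the sample envelope
    d = ndimage.gaussian_filter(b, sigma=gauss_sigma_px)
    # Image E: boundary-corrected solid fraction, zero outside the sample
    with np.errstate(divide="ignore", invalid="ignore"):
        e = np.where(d > 0.0, c / d, 0.0) * b
    return e

# Hypothetical toy input instead of the 3500 x 4500 px rasterised sample
toy = np.zeros((400, 400))
toy[100:300, 100:300] = (np.random.default_rng(2).random((200, 200)) > 0.15)
solid_fraction = density_field(toy, closing_px=10, gauss_sigma_px=20)
```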
From image E (at the final state of a simulation), it becomes rather straightforward to assess quantitatively (and in a reproducible way) the features of the shear band. First, its location and orientation are obtained by labelling all the pixels belonging to it (i.e., all the pixels with a local density value lower than the average density minus one standard deviation) and computing the center of mass and the principal axes of inertia of the obtained area (blue area in the insert of Fig. 5); a minimal sketch of this labelling step is given further below. A geometric zone of interest is then defined (dotted-lines square in the insert of Fig. 5) in order to focus on the central area of the shear band and to ignore membrane-related boundary effects. Longitudinal slices (dotted-lines thin rectangle in the insert of Fig. 5) are then defined in order to average relevant quantities along the shear band and to extract quantitative profiles across it and through time. Results Reference simulation As a reference case, the 9600 grains are introduced in a simulation with a complete Ottawa spectrum (i.e., N_m = 64) and a maximum number of nodes on their contours (i.e., N_n = 128). The coefficient of friction during compaction is set to µ_c = 0, leading to an initial solid fraction close to 0.86, and during the deviatoric loading it is set to µ_d = 0.5. The curve of the stress ratio (ratio of the vertical stress to the horizontal stress, σ_v/σ_c) as a function of the axial shortening of the sample (equal to (h_ini − h)/h_ini, where h_ini and h are the initial and current heights of the sample) is provided in the right-hand part of Fig. 6 (left: six successive stages of the reference simulation; right: stress ratio measured during the simulation and obtained in the reference experiment in [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF], and experimental field of porosity obtained for a similar sample [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF]). The term "axial shortening" is preferred to the more classical "vertical strain" because strain is heterogeneous in the sample. This curve was smoothed in time in order to ease its reading. It shows the typical behavior of a dense granular medium with a distinct stress peak (stress ratio close to 5.40) followed by a decrease of the stress ratio, until its stabilization at a value close to 2.41. For comparison, the plot also shows an experimental curve obtained with Ottawa sand in [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF]. It appears that a contact friction coefficient µ_d = 0.5, combined with the chosen initial solid fraction and sample grain-size and morphological properties, leads to a peak stress ratio which is very similar to the experimental one. However, it appears that the axial shortening at which this peak occurs is much smaller in the numerical response (0.7%) than in the experimental one (4.5%), that the peak is much sharper in the simulation, and that the numerical sample is much stiffer in the pre-peak zone than the experimental one. Actually, the final level of axial shortening is not sufficient to reach a stress plateau in the experimental case.
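As announced in the postprocessing description above, a minimal sketch of the shear-band labelling step follows. The threshold (mean density minus one standard deviation) and the principal-axes computation follow the text; the masking of boundary pixels and the exact statistics used are assumptions.

```python
import numpy as np

def shear_band_geometry(density, sample_mask):
    """Locate the dilated band in a density image (image E) and return
    its centroid and orientation, as described in Sect. 2.5."""
    inside = sample_mask.astype(bool)
    vals = density[inside]
    threshold = vals.mean() - vals.std()
    band = inside & (density < threshold)
    ys, xs = np.nonzero(band)
    centroid = np.array([xs.mean(), ys.mean()])
    # Principal axes of inertia from the second moments of the labelled area
    dx, dy = xs - centroid[0], ys - centroid[1]
    inertia = np.array([[np.sum(dy**2), -np.sum(dx * dy)],
                        [-np.sum(dx * dy), np.sum(dx**2)]])
    eigvals, eigvecs = np.linalg.eigh(inertia)
    major_axis = eigvecs[:, np.argmin(eigvals)]   # long direction of the band
    angle_deg = np.degrees(np.arctan2(major_axis[1], major_axis[0]))
    return centroid, angle_deg, band
```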
The explanation for this discrepancy could be related to the 2D character of the simulation, but another possible explanation lies in the number of grains involved in the sample (i.e., 9600 in the simulation and about 50,000 in the experiment). Indeed, it was already pointed out in [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF] that there is a very important effect of the sample size on the axial shortening values at which the peak and the plateau do occur. A comparison with experiments from [START_REF] Combe | Comportement du sable d'Hostun S28 au triaxial axisymetrique[END_REF], conducted with the same sand and under similar conditions but with a much large sample (height of 100.2 mm instead of 11 mm) showed that the magnitude of the peak was not modified by the sample size, but that it was much more delayed for the large sample. Likewise, the pre-peak stress response was much softer in the case of the larger sample. The result obtained with the numerical model could thus be a direct consequence of these experimental observations. Figure 6 also shows the fields of solid fraction in the sample at six different stages of the simulation. In the prepeak zone, it clearly appears that the sample is largely undisturbed, but a broad zone of increased porosity appears in the middle of the sample when the peak is reached. During softening, a cross-like localization pattern appears, with two shear bands. When the stress plateau is reached, it appears that the lower porous stone has rotated and that eventually only one of them is active (the left-lateral one). This behavior, which is consistent with the experimental observations from [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF], is made possible by the lateral (leftward) motion of the lower porous stone. Indeed, a simulation where porous stone rotation was prevented (thus restraining this lateral motion) was performed and showed that, in that case, the cross-like pattern persisted until the end of the loading (Fig. 7). This analysis thus shows that the conditions that are applied in this study at the boundaries of the sample (i.e., two flexible membranes transmitting the fluid pressure on the lateral faces and a lower porous stone with an unconstrained motion but loaded by a contact with a hemispherical cap) make it possible to observe the appearance of a single band. This result is in good agreement with experiments, as shown by the insert of Fig. 6, which provides a cross-section of the porosity field in a sheared sample of Ottawa sand as obtained by in situ µCT [START_REF] Andò | Experimental investigation of microstructural changes in deforming granular media using x-ray tomography[END_REF], and exhibits very similar patterns. Figure 8 provides several quantitative results for the reference simulation described in Fig. 6. Figure 8a shows the stress ratio as a function of the axial shortening (raw data and smoothed curve, as presented in Fig. 6). In Fig. 8c, the "slice-averaged" solid fraction across the shear band (see Sect. 2.5) is plotted as a function of the axial shortening. This plot clearly shows the different stages of the development of a region of higher porosity. 
For low values of the axial shortening (i.e., before 0.7%, which corresponds to the stress peak), the dilation is homogeneous in the whole around the (not yet existing) shear band, and the solid fraction evolves from 0.86 to 0.84. Localization takes place 1% and 5% of axial shortening, i.e., during the stage of post-peak softening. During this period, the regions of the sample located at more than about 8 mm from the central axis of the shear band remain undisturbed, while the central part dilates more and more until a solid fraction close to 0.775 is reached. Afterwards, the profile of the dilation zone remains constant in time, showing that the shear band is now well-established and that the sample has reached its stress plateau. Figure 8b shows the evolution of the average solid fraction in the shear band and in the whole sample, and confirms a volumetric stabilization at a shortening close to 5%, with stabilized solid fractions of 0.775 and 0.830 respectively. Figure 8d shows the final "slice-averaged" profile of solid fraction across the shear band, and a similar profile for the shear rate. A comparison between these two profiles is instructive, because it shows that the dilated zone is much larger than the area where shear strain actually takes place. The total width of the dilated area is indeed close to 17.5 mm (i.e. about 70D 50 ), while the total width of the sheared zone is close to 7.9 mm (i.e. about 32D 50 ). This is further confirmed by Fig. 8e, which provides similar "slice-averaged" profiles for grain rotations across the shear band. Two measures of the grain rotations are performed. The first one ("Net rotations") is based on the difference between the initial and final orientations of the grains, and disregards all the history of the grain rotations between these two times. The second one ("Absolute rotations") is based on an integration of the absolute rotational motions in time, meaning that each grain rotation that occurs at a given time step is counted as positive and cumulated with the previous rotations of this grain. As expected, the magnitude of the absolute rotation is much larger than that of the net rotation (meaning that the grains are submitted to a complex rotation history, with a lot of back-and-forth small rotations in order to accommodate the shear). We also observe that the width of significant absolute rotations (10.8 mm, i.e., 43D 50 ) is much larger than that of the net rotations (6.8 mm, i.e., 27D 50 ). It draws the picture of a shear band where actual shear and significant (i.e., net) grain rotations occur in a narrow band of about 30 grains diameters, but which necessitates important dilation and grain relative motions in a broader band of about 60 grains diameters. Influence of a decrease in roughness In this subsection, we report the result of a campaign of 10 simulations performed with a varying number of Fourier modes (N m = 64, 32, 16, and 8) and a varying numof contour nodes (N n = 128, 64, 32, and 16). The 10 cases are summarized in Fig. 3, and among them the case with N m = 64 and N n = 128 is the reference case which was described in the previous subsection. Each sample was compacted with an appropriate value of the coefficient of friction c (determined by trial and error) in order to reach a constant and homogeneous initial solid fraction of 0.86 in all samples. During the deviatoric loading, the coefficient of friction was set to d = 0.5 in all cases in order to isolate the influence of the surface roughness of the grains. 
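As a side note, the two rotation measures used in Fig. 8e above can be stated compactly. The per-step rotation increments are a hypothetical input here; in the actual simulations they come from the recorded kinematics of each rigid grain.

```python
import numpy as np

def rotation_measures(increments_deg):
    """increments_deg: array of shape (n_steps, n_grains) with the signed
    rotation of each grain at each recorded step (illustrative input)."""
    net = np.abs(increments_deg.sum(axis=0))        # |final - initial| orientation
    absolute = np.abs(increments_deg).sum(axis=0)   # cumulated back-and-forth motion
    return net, absolute

rng = np.random.default_rng(3)
incr = rng.normal(0.0, 0.5, size=(5000, 9600))      # hypothetical rotation history
net_rot, abs_rot = rotation_measures(incr)
print(net_rot.mean(), abs_rot.mean())                # absolute >> net, as in the text
```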
Quantitative results are summarized in Fig. 9. Figure 9a shows the stress ratios obtained at the peak and on the plateau for each simulation. The residual strength does not seem to be much influenced by the surface roughness of the grains, although some deviations are observed for N m = 8. In contrast, the peak strength shows very clear trends, and decreases in a regular way when the grains get smoother (from 5.40 in the reference case to 3.79 in the smoothest case). It appears however that the peak strength is much more influenced by the number of modes used in the Fourier Spectrum (i.e., by the frequency content of the surface roughness) than by the number of contour nodes (i.e., by the spatial discretization of the grains). Rather clear trends are also observed in Fig. which shows the evolution of solid fraction in the whole sample and in the shear band. In both cases, solid fraction increases steadily when the grains get smoother, especially within the shear band where it goes from 0.775 in the reference case to 0.801 when the number of Fourier modes is reduced to 8. In Fig. 9c the absolute and net rotations of the grains in the core of the shear band are plotted. While the net rotation does not seem to be strongly affected by the smoothing of the grains, the results are more scattered for the absolute rotation. It seems, however, that this kind of motion decreases when the grains are smooth, especially if they are discretized by a low number of contour nodes (407° in the reference case, and only 218° for N m = 8 and N n = 16). Figure 9d shows results in terms of shear band width, as measured in terms of dilation (as described in Fig. and net rotation (as described in Fig. 8f). Those widths can thus be considered to represent respectively the broad area of disturbed material and the narrower core of the shear band where strain localization takes place. Both widths seem to be more influenced by the quality of the discretization, since a decrease of N n leads to a wider disturbance and shear zone, especially at low numbers of Fourier modes. Overall, these results indicate that a smoothing of the grains leads to a lower peak strength, and to shear bands that are wider, denser, and with less absolute rotations of the grains. In the next subsection, we explore the possibility that an increase of the contact friction coefficient between the grains may counteract these effects and mimic roughness. Combined influence of a decreased roughness and an increased contact friction In order to study the combined influence of roughness and friction, another simulation campaign was performed with calibrated values of the friction coefficient. Namely, the friction coefficient used for compaction c was chosen in order for all the samples to have the same initial solid fraction as the reference case (as in previous subsection) and the friction coefficient used for deviatoric loading d was chosen in order for all the samples to reach the same stress ratio at peak as the reference case (i.e., about 5.40). This calibration was done by trial and error, and the resulting values of d are provided in Fig. 10. It is interesting to notice that, for the case N m = 8 and N n = 32 it was necessary to use a non-physical contact friction coefficient d = 1.08 instead of 0.5 in order to reach the same peak strength. However, in the smoothest case (i.e., for N m = 8 and N n = 16), it was not possible to reach the target peak stress ratio of 5.40, even with a contact friction coefficient as high as d = 2. 
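The trial-and-error calibration of µ_d mentioned above can be formalised as a simple bracketing search. The run_simulation function is a placeholder for a full MELODY run returning the peak stress ratio; the tolerance, bounds, update rule and the assumed monotonic dependence of the peak on friction are illustrative assumptions, since the paper only states that the calibration was done by trial and error.

```python
def calibrate_mu_d(run_simulation, target_peak=5.40, lo=0.1, hi=2.0,
                   tol=0.05, max_iter=8):
    """Bisection on the contact friction coefficient so that the simulated
    peak stress ratio matches the reference value (illustrative only)."""
    mu, peak = lo, None
    for _ in range(max_iter):
        mu = 0.5 * (lo + hi)
        peak = run_simulation(mu)       # expensive: one full deviatoric loading
        if abs(peak - target_peak) < tol:
            return mu, peak
        if peak < target_peak:
            lo = mu                      # more friction needed
        else:
            hi = mu
    return mu, peak                      # may not converge if the peak saturates
```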
The results of this series of simulations are summarized in Fig. 11. Figure 11a shows that the peak strength is indeed constant in the 10 samples, except in the smoothest case where it saturates at 4.19 whatever the value of the contact friction coefficient. The residual strength remains rather constant in all cases. This means that, if one only focuses on the mechanical response of the sample, friction can actually replace roughness up to a certain point. Figure 11b-d show however that, although friction was artificially increased in order to reach a target peak strength, this cannot fully compensate a decrease in roughness in terms of solid fraction, either in the shear band or in the whole sample. Hence, despite this calibration of d , the trends observed in Fig. 10 are still valid: a decrease in roughness still increases the density and the width of the shear band, and decreases the absolute grains rotations. This observation, however, is valid for very low number of modes (i.e., for Nm = 8). Discussion and conclusion Based on the results presented in the previous section, it that friction can only replace roughness in granular simulations under certain conditions. If one focuses only on the macroscopic mechanical response of the sample, it is possible to compensate a certain amount of smoothing of the grains by an increase in contact friction, and still to achieve peak and residual strength that are identical to the original sample. However, if the grains are too smoothed, this may require non-physical values of the contact friction. In the extreme case where only 8 Fourier modes are used (i.e., when only the general shapes of the grains are kept, but all significant roughness disappears), the fact of discretizing the grains contours too poorly (i.e., with only 16 nodes) prevents the peak strength reaching the target value. In this case, the peak strength saturates and cannot increase anymore, whatever the increase of the contact friction. This might be related to the fact that such poorly discretized grains tend to form some planar facets, which facilitates the intergranular motion without requiring a significant dilation. From a micromechanical point of view, a decrease of the grains roughness has some consequences on the properties of the shear band. The smoother the grains, the denser and the thicker the shear band, and this trend cannot be strongly counteracted by an artificial increase of the contact friction. However, a moderate smoothing only has moderate consequences. Since such simulations are typically very costly, it is of primary importance to reduce the number of nodes on the contour of each grain, since this will reduce the number of proximities and contact detections to be performed by the contact algorithm. Meanwhile, for a given number of nodes on the contour, it is worth accounting for as much roughness as possible, since this comes at no additional cost. For this reason, we present in Fig. the localization mechanisms for four particular simulations, which correspond to respectively 128, 64, 32 and 16 contour nodes. For each of these simulations, the number of Fourier modes was the maximum that can be reasonably represented with such numbers of nodes (i.e., respectively 64, 32, 16, and 8), and the friction coefficient was calibrated to achieve the same peak strength, apart for the last one which could only achieve a peak stress ratio of 4.19. the shear band could occur in either direction depending on the simulations, some of them were flipped horizontally in Fig. 
12 in order to ease the comparison. First, it is clear that, from one simulation to another, the patterns may differ slightly. For example, the simulations with 128 and 32 nodes started to develop a cross-like pattern before one particular shear band developed, and these patterns are still visible in the final stages of the simulations. In contrast, the simulation with 64 nodes developed a single shear band from the very beginning toward a lateral sliding, and its shear band is thus clearer and less disturbed. This observation indicates that such simulations are just as prone to variability as actual experiments, and future studies should be dedicated to a deeper analysis of the repeatability of such simulations. Because of their cost, however, a reduction of the number of nodes per grain contour is necessary. Quantitative results detailed in the previous section, as well as a visual analysis of Fig. 12, indicate that a number of 32 nodes on the grain contours and of 16 Fourier modes is largely acceptable and can be used with a good level of confidence, provided that the contact friction coefficient is properly calibrated to compensate this smoothing. Indeed, such a sample can still reach the target peak strength with Fig. 12 Localization patterns for simulations with 128, 64, 32 and 16 contour nodes per using the maximum number of Fourier modes and a calibrated contact friction a reasonable calibrated value of d = 0.715 , and the attrib- utes of the shear band (thickness, solid fraction, amount of rotations, etc.) remain not too far from the reference ones (Fig. 11). This smoothing is identified as the "best compromise", as illustrated in Fig. 3. Considering fewer nodes or fewer Fourier descriptors, however, does not make it possible to reproduce both qualitatively and quantitatively the behavior of the reference sample, even for very high values of d . The expected computational cost reduction is very implementation-dependent, especially when parallelization comes into play, but in our case the simulation duration was roughly proportional to the number of nodes on the grains contour. Hence, passing from 128 nodes (reference case) to 32 nodes (best compromise) leads to a reduction of the cost by a factor 4. It could be argued that this improvement does not include the cost of the calibration of the appropriate contact friction coefficient needed to counteract the smoothing. However, it should be kept in mind that this calibration is always necessary when running DEM simulations: direct measurements of the contact friction between two grains of sand are very seldom in the literature (see for example [START_REF] Senetakis | Experimental study of sand grain behavior at their contacts with force-and displacement-controlled sliding tests[END_REF]), and can hardly be generalized to different grains of the same sample or to grains of other sands: contact friction in DEM is always back-calibrated. The roughness correction that is proposed in this paper, limited to an extent where the physics of strain localization are not strongly modified, hence comes at no additional calibration cost. Based on this knowledge, future work will be dedicated to larger scale simulations of similar systems, and will make it possible to explore the influence of other granulometric (D50, uniformity coefficient) and morphological (First Fourier modes) parameters on the qualitative and quantitative behavior of shear localization in granular materials. 
Compliance with ethical standards Conflict of interest The authors acknowledge that this study contains original material, as a result of a purely academic study without any kind of private funding or conflict of interest. Its publication has been approved tacitly by the responsible authorities at the institutes where the work has been carried out. Fig. 1 1 Fig. 1 Description of the reference experiment (taken from [50]); a Membrane fixed on the lower porous stone; b Pluviation of the sand inside the mould; c Removal of the mold; d close view of the loading system; e. Raw tomographic image of the sample before shearing Fig. 2 2 Fig. 2 Sample characterization; a SEM views of Ottawa sand grains; b Reconstruction of 6 grains from high resolution x-ray scans and representation as geodesic structures; c First modes of the measured Fig. 5 5 Fig. 5 Illustration of the postprocessing techniques used for the study of the shear bands Fig. 7 Fig. 8 78 Fig. 7 Cross-like localization pattern obtained in a simulation identical to the reference one but restraining the rotation of the lower porous stone Fig. 9 9 Fig. 9 Influence of a reduction in roughness; a Stress ratios; b Solid fractions; c Rotations; d Thickness of the shear band Fig. 10 10 Fig. 10 Contact friction coefficient µ d calibrated in order to reach the target peak stress ratio Fig. 11 11 Fig. 11 Influence of a reduction in roughness compensated by an increase in contact friction; a Stress ratios; b Solid fractions; c Rotations; d Thickness of the shear band
01008975
en
[ "spi.meca", "spi.mat" ]
2024/03/04 16:41:18
2008
https://hal.science/hal-01008975/file/Illoul2008.pdf
L. Illoul, P. Lorong, F. Chinesta. Study of High Speed Blanking Process with the C-NEM. Keywords: C-NEM, Finite Transformations, Simulation of Forming Processes. INTRODUCTION The C-NEM still has a number of common points with the finite element approach: the interpolation is defined on a set of nodes, degrees of freedom (DOF) are associated to the nodes, a shape function localised in space is associated to each node, the interpolation is nodal and is exact for linear fields. The C-NEM interpolation uses the constrained Voronoi diagram (dual of the constrained Delaunay tessellation) associated to the set of nodes and the boundaries of the domain. The quality of the produced interpolation depends primarily on the distribution of the nodes. The presence of flat tetrahedrons in the dual Delaunay mesh does not affect the interpolation quality. In the next paragraphs we first reconsider the NEM interpolation. We then focus on the non-convex domains for which the construction of a constrained Voronoi diagram is needed. The aspects related to the partitioning of the domain are then evoked. Finally we give the example of a 3D simulation of blanking. THE C-NEM INTERPOLANT Headlines of the natural element method The NEM (Natural Element Method) interpolant is based on Sibson's natural neighbor coordinates (shape functions) [START_REF] Sambridge | Geophysical parameterization and interpolation of irregular data using natural neighbours[END_REF][START_REF] Sukumar | The natural elements method in solid mechanics[END_REF] and is constructed on the basis of the Voronoi diagram. For a set of nodes S = \{n_1, n_2, \ldots, n_N\} in \mathbb{R}^{dim}, dim \in \{2, 3\}, the Voronoi diagram is the subdivision of \mathbb{R}^{dim} into regions T_i (Voronoi cells) defined by: T_i = \{ x \in \mathbb{R}^{dim} : d(x, x_i) < d(x, x_j), \ \forall j \neq i \} (1) Sibson's coordinates of x with respect to a natural neighbor n_i (see figure 1) are defined as the ratio of the overlap area (volume in 3D) of their Voronoi cells to the total area (volume in 3D) of the Voronoi cell related to point x: \phi_i(x) = \frac{\mathrm{Area}(afghe)}{\mathrm{Area}(abcde)} (2) If the point x coincides with the node n_i, i.e. x = x_i, then \phi_i(x_i) = 1 and all other shape functions are zero, i.e. \phi_i(x_j) = \delta_{ij} (\delta_{ij} being the Kronecker delta). The properties of positivity, interpolation, partition of unity [START_REF] Sukumar | The natural elements method in solid mechanics[END_REF] and the local coordinate property [START_REF] Sibson | A vector identity for the dirichlet tesselations[END_REF] are verified: 0 \leq \phi_i(x) \leq 1, \quad \phi_i(x_j) = \delta_{ij}, \quad \sum_{i=1}^{n} \phi_i(x) = 1, \quad x = \sum_{i=1}^{n} \phi_i(x)\, x_i (3) It turns out that the support of \phi_i(x) is the union of the n circles (spheres in 3D) passing through the vertices of the n Delaunay triangles (tetrahedrons) connected to the node n_i (in this case n is the number of natural neighbors of node n_i). Another important property of the NEM interpolant is the ability to reproduce linear functions over the boundary of convex domains. The proof can be found in Sukumar et al. [START_REF] Sukumar | The natural elements method in solid mechanics[END_REF]. This is not true in the case of non-convex boundaries, and the next section focuses on an approach to circumvent this difficulty.
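A small numerical sketch may help fix ideas about Eqs. (1)-(3). It estimates the Sibson coordinates of a point by sampling: a sample point belongs to the would-be Voronoi cell of x if it is closer to x than to every node, and the node it is "stolen from" is its nearest node. The node set, the sampling box and the sample size are arbitrary choices made for illustration; this is not the C-NEM implementation, and it covers only the unconstrained, convex case.

```python
import numpy as np

def sibson_coordinates(nodes, x, n_samples=200_000, box_half_width=1.0, seed=0):
    """Monte Carlo estimate of the natural-neighbour (Sibson) coordinates
    of point x with respect to a set of 2D nodes (unconstrained case)."""
    rng = np.random.default_rng(seed)
    pts = x + rng.uniform(-box_half_width, box_half_width, size=(n_samples, 2))
    d_nodes = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    d_x = np.linalg.norm(pts - x, axis=1)
    nearest = d_nodes.argmin(axis=1)            # original Voronoi cell of each sample
    stolen = d_x < d_nodes.min(axis=1)          # samples inside the cell of x
    counts = np.bincount(nearest[stolen], minlength=len(nodes))
    return counts / counts.sum()

nodes = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.6]])
x = np.array([0.45, 0.4])
phi = sibson_coordinates(nodes, x)
print(phi, phi.sum())          # partition of unity, Eq. (3)
print(phi @ nodes)             # close to x: local coordinate property
```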
The Constrained natural element method In its original form [START_REF] Sukumar | The natural elements method in solid mechanics[END_REF], the NEM can only be applied to strictly convex domains. For strongly non-convex domains (cracks, auto-contact...) some spurious influences between nodes of the boundaries appear [START_REF] Yvonnet | A new extension of the natural element method for non convex and discontinuous domains : the constrained natural element method (c-nem)[END_REF]. In order to avoid these drawbacks, we have proposed in a previous paper [START_REF] Yvonnet | A new extension of the natural element method for non convex and discontinuous domains : the constrained natural element method (c-nem)[END_REF] an extension of the NEM in which a visibility criterion is introduced in order to restrict the influencing nodes among the natural neighbors. The computation of the shape functions is done on the basis of the so-called Constrained (or extended) Voronoi diagram (CVD), which is the strict dual of the constrained Delaunay triangulation introduced by Seidel in [START_REF] Seidel | Constrained delaunay triangulations and voronoi diagrams with obstacles[END_REF] (see [START_REF] Yvonnet | A new extension of the natural element method for non convex and discontinuous domains : the constrained natural element method (c-nem)[END_REF] for further details). The new cells T^C_i, called constrained Voronoi cells, are defined formally by: T^C_i = \{ x \in \mathbb{R}^n : d(x, x_i) < d(x, x_j), \ \forall j \neq i, \ S_{x \to n_i} \cap \Gamma = \emptyset, \ S_{x \to n_j} \cap \Gamma = \emptyset \} (4) where \Gamma is the domain boundary, composed of a set of segments in 2D (triangular plane facets in 3D), and S_{a \to b} denotes the segment between the points a and b. In this framework, a point located inside a cell T^C_i is closer to the node n_i than to any other visible node n_j. For both the trial and the test functions the new approximation is then: u^h(x) = \sum_{i \in V} \phi^C_i(x)\, u_i (5) where V is the set of natural neighbors which are visible from point x and \phi^C_i is the constrained natural neighbor shape function related to the i-th node at point x. It was shown in [START_REF] Yvonnet | A new extension of the natural element method for non convex and discontinuous domains : the constrained natural element method (c-nem)[END_REF] that the use of the constrained Voronoi diagram does not affect the properties of the NEM interpolation. Constrained Delaunay tetrahedrisation The constrained Delaunay triangulation does not always exist in 3D without adding new nodes [START_REF] Schönhardt | Uber die zerlegung von dreieckspolyedern in tetraeder[END_REF]. Some techniques for constructing 3D constrained Delaunay tessellations are available by adding Steiner points [START_REF] Shewchuck | Tetrahedral mesh generation by delaunay refinement[END_REF][START_REF] Shewchuck | Sweep algorithms for constructing higherdimensional constrained delaunay triangulations[END_REF]. These points are always set on triangles belonging to the boundary of the domain. It is possible to connect each of these points to at most 3 initial nodes (those defining the triangles containing the new points). The new points are treated as Slave nodes, the initial nodes being the Master nodes. For each Slave node n^S_i, we set the following kinematic linear constraint: u^S_i = \sum_{j=1}^{3} \eta_{ij}\, u^{M(S_i)}_j (6) where u^{M(S_i)}_j are the nodal values at the 3 Master nodes of n^S_i, and \eta_{ij} the reduced coordinates of the Slave node on its supporting triangle.
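Equation (6) is essentially a barycentric interpolation of the slave value from its three master nodes. A minimal sketch follows, with hypothetical arrays: the indexing of the masters and the storage layout of the reduced coordinates η are assumptions made for illustration.

```python
import numpy as np

def apply_slave_constraints(u_master, master_ids, eta):
    """u_master: (n_master_nodes, dim) displacements of the Master nodes.
    master_ids: (n_slave_nodes, 3) indices of the 3 Masters of each Slave.
    eta: (n_slave_nodes, 3) reduced (barycentric) coordinates, rows sum to 1.
    Returns the Slave displacements of Eq. (6)."""
    return np.einsum("sj,sjd->sd", eta, u_master[master_ids])

u_master = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
master_ids = np.array([[0, 1, 2]])
eta = np.array([[1/3, 1/3, 1/3]])          # Slave located at the triangle centroid
print(apply_slave_constraints(u_master, master_ids, eta))
```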
To keep the NEM interpolation properties, the interpolation must be built using all the Master and Slave nodal values. With this partition, the interpolation becomes: u^h(x) = \sum_{i \in M} \phi^C_i(x)\, u^M_i + \sum_{i \in S} \phi^C_i(x)\, u^S_i (7) M and S being respectively the sets of Master and Slave natural neighbors which are visible from x. The software TETGEN (Quality Tetrahedral Mesh Generator and Three-Dimensional Delaunay Triangulator, [START_REF] Si | Meshing piecewise linear complexes by constrained delaunay tetrahedralizations[END_REF]) is currently used to build the constrained Delaunay tetrahedrisation. EXPLICIT LAGRANGIAN PROCEDURE With the principle of virtual work as a basis of the kinematically based C-NEM solution scheme, the corresponding continuum incremental boundary value problem is formulated as follows: \int_{\Omega_t} \rho(t)\, \ddot{u} \cdot \eta \, d\Omega_t + \int_{\Omega_t} \sigma_t : \nabla_x \eta \, d\Omega_t = \int_{\Omega_t} \rho(t)\, b \cdot \eta \, d\Omega_t + \int_{\partial\Omega_t^{\sigma}} \tau \cdot \eta \, d\Gamma_t \quad \forall \eta \in \vartheta (8) where \rho is the density, b and \tau represent the body forces and applied tractions respectively, and \vartheta is the space of virtual displacements. The properties d\Omega_t = J_t\, d\Omega_0 and \rho_0\, d\Omega_0 = \rho(t)\, d\Omega_t are used, which leads to: \int_{\Omega_0} \rho_0\, \ddot{u} \cdot \eta \, d\Omega_0 + \int_{\Omega_0} P_t : \nabla_X \eta \, d\Omega_0 = \int_{\Omega_0} \rho_0\, b \cdot \eta \, d\Omega_0 + \int_{\partial\Omega_t^{\sigma}} \tau \cdot \eta \, d\Gamma_t \quad \forall \eta \in \vartheta (9) where P denotes the first Piola-Kirchhoff stress tensor related to \sigma by P = J F^{-1} \sigma. The C-NEM discretization (5) of the variational form (9) results in the discrete set of algebraic time dependent equations, in matrix form: M \ddot{u}_{n+1}(t) = F^{ext}_n(t) - F^{int}_n(u_n, t) (10) where t is the time, M denotes the mass matrix, F^{int}_n(u, t) the internal force vector, while F^{ext}_n(t) is the external force vector. As shown in the next section, the use of the SCNI quadrature [START_REF] Chen | Non-linear version of stabilized conforming nodal integration for galerkin meshfree methods[END_REF] results in a diagonal M matrix, whose diagonal terms are given by m_i = \rho_0\, \Omega_i, with \Omega_i the volume of the Voronoi cell related to node n_i. The velocity v = \dot{u} and the acceleration \ddot{u} = \dot{v} are approximated by using central differences with variable time steps (a minimal sketch of this update scheme is given further below): v_{n+1/2} = v_{n-1/2} + \frac{\Delta t_1 + \Delta t_2}{2}\, \ddot{u}_n (11) u_{n+1} = u_n + \Delta t_2\, v_{n+1/2} (12) 4 NUMERICAL INTEGRATION In the context of finite elements, a natural partition of the domain is carried out by the elements. With the C-NEM the concept of element does not exist. A natural partitioning is given by the cells of the Voronoi diagram. When cells intersect the boundary of the domain, only their part inside the domain is kept. However, for strongly non-convex domains in 3D, some boundary cells may have negative volume inside the domain (some faces may intersect themselves inside the domain). It is thus necessary to use another partitioning. For reasons of simplicity we have chosen a partitioning based on the Delaunay tetrahedrons: each elementary volume \Omega_i is defined by the quarters of all tetrahedrons connected to the node n_i. We then use the stabilized conforming nodal integration (SCNI) proposed by Chen et al. [START_REF] Chen | Non-linear version of stabilized conforming nodal integration for galerkin meshfree methods[END_REF] to define a deformation gradient at node n_i: \nabla f^h_i = \frac{1}{|\Omega_i|} \int_{\Omega_i} \nabla f^h(x)\, d\Omega (13) In order to avoid projections between two successive actualisations of the reference configuration, all variables are computed with this gradient at the nodes. NUMERICAL SIMULATION OF BLANKING All the simulations given below deal with the same example.
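Before describing that example, here is the minimal sketch of the central-difference update of Eqs. (10)-(12) announced above, with a diagonal (lumped) mass vector. The force routine is a placeholder; damping, contact and the SCNI computation of the nodal masses and gradients are left out, so this is only the time-stepping skeleton, not the C-NEM solver.

```python
import numpy as np

def explicit_step(u_n, v_half_prev, m, f_ext, f_int, dt_prev, dt_next):
    """One central-difference step with variable time increments, Eqs. (10)-(12).
    u_n         : nodal displacements at step n
    v_half_prev : mid-step velocities v_{n-1/2}
    m           : lumped nodal masses (diagonal of M)
    f_ext, f_int: external / internal nodal force vectors at step n
    dt_prev, dt_next: time increments Delta t_1 and Delta t_2 around step n"""
    a_n = (f_ext - f_int) / m                                  # Eq. (10), diagonal M
    v_half = v_half_prev + 0.5 * (dt_prev + dt_next) * a_n     # Eq. (11)
    u_next = u_n + dt_next * v_half                            # Eq. (12)
    return u_next, v_half

# Toy 1D check: a single mass on a linear spring (hypothetical values)
m = np.array([1.0]); k = 100.0
u, v = np.array([0.1]), np.array([0.0])
dt = 1e-3
for _ in range(1000):
    u, v = explicit_step(u, v, m, f_ext=np.array([0.0]), f_int=k * u,
                         dt_prev=dt, dt_next=dt)
```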
It is a simulation of blanking where the gap e between the punch and the die is equal to 0.6 mm and the radius r of the cutting edges is equal to 0.2 mm (Figure 2). The sample has a thickness (in the direction of cutting) equal to 5 mm, and its behavior is modeled via a Johnson-Cook law [START_REF] Johnson | A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures[END_REF]: \sigma_y = \left[ A + B (\bar{\epsilon}_p)^n \right] \left[ 1 + C \ln\frac{\dot{\epsilon}_p}{\dot{\epsilon}_{p0}} \right] \left[ 1 - \left( \frac{T - T_0}{T_m - T_0} \right)^m \right] (14) where \bar{\epsilon}_p represents the equivalent plastic strain, \dot{\epsilon}_p the rate of plastic strain, T the current temperature and T_m the melting temperature. The following coefficients are used (SI units): A = 253 \cdot 10^6, B = 685 \cdot 10^6, C = 0.097, m = 2.044, n = 0.3128, \dot{\epsilon}_{p0} = 1, T_0 = 296. These coefficients correspond to a 306L stainless steel. The punch and the die are considered as rigid and the contact with the sample is without friction. The two symmetries of the experiment are taken into account and only a quarter of the sample is studied. The elastic behavior plays a major role in the localization of the shearing band. The elastic waves take 0.9 \cdot 10^{-6} s to pass across the sample, and figure 4 shows that the shearing band is clearly localized after only 2.0 \cdot 10^{-6} s, when elastic waves are still governing the stress distribution in the sample. On figure 4 we compare the equivalent plastic strain for 2 different punch velocities (figure panels: t = 0.25 µs, 0.50 µs, 1.00 µs, 2.00 µs; punch velocity 10 m.s^{-1} and 30 m.s^{-1}). This figure exhibits the asymmetry of the shearing band, which is more pronounced close to the die, this effect being more important when the velocity of the punch goes from 10 to 30 m.s^{-1}. In both previous simulations the punch and the die are taken as rigid although their Young's modulus and mass density are close to those of the sample. This increases the effect of plastic strain near the contact and inside the sample. This aspect will be taken into account for future simulations. CONCLUSIONS With the possibility of dealing with three-dimensional problems involving domains of any type (convex or not), the C-NEM is from now on a general tool. Its development is currently oriented towards the adaptation of the node distribution (refinement in particular), especially in zones where high gradients occur, and also towards the adaptation of the boundary description when high deformations have occurred. For instance, a correct description of the domain boundary is necessary to update the constrained Voronoi diagram in the context of an updated Lagrangian approach. First results on blanking are nevertheless interesting. Figure 1: Sibson shape functions construction in 2D. Figure 2: The blanking device. Figure 3: Stress field in blanking - Punch velocity: 10 m.s^{-1}. Figure 4: Plastic strain localisation after 2.00 µs. ACKNOWLEDGEMENT This work has been done under the technical and financial support of CETIM Senlis (Centre Technique des Industries Mécanique).
04096601
en
[ "sdv.mhep.phy", "sdv.ba.zi" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04096601/file/ZANE_Flaminia_these_2022.pdf
Keywords: metabolomic data. Characterization of the gene expression changes associated with the age-related Smurf phenotype in Drosophila melanogaster. Ageing is a process affecting a broad range of living organisms. In humans and multiple model organisms, it is characterized by an age-dependent decrease in functional efficiency and increased vulnerability to death. Even if deregulations in specific pathways and mechanisms have been identified as associated with ageing, the so-called hallmarks of ageing, a comprehensive picture of the process is still missing. Most of the studies conducted to understand ageing-associated changes are performed at a population level, assuming individuals at a given time point in a synchronous population to be a homogeneous group. Although this assumption simplifies reality too much, as individuals in the population can die at very different times, and therefore carry a biological age that does not match with their chronological one, it is often used for practical reasons. Indeed, even if the definition of biological clocks to monitor biological age in humans and model organisms is helping in this matter, it is still challenging to have non-invasive biomarkers that could allow following individual ageing in a research context. We believe this inability to be a limiting factor in ageing research. In 2012, Rera and collaborators described a new age-dependent and death-related phenotype in Drosophila melanogaster. By looking at a physiological increase in the intestinal permeability to a non-toxic and non-absorbed blue food dye, they were able to identify individuals committed to death in a few days. Interestingly, these individuals, called "Smurfs", are the only ones, among a population, to exhibit various ageing-related changes and a high risk of impending death whatever their chronological age; furthermore, all the flies studied so far underwent the Smurf transition prior to death. Smurfness has been hypothesized to be a good proxy to follow the physiological age of single individuals. During my PhD, I have focused on characterizing the gene expression changes associated with the Smurf phenotype. In particular, my initial research question was to test whether Smurfness is a valid method to detect biological age in vivo. Through whole-body RNA-sequencing on Smurf and age-matched non-Smurf flies of different chronological ages, I demonstrated that Smurfs carry a stereotypical transcriptional signature independently of their chronological age, mostly overlapping with what was described so far as ageing. Furthermore, by studying time-related changes and Smurf-related changes in gene expression concomitantly, I identified genes moving through time but not necessarily associated with the physiological collapse of the organism and death (Smurf phase). This led to the detection of putative pathways affected by time. Lastly, by looking at the genes changing their expression with Smurfness, I functionally validated new genes affecting longevity in Drosophila. In conclusion, rather than focusing on a specific aspect or mechanism of ageing, my PhD takes place in a more general context in the field, with the investigation of a new phenotype biomarking the biological age. The results, which I will present in the manuscript, point at Smurfs as an excellent tool to look at the physiological age of D. melanogaster, and possibly as a powerful method for deconvolving the changes related to physiological and chronological time.
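The core analytical idea of the abstract, separating chronological-age effects from Smurf-related effects in the same expression data, can be caricatured with a toy linear model. The data layout, the simulated counts and the use of ordinary least squares on log-expression are purely illustrative assumptions; the actual analyses in the thesis rely on dedicated RNA-seq differential-expression methods.

```python
import numpy as np

def age_and_smurf_effects(log_expr, age_days, is_smurf):
    """Fit log-expression ~ intercept + age + smurf for each gene by least
    squares, returning the two coefficients (toy deconvolution sketch)."""
    X = np.column_stack([np.ones_like(age_days, dtype=float),
                         age_days.astype(float),
                         is_smurf.astype(float)])
    beta, *_ = np.linalg.lstsq(X, log_expr, rcond=None)
    return beta[1], beta[2]            # per-gene age slope, per-gene Smurf shift

rng = np.random.default_rng(4)
age = np.repeat([20, 30, 40], 8)                 # hypothetical sampling days
smurf = np.tile([0, 0, 0, 0, 1, 1, 1, 1], 3)     # non-Smurf / Smurf flies per day
n_genes = 100
true_age, true_smurf = rng.normal(0, .02, n_genes), rng.normal(0, .5, n_genes)
y = (age[:, None] * true_age + smurf[:, None] * true_smurf
     + rng.normal(0, .1, (len(age), n_genes)))
age_coef, smurf_coef = age_and_smurf_effects(y, age, smurf)
```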
ii Abstract Caractérisation des changements d'expression génétique associés au phénotype Smurf lié à l'âge chez Drosophila melanogaster Le vieillissement est un processus qui affecte un large éventail d'organismes vivants. Chez l'homme et de nombreux organismes modèles, il est caractérisé par une diminution liée à l'âge de l'efficacité fonctionnelle et une vulnérabilité accrue à la mort. Même si la dérégulation de certaines voies et de certains mécanismes a été identifiée comme étant associée au vieillissement (ce que l'on appelle les marqueurs du vieillissement) il manque encore une image complète du processus. La plupart des études menées pour comprendre les changements associés au vieillissement sont réalisées au niveau de la population, en supposant que les individus à un moment donné dans une population synchrone constituent un groupe homogène. Bien que cette hypothèse simplifie trop la réalité, puisque les individus de la population peuvent mourir à des moments très différents et donc avoir un âge biologique qui ne correspond pas à leur âge chronologique, elle est souvent utilisée pour des raisons pratiques. En effet, même si le développement d'horloges biologiques pour suivre l'âge biologique chez l'homme et les organismes modèles aide en la matière, il est encore difficile de disposer de biomarqueurs non-invasifs qui pourraient permettre de suivre le vieillissement individuel dans un contexte de recherche. Nous pensons que cette incapacité est un facteur limitant dans la recherche sur le vieillissement. En 2012, Rera et ses collaborateurs ont décrit un nouveau phénotype dépendant de l'âge et lié à la mort chez Drosophila melanogaster. En observant une augmentation physiologique de la perméabilité intestinale à un colorant alimentaire bleu, ils ont pu identifier des individus voués à la mort en quelques jours. Il est intéressant de noter que ces individus, appelés "Smurfs", sont les seuls, au sein d'une population, à présenter divers changements normalement décrits comme liés à l'âge et un risque élevé de mort imminente, quel que soit leur âge chronologique. De plus, toutes les mouches étudiées jusqu'à présent ont subi la transition Smurf avant de mourir. Nous avons donc émis l'hypothèse que le phénotype Smurf était un bon indicateur pour suivre l'âge iii physiologique d'individus. Au cours de mon doctorat, je me suis concentrée sur la caractérisation des changements d'expression génique associés au phénotype Smurf. En particulier, ma question de recherche initiale était de vérifier si le phénotype est un bon outil pour détecter l'âge biologique in vivo. Grâce au séquençage des ARN du corps entier sur des mouches Smurfs et des mouches non-Smurfs appariées à différents âges chronologiques, j'ai démontré que les Smurfs portent une signature transcriptionnelle stéréotypée indépendamment de leur âge, qui recouvre en grande partie ce qui a été décrit jusqu'à présent comme étant la signature du vieillissement. En étudiant concomitamment les changements liés au temps et les changements liés aux Smurfs dans l'expression des gènes, j'ai essayé d'identifier les gènes qui sont affectés par le temps mais qui ne sont pas nécessairement associés à l'effondrement physiologique de l'organisme et à la mort (phase Smurf); ceci a conduit à la détection de voies putatives affectées par le temps. Enfin, en examinant les gènes affectés par la "Smurfness", j'ai essayé de cribler de nouveaux gènes affectant la longévité chez la drosophile. 
In conclusion, rather than concentrating on a specific aspect or mechanism of ageing, my thesis takes place in a more general context in the field, with the study of new phenotypes acting as biomarkers of biological age. The results, which I present in my manuscript, indicate that Smurfs are an effective model for studying the physiological age of D. melanogaster, and possibly a powerful tool for deconvolving the changes related to physiological and chronological time.

General introduction and guidelines

If asked to define ageing, our reply would probably be similar to "the progressive decline in health and functionality that our body experiences in late adulthood, and that eventually leads to death". The perception of our body's ageing is that of a continuous process, the beginning of which is not readily identifiable, and that is eventually - at least partially - causally linked to death. If we now ask the same question of the scientific literature, the answer would not be so different. A standard definition is the one of progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death [START_REF] López-Otín | The Hallmarks of Aging[END_REF]. Ageing is acknowledged as a process occurring progressively over time, causing a decline in an organism's efficiency and correlating with increasing mortality. Establishing a causal link between ageing and death could seem intuitive to us. However, ageing is never listed as a cause of death itself. If we visit the Centers for Disease Control and Prevention (CDC) online page regarding the leading causes of death - currently updated to 2020 - we find three killers on the podium: heart disease, cancer and Covid-19. All of these diseases have age as a significant risk factor. We could therefore say that ageing is an indirect cause of death. The definition of ageing that I introduced as standard is correct but not exhaustive. Does every organism age this way? Do all organisms age? These are complex questions to which whole books can be dedicated. The idea of ageing as a process that reduces the ability of an organism to survive extends down to unicellular organisms like bacteria or the yeast model organism Saccharomyces cerevisiae. For them, two different ageing features can be described: chronological lifespan (CLS) - the time the cell lives without dividing - and replicative lifespan (RLS) - the number of times the cell can divide. It is difficult to imagine these unicellular organisms sharing our ageing patterns; indeed, not all the ageing-associated phenotypes are conserved across evolution. [START_REF] Lemoine | The Evolution of the Hallmarks of Aging[END_REF] interestingly tries to reconstruct the evolution of the hallmarks of ageing, as they are identified by [START_REF] López-Otín | The Hallmarks of Aging[END_REF], from prokaryotes to bilaterian animals like us. He provides a fascinating journey through time that shows how adding parts and pieces to a machine can make it more complex and elegant but can, on the other hand, add potential problems that the machine needs to cope with. Even leaving unicellular organisms aside, it would be a mistake to believe that all multicellular organisms age the way we do. Plants, whose organization and life cycle are different from those of animals, show a different ageing pattern [START_REF] Thomas | Ageing in plants[END_REF]. Even inside the animal kingdom, we find organisms that do not show a decline over time.
The jellyfish Turritopsis dohrnii can revert to an early, pre-reproductive stage of life when subjected to stress, and restart the cycle all over [START_REF] Martell | Life cycle, morphology and medusa ontogenesis of Turritopsis dohrnii (Cnidaria: Hydrozoa)[END_REF]. The cells of the small freshwater animal Hydra show an indefinite capacity for self-renewal [START_REF] Boehm | FoxO is a critical regulator of stem cell maintenance in immortal Hydra[END_REF]. Lobsters, even if not potentially immortal, show a different ageing pattern, with no decline of functionality over time and expression in adult cells of the telomerase enzyme, which continuously works to protect and preserve the chromosomes [START_REF] Klapper | Longevity of lobsters is linked to ubiquitous telomerase expression[END_REF]. The scientific definition that I provided earlier is therefore biased towards human-like ageing, which, as a society, we are interested in studying for obvious reasons. Therefore, in this manuscript, I will focus on human-like ageing, which is also exhibited by Drosophila melanogaster, the model organism I used in my research. As my PhD takes place in the field of ageing, I will provide a general introduction to the topic in section 1.2. I will then move to discuss the difference between biological and chronological ageing (section 1.3), and what has been done so far to try to deconvolve the first from the second. I will then present the Smurf model, which I have investigated during the past three years to validate it as a tool for identifying biologically old individuals in fruit flies. The reader can see the introduction chapter as a guide towards the biological problem I investigated. I then divided the rest of the manuscript into three different chapters. Chapter 2 is dedicated to the bioinformatic part of the project. In Chapter 3, I describe the experimental work I conducted. Chapter 4 is dedicated to the final discussion of the results and the conclusions. I divided each chapter into different and clearly identifiable sections so that the reader can quickly jump from one to the other if needed.

Ageing: how and why

Ageing, as a process strongly decreasing an individual's fitness and eventually associated with its end, seems counterintuitive from an evolutionary point of view. Why would evolution select for something that damages the individual? On this subject, there are two leading schools of thought. Supporters of the programmed ageing point of view affirm that evolution is not blind with respect to ageing and that such a mechanism is intrinsic to each organism; some of the associated theories explain why this would be selected by evolution, and some do not. An example of a programmed ageing theory that explains ageing in the light of evolution is altruistic ageing (see [START_REF] Longo | Programmed and altruistic ageing[END_REF] for a more extensive review on the subject). We can somehow trace the origin of this view of ageing back to Hamilton's kin selection theory in the 1960s [START_REF] Hamilton | The genetical evolution of social behaviour[END_REF] (though the conceptualization of kin selection goes back to Darwin), which affirms that an individual, in a given social context, can act in a way that reduces its fitness if such behaviour maximizes the fitness of its relatives. Altruistic ageing brings this to the extreme, as ageing would be a sort of altruistic suicide of the individual in favour of the population's survival.
Behaviours that could be interpreted in such a sense have been observed in unicellular organisms as Saccharomyces cerevisiae [START_REF] Longo | Programmed and altruistic ageing[END_REF]). The most obvious example of this in the animal kingdom is the behaviour of the Pacific salmons, which show fast senescence and death just after reproduction; this would be a case of phenoptosis, a programmed death of an organism when not needed anymore by the population (malfunction or competition, as in this case, with the progeny). On the other side, we have the non-programmed ageing theories, which affirm that in different ways, ageing results from a passive action of evolution. The first theory proposed in this context is the wear and tear (Weismann, 1882), which generally affirms that the organism ages and collapses due to damage accumulation over time. Other theories have been proposed along the past century, some on the trend of scientific discoveries (as the Reactive Oxygen Species -ROS-in association with ageing in the 1950's). Whether ageing is programmed or not, is still a matter of debate. For quite a long time now, what cannot be debated is that ageing is a plastic process that can be modulated. Indeed, back in the 1930's, it was shown that caloric restriction (CR) induced lifespan extension in rats [START_REF] Mccay | The Effect of Retarded Growth Upon the Length of Life Span and Upon the Ultimate Body Size: One Figure[END_REF][START_REF] Mccay | Retarded Growth, Life Span, Ultimate Body Size and Age Changes in the Albino Rat after Feeding Diets Restricted in Calories: Four Figures[END_REF]. The CR modulation of ageing has been since then demonstrated in other species, such as Caenorhabditis elegans [START_REF] Klass | Aging in the nematode Caenorhabditis elegans: Major biological and environmental factors influencing life span[END_REF], and D. melanogaster [START_REF] Chippindale | Phenotypic plasticity and selection in Drosophila life-history evolution. I. Nutrition and the cost of reproduction[END_REF]. More recently, it has been confirmed that CR improves healthspan (i.e. the amount of time the organism spend in good health) in primates [START_REF] Mattison | Caloric restriction improves health and survival of rhesus monkeys[END_REF]. In addition, mutations in specific genes can modulate ageing and extend lifespan. [START_REF] Friedman | A mutation in the age-1 gene in Caenorhabditis elegans lengthens life and reduces hermaphrodite fertility[END_REF] showed for the first time that a mutation in the age-1 gene of C. elegans was extending lifespan in both males and females. The gene codes for a domain of catalytic subunit of class-I phosphatidylinositol 3-kinase (PI3K). It is involved in the Insulin/IGF-1 Signaling (a well-known ageing pathway which I will further discuss in 1.2.1). Ten years later, the first genetic mutation increasing lifespan was identified in D. melanogaster [START_REF] Lin | Extended life-span and stress resistance in the Drosophila mutant methuselah[END_REF], while in the same years lifespan extension was shown for mice with hereditary dwarfism [START_REF] Brown-Borg | Dwarf mice and the ageing process[END_REF]. The biological mechanisms associated with ageing have been extensively studied in the past century. Although a complete picture is still missing, [START_REF] López-Otín | The Hallmarks of Aging[END_REF] have proposed, by summarizing what is known so far, nine hallmarks of ageing. 
For the hallmarks selection, they have considered three criteria: 1) the process manifests during normal ageing; 2) if experimentally induced or aggravated, it accelerates ageing; 3) if experimentally ameliorated, it retards ageing. I will now present the different hallmarks as a general state of the art of ageing. With each hallmark, I will describe, if any, the associated theories. Please note that the theory was proposed before the presented experimental evidence in some cases, but I am grouping them to allow a more fluid reading. At the end of the section, I will present the theories that do not fit in a specific hallmark. In figure 1.1, you can follow the chronological order of the theories and discoveries that I am presenting all along the Introduction. Hallmarks of ageing The nine hallmarks of ageing span from DNA to the systemic level, and they are summarized in figure 1.2. They can indeed be divided in three categories: nuclear, cellular and systemic. I will review them from the nuclear to the systemic ones, following the same order as the paper. Genomic instability Numerous diseases that show accelerated ageing phenotypes present defects in DNA repair mechanisms, as well summarized in [START_REF] Niedernhofer | Nuclear Genomic Instability and Aging[END_REF]. For example, progeria syndrome, which causes premature ageing in humans, is linked to a mutation in the lamin A gene, involved in nuclear architecture [START_REF] Eriksson | Recurrent de novo point mutations in lamin A cause Hutchinson-Gilford progeria syndrome[END_REF]. Furthermore, DNA lesion markers have been shown to increase with age in different cells isolated from mice and rats [START_REF] Helbock | DNA oxidation matters: The HPLC-electrochemical detection assay of 8-oxo-deoxyguanosine and8-oxo-guanine[END_REF][START_REF] Wong | Elevation of oxidativedamage biomarkers during aging in F2 hybrid mice: protection by chronic oral intake of resveratrol[END_REF][START_REF] Nie | Age-dependent accumulation of 8-oxoguanine in the DNA and RNA in various rat tissues[END_REF]. On the other hand, DNA repair mechanisms are indeed observed to decline in efficiency with age [START_REF] Ren | Non-homologous DNA end joining in the mature rat brain[END_REF][START_REF] Intano | Age-related base excision repair activity in mouse brain and liver nuclear extracts[END_REF][START_REF] Yamada | Aged human skin removes UVB-induced pyrimidine dimers from the epidermis more slowly than younger adult skin in vivo[END_REF]. In addition, [START_REF] Baker | Increased expression of BubR1 protects against aneuploidy and cancer and extends healthy lifespan[END_REF] showed how overexpression of BubR1, involved in guaranteeing the correct segregation of chromosomes during mitosis, extends lifespan in mice. Is this a simple correlation, or does genomic instability drive ageing? [START_REF] Niedernhofer | Nuclear Genomic Instability and Aging[END_REF] argue that although there are pieces of evidence supporting that, some studies in humans and mice show a weak correlation with increased DNA stability in long-living mammal models; an improved response to oxidative stress could be a confounding factor in this matter. Overall, cause or effect, genomic instability is a well-conserved hallmark of ageing. Interestingly, mitochondrial DNA (mtDNA) too has been shown to accumulate mutations over time in mice [START_REF] Ameur | Ultra-deep sequencing of mouse mitochondrial DNA: mutational patterns and their origins[END_REF]. Mutation-accumulation theory. 
This theory is part of the non-programmed view of ageing. First proposed by P. Medawar in 1952, it affirms that ageing is the result of the random accumulation of mutations that are only expressed later in life, after the reproduction time of the organism. Therefore, such mutations escape evolution.

Antagonistic pleiotropy theory. G. C. Williams proposed it in 1957, and it falls as well under the non-programmed theories. Although it does not refer to genomic instability, it refers to genetic variants, and, therefore, I place it in the genome hallmark. It proposes that a gene controlling different traits (pleiotropy) can provide benefits to the organism early in life but result in a disadvantage late in life (antagonistic). Therefore, such genes would be actively selected for by evolution but would drive ageing as a side effect. Experiments applying artificial selection in D. melanogaster showed that lines obtained as the progeny of young flies have a mortality rate increasing more rapidly with age compared to that of lines obtained as the progeny of old flies. This seems to be the cost of a benefit in early life, with higher fertility observed in "young" lines, as explained in [START_REF] Partridge | Mechanisms of aging: public or private?[END_REF].

Programmed longevity. This theory falls within the programmed ageing view. It affirms that the genome is intrinsically meant to fail over time, and no organism can escape such a mechanism. Genomic instability manifesting over time would therefore be the leading cause of ageing [START_REF] Davidovic | Old age as a privilege of the "selfish ones[END_REF]. In this context, long-living individuals are the "privileged ones", carrying an intrinsically more stable genome, which delays their ageing. This group is opposed to the "normal ones", who carry a normal genome destined to a normal ageing pattern. To support their hypothesis, the authors cite a retrospective study showing that siblings of centenarians experienced a mortality advantage throughout their lives relative to the reference cohort [START_REF] Perls | Life-long sustained mortality advantage of siblings of centenarians[END_REF]. However, it is unclear if such an effect is due to a more stable genome.

Telomere attrition

Telomeres are the ends of linear DNA molecules in cells. In most eukaryotic cells, they are made of repeated sequences, polymerized by a dedicated enzyme (telomerase). Not all animals show such a telomere structure. Drosophila telomeres, for instance, are constituted by an array of retrotransposons [START_REF] Mason | Drosophila telomeres: an exception providing new insights[END_REF]. Telomere shortening over time, due to the non-expression of telomerase in somatic cells, has been shown to play an essential role in cellular senescence, with artificial expression of the enzyme being sufficient to immortalize the cells without their undergoing malignant transformation [START_REF] Bodnar | Extension of Life-Span by Introduction of Telomerase into Normal Human Cells[END_REF]. Moreover, it has been shown in mice that ageing can be delayed by telomerase induction, without a cost in healthspan [START_REF] Bernardes De Jesus | Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer[END_REF].
On the other hand, maybe due to their different structure, telomere length seems uncoupled from longevity in Drosophila [START_REF] Walter | Effects of telomere length in Drosophila melanogaster on life span, fecundity and fertility[END_REF].

Epigenetic alterations

Epigenetics is a broad and still not wholly understood field. Some changes associated with genome regulation and stabilization through epigenetic mechanisms have been found to be associated with ageing. Changes in histone modifications (both methylation and acetylation) have been observed. In particular, inhibition of histone methylation has been shown to extend longevity in nematodes and flies [START_REF] Greer | Members of the Histone H3 Lysine 4 Trimethylation Complex Regulate Lifespan in a Germline-dependent Manner in C. elegans[END_REF][START_REF] Siebold | Polycomb Repressive Complex 2 and Trithorax modulate Drosophila longevity and stress resistance[END_REF]. The sirtuin gene family (NAD-dependent protein deacetylases) acts on histones - amongst other targets - and has been extensively studied for its role in ageing. Indeed, Sir2 has been shown to have pro-longevity effects when overexpressed in yeast (replicative lifespan), nematodes, flies and mice [START_REF] Zhao | Sirtuins and their Biological Relevance in Aging and Age-Related Diseases[END_REF]. DNA methylation is also described as changing its pattern with age, with decreased global methylation accompanied by increased local methylation. Even if its causal role - if any - is not clear yet, DNA methylation changes are a well-established biomarker of age. They have indeed been used as the basis for the definition of the first biological clock (see section 1.3) [START_REF] Horvath | DNA methylation age of human tissues and cell types[END_REF][START_REF] Horvath | DNA methylation-based biomarkers and the epigenetic clock theory of ageing[END_REF]. Epigenetic changes can lead to transcriptional alterations and reduced genome protection, linking the first three described hallmarks in a more general and interconnected process associated with ageing.

Loss of proteostasis

Cells maintain proteostasis through two mechanisms: stabilization of unfolded proteins (chaperone-mediated folding) and degradation by the proteasome (ubiquitin-mediated) and lysosomes (autophagy-mediated). The heat-shock protein (Hsp) chaperone family seems to play an important role, as their regulator (called Hsf1 in humans) has been shown to cause lifespan shortening when knocked down in C. elegans, and an extension when overexpressed [START_REF] Hsu | Regulation of aging and age-related disease by DAF-16 and heat-shock factor[END_REF]. On the other hand, Hsp genes appear to be upregulated with age in nematodes [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF] and in flies [START_REF] Tower | Heat shock proteins and Drosophila aging[END_REF]. Therefore, it seems that cells undergo a stress condition with increasing age, which results in unfolded proteins and protein aggregation. Degradation mechanisms are impaired with age [START_REF] Vilchez | The role of protein clearance mechanisms in organismal ageing and age-related diseases[END_REF].
Proteasome-mediated clearance has been described as less effective with ageing [START_REF] Vernace | Aging perturbs 26S proteasome assembly in Drosophila melanogaster[END_REF][START_REF] Ferrington | Altered proteasome structure, function, and oxidation in aged muscle[END_REF][START_REF] Andersson | Enhancing protein disaggregation restores proteasome activity in aged cells[END_REF]. Autophagy has also been shown to work less efficiently, with effects also on organelle clearance - particularly mitochondria (mitophagy) [START_REF] Vilchez | The role of protein clearance mechanisms in organismal ageing and age-related diseases[END_REF]. Interestingly, promoting autophagy by overexpressing its regulating genes increases lifespan in mice, nematodes, and fruit flies [START_REF] Hansen | Autophagy as a promoter of longevity: insights from model organisms[END_REF]. Autophagy is promoted when insulin signalling is inactive (see section 1.2.1.5), and its proper function is therefore involved in CR-mediated lifespan extension.

Cross-linking theory. Proposed by J. Bjorksten in 1942, this theory states that the decline observed with ageing is due to accidental cross-linking between proteins, which eventually damages cells and tissues.

Deregulated nutrient sensing

Our organism needs to finely perceive the level of available nutrients in order to respond correctly to high and low energy states. The IIS pathway - mentioned at the beginning of section 1.2 - is the mediator of the response to high energy states. I schematized its effect on ageing in figure 1.3. The axis mediated by insulin and insulin-like growth factor (IGF-1) represses the activity of the transcription factor FOXO, with a final positive regulation of glucose uptake and fatty acid storage. The other important player of the response is the kinase mTOR, which is activated by the IIS pathway but also senses amino acid availability, positively regulating protein synthesis while inhibiting autophagy [START_REF] Sabatini | Twenty-five years of mTOR: Uncovering the link from nutrients to growth[END_REF]. It has been reported that artificial attenuation of the IIS pathway extends lifespan in worms, flies and mice [START_REF] Fontana | Dietary Restriction, Growth Factors and Aging: from yeast to humans[END_REF] - which helps explain the effect of CR on longevity. Inhibition by rapamycin of mTORC1 (one of the two complexes containing mTOR) is also a well-documented lifespan-increasing treatment [START_REF] Harrison | Rapamycin fed late in life extends lifespan in genetically heterogeneous mice[END_REF]. What is even more interesting is that IGF-1 and growth hormone (GH) decrease during normal ageing and have been shown to do so in mouse models with accelerated ageing [START_REF] Schumacher | Delayed and Accelerated Aging Share Common Longevity Assurance Mechanisms[END_REF] - why so? It has been proposed that a constitutive decrease in IIS signalling, instead of a later-life one, decreases the rates of cell growth and metabolism, therefore decreasing the rate of cellular damage that comes with them as a side effect [START_REF] Garinis | DNA damage and ageing: new-age ideas for an age-old problem[END_REF]. Concerning the signal related to low nutrient sensing, we find the AMP-activated protein kinase (AMPK) - which inhibits mTOR - and once again a protein of the sirtuin family (Sirt1), which acts upstream of PGC-1α. The latter, when activated, stimulates mitochondriogenesis and fatty acid catabolism.
AMPK activation through metformin extends lifespan in worms and mice [START_REF] Anisimov | Metformin slows down aging and extends life span of female SHR mice[END_REF][START_REF] Onken | Metformin induces a dietary restriction-like state and the oxidative stress response to extend C. elegans Healthspan via AMPK, LKB1, and SKN-1[END_REF].

Rate-of-living theory. It was proposed by M. Rubner in 1908, and affirms that the lifespan of an organism is inversely proportional to its metabolic rate. This formulation came from observing that larger animals, which have a slower metabolism, outlive smaller ones. The theory was further supported in 1928 by R. Pearl in his book "The rate of living". However, the slow ageing observed in small animals like birds [START_REF] Holmes | Comparative biology of aging in birds: an update[END_REF] contradicts this theory or, at least, questions its universality.

Figure 1.3: The IIS pathway and its effects on ageing. In green, the genes/treatments which result in a positive modulation of ageing; vice versa for the yellow nodes. IGF-1 activates inside the cell a kinase signalling cascade, which results in the inactivation of the transcription factor FOXO and the activation of the kinase mTOR. This provokes a positive regulation of glucose and fatty acid storage, as well as protein synthesis, while inhibiting autophagy. This is the high nutrient level sensing axis. On the other hand, the kinase AMPK, together with Sirt1 and PGC-1α, mediates the low nutrient sensing axis. Their activity stimulates catabolism and mitochondriogenesis. By working as a calmer of the high nutrient sensing axis, CR extends lifespan. Picture inspired from [START_REF] López-Otín | The Hallmarks of Aging[END_REF].

Disposable soma theory. It was proposed by T. Kirkwood in 1977 and focuses on the hypothetical necessity for the organism to allocate resources. In particular, the individual needs to balance the energy it spends on its own needs and on reproduction. For instance, an increase in investment in reproduction would lead the organism to "neglect" its maintenance, leading to a decline in functionality and eventually death. It has been shown in D. melanogaster that not only do virgin flies live longer than mated flies, but a late-reproducing line has a longer lifespan than an early-reproducing line, and this difference disappears when a mutation in the oogenesis gene ovoD1 is introduced [START_REF] Sgrò | A delayed wave of death from reproduction in Drosophila[END_REF]. More recent results in flies support the theory [START_REF] Travers | Live fast die young life history in females: evolutionary trade-off between early life mating and lifespan in female Drosophila melanogaster[END_REF]. On the other hand, retrospective studies trying to correlate female lifespan with reproduction have been attempted in humans, with unclear results [START_REF] Kuningas | The relationship between fertility and lifespan in humans[END_REF][START_REF] Kaptijn | The Trade-Off between Female Fertility and Longevity during the Epidemiological Transition in the Netherlands[END_REF][START_REF] Gagnon | Natural fertility and longevity[END_REF]. Overall, the antagonistic pleiotropy and the disposable soma theories could be more generally grouped in one category, the trade-off theories.

Mitochondrial dysfunction

Mitochondria play an essential role in the metabolism and energy production of cells.
They carry their own genome, partially coding for the proteins operating in the organelle [START_REF] Anderson | Sequence and organization of the human mitochondrial genome[END_REF]. They are also mediators of numerous stress responses in the cell. For instance, permeabilization of their outer membrane starts the intrinsic apoptosis pathway. Or again, in case of dysfunction, translation is attenuated in the cell through the action of the kinase GCN2, and mitochondria can also mediate the unfolded protein response through the transcription factor ATFS-1 [START_REF] Melber | UPRmt regulation and output: a stress response mediated by mitochondrialnuclear communication[END_REF]. Given their importance in the cellular network and the fact that they are the main source of ROS production in the cell (see theories below), mitochondria have been intensively studied in ageing. Downregulation of genes acting in mitochondria has been listed as one of the gene expression hallmarks of ageing [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF]. [START_REF] Webb | Intimate Relations-Mitochondria and Ageing[END_REF] and [START_REF] Sharma | Causal roles of mitochondrial dynamics in longevity and healthy aging[END_REF] provide a good overview of the mitochondrial changes associated with ageing. Global mitochondrial proteomic changes are observed, with further evidence pointing at a reduced electron transport chain (ETC) efficiency. Senescence induction in cells also correlates with a deterioration of mitochondrial oxidative phosphorylation. The clearance of dysfunctional mitochondria through mitophagy has also been shown to be impaired with ageing. As mentioned in the previous section 1.2.1.1, mtDNA damage increases over time - with the accumulation of erroneous copies of the mtDNA inside mitochondria. However, this last point is debated, as mitochondria present multiple copies of their genome inside the organelle and can therefore carry different versions of it (heteroplasmy). Therefore, for a mutation to become established and have an effect, it needs to be carried by a sufficient proportion of copies.

Free-radical and ROS theory of ageing. It was first proposed in the 1950s by D. Harman as the free-radical theory of ageing, and it has later been expanded to oxidizing agents in general, such as ROS (reactive oxygen species). According to this theory, cells (and the organism they form) age due to the accumulation of oxidative damage to their components, caused by highly reactive free radicals. Over time, the accumulation of such compounds overcomes the cells' reductive capacity, leading to unrepaired damage. This theory gained traction following the discovery of the first superoxide dismutase (SOD) [START_REF] Mccord | Superoxide dismutase. An enzymic function for erythrocuprein (hemocuprein)[END_REF], which plays a ROS-scavenging role in the cell. This theory has been central in the ageing field during the second half of the 1900s. However, more recent findings do not support it systematically. Although there is evidence supporting an increased resistance to oxidative stress in long-lived Drosophila strains [START_REF] Deepashree | Oxidative stress resistance as a factor in aging: evidence from an extended longevity phenotype of Drosophila melanogaster[END_REF], some studies show that increases in oxidative damage are not always linked to lifespan reduction (Van Raamsdonk and Hekimi, 2009; [START_REF] Pérez | Is the Oxidative Stress Theory of Aging Dead?[END_REF]).
In addition, it is now clear that mitochondrial ROS do not play only a "destructive" role: below a certain threshold, they trigger signalling pathways that are important in cell and organism homeostasis [START_REF] Ristow | Mitohormesis: Promoting Health and Lifespan by Increased Levels of Reactive Oxygen Species (ROS)[END_REF].

Cellular senescence

Cellular senescence is a growth arrest state in which cells cannot proliferate, even under optimal conditions and mitogenic stimuli. Senescence was initially observed in dividing cultured cells, but there is evidence that postmitotic cells can also show a senescent phenotype. Senescence is thought to be induced by DNA damage and telomere shortening, and it is associated with changes in chromatin structure and mitochondrial dysfunctions. When looking deeper into the described changes [START_REF] Di Micco | Cellular senescence in ageing: from mechanisms to therapeutic opportunities[END_REF], they indeed overlap with some of the described hallmarks of ageing. Interestingly, the level of the cyclin-dependent kinase inhibitor p16 - a senescent cell biomarker - has been shown to increase with age in different rat tissues [START_REF] Krishnamurthy | Ink4a/Arf expression is a biomarker of aging[END_REF] and human skin [START_REF] Ressler | p16INK4A is a robust in vivo biomarker of cellular aging in human skin[END_REF]. This could lead us to a chicken-and-egg problem: are the previously described hallmarks of ageing triggering cell senescence, or are they - at least partially - hallmarks of senescence itself? Studies in mice have shown that accumulation of senescent cells due to genetic manipulation leads to prematurely ageing tissues [START_REF] Baker | BubR1 insufficiency causes early onset of aging-associated phenotypes and infertility in mice[END_REF], with the clearance of such cells delaying the early ageing phenotype [START_REF] Baker | Clearance of p16Ink4a-positive senescent cells delays ageing-associated disorders[END_REF]. While this might not be enough to affirm that the accumulation of senescent cells is triggering ageing, it proves the association between the two phenotypes. Senescent cells also have specific secretory patterns (SASP - senescence-associated secretory phenotype), mainly composed of pro-inflammatory cytokines and extracellular matrix (ECM) re-modelling enzymes. [START_REF] Di Micco | Cellular senescence in ageing: from mechanisms to therapeutic opportunities[END_REF] argue that it is yet to be deciphered if the ageing improvements observed in the literature upon senescent cell clearance are due to the removal of the cells themselves or of their SASP instead.

Stem cell exhaustion

Adult stem cells contribute to tissue renewal throughout life. As part of the organism, they are not immune from the alterations I have described so far. Indeed, aged stem cells show a higher level of DNA damage and impaired metabolic regulation, with an altered balance between glycolysis and mitochondrial respiration [START_REF] Ermolaeva | Cellular and epigenetic drivers of stem cell ageing[END_REF]. The latter is also linked to an impaired production of epigenetic co-factors such as acetyl-CoA, possibly cooperating with the changes observed in the epigenome. Over time, stem cells are also subjected to stress from changes in their niche.
For instance, in skeletal muscle with age, we observed the so-called fibrosis, which is an excessive accumulation of ECM components, primarily collagen, due to excessive ECM production or alteration in ECM-degrading activities, or both [START_REF] Mahdy | Skeletal muscle fibrosis: an overview[END_REF]. This phenomenon changes the mechanical forces to which stem cells are subjected, resulting in reduced self-renewal and regenerative capacity [START_REF] Blau | The central role of muscle stem cells in regenerative failure with aging[END_REF]. It has also been observed that stem cells misregulation with age can cause over-proliferation and misdifferentiation, as in D. melanogaster intestine; [START_REF] Biteau | Lifespan Extension by Preserving Proliferative Homeostasis in Drosophila[END_REF] and [START_REF] Rera | Modulation of Longevity and Tissue Homeostasis by the Drosophila PGC-1 Homolog[END_REF] show that genetic manipulation improving stem cells homeostasis delays the dysplasia observed in the gut and extends lifespan. Altered intercellular communication This hallmark mainly refers to the chronic inflammation experienced by older individuals. As discussed in [START_REF] Calder | Health relevance of the modification of low grade inflammation in ageing (inflammageing) and the role of nutrition[END_REF], different studies in humans over the last decade have shown an increase in the blood of inflammatory biomarkers. Production of pro-inflammatory cytokines was found increased in old -and apparently healthy -individuals [START_REF] Fagiolo | Increased cytokine production in mononuclear cells of healthy elderly people[END_REF][START_REF] Franceschi | The immunology of exceptional individuals: the lesson of centenarians[END_REF]. Dysregulation of immune systems genes is also annotated amongst the hallmarks of transcriptional ageing [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF]. Does lowering the level of inflammation postpone ageing? It has been shown that NF-κB inhibition in the skin of old mice leads to the rejuvenation of the tissue, bringing back to a transcriptional signature more similar to the young mice [START_REF] Adler | Motif module map reveals enforcement of aging by continual NF-kappaB activity[END_REF]; once again, we also find involvement of a member of the sirtuin family. Active Sirt1 inhibits Nf-κB by deacetylation, and its downregulation (which has been described in mice cells with senescence and ageing [START_REF] Xu | SIRT1 is downregulated by autophagy in senescence and ageing[END_REF]), could therefore have an impact on the inflammatory status. In flies, it has been shown that light ubiquitous or gut-only upregulation of anti microbial peptides (AMPs, produced downstream of the immune pathways) extend mean lifespan (not maximum) by rectangularization of the survival curve of the population [START_REF] Loch | Antimicrobial peptides extend lifespan in Drosophila[END_REF]. However, the authors show that such flies present decreased expression of immune pathways. The slight artificial overexpression of the downstream product could therefore help maintain a negative feedback loop slowing down immune response; this could be the actual cause of lifespan extension. Immunological theory of ageing. This theory assigns the leading ageing role to the intrinsic deregulation of the immune system over time, which leaves the organisms more susceptible to infection and cancer, and therefore death. Firstly proposed by R. 
Walford [START_REF] Walford | The Immunologic Theory of Aging[END_REF], it laid the foundations for the study of the importance of immune system deregulation with ageing.

Inflammageing. Over the years, increasing evidence showed that the organism seems to enter a state of chronic inflammation with age. The immunological theory of ageing was then updated to what is now called inflammageing [START_REF] Franceschi | Inflammaging. An evolutionary perspective on immunosenescence[END_REF], according to which a state of chronic inflammation characterizes old individuals. This new formulation adds to what was hypothesized by [START_REF] Kirkwood | Is aging as complex as it would appear? New perspectives in aging research[END_REF] as the "ageing network". According to the ageing network, the anti-ageing cellular defence is based on: a) the heat shock response, b) oxygen free radical scavenging, c) UV-stress recovery, and d) the DNA damage repair system. The state of chronic inflammation incorporates this picture as a first background step: it represents a first hit in the ageing process, while the second hit that leads to failure of the network depends on individual genetic background or specific environmental stimuli. Therefore, chronic inflammation would be a shared mechanism of ageing. At the same time, different individuals could be less or more resistant to the other kinds of stresses represented in the ageing network.

Figure 1.4: According to the inflammageing theory proposed by [START_REF] Franceschi | Inflammaging. An evolutionary perspective on immunosenescence[END_REF], chronic inflammation would be a first hit (shared by all the individuals) towards unhealthy ageing. The second hit, which leads to the manifestation of the unhealthy ageing phenotype, can instead differ between individuals, depending on their genetic background or the environmental stresses they undergo.

Final remarks

As already hinted by the discussion on the single hallmarks, it seems clear that no process is completely independent of the others, as cells are systems that base their life on well-regulated interactions and cross-talks. Indeed, in the same work, the authors propose to divide the hallmarks into three categories: primary, antagonistic and integrative. The primary hallmarks would be the first cause of damage, and they are uniquely negative. The antagonistic hallmarks would occur at a later stage. They are characterized by having contrasting effects depending on their intensity: if mild, they can have a protective role or trigger a protective signal; when exacerbated, they are detrimental for the organism. The authors propose that the primary hallmarks might play a role in the chronicization of such processes. Last, we have the integrative hallmarks, which emerge from the previous alterations and directly affect tissue function. In this section, I provided a general overview of the processes that have been shown to be associated with ageing. I will now discuss the way we study ageing, focusing on the difference between the biological age and the chronological age of an organism.

Figure 1.6: The evolution of the hallmarks of ageing, as reconstructed by [START_REF] Lemoine | The Evolution of the Hallmarks of Aging[END_REF] from the hallmarks identified by [START_REF] López-Otín | The Hallmarks of Aging[END_REF] (2013). According to the study, the first hallmark appearing in prokaryotes is the accumulation of misfolded proteins. The chromatin-related hallmarks follow with the appearance of Archaea. After that, we find hallmarks related to organelles (mitochondria) and eukaryotic nuclear complexity. The metabolic hallmarks are the last to appear in unicellular organisms. As the organism's complexity increases with multicellularity, the rest of the hallmarks appear.
Biological and chronological age

The longevity curve (i.e. the proportion of individuals alive plotted over time) of a synchronous population with human-like ageing is exemplified in figure 1.7. After a time characterized by an almost null mortality (initial survival plateau), the population progressively decreases in size as individuals die. Over time, the population experiences an apparently progressive increase of the hallmarks of ageing. What happens instead at an individual's level? If we look at the curve, we will immediately understand that all individuals are not biologically ageing at the same rate. A given population is characterized by a mean lifespan (ML, the average time of death of its individuals), meaning that most individuals will die at an age around this number. However, some individuals will die earlier (short-lived) and some later (long-lived). The population is made of subpopulations with a different remaining lifespan at each moment. Individuals of different subpopulations are possibly ageing at a different rate, therefore experiencing ageing-related changes at different levels, even if they are the same age. There is indeed a distinction between the chronological age of an individual and its biological age. Chronological age is defined as the time passed since birth, and, therefore, it is equal for all the individuals born at the same time. The biological age is the individual's actual age in terms of its physiology and is reflected by its mortality risk.

Figure 1.7: If we could distinguish the biological age of the individuals in a population, we would observe that they actually age at different rates. In this example, the 10 individuals present in the population at the beginning are a mixture of individuals that will age at different speeds and die at different moments. The blue individuals are short-lived and will die off first. The most numerous red individuals will die at an age around the average lifespan of the population. But a few individuals (yellow) will die off last, at a higher age than the mean lifespan. In the heterogeneous population, we observe an increase, on average, of the hallmarks of ageing carried by individuals.

Different ageing rates amongst a population. [START_REF] Zhang | Extended Twilight among Isogenic C. elegans Causes a Disproportionate Scaling between Lifespan and Health[END_REF] conducted an experiment on C. elegans to understand how different individuals age in a synchronous population. They studied a cohort of 734 worms followed for all their lives in an automated way. They also recorded and kept track of different parameters (such as movements, body size, and tissue integrity) to assess their health status. They considered three different hypotheses: 1) the starting point hypothesis, according to which each individual would be born with a given status intrinsically deciding its lifespan; 2) the rate of ageing hypothesis, according to which all the individuals are born with the same potential but then start to age at a different pace; 3) the premature death hypothesis, according to which individuals are following the same ageing path, and dying stochastically on the way.
Their results point at the second (rate of ageing) hypothesis as the one supported by the data, with a hint for the third hypothesis (premature death) for individuals with a similar ageing rate. This would mean that at day 0 in the population, all the individuals are equal, but some experience a faster ageing rate than others. Interestingly, their data also suggest that individuals living longer spend on average a larger proportion of their life in bad health compared to individuals dying prematurely. So the latter would indeed experience a "faster senescence" compared to the former.

Different ageing rates can create a sampling bias. [START_REF] Meyer | BiT age: A transcriptome-based aging clock near the theoretical limit of accuracy[END_REF] exemplifies the kind of bias we can face when sampling from a synchronous population ageing at different rates. The biological age of the individuals at a given chronological age is modelled as normally distributed. When the population is young (in the survival plateau), we expect the median biological age to overlap with chronological age for most of the individuals. However, some will be biologically younger and some biologically older - with a maximum biological age in the population. The biologically older individuals will die off first, shifting the mean of the distribution towards biologically younger individuals. The authors, therefore, suggest that, as the population advances in age, we experience an increase in the probability of sampling biologically younger individuals. This implies that each individual, once it diverges from the others in a way similar to the "rate of ageing" hypothesis explained in the paragraph before, carries a gap in its biological age relative to the other individuals - a "negative" one if it is ageing faster, a "positive" one otherwise.

Ageing clocks. In order to identify at each moment individuals that are physiologically old or young, we need biomarkers that correctly represent their biological age. Then we would be able to distinguish the different biological subpopulations present in a group. The frailty index (FI34) was defined by [START_REF] Kim | Association of healthy aging with parental longevity[END_REF]. The ageing clocks are instead predictors of the ageing rate of an organism given a set of biological variables. The first clock developed was Horvath's epigenetic clock [START_REF] Horvath | DNA methylation age of human tissues and cell types[END_REF], which estimates a DNA methylation age (DNAm age) from the cytosine-5 methylation pattern of 353 CpG sites (cytosine-guanine dinucleotides) in the human genome. The clock has been proven to correlate almost perfectly with age, with an estimation error of 3.6 years. The estimation also performs well on blood cells, establishing the epigenetic clock as a non-invasive way to predict age in humans. Recently, attention has focused on identifying a transcriptomic ageing clock that could be applied to organisms with no DNA methylation (such as C. elegans) or a low level of CpG methylation (such as D. melanogaster). [START_REF] Tarkhov | A universal transcriptomic signature of age reveals the temporal scaling of Caenorhabditis elegans aging trajectories[END_REF] have identified a set of 327 genes whose expression is potentially a biomarker of ageing; amongst them are targets of genes involved in longevity regulation, such as daf-16 (ortholog of the FOXO transcription factor).
The authors rescale the samples' age to the mean lifespan to correct the difference between chronological and biological age during the estimation process. This leads to predicting a "bioage" expressed as an absolute number. Last year, [START_REF] Meyer | BiT age: A transcriptome-based aging clock near the theoretical limit of accuracy[END_REF] published a new transcriptome-based clock in C. elegans (BiT age). It selects 576 genes that predict the individual's biological age with a correlation of 0.98 (Pearson) and an r2 of 0.96. The biological age is here computed by using a correcting factor, defined as the mean lifespan of the strain divided by the mean lifespan of the N2 wild-type strain. Functional enrichment of the 576 genes selected to predict age with their expression reveals, amongst others, genes involved in innate immune response, detoxification and neuronal communication. Although these results are an essential step forward in identifying an individual's biological age, they are not yet easily applicable in an experimental setting. The recording of the expression of a numerous set of genes is indeed invasive in model organisms such as nematodes or fruit flies, and would not be feasible prior to sampling. [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] described a phenotype in D. melanogaster that can be useful in this matter. I will describe this phenotype, called Smurf, in more detail in the next section.

The Smurf phenotype

The Smurf phenotype is an age-associated phenotype characterized by the increase in intestinal permeability to the blue food dye FD&C #1 in D. melanogaster. The dye, normally not absorbed by the digestive tract, spreads throughout the body in flies with impaired intestinal permeability, turning them completely blue. Hence, the name Smurf. A Smurf and a non-Smurf D. melanogaster (females and males) are shown in figure 1.9. I will start by describing the standard protocol for the identification of Smurfs [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] [START_REF] Martins | In vivo Assessment of Intestinal Permeability in Multiple Model Organisms[END_REF], and I will subsequently move to illustrate the phenotype characterization that has been carried out so far.

Smurf assay

The Smurf assay allows in vivo assessment of gut permeability in Drosophila. It consists of the ingestion of standard food mixed with the blue dye FD&C #1. It is important to specify that the dye is:
1. non-toxic. Two synchronous populations of Drosophila of the same strain, one aged on food + blue dye, the other aged on food only, do not show differences in their lifespan [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF];
2. non-toxic when the intestinal permeability is impaired. It does not cause a reduction in lifespan when continuously ingested after the increase in intestinal permeability [START_REF] Martins | In vivo Assessment of Intestinal Permeability in Multiple Model Organisms[END_REF].
It has been shown that two synchronous populations of the same genetic background do not present differences in lifespan when a) one of them is continuously exposed to blue food and b) the other has its Smurf flies switched back to normal food as soon as they are detected, therefore limiting their exposure to the dye in a condition of impaired intestinal permeability. The most common way of detecting the presence of Smurf flies is by eye, with a Smurf defined as such when the fly presents a complete blue colouration. However, the phenotype is not binary but instead presents continuous variations, with flies possibly showing an uncertain status or "light Smurf" phenotype. The authors propose three different explanations for this: 1) the dye might take some time to diffuse through limited gut permeability, generating a gradient phenotype depending on the permeability; 2) biological and environmental factors could cause variations in the phenotype observed; 3) different perception by different observers. The scoring "by eye" presents an additional bias related to the proportion of Smurfs at the moment of scoring: in the presence of many non-Smurfs and only a few Smurfs, even light Smurfs could be scored. In order to prevent this, the authors suggest that, when the experimental conditions allow it, single individuals could be photographed for independent validation. As a general rule, only Smurfs presenting the blue colouration all over their body have to be scored as such, if not stated otherwise.

Smurf phenotype in an ageing population

Interestingly, the phenotype assessed by the Smurf assay shows an age-dependent pattern. Figure 1.10 illustrates what happens when we age flies on food in which the dye is present. As soon as the population exits the survival plateau, blue flies appear in the population. Interestingly, the proportion of Smurfs (computed as the number of Smurfs over the total number of flies in the population at the given time point) increases with time. The Smurf Increasing Rate (SIR) depends on the way the population ages; a population with a higher mean lifespan and a slower decay in the number of individuals will show a lower SIR (figure 1.10). Smurfness is therefore an age-related phenotype. The absolute number of Smurfs in the population at each time point follows a different trend, which will be discussed in section 1.4.5. Last but not least, in the experiments conducted so far, every fly has been shown to turn Smurf prior to death [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. At a given time point in a population, we can distinguish two subpopulations, Smurfs and non-Smurfs. Let's suppose we separate them and monitor their remaining lifespan independently. In that case, we will see that the median lifespan (T50) of the non-Smurfs is around the same as the one of the whole population at the given age; the one of the Smurfs is instead low and constant independently of their age (figure 1.11). An independent study [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF] confirmed that Smurfs have a T50 of ∼2.04 days, independently of their age, even if a trend is noticeable with young Smurfs living longer than old Smurfs.
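To make the two quantities introduced above more concrete, here is a minimal sketch, in Python, of how the proportion of Smurfs, the SIR and the Smurf T50 could be computed from a scored population. All the numbers (ages, counts, remaining lifespans) are made up for illustration and are not data from the studies cited above.

```python
import numpy as np

# Hypothetical scoring of one synchronous population: at each age (days),
# the number of flies still alive and how many of them are scored as Smurf.
ages = np.array([10, 20, 30, 40, 50, 60])          # days after eclosion
alive = np.array([200, 198, 190, 160, 110, 45])    # flies alive at scoring
smurfs = np.array([0, 4, 13, 22, 26, 16])          # of which scored as Smurf

# Proportion of Smurfs at each time point (Smurfs / total alive flies).
proportion = smurfs / alive

# The SIR is the slope of the linear increase of that proportion with age
# (p = a*t + b); t0, the age at which the first Smurfs appear, is -b/a.
a, b = np.polyfit(ages, proportion, deg=1)
t0 = -b / a
print(f"SIR (slope a) = {a:.4f} per day, t0 = {t0:.1f} days")

# Smurf T50: median remaining lifespan of flies followed from the day they
# were scored as Smurf (hypothetical individual values, in days).
remaining_lifespan = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 2.0, 1.5, 2.5, 4.0])
print(f"Smurf T50 = {np.median(remaining_lifespan):.1f} days")
```

In such a sketch, a faster-ageing population would show a steeper slope a (higher SIR), while the Smurf T50 would stay around the short, stereotyped value described above regardless of the age at which the Smurfs were collected.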
The relatively constant time that individuals spend as Smurf before dying suggests that the flies find themselves in a stereotyped condition characterized by similar (if not equal) molecular changes. In an attempt to understand whether Smurfs, given their high risk of impending death, carry the burden of ageing-associated changes independently of their age, [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] tested the expression of a few genes described as ageing-related in Smurfs and non-Smurfs at different ages. They observed that changes occur with Smurfness rather than with age (figure 1.11).

Summary. To conclude, if we age a population in the presence of FD&C #1, we will observe that: 1) after a certain time t, corresponding to the survival plateau, Smurfs are detectable at each time point in the population; 2) every fly turns blue before dying, and its probability of doing so increases with age; 3) once Smurf, the fly shows, independently of its age, some gene expression changes that have so far been associated with ageing. What is interesting if we look at figure 1.11 is the absence of a detectable trend for gene expression changes in non-Smurfs. It seems that the signal is merely carried by the Smurf flies at each age, without an age-dependent pattern in the non-Smurfs. This suggests a biphasic behaviour of the ageing markers. Those observations led to the hypothesis that the Smurf phenotype could be a good proxy to follow biological age in flies.

Figure 1.11: The first row shows the expression of three antimicrobial peptides (AMPs) expressed downstream of the inflammatory pathway, known to be overactivated with ageing. AMP expression is significantly lower in the non-Smurfs than in the Smurfs independently of their age, hinting that the Smurfs could be the ones actually carrying the "ageing inflammation signal" in the population. On the second row, the same analysis is done on three FOXO targets, induced when the IIS is repressed, as normally occurs with ageing. The same pattern as for the inflammatory genes is observed, with a significant overexpression of the genes with Smurfness, independently of age.

Conservation of the Smurf phenotype

In order to assess if Smurfness is specific to D. melanogaster, [START_REF] Dambroise | Two phases of aging separated by the Smurf transition as a public path to death[END_REF] have investigated the presence of the Smurf phenotype in two other Drosophila species (D. mojavensis and D. virilis), as well as in the nematode C. elegans and the vertebrate Danio rerio, commonly known as zebrafish. They have checked for three main features: 1) identification of the Smurf phenotype through the Smurf assay; 2) age-related trend of the proportion of Smurfs in a synchronous population; and 3) association between the Smurf phenotype and a high risk of death. I will briefly report the results obtained for the three points. 1. Smurf identification. With some adaptation of the protocol, the presence of Smurfs was tested for the four species in old individuals. This corresponded to 40-day-old flies, 12-day-old worms and 48-month-old fish. In all of them, Smurf individuals, showing the dye colouration all over the body, were detected (figure 1.12). 2. Age dependency. The two Drosophila species, wild-type strains presenting different lifespans, show an age-dependent SIR with the Smurf assay performed every ten days. As in D. melanogaster, the strain with the shortest lifespan shows the highest SIR.
Three wild-type strains were tested in nematodes. In all three, the age-dependency of the phenotype was confirmed. Furthermore, the two strains with similar lifespans also have SIRs that are not significantly different. In contrast, the remaining strain, which has a shorter lifespan, presents a higher SIR, significantly different from the others. In zebrafish, given the relatively long mean lifespan of the animal (∼ 42 months), fishes of the same strain were collected at four different ages (4, 8, 12 and 23 months), and each group was assayed for Smurfs. The same was repeated 3 and 7 months later. The results show a significant trend for males, and a non-significant but detectable trend for females. The authors suggest this could be due to the low number of individuals used. 3. Death risk. The Smurf population isolated from D. mojavensis and the one isolated from D. virilis show the same survival curve, with exponential decay of the number of individuals and a T 50 of 2.5 days. This overlaps with what was shown in D. melanogaster [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Similar results were obtained for the nematodes. Smurfs scored at 2 and 2.5 years in zebrafish were monitored for 7 months, with their age-matched non-Smurfs. Again, Smurfs show a higher risk of death, with 75% dying within the seven months against 24% amongst the non-Smurfs. Figure 1.12: Smurf phenotype conservation in different model organisms, with their associated phylogenetic tree. The pictures present female Drosophila, hermaphrodite nematodes and female zebrafish. From [START_REF] Dambroise | Two phases of aging separated by the Smurf transition as a public path to death[END_REF]. Altogether, those results suggest not only that the Smurf phenotype first described in D. melanogaster is conserved amongst different species, even non-closely related ones, but also that the amount of time the individuals spend in this phase could be a function of the life expectancy at birth. Preliminary work in mammals Preliminary studies have been conducted on mice to validate the Smurf model in mammals. The work (Cansell et al., under review) gathers a cohort of 44 mice and follows them from the end of the survival plateau of the population until death. For each mouse, parameters such as body temperature, glycemia, intestinal permeability and activity are recorded at regular intervals. Figure 1.13 presents some of the results. Interestingly, the authors cannot identify a changing trend with age for most of the recorded parameters but can instead identify a biphasic behaviour when grouping the mice by remaining time before death. For example, intestinal permeability increases around 22 days before death, while glycemia drops around 24 days before death. The presence of a biphasic behaviour hints at the possibility of transposing the Smurf model to mammals. 2-phase model of ageing Given the results pointing at the biphasic behaviour of age-related changes in non-Smurfs and Smurfs, [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF] proposed to reinterpret ageing as a sharp transition between a non-Smurf and a Smurf phase rather than a continuous process (figure 1.14). According to this model, in phase 1 (non-Smurf) the individual is apparently healthy and has a null probability of dying but experiences an increasing probability with time to enter phase 2.
In phase 2 (Smurf), the individual is committed to death independently of its chronological age, and it experiences the decline of the physiological functions that have so far been described as hallmarks of ageing [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Each individual undergoes two phases in its life: non-Smurf (phase 1) and Smurf (phase 2). The length of phase 1 represents the healthspan of the organism (i.e. the time the individual spends in good health). During this phase, the individual experiences null mortality, but an increasing probability with time of entering phase 2, or Smurf phase. In the Smurf phase, individuals experience the degradation in functional efficiency that we commonly refer to as ageing. While the length of phase 1 can differ between individuals, phase 2 seems to have a stereotyped short duration. In this view, the progressive increase in the ageing markers that has so far been seen as occurring in individuals when studying ageing at a population level could be reinterpreted as a progressive increase with time in the Smurf individuals carrying such changes. So, according to this model, we have two subgroups co-existing in the general population, the non-Smurfs and the Smurfs. The authors propose a mathematical formulation to model how their evolution shapes the survival curve of the whole population. The authors propose that three parameters with an easy biological interpretation can explain the dynamics of an ageing population: • a. The rate at which Smurfs appear in the population; • k. The rate at which the Smurf population decays; • t_0. The time at which the first Smurf appears in the population. The proportion of Smurfs is modelled as increasing linearly over time, in the following way: p = at + b (1.1) where p is the proportion of new Smurfs at a given time point t, a is the slope of the line -the rate at which Smurfs appear in the population -and b is the intercept with the y axis. t_0 is instead, graphically, the intercept with the x-axis (time) and can be computed as -b/a. The decay of the Smurf population is instead modelled exponentially as follows: S = S_0 e^{-kt} (1.2) where S is the number of Smurfs at a given time t, S_0 is the number of Smurfs at t_0, and k is the parameter defining the exponential decay at which the Smurfs die. The whole population of flies is composed at each moment of two subpopulations, Smurfs and non-Smurfs. As long as Smurfs do not appear in the population (i.e. t < t_0), the number of individuals is equal to that of non-Smurfs. At t_0, the two subpopulations (Smurfs and non-Smurfs) start to co-exist in the general population. How does the absolute number of individuals belonging to the non-Smurf and Smurf groups evolve? For simplification, let's take t_0 as the starting point. The number of non-Smurfs N will change as follows: dN/dt = -a t N_t (1.3) where the term a t describes the rate of Smurf appearance, and N_t is the number of non-Smurfs at time t. The change in the number of Smurfs at a given time point is instead the number of flies turning Smurf from the N_t flies, minus the number of Smurfs dying (and therefore leaving the Smurf population): dS/dt = a t N_t - k S_t (1.4) where k is the decay parameter of the Smurf population and S_t is the number of Smurfs in the population at time t.
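The following is a minimal numerical sketch of equations (1.3) and (1.4) in R, integrated with a simple Euler scheme; the parameter values and the initial population size are arbitrary and only meant to illustrate the dynamics.

# Euler integration of the 2-phase model, with time measured from t0:
#   dN/dt = -a*t*N        (non-Smurfs turning Smurf)
#   dS/dt =  a*t*N - k*S  (Smurfs appearing and then dying)
a  <- 0.002   # rate at which Smurfs appear (illustrative value)
k  <- 0.3     # rate at which Smurfs die (illustrative value)
dt <- 0.1     # integration step, in days
t  <- seq(0, 60, by = dt)

N <- numeric(length(t)); S <- numeric(length(t))
N[1] <- 1000  # non-Smurfs at t0
S[1] <- 0     # Smurfs at t0

for (i in seq_along(t)[-1]) {
  new_smurfs <- a * t[i - 1] * N[i - 1] * dt
  N[i] <- N[i - 1] - new_smurfs
  S[i] <- S[i - 1] + new_smurfs - k * S[i - 1] * dt
}

# Survival curve of the whole population and of its two subpopulations.
plot(t, N + S, type = "l", xlab = "days after t0", ylab = "number of flies")
lines(t, N, col = "red")   # non-Smurfs
lines(t, S, col = "blue")  # Smurfs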
By solving the equations (for details, see [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]), we obtain the behaviour shown in figure 1.15. The model has been shown to fit as well as models commonly used to describe human-like ageing, such as the Gompertz and Weibull models [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Furthermore, it is built on parameters that are easy to interpret biologically and are experimentally measurable. I am here simulating how the tuning of the a and k parameters affects the longevity of a population (black curve) and the evolution of its non-Smurf (red) and Smurf (blue) subpopulations, when t_0 is fixed (in this case 10 days). The parameters used are extracted from the [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF] paper. I am using three values for k and a: 0.05, 0.2, 0.5. a governs the rate at which Smurfs appear in the population; k the rate at which the Smurfs die. If we fix k and tune a (reading the graph by column), we can notice that the total population decays faster with a higher value of a, and a higher number of Smurfs accumulates in the population, with a peak around the T 50. If we fix a and tune k, we observe the total population once again decaying faster with higher values of k, but the Smurfs, although increasing at the same rate, do not accumulate in the population, as they die faster. For high values of k and low values of a (top right), we will barely observe Smurfs in the population, as on average they die as soon as they become Smurf. For low values of k and high values of a we observe instead the accumulation of Smurfs in the population. 1.4.6 Criticisms of the model [START_REF] Bitner | Predicting death by the loss of intestinal function[END_REF] found results contrasting with the ones described above. More specifically, they claim that 1) the dye has a negative effect on the flies' longevity and 2) only a fraction of flies turns blue prior to death. The study follows five different wild-type populations of Drosophila -males and females -on a control banana molasses food and five different blue dyes. Amongst them is the FD&C #1, reported under the name of Alfachem blue. Every fly of each population is placed individually in plastic straws closed with a pipette tip and scored daily for Smurf phenotype appearance and death. I will now discuss the two reported claims separately. • The dye affects longevity. I will discuss only the results regarding the FD&C #1, as it is the one used in the results illustrated above. [START_REF] Bitner | Predicting death by the loss of intestinal function[END_REF] show that the mean lifespan of control flies is higher than that of the flies exposed to the dye across the five populations tested, with a difference that varies between 4.9 and 9.8 days. However, some points need to be elucidated: 1) by testing the effect of sex on longevity through a linear model, they found that sex does not influence longevity. Therefore, they say they would not consider sex for further analysis. However, this could be a risky choice given the number of individuals of each sex per line (∼ 20-30 flies) used for the estimation.
Male and females have been shown to present a lifespan gap in multiple species [START_REF] Austad | Sex Differences in Lifespan[END_REF], and it is standard procedure to monitor their survival curve separately. Therefore, I assume that the data shown for the mean lifespan of the populations refer to a mixture of males and females (not clarified by the authors). Longevity curves are also not shown. 2) If males and females are mixed, the total number of flies used per population per condition varies between 54 and 58. Without any replication, it is difficult to firmly assess any significant effect on longevity with such a low number of individuals. • Smurfness is not necessary to die. The results presented in [START_REF] Bitner | Predicting death by the loss of intestinal function[END_REF] show how only a percentage of flies turns Smurfs prior to death. This percentage seems to depend on the population and the dye, never reaching 60%. This is in contrast with the studies described in the previous section. Reproducibility is essential to prove a scientific experiment as valid, but it needs to be pursued following the same conditions of the experiment to validate. In this case, two main issues arise in the way the experiment is conducted by [START_REF] Bitner | Predicting death by the loss of intestinal function[END_REF]. First, the flies are stored individually in thin tubes closed with pipette tips. The food is stored at the end of the tube, in quantity per tube not specified, and changed every three days. Such procedure differs from the standard longevity assay where the flies, in a group or individually, are kept in larger and closed vial [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Although this could seem a detail, it is an important factor to control. Using thin tubes, though long enough for the flies to fly from one side to another, increases the chances for the insect to stick on the side of the tube with its wings and introduces therefore a stress factor. Furthermore, using open tubes allows ventilation, which could cause dry stress for the fly. In particular, when the food dries, it becomes more challenging to eat, introducing a second possible bias in the experiment. The two described factors possibly affect how the flies are dying and the food availability prior to death -both important for Smurf phenotype detection -, but are not commented in the paper. Interestingly, the authors claim that the Smurf population's demographic and the phenotype's dependency from chronological age has never been investigated at the date of their publication (2020), which is false. They also add that, according to their study, most of the Smurfs are dying within one day from transition, which limits its application in studies. Once more, this is no surprise, as it has been shown in the tested populations that Smurfs have an exponential decay with a T 50 of 2.04 days. Studies that imply Smurfs sampling need to consider that and generate sufficient N flies, with N depending on the number of Smurfs that need to be collected, taking into account the ageing pattern of the population. This study shows interesting results, but they look too weak to support their claim that the model is not working. The authors also show a scarce knowledge of the Smurf model as described so far. Aim of the study To recapitulate, the Smurf phenotype has been proposed as a non-invasive biomarker for biological age in three commonly used model organisms. 
Furthermore, preliminary studies aiming at detecting a biphasic behaviour similar to that of the Smurfs have also been conducted in mice. Nevertheless, the phenotype has been characterized in more depth in Drosophila, where the expression of some age-related genes has been investigated. In order to prove the biphasic non-Smurf/Smurf behaviour of ageing-related changes, and therefore validate the ability of the model to predict the individual's biological age, further investigations are needed. The laboratory in which I conducted my PhD research focuses on characterizing the Smurf phenotype and how to exploit it for better understanding ageing. My work complements that of the laboratory, focusing on characterizing the gene expression changes associated with the Smurf phenotype. More specifically, my PhD aims to characterize the Smurf transcriptome compared to the non-Smurf one. I divide my research into three main questions: 1. Can we detect a biphasic behaviour in the gene expression of Smurfs and non-Smurfs? 2. How is the gene expression of the Smurfs and non-Smurfs affected by time? 3. Can we detect novel genes affecting longevity by looking at Smurfness? Chapter 2 will be dedicated to the results concerning the first two questions. Chapter 3 will instead be dedicated to the experimental work conducted for the third question. I will then discuss the results and propose further developments of the project in Chapter 4. Chapter 2 Smurf transcriptional signature and ageing We decided to assess the behaviour of the transcriptome in Smurfs and non-Smurfs at different ages by performing bulk RNA-sequencing (RNA-Seq). The data collection and the sequencing were done prior to my PhD. I, therefore, did not take part in the generation and sampling of the flies, with my contribution starting from the pre-processing of the data (section 2.2.1). Experimental design RNA-Seq was performed on the whole body of the flies. Preliminary data suggested Smurfness is a systemic transition. Therefore, before going to a tissue-specific level, we wanted to characterize the core changes detectable at the level of the whole organism. The drs-GFP Drosophila line was used for the transcriptomic experiment. This line is a GFP reporter for the expression of the AMP drosomycin (drs). Three main reasons are behind this choice. First, AMPs are overexpressed in Smurfs, with apparently no trend with age for the non-Smurfs (figure 1.11). It was therefore thought that looking at the expression of inflammatory genes could be a good alternative method to select Smurfs and get rid of the bias introduced by a "by-eye" selection (see section 1.4.1). Preliminary data have shown that Smurf selection by the drs reporter leads to the identification of the same pattern of expression for the genes tested and shown in figure 1.11 [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF]. If a biphasic signal is detected in the transcriptomic data from Smurfs and non-Smurfs, it would then be possible to check whether the same signal is found when sampling flies by drs expression instead of "by eye". The use of the drs-GFP line allows this experiment to be performed on the same genotype, thereby avoiding a possible line-related bias in the experiment. Secondly, at the same time, and for the same reason, proteomic data were generated from the same line. Last, the line presents a relatively short lifespan, with a T 50 of ∼ 30 days.
The experiment was performed using mated females for consistency with previous data generated by the laboratory. Six vials of 30 mated females each (N = 180) were generated to follow the line's longevity. In parallel, ∼ 1200 flies belonging to the same population, divided into vials of 30 individuals each, were used for the Smurf and non-Smurf sampling. Smurfs and non-Smurfs were sampled at three time points corresponding to ∼ 90%, 50% and 10% survival of the population, which correspond to 20, 30 and 40 days old flies for the given line. In figure 2.1 I report the longevity curve of the population used for the experiment and the sampling times. Each sample corresponds to a group of 8 flies. From now on, when I refer to one sample, the actual number of individuals sampled is therefore 8 flies. Flies were sampled at different ages to investigate the effect of time on the transcriptome of non-Smurfs and Smurfs. Smurfs were also sampled at different moments after the transition to explore the dynamics of phase 2 of life. The sampling followed the procedure detailed here. • Transfer of the flies on blue food. All the flies -the ones used for longevity as well as the ones used for sampling -are placed on blue food overnight. • First sampling: mixed samples. The next morning (9 a.m.), 1 Smurf sample and 1 age-matched non-Smurf sample are collected. These samples are called "mixed", given the uncertainty regarding the time individuals spent as Smurfs. Indeed, the moment we place flies on blue food, we can detect individuals with impaired intestinal permeability. However, those might have lost their intestinal functionality before, without us being able to see it in the absence of the dye. In the mixed samples, we have no information about the time passed since the entrance into the Smurf phase. After sampling, all the Smurfs left were removed from the vials and flies were kept on blue. • Second sampling: 5 hours Smurfs. At 2 p.m., 5 hours after the Smurf clearance, 2 additional samples were collected. Those flies underwent the transition within the past 5 hours.
At 20 days, no samples were collected; indeed, even with the large cohort of flies available, at 90% survival is not easy to obtain 5 hours Smurfs. In the specific case, the population has an expected presence of ∼ 5% of Smurfs at the given age, which corresponds to ∼ 60 flies given the number used for sampling. After the sampling, Smurfs were cleared from the population and flies were left on blue. • Second sampling: 24 hours Smurfs. 24 hours after (2 p.m. of the following day) Smurfs and age-matched non-Smurfs were sampled. Therefore, these flies are individuals who turned Smurfs within 24 hours from the sampling moment. For all time points, 3 samples per condition were collected. After sampling, flies were immediately frozen in liquid N 2 and stored at -80°C up to the RNA extraction. RNA was extracted following the standard Trizol protocol (see Appendix A.2), and samples were subsequently sent for sequencing. The final data consists of 32 samples. The data structure is schematized in figure 2.1. RNA-Seq analysis 2.2.1 Data preprocessing Samples were sequenced by paired-end using Illumina technology. The library preparation and the sequencing were externalized (Fasteris company), and my work started from the fastq files obtained from the sequencer. fastq is the file format used to store the information about the reads of one sample. In the case of paired-end sequencing, two fastq files are associated with each sample, as each carries the information about one read of the pair for each fragment sequenced. I performed the pre-processing of the data in collaboration with the bioinformatic platform of the Institute Biologie Paris-Seine (IBPS). The analysis was performed on their Galaxy [START_REF] Afgan | The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2018 update[END_REF] server. Prior to any manipulation, to assess the quality of the reads and eventual contamination from other species or Illumina adapters, we ran FastQC on the whole set of raw data. FastQC is a program that checks for different features of the reads, as the quality of the single basis inside the read or the GC content of the sample. For instance, a bimodal distribution in the GC content of the sequenced reads can be a sign of contamination with another species. As the FastQC report showed good results for the quality of our samples, we decided to perform no reads filtering prior to alignment. Alignment was performed using Hisat2 (Kim et al., 2019) on the reference Drosophila genome BDGP6.95. Alignments statistics proved good, with each sample having no less than 83% of the sequences identified as paired-end; amongst them, no less than 85% are uniquely mapping on the genome. After the alignment, each read is assigned to a certain position on the genome. In order to quantify gene expression, we need to annotate the localization of the genes and see how many reads colocalize with a specific gene. Such information is stored in a specific file (.gtf) that needs to match the genome release used for the alignment. The software featureCounts [START_REF] Liao | featureCounts: an efficient general purpose program for assigning sequence reads to genomic features[END_REF] was used to perform the counting. It inputs the Hisat2 output files (BAM extension) and the gtf file, and returns a matrix with the gene counts for each sample. It also returns the assignment statistics; in this case, the "worst" sample sees 86% of the reads assigned. 
This means that most of the reads for all samples are assigned to genes. The quality of the alignment and the counting was conveniently visualized by using the program MultiQC [START_REF] Ewels | MultiQC: summarize analysis results for multiple tools and samples in a single report[END_REF] (see figures B.1 and B.2 in Appendix B.1). At the end of this procedure, we obtained the raw counts matrix of 15364 genes which is the initial step for the biological analysis. The data preprocessing steps are schematized in figure 2.2. ; each fastq file contains information about one of the paired reads (so for each sample we have 2 fastq files). The quality of the reads is checked using FastQC. The reads passing the control go as input for the alignment step. This step, that we performed using Hisat2, also requires as input the reference genome on which the reads will be aligned. The algorithm output files are called BAM. These files store the information about the genomic coordinates on which each read is mapping. By inputing the BAM files and the .gtf file of the genome used for the alignment (containing the information about the genes coordinates), featureCounts counts the number of reads mapping to a certain genes. This gives as output the raw counts matrix, where for each samples are annotated the number of reads mapping to a certain gene. The quality of the alignement and counting can be conveniently checked by using the online tool MultiQC. The raw counts matrix needs to undergo normalization before performing further analysis. Normalization Normalization is essential to correct the technical biases present in the data. In the case of Illumina RNA-Seq, the normalization needs to correct for a bias related to the sequencing lane: certain samples might present a different library size (total number of reads) compared to the others, and the expression of their genes will therefore be consistently higher or lower. There are different normalization methods [START_REF] Dillies | A comprehensive evaluation of normalization methods for Illumina highthroughput RNA sequencing data analysis[END_REF], and the choice requires us to consider the type of analysis we want to perform afterwards. DESeq2 R package [START_REF] Love | Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2[END_REF], which is one of the standards for RNA-Seq analysis nowadays, provides normalization for inter-sample comparison. The method is based on the assumption that most genes are not differentially expressed. DESeq2 computes a ratio between each sample count and the geometric mean of the counts across the sample for each gene. Subsequently, it estimates a correction factor for a given sample by taking the median of the ratios for each gene. If most of the genes are not differentially expressed, the median of the ratios should be ∼ 1. Deviations from this value are due to a technical bias consistently increasing or decreasing the number of counts in the sample. Another commonly used program for RNA-Seq analysis is edgeR (Robinson et al., 2010). edgeR provides a normalization similar to DESeq2, assuming that most of the genes in the samples are not differentially expressed. The method is called Trimmed Mean of M-values (TMM) [START_REF] Robinson | A scaling normalization method for differential expression analysis of RNAseq data[END_REF]. 
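As a toy illustration of the median-of-ratios idea described above (the count matrix below is made up; in practice DESeq2 computes this internally from the raw counts matrix, for instance through its estimateSizeFactors() function), before moving to the TMM method used by edgeR:

# Toy raw counts (genes in rows, samples in columns); values are invented.
counts <- matrix(c(100, 200,  50, 400,
                   110, 230,  55, 450,
                   300, 610, 160, 1200),
                 nrow = 4,
                 dimnames = list(paste0("gene", 1:4), c("s1", "s2", "s3")))

# Geometric mean of each gene across samples (genes containing a zero drop out).
geo_mean <- exp(rowMeans(log(counts)))
keep <- is.finite(geo_mean) & geo_mean > 0

# Size factor of a sample = median of the ratios count / geometric mean.
size_factors <- apply(counts[keep, , drop = FALSE] / geo_mean[keep], 2, median)

# Normalized counts: each column divided by its size factor.
norm_counts <- sweep(counts, 2, size_factors, "/")
size_factors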
To calculate the correction factor, TMM takes one sample as reference and the others as tests, and computes the weighted mean of the log ratios between the test and the reference. The most expressed genes and the ones with the largest log ratios are excluded from the computation. Similarly to what I previously explained, the TMM should be ∼ 1 in the absence of technical bias. The TMM is then used to rescale the library sizes and the raw counts. Exploratory analysis In order to first explore the data and see how samples group together, I performed a Principal Component Analysis (PCA). PCA is a dimensionality reduction technique that helps to visualize high-dimensional data by creating new features grouping the existing ones. In the RNA-Seq case, a sample can be seen as a vector of n features, where the features are the expression levels of the genes. In order to visualize whether samples have similar expression profiles, we would need to plot them in the space of n dimensions. This quickly becomes impractical, and impossible when we deal with thousands and thousands of genes. With the PCA, genes with similar expression are "grouped" together in a single principal component (PC). The first PC explains the most variance in the dataset, and it is theoretically formed by genes whose expression is relevant for a particular biological condition. In the case of RNA-Seq data, prior to performing PCA we need to apply a transformation to stabilize the variance. Indeed, the raw counts have a variance that grows with the mean (for a Poisson distribution the variance equals the mean), so genes with higher expression will also have a higher variance. This has no biological meaning and would therefore create a bias in the analysis. I used the variance stabilizing transformation (vst) implemented in DESeq2, which runs fast on large datasets. I subsequently selected the top 5000 genes in terms of expression variance (normalized counts) and performed the analysis on this group. Results are shown in figure 2.3. The samples are here plotted in the space of the first two principal components (PCs). The first component explains 38% of the variance against the 15% explained by the second. Colour indicates the Smurf status of the sample, while the symbols indicate the age. Each sample is associated with an acronym specifying the time after the transition (5h = 5 hours, 1d = 1 day and M = mixed, for which the information is not available) and a unique letter or number identifying the sample itself; for tracing the samples easily, this index has been retained from the original fastq data. From this PCA, we can conclude the following: • Smurfs and non-Smurfs transcriptomes are different. The first PC almost perfectly separates the Smurf samples from the non-Smurf ones. Therefore, we can affirm that, from a qualitative point of view, Smurfs and non-Smurfs are characterized by different gene expression patterns. The fact that the first PC summarizes this suggests that Smurfness is the biological variable most affecting the transcriptome in our dataset. • Age has an effect. If we now focus on the single groups, the non-Smurfs tend to cluster by age and distribute along the second PC.
There could also be a trend with age along the first component, with older samples having values closer to 0. The situation is slightly different for the Smurfs, where the young and mid-life samples group homogeneously, while the 40 days samples present higher values of the second PC (possibly an age-related signal) and lower values of the first PC. This suggests they are similar to the old non-Smurf samples. To conclude, the PCA suggests that both Smurfness and age impact the gene expression of the flies, with Smurfness contributing the most. Indeed, at least for all the young and mid-life samples, a Smurf is more similar to a Smurf of a different age than to an age-matched non-Smurf (and vice versa). In order to confirm that those results are not somehow specific to the PCA approach, I performed another dimensionality-reduction analysis (tSNE, or t-distributed stochastic neighbour embedding). Contrary to PCA, tSNE is not deterministic (a seed needs to be set to ensure reproducibility) and results are sensitive to the tuning of the perplexity parameter. Briefly, this parameter 1) has to be lower than the number of samples and 2) tells how to balance attention between local and global aspects of the data. Low perplexity values correspond to a smaller number of "neighbours", and vice versa for higher values. Local clusters will therefore be preferred when perplexity is low. tSNE is widely used in single-cell analysis. It has the advantage of highlighting sample separation; this feature needs to be used carefully to avoid creating "artificial" patterns. I ran the analysis using three different values for perplexity (5, 10, 15). Even with an expected difference in local separation, the results show the same pattern as the PCA, in particular the same 40 days Smurf outliers. Results are reported in figure B.3 (Appendix B.1). For further confirmation, I performed unsupervised hierarchical clustering on the samples by taking all the genes into account. The results are shown in figure 2.4. It is interesting to see how also in this case I obtain a good Smurf/non-Smurf separation, except for the old samples. Most of them cluster in an independent group, mixing between the two categories. These results, already suggested by the PCA, show that non-Smurf individuals share on average more similarities in gene expression with each other than a Smurf and a non-Smurf of the same age do. This reasoning does not apply to old individuals, who appear to follow a distinct behaviour. Differential gene expression analysis Once I had assessed that differences are indeed present between Smurfs and non-Smurfs, I moved on to quantifying them through differential gene expression analysis. To this end, I used the R package DESeq2. Even if the old Smurf samples seemed to behave differently from the rest, I decided to retain them for the analysis and group them with the Smurfs. My initial question is indeed to understand whether samples present a biphasic behaviour with Smurfness or not. By removing the 40 days samples that do not seem to differentiate from non-Smurfs as much as the others, I would introduce a bias towards my hypothesis.
By instead retaining them, I can understand if there is a core Smurf gene expression signal, independently of some heterogeneity. Comparing the whole group of 16 non-Smurfs to the whole group of 16 Smurfs results in 3049 genes differential expressed (DEGs) with a FDR (false discovery rate, adjusted p-value for multiple testing) cutoff of 5%. The volcano plot in figure 2.5 allows a fast visualization of the results. By plotting the negative logarithm of the p-value (in this case, the FDR) against the log2 fold change (log 2 FC, computed for Smurfs/non-Smurfs), we can immediately visualize genes that show a change in the condition of interest. In particular, in the right part of the graph, we find genes upregulated in Smurfs, and vice versa. The height of the points gives information on the significance level. In the plot, DEGs are in red. While most of them show a relatively contained change (between 0 and |1| for the fold change), some present extreme values. In order to have a general overview of such genes, I plotted the names of the ones presenting a log 2 FC greater than |2|. Amongst the overexpressed ones, we can recognize immediately two groups: AMPs (the antimicrobial peptides, amongst which also the three shown in figure 1.11) and Hsp genes. The active immune response in Smurfs previously observed (figure 1.11) is therefore confirmed. It adds to this a possible response to unfolded protein (UPR) suggested by the Hsp strong upregulation. The most overexpressed gene is Tim17a1, a component of the TIM/TOM complex for importing protein to mitochondria. However, this gene is lowly expressed, and this might influence the log 2 FC estimation. With a lower fold change showing high significance we find upd2 and Fst. Upd-2, together with upd-3 (also upregulated in Smurfs), is the activator ligand for the JAK/STAT pathway. In the adult Drosophila gut, this pathway is involved in keeping homeostasis and triggering tissue regeneration in case of damage [START_REF] Herrera | JAK/STAT signaling in stem cells and regeneration: from Drosophila to vertebrates[END_REF]. Its chronic activation is involved in the gut dysplasia observed with ageing in flies [START_REF] Li | Preventing Age-Related Decline of Gut Compartmentalization Limits Microbiota Dysbiosis and Extends Lifespan[END_REF]; the same study shows how the modulation of such pathway in the middle part of the midgut (copper cell regione, CCR) extends lifespan. The ligand vein, produced downstream of the JAK/STAT pathway, is significantly upregulated in Smurfs, suggesting an actual activity of the pathway. Frost is involved in the cold response. However, it is also overexpressed in adult diapause -a condition of dormancy triggered by environmental stress and characterized by stress resistance, somatic maintenance and reallocated energy resources, together with reduced senescence and lifespan extension [START_REF] Kučerová | Slowed aging during reproductive dormancy is reflected in genome-wide transcriptome changes in Drosophila melanogaster[END_REF]. The activation of both those signal peptides could support a stressful condition for the individual. Regarding the most downregulated genes, we find genes involved in egg formation (Vm26 and psd ). NimC4 is a transmemebrane phagocytic glial receptor, reported as expressed during development but not in the adult [START_REF] Hakim-Mishnaevski | Glial Phagocytic Receptors Promote Neuronal Loss in Adult Drosophila Brain[END_REF]. 
LManIV is an enzyme predicted to be active in different metabolic pathways and the lysosomes (information retrieved from the database Flybase [START_REF] Larkin | FlyBase: updates to the Drosophila melanogaster knowledge base[END_REF]). TotX belongs to the Turandot gene family, counting 8 members. Out of the seven retrieved in my dataset, five are significantly downregulated (TotX, Totb, Totc, Tota, TotM ). This family of gene is activated by a broad range of stress conditions (bacterial infection, heat shock, oxidative stress) [START_REF] Ekengren | A Family of Turandot-Related Genes in the Humoral Stress Response of Drosophila[END_REF]. Their downregulation in Smurfs seems to contradict the response to stress hinted by the strongly upregulated genes. Obp99a is a small peptide thought to be involved in odorant perception. The odorant-binding protein (Obp) family counts in Drosophila 51 genes. I detect 44 of them in my dataset, out of which 9 are significantly downregulated. Even if the name suggests an expression mostly in the nervous system, Obp genes seem to act in different tissues [START_REF] Arya | Natural Variation, Functional Pleiotropy and Transcriptional Contexts of Odorant Binding Protein Genes in Drosophila melanogaster[END_REF]. Interestingly, in the adult head of Drosophila males, a group of Obps, amongst which Obp99a have been found as downregulated in midlife flies compared to young ones [START_REF] Ratliff | Aging and Autophagic Function Influences the Progressive Decline of Adult Drosophila Behaviors[END_REF]. [START_REF] Arya | Natural Variation, Functional Pleiotropy and Transcriptional Contexts of Odorant Binding Protein Genes in Drosophila melanogaster[END_REF] also demonstrated a significant association between polymorphisms in the Obp19d gene and lifespan. The latter is not significantly downregulated in Smurfs, but this preliminary evidence from the literature may suggest an involvement of this gene family in ageing. The set of DEGs identified is a Smurf specific signature. Indeed, as shown in figure 2.6, they cluster all the Smurfs together independently on their age. Once again, I find the same three outlier samples of the PCA, while the other old Smurfs sample cluster with the younger ones. From the heatmap, we can see a shift in expression between Smurfs and non-Smurfs, with around half of the genes deregulated and half upregulated. The deregulation of a gene alone might not give us enough information. Therefore, to better investigate the list of DEGs, I proceed with an enrichment analysis. Confirmation of the results via edgeR In order to confirm the results obtained with DESeq2, I performed the DEG analysis by using another standard package, edgeR. As explained in 2.2.2, edgeR uses a different normalization (even if based on the same assumptions than DESeq2) and also has a different estimation procedure. I performed the analysis and set the same FDR cut-off of 5%. I obtained 2609 DEGs, 2377 of which overlap with the 3049 detected by DESeq2. The fold change estimation for all the genes is also correlating, as shown in figure B.4 (Appendix B.1). Enrichment analysis Enrichment analysis aims at extracting functional sense from the list of DEGs. In particular, it tests whether in a specific list we find genes belonging to a certain gene set or biological pathway in a higher number than expected. The well-known Gene Ontology (GO) annotation, for instance, can be considered as a gene set. 
It is important to clarify the difference between a gene set and a pathway, as they might sometimes erroneously be used as synonymous. A gene set is an unordered list of genes, classified together given their involvement in a similar biological process; each gene can belong to multiple gene sets if it has multiple functions. A pathway is instead an ordered group of genes, where their interactions are annotated; in a pathway, we have topological information, i.e. we know the spatial organization of the group of genes. A gene can as well belong to multiple pathways. All the gene set analysis can be performed using the gene set or the pathways if we consider them as a simple list of genes; but not all the pathways analysis can be performed on the gene sets. I will talk more about the pathway analysis in section 2.2.6. Gene set enrichment analysis (GSEA) [START_REF] Subramanian | Gene set enrichment analysis: A knowledgebased approach for interpreting genome-wide expression profiles[END_REF] is nowadays a well-known standard for performing enrichment analysis. It takes as input a pre-ranked list of genes (either the whole set of genes or just a subset, as the DEGs) and tests for the significant presence of gene sets at the top or the bottom of the list. If the list is ranked by FC, this estimates gene sets that are upregulated and downregulated. I performed the analysis using fgsea [START_REF] Korotkevich | Fast gene set enrichment analysis[END_REF], a recent and fast implementation in R of GSEA. I ran the analysis by ranking the DEGs list by FC and using the GO "biological process" (BP) categories as gene sets. The analysis provides an enriched score (whose sign represents the direction of the deregulation), the p-value (normal and adjusted) and the set of the core genes providing the enrichment. The method contains randomness, as it estimates the p-value empirically by random permutations of the gene labels (to check how many times a random permutation is more extreme than the obtained result). Therefore, the output can be slightly different if the analysis is repeated starting from the same input. The authors recommend therefore to set a seed for reproducibility. In addition, I decided to perform the analysis 50 times and retain only the categories that appear 80% of the times. I chose manually such thresholds, which proved sufficient for considering the analysis reliable without being too restrictive on the cutoff. Results are shown in figures 2.7 and 2.8.In the represented network, each node is a significantly deregulated GO BP category. Edges are connecting categories sharing the same genes to facilitate interpretation. The hubs can be therefore interpreted as a group of strictly related processes. Please note that I separated the results into two figures for enhancing nodes' readability, but that the two were generated as a single graph and therefore share the same colour scale for the enrichment score. More details on the score and the level of significance of the categories are reported in tables B. 1 and B.2 (Appendix B.1). • • • • • • • • • • • • • • • • • 'de novo' Smurfs upregulated biological processes Results for upregulated categories are presented in figure 2.7 and table B.1 (Appendix B.1). Amongst the biological processes that are upregulated in Smurfs, we find the two main groups that the volcano plot has already suggested: immune response and response to unfolded protein. 
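Before going through the categories in detail, the sketch below illustrates the repeated-GSEA selection described above; the ranked statistic and the two gene sets are simulated stand-ins (in the real analysis the input was the DEGs ranked by fold change and the GO BP collection), so only the procedure, not the gene names, reflects the actual analysis.

library(fgsea)

set.seed(1)
# Simulated ranked statistic: a named vector standing in for the log2 fold
# changes of the DEGs; the first 40 genes are pushed up, the last 40 down.
stats <- setNames(c(rnorm(40, 2), rnorm(920, 0), rnorm(40, -2)),
                  paste0("gene", 1:1000))

# Two toy gene sets standing in for GO BP categories.
gene_sets <- list(
  up_set   = paste0("gene", 1:40),
  down_set = paste0("gene", 961:1000)
)

# Run fgsea several times and keep only the categories called significant
# (FDR < 5%) in at least 80% of the runs, since the empirical p-values can
# fluctuate slightly between runs.
n_runs <- 50
hits <- replicate(n_runs, {
  res <- fgsea(pathways = gene_sets, stats = stats, minSize = 10, maxSize = 500)
  res$pathway[res$padj < 0.05]
}, simplify = FALSE)

freq <- table(unlist(hits)) / n_runs
robust_categories <- names(freq)[freq >= 0.8]
robust_categories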
As far as we know, Drosophila lacks an adaptative immune response, and therefore it is not a proper model to investigate the change with ageing in adaptative immunity. On the other hand, it has two pathways mediating the innate immune response: Toll (Gram-positive and fungi response) [START_REF] Lemaitre | The dorsoventral regulatory gene cassette spätzle/Toll/cactus controls the potent antifungal response in Drosophila adults[END_REF], and Immuno Deficiency (Imd, Gram-negative response) [START_REF] Lemaitre | A recessive mutation, immune deficiency (imd), defines two distinct control pathways in the Drosophila host defense[END_REF][START_REF] Dushay | Origins of immunity: Relish, a compound Rel-like gene in the antibacterial defense of Drosophila[END_REF]. In order to have a clearer idea of the genes pushing the group to be upregulated, I extracted the upregulated genes belonging to the largest groups of "immune system process" and "defence process". I expected those groups to contain the more specific categories such as "defence against Gram-negative bacteria" etc. This resulted in 178 DEGs being selected, out of which 121 were upregulated. This gives an idea of the extension of such deregulation. In order to visualize in a simple way the core genes related to the inflammation categories I mentioned above, I extracted from database KEGG the Toll and Imd pathway (dme04624) and plotted the alterations on the pathway itself. The result is presented in figure 2.9. We can observe a general upregulation of the pathway at different levels, from the activators to the transcription factors (both rel -the homologous of Nf-κBand dl -which plays the same role in the Toll pathway-) and the downstream AMPs targets. Interestingly, we find also some genes which negatively modulates the pathway, such as PGRP-SC and pirk. Those are genes expressed downstream of the pathway and play a negative feedback role. PGRP proteins have a double role, as they recognize and attack the peptidoglycans of the bacteria's wall. However, from the over-expression pattern of the pathway, the modulation carried out by those factors does not look sufficient to mitigate the inflammation. As I have discussed in section 1.2.1, a heightened and chronic inflammation state have been listed amongst the hallmarks of ageing. In our dataset, we can 58 detect it as a core Smurf signal. The other main group is the response to unfolded protein, which I more generally indicated with the name "stress response" in figure 2.7. I extracted the genes associated to all the categories. This procedure leads to 50 genes, out of which 12 are heat shock genes (11 upregulated). Interestingly, 7 CCT genes (Chaperonin Containing Tailless complex polypeptide 1) are lowly but consistently downregulated in the Smurfs. The 8 CCT subunits, coded by as many genes, act together as an oligomer of two rings. Contrarily to the heat shock family, which acts on a wide range of substrates, CCT proteins are more specific. In particular, they are involved in the polymerization of actin and tubulin [START_REF] Sternlicht | The t-complex polypeptide 1 complex is a chaperonin for tubulin and actin in vivo[END_REF], but CCT5 is also playing a role in actin and actin-binding proteins transcription [START_REF] Elliott | A novel function of the monomeric CCTε subunit connects the serum response factor pathway to chaperone-mediated actin folding[END_REF]. Only CCT2 is not significantly downregulated in Smurfs. 
However, it is important to specify that we are looking at the mRNA expression level. Therefore, this might not translate to a protein level, especially with a low downregulation. It is nevertheless worth mentioning that [START_REF] Kim | TRiC/CCT chaperonins are essential for organ growth by interacting with insulin/TOR signaling in Drosophila[END_REF] have shown how the CCT transcripts level is regulated by the insulin/TOR signalling activity, with rapamycin treatment reducing the level of their expression. Another upregulated gene interesting to mention is the Xbp1 transcription factor, which is involved in the UPR in the endoplasmatic reticulum (ER). Inside the stress response group, we can also notice a few categories related to the response to changes in oxygen levels, with three categories related. By looking at the DEGs present there, I find genes encountered already in other categories, such as AMPs and Hsp, making it difficult to understand if this is a "off-target" category. I find in this category also chico, slightly upregulated in Smurfs. The protein chico is phosphorylated by the insulin receptor (InR) as the first step in the insulin pathway. Chico is a longevity gene in Drosophila, with mutations reducing its activity has been shown to extend lifespan [START_REF] Clancy | Extension of life-span by loss of CHICO, a Drosophila insulin receptor substrate protein[END_REF]. Another group of categories that we find upregulated is the protein localization to mitochondria. This category is not straightforward to interpret. I find genes of the TIM/TOM complex, some with a negative and some with a positive FC, in all cases close to 0. Here, the group seems to mainly be driven by the Tim17a1, already discussed in section 2.2.4. In the following section (2.2.5.2) I will illustrate how genes involved in mitochondrial metabolism show general downregulation in Smurf. The upregulation of the genes involved in the protein import system to this organelle could be a way to try to cope with a decreased presence of mitochondrial related proteins. Even if it does not appear as an enriched category, I will briefly mention the glutathione metabolism. Indeed, multiple Gst (Glutathione S-transferase) genes are upregulated. Glutathione (GSH) is a molecule used by animals and plants as reducing power in oxidation-reduction (redox) reactions, and it is a tool for detoxification case of oxidative stress. It can exist in two forms: oxidized and reduced. Gst is the enzyme in charge of oxidizing the GSH to detoxificate another substrate [START_REF] Hopkins | On an Autoxidisable Constituent of the Cell[END_REF][START_REF] Deponte | Glutathione catalysis and the reaction mechanisms of glutathione-dependent enzymes[END_REF]. In Smurfs, 29 genes belonging to this category are deregulated, amongst which 20 upregulated Gst and one upregulated enzyme involved in the first step of glutathione synthesis (Glcl ). Even if the fold changes are low, the consistent upregulation of the group could suggest an oxidative stress condition faced by the organism. Smurf down-regulated biological processes There are several processes that appears downregulated in Smurfs. Results are presented in figure 2.8 and B.2 in Appendix B.1. The first group is related to fertility. This confirms what was shown by [START_REF] Rera | The Smurf transition: new insights on ageing from end-of-life studies in animal models[END_REF] regarding a significant decrease in fertility in Smurfs midlife individuals compared to age-matched non-Smurfs. 
These categories group 33 genes, mostly downregulated. We find genes playing a structural role in the formation of the egg, such as the Vm26Aa and Vm26Ab, and genes having a regulatory function. An interesting one is armi, which shows a light downregulation in Smurfs. This gene encodes for a helicase involved in the PIWI-interacting RNAs (piRNAs), a class of small RNAs suppressing transposable elements in the germline. Connected to these subcategories, we also have nodes related to extracellular matrix (ECM) organization. Apart from some genes involved in the eggshell formation that I have mentioned before, we find downregulation of laminin (LanB1 and LanB2 ), a structural component of the nucleus and collagen, Col4a1 and vkg. Interestingly, there is the upregulation of the two metalloproteinases Mmp1 and Mmp2, whose function is degradation and remodelling of the ECM. As the data are coming from the whole body of the flies, we cannot know if this is a general signal or a tissue-specific one. However, the Smurf phenotype is assessed by an alteration in intestinal selective permeability, suggesting an alteration on the gut epithelium. [START_REF] Kiss | Drosophila type IV collagen mutation associates with immune system activation and intestinal dysfunction[END_REF] have shown how Col4a1 mutants present a premature loss of intestinal integrity and increase of inflammation markers in the gut. We then find a broad metabolic group. The first subgroup is the ecdysteroid hormones metabolisms. Ecdysteroids (hormones produced starting from cholesterol) are synthesized throughout the insect's life. It has been suggested that ecdysone, together with juvenile hormone (JH), is involved in the lifespan increase observed under reduced insulin signal; indeed, the production of ecdysone is impaired in flies with insulin receptor mutants [START_REF] Tu | Deletion of the Mitochondrial Superoxide Dismutase sod-2 Extends Lifespan in Caenorhabditis elegans[END_REF]. Ecdysone levels are modulating lifespan in a sex-specific manner: lifespan extension has been observed in males, but not in females [START_REF] Tricoire | The steroid hormone receptor EcR finely modulates Drosophila lifespan during adulthood in a sex-specific manner[END_REF]. In our case, we observe a consistent deregulation of genes involved in ecdysone and other ecdysteroid hormones, such as sad, spo, and phm. However, ecdysone in adult females is well characterized for its role in the ovaries [START_REF] Niwa | Enzymes for ecdysteroid biosynthesis: their biological functions in insects and beyond[END_REF] : the downregulation of the enzymes involved in its construction could therefore be due a general impairment in the gonads of the Smurf flies. Lipid biosynthesis also has a negative enrichment score, with 69 genes belonging to the category, out of which 47 downregulated. It is a low but consistent downregulation that affects various genes involved in fatty acid biosynthesis. Amongst them, it is worth mentioning: FASN1, fatty acid synthetase catalyzing the de novo biosynthesis of long-chain saturated fatty acids; ACC, catalyzing the synthesis of malonyl-CoA from acetyl-coA, the first step of lipid biosynthesis; eloF, involved in the chain elongation. We also find the downregulation of Acsl, Acyl-CoA synthetase long chain: this enzyme is involved in fatty acid degradation, as it catalyzes the activation of the fatty acid prior to the entrance in the mitochondria through the carnitine shuttle. 
Fatty acids synthesis occurs in the cytosol, and depends on the availability of acetyl-coA, which is also an essential step in the Krebs cycle, by reacting with the pyruvate produced by glycolysis. Furthermore, acetyl-coA levels have also an effect on the epigenome, as acetyl-coA is used as substrate for histone acetylation [START_REF] Etchegaray | Interplay between Metabolism and Epigenetics: A Nuclear Adaptation to Environmental Changes[END_REF]. Interestingly, we also have mitochondrial metabolism, in particular cellular respiration. These categories contain 28 downregulated genes. The electron transport chain (ETC) in the mitochondrial inner matrix is the mediator of cellular respiration. It is composed of five complexes: the first four (NADH dehydrogenase -Complex I -, succinate dehydrogenase -Complex II -, cytochrome c reductase -Complex III -, cytochrome c oxidase -Complex IV -) mediates the H + gradient formation between the mitochondrial membranes. The last one (ATP synthase) exploits such a gradient to produce ATP. All those complexes are composed of multiple subunits and are partially encoded by the mitochondrial genome. We find 27 downregulated genes coding for subunits of Complex I, 2 of Complex II, 3 of Complex III and 3 of Complex IV. Let's suppose these changes actually correspond to a protein downregulation. In that case, Smurf metabolism is probably suffering from mitochondrial dysfunction. If the aerobic metabolism of mitochondria is impaired, the cell could switch to an anaerobic one, with the conversion of pyruvate (the final product of glycolysis) to lactate. Interestingly, the enzyme catalyzing such reaction (Ldh, lactate dehydrogenase) is upregulated in Smurfs. Interestingly, this reminds of the Warburg effect observed in tumors, where cells, even in presence of enough oxygen, switch their glucose metabolism from aerobic to anaerobic [START_REF] Warburg | THE METABOLISM OF TUMORS IN THE BODY[END_REF][START_REF] Deberardinis | We need to talk about the Warburg effect[END_REF]. The changes in metabolism occurring with ageing could contribute to the increase risk of cancer: ageing and cancer do indeed show overlapping metabolic changes [START_REF] Tidwell | Aging, Metabolism, and Cancer Development: from Peto's Paradox to the Warburg Effect[END_REF]. More details on the metabolism and Smurfness will be discussed in section 2.2.7. I grouped the ribosome biogenesis and proteolysis under the general name of proteostasis. The proteolysis is a large category, where we find 59% of genes downregulated. Even if this does not account for a vast majority of the group, there are some family of genes that are consistently downregulated. Interestingly, 16 out of 16 detected Jonah genes in our dataset are significantly downregulated in Smurfs. Jonah proteins are a family of serine endopeptidases. Not well characterized, these genes are expressed in larval and adult Drosophila gut [START_REF] Carlson | Developmental and functional analysis of Jonah gene expression[END_REF]. Similarly, we find another compact group of 10 trypsin-like endopeptidases. Although many other genes with similar activity are present in the list, those are the only two big families consistently deregulated in Smurfs. It is not easy to make sense or propose a hypothesis on the rest of the list by looking at the single genes. The categories of ribosome biogenesis present 47 genes, 45 out of which downregulated. In all cases, it is a low but consistent deregulation. 
It affects genes with a structural function in the ribosome and genes coding for rRNA-processing proteins. Three mitochondrial ribosomal proteins are also downregulated. Before integrating and commenting on these results, I will describe a complementary pathway analysis, performed to 1) validate these results and 2) detect possible deregulations that the DEG analysis might have missed.

Pathway analysis

As briefly described in section 2.2.5, pathway analysis aims at discovering deregulated biological pathways (such as those annotated in KEGG or Reactome) in a particular biological condition. Such analysis can be topological or non-topological, depending on whether the structure of the pathway and the gene-gene interactions are taken into account. Pathway analysis has the advantage of taking the whole dataset as input, without pre-filtering or cut-offs. It does not operate at the level of single genes and is therefore more sensitive in detecting biological pathways whose genes all move in a specific direction, even if slightly; such a signal can still be of biological relevance and yet be missed by the DEG analysis, which considers each gene as an isolated entity. Here I will present the results of a non-topological pathway analysis performed with the GAGE [START_REF] Luo | GAGE: generally applicable gene set enrichment for pathway analysis[END_REF] method. The analysis returns 9 upregulated and 27 downregulated pathways (FDR cut-off 5%). As shown in figure 2.10, among the downregulated pathways we find terms similar to those already explored in the enrichment analysis. In addition, the list suggests a broader deregulation of metabolism than previously seen: glucose metabolism looks generally downregulated, from glycolysis to respiration; fatty acid degradation, and not only biosynthesis, is affected; many amino acid metabolic pathways present genes less transcribed in Smurfs; and so does the pentose phosphate pathway, important for the synthesis of NADPH and of the pentoses that are then used to build nucleotides. Moreover, NADPH is involved, amongst others, in the anabolism of the GSH (glutathione) mentioned above. These results confirm that Smurf flies experience a broad downregulation of genes coding for metabolic enzymes. The analysis also extends the previous results. Concerning the pathways upregulated in the Smurfs, some hits did not appear in the enrichment analysis: in particular, the endocytosis pathway, autophagy and glycerophospholipid metabolism (which partially overlaps with "ether lipid metabolism"). Concerning autophagy, there is a light general upregulation of the genes involved. However, the log2FC estimated by DESeq2 is in most cases extremely low (∼ |0.01|, in grey in the map), which might hint that the broad upregulation of the pathway is not biologically relevant (figure B.5 in Appendix B.1). The endocytosis hit is mainly composed of genes involved in endosomal formation and function. The genes coding for the insulin receptor (InR), the insulin receptor substrate chico and Thor present a significant upregulation also in the DEG analysis. This is interesting, as InR and Thor (indicated in the pathway as 4E-BP) were detected as upregulated also in previous Smurf studies (figure 1.11). As explained in [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF], they are targets of dFOXO and are activated when the insulin signal is reduced.
Therefore, my results validate previous work and suggest a reduced IIS in the Smurfs.

Integration with metabolomic data

2.2.7.1 The data

The laboratory post-doc Céline Cansell, in collaboration with the Gustave Roussy Institute metabolomics platform, has recently generated metabolomic data (mass spectrometry) from a large cohort of flies, males and females of different lines. Amongst them, we find males and females, non-Smurf and Smurf, from drs-GFP (at a single age, corresponding to the T50). As the whole dataset has not been fully analyzed yet, I decided to proceed with a simple check on a few metabolites corresponding to the transcriptome results. I focused only on the line corresponding to my RNA-Seq data, and on females. I had 14 samples available, 7 non-Smurfs and 7 Smurfs. Most of the samples are a mixture of 20-30 flies, apart from one Smurf sample consisting of only 8 flies. As this sample behaved like an outlier in a first preliminary check, and as the difference in the number of flies compared to the others was high, I decided to remove it and to continue with 7 non-Smurf and 6 Smurf samples.

Exploratory analysis

I was provided with a dataset of 202 metabolites, with already preprocessed values estimating their concentration. To see if I could detect a separation in the samples as in the transcriptomic data, I performed a PCA (figure 2.12). Once again, we observe a separation between Smurfs and non-Smurfs, which suggests a different metabolic profile. The separation between the two categories is clear, as I previously showed with the transcriptome for young and midlife flies.

Results

The gene expression analysis highlighted a broad metabolic deregulation in Smurfs. Different enzymes involved in fatty acid biosynthesis and degradation, and in oxidative phosphorylation, resulted as DEGs. The GAGE analysis also detected the Krebs cycle (or TCA cycle) as downregulated. Among the metabolites detected in the analysis, I looked for the ones directly connected to the genes showing deregulation. Results are shown in figure 2.13 and described hereafter. The metabolic processes in which the genes and metabolites discussed here are involved are schematized in figure 2.14.

• Lactate. As briefly described in section 2.2.5.2, pyruvate can undergo aerobic metabolism (Krebs cycle and oxidative phosphorylation, with consequent NADH and ATP production) or anaerobic metabolism (fermentation to lactate, which regenerates NAD+ without further ATP production). The second reaction occurs in the cytoplasm, while mitochondria carry out the aerobic metabolism. Somatic cells in the adult organism normally rely on aerobic metabolism. The fermentation path is preferred in case of an immediate need of fast ATP production by glycolysis, as for instance happens in muscles during exercise. In Smurfs, the gene Ldh, coding for the lactate dehydrogenase, is significantly upregulated; interestingly, lactate is also significantly overrepresented in Smurfs. So the possible metabolic shift speculated on in section 2.2.5 could be confirmed by these results. However, the glycolytic genes are not upregulated in Smurfs, and the pathway is even generally found as downregulated (section 2.2.6). Therefore, the active fermentation that we observe in Smurfs could be a simple response to mitochondrial dysfunction rather than a metabolic switch.

• Electron transport chain. The transcriptomic data suggest downregulation of the ETC and the Krebs cycle. The succinate dehydrogenase enzyme is Complex II of the ETC, but it is also an enzyme of the Krebs cycle.
In figure 2.13 (point b), I show the downregulation of two subunits of the succinate dehydrogenase enzyme (Complex II of the ETC) and the accumulation of its upstream metabolite, succinate, in Smurfs. The downregulation of the Sdh genes is low, but it can be considered reliable given the general downregulation of the pathway. In both cases, the data are noisy and I cannot detect a change in the Smurfs; further analysis might be needed.

[Figure 2.13: expression of Ldh, SdhA, SdhC, ACC, Acsl and FASN1, and levels of the corresponding metabolites (lactate, succinate, acetyl-CoA, L-carnitine, palmitic acid), in non-Smurf (NS) versus Smurf (S) samples.]

• Fatty acid metabolism. Another signal coming from the transcriptional data is the downregulation of fatty acid metabolism. I have tried here to look at genes and metabolites that interact directly. I considered three already discussed genes (section 2.2.5.2): ACC, Acsl and FASN1. The three present a significant downregulation in Smurfs. I then looked at the concentration of three metabolites: acetyl-CoA, L-carnitine and palmitic acid. Acetyl-CoA is the connection between glucose and fatty acid metabolism: its conversion to malonyl-CoA (catalyzed by ACC) is the entry point of fatty acid biosynthesis; vice versa, fatty acid degradation releases one acetyl-CoA molecule for every two carbons of the chain, and the acetyl-CoA produced in this way fuels the Krebs cycle. Acetyl-CoA levels are noisy in our data, and no significant change can be assessed. The carnitine shuttle mediates the entry of fatty acids into mitochondria prior to degradation. Such a mechanism requires the activation of the fatty acid, performed, amongst others, by Acsl. We observe in Smurfs both a downregulation of the gene and a decrease in the concentration of carnitine, suggesting that the first steps of lipid degradation might be impaired in Smurfs. Palmitic acid is a saturated long-chain fatty acid with a 16-carbon backbone. We see a significant decrease in its concentration in Smurfs, which could confirm the downregulation of fatty acid biosynthesis suggested by the decreased levels of ACC and FASN1. This last result also confirms the decreased level of palmitic acid in Smurfs observed in [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF].

These preliminary results are encouraging, but I would like to highlight that they are indicative, and a more exhaustive and general analysis is required to integrate the two datasets. This could restrict the hypotheses generated by the transcriptomic dataset to fewer processes that can be tested experimentally.

Figure 2.14 (caption excerpt): ...producing the NAD+ that will then be re-used by glycolysis. Alternatively, the pyruvate enters the mitochondria and is converted to acetyl-CoA, entering the Krebs cycle and producing ATP and NADH through the ETC. This aerobic pathway is downregulated in Smurfs.
Lipid biosynthesis (top left) starts from acetyl-CoA, which is converted to malonyl-CoA by ACC. The elongation step follows (involving the FASN1 enzyme), and the process ends with the complete synthesis of a fatty acid, such as palmitic acid. When fatty acids need to be consumed, they undergo a degradation pathway that partially occurs in mitochondria. Prior to entry, the fatty acid needs to be activated, a process in which the enzyme Acsl is involved. It then enters the mitochondria through the carnitine shuttle and undergoes the final steps of degradation. Those produce acetyl-CoA, which enters the Krebs cycle.

Gene expression noise and ageing

It has been reported that gene expression becomes more heterogeneous with age [START_REF] Bahar | Increased cell-to-cell variation in gene expression in ageing mouse heart[END_REF][START_REF] Perez-Gomez | The Aging Transcriptome: Read Between the Lines[END_REF]. However, not all studies point in that direction [START_REF] Rogina | Regulation of gene expression is preserved in aging Drosophila melanogaster[END_REF][START_REF] Pletcher | Genome-Wide Transcript Profiles in Aging and Calorically Restricted Drosophila melanogaster[END_REF]. In our case, we were interested in understanding whether we could observe any change in gene expression variance with age or Smurfness. We used a simple way to visualize it: I computed, for each gene, the standard deviation within each condition (Smurf/non-Smurf) and age group, and obtained a relative standard deviation by normalizing it to the corresponding group mean expression. I plotted the distribution of these values and compared them across age and condition (Kolmogorov-Smirnov statistic to compare unidimensional distributions, KS); a minimal sketch of this computation is given below, after the figure caption. Results are presented in figure 2.15. As you can observe, age has a clear effect on the distribution. For both non-Smurfs and Smurfs, age shifts the peak towards higher values of relative standard deviation while lowering the peak itself (increased noise in the distribution). Such changes are significant according to the KS statistic. Therefore, we can affirm that, in our dataset, we detect an effect of age on gene expression noise. This is interesting, as it hints at the existence of changes that occur with time and are normally associated with ageing, but are not necessary to die. More will be discussed in sections 2.2.9 and 2.5. Interestingly, it seems that there is also a Smurf-related effect. Indeed, young and midlife Smurfs present a higher peak and a less noisy distribution than their age-matched non-Smurf siblings. However, this difference is diluted by age and, although still significant at 40 days, it no longer looks relevant. This could be due to the fact that non-Smurfs might be, at the moment of sampling, at different biological ages (different times before the Smurf transition), which could lead to heterogeneity in the transcriptome across samples. The fact that age dilutes the difference could show how, by sampling at later ages, we enrich our samples in individuals that are closer and closer to the Smurf transition. More about the re-interpretation of non-Smurfs as pre-Smurfs will be discussed in section 2.4.

Figure 2.15 (caption excerpt): Genes belonging to the 1st quartile of the mean expression distribution are removed from the analysis. For the sake of visualization, the distributions are cut at 0.5 in the plot, but all the values of relative standard deviation are taken into account for the statistic.
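The noise comparison described above is straightforward to reproduce. The sketch below is illustrative only (it is not the code used for figure 2.15) and assumes, as one plausible reading of the procedure, that the relative standard deviation is the per-gene standard deviation divided by the per-gene mean within each group; the names expr and meta are hypothetical placeholders for the normalized expression matrix (genes x samples) and the sample annotation table.

```python
import pandas as pd
from scipy.stats import ks_2samp

def relative_sd(expr: pd.DataFrame, samples) -> pd.Series:
    """Per-gene relative standard deviation (sd / mean) within one group of samples."""
    sub = expr[list(samples)]
    return sub.std(axis=1) / sub.mean(axis=1)

def noise_by_group(expr: pd.DataFrame, meta: pd.DataFrame) -> dict:
    """Relative SD distribution for every (condition, age) group; meta is indexed by sample name."""
    # discard the lowest-expression quartile, as done for figure 2.15
    keep = expr.mean(axis=1) > expr.mean(axis=1).quantile(0.25)
    expr = expr.loc[keep]
    return {
        (cond, age): relative_sd(expr, group.index)
        for (cond, age), group in meta.groupby(["condition", "age"])
    }

# Example: does noise increase with age within non-Smurfs?
# groups = noise_by_group(expr, meta)
# stat, pval = ks_2samp(groups[("non-Smurf", 20)].dropna(), groups[("non-Smurf", 40)].dropna())
```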
Results discussion

In the previous sections, I have presented the results of a standard RNA-Seq analysis to characterize the Smurf samples. I will discuss the effect of age on our samples in section 2.3. Before moving to that, I would like to conclude this section by discussing the results listed so far. In particular, can we answer the initial questions: 1) does Smurfness show a biphasic behaviour, and 2) is Smurfness a good proxy for biological age? I have demonstrated that the Smurf transcriptome is more similar to the one of another Smurf than to the one of an age-matched non-Smurf. However, we can see an effect of time, especially for the non-Smurfs and the old Smurfs. By looking at the DEG analysis mixing all the Smurfs independently of their age, I identified a core Smurf signal. I would therefore affirm that, with the elements collected so far, Smurfs show at least a partially biphasic behaviour, with a stereotyped transcriptome and changes affecting them independently of age. A more complete answer can be given with the results of section 2.3. Regarding the second question, we need to compare the Smurf changes to what has so far been described as ageing-related changes. As I am analyzing mainly transcriptomic data, I will focus on what has been described as the ageing transcriptional signature, for a fair comparison. I will refer to two main studies, Stegeman and Weake (2017) and [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF], two recent reviews of the transcriptional changes identified with ageing in different species. Both reviews agree on the difficulty of easily categorizing a transcriptional signature of ageing: different studies performed in different organisms, tissues and ages often do not overlap. Indeed, different deregulations occur in different tissues, as genes are physiologically expressed at different levels. Nevertheless, there is a conserved set of six changes that Frenk and Houseley (2018) have called "hallmarks of ageing gene expression".

Figure 2.16 (caption excerpt): In the Smurf transcriptome, three of the hallmarks are detected (downregulation of mitochondrial proteins, downregulation of ribosomes, dysregulation of the immune system). A fourth hallmark is partially detected (stress response), while no hits are found to affirm an overexpression of DNA damage related genes or a reduction in growth factors. On the other hand, the dysregulation in gene expression seems a time-related signal rather than a Smurf-related one.

• Downregulation of genes encoding mitochondrial proteins. This hallmark can be linked to the general mitochondrial dysfunction described in section 1.2.1. It is present in the Smurfs, with the deregulation of the Krebs cycle, the ETC, and some mitochondrial ribosomal proteins.

• Downregulation of the protein synthesis machinery. The ribosome hallmark is described as more challenging to detect in high-throughput experiments because of the high expression of ribosomal genes, which can lead to a non-negligible difference in RNA molecules resulting only in a minor fold change that misses the DEG cut-offs. However, in the Smurf case, we have detected GO terms and KEGG pathways related to ribosomes and their biogenesis as downregulated, even if single genes present a low FC. In addition, proteolysis is downregulated, which, together with the upregulation of
the heat shock proteins, hints at the impaired proteostasis described as a general hallmark of ageing [START_REF] Lopez | Analysis of immune-related genes during Nora virus infection of Drosophila melanogaster using next generation sequencing[END_REF].

• Dysregulation of immune system genes. As discussed before, flies lack, as far as we know, adaptive immunity but present an innate one. In Smurfs, we find the over-activation of inflammation, which is described both as a transcriptional and as a general ageing hallmark.

• Reduction in growth factor signalling. This hallmark mainly refers to two categories: cell cycle and "cell growth" (such as the nutrient-sensing insulin/mTOR pathway). The provided examples mainly refer to rodent models and humans, but it is still interesting to see if some of these deregulations can be found in the Smurfs. Concerning genes involved in DNA replication and the cell cycle, since these categories are well annotated in GO and pathway databases, the fact that they did not appear in the analysis is in itself a sign of no deregulation. Concerning the growth factors, I proceeded with a manual check. I downloaded from Flybase the genes annotated in three different "growth" pathways: EGFR (epidermal growth factor receptor) signalling, FGFR (fibroblast growth factor receptor) signalling and insulin signalling. I then checked whether genes belonging to these pathways were differentially expressed in my dataset. As described in section 2.2.6, some genes belonging to the insulin pathway and regulated by FOXO are upregulated, suggesting a reduced IIS. However, the insulin-like peptides (Ilp) do not present significant changes. We cannot detect a trend or change in core genes of the other pathways. Therefore, I would not affirm that we can detect a reduction in growth factor signalling in Smurfs, even if the general downregulation of metabolism does not point to such pathways being active.

• Constitutive responses to stress and DNA damage. This hallmark has no straightforward interpretation, as Frenk and Houseley (2018) report a well-conserved upregulation of heat shock proteins, whereas Stegeman and Weake (2017) are more careful and show an organism-dependent pattern, reporting heat shock protein upregulation in the whole body of worms, and downregulation in the whole body of fruit flies and in some rodent tissues. However, as discussed in the Introduction, various studies showed upregulation of Hsp with ageing in the fruit fly [START_REF] Tower | Heat shock proteins and Drosophila aging[END_REF]. The DNA damage response, on the other hand, is treated in the opposite way. In the hallmarks paper, it is reported as a less conserved process, with an increase of DNA damage in old individuals often not correlating with an increase in transcription of DNA repair genes; in the second paper, the upregulation of DNA repair genes is reported as a conserved feature. The contrasting results might depend on the specific experiments and tissues (survival curve of the population and sampling time, food, etc.). Overall, in Smurfs, we do find upregulation of heat shock proteins, UPR and glutathione transferases, suggesting a stress response. On the other hand, DNA repair is detected in none of the analyses.

• Dysregulation of gene expression and mRNA processing. This hallmark refers to a general increase in gene expression heterogeneity (i.e. noise) with age, and to dysfunctions in RNA pre-processing and maturation.
In the DEG and pathway analyses, categories such as "Splicing" or "mRNA pre-processing" are annotated but never significant. Concerning the increase in gene expression noise, as described in section 2.2.8, we can detect a difference between Smurfs and non-Smurfs, with non-Smurfs presenting a higher heterogeneity. As briefly discussed, this could be due to the non-Smurfs being at different moments of their biological age at the time of sampling, compared to the Smurfs, which are in a late and stereotyped end-of-life phase. However, a relevant and significant difference is observed with time in both Smurfs and non-Smurfs. Our analysis suggests that the increased noise in gene expression could be a time-related effect. Therefore, it could be an ageing-related change that is not necessary for death (i.e. for the entrance in the Smurf phase).

The results presented in this section show that not only do Smurfs present a core signal independent of age, but also that this signal recapitulates more than half of the hallmarks of transcriptional ageing. On the other hand, the analyses performed have also hinted at an effect of age on the gene expression of the flies. In the next part of the chapter, I will focus on the age-related changes in Smurfs and non-Smurfs.

Smurfs and non-Smurfs through time

An intuitive solution to explore age-associated changes is to look at the differences between young and old samples, separating them into the respective Smurf and non-Smurf categories. I will first present the results of such comparisons. I will then discuss how to interpret the changes occurring over time in the non-Smurfs (section 2.4). I will finally show the results obtained in the attempt to identify an age-specific signal distinct from the Smurf one (section 2.5).

Effect of age

Smurf DEGs: an example

As a start, I looked at what happens for different groups of Smurf DEGs. Dividing them by biological pathway, I plotted the expression of the related genes as heatmaps, to easily visualize whether differences were noticeable. An example is in figure 2.17, where the Smurf DEGs belonging to the KEGG "Toll and Imd pathway" (dme04624) are represented.

Figure 2.17: The heatmap of the DEGs mapping to the KEGG "Toll and Imd pathway" shows how the expression of these genes in old Smurfs tends to converge to the one of the non-Smurfs, and vice versa.

The plot shows how the genes related to the inflammatory pathway present, on average, a lower upregulation in the 40 days Smurfs compared to the young and midlife samples. On the other hand, the old non-Smurfs seem to have an increased expression of these genes compared to the young and midlife samples. I am here showing only the inflammatory genes, which exemplify well the general behaviour I observed by looking at the different groups of genes. To investigate the extent of those changes in non-Smurfs and Smurfs over time, I proceeded with differential gene expression analysis within each group.

Non-Smurf samples over time

I first compared the differences occurring over time in the transcriptome of the non-Smurfs. This comparison is also essential to assess whether Smurfness has a biphasic behaviour and is a better ageing predictor than chronological age. If the old non-Smurfs indeed present the ageing changes listed in section 2.2.9, our Smurf model could still be good for the identification of young flies presenting ageing changes, but would fail to be proved an actual ageing model.
Similarly to what was done in section 2.2.4, I performed pairwise comparisons between the different non-Smurf groups. The number of DEGs for each comparison is shown in table B.3 (Appendix B.1). The old vs young comparison (40 vs 20 days) results in "only" 510 DEGs. Note that "only" is placed in quotation marks, as it is a relative statement compared to the 3049 DEGs identified with Smurfness. I then performed the enrichment analysis with the same package and parameters as presented before (section 2.2.5). Results are shown in figure 2.18 and table B.4. Old non-Smurfs present an upregulated immune response compared to younger flies, and a decrease in the expression of oogenesis genes. Therefore, changes are occurring with time in the non-Smurfs. However, these changes are less numerous than those occurring with Smurfness and are included in the Smurf changes (both inflammation and oogenesis are deregulated in Smurfs). So, interestingly, we can detect age-related transcriptional changes (such as downregulation of mitochondrial and ribosome-related genes) in Smurfs, regardless of their age, but not in non-Smurf individuals sampled at 10% survival. This proves that Smurfness can identify, at a transcriptional level, biologically old individuals better than age. It also shows that most of what has been described as a transcriptional marker of ageing is actually a Smurf-related signal. Smurfness could therefore be a confounding factor for many ageing studies, at least in Drosophila.

The fact that the DEGs found in the old non-Smurfs are associated with categories that we find deregulated in the Smurfs made me advance the following hypothesis: are those changes simply age-related, or are they indicative of pre-Smurf changes? The 40 days samples are very close to the transition, as they are sampled at 10% survival, and might present changes that are essential for the Smurf transition. Inflammation and decreased fertility might be amongst them. However, additional light changes might occur and possibly miss the p-value cut-off in the DEG analysis. I will further develop this hypothesis in section 2.4.

Smurf samples over time

I used the same workflow to explore the differences occurring in the Smurfs over time. As expected given the results obtained so far, the young and midlife individuals do not present many differences (see table B.3 in Appendix B.1). Surprisingly, when comparing young Smurfs to old Smurfs I find 2320 DEGs. Once again, I followed the same procedure as for the other comparisons and performed the enrichment analysis. The results are presented in table B.5 and in figures 2.19 and 2.20. Note that the two figures were generated together and afterwards split to improve readability; the colour scale is therefore the same. The first thing worth mentioning is that we do not find specific Smurf categories upregulated in the old Smurfs, except for some metabolic genes; if the trend I noticed for inflammation is real, it might be too weak to be detected through differential gene expression. The majority of the nodes of the graphs in figures 2.19 and 2.20 are associated with RNA processing (such as splicing factors), cell cycle, chromatin organization and response to DNA damage. This signal is not detected in any of the previous comparisons, but is present in the transcriptional hallmarks of ageing as presented in figure 2.16.
Regarding the response to DNA damage, as reported before, the hallmark refers to a constitutive upregulation, although the authors are careful and consider it an uncertain hallmark. In Smurfs, however, we find the opposite result. The dysregulation of RNA processing goes instead in the same direction as what was proposed by [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF]. Why are the old Smurfs carrying such a signal? I mainly thought about three hypotheses:

1. this is an age-related signal, not necessary to enter the Smurf phase; it is therefore carried by the old samples, with an exacerbation in Smurfs that allows its detection;
2. it is somehow specific only to old Smurfs;
3. it is an artifact of the data, even if this is less probable as the signal seems quite strong.

I will further discuss this point in section 2.5.

Figures 2.19 and 2.20 (caption excerpt): In particular, I find terms related to RNA processing and DNA regulation (amongst which chromatin regulation) which are not found when comparing the Smurfs to the non-Smurfs or the old non-Smurfs to the young non-Smurfs. The only upregulated terms concern metabolism, and correspond to deregulations previously associated with Smurfs when compared to non-Smurfs. This suggests that, for such categories, the "dilution" of the Smurf signal with time discussed for figure 2.17 might not be strong enough to be detected by DESeq2. Each node is a GO term, and edges connect categories with overlapping genes.

A pre-Smurf phenotype

As explained in section 2.3.2, not many changes are detected over time in the non-Smurfs. However, the changes found (reduced expression of oogenesis genes and increased expression of inflammatory genes) are shared with the Smurfs. This led me to hypothesize that, rather than age-related changes, these could be pre-Smurf related changes; in other words, changes that are already occurring before we can detect the transition to the Smurf phase with the current biomarker of intestinal permeability. The results presented in this chapter have shown how Smurfs present quite a biphasic and drastic change in gene expression, even at the whole-body level. Furthermore, some ageing-related changes occurring with Smurfness cannot be detected when looking only at non-Smurfs over time. This suggests that, given the features currently used to define an ageing transcriptome ([START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF]; Stegeman and Weake (2017)), the Smurf transcriptome is an ageing transcriptome, while that of an old non-Smurf is not. When we look at Smurfness, as described in the Introduction, we are looking at a very late phase of life that has not been shown to be reversible so far.
A possible way to see it is that Smurfness represents a final collapse of the organism, when the organism itself has lost its resilience and is irreparably committed to death. If that is the case, Smurfness would be a non-plastic pre-death stage. Interventions increasing lifespan would delay the entrance in the Smurf state by prolonging the non-Smurf phase rather than acting on the Smurf phase. In Chapter 3, I will comment more on this latter hypothesis, with experimental evidence pointing in that direction. In such a scenario, it would be interesting to understand the changes leading to Smurfness by looking at what happens to non-Smurfs before the transition. This would mean looking at the biological time ticking in a single individual and finding features (such as the expression of certain genes) that are strong predictors of impending Smurfness. How to do this? It would be ideal to follow and record data from an organism every day as it lives. This would allow us to analyze the data retrospectively once the organism turns Smurf. Unfortunately, in practice, this kind of experiment is impossible, not only for the cost and amount of work required but also because data collection is usually invasive and, in some cases, kills the organism (for instance, in the fruit fly). Note that an approximation of such an experiment is currently ongoing in the lab in mice, for the translation of the Smurf model to that organism (section 1.4.4). I tried to approximate the problem to find a possible answer using my data. The young non-Smurf samples I have are sampled at 90% survival; the old ones at 10% survival. Given the random sampling, the 20 days non-Smurfs are enriched in flies that are further from the Smurf transition than the old ones. The ones sampled at 30 days would be at an intermediate point. Thus, I tried to reconstruct the expression of the Smurf genes through physiological time by looking at the non-Smurf time course as a pre-Smurf signal. This idea is schematized in figure 2.21. As, in the population my data come from, 20 days corresponds to 90% survival, I can imagine that, on average, the non-Smurf samples at this point are representative of an early phase of life. The same reasoning applies to the samples at 40 days (10% survival), which could represent a pre-Smurf phase of life. Samples at 30 days are at the T50, and could therefore be a good approximation of non-Smurf midlife. I assumed that the changes preceding the Smurf transition are probably conserved in the Smurfs themselves, and I therefore focused on the genes that are DEGs in Smurfs. I discarded lowly expressed genes (mean expression across samples < 50 counts, which discards roughly the lowest 20% of the DEGs), as their signal could be noisier and lowly expressed genes would not be the best targets for a pre-Smurf biomarker.

Figure 2.22 (caption excerpt): The expression of each gene is centred on the mean across conditions and time points and divided by its standard deviation, in order to highlight differences. Each point is the average expression of a gene for a given condition and age. The lines in the background connect the same gene over time, and the violin plots help visualize the distribution of the points. (a) Genes upregulated in Smurfs. Over time, the change in expression of those genes in the Smurfs seems noisy, and suggests that, although an increase in expression compared to the non-Smurfs is always present, such an increase is not necessarily always the same for the same gene over age.
Regarding the non-Smurfs, there is a noticeable increasing trend over time for the peaks of the violin plots, hinting that those genes are already experiencing an increase prior to the Smurf transition. (b) Genes downregulated in Smurfs. Regarding the genes downregulated in Smurfs, the first thing that comes to the eye is the very noisy distribution at 40 days for the Smurfs. Some of the genes might therefore not experience a downregulation if we simply compare the non-Smurfs with the old Smurfs: this might also be due to the outliers shown in figure 2.6. In the non-Smurfs, a general downregulation trend over time is less noticeable. However, it might be that some genes show an interesting trend if considered alone.

Figure 2.22 shows the evolution over time of the considered genes in Smurfs and non-Smurfs; upregulated and downregulated genes are separated. By looking at the peak of the distribution in the violin plots, a shift towards higher values is noticeable in the non-Smurfs over time for the Smurf upregulated genes. On the other hand, we can see how the expression in the Smurfs is quite noisy, indicating that not all the genes experience the same increase in expression. For the Smurf downregulated DEGs, no clear trend is visible in the non-Smurfs over time. This general representation is already informative when a trend is noticeable over the whole set of genes, but it could be misleading if no general trend is visible: it might be that only some genes with an important function display such a trend. To investigate this, I opted for a simple approach, fitting the following linear model for the expression of each gene in the non-Smurfs:

g = β0 + β1 t + ε    (2.1)

where g is the expression of the gene, t the time (20, 30 or 40 days) and ε the noise (i.e. the part of g that cannot be explained by time). If time does not affect the expression, I expect the coefficient β1 not to be significant. I obtained 355 genes (out of 2482 tested) for which the time coefficient is significant; 240 belong to the genes upregulated in Smurfs, the remaining 115 to the genes downregulated in Smurfs (see the sketch below for the per-gene fit). I explored this list by mapping the genes to each KEGG pathway and counting how many genes map to a particular pathway. This method has a bias towards large pathways, which can be hit by chance by more genes than smaller pathways. However, the pathways to which most of the genes map might be good indicators of the overall processes involved. I initially tried to map the genes to the GO biological process terms, but the redundancy and the vastness of each category made the interpretation more challenging than the simple list of genes. In figure 2.23 I show the top five KEGG categories for the upregulated and downregulated Smurf groups. We can see how only a few genes are mapped in both cases. The first pathway ranked is "Metabolic pathways" (dme01100), which groups all the metabolic pathways annotated for the organism. It is a large category, in which two genes can map far apart and be involved in different processes. This is the case for the category of upregulated genes, where the mapping genes are dispersed throughout different groups. For the downregulated genes, some are found grouped in the same pathways and reappear in the categories of the presented list. For these reasons, in the following subsections, I will discuss the other pathways found, leaving this first hit aside.
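The per-gene fit of equation (2.1) can be reproduced with any ordinary least squares routine. The sketch below is purely illustrative and is not the code used for the analysis: the names expr_ns (non-Smurf expression matrix, genes x samples), age (vector of sample ages) and smurf_degs (list of Smurf DEGs kept after the expression filter) are hypothetical placeholders, and the 5% threshold on the slope p-value is only an assumption about the cut-off used.

```python
import numpy as np
import statsmodels.api as sm

def time_trend(y: np.ndarray, age: np.ndarray):
    """Fit y = b0 + b1*t + eps over the non-Smurf samples; return the slope and its p-value."""
    X = sm.add_constant(age.astype(float))      # prepends the intercept column b0
    fit = sm.OLS(np.asarray(y, dtype=float), X).fit()
    return fit.params[1], fit.pvalues[1]

# results = {g: time_trend(expr_ns.loc[g].values, age) for g in smurf_degs}
# significant = [g for g, (slope, p) in results.items() if p < 0.05]   # 355 genes in the text
```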
All the genes retrieved from this mapping, i.e. those showing a significant trend over time in the non-Smurfs, can be found in table B.6 (Appendix B.1). The ones belonging to the upregulated Smurf group seem to have more genes mapping to the pathways.

Upregulation

The inflammation category confirms the results found in the DEG analysis over time. Only 9 genes map to the related KEGG pathway, but I can retrieve up to 28 genes if I filter for the same inflammation-related GO categories as in section 2.2.5. This is interesting, as it represents a compact group of genes upregulated in Smurfs which also shows an increase in the non-Smurfs prior to the transition. Interestingly, I do find the two transcription factors rel and dl. This adds a confirmation to the results of section 2.3.2, where inflammatory genes are present amongst the DEGs. Results from some of the inflammatory genes are presented in figure 2.24. In the ER- and endocytosis-related categories (dme04141 and dme04144) we find genes involved in protein folding and genes involved in vesicle-mediated transport or polymerization of the actin cytoskeleton. However, some of these genes might have different functions depending on the cells and the conditions, so it is difficult to hypothesize on them.

[Figure 2.24: expression over time (20, 30, 40 days) of selected genes, including PGRP-LA, PGRP-SD, Rel, dl, Listericin, DptA, Jon99Fi, Jon99Fii, Yp3, Jon66Ci and Npc2d, in non-Smurfs and Smurfs.]

Regarding the MAPK pathway, it is worth mentioning that, amongst the genes, we find the transcription factor Jra and its downstream product puc, which mediate a feedback loop regulation. kay, the other transcription factor of the pathway, is also upregulated in Smurfs (as already displayed in figure 2.9). The JNK pathway, in which these genes act, has been shown to have an effect on lifespan [START_REF] Wang | JNK Signaling Confers Tolerance to Oxidative Stress and Extends Lifespan in Drosophila[END_REF][START_REF] Wang | JNK Extends Life Span and Limits Growth by Antagonizing Cellular and Organism-Wide Responses to Insulin Signaling[END_REF]. The table detailing the results of this section is B.6 (Appendix B.1).

Downregulation

In this group, metabolism accounts for most of the mapping results. The last four categories of the list indeed contain genes overlapping with the bigger "Metabolic pathways". Overall, there is no biological process that seems more represented than the others, as the immune response is for the upregulated genes. Furthermore, the trend at the level of single genes seems weaker than in the previous group. As most of the mapped genes showed a rather weak slope (∼ 0.2), I decided to focus instead on the ones showing the best inferred slopes. Those are represented in figure 2.24.
Amongst them, there are three genes of the Jonah family, which I already discussed briefly in section 2.2.5, as they were consistently present among the genes downregulated in Smurfs. Jon99Fi and Jon99Fii are arranged in a cluster on chromosome 3R, and it could therefore be the case that the activation of one implies the activation of the other. I would discard the hypothesis that their activation is a side-effect of chromatin opening due to another gene's activation, as I find downregulation of other Jonah genes which are not in the same region (for instance, Jon66Ci is on chromosome 3L). Yp3 is involved in egg formation, while Npc2d and Lsp1beta have predicted roles in sterol transport and amino acid storage, respectively. The Jonah genes might be the most interesting result in this context, as they come as a group. An interesting pattern for the Smurf downregulated genes is that 20% of them present a significantly positive trend over time in non-Smurfs. On the other hand, in the upregulated group, only 4% of the genes showed a significantly negative trend, and with a slope close to zero. Hereafter is an example of genes showing this behaviour, which I comment on in the caption.

[Figure: expression of wat and mthl2 over time in non-Smurfs and Smurfs.] The linear model refers only to the grey points, representing the non-Smurf samples. The blue points are the expression of the Smurfs at the different ages. As you can notice, even if the genes were detected as differentially expressed, the points largely overlap, suggesting that the signal in the Smurfs themselves might be weak. In particular, in the case of wat we can see how non-Smurfs and Smurfs converge at old age. Note that mthl2 belongs to the same receptor family as mth, the first gene identified as extending lifespan in the fruit fly when mutated.

To conclude, I can indeed detect a trend over time for some of the genes that are DEGs in Smurfs. In particular, many of the immune-related genes that are overexpressed in Smurfs follow such a trend. One can speculate that inflammation might therefore be an early mechanism of the Smurf transition, growing over the organism's own biological time and enhancing the probability of entering the Smurf phase. On the other hand, isolated genes might of course also have an important role in determining the entrance into the transition, and putative targets might be hidden amongst the ones showing a better trend over time in the non-Smurfs. By filtering on these genes, we can narrow down to a list of targets to handle experimentally. An idea could be to build a reporter (such as a GFP reporter) to follow the expression of the gene of interest over time in vivo in the flies, without affecting the expression and the function of the gene itself. I will further comment on these hypotheses in Chapter 4 (4.1.3).

Age and Smurfness: a distinct effect?

In the previous section, I discussed the possibility of re-interpreting (given my data structure) the chronological age of the non-Smurfs as an approximation of the biological age of the samples. By investigating this, and given the results of section 2.3.3, I wondered whether some genes may be affected by age only, and not by the time left to the Smurf transition. In other words, these genes might not be strictly associated with the fly's biological age.
However, they might be a mere result of the exposure of the organism to time and the environment. If such effects existed, they would be specific to the old individuals, whether Smurfs or non-Smurfs. I thought that a simple way of visualizing such changes could be to infer the correlation of each gene with Smurfness and with age. A gene whose regulation changes with Smurfness but that is not affected by age would present a good correlation with Smurfness but not with age. A pre-Smurf gene (like the ones investigated in section 2.4) would present a good correlation with both. A gene affected by time but not by Smurfness would correlate with age but not with Smurfness. Finally, a gene affected by neither would not correlate with either of them.

Figure 2.26 (caption excerpt): The correlation with Smurfness increases when those samples are removed, suggesting immediately that age has an effect also on genes that correlate well with Smurfness.

I estimated the correlation with both Smurfness and age for all the genes. Results are shown in figure 2.26. I did not display all the points, but only their density maps, for visualization purposes. Each point would represent a gene, plotted in the space of the two computed correlations. Out of curiosity, I plotted all the samples (density in black) and all the samples except the 40 days ones (density in the background, red dotted line). The correlation with Smurfness increases drastically when the 40 days samples are removed, again showing that the old Smurfs indeed have a "diluted signal" compared to the young Smurfs. I then tried to follow the behaviour of the genes differentially expressed in all the different comparisons I have done so far. Results are shown in figure 2.27. The genes differentially expressed in Smurfs plot on the side of the graph showing a high correlation with Smurfness. Some also show a correlation with age, and probably overlap with the ones illustrated in section 2.4. The genes differentially expressed when comparing young and old non-Smurfs distribute, interestingly, amongst the ones correlating with both age and Smurfness. This would suggest that, amongst the DEGs in the non-Smurfs, we cannot detect genes affected by age only, if any.

Figure 2.27 (caption excerpt): (a) DEGs in Smurfs compared to non-Smurfs. They map on the side of the plot, suggesting a high correlation with Smurfness, as expected. Though the peak of the density is in the space of higher Smurfness correlation than age correlation, some genes also display a correlation with age. (b) DEGs in old non-Smurfs compared to young non-Smurfs. Those genes have a good correlation with both age and Smurfness, as expected from the analysis in section 2.3.2. (c) DEGs in old Smurfs compared to young Smurfs. The downregulated genes of this group are interesting, as they show a compact behaviour in a zone of higher correlation with age than with Smurfness.

When we look at the graph for the young-old Smurfs comparison, the distribution of the DEGs is different compared to the one of the non-Smurfs. The genes overexpressed at old age in the Smurfs (mostly given by a dilution of the Smurf signal) distribute along the x-axis; the genes downregulated in old Smurfs are instead a more compact group, with peaks at low correlation with Smurfness and high negative correlation with age. The genes downregulated in old Smurfs could therefore be a putative age-related signal, as already suggested in section 2.3.3. In figures 2.19 and 2.20 I have seen how the genes related to cell cycle, RNA processing and DNA repair were overrepresented among the genes downregulated in old Smurfs.
I therefore tried to plot the genes related to those pathways in the correlation plot, to check whether they were distributed in the lower part (good negative correlation with age). To select the specific processes and genes, I referred to the list of KEGG pathways in Drosophila, which can be easily retrieved from the website. I selected the pathways indexed under the groups "Replication and repair" and "Transcription"; the specific names are listed in table 2.1. As a "control", I extracted the genes belonging to the inflammation pathway, which I expect to correlate mostly with Smurfness. Results are shown in figure 2.28. The genes related to DNA replication, repair and transcription show an age-dependent behaviour rather than a Smurf-dependent one. As expected, inflammation shows mainly a correlation with Smurfness, although some noise is present.

Figure 2.28 (caption excerpt): All the genes detected in my dataset are here plotted, as in figure 2.27. The red points and density lines refer to the different groups of genes described in the specific caption points. In all cases the difference in distribution between the genes considered and the background is significant (Fasano-Franceschini test, p-value < 0.05). (a) Toll and Imd pathway. As expected, the genes mapping to this pathway mostly fall in a zone of high correlation with Smurfness. (b) Replication and repair. Those genes, taken from the pathways listed in table 2.1, show a compact behaviour in a zone of higher correlation with age than with Smurfness. (c) Transcription. The same reasoning as in point (b) applies to this category of genes, for which the zone of highest density is at higher correlation with age than with Smurfness.

I compared the two-dimensional distributions using the Fasano-Franceschini test, an implementation of the KS statistic for multidimensional distributions; in all cases, the distributions resulted significantly different from the background ones.
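The per-gene correlations underlying figures 2.26 to 2.28 can be computed in a few lines; the sketch below is illustrative only and not the original analysis code. It assumes Pearson correlation (the estimator is not specified in the text), Smurfness encoded as 0/1, and hypothetical object names: expr (genes x samples expression matrix), smurf and age (sample-level vectors). The comparison of a gene set against the background in this two-dimensional space would then be done with a 2D KS-type test such as the Fasano-Franceschini test, for which an R implementation exists.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def gene_correlations(expr: pd.DataFrame, smurf: np.ndarray, age: np.ndarray) -> pd.DataFrame:
    """For every gene, correlation of expression with Smurfness (0/1) and with age."""
    rows = []
    for gene, values in expr.iterrows():
        r_smurf, _ = pearsonr(values.values, smurf)   # point-biserial, since smurf is binary
        r_age, _ = pearsonr(values.values, age)
        rows.append((gene, r_smurf, r_age))
    return pd.DataFrame(rows, columns=["gene", "cor_smurf", "cor_age"]).set_index("gene")

# cors = gene_correlations(expr, smurf, age)
# subset = cors.loc[cors.index.isin(replication_repair_genes)]   # e.g. genes from table 2.1
```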
We would then have identified what is shown in figure 2.16 as an ageing signal, labelling under the name of "ageing" both Smurf specific changes and age-specific changes. Interestingly, the age-related changes might give us hints on why the probability of becoming Smurf increases with time. For example, one could speculate that an increased transcriptional heterogeneity would increase the gene networks instability up to the moment when the resilience of the network fails, leading to a general collapse that would correspond to the Smurf phase. specific, confirming the presence of a (at least partial) biphasic behaviour and pointing at Smurfness as a better proxy for physiological age of the organism than age itself, and redefines most of the so-called ageing signature as an end-of-life one. 2. Effect of age on the expression of the genes As already suggested by the PCA in figure 2.3, age affects the transcriptome, but its effects are lower than the ones of Smurfness. Interestingly, the age-effects detectable in the non-Smurfs overlap with Smurfs' alterations. This led me to hypothesize that the non-Smurfs age-related changes might be pre-Smurfs alterations (figure 2.21). By individually looking at the time trend of the gene expression in non-Smurf, I found inflammation as a process consistently showing an increase over time in non-Smurfs. Those results will be further discussed in Chapter 4. On the other hand, the Smurfs show a specific signal over time, especially in the downregulation of genes related to DNA replication and repair and transcription. These results are interesting, as DNA repair genes have been shown (even if not consistently) to be upregulated with ageing, while deregulation mRNA processing is indeed one of the hallmarks of transcriptional ageing that was not found when comparing the Smurfs to the non-Smurfs. Moreover, when looking at the correlation of these genes with age, independently of the Smurfness status of the sample, they show an age-dependent behaviour. Interestingly, by analyzing how the heterogeneity in gene expression evolves in the samples, I can detect a trend of increasing variance with age in both Smurfs and non-Smurfs 2.15. Those results open the way to deconvolving time-specific signals (based on chronological age, independently of the Smurf status) and ageing-specific signals (by looking at the non-Smurfs over their own physiological time, prior to the Smurf phase). In addition, I investigated the differences between Smurfs sampled at different moments after transition, but I could not detect a difference in the signal (results not shown). Therefore, Smurfs carry the same changes at any moment after the transition. However, the study presents here a limited amount of samples (no 5h Smurfs for the 20 days samples, and only two samples each for 5h Smurfs at 30 days and 5h Smurfs at 40 days), which considerably lowers the statistical power, especially for the detection of light changes. Chapter 3 Identifying genes affecting longevity using Smurfness Since the identification of the mth mutation as extending lifespan in Drosophila, alterations in many genes have been detected as increasing lifespan in the fruit fly. Those are conveniently annotated in the GenAge database [START_REF] Tacutu | Human Ageing Genomic Resources: new and updated databases[END_REF]. Amongst the annotated genes (202), we find 58 present in the Smurfs DEGs (figure 3 We wondered if, amongst the Smurfs DEGs, were hiding new genes whose artificial manipulation could affect longevity. 
With Smurfness we are looking at a late phase of life, where causes and effects of ageing are probably mixed. In other words, a gene that is altered in Smurfs could hypothetically present such a deregulation for the following reasons:

1. The alteration of the gene is a cause of the entrance in the Smurf phase. Suppose the upregulation or downregulation of a gene played a role in the entrance in the Smurf phase. In that case, we could think that artificially inducing the opposite alteration (for instance a physiological downregulation for a gene upregulated in Smurfs) could delay the entrance in the Smurf phase. This would prolong the non-Smurf phase and the lifespan of the population.

2. The alteration of the gene is an effect of the entry in the Smurf phase. For instance, the general downregulation of metabolism observed in Smurfs could be a late process, occurring after the organism is already committed to death. As another example, some genes could have a rescue function, meaning that they are activated to try to compensate for other stresses affecting the organism.

The alteration of an "effector" gene would likely not affect the lifespan, except possibly in the case of the "rescue genes", if those carry out their action early enough in the transition process. We decided to screen the genes deregulated in Smurfs by altering their expression (overexpression or downregulation) and monitoring the effect on the population's lifespan. We selected the genes to test experimentally by trying to focus on genes possibly belonging to the first category (causes of Smurfness rather than effects). I will start by illustrating the selection process, then move to the experimental plan and finally illustrate the results.

Gene selection process

As the genes deregulated in Smurfs are numerous, I needed to apply some selection criteria before moving to the bench to test them. Such criteria were meant to maximize the probability that the selected genes are actively involved in the "Smurf process" rather than being an effect of it. I considered that an intuitive category to focus on was transcription factors (TFs). Indeed, the response that I am observing is an alteration in gene expression; being temporal and spatial regulators of transcription, TFs might explain the changes observed, acting upstream of the deregulation. I downloaded the list of Drosophila TFs from the Flybase database, and subsequently intersected this list with the list of Smurf DEGs. This led to the identification of 102 deregulated TFs out of the 624 in the Flybase list: 77 are upregulated and 25 are downregulated. For the list of Smurf-deregulated TFs, see table B.9 (Appendix B.2). As a hundred genes to test was still too large a number, I tried to reduce it using two additional selection criteria: first, looking for upstream regulators of the genes present in the list, to possibly condense a redundant signal; and second, selecting genes involved in pathways found deregulated in Smurfs, or having a function of particular interest. The idea behind this approach is schematized in figure 3.2.

Identification of upstream regulators: i-cisTarget

We reckoned that identifying putative upstream regulators of the selected Smurf TFs could be a good way to shrink the available list of genes.
For this purpose, I used the online software i-cisTarget [START_REF] Herrmann | i-cisTarget: an integrative genomics method for the prediction of regulatory features and cis-regulatory modules[END_REF][START_REF] Imrichová | i-cisTarget 2015 update: generalized cis-regulatory enrichment analysis in human, mouse and fly[END_REF]. i-cisTarget takes as input a list of genes and, by looking for common patterns in their promoters, returns as output a list of TFs that are putative regulators. Each proposed TF is associated with a score. I split the list of Smurf TFs in two, separating the upregulated TFs from the downregulated ones, and submitted the two lists independently. In a parallel analysis, I used i-cisTarget to look for regulators of the most deregulated Smurf genes, TFs or not, taking all the genes deregulated in Smurfs with a log2FC > |2|.

Figure 3.2 schematizes the reasoning behind this approach. The TFs deregulated in Smurfs, upregulated or downregulated (in green), might be partially responsible for the expression profile in point 3. By extracting them from the rest of the list, we possibly go one step back in the biological regulation of our final expression landscape; this brings us to point 2. Those TFs are in turn regulated by other genes, which could be present in the group itself or not be deregulated in the final expression profile (as can happen in the case of activation by post-translational modification). To move one step further in our backwards path to upstream regulators, we can try to predict which genes regulate the TFs in point 2. This can be done by looking for shared binding motifs for specific TFs in the promoters of these genes. The software i-cisTarget conveniently performs this job, and brings us to point 1. The orange circles represent the predicted TFs not present in the final expression profile. As putative upstream regulators of the Smurf transition, the genes in point 1 could be good targets for experimental validation. From right to left, dashed lines reconstruct the hypothetical regulation path.

The results of the i-cisTarget analysis are shown in figure 3.3. In particular, I selected the hits that occurred more than once (with possibly different binding motifs) and presented a score > 4 (a score of 3 is indicated as the minimum threshold).

Figure 3.3: Results of the i-cisTarget analysis. (1) Putative regulators of TFs upregulated in Smurfs. (2) Putative regulators of TFs downregulated in Smurfs; note that the second hit is a group of GATA family genes, which comes as a compact category in the output. (3) Putative regulators of genes upregulated in Smurfs (log2FC > 2); the results reflect the immune and stress response identified as an upregulated signal in Smurfs. (4) Putative regulators of genes downregulated in Smurfs (log2FC < -2); similarly to point 3, this output reflects the results found by the enrichment analysis, such as the decrease in fertility genes (Blimp-1 and ken). Interestingly, maf-S has been shown to ameliorate age-associated phenotypes when overexpressed [START_REF] Rahman | Declining signal dependence of Nrf2-MafS-regulated gene expression correlates with aging phenotypes[END_REF].

As can be noticed from figure 3.3, the results are often redundant in terms of putative targets. This could be due to the output genes interacting and playing a role in the same processes. On the other hand, it could be that only one or a few of them are involved in the process we are investigating, and the others are an "off-target" signal.

Other TFs selection

Apart from the results of i-cisTarget, I manually screened the list of TFs to check for other genes to test. I focused on genes possibly fulfilling both of these criteria: 1) showing an important deregulation (log2FC > |1|); 2) not having been investigated so far in longevity. The inflammatory genes, for instance, are strongly deregulated in Smurfs, but the action of the pathway's TFs has already been the object of studies in Drosophila longevity, even if sometimes with contrasting results [START_REF] Khor | Control of lifespan and survival by Drosophila NF-κB signaling through neuroendocrine cells and neuroblasts[END_REF][START_REF] Moskalev | Pharmacological inhibition of NF-κB prolongs lifespan of Drosophila melanogaster[END_REF].
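Going back to the i-cisTarget selection step, the hit-selection rule described above (keep predicted regulators that appear more than once and with a score above 4) is straightforward to apply to an exported result table. The sketch below assumes a hypothetical CSV with regulator and score columns; this is an illustration, not the actual i-cisTarget output format or the code used here.

```python
# Hedged sketch of the hit-selection rule described above: keep putative
# regulators reported more than once (possibly via different motifs) and
# with a score > 4. The file name and column names are placeholders.
import pandas as pd

hits = pd.read_csv("icistarget_hits.csv")   # assumed columns: regulator, score

selected = (
    hits.groupby("regulator")
        .agg(n_motifs=("score", "size"), best_score=("score", "max"))
        .query("n_motifs > 1 and best_score > 4")
        .sort_values("best_score", ascending=False)
)
print(selected)
```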
In the next section I will discuss the final list of selected genes.

The genes

The final list of selected genes is shown in figure 3.4. I will now provide a brief description of each of the selected and tested genes. Unless otherwise cited, the information presented here is retrieved from Flybase.

Figure 3.4: Final list of the genes selected for testing their effect on Drosophila longevity. Some of them have been identified starting from the DEG analysis, while some of them are putative upstream regulators (i-cisTarget).

From i-cisTarget output

Aef1. Adult enhancer factor 1 reported the best score with i-cisTarget for putative regulators of TFs upregulated in Smurfs. It is annotated as acting at the level of the fat body, but seems overall poorly characterized. Together with Adf1 and Trl, it has been predicted to cobind in regions of increased FOXO activity with ageing in Drosophila [START_REF] Birnbaum | Age-Dependent Changes in Transcription Factor FOXO Targeting in Female Drosophila[END_REF].

Adf1. Adh transcription factor 1 is named after its role in activating the alcohol dehydrogenase gene (Adh), but it is poorly characterized and might therefore present other functions.

CG4360. Its function or role is not characterized. Interestingly, it is orthologous to the human ZSCAN10, an embryonic stem cell-specific TF (from Uniprot annotation).

Trl. Trithorax-like plays a role in gene expression through chromatin modification. It has been reported as having an effect on the expression of various genes, amongst which homeotic and Hsp genes [START_REF] Katokhin | Molecular genetic analysis of Thrithorax-like gene encoded transcriptional factor GAGA in Drosophila melanogaster[END_REF].

FoxP. Forkhead box P has been shown to function in the nervous system. Interestingly, it is also differentially expressed in Smurfs (log2FC = 0.46).

Mef2. Initially characterized for its role in muscle development, it now appears to be functional in a larger number of tissues. [START_REF] Clark | MEF2 is an in vivo immune-metabolic switch[END_REF] show how Mef2 plays a role in both metabolic regulation and immune response, and how its KD negatively affects the survival of the flies under infection. Since both these processes are deregulated in Smurfs, we decided to retain this gene. It is also lightly upregulated in Smurfs (log2FC ∼ 0.07, which might not be relevant).

Hsf. Even if its role in longevity has already been investigated (with the overexpression of the gene giving a lifespan increase, as seen in section 1.2.1), I included it in the list of the genes to test, as the Hsp genes are strongly upregulated in Smurfs.

Nf-YB. Nuclear factor Y-box B (Nf-YB) codes for a subunit of the Y transcription factor. It has been mostly investigated for its role during development, but it is interestingly both upregulated with Smurfness (even if lowly, log2FC of 0.2) and a putative regulator of TFs downregulated in Smurfs.
GATAe, GATAd, srp, grn, prn. These genes are all part of the GATA group, a family of TFs named after its binding sequence. The human GATA TFs are known mostly for their developmental role. Recently, they have been shown to be involved in the CR-related lifespan extension [START_REF] Dobson | Tissue-specific transcriptome profiling of Drosophila reveals roles for GATA transcription factors in longevity by dietary restriction[END_REF]. Not all of these genes have been characterized yet in Drosophila, but I initially planned on testing them all. For practical reasons, I had to leave aside grn and prn.

From DESeq2 analysis

dmrt93B. Even if not well characterized, I selected this gene as it presented a log2FC > 2 in Smurfs.

rib. It presents a medium-level upregulation in Smurfs (log2FC of 1.5) and has not been studied yet in ageing. I therefore selected it for testing.

Ets21C. It has a log2FC of ∼ 1.7 and has been reported as involved in stress response, especially in the gut [START_REF] Mundorf | Ets21c Governs Tissue Renewal, Stress Tolerance, and Aging in the Drosophila Intestine[END_REF].

Hey. It is annotated as being involved in nervous system development, and is upregulated in Smurfs (log2FC of 1.45). Selected for the same reasons as rib.

kay. It has been shown to be involved in different processes, from cell cycle to stress response, and its pathway to be involved in longevity (see section 2.4). It presents a light upregulation in Smurfs (log2FC ∼ 0.5), but I decided to add it as it was often present in the analysis together with other genes, as for Jra in section 2.4 and Ets21C in [START_REF] Mundorf | Ets21c Governs Tissue Renewal, Stress Tolerance, and Aging in the Drosophila Intestine[END_REF].

Ets96B. Part of the same family as Ets21C, it is downregulated in Smurfs (log2FC of -1.2). It is mainly annotated as having a role in dopamine signaling in the nervous system.

Experimental plan

For each of the selected genes, depending on the availability of the transgenic line, I performed longevity experiments after knock-down (KD) and overexpression (OX) of the gene. The gene alteration was performed using the GeneSwitch (GS) system [START_REF] Osterwalder | A conditional tissue-specific transgene expression system using inducible GAL4[END_REF][START_REF] Roman | P{Switch}, a system for spatial and temporal control of gene expression in Drosophila melanogaster[END_REF]. This technique, widely used as a KD and OX system in the fruit fly, allows the manipulation of the gene of interest to be tuned spatially (through the use of tissue-specific promoters) and temporally (by controlling the system's activation with the RU486 drug). Its mechanism is exemplified in figure 3.5.

Figure 3.5: GAL4 is a yeast TF binding the yeast promoter sequence called UAS. This system has been engineered to manipulate gene expression in Drosophila in a specific way, as GAL4 and UAS are not present in the genome of the fly. The TF GAL4 is cloned downstream of a promoter of interest, and GAL4 is produced every time the promoter is active. The promoter therefore allows for spatial and temporal control of its expression (for instance, a promoter active in a particular tissue only during development). GAL4 binds the UAS sequence as a dimer. In the GS system, GAL4 is engineered so that it can dimerize (and therefore be functional) only when the drug RU486 is present. Adding or removing the drug allows for temporal control of the gene alteration, switching the system on and off.
When GAL4 is active, it activates the transcription of the gene downstream of the UAS sequence. If we want to overexpress a gene, its cDNA is cloned downstream of the UAS; if we want to knock it down, a double-strand RNA (dsRNA) against its sequence is cloned downstream of the UAS. As the fly research community widely uses the GS, flies already engineered for the gene of interest can often be found in stock centres such as Bloomington, FlyORF or VRDC. In particular, we need two "types" of flies: the driver, carrying the GAL4 under the control of a chosen promoter, and the flies carrying the UAS part of the construct. As the RNA-Seq data from which the list of genes to test was extracted come from the whole body of the flies, we chose to start our screening with a ubiquitous promoter, active both during development and adulthood. We used the daughterless-GS (daGS) line [START_REF] Tricoire | The steroid hormone receptor EcR finely modulates Drosophila lifespan during adulthood in a sex-specific manner[END_REF]. The information about the retrieval of each line is provided in Appendix B.2.

Once both the driver and the UAS line are retrieved, the preparation of the experiment follows the scheme in figure 3.6. Virgin driver females are crossed with males from the UAS line. It is essential to use virgin females in the cross. Indeed, Drosophila females have a spermatheca, an organ dedicated to storing the sperm after mating, and a female can lay fertilized eggs up to more than two weeks after a mating event. The use of virgin females is therefore essential for controlling the genetic background of the flies we want to use in the experiment.

The scheme in figure 3.6 represents an experimental set-up in which the gene alteration is induced from the first day after eclosion. On day 1 of adulthood, flies are placed in vials containing the RU486 drug. We used 5 different drug concentrations, including the controls at 0 µg/mL. We diluted the drug in the food at 10 µg/mL, 50 µg/mL, 100 µg/mL and 200 µg/mL (as done in the study describing the line [START_REF] Tricoire | The steroid hormone receptor EcR finely modulates Drosophila lifespan during adulthood in a sex-specific manner[END_REF]). Depending on the strength of the promoter and on the correct functioning of the system (i.e. the GAL4-GS actually not binding in the absence of the drug), this creates a gradient of activation of the gene alteration, allowing to test from "light" alterations to stronger ones. Once the flies are exposed to the assigned drug condition, males and females are kept in the same vial to mate for 48 hours. As I discussed in the Introduction, in particular with the disposable soma theory of ageing, fertility and mating are debated to have a role in longevity. I therefore controlled for this variable, which could possibly affect lifespan, by mating all the flies used for the experiment for the same amount of time. Subsequently, 150 females for each condition were sorted and divided into 5 vials of 30 flies each, for a total of 25 vials and 750 flies used per line tested.

Figure 3.6: Exemplification of the experimental protocol followed for the experiment. Virgin females from the driver daGS are crossed with males carrying the UAS construct. The F1 are the flies containing both transgenic constructs and will be used for the longevity experiment. At the moment of eclosion (day 1 of adulthood), flies are placed in new vials with different concentrations of drug and let mate for two days. After that, 150 females per condition are chosen for the longevity experiment and placed in 5 vials of 30 females each. On the other hand, if we want to alter the gene of interest from development on, the cross takes place on vials with the drug. The rest of the protocol does not differ.
Development and adulthood. The protocol illustrated allows the alteration of the gene only during adulthood. As a complementary check, I also monitored whether differences in lifespan can be observed when the gene is altered during both development and adulthood. This implies two main changes in the workflow. First, parents of the F1 mate on food already containing the drug (note that, as these flies do not carry the complete GS system, the expression of their genes is not affected). [START_REF] Osterwalder | A conditional tissue-specific transgene expression system using inducible GAL4[END_REF] recommend not to use a concentration > 100 µg/mL during development, to avoid possible toxic effects. In order to reduce as much as possible the possibility of drug-dependent toxicity, while keeping the same fold difference between the concentrations used, we divided by 10 the concentrations of drug used for the adults (concentrations already tested during development in Drosophila in [START_REF] Rera | Mitochondrial electron transport chain dysfunction during development does not extend lifespan in Drosophila melanogaster[END_REF]). Therefore, during development flies are exposed to RU 0 µg/mL (control), 1 µg/mL, 5 µg/mL, 10 µg/mL and 20 µg/mL. On day 1 of adulthood, flies are placed in vials with the drug concentrations listed in the paragraph before. In order to reduce the number of crosses to make, I used the same parents when testing for the lifespan effect of the gene alteration in adulthood only and in development plus adulthood. After mating for two days on food without drug, where the eggs develop for the adulthood-only (AO) experiment, the flies were placed on food with the reduced amount of drug for an additional 48 hours. The parents are then removed, and the eggs develop with the alteration of the gene of interest already occurring. No other differences are present between the two experiments. From now on, I will call the experiment where the gene alteration is tested during both development and adulthood the whole life (WL) experiment.

Longevity experiment: the workload. As I stated before, for one experiment (i.e. five conditions of RU486 on one gene), I need to generate a final amount of 750 flies. It doubles to 1500 when we consider both the AO and WL experiments. To generate the final number of 5 vials of 30 females each, I usually prepare 8 vials of crosses per concentration of drug (8 driver virgin females with 5 UAS males each). So, for testing one gene in all five conditions, I need to collect 320 virgin females and 200 UAS males. This requires some prior work of amplification of the single lines (UAS and drivers), and substantial work of virgin collection and fly sorting for the experimental preparation. Once the experiment is launched, the vials need to be scored every other day and new food prepared to replace them (with the same frequency as the scoring). The 48 longevity experiments performed in the first screening (section 3.3) imply the use of 36,000 flies. As managing such a high number of flies at the same time is not possible, I split the experiments into 3 rounds.
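As a sanity check on these numbers, the fly counts implied by the design can be recomputed directly; the sketch below only restates the per-vial composition given in the text.

```python
# Back-of-the-envelope recomputation of the workload described above.
n_conditions = 5            # RU486 concentrations, including the 0 ug/mL control
vials_per_condition = 5     # experimental vials of 30 females each
females_per_vial = 30
cross_vials_per_condition = 8
virgins_per_cross_vial = 8  # driver virgin females per cross vial
males_per_cross_vial = 5    # UAS males per cross vial

flies_per_experiment = n_conditions * vials_per_condition * females_per_vial        # 750
virgins_needed = n_conditions * cross_vials_per_condition * virgins_per_cross_vial  # 320
males_needed = n_conditions * cross_vials_per_condition * males_per_cross_vial      # 200
screening_total = 48 * flies_per_experiment                                         # 36,000

print(flies_per_experiment, virgins_needed, males_needed, screening_total)
```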
Considering that a longevity experiment (preparation phase plus the following of the longevity) takes between 3 and 4 months, the first screening took around a year to be performed. For more details regarding the lines used and the food preparation, refer to Appendix A.1.

First screening

Results

I carried out a first round of screening on the selected genes, OX and KD in most cases, for a total of 48 longevity experiments (1 longevity experiment = the monitoring of 5 populations of the same genetic background exposed to the 5 RU486 conditions). Results are summarized in a more qualitative way in figure 3.7. For more details on the data, I refer to table B.10 in Appendix A.2.

Figure 3.7: Summary of the results of the longevity screen carried out on the genes listed in figure 3.4. For each gene alteration, I have the AO and the WL experiment, and 5 RU486 conditions. The controls are not present as they are the reference for the statistical test (log-rank). The size of the point indicates the significance of the difference between the longevity curves of the populations, while the colour indicates the direction of the change (decrease or increase in mean lifespan). In most cases, even if we detect a significant difference, this is due to the gene alteration negatively affecting the population's lifespan. Interestingly, most of the red hits are present in the group of the genes found by i-cisTarget as putative regulators of TFs up in Smurfs.

As shown in figure 3.7, most of the experiments result in a negative effect on lifespan (blue points). However, there are lines showing an increased lifespan due to the gene alteration. In particular, the genes selected thanks to i-cisTarget show positive results in multiple cases. Before moving to that, I would like to spend a few words concerning Aef1 which, as seen in figure 3.7, causes a significant decrease in lifespan when downregulated. Figure 3.8 represents the five longevity curves of the experiment. The gradient in the gene downregulation causes a gradient in the decrease in lifespan of the flies. This is interesting since, in other cases, we observe the KD causing a sharp decrease in lifespan, suggesting that the intervention is toxic for the flies. We wondered if, by overexpressing Aef1, we could have seen the opposite trend. Unfortunately, a UAS-Aef1 line was not already available. We initially thought of creating the line from scratch but, given the impact of the Covid-19 pandemic on the project and other practical issues that delayed experiments (see section 4.2), we had to postpone this idea, which we could not finally realize. However, it could be interesting to further investigate this gene in the future.
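Throughout the screen, each treated population is compared to its RU 0 µg/mL control with a log-rank test. As an illustration only, a minimal sketch of such a comparison using the lifelines package on made-up death ages (not data from these experiments) could look as follows:

```python
# Hedged sketch: log-rank comparison of a treated vs a control survival
# curve, as used to call hits in the screen. The death ages are made up;
# real experiments score ~150 females per condition.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
control_deaths = rng.normal(loc=60, scale=10, size=150)   # days, hypothetical
treated_deaths = rng.normal(loc=66, scale=10, size=150)   # days, hypothetical
events = np.ones(150)                                     # all deaths observed (no censoring)

km_ctrl = KaplanMeierFitter().fit(control_deaths, event_observed=events, label="RU 0 ug/mL")
km_trt = KaplanMeierFitter().fit(treated_deaths, event_observed=events, label="RU 50 ug/mL")
print(km_ctrl.median_survival_time_, km_trt.median_survival_time_)

result = logrank_test(control_deaths, treated_deaths,
                      event_observed_A=events, event_observed_B=events)
print(f"log-rank p-value: {result.p_value:.3g}")
```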
We initially considered validating all the positive results by re-running the longevity experiments. Once again, we had to lower the number of experiments to perform for practical reasons. We then focused our attention on the genes reported in figure 3.9. We did not proceed further with the results of dmrt93B KD, Hey KD, rib KD, FoxP, NF-yB OX, and srp KD. For dmrt93B, rib, NF-yB and srp, this is mostly linked to the fact that the hits were isolated (the other drug conditions were killing the flies or showing no effect at all) or not relevant (an increase in lifespan of 3%, even if significant, is less reliable and possibly given by the conditions in which the given experiment is carried out). Concerning Hey KD, it was showing consistent results when downregulated during the WL. However, we decided to exclude it, as we preferred to focus on genes showing an effect when also altered during AO; as for Aef1, Hey could be a possible target for further investigation. The same reasoning applies to FoxP. In addition, the lifespan of the controls in the WL is ∼ 6% lower than in the AO, suggesting that the lifespan increase of ∼ 5.5% detected for its KD during the WL could be an artifact.

Genes selected for validation

We therefore finally selected for further validation the genes shown in figure 3.9. The longevity curves on the right of the table refer to the experiments carried out in the first screening, showing in black the controls and in red the population exposed to the RU486 condition reported in the table.

Figure 3.9: Summary table showing the genetic alterations selected for final validation at the end of the first round of experiments. In the figure, I show the longevity curve for the RU condition presenting the best results (the condition is reported in the RU column). In the survival curves, the controls are black, while the treated population is red. Note that, even if only one condition is reported here, I then validated the experiments on all the RU concentrations. The genes selected in red are the ones that, after the validation round, still showed an effect on longevity when knocked down. I will present them in section 3.4.

Trl, CG4360 and Adf1 showed in any case a consistent trend along the RU486 concentrations, as seen in figure 3.7. Ets96B is interesting as it shows the increase in lifespan only for the RU200 condition, in both the AO and WL experiments. In the figure, I reported the curve resulting from the AO experiment, but the WL experiment curve shows a similar trend. Hsf OX shows the best increase in lifespan in the whole set of experiments (∼ 10%). However, I encountered here a problem similar to the one presented before for FoxP: the WL controls live around 11% less than the AO controls. The observed increase could therefore be an artifact. Nevertheless, since it was the best result and the gene was interesting from a biological point of view, given the Hsp overexpression signal found in Smurfs, I decided to re-perform the experiment.

Following the same protocol, I performed the longevity experiments again for all these 5 genes in females. I could not see again an effect on longevity for Ets96B and Hsf. Hsf OX (second round) simply showed no effect on lifespan, while Ets96B KD (second round) was toxic. In order to exclude the hypothesis of a problem related to the specific experiment, I performed the longevity on Ets96B a third time, obtaining the same results. I therefore left this gene aside. The cases of these two genes exemplify the importance of performing the longevity experiments more than once, even if they are long and time-consuming. I will not present the results for Ets96B and Hsf, and will focus instead on the results for the validation of Trl, CG4360 and Adf1.

Validation experiments

The validation experiments were performed following the same protocol as described above (figure 3.6).
In addition, I also recorded the evolution of the Smurf proportion over time in the population. Indeed, as suggested before, if the gene alteration causes a lifespan increase, this could translate into a delayed appearance of Smurfs in the population. For more details on the Smurf recording protocol, see Appendix A.2. I divided the results into three sections, one for each gene investigated. I also performed the experiments in males, and I will briefly describe these results in section 3.4.4.

Adf1

Adf1 KD is confirmed to affect lifespan, even if this time I observed it with the RU 50 µg/mL (+10.7%) instead of with the RU 10 µg/mL as in the experiment before. In figure 3.10, I show the results for the RU concentration extending the lifespan, as well as the proportion of Smurfs recorded over time on the same flies on which the longevity was performed. Smurf proportion data are analyzed by fitting the following linear model:

Smurfs ∼ time + dose + time * dose + ε    (3.1)

where Smurfs is the proportion of Smurfs recorded, time is the time interval over which we are fitting the model, dose is the RU486 concentration, and ε is the noise (i.e. what of the dependent variable Smurfs cannot be explained by the explanatory variables). The final term is the interaction between the time and the dose of drug, and its significance (F-test) proves that the drug concentration has an effect on the increase of the Smurf proportion.

By looking at the survival curve (figure 3.10a), we observe that the population exposed to RU 50 µg/mL (in red) exits the survival plateau later, while it shows a faster decay once the individuals start to die. The control and the treated population converge at the end. In figure 3.10b, I report the increase in Smurf proportion in the two populations, as well as the number of alive flies on which the proportion is computed. Both populations show a significant time-dependent increase in the proportion of Smurfs. However, the treated population shows a significantly slower increase of the Smurf proportion. This confirms that the lifespan extension delays the entrance into the Smurf phase, prolonging phase 1 of life. As I did not monitor the lifespan of the Smurfs, I cannot affirm that the gene alteration does not affect its length. I could not detect an increase in lifespan for the other RU concentrations. The quantification of the gene KD through qPCR was performed but resulted in a non-interpretable outcome due to technical issues, and will have to be repeated.

Figure 3.10: Each point represents the proportion of Smurfs in an independent vial. The fitted linear model results in a significant coefficient for the time variable (p-value < 0.0001), proving that the proportion of Smurfs is time-dependent. The interaction coefficient between time and drug dose is also significant (p-value < 0.05), proving that the drug dose significantly affects the time-dependence of the Smurf proportion. The graph at the bottom represents the number of alive flies at the moment of the Smurf counting.
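As an illustration, the model in equation 3.1 can be fitted as an ordinary linear model with an interaction term. The sketch below uses the statsmodels formula interface on a hypothetical table with one row per vial and per scoring day; the file and column names are assumptions, not the actual analysis code.

```python
# Hedged sketch of fitting equation 3.1: Smurf proportion as a function of
# time, RU486 dose and their interaction. The data frame layout (one row
# per vial and scoring day) and the column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("smurf_counts.csv")          # assumed columns: day, dose, n_smurf, n_alive
df["smurf_prop"] = df["n_smurf"] / df["n_alive"]

# "day * dose" expands to day + dose + day:dose, matching equation 3.1.
model = smf.ols("smurf_prop ~ day * dose", data=df).fit()
print(model.summary())

# F-test on each term; a significant day:dose term indicates that the drug
# concentration affects the rate of increase of the Smurf proportion.
print(anova_lm(model, typ=2))
```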
Trl

Trl also confirms its positive effect on lifespan. In figure 3.11, I report the results for the RU 50 µg/mL, which shows the highest effect in this experiment (+10.5%). The control curve looks in this case less "clean", with a light drop around day 50. However, even after that, flies continue to die faster in the controls than in the treated population, hinting that the lifespan extension observed is not due to unhealthy controls. The Smurf recording mimics the behaviour of the survival curve, with the proportion of Smurfs in the control increasing faster than in the treated flies. As for Adf1, we can therefore conclude that the treated population spends on average a longer time in the non-Smurf phase. The quantification of the KD for the RU 50 µg/mL concentration is reported in figure B.7 (Appendix B.2). The WL experiment proved a significant lifespan increase for RU 100 µg/mL and RU 200 µg/mL. However, shortly after day 75 the control curve shows a sharp drop which corresponds to a single day of scoring in the data. As the curves are quite close, it might be that such a drop is exacerbating the lifespan extension observed. Even if the log-rank test is significant, I would therefore be careful in affirming that in this case we are looking at a real lifespan extension. Data are reported in figure B.9 (Appendix B.2).

As explained before, the RNA-Seq study presented identified a female Smurf transcriptional signature. We therefore experimentally tested the genes presented in this chapter in females. However, for the ones showing a longevity effect, we wanted to address whether the effect was conserved in males. I performed and completed the experiments for Trl and CG4360 in daGS males. The experiment for Adf1 is ongoing, together with a third validation on daGS females; its estimated end is mid-April 2022.

In figure 3.15 I show the results for the completed experiments. The T50 (dashed line in the figures) of the lifespan in males does not differ much from that of the females of the same line (see the preceding figures for Trl and CG4360). All the longevity curves in males show, on average, a similar shape. They present a faster exit from the survival plateau than the females' ones. However, this has no significant effect on the mean lifespan of the flies when comparing males and females (data not shown). Indeed, by looking at the curves, the flies exit the survival plateau faster but decay more slowly than the females. The WL experiment for Trl showed a significant lifespan increase for RU 10 µg/mL (log-rank p-value 0.02). The effect translates into an increase in mean lifespan of 6%, probably due to the initial rectangularization of the curve. However, the two curves do not show a clear separation and remind more of what was observed in figure 3.14 for the KD of CG4360 during adulthood. I would therefore not claim a lifespan extension in this case. The fact that we do not detect an increase in lifespan in males does not invalidate the results shown in females. However, it highlights the issue of the physiological sexual dimorphism of Drosophila, which has recently been more investigated.
Males and females indeed present differences even in processes not intuitively sex-related, such as the immune response [START_REF] Belmonte | Sexual Dimorphisms in Innate Immunity and Responses to Infection in Drosophila melanogaster[END_REF], and they can respond differently to lifespan-extending treatments (Garratt, 2020; [START_REF] Regan | Sex difference in pathology of the ageing gut mediates the greater response of female lifespan to dietary restriction[END_REF]). Furthermore, even if the Smurf phenotype was proven to exist in males, most of the studies conducted so far for its characterization were performed on females. Therefore, we cannot exclude that the transcriptional signature identified in Smurf females does not entirely reflect the transcriptional signature of Smurf males.

Figure 3.15: The type of experiment (AO or WL) is specified below each graph. Note that the RU200 condition in the CG4360 WL experiment is missing, due to a problem in the growth of the flies during the experiment preparation. None of the populations shows a lifespan increase except for the Trl KD in the WL experiment. The extension is however low and, even if significant, it might not be relevant.

Complementary experiments: preliminary data

The experiments performed in section 3.4 confirmed the effect of the KD of Adf1, Trl and CG4360 on lifespan. Investigating the mechanism through which those genes act could represent a whole research project. We decided to proceed with preliminary experiments to investigate those genes and narrow their possible range of action on longevity. The proposed experiments are the following:

1. tissue-specific KD, using the same UAS line and a different driver;

2. testing the resistance of flies to different kinds of stress, to check if the KD helps the individuals to cope with a stressful event. In particular, we chose heat shock, starvation and oxidative stress;

3. monitoring whether the KD affects the fertility of the flies, to check if the lifespan extension could be a case of a trade-off between energy investment in reproduction and energy investment in maintenance, as suggested by the disposable soma theory.

In the following sections I will present the results obtained so far in this context. I do not yet have any additional results for Adf1, as it was chronologically the last gene for which I confirmed the lifespan extension, while additional experiments for CG4360 and Trl were already ongoing. However, tissue-specific experiments are ongoing for Adf1, with an estimated end in mid-April 2022.

Tissue-specific GS experiments

Tissue-specific longevity experiments have been completed so far only for Trl. We performed the experiments with four different drivers targeting different cell types/systems. Below I report the information for each driver and the associated experiments. All the experiments were performed in the AO and WL settings.

Ti GS. It is a pan-gut driver, i.e. expressed in all the different cell types of the gut. The AO experiment shows a significant lifespan extension, as shown in figure 3.16a (+8% for RU 200 µg/mL). Unfortunately, the WL experiment does not present a clean curve for the controls; the results are therefore not interpretable.

5961 GS. Given the positive result for the Ti GS experiment, we decided to look more specifically at the level of only certain cell types. This driver targets the gut's intestinal stem cells (ISCs) and enteroendocrine (EE) cells. It does not drive in the enterocytes (EC).
The results could hint at a lifespan extension in the AO experiment: RU 100 µg/mL (the +6% shown in figure 3.16b) is significant according to the log-rank test. However, the two curves are close, and I would therefore not be confident in claiming a lifespan extension in this case. The WL experiment presented the same problem with the controls as in Ti GS.

elav GS. It is the most commonly used GS driver for targeting neurons. In particular, we used the driver version kindly provided by the Tricoire laboratory (UMR 8521), which had mobilized the construct to enhance its expression. elav GS showed toxicity during development, with only the RU 1 µg/mL condition having flies developing in sufficient number to perform the experiments. However, the treated flies died faster during adulthood. This must be an aftermath of the developmental toxicity, as such an effect was not seen in the AO experiment, where the lifespan of the treated flies was not altered (data not shown).

repoGS. It is the most commonly used GS driver for targeting glial cells. Flies subjected to Trl KD driven by repo looked unhealthy from an early age. Moving more slowly in the vials and flying with more difficulty, they were extremely slow in regaining motility after tapping during vial transfer. All the RU concentrations, starting temporally from 200 µg/mL, showed such behaviour, and the flies eventually died. However, the controls also started to look sick and eventually died (mean lifespan ∼ 60 days, lower than the ones observed in all the other longevity experiments). We therefore hypothesized that the toxic effect was line-related rather than coming from the KD of the gene. Although the RU 100 µg/mL effect is significant, it might not be relevant and mostly caused by the initial difference between the two curves.

Control experiments for drivers' toxicity

Given the issue experienced with the repoGS, we decided to control for possible toxic effects of the drivers themselves on longevity and development. [START_REF] Robles-Murguia | Tissue-specific alteration of gene expression and function by RU486 and the GeneSwitch system[END_REF] have shown that the RU486 can affect the flies' gene expression and physiology in a line-dependent manner. The results observed could therefore be driver-specific rather than biologically relevant. To check for this, we crossed the drivers with UAS lines altering genes supposedly not affecting longevity or development. Following the same procedure used for all the experiments presented (figure 3.6), we crossed all the drivers used in our experimental set with two different UAS lines. The first UAS line mediates white KD, which should not have any effect on longevity. The second UAS line drives the expression of lacZ, a gene not present in eukaryotes and whose expression should therefore not alter the fly physiology. Interestingly, repoGS showed, for both UAS lines, the same phenotype as in the Trl experiment: toxicity during development and adulthood. Therefore, the results of the previous experiments are an artefact and do not have any biological meaning regarding the gene of interest. The same applies to elav GS, which proved toxic during development for both control lines. Concerning Ti GS, no significant changes in lifespan or development were detected in the control experiments (data not shown). 5961 GS and daGS showed, for some RU concentrations, an initial drop in the longevity curve, with a subsequent plateau. However, this is likely due to a problem of the specific cross or experiment.
Indeed, no such effect was seen consistently in the numerous daGS experiments performed, nor in any of the 5961 GS experiments. This therefore suggests that, as in the case of Ti GS, the driver itself, in the presence of the drug, is not altering lifespan. Nevertheless, the control experiments for daGS and 5961 GS should be repeated to obtain statistically valid results.

Fertility and heat shock assay

The protocol chosen for these assays is explained in Appendix A.2. Unfortunately, neither assay produced interpretable results, due to technical problems. Regarding the fertility assay, I monitored it as the number of eggs laid per female in a vial. We approximated the number of eggs by the count of the pupae after development. However, some vials experienced growth issues and dry medium, even with daily manual watering. Therefore, the experiments will need to be performed a second time to obtain interpretable results.

Smurf flies present a strong upregulation of the heat shock proteins. We were therefore wondering if the KD of the genes of interest was somehow affecting such a response. The heat shock assay was performed, as explained in Appendix A.2, in an incubator at 37°C, on day 5 of life. From the beginning to the end of the assay, the flies were scored every hour. The protocol was built based on previous experiments run by the laboratory. In the ideal setting, the flies needed to be followed up to the death of the whole population. Unfortunately, given the fixed opening hours of the institute (shortened during the time of the Covid-19 emergency), it was not possible to complete the experiment. On average, half of the flies were still alive for some of the RU concentrations in the evening. However, they were all dead the morning after, suggesting that the experiment with such a protocol needs to be carried out over 24 hours. In addition, there was a strong effect of the position of the vial on the death of the flies. Flies in vials positioned in the outer part of the rack experienced a higher death rate than the ones positioned internally. As the vials were ordered by concentration, some conditions were strongly affected and it was impossible to draw biological conclusions from the data. I would therefore change the protocol of the assay for the next experiments. Flies could be kept at high temperature for a given amount of time, and their mortality monitored afterwards, as done by [START_REF] Moghadam | Heat hardening capacity in Drosophila melanogaster is life stage-specific and juveniles show the highest plasticity[END_REF]. In order to try to minimize the vial-specific effect, vials can be placed individually in small racks (or equally separated in bigger racks) in the incubator or in a water bath [START_REF] Moghadam | Heat hardening capacity in Drosophila melanogaster is life stage-specific and juveniles show the highest plasticity[END_REF].

Starvation assay

Starvation resistance can be associated with increased energy and fat stores in the organism. Indeed, resistance to starvation in Drosophila shows a dependency on the sex and on the diet to which the flies have been exposed prior to the assay [START_REF] Chandegra | Sexually dimorphic effects of dietary sugar on lifespan, feeding and starvation resistance in Drosophila[END_REF].
As in Smurfs we observed a downregulation of fatty acid anabolism, and reduced fat stores were already shown in Smurfs compared to non-Smurfs [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF], we included starvation amongst the stress assays to perform. I performed the assay twice in order to set the final protocol explained in Appendix A.2. On day 7 of life, flies were switched from the longevity food to 1% agarose. Flies were scored three times a day (morning, mid-day, afternoon) up to death. The results of the final experiment for Trl and CG4360 KD are shown in figure 3.17. The only case in which we can detect an increase in lifespan is the CG4360 KD over the whole life of the flies. In this case, we have an increase in survival of up to +15% (figure 3.18). Interestingly, the WL experiment is also the one for which CG4360 showed a lifespan extension. Concerning the CG4360 AO experiment, the treated conditions live less than the controls. However, the treated conditions present less clean curves at the beginning, and the results, even if significant, need to be interpreted carefully (figure 3.18).

Figure 3.17: Longevity curves for the starvation experiments. The experiment corresponding to each curve is reported below the graph. Time is reported in hours; time 0 is the time at which the starvation assay starts. For some curves, we can observe an initial drop for some of the drug concentrations, due to deaths overnight before the chance to score them regularly. CG4360 KD over the whole life shows increased lifespan for the treated conditions.

Figure 3.18: Mean lifespan is reported in hours after the start of the starvation assay. There is no difference in the lifespan of the controls. In the WL experiment, the treated flies experience an increase in lifespan of up to 15%.

For Trl, the two experiments show similar results, with the curves overlapping except for RU 10 µg/mL. This concentration presents a starting drop that is probably experiment-specific rather than biologically relevant. The fact that we observe an increased survival under starvation stress does not imply that this is the mechanism through which CG4360 extends lifespan in physiological conditions. However, in the light of such results, it could be relevant to check for an age-dependent and Smurf-dependent evolution of energy stores in the CG4360 KD line compared to the controls.

Oxidative stress assay

As explained in the Introduction, mitochondrial dysfunction is a hallmark of ageing. For a long time, it has been thought to be the main cause of ageing-related damage (ROS theory). In Smurfs, we indeed observe a strong decrease in the expression of genes involved in mitochondrial metabolism, suggesting a decrease in mitochondrial efficiency. On the other hand, there is an upregulation of Gst genes, which are used for detoxification. Furthermore, the UPR detected in the Smurfs could also hint at a condition of oxidative stress. We therefore included oxidative stress in the assays to perform, to check if the KD of the genes extending lifespan was helping the flies cope with such a condition. H2O2 at four different concentrations (0% as control, 0.5%, 1%, 2%) was used to test the resistance of the flies to oxidative stress. For each concentration, the five usual RU486 concentrations were tested.
As the number of flies to generate for one experiment became substantial, I lowered the N from the usual 150 per concentration used in the previous experiments (5 vials, 30 females each) to 120 (4 vials, 30 females each). For more details concerning the protocol and the food preparation, see Appendix A.2. The experiment has been performed so far only in the AO setting. At H2O2 0.5%, the effect is less marked for Trl, while a significant positive effect is found for CG4360 all along the RU concentrations. However, the RU controls were in this case showing an initial drop that could influence the results. H2O2 1% shows a lifespan extension for both KDs, with an extension of 30% for the RU 10 µg/mL of Trl KD. H2O2 2% is instead toxic in all cases, with no distinctions across the RU concentrations; even if some significant changes are found, they are not relevant from the point of view of the effect. Interestingly, it seems that both Trl and CG4360 KD confer resistance to oxidative stress at the 0.5% and 1% H2O2 concentrations. It would be interesting to see, especially for CG4360, if the same happens when the KD occurs during the whole life of the fly. Indeed, in the case of Trl the conferred resistance to oxidative stress is associated with a lifespan extension (AO experiment, figure 3.11). In the case of CG4360, instead, it does not match the longevity results observed so far (AO experiments, figure 3.14). As discussed for the starvation results, the fact that the KD of the investigated genes improves resistance to oxidative stress does not automatically prove that this is the mechanism through which lifespan is extended in physiological conditions. It could however be relevant to check for oxidative markers (such as protein glycation, lipid oxidation and DNA damage) over time and in Smurfs for the CG4360 and Trl KD.

Validation using independent transgenic lines

The experiments reported were carried out on specific lines bought from fly stock centres such as Bloomington or VRDC (see Appendix A.2). As an additional check, we decided to perform the longevity experiments on independent UAS lines targeting the same genes. We proceeded by buying at least one additional UAS line for the genes showing a lifespan increase. The information about such lines is reported in Appendix A. For CG4360 only one additional line was found (VRDC stock number 10500). However, it had a reported off-target (CG42277, gene symbol rn). The experiment resulted in toxicity even for the control flies (data not shown). This led us to think that the cross with daGS is probably somehow toxic, as happened with repoGS. In the case of Adf1, I performed the validation on two independent Bloomington lines (the line that showed the initial effect was bought from VRDC). While for one I could not detect a lifespan alteration, for the other I could detect a significant lifespan extension (+11%) for the RU 10 µg/mL in the WL experiment (while for the AO there was no detectable lifespan alteration). The results are shown in figure 3.22, point b. I tested another Trl KD GS line. The line showed a strong toxicity during development, with flies not growing in food containing the drug. Surprisingly, the AO experiment also showed a quite strong toxicity, with a reduction in lifespan of 20%. Results are shown in figure 3.22, point a. I performed the experiment a second time, obtaining the same results.
Since the controls have a clean longevity curve, the toxicity of the treated samples must be due to: 1) the Trl KD itself, which might invalidate the previous experiments; 2) the activation of the GS itself, with for instance non-reported off-targets. I made the hypothesis of a stronger KD mediated by the second line, which I will for simplicity refer to as line 2. I quantified the KD of Trl in line 2, but it did not show a stronger effect than line 1. However, the RU 100 µg/mL condition presented no difference compared to the controls, suggesting that there might be a problem in the analysis (either at the level of the RNA used for the retrotranscription or in the qPCR preparation). Therefore, flies of line 2 will have to be regenerated and the quantification re-performed. As the effect is line-specific, I read in more detail the information concerning the lines and the vectors used for their construction (TRiP project, see A.2). The sequences targeted by the two dsRNAs differ: line 1, showing the effect, targets exon 2; line 2, which shows toxicity, targets exon 4. However, both of those exons are always retained in the different isoforms of the gene, suggesting that the different effect does not come from an isoform-specific KD. Trl presents an overlapping ORF (CG42507). This gene, not characterized, should be equally targeted by the two dsRNAs given its position in the genome. However, line 1 (Bloomington 41582) uses the vector "VALIUM 22", while line 2 (Bloomington 67265) uses the "VALIUM 20" vector. As reported on the TRiP project website (see Appendix B.2 for the link to the online page) concerning the generation of the RNAi lines, "VALIUM 22" should drive a stronger expression in the germline and a light expression in the soma, and vice versa for "VALIUM 20". The results of the qPCR show that the KD is nevertheless detectable for line 41582 (whole flies). Furthermore, a lifespan extension for the same line is detected with Ti GS, which suggests that the KD is active at the level of the gut. Unfortunately, the qPCR performed on the gut samples to confirm the KD of Trl presented technical problems and will therefore have to be repeated. In order to investigate the reason behind the line-specific difference, I would follow these steps:

1. perform the qPCR on ovaries only for line 1. In parallel, perform the qPCR on other body parts (such as the head, for an approximation of the nervous system). Detecting the KD in the heads would prove that the line is driving not only in the germline;

2. validate the KD in the Ti GS gut;

3. re-perform the qPCR on line 2, given the unexpected results for RU 100 µg/mL, which suggest a technical problem. In addition, perform the quantification in the ovaries only, to compare with the signal found in line 1.

If there is no detectable difference in the whole-body signal between the two lines, but there is in the ovaries, it could be that the effect is due to the germline specificity of the vector. If, on the other hand, a stronger KD is detected for line 2 in the whole body, a model in which a light KD of Trl is pro-longevity, while a stronger one is toxic, can be proposed. A way to finally confirm this would be to decrease the RU concentrations used for the experiment in line 2, in order to drive a KD of similar strength compared to line 1.

Conclusions

In this chapter, I have described the experiments aimed at identifying new genes affecting lifespan using the Smurf phenotype. Those experiments were carried out during the second and third years of my PhD.
Part of the complementary experiments, aiming at further investigating the genes having a positive effect on lifespan, are still ongoing. The starting point of this part of my project is the DEG analysis presented in section 2.2.4. Given the presence of numerous genes annotated as affecting longevity amongst the Smurf genes (figure 3.1 and table B.8), we decided to screen for new genes affecting longevity amongst the ones detected as DEGs in Smurfs. We chose to focus on TFs, as amongst them may lie genes responsible for the signature observed. Starting from a selection at the level of the TFs detected as DEGs and their putative upstream regulators (through the software i-cisTarget), we selected 17 genes to test. Then, through GS, I performed longevity experiments to assess the effect of their KD or OX on longevity.

The first screening resulted in five genes putatively extending lifespan (figure 3.7). Amongst the ones not extending lifespan, Aef1 showed interesting results, with its KD at different levels showing a gradient effect on the longevity of the flies (figure 3.8). This hints at a putative role of this gene in shaping longevity. Unfortunately, the OX line was not available yet and, therefore, I could not carry out the complementary OX experiment. However, it will be interesting to do so in the future and further investigate this gene. I carried out a second round of experiments to validate the results observed in the first screening, confirming the effect for three of the five genes initially identified (figure 3.4): Trl, CG4360 and Adf1. This proves that we can identify new genes affecting longevity by looking at the Smurf phenotype.

For all the genes showing an effect on lifespan, I recorded the change in Smurf proportion over time. For all the experiments performed (figures 3.10, 3.11, 3.12, 3.13, 3.14), I confirmed the time-dependent increase of the Smurf proportion in a population. Interestingly, the lifespan extension (figures 3.10, 3.11, 3.12, 3.13) is accompanied by a slower increase of Smurfs in the population, proving that the lifespan extension corresponds to an increase in the length of the non-Smurf phase of the flies. The KD of Trl and CG4360 in males does not show any effect on lifespan. Longevity experiments on Adf1 KD in males are ongoing.

To further investigate the three genes showing a positive effect, we planned a series of complementary experiments. More specifically: 1) performing tissue-specific KD using tissue-specific GS drivers; 2) performing longevity experiments under stress conditions (starvation, oxidative stress and heat shock) to test if the KD of the gene of interest increases the flies' resistance; 3) monitoring the fertility of the flies, to check if the increase in lifespan is accompanied by an alteration in fertility. Only part of these experiments has been completed. Amongst the ones performed, the only positive hit is given by the lifespan extension of Trl KD in the gut. Tissue-specific experiments for Adf1 are ongoing. The heat shock and fertility assays will have to be carried out again, as the experiments did not return interpretable results. CG4360 KD confers resistance to starvation in the WL experiment, with an extension in lifespan of up to +15%. CG4360 KD also confers resistance to oxidative stress, with treated flies living longer than controls when exposed to food with H2O2 0.5% and 1%. Interestingly, Trl KD, which does not show any effect in the starvation assay, also confers resistance to oxidative stress.
Flies with Trl KD live 30% longer than the controls when exposed to food with 0.5% H2O2. The exploratory experiments conducted suggest that Trl and CG4360 have a role in stress resistance (starvation and oxidative stress). However, additional experiments will be required to clarify the role of these genes in establishing the lifespan extension.

The Smurf phenotype is assessed through the loss of selective intestinal permeability in the organism [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF]. In the studies conducted so far, flies have been shown to always turn Smurf prior to death and to spend a roughly constant amount of time in that phase [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF][START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. The proportion of Smurf flies in a synchronous population increases with time [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF]. Therefore, individuals dying early in a population will turn Smurf sooner and spend less time as non-Smurfs than those who die later. Furthermore, it has been shown that Smurfs are the only ones in the population, independently of their age, to carry some markers that have previously been associated with ageing [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF][START_REF] Rera | The Smurf transition: new insights on ageing from end-of-life studies in animal models[END_REF]. Therefore, it has been proposed that ageing could be reinterpreted as a biphasic path instead of a progressive one [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Individuals would then undergo two different phases in their life. In phase 1 (non-Smurf) the organism is seemingly healthy and experiences a null mortality rate. However, with time its probability of transitioning to phase 2 increases. In phase 2 (Smurf), the individuals experience the health degradation and decreased functionality typically described as ageing, together with a high risk of impending death [START_REF] López-Otín | The Hallmarks of Aging[END_REF]. If this model holds, then the ability to identify Smurfs in a population would allow for identifying physiologically old individuals, independently of their age, in vivo and in a non-invasive way. In order to validate the model and its implications for ageing, I investigated the changes occurring in the transcriptome of Smurfs at different ages.

Smurfs: a biphasic core signal summarizing ageing

The results presented in my work validate the Smurfs (transcriptome-wide) as a powerful model to identify biologically old individuals in vivo. Indeed, Smurfs present a core gene expression signal that defines them as such in comparison to the non-Smurfs and independently of their age (sections 2.2.4, 2.2.5 and figures 2.6, 2.7, 2.8). Furthermore, these Smurf-specific changes summarize more than half of the transcriptional hallmarks of ageing (Frenk and Houseley, 2018) (figure 2.16). These findings demonstrate that most of what has so far been described as an ageing-related signal is a Smurf-specific signal.
Inflammation and stress response (UPR, together with the oxidative stress suggested by the activation of the Gst family) are consistently found as upregulated in Smurfs in the different analyses performed. On the other hand, Smurfs present a broad downregulation of metabolic pathways, especially mitochondria-related ones, and of genes involved in oogenesis. Downregulation of ribosome-related genes is also detected. When I compare old non-Smurfs (sampled at 10% survival) to young non-Smurfs (90% survival), 3% of the genes are detected as DEGs, as opposed to the 20% detected with Smurfness. Those genes are mainly connected to increased inflammation and a decrease in fertility. These results show that the old non-Smurf transcriptome is not an ageing transcriptome. Therefore, Smurfness establishes itself, transcriptome-wide, as a better predictor of ageing than chronological age.

Effect of chronological age per se

However, it would be too simplistic to confine ageing to a binary vision of life. Indeed, the mere fact that time increases the probability of transitioning to Smurf is a confirmation of its effect on individuals. To investigate the effect of time on my samples, I proceeded with a time-course comparison within conditions (non-Smurfs and Smurfs). As mentioned in the previous section, the old non-Smurfs carry a partial Smurf signal (increased inflammation and reduced fertility). However, the old Smurfs carry a strong signal not observed in the previous analysis, which can be summarized as decreased expression of genes involved in DNA replication and repair, and in RNA processing and transcription. The latter is one of the hallmarks of ageing that was not detected in Smurfs (section 2.3.3). By looking at how the genes involved in such processes correlate with age and Smurfness, I showed that they negatively correlate with age (section 2.5 and figure 2.28). This suggests that we are looking at an age-related signal rather than an old-Smurf one. The fact that DESeq2 detects it only in old Smurfs might be due to a possible exacerbation of these changes after the old flies turn Smurf.

Another transcriptional hallmark of ageing, falling in the same category as RNA processing, is a general misregulation of transcription, provoking an increase in the heterogeneity of gene expression. In section 2.2.8 I show that, although we can also detect Smurf/non-Smurf differences, the noise associated with gene expression increases with age in both conditions. This result is probably not independent of the decrease I observe in transcription-related genes: a malfunction in these pathways might generate variability in gene expression. The increased heterogeneity in gene expression could also be a concomitant mechanism explaining the increased probability of entering the Smurf phase with time, as I speculated in section 2.2.9.

By using Smurfness as a biomarker of physiological age, we have a tool to deconvolve markers associated with the biological age of the organism from markers associated with its chronological age. In the literature, these signals might be mixed, as shown for the transcriptional hallmarks of ageing [START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF] (see section 2.2.9). My results point to the model proposed in figure 4.1, where short-lived and long-lived individuals present the same physiological ageing hallmarks, with the difference of experiencing them at different rates.
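To illustrate how genes can be positioned with respect to age and Smurfness (the analysis referred to above, section 2.5 and figure 2.28), the sketch below computes per-gene correlations with both variables. The expression matrix, sample ages and Smurf labels are simulated placeholders, not the real dataset.

```r
# Minimal sketch: per-gene Pearson correlation with age and with Smurfness,
# used to separate age-driven from Smurf-driven signal (cf. figure 2.28).
set.seed(1)
expr <- matrix(rnorm(1000 * 6), nrow = 1000,            # hypothetical normalized log-counts
               dimnames = list(paste0("g", 1:1000), paste0("s", 1:6)))
meta <- data.frame(age   = c(20, 20, 30, 30, 40, 40),    # days (placeholder)
                   smurf = c(0, 1, 0, 1, 0, 1))           # 0 = non-Smurf, 1 = Smurf

cor_age   <- apply(expr, 1, function(x) cor(x, meta$age))
cor_smurf <- apply(expr, 1, function(x) cor(x, meta$smurf))

# Genes close to the x axis are mostly Smurf-driven,
# genes close to the y axis mostly age-driven
plot(cor_smurf, cor_age, pch = 20, cex = 0.4,
     xlab = "correlation with Smurfness", ylab = "correlation with age")
```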
However, they differ in the chronological ageing hallmarks, which are experienced only by long-lived individuals in their last phase of life. As a continuation of the study, it would be interesting to experimentally validate these results. To start, I would confirm by qPCR the decrease in expression of some of these putatively age-only related genes in Smurfs and non-Smurfs (possibly mimicking the same time points of the current experiment, i.e. 90%, 50% and 10% survival). Suitable targets for this validation could be the genes showing the best negative correlation with age in figure 2.28. The experimental validation of these results would add evidence to the model proposed here (figure 4.1).

Non-Smurf and Smurf represent two distinct phases in the life of the organism. The non-Smurf phase is a seemingly healthy one, during which individuals do not present ageing-related gene expression changes. However, before turning Smurf, flies experience one of the transcriptional hallmarks of ageing, the dysregulation of immune genes. This brings the flies into a "susceptible", or pre-Smurf, phase. The transition to Smurf marks the beginning of phase 2 of life, characterized by the degradation of the body's health and a high risk of death. From a transcriptional point of view, individuals show many of the markers associated with ageing (downregulation of mitochondrial metabolism genes, downregulation of genes involved in translation, deregulation of immune response genes and upregulation of genes involved in the response to different types of stress). The presence of such changes is independent of the time at which the transition happens, as they are related to the physiological time of the organism. Short-lived individuals (i.e. dying before the mean lifespan of the population) will have a shorter non-Smurf phase compared to long-lived individuals (i.e. dying after the mean lifespan of the population). The long-lived individuals will have a "slower" physiological time compared to the short-lived ones. However, at old ages they will experience time-related changes not observed in the short-lived individuals, which account for hallmarks of transcriptional ageing that are not otherwise found in Smurfs: increased noise in gene expression and altered RNA processing.

Studying the pre-Smurfs

The Smurf phenotype is a late stage of life, in which an individual is committed to death within a specific time frame (on average 2 days in flies, [START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]). The fact that, at least from a transcriptional point of view, Smurfs summarize much of what has been described as the markers of ageing may suggest that the phenotype represents a critical confounding factor in ageing studies. Indeed, if we randomly sample from a population over time, we will sample, on average, an increasing number of Smurfs. Older time points would therefore experience a Smurf bias in the sampling. In this case, the progressive increase in the ageing markers would actually be due to an increase in the sampling of Smurf individuals carrying them. Smurfness being a late phase of life, the changes we identify by studying it are probably a mixture of causes and effects of ageing. Interventions extending lifespan delay the entrance into this phase, prolonging the non-Smurf phase instead (results presented in section 3.4). This suggests that the Smurf phase might not be a plastic phase of ageing, but rather an irreversible one that commits the organism to death.
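As a toy illustration of the two-phase model described above (a phase 1 with an age-increasing probability of transitioning to Smurf, followed by a Smurf phase of roughly constant duration), the following sketch simulates a synchronous population. The parameter names echo the a and k of figure 1.15, but all values are arbitrary and purely illustrative.

```r
# Toy simulation of the biphasic model: phase 1 with an increasing daily probability
# of turning Smurf, phase 2 (Smurf) of roughly constant duration before death.
set.seed(2)
n <- 1000
p_smurf <- function(day, a = 1e-4, k = 0.1) pmin(1, a * exp(k * day))  # transition probability

t_smurf <- sapply(seq_len(n), function(i) {
  d <- 0
  repeat { d <- d + 1; if (runif(1) < p_smurf(d)) return(d) }
})
t_death <- t_smurf + pmax(1, round(rnorm(n, mean = 2, sd = 0.5)))      # ~2 days spent as a Smurf

days       <- seq_len(max(t_death))
survival   <- sapply(days, function(d) mean(t_death > d))
smurf_prop <- sapply(days, function(d) {
  alive <- t_death > d
  if (any(alive)) mean(t_smurf[alive] <= d) else NA     # Smurf fraction among alive flies
})
plot(days, survival, type = "l", xlab = "day", ylab = "fraction")
lines(days, smurf_prop, lty = 2)                        # Smurf proportion increases with time
```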
As I discussed in section 2.4, the ability to identify flies before they turn Smurf could give us powerful insights into the early mechanisms of ageing. My results show that inflammation is probably one of the pre-Smurf processes. This supports the inflammageing hypothesis [START_REF] Franceschi | Inflammaging. An evolutionary perspective on immunosenescence[END_REF] as illustrated in figure 1.4, according to which inflammation would be the first hit of ageing, leading the organism into a susceptibility state before a second hit brings it to the unhealthy stage of life. The second hit could be reinterpreted as the Smurf transition.

Building a Smurf reporter would be a non-invasive way to identify pre-Smurfs. The idea would be to observe the physiological time passing by, in vivo. This could be achieved by expressing a fluorescent reporter (such as GFP) under the control of a gene showing an increase in expression with time in the non-Smurfs (figure 4.2). By testing putative reporters, and collecting data on the expression of the gene and the entrance into the Smurf phase, we could possibly find a good predictor of the transition and establish a tool for pre-Smurf identification. Such a tool would allow individuals at different physiological times to be identified and their ageing to be followed. The pre-Smurf reporter project was started last summer by Sofia Sosa Marmol, an undergraduate student in the laboratory. However, the work was preliminary, and the project is currently on hold.

In the Introduction, I presented the work of [START_REF] Zhang | Extended Twilight among Isogenic C. elegans Causes a Disproportionate Scaling between Lifespan and Health[END_REF] in C. elegans, in which the authors investigated the different individual ageing rates in a large population of nematodes (section 1.3). Their results pointed to a model supporting the "rate of ageing hypothesis": individuals are biologically equal at birth (in terms of life expectancy) but diverge early in life, experiencing different ageing rates and dying at different times. Furthermore, their results suggest that long-lived individuals spend, in proportion, more time in a senescent or "bad health" phase compared to short-lived individuals. Flies, on the other hand, have been shown to spend around the same time in the Smurf state [START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF][START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF], which would correspond to the late/bad health phase of life. However, given the model proposed in figure 4.1, we could hypothesize that the chronological ageing experienced by long-lived individuals leads them to a more extended susceptibility phase (which would translate into a longer pre-Smurf phase compared to short-lived individuals). Of course, this is no more than speculation, but it could be an interesting way to investigate and combine those two models in the future.

Comparison with the BiT transcriptome

In the Introduction, I also discussed the recently published BiT transcriptome [START_REF] Meyer | BiT age: A transcriptome-based aging clock near the theoretical limit of accuracy[END_REF], which identifies a transcriptomic ageing clock in C. elegans. We tried to compare the signature found in our data with the C. elegans BiT.
The enrichment analysis performed by the authors on the 576 genes forming the ageing clock found enriched categories such as immune response, neuropeptide signalling and stress response. The biological processes cited recall what was identified in section 2.2.5 for the Smurfs. As an independent validation, I downloaded the list of the 576 genes constituting the clock and the corresponding coefficients (negative if the gene decreases its expression with ageing, and vice versa). I then submitted the two lists independently to the enrichment analysis tool available at WormBase [START_REF] Davis | WormBase in 2022-data, processes, and tools for analyzing Caenorhabditis elegans[END_REF]. The immune response was the primary signal that I could detect for genes with a positive coefficient. In contrast, the downregulated genes were enriched in germline/mating categories, motility, neuropeptide signalling and extracellular matrix components. Those results are interesting, as they show good overlap with what was observed in Smurfs. Thus, I tried to compare the two datasets at the gene level.

In order to compare the two datasets, I first retrieved the Drosophila orthologs corresponding to the BiT transcriptome (with the online tool g:Profiler [START_REF] Raudvere | Profiler: a web server for functional enrichment analysis and conversions of gene lists (2019 update)[END_REF]). I found 316 Drosophila genes, corresponding to 129 genes of the BiT; the numbers do not match because, for certain genes, more than one ortholog is retrieved. Interestingly, the TF Adf1 is a BiT gene, which supports its role in longevity as described in the manuscript. I first tried to plot the genes retrieved in a manner similar to what was done in section 2.5. As shown in figure 4.3, I cannot detect a significant difference in the gene distribution compared to the background. This means that the genes retrieved do not specifically correlate with age or Smurfness in my dataset. Comparing the list of genes retrieved with the Smurf DEGs does not increase the interpretability. I could detect only 140 Drosophila genes, retrieved from

In order to exclude the possibility that the lack of correlation between my dataset and the BiT was due simply to a conversion issue with the tool used, I performed the conversion using the homologene R package. Also in this case, I could not detect an overlap. However, the lack of immune-related genes amongst the Smurf DEGs overlapping with the BiT genes suggests a problem at the level of the conversion, or the absence of direct homologues of the BiT genes in Drosophila. It might indeed be the case that a Drosophila BiT would involve genes implicated in processes similar to those of the nematode BiT, but not directly identifiable as homologues.

Smurfness and new genes affecting longevity

Chapter 3 was dedicated to the illustration of the experiments carried out during the second and third years of my PhD. The aim of such experiments was to investigate the possibility of finding new genes extending lifespan by studying ageing through the Smurf phenotype. A large screening carried out through longevity experiments showed a lifespan-extending effect for the KD of three TFs: Adf1, Trl and CG4360. Those genes were found as putative regulators of TFs upregulated in Smurfs (see section 3.1). As shown in figure 3.3, the genes predicted as their targets amongst the ones selected in Smurfs show a non-negligible overlap. The products of these genes might therefore directly or indirectly interact and share a similar mode of action in their lifespan-extending effect.
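For reference, the ortholog mapping step described above can be sketched as follows. The worm gene symbols used here are placeholders rather than the actual BiT list, and the exact arguments should be checked against the package documentation.

```r
# Minimal sketch of the C. elegans -> Drosophila ortholog mapping discussed above.
# Gene symbols are placeholders, not the real BiT gene list.
library(homologene)                                   # NCBI HomoloGene-based mapping
bit_genes <- c("daf-16", "hsp-16.2", "ins-7")         # hypothetical worm genes
homologene(bit_genes, inTax = 6239, outTax = 7227)    # 6239 = C. elegans, 7227 = D. melanogaster

# Alternative conversion through g:Profiler (g:Orth), analogous to the first mapping used
# library(gprofiler2)
# gorth(bit_genes, source_organism = "celegans", target_organism = "dmelanogaster")
```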
In order to investigate the possible interactions amongst the three genes validated as having a longevity effect, I used the STRING database [START_REF] Szklarczyk | The STRING database in 2021: customizable proteinprotein networks, and functional characterization of user-uploaded gene/measurement sets[END_REF]. STRING is a database of protein-protein interactions, which conveniently groups and maps them, for each query, into a weighted network (i.e. a network where each node is a protein and each edge an interaction, weighted with a score summarizing the evidence provided for it). The annotated protein-protein interactions can be known (from curated databases and experimental evidence), predicted (gene neighbourhood, gene fusion, gene co-occurrence), or derived from text mining (e.g. co-occurrence in abstracts).

In figure 4.4 I report the STRING interaction network of Adf1. We can retrieve some familiar interactors encountered throughout Chapter 3: Trl, Aef1, CG4360. The interaction with Aef1 comes from text mining, in particular from the paper briefly discussed in section 3.1 [START_REF] Birnbaum | Age-Dependent Changes in Transcription Factor FOXO Targeting in Female Drosophila[END_REF], which finds Adf1, Aef1 and Trl as putative co-actors of FOXO (their binding sites are found enriched in FOXO binding regions). Interestingly, the interaction between Adf1 and Trl has been experimentally demonstrated [START_REF] Lomaev | The GAGA factor regulatory network: Identification of GAGA factor associated proteins[END_REF]. In this study, the authors screen for proteins interacting with Trl (whose protein product is mostly known as the GAGA factor, or GAF). In addition, although not demonstrating a physical interaction between the two proteins, [START_REF] Talamillo | Expression of the Drosophila melanogaster ATP synthase α subunit gene is regulated by a transcriptional element containing GAF and Adf-1 binding sites[END_REF] show that both are required for the expression of the ATP synthase α subunit.

However, the two genes have mostly been studied in different contexts. The GAF protein encoded by Trl works as an antirepressor, inducing gene expression by promoting an open chromatin configuration [START_REF] Farkas | The Trithorax-like gene encodes the Drosophila GAGA factor[END_REF][START_REF] Okada | Chromatin Remodeling Mediated by Drosophila GAGA Factor and ISWI Activates fushi tarazu Gene Transcription In Vitro[END_REF]. Its action has been documented as influencing disparate biological processes, such as dosage compensation in males [START_REF] Greenberg | The Drosophila GAGA factor is required for dosage compensation in males and for the formation of the male-specific-lethal complex chromatin entry site at 12DE[END_REF], activation of genes of the Hsp family [START_REF] Weber | Molecular architecture of the hsp70 promoter after deletion of the TATA box or the upstream regulation region[END_REF][START_REF] Leibovitch | GAGA factor and the TFIID complex collaborate in generating an open chromatin structure at the Drosophila melanogaster hsp26 promoter[END_REF], and both oogenesis [START_REF] Fedorova | GAGA protein is required for multiple aspects of Drosophila oogenesis and female fertility[END_REF] and spermatogenesis [START_REF] Dorogova | GAGA protein is essential for male germ cell development in Drosophila[END_REF]. Its role in the germline has also been studied during development [START_REF] Dorogova | Role of GAGA Factor in Drosophila Primordial Germ Cell Migration and Gonad Development[END_REF].
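The same kind of query can also be scripted; the sketch below assumes the Bioconductor STRINGdb package (the STRING web interface was the tool actually used here), and the version, score threshold and gene list are purely illustrative.

```r
# Minimal sketch: retrieving the Adf1 neighbourhood from STRING in R,
# assuming the Bioconductor STRINGdb package; all values are illustrative.
library(STRINGdb)
string_db <- STRINGdb$new(version = "11.5", species = 7227,      # 7227 = D. melanogaster
                          score_threshold = 400, input_directory = "")
genes  <- data.frame(gene = c("Adf1", "Trl", "Aef1", "CG4360"))
mapped <- string_db$map(genes, "gene", removeUnmappedRows = TRUE)  # symbols -> STRING IDs
string_db$get_interactions(mapped$STRING_id)                       # pairwise interaction scores
string_db$plot_network(mapped$STRING_id)                           # network as in figure 4.4
```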
The role of Trl in the germline is interesting in the light of what was discussed in section 3.6 concerning the germline-specific KD of Trl in the line showing lifespan extension. However, according to the studies cited above, a reduction in GAF has negative effects on oogenesis, negatively impacting fertility. Even if a proper quantification of fertility has not yet been obtained, the line never presented noticeable sterility in any of the experiments carried out.

Adf1 was instead first studied through its mutant nalyot. nalyot flies present defects in long-term memory, with impaired learning capacity when undergoing Pavlovian training [START_REF] Dezazzo | nalyot, a Mutation of the Drosophila Myb-Related Adf1 Transcription Factor, Disrupts Synapse Formation and Olfactory Memory[END_REF]. Adf1 has since then mostly been studied in the nervous system. Its expression has been confirmed in the motor neurons of the thoracic abdominal ganglion in the adult [START_REF] Timmerman | The Drosophila Transcription Factor Adf-1 (nalyot) Regulates Dendrite Growth by Controlling FasII and Staufen Expression Downstream of CaMKII and Neural Activity[END_REF]. The same study shows its implication in dendritic branching in motor neurons. Interestingly, in none of my experiments did I observe visible differences in locomotor activity compared to other lines. In addition, at the developmental level, I did not notice toxicity in the treated flies, apart from perhaps a slightly reduced growth at the higher drug concentrations. The longevity assay on the elavGS-driven Adf1 KD is ongoing, and it is not showing toxicity so far (estimated end: mid-April 2022).

It is possible that, given their experimentally demonstrated interaction, the mechanism through which Trl and Adf1 act on longevity is the same. In the context of further studies addressing such a point, I would start by investigating in which tissues these genes show co-expression. If co-expression is found, the interaction between the two proteins in those tissues will have to be verified using techniques such as pull-down assays. Identifying the tissue or tissues of action will narrow the space of investigation. As a subsequent step, I would then return to the list of putative Smurf target genes returned by i-cisTarget and evaluate which ones present a different expression in Smurfs in the tissues of interest. At that point, we would have a better idea of the pathways through which the genes are acting.

CG4360 is uncharacterized. FlyBase reports it as an ortholog of the human zinc finger TFs ZSCAN10, ZNF157 and ZNF182, which are also poorly characterized. I tried to blast the sequence of CG4360 against the Drosophila genome using blastn [START_REF] Altschul | Basic local alignment search tool[END_REF]. The gene shows 70% identity with Aef1, which also belongs to the zinc finger family. However, according to the results presented, the two genes have opposite effects on longevity. The available elements are not enough to affirm a possible interaction between the two, as is instead the case for Adf1 and Trl. However, it could be interesting to better characterize them and their possible role in longevity. The network in figure 4.4 presents an edge between CG4360 and Adf1, which corresponds to an interaction of the homologous genes in C. elegans [START_REF] Reece-Hoyes | Extensive Rewiring and Complex Evolutionary Dynamics in a C. elegans Multiparameter Transcription Factor Network[END_REF].
Limits of the study

The main limit of the study is the use of only females for the characterization of the Smurf signature. Indeed, the transcriptional signature of Smurf males might be different; in order to describe a universal Smurf signature, validation on males will be required. More generally, most of the Smurf studies performed so far have been done in females. The metabolomics data, briefly presented in section 2.2.7, will start to fill this gap, as they will allow the female Smurf metabolomic signature to be compared with the male one.

Concerning the understanding of the Smurf model, another limitation of the study is that it characterizes the transcriptional state only. Gene expression changes can give us extensive information about the physiological state of the organism; on the other hand, a change observed in gene expression might not translate to the protein level. Furthermore, a process regulated at the post-translational level might not be detectable with transcriptomic data. The integration with the metabolomic and proteomic data collected in Smurfs will help to address this limitation. The genes found as deregulated at many levels could be interesting ageing targets. An example of such a gene is ref(2)P.

Impact of the Covid-19 pandemic on the project

The Covid-19 pandemic had an important impact on my research project. When the first lockdown (March-May 2020) arrived, I had started the first experimental screening (section 3.3) a few weeks earlier. The lockdown delayed the experiments, a delay exacerbated by the loss of our fly stocks. All the lines initially bought in October 2019 had to be re-bought. The delivery of flies from stock centres also slowed down over the summer of 2020, with flies sometimes arriving in bad condition and needing to be re-ordered. Over the same summer, the move of the laboratory from the Institut Biologie Paris-Seine (IBPS) to the Center for Research and Interdisciplinarity (CRI, now Learning Planet Institute, LPI) blocked the experiments for a few weeks. Over the academic year 2020-2021, the laboratory remained accessible during the second and third lockdowns for those who could not work from home, but was subject to curfew (closing at 8 p.m. for most of the year) and short opening times over the weekend. Unfortunately, working with animals often requires flexibility, even with good planning. The restricted opening hours especially affected the stress resistance assays, making it impossible to complete the heat shock experiment and requiring the starvation experiment to be carried out three times before obtaining a protocol adapted to the institute's accessibility.

The pandemic also affected my work in other ways. Spring 2020, and the period from fall 2020 until restrictions were eased and then lifted in late spring 2021, were characterized by the impossibility of doing much more than working. Over time, the isolation and the lack of any activity that was not work-related, together with the general non-standard situation, dramatically increased my level of stress and eventually decreased my productivity, creativity and lucidity at work. In addition to the pandemic, the development of carpal tunnel syndrome in both hands badly affected my capacity to perform experiments and to work several hours on my computer during my second and third years of PhD. At the end of the first year, starting from the bioinformatic analysis performed during that time, we had many ideas for complementary experiments to carry out.
Amongst them, we wanted, for instance, to investigate through immunostaining the changes in structural genes downregulated in the gut of Smurfs. I did some preliminary work on this in January 2020, but the lockdown and the worsening of my hand condition prevented me from pursuing additional experiments apart from the longevity assays. In this regard, I want to thank my supervisor Michael Rera and the students of the laboratory (Hayet Bouzid, Sofia Sosa Marmol and Jasmina Yeung), who helped me with the maintenance and running of my experiments when needed.

Final conclusions

I started this manuscript by referring to the words of Heraclitus about the perpetually becoming state of animate and inanimate matter. Ageing has always been studied as a becoming (or rather, unbecoming) condition of the organism, progressively occurring through time. The Smurf model apparently contradicts such a view, proposing a transition between two phases of life. The Smurf phenotype allows us to identify a very late, stereotyped stage of life, where the ageing process (in the sense of the unbecoming of the organism through time) might already be concluded. However, as I have illustrated throughout the manuscript, many ageing gene expression hallmarks are Smurf-specific rather than age-specific. So, then, is Smurfness ageing? If by ageing we mean the changes that have so far been described as such, my answer is yes: Smurfness does summarize most of ageing. If we instead refer to ageing as the process that leads to the organism's failure, my answer is no. In this context, I would define as ageing what occurs through time before the Smurf transition. The critical information that the biphasic ageing model carries is that ageing might be a more discrete process than we initially thought. The organism undergoes a first healthy state of life, then enters a susceptibility phase, and then a late stage (Smurfness). The plastic steps of ageing are probably hidden in the first healthy and susceptibility phases. Therefore, the Smurf model can provide a critical contribution to the field of ageing, not only by allowing ageing to be followed individually, but also by opening the way to studying it

A.2 Experiments

General fly maintenance. All the flies are kept in closed vials in incubators at controlled temperature and humidity, with a 12-hour light cycle. All the experiments are carried out at 26°C. Lines in stock are kept at 18°C.

Longevity experiment food. All the longevity experiments were run on the following food composition: 5.14% (w/v) yeast, 2.91% (w/v) corn, 4.28% (w/v) sugar, 0.57% (w/v) agar, and methyl 4-hydroxybenzoate (Moldex) at a final concentration of 5.3 g/L to prevent fungal contamination. All components are mixed as powders in the respective amounts and water is then added. The solution is mixed until it is homogeneous. The subsequent step is boiling, which is essential for agar polymerization. After the first boiling, the food is mixed and then boiled a second time; the additional boiling ensures that the whole batch of food is correctly boiled and not only the surface. After that, the Moldex is added and the solution remixed. The food is then poured into the vials, 1.25 mL each.

Maintenance and development. Stocks and the developmental part of the experiments are kept on vials with 5 mL of food each. The food composition is the following: 1% (w/v) agar, 9% (w/v) corn, 7% (w/v) yeast, and Moldex (final concentration 5.3 g/L). This food was bought from the UMR7622.

Use of RU486 drug.
The drug is stocked in 96% ethanol at 20 mg/mL and stored at -20°C. At the moment of food preparation, the batch of food is divided into 5 sub-batches in which the drug is diluted. Ethanol is added to the sub-batches with a lower drug concentration so that all conditions contain the same amount of ethanol as the highest drug condition.

Driver lines used (GS). Lab based: daGS. Kindly provided: Ti GS by the UMR9197; elav GS, repoGS and 5961 GS by the UMR8521.

TRiP project. Information about the 41582 and 67265 lines and the vectors used for their construction was retrieved from the official website of the TRiP project (https://fgr.hms.harvard.edu/fly-in-vivo-rnai).

UAS lines used (GS).

Longevity assay. Just after eclosion, flies are collected in tubes with food and RU486. Males and females are left together to mate for 48 hours. After that time, males or females (depending on the experiment) are sorted into groups of 30 per vial, with 5 vials for each RU concentration; the total N for each RU concentration is therefore 150. Flies are transferred to new vials with fresh food and scored three times per week (Monday, Wednesday, Friday). An exception is made during the first two weeks of the experiment, when females undergo an additional transfer on Saturday or Sunday because the fertilized eggs alter the food composition. The food is normally prepared the day before the scoring.

Smurf assay. Flies are transferred from normal food to food containing the blue dye FD&C #1 at 2.5% (w/v) 24 hours prior to Smurf counting. The dye is added as the last component of the food and dissolved in it. At the moment of the counting, flies are transferred back onto normal food. All the flies therefore spend the same amount of time on blue food, so as not to introduce bias in the counts. Note that with this method we have no information about the exact time at which flies become Smurfs. However, as the Smurfs spend on average the same amount of time in this phase, recording the proportion of Smurfs present in the population gives us a good estimation of their appearance over time. Smurf counting was performed every two weeks while the population was in the survival plateau, and every week once it exited it.

Fertility assay. The F1 generated as in figure 3.6 is left to mate for 48 hours (in vials with the specific drug concentration). After that time, females of each group are divided into five vials, with 20 females per vial. I lowered the number of flies per vial from 30 to 20 because the aim of the experiment is to count the number of eggs laid, and a high number of flies could cause overcrowding and a possible underestimation of the number of eggs laid. I perform the assay at day 10 and day 17 of age. For 8 hours, flies are placed in vials with 5 mL of food and no drug. After the 8 hours, they are switched back to vials with drug and 2.5 mL of food each (the normal conditions of the longevity assay). After 10 days (the development time for the lines at 26°C), the pupae are manually counted for each vial. The eggs are laid and develop on food without drug in order to interfere as little as possible with the development of the flies.

Starvation assay. After development and the 48-hour mating, female flies are separated into groups of 30 per vial, with 5 vials per drug concentration. Flies are placed on starvation food at day 7 and scored three times per day, every day. The starvation food consists of 1% agarose in water.

Oxidative stress assay.
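For completeness, the statistical comparison applied to the longevity data described above (survival curves compared with log-rank tests, as reported in the Chapter 3 figure legends) can be sketched with the R survival package; the death-day values below are simulated placeholders, not real data.

```r
# Minimal sketch: comparing a control and an RU486-treated longevity curve
# with a log-rank test, as reported in the figure legends of Chapter 3.
library(survival)
set.seed(3)
deaths <- data.frame(
  day    = c(rpois(150, 70), rpois(150, 78)),      # placeholder death days (150 flies per group)
  status = 1,                                      # 1 = death observed (no censoring)
  group  = rep(c("RU0", "RU50"), each = 150)
)
fit <- survfit(Surv(day, status) ~ group, data = deaths)
plot(fit, col = c("black", "red"), xlab = "day", ylab = "survival")
survdiff(Surv(day, status) ~ group, data = deaths)   # log-rank test
```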
Flies are generated and left to mate for 48 hours as in figure 3.6. Females are then separated into groups of 30 per vial, with 16 vials per drug concentration. Flies are kept for five days under conditions equal to those of the longevity experiment. At day 5, the assay starts. For each drug concentration, I test 4 concentrations of H2O2 (0% as controls, 0.5%, 1% and 2%). For each combination of drug and H2O2 concentration I therefore have 4 vials of 30 females each (N = 120). Survival is scored every day up to the end of the assay. Food is prepared following the protocol used for the longevity assay. The food is prepared in a single batch, then divided into 5 sub-batches for the 5 drug concentrations, and subsequently each drug sub-batch is divided into 4 sub-batches for the H2O2 concentrations. It is important to add the H2O2 once the food has cooled to 40-50°C, in order to preserve the integrity of the molecule. It is also important to correct for the amount of water when preparing the initial batch: the H2O2 stock solution is diluted in water, so in order to end up with the desired amount of water, the percentage of water added with the H2O2 needs to be subtracted from the initial batch preparation.

Heat shock assay. Flies are generated and left to mate for 48 hours as in figure 3.6. Females are then separated into groups of 30 per vial, with 5 vials per drug concentration. The conditions are the same as in the longevity experiment. After one week, the assay is performed. In the morning, flies are placed in vials with no drug, with 5 mL of food each. The vials are placed in an incubator at 37°C, together with a bucket of water to increase humidity and prevent the food from drying out because of the heat. Survival is scored every hour.

RNA extraction. Extraction of RNA was performed using the TRIzol protocol as in [START_REF] Rio | Purification of RNA using TRIzol (TRI reagent)[END_REF], adapted to the amount of tissue used. For RNA-Seq, each sample was obtained by homogenization of 8 flies. For the qPCR performed on the daGS-UAS lines, each biological replicate corresponds to 3 flies homogenized together.

RT and qPCR. RNA was retro-transcribed using the Applied Biosystems cDNA Reverse Transcription Kit. RT-qPCR was subsequently performed using the Applied Biosystems PowerTrack SYBR Master Mix on an Applied Biosystems QuantStudio 1 Real-Time PCR System. Primers were designed on Benchling.

Primer sequences.
Adf1 Fw: ACAGCCCTTCAACGGCA
Adf1 Rw: CGGCTCGTAGAAGTATGGCT
CG4360 Fw: CAGCAGAGCACCCTTACCAA
CG4360 Rw: GGAGCGGGCATTGAGTGAT
Trl Fw: TCCTATCCACGCCAAAGGCAAA
Trl Rw: TAGCAAATGGGGCAAGTAGCAGG
Act Fw: CCATCAGCCAGCAGTCGTCTA
Act Rw: ACCAGAGCAGCAACTTCTTCG

Table B.8: Drosophila longevity genes (annotated in GenAge) which are present in the list of Smurf DEGs.
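The KD efficiency quantification reported in figures B.7 and B.8 (fold change relative to controls, normalized to Act and compared with a Wilcoxon test) follows the standard 2^-ΔΔCt logic; a minimal sketch is given below with hypothetical Ct values.

```r
# Minimal sketch of relative quantification by the 2^-ΔΔCt method,
# using Act as the reference gene (all Ct values below are hypothetical).
ct <- data.frame(
  sample = c("ctrl1", "ctrl2", "ctrl3", "ru1", "ru2", "ru3"),
  group  = rep(c("RU0", "RU50"), each = 3),
  target = c(24.1, 24.3, 23.9, 25.4, 25.1, 25.6),   # e.g. Trl
  act    = c(18.0, 18.2, 17.9, 18.1, 18.0, 18.2)    # reference gene
)
ct$dct <- ct$target - ct$act                          # ΔCt, normalized to Act
ddct   <- ct$dct - mean(ct$dct[ct$group == "RU0"])    # ΔΔCt, relative to controls
ct$fc  <- 2^(-ddct)                                   # fold change

aggregate(fc ~ group, data = ct, FUN = mean)          # ~0.5 in treated = half the expression
wilcox.test(fc ~ group, data = ct)                    # as in figures B.7 and B.8
```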
I report here the gene symbol, the log2FC in Smurfs, the annotated effect in GenAge (direction and % of change of the ML), the experiment to which the GenAge annotation refers, and its reference. Note that for the same gene different experiments could be reported.

Figure 1.1: Chronological representation of the theories, discoveries and important advancements in the field of ageing across the last century, as presented in this chapter.

Figure 1.2: The 9 hallmarks discussed by López-Otín et al. (2013). They can be spatially divided into three main categories: nuclear hallmarks, cellular hallmarks, and systemic hallmarks (involving cell-cell communication and tissue alterations).

Figure 1.3: The IIS pathway and its effects on ageing. In green the genes/treatments

In the past years, theories imputing ageing to a single mechanism have progressively been abandoned in favour of models integrating different processes. López-Otín et al. (2013) try to incorporate the hallmarks just explained in an attempt to reconstruct what chronologically happens in the organism. Their proposition is illustrated in figure 1.5. On the other hand, Lemoine (2021) proposes an interesting evolutionary view of such hallmarks, illustrated in figure 1.6.
1 Figure 1.10: Smurfs evolution over time in a population. (a): If we age a populationon stardard food and blue food dye, we will notice the first blue flies appearing as soon as the population exits the mortality plateau. From that moment, the proportion of Smurfs at each time point will increase over time. (b): two wild-type Drosophila population with different lifespan. The longevity curve was reconstructed by following the population over time, and the proportion of Smurfs was recorded at 5 or 6 time points, depending on the lifespan of the population. In both cases the proportion of Smurfs increases linearly, but we can see how w 1118 , with a shorter lifespan, presents a faster increase in Smurfs appearance. Adapted from[START_REF] Rera | Intestinal barrier dysfunction links metabolic and inflammatory markers of aging to death in Drosophila[END_REF] Figure 1 . 1 Figure 1.11: Smurf characterization, adapted from Rera et al. (2012). (a) Non-Smurfs and Smurfs from w 1118 synchronous population were sampled at 4 different time points, and their remained lifespan monitored. We can notice how the Smurfs present a constant and low T 50 independently of their age, always significantly different from the one of the non-Smurfs. The remaining lifespan of the non-Smurfs at a given age is similar to the one of the general population. (b) mRNA level (quantified by qPCR) of genes involved in inflammation and IIS pathway in Drosophila.The first row show the expression of three antimicrobial peptides (AMPs) expressed downstream of the inflammatory pathway, known to be overactivated with ageing. The AMPs is significantly lower in the non-Smurfs than in the Smurfs independently of their age, hinting that the Smurfs could be the one actually carrying the "ageing inflammation signal" in the population. On the second row, the same analysis is done on three FOXO targets, induced when the IIS is repressed, as it normally occurs with ageing. The same pattern of the inflammatory genes is observed, with a significant overexpression of the genes with Smurfness, independently of the age. Figure 1 . 13 : 113 Figure 1.13: The biological features recorded over time in the mice cohort are analyzed both by age and remaining lifespan. I am here showing the the intestinal permeability (3 hours after dye gavage) -first row-, glycemia -second row-, body weight -last row-. In the first column data points are analyzed by age, in the second by remaining lifespan. The aim is to see if a transition point can be detected. The dashed blue line shows the transition point in the data identified by multivariate adaptive regression splines (MARS) analysis. Full black line depicts the nonparametric local regression (LOWESS) on the data. Green and red lines represent linear regression analysis of longitudinal intestinal permeability values before and after the transition point respectively. If MARS analysis does not identify a transition point (as in the case of the intestinal permeability by age), linear regression of longitudinal values does not apply. $ p < 0.05 by the Mann-Whitney U test on mean intestinal permeability values before and after the transition point. p < 0.05 by the Ftest performed on linear regression slope of before and after the transition point. Intestinal permeability and body weight show a significant biphasic behavior only when analyzed by remaining lifespan. Glycemia instead, show a significant decrease in both analysis. However, the analysis by remaining lifespan identifies a sharper transition. Figure 1 . 
1 Figure 1.14: 2-phase model of ageing as proposed in[START_REF] Tricoire | A New, Discontinuous 2 Phases of Aging Model: Lessons from Drosophila melanogaster[END_REF]. Each Figure 1.15: I am here simulating how the tuning of the a and k parameter affects the Figure 2 . 1 : 21 Figure 2.1: Experimental design for RNA-Seq sampling. (a) Longevity curve of the drs-GFP line used for the experiment. Sampling points are highlighted in red. (b) Data structure, with the number of sample per condition and age. Figure 2 . 2 : 22 Figure2.2: Workflow used for the RNA-seq preprocessing. After sequencing, reads are stored in fastq files (raw data); each fastq file contains information about one of the paired reads (so for each sample we have 2 fastq files). The quality of the reads is checked using FastQC. The reads passing the control go as input for the alignment step. This step, that we performed using Hisat2, also requires as input the reference genome on which the reads will be aligned. The algorithm output files are called BAM. These files store the information about the genomic coordinates on which each read is mapping. By inputing the BAM files and the .gtf file of the genome used for the alignment (containing the information about the genes coordinates), featureCounts counts the number of reads mapping to a certain genes. This gives as output the raw counts matrix, where for each samples are annotated the number of reads mapping to a certain gene. The quality of the alignement and counting can be conveniently checked by using the online tool MultiQC. The raw counts matrix needs to undergo normalization before performing further analysis. Figure 2 . 3 : 23 Figure 2.3: PCA on the top 5000 genes of the dataset. Color indicates the Smurf status Figure 2 . 4 : 24 Figure 2.4: Unsupervised hierarchical clustering on sample-to-sample distance. Three main clusters are identified, showing a good separation amongst Smurfs and non-Smurfs but for the old sample, which appears to either correlate with one of the two groups or form a third cluster independently of the Smurf status. Figure 2 . 5 : 25 Figure2.5: Volcano plot of the differential gene expression analysis. The negative logarithm of the adjusted p-value (FDR) is plotted as a function of the log 2 FC for each gene. DEGs are highlighted in red. Genes with a log 2 FC > |2| are mapped with the their gene symbol, or Flybase identifier when the symbol could not be retrieved. Figure 2 . 7 :Figure 2 . 8 : 2728 Figure 2.7: Smurfs upregulated GO terms in fgsea analysis. Edges connects terms with overlapping genes, allowing hub identification. The name of the hubs are manually annotated. Figure 2.9: Log2 fold change of DEGs plotted on the KEGG Toll and Imd pathway (dme04624) show a general upregulation of the process in Smurfs. Figure 2 2 Figure 2.10: Pathways deregulated according to the GAGE analysis, plotted as a function of the negative log10 of the adjusted p-value. The colors manually map the processes the pathways belong to. We can notice a consistent representation of metabolism amongst the downregulated pathways. Figure 2 2 Figure 2.12: PCA on the 202 metabolites detected for Smurfs and non-Smurfs. The two conditions are separated along the first component. Figure 2 . 15 : 215 Figure 2.15: Transcriptional noise with age and Smurfness. The distribution of the relative variance is plotted for the different ages and condition. 
(a,b,c) Smurf and non-Smurfs relative standard deviation is plotted for the groups at 20 days (a), 30 days (b), 40 days (c).There is a clear shift with time of the peak of the distribution in both Smurfs and non-Smurfs, confirming an effect of age on the both categories. Interestingly, the difference in the distributions observed between Smurfs and non-Smurfs is reduced with age. (d,e) Smurfs (d) and non-Smurfs (e) are this time plotted in separated graphs, so to visually compare the effect of age within each group. *** p-value < 0.001 for Kolmogrov-Smirnov statistic. Genes belonging to the 1st quartile of the mean expression distribution are removed from the analysis. For the sake of visualization, the distribution are cut at 0.5 in the plot, but all the values of relative standard deviation are taken into account for the statistic. Figure 2 2 Figure 2.16: (a) The six hallmarks of cellular transcriptional ageing across multiple organisms, as identified in[START_REF] Frenk | Gene expression hallmarks of cellular ageing[END_REF]. (b) In the Smurf transcriptome, three of the four hallmarks are detected (downregulation of mitochondrial proteins, downregulation of ribosomes, dyresgulation of immuno system). A fourth hallmarks is partially detected (stress response), while no hits are found to affirm an overexpression of DNA damage related genes or a reduction in growth factors. On the other hand, the dysregulation in gene expression seems a time-related signal rather than a Smurf related one. Figure 2 2 Figure 2.18: Enrichment analysis in the old non-Smurfs DEGs show upregulation of immune response related categories and downregulation of oogenesis genes. Figure 2 . 19 : 219 Figure2.19: Enrichment analysis in the old non-Smurfs DEGs shows a surprisingly high number of terms. In particular, I find terms related to RNA processing and DNA regulation (amongst which chromatin regulation) which are not found when comparing the Smurfs to the non-Smurfs or the old non-Smurfs to the young non-Smurfs. The only upregulated terms concerns metabolism, and are deregulations before associated to Smurfs when compared to non-Smurfs. This suggests that, for such categories, the "dilution" of the Smurf signal with time discussed in figure 2.17 could be not strong enough to be detected by DESeq2. Each node is a GO term, and edges are connecting categories with overlapping genes. Figure 2 . 20 : 220 Figure2.20: Enrichment analysis in the old non-Smurfs DEGs shows a surprisingly high number of terms. In particular, I find terms related to RNA processing and DNA regulation (amongst which chromatin regulation) which are not found when comparing the Smurfs to the non-Smurfs or the old non-Smurfs to the young non-Smurfs. The only upregulated terms concerns metabolism, and are deregulations before associated to Smurfs when compared to non-Smurfs. This suggests that, for such categories, the "dilution" of the Smurf signal with time discussed in figure 2.17 could be not strong enough to be detected by DESeq2. Each node is a GO term, and edges are connecting categories with overlapping genes. Figure 2 2 Figure 2.21: (a) Each individual follows its physiological time, experiencing ageing at adifferent rate and entering the Smurf phase at a different age (therefore dying at a different age). Being able to follow the physiological time of the organism prior to Smurf transition might help to put a chronological order in the events occurring in the Smurfs, and identify early ageing mechanisms. 
(b) As we do not know yet how to identify pre-Smurfs, we can try to approximate the physiological time of an individual by averaging at a population level. As in the population from which my data come from 20 days corresponds to 90% survival, I can imagine that, on average, the non-Smurfs samples at this point are representative of an early phase of life. The same reasoning applies to the samples at 40 days (10%) survival, which could represents a pre-Smurf phase of life. Samples at 30 days are at the T 50 , and could therefore be a good approximation of a non-Smurfs midlife. Figure 2 . 22 : 222 Figure 2.22: Expression of the Smurfs' DEGs, plotted over time in non-Smurfs and Smurfs. Figure 2.24: (a) The effect of time on six different genes belonging to the Toll and Imd pathway: two upstream regulators (PGRP-LA and PGRP-SD), the two transcription factors (Rel and dl ), and two downstream effectors (Listericin and DptA). Each point is the gene expression of a non-Smurf sample at given age, and the trend show the inferred linear model as in equation 2.1. (b) The genes showing the best decreasing trend in the non-Smurfs over time. We find three genes of the Jonah family, which was already overrepresented in the genes downregulated in Smurfs. Figure 2 . 25 : 225 Figure 2.25: Example of two genes showing a positive trend ( β1 ∼ 0.83 for wat and ∼ 0.74 for mthl2). The linear model refers to only the grey points, representing the non-Smurfs samples. The blue points are the expression of the Smurfs at different age. As you can notice, even if the genes were detected as differentially expressed, the points are quite overlapping, suggesting that the signal in the Smurfs themselves might be weak. In particular, in this case of wat we can see how non-Smurfs and Smurfs converge at old age. Note that mthl2 belongs to the same receptor family as mth, the first gene identified as extending lifespan in fruit fly, when mutated. Figure 2 2 Figure2.26: Density plot for the correlation of all the genes with Smurfness and age. In black all the data are considered, in red (dotted lines) the data without 40 days samples. The correlation with Smurfness increases when those samples are removed, suggesting immediately that age has an effect also on genes well correlating with Smurfness. Figure 2 2 Figure 2.27: All the genes detected are here plotted, while the densities refer to specific DEGs that I will specify in each point of the caption. Red always represents upregulated genes, green downregulated. (a) DEGs in the comparison non-Smurfs -Smurfs. They map on the side of the plot, suggesting a high correlation with Smurfness as expected. Though the peak of the density is on the space of higher Smurfness correlation than age, some genes also display correlation with age. (b)DEGs in old non-Smurfs compared to young non-Smurfs. Those genes have a good correlation with both age and Smurfness, as expected from the analysis in section 2.3.2. (c) DEGs in old Smurfs compared to young Smurfs. The downregulated genes of this group are interesting, as they show a compact behavior in a zone of higher correlation with age than with Smurfness. Figure 3 . 1 : 31 Figure 3.1: Venn diagram representing the intersection between the genes annotated inGenAge as altering lifespan in the fruit fly and the Smurf DEGs. Around 24% of the genes annotated in GenAge are found as deregulated in Smurfness, highlighting the validity of the model for the identification of genes affecting longevity. 
Amongst the rest of the Smurf DEGs could hide new genes playing a role in longevity. Figure 3 . 2 : 32 Figure3.2: My data consists of the final expression profile of the flies (3). Some of the deregulated genes (circles of different colours in the figure) are TFs, upregulated (in red) or downregulated (in green). These TFs might be partially responsible for the expression profile in point 3. By extracting them from the rest of the list, we possibly go one step back in the biological regulation of our final expression landscape. This brings us to point 2. Those TFs are in turn regulated by other genes, which could be present in the group itself or not be deregulated in the final expression profile (as it can happen in case of activation by post-translational modification). To move one step further in our backwards path to upstream regulators, we can try to predict which genes are regulating the TFs in point 2. This could be done by looking at the presence of shared binding motifs for specific TFs in the promoters of the genes. The software i-cisTarget conveniently performs this job, and brings us to point 1. The orange circles represent the predicted TFs not present in the final expression profile. As putative upstream regulators of the Smurf transition, the genes in point 1 could be good targets for experimental validation. From right to left, dashed lines reconstruct the hypothetical regulation path. Figure 3 3 Figure 3.3: i-cisTarget results for the four cases considered. (1) Putative regulators of TFs up in Smurfs.(2) Putative regulators of TFs down in Smurfs. Note that the second hit is a group of GATA family genes, and comes as a compact category in the output. (3) Putative regulators of genes up in Smurfs (log 2 FC > 2). The results reflect the immune and stress response identified as upregulated signal in Smurf. (4) Putative regulators of genes downregulated in Smurfs (log 2 FC < -2). Similarly to what happened in point 3, this output reflected the results found by the enrichment analysis, as the decrease in fertility genes (Blimp-1 and ken). Interestingly, maf-S has been shown to ameliorate age-associated phenotypes when overexpressed[START_REF] Rahman | Declining signal dependence of Nrf2-MafS-regulated gene expression correlates with aging phenotypes[END_REF]. Figure 3 . 4 : 34 Figure 3.4: Genes selected for testing experimentally the effect of their alteration on Figure 3.5: GAL4 is a yeast TF binding the yeast promoter sequence called UAS. This Figure 3 . 8 : 38 Figure 3.8: Longevity curves for the AO experiment on the Aef1 KD (a) and Nf-yB KD (b). Controls are in black. Colors indicate the level of KD, as reported in the legend. (a)Aef1 KD shows a grandient decrease in the lifespan of the flies, corresponding to the gradient of the drug. This might hint for a mild effect of the gene on longevity. (b) Nf-yB KD kills the flies independently from the concentration, suggesting a toxicity of the KD itself more than an effect on ageing. Figure 3 . 10 : 310 Figure 3.10: Validation results for Adf1. (a) Longevity curve for the control population and the KD showing lifespan extension (ML RU0 = 71 days, ML RU50 = 79.5 days). We observe a rectangularization of the curve, with the treated flies leaving the mortality plateau later but decaying faster afterwards. *** p-value < 0.0001 (log-rank test). (b) In the upper part, we have the proportion of Smurfs recorded over time in controls (grey) and treated (red).Each point represents the proportion of Smurfs in an independent vial. 
The fitted linear model results in a significant coefficient for the time variable (p-value < 0.0001), proving that the proportion of Smurfs is time-dependent. The interaction coefficient between time and drug dose is also significant (p-value < 0.05), proving that the drug dose significantly affect the time-dependence of the Smurf proportion. The graph at the bottom represents the number of alive flies at the moment of the Smurf counting. Figure 3 . 11 : 311 Figure 3.11: Validation results for Trl, AO experiment. (a) I am showing here the results for the RU 50 µg/mL treatment, as amongst the four treatment is the one showing the higher effect (ML RU0 = 72.2 days, ML RU50 = 79.8 days). Contrary to what results with Adf1, the curves here are diverge from the survival plateau to the end of the curve. *** p-value < 0.0001 (log-rank test). (b) In the upper part the proportion of Smurfs recorded over time.The analysis resulted in a significant time-dependence of the proportion of Smurfs (p-value < 0.0001) and a significant effect of the drug dose on such dependence (p-value < 0.01). The lifespan extension is therefore significantly associated to a slower increase in Smurf proportion. The graph at the bottom indicates the number of alive flies at each moment of the Smurf recording. Figure 3 . 12 : 312 Figure 3.12: First validation results for CG4360. (a) I am showing here the results for the RU 10 µg/mL treatment in the WL CG4360 KD experiment (ML RU0 = 71.7 days, ML RU10 = 78.5 days). Here the two curves are initially overlapping, but diverge around day 60. *** p-value < 0.001 (log-rank test). (b) In the upper part the proportion of Smurfs recorded over time, each point representing the proportion in an independent vial. Time-dependence increase of the Smurf proportion is significance (p-value < 0.001), as well as the effect of the RU486 dose on it (p-value < 0.05), proving a significant slower increase of Smurf proportion in the treated population. The graph on the bottom of point (b) shows the number of alive flies at each Smurf recording. Figure 3 . 13 : 313 Figure 3.13: Second validation results for CG4360, WL experiment. (a) I am showing here the results for the RU 10 µg/mL treatment that for a second time showed a significant lifespan extension (ML RU0 = 68.5 days, MLRU10 = 77.0 days). The curves are here diverging before what happened in the experiment in figure 3.12, with the treated population staying longer in the survival plateau. *** p-value < 0.001. (b) In the upper part the proportion of Smurfs recorded over time, each point representing the proportion in an independent vial.The analysis results in a significant time-dependence of the proportion of Smurfs (p-value < 0.001), and a significant interaction term (p-value < 0.01) between the RU486 concentration and the time-dependence. This proves the slower increase in Smurf proportion in the treated flies. The graph at the bottom of point (b) shows the number of alive flies at each Smurf recording time point. Figure 3 . 15 : 315 Figure 3.15: Survival curves for all the concentrations of RU tested in males for Trl and CG4360.The type of experiment (AO or WL) is specified below each graph. Note that the RU200 in the CG4360 WL experiment is missing, due to a problem in the growth of the flies during the experiment preparation. None of the population show lifespan increase but for the Trl KD in the WL experiment. The extension is however low, and even if significant it might not be relevant. 
Figure 3.16: (a) Trl KD driven by Ti GS in the gut, AO experiment. Higher RU486 concentrations led to lifespan extension, with the best effect (+8%) shown by RU µg/mL. (b) Trl KD driven by 5961 GS in the intestinal stem cells and enteroendocrine cells, AO experiment. Although the RU 100 µg/mL effect is significant, it might not be relevant and is mostly caused by the initial difference in the two curves.

Figure 3.19 illustrates the results for the Trl KD, while figure 3.20 shows the results for CG4360. In figure 3.21, I report the mean lifespan and statistical analysis for each of the H2O2 and RU concentrations.

Figure 3.22: (a) Trl KD using an independent line (referred to as line 2 in the text; it is the Bloomington 67265). The plotted longevity curves show how the KD is toxic across the different RU concentrations. (b) Lifespan extension for Adf1 KD with an independent line (Bloomington 31500).

Discussion

Throughout this manuscript, I tried to guide the reader through the work I have done during my PhD. As explained in the Introduction, my research project focused on validating a new age-related phenotype (Smurf) in Drosophila as a model for following biological age in vivo.

Figure 4.1: Proposed model for transcriptomic behaviour with Smurfness and ageing.

Figure 4.2: A reporter mimicking the expression of a pre-Smurf gene is a non-invasive way to identify flies before the transition. A pre-Smurf gene (in red) is putatively changing its expression gradually before the Smurf transition, then experiencing an exacerbation of such change once Smurf. The reporter will mimic this action, being activated by the same promoter as the pre-Smurf gene. In this case, I simulate a pre-Smurf gene increasing its expression with time.

Figure 4.3: The behaviour of the BiT genes retrieved in Drosophila does not show a distribution skewed towards age or Smurf correlation in our dataset.

Figure 4.4: STRING protein-protein interaction network result for Adf1. We can notice the presence of Trl, Aef1 and CG4360.

Ref(2)P. This gene is upregulated in Smurfs, upregulated in old non-Smurfs, with a significant trend in the non-Smurfs over time. Ref(2)P, involved in autophagy, is also upregulated in the Smurf proteome. Its KD with daGS has been shown to increase lifespan (unpublished data by the laboratory, figure 4.5). Recently, its KD in neurons has been shown to increase lifespan in Drosophila [START_REF] Hurley | Inhibition of Ref(2)P, the Drosophila homologue of the p62/SQSTM1 gene, increases lifespan and leads to a decline in motor function[END_REF].

Figure 4.5: ref(2)P KD through daGS increases lifespan (unpublished data by the laboratory).

Figure B.4: The fold changes obtained by edgeR and DESeq2 for the 2377 overlapping genes found as DEGs by both analyses are plotted against each other. The correlation between the two is 0.99 (Pearson).

Figure B.5: Autophagy KEGG pathway in Drosophila (dme04136), identified from GAGE analysis. Most of the genes present an almost null fold change in Smurfs (in grey). The log2FC plotted refers to the DESeq2 estimation (independently of the significance of the deregulation of the single gene).

Figure B.6: Endocytosis KEGG pathway in Drosophila (dme04144), identified from GAGE analysis. Smurfs present positive regulation of genes involved in endosomal function.

Figure B.7: RU50 µg/mL Trl samples fold change, relative to controls.
n = 3 for controls and treated. Half of the initial expression is present in the treated samples, but the difference is not significant by Wilcoxon test (p-value = 0.1). This is probably due to the low number of samples, and the quantification will have to be repeated with a higher sample size.

Figure B.8: RU10 µg/mL CG4360 samples fold change, relative to controls. n = 3 for controls and treated. Less than half of the initial expression is present in the treated samples, but the difference is not significant by Wilcoxon test (p-value = 0.1). This might be due to the low number of samples used, and the quantification will have to be repeated with a higher sample size.

Figure B.10: Data collected for the first screening of longevity experiments. Results are organized as in figure 3.7. The % effect refers in each case to the controls. NA indicates a missing value, due to a missing population. This is due to developmental toxicity of the gene alteration, not allowing the flies to grow.

Figure 1.8: Three possible hypotheses regarding the different ageing rates of individuals, as in the model of biological age of [START_REF] Zhang | Extended Twilight among Isogenic C. elegans Causes a Disproportionate Scaling between Lifespan and Health[END_REF]. In a population we can distinguish three kinds of individuals: short-lived (blue), average-lived (red) and long-lived (yellow). Colors are the same as in figure 1.7. In the starting point hypothesis, the three kinds of individuals are distinguishable from birth, with a prognosis reflecting the age of death. In the rate of ageing hypothesis, the prognosis at birth is instead the same for the three kinds of individuals; however, soon after birth their ageing rates diverge, and they die off at different moments. The premature death hypothesis proposes instead a situation in which individuals not only have the same prognosis at birth, but also age at the same rate. Death is here a stochastic event that randomly hits individuals in a population. (Panels: Starting Point Hypothesis, Rate of Ageing Hypothesis, Premature Death Hypothesis; axes: prognosis at birth vs age.)

in order to target this problem late in life, from a clinical point of view. It consists of 34 parameters linked to age-related conditions (for instance cataracts, high cholesterol, need of assistance) that can predict mortality independently of age. However, it is human-restricted and impossible to apply to other organisms, especially in research.

Smurfs present downregulation of enzymes involved in fatty acid metabolism; interestingly, although we cannot observe a difference in the level of acetyl-CoA, we observe a reduction in carnitine levels, which plays an important role in the fatty acid degradation pathway, and in palmitic acid. Significance refers to DEG analysis for the genes and Wilcoxon test for the metabolites. *P < 0.05; **P < 0.001; ***P < 0.0001

it comes in a more general context of downregulation. The accumulation of succinate could reinforce the hypothesis of a dysfunction, even if the downstream metabolite of the enzyme (fumarate) is not significantly changing in Smurfs. I also looked at the NAD/NADH and ATP/ADP ratios (results not shown). The first is an indicator of the cell's reduction power and the correct functioning of the Krebs cycle, which is the leading NADH producer. The second is an indicator of the energy state and the correct functioning of the electron transport chain that produces ATP.
High ADP/ATP values indicate a low energy state and activate, for instance, the kinase AMPK (see figure

Figure 2.13: (a) Ldh overexpression and lactate accumulation suggest anaerobic metabolism in Smurfs. (b) Light downregulation of two subunits of the enzyme succinate dehydrogenase and succinate accumulation in Smurfs might hint at a dysfunction of Complex II. (c) (Panel legend: condition non-Smurf (NS) and Smurf (S); age 20, 30, 40 days; Ldh expression, lactate, succinate and other metabolite concentrations.)

B.6 and B.7 (Appendix B.1).

Upregulated in Smurfs (pathway, gene counts):
dme01100 Metabolic pathways: 21
dme04144 Endocytosis: 9
dme04624 Toll and Imd signaling pathway: 9
dme04013 MAPK signaling pathway - fly: 6
dme04141 Protein processing in endoplasmic reticulum: 6

Downregulated in Smurfs (pathway, gene counts):
dme01100 Metabolic pathways: 20
dme00020 Citrate cycle (TCA cycle): 4
dme01200 Carbon metabolism: 4
dme04142 Lysosome: 4
dme00010 Glycolysis / Gluconeogenesis: 3

Figure 2.23: First 5 pathways for numbers of genes mapping, respectively for the genes

Table 2.1: Pathways selected from KEGG, putatively changing with age and not influenced by Smurfness. In total we have 6 pathways for the group "Replication and repair" and three pathways for the group "Transcription".

.1 and table B.8 in Appendix B.2, for more details concerning the longevity effect and the Smurf deregulation.)

longevity / Smurf genes / annotated genes: 143, 58, 2991, 201, 3049

Figure 3.19: Oxidative stress longevity assay on flies carrying Trl KD and controls. The arrow indicates the starting time of the assay. H2O2 0% flies are not dying during the experimental time, proving that the deaths that we observe in the other conditions are indeed due to the H2O2 presence. Flies exposed to 2% start to die the same day of the assay, with little or no difference amongst the RU concentrations (see 3.21). The intermediate concentrations show instead interesting results, with RU 100 µg/mL extending lifespan for the 0.5% concentration and a more general resistance effect noticed in the 1% concentration. (Axes: % survival vs time in days; legend: H2O2 0%, 0.5%, 1%, 2%; RU0, RU10, RU50, RU100, RU200.)

Figure 3.20: Oxidative stress longevity assay on flies carrying CG4360 KD and controls. The arrow indicates the starting time of the assay. H2O2 0% shows only a couple of sporadic deaths, proving that the deaths that we observe in the other conditions are indeed due to the H2O2 presence. Flies exposed to 2% start to die the same day of the assay, with little or no difference amongst the RU concentrations (see 3.21). Concerning the intermediate concentrations, 0.5% shows a potentially interesting effect, but unfortunately the controls show an initial drop that makes the interpretation of the results less reliable. On the other hand, the 1% shows an extension that seems to follow the RU gradient (from 10 µg/mL to 200 µg/mL), as also reported in figure 3.21.

Figure 3.21: Mean lifespan (ML), % effect in lifespan change and log-rank p-value for the different H2O2 and RU concentrations. H2O2 0.5% has basically no significant effect for
Figure B.3: tSNE (perplexity = 10) on the samples. Colour indicates Smurfness and symbols the age (20, 30, 40 days). Similarly to the PCA results in figure 2.3, the Smurfs and non-Smurfs form two separate groups. In the non-Smurf group, we can notice the samples clearly distributing by age, while the Smurfs appear more mixed. The same outliers as in the PCA analysis are identified.

Table B.1: GO BP categories significantly upregulated when performing the analysis on the list of DEGs from the Smurf/non-Smurf comparison. ** adjusted p-value < 0.01; * adjusted p-value < 0.01

Table B.2: GO BP categories significantly downregulated when performing the analysis on the list of DEGs from the Smurf/non-Smurf comparison. ** adjusted p-value < 0.01; * adjusted p-value < 0.01

Table B.3: Number of DEGs detected in the different over-time comparisons (DESeq2). Non-Smurfs: 59 (30 vs 20 days), 158 (40 vs 30 days), 510 (40 vs 20 days). Smurfs: 4 (30 vs 20 days), 915 (40 vs 30 days), 2320 (40 vs 20 days).

Table B.4: GO BP categories significantly enriched when performing the analysis on the list of DEGs from non-Smurfs at 40 days compared to non-Smurfs at 20 days. ** adjusted p-value < 0.01; * adjusted p-value < 0.01

Table B.5: GO BP categories significantly enriched when performing the analysis on the list of DEGs from Smurfs at 40 days compared to Smurfs at 20 days. * adjusted p-value < 0.01

Table B.6: Genes presenting a significant positive slope in linear regression over the three time points in non-Smurfs and discussed in 2.4. *p-value < 0.05; **p-value < 0.01 (F-test). All these genes are significantly upregulated in Smurfs.

Table B.7: Genes presenting a significant negative slope in linear regression over the three time points in non-Smurfs and discussed in 2.4. *p-value < 0.05; **p-value < 0.01 (F-test). All these genes are significantly downregulated in Smurfs.

B.2 Chapter 3

Table B.8: Drosophila longevity genes (annotated in GenAge) which are present in the list of Smurf DEGs. I report here the gene symbol, the log2FC in Smurfs, the annotated effect in GenAge (direction and % of change on the ML), the experiment to which the GenAge annotation refers and its reference. Note that for the same gene different experiments could be reported.

Table B.9: TFs differentially expressed in Smurfs. I report the Flybase ID and gene symbol, together with the log2FC (Smurf/non-Smurf) and the level of significance (*** p-value < 0.001; ** p-value < 0.01; * p-value < 0.05).

Table B.10: Smurf DEGs amongst the BiT genes. The fold change (log2FC) refers to the DESeq2 analysis.
The 3049 genes are sufficient to obtain an almost perfect separation between Smurfs and non-Smurfs, with the same three outliers identified in previous analyses. Those can be visually recognized as presenting a mixed pattern in gene expression. The rest of the Smurfs and non-Smurfs, independently of age, show a compact expression pattern. Enrichment analysis will be used to investigate the genes overexpressed and downregulated in Smurfs.

Summary of the results

In this chapter, I have discussed the bioinformatic analysis performed in order to answer the following biological questions:
1. Can we detect a biphasic behaviour in the gene expression of Smurfs and non-Smurfs?
2. How is the gene expression of the Smurfs and non-Smurfs affected by time?

1. Biphasic behaviour of Smurfs. A Smurf is more similar, transcriptome-wide, to another Smurf than to an age-matched non-Smurf. PCA analysis (figure 2.3) shows a separation along the first component (38% of variance) of Smurfs and non-Smurfs, and along the second (15% of variance) a separation along age groups, especially for the non-Smurfs. Differential gene expression analysis identified almost 20% of genes deregulated with a p-value cut-off of 5%. Expression of these genes defines the Smurf status in opposition to the non-Smurf (figure 2.6), suggesting the presence of a core Smurf signal. Enrichment analysis on the deregulated genes found upregulation of innate immunity and UPR genes, as well as genes involved in glutathione metabolism, suggesting a condition triggering a general stress response in the Smurfs. On the other hand, metabolic categories dominate the downregulated genes, especially mitochondria-related ones.
Such results also found confirmation in the preliminary analysis conducted on metabolic data on Smurfs (2.2.7). In addition, Smurfs present a downregulation of genes involved in oogenesis, confirming a drop in fertility that has already been experimentally demonstrated [START_REF] Rera | The Smurf transition: new insights on ageing from end-of-life studies in animal models[END_REF]. The Smurf signature recapitulates more than half of the transcriptional ageing hallmarks (2.16). When I compared the young non-Smurfs to the old non-Smurfs through the same analysis, in order to see whether the effect of age overlapped the effect of Smurfness, I could only detect the increase in inflammation and the decrease in expression of genes involved in reproduction. Those changes account for only one hallmark of transcriptional ageing. This proves that the deregulations are found in Smurfs independently of their age as Smurfs.

CG4360. When repeating the experiment for CG4360, I obtained an unexpected result. Indeed, I could not detect a clear lifespan extension in the case of the AO, while the WL showed a good effect at the RU10 µg/mL concentration (+9.5%), mirrored by a slower increase in Smurf proportion. Results are reported in figure 3.12. In the previous experiment (see figure 3.7 and figure B.10), the lifespan extension in the WL was moderate, and looked not relevant. In order to clarify such results, I performed the experiment a third time. This additional validation confirmed the results obtained in the second experiment, with a lifespan extension for the RU10 µg/mL in the WL (+12.4%, figure 3.13). No relevant lifespan extension was detectable for RU 10 µg/mL (figure 3.14) in the AO experiment. Therefore, two independent experiments did not confirm the lifespan extension for the CG4360 KD in the AO set-up, while confirming an effect for the same dose in the WL set-up. Interestingly, for the AO experiment (see figure 3.14) we can detect a significant time-dependent increase of the Smurf proportion, but we cannot detect a significant difference between the controls and the treated group. This proves that two populations ageing at the same rate present the same increase in Smurf proportion over time. The quantification of the KD for the RU 10 µg/mL concentration is reported in figure B.8 (Appendix B.2).

for the RU 10 µg/mL treatment in the AO CG4360 KD experiment (ML RU0 = 71.6 days, ML RU10 = 75.6 days), as it is the one that initially gave lifespan extension. However, the log-rank test is not significant (p-value > 0.05). Therefore, the results of the first screening for the AO KD of CG4360 were not confirmed in two independent experiments. (b) In the upper part, the proportion of Smurfs recorded over time, each point representing the proportion in an independent vial. The analysis proved the time-dependence of the Smurf proportion (p-value < 0.001), but rejected the hypothesis of an interaction between the drug dose and the time-dependence (p-value > 0.05). Therefore, in a condition in which we cannot detect a significant lifespan extension, we cannot detect a significant difference in the Smurf proportion increase. This is an additional proof of the relation between the Smurf increase rate and the ageing of the population. In the graph at the bottom of point (b), I report the number of alive flies at each time point of the Smurf recording.

by focusing on the time window where it is plastic, and can be modulated. This ability can undoubtedly help us move forward in the understanding of ageing.
To conclude, I do not believe that any model is entirely wrong or right. Instead, I reckon that the continuous-ageing and the biphasic-ageing views should result in a synergistic combination rather than in an unfruitful competition.

Appendix A
Materials and Methods

A.1 Bioinformatics

Unless stated otherwise, analyses have been performed in R 3.5.3. Mentioned packages are therefore to be interpreted as R packages. When not stated otherwise, plots have been generated using ggplot2 3.3.5.

Exploratory analysis. PCA was performed using package DESeq2 1.22.2. tSNE was performed with package Rtsne 0.15. Sample-to-sample clustering was performed using the function dist from base R and package pheatmap 1.0.12.

Differential gene expression analysis. The main analysis has been performed with DESeq2 1.22.2. Validation analysis with edgeR 3.24.3.

Enrichment analysis. Enrichment analysis was performed with the Bioconductor package clusterProfiler 3.10.1, which calls fgsea 1.8.0. The analysis was run with the following parameters: nPerm = 15000, minGSSize = 10, maxGSSize = 600. The plot was generated with the function emapplot from the same package.

Pathway analysis. Performed with the R package gage 2.32.
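The Smurf-proportion analyses described in chapter 3 rely on a linear model with a time and drug-dose interaction. The thesis fits these models in R; the snippet below is only a minimal illustrative sketch of the same idea in Python with statsmodels, where the column names (days, dose_ug_ml, smurf_prop) and the numbers are invented placeholders, not data from the experiments.

```python
# Illustrative sketch only: mirrors the thesis' R linear models in Python.
# Column names and values are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per vial and sampling day: proportion of Smurfs among alive flies.
data = pd.DataFrame({
    "days":       [20, 20, 30, 30, 40, 40, 20, 20, 30, 30, 40, 40],
    "dose_ug_ml": [0,  0,  0,  0,  0,  0,  50, 50, 50, 50, 50, 50],
    "smurf_prop": [0.02, 0.03, 0.08, 0.10, 0.22, 0.25,
                   0.02, 0.02, 0.05, 0.07, 0.15, 0.16],
})

# Linear model with a time x dose interaction: a significant `days` term means
# the Smurf proportion is time-dependent; a significant `days:dose_ug_ml` term
# means the drug dose changes that time-dependence.
model = smf.ols("smurf_prop ~ days * dose_ug_ml", data=data).fit()
print(model.summary())
print(model.pvalues[["days", "days:dose_ug_ml"]])
```

In this reading, a significant positive coefficient on days corresponds to the time-dependence of the Smurf proportion, while a significant negative days:dose_ug_ml term corresponds to the slower increase reported for the treated populations.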
04096866
en
[ "info", "math", "nlin", "sdv" ]
2024/03/04 16:41:18
2022
https://inria.hal.science/hal-04096866/file/abs-ccs2022.pdf
Leonardo Trujillo email: [email protected] Paul Banse email: [email protected] Guillaume Beslon email: [email protected]

Inversion mutations attenuate complexity catastrophe in NK fitness landscapes

Context

In Kauffman's NK model of rugged fitness landscapes [START_REF] Kauffman | The origins of order: Self-organization and selection in evolution[END_REF], besides the genome length N ∈ N, the integer K, with 0 ≤ K ≤ N - 1, describes the epistatic interactions between loci and the contribution of each component to the total fitness, which depends on its own value as well as the values of K other loci. The total fitness f ∈ [0, 1) for a binary sequence x ∈ {0, 1}^N is then defined as

f(x) := \frac{1}{N} \sum_{i=1}^{N} f_i(x_i, x_{i_1}, \ldots, x_{i_K}),

where the fitness per locus f_i depends on the state of locus x_i and of K other loci x_{i_1}, ..., x_{i_K}. It is usually assumed that the f_i's are given by independent and identically distributed random variables sampled from a given uniform probability distribution. Two popular schemes of interaction between loci are the adjacent and random neighborhood models [1, p. 55]. On the other hand, evolution is simulated as adaptive walks, such that a starting genotype x varies through point mutations resulting in a mutated sequence y. Then, if f(y) > f(x), the mutated genotype is selected, otherwise other mutations on x are tested until the fitness increases. The evolutionary dynamics stops when a local maximum of fitness is reached. As can be appreciated in figure 1, the average of local fitness maxima f_K is not monotonic for different values of K. In particular, for point mutations (red points in fig. 1), Kauffman showed that the expected value of local fitness maxima is monotonically non-decreasing up to a certain value of K and then decreases such that lim_{K→N-1} f_K = 1/2 [START_REF] Kauffman | The origins of order: Self-organization and selection in evolution[END_REF]. This decrease of fitness values toward 0.5 is the well-known "complexity catastrophe" [1, p. 52]. That is, as K increases, the number of possible values of f_i is greater, but the number of conflicting constraints due to epistasis can also increase, implying that the accessible local maxima are lower and thus the expected fitness peaks decrease in height.

Results

The adaptive dynamics in the NK model have been studied mainly with point mutations. Nevertheless, in molecular evolution there are also structural mutations changing more than one nucleotide, for instance duplications, inversions and other chromosomal rearrangements. These different types of structural variations are completely overlooked in conventional analyses of the NK model. We show that chromosomal inversions reveal interesting features of adaptive dynamics. In particular, we communicate results from adaptive-walk simulations and show that the complexity catastrophe can be attenuated via chromosome inversions. This is sketched in figure 1 for a genome of size N = 100 and inversion mutations with sizes ranging from 1% (a single locus, i.e. inversions are like point mutations) up to 100% of the genome size, for patterns of adjacent and random (inset plot in fig. 1) epistatic interaction between loci. For the simplest case K = 0, with no epistatic interaction between neighboring loci, we can verify in fig. 1 that the average fitness values for all inversion size percentages are equal, f_{K=0} ≈ 0.667. In this case, the landscape is smooth with only a single peak. This result also agrees with the analytical result f_{K=0} = 2/3 [1, p. 55].
Then, for K > 0, the average fitness f_K increases with K until reaching a maximum value of fitness. After this maximum, the fitness values decrease as K increases. When K → 99, the mean fitness converges to the same value, regardless of the type of epistatic interaction neighborhood. Now, what is new is that for inversions with sizes greater than 1%, the average fitness values are higher than those for mutations of a single locus. Indeed, for K → N - 1, the average fitness trend is very different from that of point mutations [START_REF] Kauffman | The origins of order: Self-organization and selection in evolution[END_REF][START_REF] Solow | Understanding and attenuating the complexity catastrophe in Kauffman's NK model of genome evolution[END_REF]. The evolutionary process reaches higher expected fitness values f_{K=99} > 1/2, for both random and adjacent epistatic interactions, thus attenuating the complexity catastrophe. Finally, let us note, as a curious fact, that when the size of the inversion mutations is equal to 100% (blue points in fig. 1), then f^I_{K=99} = 0.610. We conjecture that for inversion mutations, when K → N - 1, the complexity catastrophe must converge to the reciprocal of the golden ratio 1/ϕ = 2/(1 + √5) ≈ 0.618...
04096902
en
[ "math" ]
2024/03/04 16:41:18
2023
https://normandie-univ.hal.science/hal-04096902/file/Condioning%20on%20large%20widht.pdf
Aymen Bouaziz email: [email protected]

LOCAL LIMITS OF GALTON-WATSON TREES CONDITIONED ON LARGE WIDTH

Keywords: Galton-Watson tree, random trees, local limits, width. Mathematics Subject Classification: 60J80, 60B10, 05C05.

We study the local convergence of critical Galton-Watson trees under various conditionings. We give a sufficient condition, which serves to cover all the previous cases, for the convergence in distribution of a conditioned Galton-Watson tree to Kesten's tree. We also propose another proof to give the limit in distribution of a critical Galton-Watson tree, with bounded support, conditioned on having a large width.

Introduction

In [START_REF] Kesten | Subdiffusive behavior of random walk on a random cluster[END_REF], Kesten proved that a critical or sub-critical Galton-Watson (GW) tree conditioned on reaching at least height h converges in distribution (for the local topology on trees) as h goes to infinity toward the so-called size-biased tree (that we call here Kesten's tree and whose distribution is described in Section 2.3). Since then, several different conditionings have been studied, in particular the conditioning on extinction after a large time, the conditioning on a large total population size and the conditioning on a large number of leaves. In [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF], Abraham and Delmas provided a criterion for local convergence of finite random trees to Kesten's tree, and then gave short and elementary proofs of essentially all previous related results and some new ones. Later, in [START_REF] Abraham | Local limits of Galton-Watson trees conditioned of the number of the protected nodes[END_REF], we proved that a critical Galton-Watson tree conditioned on having a large number of marked vertices converges in distribution to Kesten's tree, and we then applied this result to give the limit in distribution of a critical Galton-Watson tree conditioned on having a large number of protected nodes.

Let L(t) be the width of the tree t. Remark that this functional L is clearly monotone in the sense of [START_REF] He | Local Convergence of Critical Random Trees and Continuous-State Branching Processes[END_REF]; therefore, using Theorem 2.1 of [START_REF] He | Local Convergence of Critical Random Trees and Continuous-State Branching Processes[END_REF], we immediately get that a critical GW tree τ conditioned on {L(τ) > n} converges in distribution toward Kesten's tree as n goes to infinity. In this paper, we propose another proof by generalizing somewhat the monotonicity property. Notice that the functional L does not satisfy the additivity property of [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF]. Thus, considering the conditioning event {L(τ) = n} is still an open problem. We say that a probability distribution p = (p(0), p(1), . . .) on non-negative integers is bounded if the set {n ∈ N, p(n) > 0} is bounded. For any critical and bounded distribution p satisfying assumption (1), Xin He [START_REF] He | Local Convergence of Critical Random Trees and Continuous-State Branching Processes[END_REF] provides a positive answer to this question; more precisely, he proves that a critical GW tree τ conditioned on {L(τ) = n} converges in distribution toward Kesten's tree as n goes to infinity. Providing another proof of this result is the main objective of this paper.
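As a purely illustrative aside (not part of the paper's argument), the conditioning discussed above is easy to explore numerically: the width L(τ) only depends on the generation sizes Z_k(τ), so one can sample a critical GW tree generation by generation and keep only the samples with L(τ) ≥ n. The offspring law, the population cap and all function names in the sketch below are arbitrary choices made for the example.

```python
# Illustrative sketch only: sample the width L(t) = sup_k Z_k(t) of a critical
# Galton-Watson tree and draw from the conditioned event {L(tau) >= n} by
# rejection. The bounded offspring law and the safety cap are arbitrary.
import random

P = [(0, 0.25), (1, 0.50), (2, 0.25)]   # critical: mean offspring = 1

def sample_offspring(rng):
    u, acc = rng.random(), 0.0
    for k, pk in P:
        acc += pk
        if u < acc:
            return k
    return P[-1][0]

def gw_width(rng, cap=10**6):
    """Width of one GW tree, computed from the generation sizes Z_k."""
    z, width, total = 1, 1, 1            # Z_0 = 1 (the root)
    while z > 0:
        z = sum(sample_offspring(rng) for _ in range(z))   # Z_{k+1} given Z_k
        width, total = max(width, z), total + z
        if total > cap:                  # essentially never hit at criticality
            return None
    return width

def conditioned_width(n, seed=0):
    """Rejection sampling from the event {L(tau) >= n}."""
    rng = random.Random(seed)
    while True:
        w = gw_width(rng)
        if w is not None and w >= n:
            return w

if __name__ == "__main__":
    print(conditioned_width(20))
```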
On the technical level, our proofs are extremely short and elementary, thanks in particular to the convenient framework in [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF]. The paper is then organized as follows: In section 2, we recall briefly the framework we use for discrete trees and define the Galton-Watson tree τ and Kesten's tree τ * associated with offspring distribution p. In section 3, we state and prove our general result on local convergence of conditioned critical and sub-critical Galton-Watson trees and we apply it to the conditioning of large width, in the critical case, in Corollary 3.6, which is one of our main objective of this paper. In section 4, we study the conditioning on having large width of a critical Galton-Watson tree, with bounded support, where we give an elementary and short proof. Finally, we generalized this result by considering a type of conditioning never treated. Technical background on GW trees 2.1. The set of discrete trees. We denote by N = {0, 1, 2, . . .} the set of non-negative integers and by N * = {1, 2, . . .} the set of positive integers. We recall Neveu's formalism [START_REF] Neveu | Arbres et processus de Galton-Watson[END_REF] for ordered rooted trees. Let U = n≥0 (N * ) n be the set of finite sequences of positive integers with the convention (N * ) 0 = {∅}. For u ∈ U, its length or generation |u| ∈ N is defined by u ∈ (N * ) |u| . If u and v are two sequences of U, we denote by uv the concatenation of the two sequences, with the convention that uv = u if v = ∅ and uv = v if u = ∅. The set of ancestors of u is the set An(u) = {v ∈ U; ∃w ∈ U such that u = vw}. The most recent common ancestor of a subset s of U, denoted by M (s), is the unique element u of ∩ u∈s An(u) with maximal length |u|. For two distinct elements u and v of U, we denote by u < v the lexicographic order on U i.e. u < v if u ∈ An(v) and u = v or if u = wiu and v = wjv for some i, j ∈ N * with i < j. We write u ≤ v if u = v or u < v. A tree t is a subset of U that satisfies: • ∅ ∈ t. • If u ∈ t, then An(u) ⊂ t. • For every u ∈ t, there exists k u (t) ∈ N such that, for every i ∈ N * , ui ∈ t iff 1 ≤ i ≤ k u (t). The vertex ∅ is called the root of t. The integer k u (t) represents the number of offsprings of the vertex u ∈ t, and we call it the outdegree of the node u in the tree t. The maximal outdegree M (t) of a tree t is defined by M (t) = sup{k u (t), u ∈ t} The set of children of a vertex u ∈ t is given by: C u (t) = {ui; 1 ≤ i ≤ k u (t)}. By convention, we set k u (t) = -1 if u ∈ t. A vertex u ∈ t is called a leaf if k u (t) = 0. We denote by L 0 (t) the set of leaves of t. A vertex u ∈ t is called a protected node if C u (t) = ∅ and C u (t) L 0 (t) = ∅, that is u is not a leaf and none of its children is a leaf. For a tree t, we denote by Z(t) = Card({u ∈ t, |u| = n}) the size of the n-th generation of t, the height of t is defined by H(t) = sup{|u|, u ∈ t} and can be infinite. We define the width L(t) of t as: L(t) = sup k≥0 Z k (t)• We denote by T the set of trees, by T 0 = {t ∈ T; Card(t) < +∞} the subset of finite trees and by T 1 = {t ∈ T; lim n-→+∞ |M ({u ∈ t; |u| = n})| = +∞} the subset of trees with a unique infinite spine. We say that a sequence of trees (t n , n ∈ N) converges locally to a tree t if and only if lim n→∞ k u (t n ) = k u (t) for all u ∈ U. Let (T n , n ∈ N) and T be T-valued random variables. 
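As a concrete illustration of the formalism above (an arbitrary example, not taken from the paper), a finite tree can be encoded as a map from Neveu labels u to out-degrees k_u(t); the generation sizes Z_n(t), the height H(t), the width L(t), the maximal outdegree M(t) and the set of leaves L_0(t) are then immediate to compute, as in the following Python sketch.

from collections import Counter

# A finite tree in Neveu's formalism: each vertex u (a tuple of positive
# integers, () being the root) is mapped to its out-degree k_u(t).
t = {
    (): 2,                      # the root has children 1 and 2
    (1,): 0,                    # a leaf
    (2,): 3,                    # children 21, 22, 23
    (2, 1): 0, (2, 2): 0, (2, 3): 0,
}

def generation_sizes(t):
    z = Counter(len(u) for u in t)          # Z_n(t) = #{u in t : |u| = n}
    return [z[n] for n in range(max(z) + 1)]

def height(t):
    return max(len(u) for u in t)           # H(t)

def width(t):
    return max(generation_sizes(t))         # L(t) = sup_n Z_n(t)

def max_outdegree(t):
    return max(t.values())                  # M(t)

def leaves(t):
    return [u for u, k in t.items() if k == 0]   # L_0(t)

print(generation_sizes(t), height(t), width(t), max_outdegree(t))
# prints [1, 2, 3] 2 3 3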
We denote by dist(T ) the distribution of the random variable T and write lim n-→+∞ dist(T n ) = dist(T ) for the convergence in distribution of the sequence (T n , n ∈ N) to T with respect to the local topology. If t, t ∈ T and x ∈ L 0 (t) we denote by t x t = {u ∈ t} ∪ {xv; v ∈ t } the tree obtained by grafting the tree t on the leaf x of the tree t. For every t ∈ T and every x ∈ L 0 (t), we shall consider the set of trees obtained by grafting a tree on the leaf x of t: T(t, x) = {t x t ; t ∈ T}. For convergence in distribution in the set T 0 ∪ T 1 , we recall the following key characterization in [START_REF] Abraham | An introduction to Galton-Watson trees and their locals limits[END_REF]. Lemma 2.1. Let (T n ) n∈N and T be random trees taking values in the set T 0 ∪ T 1 . Then the sequence (T n ) n∈N converges in distribution to T if and only if: (1) for every finite tree t ∈ T 0 , lim n-→+∞ P(T n = t) = P(T = t); (2) for every finite tree t ∈ T 0 and every leaf x of t, lim inf n-→+∞ P(T n ∈ T(t, x)) ≥ P(T ∈ T(t, x)). Galton Watson trees. Let p = (p(n), n ∈ N) be a probability distribution on N. We assume that [START_REF] Abraham | Local limits of Galton-Watson trees conditioned of the number of the protected nodes[END_REF] p(0) > 0, p(0) + p(1) < 1, and µ := +∞ n=0 np(n) < +∞. A T-valued random variable τ is a GW tree with offspring distribution p if the distribution of k ∅ (τ ) is p and it enjoys the branching property: for n ∈ N * , conditionally on {k ∅ (τ ) = n}, the subtrees (S 1 (τ ), . . . , S n (τ )) are independent and distributed as the original tree τ . The GW tree and the offspring distribution are called critical (resp. sub-critical, super-critical) if µ = 1 (resp. µ < 1, µ > 1). In the critical and sub-critical case, we have that a.s τ belongs to T 0 . Kesten's tree. Let p be an offspring distribution satisfying Assumption (1) with µ ≤ 1 (i.e. the associated GW process is critical or sub-critical). We denote by p * = (p * (n) = np(n)/µ, n ∈ N) the corresponding sizebiased distribution. We define an infinite random tree τ * (the size-biased tree that we call Kesten's tree in this paper) whose distribution is described as follows: There exists a unique infinite sequence (v k , k ∈ N * ) of positive integers such that, for every h ∈ N, v 1 • • • v h ∈ τ * , with the convention that v 1 • • • v h = ∅ if h = 0. The joint distribution of (v k , k ∈ N * ) and τ * is determined recursively as follows. For each h ∈ N, conditionally given (v 1 , . . . , v h ) and {u ∈ τ * ; |u| ≤ h} the tree τ * up to level h, we have: This yields a short and an elementary proof of theorem 4.1 in [START_REF] He | Conditioning Galton-Watson trees on large maximal out-degree[END_REF]. In general, assume that p is critical, A satisfies the identity property in [START_REF] Abraham | An introduction to Galton-Watson trees and their locals limits[END_REF] and P(A(τ ) = n) > 0 for any n, Then by theorem 3.1, as n -→ +∞, dist(τ |A(τ ) = n) -→ dist(τ * )• and as n -→ +∞, dist(τ |A(τ ) ≥ n) -→ dist(τ * )• Remark 3.4. Assume that p is critical. 
If A satisfies the monotonicity property in [START_REF] He | Local Convergence of Critical Random Trees and Continuous-State Branching Processes[END_REF], which is equivalent to A(t ∗x t′) ≥ A(t′) for all t′ ∈ T, and if P(A(τ ) ≥ n) > 0 for every integer n (consider n 0 = +∞), then, using Theorem 3.1, we have, as n → +∞, dist(τ |A(τ ) ≥ n) → dist(τ * ). Thus, the monotonicity property is a particular case of our result, which provides another short proof of Theorem 2.1 in [START_REF] He | Local Convergence of Critical Random Trees and Continuous-State Branching Processes[END_REF]. Remark 3.5. In conclusion, Theorem 3.1 covers all the different cases (identity, additivity and monotonicity), which gives a simple and short proof of Theorems 2.2.1 and 2.2.4 in [START_REF] Abraham | An introduction to Galton-Watson trees and their locals limits[END_REF]. As a direct application, we can recover several specific conditionings in the critical case: (1) Conditioning on extinction after a large time (A = H), see [START_REF] Kesten | Subdiffusive behavior of random walk on a random cluster[END_REF][START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF]: as n → +∞, dist(τ |H(τ ) ≥ n) → dist(τ * ). (2) Conditioning on a large total population size (A = Card): as n → +∞, dist(τ |Card(τ ) ≥ n) → dist(τ * ). (3) Conditioning on the number of individuals having a given number of children: let A be a non-empty subset of N and, for a tree t, denote by L A (t) the total number of nodes of t with outdegree in A; assume that Σ k∈A p(k) > 0. Then, as n → +∞, dist(τ |L A (τ ) ≥ n) → dist(τ * ). (4) Conditioning on the number of protected nodes, see [START_REF] Abraham | Local limits of Galton-Watson trees conditioned of the number of the protected nodes[END_REF]: let A(t) be the number of protected nodes in the tree t; then, as n → +∞, dist(τ |A(τ ) ≥ n) → dist(τ * ). One of our original motivations for this result is the local convergence under the conditioning of large width, so we now apply Theorem 3.1 to this specific conditioning. Since p(0) + p(1) < 1, we have P(L(τ ) ≥ n) > 0 for any n. Theorem 3.1 then immediately gives the local convergence of critical GW trees to Kesten's tree under the conditioning of large width. Recall that the width L(t) of a tree t is defined by L(t) = sup k≥0 Z k (t). Corollary 3.6. Assume that p is critical. Then, as n → +∞, dist(τ |L(τ ) ≥ n) → dist(τ * ). 4. The conditioning on the largest generation, critical case Theorem 4.1. Let τ be a critical GW tree with offspring distribution p satisfying (1) with bounded support, and let τ * be the Kesten's tree associated with p. Let τ n be a random tree distributed according to τ conditionally on {L(τ ) = n}. Then, as n → +∞, dist(τ n ) → dist(τ * ), where the limit is understood along the infinite sub-sequence {n ∈ N : P(L(τ ) = n) > 0}. Proof. We consider A(t) = L(t) with n 0 = 1, that is, A n = {t ∈ T; L(t) = n}. Since p(0) + p(1) < 1, the set {n ∈ N : P(L(τ ) = n) > 0} is infinite. Let t ∈ T 0 , x ∈ L 0 (t) and let n be a large enough integer. Let K = sup{n ∈ N, p(n) > 0} < +∞ be the supremum of the support of p. For all |x| ≤ s ≤ H(t) we have Z s (t) + Z s-|x| (τ ) ≤ L(t) + K^{s-|x|} < n. We deduce that L satisfies the identity property, that is, L(t ∗x τ ) = n ⇐⇒ L(τ ) = n. According to Theorem 3.1, we deduce the convergence, which ends the proof. We can generalize these results concerning the width by considering the following. Let A be a non-empty subset of N. 
For a tree t and s a non-negative integer, we write The case A = {0}, L (s) A (t) represents the number of leaves in generation s of t and S A (t) the maximum number of leaves in the same generation. We can also have L As, for all t ∈ T 0 , x ∈ L 0 (t) and n be a large enough integer, ∀ |x| ≤ s ≤ H(t) L L A (t) = {u ∈ t; |u| = s and k u (t) ∈ A} the set of individuals, in generation s, whose number of children belongs to A and L A (t) = Z s (t) and so S A (t) = L(t) the largest generation by taking A = N.Since p(0) + p(1) < 1 and k∈A p(k) > 0, the set {n ∈ N : P(S A (τ ) = n) > 0} is infinite. ≤ L(t) + K s-|x| < nThe same work as in the previous proof allows to prove Theorem 4.2. Let τ be a critical GW tree with offspring distribution p satisfying (1) with bounded support and such that k∈A p(k) > 0. Let τ * be a Kesten's tree associated to p. Then as n -→ +∞,dist(τ |S A (τ ) = n) -→ dist(τ * )•where the limit is understood along the infinite sub-sequence {n ∈ N : P(S A (τ ) = n) > 0}. anddist(τ |S A (τ ) ≥ n) -→ dist(τ * )•Note that this type of conditioning is never treated. Acknowledgements. Sincere thanks to anonymous referees for their comments and suggestions, which improved considerably the presentation of this paper. • The number of children (k u (τ * ), u ∈ τ * , |u| = h) are independent and distributed according to p if u = v 1 • • • v h and according to p * if u = v 1 . . . v h . • Given {u ∈ τ * ; |u| ≤ h + 1} and (v 1 , . . . , v h ), the integer v h+1 is uniformly distributed on the set of integers {1, . . . , k Remark 2.2. Notice that by construction, a.s. τ * ∈ T 1 has a unique infinite spine. And following Kesten [START_REF] Kesten | Subdiffusive behavior of random walk on a random cluster[END_REF], the random tree τ * can be viewed as the tree τ conditioned on non extinction. Following [START_REF] Abraham | An introduction to Galton-Watson trees and their locals limits[END_REF], for t ∈ T 0 and x ∈ L 0 (t), we have: (2) In the particular case of a critical offspring distribution (µ = 1), we get for all t ∈ T 0 and x ∈ L 0 (t): Main result Let A be an integer-valued function defined on T which is finite on T 0 and let n 0 ∈ N ∪ {+∞} be given. We define for all n ∈ N * , the subset of trees Common values of n 0 that will be considered are 1 and +∞. The following theorem states a general result concerning the local convergence of critical and sub-critical GW tree τ conditioned on A n toward Kesten's tree and its proof is in fact a straighforward adaptation of the proof of theorem 3.1 in [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF]. We denote by the conditional law of τ given {τ ∈ A n }. Theorem 3.1. Assume that assumption (1) hold, µ ≤ 1 and that P(τ ∈ A n ) > 0 for n large enough. Then, if for all t ∈ T 0 and x ∈ L 0 (t), (3) lim inf n-→+∞ As n -→ +∞, We have Proof. Since µ ≤ 1, we have a.s τ ∈ T 0 and τ * ∈ T 1 . So we will use Lemma 2.1 to prove the convergence. Using [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF] or else [START_REF] Bertoin | On largest offspring in a critical branching process with finite variance[END_REF], we have for every t ∈ T 0 , x ∈ L 0 (t) and t ∈ T 0 : Let t ∈ T 0 and x ∈ L 0 (t). For such n we get: So, we obtain that 2), we deduce that: , we deduce that: (5) lim inf Furthermore, for all t ∈ T 0 and all n > A(t), we have Finally, by Lemma 2.1, we have proved this result. We have several remarks related to Theorem 3.1. Remark 3.2. 
Although the above proof is almost a copy of the proof of Theorem 3.1 in [START_REF] Abraham | Local limits of conditioned Galton-Watson trees : the infinite spine case[END_REF], it is easy to see that the sufficient condition in Theorem 3.1 is a particular case of this result. Remark 3.3. We apply Theorem 3.1 to the conditioning on the maximal out-degree. Let τ be a critical Galton-Watson tree and let n be a large enough integer; then, for every t ∈ T 0 and x ∈ L 0 (t), we have:
04096906
en
[ "info.info-ia", "info.info-lg", "phys.meca.msmeca" ]
2024/03/04 16:41:18
2022
https://hal.science/hal-04096906/file/manuscript.pdf
Ying Zhang email: [email protected] Mutahar Safdar email: [email protected] Jiarui Xie email: [email protected] Jinghao Li email: [email protected] Yaoyao Manuel Sage email: [email protected] Fiona Zhao A systematic review on data of additive manufacturing for machine learning application: the data quality, type, preprocessing, and management Additive manufacturing (AM) techniques are maturing and penetrating every aspect of the industry. With more and more design, process, structure, and property data collected, machine learning (ML) models are found to be useful to analyze the patterns in the data. The quality of datasets and the handling methods are important to the performance of these ML models. This work reviews recent publications on the topic, focusing on the data types along with the data handling methods and the implemented ML algorithms. The examples of ML applications on AM are then categorized based on the lifecycle stages, and research focuses. In terms of data management, the existing public database and data management methods are introduced. Finally, the limitations of the current data processing methods and suggestions are given. LDA Linear discriminant analysis DL Deep learning LR Linear regression GPR Gaussian process regression NB Naïve Bayes DT Decision tree LSTM Long short-term memory . However, those articles mainly focus on the opportunities of ML in AM. None of them provide a summary of the data perspective of AM for ML applications such as the most popular data type in AM, the typical data preprocessing methodology, data quality and quantity, and AM data management. Therefore, the primary objective of this article is to provide a comprehensive review on the data of AM for ML application. Specifically, this study contributes to: Introduction Additive manufacturing (AM, also known as 3D printing) refers to a process by which a 3D object is additively fabricated from a digital design, usually in a layer-by-layer fashion. Due to the nature of the AM process, which adds material only at the desired place, the lead-time and material waste are kept to a relatively low level. More importantly, since the tooling is no longer needed, AM unlocks a significant number of constraints for the designers to design products with complex geometry [START_REF] Kumke | Methods and tools for identifying and leveraging additive manufacturing design potentials[END_REF]. The technical features of AM make it suitable for industries with small batch sizes and complex part geometry, especially for hollow structures and curved surfaces. Thus, AM is becoming an important complement to the traditional manufacturing methods. American Society for Testing and Materials (ASTM) F42 categorizes all the AM processes into seven classes, with several technologies under each category branded by different manufacturers [START_REF] Wilson | Remanufacturing of turbine blades by laser direct deposition with its energy and environmental impact analysis[END_REF][START_REF] Dutta | The additive manufacturing (AM) of titanium alloys[END_REF]. Compared to conventional manufacturing methods, there are more uncertainties in AM processes as they usually involve complex designs, phase transformation, and in-process control to achieve the desired microstructure and properties. 
In the meantime, the enormous data generated from design, process, structure, and property (PSP) linkage [START_REF] Kumke | Methods and tools for identifying and leveraging additive manufacturing design potentials[END_REF][START_REF] Debroy | Additive manufacturing of metallic components-process, structure and properties[END_REF] in AM techniques brings challenges to researchers, and the traditional trial and error methods are usually not efficient. Considering the melt pool size alone, it can change from hundreds of micrometers in powder bed fusion (PBF) to several millimeters in directed energy deposition (DED), and this may eventually result in very different microstructures and properties. The uncertainties in the PSP are restricting the development and application of AM techniques [START_REF] Greitemeier | Uncertainty of Additive Manufactured Ti-6Al-4V[END_REF], and recognizing the interplay between PSP linkage in AM is crucial for quality control and the development of this technology. Modeling approaches and numerical simulations are the ideal tools to fill in the gap by saving time and experimental costs [START_REF] Li | Solidification Microstructure Simulation of Ti-6Al-4V in Metal Additive Manufacturing: A Review[END_REF][START_REF] Markl | Multiscale Modeling of Powder Bed-Based Additive Manufacturing[END_REF]. However, establishing mechanistic models usually requires domain knowledge and model validation process before their reliable applications. In recent years, an increasing number of studies are looking into solutions by directly applying data-driven approaches. Machine learning (ML) continuously shows its advantages to deal with the uncertainties and multiple types of data from AM for various purposes. First, it avoids solving complex equations in many physical models. Second, it fills the gap where traditional theories are either non-existent or loss function due to their inability to solve application-specific problems. Finally, it is able to make a prediction for a long process range based on the well-tested and reliable ML algorithms encoded by open-source software [START_REF] Debroy | Metallurgy, mechanistic models and machine learning in metal printing[END_REF]. When ML gets involved, datasets are an important matter to be considered as ML models are solely driven by data. Current AM datasets for ML are highly case-dependent due to a lack of existing standards. Therefore, in Section 2, this work first provides a comprehensive review of the stateof-art ML applications on AM in terms of the four data types (tabular, graphic, 3D, and spectrum) and their corresponding data handling procedures as well as the ML algorithms deployed. Then, based on the prediction target of ML models, the application area of ML on AM is summarized in Section 3. The existing public datasets are introduced in the following Section 4. Finally, some insights and suggestions for further developing trends are discussed in the end. All the references come from Scopus including most of the journal papers with the keywords "machine learning" and "additive manufacturing" in the recent four years. 144 related journal articles have been reviewed. For convenience, the standards from ASTM to categorize the AM technology types are used and the list of abbreviations for AM aspect is listed in Table 1. The list of abbreviations for the most frequent ML terminology is listed in Table 2. • Introduce the most popular data types and data handling methods in AM studies. 
• Summarize the data types, preprocessing methods, specific application, data quality, and selected ML algorithms for 100+ existing studies in terms of AM lifecycle stages of Design, Process, and Product. • Present the existing public datasets for AM studies and introduce the existing AM database management system. • Identify the research gaps, current limitations, and future research directions on AM data for ML applications. Data types and data handling This section discusses the original data types collected from the design, simulation, and manufacturing procedures of AM to construct ML models. Original data in this paper refers to the collected raw data before any further preparation or preprocessing. The data handling methods and popular categories of ML algorithms implemented for ML in AM are also illustrated in this section. Overview Research on ML and its applications utilizes many different terminologies for the work with data. These terms are often used interchangeably, complicating the transferability of implementations. For this paper, data handling is defined as an umbrella term comprising all work with data, from the collection of raw data to techniques solely applied to improve algorithm performance. In the field of ML-aided AM, data handling can be further divided into the four categories explained in the following: Feature extraction: raw data acquired from sensors is usually in forms that are difficult to statistically analyze and build ML models upon. ML practitioners need to develop salient representations from the original data as the input features. Therefore, statistical and geometrical features are usually extracted from raw data to facilitate the subsequent analysis [START_REF] Guyon | Feature extraction: foundations and applications[END_REF]. Feature selection and feature learning: after original features are derived from raw data, feature selection and feature learning can be performed to improve the performance of ML models. Feature selection evaluates and ranks the importance of features to choose a subset from the original features [START_REF] Li | Feature selection: A data perspective[END_REF]. Feature learning, instead, generates new features that are better representations of the dataset [START_REF] Bengio | Representation learning: A review and new perspectives[END_REF]. Discretization: discretization describes the process of transforming continuous objects into discrete elements. Vision data such as images and 3D models are commonly discretized to generate features suitable for ML. The most popular discretization methods for AM are pixelization and voxelization. Data preprocessing: data preprocessing summarizes techniques applied to improve the quality or suitability of the data for ML algorithms [START_REF] Alasadi | Review of data preprocessing techniques in data mining[END_REF]. For AM, preprocessing can be further classified into image preprocessing and numerical preprocessing. Images are often processed with techniques such as gray-scaling and cropping to reduce computational complexity and enhance the quality of data [START_REF] Chaki | A beginner's guide to image preprocessing techniques[END_REF]. Numerical preprocessing techniques such as normalization and removal of outliers support better performance and faster convergence for the training processes of ML [START_REF] Singh | Investigating the impact of data normalization on classification performance[END_REF]. 
Tabular data Data stored in tabular forms is structured into rows and columns, shown in Figure 1. Each row represents an instance, acting as a training example for ML [START_REF] Tennison | Model for tabular data and metadata on the web[END_REF]. Each column stands for an attribute, also known as a feature in ML, to characterize the instances. The data in cells could be continuous such as numerical values, or discrete such as categorical or ordinal values. The cells under the same column must exhibit the same category of data. For AM, this data type is extensively used to investigate the correlations between build parameters and build quality of the print [START_REF] Zhang | High cycle fatigue life prediction of laser additive manufactured stainless steel: A machine learning approach[END_REF][START_REF] Lee | Data analytics approach for melt-pool geometries in metal additive manufacturing[END_REF][START_REF] Jiang | Achieving better connections between deposited lines in additive manufacturing via machine learning[END_REF]. For example, Zhang et al. [START_REF] Zhang | High cycle fatigue life prediction of laser additive manufactured stainless steel: A machine learning approach[END_REF] construct an ML model that predicts the build quality with build parameters using tabular data. Each row records one experiment, consisting of build parameters as the input features and measurements of build quality as the target variables of the ML model. Figure 1 is an example of a tabular form, storing the experimental data of different process parameter settings of plastic 3D printing. Figure 1: Format of tabular data: an example of tabular data form to store the experimental data of plastic 3D printing. The most frequently used data handling method for tabular data is feature selection [START_REF] Rodríguez-Martín | Predictive models for the characterization of internal defects in additive materials from active thermography sequences supported by machine learning methods[END_REF][START_REF] Montazeri | In-process monitoring of porosity in additive manufacturing using optical emission spectroscopy[END_REF]. Feature selection processes are easily implemented for tabular data because features are well defined at the beginning. The most relevant features for target prediction are selected to construct ML models to reduce the computational complexity while retaining acceptable accuracy. Feature selection also helps discover the most decisive build parameters and remove unnecessary sensors to reduce cost. Data preprocessing such as removal of outliers and normalization are also common practices to improve the predictive performance of ML models [START_REF] García | Data preprocessing in data mining[END_REF]. Graphics data Graphics data refers to data forms that display or present images or drawings [START_REF] Murray | Encyclopedia of graphics file formats[END_REF]. There are generally two types of graphics data: bitmap and vector data. Bitmap is a pixelized representation of images, where a picture is segregated by a uniform grid and each cell is numerically defined to represent its color. Figure 2(a) renders the shape of an I-beam cross-section using one of the bitmap data representations: binary image. Each block is a black or white pixel, digitally represented by 1 or 0, respectively. 
In-process monitoring and quality inspection on AM are usually performed with bitmap data acquired from digital cameras [START_REF] Saluja | A closed-loop in-process warping detection system for fused filament fabrication using convolutional neural networks[END_REF][START_REF] Yeung | A meltpool prediction based scan strategy for powder bed fusion additive manufacturing[END_REF], microscopes [START_REF] Caggiano | Automated Laser Polishing for surface finish enhancement of additive manufactured components for the automotive industry[END_REF][START_REF] Decost | Computer vision and machine learning for autonomous characterization of am powder feedstocks[END_REF][START_REF] Özel | Surface topography investigations on nickel alloy 625 fabricated via laser powder bed fusion[END_REF], X-ray sensors [START_REF] Paulson | Correlations between thermal history and keyhole porosity in laser powder bed fusion[END_REF], and thermal cameras [START_REF] Zhang | Deep learning-based tensile strength prediction in fused deposition modeling[END_REF][START_REF] Zhang | Detection of Defects in Additively Manufactured Stainless Steel 316L with Compact Infrared Camera and Machine Learning Algorithms[END_REF]. For instance, layer-wise images are captured by digital cameras to monitor the conformance of geometry, or by microscope to inspect defects and microstructures. Commonly used bitmap formats are the joint photographic expert group (JPEG) and Portable graphics format (PNG). Vector data stores the key points of a drawing that can be restored with connecting lines and guidelines. Figure 2(b) is an example of a sketch reconstructed from vector data. The key points are connected with connecting lines following the guidelines such as straight line, spline, and arc. Vector data is frequently utilized to characterize the toolpaths and infill patterns of AM [START_REF] Zhang | Predicting flexural strength of additively manufactured continuous carbon fiber-reinforced polymer composites using machine learning[END_REF]. Pixelization is the most popular data handling method for ML in AM as most examples of image analysis in this field adopt bitmap data [START_REF] Bai | Anomaly detection of gas turbines based on normal pattern extraction[END_REF][START_REF] Gu | Bioinspired hierarchical composite design using machine learning: simulation, additive manufacturing, and experiment[END_REF][START_REF] Ren | Computational fluid dynamics-based in-situ sensor analytics of direct metal laser solidification process using machine learning[END_REF][START_REF] Sanchez | Machine learning to determine the main factors affecting creep rates in laser powder bed fusion[END_REF][START_REF] Snell | Methods for rapid pore classification in metal additive manufacturing[END_REF]. When an image is pixelized, each pixel is an original feature of the dataset. However, they might not be good representations for ML, and thus image preprocessing is commonly utilized to improve the datasets. 
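The following minimal Python sketch (assuming Pillow and NumPy are available; the file name, crop window and image sizes are hypothetical) illustrates the pixelization step and the image-preprocessing techniques detailed next: gray-scaling, cropping, resizing, flattening pixels into features, simple flip/rotation augmentation and binarization.

import numpy as np
from PIL import Image

# Hypothetical layer-wise image captured by a digital camera during a build.
img = Image.open("layer_0042.png")

# Preprocessing: gray-scale, crop to a region of interest, resize.
gray = img.convert("L")
roi = gray.crop((100, 100, 612, 612))        # (left, upper, right, lower)
small = roi.resize((128, 128))

# Pixelization: every pixel becomes one input feature, scaled to [0, 1].
pixels = np.asarray(small, dtype=np.float32) / 255.0
features = pixels.reshape(-1)                # 128 * 128 = 16384 features

# Simple augmentation: flips and a 90-degree rotation create extra samples.
augmented = [np.flip(pixels, 0), np.flip(pixels, 1), np.rot90(pixels)]

# Binarization, e.g. to separate bright melt-pool pixels from background.
binary = (pixels > 0.5).astype(np.uint8)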
Techniques such as cropping [START_REF] Bai | Anomaly detection of gas turbines based on normal pattern extraction[END_REF][START_REF] Han | Quantitative microstructure analysis for solid-state metal additive manufacturing via deep learning[END_REF], resizing [START_REF] Ren | Computational fluid dynamics-based in-situ sensor analytics of direct metal laser solidification process using machine learning[END_REF], gray-scaling [START_REF] Özel | Surface topography investigations on nickel alloy 625 fabricated via laser powder bed fusion[END_REF][START_REF] Sanchez | Machine learning to determine the main factors affecting creep rates in laser powder bed fusion[END_REF], and binarization [START_REF] Imani | Process mapping and in-process monitoring of porosity in laser powder bed fusion using layerwise optical imaging[END_REF] are utilized to reduce the computational power required. Image augmentation [START_REF] Shorten | A survey on image data augmentation for deep learning[END_REF] is also frequently conducted to generate more training examples using flipping [START_REF] Han | Quantitative microstructure analysis for solid-state metal additive manufacturing via deep learning[END_REF], rotating [START_REF] You | Mitigating Scattering Effects in Light-Based Three-Dimensional Printing Using Machine Learning[END_REF], etc. Image analysis techniques including texture analysis [START_REF] Yazdi | A hybrid deep learning model of processbuild interactions in additive manufacturing[END_REF] and edge detection [START_REF] Roach | Utilizing computer vision and artificial intelligence algorithms to predict and design the mechanical compression response of direct ink write 3D printed foam replacement structures[END_REF] are implemented to extract geometrical features. An example of a monitoring system that employs denoising and edge detection is shown in Figure 3. An image of the melt pool is captured in a pixelized format, then analyzed to generate its temperature contour map. Principal component analysis as a feature learning technique has been applied to graphics data to reduce the dimensionality of input variables of AM datasets [START_REF] Özel | Surface topography investigations on nickel alloy 625 fabricated via laser powder bed fusion[END_REF][START_REF] Han | Quantitative microstructure analysis for solid-state metal additive manufacturing via deep learning[END_REF]. 3D data 3D data is a coordinate-based representation of 3D objects with specialized software and has wide applications in AM [START_REF] Mchenry | An overview of 3d data content, file formats and viewers[END_REF]. For example, the part definition of plastic 3D printing is saved in Computer-aided design (CAD) files such as STL. The CAD files are processed by slicing software that generates toolpaths to guide the extruder. Figure 4 demonstrates how 3D objects can be represented by 3D data using tessellation. Tessellation is one of the 3D data techniques that cover the surface of a 3D object with polygons to record its shape. [START_REF] Szilvśi-Nagy | Analysis of STL files[END_REF]. 
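A minimal sketch of one common way to discretize such 3D data for ML is shown below: a point cloud (which could, for example, be sampled from an STL surface or reconstructed from X-ray tomography) is voxelized on a uniform occupancy grid using NumPy only; the cloud here is synthetic and the grid resolution is an arbitrary choice.

import numpy as np

def voxelize(points, grid=32):
    # Occupancy grid for a point cloud: a voxel is 1 if at least one point
    # falls inside it.  points is an (N, 3) array of x, y, z coordinates.
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - lo) / (hi - lo + 1e-9) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

# Synthetic point cloud standing in for a scanned part.
rng = np.random.default_rng(1)
cloud = rng.uniform(-1.0, 1.0, size=(5000, 3))

vox = voxelize(cloud, grid=32)

# Simple geometrical features that could feed a shallow ML model.
occupied_voxels = int(vox.sum())
bbox_extent = cloud.max(axis=0) - cloud.min(axis=0)
print(occupied_voxels, bbox_extent)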
In AM researches, 3D design files are utilized to train ML models that perform geometry detection [START_REF] Zhang | Predictive manufacturability assessment system for laser powder bed fusion based on a hybrid machine learning model[END_REF], manufacturability analysis [START_REF] Zhang | Predictive manufacturability assessment system for laser powder bed fusion based on a hybrid machine learning model[END_REF], and build quality prediction [START_REF] Herriott | Predicting microstructure-dependent mechanical properties in additively manufactured metals with machine-and deep-learning methods[END_REF]. 3D data can also be constructed with stereo cameras or X-ray tomography to perform in-process defect detection [START_REF] Snell | Methods for rapid pore classification in metal additive manufacturing[END_REF][START_REF] Gobert | Porosity segmentation in X-ray computed tomography scans of metal additively manufactured specimens with machine learning[END_REF]. Before input into ML models, 3D data is processed into multi-view images, volumetric, point cloud, polygonal mesh, or primitive-based data, while point cloud and volumetric forms are most popular for AM. 3D models can also be voxelized into cubes to generate original features. Sparse representations are deployed to reduce the memory needed to process huge voxelized data [START_REF] Zhang | Hybrid sparse convolutional neural networks for predicting manufacturability of visual defects of laser powder bed fusion processes[END_REF]. Geometrical features such as relative distances extracted from point clouds can be the input features to train ML models [START_REF] Kuschmitz | Design and Additive Manufacturing of Porous Sound Absorbers-A Machine-Learning Approach[END_REF]. Morphological and crystallographic features can also be extracted from volumetric data as input features to ML models [START_REF] Herriott | Predicting microstructure-dependent mechanical properties in additively manufactured metals with machine-and deep-learning methods[END_REF]. Figure 5 demonstrates an example of handling methods for 3D data. The features such as volume, surface area, bounding box, number of components, and number of fasteners are extracted from the 3D model and then used as ML input features. Spectrum data Spectrum data is discrete or continuous wavelengths measured from specific radiations. Figure 6 shows an example of vibration measurement, which belongs to spectrum data. Vibration data is usually first recorded in the time domain, then transferred to the frequency domain to reveal more information. Frequently measured radiations for AM are temperature [START_REF] Rodríguez-Martín | Predictive models for the characterization of internal defects in additive materials from active thermography sequences supported by machine learning methods[END_REF], acoustic emission (AE) [START_REF] Wu | Experimental study of the process failure diagnosis in additive manufacturing based on acoustic emission[END_REF][START_REF] Shevchik | Acoustic emission for in situ quality monitoring in additive manufacturing using spectral convolutional neural networks[END_REF][START_REF] Wasmer | In situ quality monitoring in AM using acoustic emission: A reinforcement learning approach[END_REF], photon [START_REF] Montazeri | In-process monitoring of porosity in additive manufacturing using optical emission spectroscopy[END_REF][START_REF] Okaro | Automatic fault detection for laser powder-bed fusion using semisupervised machine learning[END_REF], and vibration. 
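To illustrate what such spectrum data looks like before the sensors and handling steps surveyed next, the following Python sketch builds a synthetic vibration trace (the sampling rate, frequencies and feature choices are arbitrary stand-ins for a real accelerometer signal) and extracts simple time-domain and frequency-domain features with the FFT.

import numpy as np

fs = 5000                                    # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)

# Synthetic vibration signal: a base frequency plus a fault harmonic and noise.
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.4 * np.sin(2 * np.pi * 480 * t)
          + 0.2 * np.random.default_rng(2).normal(size=t.size))

# Time-domain statistical features.
time_features = {
    "rms": float(np.sqrt(np.mean(signal ** 2))),
    "peak": float(np.max(np.abs(signal))),
    "kurtosis_like": float(np.mean(((signal - signal.mean()) / signal.std()) ** 4)),
}

# Frequency-domain features via the FFT.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
freq_features = {
    "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
    "spectral_centroid": float(np.sum(freqs * spectrum) / np.sum(spectrum)),
}
print(time_features, freq_features)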
The corresponding sensors are typically thermocouples, AE sensors, photodiode sensing systems, and vibration sensors, respectively. Thermal sensors can help monitor melt pool states and reveal cracks and voids near the part surface by analyzing the thermal gradients and history of the target area. AE sensors can be attached to the build platform to detect any onset of irreversible deformations such as cracks, warpage, and delamination. Vibration sensors can be installed on extruders to detect machine state errors such as filament runout, jamming, and breakage [START_REF] Yang | Filament breakage monitoring in fused deposition modeling using acoustic emission technique[END_REF]. Figure 6: Measurement of vibrationone type of spectrum data. The plot on the top is the vibration amplitude in the time domain. The plot at the bottom is the vibration amplitude in the frequency domain (adapted from [START_REF] Hong | A vibration measurement system for health monitoring of power transformers[END_REF]). Denoising and signal filtering are common techniques to clean spectrum data [START_REF] Özel | Surface topography investigations on nickel alloy 625 fabricated via laser powder bed fusion[END_REF]. For many cases, spectrum data collected by the aforementioned sensors is times-series data with temporal relationships among data points, adding difficulties to data analysis. Statistical features extracted from both time and frequency domains are proven efficient to build data-driven tools with spectrum data [START_REF] Zhang | Deep learning-based tensile strength prediction in fused deposition modeling[END_REF][START_REF] Zouhri | Optical process monitoring for Laser-Powder Bed Fusion (L-PBF)[END_REF][START_REF] Wu | Predictive modelling of surface roughness in fused deposition modelling using data fusion[END_REF]. Time-frequency analysis such as wavelet packet transform is implemented to extract statistical features from spectrum data [START_REF] Shevchik | Acoustic emission for in situ quality monitoring in additive manufacturing using spectral convolutional neural networks[END_REF]. LDA and principal component analysis as a feature learning method has been applied to AM spectrum datasets to reduce the input dimensionality [START_REF] Montazeri | In-process monitoring of porosity in additive manufacturing using optical emission spectroscopy[END_REF][START_REF] Obaton | A non-destructive resonant acoustic testing and defect classification of additively manufactured lattice structures[END_REF]. Machine learning techniques for additive manufacturing This paper reviewed the reported ML techniques used in the ML in AM literature and plots the percentages by ML type in Figure 7. One differentiating factor of ML is the type of learning. Supervised learning algorithms are trained with labeled data and, during test time, seek to identify the correct label for a queried instance. With over 90% of the analyzed literature falling into this category, supervised learning is prevalent in the field. Less prominent are unsupervised approaches, intending to recognize patterns in (unlabeled) data, and reinforcement learning. Figure 7c visualizes the popularity of certain algorithms or groups of algorithms with similar working principles. Basic regression algorithms comprise LR, ridge regression, and lasso regression and are implemented in 16 publications. Other conventional ML algorithms deployed in the field are NB, GPR, k-NN, and maximum margin algorithms such as SVMs and SVRs. 
Algorithms in the tree/ensemble category are based on decision trees and make use of either single trees or ensembles, for example in the case of RF and GB. Table 3 displays the popularity of ML types introduced above for each of the four data types. For tabular data, it is observed that the proportions do not significantly differ from those reported overall (see Figure 7 or the first row of Table 3). Only within the group of deep learning algorithms, normal ANNs were more often preferred over CNNs and RNNs. Spectrum data shows various differences when focusing on ML models compared to the other three data types. Unsupervised approaches are more popular, deployed in 21.4% of the analyzed literature with this data type as opposed to 9% on average over all 4 data types. Furthermore, all publications with spectrum data utilized shallow ML algorithms. Particularly popular compared to the other data types are tree/ensemble methods and maximum margin methods. Similar to tabular data, within the group of deep learning algorithms, ANNs are more often preferred over CNNs. selection to improve ML model performance. And many spectrum datasets need feature learning techniques to reduce their input dimensionality while retaining the most information. Discussion on data types and data handling techniques Table 4: Frequencies of data handling methods utilized to process each data type of AM dataset. Color coding is applied to this table to indicate large numbers with red and small numbers with green. The total number of papers reviewed is stated at the right-most column, corresponding to each data type. Targets/Applications Targets represent outputs of ML models and specify the AM application for which an ML model is trained and deployed. ML applications in AM can be classified for lifecycle stages. In this section, these applications are broadly categorized into pre-processing, processing, and postprocessing stages. The selection of an appropriate target within a lifecycle stage can expedite the modeling process and generate desired results. At the design stage, ML can be used for a range of tasks including but not limited to geometry prediction, design optimization, lattice design, and design classification. The main focus of ML applications in AM has undoubtedly been on the process stage as this can lead to corrective actions before a part is completely printed. At the process stage, ML can be used to predict optimal process parameters and identify defective process states. Apart from these applications, build and toolpath planning has been optimized according to different constraints through the use of ML models. Surrogate modeling of melt pools is gaining attraction to replace computationally expensive and time-intensive physical simulations of AM processes. In this regard, recent literature highlights the significance of effective surrogate modeling by combining ML models with physics-based models. The final category of ML applications in AM concerns product characteristics. Product characteristics have been divided into macro, micro, and mechanical properties (alongside other characteristics). Macro characteristics deal with the macrostructure and include dimensions, surface features, crosssectional parameters, and visual defects. Micro characteristics include ML applications where microstructural defects are evaluated. A lot of attention is being given to control microstructural characteristics through ML models as this leads to overall control on product performance. 
Product properties are also predicted through ML models in some applications. AM targets are linked with data types, handling techniques, algorithms, and instances in the form of tables. Trends of data handling techniques and ML algorithms within each data type were the focus of attention in the previous section. to indicate the frequency of ML-oriented research in each category. The number of instances indicates the comprehensiveness of each dataset and is a focus of discussion in the later sections of this survey. These tables have been arranged to depict common AM applications within each lifecycle stage. There is a total of seven such tables spanning design (Table 5), process parameters and process domains (Table 6), build and toolpath planning (Table 7), surrogate modeling of melt pool (Table 8), macro product characteristics (Table 9), micro product characteristics (Table 10), and mechanical properties with miscellaneous characteristics (Table 11). Design characteristics At the design stage, ML models can aid either in the material's design or structural design. Material design can be divided into homogenous and heterogeneous material design. Traditionally, design optimization techniques such as topology optimization and generative design have been the essential tools for structural design. Recently, ML models have been applied in conjunction with these optimization tools to improve their accuracy and reduce computational expense. Apart from optimizing structures, ML is being used to cope with AM constraints namely supports and overhangs. ML is also gaining attraction in the design of lattice, an active research field in design for additive manufacturing (DfAM). Prediction of geometric characteristics is found to be the focus of ML applications at this stage. ML applications in AM design are shown in Table 5. ML inputs in tabular form account for the majority of selected design targets making it the most common data type used at this stage. Tabular data can be extracted from a multitude of sources such as process parameters [START_REF] Zhang | Prediction of Dimensional Changes of Low-Cost Metal Material Extrusion Fabricated Parts Using Machine Learning Techniques[END_REF], lattice designs [START_REF] Garland | Pragmatic generative optimization of novel structural lattice metamaterials with machine learning[END_REF], spatial parameters [START_REF] Baturynska | Prediction of geometry deviations in additive manufactured parts: comparison of linear regression with machine learning algorithms[END_REF], and simulations [START_REF] Bessa | Bayesian machine learning in metamaterial design: Fragile becomes supercompressible[END_REF]. For instance, tabular data is used to explore a new meta-material concept that can adapt concerning different properties, base materials, length scales, and processes [START_REF] Bessa | Bayesian machine learning in metamaterial design: Fragile becomes supercompressible[END_REF]. Tabular data of performance characteristics (stress-strain requirements) is mapped with the design parameters of an ankle brace [START_REF] Jiang | Machine learning integrated design for additive manufacturing[END_REF]. Several ML models are employed to determine the dimensional features of printed parts by using tabular data of spatial parameters from STL and build orientation [START_REF] Baturynska | Prediction of geometry deviations in additive manufactured parts: comparison of linear regression with machine learning algorithms[END_REF]. 
Apart from tabular data, graphic data is also used to predict design characteristics in AM. Graphic data can be captured through in-process vision-based sensors or microscopes and scans in the post-process stage. A composite material part's geometry and tool path are reversed engineered using CT scan images in an RNN [START_REF] Yanamandra | Reverse engineering of additive manufactured composite part by toolpath reconstruction using imaging and machine learning[END_REF]. Similarly, graphic data from lattice designs is also used to predict design-oriented targets [START_REF] Garland | Pragmatic generative optimization of novel structural lattice metamaterials with machine learning[END_REF][START_REF] Després | Deep learning and design for additive manufacturing: a framework for microlattice architecture[END_REF]. Cases, where 3D data is used to predict design characteristics, are found in the literature as well. Design files from the upstream section of product informatics are found to be the source of 3D data. These design files are either based on native CAD systems or reversed engineered through miscellaneous techniques. For example, image segmentation is performed using CNN to separate bone and background as a pre-process in medical AM [START_REF] Minnema | CT image segmentation of bone for medical additive manufacturing using a convolutional neural network[END_REF]. In another work, 3D data from CAD models is used to identify parts eligible for AM [START_REF] Yang | Towards an automated decision support system for the identification of additive manufacturing part candidates[END_REF]. There are several benefits of using ML at AM design phase. A major motivation for applying ML at the design stage is the fact that the part has not been printed which leads to empirical modeling of different design aspects in a direct (e.g., following the PSP chain) or indirect (e.g., using process or product data to model design characteristics) manner. This can result in significant cost and time savings. A key application in this regard is AM candidacy evaluation before a CAD file is sent downstream in the digital thread to incorporate build and machine information. The notion of success at the design phase can guide designers to design parts that are AM compatible for a specific application. Another form of designer guidance is through design rules represented in the form of explicit AM knowledge. These rules can be formed using simple data-driven models. Finally, design parameters such as dimensions of features can be efficiently predicted using ML. Process characteristics Lack of repeatability has influenced process modeling in AM [START_REF] Vafadar | Advances in metal additive manufacturing: a review of common processes, industrial applications, and current challenges[END_REF][START_REF] Bandyopadhyay | Recent developments in metal additive manufacturing[END_REF]. Empirical (data-driven) and physical (physics-driven) modeling are two main approaches in this regard [START_REF] Masinelli | Artificial Intelligence for Monitoring and Control of Metal Additive Manufacturing[END_REF][START_REF] Michopoulos | On the multiphysics modeling challenges for metal additive manufacturing processes[END_REF]. Data-driven models provide a range of advantages over physics-based numerical or analytical modeling. There exist multiple bottlenecks which render physical modeling inefficient. The computational cost and time make the use of these models impractical. 
Another issue with these models can be overly simplistic assumptions without the important physical context required to solve complex AM processes. As a result, ML has emerged as a popular choice to understand AM processes. ML applications span a wide range of targets at the process stage, including process parameters, process domains, process planning, and melt pool modeling. These applications are linked with data types in the sections below. The benefits of applying ML at the process stage can range from as simple as process state correction to more sophisticated applications such as operator guidance. Process window exploration is analogous to design space exploration where a multitude of factors can influence the final success of a process. The application of ML at this stage can help discover the patterns in the joint distribution of these factors. These patterns can be related to process parameters to serve as a benchmark for future processes. The ML-enabled shift from post-processing to in-process monitoring is probably the most significant advantage at this phase. This shortens the printing cycle while saving costs spent on product inspection and similar activities. The development of reliable AM data-driven models can replace computationally expensive process models. A welldeveloped and generalized ML model is efficient to use on the fly which suits its application during AM processes. These models can then be linked to operator-friendly GUIs (APIs for interapplication usage) to support actions such as parameter or path selection for specific applications. Process parameters and process states Most of the applications in this section are concerned with the process states and process parameters. A process state refers to the current state of a process which can be a custom-defined label. These targets can range from as simple as good/bad and acceptable/unacceptable to specific anomalies such as cyber-attacks or faulty conditions. There are numerous instances of using ML to model process anomalies [START_REF] Scime | A multi-scale convolutional neural network for autonomous anomaly detection and classification in a laser powder bed fusion additive manufacturing process[END_REF][START_REF] Stanisavljevic | Detection of interferences in an additive manufacturing process: an experimental study integrating methods of feature selection and machine learning[END_REF][START_REF] He | Machine learning for continuous liquid interface production: Printing speed modelling[END_REF][START_REF] Gardner | Machines as craftsmen: localized parameter setting optimization for fused filament fabrication 3D printing[END_REF] and conditions of interest in AM [START_REF] Imani | Process mapping and in-process monitoring of porosity in laser powder bed fusion using layerwise optical imaging[END_REF][START_REF] Bastani | An online sparse estimation-based classification approach for real-time monitoring in advanced manufacturing processes from heterogeneous sensor data[END_REF]. Process parameters are of interest as ML models can help predict and optimize these with respect to various quality metrics. 
As a result, many researchers have focused on parameter prediction or optimization using ML techniques [START_REF] Osswald | Optimization of the production processes of powder-based additive manufacturing technologies by means of a machine learning model for the temporal prognosis of the build and cooling phase[END_REF][START_REF] Xiangyang | Atomic simulations of melting behaviours for TiAl alloy nanoparticles during heating[END_REF][START_REF] Caiazzo | Laser direct metal deposition of 2024 Al alloy: trace geometry prediction via machine learning[END_REF]. Process parameters related to deposition [START_REF] Oehlmann | Modeling Fused Filament Fabrication using Artificial Neural Networks[END_REF], material [START_REF] Decost | Computer vision and machine learning for autonomous characterization of am powder feedstocks[END_REF][START_REF] Desai | Spreading process maps for powder-bed additive manufacturing derived from physics model-based machine learning[END_REF], and energy source [START_REF] Caggiano | Automated Laser Polishing for surface finish enhancement of additive manufactured components for the automotive industry[END_REF][START_REF] You | Mitigating Scattering Effects in Light-Based Three-Dimensional Printing Using Machine Learning[END_REF][START_REF] Wang | In-situ droplet inspection and closed-loop control system using machine learning for liquid metal jet printing[END_REF] have been a common target of ML models at this stage of AM lifecycle. These applications are highlighted in Table 6. Data types used for process parameter and state prediction represent a relatively diverse set as compared to data types at the design stage. Tabular data is again found to be the most prevalent with 45% of the targets being predicted from this data type. Process information is found to constitute a significant portion of tabular data for these predictions. 
Process information for tables is extracted from process parameters [START_REF] He | Machine learning for continuous liquid interface production: Printing speed modelling[END_REF][START_REF] Osswald | Optimization of the production processes of powder-based additive manufacturing technologies by means of a machine learning model for the temporal prognosis of the build and cooling phase[END_REF][START_REF] Oehlmann | Modeling Fused Filament Fabrication using Artificial Neural Networks[END_REF], process conditions [START_REF] Imani | Process mapping and in-process monitoring of porosity in laser powder bed fusion using layerwise optical imaging[END_REF], and in-process images [START_REF] Wang | In-situ droplet inspection and closed-loop control system using machine learning for liquid metal jet printing[END_REF].Tabular data is used to predict diverse parameter and state targets including deposition [START_REF] Oehlmann | Modeling Fused Filament Fabrication using Artificial Neural Networks[END_REF], critical velocity [START_REF] Wang | Analysis of Critical Velocity of Cold Spray Based on Machine Learning Method with Feature Selection[END_REF], cooling time [START_REF] Osswald | Optimization of the production processes of powder-based additive manufacturing technologies by means of a machine learning model for the temporal prognosis of the build and cooling phase[END_REF], powder spreading quality [START_REF] Desai | Spreading process maps for powder-bed additive manufacturing derived from physics model-based machine learning[END_REF], process conditions [START_REF] Imani | Process mapping and in-process monitoring of porosity in laser powder bed fusion using layerwise optical imaging[END_REF][START_REF] He | Machine learning for continuous liquid interface production: Printing speed modelling[END_REF], and key parameters of interest [START_REF] Caiazzo | Laser direct metal deposition of 2024 Al alloy: trace geometry prediction via machine learning[END_REF]. Graphic and spectrum data types have equal representation with each being used to predict 25% of AM targets at these stages. Digital cameras and microscopes are two main sources of graphic data in this regard. Graphic data in the form of images is used to predict specific conditions [START_REF] Caggiano | Automated Laser Polishing for surface finish enhancement of additive manufactured components for the automotive industry[END_REF][START_REF] Imani | Process mapping and in-process monitoring of porosity in laser powder bed fusion using layerwise optical imaging[END_REF][START_REF] Caggiano | Machine learning-based image processing for on-line defect recognition in additive manufacturing[END_REF] and detect process anomalies [START_REF] Scime | A multi-scale convolutional neural network for autonomous anomaly detection and classification in a laser powder bed fusion additive manufacturing process[END_REF]100]. Acoustic emission sensors appear to generate virtually all spectrum data for this stage. Process state prediction is found to be a prominent target of spectrum data. For example, the logistic regression model is developed from acoustic signals for identifying the potential of recreating G-codes through cyber-attacks [101]. ML applications where 3D data is used to predict process parameters are also found in the literature [START_REF] You | Mitigating Scattering Effects in Light-Based Three-Dimensional Printing Using Machine Learning[END_REF]. 
Deposition strategies in AM influence product structure and, subsequently, its properties and performance. ML models can ignore the underlying physics and relate path strategies to structure, property, and performance characteristics. This helps avoid certain toolpaths that are prone to failure with respect to these characteristics before a part is completely printed. The reviewed references in build and toolpath planning are summarized in Table 7. Among the available data types at this stage, 3D data is found to be the predominant input for build and toolpath predictions. This data type can be collected from different phases of AM processes, such as design [102] and post-process [103]. In a relevant application, 3D data from sliced lattice models is used in an SVM model to predict optimal filling paths for lattice structures [104]. X-ray computed tomography (XCT) generates 3D volumes of parts that are sliced to 2D, cropped, and de-noised before being fed to a CNN for the prediction of build orientation [103]. Features of 3D junction geometries are employed in an NN to find the optimal path length value to avoid material deficit [102]. Tabular data is another type that is used to predict build and toolpath characteristics. The example applications that use tabular data have process parameters as their source. For example, tabular data of process parameters is also used to determine the desired printing pattern [105]. In another example, a feed-forward NN improved the quality of the connection between two consecutively deposited paths using process parameters as inputs in tabular form [START_REF] Jiang | Achieving better connections between deposited lines in additive manufacturing via machine learning[END_REF]. As a sub-category of data-driven approaches, ML models are strong candidates to serve as surrogates of complex multi-phase and multi-physics process simulations in AM. There are multiple ways to develop these surrogate models. ML models can be trained completely on experimental data, where no simulation results are needed. In some cases, process simulations are used to inform ML models partially or completely. There are also instances where physical knowledge is incorporated in ML models at the structural level, for example a physics-based error function to learn model parameters [106]. Melt pool characteristics are of key interest in surrogate modeling. Accurate prediction of these characteristics can help pick an adaptive approach to process control. There has been extensive interest in predicting the thermal distribution of melt pools, as this can be a good indicator of future structures and properties [107][108][109][110][111]. Melt pool topography [START_REF] Lee | Data analytics approach for melt-pool geometries in metal additive manufacturing[END_REF][START_REF] Yeung | A meltpool prediction based scan strategy for powder bed fusion additive manufacturing[END_REF][112,113] and other characteristics [START_REF] Ren | Computational fluid dynamics-based in-situ sensor analytics of direct metal laser solidification process using machine learning[END_REF][START_REF] Sanchez | Machine learning to determine the main factors affecting creep rates in laser powder bed fusion[END_REF] have been the target of some research works. The related research articles are summarized in Table 8. Tabular, graphic, and 3D data types are seen as potential inputs for ML-based surrogate modeling.
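To make the surrogate idea concrete, the following sketch fits a Gaussian-process regressor mapping a few tabular process parameters to a melt-pool width. The parameter names, ranges, and the synthetic ground truth are illustrative assumptions only; a real surrogate would be trained on simulation or experimental data, as described above.

```python
# Minimal sketch of a tabular melt-pool surrogate: Gaussian-process regression
# from (laser power, scan speed, hatch spacing) to melt-pool width.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(150, 400, 300),    # laser power [W]       (illustrative range)
    rng.uniform(500, 1500, 300),   # scan speed [mm/s]     (illustrative range)
    rng.uniform(0.08, 0.14, 300),  # hatch spacing [mm]    (illustrative range)
])
# Toy ground truth standing in for simulation or experimental measurements.
y = 0.4 * X[:, 0] / X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.005, 300)

kernel = RBF(length_scale=[100.0, 500.0, 0.05]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

query = np.array([[280.0, 900.0, 0.10]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted melt-pool width: {mean[0]:.3f} +/- {std[0]:.3f} mm")
```

A Gaussian process is chosen here only because it returns a predictive uncertainty along with the mean, which is convenient when the surrogate is meant to guide process control; any of the regressors listed in Table 8 could be substituted.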
Tabular data comprising process [START_REF] Lee | Data analytics approach for melt-pool geometries in metal additive manufacturing[END_REF], material [106], geometry [108], and temperature parameters [114] is widely used to model melt pool characteristics. It roughly accounts for 60% of all data types employed for surrogate modeling. Finite element (FE) generated tabular data is a clear addition to existing trends at this stage [107,109,111,112]. Specific applications of tabular data include melt pool geometry and thermal distribution prediction. The thermal history of a metal additive manufacturing (MAM) process is computed using an unsupervised clustering technique applied to input geometry and scan parameters in tabular form [114]. Graphic data from digital cameras and microscopes is found to be the second-best choice for melt pool modeling. The melt pool is monitored through these sensors and the resulting images/videos are used in ML models to predict characteristics of interest. Digital images from simulations are used in a CNN to predict melting conditions in a PBF process [START_REF] Ren | Computational fluid dynamics-based in-situ sensor analytics of direct metal laser solidification process using machine learning[END_REF]. 3D data is also used in ML models at this stage of the AM process flow. Features of 3D data are employed in deep models to predict the thermal field of a wire DED process [110].

Product characteristics

The last category of ML applications in AM is related to product characteristics. There are numerous parameters of interest that relate to printed parts. These can be classified with respect to product structure and properties. In AM, quality and business constraints require parts to be checked at both macro and micro levels. The macro level deals with geometrical and visual aspects, whereas the micro level deals with anomalies and defects in the microstructure of printed parts. ML models are also employed to predict products' properties and other aspects (cost, time, life, etc.). These applications are discussed in detail in the subsequent sections. ML applications concerning AM product characteristics offer unique benefits as well. Data-driven models for macro and micro characteristics can replace labor-intensive tasks such as measurements, characterizations, and microstructural evaluations. Given the complex nature of such characteristics (e.g., the microstructure of composite material systems), reliable physics-driven models are often impossible to develop. ML models, on the other hand, can extract key input-output relations in the context of given applications. This can relieve practitioners of expensive alternatives for deducing such complex characteristics. This applies to both macro (e.g., deviations in the printed parts) and micro (e.g., defects in the microstructure) characteristics of AM products. Similar to design and process spaces, property spaces can be established to guide designers and operators. Regions of desired properties can be linked to either design or process parameters. All of this is possible from base ML models correctly capturing the geometric, microstructural, and property traits of AM printed parts.

Macro level

The majority of ML applications in AM deal with product characteristics at the macro or micro level. Macro-level targets usually concern the visual characteristics of printed products and are often the first category of quality metrics against which products are checked.
Geometric dimensions and visual defects are the two main targets in this regard. These can relate to the macro characteristics of a single path, a single layer, or multiple layers. As a result, several works focused on predicting geometry deviations in AM printed parts [115][116][117][118][119]. Inspecting visual defects has been a common way to judge product quality and is used in several ML models with tabular [START_REF] Zhang | Hybrid sparse convolutional neural networks for predicting manufacturability of visual defects of laser powder bed fusion processes[END_REF], graphic [120,121], and spectrum [START_REF] Obaton | A non-destructive resonant acoustic testing and defect classification of additively manufactured lattice structures[END_REF] data as inputs. Regression-based ML models are a popular choice to determine the exact geometry of products in AM [122][123][124]. There are cases where domain- or expert-defined labels are used to make decisions on the macro-level quality of AM parts [125,126]. The reviewed references for macro structure characteristics are outlined in Table 9. Graphic, tabular, and 3D data types have roughly similar proportions for macro-level targets, representing 36%, 30%, and 30% of datasets, respectively. The majority of graphic data comes from digital cameras that capture images at different stages of the AM process [115,119,120,127]. Microscopes [START_REF] Özel | Surface topography investigations on nickel alloy 625 fabricated via laser powder bed fusion[END_REF] and infrared cameras [128] are also employed to capture the AM process. These images are then used to predict a range of macro characteristics such as dimensional variations and visual defects. Digital images of the process are employed in different ML models to detect visual defects in an ME process [120]. Simulation and camera images are used to detect real-time cyber-attacks resulting in malicious defects using ML models [129]. Tabular data from the design [122] and process [123] stages is used to model macro structure anomalies. Tabular data alongside a design file is used in a CNN to predict visual flaws of ME printed parts [START_REF] Gardner | Machines as craftsmen: localized parameter setting optimization for fused filament fabrication 3D printing[END_REF]. 3D data from design files and point clouds is also used to predict macro structure targets [118][130][131][132]. In one application, spectrum data of acoustic signals is used to predict geometric defects in PBF printed parts [START_REF] Obaton | A non-destructive resonant acoustic testing and defect classification of additively manufactured lattice structures[END_REF].
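As a small illustration of how 3D data can be used for such macro-level targets, the sketch below compares an as-built point cloud against points sampled from the as-designed geometry and summarizes the deviation. The point clouds, the tolerance value, and the units are synthetic assumptions, not data from the studies cited above.

```python
# Minimal sketch: estimating macro-level geometric deviation by comparing an
# as-built point cloud with points sampled from the as-designed surface.
# In practice the clouds would come from a 3D scan or XCT reconstruction and
# from the design (e.g., STL) file; here they are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
design_points = rng.uniform(0, 50, size=(20_000, 3))                  # nominal geometry
as_built = design_points + rng.normal(0, 0.05, design_points.shape)   # scanned part

tree = cKDTree(design_points)
deviation, _ = tree.query(as_built)   # distance of each scanned point to the design

print(f"mean deviation : {deviation.mean():.3f} mm")
print(f"95th percentile: {np.percentile(deviation, 95):.3f} mm")
out_of_tolerance = deviation > 0.15   # illustrative tolerance threshold
print(f"points out of tolerance: {out_of_tolerance.mean():.1%}")
```

Deviation statistics of this kind are what the regression and classification models referenced above learn to predict directly from design files or process data, without requiring a post-build scan.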
Turning to micro-level characteristics, ML models have been employed for a range of tasks including pore detection [START_REF] Montazeri | In-process monitoring of porosity in additive manufacturing using optical emission spectroscopy[END_REF][START_REF] Paulson | Correlations between thermal history and keyhole porosity in laser powder bed fusion[END_REF][START_REF] Zhang | Detection of Defects in Additively Manufactured Stainless Steel 316L with Compact Infrared Camera and Machine Learning Algorithms[END_REF][START_REF] Wasmer | In situ quality monitoring in AM using acoustic emission: A reinforcement learning approach[END_REF][141][142][143][144], pore classification [START_REF] Snell | Methods for rapid pore classification in metal additive manufacturing[END_REF][START_REF] Khanzadeh | Porosity prediction: Supervised-learning of thermal history for direct laser deposition[END_REF][START_REF] Shevchik | Acoustic emission for in situ quality monitoring in additive manufacturing using spectral convolutional neural networks[END_REF][145,146], and pore size prediction [START_REF] Xiangyang | Atomic simulations of melting behaviours for TiAl alloy nanoparticles during heating[END_REF]. Some ML applications also deal with the classification of specific microstructure types [START_REF] Han | Quantitative microstructure analysis for solid-state metal additive manufacturing via deep learning[END_REF]. Lack of fusion and balling defects have also been considered as targets in several ML models. The references are summarized in Table 10. Datasets for predicting micro characteristics fall into all four general types introduced in this survey. Graphic data stands out as a clear choice when it comes to predicting micro characteristics of AM parts. 59% of the available data in this section belongs to the graphic category. The sources of graphic data are found to be diverse as well. Digital cameras [147,148], thermal cameras [START_REF] Paulson | Correlations between thermal history and keyhole porosity in laser powder bed fusion[END_REF][START_REF] Zhang | Detection of Defects in Additively Manufactured Stainless Steel 316L with Compact Infrared Camera and Machine Learning Algorithms[END_REF], and microscopes [145,149] are used for in-process graphic data generation to predict micro characteristics. This data type is mainly used to predict microstructural defects. Graphic data of melt pool images is used to classify balling, keyholing, porosity, under-melting, and desirable conditions in an SVM classifier [150]. Layer-wise images of a PBF process are cropped and used in a deep learning model to distinguish lack-of-fusion defects from standard cases [151]. 3D, spectrum, and tabular data types are found to be relatively less common and account for 10%, 14%, and 17% of the data share, respectively. XCT-based 3D data and microscopic images are used to cluster different types of pores [START_REF] Snell | Methods for rapid pore classification in metal additive manufacturing[END_REF]. Spectrum data from acoustic signals is used in an NN to predict the porosity-based quality (poor, medium, high) of PBF printed parts [START_REF] Shevchik | Acoustic emission for in situ quality monitoring in additive manufacturing using spectral convolutional neural networks[END_REF]. A few other applications of spectrum data also employ acoustic sensor data in ML models and are listed in Table 10. Tabular data from diverse sources is used to predict porosity [143,152] and grain growth in AM [153].
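A minimal sketch of the image-based classification pattern mentioned above (e.g., the SVM-based melt-pool classification of [150]) is given below: flattened melt-pool image patches are fed to an SVM that separates a few condition classes. The patch size, class labels, and synthetic data are assumptions for illustration; published pipelines use real camera frames and tuned features.

```python
# Minimal sketch: classifying melt-pool image patches into condition classes
# (e.g., balling / keyhole / porosity-prone / nominal) with an SVM.
# The arrays and labels below are synthetic placeholders for camera frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
patches = rng.random((400, 32, 32))      # 32x32 grayscale melt-pool crops
labels = rng.integers(0, 4, size=400)    # 4 hypothetical condition classes

X = patches.reshape(len(patches), -1)    # flatten pixels into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

For larger image datasets the same role is usually played by a CNN, which learns the features instead of operating on raw flattened pixels, as in the deep learning references listed in Table 10.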
Mechanical properties and other characteristics

Properties lie at the end of the process-structure-property chain of the AM process flow. The future performance of products is based on the underlying properties. Mechanical properties have been a target of immense interest for ML models in AM [START_REF] Gu | Bioinspired hierarchical composite design using machine learning: simulation, additive manufacturing, and experiment[END_REF][START_REF] Roach | Utilizing computer vision and artificial intelligence algorithms to predict and design the mechanical compression response of direct ink write 3D printed foam replacement structures[END_REF][158][159][160][161]. Representative examples are tensile strength [START_REF] Zhang | Deep learning-based tensile strength prediction in fused deposition modeling[END_REF][START_REF] Zhang | Predicting flexural strength of additively manufactured continuous carbon fiber-reinforced polymer composites using machine learning[END_REF][START_REF] Herriott | Predicting microstructure-dependent mechanical properties in additively manufactured metals with machine-and deep-learning methods[END_REF][START_REF] Okaro | Automatic fault detection for laser powder-bed fusion using semisupervised machine learning[END_REF][162], elongation [161], hardness [163], fatigue [START_REF] Zhang | High cycle fatigue life prediction of laser additive manufactured stainless steel: A machine learning approach[END_REF][164,165], and surface roughness [START_REF] Wu | Predictive modelling of surface roughness in fused deposition modelling using data fusion[END_REF][166][167][168]. Some works also focus on ML-based modeling of residual stress [169,170] and density [START_REF] Zouhri | Optical process monitoring for Laser-Powder Bed Fusion (L-PBF)[END_REF][168,171] in printed parts. Quality metrics are based on specific properties and serve as labels in ML models [172,173]. Finally, miscellaneous applications where business-related characteristics of printing cost and time are computed using ML models can be found in the AM literature [174]. Table 11 summarizes the existing articles that fall into this category. ML-based prediction of mechanical properties has a clear winner in tabular data, with a 61% share of all datasets. Tabular data of material [175], deposition [START_REF] Zhang | Predicting flexural strength of additively manufactured continuous carbon fiber-reinforced polymer composites using machine learning[END_REF], process [162], fatigue [164], and geometry [160] parameters is frequently used to model diverse mechanical properties. Process parameters have a major share among all sources of tabular data at this stage [159,165,167,168,172,176,177]. For instance, process parameters alongside mechanical properties in tabular form are used in an adaptive neuro-fuzzy inference system to estimate the fatigue life of AM printed metal parts [START_REF] Zhang | High cycle fatigue life prediction of laser additive manufactured stainless steel: A machine learning approach[END_REF]. 17% of the available data falls into the graphic category, making it an attractive choice after tabular data. Similar to tabular data, graphic data has been used to predict a range of mechanical properties. Digital cameras are a common source of graphic data at this stage [START_REF] Roach | Utilizing computer vision and artificial intelligence algorithms to predict and design the mechanical compression response of direct ink write 3D printed foam replacement structures[END_REF][178,179].
Spectrum and 3D data types are found to have an equal contribution of 11%. Spectrum data from optical signals was used to classify density levels (low, medium, and high) of PBF printed parts in an SVM classifier [START_REF] Zouhri | Optical process monitoring for Laser-Powder Bed Fusion (L-PBF)[END_REF]. 3D data of microstructures is used in a CNN model to predict the effective yield strength of parts manufactured by MAM processes [START_REF] Herriott | Predicting microstructure-dependent mechanical properties in additively manufactured metals with machine-and deep-learning methods[END_REF]. Spectrum and 3D data types are also employed to determine several other properties, as referenced in Table 11.

Public AM databases

This section briefly introduces the major existing public databases containing datasets for AM studies.
• National Institute of Standards and Technology (NIST): The engineering laboratory of NIST offers a database system named the additive manufacturing materials database (AMMD). It is built to provide open data access and material data sharing to the AM community. Their datasets include testing data, machine data, part design data, build part process data, and material data. Though their database is designed for all seven typical AM processes, only PBF data is collected at the current stage [185]. This database can be used for applications such as the prediction of mechanical performance, material selection in AM, and real-time quality monitoring.
• Mendeley Data: Mendeley Data is a cloud-based repository where users can store and search research data. It is hosted by Elsevier. As Elsevier is a publishing company specializing in science and technology, they collect all the uploaded datasets for the published research articles from their flagship journals. Users can also store their datasets by uploading files of any type. By searching for "additive manufacturing" in their search tool and narrowing the data types down to "dataset", 2,020 datasets are found at the current stage. Users can easily download the datasets and export the citations as well [186].
• DOE Data Explorer: DOE Data Explorer is supported by the U.S. Department of Energy and the Office of Scientific and Technical Information. It is a search tool for finding their supported projects and included dataset records. By searching "additive manufacturing", 67 datasets can be found. All the links to download datasets are provided as well [187].
• DATA.GOV: DATA.GOV is a search tool founded by the U.S. government to support finding data and research. Users can narrow down the target data via the search engine. At the time of this writing, 930 datasets can be found under the term 'additive manufacturing'. However, some of them are merely brief introductions to research projects. Users should filter the search and find the potentially useful information and datasets for their research [188].
• DataCite: DataCite is a non-profit organization that provides persistent identifiers (DOIs) for research data and other research output. Their members include data centers, libraries, research universities, and governments around the world. They also offer a service to look for reliable data shared by the research community and reuse the data for other research studies. By searching "additive manufacturing" and filtering the works by "dataset" type, 225 downloadable datasets are found [189].
• IEEEDataPort: IEEEDataPort is developed by IEEE.
This platform provides free uploads of datasets and free downloads of the open-access datasets shared by IEEE members and users. There are over 1,500 datasets, and most of them are in the artificial intelligence, machine learning, computer vision, and image processing areas. By searching "additive manufacturing" in their dataset library, only one dataset is available. However, more datasets are expected in the future [190].

Existing data management approaches in AM

There are countless AM studies in the literature, and they continue to be published every year. This trend has gained momentum in recent years, resulting in an enormous amount of data for ML applications in AM. As a result, the management, storage, and accessibility of this data have become a challenge. Most data-search engines use keywords as the first step to filter the search. After that, the most common filter options are data types, including datasets, documents, images, videos, etc., and sources, including universities around the world, journals, etc. Finally, users must use their knowledge to open the results in order and read the descriptions to determine whether a selected result is suitable for their needs. As most data-search engines include datasets from various research areas, they cannot offer AM-specific filter options. In this case, finding the target data is still difficult for AM researchers. There are also some systematic studies to establish data management strategies specifically for AM. One representative study is from Liu et al. [154]. They proposed a cloud-based digital twin-enabled data management framework for MAM. It contains AM data in different product lifecycle stages including product design, quality measurement, process planning, manufacturing, and post-processing. Data items of each lifecycle stage are listed with subcategories, common measurement methods, and data formats. Lu et al. [191] proposed a similar approach for a collaborative AM data management system. Their data management system aims to establish the correlations between processes, materials, and parts. A web interface is available to store, explore, and download data, as introduced in Section 4.1. In their database, material data, machine data, design data, process data, and test data are collected. Both approaches provide a well-organized data structure to store and manage data. However, neither of them is sufficient for data searching and sharing. They are more appropriate for institutes, industries, or organizations to internally store and manage data. When shared with the public, datasets are expected to be easy to access. In the current versions of these data management systems, it is either hard for external users to understand the system or hard to find suitable filters to search for datasets of interest. The existing effort on this topic is very limited, which motivates further interest and development in AM data management systems.

Limitations and challenges

The above sections presented the existing ML applications in AM with details of their data types, data handling approaches, and applications. It can be seen that the current data in ML-assisted AM studies still has some problems, including the quantity and diversity of datasets, guaranteeing manual label accuracy, the reproducibility and standardization of data-driven research, and the need for a simple, easily accessible, and systematic database. The following section provides a detailed discussion of these limitations and challenges.
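Before turning to those limitations, the record-and-filter organization that the data management approaches above rely on can be illustrated with a short sketch. All field names, example entries, and URLs below are hypothetical; they are not drawn from NIST AMMD, Mendeley Data, or any other repository listed above.

```python
# Minimal sketch of an AM dataset metadata record plus a keyword/attribute filter,
# mirroring the keyword-then-filter search workflow described above.
from dataclasses import dataclass

@dataclass
class AMDatasetRecord:
    title: str
    am_process: str        # e.g., "PBF", "ME", "DED"
    lifecycle_stage: str   # e.g., "design", "process", "post-process", "test"
    data_type: str         # e.g., "tabular", "graphic", "3D", "spectrum"
    labeled: bool
    source_url: str

records = [
    AMDatasetRecord("Layer images with defect labels", "PBF", "process", "graphic", True, "https://example.org/a"),
    AMDatasetRecord("Tensile tests of printed coupons", "ME", "test", "tabular", True, "https://example.org/b"),
    AMDatasetRecord("Acoustic emission during builds", "PBF", "process", "spectrum", False, "https://example.org/c"),
]

def search(records, keyword="", am_process=None, data_type=None, labeled=None):
    """Return records matching a keyword and optional AM-specific attribute filters."""
    hits = []
    for r in records:
        if keyword and keyword.lower() not in r.title.lower():
            continue
        if am_process and r.am_process != am_process:
            continue
        if data_type and r.data_type != data_type:
            continue
        if labeled is not None and r.labeled != labeled:
            continue
        hits.append(r)
    return hits

for r in search(records, am_process="PBF", labeled=True):
    print(r.title, "->", r.source_url)
```

The point of the sketch is that AM-specific attributes (process, lifecycle stage, data type, labeling status) are what general-purpose search engines lack, which is exactly the gap noted above.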
Quantity and diversity of datasets

Diversity and data size are extremely important characteristics of a well-developed dataset and can directly affect the quality of the dataset and the performance of the ML model. Figure 8 shows the frequency of the number of instances used in the research articles reviewed in this paper. Most of the existing research falls into the group of 100-1,000 instances, and only 13 out of 144 research articles include 10,000+ instances in their datasets. Whether a dataset is large enough depends on the application and the complexity of the ML model. However, an evaluation of data size and diversity is absent in most studies, and the datasets for some studies are limited in quantity and diversity. The ideal method to identify the required data size is to generate a learning curve for the model performance on the dataset [192]. The required data size can be obtained when the learning curve reaches its saturation point. More simply, there are some common rules of thumb from the ML community for estimating the required dataset size. These rules generally express the required size as a multiple of certain characteristics of the prediction problem. For example, some researchers indicate that the data size needs to be at least 50 to 1000 times the number of prediction classes [193]. Another rule states that the data size needs to be at least 10 to 100 times the number of features [194,195]. The most common method is to include at least 10 times the number of weights in the network if neural network models are used [196,197]. However, a later study [192] states that the factor of 10 is insufficient, and concludes that the data size needs to be at least 27 to 31 times the number of weights in the network. Even though the required data size also varies across applications, these common rules provide a general idea of how many samples are enough for a given study.

Guarantee of the manual label accuracy

Another limitation of current ML-assisted AM studies is ensuring the accuracy of labels (ground truth) and easing the manual labeling process. At the current stage, most existing studies utilize the supervised learning approach, which requires labeling. The performance of ML models is always restricted by the available data. As introduced in Section 3, the most common ML applications in AM include design-related, process-related, and product-related applications. The label/ground truth can also be categorized into experimental results, computational results, and manual labeling. In terms of label accuracy, experimental results such as mechanical properties and computational results such as thermal distributions obtained from finite element method (FEM) models are more promising. Manual labeling, such as the location of pores, determination of visual defects, or marking of failure areas, is less reliable. Mislabeling errors may occur during the labeling process. Most of the existing studies reviewed in this paper assume manually labeled data to be authentic. No annotation tool has been developed for labeling AM data and ensuring the quality of manually labeled data. Some studies may acquire manual votes from several AM experts. Moreover, the manual labeling process is extremely time-consuming. There are some annotation tools available in the literature for computer vision and natural language processing [198][199][200]. Such annotation tools can help users label maps, attributes, classes, etc.
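The dataset-sizing rules of thumb quoted above can be collected into a rough helper. This is a minimal sketch: the multipliers simply restate the cited ranges, the example numbers are invented, and none of this replaces a proper learning-curve analysis.

```python
# Rough calculator for the dataset-sizing heuristics cited above
# (50-1000x classes, 10-100x features, 10x and 27-31x network weights).
def min_samples(n_classes=None, n_features=None, n_weights=None):
    estimates = {}
    if n_classes is not None:
        estimates["per-class rule (50-1000x)"] = (50 * n_classes, 1000 * n_classes)
    if n_features is not None:
        estimates["per-feature rule (10-100x)"] = (10 * n_features, 100 * n_features)
    if n_weights is not None:
        estimates["classic NN rule (10x weights)"] = (10 * n_weights, 10 * n_weights)
        estimates["stricter NN rule (27-31x weights)"] = (27 * n_weights, 31 * n_weights)
    return estimates

# Example: a 4-class defect classifier with 20 tabular features and a small
# network of roughly 2,500 weights (all numbers are illustrative).
for rule, (lo, hi) in min_samples(n_classes=4, n_features=20, n_weights=2500).items():
    print(f"{rule:35s}: {lo:,} - {hi:,} samples")
```

Applied to the example numbers, the strictest rule already asks for tens of thousands of samples, which helps explain why so few of the reviewed studies (13 out of 144) reach the 10,000+ instance range.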
Returning to annotation tools: for images and videos, for example, such a tool can provide functions to easily mark areas with different shapes for segmentation tasks. Users can even do real-time labeling of the segmentation target and adjust their strategy based on the performance of the ML models. This is much more efficient than traditional trial-and-error methods of applying different filters. Annotation tools can also help convert audio to text automatically. Another advantage of annotation tools is quality control. They can offer functions for users to provide feedback on label accuracy. ML studies in AM can benefit from existing annotation tools, and a dedicated annotation tool for AM could be developed in future work as well.

Reproducibility and standardization of data-driven research in AM

As a major principle of the scientific method, it is important to maintain the reproducibility of research. Reproducibility generally refers to obtaining consistent results when a study is replicated using the same input data, methodology, code, and experimental conditions [201]. Recently, with more and more ML studies conducted in AM applications, reproducibility has become a challenge due to a lack of shared datasets. Less than half of the articles share their datasets unless a personal request is made. Unlike other ML application areas, the AM literature has no standard datasets such as MNIST (handwritten digit database) [202], the IMDB dataset (50k movie reviews), MIMIC (datasets for computational physiology) [203], ShapeNet (3D model repository) [204], etc. Moreover, AM is a relatively complex application for ML. Even for the same AM technology, different machines have varying printing performance. Using the same brand of machine and material is always recommended for fabrication. AM techniques have not yet been fully standardized, which leads to difficulties in building a general database or dataset for AM. AM has various research areas including design, process, and manufacturing. It even encompasses seven different types of technology. It is challenging to build a dataset including all the critical characteristics for AM. However, building a general dataset for a specific application, such as visual defects, porosity, or thermal distribution, for a specific AM technology, such as LPBF or ME, is possible. As summarized in Table 5 to Table 11, the most popular AM techniques are LPBF and ME. A small number of studies focus on DED and VP. With guarantees of diversity, quantity, non-duplication, and accuracy, multiple stand-alone datasets can be combined to establish a standard and rich dataset for use in multiple research studies.

Simple, easy-access, and systematic database for AM

There are still few well-developed AM databases, and there is no well-known or commonly used dataset in AM studies. A small portion of AM datasets is duplicated, which results in wasted effort. Most of them are private and hard to access. The existing databases are either not designed for AM or are more suitable for an organization to manage its internal data. There is no simple data port designed for sharing and accessing AM data publicly. A database for AM is required, and it is expected to be simple, easy to access, and systematic. In this way, datasets from different studies can be collected, researchers can save time on collecting data, and data sharing can encourage connection and collaboration between researchers.
Moreover, some small datasets can be combined to generate a larger and richer dataset, which can be beneficial for all AM researchers. Hence, based on the best understanding of the authors, a potential simple data port is proposed here, ready for data uploading and querying. This data port is expected to be web-based and shared with the public. Everyone is welcome to provide their open data or to download and reuse the data. For each dataset, the donor needs to fill in five required fields and five optional fields. The required fields include AM technique type, raw input data type, application/targets, whether the data is labeled or not, and the zip file for the data. The optional fields include raw output data type, reference source, contact information, machine type, and material type. Machine type and material type include the brand and series information for the selected machines and materials. This information can be viewed in the "more details" panel. The preliminary design for the AM data port is shown in Figure 9. On the left of the page, users have the option to filter the database to what they are looking for based on 'AM technique type', 'applications/target', 'raw data type', and 'labeled or not?'. This simple and informative AM data port aims to increase data sharing in the AM community and to ease data gathering as well. This data port will not recommend any data handling process or ML algorithm to users. Raw datasets are provided, and users have unlimited freedom to process the data. The data is expected to be used in various research studies.

Conclusions and future perspectives

This paper presents a comprehensive review of ML data processing and management for AM research and applications. Based on the reviewed papers published in the last four years, the utilized data handling methods are summarized for four major data types: tabular data, graphic data, 3D data, and spectrum data. The major handling methods include feature extraction, discretization, data processing, feature selection, and feature learning. It has been noticed that ML approaches have been applied to various AM applications. At the current stage, most of the ML applications in AM focus on product characteristics such as printability, porosity, and surface roughness. The existing studies have demonstrated promising performance. Moreover, ML approaches have already been shown to be suitable and valid for investigating the process-structure-property (PSP) linkage in AM and show potential in design for AM. However, there are still some challenges when applying ML in practice, including the quantity and diversity of datasets, the accuracy of manual labeling, the reproducibility and standardization of data-driven research, and the limitations of the existing databases and management systems. These challenges need future investigation and could motivate some potential research directions, including:
1. Most of the existing studies use a single type of data as input to a single ML model. Multiple types of data can be combined to develop a more comprehensive hybrid ML model.
2. A systematic AM database is critical in ML-assisted AM studies, as well as in the investigation of the role of AM in Industry 4.0. However, there are few studies related to AM databases in the literature. A database framework to organize and store AM data, including both structured and unstructured data, can be a potential research direction.
3.
Drawing analogies from the field of AI, knowledge transfer can be an important next step in data-driven AM. Recently, some researchers have applied the idea of transfer learning to the models developed in the same study [205]. Managing data systematically will support data-based transfer learning from existing ML applications to new ML applications in AM. This can be an important research direction.
4. 91.7% of reviewed articles select supervised machine learning as their approach, 9% of them use unsupervised learning, and only 0.7% select reinforcement learning. Unsupervised learning has the advantage of solving problems by learning from the data and classifying it without any labels. It can also be very helpful in finding patterns in data. Reinforcement learning lets the model learn by making and correcting mistakes. It has the potential to solve very complex problems and to build a highly tailored model for a particular problem. More studies on the potential of applying unsupervised learning and reinforcement learning in AM can be conducted in the future.
5. Only a small portion of existing studies focus on the design characteristics of AM. More studies on how ML can help with design for AM can be conducted. For example, ML can help with design idea generation based on the functional needs of the product. ML can also help in generating a design more suitable for AM processes, with consideration of function and cost.
Figure 8 : 8 Figure 8: Frequency of the number of instances used in the reviewed research articles Figure 9 : 9 Figure 9: Preliminary design for the AM data port Table 1 : 1 The abbreviations of AM aspect used in this review Abbreviation Definition AM Additive manufacturing ME Material extrusion PBF Powder bed fusion MJ Material jetting VP Vat photopolymerization DED Directed energy deposition Table 2 : 2 The abbreviations of the most frequent ML terminology used in this review Abbreviation Definition ML Machine learning NN Neural network MLP Multilayer perception ANN Artificial neural network CNN Convolutional neural network RNN Recurrent neural network k-NN k-nearest neighbor SVM Support vector machine SVR Support vector regressor RF Random forest GB Gradient boosting Table 3 : 3 Popularity of ML algorithms overall and by data type. Bold cells highlight ML categories or algorithms that are significantly more popular for one data type (>10 percentage points compared to average). Note that categories and rows do not sum to 100% as several review papers employed multiple techniques for one task. supervised unsupervised shallow ML deep learning basic regression (linear, lasso, ridge) Gaussian process tree/ensemble margin SVR) (SVM, kNN ANN CNN RNN other all 91.7% 9.0% 53.1% 59.3% 11.0% 9.0% 22.8% 20.0% 10.3% 34.5% 21.4% 4.1% 11.7% tabular 95.0% 5.0% 56.7% 58.3% 11.7% 13.3% 26.7% 20.0% 6.7% 48.3% 8.3% 1.7% 11.7% graphics 90.9% 9.1% 49.1% 63.6% 7.3% 5.5% 23.6% 18.2% 12.7% 25.5% 36.4% 7.3% 7.3% 3D 92.0% 8.0% 48.0% 76.0% 12.0% 8.0% 16.0% 16.0% 16.0% 40.0% 32.0% 4.0% 20.0% spectrum 78.6% 21.4% 100.0% 57.1% 28.6% 21.4% 42.9% 57.1% 14 .3% 42.9% 14.3% 7.1% 14.3% For graphics data, Similar to tabular data, solely the popularity of deep learning algorithms differs from the overall usage in the field. CNNs are significantly more frequently seen in publications with graphics data (36.4% compared to 21.4% overall), while ANNs are deployed less (25.5% compared to 34.5%). 3D datasets are less frequently combined with shallow ML models. Most of the analyzed literature for this data type (76%) utilized deep learning algorithms. Both ANNs and CNNs are more common with 3D datasets compared to the field. Table 4 4 is the pivot table indicating, for all four data types, the number of papers that have utilized a data handling technique belonging to the four categories introduced above. Tabular and graphics datasets have been extensively researched and utilized to construct ML models for AM, while 3D and spectrum datasets appear less frequently. Feature extraction techniques have been widely applied to the analysis of all data types, especially for spectrum data. Representations derived from feature extraction methods with domain knowledge are better input features than the original feature sets in many papers. Discretization is only applicable to bitmap and 3D models. Pixelization has been implemented in 84% of the reviewed papers with graphics data, while voxelization only appears in 32% of the reviewed paper with 3D data. Image preprocessing methods have been extensively utilized to improve the quality of graphics datasets. It is suspected by the authors that data preprocessing methods should be more frequently implemented than the observations of Table 4, as techniques such as normalization are generally implemented but barely mentioned. Feature selection techniques have been applied to more than 25% of the tabular and spectrum datasets reviewed. 
Tabular datasets have original features readily available for feature selection.

Table 5: ML applications at the design stage of AM (Design Characteristics)

Table 6: ML applications at the process stage of AM - Process parameters and domains
(Process Characteristics - Process parameters and process domains)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2021 | ME | Tabular data on printing parameters | Feature Standardization | NN | Force in nozzle | 20,000 | [95]
2019 | ME | Spectrum data from acceleration and temperature sensors | Instance Conversion, Attribute transformation and selection | SVM, NB, RF, k-NN | Presence or Absence of interference | 523,000 | [88]
- | ME | Acoustic emission signal | Extract features from signals | Self-organizing map | Process failure | 213 | [66]
- | ME | Acoustic emission from the process and tabular data of G-code | Signal filtering and feature extraction | SVM; LR | Process parameters | 442 | [101]
- | ME | Heterogeneous sensor signals | Convert sensor signals to underdetermined linear system of equations | Online sparse estimation-based classification | Extrusion conditions | 2,000+ | [91]
- | PBF | Microscope image | Segmentation | CNN | Laser polishing conditions | 432 | [39]
- | PBF | Tabular data of material parameters | Feature filtering to eliminate irrelevant or redundant features | ANN | Critical velocity | Not clear | [98]
- | PBF | Tabular data of process parameters | None mentioned | LR, stepwise linear regression, quadratic SVM, GPR, DT | Cooling time | 30 | [92]
- | PBF | Digital image | Pixelization | CNN | Process anomaly detection | Not clear | [100]

Table 7: ML applications at the process stage of AM - Build and toolpath characteristics
(Process Characteristics - Build and toolpath characteristics)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2020 | ME | Tabular data for processing parameters | None mentioned | ANN | Connection status between paths | 400 | [32]
- | PBF | The raw XCT-generated 3D volumes | Sliced to 2D, then cropped and finally denoised | 3D-ResNET | Build orientation | 192 | [103]
- | DED | 3D data from junction geometries | Geometric features extraction | NN | Path length value to avoid material deficit | 63 | [102]

Table 8: ML applications at the process stage of AM - Surrogate modeling of the melt pool
(Process Characteristics - Surrogate modeling of the melt pool)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2020 | ME | Tabular data of part geometry | Extract features from tabular data (relative distances from the cooling surfaces, from the heat sources, and a set of deposition times influencing the thermal behavior) | ANN | Thermal distribution | 11 | [108]
- | PBF | Tabular data of FE results | Extract features from tabular data ([Node, X, Y, Z, Temperature]) | GPR | Thermal distribution | Not clear | [107]
- | PBF | Tabular data for build parameters and microscope image | Pixelization, gray-scale, binary filter, connected component labeling algorithm, extraction of material descriptors, label encoding, normalization, cropping | RF, deep NN, SVR, GB | Creep rate | 512 | [49]
- | PBF | Tabular data of FE results | Extract features from the tool path | RNN | Thermal distribution | 340 | [111]
- | PBF or DED | Tabular data of process, material, and geometry | None mentioned | NN | Temperature, Melt pool dynamics and dimensions, Cooling rates | Not clear | [106]
- | PBF | Tabular data of FE results | Generate a matrix to store the laser scanning pattern from the tabular data | RNN+ANN | Thermal distribution | 100 | [109]
- | PBF | Digital image of melt pool | None mentioned | Polynomial regression | Melt pool area | 20,902 | [38]
- | PBF | Tabular data of simulations | Extract features from tabular data | GPR | Melt pool geometry | 200 | [112]
- | PBF | Digital image from the simulation | Resizing, pixelization, normalization | CNN | Melting conditions | 1,412 | [48]

Table 9: ML applications at the product stage of AM - Macro structure characteristics
(Product Characteristics - Macro Structure Characteristics)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2021 | ME | Digital image of layer | Pixelization | CNN | Over extrusion and under extrusion (macro) | 1,400 | [127]
2021 | ME | Tabular data of process parameters | Randomization | GPR, SVM | Geometric deviation | 288 | [117]
2021 | ME | Digital image of layer | Image preprocessing and filtering | NN, GB, SVM, cluster charting | Visual defects during process (Macro scale) | 6,000 | [120]
2021 | ME | Design file | Generate 3D point cloud based on design file and preprocessing | Bagging of Trees, GB, RF, k-NN, and Linear SVM | Geometrical defect detection | 50 | [130]

Porosity, lack of fusion, micro-cracks, and balling are some of the most frequent defects in AM printed parts' microstructure. Porosity has been a common target of ML models in this category.

Table 10: ML applications at the product stage of AM - Micro structure characteristics
(Product Characteristics - Micro Structure Characteristics)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2021 | ME | 3D data of as-designed and as-built models | Visual extraction of geometric features | NN, k-NN | Flow resistivity, Porosity, Tortuosity, Thermal length, Viscus length, and Permeability | 500 | [64]

Table 11: ML applications at the product stage of AM - Mechanical properties and other characteristics
(Product Characteristics - Mechanical Properties and other characteristics)
Year | AM | Data | Data Handling | ML Algorithm | Application | Instances | Ref
2021 | ME | Graphic data for X-sectional images | Sobel and Canny edge finding algorithm, Hough line finding algorithm | NN | Mechanical compression curve values | 250 | [56]
2021 | ME | Tabular data of 3D printed carbon fiber composites | None mentioned | GPR | Mechanical properties | 30 | [161]
2020 | ME | Tabular data of three material parameters | Normalization | NN | Force displacement curve (FDC) error difference | 20,000 | [175]
03174991
en
[ "math.math-cv", "math.math-ca", "math.math-fa" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-03174991v3/file/KO-FockGen2023-04-29.pdf
S. Konate, M.-A. Orsoni

SAMPLING CONSTANTS IN GENERALIZED FOCK SPACES

Keywords: 2010 Mathematics Subject Classification. 30H20. Dominating sets, reverse Carleson measure, Fock space, sampling

We prove several results related to a Logvinenko-Sereda type theorem on dominating sets for generalized doubling Fock spaces. In particular, we give a precise polynomial dependence of the sampling constant on the relative density parameter γ of the dominating set. Our method is an adaptation of that used in [HKKO21] for the Bergman spaces and is based on a Remez-type inequality and a covering lemma related to doubling measures.

Introduction

Sampling problems are central in signal theory and cover, for instance, sampling sequences and so-called dominating sets, which allow one to recover the norm of a signal -defined by an integration over a given domain -from the integration on a subdomain (precise definitions will be given later). They have been considered in a large variety of situations (see e.g. the survey [START_REF] Fricain | Harmonic analysis, function theory, operator theory, and their applications[END_REF]), including the Fock space and its generalized versions (see [Sei04, MMOC03, OC98, Lin00, JPR87, LZ19]). In this paper we will focus on the second class of problems, i.e. dominating sets. Conditions guaranteeing that a set is dominating were established rather long ago (in the 70's for the Paley-Wiener space and in the 80's for the Bergman and Fock spaces, see e.g. the survey [START_REF] Fricain | Harmonic analysis, function theory, operator theory, and their applications[END_REF] and references therein). More recently, people got interested in estimates of the sampling constants, which give quantitative information on the tradeoff between the cost of the sampling and the precision of the estimates. The major paper in this connection is by Kovrijkine [Kov01], who gave a method to treat this problem in the Paley-Wiener space. Subsequently, his method was adapted to several other spaces (see [HJK20, JS21, HKKO21, GW21, BJPS21]). In this paper, inspired by the methods in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF], we will discuss the case of generalized doubling Fock spaces. For these, the paper [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF] provides a wealth of results that allow us to translate the main steps of [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF] to this new framework.

Notations. As usual, $A \lesssim B$ (respectively $A \gtrsim B$) means that there exists a constant $c > 0$ independent of the relevant variables such that $A \le cB$ (resp. $A \ge cB$). Similarly, $A \asymp B$ stands for $A \lesssim B$ and $A \gtrsim B$.

1.1. Doubling subharmonic functions and generalized Fock spaces. A function $\phi : \mathbb{C} \to \mathbb{R}$ of class $C^2$ is said to be subharmonic if $\Delta\phi \ge 0$, and doubling subharmonic if it is subharmonic and the (non-negative) measure $\mu := \Delta\phi$ is doubling, i.e. there exists a constant $C$ such that for every $z \in \mathbb{C}$ and $r > 0$,
$$\mu(D(z, 2r)) \le C\,\mu(D(z, r)). \tag{1.1}$$
Here $D(z, r)$ denotes the standard open euclidean disk of center $z \in \mathbb{C}$ and radius $r > 0$. The minimal constant satisfying (1.1) is denoted by $C_\mu$ and called the doubling constant for $\mu$. A basic example of a doubling subharmonic function is $\phi(z) = |z|^2$, for which $\mu = \Delta\phi$ is equal to 4 times the Lebesgue measure and $C_\mu = 4$. The reader may consider this simpler case for the time of the introduction.
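For instance, in this model case one checks directly that
$$\Delta |z|^2 = \Delta(x^2 + y^2) = 4, \qquad \mu(D(z,r)) = \int_{D(z,r)} 4\, dA = 4\pi r^2, \qquad \frac{\mu(D(z,2r))}{\mu(D(z,r))} = \frac{16\pi r^2}{4\pi r^2} = 4,$$
so that the doubling inequality (1.1) holds with equality and $C_\mu = 4$, as claimed.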
Doubling subharmonic functions induce a new metric on the complex plane $\mathbb{C}$ which is more adapted to the complex analysis we shall work with. We postpone the advanced results on doubling measures to Section 2 but let us introduce some basic objects related to it. First, we can associate with $\phi$ as above a function $\rho : \mathbb{C} \to \mathbb{R}_+$ such that $\mu(D(z, \rho(z))) = 1$. To get an idea, assuming $\phi$ is suitably regularized, we have $\Delta\phi \asymp \rho^{-2}$ (see [MMOC03, Theorem 14 and Remark 5]). Next, denote $D_r(z) := D(z, r\rho(z))$ the disk adapted to the new metric and write $D(z) := D_1(z)$ the disk with unit mass for the measure $\mu$. Finally, with these definitions in mind we can introduce the following natural density. A measurable set $E$ is $(\gamma, r)$-dense (with respect to the above metric) if for every $z \in \mathbb{C}$
$$\frac{|E \cap D_r(z)|}{|D_r(z)|} \ge \gamma.$$
Here $|F|$ denotes the planar Lebesgue measure of a measurable set $F$. We will just say that the set is relatively dense if there is some $\gamma > 0$ and some $r > 0$ such that the set is $(\gamma, r)$-dense. We shall see in the next subsection that relative density is the right notion to characterize dominating sets. Let $dA$ be the planar Lebesgue measure on $\mathbb{C}$. For a doubling subharmonic function $\phi : \mathbb{C} \to \mathbb{R}$ and $1 \le p < +\infty$, we will denote by $L^p_\phi(F)$ the Lebesgue space on a measurable set $F \subset \mathbb{C}$ with respect to the measure $e^{-p\phi}\, dA$. We will also use the notation
$$\|f\|^p_{L^p_\phi(F)} := \int_F |f|^p e^{-p\phi}\, dA.$$
Finally, we define the doubling Fock space by
$$\mathcal{F}^p_\phi = \Big\{ f \in \mathrm{Hol}(\mathbb{C}) : \|f\|^p_{L^p_\phi(\mathbb{C})} := \int_{\mathbb{C}} |f|^p e^{-p\phi}\, dA < +\infty \Big\}.$$
These spaces appear naturally in the study of the Cauchy-Riemann equation (see [START_REF] Christ | On the ∂ equation in weighted L 2 norms in C 1[END_REF][START_REF] Constantin | Some spectral properties of the canonical solution operator to ∂ on weighted Fock spaces[END_REF]) and are well studied objects (see e.g. [START_REF] Oliver | Toeplitz operators on doubling Fock spaces[END_REF] for Toeplitz operators on these spaces). When $\phi(z) = |z|^2$, $\mathcal{F}^p_\phi$ is the classical Fock space (see the textbook [START_REF] Zhu | Analysis on Fock spaces[END_REF] for more details).

1.2. Dominating sets and sampling constants. A measurable set $E \subset \mathbb{C}$ will be called dominating for $\mathcal{F}^p_\phi$ if there exists $C > 0$ such that
$$\int_E |f|^p e^{-p\phi}\, dA \ge C^p \int_{\mathbb{C}} |f|^p e^{-p\phi}\, dA, \qquad \forall f \in \mathcal{F}^p_\phi. \tag{1.2}$$
The biggest constant $C$ appearing in (1.2) is called the sampling constant. The notion of dominating sets can be defined for several spaces of analytic functions and their characterization has been at the center of intensive research starting with a famous result of Panejah [START_REF] Panejah | On some problems in harmonic analysis[END_REF][START_REF] Panejah | Certain inequalities for functions of exponential type and a priori estimates for general differential operators[END_REF], and Logvinenko and Sereda [START_REF] Logvinenko | Equivalent norms in spaces of entire functions of exponential type[END_REF] for the Paley-Wiener space, which consists of all functions in $L^2$ whose Fourier transform is supported on a fixed compact interval. For corresponding results in the Bergman space we refer to [START_REF] Luecking | Inequalities on Bergman spaces[END_REF][START_REF] Luecking | Forward and reverse Carleson inequalities for functions in Bergman spaces and their derivatives[END_REF][START_REF] Luecking | Dominating measures for spaces of analytic functions[END_REF][START_REF] Green | Dominating Sets in Bergman Spaces on Strongly Pseudoconvex Domains[END_REF].
As for Fock spaces, this has been done by Jansen, Peetre and Rochberg in [START_REF] Janson | Hankel forms and the Fock space[END_REF] for the classical case $\varphi(z) = |z|^2$ and by Lou and Zhuo in [LZ19, Theorem A] for generalized doubling Fock spaces (see also [START_REF] Ortega-Cerdà | Sampling measures[END_REF] for a characterization of general sampling measures). It turns out that a set $E$ is dominating for the doubling Fock spaces if and only if it is relatively dense. This characterization also holds for the other spaces of analytic functions mentioned above with an adapted notion of relative density. Once the dominating sets have been characterized in terms of relative density, a second question of interest is to know whether the sampling constants can be estimated in terms of the density parameters $(\gamma, r)$. Kovrijkine answered this question in [START_REF] Kovrijkine | Some results related to the Logvinenko-Sereda theorem[END_REF] giving a sharp estimate on the sampling constants for the Paley-Wiener spaces, with a polynomial dependence on $\gamma$ (see also [START_REF] Reznikov | Sharp constants in the Paneyah-Logvinenko-Sereda theorem[END_REF] for sharp constants in some particular geometric settings). Such a dependence is used for example in control theory (see [START_REF] Egidi | Sharp geometric condition for null-controllability of the heat equation on R d and consistent estimates on the control cost[END_REF][START_REF] Egidi | An abstract Logvinenko-Sereda type theorem for spectral subspaces[END_REF][START_REF] Beauchard | Spectral estimates for finite combinations of Hermite functions and null-controllability of hypoelliptic quadratic equations[END_REF] and the references therein). Kovrijkine's method involves Remez-type and Bernstein inequalities. It has been used in several spaces in which the Bernstein inequality holds (see for instance [START_REF] Hartmann | Quantitative estimates of sampling constants in model spaces[END_REF] for the model space, [START_REF] Beauchard | Spectral estimates for finite combinations of Hermite functions and null-controllability of hypoelliptic quadratic equations[END_REF] for spaces spanned by Hermite functions, or [START_REF] Egidi | An abstract Logvinenko-Sereda type theorem for spectral subspaces[END_REF] for general spectral subspaces), and also to settings where a Bernstein inequality is not at hand (e.g. Fock and polyanalytic Fock spaces [START_REF] Jaming | Planar sampling sets for the short-time Fourier transform[END_REF] or Bergman spaces [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF][START_REF] Green | Dominating Sets in Bergman Spaces on Strongly Pseudoconvex Domains[END_REF]). In [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF], the authors developed a machinery based on a covering argument to circumvent Bernstein's inequality, and the aim of this paper is to adapt this method to doubling Fock spaces.

1.3. Main results. It can be deduced from Lou and Zhuo's result [LZ19, Theorem A] that if $E$ is $(\gamma, r)$-dense then there exist some constants $0 < \varepsilon_0 < 1$ and $c > 0$ depending only on $r$ such that inequality (1.2) holds for every $C^p \le c\,\gamma\,\varepsilon_0^{2(p+2)/\gamma}$. This last estimate gives an exponential dependence on $\gamma$. In the spirit of the work [START_REF] Kovrijkine | Some results related to the Logvinenko-Sereda theorem[END_REF], we improve it by providing a polynomial estimate in $\gamma$ with a power depending suitably on $r$.

Theorem 1.1.
Let φ be a doubling subharmonic function and 1 ≤ p < +∞. Given r > 1, there exists L such that for every measurable set E ⊂ C which is (γ, r)-dense, we have f L p φ (E) ≥ γ c L f L p φ (C) (1.3) for every f ∈ F p φ . Here, the constants c and L depend on r, and for L we can choose L r log 2 (Cµ) + 1 p (1 + log(r)) where C µ is the doubling constant (see (1.1)) and the implicit constant depends only on the space (i.e. only on φ). Remark 1.2. When φ(z) = |z| 2 , we have log 2 (C µ ) = 2 and we get the same result as that given in [JS21, Theorem 4.6] for the classical Fock space with an explicit dependence on p in addition. Observe that we are mainly interested in the case when r is big, for instance r ≥ 1 (in case E is (γ, r)-dense for some r < 1 we can also show that E is ( γ, 1)-dense for some γ γ where underlying constants are universal). The proof of Theorem 1.1 follows the scheme presented in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF]. We will recall the necessary results from that paper. The main new ingredients come from [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF] and concern a finite overlapping property and a lemma allowing us to express the subharmonic weight locally as (the real part of) a holomorphic function. We should mention that in [LZ19, Theorem 7], in the course of proving that the relative density is a necessary condition for domination, it is shown that γ C p . Hence, we cannot expect better than a polynomial dependence in γ of the sampling constant C. In this sense, our result is optimal. As noticed in a remark in [START_REF] Luecking | Inequalities on Bergman spaces[END_REF]p.11] and after Theorem 2 in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF] for the Bergman space, there is no reason a priori why a holomorphic function for which the integral E |f | p e -pφ dA is bounded for a relative dense set E should be in F p φ . Outside the class F p φ relative density is in general not necessary for domination. For this reason, we always assume that we test on functions f ∈ F p φ . As a direct consequence of Theorem 1.1, we obtain a bound for the norm of the inverse of a Toeplitz operator T ϕ . We remind that for any bounded measurable function ϕ, the Toeplitz operator T ϕ is defined on F 2 φ by T ϕ f = P(ϕf ) where P denotes the orthogonal projection from L 2 φ (C) onto F 2 φ . As remarked in [LZ19, Theorem B], for a non-negative function ϕ, T ϕ is invertible if and only if E s = {z ∈ C : ϕ(z) > s} is a dominating set for some s > 0. Tracking the constants we obtain Corollary 1.3. Let ϕ be a non-negative bounded measurable function. The operator T ϕ is invertible if and only if there exists s > 0 such that E s = {z ∈ C : ϕ(z) > s} is (γ, r)-dense for some γ > 0 and r > 0. In this case, we have ||T -1 ϕ || ≤ ϕ -1 ∞ 1 -1 - s ϕ ∞ 2 γ c 2L . Notice that the right-hand side behaves as γ -2L as γ → 0. For the sake of completeness, we will give a proof of the reverse implication at the end of Section 4. The paper is organized as follows. In Section 2 we recall several results from [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF] concerning doubling measures and subharmonic functions. In Section 3, we introduce planar Remez-type inequalities which will be a key ingredient in the proof of Theorem 1.1. Finally, we prove the covering lemma and Theorem 1.1 in Section 4 and deduce Corollary 1.3. 
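Two elementary consequences of the statements above may help to fix the orders of magnitude; this is only a sketch, using nothing beyond the displayed formulas. First, for the classical Fock space φ(z) = |z|² one has log₂(C_µ) = 2 (Remark 1.2), so the exponent of Theorem 1.1 may be taken of order
\[
L \;\asymp\; r^{2} + \frac{1}{p}\bigl(1+\log r\bigr).
\]
Second, in Corollary 1.3 set x = (s/‖ϕ‖_∞)²(γ/c)^{2L}; when the bound is non-trivial one has 0 < x ≤ 1, and since 1 - \sqrt{1-x} \ge x/2 on that range,
\[
\|T_{\varphi}^{-1}\| \;\le\; \frac{\|\varphi\|_{\infty}^{-1}}{1-\sqrt{1-x}} \;\le\; \frac{2\,\|\varphi\|_{\infty}}{s^{2}}\Bigl(\frac{c}{\gamma}\Bigr)^{2L},
\]
which makes the γ^{-2L} behaviour noted after the corollary explicit.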
Reminders on doubling measures A non-negative doubling measure µ (see (1.1) for the definition) determines a metric on C to which the usual notions have to be adapted. In this section, we recall several geometric measure theoretical tools in connection with doubling measures that we will need later and which come essentially from the paper [MMOC03, Chapter 2]. Actually, the results of the present section can be translated in terms of the distance induced by the metric ρ(z) -2 dz ⊗ dz but we will not exploit this point of view here. We start with a standard geometric estimate whose main part is due to [Chr91, Lemma 2.1]. Lemma 2.1. [MMOC03, Lemma 1] Let µ be a doubling measure on C. For any disks D(z, r) and D(z , r ) such that r > r and z ∈ D(z, r), we have 1 2C µ r r κ ≤ µ(D(z, r)) µ(D(z , r )) ≤ C 2 µ r r log 2 (Cµ) . where κ = 1 C 2 µ +2 . Here x denotes the smallest integer bigger or equal to x and correspondingly, x is the biggest integer less or equal to x. The paper [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF] refers to [START_REF] Christ | On the ∂ equation in weighted L 2 norms in C 1[END_REF] for the proof of this result. We mention that this latter paper only contains the left hand side estimate. For the convenience of the reader and in order to better understand the constants, we provide a detailed proof completing Christ's argument. Proof. Let us start with the right hand side inequality. By the triangular inequality, we have D(z, r) ⊂ D(z , 2r). Now, note that 2r = 2 log 2 ( r r )+1 r ≤ 2 log 2 ( r r ) +1 r . Hence iterating the doubling inequality (1.1), we get µ(D(z, r)) ≤ µ(D(z , 2r)) ≤ C log 2 ( r r ) +1 µ µ(D(z , r )) ≤ C 2 µ r r log 2 (Cµ) µ(D(z , r )). As mentioned above, the left hand side inequality is given in [Chr91, Lemma 2.1]. We reproduce the proof here making the constant κ more precise. Since z ∈ D(z, r), we have D(z , r) ⊂ D(z, 2r) and so µ(D(z, r)) µ(D(z , r )) ≥ C -1 µ µ(D(z, 2r)) µ(D(z , r )) ≥ C -1 µ µ(D(z , r)) µ(D(z , r )) . (2.1) Hence it suffices to prove that for any z ∈ C, µ(D(z , r)) µ(D(z , r )) ≥ 1 2 r r κ whenever r > r . Let k ≥ 3 be an integer to be fixed later. First assume that r r = 2 k . Then we can construct pairwise disjoint disks D 1 , D 2 , . . . D k-2 such that D j has radius 2 j r , D(z , r ) ⊂ 3D j and D j ⊂ D(z , r). A way to do that is to choose the disks D j centered along a radius of D(z , r) and ordered in such a way that the radius increases away from z (see Figure 1). Now, since µ(D(z , r )) ≤ µ(3D j ) ≤ C 2 µ µ(D j ), we get µ(D(z , r)) ≥ k-2 j=1 µ(D j ) + µ(D(z , r )) ≥ [(k -2)C -2 µ + 1]µ(D(z , r )). Hence, fixing k to be the smallest integer such that (k -2)C -2 µ + 1 ≥ 2 i.e k = C 2 µ + 2 , we obtain µ(D(z , r)) ≥ 2µ(D(z , r )) whenever r r = 2 k . Let us treat the general case r > r now. If r r ≥ 2 k , we reduce the problem to the previous setting decomposing D(z , r) into a chain of disks D(z , r ) = D(z , r 0 ) ⊂ D(z , r 1 ) ⊂ • • • ⊂ D(z , r m ) ⊂ D(z , r) where r i = 2 k r i-1 for 1 ≤ i ≤ m, and m = log 2 (r/r ) k . Hence we can iterate the process and get for every z ∈ C, µ(D(z , r)) ≥ µ(D(z , r m )) ≥ 2µ(D(z , r m-1 )) ≥ . . . ≥ 2 m µ(D(z , r )) = 2 log 2 ( r r )/k µ(D(z , r )) ≥ 1 2 r r 1/k µ(D(z , r )). Combined with (2.1), this leads to the left hand side inequality of Lemma 2.1 with κ = 1 k = 1 C 2 µ + 2 . If r r < 2 k , then it is clear that 1 2 r r κ < 1 ≤ µ(D(z ,r)) µ(D(z ,r )) since r > r . 
Again, combined with (2.1), this yields the expected constant in the left hand side inequality. Remark 2.2. If instead of z ∈ D(z, r), we have D(z, r) ∩ D(z , r ) = ∅ as in [MMOC03, Lemma 1], we get the same inequalities with C µ replaced by C 2 µ in the left hand inequality and C 2 µ replaced by C 3 µ in the right hand inequality. Also, when z = z , we can remove C µ from the left hand side and replace C 2 µ by C µ in the right hand side. Remark 2.3. In the classical Fock space, µ is the Lebesgue measure (up to a multiplicative constant) and so C µ = 4. Note that in this case, we obtain an equality with the right hand side and the factor C 2 µ disappears. This is due to the invariance by translation and the homogeneity of Lebesgue measure. As a direct consequence of Lemma 2.1 and Remark 2.2, picking z = z, and replacing r by rρ(z) and r by ρ(z), we get the following useful estimate: for all z ∈ C and r > 1, 1 2 r κ ≤ µ(D r (z)) ≤ C µ r log 2 (Cµ) . (2.2) We are now looking for an upper bound and a lower bound for ρ(z) ρ(w) whenever w ∈ D r (z). Lemma 2.4. For every z ∈ C, we have ∀r > 0, ∀w ∈ D r (z), ρ(z) ρ(w) ≥ 1 1 + r . (2.3) and ∀r > 1, ∀w ∈ D r (z), ρ(z) ρ(w) max r log 2 (Cµ) κ -1 , 1 = r max log 2 (Cµ) κ -1, 0 . (2.4) Proof. A classical fact is that ρ is a 1-Lipschitz function (see [OP16, equation 2.4] for a simple proof): |ρ(z) -ρ(w)| ≤ |z -w|, ∀z, w ∈ C. Hence, for every z ∈ C, we get inequality (2.3) by the triangular inequality. To obtain the upper bound, we use both inequalities of Lemma 2.1 distinguishing the cases rρ(z) ≥ ρ(w) and rρ(z) < ρ(w). We get for every w ∈ D r (z), rρ(z) ρ(w) max µ(D r (z)) 1/κ , µ(D r (z)) 1/ log 2 (Cµ) . Therefore inequality (2.2) implies for r > 1 rρ(z) ρ(w) max r log 2 (Cµ) κ , r , and hence (2.4) follows. Notice that for C µ large enough, we have log 2 (Cµ) κ -1 ≥ 0, which means that the maximum in the last inequality (2.4) is equal to r log 2 (Cµ) κ -1 . This holds exactly when C µ ≥ 4 √ 2. In the proof of our main theorem, we will need a particular covering of the complex plane. Let us explain how we can construct it. We say that a sequence (a n ) n∈N is ρ-separated if there exists δ > 0 such that |a i -a j | ≥ δ max (ρ(a i ), ρ(a j )), ∀i = j. This means that the disks D δ (a n ) are pairwise disjoint. We will cover C by disks satisfying a finite overlapping property and such that the sequence formed by their centers is ρseparated. In [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF], the authors construct a decomposition of C into rectangles R k : C = k R k , and two such rectangles can intersect at most along sides. For these rectangles, so-called quasi-squares, there exists a constant e > 1 depending only on C µ such that the ratio of sides of every R k lies in the interval [1/e, e]. Denoting by a k the center of R k , Theorem 8(c) of [START_REF] Marco | Interpolating and sampling sequences for entire functions[END_REF] claims in particular that there is r 0 ≥ 1, such that for every k, ρ(a k ) r 0 ≤ diam R k ≤ r 0 ρ(a k ). In other words D 1/(2Cr 0 ) (a k ) ⊂ R k ⊂ D r 0 /2 (a k ) (2.5) with C = √ 1 + e 2 . As a consequence we get the required covering C = D r 0 /2 (a k ) and the sequence (a k ) is ρ-separated (take δ = 1/(2Cr 0 )). We end this section by focusing on the particular case of a doubling measure µ given by the Laplacian ∆φ of a subharmonic function φ which is the case of interest in this paper. We will need to control the value of φ in a disk by the value at its center. 
For this, we can use [MMOC03, Lemma 13] that we state as follows: for every σ > 0, there exists A = A(σ) > 0 such that for all k ∈ N, sup z∈D σ (a k ) |φ(z) -φ(a k ) -h a k (z)| ≤ A(σ) (2.6) where h a k is a harmonic function in D σ (a k ) with h a k (a k ) = 0. Moreover, in view of the proof of [MMOC03, Lemma 13] we have A(σ) sup k∈N µ (D σ (a k )) σ log 2 (Cµ) (2.7) where the inequalities come from [MMOC03, Lemma 5(a)] and the estimate (2.2). This last result will allow us to translate the subharmonic weight locally into a holomorphic function. Remez-type inequalities Let us introduce a central result of the paper [START_REF] Andrievskii | Remez-type inequalities in the complex plane[END_REF]. Let G be a (bounded) domain in C and let 0 < s < |G| (Lebesgue measure of G). Denoting Pol n the space of complex polynomials of degree at most n ∈ N, we introduce the set P n (G, s) = {p ∈ Pol n : |{z ∈ G : |p(z)| ≤ 1}| ≥ s}. Next, let R n (z, s) = sup p∈Pn(G,s) |p(z)|. This expression gives the biggest possible value at z of a polynomial p of degree at most n and being at most 1 on a set of measure at least s. In particular Theorem 1 from [START_REF] Andrievskii | Remez-type inequalities in the complex plane[END_REF] claims that for z ∈ ∂G, we have R n (z, s) ≤ c s n (3.1) where the constant c depends only on the (square of the) diameter of G. This result corresponds to a generalization to the two-dimensional case of the Remez inequality which is usually given in dimension 1. In what follows we will essentially consider G to be a disk or a rectangle. By the maximum modulus principle, the above constant gives an upper estimate on G for an arbitrary polynomial of degree at most n which is bounded by one on a set of measure at least s. Obviously, if this set is small (s close to 0), i.e. p is controlled by 1 on a small set, then the estimate has to get worse. Remark 3.1. Let us make another observation. If c is the constant in (3.1) associated with the unit disk G = D = D(0, 1), then a simple argument based on homothety shows that the corresponding constant for an arbitrary disk D(0, r) is cr 2 (considering D(0, r) as underlying domain, the constant c appearing in [AR07, Theorem 1] satisfies c > 2m 2 (D(0, r))). So, in the sequel we will use the estimate R n (z, s) ≤ cr 2 s n , (3.2) where c does not depend on r. Up to a translation, the following counterpart of Kovrijkine's result for the planar case has been given in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF]: Lemma 3.2. Let 0 < r < R be fixed. There exists a constant η > 0 such that the following holds. Let w ∈ C, let E ⊂ D r (w) be a planar measurable set of positive measure and let z 0 ∈ D r (w). For every φ analytic in D R (w), if |φ(z 0 )| ≥ 1 and M = sup z∈D R (w) |φ(z)| then sup z∈D r (w) |φ(z)| ≤ cr 2 ρ(w) 2 |E| η log M sup z∈E |φ(z)|, where c does not depend on r, and η ≤ c R 4 (R -r) 4 log R R -r for an absolute constant c . The corresponding case for p-norms is deduced exactly as in Kovrijkine's work. Corollary 3.3. Let 0 < r < R be fixed. There exists a constant η > 0 such that the following holds. Let w ∈ C, let E ⊂ D r (w) be a planar measurable set of positive measure and let z 0 ∈ D r (w). For every φ analytic in D R (w), if |φ(z 0 )| ≥ 1 and M = sup z∈D R (w) |φ(z)| then for p ∈ [1, +∞) we have φ L p (D r (w)) ≤ cr 2 ρ(w) 2 |E| η log M + 1 p φ L p (E) . The estimates on η are the same as in the lemma. The constant c does not depend on r. 
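Let us close this section by spelling out the homothety argument behind (3.2) (Remark 3.1); only the elementary scaling step is added here. Let p ∈ Pol_n be such that |{z ∈ D(0,r) : |p(z)| ≤ 1}| ≥ s, and set q(w) := p(rw) for w in the unit disk. Since planar Lebesgue measure scales by r² under z = rw,
\[
\bigl|\{w\in\mathbb{D} : |q(w)|\le 1\}\bigr| \;=\; \frac{1}{r^{2}}\,\bigl|\{z\in D(0,r) : |p(z)|\le 1\}\bigr| \;\ge\; \frac{s}{r^{2}},
\]
so that applying (3.1) on the unit disk to q, with s replaced by s/r², together with the maximum modulus principle,
\[
\sup_{D(0,r)}|p| \;=\; \sup_{\mathbb{D}}|q| \;\le\; \Bigl(\frac{c}{s/r^{2}}\Bigr)^{n} \;=\; \Bigl(\frac{c\,r^{2}}{s}\Bigr)^{n},
\]
which is exactly (3.2), with a constant c that does not depend on r.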
Proof of Theorem 1.1 The key ingredient in the proof of Theorem 1.1 and a main contribution of this paper is the existence and the estimate of the covering constant. Indeed, we have shown in Section 2 that the disks (D s (a k )) k∈N cover the complex plane C for s > r 0 /2. Now, we shall prove that there exists a covering constant N depending on the covering radius s ≥ r 0 /2. This means that the number of overlapping disks cannot exceed N . We denote by χ F the characteristic function of a measurable set F in D. We are now in a position to prove the covering lemma. Lemma 4.1. For s > 1, there exists a constant N := N (s) such that k χ D s (a k ) ≤ N. Moreover, there exists some universal constant c ov := c ov (φ) > 0 such that N ≤ c ov s α where α = min 2 max 1 + log 2 (C µ ) κ , 2 ; max log 2 (C µ ) κ , 1 log 2 (C µ ) , C µ is the doubling constant and κ is given in Lemma 2.1. Obviously, the constant N is at least equal to 1 and since N is non-decreasing in s, we are only interested in the behavior when s goes to ∞. We also remind the reader that in order to cover the complex plane C, we need to require that s ≥ r 0 /2. Proof of Lemma 4.1. The proof is inspired by that of [AS11, Lemma 2.5]. Let s > 1 and z ∈ C. Denote Γ(z) = {k ∈ N, z ∈ D s (a k )}. Our goal is to estimate #Γ(z). Write θ = log 2 (Cµ) κ . Recall that for z ∈ D s (a k ), we have by (2.3) and (2.4) ρ(z) 1 + s ≤ ρ(a k ) ρ(z) max(s θ-1 , 1). (4.1) Now, pick δ > 0 such that the disks D δ (a n ) are disjoint (recall that (a n ) n∈N is ρ-separated). By the triangular inequality, there exists a constant C > 0 such that D δ (a k ) ⊂ D C max(s θ ,s) (z) whenever z ∈ D s (a k ) (see Figure 2). Indeed, if w ∈ D δ (a k ) then |w -z| ≤ |w -a k | + |z -a k | ≤ δρ(a k ) + sρ(a k ) (δ + s)ρ(z) max(s θ-1 , 1) ρ(z) max(s θ , s), where we have used that δ > 0 is fixed and s > 1. Hence, since D δ (a k ) are disjoint and with (4.1) in mind, it follows #Γ(z) ≤ # k ∈ N, D δ (a k ) ⊂ D C max(s θ ,s) (z) ≤ D C max(s θ ,s) (z) inf k∈Γ(z) |D δ (a k )| = sup k∈Γ(z) C max(s θ , s)ρ(z) 2 (δρ(a k )) 2 max(s θ , s)(1 + s) 2 s 2 max(1+θ, 2) where we have used that max(s θ , s) = s max(θ,1) for s > 1. Moreover, notice that µ D δ (a k ) 1 since 0 < δ < 1 is fixed. Hence an analogous computation, replacing the Lebesgue measure by the measure µ and using the right hand side of inequality (2.2), leads to #Γ(z) ≤ # k ∈ N, D δ (a k ) ⊂ D C max(s θ ,s) (z) ≤ µ D C max(s θ ,s) (z) inf k∈Γ(z) µ [D δ (a k )] max(s θ , s) log 2 (Cµ) s max(θ, 1) log 2 (Cµ) . Finally, taking the minimum between the two estimates, we get for s > 1 #Γ(z) min s 2 max(1+θ, 2) , s max(θ, 1) log 2 (Cµ) = s min[2 max(1+θ, 2), max(θ, 1) log 2 (Cµ)] . Since the estimates are uniform in z, the lemma follows. As in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF] for the Bergman space, we now introduce good disks. Fix r 0 /2 ≤ s < t where r 0 /2 is the radius such that the disks D r 0 /2 (a k ) cover the complex plane. For K > 1 the set I K-good f = {k : f L p φ (D t (a k )) ≤ K f L p φ (D s (a k )) } will be called the set of K-good disks for (t, s) (in order to keep notation light we will not include s and t as indices). This set depends on f . The following proposition has been shown in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF] for the Bergman space, but its proof, implying essentially the finite overlapping property, is exactly the same. Proposition 4.2. Let r 0 /2 ≤ s < t. 
For every constant c ∈ (0, 1), there exists K such that for every f ∈ F p φ we have k∈I K-good f f p L p φ (D s (a k )) ≥ c f p L p φ (C) . One can pick K p = N (t)/(1 -c) where N (t) corresponds to the overlapping constant from Lemma 4.1 for the radius t. We are now in a position to prove the theorem. Proof of Theorem 1.1. Take s = max(r, r 0 /2) and t = 4s. As noted in [START_REF] Hartmann | Dominating sets in Bergman spaces and sampling constants[END_REF], if E is (γ, r)-dense then E is ( γ, s)-dense, where γ = cγ and c is a multiplicative constant. Hence we can assume that E is (γ, s)-dense instead of (γ, r)-dense. Let h a k be the harmonic function introduced in (2.6) for σ = t. Since h a k is harmonic on D t (a k ), there exists a function H a k holomorphic on D t (a k ) with real part h a k . Now, given f with f L p φ (C) = 1, let g = f e -(Ha k +φ(a k )) for k ∈ I K-good f , and set h = c 0 g, c 0 = πs 2 ρ(a k ) 2 D s (a k ) |g| p dA 1/p . Clearly there is z 0 ∈ D s (a k ) with |h(z 0 )| ≥ 1. Set R = 2s. We have to estimate the maximum modulus of h on D R (a k ) in terms of a local integral of h. To that purpose, we use that h ∈ A p (D t (a k )). Indeed, applying (2.6) we have D t (a k ) |h| p dA = πs 2 ρ(a k ) 2 D s (a k ) |g| p dA D t (a k ) |g| p dA = πs 2 ρ(a k ) 2 D s (a k ) |f | p e -p(ha k +φ(a k )) dA D t (a k ) |f | p e -p(ha k +φ(a k )) dA ≤ πs 2 ρ(a k ) 2 e 2pA(t) D s (a k ) |f | p e -pφ(z) dA(z) D t (a k ) |f | p e -pφ(z) dA(z). ≤ πs 2 ρ(a k ) 2 e 2pA(t) K p where the last inequality comes from the fact that k is K-good for (s, t). Therefore, taking into account this last estimate, the subharmonicity of |h| p yields where C , C depend only on the space. In view of (2.7), we get log M s log 2 (Cµ) + 1 p (1 + log(s)). M p := max z∈D R (a k ) |h(z)| p ≤ 1 πs 2 ρ(a k ) 2 D t (a k ) |h| p dA ≤ c s K p Finally, noticing that r s = max(r, r 0 /2) for r > 1, this implies log M r log 2 (Cµ) + 1 p (1 + log(r)). Finally, we give a short proof of the reverse implication in Corollary 1.3 which is a straightforward adaptation to the doubling Fock space of Luecking's proof of [Lue81, Corollary 3]. Proof. First, assume that ϕ ≤ 1. Therefore s ≤ 1. Since E is (γ, r)-dense, it is a dominating set. So C ϕ 2 |f | 2 e -2φ dA ≥ s 2 E |f | 2 e -2φ dA ≥ s 2 C 2 f 2 L 2 φ (C) , where C is the sampling constant. Then (I -T ϕ )f 2 L 2 φ (C) = T 1-ϕ f 2 L 2 φ (C) = P[(1 -ϕ)f ] 2 L 2 φ (C) ≤ (1 -ϕ)f 2 L 2 φ (C) ≤ C (1 -ϕ 2 )|f | 2 e -2φ dA ≤ (1 -s 2 C 2 ) f 2 L 2 φ (C) Hence I -T ϕ < 1. So T ϕ is invertible and T -1 ϕ ≤ 1 1 -I -T ϕ ≤ 1 1 - √ 1 -s 2 C 2 . Using Theorem 1.1, we obtain ||T -1 ϕ || ≤ 1 1 -1 -s 2 γ c 2L . Finally, for a general ϕ, let ψ = ϕ ϕ ∞ . Then ψ ≤ 1, E = {ψ ≥ s ϕ ∞ = s } and T ϕ = ϕ ∞ T ψ . So, applying the previous discussion to ψ, it follows T -1 ϕ = T -1 ψ ϕ -1 ∞ ≤ ϕ -1 ∞ 1 -1 -(s ) 2 γ c 2L = ϕ -1 ∞ 1 -1 - s ϕ ∞ 2 γ c 2L . Figure 1 . 1 Figure 1. The disks D j , D(z , r ) and D(z , r). Figure 2 . 2 Figure 2. The disks D s (a k ), D C max(s θ ,s) (z) and D δ (a k ). where c s = e 2pA(4s) since t = 4s. Now, setting E = E ∩ D s (a k ) we get using Corollary 3.3 applied to h:D s (a k ) |h(z)| p dA(z) ≤ cs 2 ρ(a k ) 2 | E| pη log M +1 E |h(z)| p dA(z)Again, by homogeneity we can replace in the above inequality h by g. Note also that πs 2 ρ(a k ) 2 /| E| is controlled by 1/γ. 
This yieldsD s (a k ) |f | p e -pφ(z) dA(z) ≤ e pA(t) D s (a k ) |g(z)| p dA(z) ≤ e pA(t) cs 2 ρ(a k ) 2 | E| pη log M +1 E |g(z)| p dA(z) ≤ e pA(t) c 1 γ pη log M +1 E |g(z)| p dA(z) ≤ e 2pA(t) c 1 γ pη log M +1 E |f (z)| p e -pφ(z) dA(z),where c 1 is an absolute constant. Summing over all K-good k, and using Lemma 4.1 and Proposition 4.2 we obtain the required resultc f L p φ (C) c 1 γ η log M +1/p f L p φ (E)where in view of Lemma 3.2 η ≤ c × 2 4 log 2 and log(M ) ≤ log(c 1/p s K)Here c comes from Proposition 4.2, and c ov and α from Lemma 4.1. In particular, fixing c ∈ (0, 1) we obtain log M ≤ 2A(4s) + 1 p (C + C log(s)) Acknowledgment. This work is part of the first named author's thesis supervised by Andreas Hartmann. The authors would like to thank him for many helpful discussions. The research of the first author is partially supported by Banque Mondiale via the Projet d'Appui au Développement de l'Enseignement Supérieur du Mali. The research of the second author is partially supported by the project ANR-18-CE40-0035 and by the Joint French-Russian Research Project PRC CNRS/RFBR 2017-2019. University of Segou, Mali E-mail address: [email protected] Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400, Talence, France E-mail address: [email protected]
04097027
en
[ "shs.langue" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04097027/file/elevational2.pdf
Guillaume Jacques Elevational deixis in the Kiranti verb Keywords: Kiranti, elevational deixis, motion verbs, Thulung, Khaling, Belhare, Bantawa, case marking This article deals with elevational deixis in Kiranti languages, a feature which is pervasive in these languages in both the verbal and nominal domains. The system of elevation is described in most grammars of these languages as tripartite, following the typologically common system comprising up(wards), same level/across, down(wards) elevations. This work reviews the available data on elevational deixis in the verbal system, and has two main contributions. First, it shows that motion verbs unspecified for elevation are an essential part of the elevation marking paradigm, and are obligatory in some specific elevational configurations. Second, it argues that on the one hand elevationally marked motion verbs are cognate as whole sets across Kiranti, and probably reconstructible to proto-Kiranti and beyond, and on the second hand that a subgroup of Kiranti including Chintang, Athpare, Belhare and Yamphu have innovated a second set of elevationally-marked motion verbs. Introduction Elevational (or topographical) deixis has been described as a pervasive typological feature in Trans-Himalayan/Sino-Tibetan [START_REF] Post | Topographical deixis in Trans-Himalayan (Sino-Tibetan) languages[END_REF][START_REF] Post | The distribution, reconstruction and varied fates of topographical deixis in Trans-Himalayan (Sino-Tibetan): Implications for the reconstruction of an early Trans-Himalayan environment[END_REF], including Lolo-Burmese (Bradley 1979: 354-356;[START_REF] Bradley | Deictic patterns in Lisu and southeastern Tibeto-Burman[END_REF], Rgyalrongic [START_REF] Zhang | Le dialecte de Brag-bar du rgyalrong situ et sa contribution à la typologie de l'expression des relations spatiales: le mouvement associé et l'orientation[END_REF], Bodish (Hyslop 2017: 161) and Tani [START_REF] Post | The distribution, reconstruction and varied fates of topographical deixis in Trans-Himalayan (Sino-Tibetan): Implications for the reconstruction of an early Trans-Himalayan environment[END_REF]. It is particularly conspicuous in Kiranti languages, where a tripartite (upwards, same level, downwards) elevational contrast [START_REF] Bickel | Deictic transposition and referential practice in Belhare[END_REF][START_REF] Jacques | The auditory demonstrative in Khaling[END_REF][START_REF] Michailovsky | Kiranti languages[END_REF]) is coded by locational/directional adverbs, demonstratives and case markers as well by motion verbs. 1 The Elevation distinctions Kiranti languages distinguish between three marked elevations: high/upward, low/downward, and same level/across. Examples (1) and (2), from Khaling and Thulung respectively, illustrate the coding of upward elevation, using a combination of locational adverbs, case markers and motion verbs. 'Because of this, I brought my children up here.' (Soamaya's life) cord to which 'All languages with level or across elevationals also have down and up elevationals." 2 Some of the texts cited in this article are included in the Pangloss collection [START_REF] Michailovsky | Documenting and researching endangered languages: The Pangloss collection[END_REF]. The Bantawa example (3) illustrates the coding of downward elevation, through both a case marker and a downward motion verb. (3) Bantawa hyu-cok-yu down-floor-loc.up dʰa-Ø-kʰa-Ø go.down-n.pst-see-n.pst 'Please, come down to the lower floor! 
' Doornenbal 2009: 85) An example from Chintang (Paudyal 2015: 279) illustrates the expression of same level elevation through a demonstrative and a motion verb. As the name implies, same level elevation refers to the elevation at a location or motion at the same level as the speaker or deictic center; in Kiranti languages, it has been referred to with terms such as 'across', 'horizontal' or 'same level'. Same level elevation is, significantly, distinct from neutral or unspecified elevation, which is coded using distinct verbs, adverbs and case markers. (4) Chintang Type of elevation coding There are two main interpretations for the elevation coded in Kiranti languages. First, and by far most frequently, it can refer to a geophysical (uphilldownhill) contrast, a common interpretation in mountainous areas such as the Himalayas. Examples frequently include geographical or landscape features, in relation to which and along which elevation-coded motion occurs. This can be seen in ( 5) and ( 6). (5) Thulung (Rai 2016: 303) In the following Khaling example, elevation is coded through upward and downward motion verbs, as well as exploiting local knowledge about the elevations of the ʣhēːs (highland area) and the wʌjʌ (lowland area) relative to the Khaling homeland, which serves as the deictic center. 'The highland insect comes down from the highlands saying "rachachacha". The lowland insect goes up from the lowlands saying "nyeyeyeyeye".' (Solme Lamalit,(128)(129)(130) Second, the up vs. down elevational contrast can refer to a vertical axis. This interpretation is found in a number of different situations, such as descriptions of birds flying (8), meteorological events, such as rain falling (9) or the position of stars (10), and events or descriptions involving body posture (11). Both geophysical and vertical elevation are coded using the same morphology in most Kiranti languages. The only documented exception is Mewakhola Limbu [START_REF] Michailovsky | On Limbu directionals and locative expressions[END_REF] which has contrasting elevationcoding adverbs opposing vertical vs. geophysical (up vs. uphill/upstream3 ) for upper and lower elevations; note however that for elevation-coding postpositions, the contrast in types of elevation is neutralized for lower elevation, for which there is a single marker covering both geophysical and vertical interpretations. Domains of elevation coding One interesting feature of elevation coding in Kiranti languages is the possibility of marking elevational location/direction in multiple grammatical domains simultaneously, something that can be thought of as elevational concord. All languages have demonstratives, locational/directional adverbs, and motion verbs, and some additionally have locational/directional case markers. Among the languages that do not have locational/directional case markers are Hayu, Athpare, Yakkha, Wambule and Bahing, for which the locational/directional adverbs and/or demonstratives are often cognate with the locational/case markers in the languages which have them. Post (2020: 381) proposes reconstructed forms for both the adverbs and the elevationcoding case markers: an upward form in *tV, a downward form in *mV or *yV, and a same-level form in *yV or *nV (all of these are reconstructed with a back-rounded vowel), with far more variability to be found among same-level formatives than among upward and downward . The examples presented thus far show the phenomenon of elevation concord, as do additional examples (12,13,14). 
'That [child] did not come down to the house.' One of the logical consequences of being able to encode elevation in multiple domains is the possibility of discordant marking, namely situations in which elevation is not consistently marked across the same utterance. This will be discussed in section 4. Motion verbs The focus of this article is the coding of elevational deixis through sets of lexical verbs, and this section presents the motion verbs used to code elevation. As with other grammatical domains, motion verbs code three distinct elevations-high/upward, same level, down/downward are found. In addition to these are motion verbs unspecified for elevation. These are considered to make up the fourth element of the paradigm, for reasons that will be explained below. The contrast in elevation coding can be seen in ( 15) for Belhare, where distinct motion verbs are used to express upward, downward and same level motion. (15) Belhare a. come.across-ipfv:npst 'From there, the ox is coming over to the tree that has been planted.' (Bickel 1996: 126) Venitive and itive verbs and elevation We distinguish between venitive motion verbs, describing motion towards the speaker or towards the deictic center (which we gloss 'come' and which are seen in the Belhare examples above), and itive verbs, describing motion away from the speaker or from the deictic center (which we gloss 'go'). 4One characteristic of Kiranti languages is that while they all have elevationcoding venitive verbs, they rarely have elevation-coding itive verbs. Belhare is one of two languages (to the best of our knowledge) with complete sets of both, as can be seen in the elevation-coding itive verbs in the examples below ( 16). (16) Belhare a. 3.n.sg-go.across-telic-pst 'Hold on! These two guys are going only now, their friends went much earlier!' (Bickel 1996: 137) Another language with elevation-coding itive verbs in addition to venitives is Bantawa. The itive verbs of both Bantawa and Belhare are presented together in Table 1. In addition to Bantawa and Belhare, a few other languages show signs of having lexicalized elevation-coding itive verbs. Athpare has the same contrast but only for 'upwards' elevation: Ebert (1997b) has both kat-'come from below' and thaŋ-'climb up, go up', like Belhare. A cognate of the latter verb is also found in Yakkha (thaŋ-'climb', Schackow 2015: 534). In the available data on Athpare, there is, however, no evidence of 'same level' and 'downwards' coding on itive verbs, and the Athpare data is best seen as evidence of a possible source for one member of the Belhare set coding elevation and motion away-in the form of a verb coding elevational change but not motional deixis. Situations like the ones in Athpare and Yakkha, with single members of sets for motion away verbs, raise the issue of the interpretative difficulties with using grammatical descriptions: how should one interpret glosses such as 'ascend', 'climb'? In one sense these are indeed verbs that combine features of motion and elevation, and are as such comparable to other material considered here; on the other hand, the gloss may be an approximation for an action that includes motion and elevational features but is not limited to these, also encoding other aspects of the action such as manner. 
Wambule does not have a system of distinct verbal roots to express itive motion, but has compound itive verbs ga-lwa 'go up' and pi-lwa-'go (same level)' built by compounding the corresponding venitive verbs (Table 2) with the elevationally-unspecified lwa-'go' (Opgenort 2004: 794). The elevationally-marked motion verb roots gaand pimay thus be interpreted as venitive by default, but are neutral as to motion deixis in this language. The meaning 'go down' in Wambule is expressed using the roots doand dwak-, which are however also translated as 'fall' and 'come down' and may not be dedicated itive motion verbs. The most frequent elevation-coding motion verbs in Kiranti are therefore venitive verbs. Table 2 shows the verb roots corresponding to these meanings in most of the Kiranti languages.5 ? Thulung get- jok- bik- rok- Wambule ga- ywa- pi- blak- Khaling kʰoŋ- je- pi- ɦo- Dumi kʰoŋ- yi- pi- ho- Koyi kʰo- gʰuʔ- bhiʔ- hu- Kulung tʰoŋ- yu- ban- ta- Camling saŋ- i- ban- ? Puma tʰoŋ- i- ben- ta- Bantawa tʰaŋ- yɨ- ban- ta- Chintang kat- kuŋs- tʰap- ti- Athpare kat- uŋs- ap- ta- Belhare kat- uŋs- ap- ta- Yakkha keʔ- uks- ap- ta- Yamphu kat- uks- ap- ? Limbu tʰaŋ- yu- phɛn- ta- It is important to highlight that it is the verb roots themselves that express the combination of elevational and motional deixis. 6 The verbs in each language form a suppletive paradigm, in that the verbs have different lexical roots. Three factors lead us to consider that the sets of verbs in Table 2 constitute paradigms. First, their use in obligatory when conditions are met, the required conditions being motion towards the speaker/deictic center (as well as away from it, for Bantawa and Belhare) combined with the specification of elevation which is either upwards, horizontal, or downwards. The elevationunspecified member of the sets is not, in fact, a basic motion verb (it is unspecified, not underspecified, for elevation), as it cannot be used as a substitute for one of the elevation-coding motion verbs: it can be considered to be part of the paradigm, being selected only in specific conditions (when Driem 1993; Michailovsky 2012), Koyi [START_REF] Lahaussois | Koyi Rai: An initial grammatical sketch[END_REF], Kulung [START_REF] Tolsma | A grammar of Kulung[END_REF], Camling (Ebert 1997a), Chintang [START_REF] Paudyal | Aspects of Chintang syntax[END_REF], Athpare (Ebert 1997b), Belhare [START_REF] Bickel | Aspect, mood, and time in Belhare: studies in the semantics-pragmatics interface of a Himalayan language[END_REF][START_REF] Bickel | Spatial operations in deixis, cognition, and culture: where to orient oneself in Belhare[END_REF], Bantawa [START_REF] Doornenbal | A grammar of Bantawa[END_REF], Yakkha [START_REF] Schackow | A grammar of Yakkha[END_REF], Yamphu [START_REF] Rutgers | Yamphu, grammar, texts and lexicon[END_REF], Limbu [START_REF] Michailovsky | Limbu-English dictionary[END_REF]. 6 Note, however, that associated motion, which also encodes elevation in some Kiranti languages, is expressed through a series of derivational suffixes [START_REF] Jacques | Associated motion in Sino-Tibetan/Trans-Himalayan[END_REF]. elevation change is unknown, or in the conditions described in section 4.2). 
In other words, while the elevation-coding motion verbs each code specific elevation and motion deixis, the elevation-unspecified motion verb covers the rest of the semantic space for motion without elevation change (upwards, downwards) or specification (across) (except for situations with neutralization, as described in section 4.2), without overlapping on the elevation coding motion verbs. This property of elevation-unspecified verbs is not reported in typological surveys on elevational deixis7 (Forker 2020; Post 2020), and may in fact not be relevant beyond the Kiranti languages. Second, these verbs form a closed class: the languages in our sample only have a single verbal root that meets each of the conditions described (motion + change in elevation). There is however some diachronic evidence about sources for some of these verbs ( §6), some of the sources being verbs which code elevation but not motion (verbs such as 'ascend', 'climb'.) Third, the verbs in these sets have a tendency to grammaticalize into verbal derivational affixes (often called V2's in South Asia). The specific contexts where the elevation-unspecified motion verb is obligatory are those where the elevation is unknown. This will typically occur in questions. This can be seen in ( 17) from Thulung, with the elevationunspecified motion verb romu used to ask where the children are coming from, on account of the elevational path being unknown to the speaker. This contrasts with the verb which follows, namely the same level motion verb bimu, which is used, in an imperative form, because the elevational aspect of the path between their current position and the speaker is clear. ? Thulung kʰet- sɵt- pʰit- ret- Khaling kʰoŋt- jet- pit- ɦot- Dumi kʰoŋt- yit- pit- hot- Koyi kʰo- gʰuʔ- bhiʔ- hu- Kulung ? yut- ban- tat- Camling said- it- baid- ? Bantawa tʰakt- yɨt- batt- tat- Chintang katt- kukt- tʰapt- tat- Athpare katt- ukt- apt- tat- Belhare katt- ukt- apt- ta- Yakkha ket- ukt- apt- taʔ- Limbu tʰakt- yuːt- phɛtt- taːt- Caused accompanied motion verbs In addition to motion verbs, which are intransitive, Kiranti languages also have cognate verbs which express caused accompanied motion: these are transitive verbs (glossed 'bring' and 'take' respectively for motion towards and away from the speaker/deictic center), and they convey the fact that both subject and object are affected by the motion event. The existence of intransitive and transitive motion verb sets is due to the fact that the caused accompanied motion verbs are derived from the corresponding motion verbs by suffixation of the applicative/causative -t suffix [START_REF] Michailovsky | Tibeto-Burman dental suffixes: Evidence from Limbu (Nepal)[END_REF][START_REF] Jacques | Derivational morphology in Khaling[END_REF]. In rarer cases, there is an aspiration alternation between the initial consonant of the motion verb and that of the caused association motion verbs (for instance Thulung get-'come up' vs. kʰet-'bring up'), or suppletion (Thulung jok-'come down' vs. sɵt-'bring down'). Both the motion verbs and caused accompanied motion verbs can, depending on the language, be the locus for the expression of different elevational contrasts. Table 3 lists venitive caused accompanied motion verbs, as well as the elevation-unspecified counterparts, across the various languages. For languages without an elevation-coding set of itive verbs, there are also no itive caused accompanied motion verbs. 
Bantawa, however, has a nearly complete set of venitive and itive verbs, shown in Table 4. A few Kiranti languages (in particular Khaling and Belhare) have gram- maticalized associated motion markers from motion and caused accompanied motion verbs [START_REF] Jacques | Associated motion in Sino-Tibetan/Trans-Himalayan[END_REF], and the elevation contrast is thus also reflected in complex predicates taking these markers. Since the grammatical encoding of elevation is completely parallel between motion verbs, caused accompanied motion verbs and associated motion markers, the remainder of the paper will focus on the former. Mismatches in the encoding of elevational deixis All the examples provided above show elevational concord across the various grammatical domains where elevation is expressed in a given utterance. It happens, however, that one comes across examples in narratives and conversations that appear to contain a mismatch in elevational deixis in the materials: these examples feature discordances in the elevation coded across the locational/directional case markers, demonstratives, adverbs, and motion verbs, combining an elevation-coding formative with an elevationunspecified formative. Three distinct situations involving mismatches are detailed in the following sections. Elevationally neutral locative with elevationally marked motion verbs One type of apparent mismatch involves the neutral locative marker paired with an elevationally marked verb, as in examples (18,19,20). 'We came up here.' (Rutgers 1998: 416, S18) However, unlike elevationally unspecified motion verbs, which are an essential part of the elevational paradigm in Kiranti languages (see section 4.2), the elevationally neutral locational/ directional case marker is not part of the same paradigm as its elevation-coding counterparts. We make this claim on account of the neutral locative marker often combining with elevation-coding case-markers and demonstratives to build compound elevation-coding formatives. This is seen in examples ( 21) and ( 22). fall-res-pst-evd '[The snake] was a bit further up' (Rutgers 1998: 82) For this reason, we consider that the neutral locative case marker is actually not unspecified but underspecified for elevation, hence the use of the term 'neutral'. Therefore, the combination of a neutral case marker with elevationcoding verbs do not represent genuine mismatches in elevation-coding. Neutralization of deixis When a scenario involves displacement between elevations, and the deictic center is distinct from the endpoint of the motion, two possible situations can result: • When the deictic center is not at the same elevation as the endpoint of motion, elevational deixis is neutralized, and only motional deixis is expressed: a motion verb unspecified for elevation is used. • When the deictic center is at the same elevation but distant from both source and goal, the motional deixis is neutralized, and only elevational deixis is taken into account: an elevation coding motion verb can be used -even when the motion is deictically the opposite of that expressed by the verb. This most likely results from most of the languages not having itive verbs that encode elevation. In both cases, mismatches in elevational deixis can result between verbs and other domains, as exemplified below. 
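The two configurations just listed, together with the condition that the elevation-unspecified verb is obligatory when the elevation of the path is unknown (section 3), amount to a small decision procedure for the choice of a venitive verb. The sketch below is only an informal restatement of these conditions: the function, its argument names and the three-way elevation labels are expository devices of ours, not grammatical analyses, and the Khaling roots mentioned in the comments are simply those of Table 2.

# Informal schematization of the neutralization patterns described above.
# The labels and the function are illustrative only; Khaling venitive roots
# (Table 2): kʰoŋ- 'come up', je- 'come down', pi- 'come (same level)',
# ɦo- 'come (elevation unspecified)'.

VENITIVE = {"up": "come.up", "down": "come.down", "level": "come.across",
            None: "come (unspecified)"}

def choose_venitive(path_elevation, ends_at_dc_elevation):
    """Pick a venitive verb class relative to the deictic center (dc).

    path_elevation -- "up", "down", "level", or None when the elevational
                      path is unknown to the speaker (typically in questions).
    ends_at_dc_elevation -- True if the motion ends at the dc's elevation.
    """
    if path_elevation is None or not ends_at_dc_elevation:
        # Elevation unknown, or elevational deixis neutralized because the
        # endpoint stays above/below the deictic center: only the
        # elevation-unspecified verb is available (cf. examples (23)-(25) below).
        return VENITIVE[None]
    # Endpoint at the dc's elevation: the elevationally marked venitive is
    # chosen, and in Bantawa it remains available even when the mover does not
    # approach the dc in absolute distance (cf. examples (26)-(27) below).
    return VENITIVE[path_elevation]

print(choose_venitive("up", ends_at_dc_elevation=True))      # come.up
print(choose_venitive("level", ends_at_dc_elevation=False))  # come (unspecified)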
Neutralization of elevational deixis Examples ( 23), ( 24) and ( 25) illustrate uses of elevationally-unspecified motion verbs in contexts where the starting point of a path is lower or higher than the deictic center, and thus where an elevationally marked motion verb could have been expected instead. In ( 23), the deictic center (the protagonist referred to as "mother") is in a tree, while the children move on the ground. The reason for using the elevationally unspecified verb is not simply due to the fact that the mother can only perceive them by hearing and is not even sure of their identity (or their number), but rather because the children are moving towards to deictic center but remain on low elevation, thus not being closer to the elevational center. It is impossible here to use either kʰoŋ-'come up' (since they are not coming up the tree) or pi-'come (same level)' since they are not moving in the air at the same height as the tree. In ( 24), the deictic center is a house located at a lower elevation than the fields. The use of the unspecified venitive here is due to two factors. First, the direction from which the boar will come is not known in advance. Second, as in the case of ( 23), the boar indeed gets closer to the deictic center from the point of view of absolute distance, but its endpoint is not on the same elevational level. Using the 'upwards' venitive |kʰoŋ| 'come up' here would imply that the boar was coming from a lower elevation to a place located at the same elevation as the house. Example (25) from Thulung illustrates a similar situation but with three levels of elevation. The use of different elevation coding verbs paints a picture allowing the audience to situate the actors in their topography: A Tibetan man brings his daughter down to marry the main character, who is staying in an encampment situated at a higher location than his village (but lower than the Tibetan's). The main character then goes back to his village without his bride. The 'downwards' venitive verb 'bring down' is predictably used for the second motion, but for the first one, the unspecified venitive verb reʈis used instead. The reason for this is that the deictic center is in the main character's village, and that the first motion event is between two locations that are both elevationally higher than -and never reaching the elevation of -the deictic center. Neutralization of motion deixis Neutralization of motion deixis is considerably rarer than the previous case, and the only examples we have found are in elicited material from Bantawa taken from Doornenbal (2009: 109). In the scenarios represented in Figures 1 and2, there are three characters, Prem, Syam and Ram. In Figure 1, 8 Syam adresses Prem (who is located lower) to tell him to move towards his own position (the red arrow), and uses the upwards venitive verb tʰaŋ-(26a). To tell him to go to Ram (the green arrow), he has the choice between the elevationally unspecified itive (26c), along with a distally marked elevation coding adverb, or, more surprisingly, the upwards venitive verb (26b), accompanied by a nominal phrase indicating that the direction is upwards toward Ram. What is surprising about (26b) is that even though the motion involves an upward direction, as expected for the verb |thaŋ|, this is a venitive verb, and yet the motion takes Prem away from, rather than towards, the deictic center (which is the speaker, namely Syam). 
Prem only comes closer to the deictic center from the point of view of elevational deixis, since he ends up at the same elevation as Syam. The converse situation applies, of course, to venitive verbs with downward elevation, as seen in ( 27a) and ( 27b). Because Bantawa has elevation-coding itive verbs as well, the same phenomenon is found with those verbs. In a second scenario (Figure 2), Syam tells Ram to go to Prem, who is located higher that either of them. Here, the upwards itive verb lontmust be used (28). This is because the itive verb is used in situations where both elevation and motion are excentric, i.e. away from the deictic center. (28) Bantawa mu-dʰutni that-upwards lont-a! go.up-pst 'Go up over there!' (go up, towards that direction, upwards) The above discussion can be summarized in Table 5. The elevationally marked venitive verbs can be used in two cases: (i) when the subject moves closer to the deictic center of the motion (examples 26a and 27a)) or (ii) when it moves to the same elevation as the deictic center, without however moving closer to it (examples 26b and 27b). In case (ii), the elevationally unspecified itive verb kʰatcan be used instead (examples 26c and 27c), but not the elevationally marked ones. With this distinction, the data in Table 5 can be accounted for with the following rules: 1. Elevationally marked venitive verbs can be used both for motion towards the motional deictic center or towards the elevational deictic center. 2. Motion towards the elevational deictic center, but away from (or not closer to) the motional deitic center can also be expressed with the unspecified itive verb. 3. The elevationally marked itive verbs are only used for motion away from both deictic centers. In languages where itive verbs lack the elevational contrast, the distinction between elevational and motional deictic centers is still relevant. In particular, the elevationally unspecified venitive verbs are the only option when the moving entity gets closer to the motional deictic center in terms of absolute distance, but remains at a different elevation (higher or lower), and thus remains equally distant from the elevational deictic center. This neutralization of either the motional or elevational component in situations such as those described above is one of the factors that can explain what may otherwise appear to be discordant uses of deixis across different word classes within a sentence. The choices in fact come about in scenarios involving more than two elevations or motion not involving the deictic center. Omission of allative and ablative marking Another cause for discordant elevational deixis is the fact that allative (for languages which have such a case marker) and ablative case markers are frequently omitted and replaced by elevation coding case markers. Because there is no way to identify the source and goal, in the absence of allative and ablative case markers, there can appear to be discordant, or non-agreeing, elevation marking across a sentence. In (29) and in (30), Thulung uses an elevation-coding locational/directional case marker ('down(wards)' here), rather than an ablative, to indicate the source; when combined with the upward venitive verb, the use of the case marker appears discordant, until it is clearly identified as the source rather than the goal. 'They brought water from down in the river and drank it. 
' [Thulung origins,215] In these two examples, it is the elevation-coding markers which make it possible, together with the motion verbs, to parse the sentence correctly and identify key elements of the path such as source and goal. Fluidity of the deictic center In section 4.2.2 above, we saw that neutralization of elevational and motion deixis in specific contexts can cause confusion between itive and venitive motion. A similar confusion can arise through another distinct phenomenon, namely shifts in motion deixis within a narrative. In (31) for instance, the main character leaves his father (who has been transformed into a stone) and goes back to his village; he predictably uses the venitive verb kʰɵŋ-ʌ-t-ʌ 'I came up' when talking to his co-villagers, and invites them to return to the place where he left his father using the itive kʰoɔç-ki 'let us go'. However, when describing their trip back to this place (of lower altitude), the venitive verb jāː-t-nu (literally 'they came down') occurs instead of the itive, apparently to highlight the fact that the main character is among the persons going to that place. (31) Khaling In the following Thulung passage (32a) and (32b), made up of material which is separated by two sentences in a narrative, the deictic center shifts from the Thulung homeland of Mukli -as evidenced by the adverbs ojʉ, ola (down(wards) and up(wards) respectively), the itive caused accompanied motion verb lʌ:ɖʉ and the up(wards) locative case marker -la in (32a) -to the main protagonist, as seen by the use of locational/directional case markers -ra (neutral) and -jʉ (downwards) combined with -go ('inside'), showing that they are from the perspective of the cave; and then back again, with the use of the downward venitive verb jomu. This fluidity in the deictic center, and its ability to shift even within a single sentence, leaves speakers with a great amount of flexibility in the description of the physical environment in which narratives occur. "mɛbenʌ then ʔuŋ-ʌ 1sg-erg ni top gʰrok-tû-ŋ-t-ʌ-nʌ leave-put-1sg→3-pst-1sg-lnk kʰɵŋ-ʌ-t-ʌ-m come.up-1sg-pst-1sg-nmlz Diachronic evolution All Kiranti languages have elevational motion verbs, and some of these verb forms are cognate across the subgroup. In this section, we show that nearly all elevational motion verbs in Kiranti fit into two distinct tripartite cognate sets, and propose several historical hypotheses to account for the observed data. Two sets of elevation deixis verbs Among Kiranti languages, only Bantawa and Belhare are described as having a tripartite elevation constrast for both venitive and itive verbs (section 3.1). The system of Belhare (Table 6) is particularly interesting, because both its itive and venitive series have cognates occurring as a set in other languages (at least in East Kiranti): the Belhare itive set is cognate with the Limbu venitive set, while the Belhare venitive set is cognate with its Chintang, Yakkha and Yamphu counterparts. By contrast, the itive system of Bantawa is completely distinct from those found in its closest relatives, Camling and Puma.9 Since there is some discrepancy between languages as to the motion deixis of these verb roots, the unspecified terms 'set A' and 'set B' are used in the following. Set A Set A occurs as a complete set in Belhare and Limbu, and in Bantawa it is split between the venitive and the itive verbs. Languages west of Bantawa have set A verbs, but lack the 'upwards' verbs. 
All three verb roots have cognates elsewhere in Trans-Himalayan/Sino-Tibetan, in particular in Rgyalrongic. The 'upwards' motion verb, reconstructed *tʰaŋ in Jacques (2017),10 is clearly related to the locative adverb thaŋ 'up, overhead' in Limbu (Michailovsky 2015: 115). Outside of Kiranti, we find cognates of this verb in the whole Trans-Himalayan/Sino-Tibetan family, but with a final stop: Chinese has the cognate verb 陟 ʈik 'ascent' (Old Chinese *trək, Baxter and Sagart 2014; on the vowel correspondence see [START_REF] Gong | The system of finals in proto-Sino-Tibetan[END_REF], and in Gyalrongic there are cognate nouns or adverbs, such as Japhug taʁ 'up' (from *taq). The 'downwards' motion verb had the form *ju proto-Kiranti but its rhyme presents irregular correspondences in some languages such as Khaling and Dumi due to the fact that it underwent fusion with person indexation suffix, as is the case in Limbu, where its conjugation is irregular (Jacques 2018,2020). These verbs are mainly used with itive meaning, but also occur in contexts incompatible with the itive, and are better analyzed as having neutral deixis. There are several similar-looking 'same level' venitive verb roots in set A. The first one is reflected by Hayu pʰi-, Thulung bik-(with an irregular -k, possibly analogical, see Lahaussois 2011), Khaling and Dumi pi-, and can be reconstructed as *piin proto-Kiranti. The second one corresponds to Belhare pʰen-'go across' and Limbu pʰɛn-'come across' (possible proto-Kiranti reconstructions for this etymon would be *pʰɛn or *pɛn). The West and East Kiranti forms, although they do not derive from exactly the same proto-form, are very probably related. Although alternations between *ɛ and *i and the addition of -n in Limbu are not regular morphological processes, there are other examples of this correspondence between Limbu and Khaling. In particular note the Khaling verb pʰi-'be spoiled (of rice)' and Limbu pʰɛn-'be spoiled' (Jacques 2017: 203). While the expla-nation for these alternations still eludes us, it is not an obstacle to positing cognacy between these roots. 11 Furthermore, Jacques (2017: 203) analyzes the Bantawa itive verb bitt-'go (same level)' as the cognate of *pi-(Bantawa bregularly goes back to proto-Kiranti *p-). The origin of the coda -ttis completely clear, but it is not an obstacle against positing an etymological relationship between these forms. 12 Outside of Kiranti, cognates of this verb root are found in Rgyalrongic (Cogtse Situ pi, stem II of the verb 'come', Lin 2003). Set B The set B verbs are restricted to a subset of Eastern Kiranti languages (Table 8), excluding Limbu. The 'upwards' verb can be reconstructed as *gaton the basis of Eastern languages. 13 It is perhaps tempting to compare Thulung get-'come up' and Wambule ga-'come up' to this set, but as far as we know there are no good examples of Thulung -et corresponding to East Kiranti -at, and Wambule normally 11 Another root form reflected by banin Kulung, Camling and Bantawa 'come across' goes back to *panfollowing [START_REF] Michailovsky | Manner vs place of articulation in the Kiranti initial stops[END_REF] laws. It is unrelated to the previous forms, as although the initial consonant is the same, there are no other examples of such a correspondence. These verbs are instead to be compared with the non-independent Khaling verb root -pɛ-(from *pa) which appears in the bipartite verb ɦɵ-n-pɛ-nɛ 'reach'. 
12 An interesting gap in Bantawa is the absence of a corresponding itive caused accompanied motion verb (Doornenbal 2009: 125), which would be the equivalent of Khaling pit-'bring across', and whose expected form in Bantawa would be †bit-. In view of this gap, it is possible that Bantawa bittactually reflects a double derivation from the same root: first derived into a caused accompanied motion verb by the -t applicative suffix, and then back to an intransitive form by middle derivation. 13 Yakkha -eʔregularly corresponds to proto-Kiranti *-at, as shown by kʰeʔ-'go' from *kʰat-. preserves a trace of the final *-t. It is more straightforward to compare Thulung get-'come up' to Limbu kɛt-'to arrive, to reach (a destination), to occur' (33). The 'same level' verb has a tʰinitial in Chintang corresponding to a zero initial in the other languages. As pointed out by [START_REF] Michailovsky | Preliminaries to the comparative study of the Kiranti subgroup of Tibeto-Burman[END_REF], proto-Kiranti *t debuccalizes in Yamphu and Belhare, and this is true of Yakkha too (for instance *tuŋ-'drink' yields Belhare and Yakkha uŋand Yamphu uks-). Thus, a proto-form *tapcould account for the forms of all languages except Athpare, where *tdoes not debuccalize regularly (cf Athpare tʰuŋs-'drink'). This suggests that Athpare ap-'come across' is not a native word, but a borrowing, perhaps from Belhare. The 'come down' verb has kin Chintang corresponding to zero initial in other languages, but there is no evidence that velars debuccalize in Yamphu and Belhare, so that this set is unexplained. Given the absence of clear cognates of these verbs outside of Eastern Kiranti, it is likely that set B constitutes an innovation for this subgroup. Historical shifts The set A paradigm is most likely reconstructible to proto-Kiranti and is possibly older than Kiranti, judging from cognates with the same meaning in Thangmi and Gyalrongic. Two main hypotheses can be proposed for the function of the set A verbs in proto-Kiranti. First, one can suppose that the verbs in set A were venitive, as they are in most Kiranti languages. In this hypothesis, the Chintang-Yamphu group has undergone a shift whereby former venitive motion verbs became itive, through elevational contexts where motional deixis is neutralized (Table 5), or as a result of the fluidity that can characterize motion deixis (section 5). Set B verbs were innovated and became a new set of venitive verbs. Second, it is also possible that set A verbs had an elevational contrast but were neutral with respect to motion deixis, as seems to be the case in Wambule (section 3.1). Limbu on the one hand and Western languages on the other could have independently innovated by interpreting set A verbs as venitive, whereas they became itive in the Chintang-Yamphu group. In any case, in both hypotheses, set A is reconstructible to proto-Kiranti, while set B is an innovation. Moreover, while motional deixis may vary across languages, elevational deixis is remarkably stable. ' Come over here! (Your mother told me to ask you.)' who stayed above the footpath' kite) came down (from heaven) to earth and gave his grandmother (the comb).' (Origin, 66) (9) Khaling wɵ rain je come.down:n.pst:3sg 'It is raining.' An upward location along a vertical axis is seen in (10), referring to the position of a star in the sky. 
(10) Khaling pɛri-bʉ-tʉ heaven-loc-loc.up tsʌī top sukra Polar.Star mōː-tɛ be-pst:3sg 'Up in the sky was the Polar Star' (Ur-Mother 44) In (11) the up(ward) case marker is used to express a vertical posture with respect to the ground. -leg-loc.up lamdi-pa walk-act.pcp dʉs-ta-m-ka become-pst.3sg-nmlz-erg 'Because she had become able to walk up on her legs, ....' [about a child learning to walk]. basket.shelf-loc.up ʔʌm-si make.sleep-refl.n.pst.3sg ʔei intj lʉː-tɛ-su tell-pst:3→3-du 'In that case you will sleep up there on the shelf, they told him. (17) Thulung ba-tsi stay-imp.2du a-tsʉsʉ-tsip 1sg.poss-grandchild-du ba:-laŋka where-abl re foc rok-tsi come-pst.2du bik-tsi come.across-imp.2du bik-tsi come.across-imp.2du rwak-ta say-pst.3sg retsʌ it.seems ʔe hs 'She said "Stay, stay, Grandchildren, where did you come from? Come, come."' (Vulture, 79) ' When they come down to the husker and the handmill, nobody must leave any flour or rice!'(Ebert 2000come.up.s2-1sg.so-pa-also sa:ŋwa buffalo kɔtt-aŋ keep.s2-1sg.so.pa pit cattle kɔtt-aŋ-aŋ keep.s2-1sg.so.pa-and way-aŋ be.s2-1sg.so.pa 'I became an orphan, so I came up to my maternal uncle's house and stayed looking after the buffalo and the cattle.' (https://doi.org/ 10.24397/pangloss-0004194, above-loc-loc.up sip-pet:-tt-ae: noise [made by] children, their du mother knew that [her] children had come here.'(Solme Lamalit, 71) ' The boar will come and eat a lot of lentils in the field high up.'(Ogress 156) reʈ-ɖʉ-m-ka bring.unspec-pst.(3→3)sg-nmlz-erg me-sɵɖ-ʉ-ja neg-bring.down-(3→3)sg-irr retsʌ seem.to ʔe hs 'Because he [Tibetan] brought him [main protagonist] a Tibetan girl, he did not bring her down to his home [lower down, in the ancestral village].' 8 We thank Lai Yunfan for the artwork. Figure 1 : 1 Figure 1: Motion along a slope in Bantawa (first scenario) 'Go downwards to where Syam is!' Figure 2 : 2 Figure 2: Motion along a slope in Bantawa (second scenario) bring.water-ant.cvb pe-m-thal-miri eat-3pl-hab-pst.3pl→3 lʉː-tɛ-nu-nʌ say-pst.3→3-pl-lnk kʰɵleŋʌ all ʔu-ŋoɔpsu-ɦɛm-kolo 3sg.poss-clan.members-pl-comit jāː-t-nu-nʌ come.down-pst.3-pl-lnk sên-tɛ-nu see-pst.3→3-pl"Then I left him there and came back. So, clan members, let us go look for him" he said to them, and they all came down and looked.'Khamnime, 36) me-go-la-ŋa dist.dem-inside-loc.up-int ʣʉl-thal-lʉ place-hab-pst.(3→3)sg ʔe hs me-la dist.dem-loc.up burkhum-go-la cave-inside-loc.up thʌk-ʣɵl-thal-lʉ hide-antic-hab-pst.(3→3)sg ʔe hs pʉ:-thal-lʉ eat-hab-pst.(3→3)sg ʔe hs bai-thal-la stay-hab-pst.(3→3)sg ʔe hs 'He used to take food from down here, from home, and used to place it up there, he used to hide it up inside a cave up there and eat it and stay there.' [Foundation 16] b. burkhum-go-ra cave-inside-loc kwa: mud dha-i-thal-lʉ-ma dig-3sg-hab-pst.(3→3)sg-conj dhwa:-go-jʉ earth-inside-loc.down thʌk-ʣɵl-lʉ-ma hide-antic-pst.(3→3)sg-conj jok-thal-la come.down-hab-pst.3sg ʔe hs 'He used to dig down into the mud in the cave and hide it [the food], and then come down.' [Foundation 19] Table 1 1 : Itive motion verbs Upwards Downwards Across Unspecified Belhare Bantawa lont-tʰaŋ- yu-dʰa- pʰen-bitt- kʰat-kʰat- Table 2 : 2 Venitive verbs Upwards Downwards Across Unspecified Hayu dzɔk- yu- phi- Table 3 : 3 Caused accompanied motion verbs (venitive) Upwards Downwards Across Unspecified Hayu ? 
yut- pit- Table 4 4 : Bantawa elevational motion verbs Up Down Across Unspecified come go tʰaŋ-yɨ-lont-dʰa- ban-bitt- ta-kʰat- bring (transitive) tʰakt yɨt take (transitive) lont-dʰant--batt tat- kʰatt- Table 5 5 : Elevation deixis vs. Motion deixis Example Verb Elevation Motion Adverb/Noun (26a) (26b) tʰaŋ-ven+up tʰaŋ-ven+up ven+up ven+up ven iti ven+up iti+up (26c) kʰat-iti+unspec ven+up iti loc:up (28) lont-up+iti iti+up iti iti+up (27a) yɨ-ven+down ven+down ven ven+down (27b) (27c) yɨ-ven+down kʰat-iti+unspec ven+down iti ven+down iti iti+down loc:down Table 6 : 6 Elevational deixis in Belhare intransitive motion verbs (based on Bickel 1999: 94) Itive 'go' (set A) Venitive 'come' (set B) Upwards Downwards yu-tʰaŋ-Across pʰen-Unspecified kʰat- kat-uŋ-ap-ta- Table 7 7 : Set A motion verbs Upwards Downwards Across Hayu Thulung Khaling Dumi Koyi Kulung Camling saŋ-tʰoŋ-Bantawa tʰaŋ-Puma tʰoŋ-Athpare tʰaŋ-(itive) Belhare tʰaŋ-(itive) yu-(itive) yu-jok-je-yi-gʰuʔ-? yu-i-yɨ-i-Yakkha tʰaŋ-(itive) Yamphu saŋ-(itive) yu-(itive) Limbu tʰaŋ-yu- phi-bik-pi-pi-bhiʔ-bitt-(itive) pʰen-(itive) ? pʰɛn- 2017: 203). This verb is related to the adverbial root reflected by Limbu yo 'down, downhill, downstream'. Outside of Kiranti, cognates of both the motion verbs and their -t applicatives are found in Thangmi (yu-'come down', Turin 2012), but also further away in Situ Gyalrong: Cogtse Situ jə' go downwards', jut 'take downwards' (Lin 2017: 61) and Bragbar Situ ɟə, ɟə, ɟú 'se déplacer vers le bas', ɟət, ɟət, ɟút 'transporter vers le bas' (Zhang Table 8 : 8 Venitive motion verbs Upwards Downwards Across Chintang kat-Athpare kat-Belhare kat-Yakkha keʔ-Yamphu kat- kuŋs-uŋs-uŋs-uks-uks- tʰap-ap-ap-ap-ap- The riverine (upstream-downstream) dimension, which is linguistically encoded in other Trans-Himalayan/Sino-Tibetan languages, in particular in Gyalrongic[START_REF] Sun | Parallelisms in the verb morphology of Sidaba rGyalrong and Lavrung in rGyalrongic[END_REF][START_REF] Lin | A dimension missed: East and west in Situ rGyalrong orientation marking[END_REF][START_REF] Lai | Grammaire du khroskyabs de Wobzi[END_REF][START_REF] Zhang | Le dialecte de Brag-bar du rgyalrong situ et sa contribution à la typologie de l'expression des relations spatiales: le mouvement associé et l'orientation[END_REF] and Tani (Post 2019, 2020), is rarely mentioned in descriptions of Kiranti languages. It is worth pointing out that the connection found in some Sino-Tibetan languages between 'home' and 'uphill'(Post 2019: 243) does not appear to be strongly developed in the Kiranti materials we have access to. AlthoughSchackow (2015: 182) mentions that any location outside the Himalayas is referred to as 'downhill' in Yakkha, this statement may simply be an expression of a topographic fact. Examples such as 25, where the homeland is at with a lower elevation, as reflected in the choice of motion verb, do not suggest that there is any particular connection between 'home' and 'uphill'. Languages in the table are organized from West to East, with Hayu being the westernmost language. The sources are the following: Hayu[START_REF] Michailovsky | La langue hayu[END_REF], Thulung[START_REF] Lahaussois | Thulung Rai[END_REF], Wambule[START_REF] Opgenort | A grammar of Wambule[END_REF], Khaling(Jacques et al. 2015), Dumi (van It is possible that what Schapper (2014) refers to as 'unelevated' terms have a similar use. 
The 'upwards' itive verb of Bantawa lont- 'go up' also means 'go out' (Doornenbal 2009: 350), and is cognate to Puma lon- 'come out', which does not belong to the elevational deixis paradigm. Concerning the correspondence of Camling s- to tʰ- in other languages, [START_REF] Michailovsky | Preliminaries to the comparative study of the Kiranti subgroup of Tibeto-Burman[END_REF], who reconstructs *Xt for the set reconstructed here as *tʰ, argues for a consonant cluster, but also indicates that *tʰ- is a possible reconstruction, which however implies a specific sound change in Camling (perhaps *tʰ → *θ → s). Abbreviations In addition to the Leipzig glossing rules, this work uses the following abbreviations: act active, antic anticipatory, comit comitative, conf confirmative, conv converb, int intensifier, lnk linker, pcp participle, pft perfect, rep reported speech, res resultative, subj subject. Conclusion This paper contributes to the study of elevational deixis from both typological and historical perspectives. Kiranti languages have typologically fairly common elevational systems (Forker 2020) involving 'upwards', 'level/across' and 'downwards' elevations in the verbal system (section 3). However, the Kiranti data contribute to the typology of elevational deixis by showing that the elevation-unspecified motion verbs form, at least for Kiranti, an integral part of the elevation system, in addition to the up(wards), down(wards) and level/across categories: they fill the gaps in the semantic space that cannot be expressed by elevation-specified verbs. Elevational motion verbs are largely cognate across Kiranti as entire sets (section 6.1), and it is likely that this category is reconstructible to proto-Kiranti. Cognate verbs always have the same elevational deixis, but may be itive in some languages and venitive in others. The synchronic shifts of deixis documented in sections 4.2 and 5 are not only of interest for synchronic description and typology, but also offer plausible models to interpret the data in section 6 from a historical perspective and reconstruct proto-Kiranti.
04088149
en
[ "phys.phys.phys-optics" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04088149/file/Sheveleva%20-%202023.pdf
Anastasiia Sheveleva Saïd Hamdi Aurélien Coillet Christophe Finot Pierre Colman email: [email protected] Analysis of Dispersive Fourier Transform dataset using Dynamic Mode Decomposition: evidence of multiple vibrational modes, and their interplay in a three-soliton molecule We demonstrate that the Dynamic Mode Decomposition technique can effectively reduce the amount of noise in Dispersive Fourier Transform dataset; and allow for finer quantitative analysis of the experimental data. We therefore were able to demonstrate that the oscillation pattern of a soliton molecule actually results from the interplay of several elementary vibration modes. Since its first demonstration, the Dispersive Fourier Transform (DFT) is a key technique to investigate experimentally ultrafast fiber ring laser cavities [START_REF] Jannson | Real-time fourier transformation in dispersive optical fibers[END_REF][START_REF] Goda | Dispersive fourier transformation for fast continuous single shot measurements[END_REF][START_REF] Mahjoubfar | Time stretch and its applications[END_REF][START_REF] Wang | Recent advances in real-time spectrum measurement of soliton dynamics by dispersive fourier transformation[END_REF][START_REF] Godin | Recent advances on time-stretch dispersive fourier transform and its applications[END_REF]. As a convenient way to observe the pulse evolution round-trip after round-trip, this technique allows indeed a better comprehension of the nonlinear phenomena governing these systems. If the main features of fiber ring laser cavities are now well understood, it appears that weaker effects are also of importance because they can built-up to the point where their contribution to the dynamics becomes also determinant [START_REF] Jang | Ultraweak long-range interactions of solitons observed over astronomical distances[END_REF][START_REF] Hamdi | Superlocalization reveals long-range synchronization of vibrating soliton molecules[END_REF]. Correspondingly, DFT experiments would now require either very long record time -so that the long-term dynamics caused by the weaker effects can fully develop; and more acute measurements -so that minute phenomena can be observed. Experimentally, the bottleneck would therefore come from the oscilloscope which records the DFT trace. Indeed, the electronic noise and the discretization granularity prevent recording the single shot spectra with utmost precision. Using the example of a three-soliton molecule, we demonstrate in this letter that the Dynamic Mode Decomposition (DyMD) allows minimizing experimental noise without compromising the quality of the information. We show in particular that this technique is more efficient than any other curve smoothing techniques, for instance, the Savitzky-Golay filter [START_REF] Savitzky | Smoothing and differentiation of data by simplified least squares procedures[END_REF]. Namely, we fully characterize the oscillations of a 3-Soliton Molecule (SM) and demonstrate that it comprises two nonlinear oscillators of different timescales that are coupled to each other. On a minor side, DyMD permits lossless compression of the DFT dataset by at least a 90 % factor. 
This demonstration is the occasion to show that DyMD can greatly help to get a finer qualitative analysis of the dynamics of SMs, which are bound states formed by dissipative solitons [START_REF] Lapre | Real-time characterization of spectral instabilities in a mode-locked fibre laser exhibiting soliton-similariton dynamics[END_REF] travelling around the laser cavity in close interaction [START_REF] Stratmann | Experimental observation of temporal soliton molecules[END_REF][START_REF] Akhmediev | Multisoliton solutions of the complex ginzburglandau equation[END_REF]. The large variety of mechanisms responsible for their existence, for instance, gain recovery dynamics, cross-phase modulation effect [START_REF] Nimmesgern | Soliton molecules in femtosecond fiber lasers: universal binding mechanism and direct electronic control[END_REF], laser noise [START_REF] Weill | Noise-mediated casimir-like pulse interaction mechanism in lasers[END_REF][START_REF] Zhou | Oscillatory self-organization dynamics between soliton molecules induced by gain fluctuation[END_REF], emission of dis-persive waves [START_REF] Soto-Crespo | Quantized separations of phaselocked soliton pairs in fiber lasers[END_REF], or acoustic phenomena [START_REF] Jaouën | Transverse brillouin effect produced by electrostriction in optical fibers and its impact on soliton transmission systems[END_REF], results in the SMs exhibiting numerous distinct vibration patterns . In detail, we consider here a 5-m long fiber ring laser cavity comprising 3 meters of Erbium doped fiber and a polarizer (see Fig. 1-(a)). Mode-locking is ensured by nonlinear polarization rotation followed by discrimination through a polarization beam splitter, also acting as the output coupler. After dispersion of the laser output in a -50 ps/nm Highly Dispersive Fiber (HDF), the DFT spectra are recorded by an 80 GS/s, 40 GHz electrical bandwidth, 8-bit depth oscilloscope. The photodiode has a 25 GHz bandwidth, and acts as a low pass filter for higher frequencies. This laser architecture can support the propagation of a 3-SM, characterized by the distinct interferometric fringes seen on the DFT spectra (Fig. 1-(b)). As detailed later in this letter, this molecule can be described as two coupled oscillators. It exhibits in Fig. 2 a complex oscillatory pattern with a principal periodicity of about 71 round-trips (RTs). Note that the SM under investigation here is not a soliton crystal because the solitons do not have the same optical phase: the leading and the trailing soliton are actually π-shifted, resulting in a more unstable interaction. Therefore, the separation between the middle an the last soliton is slightly larger that the one between the leading and middle soliton. Another consequence is that the soliton pairs have radically different properties and exhibit a very complex dynamics [START_REF] Igbonacho | Dynamics of distorted and undistorted soliton molecules in a mode-locked fiber laser[END_REF]. If the vibrations associated to the two closest pairs of solitons can be easily retrieved from the Fourier transform of the DFT spectra, the last oscillatory pattern created by the interaction of the two soliton located at each bound of the molecule clearly suffers from strong measurement noise (Fig. 
2-(c,f), black dots) despite the noise level being already drastically reduced by the use of a Savitzky-Golay [START_REF] Savitzky | Smoothing and differentiation of data by simplified least squares procedures[END_REF] filter (hence a 4 th order polynomial fit over a sliding window). This type of filtering does indeed reduce the amount of noise, but it cannot suppress low frequency noise (aka. drift) and, furthermore, filter out any fast-varying data. Consequently, this signal processing helps improving the precision of the measurement to some extend, but it leaves at the same time some residual artifacts that will ultimately limit the former. In brief, the main limitation of such a filtering originates from the fact that it is conducted independently on each single shot spectrum; and therefore it does not benefit from the knowledge of the whole evolution of the single-shot spectra in order to identify truly random features, namely noise. Therefore, we performed a DyMD [START_REF] Schmid | Dynamic mode decomposition of numerical and experimental data[END_REF][START_REF] Schmid | Application of the dynamic mode decomposition to experimental data[END_REF] on the DFT dataset (Fig. 1-(b)) in order to improve the quality of the data. It consists in decomposing the instantaneous snapshot spectra S(λ, n) at round-trip n as S(λ, n) = k ( M ode k (λ)b k (n) ). The temporal dynamics of the laser system is then encoded in the b k coefficients (Fig. 3-(c,d), Fig. 4-(b)), whereas the M ode k contains the information related to vibration modes (Fig. 3-(a-b), Fig. 4). This decomposition method makes use of the full DFT dataset, hence it is more accurate than any local filtering and fitting technique. As the DyMD would rely on a round-trip-shifted covariance, we believe that it is very well suited for DFT data because the spectrum evolves nonchaotically round-trip after round-trip. The impact of the DyMD performed with respectively 1, 2, and 4 modes is shown in Fig. 2. First we see in Fig. 2-(c,d) that the relative motion of the trailing and heading solitons is now retrieved correctly, whereas it suffers from very strong noise as seen for the raw data. This validates the possibility for DyMD to investigate larger soliton molecules that would otherwise remained hidden by the electronic noise (the corresponding signal is barely visible in Fig. 1-(d)). In details, we see first that the vibration patterns obtained by mean of the 4-Mode decomposition (Fig. 2, orange lines) follow almost exactly the original data. The 2-mode decomposition (yellow lines in Fig. 2) reconstructs the experimental data with an average error of 1 bit, and the maximal error never exceeds 2 bits. Indeed, if we look at Fig. 1-(b,c) where are shown the original DFT data, the reconstruction is done with little error (Fig. 1-(e)). A reconstruction error below -24 dB would actually correspond to a lossless reconstruction for a 8-bits digitization of a full scale signal. True lossless reconstruction is obtained for 8-mode decomposition (not shown here). As a side remark, the memory requirement for such a decomposition (2 10 points per spectrum, 2 14 Round-Trips, 32 bits resolution) would correspond to a compression of the experimental raw data by 93%. In contrast, the decomposition using a single mode clearly misses a few important features, like, for instance, the fast over-modulation in Fig. 2-(d). The 2-mode DyMD appears here to be the best choice; and the corresponding modes are shown in Fig. 3-(a,b). 
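The letter does not detail the numerical recipe behind the decomposition of the single-shot spectra into S(λ, n) = Σ_k Mode_k(λ) b_k(n). A minimal sketch of one standard way to obtain such a truncated decomposition (exact DMD built on a rank-r SVD of the round-trip-shifted snapshot matrices, in the spirit of Schmid's formulation cited above) is given below; the NumPy formulation and all variable names are illustrative assumptions, not the authors' code.

import numpy as np

def dmd_modes(S, r):
    """Exact DMD of a DFT dataset S (rows: wavelength bins, columns: round-trips),
    truncated to r modes. Returns the spatial modes Phi (wavelengths x r) and the
    per-round-trip coefficients b (r x round-trips), so that S ~ Phi @ b."""
    X, Y = S[:, :-1], S[:, 1:]                      # snapshots at round-trip n and n+1
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]   # rank-r truncation
    A_red = U.conj().T @ Y @ V / s                  # reduced round-trip propagator
    eigvals, W = np.linalg.eig(A_red)
    Phi = Y @ V / s @ W                             # DyMD modes, one column per mode
    b = np.linalg.pinv(Phi) @ S                     # evolution of each mode's amplitude
    return Phi, b, eigvals

# Denoised / compressed reconstruction with two modes:
# Phi, b, _ = dmd_modes(spectra, r=2)
# spectra_2mode = (Phi @ b).real

Reconstructing with r = 2 or r = 4 columns of Phi corresponds to the 2- and 4-mode reductions discussed above, while a larger r approaches a lossless reconstruction at the cost of retaining more of the measurement noise.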
An important point is that this decomposition does not only reduce the amount of noise, it also allows a finer interpretation of the SM vibration pattern. The b k coefficients (Fig. 3-(c,d)) are indeed completely correlated to the molecule oscillations shown in Fig. 2. We see that the weak over-modulation of 5.24 RTs periodicity that is over-imposed on the main oscillation originates almost exclusively from the second mode, whose b 2 coefficient exhibits clearly fast varying features. In contrast b 1 evolves more slowly, following a periodic pattern with high harmonic content. Detailed Fourier analysis in Fig. 3-(e) confirms this relative distribution of fast and slow oscillations between the two modes. Statistically, independent oscillators are described by different modes; this is then the case here. It is actually the reason behind the failure of the single-mode DyMD as it cannot reproduce concomitantly the dynamics of two independent modes. Knowing the shape of the modes, their impact on the SM's motion can then be traced back in Fig. 4. The first vibration mode does not impact much the relative distance between the leading and trailing solitons. This fact does not change even if more modes are included in the DyMD. As a good approximation, the first mode involves the motion of the middle pulse, along with the corresponding dephasing of the first dissipative soliton. In the second mode, the middle pulse acts as an anchor, and the two other pulses move (and dephase) in an anti-symmetric manner. We can conclude that each of the two fundamental vibration patterns does not necessarily involve a single soliton pair, and that it spans over the whole molecule. Therefore, the oscillation experienced by the 3-soliton, or more complex, SM must not be simply described by looking only at the vibrations of soliton pairs. If the two oscillators are independent entities, they, however, interplay with each other. Indeed both b 1 and b 2 show, respectively, modulations at high and low frequencies, which is characteristic of an interac-tion between the two vibration modes (Fig. 3-(e)). In detail b 2 encodes the fast oscillation of about 5 RTs periodicity, but exhibits at the same time a modulation at the main fundamental oscillation period (71 RTs). In Fig. 3-(e) the tone of the fast oscillation (located at about 0.2 RT -1 ) is modulated by the main slow oscillation, resulting in the creation of two equally spaced sidebands. This is the direct consequence of the slow oscillation modifying adiabatically the parameters of the fast oscillator. We would like to stress that this phenomenon can only be clearly observed because the experimental noise has been reduced so that the small fast overmodulation takes over. As a consequence, the spectral comb that characterizes the modulation of one oscillator by another is only fully revealed by the DyMD approach. To allow a global comparison are presented on Fig. 5 the different spectrum of τ 2-3 depending on the data processing. This confirms that the single mode DyMD clearly is not able to reproduce correctly the signal because it misses the fundamental tone of the fast oscillation. Compared to the raw data, which show only one, the 2-Mode DyMD exhibits up to six harmonics. The DyMD technique also permits in-depth understanding of the molecule vibration through analysis of the b k coefficients. 
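The sideband structure discussed above can be checked directly on the extracted coefficients. The snippet below is a small illustrative helper (hypothetical names, NumPy assumed) that computes the amplitude spectrum of one b_k(n) series in units of inverse round-trips, where the slow ~71-RT breathing, the fast ~5.24-RT over-modulation and their mixing sidebands should appear.

import numpy as np

def mode_spectrum(b_k):
    """Amplitude spectrum of one DyMD coefficient series b_k(n), sampled once per round-trip."""
    b = np.real(np.asarray(b_k))             # DyMD coefficients may be complex; keep the real part
    b = b - b.mean()                         # remove the DC offset
    freqs = np.fft.rfftfreq(b.size, d=1.0)   # frequencies in RT^-1
    return freqs, np.abs(np.fft.rfft(b))

Applied to b_2, a tone near 1/5.24 ≈ 0.19 RT^-1 flanked by sidebands spaced by 1/71 ≈ 0.014 RT^-1 would be the signature of the slow oscillator adiabatically modulating the fast one, as seen in Fig. 3-(e).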
In literature, vibrations of SMs are usually classified depending on how the optical phase between the solitons evolves [START_REF] Krupa | Real-time observation of internal motion within ultrafast dissipative optical soliton molecules[END_REF], and a single oscillation pattern is usually assumed. Here the SM experiences a periodic evolution (71 RTs), set by the largest and strongest oscillator (b 1 ) that bounds the whole molecule. Still there exists a minor part of this oscillation that is caused by another oscillator, characterized by the coefficient b 2 . Consequently the SM is under the influence of at least two limit cycles of radically different nature. The fast oscillation is of much weaker amplitude and thus it is not strong enough to take over the dynamics. The co-existence of several limit cycle attractors, and their range of attraction, is a fundamental question for a better understanding of the creation and the dissolution of SMs [START_REF] Grelu | Dissipative solitons for mode-locked lasers[END_REF][START_REF] Schreiber | On the study of pulse evolution in ultra-short pulse mode-locked fiber lasers by numerical simulations[END_REF][START_REF] Zavyalov | Dissipative soliton molecules with independently evolving or flipping phases in mode-locked fiber lasers[END_REF][START_REF] Zavyalov | Discrete family of dissipative soliton pairs in mode-locked fiber lasers[END_REF]. There exists mostly scarce information regarding the stability of the molecules'motion. Assessing the latter experimentally is usually tricky as it might involve ac-tually the destruction of the SM as soon as one tries to alter forcefully its properties. Here we see that a finer analysis of the SM motion allows to get a better estimation of its stability by simply looking at its weak fluctuations. It provides also clear indication regarding how the destabilization of the SM would occur. Considering here that the optical phase relationship between the leading and trailing solitons is not favorable for the SM's stability, one hypothesis we could formulate is that a larger fast trembling could eventually result into the SM achieving a more stable conformation (metastability). The SM can thus switch between limit cycle attractors. To conclude, we demonstrated that Dynamic Mode Decomposition is a suitable method for the analysis of DFT data. By reducing the noise, it allows the observation of minute phenomena that would have otherwise remained hidden. Moreover, the DyMD decomposition can be used to compress the data efficiently, without loss of information. Depending on the denoising which is desired, the initial data set can be sequentially reconstructed using a different number of modes. For instance, the single mode projection clearly lacks details. When it comes to the quantitative analysis of the experimental data, the DyMD is able to reveal how a complex oscillation pattern of the SM is shared between its elementary constitutive components; and separate the temporal evolution of the individual vibration modes from their spatial distribution. We are then able to describe complex vibration patterns that involve more than two pulses, and this technique could be extended to more complex solitonic systems [START_REF] Wu | Intelligent breathing soliton generation in ultrafast fiber lasers[END_REF]. As a practical example, we investigated the dynamics of a 3-Soliton molecule in an ultrafast fiber ring laser cavity. 
We were able to demonstrate that the soliton belonging to the same molecule may not experience the same vibration, and that at least two limit cycles of different kind could co-exist inside the same laser cavity. The optical phase difference of ∆φ between the leading and the trailing soliton is known to create repulsion. This poses the question regarding the meta-stability of some soliton molecules configurations. And it reinforces the interest in experiments (and data analysis) of acute precision as most transition phenomena start from weak instabilities. For this first proof-of-principle demonstration about the opportunities that are offered by the DyMD, we implemented the DyMD on the raw DFT spectral data, without performing any subsequent data treatment. The technique also has the denoising and compression capability. Regarding the analysis of the dynamics based on the analysis of the mode decomposition, the DyMD representation can be seen as a change of basis (or a projector, similar to a principal component analysis -PCA). Consequently, some DyMD decompositions could be more straightforward to interpret, depending on the system under study. For example, without changing the main conclusions, the DyMD could have been performed on the Fourier transformed data (Fig. 1-(d)) instead of the raw DFT data, or even on the resulting SM motion (Fig. 2). We believe that this letter paves the way for more detailed analysis of the SM dynamics. In particular, we demonstrated that the oscillatory pattern of SMs results from the interplay of individual vibration modes and we were able to extract the round-trip evolution of each mode. It is then possible to investigate further how the modes interact by constructing a simpler nonlinear model that would reproduce the interplay that is observed between the b k coefficients. Figure 1 : 1 Figure 1: (a) Experimental setup; PBS: polarization beam splitter; HDF: 545 m of Highly Dispersive Fiber; Pump: 980-nm laser diode; WDM: pump-signal multiplexer. (b) Single-shot spectra. (c) DFT Spectra after 2-Mode reduction by Dynamic Mode Decomposition. (d) (logarithmic scale) Fourier Transform of (b) showing three characteristic inter-Soliton seperations of 4.1 ps 4.6 ps and 8.7 ps (pink arrows). (e) Error of the DyMD in (b). -3 dB corresponds to 1 Bit of precision (hence -24 dB is 8-Bit precision). Subpanels (b-e) are represented in logarithmic scale. Figure 2 : 2 Figure 2: (a-b-c). Evolution of the inter-soliton separation for diffreent soliton pairs: {1 -2}, {2 -3}, and {1 -3}, respectively. (d-e-f): Same as above, but for the optical phase difference between the solitons. Black points stand for data processed using a Stavitzky-Golay filter. Orange: 4-Mode DyMD. Yellow: 2-Mode DyMD. Purple: 1-Mode DyMD. Figure 3 : 3 Figure 3: (a-b) Representation of the 1 st and 2 nd modes of the 2-Mode DyMD (c-d) Evolution of the coefficients b k , k = {1, 2} used for the reconstruction. (e) Logarithmic Scale. Spectra of evolving vibration mode's amplitude b k . Figure 4 : 4 Figure 4: Schematic representation of the two vibration modes, indicating the relative displacement of the dissipative solitons (arrows, with amplitude of the displacement indicated), and the concomitant changes of optical phase. These representations can be obtained numerically by zeroing either the b 1 or b 2 coefficients during the DyMD reconstruction. For clarity, the numbers have been rounded up. Figure 5 : 5 Figure 5: Spectrum of the round-trip evolution of the inter-soliton separation τ 2-3 . 
Blue: raw data. Red: data processed by a 4th-order Savitzky-Golay filter. Dark and light grey: results of the 2- and single-mode DyMD, respectively. Note the appearance of spurious harmonic tones, along with the disappearance of the fast oscillation's tone around 0.2 RT^-1. Funding This work was supported by the French ANR program, project OPTIMAL (contract ANR-20-CE30-0004). Data Availability Statement Data underlying the results presented in this paper may be obtained from the authors upon reasonable request. Disclosures The authors declare no conflicts of interest.
04097219
en
[ "info" ]
2024/03/04 16:41:18
2022
https://hal.science/hal-04097219/file/DeduplicationOverHeterogeneousAttributeTypesD-HAT-Liekah-Papadakis.pdf
Loujain Liekah email: [email protected] George Papadakis email: [email protected] Deduplication Over Heterogeneous Attribute Types (D-HAT) Keywords: Clustering, Entity matching, Data quality Deduplication is the task of recognizing multiple representations of the same real-world object. The majority of existing solutions focuses on textual data, this means that data sets containing boolean and numerical attribute types are rarely considered in the literature, while the problem of missing values is inadequately covered. Supervised solutions cannot be applied without an adequate number of labelled examples, but training data for deduplication can only be obtained through time-costly processes. In high dimensional data sets, feature engineering is also required to avoid the risk of overfitting. To address these challenges, we go beyond existing works through D-HAT, a clustering-based pipeline that is inherently capable of handling high dimensional, sparse and heterogeneous attribute types. At its core lies: (i) a novel matching function that effectively summarizes multiple matching signals, and (ii) MutMax, a greedy clustering algorithm that designates as duplicates the pairs with a mutually maximum matching score. We evaluate D-HAT on five established, real-world benchmark data sets, demonstrating that our approach outperforms the state-of-the-art supervised and unsupervised deduplication algorithms to a significant extent. Introduction Integrating overlapping and complementary data sets is a common process that creates new and valuable knowledge [START_REF] Chen | Big data: a survey[END_REF]. The main task of integration is to identify duplicate records, which represent the same real-world entity, such as products, institutes, or patients. This task is called deduplication [START_REF] Dong | Big data integration[END_REF], entity matching [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF], entity resolution [START_REF] Papadakis | Blocking and filtering techniques for entity resolution: a survey[END_REF] or record linkage [START_REF] Fellegi | A theory for record linkage[END_REF]. It constitutes a crucial task that improves the data quality by repairing and curating data sources [START_REF] Fan | Interaction between record matching and data repairing[END_REF], reducing the storage size, and preparing data for downstream applications [START_REF] Dong | Big data integration[END_REF]. Existing solutions for deduplication are based on calculating pairwise similarity scores from one or more attributes [START_REF] Christophides | An overview of end-to-end entity resolution for big data[END_REF]. The unsupervised methods create This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 875171. a similarity graph, where the nodes correspond to records and the edges are weighted by the matching scores of the adjacent nodes [START_REF] Hassanzadeh | Framework for evaluating clustering algorithms in duplicate detection[END_REF]. The graph is then partitioned into clusters such that all nodes within each cluster correspond to duplicate records. These approaches typically calculate matching scores by treating all attributes as textual data [START_REF] Christophides | An overview of end-to-end entity resolution for big data[END_REF]. However, real-world data sets involve heterogeneous attribute types, i.e., numerical, categorical and boolean attributes. 
Casting these types as strings disregards important information and possibly leads to inaccurate matching scores. For example, the prices "14" and "14.00" are identical as numbers, but partially similar when compared as sequences of characters and totally dissimilar when treated as tokens. Hence, unsupervised techniques need to correctly model and support heterogeneous attribute types. On the other hand, supervised methods typically model deduplication as a binary classification task [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF]. They convert each pair of records into a feature vector by applying similarity metrics on different attributes. The vectors are then labelled to train a classifier that predicts the matching status for unlabelled pairs. However, these approaches face multiple challenges: (i) The curse of dimensionality, i.e., tasks become exceedingly difficult with a higher number of dimensions. (ii) Labeled data is scarce, but obtaining it through crowd-sourcing is costly and time-consuming [START_REF] Wang | CrowdER: crowdsourcing entity resolution[END_REF]. Moreover, its size and quality affect the end result to a significant extent [START_REF] Mudgal | Deep learning for entity matching: a design space exploration[END_REF], but are hard to ensure, due to the heavy class imbalance. (iii) Supervised methods require long training times [START_REF] Mudgal | Deep learning for entity matching: a design space exploration[END_REF]. To address these shortcomings, we introduce D-HAT (Deduplication with Heterogeneous Attribute Types), a novel clustering-based pipeline for end-to-end deduplication. D-HAT goes beyond existing works in three ways: (i) It inherently supports data sets with heterogeneous types of attributes and a large portion of missing values (i.e., high sparsity). (ii) It inherently supports and leverages complex schemata of high dimensionality. (iii) It achieves state-of-the-art results without requiring any labelled data. Our contributions are the following: -We propose D-HAT, an automated end-to-end, clustering-based framework for deduplicating high-dimensional data sets with heterogeneous attribute types and missing values. Its matching algorithm uses as features a comprehensive set of signals, coupling them with a novel greedy clustering method that defines as matches the records with mutually maximum matching scores. -We conduct experiments on established real benchmark data sets, showing that: (i) In terms of effectiveness, D-HAT outperforms the state-of-the-art supervised and unsupervised baseline methods. (ii) In terms of time efficiency, D-HAT has an undeniable advantage over the baseline methods. -We have publicly released all data and code used in our experiments through https://github.com/Loujainl/D-HAT. Related Work The growing research on deduplication reflects its increasing importance, with numerous methods tackling various aspects [START_REF] Christen | The data matching process[END_REF][START_REF] Christophides | An overview of end-to-end entity resolution for big data[END_REF][START_REF] Dong | Big data integration[END_REF]. One of deduplication's main challenges is its quadratic complexity: in the worst case, it examines all possible pairs of records. Blocking is typically used to alleviate this complexity and to scale deduplication to voluminous data sets [5,[START_REF] Papadakis | Blocking and filtering techniques for entity resolution: a survey[END_REF]. Blocking puts together similar records in groups called blocks by applying blocking schemes or functions. A blocking function extracts signatures from every record, dividing the input data set into a set of overlapping blocks; comparisons are reduced to candidates, i.e., pairs of records sharing at least one block, reducing the computational cost to a significant extent. Yet, the higher time efficiency comes with the risk of missing potential matches [START_REF] Papadakis | Comparative analysis of approximate blocking techniques for entity resolution[END_REF]. After blocking, matching is performed to determine the degree of similarity between the candidate pairs of records. In essence, it applies similarity functions to the values of selected attributes of the candidate records, obtaining numerical matching scores.
Next, it determines whether the resulting degree of similarity is sufficient for designating two records as duplicates. We distinguish the matching algorithms into unsupervised and supervised ones. The former category includes a collection of methods that are provided by JedAI [START_REF] Papadakis | Three-dimensional entity resolution with JedAI[END_REF][START_REF] Papadakis | The return of JedAI: end-to-end entity resolution for structured and semi-structured data[END_REF] and Stringer [START_REF] Hassanzadeh | Framework for evaluating clustering algorithms in duplicate detection[END_REF], with ZeroER [START_REF] Wu | ZeroER: entity resolution using zero labeled examples[END_REF] constituting the state-of-theart unsupervised approach; it represents every candidate pair as a feature vector. Unlike supervised methods, it does not require a training set. Instead, at its core lies the observation that the distribution of the feature vectors for duplicate records differs from that of the non-matching records. Based on this idea, it learns the parameters of the Gaussian distribution of matching vectors by iteratively applying expectation maximization to compute the posterior probability of a matching label given the feature vector. A posterior probability higher than 0.5 is considered as an indication of duplicate records. Among the supervised methods, the most popular one is Magellan [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF], a system that combines a variety of features with the main machine learning classifiers, such as decision trees, logistic regression and support vector machines. After providing an annotated sample of candidate pairs T , matching is performed by training a classifier over T . Magellan also offers a set of blocking methods. DeepMatcher [START_REF] Mudgal | Deep learning for entity matching: a design space exploration[END_REF] is a space of matching solutions based on neural networks with three modules: i) attribute embedding, ii) attribute similarity representation, and iii) a classification module. In most cases, the first module relies on pre-trained fastText embeddings [START_REF] Bojanowski | Enriching word vectors with subword information[END_REF] to convert every token to a vector. EMTransformer [START_REF] Brunner | Entity matching with transformer architectures -a step forward in data integration[END_REF] and DITTO [START_REF] Li | Deep entity matching: challenges and opportunities[END_REF] go beyond DeepMatcher by leveraging attentionbased transformers like BERT [START_REF] Devlin | BERT: pre-training of deep bidirectional transformers for language understanding[END_REF], and RoBERTa [START_REF] Liu | RoBERTa: a robustly optimized BERT pretraining approach[END_REF]. These solutions perform well on textual data, outperforming Magellan in terms of accuracy [START_REF] Brunner | Entity matching with transformer architectures -a step forward in data integration[END_REF][START_REF] Li | Deep entity matching: challenges and opportunities[END_REF][START_REF] Mudgal | Deep learning for entity matching: a design space exploration[END_REF]. We disregard them, as they require large training sets and many hours of training [START_REF] Mudgal | Deep learning for entity matching: a design space exploration[END_REF] in order to fine-tune hundreds of thousands of parameters [START_REF] Wang | CorDEL: a contrastive deep learning approach for entity linkage[END_REF]. Preliminaries A data set T is a collection of records. 
A record is an object description denoted by r_i, where i is a unique identifier. Records are defined by their attributes. The set of attributes in T is denoted by T.A, while the value of a specific attribute a in record r_i is symbolized as r_i.a; r_i.a = N/A indicates that r_i lacks a value for a, i.e., there is a missing or a null value. Two records, r_i and r_j, that describe the same real-world object are matching, i.e., duplicates, a situation denoted by r_i ≡ r_j. A data set is called clean if it does not contain any duplicates. Deduplication is the task of identifying and linking duplicate records. A characteristic of this task is that the number of duplicate records scales linearly with the size of the input, unlike its computational cost, which increases quadratically [START_REF] Getoor | Entity resolution: theory, practice & open challenges[END_REF]. As a result, Deduplication constitutes a heavily imbalanced task and its effectiveness is measured with respect to the following measures: 1. Recall, the portion of existing duplicates that are detected, i.e., Re = TP / (TP + FN). 2. Precision, the portion of record pairs characterized as duplicates that are indeed matching, i.e., Pr = TP / (TP + FP). 3. F-Measure, the harmonic mean of Recall and Precision, F1 = 2 × (Pr × Re) / (Pr + Re), where TP stands for the true positive pairs, FP for the false positive ones, and FN for the false negative ones. In this context, Deduplication can be formally defined as follows: Problem 1 (Deduplication). Given a data set T, detect the set of duplicate pairs of records, D = {(r_i, r_j) ∈ T × T : i ≠ j ∧ r_i ≡ r_j}, such that Recall, Precision and F-Measure are maximized. Our Approach We now delve into our framework, whose pipeline is illustrated in Fig. 1. Step 1: Data Cleaning. The first step prepares the input by determining the core characteristics of the attributes describing the given data set(s), i.e., it calculates the number of unique values and the data type per attribute. Attributes that have two unique values are converted to boolean to obtain a more precise degree of similarity. Attributes with very few unique values (<10) are treated as categorical variables. Numerical attributes are identified through regular expressions that detect quantities, possibly accompanied by an optional unit of measurement. E.g., an attribute value width = ''42.8 in'' is transformed into width = 42.8 and is marked as a numeric data type. Min-max normalization is then performed on the values of every numeric attribute a, i.e., each value is rescaled to (r_i.a - min_a) / (max_a - min_a), where min_a and max_a denote the minimum and maximum value of a in the data set. Step 2: Attribute Selection. Attributes with a majority of missing values lack valuable information for deduplication and, thus, can be disregarded. The coverage of an attribute a expresses the portion of non-empty values in a across all input records; the fewer missing values there are, the higher the coverage. We formally define the coverage c of each attribute as: c(a) = 1 - |{r_i ∈ T : r_i.a = N/A}| / |T|. This step discards the attributes with a coverage below a specific threshold. Preliminary experiments demonstrated that 0.1 constitutes an effective value. Step 3: Blocking. This step is critical because it determines two things: 1. Time efficiency, because the processing time of the following steps is determined by the number of candidates in the resulting blocks. 2. Effectiveness, because the recall of D-HAT is bounded by the recall of blocking; the false negative pairs of records, which have no block in common, cannot be detected by the subsequent steps, and are excluded from the final output.
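Before turning to how blocking balances these goals, the snippet below gives a minimal sketch of Steps 1 and 2 as described above (type detection, min-max normalization and coverage-based attribute selection) over a pandas DataFrame; the regular expression, the thresholds on unique values and all function names are illustrative assumptions, not the released D-HAT code.

import re
import pandas as pd

QUANTITY = re.compile(r"^\s*(-?\d+(?:[.,]\d+)?)\s*[a-zA-Z\"']*\s*$")   # number plus optional unit

def clean_and_select(df: pd.DataFrame, c_min: float = 0.1) -> pd.DataFrame:
    """Step 1 (type detection, normalization) and Step 2 (coverage-based attribute selection)."""
    out = {}
    for a in df.columns:
        col = df[a]
        coverage = 1.0 - col.isna().mean()             # c(a) = 1 - |missing values| / |T|
        if coverage < c_min:                           # Step 2: drop overly sparse attributes
            continue
        values = col.dropna().astype(str)
        uniq = values.unique()
        if len(uniq) <= 2:                             # boolean attribute
            out[a] = values == uniq[0] if len(uniq) else col
        elif len(uniq) < 10:                           # categorical attribute
            out[a] = col.astype("category")
        elif values.str.match(QUANTITY).all():         # numeric attribute, e.g. "42.8 in" -> 42.8
            nums = values.str.extract(QUANTITY)[0].str.replace(",", ".").astype(float)
            out[a] = (nums - nums.min()) / (nums.max() - nums.min())   # min-max normalization
        else:                                          # textual attribute, kept as is
            out[a] = col
    return pd.DataFrame(out)

Keeping c_min = 0.1, as in the preliminary experiments mentioned above, only drops attributes whose values are missing in more than 90% of the records.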
Therefore, it is crucial that blocking balances these two competing goals: the reduced search space and the high effectiveness. D-HAT is generic enough to accommodate any blocking method that meets this requirement. Preliminary experiments indicated that Magellan's [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF] overlap blocker is a robust approach for creating blocks of high performance (see Sect. 5 for more details). It defines as candidate pairs those sharing at least one token in the values of a specific attribute. D-HAT applies the overlap blocker to all textual attributes in the given data sets and opts for the one minimizing the number of candidates, while maximizing coverage; high coverage implicitly signals high recall after blocking. Step 4: Feature Matrix. Similar to supervised approaches, D-HAT represents each pair of records as a feature vector by applying type-specific normalized similarity functions to selected attributes. Unlike supervised approaches, these vectors are unlabelled. In more detail, after detecting the type of every attribute in Step 1, D-HAT creates a feature vector V_{i,j} for each candidate pair of records (r_i, r_j) ∈ B, where B is the set of blocks produced by the previous step and the k-th feature/dimension in V_{i,j}, V^k_{i,j}, stems from a similarity function that is compatible with the type of the k-th attribute, a_k. If the value of either record for a_k is empty or incorrect (i.e., incompatible with the type of a_k), V^k_{i,j} = 'N/A', which stands for a missing feature. Note that this step does not require any domain knowledge from the user. D-HAT automatically detects the attribute type and applies the appropriate similarity functions in order to create the features. In particular, the following functions are used by D-HAT: • For boolean and categorical attributes, the equality operator. • For numerical attributes, four similarity functions are used: 1. The equality operator. 2. The Euclidean similarity, V^k_{i,j} = 1 - EucDist(r_i.a_k, r_j.a_k). 3. The relative similarity, V^k_{i,j} = 1 - |r_i.a_k - r_j.a_k| / max(r_i.a_k, r_j.a_k). 4. The normalized Manhattan similarity, V^k_{i,j} = |r_i.a_k - r_j.a_k| / max(r_i.a_k, r_j.a_k). • For textual attributes, the following functions are used: (i) Syntactic similarity measures. D-HAT distinguishes textual attributes into short strings, if their average value entails less than five words, and long strings otherwise. For both types, it employs the following functions: 1. Jaccard similarity: V^k_{i,j} = |token set(r_i.a_k) ∩ token set(r_j.a_k)| / |token set(r_i.a_k) ∪ token set(r_j.a_k)|. 2. Generalized Jaccard, which extends the previous measure to consider the bags of tokens: V^k_{i,j} = |bag(r_i.a_k) ∩ bag(r_j.a_k)| / |bag(r_i.a_k) ∪ bag(r_j.a_k)|. 3. Overlap coefficient: V^k_{i,j} = |token set(r_i.a_k) ∩ token set(r_j.a_k)| / min(|token set(r_i.a_k)|, |token set(r_j.a_k)|). 4. Bag similarity: V^k_{i,j} = 1 - max(|bag(r_i.a_k) - bag(r_j.a_k)|, |bag(r_j.a_k) - bag(r_i.a_k)|) / max(|r_i.a_k|, |r_j.a_k|). 5. Dice similarity: V^k_{i,j} = 2 × |token set(r_i.a_k) ∩ token set(r_j.a_k)| / (|token set(r_i.a_k)| + |token set(r_j.a_k)|). Additionally, D-HAT uses two similarity functions for short strings: -Levenshtein similarity, the minimum number of edit operations (insert, delete or substitute) required to transform one string into another. -Hamming similarity, similar to Levenshtein except that it allows only substitutions. (ii) Semantic similarity measures. D-HAT exploits pre-trained embedding representations of textual data. Two types of representations are actually used: a) Word-based models like word2vec and GlobalVectors (GloVe). They substitute each token (word) by a meaningful numeric vector that is learnt from training a shallow feedforward neural network on large, external, un-annotated textual corpora, such as Google News and Wikipedia. In these models, words with contextual similarity have linearly related vector representations. However, they cannot produce vector representations for words that are out-of-vocabulary. b) To address this limitation, skipgram models like fastText [START_REF] Bojanowski | Enriching word vectors with subword information[END_REF] represent each word by the sum of the vector representations of its bag of characters. Thus, they are capable of learning a recurrent neural network that yields vector representations for words, independently of their occurrence in the training data. To extract numeric features/dimensions from the three pre-trained embeddings (i.e., word2vec, GloVe and fastText), D-HAT applies three similarity functions to the vectors of two records: the cosine, the Euclidean and the word mover's similarity [START_REF] Kusner | From word embeddings to document distances[END_REF]. For the last two functions, the homonymous distance function d is transformed into a similarity value sim as follows: sim = 1 / (1 + d). (iii) Hybrid similarity measures. This configuration combines the aforementioned syntactic similarity measures with the semantic ones, given that they capture complementary matching evidence. Overall, D-HAT creates one feature per boolean and categorical attribute, four per numeric one, as well as nine semantic features and up to seven syntactic ones per textual attribute. Step 5: Matching Scores. The goal of this step is to estimate the matching likelihood for each pair of candidates based on the feature matrix of the previous step. This is carried out in two steps: (i) Binarizing the feature vectors. In essence, D-HAT treats each feature as a vote for a "match" (1) or a "non-match" (0) decision. The dimensions of boolean and categorical attributes are already binary. The dimensions of numerical and textual attributes are defined in [0, 1], with higher values indicating a higher matching likelihood. To binarize them, D-HAT employs a similarity threshold θ ∈ [0, 1], common to all dimensions, such that all numeric scores above θ are converted into "match" votes (1), while the rest become "non-match" votes (0). All dimensions with a 'N/A' value are ignored. (ii) Score estimation. To calculate the matching score m_{i,j} for two candidate records, r_i and r_j, we aggregate the dimensions of their binary feature vector V_{i,j} into a single value through their mean, i.e., m_{i,j} = Σ_{k=1}^{N} V^k_{i,j} / (N - n), where N is the total number of features, n is the number of missing ones and V^k_{i,j} ∈ {0, 1}. At the end of these two steps, the matching scores of all pairs are calculated and stored in a matrix M. The records and the matrix define a weighted graph G(V, M), where the set of nodes V represents the input records, and M is the adjacency matrix of weights. G(V, M) is referred to as the similarity graph. Step 6: MutMax Clustering. The final step receives as input the similarity graph G(V, M) and partitions it into a set of disjoint clusters, such that every cluster corresponds to a unique entity, containing all duplicate records describing it. The partitioning is performed by MutMax, a greedy approach that defines as duplicates the pairs of records with mutually maximum scores. More specifically, MutMax operates as follows: For each record r_i, all candidates are sorted in decreasing matching scores and the top one, denoted r_i^max = r_j, is selected as the potential match. If r_i was also selected as the potential match for r_j, the records r_i and r_j are designated as matches. The rest of the candidate pairs are ignored. Overall approach. The D-HAT algorithm is outlined in Fig. 2. Step 1 (Data cleaning) is applied first (Line 1). Step 2 (Attribute selection) is performed given threshold c_min (Lines 2-7). The overlap blocker is applied to each attribute (Lines 8-10).
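Before completing the walk through Fig. 2, here is a minimal sketch of the score aggregation (Step 5) and of MutMax clustering (Step 6) as described above; the dictionary-based data structures and function names are illustrative assumptions rather than the released implementation.

def matching_score(feature_vector, theta=0.7):
    """Step 5: turn one feature vector into the average of its binary 'match' votes."""
    votes = []
    for v in feature_vector:
        if v == 'N/A':                           # missing features are ignored
            continue
        if isinstance(v, bool):                  # boolean / categorical equality features
            votes.append(1 if v else 0)
        else:                                    # numeric / textual similarities in [0, 1]
            votes.append(1 if v > theta else 0)
    return sum(votes) / len(votes) if votes else 0.0

def mutmax(scores):
    """Step 6: MutMax clustering. `scores` maps a candidate pair (i, j) to its score m_ij.
    A pair is returned as a duplicate iff each record is the other's best-scoring candidate."""
    best = {}                                    # record id -> (best score, best candidate)
    for (i, j), m in scores.items():
        for a, b in ((i, j), (j, i)):
            if a not in best or m > best[a][0]:
                best[a] = (m, b)
    duplicates = set()
    for i, (_, j) in best.items():
        if best.get(j, (None, None))[1] == i:    # mutually maximum scores
            duplicates.add((min(i, j), max(i, j)))
    return duplicates

Because every candidate pair is touched a constant number of times and no sorting is needed, this formulation keeps the cost linear in the number of candidate pairs, in line with the O(|B|) complexity argument below.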
A performance score is computed per attribute by multiplying the coverage of attribute a with the reduction ratio [5]: getScore(B a , a) = c(a) • RR(B a , T ), where |B a | denotes the total number of candidate pairs in the blocks B a . The attribute with the highest score is selected (Lines 11-14), and is applied to retrieve the final set of blocks (Line 16). The next loop simultaneously applies Steps 4 and 5. It builds a twodimensional array M with a score for each pair of compared records. In more detail, F ∩ a is the set of functions applicable for attribute a. For each feature higher than θ, the overall similarity is incremented by one matching vote (Lines [START_REF] Wang | CorDEL: a contrastive deep learning approach for entity linkage[END_REF][START_REF] Wu | ZeroER: entity resolution using zero labeled examples[END_REF][START_REF] Christen | A survey of indexing techniques for scalable record linkage and deduplication[END_REF]. The average score is finally estimated for the current pair of candidates (Line 29). Finally, MutMax is applied to M (Lines 31-36). For each record, the most similar candidate is specified and stored in array O (Line 31). Using O, D-HAT identifies the record pairs that are mutually most similar (Lines 32-33), adding them to the output (Line 34). Note that D is a set and that each output pair is formed with the lowest id in the left part (i.e., i < j in (i, j) in Line 34); as a result, no duplicate pairs are returned as output. In terms of time complexity, the cost of Steps 1, 2 and 3 is linear with the number of attributes in the given data set T , i.e., O(|T.A|). For Steps 4 and 5, the cost is O(|B|). For Step 6 no sorting is required. Instead, D-HAT merely iterates once over all cells in the two-dimensional array M . A hash table can be used to store the estimated similarities in practice. As a result, both the time and space complexity of Step 6 (and the entire algorithm) are linear with the number of candidate pairs after blocking, i.e., O(|B|). Experimental Evaluation Setup. D-HAT is implemented in Python 3.8.5. All experiments were run on an Ubuntu 18.04.5 server with a 12-core Intel Xeon D-2166NT @2GHz, 64 GB of RAM and 300 GB HDD. A single core was employed in all time measurements. Benchmark Data Sets. We employ five established data sets that come from multiple domains: products, bibliography, restaurants, and healthcare. Immucare is a healthcare dataset matching two hospital visits of the same patient. The technical details of these data sets [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF][START_REF] Wu | ZeroER: entity resolution using zero labeled examples[END_REF] are summarized in Table 1. Baseline Systems. We compare the performance of D-HAT with Magellan [START_REF] Konda | Magellan: toward building entity matching management systems[END_REF] and ZeroER [START_REF] Wu | ZeroER: entity resolution using zero labeled examples[END_REF]. For the former, we use decision tree as the classification algorithm, while for the latter, no configuration is needed. Evaluation Measures. We use the standard measures of recall, precision, and F1-score, which are defined in Sect. 3. We also report the overall run-time, i.e., the time that intervenes between receiving the data set(s) as input and producing the duplicate pairs as output. We repeat every measurement three times and report the average. 
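For completeness, the token-overlap blocking and the coverage multiplied by reduction-ratio scoring used to pick the blocking attribute (the getScore function above) can be sketched as follows; this is an illustrative reading of the description, not Magellan's actual blocker, and the reduction ratio is taken in its standard form (one minus the fraction of all possible pairs that survive blocking).

from collections import defaultdict
from itertools import combinations

def overlap_block(records, attribute):
    """Records sharing at least one token of `attribute` become candidate pairs."""
    blocks = defaultdict(set)
    for rid, rec in records.items():
        value = rec.get(attribute)
        if value:
            for token in str(value).lower().split():
                blocks[token].add(rid)
    return {pair for ids in blocks.values() for pair in combinations(sorted(ids), 2)}

def blocking_score(records, attribute):
    """getScore(B_a, a) = coverage(a) * reduction ratio of the resulting candidate set."""
    candidates = overlap_block(records, attribute)
    n = len(records)
    coverage = sum(1 for rec in records.values() if rec.get(attribute)) / n
    reduction_ratio = 1.0 - len(candidates) / (n * (n - 1) / 2)
    return coverage * reduction_ratio

Applying blocking_score to every textual attribute and keeping the highest-scoring one reproduces the selection whose results are reported next.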
Step 3: Blocking D-HAT applies Magellan's overlap blocker to all attributes and selects as optimal the one minimizing the number of candidates, while maximizing coverage. The resulting performance appears in Table 2. In all cases, the number of candidate pairs is reduced by whole order of magnitude (i.e., 90%) in comparison to the brute-force approach (i.e., |S| × |T |). The only exception is Abt-Buy, where the candidates drop by 86%, which is a dramatic reduction of the search space, too. Nevertheless, the recall in all cases remains rather high, above 90%. This means that the vast majority of duplicate pairs co-occur in at least one block. Note that precision after blocking remains very low for most data sets. To raise it to acceptable levels, matching is required. Note also that compared to the overall run-time of D-HAT and the rest of the methods (in Fig. 3), the overhead of blocking is negligible (< 10% in all cases). The only exception is Immucare, where the overhead of blocking is high, due to the very large number of attributes retained after Step 2 (75). Steps 4-6: Matching To ensure fairness, we apply the same blocker to the same key attribute for both baseline systems, (e.g., we use the 'phone' attribute instead of 'name' in Fodors-Zagats). Note that for Amazon-Google, ZeroER could not create its feature matrix within a time limit of 6 h. To complete the assessment, we combined it with the feature vectors created by Magellan instead. As a result, the performance of ZeroER could be slightly different from that reported in [START_REF] Wu | ZeroER: entity resolution using zero labeled examples[END_REF]. The resulting performance of all algorithms with respect to precision (Pr), recall (Re) and f-measure (F1) appears in Table 3, while the corresponding runtimes are reported in Fig. 3. Note that after preliminary experiments, we set c min = 0.1 and θ = 0.7 for D-HAT in all cases. Note also that D-HAT is combined with three different groups of features: (i) The syntactic ones, which include only the syntactic similarity functions for textual attributes along with the specialized functions of boolean, categorical and numeric attributes. (ii) The semantic features, which differ from the previous group in that they replace the syntactic similarity functions with the semantic ones. (iii) The hybrid features, which employ all similarity functions for all types of attributes defined in Sect. [START_REF] Christen | The data matching process[END_REF]. In this way, we are able to examine the contribution of the two types of textual similarity functions, which account for the majority of features used by D-HAT. Compared to blocking, precision has actually increased by whole orders of magnitude. This emphasis on precision should be attributed to MutMax clustering, which associates every record only with its most similar candidate. Comparing the various groups of features between them, we observe that the syntactic ones consistently outperform the semantic ones. The reason is that most data sets contain domain-specific terminology. As a result, especially word2vec and GloVe suffer from a large portion of out-of-vocabulary terms. The only exception is DBLP-ACM, which involves long textual attributes like venue names and publication titles; in these settings, the evidence provided by semantic similarities outperforms the syntactic ones, albeit by just ∼2%. In terms of time-efficiency, the advantage of syntactic similarity functions is clear in all cases, as shown in Fig. 3. 
The run-time of D-HAT increases by a whole order of magnitude in almost all cases when the syntactic similarity features are replaced with the semantic ones. This is caused by the large number of lookups and computations required for converting every attribute value into a high-dimensional embedding and a similarity score. It is interesting to examine whether the combination of syntactic and semantic similarities justifies the lower time efficiency by an increase in effectiveness. This is only true for Amazon-Google, where the F1 of the hybrid features is higher than that of the syntactic ones by ∼10%. In all other cases, the hybrid features lie between the two other groups, usually closer to the top-performing one. Hence, D-HAT should preferably be combined with the syntactic group of features.

Compared to ZeroER, Table 3 shows that D-HAT with syntactic features achieves significantly better effectiveness in most cases. Its f-measure is higher by 50%, on average, across the five data sets. At the same time, Fig. 3 demonstrates that D-HAT is consistently faster than ZeroER by whole orders of magnitude (e.g., 1 min vs 6 h over Amazon-Google); the sole exception is DBLP-ACM, where D-HAT is slower, due to the computation of 10 syntactic similarity functions over textual values. Note also that D-HAT takes into account attributes with a high level of noise (missing values, heterogeneity of existing values, errors), which inevitably corrupts some matching signals.

Compared to Magellan, D-HAT achieves a higher f-measure by more than 13% on the first two data sets, while on the next three data sets both methods exhibit practically identical performance (i.e., their f-measures differ by less than 1%). The competitive performance of Magellan stems from its supervised functionality: in each data set, 70% of the candidate pairs are used for training its classification model, leaving only 30% of the pairs as a testing set. In contrast, D-HAT processes all candidate pairs and its performance is bounded by blocking. In terms of time efficiency, we observe in Fig. 3 that D-HAT takes a clear lead in all cases, as its run-time is lower than Magellan's, by up to a whole order of magnitude (e.g., 35 vs 400 s over Abt-Buy).

Overall, D-HAT typically outperforms the state-of-the-art unsupervised deduplication method to a significant extent in all respects. Compared to the state-of-the-art supervised approach, it exhibits similar, if not higher, effectiveness at a much lower run-time, despite the lack of labelled instances.

Sensitivity Analysis. The only configuration parameter that is crucial for the performance of D-HAT is the similarity threshold θ, whose value depends on the level of noise and heterogeneity in the data. To assess its impact on the overall performance of D-HAT, we consider all values in the range [0.5, 1] with a step of 0.1. The results appear in Fig. 4; due to lack of space, we report three of the five data sets. We observe that this parameter has no effect on any evaluation measure over DBLP-ACM. The reason is that the pairs identified as matches in this data set exhibit very high similarity (practically 1.0) for most of the features employed by D-HAT; as a result, the matching decisions of MutMax clustering are not altered by the value of θ. For Abt-Buy and Amazon-Google, we observe that, up to 0.7, the performance of D-HAT improves (Abt-Buy) or remains the same (Amazon-Google).
For θ > 0.7, a small increase in the similarity threshold yields slightly lower performance with respect to all measures. The reason is that both data sets are challenging tasks, as they contain many corner cases, i.e., records that are close to the decision boundary. Overall, we can conclude that D-HAT is robust with respect to its similarity threshold θ, with θ = 0.7 constituting a reliable default value.

Conclusions. We presented D-HAT, an efficient, fully automated, clustering-based, end-to-end deduplication system. D-HAT can process high-dimensional data sets with heterogeneous attribute types and missing values without requiring user intervention or any labelled data. The thorough experimental study on benchmark data sets demonstrates that our system achieves high accuracy across different benchmark tasks and outperforms supervised and unsupervised baselines. The main benefit of D-HAT over unsupervised methods is its high accuracy on all standard tasks, whereas compared to supervised methods, D-HAT eliminates the extra time and effort needed from domain experts to annotate a training set. It also saves the time required to find and train an efficient classification model. In the future, we plan to parallelize D-HAT on top of Apache Spark in order to scale it to huge data sets with millions of records.

(iii) It achieves state-of-the-art results without requiring any labelled data. Our contributions are the following:
- We propose D-HAT, an automated, end-to-end, clustering-based framework for deduplicating high-dimensional data sets with heterogeneous attribute types and missing values. Its matching algorithm uses as features a comprehensive set of signals, coupling them with a novel greedy clustering method that defines as matches the records with mutually maximum matching scores.
- We conduct experiments on established real benchmark data sets, showing that: (i) in terms of effectiveness, D-HAT outperforms the state-of-the-art supervised and unsupervised baseline methods; (ii) in terms of time efficiency, D-HAT has an undeniable advantage over the baseline methods.
- We have publicly released all data and code used in our experiments through https://github.com/Loujainl/D-HAT.

Fig. 1. The end-to-end pipeline of D-HAT.

[...] |r_i.a_k - r_j.a_k| / max(r_i.a_k, r_j.a_k).
4. The normalized Manhattan similarity, V^k_{i,j} = |r_i.a_k - r_j.a_k| / max(r_i.a_k, r_j.a_k).
• For textual attributes, the following functions are used:
(i) Syntactic similarity measures. D-HAT distinguishes textual attributes into short strings, if their average value contains fewer than five words, and long strings otherwise. For both types, it employs the following functions:
1. Jaccard similarity: V^k_{i,j} = |token_set(r_i.a_k) ∩ token_set(r_j.a_k)| / |token_set(r_i.a_k) ∪ token_set(r_j.a_k)|.
2. Generalized Jaccard, which extends the previous measure to consider the bags of tokens: V^k_{i,j} = |bag(r_i.a_k) ∩ bag(r_j.a_k)| / |bag(r_i.a_k) ∪ bag(r_j.a_k)|.
3. Overlap coefficient: V^k_{i,j} = |token_set(r_i.a_k) ∩ token_set(r_j.a_k)| / min(|token_set(r_i.a_k)|, |token_set(r_j.a_k)|).
4. Bag: V^k_{i,j} = 1 - max(|bag(r_i.a_k) - bag(r_j.a_k)|, |bag(r_j.a_k) - bag(r_i.a_k)|) / max(|r_i.a_k|, |r_j.a_k|).
5. Dice similarity: V^k_{i,j} = 2 × |token_set(r_i.a_k) ∩ token_set(r_j.a_k)| / (|token_set(r_i.a_k)| + |token_set(r_j.a_k)|).
Additionally, D-HAT uses two similarity functions for short strings:

Fig. 2. The end-to-end algorithm of D-HAT.
Fig. 3. Run-time in seconds.
Fig. 4. Performance of D-HAT with syntactic features when varying the threshold θ.
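The token-based similarity functions listed above are straightforward to implement. The following is a minimal sketch, not the released D-HAT code; the whitespace tokenizer, the interpretation of |r.a_k| as the number of tokens of the value, and the handling of empty values are illustrative assumptions.

```python
from collections import Counter

def tokens(s):
    return s.lower().split()          # simple whitespace tokenizer (assumption)

def jaccard(a, b):
    A, B = set(tokens(a)), set(tokens(b))
    return len(A & B) / len(A | B) if A | B else 0.0

def generalized_jaccard(a, b):
    A, B = Counter(tokens(a)), Counter(tokens(b))
    union = sum((A | B).values())
    return sum((A & B).values()) / union if union else 0.0

def overlap_coefficient(a, b):
    A, B = set(tokens(a)), set(tokens(b))
    m = min(len(A), len(B))
    return len(A & B) / m if m else 0.0

def dice(a, b):
    A, B = set(tokens(a)), set(tokens(b))
    s = len(A) + len(B)
    return 2 * len(A & B) / s if s else 0.0

def bag_similarity(a, b):
    A, B = Counter(tokens(a)), Counter(tokens(b))
    dist = max(sum((A - B).values()), sum((B - A).values()))
    m = max(sum(A.values()), sum(B.values()))
    return 1.0 - dist / m if m else 0.0
```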
Table 1. Technical characteristics of the benchmark data sets. |S|, |T| and |D| stand for the number of source records, target records and duplicate pairs, respectively.

Data set        |S|    |T|    |D|    #Attributes  #Numerical  #Bool. & Cat.  #Textual  #Selected
Amazon-Google   1,363  3,226  1,298  4            1           0              2         3
Abt-Buy         1,081  1,092  1,095  3            1           0              2         3
DBLP-ACM        2,614  2,294  2,223  4            1           1              2         3
Fodors-Zagats   533    331    112    5            0           0              5         5
Immucare        305    310    305    213          32          6              37        75

Table 2. Blocking performance. Time in seconds.

Data set        Key attribute   #Candidates  Recall  Prec   Time
Amazon-Google   Name            131,214      0.995   0.010  7.3
Abt-Buy         Name            164,072      0.994   0.007  2.6
DBLP-ACM        Authors         318,404      0.993   0.007  19.4
Fodors-Zagat    Phone           111          0.929   0.936  0.7
Immucare        Date of Birth   311          1.000   0.981  26.5

Table 3. Matching effectiveness of D-HAT, Magellan and ZeroER across all data sets (Pr / Re / F1). The best F1 per data set is underlined.

Data set  D-HAT (syntactic)      D-HAT (semantic)       D-HAT (hybrid)         Magellan               ZeroER
A-G       0.904 / 0.479 / 0.626  0.828 / 0.349 / 0.534  0.925 / 0.532 / 0.675  0.513 / 0.573 / 0.542  0.663 / 0.385 / 0.487
A-B       0.818 / 0.402 / 0.539  0.635 / 0.174 / 0.274  0.824 / 0.346 / 0.487  0.440 / 0.443 / 0.442  0.220 / 0.601 / 0.322
D-A       0.992 / 0.956 / 0.974  0.995 / 0.980 / 0.987  0.997 / 0.974 / 0.985  0.980 / 0.983 / 0.981  0.936 / 0.945 / 0.940
F-Z       0.981 / 0.929 / 0.954  0.971 / 0.911 / 0.940  0.981 / 0.929 / 0.954  0.939 / 0.969 / 0.954  1.000 / 0.312 / 0.476
CA        0.993 / 0.987 / 0.990  0.990 / 0.987 / 0.988  0.993 / 0.987 / 0.990  0.968 / 1.000 / 0.984  1.000 / 0.487 / 0.655

Table 4. The number of features per group.

Data set  Non-textual  Syntactic  Semantic  Hybrid
A-G       4            14         22        32
A-B       4            14         22        32
D-A       4            14         22        32
F-Z       0            35         45        80
CA        78           400        411       733

In the case of Record Linkage, we assume aligned schemata.
04097275
en
[ "math.math-co" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04097275/file/2022UPSLD044.pdf
Blanche Buet, Antonin Chambolle, Simon Masnou, Gabriel Peyré

Keywords: inverse problems, total variation, sparsity.

To Vincent, who was brave enough to embark on this adventure. To Yohann, for having followed me since Lyon (and for having pushed me into Vincent's office). To Marc Dambrine and Otmar Scherzer for their careful proofreading.

Abstract. This thesis is concerned with a family of inverse problems, which consist in reconstructing an image from partial and possibly noisy measurements. Although the resolution of ill-posed inverse problems is a classical topic in signal processing, a series of works carried out over the last thirty years has particularly shaped the field. These works concern the reconstruction of so-called sparse signals using variational methods, which rely on a functional, called the regularizer, whose minimization produces solutions with the same structure as the unknown signal. The main contribution of this thesis is the study of a particular choice of regularizer, the total (gradient) variation. This functional has been used in imaging since the work of Rudin, Osher and Fatemi in 1992, notably for its ability to penalize oscillations while preserving discontinuities. While it is well known that its minimization produces piecewise constant images, which exhibit a form of sparsity (they are made of a small number of simple shapes), this point of view has, to our knowledge, not been favoured when analyzing the performance of this regularizer. In this thesis, we propose to carry out this study. In Part II, we consider the reconstructions obtained by total variation minimization in a low noise regime, and we study their proximity to the unknown image. The latter being assumed to be sparse (that is, made of a small number of simple shapes), we are particularly interested in the structure of the reconstruction: is it itself sparse, is it made of the same number of shapes, and are these shapes close to those appearing in the unknown image? In Part III, we propose a numerical method for solving the variational problems associated with this regularizer. We adapt recent works on the reconstruction of discrete measures in order to obtain an algorithm which does not rely on the introduction of a fixed spatial discretization. Unlike existing techniques, this has the advantage of introducing neither blur nor anisotropy in the reconstructed images, and of producing a sparse representation of them. The contributions of this thesis can also be considered from the point of view of the calculus of variations. We indeed study a variational problem, which we analyze theoretically and for which we propose a numerical resolution method. In Part II, we are interested in the convergence of the solutions of this problem when one of the parameters defining it tends to zero. The type of convergence we obtain is much stronger than those usually considered in the calculus of variations. More precisely, we prove the convergence of the level sets of the solutions in terms of normal deformations, as well as the convergence of the number of non-trivial level sets (and of their numbers of connected components). In Part III, we solve this problem numerically without reducing it to a finite dimensional problem via the introduction of a fixed spatial discretization.
We propose an iterative method which constructs approximate solutions having the same structure as certain solutions of interest.

Keywords: inverse problems, total variation, sparsity.

Part I. We begin this manuscript with a general introduction to inverse problems and variational regularization techniques. We then introduce the total (gradient) variation, which is the specific regularizer this thesis is dedicated to. We review existing theoretical results on total variation regularization, and discuss numerical methods for solving the associated variational problems. We argue that these subjects have not been extensively studied from a sparse recovery viewpoint, and state that the main aim of this thesis is to conduct such an analysis. We also stress that our contributions are attempts to adapt recently introduced ideas for sparse spikes recovery in a continuous domain.

Part II. This part is devoted to the noise robustness analysis of total variation regularization, and more specifically to the proof of an exact support recovery result.
• Piecewise constant functions. The main purpose of this section is to introduce the class of sparse objects we aim at recovering, which are piecewise constant functions with a few specific properties. We call them M-simple functions. We first begin by collecting results on the faces of the total variation unit ball, which were first proved in [Duval, 2022]. As we focus on exposed faces (instead of general linearly closed faces in the above-mentioned work), we are able to provide slightly different (and often simpler) proofs of some results. It turns out that the elements of finite dimensional exposed faces are M-simple functions. This analysis in particular allows us to decompose M-simple functions belonging to the same face using a common set of atoms, which is crucial to prove our support recovery result. As a byproduct, we are also able to state an abstract sufficient identifiability condition, which can be seen as a strengthened source condition.
• The prescribed curvature problem. We then focus on a geometric variational problem called the prescribed curvature problem, which naturally appears in the analysis conducted in the previous section. Our aim is to study the behaviour of its solutions under variations of the associated curvature functional. We prove that, for two sufficiently close curvature functionals, solutions of the two problems are close in terms of C² normal deformations. Assuming some solution of one problem is strictly stable, we show that it has a neighborhood (in terms of C² normal deformations) which contains at most one solution of the other problem.
• Exact support recovery. Using the results of the first two sections, we are finally able to prove our support recovery result. To achieve this, we first show that the dimension of the exposed faces of the total variation unit ball is stable in some sense. We then define a so-called non-degenerate source condition, under which exact support recovery is guaranteed. We stress that the term exact refers to the estimation of some measure of sparsity of the unknown image. The reconstructed image is made of the same number of shapes, each of which is a C² normal deformation of a shape appearing in the sought-after image.

Part III. This third part is dedicated to the numerical resolution of variational problems associated with total variation regularization. We propose a grid-free approach based on the Frank-Wolfe (or conditional gradient) algorithm.
A complete implementation of the proposed method is available at https://github.com/rpetit/pycheeger and https://github.com/rpetit/tvsfw. • Frank-Wolfe approach. In this rst section, we describe the theoretical algorithm we propose to solve our variational problem of interest. We explain how the linear minimization step, which is at the core of Frank-Wolfe algorithm, is closely linked to a shape optimization problem called the (generalized) Cheeger problem. We then turn to convergence results, before discussing the interest of a special nal update, called the sliding step. • Polygonal approximation of generalized Cheeger sets. This section is devoted to the numerical approximation of solutions of the (generalized) Cheeger problem. We propose to optimize its objective over simple polgons with a xed number of sides, and prove the existence of optimizers for this new problem. We then discuss a two-step numerical method to approach them. Finally, we discuss how polygonal maximizers compare to their continuous counterpart in the speci c case of radial weight functions. • Numerical results. We begin this last section by discussing the implementation of the sliding step. We describe the iterative method we propose to locally optimize the objective, and also mention the issue of topology changes, which might appear over the course of this evolution. We conclude this part by assessing the performance of our algorithm on a few recovery examples. We provide comparisons with two standard xed-grid methods, and discuss parameter choices. Part IV. We conclude this manuscript by discussing interesting avenues for future research. Publications. This thesis gave rise to two articles, which are listed below. • [De Castro et al., in preparation] Exact recovery of the support of piecewise constant images via total variation regularization, Y. De Castro, V. Duval and R. Petit, in preparation. • [De Castro et al., 2022] Towards o -the-grid algorithms for total variation regularized inverse problems, Y. De Castro, V. Duval and R. Petit, Journal of Mathematical Imaging and Vision, 2022 A short version of the second article appeared in the proceedings of the 8th international conference on scale space and variational methods in computer vision (SSVM 2021) [De Castro et al., 2021]. (Ω, R n ) (C k b (Ω) if n = 1) the set of functions u ∈ C k (Ω, R n ) that satisfy u C k (Ω) < +∞ with u C k (Ω) def. = k i=1 D i u ∞ , where D i u denotes the i-th derivative of u. If a function u is bounded and uniformly continuous on Ω, it has a unique continuous extension to Ω. We denote by C k (Ω) the space of functions in C k (Ω) with bounded and uniformly continuous derivatives up to order k. Hausdor measure. We denote by H 1 the 1-dimensional Hausdor measure on R 2 , and, for every Borel set A ⊂ R 2 , by H 1 A the measure H 1 restricted to A, i.e. such that for every Borel set E we have H 1 A (E) = H 1 (A ∩ E) . I Broadly speaking, an inverse problem consists in reconstructing an unknown object from partial and often indirect observations of it. Many problems fall in this category. To name only a few, let us mention the reconstruction of a 3D object from a few 2D photographs, the location of an earthquake's epicenter from measurements of the resulting seismic waves, or the determination of the composition of a material from the way it interacts with light (or the way other kinds of waves travel through it). 
This thesis is dedicated to a particular family of inverse problems, which belongs to the class of imaging inverse problems, in which the unknown object to reconstruct is an image. In this section, we rst provide a broad introduction to this domain1 , before presenting the problem we are speci cally interested in. Then, we introduce a family of methods for solving inverse problems, namely variational regularization techniques, that are at the core of our work. Finally, we present the main aim of this thesis, which is to adapt recent advances in the eld to our speci c setting. I As mentioned above, an imaging inverse problem consists in reconstructing an image (in the general sense) from a set of measurements. Such problems are ubiquitous in biological, medical, and astronomical imaging, as well as in computational photography. Before discussing these applications in greater detail, let us introduce the mathematical framework commonly used to model these problems. Framework and assumptions. The unknown image is modeled as an element f 0 of some vector space E. We assume that the measurement operator Φ, which accounts for the measurements acquisition process, is a linear 2 mapping from E to some Hilbert space H (a common scenario is to have access to m real-valued measurements, in which case H = R m ). Finally, the noiseless and noisy observations are respectively de ned by y 0 = Φf 0 and y = y 0 + w where w ∈ H is an additive noise. Solving the inverse problem means (approximately) recovering f from the knowledge of y 0 (noiseless case) or y (noisy case). Modeling images. The choice of the vector space E in which the sought-after image lies depends on the application. When dealing with digital images (say grayscale images for simplicity), which are arrays of real values, it is natural to take E = R m×n for some pair of integers m, n ∈ N * , or simply E = R n for some n ∈ N * . On the other hand, if one wishes to model "continuous images", choosing E to be an in nite dimensional space of functions or measures is relevant. The main examples of interest for our purpose are E = M(Ω), the set of bounded Radon measures on Ω, and E = L 2 (Ω), where Ω is the image domain. Structural assumptions on the sought-after image. In most cases, a major obstacle towards the resolution of an inverse problem is the non-injectivity of the measurement operator Φ. To put it another way, given a set of observations y 0 , multiple images f in E could satisfy Φf = y 0 , and the di culty is to select the right one among them. Even in situations where Φ is injective, the linear system Φf = y 0 is often ill-conditioned. Inverting Φ hence yields reconstructions that are highly sensitive to changes of y 0 , and which are therefore not robust to noise. The main ingredient usually used to overcome these issues is to make structural assumptions on the sought-after image f 0 . If we know a priori that f 0 has a special structure, then this information can be exploited to recover it among all images f satisfying Φf = y 0 . In the following subsection, we give examples of interesting structural assumptions on f 0 , which are all closely linked to the concept of sparsity. Sparse signals A class of signals that has been extensively studied since the 90's is the class of signals having some kind of sparsity. Given a set of atoms called a dictionary, a signal is said to be sparse if it can be decomposed as the weighted sum of a few atoms. Sparse vectors. 
The rst example of sparse objects that were extensively studied are sparse vectors. The sparsity of a vector x of R n is de ned as its number of non-zero coordinates, and a vector is said to be sparse if its sparsity is signi cantly smaller than n (it can hence be written as a weighted sum of a few 1-sparse vectors). The problem of recovering a sparse vector from a set of linear measurements has a tremendous number of applications. It has been studied empirically from the 70's in di erent domains, including seismic imaging [START_REF] Claerbout | Robust modeling with erratic data[END_REF]. Its mathematical analysis started in the 90's with the work of David Donoho and his co-authors (see for instance [Donoho, 1992, Chen et al., 1998]). In some situations, it may happen than one is interested in recovering images directly represented by sparse vectors (for example, if one wishes to reconstruct a superposition of a few point light sources). However, it often happens that the sought-after image is not directly sparse, but admits a sparse decomposition in some dictionary. In other terms, there exists a sparse vector x 0 and a dictionary Ψ such that f 0 = Ψx 0 . In this case, in order to exploit the sparsity of x 0 for solving the considered inverse problem, one can de ne a new measurement operator A = ΦΨ, and solve the inverse problem associated to x 0 and A1 . This ultimately allows to obtain an approximation of f 0 . We refer the reader to [Mallat, 2008, Chapter 12 and 13] for more details on this subject. Discrete measures. If one wishes to reconstruct a superposition of point light sources, it is often interesting, as explained in Section 1.3, to model the sought-after image as the weighted sum of a few Dirac masses. In this case, one choses E = M(X) the space of bounded Radon measures on a domain X (usual choices of X include the n-dimension torus T n or an open subset of R n ), and assumes that the unknown signal is a measure µ 0 of the form µ 0 = s i=1 a i δ x i , with a ∈ R s and x ∈ X s . The recovery of such signals has been the subject of many works in recent years (see for instance [De Castro and Gamboa, 2012, Bredies and Pikkarainen, 2013, Candès and Fernandez-Granda, 2014]). We refer the reader to Section 1.3.2 below and the review article [START_REF] Laville | O -The-Grid Variational Sparse Spike Recovery: Methods and Algorithms[END_REF] for more details. A part of the work presented in this thesis is an attempt to adapt these results to the reconstruction of piecewise constant functions. Time-dependant discrete measures. One could also consider the dynamic inverse problem which consists in recovering the evolution of a superposition of point light sources from a set of measurements. In this case the unknown signal is a time-dependent measure of the form µ 0 : t → s i=1 a i (t)δ x i (t) where a and x take values in R s and X s . This task was notably studied in the xed mass setting in [START_REF] Alberti | Dynamic Spike Superresolution and Applications to Ultrafast Ultrasound Imaging[END_REF], Bredies and Fanzon, 2020, Bredies et al., 2021], and then in the varying mass setting in [Bredies et al., 2022a] . Measures supported on 1D curves. Another class of signals presenting some kind of sparsity are measures supported on a few 1D curves. Such measures could be relevant to model structures of interest in biological imaging. We provide in Figure 1 two images of microtubules to support this claim. 
In this context, one would chose E = M(X) but assume the sought-after signal is a measure µ 0 of the form µ 0 = s i=1 a i H 1 Γ i , with a ∈ R s and Γ i a connected 1D curve for all i ∈ {1, ..., s}. Let us stress that a related task is considered in [START_REF] Laville | O -the-grid curve reconstruction through divergence regularisation: An extreme point result[END_REF]. This work however deals with vector-valued measures, which are hence associated (just as time-dependant measures) to oriented curves. Piecewise constant functions. In this thesis, the signals we focus on are models for piecewise constant images (also called "cartoon images"). The recovery of such objects could have potential applications in e.g. cell imaging. We provide in Figure 2 a set of images which could be considered piecewise constant as a rst approximation. In this setting, one could choose E = L 2 (R 2 ) and assume the sought-after signal is an element u 0 of E of the form u 0 = s i=1 a i 1 E i , with a ∈ R s and (E i ) s i=1 a collection of su ciently regular subsets of the plane. We give a precise de nition of the class of piecewise constant functions we deal with in Section 1 of Part 2. Examples of inverse problems We now give a few examples of measurement operators of interest. Denoising. The choice of Φ = Id corresponds to the denoising setting, where one wishes to recover an image corrupted by some (additive) noise. Deconvolution. Another setting of interest is when one has access to a blurry version of the sought-after image. This is modeled by de ning Φf = ϕ f with ϕ a convolution kernel. The " " operation denotes a discrete or continuous convolution depending on the nature of f . The kernel ϕ could for example model the point spread function of an imaging device, or a motion blur. Fourier transform sampling. The reconstruction of an image from a sampling of its Fourier transform has many practical applications. We describe two of them below. The rst is the detection of point light sources using radio interferometry in astronomical imaging. The second is medical imaging, and more precisely X-ray tomography and magnetic resonance imaging. • Radio interferometry: if one wishes to observe point light sources in the sky, a natural model for the sought-after image is a measure µ 0 that is a combination of a few Dirac masses. In radio interferometry, one uses an array of antennas to record the light emitted by celestial bodies. Computing the cross-correlation between these recordings gives access to a sampling of the Fourier transform of µ 0 (see e.g. [START_REF] Pan | Towards Generalized FRI Sampling With an Application to Source Resolution in Radioastronomy[END_REF]). The measurement operator can hence be modeled as Φ : M(R n ) → C |Ω| µ → ˆRd exp(-i ω, x ) dµ(x) ω∈Ω , where Ω ⊂ R n is a set of observed frequencies, and d is typically equal to two (e.g. in the context of astronomical imaging). • X-ray tomography, magnetic resonance imaging: other situations in which one wishes to recover an image from a sampling of its Fourier transform include X-ray tomographgy and magnetic resonance imaging (MRI). Here, the aim is to to reconstruct a general digi- tal (E = R n ) or continuous (e.g. E = L 2 (R 2 )) image. The measurement operator is given by Φf = f (ω) ω∈Ω , where f denotes the (discrete or continuous) Fourier transform of f and Ω is a set of frequencies. The main di erence between these two imaging techniques is the type of Fourier samples they give access to. 
For X-ray tomography Ω is a set of points located on radial lines in the Fourier plane. In the case of MRI, these points are located on smooth curves. Laplace transform sampling ( uorescence microscopy). A major challenge in modern biology is to understand mechanisms at the sub-cellular level. This requires the development of so-called super-resolution microscopy techniques, able to overcome the di raction limit (see Figure 2 for an example of images obtained using such techniques). A popular method to achieve this is single-molecule localization microscopy (SMLM) [START_REF] Betzig | Imaging Intracellular Fluorescent Proteins at Nanometer Resolution[END_REF], Rust et al., 2006, Shro et al., 2008]. It consists in introducing uorophores in a biological sample, and recording the light they emit after illuminating a random subset of them. In this context, acquiring depth information on the sample is a major challenge that can be adressed using several techniques [START_REF] Huang | Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy[END_REF], Pavani et al., 2009]. Among them, the multi-angle total internal re ection uorescence method (MA-TIRF) [START_REF] Boulanger | Fast high-resolution 3D total internal re ection uorescence microscopy by incidence angle scanning and azimuthal averaging[END_REF], Santos et al., 2014, Soubies et al., 2014] consists in illumating the biological sample along di erent directions. The resulting measurements correspond to a sampling of the Laplace transform (in depth) of the sought-after image, which is modeled as a discrete measure. In other words, neglecting the horizontal blur, this corresponds to the choice of E = M(R + ) and Φ : M(R + ) → R K µ → ˆR+ exp(-s k x) dµ(x) k=1,...,K , where s k ∈ R + for every k ∈ {1, ..., K}. Let us stress that this model is a gross simpli cation of the real models used in uorescence microscopy, and we refer the reader to [Denoyelle, 2018, Chapter 5] for more information on this subject. V 1.2.1. Presentation A fruitful idea for leveraging a priori knowledge on the sought-after signal f 0 is that of variational regularization. It consists, in the case of noiseless observations, in nding an approximation of f 0 by solving inf f ∈E R(f ) s.t. Φf = y 0 , (P 0 (y 0 )) where R : E → R ∪ {+∞} is called the regularizer. This mapping is chosen in order to promote solutions having the same structure as f 0 . Loosely speaking, the value of R(f ) should be small if f has the desired structure, and large otherwise. The choice of R given a priori assumptions on the sought-after signal is discussed in Section 1.2.2. If one has access to noisy observations, i.e. to y = Φf 0 + w, then minimizing R among functions f satisfying Φf = y makes little sense (even f 0 would not belong to the admissible set). One hence solves instead inf f ∈E R(f ) + 1 2λ Φf -y 2 H , (P λ (y)) with λ a positive real number called the regularization parameter. Typically, λ should be chosen according to an estimate of w H . Having de ned these two variational problems, it is natural to investigate if solving them allows to solve the considered inverse problems. Three questions therefore arise. 1. Is f 0 the unique solution of (P 0 (y 0 ))? 2. If w is small and λ well chosen, are solution of (P λ (y 0 + w)) close to solutions of (P 0 (y 0 ))? 3. Can one numerically solve (P 0 (y 0 )) and (P λ (y))? 1.2.2. Choice of the regularizer and a priori information Discrete 1 regularization. 
The most important (and rst historical) example of regularizer is arguably the 1 -norm in the Euclidean space. From the 70's, 1 regularization has been used empirically in various elds (notably in seismic imaging) to recover sparse vectors. It was later studied mathematically by David Donoho and its co-authors (among others) from the 90's (see e.g. [Donoho, 1992, Chen et al., 1998] and also [Tibshirani, 1996] for applications in statistics). Convex regularizers. In order to obtain convex problems (P 0 (y 0 )) and (P λ (y)), it is natural to choose R among convex functions. Although non-convex regularizers could also be considered, being able to leverage the theory of convex optimization is really useful for analyzing variation regulatization techniques. Representer theorems and convex regularization. The following series of recent works [START_REF] Chandrasekaran | The Convex Geometry of Linear Inverse Problems[END_REF], Boyer et al., 2019, Bredies and Carioni, 2019] has recently shed a new light on convex regularization. In these articles, the link between the choice of a regularizer and the structural properties of some solutions of (P 0 (y 0 )) and (P λ (y)) is investigated. Their main nding is that the atoms promoted by a convex regularizer are the extreme points of its associated unit ball: loosely speaking, if H has nite dimension, then some solutions of (P 0 (y 0 )) and (P λ (y)) are linear combinations of extreme points of {R ≤ 1}. We provide below a few examples of regularizers and describe the atoms they promote. • If E = R n and R = . 1 is the 1 -norm, then the extreme points of {R ≤ 1} are the vectors ±e i with i ∈ {1, ..., d}, where (e i ) i=1,...,n is the canonical basis of R n . • If E = R n×m and R = . * is the nuclear norm of matrices then the extreme points of {R ≤ 1} are the matrices u T v with u ∈ R n , v ∈ R m and u 2 = v 2 = 1. • If E = M(X) with X and R = |.|(X) is the total variation (of measures) then the extreme points of {R ≤ 1} are the measures ±δ x with x ∈ X. • If E = L 2 (R 2 ) and R = |D.|(R 2 ) is the total (gradient) variation then the extreme points of {R ≤ 1} are the functions ±1 E /P (E) with E a simple set with positive nite measure1 . Besides extreme points, the results of [START_REF] Boyer | On Representer Theorems and Convex Regularization[END_REF] show that the structure of the solution set of (P 0 (y 0 )) or (P λ (y)) is linked to the structure of the faces of {R ≤ 1}. The importance of closely studying these faces is highlighted in Part 2. If one has access to m observations (i.e. H = R m ), then some solutions lie in a face of dimension at most m -1 (or m in degenerate cases). In particular, these solutions can be written as the sum of at most m (or m + 1) extreme points of {R ≤ 1}. Identi ability If the rst question stated in Section 1.2.1 has a positive answer, i.e. if f 0 is the unique solution of (P 0 (y 0 )), then it is said to be identi able. Even if we rather focus on noise robustness results in this thesis, let us brie y discuss this question. Discrete 1 regularization. We begin by considering the recovery of sparse vectors using the 1 norm as a regularizer (i.e. E = R n , R = • 1 and H = R m ). The rst historical example of identi ability condition is the nullspace property. This name was rst used in [START_REF] Cohen | Compressed sensing and best k-term approximation[END_REF], although the condition also appeared in [START_REF] Gribonval | Sparse representations in unions of bases[END_REF] and implicitly in previous works. 
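The condition just mentioned admits the following standard formulation, stated here for convenience (see e.g. [Foucart and Rauhut, 2013]); v_S denotes the restriction of a vector v to the index set S.

```latex
% Nullspace property of order s for a matrix \Phi \in \mathbb{R}^{m \times n}:
% every nonzero vector of the kernel must have its \ell^1 mass concentrated
% outside any index set of size at most s.
\forall v \in \ker(\Phi) \setminus \{0\}, \quad
\forall S \subset \{1,\dots,n\} \ \text{with} \ |S| \le s, \qquad
\|v_S\|_1 < \|v_{S^c}\|_1 .
```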
If Φ is the measurement matrix, then every s-sparse vector is identi able if and only if the nullspace property of order s is satis ed. This condition is di cult to check in practice. To circumvent this issue, the stronger restricted isometry property was introduced in [START_REF] Candes | Decoding by linear programming[END_REF]. Even if computing the restricted isoperimetry constants of a matrix is known to be NP-hard, large classes of random matrices satisfy the property with high probability (see for instance [START_REF] Candes | Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies[END_REF]). We refer the reader to the reference book [START_REF] Foucart | A Mathematical Introduction to Compressive Sensing[END_REF] for more details. Extensions. Su cient identi ability conditions also exist for low-rank matrix recovery, i.e. when E = R n×m and R = • * (see for instance [Foucart and Rauhut, 2013, Section 4.6]). Likewise, perfect recovery results were proven in the sparse spikes setting, i.e. with E = M(X) and R = | • |(X). We provide further details on this case in Section 1.3.2. Noise robustness As mentioned above, the second question stated in Section 1.2.1 is closely related to noise robustness. Indeed, if a signal f 0 is identi able and one proves that solutions of (P λ (y)) converge to a solution of (P 0 (y 0 )) when w is small and λ well chosen, then solving (P λ (y)) allows to approximately recover f 0 in a low noise regime. Stability and model stability. Noise robustness results typically fall into two categories. The rst ones are concerned with the convergence of a solution f λ,w of (P λ (y 0 + w)) towards f 0 with respect to some distance on the signal space E. This distance is typically the 2 distance if E = R n , the L 2 distance if E = L 2 (R 2 ), or an optimal transport-based distance if E = M(X). This notion of stability is however quite weak. Indeed, if f 0 is a combination of a few atoms, a convergence result of this kind does not tell anything about the structure of f λ,w . To be more precise, it does not allow to answer the following question: is f λ,w made of the same number of atoms as f 0 , and are these atoms related to those appearing in the decomposition of f 0 ? This shows the necessity of a second category of noise robustness results, allowing to answer these questions. They are sometimes called model stability results, and are the ones we focus on in this thesis. Discrete 1 regularization. Given a sparse vector, several works investigated whether solving (P λ (y 0 + w)) (with E = R n and R = . 1 ) in a low noise regime allows to exactly recover its support. To name only a few, let us cite [Fuchs, 2004, Dossal and Mallat, 2005, Tropp, 2006], in which su cient conditions are derived. Loosely speaking and using a modern terminology, exact recovery is possible if some minimal norm dual certi cate interpolates the sign of the signal coordinates. Extensions. In [START_REF] Vaiter | Model selection with low complexity priors[END_REF], Vaiter et al., 2018], the analysis of 1 regularization was extended to the family of so-called partly smooth regularizers, which promote signals having some notion of low complexity. Numerous popular regularizers fall in this category. To name only a few, let us mention the 1 norm, the 1 -2 norm (used to enforce group sparsity), and the nuclear norm of matrices. 
An extension to so-called mirror-strati able regularizers was also studied in [START_REF] Fadili | Sensitivity Analysis for Mirror-Strati able Convex Functions[END_REF], Fadili et al., 2019]. This last series of works has similarities with the dual-based approach of [START_REF] Duval | Exact Support Recovery for Sparse Spikes Deconvolution[END_REF], in which an exact support recovery result is proved for discrete measures. We discuss this topic in greater details in Section 1.3.2 G At rst glance, solving (P λ (y)) or (P 0 (y 0 )) with E an in nite dimensional space seems intractable. Considering such problems hence only seems to be of theoretical interest. However, a recent line of works demonstrated that it is in fact possible to solve (P λ (y)) and (P 0 (y 0 )) e ciently when E = M(X) and R = |.|(X) is the total variation of measures. Doing so even has numerous advantages, both from a numerical point of view and with respect to recovery guarantees. This section provides a review of these advances, and situates the subject of this thesis within this context. Sparse spikes super-resolution on thin grids Sparse spikes recovery and the BLASSO. As explained in Sections 1.1.1 and 1.1.2, the recovery of a discrete measure µ 0 = s i=1 a 0,i δ x 0,i from noisy linear measurements y = Φµ 0 + w has numerous applications. We also mentioned that solving (P λ (y 0 + w)) where y 0 = Φµ 0 with E = M(X) and R = |.|(X) is a good way to approximate µ 0 . The resulting problem, which writes inf µ∈M(X) |µ|(X) + 1 2λ Φµ -y 2 H (P λ (y)) and is known as the BLASSO1 , is however in nite-dimensional. How to numerically solve it is hence unclear. Restriction to a nite grid. A rst approach one can consider is to look for an approximate solution of (P λ (y)) whose support lies in some discrete grid G = {x i } n-1 i=0 ⊂ X, and therefore to solve (P λ (y)) with the additional constraint Supp(µ) ⊂ G. This yields the nite dimensional problem inf a∈R n a 1 + 1 2λ Φ G a -y 2 H , (P G λ (y)) which is the celebrated basis pursuit, also known as the LASSO (see for instance [START_REF] Chen | Atomic decomposition by basis pursuit[END_REF], Tibshirani, 1996]), where Φ G is de ned by Φ G : R n → H a → Φ n i=1 a i δ x i . Thin grids. Since no information on the support of µ 0 is known a priori, it is natural to consider increasingly "thinner" grids G, in order to cover the whole domain X. However, the analysis conducted in [START_REF] Duval | Sparse regularization on thin grids I: The Lasso[END_REF] shows this approach yields reconstructed measures whose structure is di erent from that of µ 0 . To be more precise, each spike in µ 0 is estimated by a pair of neighbooring spikes. This demonstrates the impossibility to exactly recover the support of µ 0 by relying on 1 regularization techniques on (even arbitrarily thin) nite grids. Grid-free sparse spikes recovery Considering the negative result presented above, it is natural to wonder if, despite its in nitedimensional nature, numerically solving the BLASSO problem (P λ (y)) is possible. Over the last ten years, numerous methods have in fact been developed to achieve this. We brie y review them here, before discussing available results on noise robustness. We refer the reader to the lecture notes [Poon, 2019] for a more detailed introduction to sparse spikes recovery using the BLASSO. Numerical resolution of the BLASSO. The rst method allowing to solve the BLASSO was introduced in [Tang et al., 2013, Candès andFernandez-Granda, 2014]. 
It relies on the equivalence, in the speci c case of 1D Fourier measurements, of (P λ (y)) with a nite dimensional semi-de nite program, for which e cient solvers exist. Extensions to higher dimensions rely on the so-called Lasserre hierarchy (see e.g. [START_REF] Catala | A Low-Rank Approach to O -the-Grid Sparse Superresolution[END_REF], [Catala, 2020, Chapter 2] and also [De Castro et al., 2017] for the case of polynomial measurements). For general measurements, approaches based on the conditional gradient or Frank-Wolfe algorithm, which does not require any Hilbertian structure (and is hence particularly well suited for optimization on the space of measures), were developed in [START_REF] Bredies | Inverse problems in spaces of measures[END_REF], Boyd et al., 2017, Denoyelle et al., 2019]. Finally, let us mention the work of [Chizat, 2022], which is based on conic particle gradient descent. Identi ability. The groundbreaking work [START_REF] Candès | Towards a Mathematical Theory of Super-resolution[END_REF] shows that a discrete measure can be exactly recovered from 2f c + 1 Fourier samples, provided the minimal distance ∆ between its atoms satis es ∆ ≥ 2/f c . In [START_REF] Tang | Compressed Sensing O the Grid[END_REF], it has also been shown that this result remains valid with high probability if a small number of Fourier samples are randomly selected. Let us stress that, for positive measures, the minimal separation condition can be lifted (see for instance [De Castro andGamboa, 2012, Schiebinger et al., 2018]). Noise robustness. The convergence of solutions µ λ,w of (P λ (y 0 + w)) in a low noise regime has been investigated in several works. The weak-* convergence of µ λ,w towards a solution µ 0 of the noiseless problem is proved in [START_REF] Bredies | Inverse problems in spaces of measures[END_REF]. In a similar spirit, [START_REF] Candès | Super-Resolution from Noisy Data[END_REF], Fernandez-Granda, 2013, Azaïs et al., 2015] provide error bounds (based on [START_REF] Burger | Convergence rates of convex variational regularization[END_REF]) which show that µ λ,w concentrates around the support of µ 0 . However, these results do not address the question of support recovery: if µ 0 is discrete and identi able, is µ λ,w discrete, and does it have the same number of spikes as µ 0 ? Exact support recovery. In [START_REF] Duval | Exact Support Recovery for Sparse Spikes Deconvolution[END_REF]], an answer to this last question is provided. The authors prove that, if a so-called non-degenerate source condition holds, µ λ,w has exactly the same number of spikes as µ 0 in a low noise regime, and the locations of these spikes (as well as the associated amplitudes) smoothly converge to the limit ones. This is in sharp contrast with the negative result presented for nite grids in Section 1.3.1. Perspectives In Section 1.3.2 above, we have seen that, in the context of sparse spikes recovery, grid-free approaches both have theoretical and numerical bene ts. In this subsection, we mention recent attempts to develop similar methods for recovering other kinds of signals. We nally introduce the problem this thesis is dedicated to, and advocate for studying noise robustness properties and developing numerical methods using the fruitful ideas presented in Section 1.3.2. Dynamic sparse spikes recovery. As mentioned in Section 1.1.1, the recovery of timedependant discrete measures has recently been investigated. 
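To give an idea of how the Frank-Wolfe strategies mentioned above operate on the BLASSO, here is a schematic sketch. It is a simplification rather than any of the published algorithms: the spike-insertion step scans a finite list of candidate positions and the amplitude update is a plain least squares, whereas actual off-the-grid solvers refine positions continuously and use a LASSO step on the support (possibly followed by a sliding step); real-valued measurements are assumed.

```python
import numpy as np

def frank_wolfe_blasso(phi, candidates, y, lam, n_iter=10):
    # phi(x) returns the measurement vector of a unit spike at position x,
    # so that Phi(mu) = sum_i a_i phi(x_i) for mu = sum_i a_i delta_{x_i}.
    positions, amplitudes = [], []
    for _ in range(n_iter):
        residual = y - sum(a * phi(x) for a, x in zip(amplitudes, positions))
        eta = lambda x: phi(x) @ residual / lam       # candidate dual certificate
        x_new = max(candidates, key=lambda x: abs(eta(x)))
        if abs(eta(x_new)) <= 1.0:                    # optimality: ||eta||_inf <= 1
            break
        positions.append(x_new)
        # crude amplitude refit on the current support (stand-in for the LASSO
        # step used in practice)
        A = np.stack([phi(x) for x in positions], axis=1)
        amplitudes = list(np.linalg.lstsq(A, y, rcond=None)[0])
    return positions, amplitudes
```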
Several regularizers were introduced to tackle the xed and varying mass settings, and extreme points results were derived. Although we are not aware of any identi ability or noise robustness result, numerical methods received a lot of attention, and approaches based on the Frank-Wolfe (or conditional gradient) algorithm were very recently proposed in [Bredies et al., 2022b, Duval andTovey, 2021]. Curve recovery. We also discussed in Section 1.1.1, the reconstruction of images which are the superposition of a few 1D curves. To our knowledge, the only work in which this task is investigated is the recent work [START_REF] Laville | O -the-grid curve reconstruction through divergence regularisation: An extreme point result[END_REF]. Its authors prove an extreme point result which we brie y state below. Let E = M(X) 2 be the dual of C 0 (X, R 2 ) (equipped with the supremum norm), and let R be de ned by R : M(X) 2 → R ∪ {+∞} µ → |µ|(X) + |div(µ)|(X) , with div(µ) the divergence of µ (de ned in the sense of distributions). Then the extreme points of {R ≤ 1} are the measures µ γ /R(µ γ ) where µ γ is the measure canonically associated to a simple oriented Lipschitz curve γ (see [Laville et al., 2022, De nition 7] for a precise de nition). An interesting avenue for future research is to investigate the performance of this regularizer for solving inverse problems (i.e. to derive identi ability and noise robustness results), and also to develop numerical methods allowing to solve (P 0 (y 0 )) and (P λ (y)) e ciently. Let us also stress that the measures this work deals with are associated to oriented curves, and that they also carry tangential information. Recovering measures supported on unoriented curves has, to our knowledge, not be considered, although they seem a natural model for images such as those of Figure 1. Reconstruction of piecewise constant images. In this thesis, we are concerned with the recovery of piecewise constant images, i.e. which are the superposition of a few simple shapes. To recover such objects, the natural choice of regularizer is the total (gradient) variation. The rst appearance of this functional for imaging applications dates back to the pioneering work of Rudin Osher and Fatemi [START_REF] Rudin | Nonlinear total variation based noise removal algorithms[END_REF], which focuses on the denoising problem. Since then, the performance of this regularizer has been the subject of a large number of works. It has notably been shown in [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF], Iglesias et al., 2018] that, in a small noise regime, solutions of (P λ (y 0 + w)) converge to a solution of (P 0 (y 0 )) (which is unique and equal to the image to recover in the case of denoising). The type of convergence proved in this result is already quite strong. Namely, it is shown that the level sets of solutions of (P λ (y 0 + w)) converge to the level sets of solutions of (P 0 (y 0 )) (see Proposition 1.12 for a precise statement). This convergence result for level sets is of particular interest in the context of imaging. Even if the analysis presented in Section 1.2.2 suggests that the total variation is particularly well suited to recover piecewise constant functions, the above mentioned works do not focus on unknown images having this structure. 
If we make this assumption, we argue that the emphasis should be on structure-preserving convergence guarantees, in the spirit of [Duval and Peyré, 2015]. To be more specific, one should study whether the reconstructed image is composed of the same number of shapes as the sought-after image, and whether these shapes are close to the unknown ones. Finally, let us mention that, in this setting, the problem of deriving sufficient identifiability conditions is, to our knowledge, mostly open. Still, let us point out that, in [Bredies et al.], a perfect reconstruction result is obtained in the case where the measurement associated to a function is its image under a linear partial differential operator with unknown boundary conditions.

The main aim of this thesis is to study total variation regularization from a sparse recovery viewpoint.

The use of the total variation as a regularizer started with the pioneering work of Rudin, Osher and Fatemi [Rudin et al., 1992] in the context of image denoising. This functional penalizes oscillations while allowing discontinuities. Reconstruction methods based on its minimization therefore have the striking feature of preserving image edges (see Figure 3 for a few examples). Although state-of-the-art algorithms now exhibit much better performance, the total variation is still routinely used in specific applications, and remains an important baseline. In this section, we provide an introduction to total variation regularization. We review existing theoretical results and numerical methods, and also mention the open problems that we attempt to address in this thesis.

Figure 3 - Reconstructions obtained with [Getreuer, 2012a, Getreuer, 2012b] in the case of deconvolution (first and third rows) and inpainting (second and fourth rows). Left: observations y = Φu_0 + w, middle: unknown image u_0, right: approximate solution of (P_λ(y_0 + w)) (source images: bazzier, Le Pays Malouin).

2.1. The total variation

We begin this section by defining the total variation functional. Then, we introduce a few notions to be able to state the extreme point result mentioned in Section 1.2.2, which characterizes the atoms promoted by this regularizer. Finally, we give several results about the subdifferential of the total variation, which are useful for the analysis of the variational problems (P_0(y_0)) and (P_λ(y)).

Definition. If u ∈ L¹_loc(R²), its total variation is defined by

TV(u) := sup { -∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ||φ||_∞ ≤ 1 } .   (1)

For a bounded set E with smooth boundary, the Gauss-Green theorem gives ∫_E div φ = ∫_{∂E} φ · ν_E dH¹ (with ν_E the outer unit normal), which yields TV(1_E) = H¹(∂E). Further details about the total variation can be found in [Ambrosio et al., 2000, Chapter 3] and [Evans and Gariepy, 2015, Chapter 5]. Let us also mention [Chambolle et al., 2010a] for more information on its use in imaging.

Remark 1.1. In this thesis, we extensively work with square integrable functions with finite total variation. These do not belong to the more classical space BV(R²) of integrable functions with finite total variation. The main interest of this choice is that, in R², the total variation dominates the L² norm (and more generally the L^{N/(N-1)} norm in R^N), as shown by the isoperimetric inequality (3) presented below. As a consequence, the total variation unit ball is weakly compact in L²(R²), which is hence the natural space in which we can obtain existence of solutions for variational problems with a total variation term.
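To make definition (1) concrete, the following is a standard finite-difference approximation of the total variation on a pixel grid (isotropic scheme, forward differences). It is only an illustration: this kind of fixed-grid discretization is precisely what the grid-free approach developed in Part III avoids.

```python
import numpy as np

def discrete_tv(u):
    # Isotropic finite-difference approximation of TV(u) for a 2D image u,
    # with unit pixel size (the last row/column differences are set to zero).
    dx = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

# sanity check: the indicator of a square of side 40 has perimeter 160;
# the isotropic scheme returns a value close to it (corners are slightly smoothed)
u = np.zeros((100, 100))
u[30:70, 30:70] = 1.0
print(discrete_tv(u))
```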
In all the following we consider TV as a mapping from L²(R²) to R ∪ {+∞}. This function is convex, proper and lower semi-continuous.

Sets of finite perimeter. If a measurable set E ⊂ R² is such that P(E) := TV(1_E) is finite, it is said to have finite perimeter. Such sets have similar properties to those of smooth sets, in a measure-theoretic sense. In particular, they satisfy a generalized Gauss-Green formula, and have an approximate normal and an approximate tangent space. They are central in the study of geometric variational problems, mainly thanks to useful compactness results. We refer the reader to [Maggi, 2012, Part II] for further details on this subject. We collect notations and a few properties of sets of finite perimeter in Appendix A.

The isoperimetric inequality. We now state the useful isoperimetric inequality, and take this opportunity to make a digression on its quantitative version. This allows us to introduce several topics of interest for our purpose, namely shape calculus and stability in geometric variational problems. For every set of finite perimeter E ⊂ R², the isoperimetric inequality states that

min(|E|, |E^c|)^{1/2} ≤ c_2 P(E) ,   (2)

with equality if and only if E is a ball, and where c_2 := 1/√(4π) is the isoperimetric constant (see e.g. [Maggi, 2012, Chapter 14]). In particular, if E is a set of finite perimeter, either E or E^c has finite measure. The functional version of (2) is:

∀u ∈ L²(R²), ||u||_{L²(R²)} ≤ c_2 TV(u) .   (3)

A sharp quantitative version of (2) was first proved in [Fusco et al., 2008] using symmetrization techniques (see also the review article [Fusco, 2015]). Denoting by B = B(0, 1) the unit ball, for every measurable set E ⊂ R² with |E| = |B|, it writes:

min_{x ∈ R²} |(x + E) Δ B|² ≤ C [P(E) - P(B)] .   (4)

Another proof, relying on optimal transport, was given in [Figalli et al., 2010]. A third approach, which is the most relevant for our purpose, was proposed in [Cicalese and Leonardi, 2012]. It relies on two main ingredients. The first is a so-called selection principle, which allows one to reduce the proof to nearly spherical sets, i.e. sets whose boundary is the normal graph of a sufficiently small Lipschitz function on the sphere. The second is the proof of (4) for such sets, which dates back to [Fuglede, 1989]. The proof of Fuglede relies on shape calculus. Since nearly spherical sets are parametrized by a function ϕ on the sphere, one can look at the mapping ϕ → P(B_ϕ), where B_ϕ is the nearly spherical set associated to ϕ, and use calculus to study the stability of B = B_0 as a minimizer. To be more precise, one can show that the second order derivative of this mapping at zero, which is called the second order shape derivative of the perimeter, is coercive, which ultimately yields the result. The derivation of stability results using second order shape derivatives has been generalized to other geometric functionals in several works (see [Dambrine and Lamboley, 2019] and the references therein). The noise robustness analysis we conduct in Part 2 heavily relies on these ideas, which we apply to the prescribed curvature problem presented below.
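As a quick sanity check of the constant c_2 (and of the exponent in (2)), one can verify that equality holds for a ball:

```latex
% For E = B(0, r):
|E| = \pi r^2, \qquad P(E) = 2\pi r, \qquad
\min(|E|, |E^c|)^{1/2} = \sqrt{\pi}\, r = \frac{2\pi r}{\sqrt{4\pi}} = c_2 \, P(E),
% and taking u = 1_E in (3) gives the same identity, since ||1_E||_{L^2} = |E|^{1/2}.
```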
The noise robustness analysis we conduct in Part 2 heavily relies on these shape calculus and stability ideas, which we apply to the prescribed curvature problem introduced below.

Indecomposable and simple sets. Now, we introduce the notions of indecomposable and simple sets, which are the measure-theoretic analogues of connected and simply connected sets (we refer the reader to [Ambrosio et al., 2001] for more details). A set of finite perimeter E ⊂ R^2 is said to be decomposable if there exists a partition of E into two sets of positive measure A and B with P(E) = P(A) + P(B). We say that E is indecomposable if it is not decomposable. An indecomposable set E is called simple if E = R^2, or if |E| < +∞ and R^2 \ E is indecomposable.

Extreme points of the total variation unit ball. We are now able to state the extreme point result mentioned in Section 1.2.2.

Proposition 1.2 ([Fleming, 1957, Ambrosio et al., 2001]) The extreme points of the convex set {TV ≤ 1} def.= { u ∈ L^2(R^2) | TV(u) ≤ 1 } are the functions of the form ±1_E/P(E), where E is a simple set with 0 < |E| < +∞.

This result, in conjunction with the representer theorems mentioned in Section 1.2.2, justifies the claim that using the total variation as a regularizer promotes piecewise constant functions (i.e. functions that are the weighted sum of a few indicator functions of simple sets).

Subdifferential. Let us now collect several results on the subdifferential of TV, which are useful to derive and analyze the dual problems of (P_0(y_0)) and (P_λ(y)). Since TV is the support function of the convex set

C def.= { div z | z ∈ C_c^∞(R^2, R^2), ||z||_∞ ≤ 1 } ,

its subdifferential at 0 is the closure of C in L^2(R^2), that is

∂TV(0) = C̄ = { div z | z ∈ L^∞(R^2, R^2), div z ∈ L^2(R^2), ||z||_∞ ≤ 1 } . (5)

We also have the following useful identity:

∂TV(0) = { η ∈ L^2(R^2) | ∀u ∈ L^2(R^2), ∫_{R^2} η u ≤ TV(u) } .

Finally, the subdifferential of TV at some u ∈ L^2(R^2) is given by:

∂TV(u) = { η ∈ ∂TV(0) | ∫_{R^2} η u = TV(u) } .

Hence, if η ∈ ∂TV(u), then η is an element of ∂TV(0) = C̄ for which the supremum in the definition of the total variation is attained.

From the subdifferential to the level sets. Now, there is an important necessary and sufficient condition for η ∈ ∂TV(u) to hold. In all the following, given u : R^2 → R and t ∈ R, we use the notation

U^(t) def.= {x ∈ R^2 | u(x) ≥ t} if t ≥ 0, and U^(t) def.= {x ∈ R^2 | u(x) ≤ t} otherwise.

It is worth noting that, if u ∈ L^2(R^2), then |U^(t)| < +∞ for all t ≠ 0. With this notation, we may state the following result.

Proposition 1.3 (see e.g. [Chambolle et al., 2016]) Let u ∈ L^2(R^2) be such that TV(u) < +∞, and let η ∈ L^2(R^2). Then the following conditions are equivalent.

(i) η ∈ ∂TV(u).

(ii) η ∈ ∂TV(0) and the level sets of u satisfy

∀t > 0, P(U^(t)) = ∫_{U^(t)} η and ∀t < 0, P(U^(t)) = −∫_{U^(t)} η .

(iii) The level sets of u satisfy

∀t > 0, U^(t) ∈ Argmin_{E⊂R^2, |E|<+∞} P(E) − ∫_E η ,

∀t < 0, U^(t) ∈ Argmin_{E⊂R^2, |E|<+∞} P(E) + ∫_E η .

The geometric variational problem appearing in (iii), which is

inf_{E⊂R^2, |E|<+∞} P(E) − ∫_E η , (6)

is called the prescribed curvature problem associated to η. This terminology stems from the fact that, if η is sufficiently regular, every solution of (6) has a (scalar) distributional curvature (defined in Appendix A) equal to η. This problem plays a crucial role in the analysis of total variation regularization, as explained below.
If η ∈ ∂TV(0), one should note that a set of finite measure E solves (6) if and only if P(E) = ∫_E η, which is why (ii) and (iii) are equivalent.

2.1.2 Primal and dual problems, dual certificates

We can now specialize the variational problems introduced in Section 1.2.1 to our setting, which corresponds to the choice E = L^2(R^2) and R = TV.

Assumptions. In all the following, we assume that Φ : L^2(R^2) → H is a continuous linear operator, which is equivalent to the existence of ϕ ∈ L^2(R^2, H) such that

Φ : L^2(R^2) → H, u ↦ ∫_{R^2} u(x) ϕ(x) dx .

We also assume that y_0 = Φu_0 with TV(u_0) < +∞.

Primal problems. As mentioned above, in our setting, the two variational problems introduced in Section 1.2.1 are

inf_{u∈L^2(R^2)} TV(u) s.t. Φu = y_0 , (P_0(y_0))

inf_{u∈L^2(R^2)} TV(u) + (1/(2λ)) ||Φu − y||_H^2 . (P_λ(y))

Existence of solutions for (P_0(y_0)) and (P_λ(y)) can be shown using the direct method of the calculus of variations and the isoperimetric inequality (3).

Dual problems. In the remainder of this section, we collect useful results following from standard duality arguments. We refer the reader to [Iglesias et al., 2018, Section 2] for their proof (which had been established in the particular case of denoising in [Chambolle et al., 2016]). The Fenchel–Rockafellar dual problems of (P_0(y_0)) and (P_λ(y)) are:

sup_{p∈H} ⟨p, y_0⟩_H s.t. Φ*p ∈ ∂TV(0) , (D_0(y_0))

sup_{p∈H} ⟨p, y⟩_H − (λ/2) ||p||_H^2 s.t. Φ*p ∈ ∂TV(0) . (D_λ(y))

The existence of a solution of (D_0(y_0)) is not always guaranteed. Requiring that there indeed exists a maximizer is called the source condition in the literature. On the contrary, (D_λ(y)) can be reformulated as the problem of projecting y/λ onto the closed convex set {p ∈ H | Φ*p ∈ ∂TV(0)}. It hence has a unique solution. Strong duality holds between (P_0(y_0)) and (D_0(y_0)) (respectively (P_λ(y)) and (D_λ(y))), as stated in the following two propositions.

Proposition 1.4 Strong duality holds between (P_0(y_0)) and (D_0(y_0)). Moreover, if there exists a solution p to (D_0(y_0)), then for every solution u of (P_0(y_0)) we have

Φ*p ∈ ∂TV(u) . (7)

Conversely, if (u, p) ∈ L^2(R^2) × H is such that Φu = y_0 and (7) holds, then u and p respectively solve (P_0(y_0)) and (D_0(y_0)).

Proposition 1.5 Strong duality holds between (P_λ(y)) and (D_λ(y)). Moreover, denoting p the unique solution of (D_λ(y)), for every solution u of (P_λ(y)) we have

Φu = y − λp , Φ*p ∈ ∂TV(u) . (8)

Conversely, if (8) holds, then u and p respectively solve (P_λ(y)) and (D_λ(y)).

Remark 1.6 Although there might not be a unique solution to (P_λ(y)), Proposition 1.5 shows that all of them have the same image by Φ and the same total variation.

Dual certificates. If η = Φ*p and η ∈ ∂TV(u), we call η a dual certificate for u in (P_0(y_0)), as its existence proves the optimality of u for (P_0(y_0)) (assuming u is admissible). Similarly, if η = −Φ*(Φu − y)/λ and η ∈ ∂TV(u), we call η a dual certificate for u in (P_λ(y)). There could be multiple dual certificates associated to (P_0(y_0)). One of them, the minimal norm certificate, plays a crucial role in our analysis. A quick look at the objective of (D_λ(y)) indeed suggests that, as λ goes to 0, its solution converges to the solution of the limit problem (D_0(y_0)) with minimal norm. This is Proposition 1.8 below.
Definition 1.7 If there exists a solution to (D_0(y_0)), the minimal norm dual certificate associated to (P_0(y_0)), denoted η_0, is defined as η_0 = Φ*p_0 with

p_0 = argmin { ||p||_H s.t. p solves (D_0(y_0)) } .

If λ > 0, we denote p_{λ,w} the unique solution of (D_λ(y_0 + w)), and η_{λ,w} = Φ*p_{λ,w} the associated dual certificate.

Noise robustness results extensively rely on the behaviour of η_{λ,w} as λ and w go to zero. This behaviour is described by the following results.

Proposition 1.8 If there exists a solution to (D_0(y_0)), then p_{λ,0} converges strongly to p_0 as λ → 0.

Since p_{λ,w} is the projection of (y_0 + w)/λ onto the closed convex set {p ∈ H | Φ*p ∈ ∂TV(0)}, the non-expansiveness of the projection mapping yields

∀(λ, w) ∈ R*_+ × H, ||p_{λ,w} − p_{λ,0}||_H ≤ ||w||_H / λ , (9)

and hence

∀(λ, w) ∈ R*_+ × H, ||η_{λ,w} − η_{λ,0}||_{L^2(R^2)} ≤ ||Φ*|| ||w||_H / λ .

As a result, if λ → 0 and ||w||_H/λ → 0, the dual certificate η_{λ,w} converges in L^2(R^2) to the minimal norm certificate η_0.

Noise robustness

In this section, we review existing results concerning noise robustness (i.e. Question 2 in Section 1.2.1). The main references on this topic are [Chambolle et al., 2016, Iglesias et al., 2018].

Setting. In all the following we consider two sequences (w_n)_{n≥0} ∈ H^N and (λ_n)_{n≥0} of noises and regularization parameters, and we assume that (u_n)_{n≥0} is such that u_n solves (P_{λ_n}(y_0 + w_n)) for every n ≥ 0.

Set convergence. Let us define a useful notion of set convergence, known as Painlevé–Kuratowski set convergence. We refer the reader to [Rockafellar and Wets, 1998, Chapter 4] for more details.

Definition 1.9 If (E_n)_{n≥0} is a sequence of sets in R^d, its inner and outer limits are defined by

lim inf_{n→+∞} E_n def.= { x ∈ R^d | lim sup_{n→+∞} dist(x, E_n) = 0 } ,

lim sup_{n→+∞} E_n def.= { x ∈ R^d | lim inf_{n→+∞} dist(x, E_n) = 0 } .

If there exists a set E such that lim inf_{n→+∞} E_n = lim sup_{n→+∞} E_n = E, we say that (E_n)_{n≥0} converges to E and write lim_{n→+∞} E_n = E.

If a sequence of closed sets (E_n)_{n≥0} is bounded (i.e. there exists R > 0 such that E_n ⊂ B(0, R) for every n ≥ 0), then the above notion of set convergence is equivalent to convergence in the Hausdorff sense, that is to

lim_{n→+∞} ||dist(·, E_n) − dist(·, E)||_∞ = 0 .

First convergence result. General convergence results given in [Hofmann et al., 2007] apply to our setting. This yields the following proposition.

Proposition 1.10 ([Hofmann et al., 2007, Theorem 3.5]) If λ_n → 0 and ||w_n||_H^2 = o(λ_n) then, up to the extraction of a subsequence (not relabeled), we have that (u_n)_{n≥0} converges strongly in L^1_loc(R^2) and weakly in L^2(R^2) to a solution u_* of (P_0(y_0)). Moreover, we have TV(u_n) → TV(u_*).

We stress that the strong L^1_loc convergence of (u_n)_{n≥0} towards u_* and the fact that TV(u_n) → TV(u_*) imply that Du_n converges weakly-* to Du_*, which in turn implies

Supp(Du_*) ⊆ lim inf_{n→+∞} Supp(Du_n) . (10)

Properties of the level sets in the low noise regime. Stronger convergence results can be obtained by having a closer look at the level sets of u_n. Indeed, from Propositions 1.3 and 1.5, we know they are solutions of the prescribed curvature problem associated to η_{λ_n,w_n}. This information can be exploited to obtain uniform properties of the level sets in the low noise regime.
The main ingredient to achieve this is the following lemma, that we use in di erent parts of this manuscript. Lemma 1.11 ([Chambolle et al., 2016, Section 5]) Let (η n ) n≥0 ⊂ ∂TV(0) be a sequence converging strongly in L 2 (R 2 ) to η ∞ , and let E be de ned by E def. = E ⊂ R 2 , 0 < |E| < +∞ ∃n ∈ N ∪ {∞}, P (E) = ˆE η n . Then the following holds: 1. inf E∈E P (E) > 0 and sup E∈E P (E) < +∞, 2. inf E∈E |E| > 0 and sup E∈E |E| < +∞, 3. there exists R > 0 such that, for every E ∈ E, it holds E ⊂ B(0, R), 4. there exists r 0 > 0 and C ∈ (0, 1/2) such that for every r ∈ (0, r 0 ] and E ∈ E: ∀x ∈ ∂E, C ≤ |E ∩ B(x, r)| |B(x, r)| ≤ 1 -C . Point 4 in Lemma 1.11 is a weak regularity property. In particular, it ensures that E does not have cusps. Convergence of level sets. Before stating Proposition 1.12 below, Proposition 1.12 ( [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF], Iglesias et al., 2018]) If (D 0 (y 0 )) has a solution, λ n → 0 and w n H λ n ≤ 1 4 c 2 Φ * , then (Supp(u n )) n≥0 is bounded and, up to the extraction of a subsequence (not relabeled), we have that (u n ) n≥0 converges strictly in BV(R 2 ) to a solution u * of (P 0 (y 0 )). Moreover, for almost every t ∈ R, we have: U (t) n U (t) * -→ 0 and ∂U (t) n -→ ∂U (t) * . Remark 1.13 The result of Proposition 1.12 slightly di ers from the one proved in [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF], Iglesias et al., 2018]. Indeed, the latter guarantees the convergence of level sets up to the extraction of a further subsequence. We argue that this can be avoided by using Proposition D.1, that we prove in Appendix D. The extended support. The notion of extended support associated to u 01 was introduced in [Chambolle et al., 2016, Section 6]. If (D 0 (y 0 )) has a solution and η 0 is the minimal norm certi cate de ned in Appendix B, this set, denoted Ext(Du 0 ), is de ned as follows: Ext(Du 0 ) def. = Supp(Du) η 0 ∈ ∂TV(u) . From Proposition 1.3, the extended support is also characterized by Ext(Du 0 ) = ∂ * E |E| < +∞ and P (E) = ˆE η 0 , where ∂ * E denotes the reduced boundary of E (see Appendix A for a de nition). In other words, Ext(Du 0 ) contains the reduced boundary of all solutions of the prescribed curvature problems associated to the minimum norm dual certi cate and its opposite. In the above-mentioned work, the following result was proved in the case of denoising (i.e. Φ = Id). One may check that it still holds in the case of general measurement operators. Proposition 1.14 If (D 0 (y 0 )) has a solution, λ n → 0 and w n = o(λ n ) then lim sup n→+∞ Supp(Du n ) ⊆ Ext(Du 0 ) , or equivalently, for every r > 0 there exists n 0 ∈ N such that ∀n ≥ n 0 , Supp(Du n ) ⊆ {x ∈ R 2 | dist(x, Ext(Du 0 )) ≤ r} . Combining Proposition 1.14 and (10) yields Supp(Du * ) ⊆ lim inf n→+∞ Supp(Du n ) ⊆ lim sup n→+∞ Supp(Du n ) ⊆ Ext(Du 0 ) . Structure-preserving convergence results. Considering the exact support recovery results mentioned for other reconstruction problems, one could wonder if stronger convergence guarantees can be derived. Namely, if u 0 is identi able and the sum of a few indicator functions of simple sets, are solutions of (P λ (y 0 + w)) made of the same number of atoms? And if so, are these atoms related to those appearing in the decomposition of u 0 ? 
In Part 2, we provide an answer to these questions by adapting the tools introduced in [Duval and Peyré, 2015].

Numerical methods

In this section, we provide a quick overview of existing numerical methods for solving (P_λ(y)). We refer the reader to the recent survey [Chambolle and Pock, 2021a] for a complete review of the topic. In the following, as we are interested in the practical resolution of (P_λ(y)), we only consider the case of finitely many measurements, i.e. H = R^m for some m ∈ N*. The main idea behind most numerical methods is to introduce a fixed spatial discretization and a discrete version of the total variation. To be more precise, one defines a discretization parameter h, a finite-dimensional space E_h, and discrete versions Φ_h : E_h → H and TV_h : E_h → R of the measurement operator and the total variation, and looks for a solution of

inf_{u∈E_h} (1/(2λ)) ||Φ_h u − y||^2 + TV_h(u) . (11)

In most situations (11) is a non-smooth convex problem and can be solved efficiently using standard algorithms. The main difficulty usually lies in the choice of a relevant discretization TV_h of the total variation. In the remainder of this section, we briefly present a few standard choices for TV_h. Our aim is to give an idea of why discretizing the total variation is challenging, and to underline the drawbacks which are common to most existing approaches.

Spatial discretization. As all solutions of (P_λ(y)) have their support included in some common ball, one can equivalently solve (P_λ(y)) in [−R, R]^2 (with Dirichlet boundary conditions) for a sufficiently large R > 0. The most standard choice of spatial discretization is then to take a positive integer N, define h def.= 2R/N, and choose E_h to be the space of N × N matrices. Every element u = (u_{i,j})_{(i,j)∈[1,N]^2} of E_h encodes the values of a piecewise constant function on the partition of [−R, R]^2 composed of squares of equal size. The resulting approximate solution of (P_λ(y)) is hence piecewise constant on this pixel grid.

Finite differences discretizations

In order to simplify the exposition, we only review the two most basic discretizations, and refer to [Chambolle and Pock, 2021a, Section 2] for more details. For every matrix u = (u_{i,j})_{(i,j)∈[1,N]^2} ∈ E_h we define

∂_x^h u_{i,j} def.= u_{i+1,j} − u_{i,j} , ∂_y^h u_{i,j} def.= u_{i,j+1} − u_{i,j} ,

for all (i, j) ∈ [0, N]^2, with the convention u_{i,j} = 0 if either i or j is in {0, N + 1}. We also define the discrete gradient ∇^h u_{i,j} def.= (∂_x^h u_{i,j}, ∂_y^h u_{i,j}).

Anisotropic total variation. The most basic idea to discretize the total variation is to define TV_h as follows:

TV_h(u) def.= h Σ_{i=0}^{N} Σ_{j=0}^{N} |∇^h u_{i,j}|_1 = h ||∇^h u||_{1,1} .

With this choice TV_h(u) = TV(ũ), with ũ the piecewise constant function naturally associated to u. The main drawback of this choice is that the functional TV_h is strongly "anisotropic": it over-estimates the total variation of oblique discontinuities, and its minimization hence favors horizontal and vertical discontinuities. Figure 4 shows how this anisotropy induces a strong bias on the reconstructed images. In fact, one can prove this discretization is not even consistent, as it Γ-converges towards the perimeter associated with a crystalline total variation (see e.g. [Chambolle et al., Continuous limits of discrete perimeters]).
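To make the previous definitions concrete, here is a minimal NumPy sketch (not part of the original text) of the discrete gradient ∇^h with the zero-padding convention above, of the anisotropic TV_h, and, for comparison, of the isotropic variant discussed in the next paragraphs. The function names grad_h, tv_aniso, tv_iso and the grid parameters (N, r, eps) are illustrative choices. On a smoothed disc of radius r, the isotropic sum approximates the true total variation 2πr, while the anisotropic one approximates the crystalline value 8r, which quantifies the over-estimation of oblique discontinuities.

```python
import numpy as np

def grad_h(u):
    """Undivided forward differences with the convention u_{i,j} = 0 outside the grid."""
    up = np.pad(u, 1)                       # zero-pad: indices 0, ..., N+1
    dx = up[1:, :-1] - up[:-1, :-1]         # d_x^h u_{i,j} for (i, j) in [0, N]^2
    dy = up[:-1, 1:] - up[:-1, :-1]         # d_y^h u_{i,j} for (i, j) in [0, N]^2
    return dx, dy

def tv_aniso(u, h):
    dx, dy = grad_h(u)
    return h * np.sum(np.abs(dx) + np.abs(dy))        # h * ||grad^h u||_{1,1}

def tv_iso(u, h):
    dx, dy = grad_h(u)
    return h * np.sum(np.sqrt(dx ** 2 + dy ** 2))     # h * ||grad^h u||_{2,1}

# Smoothed indicator of a disc of radius r in [-1, 1]^2 (illustrative parameters).
N, r, eps = 512, 0.3, 0.02
h = 2.0 / N
x = -1.0 + h * (np.arange(N) + 0.5)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.clip((r - np.sqrt(X ** 2 + Y ** 2)) / eps + 0.5, 0.0, 1.0)

print("anisotropic TV_h:", tv_aniso(u, h))   # close to 8 * r = 2.4 (crystalline perimeter)
print("isotropic   TV_h:", tv_iso(u, h))     # close to 2 * pi * r = 1.88 (continuous TV)
```

On a binary disc, both sums essentially measure the staircased boundary and return a value close to 8r as well; smoothing an oblique edge therefore lowers the isotropic value, which is one way to understand the blur visible in Figure 4.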
Figure 4 – Solutions of the discrete Rudin–Osher–Fatemi denoising problem using the anisotropic and isotropic total variations (figure taken from [Tabti et al., Symmetric Upwind Scheme for Discrete Weighted Total Variation]). Panels, from left to right: original image, anisotropic TV, isotropic TV.

Isotropic total variation. To avoid the anisotropy of the above-mentioned discretization, it is standard to consider the so-called isotropic discrete total variation, defined by

TV_h(u) def.= h Σ_{i=0}^{N} Σ_{j=0}^{N} |∇^h u_{i,j}|_2 = h ||∇^h u||_{2,1} .

It is arguably the most widely used discretization. As explained in [Chambolle and Pock, 2021a, Sections 2.1 and 2.2], it still has some anisotropy, since it is not invariant under a rotation of the input image by 90°. Reducing this anisotropy is however possible with slightly more sophisticated discretizations. Its main drawback (which is also shared by more complex discretizations) is that its minimization produces blurry images (see Figure 4 for an illustration of this phenomenon), while the (continuous) total variation is precisely used to recover sharp edges.

Dual-based discretizations

Another idea to define TV_h is to consider the dual formulation of TV, given in (1), and to "discretize" the constraint ||φ||_∞ ≤ 1 imposed on the dual variable. To be more precise, this amounts to defining TV_h as follows:

TV_h(u) def.= sup { h ⟨p, ∇^h u⟩ | ||Lp||_{2,∞} ≤ 1 } , (12)

where L is a linear operator encoding the constraints to be imposed on the dual variable p.

Hintermüller, Rautenberg, Hahn and Condat's discretization. A discretization of the form (12) was introduced in two independent works [Hintermüller et al., 2014, Condat, 2017]. The operator L is chosen in this case to enforce the constraint |p| ≤ 1 both at pixel centers and edges. The performance of this discrete total variation is acknowledged in [Chambolle and Pock, 2021a, Section 4.2]. The experiments presented in [Condat, 2017] indeed show reconstructions that are significantly less blurry than those obtained with the isotropic discretization, although recovered edges are never exactly sharp (i.e. the resulting images are not binary).

Learned dual discretization. In [Chambolle and Pock, 2021b], a learning approach is proposed to find the best choice of operator L for a particular task. The consistency of the resulting discrete total variations is guaranteed: they all Γ-converge towards the continuous total variation. Interestingly, discretizations learned on different tasks significantly differ, which, according to the authors, shows that it is hopeless to look for a universal best discrete total variation.

Motivation for grid-free approaches

We now motivate the numerical approach introduced in Part 3.

Common drawbacks of fixed grid approaches. As mentioned above, most existing numerical methods for solving (P_λ(y)) rely on the introduction of a fixed spatial discretization. This often yields reconstruction artifacts, such as anisotropy or blur. Most importantly, these methods often fail to preserve the structure exhibited by (some) solutions of (P_λ(y)), which are piecewise constant.

Adaptive discretizations. To circumvent these issues, mesh adaptation techniques were introduced (see e.g. [Viola et al., A unifying resolution-independent formulation for early vision, Bartels et al., 2021]). The spatial discretization is hence adapted to the reconstructed image during the reconstruction process.
The re nement rules these works propose are, however, either heuristic or too restrictive to faithfully recover edges. In any case, they still rely on a discretization of the whole domain, and hence do not provide a compact representation of the reconstruced image, which is highly desirable when working with simple images (i.e. that are the superposition of a few simple shapes). A rst grid-free approach. In [START_REF] Ongie | O -the-Grid Recovery of Piecewise Constant Images from Few Fourier Samples[END_REF], a method for recovering piecewise constant images from few Fourier samples is introduced. Its orginiality is to produce a continuous domain representation of the image, assuming its edge set is a trigonometric curve. However, this approach heavily relies on relations satis ed by the Fourier coe cients of the image. As such, it does not seem possible to adapt it to handle other types of measurements. In some sense, this method can be seen as the counterpart of Prony-type methods used for the recovery of sparse spikes (see e.g. [Catala, 2020, Chapter 1]), which are speci c to certain type of measurements. On the other hand, greedy approaches (that we try to adapt to our setting in Part 3) are agnostic to the choice of measurement operator. Towards general grid-free numerical methods. Our goal is to design an algorithm which does not su er from some grid bias, while providing a continuous domain representation of solutions. To this aim, we propose to construct an approximate solution built from the atoms promoted by the total variation, namely indicator functions of simple sets. To obtain numerically tractable algorithms, we choose the simple sets we deal with to be simple polygons, although other choices could be considered. A schematic comparison of this approach and grid-based approaches is provided in Figure 5. We begin by collecting results about the total variation unit ball, and more speci cally about the structure of its (exposed) faces. The importance of such an analysis, hinted in Section 1.2.2, is showcased at then end, where we state that elements of a given face can be decomposed using a common set of atoms. Then, we turn to the analysis of the prescribed curvature problem. Its solutions are closely linked with elements of exposed faces, and our noise robustness analysis is intimately related to the stability of its minimizers. Finally, we introduce a so-called nondegenerate source condition, under which exact support recovery is obtained in a low noise regime. We end this part by a discussion on this condition, investigating whether it holds in a simple case. Conclusion P Piecewise constant functions are the sparse objects associated to the total variation. They play the same role as sparse vectors for the 1 norm, and as discrete measures for the total variation of measures. However, these two kinds of objects are arguably easier to deal with, mainly because there is a canonical way to write them as the sum of a few atoms. Finding a good way to decompose piecewise constant functions is somewhat more intricate. This increased complexity is closely linked to properties of the total variation unit ball, whose faces have a non-trivial structure. In this section, we discuss these issues in detail, and precisely de ne the class of piecewise constant functions we aim at recovering, which we call M -simple. E Motivation. 
As a result of Proposition 1.5 (and recalling Remark 1.6), given (λ, w) ∈ R * + × H, we have that every nonzero solution u of (P λ (y 0 + w)) satis es ˆR2 η λ,w u TV(u) = 1 , with η λ,w the dual certi cate associated to this problem (de ned in Section 2.1.2). The solution set of (P λ (y 0 + w)) is hence included (up to a normalization) in the solution set of sup u∈L 2 (R 2 ) ˆR2 η λ,w u s.t. TV(u) ≤ 1 , (13) which is in fact the face of {TV ≤ 1} exposed by η λ,w (see [Rockafellar, 1970, Section 18] for more details on the notion of exposed face). This observation plays a crucial role in our analysis. In the following, we describe the structure of the exposed faces of the total variation unit ball, and derive a useful result allowing to represent its elements using a common set of atoms. This representation is at the core of the proof of our support recovery result. The following is based on the habilitation thesis of Vincent Duval [Duval, 2022] and on a forthcoming work [Boyer et al., in preparation] 1 . The results we present here are a subset of those given in [Duval, 2022]. In this last work, general linearly closed faces of the total variation unit ball are considered. We only treat the case of exposed faces, which is enough for our purpose. This allows slightly di erent (and often shorter) proofs. Setting. Let us x η ∈ L 2 (R 2 ), and denote F the face of {TV ≤ 1} exposed by η, which is the solution set of sup u∈L 2 (R 2 ) ˆR2 ηu s.t. TV(u) ≤ 1 . ( 14 ) The weak compactness of {TV ≤ 1} yields that F is non-empty (i.e. the supremum in ( 14) is attained). Moreover, if η is not equal to zero almost everywhere, it has a Lebesgue point x such that |η(x)| > 0. There hence exists r > 0 such that | ´B(x,r) η| > 0, which shows the value of ( 14) is strictly positive, say 1/α for some α ∈ R * + . Hence, we have that ∀u ∈ L 2 (R 2 ), TV(u) - ˆR2 (αη)u ≥ 0 , with equality if and only if u solves ( 14). For convenience reasons, we assume in the following that α = 1 (all the results below remain valid if α = 1 by replacing η with η def. = αη), and consequently obtain ∀u ∈ L 2 (R 2 ), TV(u) - ˆR2 ηu ≥ 0 . ( 15 ) With this assumption we have the following characterization of F: F = u ∈ L 2 (R 2 ) TV(u) ≤ 1 and ˆR2 η u = 1 , and every u ∈ F satis es ´R2 η u = TV(u). Remark 2.1 Let us stress that Problem 14 has value 1 (i.e. α = 1) if and only if their exists a nonzero function u ∈ L 2 (R 2 ) which satis es ´R2 η u = TV(u) (i.e. η ∈ ∂TV(u)). Dual certi cates introduced above all fall in this category, unless the unique solution of (P 0 (y 0 )) (respectively (P λ (y))) is zero. Indicator functions From Proposition 1.2, we know that the extreme points of {TV ≤ 1} are the functions of the form ±1 E /P (E), with E a simple set such that 0 < |E| < +∞. Elements of F that are proportional to indicator functions hence play a special role in the analysis of its structure. Let us therefore de ne E def. = E + ∪ E -∪ {∅, R 2 } , E + def. = E ⊂ R 2 |E| < +∞, 0 < P (E) < +∞, 1 E P (E) ∈ F , E -def. = E ⊂ R 2 |E c | < +∞, 0 < P (E c ) < +∞, -1 E c P (E c ) ∈ F . ( 16 ) Remark 2.2 We stress that an extreme point of {TV ≤ 1} is not necessarily exposed. In fact, E ∈ E + if and only if P (E) = ´E η (and E ∈ E -if and only if P (E c ) = -´Ec η). This is equivalent to require that E (respectively E c ) is a solution of the prescribed curvature problem associated to η (respectively -η) studied in Section 2. 
In particular, this implies that E (respectively E c ) is bounded, that the number of Jordan curves in the decomposition of ∂ M E is nite, and that the following weak regularity property holds: ∃r 0 > 0, ∀r ∈ (0, r 0 ], ∀x ∈ ∂E, 1 16 ≤ |E ∩ B(x, r)| |B(x, r)| ≤ 1 - 1 16 . We refer the reader to [Chambolle et al., 2016, Section 5] for more details. The set E has the remarkable property of being closed under several operations that we describe below. We rst consider the intersection and union operations, before turning to M -connected components and holes. Proposition 2.3 The set E is closed under countable unions and countable intersections. Proof : If E ∈ E + and F ∈ E + the submodularity of the perimeter (see e.g. [START_REF] Ambrosio | Connected components of sets of nite perimeter and applications to image processing[END_REF], Proposition 1]) yields: P (E ∩ F ) + P (E ∪ F ) ≤ P (E) + P (F ) = ˆE η + ˆF η = ˆE∩F η + ˆE∪F η . We hence obtain: P (E ∩ F ) - ˆE∩F η + P (E ∪ F ) - ˆE∪F η ≤ 0 . Using (15), we get that the two terms above are nonnegative, which yields E ∩ F ∈ E + (un- less |E ∩ F | = 0) and E ∪ F ∈ E + . The same reasoning shows that the result also holds when replacing E + by E -. If E ∈ E + and F ∈ F -we have: P (E ∩ F ) + P ((E ∪ F ) c ) = P (E ∩ F ) + P (E ∪ F ) ≤ P (E) + P (F ) = P (E) + P (F c ) = ˆE η - ˆF c η = ˆE∩F η - ˆ(E∪F ) c η Reasoning as above, we obtain E ∩ F ∈ E + (unless |E ∩ F | = 0) and E ∪ F ∈ E -(un- less |(E ∪ F ) c | = 0), which shows E ∩ F ∈ E. Now, we have proved that E is stable under nite unions and nite intersections. The countable case follows from the fact exposed faces are closed in L 2 (R 2 ). Proposition 2.4 Let E, F ∈ E with F ⊂ E. If there exists C 1 , C 2 ⊂ R 2 such that 1. |C 1 | > 0 and |C 2 | > 0 2. E \ F = C 1 ∪ C 2 3. P (E \ F ) = P (C 1 ) + P (C 2 ) then F ∪ C 1 ∈ E and F ∪ C 2 ∈ E. Proof : We rst note that C 1 R 2 and C 2 R 2 (otherwise we would have E = R 2 and F = ∅, which would contradict the indecomposability of R 2 ). Using 2. and 3., from [Ambrosio et al., 2001, Proposition 3] we hence obtain that H 1 (∂ * C 1 ∩ ∂ * C 2 ) = 0. Now, using Proposition 1 of the same reference, we obtain: P (E) + P (F ) = 2 P (F ) + P (E \ F ) -2 H 1 (∂ * F ∩ ∂ * (E \ F )) = 2 P (F ) + P (E \ F ) -2 H 1 (∂ * F ∩ ∂ * (C 1 ∪ C 2 )) = (P (F ) + P (C 1 ) -2 H 1 (∂ * F ∩ ∂ * C 1 )) + (P (F ) + P (C 2 ) -2 H 1 (∂ * F ∩ ∂ * C 2 )) = P (G 1 ) + P (G 2 ) , with G 1 def. = F ∪ C 1 and G 2 def. = F ∪ C 2 . We can then conclude as in the proof of Proposition 2.3. As a consequence of Proposition 2.4, we obtain the following result. We remind the reader that M -connected components and holes are de ned in Appendix A. Corollary 2.5 If E ∈ E, any M -connected component of E belongs to E. Moreover, for any hole Y of E, we have that E ∪ Y ∈ E. Chains and maximal chains Considering exposed faces (instead of general linearly closed faces) does not allow for shorter proofs of the results given in this section. We hence do not include them and refer the reader to [Duval, 2022, Section 2.3]. We say that a collection C of subsets of R 2 is a chain if for every E, F ∈ C we have E ⊆ F or F ⊆ E, i.e. if C is totally ordered for the inclusion relation. We call its cardinal (denoted |C|) the length of C. Denoting dim(F) the dimension of the a ne hull of F, we may state the following result. Proposition 2.6 If C ⊆ E \ {∅, R 2 } is a chain and dim(F) = d, then C has length at most d + 1. We say that a chain C in E is maximal if there is no E ∈ E \ C such that C ∪ {E} is a chain. 
Such a chain must therefore contain ∅ and R 2 . As a result, if C = {E i } m i=0 is a maximal chain, then the collection of its increments {E i \E i-1 } m i=1 is a partition of R 2 . The following proposition shows that every set in E can be written as the union of elements of this partition. Proposition 2.7 Assume C = {E i } m i=0 is a maximal chain in E with ∅ = E 0 ⊂ E 1 ⊂ ... ⊂ E m = R 2 . Then the following holds. 1. For every i ∈ {1, ..., m}, the set E i \ E i-1 is indecomposable. 2. For every F ∈ E, there exists I ⊆ {1, ..., m} such that F = i∈I (E i \ E i-1 ). Even if there could be several maximal chains in F, they all have the same length and the same collection of increments, as the following proposition shows. In particular, we see that each exposed face F with nite dimension is naturally associated to a partition of R2 (which is the collection of increments of any maximal chain in F). Proposition 2.8 If C = {E i } m i=0 and C = {E i } m i=0 are two maximal chains in E, we have: 1. |C| = |C | = dim(F) + 3 , 2. {E i \ E i-1 } m i=1 = {E i \ E i-1 } m i=1 . C 1.2.1. De nition and properties Let u ∈ L 2 (R 2 ) be a function with nite total variation, taking a nite number of values1 t 1 > ... > t i 0 -1 > t i 0 = 0 > t i 0 +1 > ... > t m . Let E i def. = {u ≥ t i } if 1 ≤ i ≤ m and E 0 def. = ∅. We hence have ∅ = E 0 ⊂ E 1 ⊂ ... ⊂ E m = R 2 , i.e. {E i } m i=0 is a chain. Now if F i def. = E i \ E i-1 = {u = t i } for every 1 ≤ i ≤ m, we have CC M (F i ) = {F i,j } j∈J i with J i at most countable. De ning J def. = {(i, j) | 1 ≤ i ≤ m, 1 ≤ j ≤ |J i |}, for every (i, j) ∈ J we introduce E i,j def. = E i-1   1≤k≤j F i,j   . With these notations, the lexicographic order on J yields a total order on {E i,j } (i,j)∈J , which is hence a chain. We say that it is a chain associated to the function u (see Figure 6 for an illustration). By construction, u is constant on each of its increments (F i,j ) (i,j)∈J , which are all indecomposable. Remark 2.9 Since the order we choose on CC M ({u = t i }) is arbitrary, multiple chains can be associated to u (see Figure 6 for an illustration). These chains however have the same collection of increments I, which is given by: I = m i=1 CC M ({u = t i }) . Considering the construction above, we now de ne the sparse objects naturally associated to the total variation, which we call M -simple functions. This is the class of piecewise constant functions we work with in the rest of this manuscript. E 1 E 2 Y 2 Y 1 Figure 6 -A function u = 1 E 1 -1 E 2 . ∅ ∅ ∅ E1 E1 E1 E1 ∪ Y1 E c 2 \ (Y1 ∪ Y2) E c 2 \ (Y1 ∪ Y2) E1 ∪ Y1 ∪ Y2 E c 2 \ Y1 E c 2 \ Y2 E c 2 E c 2 E c 2 R 2 R 2 R 2 Figure 7 -Three chains associated to u (see Figure 6). Each column corresponds to a chain, whose elements are ordered from top to bottom. De nition 2.10 (M -simple functions) We say that a function u : R 2 → R is M -simple if there exists a nite set I, a collection {E i } i∈I of indecomposable sets of nite measure and t ∈ R I such that u = i∈I t i 1 E i . We stress that, with this de nition, any M -simple function u belongs to L 2 (R 2 ) and has nite total variation. We also have that a function u ∈ L 2 (R 2 ) with nite total variation is M -simple if and only if its associated chains have nite length. Moreover, the length of these chains is exactly the sum, over all values t taken by u, of the number of bounded M -connected components of {u = t}. S . 
The "right" measure of sparsity for a piecewise constant function u is the sum, over all values t taken by u, of the number of bounded M -connected components of {u = t}. We conclude this section by stating the following useful result, whose importance is explained below. Proposition 2.11 Assume that dim(F) = d and u ∈ F. Then u is M -simple. Moreover, any chain associated to u is a chain in E, and hence has at most d + 1 elements (not counting ∅ and R 2 ). Finally, u is constant on the increments of any maximal chain in E. Proof : Using Carathéodory's theorem, we have that u can be written as a convex combination of d + 1 extreme points of F, which are of the form 1 E /P (E) with ∈ {-1, 1} and E a simple set such that 0 < |E| < +∞. As a consequence, u takes a nite number of values. Using the above-de ned notations we have: u = i0-1 i=1 θ i 1 Ei + m-1 i=i0 θ i -1 E c i and ˆR2 η u = i0-1 i=1 θ i ˆEi η - m-1 i=i0 θ i ˆEc i η , with θ i > 0. By the coarea formula (see e.g. [Maggi, 2012, Chapter 13]), we also have: i0-1 i=1 θ i P (E i ) + m-1 i=i0 θ i P (E c i ) = TV(u) . Using ´R2 η u = TV(u) we hence obtain P (E i ) = ´Ei η for 1 ≤ i ≤ i 0 -1 and P (E c i ) = -´Ec i η for i 0 ≤ i ≤ m -1, which shows {E i } 1≤i≤m-1 is a chain in E. We conclude that {E i,j } (i,j)∈J is a chain in E by using Proposition 2.4. The last claim is a straightforward consequence of Proposition 2.7, since the level sets of u all belong to E. The main interest of this proposition is that, denoting {F j } 1≤j≤m the collection of increments of any maximal chain in F, every function u ∈ F is of the form u = m j=1 t j 1 F j , with t ∈ R m . To put it another way, the elements of F can be decomposed using a common set of atoms, which is the collection {1 F j } 1≤j≤m . A su cient condition for identi ability In this subsection, we derive a su cient identi ability condition using the above results. To this aim, we rst recall a condition, introduced in [START_REF] Burger | Convergence rates of convex variational regularization[END_REF] as the source condition, which ensures that a given function u 0 solves (P 0 (y 0 )). De nition 2.12 A function u 0 satis es the source condition if there exists η ∈ Im Φ * such that η ∈ ∂TV(u 0 ). If u 0 is an M -simple function, the source condition can be strengthened to ensure it is the unique solution of (P 0 (y 0 )) (i.e. u 0 is identi able). To this aim, for any M -simple function u and any chain C associated to u, whose ordered increments are denoted (F j ) 1≤j≤m , we de ne Φ C : R m → H t → Φ   m j=1 t j 1 F j   . We stress that this mapping only mildly depends on the choice of a chain C, in the sense that if C is another chain associated to u, then C and C have the same collection of increments. There hence exists a permutation of the coordinates σ such that Φ C = Φ C • σ. We may now state the following result, which is a natural analog of [Duval and Peyré, 2015, Proposition 5]. Proposition 2.13 Let u 0 be an M -simple function. Assume there exists η ∈ Im Φ * ∩ ∂TV(u 0 ). If a chain C 0 associated to u 0 is maximal in the face exposed by η, and if Φ C 0 is injective, then u 0 is the unique solution of (P 0 (y 0 )). Proof : Since η ∈ Im Φ * and η ∈ ∂TV(u 0 ), by Proposition 1.4 we have that every solution u of (P 0 (y 0 )) satis es η ∈ ∂TV(u), and hence that u/TV(u) is in the face exposed by η. 
Since C 0 is a maximal chain in this face, denoting (F j ) 1≤j≤m its increments, we obtain that u/TV(u) is constant on each F j , which yields the existence of t ∈ R m such that u = m j=1 t j 1 Fj . The injectivity of Φ C0 nally allows to conclude. Remark 2.14 We stress that a chain associated to an M -simple function u is maximal in an exposed face of {TV ≤ 1} if and only if all chains associated to u are maximal in this face. This directly follows from the fact all these chains have the same length. As a consequence, the result stated in Proposition 2.13 does not depend on the choice of a particular chain. The main question which remains unanswered is the following: if u 0 is M -simple and identi able, with w and λ small enough, are solutions of (P λ (y 0 + w)) M -simple, what is the length of their associated chains, and how are their elements related to those of C 0 ? To answer an analog question for the sparse spikes problem, a non-degenerate version of the source condition was introduced in [Duval and Peyré, 2015, De nition 5]. From Proposition 1.3 and ( 8), the elements of the chains associated to solutions of (P λ (y 0 +w)) are all solutions of the prescribed curvature problem associated to η λ,w . In Section 2.1.2, we have also seen that, under a few assumptions, η λ,w converges to the minimal norm certi cate η 0 when w and λ go to zero. It is therefore natural to investigate how solutions of the prescribed curvature problem behave under variations of the curvature functional, which is the topic of the following section. The results therein allow us to state a natural analog of the non-degenerate source condition in Section 3, and to nally answer the above question. T As mentioned above, this section is dedicated to the study of the prescribed curvature problem associated to some function η ∈ ∂TV(0): inf E⊂R 2 , |E|<+∞ J(E) def. = P (E) -ˆE η . (PC(η)) Our main aim is to investigate how the solution set of (PC(η)) behaves when η varies. To be more speci c, given two su ciently close curvature functionals η and η we are interested in answering the following two questions. (i) Are solutions of (PC(η )) close to solutions of (PC(η))? (ii) How many solutions of (PC(η )) are there in a neighborhood of a given solution of (PC(η))? In the following subsection, we answer the rst question using the notion of quasi-minimizers of the perimeter, as well as rst order optimality conditions for (PC(η)). Then, under a strict stability assumption on solutions of (PC(η)), we answer the second question using the notion of second order shape derivatives. Tools. As discussed in the following subsection, if η is su ciently regular, the solutions of (PC(η)) also enjoy some regularity. In this section, we hence mainly work with smooth sets and extensively use related notions. Relevant de nitions and properties are collected in Appendix B. We also use the notion of curvature (de ned in Appendix A) and of second fundamental form (see e.g. [Maggi, 2012, Section 17.6]). Given a su ciently smooth set E, we often consider sets that are normal deformations of E, i.e. whose boundary is a normal graph over ∂E. Such sets are parametrized by real-valued functions on ∂E, which leads us to use the notion of tangential gradient, tangential Jacobian, and the spaces C k (∂E), L p (∂E) and H 1 (∂E) (along with their associated norms). We refer to the reader to [Henrot and Pierre, 2018, Sections 5.4.1, 5.4.3 and 5.9.1] for a precise de nition of these objects. G Existence of minimizers. 
We rst stress that existence of solutions is guaranteed for (PC(η)). Indeed, since η ∈ ∂TV(0), the objective J is always nonnegative, and equal to zero when evaluated at the empty set, which is admissible. From Proposition 1.3, we also know that it has a non trivial solution as soon as there exists a nonzero function u ∈ L 2 (R 2 ) such that η ∈ ∂TV(u). Boundedness. As already mentioned, by Lemma 1.11 all solutions of (PC(η)) are included in some common ball, i.e. there exists R > 0 such that, for every solution E of (PC(η)), we have E ⊂ B(0, R). Regularity of the solutions. Let us now discuss the regularity of solutions of (PC(η)). If we have η ∈ L ∞ (R 2 ), then any solution of (PC(η)) is a strong quasi-minimizer of the perimeter, and, consequently, is of class C 1,1 (see [Ambrosio, 2010, De nition 4.7.3 and Theorem 4.7.4]). If η is moreover continuous, then the boundary of any solution is locally the graph of a function u which solves (in the sense of distributions) the Euler-Lagrange equation associated to (PC(η)), which is (up to a translation and a rotation): u 1 + u 2 (z) = u (z) 1 + u (z) 2 3/2 = η(z, u(z)) . (17) This in turn implies that u is C 2 (C k+2,α if η ∈ C k,α (R 2 )) and solves (17) in the classical sense. As a result, assuming η is bounded and of class C 1 , every solution of (PC(η)) is of class C 3 . Convergence result. Now, we state a result that, loosely speaking, tells that any neighborhood (in terms of C 2 -normal deformations) of the solution set of (PC(η 0 )) contains the solution set of (PC(η)) provided η is su ciently close to η 0 in C 1 (R 2 ) and L 2 (R 2 ). This provides an answer to question (i). Proposition 2.15 Let η 0 ∈ ∂TV(0) ∩ C 1 b (R 2 ). For every > 0 there exists r > 0 such that for every η ∈ ∂TV(0) with η -η 0 L 2 (R 2 ) + η -η 0 C 1 (R 2 ) ≤ r, the following holds: every solution F of (PC(η)) is a C 2 -normal deformation of size at most of a solution E of (PC(η 0 )) (i.e., with the notation of Proposition B.5, F = E ϕ with ϕ C 2 (∂E) ≤ ). Proof : We argue by contradiction and assume the existence of two sequences (η n ) n∈N * and (F n ) n∈N * such that • for all n ∈ N * , η n ∈ ∂TV(0) ∩ C 1 b (R 2 ), • the sequence (η n ) n∈N * converges to η 0 in L 2 (R 2 ) and C 1 b (R 2 ), • for all n ∈ N * , the set F n solves (PC(η n )) and cannot be written as a C 2 -normal deformation of size at most of a solution of (PC(η 0 )) . We hence have that (F n ) n∈N * is bounded and that F n is a strong (Λ, r 0 )-quasi minimizer of the perimeter (in short form F n ∈ QM(Λ, r 0 ), see [Maggi, 2012, Section 21] and [Ambrosio, 2010, De nition 4.7.3]) for all n ∈ N * , with Λ = sup { η n ∞ , n ∈ N * } and r 0 any positive real number. Taking r 0 small enough to have Λ r 0 ≤ 1, from [Maggi, 2012, Propositions 21.13 and 21.14] we get that, up to the extraction of a subsequence (not relabeled), (F n ) n≥0 converges in measure to a bounded set E ∈ QM(Λ, r 0 ), and that (∂F n ) n≥0 converges to ∂E. From |F n E| → 0 we obtain that E is a solution of (PC(η 0 )), and the convergence of (∂F n ) n≥0 towards E yields ∀r > 0, ∃n 0 ∈ N, ∀n ≥ n 0 , ∂F n ⊂ x∈∂E C(x, r, ν E (x)) , where C(x, r, ν E (x)) denotes the square of axis ν E (x) and side r centered at x, de ned in (60). 
From [Ambrosio, 2010, 4.7.4], and arguing as in the proof of [Maggi, 2012, Theorem 26.6], for every x ∈ ∂E we obtain the existence of r > 0, of n 0 ∈ N, of u ∈ C 1,1 ([-r, r]) and of a sequence (u n ) n≥n0 which is uniformly bounded in C 1,1 ([-r, r]), such that, in C(x, r, ν E (x)) , the set E is the hypograph of u and, for every n ≥ n 0 , the set F n is the hypograph of u n . Moreover, we have that u nu C 1 ([-r,r]) → 0. Now, we also have that u and u n (for n ≥ n 0 ) respectively solve (in the sense of distributions) the following equations in (-r, r): u (z) (1 + u (z) 2 ) 3/2 = H(z, u(z)) , with H(z, t) def. = η 0 (x + R ν E (x) (z, t)) , u n (z) (1 + u n (z) 2 ) 3/2 = H n (z, u n (z)) , with H n (z, t) def. = η n (x + R ν E (x) (z, t)) . (18) We hence immediately obtain that u and u n belong to C 2 ([-r, r]). Moreover, for every z ∈ (-r, r) we have: |u n (z) -u (z)| = H n (z, u n (z)) 1 + u n (z) 2 3/2 -H(z, u(z)) 1 + u (z) 2 3/2 ≤ ( H n -H ∞ + |H(z, u n (z)) -H(z, u(z))|) 1 + u n (z) 2 3/2 + H ∞ 1 + u n (z) 2 3/2 -1 + u (z) 2 3/2 , from which we obtain that u nu ∞ → 0. Using these new results in combination with (18), we get that u and u n belong to C 3 ([-r, r]). Di erentiating (18), we obtain, for every z ∈ (-r, r): u (3) (z) = ∂ 1 H(z, u(z)) + u (z) ∂ 2 H(z, u(z)) (1 + u (z) 2 ) 3/2 + 3 H(z, u(z)) u (z) u (z) (1 + u (z) 2 ) 3/2 , u (3) n (z) = ∂ 1 H n (z, u n (z)) + u n (z) ∂ 2 H n (z, u n (z)) (1 + u n (z) 2 ) 3/2 + 3 H n (z, u n (z)) u n (z) u n (z) (1 + u n (z) 2 ) 3/2 , from which we can nally show u (3) nu (3) ∞ → 0. Finally, using the compactness of ∂E, we obtain that (F n ) n≥0 converges in C 3 towards E, and Proposition B.3 allows to eventually write F n as a C 2 -normal deformation of E, whose norm converges to zero. This yields a contradiction. S Question (ii) is closely linked to the stability of minimizers of (PC(η)), that is to the behaviour of the objective J in a neighborhood of a solution. To analyze this behaviour, we use the general framework presented in [START_REF] Dambrine | Stability in shape optimization with second variation[END_REF], which relies on the notion of second order shape derivative. Approach. The natural path to obtain our main stability result, which is Proposition 2.23, is to prove that J is in some sense of class C 2 , i.e. that its second order shape derivative is continuous at zero (see Proposition 2.21 for a precise statement). We could not nd this result in the literature. A large part of this section is hence dedicated to its proof. To the best of our knowledge, deriving such a result is necessary to obtain Proposition 2.23. In particular, we had to use a stronger condition than the "improved continuity condition" (IC H 1 ,W 2,∞ ) of [START_REF] Dambrine | Stability in shape optimization with second variation[END_REF], which is satis ed by our functional. This condition only requires some uniform control of second order directional derivatives at zero, which is weaker than the result of Proposition 2.21. Structure of shape derivatives. Given an open solution E of (PC(η)), we introduce the following mapping, where E ϕ denotes the normal deformation of E associated to ϕ, de ned in Proposition B.5: j E : C 1 (∂E) → R ϕ → J(E ϕ ) . With this notation, the following result holds. Proposition 2.16 (See e.g. 
[Henrot and Pierre, 2018, Chapter 5]) If E is a bounded open set of class C 2 and η ∈ C 1 (R 2 ) , then j E is twice Fréchet di erentiable at 0 and, for every ψ ∈ C 1 (∂E), we have: j E (0).(ψ) = ˆ∂E [η -H] ψ dH 1 j E (0).(ψ, ψ) = ˆ∂E |∇ τ ψ| 2 -H η + ∂η ∂ν ψ 2 dH 1 where H denotes the curvature of E and ∇ τ ψ def. = ∇ψ -(∇ψ • ν) ν is the tangential gradient of ψ with respect to E. From the expression of j E (0) and j E (0) given above, we immediately notice that j E (0) can be extended to a continuous linear form on L 1 (∂E), and j E (0) to a continuous bilinear form on H 1 (∂E). Strict stability. Following [START_REF] Dambrine | Stability in shape optimization with second variation[END_REF], we say that a solution E of (PC(η)) is strictly stable if j E (0) is coercive in H 1 (∂E), i.e. if the following property holds: ∃α > 0, ∀ψ ∈ H 1 (∂E), j E (0).(ψ, ψ) ≥ α ψ 2 H 1 (∂E) . Under a few assumptions (that are satis ed by our functional), this strict stability condition ensures that E is a strict local minimizer of J (see Theorem 1.1 in the above-mentioned reference), and is hence the only minimizer (modulo Lebesgue negligible sets) among sets E ϕ with ϕ in a neighborhood of 0. It plays a crucial role in our answer to question (ii). Continuity results. Now, we prove a few results concerning the convergence of j E towards j 0,E and the continuity of ϕ → j E (ϕ), where j E and j 0,E are the functionals respectively associated to η and η 0 . To achieve this, we need to compute j E (ϕ) for ϕ ∈ C1 (∂E) in a neighborhood of 0. This may be done using Lemma 2.17 below. To state it, given a bounded open set E of class C 2 and ϕ in a neighborhood of 0 in C 1 (∂E), we introduce the mapping f ϕ = Id + ξ ϕ , with ξ ϕ de ned as in Lemma B.4. If ϕ C 1 (∂E) is su ciently small then f ϕ is a C 1 -di eomorphism, and we denote its inverse by g ϕ . Lemma 2.17 Let E be a bounded open set of class C 2 . Then for every ϕ in a neighborhood of 0 in C 1 (∂E), and for every ψ ∈ H 1 (∂E), we have: j E (ϕ).(ψ, ψ) = j Eϕ (0).(ξ ψ • g ϕ • ν ϕ , ξ ψ • g ϕ • ν ϕ ) + j Eϕ (0).(Z ϕ,ψ ) (19) where ν ϕ is the unit outward normal to E ϕ and Z ϕ,ψ = B ϕ ((ξ ψ • g ϕ ) τϕ , (ξ ψ • g ϕ ) τϕ ) -2(∇ τϕ (ξ ψ • g ϕ • ν ϕ )) • (ξ ψ • g ϕ ) τϕ , with ζ τϕ and ∇ τϕ ζ the tangential part and the tangential gradient of ζ with respect to E ϕ , and B ϕ the second fundamental form of E ϕ . Proof : To prove this result, we need to introduce J E 1 de ned by J E : C 1 b (R 2 , R 2 ) → R ξ → J((Id + ξ)(E)) . We denote ν the outward unit normal to E and B its second fundamental form. We also denote ζ τ and ∇ τ ζ the tangential part and the tangential gradient of ζ with respect to E. The structure theorem (see e.g. [Henrot and Pierre, 2018, Theorem 5.9.2] or [Dambrine and Lamboley, 2019, Theorem 2.1]) then yields, for every su ciently smooth vector eld ζ: J E (0).(ζ) = j E (0).(ζ ∂E • ν) , J E (0).(ζ, ζ) = j E (0).(ζ ∂E • ν, ζ ∂E • ν) + j E (0).(Z ζ ) , where Z ζ def. = B(ζ τ , ζ τ ) -2 (∇ τ (ζ • ν)) • ζ τ . Now, we rst notice that, for every pair of vector elds ξ, ζ such that Id + ξ is invertible, we have: (Id + ξ + ζ)(E) = (Id + ζ • (Id + ξ) -1 )((Id + ξ)(E)) . De ning F def. = (Id + ξ)(E) we hence obtain J E (ξ + ζ) = J F (ζ • (Id + ξ) -1 , ζ • (Id + ξ) -1 ), which yields J E (ξ).(ζ, ζ) = J F (0).(ζ • (Id + ξ) -1 ) . Using this with ξ = ξ ϕ and ζ = ξ ψ , we get: j E (ϕ).(ψ, ψ) = J E (ξ ϕ )(ξ ψ , ξ ψ ) = J Eϕ (0).(ξ ψ • g ϕ , ξ ψ • g ϕ ) , and we nally obtain (19) by applying the structure theorem. 
Most of the results below rely on the following technical lemma, whose rst part is contained in [Dambrine and Lamboley, 2019, Lemma 4.7]. Lemma 2.18 Let E be a bounded C 2 set. If ϕ C 1 (∂E) → 0 we have: (i) f ϕ -Id C 1 (∂E) → 0 , ν ϕ • f ϕ -ν C 0 (∂E) → 0 , (iii) (ii) g ϕ -Id C 1 (∂Eϕ) → 0 , Jac τ f ϕ -1 C 0 (∂E) → 0 . (iv) If ϕ C 2 (∂E) → 0 then we also have: (v) H ϕ • f ϕ -H C 0 (∂E) → 0 , B ϕ • f ϕ -B C 0 (∂E) → 0 . (vi) Moreover, the following holds: (a) lim ϕ C 1 (∂E) →0 sup ψ∈L 2 (∂E)\{0} (ξ ψ • g ϕ ) τϕ L 2 (∂Eϕ) ψ L 2 (∂E) = 0 , (b) lim ϕ C 1 (∂E) →0 sup ψ∈H 1 (∂E)\{0} ∇ τϕ (ξ ψ • g ϕ • ν ϕ ) L 2 (∂Eϕ) -∇ τ ψ L 2 (∂E) ψ H 1 (∂E) = 0 , (c) lim ϕ C 2 (∂E) →0 sup ψ∈H 1 (∂E)\{0} Z ϕ,ψ L 1 (∂Eϕ) ψ 2 H 1 (∂E) = 0 . Proof : See [Dambrine and Lamboley, 2019, Lemma 4.7] for a proof of the results stated in the rst part of the lemma. To prove (a) we use the fact that (ξ ψ • g ϕ ) τϕ 2 L 2 (∂Eϕ) = ˆ∂E (ν • g ϕ ) 2 τϕ • f ϕ Jac τ f ϕ ψ 2 dH 1 ≤ Jac τ f ϕ C 0 (∂E) (ν • g ϕ ) τϕ • f ϕ 2 C 0 (∂E) ψ 2 L 2 (∂E) = Jac τ f ϕ C 0 (∂E) ν -(ν • ν ϕ • f ϕ ) ν ϕ • f ϕ 2 C 0 (∂E) ψ 2 L 2 (∂E) . which, using (i), (iii) and (iv), yields the result. To prove (b), we notice that: ∇ τϕ (ξ ψ • g ϕ • ν ϕ ) = c 1 ϕ ψ • g ϕ + c 2 ϕ • ∇ τ ψ • g ϕ τ ϕ , with τ = ν ⊥ , τ ϕ = ν ⊥ ϕ 1 and c 1 ϕ def. = τ • g ϕ • ν ϕ (J gϕ τ ϕ ) • τ • g ϕ + τ ϕ • ν • g ϕ , c 2 ϕ def. = ν • g ϕ • ν ϕ (J gϕ τ ϕ ) . We hence obtain |∇ τϕ (ξ ψ • g ϕ • ν ϕ ) • f ϕ Jac τ f ϕ -∇ τ ψ| ≤ c ϕ (|ψ| + |∇ τ ψ|) with c ϕ independant of ψ. Moreover, using (ii) and (iii), we have: lim ϕ C 1 (∂E) →0 c ϕ C 0 (∂E) → 0 . 1 These two vectors are de ned as the application of the rotation of angle π/2 to ν and νϕ. Denoting A def. = ∇ τϕ (ξ ψ • g ϕ • ν ϕ ) L 2 (∂Eϕ) -∇ τ ψ L 2 (∂E) , this nally yields A ≤ ∇ τϕ (ξ ψ • g ϕ • ν ϕ ) • f ϕ Jac τ f ϕ -∇ τ ψ L 2 (∂E) ≤ √ 2 c ϕ C 0 (∂E) ψ H 1 (∂E) . We now prove (c). Since B ϕ ((ξ ψ • g ϕ ) τϕ , (ξ ψ • g ϕ ) τϕ ) L 1 (∂Eϕ) ≤ B ϕ C 0 (∂Eϕ) (ξ ψ • g ϕ ) τϕ 2 L 2 (∂Eϕ) and B ≤ ∇ τϕ (ξ ψ • g ϕ • ν ϕ )) L 2 (∂Eϕ) (ξ ψ • g ϕ ) τϕ L 2 (∂Eϕ) with B def. = (∇ τϕ (ξ ψ • g ϕ • ν ϕ )) • (ξ ψ • g ϕ ) τϕ L 1 (∂Eϕ) , we get the result. Using the above result, we now prove the continuity of ϕ → j E (ϕ) by proving the continuity of the two terms appearing in its expression. In all the following, if E is a (real) vector space, we denote Q(E) the set of quadratic forms over E. Proposition 2.19 If E is a bounded C 2 set and p E : ϕ → P (E ϕ ), the mapping p E : C 2 (∂E) → Q(H 1 (∂E)) ϕ → p E (ϕ) is continuous at 0. Proof : Using Lemma 2.18, for every ϕ ∈ C 2 (∂E) in a neighborhood of 0 and ψ ∈ H 1 (∂E), we obtain: p E (ϕ).(ψ, ψ) -p E (0).(ψ, ψ) = A + p Eϕ (0).(Z ϕ,ψ ) , with A def. = p Eϕ (0).((ξ ψ • g ϕ ) • ν ϕ , (ξ ψ • g ϕ ) • ν ϕ ) -p E (0).(ψ, ψ) . Now, we also have: A = ∇ τϕ (ξ ψ • g ϕ • ν ϕ ) 2 L 2 (∂Eϕ) -∇ τ ψ 2 L 2 (∂E) , and using Lemma 2.18 we obtain lim ϕ C 2 (∂E) →0 sup ψ∈H 1 (∂E)\{0} p Eϕ (0).((ξ ψ • g ϕ ) • ν Eϕ , (ξ ψ • g ϕ ) • ν Eϕ ) -p E (0).(ψ, ψ) ψ 2 H 1 (∂E) = 0 . Moreover |p E (0).(Z ϕ,ψ )| ≤ H ϕ L ∞ (∂Eϕ) Z ϕ,ψ L 1 (∂Eϕ) , and Lemma 2.18 allows to conclude. Proposition 2.20 If E is a bounded C 2 set, η ∈ C 1 b (R 2 ) and g E : ϕ → ´Eϕ η, the mapping g E : C 2 (∂E) → Q(H 1 (∂E)) ϕ → g E (ϕ) is continuous at 0. Proof : We proceed as in Proposition 2.19. De ning A def. = g Eϕ (0).((ξ ψ • g ϕ ) • ν Eϕ , (ξ ψ • g ϕ ) • ν Eϕ ) we have: A = ˆ∂Eϕ H ϕ η + ∂η ∂ν ϕ ((ψ ν) • g ϕ • ν Eϕ ) 2 dH 1 = ˆ∂E H ϕ η + ∂η ∂ν ϕ • f ϕ (ν • ν ϕ • f ϕ ) 2 Jac τ f ϕ ψ 2 dH 1 . 
This yields: g Eϕ (0).((ξ ψ • g ϕ ) • ν ϕ , (ξ ψ • g ϕ ) • ν ϕ ) -g E (0).(ψ, ψ) ψ 2 L 2 (∂E) ≤ c ϕ , with c ϕ def. = H ϕ η + ∂η ∂ν ϕ • f ϕ (ν • ν ϕ • f ϕ ) 2 Jac τ f ϕ -H η + ∂η ∂ν ∞ . Using Lemma 2.18 we obtain lim ϕ C 2 (∂E)→0 sup ψ∈H 1 (∂E)\{0} g Eϕ (0).((ξ ψ • g ϕ ) • ν ϕ , (ξ ψ • g ϕ ) • ν ϕ ) -g E (0).(ψ, ψ) ψ 2 H 1 (∂E) = 0 . Moreover |g Eϕ (0).(Z ϕ,ψ )| ≤ η ∞ Z ϕ,ψ L 1 (∂Eϕ) , and using again Lemma 2.18 we nally obtain the result. As a consequence of the two propositions above, we obtain: Proposition 2.21 If E is a bounded C 2 set and η ∈ C 1 b (R 2 ), the mapping j E : C 2 (∂E) → Q(H 1 (∂E)) ϕ → j E (ϕ) is continuous at 0. Now, we prove that for ϕ in a neighborhood of 0 in C 1 (∂E), the mapping j E (ϕ) is uniformly close to j 0,E (ϕ) provided η -η 0 C 1 (R 2 ) is small engouh. Proposition 2.22 Let E be a bounded C 2 set and η 0 ∈ C 1 b (R 2 ). There exists > 0 such that lim η-η 0 C 1 (R 2 ) →0 sup ϕ C 2 (∂E) ≤ j E (ϕ) -j 0,E (ϕ) Q(H 1 (∂E)) = 0 . Proof : Since |(j E -j 0,E ) (ϕ).(ψ, ψ)| ≤ c 1 ϕ + c 2 ϕ with c 1 ϕ def. = ˆ∂Eϕ H ϕ (η -η 0 ) + ∂ (η -η 0 ) ∂ν ϕ (ξ ψ • g ϕ • ν ϕ ) 2 dH 1 , c 2 ϕ def. = ˆ∂Eϕ (η -η 0 ) Z ϕ,ψ dH 1 , the result readily follows from Lemma 2.18. Stability result. We are now able to state the nal result of this section, which, loosely speaking, states that if E is a strictly stable solution of (PC(η 0 )), there is at most one ϕ in a neighborhood of 0 such that E ϕ is a solution of (PC(η)), provided η - η 0 C 1 (R 2 ) is small engouh. Proposition 2.23 Let η 0 ∈ ∂TV(0) ∩ C 1 b (R 2 ) and E be a strictly stable solution of (PC(η 0 )). Then there exists > 0 and r > 0 such that for every η ∈ ∂TV(0) with ηη 0 C 1 (R 2 ) ≤ r there is at most one ϕ ∈ C 2 (∂E) such that ϕ C 2 (∂E) ≤ and E ϕ solves (PC(η)). Proof : The fact E is a strictly stable solution of (PC(η 0 )) and Propositions 2.21 and 2.22 give the existence of > 0, r > 0 and α > 0 such that, for every (ϕ, η) ∈ C 2 (∂E) × C 1 b (R 2 ) with ϕ C 2 (∂E) ≤ and η -η 0 C 1 (R 2 ) ≤ r, we have: sup ψ∈H 1 (∂E)\{0} j E (ϕ).(ψ, ψ) ψ 2 H 1 (∂E) ≥ α As a result, j E (ϕ) is coercive (and hence positive de nite) for every ϕ such that ϕ C 2 (∂E) ≤ . We therefore obtain that j E is strictly convex on this set and the result follows. Summary. Combining the results of Propositions 2.15 and 2.23, we have proved that, provided η is su ciently close to η 0 in C 1 (R 2 ) and L 2 (R 2 ), every solution of (PC(η)) belongs to a neighborhood (in terms of C 2 -normal deformations) of a solution of (PC(η 0 )), and that, under a strict stability assumption, each of these neighborhoods contain at most one solution of (PC(η)). In Theorem 2.27 below, we prove (under suitable assumptions) that, if η = η λ,w is the dual certicate associated to (P λ (y 0 + w)) and η 0 the minimal norml dual certi cate associated to (P 0 (y 0 )), then each neighborhood of a solution of (PC(η 0 )) contains exactly one solution of (PC(η λ,w )). E In this section, we provide an answer to the following question: if u 0 is M -simple and identi able, are solutions of (P λ (y 0 + w)) M -simple, what is the length of their associated chains, and how are they related to the chains associated to u 0 ? To answer it, we rst use the results proved in Section 2 to show that, under a few assumptions, the dimension of the faces of the total variation unit ball is in some sense stable. Then, we introduce a non-degenerate version of the source condition, and nally prove our exact support recovery result. 
S If η ∈ ∂TV(0) ∩ C 1 b (R 2 ) and F is the face exposed by η, de ning E as in ( 16), we say that E ⊂ R 2 is a strictly stable element of E if E is a strictly stable solution of (PC(η)) or E c is a strictly stable solution of (PC(-η)). Theorem 2.24 Let η 0 ∈ ∂TV(0) ∩ C 1 b (R 2 ). Assume the face F 0 exposed by η 0 has a maximal chain C 0 whose elements are all strictly stable. Then for every > 0 there exists r > 0 such that, for every η ∈ ∂TV(0 ) ∩ C 1 b (R 2 ) with η -η 0 L 2 (R 2 ) + η -η 0 C 1 (R 2 ) ≤ r , the face F exposed by η has a maximal chain whose elements are C 2 -normal deformations of size at most of elements of C 0 . In particular dim(F) ≤ dim(F 0 ). Remark 2.25 To be more precise, Theorem 2.24 states that, if C 0 = {E i } m i=0 , then F has a maximal chain C of length n ≤ m, say C = {F j } n j=0 , with ∀j ∈ {1, ..., n -1}, F j = E θ(j) ϕ j , where θ : {1, ..., n -1} → {1, ..., m -1} is a strictly increasing function and ∀j ∈ {1, ..., n -1}, ϕ j C 2 (∂E θ(j) ) ≤ . Proof : We argue by contradiction and assume the existence of a sequence (η k ) k∈N * converging in L 2 (R 2 ) and C 1 (R 2 ) to η 0 . We denote by E 0 the set associated to F 0 de ned as in ( 16) (and similarly by E k the set associated to the face F k exposed by η k ). For simplicity, we rst treat the case where, for every i ∈ {1, ..., m -1}, the set E i belongs to E + 0 (i.e. E i is a solution of (PC(η 0 ))). The general case, where E i belongs to E - 0 (i.e. E c i is a solution of (PC(-η 0 ))) for every i ≥ i 0 + 1 with i 0 possibly strictly smaller than m -1, is discussed at the end of the proof. From Propositions 2.15 and 2.23 we know that there exists > 0 and k 1 ∈ N * such that, for every k ≥ k 1 , every solution of (PC(η k )) is in a neighborhood of size (in terms of C 2 -normal deformations) of a solution of (PC(η 0 )), and that each of these neighborhoods contains at most one of these solutions. We hence have that, for every i ∈ {1, ..., m -1}, there exists in nitely many k such that the neighborhood of size of E j contains exactly one solution of (PC(η k )), or in nitely many k such that this neighborhood does not contain any solution of (PC(η k )). We therefore obtain the existence of a partition (I, J) of {1, ..., m -1} such that, up to extraction and possibly increasing k 1 , for every k ≥ k 1 the following holds: • if i ∈ I then there is no solution of (PC(η k )) in the neighborhood of E i of size , • if i ∈ J there is exactly one solution of (PC(η k )) in the neighborhood of E i of size . There hence exists n = |J| + 1 ≤ m and an increasing bijection θ : {1, ..., n -1} → J with the following property: for every k ≥ k 1 and j ∈ {1, ..., n -1}, there exists ϕ k,j such that F k,j def. = E θ(j) ϕ k,j is the unique solution of (PC(η k )) in the neighborhood of E θ(j) of size . Now, let us show that there exists k 2 ∈ N such that, for every k ≥ k 2 , the collection C k def. = {F k,j } 0≤j≤n with F k,0 def. = ∅ and F k,n def. = R 2 is a chain. For every j ∈ {1, ..., n -2}, we have that (F k,j ∩ F k,j+1 ) k≥k1 solves (PC(η k )) and converges in measure to E θ(j) ∩ E θ(j+1) = E θ(j) . Arguing as in the proof of Proposition 2.15, we obtain the existence of k 2,j such that F k,j ∩ F k,j+1 is in the neighborhood of E θ(j) of size for every k ≥ k 2,j . Using Proposition 2.23 then yields F k,j ∩ F k,j+1 = F k,j for every k ≥ k 2,j . Repeatedly applying this argument we obtain the existence of k 2,1 , ..., k 2,n-2 and de ning k 2 as the maximum of these integers we get that C k is a chain for every k ≥ k 2 . 
Now, let us show the existence of k 3 ∈ N * such that, for every k ≥ k 3 , the chain C k is maximal in the face F k exposed by η k . Arguing by contradiction, we assume the existence of (G k ) k≥k2 such that G k ∈ E k \ C k and C k ∪ {G k } is a chain for every k ≥ k 2 1 . Arguing as in the proof of Proposition 2.15, we get the existence of G ∈ E 0 such that, up to the extraction of a subsequence that we do not relabel, G k = G ψ k for k large enough, with ψ k C 2 (∂G) → 0. Now, since C k ∪ {G k } is a chain for every k ≥ k 2 , we have that, for every j ∈ {1, n -1}, there exists in nitely many k such that F k,j ⊂ G k or in nitely many k such that G k ⊂ F k,j . Up to another extraction, we hence get the existence of j ∈ {0, ..., n -1} and k 3 ∈ N * such that F k,j ⊂ G k ⊂ F k,j+1 for every k ≥ k 3 . As a result, we obtain      ∅ ⊂ G ⊂ E θ(1) if j = 0 , E θ(n-1) ⊂ G ⊂ R 2 if j = n -1 , E θ(j) ⊂ G ⊂ E θ(j+1) otherwise. Using the maximality of C 0 , we get G = E i with i ∈ I. Since G k = G ψ k with ψ k C 2 (∂G) → 0 and, for every k ≥ k 1 , su ciently small neighborhoods of E i do not contain any solution of (PC(η k )), we get a contradiction. Finally, let us comment on the general case, where there exists i 0 ∈ {0, ..., m -1} such that, for every i ∈ {1, ..., m -1}, E i ∈ E + 0 if i ≤ i 0 and E i ∈ E - 0 if i ≥ i 0 + 1. The same arguments can be applied to obtain the result, bearing in mind that if i ≥ i 0 + 1, then E c i is a solution of (PC(-η 0 )) (instead of E i being a solution of (PC(η 0 ))). M We are now able to introduce a non-degenerate version of the source condition, which ultimately allows us to state our main result. De nition 2.26 (Non-degenerate source condition) Let u 0 be an M -simple function and C 0 a chain associated to u 0 . We say that u 0 satis es the non-degenerate source condition if 1. Φ C 0 has full rank 2. η 0 ∈ ∂TV(u 0 ), 3. C 0 is maximal in the face exposed by η 0 , 4. every element of C 0 is strictly stable, with η 0 the minimal norm certi cate. In that case, we say that η 0 is non-degenerate. Let us stress that, as pointed out in Remark 2.14, De nition 2.26 does not depend on the choice of a chain C 0 associated to u 0 , as one is maximal in an exposed face if and only if they all are. Theorem 2.27 Let u 0 be an M -simple function and C 0 = {E i } m i=0 an associated chain. Assume Φ * is contin- uous from H to C 1 b (R 2 ) and u 0 satis es the non-degenerate source condition. Then there exists constants α, λ 0 ∈ R * + such that for every (λ, w) ∈ R * + × H with λ ≤ λ 0 and w H /λ ≤ α, any solution u λ,w of (P λ (y)) is M -simple and its associated chains have the same length as C 0 . Moreover, writing u 0 = m i=1 t i 1 E i \E i-1 , we have u λ,w = m i=1 t λ,w i 1 E λ,w i \E λ,w i-1 , ( 20 ) with E λ,w 0 def. = ∅, E λ,w m def. = R 2 , and ∀i ∈ {1, ..., m -1}, E λ,w i = (E i ) ϕ λ,w i with ϕ λ,w i ∈ C 2 (∂E i ) . (21) Finally, if λ = w H /α, we have: ∀i ∈ {1, ..., m}, lim w→0 t λ,w i = t i , ∀i ∈ {1, ..., m -1}, lim w→0 ϕ λ,w i C 2 (∂E i ) = 0 . Proof : Let and r be such that the results of Theorem 2.24 hold. Since Φ * is continuous from H to C 1 b (R 2 ), using Proposition 1.8 and ( 9) we obtain the existence of α and λ 0 such that for every (λ, w) ∈ R * + × H with λ ≤ λ 0 and w H /λ ≤ α 0 , we have: η λ,w -η 0 L 2 (R 2 ) + η λ,w -η 0 C 1 (R 2 ) ≤ r . Since C 0 is a maximal chain in the face exposed by η 0 , Theorem 2.24 ensures the existence of a maximal chain C λ,w in the face exposed by η λ,w , whose length is not greater than the length of C 0 . 
As a result, u λ,w is M -simple. We also know the elements of C λ,w are C 2 -normal deformations of elements of C 0 . Using the fact u λ,w is constant on each increment of C λ,w and converges in L 1 (R 2 ) towards u 0 , we obtain that C 0 and C λ,w have equal length, and that (20) and ( 21) hold. A B u 0 A λ,w B λ,w u λ,w Finally, the convergence of ϕ λ,w i towards 0 in C 2 (∂E i ) follows from Theorem 2.24, and the convergence of t λ,w i from the convergence of u λ,w towards u 0 in L 1 (R 2 ). A The core assumption in Theorem 2.27 is the strict stability of the elements of the chains associated to u 0 , as minimizers of the prescribed curvature functional associated to the minimal norm certi cate η 0 . In this section, we study this strict stability assumption and derive a natural su cient condition for it to hold. We then discuss to what extent this condition is necessary. Setting. We x η ∈ ∂TV(0)∩C 1 b (R 2 ) and E a (non-trivial) solution of the prescribed curvature problem (PC(η)) associated to η. We recall that E is a strictly stable solution of (PC(η) ) if j E (0) is coercive in H 1 (∂E), with ∀ψ ∈ H 1 (∂E), j E (0).(ψ, ψ) = ˆ∂E |∇ τ E ψ| 2 -H E η + ∂η ∂ν E ψ 2 dH 1 . Equivalence of coercivity and positive de niteness. As explained (in a more general context) in [START_REF] Dambrine | Stability in shape optimization with second variation[END_REF], under a few assumptions, the bilinear form j E (0) is in fact coercive if and only if it is positive de nite. Instead of proving that our functional J ts the assumptions of Lemma 3.1 in the above reference, we use their arguments to provide a direct proof in our speci c setting. Lemma 2.28 ([Dambrine and Lamboley, 2019, Lemma 3.1]) Let η ∈ C 1 (R 2 ) and E be a bounded open set of class C 2 . Then the following propositions are equivalent: (i) j E (0) is positive de nite, i.e. ∀ψ ∈ H 1 (∂E) \ {0}, j E (0).(ψ, ψ) > 0 , (ii) j E (0) is coercive, i.e. ∃α > 0, ∀ψ ∈ H 1 (∂E), j E (0).(ψ, ψ) ≥ α ψ 2 H 1 (∂E) . Proof : The fact (ii) implies (i) is trivial. We assume (i) and let (ψ n ) n≥0 be a minimizing sequence for inf ψ∈H 1 (∂E) j E (0).(ψ, ψ) s.t. ψ H 1 (∂E) = 1 . (22) Up to the extraction of a subsequence (not relabeled), we have that (ψ n ) n≥0 converges weakly in H 1 (∂E) and (by the compactness of the embedding H 1 (∂E) ⊂⊂ L 2 (∂E)) strongly in L 2 (∂E) to some ψ ∈ H 1 (∂E). We hence obtain lim n→+∞ ˆ∂E H E η + ∂η ∂ν E ψ 2 n dH 1 = ˆ∂E H E η + ∂η ∂ν E ψ 2 dH 1 . Let us now distinguish two cases. 1. If ψ = 0, then we use ˆ∂E |∇ τ E ψ| 2 dH 1 ≤ lim inf n→+∞ ˆ∂E |∇ τ E ψ n | 2 dH 1 , which, using (i), yields lim inf n→+∞ j E (0).(ψ n , ψ n ) ≥ j E (0).(ψ, ψ) > 0. 2. If ψ = 0, using that ψ n H 1 (∂E) = 1 for all n, we obtain lim inf n→+∞ j E (0).(ψ n , ψ n ) = lim inf n→+∞ ˆ∂E |∇ τ E ψ n | 2 dH 1 = 1 > 0 . As a result, we obtain that, in both cases, the in mum in ( 22) is strictly positive, which shows that j E (0) is coercive. A su cient condition for coercivity. Using the expression of j E (0) and the equivalence between coercivity and positive de niteness, the following result can be directly obtained. Proposition 2.29 Let η ∈ C 1 (R 2 ) and E be a bounded open set of class C 2 such that j E (0) = 0. If sup x∈∂E H E (x) 2 + ∂η ∂ν E (x) < 0 , then j E (0) is coercive. Necessity of the condition? A natural question to investigate is whether the condition given in Proposition 2.29 is necessary. A rst result in this direction is given in Proposition 2.30 below. 
Before stating it, let us de ne λ 1 (Γ), the rst Dirichlet eigenvalue of the Laplacian associated to some simple C 1 curve Γ with nite length (see for instance [START_REF] Kuttler | Eigenvalues of the Laplacian in Two Dimensions[END_REF] for the more classical case of open bounded sets): λ 1 (Γ) def. = inf ψ∈H 1 0 (Γ)\{0} ∇ τ Γ ψ 2 L 2 (Γ) ψ 2 L 2 (Γ) . ( 23 ) The in mum in ( 23) is attained and is actually equal to the Dirichlet eigenvalue of the interval I = (0, H 1 (Γ)) ⊂ R, which is 1/(πH 1 (Γ) 2 ). To see this, one simply needs to consider an arc-length parameterization γ of Γ, and notice that, for every ψ ∈ H 1 0 (Γ) \ {0}, we have: ∇ τ Γ ψ 2 L 2 (Γ) ψ 2 L 2 (Γ) = ´I (ψ • γ) (t) 2 dt ´I (ψ • γ)(t) 2 dt . We can now state the following proposition. Proposition 2.30 If there exists α > 0 such that H 2 E + ∂η ∂ν E ≥ α on a connected subset Γ of ∂E with α H 1 (Γ) 2 ≥ 1/π , then j E (0) is not coercive. Proof : Since the in mum in the de nition of λ 1 (Γ) is attained, we have the existence of a nonzero function ϕ ∈ H 1 0 (Γ) such that ∇ τΓ ϕ 2 L 2 (Γ) ϕ 2 L 2 (Γ) = λ 1 (Γ) = 1 π H 1 (Γ) 2 ≤ α . We hence obtain ˆΓ |∇ τΓ ϕ| 2 -H 2 E + ∂η ∂ν E ϕ 2 dH 1 ≤ ˆΓ |∇ τΓ ϕ| 2 -α ϕ 2 dH 1 ≤ 0 . We can then extend ϕ to ψ ∈ H 1 (∂E) whose support is compactly included in Γ, which yields ˆ∂E |∇ τ E ψ| 2 -H 2 E + ∂η ∂ν E ψ 2 dH 1 ≤ 0 . We can therefore conclude that j E (0) is not coercive. We stress that Proposition 2.30 does not show that the condition in Proposition 2.29 is necessary, since it does not cover the case where H 2 E + ∂η ∂ν E is greater than a "small" positive constant on a "small" portion of the boundary. C In this subsection, we describe two ways to construct candidates for the minimal norm dual certi cate de ned in De nition 1.7. The rst one relies on an attempt to adapt the notion of calibrability, which is exploited for the denoising case in [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF], to general measurement operators. The second is based on vanishing derivatives pre-certi cates, introduced in [Duval and Peyré, 2015, Section 4]. As computing the minimal norm certi cate in full generality is challenging, we provide a further study of a particular setting, where Φ is assumed to be a convolution operator with radial kernel h, and the unknown image u 0 is the indicator of a ball of radius R 0 > 0. Our aim is to investigate, in this simpli ed scenario, when the non-degenerate source condition is satis ed. Φ-calibrability In [Chambolle et al., 2016, Section 4.3], the minimal norm certi cate is computed when Φ is the identity (and (P λ (y)) is hence the Rudin Osher Fatemi denoising problem) for a special class of sets called calibrable sets (see e.g. [START_REF] Bellettini | The Total Variation Flow in RN[END_REF], Alter et al., 2005]). It is hence tempting to adapt this strategy to our setting. An extension of the notion of calibrability for a set E with respect to Φ is to require that 1 E is a singular vector of the total variation as de ned in [Benning and Burger, 2013, De nition 4]. This leads to the following de nition. De nition 2.31 A set E ⊂ R 2 is said to be Φ-calibrable if λ E Φ * Φ1 E ∈ ∂TV(1 E ) with λ E def. = P (E) Φ1 E 2 . Remark 2.32 If λΦ * Φ1 E ∈ ∂TV(1 E ) for some λ ∈ R, taking the inner product with 1 E shows λ = λ E . In [Chambolle et al., 2016, Proposition 6], it is proven that, if Φ = Id and E is calibrable, then λ E 1 E is the minimal norm certi cate. 
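To get a concrete feel for the constant λ E appearing in Definition 2.31, it can be estimated numerically on a grid. The sketch below is our own illustration, not code taken from the author's repositories: the grid, the use of scipy's gaussian_filter to model Φ, and all function names are assumptions made for the example. It evaluates λ E = P (E) / Φ1 E 2 when E is a disk and Φ is the convolution with a Gaussian kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def lambda_disk(r0=1.0, sigma=0.2, half_width=4.0, n=512):
    """Estimate lambda_E = P(E) / ||Phi 1_E||^2 for E = B(0, r0), Phi a Gaussian convolution."""
    h = 2 * half_width / n                               # grid step
    xs = np.linspace(-half_width, half_width, n)
    xx, yy = np.meshgrid(xs, xs, indexing="ij")
    indicator = (xx**2 + yy**2 <= r0**2).astype(float)   # 1_E sampled on the grid
    # Phi 1_E: convolution with a unit-mass Gaussian (sigma converted to pixel units)
    phi_ind = gaussian_filter(indicator, sigma=sigma / h, mode="constant")
    perimeter = 2 * np.pi * r0                           # P(B(0, r0)) in closed form
    sq_norm = np.sum(phi_ind**2) * h**2                  # ||Phi 1_E||_{L^2}^2 by a Riemann sum
    return perimeter / sq_norm


print(lambda_disk())  # lambda_E for the unit disk and sigma = 0.2
```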
With De nition 2.31, the following analog of this result is straightforward to obtain. Proposition 2.33 If E is Φ-calibrable, then the minimal norm dual certi cate is λ E Φ * Φ1 E . Proof : Let p ∈ H such that Φ * p ∈ ∂TV(1 E ) . Then, we have: P (E) = Φ * p, 1 E = p, Φ1 E ≤ p H Φ1 E H . Since λ E Φ1 E = P (E)/ Φ1 E H , we obtain that p H ≤ λ E Φ1 E . Now, from Proposition 1.3, we can readily obtain the following useful characterization of Φcalibrable sets. Proposition 2.34 A set E is Φ-calibrable if and only if E maximizes F → K(E, F ) def. = Φ 1 E P (E) , Φ 1 F P (F ) among sets of nite perimeter with positive nite measure. When Φ = Id, a complete characterization of calibrable sets is available [START_REF] Alter | A characterization of convex calibrable sets in Rn[END_REF]. We are not aware of any result outside this speci c setting. In fact, we provide numerical evidence below suggesting that even disks are not Φ-calibrable for natural choices of Φ (namely the convolution with the Gaussian and ideal low-pass kernels). Deconvolution of a disk. Let us de ne α(R 0 ) def. = sup K(R, R 0 ) K(R 0 , R 0 ) , R > 0 with K(R, R ) = K 1 B(0,R) , 1 B(0,R ) . With this notation we have that, if 1 B(0,R 0 ) is Φ-calibrable, then α(R 0 ) ≤ 1. Figure 9 shows the graph of R 0 → α(R 0 ) when Φ is the convolution with the Gaussian or ideal low-pass kernel, for several values of the variance and cut-o frequency. In every situation, these plots seem to indicate that α(R 0 ) > 1 for every R 0 > 0, and hence that 1 B(0,R 0 ) is never Φ-calibrable. This questions the relevance of our de nition of Φ-calibrability. In fact, it could even be that there are no Φ-calibrable sets, even for reasonable choices of measurement operator Φ. Pre-certi cates In [Duval and Peyré, 2015, Section 4], the notion of dual pre-certi cate is introduced. Its authors de ne it as any "good candidate" p ∈ H for solving Φ * p ∈ ∂TV(u), where u is an admissible function for (P 0 (y 0 )) whose optimality is to be proven. In this subsection, we de ne an analog of the so-called vanishing derivative pre-certi cate, in order to study the optimality of 1 E with E a simple set. We assume as in Theorem 2.27 that Φ * is continuous from H to C 1 b (R 2 ). We also assume that E is of class C 3 , as any set E satisfying Φ * p ∈ ∂TV(1 E ) for some p ∈ H has this regularity. Necessary optimality conditions. We know that Φ * p is a dual certi cate associated to 1 E if and only if Φ * p ∈ ∂TV(0) and E minimizes J(E) def. = P (E) -´E Φ * p. Using the expression of the rst order shape derivative of J, we obtain the following necessary optimality conditions: ˆE Φ * p = P (E) , (24) Φ * p ∂E = H E . (25) Now, provided E is simple, (25) in particular implies that ˆ∂E Φ * p = ˆ∂E H E = 2π . ( 26 ) One could then de ne the vanishing derivative pre-certi cate as the solution of ( 24) and ( 26) with minimal norm. De nition 2.35 We call vanishing derivative pre-certi cate associated to a simple set E the function η v = Φ * p v with p v the unique solution of min p∈H p H s.t. p, ˆE ϕ = P (E) and p, ˆ∂E ϕ = 2π . (27) If the source condition holds (i.e. there exists η ∈ ImΦ * ∩ ∂TV(0) such that η ∈ ∂TV(1 E ), then p v is well-de ned as Problem ( 27) is feasible. Since any dual certi cate satis es ( 24) and ( 26), we have the following result. Proposition 2.36 If η v ∈ ∂TV(0), then η v is the minimal norm dual certi cate. The constraints in ( 27) can be rewritten as follows: p, f E = 1 , p, g E = 1 , with        f E def. = 1 P (E) ˆE ϕ , g E def. 
= 1 2π ˆ∂E ϕ . (28)
We hence have that p v satisfies p v = αf E + βg E with α def. = ( g E 2 -f E , g E ) / ( f E 2 g E 2 -f E , g E 2 ) and β def. = ( f E 2 -f E , g E ) / ( f E 2 g E 2 -f E , g E 2 ) .

[Figure 10: Φ * f E and Φ * g E , appearing in the solution of (27), for E = B(0, 1) and Φ the convolution with the Gaussian kernel with variance σ = 0.2.]

Remark 2.37 In some sense, the function λ E Φ * Φ1 E considered above can be seen as the pre-certificate associated to the single constraint (24). Imposing the second constraint (26) allows to build a pre-certificate which is a combination of the two functions f E and g E (instead of f E only for λ E Φ * Φ1 E ). Other constraints could also be considered, as every certificate satisfies (25), which is only exploited "in average" in (26). Considering [Duval and Peyré, 2015, Propositions 7 and 8], it would be interesting to investigate whether the non-degenerate source condition holds if and only if η v = η 0 and η 0 is non-degenerate, and also whether η v = η 0 is a necessary condition for support recovery.

Deconvolution of the unit disk. We now focus on the case where E = B(0, 1) and Φ is the convolution with the Gaussian kernel with variance σ. We provide in Figure 10 a plot of Φ * f E and Φ * g E , which are the two "basis functions" from which η v is built. From (5), we know that a way to show η v ∈ ∂TV(0) is to find a vector field z ∈ L ∞ (R 2 , R 2 ) such that div z = η v . Since f E and g E are radial, so is η v . It is hence natural to look for a radial vector field z (i.e. such that there exists z r : R + → R with z(x) = z r ( x ) x/ x for almost every x ∈ R 2 ). In this case we have div z = η v if and only if, for every r > 0: ηv (r) = 1 r ∂ ∂r (r z r )(r) ⇐⇒ r ηv (r) = ∂ ∂r (r z r )(r) ⇐⇒ z r (r) = 1 r ˆr 0 ηv (s) s ds . This shows one only needs to ensure that the mapping f v : R + → R , r → 1 r ˆr 0 ηv (s) s ds (29) satisfies f v ∞ ≤ 1 to show η v ∈ ∂TV(0). Figure 11 shows the graph of f v for several values of σ. This suggests that there exists σ 0 > 0 such that η v is a dual certificate (and hence the one with minimal norm) for every σ < σ 0 . It even seems that σ 0 ≥ 0.75.

Remark 2.38 One could wonder whether looking for a radial vector field is restrictive. In fact, if a vector field z is suitable, then so is the radial vector field z̄ defined by z̄(x) def. = z̄ r ( x ) x / x with z̄ r (r) def. = 1 2π ˆ2π 0 z r (r, θ) dθ , where z r denotes the radial component of z. Indeed, we have |z̄(r)| ≤ 1 for all r with equality if and only if z r (r, θ) = 1 for almost every θ or z r (r, θ) = -1 for almost every θ. Moreover η v (r) = 1 2π ˆ2π 0 η v (r) dθ = 1 r ∂ ∂r ( r 1 2π ˆ2π 0 z r (r, θ) dθ ) + 1 r 1 2π ˆ2π 0 ∂z θ ∂θ (r, θ) dθ = 1 r ∂ ∂r (r z̄ r ) = div z̄ .

Finally, we can investigate if the non-degenerate source condition holds. As explained in Section 3.3, it is sufficient to show that sup x∈∂E H 2 E (x) + ∂η 0 ∂ν E (x) < 0 . In our case H E is constant equal to one, and, since η 0 is radial, ∂η 0 ∂ν E is constant on ∂E. Proving that ∂η v ∂r (1) < -1 is hence sufficient. In Figure 12, we numerically compute this quantity and notice that it is always the case, even when η v ∉ ∂TV(0). This suggests that there exists σ 0 > 0 such that, for every σ < σ 0 , the non-degenerate source condition holds (and, from our experiments, it seems that σ 0 ≥ 0.75). Surprisingly, σ → ∂ηv ∂r (1) does not seem to be monotonous, even on [0, σ 0 ).

Beyond the radial case. A way to ensure η v ∈ ∂TV(0) in the general case is to solve the (generalized) Cheeger problem associated to η v .
This can be done using the numerical method presented in Section 2. Now, to ensure the non-degenerate source condition holds, one must also show that E is the unique (non-trivial) solution of the prescribed curvature problem associated to η v . In [START_REF] Buttazzo | On the selection of maximal Cheeger sets[END_REF], Carlier et al., 2009], a method is proposed to compute a "maximal" solution of the Cheeger problem, which is closely related to the prescribed curvature problem. It relies on the introduction of a small strictly convex penalization. We argue that these ideas could be adapted to numerically compute a "maximal" element of the face exposed by η v (i.e. a function whose chains are maximal in this face). If this element is proportional to 1 E , then 1 E is identi able and η v is the minimal norm certi cate. C Summary. In this part, we collected properties of the total variation unit ball and its (exposed) faces. The main interest of this analysis is a useful result, stating that elements of a given face can be decomposed using a common set of atoms. We also de ned the class of M -simple functions, which are the sparse objects naturally associated to the total variation. Then, we investigated the behaviour, under variations of the associated curvature functional, of solutions of the precribed curvature problem. This allowed us to prove that the dimension of the faces of the total variation unit ball is in some sense stable. Using this last result, we nally managed to prove that, under a so-called non-degenerate source condition, the jump set of an unknown M -simple function can be exactly recovered in a low noise regime. Towards grid-free numerical methods. Our support recovery result is yet another motivation for designing grid-free numerical methods allowing to solve (P λ (y)). Indeed, it shows that, under suitable assumptions, if y = Φu 0 + w with u 0 an M -simple function, then solutions of (P λ (y)) are themselves M -simple. Being able to compute approximate solutions with this structure is hence highly desirable. As explained in Section 2.3, traditional grid-based method are not designed with that goal in mind. They do not produce a continuous domain representation of the reconstructed images, which are often blurry. The aim of Part 3 is to propose a numerical solver which does not su er from these drawbacks, and accurately estimates the jump set of solutions. P 3 G In this part, we focus on the numerical resolution of (P λ (y)). We present a so-called grid-free algorithm, which does not rely on the introduction of a xed spatial discretization. It iteratively constructs an approximate solution built from the atoms promoted by the total variation, namely indicator functions of simple sets. We choose to numerically represent these sets using simple polygons. We provide a complete implementation of the proposed method in the following online repositories. • https://github.com/rpetit/pycheeger • https://github.com/rpetit/tvsfw Its performance is investigated in the last section, and comparisons with grid-based approaches are provided. This part is based on the following publication. Conclusion F W In [START_REF] Bredies | Inverse problems in spaces of measures[END_REF], Boyd et al., 2017, Denoyelle et al., 2019], variants of the conditional gradient algorithm, also known as Frank-Wolfe algorithm, were introduced to perform continuous domain sparse spikes recovery. 
In this section, we adapt this fruitful approach to our setting, and derive a modified Frank-Wolfe algorithm allowing to solve (P λ (y)) in a grid-free manner. In all the following, as we are interested in the numerical resolution of (P λ (y)), we only consider the case of finitely many measurements, i.e. H = R m for some m ∈ N * .

Frank-Wolfe algorithm

Frank-Wolfe algorithm (see Algorithm 1) allows to minimize a function f over a subset C of a Banach space. The objective f is assumed to be convex and differentiable, and the admissible set C convex and weakly compact. Each step of the algorithm consists in minimizing the first order expansion of f on C, and building the next iterate as a convex combination of the obtained point and the current iterate.

Algorithm 1: Frank-Wolfe algorithm
Data: objective f , domain C, starting point x [0] ∈ C
Result: point x *
1 while true do
2   find s [k] ∈ Argmin s∈C f (x [k] ) + df (x [k] ).(s -x [k] )
3   if df (x [k] ).(s [k] -x [k] ) = 0 then
4     output x * ← x [k] , which is optimal
5   else
6     γ [k] ← Argmin γ∈[0,1] f (x [k] + γ(s [k] -x [k] ))  // line search
7     x̃ [k+1] ← x [k] + γ [k] (s [k] -x [k] )  // tentative update
8     choose any x [k+1] such that f (x [k+1] ) ≤ f (x̃ [k+1] )  // final update
9   end
10 end

Sparse greedy updates. Our main interest for Frank-Wolfe algorithm lies in the following classical observation: since the linear minimization step (Line 2) consists in minimizing a linear form over the weakly compact convex set C, at least one of its solutions is an extreme point of C. Choosing such a point at every iteration, we obtain that x [k] is a convex combination of at most k extreme points of C, provided x [0] = 0. This is particularly interesting in situations where the extreme points of C are "atoms" and where one is looking for a solution which is a sparse combination of those.

Choice of the final update. An important feature of the algorithm is that, while the classical update (Line 8) is to take x [k+1] to be equal to x̃ [k+1] , all convergence guarantees are preserved if one chooses instead any x [k+1] ∈ C such that f (x [k+1] ) ≤ f (x̃ [k+1] ). As suggested by [Bredies and Pikkarainen, 2013, Boyd et al., 2017, Denoyelle et al., 2019] in the context of sparse spikes recovery, one can extensively make use of this flexibility to produce iterates that are as sparse as possible.

P

Frank-Wolfe algorithm can not be directly applied to (P λ (y)), as its objective F (u) def. = TV(u) + 1 2λ Φu -y 2 is not differentiable and its admissible set is unbounded. In this subsection, we derive an equivalent formulation of (P λ (y)) which fits into this framework, and then describe the resulting algorithm.

Epigraphical lift. To obtain a problem equivalent to (P λ (y)) with a differentiable objective and a weakly compact admissible set, we perform an epigraphical lift. This strategy is used in [Denoyelle et al., 2019] (following an idea of [Harchaoui et al., 2015, Section 2, Paragraph "penalized norm minimization"]).

Proposition 3.1 Problem (P λ (y)) is equivalent to inf (u,t)∈L 2 (R 2 )×R G(u, t) def. = 1 2λ Φu -y 2 + t s.t. TV(u) ≤ t ≤ 1 2λ y 2 , (Q λ (y)) i.e. both problems have the same value and: 1. if (u, t) is a solution of (Q λ (y)), then t = TV(u) and u is a solution of (P λ (y)), 2. if u is a solution of (P λ (y)), then (u, TV(u)) is a solution of (Q λ (y)).
Proof : This is a straightforward consequence of the following fact: if u is a solution of (P λ (y)), then we have that F (u) ≤ F (0), which yields TV(u) ≤ 1 2λ y 2 . The admissible set of (P λ (y)) can hence be restricted to functions u satisfying TV(u) ≤ 1 2λ y 2 without modifying its solutions. The admissible set of (Q λ (y)) is now weakly compact and convex. Moreover, its objective G is di erentiable with, for all (u, t) ∈ L 2 (R 2 ) × R: dG(u, t) : L 2 (R 2 ) × R → R (v, s) → ˆR2 1 λ Φ * (Φu -y) v + s . Linear minimization step. Applying Algorithm 1 to (Q λ (y)), the linear minimization step at iteration k (Line 2) writes inf (u,t)∈L 2 (R 2 )×R ˆR2 1 λ Φ * (Φu [k] -y) u + t s.t. TV(u) ≤ t ≤ 1 2λ y 2 . ( 30 ) As explained above, there always exists a solution of (30) which is an extreme point of the admissible set. These points are described by the following lemma, whose proof is given in Appendix E. Lemma 3.2 If t * > 0, the extreme points of C def. = {u ∈ L 2 (R 2 ) | TV(u) ≤ t ≤ t * } are: • (0, 0) , • ( t * 1 E /P (E), t * ) with ∈ {-1, 1}, E ⊂ R 2 simple and 0 < |E| < +∞ . With this result, we obtain that nding a solution of (30) among the extreme points of the admissible set is equivalent to solving sup E⊂R 2 ∈{-1,1} P (E) ˆE η [k] s.t. E simple, 0 < |E| < +∞ , (31) with η [k] def. = (-1/λ)Φ * (Φu [k] y). Indeed, a simple computation shows that, if the value of ( 31) is smaller than 1, then (0, 0) is optimal. Otherwise, de ning M def. = y 2 /(2λ), for any solution (E, ) of ( 31), we get that ( M 1 E /P (E), M ) is optimal. Properties of (31), which is reminiscent of the Cheeger problem (see the surveys [Parini, 2011, Leonardi, 2015]), are discussed in Section 2.1. Form of the iterates. Applying Algorithm 1 to (Q λ (y)) with (30) solved as described above, one obtains a sequence of iterates (u [k] , t [k] ) k≥0 whose form is described by the following lemma. Lemma 3.3 If (u [0] , t [0] ) = (0, 0), for every k ∈ N, we either have that (u [k] , t [k] ) = (0, 0) or that there exists N [k] ∈ N * , [k] ∈ {-1, 1} N [k] , a [k] ∈ (R * + ) N [k] and a collection E [k] = E [k] 1 , ..., E [k] N [k] of simple sets of positive nite measure such that (u [k] , t [k] ) =   N [k] i=1 [k] i a [k] i 1 E [k] i , N [k] i=1 a [k] i P (E [k] i )   . ( 32 ) Proof : We argue by induction. The result is already known for k = 0. Let us x k ∈ N and assume that (u [k] , t [k] ) has the right form. 1. If (u [k] , t [k] ) = (0, 0) then: • if (0, 0) solves ( 30) then (u [k+1] , t [k+1] ) = (0, 0), • otherwise there exists ∈ {-1, 1} and a simple set E with 0 < |E| < +∞ such that the couple ( M 1 E /P (E), M ) solves (30). We therefore obtain that (u [k] , t [k] ) is as in (32) with andE [k+1] = (E) . N [k+1] = 1, [k+1] = ( ), a [k+1] = (γ [k] M/P (E)) 2. If (u [k] , t [k] ) is as in (32) then: • if (0, 0) solves ( 30), we get that (u [k+1] , t [k+1] ) is as in (32) with N [k+1] = N [k] , [k+1] = [k] , a [k+1] = (1 -γ [k] ) a [k] and E [k+1] = E [k] . • Otherwise as above denoting ( M 1 E /P (E), M ) a solution of ( 30) we obtain the result with N [k+1] = N [k] + 1, [k+1] = ( [k] , ), a [k+1] = ((1 -γ [k] ) a [k] , γ [k] M/P (E)) and E [k+1] = (E [k] , E). "Fully-corrective" variant. Lemma 3.3 allows us to use a so-called fully corrective variant of Frank-Wolfe, meaning that instead of obtaining the next iterate as a convex combination of the new atom and the previous iterate (as in Line 7 of Algorithm 1), we nd a [k+1] by minimizing G N ( , a, E) de ned by k+1] , E [k+1] ) xed. 
This minimization amounts to solving a LASSO problem with a positivity constraint and a weighted 1 penalty (the weights being the perimeters of the sets in E [k+1] ). Indeed, given N ∈ N * , a collection of sets E 1 , ..., E N , and two vectors a ∈ (R + ) N , ∈ {-1, 1} N we have: G N ( , a, E) def. = G N i=1 i a i 1 E i , N i=1 a i P (E i ) with N = N [k] + 1 and ( , E) = ( [ G N ( , a, E) = 1 2λ Φ E a -y 2 + N i=1 P (E i ) a i with Φ E def. =   i ˆEi ϕ j 1≤i≤N 1≤j≤m   T ∈ R m×N . The main interest of Lemma 3.3 is that, considering (32), this choice of a [k+1] yields a value of the objective no higher than that obtained with the standard update, and hence does not break convergence guarantees. Modi ed Frank-Wolfe algorithm applied to (Q λ (y)). Dropping the dependence on the auxiliary variable t ∈ R we obtain Algorithm 2. As a result of the equivalence between (P λ (y)) and (Q λ (y)), this algorithm is a valid application of Algorithm 1 to (P λ (y)), in the sense that Proposition 3.5 below holds. Before studying convergence results, let us make a comment on the stopping condition. Algorithm 2: modi ed Frank-Wolfe algorithm applied to (P λ (y)) Data: measurement operator Φ, observations y, regularization parameter λ Result: function u * 1 u [0] ← 0 2 N [0] ← 0 3 while true do 4 η [k] ← -1 λ Φ * (Φu [k] -y) 5 (E * , * ) ← Argmax E⊂R 2 ∈{-1,1} ´E η [k] /P (E) s.t. E simple, 0 < |E| < +∞ 6 if ´E * η [k] ≤ P (E * ) then 7 output u * ← u [k] , which is optimal 8 else 9 E [k+1] ← (E [k] 1 , ..., E [k] N [k] , E * ) 10 [k+1] ← ( [k] 1 , ..., [k] N [k] , * ) 11 a [k+1] ← Argmin a∈(R + ) N [k] +1 1 2λ Φ [k+1] E [k+1] a -y 2 + N [k] +1 i=1 P (E [k+1] i ) a i 12 remove atoms with zero amplitude 13 N [k+1] ← number of atoms in E [k+1] 14 u [k+1] ← N [k+1] i=1 a [k+1] i [k+1] i 1 E [k+1] i 15 end 16 end Stopping condition. The stopping condition (Line 3 of Algorithm 1) is replaced in Algorithm 2 by sup P (E) ˆE η [k] , E ⊂ R 2 simple , 0 < |E| < +∞, ∈ {-1, 1} ≤ 1 , (33) which is equivalent to η [k] ∈ ∂TV(0). Since the optimality of a [k] at Line 11 always ensures ˆR2 η [k] u [k] = N [k] i=1 a [k] i P (E [k] i ) ≥ TV(u [k] ) , we obtain that u [k] solves (P λ (y)) as soon as (33) holds. C 1.3.1. Convergence in objective value Curvature constant. As pointed out in [Jaggi, 2013], in the convergence analysis of Frank-Wolfe algorithm applied to a function f , a measure of the "nonlinearity" of f , called the curvature constant, naturally appears. Let f be a convex di erentiable function and C a weakly compact convex set. The curvature constant C f of f with respect to C is de ned by: C f def. = sup x,s∈C, γ∈[0,1], y=x+γ(s-x) 2 γ 2 [f (y) -f (x) -df (x).(y -x)] . Proposition 3.4 The curvature constant C G of G with respect to the admissible set C of (Q λ (y)) satis es: C G ≤ 1 λ c 2 Φ y 2 λ 2 , where c 2 = 1/ √ 4π is the isoperimetric constant. Proof : For any u, v ∈ L 2 (R 2 ) and t, s ∈ R, we have: G(u, t) -G(v, s) -dG(u, t).(v, s) = 1 2λ Φ(u -v) 2 . We hence obtain that C G = sup u,v∈L 2 (R 2 ) 1 λ Φ(u -v) 2 s.t. TV(u) ≤ 1 2λ y 2 , TV(v) ≤ 1 2λ y 2 . ( 34 ) Now, if u, v are admissible for (34), using the isoperimetric inequality (3), we nally get: Φ(u -v) ≤ Φ u -v L 2 (R 2 ) ≤ Φ c 2 TV(u -v) ≤ c 2 Φ y 2 λ . Convergence in objective value. As already mentioned, Algorithm 2 is a valid application of Algorithm 1 to (P λ (y)), in the sense that the following property holds (see [Jaggi, 2013]): Proposition 3.5 Let (u [k] ) k≥0 be a sequence produced by Algorithm 2. 
Then for any solution u * of (P λ (y)), we have: ∀k ∈ N, F (u [k] ) -F (u * ) ≤ 2 C G k + 2 . Approximate linear minimization step. As discussed in [Jaggi, 2013], the linear minimization step, which consists in solving (30) or equivalently (31), can be carried out approximately. In fact if there exists δ > 0 such that, for every k, the couple (E * , * ) computed at Line 5 of Algorithm 2 is an [k] -minimizer of ( 31) with [k] = δ C G /(k + 2), then ∀k ∈ N * , F (u [k] ) -F (u * ) ≤ 2 C G k + 2 (1 + δ) . Convergence of the iterates Convergence of minimizing sequences. Now, let us state a general property of minimizing sequences of (P λ (y)), which hence applies to the sequence of iterates produced by Algorithm 2. Proposition 3.6 Let (u [k] ) k≥0 be a minimizing sequence for (P λ (y)). Then there exists a subsequence which converges strongly in L 1 loc (R 2 ) and weakly in L 2 (R 2 ) to a solution u * of (P λ (y)). Moreover, we have TV(u [k] ) → TV(u * ). Proof : We rst notice that since (u [k] ) k≥0 is a minimizing sequence then (TV(u [k] )) k≥0 is bounded, and (from the isoperimetric inequality) (u [k] ) k≥0 is therefore bounded in L 2 (R 2 ). There hence exists a subsequence (not relabeled) which converges weakly in L 2 (R 2 ) to u * ∈ L 2 (R 2 ). Since this subsequence satis es sup k∈N u [k] L 2 (R 2 ) + TV(u [k] ) < +∞ , from [Ambrosio et al., 2000, Theorem 3.23] we obtain that (up to the extraction of a further subsequence, still not relabeled) (u [k] ) k≥0 converges strongly in L 1 loc (R 2 ) (to a limit which is necessarily u * ). The fact u * solves (P λ (y)) then follows from the lower semicontinuity of the data delity term with respect to the weak L 2 (R 2 ) topology, and the lower semicontinuity of the total variation with respect to the strong L 1 loc (R 2 ) topology. Finally, since 1 2λ Φu * -y 2 ≤ lim inf k→+∞ 1 2λ Φu [k] -y 2 , TV(u * ) ≤ lim inf k→+∞ TV(u [k] ) , 1 2λ Φu * -y 2 + TV(u * ) = lim k→+∞ 1 2λ Φu [k] -y 2 + TV(u [k] ) , we obtain TV(u [k] ) → TV(u * ). As explained earlier, the fact TV(u [k] ) → TV(u * ) and the strong L 1 loc convergence of (u [k] ) k≥0 towards u * also imply Du [k] * Du * . Properties of sets appearing in Algorithm 2. An important observation is that, for every k ∈ N, the set E * introduced at Line 5 of Algorithm 2 is a minimizer of the prescribed curvature problem associated to λ [k] η [k] with 1/λ [k] the value of (31) 1 . Moreover, if (u [k] ) is a subsequence as in Proposition 3.6, we have that Φu [k] → Φu * and hence η [k] → η * strongly in L 2 (R 2 ) with η * = (-1/λ)Φ * (Φuy), which also yields λ [k] → 1 2 . This can be exploited by using Lemma 1.11 to obtain "uniform" properties of sets appearing in Algorithm 2, which ultimately yields the stronger convergence results presented below. 1 The value of ( 31) is nonzero unless η [k] = 0, which occurs if and only if y = 0 (in which case u * = 0 is the unique solution of (P λ (y))). To see this, one simply needs to consider a Lebesgue point of η [k] at which the function is nonzero, and to show the objective is strictly positive for su ciently small balls. 2 To see this, it is su cient to note that ´E η 1 P (E) - ´E η 2 P (E) ≤ c2 η1 -η2 L 2 (R 2 ) for every E with 0 < |E| < +∞ and P (E) < +∞. Additional properties of sequences produced by Algorithm 2. If (u [k] ) k≥0 is a subsequence as in Proposition 3.6 then (λ [k] η [k] ) k≥0 converges strongly in L 2 (R 2 ), and its terms all belong to ∂TV(0). Lemma 1.11 hence yields the existence of R > 0 such that Supp(u [k] ) ⊂ B(0, R) for every k ∈ N. 
We consequently obtain the following strict BV convergence result. Proposition 3.7 Let (u [k] ) k≥0 be a sequence produced by Algorithm 2. Then, up to the extraction of a subsequence, we have that (u [k] ) k≥0 converges strictly in BV(R 2 ) to a solution u * of (P λ (y)). This in turn yields a convergence result for the level sets of the iterates, which is given below (we remind the reader that the inner limit of a sequence of sets is de ned in De nition 1.9). Corollary 3.8 Let (u [k] ) n≥0 be a subsequence as in Proposition 3.7. Then for almost every t ∈ R, we have lim n→+∞ |U (t) k U (t) * | = 0 and ∂U (t) * ⊆ lim inf n→+∞ ∂U (t) k . Proof : The strong convergence of (u [k] ) k≥0 towards u * in L 1 (R 2 ) and Proposition D.1 yield: lim n→+∞ |U (t) k U (t) * | = 0 for almost every t ∈ R . We now x such t ∈ R and let x ∈ ∂U (t) * . We want to show that x ∈ lim inf n→+∞ ∂U (t) k , which is equivalent to lim sup k→+∞ dist x, ∂U (t) k = 0 . By contradiction, if the last identity does not hold, we have the existence of r > 0 and of ϕ such that ∀k ∈ N, B(x, r) ∩ ∂U (t) ϕ(k) = ∅ . Hence for all k, we either have B(x, r) ⊂ U (t) ϕ(k) or B(x, r) ⊂ R 2 \ U (t) ϕ(k) . If B(x, r) ⊂ U (t) ϕ(k) for a given k, using that η * ∈ ∂TV(u * ) and hence that U (t) * satis es point 4 of Lemma 1.11 we obtain U (t) ϕ(k) U (t) * ≥ U (t) ϕ(k) \ U (t) * ≥ B(x, r) \ U (t) * ≥ C |B(x, r)| . We can in the same way show k are obtained as a nite number of unions and intersections of the sets in E [k] and their complements. Although the latter sets have the desired property, it is not stable by such operations, as Figure 13 shows. U (t) ϕ(k) U (t) * ≥ C |B(x, r)| if B(x, r) ⊂ R 2 \ U (t) ϕ(k) , S 1.4.1. Presentation Several works [START_REF] Bredies | Inverse problems in spaces of measures[END_REF], Boyd et al., 2017, Denoyelle et al., 2019] have advocated for the use of a special nal update, which helps identify the sparse structure of the sought-after signal. Loosely speaking, it would amount in our case to running, at the end of iteration k -1, the gradient ow of the mapping (a, E) → G( [k] , a, E) (35) initialized with (a [k] , E [k] ), so as to nd a set of parameters at which the objective is smaller. Formally, this would correspond 1 to nding a curve t → (a i (t), E i (t)) i=1,...,N [k] such that for all t: ∀i ∈ {1, ..., N [k] },        a i (t) = -P (E i (t)) - [k] i ˆEi (t) η(t) , V i (t) = -a i (t) H E i (t) - [k] i η(t) , (36) 1 The formulas in (36) can be formally obtained by using the notion of shape derivative, see [Henrot and Pierre, 2018, Chapter 5]. where V i (t) denotes the normal velocity of the boundary of E i (t) and η(t) = - 1 λ Φ * (Φu(t) -y) , u(t) = N [k] i=1 a i (t) [k] i 1 E i (t) . The study of this gradient ow (existence, uniqueness) seems to be challenging, and we did not investigate these questions. For our purpose, it is enough to introduce the sliding step as a local descent on the mapping de ned in (35) initialized with (a [k] , E [k] ). We however need to ensure this step does not increase the value of the objective, and we hence ask to nd (a, E) = (a i , E i ) 1≤i≤N [k] such that E i is simple for all i and G N [k] [k] , a, E ≤ G N [k] [k] , a [k] , E [k] . ( 37 ) The resulting algorithm is Algorithm 3. The introduction of Line 15, which is discussed in the next paragraph, ensures that all convergence guarantees derived for Algorithm 2 remain valid. 
Algorithm 3: modified Frank-Wolfe algorithm applied to (P λ (y)) (with sliding)
Data: measurement operator Φ, observations y, regularization parameter λ
Result: function u *
1 u [0] ← 0
2 N [0] ← 0
3 while true do
4   η [k] ← -1 λ Φ * (Φu [k] -y)
5   (E * , ε * ) ← Argmax E⊂R 2 , ε∈{-1,1} ε ´E η [k] /P (E) s.t. E simple, 0 < |E| < +∞
6   if ε * ´E * η [k] ≤ P (E * ) then
7     output u * ← u [k] , which is optimal
8   else
9     E [k+1] ← (E [k] 1 , ..., E [k] N [k] , E * )
10    ε [k+1] ← (ε [k] 1 , ..., ε [k] N [k] , ε * )
11    a [k+1] ← Argmin a∈(R + ) N [k] +1 1 2λ Φ ε [k+1] E [k+1] a -y 2 H + N [k] +1 i=1 P (E [k+1] i ) a i
12    remove atoms with zero amplitude
13    N [k+1] ← number of atoms in E [k+1]
14    perform a local descent on (a, E) → G N [k+1] (ε [k+1] , a, E) initialized with (a [k+1] , E [k+1] )
15    repeat the operations of Lines 11-13
16    u [k+1] ← N [k+1] i=1 a [k+1] i ε [k+1] i 1 E [k+1] i
17  end
18 end

1.4.2. Influence on convergence

The version of the sliding step we use was introduced in [Denoyelle et al., 2019]. In practice, it allows to considerably improve the convergence speed of the algorithm. It also produces sparser solutions: if the solution is a combination of a few indicator functions, removing the sliding step typically produces iterates made of a much larger number of atoms. In fact, it seems that the majority of these atoms correct the crude approximations of the support of the solution made during the first iterations. In [Denoyelle et al., 2019], the introduction of the sliding step allowed the authors to derive improved convergence guarantees (i.e. finite time convergence) in the context of sparse spikes recovery. Their proof relies on the fact that, at the end of every sliding step, a "critical point" of the objective is reached. However, the existence issues mentioned in Section 1.4.1 make the adaptation of these results to our setting difficult. A reasonable definition of "critical point" in our context could be as follows.

Definition 3.9 (Critical point of G N ) Let N ∈ N * , ε ∈ {-1, 1} N , a ∈ (R * + ) N and E 1 , ..., E N be simple subsets of R 2 with positive and finite measure. We say that (ε i , a i , E i ) i=1,...,N is a critical point of the mapping G N if ∀i ∈ {1, ..., N }, P (E i ) = ε i ˆEi η and H E i = ε i η , (38) where η def. = - 1 λ Φ * (Φu -y) and u def. = N i=1 a i ε i 1 E i .

With this definition, reaching a critical point at the end of the sliding step implies that all sets in E [k] have the same distributional curvature (up to a sign). On the contrary, without sliding, sets are never modified and their curvature depends on the iteration during which they were introduced. We stress that if, for a given iteration, a critical point is reached at the end of the sliding step, then Line 15 can be skipped, since the first equality in (38) ensures that a [k+1] minimizes a → G N [k+1] (ε [k+1] , a, E [k+1] ).

If a critical point is reached, the fact that the sets in E [k] have the same distributional curvature can be exploited to obtain "uniform" density estimates for the level sets of u [k] , in the spirit of [Maggi, 2012, Corollary 17.18]. It is then natural to wonder whether this could be used to prove lim sup n→+∞ ∂U (t) n ⊆ ∂U (t) * , (39) and hence the convergence of (∂U (t) k ) k≥0 towards ∂U (t) * .
A major obstacle towards this result is that, although Lemma 1.11 provides a uniform upper bound on the perimeter of sets appearing in the algorithm, to our knowledge, it does not seem possible to derive such a bound for the perimeter of the level sets, which prevents one from using the potential weak-* convergence of D1 U (t) k towards D1 U (t) * . Link with conic particle gradient descent. Assuming for simplicity that [k] = (1, ..., 1) and, for every i = j, that H 1 (∂ * E i ∩ ∂ * E j ) = 0, we obtain F   N [k] i=1 a i 1 E i   = G N [k] ( [k] , a, E) , and hence see that the sliding step amounts to performing a local descent on the mapping (a, E) → F   N [k] i=1 a i 1 E i   . In [Chizat, 2021, Chizat, 2022], a seemingly related task is considered. Given Θ a parameter space, the author investigates the minimization of J : M + (Θ) → R µ → 1 2λ Φµ -y 2 + µ(Θ) using conic particle gradient algorithms, which consist in performing gradient-based optimization on (a, x) → J N i=1 h(a i )δ θ i , (40) with h : R + → R + a smooth increasing bijection. The main nding of these works is that, for h(a) = a2 and for a speci c choice of metric on R + × Θ (plus a few assumptions), the gradient ow of (40) converges to a minimizer of J. An interesting avenue for future research could be to study how this analysis translates to our setting. On the practical side, one could investigate whether performing multiplicative (rather than additive) updates on the amplitudes a improves the convergence speed of the local descent, as suggested in [Chizat, 2021]. P C To implement Algorithm 3, one needs two oracles to carry out the operations of Lines 5 and 141 . In this section, we focus on the rst task (the second is the subject of Section 3.1). Our aim is hence to design a numerical method to approximately solve sup E⊂R 2 J(E) def. = 1 P (E) ˆE η s.t. E simple, 0 < |E| < +∞ , (41) which is called a generalized Cheeger problem2 . In section 2.1, we explain its connection with the classical Cheeger problem and provide properties of its solutions, which are called generalized Cheeger sets. We also review existing numerical methods for solving it, and motivate our choice to introduce a slightly di erent procedure. Our approach consists in choosing a (possibly large) integer n ≥ 3, and looking for an optimal set among simple polygons with at most n sides. The existence of such sets is proved in Section 2.2. To approximate this optimal polygon, we optimize the vertices of an initial polygon using a rst order algorithm. The initialization is performed by solving a relaxation of (41) on a xed grid. The details of this numerical method are presented in Sections 2.3 and 2.4. Finally, in Section 2.5, we study how these optimal polygons compare to solutions of (41) in a speci c setting, where η is assumed to be radial. A C Related problems. We rst stress that ( 41) is reminiscent of the Cheeger problem, which, given an open bounded set Ω ⊂ R d with Lipschitz boundary, consists in nding a subset E of Ω minimizing P (E)/|E| (see the surveys [Parini, 2011, Leonardi, 2015]). Many generalizations of the Cheeger problem have been considered. 
Let us mention [START_REF] Ionescu | Generalized Cheeger sets related to landslides[END_REF], in which the usual perimeter and volume are replaced by weighted versions (the integral on ∂ * E and E of some nonnegative weight functions), and [START_REF] Caselles | Anisotropic Cheeger Sets and Applications[END_REF][START_REF] Brasco | The fractional Cheeger problem[END_REF], in which the case of anisotropic and fractional perimeters are respectively investigated. Our setting. The problem we consider slightly di ers from the ones mentioned above. First, the emphasis is not on the domain Ω anymore (( 41) is solved on the whole plane), but on the weight function η. To put it another way, maximizing J on the whole plane or inside some domain Ω is equivalent as soon as η 2 concentrates1 in a compact subset of Ω. Moreover, contrary to [START_REF] Ionescu | Generalized Cheeger sets related to landslides[END_REF], the function η is not assumed to be nonnegative, and is taken in L 2 (R 2 ) (instead of L ∞ (Ω) in this last work). Existence of solutions. Two strategies can be considered to prove the existence of maximizers in (41). The rst is the one hinted in Section 1.2. It consists in proving the existence of maximizers for sup u∈L 2 (R 2 ) ˆR2 ηu s.t. TV(u) ≤ 1 , (42) and then showing at least one solution is the indicator function of a simple set (using the form of the extreme points of {TV ≤ 1} and the Krein-Milman theorem, or a result like [Carlier and Comte, 2007, Theorem 2]). The second strategy relies on purely geometric arguments, in the spirit of [Maggi, 2012, Section 12]. One can indeed prove a uniform lower bound on the perimeter of minimizing sequences, and obtain local convergence (up to extraction) towards some limit set, whose optimality can subsequently be shown. Generic uniqueness. As explained in [Buttazzo et al., 2007, Proposition 4.1], the solution of ( 41) is in some sense generically unique. Indeed, denoting V (η) the value of (42), we obtain that V is a continuous convex function on L 2 (R 2 ). A theorem of Mazur [Phelps, 1993, Theorem 1.20] then shows V is Gâteaux di erentiable on a dense G δ subset of L 2 (R 2 ). Since ∂V (η) is precisely the solution set of (41), generic uniqueness follows. Existing numerical methods. In [START_REF] Carlier | Approximation of maximal Cheeger sets by projection[END_REF] a method is proposed to approximate the so-called maximal Cheeger set. It covers the case of a weighted area term and of weighted and anisotropic perimeters. The proposed approach is to solve a discretized version of (42), where the maximization is performed over the set of piecewise constant functions on some xed grid. A method which is very similar to ours is introduced in [START_REF] Ionescu | Boundary variation method for the generalized Cheeger problem[END_REF]. It consists in iteratively deforming an initial domain by computing a shape gradient at each iteration. We discovered the existence of this work during the writing of this manuscript. E C The aim of this section is to prove the existence of polygonal generalized Cheeger sets, i.e. maximizers of J among simple polygons with a given number of sides. In fact, we prove a slightly stronger result, namely the existence of maximizers of a relaxed energy which coincides with J on simple polygons, and the existence of a simple polygon among these maximizers. Existence proofs in polygonal shape optimization problems. 
To prove the existence of optimizers for shape optimization problems in the class of polygons, a typical strategy is to use a compactness result for the Hausdorff convergence of open sets. One constructs a minimizing sequence whose terms are open polygons included in a common compact set, and obtains its convergence (up to extraction) in the Hausdorff sense to a "generalized polygon" (see for example [Henrot, 2006, Theorem 3.3.1] for the minimization of the first Dirichlet eigenvalue of the Laplacian, or [Bucur and Fragalà, 2016, Proposition 9] for the minimization of the Cheeger constant). At the time we considered the problem we are discussing here, we were not aware of this literature, and followed a slightly different (but essentially similar) strategy. As explained below, our approach amounts to considering the objective as a function of the vertices of the input polygon.

Notations. In the following, we fix an integer n ≥ 3 and denote X n = { x ∈ R n×2 | [x 1 , x 2 ], ..., [x n , x 1 ] is simple } . We recall that a polygonal curve is said to be simple if non-adjacent sides do not intersect. If x ∈ X n , then ∪ n i=1 [x i , x i+1 ] is a Jordan curve 1 . It hence divides the plane in two regions, one of which is bounded. We denote this region by E x (it is hence a simple polygon). With this notation the set P n of simple polygons with at most n sides is given by P n = {E x , x ∈ X n } .
1 If i > n we define x i def. = x i mod n , i.e. x n+1 = x 1 .

Relaxed perimeter and weighted area. We first begin by defining relaxed versions of the perimeter and the (weighted) area. To be able to deal with polygons with a number of vertices smaller than n, which will be useful in the following, we define for all m ≥ 2 and x ∈ R m×2 the following quantities: P (x) def. = m i=1 x i+1 -x i and A(x) def. = ˆR2 η χ x , where χ x (y) denotes the index (or winding number) of any parametrization of the oriented polygonal curve [x 1 , x 2 ], ..., [x m , x 1 ] around y ∈ R 2 (see for instance [Rudin, 1986, Theorem 10.10]). In particular, for every x ∈ X m (i.e. for every x ∈ R m×2 defining a simple polygon), we have χ x = ±1 Ex and P (x) = P (E x ) and |A(x)| = | ˆEx η | , and hence, as soon as P (x) > 0: J(E x ) = |A(x)| / P (x) . This naturally leads us to define Y m def. = { x ∈ R m×2 | P (x) > 0 } and to define, abusing notation, J(x) def. = |A(x)|/P (x) for every x ∈ Y m .

Properties of the index. As mentioned above, if E x is simple then χ x = ±1 Ex . In the general case χ x is constant on each connected component of R 2 \ Γ x with Γ x def. = ∪ m i=1 [x i , x i+1 ]. It takes values in {-m, ..., m} and is equal to zero on the only unbounded connected component of R 2 \ Γ x . We also have ∂ supp(χ x ) ⊂ Γ x . Moreover χ x has bounded variation and, for H 1 -almost every y ∈ Γ x , there exists u + Γx (y), u - Γx (y) in {-m, ..., m} such that Dχ x = (u + Γx -u - Γx ) ν Γx H 1 Γ x .

Positivity of the problem value. Now, we define α def. = sup {J(x), x ∈ Y n }. If η = 0, then the existence of maximizers is trivial. Otherwise, there exists a Lebesgue point x 0 of η at which η is non-zero. Now the family of regular n-gons inscribed in any circle centered at x 0 has bounded eccentricity. Hence, if x n,r defines a regular n-gon inscribed in a circle of radius r centered at x 0 , the Lebesgue differentiation theorem 1 ensures that lim r→0 + | ´Exn,r η | / |E xn,r | > 0 , and the fact that α > 0 follows.
1 For details on the Lebesgue differentiation theorem where balls are replaced by a family of sets with bounded eccentricity, see [Stein, 1993, Chapter 2, Paragraph 3.1].
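To make these definitions concrete, here is a minimal numerical sketch (our own illustration, not the pycheeger code) evaluating P (x), A(x) and J(x) from a vertex array x. The weighted area is computed by summing signed integrals of η over the triangles (a, x i , x i+1 ) — the decomposition made rigorous in Lemma 3.11 below — with a low-order quadrature on each triangle. The quadrature rule, the choice of base point a, and the assumption that η is a vectorized function of (k, 2) arrays of points are ours.

```python
import numpy as np

# Degree-2 quadrature on the unit triangle T1 (edge midpoints; weights sum to |T1| = 1/2).
_NODES = np.array([[0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
_WEIGHTS = np.full(3, 1.0 / 6.0)


def perimeter(x):
    """P(x): length of the closed polygonal curve [x_1, x_2], ..., [x_n, x_1]."""
    return np.sum(np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1))


def weighted_area(x, eta, a=None):
    """A(x) = ∫ η χ_x, via signed triangle integrals (cf. Lemma 3.11), one quadrature per triangle."""
    if a is None:
        a = x.mean(axis=0)  # any base point works
    n, total = len(x), 0.0
    for i in range(n):
        m = np.column_stack((x[i] - a, x[(i + 1) % n] - a))  # columns x_i - a and x_{i+1} - a
        nodes = a + _NODES @ m.T                             # quadrature nodes mapped to the triangle
        total += np.linalg.det(m) * np.dot(_WEIGHTS, eta(nodes))
    return total


def objective(x, eta):
    """J(x) = |A(x)| / P(x)."""
    return abs(weighted_area(x, eta)) / perimeter(x)


# Example: square of side 2 centered at the origin, η a centered Gaussian bump.
eta = lambda p: np.exp(-np.sum(p**2, axis=1))
square = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
print(objective(square, eta))
```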
Lemma 3.10 Let C > 0. There exists R > 0 and c > 0 such that, for every x ∈ Y n J(x) ≥ C =⇒ P (x) ≥ c and x i ≤ R for all i. Proof : The proof is similar to that of Lemma 1.11. Upper bound on the perimeter: the integrability of η 2 yields that for every > 0 there exists R 1 > 0 such that ˆR2 \B(0,R1) η 2 ≤ 2 . ( 43 ) Let > 0 and R 1 > 0 such that (43) holds. We have P (x) ≤ 1 C |A(x)| ≤ 1 C ˆR2 ∩B(0,R) η χ x + ˆR2 \B(0,R) η χ x ≤ 1 C η L 2 χ x L ∞ |B(0, R)| + χ x L 2 ≤ 1 C η L 2 n |B(0, R)| + 1 √ c 2 |Dχ x |(R 2 ) ≤ 1 C η L 2 n |B(0, R)| + 2n √ c 2 P (x) . Now, taking def. = C 4c 2 n and c = 2n C η L 2 |B(0, R)| , we nally get that P (x) ≤ c . Inclusion in a ball: we take = 1 4c2n and x R 2 > 0 such that ´R2 \B(0,R2) η 2 ≤ 2 . Let us show that supp(χ x ) ∩ B(0, R 2 ) = ∅ . By contradiction, if supp(χ x ) ∩ B(0, R 2 ) = ∅, we would have: P (x) ≤ 1 C |A(x)| = 1 C ˆR2 \B(0,R2) η χ x ≤ ˆR2 \B(0,R2) η 2 χ x L 2 ≤ c 2 |Dχ x |(R 2 ) ≤ 2n c 2 P (x) . Dividing by P (x) > 0 yields a contradiction. Now since ∂ supp(χ x ) ⊂ Γ x , we have diam(supp(χ x )) ≤ P (x) ≤ c which shows supp(χ x ) ⊂ B(0, R) with R def. = c + R 2 . This in turn implies that x i ≤ R for all i. Lower bound on the perimeter: the integrability of η 2 shows that, for every > 0, there exists δ > 0 such that ∀E ⊂ R 2 , |E| ≤ δ =⇒ ˆE η 2 ≤ 2 . Taking def. = C/(2 c 2 ), we obtain that if |supp(χ x )| ≤ δ P (x) ≤ 1 C |A(x)| = 1 C ˆsupp(χx) η ≤ 1 C ˆsupp(χx) η 2 |supp(χ x )| ≤ c 2 C P (supp(χ x )) ≤ c 2 C P (x) , the last inequality holding because ∂ supp(χ x ) ⊂ Γ x . We get a contradiction since P (x) is positive. Applying Lemma 3.10 with e.g. C = α/2, and de ning Y n def. = x ∈ R n×2 P (x) ≥ c and x i ≤ R for all i , we see that any maximizer of J over Y n (if it exists) is also a maximizer of J over Y n , and conversely. Lemma 3.11 Let x ∈ R n×2 . Then, for every a ∈ R 2 , denoting (c 1 c 2 ) the 2 × 2 matrix whose columns are c 1 , c 2 ∈ R 2 , we have: A(x) = n i=1 sign(det(x i -a x i+1 -a)) ˆax i x i+1 η = n i=1 det(x i -a x i+1 -a) ˆT1 η((x i -a x i+1 -a) y) dy , where ax i x i+1 denotes the triangle with vertices a, x i , x i+1 and T 1 def. = (α, β) ∈ (R + ) 2 α + β ≤ 1 is the unit triangle. Proof : Let us show that for all a ∈ R 2 we have χ x = n i=1 sign(det(x i -a x i+1 -a)) 1 axixi+1 (44) almost everywhere. We have that y ∈ R 2 is in the (open) triangle ax i x i+1 if and only if the ray issued from y directed by y - a intersects ]x i , x i+1 [. Moreover, if y is in this triangle, then det(x i -a x i+1 -a)) is positive if and only if the triangle yx i x i+1 is oriented counterclockwise. This shows that, if R 2 \ ∪ n i=1 [x i , x i+1 ] does not belong to any of the segments [a, x i ], evaluating the right hand side of (44) at y amounts to computing the winding number χ x (y) by applying the ray-crossing algorithm described in [START_REF] Hormann | The point in polygon problem for arbitrary polygons[END_REF]. This in particular means that (44) holds almost everywhere, and the result follows. From Lemma 3.11, we get that A is continuous on R n×2 . This is also the case of P . Now Y n is compact and included in Y n , hence the existence of maximizers of J over Y n , which in turn implies the existence of maximizers of J over Y n . Now, one can see that in each case we have P (x) = P (y) + P (z) and χ x = χ y + χ z almost everywhere, which in turn gives that A(x) = A(y) + A(z). 
We hence get that P (y) = 0 or P (z) = 0, and hence J(x) = J(y) or J(x) = J(z), or that P (y) > 0 and P (z) > 0, which yields Hence J(x) is smaller than a convex combination of J(y) and J(z), which gives that it is smaller than J(y) or J(z). This shows that y or z is suitable. We can now prove our nal result, i.e. that there exists x * ∈ X n such that ∀x ∈ Y n , J(x * ) ≥ J(x) . Indeed, repeatedly applying the above lemma starting with a maximizer x * of J over Y n , we either have that there exists m with 3 ≤ m ≤ n and x * ∈ X m such that J(x * ) = J(x * ), or that there exists y ∈ Y 2 such that J(x * ) ≤ J(y), which is impossible since in that case J(y) = 0 and J(x * ) = α > 0. We hence have x * ∈ X m such that ∀x ∈ Y n , J(x * ) = J(x * ) ≥ J(x) . We can nally build x * ∈ X n such that J(x * ) = J(x * ) by adding dummy vertices to x * , which allows to conclude. F In this section, we detail the initialization of our rst order method. We proceed almost exactly as in [START_REF] Carlier | Approximation of maximal Cheeger sets by projection[END_REF], i.e. we solve a discrete version of (42), in which the minimization is performed over the set of piecewise constant functions on a xed grid. The only di erence with this last work is that we do not look for some maximal solution of (42), and hence do not need the strictly concave penalization introduced in [START_REF] Buttazzo | On the selection of maximal Cheeger sets[END_REF] 1 . Discrete problem. Since every solution of (42) has its support included in some ball 2 , we can solve (42) in [-R, R] 2 (with Dirichlet boundary conditions) for a su ciently large R > 0. Let N be a positive integer and h def. = 2R/N . We denote E h the set of N by N matrices. For every matrix u = (u i,j ) (i,j)∈[1,N ] 2 ∈ E h we de ne ∂ h x u i,j def. = u i+1,j -u i,j , ∂ h y u i,j def. = u i,j+1 -u i,j , (45) for all (i, j) ∈ [0, N ] 2 , with the convention u i,j = 0 if either i or j is in {0, N + 1}. The isotropic discrete total variation is then de ned by TV h (u) def. = h N i=0 N j=0 ||∇ h u i,j || 2 = h ∇ h u 2,1 with ∇ h u i,j def. = ∂ h x u i,j , ∂ h y u i,j . We then solve the following discretized version of (42) for increasingly small values of h > 0 min u∈E h h 2 η h , u s.t. TV h (u) ≤ 1 , (46) where η h = 1 h 2 ´Ch i,j η (i,j)∈[1,N ] 2 and (C h i,j ) (i,j)∈[1,N ] 2 is a partition of [-R, R] 2 composed of squares of equal size, i.e. C h i,j def. = [-R + (i -1)h, -R + ih] × [-R + (j -1)h, -R + jh] . For convenience reasons, we also use the above expression to de ne C h i,j if i or j belongs to {0, N + 1}. Optimization algorithm. To solve ( 46), we propose to use the primal-dual algorithm introduced in [START_REF] Chambolle | A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging[END_REF]: we take (τ, σ) such that τ σ D 2 2 < 1 with D def. = h∇ h and de ne      φ n+1 = prox σ • 2,∞ (φ n + σ Dū n ) , u n+1 = (u n -τ D * φ n+1 ) -τ h 2 ηh , ūn+1 = 2 u n+1 -u n , (47) where prox σ • 2,∞ is given by: prox σ • 2,∞ (φ) = φ -σ proj { • 2,1 ≤1} φ σ . The projection onto the (2, 1)-unit ball can be computed e ciently (see [Condat, 2016]). Convergence as h → 0. The following proposition shows that, when the grid becomes ner, solutions of (46) converge to a solution of (42). Its proof is almost the same as the one of [Carlier et al., 2009, Theorem 4.1]. 
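For concreteness, a self-contained NumPy sketch of the iteration (47) is given below. The discrete gradient handles the boundary differences in a slightly simplified way compared to (45), and the step sizes, number of iterations and the Gaussian weight of the example are illustrative choices; this is a schematic version of the scheme, not the implementation used in this work.

```python
import numpy as np

def grad(u):
    """Forward differences (zero outside the grid); returns an (N, N, 2) field."""
    g = np.zeros(u.shape + (2,))
    g[:-1, :, 0] = u[1:, :] - u[:-1, :]
    g[:, :-1, 1] = u[:, 1:] - u[:, :-1]
    return g

def div(p):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    d = np.zeros(p.shape[:2])
    d[:-1, :] += p[:-1, :, 0]
    d[1:, :] -= p[:-1, :, 0]
    d[:, :-1] += p[:, :-1, 1]
    d[:, 1:] -= p[:, :-1, 1]
    return d

def proj_l21_ball(p, radius=1.0):
    """Projection onto {q : sum_ij |q_ij|_2 <= radius}: group soft-thresholding, the
    threshold coming from the sort-based l1-ball projection of the pointwise norms."""
    norms = np.linalg.norm(p, axis=2)
    if norms.sum() <= radius:
        return p.copy()
    v = np.sort(norms.ravel())[::-1]
    cssv = np.cumsum(v) - radius
    rho = np.nonzero(v * np.arange(1, v.size + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    shrunk = np.maximum(norms - theta, 0.0)
    scale = np.divide(shrunk, norms, out=np.zeros_like(norms), where=norms > 0)
    return p * scale[:, :, None]

def primal_dual_cheeger(eta_h, h, n_iter=3000):
    """Iteration (47): minimize h^2 <eta_h, u> over the discrete total variation unit ball."""
    N = eta_h.shape[0]
    L = h * np.sqrt(8.0)                          # bound on the norm of D = h * grad
    tau = sigma = 0.9 / L                         # tau * sigma * |D|^2 < 1
    u = np.zeros((N, N)); u_bar = u.copy(); phi = np.zeros((N, N, 2))
    for _ in range(n_iter):
        q = phi + sigma * h * grad(u_bar)
        phi = q - sigma * proj_l21_ball(q / sigma)        # prox of sigma * ||.||_{2,inf}
        u_new = u + tau * h * div(phi) - tau * h ** 2 * eta_h
        u_bar, u = 2 * u_new - u, u_new
    return u

# Example: a Gaussian weight on [-1, 1]^2 discretized on an 80 x 80 grid.
N, R = 80, 1.0
h = 2 * R / N
c = np.linspace(-R, R, N, endpoint=False) + h / 2
X, Y = np.meshgrid(c, c, indexing="ij")
u_h = primal_dual_cheeger(np.exp(-(X ** 2 + Y ** 2) / (2 * 0.3 ** 2)), h)
```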
Since the latter however gives a slightly di erent result about the minimization of a quadratic objective (linear in our case) on the total variation unit ball, we decided to include it for the sake of completeness. Proposition 3.13 Let u h be the piecewise constant function on (C h i,j ) (i,j)∈[1,N ] 2 , extended to 0 outside [-R, R ] 2 associated to a solution of (46). Then there exists a (not relabeled) subsequence converging strongly in L 1 (R 2 ) and weakly in L 2 (R 2 ) to a solution u * of (42) when h → 0. Moreover, we have Du h * Du. Proof : First, let us stress that for any function v that is piecewise constant on (C i,j ) (i,j)∈[1,N ] 2 and that is equal to 0 outside [-R, R] 2 , we have TV(v) = h ∇ h v 1,1 where by abuse of notation ∇ h v is given by (45) with v i,j the value of v in C i,j . Hence TV h (u h ) ≤ 1 for all h implies that TV(u h ) (and hence u h L 2 ) is uniformly bounded in h. There hence exists a (not relabeled) subsequence that converges strongly in L 1 loc (R 2 ) and weakly in L 2 (R 2 ) to a function u, with moreover Du h * Du. Let us now take φ = (φ (1) , φ (2) ) ∈ C ∞ c (R 2 , R 2 ) such that ||φ|| ∞ ≤ 1. The weak-* convergence of the gradients give us that ˆR2 φ • dDu = lim h→0 ˆR2 φ • dDu h = lim h→0 N i=0 N j=0 ´Ch i,j ∩C h i+1,j φ (1) dH 1 ´Ch i,j ∩C h i,j+1 φ (2) dH 1 • ∇ h u h i,j . One can moreover show there exists C > 0 such that for h small enough and all (i, j) we have: ˆCh i,j ∩C h i+1,j φ (1) dH 1 -h φ (1) (x h i+1,j+1 ) ≤ Ch 2 , ˆCh i,j ∩C h i,j+1 φ (2) dH 1 -h φ (2) (x h i+1,j+1 ) ≤ Ch 2 , with x i,j def. = (-R + i h, -R + j h). We use the above inequalities and the fact φ(x) ≤ 1 for all x to obtain the existence of C > 0 such that for h small enough and for all (i, j) we have: ´Ch i,j ∩C h i+1,j φ (1) dH 1 ´Ch i,j ∩C h i,j+1 φ (2) dH 1 2 ≤ h √ 1 + C h . This nally yields N i=0 N j=0 ´Ch i,j ∩C h i+1,j φ (1) dH 1 ´Ch i,j ∩C h i,j+1 φ (2) dH 1 • ∇ h u h i,j ≤ N i=0 N j=0 h √ 1 + C h ∇ h u h i,j = √ 1 + C h TV h (u h ) , which gives ˆR2 φ • dDu ≤ lim sup h→0 √ 1 + C h TV h (u h ) ≤ 1 . We now have to show that ∀v ∈ L 2 (R 2 ), TV(v) ≤ 1 =⇒ ˆR2 η u ≤ ˆR2 η v . Let v ∈ C ∞ ([-R, R] 2 ) be such that TV(v) ≤ 1. We de ne v h def. = v i + 1 2 h, j + 1 2 h (i,j)∈[1,N ] 2 . One can then show that lim h→0 TV h (v h ) = TV(v) = 1 , so that for every δ > 0 we have TV h v h 1+δ ≤ 1 for h small enough. Now this yields ˆ[-R,R] 2 η u = lim h→0 ˆ[-R,R] 2 η u h ≤ lim h→0 ˆ[-R,R] 2 η v h 1 + δ = ˆ[-R,R] 2 η v 1 + δ . Since this holds for all δ > 0 we get that ˆ[-R,R] 2 η u ≤ ˆ[-R,R] 2 η v . (48) Finally, if v ∈ L 2 (R 2 ) is such that v = 0 outside [-R, R] 2 and TV(v) ≤ 1, by standard approximation results (see [Ambrosio et al., 2000, remark 3.22]) we also have that (48) holds, and hence u solves (42). Finally, since u solves (42), its support is included in [-R, R] 2 , which shows the strong L 1 loc (R 2 ) convergence of (u h ) towards u * in fact implies its strong L 1 (R 2 ) convergence. Extraction of a level set. Since we are interested in nding a simple set E that approximately solves (41), and now have a good way of approximating solutions of (42), we make use of the following result, which is a direct consequence of Proposition 1.3. Proposition 3.14 Let u be a solution of (42). Then the level sets of u are such that for all t ∈ R * with |U (t) | > 0, the set U (t) solves (41). If we have v h converging strongly in L 1 (R 2 ) to a solution v * of (42), then for almost every t ∈ R we have that lim h→0 V (t) h V (t) * = 0 . 
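In practice this suggests a simple post-processing of a discrete solution; the sketch below scans the level sets of u on the pixel grid, assuming the usual convention U^(t) = {u ≥ t} for t > 0 and {u ≤ t} for t < 0, and ranks them by the ratio of weighted area to a naive pixel-based perimeter (an illustrative proxy only, not the objective used in the continuous setting).

```python
import numpy as np

def best_level_set(u, eta_h, h, n_levels=50):
    """Scan the level sets of a discrete solution u and keep the one with the largest
    ratio |weighted area| / perimeter, both measured on the pixel grid."""
    best_mask, best_val = None, -np.inf
    levels = np.linspace(u.min(), u.max(), n_levels + 2)[1:-1]
    for t in levels:
        if t == 0.0:
            continue
        mask = (u >= t) if t > 0 else (u <= t)
        if not mask.any():
            continue
        padded = np.pad(mask.astype(float), 1)
        per = h * (np.abs(np.diff(padded, axis=0)).sum() + np.abs(np.diff(padded, axis=1)).sum())
        val = abs(h ** 2 * eta_h[mask].sum()) / per
        if val > best_val:
            best_mask, best_val = mask, val
    return best_mask, best_val
```

The returned binary mask can then be handed to a contour extractor to produce the simple polygon used as initialization.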
The above results hence show we can construct a sequence of sets (E k ) k≥0 such that |E k E * | converges to 0, with E * a solution of (41). However, this convergence only implies that lim sup k→∞ J(E k ) ≤ J(E * ) , and given that E k is a union of squares this inequality is likely to be strict, with the perimeter of E k not converging to the perimeter of E * . From the last paragraph of Section 1.3.1, we know we have to design a numerical method that allows to nd a set at which the value of J is arbitrarily close to J(E * ). This hence motivates the introduction of the re nement step described in the next subsection. Obtaining a simple polygon. As a nal remark, we note that, even for k large enough, E k could be non-simple. However, using the notations of (59) and ??, since for every set of nite perimeter E, we have that J(E) is a convex combination of the J(int(γ + i )) i∈I , J(int(γ - i,j )) i∈I,j∈J i , there is a simple set F in the decomposition of E such that J(F ) ≥ J(E). In practice, such a set can be found by extracting all the contours of the binary image 1 E , and nding the one with highest objective value. This procedure guarantees that the output of the xed grid step is a simple polygon. We stress that in all our experiments, v h is close to being (proportional to) the indicator of a simple set for h large enough, so that its non-trivial level sets are all simple. O As explained at the beginning of Section 2, our approach for approximating polygonal Cheeger sets essentially consists in optimizing the vertices of an initial polygon with a rst order method. This is in spirit very similar to so-called shape gradient algorithms (see e.g. [START_REF] Allaire | Chapter 1 -Shape and topology optimization[END_REF]). We iteratively construct a sequence of polygons by nding at each step a displacement of steepest ascent for J, along which the vertices of the previous polygon are moved. This displacement is found by exploiting Lemmas 3.15 and 3.16 below. Fixed sign assumption. Given E x 0 the polygon produced by the xed grid initizalization, we only aim at locally optimizing the objective J around x 0 . We can therefore assume the sign of the weighted area term A remains constant and equal to def. = sign(A(x 0 )) during the optimization. In the following, in order to simplify the exposition, we only consider the case = 1, and consequently seek to maximize J(x) = A(x) P (x) . Lemma 3.15 If x ∈ X n we have: P (E x+h ) = P (E x ) - n i=1 h i , τ + i + τ - i + o ( h ) , (49) where τ + i def. = x i+1 -x i x i+1 -x i and τ - i def. = x i-1 -x i x i-1 -x i . Proof : If h is small enough we have: P (E x+h ) = n i=1 x i+1 -x i + h i+1 -h i = n i=1 x i+1 -x i + h i+1 -h i 2 = n i=1 x i+1 -x i 1 + x i+1 -x i , h i+1 -h i x i+1 -x i 2 + o ( h ) = P (E x ) + n i=1 τ + i , h i+1 -h i + o ( h ) , and the result follows by re-arranging the terms in the sum. Lemma 3.16 Let x ∈ X n and η ∈ C 0 (R 2 ). Then we have ˆEx+h η = ˆEx η + n i=1 h i , w - i ν - i + w + i ν + i + o ( h ) (50) where ν + i , ν - i are respectively the outward unit normal to E x on [x i , x i+1 ], [x i-1 , x i ] and w + i def. = x i+1 -x i ˆ1 0 η((1 -t)x i + tx i+1 )(1 -t)dt , w - i def. = x i-1 -x i ˆ1 0 η((1 -t)x i + tx i-1 )(1 -t)dt . Proof : Our proof relies on the following identity (see Lemma 3.11 for a proof of a closely related formula): ˆEx η = sign n i=1 det(x i x i+1 ) n i=1 ω(x i , x i+1 ) , where ω(a 1 , a 2 ) def. = det(a 1 a 2 ) ´T1 η((a 1 a 2 ) y) dy and T 1 def. = (α, β) ∈ (R + ) 2 α + β ≤ 1 is the unit triangle. 
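As a quick numerical sanity check of the expansion (49), the following sketch evaluates the per-vertex gradient -(τ_i^+ + τ_i^-) of the perimeter and compares it with a finite difference of P on a hand-picked simple quadrilateral; the polygon and the perturbation size are arbitrary.

```python
import numpy as np

def perimeter(x):
    return np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1).sum()

def perimeter_gradient(x):
    """Per-vertex gradient of P, i.e. -(tau_i^+ + tau_i^-) as in the expansion (49)."""
    e_next = np.roll(x, -1, axis=0) - x          # x_{i+1} - x_i
    e_prev = np.roll(x, 1, axis=0) - x           # x_{i-1} - x_i
    tau_p = e_next / np.linalg.norm(e_next, axis=1, keepdims=True)
    tau_m = e_prev / np.linalg.norm(e_prev, axis=1, keepdims=True)
    return -(tau_p + tau_m)

# Finite-difference check of (49): the two printed numbers should agree to first order.
x = np.array([[0.0, 0.0], [1.0, 0.1], [1.2, 1.0], [0.1, 0.9]])
h = 1e-6 * np.random.default_rng(0).standard_normal(x.shape)
print(perimeter(x + h) - perimeter(x), np.sum(h * perimeter_gradient(x)))
```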
Assuming η ∈ C 1 (R 2 ) and denoting adj(A) the adjugate of a matrix A we get: ω(a 1 + h 1 , a 2 + h 2 ) = ω(a 1 , a 2 ) + det(a 1 a 2 ) ˆT1 ∇η((a 1 a 2 ) y) • ((h 1 h 2 )y) dy + tr adj(a 1 a 2 ) T (h 1 h 2 ) ˆT1 η((a 1 a 2 ) y) dy + o( h ) = ω(a 1 , a 2 ) + sign(det(a 1 a 2 )) ˆOa1a2 ∇η(y) • ((h 1 h 2 ) (a 1 a 2 ) -1 ) y) dy + tr adj(a 1 a 2 ) T (h 1 h 2 ) |det(a 1 a 2 )| ˆOa1a2 η(y) dy + o( h ) . Denoting g(y) def. = (h 1 h 2 )(a 1 , a 2 ) -1 y, we obtain: ω(a 1 + h 1 , a 2 + h 2 ) = ω(a 1 , a 2 ) + sign(det(a 1 a 2 )) ˆOa1a2 [∇η • g + η divg] + o( h ) = ω(a 1 , a 2 ) + sign(det(a 1 a 2 )) ˆ∂(Oa1a2) η (g • ν Oa1a2 ) dH 1 + o( h ) , where we used Gauss-Green theorem to obtain the last equality. Now if h is small enough then n i=1 det(x i + h i x i+1 + h i+1 ) and n i=1 det(x i x i+1 ) have the same sign, so that, de ning g i : y → ((h i h i+1 )(x i x i+1 ) -1 y) we get d ˆE• η (x) . h = n i=1 sign(det(x i x i+1 )) ω i , with def. = sign n i=1 det(x i x i+1 ) and ω i def. = ˆ∂ * (Oxixi+1) η (g i • ν Oxixi+1 ) dH 1 . Then one can decompose each integral in the sum and show the integrals over [0, x i ] cancel out each other, which allows to obtain d ˆE• η (x) . h = n i=1 ˆ[xi,xi+1] η (g i • ν i ) dH 1 . But now if y ∈ [x i , x i+1 ] then (x i x i+1 ) -1 y = 1 x i+1 -x i y -x i+1 y -x i , and the result follows by re-arranging the terms in the sum. One can then use an approximation argument as in [Maggi, 2012, Proposition 17.8] to show it also holds when η is only continuous. Remark 3.17 The results given in Lemmas 3.15 and 3.16 are in fact closely linked to the notion of shape derivative1 . If E x is a simple polygon with n vertices, there exists a constant C > 0 such that, given any displacement h of the vertices of E x , one can extend h to θ ∈ W 1,∞ (R 2 , R 2 ) with θ 1,∞ ≤ C h and such that E x+h = (Id + θ)(E x ) and, for every i ∈ {1, ..., n} ∀t ∈ [0, 1], θ((1 -t)x i + tx i+1 ) = (1 -t)θ(x i ) + tθ(x i+1 ) = (1 -t)h i + th i+1 . ( 51 ) The expressions of the shape derivatives of the perimeter and area (see [Henrot and Pierre, 2018, Chapter 5] for more details, including precise assumptions on the regularity of E and η) yield P ((Id + θ)(E)) = P (E) + ˆ∂ * E H E • θ dH 1 + o( θ 1,∞ ) , ˆ(Id+θ)(E) η = ˆE η + ˆ∂ * E η θ • ν E dH 1 + o( θ 1,∞ ) , where H E denotes the curvature vector of E. To recover the formulas given in the above lemmas, one simply needs to use (51) and the fact H Ex = - n i=1 (τ + i + τ - i )δ x i with τ + i def. = x i+1 -x i x i+1 -x i and τ - i def. = x i-1 -x i x i-1 -x i . Considering Lemmas 3.15 and 3.16, we construct our sequence of polygons as follows: given x t ∈ X n and a step size α t , we de ne the next iterate by x t+1 = x t + α t θ t with θ t j def. = 1 P (E x t ) θ t area,j -´Ex t η P (E x t ) θ t per,j , θ t area,j def. = w t- j ν t- j + w t+ j ν t+ j , θ t per,j def. = -(τ t+ j + τ t- j ) , where, for all j, ν t+ j and ν t- j are the outward unit normals of the two sides stemming from x t j , and τ t+ j def. = x i+1 -x i x i+1 -x i , w t+ j def. = x i+1 -x i ˆ1 0 η((1 -t)x i + tx i+1 )(1 -t)dt , τ t- j def. = x i -x i-1 x i -x i-1 , w t+ j def. = x i-1 -x i ˆ1 0 η((1 -t)x i + tx i-1 )(1 -t)dt . Lemmas 3.15 and 3.16 in fact show that the displacement θ t we apply to the vertices of E x t is such that lim α→0 + J(E x t +αθ ) -J(E x t ) α = θ t , θ , (53) i.e. that it is the displacement of steepest ascent for J at E x t . Numerical computation of θ t . 
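A schematic version of this computation is sketched below; it is not the triangulation-based implementation described next. The edge weights w_j^± are evaluated with a fixed Gauss–Legendre rule on each side, the polygon is assumed simple and counterclockwise, and the current value of ∫_{E_x} η is supplied by the caller (for instance via the triangle decomposition of Lemma 3.11, as in the earlier sketch).

```python
import numpy as np

# Gauss-Legendre nodes/weights mapped to [0, 1], for the edge integrals defining w_j^+/-.
_GL_N, _GL_W = np.polynomial.legendre.leggauss(5)
T_NODES, T_WEIGHTS = 0.5 * (_GL_N + 1.0), 0.5 * _GL_W

def edge_weight(xi, xj, eta):
    """w = |xj - xi| * \\int_0^1 eta((1 - t) xi + t xj) (1 - t) dt (Lemma 3.16)."""
    pts = np.outer(1.0 - T_NODES, xi) + np.outer(T_NODES, xj)
    return np.linalg.norm(xj - xi) * np.sum(T_WEIGHTS * (1.0 - T_NODES) * eta(pts[:, 0], pts[:, 1]))

def ascent_direction(x, eta, area):
    """Steepest-ascent displacement theta of the vertices for J = A / P.

    x    : (n, 2) vertices of a simple, counterclockwise polygon,
    area : current value of \\int_{E_x} eta (supplied by the caller)."""
    n = len(x)
    P = np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1).sum()
    theta = np.zeros_like(x)
    for j in range(n):
        xp, xm = x[(j + 1) % n], x[(j - 1) % n]
        tau_p = (xp - x[j]) / np.linalg.norm(xp - x[j])
        tau_m = (xm - x[j]) / np.linalg.norm(xm - x[j])
        nu_p = np.array([tau_p[1], -tau_p[0]])   # outward normal of [x_j, x_{j+1}]
        nu_m = np.array([-tau_m[1], tau_m[0]])   # outward normal of [x_{j-1}, x_j]
        theta_area = edge_weight(x[j], xp, eta) * nu_p + edge_weight(x[j], xm, eta) * nu_m
        theta_per = -(tau_p + tau_m)
        theta[j] = (theta_area - (area / P) * theta_per) / P
    return theta

def ascent_step(x, eta, area, alpha):
    """One iterate x^{t+1} = x^t + alpha * theta^t of the local optimization."""
    return x + alpha * ascent_direction(x, eta, area)
```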
To compute the integral of η over E x t , we integrate η on each triangle of a su ciently ne triangulation of E x t (this triangulation must be updated at each iteration, and sometimes re-computed from scratch to avoid the presence of ill-shaped triangles). The integral of η on a triangle and w t+ j , w t- j are computed using standard numerical integration schemes for triangles and line segments. If |T| denotes the number of triangles in the triangulation of E x t , |S T | (resp. |S L |) the number of points used in the numerical integration scheme for triangles (resp. line segments), the complexity of each iteration is of order O (m (|T| |S T | + n |S L |)). Comments. Two potential concerns about the above procedure are whether the iterates remain simple polygons (i.e. x t ∈ X n for all t) and whether they converge to a global maximizer of J over P n . We could not prove that the iterates remain simple polygons along the process, but since the initial polygon can be taken arbitrarily close to a simple set solving (41) (in terms of the Lebesgue measure of the symmetric di erence), we do not expect nor observe in practice any change of topology during the optimization. Moreover, even if J could have non-optimal critical points1 , the above initialization allows us to start our local descent with a polygon that hopefully lies in the basin of attraction of a global maximizer. Critical points. If the sequence of polygons de ned above converges to a simple polygon E x , then E x is such that ∀j ∈ {1, ..., n}, w + j ν + j + w - j ν - j = ´Ex η P (E x ) (-τ + i -τ - i ) . (54) This can be seen as a discrete version of the following rst order optimality condition for solutions of (41): η = ´E η P (E) H E on ∂ * E . (55) Note that ( 55) is similar to the optimality condition for the classical Cheeger problem (i.e. with η = 1 and the additional constraint E ⊆ Ω), namely H E = P (E)/ |E| in the free boundary of E (see [Parini, 2011, Proposition 2.4]). T In this section, we study how generalized Cheeger sets and their polygonal approximations compare in a particular setting, where the weight function η is radial. To be more precise, we assume that there exists a strictly decreasing nonnegative function η : [0, +∞) → R + such that η(x) = η( x ) for almost every x ∈ R 21 . We rst prove that, in this setting, (generalized) Cheeger sets associated to η are all disks centered at the origin. Then, we prove that the maximizers of J over P n are regular and inscribed in a circle centered at the origin when n ∈ {3, 4}, and conjecture that this result holds for every n ≥ 3. To enforce the uniqueness of the generalized Cheeger set, we may also invoke the following assumption, which is for example satis ed by η : r → exp(-r 2 /(2σ 2 )) for all σ > 0. Assumption 1. The function η is C1 . Moreover, de ning f : r → r η(r), we have f (r) r→+∞ ----→ 0. Finally, there exists r 0 > 0 such that f (r) > 0 for all r < r 0 and f (r) < 0 for all r > r 0 . Description of generalized Cheeger sets To prove the above-mentioned result, we rely on Steiner symmetrization. Let us brie y recall its de nition and main property. If E is a set of nite perimeter with nite measure, ν ∈ S 1 and z ∈ R, we denote E ν,z def. = {t ∈ R | z ν + t ν ⊥ ∈ E} . The Steiner symmetrization of E with respect to the line through the origin and directed by ν, denoted E s ν , is then de ned by E s ν def. = {x ∈ R 2 | | x, ν ⊥ | ≤ |E ν, x,ν |/2} . 
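A discrete analogue of this operation on binary images is easy to write down and helps visualize the rearrangement: in the sketch below, each one-dimensional slice keeps its number of "on" pixels and is recentred about the middle of the image (the slicing axis plays the role of the direction ν); this is only an illustration, not part of the method.

```python
import numpy as np

def steiner_symmetrize(mask, axis=0):
    """Discrete Steiner symmetrization of a binary image: along the chosen axis,
    every 1-D slice keeps its number of 'on' pixels but is recentred, so the
    measure of every slice (hence of the whole set) is preserved."""
    mask = np.asarray(mask, dtype=bool)
    if axis == 1:
        return steiner_symmetrize(mask.T, axis=0).T
    out = np.zeros_like(mask)
    counts = mask.sum(axis=0)
    for j, c in enumerate(counts):
        lo = (mask.shape[0] - c) // 2
        out[lo:lo + c, j] = True
    return out
```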
The fundamental property of Steiner symmetrization is that it preserves volume and does not increase perimeter (see [Maggi, 2012, section 14.1] for more details). To prove our result, we rst prove two useful lemmas. Lemma 3.18 Let f : R → R + be even and strictly decreasing on R + . Then for every measurable set A such that |A| < +∞ we have: ˆA f ≤ ˆAs f , where A s def. = (-|A|/2, |A|/2). Moreover, equality holds if and only if |A A s | = 0. Proof : We have ˆA f = ˆ+∞ 0 |{f 1 A ≥ t}| dt = ˆ+∞ 0 |{f ≥ t} ∩ A| dt . Since f is even, for all t > 0, there exists α such that {f ≥ t} = [-α, α], so that we have |{f ≥ t} ∩ A| = |[-α, α] ∩ A| ≤ min(2α, |A|) = |[-α, α] ∩ [-|A|/2, |A|/2]| = |{f ≥ t} ∩ A s | . Hence ˆA f ≤ ˆ+∞ 0 {f ≥ t} ∩ A s dt = ˆAs f . Now if |A A s | > 0, since |A| = |A s | then |A \ A s | = |A s \ A| > 0 and we have ˆAs f = ˆA∩A s f + ˆAs \A f > ˆA∩A s f + f (|A|/2) |A s \ A| ≥ ˆA∩A s f + ˆA\A s f = ˆA f , which proves the second part of the result. Lemma 3.19 Let E ⊂ R 2 be a set of nite perimeter with 0 < |E| < +∞. Then for any ν ∈ S 1 we have ´Es ν η P (E s ν ) ≥ ´E η P (E) , with equality if and only if |E E s ν | = 0. Proof : From [Maggi, 2012, theorem 14.4] we know that we have P (E s ν ) ≤ P (E). We now perform a change of coordinates in order to have E s ν = {(x 1 , x 2 ) ∈ R 2 | |x 2 | ≤ |E x1 |/2} with E x1 def. = {x 2 ∈ R | (x 1 , x 2 ) ∈ E} . Now, we have ˆE η = ˆ+∞ -∞ ˆ+∞ -∞ η(x 1 , x 2 ) 1 E (x 1 , x 2 ) dx 2 dx 1 = ˆ+∞ -∞ ˆEx 1 η(x 1 , •) dx 1 , with E x1 = {x 2 ∈ R | (x 1 , x 2 ) ∈ E}. For almost every x 1 ∈ R we have that E x1 is measurable, has nite measure, and that η(x 1 , •) is nonnegative, even and strictly decreasing on R + . We can hence apply Lemma 3.18 and get that ˆE η ≥ ˆ+∞ -∞ ˆ(Ex 1 ) s η(x 1 , •) d x1 = ˆEs ν η . (56) Moreover, if |E E s ν | > 0, then since |E E s ν | = ˆ+∞ 0 ˆ+∞ 0 |1 E (x 1 , x 2 ) -1 E s ν (x 1 , x 2 )| dx 2 dx 1 = ˆ+∞ 0 ˆ+∞ 0 1 Ex 1 (x 2 ) -1 (Ex 1 ) s (x 2 ) dx 2 dx 1 = ˆ+∞ 0 E x1 E s x1 dx 1 , we get that L 1 ({x 1 ∈ R | |E x1 (E x1 ) s | > 0}) > 0 and hence that ( 56) is strict. Using the above lemmas, we may now state the following result. Proposition 3.20 The generalized Cheeger sets associated to η are disks centered at the origin. Moreover, under Assumption 1, the optimal set is unique and its radius is [Maggi, 2012, Section 14.2], we get that E * is a convex set which is invariant by re ection with respect to any line through the origin, and hence that E * is a disk centered at the origin. R * = argmax R>0 1 R ˆR 0 rη(r) dr . Proof : If E ⊂ R 2 is Let us de ne G(R) def. = 2πJ(B(0, R)) = 1 R ˆR 0 rη(r) dr . Under Assumption 1 G can be extended to a C 1 function on R + with G(0) = 0, G (0) = η(0)/2 and (denoting f : r → rη(r) as in Assumption 1): ∀R > 0, G (R) = Rf (R) -´R 0 f (r) dr R 2 . Now de ning F : R → Rf (R)-´R 0 f (r) dr we see that F (R) = Rf (R) for all R > 0, so that F and f have the same variations. Using Assumption 1, we hence have that F is strictly increasing on [0, R 0 ] and strictly decreasing on [R 0 , +∞). Moreover, F (0) = 0 and, since f (R) R→+∞ -----→ 0, we also get F (R) < 0 for R large enough. Therefore, there exists R * > 0 such that F (and hence G ) is strictly positive on (0, R * ) and strictly negative on (R * , +∞). We hence obtain that R * is the unique maximizer (and the unique critical point) of G. In practice, instead of solving (41), we look for an element of P n (a simple polygon with at most n sides) maximizing J, for some given integer n ≥ 3. 
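For the Gaussian weight η(r) = exp(-r²/(2σ²)) (which satisfies Assumption 1), the profile G of Proposition 3.20 has a closed form and the optimal radius is easy to locate numerically; the sketch below uses a plain grid search as an illustrative stand-in for a proper one-dimensional solver.

```python
import numpy as np

def cheeger_radius_gaussian(sigma, n=100000):
    """Radius of the optimal disk for eta(r) = exp(-r^2 / (2 sigma^2)), obtained by
    maximizing G(R) = (1/R) * \\int_0^R r eta(r) dr, which here has the closed form
    G(R) = sigma^2 * (1 - exp(-R^2 / (2 sigma^2))) / R."""
    R = np.linspace(1e-6, 6.0 * sigma, n)
    G = sigma ** 2 * (1.0 - np.exp(-R ** 2 / (2.0 * sigma ** 2))) / R
    return R[np.argmax(G)]

# About 1.585 for sigma = 1: the unique positive root of R f(R) = \int_0^R f.
print(cheeger_radius_gaussian(1.0))
```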
It is hence natural to investigate the proximity of these optimal polygons with the "continuous" Cheeger sets. Solving classical geometric variational problems restricted to the set of n-gons is involved, as the Steiner symmetrization procedure might increase the number of sides [Polya and Szegö, 1951, Sec. 7.4]. However, using a trick from Pólya and Szegö, one may prove: Proposition 3.21 Let n ∈ {3, 4}. Then all the maximizers of J over P n are regular and inscribed in a circle centered at the origin. Proof : Triangles (n = 3): let E * be a maximizer of J among triangles. Then the Steiner symmetrization of E * with respect to any of its heights through the origin (see Figure 14) is still a triangle, and Lemma 3.19 ensures it has a higher energy except if this operation leaves E * unchanged. As a consequence, E * must be symmetric with respect to all its heights through the origin. This shows E * is equilateral and inscribed in a circle centered at the origin. Quadrilaterals (n = 4): we notice that if E is a simple quadrilateral, then its Steiner symmetrization with respect to any line perpendicular to one of its diagonals (see Figure 15) is still a simple quadrilateral. We can then proceed exactly as for triangles to prove any maximizer E * of J over P 4 is symmetric with respect to every line through the origin and perpendicular to one of its diagonals. This shows E * is a rhombus centered at the origin. We can now symmetrize with respect to any line through the origin perpendicular to one of its sides to nally obtain that E * must be a square centered at the origin. Our proof does not extend to n ≥ 5, but the following conjecture is natural: Conjecture 1 The result stated in Proposition 3.21 holds for all n ≥ 3. For n ∈ {3, 4} or if Conjecture 1 holds, it remains to compare the radius of optimal polygons with the one of "continuous" Cheeger sets. Let us x a regular n-gon inscribed in the circle of radius R centered at 0, and denote it B n (0, R). Abusing notation, we de ne J(R) def. = J(B(0, R)) and J n (R) def. = J(B n (0, R)), and can now state the following result. J n -J|| ∞ = O 1 n 2 . Moreover, under Assumption 1 and assuming f is C 2 with f (r 0 ) < 0, then for n large enough we have that J n has a unique maximizer R * n and |R * n -R * | = O 1 n . Proof : Recalling that P (B n (0, R)) = 2πR sin(π/n) π/n we have: |J(R) -J n (R)| = 1 2πR ˆB(0,R) η - π/n 2πR sin(π/n) ˆBn(0,R) η = 1 - π/n sin(π/n) 1 2πR ˆB(0,R) η + π/n 2πR sin(π/n) ˆB(0,R)\Bn(0,R) η ≤ 1 - π/n sin(π/n) J(R) + π/n 2πR sin(π/n) η ∞ |B(0, R) \ B n (0, R)| . Now, we de ne R n (θ) def. = R cos (π/n) cos ((θ mod 2π/n) -π/n) , so that in polar coordinates an equation of ∂B n (0, R) is given by r(θ) = R n (θ). We obtain: |B(0, R) \ B n (0, R)| = ˆ2π 0 ˆR Rn(θ) r dr dθ ≤ 2π sup θ∈[0,2π] ˆR Rn(θ) r dr = π sup θ∈[0,2π] R 2 -R 2 n (θ) = π(R 2 -R 2 cos 2 (π/n)) . We hence obtain J n -J ∞ ≤ J ∞ 1 - π/n sin(π/n) + R η ∞ π/n sin(π/n) (1 -cos(π/n)) and we can nally conclude regarding the rst statement. Now, de ning α n (s) def. = cos(π/n)/cos(πs/n), we have: J n (R) = π/n 2πR sin(π/n) ˆ2π 0 ˆRn(θ) 0 rη(r) dr dθ = π/n 2πR sin(π/n) n ˆ2π/n 0 ˆR cos(π/n) cos(θ-π/n) 0 rη(r) dr dθ = π/n 2πR sin(π/n) 2n ˆπ/n 0 ˆR cos(π/n) cos(θ) 0 rη(r) dr dθ = π/n R sin(π/n) ˆ1 0 ˆRαn(s) 0 rη(r) dr ds = π/n sin(π/n) 1 R ˆR 0 r ˆ1 0 (α n (s)) 2 η(rα n (s)) ds dr . 
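For the Gaussian weight, both J and J_n can be evaluated accurately, which gives a quick numerical check of the O(1/n²) estimate above; in the sketch below J_n is computed from the polar equation of the regular n-gon, and the variance, the range of radii and the quadrature resolution are illustrative choices.

```python
import numpy as np

def J_disk(R, sigma=1.0):
    """J(B(0, R)) for eta(r) = exp(-r^2 / (2 sigma^2)) (closed form)."""
    return sigma ** 2 * (1.0 - np.exp(-R ** 2 / (2 * sigma ** 2))) / R

def J_ngon(R, n, sigma=1.0, n_theta=2000):
    """J(B_n(0, R)) for the regular n-gon inscribed in the circle of radius R."""
    theta = np.linspace(0.0, np.pi / n, n_theta, endpoint=False) + 0.5 * np.pi / (n * n_theta)
    rho = R * np.cos(np.pi / n) / np.cos(theta)          # polar equation of the boundary
    inner = sigma ** 2 * (1.0 - np.exp(-rho ** 2 / (2 * sigma ** 2)))
    area = 2 * n * np.mean(inner) * (np.pi / n)          # integral of eta over the n-gon
    return area / (2 * n * R * np.sin(np.pi / n))        # divided by its perimeter

Rs = np.linspace(0.1, 5.0, 200)
for n in (8, 16, 32, 64):
    err = max(abs(J_ngon(R, n) - J_disk(R)) for R in Rs)
    print(n, err)        # the gap should decrease roughly like 1 / n^2
```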
We can now de ne f n : r → r ´1 0 (α n (s)) 2 η(rα n (s)) ds and proceed as in Proposition 3.20: showing f n is strictly positive on (0, r 1 ) and strictly negative on (r 1 , +∞) for some r 1 > 0 is su cient to prove J n has a unique maximizer (its unique critical point). Now, we have: ∀r ∈ R + , f n (r) = ˆ1 0 (α n (s)) 2 (η(rα n (s)) + rα n (s)η (rα n (s))) ds . The image of [0, 1] by s → rα n (s) is [rcos(π/n), r]. Since r → η(r) + rη (r) = (rη) (r) is strictly positive on (0, r 0 ) and strictly negative on (r 0 , +∞), we get that f n is strictly positive on (0, r 0 ) and strictly negative on (r 0 /cos(π/n), +∞). It hence remains to investigate the sign of f n on the interval [r 0 , r 0 /cos(π/n)]. But since f is C 2 and f (r 0 ) < 0 there exists > 0 such that f (r) < 0 for every r ∈ (r 0 -, r 0 + ). For n large enough, we hence have: [r 0 cos(π/n), r 0 /cos(π/n)] ⊂ (r 0 -, r 0 + ) . This implies that for all r ∈ [r 0 , r 0 /cos(π/n)] we have rα n (s) ∈ (r 0 -, r 0 + ), and hence that f n (r) < 0. This nally shows there exists r 1 > 0 such that f n is strictly positive on (0, r 1 ) and strictly negative on (r 1 , +∞), and the result follows as in Proposition 3.20. Now R * and R * n are respectively the unique solutions of F (0, R) = 0 and F (π/n, R) = 0 with F (t, R) def. = ˆR 0 f t (r) dr -Rf t (R) , f t (r) def. = r ˆ1 0 α(t, s) 2 η(rα(t, s))ds , α(t, s) def. = cos(t) cos(ts) . Moreover, F is C 1 in a neighborhood of (0, R * ) and ∂ R F (0, R * ) = -R * f (R * ). From Assumption 1 we know the unique R > 0 such that f (R) = 0 is R 0 , which (from the proof of Proposition 3.20) is di erent from R * . We can hence apply the implicit function theorem and obtain that |R * n -R * | = |∂ t F (0, R * )| |∂ R F (0, R * )| 1 n + o 1 n , which yields the result. N Before assessing the practical performance of Algorithm 3, we brie y explain how we implement the sliding step (Line 14). L This section is dedicated to the implementation of the sliding step (Line 14 of Algorithm 3). Our approach is exactly the same as in Section 2.4, that is we optimize the vertices of a set of polygons and their corresponding amplitudes using a rst order algorithm. To be more precise, we perform a gradient descent on the mapping (a, x) → G N ( , a, E x ) , (57) where E x = (E x 1 , ..., E x N ). Given a step size α t , a vector a t ∈ R N and x t 1 , ..., x t N in X n , we de ne u t def. = N i=1 a t i 1 E x t i , η t def. = - 1 λ Φ * (Φu t -y) , and perform the following update: a t+1 i def. = a t i -α t h t i , h t i def. = P E x t i -i ˆEx t i η t , x t+1 i,j def. = x t i,j -α t θ t i,j , θ t i,j def. = a t i -(τ t+ i,j + τ t- i,j ) + i θ t data,i,j , θ t data,i,j def. = Φu ty, w t+ i,j ν t+ i,j + Φu ty, w t- i,j ν t- i,j , where τ t± i,j , ν t± i,j are de ned as in Section 2.4 and w t+ i,j def. = x t i,j+1 -x t i,j ˆ1 0 ϕ((1 -t) x t i,j + t x t i,j+1 ) (1 -t) dt , w t- i,j def. = x t i,j-1 -x t i,j ˆ1 0 ϕ((1 -t) x t i,j + t x t i,j-1 ) (1 -t) dt . Using the notations of Section 2.4, the complexity of each iteration is of order O (N m (|T| |S T | + n |S L |)) . Comments. We rst stress that the above update is similar to the evolution formally described in (36). Now, unlike the local optimization we perform to approximate Cheeger sets, the sliding step may tend to induce topology changes (see Section 3.3 for an example). This is of course linked to the possible appearance of singularities mentioned in Section 1.4.1. Typically, a simple set may tend to split in two simple sets over the course of the descent. 
This is a major di erence (and challenge) compared to the sliding steps used in sparse spikes recovery (where the optimization is carried out over the space of Radon measures) [START_REF] Bredies | Inverse problems in spaces of measures[END_REF], Boyd et al., 2017, Denoyelle et al., 2019]. This phenomenon is closely linked to topological properties of the faces of the total (gradient) variation unit ball: its extreme points do not form a closed set for any reasonable topology (e.g. the weak L 2 (R 2 ) topology), nor do its faces of dimension d ≤ k for any k ∈ N. As a result, when moving continuously on the set of faces of dimension d = k, it is possible to "stumble upon" a point which only belongs to a face of dimension d > k. Handling (or not) topology changes. Our current implementation does not allow to handle these topology changes in a consistent way, and nding a way to deal with them "o -the-grid" is an interesting avenue for future research. It is important to note that not allowing topological changes during the sliding step is not an issue, since all convergence guarantees hold as soon as the output of the sliding step decreases the energy more than the standard update. One can hence stop the local descent at any point before any change of topology occurs, which avoids having to treat them. Still, in order to yield iterates that are as sparse as possible (and probably to decrease the objective as quickly as possible), it seems preferable to allow topological changes. R Here, we investigate the practical performance of Algorithm 3. We focus on the case where Φ is a sampled Gaussian convolution operator, i.e. ∀x ∈ R 2 , ϕ(x) = exp - ||x -x i || 2 2σ 2 m i=1 for a given σ > 0 and a sampling grid (x i ) m i=1 . The noise is drawn from a multivariate Gaussian with zero mean and isotropic covariance matrix τ 2 I m . We take λ of the order of 2 log(m) τ 2 . Numerically certifying that a given function is an approximate solution of (P λ (y)) is di cult. However, as the sampling grid becomes ner, Φ tends to the convolution with the Gaussian kernel, which is injective. Relying on a Γ-convergence argument, one may expect that if u 0 is a piecewise constant image and w is some small additive noise, the solutions of (P λ (y)) with y = Φu 0 + w are all close to u 0 , modulo the regularization e ects of the total variation. Comparison with other algorithms. We assess the performance of our algorithm by comparing its output to that of a primal dual algorithm minimizing a discretized version of (P λ (y)) on a pixel grid, where the total variation term is replaced by the discrete isotropic total variation or the dual-based discretization of [START_REF] Hintermüller | Functionalanalytic and numerical issues in splitting methods for total variation-based image reconstruction[END_REF], Condat, 2017]. To minimize discretization artifacts, we arti cially introduce a downsampling in the forward operator, so that the reconstruction is performed on a grid four times larger than the sampling one. Experiments. Our rst experiment consists in recovering a function u 0 that is a linear combination of three indicator functions (see Figures 16 and17). During each of the three iterations required to obtain a good approximation of u 0 , a new atom is added to its support. 
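For reference, the sketch below shows one way of assembling a dense matrix representation of such a sampled Gaussian convolution operator acting on a pixelized image, together with synthetic observations; the grid sizes, the variance, the noise level and the disk-shaped u_0 are arbitrary choices made for the example and do not reproduce the experiments reported here.

```python
import numpy as np

def gaussian_sampling_operator(grid_pts, sample_pts, sigma, cell_area):
    """Matrix of (Phi u)_i = \\int u(x) exp(-||x - x_i||^2 / (2 sigma^2)) dx for u
    piecewise constant on a pixel grid (cell centers grid_pts, cells of area cell_area)."""
    d2 = ((sample_pts[:, None, :] - grid_pts[None, :, :]) ** 2).sum(axis=2)
    return cell_area * np.exp(-d2 / (2.0 * sigma ** 2))

# Synthetic observations y = Phi u0 + w, with u0 the indicator of a disk.
n_grid, n_samp, sigma, tau = 40, 20, 0.2, 0.01
t = np.linspace(-1, 1, n_grid, endpoint=False) + 1.0 / n_grid
gx, gy = np.meshgrid(t, t, indexing="ij")
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
s = np.linspace(-1, 1, n_samp)
sx, sy = np.meshgrid(s, s, indexing="ij")
sample_pts = np.column_stack([sx.ravel(), sy.ravel()])
Phi = gaussian_sampling_operator(grid_pts, sample_pts, sigma, cell_area=(2.0 / n_grid) ** 2)
u0 = (np.linalg.norm(grid_pts, axis=1) < 0.5).astype(float)
rng = np.random.default_rng(0)
y = Phi @ u0 + tau * rng.standard_normal(Phi.shape[0])
lam = 2 * np.log(len(y)) * tau ** 2      # regularization level of the same order as in the text
```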
One can see the sliding step is crucial: the large atom on the left, added during the second iteration, is of the xed grid method using the isotropic and Condat's total variation signi cantly re ned during the sliding step of the third iteration, when enough atoms have been introduced. The second experiment (see Figure 18) consists in recovering the indicator function of a set with a hole (which can also be seen as the sum of two indicator functions of simple sets). The support of u 0 and its gradient are accurately estimated. Still, the typical e ects of total variation regularization are noticeable: corners are slightly rounded, and there is a "loss of contrast" in the eye of the pacman. The third experiment (Figure 19) also showcases the rounding of corners, and highlights the in uence of the regularization parameter: as λ decreases, the curvature of the edge set increases. Finally, we provide in Fig. Figure 20 the results of an experiment on a more challenging task, which consists in reconstructing a natural grayscale image. Choice of parameters. The number of observations in the rst experiment is 60 × 60, 75 × 75 in the second and third ones, and 64 × 64 in the last one. In all experiments, we solved (46) on a grid of size 80 × 80. In both local descent steps (for approximating Cheeger sets and for the sliding step), the simple polygons have a number of vertices of order 30 times the length of their boundary (100 for the last experiment), and the maximum area of triangles in their inner mesh is 10 -2 (the domain being a square of side 1). The inner triangulation of a simple polygon is obtained by using Richard Shewchuk's Triangle library. The boundary of the polygons are resampled every 30 iterations. Line integrals are computed using the Gauss-Patterson scheme of order 3 (15 points) and integrals on triangles using the Hammer-Marlowe-Stroud scheme of order 5 (7 points). T Here, we illustrate the changes of topology that may occur during the sliding step (Line 14 of Algorithm 3). All relevant plots are given in Figure 21. The unknown function (see (a)) is the sum of two indicator functions: u 0 = 1 B((-1,0),0.6) + 1 B((1,0),0.6) , and observations are shown in (b). The Cheeger set computed at Line 5 of the rst iteration covers the two disks (see (c)). In this setting, our implementation of the sliding step converges to a function similar to (f)1 , and we obtain a valid update that decreases the objective more than the standard Frank-Wolfe update. The next iteration of the algorithm will then consist in adding a new atom to the k] (k = 1, 4) produced by Algorithm 3, outputs of the xed grid method using the isotropic and Condat's total variation approximation, with negative amplitude, so as to compensate for the presence of the small bottleneck. However, it seems natural that the support of (f) should split into two disjoint simple sets, which is not possible with our current implementation. To investigate what would happen in this case, we manually split the two sets (see (g)) and let them evolve independently. The support of the approximation converges to the union of the two disks, which produces an update that decreases the objective even more than (f). C In this part, we proposed an iterative algorithm for solving (P λ (y)), which is based on the conditional gradient (or Frank-Wolfe) algorithm. 
We studied the convergence of its iterates, and presented a natural analog of the "sliding step" introduced in [START_REF] Denoyelle | The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy[END_REF]. Unlike in the latter work, we left the theoretical analysis of this step open, and in particular could not prove that a so-called critical point of the objective can be reached at every iteration. We then discussed the practical implementation of this theoretical algorithm, and presented two oracles to carry out its main operations. The rst allows to approximate generalized Cheeger sets, and builds up on the method proposed in [START_REF] Carlier | Approximation of maximal Cheeger sets by projection[END_REF]. It consists in optimizing the vertices of a polygon with a shape gradient-like algorithm, initialized with the output of the latter method. To support our approach, we also proved the existence of generalized Cheeger sets in the class of polygons with a given number of sides, and investigated their proximity with their continuous counterpart in a radial setting. The second oracle we proposed allows to implement the sliding step mentioned above. We discussed the topological changes that can appear during the resulting evolution, and argued that handling them in a consistent way is natural but challenging. Finally, we assessed the performance of the propose numerical method on a few recovery examples, providing comparisons with xed-grid methods. Our approach is particularly suited to reconstruct simple images (i.e. which are the superposition of a few simple shapes), whereas on more complex natural images, existing methods perform better. 110 ensure a notion of pre-certi cate is relevant, a natural way would be to nd out if, as proved in [START_REF] Duval | Exact Support Recovery for Sparse Spikes Deconvolution[END_REF] in the sparse spikes setting, the fact it is a true certi cate is necessary for (noiseless) exact support recovery. Error bounds. The noise robustness analysis we presented in Part 2 is not quantitative. It does not provide bounds on the distance between solutions of (P 0 (y 0 )) and (P λ (y 0 + w)) in terms of the noise level w H . In the sparse spikes setting, such bounds were derived by analyzing the behaviour of dual certi cates in the so-called far and near regions (see for instance [START_REF] Azaïs | Spike detection from inaccurate samplings[END_REF]). In our case, it seems that the near region analysis is closely linked to the coercivity constant of the second order shape derivative at optimal sets. What seems more challenging is to quantify how the relevant geometric functional behaves in the far region, i.e. for sets that are not smooth normal deformations of a minimizer. We argue that this could be related to the derivation of a quantitative inequality for our functional (we brie y discussed the case of the isoperimetric inequality in Section 2.1.1). To prove such a result, one would need to adapt the selection principle of [START_REF] Cicalese | A Selection Principle for the Sharp Quantitative Isoperimetric Inequality[END_REF], and then use the analysis provided in Section 2.2 to deal with smooth deformations of a minimizer. Stability for the total variation ow. We argue that the stability analysis of Sections 2.2 and 3.1 could possibly be used to study the L 2 gradient ow of the total variation (see for instance [START_REF] Bellettini | The Total Variation Flow in RN[END_REF]). 
A relevant question concerning this evolution could be: if the ow is initialized with an M -simple function, is the solution also M -simple for su ciently small times? If so, is the length of the chains preserved? In the particular case where the ow is initialized with the indicator function of a calibrable set, it has been shown in the above reference that the solution remains proportional to the initialization. Answering the more general question formulated above is, to the best of our knowledge, an open problem. Given an initial data u 0 ∈ L 2 (R 2 ), an idea to apply the results of Sections 2.2 and 3.1 would be to consider the minimizing movement scheme min u∈L 2 (R 2 ) TV(u) + 1 2τ u -u 0 L 2 , (58) and to prove a stability result in the spirit of Theorem 2.27 when τ → 0 + . The main di erence with our setting is that, in (58), no advantage can be taken from the regularity of the measurement operator, and whether dual solutions converge smoothly to the minimal norm limit certi cate is unclear. Finally, let us mention that, although it is even less clear whether our results could be applied in this setting, one could also consider the same question for the Wasserstein gradient ow of the total variation [START_REF] Carlier | On the total variation Wasserstein gradient ow and the TV-JKO scheme[END_REF]. Model bias. In Part 2, we argued that the class of sparse objects associated to the total variation are M -simple functions. Regularizing an inverse problem with the total variation will hence often produce M -simple reconstructions. In practical applications, the sought-after image might be approximately, but not exactly piecewise constant. An interesting question would hence be to investigate what class of functions can be well approximated with s-sparse M -simple functions, and how this approximation behaves when s grows. This is related to the notion of model bias, i.e. to the error one makes by assuming the unknown signal has a speci c structure. Such an analysis has been conducted for sparse vectors (see for instance [START_REF] Foucart | A Mathematical Introduction to Compressive Sensing[END_REF], Section 2.1]). A related topic is also the study of nonlinear wavelet approximation (see [Mallat, 2008, Section 9.2]), and more speci cally of how the approximation error decreases when the number of coe cients grows. N In Part 3, we proposed an algorithm for solving (P λ (y)), which is based on the conditional gradient (or Frank Wolfe) algorithm. It iteratively constructs a sequence of iterates that provably converges to a solution of (P λ (y)). Each iterate is a linear combination of the atoms promoted by the total variation (i.e. indicator functions of simple sets). Then, we discussed the implementation of this theoretical algorithm, and proposed to numerically represent the above-mentioned atoms by indicator functions of simple polygons. We nally assessed the performance of our approach on a few recovery examples, providing comparisons with traditional grid-based methods. Convergence in nite time. In [Denoyelle et al., 2019, Theorem 3], a nite time convergence result is proved for the so-called sliding Frank-Wolfe algorithm. It holds under the assumption that the dual certi cate associated to the noisy problem is non-degenerate. A crucial ingredient of the proof is that a critical point of the objective is reached at the end of each sliding step. As explained above, we cannot guarantee this in our setting. 
Still, assuming this is true, it would be interesting to investigate whether the proof of [START_REF] Denoyelle | The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy[END_REF] can be adapted to Algorithm 3. Topology changes. In Section 1.4, we presented the so-called sliding step (in the spirit of [START_REF] Denoyelle | The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy[END_REF]). We formally described the associated gradient ow, but its study (existence, uniqueness) seems challenging. In Section 3.1 we introduced a possible way to mimic this evolution numerically. We explained that this could lead to the apparition of singularities, and that we could not handle them consistently with our approach. Even if one can circumvent this issue by stopping the evolution before the appearance of a singularity (which does not break any convergence guarantee), it seems preferable to allow them, in order to yield iterates that are as sparse as possible. This topic is related to the more global issue of handling topology changes with purely Lagrangian methods. In the shape optimization community, Eulerian methods (such as the level set method) are preferred to achieve this, as they can treat complicated deformations of a shape in a very robust way [START_REF] Allaire | Chapter 1 -Shape and topology optimization[END_REF]. In a recent work [Lévy, 2022], Bruno Lévy proposed a new Lagrangian free-surface mesh representation for uid simulation. He also mentioned these ideas could possibly be used for shape optimization. It would be interesting to investigate whether these tools can allow for a robust implementation of the sliding step. Approximation of maximal elements of exposed faces. We mentioned above that the construction of pre-certi cates is highly important to derive identi ability and noise robustness guarantees. Given a pre-certi cate, one can numerically ensure it is a true certi cate by solving the associated Cheeger problem, and showing its value is (approximately) one. If this is the case, it is desirable to nd which functions are associated to this certi cate, i.e. to obtain a complete description of the face it exposes. One can always nd an element of this face by solving the associated Cheeger problem, but how to approximate the complete set of maximizers seems challenging. We argue that a possible way to achieve this would be to adapt the ideas of [START_REF] Buttazzo | On the selection of maximal Cheeger sets[END_REF], Carlier et al., 2009]. These works propose to introduce a small strictly convex penalization of the objective, in order to select some "maximal" solution of the Cheeger problem. Their setting however slightly di ers from ours (the weight functions they deal with are non-negative), and nding out whether their techniques can be adapted to our case would be interesting. Practical applications. To conclude this part, let us discuss how the work presented here can be relevant for practical applications. The experiments conducted in Section 3.2 suggest that traditional grid-based methods perform better on complex natural images, while our approach is particularly well-suited for reconstructing simple piecewise constant ones. It is hence natural to wonder in which situations one would be interested in the recovery of such signals, and in obtaining a continuous domain representation of them. We mentioned in Section 1.1.1 that this could potentially be the case in cell imaging (see Figure 2). 
However, it seems natural that the choice of appropriate measurement operators and the relevance of this approach should extensively be discussed with practitioners. A. S In this section, we collect a few useful de nitions and properties related to sets of nite perimeter. We refer the reader to e.g. [Maggi, 2012] for further details, and to [START_REF] Ambrosio | Connected components of sets of nite perimeter and applications to image processing[END_REF] for more information concerning M -connected components and related notions. In particular, it does not have isolated points. In the following, we always consider such a representative and consequently obtain Supp(D1 E ) = ∂ * E = ∂E . Distributional curvature. If E ⊂ R 2 is a set of nite perimeter, then the distributional curvature vector of E is H E : C ∞ c (R 2 , R 2 ) → R de ned by ∀T ∈ C ∞ c (R 2 , R 2 ), H E , T = ˆ∂ * E div E T dH 1 , where div E T denotes the tangential divergence of T on E given by div E T = div T -(DT ν E ) • ν E , and DT denotes the di erential of T . A set E is said to have locally integrable distributional curvature if there exists a function H E ∈ L 1 loc (∂ * E; H 1 ) such that H E = H E ν E H 1 ∂ * E . For instance, if E is an open set with C 2 boundary, it has a locally summable distributional curvature which is given by the (classical) scalar mean curvature. M -connected components, holes, exterior. If E ⊂ R 2 has nite perimeter it can be decomposed (up to Lebesgue negligible sets) into an at most countable union of pairwise disjoint indecomposable sets of positive measure, i.e. B. S : In this section, we give some de nitions and facts about smooth sets. Although these notions are standard (most of them are for example discussed in [START_REF] Delfour | Shapes and Geometries: Metrics, Analysis, Di erential Calculus, and Optimization[END_REF]), we recall them here to keep the exposition (mostly) self-contained. Smooth sets. We say that a set is smooth if it is locally the hypograph of a smooth function. If r > 0, x ∈ R 2 and ν ∈ S 1 , we denote by C(x 0 , r, ν) the square with axis ν and side r centered at x 0 , de ned as follows: C(x 0 , r, ν) def. = x 0 + x ∈ R 2 | x, ν | < r, |x -x, ν ν| < r . ( 60 ) With this notation, we have C(0, r, e 2 ) = (-r, r) 2 . We also denote by R ν the rotation which maps e 2 to ν (and hence (-r, r) 2 to C(0, r, ν)). Given a mapping u : (-r, r) → R we de ne graph(u) def. = {(z, u(z)), z ∈ (-r, r)} , hypograph(u) def. = {(z, t) | z ∈ (-r, r), -r < t < u(z)} . De nition B.1 Let k ≥ 1. A set E ⊂ R 2 is said to be of class C k if for every x ∈ ∂E there exists r > 0, ν ∈ S 1 and u ∈ C k ([-r, r]) such that R -1 ν (∂Ex) ∩ (-r, r) 2 = graph(u) , R -1 ν (int Ex) ∩ (-r, r) 2 = hypograph(u) . As stated in [Delfour and Zolesio, 2011, Theorem 5.2], if E is bounded, the compactness of ∂E allows to take r in De nition B.1 independent of x, and the family (u x ) x∈∂E uniformly bounded in C k with equicontinuous k-th derivative. Convergence of smooth sets. Now, we turn to the de nition of C k convergence for smooth sets. De nition B.2 Let E be a bounded set of class C k with k ≥ 1. 
We say that (E n ) n≥0 converges to E in C k if there exists r > 0 and n 0 ∈ N such that • for every n ≥ n 0 we have ∂E n ⊂ x∈∂E C(x, r, ν E (x)) • for every n ≥ n 0 and x ∈ ∂E there exists u n,x ∈ C k ([-r, r]) such that: x) (int E nx) ∩ (-r, r) 2 = hypograph(u n,x ) R -1 ν E (x) (∂E n -x) ∩ (-r, r) 2 = graph(u n,x ) R -1 ν E ( • denoting (u x ) x∈∂E functions satisfying R -1 ν E (x) (∂E -x) ∩ (-r, r) 2 = graph(u x ) R -1 ν E (x) (int E -x) ∩ (-r, r) 2 = hypograph(u x ) we have lim In Section 2.2, we make use of the following result, which states that, if a sequence of sets converges in C k , the boundary of its terms can eventually be written as a normal deformation of the boundary of the limit set. A related statement can be found for instance in [Acerbi et al., 2013, Theorem 4.2]. The authors of this last work refer to [Almgren, 1975], which, according to them, essentially contains the result. We provide a detailed proof below for the sake of completeness. Proposition B.3 If (E n ) n≥0 converges to a bounded set E in C k with k ≥ 2, then for n large enough there exists a mapping ϕ n ∈ C k-1 (∂E) such that ∂E n = (Id + ϕ n ν E )(∂E), and ϕ n C k-1 (∂E) → 0. Proof : In all the following, we denote by ν (instead of ν E ) the outward unit normal to E. If u ∈ C k (Ω) and δ > 0, we also use the notation B C k (u, δ) def. = v ∈ C k (Ω) u -v C k (Ω) < δ . We divide the proof in two main steps. Step 1: we rst prove that, for every x ∈ ∂E, there exists n 0 ∈ N and two open neighborhoods U, V of x such that, for every n ≥ n 0 , there exists ϕ n : ∂E ∩ U → R of class C k-1 with ∂E n ∩ V = (Id + ϕ n ν)(∂E ∩ U ) and ϕ n C k-1 (∂E∩U ) → 0. By the C k convergence of (E n ) n≥0 towards E, we can x x ∈ ∂E, r > 0 and n 0 ∈ N such that, in C(x, r, ν E (x)), the set E is the hypograph of u ∈ C k ([-r, r]) and E n is the hypograph of u n ∈ C k ([-r, r]) for every n ≥ n 0 , with u nu C k ([-r,r]) → 0. By an abuse of notation, the normal ν is given in local coordinates by ∀z ∈ (-r, r), ν(z, u(z)) def. = 1 1 + u (z) 2 (-u (z), 1) . Finally, de ning U def. = (-δ 2 , δ 2 ) × (-r, r), U def. = x + R ν E (x) U , V def. = x + R ν E (x) V , ϕ v : y → ψ v ((R -1 ν E (x) (yx)) 1 ) , and taking n 0 large enough to have u n ∈ B C k (u, δ 1 ) for every n ≥ n 0 , we obtain ∂E n ∩ V = (Id + ϕ n ν)(∂E ∩ U ) and ϕ n C k-1 (∂E∩U ) → 0 with ϕ n def. = ϕ un . Step 2: from now on, we make the dependencies on x explicit. We know from step 1 that for every x ∈ ∂E there exists n 0 (x) ∈ N, a mapping ϕ n,x : ∂E ∩ U x → R, and two open neighborhoods U x , V x of x such that, for every n ≥ n 0 (x) ∂E n ∩ V x = (Id + ϕ n,x ν)(∂E ∩ U x ) . Moreover, ϕ n,x C k-1 (∂E∩Ux) → 0. Let us rst show that if y ∈ ∂E ∩ U x1 ∩ U x2 and n ≥ max(n 0 (x 1 ), n 0 (x 2 )) then Now, we cover ∂E by x∈∂E U x ∩ V x and obtain the existence of a nite set I such that E ⊂ i∈I U xi ∩ V xi . For every n ≥ max i∈I n 0 (x i ) we de ne ϕ n globally using the results above, and obtain ϕ n C k-1 (∂E) ≤ max i∈I ϕ n,xi C k-1 (∂E∩Ux i ) → 0 . By construction, we also have (Id + ϕ n ν)(∂E) = ∂E n , which concludes the proof. Normal deformations of smooth sets. Considering the result of Proposition B.3, one could wonder if, given a su ciently smooth ϕ : ∂E → R in a neighborhood of 0, there is a unique smooth set F satisfying ∂F = (Id + ϕ ν E )(∂E) . This is indeed the case, as shown below. 
Lemma B.4 If E is a bounded set of class C k (k ≥ 2), then there exists C > 0 such that, for every ϕ in C k-1 (∂E), the mapping ϕ ν E can be extended to ξ ϕ ∈ C k-1 (R 2 , R 2 ) with ξ ϕ C k-1 (R 2 ,R 2 ) ≤ C ϕ C k-1 (∂E) . Proof : We use the arguments of [Dambrine and Lamboley, 2019, Section 4]. Since E is bounded and of class C k , there exists an open bounded set Ω ⊃ ∂E of class C 1 , such that the projection π ∂E on ∂E belongs to C k-1 (Ω). For every ϕ ∈ C k-1 (∂E), one can then de ne ξ ϕ = (ϕ ν E ) • π ∂E to obtain an extension of ϕ ν E which belongs to C k-1 (Ω). Using Faà di Bruno's formula, one obtains the existence of C > 0 such that ∀ϕ ∈ C k-1 (∂E), ξ ϕ C k-1 (Ω,R 2 ) ≤ C ϕ C k-1 (∂E) . One can then use an extension theorem (see for instance [Stein, 1970, Chapter 6, Theorem 5]) to obtain the result with Ω = R 2 , up to a modi cation of C. Moreover, there exists an extension ξ ϕ of ϕ ν E such that E ϕ = (Id + ξ ϕ )(E) and ξ ϕ C k-1 (R 2 ,R 2 ) < 1 . In particular, E ϕ is C k-1 -di eomorphic to E. Proof : Let us rst prove the existence of E ϕ . Let C be as in Lemma B.4. If c < 1/C, then for every ϕ such that ϕ C k-1 (∂E) ≤ c, we have ξ ϕ C k-1 (R 2 , R 2 ) < 1. As a result, we obtain that f ϕ def. = Id + ξ ϕ is a C k-1 di eomorphism. We hence have that E ϕ If E and c are as in Proposition B.5, for every ≤ c, we say that a set F is a C k-1 -normal deformation of E of size at most if there exists ϕ such that ϕ C k-1 (∂E) ≤ and F = E ϕ . Likewise, if E is such that E c satis es the assumptions of Proposition B.5, we use the same terminology for F if F c = (E c ) ϕ . C. C The aim of this section is to prove the following result, which, loosely speaking, states that sets of nite perimeter and nite measure are characterized by their measure-theoretic boundary. In all the following d ≥ 2 is a xed integer and H s denotes the s-dimensional Hausdor measure on R d . Although this result is certainly well-known, we were not able to nd it clearly stated or used, and hence decided to include a proof for the sake of completeness. To prove it, we rely on a tool introduced in [Ambrosio et al., 2001, Section 7]: the topographic function (see Figure 23 for an illustration). To be more precise, if E and F are as in Proposition C.6 and u and v denote their topographic function, we prove that u = v almost everywhere, which, since 1 E = u mod 2 and 1 F = v mod 2, yields the result. This is done by induction, i.e. by proving that {u = k} = {v = k} (mod H d ) for every k ∈ N. Before diving into the proof, we state a few useful lemmas. Lemma C.7 Let E, F ⊆ R d be of nite perimeter. If E is indecomposable and H d-1 (∂ M F ∩ EM ) = 0 then we have E ⊆ F (mod H d ) or E ⊆ F c (mod H d ). Proof : Since supp(D1 F ) = ∂ M F (mod H d-1 ) we have |D1 F |( EM ) = 0. The indecomposability of E hence yields, using a result of [START_REF] Dolzmann | Microstructures with nite surface energy: The two-well problem[END_REF], that 1 F is a.e. equal to 0 on E, or a.e. equal to 1 on E, hence the result. In order to initialize our proof by induction, we need to prove the equality of the zero level set of the topographic functions. From the construction of [Ambrosio et al., 2001, Theorem 6] it is clear that, if u is the topographic function of a set of nite perimeter E with nite measure, then ext(E) = {u = 0}. We can hence conclude using the following lemma. The rst term converges to 0. The second converges to k assuming x ∈ {u ≤ k} (1/2) (which is not restrictive since H d-1 (∂ M {u ≤ k} \ {u ≤ k} (1/2) ) = 0). 
To show that the last term converges to 0, we write The rst term converges to 0 assuming x ∈ ∂ * {u ≤ k}. Again, this is not restrictive since H d-1 (∂ M {u ≤ k} \ ∂ * {u ≤ k}) = 0 . By [Ambrosio et al., 2000, Lemma 3.75] the second term is bounded for H d-1 -a.e. Proof : We already know that {u = 0} = {v = 0}. Let us prove by induction that {u = k} = {v = k} for every k ∈ N. We x k ≥ 1 and assume that {u = l} = {v = l} for every l ≤ k. Let A ∈ CC M ({u = k + 1}). Since A is indecomposable, u is constant on A and |Du| = H d-1 ∂ M E = H d-1 ∂ M F = |Dv| , we obtain that v is a.e. constant on A. The construction of [Ambrosio et al., 2001, Theorem 6] also yields H d-1 (∂ M {v ≤ k} ∩ ∂ M A) = H d-1 (∂ M {u ≤ k} ∩ ∂ M A) > 0 . Using Lemma C.9, we obtain that v = k + 1 a.e. on A, and hence A ⊆ {v = k + 1} (mod H d ). Since this holds for every A ∈ CC M ({u = k + 1)}) we get {u = k + 1} ⊆ {v = k + 1} (mod H d ). Exchanging E and F yields {v = k + 1} ⊆ {u = k + 1} (mod H d ), and we can nally conclude. D. C The aim of this short section is to prove the following elementary result. (t) . This implies the pointwise convergence of f almost everywhere, but only up to the extraction of a subsequence. Lifting this requirement is in fact possible, as shown below. Proof : Let t ≥ 0. We rst prove that |{u n ≥ t} \ {u ≥ t}| → 0. For every h > 0 we have: for almost every t ≤ 0 follows from the same arguments. u n -u L 1 ≥ ˆt t E. P L 3.2 We now give the proof of Lemma 3.2, whose content is recalled below. = {u ∈ L 2 (R 2 ) | TV(u) ≤ t ≤ t * } are: • (0, 0) , • ( t * 1 E /P (E), t * ) with ∈ {-1, 1}, E ⊂ R 2 simple and 0 < |E| < +∞ . Proof : Let (u, t) ∈ ext(C). 1. If t = 0 then u = 0. 2. Otherwise we have (u, t) = (1t/t * )(0, 0) + (t/t * )(t * u/t, t * ) , which yields t = t * . Considering that, for any v such that TV(v) ≤ t * , we have: (0, t * ) = 1 2 (-v, t * ) + 1 2 (v, t * ) , we also obtain TV(u) > 0. Now, since (u, t * ) = (1 -TV(u)/t * )(0, t * ) + (TV(u)/t * )(t * u/TV(u), t * ) , we get TV(u) = t * . Finally, let α ∈ (0, 1) and u 1 , u 2 ∈ L 2 (R 2 ) with TV(u 1 ) ≤ t * and TV(u 2 ) ≤ t * . If u = (1α)u 1 + αu 2 then writing (u, t * ) = (1α)(u 1 , t * ) + α(u 2 , t * ) we obtain u = u 1 = u 2 , which shows u ∈ ext({u ∈ L 2 (R 2 ) | TV(u) ≤ t * }). This shows that u = t * 1 E /P (E) for some simple set E such that 0 < |E| < +∞ and ∈ {-1, 1}. 3B Exact support recovery 3.1 Stability of the dimension of the faces of the total variation unit ball . . . . . . . 3.2 Main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 A su cient condition for strict stability . . . . . . . . . . . . . . . . . . . . . . . 3.4 Computing the minimal norm certi cate . . . . . . . . . . . . . . . . . . . . . . -Wolfe algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Proposed algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Convergence results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Sliding step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Numerical results 3.1 Local minimization of the objective . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Recovery examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Topology changes during the sliding step . . . . . . . . . . . . . . . . . . . . . . 
B Smooth sets: definition and convergence
C Characterization of sets from their boundary
D Convergence of the level sets
E Proof of Lemma 3.2

Notation. Function spaces. Let m, n ∈ N* and Ω ⊂ R^m be an open set. For any k ∈ N, we denote by C^k(Ω, R^n) (C^k(Ω) if n = 1) the set of R^n-valued functions of class C^k on Ω, and by C^k_b

1.1 Inverse problems in imaging
1.2 Variational regularization
1.3 Grid-free sparse recovery
2 Total variation regularization
2.1 Presentation and preliminaries
2.2 Noise robustness results
2.3 Numerical methods

Figure 1 - Examples of microtubule images.
Figure 5 - Comparison of grid-based and grid-free approaches for representing simple images.

1 Piecewise constant functions
1.1 Exposed faces of the total variation unit ball
1.2 Chains associated to a piecewise constant function
2 The prescribed curvature problem
2.1 Generalities and first convergence result
2.2 Stability result
3 Exact support recovery
3.1 Stability of the dimension of the faces of the total variation unit ball
3.2 Main result
3.3 A sufficient condition for strict stability
3.4 Computing the minimal norm certificate

Figure 8 - Illustration of the result stated in Theorem 2.27. Here u_0 is equal to 0 in A and B. The values taken by u_{λ,w} in A_{λ,w} and B_{λ,w} are close (but not necessarily equal) to 0.
Figure 9 - Graphs of R_0 → α(R_0) for the Gaussian and ideal low-pass filters, for several values of the variance and cut-off frequency.
Figure 10 - Plots of Φ*f_E and Φ*g_E defined in (27) for E = 1_{B(0,1)} and Φ the convolution with the Gaussian kernel with variance σ = 0.2.
Figure 11 - Graph of f_v defined in (29) (left: global graph, right: zoom around 1).
Figure 12 - Graph of ∂η_v/∂r(1) as a function of σ.
Figure 14 - Steiner symmetrization of triangles.
Figure 15 - Steiner symmetrization of quadrilaterals.
Figure 16 - From left to right: observations, unknown function, output of Algorithm 3, outputs of the fixed grid method using the isotropic and Condat's total variation.
Figure 19 - Left: unknown function, middle: observations, right: output of Algorithm 3 for different values of λ.
Figure 21 - Topology change experiment. (a): unknown signal, (b): observations, (c): weighted Cheeger set, (d,e,f,g): sliding step iterations (with splitting), (h): final function.

A Sets of finite perimeter
B Smooth sets: definition and convergence
C Characterization of sets from their boundary
D Convergence of the level sets
E Proof of Lemma 3.2

Reduced boundary. The reduced boundary ∂*E of a set of finite perimeter E is defined as the set of points x ∈ Supp(|D1_E|) at which ν_E(x) = lim_{r→0+} D1_E(B(x, r)) / |D1_E|(B(x, r)) exists and satisfies |ν_E(x)| = 1.

Choice of representative.
From[Giusti, 1984, Proposition 3.1], we know that if E has nite perimeter, there exists a Lebesgue representative of E with the property that ∀x ∈ ∂E, 0 < |E ∩ B(x, r)| < |B(x, r)| . Figure 22 - 22 Figure 22 -Illustration of De nition B.1. xu x C k ([-r,r]) = 0 ϕ n,x1 (y) = ϕ n,x2 (y) .We assume without loss of generality that|ϕ n,x1 (y)| ≤ |ϕ n,x2 (y)|. From step 1, we know that ϕ n,x2 (y) is the unique t ∈ (-γ x2 , γ x2 ) such that y + t ν(y) ∈ ∂E n ∩ C(x 2 , δ 2 , ν(x 2 )).But by de nition y + ϕ n,x1 (y) ν(y) ∈ ∂E n , and since |ϕ n,x1 (y)| ≤ |ϕ n,x2 (y)| we also have that ϕ n,x1 (y) ∈ (-γ x2 , γ x2 ) and y + ϕ n,x1 (y) ν(y) ∈ C(x 2 , δ 2 , ν(x 2 )). We therefore obtain that ϕ n,x1 (y) = ϕ n,x2 (y). Proposition B. 5 5 Let E be a bounded open set of class C k (with k ≥ 2). There exists c > 0 such that, for every ϕ ∈ C k-1 (∂E) with ϕ C k-1 (∂E) ≤ c, there is a unique bounded open set of class C k-1 , denoted E ϕ , satisfying ∂E ϕ = (Id + ϕ ν E )(∂E) . = f ϕ (E) is a bounded open set of class C k-1 and satis es ∂E ϕ = f ϕ (∂E) = (Id + ϕ ν E )(∂E). Let us now turn to uniqueness. If F and G are two bounded open sets of class C k-1 satisfying (63), in particular we have that ∂F = ∂G. Since F and G are su ciently smooth and bounded, they have nite perimeter and ∂ M F = ∂F = ∂G = ∂ M G. Using Proposition C.6 below (whose proof is the object of Appendix C), we obtain that F = G up to a Lebesgue negligible set. Using that F and G are su ciently smooth open sets, this nally yields F = G. Figure 23 - 23 Figure 23 -The topographic function of a set E (gray area). with B ± (x, r, ν(x)) = {y ∈ B(x, r) | ± yx, ν > 0}. Since |Du| = H d-1 ∂ M E we also have |a -b| = 1. Let us show that a ≤ k and b ≥ k + 1. We have: B -(x,r,ν(x)) -1 {u≤k} ) ≤ B -(x,r,ν(x)) |u -a| + 2 k |{u ≤ k} ∩ B(x, r)| |B(x, r)| + B(x,r)u (1 B -(x,r,ν(x)) -1 {u≤k} ) . B (x,r) u (1 B -(x,r,ν(x)) -1 {u≤k} ) ≤ |B -(x, r, ν(x)) \ {u ≤ k}| 1 d |B(x, r)| ˆB(x,r) |u| d/(d-1) d-1 d = |B -(x, r, ν(x)) \ {u ≤ k}| |B(x, r)| x. We hence obtain a ≤ k. By using the same arguments, one can show b ≥ k + 1 and we nally get a = k and b = k + 1.Proposition C.10Let E, F ⊂ R d be sets of nite perimeter with nite measure and u, v their topographic functions.If ∂ M E = ∂ M F (mod H d-1) then u = v almost everywhere. Lemma 3. 2 2 If t * > 0, the extreme points of C def. A function u has nite total variation if and only if its distributional gradient, denoted Du, is a nite Radon measure. In this case one has TV(u) = |Du|(R 2 ). Let us now provide more intuition on this functional by considering two classical examples. If u has integrable gradient one has TV(u) = ´R2 |∇u|. If E is a set of class C 1 (see Appendix B for a precise de nition) by the Gauss-Green theorem ´E and hence get the inequality for all k. Using that |U Figure 13 -The sets E and F have the weak regularity property (point 4 of Lemma 1.11), but E ∪ F does not, since the density ratio of x with respect to E ∪ F converges to 1. satis es point 4 of Lemma 1.11 (a property that one would crucially need to mimic the proof of Corollary 3.8). The level sets U E x F Convergence of the boundaries of the level sets ? Considering Corollary 3.8, it is natural to wonder whether lim sup n→+∞ ∂U k ⊆ ∂U (t) * (t) holds, which would yield the convergence of (∂U k ) k≥0 towards ∂U (t) * . However, it is unclear (t) whether U (t) (t) get a contradiction. 
(t) k U * | → 0, we (t) k a set of nite perimeter such that 0 < |E| < +∞ and ν ∈ S 1 we have: ´E η P (E) ≤ ´Es ν P (E s η ν ) , with equality if and only if |E E s ν | = 0. Hence if E * solves (41), arguing as in Proposition D.1If (Ω, Σ, |.|) is a measure space and u nu L 1 (Ω) → 0 then for almost every t ∈ R we have:Before giving its proof, let us stress that this property is certainly well-known, but we could not nd it clearly stated or proved. In fact, using Fubini's theorem, one could directly obtain and hence prove the L 1 convergence of f : t → U lim n→+∞ U (t) n U (t) → 0 . (65) lim n→+∞ ˆ+∞ 0 U (t) n U (t) dt = lim n→+∞ ˆR2 |u n -u| = 0 , (t) n U -h} \ {u ≥ t} = ∅, we obtain that the right-hand side in (66) converges to 0 when h goes to 0, and therefore that:lim n→+∞ |{u n ≥ t} \ {u ≥ t}| = 0 .Let us now prove that |{u ≥ t} \ {u n ≥ t}| → 0. For every h > 0 it holds that:u nu L 1 ≥ Using that |B \ A| ≤ |C \ A| + |B \ C| for every triple of sets (A, B, C) we get: |{u ≥ t} \ {u n ≥ t}| ≤ |{u ≥ t + h} \ {u n ≥ t}| + |{u ≥ t} \ {u ≥ t + h}| ≤ 1 h u nu L 1 + |{u ≥ t} \ {u ≥ t + h}| , t} \ {u n ≥ t}| ≤ |{u ≥ t} \ {u ≥ t + h}| ,(67)Since {u ≥ t} \ {u ≥ t + h} decreases as h decreases andh≥0 {u ≥ t} \ {u ≥ t + h} = {u = t},we obtain that, for almost every t ≥ 0, the right-hand side in (67) converges to 0 and therefore that:lim n→+∞ |{u ≥ t} \ {u n ≥ t}| = 0 .We hence nally have |{u n ≥ t} {u ≥ t}| → 0 for almost every t ≥ 0. The fact|{u n ≤ t} {u ≤ t}| → 0 and hence lim sup which yields lim sup ˆt+h t ˆt+h |{u n ≥ s} {u ≥ s}| ds ≥ t ˆt+h |{u ≥ s} \ {u n ≥ s}| ds ≥ -h |{u n ≥ s} {u ≥ s}| ds ≥ ˆt t-h |{u n ≥ s} \ {u ≥ s}| ds ≥ ˆt t-h |{u n ≥ t} \ {u ≥ t -h}| ds = h |{u n ≥ t} \ {u ≥ t -h}| . Using that |A \ B| ≤ |A \ C| + |C \ B| for every triple of sets (A, B, C) we get: |{u n ≥ t} \ {u ≥ t}| ≤ |{u n ≥ t} \ {u ≥ t -h}| + |{u ≥ t -h} \ {u ≥ t}| . We hence obtain: |{u n ≥ t} \ {u ≥ t}| ≤ 1 h u nu L 1 + |{u ≥ t -h} \ {u ≥ t}| , n→+∞ |{u n ≥ t} \ {u ≥ t}| ≤ |{u ≥ t -h} \ {u ≥ t}| . (66) Since {u ≥ t -h} \ {u ≥ t} decreases as h decreases and h≥0 {u ≥ t t |{u ≥ t + h} \ {u n ≥ t}| ds = h |{u ≥ t + h} \ {u n ≥ t}| . n→+∞ |{u ≥ Our presentation is greatly inspired by[Peyré, 2021, Chapter 8]. We refer the reader to this book draft and the companion website for more details on imaging. Let us also mention the reference book[START_REF] Scherzer | Variational Methods in Imaging[END_REF].2 In many situations of interest, the assumption that Φ is linear is unrealistic. Still, the class of linear inverse problems encompass a large variety of practically relevant applications. We stress that the techniques used to study nonlinear inverse problems signi cantly di er from those we present in this manuscript. This is the so-called synthesis approach. Another one is the analysis approach (see for instance[Elad et al., 2007]and[März et al., 2022, Section 1.2.2]). See Section 2.1.1 for a precise de nition of the total variation and simple sets. The name BLASSO stands for Beurling LASSO, and was coined in[De Castro and Gamboa, 2012]. We provide more information on this problem in Section 1.3.2 below. This inequality is for example proved in[Ambrosio et al., 2000, Theorem 3.47] by using (2) and the coarea formula (see e.g.[Maggi, 2012, Chapter 13]). In this reference u is assumed to be integrable, but the constant m appearing therein is also zero for any p-integrable function with 1 ≤ p < +∞. This notion is known under the name of inseparable set in the literature on submodular functions (see e.g.[Bach, 2013]). 
This is for instance proved in[Chambolle et al., 2016, proof of Proposition 8]. We stress that this set only depends on η0, and hence on y0, rather than on u0. One could also wonder how to numerically solve (P0(y0)). As this is less relevant for practical applications, we choose to not cover this question here. To see this, one simply needs to use Proposition 1.3 with Lemma 1.11. As underlined by the authors, this consistency is however a bit weak, as it does not provide convergence rates or error bounds. This property is for example also satis ed by the isotropic discretization. We also stress, as in[Duval, 2022, Section 2.6.2], that the total variation is a particular case of submodular function. The faces of the unit ball de ned by such functions have been studied in the monographs[Fujishige, 2005, Bach, 2013]. Meaning that |{u = ti}| > 0 for all i ∈ {1, ..., m} and, for almost every x ∈ R , we have u(x) ∈ {ti} m i=1 . This mapping allows to study the behaviour of the objective in a neighborhood of E with respect to general deformations, while jE is only related to normal deformations. Here E k denotes the set E de ned in (16) associated to F k . We recall that Line 11 amounts to solve a LASSO-type problem for which e cient solvers exist. Indeed, (31) and (41) are equivalent in the sense that (E, ) solves (31) if and only if E solves (41) and is the opposite of the sign of ´E η. To be more precise, we mean that there exists a compact set K ⊂ Ω such that ´R2 \K η is su ciently small. We stress that, although we do not follow this approach, nding some maximal solution of (42) and adding several atoms at each iteration could potentially be interesting. This would lead to what could be called a "polyatomic" Frank-Wolfe algorithm (see[Jarret et al., 2022] and the related approach of[START_REF] Flinth | On the linear convergence rates of exchange and continuous methods for total variation minimization[END_REF]). 2 Indeed, if u solves (42), then there exists α such that αη ∈ ∂TV(u), and the result follows from Proposition 1.3 and Lemma 1.11 We already introduced this tool in Part 2, where we only considered normal deformations (in contrast with the general deformations we use here). Here critical point is to be understood in the sense that the limit appearing in (53) is equal to zero for every θ. In particular, this implies that η is essentially bounded. This only occurs when λ is small enough. For higher values of λ, the output is similar to (d) or (e). Existence of a maximizing simple n-gon. Let us now show there exists a maximizer which belongs to X n , i.e. that de nes a simple polygon. To do so, we rely on the following lemma. Loosely speaking, it states that starting from a non-simple polygon, one can always construct another polygon with strictly fewer vertices achieving a higher objective value. Lemma 3.12 Let m ≥ 3 and x ∈ Y m \ X m . Then there exists m with 2 ≤ m < m and y ∈ Y m such that J(x) ≤ J(y) . Proof : x 1 ] is not simple. If there exists i with x i = x i+1 then y = (x 1 , ..., x i , x i+2 , ..., x m ) is suitable, and likewise if x 1 = x m then y = (x 1 , ..., x m-1 ) is suitable. Otherwise we distinguish the following cases: If there exists i < j with x i = x j : we de ne y = (x 1 , ..., x i , x j+1 , ..., x m ) ∈ R m-(j-i) , z = (x i , x i+1 , ..., x j-1 ) ∈ R j-i . We notice that 2 ≤ ji < m and 2 ≤ m -(ji) < m. If there exists i < j with x i ∈]x j , x j+1 [: we necessarily have (i, j) = (1, m). 
We de ne y = (x 1 , ..., x i , x j+1 , ..., x m ) ∈ R m-(j-i) , z = (x i , x i+1 , ..., x j ) ∈ R j-i+1 . We again have 2 ≤ m -(ji) < m, and since (i, j) = (1, m), we have ji < m -1 which shows that 2 ≤ ji + 1 < m. If there exists i < j with x j ∈]x i , x i+1 [: we necessarily have j > i + 1. We de ne We again have 2 ≤ ji < m, and since If there exists i < j with [ and in both cases we fall back on the previously treated cases. The same holds if (i, j) = (1, m). Otherwise, we de ne u [1] (before sliding) Supp(Du [1] ), Supp(Du 0 ) u [2] (before sliding) Supp(Du [2] ), Supp(Du 0 ) u [3] (before sliding) Supp(Du [3] ), Supp(Du 0 ) Figure 17 -Unfolding of Algorithm 3 for the rst experiment (u [k] denotes the k-th iterate) Figure 18 -From left to right: unknown function, observations, outputs of the xed grid method using the isotropic and Condat's total variation, output of Algorithm 3, gradients support (red: output of Algorithm 3, black: unknown) P 4 C We end this manuscript by presenting natural avenues for future research. We rst discuss those related to theoretical recovery guarantees, before turning to numerical methods. R Let us start by recalling our contributions to the theoretical analysis of total variation regulatization. In Part 2, we described the structure of the exposed faces of the total variation unit ball, and introduced the class of M -simple functions, which are the sparse objects naturally associated to this regularizer. We also considered solutions of the prescribed curvature problem, and studied their stability under variations of the associated curvature functional. Finally, we introduced a condition under which the jump set of an M -simple function can be exactly recovered in a low noise regime. Su cient identi ability conditions. In our opinion, an important question which remains mostly open is that of identi ability. Given a function u 0 and a measurement operator Φ, being able to provide su cient conditions ensuring u 0 is the unique solution of (P 0 (y 0 )) is highly desirable. As already mentioned, the only related work we are aware of is [START_REF] Bredies | A perfect reconstruction property for PDE-constrained total-variation minimization with application in Quantitative Susceptibility Mapping[END_REF]. A major di erence with the sparse spikes setting is that even the identi ability of a single atom, i.e. of u 0 = 1 E with E simple, seems di cult to study. Having a clear understanding of this problem could, subsequently, allow to answer the following question: if two atoms E, F are separately identi able and a, b ∈ R * , how does the relative position of E and F impact the identi ability of a 1 E + b 1 F ? Pre-certi cates. Considering the literature on sparse spikes recovery, the construction of pre-certi cates (i.e. good candidates for being a dual certi cate, possibly of minimial norm) is highly important for studying both identi ability and noise robustness. We have discussed in Section 3.4.2 possible ways to construct such objects in a simpli ed setting. This investigation should however be furthered. In particular, the pre-certi cate we described only uses the rst order optimality condition "in average", and other constraints could hence be considered. To We notice that if (z, t) ∈ (-r, r)×R and v : (-r, r) → R, then (z, u(z))+t ν(z, u(z)) ∈ graph(v) if and only if = r/2, then (61) holds and f (v, z, t) is hence well de ned. We claim that f is of class To see this, one should rst notice that the two mappings . 
Moreover, we have f (u, z, 0) = 0 for every z ∈ (-r , r ), and in particular f (u, 0, 0) = 0. Using that u (0) = 0, we also get ∂ t f (u, 0, 0) = 1 = 0. From the implicit function theorem, we hence obtain the existence of δ 1 > 0, of δ 2 ∈ (0, r ), of an open neighborhood W ⊂ (-r , r ) of 0, and of a mapping ψ : Since f (u, z, 0) = 0, we obtain ψ(u, z) = 0 for every z ∈ (-δ 2 , δ 2 ). Now, using the fact ψ is of class C k-1 , we obtain that all its derivatives up to order k -1 are uniformly continuous on every compact subset of B C k-1 (u, δ 1 ) × (-δ 2 , δ 2 ). By the Ascoli-Arzelá theorem C k ([-r, r]) is compactly embedded into C k-1 ([-r, r]), and we hence have that, for every δ 2 < δ 2 , the family of functions Now, we de ne We have that F is of class C k-1 and DF (0, 0) = Id. There hence exists two open neighborhoods We take δ 2 < δ 2 small enough to have (-δ 2 , δ 2 ) ⊂ U 1 , and By construction, the set on the left-hand side is included in the one on the right-hand side. Now, if y ∈ graph(v) ∩ V , then denoting (z, t) def. = F -1 (y), we have that z ∈ (-δ 2 , δ 2 ). Moreover, we also have (v, z, t) ∈ B C k-1 (u, δ 1 ) × (-δ 2 , δ 2 ) × W and f (v, z, t), which shows t = ψ v (z), and hence (62). Lemma C.8 Assume E and F are two sets of nite measure and nite perimeter in R d . Then, if we have ext(E) = ext(F ) (mod H d ). Proof : We have: which is necessarily ext(F ) since ext(E) has in nite measure. We hence obtain Exchanging E and F and applying the same argument yields the result. Before proving the main result, we need a last lemma, which is Lemma C.9. In the following, if u has bounded variation, we denote by J u its approximate jump set and S u its approximate discontinuity set (see [Ambrosio et al., 2000, Sections 3.6 and 3.7]). Lemma C.9 Let E ⊆ R d be a set of nite perimeter, u its topographic function and k ∈ u(R 2 ). Then for H d-1 -almost every x ∈ ∂ M {u ≤ k} we have x ∈ J u and, denoting ν(x) the outward unit normal to {u ≤ k} at x, the following holds: u and assume the approximate limit ũ(x) of u at x satis es ũ(x) ≤ k. We hence have: Since the last term converges to 0 as r → 0 + , we obtain that x ∈ {u ≤ k} M . Likewise if ũ(x) > k we obtain that x ∈ ({u ≤ k} c ) M . This shows ∂ M {u ≤ k} ⊆ S u , and we nally obtain (64). Now, let x ∈ ∂ M {u ≤ k} ∩ J u (which, from our assumptions, is non-empty). We denote by ν the outward unit normal to {u ≤ k}, and have the existence of a = b such that Conversely, let (u, t), (u 1 , t 1 ) and (u 2 , t 2 ) be in C and α ∈ (0, 1) with 1. If (u, t) = (0, 0), since t 1 , t 2 are nonnegative we obtain t 1 = t 2 = 0 and therefore u 1 = u 2 = 0. Hence (0, 0) ∈ ext(C). 2. If (u, t) = ( t * 1 E /P (E), t * ) with ∈ {-1, 1} and E a simple set such that 0 < |E| < +∞, then since t 1 ≤ t * and t 2 ≤ t * we get t 1 = t 2 = t * . Moreover, using that we also obtain u 1 = u 2 = u. We therefore have ( t * 1 E /P (E), t * ) ∈ ext(C). M Problèmes inverses, variation totale, parcimonie. R On s'intéresse dans cette thèse à une famille de problèmes inverses, qui consistent à reconstruire une image à partir de mesures linéaires possiblement bruitées. On cherche à analyser les méthodes de reconstruction variationnelles utilisant un régulariseur spéci que, la variation totale (du gradient). Cette fonctionnelle est utilisée en imagerie depuis les travaux de Rudin Osher et Fatemi, menés en 1992 A This thesis is devoted to the recovery of piecewise constant images from noisy linear measurements. 
We aim at analyzing variational reconstruction methods based on total (gradient) variation regularization. The total variation functional has been extensively used in imaging since the 1990s. Its minimization is known to produce piecewise constant images, which hence have some kind of sparsity (they can be decomposed as the superposition of a few simple shapes). However, to our knowledge, the performance of this regularizer has not been extensively studied from a sparse recovery viewpoint. This thesis aims at bridging this gap. We first focus on noise robustness results. We assume that the sought-after image is sparse, and study the structure of reconstructions in a low noise regime: are they sparse, made of the same number of shapes, and are these shapes close to those appearing in the unknown image? We then turn to numerical methods for total variation regularization. Existing techniques rely on the introduction of a fixed spatial discretization, which often yields reconstruction artifacts such as anisotropy or blur. We propose an algorithm which does not suffer from this grid bias, and produces a sparse representation of the reconstructed image. Keywords: Inverse problems, total variation, sparsity.
04090441
en
[ "phys.meca.ther" ]
2024/03/04 16:41:18
2023
https://hal.science/tel-04090441v2/file/MAKSASSI.pdf
Keywords: Curiosity, questions, passion, investigations, experiments, measurements, patience, analysis, conclusions, and ethics all lead to a successful scientific researcher. Working on a project that benefits both humans and the environment is enough motivation to work hard and tirelessly.

g acceleration due to gravity [m.s-2]
h heat transfer coefficient [W.m-2.K-1]
k thermal conductivity [W.m-1.K-1]
n number of conductors [-]
q heat source [W]
r radius [mm]

Preface
Nowadays, the global population is estimated to be around 7.9 billion people. Since the beginning of the industrial revolution, this number has increased tenfold, and billions more people are expected to be added to the world's population in the 21st century, creating a very complicated world with unprecedented environmental challenges. Furthermore, recent studies have warned that by 2030 the entire world will be confronted with a serious ecological problem in the form of global warming, which will result in the deaths of a large number of people [1]. percent in just a decade [3]. The European Commission estimates that between 240 and 450 GW of offshore wind power will be needed by 2050 to keep the temperature increase below 1.5°C; electricity is expected to represent 50% of the total energy mix in 2050, and 30% of this future electricity demand is expected to be supplied by offshore wind [4]. However, in order to achieve this goal, wind energy installations must move to water depths greater than 50 m, which can be accomplished by using floating offshore wind turbines (FOWT). Floating offshore wind is one of the leading developing renewable energy technologies; it is more productive than bottom-fixed offshore wind turbines and onshore wind turbines because the wind speed is higher far from the coast than near it, and a small increase in wind speed yields a large increase in energy production. For example, a turbine operating at a wind speed of 24 km/h can generate twice as much energy as one at a wind speed of 19 km/h [5]. The transportation of the produced energy to shore is a significant challenge for any offshore wind farm. Floating systems require a dynamic cable, which connects the floating wind turbines to their electrical substation and to the underwater energy network. First, the floating offshore wind turbine sends its power through a subsea dynamic power cable to a transformer, and the transformer then sends the power through a static power cable to a converter platform. The alternating current is converted to high-voltage direct current and sent to a converter station on land, where it is converted back into three-phase electric power. One significant challenge is that the specifications of dynamic cables are determined project by project. The characteristics of the dynamic cable are determined by the unit power of the turbines (6 MW and 8 MW for experimental farms, but 10 MW to 12 MW for commercial farms), the general connection scheme (in series, with at least two cables per wind turbine, or in parallel), the architecture chosen for the substations (embedded on a wind turbine float or on an independent float) and the environmental conditions of the site (water depth, swell, current and biofouling). Consequently, the cables are designed specifically and unitarily for each farm, with a wide range of solutions.
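As a quick numerical check of the wind-speed sensitivity mentioned earlier in this preface, the available wind power scales with the cube of the wind speed, P = ½ ρ A v³ C_p. The short sketch below illustrates this; the rotor radius, air density and power coefficient are illustrative assumptions and are not values taken from this thesis.

```python
# Minimal sketch: cube-law sensitivity of wind power to wind speed.
# Rotor radius, air density and power coefficient are assumed illustrative
# values for a generic multi-MW turbine, not data from this thesis.
import math

RHO_AIR = 1.225          # air density [kg/m^3], assumed
ROTOR_RADIUS = 75.0      # rotor radius [m], assumed
CP = 0.45                # power coefficient [-], assumed

def wind_power(v_kmh: float) -> float:
    """Available mechanical power [W] for a wind speed given in km/h."""
    v = v_kmh / 3.6                      # convert to m/s
    area = math.pi * ROTOR_RADIUS ** 2   # swept rotor area [m^2]
    return 0.5 * RHO_AIR * area * v ** 3 * CP

p24, p19 = wind_power(24.0), wind_power(19.0)
print(f"P(24 km/h) / P(19 km/h) = {p24 / p19:.2f}")  # ~2.0, i.e. roughly twice
```

The ratio (24/19)³ ≈ 2.0 is consistent with the statement that a turbine at 24 km/h produces roughly twice the energy of one at 19 km/h.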
Moreover, according to the experience of floating offshore installations in the oil industry, the greatest stresses occur at the head of the cable, that is, at the point of connection with the part fixed to the float. To reduce these stresses, and thus the fatigue of this section of the cable, a bending device with floats and a tensioner is used to shape the section just before the connection into an "S" shape [6]. As a result, while dynamic power cables are a key component of the electrical connection, their design, which must account for electrical, mechanical and thermal concerns, remains a challenge for manufacturers, and any external factor reducing their efficiency results in less energy being delivered. Among all these challenges, the growth of biofouling has been identified, from sensitivity studies, as one of the most influential factors at the design and maintenance stages. Marine growth encompasses all the marine species that colonize a component. Mollusc bivalves, particularly mussels, are a common and problematic component of fouling assemblies associated with vessel sea chests and pipework [7], [8]. These species are known to be dominant in Atlantic coastal areas [9]-[11]. Indeed, marine growth, particularly mussels, can modify the heat transfer around the cable and thus the cable temperature, whereas according to the IEC standard [12], the cross-linked polyethylene (XLPE) insulation system of a dynamic submarine electrical cable (DSEC) is designed to withstand a maximum copper conductor temperature of only 90 °C continuously. This maximum conductor temperature is considered a limit to avoid degradation of the wire insulation (XLPE). Furthermore, temperature fluctuations affect the fatigue lifetime by increasing mechanical stresses, which leads to a decrease in electrical conductivity and, as a result, to less energy being received. There is a need to understand the complexity of each of these phenomena and to investigate their thermal effect. Therefore, due to the high number of uncertainties in space (spatial distribution, type of species) and time (evolution and competition of species over time), it is a challenge to investigate the thermal effect of mussels around the cable in order to determine how they affect the heat transfer between the cable and the water. The aim of this thesis is to assess the thermal effects of marine growth around a dynamic submarine electrical cable, also called an umbilical cable. To accomplish this, the mussels must first be thermally characterized, followed by an examination of the effect of the mussel layer on the temperature of the DSEC copper conductor. As far as we are aware, there have been no previous studies on the thermal characterization of mussels around cables. Another aim is to create a thermal sensor to track biofouling growth around the DSEC.
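To make the thermal concern above concrete, a crude steady-state estimate of the conductor temperature can be obtained from a one-dimensional radial resistance network: conduction through the cable insulation and sheath, conduction through a fouling layer, and convection to the sea water. The geometry, heat loss per metre and thermal properties used below are illustrative assumptions, not values taken from this thesis.

```python
# Minimal sketch: 1-D radial thermal resistance of a cylindrical cable with a
# fouling layer.  All numerical values are illustrative assumptions.
import math

T_SEA = 10.0        # sea water temperature [degC], assumed
Q_LIN = 80.0        # conductor Joule losses per unit length [W/m], assumed
R_COND = 0.02       # outer radius of the conductor region [m], assumed
R_CABLE = 0.06      # outer radius of the cable [m], assumed
K_SHEATH = 0.3      # equivalent conductivity of insulation + sheath [W/m/K], assumed
K_FOUL = 0.6        # effective conductivity of the fouling layer [W/m/K], assumed
H_WATER = 500.0     # external convection coefficient [W/m^2/K], assumed

def conductor_temperature(fouling_thickness: float) -> float:
    """Steady-state conductor temperature [degC] for a given fouling thickness [m]."""
    r_out = R_CABLE + fouling_thickness
    r_sheath = math.log(R_CABLE / R_COND) / (2 * math.pi * K_SHEATH)   # K.m/W
    r_foul = (math.log(r_out / R_CABLE) / (2 * math.pi * K_FOUL)
              if fouling_thickness > 0 else 0.0)
    r_conv = 1.0 / (H_WATER * 2 * math.pi * r_out)
    return T_SEA + Q_LIN * (r_sheath + r_foul + r_conv)

for t_mm in (0, 10, 30):
    print(f"fouling {t_mm:2d} mm -> conductor ~ {conductor_temperature(t_mm / 1000):.1f} degC")
```

Even such a crude estimate shows that an additional low-conductivity layer shifts the conductor temperature upward, which is why the 90 °C limit of the XLPE insulation motivates the thermal characterization carried out in this thesis.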
The thesis begins with an examination and interpretation of the current literature and state of knowledge concerning the floating offshore wind turbine, dynamic submarine electrical cable, biofouling, thermal effect of mussels, and methods to monitor biofouling growth (Chapter 1). Thermal characterization will be performed on mussels of three different ages: juvenile, mixed (juvenile and adult), and adult mussels only (Chapter 2). First, to measure their effective thermal conductivity, and then to measure the heat transfer coefficient of the water around them. The thermal effect of mussels on the dynamic submarine electrical cable will be evaluated (Chapter 3). A method for monitoring the growth of biofouling will be developed and tested (Chapter 4). Finally, conclusions and perspectives are presented. Furthermore, because of the distance between the shore and the offshore, repair and maintenance operations take longer and are thus more expensive [START_REF] Weerheim | Development of dynamic power cables for commercial floating wind farms[END_REF]. As shown in Figure 1.1, there are three dominant types of floating platforms for wind turbines: tension leg platform (TLP), semi-submersible, and spar-buoy. These platforms allow for the installation of FOWT from 40 m to 500 m depth. It's expected in the coming 5-10 years a large amount of floating wind farm projects will be executed and to be ready for this development, the technology shall be ready as well. This requires better assessment of key uncertainties associated with the behaviour, degradation and corresponding models of the underwater components. Moreover, it requires improving the efficiency of structural health monitoring (SHM) through the use of new sensors, while taking into account the performance of embedded sensor networks, including relevant uncertainties, throughout the entire service life of a floating offshore wind farm. Also, it requires better understanding of the material behaviour and degradation of mooring lines and umbilical's, including their link to the environment such as soilanchor interaction and effect of marine growth, which will lead to more accurate modelling of these components over time. For instance, the submarine dynamic power cable's (umbilical cables) design and bio colonization effect is still a challenge for the research and development in order to transfer the maximum produced energy with less possible loss. Introduction to Submarine power cable The world's first powered underwater cable was laid down in the ISAR River in Germany in 1811, and after more than a century, the first commercial high voltage direct current (HVDC) cable linked Sweden and Gotland Island in the Baltic Sea in 1954 [START_REF] Taormina | A review of potential impacts of submarine power cables on the marine environment: Knowledge gaps, recommendations and future directions[END_REF]. Submarine power cables have continued to expand throughout the world with improved technologies in terms of materials, installation techniques, width, and length, with nearly 8000 km of HVDC existing on the seabed worldwide in 2015, with more than 70% of which located in the European adjacent seas [START_REF] Taormina | A review of potential impacts of submarine power cables on the marine environment: Knowledge gaps, recommendations and future directions[END_REF]. 
Submarine power cables are classified into two types, the first type is called static cable which comes on top of or buried within the seafloor and the second type is called the dynamic cable (or Umbilical) which are deployed through the water column between the surface and the seafloor as shown in The static cable and dynamic cable differ primarily in two ways. The first is that the dynamic cable has double armoring to increase torsional stiffness due to dynamical application, whereas the static cable has a single armoring. The second distinction is that dynamic cable has a larger conductor cross-sectional area than static cable, which reduces induced heat [START_REF] Weerheim | Development of dynamic power cables for commercial floating wind farms[END_REF]. Figure 1.3: Structure of Medium Voltage power cable [START_REF] Weerheim | Development of dynamic power cables for commercial floating wind farms[END_REF]. There are numerous hanging configurations for dynamic cable from the FOWT to the sea bed in order to connect with the electric substation or to interconnect the wind farm turbines in the case of shallow water depth. As illustrated in Figure 1.4, the most wellknown configurations are free hanging, lazy wave, steep wave, lazy-S, steep-S, and plaint wave. The Lazy wave is a popular hanging configuration due to its suitability for dynamic cables and features such as lower maximum tension force, reduced cable response, and fewer fatigue cycles. Figure 1.6: W-shape interconnection configuration between two floating offshore wind turbine [START_REF] Srinil | Cabling to connect offshore wind turbines to onshore facilities[END_REF]. Based on the previous configurations and as depicted in Figure 1.7, the dynamic cable is subjected to various loads such as cable weight, hydrostatic pressure, buoyancy force, marine growth, and sea current. The presence of buoyant modules increases the buoyancy force and prevents the cable from touching the seabed, with the density of buoyancy modules being smaller than the surrounding water to create buoyancy force, and it is clearly shown in Figure 1.7 that the weight of the cable induces axial and bending forces, and these forces are increased with the presence of the marine growth (Biofouling), and thus the buoyancy modules lose their buoyancy over time. Figure 1.7: Mechanical loads on dynamic power cable. Biofouling The formation of unfavorable inorganic and/or organic residues on surfaces is referred to as biofouling. These residues increase fluid frictional resistance at the surface, accelerate corrosion at the surface, and can obstruct heat flow across the surface; regardless, energy losses occur [START_REF] Characklis | Bioengineering report: Fouling biofilm development: A process analysis[END_REF]. Once immersed in seawater, unfavorable fouling directly covered a structure. Over 4000 organisms have been linked to fouling problems in marine environments [START_REF] Yebra | Antifouling technology-past, present and future steps towards efficient and environmentally friendly antifouling coatings[END_REF]. Organisms are classified into two types based on their size: microorganisms (also known as micro-fouling or bio-film) and macro-organisms (also known as marine growth). Fouling is generally thought to progress in five stages, as illustrated in Figure 1.8 [START_REF] Delauney | Biofouling protection for marine environmental sensors[END_REF]: -First, after surface immersion, organic and inorganic macromolecules adsorb and form the primary film. 
-Second, microbial cell transport to and immobilization of bacteria on the immersed surface. -Third, as shown in Figure 1.9, the bacterial attachment to the substrate is solidified by the production of extracellular polymers, resulting in the formation of a microbial film on the surface. -Fourth, the presence of multicellular species, microalgae, residues, ruins, and other materials on the immersed surface leads to the evolution of a more complex collection. -Fifth, larger marine invertebrates such as mussels, barnacles, marco-algae, and so on form a bond. However, strict and comprehensive information about the fouling process is being researched. According to some references, some of these stages may occur concurrently or may not exist; for example, macro-organisms can exist on an immersed surface without the presence of biofilm, as mentioned by [START_REF] Delauney | Biofouling protection for marine environmental sensors[END_REF]. Figure 1.9: Biofilm cross section [START_REF] Bixler | Biofouling: lessons from nature[END_REF]. Because of its large effect on hydrodynamic loading, thermal resistance, and weight, macro-fouling is generally more dangerous to offshore structures, instruments, and many organisms, such as hydroids and soft corals, outgrow mussel colonization [START_REF] Compere | BIOFOULING and UNDERWATER MEASUREMENTS. Real-Time Coastal Observing Systems for Marine Ecosystem Dynamics and Harmful Algal Blooms: Theory, instrumentation and modelling Edition: Oceanographic Methodology series[END_REF]. Marine mussels are bivalve mollusks. This means they have two hard shells around their soft bodies to protect them (Figure 1.10). Mussels, like other bivalves (oysters and clams), can help to improve water quality and support healthy marine habitats. This is due to the fact that mussels are filter feeders, which means they are constantly bringing sea water into their bodies, filtering out phytoplankton and sediments while looking for their food, and they can filter hundreds of liters of seawater per day. Figure 1.10: Anatomy of mussels [START_REF]Mussels Anatomy[END_REF]. Mussels must therefore collect food and oxygen from the water in order to survive. They accomplish this by drawing water in through their incurrent siphon, moving seawater over their gills, and then passing the water out through their excurrent siphon. The idea is to suck up the surrounding water along with the particles that are suspended in it. The assembly enters the mantle cavity from the ventral side and passes through the gills (called ctenidia in molluscs). These can have a net-like structure that allows water to pass through (and thus allows breathing) while also retaining particles in suspension. The elements blocked at the gill level have two possible futures, depending on their size and palatability. Siphoned material is either transferred to the mouth for digestion or sloughs off the gills and exits through the ventral margin of the shell (pseudofeces). Digested material is either burned as fuel or excreted as feces. Because biofouling evolution is the result of a variety of chemical, physical, and biological factors, there are numerous parameters that influence the type of biofouling produced and its rate of evolution. Temperature, location, turbidity, depth, dissolved oxygen content, and other factors all play a role in the growth rate and type of biofouling. Several references show that marine growth depends on where it's located. 
In general, fouling is denser in more tropical locations doubtless due to the warmer temperature of seawater and to the continuous process of breeding. Some organisms may also fit to different environmental conditions and expand over big areas such as North Polar regions to the Mediterranean, while others are confined to a very restricted region. A significant portion of the fouling organisms are carried away by the currents from the shoreline to structure sites. Barnacles and mussels needed three weeks or more to expand and feed while being transported before settling on an appropriate substrate. As a result, while on the coast, they may sweep away from their originating point. Many physicochemical parameters of water, such as temperature, dissolved oxygen, turbidity, salinity, organic content, and light penetration, change as depth changes. All of these variables influence the type of fouling and its growth. In general, the amount of fouling is greater near the shore; however, as seawater depth increases, the amount of fouling decreases. According to [START_REF] Amram | Sedimentation and fouling of optical surfaces at the ANTARES site[END_REF], the density of bacteria after 3 months or 8 months at deep sea (>2000 m depth) is comparable to samples exposed for 1-2 weeks in shallow water, where the temperature measured at the ANTARES site is 13.2 °C [START_REF] Amram | Sedimentation and fouling of optical surfaces at the ANTARES site[END_REF]. They are explained by the low temperature and poor nutrient quality at these low levels. Furthermore, bacteria found in the deep sea are smaller than those found in shallow water. Increasing the depth usually results in a change in the dominant species and a decrease in thickness. Some species, such as algae, are photosynthetic and can only survive in areas with adequate light. Because mussels, tubeworms, barnacles, ascidians, and hydrozoans get their food from nutrients rather than light, fouling can occur at great depths [START_REF] Compere | BIOFOULING and UNDERWATER MEASUREMENTS. Real-Time Coastal Observing Systems for Marine Ecosystem Dynamics and Harmful Algal Blooms: Theory, instrumentation and modelling Edition: Oceanographic Methodology series[END_REF]. Temperature and access to light have a large impact on fouling growth and bacterial colonization on surfaces immersed in natural seawater. It should be noted that temperature is a significant parameter, as an increase in growth rate is most commonly observed with an increase in temperature. In temperate and cold climates, fouling growth appears from April to September; however, in tropical locations, fouling growth appears at a high and consistent level. It should be noted that some soft marine organisms die during the winter and live as residue. However, environmental changes affect the growth and reproduction rates of some soft marine species. Water movement, which includes currents, tides, circulations, and horizontal and vertical water exchanges, influences the organism's settlement. Any changes in water movement have a large impact on nutrition, metabolic activities, life cycles, and food chains, so water movement is undeniably important for marine plant life. The degree of exposure to wave wash has a significant impact on algae, and some species, such as hydroids, are highly affected by water movement [START_REF] Deacon | Marine Ecology: A Comprehensive Integrated Treatise on Life in Oceans and Coastal Waters[END_REF]. For marine ecologists, salinity and nutrients play a key role. 
Salinity varies from location to location, and as a result, the biology of species varies. For example, in the Red Sea, salinity is 41 percent because evaporation is greater than fresh water input, whereas in the Baltic Sea, where we have fluvial input, salinity is 10 percent [START_REF] Deacon | Marine Ecology: A Comprehensive Integrated Treatise on Life in Oceans and Coastal Waters[END_REF]. Water turbidity has a significant impact on the penetration of light inside water; if the turbidity is high, light penetration is difficult, affecting the growth of some species such as algae. In addition to salinity and water turbidity, nutrients availability has a significant impact on fouling rates. Ocean waters are typically lower in nutrients than water near the coast due to natural domestic or industrial effluents, as well as the presence of a high biodiversity and as a consequence, a large number of fouling larvae. As a result, sites near the coast generally have higher fouling rates. Previous factors vary depending on geographical location, distance from the coast, depth, temperature and season, water movement, and water quality which may influence the settlement of fouling on the structure. Other factors should also be structured to embrace all the complexity of biocolonization, such as, the competition between species and the structure nature. In terms of species competition, the limited space on a structure makes it difficult for some species to colonize it. For example, some species grow in the summer and die in the winter; in that case, the following summer they cannot find space to settle on structure because the space is occupied by other species [START_REF] Compere | BIOFOULING and UNDERWATER MEASUREMENTS. Real-Time Coastal Observing Systems for Marine Ecosystem Dynamics and Harmful Algal Blooms: Theory, instrumentation and modelling Edition: Oceanographic Methodology series[END_REF]. In terms of structure nature, the rate of bacterial colonization is influenced by a number of material surface factors. According to [START_REF] Schmidt | Aggregation kinetics of saccharomyces cerevisiae on solid surfaces[END_REF], negative surface charge reduces the extent of bacterial biofouling while increasing the mortality rate of adhering bacteria. The roughness of the surface has been investigated by [START_REF] Schmidt | Aggregation kinetics of saccharomyces cerevisiae on solid surfaces[END_REF]. It demonstrates that stainless steel with highly polished or extremely rough surfaces has a rapid growth rate. The growth rate of the yeast colonies, however, is the slowest for intermediate roughness. In the case of a highly polished surface, the physicochemical principle of adhesion dominates and is regarded as the cause of colonization, as a result of the presence of a larger flat adhesion area in comparison to a rougher surface. In the case of very rough surfaces, the hydrodynamic theory takes precedence, with the zone near a rough area characterized by a turbulent stream that transports the cell to the surface valleys. It appears as a small region of turbulence denoting a maximum shear force for the intermediate roughness, which has the lowest colonization. As a result, this depicts the equilibrium between adhesion and cell motion as influenced by the flow. 
The material nature of the surface also plays a role in the settlement of species; for example, for short exposure times (100 h or less), PET (Polyethylenterephthalate) outperforms PMMA (Poly methylmethacrylate) in fouling resistance because PET has a surface that is less suitable for biofouling adherence [START_REF] Schmidt | Aggregation kinetics of saccharomyces cerevisiae on solid surfaces[END_REF]. The effect of fouling in heat exchangers, as well as the method of detecting its growth, will be discussed in the following section. Fouling effect on shell and tube heat exchanger The presence of fouling in a heat exchanger can cause the flow to stop or change direction. Its nature is also less conductive than the separating wall, which increases heat transfer resistance. As a result, the heat exchanger's efficiency is reduced. Because of the random nature of this phenomenon, fouling quantification is regarded as one of the most difficult challenges. Because systematic periodic maintenance is not the best solution for customers, predictive maintenance is the best way to reduce loss. The reduction of the overall heat transfer coefficient is a traditional method for predicting the presence of biofouling. However, this method works when we have locally 1 D heat transfer, which consists of ignoring axial heat transfer in solid walls; additionally, compensation can occur between conduction and advection. The overall heat transfer coefficient is then not very sensitive to fouling in that case. Another popular method for detecting biofouling in heat exchangers is pressure drop. It should be noted that changes in flow rates cause a variation in pressure drop over time; however, the effectiveness of this method is dependent on the heat exchanger plate pattern [START_REF] Deacon | Marine Ecology: A Comprehensive Integrated Treatise on Life in Oceans and Coastal Waters[END_REF]. There are also complex methods for detecting fouling, such as silicon sensors, acoustic, ultrasonic, and optical techniques [START_REF] Amram | Sedimentation and fouling of optical surfaces at the ANTARES site[END_REF]. These methods are restricted to laboratory use because they necessitate specialized equipment that is not always available. A useful detection technique is to compare the system signature obtained experimentally during operation with the reference signature, where the reference model must be identified from a calibration experiment in the absence of fouling in the heat exchanger. This signature is valid for a system with linear time invariance (LTI). Here, the system signature agrees with its impulse responses or transfer functions, where the transfer function is the Laplace transform of the impulse response. These functions can be specified for any complex geometry, and it should be noted that they are solely dependent on the velocity field of the flowing fluid, the faced thermophysical properties, and the geometry. As a result, these advantages enable the detection of temporal differences in the system's structure during operation. The following are the main steps of a thermal detection method for fouling in heat exchangers [START_REF] Hadad | Fouling detection in a shell and tube heat exchanger using variation of its thermal impulse responses: Methodological approach and numerical verification[END_REF]: a. The location and type of thermal perturbation u(t), as well as the location of the output y, must be determined y(t). 
It must be ensured that the perturbation is small enough to keep the system's response in the linear domain. b. In the absence of biofouling, the impulse response Hwithoutfouling(t) for a given mass flow rate of the fluid must be identified first and used as a reference. c. For the same mass flow rate values, the impulse response Hoperation(t) is identified during operation. This is done on a regular basis throughout the operation. d. If a difference between Hoperation(t) and Hwithoutfouling(t) appears during operation, it indicates the presence of biofouling. Table 1.2 gives the excitation forms and the corresponding impulse response expressions. The H(t) expressions are only valid when the response is not instantaneous. Table 1.2: Impulse forms and impulse response expressions [27]. In [27], an impulse response is identified in both cases, without fouling and with fouling. To begin the calibration, the impulse response without fouling is determined using the average temperatures at the hot and cold fluid outlets, θ_out^hot and θ_out^cold, respectively. The impulse response H, which connects the excitation to its temperature response, is denoted by Z. As shown in Figure 1.12, the excitation is a source q and its responses are Zh (for θ_out^hot) and Zc (for θ_out^cold). Figure 1.12: Shell and tube heat exchanger [27]. The excitation source is tested in two forms, a step and a ramp. The evolution of these two heat sources and their responses are presented in Figures 1.13a and 1.13b, respectively. Figure 1.13: a) Step excitation q and corresponding average hot and cold outlet temperatures. b) Ramp excitation [27]. Starting from these synthetic profiles, the impulse responses Zh and Zc are identified, as shown in Figure 1.14, using the expressions in Table 1.2. The new impulse response (with fouling) is identified using the same expressions from Table 1.2 and plotted in Figure 1.16, while maintaining the same flow rate as in the previous case (without fouling). The presence of fouling changes the area under the curve, which becomes larger for Zc and smaller for Zh; it also leads to an increase in the relaxation time. As a result, the presented method relies on the differences between the impulse responses to distinguish the fouled system. The identified impulse responses implicitly account for all effects that are difficult to model, such as turbulence, contact resistance and wall roughness. In summary, the presence of fouling increases the heat transfer resistance, reducing the efficiency of equipment where heat transfer is required. The section that follows demonstrates how the presence of biofouling on a submarine power cable reduces the cable's efficiency.
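As an illustration of this detection principle, the sketch below simulates a measured outlet temperature as the convolution of a heat source with an impulse response, and compares the response identified during operation with the clean reference. The first-order responses and the "fouled" parameters are assumptions chosen for illustration only, and the identification is done by plain least squares rather than the expressions of Table 1.2; this does not reproduce the model of [27].

```python
# Minimal sketch of the impulse-response comparison used for fouling detection.
# The exchanger is treated as an LTI system: outlet temperature = impulse
# response convolved with the heat source.  First-order responses and the
# "fouled" parameters are illustrative assumptions.
import numpy as np

dt = 1.0                              # time step [s]
t = np.arange(0.0, 600.0, dt)         # observation window [s]
n = t.size

def first_order(tau, gain):
    """Assumed first-order impulse response, time constant tau [s]."""
    return gain / tau * np.exp(-t / tau)

h_ref = first_order(60.0, 0.50)       # reference response (clean exchanger), assumed
h_foul = first_order(90.0, 0.65)      # fouled: slower and more resistive, assumed

q = np.where(t >= 30.0, 1.0, 0.0)     # step excitation of the heat source

# Convolution matrix C such that theta = dt * C @ h (causal system).
lag = np.arange(n)[:, None] - np.arange(n)[None, :]
C = np.where(lag >= 0, q[np.clip(lag, 0, n - 1)], 0.0)

theta_ref = dt * C @ h_ref            # calibration measurement (no fouling)
theta_op = dt * C @ h_foul            # measurement during operation

# Identify the impulse response during operation and compare it to the reference.
h_op, *_ = np.linalg.lstsq(dt * C, theta_op, rcond=None)
deviation = np.sum(np.abs(h_op - h_ref)) / np.sum(np.abs(h_ref))
print(f"relative deviation of impulse responses: {deviation:.2f}")
print("fouling suspected" if deviation > 0.1 else "no significant change")
```

Thermal effect of mussels on dynamic cable
The presence of mussels on the surface of the dynamic cable forms an insulating layer.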
Furthermore, the irregularities in the shape of the mussel layer change the water flow regime around the cable. As a result, the temperature of the cable rises. In general, biofouling can be modeled using four main parameters: roughness, density, thickness, and surface coverage percentage. A. MATINE et al, investigates the thermal effect of mussel thickness and surface coverage on a 33 kV XLPE insulated power cable composed of 3x475 mm 2 copper conductors used to handle a 30 MW wind turbine [START_REF] Matine | IREENA laboratory[END_REF]. For the thickness test, it is assumed that the entire surface is colonized with mussels with thicknesses ranging from 0 to 40 mm. The steady-state temperature distribution through the dynamic cable subjected to a current of 1010 A and immersed in water at 10°C with a biofouling thickness of 30 mm is depicted in Figure 1.17. In the two previous studies, the thermal conductivity of the biofouling layer was assumed to be equal to that of water. On the other hand, the cluster of mussels colonized around the DSEC (dynamic submarine electrical cable), is considered a porous medium composed of a solid part (mussels) and a fluid part (water between the mussels). So, in order to have an accurate and clear picture of the thermal effect of mussels on the submarine high voltage dynamic cable, the thermal properties of mussels, namely their effective thermal conductivity and the heat transfer coefficient of the water around them, must be estimated. This thermal property's estimation methodology will be shown in Chapter 2. Effective thermal conductivity in porous media Heat conduction through porous media occurs in a wide range of engineering applications, including permafrost soil mechanical stability, soil organism living conditions, nuclear waste disposal, energy production in geothermal engineering, materials drying, and the design and applications of compound materials, thermal characterization of marine growth, and so on [START_REF] Huai | Analysis of the effective thermal conductivity of fractal porous media[END_REF]. The effective thermal conductivity of a porous media is influenced by the thermal conductivities of the solid part and the fluid part. As shown in Figure 1.20, porous media frequently have highly complex structures, thus, heat conduction is also extremely complicated. Effective thermal conductivity is influenced by a variety of factors such as solid matrix properties, pore size and spatial distribution, the type of components, and state of fluid in pores, pressure, and temperature [START_REF] Huai | Analysis of the effective thermal conductivity of fractal porous media[END_REF]. The problem of heat conduction in porous media is mathematically equivalent to the problems of electrical conductivity, permittivity, and magnetic permeability in such materials. Several models and formulae for predicting effective thermal conductivity in two-phase systems have been proposed in [START_REF] Pietrak | A review of models for effective thermal conductivity of composite materials[END_REF]. All of the models predict effective thermal conductivity as a function of each phase's thermal conductivity and volume content, which is defined as the fluid volume to total volume ratio. Some models take into account additional parameters such as the shape of the solid particles, their orientation and distribution, and their contact resistance [START_REF] Tavman | Effective thermal conductivity of granular porous materials[END_REF]. 
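To fix ideas, the simplest of these two-phase models can be written down explicitly. The sketch below evaluates the parallel and series (Wiener) bounds and the Maxwell model for a water-saturated medium; the solid-phase conductivity and the porosity are illustrative assumptions, not the measured values of this work.

```python
# Minimal sketch: classical two-phase models for effective thermal conductivity.
# k_s: solid-phase conductivity, k_f: fluid-phase conductivity, phi: porosity
# (fluid volume fraction).  Numerical values are illustrative assumptions only.

def k_parallel(k_s: float, k_f: float, phi: float) -> float:
    """Layers parallel to the heat flow (upper Wiener bound)."""
    return phi * k_f + (1.0 - phi) * k_s

def k_series(k_s: float, k_f: float, phi: float) -> float:
    """Layers normal to the heat flow (lower Wiener bound)."""
    return 1.0 / (phi / k_f + (1.0 - phi) / k_s)

def k_maxwell(k_s: float, k_f: float, phi: float) -> float:
    """Maxwell model: dilute spherical solid inclusions in a continuous fluid."""
    v = 1.0 - phi  # solid volume fraction
    return k_f * (k_s + 2 * k_f + 2 * v * (k_s - k_f)) / (k_s + 2 * k_f - v * (k_s - k_f))

k_f, k_s, phi = 0.6, 1.2, 0.5   # water ~0.6 W/m/K; solid value and porosity assumed
for name, model in [("series", k_series), ("parallel", k_parallel), ("Maxwell", k_maxwell)]:
    print(f"{name:8s}: k_eff = {model(k_s, k_f, phi):.3f} W/m/K")
```

Any of these closed-form estimates lies between the Wiener bounds; measured values for the mussel-water medium can then be compared against such predictions.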
Figure 1.20: Schematic of a porous medium [START_REF] Hsiao | Modified effective thermal conductivity due to heat dispersion in fibrous porous media[END_REF]. Since the 19 th century, numerous analytical expressions for estimating the effective thermal conductivity of composite materials have been proposed. Many studies have resulted in various theoretical models for estimating the effective thermal conductivity of porous media with various geometrical structures, such as the Maxwell [START_REF] Maxwell | A treatise on electricity and magnetism[END_REF], Rayleigh [START_REF] Lord Rayleigh | LVI. On the influence of obstacles arranged in rectangular order upon the properties of a medium[END_REF], Russell, Eucken, and Loeb [START_REF] Tonggeng | Thermophysical properties and influencing factors of polyimide/silica composite films[END_REF] models based on cylindrical or spherical pore structures. Zehnder and Schlunder [START_REF] Yu | Two Effective Thermal Conductivity Models for Porous Media with Hollow Spherical Agglomerates[END_REF] proposed a correlation for stagnant thermal conductivity based on a one-dimensional heat flow model for heat conduction through a packed bed of spherical particles, assuming point contacts of particles in the heat flow direction. Zimmerman [START_REF] Cosenza | Effective medium theories for modelling the relationships between electromagnetic properties and hydrological variables in geomaterials: a review[END_REF] used an effective medium theory to present a thermal conductivity model for fluid-saturated rocks. Krupiczka [START_REF] Tavman | Effective thermal conductivity of granular porous materials[END_REF] developed a numerical solution for the effective thermal conductivity of porous materials by first using a model composed of long cylinders and then another composed of spheres in a cubic lattice. With spherical inclusions, Verma et al. [START_REF] Chatterjee | Heat conduction model based on percolation theory for thermal conductivity of composites with high volume fraction of filler in base matrix[END_REF] developed an expression for predicting the effective thermal conductivity. Hsu et al. [START_REF] Hsu | A Lumped-Parameter Model for Stagnant Thermal Conductivity of Spatially Periodic Porous Media[END_REF] study developed a lumped parameter model for the effective stagnant thermal conductivity of some spatially periodic two-dimensional and three-dimensional media. Also, Yu and Cheng [START_REF] Cheng | Evaluation of effective thermal conductivity from the structure of a packed bed[END_REF] developed a fractal thermal conductivity model for both mono and bi-dispersed porous media by assuming that porous media are divided into two parts: some particles contact each other to form tortuous chains, while others do not. Woodside and Messmer [START_REF] Woodside | Thermal Conductivity of Porous Media. I. Unconsolidated Sands[END_REF] propose a model that combines the series distribution (in which the solid (spheres) and fluid phases are in layers normal to the direction of heat flow) and the parallel distribution (in which the solid (spheres) and fluid phases are in layers parallel to the direction of heat flow). The irregularity of the microstructure complicates the mechanism of heat transfer in porous materials, which makes determining the overall effective thermal conductivity of a porous medium difficult. 
In such materials, heat is transmitted through thermal conduction in the solid phase, thermal conduction in the fluid phase, radiation between solid particles, and convection in the fluid phase between particles. Radiation contributes to thermal conductivity in porous materials at temperatures above 200°C. It is also known that with values of Rayleigh number, Ra=Gr.Pr > 10 3 , convective heat transfer in pores (fluid phase) cannot be ignored [START_REF] Luikov | Heat and Mass Transfer in Capillary-Porous Bodies[END_REF], where Gr is the Grashof number and Pr is the Prandtl number. However, none of the previous models take into account the natural convection effect in pores between the particles. As a result, it is worthwhile to consider it if it exists during the computation of the effective thermal conductivity of any porous medium. Monitoring of biofouling As previously stated, biofouling is a significant disturbance factor for bodies immersed in sea water. As a result, it is critical to monitor biofouling growth to determine whether or not the level of growth is significant. Methods for monitoring biofouling are introduced in this section. The most well-known monitoring method is underwater inspection, which can be performed by divers or remotely operated vehicles. Remotely operated vehicles can travel deeper and for longer periods of time than divers, but they are more expensive, inflexible, and vulnerable to external factors. Divers, on the other hand, can cover a large area and inspect at various scales, from detailed close-ups to broader visual evaluations [START_REF] O'byrne | Image-Based Damage Assessment for Underwater Inspections, 1st ed. Boca Raton : Taylor & Francis, a CRC title, part of the Taylor & Francis imprint, a member of the Taylor & Francis Group, the academic[END_REF]. The quality of the inspection is determined by the diver's ability to supervise and provide a thematic description of defects, which is usually supported by photographs, but most of the time the quality of the photographs is poor and useless for quantitative analysis. However, because the extraction of quantitative information from images using computational techniques is becoming increasingly popular and useful, it is critical to have images that are suitable for quantitative analysis. As a result, a protocol was established, as shown in It can be seen that the shape of the reconstruction is affected by the quality of the imagery, which is affected by the acquisition criterion. So this protocol works well; however, it is highly dependent on the following factors: weather, water turbidity, and water depth. As a result, it's interesting to have a sensor independent from the latter factors. A novel approach to biofouling monitoring will be proposed in Chapter 4, which is a thermal sensor in which the equivalent heat transfer coefficient is monitored. Partial Conclusion In this Chapter, a floating offshore wind turbine bibliography and the significance of power dynamic cable for transferring energy from the offshore to the shore have both been introduced. In addition, bibliography on biofouling and the key elements that influence its growth is proposed. Additionally, it is demonstrated how biofouling can affect the dynamic power cable in terms of thermo-mechanics. The predominance of mussels in Atlantic coasts is highlighted. In addition, the effect of fouling on heat exchangers and how to detect it have been discussed. 
The effect of biofouling on submarine power cables is also discussed, as well as methods for detecting it, where the thermal conductivity of the biofouling layer was assumed to be the same as that of water. However, in reality the mussel clusters that colonized the submarine electrical cable are considered to be a porous media formed from a solid part, the mussels' bivalve itself, and a fluid part, the water between them. Therefore, it is interesting to estimate the thermal properties of mussels, specifically their effective thermal conductivity and the water's heat transfer coefficient around them, in order to quantify accordingly the thermal impact of mussels on the dynamic submarine electrical cable. Furthermore, a review of the literature on the equations that can be used to determine the effective thermal conductivity in a porous medium has been provided. It shows that all of the introduced models and formulas for estimating effective thermal conductivity don't take in consideration the natural convection in the porous medium. The following Chapter introduces an experimental method for measuring the effective thermal conductivity of a porous medium (mussels) while taking natural convection in the porous media into account. Chapter Thermal characterization of mussels -Steady-state analysis 2.1 Introduction As stated in Chapter 1, umbilical (dynamic) cable is an important component for offshore operators for two reasons: first, cable manufacturers have received few feedbacks on these new products (unlike static submarine cables), and second, dynamic cables are custom-designed for each new MRE farm project. Biofouling, in particular the growth of mussels, can affect cable cooling by increasing a thermal resistance that lessens the heat transfer between the cable and the seawater, affecting its maximum operating temperature and, more importantly, temperature fluctuations, which can affect fatigue lifetime. To determine the thermal impact of mussels on the dynamic submarine electrical cable, it is therefore necessary to thermally characterize the mussels. In this chapter, the mussels will be thermally characterized by estimating the effective thermal conductivity of mussels of varying ages as well as the heat transfer coefficient of the water surrounding them. The effective thermal conductivity of the mussels will be estimated using 1D analytical stationary model (Fourier's law) valid for a uniform distribution of the mussels around the experimental tube, i.e. mussels are distributed around the experimental tube with same thickness. The measurement method will be validated by measuring the thermal conductivity of a double-sided foam adhesive and comparing it to a measurement from a hot guarded plate device. In addition, the heat transfer coefficient of the water around the mussels will be also calculated using Newton's law. It will be also compared to two literature correlations. Furthermore, mussel distributions that are not uniform around the tube will be considered. In practice, mussel growth occurs undersea with non-uniform colonization around the DSEC. In this case, due to more complicated geometry, the effective thermal conductivity of mussels of different ages and heat transfer coefficient of the water around them will be estimated using numerical method to solve the 2D heat transfer equation and a parameter estimation technique. 
The extraction protocol and experimental setup for mussels The stationary measurements are carried out using an experimental tube. The mussels of various ages (juvenile (6 months), mixed (juvenile and adult), and adult (12 months), as listed in Table 2.1) are kept alive at the extraction site in buckets filled with sea water. To avoid mixing, each bucket is numbered according to the age of the mussels; the buckets are then transferred to the LTeN laboratory, located about one hour and a half from the extraction site. On arrival at the laboratory (LTeN), small pumps connected to small hoses are placed in the buckets to create bubbles that increase the dissolved oxygen in the water, allowing the mussels to survive as long as possible. Table 2.1: Various ages of mussels with their layer thickness. Thermal characterization for a uniform colonization of different ages of mussels around the tube In this section, mussels of different ages are distributed uniformly on the experimental tube, i.e. the mussels completely cover the tube with the same average patch thickness along its length. Measurement method During the experiment, an aluminium tube covered with a uniform distribution of mussels is immersed in a tank filled with sea water, and the power supplied by the heating elements inside the tube is used to reach a steady state with a temperature difference of about 5 to 10 K between the two sides (internal and external) of the mussel layer. Fourier's law is used directly to calculate the effective thermal conductivity of the mussels (kbiof) in the case of uniform colonization (100 % coverage of the tube with the same thickness, Figure 2.7), from the temperature difference across the mussel layer and the heat power q (W) crossing it (Equation 2.1). In addition, the heat transfer coefficient between the external mussel layer and the water (hw) is obtained from Newton's law of cooling (Equation 2.2).

$$k_{biof}\ (\mathrm{W.m^{-1}.K^{-1}}) = \frac{\ln(r_e/r_i)\, q}{2\pi L\,(T_{av1}-T_{av2})} \qquad (2.1)$$

$$h_w\ (\mathrm{W.m^{-2}.K^{-1}}) = \frac{q}{2\pi r_e L\,(T_{av2}-T_w)} \qquad (2.2)$$

where Tav1 is the average temperature on the internal side of the mussel layer (average of T1, T2 and T3), measured by the three thermocouples located in the aluminium tube at the middle cross-section, Tav2 is the average temperature on the external side of the mussel layer (average of T4, T5 and T6), and Tw is the temperature of the water far away from the aluminium tube. In Equations (2.1) and (2.2), ri and re are respectively the internal and external radius of the mussel layer and L is its length.
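A minimal computational sketch of Equations (2.1) and (2.2) is given below (Python, illustrative only, not part of the original work); it also combines the individual measurement uncertainties in quadrature, numerically by finite differences, rather than through the analytical partial derivatives written out below, and it takes q = U·I as a single input for brevity. All numerical values are placeholders, not the thesis data.

```python
import math

def k_biof(q, L, r_i, r_e, T_av1, T_av2):
    """Effective conductivity of the mussel layer, Eq. (2.1) (cylindrical Fourier's law)."""
    return math.log(r_e / r_i) * q / (2.0 * math.pi * L * (T_av1 - T_av2))

def h_w(q, L, r_e, T_av2, T_w):
    """Water-side heat transfer coefficient, Eq. (2.2) (Newton's law of cooling)."""
    return q / (2.0 * math.pi * r_e * L * (T_av2 - T_w))

def propagate(f, params, deltas, rel_step=1e-6):
    """Quadrature combination of uncertainties with numerical partial derivatives."""
    var = 0.0
    for name, delta in deltas.items():
        p_hi, p_lo = dict(params), dict(params)
        step = abs(params[name]) * rel_step or rel_step
        p_hi[name] += step
        p_lo[name] -= step
        dfdx = (f(**p_hi) - f(**p_lo)) / (2.0 * step)
        var += (dfdx * delta) ** 2
    return math.sqrt(var)

if __name__ == "__main__":
    # Placeholder measurement values and uncertainties (q = U * I).
    p = dict(q=70.0, L=0.61, r_i=0.04, r_e=0.11, T_av1=25.0, T_av2=15.0)
    d = dict(q=0.5, L=0.005, r_i=0.001, r_e=0.005, T_av1=0.1, T_av2=0.1)
    k = k_biof(**p)
    dk = propagate(k_biof, p, d)
    print(f"k_biof = {k:.2f} +/- {dk:.2f} W.m-1.K-1")
```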
With noting that in accordance to the law of the propagation of uncertainties [START_REF] Moffat | Describing the uncertainties in experimental results[END_REF], the absolute uncertainty on the effective thermal conductivity of mussels is obtained by: 𝛿𝑘 = √ ( 𝑑𝑘 𝑑𝐿 𝛿𝐿) 2 + ( 𝑑𝑘 𝑑𝑇 𝑎𝑣1 𝛿𝑇 𝑎𝑣1 ) 2 + ( 𝑑𝑘 𝑑𝑇 𝑎𝑣2 𝛿𝑇 𝑎𝑣2 ) 2 + ( 𝑑𝑘 𝑑𝑈 𝛿𝑈) 2 + ( 𝑑𝑘 𝑑𝐼 𝛿𝐼) 2 + ( 𝑑𝑘 𝑑𝑟 𝑖 𝛿𝑟 𝑖 ) 2 + ( 𝑑𝑘 𝑑𝑟 𝑒 𝛿𝑟 𝑒 ) 2 (2.3) where U and I are respectively the measured voltage and current through the heating element (q=U I) it follows then: 𝛿𝑘 = √ ( -ln( 𝑟 𝑒 𝑟 𝑖 ) * 𝑈 * 𝐼 2𝜋𝐿 2 (𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 𝛿𝐿) 2 + ( ln( 𝑟 𝑒 𝑟 𝑖 ) * 𝑈 * 𝐼 2𝜋𝐿(𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 2 𝛿𝑇 𝑎𝑣1 ) 2 + ( -ln( 𝑟 𝑒 𝑟 𝑖 ) * 𝑈 * 𝐼 2𝜋𝐿(𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 2 𝛿𝑇 𝑎𝑣2 ) 2 + ( ln( 𝑟𝑒 𝑟𝑖 ) * 𝐼 2𝜋𝐿(𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 𝛿𝑈) 2 + ( ln( 𝑟𝑒 𝑟𝑖 ) * 𝑈 2𝜋𝐿(𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 𝛿𝐼) 2 + ( -𝑈 * 𝐼 2𝜋𝐿𝑟 𝑖 (𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 𝛿𝑟 𝑖 ) 2 + ( 𝑈 * 𝐼 2𝜋𝐿𝑟𝑒(𝑇 𝑎𝑣2 -𝑇 𝑎𝑣1 ) 𝛿𝑟 𝑒 ) 2 (2.4) Also, the uncertainties on the heat transfer coefficient hw is calculated similarly as the one of thermal conductivity. The thermal characterization method and experimental setup were tested using a material double sided adhesive tape, as shown in Figure 2.8. The thermal conductivity of the double sided adhesive tape was initially measured using a hot guarded plate (HGP) device with a 5 percent relative uncertainty. The result shows that the thermal conductivity computed using equation (2.1) and measured with our experimental setup with aluminum tube was 0.052 W.m -1 .K -1 (for double sided tape adhesive not covered with steel net) and 0.053 W.m -1 .K -1 (for double sided tape adhesive covered with steel net), whereas the one with the HGP device was equal to 0.055 W.m -1 .K -1 therefore with only a 5.45% and 3.64% relative discrepancy, respectively. This validation confirms also that the steel net (used to maintain the mussels) has no significant effect on the effective thermal conductivity value. It also confirms that the tube extremities connected to the sample holder are well thermally insulated. In the addition to that, the aspect ratio between the thickness of the sample and the experimental tube's length is very small ( 1.9 610 = 0.0031). In other words, the axial heat flux along the aluminum tube is negligible in comparison to the radial one, which explains why the direction along the tube length is ignored in our thermal models (analytical and numerical). Effective thermal conductivity measurement resultsuniform colonization Temperatures of the internal and external layer of mussels is measured at steadystate for different ages of mussels of thicknesses (re-ri) with imposing a heat source q in the heating elements. Table 2.2 presents the evolution of the temperature along the aluminium tube (internal layer temperature of mussels (T1, T2 and T3)) and the external layer of mussels (T4, T5 and T6). Indeed, the porosities of the three biofouling materials increase with mussel age due to differences in mussel size. Therefore, for the same temperature difference between the internal and the external layer of mussels one can expect more natural convection inside older biofouling (in mussel's pore space), resulting a higher effective thermal conductivity. In order to confirm this assumption, the effect of water circulation in the open pores of a porous media on the effective thermal conductivity was investigated. For this purpose, a cluster of glass beads (beads=16mm) was distributed around the experimental tube and maintained using a steel net as shown in Figure 2.9a. 
The measured porosity of glass medium beads is approximately 43 percent and its measured effective thermal conductivity 54 was found to be 2.4 W.m -1 .K -1 (at q= 70.56 W) which is higher than the thermal conductivity of water (0.6 W.m -1 .K -1 ) and of the glass (1.1 W.m -1 .K -1 ). This demonstrates that there is a small circulation of water in the porous space, which leads to a higher effective thermal conductivity. Also, a test is performed after the glass medium has been covered with a polyethylene stretch film as shown in respectively, are lower than the measured effective thermal conductivity of the glass porous medium, which is 2.4 W.m -1 .K -1 . This is because none of the previous models take in consideration the natural convection effect in the pores of a porous medium as discussed in section 1.8. However, in the case of glass beads medium, the Rayleigh number (Ra) in this case is equal to 1.05x10 3 , so the natural convection in the pores between the particles must be considered, according to section 1.8. As a result, it appears that the hypothesis of a small circulation of water due to free convection in the porous space, resulting a higher effective thermal conductivity, is the most likely. This implies that using homogeneous models in the presence of a temperature gradient (free convection) can be misleading. In the present case the difference is between the effective thermal conductivity and the W.m -1 .K -1 and 12.3 W.m -1 .K -1 for measurements taken on July 8, 2020 and May 19, 2022, respectively, with a difference of less than 5% between the two measurements. It is obvious that the effective thermal conductivity of living and dead mussels increases with increasing homogeneous 𝑘 𝑒 = 𝑘 𝑓 (1 -(1 -𝜖) 1 2 + 2(1 -𝜖) 1 2 1 -K. B . ( (1 -K). B (1 -KB) 2 ln ( 1 K. B ) - B + 1 2 - B -1 1 -K. B )) 𝐾 = 𝑘 𝑓 𝑘 𝑠 , 𝐵 = 1.25 ( 1-𝜖 𝜖 ) ΔT due to increased natural convection in the mussels' water pores spaces. Because natural convection occurs in fluids in the presence of a gravitational field due to density differences caused by temperature differences within the fluid in the pores spaces of the mussels between the internal layer and the external layer of the mussels. Furthermore, the effective thermal conductivity of the living adult mussels is clearly higher than the effective thermal conductivity of the dead adult mussels, with an average difference of 3 W.m -1 .K -1 , which is approximately 40%. This could be due to the fact that natural convection in alive adult mussels is greater than natural convection in dead adult mussels or due to the mussels' filtering (when they are alive). To test this hypothesis, the Ra of natural convection in alive and dead mussels was measured. The Ra of porous media can be calculated as follows: 𝑅𝑎 = 𝛽𝛥𝑇𝜅𝐷𝑔𝑃𝑟 𝜈 2 (2.5) where 𝛽 is the thermal expansion coefficient, 𝛥𝑇 is the temperature difference across the porous media, g is the acceleration due to gravity, 𝑃𝑟 is the Prandtl number, ν is the kinematic viscosity, 𝜅 is the permeability of the porous media. Only the permeability of the mussels is unknown among the parameters. In order to determine the permeability parameter, a new experimental setup was designed and constructed. The experimental setup (Figure 2.12) is a closed circuit and consists of transparent plastic tube (Φ=110mm, Lcyl=1000mm), differential pressure meter, flow meter, water valve, water pump and a water tank. Darcy's law is thus used to measure the permeability of the mussels. 
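The two calculations mentioned here can be scripted in a few lines. The sketch below (Python, illustrative only) deduces the permeability from Darcy's law using the pressure drop and flow rate measured in the closed circuit, and then evaluates the porous-medium Rayleigh number of Equation (2.5); the fluid properties and geometry are placeholder values, not the thesis measurements.

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Darcy's law for 1D flow through a porous plug:
    Q = kappa * A * dP / (mu * L)  ->  kappa = Q * mu * L / (A * dP)."""
    return Q * mu * L / (A * dP)

def porous_rayleigh(beta, dT, kappa, D, g, Pr, nu):
    """Porous-medium Rayleigh number, Eq. (2.5): Ra = beta*dT*kappa*D*g*Pr / nu**2."""
    return beta * dT * kappa * D * g * Pr / nu**2

if __name__ == "__main__":
    # Placeholder values for sea water at about 15 degC and an arbitrary mussel layer.
    kappa = darcy_permeability(Q=2.0e-4, mu=1.2e-3, L=1.0, A=9.5e-3, dP=500.0)
    Ra = porous_rayleigh(beta=2.0e-4, dT=5.0, kappa=kappa, D=0.07, g=9.81,
                         Pr=8.0, nu=1.2e-6)
    print(f"kappa = {kappa:.3e} m^2, Ra = {Ra:.1f}")
```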
To validate the permeability measurement method, the permeability of a porous medium formed from glass beads is measured experimentally and compared to a theoretical 59 value estimated using the Kozeny-Carman [START_REF] Rahrah | Networkinspired versus Kozeny-Carman based permeability-porosity relations applied to Biot's poroelasticity model[END_REF] equation modified by Macdonald et al [START_REF] Macdonald | A generalized Blake-Kozeny equation for multisized spherical particles[END_REF] (Equation 2.7) who modified the constants in the Kozeny-Carman equation by fitting a larger number of experimental data: 𝜅 = 𝑑 𝑝 2 𝜖 3 180(1-𝜖) 2 (2.7) Where dp is the spherical particle diameter, ϵ is the volumetric void fraction (porosity). As illustrated in the Figure 2.13, a glass beads porous medium with a 55 % porosity is placed in the experimental setup's tube. Following that, the Rayleigh number for both alive and dead adult mussels is calculated after the permeability of the adult mussels is measured. The difference in Rayleigh numbers for alive and dead adult mussels is not exceeding the 10 % as shown in Table 2.5. This difference is mainly due to variation of thermophysical properties with temperature. This indicates that natural convection in alive and dead adult mussels is approximately the same. As a result, a hypothetical difference in natural convection has a negligible effect in the difference in effective thermal conductivity between alive and dead mussels, where the effective thermal conductivity of alive adult mussels is significantly bigger than the effective thermal conductivity of dead adult mussels. So, this difference is due to the mussels' water filtering, which was discussed in section 1.3. When the mussels are alive, they create a suction of the water by taking their nutrition and then releasing the water, resulting in a microfluidic movement of the water in the pores of the mussels' clusters, which leads to a higher effective thermal conductivity than when the mussels are dead. Where dv is the average volumetric diameter and 𝜖 is the sphericity of the particles. In our case, an adult mussel (12 months) specimen is 6 cm length and 3 cm width as shown in Figure 2.17. Then, its average volumetric diameter is equal to 4.24 cm and its sphericity is equal to 0.8. As a result, the adult mussels have a porosity of 22.6 %. Figure 2.17: Size of an adult mussels specimen. However, the porosity of juvenile and mixed mussels is measured differently than that of adult mussels. The cluster of mussels with its interstitial water (Figure 2.18) are placed in a water bucket filled with water (Vw) to determine the mussel's cluster volume, which is labelled V1, as shown in Figure 2.18a. Next, we remove the mussels cluster (Figure 2.18b) and then we place it immediately above an empty bucket to collect the interstitial water and read its volume, which is labelled V11 in the Figure 2.18c. As a result, the water porosity of a mussel's cluster is equal to ϵmussels=V11/V1. This method was not applicable to adult mussels due to the high porosity. In this case, porosity was deduced from the permeability measured previously in this section. As shown in Table 2.6, the water porosity of mussels increases with age. Because the size of each mussel specimen grows with age which leads to a higher effective thermal conductivity of mussels as discussed before. As a result, the effective thermal conductivity of mussels is influenced by their water porosity as well as their filtering (nutrition). 
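As a companion to this paragraph, the following sketch (Python, illustrative values only) implements the modified Kozeny-Carman relation of Equation (2.7) and the volume-displacement porosity estimate used for the juvenile and mixed clusters; the volumes in the example are placeholders.

```python
def kozeny_carman(d_p, eps):
    """Modified Kozeny-Carman permeability, Eq. (2.7): kappa = d_p^2 * eps^3 / (180*(1-eps)^2)."""
    return d_p**2 * eps**3 / (180.0 * (1.0 - eps) ** 2)

def porosity_from_volumes(V_interstitial, V_cluster):
    """Water porosity of a mussel cluster: ratio of the interstitial water volume (V11)
    to the total cluster volume (V1) measured by displacement."""
    return V_interstitial / V_cluster

if __name__ == "__main__":
    print(f"kappa (16 mm glass beads, eps = 0.55) = {kozeny_carman(d_p=0.016, eps=0.55):.3e} m^2")
    print(f"cluster porosity (placeholder volumes) = {porosity_from_volumes(0.8, 3.5):.2f}")
```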
Heat transfer coefficient of the water around the mussels measurement results The analytical stationary method for estimating the heat transfer coefficient of the water around the mussels is validated by comparing the result of the heat transfer coefficient of the water around a tube without the mussels and two literature correlations (Churchill & Chu and Morgan [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]). Table 2.7 shows that the discrepancy between the experimental value hw and the one provided by Morgan correlation is equal to 6%, which is less than the discrepancy value of 29% provided by the Churchill & Chu's correlation. This is due to the fact that Churchill & Chu's correlation was set for a wide range of Rayleigh number (RaD ≤ 10 12 ), whereas Morgan's correlation is related to a narrow range of Rayleigh number (Table 9.1 in [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]). Moreover, the heat transfer coefficients obtained from the correlations were applied for an isothermal horizontal cylinder, whereas in our case the experimental tube is not completely isothermal. As a result, the difference between experimental and theoretical correlation values is acceptable. W.m -2 .K -1 for a uniform distribution of mussels around the tube for juvenile, mix (juvenile and adult), and adult mussels, respectively. However, the relative uncertainty reaches 19% -37% due to the fact that the position of the external thermocouple is not very accurate (± 5 mm) and the temperature difference between the mussel's external layer and the water is very small (less than 0.4 °C). Whereas, this relatively high uncertainty has no effect on the measured global resistance, as shown later. Considering the relatively high value of convective heat transfer of water around the mussels, its contribution to overall thermal resistance between the cable and the external water is small when compared to that of mussels (effective conductive thermal resistance of mussels). In Table 2.9, we show that the convective resistance of water around juvenile, mix, and adult mussels is 2.17 %, 8.33 %, and 4.35 % of the overall thermal resistance, respectively. Table 2.9 shows the overall thermal resistance with different ages of mussels covering the experimental tube. In the case of mussels covering the experimental tube, the overall thermal resistance consists of convective thermal resistance of the water around the mussels and the mussels' conductive thermal resistance. Whereas the convective thermal resistance around the mussels is the reciprocal of the product of the convective heat transfer coefficient of the water around the mussels and the surface area of the mussels, however, the thermal resistance in conduction of the mussels is determined by ( ln (𝑜𝑢𝑡𝑒𝑟 𝑟𝑎𝑑𝑖𝑢𝑠/𝑖𝑛𝑛𝑒𝑟 𝑟𝑎𝑑𝑖𝑢𝑠) 2𝜋(𝑙𝑒𝑛𝑔𝑡ℎ)(𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 𝑡ℎ𝑒𝑟𝑚𝑎𝑙 𝑐𝑜𝑛𝑑𝑢𝑐𝑡𝑖𝑣𝑖𝑡𝑦) ). Therefore, results show that the overall thermal resistance of juvenile mussels (0.046 K.W -1 at T=10 K) is higher than the overall thermal resistance of mix mussels (0.036 K.W -1 at T=4.2 K) and adult mussels (0.023 K.W -1 at T=4.8 K) respectively. This is due to the fact that the conductive thermal resistance dominates the overall thermal resistance and the effective thermal conductivity of juvenile mussels is smaller than the effective thermal conductivity of mix and adult mussels. This can provide an indication of the expected effect of mussels on the temperature distribution in an electrical cable, as investigated in the next chapter. 
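The correlation comparison and the decomposition of the overall resistance discussed in this section can be reproduced with a short script. The sketch below (Python, not from the thesis) evaluates the Churchill & Chu correlation for natural convection around a horizontal isothermal cylinder, converts the Nusselt number to a heat transfer coefficient, and splits the overall resistance into its conductive (mussel layer) and convective (water film) parts; the Rayleigh number, Prandtl number, diameters and conductivity are placeholder values.

```python
import math

def nu_churchill_chu(Ra_D, Pr):
    """Churchill & Chu correlation for a horizontal cylinder (valid up to Ra_D ~ 1e12)."""
    return (0.60 + 0.387 * Ra_D**(1/6) / (1 + (0.559 / Pr)**(9/16))**(8/27)) ** 2

def resistances(k_biof, h_w, r_i, r_e, L):
    """Conductive resistance of the mussel layer and convective resistance of the water film."""
    R_cond = math.log(r_e / r_i) / (2 * math.pi * L * k_biof)
    R_conv = 1.0 / (h_w * 2 * math.pi * r_e * L)
    return R_cond, R_conv

if __name__ == "__main__":
    Nu = nu_churchill_chu(Ra_D=1e7, Pr=8.0)      # placeholder Ra_D and Pr for sea water
    h = Nu * 0.6 / 0.22                          # h = Nu * k_water / D, placeholder external diameter
    R_cd, R_cv = resistances(k_biof=4.4, h_w=h, r_i=0.04, r_e=0.11, L=0.61)
    total = R_cd + R_cv
    print(f"h = {h:.0f} W.m-2.K-1, conductive share of total resistance = {100*R_cd/total:.1f} %")
```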
However, in practice, the configuration changes as the composition and thickness of the deposit change over time during successive seasons of mussel growth. we can anticipate that the higher thermal resistance associated with the increased deposit will eventually prevail, resulting in a situation in which the tube is more thermally insulated, causing overheating of the electric cable, which will be detrimental to its service life. Thermal characterization for a non-uniform colonization of mussels around the tube Mussels do not grow uniformly around submarine cables in real offshore installations. One reason for this is the sun's non-uniform irradiation. Such a situation with non-uniform biofouling growth around a horizontal tube should also be investigated. As a result, the effective thermal conductivity of the non-uniform mussel distribution around the tube, as well as the heat transfer coefficient of the water around them, were measured. In the absence of an analytical model to calculate the effective thermal conductivity from measurements in the case of the non-uniform distribution of mussels, a numerical tool will be used for parameter estimation. For this purpose, the effective thermal conductivity of the mussels and the heat transfer coefficient of the water around the mussels are estimated using a 2D steady state thermal model computed using finite elements (COMSOL software), with the following heat conduction equation: 𝑑𝑖𝑣(-𝑘𝛻 ⃗ 𝑇) = 0 (2.9) where k is the thermal conductivity (W.m -1 .K -1 ), T is the absolute temperature (K). Where, Tw is the temperature of the surrounding water, hw1 and hw2 is the heat transfer coefficient of the water around the mussels and the tube, respectively. T1,T2,T3 are the temperatures of surface of the experimental tube. T4,T5,T6 are the temperature of the external layer of mussels. In the biofouling region, a mesh with 2000 to 5000 nodes was used, while in the aluminium tube region, a mesh with 1000 nodes was used. As shown in Figure 2.20, the fine mesh type is used. In the 2D thermal model, the external temperature of the system is the one of the water and the heat rate of power source is provided by the voltage and current measurements on the heating elements. Table 2.11 shows the heat transfer coefficient of the water around the mussels with non-uniform colonization around the aluminium tube covered in polyethylene, as well as the estimated effective thermal conductivity of juvenile mussels. The effective thermal conductivity of juvenile mussels for 100% colonization differs from the value obtained previously with an experimental aluminium tube (4.4 W.m -1 .K -1 ) and the one obtained here with an aluminium tube covered with a polyethylene layer (1.6 W.m -1 .K -1 ). This difference is mainly due to the fact that the temperature gradient between the external and internal radiuses of the mussel layer is not the same in both cases (10 K for aluminium tube and 6 K for aluminium tube covered with polyethylene). That means that convection in the porous medium will not be the same involving different effective thermal conductivities. The results show that the effective thermal conductivity of uniform (100%) and non-uniform (25% and 50%) colonization is roughly of the same order of magnitude. The difference can be attributed to measurement accuracy and the temperature gradient between the external and internal layers of mussels, which is not exactly the same for the three different configurations. 
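To make the estimation step concrete, here is a schematic least-squares fit of (kbiof, hw) to the measured boundary temperatures (Python, using scipy); the forward model below is only a stand-in 1D conduction/convection solution, whereas the thesis uses the 2D COMSOL finite-element model, and the "measurements" are synthetic. The same forward model also yields, by finite differences, the scaled sensitivities discussed just below.

```python
import numpy as np
from scipy.optimize import least_squares

Q, L, R_I, R_E, T_W = 40.7, 0.61, 0.04, 0.11, 15.0   # placeholder operating point

def forward(k_biof, h_w):
    """Stand-in 1D radial model returning (T_internal, T_external) of the mussel layer.
    The thesis replaces this with a 2D finite-element computation."""
    T_ext = T_W + Q / (h_w * 2 * np.pi * R_E * L)
    T_int = T_ext + Q * np.log(R_E / R_I) / (2 * np.pi * L * k_biof)
    return np.array([T_int, T_ext])

def residuals(params, T_meas):
    k_biof, h_w = params
    return forward(k_biof, h_w) - T_meas

def scaled_sensitivity(params, i, rel=1e-3):
    """beta_i * dT/dbeta_i estimated by central finite differences."""
    hi, lo = params.copy(), params.copy()
    step = params[i] * rel
    hi[i] += step
    lo[i] -= step
    return params[i] * (forward(*hi) - forward(*lo)) / (2 * step)

if __name__ == "__main__":
    T_meas = forward(1.6, 250.0) + np.random.normal(0, 0.05, 2)   # synthetic "measurements"
    fit = least_squares(residuals, x0=[1.0, 100.0], args=(T_meas,),
                        bounds=([0.1, 10.0], [20.0, 2000.0]))
    print("estimated k_biof, h_w:", fit.x)
    print("scaled sensitivity to k_biof:", scaled_sensitivity(fit.x, 0))
```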
One can note that there is a difference for the effective thermal conductivity of juvenile mussels for 100% colonization between the value obtained in section 2.3.2 with an experimental aluminum tube (4.4 W.m -1 .K -1 ) and the one obtained here with an aluminum tube covered with a polyethylene layer (1.6 W.m -1 .K -1 ). This difference is mainly due to the fact that the temperature gradient between the external and internal radiuses of the mussel layer is not the same in both cases (10 K for aluminum tube and 6 K for aluminum tube covered with polyethylene). As in the previous section the convective coefficient values are relatively high and show some disparity. We have already seen that this parameter has a minor impact on global resistance. In the addition of the values of convective coefficients, Table 2.11 also shows the sensitivity of the measured temperature to these parameters defined by βs dT/dβs where βs is the parameter (kbiof, hw1 or hw2). It can be noted that the sensitivity value to the heat transfer coefficients are small compared to the one for the effective thermal conductivity. Because of the convective coefficient's low sensitivity and low contribution to global thermal resistance, its discrepancy has a minor impact on determining the effective thermal conductivity of mussels. The water in our current study is stagnant. However, there is a current flow velocity of water in the system in real-world application, then the effect of the global heat transfer coefficient of the water around the cable and in the mussel's layer (porous medium) must be considered. Effective thermal conductivity with water current flow -Glass beads medium The effective thermal conductivity of a porous medium made of glass beads around the experimental tube will be measured in this section while it is subjected to various water current velocities generated by a water traction pool in order to test the impact of the current water on a porous medium's effective thermal conductivity. The traction pool located at Ecole Centrale de Nantes, France. It is dimensions are 140 m long, 5 m wide, and has a constant depth of 3 m (Figure 2.22). The water temperature in the traction pool is 16.9 °C. 73 In order to create a porous medium made from glass beads with a porosity of about 43%, a cluster of glass beads (Φbeads=16mm) was distributed around the same experimental tube as in section 2.2 and kept in place using a steel net. To prevent any axial heat loss in the tube, the ends of the experimental tube were coupled with polyoxymethylene (POM) (Figure 2.24). Table 2.12 displays the average temperature distribution of the internal and external layers of the glass beads medium at various water current velocities. The development of the boundary layer on the surface has a significant impact on the Nusselt number for a flow motion normal to the axis of a smooth circular cylinder. As a result, the increase in Nu with increasing Re is due to a corresponding decrease in the thickness of the boundary layer which leads to the increase of the forced heat transfer coefficient (hforced). In consequence, the temperature of the thermocouples at the internal and external layers and the difference between them for the glass beads which is near from the stagnation point at (θ=333°) is lower than the temperature of the internal and external thermocouples and the difference between them at (θ=90°) and (θ=210°), respectively. 
Therefore, in the presence of a water current, the effective thermal conductivity at (θ=333°) is higher than the effective thermal conductivity at (θ=90°) and (θ=210°), respectively, as shown in Table 2.13. The forced heat convection increases with increasing Reynolds number, since the Nusselt number, calculated using the correlation of Zukauskas [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF], increases. During the measurement we do not have the same spatial resolution as in the smooth-cylinder case (Figure 2.30), because the experimental tube carries thermocouples at only three angular positions (90°, 210°, and 330°). As we cannot follow all of these local evolutions, we consider only the average of these points. Therefore, using Fourier's law, the average effective thermal conductivity of the glass beads,

$$k_{e,beads} = \frac{k_{e,beads,\theta=90°} + k_{e,beads,\theta=210°} + k_{e,beads,\theta=330°}}{3},$$

is calculated for various water current velocities from the measured temperature difference between the internal and external layers of the glass beads medium and the imposed power (q = 102 W). As shown in Figure 2.31, and as expected, the average effective thermal conductivity of the glass beads medium increases as the water current velocity increases. A power-law model is obtained by plotting ln(ke,beads/kstagnant,beads) as a function of ln(Re), which gives:

$$\frac{k_{e,beads}}{k_{stagnant,beads}} = 1.8158\times 10^{-6}\, Re^{1.57} \qquad (2.12)$$

Figure 2.32 then allows us to compare the model with the experimental data, and the agreement is very good. In the future it will be interesting to check whether such a model also holds for mussel layers of different ages and thicknesses. Measurement at site In the thermal characterization performed in this chapter, the mussels were fixed manually on the tube, which can affect the contact between the biofouling and the experimental tube and introduce a thermal contact resistance. Moreover, our choice of a single species is not representative of the diversity of biocolonization at some sites. To overcome this limitation, natural growth of biofouling on the surface of the experimental tube is needed. The goal of this section is to evaluate the thermal conductivity of biofouling that has grown naturally on experimental tubes in the Mediterranean Sea and the Atlantic Ocean. This work is linked to the Biodytherm8 project, funded by the Région Pays de la Loire (WEAMEC), a collaborative project between Nantes University and the Akrocean company. The study used six experimental tubes in the Mediterranean Sea and two in the Atlantic Ocean (Figure 2.33); four of them were covered with polyethylene (the material used for actual submarine cables) and four were not. Using experimental tubes with and without polyethylene covers makes it possible to compare how colonization is influenced by the surface condition. The main objective was to create a database of the effective thermal conductivity of biofouling as a function of time and location, which influence the type of biofouling. This database will be used to assess the impact of biofouling growth on dynamic submarine electrical cables. Partial Conclusion In this Chapter, the thermal characterization of mussels was carried out in stagnant water. The measurement of the thermal properties of this natural medium (biofouling) is rarely addressed in the marine energy literature.
The experimental work was very challenging because we were working with a live sample and had to complete all measurements within 24 hours in order to keep the mussels alive. In the uniform mussel colonization case, the thermal conductivity of different ages of mussels (juvenile, mix (juvenile and adult), and adult) as well as the heat transfer coefficient of the water around the mussels were measured. Results show that juvenile mussels have the lowest effective thermal conductivity when compared to a mix mussels and adult mussels only due to the water porosity effect, where porosity of juvenile mussels is less than the mix mussels and adult mussels respectively. The effect of water porosity has been highlighted and illustrated using a glass beads medium, it is demonstrated that the natural convection in the porous media significantly enhances the effective thermal conductivity of the glass porous media. Moreover, the effective thermal conductivity of adult mussels was measured when the mussels are alive and when they are dead. Results shows that the effective thermal conductivity of adult alive mussels is higher than the effective thermal conductivity of adult dead mussels due to the water filtering effect which creates an additional micro-movement leads to increasing of the effective thermal conductivity of the mussels. On the other hand, the effective thermal conductivity of the mussels increases with the increasing of the temperature difference between the internal layer of the mussels and the external layer of the mussels due to natural convection phenomenon. So as a summary, in case of stagnant water, the effective thermal conductivity of mussels is affected by three factors: the water porosity, the temperature difference between their internal and external layer and their filtering phenomenon (nutrition). Another study has also been done to demonstrate how the water current affects the effective thermal conductivity of a porous medium formed from glass beads. It was demonstrated that the effective thermal conductivity of glass beads in the presence of current water increases with increasing water current velocity as the forced heat transfer coefficient around the glass beads medium increases. Therefore, it's expected the same for the effective thermal conductivity in the presence of the water current. A power law model was proposed to describe this model. The results also demonstrate that the effective conductive resistance of mussels can be greater than the convective resistance surrounding the tube in their absence. Therefore, it would be interesting to investigate the thermal impact of mussel colonization near the DSEC. In order to better understand this issue, the following Chapter will focus on this topic. Chapter Thermal Effect of Mussels on the temperature of the dynamic submarine electrical cable Introduction In Chapter 2, we demonstrated that the age of the mussels changes the water porosity of the biocolonization, thus affecting the global thermal resistance around the dynamic submarine electrical cable. It was shown that thermal resistance of a layer of mussels can be lower or higher than convective resistance without mussels. Indeed, as described in section 2.3.3, the effective conductive resistance of juvenile mussels is higher than the convective resistance around the tube in the absence of mussels. However, the effective conductive resistance of mixed mussels is slightly bigger and the effective conductive resistance of adult mussels is slightly lower. 
Thus, mussel deposits on the surface of the marine power cable can form an insulating layer, resulting in increased thermal resistance which can lead decrease the fatigue life time of the cable. Due to the fact that, as was mentioned in Chapter 2, the maximum conductor operating temperature of a dynamic submarine electrical cable (DSEC) insulated with cross-linked polyethylene (XLPE), Figure 3.1, is 90 °C [START_REF]Electric cables-Calculation of the current rating-Current rating equations (100% load factor) and calculation of losses[END_REF]. That means that the conductor of an XLPE cable can be maintained at 90 °C for an extended period of time without further deterioration of the insulating material, which would otherwise reduce the mechanical properties of the cable and shorten its fatigue lifetime. Whereas the expected lifetime of XLPE insulation dynamic submarine electrical cable is between 40 and 60 years at rated operating conductor temperature of 90 °C and between 7 and 30 years at rated operating conductor temperature of 95 to 105 °C [START_REF] Alghamdi | A study of expected lifetime of XLPE insulation cables working at elevated temperatures by applying accelerated thermal ageing[END_REF]. Additionally, temperature fluctuations in the DSEC can increase the mechanical stresses in the cable's parts, which in turn reduces the fatigue lifetime of the cable. Moreover, as the temperature rises, the resistivity of the copper conductor used in the DSEC increases, resulting in an increase in energy loss. To address this issue, the cross section of the copper wire conductor must be increased, adding significantly to the cost. As a result, investigating the thermal effect of mussel's colonization on a real submarine power cable could be interesting. In this Chapter, the permissible current rating for a specific operating temperature is determined for a real DSEC using an analytical model based on an international standard (IEC standard). This yields to a power that can be used to investigate the thermal impact of various mussel ages on the temperature of the copper wire in a 2D numerical simulation model of the DSEC (finite elements via COMSOL). Analytical Model Based on IEC Standard Under steady-state conditions, IEC standards 60287-1-1 [START_REF]Electric cables-Calculation of the current rating-Current rating equations (100% load factor) and calculation of losses[END_REF] and 60287-2-1 [START_REF]The International Electroctechnical Comission. Electric cables-Calculation of the current rating, Thermal resistance-Calculation of thermal resistance[END_REF] are used to calculate the permissible current rating in accordance with operating temperature. It denotes a constant current load (100 percent load factor) that is only sufficient to generate the maximum conductor temperature asymptotically and assumes that the conditions in the surrounding ambient are constant. This method is widely used all over the world. It covers medium to high voltage cables, a wide range of installation methods, and a formula for current rating and losses. The main goal is to determine the relationship between the cable and the factors influencing heat dissipation, such as the thermal resistance of cable components, load, and the surrounding environment. For illustrating the method, we use a cable for which data were available: a 20 kV used in OMDYN2 project. 
It's a cross-linked polyethylene (XLPE) insulated dynamic submarine electrical cable (DSEC) designed to sustain a maximum conductor temperature of 90 °C continuously. It is made up of 3 × 50 mm 2 copper conductors insulated with XLPE. The cable is being covered with a double wire armor to increase torsional stiffness due to dynamical application and to protect it from mechanical stress, floating debris, and friction caused by the cable touching the seabed. Figure 3.2 shows the DSEC's complex hierarchical cross-sectional structure. Where R1 is the thermal resistance per unit length between one conductor and the sheath (K.m.W -1 ), 𝑅 1 = 𝜌 𝑡 𝑖 2𝜋 . ln (1 + 2𝑡 𝑖𝑛𝑠,𝑠𝑒𝑚𝑖𝑐  one,con ) (3.1) Where, 𝜌 𝑡 𝑖 is the thermal resistivity of the insulation (K.m.W -1 ), tins,semic thickness of the insulation including semi-conductive layers (m) and one,con diameter of one core (m). R2 is the thermal resistance per unit length of the bedding between sheath and armour (K.m.W -1 ), 𝑅 2 = 1 6𝜋 . 𝐺. 𝜌 𝑡 𝑏 (3.2) 93 Where, 𝜌 𝑡 𝑏 thermal resistivity of the cable bedding (K.m.W -1 ) and G is the geometric factor obtained using an empirical curve in IEC standard 60287-2 [START_REF]The International Electroctechnical Comission. Electric cables-Calculation of the current rating, Thermal resistance-Calculation of thermal resistance[END_REF] by calculating the rate rG as shown in equation (3.3): 𝑟 𝐺 = 𝑡 𝑏 + 𝑡 𝑃𝐸 𝑠  𝑖𝑛𝑡,𝑖𝑛𝑠 + 2𝑡 𝑠 (3.3) Where, ts is the thickness of metallic sheath (m), tb is the thickness of bedding itself (m), 𝑡 𝑃𝐸 𝑠 is the thickness of the inner plastic sheath (m), int,ins is the external diameter of insulation(m). R3 is the thermal resistance per unit length of the external serving of the cable (K.m.W -1 ), 𝑅 3 = 𝜌 𝑡 𝑜𝑐 2𝜋 . ln (1 + 2𝑡 3  𝑎𝑟 ) (3.4) Where, 𝜌 𝑡 𝑜𝑐 is the thermal resistivity of the outer covering (m), t3 is the thickness of the outer covering (m),  𝑎𝑟 is the external diameter of the armor (m). R4 is the thermal resistance per unit length between the cable surface and the surrounding medium (K.m.W -1 ), Where Ic is the current flowing in one conductor (A), ΔTc(K) = Tc -Ta, n is the number of load-carrying conductors in the cable. λ1 and 𝜆2 are calculated using IEC standard 60287-1 [START_REF]Electric cables-Calculation of the current rating-Current rating equations (100% load factor) and calculation of losses[END_REF], where λ1 is the ratio of losses in the metal sheath to total losses in all conductors in that cable, 𝑅 𝜆 1 = 𝜆 1 ′ + 𝜆 1 ′′ (3.8) Where, 𝜆 1 ′ is the loss caused by circulating currents in the sheath, expressed in equation (3.9) and 𝜆 1 ′′ is the loss caused by circulating eddy currents in the sheath, however, for a three core cable such what we consider here, with a metallic sheath per core conductor, there are no losses relative to eddy current. Therefore, 𝜆 1 ′′ = 0. Where, 𝑑 𝑐𝑜𝑟𝑒,𝑐𝑜𝑛 is the axial distance between core conductors (m), 𝑡 𝑠 is the thickness of the metallic sheath,  𝑒𝑥𝑡,𝑖𝑛𝑠 is the external diameter of the insulation, (𝑑 𝑒𝑥𝑡,𝑖𝑛𝑠 + 𝑡 𝑠 ) is the mean diameter of the screen (m) and 𝜋 ((𝑑 𝑒𝑥𝑡,𝑖𝑛𝑠 + 𝑡 𝑠 ) 2 -𝑑 𝑒𝑥𝑡,𝑖𝑛𝑠 2) is the cross section of the metallic sheath in (m 2 ) as defined in the IEC standard 60827-1 [START_REF]Electric cables-Calculation of the current rating-Current rating equations (100% load factor) and calculation of losses[END_REF] λ2 is the ratio of losses in the armouring to total losses in all conductors in that cable. 
𝜆 is the alternating current resistance per unit length of the conductor at maximum operating temperature (Ω.m -1 ): 𝑅 𝐴𝐶 𝑇 𝑐𝑜𝑛 ′ = 𝑅 20°𝐶 ′ (1 + 𝛼 20 𝑐 (𝑇 𝑐𝑜𝑛 -20°C)(1 + 𝑦 𝑝 + 𝑦 𝑠 ) (3.15) Where 𝑅 20°𝐶 ′ is the DC resistance of the conductor at 20 °C, ys is the skin effect factor and yp is the proximity effect factor. Wd is the dielectric loss per unit length for the insulation surrounding the conductor (W.m -1 ): 𝑊 𝑑 = 𝜔𝐶𝑈 0 2 𝑡𝑎𝑛𝛿 𝑙 (3.16) Where ω is the angular frequency, 𝐶 refers to the cable capacitance per unit length (F.m -1 ), U0 is the voltage to earth and 𝛿 𝑙 is the loss angle. The thermal resistances and the heat losses in the submarine power cable used for our calculations are shown in Table 3.1. Table 3.1: Thermal resistances and heat losses in the submarine power cable. Symbol Material Value R1 Thermal resistance between one conductor and the sheath per unit length (K.m.W -1 ) 0.587 R2 Thermal resistance of the bedding between the sheath and the armor per unit length (K.m.W -1 ) 0.095 Alternating current resistance per unit conductor length at maximum operating temperature (Ω.km -1 ) 0.5 R3 𝑊 𝑑 The dielectric loss per unit length of the insulation that surrounds the conductor (W.m -1 ) 0.074 * These resistances are calculated for 90 °C operating temperature and 20 °C ambient temperature. Numerical Simulation of Temperature Field in a 2D DSEC In this section, the thermal impact of mussels of various ages on the copper wire of the DSEC used in the OMDYN2 project is investigated. To do this, a two-dimensional steady-state thermal model, consist of the exact shape and materials of the DSEC computed using finite elements (COMSOL software) and a mesh with normal size elements of 94124 nodes in the DSEC (Figure 3.4), is used to predict the temperature distribution within the cable. As mentioned earlier, the DSEC is designed to support a maximum copper wire temperature of 90 °C. Therefore, for a maximum operating temperature of 90 °C and an ambient temperature of 20°C, the maximum permissible current rating (I90°C =364 A) is calculated using equation (3.7). The obtained current rating (364 A) is then imposed in the numerical method of DSEC 2D thermal modeling (COMSOL software). Moreover, with an external temperature of 20 °C, a convective heat transfer coefficient (247 W.m 2 .K -1 ) is applied to the outer layer of the DSEC; this convective heat transfer coefficient is calculated using the classic natural convection formula (Morgan's correlation [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]). The steady-state temperature distribution in the cable is depicted in Figure 3.5. According to the results, the maximum conductor temperature calculated using a two-dimensional numerical thermal model is 90.15 °C, which is nearly equal to the maximum operating temperature of 90 °C. This means that the numerical approach is validated because the simulation code (COMSOL) produces the same temperature the IEC-60827 standard. The numerical thermal model can then be used to investigate the thermal effect of biofouling colonization on the DSEC conductor temperature. Taking into consideration that the numerical thermal model has been tested with a variety of mesh element sizes, including normal (94124 nodes), finer (113116 nodes), and extremely fine (186128 nodes), it has produced the same copper conductor temperature (90.15 °C) of the DSEC. 
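The chain of IEC-60287-style calculations described above can be scripted compactly. The sketch below (Python, illustrative only) implements Equations (3.1), (3.4), (3.15) and (3.16) as written in the text, together with the steady-state rating equation in its commonly published IEC 60287-1-1 form; the loss factors λ1, λ2 and the resistances R2 and R4 are taken as inputs rather than recomputed, and all numerical values are placeholders, not the OMDYN2 cable data.

```python
import math

def R1(rho_ins, t_ins_semic, d_core):
    """Eq. (3.1): thermal resistance between one conductor and the sheath, K.m/W."""
    return rho_ins / (2 * math.pi) * math.log(1 + 2 * t_ins_semic / d_core)

def R3(rho_oc, t3, d_armour):
    """Eq. (3.4): thermal resistance of the external serving, K.m/W."""
    return rho_oc / (2 * math.pi) * math.log(1 + 2 * t3 / d_armour)

def R_ac(R_dc20, alpha20, T_con, y_p, y_s):
    """Eq. (3.15): AC conductor resistance at operating temperature, ohm/m."""
    return R_dc20 * (1 + alpha20 * (T_con - 20.0)) * (1 + y_p + y_s)

def W_d(omega, C, U0, tan_delta):
    """Eq. (3.16): dielectric loss per unit length, W/m."""
    return omega * C * U0**2 * tan_delta

def rating(dT, Wd, R, R1_, R2_, R3_, R4_, n, lam1, lam2):
    """Permissible current (A), steady-state rating equation of IEC 60287-1-1."""
    num = dT - Wd * (0.5 * R1_ + n * (R2_ + R3_ + R4_))
    den = R * R1_ + n * R * (1 + lam1) * R2_ + n * R * (1 + lam1 + lam2) * (R3_ + R4_)
    return math.sqrt(num / den)

if __name__ == "__main__":
    # Placeholder inputs: R in ohm/m (0.5 ohm/km), resistances in K.m/W, losses in W/m.
    I = rating(dT=70.0, Wd=0.074, R=0.5e-3, R1_=0.587, R2_=0.095, R3_=0.10,
               R4_=0.012, n=3, lam1=0.1, lam2=0.2)
    print(f"permissible current ~ {I:.0f} A")
```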
Thus, juvenile mussels have the highest effect on the DSEC conductor temperature, because as previously described in Section 2.3.3, juvenile mussels have the greatest thermal resistance and thermal resistance decreases with the age of the mussels due to the increase in water porosity (increase in natural convection). 100 The copper conductor temperature (Tc) were also calculated analytically using equation (3.7) by using the effective thermal conductivity of the mussels and the heat transfer coefficient of the water around them to calculate the thermal resistance R4 and by imposing the same maximum permissible current rating (I90°C =364 A). Results show in ) between the analytical method and simulations is not exceeding the 0.47%. So another method is used to validate the numerical simulation. As far as natural convection is involved in the porous medium, one can compare its effect to the one of the natural convection in water around the cable without mussels. This can explain the effect of mussels on the copper conductor temperature of the electrical cable. Rresults show in Table 3.3 that the overall thermal resistance of juvenile mussels around the electrical cable (0.022 K.m.W -1 at T=10 K) is higher than the overall thermal resistance around the electrical cable in the absence of mussels (0.012 K.m.W -1 ) and it is slightly higher for mix mussels (0.017 K.m.W -1 at T=4.2 K) and smaller for adult mussels (0.011 K.m.W -1 at T=4.8 K). This explains why the copper conductor temperature for electrical cable colonized by juvenile or mixed mussels is higher than 90°C and lower than 90°C for electrical cable colonized by adult mussels. However, as the effective thermal conductivity and the heat transfer coefficient changes with the imposed power during the measurement which leads to have a different temperature difference (between the internal layer and the external layer of the mussels), then, the thermal resistances are changed. This result appears to indicate that the temperature of the conductor decreases with mussel age and can be even lower than the temperature without mussels, which is used as a reference here. This observation must be moderated due to several aspects of our study: (i) the experiments and simulations are conducted with a single layer of mussels of roughly the same order of thickness. In reality, as time passes from one season to the next, a new layer of mussels grows on top of the existing one. The water porosity and effective thermal conductivity will then evolve over time. This may cause the conductor to overheat. (ii) As previously stated, the value of effective thermal conductivity is related to natural convection in the porous medium composed of mussels. This natural convection effect is determined by the temperature difference across the mussel layer. The goal is to investigate the effect of mussel colonization on the copper conductor temperature of an electrical cable with a maximum operating temperature of 90 °C. In this simulation, we used the power calculated according to the IEC standard (P90°C). Then, we imposed the measured effective 102 thermal conductivity of the various mussel layers. The power values used in the experimental measurement differ from the power calculated using the IEC standard, which is used in the numerical simulations. This results in a difference in ΔT between experimental measurements and numerical simulations, as shown in Table 3.4. T is the temperature difference between the inner and outer radiuses of the mussel layer. 
This disparity in ΔT values is expected because the power imposed in numerical simulation differs from the power imposed in experimental measurement, and effective thermal conductivity depends on ΔT. This difference affects the determination of the simulated temperature of the copper conductor. In order to improve the accuracy of copper conductor temperature determination, the appropriate effective thermal conductivity of the mussels must be imposed in the numerical simulation. To achieve this, the same power (P90°C) used in the simulations which leads to cable conductor temperature of 90 °C (in case of without mussels) is applied during one of the measurements carried out in May 2022 (section 2.3.2). This results to mussels' effective thermal conductivity of adult mussels 9.1 W.m -1 .K -1 and a ΔT of 3K which is lower than the effective thermal conductivity (12.8 W.m -1 .K -1 ) measured at ΔT equal to 4.8 K. Then, the obtained value (9.1 W.m -1 .K -1 ) is applied to the adult mussels layer that has been added to the numerical method of DSEC 2D thermal modeling (COMSOL software), where the natural convection is taken into account around the mussels layer that has the same thickness during the measurements, which is 7 cm at an ambient temperature of 20 °C. Additionally, the simulation imposes the same permissible current rating (364 A) that results in a cable conductor temperature of 90 °C without mussels. Results demonstrate that the cable's copper conductor temperature, which is equal to 90.8 °C when kbiof,adult=9.1 W.m -1 .K -1 at ΔT= 3 K, is higher than the cable's copper conductor temperature, which is equal to 89.6 °C at kbiof,adult=12.8 W.m -1 .K -1 and ΔT= 4.8 K. Where the overall thermal resistance of the adult mussels (0.0208 K.m.W -1 at ΔT= 3 K) is higher than the convective thermal resistance without mussels (0.012 K.m.W -1 ). From measurements which has been done in July 2020, the effective thermal conductivity of juvenile mussels is measured by imposing two different powers 40.74 W and 224 W which results an effective thermal conductivity of juvenile mussels of 1.6 W.m - 1 .K -1 (ΔT=6 K) and 4.4 W.m -1 .K -1 (ΔT=10 K), respectively. Therefore, the effective thermal conductivity of juvenile mussels is estimated at the power P90°C from the curve fitting of two previous measurements as shown in Figure 3.8. Results show that the effective thermal conductivity of juvenile mussels at P90°C is equal to 2.45 W.m -1 .K -1 and ΔT equal to 7.25 K. Then, the obtained value (2.45 W.m -1 .K -1 ) of the effective thermal conductivity of juvenile mussels is imposed in the DSEC thermal model with a power (P90°C), resulting in a copper conductor temperature of 95.18 °C, where the overall thermal resistance (0.044 K.m.W -1 at ΔT=7.25 K) is higher than the convective thermal resistance of water in case of without mussels (0.012 K.m.W -1 ). The obtained copper conductor temperature (95.18 °C) may have an impact on the lifetime of the DSEC's XLPE insulation as discussed in section 3.1. It is important to note that all thermal modeling examples for the thermal impact of mussels on the DSEC are based on the assumption that the water surrounding the mussels is stagnant. As a result, the DSEC conductor temperature is consequently thermally affected by the presence of the mussels, particularly the juvenile mussels as shown in Figure 3.9. Therefore, it's worthwhile to track the growth of mussels around the DSEC in order to determine what kinds of mussels colonize the cable. 
Juvenile mussels thus have a larger effect on the DSEC conductor temperature than mixed and adult mussels, leading to a rise in the copper conductor temperature of the cable (up to 95.18 °C). This rise in copper conductor temperature reduces the fatigue lifetime of the XLPE insulation and hence the overall fatigue lifetime of the cable; in that case the dynamic submarine electrical cable requires maintenance, which is costly and not easy to perform. Monitoring mussel growth and cleaning the mussels from around the cable as needed would help to reduce the impact of mussel bio-colonization on the DSEC. The following chapter describes a novel method for monitoring biofouling growth with a thermal sensor.

Chapter 4. Toward tracking of biofouling growth - Transient thermal analysis

Introduction

As previously stated, the dynamic submarine electrical cable, also known as the umbilical cable, is an important component for offshore operators. A dynamic cable is generally designed to withstand coupled mechanical-electrical-thermal constraints. However, biofouling growth, which has a strong impact on the cable's local linear mass, apparent diameter, surface roughness and, last but not least, on cable cooling (it adds a thermal resistance that reduces heat transfer between the cable and the water), can exacerbate these coupled constraints. Monitoring the growth of biofouling will thus be useful for electrical cable maintenance. Thermal tracking with a flux metric device (thermal sensor) is an efficient method because it is not affected by weather, diver abilities, or the other obstacles encountered during an underwater inspection. The experimental setup (flux metric device, or thermal sensor) is composed of a stainless steel layer with a thermocouple on top, a heating element made of polyimide (Kapton) with a resistance of 6.3 ohm, a thermoplastic layer made of polyethylene, a layer of rubber and, finally, an aluminum block at the bottom. The surface area of each of these layers is 0.01 m2 (10 x 10 cm2), and they are stacked in that order (Figure 4.1).

Thermal models for the estimation of the heat transfer coefficient

The basic idea of the thermal method for monitoring the growth of the biofouling is to estimate the heat transfer coefficient for mussels of varying ages. Using the flux metric device, a heating pulse (φ) of a few seconds' duration is applied under the biofouling, and the temperature evolution and decay (T (K)) versus time are recorded (Figure 4.2) and used with a thermal model to estimate the heat transfer coefficient describing heat transfer through the biofouling. In this section we present the mathematical models used to calculate this coefficient: the first is a simplified model (capacitive model) and the second is an exact model (conductive model).

Capacitive model

The capacitive model is one method for determining the heat transfer coefficient. It is a model suited to a sensor with an impulse flux law.
As shown in Figure 4.3, where T_w is the temperature of the wall, T_h the temperature of the sensor and φ_d the flux density dissipated by the sensor's heating system, T_h differs from T_w, so the dissipated flux density can be written as:

φ_d = φ_p + δφ   (4.1)

where δφ is the flux disturbance introduced by the heating system of the sensor, which can be written in the form:

δφ = h'(T_h - T_w)   (4.2)

where h' is the local heat transfer coefficient. If we consider that the sensor is perfectly insulated from the wall and that the heating law is of the impulse type, the energy balance is written:

φ_d = h(T_w - T_E) + h'(T_h - T_w) + C dT_h/dt   (4.3)

where C is the heat capacity of the sensor per unit surface area and h is the equivalent heat transfer coefficient. In the case of an unheated sensor, i.e. φ_d = 0, the temperature of the sensor without heating is T_h0; it depends only on the fluctuations of h, T_E and T_w. Equation (4.3) can then be written as:

h(T_w - T_E) + h'(T_h0 - T_w) + C dT_h0/dt = 0   (4.4)

hence

h'(T_w - T_h0) - C dT_h0/dt = h(T_w - T_E) = φ_p   (4.5)

so that equation (4.3) becomes:

φ_d = h'(T_h - T_h0) + C d(T_h - T_h0)/dt   (4.6)

From this equation we can obtain the cooling law of the sensor, which allows h' to be found for a given flux law. For a sensor with an impulse flux law:

φ_d = φ_0 for 0 ≤ t ≤ t_0   (4.7a)
φ_d = 0 for t > t_0   (4.7b)

If we assume that in the initial state t = 0, (T_h - T_h0) = 0, and that t_0 ≪ C/h', the solution of equation (4.6) is:

(T_h - T_h0) = (φ_0/h')[1 - e^(-h' t/C)] ≈ φ_0 t/C for 0 ≤ t ≤ t_0   (4.8)
(T_h - T_h0) = (T_m - T_h0) e^(-h'(t - t_0)/C) for t > t_0   (4.9)

where T_h0 is the temperature of the sensor before heating, T_h the temperature of the heated sensor, C the sensor's heat capacity, φ_d the heating flux, and t_0 the time at which the temperature reaches its maximum value T_m; the cooling curve of Figure 4.4 depends on h'. By plotting the quantity ln[(T_h - T_h0)/(T_m - T_h0)] as a function of (t - t_0), see Figure 4.4, we obtain a linear trend of slope m. This checks, on the one hand, that the cooling law of the sensor is indeed exponential and, on the other hand, gives the time constant τ of the sensor, related to the slope m by:

τ = -1/m = C/h'   (4.10)

so that h' is given by:

h' = -mC   (4.11)

However, the capacitive model is a simplified model in which the sensor is assumed to be isothermal, i.e. there is no heat transfer within the sensor. Furthermore, no heat flux loss is considered at the sensor's back face (adiabatic condition). The conductive model is therefore a good alternative because it relies on fewer hypotheses than the capacitive model.

Conductive model

In the conductive model, the flux metric device is represented as a stack of layers using the thermal quadrupole formalism. In the Laplace domain, the temperature-flux vector on one face of a layer j is linked to the one on its other face by a matrix with coefficients A_j, B_j, C_j and D_j; for a homogeneous layer,

A_j = D_j = cosh(e_j √(p/a_j)),  B_j = sinh(e_j √(p/a_j)) / (k_j S √(p/a_j)),  C_j = k_j S √(p/a_j) sinh(e_j √(p/a_j)),

where, due to the symmetry of the system, A_j = D_j, a is the thermal diffusivity, e the thickness, k the thermal conductivity and S the area of the layer, and j can be PE, PVC, the heating element or stainless steel. On the top side of the device, the temperature-flux vector at the heating element is linked, through the product of the layer matrices, to the temperature-flux vector at the external surface, where the exchange with the environment is described by the equivalent heat transfer coefficient h. The real computed temperature Tc is obtained from its Laplace form θ_c using the Gaver-Stehfest method: given the Laplace transform F(p), the time-domain solution is approximated by

T(t) ≈ (ln 2 / t) Σ_{j=1}^{N} V_j F(j ln 2 / t),

where N is an even number of terms and the V_j are the Stehfest coefficients. The comparison between the capacitive and conductive models is performed in the following section.
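As a complement, the slope-based identification of h' in the capacitive model (equations 4.9-4.11) is easy to script. The following minimal sketch (Python) fits the logarithm of the normalized relaxation against (t - t0) and returns h' = -mC; the function name, the synthetic test values and the assumed areal heat capacity C are illustrative only and do not reproduce the processing chain actually used in the thesis.

```python
import numpy as np

def h_prime_from_relaxation(t, T_h, T_h0, t0, C):
    """Local heat transfer coefficient h' from the sensor cooling curve (eqs. 4.9-4.11).

    t    : time array (s)
    T_h  : measured sensor temperature
    T_h0 : sensor temperature before heating
    t0   : end of the heat pulse (s), where the temperature reaches its maximum T_m
    C    : areal heat capacity of the sensor (J.m-2.K-1), assumed known from its design
    """
    mask = t > t0
    T_m = T_h[mask][0]                      # maximum temperature at the start of the relaxation
    y = np.log((T_h[mask] - T_h0) / (T_m - T_h0))
    m, _ = np.polyfit(t[mask] - t0, y, 1)   # slope of the log-linear cooling law
    return -m * C                           # h' = -mC  (eq. 4.11)

# Synthetic check with an assumed h' = 300 W.m-2.K-1 and C = 5000 J.m-2.K-1
C_test, h_test, t0 = 5000.0, 300.0, 25.0
t = np.linspace(0.0, 200.0, 400)
T = 20.0 + 5.0 * np.exp(-h_test * np.clip(t - t0, 0.0, None) / C_test)
print(h_prime_from_relaxation(t, T, 20.0, t0, C_test))   # ~300
```

The Gaver-Stehfest inversion used to return from the Laplace domain to the time domain in the conductive model can likewise be sketched in a few lines. This is the standard algorithm in its usual form (here with N = 10 terms), not a reproduction of the thesis code; the sanity check uses a transform whose inverse is known analytically.

```python
from math import factorial, log

def stehfest_coefficients(N=10):
    """Gaver-Stehfest weights V_j for an even number of terms N."""
    V = []
    for j in range(1, N + 1):
        s = 0.0
        for k in range((j + 1) // 2, min(j, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)) / (
                factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                * factorial(j - k) * factorial(2 * k - j))
        V.append(((-1) ** (j + N // 2)) * s)
    return V

def invert_laplace(F, t, N=10):
    """Approximate f(t) from its Laplace transform F(p) with the Gaver-Stehfest formula."""
    V = stehfest_coefficients(N)
    a = log(2.0) / t
    return a * sum(V[j - 1] * F(j * a) for j in range(1, N + 1))

# Sanity check: F(p) = 1/(p+1) has inverse f(t) = exp(-t); at t = 1 we expect ~0.3679
print(invert_laplace(lambda p: 1.0 / (p + 1.0), 1.0))
```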
Comparison between Capacitive and Conductive models

The conductive model (quadrupole model) and the experimental setup designed to estimate the mussels' equivalent heat transfer coefficient were first checked against a two-dimensional numerical transient thermal model of the flux metric device (finite elements via COMSOL, mesh with normal-size elements of 11000 nodes), by comparing the temperature evolution of Tc obtained from the conductive model and from COMSOL. This comparison was done using the same thermophysical properties for each layer of the flux metric device (Table 4.1), the same boundary conditions (heq = 500 W.m-2.K-1) and the same initial conditions (Twater,initial = 23.2 °C) as in the conductive model, with a pulse power of 22 W in the heating element for 18 seconds and a relaxation time of around 3.5 minutes. As mussel samples are usable for approximately 24 hours (otherwise they die), glass beads of comparable size are used to simulate the biofouling (porous medium). Therefore, in order to compare the capacitive and conductive models, a glass beads medium (bead diameter 25 mm) with a 4 cm thick layer was placed above the thermal sensor, and the equivalent heat transfer coefficient (heq) of the glass beads medium was estimated using both the conductive and the capacitive models. In both cases a heat pulse with a total power of 22 W was introduced for 25 s, and the temperature evolution was recorded at the location of the thermocouple in the thermal sensor, at point c. The capacitive and conductive models give equivalent heat transfer coefficients of the glass beads equal to 225 W.m-2.K-1 and 332 W.m-2.K-1 respectively, i.e. a difference of 32% (Table 4.2). A heat pulse with a total power of 22 W was then imposed in the heating element of the flux metric device for different heat pulse durations (25, 120 and 180 s). The heat diffusion length (δ_heat diffusion = √(a·t)) in the glass beads medium was calculated during the heat pulse duration and during the overall measurement duration, where a = k_h,glass beads/(ρ_glass beads · Cp,glass beads) is the thermal diffusivity of the glass beads medium, k_h,glass beads (0.89 W.m-1.K-1) is the homogeneous effective thermal conductivity of the glass beads calculated using Maxwell's equation (Chapter 2), and ρ_glass beads (1870 kg.m-3) and Cp,glass beads (2244 J.kg-1.K-1) are the density and the heat capacity of the glass beads, respectively, calculated using a mixing law; δ_overall measurement is the heat diffusion length during the overall measurement duration (three times the duration of the heat pulse). In the present numerical transient simulation, the boundary conditions and the thermophysical properties of the previous simulations are kept, except for the effective thermal conductivity of the glass beads, which is estimated with a minimization method between the experimental and the numerical temperatures (COMSOL) at point c. To fit the measured temperatures with the temperature computed by COMSOL, the effective thermal conductivity was adjusted over time during the simulations as the deviation from the measured temperature evolved. The equivalent heat transfer coefficient is then obtained as heq,comsol = φ_ss,comsol/(T_ss - T_ref), where T_ss is the average temperature of the upper surface of the stainless steel layer estimated by COMSOL (K), T_ref is the temperature of the water above the glass beads medium (K), φ_ss,comsol is the heat flux density dissipated from the stainless steel surface estimated by COMSOL (W.m-2) and heq,comsol is the equivalent heat transfer coefficient estimated by COMSOL (W.m-2.K-1).
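The diffusion-length criterion above is straightforward to evaluate. The sketch below is a minimal illustration using the values quoted in this section: it computes δ = √(a·t) for the three pulse durations and shows the post-processing step that turns the COMSOL surface quantities into heq. It does not reproduce the actual COMSOL model, and the comparison threshold (layer thickness, heater size) remains the one discussed in the text.

```python
import math

# Thermal diffusivity of the glass beads medium, a = k/(rho*cp), from the values above
k_beads, rho_beads, cp_beads = 0.89, 1870.0, 2244.0   # W/m/K, kg/m3, J/kg/K
a = k_beads / (rho_beads * cp_beads)                  # ~2.1e-7 m2/s

for t_pulse in (25.0, 120.0, 180.0):                  # s, tested heat pulse durations
    d_pulse = math.sqrt(a * t_pulse)                  # diffusion length during the pulse
    d_overall = math.sqrt(a * 3.0 * t_pulse)          # overall measurement = 3x the pulse duration
    print(f"{t_pulse:5.0f} s: {1e3*d_pulse:.1f} mm during the pulse, "
          f"{1e3*d_overall:.1f} mm over the whole measurement")

def h_eq_from_comsol(phi_ss, T_ss, T_ref):
    """Equivalent heat transfer coefficient from the stainless steel surface quantities (W.m-2.K-1)."""
    return phi_ss / (T_ss - T_ref)
```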
Results show that the equivalent heat transfer coefficient changes with time for both short and long heat pulse durations. However, Figure 4.15 shows that the equivalent heat transfer coefficient becomes roughly stable at the end of the heat pulse; the value retained is therefore the one at the end of the pulse. As shown in Table 4.4, the equivalent heat transfer coefficient of the glass beads at the end of the short heat pulse (25 s) is 310 W.m-2.K-1, whereas the value at the end of the 120 s pulse (160 W.m-2.K-1) is approximately the same as the value at the end of the 180 s pulse (153 W.m-2.K-1). Moreover, the difference between the heat transfer coefficient estimated with the conductive model and with COMSOL is less than 5% for the short pulse duration (25 s), but much larger for the long pulse durations (120 s and 180 s). This confirms that, for the glass beads medium, the conductive model is valid only for the short heat pulse duration (25 s). The equivalent heat transfer coefficient of the glass beads can therefore be estimated with the conductive model only if the heat pulse duration is short; for long heat pulse durations, a numerical simulation (finite elements, COMSOL) must be used instead. The equivalent heat transfer coefficient for long pulse durations stabilizes at the end of the pulse, which seems to be a good criterion for monitoring biofouling growth; the experimental measurements will confirm whether it is valid for the biofouling or not.

Transient analysis - test with mussels

Experiments were carried out to determine the equivalent heat transfer coefficient, using the conductive model, for mussels of varying ages (juvenile, mixed and adult, as shown in Figure 4.16). Results show that the equivalent heat transfer coefficient of mussels is sensitive to the mussels' age, as shown in Figure 4.18. It is worth noting that juvenile mussels have the lowest global heat transfer coefficient. This is explained using the same analogy as in Chapter 2 for the effective thermal conductivity: because juvenile mussels have the smallest water porosity (15%) compared to mixed (18%) and adult mussels (22.6%), the water circulation effect in the porous medium of juvenile mussels is small compared to mixed and adult mussels. As a result, as the medium's water porosity increases, so does the equivalent heat transfer coefficient. The thermal sensor's sensitivity was then tested, and it is appropriate for monitoring the growth of biofouling around the dynamic submarine electrical cable, as well as for helping to identify what type of mussels has colonized the cable. Moreover, the equivalent heat transfer coefficient of juvenile mussels with a layer thickness of 4 cm is compared to that of juvenile mussels with a layer thickness of 9.4 cm. Figure 4.20 clearly shows that the relaxation curve of the juvenile mussels with the 4 cm thick layer cools down faster than that of the mussels with the 9.4 cm thick layer, leading to equivalent heat transfer coefficients of 930 W.m-2.K-1 and 550 W.m-2.K-1, respectively. This is because increasing the layer thickness adds more thermal resistance, resulting in a smaller heat transfer coefficient.
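Read as an added areal thermal resistance R'' = 1/heq, the thickness effect quoted above becomes explicit; the short sketch below simply converts the two coefficients taken from Figure 4.20.

```python
# Areal thermal resistance added above the sensor, R'' = 1/h_eq (m2.K/W)
h_eq = {"juvenile, 4 cm layer": 930.0, "juvenile, 9.4 cm layer": 550.0}  # W.m-2.K-1 (Figure 4.20)
for label, h in h_eq.items():
    print(f"{label}: R'' = {1.0 / h * 1e3:.2f} x 10^-3 m2.K/W")
```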
Thus, based on the two previous experiments, the equivalent heat transfer coefficient of the mussels is sensitive to the mussels' age, to the agitation of the water and to the layer thickness. Consequently, the thermal sensor can be a good solution for tracking biofouling growth.

Thermal sensor - application on dynamic submarine electrical cable

A thermal sensor to detect the growth of biofouling, suitable to be fixed on the dynamic submarine electrical cable, is proposed (Figure 4.21). As shown in Figure 4.22, the thermal sensor consists of a heating element connected to a battery and two thermocouples fixed on stainless steel elements above and below the heating element, the assembly being covered with a layer of polyethylene on both sides. The surrounding water temperature is also measured using a thermocouple (or a Pt100) linked to the data logger. The data recorded in the data logger will be transmitted through an acoustic element, the reception being performed from a boat located above the thermal sensor.

Partial conclusion

Tracking the growth of biofouling is essential to avoid a decrease in the fatigue lifetime of a submarine electrical cable. In this chapter, an original thermal sensor for tracking biofouling growth, based on a transient thermal model, has been presented. This model provides an estimate of the equivalent heat transfer coefficient of the material above the thermal sensor, which in our case is the biofouling. Two models, capacitive and conductive, have been introduced. The conductive model, based on the quadrupole method, was used for the estimation of the equivalent heat transfer coefficient of the mussels above the thermal sensor since it relies on fewer hypotheses than the capacitive model. The effect of the heat pulse duration on the estimation of the equivalent heat transfer coefficient was tested on a glass beads medium (which simulates the mussel medium). It was shown that for long heat pulse durations the heat diffuses in more than one direction, which does not comply with the conductive model since it is a 1D model, whereas for short heat pulse durations the heat diffuses in one direction only. A test toward the tracking of biofouling growth was then conducted for various ages, thicknesses and water conditions with a short heat pulse duration (15 s). We found that the equivalent heat transfer coefficient of mussels changes with age, juvenile mussels having a lower equivalent heat transfer coefficient than mixed and adult mussels, respectively, since the thermal resistance of the mussels decreases with their age (porosity increases with age). In addition, as expected, the equivalent heat transfer coefficient of the mussels is higher in a non-stagnant water condition than in a stagnant one, because convection in the pores of the mussels is enhanced in non-stagnant water. Moreover, the equivalent heat transfer coefficient decreases as the thickness of the mussel layer increases, since a thicker layer adds an additional thermal resistance. As a result, the equivalent heat transfer coefficient is sensitive to the age of the mussels, to the thickness of the layer and to the water current status (stagnant/non-stagnant). Last but not least, a thermal sensor dedicated to use on a submarine electrical cable has been introduced. All the transient measurements collected by this sensor will be used to estimate the equivalent heat transfer coefficient, from which the type and thickness of the mussels can be identified.
The patent maturation process is planned to start immediately after this thesis defense.

Conclusion and perspectives

Conclusions

Floating wind turbine projects are being developed all over the world. One of the most important components of floating offshore wind turbines is the umbilical cable, which connects them to their electrical substation and to the underwater energy network. Its electrical insulation technology is intended to withstand conductor temperatures of up to 90 °C. Mussel growth, in particular, can modify heat transfer around the cable and thus the maximum conductor temperature; it also leads to temperature fluctuations, affecting the fatigue lifetime. The measurement of the thermal properties of biofouling is rarely addressed in the marine energy literature. In chapter 2, the effective thermal conductivity of mussels of various ages (juvenile, mixed (juvenile-adult) and adult), as well as the heat transfer coefficient of the water around them, were measured for uniform mussel colonization around an experimental tube in a stagnant water condition. The measurement was a real challenge since it deals with living mussels. Results show that juvenile mussels have the lowest effective thermal conductivity compared with mixed and adult mussels. This is due to the water porosity effect, the porosity of juvenile mussels being lower than that of mixed and adult mussels, respectively. Results also show that the effective thermal conductivity of living adult mussels is higher than that of dead adult mussels, owing to the water filtering (nutrition) effect, which creates an additional micro-convection and hence a higher effective thermal conductivity of the mussels. Moreover, the effective thermal conductivity of adult mussels increases with the temperature difference between their internal and external layers. In summary, in the case of stagnant water, the effective thermal conductivity of mussels is affected by three factors: the water porosity, the temperature difference between their internal and external layers, and their filtering (nutrition) activity. Another study was also carried out to demonstrate the effect of a water current on the effective thermal conductivity of a porous medium formed from glass beads. It was demonstrated that the effective thermal conductivity of glass beads in the presence of a water current increases with increasing water current velocity, since the forced heat transfer coefficient in the glass beads medium increases. The same is therefore expected for the effective thermal conductivity of the mussels in the presence of a water current. In chapter 3, the thermal effect of the biofouling on the DSEC is assessed by estimating the temperature of the copper conductor of a DSEC covered with mussels of various ages using a numerical simulation (via COMSOL). Those simulations were carried out for a given mussel layer thickness and take into account neither the evolution of the bio-colonization over time (multi-layer effects) nor the water current passing across or around the mussels. Results show that mussel bio-colonization causes an increase in the copper conductor temperature of the DSEC to more than its maximum operating temperature (90 °C), depending on the mussels' age, in other words on the corresponding global thermal resistance.
Juvenile mussels have a larger effect on the DSEC conductor temperature than mixed and adult mussels, which leads to a rise in the copper conductor temperature of the cable. This rise causes a decrease in the fatigue lifetime of the XLPE insulation and hence of the overall cable; in that case the dynamic submarine electrical cable requires maintenance, which is costly and not easy to perform. Monitoring biofouling growth, especially that of mussels, is therefore essential in order to clean them from around the cable when needed, which would help to reduce their impact on the DSEC. In chapter 4, an original thermal sensor for tracking biofouling growth, based on a transient thermal analysis, has been introduced. The conductive model based on the quadrupole method was chosen to estimate the equivalent heat transfer coefficient since it relies on fewer hypotheses than the capacitive model. The effect of the heat pulse duration on the estimation of the equivalent heat transfer coefficient was tested with a glass beads medium covering the thermal sensor (which simulates the mussel medium). It was shown that for short heat pulse durations heat diffuses only in one direction in the glass beads medium, which is consistent with the conductive model because it is a 1D model, whereas heat diffuses in more than one direction for long heat pulse durations. A test toward the tracking of biofouling growth was then conducted for various ages, thicknesses and water conditions with a short heat pulse duration (15 s). Results show that the equivalent heat transfer coefficient of mussels changes with age, juvenile mussels having a lower equivalent heat transfer coefficient than mixed and adult mussels, respectively, because the mussels' thermal resistance decreases with age (porosity increases with age). Furthermore, as expected, the equivalent heat transfer coefficient of mussels is higher in non-stagnant water than in stagnant water, as a result of the greater convection in the pores of the mussels. Additionally, as the thickness of the mussel layer increases, an additional thermal resistance appears, which causes the equivalent heat transfer coefficient to decrease. As a result, the equivalent heat transfer coefficient is affected by the mussels' age, the layer thickness and the water current status (stagnant/non-stagnant). Last but not least, a patent for a thermal sensor dedicated to use on a submarine electrical cable has been presented. All the transient measurements from the thermal sensor will be used to calculate the equivalent heat transfer coefficient, which will in turn be used to determine the type and thickness of the mussels. The patent maturation process is planned to start immediately after the defence of this thesis.
To improve the thermal characterization of the mussels, the effective thermal conductivity of the mussels shall be measured during a natural colonization around the experimental tube that's why we already impose in the Mediterranean Sea and the Atlantic Ocean several experimental tube to have natural colonization of the mussels around them. Also, more interesting aspect is the creation of a data base for the effective thermal conductivity of various ages of mussels as function of temperature difference between the inner and outer layer of mussels and as function of different water current velocities and as function of different types of mussels (mussels of Atlantic Ocean different from the mussels of Mediterranean Sea) and finally as a function of different thicknesses during mussel reproduction. This will be very useful for the thermal assessment of the mussels' colonization on the dynamic electrical cable in the real applications. Moreover, for the monitoring of biofouling growth, it could be interesting to create a data base of the equivalent heat transfer coefficient using the transient thermal model with the new thermal sensor shape (as the patent) for different ages of mussels, types (different seas and oceans), thicknesses with the same real conditions which exists on site, meaning, the same water current velocities which exists in the seas and the oceans. At the end the data base of the equivalent heat transfer coefficient will helps to identify which age, type and thickness of mussels colonizes the dynamic submarine electrical cable. Then a thermal assessment of the effect can be determined using the database of effective thermal conductivity of mussels of different ages, thicknesses and types which will help to identify the thermal risks that can be induced by mussel colonization on dynamic submarine electrical cable. Therefore, it will help to optimize its design. Résumé en Français Résumé : Les projets d'éoliennes flottantes se développent à l'échelle mondiale de manière prometteuse. Un de leurs composants clés est le câble dynamique de puissance permettant le raccordement des éoliennes à leur sous-station électrique et au réseau énergétique sousmarin. Parmi les effets les moins connus et qui peut être les plus impactant figure la bio colonisation. On entend par bio colonisation le développement de concrétions marines (algues, moules, huîtres) qui peuvent atteindre plusieurs dizaines de centimètres d'épaisseur. Par l'effet d'écran thermique, de masse additionnelle et de modification de la rugosité, la bio colonisation va impacter le comportement du câble dynamique. Parmi les effets majeurs on note les sollicitations hydrodynamiques modifiant d'une part leur tenue mécanique notamment en situation de tempête ou d'autre part leur fatigue. Ces dégradations du câble sont accentuées lors de l'accroissement de sa température d'où la nécessité de comprendre l'effet de la biocolonisation sur les transferts thermiques autour du câble. Ce sujet très peu abordé dans la littérature a été l'objet de cette thèse. Des échantillons de moules avec différents stades de maturité (juvénile, adulte ou mixte) ont été prélevés sur site marin afin de réaliser des mesures de conductivité thermique équivalente. L'effet de la présence d'un écoulement forcé à travers l'échantillon de moules (milieu avec pores ouverts) a été étudié afin de prendre en compte la présence d'un éventuel courant marin. Des mesures complémentaires de perméabilité de ce milieu ont été réalisées. 
Enfin, en utilisant les mesures de caractérisation thermique, les champs de température au sein du câble sous-marin en présence de biocolonisation ont pu être simulés numériquement. Par ailleurs la problématique de la surveillance sur site marin de la croissance de la biocolonisation a été abordée en proposant un dispositif thermique original à installer sur les câbles sous-marins afin de périodiquement évaluer l'épaisseur ou les caractéristiques de la biocolonisation. Ce travail a été réalisé dans le cadre du projet BIODYTHERM financé par WEAMEC, West Atlantic Marine Energy Community, et avec un financement du Pays Région de la Loire. Il fait également partie du projet OMDYN-2 : Ombilicaux dynamiques pour les énergies marines renouvelables flottantes accordé par France Energies Marines et l'Agence nationale de la recherche. Title: Effect of bio-colonization on the thermal behavior of dynamic electrical cables of floating wind turbines Keywords: electric dynamic cable; thermal characterization of biofouling; effective thermal conductivity; marine renewable energies; floating offshore wind turbine Abstract: Floating wind turbine projects are developing on a global scale in a promising way. One of their key components is the dynamic power cable allowing the connection of the wind turbines to their electrical substation and to the underwater energy network. Among the least known effects and which can be the most impacting is biocolonization. By biocolonization we mean the development of marine concretions (algae, mussels, oysters) which can reach several tens of centimeters in thickness. By the effect of heat shield, additional mass and modification of the roughness, the bio colonization will impact the behavior of the dynamic cable. Among the major effects we note the hydrodynamic stresses modifying on the one hand their mechanical strength, particularly in stormy situations, and on the other hand their fatigue. These degradations of the cable are accentuated during the increase in its temperature, hence the need to understand the effect of biocolonization on the heat transfers around the cable. This subject, very little addressed in the literature, was the subject of this thesis. Samples of mussels with different stages of maturity (juvenile, adult or mixed) were taken from the marine site in order to carry out measurements of equivalent thermal conductivity. It is therefore necessary to understand the role of biocolonization, in its variability, to integrate studies on particular components. This thesis focuses on the thermal effect of biocolonisation (mussels( on the dynamic electric cable. The thermal characteristics for different ages of the mussels were measured and their thermal effect around the dynamic electric cable was simulated. Moreover, an original method for monitoring the growth of biocolonization around the dynamic underwater electric cable has been proposed using an original thermal sensor. This work was carried out within the framework of the BIODYTHERM project funded by WEAMEC, West Atlantic Marine Energy Community, and with funding from Pays Région de la Loire. is also part of the OMDYN-2 project: Dynamic umbilicals for floating marine renewable energies granted by France Energies Marines and the National Research Agency. Figure 1 . 1 : 18 Figure 1 . 2 : 19 Figure 1 . 3 : 20 Figure 1 . 4 : 21 Figure 1 . 5 : 21 Figure 1 . 6 : 22 Figure 1 . 7 : 22 Figure 1 . 8 : 24 Figure 1 . 9 : 24 Figure 1 . 10 : 25 Figure 1 . 11 : 26 Figure 1 . 12 : 31 Figure 1 . 31 Figure 1 . 
14 : 32 Figure 1 . 15 : 33 Figure 1 . 16 : 33 Figure 1 . 17 : 35 Figure 1 . 18 : 35 Figure 1 . 19 : 36 Figure 1 . 20 : 37 Figure 1 . 21 : 40 Figure 1 . 40 Figure 2 . 1 : 44 Figure 2 . 2 : 45 Figure 2 . 3 : 45 Figure 2 . 4 : 46 Figure 2 . 5 : 47 Figure 2 . 6 : 48 Figure 2 . 7 : 50 Figure 2 . 8 : 52 Figure 2 . 9 :Figure 2 . 10 :Figure 2 . 11 :Figure 2 . 12 :Figure 2 . 13 :Figure 2 . 14 :Figure 2 . 15 :Figure 2 . 16 :Figure 2 . 17 :Figure 2 .Figure 2 . 19 :Figure 2 . 20 :Figure 2 . 21 :Figure 2 . 22 :Figure 2 . 23 :Figure 2 . 24 :Figure 2 . 25 :Figure 2 . 26 :Figure 2 . 27 :Figure 2 . 28 :Figure 2 . 29 :Figure 2 . 30 :Figure 2 . 31 :Figure 2 . 32 :Figure 2 . 33 :Figure 2 . 34 :Figure 2 . 35 :Figure 2 . 36 :Figure 3 . 1 :Figure 3 . 2 :Figure 3 . 3 :Figure 3 . 4 :Figure 3 . 5 :Figure 3 . 6 :Figure 3 . 7 :Figure 3 . 8 :Figure 3 . 9 :Figure 4 . 1 :Figure 4 . 2 :Figure 4 . 3 :Figure 4 . 4 :Figure 4 . 5 :Figure 4 . 6 :Figure 4 . 7 :Figure 4 . 8 :Figure 4 . 9 :Figure 4 . 10 :Figure 4 .Figure 4 . 12 :Figure 4 . 13 :Figure 4 . 14 :Figure 4 . 15 :Figure 4 . 16 :Figure 4 . 17 :Figure 4 . 18 :Figure 4 . 19 :Figure 4 . 20 :Figure 4 . 21 :Figure 4 . 22 : 1118121913201421152116221722182419241102511126112311311143211533116331173511835119361203712140140214422452345244625472648275028522921021121221321421521621722192202212222232242252262272282292302312322332342352363132333435363738394142434445464748494104412413414415416417418419420421422 Figure 1.1: Illustration of fixed offshore wind turbine and floating offshore wind turbine. ...................................................................................................................18 Figure 1.2: Power transmission system of floating offshore wind turbine. .......................19 Figure 1.3: Structure of Medium Voltage power cable [16]. .............................................20 Figure 1.4: Dynamic cable configurations [18]. ................................................................21 Figure 1.5: Power transmission system of offshore wind farm, top view [19]. .................21 Figure 1.6: W-shape interconnection configuration between two floating offshore wind turbine [19].............................................................................................................22 Figure 1.7: Mechanical loads on dynamic power cable. ....................................................22 Figure 1.8: Biofilm formation steps [20]. ..........................................................................24 Figure 1.9: Biofilm cross section [23]. ..............................................................................24 Figure 1 . 1 : 11 Figure 1.1: Illustration of fixed offshore wind turbine and floating offshore wind turbine. Figure 1 1 Figure 1.2. Figure 1 . 2 : 12 Figure 1.2: Power transmission system of floating offshore wind turbine. Figure 1 . 4 :Figure 1 . 6 , 1416 Figure 1.4: Dynamic cable configurations [18]. Figure 1 . 5 : 15 Figure 1.5: Power transmission system of offshore wind farm, top view [19]. Figure 1 . 8 : 18 Figure 1.8: Biofilm formation steps [20]. Figure 1 .Figure 1 . 11 : 1111 Figure 1.11: Mussel's nutrition [25]. Figure 1 . 14 : 114 Figure 1.14: Identified Zh and Zc from step excitation (continuous line) and from ramp excitation (x). Figure 1 . 15 : 115 Figure 1.15: Shell and tube heat exchanger with presence of uniform fouling (limestone deposit). Figure 1 . 16 : 116 Figure 1.16: Identified Z without and with different layers uniform thickness of fuling a) Zh and b) Zc. Figures 1 . 
1 Figures 1.16a and 1.16b clearly show how the presence of a fouling layer affects the impulse response profiles. It should also be noted that increasing the fouling thickness Figure 1 . 17 : 117 Figure 1.17: Temperature distribution (°C), in case of uniform biofouling coverage and thickness 30 mm [30]. Figure 1 . 18 : 118 Figure 1.18: Temperature evolution versus biofouling thickness for each component of dynamic power cable [30]. Figure 1 . 19 : 119 Figure 1.19: Temperature evolution versus the percentage of surface coverage of biofouling for each component of dynamic power cable [30]. Figure 1 . 1 Figure 1.21, from equipment setup to image analysis, in order to have a standardized inspection involving precise tracking and more accurate damage evaluation. Figure 1 . 21 : 121 Figure 1.21: The main steps of stereo imaging pipeline [45]. Figure 1 . 1 Figure 1.22:a) Marine growth on structure b) Reconstructed surface with showing the dimensions [45]. Figure 2 . 1 . 21 It consists of an aluminum tube (int=60 mm, ext=70 mm, L=610 mm) implemented with 5 K-type thermocouples (3 in the middle cross section with 120° angle and one thermocouple on each of the two other cross-sections close to the extremities of the tube (at 5 cm). These thermocouples installed on grooved locations on the tube (which mentioned later) created by a CNC milling machine and glued with a marine onecomponent polyurethane sealant glue (Sikaflex 291i). Figure 2 . 1 : 21 Figure 2.1: Schematic for the experimental tube. Figure 2 . 2 Figure 2.2: a)before pumping the rubber air chamber b)after pumping the rubber air chamber Figure 2 . 3 :Figure 2 . 4 :Figure 2 . 5 : 232425 Figure 2.3: Experimental tube support. Photo Figure 2 . 6 : 26 Figure 2.6: Uniform distribution of mussels around the experimental tube held by a steel net. Figure 2 . 7 : 27 Figure 2.7: Middle cross-section of the experimental tube. Figure 2 . 8 : 28 Figure 2.8: Experimental tube covered with double sided adhesive tape Figure 2 .Figure 2 . 9 : 229 Figure 2.9: Experimental tube covered with glass beads a)without plastic cover b) with plastic cover. Figure 2 . 10 : 210 Figure 2.10: Adult mussels (12 months) a) alive b) dead. Figure 2 . 2 Figure2.11 depicts the evolution of the effective thermal conductivity of living and dead adult mussels as a function of the temperature difference between the mussels' internal and external layers. The measurements have a relative uncertainty of less than 9%. A repeated measurement of the effective thermal conductivity of adult mussels at ΔT=5.1 K is also performed; the result shows that the effective thermal conductivity is equal to 12.8 Figure 2 . 11 : 211 Figure 2.11: Effective thermal conductivity of dead and living adult mussels as function of temperature difference between the internal and the external layer of the mussels. 𝑄 ̇(m 3 /s) is the flow rate of the fluid flowing through an area A(m 2 ), 𝜅 is the permeability (m 2 ), 𝛥𝑝(𝑃𝑎) = 𝑝 𝑎 -𝑝 𝑏 is the total pressure drop over the sample, Lc(m) is the length of the cylinder, 𝜇 (𝑃𝑎. 𝑠) is the dynamic viscosity of the fluid. Figure 2 . 12 : 212 Figure 2.12: Schematic for the experimental setup to measure the permeability. Figure 2 . 13 : 213 Figure 2.13: Glass beads in the experimental setup. Figure 2 . 14 : 214 Figure 2.14: Results of difference of pressure before and after the glass beads sample (ΔP) as function of the flow rate 𝑄 ̇. Figure 2 . 15 : 215 Figure 2.15: Adult mussels in the experimental setup. Figure 2 . 
2 Figure 2.16 depicts the evolution of differential pressure as a function of flow rate in the presence of adult mussels, where the relative uncertainty of the differential pressure meter is (0.05%). Since the differential pressure is directly proportional to the flow rates, the 𝛥𝑝 increases as the flow rate increases. The permeability of the adult mussels is then calculated using Darcy's law and the slope of Figure 2.16, and this yields a value of Figure 2 . 16 : 216 Figure 2.16: Results of difference of pressure before and after adult mussels sample (ΔP) as function of the flow rate 𝑄 ̇. Figure 2 . 2 Figure 2.18: a) Total volume of mussel's cluster b) Volume of the interstitial water between the mussels. The equation ( 2 . 9 ) 29 is solved numerically with the associated boundary conditions: imposed heat flux φ (W.m -2 ) with φ = 𝑄 ̇/2πr0L at the inner surface of the aluminum tube and convective heat transfer coefficient (hw1 and hw2) at the external surface, as shown in Figure 2.19. Figure 2 . 19 : 219 Figure 2.19: Various colonization distribution configurations around the heated covered polyethylene aluminum tube. (a) 100% colonization, (b) 50 % colonization and (c) 25% colonization. Figure 2 . 20 :Figure 2 Figure 2 . 21 : 2202221 Figure 2.20: Fine mesh in COMSOL for (a) 25% (b) 50% mussel colonization. Figure 2 . 22 : 222 Figure 2.22: Traction water basin. Figure 2 . 23 : 223 Figure 2.23: Traction carriage of water basin. Figure 2 . 24 : 224 Figure 2.24: Porous medium formed from glass beads fixed on the experimental tube. Figure 2 . 25 : 225 Figure 2.25: Support of the experimental tube. Figure 2 . 26 :Figure 2 . 27 : 226227 Figure 2.26: Distribution of thermocouples on the experimental tube with the direction of the water current flow. Figure 2 .Figure 2 . 29 : 2229 Figure 2.28a, shows the temperature evolution in the internal thermocouples for stagnant water and Figure 2.28b and Figure 2.28c show the temperature evolution in the internal thermocouples for water current velocity of 0.5 m/s and 2 m/s respectively. In the case of a water current velocity, we can see from Figures 2.27b and 2.27c, that there is a significant water disturbance at the top of the water surface at the location of the supports (above the extremities of the tube), but this water disturbance has no effect on the temperature of the internal thermocouples since as we can see that the internal temperatures of the middle, right, and left locations are approximately the same. Re≤ 10 5 , 5 Nusselt number (Nu) starts to decrease from the stagnation point with the increases of θ as a result of a laminar boundary layer development. The separation between the laminar boundary layer and the wakes (separation point), occurs at θ=80°, and as a result of mixing caused by vortex formation in the wake, the Nu increases.In contrast, for Re ≥10 5 , variation of Nu with is distinguished by two minima (Figure2.30).The decrease in Nu from the stagnation point value is once more caused by the development of a laminar boundary layer, but the abrupt increase that occurs between 80° and 100° is now caused by the boundary layer changing into turbulence. The turbulent boundary layer continues to grow, and Nu once more starts to decline. Finally, at (θ≈140°) separation happens, and the wake region's mixing causes Nu to rise[START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]. Figure 2 . 30 : 230 Figure 2.30: Nusselt number as function of angular coordinate for smooth cylinder [50]. Figure 2 . 
31 : 231 Figure 2.31: Evolution of the average effective thermal conductivity of beads as function of the water current velocity. Figure 2 . 32 : 232 Figure 2.32: Effective thermal conductivity of the glass beads as function of Reynolds number. Figure 2 . 33 : 233 Figure 2.33: Experimental tube with and without polyethylene cover. Figure 2 . 34 : 234 Figure 2.34: Experimental tube fixed on the floater. Figure 2 . 35 : 235 Figure 2.35: Distribution of the experimental tubes in different zones. Figure 2 . 36 : 236 Figure 2.36: Experimental tube with and without polyethylene after two months of immersion in the sea. Figure 3 . 1 : 31 Figure 3.1: Cross-linked polyethylene (XLPE) insulation of dynamic power cable Figure 3 . 2 : 32 Figure 3.2: Cross section of a three phases DSEC. Figure 3 . 3 Figure 3.3 shows a physical representation of the cable and its surroundings as a network of combined thermal resistances (R1, R2, R3 and R4). Where Tc (K) is the operating Figure 3 . 3 : 33 Figure 3.3: Network of thermal resistances representing steady state heat transfer in a three core submarine cable and its surrounding environment. m -1 ) is the AC resistance for a given conductor temperature (T), X is the reactance of sheath per unit length of cable (Ω.m -1 ) given in equation (3.10) and 𝑅 𝑠 𝑇 𝑠 ′ which is the resistance of the metallic sheath at the temperature of the metallic sheath (Ω.m -1 ) is calculated in equation (3.11): Figure 3 . 4 : 34 Figure 3.4: Mesh with normal element size for the DSEC in COMSOL. Figure 3 . 5 :Figure 3 . 6 : 3536 Figure 3.5: Simulation of the DSEC steady-state temperature distribution (°C) under rated load conditions. Figure 3 . 7 : 37 Figure 3.7: DSEC conductor temperature as function of mussel age using finite element method (COMSOL). Figure 3 . 8 : 38 Figure 3.8: Effective thermal conductivity of juvenile mussels for different ΔT and imposed power. Figure 4 . 1 : 41 Figure 4.1: Schematic of flux metric device. Figure 4 . 2 : 42 Figure 4.2: Heating pulse and Temperature evolution and decay versus time. Figure 4 . 3 , 43 Tw is the 109 temperature of the wall, Th the temperature of sensor and 𝜑 𝑑 the density of flow dissipated by the heating system of the sensor. Figure 4 . 3 : 43 Figure 4.3: Diagram of a sensor with impulse heating laws. Figure 4 . 4 : 44 Figure 4.4: Sensor Cooling. A 0 ( 4 . 12 ) 0 ( 4 . 13 ) 04120413 multilayer system's local quadrupole matrix is created by multiplying the matrices for each layer to link the temperature-flux vectors on either side of it. The boundary condition is taken into account, and Laplace inversion is applied to obtain the necessary external temperatures and fluxes (depending on the transformed space that we use). Then, it is possible to determine the internal temperature-fluxes by multiplying the temperature-flux vector recently obtained at one external surface by the quadrupole matrix corresponding to the internal layer. The quadrupole method is only constrained by the symmetry consideration, so even if the propagation axis is switched around, the symmetrical material system remains unchanged.Temperature and heat flux in the Laplace space are represented, respectively, by:𝜃(𝑝) = 𝐿[𝑇(𝑡)] = ∫ T(t) exp(-pt) dt ∞ Φ(𝑝) = 𝐿[𝜑(𝑡)] = ∫ 𝜑(t) exp(-pt) dt∞ where p is the Laplace parameter. The Laplace temperatures and fluxes have been calculated after the modelling of the flux metric sensor device. The thermal considerations around the flux metric sensor device are shown in Figure 4.5. 
θc is the Laplace temperature at the location of the thermocouple (at the rear face of the stainless steel layer) and θE is the Laplace environmental temperature and 𝛷 ℎ + , 𝛷 ℎ -, 𝛷 𝐸 + , 𝛷 𝐸 -is the Laplace heat flux of heating element to upward, the heat flux of heating element to downward, heat flux of environment to the upward and the heat flux of environment to downward respectively. Figure 4 . 5 : 45 Figure 4.5: Thermal distribution of flux metric sensor device. [ Figure 4 . 6 : 46 Figure 4.6: Chart flow of quadrupole method to estimate the heat transfer coefficient h. Figure 4 . 7 : 47 Figure 4.7: Temperature evolution and relaxation at the location of the thermocouple Tc in the flux metric device. Figure 4 . 4 10 depicts the temperature evolution at the thermocouple at point c in the flux metric device (Figure 4.1)for various heat pulse durations. Figure 4 . 10 :Figure 4 .Figure 4 . 12 : 4104412 Figure 4.10: Temperature evolution for different heat pulse duration. Figure 4 .Figure 4 . 13 : 4413 Figure 4.13: Temperature evolution at point c using 1D and 3D numerical transient model (COMSOL). pulse and after the heat pulse. Where 𝑎 (2.12x10 -7 m 2 .s -1 ) is the thermal diffusivity of the glass beads medium (𝑎 = 𝑘 ℎ,𝑔𝑙𝑎𝑠𝑠 𝑏𝑒𝑎𝑑𝑠  𝑔𝑙𝑎𝑠𝑠 𝑏𝑒𝑎𝑑𝑠 𝐶 𝑝 𝑔𝑙𝑎𝑠𝑠 𝑏𝑒𝑎𝑑𝑠 Figure 4 .Figure 4 . 14 : 4414 Figure 4.14: Measured and numerical temperature at point c a) without reducing k values b) with reducing k values. Figure 4 . 15 : 415 Figure 4.15: Equivalent heat transfer coefficient of the glass beads for a)short heat pulse duration (25 s) and long heat pulse duration b) 120 s and c) 180 s. 4. 16 )Figure 4 . 16 : 16416 Figure 4.16: Different ages of mussels a) juvenile b) mixed c) adult. Figure 4 . 4 Figure 4.17 depicts the temperature evolution at the thermocouple at point c in the flux metric device (Figure 4.1) for various ages of mussels for stagnant water condition.The equivalent heat transfer coefficients of different ages of mussels have been estimated using the previous conductive model. Figure 4 . 17 : 417 Figure 4.17: Temperature evolution at c with various ages of mussels covering the flux metric device. Figure 4 . 18 :Figure 4 . 19 : 418419 Figure 4.18: Equivalent heat transfer coefficient of various ages of mussels for stagnant water condition. Figure 4 . 20 : 420 Figure 4.20: Temperature evolution at point c with juvenile 4 cm thickness layer and 9.4 cm thickness layer covering the flux metric device. Figure 4 . 21 : 421 Figure 4.21: Thermal sensor: application on submarine electrical cable. Figure 4 . 22 : 422 Figure 4.22: Cross-section of thermal sensor above the submarine electrical cable. Chapter 4 , 4 describes a novel method for monitoring biofouling growth with a thermal sensor based on transient thermal model. Two transient thermal models have been introduced, capacitive and conductive model. These two models leads to the estimating of an equivalent heat transfer coefficient where its value helps to identify what kind of specimen cover the thermal sensor, in other words leads to monitor the biofouling growth. Chapitre 1 :Chapitre 2 :Chapitre 3 : 123 BibliographieDans ce chapitre, une bibliographie sur les éoliennes offshores flottantes et l'importance du câble dynamique de puissance pour le transfert d'énergie de l'offshore vers le rivage a été introduite. De plus, une bibliographie sur le biofouling et les éléments clés qui influencent sa croissance est proposée. 
Il est démontré comment l'encrassement biologique affecte le câble d'alimentation dynamique en termes de thermomécanique. La prédominance des moules sur les côtes atlantiques est mise en évidence. De plus, l'effet de l'encrassement sur les échangeurs de chaleur et la façon de le détecter ont été discutés. L'effet de l'encrassement biologique sur les câbles électriques sous-marins est également discuté, ainsi que les méthodes de détection, où la conductivité thermique de la couche d'encrassement biologique était supposée être la même que celle de l'eau. Cependant, en réalité, les amas de moules qui ont colonisé le câble électrique sous-marin doivent être considérés comme un milieu poreux formé d'une partie solide, le bivalve des moules luimême, et d'une partie fluide (eau). Par conséquent, il est intéressant d'estimer les propriétés thermiques des moules, en particulier leur conductivité thermique effective et le coefficient de transfert de chaleur de l'eau qui les entoure, afin de quantifier en conséquence l'impact thermique des moules sur le câble dynamique haute tension sous-marin. Par ailleurs, une revue de la littérature sur les équations permettant de déterminer la conductivité thermique effective en milieu poreux a été proposée. Il montre que tous les modèles et formules introduits pour estimer la conductivité thermique effective ne tiennent pas compte de la convection naturelle dans le milieu poreux. Le chapitre suivant présente une méthode expérimentale de mesure de la conductivité thermique effective d'un milieu poreux (moules) en tenant compte de la convection naturelle dans le milieu poreux. Caractérisation thermique des moules -Analyse en régime permanent Dans ce chapitre, la caractérisation thermique des moules a été réalisée. La mesure des propriétés thermiques de ce milieu naturel (biofouling) est rarement abordée dans la littérature sur les énergies marines. Le travail expérimental était extrêmement difficile car nous devions effectuer toutes les mesures en 24 h afin de maintenir les moules en vie. Dans le cas de la colonisation uniforme des moules, la conductivité thermique des différents âges des moules (juvénile, mixte juvénile-adulte, et adulte) ainsi que le coefficient de transfert de chaleur de l'eau autour des moules ont été mesurés. Nous avons déterminé que les moules juvéniles ont la conductivité thermique effective la plus faible par rapport à un mélange de moules juvéniles et adultes et de moules adultes uniquement en raison de l'effet de porosité à l'eau, où la porosité des moules juvéniles est inférieure à celle du mélange de moules et de moules adultes respectivement. L'effet de la porosité à l'eau a été mis en évidence et illustré à l'aide d'un milieu de billes de verre, où il est démontré que la convection naturelle dans le milieu poreux améliore considérablement la conductivité thermique effective du milieu poreux en verre. De plus, la conductivité thermique effective des moules adultes a été mesurée lorsque les moules sont vivantes et lorsqu'elles sont mortes. Les résultats montrent que la conductivité thermique effective des moules adultes vivantes est supérieure à la conductivité thermique effective des moules adultes mortes en raison de l'effet de filtrage de l'eau qui crée un micro-convection supplémentaire entraînant une augmentation de la conductivité thermique effective des moules. 
D'autre part, la conductivité thermique effective des moules augmente avec l'augmentation de la différence de température entre la couche interne des moules et sur la couche externe des moules en raison du phénomène de convection naturelle. Ainsi en résumé, en cas d'eau stagnante, la conductivité thermique effective des moules est affectée par trois facteurs : la porosité à l'eau, la différence de température entre leur couche interne et externe et leur phénomène de filtrage. Une autre étude a également été réalisée pour démontrer comment le courant d'eau affecte la conductivité thermique effective d'un milieu poreux formé de billes de verre. Il a été démontré que la conductivité thermique effective des billes de verre en présence d'eau courante augmente avec l'augmentation de la vitesse du courant d'eau à mesure que le coefficient de transfert de chaleur forcé dans le milieu des billes de verre augmente. Par conséquent, on s'attend à ce qu'il en soit de même pour la conductivité thermique effective en présence du courant d'eau.Les résultats démontrent également que la résistance conductrice effective des moules peut être supérieure à la résistance convective entourant le tube en leur absence. Il serait donc intéressant d'étudier l'impact thermique de la colonisation par les moules à proximité de la DSEC. Afin de mieux comprendre cette problématique, le chapitre suivant se focalisera sur ce sujet. Effet thermique des moules sur la température du câble électrique sousmarin dynamique Dans ce chapitre, le risque thermique sur la câble électrique sous-marin dynamique est évalué en estimant la distribution de température au sein de la DSEC recouverte de moules d'âges différents à l'aide d'une simulation numérique (via COMSOL). Ces simulations ont été réalisées pour une épaisseur donnée de la couche de moules et ne tiennent pas compte de l'évolution de la bio-colonisation dans le temps (effets multicouches) ainsi que du courant d'eau qui passe à travers ou autour des moules. Les résultats montrent que la biocolonisation des moules provoque une augmentation de la température du conducteur en cuivre de la câble électrique sous-marin dynamique en fonction de l'âge des moules et de la résistance thermique globale correspondante. Les moules juvéniles ont un plus grand effet sur la température du conducteur câble électrique sous-marin dynamique que les moules mixtes et adultes. Cette augmentation de la température du conducteur en cuivre entraîne une diminution de la durée de vie de l'isolant XLPE, de la durée de vie globale en fatigue du câble et de la conductivité électrique du conducteur en cuivre du câble. Par conséquent, l'énergie est moins reçue et les coûts de maintenance sont augmentés. La surveillance de la croissance des moules et leur nettoyage autour du câble au besoin aideraient à réduire l'impact de la biocolonisation des moules sur la câble électrique sousmarin dynamique. Le chapitre suivant décrit une nouvelle méthode de surveillance de la croissance de l'encrassement biologique avec un capteur thermique. Titre : Effet de la bio-colonisation sur le comportement thermique des câbles électriques dynamiques des éoliennes flottantes Mots clés : câble dynamique électrique ; caractérisation thermique du biofouling ; conductivité thermique effective ; les énergies marines renouvelables ; éolienne offshore flottante Table 1 . 1 1: Daily rate for vessels. .......................................................................................17 Table 1 . 
1 2: Impulse forms and impulse responses expression [29]. ...................................30 Table 2 . 2 1: Various ages of mussels with their thickness layer. .........................................47 Table 2 . 2 2: Distribution of measured temperature around the aluminum tube for different ages of mussels uniformly distributed. ..................................................................52 Table 2 . 2 3: Measured effective thermal conductivities of mussels of various ages uniformly distributed around the aluminum tube. .................................................53 Table 2 . 4 24 : Homogeneous versus experimental effective thermal conductivity of glass beads. .....................................................................................................................55 Table 2.5: Rayleigh number of alive and dead adult mussels. ...........................................62 Table 2 . 2 6: Water porosities of various ages of mussels. ....................................................63 Table 2 . 2 7: Experimental and theoretical values of the heat transfer coefficient of the water around the experimental tube without mussels. .....................................................64 Table 2 . 2 8: Heat transfer coefficient of the water around the mussels as measured experimentally........................................................................................................65 Table 2 . 2 9: Comparison between conductive and convective thermal resistances as a function of the ages of the mussels. .......................................................................65 Table 2 . 2 10: Distribution of measured temperatures around a polyethylene covered aluminum tube colonized by juvenile mussels with different configurations of colonization. ...........................................................................................................67 Table 2 . 2 11: Effective thermal conductivity of juvenile mussels and heat transfer coefficient of water around the mussels for different percentage of colonization. 70 Table 2 . 2 12: Temperature distribution with different water current velocity. ....................79 Table 2 . 2 Nomenclature C cable capacitance per unit length 13: Effective thermal conductivity of the glass beads with different water current velocity. ..................................................................................................................81 Table 3.1: Thermal resistances and heat losses in the submarine power cable. ................96 Table 3.2: Copper conductor temperature during the colonization of various ages of mussels. ................................................................................................................100 Table 3.3: Overall thermal resistance of various ages of mussels. ..................................101 Table 3.4: The temperature difference between the mussel layer's inner and outer radiuses. ...............................................................................................................102 Table 4.1: Thermophysical properties of different layers of flux metric device. ............118 Table 4.2: Equivalent heat transfer coefficient using the capacitive and conductive model....................................................................................................................119 Table 4.3: Layer thickness and porosity of various ages of mussels. ..............................130 1. Chapter 1. 
1.1 Floating offshore wind turbine
According to the European Wind Energy Association (EWEA) [15], 80 percent of Europe's offshore wind resource lies at water depths greater than 60 meters, with a potential floating wind capacity of 4000 GW, which could provide four times the EU's electrical energy consumption. The floating offshore wind turbine is one of the most promising clean energy technologies, and its development is accelerating. Unlike fixed offshore wind turbines, which are limited to about 50-60 m of water depth for structural reasons, floating turbines can be deployed in deep water. The benefits of FOWT over fixed offshore wind farms include less visual disturbance, noise avoidance, stronger and more consistent wind, lower installation costs (Table 1.1), no restriction on turbine size and a smaller environmental footprint. However, FOWT also has drawbacks, such as technical difficulties with mooring line and power cable design, as well as with the electrical connections.

Table 1.1: Daily rate for vessels.
Fixed wind installations - Heavy lift vessel (foundation installation): € 150k-500k; Jack-up vessel (turbine installation): € 150k-200k; Mobilization: several M€.
Floating wind installations - Standard tug boat (tow out and hook up): € 30k-60k; Anchor handling tug (mooring installation): € 20k-50k; Mobilization: < € 100k.

Table 2.2: Distribution of measured temperature around the aluminum tube for different ages of mussels uniformly distributed.
Mussels type | T1 | T2 | T3 | T4 | T5 | T6 (°C) | ΔT = Tav1 - Tav2 (K) | Tw (°C) | ri (mm) | re (mm) | Q (W)
Juvenile: 35.7 | 32.82 | 31.39 | 23.56 | 23.1 | 23.13 | 10 | 23 | 35 | 75 | 228.03
Mix (juvenile and adult): .72 | 26.6 | 23.47 | 22.32 | 22.31 |  | 4.2 | 22.3 | 35 | 95 | 127.75
Adult: 28.77 | 27.9 | 28.28 | 23.62 | 23.52 | 23.52 | 4.8 | 23.36 | 35 | 105 | 211.63

Therefore, the measured values of the effective thermal conductivity of juvenile, mix (juvenile and adult) and adult mussels are 4.4, 8 and 12.8 W.m-1.K-1, respectively, for a uniform distribution of mussels around the aluminium tube, as shown in Table 2.3. As the relative uncertainty on the kbiof measurement is less than 9 %, the differences between the various types of colonization are significant. One explanation for these discrepancies could be the increasing volume of the mussels with age.

Table 2.3: Measured effective thermal conductivities of mussels of various ages uniformly distributed around the aluminum tube.
Mussels type | kbiof (W.m-1.K-1) | absolute uncertainty (W.m-1.K-1) | relative uncertainty (%)
Juvenile: 4.4 | ± 0.4 | 9
Mix (juvenile and adult): 8.0 | ± 0.52 | 6.5
Adult: 12.8 | ± 0.97 | 7.6

Table 2.5: Rayleigh number of alive and dead adult mussels.
ΔT (K) | Ra (alive mussels) | Ra (dead mussels) | difference
1.75 | 7.15E+02 | 6.69E+02 | 7 %
3.11 | 1.28E+03 | 1.16E+03 | 10 %
3.89 | 1.58E+03 | 1.52E+03 | 4 %
4.83 | 2.00E+03 | 1.89E+03 | 6 %
5.47 | 2.27E+03 | 2.18E+03 | 4 %

The porosity of adult mussels is then estimated using equation (2.7). Since Macdonald et al. consider non-spherical particles, dp in equation (2.7) is replaced by dp,equivalent, the equivalent diameter for non-spherical particles, given by:
dp,equivalent = ε·dv    (2.8)

Table 2.6: Water porosities of various ages of mussels.
Mussel's age | Juvenile (3-6 months) | Mix (6-12 months) | Adult (12-18 months)
Water porosity | 15 % | 18 % | 22.6 %

Table 2.7: Experimental and theoretical values of the heat transfer coefficient of the water around the experimental tube without mussels.
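For orientation, the effective conductivities reported in Table 2.3 are consistent with a simple radial steady-state conduction balance across the cylindrical mussel layer, k = Q·ln(re/ri)/(2π·L·ΔT). The short sketch below illustrates this relation; the heated tube length L ≈ 0.6 m is an assumption made here for illustration only, since it is not restated in this excerpt.

```python
import math

def k_eff(Q, dT, r_i, r_e, L):
    """Effective thermal conductivity of a cylindrical layer from a radial
    steady-state conduction balance: Q = 2*pi*L*k*dT / ln(r_e/r_i)."""
    return Q * math.log(r_e / r_i) / (2.0 * math.pi * L * dT)

# Values from Table 2.2 (radii converted from mm to m); L = 0.6 m is an assumed value.
L = 0.6
for name, Q, dT, r_i, r_e in [("Juvenile", 228.03, 10.0, 0.035, 0.075),
                              ("Mix",      127.75,  4.2, 0.035, 0.095),
                              ("Adult",    211.63,  4.8, 0.035, 0.105)]:
    print(f"{name}: k_eff = {k_eff(Q, dT, r_i, r_e, L):.1f} W.m-1.K-1")
# With L = 0.6 m this reproduces the order of magnitude of Table 2.3 (4.4, 8.0, 12.8 W.m-1.K-1).
```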
hw (experimental) | hw (Churchill & Chu) | discrepancy (experimental vs Churchill & Chu) | hw (Morgan) | discrepancy (experimental vs Morgan)
W.m-2.K-1 | W.m-2.K-1 | % | W.m-2.K-1 | %
220 | 309 | 29 | 234 | 6

Table 2.8 shows measured heat transfer coefficients of 3395, 873 and 2682 W.m-2.K-1 for the juvenile, mix and adult mussels, respectively.

Table 2.8: Heat transfer coefficient of the water around the mussels as measured experimentally.
Mussels type | hw (W.m-2.K-1) | absolute uncertainty (W.m-2.K-1) | relative uncertainty (%) | ΔTmussels,water (°C) | absolute uncertainty (°C) | relative uncertainty (%)
Juvenile: 3395 | ± 1123 | 33 | 0.23 | ± 0.07 | 30
Mix (juvenile & adult): 873 | ± 164 | 19 | 0.4 | ± 0.07 | 17.5
Adult: 2682 | ± 1003 | 37 | 0.2 | ± 0.07 | 35
*ΔTmussels,water is the difference between the temperature on the external side of the mussel layer and that of the water.

Table 2.9: Comparison between conductive and convective thermal resistances as a function of the age of the mussels.
*Rconductive(biof) is the effective conductive thermal resistance of the biofouling; *Rconvective is the convective thermal resistance of the water around the mussels; *Roverall is the overall thermal resistance of the presence of mussels around the tube (including Rconductive(biof) and Rconvective).
Mussels type | Rconductive(biof) (K.W-1) | Rconvective (K.W-1) | Roverall (K.W-1) | conductive contribution (%) | convective contribution (%) | ΔT between external and internal mussel layers (K)
Juvenile: 0.045 | 0.001 | 0.046 | 97.8 | 2.2 | 10
Mix (juvenile and adult): 0.033 | 0.003 | 0.036 | 91.7 | 8.3 | 4.2
Adult: 0.022 | 0.0009 | 0.023 | 95.7 | 4.3 | 4.8

Table 2.10 shows the measured temperature distribution around a polyethylene-covered aluminium tube for different configurations of juvenile mussel colonization (25 %, 50 % and 100 %), as shown in Figure 2.19.

Table 2.10: Distribution of measured temperatures around a polyethylene-covered aluminum tube colonized by juvenile mussels with different configurations of colonization.
Juvenile mussel colonization (%) | T1 | T2 | T3 | T4 | T5 | T6 (°C) | Tw (°C) | ri (mm) | re (mm) | q (W)
25: 28.12 | 26.44 | 24.8 | 22.55 | - | - | 22.3 | 38.9 | 78.9 | 66.33
50: 30.82 | 28.5 | 26 | 22.92 | - | - | 22.45 | 38.9 | 78.9 | 85.43
100: 29.48 | 28.46 | 28.95 | 23.03 | 22.6 | 22.4 | 22.12 | 38.9 | 78.9 | 40.74

Table 2.11: Effective thermal conductivity of juvenile mussels and heat transfer coefficient of water around the mussels for different percentages of colonization.
Juvenile mussel colonization (%) | kbiof (W.m-1.K-1) | sensitivity for kbiof (°C) | hw1 (W.m-2.K-1) | sensitivity for hw1 (°C) | hw2 (W.m-2.K-1) | sensitivity for hw2 (°C)
25: 1.4 | 1 | 1510 | 0.1 | 1960 | 0.9
50: 1.9 | 1 | 310 | 0.1 | 3910 | 0.88
100: 1.6 | 5 | 910 | 0.1 | - | -

Table 2.12: Temperature distribution with different water current velocity.
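The comparison in Table 2.9 amounts to weighing the conductive resistance of the cylindrical biofouling layer against the convective resistance of the water film on its outer surface. A minimal sketch of that comparison is given below, using the classical cylindrical-shell expressions; the heated tube length L = 0.6 m is again an assumed value, consistent with the estimate above.

```python
import math

L = 0.6   # heated tube length in m (assumed)

def r_conductive(k, r_i, r_e, L):
    # Conductive resistance of the cylindrical biofouling layer
    return math.log(r_e / r_i) / (2.0 * math.pi * k * L)

def r_convective(h, r_e, L):
    # Convective resistance of the water film on the outer mussel surface
    return 1.0 / (h * 2.0 * math.pi * r_e * L)

# Juvenile case: k = 4.4 W/m/K (Table 2.3), h = 3395 W/m^2/K (Table 2.8)
Rc = r_conductive(4.4, 0.035, 0.075, L)   # about 0.046 K/W
Rh = r_convective(3395, 0.075, L)         # about 0.001 K/W
print(Rc, Rh, Rc / (Rc + Rh))             # conduction dominates (~98 %), as in Table 2.9
```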
Velocity (m/s) | Re | Up (angle 0°): T6, T3, T6-T3 (°C) | Angle 120°: T4, T1, T4-T1 (°C) | Angle 240°: T5, T2, T5-T2 (°C)
0 | - | 20.06, 16.86, 3.19 | 20.02, 16.84, 3.18 | 20.02, 16.84, 3.18
0.2 | 2.04E+04 | 17.50, 16.92, 0.58 | 17.76, 16.95, 0.81 | 17.2, 16.90, 0.29
0.4 | 4.08E+04 | 17.18, 16.91, 0.27 | 17.32, 16.92, 0.40 | 16.99, 16.90, 0.09
0.6 | 6.12E+04 | 17.08, 16.91, 0.18 | 17.18, 16.91, 0.27 | 16.94, 16.90, 0.04
0.8 | 8.16E+04 | 17.02, 16.90, 0.11 | 17.07, 16.91, 0.16 | 16.93, 16.90, 0.02
1 | 1.02E+05 | 16.99, 16.90, 0.08 | 17.02, 16.91, 0.12 | 16.92, 16.90, 0.01
1.2 | 1.22E+05 | 16.97, 16.90, 0.06 | 16.99, 16.91, 0.09 | 16.91, 16.90, 0.01
1.4 | 1.43E+05 | 16.95, 16.90, 0.05 | 16.97, 16.90, 0.07 | 16.91, 16.90, 0.01
1.6 | 1.63E+05 | 16.94, 16.90, 0.04 | 16.96, 16.90, 0.06 | 16.91, 16.90, 0.01
1.8 | 1.84E+05 | 16.94, 16.90, 0.03 | 16.95, 16.90, 0.05 | 16.91, 16.90, 0.01
2 | 2.04E+05 | 16.93, 16.90, 0.02 | 16.94, 16.90, 0.04 | 16.91, 16.90, 0.01

Table 2.13: Effective thermal conductivity of the glass beads with different water current velocity.
Velocity (m/s) | Re | Nu | h_forced,beads (W.m-2.K-1) | k_e,beads | k_e,beads (θ=90°) | k_e,beads (θ=210°) | k_e,beads (θ=330°) (W.m-1.K-1)
0 | - | - | 315 | 3.17 | 3.15 | 3.19 | 3.18
0.2 | 2.04E+04 | 83 | 488 | 34 | 28 | 20 | 55
0.4 | 4.08E+04 | 117 | 693 | 98 | 60 | 41 | 193
0.6 | 6.12E+04 | 129 | 764 | 185 | 94 | 61 | 400
0.8 | 8.16E+04 | 145 | 857 | 312 | 147 | 104 | 684
1 | 1.02E+05 | 159 | 937 | 451 | 205 | 149 | 1000
1.2 | 1.22E+05 | 171 | 1007 | 586 | 273 | 186 | 1300
1.4 | 1.43E+05 | 182 | 1073 | 721 | 329 | 234 | 1600
1.6 | 1.63E+05 | 192 | 1131 | 899 | 410 | 288 | 2000
1.8 | 1.84E+05 | 202 | 1187 | 1062 | 497 | 342 | 2346
2 | 2.04E+05 | 211 | 1239 | 1223 | 576 | 400 | 2692

In the cable rating model, D_{e,c} (m) is the external diameter of the cable, h_{surr} is the heat transfer coefficient of the medium surrounding the cable, and \Delta T_s is the excess of the cable surface temperature above ambient temperature (K). The external thermal resistance is

R_4 = \frac{1}{\pi D_{e,c}\, h_{surr}\, \Delta T_s^{1/4}}    (3.5)

Using Ohm's law and the steady-state condition, i.e. when the current flowing through the cable and the cable temperature are constant, the temperature rise of the conductor above ambient temperature is expressed by equation (3.6):

\Delta T_{con} = \left(I_{con}^2 R_{AC}^{T'_{con}} + 0.5\, W_d\right) R_1 + \left(I_{con}^2 R_{AC}^{T'_{con}} (1+\lambda_1) + W_d\right) n R_2 + \left(I_{con}^2 R_{AC}^{T'_{con}} (1+\lambda_1+\lambda_2) + W_d\right) n (R_3 + R_4)    (3.6)

Therefore, the maximum current that an AC cable can deliver is obtained as shown in equation (3.7) [13]:

I_{con} = \left[ \frac{\Delta T_{con} - W_d \left[0.5\, R_1 + n (R_2 + R_3 + R_4)\right]}{R_{AC}^{T'_{con}} R_1 + n R_{AC}^{T'_{con}} (1+\lambda_1) R_2 + n R_{AC}^{T'_{con}} (1+\lambda_1+\lambda_2)(R_3+R_4)} \right]^{0.5}    (3.7)

It can be seen in Table 3.2 that the DSEC conductor temperatures for juvenile, mix and adult mussels are 91.5 °C, 90.47 °C and 89.18 °C respectively, and that the relative error between simulation and analytical values, (T_simulation - T_analytical)/T_analytical, stays below 0.5 %.

Table 3.2: Copper conductor temperature during the colonization of various ages of mussels.
Mussel's age | Juvenile mussels | Mix mussels | Adult mussels
Copper conductor temperature (analytical): 91.5 °C | 90.47 °C | 89.18 °C
Copper conductor temperature (COMSOL): 91.3 °C | 90.47 °C | 89.18 °C
Error: 0.22 % | 0.14 % | 0.47 %

Table 3.3: Overall thermal resistance of various ages of mussels.
Specimen around the electrical cable | Juvenile mussels | Mix mussels | Adult mussels | Water
Roverall (K.m.W-1): 0.022 | 0.017 | 0.011 | 0.012
*Roverall is the overall thermal resistance of the presence of mussels around the tube (including Rconductive(biof) and Rconvective).

Table 3.4: The temperature difference between the mussel layer's inner and outer radiuses.
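As an illustration of how equations (3.5)-(3.7) are used, the following sketch implements the steady-state temperature-rise and ampacity balances in generic form. The symbols follow the notation above (n being the number of conductors); no numerical cable data from this thesis are reproduced, so any values passed to these functions would be placeholders.

```python
import math

def conductor_temp_rise(I, R_ac, Wd, R1, R2, R3, R4, lam1, lam2, n):
    """Steady-state conductor temperature rise above ambient, equation (3.6) form."""
    return ((I**2 * R_ac + 0.5 * Wd) * R1
            + (I**2 * R_ac * (1 + lam1) + Wd) * n * R2
            + (I**2 * R_ac * (1 + lam1 + lam2) + Wd) * n * (R3 + R4))

def ampacity(dT_max, R_ac, Wd, R1, R2, R3, R4, lam1, lam2, n):
    """Maximum admissible current, equation (3.7) form (inverse of the rise above).
    Assumes the numerator is positive, i.e. dielectric losses alone do not exceed dT_max."""
    num = dT_max - Wd * (0.5 * R1 + n * (R2 + R3 + R4))
    den = (R_ac * R1 + n * R_ac * (1 + lam1) * R2
           + n * R_ac * (1 + lam1 + lam2) * (R3 + R4))
    return math.sqrt(num / den)

# Consistency check: conductor_temp_rise(ampacity(dT_max, ...), ...) returns dT_max.
```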
Mussels Type T (Measurement) T (COMSOL) K K Juvenile 10 3.3 Mix (juvenile & adult) 4.2 2.4 Adult 4.8 1.7 Table 4 . 4 1: Thermophysical properties of different layers of flux metric device. Type of Materials Thermal conductivity (W.m -1 .K -1 ) Density (kg/m 3 ) Heat Capacity (J.kg -1 .K -1 ) Stainless steel 14.4 7817 460 Polyimide (Kapton) 0.15 545 920 PVC (Polyvinyl 0.2 1380 2000 chloride) PE (Polyethylene) 0.34 910 2600 Figure 4.7 shows the temperature evolution and relaxation of Tc using COMSOL and the conductive model. It shows that the temperature obtained from COMSOL and conductive model are very similar with a maximum temperature difference not exceeding 0.05 %. Table 4 . 4 2: Equivalent heat transfer coefficient using the capacitive and conductive model heq,capacitive (W.m -2 .K -1 ) heq,conductive (W.m -2 .K -1 ) | (ℎ eq,capacitive -ℎ eq,conductive ) ℎ eq,conductive | Glass beads 225 332 32% This difference is due to the fact that capacitive model has more hypothesis comparing to conductive model. For the capacitive model, two hypotheses are considered: the flux metric device is isothermal and no heat flux loss is considered at the rear face of the sensor. However, in conductive model these two hypotheses disappear since the heat transfer in each layer of the flux metric device is considered as shown in Figure 4.8. Finally in our device we have found that that 90% of the total impulse power goes in the forward direction (𝛷 ℎ + = 19.8 W) and 10% goes in the backward direction (𝛷 ℎ -= 2.2 W). Thus, the conductive model (quadrupole model) is more appropriate for the estimation of the equivalent heat transfer coefficient heq. Table 4 . 4 3 shows the heat diffusion length during the heat pulse only ( ℎ𝑒𝑎𝑡 𝑝𝑢𝑙𝑠𝑒 ) and during the overall measurement ( 𝑜𝑣𝑒𝑟𝑎𝑙𝑙 𝑚𝑒𝑎𝑠𝑢𝑟𝑒𝑚𝑒𝑛𝑡 ). It's clearly shown that the heat diffusion length increases with the increasing of the heat pulse duration. To be sensitive the thickness of biofouling layer one should preferred long time duration. As the conductive model is a 1D model and can be applied only for short heat pulse duration, then, a 3D transient numerical model is proposed to estimate the equivalent heat transfer coefficient for long heat pulse duration. Table 4 . 4 1: Heat diffusion length for different heat pulse durations.  ℎ𝑒𝑎𝑡 𝑝𝑢𝑙𝑠𝑒 is the heat diffusion length during the heat pulse. Heat pulse duration (s)  ℎ𝑒𝑎𝑡 𝑝𝑢𝑙𝑠𝑒 (mm)  𝑜𝑣𝑒𝑟𝑎𝑙𝑙 𝑚𝑒𝑎𝑠𝑢𝑟𝑒𝑚𝑒𝑛𝑡 (mm) 25 2.3 4 120 5 8.7 180 6 10.7 * Table 4 . 4 2: Equivalent heat transfer coefficient of the glass beads for short and long heat pulse duration. Heat pulse duration heq,beads using COMSOL heq,beads using Conductive model (| ℎ 𝑒𝑞,𝑐𝑜𝑛𝑑𝑢𝑐𝑡𝑖𝑣𝑒 𝑚𝑜𝑑𝑒𝑙 -ℎ 𝑒𝑞,𝑐𝑜𝑚𝑠𝑜𝑙 ℎ 𝑒𝑞,𝑐𝑜𝑚𝑠𝑜𝑙 |) s W.m -2 .K -1 W.m -2 .K -1 % 25 310 323 4.2 120 160 321 100 180 153 315 106 Table 4 . 4 3: Layer thickness and porosity of various ages of mussels. Layer thickness Porosity Mussel's sample (cm) (%) Juvenile 4 15 Mix (Juvenile-Adult) 6 18 Adult 7 22.6 Acknowledgement Arnaud Arrivé, who created the experimental tube in a very professional and precise manner. I'd also like to thank my colleagues and friends. Thanks to West Atlantic Marine Energy Community (WEAMEC) and Pays Région de la Loire, for funding BIODYTHERM project where this work was carried out within the framework of it. Also, Thanks to France Energies Marines and National Research Agency for funding OMDYN-2 project which this thesis a part of. 
Partial Conclusion
In this chapter, the thermal effect on the DSEC is assessed by estimating the temperature evolution within the copper conductor of the DSEC covered with mussels of various ages, using a numerical simulation (via COMSOL). Those simulations were carried out for a given thickness of the layer of mussels and do not take into account the evolution of bio-colonization over time (multi-layer effects), nor the water current which passes across or around the mussels. The results show that the bio-colonization of mussels causes an increase in the copper conductor temperature of the DSEC above its maximum operating temperature (90 °C), depending on the mussels' age and the corresponding global thermal resistance. Juvenile mussels have a bigger thermal effect on the DSEC copper conductor temperature than mix and adult mussels, which leads to a reduction of the XLPE insulation lifetime, of the overall fatigue life of the cable and of the electrical conductivity of the copper conductor.

F is the Laplace transform of f. N is determined by the computer's floating-point precision, with single precision N = 10 and double precision N = 20, where C_j is given by the following expression for an even value of N:

C_j = (-1)^{j + N/2} \sum_{k=\mathrm{Int}((j+1)/2)}^{\mathrm{Min}(j,\, N/2)} \frac{k^{N/2} (2k)!}{(N/2 - k)!\, k!\, (k-1)!\, (j-k)!\, (2k-j)!}    (4.28)

In equation (4.28), 'Int' designates the integer part of a real number and 'Min' the minimum of two numbers. Using the Gaver-Stehfest inverse method, we thus obtain the computed temperature (Tc) at the location of the main thermocouple. After the numerical inversion of the Laplace transform we obtain the computed temperature at point c, T_c(t, h_local), where t is the time in seconds and h_local is the heat transfer coefficient over the mussel interval. To compute h_local, we use a minimization method (simplex method) between the computed temperature and the measured one at the location of the main sensor c, with the least-squares cost function

J(h_local) = \sum_t \left( T_{c,\mathrm{computed}}(t, h_{local}) - T_{c,\mathrm{measured}}(t) \right)^2

TPE: temperature through the polyethylene layer. TPVC: temperature through the polyvinyl chloride layer. Th: temperature through the heating element layer. Tss: temperature through the stainless steel.

Heat pulse duration effect - test with glass beads
The effect of the heat pulse duration on the estimation of the equivalent heat transfer coefficient is investigated here. To do this, a glass bead medium formed of 250 large beads (bead diameter 25 mm), with a layer thickness of 4 cm above the flux metric device and a water porosity of 42 %, has been placed above the flux metric device, as shown in Figure 4.9.
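A compact implementation of the Gaver-Stehfest inversion described above (coefficients of equation (4.28), then the inversion sum) might look as follows. The Laplace-domain function F is a placeholder for the quadrupole-model solution, which is not reproduced here; the last line only checks the scheme on a textbook transform pair.

```python
import math
from math import factorial

def stehfest_coefficients(N):
    """Stehfest coefficients C_j for an even N (equation 4.28)."""
    assert N % 2 == 0
    C = []
    for j in range(1, N + 1):
        s = 0.0
        for k in range((j + 1) // 2, min(j, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k) /
                  (factorial(N // 2 - k) * factorial(k) * factorial(k - 1) *
                   factorial(j - k) * factorial(2 * k - j)))
        C.append((-1) ** (j + N // 2) * s)
    return C

def invert_laplace(F, t, N=10):
    """Approximate f(t) from its Laplace transform F(p) by Gaver-Stehfest
    (N = 10 corresponds to the single-precision choice mentioned in the text)."""
    ln2 = math.log(2.0)
    C = stehfest_coefficients(N)
    return ln2 / t * sum(Cj * F((j + 1) * ln2 / t) for j, Cj in enumerate(C))

# Sanity check with a known pair: F(p) = 1/(p+1)  <->  f(t) = exp(-t)
print(invert_laplace(lambda p: 1.0 / (p + 1.0), t=1.0))   # close to exp(-1) = 0.368
```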
04059777
en
[ "phys.meca.ther", "math.math-oc" ]
2024/03/04 16:41:18
2023
https://hal.science/tel-04059777v2/file/LIYIJUN.pdf
Keywords: Electronic overheating, Thermal management, Multiple-peak heat flux, Heat sink, Heat transfer enhancement, Flow distribution, Topology optimization, Experimental verification I am also deeply grateful to my co-supervisors Cathy Castelain and Stephane Roux, and supporting Lingai LUO for their expertise and advice, which have greatly enriched my work. Their critical feedback and encouragement have been indispensable in helping me navigate the challenges of my research. I would like to extend my thanks to the members of my thesis committee, Charbel Habchi, Raphael Boichot, Eva Dorignac and Lounes Tadrist, for their valuable, constructive feedback and suggestions, which have greatly improved the quality of my thesis. I am grateful to the staff, faculty, and lab services (technical, secretory, IT, etc.) of LTeN, for providing me with access to the resources including the calculation resources, experiment instruments and measurement platforms and opportunities to pursue my research. Especially, the calculation resource and helps of the staff from CCIPL: Le centre de calcul intensif des Pays de la Loire. Their support and encouragement have been invaluable and indispensable in helping me overcome the challenges of my PhD journey. I would like to acknowledge the contributions of my research participants: Gwenael Biotteau, El Mehdi Gouda, Yuchao Hua, Bruno Storti and Julien Aubril whose willingness to share their experiences and insights has been essential to the success of my research. I am also grateful to my colleagues, Hao Cheng, Yiyi Li, Thomas Naudin, Sourour Traouli, Rafic NADER, Rita Moussallem, Antonio della Volpe, Rodrigo Andres Olaya Gomez, Marie Lamard, Elia Missi, and Mosbah Kiwan, and my dear friends, Mickael Kerfant, Li He who have provided me with professional suggestions in the practice of presentation, and a supportive and stimulating academic community. I would like to express my heartfelt gratitude to my family, for their unwavering love and support throughout my PhD journey. Their belief in me has been a constant source of motivation and inspiration, and I could not have achieved this milestone without their encouragement and support. Finally, I would like to give my gratitude to the Chinese Scholarship Council (CSC) for supporting this PhD project financially. ABSTRACT The heat-generating surface with multiple heat sources is frequently encountered in modern power electronic devices. Efficient cooling techniques are especially needed to prevent the overheating of these devices, so as to avoid consequences like performance deterioration, failure rate increase, reduced lifetime and safety threats. The main objective of this PhD thesis is to design and optimize the structure of heat sinks for single-phase convective cooling of a heat-generating surface under multiple-peak heat flux. Two optimization methods have been developed and applied in this study: one is the size optimization of channel inlets for tailoring the fluid flow distribution in straight channel heat sinks and another is the topology optimization of the global flow channel configuration based on the genetic algorithm (GATO). The impacts of design and operation parameters on the effectiveness of both optimization methods are numerically evaluated, with performance comparison to reference parallel straight channel heat sinks. After that, experimental validations of the proposed optimization approaches have been done by testing different heat sink prototypes using PIV and infrared thermography. 
Both the numerical and experimental results indicate that the GATO heat sink shows the best cooling performance under the tested conditions. Finally, different objective functions have been tested with the GATO method and the obtained results are further compared and discussed to showcase its effectiveness and robustness.

RESUME
Electronic devices often generate heat at several distinct locations. Without efficient cooling, the resulting hot spots and overheating lead to a higher failure rate, degraded performance and safety threats for the electronic components. The main objective of this thesis is to design and optimize the structure of single-phase convective heat sinks by minimizing the maximum temperature, so as to provide an innovative solution to these problems. Two optimization methods have been developed and applied: on the one hand, the size optimization of the inlets of each channel in order to distribute the fluid optimally in a straight-channel heat sink, and on the other hand, the topology optimization of the whole flow-channel configuration based on a genetic algorithm (GATO). The influence of parameters such as the heat flux peak values, the inlet velocity, the matrix resolution of the design domain and the fluid fraction has been studied numerically. The proposed optimization approaches have then been validated experimentally by testing a reference straight-channel heat sink (RSC), an optimized straight-channel heat sink (OSC) and a GATO heat sink. In addition, the full sets of performance indicators obtained from the experimentally validated models of the three heat sinks have been compared. Finally, the influence of different optimization objectives for the GATO method has been studied.

SYNTHESE DE THESE
This thesis, funded by the CSC (Chinese Scholarship Council) and carried out at the Laboratoire de Thermique et Energie de Nantes (LTeN), focuses on the cooling of overheated electronic devices, in particular devices with a compact volume subjected to several heat sources caused by a high power density. It is important to note that overheating of these devices can lead to a deterioration of their operating efficiency, a reduction of their lifetime or other irreversible damage. Various techniques have been developed and used for the thermal management of electronics. They can be simply classified into two categories: (1) direct cooling and (2) indirect cooling. One of the most widely used approaches for thermal management is single-phase liquid convective cooling, owing to its simple and compact structure, its safety of use and its efficiency. A single-phase liquid convective cooling device is called a heat sink. It contains one or several inlets, the main design area located in the middle (where most of the convective heat transfer takes place), and one or several outlets. To improve the performance of a heat sink, many research papers focus on the design and optimization of its geometry, which is a decisive factor in heat transfer enhancement.
The basic approach to improving the performance of a heat sink (or heat exchanger) is to optimize the coupled fluid flow and heat transfer. Three levels of optimization are considered: size optimization, shape optimization and topology optimization (TO). For the size optimization of a heat sink, the diameters of the channels or fins are the variables to be adjusted or defined. With a predefined shape, size optimization is the simplest approach because it requires fewer design variables; however, it cannot reach optimal geometries with more complex shapes. The shape optimization of a heat sink consists in optimizing the shape of its channels or fins, which may be circular, rectangular, irregular, etc. This approach is more flexible than size optimization, since its solution space includes that of size optimization, although the procedures are more complicated. The topology optimization (TO) of a heat sink requires no predefined geometry. Voids of various sizes and shapes can be created in the design domain to generate different TO geometries. The TO solution space includes the solution spaces of size and shape optimization; it is therefore the optimization with the highest degree of freedom, but also the greatest complexity.

The first part of this work presents a detailed literature review on the thermal management of electronic devices facing uneven heating and overheating problems. Particular attention is paid to the design and structural optimization of heat sinks for efficient single-phase liquid cooling. First, we highlight the common occurrence and harmful consequences of electronics overheating due to multiple heat sources, through various illustrative examples. We then briefly present the main thermal management technologies for electronic devices, with emphasis on heat transfer enhancement in micro/mini-channel heat sinks. Next, we analyse several studies on the design and optimization of heat sinks, classified into four categories: (1) uniformization or control of the flow distribution in parallel straight-channel heat sinks; (2) optimization of the channel cross-section shape; (3) optimization of the fin shape and arrangement; and (4) topology optimization of the global flow configuration in the heat sink. Finally, we review existing studies on the experimental testing of topologically optimized heat sinks, an indispensable step for the validation of simulation and optimization results. The current literature places far less emphasis on the thermal management of electronic devices facing uneven heating and overheating problems caused by multiple heat sources, which are nevertheless common in modern electronics, than on uniform or single-peak heating.
This situation highlights research gaps in several aspects of handling this problem, such as: (1) the optimization of the flow distribution in parallel straight-channel heat sinks; (2) gradient-free approaches for the topology optimization of the global flow-channel configuration; (3) the fine experimental characterization of the local fluid and thermal behaviours of topologically optimized heat sinks; (4) the performance comparison between heat sinks optimized by different methods or under different optimization criteria/constraints.

The second part of this thesis concerns the size optimization work on a straight-channel heat sink. It addresses the optimization of the fluid flow distribution in parallel mini-channel heat sinks subjected to a non-uniform, multiple-peak heat flux, in order to eliminate the temperature hot spots. A 3D heat sink comprising 16 parallel straight mini-channels is used as the study model, each mini-channel being 1 mm wide, 2 mm high and 34 mm long. An original size/shape optimization algorithm is developed to adjust the inlets of these mini-channels according to the temperature distribution on the heated base surface. Through this adaptation, the fluid flow distribution is improved, which reduces the peak temperature on the heating surface. The effectiveness and robustness of the optimization algorithm are tested and discussed in this study. Using the proposed optimization method, encouraging results have been obtained, showing a reduction of the maximum temperature of 10 K and 7 K for the two-peak and five-peak heat flux cases, respectively. The heat sink configuration with optimized channel inlets showed a lower thermal resistance than the configuration with equal channel inlets, even under different average heat fluxes or total mass flow rates. Moreover, tailoring the coolant flow distribution was found to be more effective in reducing the thermal resistance than simply increasing the coolant mass flow rate, at the same pressure drop. This optimization method, simple and easy to implement, could be generalized as an efficient thermal management technology for electronics cooling.

To reach the highest degree of freedom in heat sink design, we propose a genetic-algorithm-based topology optimization (GATO) method for the convective cooling of a heating surface subjected to a multiple-peak heat flux. The central zone of the heat sink receiving the heat flux is treated as the design domain and represented as a binary matrix. Each element of the matrix is considered either as fluid or as solid, and their allocation is optimized to minimize the maximum temperature (Tpeak) on the heating surface of the heat sink, under the constraint of a constant void volume for the fully connected fluid domain.
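To fix ideas on the inlet size optimization just described, the sketch below shows one plausible update step: inlets of channels facing hotter regions are widened while the total inlet cross-section is preserved. The proportional update rule and the relaxation factor are assumptions made for illustration only; the thesis states that the inlets are adjusted from the base-plate temperature distribution but the exact formula is not reproduced here, and run_cfd is a placeholder for the CFD evaluation.

```python
import numpy as np

def adjust_inlets(widths, channel_peak_T, relax=0.2):
    """One illustrative update of the 16 channel inlet widths from the temperature field.
    Channels facing hotter zones get wider inlets (more coolant); the total inlet
    cross-section is kept constant. The proportional rule is an assumption."""
    dev = channel_peak_T - channel_peak_T.mean()
    spread = np.ptp(dev)
    new = widths * (1.0 + relax * dev / spread) if spread > 0 else widths
    return new * widths.sum() / new.sum()     # keep the total inlet area constant

# Hypothetical iteration loop (run_cfd is not a real function of this work):
# widths = np.full(16, 1.0)
# for _ in range(20):
#     channel_peak_T = run_cfd(widths)        # placeholder CFD evaluation
#     widths = adjust_inlets(widths, channel_peak_T)
```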
At each optimization step, the fluid flow and temperature characteristics are obtained by CFD simulation using OpenFOAM, and the GA operations (selection, crossover, mutation, etc.) are applied. The impacts of the design and operating parameters on the optimized flow-channel configuration are evaluated, namely the shape of the heat flux, the fluid void fraction, the inlet velocity and the resolution of the design domain. The cooling performance of the GATO heat sink is also compared with that of the reference straight-channel heat sink (RSC) under the same conditions. The obtained results show that (1) the proposed GATO method can successfully determine the optimal flow-channel configuration of the heat sink, thereby reducing Tpeak on the heating surface; (2) the optimized flow-channel configuration depends on the design and operating parameters, the effectiveness and robustness of the GATO method being clearly demonstrated; (3) compared with the conventional RSC heat sink, the GATO heat sink provides better cooling performance, with a reasonable increase in pressure drop.

After these two numerical optimizations, experimental validation is indispensable. The following work therefore presents a performance evaluation and comparison of different heat sinks under multiple heat sources, using both experimental and numerical methods. Different heat sink prototypes are optimized, machined, instrumented and tested, namely a uniform straight-channel heat sink (RSC heat sink), an optimized straight-channel heat sink (OSC heat sink) and a heat sink optimized by a genetic-algorithm-based topology optimization method (GATO heat sink). The PIV method was used to measure the velocity field of the RSC heat sink, while IR thermography was applied to measure the coolant temperature fields in the three heat sinks. The obtained visualization results are compared with the CFD calculations, showing good agreement between them. A systematic numerical study was then carried out to test the three heat sinks over a wide range of operating conditions. The numerical results showed that the GATO heat sink always achieves the best hydrodynamic and thermal performance among the three heat sinks. The effectiveness and robustness of the GATO approach for heat sink optimization were thus proven.

To extend the previous optimization studies, which only considered the thermal performance in a single objective function, the last part of this work explores the influence of different objective functions, addressing both hydrodynamic and thermal performance, on the optimal flow-path configurations obtained by our GATO approach.
In particular, three different objectives are examined, namely (1) the root mean square deviation of the temperatures on the heating surface (RMSD), (2) the ratio between the global Poiseuille number and the Nusselt number of the heat sink, and (3) a weighted-sum function of the maximum temperature on the heating surface and the overall pressure drop. We then present the evolution of the heat sink geometry, the flow-channel configuration and the temperature distributions on the heating surface for the three different optimizations. The results of the optimal heat sinks are compared and evaluated in terms of different performance indicators. The results show that the proposed GATO method is effective and robust in handling complex objective functions. However, minimizing global performance indicators such as the RMSD or Po/Nu does not necessarily lead to lower peak temperatures or pressure drops. Minimizing the weighted-sum objective function, which accounts for both the peak temperature and the pressure drop, yields not only lower peak temperatures and pressure drops but also higher Nusselt numbers and lower Poiseuille numbers than the other objective functions.

The results obtained in this study open up several interesting perspectives. In the near future, a parameter study for the GA could be considered to improve the efficiency of the algorithm. Smoothing could be applied to a GATO heat sink to improve the fluid-solid interface and ease its manufacturing. The whole process could be automated by performing the GA and CFD computations on the same high-performance computing (HPC) infrastructure. The constant void fraction constraint could be relaxed to offer more freedom in the optimization. In the longer term, the extension of the design to three dimensions and the design of a two-fluid heat exchanger by GATO could be explored if the computation time is not a strong constraint. For a more practical and efficient design proposal, the study of machine learning combined with GATO could be considered. Finally, a comparison between the gradient-based and the non-gradient-based TO approaches could be carried out through an experimental investigation.

Introduction
In modern society, electronics are indispensable to individuals, industries, and public organizations. To meet the requirements of higher work efficiency, less occupied space, and more compact geometry, the working components are usually enclosed in a small volume. According to Moore's law, the number of transistors in a dense Integrated Circuit (IC) doubles about every two years [START_REF] Moore | Cramming more components onto integrated circuits[END_REF]. However, such high-power-density electronics cannot fully use the electrical energy during operation, and some of the input energy is transformed into extra heat. The more compact the electronics, the higher the power density (heat flux). Consequently, overheating will occur, causing a deterioration of the normal working efficiency, a shorter lifespan, and irreversible damage. In most situations, the heating surface of electronics is assumed to be uniformly heated, for simplification.
Nevertheless, most electronics have multiple heat sources, for example, array arrangement of Lithium-ion battery cells in a pack, multiple heat sources of multi-chip modules (MCMs), etc. Under this circumstance, the overheating would become more severe than the previous uniform heating hypothesis because of the generation of extremely high local temperature hot spots (uneven heating). Therefore, it is essential to maintain the temperature of electronics under an acceptable range by thermal management (efficient cooling). Various techniques have been developed and used for the thermal management of electronics. They can be simply classified by: (1) direct cooling and (2) indirect cooling [START_REF] Zhang | A review of the state-of-the-art in electronic cooling[END_REF]. One of the most widely used approaches for thermal management is single-phase liquid convective cooling, due to its simple, compact structure, safety of usage, and efficiency. A device for single-phase liquid convective cooling is called a heat sink. It contains one/several inlet(s), the main design area located in the middle where heat convection mostly happens, and one/several outlet(s). To improve a heat sink performance, plenty of research papers work on the design and optimization of a heat sink geometry which is a decisive reason for the enhancement of heat transfer. The basic approach for heat sink (or heat exchanger) performance improvement is the optimization of coupled fluid flow and heat transfer. Regarding the optimization methods, there are three levels of optimization: size optimization, shape optimization, and topology optimization (TO), as illustrated in Figure 1.1 [START_REF] Zeng | Experimental and numerical investigation of a mini channel forced air heat sink designed by topology optimization[END_REF]. For the size optimization of a heat sink, the diameters of a channel or pin-fin are the variables to be adjusted or defined. With a predefined shape, size optimization is the simplest approach because it needs fewer design variables, and also because of this, it cannot obtain the optimal geometry with a more complicated shape. The design variable of a heat sink shape optimization is the shape of a channel or pin-fin of a heat sink, which could be a circle, rectangular or irregular, etc. It is more flexible than a size optimization since its solution space includes the solution space of size optimization, while the procedures are more complicated than those of a size optimization. There is no required predefined geometry for a heat sink TO approach. Various sizes and shapes of voids can be created in the design domain to generate different TO geometries. The solution of TO space includes the solution space of size and shape optimization. Therefore, it is the optimization with the highest degree of freedom and the maximum complexity of the process. Based on the optimization study on a heat sink, it can be categorized into (1) flow distribution optimization/investigation on a parallel straight channel; (2) heat sink flow cross-section optimization; (3) fin-shape and structure optimization; (4) heat sink global flow configuration optimization. Research gaps in the literature The most important is that they consider the heating condition uniform, which is not the case in reality. To maximize the freedom degree of design, a topology optimization (TO) approach would be chosen for a heat sink global flow configuration optimization. 
Moreover, without the experimental validation of the optimal heat sink model, under a rough mesh during the optimization process, its accuracy and performance are difficult to be ensured. The following research gaps still exist in the design and optimization of heat sinks: -Most heat sink optimizations in the literature are still performed under the uniform heating condition without sufficient awareness of the uneven heating surface with temperature hot spots caused by multiple heat sources for electronics in reality; -For the conventional parallel straight channel heat sink most research articles focus on the investigation of the inlet/outlet positions, the shape of distributors and collectors, etc., for the achievement of uniform flow distribution; -As a powerful geometric optimization method, most TO approaches are gradient-based methods, they are easily trapped into local optimum and have difficulties defining the solidfluid interface. The development of non-gradient based TO approaches is still needed; -The experimental characterization of optimized heat sink models is always required. However, for TO heat sinks, the validations of numerical models by experiments are inadequate. -The comparison of different objective functions for the TO approach is still lacking. Main research objectives The main objective of this thesis is the design and optimization of high-performance heat sinks for effective convective cooling of heat-generating surfaces with multiple-peak heat flux by the development and implementation of different optimization methods. The performance of different optimized heat sinks should be tested and compared using both numerical and experimental approaches. Furthermore, the choice of optimization objective on the performance of heat sinks should be investigated. Methodology Two optimization methods are developed and implemented for this purpose, including size optimization and TO. The size optimization of straight-channel heat sinks aims to tailor the flow distribution in channels based on the temperature distribution of the heated surface with multiple-peak heat flux. The method follows the previous work of Min WEI (2012[START_REF] Wei | Nouvelles méthodes pour l'optimisation de la distribution des fluides et leurs applications dans les systèmes des centrales solaires à concentration (CSP)[END_REF] in LTeN, which has been developed to realize a target flow distribution [START_REF] Wei | CFD-based evolutionary algorithm for the realization of target fluid flow distribution among parallel channels[END_REF] and for the cooling of an uneven heating surface of a tubular solar receiver [START_REF] Wei | Design and optimization of baffled fluid distributor for realizing target flow distribution in a tubular solar receiver[END_REF]. The genetic algorithm based TO (GATO) is dedicated to allocating fluid and solid elements in the design domain of a heat sink to minimize the peak temperature. This approach is inspired by the pioneering work of Raphaël BOICHOT et al. [START_REF] Boichot | Cellular automaton methods for heat and mass transfer intensification[END_REF][START_REF] Boichot | A genetic algorithm for topology optimization of area-to-point heat conduction problem[END_REF] on the pure heat conduction problem. Optical-based visualization methods are applied for the performance characterization of heat sinks and the validation of numerical simulations. Especially PIV and IR techniques are applied for in-house fabricated prototypes, which are LTeN's historical strengths. 
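As a rough illustration of the GATO workflow described above, the skeleton below couples a binary fluid/solid design matrix with standard GA operators. The selection, crossover and mutation choices shown here are generic assumptions, the connectivity/void-fraction repair is only indicated by a comment, and evaluate_peak_T stands for the CFD evaluation (performed with OpenFOAM in this work).

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_peak_T(design):
    """Placeholder for the CFD evaluation: returns the peak temperature of the
    heated surface for a given binary fluid/solid matrix (1 = fluid, 0 = solid)."""
    raise NotImplementedError

def random_design(shape, fluid_fraction):
    n_cells = int(np.prod(shape))
    flat = np.zeros(n_cells, dtype=int)
    flat[rng.choice(n_cells, int(fluid_fraction * n_cells), replace=False)] = 1
    return flat.reshape(shape)

def crossover(a, b):
    mask = rng.random(a.shape) < 0.5      # uniform crossover, an illustrative choice
    return np.where(mask, a, b)

def mutate(design, rate=0.01):
    flip = rng.random(design.shape) < rate
    return np.where(flip, 1 - design, design)

def gato(shape=(20, 40), fluid_fraction=0.5, pop_size=20, generations=100):
    pop = [random_design(shape, fluid_fraction) for _ in range(pop_size)]
    for _ in range(generations):
        # In practice each candidate must also be repaired here so that the fluid
        # domain stays fully connected and the fluid void fraction stays constant.
        fitness = [evaluate_peak_T(d) for d in pop]
        order = np.argsort(fitness)
        parents = [pop[i] for i in order[:pop_size // 2]]   # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.choice(len(parents), size=2, replace=False)
            children.append(mutate(crossover(parents[i], parents[j])))
        pop = parents + children
    return pop[int(np.argmin([evaluate_peak_T(d) for d in pop]))]
```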
Thesis outline The rest of this thesis dissertation is organized into the following chapters: Chapter 2 This chapter states the overheating issue of electronics with multiple heat sources by listing real examples. An illustration of the structures of those electronics shows how the multiple heat fluxes generated, the allowable working temperature, and the damaging consequences without efficient cooling. To tackle this issue, various thermal management approaches are introduced. Furthermore, the literature on the design and optimization of singlephase liquid-cooled heat sinks is presented from different design and optimization perspectives. Among them, it presents research gaps for a conventional straight channel heat sink, TO heat sink, and the experimental studies of TO heat sinks. Chapter 3 In this chapter, a CFD-based size optimization method for parallel straight channel heat sink under a multiple-peak heat flux to vanish the temperature hot spots has been developed. Two optimization cases are studied under two heat flux peaks and five heat flux peaks heating conditions. After the attainment of the optimized heat sinks, a comparison concerning thermal and hydrodynamic performances between the straight channel heat sink and optimized straight channel heat sink is performed. Finally, a robustness study on both heat sinks under various working conditions is investigated. Chapter 4 This Chapter studies a GATO approach applied the heat sink optimization. Various design and operating parameters of the GATO method are investigated, e.g, the different heat flux peaks, inlet velocities, the fluid void fraction, and matrix resolution of the design domain. Meanwhile, the performances of all the optimal heat sink are compared with those of straight channel heat sink under the same conditions to highlight the performance improvement. Chapter 5 This chapter numerically and experimentally studies a reference straight channel (RSC) heat sink, an optimized straight channel (OSC) heat sink, and a genetic algorithm-based topology optimized (GATO) heat sink. Experimental results are used to compare with the simulations for numerical model validation. After that, the thermal and hydraulic performances of three heat sinks are compared under a wide range of operating conditions. Chapter 6 This chapter introduces different objective functions in the GATO approach, including root mean square deviation (RMSD), the ratio of Poiseuille number and Nusselt number, and the weighted sum of peak temperature and pressure drop. The optimal geometries of heat sinks under different objective functions are presented and analyzed. Their performances are evaluated and compared in several aspects. Chapter 7 This chapter summarizes the main conclusions from the previous chapters and some perspectives for future explorations. This chapter presents a detailed literature review on the thermal management issue for electronic devices facing uneven heating and overheating problems. Special focus is given to the design and structural optimization of heat sinks for efficient single-phase liquid cooling. Firstly, the common presence and harmful consequences of electronics overheating due to multiple heat sources are put in evidence with various illustrative examples. Then, main thermal management technologies for electronics are briefly introduced while more attention is given to the enhancement of heat transfer in micro/mini channel heat sinks. 
Various studies on the design and optimization of heat sinks are analyzed and classified into four categories: [START_REF] Moore | Cramming more components onto integrated circuits[END_REF]. Flow distribution uniformization or control for parallel straight channel heat sinks; [START_REF] Zhang | A review of the state-of-the-art in electronic cooling[END_REF]. Channel cross-section shape optimization; [START_REF] Zeng | Experimental and numerical investigation of a mini channel forced air heat sink designed by topology optimization[END_REF]. Pin-fin-shape and arrangement optimization; and (4). Topology optimization of global flow configuration in the heat sink. Finally, the existing studies on the experimental testing of topologically-optimized heat sinks, an indispensable step for the validation of simulation and optimization results, are surveyed and compared. It can be discovered that the uneven and overheating due to multiple heat sources, a problem commonly shown in modern electronics, is currently far less addressed in the literature than uniform or single-peak heating. Research gap exists in various aspects to handle this multiple-peak heat flux issue, including [START_REF] Moore | Cramming more components onto integrated circuits[END_REF]. Lack of optimization method to manage the flow distribution in parallel straight channel heat sinks; (2) Lack of gradient-free approaches for the topology optimization of global flow channel configuration; (3) Lack of fine experimental characterization on the local fluid-thermal behaviors of topologically-optimized heat sinks; (4) Lack of performance comparison between heat sinks optimized by different methods or under different optimization criteria/constraints. These identified literature gaps motivate us to perform the research by using different numerical and experimental approaches, to be presented in the following chapters of this thesis. Introduction Thermal management is vital for modern electronics facing more and more serious overheating problems due to the ever-increasing heat flux generated. In particular, the multiplepeak shape of heat flux occurs due to the compact packaging, such that several functional elements (heat sources) are usually concentrated in a small volume of the package or array arranged on a panel, resulting in the uneven heating of the electronic devices which is quite common in real life. This uneven heating with multiple-peak heat flux, compared to the uniform heating surface, raises higher requirements on the effectiveness of the associated thermal management device to prevent harmful consequences such as reduced lifetime, material damage, or thermal runaway. In this chapter, we will first list the harmful consequences of electronics overheating due to multiple heat sources in chapter 2.2, through some illustrative examples. The main thermal management techniques of interest and their shortcomings are then briefly introduced in chapter 2.3. Among them, forced-convention cooling using a micro/mini heat sink consists of a widely accepted technique owing to its compact structure, easy integration, and high cooling capacity. Abundant studies on the design and optimization of heat sinks have been reported in the literature, with the aim of performance enhancement. They are classified into four categories and reviewed in detail in chapter 2.4. In particular, a short literature survey on the experimental studies of TO heat sinks is presented in chapter 2.5. 
Finally, a summary is given in chapter 2.6 to identify the remaining research gaps to fill on this topic. Overheating problem of electronics under multiple peak heat flux In this part, the cases of the electronics generating multiple-peak heat flux are introduced, with an explanation of their packing/arranging structure and generated heat flux shape. Examples include Lithium-ion battery packs for electric vehicles, multiple light-emitting diodes (LED) arrays, power electronics (Insulated-gate bipolar transistor and diodes: IGBT), multichip modules (MCMs), multi-junction high concentrator photovoltaics (HCPVs), etc. Usually, there is an acceptable temperature range for each device to work under normal conditions. Nevertheless, uneven heat generation will more likely cause the overheating problem of the devices, leading to lowered working efficiency, components irreversible deterioration, thermal runaway, and shortened lifetimes. Lithium-ion battery packs As an energy storage unit, Li-ion batteries are widely used in electric vehicles due to their lightweight and large capacity. According to the shape of the battery cell, the Li-ion battery package can be classified into cylindrical type (Figure 2.1 (a)) and prismatic type (Figure 2.1 (b)). For both packaging types, several battery cells are arranged in an array layout to achieve the required storage capacity, i.e., the maximum endurance mileage for electric vehicles. Figures 2.1 (c) and (d) show measured temperature contours of a prismatic battery cell at its initial stage and final stage of discharging, respectively. Clear temperature gradients between the upside (hotspot) and the downside at the initial stage of discharging, and between the middle upside (hotspot) and the bottom at the final stage of discharging can be observed. The multiple peak heat flux shape then occurs for the battery package due to the array arrangement of the unit cells. This uneven heating surface with multiple hotspots would be more complicated to handle than a uniform heating surface, by the associated thermal management technique (e.g., thermal conductive pad shown in Figure 2.1(b)). Lithium-ion batteries could work normally under the acceptable temperature range of -20 °C -60 °C [START_REF] Ma | Temperature effect and thermal impact in lithium-ion batteries: A review[END_REF], and their capacity during normal charge and discharge is very sensitive to temperature. An experimental study [START_REF] Ehrlich | Handbook of Batteries[END_REF] on the capacity fade due to thermal impact reported that by keeping the other parameters (such as materials: C/LiMn2O4, depth of discharge (DOD) range: 4.2-2.5 V, cycle rate: C/1 and cycle number: 500) constant, the capacity fades increased from 28.0% to 51.0% when the battery temperature increased from 294 K to 318 K. Moreover, when overheating happens due to insufficient cooling, the chemical components inside are at the risk of decomposing and undergoing a series of exothermic reactions that can generate excessive heat and gaseous products [START_REF]issues that Lithium-ion battery under overheating[END_REF], triggering thermal runaway [START_REF] Zhao | Overheating Prediction and Management of Lithium-Ion Batteries[END_REF] or even explosion [START_REF]What happens when lithium-ion batteries overheat and explode[END_REF]. 
Multiple LED arrays LED packages are widely used in different industrial and residential sectors [START_REF] Tang | Experimental investigation on active heat sink with heat pipe assistance for high-power automotive LED headlights[END_REF] owing to the advantages of low energy consumption, high photoelectric efficiency, high luminous flux, and long lifetime. The unit LED chips are usually arranged into an array configuration (Figure 2.2 (a)), connected to the silicon die by Au-Si bonding, and then bonded to the copper heat spreader While there would be 60% -70% of the power of the compact package is transferred into extra heat, which brings about the delamination in the electronics packages and causes the blocking of the thermal pass, extremely harmful to its normal use [START_REF] Hu | Mechanism and thermal effect of delamination in light-emitting diode packages[END_REF]. Even if the LED package is cooled by a liquid cooling microchannel heat sink, temperature hot spots may still exist as shown in the numerical results in Figure 2 (c). This makes the package overheating even worse, i.e., the local temperature may approach the upper limit (85 °C [20]). Power electronics (Insulated-gate bipolar transistor and diodes) An insulated-gate bipolar transistor (IGBT) is a three-terminal power semiconductor device primarily used as an electronic switch for high-power applications, such as variable-frequency drives (VFDs), electric cars, trains, variable-speed refrigerators, lamp ballasts, arcwelding machines, and air-conditioners [START_REF] Bimal | Power Electronics and Variable Frequency Drives: Technology and Applications[END_REF]. Advances in power electronics (PEs) have led to the need for the manufacturing of highly compact IGBTs, associated with non-uniform and high heat flux generation [START_REF] Yahyaee | A Modification of Offset Strip Fin Heatsink with High-Performance Cooling for IGBT Modules[END_REF]. Figure .2.3 (b) illustrates a simulated temperature contour of the IGBT module based on the power loss of the IGBT diode chips with a heat sink simplified as an aluminum block. Temperature hotspots at the location of the chip can be seen, the temperature difference reaching up to 25.6 K, which is already quite high. What is even worse is that this temperature gradient due to uneven heating could be much higher for densely packaged powerful IGBT modules with hundreds of chips. According to Choi et al. [START_REF] Choi | Study and Handling Methods of Power IGBT Module Failures in Power Electronic Converter Systems[END_REF], 34% (the highest proportion in all failure causes) of failures in the power converter systems of IGBT modules are due to the semiconductor. Among them, solder failures induced by high temperatures by nearly 60% [START_REF] Qiu | Review of IGBT Junction Temperature Extraction and Estimation Methods[END_REF]. Single-chip modules (SCMs) and multi-chip modules (MCMs) CPU chips are indispensable elements for modern computing technologies. Based on the number of chips integrated, they can be classified into single-chip models (SCMs) and multi-chip modules (MCMs). A SCM commonly consists of a substrate, a CPU chip (a die), one thermal interface material (TIM-1) between the integrated heat spreader (IHS) and chip, and another TIM-2 between the heat sink and the IHS, as shown in Figure 2.4 (a). An MCM mainly includes several chips in a package (Figure 2.4 (b) for example), of which the arrangement could be a 3D stack or multiple 3D stacks. 
According to Moore's law, the number of transistors in a dense Integrated Circuit (IC) doubles about every two years [START_REF] Moore | Cramming more components onto integrated circuits[END_REF]. With the increasing demand for performance and compactness of new-generation processors, the power density generated by the chip keeps growing, causing more severe overheating problems. Meanwhile, SCMs and MCMs have a normal working temperature range that ensures their working efficiency, security, and lifetime. Most CPUs should operate at temperatures no higher than 85-90 °C, and the CPU idle temperature should not exceed 50 °C. For example, for a Core i5-12600K processor cooled by a decent dual-heatsink air CPU cooler, with the setup housed inside a case with solid airflow, the temperature should not go much over 70 °C [START_REF]What is the safe CPU temperature range?[END_REF]. For every 10 °C rise in junction temperature, the device failure rate doubles [START_REF] Siva | Effect of flow maldistribution on the thermal performance of parallel microchannel cooling systems[END_REF]; as illustrated in Figure 2.4 (d), a decrease in junction temperature greatly extends the lifetime of an electronic device. Once the junction temperature exceeds the upper limit (105-110 °C) [START_REF]CPU temperture limit[END_REF] for a long time, the device shuts down for self-protection, to avoid problems such as material damage or shortened lifetime. Under workload, both SCMs and MCMs can generate a multiple-peak heat flux. This is due to the different locations of transistors in an SCM (cf. Figure 2.4 (c)), and the presence of multiple chips in an MCM. Again, overheating problems caused by this multiple-peak heat flux can result in the consequences mentioned above; they are more serious and more difficult to handle than uniform heating conditions, raising higher demands on effective cooling techniques.

Multi-junction high concentrator photovoltaics (HCPVs)

Recently, the use of solar photovoltaic systems has been largely boosted for renewable electricity generation in the face of global warming [START_REF]IEA, Renewable electricity growth is accelerating faster than ever worldwide, supporting the emergence of[END_REF]. Conventional single-junction solar cells suffer from a low maximum theoretical efficiency of sunlight conversion (the Shockley-Queisser limit of 32%-33%) [START_REF] Yamaguchi | Multi-junction solar cells paving the way for super high-efficiency[END_REF]. In contrast, high-efficiency multi-junction solar cells (HCPVs) can greatly increase the theoretical conversion efficiency, up to 45%, by stacking several solar cells [START_REF]high-efficiency multi-junction solar cell has better performance[END_REF] and by increasing the band gaps to exploit a larger part of the solar spectrum and reduce the transmission and thermalization losses [START_REF] Philipps | III-V Multi-junction solar cells and concentrating photovoltaic (CPV) systems[END_REF]. Nevertheless, the concentration of sunlight unavoidably brings higher cell temperatures, requiring more efficient thermal management. The concentrated solar flux map has multiple peaks with a maximum value of up to 2.940 × 10⁶ W·m⁻². This uneven solar flux distribution will inevitably cause heterogeneous heat generation and thereby a temperature gradient on the board.
Many researchers have reported the consequences for CPV cells illuminated under a high and non-uniform solar concentration ratio. For instance, the open-circuit voltage decreases as the CPV cell temperature increases, which deteriorates the power conversion efficiency [START_REF] Braun | Temperature dynamics of multijunction concentrator solar cells up to ultra-high irradiance[END_REF]. The effects of high temperature are considered non-negligible at concentration levels approaching 1000 suns, even for exposures as short as a few milliseconds [START_REF] Braun | Temperature dynamics of multijunction concentrator solar cells up to ultra-high irradiance[END_REF]. A single-cell CPV could reach an extremely high temperature of about 1200 °C without any heat sink under 400 suns [START_REF] Min | Thermal analysis and test for single concentrator solar cells[END_REF], which current CSP (concentrating solar power) apparatus (maximum 1000 °C [START_REF]extreme temperature for CPS[END_REF]) generally cannot withstand.

Summary

From the illustrative examples introduced above, it can be seen that most modern electronic devices have their proper normal operating temperature and a maximum temperature limit. The former ensures the nominal working performance within a guaranteed device lifetime, whereas the latter prevents shutdown, material damage, or even thermal runaway. Nevertheless, the demand for smaller, multifunctional, faster, and more powerful electronics provokes an ever-increasing heat generation, the increasing power density leading to more and more frequently encountered overheating problems. Moreover, the pursuit of higher capacity and performance stimulates the fast development of electronic packages or boards with a higher level of integration. The array or stacking arrangement of unit functional elements in the integrated package inevitably results in uneven heat generation with a multiple-peak shape. Compared to uniform heating or single heat source cases, this multiple-peak heat flux shape can result in a higher temperature gradient. If not properly cooled, the peak temperature exceeds the electronics' normal working temperature, possibly even reaching the upper limit. Therefore, the development of more efficient thermal management technologies is especially required for the overheating and uneven heating of electronics under multiple-peak heat flux, an essential problem that is still insufficiently addressed in the current literature.

Thermal management techniques

Various techniques have been developed and used for the thermal management of electronics. They can generally be classified into direct cooling and indirect cooling [START_REF] Zhang | A review of the state-of-the-art in electronic cooling[END_REF], as presented in Figure 2.6. The difference between them is whether the coolant directly contacts the heat-generating objects. Examples of direct cooling are air cooling, jet cooling, spray cooling, immersion cooling, etc. For indirect cooling, a compact intermediate component, usually a "heat sink", is needed for the heat transfer between the heat-generating objects and the coolant. Indirect cooling techniques include single-phase liquid cooling, phase change cooling, evaporative cooling, heat pipes, thermal materials, etc. More details about these cooling techniques, their principles, and their fields of application are briefly introduced as follows.
Direct cooling

• Direct air cooling: a fan

A fan (ventilator) is commonly used for direct air cooling by forced convection, which uses external mechanical forces to generate airflow around the heating objects. This technique is rather simple, easy to implement, and fast-acting, avoiding the use of accessories and the risk of coolant leakage. Its disadvantages are that, firstly, it is not efficient when the ambient temperature is high; secondly, the deposition of dust and dirt on critical components in the equipment can cause short-circuits and even failure; moreover, under multiple-heat-source conditions, it may not be capable of cooling the surface uniformly.

• Jet impingement cooling

Jet impingement cooling cools miniature devices with a liquid/air jet. There are three implementation schemes of jet impingement cooling [START_REF] Pukhovoy | Maximum heat fluxes and features of heat transfer mechanisms with boiling during jet impingement cooling of electronics[END_REF]: (i) a free jet, (ii) a submerged jet, and (iii) a jet confined by the device surfaces, as shown in Figure 2.7 (a). The coolant is applied directly onto the heated hot spot, often with a speed of around 10 m/s for a miniature device, either normal to the surface or at an angle within 45° [START_REF] Pukhovoy | Maximum heat fluxes and features of heat transfer mechanisms with boiling during jet impingement cooling of electronics[END_REF]. The shortcoming is that the coolant in direct contact with the heated surface may damage the components; moreover, the operating parameters of jet impingement cooling play a significant role in the heat transfer, and it is currently not possible to create a general model of the cooling process and to systematize the experimental data in a unified way [START_REF] Mudawar | Recent Advances in High-Flux, Two-Phase Thermal Management[END_REF]. Jet cooling is broadly applied in large-scale industrial systems such as gas turbine cooling, rocket launcher cooling, and high-density electrical equipment cooling [START_REF] Hyung | Applications of impingement jet cooling systems[END_REF] to deal with local hot spot problems, owing to its small effective cooling area.

• Spray cooling

The main working principle of spray cooling is presented in Figure 2.7 (b). An atomizer is used to break down the liquid coolant into numerous tiny droplets by high pressure, and these droplets then reach the heated surface and absorb the heat by evaporation [START_REF] Kim | Spray cooling heat transfer: The state of the art[END_REF]. The whole cooling process includes three stages: a single-phase regime, a two-phase regime, and a constant heat flux regime. Spray cooling is characterized by a high heat transfer rate, uniformity of heat removal, small fluid inventory, low droplet impact velocity, and no temperature overshoot [START_REF] Kim | Spray cooling heat transfer: The state of the art[END_REF]. However, the cost of the ongoing maintenance of spray cooling equipment can be high due to technical issues such as clogged nozzles and corroded rotary-disk atomizers caused by the small size of the fluid passages. Spray cooling is frequently used for rapid cooling of heated surfaces at high temperature, such as the cooling of medium-thick plates and thin strips in hot rolling steel mills, glass tempering in the auto industry, and electronic chip cooling in the computer manufacturing industry [START_REF]Multiphase Spray Cooling Technology in Industry[END_REF].
• Immersion cooling

Immersion cooling (Figure 2.7 (d)) is another type of direct cooling in which the heat-generating object is directly immersed in a dielectric fluid or coolant having good thermal conductivity but very poor electrical conductivity. It is a very effective cooling method: immersion fluids can increase heat transfer by up to 10000 times compared to air [START_REF] Roe | Immersion cooling for lithium-ion batteries -A review[END_REF]. The direct contact with the cooled surfaces further reduces the thermal contact resistances experienced in indirect cooling systems [START_REF] Wu | A critical review of battery thermal performance and liquid based battery thermal management[END_REF]. Immersion cooling simplifies the system design and reduces the system's complexity [START_REF] Deng | Effects of different coolants and cooling strategies on the cooling performance of the power lithium ion battery system: A review[END_REF]. But some disadvantages still exist: the process of condensing the evaporated vapor can be complex and costly, and higher pumping losses may result when dealing with high-viscosity fluids. The fluid itself may be expensive, and there may be issues with its compatibility with certain materials. Additionally, using this fluid may add extra weight [START_REF] Roe | Immersion cooling for lithium-ion batteries -A review[END_REF].

Indirect cooling

• Air cooling through a pin-fin heat sink

A pin-fin type heat sink (Figure 2.8 (c)), usually made of a high-thermal-conductivity material, is an example of indirect air cooling. Its base surface, attached to the heat-generating object, absorbs the heat and rises in temperature; pins extend from the base to increase the heat transfer surface area. The air near the metal is heated, which causes a flow motion (natural convection) driven by the density difference arising from the temperature difference between the air near the heat sink and the ambient air. It can easily be combined with a fan to take advantage of forced air convection for better cooling performance.

• Phase change cooling

A phase change cooling system is a heat pump consisting of a compressor, a condenser, an expansion valve, and an evaporator, as depicted in Figure 2.8 (a). The phase change process happens in the evaporator (cooling section) and the condenser. The evaporator is attached to the heat-generating object, the coolant absorbing a large amount of heat during the phase change from liquid to vapor. It is employed for the cooling of computing equipment [START_REF]phase change cooling system[END_REF]. But it is less stable, more difficult to control, and more costly compared to single-phase liquid cooling, which is introduced below.

• Single-phase liquid cooling

Single-phase liquid cooling is a relatively simple method for indirectly cooling a heat-generating surface via a heat sink. Unlike the pin-fin structure for indirect air cooling (Figure 2.8 (c)), the heat sink for liquid cooling usually has confined channels, with the liquid coolant flowing inside to carry away the generated heat by forced convection (without phase change). It is usually composed of inlet(s), outlet(s), and the main body (design domain) where most of the heat dissipation happens. The main body of the liquid-cooling heat sink can have many designs and geometries, which will be introduced in detail in a later sub-section. As a simple and easy-to-implement cooling technique, single-phase liquid cooling by forced convection is broadly applied in various industries.
Compared to natural convection, its heat transfer capacity is much higher owing to liquid forced convection. Compared to jet or spray cooling, the effective cooling surface area is much larger at a lower device cost. Compared to phase-change cooling, liquid cooling using a heat sink also has many advantages, including higher compactness and hardware reliability, lower system complexity and cost, etc. [START_REF]phase change cooling shortcomings[END_REF]. In the remainder of this thesis, we will focus only on heat sinks for single-phase liquid cooling systems, especially their design and structural optimization. Many research studies [START_REF] Amoako | Optimization of heat sinks in a range of configurations[END_REF][START_REF] Khan | The Role of Fin Geometry in Heat Sink Performance[END_REF][START_REF] Hoang | Impact of fin geometry and surface roughness on performance of an impingement two-phase cooling heat sink[END_REF][START_REF] Cano-Banda | Effect of different geometry flow pattern on heat sink performance[END_REF] have reported that the performance of a heat sink depends strongly on the flow channel geometry/topology, i.e., how the fluid flow paths are arranged and organized. These aspects will be introduced in detail in the following chapter 2.4.
While the uniform flow distribution is often the target to achieve for cooling an evenly heating surface, the uneven heating surface with multiple heat sources changes and complicates the rule. Research on this topic will be further reviewed and analyzed in the following section 2.4.1. Moreover, the channel cross-sectional shape can also be subjected to optimization; some relevant work will be summarized in section 2.4.2. A pin-fin heat sink (Figure 2.9 (c)) is another typical category for which the flow domain is generally a cavity filled with an array of fins usually having the same height. The fin shape could be rectangular, triangular, circular, or others. The shape, the spacing, and the arrangement of fins all have influences on the performances of the heat sink, thereby being subjected to optimization. More studies on the pin fin structure optimization in the literature are reviewed in section 2.4.3. The last category of the heat sink combines two or more above-mentioned enhancement structures, for instance, cavity, rib, and parallel straight channel as shown in Figure 2.9 (f), or a porous medium (metal foam) structure in general. Again the penalty on the pressure drop should be specially considered despite the usually better thermal performance. Worth noting for this category is a new trend that recently emerged but rapidly becomes the focus of attention. For these complex heat sinks, their flow channel is not based on some pre-defined geometries (parallel channel, pin-fin, etc.), but is usually a result of topology optimization. Some real examples may be found in Table 2.1 and more discussion on this topic will be given in section 2.4.4. Flow distribution optimization/investigation on parallel straight channel Among various structures of heat sinks for electronic cooling, the parallel straight micro/mini channel configuration is the most widely used because of its simple geometry, high cooling performance, cost-effective fabrication, and easy implementation [START_REF] Xia | Effects of different geometric structures on fluid flow and heat transfer performance in microchannel heat sinks[END_REF]. It usually comprises single inlet and outlet ports, inlet/outlet manifolds, and a multitude of micro/minichannels in the middle. Due to their geometric specificity, the presence of the coolant flow maldistribution may result in the thermal performance deterioration of the heat sink and the formation of localized temperature hot spots in the electronic device [START_REF] Kumar | A novel approach to manage temperature non-uniformity in minichannel heat sink by using intentional flow maldistribution[END_REF]. Therefore, one important issue that attracts great attention is how to properly deliver and distribute the cooling fluid across the parallel micro/mini channels to ensure optimal cooling performance. Plenty of research has been devoted to achieving uniform flow distribution in parallel channel heat sinks under the assumption of an even heating surface. These researches can mainly be classified into three categories: (1) arrangement of heat sink inlet/outlet positions or the injecting angle ( Kumar and Singh [START_REF] Kumar | Effects of flow inlet angle on flow maldistribution and thermal performance of water cooled mini-channel heat sink[END_REF] numerically tested the effect of the flow inlet angle between the inlet port and the parallel channels (theta = 90°, 105°, and 120°) on the flow distribution nonuniformity and the thermal performance of a mini channel heat sink. 
It was found that the inlet angle of 105° provided the best flow uniformity and the best thermal performance under uniform heating conditions. Manikanda Kumaran et al. [START_REF] Manikanda Kumaran | Experimental and numerical studies of header design and inlet/outlet configurations on flow mal-distribution in parallel micro-channels[END_REF] experimentally and numerically studied the effect of the inlet/outlet locations (U, C, V, S, and D types, as shown in Figure 2.10) on the flow distribution non-uniformity. Their results showed that the main reasons for non-uniform flow distribution were the presence of secondary flow, flow separation, and re-circulation in the manifolds. Among the tested heat sink types, the C-type arrangement exhibited the best flow distribution uniformity whereas the V-type heat sink had the poorest. Similarly, the effect of the inlet/outlet arrangement was numerically studied by Chein and Chen [START_REF] Chein | Numerical study of the inlet/outlet arrangement effect on microchannel heat sink performance[END_REF]. They found that the velocity distribution is less uniform for heat sinks with the inlet/outlet tubes coplanar with the parallel channels (I, N, D, and S types in Figure 2.10) than for those with vertical fluid supply and collection (U and V types, i.e., inlet/outlet tubes perpendicular to the parallel channels). Kumar and Singh [START_REF] Kumar | A novel approach to manage temperature non-uniformity in minichannel heat sink by using intentional flow maldistribution[END_REF] investigated different types of flow arrangement, and their numerical results showed that the I-type flow arrangement could provide better thermal performance under uniform heating than the D-type heat sink, which had a more uniform flow distribution. Chen et al. [START_REF] Chen | Cooling efficiency improvement of air-cooled battery thermal management system through designing the flow pattern[END_REF] investigated various positions of the inlet and outlet regions of an air-cooled battery thermal management system (BTMS). The results showed that the symmetrical BTMS with the inlet and outlet located in the middle of the plenums achieved high cooling efficiency. Regarding header design, Manikanda Kumaran et al. [START_REF] Manikanda Kumaran | Experimental and numerical studies of header design and inlet/outlet configurations on flow mal-distribution in parallel micro-channels[END_REF] also numerically and experimentally tested different header shapes (rectangular, triangular, and trapezoidal) and header sizes. Their results showed that the triangular inlet header and the trapezoidal outlet header provided better flow uniformity than the others. Many other novel header designs [START_REF] Manikanda Kumaran | Experimental and numerical studies of header design and inlet/outlet configurations on flow mal-distribution in parallel micro-channels[END_REF][START_REF] Dąbrowski | Mitigation of Flow Maldistribution in Minichannel and Minigap Heat Exchangers by Introducing Threshold in Manifolds[END_REF][START_REF] Liu | Experimental study of the flow distribution uniformity in flow distributors having novel flow channel bifurcation structures[END_REF][START_REF] Pistoresi | Numerical study on the improvement of flow distribution uniformity among parallel mini-channels[END_REF][START_REF] Zhou | Characteristics of flow distribution in central-type compact parallel flow heat exchangers with modified inlet and header[END_REF] have also been proposed and tested, as summarized in the review paper by Ghani et al.
[START_REF] Ghani | The effect of manifold zone parameters on hydrothermal performance of micro-channel HeatSink: A review[END_REF]. Differing from the studies on inlet header shapes, Song et al. [START_REF] Song | Enhanced flow uniformity in parallel mini-channels with pinfinned inlet header[END_REF] proposed adding a staggered pin-fin array in the inlet header of a water-cooled heat sink (Figure 2.11 (a)). The influences of different pin-fin arrangements in trapezoidal or rectangular inlet headers on the flow distribution uniformity among the minichannels were numerically assessed. Liu and Yu [START_REF] Liu | Numerical study on performances of mini-channel heat sinks with non-uniform inlets[END_REF] proposed a non-uniform-sized mini-inlet baffle to artificially control the mass flow rate of the working fluid in the mini-channels (Figure 2.11 (b)). Hou et al. [START_REF] Hou | A novel approach for suppressing flow maldistribution in minichannel heat exchangers[END_REF] proposed a built-in spiral baffle in the inlet manifold of a mini channel heat sink to achieve a uniform flow distribution. In the study [START_REF] Fatahian | Improving the flow uniformity in compact parallel-flow heat exchangers manifold using porous distributors[END_REF], Fatahian inserted thin layers of porous media at the inlet of the distribution tubes to address the flow mal-distribution in a parallel-flow heat exchanger; a parametric study was conducted to evaluate the effect of the porous media geometrical parameters (Figure 2.11 (c)). Gilmore et al. [START_REF] Gilmore | Manifold configurations for uniform flow via topology optimisation and flow visualisation[END_REF] optimized compact and adaptable manifold configurations for achieving uniform flow distribution with minimal pressure drop. Their numerical results showed that both the flow distribution uniformity and the temperature uniformity at the heated base wall could be improved by using an appropriate baffle. Nevertheless, no optimization procedure has been proposed to determine the best baffle or obstacle geometries. The geometry of the heat sink channels is also considered a design parameter in many studies to adjust the flow distribution. Dhahad et al. [START_REF] Dhahad | Effect of flow field design and channel/header ratio on velocity distribution: An experimental approach[END_REF] showed a decreased flow non-uniformity with decreasing header/channel area ratio. Mu et al. [START_REF] Mu | Numerical study on temperature uniformity in a novel minichannel heat sink with different flow field configurations[END_REF] proposed parallel channel water-cooled heat sinks with variable channel height. Their results showed that the temperature non-uniformity (Tmax-Tmin) could be reduced from 4.7 K to 0.97 K and the thermal resistance decreased from 0.03 K·W⁻¹ to 0.028 K·W⁻¹ by replacing the conventional channels with the variable-height channels. Hao et al. [START_REF] Hao | Numerical Analysis and Optimization on Flow Distribution and Heat Transfer of a U-Type Parallel Channel Heat Sink[END_REF] numerically investigated the effect of the geometry parameters (number of channels, channel width, and channel length) of a fluid-cooled heat sink.
Optimal values were determined for both flow distribution uniformity and maximum temperature reduction, using the orthogonal experiment design method. The effects of different channel dimensions (parameters) on the reduction of thermal resistance were numerically examined by Mitra and Ghosh [START_REF] Mitra | Mini-channel heat sink parameter sensitivity based on precise heat flux re-distribution[END_REF], and the optimum dimensions of the water-cooled minichannel heat sink, modeled as fins on a substrate, were determined. To achieve a target flow distribution, the Design of Experiments (DOE) approach along with response surface optimization was used by Sogunuru et al. [START_REF] Annapurna | A Design of Experiments Approach Towards Desired Flow Distribution Through Manifolds in Electronics Cooling[END_REF] to arrive at the desired flow by introducing suitable orifices, obtained by applying a Multi-Objective Genetic Algorithm (MOGA). The study [START_REF] Song | Effects of manifold design parameters on flow uniformity in parallel mini-channels[END_REF] developed a new predictive tool for quantitatively assessing flow mal-distribution in parallel channels based on the parametric effects. Their results showed that the relative influence of each variable on flow mal-distribution could be ranked in the following order: header length > channel height > connection tube diameter > total volumetric flow rate > channel width. Narendran et al. [START_REF] Narendran | Investigation on novel inertial minichannel to mitigate maldistribution induced high temperature zones[END_REF] evaluated the potential of ribs and inertial-based spillway channels to overcome the flow mal-distribution issue. They found that the ribbed inclined channel performed better than the other types and developed a 33% lower center-channel velocity than the normal channel. All the above-mentioned studies aimed at achieving uniform flow distribution under the assumption of a uniform heat flux at the base wall of the heat sink. In reality, however, the heat flux profile generated by electronic devices is not uniform; instead, it involves multiple heat sources and presents a multiple-peak heat flux, as presented in section 2.2. Under these circumstances, an intentionally non-uniform flow distribution may be a better option for decreasing the local maximum temperature, as pointed out by Kumar and Singh [71]. This is actually in line with observations reported in the literature [START_REF] Lou | Single-tank thermal energy storage systems for concentrated solar power: Flow distribution optimization for thermocline evolution management[END_REF][START_REF] Milman | Steam condensation in parallel channels with nonuniform heat removal in different zones of heat-exchange surface[END_REF][START_REF] Wei | Fluid flow distribution optimization for minimizing the peak temperature of a tubular solar receiver[END_REF] in that the optimal flow distribution is usually not uniform but obeys certain trends subject to a defined optimization objective and constraints. In this regard, Kumar and Singh [START_REF] Kumar | A novel approach to manage temperature non-uniformity in minichannel heat sink by using intentional flow maldistribution[END_REF] indicated that the flow arrangement and the actual flow distribution should fit the heat flux shape to achieve better thermal performance, i.e., a lower maximum temperature and thermal resistance.
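The flow (mal)distribution discussed throughout this subsection is typically quantified by a non-uniformity index computed from the per-channel flow rates. The following is a minimal post-processing sketch of such an index; the function names and sample values are illustrative choices of ours and are not taken from the cited studies.

```python
import numpy as np

def flow_nonuniformity(channel_flow_rates):
    """Relative standard deviation of the per-channel flow rates.
    0 means a perfectly uniform distribution; larger values indicate
    stronger maldistribution among the parallel channels."""
    q = np.asarray(channel_flow_rates, dtype=float)
    return np.std(q) / np.mean(q)

def channel_flow_ratios(channel_flow_rates):
    """Per-channel flow ratio q_i / q_mean, a common way to visualize
    which channels are over- or under-fed."""
    q = np.asarray(channel_flow_rates, dtype=float)
    return q / np.mean(q)

# Example: 10 parallel channels, the central ones receiving more coolant (arbitrary units)
q = [0.8, 0.9, 1.0, 1.1, 1.2, 1.2, 1.1, 1.0, 0.9, 0.8]
print(flow_nonuniformity(q))    # ~0.14
print(channel_flow_ratios(q))
```

Whether such an index should be driven towards zero (uniform heating) or towards a profile matched to the heat flux shape (multiple-peak heating) is precisely the open question raised above.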
Yet, no optimization method has been developed so far to determine the optimal flow distribution profile. The above literature survey indicates that systematic and quantitative studies addressing the flow and temperature distribution characteristics in parallel micro/mini channel heat sinks under non-uniform and multiple-peak heat flux conditions are still lacking. In particular, the relation between the optimal flow distribution of the cooling fluid and the non-uniform heat flux shape is unclear. Moreover, investigations on the development of effective methods to determine and realize the most adapted flow distribution in parallel channel heat sinks are needed.

Channel cross-section optimization

For parallel straight channel heat sinks, the channel cross-section shape is also an object of optimization. Note that the difference between flow distribution optimization and channel cross-section shape optimization is that the former targets adjusting the amount of mass flow in every channel, whereas the latter focuses on the cross-section shape of a single channel or the shape of the entire heat sink cross-section; an identical cross-section shape is then extruded along the flow direction to form the whole channel length. According to the design parametrization, the works in this domain can be classified into three classes, as presented in Figure 2.12: (a) optimization of the geometrical parameters of a single-channel cross-section based on a predefined simple geometry [START_REF] Baodong | Multi-objective optimization design of a micro-channel heat sink using adaptive genetic algorithm[END_REF][START_REF] Karathanassis | Multi-objective design optimization of a micro heat sink for Concentrating Photovoltaic/Thermal (CPVT) systems using a genetic algorithm[END_REF][START_REF] Normah | Comparison of the optimized thermal performance of square and circular ammonia-cooled microchannel heat sink with genetic algorithm[END_REF][START_REF] Halelfadl | Optimization of thermal performances and pressure drop of rectangular microchannel heat sink using aqueous carbon nanotubes based nanofluid[END_REF][START_REF] Lin | Optimization of the Micro Channel Heat Sink by Combing Genetic Algorithm with the Finite Element Method[END_REF][START_REF] Hung | Optimal design of geometric parameters of doublelayered microchannel heat sinks[END_REF][START_REF] Wang | Optimal geometric structure for nanofluid-cooled microchannel heat sink under various constraint conditions[END_REF][START_REF] Wang | Inverse geometric optimization for geometry of nanofluid-cooled microchannel heat sink[END_REF][START_REF] Leng | Optimization of thermal resistance and bottom wall temperature uniformity for double-layered microchannel heat sink[END_REF][START_REF] Rao | Dimensional optimization of a micro-channel heat sink using Jaya algorithm[END_REF][START_REF] Yin | Shape optimization of water-to-water plate-fin heat exchanger using computational fluid dynamics and genetic algorithm[END_REF];
(b) shape optimization of a single-channel cross-section [START_REF] Foli | Optimization of micro heat exchanger: CFD, analytical approach and multi-objective evolutionary algorithms[END_REF][START_REF] Ge | Optimal shape design of a minichannel heat sink applying multi-objective optimization algorithm and three-dimensional numerical method[END_REF]; and (c) topology optimization of the entire heat sink cross-section [START_REF] Dokken | Optimization of 3D printed liquid cooled heat sink designs using a microgenetic algorithm with bit array representation[END_REF]. The key step for optimization is to build the relationship between the design variables and the objective function, either by direct physical analysis or by constructing a surrogate function. Owing to its predefined and simple geometry, the size/parameter optimization of the channel cross-section is the only class for which explicit analytical relations between the design variables and the objective function can be established based on physical models. For example, Shao et al. [START_REF] Baodong | Multi-objective optimization design of a micro-channel heat sink using adaptive genetic algorithm[END_REF] optimized the number, width, and height of the microchannels, as well as the thickness of the solid separating walls, for a conventional parallel straight microchannel heat sink. A genetic algorithm (GA) was applied based on the analytical relations between these variables and the objectives (both thermal resistance and pressure drop). Other studies in this line have also been performed [START_REF] Khan | Optimization of Microchannel Heat Sinks Using Genetic Algorithm[END_REF], usually assuming fully developed laminar flow and a simplified heating boundary (e.g., a constant Nusselt number). For more complex heat sink geometries or real operating conditions, however, a surrogate function has to be built. One of the approaches frequently applied for this purpose is response surface analysis (RSA) [START_REF] Karathanassis | Multi-objective design optimization of a micro heat sink for Concentrating Photovoltaic/Thermal (CPVT) systems using a genetic algorithm[END_REF][START_REF] Husain | Optimization of a microchannel heat sink with temperature dependent fluid properties[END_REF][START_REF] Kulkarni | Multi-objective optimization of a double-layered microchannel heat sink with temperature-dependent fluid properties[END_REF]. For example, Karathanassis et al. [START_REF] Karathanassis | Multi-objective design optimization of a micro heat sink for Concentrating Photovoltaic/Thermal (CPVT) systems using a genetic algorithm[END_REF] used a GA to optimize the channel width and solid wall thickness to minimize both the thermal resistance and the pressure drop. The conjugate-gradient method is also often used for channel cross-section optimization [START_REF] Hung | Optimal design of geometric parameters of doublelayered microchannel heat sinks[END_REF][START_REF] Wang | Optimal geometric structure for nanofluid-cooled microchannel heat sink under various constraint conditions[END_REF][START_REF] Wang | Inverse geometric optimization for geometry of nanofluid-cooled microchannel heat sink[END_REF][START_REF] Leng | Optimization of thermal resistance and bottom wall temperature uniformity for double-layered microchannel heat sink[END_REF].
The number of channels, the channel aspect ratio, and the channel-to-pitch ratio are usually considered as design variables, and the total thermal resistance is considered as the (single) objective criterion under the constraint of a fixed pressure drop or pumping power. The GA method is usually used for multi-objective optimization. For example, Halelfadl et al. [START_REF] Halelfadl | Optimization of thermal performances and pressure drop of rectangular microchannel heat sink using aqueous carbon nanotubes based nanofluid[END_REF] used an elitist non-dominated sorting genetic algorithm (NSGA-II) to optimize the channel aspect ratio and the wall aspect ratio. Lin et al. [START_REF] Lin | Optimization of the Micro Channel Heat Sink by Combing Genetic Algorithm with the Finite Element Method[END_REF] employed a CFD-based (COMSOL) GA method to design and optimize the channel aspect ratio and the ratio of the channel width to pitch, with the objectives of minimizing the thermal resistance and the weight of the heat sink. A similar CFD-based GA approach may also be found in [START_REF] Alperen | Multi objective optimization of a micro-channel heat sink through genetic algorithm[END_REF]. More recently, Rao et al. [START_REF] Rao | Dimensional optimization of a micro-channel heat sink using Jaya algorithm[END_REF] proposed a novel algorithm named 'Jaya', in which the relation between the design variables and the objective function is obtained by RSA in the form of a polynomial expression. They compared the optimization results of the Jaya algorithm with those obtained by the TLBO algorithm and by a hybrid multi-objective evolutionary algorithm (MOEA) combined with numerical analysis, and the Jaya algorithm turned out to give better results. To minimize the thermal resistance and pressure drop of a straight square/circular microchannel heat sink, a non-dominated sorting genetic algorithm (NSGA-II) was applied by Ghazali-Mohd et al. [START_REF] Normah | Comparison of the optimized thermal performance of square and circular ammonia-cooled microchannel heat sink with genetic algorithm[END_REF]. In the study of Yin et al. [START_REF] Yin | Shape optimization of water-to-water plate-fin heat exchanger using computational fluid dynamics and genetic algorithm[END_REF], the Colburn factor and the Fanning friction factor, which are the main difficulties in expressing the objective function (unit entropy production), were calculated by multiple regression analysis of the numerical simulations. Relatively fewer studies have focused on the shape optimization of the channel cross-section. In the study of Foli et al. [START_REF] Foli | Optimization of micro heat exchanger: CFD, analytical approach and multi-objective evolutionary algorithms[END_REF], the shape of the separator in the heat exchanger was represented by two non-uniform rational B-splines (NURBS) [START_REF] Piegl | The NURBS Book[END_REF] as the design parameters, and a CFD-based MOGA method was used considering both the heat transfer and the pressure drop. Similarly, Ge et al. [START_REF] Ge | Optimal shape design of a minichannel heat sink applying multi-objective optimization algorithm and three-dimensional numerical method[END_REF] described the cross-sectional shape of a straight mini-channel heat sink by six variables for optimization. Their results indicate that the pumping power could be effectively reduced, without significantly increasing the thermal resistance, by modifying the rectangular cross-section into a curvy boundary shape (Figure 2.12 (b)).
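As a concrete illustration of the simplest of these classes — size optimization of the channel cross-section driven by an explicit physical model and a genetic algorithm — the sketch below couples a classical one-dimensional fin model of the thermal resistance of a straight microchannel heat sink, evaluated under a fixed pressure drop, with a rudimentary mutate-and-select loop. It is only a didactic sketch under strong assumptions (fully developed laminar flow, constant Nusselt and Poiseuille numbers, convective and caloric resistances only); the parameter values are generic and are not taken from the cited studies.

```python
import numpy as np

# --- Fixed parameters (illustrative values, not taken from the cited studies) ---
W, L, H  = 0.02, 0.02, 400e-6     # heat sink footprint 20 mm x 20 mm, channel height 400 um
k_f, k_s = 0.6, 150.0             # water / silicon thermal conductivities (W/m.K)
rho, cp  = 1000.0, 4180.0         # coolant density (kg/m^3) and heat capacity (J/kg.K)
mu       = 1e-3                   # coolant dynamic viscosity (Pa.s)
dP       = 10e3                   # allowed pressure drop across the channels (Pa)
Nu, fRe  = 4.0, 60.0              # constant Nusselt and Poiseuille numbers (crude assumption)

def thermal_resistance(w_c, w_w):
    """Simplified fin-model thermal resistance (K/W) of a straight microchannel
    heat sink: convective + caloric terms, fully developed laminar flow."""
    n   = int(W // (w_c + w_w))               # number of channels fitting in the width W
    D_h = 2 * w_c * H / (w_c + H)             # hydraulic diameter
    h   = Nu * k_f / D_h                      # convective heat transfer coefficient
    u   = 2 * dP * D_h**2 / (fRe * mu * L)    # mean velocity for the imposed pressure drop
    Q   = n * u * w_c * H                     # total volumetric flow rate
    m   = np.sqrt(2 * h / (k_s * w_w))        # fin parameter of the separating walls
    eta = np.tanh(m * H) / (m * H)            # fin efficiency
    A   = n * L * (w_c + 2 * eta * H)         # effective convective surface area
    return 1 / (h * A) + 1 / (rho * cp * Q)   # R_convective + R_caloric

# --- A rudimentary GA-like loop (selection + Gaussian mutation) on (w_c, w_w) ---
rng = np.random.default_rng(0)
lo, hi = np.array([50e-6, 30e-6]), np.array([500e-6, 200e-6])   # bounds on (w_c, w_w)
pop = rng.uniform(lo, hi, size=(20, 2))
for _ in range(100):
    fitness  = np.array([thermal_resistance(*ind) for ind in pop])
    parents  = pop[np.argsort(fitness)[:10]]                        # keep the best half
    children = np.clip(parents * rng.normal(1.0, 0.05, parents.shape), lo, hi)
    pop = np.vstack([parents, children])

best = min(pop, key=lambda ind: thermal_resistance(*ind))
print("best (w_c, w_w) in um:", best * 1e6, " R_th =", thermal_resistance(*best), "K/W")
```

A study of the kind reviewed above would typically replace the constant Nu and f·Re with aspect-ratio-dependent correlations, add the pressure drop or pumping power as a second objective (e.g., with NSGA-II), and use crossover in addition to mutation.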
To the best of our knowledge, only two papers have addressed the topology optimization of the heat sink cross-section (Figure 2.12 (c)). The first [START_REF] Dokken | Optimization of 3D printed liquid cooled heat sink designs using a microgenetic algorithm with bit array representation[END_REF], presented in Figure 2.12 (c), discretizes a heat sink cross-section into a grid space, and a micro-genetic algorithm is applied to produce optimum shapes for liquid-cooled heat sinks, represented as bit arrays, based on the system entropy generation rate. In the other research article [START_REF] Lee | Topology optimization of a heat sink with an axially uniform cross-section cooled by forced convection[END_REF], Lee et al. propose a topology optimization method for maximizing the thermal performance of a heat sink with an axially uniform cross-section cooled by forced convection, under the constraint of a fixed pumping power. It is shown that the proposed topology optimization method can be used to design lighter heat sinks with higher thermal performance for practical applications. It should be noted that this approach differs from the topology optimization of the flow channel configuration to be discussed later, since, once optimized, the same cross-section topology is kept all along the channel length direction. It actually optimizes the mass flow distribution inside the heat sink without a parallel channel presetting, thereby providing the same prospects as tailoring the flow distribution to address the multiple-peak heat flux cases.

Fin-shape and arrangement optimization for pin-fin heat sinks

The pin-fin structure is commonly used for heat sinks due to its large solid-fluid heat transfer surface area. Conventional pin-fin heat sinks usually have simple, regular, and uniform fin shapes (e.g., cuboid, cylinder, etc.), with uniform spacing between the fins. To further improve their thermal/hydraulic performances, many studies have been devoted to the optimization of the pin-fin geometrical structure. Some of them focus on size optimization, while others optimize the pin shape and/or arrangement based on a predefined pin geometry or through mathematical parametrization. In addition, TO has also been applied to pin-fin optimization. The size optimization of the pin-fin structure is relatively simple. It is usually based on a simple pin geometry (cuboid or cylinder), and this basic geometry does not change during optimization. The design variables may involve the number, the height, the width/length (or diameter), or the spacing/pitch of the pins [START_REF] Ahmadian-Elmi | A comprehensive study on parametric optimization of the pin-fin heat sink to improve its thermal and hydraulic characteristics[END_REF], while the optimization objective(s) may consider both the thermal and hydraulic performance indicators of the heat sink. Different optimization methods have also been used for this purpose, either based on simplified physical models or assisted by CFD simulation. For example, the Levenberg-Marquardt design algorithm was used by Huang et al. [START_REF] Huang | An optimal design problem in determining non-uniform fin heights and widths for an impingement heat sink module[END_REF] to optimize the heights and widths of non-uniform fins, and the thermal resistance decreased by 10.53% compared with a uniform pin-fin structure. Similar studies have been performed by Yang et al.
using the Taguchi method [START_REF] Yang | Numerical Optimization of Pin-Fin Heat Sink with Forced Cooling[END_REF] (Figure 2.13 (a)) or the GA coupled with RSA [START_REF] Yang | Numerical simulation and optimization of impingement cooling for rotating and stationary pin-fin heat sinks[END_REF]. Chen et al. [START_REF] Chen | Multi-objective optimization design of plate-fin heat sinks using a direction-based genetic algorithm[END_REF] applied a direction-based GA to search for the optimal fin design variables for lower entropy generation and material cost. Slightly different pin geometrical design variables (pin-fin porosity and pin-fin location angle) were considered in the study of Zhao et al. [START_REF] Zhao | Numerical study and optimizing on micro square pin-fin heat sink for electronic cooling[END_REF], using their proposed geometry optimization method. The kriging method was employed by Nemati et al. [START_REF] Nemati | Shape optimisation of air-cooled finned-tube heat exchangers[END_REF] to generate the response surface for the output parameters, and a MOGA approach was used to perform the multi-objective optimization (entropy generation, pressure loss). Polat et al. [START_REF] Polat | Multi-objective optimization and performance assessment of microchannel heat sinks with micro pin-fins[END_REF] optimized the arrangement (porosity) of pins with different shapes (circular, square, and diamond) to minimize the pressure drop and maximize the Nusselt number, using the MOGA method. Wang et al. [START_REF] Wang | The application of genetic algorithm for pin-fin heat sink optimization design[END_REF] expressed the objective functions (the time constant characterizing the response of the solid to external heat disturbances, and the pressure drop of the pin-fin heat sink) as functions of the design variables (the spacing between pin-fins) through a physics-based mathematical model, and applied a real-coded genetic algorithm with a novel direction-based crossover operator to minimize both the time constant and the pressure drop. All the above-mentioned studies are based on a uniform heating surface. One exception is the study of Yang et al. [START_REF] Yang | Optimization of Pin Arrangement and Geometry in EV and HEV Heat Sink Using Genetic Algorithm Coupled With CFD[END_REF], who optimized both the pin arrangement and the pin geometry with localized heat sources, using a CFD-assisted GA method. The shape optimization of a pin-fin heat sink modifies the boundary of a predefined or non-predefined pin geometry using certain optimization methods. For example, Ismayilov et al. [START_REF] Ismayilov | Systematic micro heat sink optimization based on hydrofoil shape pin fins[END_REF] utilized a CFD-assisted MOGA to vary the hydrofoil pin-fin shape through a flexible parametrization using an ellipse and polynomials. Their results showed that, with the novel pin fin shape, the heat transfer enhancement dominates over the pressure drop increase at higher Reynolds numbers. Recently, Keramati et al. [START_REF] Keramati | Deep reinforcement learning for heat exchanger shape optimization[END_REF] used a similar approach by defining the geometry with a composite Bézier curve parameterized by control points (Figure 2.13 (b)); proximal policy optimization with a deep neural network and CFD was applied to maximize heat transfer and minimize pressure drop.
Results showed that a 30% improvement in overall heat transfer and a 60% reduction in pressure drop compared to the rectangular reference geometry could be achieved. Huang et al. [START_REF] Huang | Novel thermal design of micro-bream-fin heat sink using contour-extraction-based (CEB) method[END_REF] proposed a novel method that extracts the wake flow contours in the channels as the geometry contours of the pin fins to enhance the thermal and hydraulic performances of the micro-pin-fin heat sink. The pin convection performance of the novel bream fin could be up to 34% higher than that of the original circular fin. There is only one study [START_REF] Ghasemi | Multi-objective topology optimization of pin-fin heat exchangers using spectral and finite-element methods[END_REF] on the TO of a pin-fin heat sink (Figure 2.13 (c)). Ghasemi et al. developed a multi-objective topology optimization approach to optimize pin-fin heat sink geometries so as to minimize thermal resistance and pressure loss. A dedicated pseudo-3D conjugate heat transfer model is utilized, assuming a periodic flow and fin design pattern; a pseudo-spectral scheme is used for the flow solution and the finite element method for the non-periodic conjugate heat transfer model. They found that the optimized topologies demonstrated superior cooling performances at a lower pressure-loss penalty compared to conventional (circular) inline and staggered fins, and confirmed the supremacy of topology optimization over pure sizing optimization. However, due to the simulation and optimization complexity, these works on pin-fin shape optimization are usually focused on a single pin shape, or at best on one slice along the flow direction. To the best of our knowledge, studies addressing the cooling of a whole heat-generating surface by pin-fin shape optimization or TO remain rare, and fin optimization without a predefined shape has only focused on single-fin design. Moreover, the relatively simple geometry and array arrangement of the pin-fin structure may restrict the freedom of the heat sink design to morph, making it less flexible and diversified than the TO of the global flow channel configuration.

Topology optimization (TO) of global flow configuration in the heat sink

The TO of the global flow channel configuration allows organizing and arranging both the fluid and solid domains freely, i.e., creating solid islands or flow paths without any geometry presetting. Holding the highest design degrees of freedom, it has attracted enormous attention from both the academic and industrial communities and is regarded as a ground-breaking technique to obtain innovative designs of heat sinks (or heat exchangers in a general sense) with greatly improved effectiveness. The TO process generally includes four basic stages [START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF]: (1) design parametrization, (2) heat transfer modeling, (3) optimization process, and (4) final realization. Based on the design parametrization, the TO for single-phase heat sinks may be classified into three types: density-based, level-set, and direct explicit. They differ from one another in the representation of the optimization variables that determine the design configuration and establish the relationship between the design variables and the physical properties, e.g., the density distribution describing the flow paths in the density-based method [START_REF] Yoon | Topological design of heat dissipating structure with forced convective heat transfer[END_REF].
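To illustrate the density-based parametrization, a design field gamma in [0, 1] is typically defined on the mesh (gamma = 1 for fluid, gamma = 0 for solid) and mapped to the physical properties through interpolation functions: an inverse permeability (Brinkman penalization) added to the momentum equation to block the flow in solid regions, and an effective thermal conductivity for the conjugate heat transfer model. The sketch below uses commonly encountered convex (RAMP-type) and power-law (SIMP-type) interpolations; it is a generic, minimal sketch with illustrative constants, not the exact formulation of any study cited here.

```python
import numpy as np

def brinkman_alpha(gamma, alpha_max=1e7, q=0.01):
    """Inverse permeability alpha(gamma) used as a Brinkman penalization
    (body force -alpha*u in the momentum equation).
    gamma = 1 -> fluid (alpha = 0); gamma = 0 -> solid (alpha = alpha_max)."""
    gamma = np.asarray(gamma, dtype=float)
    return alpha_max * q * (1.0 - gamma) / (q + gamma)

def effective_conductivity(gamma, k_fluid=0.6, k_solid=150.0, p=3.0):
    """Power-law interpolation of the thermal conductivity between the
    solid (gamma = 0) and fluid (gamma = 1) values."""
    gamma = np.asarray(gamma, dtype=float)
    return k_solid + (k_fluid - k_solid) * gamma**p

def update_design(gamma, sensitivity, step=0.1, fluid_fraction=0.5):
    """One projected gradient step on the design field, followed by a bisection
    on a uniform shift to enforce the prescribed fluid volume fraction."""
    g = np.clip(gamma - step * sensitivity, 0.0, 1.0)
    lo, hi = -1.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if np.mean(np.clip(g + mid, 0.0, 1.0)) > fluid_fraction:
            hi = mid
        else:
            lo = mid
    return np.clip(g + 0.5 * (lo + hi), 0.0, 1.0)
```

In a gradient-based workflow, the sensitivity passed to update_design would come from an adjoint solution of the coupled flow and heat transfer equations, as discussed in the next paragraph.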
Once parametrized, the design variables are mapped, interpolated, and updated iteratively to approach the optimum configuration. This involves modeling the heat transfer (in the fluid and solid phases) coupled with the fluid flow to compute the distribution of the state variables (pressure, velocity, and temperature) at each optimization iteration. Various solvers have been implemented over the past decades, mainly including the finite element method (FEM), the finite volume method (FVM), and the lattice Boltzmann method (LBM), usually under the assumption of laminar, incompressible, and steady-state flow [START_REF] Dong | Multi-objective optimal design of microchannel cooling heat sink using topology optimization method[END_REF][START_REF] Subramaniam | Topology optimization of conjugate heat transfer systems: A competition between heat transfer enhancement and pressure drop reduction[END_REF][START_REF] Yaji | Topology optimization in thermalfluid flow using the lattice Boltzmann method[END_REF]. As for the optimizer, both gradient-based and non-gradient-based approaches can be used. The gradient-based approach relates the design variables and the objective function and computes the gradient of the objective function to seek its stationary point, generally using the adjoint method [START_REF] Sun | Large Scale 3D Topology Optimization of Conjugate Heat Transfer[END_REF]. A combination of the density-based method, the FEM, and a gradient-based optimizer is the mainstream in the TO of heat sinks (or heat exchangers in general) [START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF]. It has shown good efficiency in handling optimization problems with a high number of design variables [START_REF] Alexandersen | Efficient topology optimisation of multiscale and multiphysics problems[END_REF]. More information about these methods and the relevant studies is presented and reviewed in the paper [START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF]. Some optimized examples in this line, as well as laboratory prototypes realized and tested, can be found in Table 2.1. Nevertheless, such a TO strategy has some difficulties in handling numerical artifacts or in describing clear solid-fluid interfaces, and, more importantly, it can easily be trapped in a local optimum [START_REF] Sigmund | Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima[END_REF]. On the contrary, some novel, gradient-free approaches, such as the genetic algorithm (GA) and Bayesian optimization, can overcome these deficiencies and converge towards a global optimum [START_REF] Wei | An improved fast-convergent genetic algorithm[END_REF]. Moreover, the majority of the above-mentioned TO studies deal with a simplified 2D design domain (e.g., [START_REF] Joo | Topology optimization of heat sinks in natural convection considering the effect of shape-dependent heat transfer coefficient[END_REF]).
Among the studies with 3D parametrization (e.g., [START_REF] Dilgen | Density based topology optimization of turbulent flow heat transfer systems[END_REF], [START_REF] Sun | 3D topology optimization of heat sinks for liquid cooling[END_REF]), most are performed under uniform heating, with only a few exceptions [START_REF] Yu | A synergic topology optimization approach on distribution of cooling channels and diverse-intensity heat sources for liquid-cooled heat sink[END_REF] addressing more complex (but more realistic) heating boundaries with multiple heat sources. There have been only a few attempts at performing TO of the global flow channel configuration in heat sinks using non-gradient approaches, mainly due to the complexity of modeling conjugate heat transfer and fluid flow, the difficulty in formulating the optimizer, and the high computational cost. Among them, Yoshimura et al. [START_REF] Yoshimura | Topology optimization of fluid problems using genetic algorithm assisted by the Kriging model[END_REF] proposed a Kriging-surrogate-model-assisted GA method for single-/multi-objective TO of cooling flow channel configurations. Later, the NSGA-II method was coupled with the Kriging surrogate model to search for better designs of lattice-structured heat sinks regarding thermal performance and material cost [START_REF] Shimoyama | Multi-objective Bayesian topology optimization of a lattice-structured heat sink in natural convection[END_REF]. Mekki et al. [START_REF] Mekki | Voxel-Based Topology Optimization of Heat Exchanger Fins[END_REF][START_REF] Mekki | Genetic algorithm based topology optimization of heat exchanger fins used in aerospace applications[END_REF] developed and tested a GA-based TO method for thermo-fluid equipment in aerospace applications, but only elementary fin shapes were considered, using a voxel representation. More recently, Yaji et al. [START_REF] Yaji | Data-driven multifidelity topology design using a deep generative model: Application to forced convection heat transfer problems[END_REF] proposed a hybrid data-driven multifidelity topology design (MFTD) combining density-based methods for the low-fidelity TO with NSGA-II to select the optimal Pareto front.

Experimental studies on topologically optimized (TO) heat sinks

As an essential step from theory to practice, heat sinks with TO geometries should be realized and experimentally tested, on the one hand to validate the numerical models and algorithms and, on the other hand, to showcase their superior performance compared to conventional designs. However, due to the geometric complexity and irregularity of TO structures, their fabrication is more difficult and costly than that of a conventional heat sink with or without enhancement units. Therefore, only a few researchers have realized their TO structures, even with the rapid development of additive manufacturing (AM) techniques. Proper post-treatment of the TO-derived structures is often needed, and engineering-simplified versions are sometimes tested in practice. Table 2.1 presents a complete list of the manufactured and tested TO heat sinks reported in the open literature for single-phase forced convection cooling. The obtained and tested geometries shown in Table 2.1 are distinct from one another owing to the different objective functions and boundary settings, but they usually involve the splitting and recombining of flow channels enclosing some solid islands. The prototypes are realized either by conventional manufacturing (e.g., CNC milling, laser cutting, etc.)
or by Additive Manufacturing (AM). Conductive metal (e.g., aluminum, copper) is usually used to build a heat sink (solid part) while water is usually used as the coolant. Most of the tests are realized under ambient temperature and pressure and laminar flow condition, with only some exceptions [START_REF] Dede | Experimental and numerical investigation of a multi-pass branching microchannel heat sink[END_REF][START_REF] Li | Heat transfer augmentation in microchannel heat sink based on isogeometric topology optimization framework[END_REF][START_REF] Lee | A topology optimization based design of a liquidcooled heat sink with cylindrical pin fins having varying pitch[END_REF] being extended to the transitional/turbulent flow regime. Worth noting is the fact that although uneven heating (multiple heat sources) has been addressed by some TO studies [START_REF] Zeng | Topology optimization of liquid-cooled microchannel heat sinks: An experimental and numerical study[END_REF] by numerical simulation, it has never been configured in practice to the best of our knowledge: all the TO heat sinks are tested under uniform heating conditions. Commonly the coolant flow rate, the overall pressure drop, the fluid inlet, and outlet temperatures, and the temperatures at specific locations of solid walls are measured to characterize the global thermal and hydraulic performances of the TO heat sink. Some more words on the measurement technique of temperature (field) of the solid part, which can be classified into direct and indirect methods. For the direct method, several thermocouples are installed at the specific locations of the heat sink, usually close to the heat source. Nevertheless, only a limited number of points can be recorded by the thermocouples [START_REF] Mo | Topology optimization of cooling plates for battery thermal management[END_REF], but usually insufficient to correctly reconstruct or represent the temperature field of a certain surface (e.g., heating surface, fluid channel wall, etc.). To overcome this drawback, an Infrared thermal imager has been adopted by some researchers [START_REF] Li | Optimal design and thermal modelling for liquid-cooled heat sink based on multi-objective topology optimization: An experimental and numerical study[END_REF][START_REF] Qian | The influence of temperature dependent fluid properties on topology optimization of conjugate heat transfer[END_REF][START_REF] Ri | Design and Numerical Study of Liquid Cooling Radiator Based on Topology Optimization Method[END_REF] as an indirect method to measure and record the temperature field of a target surface. However, only the upper cover of the heat sink has been measured due to technical difficulties. This measuring surface is far away from the heating surface, with a smaller temperature difference, and thus cannot reflect the situation of the locations of interest. It should be noted that the local flow and temperature distributions of the cooling fluid in the TO heat sinks, which are of great interest in understanding the optimized geometries and their performances, have never been characterized by experiments to the best of our knowledge. Regarding the global performance of the tested TO-heat sinks, some of the experimental studies (cf. Table 2.1) include a reference prototype (usually parallel-straight channel heat sink) for comparison, while others focus on the improvement of the TO geometry itself or the proposition of a new treatment. 
Conclusion In this chapter, an extensive literature review has been presented on the design and optimization of heat sinks for the cooling of a heating surface with multiple heat sources. The main conclusions can be summarized as follows: • Non-uniform heating with multiple heat sources commonly appears in electronic devices due to the stacking/array arrangement of functional units for higher power or capacity. Such multiple-peak heat flux condition, compared to uniform heating or single-peak heating, is more likely to cause the overheating problem of electronics, bringing about serious issues like reduced working efficiency, device shut-down, reduced lifetime, irreversibly component deterioration, or even thermal runaway. • Consequently, an efficient thermal management approach is essential for cooling the non-uniform heating surface with multiple heat sources. Plenty of solutions exist for the thermal management of electronics. Among the above methods, single-phase cooling based on forced convection is found to be an efficient, compact, simple, low-cost, and safer technique for this purpose. • Conventional heat sinks for single-phase cooling can be classified into parallel channel, pin-fin, and complex types, with different variants having been proposed and developed for heat transfer enhancement. In particular, the thermal and hydraulic performances of heat sinks can be further improved by adopting size, shape, or topology optimization methods. Research efforts have been made on (1) flow distribution uniformization or control for parallel straight channel heat sinks; (2) channel cross-section shape optimization; (3) pin-fin shape and arrangement optimization; and (4) TO of global flow configuration in the heat sink. • The TO of global flow channel configuration involves reallocating organizing and arranging both the fluid and solid domains without a predefined geometry. Currently, the most popular TO method is a combination of the density-based design parametrization, the FEM for heat transfer modeling, and the gradient-based optimization algorithm. While this type of TO strategy is fast and straightforward, it also suffers from some problems like local optimum trapping, vague fluid-solid boundary, conjugate heat transfer modeling inaccuracy, etc. • Experiment testing of TO heat sinks is an essential step to validate the numerical modeling and optimization method. Only a few TO geometries have been manufactured and tested in practice. Thermocouples are often used to measure the solid temperature near the heaters while an IR camera is also employed but only to measure the temperature distribution on the top cover surface of the heat sink. The global thermal and hydraulic performances of the TO heat sink are often compared with those of a reference (straight channel) heat sink to showcase the design advantages. • The parameters usually chosen to measure for a single-phase heat sink experiment are inlet flow rate, inlet/outlet temperature, pressure drop, and temperature on the heat sink surface. No research papers on heat sink experimental studies measure the detailed temperature or velocity fields inside the fluid domain under the uneven heating condition of multiple heat sources. Almost no research paper on the experiment of topology optimization heat sink compares a benchmark straight channel heat sink and its size optimization with topology optimization. 
Various research gaps can be identified facing this multiple-peak heat flux issue, a problem commonly existing in modern electronics, but insufficiently addressed in the literature: (1) Lack of optimization method to tailor the fluid flow distribution for conventional parallel straight channel heat sinks under multiple-peak heat flux; (2) Lack of gradient-free approaches (e.g., GA) for the TO of global flow channel configuration for forced convection problem with coupled fluid flow and heat transfer; (3) Lack of fine experimental characterization of TO heat sinks under uneven heating with multiple heat sources, especially the local flow and temperature characteristics of cooling fluid; (4) Lack of performance comparison between heat sinks optimized by different methods or under different optimization criteria/constraints. These identified literature gaps are strong motivations for us to perform the research of this Ph.D. thesis by using different numerical and experimental approaches, to be presented in the following chapter. This chapter addresses the optimization of fluid flow distribution in parallel mini-channel heat sinks subjected to a non-uniform multiple-peak heat flux to eliminate the temperature hotspots. A 3D heat sink comprising 16 parallel straight mini-channels is used as a model for the study, each mini-channel having the dimension of 1 mm in width, 2 mm in height and 34 mm in length. In particular, an original size/shape optimization algorithm is developed to adjust the inlets of these mini-channels according to the temperature distribution on the heating base surface. The fluid flow distribution is thereby tailored, leading to the reduced peak temperature on the heating surface. The effectiveness and robustness of the optimization algorithm are tested and discussed. Results obtained show that the maximum temperature can be reduced by 10 K and 7 K for two-peak and five-peak heat flux cases, respectively, by using the proposed optimization method. The heat sink configuration with optimized channel inlets could always provide smaller thermal resistance than that of the equal channel inlet configuration under different average heat flux or total mass flow-rate conditions. At the same pressure drop, tailoring the flow distribution of the cooling fluid is more effective in reducing the thermal resistance than simply increasing the mass flow rate of the cooling liquid. This optimization method, simple and straightforward to implement as it is, could also be generalized as an efficient thermal management technology for electronic cooling. Keywords of the Chapter: Mini-channel heat sink; Fluid flow distribution; Temperature hot spots; Multiple-peak heat flux; Thermal management; Size/shape optimization. Introduction Being a conventional structure, parallel micro/mini-channel heat sinks are widely and commonly used for the efficient cooling of electronic devices. The cooling fluid is usually injected into the single inlet port, divided and distributed into parallel straight channels, and finally collected from the single outlet port. 
Due to such geometric specificity, the flow distribution among the parallel channels is naturally an essential characteristic that determines the cooling performance: coolant maldistribution may result in thermal performance deterioration of the heat sink and in the formation of localized temperature hotspots [START_REF] Kumar | Effects of flow inlet angle on flow maldistribution and thermal performance of water cooled mini-channel heat sink[END_REF][START_REF] Ghani | The effect of manifold zone parameters on hydrothermal performance of micro-channel HeatSink: A review[END_REF]. In chapter 2.4.1 it has been shown that almost all the studies on this issue in the literature target a uniform flow distribution among channels, based on the assumption of a uniform heat flux generated by the electronics. Unfortunately, this assumption is not accurate for many real devices, especially those with high integration levels and array arrangements, as reviewed in detail in chapter 2.2. The only exception is the study of Kumar & Singh [START_REF] Kumar | A novel approach to manage temperature non-uniformity in minichannel heat sink by using intentional flow maldistribution[END_REF], which addressed the uneven heating condition and proposed that the flow arrangement and the actual flow distribution should fit the (non-uniform) heat flux shape to achieve a better heat sink thermal performance. Yet no optimization method has been developed so far to determine the optimal flow distribution profile, and a systematic and quantitative exploration of the optimal flow distribution under non-uniform, multiple-peak heat flux conditions is still lacking. In this chapter, we seek to fill this research gap by tailoring the flow distribution of the cooling fluid in parallel mini-channel heat sinks subjected to a non-uniform multiple-peak heat flux, so as to minimize the peak temperature on the heating surface. For this purpose, a 3D heat sink comprising 16 parallel straight mini-channels is used as the study model. A non-uniform heat flux with multiple Gaussian peaks is imposed on the base heating surface of the heat sink to represent the real hot spots generated by electronic devices. An original optimization algorithm is developed to adjust the channel inlets of the mini-channels according to the temperature distribution on the base surface. Consequently, the fluid flow distribution among the mini-channels is tailored step by step, reducing the peak temperature (and the global thermal resistance) of the heat sink. The effectiveness of the optimization algorithm will be illustrated and discussed through various numerical examples. It should be noted that acting on the channel inlets (also called perforated baffles in some studies) to regulate the flow distribution among parallel channels or tubes is not new: it has been proposed and proven effective by many researchers [START_REF] Liu | Numerical study on performances of mini-channel heat sinks with non-uniform inlets[END_REF][START_REF] Tong | Attainment of Flowrate Uniformity in the Channels That Link a Distribution Manifold to a Collection Manifold[END_REF][START_REF] Wei | Design and optimization of baffled fluid distributor for realizing target flow distribution in a tubular solar receiver[END_REF][START_REF] Wei | Numerical and experimental investigation on the realization of target flow distribution among parallel mini-channels[END_REF]. But most of them use a homogeneous or non-homogeneous insertion baffle as a convenient way to improve the flow uniformity.
In an earlier study of our research group (Ph.D. thesis of Dr. Min WEI, 2015) [START_REF] Wei | Nouvelles méthodes pour l'optimisation de la distribution des fluides et leurs applications dans les systèmes des centrales solaires à concentration (CSP)[END_REF], the baffle geometry was optimized to generate non-uniform flow distributions for absorbing heat in a high-temperature solar receiver. However, the heat flux considered there was a single-peak Gaussian shape and the targeted flow distribution profile was predefined. The current study goes a step further by addressing the multiple-peak heat flux condition, with a necessary extension of the previous optimization algorithm developed in-house [START_REF] Wei | Fluid flow distribution optimization for minimizing the peak temperature of a tubular solar receiver[END_REF][START_REF] Wei | Design and optimization of baffled fluid distributor for realizing target flow distribution in a tubular solar receiver[END_REF]. The peak temperature of the heating surface is minimized by directly adjusting the widths of the channel inlets; the resulting flow distribution profile is thus a consequence of the optimization, adapted to the heat flux.

Methodology

In this section, the heat sink model and the optimization algorithm are first presented. Then the computational fluid dynamics (CFD) parameters for numerical testing and the performance indicators are introduced.

Heat sink model

The wall thickness of the solid envelope that encloses the fluid domain is equal to 2 mm. The base wall of the heat sink is a flat square surface (54 × 54 mm²), receiving the non-uniform multiple-peak heat flux generated by the electronic device. The heat is first transferred by conduction in the solid part, and then by convection to the cooling fluid.

Optimization algorithm

This sub-section presents the basic principles of the optimization algorithm for tailoring the cooling fluid flow distribution in the parallel straight-channel heat sink. The following assumptions and simplifications have been made for this study. Based on mass and energy conservation, the following equations can be written:

$m_{tot} = m_{in} = \sum_{i=1}^{16} m_i = m_{out}$ (3.1)

$Q_{tot} = \iint q \,\mathrm{d}A = m_{tot}\,(Cp_{out}\,T_{out} - Cp_{in}\,T_{in})$ (3.2)

where $m_{tot}$, $m_{in}$, $m_i$ and $m_{out}$ are the total, inlet, i-th mini-channel and outlet mass flow rates of the cooling fluid, respectively; $Q_{tot}$ is the total heating power; $q$ is the heat flux at the heating surface of the heat sink; $Cp_{out}$ and $Cp_{in}$ are the specific heats of the outlet and inlet fluid; and $T_{out}$ and $T_{in}$ are the outlet and inlet fluid temperatures, respectively. Different from many earlier relevant studies, the heat flux q treated here is no longer uniform but shows a multiple-peak form (as shown in Figure 3.3 for example). The optimization algorithm is developed to determine the optimal inlet sizes of the parallel mini-channels so as to minimize the maximum temperature of the heating surface (and thereby the thermal resistance, cf. Eq. (3.24)) by tailoring the flow distribution of the cooling fluid. The method developed is deterministic, i.e. the width distribution of the channel inlets is adjusted in an evolutionary manner, rather than being arbitrarily imposed or randomly generated as in most of the existing studies in the literature. For this purpose, the heating surface of the heat sink is hypothetically divided into 16 monitoring planes, one per mini-channel, as schematically shown in Figure 3.2.
$T_i^{max}$ is defined as the maximum temperature of the i-th plane, the indexing being marked in Figure 3.2. The objective of the optimization may be written as:

$T_i^{max} = \bar{T}^{max}$ (i = 1, 2, …, 16) (3.3)

where $\bar{T}^{max}$ is the mean value of the 16 $T_i^{max}$ (i = 1, 2, …, 16). In practice, the relative standard deviation (also called coefficient of variation) of the maximum temperatures of the 16 monitoring planes, $MF_{T^{max}}$, is monitored:

$MF_{T^{max}} = \sqrt{\dfrac{1}{15}\sum_{i=1}^{16}\left(\dfrac{T_i^{max} - \bar{T}^{max}}{\bar{T}^{max}}\right)^2}$ (3.4)

Given the constant total mass flow rate ($m_{tot}$) of the cooling fluid, the mass flow rate in each mini-channel is managed by adjusting the corresponding channel inlet width according to the difference between $T_i^{max}$ and $\bar{T}^{max}$. In more detail, if $T_i^{max}$ is higher than $\bar{T}^{max}$, a higher mass flow rate is required to enhance the cooling, so the corresponding inlet width of the i-th mini-channel should be enlarged. Vice-versa, if $T_i^{max}$ is lower than $\bar{T}^{max}$, the mass flow rate can be reduced by narrowing the channel inlet. This variation rule is written in Eq. (3.5):

$w_{k+1,i} = w_{k,i} + \gamma\,\big(T_{k,i}^{max} - \bar{T}_k^{max}\big)$ (3.5)

where $w_{k,i}$ is the width of the i-th channel inlet at iteration step k, and $\gamma$ is the adjusting factor deciding the variation amplitude of each iteration. The value of $\gamma$ for each iteration is selected considering the geometric constraints that no channel inlet may be smaller than zero ($w_i \geq 0$, i = 1, 2, …, 16) and that two adjacent channel inlets must not overlap ($w_i + w_{i+1} \leq 3$, i = 1, 2, …, 15):

$\gamma \leq \dfrac{3 - w_{k,i}}{T_{k,i}^{max} - \bar{T}_k^{max}}$ (3.6)

$\gamma \leq \dfrac{w_{k,i}}{\bar{T}_k^{max} - T_{k,i}^{max}}$ (3.7)

With the variation rule of Eq. (3.5), the passage ratio of the channel inlets (defined as the ratio between the total width of the channel inlets and the width of the distributing manifold) remains constant (29.6%) during the iterations, as indicated by Eq. (3.8):

$\sum_{i=1}^{16}\big(w_{k+1,i} - w_{k,i}\big) = \gamma\sum_{i=1}^{16}\big(T_{k,i}^{max} - \bar{T}_k^{max}\big) = 0$ (3.8)

The optimization starts with equal widths for all the channel inlets, representing a conventional heat sink configuration with parallel straight channels. The optimization is considered to be completed when $MF_{T_k^{max}}$ (Eq. 3.4) is smaller than 0.003; below this value, the variation of $T_k^{max}$ is very small. The main steps of the optimization procedure are explained below and the flow chart of the whole procedure is presented in Appendix 3.A:
(1) Input the initial geometrical parameters of the heat sink and the channel inlet width distribution (equal channel inlets at step 0);
(2) Generate the geometry and mesh of the heat sink;
(3) Calculate the temperature and fluid flow characteristics by CFD simulation under the designed working conditions and simulation setup; compute the $T_{k,i}^{max}$ and $MF_{T^{max}}$ of the heat sink at step k;
(4) If $MF_{T^{max}} < 0.003$, export the optimal geometry of the channel inlets and end the procedure; if $MF_{T^{max}} > 0.003$, update the geometry according to Eq. (3.5) and go back to step (2) for a new iteration.

CFD simulation parameters

The flow and temperature fields of the heat sink at each iteration step are calculated using CFD simulation.
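As a complement to steps (1)–(4) above, the following minimal sketch illustrates the inlet-width update applied between two successive CFD evaluations, following Eqs. (3.4)–(3.7). It is written in Python purely for illustration (the actual workflow used MATLAB scripts coupled with Ansys Workbench/Fluent, as described below); the function and variable names, as well as the 10% safety margin applied to γ, are assumptions introduced for this sketch and not part of the original implementation.

```python
import numpy as np

def update_inlet_widths(widths, t_max, w_limit=3.0, mf_stop=0.003):
    """One iteration of the channel-inlet width adjustment (Eqs. 3.4-3.7).

    widths : current inlet widths w_k,i of the 16 mini-channels (mm)
    t_max  : maximum temperature T_k,i^max of each monitoring plane (K),
             extracted from the CFD result of the current step
    Returns the new widths and a flag telling whether Eq. (3.4) is satisfied.
    """
    widths = np.asarray(widths, dtype=float)
    t_max = np.asarray(t_max, dtype=float)
    t_bar = t_max.mean()                      # mean of the 16 plane maxima
    dev = t_max - t_bar                       # T_k,i^max - T_bar_k^max

    # Relative standard deviation of the plane maxima, Eq. (3.4)
    mf_tmax = np.sqrt(np.sum((dev / t_bar) ** 2) / (len(t_max) - 1))
    if mf_tmax < mf_stop:
        return widths, True                   # converged, keep the geometry

    # Upper bound on the adjusting factor gamma from the constraints, Eqs. (3.6)-(3.7)
    gamma_max = np.inf
    for w_i, d_i in zip(widths, dev):
        if d_i > 0:                           # inlet to be enlarged
            gamma_max = min(gamma_max, (w_limit - w_i) / d_i)
        elif d_i < 0:                         # inlet to be narrowed
            gamma_max = min(gamma_max, w_i / (-d_i))
    gamma = 0.9 * gamma_max                   # safety margin (assumption of this sketch)

    # Width update rule, Eq. (3.5); the corrections sum to zero, cf. Eq. (3.8)
    return widths + gamma * dev, False

# Example: equal 1 mm inlets (step 0) and fictitious plane maximum temperatures
w0 = np.full(16, 1.0)
t_demo = 330.0 + 10.0 * np.exp(-((np.arange(16) - 3) ** 2) / 8.0)
w1, converged = update_inlet_widths(w0, t_demo)
print(np.round(w1, 3), converged)
```

Because the deviations from the mean sum to zero, the total inlet width (and hence the passage ratio) is automatically preserved at each step, which is the property expressed by Eq. (3.8).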
The governing equations under steady state are as follows.

Continuity equation:

$\nabla \cdot (\rho \vec{v}) = 0$ (3.9)

Momentum conservation equation:

$\nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho\vec{g} + \vec{F}$ (3.10)

where p is the static pressure, $\bar{\bar{\tau}}$ is the stress tensor, and $\rho\vec{g}$ and $\vec{F}$ are the gravitational and external body forces, respectively.

Energy equation:

$\nabla \cdot \big(\vec{v}(\rho E + p)\big) = \nabla \cdot \Big(\lambda_{eff}\nabla T - \sum_j H_j \vec{J}_j + \bar{\bar{\tau}}_{eff}\cdot\vec{v}\Big)$ (3.11)

where $\lambda_{eff}$ is the effective conductivity, H is the sensible enthalpy, and $\bar{\bar{\tau}}_{eff}$ is the effective shear stress tensor. To predict turbulent flow patterns, an additional turbulence model should be employed. For the solid zone, the energy transport equation is:

$\nabla \cdot (\lambda_s \nabla T) + S_{hs} = 0$ (3.12)

where $S_{hs}$ is the heat source within the solid.

In this study, geometries and meshes were generated using different modules of Ansys Workbench 18.2. Hexahedral elements and the multi-zone method were applied for meshing the fluid and solid domains. Inflation and sizing meshing methods were adopted at the solid-fluid interface and at the corners of the fluid domain to capture the boundary-layer region of the fluid flow. Water was used as the cooling fluid and aluminum was chosen as the solid material of the heat sink body. Their temperature-dependent or constant thermophysical properties are expressed by the equations listed in Table 3.1, including:

Water
- Thermal conductivity (W•m⁻¹•K⁻¹): $\lambda_f = -8.356\times10^{-6}\,T^2 + 6.530\times10^{-3}\,T - 0.598$ (3.15)
- Specific heat (J•kg⁻¹•K⁻¹): $Cp_f = 4182$ (3.16)

Aluminum
- Specific heat (J•kg⁻¹•K⁻¹): $Cp_s = -3.973\times10^{-6}\,T^3 - 5.667\times10^{-3}\,T^2 + 3.069\,T + 380.170$ (3.17)
- Thermal conductivity (W•m⁻¹•K⁻¹): $\lambda_s = 202.4$ (3.18)
- Density (kg•m⁻³): $\rho_s = 2719$ (3.19)

For the fluid zone, a velocity inlet normal to the inlet boundary surface was set, with a temperature of 293 K. The inlet velocity was set constant and equal to 0.5, 0.55, 0.6, 0.65 and 0.7 m•s⁻¹ in the different cases (corresponding inlet Re numbers (Eq. 3.26): 2488, 2737, 2986, 3234 and 3483, respectively; corresponding mean channel Re numbers (Eq. 3.27): 406, 447, 487, 528 and 569, respectively). A pressure outlet boundary was set at the outlet surface with a gauge pressure of zero. All the channel walls were defined as no-slip. For the solid zone, all walls were considered adiabatic except the heating surface (base wall). For the latter, a two-peak and a five-peak heat flux were defined and tested, respectively, as shown in Figure 3.3. Their 2D Gaussian surface heat flux repartitions are given by Eq. (3.20):

$q(x,y) = \sum_{ih=1}^{N} q_{ih}(x,y) = \sum_{ih=1}^{N} B_{ih}\, e^{-\frac{(x-x_{ih})^2 + (y-y_{ih})^2}{2\sigma_{ih}^2}}$ (3.20)

where N is the number of heat peaks (2 or 5); a peak located at $x = x_{ih}$, $y = y_{ih}$ has a maximum local heat flux of $B_{ih}$, and $\sigma_{ih}$ represents the spatial spread of the peak. The total heat $Q_{ih}$ generated by a peak (if the plate had an infinite extent) can be computed by:

$Q_{ih} = \iint_{\mathbb{R}^2} q_{ih}(x,y)\,\mathrm{d}x\,\mathrm{d}y = 2\pi B_{ih}\sigma_{ih}^2$ (3.21)

The different values of $x_{ih}$, $y_{ih}$, $B_{ih}$, $\sigma_{ih}$ and $Q_{ih}$ are summarized in Table 3.2. The total power of the heat source is constant (Q = 1130 W) and the average heat flux (power density) on the base wall ($q_{avg}$ = 38.75 W•cm⁻²) is identical for both heat flux settings, as indicated in Eqs. (3.22) and (3.23). The difference between Q and $\sum_{ih} Q_{ih}$ is due to the truncation of the Gaussians on the plate of limited extent.
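As an illustration of Eqs. (3.20)–(3.21), the short sketch below builds a multiple-peak Gaussian heat flux map on the 54 mm × 54 mm base wall and checks the corresponding power by numerical integration. It is written in Python for illustration only (in the study the profile is applied as a boundary condition in the Fluent model), and the peak positions, amplitudes and spreads used here are placeholder values, not the ones of Table 3.2.

```python
import numpy as np

def heat_flux_map(peaks, size=0.054, n=271):
    """Gaussian multiple-peak heat flux q(x, y) on the base wall, Eq. (3.20).

    peaks : list of (x_ih, y_ih, B_ih, sigma_ih) with positions in m,
            B_ih in W/m^2 and sigma_ih in m (placeholder values below).
    Returns the grid coordinates, the flux map and the power integrated on the plate.
    """
    x = np.linspace(0.0, size, n)
    y = np.linspace(0.0, size, n)
    X, Y = np.meshgrid(x, y, indexing="ij")
    q = np.zeros_like(X)
    for xi, yi, Bi, si in peaks:
        q += Bi * np.exp(-((X - xi) ** 2 + (Y - yi) ** 2) / (2.0 * si ** 2))
    # Power actually received by the finite plate, cf. the truncation remark after Eq. (3.21)
    cell_area = (size / (n - 1)) ** 2
    Q_plate = float(np.sum(q) * cell_area)
    return X, Y, q, Q_plate

# Two fictitious peaks (not the Table 3.2 values): B in W/m^2, sigma in m
peaks_demo = [(0.015, 0.020, 1.30e6, 0.008),
              (0.040, 0.040, 0.80e6, 0.007)]
X, Y, q, Q_plate = heat_flux_map(peaks_demo)
Q_infinite = sum(2.0 * np.pi * B * s ** 2 for _, _, B, s in peaks_demo)  # Eq. (3.21)
print(f"power on plate: {Q_plate:.1f} W / infinite-extent sum: {Q_infinite:.1f} W")
```

Such a map also allows a quick check of the area-averaged flux: a total power of 1130 W spread over the 54 mm × 54 mm base wall (29.16 cm²) indeed corresponds to $q_{avg}$ ≈ 38.75 W•cm⁻², the value quoted above.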
The maximum peak values for the two cases are slightly different, i.e. 130 W•cm⁻² at $(x_1, y_1)$ for the two-peak heat flux and 120 W•cm⁻² at $(x_1, y_1)$ and $(x_5, y_5)$ for the five-peak heat flux. Different from other non-uniform heat sources, e.g. several squares with uniform heat flux in each specific area [START_REF] Kumar | A novel approach to manage temperature non-uniformity in minichannel heat sink by using intentional flow maldistribution[END_REF][START_REF] Manaserh | Multi-objective optimization of 3D printed liquid cooled heat sink with guide vanes for targeting hotspots in high heat flux electronics[END_REF], the Gaussian-shaped heat flux has been chosen in this study considering the gradient of the real heat flux generated by electronic components [START_REF] Mahajan | Cooling a Microprocessor Chip[END_REF] [START_REF] Brunschwiler | Interlayer thermal management of high-performance microprocessor chip stacks[END_REF]. Note that it can be replaced by other heat flux profiles without much influencing the effectiveness of the algorithm. The two-peak case represents an asymmetric heat flux profile, while the five-peak case considers a centrosymmetric heat flux condition.

Table 3.2. The values of constants used for the two-peak and five-peak heat flux cases

In this study, 3D fluid flow simulations with heat transfer were performed under steady state using the commercial code Fluent (version 18.2). The gravity effect was also considered. The k-ε RNG model was used to simulate the turbulent flow, providing better accuracy for rapidly strained and swirling flows at relatively low Reynolds number conditions. For the pressure-velocity coupling, the SIMPLE method was used. For discretization, the second-order spatial scheme was chosen for pressure and second-order upwind differencing for momentum, turbulent kinetic energy, and turbulent dissipation rate. The solution was considered to be converged when (i) the maximum temperature of the heating surface and the pressure drop were constant from one iteration to the next (less than 0.5% variation), and (ii) the normalized residuals were lower than 10⁻⁸ for the energy equation and 10⁻⁵ for the other governing equations. For each iteration step of the optimization algorithm, MATLAB R2016b was used for data post-processing of the flow and temperature profiles computed by Fluent, to calculate the size variation of each channel inlet according to Eq. (3.5) and to pass the renewed geometric coordinates to Ansys Workbench for a new CFD simulation. A grid independence study was conducted with the total number of elements increasing from 0.28 million to 2.27 million. Table 3.3 shows the values of pressure drop and maximum solid temperature obtained with the different grids under an inlet mass flow rate of 0.011731 kg•s⁻¹ (vin = 0.6 m•s⁻¹). A pressure drop variation within 1% and a maximum solid temperature variation within 0.7 K could be achieved with grids of more than 1.14 million elements. Comparisons were also made on the fluid velocity profiles at the centerline of the outlet surface (x-direction); again, there is no obvious difference for grids with more than 1.14 million elements. As a result, the grid with 1.14 million elements (0.5 million elements for the fluid zone and 0.64 million elements for the solid zone) has been chosen for the present study as a trade-off between computational cost and accuracy.
The calculations were carried out on a workstation with an Intel(R) Xeon(R) E5-2620 CPU and 32 GB of memory. Two hours were needed for each optimization step.

Performance indicators

The performance of the heat sink was evaluated by the maximum temperature of the base wall, the global thermal resistance, and the pressure drop. The global thermal resistance ($R_{th}$) of the heat sink is calculated by Eq. (3.24):

$R_{th} = \dfrac{T_{base}^{max} - T_{in}}{Q}$ (K•W⁻¹) (3.24)

where $T_{base}^{max}$ is the maximum temperature at the heating surface (base wall) of the heat sink, $T_{in}$ is the inlet fluid temperature (293 K), and Q is the total heating power (1130 W). The pressure drops in the different sections of the fluid domain are also monitored:

$\Delta P_{tot} = \Delta P_{dis} + \Delta P_{ch} + \Delta P_{col}$ (Pa) (3.25)

where $\Delta P_{dis}$, $\Delta P_{ch}$ and $\Delta P_{col}$ stand for the pressure drops in the distributor section, in the parallel channels section (including the channel inlets with variable widths), and in the collector section, respectively. The inlet Reynolds number and the mean Reynolds number in the mini-channels are calculated by Eqs. (3.26) and (3.27):

$Re_{in} = \dfrac{\rho_f\, v_{in}\, D_{in}}{\mu_f}$ (3.26)

$Re_{ch} = \dfrac{\rho_f\, \bar{v}_{ch}\, D_{ch}}{\mu_f}$ (3.27)

where $v_{in}$ and $\bar{v}_{ch}$ (m•s⁻¹) are the inlet and mean channel velocities, and $D_{in}$ and $D_{ch}$ (m) are the hydraulic diameters of the global inlet port and of a mini-channel, respectively. Non-dimensional parameters $m^*$ and $w^*$ are defined as follows:

$m^* = \dfrac{m_i}{\bar{m}}$ (3.28)

$w^* = \dfrac{w_i}{\bar{w}}$ (3.29)

where $\bar{m}$ and $\bar{w}$ are the mean channel mass flow rate and the mean width of the channel inlets, respectively.

Results and discussions

In this section, the flow distribution and thermal characteristics of the straight mini-channel heat sink with optimized channel inlets are presented and compared with those of the conventional heat sink with equal channel inlets. In addition, a parametric study and a robustness test on the relationship between the tailored flow distribution, the overall thermal resistance, and the total pressure drop are reported. Figure 3.4 shows how the value of $MF_{T^{max}}$ (Eq. 3.4) evolves with the increasing step number. It took 14 iteration steps for the two-peak heat flux case to achieve convergence ($MF_{T^{max}}$ < 0.003), while 10 iteration steps were needed for the five-peak heat flux case. From Figure 3.5 (a), it can be seen that the largest channel inlet is located at the position where the peak temperature appears, for all steps (except step 0). As the iterations proceed, the inlet widths of channels 1-6 gradually enlarge, becoming much broader than those of channels 7-16, owing to the location of the larger hot spot with higher temperatures. With the constraint of a constant passage ratio, the inlet widths of channels 8-16 have all been narrowed, despite a (smaller) heat flux peak located in this region. This variation of the channel inlet widths results in the evolution of the fluid flow distribution characteristics shown in Figure 3.5 (b). For the starting step 0 (equal channel inlet widths), the shape of the mass flow distribution curve is almost symmetric with respect to the centerline: the middle channels (No. 8 and 9) receive the highest mass flow rate, which gradually decreases for the channels located closer to the edges (No. 1 and 16). This is because of the central location of the inlet/outlet tubes (U-type flow arrangement). This flow rate distribution, unmatched to the heat flux peaks, inevitably causes temperature hot spots (as shown in Figure 3.7).
Generally speaking, the evolution of the flow distribution curve shows a tendency similar to that of the inlet width curve, indicating an effective control of the channel mass flow rates by adjusting the inlet widths. Under the constraint of a constant total mass flow rate, a large proportion of the cooling fluid has been guided to channels 1-8, where the larger heat flux peak is located. Regarding the flow distribution shown in Figure 3.6 (b), the cooling fluid is guided towards the edges, and more mass flow rate has to be delivered to the channels located under the highest heat flux peaks. The sum of the mass flow rates allocated to channels No. 12-16 is higher than that allocated to channels No. 1-5 at step 10, mainly due to the difference in cooling capacity of the coolant, as discussed above.

Temperature fields

Figure 3.7 shows the evolution of the temperature cartography on the heating surface along the optimization steps for the two-peak heat flux case. At step 0, the maximum temperature occurs in monitoring planes 3 and 4, where the larger heat flux peak is located. By running the optimization algorithm, the higher amount of heat in this area is absorbed and efficiently evacuated owing to the broadened channel inlets and the increased mass flow rate of cooling fluid. Step by step, the hot spots largely disappear and the maximum temperatures of the 16 monitoring planes are (almost) equalized. Similarly, Figure 3.8 depicts the temperature cartography evolution for the five-peak heat flux case. For the equal channel inlet condition, the diagonal arrangement of the five heat flux peaks does not result in five diagonal temperature hot spots, because the heat generated near the distributing manifold can be more efficiently absorbed by the cooling fluid at lower temperature. In contrast, the temperature hot spot close to the collecting manifold is rather obvious, the maximum temperature being 347.44 K. As the optimization algorithm proceeds, the hot spot occurring in monitoring planes 12-16 begins to decrease and extends diagonally towards the middle and left parts of the base wall. At the final step 10, the maximum temperatures of the 16 monitoring planes are (almost) equalized and the peak temperature of the base wall is reduced to 341.5 K. Compared to the two-peak heat flux case with the same area-weighted average power density, the five-peak case with more heat flux peaks shows a relatively more uniform temperature distribution; as a result, fewer iteration steps are needed to reach the optimization criterion. The maximum temperature of the base wall as a function of the optimization step is plotted in Figure 3.10 for both the two-peak and five-peak cases. The reduction of the maximum temperature is significant, reaching 10 K and 7 K for the two-peak and five-peak cases, respectively. Recall that every 10 K reduction of the maximum junction temperature could double the service time of the electronic devices [START_REF] Siva | Effect of flow maldistribution on the thermal performance of parallel microchannel cooling systems[END_REF]. It may be observed that the maximum temperature of the base wall decreases sharply during the first four optimization steps, mainly because of the large differences between the $T_i^{max}$ values and the average value $\bar{T}^{max}$.
The slope of the maximum temperature curve becomes smaller for the remaining steps (about 2-3 K reduction) and finally stabilizes at 341.7 K (two-peak case) and 341.5 K (five-peak case), respectively. For both cases, the total pressure drop as well as the sectional pressure drops increase with the optimization steps, because the adjustment of the channel inlet widths for tailoring the flow distribution adds supplementary hydraulic resistance. The pressure drop of the parallel mini-channels section ($\Delta P_{ch}$) makes the largest contribution to the total pressure drop and continues to grow faster than the others ($\Delta P_{dis}$; $\Delta P_{col}$) along the optimization steps, because of the changing velocities in the channels and the unequal channel inlet widths. The pressure drops in the distributing and collecting manifolds, accounting for a small portion of the total pressure drop, increase only slightly with the optimization step for both test cases. The pressure drop increase in the parallel channels section is difficult to avoid because of the narrowed channel inlets and the tailored non-uniform flow distribution required for more efficient cooling. Nevertheless, for the pressure drops in the distributing and collecting manifolds, better header designs [START_REF] Gilmore | Manifold configurations for uniform flow via topology optimisation and flow visualisation[END_REF] may be considered to reduce the total pressure drop. Note that the optimization method proposed in this study is compatible with other manifold shapes.

Effective range of optimized channel inlet widths - a robustness study

In the above section, it has been demonstrated that, under the design heat flux and flow rate conditions, the proposed optimization algorithm is effective in reducing the maximum temperature of the base wall. For actual use, however, it is interesting and of practical significance to further test the optimization method when the workload (heat flux, inlet velocity, etc.) deviates from its design point. The conventional parallel straight-channel heat sink with equal channel inlets (step 0) is introduced as a reference for comparison.

Effect of inlet velocity

The optimized heat sink configuration (five-peak heat flux, vin = 0.6 m•s⁻¹, qavg = 38.75 W•cm⁻²) was tested under four other inlet velocities (the heat flux profile remaining unchanged), i.e. vin = 0.5, 0.55, 0.65 and 0.7 m•s⁻¹. The thermal resistance Rth (as defined in Eq. (3.24)) values of the optimized heat sink obtained under the different vin conditions are plotted as the red line in Figure 3.12, while the blue line shows the Rth values of the heat sink with equal channel inlet widths (step 0). In general, Rth decreases with increasing mean Rech for both heat sink configurations because of the higher cooling capacity of the coolant at higher flow rate. The heat sink with channel inlets optimized under vin = 0.6 m•s⁻¹ (mean Rech = 487), when operated under the other inlet velocity (mass flow rate) conditions, always shows a lower Rth (by about 14%) than the conventional heat sink with equal channel inlet widths.

Effect of pressure drop increase

Another option to reduce the thermal resistance and the maximum temperature of the heat sink without optimization is simply to increase the mass flow rate (cooling capacity) of the coolant. But, as with any other heat transfer enhancement technique, a higher mass flow rate results in an increased pressure drop (pumping power consumption).
It is therefore interesting to compare different cooling enhancement measures considering both the thermal resistance and the pressure drop. Figure 3.13 presents the thermal resistance of the heat sink as a function of the pressure drop, comparing the two cooling enhancements. Note that the blue line shows the performance of the conventional heat sink configuration (equal channel inlet widths) with increasing mass flow rate (vin=0.6 m•s -1 , 0.65 m•s -1 , 0.7 m•s -1 , 0.75 m•s -1 ) whereas the red line represents different optimization steps of the heat sink with varied channel inlet widths (vin = 0.6 m•s -1 ). In general, the thermal resistance decreases with the increasing mass flow rate and the pressure drop. An encouraging result is that to reach the same thermal resistance, tailoring the flow distribution of the cooling fluid using the proposed optimization method always costs a smaller pressure drop (up to 6.5%) than simply increasing the total coolant mass flow rate for conventional parallel straight channels heat sink. The consumed pumping power is better "utilized" for the cooling purpose to reduce the maximum temperature of the heat sink. Effect of average heat flux The power dissipation of electronic devices (e.g. CPU) is often dynamic in actual operation due to the varied frequency and the switched load capacitance. Therefore, testing the optimized heat sink under a certain range of heat flux is necessary. The following test aims to investigate the efficiency and robustness of the optimized heat sink under various average heat fluxes but with a similar pattern since the position of power dissipation elements is often fixed. Figure 3.14 shows the influence of the heat flux variation (qavg = 24-45 W•cm -2 ) on the thermal performances of the heat sink. The red line marks the thermal resistance value of the optimized heat sink configuration (under five-peak heat flux, vin = 0.6 m•s -1 , qavg = 38.75 W•cm - 2 ) while the blue line is for the equal channel inlets configuration. The trend of thermal resistance for both heat sink configurations firstly goes down and then gradually climbs. For equal channel inlets configuration, the lowest thermal resistance is achieved at qavg =30 W•cm - 2 , while for optimized channel inlets configuration the lowest thermal resistance value is obtained logically under its nominal design condition at qavg = 38.75 W•cm -2 . But even not being operated under its nominal design point, the optimized channel inlets configuration can maintain the thermal resistance at a low level, about 9.4% lower than that of the equal channel inlets configuration. The thermal performance robustness of the optimized channel inlets configuration under variable average heat flux conditions is thereby highlighted. The green square marker presents the thermal resistance of the heat sink with its channel inlets optimized under the corresponding area-weighted average heat flux. The difference of Rth between the nominal design point (green square) and pseudo design point (red star) reduces as the area-weighted average heat flux increases, and the maximum difference of Rth is 0.0014 K•W -1 at qavg = 30 W•cm -2 , indicating that the channel inlets optimized under one nominal design heat flux could be considered as pseudo optimal with an acceptable tolerance (<3%) when the average heat flux varies within a certain range. 
Conclusion and the following work In this chapter, a parallel straight mini-channels heat sink subjected to a non-uniform multiple-peak heat flux has been studied, to minimize the maximum temperature on the base wall. The flow distribution of the cooling fluid among the parallel channels is tailored by adjusting the channel inlet widths using an iterative optimization algorithm. The working condition applicability of the optimized channel inlet configuration has also been tested and compared to the equal channel inlet widths heat sink configuration. The main findings obtained may be summarized as follows. • The maximum temperature can be reduced by 10 K using the proposed optimization method, under the area-weighted average heat flux of 38.75 W•cm -2 for the two-peak heat flux case. For the five-peak heat flux case, the maximum temperature can be decreased by 4 K to 7 K for the average heat flux ranging from 24-45 W•cm -2 , respectively. • The heat sink configuration with optimized channel inlets could always provide smaller thermal resistance than that of the equal channel inlet configuration (reference straight channel heat sink) under different average heat flux or total mass flow-rate conditions. • At the same pressure drop, tailoring the flow distribution of the cooling fluid is more efficient in reducing the thermal resistance than simply increasing the mass flow rate of the cooling liquid. The consumed pumping power is better utilized for cooling purposes to reduce the maximum temperature of the heat sink. • The effectiveness and robustness of the optimization algorithm have been illustrated by that the channel inlet widths configuration optimized under one certain average heat flux could be considered as pseudo-optimal with an acceptable tolerance when the average heat flux varies within a certain range. It should be noted that the proposed optimization method depends largely on the correctness of CFD simulation. The validation of fluid flow and temperature profiles by testing a laboratory heat sink prototype has been done and would be presented in the later Chapter 5. This size/shape optimization method, though relatively straightforward, has still its limitations. It has been developed for the conventional straight channel-type heat sink so that only the flow distribution property can be tailored or optimized while the global flow path configuration can by no means be modified. In this regard, the other originally proposed topology optimization approaches with more degrees of freedom than the predefined parallel straight channel geometry would be interesting to be investigated, which will be the main subject of the next chapter 4. Both optimized heat sinks' performances would be compared numerically and experimentally in Chapter 5. Introduction In Chapter 3, a size optimization method for channel inlets has been developed for adjusting the flow distribution among the parallel straight channel of a heat sink. Nevertheless, such optimization methods are based on a predefined initial geometry. Topology optimization (TO) treats this problem differently by acting directly on the global spatial distribution of fluid and solid materials and their connectivity within a certain domain [START_REF] Dugast | Topology optimization of thermal fluid flows with an adjoint Lattice Boltzmann Method[END_REF]. It has the highest degrees of freedom, capable of proposing complex but highly efficient designs without being limited to the prescribed geometry. 
Currently, the mainstream of TO works on a heat sink (exchanger) structural optimization combines the density-based method, the FEM (finite element method), and the gradient-based optimizer [START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF], as has been discussed in chapter 2.4. It has shown good efficiency in handling optimization problems with a high number of design variables [START_REF] Alexandersen | Efficient topology optimisation of multiscale and multiphysics problems[END_REF]. Nevertheless, this TO strategy has some difficulties in handling numerical artifacts or descriptions of clear interfaces, such that thresholding is an indispensable post-treatment to eliminate the intermediate value of the design variable (density). More importantly, it can be easily trapped into the local optimum [START_REF] Sigmund | Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima[END_REF], necessitating the development of some gradient-free approaches, like genetic algorithm (GA) and Bayesian optimization, to overcome these deficiencies and to converge towards a global optimum [START_REF] Wei | An improved fast-convergent genetic algorithm[END_REF]. The GA is a stochastic evolutionary algorithm (EA) that mimicries the biological evolution of species based on chromosomes and genes [START_REF] Holland | Genetic Algorithms[END_REF]. Merited by its robustness to the global optimum and good fitness to multi-objective optimization, it has been developed and used by many researchers for the optimization of heat transfer, including both conduction and convection problems [START_REF] Gosselin | Review of utilization of genetic algorithms in heat transfer problems[END_REF][START_REF] Gao | Fluid flow and heat transfer in microchannel heat sinks: Modelling review and recent progress[END_REF]. A limited number of attempts [START_REF] Yoshimura | Topology optimization of fluid problems using genetic algorithm assisted by the Kriging model[END_REF][START_REF] Shimoyama | Multi-objective Bayesian topology optimization of a lattice-structured heat sink in natural convection[END_REF][START_REF] Mekki | Voxel-Based Topology Optimization of Heat Exchanger Fins[END_REF][START_REF] Mekki | Genetic algorithm based topology optimization of heat exchanger fins used in aerospace applications[END_REF][START_REF] Yaji | Data-driven multifidelity topology design using a deep generative model: Application to forced convection heat transfer problems[END_REF] have been made in recent years, as reviewed in chapter 2.4, but none of them addresses the TO of global flow channel configuration in heat sinks. Moreover, the majority of the TO studies deal with the simplified 2D design domain (only with several exceptions [START_REF] Sun | 3D topology optimization of heat sinks for liquid cooling[END_REF]), obviously insufficient to handle problems involving more complex heating boundaries with multiple heat sources. Being motivated by the remaining research gaps to fill, we develop in this work a GAbased TO (GATO) method to optimize the global flow channel configuration of the heat sink for the forced-convection cooling of a heating surface under multiple-peak heat flux. 
Different from other studies existing in the literature, the pseudo-3D design domain of the heat sink is represented by a binary matrix in a direct, explicit way, each element being either solid (0) or fluid (1). Consequently, the TO problem of the flow channel configuration in the design domain becomes the search for the best allocation of the 0 and 1 cells in the matrix. A 3D finite volume method (FVM) solver is used for the modeling of conjugate heat transfer and fluid flow for each design, showing its robustness and stability in handling complex CFD problems. An in-house coded GA is used as the optimizer to renew the design variables (matrices) so as to minimize the peak temperature of the heating surface (Tpeak) under the specific constraint of a constant void fraction for a fully connected fluid phase. In more detail, the local features (genes) that contribute to a lowered Tpeak value are maintained in successive generations while those that bring about worse results are discarded, a procedure analogous to the mechanism of natural selection as the "survival of the fittest". Generation after generation, the population converges to the optimized flow channel configuration that has the lowest Tpeak on the heating surface. This TO method combining direct explicit parametrization, FVM and GA, inspired by and developed from the pioneering works of Boichot et al. [START_REF] Boichot | Cellular automaton methods for heat and mass transfer intensification[END_REF][START_REF] Boichot | A genetic algorithm for topology optimization of area-To-point heat conduction problem[END_REF] on the pure conduction optimization problem, has, to the best of our knowledge, never been used for the convection cooling of a heating surface under multiple heat sources. The remaining parts of this chapter are organized as follows. Section 4.2 introduces the methodology, in which each step of the GATO algorithm is presented in detail. In section 4.3, the optimization results of a heat sink design under multiple-peak heat flux are discussed. A comprehensive parametric study evaluating the influences of different design variables is reported in section 4.4, demonstrating the robustness, effectiveness, and flexibility of the developed GATO method. Section 4.5 provides further discussions on some interesting issues to better highlight the advantages and limitations of this method. Finally, the main conclusions and the following work are summarized in section 4.6.

Methodology

Heat sink model

The heat sink model considered (cf. Figure 4.1) has a single inlet (and outlet) straight channel of win (wout) in width and Lin (Lout) in length, both aligned with the center line of the heat sink. In between the inlet/outlet tubes is a rectangular cuboid having overall dimensions of Lmiddle in length (y direction), wdesign in width (x direction), and e in depth (z direction). This core part of the heat sink consists of 3 sections: the inlet manifold (wdesign × Ldis), the middle flow channel domain (wdesign × Ldesign), and the outlet manifold (wdesign × Lcol). Note that all the flow channels are coplanar with the same channel depth e, rendering the fluid domain pseudo-3D and adapted for design parametrization. Only the upper surface of the flow channel domain (heating surface with a surface area Adesign) receives a non-uniform multiple-peak heat flux, while all the other surfaces enclosing the heat sink are considered adiabatic walls. In that way, the total amount of heat generated is transferred to and absorbed by the cooling fluid. The middle flow channel domain, also named the design domain, is constituted by both fluid and solid (cubic) elements at a given void fraction (Φ), each element having welement in width and Lelement in length. Their spatial distribution determines the topology of the flow path and the cooling performance, which is what the GATO should optimize. One special case is the conventional straight channel configuration, with a channel width of wch and a separating wall thickness of wsw between two neighboring channels, which is shown in Figure 4.1 (a) and considered as the reference design (abbreviated as "RSC" hereafter) for performance comparison. The following assumptions and simplifications have been made for this study. Based on mass and energy conservation:

$m_{in} = m_{out}$ (4.1)

$Q = \iint q\,\mathrm{d}A = m_{in}\,Cp_f\,(T_{f,out} - T_{f,in})$ (4.2)

where $m_{in}$ and $m_{out}$ are the inlet and outlet mass flow rates of the cooling fluid, respectively; Q is the total input power (W); q is the heat flux at the heating surface; $Cp_f$ is the specific heat of the cooling fluid; and $T_{f,in}$ and $T_{f,out}$ are the inlet and outlet fluid temperatures, respectively.

GA procedure

A GA procedure is developed to determine the best spatial distribution of the fluid and solid elements in the design domain so as to minimize the peak temperature of the heating surface (Tpeak, which is also the fitness function of this optimization) under the constraint of a constant void fraction (Φ) and a fully connected fluid phase. The reason for the fully-connected-fluid constraint is to prevent the existence of isolated fluid element(s) enclosed by solid, which could cause boiling of the fluid due to local overheating. To do this, the design domain is gridded into r rows and c columns and expressed by a binary matrix $M_{(r\times c)}$, each element of M representing either solid (0) or fluid (1), as shown in Figure 4.1 (b). The TO problem can then be formulated as Eq. (4.3):

Find $M_{(r\times c)}$, minimize $T_{peak}$, subject to Φ = constant and Connectivity = 1 (4.3)

The general principle of the GA is to assess the configurations of a starting random population, keep the best ones with respect to the objective function (fitness), and then cross and mutate them to obtain a new child population of the same size, and so on. One generation after another, the best design is expected to be determined. The main procedure of the GA used in this study is shown in the flow chart of Figure 4.2 and described in detail below (a minimal illustrative code sketch of the matrix generation, connectivity check and dead-end test is given further below, after the benchmark convergence discussion of section 4.3).
• Initial generation: a number of $M_{(r\times c)}$ matrices (100 in this study) are generated by random allocation of 0 and 1 elements at fixed Φ. Repeatability and connectivity are checked to guarantee that only non-repeated matrices M with a fully connected fluid domain are included in each generation. Each M is treated as an individual of this generation, to be evaluated in the next steps.
• Geometry transformation: all the matrices of the generation are transformed into real geometry models (such as the one shown in Figure 4.3) by writing all the dimensions and the 0 & 1 distribution information into a coded script, used as an executable file for the CFD tool.
• Performance evaluation of individuals: the fluid flow and heat transfer characteristics of each configuration are calculated by the CFD method. In particular, the objective function values (Tpeak in this study) of the individuals are obtained and extracted.
• Ranking, selection, and elite keeping: all the tested configurations are ranked according to their fitness (objective function).
A certain number of well-evaluated configurations (50 in this study, ranked from 2 to 51) are selected as future parents to create the individuals of the next generation, while the others are eliminated from reproduction. Note that the top-ranked individual(s) are considered elite(s) (1 in this study) and are kept in the next generation, preventing the loss of the most fitted "genes" [174].
• Crossover and mutation: the crossover operation is first performed on two neighboring individuals between rank 2 and rank 51 of the ranking list (the top one being the elite) to produce two children. One-point crossover is used in the current study, i.e., based on a randomly selected element of the matrix, either a horizontal or a vertical crossover is performed with equal probability. Then, each produced child has a 20% probability to mutate, by either horizontal or vertical string swapping with equal probability. In that way, better genes (regarding fitness) are inherited while good diversity is ensured. 100 children are generated and, at the same time, the constant void fraction and connectivity constraints are obeyed. More detailed information about the crossover and mutation operations is given in Appendix 4.A of this chapter.
• Dead-end elimination: when a fluid element is surrounded on three sides by solid cells, the flow velocity there is usually near zero, so the cooling effect at such ends may be low. Therefore, an additional step eliminates these fluid elements, considered as dead ends, by randomly exchanging their locations with some solid cells, always keeping the constant Φ and the full connectivity. More explanation about this step may be found in Appendix 4.B. All the matrices of this new generation are then transferred to the geometry transformation step for recurrence, following the above-explained procedure.
• Termination criterion: the GATO is considered to be completed when the variation of the median value of the objective function (Tpeak) from one generation to the next is smaller than 1×10⁻⁴ for at least 10 generations (Eq. 4.4). The flow channel configuration and the thermo-hydraulic characteristics of the optimum topology are then exported.

$\sum_{k=n-9}^{n}\left|\dfrac{T_k^{median} - T_{k-1}^{median}}{T_{k-1}^{median}}\right| < 1\times10^{-3}$ (4.4)

Calculation of flow and temperature fields by CFD method

The fluid flow and heat transfer characteristics of each flow channel configuration are calculated by CFD simulation. The governing Navier-Stokes and heat transfer equations under steady state are the same as those presented in Chapter 3 (Eqs. 3.9-3.12) and are not repeated here. The detailed simulation parameters used in this study are presented in section 4.3.1.

Performance indicators and non-dimensional parameters

Various parameters are used as performance indicators, as introduced below. The overall thermal resistance of the heat sink Rth is defined in Chapter 3, Eq. (3.24). The overall pressure drop of the heat sink is calculated as the pressure difference between the inlet and outlet ports:

$\Delta P = P_{in} - P_{out}$ (4.5)

The Nusselt number (Nu) of the heat sink and the Reynolds number (Re) at the inlet channel are calculated by Eqs. (4.6) and (4.7):

$Nu = \dfrac{h_{avg}\, D_{h,design}}{\lambda_f}$ (4.6)

$Re_{in} = \dfrac{\rho_f\, v_{in}\, D_{h,in}}{\mu_f}$ (4.7)

where $\lambda_f$, $\rho_f$ and $\mu_f$ are the thermal conductivity, density, and dynamic viscosity of the fluid, respectively.
Calculation of flow and temperature fields by CFD method

The fluid flow and heat transfer characteristics of each flow channel configuration are calculated by CFD simulation. The governing Navier-Stokes and heat transfer equations under steady state are the same as those presented in Chapter 3 (Eqs. 3.9-3.12) and are therefore not repeated here. The detailed simulation parameters used in this study are presented in section 4.3.1.

Performance indicators and non-dimensional parameters

Various parameters are used as performance indicators, introduced below. The overall thermal resistance of the heat sink Rth is defined in Chapter 3, Equation (3.24). The overall pressure drop of the heat sink is calculated as the pressure difference between the inlet and outlet ports:

ΔP = Pin − Pout (4.5)

The Nusselt number (Nu) of the heat sink and the Reynolds number (Re) at the inlet channel are calculated by Eqs. (4.6) and (4.7):

Nu = havg · Dh,design / λf (4.6)
Rein = ρf · vin · Dh,in / μf (4.7)

Where λf, ρf and μf are the thermal conductivity, density, and dynamic viscosity of the fluid, respectively; vin is the inlet flow velocity and Dh,in is the hydraulic diameter of the inlet channel. The hydraulic diameter Dh,design of the flow channels in the design domain is calculated as follows [START_REF] Li | Development of a high performance heat Sink Based on screen-fin technology[END_REF]:

Dh,design = 4 · Volumech / Sw (4.8)

Where Volumech and Sw are the total wetted volume and the total wetted surface area of the flow channels in the design domain, respectively. The average heat transfer coefficient havg is calculated by Eq. (4.9):

havg = Q / [Aeff · (Tw,avg − Tf,avg)] (4.9)

Where Aeff is the effective solid-fluid interface area, Tw,avg is the average channel wall temperature, and Tf,avg is the average fluid temperature calculated by:

Tf,avg = (Tf,in + Tf,out) / 2 (4.10)

The standard deviation of temperature (STDT) at the heating surface is calculated as:

STDT = sqrt[ ΣPix=1..n (TPix − Tavg)² / (n − 1) ] (4.11)

Where n is the total number of points with a temperature value at the heating surface, TPix is the temperature at point Pix, and Tavg is the average temperature of the heating surface. Velocity, temperature, and pressure are normalized as v* = v / vin, T* = (T − Tf,in) / (Tf,out − Tf,in) and P* = ΔP / ΔP0, where ΔP0 is the pressure drop obtained when the design void fraction (Φ) is equal to 1.

Benchmark case and optimization results

In this section, the optimization results of a benchmark heat sink case are presented to show the feasibility and effectiveness of the proposed GATO method.

Benchmark case and numerical parameters

The middle design domain has a square shape of 50 mm × 50 mm, represented by a binary matrix M50×50. The fluid void fraction (Φ) of this benchmark case is set to 0.50, i.e., equal numbers of solid and fluid elements, their distribution being subjected to optimization. Recall that, for the convenience of simulation and optimization, the entire fluidic circuit has an identical channel depth of e=1 mm, making every fluid or solid element a cubic form to morph and simplifying the geometry model and the CFD calculation. The channel depth could also be a parameter for a full 3D TO of the fluid domain, but this is not considered in this study. Water and aluminum are used as the fluid phase and the solid phase, respectively. Their physical properties are considered constant and the values are listed in Table 4.2.

Table 4.2. Physical properties of the fluid and solid used for simulations
Material | Density (kg·m⁻³) | Thermal conductivity (W·m⁻¹·K⁻¹) | Specific heat capacity (J·kg⁻¹·K⁻¹) | Viscosity (kg·m⁻¹·s⁻¹)
Water | 998.2 | 0.6 | 4182 | 1.003×10⁻³
Aluminum | 2719 | 202.4 | 871 | –

3D CFD simulations were performed to calculate the fluid flow and heat transfer characteristics of each design. A fluid velocity inlet (vin=0.1 m·s⁻¹) normal to the inlet boundary surface is set, with a temperature of 293 K. The corresponding inlet Reynolds number Rein is calculated to be 166, ensuring that the whole flow region remains laminar. A pressure outlet boundary is set at the outlet surface with zero gauge pressure. All walls are considered no-slip and adiabatic, except for the upper surface of the design domain (heating surface of the heat sink). For the latter, a Gaussian-shape two-peak heat flux (HF) distribution is defined, given by the following Eq. (4.15).
q(x, y) = Σi=1..2 Bi · exp{ −[(x − xi)² + (y − yi)²] / (2σi²) } (4.15)

The heat flux peak located at position (xi, yi) has a maximum value of Bi, and σi indicates its spatial spread. The total power of the heat sources is equal to 90 W (kept moderate, given the low inlet Re number, to avoid phase change of the fluid), corresponding to an average heat flux (power density) at the heating surface of qavg = 3.6 W·cm⁻². The detailed values of the parameters of Eq. 4.15 and the shapes of the heat fluxes are given in Table 4.3.

In this study, an open-source FVM code, OpenFoam (version 7), was used to solve the governing equations presented in section 3.2.3. Note that the gravity and viscous heating effects were not considered, for simplification. The multi-physics conjugate heat transfer solver "chtMultiRegionFoam" of OpenFoam has been used for both the solid and fluid domains [START_REF]OpenFoam user guide[END_REF]. No re-meshing is needed when updating the individuals from one generation to the next in the GATO procedure, thereby saving computational time and data storage. The laminar model was used for the fluid flow due to the small Re numbers in the fluid domain. The widely used SIMPLE algorithm was employed for the velocity-pressure coupling. A Geometric-Algebraic Multi-Grid (GAMG) solver with a diagonal incomplete-Cholesky with Gauss-Seidel (DICGaussSeidel) smoother was used to solve the pressure equation for faster iterations. A preconditioned bi-conjugate gradient stabilized solver (PBiCGStab) with diagonal incomplete-LU (DILU) preconditioning was used to solve the energy and momentum equations. The solution was considered converged when (i) the maximum temperature of the heated surface and the inlet-outlet pressure drop were constant from one iteration to the next (less than 0.5% variation), and (ii) the normalized residuals were lower than 10⁻⁵ for the energy equation and 10⁻⁴ for the other governing equations.

Structured cube-shaped meshes with equal edge lengths were generated. The grid used in the study had 129 k elements in total, with 71 k elements for the fluid zone and 56 k elements for the solid zone. A grid independence study (performed on a randomly chosen topological individual of the initial generation of the benchmark study) was conducted to guarantee that the mesh density used was appropriate and sufficient regarding both accuracy and calculation time. More details of the mesh independence study can be found in Appendix 4.C of this chapter. The CFD simulations using OpenFoam were performed with the help of a high-performance computing (HPC) cluster, CCIPL (Le Centre de Calcul Intensif des Pays de la Loire) [START_REF]Le Centre de Calcul Intensif des Pays de la Loire (CCIPL)[END_REF]. Matlab (version R2020a) was used for the matrix generation and updating, the data processing, and the GA procedure.

Correspondingly, the Tpeak,min value also decreased continuously, from 358.6 K (generation 1) to 348.6 K (generation 170), indicating the effectiveness of the proposed GATO method in attaining the defined optimization objective. Note that some stairs appear in the Tpeak,min curve, especially during the first half of the convergence, which is mainly due to the elite keeping in the GA: the crossover and mutation process cannot always give birth to an individual that outperforms the top-ranked one of the previous generation.
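As an aside, the stopping rule of Eq. (4.4) applied to such a convergence history can be illustrated by the following minimal sketch (the thesis implementation is in Matlab; the Python function below, its name, and the assumed list t_median of recorded median Tpeak values are only illustrative):

def has_converged(t_median, window=10, tol=1e-3):
    # t_median[k] is the median T_peak of generation k; the GA stops when the
    # cumulated relative generation-to-generation variation over the last
    # `window` generations is smaller than `tol`, as in Eq. (4.4).
    if len(t_median) < window + 1:
        return False
    total = sum((t_median[k] - t_median[k - 1]) / t_median[k - 1]
                for k in range(len(t_median) - window, len(t_median)))
    return abs(total) < tol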
Also note that, at convergence, the Tpeak,min and Tpeak,median values are very close (but not identical), indicating that the fitness difference between the top 50 individuals is very small. At the end of generation 170, a higher proportion of the cooling fluid mass is guided from the inlet to the right part of the design domain. This is because the larger heat flux peak is located in this area, closer to the outlet manifold; a higher cooling capacity (higher mass flow and lower temperature of coolant) is thereby needed to remove the temperature hot spots. The evolution of the temperature field on the heating surface along the GA steps (Figure 4.5 (c)) clearly shows the better cooling performance of the heat sink obtained by optimizing the flow path topology. The temperature hot spots gradually disappear and the isotherms become more parallel to the line connecting the two hot spots, i.e., perpendicular to the global flow direction (from the inlet to the outlet), along with a better temperature uniformity.

Optimization results of the benchmark case

The results of this benchmark case indicate that, under the constraint of constant void fraction (Φ), the fluid elements have been arranged more efficiently to form a flow configuration adapted to the power and location of the heat sources. In this way, Tpeak (the objective) can be decreased generation by generation by the GATO method, achieving the best cooling of the heat-generating surface.

Effects of design variables of GATO on the optimized channel configuration: a parametric study

In section 4.3, the GATO has been successfully applied to obtain the optimal flow configuration (heat sink geometry) for the benchmark case. Here, the optimized channel configuration is investigated as a function of the key design variables, including the heat flux shape (q), the void fraction (Φ), the inlet flow velocity (vin), and the design domain resolution (Mr×c). Note that all other parameters are kept the same as in the benchmark, except for the variable being evaluated. The conventional RSC heat sink shown in Figure 4.1 (a) has also been introduced for testing. To make the comparison fair, the geometrical parameters of the RSC heat sinks may differ depending on the parameter investigated, as presented in Table 4.4. Their cooling performances under the same design variables and operating conditions are compared and reported below.

Influences of peak heat flux difference

The three heat fluxes shown in Table 4.3 have been used as input for the heating surface. They have the same total power (Q=90 W; qavg = 3.6 W·cm⁻²) but differ from one another by the increasing value of the higher heat flux peak. The GATO has been executed for each of them and the obtained optimization results are shown in Figure 4.6. Figures 4.6 (a) and 4.6 (b) show the optimal channel configurations of the design domain and the normalized velocity fields, respectively. A similar general pattern may be observed, i.e., more fractionated small solid islands at the locations of the two heat flux peaks and bigger solid blocks in the other regions. The larger the difference between the two peak values of the heat flux (e.g., heat flux 3 (HF 3)), the more fluid elements are allocated to the right part of the design domain to deliver more cooling fluid mass to the higher-temperature hot spot (cf. Figure 4.6 (b)). The temperature contours at the heating surface of the GATO and RSC heat sinks are shown in Figure 4.6 (c) and Figure 4.6 (d), respectively.
Temperature hot spots can be observed for the RSC heat sink, while they are by and large eliminated in the GATO heat sink, with a clearly improved temperature uniformity. The Tpeak value of the GATO heat sink is 345.5 K (HF 1), 348.6 K (HF 2), and 351.3 K (HF 3), respectively, smaller than that of the RSC heat sink (351.3 K, 356.5 K, and 361.0 K, respectively). The greater the difference between the two peaks (e.g., HF 3), the larger the reduction of Tpeak achieved by GATO compared to RSC. As already shown in chapter 3, the RSC as a basic (conventional) configuration is logically less performant in handling highly heterogeneous heating surfaces, for which the benefits of the GATO method are more significant.

Table 4.5 lists the performance indicators of the GATO and RSC heat sinks subjected to the different heat fluxes. Generally, the Nu number of the GATO heat sink is about 40%-50% higher than that of the RSC heat sink, owing to the complex flow path configuration obtained, which breaks the thermal boundary layer of the fluid and therefore enhances the convective heat transfer. Nevertheless, the pressure drop is inevitably increased by the geometry complexity, and the P* value of GATO is significantly higher than that of RSC. However, the global pressure drop of the GATO heat sink (< 220 Pa) remains small under this low flow rate condition. The Rth values follow the same tendency as Tpeak discussed above. The values of STDT increase with the rising difference between the two heat flux peaks, but the temperature distribution is more uniform at the heating surface of the GATO heat sink than at that of the RSC heat sink.

Table 4.5 Performance comparison between GATO and RSC heat sinks under different heat fluxes.
Heat flux | Nu (GATO) | Nu (RSC) | Rth (K·W⁻¹) (GATO) | Rth (K·W⁻¹) (RSC) | P* (GATO) | P* (RSC) | Tpeak (K) (GATO) | Tpeak (K) (RSC) | STDT (K) (GATO) | STDT (K) (RSC)

Void fraction (Φ) of the design domain

The influence of the void fraction on the cooling performance of the heat sink optimized by GATO has been evaluated by varying the Φ value from 0.40 to 0.80. The obtained results (optimal flow path configuration, velocity field, and temperature contour) are depicted in Figure 4.7. At a low void fraction (Φ=0.4), the limited number of fluid elements is organized by GATO into a mesh-type flow circuit with relatively clear splitting or merging junctions. Fluid with a higher velocity magnitude is guided by the main flow paths to cool down the temperature hot spots. In contrast, at a high void fraction (Φ=0.8), the abundant fluid elements are arranged like a porous medium in which the velocity magnitude is much lower. The numerous small solid islands act as pin-fin structures surrounded by the slowly flowing cooling fluid; the contact surface area is thereby higher. Nevertheless, it can be seen from Figure 4.7 (b) that some fluid elements have near-zero velocity, implying that the void fraction is not entirely used in an efficient way. The flow path could be sharpened by eliminating these fluid elements, considered as dead volume, as will be presented later in section 4.5.1. By examining Figure 4.7 (c), it is worth noting that the lowest Tpeak value (346.4 K) is reached at Φ=0.65 after GATO optimization. Higher or lower values of Φ lead to a higher Tpeak of the GATO heat sink, and also to higher Rth values, as reported in Table 4.6. Recall that Φ is treated as a constraint in the algorithm, i.e., Φ is constant for all the individuals evaluated in the GATO.
Such a constraint may need to be revisited to achieve a more general optimum. The performance comparison between the GATO and RSC heat sinks at different Φ values is presented in Figure 4.7 (c & d), and in Table 4.6 as well. The Nu numbers of the GATO heat sinks are very close to one another, but all higher than those of the parallel straight channels with the same Φ, indicating a better cooling performance. Regarding the RSC heat sink, the Nu number significantly decreases with increasing Φ, mainly due to the increase of the effective heat transfer surface area between fluid and solid (higher number of channels, as shown in Table 4.4) under the same input power, which reduces the heat transfer coefficient. Moreover, the STDT values of both GATO and RSC are larger at a high void fraction; this is because of the low-temperature region at the entrance of the design domain, closer to the fluid inlet temperature. But the GATO heat sink shows a more uniform temperature distribution at the heating surface than the RSC heat sink for the same Φ value, owing to the dispersion of the hot spots by the optimized flow path configuration. The pressure drop of the GATO heat sink significantly increases with decreasing Φ, i.e., P* reaches 3.89 (429.4 Pa) at Φ=0.4. In addition to the higher velocity magnitude in the main flow paths, the numerous splitting/merging junctions also create additional singular losses [START_REF] Tondeur | Constructal optimization of arborescent structures with flow singularities[END_REF]. Regarding the RSC, the pressure drop increase is relatively small, i.e., P* ranging from 1.08 to 1.20.

Table 4.6 Performance comparison between GATO and RSC heat sinks for different void fraction values.
Φ | Nu (GATO) | Nu (RSC) | Rth (K·W⁻¹) (GATO) | Rth (K·W⁻¹) (RSC) | P* (GATO) | P* (RSC) | Tpeak (K) (GATO) | Tpeak (K) (RSC) | STDT (K) (GATO) | STDT (K) (RSC)

Inlet velocity

The benchmark heat sink has been optimized by GATO under different inlet velocities (vin=0.1, 0.2, and 0.4 m·s⁻¹) to investigate the influence of an increasing fluid flow rate (or Rein) on the optimized flow path configuration and its cooling performance. The optimization results as well as the performance comparison with the RSC heat sink are reported in Figure 4.8 and Table 4.7. It can be observed from Figure 4.8 (a & b) that, for all three tested inlet velocities, the main mass flow is delivered to the locations of the heat flux peaks, and more fluid flow is guided towards the higher peak than towards the smaller one. As vin increases, the main flow structure in the entrance manifold tends to be fragmented into more small streams to compensate for the stronger inertial effect. A lower Tpeak can be reached at high vin owing to the higher cooling capacity, i.e., 348.6 K (vin=0.1 m·s⁻¹), 329.7 K (vin=0.2 m·s⁻¹) and 321.5 K (vin=0.4 m·s⁻¹), respectively. Nevertheless, the normalized value (T*peak) increases with increasing vin due to the smaller (Tf,out − Tf,in). Table 4.7 lists the global thermal and hydraulic performances of GATO and RSC under the different vin values for comparison. Increasing vin (Rein) results in reduced Rth, Tpeak, and STDT but increased Nu and P*. The higher cooling capacity at a high fluid mass flow rate enhances the convective heat transfer and the temperature uniformity at the heating surface, while at the same time raising the pressure drop.
The augmentation of the Nu number obtained by applying the GATO method (compared to the RSC heat sink) is more significant at a high flow rate, i.e., 46% at vin=0.1 m·s⁻¹ versus 80% at vin=0.4 m·s⁻¹. However, this enhancement is achieved at the cost of a higher pressure drop (pumping power consumption), since in the current optimization algorithm no hydraulic criterion is considered, neither in the objective function nor as a constraint. This issue will be further addressed in Chapter 6 of this thesis. Furthermore, the GATO heat sinks optimized under the three vin values show a better temperature uniformity than the corresponding RSC heat sinks. With increasing inlet velocity, the difference in temperature uniformity tends to become smaller, which indicates that the advantage of the GATO heat sink over the RSC heat sink in terms of temperature uniformity is more pronounced at low inlet mass flow rates.

Table 4.7 Performance comparison between GATO and RSC heat sinks under different inlet velocities.
vin | Nu (GATO) | Nu (RSC) | Rth (K·W⁻¹) (GATO) | Rth (K·W⁻¹) (RSC) | P* (GATO) | P* (RSC) | Tpeak (K) (GATO) | Tpeak (K) (RSC) | STDT (K) (GATO) | STDT (K) (RSC)

Design domain resolution

The matrix resolution of the design domain (Mr×c) directly determines the number of fluid and solid elements that can be allocated during the GA, thereby playing an important role in the optimization. To explore the influence of this structural fineness, the GATO has been executed under different matrix resolutions (M25×25, M50×50 and M100×100), with the void fraction (Φ=0.50), inlet velocity (vin=0.1 m·s⁻¹) and heat flux (HF2) kept the same as in the benchmark case. The optimization results are shown in Figure 4.9. Although a similar pattern of mass flow delivery is proposed by GATO at the global level, as discussed above, the flow path details are rather different at the local level, i.e., more complex local structures can be formed at the higher matrix resolution, as shown in Figure 4.9 (a & b). The higher number of elements to morph brings highly diversified individuals during the optimization, capable of constructing thin and dense channels with split-and-recombine flow paths. Moreover, the solid-fluid interface area can be largely increased, leading to a lowered Tpeak of the optimized flow channel geometry, i.e., 356.1 K at M25×25 and 341.3 K at M100×100. Table 4.8 presents the influence of Mr×c on the global thermal and hydraulic performances of the GATO heat sink. RSC heat sinks with a channel width equal to the element width (wch=welement, cf. Figure 4.1) are also introduced for comparison. Again, both the Nu number and the P* of the GATO heat sink are higher than those of the RSC heat sink, for the reasons explained above. The Nu number of both types of heat sinks gradually declines with increasing design resolution. This is because of the smaller hydraulic diameter (Dh) of the flow circuit on the one hand, and the lowered average heat transfer coefficient (havg) on the other hand. In particular, the relatively small Nu number of the RSC at M100×100 (wch=0.5 mm) could be explained by heat conduction taking the dominant effect over heat convection due to the small channel width. The temperature distribution at the heating surface is the most uniform at wch=1 mm and M50×50 for both the RSC and GATO heat sinks. Further increasing the mesh resolution (smaller channel size) reduces the temperature uniformity due to the existence of a low-temperature region at the entrance of the design domain, as discussed above. A higher pressure drop results at high Mr×c, mainly because of the increased flow path complexity (and thereby more singular losses).
The results in Table 4.8 also show that a higher matrix resolution in GATO has the advantage of decreasing Rth at a given inlet velocity, but at the cost of a higher pressure drop. Moreover, a larger number of GA generations is needed to reach convergence, due to the increased number of design variables and the higher individual diversity, requiring more calculation time. Once optimized, the complexity of the obtained flow circuit, with its fine structures, also places higher demands on the manufacturing precision needed for its realization. Therefore, in practice, the appropriate design resolution should be decided by considering both the available computing resources and the fabrication capacity.

Table 4.8 Performance comparison between GATO and RSC heat sinks under different design domain resolutions.
Mr×c | Nu (GATO) | Nu (RSC) | Rth (K·W⁻¹) (GATO) | Rth (K·W⁻¹) (RSC) | P* (GATO) | P* (RSC) | Tpeak (K) (GATO) | Tpeak (K) (RSC) | STDT (K) (GATO) | STDT (K) (RSC)

Further discussions

Further discussions are made on some of the issues raised above, in order to revisit and better understand the effectiveness as well as the limitations of the proposed GATO method.

Post-treatment for dead volume elimination

Despite the dead-end elimination step included in the optimization algorithm (cf. the GA procedure described above), some fluid elements with near-zero velocity may remain in the optimized configuration and act as dead volumes; these can be removed in a post-treatment step by applying a threshold on the local velocity magnitude. It can be observed that the fluid paths then become clearer, which is beneficial for the actual fabrication of the optimized heat sink in practice. The effective void fraction (Φ) declines after post-treatment, but this has a negligible impact on the thermal and hydraulic performances of the heat sink (Table 4.9). Note that a higher threshold value may result in smoother flow structures, but this aspect has not been further explored in this study.

Repeatability

The optimization algorithm has been executed another two times with the same settings as the benchmark case (chapter 4.3.1), to test the reproducibility of the flow configuration at convergence. The convergence curves are shown in Figure 4.11. It can be observed that the Tpeak,median values of the three runs at convergence are very close (348.6 K, 348.9 K, and 349.3 K), with only a 1.26% difference. The final Tpeak,min values are also quite close, with a difference smaller than 0.7% (348.6 K, 348.8 K, and 349.0 K), indicating the good robustness of the GATO in achieving the defined optimization objective. This difference could still be reduced by setting a more stringent convergence criterion (Eq. 4.4), but this would be rather time-consuming given the nature of GA. Nevertheless, a different flow configuration has been obtained in each GATO run (Figure 4.12), reflecting the random character of the crossover and mutation steps of the GA. Since they all provide the same cooling performance, these several solutions may all be considered as very close to the global optimum, i.e., the objective function (Tpeak) is rather flat near the global optimal point with respect to variations of the fluid/solid element distribution. Additionally, the log10 value of the generation-to-generation difference, normalized by the difference between the first two generations, has been plotted for the first run (benchmark case) in Figure 4.13. Evidently, at the end of the run the convergence curve still tends to decrease, even though the difference between the last two generations has already fallen to 1% of the difference between the first two generations. This indicates that, strictly speaking, convergence in the mathematical sense has not been reached.
However, the variation of the objective function (Tpeak) was then only 0.002 K, which can be considered as largely sufficient (on the basis of the convergence criterion in Eq. 4.4) for real engineering practice.

Simplifications in CFD model

Some simplifications have been made in the CFD calculation to save computational time, including neglecting the gravity and viscous heating effects (acceptable for the small channel depth and small total pressure drop), neglecting the heat loss to the ambient, and using temperature-independent physical properties for both the fluid and solid phases. The fluid viscosity can actually vary a lot within the operating temperature range, i.e., from μ = 1×10⁻³ Pa·s at 293.15 K to 3.54×10⁻⁴ Pa·s at 353.15 K. Moreover, heat loss may also need to be considered when an adiabatic boundary cannot be ensured. These factors will be further considered in GATO in our following work in chapter 5 regarding the experimental validation. Careful readers may also notice that the operating conditions for the benchmark heat sink (this chapter) are slightly different from those tested in chapter 3: smaller flow rate, smaller power input, and a lower number of heat flux peaks. This is also meant to simplify the CFD calculation so that the laminar model can be used for the fluid flow. Theoretically, the GATO method developed here can also be used for turbulent flow patterns under a higher power input with multiple heat sources, but the CFD computation step would then be more complicated and time-consuming. Moreover, the current GATO method may also be extended to a real 3D formulation by dividing the design domain into a 3D matrix, the channel thickness being another optimization parameter; this is, however, beyond the scope of this Ph.D. thesis.

Effectiveness vs. limitations

Based on the results and discussions presented above, the advantages of the present GATO method (especially compared with the conventional FEM + density-based method) can be summarized as follows: (1) a clear fluid-solid boundary owing to the explicit parametrization of the design variables, avoiding non-physical gray scales; (2) direct CFD analysis of the designs, without re-meshing, by the FVM solver, ensuring conservativeness/accuracy, and with excellent parallelism; and (3) robustness of the GA optimizer in approaching global optima subjected to complex thermal boundaries (non-uniform heating with multiple-peak heat flux). The parametric study in chapter 4.4 indicates the effectiveness and robustness of the proposed GATO method, which proposes optimized flow path configurations with a better cooling performance than the reference straight channel heat sink. Nevertheless, some limitations of the optimized designs also appear, such as the higher pressure drop, the existence of dead volumes, and the existence of numerous possible optimal configurations close to the global minimum point. These problems may be treated by the post-treatment (section 4.5.1), or by revisiting the objective function (e.g., multi-criteria optimization) and the constraints (void-fraction-free optimization). Further investigations have been done and will be presented in Chapter 6 of this thesis. Another obvious limitation of this method is the high computation cost, i.e., with the current simplifications of the CFD model, two or three weeks are still needed to obtain a GATO-optimized configuration. This is mainly due to the sequencing & queuing of the HPC (free user account of CCIPL) as well as to the cluster-local data exchange: the GA algorithm is written in local Matlab code while the CFD simulations are performed on the HPC.
The "effective" calculation time is about 72 hours for one GATO run, which is not prohibitive at all. A significant reduction of computational time is thereby feasible by executing the GA algorithm directly in the HPC, or by using some local workstations instead of the HPC. Parameters of GA play an important role on the efficiency and rapidness of this algorithm. In this study, the CFD computation of the heat sink has not been simplified into 2D for the purpose of performing the GA parameter study. That is mainly due to the concerns that the identified parameters for GA by a 2D model may not be applicable in the real 3D heat sink study. Instead, some GA parameters were selected based on general knowledge [START_REF] Ge | Optimal shape design of a minichannel heat sink applying multi-objective optimization algorithm and three-dimensional numerical method[END_REF], including the crossover type, the mutation rate, the elite number, the number of individuals in each generation, etc. A detailed parametric study [e.g., [START_REF] Mekki | Genetic algorithm based topology optimization of heat exchanger fins used in aerospace applications[END_REF] could be useful to evaluate the separate effect of each parameter on the GA diversity and the convergence speed, to determine the appropriate parameter settings within an acceptable computational cost. This could be a direction for our following work. Conclusion and perspectives In this chapter, a GATO method has been developed and tested to obtain the optimal global flow channel configuration of the heat sink for cooling a non-uniform heating surface with multiple heat sources. Minimizing the peak temperature at the heating surface (Tpeak) is defined as the optimization objective under the constraint of constant void fraction for a fullyconnected fluid domain. Effects of design variables, like heat flux peak intensity, the void fraction, the inlet velocity, and the matrix resolution on the effectiveness of the GATO method have been investigated. The thermal and hydraulic performances of the optimized GATO heat sink have been compared with those of conventional straight channel (RSC) heat sink under the same conditions. The main conclusions could be drawn as followed: • The proposed GATO method could successfully determine the optimal spatial distribution of the fluid/solid elements in the design domain. The resulted meshed channel circuits intentionally guide the cooling fluid to the overheating positions, leading to the minimized Tpeak of the heating surface. • The optimized flow configurations depend strongly on the values of design and operating parameters. The robustness and the reproducibility tests also imply that many "close-to-the-optima" solutions can be proposed by GATO because of the insensitivity of the objective function to the global optimum at the fixed stopping criterion. • Compared with conventional RSC heat sinks, the GATO heat sinks always achieve a better thermal performance, indicated by the higher Nu number, the lower Rth, and the better temperature uniformity at the heating surface, but at the cost of the higher pressure drop. The performance improvement is more significant under more heterogeneous heating conditions (higher intensity difference between heat flux peaks), highlighting the strong adaptability of the developed optimization method owing to the more morphologic freedom offered by GA to address unspecified problems. 
• A higher matrix resolution of the design area leads to lowered Tpeak at convergence, owing to the generation of finer and more complex structures at a local level. Nevertheless, a larger number of GA generations is needed thus timeconsuming. Regarding engineering application, the appropriate design resolution (size of the element to morph) should be decided by considering both the available computing resources and the fabrication capacity. This CFD-based optimization method relies on the accuracy of numerical simulation while the experimental validation of the proposed method is indispensable. This involves the simulation, optimization, fabrication, experimental testing, and performance comparison of different heat sink prototypes (RSC, OSC, and GATO), which will be presented in the next chapter 5. Meanwhile, different objective functions considering both thermal and hydraulic indicators and other constraints for GATO are also investigated and results will be presented in Chapter 6 of this thesis. Introduction In the previous chapters 3 and 4, two optimization methods have been proposed and developed for optimizing the geometry of heat sink under multiple-peak heat flux: channel inlet size optimization for tailoring the fluid flow distribution among the parallel channels (chapter 3) and genetic algorithm-based topology optimization (GATO) for global flow channel configuration optimization (chapter 4). The results obtained on the benchmark case have all shown that both methods could successively achieve the defined optimization objective (min 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * ). Further numerical study on the influences of different design and operating parameters also demonstrated that the heat sink optimized by the GATO method (named GATO heat sink hereafter) and the one with optimized distribution (named OSC heat sink hereafter) both had better thermal performance compared to the reference straight channel (RSC) heat sink, at the cost of a reasonable pressure drop increase. Despite all these encouraging results obtained, additional work is still required in various aspects, listed as below. • The heat sink model used for GATO (Figure 4.3) has a simplified geometry (e.g., zero wall thickness) and boundary condition (e.g., adiabatic wall), to save computational time. A detailed simulation study of the GATO heat sink with realistic geometry and operating conditions is thereby necessary for its performance evaluation. • The performance comparison between the optimized heat sinks by different methods (OSC and GATO) and the reference case (RSC) under a wide range of operating conditions is extremely useful. The analysis of different performance evaluation criteria will showcase the effectiveness and robustness of the two optimization methods developed in this thesis, as well as their limitations. • And most important of all, chapters 3 and 4 devoted to the development of optimization methods relied only on the CFD simulation results while the experimental work consists of an indispensable step for the numerical model validation and also for the performance characterization. These listed aspects constitute then the main motivations for the work performed and presented in this chapter. The main objective of this part of the work is to evaluate and compare the performances of different heat sinks under multiple heat sources, by using numerical, experimental, and optimization approaches. 
In more detail regarding the methodology, OSC and GATO heat sinks have been optimized by the optimization methods developed in earlier chapters. CFD simulations of the heat models using Ansys Fluent code have been performed to evaluate and compare their thermal and hydraulic performances under a wide range of operating conditions. Experimental testing of different heat sink prototypes fabricated in-house has been performed to obtain detailed information on the flow and temperature fields, which are essential for the validation of the numerical model. Optical-based measuring techniques, for which the research team in LTEN has strong expertise and experience [START_REF] Wei | Numerical and experimental investigation on the realization of target flow distribution among parallel mini-channels[END_REF][START_REF] Tarlet | Entropy generation analysis of a mini heat exchanger for heat transfer intensification[END_REF], have been selected, developed, and implemented for this purpose, including the particle image velocimetry (PIV) for velocity field measurement and infrared thermography for temperature field measurement. The novelty and uniqueness of this work lie in the experimental testing of heat sinks with TO-optimized geometry, under multiple heating source conditions, which has never been done in the literature (as summarized in chapter 2.5). The main contribution of the present study is the employment of optical-based techniques for measuring the flow and temperature fields in the heat sink. Especially for the temperature field measurement by IR thermography, a few researchers [START_REF] Li | Optimal design and thermal modelling for liquid-cooled heat sink based on multi-objective topology optimization: An experimental and numerical study[END_REF][START_REF] Qian | The influence of temperature dependent fluid properties on topology optimization of conjugate heat transfer[END_REF][START_REF] Ri | Design and Numerical Study of Liquid Cooling Radiator Based on Topology Optimization Method[END_REF] have used such a technique for characterizing the TO-optimized heat sinks (as listed in Table 2.1), but only the temperature distribution on the upper solid cover has been measured. This is relatively simple to perform in practice, but the measuring surface is far from the heating surface thus neither representative nor enough sensitive to the geometry of flow configurations. Differentiating from all of the others, our work goes a step further by measuring the near-wall fluid temperature distribution at the fluid-solid interface, which is closer to the heating surface, permitting a better observation and characterization. The technical difficulty lies in the IR transparent optical access to the target fluid domain. This has been solved in our study by introducing and installing a sapphire window, ensuring most of the IR radiation emitted by fluid would be captured by the IR camera (will be further explained in section 5.3). The rest of the chapter is organized as follows. Section 5.2 presents the PIV study on a transparent RSC prototype under isothermal conditions. Section 5.3 presents the IR thermography measurement for three heat sink prototypes (RSC, OSC, and GATO) under multiple heating sources. In each section, the test rig, the machined prototype, the measuring principle the data processing will be presented in detail, and the experimental results obtained are compared with CFD simulation results given model validation. 
Furthermore, further analysis and discussion on the local flow and heat transfer characteristics of three heat sinks based on the numerical results are given in section 5.4. Their thermal and hydraulic performances under a wide range of operating conditions are compared as well. Finally, in section 5.5, the main conclusions drawn are summarized. Velocity distribution measurement of RSC heat sink under isothermal condition This section presents the experimental work on the measurement of velocity distribution inside an RSC heat sink prototype under isothermal conditions, using the PIV technique. The results of PIV measurements are mainly used to validate the CFD model under a laminar flow pattern. (in green). The fluid circuit is composed of a water tank (10 L), a pump and valves, a flowmeter (0 to 2 × 10 -6 m 3 •s -1 ; accuracy: 4.6%) and the test section (RSC). Water was used as the working fluid, its flow rate was controlled by a precision pump (REGLO-z, 32-3200 mL min -1 , instrumental error less than 0.1%). Hollow Glass Spheres (HGS, supplier: Dantec) with a diameter of 10 𝜇𝜇𝑚𝑚 were used as seeding particles in our study. Test-rig Test section A transparent prototype based on RSC configuration has been specially designed, fabricated, and tested for the velocity field measurement. The overall dimension of this prototype is 120 mm in length and 90 mm in width. The prototype made of PMMA is composed of two parts: a base block and a cover as shown in Figure 5.2 (a) left and right, respectively. The flow channel with an identical depth of 2 mm was carved by digital machining at the surface of the base block. Grooves were reserved on the cover plate to embed sealing strips at the edge of the fluid domain and in between the channels to prevent leakage. The cover and base plates were assembled by 10 bolts as shown in Figure 5.2 (b). The fluid domain is composed of one inlet channel (5 mm in width and 25 mm in length) and distributing manifold (50 mm in width and 10 mm in length), one outlet channel and collecting manifold of the same dimensions as the inlet, and 12 parallel straight channels in the middle. Each channel has the dimension of 2 mm in width, 2 mm in depth, and 50 mm in length, uniformly spacing one another by separating walls of 2 mm in thickness. A stainless steel tube with a rectangular cross-section of 5 mm ⨯ 2 mm has been connected to the inlet channel of the RSC prototype having the same dimension, providing additional entrance length for the development of the velocity profile. The length of this stainless steel tube is 130 mm, sufficient to achieve a fully-developed laminar flow (L>0.0575 • 𝑅𝑅𝑒𝑒 𝑖𝑖𝑛𝑛 • 𝑆𝑆 ℎ,𝑖𝑖𝑛𝑛 [START_REF] Frank | Fundamentals of heat and mass transfer, 6th Editio[END_REF]) in the current study (𝑆𝑆 ℎ,𝑖𝑖𝑛𝑛 = 2.86 mm and maximum 𝑅𝑅𝑒𝑒 𝑖𝑖𝑛𝑛 = 569). PIV facility, measuring parameters, and data processing The PIV facility usually contains an illumination unit (Laser), an imaging unit (camera), a power supply, a data processing unit, and a synchronizer for the camera and laser pulse. Figure 5.3 displays the basic principle of PIV measurement [START_REF] Willert | Particle Image Velocimetry: a practical guide[END_REF]. Firstly, tracer particles are distributed uniformly in the target flow domain for measurement. Then, the laser forms a 2-D light sheet in the target domain where the velocity field is going to be measured. 
After, the highresolution camera records two frames at a different time to observe the motion of particles by the Eulerian approach and calculate the local velocity. The instruments used in this study are as follows. A commercial laser model Litron Nano S65-15 PIV was used as the light source, forming a laser sheet at the middle plane (z=1 mm) of the fluid channels. The light scattered by the seeding particles was caught by a scientific Complementary Metal-Oxide-Semiconductor (sCMOS) camera (zyla 5.5) with a resolution of 2560 × 2160 pixels. A synchronizer was applied to ensure the laser and sCMOS camera would cooperate accordingly. A commercial software DANTEC Dynamic Studio 7.2 was used to set the parameters of image taking, data acquisition, and post-processing. The PIV recording would start when the fluid flow reaches the steady state laminar flow. After the acquisition of 2000 images with double frame/single exposure under the frequency of 10 Hz, the pre-processing would be performed, including spatial calibration, defining, and applying the mask for background removal. In more detail, the positions and coordinates of the vector field would be firstly calibrated and then the mask would be defined and applied to the images to cover the domain of no interest (e.g., solid parts). After that, the cross-correlation method by Fast Fourier Transform (FFT) algorithm was applied to determine the local displacement vector between two illuminations through an interrogation area grid set to be 16 × 16. PIV data validation by median filtering was utilized to remove spurious noise. Finally, the average vector fields and standard deviations would be obtained. An example of the final validated velocity vectors and normalized standard deviations is shown in Figures 5.4 For V* values (y-component), however, a quite big difference could be observed in the distributing manifold and at the enhancement of two middle channels (Figure 5.5 (b)). The reason could be the secondary flow and high-velocity magnitude at these regions which increase the probability of random particle motions. Another possible reason is particle aggregation: during the acquisition of 2000 images (200 seconds), the particles following the main fluid streams stagnate and accumulate in these areas. Therefore, the image of velocity vectors taken at a different time would be influenced by different amounts of particles, bringing the higher values of StdDev V*. In brief, 2000 images are sufficient to capture the velocity field under a steady state based on the validated pixel number and statistic of standard deviation shown in Figure 5.4 and Figure 5.5. Some high standard deviation values in the region of the distributor near the inlet still exist, which is caused by the aggregation of particles. The longer time it takes, the more chance this phenomenon of particle aggregation would appear. Parameters for CFD simulation To compare with the PIV measurements, 3D CFD simulations were performed in parallel to calculate the velocity field inside the flow circuit, using the same geometrical characteristics as the RSC prototype presented in section 5.2.2 (long inlet channel included). Water was used as the working fluid with constant physical properties (density: 998.2 kg m -3 , viscosity: 0.001003 kg m -1 s -1, and heat capacity: 4182 J kg -1 K -1 ). The inlet velocity was set to be constant and equal to the experimental condition. The operational pressure was fixed at 101,325 Pa. 
Simulations were performed under steady-state, incompressible, and isothermal conditions. The gravity effect in the -z direction was also taken into account. The Navier-Stokes equations shown in chapter 3.2.3 were solved by the Ansys Fluent code (version R19.1), using the SIMPLE method for pressure-velocity coupling, a second-order upwind scheme for the discretization of momentum, the second-order method for pressure, and the Green-Gauss node-based approach for the gradients. The laminar flow model was used given the small flow rate tested (inlet Re number around 400). A constant velocity was imposed at the inlet surface and the boundary condition of the outlet was set as a pressure outlet with zero static pressure. The channel walls were considered no-slip. A structured mesh was generated to build up the geometry model, including about 0.4 million elements in total. The solutions were considered converged when (1) the sums of the normalized residuals of the governing equations were all less than 1 × 10⁻⁵, and (2) the global pressure drop was constant from one iteration to the next (variation smaller than 0.05 Pa).

Comparison between PIV and CFD results on the velocity fields

To capture more details of the fluid motion, the PIV visualization window covers three-fourths of the channels with 134 × 159 vectors, as marked by the green frame shown in Figure 5.6 (a). The velocity magnitude in this area, obtained by PIV measurement and by CFD calculation, is shown in Figures 5.6 (b) and 5.6 (c), respectively. Note that the tested volume flow rate at the inlet is 1.42 × 10⁻⁶ m³·s⁻¹, corresponding to an inlet Rein=404 and an average channel Rech=34. Generally, a good agreement can be observed between the PIV and CFD results, i.e., the maximum velocity appears at the middle of the distributing manifold, facing the inlet port. The two middle channels receive a large share of the mass flow while the proportion received by the rest of the channels is relatively small. This symmetric feature can be globally seen in the PIV velocity field, but irregular iso-velocity lines and some singular points are also visible, especially in the high-velocity-magnitude areas. This is due to the sticking and crusting of the seeding particles on the channel walls during the acquisition, which disturbs the PIV measurement to a certain extent, as explained in the previous section.

Figure 5.7 compares the velocity profiles obtained from four different sources:
• The experimental curve (blue) obtained by PIV measurement;
• The CFD curve (orange) obtained by the numerical model under isothermal conditions (section 5.2.4);
• The CFD curve (purple) obtained by a more detailed numerical model built for the thermal calculations (presented in section 5.3 on the thermal measurements), with the boundary conditions of a 120 W multiple-peak heat flux input and temperature-dependent physical properties;
• An analytical expression of the velocity profile (green) for fully-developed laminar flow in a rectangular channel (Purday's model) [START_REF] Shah | Rectangular Ducts[END_REF]:

v / vmax = 1 − (x / a)^b (5.1)

where vmax is the peak value of the velocity profile, a is the half channel width (in the x direction), and b is a correcting factor based on the aspect ratio (b=3.5 is calculated for the present study [START_REF] Shah | Rectangular Ducts[END_REF]). Figure 5.7 (a) plots the velocity magnitudes at the entrance of the distributing manifold (the first pixel of the PIV image). Both the isothermal CFD model calculation and the PIV measurement capture rather similar velocity magnitude profiles, and they are generally in good agreement with the theoretical calculation.
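As a side note, the analytical reference curve of Eq. (5.1) is straightforward to evaluate; the short sketch below is only illustrative (a=1 mm corresponds to the 2 mm channel width of the prototype, b=3.5 is the value quoted above, and vmax as well as the function name are assumed placeholders):

def purday_profile(x, v_max, a=1.0e-3, b=3.5):
    # Purday's approximate velocity profile (Eq. 5.1) in the channel mid-plane;
    # x is the lateral position measured from the channel centreline (|x| <= a).
    return v_max * (1.0 - (abs(x) / a) ** b)

# example: sample the profile at 11 points across a 2 mm wide channel,
# with an assumed peak velocity of 0.05 m/s
xs = [(-1.0 + 0.2 * i) * 1.0e-3 for i in range(11)]
profile = [purday_profile(x, v_max=0.05) for x in xs]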
Coming back to the comparison of Figure 5.7, the peak of the velocity magnitude obtained by the CFD-thermal model is slightly lower than the others. This is because the rise of the global temperature results in a decreased fluid viscosity, bringing about a lower velocity gradient along the x-axis compared to the isothermal condition. Nevertheless, some differences between the CFD calculations and the PIV measurements are still noticeable, especially for the peaks of the velocity profiles in channels 1-5 (counting from right to left). The underestimation of the flow rate by the PIV measurement could be due to a deviation of the laser sheet from the middle plane: the thickness of the laser sheet is 1.5 mm while the channel depth is 2 mm, so there is a good chance that some of the particles captured by the camera were not located in the middle plane. The statistical calculation of the velocity magnitude including these particles would thereby be lower than the CFD calculation. The flow distribution obtained by PIV also appears not fully symmetrical, whereas a symmetrical flow distribution is expected from the symmetry of the flow circuit (and is indeed obtained in the CFD calculations). The reason could be the location of the laser source on the right side of the testing field, with more particles detected in the region closer to the laser source, or a small inclination of the laser sheet (or of the prototype) along the x direction.

Summary

In general, the results of the CFD computations and of the PIV measurements show good consistency. The fully-developed laminar flow pattern is achieved at the entrance of the distributing manifold, and the choice of the laminar flow model in the CFD calculation can be considered validated. The velocity fields do not change much when the heating effect is considered in the CFD-thermal model. This part of the work under isothermal conditions helps establish a good basis for the following testing and performance comparison of the heat sinks with multiple heat sources. Similar flow behaviors can be expected in the OSC prototype, as shown in our earlier study on a similar geometry [START_REF] Wei | Numerical and experimental investigation on the realization of target flow distribution among parallel mini-channels[END_REF]. Ansys Fluent with proper simulation parameters has demonstrated its calculation accuracy for such parallel straight channel flow circuits, and the PIV measurement has therefore not been repeated for the OSC structure in this thesis work. Regarding the GATO flow circuit, with a much more complex geometry, several technical difficulties still exist, some of which have already been mentioned above, including particle stagnation and agglomeration due to the secondary flow, the laser sheet transmission, the calibration, etc. These technical difficulties remain to be solved before a credible PIV testing of the GATO prototype can be performed, which is certainly interesting but unfortunately was not done in this thesis work. Our efforts have then been devoted to the testing of the three heat sink prototypes under heating conditions and to their performance evaluation, which will be presented in the next section. It should be noted that the pressure drop has not been measured in this experimental study, mainly because of its low value (about 300 Pa) resulting from the small inlet Rein number. Such a low pressure drop is difficult to detect and would not be a limiting factor in real practice.
Temperature characteristics of heat sinks under multiple heating condition This section presents the experimental work on the measurement of temperature distribution at the fluid-solid interface inside the RSC, OSC, and GATO heat sinks. The experimental results are mainly used to validate the CFD model of the three heat sinks under multiple heating sources. The experimental technique used in this study for such purpose is optical-based IR thermography. Compared with conventional temperature measurements technique like thermocouples or temperature sensors which can only obtain temperature information at certain points, the non-intrusive feature of IR thermography could obtain the temperature distribution of a surface of interest (pixels in the frame of data) with ensured accuracy. Experimental set-up Figure 5.8 shows the schematic diagram of the test rig in LTEN for the thermal measurement of heat sinks. It consists of a flow circuit, the heat supply unit, and the IR thermography facility. Pure water was used as the coolant, initially stored in a water reservoir (at ambient temperature and pressure) located at a high position. It was charged into the test section (heat sink) by the gravity effect, and its flow rate could be adjusted and controlled by a valve and a flowmeter. After absorbing the heat in the test section, the water at a higher temperature was discharged to another water tank. Two K-type thermocouples (estimated uncertainty: ±0.2 K) were installed at the inlet and the outlet tube of the test section to measure the inlet and outlet water temperature, respectively. The measured data of thermocouples were monitored and recorded by a K-type thermocouple thermal meter (HI935002, uncertainty: ±0.2%). A power supply (model: Rohde & Schwarz, HMP4040, 384 W) was used to offer heat for three cartridge heaters (ARCELI, 24 V, 40W) at a set heating power. An IR camera (Xseries) has been used to monitor and record the temperature distribution at the target surface of interest. Heat sink prototypes A test section has been designed and fabricated for the thermal test of different heat sinks (RSC, OSC, GATO). It has an overall dimension of 250 mm ⨯ 110 mm ⨯ 20 mm (excluding the heater jackets), with a sandwich concept consisting of a base plate, a sapphire disk in the middle, and a cover plate, as shown in Figure 5.9 (a). The base plate is made of stainless steel 304, with the fluid channels machined on it, as shown in Figure 5.9 (b). The sapphire disk with a diameter of 100 mm is fixed in the circular groove of the cover plate (also made of stainless steel), providing optical access to the IR camera for measuring the temperature distribution (Figure 5.9 (a)). The thickness of the base, sapphire disk, and cover plate is 10 mm, 4 mm, and 10 mm, respectively. Three cylindrical heaters (with diameters: 6 mm, 8 mm, and 10 mm, respectively) were installed inside the three heater jackets (with total diameters: 10 mm, 12 mm and 15 mm and 46 mm in length), embedded at the bottom of the base at 6 mm depth. Their locations and shapes are indicated in Figure 5.9 (b) and also in Figure 5.11 (a), the distance between the heaters' head and the bottom wall of the flow channels being 2 mm. Sealing strips were installed around the edges of the fluid domain to prevent leakage and bolts were used for further sealing (e.g., Figure 5.9 (d)). The tightness of the assembled test section has been checked and ensured before each series of tests. 
Three flow channel configurations have been tested, representing RSC, OSC, and GATO heat sink. All three configurations have a long inlet tube of 130 mm, providing sufficient length for the establishment of a fully-developed laminar flow at the entrance of the distributing manifold (Figure 5.9). The flow channel configuration of the RSC heat sink has the same geometry and dimensions as those of the transparent prototype used for PIV measurement. Detailed information can be found in section 5.2.2 and Fig. 5.2 of this chapter thus will not be repeated here. The ORS flow channel is mainly based on the RSC, but with the width distribution of the channel inlets optimized by the method developed in former Chapter 3 for tailoring the flow distribution according to the locations of the heat flux. The optimization has been done using Ansys Fluent code, under a total input power of 120 W (40 W for each heater) and with an inlet water volume flow rate of 1.667×10 -6 m 3 •s -1 (100 mL•min -1 ; Rein= 474). More details about the optimization procedure for the OSC heat sink can be found in Chapter 3. In practice, a mini orifice baffle of 2 mm thickness has been fabricated and installed at the end of the distributing manifold, as shown in Figure 5.10. The orifice width for each channel corresponds then its optimized inlet width, as listed in Table 5.1, to properly distribute the inlet flow. InfraRed Thermography measurement The basic principle of IR thermography is that the IR radiation emitted by an object could be detected by the lens of an IR, which is then converted to an electrical signal proportionally. After amplification and data processing, the signal could be transformed and displayed as a temperature value. Most of the currently used IR cameras for temperature detection are sensitive in either the middle or the long wavelength spectral bands [START_REF] Astarita | Infrared Thermography for Thermo-Fluid-Dynamics[END_REF]. For our study, the broadband of the IR camera (X-series) ranges from 1.5 μm -5 μm. Under this range of wavelength, the emissivity of water is 0.92 -0.96, while the transmission percentage is less than 10% [START_REF] Kaylegian | Influence of fatty acid chain length and unsaturation on mid-infrared milk analysis[END_REF]. For the sapphire window with 4 mm thickness, the transmission percentage is over 65% [START_REF]Sapphire (Al2O3)[END_REF]. Therefore, the thermal radiation emitted by water could be detected by the IR camera through the sapphire window. It should be noted that the fluid-solid interface temperature between the sapphire disk and the water in contact has been measured by IR thermography, which has never been done for the experimental characterization of TO heat sinks in the literature. Three heat sinks have been tested under different operating conditions, with total input power values ranging from 60 W to 120 W (identical power for the three heaters) and inlet flow rates ranging from 1.333×10 -6 m 3 •s -1 (80 mL•min -1 ; Rein=379) to 2×10 -6 m 3 •s -1 (120 mL•min -1 ; Rein=569). For each test, the measurements were performed after the thermal balance at steady-state has been reached, i.e., the fluid outlet temperature is stable and the ratio of heat absorbed by the fluid to the total input power is higher than 95% (an example of heat balance check is shown in Appendix 5.A). Once the steady state was reached, a synchronized image recording was executed by the commercial software FLIR ResearchIR Max under the frame frequency of 30 Hz. 
The final measured temperature field with a resolution of 512 × 640 pixels was calculated by averaging the temperature values of 300 images at each pixel. CFD simulation parameters To compare with the IR thermography measurements, a full 3D CFD simulations for fluid flow and heat transfer were performed for three heat sinks under the same heating conditions as in the experiments. Water was used as the working fluid with temperature-dependent physical properties (cf. Table 3.1). The physical properties of the solid parts (stainless steel and sapphire) were considered constant and their values are listed in Table 5.2. The operational pressure was fixed at 101325 Pa. Simulations were performed under a steady-state, incompressible, and laminar flow regime with heat transfer. The viscous heating was neglected while the gravity effect at the z direction was considered. Navier-Stokes equations as shown in chapter 3.2.3 were solved by Ansys Fluent code (version R19.1). Least squares cell-based, second order upwind differential, and second order scheme were used for discretization of gradient, momentum, and pressure. The inlet velocity and temperature of fluid were set to be constant and equal to experimental conditions. The boundary condition of the outlet was set as pressure-outlet with zero static pressure. Three heaters are simplified as thermal boundary conditions of the surface heat flux of three cylinders. Each surface heat flux has the same input power but a different surface area, which generates various heat fluxes. Channel walls for the fluid domain were considered as no slip. The external walls were set as adiabatic, except for the external sapphire window for which a heat transfer coefficient (7 W•m -2 •K -1 ) with the ambient has been set. The structured mesh was generated to build up the geometry model, including about 0.17 million elements for the fluid, and 14.4 million elements for the solid parts. The solutions were considered to be converged when (1) the sums of normalized residuals for control equations are all less than 1 × 10 -5 , and (2) the global pressure drop is constant from one iteration to the next (less than 0.05 Pa). Regarding the GATO heat sink, the experiment and CFD results show again a good consistency. Different from the temperature contours of RSC and OSC heat sinks, the highest temperature region no longer appears at the middle-right position of the design domain, but instead at the upper-left corner where the largest heater is located. Distinguished from the trend observed for RSC and OSC heat sinks, the temperature isotherms decrease diagonally from the upper-left corner to the bottom-right corner. The coolant is guided to cool down the temperature hot spot by the topologically-optimized fluid channel configuration, implying the effectiveness of the GATO method. The maximum temperatures are very close, i.e., 328.52 K and 329.14 K for IR measurement and CFD calculation, respectively. A statistical analysis of Fig. 5.14 shows that the surface area of hotspots (e.g., T*>1.75) on the measuring surface reaches 8.79 mm 2 (RSC), 1.32 mm 2 (OSC) and 0.56 mm 2 (GATO), respectively, implying better temperature uniformity at the heating surface through enhanced cooling by applying the two optimization methods. Further performance evaluation and comparison between the three heat sinks will be presented in chapter 5.4. 
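The frame averaging and the hotspot-area statistics mentioned above are simple pixel-wise operations. A minimal sketch of this post-processing is given below (Python/NumPy; the pixel pitch, the synthetic data, and the function names are assumptions for illustration, not the FLIR software output).

```python
import numpy as np

def mean_ir_frame(frames):
    """Pixel-wise average of a stack of recorded IR frames (e.g., 300 images of 512 x 640 pixels)."""
    return np.mean(np.stack(frames), axis=0)

def hotspot_area_mm2(t_star_map, pixel_pitch_mm, threshold=1.75):
    """Surface area (mm^2) of the region where the normalized temperature T* exceeds the threshold."""
    n_hot = np.count_nonzero(t_star_map > threshold)
    return n_hot * pixel_pitch_mm ** 2

# Illustrative use with synthetic normalized-temperature frames and an assumed 0.16 mm pixel pitch
frames = [np.random.uniform(0.5, 1.9, size=(512, 640)) for _ in range(10)]
t_star_map = mean_ir_frame(frames)
print(hotspot_area_mm2(t_star_map, pixel_pitch_mm=0.16))
```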
• Temperature profiles along the sampling lines

For a more detailed comparison between the IR measurements and the CFD simulations, the T* profiles along three sampling lines are plotted for the RSC heat sink (Figure 5.15) and for the OSC heat sink (Figure 5.16), respectively (channels No.1, No.6 and No.11, located at x = -22 mm, -2 mm and 18 mm, respectively, with y ranging from 10 mm to 60 mm). The results are discussed below.

From Figure 5.15 (RSC heat sink), it can be observed that the IR and CFD T* curves generally match, especially for channel No.1 (orange). Nevertheless, a relatively larger departure may be observed for channel No.6 (purple), especially in the entrance region where ∆T* reaches 0.146. For channel No.11, which goes through the hot spot, the maximum ∆T* between the IR and CFD results is about 0.07. The maximum T* reaches 1.75 at y=42 mm based on the CFD calculation, while for the IR measurement it is about 1.73 at y=43 mm, again showing good consistency.

T* profiles at the same channel locations are plotted in Figure 5.16 for the OSC heat sink. A similar global trend of the IR and CFD curves may be observed, but the departure between them is more noticeable than for the RSC heat sink. In more detail, the largest value of ∆T* reaches 0.13, 0.23, and 0.15 for channels No.1, No.6, and No.11, respectively. This larger discrepancy could be due to the limited fabrication precision in realizing the mini orifice baffle for the OSC: the machining accuracy cannot reach the precision of the optimized channel inlet widths (orifice sizes) indicated in Table 5.1, which may influence the flow distribution among the parallel channels. Since the cooling effect of each channel relies strongly on the mass flow rate passing through it, a small departure from the desired flow rate may result in a noticeable difference in the fluid temperature profiles in the channel, as shown in Figure 5.16. This also indicates that the size optimization method acting on the channel inlet widths, although pragmatic and simple to implement, may be limited by the fabrication precision required to achieve the optimized flow distribution among the parallel channels of the OSC heat sink.

• Average temperature and temperature uniformity of the measuring surface

Figure 5.17 presents the CFD prediction and the IR measurement of the area-weighted average temperature (T*avg) and of the standard deviation STD_T* for the measuring surface of the three heat sinks under various input power (Qtot: 60 to 120 W) and inlet flow rate (1.333×10-6 to 2×10-6 m3•s-1) conditions. The STD_T* value is calculated by Eq. (5.8):

$\mathrm{STD}_{T^*} = \sqrt{\dfrac{1}{n-1}\sum_{Pix=1}^{n}\left(T^*_{Pix}-\overline{T^*}\right)^2}$   (5.8)

For the experimental data, n is the total number of pixels of the measuring surface, T*_Pix is the normalized temperature at pixel Pix, and T̄* is the average value of all T*_Pix. For the numerical results, n is the total number of mesh nodes of the measuring surface.

From Figure 5.17 (a), it can be observed that the largest STD_T* difference between IR and CFD is 8.6%, obtained for the OSC heat sink (120 W input power and 2×10-6 m3•s-1 (Rein=569) inlet volume flow rate), for the reason explained above. The maximum STD_T* error for the RSC and GATO heat sinks is within 3.7% and 5.7%, respectively, showing good agreement between the IR and CFD results.
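The statistics of Eq. (5.8) are straightforward to compute from either the IR pixel map or the CFD surface nodes. A small sketch is given below (Python/NumPy, illustrative only; it takes the already normalized T* field as input, and an area weighting of the CFD faces could be added for the area-weighted average).

```python
import numpy as np

def surface_statistics(t_star):
    """Mean T* and sample standard deviation STD_T* of a surface field (Eq. 5.8).

    t_star: 1-D or 2-D array of normalized temperatures (IR pixels or CFD surface nodes).
    """
    t = np.asarray(t_star, dtype=float).ravel()
    t_mean = t.mean()
    std_t = np.sqrt(np.sum((t - t_mean) ** 2) / (t.size - 1))  # equivalent to t.std(ddof=1)
    return t_mean, std_t
```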
A similar observation can be made for Tavg * shown in Figure 5.17 (b). Good agreement between IR and CFD results can be drawn, the maximum error being 8.2% (RSC), 2.9% (OSC), and 6.7% (GATO), respectively. In conclusion, the CFD model used for the thermal test can be validated by the IR measuring results. It is worth noting that among the three tested heat sinks, the GATO heat sink has the lowest values of STDT* and Tavg * for the measuring surface (inner sapphire wall in contact with the fluid). This implies that the GATO heat sink could have the best cooling performance, which will be further discussed in the next section. Performance evaluation and comparison between three heat sinks: further analysis of the CFD results In the previous section, the IR measurements on the fluid-solid interface have been used to validate the CFD models. Further analysis of the CFD results will be rather beneficial to gain more information on the local fluid flow and heat transfer characteristics of the three tested heat sinks as well as their global thermal and hydraulic performance evaluation under a wide range of operating conditions. For the RSC heat sink, the 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * = 3.73 (356.8 K) is found at the middle-right position of the heating surface. This is because given the same power input (40 W), the heat flux is the highest for the heater with the smallest heating surface area (d=6 mm). The temperature field on the heating surface of the OSC heat sink follows a similar trend, with overheated areas shrinking a bit owing to the tailored flow distribution realized by the optimized channel inlet widths. The peak temperature 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * = 3.64 (355.4 K) is still located at the position that receives the highest heat flux. For the GATO heat sink, the peak temperature of the heating surface has been significantly decreased to 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * = 2.94 (343.8 K), 13.1 K, and 11.6 K smaller than that of the RSC and OSC heat sink, respectively. The temperature cartography also shows that the three uneven hot spots tend to be dissipated uniformly with the help of GATO. Local fluid flow and heat transfer characteristics of three heat sinks Besides, the temperature distributions at the outer wall of the sapphire disc (circle surface area) are also shown in Figure 5.18. The T* scale was set to be the same as Figure 5.14 for a better comparison. It is obvious that the temperature differences on outer wall of sapphire window (cover plate) are smaller than those at the testing surface (fluid-solid interface) in Figure 5.14 of three heat sinks. This illustrates the reason why the temperature field of the fluidsolid interface has been chosen for IR thermography. Because it is nearer to the heating surface thus more sensitive to the temperature uniformity improvement due to applying the optimization methods. The thermal performance of a single-phase cooling heat sink is greatly related to the fluid flow behaviors for heat convection. Therefore, a detailed analysis of the fluid domain in the three heat sinks has been made. The numerical results are shown in Figures 5. [START_REF] Hu | Mechanism and thermal effect of delamination in light-emitting diode packages[END_REF] Figure 5.19 presents the contour of velocity magnitude (x-y plane) at the mid-depth of the fluid channels (z=1 mm) for three studied heat sinks. For the RSC heat sink, the fluid flow distribution behavior has already been discussed in detail in section 5.2 devoted to the PIV test (Figure 5.7). 
Regardless of the location and heat flux of the three heaters, the geometry and dimension specificities of RSC render always the highest amount of cooling fluid passing through the two middle channels, while other channels are less supplied. This is the intrinsic drawback of the RSC heat sink, which may not be of sufficient concern for cooling a uniformly heated surface, but certainly not enough performant to treat localized multiple heat sources. For the OSC heat sink, more fluid is guided to the side (e.g., x>0) where the higher heat flux is located. Nevertheless, the cooling performance improvement is limited concerning the RSC heat sink. This is because the optimization method works only on the adjustment of flow distribution among parallel channels, while in each channel the flow direction is the same (y direction). This approach has shown its effectiveness for cooling a heating surface with multiple Gaussian-shape heat fluxes (chapter 3). But for the case of localized multiple heat sources but each with uniform heat flux, the limitation of this optimization method due to the lack of design freedom can be seen. This happens to be the distinguished advantage of the GATO approach, which needs no geometry presetting but has the highest degree of freedom to morph. The channel turnings, bifurcations, and intersections shown in Figure 5.19 can only be proposed by the TO method. Rather complex and non-established flow patterns can be generated, including divergent/confluence flows, transversal flows, and secondary flows, to enhance the forced heat convection, especially near the heat sources. As a result, the reduction of 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * at the heated surface is more significant by applying the GATO method (compared to the flow distribution optimization). Figure 5.20 compares the velocity vectors in the channel cross sections (x-z plane) at the channel mid-length (y=35 mm) of the three heat sinks. For better observation, the scale factor is set to 5 for magnification. It is rather obvious that the velocity vectors in the x-z plane of the GATO heat sink far exceed the amount in RSC and OSC heat sinks. The zoom-in images of some sampling positions are shown in Figure 5.21 with different scale factors. The distribution and directions of velocity vectors in the cross-section of RSC and OSC heat sinks are quite similar. Two small vortices are formed with descending velocity vectors (z-direction) in the middle of the cross-section and ascending velocity vectors (zdirection) at each side wall. This is because of the higher temperature of the bottom and side walls (close to the heaters) than the upper wall (inner surface of the sapphire disk). The buoyancy effect due to the fluid temperature difference (thus the density difference) results in the appearance of such vortices. Vortices and secondary flows are much more obvious for the GATO case shown in Figure 5.21, which is mainly due to the flow confluence and diverging at that intersection. Comparison of the global thermal and hydraulic performances of three heat sinks Local fluid flow and heat transfer characteristics have been discussed in the previous chapter 5.4.1, showing that the GATO heat sink has the lowest 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * in the heating surface, which is following the defined objective function for the optimization methods. In this section, attention is focused on the evaluation and comparison of the global thermal and hydraulic performances of three heat sinks. 
For this purpose, more CFD simulations (using the thermal model) have been performed for the three heat sinks. The testing conditions cover a wide range, with the inlet fluid flow rate ranging from 0.667×10 -6 to 2.33×10 -6 m 3 •s -1 (40 to 140 mL•min -1 ; Rein: 237 to 663) and the total input power increasing from 30 W to 180 W (identical input for the three heaters). Other operating parameters are kept the same (inlet fluid temperature at 20 °C; environment temperature at 17 °C; convection heat transfer coefficient at the outer sapphire disk of 7 W•m -2 •K -1 ). The numerical results obtained are used to calculate several global indicators for heat sink performance evaluation, including the Nusselt number (Nu), the performance evaluation criterion (PEC) number, and the Po/Nu ratio, introduced below.

The calculation of the Nu number for the heat sink follows Eq. (4.6) explained in Chapter 4. The PEC for the OSC and GATO heat sinks is calculated based on Eq. (5.9) [START_REF] Gong | Thermal performance of microchannels with wavy walls for electronics cooling[END_REF]:

$PEC = \dfrac{Nu/Nu_{reference}}{\sqrt[3]{\Delta P/\Delta P_{reference}}}$   (5.9)

where the RSC heat sink is considered as the reference case and ∆P (Pa) is the global inlet-outlet pressure drop of the heat sink. The Poiseuille number (Po) is calculated by Eq. (5.10):

$Po = f \cdot Re_{avg}$   (5.10)

where Re_avg and f are the average Re number of the fluid domain and the Darcy friction factor, calculated by Eq. (5.11) and Eq. (5.12), respectively:

$Re_{avg} = \dfrac{\rho_{f,avg}\, v_{avg}\, D_h}{\mu_{avg}}$   (5.11)

where ρ_f,avg and μ_avg are the average fluid density (kg•m -3 ) and the average fluid dynamic viscosity (kg•m -1 •s -1 ), v_avg is the volume-weighted average velocity of the fluid domain (m•s -1 ), and D_h is the hydraulic diameter of the fluid circuit (m) expressed by Eq. (4.8) in Chapter 4.

$f = \dfrac{\Delta P}{L} \cdot \dfrac{2 D_h}{\rho_{f,avg}\, v_{avg}^{2}}$   (5.12)

where L is the distance (m) between the inlet and the outlet. The numerical results obtained are analyzed and discussed as follows.

Nusselt number
Over the tested range of inlet Re numbers, the GATO heat sink shows markedly higher Nu numbers than the RSC and OSC heat sinks (at least 49.6% and 40.6% higher, respectively), while the Nu values of the three heat sinks remain almost constant when the total input power increases (Figure 5.22).

PEC number
The PEC number evaluates the heat sink performance considering both thermal and hydrodynamic aspects. It indicates the possible performance enhancement that can be achieved by a novel heat sink design with respect to a reference case (here the RSC). A PEC value greater than 1 implies that the enhancement of thermal performance outweighs the cost of the pressure drop increase, which is the case for both the OSC and GATO heat sinks, as shown in Figure 5.24. For the OSC heat sink, the PEC values (about 1.05) remain almost unchanged under the tested Rein conditions, while for the GATO heat sink the PEC rises from 1.30 up to 1.87, a rather significant performance enhancement. This is encouraging since the pressure drop has not been considered in the GATO method developed in Chapter 4, neither as an objective nor as a constraint.

Po/Nu ratio
The Po/Nu ratio also considers both the hydraulic and thermal performances of a heat sink, a smaller ratio indicating better performance. Figure 5.25 (a) shows the values of Po/Nu for the three tested heat sinks as a function of the total power input. All three curves descend with increasing Qtot. Since the Nu values remain almost constant with increasing Qtot (as shown in Figure 5.22 (b)), the decrease of the Po/Nu ratio is mainly due to the decreased Po number at higher Qtot: a higher Qtot raises the fluid temperature in general, hence a smaller fluid viscosity and a smaller ∆P.
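Before comparing the three heat sinks on these indicators, the sketch below shows how they might be assembled from a handful of scalar outputs of each simulation (Python; the numerical values and variable names are illustrative assumptions, and Nu is assumed to be already computed following Eq. (4.6) of Chapter 4).

```python
def poiseuille_number(dp, length, d_h, rho_avg, v_avg, mu_avg):
    """Po = f * Re_avg, with the Darcy friction factor of Eq. (5.12) and Re_avg of Eq. (5.11)."""
    f = (dp / length) * 2.0 * d_h / (rho_avg * v_avg ** 2)   # Eq. (5.12)
    re_avg = rho_avg * v_avg * d_h / mu_avg                  # Eq. (5.11)
    return f * re_avg                                        # Eq. (5.10)

def pec(nu, nu_ref, dp, dp_ref):
    """Performance evaluation criterion of Eq. (5.9), the RSC heat sink taken as reference."""
    return (nu / nu_ref) / (dp / dp_ref) ** (1.0 / 3.0)

# Illustrative comparison of a candidate design against the RSC reference (made-up values)
po = poiseuille_number(dp=850.0, length=0.11, d_h=1.8e-3,
                       rho_avg=995.0, v_avg=0.35, mu_avg=7.2e-4)
print(po, pec(nu=12.0, nu_ref=8.0, dp=850.0, dp_ref=480.0))
```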
Among the three tested heat sinks, the GATO heat sink always has the lowest Po/Nu ratio at a given Qtot, at least 13.7% and 7.5% lower than those of the RSC and OSC heat sinks, respectively. Figure 5.25 (b) presents the Po/Nu ratio vs. Rein for the three tested heat sinks. The curves for the RSC and OSC heat sinks gradually rise with increasing Rein. This is due to the boosted Po number combined with a rather stable Nu number at higher Rein, indicating that increasing the coolant flow rate is not beneficial in terms of the combined thermal and hydraulic performances of the RSC and OSC heat sinks. As for the GATO heat sink, the Po/Nu ratio is rather stable over the tested Rein range. The slight rise of the Po/Nu ratio at the two extremes of the tested Rein range is mainly because the GATO heat sink has been optimized under Rein=474. Nevertheless, the effectiveness and robustness of the method in terms of both thermal and hydraulic performances can still be seen over the tested Rein range.

Conclusion
This chapter has been devoted to the performance comparison and evaluation of different heat sinks using both experimental and numerical approaches. Three heat sinks have been tested, namely the RSC, OSC, and GATO heat sinks, under a wide range of operating conditions. The PIV method has been used to measure the velocity field at the mid-depth of the fluid domain, while IR thermography has been applied to measure the temperature distribution at the fluid-solid interface. The main conclusions can be drawn as follows:
• The velocity profiles of the RSC prototype captured by PIV show good agreement with the CFD calculations, so as to validate the numerical (laminar) model for fluid flow;
• The IR measurements and the CFD calculations of the temperature distribution at the fluid-solid interface of the three heat sinks are in good agreement, thereby validating the CFD model used for the simulation of the three heat sinks;
• Among the three tested heat sinks, the GATO heat sink shows the smallest T*peak at the fluid-solid interface as well as at the heating surface, showing the effectiveness of the GATO method in removing temperature hotspots under multiple heat sources. The enhancement of convection heat transfer is mainly achieved by the non-established, transversal, and secondary flows in the GATO heat sink, with its numerous channel turnings, bifurcations, and intersections;
• The performance comparison based on different performance indicators also shows that the GATO heat sink has the best thermal and hydraulic performances under a wide range of operating conditions. The performance enhancement is rather significant compared to the RSC or OSC heat sinks.
All the above conclusions showcase that the GATO approach is an effective and robust method thanks to its highest degree of design freedom. It is worth mentioning that the good performances of the GATO heat sink are based on the single optimization objective of minimizing Tpeak at the heating surface. Testing different optimization criteria for the GATO method is the goal of the next chapter (Chapter 6).
Furthermore, the temperature distributions obtained by the CFD laminar model, the IR thermography, and the CFD turbulence model are compared in Figure 5.B3 (a-c). In the cold region (the distributor), the non-dimensional temperatures ranging from 0.1 to 0.2 were better reproduced by the turbulence model when compared with the experimental result.
Regarding the hot region, which is more crucial for indicating the existence of hotspots, the surface of the high-temperature region predicted by the turbulence model is clearly smaller than that of the experimental result. This hot area of interest is therefore better identified by the laminar model than by the turbulence model. Over the whole region, the consistency between the numerical and experimental results is better when using the laminar flow model.

Introduction
In the literature on TO for heat exchangers and heat sinks, the objective functions are mainly focused on thermal and/or hydrodynamic performance. The commonly used objective functions can therefore be broadly classified into those that indicate thermal performance and those that indicate hydrodynamic performance. Objective functions that reflect thermal performance mainly include the average temperature of a heating surface or of the solid parts of the heat sink, the exchanged heat, the thermal resistance, the temperature rise, the temperature difference, the root mean square temperature, the heat transfer rate, the thermal compliance, the recoverable thermal power, the maximum temperature, and the heat dissipation. Objective functions that embody hydrodynamic performance include the pressure drop and the energy dissipation. Based on the review paper [START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF], Figure 6.1 summarizes the objective functions used in 86 articles on TO for heat exchangers and heat sinks. From the statistical data of Figure 6.1 (a), the objective functions of average temperature, exchanged heat, and pressure drop are preferred. Moreover, most articles (55%) focus on optimizing thermal performance using a single objective function, while a significant portion (40%) consider both thermal and hydrodynamic performance using a duo-objective function. Only a small number of studies (5%) use a triple-objective function, which not only considers thermal performance but also includes multiple indicators such as the average temperature and the temperature difference.

Chapter 4 presented the GATO approach and conducted various parameter studies using a single objective function (minimizing the peak temperature at the heating surface) while maintaining a constant fluid void fraction and ensuring connectivity when generating new individuals during the optimization. The peak temperature at the heating surface is a crucial criterion for the thermal performance of a heat sink, as it determines whether the electronics can function properly. However, since the maximum temperature is a single data point that provides only local information about the heating surface, it is also important to consider global statistical values such as the mean and the standard deviation. Additionally, optimizing only the thermal performance may not be sufficient for the overall design of a heat sink, since the hydraulic performance also needs to be considered. This chapter aims to fill these gaps by choosing the Root Mean Square Deviation (RMSD) of the temperature field at the heating surface as an additional objective function, which considers the entire temperature field instead of a single data point. The optimization will also include an objective function that takes into account both hydrodynamic and thermal performance through the ratio between the Poiseuille number (Po) and the Nusselt number (Nu). Finally, an objective function that combines the normalized peak temperature and the normalized pressure drop will be minimized using weighting factors.
The results of these optimizations will be compared and evaluated. It is worth noting that the boundary conditions and constraints used for the optimizations in this chapter are the same as those of the benchmark case presented in section 4.3.1.

Root mean square deviation (RMSD) minimization
Temperature uniformity at the heating surface is a very important criterion for the thermal performance of a heat sink, as it reflects the degree of uneven heating of the heated surface of an electronic component. In the literature, the objective functions used to indicate temperature uniformity are the temperature difference and the root mean square temperature. Here, the RMSD of the normalized temperature is considered as the objective function, defined as follows:

$f_{obj1} = \mathrm{RMSD}_{T^*} = \sqrt{\dfrac{\sum_{Pix=1}^{P_{tot}}\left(T^*_{Pix}\right)^2}{P_{tot}}}$   (6.1)

where P_tot is the total number of cells/points of the heat sink heating surface in the CFD mesh and T*_Pix is the normalized temperature at cell Pix, defined by Eq. (4.13) in Chapter 4. The minimization of RMSD_T* not only considers the temperature uniformity of the heating surface but also tends to drive the temperature of all the cells towards the inlet temperature (based on the definition of T*).

After 172 iterations, the optimization minimizing RMSD_T* converged according to the convergence criterion (Eq. 4.4); its convergence history is presented in Figure 6.2, the value of RMSD_T* decreasing from 1.052 to 0.899. More details on the evolution of the geometry, the velocity field, and the temperature distribution during the iterative process are presented in Figure 6.3. The most evident evolution of the temperature distribution along the iterations is that the surface area with 0.85<T*<1.08 (in green) becomes larger and ends up occupying the largest part of the heating surface. The objective of minimizing RMSD_T* tends to decrease the temperature of every cell of the heating surface, with the same weighting factor, towards the target value (Tf,in=293 K). For this purpose, the main flow paths formed during the iterations shown in Figure 6.3 (b) first guide the inlet flow to the side where the lower heat flux peak is located, ensuring a low-temperature area there (which is easier to achieve because this heat flux peak is near the inlet); the main flow, now at a higher temperature than the inlet temperature, then reaches the location of the temperature hot spot to cool the highest-temperature region, which explains the remaining hot spot. In general, this indicates that the GATO approach is effective in obtaining an optimal TO geometry for the minimization of RMSD_T*.

Po/Nu minimization
In addition to the thermal performance, it is important to evaluate the hydrodynamic performance of a heat sink. Two non-dimensional indicators, the Po number and the Nu number, characterize the hydrodynamic and thermal performances through the friction factor and the convective heat transfer coefficient, respectively. Therefore, in this section, the Po/Nu ratio is used as the objective function of the optimization:

$f_{obj2} = Po/Nu$   (6.2)

The expressions of the Nu number and of the Po number are given by Eq. (4.6) in Chapter 4 and Eq. (5.10) in Chapter 5, respectively. Figure 6.5 records the optimization progress of the top-ranked flow path configuration, the velocity field at mid-channel depth, and the temperature distribution at the heating surface for generations 1, 6, 20, 40, 72, and 151. Minimizing Po/Nu aims at decreasing the pressure drop and increasing the heat transfer coefficient.
This is related to the area-weighted average temperature of the fluid-solid interface and to the global hydraulic diameter of the heat sink design area. With a constant void fraction (0.50), a higher Nu number is achieved by a shorter wetted perimeter and a lower area-weighted average temperature of the solid-fluid walls. Thus, the optimal geometry in Figure 6.5 (a) tends to have larger solid islands, indicating a shorter wetted perimeter.

Weighted-sum objective function of peak temperature and pressure drop
In this section, the objective function is a combination of an important thermal performance indicator, the peak temperature at the heating surface (in normalized form), and the normalized pressure drop, as expressed below:

$f_{obj3} = \omega_1 T^*_{peak} + \omega_2 P^*$   (6.3)

where ω1 and ω2 are the weighting factors, both taken equal to 0.5 here. Compared with the geometry obtained by minimizing only the peak temperature, fewer transversal flow paths are generated in this case because the pressure drop is considered at the same time. Figure 6.8 plots the progress of the 0.5 T*peak + 0.5 P* minimization, the color change indicating the evolutionary direction of the GA. The optimal solutions obtained with the other objective functions are also plotted for comparison. It can be observed that, as the GA generations proceed, the color changes from blue to red, forming a sharp corner that points in the direction of lower pressure drop and lower temperature. This indicates that the GATO can satisfy both the minimization of the temperature and of the pressure drop (the yellow point) at the same time. The green point in Figure 6.8 is the optimal point obtained when minimizing only T*peak at the heating surface (benchmark case of Chapter 4). The peak temperature of this optimal point is clearly the lowest, but at the cost of a higher pressure drop, which is reasonable because there is no constraint on the pressure drop when performing the T*peak minimization. The temperature difference between the green and yellow points is not very large (about 1.5 K), which indicates that the objective function considering both the peak temperature and the pressure drop reaches a temperature close to that of the single-objective optimization minimizing only the peak temperature.

Comparison of optimizations of different objectives
The performance indicators of the optimal TO heat sinks obtained with the three objective functions are presented and compared hereafter.

Conclusions
In this chapter, the minimization of three objectives, f_obj1 = RMSD_T*, f_obj2 = Po/Nu and f_obj3 = ω1 T*peak + ω2 P*, has been performed. The evolution of the TO heat sink geometry, of the fluid flow configuration, and of the temperature distribution at the heating surface is shown for the three optimizations. All the heat sink performance indicators of the obtained optimal TO heat sinks are presented and compared. The main conclusions can be drawn as follows:
• The GATO approach is robust and effective in solving optimization problems with complicated expressions of the objective function;
• Minimizing an objective function indicating the global performance of a heat sink, like f_obj1 and f_obj2, may give neither a lower peak temperature nor a lower pressure drop;
• The optimization of f_obj3 brought both a lower temperature and a lower pressure drop, with better Nu and Po numbers, than optimizing an objective function considering only the peak temperature.
Furthermore, to better combine physical laws with the GATO approach, the minimization of the local entropy generation for enhanced heat transfer could be performed in a future study, which is also practical with this GATO method.
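To make the three criteria concrete, the sketch below shows how they might be evaluated from post-processed CFD data inside the GATO loop (Python; the array names, the normalization of P*, and the default weighting values are assumptions for illustration, not the exact thesis implementation).

```python
import numpy as np

def f_obj1(t_star_surface):
    """RMSD of the normalized heating-surface temperature, Eq. (6.1)."""
    t = np.asarray(t_star_surface, dtype=float).ravel()
    return float(np.sqrt(np.mean(t ** 2)))

def f_obj2(po, nu):
    """Hydraulic-to-thermal performance ratio, Eq. (6.2)."""
    return po / nu

def f_obj3(t_star_peak, p_star, w1=0.5, w2=0.5):
    """Weighted sum of normalized peak temperature and normalized pressure drop, Eq. (6.3)."""
    return w1 * t_star_peak + w2 * p_star
```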
Chapter 7: General conclusions and perspectives Conclusions This thesis focuses on the design and optimization of high-performance heat sinks for effective convective cooling of a heat-generating surface with multiple-peak heat flux. A size optimization approach based on conventional straight channel heat sink optimization and a genetic algorithm-based topology optimization (GATO) approach have been developed, tested, and compared. Experimental validations of the optimized numerical models have been performed. The influence of different objective functions for the GATO approach has been studied. The main conclusions of each chapter could be drawn as follows: In chapter 2, a survey on electronics indicates that non-uniform multiple heat sources widely exist in electronics. Under this circumstance, overheating issues slowing down the working efficiency, irreversible deterioration, and reduction of the electronics in a lifetime turn more serious. Among various thermal management methods, single-phase liquid cooling using heat sinks is found to be a compact, simple, low-cost, and safer solution. Geometry optimization of heat sinks has been intensively attempted to improve their performance. For a conventional straight channel heat sink, no research has systematically studied the optimal (tailored) flow distribution inside a heat sink under uneven heating conditions for thermal performance improvement. For a TO heat sink, most topology approaches are based on gradient methods, which might be easily trapped into a local optimum and have difficulties to define the fluidsolid interface. Only a few researchers experimentally test TO heat sinks; no research so far has measured the temperature distribution of the cooling fluid inside the TO heat sinks. In chapter 3, based on a conventional parallel straight mini-channel heat sink (RSC) subjected to a non-uniform multiple-peak heat flux, a size optimization method has been developed to adjust the channel inlet widths. The flow distribution among the parallel channels could thereby be optimized to minimize the peak temperature (Tpeak) on the heating surface. It was found that the proposed size optimization method was able to reduce the Tpeak by 10 K under an area-weighted average heat flux of 38.75 W•cm -2 in the two-peak heat flux case. The heat sink configuration featuring optimized channel inlets (OSC) consistently demonstrated lower thermal resistance compared to the RSC heat sink under various average heat flux and total mass flow rate conditions. Furthermore, at the same pressure drop, optimizing the flow distribution of the cooling fluid was found to be more effective in reducing the thermal resistance than simply increasing the mass flow rate of the cooling liquid. The effectiveness and robustness of the optimization algorithm were demonstrated when the average heat flux varied within a certain range. In chapter 4, the GATO method has been developed and tested to determine the optimal global flow channel configuration of a heat sink, again under a non-uniform heating surface with multiple heat sources. The optimization objective is to minimize the peak temperature at the heating surface (Tpeak) under the constraint of a constant void fraction in the fully-connected fluid domain. The proposed GATO method was able to successfully determine the optimal spatial distribution of the fluid/solid elements in the design domain. The optimized flow configurations were found to depend strongly on the values of the design and operating parameters. 
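The "fully-connected fluid domain" constraint mentioned above can be enforced with a simple connected-component test on the binary design matrix. A minimal sketch of such a check is given below (Python/SciPy, assuming 4-connectivity; this is an illustration, not the thesis' Matlab implementation), where candidate layouts whose fluid elements do not form a single connected region would be rejected.

```python
import numpy as np
from scipy import ndimage

def fluid_is_connected(design, fluid_value=1):
    """True if all fluid elements of the binary design matrix form one 4-connected region."""
    fluid = (np.asarray(design) == fluid_value)
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])        # 4-connectivity
    _, n_regions = ndimage.label(fluid, structure=structure)
    return n_regions == 1

# Example: a 5 x 5 layout with two separate fluid pockets is rejected
layout = np.array([[1, 1, 0, 0, 0],
                   [0, 1, 0, 1, 1],
                   [0, 1, 0, 1, 0],
                   [0, 0, 0, 1, 0],
                   [0, 0, 0, 1, 1]])
print(fluid_is_connected(layout))   # False
```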
Robustness and reproducibility tests also showed that the GATO method can produce many "close-to-the-optima" solutions due to the insensitivity of the objective function to the global optimum at the fixed stopping criterion. Compared to conventional RSC heat sinks, GATO heat sinks consistently showed better thermal performance, as indicated by higher Nu numbers, lower Rth values, and better temperature uniformity at the heating surface. However, these improvements came at the cost of a higher pressure drop. Increasing the matrix resolution of the design area resulted in a lower Tpeak at convergence due to the generation of finer and more complex structures at the local level. But this requires higher computational resources and sets higher requirements on the fabrication technology for its realization. In chapter 5, RSC, OSC, and GATO heat sinks have been tested both experimentally and numerically. The velocity fields obtained by PIV measurement for the RSC prototype have been compared with the CFD calculation for laminar flow model validation. The temperature fields at the fluid-solid interface, obtained by IR measurements, have also been compared with the CFD calculation results, showing good agreement between each other. The GATO heat sink provided smaller areas of temperature hotspots and better temperature uniformity than RSC and OSC heat sinks at both the fluid-solid interface and the heating surface. GATO heat sink has shown at least 49.6% and 40.6% higher Nu numbers than that of the RSC and OSC heat sinks under a wide range of inlet Re numbers, and more significant performance improvement could be achieved under different input powers. The GATO heat sink has shown the lowest Po/Nu ratio within the testing input power and Re number range, more than 12% and 7.1% lower than those of RSC and OSC heat sinks. In chapter 6, various objective functions (𝑅𝑅𝑀𝑀𝑆𝑆𝑆𝑆 𝑇𝑇 * , Po/Nu and 𝜔𝜔 1 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * + 𝜔𝜔 2 𝑃𝑃 * ) have been introduced to the GATO approach. The thermal and hydraulic performances of the obtained optimal TO heat sinks are presented and compared. The GATO approach has shown its robustness and effectiveness in handling complicated expressions of objective functions. The optimization by minimizing a weighted sum of 𝜔𝜔 1 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * + 𝜔𝜔 2 𝑃𝑃 * could be brought both a lower temperature and a lower pressure drop with better Nu and Po numbers than optimizing an objective function only considering the peak temperature. Perspectives The newly proposed GATO approach has been first developed in this thesis, therefore, there are still lots of details that can be improved. Due to the limitation of computational resources, the proposed method hasn't been put into some extension usage during the thesis. Based on the results obtained in this thesis, several perspectives are proposed for short term or for long run: The future works in the short term:  The way to create new generations In this thesis, the necessary condition to generate a new individual is that the new individual resulting from the cross-over operation should have only one connectivity of all fluid elements, which would neglect a large number of diverse individuals. A method to improve this disadvantage is that first allow the creation of a binary matrix with more than one connectivity after cross-over, and then, replace the solid islands of less solid elements with fluid elements.  The GA parameter study To save time, the GA parameters in this thesis are referred to by other articles with similar optimization cases. 
The GA optimizations are different, and their parameters should be studied and defined before applying the approach on obtaining better optimal results or saving computation time. The parameters mentioned here include the individual number in one generation, the point number of cross-over, the probability of vertical and horizontal cross-over and mutation rate, the number of elite-keeping individuals, etc.  The post-processing of TO geometries with a filter Since the topological geometries are built based on binary matrices, the 90° corners of square-shaped fluid/solid elements are unavoidable. Therefore, for better looking and smoother boundaries to decrease pressure drop of the topological geometries. The solution could be a filter or an intermediate step between fluid and solid during the optimization process for the optimal geometry as a post-processing procedure.  The automation of the whole process In this study, the scripts to run GATO is coded in Matlab, which is commercial software. Therefore, it could not be applied in this HPC (CCIPL). As a result, the computation time is about two to three weeks for one optimization. While, this is not necessary if applying a script from an open-source software like python, it would cost about three days to complete a GATO case. Still, the time could be decreased if there are more cores and nodes available in an HPC.  GATO without the constraint of the constant void fraction All the previous GATOs were performed under the constraint of the constant void fraction during optimization. However, from the parameter study of void fraction, it was found that there was a lowest Tpeak at certain void fraction and Φ=1.0 will not lead to the best result. Therefore, a void fraction-free optimization could be concerned to explore the optimal void fraction to obtain the lowest Tpeak at a global level.  The improvement of straight channel size optimization on the optimization constrain and objective function In chapter 3, the channel inlets were optimized under the constraint of the constant inlet porosity. Nevertheless, if this constraint has been released, the optimal result could be better. Furthermore, the work of the multi-objective thermal-hydrodynamic performance can be also performed for the optimization of the inlet/outlet or header manifolds of a straight channel heat sink. The future works in the long run:  The extension of 3-D optimization with the GATO approach To simplify the model and due to the limitation of computation resources, the TOs in this thesis are performed based on a 2-D matrix. However, when increasing the thickness of the fluid domain in a heat sink, the geometry variation in the z direction cannot be ignored in optimization. Therefore, the 3-D TO should be applied in this case.  Turbulence flow with the GATO approach The CFD model applied in this thesis for the TO heat sink is under a laminar region. In reality, to achieve a better performance, turbulence flow can be encountered. Therefore, the interest of future work could be the TO heat sink with turbulent flow, which would be more difficult to choose a suitable turbulence model to capture in such complicated TO geometry, and with a more refined mesh, it will be a more time-consuming task.  Two fluid flows exchanger Unlike a single fluid heat sink, two or multiple fluids flows exchanger transfer the heat from one fluid to the other/another. The solid geometry separating two fluids plays an important role in heat transfer efficiency. 
The proposed GATO approach in this thesis could be a good choice to optimize the geometry of the exchanger.  Machine learning enhanced GATO A huge amount of computation time could be saved if machine learning (ML) is applied. The optimal TO geometries obtained in this study could be used as the training database for the ML optimizer. Suitable geometries (matrix) are expected to be proposed by the ML optimizer for varied boundary conditions such as heat flux, inlet mass flow rate etc.  Comparison of gradient-based TO with GATO by experimental and numerical study To compare the performance of heat sinks optimized by gradient-based and non-gradientbased TO approaches, experimental and numerical studies could be conducted in the future. Firstly, numerical optimizations using both approaches would need to be performed under the same objective, constraints and boundary conditions. Following this, the optimized heat sinks should be fabricated and tested under identical conditions. Finally, the performance of both heat sinks could be calculated and compared.  IR thermography of all the fluid-solid interface In this thesis, we only measured one fluid-solid interface's temperature. In the future, a heat sink could be fabricated with Infrared transparent material to measure the entire fluid-solid interface's temperature to obtain more global thermal information about the heat sink by IR camera. Title: Design and topology optimization of heat sinks for the cooling of electronic devices with multiple heat sources Keywords: heat sink, electronic cooling, multiple heat sources, flow distribution, topology optimization, genetic algorithm, infrared thermography Abstract: The heat-generating surface with multiple heat sources is frequently encountered in modern power electronic devices. Efficient cooling techniques are especially needed to prevent the overheating of these devices, so as to avoid consequences like performance deterioration, failure rate increase, reduced lifetime and safety threats. The main objective of this PhD thesis is to design and optimize the structure of heat sinks for single-phase convective cooling of a heat-generating surface under multiple-peak heat flux. Two optimization methods have been developed and applied in this study: one is the size optimization of channel inlets for tailoring the fluid flow distribution in straight channel heat sinks and another is the topology optimization of the global flow channel configuration based on the genetic algorithm (GATO). The impacts of design and operation parameters on the effectiveness of both optimization methods are numerically evaluated, with performance comparison to reference parallel straight channel heat sinks. After that, experimental validations of the proposed optimization approaches have been done by testing different heat sink prototypes using PIV and infrared thermography. Both the numerical and experimental results indicate that the GATO heat sink shows the best cooling performance under the tested conditions. Finally, different objective functions have been tested with the GATO method and the obtained results are further compared and discussed to showcase its effectiveness and robustness. 
AFLgqσ Surface area of the base wall [m 2 ] B Constant of heat flux formula [W • cm -2 ]C Specific heat [J • kg -1 • K -1 ] D Hydraulic diameter [m] E Energy [J]e The thickness of heat sink fluid domain [m] External Distance between the inlet and outlet [m] m Mass flow rate [kg• s -1 ] M Matrix m* Non-dimensional mass flow rate [-] MFT max Non-uniformity of maximum temperatures [-] N Number of heat peaks [-] n Total number of cell/point in CFD mesh or index of pixel in images from experimental data Nu Nusselt number [-] p Pressure [Pa] Pix Index of cell/point in CFD mesh or index of pixel in images from experimental data Heat flux [W • cm -2 ]ReReynolds number [-]Rth Thermal resistance [K • W -1 ] Sh Volumetric heat source [J • K -1 • m -3 ] Spatial spread of the heat peak [mm] λ Thermal conductivity [W • m -1 • K -1 ] μ Dynamic viscosity [kg • m -1 • s -1 ] ρ Density [kg• m -3 ] τ Shear stress [N • m -2 ] Figure 1 . 1 11 Figure 1.1 Basic principle for size (a), shape (b), and topology (c) optimizations [3]. Chapter 2 : 2 Design and optimization of heat sink for the thermal management of electronics with multiple heat sources: a literature review Chapter Summary Figure 2 . 1 21 Figure 2.1 Lithium-ion battery package with array arrangement of battery cells. (a) cylindrical type [8]; (b) prismatic type [9]; camera-measured temperature contours of the single battery cell (c) at the initial stage of discharging: t=250s and (d) at the end of discharging t=667s under a constant current [10]. Figure 2.2 (b)). Figure 2 . 2 22 Figure 2.2 LED package. (a) Array arrangement of LED chips; (b) LED layer structure [17]; (c) Temperature distribution on the surface of the microchannel heat sink [18]. Figure 2 . 2 Figure 2.3 (a) shows a real example of an IGBT module (Infineon FF225R17ME4) consisting of three sub-modules where diode chips and IGET chips are arranged diagonally.The heat generated by these chips should be timely evacuated by an air-cooling heat sink for example, in between them exist several intermediate layers (different materials) as shown in Figure2.3 (c). The temperature limit for an IGBT module ranges from 125 °C to 150 °C depending on the rated voltage[START_REF] Schlapbach | V IGBTs operating at 200°C? An investigation on the potentials and the design constraints[END_REF] Figure 2 . 3 23 Figure 2.3 Thermal management for IGBT module. (a) A real opened IGBT module [23]; (b) Simulated temperature distribution of IGBT module [24]; (c) Air-cooling of IGBT module through a heat sink [25]. Figure 2 . 4 24 Figure 2.4 Thermal management issue for CPU chips. (a) A SCM with an air-cooled heat sink; (b) A MCM with a liquid-cooled heat sink [28]; (c) Multiple-peak heat generation by a SCM [29,30]; (d) The Arrhenius plot of mean time to failure (MTTF) vs. junction temperature[31]. Figure 2 . 2 Figure 2.5 (a) describes how an HCPV works. Incident sunlight is reflected by the nonimaging dish concentrator to the crossed compound parabolic concentrator (CCPC) lens array which is made up of 8 × 10 CPV cells. The detailed structure of a single CCPC lens can be observed in the right part of Figure 2.5 (a). The CPV module was attached to the water-cooled heat sink via Artic Silver Adhesive. Figure 2 . 5 25 Figure 2.5 Ultra-high concentrator photovoltaics (UCPV) system cooled by a water-cooled multiplechannel heat sink. (a), Schematic diagram and working principle [42]; (b) Power map of an array of CPV cell modules [42]. Figure 2 . 6 26 Figure 2.6 Classification of thermal management techniques [2]. 
Figure 2 . 7 27 Figure 2.7 Direct cooling techniques. (a) jet cooling [43]; (b) spray cooling [43], (c) air cooling by fan [51]; (d) Immersion cooling for electronics [2]. Figure 2 . 8 28 Figure 2.8 Indirect cooling techniques. (a), Principle of phase-change cooling [52]; (b) Single-phase liquid cooling using a straight channel heat sink [54]; (c) a pin-fin structure heat sink for indirect air cooling [55] Figure 2 . 9 29 Figure 2.9 Different categories and variants of heat sink for liquid cooling. (a) Parallel straight channel[60]; (b) Parallel wavy channel [60]; (c) Pin-fin structure [65]; (d) Straight channel with cavities [66]; (e) Straight channel with ribs [67]; (f) Complex (hybrid) structure [68,69]. Figure 2 . 2 10); (2) design and structuration of the manifolds (headers); (3) shape modification of the parallel channels. Figure 2 . 2 Figure 2.10 Different arrangements of global inlet-outlet position for parallel straight channel heat sinks [72]. Figure 2 . 2 Figure 2.11 Design and structuration of the manifolds (headers) by (a) pin-fin structure [82] (b)baffle[START_REF] Liu | Numerical study on performances of mini-channel heat sinks with non-uniform inlets[END_REF] and (c) porous structure[START_REF] Fatahian | Improving the flow uniformity in compact parallel-flow heat exchangers manifold using porous distributors[END_REF]. Figure 2 . 2 Figure 2.12 Different types of heat sink cross-section optimization: (a) Single channel cross-section geometrical parameters optimization [101], (b) Single channel cross-section shape optimization [109], and (c) entire cross-section topology optimization [110]. Figure 2 . 2 Figure 2.13 Heat sink (a) pin-fins size and spacing arrangement [117], (b) fin-shape [118], and (3) fin TO optimization [119]. Table 2 . 1 A 21 Photo of prototype and TO geometry Chapter 3 : 3 Tailoring the fluid flow distribution in a parallel mini-channel heat sink under multiple-peak heat flux Chapter Summary Figure 3 . 3 Figure 3.1 shows the geometry and dimensions of the heat sink model used in this study.The core part of the heat sink is a cuboid solid monoblock, with overall dimensions of 54 mm in length (x-direction), 54 mm in width (y-direction), and 6 mm in height (z-direction). It has a U-type flow arrangement (cf. Figure2.10), with a single inlet and outlet tube (i.d.: 5 mm) aligned with the central line, perpendicular to the heating surface (and the cooling parallel channels). The length of the global inlet/outlet tubes is 18 mm and the distance between their centers is 45 mm. Between the global inlet and the outlet tubes, the fluid domain consists of three sections: the inlet distributing manifold, 16 parallel straight channels, and the outlet collecting manifold. Both the inlet and outlet fluid manifolds have a rectangular shape of 50 mm in length, 8 mm in width, and 2 mm in height. Mini channels with a rectangular crosssection of 1 mm in width and 2 mm in height are arranged in parallel, connecting the inlet manifold and the outlet manifold. The distance between the axes of two neighboring channels is 3 mm and the total length of the straight mini-channels is equal to 34 mm. For the convenience of description, these channels are indexed by i from 1 to 16 along the x-direction. The inlet of the mini-channels (2 mm in length) is subject to enlarging or narrowing by the optimization algorithm to adjust the mass flow rate of the cooling fluid flowing inside. 
Figure 3.1 shows the geometry and dimensions of the heat sink model used in this study.The core part of the heat sink is a cuboid solid monoblock, with overall dimensions of 54 mm in length (x-direction), 54 mm in width (y-direction), and 6 mm in height (z-direction). It has a U-type flow arrangement (cf. Figure2.10), with a single inlet and outlet tube (i.d.: 5 mm) aligned with the central line, perpendicular to the heating surface (and the cooling parallel channels). The length of the global inlet/outlet tubes is 18 mm and the distance between their centers is 45 mm. Between the global inlet and the outlet tubes, the fluid domain consists of three sections: the inlet distributing manifold, 16 parallel straight channels, and the outlet collecting manifold. Both the inlet and outlet fluid manifolds have a rectangular shape of 50 mm in length, 8 mm in width, and 2 mm in height. Mini channels with a rectangular crosssection of 1 mm in width and 2 mm in height are arranged in parallel, connecting the inlet manifold and the outlet manifold. The distance between the axes of two neighboring channels is 3 mm and the total length of the straight mini-channels is equal to 34 mm. For the convenience of description, these channels are indexed by i from 1 to 16 along the x-direction. The inlet of the mini-channels (2 mm in length) is subject to enlarging or narrowing by the optimization algorithm to adjust the mass flow rate of the cooling fluid flowing inside. Figure 3 . 1 . 31 Figure 3.1. Schematic view and dimensions of the heat sink model (unit: mm) • Steady-state, incompressible Newtonian fluid flow; • Negligible viscous heating effect; • Negligible radiation heat transfer; negligible heat loss to the environment; • No phase change of the cooling fluid. Figure 3 . 2 . 32 Figure 3.2. The base wall (heating surface) divided into 16 hypothetical planes Two peak heat flux case: N = 2 i x ih (mm) y ih (mm) B ih (W⋅cm -2 ) σ ih (mm) Q ih ( Figure 3 . 3 33 Figure 3.3 Two-peak and five-peak heat flux at the base wall. (3.26) and (3.27), respectively. Figure 3 . 4 . 34 Figure 3.4. Evolution of MF value along with the optimization step for two and five-peak heat flux cases Figures 3. 5 5 Figures 3.5(a) and (b) present the widths of channel inlets and the flow distribution characteristics of cooling fluid among mini-channels as a function of the optimization step for the two-peak heat flux case. From Figure3.5 (a), it can be seen that the largest channel inlet is located at the position where peak temperature appears for all steps (except for step 0). As the iteration step proceeds, the widths of the channel inlet for channel number 1-6 gradually enlarge, much broader than those for channel number 7-16 due to the location of the larger hot spot with higher temperatures. With the constraint of constant passage ratio, the inlet widths of channels 8-16 have all been narrowed, despite a (smaller) heat flux peak located in this region. Figures 3.5(a) and (b) present the widths of channel inlets and the flow distribution characteristics of cooling fluid among mini-channels as a function of the optimization step for the two-peak heat flux case. From Figure3.5 (a), it can be seen that the largest channel inlet is located at the position where peak temperature appears for all steps (except for step 0). 
As the iteration step proceeds, the widths of the channel inlet for channel number 1-6 gradually enlarge, much broader than those for channel number 7-16 due to the location of the larger hot spot with higher temperatures. With the constraint of constant passage ratio, the inlet widths of channels 8-16 have all been narrowed, despite a (smaller) heat flux peak located in this region. Figure 3 . 5 . 35 Figure 3.5. Channel inlet width evolution (a) and flow distribution (b) among mini-channels for step 0, step 2, step 5 and step 14 of the two-peak heat flux case Figures. 3.6 (a) and (b), analogous to Figure 3.5, are for the five-peak heat flux case.The channel inlet widths curve firstly tends to form the bathtub shape in step 2, and then gradually generates the three peaks shape at the middle and two edge sides in step 10. This is in line with the fact that the channel inlet widths in channels are modified in each step according to the temperature of hot spots on the heating surface. Even if the heat flux is centrosymmetric, one of the two highest heat flux peaks close to the distributing manifold has a lower temperature hot spot than the other near the collecting manifold. Before the coolant passes through the highest heat flux point close to the global outlet tube, it has already absorbed some quantity of heat in the straight channels. As a result, the inlet widths of the channels corresponding to the highest heat flux peak close to the outlet become the largest at the final step, as clearly shown in Figure3.6 (a). Figure 3 . 6 . 36 Figure 3.6. Channel inlet width evolution (a) and flow distribution (b) among mini-channels for step 0, step 2, step 4 and step 10 of the five-peak heat flux case Figure 3 . 7 37 Figure 3.7 Temperature cartography on the base wall of the heat sink at optimization step 0, step 2, step 5 and step 14 for the two-peak heat flux case (qavg=38.75 W•cm -2 ; vin=0.6 m•s -1 ) Figure 3 . 8 . 38 Figure 3.8. Temperature cartography on the base wall of the heat sink at optimization step 0, step 2, step 4 and step 10 for the five-peak heat flux case (qavg=38.75 W•cm -2 ; vin=0.6 m•s -1 ) Figure 3 . 9 . 39 Figure 3.9. 𝑻𝑻 𝒊𝒊 𝒎𝒎𝒎𝒎𝒎𝒎 in each monitoring plane as a function of the optimization step for (a) two-peak heat flux case and (b) five-peak heat flux case Figure 3 . 10 . 310 Figure 3.10. Maximum temperature evolution for two and five peaks heat flux cases Figure 3 . 11 . 311 Figure 3.11. Evolution of the total and sectional pressure drops of the heat sink as a function of the optimization step for (a) two-peak heat flux case and (b) five-peak heat flux case Figure 3 . 12 . 312 Figure 3.12. Thermal resistance as a function of mean channel Reynolds number for the optimized heat sink configuration (five-peak heat flux, vin = 0.6 m•s -1 , qavg = 38.75 W•cm -2 ) and for the conventional heat sink configuration with equal channel inlets Figure 3 . 13 . 313 Figure 3.13. Thermal resistance as a function of the total pressure drop of the heat sink by optimizing the channel inlet widths and by increasing the total mass flow rate of cooling fluid Figure 3 . 14 . 314 Figure 3.14. Thermal resistance under different average heat flux conditions for optimized heat sink configuration under qavg = 38.75 W•cm -2 , for optimized heat sink under corresponding areaweighted average heat flux (qavg = 24-45 W•cm -2 ) and for the equal channel inlet widths configuration Figure 4 . 
4 Figure 4.1 shows a representative schematic view of the heat sink geometry for this study.It has a single inlet (and outlet) straight channel of win (wout) in width and Lin (Lout) in length, both aligned with the center line of the heat sink. In between the inlet/outlet tubes is a rectangular cuboid having overall dimensions of Lmiddle in length (y direction), wdesign in width (x direction), and e in depth (z-direction). This core part of the heat sink consists of 3 sections: the inlet manifold (wdesign × Ldis), the middle flow channel domain (wdesign × Ldesign), and the outlet manifold (wdesign × Lcol). Note that all the flow channels are coplanar with the same channel depth of e, rendering it a pseudo-3D fluid domain adapted for design parametrization. Only the upper surface of the flow channel domain (heating surface with a surface area of Adesign) receives a non-uniform multiple-peak heat flux while all other surfaces enclosing the heat sink are considered adiabatic walls. In that way, the total amount of heat generated will be transferred and absorbed by the cooling fluid. The middle flow channel domain, also named the design domain, is constituted by both fluid and solid (cubic) elements at a given void fraction (Φ), each element having welement in width and Lelement in length. Their spatial distribution determines consequently the topology of the flow path and the cooling performance that should be optimized by GATO. One special case is the conventional straight channel configuration, with a channel width of wch and a separating wall thickness of wsw between two neighboring channels, which is shown in Figure4.1 (a) and considered as the reference design (abbreviated as "RSC" hereafter) for performance comparison. • Steady-state, in-compressible Newtonian and viscous fluid flow; • Negligible radiation heat transfer; negligible heat loss to the environment; • No phase change for the working fluid. Equations for mass and energy conservation could then be written as: 𝑚𝑚 𝑖𝑖𝑛𝑛 = 𝑚𝑚 𝑡𝑡𝑜𝑜𝑡𝑡 (4.1) 𝑄𝑄 = ∬ 𝑞𝑞𝑞𝑞𝑞𝑞 𝑑𝑑𝑒𝑒𝑠𝑠𝑖𝑖𝑎𝑎𝑛𝑛 = 𝑚𝑚 𝑖𝑖𝑛𝑛 �𝐶𝐶𝑝𝑝 𝑒𝑒,𝑡𝑡𝑜𝑜𝑡𝑡 𝑆𝑆 𝑒𝑒,𝑡𝑡𝑜𝑜𝑡𝑡 -𝐶𝐶𝑝𝑝 𝑒𝑒,𝑖𝑖𝑛𝑛 𝑆𝑆 𝑒𝑒,𝑖𝑖𝑛𝑛 � (4.2) Figure 4 . 1 . 41 Figure 4.1. Schematic view of the heat sink model for GATO and RSC: (a) special case of a parallel straight channel heat sink (RSC) and (b) design domain of the GATO heat sink represented by fluid or solid (cubic) elements. Figure 4 . 2 . 42 Figure 4.2. Flow chart of the GATO algorithm for global flow channel optimization in heat sinks Figure 4 . 4 Figure 4.3 shows the 3-D CFD model for the studied benchmark case of the heat sink subjected to GATO. The dimension and operation details are listed in Table4.1. The middle design domain has a square shape of 50 mm ⨯ 50 mm, represented by a binary matrix of 𝑀𝑀 50×50 . The fluid void fraction (Φ) for this benchmark case is set as 0.50, i.e. equal solid and fluid elements, and their distribution is subjected to optimization. Recall that for the convenience of simulation and optimization, the entire fluidic circuit has an identical channel depth of e=1 mm, Figure 4 . 3 . 43 Figure 4.3. Heat sink model for the benchmark study. (a) Geometry and boundary conditions; (b) example of solid and fluid domains. Table 4 . 3 . 43 Gaussian-shape two-peak heat flux𝑥𝑥 𝑖𝑖 (𝑚𝑚𝑚𝑚) 𝑦𝑦 𝑖𝑖 (𝑚𝑚𝑚𝑚) 𝐵𝐵 𝑖𝑖 (𝑊𝑊 • 𝑐𝑐𝑚𝑚 -2 ) 𝜎𝜎 𝑖𝑖 (𝑚𝑚𝑚𝑚) Figure 4 . 4 Figure 4.4 depicts the evolution of 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 𝑚𝑚𝑖𝑖𝑛𝑛 and 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 𝑚𝑚𝑒𝑒𝑑𝑑𝑖𝑖𝑎𝑎𝑛𝑛 amongst the 100 individuals along with the increasing number of GA generations. 
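Each GA individual is a binary matrix (here M_50x50) whose fluid elements must form a single connected flow domain at the prescribed void fraction Φ. The check below is a minimal sketch of that constraint using a 4-neighbour breadth-first search; the assumption that fluid enters through the first row and leaves through the last row is a simplification of the actual coupling to the distributing and collecting manifolds.

```python
from collections import deque
import numpy as np

def is_fully_connected(M):
    """True if every fluid element (1) of the design-domain matrix belongs to a
    single 4-connected cluster linking the inlet side (row 0, assumed) to the
    outlet side (last row, assumed)."""
    M = np.asarray(M, dtype=int)
    rows, cols = M.shape
    seeds = [(0, c) for c in range(cols) if M[0, c] == 1]
    seen = set(seeds)
    queue = deque(seeds)
    while queue:                                  # breadth-first flood fill
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and M[rr, cc] == 1 and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append((rr, cc))
    reaches_outlet = any((rows - 1, c) in seen for c in range(cols))
    all_fluid_connected = len(seen) == int(M.sum())
    return reaches_outlet and all_fluid_connected

# hypothetical 50x50 individual with a void fraction close to 0.5
rng = np.random.default_rng(0)
M = (rng.random((50, 50)) < 0.5).astype(int)
print(round(M.mean(), 3), is_fully_connected(M))
```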
More precisely, 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 𝑚𝑚𝑖𝑖𝑛𝑛 stands for the smallest value of 100 Tpeak in a generation while 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 𝑚𝑚𝑒𝑒𝑑𝑑𝑖𝑖𝑎𝑎𝑛𝑛 stands for the average value between the 50 th and 51 st of Tpeak values in a generation. It took 170 generations to meet the defined convergence criterion (Eq. 4.4), the value of 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 𝑚𝑚𝑒𝑒𝑑𝑑𝑖𝑖𝑎𝑎𝑛𝑛 being decreased from 365.1 K to 348.6 K. Figure 4 . 4 Figure 4.5 shows the flow channel configuration of the design domain (a), the corresponding velocity fields (b) at the middle channel depth (z*=-0.5), and the temperature field (c) on the heating surface for the top-ranked individual of generation 1, 10, 25, 45, 74 and 170.It is clearly illustrated that the flow topology evolves during the GA optimization to meet the objective function. In more detail, fluid elements tend to join together and form larger channels near the inlet and outlet manifolds. In contrast, pin-fin structures and small bifurcations and confluences are more likely to appear in the middle area where two heat flux peaks are located. Such a trend increases the fluid-solid contacting surface area and interrupts the formation of boundary layers, and enhances the cooling. The global structure of the flow paths is more or less established at generation 45 (e.g., a big cluster of solid elements downstream of the left heat flux peak) and inherited in the following generations. From then on, local details (small transversal flow paths) are still generated and adjusted along with the GA optimization. Figure 4 . 4 . 44 Figure 4.4. Evolution of 𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 𝒎𝒎𝒊𝒊𝒎𝒎 and 𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 𝒎𝒎𝒑𝒑𝒎𝒎𝒊𝒊𝒎𝒎𝒎𝒎 amongst the 100 individuals along with the increasing number of GA generations Figure 4 . 5 . 45 Figure 4.5. Top-ranked flow path configuration for the benchmark case at generation 1, 10, 25, 45, 74, and 170. (a) Fluid/solid elements distribution in the design domain; (b) Corresponding velocity field at mid-channel depth (z*=-0.5); (c) Temperature field at the heating surface. Table 4 . 4 44 The geometrical information of the RSC heat sinks introduced for performance comparison with GATO heat sinks. Figure 4 . 4 6 (b)). Figure 4 . 6 46 Figure 4.6 Influence of heat flux shape on the performance of GATO and RSC heat sinks. (a) optimized flow channel configuration; (b) velocity field (z*=-0.5) of the GATO heat sink; (c) temperature field at the heating surface of the GATO heat sink; (d) temperature field at the heating surface of the RSC heat sink. Figure 4 . 7 47 Figure 4.7 Influence of fluid void fraction (Φ) on the performance of GATO and RSC heat sinks. (a) optimized flow channel configuration; (b) velocity field (z*=-0.5) of the GATO heat sink; (c) temperature field at the heating surface of the GATO heat sink; (d) temperature field at the heating surface of the RSC heat sink. Figure 4 . 8 48 Figure 4.8 Influence of inlet fluid velocity (vin) on the performance of GATO and RSC heat sinks. (a) optimized flow channel configuration; (b) velocity field (z*=-0.5) of the GATO heat sink; (c) temperature field at the heating surface of the GATO heat sink; (d) temperature field at the heating surface of the RSC heat sink. Figure 4 . 9 49 Figure 4.9 Influence of matrix resolution (Mr⨯c) on the performance of GATO and RSC heat sinks. (a) optimized flow channel configuration; (b) velocity field (z*=-0.5) of the GATO heat sink; (c) temperature field at the heating surface of the GATO heat sink; (d) temperature field at the heating surface of the RSC heat sink. Figure 4 . 
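The two quantities tracked in Figure 4.4 can be reproduced directly from the 100 CFD-evaluated peak temperatures of a generation. The sketch below computes them and adds a simple stagnation test; the actual convergence criterion of Eq. (4.4) is not reproduced here, and the tolerance and window values are placeholders.

```python
import numpy as np

def generation_stats(tpeak_values):
    """Minimum and median of the individuals' peak temperatures in one generation.

    The median is taken as the mean of the 50th and 51st ranked values,
    as defined for the benchmark case with 100 individuals.
    """
    t = np.sort(np.asarray(tpeak_values, dtype=float))
    t_min = t[0]
    t_median = 0.5 * (t[len(t) // 2 - 1] + t[len(t) // 2])
    return t_min, t_median

def has_stagnated(median_history, tol=1e-3, window=10):
    """Illustrative stopping test (not Eq. 4.4): stop when the median peak
    temperature changed by less than `tol` K over the last `window` generations."""
    if len(median_history) < window + 1:
        return False
    return abs(median_history[-1] - median_history[-1 - window]) < tol
```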
4 2), fluid elements with low-velocity magnitude are still numerous in the optimized GATO flow configuration, especially at a high void fraction (cf. Figure 4 . 7 ) 47 . These fluid elements bring about unclear and ineffective fluid paths, which could be further eliminated by a post-treatment of the optimized flow geometry. As an example, a threshold value of 𝑣𝑣 𝑡𝑡ℎ𝑟𝑟𝑒𝑒𝑠𝑠ℎ𝑡𝑡𝑐𝑐𝑑𝑑 * = 0.0055 has been applied for this purpose, i.e., all fluid elements having a velocity magnitude smaller than 5.5% of vin are considered dead volume and thus will be eliminated. The flow path configuration and the velocity field at different Φ values before and after this post-treatment are shown in Figure4.10. Figure 4 . 4 Figure 4.10 Post-treatment of the optimal flow configuration for dead volume elimination. (a) Original flow circuit; (b) flow circuit after post-treatment Figure 4 . 4 Figure 4.11 The convergence curve of 𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 𝒎𝒎𝒑𝒑𝒎𝒎𝒊𝒊𝒎𝒎𝒎𝒎 and 𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 𝒎𝒎𝒊𝒊𝒎𝒎 for three runs of the GATO for the benchmark case. Figure 4 . 4 Figure 4.12 (c) compares the temperature contour on the heating surface obtained by 3 runs of GATO. While the global pattern is quite similar (a small angle between isotherms and global flow direction due to the heat flux peak asymmetry), a slight difference at a local level can still be observed, especially in the position of the Tpeak. Figure 4 . 4 Figure 4.12 Comparison of the three runs of the GATO for the benchmark case. (a) optimized flow configuration; (b) velocity field (z*=-0.5); (c) temperature field at the heating surface. Figure 4 . 4 Figure 4.13 Convergence curve about log10 value of the variation of the difference of the last two generations divided by the difference of the first two generations. Figure 4 . 4 Figure 4.C Mesh convergence study for the CFD simulation in OpenFoam. Figure 5 . 5 Figure 5.1 shows the schematic diagram of the test rig in LTEN for PIV measurement. It includes the fluid circuit (in blue) including the test section, and the PIV measurement facility Figure 5 . 1 51 The test-rig in LTEN for PIV measurement in this study. (a) schematic view; (b) photo view Figure 5 . 2 52 Figure 5.2 RSC prototype used for velocity field measurement. (a) schematic drawing and (b) a photo of the fabricated prototype with an extended entrance tube. Figure 5 . 3 53 Figure 5.3 The basic principle of PIV measurement for velocity field[START_REF] Willert | Particle Image Velocimetry: a practical guide[END_REF]. and 5.5, respectively. Figure 5 . 5 Figure 5.4 shows the validated pixel number in the measuring field among 2000 images.In most of the measuring fields, the validated pixel number is between 1900 -2000, showing a credible measurement. The spots with a lower number appear at the entrance of distributing manifold and the inlets of parallel channels, which are coherent with the values of the standard deviation of velocity vectors described below. Figure 5 . 4 54 Figure 5.4 Example of the validated pixel number in the testing field among 2000 images. Figure 5 . 5 Figure 5.5 shows the normalized standard deviation of velocity vectors (U* and V* normalized by the inlet velocity) of 2000 images in every measured pixel. It can be observed that the U* values (x-component) do not show a large deviation (Figure 5.5 (a)). For V* values (y-component), however, a quite big difference could be observed in the distributing manifold and at the enhancement of two middle channels (Figure 5.5 (b)). 
The reason could be the secondary flow and high-velocity magnitude at these regions which increase the probability of random particle motions. Another possible reason is particle aggregation: during the acquisition of 2000 images (200 seconds), the particles following the main fluid streams stagnate and accumulate in these areas. Therefore, the image of velocity vectors taken at a different time would be influenced by different amounts of particles, bringing the higher values of StdDev V*. Figure 5 . 5 55 Figure 5.5 The standard deviation of the velocity vectors depending on x component (a) and y component (b). Figure 5 . 6 56 Figure 5.6 Comparison of the velocity magnitude at the mid-depth of the channels. (a) the visualization window; (b) PIV results and (c) CFD results. (unit: 𝒎𝒎 • 𝒔𝒔 -𝟏𝟏 ) Figure 5 . 5 Figure 5.7 (b) shows the velocity magnitude profiles in the parallel channels (y=21.95 mm as the red line marked in Figure 5.6a). It can be observed from both CFD and PIV results that the two middle channels have the highest velocity peak due to their position facing the inlet port. The velocity peaks in the rest of the channels are much smaller. These observations are in echo with the velocity magnitude contours shown in Figure 5.6. Figure 5 . 7 57 Comparison of the velocity profiles at the entrance of distributing manifold (a) and among the parallel channels (b). Figure 5 . 8 58 The test rig in LTEN for IR thermography measurement. (a) schematic view; (b) photo view. Figure 5 . 9 59 Figure 5.9 Geometry and dimensions of the sandwiched-type test section for thermal measurement. (a) cover plate; (b) base plate; (c) assembled heat sink (d) Photo view of the RSC heat sink. Figure 5 . 5 Figure 5.11 Geometry and dimensions of the GATO heat sink for thermal measurement. (a) Schematic view and (b) photo view. Figure 5 . 5 Figure 5.12 The insulation of the test section for the thermal test. Figure 5 . 5 13 presents (a) the GATO heat sink CFD model and (b) the thermal boundary condition of input power equal to 120 W. Figure 5 . 5 Figure 5.13 GATO heat sink CFD model and thermal boundary condition of input power = 120 W. Figure 5 . 5 Figure 5.14 Comparison of the normalized fluid temperature contours obtained by IR thermography measurement result (a-c) and CFD simulations (d-f) for RSC, OSC, and GATO heat sinks, respectively. Condition: total input power of Qtot=120 W; volume flow rate of 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 (100 mL•min -1 ; Rein=474) Figure 5 .Figure 5 . 55 Figure 5.15 Comparison of temperature profiles in different channels of RSC heat sink. Condition: total input power of Qtot=120 W; volume flow rate of 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 (100 mL•min -1 ; Rein=474) Figure 5 . 5 Figure 5.18 shows the T* cartography on the heating surface (2 mm below the bottom wall of the fluid channels) of the three heat sinks, extracted from the obtained CFD results. The temperature hots pots due to the power input from the three heaters are visible. Figure 5 . 5 Figure 5.18 Comparison of the normalized temperature distribution on the heating surface (a) and outer sapphire window (b) of three heat sinks. Condition: total input power of Qtot=120 W; volume flow rate of 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 (100 mL•min -1 ; Rein=474). -5.21 and commented on below. presents the velocity contours in the x-y plane, velocity vectors of cross sections in the x-z plane, and the zoom-in figures from the previous cross-section of the CFD thermal model of three heat sinks. Figure 5 . 
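The validated-pixel count and the normalized standard deviations StdDev U* and StdDev V* discussed above are per-pixel statistics over the 2000 instantaneous PIV vector fields. A minimal sketch of their computation is given below, assuming that rejected vectors are stored as NaN in the stacked fields.

```python
import numpy as np

def piv_statistics(u_stack, v_stack, v_in):
    """Per-pixel statistics over a series of instantaneous PIV vector fields.

    `u_stack`, `v_stack`: arrays of shape (n_images, ny, nx); invalid vectors
    are assumed to be stored as NaN.  Returns the number of validated vectors
    per pixel and the normalised mean and standard deviation of each component.
    """
    n_valid = np.sum(~np.isnan(u_stack), axis=0)
    u_mean = np.nanmean(u_stack, axis=0) / v_in
    v_mean = np.nanmean(v_stack, axis=0) / v_in
    u_std = np.nanstd(u_stack, axis=0) / v_in     # StdDev U*
    v_std = np.nanstd(v_stack, axis=0) / v_in     # StdDev V*
    return n_valid, u_mean, v_mean, u_std, v_std
```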
5 Figure 5.19 Contour of velocity magnitude at the mid-depth of the fluid channels (z=1 mm) for three studied heat sinks. Condition: total input power of Qtot=120 W; volume flow rate of 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 (100 mL•min -1 ; Rein=474) Figure 5 . 5 Figure 5.20 Comparison of the velocity vectors at the channel cross sections of RSC, OSC, and GATO heat sinks (Channel mid-length at y=35 mm). Figure 5 . 5 Figure 5.21 Zoom-in images for the velocity vectors in the cross sections. Figure 5 . 5 Figure 5.22 (a) presents the Nu number of three heat sinks as a function of the inlet Re number. It can be observed that RSC and OSC heat sinks have similar Nu numbers (Nu=6 for RSC and 6.3 for OSC), and vary little with the Rein. In contrast, a much higher Nu number can be achieved for the GATO heat sink, ranging from 9.1 to 14.8 when Rein increases from 284 to 663. The boosting of Nu number for GATO heat sink is mainly attributed to the geometry Figure 5 . 5 22 Comparison of the Nu numbers of three tested heat sinks (a) Nu vs. Rein; (b) Nu vs. Qtot. Figure 5 . 23 523 Figure 5.23 Nu number as a function of pressure drop for RSC, OSC and GATO heat sinks. Figure 5 . 5 Figure 5.24 PEC values of OSC and GATO heat sinks under various inlet Re numbers. Figure 5 . 5 25 Comparison of the Po/Nu ratio for the three tested heat sinks. (a) Po/Nu vs. Qtot; and (b) Po/Nu vs. Rein. Figure 5 . 5 Figure 5.B1 Turbulent intensity (a) and turbulent kinetic energy (b) at the middle plane of the fluid domain under the boundary condition of 120 W input power and 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 volume flow rate (100 mL•min -1 ; Rein=474) of GATO heat sink (k-ε RNG turbulence model). Figure 5 . 5 Figure 5.B2 Velocity fields at the middle plane of the fluid domain of GATO heat sink, obtained by using a laminar model (a) and a 𝒑𝒑 -𝜺𝜺 RNG turbulent model (b) under the boundary condition of 120 W input power and 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 volume flow rate (100 mL•min -1 ; Rein=474). Figure 5 . 5 Figure 5.B3 Temperature fields at the testing surface of GATO heat sink, obtained by using a laminar model (a), experimental measurement (b) and a 𝒑𝒑 -𝜺𝜺 RNG turbulent model (c) under the boundary condition of 120 W input power and 1.667×𝟏𝟏𝟏𝟏 -𝟔𝟔 m 𝟑𝟑 •𝐬𝐬 -𝟏𝟏 volume flow rate (100 mL•min - percentage of objective functions used in the TO heat changers (b) and the percentage of the objective number concerned in one TO of heat exchangers. Figure 6 . 2 62 Figure 6.2 Iteration curve and the evolution of minimum value in every generation for RMSDT* minimization. Figure 6 . 3 63 Figure 6.3 Top-ranked flow path configuration for RMSDT* minimization at generation 1, 35, 50, 62, 91, and 172. (a) Fluid/solid elements distribution in the design domain; (b) Corresponding velocity field at mid-channel depth; (c) Temperature field at the heating surface. It should be noted that the Po number and the Nu number are calculated based on some global parameters, such as the global hydraulic diameter, the area-weighted average temperature at the solid and fluid interface, etc. It may not affect much on a local value, like peak temperature. It took 151 iterations to achieve the convergence recorded in Figure 6.4. The value of Po/Nu reduces from 17.23 to 5.25. Figure 6 . 4 64 Figure 6.4 Iteration curve and the evolution of minimum value in every generation for Po/Nu minimization. Figure 6 . 5 65 Figure 6.5 Top-ranked flow path configuration for Po/Nu minimization at generation 1, 6, 20, 40, 72, and 151. 
(a) Fluid/solid elements distribution in the design domain; (b) Corresponding velocity field at mid-channel depth; (c) Temperature field at the heating surface. 𝜔𝜔 1 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * + 𝜔𝜔 2 𝑃𝑃 * (6.3) Where 𝜔𝜔 1 and 𝜔𝜔 2 are two weighting factors that indicate the importance of the optimized objective. Here 𝜔𝜔 1 = 𝜔𝜔 2 = 0.5 shows the equal importance of the two indicators. The optimization has converged after 222 iterations as shown in Figure 6.6. The best result in 100 individuals of objective decreases from 1.91 to 1.26. The actual peak temperature decreased from 358.6 K to 350.3 K, and the pressure drop reduce from 240.05 Pa to 132.85 Pa. Figure 6 . 6 66 Figure 6.6 Iteration curve and the evolution of minimum value for 𝟏𝟏. 𝟓𝟓𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 * + 𝟏𝟏. 𝟓𝟓𝑷𝑷 * minimization. Figure 6 . 7 67 Figure 6.7 Top-ranked flow path configuration for peak temperature and pressure drop minimization at generation 1, 10, 25, 75, 120, and 222. (a) Fluid/solid elements distribution in the design domain; (b) Corresponding velocity field at mid-channel depth; (c) Temperature field at the heating surface. Figure 6 . 6 Figure 6.7 presents the evolution of the geometries, velocity fields at the mid-depth of fluid domains, and temperature contours at the heating surfaces of top-ranked individuals in generations 1, 10, 25, 75, 120, and 222. From the temperature contour of generation 1 to generation 10, the hot spot has diverged into two hot spots with lower temperatures and the maximum temperature point keeps on the same side. Not until generation 25, the maximum temperature point moves to the other side where the lower heat flux peak is located, and the global fluid paths have been formed. More fluid elements are arranged on the right side where the highest heat flux peak exists. Compared with the flow paths in generation 75, more horizontal flow paths are generated in the TO geometries of generation 120 and 222 due to the objective of minimizing the peak temperature. But compared to the benchmark case (section 4.3.1) for minimizing only the 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * Figure 6 . 6 Figure 6.8 records all the pressure drop and peak temperature values of individuals in 222 generations in 0.5𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * Titre: Conception et optimisation topologique des dissipateurs thermiques pour le refroidissement de dispositifs électroniques avec plusieurs sources de chaleur Mots clés : dissipateurs thermiques, refroidissement électronique, sources de chaleur multiples, distribution des fluides, optimisation topologique, algorithme génétique, thermographie infrarouge Résumé : Les dispositifs électroniques génèrent souvent de la chaleur en différents points. Sans un refroidissement efficace, les points chauds et la surchauffe entraînent une augmentation du taux de défaillance, une détérioration des performances et des menaces pour la sécurité des composants électroniques. L'objectif principal de cette thèse est de concevoir et d'optimiser la structure des dissipateurs de chaleur par convection monophasique en minimisant la température maximale pour apporter une solution innovante à ces problèmes. Deux méthodes d'optimisation ont été développées et appliquées : l'optimisation de la taille des entrées de chaque canaux enfin de distribuer de façon optimum les fluide dans le dissipateur de chaleur à canal droit d'une part, et l'optimisation topologique de la configuration sur l'ensemble des canaux d'écoulement basée sur l'algorithme génétique (GATO). 
L'influence des paramètres, tels que les valeurs des pics de flux de chaleur, la vitesse d'entrée, la résolution matricielle d'un domaine de conception et la fraction de fluide a été étudiée numériquement. Ensuite, les approches d'optimisation proposées ont été validées expérimentalement en testant un dissipateur thermique à canal droit de référence (RSC), un dissipateur thermique à canal droit optimisé (OSC) et un dissipateur thermique GATO. En outre, les indicateurs de performance complets obtenus à partir des modèles validés expérimentalement des trois dissipateurs thermiques ont été comparés. Enfin, l'influence de différents objectifs d'optimisation pour la méthode GATO a été étudiée. 3.8). ∑ �𝑤𝑤 𝑘𝑘+1,𝑖𝑖 -𝑤𝑤 𝑘𝑘,𝑖𝑖 � = 𝛾𝛾 ∑ �𝑆𝑆 𝑘𝑘,𝑖𝑖 𝑚𝑚𝑎𝑎𝑚𝑚 -𝑆𝑆 � 𝑘𝑘 𝑚𝑚𝑎𝑎𝑚𝑚 � = 𝛾𝛾�∑ 𝑆𝑆 𝑘𝑘,𝑖𝑖 𝑚𝑚𝑎𝑎𝑚𝑚 16 𝑖𝑖=1 16 𝑖𝑖=1 16 𝑖𝑖=1 𝛾𝛾 �∑ 𝑆𝑆 𝑘𝑘,𝑖𝑖 𝑚𝑚𝑎𝑎𝑚𝑚 16 𝑖𝑖=1 ∑ 𝑇𝑇 𝑘𝑘,𝑖𝑖 𝑚𝑚𝑚𝑚𝑚𝑚 16 𝑖𝑖=1 -16 × 16 -16 × 𝑆𝑆 � 𝑘𝑘 𝑚𝑚𝑎𝑎𝑚𝑚 � = Table 3 .1. Thermal-physical properties of solid and fluid used for simulation 3 Property Fitting correlation (temperature range: 293K -360K) Water Density (kg•m -3 ) ρ 469T + 43.486 (3.13) Dynamic viscosity μ = 2.414 × 10 -5 × 10 247.8 T-140 (kg•m -1 •s -1 ) [START_REF] Kays | Convective Heat and Mass Transfer, 4th Editio[END_REF][START_REF]Water -Thermal Conductivity[END_REF][START_REF]Engineering ToolBox, Water -Density, Specific Weight and Thermal Expansion Coefficient[END_REF] f = -2.604 × 10 -8 T 4 + 4.719 × 10 -5 T 3 -3.279 × 10 -2 T 2 + 9. Table 3 .3. Comparison of the pressure drop, the maximum temperature, and the velocity profile for different tested grids 3 Grid (million elements) 0.28 0.52 0.75 1.14 1.76 2.27 Pressure drop (Pa) 1231.3 1216.6 1206.4 1196.7 1191.4 1185.4 The maximum temperature at the heating surface (K) 384.8 383.9 384.2 383.8 384.0 384.4 Table 4 .1 Dimensions of the benchmark case 4 𝑤𝑤 𝑖𝑖𝑛𝑛 & 𝑤𝑤 𝑡𝑡𝑜𝑜𝑡𝑡 𝑤𝑤 𝑑𝑑𝑒𝑒𝑠𝑠𝑖𝑖𝑎𝑎𝑛𝑛 &𝐿𝐿 𝑑𝑑𝑒𝑒𝑠𝑠𝑖𝑖𝑎𝑎𝑛𝑛 𝐿𝐿 𝑖𝑖𝑛𝑛 & 𝐿𝐿 𝑡𝑡𝑜𝑜𝑡𝑡 𝐿𝐿 𝑑𝑑𝑖𝑖𝑠𝑠 & 𝐿𝐿 𝑐𝑐𝑡𝑡𝑐𝑐 𝑤𝑤 𝑒𝑒𝑐𝑐𝑒𝑒𝑚𝑚𝑒𝑒𝑛𝑛𝑡𝑡 &𝐿𝐿 𝑒𝑒𝑐𝑐𝑒𝑒𝑚𝑚𝑒𝑒𝑛𝑛𝑡𝑡 Geometric parameter (mm) Dimensions (in mm) Table 4 .7 Performance comparison between GATO and RSC heat sinks under different inlet velocities. 4 Vin (Rein) Nu Nu (GATO) (RSC) Table 4 .8 Performance comparison between GATO and RSC heat sinks under different matrix resolutions of the design domain. 4 Matrix Nu Nu resolution (GATO) (RSC) Table 4 .9 Φ, T* and P* of the GATO heat sink before and after post-treatment for dead volume elimination 4 Φ (before) Φ (after) T* (before) T* (after) P* (before) P* (after) 0.40 0.36 1.31 1.31 3.90 3.90 0.50 0.44 1.28 1.27 1.83 1.84 0.65 0.56 1.22 1.22 1.31 1.32 0.80 0.65 1.24 1.24 1.15 1.16 Table 6 .1 Different indicators of optimal heat sinks based on various optimization objectives 6 𝑻𝑻 𝒑𝒑𝒑𝒑𝒎𝒎𝒑𝒑 * P* Nu Po 𝑹𝑹𝑹𝑹𝑹𝑹𝑹𝑹 𝑻𝑻 * Tpeak 1.28 1.82 5.7 62.7 1,042 (Benchmark in Chapter 4) 𝑓𝑓 𝑡𝑡𝑗𝑗1 = 𝑅𝑅𝑀𝑀𝑆𝑆𝑆𝑆 𝑇𝑇 * 1.49 2.23 8.1 66.5 0.899 𝑓𝑓 𝑡𝑡𝑗𝑗2 = 𝑁𝑁𝑁𝑁 𝑃𝑃𝑐𝑐 1.66 1.30 9.6 51.0 1.048 𝑓𝑓 𝑡𝑡𝑗𝑗3 = 𝜔𝜔 1 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * + 𝜔𝜔 2 𝑃𝑃 * 1.31 1.21 6.5 50.4 1.075 ; Rein=474). ACKNOWLEDGEMENTS First and foremost, I would like to express my sincere gratitude to my supervisor Yilin FAN, for his unwavering support, guidance, and encouragement throughout my Ph.D. journey. His invaluable insights, constructive criticism, and patience have been instrumental in shaping the direction of my research and in pushing me to achieve my academic goals. 
Nomenclature To achieve the highest degree of freedom of heat sink design, this chapter presents the development of a genetic algorithm-based topology optimization (GATO) method for convective cooling of a heating surface under multiple-peak heat flux. In more detail, the middle area of the heat sink receiving heat flux is treated as the design domain and represented as a 𝑀𝑀 𝑟𝑟𝑡𝑡𝑤𝑤×𝑐𝑐𝑡𝑡𝑐𝑐𝑜𝑜𝑚𝑚𝑛𝑛 binary matrix. Each element in the matrix is considered either as fluid or as solid, and their allocation is optimized to minimize the peak temperature (Tpeak) at the heating surface of the heat sink under the constraint of constant void volume for the fully-connected fluid domain. For each optimization step, the fluid flow and temperature characteristics are obtained by CFD simulation using OpenFoam and the GA operations (selection, crossover, mutation, etc.) are applied. The impacts of design and operation parameters on the flow channel configuration optimized are evaluated, including the heat flux shape, the fluid void fraction, the inlet velocity, and the resolution of the design domain. The cooling performance of the GATO heat sink is also compared to the reference straight channel heat sink (RSC) under the same conditions. The results obtained show that (1) the proposed GATO method could successfully determine the optimal flow channel configuration of the heat sink, decreasing the Tpeak at the heating surface; (2) The optimized flow channel configuration depends on the design and operating parameters, of the effectiveness and robustness of the GATO method is clearly shown; (3) Compared to conventional RSC heat sink, the GATO heat sink provides a better cooling performance, with a reasonable increase of the pressure drop. Keywords of the Chapter: Heat sink design, Multiple-peak heat flux, Genetic algorithm (GA), Topology optimization (TO), Flow channel configuration, Cooling performance Appendix 4.A: Crossover and mutation operations After sorting in an ascending way of Tpeak values of 100 individuals, the top 2 to the top 51 individuals (the top one kept as elite) in the ranking list are chosen to run the crossover operation. One parent pair will give birth to 2 children, in this way, 100 individuals are created for the next generation. The crossover parent pair for each individual is the previous and next ranking individuals. The last-ranked individual (top 51) would crossover with the secondranked individual (top 2). In this study, the crossover operation for the binary matrix follows the study of [START_REF] Mondal | Optimal topology for consensus using genetic algorithm[END_REF], presented below in Figure 4.A1. Crossover can be done either horizontally (A1a) or vertically (A1b). Firstly, a random element inside the matrices of parents would be chosen; then for horizontal crossover, the matrices of parents would be separated into two parts (part one includes those whose row number is smaller than the chosen element and the rest for part 2. For the row where the chosen element is located, the elements whose column number is smaller than the selected element belongs to part one, and the rest belong to part two. After that, child 1 would be created by combining part one from parent 1 and part two from parent 2. The vertical crossover has similar procedures. They are clearly shown in Figure 4.A1. The probability of either horizontal crossover or vertical crossover is 50%. Each child is set to have a 20% probability to mutate. 
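A compact sketch of the two GA operators of Appendix 4.A is given below: the horizontal/vertical crossover just described (equivalent to a single-point crossover on the matrix flattened in row-major or column-major order) and the row/column string-swapping mutation detailed in the next paragraph. The repair loop that restores full connectivity and the void fraction after these operations is omitted from this sketch.

```python
import numpy as np

def crossover(parent1, parent2, rng):
    """Horizontal (row-major) or vertical (column-major) single-point crossover
    of two binary design matrices, each with 50 % probability.  Elements before
    the randomly chosen split element come from parent 1, the rest from
    parent 2; child 2 is built the opposite way."""
    shape = parent1.shape
    k = rng.integers(parent1.size)                    # flat index of the chosen element
    order = "C" if rng.random() < 0.5 else "F"        # horizontal vs vertical
    a = parent1.ravel(order=order)
    b = parent2.ravel(order=order)
    child1 = np.concatenate([a[:k], b[k:]]).reshape(shape, order=order)
    child2 = np.concatenate([b[:k], a[k:]]).reshape(shape, order=order)
    return child1, child2

def mutate(child, rng, p_mutation=0.20):
    """String-swapping mutation: with probability `p_mutation`, two randomly
    chosen rows (50 %) or columns (50 %) are exchanged.  Swapping whole strings
    keeps the fluid void fraction unchanged; connectivity must be re-checked."""
    child = child.copy()
    if rng.random() < p_mutation:
        if rng.random() < 0.5:                        # horizontal string swap (rows)
            i, j = rng.choice(child.shape[0], size=2, replace=False)
            child[[i, j], :] = child[[j, i], :]
        else:                                         # vertical string swap (columns)
            i, j = rng.choice(child.shape[1], size=2, replace=False)
            child[:, [i, j]] = child[:, [j, i]]
    return child

rng = np.random.default_rng(1)
p1 = rng.integers(0, 2, (50, 50))
p2 = rng.integers(0, 2, (50, 50))
c1, c2 = crossover(p1, p2, rng)
c1 = mutate(c1, rng)
```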
The mutation operation includes horizontal string or vertical string swapping mutation as shown in Figure 4.A2. To do that, two different rows (or columns) within the matrix would be randomly chosen, and their positions will be swapped to create new individuals. The probability of horizontal string swapping or vertical string swapping is set as 50%. It should be noted that the crossover and mutation operations would be repeated such that the full connectivity of the flow domain is satisfied. (the objective function) is chosen as the indicator for the mesh independency study. It can be seen that when the grid number is higher than 127 k, the variation of 𝑆𝑆 𝑝𝑝𝑒𝑒𝑎𝑎𝑘𝑘 * becomes rather small (<0.72%). Therefore, the mesh size of 127 k (marked in a black circle in Figure 4.C) was chosen considering both the calculation accuracy and time consumption. Chapter 5: Performance evaluation and comparison of heat sinks under multiple heat sources: an experimental and numerical study Chapter Summary This chapter presents experimental and numerical to evaluate and compare the performances of different heat sinks under multiple heat sources. Different heat sink prototypes are optimized, machined, instrumented, and tested, including uniform straight channel heat sink (RSC heat sink), optimized straight channel heat sink (OSC heat sink), and genetic algorithmbased topology optimization heat sink (GATO heat sink). The PIV method has been used to measure the velocity field of the RSC heat sink, while IR thermography has been applied to measure the temperature fields of the cooling fluid in three heat sinks. The visualization results obtained are compared with the CFD calculation, showing good agreements between each other. A systematic numerical study has then been performed to test three heat sinks under a wide range of operating conditions. The numerical results showed that the GATO heat sink can always achieve the best hydrodynamic and thermal performances among the three heat sinks. The effectiveness and robustness of the GATO approach for heat sink optimization have then been proven. Keywords of the Chapter: Heat sink; Multiple heat sources; Performance evaluation; Particle Image Velocimetry (PIV); Infrared thermography; Computation Fluid Dynamics (CFD). The GATO heat sink has been optimized following the approach presented in the former chapter 4, under the same operating conditions as the OSC heat sink (120 W; Rein= 474), The matrix resolution for the design domain is set as 𝑀𝑀 25×25 , indicating the dimension of each cubic element of 2 mm at the side. The void fraction for the fluid is set at Φ=0.48, identical to that for RSC and OSC heat sink for a fair comparison. More details on the optimization procedure are presented in Chapter 4), and the optimized geometry is shown in Figure 5.11 (a). The GATO flow channel has been machined by digital milling in LTEN, a photo view of the assembled test section for the thermal test is shown in Figure 5.11 (b). To prevent the test section from heat loss, the three heater jackets were inlaid inside three layers of rectangular refractory bricks stacked together (Figure 5.12 (a)), and the whole test section was wrapped by an insulating foam box filled with glass (Figure 5.12 (b)). In this way, the whole test section is well-insulated, except the surface of the sapphire window for IR thermography. 
Density (kg•m -3 ) ρ sa = 3980 (5.7) Comparison between IR measurements and CFD results  Temperature contours at the measuring surface Figure 5.14 presents the normalized temperature (T* defined in Eq. 4.13) field at the fluidsolid interface (inner surface of the sapphire disk) obtained by IR measurements and CFD simulations under the total input power of Qtot=120 W and volume flow rate of 1.667×10 -6 m 3 •s -1 (100 mL•min -1 ; Rein=474). For the RSC heat sink, the IR and CFD isotherms of the match well in the global trend, with the center hot spot facing the inlet port cooled down. The temperature isotherms are generally perpendicular to the flow direction in the first half of the design domain whereas two hot spots are visible near the collecting manifold. Some slight differences exist at the manifold entrance and the position of the largest heater (upper-left corner). The maximum temperature of the testing interface at 322.7 K (IR) and 325.2 K (CFD), respectively. Good agreement between IR and CFD results on the temperature field can also be observed for the OSC heat sink, the maximum temperature being 324.3 K (IR) and 325.4 K (CFD), respectively. One slight difference is that the isotherms of the experimental result tend to move more forward to the positive y direction, and the isotherm curves valued at 1.3 in two middle channels are sharper than those of the CFD results. In global, the shape of isotherms is rather similar to those of the RSC heat sink. complexity of GATO heat sink. Secondary flows and transversal flows were frequently generated under this kind of complex geometry, which prevented the establishment of thermal boundary layers and therefore enhanced the convective heat transfer as shown in section 5.4.1. Compared to RSC and OSC heat sinks, the thermal performance enhancement in terms of Nu by GATO is rather significant, especially at high Rein. At the section of higher Rein between around 550 -660, the slope of the Nu number curve of GATO heat sink seemed to be increased, and this may be caused by local turbulence flow (more details could be found in Appendix 5.B of this chapter). However, the existence of some local turbulences would further bring benefits for the cooling performance by enhanced forced heat convection. Whereas for RSC and OSC, developed laminar flow dominates the heat transfer in the parallel channels (except for the inlet region), resulting in a relatively stable Nu value under the tested Rein conditions. Therefore, the enhancement brought by the GATO heat sink compared to RSC or OSC heat sinks, indicated by the Nu increase, is more significant owing to the combined effects of geometry complexity and the local turbulences. Figure 5.22 (b) shows the Nu number for three heat sinks as a function of total input power Qtot. All three Nu curves show a slight increase when Qtot increases from 30 W to 180 W. This is mainly because of the slightly lower water thermal conductivity at a higher temperature due to the higher power input. The Nu numbers between RSC and OSC heat sinks are rather close, ranging from 5.6 to 6.4 and 6.0 to 7.1, respectively. Again, the GATO heat sink shows a much higher Nu number, ranging from 11.1 to 12.0 under the tested Qtot conditions. This indicates that the GATO heat sink, though optimized for minimizing the Tpeak of the heating surface, has the best global thermal performance in terms of the Nu number. Figure 5.23 shows a comparison of Nu numbers of three tested heat sinks as a function of the pressure drop (∆P). 
It may be observed that RSC and OSC heat sinks have relatively low Nu numbers, about 6.3 -6.4 for the RSC heat sink, and 6.5 -6.7 for the OSC heat sink, respectively. On the other hand, the Nu number of the GATO heat sink varies between 9.1 and 17.5 when ∆P increases from 128 Pa to 518 Pa. from the overlapped ∆P range, it can be seen that the Nu number of the GATO is much higher than that of RSC and OSC heat sinks at the same ∆P. This implies that with the same pumping power consumption/cost, the GATO heat sink always provides a much better thermal performance than RSC and OSC heat sinks. More evidence on this point will be given when discussing other performance indicators in the following sub-sections. Appendix 5.A: An example of steady state establishment The heat sink models were optimized under a steady state, therefore, the thermal measurement should be performed after the fluid flow and heat transfer reach the steady state. Figure 5.A1 shows an example of the thermal transient (OSC heat sink) from the beginning of the system heating up to the time when it reached a constant outlet fluid temperature. The power absorbed by the working fluid was then calculated. This record was based on the boundary conditions of 120 W total input power and 1.667×10 -6 m 3 •s -1 volume flow rate (100 mL•min - 1 ; Rein=474). The ambient temperature and the inlet temperature at the moment were 296.55 K and 294.85 K, respectively. Q' stands for the heat absorbed by the coolant, which was calculated by the difference between inlet and outlet temperature and the flow rate, and Q is the total input power (from the heaters). It took about 25 minutes to reach the steady state of the system. The ratio was about 98.1% and the estimated heat loss was about 2.3 W. This chapter will explore the influence of different objective functions that concern both hydrodynamic and thermal performance on the optimal flow path configurations obtained by our GATO approach. In particular, three different objectives, i.e., the root mean square deviation (RMSD) of the temperature field at the heating surface, the ratio between the heat sink's global Poiseuille number and Nusselt number, and the weighted-sum function of peak temperature at the heating surface and the global pressure drop, will be examined. It illustrates the evolution of heat sink geometry, flow channel configuration, and temperature distributions at the heating surface for three different optimizations. The results of the optimal heat sinks will be compared and evaluated in terms of different performance indicators. The results show that the proposed GATO method is effective and robust in handling complex objective functions. However, minimizing global performance indicators such as RMSD or Po/Nu may not necessarily result in lower peak temperatures or pressure drops. Minimizing the weighted-sum objective function that considers both the peak temperature and pressure drop not only achieves lower peak temperatures and pressure drops but also results in higher Nusselt numbers and lower Poiseuille numbers than other objective functions. Keywords of the Chapter: Objective functions, Weighted-sum objective function, Thermo-hydraulic performance, GATO Since the objectives of RMSDT* (purple point) and Po/Nu (redpoint) are global or averaged values, it is difficult to observe from the indicator of peak temperature. 
As for the pressure drop, the optimal point of the Po/Nu objective reaches a lower pressure drop than both the green and purple points. Nevertheless, the Tpeak value achieved with this optimization objective function is the highest. Table 6.1 presents the different performance indicators of the optimal heat sinks obtained with the various optimization objectives. As expected, whenever an indicator is itself the optimization objective, that indicator reaches its minimum value. This again reflects the effectiveness and robustness of the GATO approach developed in this study, which can handle different and complicated optimization problems. The table also shows that, as an objective describing the global performance, the minimization of f_obj2 (Po/Nu) yields the largest Nu and a rather low Po, whereas the minimization of f_obj1 (RMSDT*) brings neither a lower peak temperature nor a lower pressure drop. The weighted-sum objective function f_obj3 achieves both a lower peak temperature and a lower pressure drop, and at the same time gives better Nu and Po numbers than the single objective of Tpeak* minimization. In practice, it is therefore a good choice of optimization objective function, with adjustable values of the weighting factors ω1 and ω2.

List of references
04097359
en
[ "phys.meca.mefl", "phys.phys.phys-flu-dyn" ]
2024/03/04 16:41:18
2019
https://hal.science/hal-04097359/file/FP28-AERO2019-Muller.pdf
J.-C Roy NUMERICAL AND EXPERIMENTAL INVESTIGATION OF A THREE-AXIS FREE ROTATION WIND TUNNEL MODEL The current need of improving performance in terms of control and aerodynamic efficiency of ammunitions leads to the necessity of performing accurate flying geometry characterizations. Therefore, new investigation methods are developed in order to increase the aerodynamic knowledge. Free flight measurements experiments are the most common way to obtain dynamic aerodynamic coefficients. However, they do not always allow neither easy nor perfect measurements conditions. Currently ISL develops a stereovision method based wind-tunnel measurements methodology for investigation of a 3-axis free rotation model. This methods has been applied to the DREV-ISL reference model [1] [2] [3] [4] [5] in order to compare coefficients obtained by this method with numerical results. INTRODUCTION For concept validation and pitch damping coefficient measurements in wind tunnel, a three-axis free rotating test bench for projectiles, called MiRo [START_REF] Martinez | Motion measurement of a wind tunnel model by stereovision technique[END_REF], is under development at the ISL (French-German Research Institute of Saint-Louis). The final goal of this setup would be to become able to investigate the attitude of spin-stabilized models fitted with uncoupled actuators. Due to the mechanical complexity of such a device, the development is performed step by step. This paper presents the first step for which a methodology to obtain static and dynamic aerodynamic coefficients on a freerotating finned model has been developed. The measurement of the motion of the body during the wind tunnel test is performed with a stereovision technique, for which two high-speed cameras are employed simultaneously. Afterwards, for a stereoscopic purpose, both recordings are processed frame by frame and coupled by means of an image processing technique. At the end of the process, the attitude of the projectile during the wind tunnel test is reproduced numerically in order to identify the pitch damping moment coefficient with a curve fitting algorithm based on a theoretical motion model. To increase the confidence level of the obtained measurement, which can especially be affected by a cavity effect that is directly linked to the test bench principle, drag, lift and pitch moment terms have been compared for different configurations. Due to a very limited space inside the test bench and no optical access, the cavity effect investigation has exclusively been processed by RANS CFD simulations up to a 2° angle of attack. Previous wind tunnel and free flight results from literature have been taken as a comparative basis for the validation of both CFD results and MiRo wind tunnel measurements. DESCRIPTION OF THE EXPERIMENTAL SETUP THE MECHANICAL PRINCIPLE OF THE MIRO TEST BENCH The MiRo test bench [START_REF] Martinez | Mesure stréréoscopique de la déformée d'une aile battante[END_REF] consists in holding the model from the rear at its center of gravity while allowing the rotations to be free in the 3 directions in space. The roll motion is obtained by means of two bearings situated on both sides of a Cardan-like joint, which allows both pitch and yaw motions (See Fig. 1). kg.m². THE ATTITUDE DETERMINATION 2.2.1.THE STEREOVISION PRINCIPLE In order to record the motion in all directions of space, the experimental setup has to be able to capture the depth of the scene, like it is done by the human brain by combining information from both eyes. 
A one-eyed person, for whom the configuration is similar to a unique camera, is not able to see in 3D. For the stereovision technique that is employed herein, the same principle is recreated computationally. Therefore, the computer needs at least two cameras in order to obtain two different views of the scene. So as to follow the motion and reconstruct the flight, markers are placed on the model in order to be recognized by means of an image treatment process. Therefore, both image series are processed and coupled frame by frame. At the end of the process, the attitude of the projectile during the wind tunnel test is reproduced numerically in order to obtain the pitching moment derivative coefficient Cmα and the pitch damping moment coefficient Cmq. 2.2.2.MATHEMATICAL STEREOVISION MODEL The mathematical model which is employed for the stereoscopic determination of the attitude of the wind tunnel tested device is based on the pinhole camera model. The pinhole camera is a black box, that contains an aperture like a small hole and which reproduces an image after the passage of the light through the orifice. The mathematical model (Eq. 5) describes the relationship between the 3D coordinates of point M in space and its projection onto an image point m of an ideal pinhole camera as shown on the computer screen and for which the coordinates are expressed in pixels (See Fig. 4) [START_REF] Liu | Photogrammetry applied to Wind-Tunnel Testing[END_REF]. The model does not consider the geometric distortion. ( X c Y c Z c 1 ) = ( R T 0 1x3 1 ) ( X Y Z 1 ) (1) The transformation ② (See Eq.2) projects the object point M onto the CCD plane (𝑥 ⃗, 𝑦 ⃗) by perspective projection. This operation gives the projected point M'. In this equation, f is the focal length and s is a scale factor depending among other on the distance between the pinhole and the object point M. s ( x y 1 ) = ( f 0 0 0 0 f 0 0 0 0 1 0 ) ( X c Y c Z c 1 ) (2) The transformation ③ (See Eq.3) transforms the projected point M' from the CCD frame of reference (𝑥 ⃗, 𝑦 ⃗) into the image coordinates system (𝑢 ⃗⃗, 𝑣 ⃗). The obtained image point m which coordinates express its position in pixels on the recorded pictures that are displayed on the computer. Thereby, the pu and pv, horizontal and vertical pixel to length ratio, and (u0, v0), the location in the picture of the intersection between the CCD plane and the optical axis passing through the pinhole, are employed as shown in Eq. 3. ( u v 1 ) = ( p u p u cot(θ) u 0 +u v cot(θ) 0 p v / sin(θ) u v / sin(θ) 0 0 1 ) ( x y 1 ) (3) θ represents the possible non-orthogonality of the image rows and columns. In this case, we assume that the orthogonality is perfect, hypothesis which is valid with the cameras and lenses that are employed herein, which means θ=π/2. Therefore, Eq. 3 becomes: ( u v 1 ) = ( p u 0 u 0 0 p v u v 0 0 1 ) ( x y 1 ) (4) Finally, by combining Eq.1, Eq.2 and Eq. 4 the mathematical expression of the pinhole model is: s ( u v 1 ) = ( α u 0 u 0 0 0 α v u v 0 0 0 1 0 ) ( R T 0 1x3 1 ) ( X Y Z 1 ) (5) With α u =f.p u and α v =f.p v [START_REF] Liu | Photogrammetry applied to Wind-Tunnel Testing[END_REF] The intrinsic parameters (𝛼 𝑢 , 𝛼 𝑣 , u0 and v0) are specific to the lens of the camera, while the three Euler angles and the three components of the translation vector are the extrinsic parameters that express the camera position with respect to the object. This ten parameters are determined by means of a calibration process described in part 2.2.3. 
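Equation (5) condenses the three elementary transformations into a single projection from world coordinates to pixel coordinates. The following sketch evaluates it for one object point; the numerical intrinsic and extrinsic values in the example call are made up and would normally come from the calibration described in part 2.2.3.

```python
import numpy as np

def project_point(X_world, R, T, alpha_u, alpha_v, u0, v0):
    """Project a 3D object point onto the image plane with the pinhole model of
    Eq. (5): s.[u, v, 1]^T = I_c.[R T; 0 1].[X, Y, Z, 1]^T, assuming orthogonal
    pixel rows/columns (theta = pi/2) and no geometric distortion."""
    I_c = np.array([[alpha_u, 0.0, u0, 0.0],
                    [0.0, alpha_v, v0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
    E = np.eye(4)                                   # extrinsic transformation [R T; 0 1]
    E[:3, :3] = R
    E[:3, 3] = T
    X_h = np.append(np.asarray(X_world, dtype=float), 1.0)   # homogeneous coordinates
    s_uv = I_c @ E @ X_h
    return s_uv[:2] / s_uv[2]                       # divide by the scale factor s

# illustrative call: camera aligned with the world frame, made-up intrinsics
u, v = project_point([0.10, 0.05, 1.2], np.eye(3), [0.0, 0.0, 0.0],
                     alpha_u=5000.0, alpha_v=5000.0, u0=512.0, v0=512.0)
print(u, v)
```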
As the projectile is observed by two cameras and as each single marker creates an image point on both camera recordings, whose coordinates are noted U1 and U2, the stereoscopic relationship [10] between each single marker (object point) and its image is described by Eq. 5 and can be written in a more compact way:

\[ \begin{cases} s_1 U_1 = I_{C1}\,(R_1 X + T_1) \\ s_2 U_2 = I_{C2}\,(R_2 X + T_2) \end{cases} \qquad (7) \]

with

\[ U_{1,2} = \begin{pmatrix} u_{1,2} \\ v_{1,2} \\ 1 \end{pmatrix}, \qquad X = \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} \qquad (8) \]

In this case s_{1,2} is the scale factor related to camera 1 or 2, I_{1,2} is the intrinsic parameter matrix, R_{1,2} is the rotation matrix and T_{1,2} is the translation vector between the world reference frame and the camera reference frame. All these parameters except s_{1,2} are determined by the calibration process. Each relation of the system in Eq. 7 is the equation, in matrix form, of a line in 3D space. Thus, the coordinates of the unknown 3D point can be calculated because it represents the intersection of both camera axis lines (see Fig. 3). Eq. 7 is a system of six scalar equations with five unknowns: X (three scalar values), s_1 and s_2, which leads to an overdetermined system of equations:

\[ \begin{cases} -I_1 T_1 = I_1 R_1 X - s_1 U_1 \\ -I_2 T_2 = I_2 R_2 X - s_2 U_2 \end{cases} \;\Longleftrightarrow\; \begin{bmatrix} R_1 & -I_1^{-1} U_1 & 0_{(3)} \\ R_2 & 0_{(3)} & -I_2^{-1} U_2 \end{bmatrix} \begin{bmatrix} X \\ s_1 \\ s_2 \end{bmatrix} = - \begin{bmatrix} T_1 \\ T_2 \end{bmatrix} \qquad (9) \]

The latter can be rewritten in a compact form (Eq. 10) with P, a 6-component vector, and H, a 6x5 matrix:

\[ H \begin{bmatrix} X \\ s_1 \\ s_2 \end{bmatrix} = -P \qquad (10) \]

By performing a least-squares optimization process, it is possible to calculate the 3D coordinates of X as well as s_1 and s_2. The least-squares matrix solution can be expressed as:

\[ \begin{pmatrix} X \\ s_1 \\ s_2 \end{pmatrix} = -(H^T H)^{-1} H^T P \qquad (11) \]

2.2.3. CALIBRATION PRINCIPLE

A calibration step [15] [16] is required for the initialization of the mathematical pinhole model. This step is divided into two sub-steps: the determination of known 3D object points on the image, and an optimization algorithm employed to determine the intrinsic and extrinsic parameters. The first calibration step is performed thanks to an image of a 3D raw card, which is composed of three cube inner faces covered with a checkerboard pattern. The angular positioning of this latter, as well as the size of the squares, are known, so that the ten constant parameters of the pinhole model can be determined. The placement is chosen in such a way that both cameras perfectly see the three faces of the raw card. The algorithm estimates the position of the camera via the deformation of the checkerboard squares in the image. Three stages are necessary to achieve the calibration:
1. Manual selection of each face's extreme points, determination of the position of each checkerboard corner by the Harris corner detection algorithm [7], and estimation of the distance between the selected corners.
2. Deduction by direct linear transformation [17] of each control point's approximate position on the image and association with its respective 3D coordinates.
3. Determination of the parameter set by the Levenberg-Marquardt optimization method [19], [20].
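Once both cameras are calibrated, each marker seen in an image pair can be triangulated by solving the overdetermined system of Eqs. (9)-(11). A minimal sketch of that least-squares reconstruction is given below; a numerically equivalent np.linalg.lstsq call is used instead of forming -(H^T H)^{-1} H^T P explicitly.

```python
import numpy as np

def triangulate(U1, U2, I1, R1, T1, I2, R2, T2):
    """Least-squares triangulation of one marker from its two image points
    (Eqs. 9-11): solve H.[X; s1; s2] = -P for the world coordinates X.

    U1, U2: homogeneous pixel coordinates [u, v, 1]; I1, I2: 3x3 intrinsic
    matrices; R, T: rotation matrix and translation vector of each camera,
    all obtained from the calibration step."""
    U1 = np.asarray(U1, dtype=float).reshape(3)
    U2 = np.asarray(U2, dtype=float).reshape(3)
    H = np.zeros((6, 5))
    H[:3, :3] = R1
    H[3:, :3] = R2
    H[:3, 3] = -np.linalg.solve(I1, U1)    # -I1^-1 . U1
    H[3:, 4] = -np.linalg.solve(I2, U2)    # -I2^-1 . U2
    P = np.concatenate([np.asarray(T1, float), np.asarray(T2, float)])
    sol, *_ = np.linalg.lstsq(H, -P, rcond=None)   # least-squares solution of Eq. (11)
    X, s1, s2 = sol[:3], sol[3], sol[4]
    return X, s1, s2
```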
AERODYNAMIC COEFFICIENT DETERMINATION METHODOLOGY

In order to determine the static and dynamic aerodynamic coefficients, the angle of attack of the model has to be perturbed in order to obtain a damped oscillating attitude, of which the frequency and the amplitude evolution have to be analysed. The oscillating motion is specific to stable projectiles. If its amplitude is considered as small and the velocity of the flow as constant, the angular motion of the projectile can be described by the linearized equations of incidence as given by McCoy [21]. Additionally, these equations can be simplified thanks to the test conditions:
- No gravity effect
- Spin rate close to zero
- No angular variation of the velocity vector
Under these conditions, the angle of attack evolution as a function of time is a damped oscillating motion defined as:

\[ \alpha = e^{At}\,\alpha_{max}\sin(Bt + \phi_0) \qquad (12) \]

with

\[ A = \frac{\rho S V d^2}{4 I_y} C_{mq} \qquad (13) \]

\[ B = 2\pi f = \sqrt{-\frac{\rho S V^2 d}{2 I_y} C_{m\alpha}} \qquad (14) \]

I_y is the transverse inertia and α the aerodynamic angle of attack in the plane of incidence. In order to estimate the aerodynamic coefficients, a damped sine wave is superimposed on the measurement by means of a curve-fitting algorithm. The estimation of the aerodynamic coefficients is performed by identification of the parameters of the angle-of-attack evolution model:
1. Curve fitting of the model (12) on the measurement signal
2. Identification of the initial shift φ0, the period B and the damping factor A
3. Calculation of Cmα and Cmq from equations (13) and (14).

CAVITY EFFECT INVESTIGATION

A cavity had to be created at the rear part of the model for the Cardan-like joint to be held by the sting. This artefact may have an impact on the pressure distribution around and inside the projectile, and thus generate a modification of its attitude with respect to the free flight. To understand and quantify the impact of this cavity, an investigation has to be performed. Due to the complexity of the inner device, the lack of space and the absence of optical access, this study can only be performed numerically.

NUMERICAL INVESTIGATION DESCRIPTION

As illustrated in Fig. 7, in order to predict the influence of the cavity and the holding sting, Reynolds-averaged Navier-Stokes computations were performed at Mach 2 on the full and on the MiRo DREV-ISL projectile. The k-ω SST turbulence model has been employed and the wind tunnel conditions have been taken as boundary conditions, i.e. the pressure P = 51 122 Pa, the Mach number M = 2 and the temperature T = 166.7 K. The simulations were performed with Fluent v19.2 at 3 different angles of attack: 0°, 1° and 2°. For each geometry, a tetrahedral mesh was generated with the ANSYS meshing software and converted to a polyhedral one with the meshing mode of Fluent. Both meshes have the same cell repartitions except in the cavity and at the rear of the bases. The numbers of cells are 6 700 000 and 2 900 000 for the MiRo and reference geometries, respectively (see Figs. 8 and 9).

PRESSURE DISTRIBUTION ANALYSIS

In this part, the rotation centre around the MiRo holding sting has been set at the centre of gravity, i.e. at a distance of 2.6 calibres from the base. Figs. 10 and 11 give the numerical normalized pressure distributions along the projectiles for two angles of attack: 0° and 2°.
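Returning to the identification procedure of Eqs. (12)-(14), the sketch below fits the damped sine model to a pitch-angle history and converts the identified damping factor A and angular frequency B into Cmq and Cmα. The flow and geometric constants in the example call are illustrative placeholders (only the transverse inertia matches the value quoted for the tested model, and the signal constants echo the reported |A| = 185 rad/s and f = 29.6 Hz); a real measurement would first be trimmed to remove the initial perturbation transient.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, B, phi0, alpha_max):
    """Linearised pitching motion, Eq. (12): alpha(t) = exp(A t) * alpha_max * sin(B t + phi0)."""
    return np.exp(A * t) * alpha_max * np.sin(B * t + phi0)

def identify_pitch_coefficients(t, alpha, rho, S, V, d, Iy, p0):
    """Fit Eq. (12) to the measured pitch-angle history, then invert
    Eqs. (13) and (14) to obtain Cmq and Cm_alpha."""
    (A, B, phi0, alpha_max), _ = curve_fit(damped_sine, t, alpha, p0=p0)
    Cmq = 4.0 * Iy * A / (rho * S * V * d ** 2)              # inverted Eq. (13)
    Cm_alpha = -2.0 * Iy * B ** 2 / (rho * S * V ** 2 * d)   # inverted Eq. (14)
    return Cmq, Cm_alpha, A, B

# Illustrative use on a synthetic signal; reference area, diameter and air density are made up.
t = np.linspace(0.0, 0.05, 1000)
rng = np.random.default_rng(0)
alpha_meas = damped_sine(t, -185.0, 2 * np.pi * 29.6, 0.3, 8.0) + rng.normal(0.0, 0.05, t.size)
Cmq, Cma, A, B = identify_pitch_coefficients(
    t, alpha_meas, rho=1.07, S=7.9e-4, V=520.0, d=0.0317, Iy=9.3e-4,
    p0=(-150.0, 2 * np.pi * 28.0, 0.2, 6.0))
```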
Those data have been extracted from the wall in the inclination plane that passes through the vertical fins (which is also the plane of symmetry of the model). At 0° angle of attack (Fig. 10), the comparison of the numerical simulation results (blue curve) with already existing wind tunnel pressure measurements [START_REF] Levenberg | A method for the solution of certain problems in least squares[END_REF] (red and black markers) indicates that the forebody pressure prediction show a very good agreement with the experiments. For both geometric configurations (Fig. 7), the respective outer pressure profiles are identical either for the simulations at 0° angle of attack (Fig. 10, blue and orange curves) as for the ones at 2° angle of attack (Fig. 11, both upper and lower pressure profiles). Therefore, it can be considered that the cavity does not impact the outer pressure profile. This property may be due to the supersonic characteristics of the flow, and may not be valid for a transonic or subsonic regime, meaning that additional investigations will have to be done for these lower velocities. Figure 10. Outer pressure profiles on the projectiles with and without cavity at 0 degree angle of attack Figure 11.Outer pressure profiles on the projectiles with and without cavity at 2 degrees angle of attack In the cavity (see Fig. 12), as the symmetry of the simulated case would suggest, the difference between the upper and lower pressure profiles (blue and yellow curves) is close to zero at 0° angle of attack. However, small discrepancies, that can be observed around X/d = 5.5, show that the upper and lower pressure profiles are not perfectly identical, although simulations have shown a good convergence. This observation suggests that the domain close to aperture (domain between 5.5 and 6.0 calibres from the nose) is very turbulent, so that steady state RANS simulations may show some limitations in this specific region. Therefore, for a deeper phenomena investigation, unsteady simulations should be performed, but for the evaluation of the cavity effect on the complete projectile, the order of magnitude of these discrepancies with respect to the outer pressure profiles suggests that the steady state results are acceptable for this investigations. When the model takes some elevation, a difference between the upper and lower cavity pressure profiles (light blue and red curves of Fig. 12) is obtained, leading to an effect on the attitude of the projectile. However, by calculating the area between the upper and lower pressure curves on the projectile (light and dark blue curves of Fig. 11) and in the cavity (light blue and red curves of Fig. 12), the relative difference of 1% between both integrated pressure profiles suggests that the effect of the cavity on the attitude of the projectile is very low. As this quantity has been calculated from pressure profiles on the inclination plane, for a more precise quantification, additional investigations should be performed first by considering additional parameters (different Mach numbers, different centres of rotations, etc.) and second, by integrating the pressure over the surface. Furthermore, Fig. 12 shows that once a characteristic depth of almost 1 calibre (X/d = 5) is reached by the flow, the cavity pressure becomes almost constant. This means that the flow velocity becomes close to zero in the deep inner part of the cavity, allowing the pressure to become homogeneous. 
This flow pattern is completely different from a traditional wake flow, where the pressure profile should effectively be constant (which is not predictable with RANS simulations [START_REF] Forsythe | Detached-eddy simulation with compressibility corrections applied to a supersonic axisymmetric base flow[END_REF]), but where the total base pressure would be less important, due to a velocity that remains in the recirculation (confirmed by the rear pressure comparison of Fig. 13). This analysis shows that the cavity has a non-negligible effect on the total drag but if the test bench centre of rotation is placed on the projectile's axis, the wind tunnel model attitude will remain very close to the free flight one. All these conclusions can be validated by comparing the global aerodynamic coefficients from Tab.1.The normal force coefficient slope CNα and the pitch moment coefficient derivative Cmα that are obtained by CFD with and without cavity are almost identical and also very close to the wind tunnel measurements [START_REF] Levenberg | A method for the solution of certain problems in least squares[END_REF]. Only the drag force coefficient offset shows a value that decreases from 10% when the cavity and the holding sting are added. Complementary cavity effect investigations are still to be done on a complete projectile flight domain, especially in the subsonic regime. CFD with cavity CFD without cavity Wind tunnel [ WIND TUNNEL MIRO EXPERIMENTAL CAMPAING WIND TUNNEL TEST CONDITIONS The experiments were performed in ISL's trisonic blow down wind tunnel [START_REF] Délery | Aérodynamique expérimentale. Souffleries et méthodes de mesure[END_REF] (See Fig. 14), which allows to perform investigations in a Mach range from 0.5 to 4.5. The tests presented in this paper were performed at Mach 2 with a stagnation pressure of 4.1 bars and a total temperature of 298°K. OPTICAL SETUP The attitude of the projectiles was recorded with 2 Photron SA-Z high-speed cameras. They are able to record 20000 frames per second in a full frame format (1024x1024 pixels) with an exposure time of 0.5µs. In order to avoid the motion blur, which would decrease the detection precision during the post-treatment, a very low exposure time is necessary to ensure a clear image regardless of the model's attitude. For this reason, four Dedotech Dedolight 400D DLH400D professional cool lights have been used to illuminate the black model on which white dot markers have been put. Both camera lenses have a focal length of 105 mm. The cameras were placed at a distance of 1.2 meters of the model and spaced of 0.6 meter. In this configuration, as illustrated in Fig. 15, the angle between both optical axes is 30 degrees. Figure 15. Optical setup EXAMPLE OF NON-PERTUBED FLIGHT In order to check the stability of the DREV-ISL at Mach 2 and to test the mechanical behaviour of the test bench in supersonic flow, a first test has been performed without mechanical perturbation. As illustrated in Fig. 16, the stable centre of gravity position (10 -4 m displacement amplitude on the 3 axis) during the blow down shows that the mechanical test bench holds the stress in the test conditions. The noise on the projectile's attitude on Fig. 18 is a combination of algorithmic and mechanical noises. As awaited, the green and red curves on Fig. 17 indicate that there are almost no pitch and yaw motions (maximum angular rotation is order of ±0.3 degrees) but a random roll motion is obtained due to aerodynamic perturbations. 
This last observation indicates, that the friction of the bearings is low enough to allow the model to spin gently. AERODYNAMIC COEFFICIENT DETERMINATION The method described in part 2. 12. The first few milliseconds, are ignored so as not to take the initial movement produced by the perturbation device into account. For this example, the damping factor A (Eq. 13) is equal to 185 rad.s -1 and the angular frequency f to 29.6 Hz. Tab. 2 shows that the experiment presents a very good repeatability and both static and dynamic pitch moment coefficients are very close to the free flight identification. Moreover, it is very important to notice that the Cmq is a coefficient that is quite complicated to be measured. A relative difference of only 7% for the results presented in Tab. 2 is very promising for the MiRo setup development to be continued. MARKERS EVOLUTION For the first wind tunnel campaigns (tests #1 and #2), manually set dots have been employed as markers, and Gauss fittings were employed to detect their respective centre position. However, for a same dot, if its shape is not perfect, small displacements of the identified centre location can occur between consecutive image pairs. As shown on Fig. 22, this artefact affects the results by generating algorithmic noise. Only the noise level on the roll signal has not decreased. This observation can be explained by the thickness of the fin to be too low for the mesh to 3D markers scatter points fitting algorithm to perform a precise fit. In fact, when the algorithm tries to fit the mesh on the markers 3D points, if some calculated marker coordinates are not precise enough, the mesh projection can jump from one fin face to the other, generating a parasitized roll signal. However, a traditional filtering process (not done for all the results presented in this article) could be a simple solution if this remaining noise would generate difficulties for data exploitation. Figure 1 . 1 Figure 1. Motion device Figure 2 . 2 Figure 2. DREV-ISL model For this test, the centre of gravity was located at 180.7 mm with respect to the base and its inertia was 9.3.10 -4 Figure 3 . 3 Figure 3. Stereovision principle Figure 4 . 4 Figure 4. Pinhole model representation The relationship could be decomposed into three successive elementary transformations as shown in Fig. 5. For each reference frame (𝐴 ⃗ , 𝐵 ⃗⃗ , 𝐶 ⃗ ), the notation of the associated coordinates is A, B and C, respectively. Figure 5 . 5 Figure 5. Pinhole model decomposition The transformation ① (See Eq.1) transforms the physical 3D point M (object points) expressed in the world reference frame (𝑋 ⃗ , 𝑌 ⃗⃗ , 𝑍 ⃗ ) to the camera reference frame (𝑋 𝑐 ⃗⃗⃗⃗⃗ , 𝑌 𝑐 ⃗⃗⃗⃗ , 𝑍 𝑐 ⃗⃗⃗⃗⃗ ) thanks to a rotation matrix R3x3 (ri,j), whose elements are expressed with the Euler angles, and a translation vector T (tx ty tz). Figure 6 . 6 Figure 6. 3D raw card (left) and checkerboard corners detection (right) During the optimization, the code minimizes the distance between the detected control points on the images and their theoretical image position foreseen by the pinhole model, which depends on the intrinsic and extrinsic parameters. 7, in order to predict the influence of the cavity and the holding sting, Reynolds-averaged Navier-Stockes computations were performed at Mach 2 on the full and on the MiRo DREV-ISL projectile. The k-ω SST turbulence model has been employed and the wind tunnel conditions have been taken as boundary conditions, i.e. 
the pressure P = 51 122 Pa, the Mach number M = 2 and the temperature T = 166.7 K. Figure 7 . 7 Figure 7. MiRo wind tunnel DREV-ISL (left) and reference DREV-ISL (right) Figure 8 .Figure 9 . 89 Figure 8. MiRo wind tunnel configuration mesh at 2° angle of attack Figure 12 .Figure 13 . 1213 Figure 12. Pressure profiles in the upper and lower cavity at 2 and 0 degrees angle of attack Figure 14 . 14 Figure 14. ISL's trisonic wind tunnel Figure 16 .Figure 17 . 1617 Figure 16. Centre of gravity position without mechanical perturbation (test #1) 3 was applied on this experiment to determine the Cmα and Cmq aerodynamic coefficients. For repeatability investigation, three experiments were performed. In order to avoid a rotating unbalance the MiRo test bench was set up so that the Cardan joint centre of rotation was superimposed on the centre of gravity of the model. For this reason, the attitude submitted by the model is only due to the aerodynamic loads. Like shown on Fig. 19, high amplitude damped oscillations are observed. The polar diagram on Fig. 20 also shows that the model describes an almost planar motion. Figure 19 . 19 Figure 19. Euler angles obtained for the perturbed experiment. Figure 20 . 20 Figure 20. Angular polar curve obtained for the perturbed experimentAs soon as the attitude of the projectile is planar, the pitching moment derivative coefficient Cmα estimation is based on a Fourier transform performed on the pitch angle signal obtained in Fig.19. This procedure allows to estimate the signal frequency f, and to estimate the value of B (Eq. 14), which finally allows to calculate the Cmα. Figure 21 . 21 Figure 21. Pitch angle measurement and model curve fitting Fig. 21 shows the pitch angle measurement and its respective fit with the curve of the model given in Eq.12. The first few milliseconds, are ignored so as not to take the initial movement produced by the perturbation device into account. For this example, the damping factor A (Eq. 13) is equal to 185 rad.s -1 and the angular frequency f to 29.6 Hz. Figure 22 . 22 Figure 22. Euler angles measurements with dot detections In the third test campaign (results given for test #3), dots were replaced by Secchi markers and the Gauss fitting algorithm by Harris and Stephen's corner detection method [7]. This modification improved the centre detection and, in the same time, notably reduced the noise on the pitch and yaw signals like shown on Fig 24. Furthermore, by comparing Fig. 16 with Fig. 23, the noise amplitude on the centre of gravity position signal has been evaluated and has decreased by a factor of 10 -2 .Only the noise level on the roll signal has not decreased. This observation can be explained by the thickness of the fin to be too low for the mesh to 3D markers scatter points fitting algorithm to perform a precise fit. In fact, when the algorithm tries to fit the mesh on the markers 3D points, if some calculated marker coordinates are not precise enough, the mesh projection can jump from one fin face to the other, generating a parasitized roll signal. However, a traditional filtering process (not done for all the results presented in this article) could be a simple solution if this remaining noise would generate difficulties for data exploitation. Figure 23 . 23 Figure 23. Centre of gravity position with corner detections . C. & Dupuis A.D. (1993) Experimental and numerical investigation of a finned projectile at Mach 2, Canada, pp607-616. 2. Berner C. & Dupuis A. D. (1996). 
Wind tunnel investigation and analysis of the DREV-ISL

Table 2. Aerodynamic coefficients measurement results (free-flight reference values [1]: Cmα = -4.29, Cmq = -40)

          Frequency (Hz)   Cmα (MiRo)   Cmq (MiRo)
Test #1   29.6             -4.52        -
Test #2   29.86            -4.55        -37.1
Test #3   29.53            -4.45        -37.3
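The Cmα and Cmq values of Table 2 are obtained by fitting the measured pitch-angle history with the damped-sinusoid model of Eq. 12 (damping factor A, frequency f). A minimal SciPy sketch of that fitting step is given below on synthetic data; the model form is inferred from the description above and the numerical values are purely illustrative, not those of the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def pitch_model(t, theta0, damping, freq, phase):
    """Damped sinusoid assumed for the pitch-angle response (cf. Eq. 12-14)."""
    return theta0 * np.exp(-damping * t) * np.cos(2.0 * np.pi * freq * t + phase)

# Synthetic "measurement" standing in for the recorded pitch angle; the values
# below are illustrative only (the example in the text quotes A = 185 rad/s
# and f = 29.6 Hz for the real signal).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 3000)
theta_meas = pitch_model(t, 8.0, 20.0, 29.6, 0.0) + rng.normal(0.0, 0.05, t.size)

# Least-squares fit, ignoring the first few milliseconds as done in the paper.
mask = t > 0.005
popt, _ = curve_fit(pitch_model, t[mask], theta_meas[mask], p0=(5.0, 15.0, 29.0, 0.0))
theta0_fit, damping_fit, freq_fit, phase_fit = popt
print(f"damping factor = {damping_fit:.1f} 1/s, frequency = {freq_fit:.2f} Hz")
```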
04097367
en
[ "info.info-oh" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04097367/file/2022UPSLD046.pdf
First of all, I would like to thank Emilie Kaufmann and Panayotis Mertikopoulos for having accepted to review my thesis, as well as Vianney Perchet, Gilles Stoltz and Marta Soare for having accepted to be part of the jury of my thesis defense. Your feedback and the discussions we were able to have during my defense were very enriching. During these last four years, I had the chance to be supervised by Yann Chevaleyre, Rida Laraki, Igor Colin and Albert Thomas, who guided me throughout this thesis and taught me the craft of a researcher. For this, I would like to express my deep gratitude to them and thank them for all the help they gave me, both professionally and personally. This thesis, carried out partly within the MILES team of the LAMSADE laboratory at Université Paris Dauphine and partly within the Noah's Ark Paris team at Huawei, provided a working environment conducive to scientific and personal growth, and this was made possible by the PhD students and permanent members of these two teams. This is why I warmly thank them, and every moment (scientific or less scientific) shared with them will remain a very good memory. Finally, I was lucky to be surrounded by my family and friends, who supported and backed me when I needed it most. For this, I thank them infinitely. In the end, a part of this doctorate also belongs to you.

Abstract

We introduce a new model called Graphical Bilinear Bandits, where a learner (or a central entity) allocates arms to the nodes of a graph and observes, for each edge, a noisy bilinear reward representing the interaction between the two associated nodes. In this thesis, we study the best-arm identification problem and the maximization of cumulative rewards. For the former, the learner wants to find the allocation over the graph maximizing the sum of the bilinear rewards obtained through the graph. For the latter, during the learning process, the learner has to trade off exploring the arms in order to acquire an accurate knowledge of the environment against exploiting the arms that appear to be the best in order to obtain the highest reward. Whatever the learner's objective, the graphical bilinear bandit model reveals an underlying combinatorial problem that is NP-hard and prevents the use of any existing algorithm for best-arm identification (BAI) or for the maximization of cumulative rewards. For this reason, we first propose an α-approximation algorithm for the underlying NP-hard problem, and then tackle the two problems mentioned above. By efficiently exploiting the geometry of the bandit problem, we propose a randomized sampling strategy for the BAI problem with theoretical guarantees. In particular, we characterize the influence of the graph structure (e.g., star, complete or circle) on the convergence rate and propose empirical experiments that confirm this dependence. For the problem of maximizing the cumulative rewards, we present the first regret-based algorithm for graphical bilinear bandits using the principle of optimism in the face of uncertainty.
The theoretical analysis of the presented method bounds the α-regret by Õ(√T) and highlights the impact of the graph structure on the convergence rate. Finally, we demonstrate the validity of our approaches through various experiments.

R^d: Set of d-dimensional real-valued vectors
R^(d×d′): Set of d × d′-dimensional real-valued matrices
[v]_i: The i-th element of a vector v ∈ R^d
[A]_ij: The element at the i-th row and j-th column of A ∈ R^(d×d′)
S^+_d: The cone of all positive semi-definite matrices in R^(d×d)
⟨u, v⟩: The scalar product between u and v ∈ R^d: ⟨u, v⟩ ≜ u⊤v
∥v∥_p: The ℓ_p norm of a vector v ∈ R^d: ∥v∥_p = (Σ_{i=1}^d |[v]_i|^p)^(1/p)
∥v∥_A: The Mahalanobis norm of v ∈ R^d with a matrix A ∈ S^+_d: ∥v∥_A ≜ √(v⊤Av)
∥A∥: The spectral norm of a matrix A ∈ R^(d×d): ∥A∥ ≜ sup_{x: ∥x∥_2 = 1} ∥Ax∥_2
∥A∥_F: The Frobenius norm of a matrix A: ∥A∥_F = (Σ_{i=1}^d Σ_{j=1}^d |[A]_ij|^2)^(1/2)
|X|: Cardinality of the finite set X
S_X: The |X|-dimensional simplex: S_X ≜ {γ ∈ R^|X| | Σ_{x∈X} γ_x = 1}
I_d: The d × d identity matrix
N(·, ·): The Gaussian distribution
P(·): The probability of a random event
E[·]: The expectation of a random variable
1[·]: The indicator function: 1(E) = 1 if E is true, 0 otherwise
O(·): The Landau notation: f(T) = O(g(T)) ⇔ lim sup_{T→∞} f(T)/g(T) < ∞
o(·): The Landau notation: f(T) = o(g(T)) ⇔ lim_{T→∞} f(T)/g(T) = 0
∧: The logical AND
∨: The logical OR
⊗: The Kronecker product
vec(A): The vector obtained by concatenating the columns of A

Context & motivations

This thesis aims at solving centralized multi-agent problems that involve pairwise interactions between agents. Configuring antennas in a wireless cellular network [START_REF] Siomina | Automated optimization of service coverage and base station antenna configuration in UMTS networks[END_REF] is an example of those problems: the choice of a parameter for an antenna has an impact on both its own signal quality and that of each of its neighboring antennas due to signal interference. Likewise, in a wind farm, the adjustment of a turbine blade not only impacts its own energy collection efficiency but also that of its neighbors due to wind turbulence [START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF][START_REF] Van Dijk | Yaw-misalignment and its impact on wind turbine loads and wind farm power output[END_REF]. By considering each antenna or turbine blade as an agent, these problems can be modeled as a multi-agent multi-armed bandit problem (MA-MAB) [START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF] with the knowledge of a coordination graph [START_REF] Guestrin | Coordinated Reinforcement Learning[END_REF] where each node represents an agent and each edge represents an interaction between two agents. A multi-armed bandit problem (MAB) is a sequential decision problem where a learner must take an action (also called arm) at each iteration and gets a (possibly perturbed) associated reward that informs about the quality of the chosen action. Naturally, the learner does not know the distribution of the reward for each possible action. The learner may have very different goals, such as maximizing the rewards accumulated during the process, or, in a minimum number of tries and regardless of the accumulated rewards, inferring which action is the best to choose, i.e., the most rewarding. Hence, a multi-agent multi-armed bandit is the setting where several agents face a multi-armed bandit problem.
In the bandit literature, one can distinguish unstructured and structured bandits. While the unstructured bandit considers that playing an action and getting the associated reward does not allow to deduce anything about the distribution of rewards of other actions, the structured one includes the bandit settings where the rewards of the different actions share a common parameter [START_REF] Lattimore | Bandit Algorithms[END_REF]. For instance, a popular structured bandit setting is the linear bandit [START_REF] Auer | Using confidence bounds for exploitation-exploration trade-offs[END_REF] where the reward associated with any action is linearly dependent on an unknown parameter vector θ. Hence at a given time, choosing an action and receiving its associated reward gives information about θ and by definition also about the rewards of all other actions. Here, we are interested in such structured environments and on that matter we present a novel multi-agent structured bandit called Graphical Bilinear Bandits. The specificity of this environment lies in the interdependence of the rewards obtained by the neighboring agents in the graph and in the assumption that these rewards are bilinear, which appears to us as the natural extension of linear rewards when agents are pairwise dependent. Indeed, while MA-MAB problems have been studied in the setting of unstructured bandits with independent and dependent agents (see e.g., [START_REF] Agarwal | Multi-agent multi-armed bandits with limited communication[END_REF][START_REF] Amin | Graphical Models for Bandit Problems[END_REF][START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF][START_REF] Besson | Multi-player bandits revisited[END_REF][START_REF] Bistritz | Distributed multi-player bandits-a game of thrones approach[END_REF][START_REF] Heliou | Learning with bandit feedback in potential games[END_REF][START_REF] Landgren | Distributed cooperative decision making in multi-agent multi-armed bandits[END_REF][START_REF] Sankararaman | Social learning in multi agent multi armed bandits[END_REF][START_REF] Shahrampour | Multi-armed bandits in multi-agent networks[END_REF]101]), only the setting of structured bandits with independent agents has been explored (see e.g., [START_REF] Amani | Decentralized multi-agent linear bandits with safety constraints[END_REF][START_REF] Cesa-Bianchi | A gang of bandits[END_REF][START_REF] Chan | Parallelizing contextual linear bandits[END_REF]). Through this thesis and the papers it refers to, we want to lay a first stone to the building. Problem setting 1.2.1 Stochastic Graphical Bilinear Bandits Let G = (V, E) be the directed graph defined by V the finite set of nodes representing the agents and E the set of edges representing the agent interactions. We assume that if (i, j) ∈ E then (j, i) ∈ E. The graph could be considered as undirected but we assume that the interactions between two neighbors are not necessarily symmetrical with respect to the obtained rewards, so we choose to keep the directed graph to emphasize this potential asymmetry. For all agent i ∈ V , we denote N i the set of its neighboring agents. Let n = |V | denote the number of nodes, m = |E| the number of edges and let X ⊂ R d be a finite arm set where K = |X | denote the number of arms. The graphical bilinear bandit with a graph G and an arm set X consists in the following sequential decision problem: Stochastic Graphical Bilinear Bandits For each round t > 0, 1. 
Each agent i ∈ V chooses an arm x (i) t in X 2. Then, each agent i ∈ V receives a noisy bilinear reward for each of its neighbors j ∈ N i y (i,j) t = x (i)⊤ t M ⋆ x (j) t + η (i,j) t , (1.1) where M ⋆ ∈ R d×d is an unknown matrix, and η (i,j) t a zero-mean σ-sub-Gaussian random variable. The reward y (i,j) t reflects the quality of the interaction between the neighboring nodes i and j when pulling respectively the arm x (i) t and x (j) t at time t. The bilinear setting appears as a natural extension of the commonly studied linear setting to model the interaction between two agents. Note that this setting can be considered either in a decentralized setting where agents take actions without consultation with others agents or in the centralized setting where a central entity chooses the arms of all the agents as well as aggregates the obtained rewards and designs a global strategy for the agents in the graph. In this thesis, we only consider the centralized setting where a central entity manages all the agents, chooses at each time t the joint arm (x (1) t , . . . , x (n) t ) and then receives the associated rewards y (i,j) t for all (i, j) ∈ E. We illustrate the sequential decision problem at a given round t in Figure 1.1. The learner chooses (x (1) t , x (2) t , x (3) t ) The learner receives y (i,j) t , ∀(i, j) ∈ E Figure 1.1: Illustration of the learner's decision process at a given round t for a simple graph of three nodes Objectives As briefly mentioned earlier, there are two different main goals that a learner (here the central entity) may want to achieve in a bandit problem. Identifying the best joint arm. The first objective that we want to deal with in this thesis is where the learner is interested in finding within a minimum of rounds the best joint arm (x (1) ⋆ , . . . , x (n) ⋆ ) that maximizes the expected global reward over the graph: (x (1) ⋆ , . . . , x (n) ⋆ ) = arg max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . This objective implies that the central entity do not mind choosing a suboptimal joint arm (x (1) t , . . . , x (n) t ) at each time t as long as it gives enough information on the unknown parameter M ⋆ in order to construct an accurate estimate M. This objective is known as pure exploration or best arm identification [START_REF] Audibert | Best arm identification in multi-armed bandits[END_REF][START_REF] Bubeck | Pure exploration in multi-armed bandits problems[END_REF]. Maximizing the cumulative rewards. The second objective is the most commonly considered in the bandit literature where the learner wishes to maximise the sum of the (expected) rewards obtained over the rounds. In our setting, the central entity wants to maximize the cumulative expected global rewards given by T t=1 (i,j)∈E x (i)⊤ t M ⋆ x (j) t . While the first goal allows the learner to be in a pure exploration setting, regardless of the rewards obtained throughout the process, the goal of maximizing the cumulative rewards requires a trade-off between exploring the different possible arms to have an accurate estimate M of M ⋆ and exploiting the arms that seems to be the most optimal given M in order to obtain the maximum cumulative rewards. In both objectives (i.e., the best arm identification or the maximization of the cumulative rewards) and given an estimate M, the learner will have to solve at some point the following optimization problem max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ Mx (j) . 
(1.2) Indeed, for the best arm identification, this optimization problem must be solved at the end when the learner wants to return the best joint arm given the estimate M constructed during the learning procedure. For the maximization of the cumulative rewards, this optimization problem may need to be solved during the learning procedure when the learner wants to exploit and return the best estimated joint arm given its current knowledge of the environment which is the constructed estimate M. Solving this optimization problem is not trivial, so for both objectives we consider the common underlying objective of solving this problem. Outline of the thesis and contributions In Chapter 2, we introduce and formalize the stochastic multi-armed bandit problem and more particularly the stochastic linear bandit problem with the algorithms and guarantees that exist for the best arm identification problem and the maximization of the cumulative rewards. Indeed, many tools developed in the corresponding literature will be used to solve the problems related to the graphical bilinear bandits setting. Then we put our graphical bilinear bandits model in perspective with some multi-agent bandit models that use structured bandits and bandits in graphs. In Chapter 3, we tackle the underlying objective of solving the optimization problem given in (1.2). For this part we consider that the learner already has the best estimate M = M ⋆ . We show that the problem is NP-Hard and we give two α-approximation algorithms with α ≥ 1/2. In Chapter 4, we formalize the best arm identification problem relative to the graphical bilinear bandits. By efficiently exploiting the geometry of this bandit problem, we propose an allocation strategy based on randomized sampling with theoretical guarantees. In particular, we characterize the influence of the graph structure (e.g. star, complete or circle) on the convergence rate and propose empirical experiments that confirm these dependencies. In Chapter 5, we present a regret-based algorithm (i.e., an algorithm that aims to maximize the cumulative rewards) for graphical bilinear bandits using the principle of optimism in the face of uncertainty. Theoretical analysis of this new method yields an upper bound of Õ( √ T ) on the α-regret (a useful measure that we introduce in chapter 2) and evidences the impact of the graph structure on the rate of convergence. We show through various experiments the validity of our approach. Finally, in Chapter 6 we present the conclusion of this thesis and discuss the different research perspectives that the graphical bilinear bandits model offers. In this chapter, we first present the basics of the stochastic multi-armed bandit and the stochastic linear bandit. The different notions and algorithms that appear in the following sections do not cover the whole field and do not necessarily include the most recent or the most optimal ones since we only want to (i) introduce the reader to this domain and give the tools that allow a good understanding of the following chapters of this thesis and (ii) explain why the existing algorithms cannot be straightforwardly applied to the graphical bilinear bandit setting. For a more in-depth view of the field, we refer the reader to the book [START_REF] Lattimore | Bandit Algorithms[END_REF] which provides a detailed and comprehensive overview of bandit problems. Besides, in the last section of this chapter, we present some specific works that model structured multi-agent bandits. 
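To make the decision problem of Section 1.2.1 concrete, the sketch below simulates one round of the centralized graphical bilinear bandit: the learner picks a joint arm and the environment returns one noisy bilinear reward (1.1) per directed edge. The graph, the arm set and M⋆ are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: 3 nodes with every ordered pair as an edge, K arms in R^d,
# and an arbitrary hidden parameter matrix M_star.
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
d, K, sigma = 4, 5, 0.1
arms = rng.normal(size=(K, d))        # arm set X, one arm per row
M_star = rng.normal(size=(d, d))      # unknown parameter matrix

def play_round(joint_arm):
    """joint_arm[i] is the index (in `arms`) of the arm pulled by node i.
    Returns the noisy reward y^(i,j) of Eq. (1.1) for every edge (i, j)."""
    rewards = {}
    for (i, j) in edges:
        x_i, x_j = arms[joint_arm[i]], arms[joint_arm[j]]
        rewards[(i, j)] = x_i @ M_star @ x_j + sigma * rng.normal()
    return rewards

# One round with an arbitrary joint arm; the global reward is the sum over edges.
y = play_round([0, 3, 1])
print("noisy global reward:", sum(y.values()))
```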
Since the studied models are very different from ours, a direct comparison of the methods would not be appropriate; however, they bring perspective to our work. Furthermore, some of the methods and tools used in the cited papers may still be useful for graphical bilinear bandit problems.

An introduction to the stochastic multi-armed bandit problem

2.1.1 Motivations and formalization

The bandit problem was first introduced in [START_REF] Thompson | On the likelihood that one unknown probability exceeds another in view of the evidence of two samples[END_REF] to model sequential clinical trials where a learner chooses a drug for each patient and then observes the associated effects. Then, the stochastic
On the other hand, by exploiting, the learner gets to pull arms that appears to give the greatest reward; however, since other arms are disregarded, this could lead to less accurate estimation of the different expected rewards and thus to over exploiting arms that are actually suboptimal. Hence the learner has to do a tradeoff between exploration and exploitation in order to maximise the cumulative rewards. Wanting to maximize cumulative noisy rewards is the same as wanting to pull at each round the arm that gives the best expected reward µ ⋆ where µ ⋆ = max x∈X ∞ -∞ udP x (u). Indeed, the 2.1 An introduction to the stochastic multi-armed bandit problem learner does not control the randomness coming from the environment, but pulling the arms with the highest expected reward would give him, in expectation, the highest cumulative rewards. By defining µ x = ∞ -∞ udP x (u) the expected reward when pulling arm x, we have more formally that maximizing the cumulative rewards is equivalent to minimizing the pseudo-regret R(T ) = T µ ⋆ - T i=1 µ xt , The notion of pseudo-regret simply describes the difference between what the learner would have done with full information, i.e., pull the best arm during the T rounds, and what is actually done without initial information on the expected rewards of the arms. In [START_REF] Lai | Asymptotically efficient adaptive allocation rules[END_REF], the authors show that asymptotically a regret of order log(T ) is unavoidable. The objective for the learner is to have a pseudo-regret R(T ) such that lim T →∞ R(T ) T = 0 . This ensures that the learner chooses the optimal arm almost all the time when T tends to infinity. One of the most popular algorithm for stochastic multi-armed bandit that does the explorationexploitation tradeoff is the Upper Confidence Bound (UCB) algorithm [START_REF]Sample mean based index policies by o (log n) regret for the multi-armed bandit problem[END_REF][START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF] using the principle of Optimism in the Face of Uncertainty (OFU). The idea of the algorithm is to select at each time t the arm that seems to be the most optimistically optimal. The notion of optimism takes into account the value of the estimated reward of the arms but also the number of samples used for the estimations, in other words the precision of the estimations. Indeed, at each time t and for each arm x ∈ X , the learner has an estimate μx of the reward of x. Hence, instead of only pulling the arm that has the highest estimated reward max x∈X μx (i.e., only exploiting), it uses the upper confidence bound (UCB) on the estimate μx and chooses the arm with the highest UCB. The less an arm has been pulled the higher the UCB. Intuitively it means that at each round, the learner either exploits with high confidence or explores other arms that have been less pulled and that might give (optimistically) a better reward. A lot of versions exist for the UCB algorithm, we recall here the original method presented in [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF] in Algorithm 1. Algorithm 1: UCB1 Input : arm set X = {1, . . . , K} Pull each arm x ∈ X once and set μx the obtained reward and n x = 1; for t = K + 1 to T do The learner pulls the arm x t = arg max x∈X μx + We state the guarantee of the algorithm on the pseudo-regret in the following proposition that we borrow from [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF]. 
Proposition 2.1 (Theorem 1 in [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF]). For any K ≥ 1 and arm set X = {1, . . . , K}, if the policy UCB1 is run on X with the associated reward distribution ν = (P x : x ∈ X ) with support in [0, 1], then, for any number of plays T , its pseudo-regret is such that R(T ) ≤     8 x∈X µx<µ ⋆ ln T µ ⋆ -µ x     + 1 + π 2 3 x∈X µ ⋆ -µ x , (2.1) where for all x ∈ X , µ x = ∞ -∞ udP x (u) This proposition tells us that, given the sublinear bound on the regret, the learner is constantly improving his choice, otherwise the regret would have been of order T with the learner remaining stuck and drawing a suboptimal arm. The reader can refer to [START_REF] Cappé | Kullback-Leibler upper confidence bounds for optimal sequential allocation[END_REF] for instance for an improved and asymptotically optimal algorithm. As stated in the introduction, in bandit theory, one can distinguish unstructured bandits from structured bandits. A bandit is called unstructured when it is not possible for the learner to learn information about one arm by drawing another. In other words, we can define an unstructured bandit as one where ν is a product of distributions (that may be of different classes), so that drawing an arm gives a reward from the associated distribution without helping the learner understand the other distributions. In contrast, structured bandits such as linear bandits allow the learner to pull an arm and infer information about the rewards of other arms. The stochastic linear bandit problem Formalization We present the sequential decision problem for the linear bandit in the following: Stochastic Linear Bandit For each round t = 1, . . . , T , 1. the learner chooses an arm x t in a finite arm set X ⊂ R d with d > 1 2. the learner obtains from the environment the associated reward y t = ⟨x t , θ ⋆ ⟩ + η t (2.2) where θ ⋆ ∈ R d is an unknown parameter vector, and η t is a random variable sampled from a certain distribution When the learner draws an arm x ∈ X and receives a noisy linear reward ⟨x, θ ⋆ ⟩ + η t , it gives information about θ ⋆ and by extension about the other expected reward ⟨x ′ , θ ⋆ ⟩ for any x ′ ∈ X . This setting is particularly interesting because it might be enough for the learner to draw d arms that span R d to start having a reasonable estimate of θ ⋆ . Therefore, when the number of arms is large, the learner does not necessarily have to draw all the arms to get a good estimate of θ ⋆ and thus the estimated rewards for all the arms. Notice that this bandit can be formulated for instance with ν = {N (⟨x, θ ⋆ ⟩, σ) : x ∈ X } if we consider that the noise is a gaussian random variable. For the rest of this chapter, we consider that the noise terms are σ-sub-Gaussian random variables. Experimental designs serving the pure exploration setting When the objective of the learner is to identify the best arm x ⋆ = arg max x∈X ⟨x, θ ⋆ ⟩ within a minimum number of rounds, it is equivalent to look for the arm x ∈ X such that for all x ′ ∈ X , ⟨x -x ′ , θ ⋆ ⟩ ≥ 0 . (2.3) However, one does not have access to θ ⋆ , so we have to use its empirical estimate. For t > 0, we consider a sequence of arms x t = (x 1 , . . . , x t ) ∈ X t and the corresponding noisy rewards (y 1 , . . . , y t ). We assume that the noise terms in the rewards are i.i.d., following a σ-sub-Gaussian distribution. 
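Going back to Algorithm 1, the UCB1 rule takes only a few lines to implement. The sketch below assumes the classical exploration bonus sqrt(2 ln t / n_x) of the original UCB1 analysis, valid for rewards in [0, 1]:

```python
import math
import numpy as np

def ucb1(pull, K, T):
    """Minimal UCB1 (Algorithm 1) with the classical bonus sqrt(2 ln t / n_x),
    valid for rewards in [0, 1]; `pull(x)` returns a noisy reward for arm x."""
    mu_hat = np.zeros(K)          # empirical mean reward of each arm
    n = np.zeros(K)               # number of pulls of each arm
    for x in range(K):            # pull each arm once to initialise
        mu_hat[x], n[x] = pull(x), 1
    for t in range(K + 1, T + 1):
        x = int(np.argmax(mu_hat + np.sqrt(2.0 * math.log(t) / n)))
        y = pull(x)
        n[x] += 1
        mu_hat[x] += (y - mu_hat[x]) / n[x]   # incremental mean update
    return mu_hat, n

# Toy Bernoulli bandit: arm 2 is optimal, so its pull count should dominate.
rng = np.random.default_rng(0)
means = [0.3, 0.5, 0.7]
mu_hat, n = ucb1(lambda x: float(rng.random() < means[x]), K=3, T=5000)
print(n)
```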
Let θt = A -1 t b t ∈ R d be the solution of the ordinary least squares problem with A t = t s=1 x s x ⊤ s ∈ R d×d and b t = k s=1 x s y s ∈ R d . We suppose that A t is nonsingular for all t > 0. We first recall the following property. [START_REF] Soare | Best-arm identification in linear bandits[END_REF]). Let c = 2σ √ 2. For every fixed sequence x t , with probability 1 -δ, for all t > 0 and for all x ∈ X , we have Proposition 2.2 (Proposition 1 in x ⊤ θ ⋆ -x ⊤ θt ≤ c∥x∥ A -1 t log 6t 2 K δπ . As it is done in [START_REF] Soare | Best-arm identification in linear bandits[END_REF], let us consider a confidence set Ŝ(x t ) centered at θt ∈ Ŝ(x t ) and such that P θ ⋆ / ∈ Ŝ(x t ) ≤ δ, for some δ > 0. Since θ ⋆ belongs to Ŝ(x t ) with probability at least 1 -δ, one can stop pulling arms when an arm has been found, such that the condition (2.3) is verified for any θ ∈ Ŝ(x t ). More formally, the best arm identification task will be considered successful when an arm x ∈ X verifies the following condition for any x ′ ∈ X and any θ ∈ Ŝ(x t ): ⟨x -x ′ , θt -θ⟩ ≤ ∆t (x, x ′ ) , where ∆t (x, x ′ ) = (x -x ′ ) ⊤ θt is the empirical gap between x and x ′ . Corollary 2.1. For all t > 0, let Ŝ(x t ) be such that Ŝ(x t ) = θ ∈ R d : ∀x ∈ X , ∀x ′ ∈ X , ⟨x -x ′ , θt -θ⟩ ≤ c∥x -x ′ ∥ A -1 t log 6t 2 K 2 δπ . With probability 1 -δ, θ ⋆ is in Ŝ(x t ). Proof. Using the upper bound in Proposition 2.2 and replacing x by (x -x ′ ), we get the result. Then, the stopping condition can be reformulated as follows: ∃x ∈ X , ∀x ′ ∈ X , c∥x -x ′ ∥ A -1 t log 6t 2 K 2 δπ ≤ ∆t x, x ′ . (2.4) As mentioned in [START_REF] Soare | Best-arm identification in linear bandits[END_REF], by noticing that max (x,x ′ )∈X 2 ∥x -x ′ ∥ A -1 t ≤ 2 max x∈X ∥x∥ A -1 t , an admissible strategy is to pull arms minimizing max x∈X ∥x∥ A -1 t in order to satisfy the stopping condition as soon as possible. More formally, one wants to find the sequence of arms x ⋆ t = (x ⋆ 1 , . . . , x ⋆ t ) such that: x ⋆ t ∈ arg min (x 1 ,...,xt) max x ′ ∈X x ′ ⊤ t i=1 x i x ⊤ i -1 x ′ . (G-opt-X ) This is known as G-allocation (see e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]) and is NP-hard to compute [START_REF] Çivril | On selecting a maximum volume sub-matrix of a matrix and related problems[END_REF]104]. One way to find an approximate solution is to rely on a convex relaxation of the optimization problem (G-opt-X ) and first compute a real-valued allocation γ ⋆ ∈ S X such that γ ⋆ ∈ arg min γ∈S X max x ′ ∈X x ′ ⊤ x∈X γ x xx ⊤ -1 x ′ . (G-relaxed-X ) One could either use random sampling to draw arms as i.i.d.samples from the γ ⋆ distribution or rounding procedures to efficiently convert each component in γ ⋆ into an integer and thus constructed the optimal matrix A -1 t . Another way to visualize the effect of the constructed covariance matrix A -1 t is through the confidence ellipsoids which are of the form E = {θ ∈ R d , ( θt -θ) ⊤ A -1 t ( θt -θ) ≤ τ } where τ depends on the confidence level. The ellipsoid associated with a confidence parameter δ ∈ (0, 1) represents the region that contains the true parameter θ ⋆ with probability 1 -δ (see Figure 2.1). For a fixed confidence parameter δ, one would want to minimize the region covered by the ellipsoid to ensure that the approximated parameter θt is as close as possible to the real parameter θ ⋆ . 
Since the ellipsoid depends on A -1 t , one way to do so, instead of estimating θt by choosing all the arms x ∈ X (also called experiments), is to select the one that are the most statistically efficient. This problem is known as experimental design [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF] and one criterion that has been studied is the G-optimal design that minimises max x∈X x ⊤ A -1 t x. The G-optimal design minimizes the worst possible predicted variance and one can see that this objective coincides exactly with the one formulated in (G-opt-X ). To approximate the solution of (G-opt-X ), the authors of [START_REF] Soare | Best-arm identification in linear bandits[END_REF] give a greedy strategy that at each time t chooses the arm x t ∈ X such that θt θ ⋆ [θ] 1 [θ] 2 { θ ∈ R 2 | ( ˆθt - θ ) ⊤ A -1 t ( ˆθt - θ ) ≤ τ } Figure 2.1: The δ-confidence level ellipsoid x t = arg min x∈X max x ′ ∈X x ′⊤ A t-1 + xx ⊤ -1 x ′ . (2.5) The greedy strategy appears to be a fine way to approximate this problem because the number of pulls to satisfy the stopping condition is not known in advance, hence, converting directly the distribution γ to integers is not relevant. Moreover finding respectively the optimal sequence (x 1 , . . . , x t ) and (x 1 , . . . , x t+1 ) by rounding procedure and for a certain t with respect to the G-allocation strategy gives the same sequence modulo the extra x t+1 arm. In Algorithm 2, we share the method. Algorithm 2: Best-arm Identification in Linear Bandit : greedy G-Allocation strategy Input : arm set X ⊂ R d , confidence δ > 0 Set t = 0; A 0 = I d ; b 0 = 0; while (2.4) is not true do t = t + 1; x t = arg min x∈X max x ′ ∈X x ′⊤ A t-1 + xx ⊤ -1 x ′ The learner observes y t = ⟨x t , θ ⋆ ⟩ + η t ; θt = A -1 t b t ; end return arg max x∈X ⟨x, θt ⟩; The authors give a guarantee on the sample complexity of any algorithm that gives a β-approximation of the solution of (G-opt-X ): 13 Proposition 2.3 ([90], Theorem 1). If the G-allocation strategy is implemented with a β-approximate method and the stopping condition (2.4) is used, then with probability at least 1-δ, arg max x∈X ⟨x, θt ⟩ = x ⋆ and t ≤ 16c 2 d(1 + β) log 6t 2 K 2 δπ ∆ 2 min , where ∆ min = min x∈X \{x⋆} ⟨x ⋆ -x, θ ⋆ ⟩ and c = 2σ √ 2 For Algorithm 2, the greedy algorithm gives a β that depends on t, we note it β t and is equal to d+d 2 +2 2t . Optimism in the face of uncertainty for linear bandits (OFUL) For the objective of maximizing the cumulative rewards with a budget of T rounds, the existing methods use the same idea of optimism in the face of uncertainty but adapted to the linear setting. Indeed, at each time t we saw in the previous section that the learner can build an estimate θt . Although the objectives are completely different, one can nevertheless consider again the confidence ellipsoids of the form E = {θ ∈ R d , ( θt -θ) ⊤ A -1 t ( θt -θ) ≤ τ } as shown in Figure 2.1. Given a confidence level δ, one can construct this ellipsoid and tell with probability 1 -δ that the true parameter vector θ ⋆ is in it. Hence, a strategy using the optimism in the face of uncertainty would be to select the arm that gives the best reward with respect to the best θ ∈ E. More formally at each time t the learner would select x t = arg max x∈X max θ∈E ⟨x, θ⟩. This method has been presented in [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF] and we recall their approach in the following. 
Let us define θt = A -1 t b t , (2.6) where, A t = λI d + t s=1 x s x ⊤ s , with λ > 0 a regularization parameter and b t = t s=1 x s y s . We also define the confidence set C t (δ) = θ : ∥θ -θt ∥ A -1 t ≤ σ d log 1 + tL 2 /λ δ + √ λS , where we assume that for any x ∈ X , ∥x∥ 2 ≤ L and ∥θ ⋆ ∥ 2 ≤ S. We know from Theorem 2 in [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF] that with probability 1 -δ, θ ⋆ is in C t (δ) for all t ∈ {1, . . . , T }, and δ ∈ (0, 1]. Algorithm 3: OFUL Algorithm Input : arm set X for t = 1 to T do x t , θt-1 = arg max (x,θ)∈X ×C t-1 ⟨x, θ⟩; Obtain the rewards y t ; Compute θt as in (2.6) end return θt The pseudo-regret in the linear setting can be formulated as follows: R(T ) = T t=1 ⟨x ⋆ , θ ⋆ ⟩ -⟨x t , θ ⋆ ⟩ = T t=1 ⟨x ⋆ -x t , θ ⋆ ⟩ , (2.7) where we recall that x ⋆ = arg max x∈X ⟨x, θ ⋆ ⟩ Proposition 2.4 ([2], Theorem 3). Assume that for all t and all x ∈ X , ⟨x, θ ⋆ ⟩ ∈ [-1, 1]. Then with probability 1 -δ, the pseudo-regret of the OFUL algorithm satisfies R(T ) ≤ 4 T d log(λ + T L/d) √ λS + σ 2 log(1/δ) + d log(1 + T L/(λd)) , where for any x ∈ X , ∥x∥ 2 ≤ L and ∥θ ⋆ ∥ 2 ≤ S In the next section, we present the bilinear bandit which appears as the natural extension of the linear bandit that models the interaction between two agents in the obtained rewards. Bilinear bandits are linear bandits in a higher dimensional space In [START_REF] Jun | Bilinear Bandits with Low-rank Structure[END_REF], the authors introduce the Bilinear Bandit model where at each round t a learner chooses an arm x t from a finite arm set X ⊂ R d that contains K arms and a second arm x ′ t from another finite arm set X ′ ⊂ R d ′ that contains K ′ arms, and obtains an associated reward that is bilinear with respect to the two chosen arms and an unknown parameter matrix M ⋆ ∈ R d×d ′ . More formally, this sequential decision problem is defined as follows: Stochastic Bilinear Bandit For each round t = 1, . . . , T , 1. the learner chooses an arm x t ∈ X and x ′ t ∈ X ′ 2. the learner obtains from the environment the associated reward y t = x ⊤ t M ⋆ x ′ t + η t (2.8) where M ⋆ ∈ R d×d ′ is an unknown parameter matrix, and η t is a random variable sampled from a certain distribution This kind of setting can model different real life applications, such as drug discovery applications [START_REF] Luo | A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information[END_REF] or in the context of recommender systems as explained in [START_REF] Jun | Bilinear Bandits with Low-rank Structure[END_REF]. The bilinear reward can be written as a linear reward in a higher dimensional space: y t = vec x t x ′⊤ t , vec (M ⋆ ) + η t , (2.9) where for any matrix A ∈ R d×d , vec (A) denotes the vector in R d 2 which is the concatenation of all the columns of A. Therefore, for both the best arm identification problem and the cumulative rewards maximization problem, solving this bilinear bandit problem is equivalent to solving a linear bandit problem of dimension d × d ′ with an arm set Z = {vec (xx ′⊤ )|(x, x ′ ) ∈ X × X ′ } of K × K ′ arms. To directly apply the existing linear bandit algorithms to the bilinear bandit model, the learner must coordinate the choices of the two chosen arms (x t , x ′ t ) at time t, which is equivalent to choosing an arm z t = vec (x t x ′⊤ t ) ∈ Z. 
Although approaching a bilinear reward from a linear angle is useful, it is less trivial to use linear bandit algorithms for more complex models such as graphical bilinear bandits. Indeed, our model exposed in section 1.2 can be viewed as a bilinear bandit problem between each pair of neighbors, where each agent chooses an arm from a set of arms X (in our framework X = X ′ ). Thus, if the learner coordinates the choice of two neighboring agents and chooses the pair (x, x ′ ) ∈ X and thus its associated arm in Z, it constrains the choices related to all the pairs of neighbors (j, k) ∈ E since it is already composed of the arm x ′ associated with agent j. Due to the interdependencies of the bilinear bandit problems, it is not possible to consider the graphical bilinear bandits as simple linear bandits in parallel and directly use the linear bandit algorithms present in the literature. Another idea that one could have is to notice that since the unknown parameters matrix M ⋆ is common to all the edges (i, j) of the graph, the expected global reward at time t can also be written as the scalar product (i,j)∈E vec x (i) t x (j)⊤ t , vec (M ⋆ ) . Therefore, solving the best-arm identification problem or maximizing the cumulative rewards in the described graphical bilinear bandits reduces to solving the same problems in a global linear bandit. Although this trick allows the use of classical algorithms in linear bandits, the number of joint arms grows exponentially with the number of nodes, which makes these methods impractical. Of course, some tools are still very useful and relevant to solve the presented problems for the graphical bilinear bandits model and we use them in this thesis. Multi-agent bandits and combinatorial bandits Parallelizing contextual linear bandits Centralized multi-agent bandit problems where the learner has to choose the actions of all the agents at each round implies to parallelize the learning process on the agents. In the context of linear rewards where all the agents share the same reward function (i.e., , the same parameter θ ⋆ ), the authors in [START_REF] Chan | Parallelizing contextual linear bandits[END_REF] give a detailed analysis of the problem of maximizing the cumulative rewards and show that a sublinear regret in T can be reached but with an additional cost specific to the parallelization. More formally, they consider P agents, and at each round t, a context X (i) t ⊂ R is revealed to agent i and a central entity has to choose the arm x (i) t ∈ X (i) t for each agent i ∈ {1, . . . , P }. Then the learner receives for all agent i ∈ {1, . . . , P } the rewards y (i) t = ⟨x (i) t , θ ⋆ ⟩ + η (i) t where η (i) t is a σ-sub-gaussian random variable. Here the pseudo-regret is formulated as follows: R(T ) = T t=1 P p=1 ⟨x (i) ⋆,t , θ ⋆ ⟩ -⟨x (i) t , θ ⋆ ⟩ (2.10) where x (i) ⋆,t = arg max x∈X (i) t ⟨x, θ ⋆ ⟩. On can construct the estimate θt as follows: θt = A -1 t b t , where A t = λI d + t s=1 P i=1 x (i) s x (i)⊤ s , with λ > 0 a regularization parameter and b t = t s=1 P i=1 x (i) s y (i) s . We define also the confidence set C t (δ) = θ : ∥θ -θt ∥ A -1 t ≤ σ d log 1 + tP L 2 /λ δ + √ λS . The authors show that applying the OFUL algorithm on each agent at time t where θt-1 and C t-1 (δ) aggregate the information of all the draws and rewards of the previous rounds, gives the following bound on the regret: R(T ) ≤ Õ(d √ T P ) + O(dP log(T P )) (2.11) where Õ hides logarithmic factors. 
Their analysis shows that parallelizing agents that play the same contextual bandit and applying the OFUL algorithm for each of them with an aggregation of the information at the end of each round to construct θt and C t (δ) give an upper bound on the regret that is the sum of the nearoptimal regret of a single agent pulling T P arms plus a second term that represents the cost of parallelizing. The similarities between the graphical bilinear bandits and their setting is that if m = |E| is the number of edges in our considered graph, the central entity in our model plays m bilinear bandits in parallel that can be seen as m linear bandits in parallel. However, the main difference is that they consider independent agents whereas we assume interactions between the agents. In particular, the arm associated with each (bi-)linear bandit problem cannot be chosen independently at each round since some of them share a mutual information. So we have to deal with both the parallel aspect and the dependent aspect at the same time. Bandit problems in graphs Graphs are often used to bring structure to a bandit problem. But one can distinguish two main representations. Single agent. In [START_REF] Valko | Spectral Bandits for Smooth Graph Functions[END_REF] and [START_REF] Mannor | From bandits to experts: On the value of side-observations[END_REF] for instance, the arms are the nodes of a graph and pulling an arm gives information on the rewards of the neighboring arms. This kind of setting considers only one agent which is out of the scope of this thesis. The reader can also refer to [START_REF] Valko | Bandits on graphs and structures[END_REF] for an account on such problems. Multi-agents. As in [START_REF] Cesa-Bianchi | A gang of bandits[END_REF], each node is an instance of a linear bandit and the neighboring nodes are assumed to have similar unknown regression coefficients. More precisely, they consider an undirected graph G = (V, E) where each node i ∈ G has an associated parameter θ (i) ⋆ where the authors make the assumption that (i,j)∈E ∥θ (i) ⋆ -θ (j) ⋆ ∥ 2 2 (2.12) is small compared to i∈V ∥θ (i) ⋆ ∥ 2 2 . At each time t, the learner receives the index I t of one of the nodes in V as well as a set of contexts X (It) t where it has to choose a context x t ∈ X (It) t and then receives an associated linear reward of the form ⟨x t , θ (i) ⋆ ⟩ + η t . Note that at each round, only the instance of the linear bandit of node I t is used, the learner does not choose an arm for each node of the graph. However, given the assumption that near-by nodes have similar associated parameter vectors θ (i) ⋆ , the learner may still get some information on the behaviors of other nodes. Again, the main difference with our model is that the rewards of the nodes are independent, and although playing an arm at a given node may give information about its neighbor's rewards, the reward is not directly affected by the possible choices of its neighbors. Link with unstructured multi-agents bandits In the same way that a linear bandit with canonical arms can be seen as a classical multi-armed bandit 1 , hence loosing its structured property, the graphical bilinear bandit can also be seen as an unstructured graphical multi-agent bandit when the arm set X is the canonical basis X = (e 1 , . . . , e d ). As mentioned in the introduction, some works have studied this unstructured setting. 
In particular, the authors in [START_REF] Amin | Graphical Models for Bandit Problems[END_REF] consider a graph G = (V, E) where at each round t and for each node i in V = {1, . . . , n}, a learner receives a context c (i) t , then has to choose an arm x (i) t and finally gets a global reward F (x t , c t ) where x t = (x (1) t , . . . , x (n) t ) and c t = (c (1) t , . . . , c (n) t ). They assume that the reward can be decomposed as a sum of subfunctions that depend only on subset of neighboring nodes. More formally, given a collection of subset P ⊂ 2 V , we have F (x t , c t ) = P ∈P f P (x P , c P ) (2.13) where x P = (x (i) t ) i∈P , c P = (c (i) t ) i∈P and f P are unknown functions for any P ∈ P. Note that a subset P ∈ P contains only nodes that are neighbors one to another. The main similarity with our framework is that the global function can be decomposed as the sum of local rewards that depend on the arms of neighboring nodes, which highlights the dependencies between the nodes. The reader can also refer to [START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF] for such works. However, all the algorithms that we presented for the graphical bilinear bandit leverage the structured aspect where pulling an arm informs about the rewards of the other arms through the unknown parameter matrix M ⋆ , which is not done in the unstructured setting. Combinatorial bandits A combinatorial bandit consists of a sequential decision problem where a learner has access to a set of K arms, at each round t selects a subset of arms under some constraints and then receives the associated reward. More formally, consider the arm set X = {0, 1} d where an arm x ∈ X is often called a super-arm. When the i-th coordinate of x ∈ X equals 1 it means that the learner selects the i-th arm. An example of a constraint that the learner can have is to select an arm x ∈ X such that ∥x∥ 1 ≤ m for m > 0, which means that the learner can at most select a subset of m arms per round. The type of rewards associated with a super arm varies from one setting to another. For instance, let consider ν = {P 1 , . . . P K } where P i is the distribution associated with the i-th arm. At each round, let us denote X i,t the random variable drawn at time t from P i , and denote X t the vector in R K containing in its coordinates all the X i,t for all i ∈ {1, . . . , K}. The reward at time t associated with the super arm x t ∈ X is given by y t = ⟨x t , X i,t ⟩ . In this particular example, we can notice that the reward is linear i.e., the reward y t is the sum of all the rewards associated with the selected arms by the learner, but other forms of reward can be considered and the learner may not even know how it is calculated. Semi-bandit feedback. In what is called the semi-bandit feedback, the learner receives each of the rewards X i,t if the i-th coordinate of x t is equal to 1. The number of super-arms being exponential in K and when we do not have the knowledge on how the reward is computed, these kinds of problems can be hard to solve and the knowledge of an oracle is often assumed to return the optimal super-arm to play at a round t according to the estimates of the learner. More precisely, given the estimates μ = (μ 1 , . . . , μK ) of the expected reward associated with each arm, the learner asks the oracle which super-arm to play at time t. 
A relaxation in [START_REF] Chen | Combinatorial multi-armed bandit: General framework and applications[END_REF] considers an (α, β)-Approximation-oracle that returns with probability β, the α-optimal super-arm given the estimates. More formally, if opt μ is the value of the best superarm given μ, the (α, β)-Approximation-oracle returns a super arm that gives at least an expected reward equals to α • opt μ. Given that (α, β)-Approximation-oracle, the authors used the αβ-pseudo-regret [START_REF] Kakade | Playing games with approximation algorithms[END_REF] which is defined as follows : R(T ) = E T t=1 αβopt ⋆ -y t , where opt ⋆ is the optimal expected reward, and where the expectation is on the randomness of both the environment and the learner's policy. The combinatorial bandit setting can be viewed as a similar model to ours in that we solve a bandit problem with an exponential number of arms and the underlying problem of returning an estimated best joint-arm (which can be viewed as a super arm under some particular constraint) at each round is NP-Hard and may require knowledge of an oracle. Although we do not assume knowledge of an oracle, we instead design an α-approximation algorithm to solve the underlying problem (see Chapter 3). And as is the case in the combinatorial bandit literature, the use of α-approximation algorithms makes the notion of α-regret a relevant measure of the performance for our algorithms. 2 Moreover, we present algorithms that takes into account the graph structure, which has not been considered in the combinatorial framework. In this chapter, we focus on the optimization problem of finding the joint arm x (1) , . . . , x (n) that maximizes (i,j)∈E x (i)⊤ M ⋆ x (j) while knowing the parameter matrix M ⋆ . This problem being non-trivial, it is natural to understand the guarantees on the solutions with the full information of the matrix M ⋆ (we relax this assumption in the next chapters where the matrix M ⋆ will be considered as unknown). Hence the objective of this chapter is the following: Objective: Given the parameter matrix M ⋆ , design an algorithm that returns the allocation of arms (x (1) ⋆ , . . . , x (n) ⋆ ) that maximises (i,j)∈E x (i)⊤ M ⋆ x (j) We follow the notations we established in section 1.2.1. An NP-Hard problem Reduction to the max-cut problem We address the problem of finding the best joint arm given M ⋆ and we denote it as follows: (x (1) ⋆ , . . . , x (n) ⋆ ) = arg max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . (3.1) Notice that if the couple (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ is such that x ⋆ = x ′ ⋆ then finding the best joint arm is trivial and the solution is to assign x ⋆ to all nodes. Conversely, 23 if x ⋆ ̸ = x ′ ⋆ , the problem may be harder: according to the graph G, the optimal joint arm could either be composed exclusively of the couple (x ⋆ , x ′ ⋆ ) or be composed of other arms in X . One might want to use dynamic programming as in [START_REF] Amin | Graphical Models for Bandit Problems[END_REF] to solve this optimization problem, however in this particular setting, it would lead to use a non-polynomial time algorithm. Indeed, the following theorem states that, even with the knowledge of the true parameter M ⋆ , identifying the best join-arm (x (1) ⋆ , . . . , x (n) ⋆ ) is NP-hard with respect to the number of nodes n. Theorem 3.1. Consider a given matrix M ⋆ ∈ R d×d and a finite arm set X ⊂ R d . 
Unless P=NP, there is no polynomial time algorithm guaranteed to find the optimal solution of max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . Proof. We prove the statement by reduction to the Max-Cut problem that is NP-Hard itself. Let G = (V, E) be a graph with V = {1, . . . , n}. Let X = {e 0 , e 1 }, where e 0 = (1, 0) ⊤ and e 1 = (0, 1) ⊤ . Let M ⋆ = 0 1 1 0 . For any joint arm assignment x (1) . . . x (n) ∈ X n , let F ⊆ E be defined as F = i : x (i) = e 1 . Note that (i,j)∈E x (i)⊤ M ⋆ x (j) = (i,j)∈E 1 x (i) ̸ = x (j) = 2 × (i,j)∈E 1[i ∈ F, j / ∈ F ], where 1[•] is the indicator function. The assignment x (1) , . . . , x (n) induces a cut (F, V \F ), and the value of the assignment is precisely twice the value of the cut. Thus, if there were a polynomial time algorithm solving our problem, this algorithm would also solve the Max-Cut problem. Hence, given the true parameter matrix M ⋆ , the learner is not guaranteed to find in polynomial time the joint arm x (1) ⋆ , . . . , x (n) ⋆ maximizing the expected global reward. In the next sections, we give polynomial time approximation algorithms that have guarantees on the returned expected global reward with respect to the optimal one. Approximation algorithm and guarantees Given the true parameter M ⋆ , the objective is to design an algorithm that returns a joint arm x (1) , . . . , x (n) such that its associated expected global reward y = (i,j)∈E x (i)⊤ M ⋆ x (j) has the guarantee to be close to the optimal expected global reward y ⋆ = (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ . In other words, we want to find a approximation parameter 0 < α ≤ 1 such that, y ≥ αy ⋆ . Although the optimization problem we seek to solve can be found in the literature on Markov Random Fields when dealing with a multi-labeled graph (see e.g., [START_REF] Ajanthan | Optimization of Markov random fields in computer vision[END_REF]), to the best of our knowledge, algorithms that give an approximation ratio on the optimal solution have not been explored. Assumption 3.1 (Positive rewards). A classical assumption in the linear bandit literature is that expected rewards are positive. For any (x, x ′ ) ∈ X 2 , since the bilinear reward x ⊤ M ⋆ x ′ can be formulated as a linear reward ⟨vec (xx ′⊤ ), vec (M ⋆ )⟩ and that in the rest of this thesis we will use tools from the linear bandit literature, we make the same assumption. We consider that for any (x, x ′ ) ∈ X 2 , the associated expected reward x ⊤ M ⋆ x ′ is positive, x ⊤ M ⋆ x ′ ≥ 0. The approach we present in this section is first to consider the problem locally, i.e., at the edge level. Indeed, let us consider two neighboring nodes i and j in V and only the expected rewards related to these nodes, which are x (i)⊤ M ⋆ x (j) and x (j)⊤ M ⋆ x (i) . By summing those two quantities, 1 we get j) which represents the expected reward between the two neighbors (i) and (j). A local strategy that the central entity should carry out is thus to allocate (x x (i)⊤ M ⋆ x (j) + x (j)⊤ M ⋆ x (i) = x (i)⊤ M ⋆ + M ⊤ ⋆ x ( (i) , x (j) ) = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ = (x ⋆ , x ′ ⋆ ) . Naturally, while this local strategy is easy to apply for a couple of neighbors (i, j), it becomes infeasible to simultaneously extend it to all the other couples in the graph since some of them share the same nodes. 
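As a small illustration of the local strategy just described, the sketch below computes the couple (x_*, x'_*) maximising x^T (M_* + M_*^T) x' for a single pair of neighbours; the arm set and the matrix are arbitrary toy values, not taken from the text.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
d, K = 3, 4
arms = [np.abs(rng.standard_normal(d)) for _ in range(K)]   # toy node-arm set X
M = np.abs(rng.standard_normal((d, d)))                     # toy parameter matrix M_*

# locally optimal couple for one pair of neighbours (i, j): both directed rewards are counted
x_star, xp_star = max(product(arms, arms), key=lambda p: p[0] @ (M + M.T) @ p[1])
print(x_star @ M @ xp_star + xp_star @ M @ x_star)          # expected reward of the edges (i, j) and (j, i)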
However, one can learn something from this strategy, which is that given the optimal joint arm (x Hence, instead of looking for the optimal joint arm (which is NP-Hard), one can alternatively aim at seeking the allocation that, for any edge (i, j) ∈ E, constructs as many pairs (x (i) , x (j) ) = (x ⋆ , x ′ ⋆ ) as possible. Assigning x ⋆ to a subset of nodes and x ′ ⋆ to the complementary is equivalent to cutting the graph into two pieces and creating two distinct sets of nodes V 1 and V 2 such that V = V 1 ∪ V 2 and V 1 ∩ V 2 = ∅. Thus, the described strategy boils down to finding a cut passing through the maximum number of edges. This problem is known as the Max-Cut problem (see e.g., [START_REF] Goemans | 879-approximation algorithms for max cut and max 2sat[END_REF][START_REF] Sahni | P-complete approximation problems[END_REF]), which is also NP-Hard. However, the extensive attention this problem has received allows us to use one of the many approximation algorithms (see, e.g., Algorithm 4) which are guaranteed to yield a cut passing through at least a given fraction of the edges in the graph. Most of the guarantees for the approximation of the Max-Cut problem are stated with respect to the optimal Max-Cut solution, which is not exactly the guarantee we are looking for: we need a guarantee as a proportion of the total number of edges. We thus have to be careful on the algorithm we choose. From Algorithm 4, one can have a guarantee on the proportion of cut edges with respect to the total number of edges m = |E|. We state this guarantee in the following Proposition. Proposition 3.1. Given a graph G = (V, E), Algorithm 4 returns a couple (V 1 , V 2 ) such that |{(i, j) ∈ E | (i ∈ V 1 ∧ j ∈ V 2 ) ∨ (i ∈ V 2 ∧ j ∈ V 1 )}| ≥ m 2 . Algorithm 4: Approx-MAX-CUT [START_REF] Sahni | P-complete approximation problems[END_REF] Input : G = (V, E) Set V 1 = ∅, V 2 = ∅ for i in V do n 1 = |{(i, j) ∈ E | j ∈ V 1 }|; n 2 = |{(i, j) ∈ E | j ∈ V 2 }|; if n 1 > n 2 then V 2 ← V 2 ∪ {i} else V 1 ← V 1 ∪ {i}; end return (V 1 , V 2 ) Proof. At each iteration, we take a node i that is neither in V 1 nor in V 2 , count its neighbors already in V 1 and V 2 and save the results respectively in n 1 and n 2 . For the sake of simplicity in the proof, we will denote them n (i) 1 and n (i) 2 to distinguish from one node to the other. Since n (i) 1 represents the number of neighbors of i already assigned to V 1 , if the node i is added to V 2 , 2 × n (i) 1 edges would be cut (the factor 2 comes from the fact that between two nodes i and j, there are the edges (i, j) and (j, i)). Similarly, since n (i) 2 represents the number of neighbors already assigned in V 2 , if the node i is added to V 1 , 2 × n (i) 2 edges would be cut. In the algorithm, the node i is added to V 1 or V 2 such that we cut the most edges, hence by denoting m i the number of additional cut edges implied by the assignment of node i in V 1 or V 2 , we have: m i = max 2n (i) 1 , 2n (i) 2 ≥ 2n (i) 1 + 2n (i) 2 2 = n (i) 1 + n (i) 2 . By summing for all the nodes in the graph : n i=1 m i ≥ n i=1 n (i) 1 + n (i) 2 = m 2 . By definition n i=1 m i is the total number of edges that are cut which also means that n i=1 m i = |{(i, j) ∈ E | (i ∈ V 1 ∧ j ∈ V 2 ) ∨ (i ∈ V 2 ∧ j ∈ V 1 )}| . Given this guarantee with respect to the total number of edges, it only remains to present the full strategy that is to allocate to the nodes in V 1 the arm x ⋆ and to the nodes in V 2 the arm x ′ ⋆ . We give the strategy in Algorithm 5. 
Algorithm 5: Approximation algorithm of our NP-Hard problem Input : Graph G = (V, E), arm set X , parameter matrix M ⋆ (V 1 , V 2 ) = Approx-MAX-CUT(G); Find (x ⋆ , x ′ ⋆ ) ∈ arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ ; for i in V 1 do x (i) t = x ⋆ ; // Can be done in parallel end for i in V 2 do x (i) t = x ′ ⋆ ; // Can be done in parallel end return (x (1) , . . . , x (n) ) With this algorithm, given the returned allocation (x (1) , . . . , x (n) ), for some edges (i, j) ∈ E, the associated allocated arms will be the optimal couples (x ⋆ , x ′ ⋆ ) or (x ′ ⋆ , x ⋆ ) and for other edges (i, j) ∈ E the associated allocated arms will be the suboptimal and unwanted couples (x ⋆ , x ⋆ ) or (x ′ ⋆ , x ′ ⋆ ). Before we state the guarantee of this algorithm with respect to the optimal global reward, let us introduce m 1 (respectively m 2 ) the number of edges that go from nodes in V 1 (respectively V 2 ) to nodes in V 1 as well (respectively V 2 ) and m 1→2 (respectively m 2→1 ) the number of edges that goes from nodes in V 1 (respectively V 2 ) to nodes in V 2 (respectively V 1 ). Notice that the total number of edges m = m 1→2 + m 2→1 + m 1 + m 2 and that by definition of the edge set E and using Proposition 3. ⋆ ) be the optimal joint arm as defined in (3.1) and let 0 ≤ ξ ≤ 1 be a problem-dependent parameter defined by ξ = min x∈X x ⊤ M ⋆ x Proof. Given the allocation (x (1) , . . . , x (n) ) return by Algorithm 5, the associated reward y can be written as y = m 1→2 × x ⊤ ⋆ M ⋆ x ′ ⋆ + m 2→1 × x ′⊤ ⋆ M ⋆ x ⋆ (a) + m 1 × x ⊤ ⋆ M ⋆ x ⋆ + m 2 × x ′⊤ ⋆ M ⋆ x ′ ⋆ (b) Let us analyse (a): (a) = m 1→2 × x ⊤ ⋆ M ⋆ x ′ ⋆ + m 2→1 × x ′⊤ ⋆ M ⋆ x ⋆ = m 1→2 + m 2→1 2 × x ⊤ ⋆ M ⋆ + M ⊤ ⋆ x ′ ⋆ (because m 1→2 = m 2→1 ) = m 1→2 + m 2→1 m n i=1 j∈N i j>i x ⊤ ⋆ M ⋆ + M ⊤ ⋆ x ′ ⋆ ≥ m 1→2 + m 2→1 m n i=1 j∈N i j>i x (i)⊤ ⋆ M ⋆ + M ⊤ ⋆ x (j) ⋆ (3.3) = m 1→2 + m 2→1 m (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ = m 1→2 + m 2→1 m y ⋆ , where (3.3) comes from Equation (3.2). Now, we analyse (b) by using the definition of ξ where for any x ′ ∈ X , x ′⊤ M ⋆ x ′ ≥ min x∈X x ⊤ M ⋆ x (3.4) = 1 m ξ M ⋆ x (j) ⋆ . (3.5) Hence we have, (b) ≥ m 1 m ξ (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ + m 2 m ξ (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ = m 1 + m 2 m ξy ⋆ . Thus, y = (a) + (b) ≥ m 1→2 + m 2→1 m y ⋆ + m 1 + m 2 m ξy ⋆ = m 1→2 + m 2→1 m + m 1 + m 2 m ξ y ⋆ (3.6) = m 1→2 + m 2→1 m + 1 - m 1→2 + m 2→1 m ξ y ⋆ = m 1→2 + m 2→1 + mξ -(m 1→2 + m 2→1 )ξ m y ⋆ = m 1→2 + m 2→1 m (1 -ξ) + ξ y ⋆ ≥ 1 2 (1 -ξ) + ξ y ⋆ = 1 + ξ 2 y ⋆ . Moreover, the Approx-Max-Cut Algorithm has a complexity in O(n 2 ), then we do K 2 estimations to find the best couple (x ⋆ , x ′ ⋆ ) ∈ X 2 , and each round in the first and second for loop of the algorithm is in O(1) and there are |V 1 | + |V 2 | = n rounds. Hence the complexity is in O(K 2 + n 2 ). What is ξ and what value can we expect? This parameter measures what minimum gain with respect to the optimal reward one could get by allocating the unwanted and suboptimal couple of arms of the form (x, x) for two neighbors. For example, if there exists x 0 ∈ X such that x ⊤ the m 2 rewards of the form x ′⊤ ⋆ M ⋆ x ′ ⋆ that one gets when allocating x ⋆ to the nodes in V 1 and x ′ ⋆ to the nodes in V 2 . Here, the improvement we can make is to include them in the optimization problem and to weight the different rewards obtained through the graph using the proportions m 1→2 , m 2→1 , m 1 and m 2 . 
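Before turning to this improved variant, here is a minimal Python sketch of the pipeline analysed so far: the greedy cut of Algorithm 4 followed by the allocation of Algorithm 5; the ratio printed at the end illustrates the (1 + ξ)/2 guarantee of Theorem 3.2. The edge set contains both orientations (i, j) and (j, i), as in the text, and the toy graph, arm set and matrix are illustrative only.

import numpy as np
from itertools import product

def approx_max_cut(nodes, edges):
    """Greedy cut of Algorithm 4; `edges` contains both (i, j) and (j, i)."""
    V1, V2 = set(), set()
    for i in nodes:
        n1 = sum(1 for (a, b) in edges if a == i and b in V1)
        n2 = sum(1 for (a, b) in edges if a == i and b in V2)
        (V2 if n1 > n2 else V1).add(i)     # cut the larger number of already-placed neighbours
    return V1, V2

def algorithm5(nodes, edges, arms, M):
    """Allocation of Algorithm 5: x_* on V1 and x'_* on V2."""
    V1, V2 = approx_max_cut(nodes, edges)
    x_star, xp_star = max(product(arms, arms), key=lambda p: p[0] @ (M + M.T) @ p[1])
    return {i: (x_star if i in V1 else xp_star) for i in nodes}

# toy instance: a 5-cycle, two node-arms in R^2
nodes = range(5)
undirected = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
edges = undirected + [(j, i) for (i, j) in undirected]
arms = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = np.array([[0.2, 1.0], [1.0, 0.2]])

alloc = algorithm5(nodes, edges, arms, M)
reward = sum(alloc[i] @ M @ alloc[j] for (i, j) in edges)
best = max(sum(joint[i] @ M @ joint[j] for (i, j) in edges)
           for joint in product(arms, repeat=len(nodes)))   # brute force, only for tiny graphs
print(reward, best, reward / best)                          # the ratio is at least (1 + xi) / 2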
By denoting (x ⋆ , x′ ⋆ ) the solution of the following optimization problem, max (x,x ′ )∈X 2 m 1→2 • x ⊤ M ⋆ x ′ + m 2→1 • x ′⊤ M ⋆ x + m 1 • x ⊤ M ⋆ x + m 2 • x ′⊤ M ⋆ x ′ , (3.7) we are optimizing the total global reward that one would obtain when allocating only two arms (x, x ′ ) ∈ X 2 in the graph. This strategy is described in Algorithm 6. Algorithm 6: Improved approximation algorithm of our NP-Hard problem Input : Graph G = (V, E), arm set X , parameter matrix M ⋆ (V 1 , V 2 ) = Approx-MAX-CUT(G); m 1→2 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 2 }|; m 2→1 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 1 }|; m 1 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 1 }|; m 2 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 2 }|; Find (x ⋆ , x′ ⋆ ) solution of (3.7); for i in V 1 do x (i) t = x⋆ ; // Can be done in parallel end for i in V 2 do x (i) t = x′ ⋆ ; // Can be done in parallel end return (x (1) , . . . , x (n) ) To understand and analyse this new algorithm, let us define ∆ ≥ 0 the global reward difference of allocating (x ⋆ , x′ ⋆ ) instead of (x ⋆ , x ′ ⋆ ) as follows: ∆ = m 1→2 x⊤ ⋆ M ⋆ x′ ⋆ -x ⊤ ⋆ M ⋆ x ′ ⋆ + m 2→1 x′⊤ ⋆ M ⋆ x⋆ -x ′⊤ ⋆ M ⋆ x ⋆ + m 1 x⊤ ⋆ M ⋆ x⋆ -x ⊤ ⋆ M ⋆ x ⋆ + m 2 x′⊤ ⋆ M ⋆ x′ ⋆ -x ′⊤ ⋆ M ⋆ x ′ ⋆ . The new guarantees that we get on the reward of the allocation obtained by Algorithm 6 are stated in the following theorem. Theorem 3.3. Let us consider the graph G = (V, E), a finite arm set X ⊂ R d and the matrix M ⋆ ∈ R d×d given as input to Algorithm 6. Let (x (1) ⋆ , . . . , x (n) ⋆ ) be the optimal joint arm as defined in (3.1) and let 0 ≤ ξ ≤ 1 be defined as in Theorem 3.2. Let 0 ≤ ϵ ≤ 1 2 be a problem dependent parameter that measures the relative gain of optimizing on the suboptimal rewards defined as: α = m 1→2 + m 2→1 m + m 1 + m 2 m ξ + ϵ . Proof. The proof follows exactly the one of Theorem 3.2 where we stop the reasoning at Equation 3.6 and then plug the result into the proof of Theorem 3.3 This corollary is useful to understand in practice the kind of guarantees we can have depending on the graph structure and the approximation algorithm we use to solve the Max-Cut problem. For instance, in the most favorable graphs, which are bipartite graphs (i.e., graphs where all the m edges goes from nodes in V 1 to nodes in V 2 or vice versa), we have m 1→2 + m 2→1 = m and m 2 + m 1 = 0. Also it implies that (x ⋆ , x ′ ⋆ ) = (x ⋆ , x′ ⋆ ), so ϵ = 0 which gives α = 1 and makes the Algorithm 10 find the optimal solution of the problem. What may also be of interest is to understand how α varies with respect to ξ, ϵ and the quantity m 1 and m 2 for graphs that are between a complete graph (that is the worst case scenario in terms of constraints) and a bipartite graph. We investigate experimentally this dependency in Section 3.2. Numerical experiments: influence of the parameters on the solution In this section, we give some insights on the problem-dependent parameters ξ and ϵ and the corresponding α. Let α 1 and α 2 be the α stated respectively in Theorem 3.2 and Corollary 3.1. In the first experiment, we show the dependence of α 1 and α 2 on the graph type and the chosen approximation algorithm for the max-cut problem with respect to ξ and ϵ. We also highlight the differences between the two parameters α 1 and α 2 as well as the significant improvement in guarantees that one can obtain using Algorithm 6 depending on the type of the graph. The results are presented in Table 3.1. One can notice that the complete graph seems to give the worst guarantee on the α-approximation with respect to ϵ and ξ. 
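For completeness, here is a minimal sketch of Algorithm 6 matching the pseudocode above; it reuses the greedy cut from the previous sketch, computes the proportions m_{1→2}, m_{2→1}, m_1, m_2 from the cut, and selects the couple (x̃_*, x̃'_*) by enumerating X^2 on the weighted objective (3.7).

import numpy as np
from itertools import product

def algorithm6(nodes, edges, arms, M, approx_max_cut):
    """Improved allocation of Algorithm 6, based on the weighted objective (3.7)."""
    V1, V2 = approx_max_cut(nodes, edges)
    m12 = sum(1 for (i, j) in edges if i in V1 and j in V2)
    m21 = sum(1 for (i, j) in edges if i in V2 and j in V1)
    m1 = sum(1 for (i, j) in edges if i in V1 and j in V1)
    m2 = sum(1 for (i, j) in edges if i in V2 and j in V2)

    def weighted_reward(x, xp):          # objective (3.7)
        return (m12 * (x @ M @ xp) + m21 * (xp @ M @ x)
                + m1 * (x @ M @ x) + m2 * (xp @ M @ xp))

    x_t, xp_t = max(product(arms, arms), key=lambda p: weighted_reward(*p))
    return {i: (x_t if i in V1 else xp_t) for i in nodes}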
Thus, we conducted a second experiment where we consider the worst case scenario in terms of the graph type -e.g., the complete graph-and where there are n = 10 agents. This second experiment studies the variation of ϵ and ξ with respect to the unknown parameter matrix M ⋆ . To design such experiment, we consider the arm-set X as the vectors (e 1 , . . . , e d ) of the canonical base in R d . We generate the matrix M ⋆ randomly in the following way: first, all elements of the matrix are drawn i.i.d.from a standard normal distribution, and then we take the absolute value of each of these elements to ensure that the matrix only contains positive numbers. The choice of the vectors of the canonical base as the arm-set allows us to modify the matrix M ⋆ and to illustrate the dependence on ξ and ϵ in a simple way. Consider the best couple (i ⋆ , j ⋆ ) = arg max (i,j)∈{1,...,d} 2 e ⊤ i M ⋆ e j , we want to see how the rewards of the suboptimal couples of arms (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ) impact the values of ξ, ϵ and thus α. Notice that the reward associated with the couple of arms (e i ⋆ , e i ⋆ ) (respectively (e j ⋆ , e j ⋆ )) is [M ⋆ ] i ⋆ i ⋆ (respectively [M ⋆ ] j ⋆ j ⋆ ). Hence we define 0 ≤ ζ < 1 and set [M ⋆ ] i ⋆ i ⋆ = [M ⋆ ] j ⋆ j ⋆ = ζ × 1 2 ([M ⋆ ] i ⋆ j ⋆ + [M ⋆ ] j ⋆ i ⋆ ). We study the variation of ξ, ϵ, α 1 and α 2 with respect to ζ. The results are presented in Figure 3.1. One can see that when the associated rewards of (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ) are low (thus ξ is low and ϵ high), Algorithm 6 gives a much better guarantees than Algorithm 5 since it focuses on other arms than e i ⋆ and e j ⋆ that give a higher global reward. Moreover, even when the unwanted couples (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ) give high rewards, the guarantees on the regret of Algorithm 10 are still stronger because it takes into consideration the quantities m 1 and m 2 of the constructed suboptimal couple of arms (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ). Conclusion and perspectives In this chapter, we showed that the underlying optimization problem is NP-hard and that one has to rely on approximation algorithms. To that matter, we designed a first α-approximation algorithm based on the max-cut problem, with α ≥ 1/2. We showed that by exploiting the graph structure and the typology of the problem, one can both improve the performance in practice and have a better theoretical guarantee on α. The method presented in this chapter can be extended in many ways, especially in the choice of the max-cut approximation algorithm. For instance, one can consider cutting the graph into 3 or more pieces, which is equivalent to approximating the problem of a Max-k-Cut [START_REF] Frieze | Improved approximation algorithms for max k-cut and max bisection[END_REF] with k ≥ 3. With the knowledge of such a partition of nodes V 1 , . . . , V k , one may want to look for a k-tuple of arms maximizing the optimistic allocated reward rather than a pair, therefore introducing an elegant tradeoff between the optimality of the solution and the computational complexity of the arms allocation. In this chapter, we assume that we do not know the parameter matrix M ⋆ and as described in the problem setting in Section 1.2, a central entity faces a graphical bilinear bandits problem where at each round it chooses an arm for each node of the graph and observes a bilinear reward for each edge of the graph. 
In this chapter, we will focus on the best-arm identification objective that reads as follows: Objective: Find, within a minimum number of rounds, the joint arm (x (1) ⋆ , . . . , x (n) ⋆ ) such that the expected global reward (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ is maximized, where M ⋆ is unknown to the learner. We follow the notations we established in section 1.2.1. Preliminaries A two-stage algorithm template For simplicity, we consider for now that the unknown parameter M ⋆ is symmetric, which greatly simplifies the reasoning, and we will relax this assumption in Section 4.2.3. In Chapter 3, we designed polynomial-time algorithms that allow us to compute an α-approximation solution to the NP-Hard problem of finding the best joint arm given M ⋆ . Notice that in the α-approximation Algorithm 5, M ⋆ is only used to identify the best pair (x ⋆ , x ′ ⋆ ) as follows: x ⋆ , x ′ ⋆ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ x ′ . (4.1) Thus, using an estimate M of M ⋆ having the following property: arg max (x,x ′ )∈X 2 x ⊤ Mx ′ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ x ′ = x ⋆ , x ′ ⋆ , (4.2) allows us to identify the pair (x ⋆ , x ′ ⋆ ), and thus gives us the same guarantees as the ones presented in Theorem 3.2. We thus address the problem of computing M such that, in a minimum number of rounds, with high probability, we are able to identify the pair (x ⋆ , x ′ ⋆ ) and apply the Algorithm 5. Our general algorithm can thus be thought of as a two-part algorithm where, in the first instance, the central entity applies a pure exploration algorithm and draws the arms of the nodes at each round to compute the best possible estimate M (in a certain sense), and then, in the second instance, uses M to apply the α-approximation-Algorithm 5 described in chapter 3. We describe the general procedure in Algorithm 7. Algorithm 7: General framework for BAI in GBB : a two-stage algorithm Input : graph G = (V, E), arm set X M = Pure-Exploration-Algorithm(G,X ); // This chapter x (1) , . . . , x (n) = α-Approximation-Algorithm(G, X , M); // Done in chapter 3 return x (1) , . . . , x (n) Stopping condition Notice that the optimization problem (4.1) is equivalent to the following optimization problem (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X vec xx ′⊤ , vec (M ⋆ ) . Let us simplify the notations and denote θ ⋆ ≜ vec (M ⋆ ) the vectorized version of the unknown matrix M ⋆ . Let us also use the notation z xx ′ ≜ vec xx ′⊤ , and define Z = {z xx ′ |(x, x ′ ) ∈ X 2 } the set containing such vectors. Then looking for the couple (x ⋆ , x ′ ⋆ ) is the same as looking for the vector z ⋆ ∈ Z where z ⋆ = arg max z∈Z ⟨z, θ ⋆ ⟩ . In other words, we want to find an arm z ∈ Z, such that for all z ′ ∈ Z, (z -z ′ ) ⊤ θ ⋆ ≥ 0. However, one does not have access to θ ⋆ , so we have to use its empirical estimate. Hence at each round t, the central entity can choose for each couple of neighbors (i, j) an arm z ∈ Z and get a noisy linear reward of the form ⟨z, θ ⋆ ⟩ + η where η is a σ-subgaussian random variable, that can be used to compute an estimate θt . For more clarity, we refer to any x ∈ X as a node-arm and any z ∈ Z will be referred as an edge-arm. If x (i) t ∈ X represents the node-arm allocated to the node i ∈ V at time t, for each edge (i, j) ∈ E we will denote the associated edge-arm by z (i,j) t ≜ vec x (i) t x (j)⊤ t ∈ Z. Assumption 4.1 (Bounded edge-arm norm). We consider that there exists L > 0, for all edge-arm z ∈ Z, such that ∥z∥ 2 2 ≤ L. Assumption 4.2 (Positive rewards). 
We consider that for any z ∈ Z, the associated expected reward ⟨z, θ ⋆ ⟩ is such that ⟨z, θ ⋆ ⟩ ≥ 0 Assumption 4.3 (Spanning the action space). We consider that X spans R d . The goal here is to define the optimal sequence (z 1 , . . . , z mt ) ∈ Z mt that should be pulled in the first t rounds so that (4.2) is reached as soon as possible. A natural approach is to rely on classical strategies developed for best arm identification in linear bandits. We define (y 1 , . . . , y mt ) the corresponding noisy rewards of the sequence (z 1 , . . . , z mt ). We assume that the noise terms in the rewards are i.i.d., following a σ-sub-Gaussian distribution. Let θt = A -1 t b t ∈ R d 2 be the solution of the ordinary least squares problem with A t = mt i=1 z i z ⊤ i ∈ R d 2 ×d 2 and b t = mt i=1 z i y i ∈ R d 2 . We first recall the Proposition 2.2 with the notation of our problem. Proposition 4.1 (Proposition 1 in [START_REF] Soare | Best-arm identification in linear bandits[END_REF]). For every fixed sequence (z 1 , . . . , z mt ) ∈ Z mt , with probability 1 -δ, for all t > 0 and for all z ∈ Z, we have z ⊤ θ ⋆ -z ⊤ θt ≤ 2σ √ 2∥z∥ A -1 t log 6m 2 t 2 K 2 δπ . Following the steps of [START_REF] Soare | Best-arm identification in linear bandits[END_REF] and the ones developed in Section 2.2.2, we can show that if there exists z ∈ Z such that for all z ′ ∈ Z the following holds: ∥z -z ′ ∥ A -1 t 8σ 2 log 6m 2 t 2 K 4 δπ 2 ≤ ∆t z, z ′ , (4.3) where ∆t (z, z ′ ) = (z -z ′ ) ⊤ θt is the empirical gap between z and z ′ , then with probability at least 1-δ, the OLS estimate θt leads to the best edge-arm z ⋆ , which means that arg max z∈Z ⟨z, θt ⟩ = arg max z∈Z ⟨z, θ ⋆ ⟩. Therefore, when the Equation (4.3) is true, the learner can stop pulling arms, we call it the stopping condition. As mentioned in [START_REF] Soare | Best-arm identification in linear bandits[END_REF], by noticing that max (z,z ′ )∈Z 2 ∥z -z ′ ∥ A -1 t ≤ 2 max z ∈ Z ∥z∥ A -1 t , an admissible strategy is to pull edge-arms minimizing max z∈Z ∥z∥ A -1 t in order to satisfy the stopping condition as soon as possible. A Constrained G-Allocation Given the stopping condition (4.3) derived in the previous section, one wants to find the sequence of edge-arms z ⋆ mt = (z ⋆ 1 , . . . , z ⋆ mt ) such that: z ⋆ mt ∈ arg min (z 1 ,...,zmt)∈Z mt max z ′ ∈Z z ′ ⊤ mt i=1 z i z ⊤ i -1 z ′ . (G-opt-Z) This is known as G-allocation (see e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]) and is NP-hard to compute ( [START_REF] Çivril | On selecting a maximum volume sub-matrix of a matrix and related problems[END_REF]104]). One way to find an approximate solution is to rely on a convex relaxation of the optimization problem (G-opt-Z) and first compute a real-valued allocation Γ ⋆ ∈ S Z such that Γ ⋆ ∈ arg min Γ ∈S Z max z ′ ∈Z z ′ ⊤ z∈Z Γ z zz ⊤ -1 z ′ . (G-relaxed-Z) One could either use random sampling to draw edge-arms as i.i.d. samples from the Γ ⋆ distribution or rounding procedures to efficiently convert each Γ ⋆ z into an integer. However, these methods do not take into account the graphical structure of the problem. Indeed, at a given round, the chosen edge-arms may result in two different assignments for the same node, we call this phenomenon a collision. In Figure 4.1, we characterise a collision that occurs when the central entity allocates respectively two edge-arms z and z ′ to two edges that share the same node. 
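The collision phenomenon is easy to reproduce: if an edge-arm is drawn independently for each edge, two edges sharing a node can require two different node-arms for that node, as in the small sketch below (the arm names and the two-edge path are illustrative).

import numpy as np

rng = np.random.default_rng(0)
arms = ["x1", "x2"]                          # two node-arms, named for readability
edges = [(0, 1), (1, 2)]                     # two edges sharing node 1

# drawing an edge-arm (a pair of node-arms) independently for each edge
draw = {e: (rng.choice(arms), rng.choice(arms)) for e in edges}
required_for_node_1 = {draw[(0, 1)][1], draw[(1, 2)][0]}
print(draw, "collision" if len(required_for_node_1) > 1 else "no collision")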
Therefore, random sampling or rounding procedures cannot be straightforwardly used to select edge-arms in Z. Nevertheless, (G-relaxed-Z) still gives valuable information on the number of times, in proportion, each edge-arm z ∈ Z must be allocated to the graph. In the next section, we present an algorithm satisfying both the proportion requirements and the graphical constraints. i j k z = v e c x x ′ ⊤ z ′ = v e c x ′ ′ x ′ ′ ′ ⊤ x (i) = x x (j) = x ′ x (k) = x ′′′ x (j) = x ′′ Collision if x ′ = x ′′ Algorithm and guarantees Random Allocation over the Nodes Our algorithm is based on a randomized method directly allocating node-arms to the nodes and thus avoiding the difficult task of choosing edge-arms and trying to allocate them to the graph while ensuring that every node has an unique assignment. The validity of this random allocation is based on Theorem 4.1 below showing that one can draw node-arms in X and allocate them to the graph such that the associated edge-arms follow the probability distribution Γ ⋆ solution of (G-relaxed-Z). Theorem 4.1. Let γ ⋆ be a solution of the following optimization problem: min γ∈S X max x ′ ∈X x ′ ⊤ x∈X γ x xx ⊤ -1 x ′ . (G-relaxed-X ) Let Γ ⋆ ∈ S Z be defined for all z = vec xx ′⊤ ∈ Z by Γ ⋆ z = γ ⋆ x γ ⋆ x ′ . Then, Γ ⋆ is a solution of (G-relaxed-Z). To prove Theorem 4.1, we first state a useful lemma. For any finite set X ⊂ R d and γ ∈ S X , let Σ X (γ) = x∈X γ x xx ⊤ . We define the function h X : S X → R ∪ {+∞} as follows: for any γ ∈ S X , h X (γ) = max x ′ ∈X x ′⊤ Σ X (γ) -1 x ′ if Σ X (γ) is invertible +∞ otherwise . Lemma 4.1. Let X ⊂ R d be a finite set spanning R d and let Z = {vec (xx ′⊤ ) | (x, x ′ ) ∈ X 2 }. If γ ⋆ ∈ S X is a minimizer of h X , then γ ⋆ is a solution of min γ∈S X max z∈Z z ⊤ x∈X x ′ ∈X γ x γ x ′ vec xx ′⊤ vec xx ′⊤ ⊤ -1 z . Proof. First, let us notice that, for any X ⊂ R d , one has h X ≥ 0. Thus, γ ⋆ is also a minimizer of h 2 X . In addition, X is spanning R d so h X (γ ⋆ ) < +∞. Developing h X (γ ⋆ ) 2 yields: h X (γ ⋆ ) × h X (γ ⋆ ) = max x∈X x ⊤ Σ X (γ ⋆ ) -1 x × max x∈X x ⊤ Σ X (γ ⋆ ) -1 x = max x∈X max x ′ ∈X x ⊤ Σ X (γ ⋆ ) -1 xx ′⊤ Σ X (γ ⋆ ) -1 x ′ = max x∈X max x ′ ∈X vec xx ′⊤ ⊤ vec Σ X (γ ⋆ ) -1 xx ′⊤ Σ X (γ ⋆ ) -1 = max x∈X max x ′ ∈X vec xx ′⊤ ⊤ Σ X (γ ⋆ ) -1 ⊗ Σ X (γ ⋆ ) -1 vec xx ′⊤ = max z∈Z z ⊤ Σ X (γ ⋆ ) -1 ⊗ Σ X (γ ⋆ ) -1 z , where ⊗ denotes the Kronecker product. We can now focus on the central term: Σ X (γ ⋆ ) -1 ⊗ Σ X (γ ⋆ ) -1 = x∈X γ ⋆ x xx ⊤ -1 ⊗ x∈X γ ⋆ x xx ⊤ -1 = x∈X γ ⋆ x xx ⊤ ⊗ x∈X γ ⋆ x xx ⊤ -1 = x∈X x ′ ∈X γ ⋆ x γ ⋆ x ′ xx ⊤ ⊗ x ′ x ′⊤ -1 = x∈X x ′ ∈X γ ⋆ x γ ⋆ x ′ vec xx ′⊤ vec xx ′⊤ ⊤ -1 , and the result holds. Proof of Theoreme 4.1. From [START_REF] Kiefer | The Equivalence of Two Extremum Problems[END_REF], we know that min Γ ∈S Z h Z (Γ ) = d 2 and min γ∈S X h X (γ) = d. Then, using Lemma 4.1, one has d 2 = h X (γ ⋆ ) × h X (γ ⋆ ) = max z ′ ∈Z z ′⊤ x∈X x ′ ∈X γ ⋆ x γ ⋆ x ′ vec xx ′⊤ vec xx ′⊤ ⊤ -1 z ′ = max z ′ ∈Z z ′⊤ z∈Z Γ ⋆ z zz ⊤ -1 z ′ . This result implies that h Z (Γ ⋆ ) = d 2 . Since min Γ ∈S Z h Z (Γ ) = d 2 , Γ ⋆ is a minimizer of h Z . This theorem implies that, at each round t > 0 and each node i ∈ V , if x (i) t is drawn from γ ⋆ , then for all pairs of neighbors (i, j) ∈ E the probability distribution of the associated edgearms z (i,j) t follows Γ ⋆ . Moreover, as γ ⋆ is a distribution over the node-arm set X , Γ ⋆ is a joint (product) probability distribution on X 2 with marginal γ ⋆ . On the computation of γ ⋆ . 
Let us first state the following proposition: Proposition 4.2. Let d > 0, for any set X ⊂ R d , h X is convex. Proof. Let (γ, γ ′ ) ∈ S 2 X be two distributions in S X . If either Σ X (γ) or Σ X (γ ′ ) are not invertible, then for any t ∈ [0, 1] one has h X (tγ + (1 -t)γ ′ ) ≤ th X (γ) + (1 -t)h X (γ ′ ) = +∞ . Otherwise, for t ∈ [0, 1], we define the positive definite matrix Z(t) ∈ R d×d as follows: Z(t) = tΣ X (γ) + (1 -t)Σ X (γ ′ ) . Simple linear algebra [START_REF] Petersen | The matrix cookbook, nov 2012[END_REF] yields ∂Z(t) -1 ∂t = Z(t) -1 ∂Z(t) ∂t Z(t) -1 . Using this result and the fact that ∂ 2 Z(t)/∂t 2 = 0, we obtain ∂ 2 Z(t) -1 ∂t 2 = 2Z(t) -1 ∂Z(t) ∂t Z(t) -1 ∂Z(t) ∂t Z(t) -1 . Therefore, for any x ∈ X, ∂ 2 x ⊤ Z(t) -1 x ∂t 2 = 2x ⊤ Z(t) -1 ∂Z(t) ∂t Z(t) -1 ∂Z(t) ∂t Z(t) -1 x = 2 ∂Z(t) ∂t Z(t) -1 x ⊤ Z(t) -1 ∂Z(t) ∂t Z(t) -1 x ≥ 0 , which shows convexity for any fixed x ∈ X. The final results yields from the fact that h X is a maximum over convex functions. Although we face a min-max optimization problem and given the convexity of h X , we apply the Frank-Wolfe algorithm [START_REF] Frank | An algorithm for quadratic programming[END_REF] to compute the solution γ ⋆ of (G-relaxed-X ), as it is more suited to optimization tasks on the simplex than projected gradient descent. The convergence of the algorithm has been proven in [START_REF] Damla Ahipasaoglu | Linear convergence of a modified Frank-Wolfe algorithm for computing minimum-volume enclosing ellipsoids[END_REF]. Given the characterization in Theorem 4.1 and our objective to verify the stopping condition in (4.3), we present our sampling procedure in Algorithm 8. We also note that at each round the sampling of the node-arms can be done in parallel. This sampling procedure implies that each edge-arm follows the optimal distribution Γ ⋆ . However, if we take the number of times each z ∈ Z appears in the m pulled edge-arms of a given round, we might notice that the observed proportion is not close to Γ ⋆ z , regardless of the size of m. This is due to the fact that the m edge-arms are not independent because of the graph structure Algorithm 8: Pure-Exploration-Algorithm : Randomized G-Allocation for GBB Input : graph G = (V, E), arm set X Set A 0 = I ; b 0 = 0 ; t = 1; Apply the Frank-Wolfe algorithm to find γ ⋆ solution of (G-relaxed-X ). while stopping condition (4.3) is not verified do // Sampling the node-arms Draw x (1) t , . . . , x (n) t iid ∼ γ ⋆ and obtain for all (i, j) in E the rewards y (i,j) t ; // Estimating θt with the associated edge-arms A t = A t-1 + (i,j)∈E z (i,j) t z (i,j)⊤ t ; b t = b t-1 + (i,j)∈E z (i,j) t y (i,j) t ; θt = A -1 t b t t ← t + 1; end return θt (cf. Section 4.3.1) . Conversely, since each group of m edge-arms are independent from one round to another, the proportion of each z ∈ Z observed among the mt pulled edge-arms throughout t rounds is close to Γ ⋆ z . One may wonder whether deterministic rounding procedures could be used instead of random sampling on γ ⋆ , as it is done in many standard linear bandit algorithms [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]. Applying rounding procedure on γ ⋆ gives the number of times each node-arm x ∈ X should be allocated to the graph. However, it does not provide the actual allocations that the learner must choose over the t rounds to optimally pull the associated edge-arms (i.e., pull edge-arms following Γ ⋆ ). 
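The sketch below puts the two ingredients of Algorithm 8 together: a Frank-Wolfe / Wynn-type iteration for (G-relaxed-X) (by the Kiefer-Wolfowitz equivalence, the descent vertex is the node-arm with the largest leverage score), followed by the randomized allocation in which every node draws its node-arm i.i.d. from γ_* and the design matrix accumulates the induced edge-arms. The step size, the number of iterations and the toy instance are illustrative choices, not the exact ones used in the text.

import numpy as np

def g_optimal_design(X, n_iter=500):
    """Frank-Wolfe / Wynn iteration for the relaxed node-level design (G-relaxed-X)."""
    X = np.asarray(X, dtype=float)
    K, d = X.shape
    gamma = np.full(K, 1.0 / K)
    for k in range(n_iter):
        Sigma_inv = np.linalg.inv(X.T @ (gamma[:, None] * X))
        leverage = np.einsum('ij,jk,ik->i', X, Sigma_inv, X)   # x^T Sigma(gamma)^{-1} x
        vertex = int(np.argmax(leverage))                      # Frank-Wolfe vertex
        step = 2.0 / (k + 3)
        gamma = (1 - step) * gamma
        gamma[vertex] += step
    return gamma / gamma.sum()

def randomized_g_allocation(X, gamma, edges, n_nodes, T, rng):
    """Sampling loop of Algorithm 8: i.i.d. node-arm draws, induced edge-arms summed into A_t."""
    X = np.asarray(X, dtype=float)
    d = X.shape[1]
    A = np.zeros((d * d, d * d))
    for _ in range(T):
        draws = rng.choice(len(X), size=n_nodes, p=gamma)      # one draw per node, in parallel
        for (i, j) in edges:
            z = np.kron(X[draws[j]], X[draws[i]])              # vec(x^(i) x^(j)T), column-stacking
            A += np.outer(z, z)
    return A

rng = np.random.default_rng(0)
X = np.eye(3)                                                  # node-arms = canonical basis
gamma_star = g_optimal_design(X)                               # close to the uniform design here
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]                       # 3-node path, both orientations
A_t = randomized_g_allocation(X, gamma_star, edges, n_nodes=3, T=2000, rng=rng)
Z = [np.kron(xp, x) for x in X for xp in X]
print(max(z @ np.linalg.pinv(A_t) @ z for z in Z))             # close to d^2 / (mT) = 9 / 8000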
Thus, although rounding procedures give a more precise number of times each node-arm should be pulled, the problem of allocating them to the graph remains open, whereas by concentration of the measure, randomized sampling methods imply that the associated edge-arms follow the optimal probability distribution Γ ⋆ . In this thesis, we present a simple and standard randomized G-allocation strategy, but other more elaborated methods could be considered, as long as they include the necessary randomness. On the choice of the G-allocation problem. We have considered the G-allocation optimization problem (G-opt-Z), however, one could want to directly minimize max (z,z ′ )∈Z 2 ∥z -z ′ ∥ A -1 t , known as the XY-allocation [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]. Hence, one may want to construct edge-arms that follow the distribution Γ ⋆ XY solution of the relaxed XY-allocation problem: min Γ ∈S Z max (z ′ ,z ′′ )∈Z 2 z ′ -z ′′ ⊤ z∈Z Γ z zz ⊤ -1 z ′ -z ′′ . Although efficient in the linear case, this approach outputs a distribution Γ ⋆ XY which is not a joint probability distribution of two independent random variables, and so cannot be decomposed as the product of its marginals. Hence, there is no algorithm that allocates identically and indepen-dently the nodes of the graph to create edge-arms following Γ ⋆ XY . Thus, we will rather deal with the upper bound given by the G-allocation as it allows sampling over the nodes. Static design versus adaptive design. Adaptive designs as proposed for example in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] and [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF] provide a strong improvement over static designs in the case of linear bandits. In our particular setting however, it is crucial to be able to adapt the edge-arms sampling rule to the node-arms, which is possible thanks to Theorem 4.1. This result requires a set of edge-arms Z expressed as a product of node-arms set X . Extending the adaptive design of [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF] to our setting would eliminate edge-arms from Z at each phase, without trivial guarantees that the newly obtained edge-arms set Z ′ ⊂ Z could still be derived from another node-arms set X ′ ⊂ X . An adaptive approach is definitely a natural and promising extension of our method, and is left for future work. Convergence Analysis We now prove the validity of the random sampling procedure detailed in Algorithm 8 by controlling the quality of the approximation max z∈Z z ⊤ A -1 t z with respect to the optimum of the G-allocation optimization problem max z ′ ∈Z z ′ ⊤ mt i=1 z ⋆ i z ⋆⊤ i -1 z ′ described in (G-opt-Z). As is usually done in the optimal design literature (see e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Sagnol | Optimal design of experiments with application to the inference of traffic matrices in large networks: second order cone programming and submodularity[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]) we bound the relative error β t : max z∈Z z ⊤ A -1 t z ≤ (1 + β t ) max z ′ ∈Z z ′ ⊤ mt i=1 z ⋆ i z ⋆⊤ i -1 z ′ . Our analysis relies on several results from matrix concentration theory. 
One may refer for instance to [START_REF] Tropp | An introduction to matrix concentration inequalities[END_REF] and references therein for an extended introduction on that matter. We first introduce a few additional notations. Let f Z be the function such that for any non-singular matrix Q ∈ R d 2 ×d 2 , f Z (Q) = max z∈Z z ⊤ Q -1 z and for any distribution Γ ∈ S Z we recall that Σ Z (Γ ) ≜ z∈Z Γ z zz ⊤ is the associated covariance matrix. Finally let A ⋆ t = mt i=1 z ⋆ i z ⋆⊤ i be the G-optimal design matrix constructed during t rounds. Theorem 4.2. Let Γ ⋆ be a solution of the optimization problem (G-relaxed-Z). Let 0 < δ ≤ 1 and let t 0 be such that t 0 = 2Ld 2 log 2d 2 /δ /λ min , where λ min is the smallest eigenvalue of the covariance matrix 1 K 2 z∈Z zz ⊤ . Then, at each round t ≥ t 0 with probability at least 1 -δ, the randomized G-allocation strategy for graphical bilinear bandits in Algorithm 8 produces a matrix A t such that: f Z (A t ) ≤ (1 + β t )f Z (A ⋆ t ) , where β t = Ld 2 mλ 2 min 2v t log 2d 2 δ + o 1 √ t , and v ≜ E (A 1 -EA 1 ) 2 . To prove this confidence bound, we need the two following propositions. Proposition 4.3 ([96] , Chapter 5 and 6). Let Z 1 , . . . , Z t be i.i.d. positive semi-definite random matrices in R d 2 ×d 2 , such that there exists L > 0 verifying 0 ⪯ Z 1 ⪯ mLI. Let A t be defined as A t ≜ t s=1 Z s . Then, for any 0 < ε < 1, one can lowerbound λ min (A t ), the minimum eigenvalue of A t , as follows: P(λ min (A t ) ≤ (1 -ε)λ min (EA t )) ≤ d 2 e - tε 2 λ min (EZ 1 ) 2mL . If in addition, there exists some v > 0, such that ∥E (Z 1 -EZ 1 ) 2 ∥ ≤ v, then for any u > 0, one has P(∥S t ∥ ≥ u) ≤ 2d 2 e - u 2 2mLu/3 + 2tv , From the second inequality, [START_REF] Rizk | Refined bounds for randomized experimental design[END_REF] derived a slightly different inequality that we use here : Proposition 4.4 ([78], Appendix A.3). Let Z 1 , . . . , Z t be t i.i.d. random symmetric matrices in R d 2 ×d 2 such that there exists L > 0 such that ∥Z 1 ∥ ≤ mL, almost surely. Let A t ≜ t i=1 Z i . Then, for any u > 0, one has: P ∥A t -EA t ∥ ≥ √ 2tvu + mLu 3 ≤ d 2 e -u . where v ≜ E (Z 1 -EZ 1 ) 2 . Finally, to prove our main theorem, we need the following lemma. Lemma 4.2. One has Σ Z (Γ ⋆ ) -1 ≤ d 2 λ min , where λ min is the smallest eigenvalue of the covariance matrix 1 K 2 z∈Z z ⊤ z. Proof. Define B = z ∈ R d 2 : ∥z∥ = 1 . First, for any semi-definite matrix A ∈ R d 2 ×d 2 , we have ∥A∥ = max z∈B z ⊤ Az. Because Σ Z (Γ ⋆ ) -1 is positive definite and symmetric, and by Rayleigh-Ritz theorem, Σ Z (Γ ⋆ ) -1 = max z∈B z ⊤ Σ Z (Γ ⋆ ) -1 z z ⊤ z = max z∈B z ⊤ Σ Z (Γ ⋆ ) -1 z . Let Z ∈ R K 2 ×d 2 be the matrix whose rows are vectors of Z in an arbitrary order. Notice that Z spans R d 2 , since X spans R d . Now for any z ∈ B, define β (z) ∈ R K 2 as a vector such that z = Z ⊤ β (z) . Then, Σ Z (Γ ⋆ ) -1 = max z∈B β (z) ⊤ ZΣ Z (Γ ⋆ ) -1 Z ⊤ β (z) = max z∈B d 2 i=1 d 2 j=1 β (z) i β (z) j z ⊤ i Σ Z (Γ ⋆ ) -1 z j ≤ max z∈B β (z) 2 1 × max i,j z ⊤ i Σ Z (Γ ⋆ ) -1 z j . Define zi = Σ Z (Γ ⋆ ) -1 2 z i . Clearly, max i,j z ⊤ i Σ Z (Γ ⋆ ) -1 z j = max i,j z⊤ i zj = max i z2 i . So we have Σ Z (Γ ⋆ ) -1 ≤ max z∈B β (z) 2 1 × max z ′ ∈Z z ′⊤ Σ Z (Γ ⋆ ) -1 z ′ ≤ max z∈B β (z) 2 1 d 2 . The last inequality comes from Kiefer and Wolfowitz equivalence theorem [START_REF] Kiefer | The Equivalence of Two Extremum Problems[END_REF]. Now observe that β (z) can be obtained by least square regression : β (z) = ZZ ⊤ -1 Zz = Z ⊤ † z where (•) † is the Moore-Penrose pseudo-inverse. 
Note that ZZ ⊤ is a Gram matrix. It is known that for a matrix having singular values {σ i } i , its pseudo-inverse has singular values 1 σ i if σ i ̸ = 0 0 otherwise for all i. So for z ∈ B, we have: β (z) 2 1 ≤ K 2 β (z) 2 2 ≤ K 2 Z ⊤ † 2 ≤ K 2 σ min (Z) 2 , where σ min (•) refers to the smallest singular value. Let λ min (•) refer to the smallest eigenvalue. Noting that σ min (Z) 2 = λ min Z ⊤ Z = K 2 λ min 1 K 2 z∈Z zz ⊤ , yields the desired result. We can now prove the main theorem. Proof of Theorem 4.2. Let (X s ) s=1,...,t , . . . , (X (n) s ) s=1,...,t be nt i.i.d. random vectors in R d such that for all x ∈ X , P X (1) 1 = x = γ ⋆ x . For (i, j) ∈ E and 1 ≤ s ≤ t, we define the random matrix Z (i,j) s by Z (i,j) s = vec X i s X j⊤ s vec X i s X j⊤ s ⊤ . Finally, let us define for all 1 ≤ s ≤ t, the edge-wise sum Z s ∈ R d 2 ×d 2 , that is Z s = (i,j)∈E Z (i,j) s . One can easily notice that Z 1 , . . . , Z t are i.i.d. random matrices. We define the overall sum A t = t s=1 Z s and our goal is to measure how close f Z (A t ) is to f Z (mt × Σ Z (Γ ⋆ )) , where mt corresponds to the total number of sampled arms z ∈ Z during the t rounds of the learning procedure. By definition of A t , one has max z∈Z z ⊤ (EA t ) -1 z = max z∈Z z ⊤   t s=1 (i,j)∈E E Z (i,j) s   -1 z = max z∈Z z ⊤   t s=1 (i,j)∈E x,x ′ ∈X γ ⋆ x γ ⋆ x ′ vec (xx ′⊤ ) vec (xx ′⊤ ) ⊤   -1 z = max z∈Z z ⊤   t s=1 (i,j)∈E z ′ ∈Z Γ ⋆ z ′ z ′ z ′⊤   -1 z = f Z (mtΣ Z (Γ ⋆ )) . This allows us to bound the relative error as follows: β t = f Z (A t ) f Z (mt × Σ Z (Γ ⋆ )) -1 = max z∈Z z ⊤ A -1 t -(EA t ) -1 + (EA t ) -1 z f Z (mt × Σ Z (Γ ⋆ )) -1 ≤ max z∈Z z ⊤ A -1 t -(EA t ) -1 z f Z (mt × Σ Z (Γ ⋆ )) . Using the fact that f Z (mtΣ Z (Γ ⋆ )) = d 2 /mt [START_REF] Kiefer | The Equivalence of Two Extremum Problems[END_REF], we obtain β t ≤ mt d 2 × max z∈Z z ⊤ A -1 t -(EA t ) -1 z ≤ mt d 2 × max z∈Z ∥z∥ 2 ∥A -1 t -(EA t ) -1 ∥ ≤ mtL d 2 × ∥A -1 t -(EA t ) -1 ∥ . Therefore, controlling the quantity ∥A -1 t -(EA t ) -1 ∥ will allow us to provide an upper bound on the relative error. Notice that ∥A -1 t -(EA t ) -1 ∥ = ∥A -1 t (EA t -A t )(EA t ) -1 ∥ ≤ ∥A -1 t ∥ ∥EA t -A t ∥ ∥(EA t ) -1 ∥ . Using Proposition 4.3, we know that for any d 2 e -tλ min (EZ 1 ) mL < δ h < 1, the following holds: ∥A -1 t ∥ ≤ ∥(EA t ) -1 ∥ 1 -2mL t ∥(EZ 1 ) -1 ∥ log(d 2 /δ h ) , with probability at least 1 -δ h . Similarly, using Proposition 4.4, for any 0 < δ b < 1, we have ∥A t -EA t ∥ ≤ mL 3 log d 2 δ b + 2tv 2 log d 2 δ b , with probability at least 1 -δ b . Combining these two results with a union bound leads to the following bound, with probability 1 -(δ b + δ h ): A -1 t -(EA t ) -1 ≤ (EA t ) -1 2 (mL/3) log d 2 /δ b + 2tv log(d 2 /δ b ) 1 -(2mL/t) (EZ 1 ) -1 log(d 2 /δ h ) . In order to obtain a unified bound depending on one confidence parameter 1 -δ, one could optimize over δ b and δ h , subject to δ b + δ h = δ. This leads to a messy result and a negligible improvement. One can use simple values δ b = δ h = δ/2, so the overall bound becomes, with probability 1 -δ: ∥A -1 t -(EA t ) -1 ∥ ≤ 1 tm 2 Σ Z (Γ ⋆ ) -1 2 2v t log 2d 2 δ   1 + m 2 L 2 log(2d 2 /δ) 18vt 1 -2L∥Σ Z (Γ ⋆ ) -1 ∥ log(2d 2 /δ) t   . This can finally be formulated as follows: A -1 t -(EA t ) -1 ≤ 1 tm 2 Σ Z (Γ ⋆ ) -1 2 2v t log 2d 2 δ + o 1 t √ t . 
Using the obtained bound on ∥A -1 t -E(A t ) -1 ∥ yields f Z (A t ) f Z (mt × Σ Z (Γ ⋆ )) -1 ≤ mtL d 2 × 1 tm 2 Σ Z (Γ ⋆ ) -1 2 2v t log 2d 2 δ + o 1 t √ t ≤ L md 2 Σ Z (Γ ⋆ ) -1 2 2v t log 2d 2 δ + o 1 √ t , By noticing that f Z (mt × Σ Z (Γ ⋆ )) ≤ f Z (A ⋆ t ) and by using Lemma 4.2, the result holds. We have just shown that the approximation value max z∈Z z ⊤ A -1 t z converges to the optimal value with a rate of O √ v/(m √ t) . In Section 4.3.1, we show that the best case graph implies a v = O(m) matching the convergence rate O 1/ √ mt of a linear bandit algorithm using randomized sampling to pull mt edge-arms without (graphical) constraints. Moreover, we will see that the worst case graph implies that v = O m 2 . Since we filled the gap between our constraint objective and the problem of best arm identification in linear bandits, thanks to Theorem 4.1 and 4.2, we are able to extend known results for best arm identification in linear bandits on the sample complexity and its associated lower bound. Corollary 4.1 ([90], Theorem 1). If the G-allocation is implemented with the random strategy of Algorithm 8, resulting in an β t -approximation, then with probability at least 1 -δ, the best arm obtained with θt is z ⋆ and t ≤ 128σ 2 d 2 (1 + β t ) log 6m 2 t 2 K 4 δπ m∆ 2 min , where ∆ min = min z∈Z\{z⋆} (z ⋆ -z) ⊤ θ ⋆ . Moreover, let τ be the number of rounds sufficient for any algorithm to determine the best arm with probability at least 1 -δ. A lower bound on the expectation of τ can be obtained from the one derived for the problem of best arm identification in linear bandits (see e.g., Theorem 1 in [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF]): E[τ ] ≥ min Γ ∈S Z max z∈Z\{z⋆} log 1 2.4δ 2σ 2 ∥z ⋆ -z∥ 2 Σ Z (Γ ) -1 m (z ⋆ -z) ⊤ θ ⋆ 2 . As observed in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] this lower bound can be upper bounded, in the worst case, by 4σ 2 d 2 /(m∆ 2 min ) which matches our bound up to log terms and the relative error β t . Note, however, that since we borrow this lower bound from the standard linear bandit literature, it does not take into account the graph constraint, so it may never be reached. Case where M ⋆ is not symmetric Consider the relaxation of the assumption we made at the beginning of the chapter, namely that M ⋆ is symmetric. We now consider that M ⋆ is not necessarily symmetric. We recall here that in the graph G = (V, E) associated with the graphical bilinear bandits framework, (i, j) ∈ E if and only if (j, i) ∈ E. Therefore, for a given allocation (x (1) , . . . , x (n) ) ∈ X n , we can write the associated expected global reward as follows: (i,j)∈E x (i)⊤ M ⋆ x (j) = n i=1 j∈N (i) j>i x (i)⊤ M ⋆ x (j) + x (j)⊤ M ⋆ x (i) = n i=1 j∈N (i) j>i x (i)⊤ M ⋆ x (j) + x (j)⊤ M ⋆ x (i) ⊤ = n i=1 j∈N (i) j>i x (i)⊤ M ⋆ x (j) + x (i)⊤ M ⊤ ⋆ x (j) = n i=1 j∈N (i) j>i x (i)⊤ M ⋆ x (j) + M ⊤ ⋆ x (j) = n i=1 j∈N (i) j>i x (i)⊤ M ⋆ + M ⊤ ⋆ x (j) . Let us denote M⋆ = M ⋆ + M ⊤ ⋆ . One can notice that M⋆ is symmetric. Consider the edge set Ẽ = {(i, j) ∈ E | i ∈ V, j ∈ V, j > i}, there are exactly m/2 edges in Ẽ and the objective of the central entity is to find, within a minimum number of rounds, the joint arm that maximises (i,j)∈ Ẽ x (i)⊤ M⋆ x (j) . (4.4) Every time the central entity chooses a joint arm (x (1) t , . . . , x (n) t ) at time t, it receives for each edge (i, j) ∈ Ẽ the rewards y (i,j) t = x (i)⊤ t Mx (j) and y (j,i) t = x (j)⊤ t Mx (i) . 
By summing y (i,j) t and y (j,i) , one has, y (i,j) t + y (j,i) t = vec x (i) t x (j)⊤ t , vec (M ⋆ ) + z x (j) t x (i) t , vec (M ⋆ ) + η (i,j) t + η (j,i) t = vec x (i) t x (j)⊤ t , vec (M ⋆ ) + vec x (i) t x (j)⊤ t , vec M ⊤ ⋆ + η (i,j) t + η (j,i) t = vec x (i) t x (j)⊤ t , vec M ⋆ + M ⊤ ⋆ + η (i,j) t + η (j,i) t = vec x (i) t x (j)⊤ t , vec M⋆ + η (i,j) t + η (j,i) t = z (i,j) t , θ⋆ + η (i,j) t + η (j,i) t √ 2σ-sub-gaussian random variable , where θ⋆ = vec M⋆ is the vectorized version of the true parameter matrix M⋆ . Hence at each round t, and for each edge (i, j) ∈ Ẽ, the central entity aggregate the reward y (i,j) t and y (j,i) t to get a reward of the form ⟨z, θ⋆ ⟩ + η with z ∈ Z and where η is a √ 2σsubgaussian random variable, that can be used to compute an estimate θt of θ⋆ . Notice that the amount of rewards y (i,j) t obtained during a round is m, but since the central entity needs to sum y (i,j) t + y (j,i) t to compute θt , we thus consider only m/2 obtained information per round. Solving this best arm identification problem for the graphical bilinear bandits with a nonsymmetric matrix M ⋆ and an edge set E is exactly solving the best arm identification problem for graphical bilinear bandits with the symmetric matrix M⋆ and the edge set Ẽ. The solution of this problem is exactly what we proposed throughout the previous sections. Influence of the graph structure on the convergence rate 4.3.1 Characterization of the variance associated with the randomized strategy The convergence bound in Theorem 4.2 depends on v = ∥E (A 1 -EA 1 ) 2 ∥. In this section, we characterize the impact of the graph structure on this quantity and, by extension, on the convergence rate. For i ∈ {1, . . . , n} and s ∈ {1, . . . , t}, let X (i) s be i.i.d. random vectors in X such that for all x ∈ X , P X (1) 1 = x = γ ⋆ x . Each X (i) s is to be viewed as the random arm pulled at round s for the node i. Hence one can write A 1 = (i,j)∈E vec X (i) 1 X (j)⊤ 1 vec X (i) 1 X (j)⊤ 1 ⊤ . Let denote A (i,j) 1 = vec (X (i) 1 X (j)⊤ 1 ) vec (X (i) 1 X (j)⊤ 1 ) ⊤ such that A 1 = (i,j)∈E A (i,j) 1 and let define for any random matrices A and B the operators Var(A) ≜ E (A -E[A]) 2 and Cov(A, B) ≜ E (A -E[A])(B -E[B] ) . We can derive the variance of A 1 as follows: Var(A 1 ) = (i,j)∈E Var A (i,j) 1 + (i,j)∈E (k,l)∈E (k,l)̸ =(i,j) Cov A (i,j) 1 , A (k,l) 1 . One can decompose the sum of the covariances into three groups: a first group where k ̸ = i, j and l ̸ = i, j which means that the two edges do not share any node and Cov(A (i,j) 1 , A (k,l) 1 ) = 0, and two other groups where the edges share at least one node. For all edges (i, j) ∈ E we consider either the edges (i, k) ∈ E where k ̸ = j, yielding Cov(A (i,j) 1 , A (i,k) 1 ) or the edges (j, k) ∈ E, yielding Cov(A (i,j) 1 , A (j,k) 1 ). Hence, one has Var(A 1 ) = (i,j)∈E Var A (i,j) 1 + n i=1 j∈N (i) k∈N (i) k̸ =j Cov A (i,j) 1 , A (i,k) 1 + n i=1 j∈N (i) k∈N (j) Cov A (i,j) 1 , A (j,k) 1 . Let P ≥ 0 be such that for all (i, j) ∈ E, Var A (i,j) 1 ⪯ P × I and M, N ≥ 0 such that for all (i, j) ∈ E: ∀k ∈ N (i), Cov A (i,j) 1 , A (i,k) 1 ⪯ M × I , ∀k ∈ N (j), Cov A (i,j) 1 , A (j,k) 1 ⪯ N × I . We want to compare the quantity ∥Var(A 1 )∥ for different types of graphs: star, complete, circle and a matching graph. To have a fair comparison, we want graphs that reveal the same number of rewards at each round of the learning procedure. 
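Before specialising to these graphs, the small helper below counts, for a given edge set, how many terms of each type appear in the decomposition of Var(A_1) above (variance terms, covariances of edges sharing their first node, and covariances of consecutive edges); it makes explicit why a graph with a highly connected node accumulates many covariance terms. The two example graphs both have m = 8 directed edges.

def covariance_term_counts(edges):
    """Count the terms appearing in the decomposition of Var(A_1).
    `edges` is a list of directed edges containing both (i, j) and (j, i)."""
    var_terms = len(edges)                                   # one Var(A_1^{(i,j)}) per edge
    shared_tail = sum(1 for (i, j) in edges for (k, l) in edges
                      if k == i and l != j)                  # Cov(A^{(i,j)}, A^{(i,k)})
    head_to_tail = sum(1 for (i, j) in edges for (k, l) in edges
                       if k == j)                            # Cov(A^{(i,j)}, A^{(j,k)})
    return var_terms, shared_tail, head_to_tail

def star(n):      # node 0 is the centre
    und = [(0, i) for i in range(1, n)]
    return und + [(j, i) for (i, j) in und]

def matching(n):  # n even, disjoint pairs of nodes
    und = [(2 * i, 2 * i + 1) for i in range(n // 2)]
    return und + [(j, i) for (i, j) in und]

print(covariance_term_counts(star(5)))       # many covariance terms through the centre node
print(covariance_term_counts(matching(8)))   # only the (i, j) / (j, i) pairs remain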
Hence, we denote respectively n S , n Co , n Ci and n M the number of nodes in a star, complete, circle and matching graph of m edges and get: Star graph. We have, Var(A 1 ) ⪯ m × P • I + (n S -1)(n S -2)M • I + n S (n S -1)N • I . Since the star graph of m edges has a number of nodes n S = m/2 + 1, we have ∥Var(A 1 )∥ ≤ m × P + (M + N ) × O m 2 . Complete graph. As for the star graph, Var(A 1 ) ⪯ m × P • I + n Co (n Co -1)(n Co -2)M • I + n Co (n Co -1)(n Co -1)N • I . Since the complete graph of m edges has a number of nodes n Co = 1 + √ 4m + 1 /2, we have ∥Var(A 1 )∥ ≤ m × P + (M + N ) × O m √ m . Circle graph. Again, Var(A 1 ) ⪯ m × P • I + 2n Ci M • I + 4n Ci N • I . Since the circle graph of m edges has a number of nodes n Ci = m/2, we have ∥Var(Z 1 )∥ ≤ m × P + (M + N ) × O(m) . Matching graph. Finally, Var(A 1 ) ⪯ m × P • I + n M N • I . Since the matching graph of m edges has a number of nodes n M = m, we have ∥Var(A 1 )∥ ≤ m × P + m × N . We thus obtain the bounds stated in Table 4.2. Graph Upper bound on ∥Var(A 1 )∥ These four examples evidence the strong dependency of the variance on the structure of the graph. The more independent the edges are (i.e., with no common nodes), the smaller the quantity ∥Var(A 1 )∥ is. For a fixed number of edges m, the best case is the matching graph where no edge share the same node and the worst case is the star graph where all the edges share a central node. β t Star mP + (M + N )O m 2 O 1/ √ t Complete mP + (M + N )O(m √ m) O 1/ m 1 4 √ t Circle mP + (M + N )O(m) O 1/ √ mt Matching mP + mN O 1/ √ mt Experimental results validating the dependence on the graph In this section, we consider the modified version of a standard experiment introduced by [START_REF] Soare | Best-arm identification in linear bandits[END_REF] and used in most papers on best arm identification in linear bandits [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF][START_REF] Tao | Best Arm Identification in Linear Bandits with Linear Dimension Dependency[END_REF]106,109] to evaluate the sample complexity of our algorithm on different graphs. We consider d+1 node-arms in X ⊂ R d where d ≥ 2. This node-arm set is made of the d vectors (e 1 , . . . , e d ) forming the canonical basis of R d and one additional arm x d+1 = (cos(ω), sin(ω), 0, . . . , 0) ⊤ with ω ∈]0, π/2]. Note that by construction, the edge-arm set Z contains the canonical basis (e ′ 1 , . . . , e ′ d 2 ) of R d 2 . The parameter matrix M ⋆ has its first coordinate equals to 2 and the others equal to 0 which makes θ ⋆ = vec (M ⋆ ) = (2, 0, . . . , 0) ⊤ ∈ R d 2 . The best edge-arm is thus z ⋆ = z (1,1) = e ′ 1 . One can note that when ω tends to 0, it is harder to differentiate this arm from z (d+1,d+1) = vec x (d+1) x ⊤ (d+1) than from the other arms. We set η (i,j) t ∼ N (0, 1), for all edges (i, j) and round t. We consider two cases where ω = 0.1 which makes the edge-arms z (1,1) and z (d+1,d+1) difficult to differentiate, and ω = π/2 which makes the edge-arm z (1,1) easily identifiable as the optimal edge-arm. For each of these two cases, we evaluate the influence of the graph structure, the number of edges m and the edge-arm space dimension d 2 on the sampling complexity. Results are shown in Figure 4.3. When ω = 0.1, the type of the graph does not impact the number of rounds needed to verify the stopping condition. This is mainly due to the fact that the magnitude of its associated variance is negligible with respect to the number of rounds. 
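For reference, the instance just described can be reproduced in a few lines; the sketch below builds the node-arm set X and θ_* and prints the smallest gap between the best edge-arm z_(1,1) and its competitors, which shrinks with ω and makes the instance hard when ω is small.

import numpy as np

d, omega = 4, 0.1
X = [np.eye(d)[i] for i in range(d)]                                     # canonical basis e_1, ..., e_d
X.append(np.array([np.cos(omega), np.sin(omega)] + [0.0] * (d - 2)))     # the extra arm x_{d+1}

M_star = np.zeros((d, d))
M_star[0, 0] = 2.0
theta_star = M_star.reshape(-1)                                          # theta_* = (2, 0, ..., 0)

Z = {(a, b): np.outer(X[a], X[b]).reshape(-1) for a in range(d + 1) for b in range(d + 1)}
rewards = {key: z @ theta_star for key, z in Z.items()}
gaps = [rewards[(0, 0)] - r for key, r in rewards.items() if key != (0, 0)]
print(min(gaps))   # of order omega^2 for small omega; much larger when omega = pi / 2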
Hence, even if we vary the number of edges or the dimension, we get the same performance for any type of graph, including the matching graph. This implies that our algorithm performs as well as a linear bandit that draws m edge-arms in parallel at each round. When ω = π/2, the number of rounds needed to verify the stopping condition is smaller and the magnitude of the variance is no longer negligible. Indeed, when the number of edges or the dimension increases, we notice that the star graph takes more time to satisfy the stopping condition. Moreover, note that the sample complexities obtained for the circle and the matching graph are similar. This observation is in line with the dependency on the variance shown in Table 4.2. Conclusion & Perspectives We provided an algorithm based on the G-allocation strategy that uses randomized sampling over the nodes to return a good estimate M̂ that can be used instead of M⋆ to identify the couple (x⋆, x′⋆). Moreover, we highlighted the impact of the graph structure on the convergence rate of our algorithm and validated our theoretical results with experiments. While the estimate M̂ is constructed in order to identify the couple (x⋆, x′⋆) that is used in Algorithm 5 and gives a (1+ξ)/2-approximation solution, a possible improvement is to construct an estimate M̂ that identifies with high probability the couple (x̃⋆, x̃′⋆) used in Algorithm 6, which gives the better ((1+ξ)/2 + ϵ)-approximation guarantee. We leave this extension for future work. Moreover, in this chapter we based our algorithm on the G-allocation strategy, which minimises max_{z∈Z} 2∥z∥_{A_t^{-1}}, an upper bound of max_{(z,z′)∈Z²} ∥z − z′∥_{A_t^{-1}}, the objective of the XY-allocation strategy. An algorithm based on the XY-allocation therefore represents a promising extension of our model. In this chapter, we again assume that we do not know the parameter matrix M⋆ and, as described in the problem setting in Section 1.2, a central entity faces the graphical bilinear bandits problem where at each round it chooses an arm for each node of the graph and observes a bilinear reward for each edge of the graph. The objective of the central entity is the following: Objective: Design an algorithm that maximizes the expected cumulative global reward obtained during T rounds, Σ_{t=1}^{T} Σ_{(i,j)∈E} x_t^{(i)⊤} M⋆ x_t^{(j)}. We will naturally rely on some ideas and results presented in Chapter 3, where the matrix was assumed to be known by the central entity. We follow the notations we established in Section 1.2.1. Optimism in the face of uncertainty for graphical bilinear bandits Preliminaries Let us recall that maximizing the cumulative reward is equivalent to minimizing the associated regret. We thus define the global pseudo-regret over T rounds as follows: R(T) = Σ_{t=1}^{T} [ Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)} − Σ_{(i,j)∈E} x_t^{(i)⊤} M⋆ x_t^{(j)} ]. We recall that the objective of the learner is to have a pseudo-regret R(T) such that lim_{T→∞} R(T)/T = 0. We know from Theorem 3.1 that finding the best joint arm (x_⋆^{(1)}, . . . , x_⋆^{(n)}) = arg max_{(x^{(1)},...,x^{(n)})∈X^n} Σ_{(i,j)∈E} x^{(i)⊤} M⋆ x^{(j)} is NP-Hard with respect to the number of agents n. We extend this result in the following corollary. Corollary 5.1. There does not exist a polynomial time algorithm in n such that lim_{T→∞} R(T)/T = 0, (5.1) for any instance of the graphical bilinear bandits described in Section 1.2, unless P = NP. Proof.
Suppose that there exists a polynomial time algorithm in n such that lim_{T→∞} R(T)/T = 0 for any instance of the graphical bilinear bandits described in Section 1.2. In particular, consider the class of instances where the graph G is of degree 3 (i.e., each agent has 3 neighbors), where X = {e_0, e_1} is the canonical basis of R^2 and the parameter matrix is M⋆ = ( 0 1 ; 1 0 ). The pseudo-regret of such an algorithm satisfies: there exists f : R → R with lim_{T→∞} f(T) = 0 such that R(T) = T × f(T), with a computational time in poly(n) per iteration. Let us consider the best case scenario for the learner and assume that the matrix M⋆ is known. Then, for t̄ = arg max_{t=1,...,T} Σ_{(i,j)∈E} x_t^{(i)⊤} M⋆ x_t^{(j)}, we have T Σ_{(i,j)∈E} x_t̄^{(i)⊤} M⋆ x_t̄^{(j)} ≥ Σ_{t=1}^{T} Σ_{(i,j)∈E} x_t^{(i)⊤} M⋆ x_t^{(j)} ≥ T ( Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)} − f(T) ), where the last inequality uses R(T) = T × f(T). This gives Σ_{(i,j)∈E} x_t̄^{(i)⊤} M⋆ x_t̄^{(j)} ≥ Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)} − f(T). As T → ∞, f(T) → 0, hence consider the cases where f(T) ≤ c × Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)}, for a certain constant c > 0. It gives that Σ_{(i,j)∈E} x_t̄^{(i)⊤} M⋆ x_t̄^{(j)} ≥ (1 − c) Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)}, (5.2) with a computational time in poly(n) per iteration. However, we know from the proof of Theorem 3.1 that solving the optimization problem max_{(x^{(1)},...,x^{(n)})} Σ_{(i,j)∈E} x^{(i)⊤} M⋆ x^{(j)} is the same as solving the max-cut problem. Moreover, we know from Theorem 1 in [START_REF] Berman | On some tighter inapproximability results[END_REF] that there does not exist a polynomial time algorithm in the number of nodes that has an approximation ratio better than 330/331 ≈ 0.997 of the optimal solution of the max-cut problem for graphs of degree 3, unless P = NP. By taking c = 0.002, Eq. (5.2) gives an approximation ratio of 0.998 in polynomial time, which contradicts this inapproximability bound. This concludes the proof. Hence, the objective of designing an algorithm with a sublinear regret in T is not feasible in polynomial time with respect to n. However, some NP-hard problems are α-approximable (for some α ∈ (0, 1]), which means that there exists a polynomial-time algorithm guaranteed to produce solutions whose values are at least α times the value of the optimal solution. We refer the reader to Chapter 3 for more information on approximating the optimal solution of our problem. For this kind of problem, it makes sense to consider the α-pseudo-regret as in [START_REF] Chen | Combinatorial multi-armed bandit: General framework and applications[END_REF][START_REF] Kakade | Playing games with approximation algorithms[END_REF], which is defined for all α ∈ (0, 1] as follows: R_α(T) ≜ Σ_{t=1}^{T} [ α Σ_{(i,j)∈E} x_⋆^{(i)⊤} M⋆ x_⋆^{(j)} − Σ_{(i,j)∈E} x_t^{(i)⊤} M⋆ x_t^{(j)} ], and to set the objective of designing an algorithm with a sublinear α-regret. Finally, as we did in Chapter 4, let us recall that the reward obtained for each edge of the graph at each round can be seen as a noisy linear reward in higher dimension [START_REF] Jun | Bilinear Bandits with Low-rank Structure[END_REF] with y_t^{(i,j)} = ⟨vec(x_t^{(i)} x_t^{(j)⊤}), vec(M⋆)⟩ + η_t^{(i,j)}. To simplify the notation, let us refer to any x ∈ X as a node-arm. Let us use the notation z_{xx′} ≜ vec(xx′⊤), and define the arm set Z = {z_{xx′} | (x, x′) ∈ X²}, where any z ∈ Z will be referred to as an edge-arm. If the arm x_t^{(i)} ∈ X represents the node-arm allocated to the node i ∈ V at time t, for each edge (i, j) ∈ E we will denote the associated edge-arm by z_t^{(i,j)} ≜ vec(x_t^{(i)} x_t^{(j)⊤}) ∈ Z and define θ⋆ = vec(M⋆) the vectorized version of the unknown matrix M⋆.
With those notations, the (now) linear reward can be rewritten as follows: y (i,j) t = z (i,j) t , θ ⋆ + η (i,j) t . (5.3) Assumption 5.1 (Bounded edge-arm norm and parameter norm). We consider that there exists L > 0, for all edge-arm z ∈ Z, such that ∥z∥ 2 ≤ L. Moreover, for some S > 0, the norm of the true parameter θ ⋆ is such that ∥θ ⋆ ∥ 2 ≤ S. Assumption 5.2 (Positive and bounded rewards). We consider that for any z ∈ Z, the associated expected reward ⟨z, θ ⋆ ⟩ is such that 0 ≤ ⟨z, θ ⋆ ⟩ ≤ LS In this chapter, we choose to design an algorithm based on the principle of optimism in the face of uncertainty [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF], and in the case of a linear reward [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF][START_REF] Li | A contextual-bandit approach to personalized news article recommendation[END_REF], we need to maintain an estimator of the true parameter θ ⋆ . To do so, let us define for all rounds t ∈ {1, . . . , T } the OLS estimator of θ ⋆ as follows: θt = A -1 t b t , (5.4) where, A t = λI d 2 + t s=1 (i,j)∈E z (i,j) s z (i,j)⊤ s , with λ > 0 a regularization parameter and b t = t s=1 (i,j)∈E z (i,j) s y (i,j) s . We define also the confidence set C t (δ) = θ : ∥θ -θt ∥ A -1 t ≤ σ d 2 log 1 + tmL 2 /λ δ + √ λS , where with probability 1 -δ, we have that θ ⋆ ∈ C t (δ) for all t ∈ {1, . . . , T }, and δ ∈ (0, 1]. Algorithm and analysis of the regret In chapter 3, we presented two algorithms (Algorithm 5 and 6) that use the true parameter matrix M ⋆ to return an allocation of arm that achieve an α-approximation solution of maximizing the global reward. Naturally, since we do not have access to M ⋆ , we cannot use it directly at each iteration to maximise the cumulative global reward (and thus minimize the associated regret). Nevertheless, one can use the constructed estimator θt and the principal of optimism in the face of uncertainty to overcome the fact that M ⋆ is unknown. Indeed, we recall that in Algorithm 5, the couple (x ⋆ , x ′ ⋆ ) is chosen as follows, (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ (5.5) = arg max (x,x ′ )∈X 2 ⟨z xx ′ + z x ′ x , θ ⋆ ⟩ , (5.6) and is used to create as much as possible edge arms of the form z xx ′ and z x ′ x in the graph. Here instead, at each round t, the central entity chooses optimistically the couple (x t , x ′ t ) as follows, (x t , x ′ t ) = arg max (x,x ′ )∈X 2 max θ∈C t-1 (δ) ⟨z xx ′ + z x ′ x , θ⟩ , and then allocates the node-arms to maximize the number of locally-optimal edge-arms z xtx ′ t and z x ′ t xt . The method is presented in Algorithm 9 Theorem 5.1. Given the Graphical Bilinear Bandits problem defined in Section 1.2.1, let 0 ≤ ξ ≤ 1 be a problem-dependent parameter defined by ξ = min x∈X ⟨z xx , θ ⋆ ⟩ 1 m (i,j)∈E z (i,j) ⋆ , θ ⋆ ≥ 0 , and set α = 1+ξ 2 , then the α-regret of Algorithm 9 satisfies R α (T ) ≤ Õ σd 2 + S √ λ T m max(2, (LS) 2 ) + LSm d 2 log 2 T mL 2 /λ δ , where Õ hides the logarithmic factors. Algorithm 9: Adaptation of OFUL algorithm for Graphical Bilinear Bandits Input : graph G = (V, E), node-arm set X (V 1 , V 2 ) = Approx-MAX-CUT(G) for t = 1 to T do // Find the optimistic best couple of node-arms x t , x ′ t , θt-1 = arg max (x,x ′ ,θ)∈X 2 ×C t-1 ⟨z xx ′ + z x ′ x , θ⟩; // Allocate x t and x ′ t in the graph x (i) t = x t for all i in V 1 ; x (i) t = x ′ t for all i in V 2 ; Obtain for all (i, j) in E the rewards y (i,j) t ; Compute θt as in (5.4) end return θt Proof. 
To properly derive the regret bounds, we will have to make connections between our setting and a standard linear bandit that chooses sequentially T m arms. For that matter, let us consider an arbitrary order on the set of edges E and denote E[i] the i-th edge according to this order with i ∈ {1, . . . , m}. We define for all t ∈ {1, . . . , T } and p ∈ {1, . . . , m} the OLS estimator θt,p = A -1 t,p b t,p , where A t,p = λI d 2 + t-1 s=1 m b=1 z E[b] s z E[b]⊤ s + p k=1 z E[k] t z E[k]⊤ t , with λ a regularization parameter and b t,p = t-1 s=1 m b=1 z E[b] s y E[b] s + p k=1 z E[k] t y E[k] t . (5.7) We define also the confidence set C t,p (δ) = θ : ∥θ -θt,p ∥ A -1 t,p ≤ σ d 2 log 1 + tmL 2 /λ δ + √ λS , (5.8) where with probability 1 -δ, we have that θ ⋆ ∈ C t,p (δ) for all t ∈ {1, . . . , T }, p ∈ {1, . . . , m} and δ ∈ (0, 1]. Notice that the confidence set C t (δ) defined in Section 5.1.1 is exactly the confidence set C t,m (δ) defined here. The definitions of the matrix A t,m and the vector b t,m follow the same reasoning. Recall that (x (1) ⋆ , . . . , x (n) ⋆ ) = arg max (x (1) ,...,x (n) ) (i,j)∈E x (i)⊤ M ⋆ x (j) is the optimal joint arm, and we define for each edge (i, j) ∈ E the optimal edge arm z (i,j) ⋆ = vec (x (i) ⋆ x (j)⊤ ⋆ ). Let us borrow the notion of Critical Covariance Inequality introduced in [START_REF] Chan | Parallelizing contextual linear bandits[END_REF], that is for a given round t ∈ {1, . . . , T } and p ∈ {1, . . . , m}, the expected covariance matrix A t,p satisfies the critical covariance inequality if A t-1,m ≼ A t,p ≼ 2A t-1,m . (5.9) Let us now define the event D t as the event where at a given round t, for all p ∈ {1, . . . , m}, A t,p satisfies the critical covariance inequality (CCI). We can write the pseudo-regret as follows: R(T ) = T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ + T t=1 1[D c t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ ≤ T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ (a) + LSm T t=1 1[D c t ] (b) . We know that the approximation Max-CUT algorithm returns two subsets of nodes V 1 and V 2 such that there are at least m/2 edges between V 1 and V 2 , and to be more precise: at least m/4 edges from V 1 to V 2 and at least m/4 edges from V 2 to V 1 . Hence at each time t, if all the nodes of V 1 pull the node-arm x t and all the nodes of V 2 pull the node-arm x ′ t , we can derive the term (a) as follows: (a) = T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -⟨z (i,j) t , θ ⋆ ⟩ = T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -1[i ∈ V 1 ∧ j ∈ V 2 ]⟨z (i,j) t , θ ⋆ ⟩ -1[i ∈ V 2 ∧ j ∈ V 1 ]⟨z (i,j) t , θ ⋆ ⟩ 61 -1[i ∈ V 1 ∧ j ∈ V 1 ]⟨z (i,j) t , θ ⋆ ⟩ -1[i ∈ V 2 ∧ j ∈ V 2 ]⟨z (i,j) t , θ ⋆ ⟩ . Notice that (i,j)∈E z (i,j) ⋆ = (i,j)∈E 1 m (k,l)∈E z (k,l) ⋆ , so one has (a) = T t=1 1[D t ] (i,j)∈E 1[i ∈ V 1 ∧ j ∈ V 2 ]   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -⟨z (i,j) t , θ ⋆ ⟩   (a 1 ) + T t=1 1[D t ] (i,j)∈E 1[i ∈ V 2 ∧ j ∈ V 1 ]   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -⟨z (i,j) t , θ ⋆ ⟩   (a 2 ) + T t=1 1[D t ] (i,j)∈E 1[i ∈ V 1 ∧ j ∈ V 1 ]   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -⟨z (i,j) t , θ ⋆ ⟩   (a 3 ) + T t=1 1[D t ] (i,j)∈E 1[i ∈ V 2 ∧ j ∈ V 2 ]   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -⟨z (i,j) t , θ ⋆ ⟩   (a 4 ) . Let us analyse the first term. By using the fact that 1[D t ] ≤ 1, we have (a 1 ) = T t=1 n i=1 j∈N i j>i 1[i ∈ V 1 ∧ j ∈ V 2 ] 2 m (k,l)∈E z (k,l) ⋆ -z (i,j) t + z (j,i) t , θ ⋆ . 
By defining (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X 2 ⟨z xx ′ + z x ′ x , θ ⋆ ⟩, and noticing that in the case where a node i is in V 1 and a neighbouring node j in is V 2 , then z (i,j) t = z xtx ′ t , we have, 2 m (k,l)∈E z (k,l) ⋆ , θ ⋆ = 2 m n k=1 j∈N k j>k z (k,l) ⋆ + z (l,k) ⋆ , θ ⋆ ≤ 2 m n k=1 j∈N k j>k z x⋆x ′ ⋆ + z x ′ ⋆ x⋆ , θ ⋆ = ⟨z x⋆x ′ ⋆ + z x ′ ⋆ x⋆ , θ ⋆ ⟩ ≤ ⟨z xtx ′ t + z x ′ t xt , θt-1,m ⟩ (optimistic reward) = ⟨z (i,j) t + z (j,i) t , θt-1,m ⟩ . Plugging this last inequality yields, with probability 1 -δ, (a 1 ) ≤ T t=1 n i=1 j∈N i j>i 1[i ∈ V 1 ∧ j ∈ V 2 ] z (i,j) t + z (j,i) t , θt-1,m -θ ⋆ = T t=1 (i,j)∈E 1[i ∈ V 1 ∧ j ∈ V 2 ] z (i,j) t , θt-1,m -θ ⋆ . We define, as in Algorithm 9, 1 z (i,j) t = z xtx ′ t ≜ 1[i ∈ V 1 ∧ j ∈ V 2 ]. Then, one has, with probability 1 -δ, (a 1 ) ≤ T t=1 (i,j)∈E 1 z (i,j) t = z xtx ′ t z (i,j) t , θt-1,m -θ ⋆ = T t=1 m k=1 1 z E[k] t = z xtx ′ t z E[k] t , θt-1,m -θ ⋆ = T t=1 m k=1 1 z E[k] t = z xtx ′ t z E[k] t , θt-1,m -θt-1,m + z E[k] t , θt-1,m -θ ⋆ ≤ T t=1 m k=1 1 z E[k] t = z xtx ′ t ∥z E[k] t ∥ A -1 t,k-1 ∥ θt-1,m -θt-1,m ∥ A t,k-1 + 1 z E[k] t = z xtx ′ t ∥z E[k] t ∥ A -1 t,k-1 ∥ θt-1,m -θ ⋆ ∥ A t,k-1 ≤ T t=1 m k=1 1 z E[k] t = z xtx ′ t ∥z E[k] t ∥ A -1 t,k-1 √ 2∥ θt-1,m -θt-1,m ∥ A t-1,m (5.10) + 1 z E[k] t = z xtx ′ t ∥z E[k] t ∥ A -1 t,k-1 √ 2∥ θt-1,m -θ ⋆ ∥ A t-1,m ≤ T t=1 m k=1 1 z E[k] t = z xtx ′ t 2 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 (5.11) ≤ T t=1 m k=1 2 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 , (5.12) with β t (δ) ≤ σ d 2 log 1+tmL 2 /λ δ + √ λS and where (5.10) uses the critical covariance inequality (5.9), (5.11) comes from the definition of the confidence set C t-1,m (δ) (5.8) and (5.12) upper bounds the indicator functions by 1. Using a similar reasoning, we obtain the same bound for (a 2 ): (a 2 ) ≤ T t=1 m k=1 2 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 . Let us bound the terms (a 3 ) and (a 4 ). (a 3 ) ≤ T t=1 (i,j)∈E 1 z (i,j) t = z xtxt   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -⟨z (i,j) t , θ ⋆ ⟩   . For all x ∈ X , let ξ x be the following ratio ξ x = ⟨z xx , θ ⋆ ⟩ 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ , (5.13) and let ξ be the worst ratio ξ = min x∈X ⟨z xx , θ ⋆ ⟩ 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ . ( 5.14) We have T t=1 (i,j)∈E 1 z (i,j) t = z xtxt   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -ξ xt 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆   ≤ T t=1 (i,j)∈E 1 z (i,j) t = z xtxt   1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ -ξ 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆   = T t=1 (i,j)∈E 1 z (i,j) t = z xtxt (1 -ξ) 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ ≤T m 4 (1 -ξ) 1 m (k,l)∈E z (k,l) ⋆ , θ ⋆ (5.15) = T t=1 (i,j)∈E 1 4 (1 -ξ) z (i,j) ⋆ , θ ⋆ , where (5.15) comes from the fact that there is at most m/4 edges that goes from node in V 1 to other nodes in V 1 . The derivation of this bound for (a 3 ) gives the same one for (a 4 ) (a 4 ) ≤ T t=1 (i,j)∈E 1 4 (1 -ξ) z (i,j) ⋆ , θ ⋆ . By rewriting (a), we have : (a) ≤ T t=1 m k=1 4 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 + 1 2 (1 -ξ)⟨z (i,j) ⋆ , θ ⋆ ⟩ . In [START_REF] Chan | Parallelizing contextual linear bandits[END_REF], they bounded the term (b) as follows LSm T t=1 1[D c t ] ≤ LSm d 2 log 2 T mL 2 /λ δ . (5.16) We thus have the regret bounded by R(T ) ≤ T t=1 m k=1 4 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 + 1 2 (1 -ξ)⟨z (i,j) ⋆ , θ ⋆ ⟩ + LSm d 2 log 2 T mL 2 /λ δ , which gives us R 1+ξ 2 (T ) ≤ T t=1 m k=1 4 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 + LSm d 2 log 2 T mL 2 /λ δ . 
Let us bound the first term with the double sum as it is done in [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF][START_REF] Chan | Parallelizing contextual linear bandits[END_REF]: T t=1 m k=1 4 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 ≤ T t=1 m k=1 min 2LS, 4 2β t (δ)∥z E[k] t ∥ A -1 t,k-1 ≤ T t=1 m k=1 4 2β t (δ) min LS, ∥z E[k] t ∥ A -1 t,k-1 ≤ T m × 32β T (δ) T t=1 m k=1 min (LS) 2 , ∥z E[k] t ∥ 2 A -1 t,k-1 ≤ 32T mβ T (δ) T t=1 m k=1 max(2, (LS) 2 ) log 1 + ∥z E[k] t ∥ 2 A -1 t,k-1 (5.17) = 32T mβ T (δ) max(2, (LS) 2 ) T t=1 m k=1 log 1 + ∥z E[k] t ∥ 2 A -1 t,k-1 ≤ 32T mβ T (δ) max(2, (LS) 2 )d 2 log 1 + T mL 2 /λ d 2 (5.18) ≤ 32T md 2 max(2, (LS) 2 ) log 1 + T mL 2 /λ d 2 β T (δ) , where (5.17) uses the fact that for all a, x ≥ 0, min(a, x) ≤ max(2, a) log(1 + x), (5.18) uses the fact that T t=1 m k=1 log 1 + ∥z E[k] t ∥ 2 A -1 t,k-1 ≤ d 2 log 1 + T mL 2 /λ d 2 from Lemma 19.4 in [START_REF] Lattimore | Bandit Algorithms[END_REF]. We get the final bound by noticing that β T (δ) ≤ σ d 2 log 1 + T mL 2 /λ δ + √ λS . One can notice that the first term of the regret-bound matches the one of a standard linear bandit that pulls sequentially T m edge-arms. The second term captures the cost of parallelizing m draws of edge-arms per round. Indeed, the intuition behind this term is that the couple (x t , x ′ t ) chosen at round t is relevant to pull the (tm + 1)-th edge-arm but not necessarily the other (m -1) edge-arms that are pulled in parallel. This is because the reward associated with the (tm + 1)-th edge-arms could have led to change the central entity choice if it had been done sequentially. In [START_REF] Chan | Parallelizing contextual linear bandits[END_REF], they characterize this phenomenon and show that this potential change in choice occurs less and less often as we pull arms and get rewards, hence the dependence in O(log(T m))). Improved algorithm and analysis of the regret We address the problem of designing an improved version of the proposed algorithm using the idea presented in Algorithm 6 that improves the approximation ratio. Let us recall that in the previous section in Algorithm 9, the central entity chooses the couple (x t , x ′ t ) such that (x t , x ′ t ) = arg max (x,x ′ )∈X 2 max θ∈C t-1 (δ) ⟨z xx ′ + z x ′ x , θ⟩ , which maximize the optimistic reward obtained between two nodes if the central entity were able to allocate x t to one node and x ′ t to the second. This strategy being optimal locally but complicated by considering the dependencies between edges, the central entity could take into consideration the edge arms of the form z xx and z x ′ x ′ that are created when allocating the graph node using only two node-arm x and x ′ . This idea follows the one presented in Algorithm 6 where we recall with different notations that the couple (x ⋆ , x′ ⋆ ) chosen to allocate the nodes of the graph are such that (x ⋆ , x′ ⋆ ) = arg max (x,x ′ )∈X 2 ⟨m 1→2 • z xx ′ + m 2→1 • z x ′ x + m 1 z xx + m 2 z x ′ x ′ , θ ⋆ ⟩ . As in the previous section, we do not have access to θ ⋆ , we use the principle of optimism to find at each round the couple (x t , x′ t ) such that (x t , x′ t ) = arg max (x,x ′ )∈X 2 max θ∈C t-1 (δ) ⟨m 1→2 • z xx ′ + m 2→1 • z x ′ x = m 1 z xx + m 2 z x ′ x ′ , θ⟩ . (5.19) Here, instead of maximizing the local reward one can get between two nodes, the central entity maximizes the global optimistic reward that one would obtain when allocating only two arms (x, x ′ ) ∈ X 2 in the graph. 
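To make the optimistic choice concrete, here is a minimal Python sketch (NumPy; the helper names are hypothetical, and the confidence region is taken to be the ellipsoid in the ∥·∥_{A_{t−1}} norm, as used in the proof of Theorem 5.1). For a fixed direction v, the inner maximization over such an ellipsoid centred at θ̂_{t−1} with radius β_{t−1}(δ) has the closed form ⟨v, θ̂_{t−1}⟩ + β_{t−1}(δ)∥v∥_{A_{t−1}^{-1}}, so the couple can be found by enumerating X².

```python
import numpy as np
from itertools import product

def vec(A):
    return A.reshape(-1, order="F")

def optimistic_value(v, theta_hat, A_inv, beta):
    # max of <v, theta> over the ellipsoid {theta : ||theta - theta_hat||_A <= beta}
    return v @ theta_hat + beta * np.sqrt(v @ A_inv @ v)

def select_couple(X_arms, theta_hat, A_inv, beta, m12, m21, m1, m2, improved=True):
    """Return the optimistic couple of node-arms used to allocate V1 and V2."""
    best_val, best_pair = -np.inf, None
    for x, xp in product(X_arms, repeat=2):
        z_xxp, z_xpx = vec(np.outer(x, xp)), vec(np.outer(xp, x))
        if improved:
            # weighted criterion (5.19), accounting for within-V1 and within-V2 edges
            v = (m12 * z_xxp + m21 * z_xpx
                 + m1 * vec(np.outer(x, x)) + m2 * vec(np.outer(xp, xp)))
        else:
            # rule used to pick the couple in Algorithm 9
            v = z_xxp + z_xpx
        val = optimistic_value(v, theta_hat, A_inv, beta)
        if val > best_val:
            best_val, best_pair = val, (x, xp)
    return best_pair
```

With improved=False the function reduces to the selection rule of Algorithm 9, while improved=True implements the weighted criterion (5.19); both searches cost |X|² evaluations of the closed form per round.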
This strategy is described in Algorithm 10. Before stating the guarantees on the α-regret, we recall that we defined in Chapter 3 the quantity ∆ ≥ 0 to be the expected reward difference of allocating (x ⋆ , x′ ⋆ ) instead of (x ⋆ , x ′ ⋆ ), ∆ = ⟨m 1→2 z x⋆ x′ ⋆ -z x⋆x ′ ⋆ + m 2→1 z x′ ⋆ x⋆ -z x ′ ⋆ x⋆ + m 1 (z x⋆ x⋆ -z x⋆x⋆ ) + m 2 z x′ ⋆ x′ ⋆ -z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ . The new guarantees that we get on the α-regret of Algorithm 10 are stated in the following theorem. Theorem 5.2. Given the Graphical Bilinear Bandits problem defined as in Section 1.2, let ξ be defined as in Theorem 5.1, let 0 ≤ ϵ ≤ 1 2 be a problem dependent parameter that measures the gain of optimizing on the suboptimal and unwanted arms defined as: ϵ = ∆ (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ , and set α = 1+ξ 2 + ϵ where α ≥ 1/2 by construction, then the α-regret of Algorithm 10 satisfies 67 Algorithm 10: Improved OFUL for Graphical Bilinear Bandits Input : graph G = (V, E), node-arm set X (V 1 , V 2 ) = Approx-MAX-CUT(G); m 1→2 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 2 }|; m 2→1 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 1 }|; m 1 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 1 }|; m 2 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 2 }|; for t = 1 to T do xt , x′ t , θt-1 = arg max (x,x ′ ,θ)∈X 2 ×C t-1 ⟨m 1→2 • z xx ′ + m 2→1 • z x ′ x + m 1 • z xx + m 2 • z x ′ x ′ , θ⟩; x (i) t = xt for all i in V 1 ; x (i) t = x′ t for all i in V 2 ; Obtain for all (i, j) in E the rewards y (i,j) t ; Compute θt as in (5.4) end return θt R α (T ) ≤ Õ σd 2 + S √ λ T m max(2, (LS) 2 ) + LSm d 2 log 2 T mL 2 /λ δ , where Õ hides the logarithmic factors. Proof. We can write the regret R(T ) as in the proof of Theorem 5.1: R(T ) = T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ + T t=1 1[D c t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ ≤ T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -z (i,j) t , θ ⋆ (a) + LSm T t=1 1[D c t ] (b) . Here, (b) doesn't change, we thus only focus on deriving (a). (a) = T t=1 1[D t ] (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -⟨z (i,j) t , θ ⋆ ⟩ ≤ T t=1 (i,j)∈E ⟨z (i,j) ⋆ , θ ⋆ ⟩ -⟨z (i,j) t , θ ⋆ ⟩ (where 1[D t ] ≤ 1) = T t=1 (i,j)∈E m 1→2 + m 2→1 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ (a 1 ) + T t=1 (i,j)∈E m 1 + m 2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ - T t=1 (i,j)∈E ⟨z (i,j) t , θ ⋆ ⟩ . We have (a 1 ) = T t=1 (i,j)∈E 2m 1→2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ = T t=1 n i=1 j∈N i j>i 2m 1→2 m ⟨z (i,j) ⋆ + z (j,i) ⋆ , θ ⋆ ⟩ ≤ T t=1 n i=1 j∈N i j>i 2m 1→2 m ⟨z x⋆x ′ ⋆ + z x ′ ⋆ x⋆ , θ ⋆ ⟩ = T t=1 n i=1 j∈N i j>i 2 m ⟨m 1→2 • z x⋆x ′ ⋆ + m 2→1 • z x ′ ⋆ x⋆ , θ ⋆ ⟩ = T t=1 n i=1 j∈N i j>i 2 m ⟨m 1→2 • z x⋆x ′ ⋆ + m 2→1 • z x ′ ⋆ x⋆ + m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ - T t=1 n i=1 j∈N i j>i 2 m ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ = T t=1 n i=1 j∈N i j>i - T t=1 n i=1 j∈N i j>i 2 m ∆ - T t=1 n i=1 j∈N i j>i 2 m ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ = T t=1 ⟨m 1→2 • z x⋆ x′ ⋆ + m 2→1 • z x′ ⋆ x⋆ + m 1 • z x⋆ x⋆ + m 2 • z x′ ⋆ x′ ⋆ , θ ⋆ ⟩ -∆ - T t=1 ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ ≤ T t=1 m 1→2 • z xt x′ t + m 2→1 • z x′ t xt + m 1 • z xt xt + m 2 • z x′ t x′ t , θt-1,m -∆ - T t=1 ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ (w.p 1 -δ) = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m ⟩ - T t=1 ∆ - T t=1 ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ . 
By plugging the last upper bound in (a) and with probability 1 -δ, we have, (a) ≤ T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m ⟩ - T t=1 ∆ - T t=1 ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ - T t=1 (i,j)∈E ⟨z (i,j) t , θ ⋆ ⟩ = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ - T t=1 ∆ - T t=1 ⟨m 1 • z x⋆x⋆ + m 2 • z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ - T t=1 ∆ - T t=1 (i,j)∈E m 1 m ξ x⋆ ⟨z (i,j) ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 2 m ξ x ′ ⋆ ⟨z (i,j) ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ ≤ T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ - T t=1 ∆ - T t=1 (i,j)∈E m 1 + m 2 m ξ⟨z (i,j) ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m ⟨z (i,j) ⋆ , θ ⋆ ⟩ = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ - T t=1 ∆ + T t=1 (i,j)∈E m 1 + m 2 m (1 -ξ)⟨z (i,j) ⋆ , θ ⋆ ⟩ = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ - T t=1 (i,j)∈E ϵ⟨z (i,j) ⋆ , θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m (1 -ξ)⟨z (i,j) ⋆ , θ ⋆ ⟩ = T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m (1 -ξ) -ϵ ⟨z (i,j) ⋆ , θ ⋆ ⟩ . By plugging (a) in the regret and with probability 1 -δ, we have, R(T ) ≤ T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ + T t=1 (i,j)∈E m 1 + m 2 m (1 -ξ) -ϵ ⟨z (i,j) ⋆ , θ ⋆ ⟩ + LSm T t=1 1[D c t ] , which gives, R(T ) - T t=1 (i,j)∈E m 1 + m 2 m (1 -ξ) -ϵ ⟨z (i,j) ⋆ , θ ⋆ ⟩ ≤ T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ + LSm T t=1 1[D c t ] . Thus, R 1-m 1 +m 2 m (1-ξ)-ϵ (T ) ≤ T t=1 (i,j)∈E ⟨z (i,j) t , θt-1,m -θ ⋆ ⟩ + LSm T t=1 1[D c t ] . The upper bound of the right hand terms follows exactly what we have already done for Theorem 5.1 by applying the upper bounds (5.12) and (5.16). Moreover, 1 - m 1 + m 2 m (1 -ξ) -ϵ ≥ 1 - (1 -ξ) 2 -ϵ = (1 + ξ) 2 + ϵ . Here one can see that the improvement happens in the α of the α-regret. In the next section, we confirm this results through experiments. Numerical experiments In Chapter 3, we run experiments on α and how it can varying according to the graph and the problem parameters. In this subsection, we only focus on the impact on the regret. We design an experiment that compares in practice the performance of Algorithm 9 and Algorithm 10 with the Explore-Then-Commit (ETC) algorithm by using the exploration strategy designed in Chapter 4 during the exploration phase, and by allocating the nodes in V 1 and V 2 with the best estimated couple (x, x ′ ) = arg max (x,x ′ ) ⟨z xx ′ + z x ′ x , θt ⟩ during the commit phase. However, since the algorithms that we presented in this section have guarantees on α-regrets with different α, we plot the fraction of the optimal global reward for each iteration. As in Chapter 3, we observe a clear improvement when choosing at each round t the couple of arms (x t , x′ t ) instead of (x t , x ′ t ). Conclusion and perspectives In this chapter, we presented the first regret-based algorithms for the stochastic graphical bilinear bandits problem with guarantees on the α-regret with α ≥ 1/2. We also showed experimentally that our algorithm achieves a better performance than Explore-Then-Exploit on our synthetic datasets. One could also study this problem in the adversarial setting, in particular adapting adversarial linear bandit algorithms to our case. Finally, our setting could be extended to the case where each agent has its own reward matrix. 
Conclusion & perspectives

6.1 Summary of the results

In this thesis we introduced a new model, named Graphical Bilinear Bandits, which captures centralized multi-agent problems where pairwise interactions exist between agents.

• In Chapter 3, we highlighted the fact that the learner faces an underlying optimization problem that is NP-hard no matter which goal the learner wants to reach. Hence we proposed an α-approximation algorithm with α ≥ 1/2 which only requires finding the couple of arms (x_⋆, x′_⋆) to return the α-approximate solution. We then refined this approximation parameter with respect to problem-dependent quantities based on the graph structure and a property of the parameter matrix M_⋆.

• In Chapter 4, given the α-approximation algorithm designed in Chapter 3, we presented a pure-exploration algorithm that allows the learner to construct an estimate M̂ that is statistically efficient in terms of optimal design. Indeed, the problem of finding within a minimum number of rounds the best couple (x_⋆, x′_⋆) used in the α-approximation algorithm comes down to finding the G-optimal design, also called G-allocation in the bandit literature. Solving this problem in the graphical bilinear bandit setting implies dealing with an additional constraint, which is why we presented an algorithm that respects this constraint and uses randomized sampling to construct the estimate M̂. Our theoretical results revealed a term that depends on the graph structure, and we quantified the impact of the graph in our results.

• Finally, in Chapter 5, we capitalized on the α-approximation algorithm given in Chapter 3 and applied the principle of optimism in the face of uncertainty to design regret-based algorithms that achieve a sublinear α-regret in T with α ≥ 1/2. Furthermore, we evaluated the performance of the proposed algorithms experimentally and compared them with an Explore-Then-Commit algorithm relying on the pure-exploration algorithm presented in Chapter 4.

Perspectives and future work

This thesis aimed to introduce the new graphical bilinear bandit setting and to provide the first solutions to common problems posed in the bandit literature. Many other approaches and modifications can be considered; we state two of them in the following.

Different parameter matrices M_⋆^{(i,j)} for each edge (i, j) ∈ E. While dealing with a common parameter matrix M_⋆ for all the edges (i, j) ∈ E is convenient for aggregating the rewards and constructing a common estimate M̂ for all the agents, the problem becomes more complicated when the rewards y_t^{(i,j)} are defined with different matrices M_⋆^{(i,j)}. Indeed, consider the setting where the reward y_t^{(i,j)} is defined for each (i, j) ∈ E as

y_t^{(i,j)} = x_t^{(i)⊤} M_⋆^{(i,j)} x_t^{(j)} + η_t^{(i,j)},

where the M_⋆^{(i,j)} are unknown parameter matrices and the η_t^{(i,j)} are σ-sub-Gaussian random variables. This setting is relevant when the agents do not have the same type of interaction with each of their neighbors, and thus not the same reward function.

Open questions: In the context of pure exploration, how does the stopping condition change? Is there a sampling strategy for each agent such that the estimates M̂^{(i,j)} are constructed while satisfying an optimal design criterion?

Decentralized setting. When agents are controlled by a central entity, it is possible to aggregate the different rewards and to build a common estimate M̂ of M_⋆.
Moreover, we have seen that the different objectives that appear are relative to the edge-arms and not directly to the node-arms selected by each agent. Indeed, this is due to the fact that we can express the graphical bilinear bandit as linear bandits at the edge level. This particular aspect makes the decentralized framework a bit tricky because coordinating two agents without communication to respectively draw the node-arms that will build the wanted edge-arm becomes even more complicated. However other problem arise even if the coordination problem is solved. For example, in the best arm identification problem, we have already designed a sampling procedure that can be executed in parallel during a round, hence a decentralized choice for each agent. However, the stopping condition depends on the estimate M constructed with the edge-arms during the learning procedure, but when the agents do not communicate, this estimate cannot be constructed. This is because an agent only knows which node arm it is pulling and observes the reward. However, the reward is linear with respect to the associated edge-arm and the agent does not have access to this edge-arm since it is constructed with its node arm but also with that of its neighbors (to which it does not have access). Open questions: In the fully decentralized setting (without communication), what kind of algorithms can we design to take advantage of the (bi-)linear bandit setting? If we allow communication, how can we adapt the proposed algorithms and what is the trade-off between the amount of communications and their performances? which experiments must be chosen in a fixed pool of experiments so as to minimize its variance? In the multi-dimensional case, this is done by minimizing a scalar function of its covariance matrix and several approaches have been considered such as the minimization of the determinant, the trace or the spectral norm, respectively denoted D, A and E-optimal design (see e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Sagnol | Optimal design of experiments with application to the inference of traffic matrices in large networks: second order cone programming and submodularity[END_REF]). (See Appendix 1, for more details on experimental design) E-optimal design has been exploited in practical settings such as for biological experiments [START_REF] Flaherty | Robust design of biological experiments[END_REF] or for treatment versus control comparisons where useful statistical interpretations have been derived, see e.g., [START_REF] Notz | Optimal designs for treatment-control comparisons in the presence of twoway heterogeneity[END_REF][START_REF] Rosa | Optimal approximate designs for comparison with control in dose-escalation studies[END_REF]. Another criterion, known as G-optimal design and which minimizes the worst predicted variance, has recently been investigated in the context of best arm identification in linear bandits [START_REF] Soare | Best-arm identification in linear bandits[END_REF][START_REF] Tao | Best Arm Identification in Linear Bandits with Linear Dimension Dependency[END_REF]106] where one is interested in finding the experiment with maximum linear response. The optimization problems associated to the aforementioned optimal designs (E, A, D, G) are known to be NP-hard [START_REF] Çivril | On selecting a maximum volume sub-matrix of a matrix and related problems[END_REF]104]. The two common approaches have been to resort to greedy strategies or convex relaxations. 
A greedy strategy iteratively finds the best experiment, whereas solving a convex relaxation returns a discrete probability distribution over the samples. On the one hand, performance guarantees for greedy strategies have been obtained by exploiting supermodularity and approximate supermodularity properties of the different criteria [25, 82, 90]. On the other hand, concerning performance guarantees of randomized optimal designs, only the randomized A-optimal design has been theoretically studied, with bounds on the mean square error of the associated estimator [103]. We propose in this paper to fill the gap concerning randomized E and G-optimal designs. More precisely, we study their theoretical validity by providing finite-sample confidence bounds and show with experiments that they are worth being considered in practice. The paper is organized as follows. Section A.2 defines the main notations and recalls the problem of experimental design as well as the different optimality criteria. Section A.3 presents the main results of this paper for the random strategies of E and G-optimal designs. Finally, the last section shows empirical results of the studied random strategies and an application to the best arm identification problem in linear bandits.

A.2 Preliminaries

Definitions and notations

Throughout the paper, we use small bold letters for vectors (e.g., x) and capital bold letters for matrices (e.g., X). For any d > 0 and any vector x ∈ R^d, ∥x∥ will denote the usual ℓ2-norm of x. For any square matrix X ∈ R^{d×d}, we denote as ∥X∥ the spectral norm of X, that is ∥X∥ ≜ sup_{y:∥y∥=1} ∥Xy∥. We let λ_min(X) be the smallest eigenvalue of X. For any 1 ≤ i, j ≤ d, any x ∈ R^d and any matrix X ∈ R^{d×d}, [x]_i denotes the i-th coordinate of the vector x, [X]_i the vector of the i-th row and [X]_{ij} the value at the i-th row and j-th column. Finally, we denote by S_d^+ the cone of all d × d positive semi-definite matrices and by ∆_d ≜ {µ ∈ [0, 1]^d : ∑_{i=1}^{d} [µ]_i = 1} the simplex in R^d.

Experimental design for linear regression

Given a matrix X ∈ R^{K×d} of K experiments and a vector y ∈ R^K of K measurements, it is assumed that there exists an unknown parameter θ_⋆ ∈ R^d such that for all k ∈ {1, . . . , K}, [y]_k = θ_⋆^⊤ x_k + ε_k, where x_k = [X]_k and ε_1, . . . , ε_K are independent Gaussian random variables with zero mean and variance σ². The ordinary least squares (OLS) estimator of the parameter θ_⋆ is given by θ̂ = arg min_θ ∥y − Xθ∥²_2 = (X^⊤X)^{-1} X^⊤ y. This estimator is unbiased and has covariance matrix Σ^{-1} = σ²(X^⊤X)^{-1}. Experimental design [78] consists in estimating θ_⋆ by selecting only the experiments that are the most statistically efficient to reduce the variance. More formally, let n be the total number of selected experiments and, for all k ∈ {1, . . . , K}, let n_k be the number of times x_k is chosen. We have n_k ≥ 0 and ∑_{k=1}^{K} n_k = n. The covariance matrix obtained with such a design can be written as Σ_D^{-1} = σ²(∑_{k=1}^{K} n_k x_k x_k^⊤)^{-1}. The Loewner order on S_d^+ being only a partial order, minimizing Σ_D^{-1} over the cone S_d^+ is an ill-posed problem.
An optimal design is thus defined thanks to scalar properties of a matrix in S + d , i.e., as a solution of min n 1 ,...,n K f (Σ -1 D ) where f : S + d → R. The two criterion f we study in this paper are: • E-optimality : f E (Σ -1 D ) = ∥Σ -1 D ∥. The E-optimal design minimizes the maximum eigenvalue of Σ -1 D . Geometrically it minimizes the variance ellipsoid in the direction of its diameter only. • G-optimality : f G (Σ -1 D ) = max x∈X x ⊤ Σ -1 D x. The G-optimal design minimizes the worst possible predicted variance. Those two optimality criteria are NP-hard optimization problems [START_REF] Çivril | On selecting a maximum volume sub-matrix of a matrix and related problems[END_REF]104]. However approximate solutions can be found in polynomial time by relying on greedy strategies or by relaxing the problem and looking for proportions µ k ∈ [0, 1] instead of integers n k . By letting µ k = n k /n, the covariance matrix Σ -1 D can be written as Σ -1 D = σ 2 /n • ( K k=1 µ k x k x ⊤ k ) -1 and it leads us to the convex optimiation problem min µ 1 ,...,µ K ∈[0,1] f (Σ -1 D ) which returns a discrete probability distribution over the samples. For more details on experimental design and optimal design criteria see [START_REF] Boyd | Convex optimization[END_REF][START_REF] Pukelsheim | Optimal Design of Experiments[END_REF]. A.3 Convergence analysis In this section, we analyze the behavior of random sampling along the distribution associated to the convex relaxation discussed in Section A.2, for E and G-optimal designs. Let X = {x 1 , . . . , x K } ⊆ R d be the set of experiments and let µ ⋆ E and µ ⋆ G be the optimal distributions in ∆ K associated to the convex relaxation of such designs. For any µ ∈ ∆ K , we denote as M(µ) the matrix M(µ) ≜ K k=1 µ k x k x ⊤ k and f ⋆ G,n ≜ f G (nM(µ ⋆ G )) -1 as the objective at the optimum µ ⋆ G for a sample size n. Theorem A.1. Let X = {x 1 , . . . , x K } be a set of experiments and let µ ⋆ E be the solution of the relaxation of the E-optimal design. Let 0 ≤ δ ≤ 1 and let n such that n ≥ 2L M(µ ⋆ E ) -1 log(d/δ), where L = max x∈X ∥x∥ 2 . Then, with probability at least 1 -δ, one has f E S -1 E,n ≤     1 + 1 n 2L∥M(µ ⋆ E ) -1 ∥ log(d/δ) -1     f ⋆ E,n , where S E,n is the sum of n i.i.d. random matrices drawn from µ ⋆ E . Similarly, let µ ⋆ G be the solution of the relaxed G-optimal design and S G,n the associated sample sum. One has, with probability at least 1 -2δ, f G S -1 G,n ≤ 1 + L d ∥M(µ ⋆ G ) -1 ∥ 2 2σ 2 n log(d/δ) + o 1 √ n f ⋆ G,n , with σ 2 ≜ L 2 K k=1 [µ ⋆ G ] k (1 -[µ ⋆ G ] k ). Theses results recover the O(1/ √ n) that one would expect. In addition, this confirms that the randomized approach asymptotically converges toward the true optimum, which is not the casetheoretically-for the greedy strategy. Finally, let us note that the o(1/ √ n) in the G-optimal design rate depends on the interaction between Hoeffding for λ min (S n ) and Bennett for ∥S n -ES n ∥. We refer the reader to the supplementary material in Appendix A.6 for the full bound. A refined approach of the dimension In this section, we introduce two quantities, derived from the concept of intrinsic dimension [START_REF] Koltchinskii | Asymptotics and concentration bounds for bilinear forms of spectral projectors of sample covariance[END_REF][START_REF] Minsker | On some extensions of Bernstein's inequality for self-adjoint operators[END_REF], that allow us to refine the convergence rate for G-optimal design. We recall the definition of intrinsic dimension. 
Definition A.1 (Intrinsic dimension) . Let d > 0 and S ∈ R d×d be a positive semi-definite matrix. The intrinsic dimension of S, denoted intdim(S), is defined as follows: intdim(S) ≜ tr(S) ∥S∥ ≤ d. Using this definition, one may alter the concentration results on the spectral norm, by replacing the dimension d by 2 × intdim(ES n ). For a matrix with eigenvalues decreasing fast enough, the improvement may be substantial-see [START_REF] Tropp | An Introduction to Matrix Concentration Inequalities[END_REF]Ch. 7] and references therein for more details. The main drawback of this definition is that if eigenvalues are all of the same order of magnitude, one will not notice a sensible improvement; this is typically the case in G-optimal design as eigenvalues are designed to be large overall. We propose a refined version of the intrinsic dimension allowing improvements even with a narrow spectrum, in the form of 2 complementary quantities. Definition A.2 (Upper and lower intrinsic dimension) . Let d > 0 and S ∈ R d×d be a positive semi-definite matrix. The upper and lower intrisic dimensions of S, denoted updim(S) and lowdim(S) respectively, are defined as follows:          updim(S) ≜ tr S -λ min (S)I ∥S∥ -λ min (S) lowdim(S) ≜ tr ∥S∥I -S ∥S∥ -λ min (S) = d -updim(S). These new quantities use both the largest and the smallest eigenvalues to rescale the spectrum, which is of interest in our setting. Using this definition, one is able to formulate new concentration results on random matrices, including a concentration result on the lowest eigenvalue. In this particular case however, we are more interested in the potential speed up provided for the spectral norm, since it is the value controlling the slowest term in the G-optimal design error. Theorem A.2. Let X = {x 1 , . . . , x K } be a set of experiments and let µ ⋆ G be the solution of the relaxation of the G-optimal design. Let X 1 , . . . , X n be n i.i.d. random matrices drawn according to µ ⋆ G and S n their sum. Let V be the covariance matrix of X 1 , that is V ≜ E X 2 1 ] -M(µ ⋆ G ) 2 and let κ be its condition number. Let 0 ≤ δ ≤ 1 and let n such that n ≥ 4L 2 9∥V∥ log d/δ where L = max x∈X ∥x∥ 2 and d is defined by d ≜ updim(V) + lowdim(V)e -n(1-κ -1 )/16 < d. Then, with probability at 1 -2δ, one has f G S -1 G,n ≤ 1 + L d ∥M(µ ⋆ G ) -1 ∥ 2 4σ 2 n log d/δ + o 1 √ n f ⋆ G,n . We refer the reader to the supplementary material in Appendix A.6 for the proof of this result. A.4 Experiments In this section we compare the performances of randomized E and G-optimal designs against their greedy counterparts. We first show the behavior of the randomized E-optimal design on a synthetic data set. We then apply the randomized G-optimal design to the problem of best arm identification and compare it to the greedy approach used in [START_REF] Soare | Best-arm identification in linear bandits[END_REF]. We refer the reader to Appendix A.6 and A.6 for more details on best arm identification for linear bandits and on the experiments setting, respectively. A.5 Conclusion We have shown the convergence of randomized scheme for G and E-optimal criteria at a rate of O(1/ √ n). We also evidenced the dependence of the rate in a specific characteristic of the covariance matrix for the sampling. Empirically, the random sampling enjoys a favorable comparison with the greedy approach, even in the bandit application. 
One possible extension of this work could be to investigate the setting of batch or parallel bandits, using a random sampling to select a batch of arms before observing the rewards. A.6 Proofs and details on experiments Chernoff inequalities on matrices Many concentration inequalities have been developped for bounding the deviation of a sum of i.i.d. random variables. In particular, Chernoff inequalities have been extensively studied and derived due to their exponential decay rate on tail distributions. Here we show how these bounds can be extended to random matrices (see e.g., [START_REF] Tropp | An introduction to matrix concentration inequalities[END_REF] for an introduction on that matter). Additional notations For any Hermitian matrices X, Y, we write X ⪯ Y if and only if the matrix Y -X is positive semidefinite. Recall that for any Hermitian matrix X, there exists a unitary matrix P and a diagonal matrix D such that X = PDP ⊤ . For such a matrix and for any function f : R → R, we denote as f (X) the extension of f to a Hermitian matrix, defined as follows: f (X) ≜ P     f ([D] 11 ) . . . f ([D] dd )     P ⊤ . In particular, for any scalar x ∈ R, we define (x) + ≜ max(x, 0) so (X) + is the projection of X onto the positive semidefinite cone. We will use the exponential function for both scalars and matrices: for the sake of clarity, we denote as e x the exponential of a scalar and exp(X) the exponential of a matrix. We will denote as Sp(X) the spectrum of X, that is the set of all eigenvalues associated to X. The identity matrix and the zero matrix in dimension d are denoted I d and 0 d , respectively; when clear from context, we drop the d index. Useful lemmas Before stating the concentration inequalities of interest, we need to state several useful lemmas. These lemmas are key for proving concentration of random matrices, as we need similar guarantees for matrix ordering (⪯) than for scalar ordering (≤). We first state two lemmas that will ensure order preserving under basic operations. Lemma A.1 (Conjugation Rule ). Let M, N ∈ R d×d be two Hermitian matrices, such that M ⪯ N. Let p > 0 and let Q ∈ R p×d . Then, one has QMQ ⊤ ⪯ QNQ ⊤ . Proof. The proof is immediate when considering Q(N -M)Q ⊤ and using the definition of a positive semidefinite matrix. Lemma A.2 (Transfer Rule ). Let M ∈ R d×d be a Hermitian matrix and let f, g : R → R be such that, for any x ∈ Sp(M), f (x) ≤ g(x). Then, one has f (M) ⪯ g(M). Proof. Let D be the diagonal matrix in the spectral decomposition of M. Since f ≤ g on Sp(M), one has f (D) ⪯ g(D). The conjugation rule then allows us to conclude. Finally, we state two lemmas ensuring that two more complex operations (tr exp and log, respectively) preserve the order. Please note that this is usually not the case, even for operators that are monotone on R-e.g., the exponential does not preserve the order. Lemma A.3 (Monotonicity of the trace of the exponential). Let M, N ∈ R d×d be two Hermitian matrices such that M ⪯ N. Then for any non-decreasing function ψ : R → R, one has: tr ψ(M) ≤ tr ψ(N) . In particular, tr exp(M) ≤ tr exp(N). Proof. Let λ 1 (M) ≥ . . . ≥ λ d (M) and λ 1 (N) ≥ . . . ≥ λ d (N) be the sorted eigenvalues of M and N, respectively. Then, for 1 ≤ i ≤ d, one can define an eigenvalue as follows: λ i (M) = max L⊆R d :dim L=i min u∈L:∥u∥=1 u ⊤ Mu. Using the fact that M ⪯ N, one can deduce that for any 1 ≤ i ≤ d, λ i (M) ≤ λ i (N). Since ψ is a non-decreasing function on R, one has that for any 1 ≤ i ≤ d, ψ λ i (M) ≤ ψ λ i (N) . 
Summing over the dimensions leads to the desired result. Lemma A.4 (Monotonicity of the logarithm). Let M, N ∈ R d×d be two positive definite matrices such that M ⪯ N. Then one has: log(M) ⪯ log(N). Proof. We will first prove that for any γ ∈ R + , (M + γI) -1 ⪰ (N + γI) -1 . The facts that M ⪯ N and γ ≥ 0 imply that M + γI ⪯ N + γI. Using Lemma A.1, we obtain: 0 ≺ (N + γI) -1/2 (M + γI)(N + γI) -1/2 ⪯ I. Taking the inverse yields: (N + γI) 1/2 (M + γI) -1 (N + γI) 1/2 ⪰ I. Finally, applying again Lemma A.1 with (N + γI) -1/2 yields: (M + γI) -1 ⪰ (N + γI) -1 . Let us now focus on the main result. First recall that the logarithm of a positive scalar can be expressed using its integral representation, that is log x = +∞ 0 1 1 + t - 1 x + t dt, for any x > 0. Therefore, the logarithm of a matrix X ≻ 0 can be expressed similarly: log X = +∞ 0 1 1 + t I -(X + tI) -1 dt. In the beginning of the proof, we have shown that for any γ ≥ 0, (M+tI) -1 ⪰ (N+tI) -1 . Therefore, one has: 1 1 + γ I -(M + γI) -1 ⪯ 1 1 + γ I -(N + γI) -1 , and integrating over γ yields the final result. Chernoff inequalities Let n > 0 and let X 1 , . . . , X n be i.i.d. positive semidefinite matrices, such that there exists L > 0 verifying: 0 ⪯ X 1 ⪯ LI, almost surely. Let us now consider the random matrix S = n i=1 X i . In what follows, we will develop Chernoff bounds in order to control both ∥S∥/∥ES∥ and ∥S -ES∥. In the scalar case, Chernoff's bounds for the sum of independent variables are based on the fact that the exponential converts a sum into a products, that is for n i.i.d. random variables X 1 , . . . , X n : Ee n i=1 X i = E n i=1 e X i , and then one uses the independence to pull the product out of the expectation. For two symmetric matrices M, N ∈ R d×d however, the relation exp(M + N) = exp M exp N does not hold in general-it holds if the matrices commute. Hopefully, the following theorem gives us a way to overcome this issue. Theorem A.3. Let n, d > 0 and let X 1 , . . . , X n be i.i.d. symmetric matrices in R d . Then, for any t ∈ R, one has Proof. We start by the first inequality. Let t ∈ R and let η > 0. As in the scalar case, one has: P n i=1 X i ≥ t ≤ inf P n i=1 X i ≥ t = P e η∥ n i=1 X i ∥ ≥ e ηt ≤ e -ηt Ee η∥ n i=1 X i ∥ , where the last inequality is an application of Markov's inequality. Using the fact that for a positive semidefinite matrix X, ∥X∥ ≤ tr X, we obtain: Ee η∥ n i=1 X i ∥ = E exp η n i=1 X i ≤ E tr exp η n i=1 X i . We will now use Lieb's Theorem [START_REF] Lieb | Convex trace functions and the Wigner-Yanase-Dyson conjecture[END_REF], which states that for any symmetric matrix A ∈ R d×d , the mapping M → tr exp(A + log M) is concave on the cone of positive semidefinite matrices. This allows us to bound the above term as follows: E tr exp η n i=1 X i = EE tr exp η n i=1 X i F n-1 = EE tr exp η n-1 i=1 X i + log exp(ηX n ) F n-1 ≤ E tr exp η n-1 i=1 X i + log E exp(ηX n ) . Iterating over n yields: E tr exp η n i=1 X i ≤ tr exp n i=1 log E exp(ηX i ) , hence the result. The second inequality is a direct consequence from the fact that for any η < 0 and any matrix X, ηλ min (X) = ∥ηX∥. The formulation of Theorem A.3, although more complicated than is the scalar case, is very helpful for matrix concentration analysis. Indeed, since tr exp and log are both order-preserving operators on positive matrices, a bound on E exp(ηX 1 ) will now be enough to provide an overall bound of the extreme eigenvalues. 
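As an aside, the master bound can be checked numerically. The sketch below (Python with NumPy/SciPy; the rank-one toy distribution, the sample sizes and the threshold are arbitrary choices made for illustration) estimates the tail P(∥∑_i X_i∥ ≥ t) by simulation and compares it with inf_{η>0} e^{−ηt} tr exp(n log E exp(ηX_1)), the expectation being itself approximated by Monte Carlo.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
d, n = 3, 40

def sample_X(size):
    # toy distribution: X = x x^T with ||x|| <= 1, hence 0 <= X <= I (L = 1)
    xs = rng.normal(size=(size, d))
    xs /= np.maximum(1.0, np.linalg.norm(xs, axis=1, keepdims=True))
    return np.einsum("ki,kj->kij", xs, xs)

def empirical_tail(t, runs=1000):
    # Monte Carlo estimate of P(||X_1 + ... + X_n|| >= t)
    sums = sample_X(runs * n).reshape(runs, n, d, d).sum(axis=1)
    norms = np.linalg.norm(sums, ord=2, axis=(1, 2))
    return float(np.mean(norms >= t))

def master_bound(t, etas=np.linspace(0.05, 5.0, 40), mc=2000):
    # inf over eta of exp(-eta t) * tr exp(n * log E[exp(eta X_1)])
    Xs = sample_X(mc)
    best = np.inf
    for eta in etas:
        E_exp = np.mean([expm(eta * X) for X in Xs], axis=0)
        best = min(best, np.exp(-eta * t) * np.trace(expm(n * logm(E_exp))).real)
    return best

mean_norm = n * np.linalg.norm(np.mean(sample_X(5000), axis=0), ord=2)
t = 1.5 * mean_norm                      # arbitrary threshold above ||E S_n||
print(empirical_tail(t), master_bound(t))
```

On such toy examples the bound is loose but decays at the expected exponential rate; this is only a sanity check of the statement above, not part of the proofs.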
Hoeffding's inequality The bound we develop here ensures that ∥S∥ and λ min (S) do not deviate too much from their counterpart on ES. We are now ready to state the first result. Theorem A.4. Let X 1 , . . . , X n be i.i.d. positive semidefinite random matrices, such that there exists L > 0 verifying 0 ⪯ X 1 ⪯ LI. Let S be defined as: S ≜ n i=1 X i . Then, for any 0 < ε < 1, one can lowerbound λ min (S) as follows: P(λ min (S) ≤ (1 -ε)λ min (ES)) ≤ d e -ε (1 -ε) 1-ε nλ min (EX 1 ) L . Similarly, one can upperbound ∥S∥ as follows: P(∥S∥ ≥ (1 + ε)∥ES∥) ≤ d e ε (1 + ε) 1+ε n∥EX 1 ∥ L . The following corollary shows an alternate (but slightly weaker) formmulation which is closer to usual concentration results. Corollary A.1. Let X 1 , . . . , X n and S be defined as above. For any 0 < ε < 1, one can lowerbound λ min (S) as follows: P(λ min (S) ≤ (1 -ε)λ min (ES)) ≤ d exp - ε 2 λ min (EX 1 ) 2L . Before proving Theorem A.4, we state a useful lemma for bouding moment generating function of random positive semi-definite matrices. Lemma A.5. Let t ∈ R and let X be a random matrix such that 0 ⪯ X ⪯ LI almost surely for some L ≥ 0. Then, one has: E exp(tX) ⪯ I + e tL -1 L EX ⪯ exp e tL -1 L EX . Proof. Both inequalities are derived from the convexity of the exponential. We will write scalar inequalities based on convexity and then extend them to matrices using the transfer rule in Lemma A.2. Let t ∈ R, for any 0 ≤ x ≤ L, the following holds: e tx ≤ e 0 + x L (e tL -e 0 ) = 1 + e tL -1 L x. Since 0 ⪯ X ⪯ LI almost surely, this can be extented to the matrix exponential using the transfer rule in Lemma A.2: exp(tX) ⪯ I + e tL -1 L X. Taking the expectation yields the result: E exp(tX) ⪯ I + e tL -1 L EX. The second inequality is also an application of Lemma A.2 using the inequality 1 + x ≤ e x for any x ∈ R. Proof of Theorem A.4. Let t > 0. Combining Lemma A.5 and Theorem A. Since the inequality holds for any η < 0, we can optimize over η. The optimal (lowest) value is reached for η = (L) -1 log(t/λ min (ES)), which is negative if and only if t < λ min (ES). Let us make the change of variable t = (1 -ε)λ min (ES) for 0 < ε < 1, so the condition holds. Substituting the value of η into (??) yields: P(λ min (S) ≤ (1 -ε)λ min (ES)) ≤ de (ε-1) log(1-ε)-ε nλ min (EX 1 ) L , and the result holds. Remark A.1. Without additional characterization of the problem, the bound E∥X∥ ≤ tr EX ≤ d∥EX∥ is tight: consider a diagonal random matrix X such that for any 1 ≤ i ≤ d, P(X = E ii ) = 1/d, where (E ii ) 1≤i≤d are the diagonal elements from the canonical base. Then, ∥X∥ = 1 and ∥EX∥ = 1/d, so ∥X∥ = d∥EX∥. If we consider the best arm identification application, this case essentially boils down to the MAB setting and would make the whole linear modeling irrelevant: maybe there is a more subtle way of characterizing linear bandits in order to avoid a brutal d factor in the bound. Bennett's and Bernstein's inequalities Using the Chernoff's bound, we were able to prove, with high probability, the following: x ⊤ n K k=1 [µ ⋆ G ] k x k x ⊤ k -1 x ≤ x ⊤ n i=1 X i -1 x ≤ ∥x∥ 2 (1 -ε) λ max n K k=1 [µ ⋆ G ] k x k x ⊤ k -1 . This is not enough to ensure the convergence of the randomized sampling. Considering again the random matrix S n = n i=1 X i , our goal is to bound the following quantity: x ⊤ S -1 n -(ES n ) -1 x. One way to bound the above quantity is to bound the maximum eigenvalue of S -1 n -(ES n ) -1 . One has: S -1 n -(ES n ) -1 = S -1 n I -S n (ES n ) -1 = S -1 n (ES -S n )(ES n ) -1 . 
In Section A.6, we used Hoeffding's inequality to upperbound S -1 n based on (ES n ) -1 value. Therefore, we only need to care about the central term. Since the random matrix ES n -S n is not necessarily positive semi-definite anymore, we cannot use Hoeffding's inequality. We can use Bernstein's inequality however, as stated in the following theorem. Theorem A.5. Let X 1 , . . . , X n be n i.i.d. random symmetric matrices such that EX 1 = 0 and there exists L > 0 such that ∥X 1 ∥ ≤ L, almost surely. Let S n ≜ n i=1 X i . Then, for any t > 0, one has: P(∥S n ∥ ≥ t) ≤ de - t 2 2Lt/3 + 2nσ 2 , where σ 2 ≜ E X 2 1 . As in the scalar case, Bernstein's inequality relies on using the Taylor expansion of the exponential to bound the moment generating function, so we will need the following lemma. Lemma A.6. Let L > 0 and let X be a random Hermitian matrix such that EX = 0 and X ⪯ LI almost surely. Then, for any 0 < t < 3/L, one has: E exp(tX) ⪯ exp t 2 /2 1 -tL/3 E X 2 . Proof. Similarly to the Hoeffding's case, we will show a result for the exponential of a scalar and extend it to a Hermitian matrix. Let L > 0, 0 < x < L and 0 < t < 3/L. Let us define f : [0, L] → R such that for any 0 < y < L, f (y) ≜ e ty -ty -1 y 2 . In particular, one has e tx = 1 + tx + x 2 f (x). Notice that f is increasing, so e tx ≤ 1 + tx + x 2 f (L). Now, using the Taylor expansion of the exponential, we can write: f (L) = e tL -tL -1 L 2 = 1 L 2 k≥2 (tL) k k! ≤ t 2 2 k≥2 (tL) k-2 3 k-2 = t 2 /2 1 -tL/3 , where the inequality comes from the fact that k! ≥ 2 × 3 k-2 , for any k ≥ 2. Now, using the fact that X ⪯ LI almost surely, we can obtain the following bound: exp(tX) ⪯ I + tX + X(f (L)I)X = I + tX + f (L)X 2 . Finally, taking the expectation and combining this result with a common bound of the exponential, we obtain: E exp(tX) ⪯ I + t 2 /2 1 -tL/3 E X 2 ⪯ exp t 2 /2 1 -tL/3 E X 2 , hence the result. We are now ready to prove Bernstein's inequality for matrices. Proof of Theorem A.5. Let 0 < η < 3/L, using Markov's inequality one has: P(∥S n ∥ ≥ t) = P e η∥Sn∥ ≥ e ηt ≤ e -ηt Ee η∥Sn∥ . By Lemma A.6 and the subadditivity of the matrix CGF [96, Lemma 3.5.1, Ch. 3], we obtain Ee η∥Sn∥ = E∥ exp(ηS n )∥ ≤ tr E exp(ηS n ) ≤ tr exp t 2 /2 1 -tL/3 E S 2 n . Plugin the trace back into the exponential yields: Ee η∥Sn∥ ≤ tr exp η 2 /2 1 -ηL/3 E S 2 n ≤ de η 2 /2 1-ηL/3 E S 2 n . Optimizing on η would lead to a complicated result, so we use instead η = t/(nσ 2 + tL/3), which verifies the condition η < 3/L and yields the final result. The relationship between the precision (nt in Theorem A.5) and the confidence level δ b (the RHS of the concentration inequality) is more complicated in Bernstein's inequality than in Hoeffding's. It requires solving a second order polynomial equation and leads to: t = L 3n log 2d δ b + L 3n log 2d δ b 2 + 2σ 2 n log 2d δ b . In our case, we will use the bound provided by Bennett's inequality applied to random Hermitian matrices, as it is simpler to derive the precision associated to a confidence level. Theorem A.6. Let X 1 , . . . , X n be n i.i.d. random Hermitian matrices such that EX 1 = 0 and there exists σ 2 > 0 such that ∥E X 1 2 ∥ ≤ σ 2 . In addition, let us assume that there exists c > 0 such that for any q ≥ 3: E (X 1 ) q + ≤ q! 2 σ 2 c q-2 , where for any symmetric matrix X,(X) + is the orthogonal projection of X onto the semidefinite positive cone. Then, for any t > 0, one has: P n i=1 X i ≥ √ 2nσ 2 t + ct ≤ de -t . 
The proof is very similar to Bernstein's: we need an intermediary result on the moment generating function, as stated in the following lemma. Lemma A.7. Let σ 2 , c > 0 and let X be a random Hermitian matrix such that EX = 0 and ∥E X 2 ∥ ≤ σ 2 . In addition, we assume that for any q ≥ 3, E (X) q + ≤ q! 2 σ 2 c q-2 . Then, for any 0 < t < 1/c, one has: E exp(tX) ⪯ exp t 2 /2 1 -ct E X 2 . The proof of this Lemma is ommitted as it is very similar to Lemma A.6. Proof of Theorem A.6. This proof can be directly adapted from Bernstein's using standard results on concentration (see e.g., [START_REF] Boucheron | Concentration inequalities: A nonasymptotic theory of independence[END_REF] for details). Proof of Theorem A.1. As mentionned in the beginning of this section, our goal is to bound ∥S -1 n (S n -ES n )(ES n ) -1 ∥. Let us assume that the batch size n satisfies: n > 2L log d λ min (EX 1 ) . Let de -nλ min (EX 1 ) 2L < δ h < 1. Using the Chernoff's bound, we know that with probability at least 1 -δ h , the following holds true: ∥S -1 n ∥ ≤ ∥(ES n ) -1 ∥ 1 -2L n ∥(EX 1 ) -1 ∥ log(d/δ h ) . Similarly, let 0 < δ b < 1; using Bennett's inequality, with probability at least 1 -δ b , we have: ∥S n -ES n ∥ ≤ L 3 log d δ b + 2nσ 2 log d δ b . Combining these two results with a union bound leads to the following bound, with probability 1 -(δ b + δ h ): ∥S -1 n -(ES n ) -1 ∥ ≤ ∥(ES n ) -1 ∥ 2 (L/3) log(d/δ b ) + 2nσ 2 log(d/δ b ) 1 -(2L/n)∥(EX 1 ) -1 ∥ log(d/δ h ) ≤ 1 n 2 ∥(EX 1 ) -1 ∥ 2 (L/3) log(d/δ b ) + 2nσ 2 log(d/δ b ) 1 -(2L/n)∥(EX 1 ) -1 ∥ log(d/δ h ) (A.1) In order to obtain a unified bound depending on one confidence parameter 1 -δ, one could optimize over δ b and δ h , subject to δ b + δ h = δ. This leads to a messy result and a negligible improvement. One can use simple values δ b = δ h = δ/2, so the overall bound becomes, with probability 1 -δ, ∥S -1 n -(ES n ) -1 ∥ is upper bounded by 1 n ∥(EX 1 ) -1 ∥ 2 2σ 2 n log 2d δ 1 + (L 2 /18σ 2 n) log(2d/δ) 1 -(2L/n)∥(EX 1 ) -1 ∥ log(2d/δ) . This can finally be formulated as follows: ∥S -1 n -(ES n ) -1 ∥ ≤ 1 n ∥(EX 1 ) -1 ∥ 2 2σ 2 n log 2d δ + o 1 n √ n . The final result yields using the fact that max x∈X ∥x∥ 2 = L and f ⋆ G,n = f ⋆ D,n = d n . The bound on f E (S -1 n ) is obtained similarly, only using the Hoeffding's result on minimum eigenvalue. A refined approach of the dimension Intrinsic dimension Definition A.3 (Intrinsic dimension). Let d > 0 and S n ∈ R d×d be a positive semi-definite matrix. The intrisic dimension of S n , denoted intdim(S n ), is defined as follows: intdim(S n ) ≜ tr(S n ) ∥S n ∥ . One always has 1 ≤ intdim(S n ) ≤ d. As for the regular concentration proofs, we will need two useful lemmas: one for deriving a nicer upperbound from Markov's inequality and the other for forcing the intrinsic dimension into the bound. Lemma A.8. Let Z ∈ R d be a random Hermitian matrix and let ψ : R → R + be non-decreasing and non-negative. Then, for any t ∈ R such that ψ(t) > 0, one has: P(∥Z∥ ≥ t) ≤ 1 ψ(t) E tr(ψ(Z)). Proof. Let t ∈ R. Since ψ is non-decreasing, the event {∥Z∥ ≥ t} contains {ψ(∥Z∥) ≥ ψ(t)}. In addition, using the definition of ψ(Z), one can easily notice that ψ(Z) ⪰ 0 and ∥ψ(Z)∥ ≥ ψ(∥Z∥). Therefore, one can write: P(∥Z∥ ≥ t) ≤ P(∥ψ(Z)∥ ≥ ψ(t)) ≤ P(tr(ψ(Z)) ≥ ψ(t)), where we used the fact that ψ(Z) ⪰ 0 in the rightmost inequality. Finally, one can conclude using Markov's inequality. Lemma A.9. Let φ : R → R be a convex function and let Z be a positive semi-definite matrix. 
Then, one has: tr φ(Z) ≤ intdim(Z)φ ∥Z∥ + (d -intdim(Z))φ(0). In particular, if φ(0) = 0, one has: tr φ(Z) ≤ intdim(Z)φ ∥Z∥ . Proof. Let 0 ≤ x ≤ ∥Z∥. By convexity of φ, we can write: φ(x) ≤ φ(0) + φ(∥Z∥) -φ(0) x ∥Z∥ . Using Lemma A.3, we can extend the above inequality to Z: tr φ(Z) ≤ tr φ(0)I + φ(∥Z∥) -φ(0) ∥Z∥ tr Z , which can be rearranged as follows: tr φ(Z) ≤ intdim(Z)φ ∥Z∥ + (d -intdim(Z))φ(0), and the result holds. Using the two previous lemmas, we can adapt the proof in Hoeffding in order to obtain an improved bound with the intrinsic dimension. Theorem A.7. Let L > 0 and let X 1 , . . . , X n be i.i.d. random matrices such that 0 ⪯ X 1 ⪯ LI. Let S n = n i=1 X i . For any 0 < ε < 1, one can upperbound ∥S n ∥ as follows: P(∥S n ∥ ≥ (1 + ε)∥ES n ∥) ≤ 2 × intdim(Z) e ε (1 + ε) 1+ε ∥ESn∥ L . Proof. Let t, η > 0. Using Lemma A.8 with ψ : x ∈ R → (e ηx -1) + yields: P(∥S n ∥ ≥ t) ≤ 1 e ηt -1 E tr (exp(ηS n ) -I) + = 1 e ηt -1 tr(exp(ηS n ) -I), (A.2) where we used the fact that S n ⪰ 0 implies exp(ηS n ) ⪰ I. Let 0 ≤ x ≤ L, by convexity of the exponential, one has: e ηx -1 ≤ e ηL -1 x L . Once again, we can extend this result to S n and obtain: tr exp(ηS n ) -I ≤ tr e ηL -1 L S n . Taking the expectation and using the inequality x ≤ e x -1 yields: E tr exp(ηS n ) -I ≤ tr e ηnL -1 nL ES n ≤ tr exp e ηnL -1 nL ES n -I We can now use Lemma A.9 with φ : x ∈ R → e x -1 to obtain a bound depending on ∥ES n ∥: E tr exp(ηS n ) -I ≤ tr exp e ηnL -1 nL ES n -I ≤ intdim(ES n ) e e ηnL -1 nL ∥ESn∥ -1 . Combining the previous inequality with (A.2) yields: P(∥S n ∥ ≥ t) ≤ intdim(ES n ) × e e ηnL -1 nL ∥ESn∥ -1 e ηt -1 ≤ intdim(ES n ) × e ηt e ηt -1 • e -ηt+ e ηnL -1 nL ∥ESn∥ . The remainder of the proof consists in bounding e ηt /(e ηt -1) by 2 and the rightmost term as in the regular Hoeffding's proof (see [START_REF] Tropp | An introduction to matrix concentration inequalities[END_REF] for additional details). There are two main differences between this version of the Hoeffding's bound and the regular one. First, there is a factor 2 with the intrinsic dimension. This a not necessarily a big deal as we can win on other aspects. Then, we have obtained a bound on the highest eigenvalue, but not on the lowest. This is due to the current definition of the intrinsic dimension: we use this limitation as a motivation for the refinement we propose in the next section. A refined approach of the intrinsic dimension Definition A.4 (Upper and lower intrinsic dimension). Let d > 0 and S n ∈ R d×d be a positive semi-definite matrix. The upper and lower intrisic dimensions of S n , denoted updim(S n ) and lowdim(S n ) respectively, are defined as follows:          updim(S n ) ≜ tr S n -λ min (S n )I ∥S n ∥ -λ min (S n ) lowdim(S n ) ≜ tr ∥S n ∥I -S n ∥S n ∥ -λ min (S n ) = d -updim(S n ). One always has 1 ≤ updim(S n ), lowdim(S n ) ≤ d -1. This definition brings a different information about the matrix at stake: instead of renormalizing the trace using the spectral norm, we also shift it using the lowest eigenvalue. With these new quantities, we are able to formulate a refined version of Lemma A.9. Lemma A.10. Let φ : R → R be a convex function and let Z be a positive semi-definite matrix. Then, one has: tr φ(Z) ≤ updim(Z)φ ∥Z∥ + lowdim(Z)φ(λ min (Z)). This bound is always tighter than the one with the intrinsic dimension, that is: updim(Z)φ ∥Z∥ +lowdim(Z)φ(λ min (Z)) ≤ intdim(Z)φ ∥Z∥ +(d-intdim(Z))φ(0). Proof. 
To prove both assertions, we will show a more general bound, of the form: tr φ(Z) ≤ f (l), for 0 ≤ l ≤ λ min (Z) and then we will show that f is non-increasing. Let 0 ≤ l ≤ λ min (Z) and let l ≤ x ≤ ∥Z∥. Using the convexity of φ, one can write: φ(x) ≤ φ(l) + (φ(∥Z∥) -φ(l)) x -l ∥Z∥ -l = x -l ∥Z∥ -l φ(∥Z∥) + ∥Z∥ -x ∥Z∥ -l φ(l). Using Lemma A.3 again, we can extend the above inequality to Z: tr φ(Z) ≤ tr(Z -lI) ∥Z∥ -l φ(∥Z∥) + tr(∥Z∥I -Z) ∥Z∥ -l φ(l). It is immediate to see that taking l = 0 leads to Lemma A.9 and l = λ min (Z) shows the first assertion of this lemma. The last assertion of the theorem just comes from the convexity of φ: when applying the convexity bound on two segments I ⊆ J , the bound on I is necessarily tighter than the bound on J . Theorem A.8. Let X 1 , . . . , X n be i.i.d. positive semidefinite random matrices, such that there exists L > 0 verifying 0 ⪯ X 1 ⪯ LI. Let S n be defined as: S n ≜ n i=1 X i . In addition, let κ be the condition number of ES n , that is κ ≜ ∥ES n ∥ λ min (ES n ) . Then, for any 0 < ε < 1, one can lowerbound λ min (S n ) as follows: P(λ min (S n ) ≤ (1 -ε)λ min (ES n )) ≤ dmin e -ε (1 -ε) 1-ε nλ min (EX 1 ) L , where dmin = lowdim(ES n ) + updim(ES n )e -nελ min (EX 1 )(κ-1)/L . Similarly, one can upperbound ∥S n ∥ as follows: P(∥S n ∥ ≥ (1 + ε)∥ES n ∥) ≤ dmax e ε (1 + ε) 1+ε n∥EX 1 ∥ L , where dmax = updim(ES n ) + lowdim(ES n )e -nε∥EX 1 ∥(1-κ -1 )/L . Proof. The beggining of the proof is similar to the regular Hoeffding's proof so we can directly write, for t, η > 0: P(∥S n ∥ ≥ t) ≤ e -ηt tr exp e ηL -1 L ES n Using Lemma A.10 with φ : x ∈ R → e g(η)x and g : η → L -1 (e ηL -1) yields: tr exp(g(η)ES n ) ≤ updim(ES n )e g(η)∥ESn∥ + lowdim(ES n )e g(η)λ min (ESn) = updim(ES n ) + lowdim(ES n )e g(η)(λ min (ESn)-∥ESn∥) e g(η)∥ESn∥ . Using the same value for η than in regular Hoeffding's proof thus yields: P(∥S n ∥ ≥ (1 + ε)∥ES n ∥) ≤ d e ε (1 + ε) 1+ε ∥ESn∥/L where d = updim(ES n ) + lowdim(ES n )e g(η)(λ min (ESn)-∥ESn∥) . Pluging the value η = L -1 log(1 + ε) into d's expression yields the result. The result on λ min (S n ) is very similar: P(λ min (S n ) ≤ t) = P(∥ -S n )∥ ≥ -t) ≤ e ηt tr exp e -ηL -1 L ES n . Using the same reasoning, one obtains: P(λ min (S n ) ≤ (1 -ε)∥ES n ∥) ≤ d e -ε (1 -ε) 1-ε λ min (ESn)/L where d = lowdim(ES n ) + updim(ES n )e g(-η)(∥ESn∥-λ min (ESn)) , which proves the result since η = -L -1 log(1 -ε). Theorem A.9. Let X 1 , . . . , X n be n i.i.d. random symmetric matrices such that EX 1 = 0 and there exists L > 0 such that ∥X 1 ∥ ≤ L, almost surely. Let S n ≜ n i=1 X i . Let V be the covariance matrix of X 1 , that is V ≜ E X 2 1 ] -M(µ ⋆ G ) 2 and let κ be its condition number. Let t verifying 3n∥V∥ 2 L > t > √ n∥V∥ 2 + L 3 √ n . Then, one has: P ∥S n ∥ ≥ √ t ≤ de - t 2 4∥V∥ 2 , where d = updim(V) + lowdim(V)e -n 16 (1-κ -1 ) . Proof. One can combine reasonings of Bernstein's regular proof with the proof of Hoeffding's with updim and lowdim to obtain: P(∥S n ∥ ≥ √ nt) ≤ updim(V)e - t 2 /2 σ 2 +Lt/3n + lowdim(V)e - t 2 /2 σ 2 +Lt/3n (2-κ -1 ) . Let us assume that 3nσ 2 L > t > √ nσ 2 + L 3 √ n . Then, the previous result can be bounded as follows: P(∥S n ∥ ≥ √ nt) ≤ updim(V) + lowdim(V)e -t 2 4σ 2 (1-κ -1 ) e -t 2 4σ 2 ≤ updim(V) + lowdim(V)e -n 16 (1-κ -1 ) e -t 2 4σ 2 , and the result holds. Link with the best arm identification in linear bandits Let d > 0 and X ⊆ R d a subset of R d , corresponding to the bandit arms. 
The linear bandit setting assumes that the conditional distribution of the rewards given the arm follows a linear model: there exists an unknown parameter θ⋆ ∈ R^d such that the reward r(x) associated with any action x ∈ X is of the form r(x) = θ⋆^⊤ x + ϵ, where ϵ is an R-subgaussian noise independent of x. This linear structure implies that some information is shared between arms through the parameter θ⋆: an action-reward pair (x, r(x)) gives information about θ⋆ and thus about the reward distributions of the other actions. This makes this setting very different from the classical multi-armed bandit setting, where the reward distributions of the actions are assumed to be independent. Whereas multi-armed bandit algorithms mainly focus on the estimation of the mean reward of each action, linear bandit algorithms are mainly interested in the estimation of the parameter θ⋆. Depending on the context, the goal of a bandit algorithm is either to maximize the cumulative reward (the sum of the rewards collected over several iterations) or to find the arm maximizing the reward, referred to as best arm identification or pure exploration. We focus here on best arm identification in linear bandits, whose objective is to find the arm x⋆ maximizing the average reward: x⋆ = arg max_{x ∈ X} θ⋆^⊤ x. As the parameter θ⋆ is unknown, the aim is to design a strategy that sequentially chooses t actions x_1, . . . , x_t ∈ X and collects their associated rewards r_i = θ⋆^⊤ x_i + ϵ_i, 1 ≤ i ≤ t, where ϵ_1, . . . , ϵ_t are independent realizations of ϵ, in order to obtain an estimate θ̂t of θ⋆. To find the best arm, the estimated prediction θ̂t^⊤ x should be close to the true prediction θ⋆^⊤ x for all x ∈ X. More precisely, rather than the reward prediction of an action itself, we are interested in comparing the predictions of each pair of arms. We thus want |(θ̂t - θ⋆)^⊤ (x - x′)| to be small. Remark A.2. Note that, compared to the multi-armed bandit case where known suboptimal arms are no longer played, the situation is different in the best arm identification case, as playing suboptimal arms might give information about the parameter θ⋆ and improve the discrimination of the unknown x⋆ from the other arms. Most of the strategies designed for best arm identification in linear bandits [START_REF] Soare | Best-arm identification in linear bandits[END_REF], [START_REF] Tao | Best Arm Identification in Linear Bandits with Linear Dimension Dependency[END_REF], [106] rely on two concentration inequalities giving high-probability bounds on the prediction error |(θ̂t - θ⋆)^⊤ x| of the regression estimator θ̂t obtained from a sequence of action-reward pairs. The first concentration inequality is only valid when the sequence of actions is fixed and hence cannot depend on the observed random rewards. The authors of [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF] derived a concentration inequality which holds when the sequence of actions is adaptive to the observed random rewards; however, this inequality offers a looser bound than the one given for fixed sequences. In [START_REF] Soare | Best-arm identification in linear bandits[END_REF], strategies relying on the fixed-sequence bound are developed, whereas [106] designed a fully adaptive algorithm based on the adaptive bound. These two concentration inequalities are detailed below. Let θ̂t(λ) denote the ridge estimate of θ⋆ with an ℓ2-penalty λ: θ̂t(λ) ≜ arg min_{θ ∈ R^d} Σ_{s=1}^{t} (θ^⊤ x_s - r_s)^2 + (λ/2) ∥θ∥^2.
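As a side illustration, the sketch below computes this ridge estimate in NumPy. It is a minimal, illustrative implementation (the small simulation and all variable names are ours, not part of the original experiments) and relies on the closed-form expression recalled in the next paragraph.

```python
import numpy as np

def ridge_estimate(X, r, lam):
    """Ridge estimate of theta_star from the action matrix X (t x d) and rewards r (t,).

    Uses the closed form recalled in the text: theta_hat = A_t(lam)^{-1} X^T r,
    with A_t(lam) = X^T X + lam * I, solved without forming the inverse explicitly.
    """
    t, d = X.shape
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ r)

# Tiny assumed setup: 3-dimensional parameter, 50 random actions, Gaussian noise.
rng = np.random.default_rng(0)
theta_star = np.array([2.0, 0.0, 0.0])
X = rng.normal(size=(50, 3))
r = X @ theta_star + rng.normal(size=50)
print(ridge_estimate(X, r, lam=1.0))   # should be close to theta_star
```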
The ridge estimate θt (λ) can be expressed in closed form: θt (λ) = Ât (λ) -1 X ⊤ t r t , where X ⊤ t ≜ (x 1 , . . . , x t ), r ⊤ t ≜ (r 1 , . . . , r t ) and Ât (λ) ≜ X ⊤ t X t + λI d . Fixed design concentration inequality. We assume that there is a finite number of arms |X | = K. If λ = 0 and (x i ) 1≤i≤∞ is a fixed sequence of actions (independent of the random rewards (r i ) 1≤i≤∞ ) we have the following concentration inequality [START_REF] Soare | Best-arm identification in linear bandits[END_REF]: for all δ ∈ (0, 1), P ∀t ∈ N, ∀x ∈ X , |θ ⊤ ⋆ x -θ⊤ t x| ≤ 2R∥x∥ Â-1 t 2 log 6t 2 K π 2 δ ≥ 1 -δ . (A.3) One may notice that this result holds over these directions by replacing K by K 2 in the logarithmic term, as there are of the order of K 2 such directions. 3Adaptive design concentration inequality. When the sequence of actions is chosen adaptively of the history, i.e., for all i ∈ N, x i is allowed to depend on (x 1 , r 1 , . . . , x t-1 , r t-1 ), we need to rely on a result established by [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF]: if λ > 0 and ∥x i ∥ ≤ L for all i then for all δ ∈ (0, 1) and all x ∈ R d , P |θ ⊤ ⋆ x -θt (λ) ⊤ x| ≤ ∥x∥ Ât(λ) -1 R d log 1 + tL 2 /λ δ + √ λ∥θ ⋆ ∥ ≥ 1 -δ . (A.4) The reader can refer to [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF]Appendix B] for the proof of this result. The main difference with (A.3) is the presence of an extra √ d factor which cannot be removed and which makes adaptive algorithms suffer more from the dimension than fixed design strategies (see [START_REF] Lattimore | Bandit Algorithms[END_REF]Chapter 20] for a more complete discussion on this aspect). We now omit the dependence of Ât (λ) -1 in λ when it is not relevant for the purpose of the discussion. Whichever concentration inequality is used, the bound on the prediction error in a direction y = xx ′ , x, x ′ ∈ X depends on the matrix norm ∥y∥ Â-1 t . The goal of a strategy for the problem of best arm identification in linear bandits as formulated in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] is to choose a sequence of actions that reduces this matrix norm as fast as possible for all directions y so as to reduce the prediction error and be able to identify the best arm. This approach thus leads to the following optimization problem: (x 1 , . . . , x B ) ∈ arg min x 1 ,...,x B ∈X max y∈Y y ⊤ t i=1 x i x ⊤ i -1 y. (A.5) If one upper bounds ∥y∥ Â-1 t by 2∥x∥ Â-1 t we finally obtain the G-optimal design. Details on experiment setting and comments For the randomized strategies we use the cvxopt python package [START_REF] Andersen | CVXOPT: A Python package for convex optimization, version 1.2.0[END_REF] to compute the solution of the semi-definite program associated to the E-optimal design and compute the solution of the convex relaxation of the D-optimal design problem. We recall that as the relaxed G-optimal design problem is equivalent to the relaxed D-optimal design problem we can use the solution of the latter for the former. Finally, for the greedy implementations of E and G-optimal design, when there are ties between several samples at a given iteration we uniformly select one at random. Randomized strategy versus greedy strategy for E-optimal design We recall here that the goal of E-optimal design is to choose experiments maximizing λ min ( K k=1 n k x k x ⊤ k ). 
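For concreteness, here is a minimal sketch of the greedy baseline used in this comparison: at each step it adds the experiment from the pool that most increases the smallest eigenvalue of the current design matrix, with uniform tie-breaking as described above. The randomized counterpart (not shown) instead samples experiments from the optimal design weights obtained via the semi-definite program solved with cvxopt. Function and variable names are illustrative.

```python
import numpy as np

def greedy_e_optimal(pool, n, rng=None):
    """Greedy E-optimal design: select n experiments (rows of `pool`, repetitions
    allowed) so as to maximize lambda_min(sum_i x_i x_i^T)."""
    rng = rng or np.random.default_rng()
    d = pool.shape[1]
    design = np.zeros((d, d))
    selected = []
    for _ in range(n):
        # Smallest eigenvalue of the design matrix after adding each candidate.
        scores = np.array(
            [np.linalg.eigvalsh(design + np.outer(x, x))[0] for x in pool]
        )
        ties = np.flatnonzero(scores == scores.max())
        k = int(rng.choice(ties))            # uniform tie-breaking, as in the experiments
        design += np.outer(pool[k], pool[k])
        selected.append(k)
    return selected, design
```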
We generate a pool of experiments in R d made of K independent and identically distributed realizations of a standard Gaussian random variable. Figure A.1a shows the performance of the randomized and greedy strategies against the number of selected samples n when K = 500 and d = 10. For very small numbers of selected experiments the performances of the different strategies are equivalent but as the number of experiments increases the randomized E-optimal design outperforms the greedy strategy. Figure A.1b shows the performance of the strategies against the dimension d when K = 500 and the number of selected experiments n is fixed to 500. For small dimensions the randomized E-optimal design achieves a better performance but its superiority decreases when the dimension increases. For both settings we also plot the performance of the random strategy that selects experiments uniformly at random. Furthermore, the results are averaged over 100 random seeds controlling the generation of the dataset as well as the random sampling of the experiments. Application of randomized G-optimal design to best arm identification in linear bandits We now compare the randomized G-optimal design with the greedy implementation that has been used for the problem of best arm identification in linear bandits. We note that the objective of this experiment is not to achieve state-of-the-art results for best arm identification in linear bandits but rather to show that the randomized strategy while being easy to implement achieves comparable results as the ones obtained with the greedy strategy. The underlying model of a linear bandit is the same as the one presented in Section A.2: the relationship between the experiments x, referred to as arms in the bandit literature, and their associated measurements y is assumed to be linear. The goal of best arm identification (see e.g., [START_REF] Soare | Best-arm identification in linear bandits[END_REF][START_REF] Tao | Best Arm Identification in Linear Bandits with Linear Dimension Dependency[END_REF]106]) is to find the arm with maximum linear response among a finite set of arms. We focus on the case where one wants to solve this task with a minimum number of trials for a given confidence level. The core idea of most of the developed strategies is to sequentially choose arms so as to minimize a confidence bound on the prediction error of the linear response. Indeed, the sooner we become confident about the predicted response of each arm the sooner we can identify the best one with high probability. One would like to take advantage of the past responses y when choosing future arms. However the confidence bound that is available for this adaptive setting has a worse dependence on the dimension d than the confidence bound available for fixed sequences of arm [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF]. The confidence bound for fixed sequences can be stated as follows: for all δ ∈ (0, 1), with probability at least 1 -δ, for all n ∈ N and for all arms x ∈ X , |θ ⊤ ⋆ x -θ⊤ n x| ≤ 2c∥x∥ Σ -1 D log 6t 2 |X | π 2 δ , (A.6) where θn is the OLS estimator obtained with n samples, c is a constant depending on the variance of the Gaussian noise and Σ D = K k=1 n k x k x ⊤ k . It can be observed that designing a strategy that minimizes this confidence bound for all arms naturally leads to the G-optimal design optimization problem. 
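To make the link with the stopping rule used in the experiment below explicit, the following sketch evaluates the fixed-design bound (A.6) over all pairwise directions and declares the empirical best arm once every gap exceeds the corresponding confidence width. It is an illustrative reconstruction, not the exact condition (equation (13)) of the cited reference: the constant c, the squaring of the number of arms in the log term to cover all directions, and all names are our assumptions.

```python
import numpy as np

def direction_width(y, Sigma_inv, t, n_arms, delta, c):
    """Width of the fixed-design bound (A.6) in direction y = x - x', with the
    number of arms squared in the log term to cover all pairwise directions."""
    norm = float(np.sqrt(y @ Sigma_inv @ y))
    return 2.0 * c * norm * np.sqrt(
        2.0 * np.log(6.0 * t**2 * n_arms**2 / (np.pi**2 * delta))
    )

def identified_best_arm(arms, theta_hat, Sigma_inv, t, delta, c=1.0):
    """Return the index of the empirical best arm once it is separated from every
    other arm by more than the confidence width, and None otherwise."""
    preds = arms @ theta_hat
    i_star = int(np.argmax(preds))
    for j in range(len(arms)):
        if j == i_star:
            continue
        y = arms[i_star] - arms[j]
        if preds[i_star] - preds[j] <= direction_width(y, Sigma_inv, t, len(arms), delta, c):
            return None          # still uncertain about this direction: keep sampling
    return i_star
```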
The reader can refer to the supplementary material or [START_REF] Soare | Best-arm identification in linear bandits[END_REF] for more details. To compare the randomized G-optimal design with the greedy implementation used for best arm identification in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] we use the same setting as the one of the experiment presented in Section 6 of [START_REF] Soare | Best-arm identification in linear bandits[END_REF]. More specifically we consider a set of d + 1 arms in R d where d ≥ 2. This set is made of the d vectors (e 1 , . . . , e d ) forming the canonical basis of R d and one additional arm x d+1 = (cos(ω), sin(ω), 0, . . . , 0) ⊤ with ω = 0.1. The true parameter θ ⋆ has all its coordinates equal to 0 except the first one which is set to 2. In this setting, the best arm, i.e., the one with maximum linear response, is e 1 . One can also note that it is much harder to differentiate this arm from x d+1 than from the other arms. The noise of the linear model is a standard Gaussian random variable N (0, 1) and the confidence level in (A.6) is chosen equal to δ = 0.05. We also use the same condition as in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] (equation ( 13) therein) to check when enough arms have been pulled to be able to identify the best arm with high probability. This condition naturally derives from the confidence bound (A.6). As explained in Section A.2 the greedy implementation does not work for the first iterations because the design matrix is singular. As in [START_REF] Soare | Best-arm identification in linear bandits[END_REF] we thus initialize the procedure by choosing once each arm of the canonical basis. Although this would not be required for the randomized strategy as we could start by sampling a given number of experiments, we use the same initialization for the sake of fairness. The number of samples required to find the best arm are shown in Figure A.1c which summarizes the results obtained over 100 random seeds controlling the Gaussian noise of the linear model and the random selection of the experiments. One can see that the randomized G-optimal design, while being simple to use, achieves similar performances for low dimensions and even better performances on average than the greedy implementation of the G-optimal design as the dimension increases. We note that for all the random repetitions the best arm returned by both strategies is always e 1 . B.1 Introduction Adversarial example attacks recently became a major concern in the machine learning community. An adversarial attack refers to a small, imperceptible change of an input that is maliciously designed to fool a machine learning algorithm. Since the seminal work of [START_REF] Biggio | Evasion attacks against machine learning at test time[END_REF] and [START_REF] Szegedy | Intriguing properties of neural networks[END_REF] it became increasingly important to understand the very nature of this phenomenon [START_REF] Bubeck | Adversarial examples from computational constraints[END_REF][START_REF] Fawzi | Adversarial vulnerability for any classifier[END_REF][START_REF] Fawzi | Robustness of classifiers: from adversarial to random noise[END_REF][START_REF] Gourdeau | On the Hardness of Robust Classification[END_REF][START_REF] Ilyas | Adversarial Examples Are Not Bugs, They Are Features[END_REF]. 
Furthermore, a large body of work has been published on designing attacks [START_REF] Athalye | Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples[END_REF][START_REF] Carlini | Towards evaluating the robustness of neural networks[END_REF][START_REF] Goodfellow | Explaining and Harnessing Adversarial Examples[END_REF][START_REF] Madry | Towards Deep Learning Models Resistant to Adversarial Attacks[END_REF][START_REF] Papernot | The limitations of deep learning in adversarial settings[END_REF] and defenses [START_REF] Cohen | Certified Adversarial Robustness via Randomized Smoothing[END_REF][START_REF] Goodfellow | Explaining and Harnessing Adversarial Examples[END_REF][START_REF] Madry | Towards Deep Learning Models Resistant to Adversarial Attacks[END_REF][START_REF] Papernot | Distillation as a defense to adversarial perturbations against deep neural networks[END_REF]. Besides, in real-life scenarios such as for an autonomous car, errors can be very costly. It is not enough to just defend against new attacks as they are published. We would need an algorithm that behaves optimally against every single attack. However, it remains unknown whether such a defense exists. This leads to the following questions, for which we provide principled and theoretically-grounded answers. Q1: Is there a deterministic classifier that ensures optimal robustness against any adversarial attack? A1: To answer this question, in Section B.3, we cast the adversarial examples problem as an infinite zero-sum game between a Defender (the classifier) and an Adversary that produces adversarial examples. Then we demonstrate, in Section B.4, the non-existence of a Nash equilibrium in the deterministic setting of this game. This entails that no deterministic classifier can claim to be more robust than all other classifiers against any possible adversarial attack. Another consequence of our analysis is that there is no free lunch for transferable attacks: an attack that works on all classifiers will never be optimal against any of them. Q2: Would randomized defense strategies be a suitable alternative to defend against strong adversarial attacks? A2: We tackle this problem both theoretically and empirically. In Section B.5, we demonstrate that for any deterministic defense there exists a mixture of classifiers that offers better worst-case theoretical guarantees. Building upon this, we devise a method that generates a robust randomized classifier with a one step boosting method. We evaluate this method, in Section B.6, against strong adaptive attacks on CIFAR10 and CIFAR100 datasets. It outperforms Adversarial Training against both ℓ ∞ -PGD [START_REF] Madry | Towards Deep Learning Models Resistant to Adversarial Attacks[END_REF], and ℓ 2 -C&W [START_REF] Carlini | Towards evaluating the robustness of neural networks[END_REF] attacks. More precisely, on CIFAR10, our algorithm achieves 0.55 (resp. 0.53) accuracy under attack against these attacks, which is an improvement of 0.13 (resp. 0.18) over Adversarial Training. B.2 Related Work Many works have studied adversarial examples, in several different settings. We discuss hereafter the different frameworks that we believe to be related to our work, and discuss the aspects on which our contribution differs from them. Distributionally robust optimization. 
The work in [START_REF] Sinha | Certifiable Distributional Robustness with Principled Adversarial Training[END_REF] addresses the problem of adversarial examples through the lens of distributionally robust optimization. They study a min-max problem where the Adversary manipulates the test distribution while being constrained in a Wasserstein distance ball (they impose a global constraint on distributions for the Adversary, while we study a local, pointwise constraint, leading to different attack policies). A similar analysis was presented in [START_REF] Lee | Minimax Statistical Learning with Wasserstein distances[END_REF] in a more general setting that does not focus on adversarial examples. Even though our work studies a close problem, our reasoning is very different. We adopt a game theoretic standpoint, which allows us to investigate randomized defenses and endow them with strong theoretical evidences. Game Theory. Some works have tackled the problem of adversarial examples as a two player game. For example [START_REF] Brückner | Stackelberg Games for Adversarial Prediction Problems[END_REF] views adversarial example attacks and defenses as a Stackelberg game. More recently, [START_REF] Bulò | Randomized Prediction Games for Adversarial Machine Learning[END_REF] and [START_REF] Perdomo | Robust Attacks against Multiple Classifiers[END_REF] investigated zero-sum games. They consider restricted versions of the game where classical theorems apply, such as when the players only have a finite set of possible strategies. We study a more general setting. Finally, [START_REF] Dhillon | Stochastic activation pruning for robust adversarial defense[END_REF] motivates the use of noise injection as a defense mechanism by game theoretic arguments but only present empirical results. Randomization. Following the work of [START_REF] Dhillon | Stochastic activation pruning for robust adversarial defense[END_REF] and [105], several recent works studied noise injection as a defense mechanism. In particular, [START_REF] Lecuyer | Certified Robustness to Adversarial Examples with Differential Privacy[END_REF], followed by [START_REF] Cohen | Certified Adversarial Robustness via Randomized Smoothing[END_REF][START_REF] Li | Certified Adversarial Robustness with Additive Noise[END_REF][START_REF] Pinot | Theoretical evidence for adversarial robustness through randomization[END_REF]102] demonstrated that noise injection can, in some cases, give provable defense against adversarial attacks. The analysis and defense method we propose in this paper are not based on noise injection. However, a link could be made between these works and the mixture we propose, by noting that a classifier in which noise is being injected can be seen as an infinite mixture of perturbed classifiers. Optimal transport. Our work considers a distributionnal setting, in which the Adversary manipulating the dataset is formalized by a push-forward measure. This kind of setting is close to optimal transport settings recently developed by [START_REF] Bhagoji | Lower Bounds on Adversarial Robustness from Optimal Transport[END_REF] and [START_REF] Pydi | Adversarial Risk via Optimal Transport and Optimal Couplings[END_REF]. Specifically, these works investigate classifier-agnostic lower bounds on the risk for binary classification under attack, with some hypothesis on the data distribution. The main differences are that we focus on studying equilibria and not deriving bounds. Moreover, these works do not study the influence of randomization. 
Finally they express the optimal risk of the Defender in terms of transportation costs between two distributions, whereas we explicitly study the Adversary's behaviour as a transport from one distribution to another. Even though they do not treat the problem from the same prism, we believe that these works are profoundly related and complementary to ours. Ensemble of classifiers. Some works have been done to improve the robustness of a model by constructing ensemble of classifiers [START_REF] Abbasi | Robustness to adversarial examples through an ensemble of specialists[END_REF][START_REF] Pang | Improving adversarial robustness via promoting ensemble diversity[END_REF][START_REF] Sen | EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks[END_REF]100,107]. However all the defense methods proposed in those papers subsequently proved to be ineffective against adaptive attacks introduced in [START_REF] He | Adversarial example defense: Ensembles of weak defenses are not strong[END_REF][START_REF] Tramer | On adaptive attacks to adversarial example defenses[END_REF]. The main difference with our method is that it is not an ensemble method since it uses sampling instead of voting to aggregate the classifiers' output. Hence in terms of volatility, in voting methods, whenever a majority agrees on an opinion, all others votes will be ignored, whereas here each classifier always contributes according to its probability weights, which do not depend on the others. B.3 A Game Theoretic point of view. Initial problem statement Notations. For any set Z with σ-algebra σ(Z), if there is no ambiguity on the considered σalgebra, we denote P(Z) the set of all probability measures over (Z, σ(Z)), and F Z the set of all measurable functions from (Z, σ(Z)) to (Z, σ(Z)). For µ ∈ P(Z) and ϕ ∈ F Z , the pushforward measure of µ by ϕ is the measure ϕ#µ such that ϕ#µ(B) = µ(ϕ -1 (B)) for any B ∈ σ(Z). Binary classification task. Let X ⊂ R d and Y = {-1, 1}. We consider a distribution D ∈ P(X × Y) that we assume to be of support X × Y. The Defender is looking for a hypothesis (classifier) h in a class of functions H, minimizing the risk of h w.r.t. D: R(h) : = E (X,Y )∼D [1{h(X) ̸ = Y }] = E Y ∼ν E X∼µ Y [1{h(X) ̸ = Y }] . (B.1) Where H := {h : x → sgn g(x) | g : X → R continuous}, ν ∈ P(Y) is the probability measure that defines the law of the random variable Y , and for any y ∈ Y, µ y ∈ P(X ) is the conditional law of X|(Y = y). Adversarial example attack (point-wise). Given a classifier h : X → Y and a data sample (x, y) ∼ D, the Adversary seeks a perturbation τ ∈ X that is visually imperceptible, but modifies x enough to change its class, i.e. h(x + τ ) ̸ = y. Such a perturbation is called an adversarial example attack. In practice, it is hard to evaluate the set of visually imperceptible modifications of an image. However, a sufficient condition to ensure that the attack is undetectable is to constrain the perturbation τ to have a small norm, be it for the ℓ ∞ or the ℓ 2 norm. Hence, one should always ensure that ∥τ ∥ ∞ ≤ ϵ ∞ , or ∥τ ∥ 2 ≤ ϵ 2 , depending on the norm used to measure visual imperceptibility. The choice of the threshold depends on the application at hand. For example, on CIFAR datasets, typical values for ϵ ∞ and ϵ 2 are respectively, 0.031 and 0.4/0.6/0.8. In the remaining of this work, we will define our constraint using an ℓ 2 norm, but all our results are valid for an ℓ ∞ based constraint. Adversarial example attack (distributional). 
The Adversary chooses, for every x ∈ X , a perturbation that depends on its true label y. This amounts to construct, for each label y ∈ Y, a measurable function ϕ y such that ϕ y (x) is the perturbation associated with the labeled example (x, y). This function naturally induces a probability distribution over adversarial examples, which is simply the push-forward measure ϕ y #µ y . The goal of the Adversary is thus to find ϕ = (ϕ -1 , ϕ 1 ) ∈ (F X |ϵ 2 ) 2 that maximizes the adversarial risk R adv (h, ϕ) defined as follows: R adv (h, ϕ) := E Y ∼ν E X∼ϕ Y #µ Y [1{h(X) ̸ = Y }] . (B.2) Where for any ϵ 2 ∈ (0, 1), F X |ϵ 2 is the set of functions that imperceptibly modifies a distribution: F X |ϵ 2 := ψ ∈ F X | essup x∈X ∥ψ(x) -x∥ 2 ≤ ϵ 2 . Adversarial defense, a two-player zero-sum game. With the setting defined above, the adversarial examples problem can be seen as a two-player zero-sum game, where the Defender tries to find the best possible hypothesis h, while a strong Adversary is manipulating the dataset distribution: inf h∈H sup ϕ∈(F X |ϵ 2 ) 2 R adv (h, ϕ). (B.3) This means that the Defender tries to design the classifier with the best performance under attack, whereas the Adversary will each time design the optimal attack on this specific classifier. In the game theoretical terminology, the choice of a classifier h (resp. an attack ϕ) for the Defender (resp. the Adversary) is called a strategy. It is crucial to note that the sup-inf and inf-sup problems do not necessarily coincide. In this paper, we mainly focus on the Defender's point of view which corresponds to the inf-sup problem. We will be interested in understanding the behaviour of players in this game, i.e. the best responses they have to a given strategy, and whether some equilibria may arise. This motivates the following definitions. Definition B.1 (Best Response). Let h ∈ H, and ϕ ∈ F X |ϵ 2 2 . A best response from the Defender to ϕ is a classifier h * ∈ H such that R adv (h * , ϕ) = min h∈H R adv (h, ϕ). Similarly, a best response from the Adversary to h is an attack ϕ * ∈ F X |ϵ 2 2 such that R adv (h, ϕ * ) = max ϕ∈(F X |ϵ 2 ) 2 R adv (h, ϕ). In the remaining, we denote BR(h) the set of all best responses of the Adversary to a classifier h. Similarly BR(ϕ) denotes the set of best responses to an attack ϕ. Definition B.2 (Pure Nash Equilibrium). In the zero-sum game (Eq. B.3), a Pure Nash Equilib- rium is a couple of strategies (h, ϕ) ∈ H × F X |ϵ 2 2 such that h ∈ BR(ϕ), and, ϕ ∈ BR(h). When it exists, a Pure Nash Equilibrium is a state of the game in which no player has any incentive to modify its strategy. In our setting, this simultaneously means that no attack could better fool the current classifier, and that the classifier is optimal for the current attack. Remark. All the definitions in this section assume a deterministic regime, i.e. that neither the Defender nor the Adversary use randomization, hence the notion of Pure Nash Equilibrium in the game theory terminology. The randomized regime will be studied in Section B.5. Trivial solution and Regularized Adversary Trivial Nash equilibrium. Our current definition of the problem implies that the Adversary has perfect information on the dataset distribution and the classifier. It also has unlimited computational power and no constraint on the attack except on the size of the perturbation. 
Going back to the example of the autonomous car, this would mean that the Adversary can modify every single image that the camera may receive during any trip, which is highly unrealistic. The Adversary has no downside to attacking, even when the attack is unnecessary, e.g. if the attack cannot work or if the point is already misclassified. This type of behavior for the Adversary can lead to the existence of a pathological (and trivial) Nash Equilibrium as demonstrated in Figure B.1 for the uni-dimensional setting with Gaussian distributions. The unbounded Adversary moves every point toward the decision boundary (each time maximizing the perturbation budget), and the Defender cannot do anything to mitigate the damage. In this case the decision boundary for the Optimal Bayes Classifier remains unchanged, even though both curves have been moved toward the center, hence a trivial equilibrium. In the remaining of this work, we show that such an equilibrium does not exist as soon as there is a small restraint on the Adversary's strength, i.e. as soon as it is not perfectly indifferent to produce unnecessary perturbations. (left) and with three different attacks: no penalty (second drawing), with mass penalty (third) and with norm penalty (fourth). On all figures blue area on the left of the axis is P h (ϵ 2 ) and red area on the right is N h (ϵ 2 ). Regularized Adversary. To mitigate the Adversary strength, we introduce a penalization term: inf h∈H sup ϕ∈(F X |ϵ 2 ) 2 [R adv (h, ϕ) -λ Ω(ϕ)] R Ω adv (h, ϕ) . (B.4) The penalty function Ω represents the limitations on the Adversary's budget, be it because of computational resources or to avoid being detected. λ ∈ (0, 1) is some regularization weight. In this paper, we study two types of penalties: the mass penalty Ω mass , and the norm penalty Ω norm . From a computer-security point of view, the first limitation that comes to mind is to limit the number of queries the Adversary can send to the classifier. In our distributional setting, this boils down to penalizing the mass of points that the function ϕ moves. Hence we define the mass penalty as: Ω mass (ϕ) := E Y ∼ν E X∼µ Y [1{X ̸ = ϕ Y (X)}] . (B.5) The mass penalty discourages the Adversary from attacking too many points by penalizing the overall mass of transported points. The second limitation we consider penalizes the expected norm under ϕ: Ω norm (ϕ) := E Y ∼ν E X∼µ Y [∥X -ϕ Y (X)∥ 2 ] . (B.6) This regularization is very common in both the optimization and adversarial example communities. In particular, it is used by Carlini & Wagner [START_REF] Carlini | Towards evaluating the robustness of neural networks[END_REF] to compute the eponymous attack 1 . In the following, we denote BR Ωmass (resp. BR Ωnorm ) the best responses for the Adversary w.r.t the mass (resp. norm) penalty. Section B.4 shows that whatever penalty the Adversary has, no Pure B.4 Deterministic regime Nash Equilibrium exists. We characterize the best responses for each player, and show that they can never satisfy Definition B.2. B.4 Deterministic regime Notations. Let h ∈ H, we denote P h := {x ∈ X | h(x) = 1}, and N h := {x ∈ X | h(x) = -1} respectively the set of positive and negative outputs of h. We also denote the set of attackable points from the positive outputs P h (δ) := {x ∈ P h | ∃z ∈ N h and ∥z -x∥ 2 ≤ δ}, and N h (δ) likewise. Adversary's best response. Let us first present the best responses of the Adversary under respectively the mass penalty and the norm penalty. 
Both best responses share a fundamental behavior: the optimal attack will only change points that are close enough to the decision boundary. This means that, when the Adversary has no chance of making the classifier change its decision about a given point, it will not attack it. However, for the norm penalty all attacked points are projected on the decision boundary, whereas with the mass penalty the attack moves the points across the border. Lemma B.1. Let h ∈ H and ϕ ∈ BR Ωmass (h). Then the following assertion holds: ϕ 1 (x) ∈ (P h ) ∁ if x ∈ P h (ϵ 2 ) ϕ 1 (x) = x otherwise. Where (P h ) ∁ , the complement of P h in X . ϕ -1 is characterized symmetrically. Lemma B.2. Let h ∈ H and ϕ ∈ BR Ωnorm (h). Then the following assertion holds: ϕ 1 (x) = π(x) if x ∈ P h (ϵ 2 ) x otherwise. Where π is the orthogonal projection on (P h ) ∁ . ϕ -1 is characterized symmetrically. These best responses are illustrated in Figure B.1 with two uni-dimensional Gaussian distributions. For the mass penalty, µ 1 is set to 0 in P h (ϵ 2 ), and this mass is transported into N h (ϵ 2 ). The symmetric holds for µ -1 . After attack, we now have µ 1 (P h (ϵ 2 )) = 0, so a small value of µ -1 in P h (ϵ 2 ) suffices to make it dominant, and that zone will now be classified -1 by the Optimal Bayes Classifier. For the norm penalty, the part of µ 1 that was in P h (ϵ 2 ) is transported on a Dirac distribution at the decision boundary. Similarly to the mass penalty, the best response now predicts -1 for the zone P h (ϵ 2 ). Remark. In practice, it might be computationally hard to generate the exact best response for the norm penalty, i.e. the projection on the decision boundary. That will happen for example if this boundary is very complex (e.g. highly non-smooth), or when X is in a high dimensional space. To keep the attack tractable, the Adversary will have to compute an approximated best response by allowing the projection to reach the point within a small ball around the boundary. This means that the best responses of the norm penalty and the mass penalty problems will often match. Defender's best response. At a first glance, one would suspect that the best response for the Defender ought to be the Optimal Bayes Classifier for the transported distribution. However, it is only well defined if the conditional distributions admit a probability density function. This might not always hold here for the transported distribution. Nevertheless, we show that there is a property, shared by the Optimal Bayes Classifier when defined, that always holds for the Defender's best response. Lemma B.3. Let us consider ϕ ∈ F X |ϵ 2 2 . If we take h ∈ BR(ϕ), then for y = 1 (resp. y = -1), and for any B ⊂ P h (resp. B ⊂ N h ) one has P(Y = y|X ∈ B) ≥ P(Y = -y|X ∈ B) with Y ∼ ν and for all y ∈ Y, X|(Y = y) ∼ ϕ y #µ y . In particular, when ϕ 1 #µ 1 and ϕ -1 #µ -1 admit probability density functions, Lemma B.3 simply means that h is the Optimal Bayes Classifier for the distribution (ν, ϕ 1 #µ 1 , ϕ -1 #µ -1 ) 2 . We can now state our main theorem, as well as two of its important consequences. Theorem B.1 (Non-existence of a pure Nash equilibrium). In the zero-sum game (Eq. B.4) with λ ∈ (0, 1) and penalty Ω ∈ {Ω mass , Ω norm }, there is no Pure Nash Equilibrium. Consequence 1. (No free lunch for transferable attacks) To understand this statement, remark that, thanks to weak duality, the following inequality always holds: sup ϕ∈(F X |ϵ 2 ) 2 inf h∈H R Ω adv (h, ϕ) ≤ inf h∈H sup ϕ∈(F X |ϵ 2 ) 2 R Ω adv (h, ϕ). 
On the left side problem (sup-inf), the Adversary looks for the best strategy ϕ against any unknown classifier. This is tightly related to the notion of transferable attacks (see e.g. [START_REF] Tramèr | The Space of Transferable Adversarial Examples[END_REF]), which refers to attacks successful against a wide range of classifiers. On the right side (our) problem (inf-sup), the Defender tries to find the best classifier under any possible attack, whereas the Adversary plays in second and specifically attacks this classifier. As a consequence of Theorem B.3, the inequality is always strict: sup ϕ∈(F X |ϵ 2 ) 2 inf h∈H R Ω adv (h, ϕ) < inf h∈H sup ϕ∈(F X |ϵ 2 ) 2 R Ω adv (h, ϕ). This means that both problems are not equivalent. In particular, an attack designed to succeed against any classifier (i.e. a transferable attack) will not be as good as an attack tailored for a given classifier. Hence she has to trade-off between effectiveness and transferability of the attack. Consequence 2. (No deterministic defense may be proof against every attack) Let us consider the state-of-the-art defense which is Adversarial Training [START_REF] Goodfellow | Explaining and Harnessing Adversarial Examples[END_REF][START_REF] Madry | Towards Deep Learning Models Resistant to Adversarial Attacks[END_REF]. The idea is to compute an efficient attack ϕ, and train the classifier on created adversarial examples, in order to move the decision boundary and make the classifier more robust to new perturbations by ϕ. B.5 Randomization matters To be fully efficient, this method requires that ϕ remains an optimal attack on h even after training. Our theorem shows that it is never the case: after training our classifier h to become (h ′ ) robust against ϕ, there will always be a different optimal attack ϕ ′ that is efficient against h ′ . Hence Adversarial Training will never achieve a perfect defense. B.5 Randomization matters As we showed that there is no Pure Nash Equilibrium, no deterministic classifier may be proof against every attack. We would therefore need to allow for a wider class of strategies. A natural extension of the game would thus be to allow randomization for both players, who would now choose a distribution over pure strategies, leading to this game: inf η∈P(H) sup φ∈P (F X |ϵ 2 ) 2 E h∼η ϕ∼φ R Ω adv (h, ϕ) . (B.7) Without making further assumptions on this game (e.g. compactness), we cannot apply known results from game theory (e.g. Sion theorem) to prove the existence of an equilibrium. These assumptions would however make the problem loose much generality, and do not hold here. Randomization matters. Even without knowing if an equilibrium exists in the randomized setting, we can prove that randomization matters. More precisely we show that any deterministic classifier can be outperformed by a randomized one in terms of the worst case adversarial risk. To do so we simplify Equation B.7 in two ways: 1. We do not consider the Adversary to be randomized, i.e. we restrict the search space of the Adversary to (F X ) 2 instead of P (F X ) 2 . This condition corresponds to the current state-of-the-art in the domain: to the best of our knowledge, no efficient randomized adversarial example attack has been designed (and so is used) yet. 2. We only consider a subclass of randomized classifiers, called mixtures, which are discrete probability measures on a finite set of classifiers. We show that this kind of randomization is enough to strictly outperform any deterministic classifier. 
We will discuss later the use of more general randomization (such as noise injection) for the Defender. Let us now define a mixture of classifiers. Definition B.3 (Mixture of classifier). Let n ∈ N, h = (h 1 , ..., h n ) ∈ H n , and q ∈ P({1, ..., n}). A mixed classifier of h by q is a mapping m q h from X to P(Y) such that for all x ∈ X , m q h (x) is the discrete probability distribution that is defined for all y ∈ Y as follows: m q h (x)(y) := E i∼q [1{h i (x) = y}]. We call such a mixture a mixed strategy of the Defender. Given some x ∈ X , this amounts to picking a classifier h i from h at random following the distribution q, and use it to output the predicted class for x, i.e. h i (x). Note that a mixed strategy for the Defender is a non deterministic algorithm, since it depends on the sampling one makes on q. Hence, even if the attacks are defined in the same way as before, the Adversary now needs to maximize a new objective function which is the expectation of the adversarial risk under the distribution m q h . It writes as follows: E Y ∼ν E X∼ϕ Y #µ Y E Ŷ ∼m q h (X) 1 Ŷ ̸ = Y -λ Ω(ϕ). (B.8) We also write R Ω adv to mean the left part of Equation (B.8), when it is clear from context that the Defender uses a mixed classifier. Using this new set of strategies for the Defender, we can study whether mixed classifiers outperform deterministic ones, and how to efficiently design them. Mixed strategy. We demonstrate that the efficiency of any deterministic defense can be improved using a simple mixed strategy. This method presents similarities with the notions of fictitious play [START_REF] Brown | Iterative solution of games by fictitious play[END_REF] in game theory, and boosting in machine learning [START_REF] Freund | A Decision Theoretic Generalization of On-Line Learning and an Application to Boosting[END_REF]. Given a deterministic classifier h 1 , we combine it (via randomization) with the best response h 2 to its optimal attack. The rational behind this idea is that, by construction, efficient attacks on one of these two classifiers will not work on the other. Mixing h 1 with h 2 has two opposite consequences on the adversarial risk. On one hand, where we only had to defend against attack on h 1 , we are now also vulnerable to attacks on h 2 , so the total set of possible attacks is now bigger. On the other hand, each attack will only work part of the time, depending on the probability distribution q. If we can calibrate the weights so that attacks on important zones have a low probability of succeeding, then the average risk under attack on the mixture will be low. Toy example where a mixture outperforms AT. To better understand how randomization can work, let us look at a simple toy example. Figure B.2 illustrates a binary classification setting between two set of points. Attacking the Optimal Bayes Classifier (bold straight line) consists in moving all the points that lie between the dotted lines to the opposite side of the decision boundary (Figure B.2, left). The general tactic to defend against an attack is to change the classifier's output for points that are too close to the boundary. This can be done all the time, as in Adversar-ial Training (where we move the decision boundary to incorporate adversarial examples), or part of the time as in a randomized algorithm (so that the attack only works with a given probability). 
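Before going through the figure in detail, here is a minimal sketch of the randomized defense of Definition B.3, in the two-classifier form used throughout this section. The class name and interface are illustrative; h1 and h2 stand for any callables returning labels in {-1, 1}.

```python
import numpy as np

class MixedClassifier:
    """Mixture m_q_h of Definition B.3: on each query, sample an index i ~ q and
    answer with h_i(x)."""

    def __init__(self, classifiers, weights, seed=None):
        self.classifiers = list(classifiers)
        self.weights = np.asarray(weights, dtype=float)
        self.weights /= self.weights.sum()
        self.rng = np.random.default_rng(seed)

    def __call__(self, x):
        i = self.rng.choice(len(self.classifiers), p=self.weights)
        return self.classifiers[i](x)

    def error_probability(self, x, y):
        """P(m_q_h(x) != y), the quantity the Adversary maximizes in (B.8)."""
        return float(sum(w for w, h in zip(self.weights, self.classifiers) if h(x) != y))

# Two-classifier mixed strategy with weights (alpha, 1 - alpha), as analyzed below:
# m = MixedClassifier([h1, h2], [alpha, 1 - alpha])
```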
When we use Adversarial Training for the star points (Figure B.2, middle), we change the output on the blue zone, so that 2 of the star (squared) points cannot be successfully attacked anymore. But in exchange, the dilation of the new boundary can now be attacked. For Adversarial Training to work, we need the number of new potential attacks (i.e. the points that are circled, 2 crosses in the dilation and 2 stars that are close to the new boundary) to be smaller than the number of attacks we prevent (the squared points, 2 blue ones that an attack would send in the blue zone, and 3 red points that are far from the new decision boundary). Here we prevent 5 attacks at the cost of 4 new ones, so the Adversarial Training improves the total score from 8 to 7. Similarly, we observe what happens for the randomized defense (Figure B.2, right). We mix the Optimal Bayes Classifier with the best response to attacking all the points. We get a classifier that is determinsitic outside the gray area, and random inside it 3 . If the first classifier has a weight α = 0.5, 6 of the old attacks now succeed only with probability 0.5 (crosses between the dotted lines), whereas 3 new attacks are created (stars outside of the gray area) that succeed with probability 0.5 also. At the end, the average rate of successful attacks is 6.5, where adversarial training previously achieved 7. More formally, Theorem B.4 shows that whatever penalty we consider, a deterministic classifier can always be outperformed by a randomized algorithm. We now can state our second main result: randomization matters. andm q h is the mixture of h by q. A similar result holds when Ω = Ω norm (see supplementary materials). Theorem B.2. (Randomization matters) Let us consider h 1 ∈ H, λ ∈ (0, 1), Ω = Ω mass , ϕ ∈ BR Ω (h 1 ) and h 2 ∈ BR(ϕ). Then for any α ∈ (max(λ, 1 -λ), 1) and for any ϕ ′ ∈ BR Ω (m q h ) one has R Ω adv (m q h , ϕ ′ ) < R Ω adv (h 1 , ϕ). Where h = (h 1 , h 2 ), q = (α, 1 -α), Remark Note that depending on the initial hypothesis h 1 and the conditional distributions µ 1 and µ -1 , the gap between R Ω adv (m q h , ϕ ′ )and R Ω adv (h 1 , ϕ) could vary. Hence, with additional conditions on h 1 , µ 1 and µ -1 , we could make the gap appear more explicitly. We keep the formulation general to emphasize that for any deterministic classifier, there exists a randomized one that outperforms it in terms of worst-case adversarial score. Based on Theorem B. q ← (1 -α, α) q ← (h 1 , h 2 ) return m q h Comparison to fictitious play. Contrary to classical algorithms such as Fictitious play that also generates mixtures of classifiers, and whose theoretical guarantees rely on the existence of a Mixed Nash Equilibrium, the performance of our method is ensured by Theorem B.4 to be at least as good as the classifier it uses as a basis. Moreover, the implementation of Fictitious Play would be impractical on the high dimensional datasets we consider, due to its computational costs. B.7 Discussion & Conclusion Finally, is there a classifier that ensures optimal robustness against all adversarial attacks? We gave a negative answer to this question in the deterministic regime, but part of the question remains open when considering randomized algorithms. We demonstrated that randomized defenses are more efficient than deterministic ones, and devised a simple method to implement them. Game theoretical point of view. There remains to study whether an Equilibrium exists in the Randomized regime. 
This question is appealing from a theoretical point of view, and requires to investigate the space of randomized Adversaries P((F X ) 2 ). The characterization of this space is not straightforward, and would require strong results in the theory of optimal transport. A possible research direction is to quotient the space (F X ) 2 so as to simplify the search in P((F X ) 2 ) and the characterization of the Adversary's best responses. The study of this equilibrium is tightly related to that of the value of the game, which would be interesting for obtaining min-max bounds on the accuracy under attack, as well as certificates of robustness for a set of classifiers. Advocating for more provable defenses. Although the experimental results show that our mixture of classifiers outperforms Adversarial Training, our algorithm does not provide guarantees in terms of certified accuracy. As the literature on adversarial attacks and defenses demonstrated, better attacks always exist. This is why, more theoretical works need to be done to prove the robustness of a mixture created from this particular algorithm. More generally, our work advocates for the study of mixtures as a provable defense against adversarial attacks. One could, for example, build upon the connection between mixtures and noise injection to investigate a broader range of randomized strategies for the Defender, and devise certificates accordingly. Improving Boosted Adversarial Training. From an algorithmic point of view, BAT can be improved in several ways. For instance, the weights can be learned while choosing the new classifier for the mixture. This could lead to an improved accuracy under attack, but would lack some theoretical justifications that still need to be set up. Finally, tighter connections with standard boosting algorithms could be established to improve the analysis of BAT. B.8 Omitted proofs and Additional results Notations. Let us suppose that (X ,∥.∥) is a normed vector space. B ∥.∥ (x, ϵ) = {z ∈ X | ∥x -z∥ ≤ ϵ} is the closed ball of center x and radius ϵ for the norm ∥.∥. Note that H := {h : x → sgn g(x) | g : X → R continuous}, with sgn the function that outputs 1 if g(x) > 0, -1 if g(x) < 0, and 0 otherwise. Hence for any (x, y) ∼ D, and h ∈ H one has 1{h(x) ̸ = y} = 1{g(x)y ≤ 0}. Finally, we denote ν 1 and ν -1 respectively the probabilities of class 1 and -1. Introducing remarks. Let us first note that in the paper, the penalties are defined with an ℓ 2 norm. However, Lemma B.1 and B.2 hold as long as X is an Hilbert space with dot product <|> and associated norm ||.|| = < . | . >. We first demonstrate Lemma B.2 with these general notations. Then we present the proof of Lemma B.1 that follows the same schema. Note that, for Lemma B.1, we do not even need the norm to be Hilbertian, since the core argument rely on separation property of the norm, i.e. on the property ∥x -y∥ = 0 ⇐⇒ x = y. Lemma B.2. Let h ∈ H and ϕ ∈ BR Ωnorm (h). Then the following assertion holds: ϕ 1 (x) = π(x) if x ∈ P h (ϵ 2 ) x otherwise. Where π is the orthogonal projection on (P h ) ∁ . ϕ -1 is characterized symmetrically. Proof. Let us first simplify the worst case adversarial risk for h. Recall that h = sgn(g) with g continuous. 
From the definition of adversarial risk we have: sup ϕ∈(F X |ϵ 2 ) 2 R Ωnorm adv (h, ϕ) = sup ϕ∈(F X ) 2 y=±1 ν y E X∼µy 1{h(ϕ y (X)) ̸ = y} -λ∥X -ϕ y (X)∥ -∞1{∥X -ϕ y (X)∥ > ϵ 2 } = sup ϕ∈(F X ) 2 y=±1 ν y E X∼µy 1{g(ϕ y (X))y ≤ 0} -λ∥X -ϕ y (X)∥ -∞1{∥X -ϕ y (X)∥ > ϵ 2 } = y=±1 ν y sup ϕy∈F X E X∼µy 1{g(ϕ y (X))y ≤ 0} -λ∥X -ϕ y (X)∥ -∞1{∥X -ϕ y (X)∥ > ϵ 2 } Finding ϕ 1 and ϕ 1 are two independent optimization problems, hence, we focus on characterizing ϕ 1 (i.e. y = 1). sup ϕ 1 ∈F X E X∼µ 1 1{g(ϕ 1 (X)) ≤ 0} -λ∥X -ϕ 1 (X)∥ -∞1{∥X -ϕ 1 (X)∥ > ϵ 2 } = E X∼µ 1 essup z∈B ∥.∥ (X,ϵ 2 ) 1(g(z) ≤ 0) -λ∥X -z∥ = X essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ∥x -z∥ dµ 1 (x). Let us now consider (H j ) j∈J a partition of X , we can write. sup ϕ 1 ∈F X E X∼µ 1 1{g(ϕ 1 (X)) ≤ 0} -λ∥X -ϕ 1 (X)∥ -∞1{∥X -ϕ 1 (X)∥ > ϵ 2 } = j∈J H j essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ∥x -z∥ dµ 1 (x) In particular, we consider here H 0 = P ∁ h , H 1 = P h \ P h (ϵ 2 ), and H 2 = P h (ϵ 2 ). For x ∈ H 0 = P ∁ h . Taking z = x we get 1{g(z) ≤ 0} -λ∥x -z∥ = 1. Since for any z ∈ X we have 1{g(z) ≤ 0} -λ∥x -z∥ ≤ 1, this strategy is optimal. Furthermore, for any other optimal strategy z ′ , we would have ∥x -z ′ ∥ = 0, hence z ′ = x, and an optimal attack will never move the points of H 0 = P ∁ h . For x ∈ H 1 = P h \ P h (ϵ 2 ). We have B ∥.∥ (x, ϵ 2 ) ⊂ P h by definition of P h (ϵ 2 ). Hence, for any z ∈ B ∥.∥ (x, ϵ 2 ), one gets g(z) > 0. Then 1{g(z) ≤ 0} -λ∥x -z∥ ≤ 0. The only optimal z will thus be z = x, giving value 0. Let us now consider x ∈ H 2 = P h (ϵ 2 ) which is the interesting case where an attack is possible. We know that B ∥.∥ (x, ϵ 2 )∩P ∁ h ̸ = ∅, and for any z in this intersection, 1(g(z) ≤ 0) = 1. Hence : essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ∥x -z∥ = max(1 -λ essinf z∈B ∥.∥ (x,ϵ 2 )∩P ∁ h ∥x -z∥, 0) (B.9) = max(1 -λπ B ∥.∥ (x,ϵ 2 )∩P ∁ h (x), 0) (B.10) Where π B ∥.∥ (x,ϵ 2 )∩P ∁ h is the projection on the closure of B ∥.∥ (x, ϵ 2 ) ∩ P ∁ h . Note that π B ∥.∥ (x,ϵ 2 )∩P ∁ h exists: g is continuous, so B ∥.∥ (x, ϵ 2 ) ∩ P ∁ h is a closed set, bounded, and thus compact, since we are in finite dimension. The projection is however not guaranteed to be unique since we have no evidence on the convexity of the set. Finally, let us remark that, since λ ∈ (0, 1), and ϵ 2 ≤ 1, one has 1 -λπ B ∥.∥ (x,ϵ 2 )∩P ∁ h (x) ≥ 0 for any x ∈ H 2 . Hence, on P h (ϵ 2 ), the optimal attack projects all the points on the decision boundary. For simplicity, and since there is no ambiguity, we write the projection π. Finally. Since H 0 ∪ H 1 ∪ H 2 = X , Lemma B.2 holds. Furthermore, the score for this optimal attack is: sup ϕ∈(F X |ϵ 2 ) 2 R Ωnorm adv (h, ϕ) = y=±1 ν y j∈J H j essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z)y ≤ 0} -λ∥x -z∥ dµ y (x) Since the value is 0 on P h \ P h (ϵ 2 ) (resp. on N h \ N h (ϵ 2 ) ) for ϕ 1 (resp. ϕ -1 ), one gets: =ν 1     P h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ 1 (x) + P ∁ h 1dµ 1 (x)     + ν -1     N h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ -1 (x) + N ∁ h 1dµ -1 (x)     =ν 1    P h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ 1 (x) + µ 1 (P ∁ h )    + ν -1    N h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ -1 (x) + µ -1 (N ∁ h )    = R(h) + ν 1 P h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ 1 (x) + ν -1 N h (ϵ 2 ) 1 -λ∥x -π(x)∥ dµ -1 (x) (16) holds since R(h) = P(h(X) ̸ = Y )P(g(X)Y ≤ 0) = ν 1 µ 1 (P ∁ h ) + ν -1 µ -1 (N ∁ h ). This provides an interesting decomposition of the adversarial risk into the risk without attack and the loss on the attack zone. Lemma B.1. Let h ∈ H and ϕ ∈ BR Ωmass (h). 
Then the following assertion holds: ϕ 1 (x) ∈ (P h ) ∁ if x ∈ P h (ϵ 2 ) ϕ 1 (x) = x otherwise. Where (P h ) ∁ , the complement of P h in X . ϕ -1 is characterized symmetrically. Proof. Following the same proof schema as before the adversarial risk writes as follows: sup ϕ∈(F X |ϵ 2 ) 2 R Ωmass adv (h, ϕ) = sup ϕ∈(F X ) 2 y=±1 ν y E X∼µy [1{h(ϕ y (X)) ̸ = y} -λ1{X ̸ = ϕ y (X)} -∞1{∥X -ϕ y (X)∥ > ϵ 2 }] = sup ϕ∈(F X ) 2 y=±1 ν y E X∼µy [1{g(ϕ y (X))y ≤ 0} -λ1{X ̸ = ϕ y (X)} -∞1{∥X -ϕ y (X)∥ > ϵ 2 }] = y=±1 ν y sup ϕy∈F X E X∼µy [1{g(ϕ y (X))y ≤ 0} -λ1{X ̸ = ϕ y (X)} -∞1{∥X -ϕ y (X)∥ > ϵ 2 }] Finding ϕ 1 and ϕ 1 are two independent optimization problem, hence we focus on characterizing ϕ 1 (i.e. y = 1). sup ϕ 1 ∈F X E X∼µ 1 1{g(ϕ 1 (X)) ≤ 0} -λ1{X ̸ = ϕ 1 (X)} -∞1{∥X -ϕ 1 (X)∥ > ϵ 2 } = E X∼µ 1 essup z∈B ∥.∥ (X,ϵ 2 ) 1{g(z) ≤ 0} -λ1{X ̸ = z} = X essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ1{x ̸ = z} dµ 1 (x). Let us now consider (H j ) j∈J a partition of X , we can write. sup ϕ 1 ∈F X E X∼µ 1 1{g(ϕ 1 (X)) ≤ 0} -λ1{X ̸ = ϕ 1 (X)} -∞1{∥X -ϕ 1 (X)∥ > ϵ 2 } = j∈J H j essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ1{x ̸ = z} dµ 1 (x) In particular, we can take H 0 = P ∁ h , H 1 = P h \ P h (ϵ 2 ), and H 2 = P h (ϵ 2 ). For x ∈ H 0 = P ∁ h or x ∈ H 1 = P h \ P h (ϵ 2 ) . With the same reasoning as before, any optimal attack will choose ϕ 1 (x) = x. Let x ∈ H 2 = P h (ϵ 2 ). We know that B ∥.∥ (x, ϵ 2 ) ∩ P ∁ h ̸ = ∅, and for any z in this intersection, one has g(z) ≤ 0 and z ̸ = x. Hence essup z∈B ∥.∥ (x,ϵ 2 ) 1{g(z) ≤ 0} -λ1{z ̸ = x} = max(1 -λ, 0). Since λ ∈ (0, 1) one has 1{g(z) ≤ 0} -λ1{z ̸ = x} = 1 -λ for any z ∈ B ∥.∥ (x, ϵ 2 )∩P ∁ h . Then any function that given a x ∈ X outputs ϕ 1 (x) ∈ B ∥.∥ (x, ϵ 2 )∩P ∁ h is optimal on H 2 . Finally. Proof. We reason ad absurdum. Let us consider y = 1, the proof for y = -1 is symmetrical. Since H 0 ∪ H 1 ∪ H 2 = X , Let us suppose that there exists C ⊂ P h such that ν -1 ϕ -1 #µ -1 (C) > ν 1 ϕ 1 #µ 1 (C). We can then construct h 1 as follows: h 1 (x) = h(x) if x / ∈ C -1 otherwise. Since h and h 1 are identical outside C, the difference between the adversarial risks of h and h 1 writes as follows: R Ωmass adv (h, ϕ) -R Ωmass adv (h 1 , ϕ) = y=±1 ν y C 1{h(x) ̸ = y} -1{h 1 (x) ̸ = y} d(ϕ y #µ y )(x) =ν -1 1{h(x) = 1}ϕ -1 #µ -1 (C) -ν 1 1{h 1 (x) ̸ = 1}ϕ 1 #µ 1 (C) =ν -1 ϕ -1 #µ -1 (C) -ν 1 ϕ 1 #µ 1 (C) Since by hypothesis ν -1 ϕ -1 #µ -1 (C) > ν 1 ϕ 1 #µ 1 (C) the difference between the adversarial risks of h and h 1 is strictly positive. This means that h 1 gives strictly better adversarial risk than the best response h. Since, by definition h is supposed to be optimal, this leads to a contradiction. Hence Lemma B.3 holds. Additional Result. Let us assume that there is a probability measure ζ that dominates both ϕ 1 #µ 1 and ϕ -1 #µ -1 . Let us consider ϕ ∈ F X |ϵ 2 2 . If we take h ∈ BR(ϕ), then h is the Bayes Optimal Classifier for the distribution characterized by (ν, ϕ 1 #µ 1 , ϕ -1 #µ -1 ). Proof. For simplicity, we denote f 1 = (dϕ 1 #µ 1 ) dζ and f -1 = d(ϕ -1 #µ -1 ) dζ the Radon-Nikodym derivatives of ϕ 1 #µ 1 and ϕ -1 #µ -1 w.r.t. ζ. The best response h minimizes adversarial risk under attack ϕ. This minimal risk writes: inf h∈H R Ωmass adv (h, ϕ) = inf h∈H y=±1 ν y E x∼µy [1{h(ϕ y (x)) ̸ = y}] -λ Ω(ϕ). Since the the penalty function does not depend on h, it suffices to seek inf Finally, since the integral is bounded we get: inf h∈H X y=±1 ν y 1{h(x) ̸ = y}f y (x) dζ(x) := u + v | (u, v) ∈ U × X and ∥v∥ p ≤ ϵ 2 . 
We can construct h 2 as follows:

h 2 (x) = -h 1 (x) if x ∈ U, h 1 (x) otherwise.

This means that h 2 changes the class of all points in U and does not change the rest, compared to h 1 . Then, taking α ∈ (0, 1), we can define the mixture m q h and ϕ ′ ∈ BR Ω (m q h ). We aim to find a condition on α so that the score of m q h is lower than the score of h 1 . Let us recall that

R Ωmass adv (m q h , ϕ ′ ) = ν 1 X essup z∈B ∥.∥ (x,ϵ 2 ) α1{h 1 (z) = -1} + (1 -α)1{h 2 (z) = -1} -λ1{x ̸ = z} dµ 1 (x) + ν -1 X essup z∈B ∥.∥ (x,ϵ 2 ) α1{h 1 (z) = 1} + (1 -α)1{h 2 (z) = 1} -λ1{x ̸ = z} dµ -1 (x).

The only terms that may vary between the score of h 1 and the score of m q h are the integrals on U , U ⊕ ϵ 2 ∩ P h 1 and ϕ -1 -1 (U ), the inverse image of U by ϕ -1 . These sets represent respectively the points we mix on, the points that may become attacked when changing from h 1 to m q h (by moving them onto U ), and the ones that were already attacked for h 1 (by moving them onto U ). Hence, for simplicity, we only write those terms. Furthermore, we denote U + := U ⊕ ϵ 2 ∩ P h 1 \ U , U -:= ϕ -1 -1 (U ) and recall U := P h 1 (ϵ 2 ). One can refer to Figure B.3 for a visual interpretation of these sets. We can now evaluate the worst-case adversarial score for h 1 restricted to the above sets. Thanks to Lemma B.1 that characterizes ϕ, we can write

R Ωmass adv (h 1 , ϕ) |U, U + , U - = (1 -λ) × ν 1 µ 1 (U ) + ν -1 µ -1 (U ) + 0 × ν 1 µ 1 U + + ν -1 µ -1 U + + ν 1 µ 1 U - + (1 -λ) × ν -1 µ -1 U - .

Similarly, we can write the worst-case adversarial score of the mixture on the sets we consider. Note that the max operator comes from the fact that the adversary has to choose between attacking the zone and simply taking advantage of the error due to randomization.

R Ωmass adv (m q h , ϕ ′ ) |U, U + , U - = max(1 -α, 1 -λ) × ν 1 µ 1 (U ) + max(α, 1 -λ) × ν -1 µ -1 (U ) + max(0, 1 -α -λ) × ν 1 µ 1 U + + ν -1 µ -1 U + + ν 1 µ 1 U - + max(0, α -λ) × ν -1 µ -1 U - .

Computing the difference between these two terms, we get the following

R Ωmass adv (h 1 , ϕ) -R Ωmass adv (m q h , ϕ ′ ) (B.11)
= (1 -λ -max(1 -α, 1 -λ)) × ν 1 µ 1 (U ) (B.12)
+ (1 -max(α, 1 -λ)) × ν -1 µ -1 (U ) (B.13)
-max(0, 1 -α -λ) × ν 1 µ 1 U + (B.14)
+ (1 -λ -max(0, α -λ)) × ν -1 µ -1 U - (B.15)

Let us now simplify Equation (B.11) using additional assumptions.
• First, we have that Equation (B.13) is equal to min(1 -α, λ)µ -1 (U )ν -1 > 0. Thus, a sufficient condition for the difference between the adversarial scores to be positive is to have the other terms greater than or equal to 0.
• To have Equation (B.12) ≥ 0 we can always set max(1 -α, 1 -λ) = 1 -λ. This gives us α ≥ λ.
• Also note that to get (B.14) ≥ 0, we can force max(1 -α -λ, 0) = 0. This gives us α ≥ 1 -λ.
• Finally, since α ≥ λ, we have that 1 -λ -max(0, α -λ) = 1 -α, thus Equation (B.15) > 0.

With the above simplifications, we have (B.11) > 0 for any α > max(λ, 1 -λ), which concludes the proof.

Theorem B.5. (Randomization matters) Let us consider h 1 ∈ H, λ ∈ (0, 1), Ω = Ω norm , ϕ ∈ BR Ω (h 1 ) and h 2 ∈ BR(ϕ). Let us take δ ∈ (0, ϵ 2 ); then for any α ∈ (max(1 -λδ, λ(ϵ 2 -δ)), 1) and for any ϕ ′ ∈ BR Ω (m q h ) one has

R Ωnorm adv (m q h , ϕ ′ ) < R Ωnorm adv (h 1 , ϕ),

where h = (h 1 , h 2 ), q = (α, 1 -α), and m q h is the mixture of h by q.

Proof. Let us take U ⊂ P h 1 (ϵ 2 ) such that min x∈U ∥x -π P h \P h (ϵ 2 ) (x)∥ = δ ∈ (0, ϵ 2 ). We construct h 2 as follows:

h 2 (x) = -h 1 (x) if x ∈ U, h 1 (x) otherwise.

This means that h 2 changes the class of all points in U and does not change the rest.
Let α ∈ (0, 1), the corresponding mixture m q h , and ϕ ′ ∈ BR Ω (m q h ). We will find a condition on α so that the score of m q h is lower than the score of h 1 . Recall that R Ωnorm adv (m q h , ϕ ′ ) = ν 1 X essup z∈B ∥.∥ (x,ϵ 2 ) α1{h 1 (z) = -1} + (1 -α)1{h 2 (z) = -1} -λ∥x -z∥ dµ 1 (x) + ν -1 X essup z∈B ∥.∥ (x,ϵ 2 ) α1{h 1 (z) = 1} + (1 -α)1{h 2 (z) = 1} -λ∥x -z∥ dµ -1 (x). As we discussed in proof of Theorem B.4, the only terms that may vary between the score of h 1 and the score of m q h are the integrals on U , U ⊕ ϵ 2 ∩ P h 1 and ϕ -1 -1 (U ). Hence, for simplicity, we only write those terms. Furthermore, we denote U + := U ⊕ ϵ 2 ∩ P h 1 \ U, U -:= ϕ -1 -1 (U ) and P ϵ 2 := P h 1 (ϵ 2 ). One can refer to Figure B.4 for a visual interpretation of this ensembles. We can now evaluate the worst case adversarial score for h 1 restricted to the above sets. Thanks to Lemma B.2 that characterizes ϕ, we can write R Ωnorm adv (h 1 , ϕ) = ν 1 U 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) + ν -1 µ -1 (U ) + ν 1 U + \Pϵ 2 0 dµ 1 (x) + ν -1 µ -1 U + \ P ϵ 2 + ν 1 U + ∩Pϵ 2 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) + ν -1 µ -1 U + ∩ P ϵ 2 + ν 1 µ 1 U -+ ν -1 U - 1 -λ∥x -π U (x)∥ dµ -1 (x). Similarly we can evaluate the worst case adversarial score for the mixture, R Ωnorm adv (m q h , ϕ ′ ) = ν 1 U max 1 -α, 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) + ν -1 U max(α, 1 -λ∥x -π U + (x)∥) dµ -1 (x) + ν 1 U + \Pϵ 2 max(0, 1 -α -λ∥x -π U (x)∥) dµ 1 (x) + ν -1 µ -1 U + \ P ϵ 2 + ν 1 U + ∩Pϵ 2 max 1 -α -λ∥x -π U (x)∥, 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) + ν -1 µ -1 U + ∩ P ϵ 2 + ν 1 µ 1 U - + ν -1 U - max 0, 1 -λ∥x -π N ∁ h 1 \U (x)∥, α -λ∥x -π U (x)∥ dµ -1 (x). Note that we need to take into account the special case of the points in the dilation that were already in the attacked zone before, and that can now be attacked in two ways, either by projecting on U -but that works with probability α, since the classification on U is now randomized -or by projecting on P ∁ h 1 , which works with probability 1 but may use more distance and so pay more penalty. We can now compute the difference between both scores. R Ωnorm adv (h 1 , ϕ) -R Ωnorm adv (m q h , ϕ ′ ) (B.16) = ν 1 U 1 -λ∥x -π P ∁ h 1 (x)∥ -max 1 -α, 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) (B.17) + ν -1 U 1 -max(α, 1 -λ∥x -π U + (x)∥)dµ -1 (x) (B.18) -ν 1 U + \Pϵ 2 max(1 -α -λ∥x -π U (x)∥, 0)dµ 1 (x) (B.19) + ν 1 U + ∩Pϵ 2 1 -λ∥x -π P ∁ h 1 (x)∥ -max 1 -α -λ∥x -π U (x)∥, 1 -λ∥x -π P ∁ h 1 (x)∥ dµ 1 (x) (B.20) + ν -1 U - 1 -λ∥x -π U (x)∥ -max 0, 1 -λ∥x -π N ∁ h 1 \U (x)∥, α -λ∥x -π U (x)∥ dµ -1 (x). (B.21) Let us simplify Equation (B.16) using using additional hypothesis: • First, note that Equation (B.18)> 0. Then a sufficient condition for the difference to be strictly positive is to ensure that other lines are ≥ 0. • In particular to have (B.17) ≥ 0 it is sufficient to have for all x ∈ U max 1 -α, 1 -λ∥x -π P ∁ h 1 (x)∥ = 1 -λ∥x -π P ∁ h 1 (x)∥. This gives us α ≥ λ(ϵ 2 -δ) ≥ λ max x∈U ∥x -π P ∁ h 1 (x)∥. • Similarly, to have (B. [START_REF] Boucheron | Concentration inequalities: A nonasymptotic theory of independence[END_REF]) ≥ 0, we should set for all x ∈ U + \ P ϵ 2 α ≥ 1 -λ∥x -π U (x)∥. Since min x∈U + \Pϵ 2 ∥x -π U (x)∥ = δ, we get the condition α ≥ 1 -λδ. • Finally (B.21) ≥ 0, since by definition of U -, for any x ∈ U -we have ∥x -π N ∁ h 1 \U (x)∥ ≥ ∥x -π U (x)∥. Finally, by summing all these simplifications, we have (B.16) > 0. 
Hence the result holds for any α > max(1 -λδ, λ(ϵ 2 -δ)).

B.9 Experimental results

In the experimental section, we consider X = [0, 1] 3×32×32 to be the set of images, and Y = {1, ..., 10} or Y = {1, ..., 100} according to the dataset at hand.

Adversarial attacks

Let (x, y) ∼ D and h ∈ H. We consider the following attacks:

(i) ℓ ∞ -PGD attack. In this scenario, the Adversary maximizes the loss objective function under the constraint that the ℓ ∞ norm of the perturbation remains bounded by some value ϵ ∞ . To do so, it recursively computes:

x t+1 = Π B ∥.∥ (x,ϵ ∞ ) [ x t + β sgn(∇ x L(h(x t ), y)) ] (B.22)

where L is some differentiable loss (such as the cross-entropy), β is a gradient step size, and Π S is the projection operator on S. One can refer to [START_REF] Madry | Towards Deep Learning Models Resistant to Adversarial Attacks[END_REF] for implementation details.

(ii) ℓ 2 -C&W attack. In this attack, the Adversary optimizes the following objective:

arg min τ ∈X ∥τ ∥ 2 + λ × cost(x + τ ) (B.23)

where cost(x + τ ) < 0 if and only if h(x + τ ) ̸ = y. The authors use a change of variable τ = (1/2)(tanh(w) -x + 1) to ensure that x + τ ∈ X , a binary search to optimize the constant λ, and Adam or SGD to compute an approximate solution. One should refer to [START_REF] Carlini | Towards evaluating the robustness of neural networks[END_REF] for implementation details.

Experimental setup

Datasets. To illustrate our theoretical results we ran experiments on the CIFAR10 and CIFAR100 datasets. See [START_REF] Krizhevsky | Learning multiple layers of features from tiny images[END_REF] for more details.

Classifiers. All the classifiers we use are WideResNets (see [108]) with 28 layers, a widening factor of 10, a dropout factor of 0.3 and LeakyReLU activations with a 0.1 slope.

Machine used. 6 Tesla V100-SXM2-32GB GPUs.

Experimental details

Sanity checks for adaptive attacks. In [START_REF] Tramer | On adaptive attacks to adversarial example defenses[END_REF], the authors give a number of sanity checks and good practices to design adaptive attacks. We follow them; here is the information for Adaptive-ℓ ∞ -PGD:
• We compute the gradient of the loss using the expected logits over the mixture.
• The attack is repeated 3 times with random starts, and we keep the best perturbation over all runs.
• Adding a constant to the logits does not change the attack.
• The loss does not fluctuate at the end of the optimization process.

Selecting the first element of the mixture. Our algorithm creates classifiers in a boosting fashion, starting with an adversarially trained classifier. There are several ways of selecting this first element of the mixture: use the classifier with the best accuracy under attack (option 1, called bestAUA), or rather the one with the best natural accuracy (option 2). Table B.6 compares both options. Besides the fact that either of the two mixtures outperforms the first classifier, we see that the first option always outperforms the second. In fact, when taking option 1 (bestAUA = True) the accuracy under ℓ ∞ -PGD attack of the mixture is 3% better than with option 2 (bestAUA = False). One can also note that both mixtures have the same natural accuracy (0.80), which makes the choice of option 1 natural.
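To make the Adaptive-ℓ ∞ -PGD procedure above concrete, the following is a minimal PyTorch-style sketch of the attack run against a mixture of classifiers, with the gradient taken through the expected logits as stated in the sanity checks. It is only an illustrative sketch under our own naming conventions: the function adaptive_linf_pgd, its signature and its default values are ours and are not taken from the original implementation.

import torch
import torch.nn.functional as F

def adaptive_linf_pgd(models, weights, x, y, eps, step, n_iter=20, n_restarts=3):
    # models: list of torch.nn.Module (each assumed in eval() mode)
    # weights: mixture weights q (summing to 1); x: batch of images in [0, 1]
    # (no gradient required on x); y: labels; eps, step: L-inf budget and step size
    best_adv = x.clone()
    best_loss = torch.full((x.shape[0],), -float("inf"), device=x.device)
    for _ in range(n_restarts):                      # random restarts (3 in the text)
        delta = (2 * torch.rand_like(x) - 1) * eps   # random start inside the L-inf ball
        for _ in range(n_iter):
            delta.requires_grad_(True)
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            # adaptive part: gradient through the expected logits of the mixture
            logits = sum(w * m(x_adv) for m, w in zip(models, weights))
            loss = F.cross_entropy(logits, y)
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():                    # projected ascent step, cf. (B.22)
                delta = torch.clamp(delta + step * grad.sign(), -eps, eps)
        with torch.no_grad():                        # keep the best restart per example
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            logits = sum(w * m(x_adv) for m, w in zip(models, weights))
            per_ex = F.cross_entropy(logits, y, reduction="none")
            better = per_ex > best_loss
            best_adv[better] = x_adv[better]
            best_loss[better] = per_ex[better]
    return best_adv

Since the cross-entropy on logits is invariant to adding the same constant to every logit, this sketch is also consistent with the third sanity check above.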
Extension to more than two classifiers As we mention in the main part of the paper, a mixture of more than two classifiers can be constructed by adding at each step t a new classifier trained naturally on the dataset D that contains adversarial examples against the mixture at step t -1. Since D has to be constructed from a mixture, one would have to use an adaptive attack as Adaptive-ℓ ∞ -PGD. Here is the algorithm for the extented version : C.1 Contexte & motivations Cette thèse vise à résoudre des problèmes multi-agents centralisés qui impliquent des interactions par paire entre agents. La configuration d'antennes dans un réseau cellulaire sans fil [START_REF] Siomina | Automated optimization of service coverage and base station antenna configuration in UMTS networks[END_REF] est un exemple de ces problèmes : le choix d'un paramètre pour une antenne a un impact à la fois sur sa propre qualité de signal et sur celle de chacune de ses antennes voisines en raison de l'interférence du signal. De même, dans un parc éolien, le réglage d'une éolienne a un impact non seulement sur sa propre efficacité de collecte d'énergie mais aussi sur celle de ses voisines en raison des turbulences du vent [START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF][START_REF] Van Dijk | Yaw-misalignment and its impact on wind turbine loads and wind farm power output[END_REF]. En considérant chaque antenne ou éolienne comme un agent, ces problèmes peuvent être modélisés comme un problème de bandit multi-agents (MA-MAB) [START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF] avec la connaissance d'un graphe de coordination [START_REF] Guestrin | Coordinated Reinforcement Learning[END_REF] où chaque noeud représente un agent et chaque arête représente une interaction entre deux agents. Un problème de bandit à bras multiples (MAB) est un problème de décision séquentiel où un apprenant doit choisir une action (aussi appelée bras) à chaque itération et obtient une récompense associée (éventuellement perturbée) qui informe sur la qualité de l'action choisie. Naturellement, l'apprenant ne connaît pas la distribution de la récompense pour chaque action possible. L'apprenant peut avoir des objectifs très différents, tels que maximiser les récompenses accumulées au cours du processus, ou bien en un nombre minimum d'essais et indépendamment des récompenses accumulées, déduire quelle est la meilleure action à choisir -c'est-à-dire la plus gratifiante. Par conséquent, un problème de bandit multi-agents à plusieurs bras est le cadre dans lequel plusieurs agents sont confrontés à un problème de bandit à plusieurs bras. Dans la littérature sur les bandits, on peut distinguer les bandits non structurés et les bandits structurés. Alors que le bandit non structuré considère que le fait de jouer une action et d'obtenir la récompense associée ne permet pas de déduire quoi que ce soit sur la distribution des récompenses des autres actions, le bandit structuré inclut les modèle de bandit où les récompenses des différentes actions partagent un paramètre commun [START_REF] Lattimore | Bandit Algorithms[END_REF]. 
Par exemple, une configuration de bandit structuré deja très etudiée dans la litterature est le bandit linéaire [START_REF] Auer | Using confidence bounds for exploitation-exploration trade-offs[END_REF] où la récompense associée à toute action dépend linéairement d'un vecteur paramètre inconnu θ. Par conséquent, à un moment donné, le fait de choisir une action et de recevoir la récompense qui lui est associée donne des informations sur θ et, par définition, également sur les récompenses de toutes les autres actions. Nous nous intéressons ici à de tels environnements structurés et nous présentons à ce sujet un nouveau bandit structuré multi-agents appelé Bandits Bilinéaires Graphiques. La spécificité de cet environnement réside dans l'interdépendance des récompenses obtenues par les agents voisins dans le graphe et dans l'hypothèse que ces récompenses sont bilinéaires, ce qui nous apparaît comme l'extension naturelle des récompenses linéaires lorsque les agents sont dépendants par paire. En effet, si les problèmes MA-MAB ont été étudiés dans le cadre de bandits non structurés avec des agents indépendants et dépendants (voir e.g., [START_REF] Agarwal | Multi-agent multi-armed bandits with limited communication[END_REF][START_REF] Amin | Graphical Models for Bandit Problems[END_REF][START_REF] Bargiacchi | Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems[END_REF][START_REF] Besson | Multi-player bandits revisited[END_REF][START_REF] Bistritz | Distributed multi-player bandits-a game of thrones approach[END_REF][START_REF] Heliou | Learning with bandit feedback in potential games[END_REF][START_REF] Landgren | Distributed cooperative decision making in multi-agent multi-armed bandits[END_REF][START_REF] Sankararaman | Social learning in multi agent multi armed bandits[END_REF][START_REF] Shahrampour | Multi-armed bandits in multi-agent networks[END_REF]101]), seul le cadre des bandits structurés avec des agents indépendants a été exploré (voir e.g., [START_REF] Amani | Decentralized multi-agent linear bandits with safety constraints[END_REF][START_REF] Cesa-Bianchi | A gang of bandits[END_REF][START_REF] Chan | Parallelizing contextual linear bandits[END_REF]). A travers cette thèse et les articles auxquels elle fait référence, nous voulons poser une première pierre à l'édifice. C.2 Définition du problème Bandits Bilinéaires Graphiques Stochastiques Soit G = (V, E) le graphe dirigé défini par V l'ensemble fini de noeuds représentant les agents et E l'ensemble d'arêtes représentant les interactions entre les agents. Nous supposons que si (i, j) ∈ E alors (j, i) ∈ E. Le graphe pourrait être considéré comme non dirigé mais nous supposons que les interactions entre deux voisins ne sont pas nécessairement symétriques par rapport aux récompenses obtenues, nous choisissons donc de conserver le graphe dirigé pour mettre en évidence cette asymétrie potentielle. Pour tout agent i ∈ V , nous désignons N i l'ensemble de ses agents voisins. Soit n = |V | le nombre de noeuds, m = |E| le nombre d'arêtes et X ⊂ R d un ensemble de bras fini où K = |X | désigne le nombre de bras. Le bandit bilinéaire graphique avec un graphe G et un ensemble de bras X consiste en le problème de décision séquentiel suivant : Bandits Bilinéaires Graphiques Stochastiques Pour chaque tour t > 0, 1. Chaque agent i ∈ V choisit un bras x (i) t dans X . 2. 
Ensuite, chaque agent i ∈ V reçoit une récompense bilinéaire bruitée pour chacun de ses voisins j ∈ N i : y (i,j) t = x (i)⊤ t M ⋆ x (j) t + η (i,j) t , ( Objectifs Comme nous l'avons brièvement mentionné précédemment, il existe deux principaux objectifs différents qu'un apprenant (ici l'entité centrale) peut vouloir atteindre dans un problème de bandit. Identifier la meilleure allocation. Le premier objectif que nous voulons traiter dans cette thèse est celui où l'apprenant est intéressé à trouver, en un minimum de tours, la meilleure allocations de bras (x (1) ⋆ , . . . , x (n) ⋆ ) qui maximise la récompense globale moyenne obtenue sur le graphe : (x (1) ⋆ , . . . , x (n) ⋆ ) = arg max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . Cet objectif implique que l'entité centrale ne se soucie pas de choisir une allocation sous-optimale (x (1) t , . . . , x (n) t ) à chaque instant t tant qu'elle donne suffisamment d'informations sur le paramètre inconnu M ⋆ afin de construire une estimation précise M. Cet objectif est connu sous le nom d'exploration pure ou d'identification du meilleur bras [START_REF] Audibert | Best arm identification in multi-armed bandits[END_REF][START_REF] Bubeck | Pure exploration in multi-armed bandits problems[END_REF]. Maximiser les récompenses cumulées. Le deuxième objectif que nous voulons traité est le plus souvent considéré dans la littérature sur les bandits où l'apprenant souhaite maximiser la somme des récompenses (en esperance) obtenues au cours des tours. Dans notre cas, l'entité centrale souhaite maximiser les récompenses globales cumulées, données par la formule suivante T t=1 (i,j)∈E x (i)⊤ t M ⋆ x (j) t . Alors que le premier objectif permet à l'apprenant d'être dans un cadre d'exploration pure, indépendamment des récompenses obtenues tout au long du processus, l'objectif de maximisation des récompenses cumulées nécessite un compromis entre l'exploration des différents bras possibles pour avoir une estimation précise M de M ⋆ et l'exploitation des bras qui semblent être les plus optimaux étant donné M afin d'obtenir les récompenses cumulées maximales. Pour ces deux objectifs (identification du meilleur bras ou maximisation des récompenses cumulées) et étant donné une estimation M, l'apprenant devra résoudre à un moment donné le problème d'optimisation suivant max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ Mx (j) . (C.2) En effet, pour l'identification du meilleur bras, ce problème d'optimisation doit être résolu à la fin lorsque l'apprenant veut retourner la meillieure allocation étant donné l'estimation M construite pendant la procédure d'apprentissage. Pour la maximisation des récompenses cumulées, ce problème d'optimisation peut devoir être résolu pendant la procédure d'apprentissage lorsque l'apprenant veut exploiter et renvoyer le meilleur bras articulé estimé compte tenu de sa connaissance actuelle de l'environnement qui est l'estimation construite M. La résolution de ce problème d'optimisation n'est pas triviale, aussi pour les deux objectifs nous considérons l'objectif sous-jacent commun de résolution de ce problème. C.3 Trouver la meilleure allocation lorsque la matrice est connu C.3.1 Un problème NP-Dur Nous abordons le problème de la recherche de la meilleure alloaction étant donné M ⋆ et nous le désignons comme suit : (x (1) ⋆ , . . . , x (n) ⋆ ) = arg max (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . 
(C.3) Remarquez que si le couple (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ est tel que x ⋆ = x ′ ⋆ alors trouver la meilleure allocation est trivial et la solution est d'attribuer x ⋆ à tous les noeuds. À l'inverse, si x ⋆ ̸ = x ′ ⋆ , le problème peut être plus difficile : selon le graphe G, l'allocation optimal pourrait soit être composée exclusivement du couple (x ⋆ , x ′ ⋆ ), soit être composée d'autres bras dans X . On pourrait vouloir utiliser la programmation dynamique comme dans [START_REF] Amin | Graphical Models for Bandit Problems[END_REF] pour résoudre ce problème d'optimisation, cependant dans ce cadre particulier, cela conduirait à utiliser un algorithme en temps non-polynomial. En effet, le théorème suivant indique que, même en connaissant le vrai paramètre M ⋆ , l'identification de la meilleure allocation (x (x (1) ,...,x (n) )∈X n (i,j)∈E x (i)⊤ M ⋆ x (j) . Par conséquent, étant donné la vraie matrice M ⋆ , l'apprenant n'est pas assuré de trouver en temps polynomial l'alloaction x C.3.2 Algorithmes d'approximation et guaranties théoriques Étant donné la matrice M ⋆ , l'objectif est de concevoir un algorithme qui renvoie une allocation x (1) , . . . , x (n) telle que sa récompense globale associée y = (i,j)∈E x (i)⊤ M ⋆ x (j) ait la garantie d'être proche de la récompense globale optimale y ⋆ = (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ . En d'autres termes, nous voulons trouver un paramètre d'approximation 0 < α ≤ 1 tel que, y ≥ αy ⋆ . Bien que le problème d'optimisation que nous cherchons à résoudre se trouve dans la littérature sur les Champs aléatoires de Markov lorsqu'on traite un graphe multi-labeélisé (voir e.g., [START_REF] Ajanthan | Optimization of Markov random fields in computer vision[END_REF]), à notre connaissance, les algorithmes qui donnent un rapport d'approximation sur la solution optimale n'ont pas été explorés. L'approche que nous présentons dans cette section consiste d'abord à considérer le problème localement, i.e., au niveau des arêtes. En effet, considérons deux noeuds voisins i et j dans V et seulement les récompenses liées à ces noeuds, qui sont x (i)⊤ M ⋆ x (j) et x (j)⊤ M ⋆ x (i) . En additionnant ces deux quantités, 1 on obtient x (i)⊤ M ⋆ x (j) + x (j)⊤ M ⋆ x (i) = x (i)⊤ M ⋆ + M ⊤ ⋆ x (j) qui représente la récompense entre les deux noeuds voisins (i) et (j). Une stratégie locale que l'entité centrale devrait mettre en oeuvre consiste donc à allouer (x (i) , x (j) ) = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ = (x ⋆ , x ′ ⋆ ) . Naturellement, si cette stratégie locale est facile à appliquer pour un couple de voisins (i, j), elle ne peut pas etre simultanément appliquée à tous les autres couples du graphe puisque certains d'entre eux partagent les mêmes noeuds. Cepen-Algorithm 14: Algorithm d'approximation pour notre problème NP-Dur Entrée: Graphe G = (V, E), ensemble de bras X , matrice M ⋆ (V 1 , V 2 ) = Approx-MAX-CUT(G); Trouver (x ⋆ , x ′ ⋆ ) ∈ arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ ; for i in V 1 do x (i) t = x ⋆ ; // Peut etre fait en parallèle end for i in V 2 do x (i) t = x ′ ⋆ ; // Peut etre fait en parallèle end retourner (x (1) , . . . , x (n) ) d'autres arêtes (i, j) ∈ E les bras alloués associés seront les couples sous-optimaux et non désirés (x ⋆ , x ⋆ ) ou (x ′ ⋆ , x ′ ⋆ ). 
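Pour fixer les idées, voici une esquisse minimale en Python (NumPy) de l'Algorithme 14. Il ne s'agit que d'une illustration : la routine Approx-MAX-CUT y est remplacée par une simple recherche locale (dont tout optimum local coupe au moins m/2 arêtes), et les noms de fonctions (approx_max_cut, best_pair, algorithm_14) sont hypothétiques.

import numpy as np

def approx_max_cut(n_nodes, edges):
    # Recherche locale par 1-echange ; edges contient chaque arete non orientee {i, j} une seule fois.
    side = [False] * n_nodes                # False -> V1, True -> V2
    improved = True
    while improved:
        improved = False
        for i in range(n_nodes):
            cut = sum(1 for (a, b) in edges if i in (a, b) and side[a] != side[b])
            uncut = sum(1 for (a, b) in edges if i in (a, b) and side[a] == side[b])
            if uncut > cut:                 # changer i de cote augmente la coupe
                side[i] = not side[i]
                improved = True
    V1 = [i for i in range(n_nodes) if not side[i]]
    V2 = [i for i in range(n_nodes) if side[i]]
    return V1, V2

def best_pair(X, M):
    # (x_star, x_star_prime) = argmax_{x, x'} x^T (M + M^T) x'
    S = M + M.T
    G = X @ S @ X.T                         # G[k, l] = x_k^T (M + M^T) x_l
    k, l = np.unravel_index(np.argmax(G), G.shape)
    return k, l

def algorithm_14(n_nodes, edges, X, M):
    V1, V2 = approx_max_cut(n_nodes, edges)
    k, l = best_pair(X, M)
    allocation = np.empty((n_nodes, X.shape[1]))
    allocation[V1] = X[k]                   # x_star sur les noeuds de V1
    allocation[V2] = X[l]                   # x_star_prime sur les noeuds de V2
    return allocation

En pratique, toute routine de coupe maximale approchée (par exemple celle de Goemans-Williamson) peut remplacer la recherche locale ci-dessus sans changer le reste de la procédure.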
Avant d'énoncer la garantie de cet algorithme par rapport à la récompense globale optimale, introduisons m 1 (respectivement m 2 ) le nombre d'arêtes qui vont des noeuds de V 1 (respectivement V 2 ) aux noeuds de V 1 (respectivement V 2 ) et m 1→2 (respectivement m 2→1 ) le nombre d'arêtes qui vont des noeuds de V 1 (respectivement V 2 ) aux noeuds de V 2 (respectivement V 1 ). Remar- quez que le nombre total d'arêtes m = m 1→2 + m 2→1 + m 1 + m 2 et (x,x ′ )∈X 2 m 1→2 • x ⊤ M ⋆ x ′ + m 2→1 • x ′⊤ M s tarx + m 1 • x ⊤ M s tarx + m 2 • x ′⊤ M ⋆ x ′ , (C.5) nous optimisons la récompense globale totale que l'on obtiendrait en allouant seulement deux bras (x, x ′ ) ∈ X 2 dans le graphe. Cette stratégie est décrite dans l'Algorithme 15. Algorithm 15: Algorithm d'approximation amélioré pour notre problème NP-Dur Entrée : Graphe G = (V, E), ensemble de bras X , matrice M ⋆ (V 1 , V 2 ) = Approx-MAX-CUT(G); m 1→2 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 2 }|; m 2→1 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 1 }|; m 1 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 1 }|; m 2 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 2 }|; Trouver (x ⋆ , x′ ⋆ ) solution of (C.5); for i in V 1 do x (i) t = x⋆ ; // Peut etre fait en parallèle end for i in V 2 do x (i) t = x′ ⋆ ; // Peut etre fait en parallèle end retourner (x (1) , . . . , x (n) ) + m 1 x⊤ ⋆ M ⋆ x⋆ -x ⊤ ⋆ M ⋆ x ⋆ + m 2 x′⊤ ⋆ M ⋆ x′ ⋆ -x ′⊤ ⋆ M ⋆ x ′ ⋆ . Les nouvelles garanties que nous obtenons sur la récompense de l'allocation obtenue par l'Algorithme 15 sont énoncées dans le théorème suivant. un paramètre dépendant du problème qui mesure le gain relatif de l'optimisation sur les récompenses sous-optimales définies comme : ϵ = ∆ M ⋆ x (j) ⋆ . Corollary C. = m 1→2 + m 2→1 m + m 1 + m 2 m ξ + ϵ . Ce corollaire est utile pour comprendre en pratique le type de garanties que nous pouvons avoir en fonction de la structure du graphe et de l'algorithme d'approximation que nous utilisons pour résoudre le problème de Max-Cut. C.4 Identification du meilleur bras pour les bandits bilineaires graphiques Dans cette section, nous supposons que nous ne connaissons pas la matrice M ⋆ et qu'une entité centrale fait face à un problème de bandits bilinéaires graphiques où à chaque tour elle choisit un bras pour chaque noeud du graphe et observe une récompense bilinéaire pour chaque arête du graphe. Dans ce chapitre, nous nous concentrerons sur l'objectif d'identification du meilleur bras. C.4.1 Préliminaires Pour simplifier, nous considérons que la matrice inconnue M ⋆ est symétrique, ce qui simplifie grandement le raisonnement. Dans le chapitre C.3, nous avons conçu des algorithmes en temps polynomial qui nous permettent de calculer une solution d'approximation α au problème NP-Dur consistant à trouver la meilleure allocation étant donné M ⋆ . Remarquez que dans l'algorithme d'α-approximation 14, M ⋆ n'est utilisé que pour identifier la meilleure paire (x ⋆ , x ′ ⋆ ) comme suit : x ⋆ , x ′ ⋆ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ + M ⊤ ⋆ x ′ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ x ′ . (C.6) Ainsi, utiliser une estimation M de M ⋆ ayant la propriété suivante : arg max (x,x ′ )∈X 2 x ⊤ Mx ′ = arg max (x,x ′ )∈X 2 x ⊤ M ⋆ x ′ = x ⋆ , x ′ ⋆ , ( Condition d'arrêt Remarquons que le problème d'optimisation (C.6) est équivalent au problème d'optimisation suivant (x ⋆ , x ′ ⋆ ) = arg max (x,x ′ )∈X vec xx ′⊤ , vec (M ⋆ ) . Simplifions les notations et désignons par θ ⋆ ≜ vec (M ⋆ ) la version vectorisée de la matrice inconnue M ⋆ . 
Utilisons également la notation z xx ′ ≜ vec xx ′⊤ , et définissons Z = {z xx ′ |(x, x ′ ) ∈ X 2 } l'ensemble contenant de tels vecteurs. Alors, chercher le couple (x ⋆ , x ′ ⋆ ) revient à chercher le vecteur z ⋆ ∈ Z où z ⋆ = arg max z∈Z ⟨z, θ ⋆ ⟩ . En d'autres termes, nous voulons trouver un bras z ∈ Z, tel que pour tout z ′ ∈ Z, (zz ′ ) ⊤ θ ⋆ ≥ 0. Cependant, nous n'avons pas accès à θ ⋆ , donc nous devons utiliser son estimation empirique. Ainsi, à chaque tour t, l'entité centrale peut choisir pour chaque couple de voisins (i, j) un bras z ∈ Z et obtenir une récompense linéaire bruitée de la forme ⟨z, θ ⋆ ⟩ + η où η est une variable aléatoire σ-sous-gaussienne, qui peut être utilisée pour calculer une estimation θt . Pour plus de clarté, nous appellerons tout x ∈ X un bras de noeud et tout z ∈ Z un bras d'arête. Si x (i) t ∈ X représente le bras de noeud alloué au noeud i ∈ V au temps t, pour chaque arête (i, j) ∈ E nous désignerons le bras d'arête associé par z (i,j) t ≜ vec x (i) t x (j)⊤ t ∈ Z. Le but ici est de définir la séquence optimale (z 1 , . . . , z mt ) ∈ Z mt qui devrait être tirée dans les t premiers tours de façon à ce que (C.7) soit atteint le plus tôt possible. Une approche naturelle consiste à s'appuyer sur les stratégies classiques développées pour l'identification du meilleur bras dans les bandits linéaires. Nous définissons (y 1 , . . . , y mt ) les récompenses bruitées correspondantes de la séquence (z 1 , . . . , z mt ). Nous supposons que les termes de bruit dans les récompenses sont i.i.d., suivant une distribution σ-sous-gaussienne. Soit θt = A -1 t b t ∈ R d 2 la solution du problème des moindres carrés ordinaires avec A t = mt i=1 z i z ⊤ i ∈ R d 2 ×d 2 et b t = mt i=1 z i y i ∈ R d 2 . En suivant les étapes de [START_REF] Soare | Best-arm identification in linear bandits[END_REF], nous pouvons montrer que s'il existe z ∈ Z tel que pour tout z ′ ∈ Z ce qui suit est vrai : ∥z -z ′ ∥ A -1 t 8σ 2 log 6m 2 t 2 K 4 δπ 2 ≤ ∆t z, z ′ , (C.8) où ∆t (z, z ′ ) = (z -z ′ ) ⊤ θt est l'écart empirique entre z et z ′ , Une strategie G-Allocation contrainte Étant donné la condition d'arrêt (C.8) dérivée dans la section précédente, on veut trouver la séquence de bras d'arête z ⋆ mt = (z ⋆ 1 , . . . , z ⋆ mt ) telle que : z ⋆ mt ∈ arg min (z 1 ,...,zmt)∈Z mt max z ′ ∈Z z ′⊤ mt i=1 z i z ⊤ i -1 z ′ . (G-opt-Z) Ceci est connu sous le nom de G-allocation (voir e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]) et est NP-Dur à calculer ( [START_REF] Çivril | On selecting a maximum volume sub-matrix of a matrix and related problems[END_REF]104]). Une façon de trouver une solution approximative consiste à s'appuyer sur une relaxation convexe du problème d'optimisation (G-opt-Z) et à calculer d'abord une allocation à valeur réelle Γ ⋆ ∈ S Z telle que Néanmoins, (G-relaxed-Z) donne tout de même des informations précieuses sur le nombre de fois, en proportion, où chaque bras d'arête z ∈ Z doit être allouée au graphe. Dans la section suivante, nous présentons un algorithme satisfaisant à la fois les exigences de proportion et les contraintes graphiques. Γ ⋆ ∈ arg min Γ ∈S Z max z ′ ∈Z z ′⊤ z∈Z Γ z z ⊤ -1 z ′ . 
(G- C.4.2 Algorithme et garanties Allocation aléatoire sur les noeuds Notre algorithme est basé sur une méthode aléatoire d'allocation directe des bras de noeud aux noeuds, évitant ainsi la tâche difficile de choisir les bras d'arête et d'essayer de les allouer au graphe tout en s'assurant que chaque noeud a une affectation unique. La validité de cette allocation aléatoire est basée sur le théorème C.4 ci-dessous montrant que l'on peut tirer des bras de noeuds dans X et les allouer au graphe de telle sorte que les bras d'arête associés suivent la distribution de probabilité Γ ⋆ solution de (G-relaxed-Z). Theorem C.4. Soit γ ⋆ une solution du problème d'optimisation suivant : min γ s tar∈S X max x ′ ∈X x ′ ⊤ x∈X γ x xx ⊤ -1 x ′ . (G-relaxed-X ) Soit Γ ⋆ ∈ S Z défini pour tout z = vec xx prime⊤ ∈ Z par Γ ⋆ z = γ ⋆ x γ ⋆ x ′ . Alors, Γ ⋆ est une solution de (G-relaxed-Z). Ce théorème implique que, à chaque tour t > 0 et pour chaque noeud i ∈ V , si x (i) t est tiré de γ ⋆ , alors pour toutes les paires de voisins (i, j) ∈ E, les bras d'arête associés z (i,j) t suivent la distribution de probabilité Γ ⋆ . De plus, comme γ ⋆ est une distribution sur l'ensemble des bras de noeud, X , Γ ⋆ peut etre considéré comme une distribution de probabilité jointe (produit) sur Analyse de la convergence Nous prouvons maintenant la validité de la procédure d'échantillonnage aléatoire détaillée dans l'Algorithme 16 en contrôlant la qualité de l'approximation max z∈Z z ⊤ A -1 t z par rapport à l'optimum du problème d'optimisation G-allocation max z ′ ∈Z z ′⊤ mt i=1 z ⋆ i z ⋆⊤ i -1 z ′ décrit dans (G-opt-Z). Comme cela est généralement fait dans la littérature optimal design (voir e.g., [START_REF] Pukelsheim | Optimal Design of Experiments[END_REF][START_REF] Sagnol | Optimal design of experiments with application to the inference of traffic matrices in large networks: second order cone programming and submodularity[END_REF][START_REF] Soare | Best-arm identification in linear bandits[END_REF]), nous limitons l'erreur relative β t : max z∈Z z ⊤ A -1 t z ≤ (1 + β t ) max z ′ ∈Z z ′⊤ mt i=1 z ⋆ i z ⋆⊤ i -1 z ′ . Notre analyse s'appuie sur plusieurs résultats de la théorie de la concentration matricielle. On peut se référer par exemple à [START_REF] Tropp | An introduction to matrix concentration inequalities[END_REF] et à ses références pour une introduction approfondie sur ce sujet. Nous introduisons d'abord quelques notations supplémentaires. Soit f Z la fonction telle que, pour toute matrice non singulière Q ∈ R d 2 ×d 2 , f Z (Q) = max z∈Z z ⊤ Q -1 z et pour toute distribution Γ ∈ S Z on rappelle que Σ Z (Γ ) ≜ z∈Z Γ z zz ⊤ est la matrice de covariance associée. Enfin, laissons A ⋆ t = mt i=1 z ⋆ i z ⋆⊤ i être la matrice de G-optimal design construite pendant t tours. Theorem C.5. Soit Γ ⋆ une solution du problème d'optimisation (G-relaxed-Z). Soit 0 < δ ≤ 1 et soit t 0 tel que t 0 = 2Ld 2 log 2d 2 /δ /λ min , où λ min est la plus petite valeur propre de la matrice de covariance 1 K 2 z∈Z zz ⊤ . Alors, à chaque tour t ≥ t 0 avec une probabilité d'au moins 1 -δ, la stratégie randomisée G-allocation pour les bandits bilinéaires graphiques de l'Algorithme 16 produit une matrice A t telle que : De plus, soit τ le nombre de tours suffisant pour qu'un algorithme quelconque détermine le meilleur bras avec une probabilité d'au moins 1 -δ. 
Une borne inférieure sur l'espérance de τ peut être obtenue à partir de celle dérivée pour le problème de l'identification du meilleur bras dans les bandits linéaires (voir e.g., Théorème 1 dans [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF]) : f Z (A t ) ≤ (1 + β t )f Z (A ⋆ t ) E[τ ] ≥ min Γ ∈S Z max z∈Z\{z⋆} log 1 2.4δ 2σ 2 ∥z ⋆ -z∥ 2 Σ Z (Γ ) -1 m (z ⋆ -z) ⊤ θ ⋆ 2 . Comme observé dans [START_REF] Soare | Best-arm identification in linear bandits[END_REF], cette limite inférieure peut être bornée, dans le pire des cas, par 4σ 2 d 2 /(m∆ 2 min ), ce qui correspond à notre borne jusqu'aux termes logarithmiques et à l'erreur relative β t . C.4.3 Influence de la structure du graphe sur le taux de convergence Caractérisation de la variance associée à la stratégie aléatoire La limite de convergence du théorème C.5 dépend de v = ∥E (A 1 -EA 1 ) 2 ∥. Dans cette section, nous caractérisons l'impact de la structure du graphe sur cette quantité et, par extension, sur le taux de convergence. Les limites sont énoncées dans le tableau C.1. Graphique Limite supérieure sur ∥Var(A Ces quatre exemples mettent en évidence la forte dépendance de la variance à la structure du graphe. Plus les arêtes sont indépendantes (sans noeuds communs), plus la quantité ∥Var(A 1 )∥ est petite. Pour un nombre fixe d'arêtes m, le meilleur cas est le graphe couplage où aucune arête ne partage le même noeud et le pire cas est le graphe en étoile où toutes les arêtes partagent un noeud central. Résultats expérimentaux validant la dépendance du graphe Dans cette section, nous considérons la version modifiée d'une expérience standard introduite par [START_REF] Soare | Best-arm identification in linear bandits[END_REF] et utilisée dans la plupart des articles sur l'identification du meilleur bras dans les bandits linéaires [START_REF] Fiez | Sequential experimental design for transductive linear bandits[END_REF][START_REF] Tao | Best Arm Identification in Linear Bandits with Linear Dimension Dependency[END_REF]106,109] . La matrice M ⋆ a sa première coordonnée égale à 2 et les autres égales à 0, ce qui donne θ ⋆ = vec (M ⋆ ) = (2, 0, . . . , 0) ⊤ ∈ R d 2 . Le meilleur bras d'arête est donc z ⋆ = z (1,1) = e ′ 1 . On peut noter que lorsque ω tend vers 0, il est plus difficile de différencier z (1,1) et z (d+1,d+1) = vec x (d+1) x ⊤ (d+1) que z (1,1) et les autres bras. Nous fixons η (i,j) t ∼ N (0, 1), pour toute arête (i, j) et tour t. Nous considérons deux cas où ω = 0.1 qui rend les bras z (1,1) et z (d+1,d+1) difficiles à différencier, et ω = π/2 qui rend le bras z (1,1) facilement identifiable comme le bras optimal. Pour chacun de ces deux cas, nous évaluons l'influence de la structure du graphe, du nombre d'arêtes m et de la dimension de l'espace des bras d'arête d 2 sur la complexité d'échantillonnage. Les résultats sont présentés dans la figure C.2. Lorsque ω = 0, 1, le type de graphe n'a pas d'impact sur le nombre de tours nécessaires pour vérifier la condition d'arrêt. Ceci est principalement dû au fait que l'ampleur de la variance associée est négligeable par rapport au nombre de tours. Par conséquent, même si nous faisons varier le nombre d'arêtes ou la dimension, nous obtenons les mêmes performances pour tout type de graphe, y compris le graphe matching. Cela implique que notre algorithme est aussi performant qu'un bandit linéaire qui tire m arêtes en parallèle à chaque tour. 
Lorsque ω = π/2, le nombre de tours nécessaires pour vérifier la condition d'arrêt est plus petit et l'amplitude de la variance n'est plus négligeable. En effet, lorsque le nombre d'arêtes ou la dimension augmente, on remarque que le graphe en étoile prend plus de temps pour satisfaire la condition d'arrêt. De plus, notons que les complexités d'échantillonnage obtenues pour le cercle et le graphe d'appariement sont similaires. Cette observation est en accord avec la dépendance à la variance montrée dans le Tableau C.1. C.5 Algorithmes basés sur le regret pour les bandits bilinéaires graphiques Dans cette section, nous supposons également que nous ne connaissons pas la matrice des paramètres M ⋆ et comme décrit dans la configuration du problème dans la section C.2, une entité centrale fait face au problème des bandits bilinéaires graphiques où à chaque tour elle choisit un bras pour chaque noeud du graphe et observe une récompense bilinéaire pour chaque arête du graphe. L'objectif de l'entité centrale est de concevoir un algorithme qui maximise l'espérance de la récompense globale cumulée obtenue pendant T tours T t=1 (i,j)∈E x pour toute instance de bandits bilinéaires graphiques décrits dans la section C.2, sauf si P = N P . Par conséquent, l'objectif de concevoir un algorithme avec un regret sous-linéaire en T n'est pas réalisable en temps polynomial par rapport à n. Cependant, certains problèmes NP-Dur sont α-approximables (pour un certain α ∈ (0, 1]), ce qui signifie qu'il existe un algorithme en temps polynomial garanti pour produire des solutions dont les valeurs sont au moins α fois la valeur de la solution optimale. Nous renvoyons le lecteur à la section C.3 pour plus d'informations sur l'approximation de la solution optimale de notre problème. Pour ce type de problèmes, il est logique de considérer l'α-pseudo-regret comme dans [START_REF] Chen | Combinatorial multi-armed bandit: General framework and applications[END_REF][START_REF] Kakade | Playing games with approximation algorithms[END_REF] qui est défini pour tout α ∈ (0, 1] comme suit R α (T ) ≜ T t=1   α (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ - (i,j)∈E x (i)⊤ t M ⋆ x (j) t   , et on se fixe comme objectif de concevoir un algorithme avec un α-regret sous-linéaire. Enfin, comme nous l'avons fait dans le chapitre C.4, rappelons que la récompense obtenue pour chaque arête du graphe à chaque tour peut être vue comme une récompense linéaire bruitée en dimension supérieure [START_REF] Jun | Bilinear Bandits with Low-rank Structure[END_REF] avec y (i,j) t = vec x (i) t x (j)⊤ t , vec (M ⋆ ) + η (C.10) Dans ce chapitre, nous choisissons de concevoir un algorithme basé sur le principe d'optimisme face à l'incertitude [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF], et dans le cas d'une récompense linéaire [START_REF] Abbasi-Yadkori | Improved algorithms for linear stochastic bandits[END_REF][START_REF] Li | A contextual-bandit approach to personalized news article recommendation[END_REF], nous devons maintenir un estimateur du vrai paramètre θ ⋆ . Pour ce faire, définissons pour tous les tours t ∈ {1, . . . , T } l'estimateur MCO de θ ⋆ comme suit : θt = A - C.5.3 Algorithme amélioré et analyse du regret Nous abordons le problème de concevoir une version améliorée de l'algorithme proposé en utilisant l'idée présentée dans l'Algorithme 15 qui améliore le taux d'approximation. 
Rappelons que dans la section précédente de l'Algorithme 17, l'entité centrale choisit le couple (x t , x ′ t ) tel que (x t , x ′ t ) = arg max (x,x ′ )∈X 2 max θ∈C t-1 (δ) ⟨z xx ′ + z x ′ x , θ⟩ , qui maximise la récompense optimiste obtenue entre deux noeuds si l'entité centrale était capable d'allouer x t à un noeud et x ′ t au second. Cette stratégie étant optimale localement mais compliquée par la prise en compte des dépendances entre les arêtes, l'entité centrale pourrait prendre en considération les bras d'arêtes de la forme z xx et z x ′ x ′ qui sont créés lors de l'allocation des noeuds du graphe en utilisant seulement deux bras de noeuds x et x ′ . Cette idée suit celle présentée dans l'Algorithme 15 où l'on rappelle avec des notations différentes que le couple (x ⋆ , x′ ⋆ ) choisi pour allouer les noeuds du graphe sont tels que Ici, au lieu de maximiser la récompense locale que l'on peut obtenir entre deux noeuds, l'entité centrale maximise la récompense optimiste globale que l'on obtiendrait en allouant seulement deux bras (x, x ′ ) ∈ X 2 dans le graphe. Cette stratégie est décrite dans l'Algorithme 18. (x ⋆ , x′ ⋆ ) = arg max (x,x ′ )∈X 2 ⟨m 1→2 • z xx ′ + m 2→1 • z x ′ x + m 1 z xx + m 2 z x ′ x ′ , Algorithm 18: OFUL amélioré pour les bandits bilineqires graphiques Input : Graphe G = (V, E), ensemble X (V 1 , V 2 ) = Approx-MAX-CUT(G); m 1→2 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 2 }|; m 2→1 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 1 }|; m 1 = |{(i, j) ∈ E|i ∈ V 1 ∧ j ∈ V 1 }|; m 2 = |{(i, j) ∈ E|i ∈ V 2 ∧ j ∈ V 2 }|; for t = 1 to T do xt , x′ t , θt-1 = arg max (x,x ′ ,θ)∈X 2 ×C t-1 ⟨m 1→2 • z xx ′ + m 2→1 • z x ′ x + m 1 • z xx + m 2 • z x ′ x ′ , θ⟩; x (i) t = xt pour tout i dans V 1 ; x (i) t = x′ t pour tout i dans V 2 ; Obtenir pour tout (i, j) dans E les récompenses y ∆ = ⟨m 1→2 z x⋆ x′ ⋆ -z x⋆x ′ ⋆ + m 2→1 z x′ ⋆ x⋆ -z x ′ ⋆ x⋆ + m 1 (z x⋆ x⋆ -z x⋆x⋆ ) + m 2 z x′ ⋆ x′ ⋆ -z x ′ ⋆ x ′ ⋆ , θ ⋆ ⟩ . Les nouvelles garanties que nous obtenons sur l'α-regret de l'Algorithme 18 sont énoncées dans le théorème suivant. On peut voir ici que l'amélioration se produit dans le α du α-regret. Dans la section suivante, nous confirmons ces résultats par des expériences. C.5.4 Expériences numériques Nous concevons une expérience qui compare en pratique les performances de l'Algorithme 17 et de l'Algorithme 18 avec l'algorithme Explore-Then-Commit (ETC) en utilisant la stratégie d'exploration conçue dans la section C.4 pendant la phase d'exploration, et en allouant les noeuds dans V 1 et V 2 avec le meilleur couple estimé (x, x ′ ) = arg max (x,x ′ ) ⟨z xx ′ + z x ′ x , θt ⟩ pendant la phase de d'exploitation. Cependant, puisque les algorithmes que nous avons présentés dans cette section ont des garanties sur les α-regrets avec différents α, nous traçons la fraction de la récompense globale optimale pour chaque itération. Comme dans le chapitre C.3, nous observons une nette amélioration en choisissant à chaque tour t le couple de bras (x t , x′ t ) au lieu de (x t , x ′ t ). C.6 Conclusion & perspectives C.6.1 Résumé des résultats Dans cette thèse, nous avons introduit un nouveau modèle que nous avons nommé Bandits Bilinéaires Graphiques qui modélise les problèmes multi-agents centralisés où des interactions par paires existent entre les agents. • Dans la section C.3, nous avons mis en évidence le fait que l'apprenant était confronté à un problème d'optimisation sous-jacent qui est NP-Hard quel que soit le but que l'apprenant souhaite atteindre. 
Nous avons donc proposé un algorithme d'α-approximation avec α ≥ 1/2 qui ne nécessite que de trouver le couple de bras (x ⋆ , x ′ ⋆ ) pour retourner l'α-approximation. Nous avons ensuite affiné ce paramètre d'approximation par rapport aux paramètres dépendant du problème en nous basant sur la structure du graphe et sur une propriété de la matrice M ⋆ . • Dans la section C.4, étant donné l'algorithme d'α-approximation conçu dans la section C.3, nous avons présenté un algorithme de pure exploration qui permettait à l'apprenant de construire une estimation M qui était statistiquement efficace en termes d'optimal design. En effet, le problème de trouver en un nombre minimum de tours le meilleur couple (x ⋆ , x ′ ⋆ ) utilisé dans l'algorithme d'approximation α revenait à trouver le G-optimal design, également appelé G-allocation dans la littérature bandit. Résoudre ce problème dans les bandits bilinéaires graphiques impliquait de traiter une contrainte supplémentaire. C'est pourquoi nous avons présenté un algorithme qui respectait cette contrainte et qui utilisait un échantillonnage aléatoire pour construire l'estimation M. Nos résultats théoriques ont révélé un terme qui dépendait de la structure du graphe, nous avons donc montré l'impact du graphe dans nos résultats. • Enfin, dans la section C. 1. 1 R 1 Illustration of the learner's decision process at a given round t for a simple graph of three nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 The δ-confidence level ellipsoid . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Variation of ϵ, ξ, α 1 and α 2 with respect to the parameter ζ. The closer ζ is to 0 the lower the reward of the unwanted couples (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ), the closer ζ is to 1 the higher the rewards of the unwanted couples. The dimension d of the arm-set is 10 (which gives linear reward with unknown parameter θ ⋆ of dimension 100). The plotted curve represents the average value of the parameters over 100 different matrices M ⋆ initiated randomly with positive values. . . . . 4.1 Collision when allocating directly edge-arms to the edges . . . . . . . . . . . . 4.2 Upper bound on the variance and convergence rate of Algorithm 8 for the star, complete, circle and matching graph with respect to the number of edges m and the number of rounds t. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Number of rounds t needed to verify the stopping condition (4.3) with respect to left: the number of edges m where the dimension of the edge-arm space Z is fixed and equal to 25 and right: the dimension of the edge-arm space Z where the number of edges is fixed and equal to 156. For both experiments we run 100 times and plot the average number of rounds needed to verify the stopping condition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Fraction of the optimal global reward obtained at each round by applying the Algorithm 9, Algorithm 10 and the Explore-Then-commit algorithm (here named GBB-BAI) using the exploration strategy in Chapter 4. We use a complete graph of 5 nodes, we run the experiment on 5 different matrices as in Figure 3.1 with ζ = 0 and run it 10 different times to plot the average fraction of the global reward. We set the confidence δ = 0.001. . . . . . . . . . . . . . . . . . . . . 
B.1 Representation of the µ -1 (blue dotted line) and µ 1 (red plain line) distributions, without attack (left) and with three different attacks: no penalty (second drawing), with mass penalty (third) and with norm penalty (fourth). On all figures blue area on the left of the axis is P h (ϵ 2 ) and red area on the right is N h (ϵ 2 ). . . xv B.2 Illustration of adversarial examples (only on class 1 for more readability) crossing the decision boundary (left), adversarially trained classifier for the class 1 (middle), and a randomized classifier that defends class 1. Stars are natural examples for class 1, and crosses are natural examples for class -1. The straight line is the optimal Bayes classifier, and dashed lines delimit the points close enough to the boundary to be attacked resp. for class 1 and -1. We focus the drawing on the star points. Crosses can be treated symmetrically. . . . . . . . . . . . . . . . . B.3 Illustration of the notations U , U + , and U -for proof of Theorem B.4. . . . . B.4 Illustration of the notations U , U + , U -and δ for proof of Theorem B.5. . . . B.5 Evolution of the accuracy under Adaptive-ℓ ∞ -PGD attack depending on the budget ϵ ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.6 Comparison of the mixture that has as first classifier the best one in term of natural accuracy and the mixture that has as first classifier the best one in term of Accuracy under attack. The accuracy under attack is computed with the ℓ ∞ -PGD attack. NA means matural accuracy, and AUA means accuracy under attack. C.1 Borne supérieure de la variance et du taux de convergence de l'Algorithme 16 pour le graphe en étoile, le graphe complet, le cercle et le graphe couplage par rapport au nombre d'arêtes m et au nombre de tours t. . . . . . . . . . . . . . C.2 Nombre de tours t nécessaires pour vérifier la condition d'arrêt (C.8) par rapport à gauche: le nombre d'arêtes m où la dimension de l'espace de Z est fixée et égale à 25 et right: la dimension de l'espace de Z où le nombre d'arêtes est fixé et égal à 156. Pour les deux expériences, nous les exécutons 100 fois et nous traçons le nombre moyen de tours nécessaires pour vérifier la condition d'arrêt. . . . . . . C.3 Fraction de la récompense globale optimale obtenue à chaque tour en appliquant l'Algorithme 17, l'Algorithme 18 et l'algorithme Explore-Then-commit (appelé ici GBB-BAI) en utilisant la stratégie d'exploration de la Section C.4. Nous utilisons un graphe complet de 5 noeuds, nous exécutons l'expérience sur 5 matrices différentes avec ζ = 0 et l'exécutons 10 fois différentes pour tracer la fraction moyenne de la récompense globale . . . . . . . . . . . . . . . . . . . . . . . xvi Notations Set of real numbers R d 1 1 An introduction to the stochastic multi-armed bandit problem . . . 7 2. 1 . 1 711 Motivations and formalization . . . . . . . . . . . . . . . . . 7 2. 1 . 2 712 Maximizing the cumulative rewards . . . . . . . . . . . . . . . 8 2. 2 82 The stochastic linear bandit problem . . . . . . . . . . . . . . . . . 10 2.2.1 Formalization . . . . . . . . . . . . . . . . . . . . . . . . . 10 2. 2 . 2 1022 Experimental designs serving the pure exploration setting . . . . . 11 2. 2 . 3 1123 Optimism in the face of uncertainty for linear bandits (OFUL) . . 14 2. 2 . 4 1424 Bilinear bandits are linear bandits in a higher dimensional space . . 15 2. 3 153 Multi-agent bandits and combinatorial bandits . . . . . . . . . . . 17 17 2. 3 . 1 31 Parallelizing contextual linear bandits . . . . . . . . . . . 
. . . 17 17 2. 3 . 2 32 Bandit problems in graphs . . . . . . . . . . . . . . . . . . . 18 2. 3 . 3 1833 Link with unstructured multi-agents bandits . . . . . . . . . . 19 2. 3 . 4 1934 Combinatorial bandits . . . . . . . . . . . . . . . . . . . . . 19 19 Theorem 3 . 2 . 32 1 we have m 1→2 = m 2→1 ≥ m/4 and m 1 + m 2 ≤ m/2. Let us consider the graph G = (V, E), a finite arm set X ⊂ R d and the matrix M ⋆ ∈ R d×d given as input to Algorithm 5. Let (x Figure 3 . 1 : 31 Figure 3.1: Variation of ϵ, ξ, α 1 and α 2 with respect to the parameter ζ. The closer ζ is to 0 the lower the reward of the unwanted couples (e i ⋆ , e i ⋆ ) and (e j ⋆ , e j ⋆ ), the closer ζ is to 1 the higher the rewards of the unwanted couples. The dimension d of the arm-set is 10 (which gives linear reward with unknown parameter θ ⋆ of dimension 100). The plotted curve represents the average value of the parameters over 100 different matrices M ⋆ initiated randomly with positive values. Figure 4 . 1 : 41 Figure 4.1: Collision when allocating directly edge-arms to the edges Figure 4 . 3 : 43 Figure 4.3: Number of rounds t needed to verify the stopping condition (4.3) with respect to left: the number of edges m where the dimension of the edge-arm space Z is fixed and equal to 25 and right: the dimension of the edge-arm space Z where the number of edges is fixed and equal to 156. For both experiments we run 100 times and plot the average number of rounds needed to verify the stopping condition. 1 1 Optimism in the face of uncertainty for graphical bilinear bandits . 55 5.1.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . 55 5.1.2 Algorithm and analysis of the regret . . . . . . . . . . . . . . . 59 5.1.3 Improved algorithm and analysis of the regret . . . . . . . . . . 66 5.2 Numerical experiments . . . . . . . . . . . . . . . . . . . . . . . . 72 5.3 Conclusion and perspectives . . . . . . . . . . . . . . . . . . . . . 73 Figure 5 . 1 : 51 Figure 5.1: Fraction of the optimal global reward obtained at each round by applying the Algorithm 9, Algorithm 10 and the Explore-Then-commit algorithm (here named GBB-BAI) using the exploration strategy in Chapter 4. We use a complete graph of 5 nodes, we run the experiment on 5 different matrices as in Figure 3.1 with ζ = 0 and run it 10 different times to plot the average fraction of the global reward. We set the confidence δ = 0.001. (a) Evolution of score with n (b) Evolution of score with d (c) Evolution of score with d η>0 e -ηt tr exp n i=1 log E exp(ηX i ) , and similarly P λ min n i=1 X i ≤ t ≤ inf η<0 e -ηt tr exp n i=1 log E exp(ηX i ) . de -tη+ e ηL - 1 Lλ 1 3 yields:P(λ min (S) ≤ t) ≤ inf η<0 e -ηt tr exp e ηL -1 L ES .Reintegrating ∥ • ∥ into the RHS yields: min(ES) . Figure B. 1 : 1 Figure B.1: Representation of the µ -1 (blue dotted line) and µ 1 (red plain line) distributions, without attack(left) and with three different attacks: no penalty (second drawing), with mass penalty (third) and with norm penalty (fourth). On all figures blue area on the left of the axis is P h (ϵ 2 ) and red area on the right is N h (ϵ 2 ). Figure B. 2 : 2 Figure B.2: Illustration of adversarial examples (only on class 1 for more readability) crossing the decision boundary (left), adversarially trained classifier for the class 1 (middle), and a randomized classifier that defends class 1. 
Stars are natural examples for class 1, and crosses are natural examples for class -1.The straight line is the optimal Bayes classifier, and dashed lines delimit the points close enough to the boundary to be attacked resp. for class 1 and -1. We focus the drawing on the star points. Crosses can be treated symmetrically. 4 we devise a new procedure called Boosted Adversarial Training (BAT) to construct a robust mixture of two classifiers. It is based on three core principles: Adversarial Training, Boosting and Randomization. B.6 Experiments: How to build the mixture Simple mixture procedure (BAT). Given a dataset D and a weight parameter α ∈ [0, 1], we construct h 1 the first classifier of the mixture using Adversarial Training 4 on D. Then, we train Lemma B. 3 . 3 Lemma B.1 holds. Let us consider ϕ ∈ F X |ϵ 2 2 . If we take h ∈ BR(ϕ), then for y = 1 (resp. y = -1), and for any B ⊂ P h (resp. B ⊂ N h ) one has P(Y = y|X ∈ B) ≥ P(Y = -y|X ∈ B) with Y ∼ ν and for all y ∈ Y, X|(Y = y) ∼ ϕ y #µ y . ) ̸ = y} d(ϕ y #µ y )(x). Moreover thanks to the transfer theorem, one gets the following:) ̸ = y} d(ϕ y #µ y )(x) ) ̸ = y}f y (x) dζ(x) = inf h∈H X y=±1ν y 1{h(x) ̸ = y}f y (x) dζ(x). Figure B. 3 : 3 Figure B.3: Illustration of the notations U , U + , and U -for proof of Theorem B.4. Figure B. 4 : 4 Figure B.4: Illustration of the notations U , U + , U -and δ for proof of Theorem B.5. Theorem C. 1 . 1 NP-Dur par rapport au nombre de noeuds n. Considérons une matrice donnée M ⋆ ∈ R d×d et un ensemble de bras finis X ⊂ R d . A moins que P=NP, il n'existe pas d'algorithme en temps polynomial pour trouver la solution optimale de max ( 1 ) 1 ⋆ , . . . , x (n) ⋆ maximisant la récompense globale attendue. Dans les sections suivantes, nous donnons des algorithmes d'approximation en temps polynomial qui ont des garanties sur la récompense globale attendue retournée par rapport à la récompense optimale. Theorem C. 2 .C. 3 . 3 233 que par définition de l'ensemble d'arêtes E et en utilisant la Proposition C.1 nous avons m 1→2 = m 2→1 ≥ m/4 et m 1 + m 2 ≤ m/2. Considérons le graphe G = (V, E), un ensemble fini de bras X ⊂ R d et la matrice M ⋆ ∈ R d×d donnée en entrée de l'Algorithme 14. Soit (x (1) ⋆ , . . . , x (n) ⋆ ) l'allocation optimale telle que défini dans (C.3) et soit 0 ≤ ξ ≤ 1 un paramètre dépendant du problème défini par ξ = min x∈X x ⊤ M ⋆ x Algorithme amélioré utilisant la structure du graphe Theorem C. 3 . 3 Considérons le graphe G = (V, E), un ensemble fini de bras X ⊂ R d et la matrice M ⋆ ∈ R d×d donnés en entrée de l'Algorithme 15. Soit (x (1) ⋆ , . . . , x (n) ⋆ ) l'allocation optimale telle que définie dans (C.3) et que 0 ≤ ξ ≤ 1 soit défini comme dans le Théorème C.2. Soit 0 ≤ ϵ ≤ 1 2 X 2 Algorithm 16 : 216 avec pour marginale γ ⋆ . Étant donné la caractérisation dans le Théorème C.4 et notre objectif de vérifier la condition d'arrêt dans (C.8), nous présentons notre procédure d'échantillonnage dans l'Algorithme 16. Nous notons également qu'à chaque tour, l'échantillonnage des bras de noeuds peut être effectué en parallèle. Algorithme d'Exploration Pure : G-Allocation randomisé pour GBB Entrée : Graphe G = (V, E), ensemble de bras X Définir A 0 = I ; b 0 = 0 ; t = 1 ; Appliquer l'algorithme de Frank-Wolfe pour trouver la solution γ ⋆ de (G-relaxed-X ). while la condition d'arrêt (C. 
best-case graph implies v = O(m), which matches the O(1/√(mt)) convergence rate of a linear bandit algorithm using random sampling to draw mt edge-arms without (graphical) constraints. Moreover, we will see that the worst-case graph implies v = O(m²). Since we have bridged the gap between our constrained objective and the best-arm identification problem in linear bandits, thanks to Theorems C.4 and C.5 we are able to extend the known results for best-arm identification in linear bandits on the sample complexity and its associated lower bound.

Corollary C.2 ([90], Theorem 1). If the G-allocation is implemented with the randomized strategy of Algorithm 16, resulting in an approximation β_t, then with probability at least 1 - δ the best arm obtained with θ̂_t is z⋆ and
t ≤ 128 σ² d² (1 + β_t) log( 6 m² t² K⁴ / (δπ) ) / ( m Δ²_min ),
where Δ_min = min_{z∈Z\{z⋆}} (z⋆ - z)^⊤ θ⋆.

[We run experiments] to evaluate the sample complexity of our algorithm on different graphs. We consider d + 1 node-arms in X ⊂ R^d with d ≥ 2. This set consists of the d vectors (e_1, ..., e_d) forming the canonical basis of R^d, plus one additional arm x_{d+1} = (cos(ω), sin(ω), 0, ..., 0)^⊤ with ω ∈ (0, π/2]. Note that by construction the edge-arm set Z contains the canonical basis (e'_1, ..., e'_{d²}) of R^{d²}.

Figure C.2: Number of rounds t needed to verify the stopping condition (C.8) with respect to, left: the number of edges m, the dimension of the edge-arm space Z being fixed and equal to 25, and right: the dimension of the edge-arm space Z, the number of edges being fixed and equal to 156. For both experiments we run 100 repetitions and plot the average number of rounds needed to verify the stopping condition.

C.5.1 Optimism in the face of uncertainty for graphical bilinear bandits: preliminaries. We naturally build on ideas and results presented in Section C.3, where the matrix was assumed to be known by the central entity, and we follow the notation established in Section C.2. Recall that maximizing the cumulative rewards is equivalent to minimizing the associated regret; we therefore define the global pseudo-regret over T rounds as follows: [...]. We recall that the learner's goal is to obtain a pseudo-regret R(T) such that lim [...]

The problem max over (x^(1), ..., x^(n)) ∈ X^n of Σ_{(i,j)∈E} x^(i)⊤ M⋆ x^(j) is NP-hard with respect to the number of agents n. We extend this result in the following corollary. Corollary C.3 (fragment): there is no algorithm polynomial in n such that lim [...]

To simplify notation, call any x ∈ X a node-arm. Write z_{xx'} ≜ vec(xx'^⊤) and define the arm set Z = {z_{xx'} | (x, x') ∈ X²}, where any z ∈ Z is called an edge-arm. If the arm x_t^(i) ∈ X denotes the node-arm allocated to node i ∈ V at time t, for each edge (i, j) ∈ E we denote the associated edge-arm by z_t^(i,j) ∈ Z, and define θ⋆ = vec(M⋆), the vectorized version of the unknown matrix M⋆.
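Since the whole analysis relies on rewriting the bilinear reward as a linear one through this vectorization, here is a tiny self-contained check (our own illustration, arbitrary dimensions) that x^⊤ M⋆ x' = ⟨vec(x x'^⊤), vec(M⋆)⟩ when vec stacks the columns.

import numpy as np

rng = np.random.default_rng(0)
d = 4
M_star = rng.standard_normal((d, d))
x, x_prime = rng.standard_normal(d), rng.standard_normal(d)

# vec(A): concatenation of the columns of A (column-major order).
vec = lambda A: A.flatten(order="F")

theta_star = vec(M_star)
z = vec(np.outer(x, x_prime))          # edge-arm z_{xx'}

bilinear = x @ M_star @ x_prime        # bilinear reward
linear = z @ theta_star                # same quantity as a linear form
assert np.isclose(bilinear, linear)
print(bilinear, linear)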
With these notations, the (now linear) reward can be rewritten as follows: [...]

[Algorithm fragment] Compute θ̂_t as in (C.11); end; return θ̂_t.

Before stating the guarantees on the α-regret, we recall that in Section C.3 we defined the quantity ∆ ≥ 0 as the difference between the reward of the allocation (x̃⋆, x̃'⋆) and that of the allocation (x⋆, x'⋆).

Theorem C.7. Given the graphical bilinear bandit problem defined in Section C.2, define ξ as in Theorem C.6, let 0 ≤ ϵ ≤ 1/2 be a problem-dependent parameter measuring the gain of optimizing over the suboptimal and unwanted arms, defined as [...], and set α = (1+ξ)/2 + ϵ, where α ≥ 1/2 by construction. Then the α-regret of Algorithm 18 satisfies
R_α(T) ≤ Õ( (σd² + S√λ) √( T m max(2, (LS)²) ) + LSm √( d² log( (2 + T m L²/λ) / δ ) ) ),
where Õ hides logarithmic factors.

Figure C.3: Fraction of the optimal global reward obtained at each round by applying Algorithm 17, Algorithm 18 and the Explore-Then-Commit algorithm (here named GBB-BAI) using the exploration strategy of Section C.4. We use a complete graph of 5 nodes, run the experiment on 5 different matrices with ζ = 0, and repeat it 10 times to plot the average fraction of the global reward.

[Footnote] [...] ∈ R^{d²}, which is the concatenation of all the columns of A ∈ R^{d×d}.

Contents (fragment):
1 Introduction
1.1 Context & motivations
1.2 Problem setting
1.2.1 Stochastic Graphical Bilinear Bandits
1.2.2 Objectives
1.2.3 Outline of the thesis and contributions

Abbreviations: i.e. (id est), e.g. (exempli gratia), cf. (confer), i.i.d. (identically and independently distributed), MAB (Multi-Armed Bandits), MA-MAB (Multi-Agent Multi-Armed Bandits), BAI (Best Arm Identification), OFU (Optimism in the Face of Uncertainty), UCB (Upper Confidence Bound), GBB (Graphical Bilinear Bandits), ETC (Explore-Then-Commit).

3 Computing the best allocation for known parameter matrices
Contents:
3.1 An NP-Hard problem
3.1.1 Reduction to the max-cut problem
3.1.2 Approximation algorithm and guarantees
3.1.3 Improved algorithm using the graph structure
3.2 Numerical experiments: influence of the parameters on the solution
3.3 Conclusion and perspectives

Table 3.1: Values of several parameters with respect to the type of graph. Experiments were performed on graphs of n = 100 nodes, and results for the random graph are averaged over 100 draws.
Graph type   (m1+m2)/m   α2
Complete     0.495       0.505 + 0.495ξ + ϵ
Random       0.453       0.547 + 0.453ξ + ϵ
Circle       0.01        0.99 + 0.01ξ + ϵ
Star         0           1
Matching     0           1
(α1 = 0.5 + 0.5ξ.)

4 Best-arm identification in graphical bilinear bandits
Contents:
4.1 Preliminaries
4.1.1 A two-stage algorithm template
4.1.2 Stopping condition
4.1.3 A Constrained G-Allocation
4.2 Algorithm and guarantees
4.2.1 Random Allocation over the Nodes
4.2.2 Convergence Analysis
4.2.3 Case where M⋆ is not symmetric
4.3 Influence of the graph structure on the convergence rate
4.3.1 Characterization of the variance associated with the randomized strategy
4.3.2 Experimental results validating the dependence on the graph
4.4 Conclusion & Perspectives

Table 4.2: Upper bound on the variance and convergence rate of Algorithm 8 for the star, complete, circle and matching graphs, with respect to the number of edges m and the number of rounds t.

Table B.1: Evaluation on CIFAR10 and CIFAR100 without data augmentation. Accuracy under attack of a single adversarially trained classifier (AT) and of the mixture formed with our method (Ours). The evaluation is made with the Adaptive-ℓ∞-PGD and Adaptive-ℓ2-C&W attacks, both computed with 100 iterations. For Adaptive-ℓ∞-PGD we use an epsilon equal to 8/255 (≈ 0.031), a step size equal to 2/255 (≈ 0.008) and we allow random initialization. For Adaptive-ℓ2-C&W we use a learning rate equal to 0.01, 9 binary search steps, an initial constant of 0.001, we allow abortion when the attack has already converged, and we give results for the different values of the rejection threshold ϵ2 ∈ {0.4, 0.6, 0.8}. As for EOT, we do not need to estimate the expected accuracy of the mixture through Monte Carlo sampling since we have the exact weight of each classifier of the mixture; thus we give the exact expected accuracy.

[...] the second classifier h2 on a data set D̃ that contains adversarial examples against h1 created from examples of D. At the end we return the mixture constructed with those two classifiers, where the first one has a weight of 1 - α and the second one a weight of α. The parameter α is found by conducting a grid search. In Table B.1 we present results for α = 0.2 under strong state-of-the-art attacks. The procedure is summarized in Algorithm 12 (Boosted Adversarial Training): Input: D the training data set and α the weight parameter. Create and adversarially train h1 on D. Generate the adversarial data set D̃ against h1. Create and naturally train h2 on D̃.

Table B.1 (values; columns: natural accuracy, Adaptive-ℓ∞-PGD, Adaptive-ℓ2-C&W at ϵ2 = 0.4 / 0.6 / 0.8):
CIFAR10   Natural   0.88   0.00   0.00   0.00   0.00
CIFAR10   AT [66]   0.83   0.42   0.60   0.47   0.35
CIFAR10   Ours      0.80   0.55   0.60   0.57   0.53
CIFAR100  Natural   0.62   0.00   0.00   0.00   0.00
CIFAR100  AT [66]   0.58   0.26   0.38   0.29   0.22
CIFAR100  Ours      0.56   0.40   0.45   0.41   0.38
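For readers who want the BAT recipe above as code, the following sketch reproduces its control flow only; adversarial_train, natural_train and attack are hypothetical helpers standing in for the ℓ∞-PGD training and attack described in the appendix, and the mixture is used by sampling one classifier per prediction.

def boosted_adversarial_training(D, alpha, adversarial_train, natural_train, attack):
    """Build a two-classifier mixture following the BAT procedure.

    D: labelled training set; alpha: weight of the second classifier in [0, 1].
    adversarial_train / natural_train / attack are user-supplied helpers
    (hypothetical signatures, not from the thesis).
    """
    # 1) Adversarially train the first classifier on the clean data.
    h1 = adversarial_train(D)
    # 2) Build a new dataset of adversarial examples crafted against h1.
    D_adv = [(attack(h1, x, y), y) for (x, y) in D]
    # 3) Naturally train the second classifier on the adversarial dataset.
    h2 = natural_train(D_adv)
    # 4) Return the mixture: h1 with weight 1 - alpha, h2 with weight alpha.
    return [(h1, 1.0 - alpha), (h2, alpha)]

def predict_mixture(mixture, x, rng):
    """Randomized prediction: sample one classifier according to its weight.
    rng is an instance of random.Random."""
    classifiers, weights = zip(*mixture)
    h = rng.choices(classifiers, weights=weights, k=1)[0]
    return h(x)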
- When doing 200 iterations instead of 100 iterations, it does not change the performance of the attack.
- When increasing the budget ϵ∞, the accuracy goes to 0, which ensures that there is no gradient masking.

Here are some values to back this statement:

Table B.5: Evolution of the accuracy under the Adaptive-ℓ∞-PGD attack depending on the budget ϵ∞.
Epsilon    0.015   0.031   0.125   0.250
Accuracy   0.638   0.546   0.027   0.000

(C.1) where M⋆ ∈ R^{d×d} is an unknown matrix and η_t^(i,j) is a zero-mean σ-sub-Gaussian random variable. The reward y_t^(i,j) reflects the quality of the interaction between the neighbouring nodes i and j when they pull the arms x_t^(i) and x_t^(j) respectively at iteration t. The bilinear setting appears as a natural extension of the linear setting to model the interaction between two agents. Note that this model can be considered either in a decentralized setting, where the agents take actions without consulting the other agents, or in a centralized setting, where a central entity chooses the arms of all the agents, aggregates the obtained rewards and designs a global strategy for the agents of the graph. In this thesis we only consider the centralized case, where a central entity manages all the agents, chooses at each time t the allocation (x_t^(1), ..., x_t^(n)) and then receives the associated rewards y_t^(i,j) for all (i, j) ∈ E.

In this section we want to capitalize on Algorithm 14 and its (1+ξ)/2-optimal solution to refine the allocation of the arms x⋆ and x'⋆ so that the suboptimal rewards x⋆^⊤ M⋆ x⋆ and x'⋆^⊤ M⋆ x'⋆ penalize the global reward as little as possible. Indeed, in Algorithm 14 the choice of the couple (x⋆, x'⋆) is only guided by the potential gain that could be obtained on the cut edges (i.e., those going from a node in V1 to a node in V2 or vice versa). It does not take into account the m1 rewards of the form x⋆^⊤ M⋆ x⋆ and the m2 rewards of the form x'⋆^⊤ M⋆ x'⋆ obtained by assigning x⋆ to the nodes of V1 and x'⋆ to the nodes of V2. The improvement we can bring here is to include them in the optimization problem and to weight the different rewards obtained through the graph using the proportions m_1→2, m_2→1, m_1 and m_2. Denoting by (x̃⋆, x̃'⋆) the solution of the following optimization problem, max [...]

(C.7) allows us to identify the pair (x⋆, x'⋆), and therefore gives us the same guarantees as those presented in Theorem C.2. We thus address the problem of constructing M̂ such that, in a minimum number of rounds and with high probability, we are able to identify the pair (x⋆, x'⋆) and apply Algorithm 14.

(G-relaxed-Z) One could either use random sampling to draw edge-arms as i.i.d. samples from the distribution Γ⋆, or rounding procedures to efficiently convert each Γ⋆_z into an integer. However, these methods do not take into account the graphical structure of the problem. Indeed, at a given round the chosen edges may give rise to two different assignments for the same node, a phenomenon we call a collision. Consequently, random sampling or rounding procedures cannot be used directly to select the edge-arms in Z.
Algorithm 16 (continued): while the stopping condition (C.8) is not verified do
  // Sampling of the node-arms
  Draw x_t^(1), ..., x_t^(n) i.i.d. ∼ γ⋆ and obtain, for all (i, j) in E, the rewards y_t^(i,j);
  // Estimation of θ̂_t with the associated edge-arms
  A_t = A_{t-1} + Σ_{(i,j)∈E} z_t^(i,j) z_t^(i,j)⊤;  b_t = b_{t-1} + Σ_{(i,j)∈E} z_t^(i,j) y_t^(i,j);  θ̂_t = A_t^{-1} b_t;
  t ← t + 1;
end
return θ̂_t

This sampling procedure implies that each edge-arm follows the optimal distribution Γ⋆. In this thesis we present a simple and standard randomized G-allocation strategy, but other, more elaborate methods could be considered, as long as they include the necessary randomness.

We have just shown that the approximation value max_{z∈Z} z^⊤ A_t^{-1} z converges to the optimal value at a rate of O(√v/(m√t)), where β_t = (L d² / (m λ²_min)) √((2v/t) log(2d²/δ)) + o(1/√t) and v ≜ ∥E[(A_1 - E A_1)²]∥. In Section C.4.3 we show that the [...]

Table C.1: Upper bound on the variance and convergence rate of Algorithm 16 for the star, complete, circle and matching graphs, with respect to the number of edges m and the number of rounds t.
Graph      Variance bound          Convergence rate
Star       mP + (M + N)O(m²)       O(1/√t)
Complete   mP + (M + N)O(m√m)      O(1/(m^{1/4}√t))
Circle     mP + (M + N)O(m)        O(1/√(mt))
Matching   mP + mN                 O(1/√(mt))
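A compact sketch of the randomized G-allocation loop of Algorithm 16 is given below; it is our own rendering under the assumption that γ⋆ (from the Frank-Wolfe step), the reward oracle and the stopping test are provided as callbacks. Sampling the node-arms i.i.d. from γ⋆ guarantees that no collision can occur.

import numpy as np

def randomized_g_allocation(edges, X, gamma_star, get_reward, stop, n_nodes, max_rounds=10_000):
    """X: (K, d) array of node-arms; gamma_star: length-K sampling distribution;
    get_reward(i, j, x_i, x_j) and stop(A, t) are assumed callbacks (ours)."""
    K, d = X.shape
    A = np.eye(d * d)                    # regularized design matrix over edge-arms
    b = np.zeros(d * d)
    t = 0
    while t < max_rounds and not stop(A, t):
        # One node-arm per node, drawn i.i.d. from gamma_star: no collision possible.
        choices = np.random.choice(K, size=n_nodes, p=gamma_star)
        for (i, j) in edges:
            x_i, x_j = X[choices[i]], X[choices[j]]
            z = np.outer(x_i, x_j).flatten(order="F")   # edge-arm vec(x_i x_j^T)
            A += np.outer(z, z)
            b += z * get_reward(i, j, x_i, x_j)
        t += 1
    theta_hat = np.linalg.solve(A, b)
    return theta_hat, A, t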
C.5.2 Algorithm and analysis of the regret. We also define the confidence set C_t(δ) = {θ : ∥θ - θ̂_t∥_{A_t} ≤ σ √(d² log((1 + t m L²/λ)/δ)) + √λ S}, where with probability 1 - δ we have θ⋆ ∈ C_t(δ) for all t ∈ {1, ..., T}, with δ ∈ (0, 1]. Here θ̂_t = A_t^{-1} b_t (C.11), where A_t = λ I_{d²} + Σ_{s=1}^t Σ_{(i,j)∈E} z_s^(i,j) z_s^(i,j)⊤ and b_t = Σ_{s=1}^t Σ_{(i,j)∈E} z_s^(i,j) y_s^(i,j), with λ > 0 a regularization parameter.

In Section C.3 we presented two algorithms (Algorithms 14 and 15) that use the true parameter matrix M⋆ to return an arm allocation achieving an α-approximation of the global reward maximization problem. Naturally, since we do not have access to the matrix M⋆, we cannot use it directly at each iteration to maximize the cumulative global reward (and hence minimize the associated regret). Nevertheless, one can use the constructed estimator θ̂_t and the principle of optimism in the face of uncertainty to overcome the fact that M⋆ is unknown. Indeed, recall that in Algorithm 14 the couple (x⋆, x'⋆) is chosen as follows (C.12), (C.13):
(x⋆, x'⋆) = arg max_{(x,x')∈X²} x^⊤ (M⋆ + M⋆^⊤) x' = arg max_{(x,x')∈X²} ⟨z_{xx'} + z_{x'x}, θ⋆⟩,   (C.14)
and it is used to create as many edge-arms of the form z_{xx'} and z_{x'x} as possible in the graph. Here, at each round t, the central entity optimistically chooses the couple (x_t, x'_t) as follows:
(x_t, x'_t) = arg max_{(x,x')∈X²} max_{θ∈C_{t-1}(δ)} ⟨z_{xx'} + z_{x'x}, θ⟩,
and then allocates the node-arms so as to maximize the number of locally optimal edge-arms z_{x_t x'_t} and z_{x'_t x_t}. The method is presented in Algorithm 17.

Algorithm 17 (Adaptation of the OFUL algorithm for graphical bilinear bandits). Input: graph G = (V, E), arm set X. (V_1, V_2) = Approx-MAX-CUT(G). For t = 1 to T: // Find the best optimistic couple: (x_t, x'_t, θ̃_{t-1}) = arg max_{(x,x',θ)∈X²×C_{t-1}} ⟨z_{xx'} + z_{x'x}, θ⟩; // Allocate x_t and x'_t in the graph: x_t^(i) = x_t for all i in V_1; x_t^(i) = x'_t for all i in V_2; obtain for all (i, j) in E the rewards y_t^(i,j); compute θ̂_t as in (C.11). End. Return θ̂_t.

Theorem C.6. Given the graphical bilinear bandit problem defined in Section C.2, let 0 ≤ ξ ≤ 1 be a problem-dependent parameter defined by ξ = min_{x∈X} Σ_{(i,j)∈E} ⟨z_{xx}, θ⋆⟩ / Σ_{(i,j)∈E} ⟨z⋆^(i,j), θ⋆⟩ ≥ 0, and set α = (1+ξ)/2. Then the α-regret of Algorithm 17 satisfies
R_α(T) ≤ Õ( (σd² + S√λ) √( T m max(2, (LS)²) ) + LSm √( d² log( (2 + T m L²/λ) / δ ) ) ),
where Õ hides logarithmic factors.

As in the previous section, we do not have access to θ⋆, so we use the principle of optimism to find at each round the couple (x̃_t, x̃'_t) such that
(x̃_t, x̃'_t) = arg max_{(x,x')∈X²} max_{θ∈C_{t-1}(δ)} ⟨m_1→2 · z_{xx'} + m_2→1 · z_{x'x} + m_1 · z_{xx} + m_2 · z_{x'x'}, θ⟩.   (C.15)

C.6 Perspectives and future work. In Chapter 5 we capitalized on the α-approximation algorithm given in Chapter C.3 and applied the principle of optimism in the face of uncertainty to design regret-based algorithms that achieve an α-regret sublinear in T with α ≥ 1/2. Moreover, we experimentally presented the performance of the proposed algorithms and used for comparison an Explore-Then-Commit algorithm relying on the pure exploration algorithm presented in Chapter C.4. This thesis aimed to introduce the new graphical bilinear bandit framework and to provide first solutions to the usual problems posed in the bandit literature. Many other approaches and modifications can be considered; we present two of them in what follows. Open questions: in a fully decentralized setting (without communication), what kind of algorithms can we design to take advantage of the linear bandit framework? If we allow communication, how can we adapt the proposed algorithms and what is the trade-off between the amount of communication and the performance?

[Footnotes] If X = (e_1, ..., e_d), when the learner chooses x_t = e_i, the reward y_t = ⟨e_i, θ⋆⟩ + η_t = [θ⋆]_i + η_t, which does not share any information with the rewards of the other arms. We drop the β parameter, because in our case it is equal to 1.
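For an ellipsoidal confidence set, the inner maximization over θ in the optimistic rule above has the usual closed form ⟨z, θ̂⟩ + β‖z‖_{A⁻¹}. The snippet below (ours, finite arm set, bonus width beta assumed given) shows one standard way to implement the selection of the couple (x_t, x'_t); it is a sketch, not the thesis code.

import numpy as np
from itertools import product

def optimistic_pair(X, theta_hat, A_inv, beta):
    """Return argmax over (x, x') of <z_xx' + z_x'x, theta_hat> + beta * ||z_xx' + z_x'x||_{A^{-1}}.
    X: iterable of node-arm vectors; A_inv: inverse design matrix over edge-arms."""
    vec = lambda M: M.flatten(order="F")
    best, best_val = None, -np.inf
    for x, x_prime in product(X, repeat=2):
        z = vec(np.outer(x, x_prime)) + vec(np.outer(x_prime, x))
        # Optimistic value: empirical estimate plus exploration bonus.
        ucb = z @ theta_hat + beta * np.sqrt(z @ A_inv @ z)
        if ucb > best_val:
            best, best_val = (x, x_prime), ucb
    return best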
[...] (x⋆^(1), ..., x⋆^(n)), we have x⋆^(i)⊤ (M⋆ + M⋆^⊤) x⋆^(j) ≤ x⋆^⊤ (M⋆ + M⋆^⊤) x'⋆.   (3.2)
Those quantities are not equal since the matrix M⋆ is not necessarily symmetric.

[Theorem fragment] ξ = min_{x∈X} x^⊤ M⋆ x / ( (1/m) Σ_{(i,j)∈E} x⋆^(i)⊤ M⋆ x⋆^(j) ), and set α = (1+ξ)/2. Then the expected global reward y = Σ_{(i,j)∈E} x^(i)⊤ M⋆ x^(j) associated with the allocation (x^(1), ..., x^(n)) ∈ X^n returned by Algorithm 5 verifies y ≥ α y⋆, where y⋆ = Σ_{(i,j)∈E} x⋆^(i)⊤ M⋆ x⋆^(j). Finally, the complexity of the algorithm is in O(K² + n²).

If there is an arm x_0 with x_0^⊤ M⋆ x_0 = 0, then ξ = 0 as well and we are in the worst-case scenario where we can only have a guarantee on an α-approximation with α = 1/2. In practice, ξ = 0 is reached when, given the couple (x⋆, x'⋆) = arg max_{(x,x')∈X²} x^⊤ (M⋆ + M⋆^⊤) x', this x_0 is either x⋆ or x'⋆. In other words, if the unwanted couples of arms (x⋆, x⋆) and (x'⋆, x'⋆) give low rewards, then the guarantee on the reward will be badly impacted. Hence we can wonder how to prevent this phenomenon.

We answer this question in the next section by taking into account both the proportion of undesirable couples of arms and their potential rewards when selecting the pair (x⋆, x'⋆), in order to improve the performance of the algorithm both in practice and in theory.

3.1.3 Improved algorithm using the graph structure

In this section we want to capitalize on Algorithm 5 and its (1+ξ)/2-optimal solution to refine the allocation of the arms x⋆ and x'⋆ so that the obtained suboptimal rewards x⋆^⊤ M⋆ x⋆ and x'⋆^⊤ M⋆ x'⋆ penalize the global reward as little as possible. Indeed, in Algorithm 5 the choice of the couple (x⋆, x'⋆) is only guided by the potential gain that could be obtained on the cut edges (i.e., those going from a node in V1 to a node in V2 or vice versa). It does not take into account the m1 rewards of the form x⋆^⊤ M⋆ x⋆ and [...]

m_1→2 · x̃⋆^⊤ M⋆ x̃'⋆ + m_2→1 · x̃'⋆^⊤ M⋆ x̃⋆ + m_1 · x̃⋆^⊤ M⋆ x̃⋆ + m_2 · x̃'⋆^⊤ M⋆ x̃'⋆
  = m_1→2 · x⋆^⊤ M⋆ x'⋆ + m_2→1 · x'⋆^⊤ M⋆ x⋆ + m_1 · x⋆^⊤ M⋆ x⋆ + m_2 · x'⋆^⊤ M⋆ x'⋆ + ∆
  = m_1→2 · x⋆^⊤ M⋆ x'⋆ + m_2→1 · x'⋆^⊤ M⋆ x⋆ + m_1 · x⋆^⊤ M⋆ x⋆ + m_2 · x'⋆^⊤ M⋆ x'⋆ + ϵ Σ_{(i,j)∈E} x⋆^(i)⊤ M⋆ x⋆^(j)
  ≥ (1+ξ)/2 · y⋆ + ϵ y⋆ = ((1+ξ)/2 + ϵ) y⋆.

What is ϵ and what value can we expect? This parameter measures the gain, with respect to the optimal reward, that one can obtain by taking into account the undesirable couples of arms of the form (x, x). The value of ϵ is high when the rewards associated with the couples (x̃⋆, x̃'⋆) and (x̃'⋆, x̃⋆) are close to those of (x⋆, x'⋆) and (x'⋆, x⋆) respectively, and when the rewards associated with the suboptimal couples (x̃⋆, x̃⋆) and (x̃'⋆, x̃'⋆) are much higher than those of (x⋆, x⋆) and (x'⋆, x'⋆) respectively. On the contrary, ϵ is low (and close to 0) if the suboptimal couples of arms (x⋆, x⋆) and (x'⋆, x'⋆) already give high rewards (or the highest among the other suboptimal couples of the form (x, x)); in that case the central entity does not gain much by choosing a couple of arms other than (x⋆, x'⋆). Notice that α = (1+ξ)/2 + ϵ ≤ 1 by construction. In fact, when ξ = 1 the couples of arms of the form (x, x) give the highest reward, hence (x⋆, x'⋆) = (x̃⋆, x̃'⋆), which gives ϵ = 0 and α = 1.

Corollary 3.1. Let us consider the same setting as in Theorem 3.3. The approximation ratio can be defined with the parameters m_1→2, m_2→1, m_1 and m_2, which depend on the graph and on the approximation algorithm used for the max-cut problem, such that [...]

The design of Algorithm 6 and its associated guarantee are posterior to the best-arm identification algorithm that we propose in this chapter. This extension is ongoing work and might be added to the final version of this thesis.

[...] Σ_{(i,j)∈E} x⋆^(i)⊤ M⋆ x⋆^(j) - T × f(T), [...] - c × Σ_{(i,j)∈E} x⋆^(i)⊤ M⋆ x⋆^(j) = (1 - c) Σ_{(i,j)∈E} x⋆^(i)⊤ [...]
(1/m) ⟨m_1→2 · z_{x̃⋆x̃'⋆} + m_2→1 · z_{x̃'⋆x̃⋆} + m_1 · z_{x̃⋆x̃⋆} + m_2 · z_{x̃'⋆x̃'⋆}, θ⋆⟩

[Footnotes] In the remainder of this paper, we consider a finite set of experiments; some results could easily be transposed to a continuous setting. We assume that the experiments span R^d so that the matrix X^⊤X is non-singular; if this is not the case, we may project the data onto a lower-dimensional space. It suffices to consider exactly K(K-1)/2 directions, as the result is the same for x - x' and x' - x. Ω_norm is not limited to the ℓ2 norm.
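The improved selection rule discussed above weights the four reward terms by the graph-dependent counts m_1→2, m_2→1, m_1 and m_2. The helper below (our own illustration with a known M⋆, not Algorithm 6 itself) makes that objective explicit.

import numpy as np
from itertools import product

def improved_pair(X, M_star, m12, m21, m1, m2):
    """Pick (x, x') maximizing the graph-weighted objective
    m12 * x^T M x' + m21 * x'^T M x + m1 * x^T M x + m2 * x'^T M x'."""
    best, best_val = None, -np.inf
    for x, xp in product(X, repeat=2):
        val = (m12 * (x @ M_star @ xp) + m21 * (xp @ M_star @ x)
               + m1 * (x @ M_star @ x) + m2 * (xp @ M_star @ xp))
        if val > best_val:
            best, best_val = (x, xp), val
    return best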
The results we present hold as long as the norm used to compare X and ϕY (X) comes from a scalar product on X . We prove this result in the supplementary material. The grey area should actually be bigger since the best response to the attack would also change the decision on the upper part between the OBC and the doted line. We focus on what happens on the star points for simplicity. We use ℓ∞-PGD with 20 iterations and ϵ∞ = 0.031 to train the first classifier and to build D. More algorithmic and implementation details can be found in the supplementary materials. In order for the attack to succeed, it it more efficient to compute the expected transformation of the logits instead of taking the expectation over the loss. More details on this in the supplementary materials. Ces quantités ne sont pas égales puisque la matrice M⋆ n'est pas nécessairement symétrique. m (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ , et on fixe α = 1+ξ . Alors, la récompense globale y = (i,j)∈E x (i)⊤ M ⋆ x (j) associée à l'allocation x[START_REF] Abbasi | Robustness to adversarial examples through an ensemble of specialists[END_REF] , . . . , x (n) ∈ X n retournée par l'Algorithme 14 vérifie :y ≥ αy ⋆ . où y ⋆ = (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ .Enfin, la complexité de l'algorithme est de O(K 2 + n 2 ). Pour comprendre et analyser ce nouvel algorithme, définissons ∆ ≥ 0 la différence de récompense globale de l'attribution de (x ⋆ , x′ ⋆ ) au lieu de (x ⋆ , x ′ ⋆ ) comme suit :∆ = m 1→2 x⊤ ⋆ M ⋆ x′ ⋆ -x ⊤ ⋆ M ⋆ x ′ ⋆ + m 2→1 x′⊤ ⋆ M ⋆ x⋆ -x ′⊤ ⋆ M ⋆ x ⋆ (i,j)∈E x (i)⊤ ⋆ M ⋆ x (j) ⋆ , et on fixe α = 1+ξ 2 + ϵ.Alors, la récompense globale y = (i,j)∈E x (i)⊤ M ⋆ x (j) associée à l'allocation x[START_REF] Abbasi | Robustness to adversarial examples through an ensemble of specialists[END_REF] , . . . , x (n) ∈ X n retournée par l'Algorithme 15 vérifie :y ≥ αy ⋆ . où y ⋆ = (i,j)∈E x (i)⊤ ⋆ Remerciements A Refined bounds for randomized experimental design This appendix contains the paper "Refined bounds for randomized experimental design", Neurips 2019 Workshop "ML with Guarantees", G. Rizk, I. Colin, A. Thomas and Moez Draief. Experimental design is an approach for selecting samples among a given set so as to obtain the best estimator for a given criterion. In the context of linear regression, several optimal designs have been derived, each associated with a different criterion: mean square error, robustness, etc. Computing such designs is generally an NP-hard problem and one can instead rely on a convex relaxation that considers probability distributions over the samples. Although greedy strategies and rounding procedures have received a lot of attention, straightforward sampling from the optimal distribution has hardly been investigated. In this paper, we propose theoretical guarantees for randomized strategies on E and G-optimal design. To this end, we develop a new concentration inequality for the eigenvalues of random matrices using a refined version of the intrinsic dimension that enables us to quantify the performance of such randomized strategies. Finally, we evidence the validity of our analysis through experiments, with particular attention on the G-optimal design applied to the best arm identification problem for linear bandits. A.1 Introduction Experimental designs consist in the selection of the best samples or experiments for the estimation of a given quantity. A well-known and extensively studied example is the one of the ordinary least squares (OLS) estimator in the linear regression setting. 
The OLS estimator being unbiased, Is there a classifier that ensures optimal robustness against all adversarial attacks? This paper tackles this question by adopting a game-theoretic point of view. We present the adversarial attacks and defenses problem as an infinite zero-sum game where classical results (e.g. Nash or Sion theorems) do not apply. We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the Adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime. Nonetheless, the question remains open in the randomized regime. We tackle this problem by showing that any deterministic classifier can be outperformed by a randomized one. This gives arguments for using randomization, and leads us to a simple method for building randomized classifiers that are robust to state-or-the-art adversarial attacks. Empirical results validate our theoretical analysis, and show that our defense method considerably outperforms Adversarial Training against strong adaptive attacks, by achieving 0.55 accuracy under adaptive PGD-attack on CIFAR10, compared to 0.42 for Adversarial training. Evaluating against strong adversarial attacks. When evaluating a defense against adversarial examples, it is crucial to test the robustness of the method against the best possible attack. Accordingly, the defense method should be evaluated against attacks that were specifically tailored to it (a.k.a. adaptive attacks). In particular, when evaluating randomized algorithms, one should use Expectation over Transformation (EOT) to avoid gradient masking as pointed out by [START_REF] Athalye | Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples[END_REF] and [START_REF] Carlini | On Evaluating Adversarial Robustness[END_REF]. More recently, [START_REF] Tramer | On adaptive attacks to adversarial example defenses[END_REF] emphasized that one should also make sure that EOT is computed properly 6 . Previous works such as [START_REF] Dhillon | Stochastic activation pruning for robust adversarial defense[END_REF] and [START_REF] Pinot | Theoretical evidence for adversarial robustness through randomization[END_REF] estimate the EOT through a Monte Carlo sampling which can introduce a bias in the attack if the sample size is too small. Since we assume perfect information for the Adversary, it knows the exact distribution of the mixture. Hence it can directly compute the expectation without using a sampling method, which avoid any bias. Table B.1 evaluates our method against strong adaptive attacks namely Adaptive-ℓ ∞ -PGD and Adaptive-ℓ 2 -C&W. Hard constraint parameter. The typical value of ϵ in the hard constraint depends on the norm we consider in the problem setting. In this paper, we use an ℓ 2 norm, however, the constraint parameter for ℓ ∞ -PGD attack was initially set to be an ℓ ∞ constraint. In order to compare attacks of similar strength, we choose different threshold (ϵ 2 or ϵ ∞ ) values which result in balls of equivalent volumes. For CIFAR10 an CIFAR100 datasets [START_REF] Krizhevsky | Learning multiple layers of features from tiny images[END_REF], which are 3 × 32 × 32 dimensional spaces, this gives ϵ ∞ = 0.03 and ϵ 2 = 0.8 (we also give results for ϵ 2 equal to 0.6 and 0.4 as this values are sometimes used in the literature). 
Since Adaptive-ℓ 2 -C&W attack creates an unbounded perturbation on the examples, we implemented the constraint from Equation B.6 by checking at test time whether the ℓ 2 -norm of the perturbation exceeds a certain threshold ϵ 2 ∈ {0.4, 0.6, 0.8}. If it does, the adversarial example is disregarded, and we keep the natural example instead. Experimental results. In Table B.1 we compare the accuracy, on CIFAR10 and CIFAR100, of our method and classical Adversarial Training under attack with Adaptive-ℓ ∞ -PGD and Adaptive-ℓ 2 -C&W, both run for 100 iterations. We used 5 times more iterations for the evaluation as we used during training, and carefully check for convergence. the rational behind this is that, for a classifier to be fully robust, its loss of accuracy should be controlled when the attacks are stronger than the ones it was trained on. For both attacks, both datasets and all thresholds (i.e. the budget for a perturbation), the accuracy under attack of our mixture is higher than the single classifier with Adversarial Training. Our defense is especially more robust than Adversarial Training when the threshold is high. Extension to more than two classifiers. In this paper we focus our experiments on a mixture of two classifiers to present a proof of concept of Theorem B.4. Nevertheless, a mixture of more than two classifiers can be constructed by adding at each step t a new classifier trained naturally on the dataset D that contains adversarial examples against the mixture at step t-1. Since D has to be constructed from a mixture, one would have to use an adaptive attack as Adaptive-ℓ ∞ -PGD. We refer the reader to the supplementary material for this extended version of the algorithm and for all the implementation details related to our experiments (architecture of models, optimization settings, hyper-parameters, etc.). Hence, the best response h is such that for every x ∈ X , and y ∈ Y, one has h(x) = y if and only if f y (x) ≤ f -y (x). Thus, h is the optimal Bayes classifier for the distribution (ν, ϕ 1 #µ 1 , ϕ -1 #µ -1 ). Furthermore, for y = 1 (resp. y = -1), and for any B ⊂ P h (resp. B ⊂ N h ) one has: with Y ∼ ν and for all y ∈ Y, X|(Y = y) ∼ ϕ y #µ y . Theorem B.3 (Non-existence of a pure Nash equilibrium). In our zero-sum game with λ ∈ (0, 1) and penalty Ω ∈ {Ω mass , Ω norm }, there is no Pure Nash Equilibrium. Proof. Let h be a classifier, ϕ ∈ BR Ω (h) an optimal attack against h. We will show that h / ∈ BR(ϕ), i.e. that h does not satisfy the condition from Lemma B.3. This suffices for Theorem B.3 to hold since it implies that there is no According to Lemmas B.1 and B.2, whatever penalty we use, there exists δ > 0 such that ϕ 1 #µ 1 (P h (δ)) = 0 or ϕ -1 #µ -1 (N h (δ)) = 0. Both cases are symmetrical, so let us assume that P h (δ) is of null measure for the transported distribution conditioned by y = 1. Furthermore we have ϕ -1 #µ -1 (P h (δ)) = µ -1 (P h (δ)) > 0 since ϕ -1 is the identity function on P h (δ), and since µ -1 is of full support on X . Hence we get the following: Since the right side of the inequality is null, we also get: This inequality is incompatible with the characterization of best response for the Defender of Lemma B.3. Hence h / ∈ BR(ϕ). Theorem B.4. (Randomization matters) Let us consider Then for any α ∈ (max(λ, 1 -λ), 1) and for any andm q h is the mixture of h by q. Natural Training. To train an undefended classifier we use the following hyperparameters. 
• Number of Epochs: 200 • Batch size: 128 • Loss function: Cross Entropy Loss • Optimizer : SGD algorithm with momentum 0.9, weight decay of 2 × 10 -4 and a learning rate that decreases during the training as follows: Adversarial Training. To adversarially train a classifier we use the same hyperparameters as above, and generate adversarial examples using the ℓ ∞ -PGD attack with 20 iterations. When considering that the input space is [0, 255] 3×32×32 , on CIFAR10 and CIFAR100, a perturbation is considered to be imperceptible for ϵ ∞ = 8. Here, we consider X = [0, 1] 3×32×32 which is the normalization of the pixel space [0.255] 3×32×32 . Hence, we choose ϵ 2 = 0.031 (≈ 8/255) for each attack. Moreover, the step size we use for ℓ ∞ -PGD is 0.008 (≈ 2/255), we use a random initialization for the gradient descent and we repeat the procedure three times to take the best perturbation over all the iterations i.e the one that maximises the loss. For the ℓ ∞ -PGD attack against the mixture m q h , we use the same parameters as above, but compute the gradient over the loss of the expected logits (as explained in the main paper). Evaluation Under Attack. At evaluation time, we use 100 iterations instead of 20 for Adaptiveℓ ∞ -PGD, and the same remaining hyperparameters as before. For the Adaptive-ℓ 2 -C&W attack, we use 100 iterations, a learning rate equal to 0.01, 9 binary search steps, and an initial constant of 0.001. We give results for several different values of the rejection threshold: ϵ 2 ∈ {0.4, 0.6, 0.8}. Computing Adaptive-ℓ 2 -C&W on a mixture To attack a randomized model, it is advised in the literature [START_REF] Tramer | On adaptive attacks to adversarial example defenses[END_REF] to compute the expected logits returned by this model. However this advice holds for randomized models that return logits in the same range for a same example (e.g. classifier with noise injection). Our randomized model is a mixture and returns logits that depend on selected classifier. Hence, for a same example, the logits can be very different. This phenomenon made us notice that for some example in the dataset, computing the expected loss over the classifier (instead of the expected logits) performs better to find a good perturbation (it can be seen as computing the expectation of the logits normalized thanks to the loss). To ensure a fair evaluation of our model, in addition of using EOT with the expected logits, we compute in parallel EOT with the expected loss and take the perturbation that maximizes the expected error of the mixture. See the submitted code for more details. Library used. We used the Pytorch and Advertorch libraries for all implementations. Training method NA of the 1 st clf AUA of the Here to find the parameter α, the grid search is more costly. In fact in the two-classifier version we only need to train the first and second classifier without taking care of α, and then test all the values of α using the same two classifier we trained. For the extended version, the third classifier (and all the other ones added after) depends on the first classifier, the second one and their weights 1 -α and α. Hence the third classifier for a certain value of α can't be use for another one and, to conduct the grid search, one have to retrain all the classifiers from the third one. Naturally the parameters α depends on the number of classifiers n in the mixtures. 
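As a complement to the implementation details above, here is a hedged PyTorch-style sketch of one Adaptive-ℓ∞-PGD step against a finite mixture, differentiating through the exact expected logits (the EOT variant discussed above). The models, their weights, the budget eps and the clean input are assumed inputs; the expected-loss variant mentioned above would simply average the per-classifier losses instead.

import torch
import torch.nn.functional as F

def eot_pgd_step(models, weights, x, y, step_size, eps, x_orig):
    """One PGD step on the expected logits of a finite mixture.
    models: list of nn.Module classifiers; weights: floats summing to 1."""
    x = x.clone().detach().requires_grad_(True)
    # Exact expectation of the logits over the mixture (no Monte Carlo sampling needed).
    logits = sum(w * m(x) for m, w in zip(models, weights))
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x_adv = x + step_size * grad.sign()
        # Project back onto the l_inf ball of radius eps around the clean input.
        x_adv = torch.max(torch.min(x_adv, x_orig + eps), x_orig - eps).clamp(0, 1)
    return x_adv.detach()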
dant, on peut tirer un enseignement de cette stratégie, à savoir qu'étant donné le bras conjoint optimal (x Ainsi, au lieu de chercher l'allocation optimal (ce qui est NP-Dur), on peut alternativement rechercher l'allocation qui, pour toute arête (i, j) ∈ E, construit autant de paires (x (i) , x (j) ) = (x ⋆ , x ′ ⋆ ) que possible. Affecter x ⋆ à un sous-ensemble de noeuds et x ′ ⋆ au complémentaire revient à couper le graphe en deux morceaux et à créer deux ensembles distincts de noeuds Ainsi, la stratégie décrite se résume à trouver une coupe passant par le nombre maximal d'arêtes. Ce problème est connu sous le nom de Max-Cut (voir e.g., [START_REF] Goemans | 879-approximation algorithms for max cut and max 2sat[END_REF][START_REF] Sahni | P-complete approximation problems[END_REF]), qui est également NP-Dur. Cependant, l'attention considérable portée à ce problème nous permet d'utiliser l'un des nombreux algorithmes d'approximation (voir, e.g., Algorithm 13) qui garantissent de produire une coupe passant par au moins une fraction donnée des arêtes du graphe. La plupart des garanties pour l'approximation du problème de Max-Cut sont données par rapport à la solution optimale de Max-Cut, ce qui n'est pas exactement la garantie que nous recherchons : nous avons besoin d'une garantie en proportion du nombre total d'arêtes. Nous devons donc faire attention à l'algorithme que nous choisissons. Algorithm 13: Approx-MAX-CUT [START_REF] Sahni | P-complete approximation problems[END_REF] Entrée: A partir de l'Algorithme 13, on peut avoir une garantie sur la proportion d'arêtes coupées par rapport au nombre total d'arêtes m. Nous énonçons cette garantie dans la proposition suivante. Proposition C.1. Étant donné un graphe Étant donné cette garantie par rapport au nombre total d'arêtes, il ne reste plus qu'à présenter la stratégie complète qui consiste à allouer aux noeuds de V 1 le bras x ⋆ et aux noeuds de V 2 le bras x ′ ⋆ . Nous donnons la stratégie dans l'Algorithme 14. Avec cet algorithme, étant donné l'allocation retournée (x (1) , . . . , x (n) ), pour certaines arêtes (i, j) ∈ E, les bras alloués associés seront les couples optimaux Des matrices de paramètres différentes M (i,j) ⋆ pour chaque arête (i, j) ∈ E. Alors que le traitement d'une matrice commune de paramètres M ⋆ pour toutes les arêtes (i, j) ∈ E était pratique pour agréger les récompenses et construire une estimation commune M pour tous les agents, lorsque les récompenses y (i,j) t sont définies avec des matrices différentes M (i,j) ⋆ , le problème devient plus compliqué. En effet, considérons le cas où la récompense y (i,j) t est définie comme suit pour chaque (i, j) ∈ E : sont des matrices à paramètres inconnus et η (i,j) t des variables aléatoires σ-sousgaussiennes. Ce paramètre est pertinent lorsque les agents n'ont pas les mêmes interactions entre chacun de leurs voisins, et donc pas la même fonction de récompense. Question ouverte : Dans le contexte de l'exploration pure, comment la condition d'arrêt change-t-elle ? Existe-t-il une stratégie d'échantillonnage pour chaque agent telle que les estimations M(i,j) sont construites en satisfaisant un critère d'optimal design ? Cadre décentralisé . Lorsque les agents sont contrôlés par une entité centrale, il est possible d'agréger les différentes récompenses et de construire une estimation commune M de M ⋆ . De plus, nous avons vu que les différents objectifs qui apparaissent sont relatifs aux bras d'arêtes et non directement aux bras de noeuds sélectionnés par chaque agent. 
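The Approx-MAX-CUT routine invoked above admits a simple greedy implementation that cuts at least half of the edges: each node is placed on the side that cuts more edges towards already-placed neighbours. The sketch below is our own rendering of that classical rule and may differ in details from the thesis' Algorithm 13.

def approx_max_cut(n_nodes, edges):
    """Greedy 1/2-approximation of MAX-CUT: returns a partition (V1, V2) of {0, ..., n-1}
    cutting at least half of the edges. `edges` is a list of (i, j) pairs."""
    adj = {v: set() for v in range(n_nodes)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    V1, V2 = set(), set()
    for v in range(n_nodes):
        # Place v on the side that cuts more edges towards already-placed neighbours.
        cut_if_v1 = len(adj[v] & V2)
        cut_if_v2 = len(adj[v] & V1)
        (V1 if cut_if_v1 >= cut_if_v2 else V2).add(v)
    return V1, V2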
En effet, cela est dû au fait que nous pouvons exprimer le bandit bilinéaire graphique comme des bandits linéaires au niveau des arêtes. Cet aspect particulier rend le cadre décentralisé un peu délicat car la coordination de deux agents sans communication pour tirer respectivement les bras de noeuds qui construiront bras d'arêtes désirés devient encore plus compliqué. Cependant, d'autres problèmes surgissent même si le problème de coordination est résolu. Par exemple, dans le problème d'identification du meilleur bras, nous avons déjà conçu une procédure d'échantillonnage qui peut être exécutée en parallèle pendant un tour, d'où un choix décentralisé pour chaque agent. Cependant, la condition d'arrêt dépend de l'estimation M construite avec les bras d'arêtes pendant la procédure d'apprentissage, mais lorsque les agents ne communiquent pas, cette estimation ne peut pas être construite. En effet, un agent ne connaît que le bras qu'il tire et observe la récompense. Or, la récompense est linéaire par rapport à au bras d'arête associée et l'agent n'a pas accès à ce bras d'arête puisqu'il est construit avec son bras de noeud mais aussi avec celui de ses voisins (auquel il n'a pas accès). ABSTRACT We introduce a new model called Graphical Bilinear Bandits where a learner (or a central entity) allocates arms to nodes of a graph and observes for each edge a noisy bilinear reward representing the interaction between the two end nodes. In this thesis, we study the best arm identification problem and the maximization of cumulative rewards. For the first problem, a learner wants to find the graph allocation maximizing the sum of the bilinear rewards obtained through the graph. For the second problem, during the learning process, the learner has to make a trade-off between exploring the arms to gain accurate knowledge of the environment and exploiting the arms that appear to be the bests to obtain the highest reward. Regardless of the learner's goal, the graphical bilinear bandit model reveals an underlying NP-Hard combinatorial problem that precludes the use of any existing best arm identification (BAI) or regret-based algorithms. For this reason, we first propose an 𝛼-approximation algorithm for the underlying NP-hard problem, and then tackle the two problems mentioned above. By efficiently exploiting the geometry of the bandit problem, we propose a random sampling strategy for the BAI problem with theoretical guarantees. In particular, we characterize the influence of the graph structure (e.g., star, complete or circle) on the convergence rate and propose empirical experiments that confirm this dependence. For the problem of maximizing the cumulative rewards, we present the first regret-based algorithm for graphical bilinear bandits using the principle of optimism in the face of uncertainty. Theoretical analysis of the presented method gives an upper bound of 𝑂 # $√T'on the 𝛼-regret and highlights the impact of the graph structure on the convergence rate. Finally, we demonstrate by various experiments the validity of our approaches. MOTS CLÉS Apprentissage séquentiel, Bandits Bilinéaires Graphiques, Multi-agents RÉSUMÉ Nous introduisons un nouveau modèle appelé Bandits Bilinéaires Graphiques où un apprenant (ou une entité centrale) alloue des bras aux noeuds d'un graphe et observe pour chaque arête une récompense bilinéaire bruitée représentant l'interaction entre les deux noeuds associés. 
Dans cette thèse, nous étudions le problème d'identification du meilleur bras et la maximisation des récompenses cumulées. Pour le premier, un apprenant veut trouver l'allocation du graphe maximisant la somme des récompenses bilinéaires obtenues à travers le graphe. Pour le second problème, au cours du processus d'apprentissage, l'apprenant doit faire un compromis entre l'exploration des bras pour acquérir une connaissance précise de l'environnement et l'exploitation des bras qui semblent être les meilleurs pour obtenir la récompense la plus élevée. Quel que soit l'objectif de l'apprenant, le modèle de bandits bilinéaires graphiques révèle un problème combinatoire sous-jacent qui est NP-Dur et qui empêche l'utilisation de tout algorithme existant pour l'identification du meilleur bras (BAI) ou pour la maximisation des récompenses cumulées. Pour cette raison, nous proposons tout d'abord un algorithme d'𝛼-approximation pour le problème NP-Dur sous-jacent, puis nous nous attaquons aux deux problèmes mentionnés ci-dessus. En exploitant efficacement la géométrie du problème du bandit, nous proposons une stratégie d'échantillonnage aléatoire pour le problème BAI avec des garanties théoriques. En particulier, nous caractérisons l'influence de la structure du graphe (par exemple, étoile, complet ou cercle) sur le taux de convergence et proposons des expériences empiriques qui confirment cette dépendance. Pour le problème de la maximisation des récompenses cumulées, nous présentons le premier algorithme basé sur le regret pour les bandits bilinéaires graphiques utilisant le principe d'optimisme face à l'incertitude. L'analyse théorique de la méthode présentée borne l'𝛼-regret par 𝑂 # $√T' et souligne l'impact de la structure du graphe sur le taux de convergence. Enfin, nous démontrons par diverses expériences la validité de nos approches. KEYWORDS Sequential learning, Graphical Bilinear Bandits, Multi-agents
00409747
en
[ "shs.art" ]
2024/03/04 16:41:18
2003
https://shs.hal.science/halshs-00409747/file/EFEO.pdf
Isabelle Charleux Pierre Pichard François BUDDHIST MONASTERIES IN SOUTHERN MONGOLIA: A PRELIMINARY SURVEY From the end of the sixteenth century onwards, Tibetan Buddhism flourished among Southern Mongols, bringing revolutionary changes to their nomadic society. With the support of Mongol kings and, later, the Manchu emperors, the Gelugpa (dGe-lugs-pa) Buddhist institution expanded all over the country, while local religious life organised itself around Buddhist monasteries. Although Tibetan Buddhism was first introduced among the Mongols during the thirteenth century, it failed then to take hold in Mongols' hearts and very few monasteries were founded. Almost nothing of the present physical heritage of Mongol Buddhism dates from this period. Therefore, I will focus on the historical background and the architectural aspects of the Southern Mongolian monasteries from the sixteenth to the twentieth century. Inner or Southern Mongolia 1 is today an "autonomous region" of the People's Republic of China, and has been largely settled by Han Chinese since the mid nineteenth century. Due to border modifications that occurred repeatedly during the twentieth century, former Mongol territories now belong to other provinces of China (especially Liaoning), and thus are included in this survey. This study is mainly based on fieldwork undertaken between 1993 and 1999, which allowed me to visit more than thirty monasteries. The data gathered on the field along with the historical documentation were exposed in detail in my dissertation thesis (Charleux 1998). In Mongolia, as in Tibet, the frontiers between a monastery and a temple are blurred. Should we call by the same name small "temples" 2 kept by one or two monks, "monasteries" inhabited by ten monks yet attracting hundreds of them during festivals, and large permanent communities of over 500 monks? Mongolian monasteries and temples were called by different names, each with a precise original meaning: -süme: sedentary monastery or temple sheltering a statue, without monks' dwelling; 3 -juu: "image", then "temple", from the Tibetan jo-bo, the famous statue of Shākyamuni in Lhasa; 4 -küriye: ring, enclosure, encampment, a nomadic monastery with a yurt or a wooden temple surrounded by monks' dwellings; 1 The former Outer Mongolia was part of the Qing empire from 1691 to 1911. Thereafter under the sway of the former Soviet Union, Mongolia is now a sovereign nation. Qalqa Mongols form the majority of its population. 2 We also use the term "temple" to designate a place of worship inside a monastery, often dedicated to particular deities or bodhisattvas. 3 Süme is also the official term for a Mongol monastery in Qing documents. 4 Transcribed zhao召, 昭 or 招 in Chinese. Introduction of Buddhism in the country Mahāyāna Buddhism was initially introduced in the territory of Southern Mongolia during the Northern Wei (Toba) dynasty (386-534). The nomadic people who occupied this area later on, like the Kitan and the Jürchen, were also converted to Mahāyāna Buddhism; they used Buddhism as a state religion in order to unify their heterogeneous empire, to protect and legitimate their dynasty. 8 Large monasteries of the Kitan, who founded the Liao dynasty (916-1125), could be found in every city of their empire, such as Fengzhou 豐州 (near Kökeqota), Qingzhou 慶州, and their five capital cities. 9 The main remaining sites are located in Chifeng district and in Liaoning province. Kitan Buddhist architecture was mainly of Chinese style and techniques. 
Except a few cave temples, seven to thirteen storeys Chinese-style pagodas are the only remains of their numerous foundations that dotted the whole Mongolia. 10 The Kitan cave monasteries of Gilubar juu [30] 11 and Öbür juu, located near their upper capital (Shangjing 上京, Baγarin banner) became under the Qing dynasty (1644-1911) important monasteries and pilgrimage sites. The Jürchen, who founded the Jin dynasty (1115-1234), reoccupied and restored many Liao Buddhist sites. Several Jin pagodas have been preserved. In the Western part of the territory, the Tangγud founded the Xia empire (1032-1227) whose nobility was first converted to Mahāyāna, and then, in the twelfth century, to Tibetan Buddhism. They built monasteries, invited Tibetan lamas at their court, exchanged sutras and paintings with the Mahāyānist Song, Liao and Jin people. Their Buddhist vestiges in Inner Mongolia include ruins of monasteries, Tibetan style stūpa in Qaraqota city in present Alasha Ejin region, and Chinese style square pagodas. 12 Buddhist monasteries under the Yuan dynasty: twelfth to fourteenth centuries Tibetan Buddhism (often referred to as "Lamaism") was first introduced among the Mongols during the thirteenth century. 13 The descendants of Chinggis Khan who founded a universal empire met with different religions: Nestorianism, Buddhism, Daoism, Islam. Those who conquered China (and called themselves the Yuan dynasty, 1277-1368) adopted Buddhism as a state religion, seeing a political advantage in an alliance with the flourishing culture of Tibet. The emperors, especially Qubilai (r. 1260-1294), ordered the conversion of the Yuan empire to Tibetan Buddhism, patronised constructions, translations and debates. Several Sakyapa (Sa-skya-pa) lamas contracted with the Mongol rulers a yon-mchod (donator-lama) relationship, like 'Phags-pa, who served as Qubilai's spiritual adviser. In this personal relationship, the lama recognised the Qan as an incarnation of a bodhisattva and a universal emperor (cakravartin); in exchange the Qan gave him titles, honours, protection and patronage. The motivations for the Qan's interest and patronage were not only religious: he expected his rule to be legitimated as he was identified as Great Qan, ruler of all the Mongol domains. Lamas were appointed to high offices and monasteries were built with state funds. The influence of Karmapa (karma-pa) and Kakyapa lamas at the Yuan court affected essentially the Mongol nobility. Tibetan Buddhism remained mainly an urban religion organised around the imperial monasteries of Northern China, and very few monasteries were founded in Southern Mongolia. Shamanism was still the religion of the majority of the Mongols, while the Chinese were mostly unaffected by this policy. Almost nothing remains of the architecture of these few Yuan foundations, except some Tibetan-style stūpa. They probably mixed different styles, mainly Chinese and Tibetan, but also imported ones. Qubilai recruited a Muslim architect to help him plan his capital city, Dadu 大都/Beijing. Later he employed other foreigners, including the Nepalese sculptor and architect Arniko (Anige), to build monasteries in Dadu. The Yuan emperors wanted to be perceived as the rulers of a universal empire by using multiethnic styles of architecture. However, the excavations of Shangdu 上都 (or Kaiping 開平), their summer capital, revealed that the architectures were mainly Chinese. 
Some Yuan monasteries are known by archaeological vestiges in the territory of Southern Mongolia, such as the Sakyapa monasteries in and around Shangdu (the largest were Huayan si 華嚴寺 and Qianyuan si 乾元寺), 14 and the Longxing si. Isolated pagodas are still standing in Yingchang 應昌 (Kesigten banner) and Kailu 開魯 cities. The Longquan si [24] is the only one that was still in activity under the Qing dynasty, but nothing of the Yuan buildings has remained. Of these nomadic people who once occupied the territory of Southern Mongolia up to the sixteenth century, very few Buddhist remains are left, and nowadays all are deserted. More have been preserved in China proper because they were restored by the following dynasties up to the Qing. Most of their Buddhist monuments were located in settled urban agglomerations, ruins of which still dot the countryside. A few archaeological sites are documented by Russian and Japanese expeditions of the late nineteenth and early twentieth century, and by recent Chinese excavations. II. HISTORY OF BUDDHISM IN SOUTHERN MONGOLIA FROM THE FIFTEENTH CENTURY ONWARDS The Dark Age, 1368-sixteenth century After the fall of the Yuan dynasty in 1368, most of the Mongols returned to nomadic life in their homeland. During this "Dark Age", virtually nothing is known of Buddhism in Mongolia: it seems to have retreated when faced with the resurgence of Shamanism. 15 However, the speed and the strength of the revival of Buddhism during the sixteenth century can only be explained by the fact that it still coexisted with Shamanistic practices during this period. Moreover, Southern Mongols had diplomatic, commercial or bellicose relations with the surrounding Buddhist people of Gansu, Turkestan and Amdo (A-mdo, especially the Kukunor area) who certainly influenced their Buddhist renaissance. There were probably nomadic monasteries adapted to the pastoral life, too small to be recorded in the historical sources. Indeed, the recently discovered Arjai (Baiyanyao百眼窯) caves show a rare case of the continuity of a Buddhist presence from the Northern Wei (386-534) up to the Ming period (1368-1644) (Charleux 1998: 18). During the fifteenth century, the Western Mongols, who founded the Oyirad khanship adopted Buddhism in order to legitimise their power. Their monasteries were probably also moveable, nothing having been left after the fall of their kingdom. The Mongolian renaissance of the sixteenth century Tibetan Buddhism was officially re-introduced at the end of the sixteenth century by Altan Qan (1507-1582), a descendant of Chinggis Khan and leader of numerous military campaigns. During the early period of this cultural "renaissance" (1566-1634), older schools of Tibetan Buddhism (Sakyapa, Nyingmapa [rNying-ma-pa], Karmapa) were in competition with the growing Gelugpa school founded by Tsong-kha-pa (1357-1419). 16 Monks from Amdo, Gansu, Central Tibet but also from China competed to convert the Qan. Altan's Tümed Mongols lived in the rich plain situated in the Northeast corner of the loop of the Yellow River. Buddhist predication there was part of a larger socio-economic change: the construction of towns, palaces and houses preceded the foundation of temples by Altan Qan. From 1546-1550 onwards, the Tümed Mongols used the manpower of fifty thousand to one hundred thousand Chinese immigrants and prisoners (perhaps as numerous as the Tümed themselves) who developed agriculture and contributed their technical and architectural skills. 
Altan Qan first constructed in the mid-sixteenth century a palace and a town commonly referred to as Yeke bayising. 17 In 1572, the year following the Sino-Mongol peace treaty re-establishing border relations, Altan Qan founded a new capital, Kökeqota -Orient, 2003, pp. 351-390 (Hohhot, "the Blue City", Ch. Huhehaote 呼和浩特). 18 The establishment of these urban centres offered new opportunities for craftsmen, thus inspiring a renaissance of the arts and architecture. The acceleration of the settling process among the Tümed encouraged the foundation of fixed religious centres. From 1572 to 1577, the Mongols repeatedly asked the Chinese court for religious texts, images and other worship items, as well as for monks and carpenters. The propagation of Buddhism in Mongolia initially came from Amdo and China, which explains the numerous Chinese features of the Buddhist monastery in Mongolia. 19 Kökeqota was then inhabited by Mongols and Chinese who seemed to cohabit peacefully, 20 contributing to the foundation of the same temples and monasteries. The Blue City quickly became the economic, cultural and religious capital of Mongolia, redistributing the Chinese products from the market-towns of the Great Wall to the other Mongol groups, who, save the Ordos, refused to sign any treaty with the Chinese. The commercial relations between the Mongol groups contributed to the rapid diffusion of Buddhism. The Tümed and their allies, the Ordos and Qaracin groups eventually gave pre-eminence to the Gelugpa school in 1578, when they met its hierarch, bSod-nams rgya-mstho (1543-1588), at the Kukunor lake. Altan Qan gave him the title of Dalai Lama. He was in turn recognised as a reincarnation of Qubilai Qan, reinstating the interdependent relationship between Mongolian kings and Tibetan lamas. 21 The adoption of Tibetan Buddhism had mutual political advantages for the Tümed and for the Gelugpa. Altan Qan, acting as a cakravartin king, assumed a legitimacy based not only on inheritance but also on reincarnation. He hoped to unify again the Mongol polities in a confederation based on this universal and organised religion, which had attracted the settled Tümed nobility by its sophisticated rituals, doctrine, and literature. For his part, the Dalai Lama hoped to find in these new allies a strong military support that could allow the Gelugpa to consolidate their influence and to conquer the whole of Tibet. From this time onwards, the Tibeto-Mongol relations intensified while the Sino-Mongol relations were limited to economic exchanges. The Mongol princes all wanted to visit the holy city of Lhasa and to meet the Dalai Lama, in order to either reinforce the relation or to establish a concurrent relation. The trip of the Dalai Lama to Mongolia in 1586-87 expanded the young faith, provoking a huge wave of conversions and foundations of monasteries. Within the next few decades, ordinary Mongols became Buddhist, either voluntarily or under pressure. The Tibeto-Mongol connection became even stronger when the Fourth Dalai Lama Yon-tan rgya-mtsho (1589-1617), the reincarnation of bSod-nams rgya-mtsho, was recognised in the person of a great-grandson of Altan Qan. The mobile monasteries that followed Altan Qan in his military campaigns progressively settled: a small temple was probably founded in Kökeqota around 1572. 
In 1575 Altan Qan founded the Mayidari-yin juu (temple of Maitreya) [3], 22 maybe on the site of his former palace, and the Yeke juu ("Big Temple") [1] a few years later, the first two of the large princely monasteries that survive today. 18 Kökeqota is now the capital city of Inner Mongolia. On its history: Hyer 1982; Charleux 1998. 19 At first, for several reasons, the Tümed did not directly address their demands to Central Tibet, as one may have expected. No official relation with Tibet was established before 1577. 20 In the end of the sixteenth century the Chinese immigrants suddenly disappeared from the records: some of them returned to China or died during the outbreaks of smallpox epidemics; perhaps others stayed in Southern Mongolia and became mongolised. 21 Altan Qan enacted new Buddhist laws for Mongolia, including the prohibition of human sacrifices and ongγud (Shamanistic figurines) worshipping, and the conferment of status and privileges for monks. 22 Charleux 1998: 44-48, 68-71;Charleux 1999. The early seventeenth century and the elimination of the Red sects The first foundations were made by members of the chinggiskhanid nobility; then followed "spontaneous" communities formed at the end of the sixteenth century around monks and hermits arriving from Tibet or from within Mongolia. Missionaries from Central and Eastern Tibet, but also from Chinese Tibeto-Mongol centres and members of the Mongolian nobility such as Neyici toyin (1557-1653) extended their activity all over Mongolia. They converted princes, established congregations and acted as ambassadors mediating the internal disputes between the various polities. The missionary struggle against Shamanistic practices was especially tough in the Eastern part of Southern Mongolia. 23 Buddhist missionaries also obtained favours from the rising Manchu dynasty, who gave them money, shabinar24 and land to establish monasteries. The history of foundations reflects the irresistible progression of the Gelugpa "orthodoxy". The reasons why the Tümed Mongols chose the Gelugpa sect rather than a "Red" one remain unclear. They probably gave their preference to this young school because the Karmapa were too close to Beijing, while the Sakyapa were politically feeble and lacking missionary ardour. Other princes like Ligdan Qan (1592-1634), descendant of the elder son of Chinggis Qan, hence the legitimate emperor of all Mongols ("Qan of Qan-s"), patronised the "Red" orders. The Gelugpa sect gained the support of the Manchu dynasty and then the religious monopoly over all Mongolia after the defeat of Ligdan Qan, who failed to reunite the fragmented groups. 25 The monasteries of the older sects, with a very few exceptions, were probably converted into Gelugpa ones, although the sources are silent on this subject. The heydays of the Manchu period: 1636-1840 By 1636, all the Mongol princes had negotiated alliances with the Manchus or had become their subjects following military defeat. Seduced by the Manchu support of Tibetan Buddhism, they recognised the emperor as their legitimate ruler. 26 Their cavalry significantly increased the military might of the Manchus who conquered China, taking the mandate of Heaven from the fallen Ming dynasty in 1644. At the same time, in 1642, the Western Mongol Gushri Qan conquered Central Tibet for the Fifth Dalai Lama (1617-1682). 
The centralisation of the political and spiritual powers in Beijing and, to a lesser degree, in Lhasa established a new order that challenged the previous independence of the Mongolian Buddhist centres. In the seventeenth century the Manchu emperors of the new Qing dynasty became the masters of a vast empire including Mongolia, China, Tibet and Eastern Turkestan. It was by that time that a distinction was introduced between the Southern or Inner Mongols, by now Qing subjects living within the borders of the Qing empire, and the Northern Qalqa or Outer Mongols, who remained outside that border. By 1691 the Qalqa Mongols were so pressured by their enemies, the Western Mongol Jungar, that they called on the Manchus for help and submitted to their rule. After the Manchu conquest of China in 1644, the situation became stable in Southern Mongolia. The Qing established new political and institutional structures to control society. The traditional economy was transformed by the creation of banners with a fixed territory, hence restricting the mobility of the nomads. Chinggisqanid princes were appointed rulers of these banners and thus officials of the empire, subordinate to the Lifan yuan 理藩院 (Court of Colonial Affairs) in Beijing. The Qing tried to isolate Mongolia from China and from Tibet, prohibiting the Chinese from settling there. Early on, the Qing encouraged and patronised Buddhist foundations in order to maintain peace among the Mongols, and especially to solve the conflict between the Qalqas and the Jungar. According to the Kangxi emperor (1662-1722), who hoped that the pacific message of Buddhist doctrine could prevent the Mongols from rebelling, "building only one temple is equivalent to feeding a hundred thousand soldiers in Mongolia." The Qing established the lCang-skya qutuγtu, a reincarnation discovered in Amdo, as the spiritual leader of Inner Mongolian Buddhism, to counterbalance the Boγda gegeen of Outer Mongolia. In order to facilitate the control of the Buddhist institution, they made Beijing into one of the major centres of Inner Mongolian Buddhism. Imperial foundations in Beijing, Chengde (Jehol) and in Mongolia proper (especially in Dolonnor/Doluγan naγur) marked the important political events. All these "imperial monasteries" were characterised by a monumental and syncretic architecture devised to impress the Mongols. These few prestigious constructions did not overshadow the numerous private foundations that testify to the faith of the people. Although their political role was limited to a banner or a district, the academic and spiritual renown of monasteries founded by lamas or Mongol princes crossed the borders. All over the country, small communities developed into large "academic" monasteries, 27 with colleges for the study of the doctrine, esotericism, Kālacakra (including astrology, mathematics and divination), and medicine, that attracted the most learned Mongols. Because academic studies were expensive, rules strict and examinations difficult, only about one percent of the monastic population entered colleges. The monks had to travel to famous monasteries of Southern Mongolia (Badγar coyiling süme for instance), then to Kumbum, Labrang or Beijing and finally to Lhasa to pass the higher degrees. In other words, by that time, a three-fold categorisation among Mongolian Buddhist places had become clear. It distinguished between:
"Imperial monasteries"-four monasteries founded by the Qing emperor and twenty "imperialised" old monasteries of Kökeqota. 28 All are located in banners directly administrated by the Lifan yuan (because of the elimination of local Mongol nobility who rebelled against the Qing). The emperor took the place of the Mongol prince in the role of the donator. 2. Monasteries founded by a monk or offered by a layman to a famous monk. The monk became the abbot and his reincarnation, found after his death, guaranteed a long-date reputation for the monastery. 3. Local monasteries, founded by and belonging to a prince ("banner monasteries") or a lay community. Some of them attracted famous lamas and became large academic monasteries. The modern chief-towns formed around such banner monasteries. The monastic institution expanded its influence to become the dominant force in the culture and the economy of Mongolia. Monasteries were not only spiritual centres of learning but also played an important role in finance, trade and patronage of arts. Moreover, the management of their flocks and herds provided a livelihood for many dependent people called shabinar. They promoted changes in the traditional Mongol society and in the physical appearance of the country, counterbalancing the impact of sinicisation. 29 Every family aspired to have at least one child accepted into the monkhood. Quotas of monks imposed by the Qing to the largest monasteries 30 were not respected, therefore the vast majority of monks was "unofficial". Among them, some31 lived permanently in the monastery and depended on it. However, the majority of them were just novices; they worked as herders or farmers most of the time, coming to the monastery only to attend the festivals. Other were wandering monks, beggars, bards or pilgrims. Hence, by the middle of the nineteenth century, between thirty and sixty-five percent of the male population of the banners had taken monastic vows, and the largest congregations gathered several thousands of monks. Monks were often travelling to Tibet and China (especially Beijing and mount Wutai 五臺山) for scholarly, diplomatic or pilgrimage purposes. High ranking lamas occupied a privileged position, all the more so since the lay nobility did not, like in Tibet, counterbalance them. The clerical hierarchy was equated with the civil one in the Manchu administrative system, the highest rank being the qutuγtu (title of reincarnated monks). There were in the nineteenth century 157 reincarnations recognised by the Lifan yuan in Inner Mongolia. Exempted from taxation and corvée labour, they accumulated considerable property, wealth and labour force, exploiting an important population of shabinar. The most learned ones were translators of Buddhist texts, writers, painters, sculptors and architects. The end of the Qing dynasty: 1840-1911 When the ideological and economic conjuncture was turned upside down after 1840 because of the Chinese internal crisis and recession, the Buddhist church lost the support of the court. The ban on Chinese immigration was lifted, and colonisation began at a rapid pace. As a result of one hundred fifty years of immigration, the population of Inner Mongolia is now overwhelmingly Chinese, Mongols representing only sixteenth percent. Moreover, during the many upheavals of this troubled period, many monasteries were abandoned, destroyed or squatted. From 1862 to 1877 the Chinese Muslims' rebellion destroyed most of the monasteries in Ordos and Alasha. 
The imperial monasteries rapidly declined, but independent monasteries continued to prosper until the beginning of the twentieth century. Paradoxically this period witnessed an efflorescence in literature and arts, as shown by the bronze workshops of Dolonnor for instance. In the beginning of the twentieth century, there were about 1,340 monasteries and temples32 in an area with less than two million inhabitants, and an average of twenty monasteries per banner (Table 1). Compared to the population, monasteries were more numerous in pastoral regions of Ordos, Ulaγancab, Caqar and Sili-yin γoul, and less numerous in agricultural regions of the north-east. The density of religious buildings was higher in the Tümed and Ordos regions, probably because their construction began as soon as the sixteenth century. The apparent disparities between different areas (Table 2) can also be explained by the heterogeneity of sources and by the concentration of monks in the largest monasteries in the beginning of the twentieth century, forsaking smaller ones. In the 1930's and 1940's, the only two large academic monasteries in activity were Badγar coyiling süme [4] and Bandida gegeen süme [18]. At the end of the Qing dynasty, about five percent of the monasteries had more than five hundred monks33 , a concentration which was comparable to that of Tibet and Northern Mongolia. These larger (and more documented: Table 3) monasteries tend to conceal the number and dispersion of small local monasteries in Southern Mongolia. Present situation Most of the monasteries have been destroyed at the beginning of the twentieth century and during the iconoclast Cultural Revolution (Charleux 2001). Preserved architectures were emptied of their statues, paintings and sacred books; the monks' dwellings and lateral monastic buildings were destroyed. Since the partial authorisation of religious activities at the end of the seventies, between one and three monasteries were reopened and re-occupied by a few monks in every banner or county under public or private initiative. The Buddhist revival was strong in the nineteen eighties and has become more cautious in the last decade. The more intense manifestations of religious life are seen during the great annual festivals, which attract many Mongolian, but also Han Buddhists (ill. 3). Pilgrimages, cult to local gods and oboγa (cairns, pronounced ovoo) became popular again. The revived Buddhist institution, subordinate to the Chinese National Buddhist Association, remains strictly controlled by the state, and very few people are allowed to become a monk. Some young monks receive a basic religious education in Buddhist schools of Inner Mongolia, Kumbum (sKu-'bum, in Amdo) or Beijing. 34 The monastic communities are very small compared to the situation at the turn of the century. The largest monasteries (housing from five hundred to two thousand monks before) have nowadays thirty to forty monks plus some unofficial monks. 35 The People's Republic did not recognise new reincarnations in Inner Mongolia, but the Dalai Lama recently recognised the reincarnation of the lCang-skya qutuγtu, who lives in India. Even when in activity, monasteries are considered as museums under the administration of the Bureau of "Cultural Heritage." In 1995, seventy four reopened monasteries received national or provincial protection. Twenty seven of them had been partially rebuilt after having been severely damaged or razed to the ground. 
In the eighties, the authorities invested fifteen million yuan in reconstruction and restoration of the main historic and scenic monasteries. But official protection does not mean conservation, except when a quick profit can be expected from tourism. There is no sustained policy and punctual restorations often go with the sinicisation and/or folklorisation of the monastery (building of Chinese pavilions, stone inscriptions, shops and even Disneyland-like attractions). Today, probably a hundred monasteries are in activity, some of them waiting for official recognition. Scattered data on some banners suggest that many non-official small monasteries have been locally reconstructed without reporting to the authorities. Today's active monasteries are located in the historical Kökeqota region, in villages and in remote places inhabited by Mongol nomads and farmers who support them 36 Modern monasteries have to be economically self-sufficient, so they engage in the service industry and exploit their "relics". Yet they cannot revive the old economic relationship with the laymen. The Buddhist revival (survival?) is now threatened by the generation gap within the clergy, the superficial training of young monks, the folklorisation of the sites, and the impoverishment of rural Mongols. On the other hand, spontaneous rebuilding of a monastery by young monks trained in Kumbum shows instances of a genuine, popular revival. Rules of geomancy are so strict that the choice of a site presenting the ideal characteristics can last several years. The location of mountains and cliffs, the view of the site, the direction of flow of the nearby rivers, the location of wells, wooded areas, auspicious or inauspicious signs must all be interpreted on a symbolical level. For instance a mountain looking like a bell or a site compared to an eight-petals lotus are excellent omens. Besides the Daqing 大青山,39 Helan 賀蘭山 and Kingγan (Xing'an) ranges, Southern Mongolia is not a mountainous country, so the founders have to content themselves with hillocks, sand dunes or artificial terraces. The geomancer adapts the ideal rules to the characteristics of the landscape and manages to compensate a defect of the ground by another advantage, or by the edification of a stūpa. These geomantic rules match up with more practical considerations to determine the site of a nomadic camp. A northern elevation protects the camp as the monastery from the northern and north-western winds, while the presence of water-a river, a well or a spring-remains the most important criteria of selection for a site in this continental country. Provided the geomantic constraints were satisfied, a monastery was free to settle almost wherever it wanted. The agreement of the banner prince was easy to obtain, the Vinaya rules were flexible and the only decree issued by the Qing on this subject aimed at protecting the surrounding fields that could be damaged by the construction. Among the sixty eight monasteries housing more than five hundred monks in the early twentieth century (Table 2),40 four were built in pre-existent cities, Kökeqota (ill. 2) and Bayanqota. Eighteen "banner monasteries" founded by the banner prince were built within six kilometres from his residence. In a city or in the steppe, a temple could not be built contiguous to the back side (northern wall) of a palace or residence: it was usually built on the south, east or west side. Most of them are now located within cities or in the suburb of the city that was formed around it. 
Eighteen other monasteries are far from the banner centre and urban centre, yet eleven out of these eighteen are close to old travel routes (for trade or nomadisation), like Bayan shanda-yin süme [9]. Therefore, it is often difficult to know if the initial desire was isolation. Seven are found in "dramatic spots" in the mountains, like Aγui-yin süme [12] in Helan shan. As a general rule, a more contemplative or academic monastery searched for an isolated site, off the tracks, in a curved valley at the end of a narrow gorge, with a nice view on the plain. But as the basic criteria for selecting a site-the proximity of water and wooded areas-were the same as for the settlement of a camp, a monastery was never very far away from human beings. Even when isolated in the mountains, it received donations from laymen and possessed herds grazing in the plain. The examples of Badγar coyiling süme [4] and Aγui-yin süme [12] show that the apparent isolation of a monastery was not a bridle to its economic development. Monasteries founded by members of the nobility were often situated near their residence. Every banner had a "banner monastery" close to the camp or to the settled residence of the prince. The human density in Inner Mongolia was very low (0.3 to 1.3 inhabitant per square kilometre in the early twentieth century), and until the nineteenth century, there were only a few urban centres, such as Kökeqota, Bayanqota, Dolonnor, plus Manchu garrisons where Chinese and Manchu soldiers had their own temples, and a few Chinese settlements. "Ritualistic" monasteries, more dependent on liturgical services and related donations than "academic ones" settled near travel routes and thus often assumed a prominent role in trade. The attraction of Mongol and Chinese tradesman to the monastery fairs as well as the location along trade routes led to the growth of trade centres around the main monasteries. As they required supplies even outside the fairs, food and crafts, villages and towns developed around the religious centres. Monasteries thus both followed and fostered a slow but steady trend of sedentarisation and urbanisation over the whole period sixteenth to twentieth century. Mountain caves and occupancy of ancient sites Mountain caves were essential to the Mongol and Tibetan religious tradition. Moreover troglodytic dwelling was common in Inner Mongolia. Originally the dwellings of ascetics, caves became sanctuaries protected by an architecture and were later integrated into a monastery. About fifty monasteries were named aγui, "cave". In the Nyingmapa monastery of Aγui-yin süme [12], the five caves where Padmasambhava, according to the legend, meditated in 774, are up to the present time an important pilgrimage site. Like Tibetan "womb-caves", they contain sacred sources, circumambulation corridors around statues, and narrow initiatory passages. Monasteries turned the old Shamanist natural sites (mountain graveyards, natural caves, rocks, woods) into reservations where it was prohibited to hunt, pasture, cut trees, build, ride horse and cultivate land. Tibetan Buddhism superimposed a new sacred geography upon the old Shamanistic and Buddhist one and took possession of ancient sites. Two monasteries were founded around well preserved Kitan caves containing Buddhist sculptures: the Öbür juu and the Gilubar juu [30]. The three caves of Gilubar juu still contain images of Buddha and bodhisattvas. An assembly hall was built during the Qing dynasty to protect their entrance. 
A few Buddhist monasteries took possession of ancient historical sites in order to re-use their building materials, like Darqan aγula-yin süme near the ruins of the Yuan city of Yingchang (Kesigten banner, Juu-uda). However, except for the Kitan caves previously mentioned, a new monastery was never built directly on an old site, to avoid to clear the ruins, and out of fear that the ruins may be inhabited by a deus loci. This can explain why the old Kitan pagodas were not restored and included in new monasteries. The relation between the realised structures and locally available resources Most of the Mongolian monasteries, as well as the earlier Kitan and Tangγud ones, used local building materials and resources. These are, like in North China, baked bricks for the walls, timber for the framing and stone for the basement, the staircases, the terraces and the courtyards' pavement. Wood: Conifers (pines, cypress, larches, cedars), mountain poplars, oaks, and also white birches, elms, and willows were commonly used. Monasteries were usually situated near protected wooden areas; in addition, monks planted trees within the fence or near the monastery to supply for their needs. Southern Mongolia was rich in forests before the eighteenth century, but none of the monasteries that have subsisted is entirely built of timber, which was common in Northern Mongolia. Scarcity of wood due to wood-collecting for fuel,41 extensive agriculture of Chinese settlers and exportation of timber to China has been a major problem since the eighteenth century. Decrees issued by the Qing to forbid tree-cutting proved to be useless. Except for the more forested areas of the north-east, Southern Mongolia imported large quantities of timber from Northern Mongolia. Tibetan architecture is well adapted to deserted areas like Ordos and Alasha, where wood is scarce and timber-consuming Chinese roofs are seldom seen. Chinese roofs often appear to be a luxury decorative item placed up on a Tibetan flat roof. For reasons of prestige, wealthy monasteries used to import high quality timber and stone from thousands of kilometres away. Bricks: Baked bricks-a building material well fit to the region and the climate-have been baked in Southern Mongolia since the first millennium. Glazed bricks and tiles were imported from Datong. Bricks and stones were sometimes taken from ruined or partially ruined architectures, such as old cities or the Great wall. Stone: Very few monasteries were entirely built in stone. They are located in mountainous regions, mainly in the Western part of the country (Yin 陰, Helan mountains), such as Baraγun and Jegün keyid [13, 14], Gembi-yin süme [34], or Badγar coyiling süme [4]. Itinerant Chinese and Mongolian carpenters were employed in the construction and directed local workmanship. When and how they were employed and where they came from are still open issues, which need addressing before we may fully understand the foreign influences and trends. These anonymous carpenters had the difficult task to combine Chinese and Tibetan construction techniques, which were fundamentally opposed. Ill. 3. Festival in Siregetü juu gathering Mongolian lamas, Chinese Buddhist monks and lay believers, summer 1995. © Isabelle Charleux IV. THE MONASTERY LAYOUT-SIXTEENTH TO TWENTIETH CENTURY The minimum monastery consists of the assembly hall or coγcin containing the main statues and altar, located at the northern side, and the monks' dwellings. 
To this central nucleus can be added many elements according to the development of the community. As in Tibet and China, monasteries are continuously repaired, restored, enlarged by lay donations and by the monks' initiatives. They reached their maximum size under Qianlong reign (1736-1796). Therefore, it is often difficult to distinguish the original state and layout of the monastery. Like Tibetan monasteries, larger Southern Mongolian monasteries are real towns built on an area of one or two hundred thousand square meters. The hierarchy between the different buildings is immediately perceptible because of their situation, height, size, roofing and decoration. The main assembly halls and temples (the highest and the more decorated buildings) lie in the middle or on the northern side of the compound, and face south, except when the configuration of the ground does not permit it. They are surrounded by colleges' minor assembly halls, the reincarnations' residences and their storehouses, stūpa and minor temples, monks' dwellings, kitchen and miscellaneous buildings. Layouts can be divided into five basic categories: the küriye or circular layout of itinerant monasteries, the Chinese symmetrical arrangement, the "scatter-shot" arrangement, the mixed arrangement and the fortified monasteries. Mobile monastery in yurts During the sixteenth and seventieth centuries, except for the Tümed and a few groups already settled and partly living on agriculture, the great majority of Mongols was nomads and lived in felt tents. In Alasha, Western Mongolia (and Qalqa Mongolia up to the eighteenth century), monasteries in yurts followed the nomadic way of life of the lay community, and nomadised with their herds. 42 Even the great kings (Qan) had no fixed religious centre. All Inner Mongolian banners used to build fixed monasteries since the end of the seventeenth century, but small nomadic monasteries-in-yurts still existed in the Ordos and Ulaγancab leagues, and were widespread in Alasha banners at the beginning of twentieth century. Among settled Mongols, the presence of monasteries in yurts can be explained by a lack of finance. The monastery-in-yurts followed the küriye (litt. "ring") layout of princes' encampments, with minor temple-yurts and monks' yurts arranged in circle around the central yurt for assembly. The mobile Γanjuur süme [41] settled in 1781-84, adopting a "mixed layout" reminiscent of the circular arrangement. Wooden structures that could be dismantled are not attested, unlike in Northern Mongolia. Symmetrical Chinese style layout (ill. 4) The necessity of building a sedentary monastery appeared when big Buddhist statues had to be sheltered. The earliest monasteries showed a preponderance of Chinese style layout and structure, because in the sixteenth century the Tümed employed Chinese carpenters. Later on, all the imperial monasteries and Kökeqota's imperialised monasteries also followed the Chinese style layout. Generally, the majority of the sedentary monasteries adopted a symmetrical layout, especially among the Tümed and in Eastern Mongolia. This layout is adapted to cities and steppe areas, but is also often seen in mountainous regions. Asymmetrical elements in a general symmetrical layout may be explained by the historical development, the topography and the geomantic situation. 43 In this layout, the monastic compound is surrounded by a rectangular wall defining its extent. Ranging from 1.7 to 2.5 meters in height, the enclosure wall has no defensive purposes. 
The main halls are symmetrically arranged on the central axis with gradation of importance from south to north (ill. 4). The buildings and courtyards commonly built on the central axis are: the entrance gate (with one or three doors); the first courtyard leading out to the hall of the Four guardian kings (lokapāla) that can be crossed to enter the second courtyard; the Central assembly hall with lateral temples for a particular deity (ill. 5); and, at the northern side of the compound, the Chinese style two storied abbot's dwelling. Miscellaneous Chinese elements can be added, like an archway (pailuur, ch. pailou 牌樓) in front of the entrance gate, Bell and Drum towers symmetrically arranged in the first courtyard, screen-walls in front of entrances. Other ordinary Chinese elements are stone lions and incense-burners. Stone inscriptions recording the foundation and restorations are found in imperial monasteries (ill. 5) and recently repaired ones. None of these Chinese elements is absolutely necessary. Moreover, many elements of Chinese Buddhist monasteries are absent, such as the ordination platform, the bathroom, or the garden. Monks' dwellings and minor buildings are arranged in two or three lateral private axis, to the east and west. The western axis is considered as superior to the eastern one and the richest lamas used to live in the north-west corner. Courtyards are large, the buildings forming only about 15% of the total surface area. Entrance gate, temples, storehouses and dwellings are Chinese pavilions (dian 殿) covered by a Chinese sloping roof. The adaptation of the Chinese layout are the same as in the Tibeto-Mongol monasteries of China (Yonghe gong 雍和宮, mount Wutai). The only Tibetan or Sino-Tibetan architectures are the Central assembly hall, lying in the centre of the main courtyard, the college halls and the stūpa. The "scatter-shot" layout (ill. 6) The scatter-shot layout is comparable to the layout of the Gelugpa monasteries in Tibet: it usually can be explained by the landscape's features and by the growth of the monastery over the centuries. Followed by a quarter of the main fixed monasteries, it is common in mountains and hills of Western Inner Mongolia (Qaraγuna mountains, Helan mountains), and also in the steppe (Ulaγancab, Ordos, Sili-yin γoul, Aruqorcin and Baγarin banners in Juu-uda). This apparently unplanned ("organic") layout is characterised by the absence of enclosure wall, of entrance gate, of axis, and of symmetry. It is organised in conformity with the terrain, favouring narrow mountain passes, deep gorges and defile opening into a large circle at the foot of a high peak. Buildings are scattered up and down a mountain or hill slope in step-like style. Assembly halls can stand side by side on an east-west line or along a ridge (Badγar coyiling süme [4], Baraγun keyid [13]). The most important halls face approximately south while minor ones can open east or west according to the terrain. As in Tibet, the Central assembly hall, main temples and residences of reincarnated monks are located higher on the slope and the monks' quarters spread out in various directions around the main buildings (ill. 6). The buildings are built mostly in Tibetan or Sino-Tibetan style flat-roofed square or oblong structures with thick outer walls (ill. 7). Large temples have two to four storeys, with each succeeding storey being built back from the edge of the last, so that the result presents a step-like appearance. The walls made of brick or stone are whitewashed with limestone. 
Ornamentation is reserved for the more important temples and halls: red attic friezes bearing decorated brass mirrors, wooden decorations carved into beams, capitals and columns, porches, window frames with a cornice at the top, plates of glazed ceramic on the walls, animal figures and symbolic ornaments on the roof... Decorative Chinese roofs (covered with tiles), symbols of prestige, can be added atop flat roofs.
The mixed layout
Some monasteries combine the symmetrical and the scatter-shot layout. The main temples are enclosed in independent courtyards, but we do not find central and lateral axes with successive courtyards. Moreover, the miscellaneous Chinese elements such as Bell and Drum towers or archways are lacking. This layout can sometimes be explained by the growth of the monastery, the first foundation being enclosed by a wall, which later finds itself located in the middle of the monastery. The mixed layout is also seen in Kumbum and in other monasteries of Eastern Tibet. It is common to find in the same compound Tibetan style temples, Chinese and mixed-style architectures.
Fortified monasteries (ill. 8)
Ill. 8. The layout of the Mayidari-yin juu [3]. Jin Shen, 1984.
The necessity of fortification was a main criterion for only two known foundations of the sixteenth century (Mayidari-yin juu [3] and Huayan si). It became unnecessary in Southern Mongolia under the pax manjurica of the Qing dynasty. 44 The walls, covered with stone and bricks, are typical of Chinese fortifications that could also be seen in Amdo monasteries such as Honghua si (fifteenth century). These religious compounds seem to be a survival of the Central Asian square fortified monasteries of the ninth to the eleventh century like those of Turfān, Qocho and Duldur-āqur. Their inner arrangement follows the "mixed layout". This schematisation must not conceal the great diversity of layouts. Moreover, because of the bad state of preservation of minor buildings and enclosure walls, it is sometimes difficult to know what the original complete layout of a monastery was.
44 It was not the case of Northern and Western Mongolia until the late eighteenth century because of the wars between the Jungar and the Qing dynasty. Hence, other examples of fortified monasteries are known in Qalqa Mongolia (Erdeni juu, Caγan bayising) and in Western Mongolia (Ablai keyid of the Qoshuud).
V. THE ELEMENTS OF THE MONASTERY 45
The various monastic buildings (or yurts) had the same functions as in monasteries of Central and Eastern Tibet, and the inner arrangement did not differ from that of a Tibetan monastery, even in yurt-monasteries. However, the architecture freely adapts to various situations and needs. Building layouts are square or rectangular, sometimes with a porch and a small apse or cella at the back.
45 Our presentation of the organisation of the monastery is mainly based on our fieldwork observations, completed by old photographs (Nei Menggu gu jianzhu 1959), nineteenth and twentieth century travelogues (such as Pozdneev 1977, 1978), records and studies done by the Japanese during the Manchukuo (Nagao Gajin 1987, 1991), and modern studies (Su Bai 1994). The massive destruction of purely monastic buildings such as monks' quarters and offices does not allow us to draw a complete survey of these elements.
For religious functions
Monks' assembly and rituals: the Central assembly hall
The Central assembly hall (γoul coγcin) 46 is the most important edifice of a monastery. It was used for daily services carried out by the monks, as well as for ordinations, teaching, examinations, and confessions. Laymen were not supposed to attend the rituals (except when they asked for private prayers in exchange for offerings), but could observe them from outside, standing under the porch or indoors. When the "cella" is at the back of the assembly hall, with no door opening on the outside, laymen were sometimes allowed to enter during ceremonies to make their devotions to the Buddha images: they then walked along the walls, knelt, prayed, placed their offerings on the altars, and went out without stopping, so as not to disturb the assembly. The building opens onto a courtyard used for open-air rituals, debates and gatherings (ill. 1, ill. 5), where one or two poles are erected for hanging prayer-banners (ill. 1).
A large thang-ka could be suspended between the two poles during the festivals. The hall has a south-facing entrance with a portico and a large hypostyle hall. Its size should be proportional to that of the whole monastic community, which was supposed to gather inside daily. During the Qianlong period, as communities increased in size, many assembly halls were enlarged or rebuilt. The largest have sixty-four columns and a square shape (8 x 8 columns, about 700 square metres), and could house five hundred monks. Bays are of regular measure except in the sixteenth-century temples of Kökeqota (Yeke juu [1], Mayidari-yin juu [3]), characterised by "missing" columns and irregular bays, in the manner of Chinese Ming architecture. Because of its large size, the hall receives light from a skylight. Assembly halls all follow the same basic inner arrangement, which does not differ from that of a Tibetan assembly hall, with many paintings, banners, seats for the monks according to their rank, 47 cupboards containing small images, offerings and sometimes secondary shrines with statues. The musical and liturgical instruments used during the services, the costumes and masks for the masked dances are stored in a room near the entrance.
Cult for monks and laymen: the main shrine (γoul süme)
At the north of the Central assembly hall lies the main shrine or "cella", which houses altars and offerings in front of the images of Buddha and deities, paintings, plus a library of sutras. Canopies and silk banners hang from the ceiling. The main statues, along the north wall and facing south, are the Three Buddhas of the Past, Present and Future, with four bodhisattvas and a dharmapāla on each side (along the eastern and western walls). The back wall has no opening. On the altar are displayed incense-burners, oil-lamps, the eight auspicious symbols and food offerings. Cushions for prostration are displayed in front of the altar. A Chinese style incense-burner or a fumigation stove is often seen outside. As in Tibetan monasteries, there is a clear separation between assembly halls for rituals and chapels for the cult. The architecture of the Central assembly hall and the main shrine, either combined in one building or in two separate buildings, illustrates the "conflict" between rituals and devotion. As this relation is the most original architectural feature of Mongolian monasteries, it deserves special attention. A. The assembly hall and the shrine in the same building (ill. 9): In Tibetan style and in some Sino-Tibetan style architectures (ill. 10, ill.
11), the main shrine is the rear part of the large gTsug-lag-khang, a building comprising the assembly hall and the shrine on the ground level, chapels and rooms on the second and third floors. The skylight of the assembly hall opens on the inner courtyard of the first floor, and the back shrine, with a very high ceiling, receives light from a window opening on the flat roof of the first floor. The gTsug-lag-khang adopts different layouts and sizes: 1. The small square temple with no inner division. Images are placed along the back wall (the shrine) and a few monks can sit and pray in front of the images (the assembly room); there is no other shrine. It can be the main hall of small monasteries or a minor hall of large monasteries. The elevation is Chinese, Tibetan or Sino-Tibetan. 2. The Tibetan style gTsug-lag-khang, with a rectangular shape composed of a square assembly hall and a main shrine at the back, in a separate room. The elevation is Tibetan or Sino-Tibetan (ill. 10, ill. 11, ill. 12). The growing Tibetan influence on Mongol architecture coincides with the rise of the Manchu dynasty (1644), and the establishment of the Dalai Lama's rule in Tibet. The pure Central Tibetan style is rarely seen in Mongolia, though: buildings are made of bricks and have Chinese roofs upon the flat roof (ill. 13). They are smaller, more decorated and more symmetrical than their Tibetan models. 3. The Kökeqota style gTsug-lag-khang. The elevation of the whole building is Sino-Tibetan, covered by a succession of three Chinese roofs (ill. 14, ill. 15): the first one covers a room on the first floor above the porch; the second one, larger, covers the skylight of the assembly hall surrounded by a flat roof; and the third one, higher, is above the high ceiling of the back shrine. The back shrine is a quadrilateral apse on the northern side of the assembly hall, surrounded by a peristyle on the east, north and west side for outer circumambulation, which recalls the circular corridor found in ancient Indo-Tibetan layouts. Two doors at the north-east and north-west of the assembly hall open on the peristyle. After the sixteenth century, the circumambulation seems to have been abandoned, the two doors being closed. The tripartite facade is inspired by contemporary Gelugpa models with a projecting central part surmounted by a veranda and rectangles with metal disk recalling the vegetable frieze (ill. 16). The building technique and materials are entirely Chinese. This original building appears as soon as 1575 and was never exported out of the Tümed banners. Although inspired from the Tibetan gTsug-lag-khang, it has no direct Tibetan antecedent. The general layout of the monastery follows the Chinese arrangement or the fortified one. 4. The square assembly hall with a smaller square shrine at the back. Although the layout is close to type 3, the elevation is completely different. The assembly hall is covered by a high Chinese roof with a skylight on the false second storey and the back shrine is covered by a lower Chinese roof (ill. 17). There can be a second larger separate Chinese style shrine (type 6, 7) in the back courtyard. This type is found in Josutu and Juu-uda leagues (Fanzong si [32], Fuhui si [22]). Compared to the assembly hall, the main shrine of type 3 is larger than in other monasteries, because Kökeqota monasteries are "ritualistic" temples founded by the nobility with an emphasis on devotion. 
In type 4, the back shrine is small but there is a larger separate shrine behind, which is much more practical to ease the flow of the numerous visitors. On the contrary, the Tibetan style academic monastery of type 2 emphasises the assembly hall to the detriment of the shrine. B. The assembly hall and the independent shrine The main shrine can also be a separate Chinese style building placed in a courtyard located north of the assembly hall, which often has a door on the north side. 5. Mobile "monastery-in-yurts". According to old photographs and descriptions, in mobile temples two different yurts were used for the assembly hall and for the shrine. 6. The small shrine with an inner corridor to circumambulate main images. It exists in Fanzong si, in several cave temples and probably in two other temples now destroyed. These corridors are surviving examples of ancient types of buildings in Amdo and Central Tibet. The elevation is entirely Chinese with the entrance on the larger side (dian). 7. The rectangular Chinese style separate shrine sometimes surrounded by a peristyle. The elevation is entirely Chinese (dian). It is found behind assembly halls of type 4, 8, 9 and sometimes behind assembly-hall-in a yurt. 8. The large square assembly hall with no inner division and a Chinese, Tibetan or Sino-Tibetan symmetrical elevation around a central skylight. It often has a door opening in the northern wall to go to the separate shrine (1 or 3). 9. Other types of gTsug-lag-khang. In Mergen juu [5] and Jungγar juu [10], the courtyard between the assembly hall and the shrine is a small impluvium giving some light to the shrine. The shrine of Mergen juu is built to shelter a huge image of Maitreya. The elevation is Tibetan, Chinese or Sino-Tibetan. Ill. 16. Yeke juu [1], front vue (Type 3). © Isabelle Charleux Ill. 17. Fanzong si [32], front vue (Type 4). © Isabelle Charleux 1.3. Other assembly halls (duγang, < Tibetan 'du-khang) In academic monasteries, the monks can study medicine, doctrine and Vinaya, esotericism, and/or Kālacakra in colleges (rasang or dacang, from Tibetan grwa-tsang). Some monasteries are specialised in one of these four subjects but large academic monasteries have an important college of doctrine and sometimes two or three other colleges. These have an independent organisation and treasury. During the services, the monks recite sutras and learn specific rituals in their college assembly hall (of type 2, 5), which has a small shrine along the rear wall with the main images (Tsong-kha-pa for the college of doctrine, Kālacakra for the college of Kālacakra). Laymen cannot attend these specific rituals. In the nineteenth century, nineteen monasteries of more than five hundred monks were academic. 48 Cult for monks and laity: other shrines Besides the main shrine, any number of chapels (lha-khang) may be built according to the wishes of clergy and laity. These can be single buildings around the Central assembly hall or rooms located at the upper storeys of a Tibetan or Sino-Tibetan style assembly hall. The higher the place the deity occupies in the very large Tibeto-Mongol pantheon, the "higher" his shrine will be located-on the second floor of the assembly hall, on the northern part of the symmetrical compound or higher on the hill. For instance, the chapel of Maitreya is "higher" than that of the lokapāla. The most common shrines are consecrated to Maitreya, Avalokiteshvara, Tārā, Tsong-kha-pa and the eighteen arhat. 
The shrine of the wrathful tantric deities-the yi-dam and dharmapāla-called mgon-khang, is often located at the southern or western side of the assembly hall. In Chinese style monasteries of Eastern Mongolia, small shrines in the north of the compound are dedicated to Chinese deities such as Guandi (martial god identified with Gesar) and Niangniang (child-giving goddess). In the nineteen and twentieth centuries, it was common to build high constructions sheltering a monumental statue of Maitreya or Avalokiteshvara (4 to 15 m high). Local mountain deities are sometimes found in small individual chapels or included in the mgon-khang. The inner arrangement is the same as in the main shrine. Monks and laymen make their private devotions; laymen can also pay some monks to perform a special service. Therefore, there is often some place and seats in front of the altar, like a small assembly hall, so that a few monks can sit and pray. Other cultual elements include stūpa (see 2.1.) and prayer-wheels. Even when there is no enclosure wall, stūpa, cairns, prayer flags, prayer-wheels, painted and carved rocks delimited a cirumambulation path which is often difficult to find today because of the massive destruction of these elements. Prayer-wheels, often housed in small pavilions or surrounded shrines and assembly halls, have been removed during the Cultural Revolution. Meditation, ordinations and debates: the monks' activities The nature, organisation and buildings housing the monks' activities were the same as in Tibet. The Tibeto-Mongol monastery has no special building for ordinations: these ceremonies were performed in the Central assembly hall. Debates between monks, an ordinary and ritualised exercise of Tibeto-Mongol Buddhism, took place in a specific courtyard enclosed by walls and planted with trees. When there was no such delimited area, debates were organised on the paved ground or on the platform in front of the Central assembly hall or in front of the college of doctrine. The monks practised meditation and prayers in their own dwelling. There was no specific building for collective meditation like in China, and the assembly hall was not used as a meditation place. For individual long-term meditation, monks locked themselves in cells or caves depending on their monastery. Ideally, these were located high up in the mountain, above the main temples. The laymen often came to the monastery twice a month (the 1st and 15th days of the Moon calendar) and for the festivals (New Year festival with a procession of Maitreya, Buddha's Birthday etc) that also attracted many Chinese traders. During the festivals, the laymen of the whole countryside gathered to give offerings to the monks, to buy Chinese items in exchange for cattle and wool, and to participate in the naadam games. The monks performed various rituals such as processions round the enclosure wall and public masked dances. Some monasteries had a special platform in front of the assembly hall or even in front of the main gateway (in the Köke süme and Sira süme [16, 17]) for the masked dances, but these were usually performed in front of the Central assembly hall. At any time, the lay donators were welcomed in a reception room or yurt, located near the main halls. Noblemen could receive special initiations in their residence, but there were no public sermons and teaching for the laymen, and very few public initiations were given (the most famous one was the Kālacakra initiations given by the Panchen Lama in the 1930's). 
For reliquary, commemorative or funerary functions: Stūpa As in Tibet, the stūpa (Mong. suburγan) can be a commemorative, funerary, reliquary and/or votive monument. It cannot be entered, and its size ranges from a small reliquary sheltered in a special hall to a monumental building. The location and number of outside stūpa is not fixed: there can be one on a lateral axis, two twin stūpa in front of a temple or assembly hall, a "forest of stūpa" in a lateral courtyard, the complete set of the eight canonical stūpa corresponding to the eight important events of Buddha's life (Mahāvaitya), stūpa above the compound wall of a monastery, above the main gate of a compound, or stūpa lined up on the ridge of a mountain. Small stūpa placed on an altar or in a special building often have a funerary or a reliquary function. The Mongol stūpa is modelled on the contemporary Tibetan one and borrows elements from contemporary and older models, like the Xia stūpa of Qaraqota or Yuan and Qing dynasties stūpa of Beijing (ill. 18). It is externally adorned with a small niche containing a Buddha image, tiles around the top of the anda and glazed ceramic. Some atypical ones are the stūpa of Kündülün juu [6], close to Chinese votive columns; the stūpa of Hanbai miao with its quasi-spherical anda; the huge stūpa of Bayan shanda-yin süme [9] with a peristyle; the big stūpa of Üüsin juu [11] with a decreasing octagonal base and 24 niches around its anda. The one exception far from Tibetan models, is the Tabun suburγa (Ch. Wuta, "Five stūpa") of Kökeqota (ill. 19), which takes the Ming dynasty stūpa of Dajue si 大覺寺 (Beijing as a model). 49 The only examples of Chinese style tower stūpa date from the Kitan Liao dynasty. Funerary building: Stūpa containing the ashes or the mummy of the dead reincarnated monks were set in a funerary hall. During the initial period of the Buddhist renaissance, several Mongol members of the nobility were cremated after their death and their ashes were preserved in a stūpa set in the temple they founded (Mayidari-yin juu [3]). The cremation was not a local tradition-corpses were abandoned to animals of prey or buried in the mountain-; because of opposition to habits and popular belief it was soon abandoned for laymen. Obuγa (cairns) One of the most original characteristics of Mongolian monasteries is the presence of oboγa-s. Although basically opposed, the stūpa, "support" (Tib. rten) of Buddha's mind, and the Shamanistic oboγa, support of a local god, share common characteristics. The first meaning of a "stūpa" is a pile of stones and earth above a tomb. Buddhist monks took over for their own purpose the oboγa worship and even wrote prayers for obuγa rituals. Laymen can offer to build an obuγa to fulfil a wish, and it is common to find 13 or 108 obuγa representing the Buddhist cosmology (mount Sumeru surrounded by the four continents and the eight minor continents). Other obuγa-s are found on mountain passes and summits above and around a monastery, and often side by side with a stūpa, on the ridge of a mountain, close to the compound. Obuγa can be surrounded by stūpa or surround a stūpa to protect it and subdue the deus loci. In Blama-yin küriye [20], an obuγa is even located on the central axis of the Chinese style layout. For educational functions: library, classroom and printing workshop Buddhist scriptures were stored on shelves in assembly halls and in shrines. 
Some Chinese-influenced monasteries had a library in a distinct building, indifferently situated north or south of the Central assembly hall. In the monasteries of Kökeqota, the Jiujian lou 九間樓 ("two-storied building of nine bays large"), located at the northern part of the compound, has a library on the second floor. The hollow Tabun suburγa pagoda was used as a library. Monks also possessed books they kept in their dwelling. There was no specific classroom for young monks: they studied prayers, Tibetan language in the Central assembly hall and studied rituals by attending celebrations. They also learnt by heart and recited private lessons in their dwelling or in their master's dwelling. Students learnt rituals in their college assembly hall and debated in a courtyard. The majority of the monks was assigned to simple tasks or worked as apprentice with the relevant master. No education was given to laymen in the monasteries. Many young noblemen were trained as novices in a monastery and quitted before pronouncing monk's vows. Some of the largest monasteries had a printing office (located on a lateral axis in symmetrical layouts). None has subsisted. They did not supply books for all Southern Mongolia: books were mainly produced and bought in Beijing or in Tibet. Beijing printed liturgical books in Tibetan, but also in Mongolian. Offices and treasuries In symmetrical layouts, the religious administration was located on a side axis. Monasteries with an official title had a tamaγa-yin γajar-yinwu chu 印務處 in Chinese-, "place where the seals [(given by the Lifan yuan) are kept]". None has subsisted. The treasuries (sang or jisa) were essential economic components of a monastery. They stored food and all kind of donations (monk's clothes, carpets, silk, silver), products for trade, religious implements, books and other goods. The treasury of the assembly hall was the public treasury of the whole monastery, while individual ones were attached to particular colleges, to specific religious services and to the reincarnation(s) living in the monastery. They were located close to their unit, usually on its northern side (where the god of wealth resides). There were twenty-two treasuries in Dolonnor [16, 17], and fourteen at Bandida gegeen süme [18]. Very few have been preserved. Larger monasteries also had workshops for carpenters and painters, buildings to stock timber, fabric limestone, brick kilns, medical house, stables... For accommodation Residence of the reincarnations: As in Tibetan monasteries, the labrang (from Tibetan bla-brang) was an important residential complex for the reincarnation. Often located in an enclosed courtyard, it included a chapel (on the ground floor), an apartment (on the second floor), a kitchen, a reception room for high-ranking visitors and an office in the main building, a storehouse, quarters for the servants and sometimes a library in other buildings. During the summer, the reincarnated lama could live in a yurt inside the courtyard. In symmetrical layouts, the labrang was a Chinese style compound lying at the northern part of a lateral axis. In Tibetan style monasteries, it was a courtyard similar to those of Kumbum or Labrang in Amdo. 50 Every reincarnated monk had his own labrang: in Badγar coyiling süme [4] for instance, there were three labrang for the three reincarnations (although two of them resided elsewhere most of the time). 
The abbot's residence: The abbot (kambo, from Tibetan mkhan-po), who was the effective ruler of the monastery, lived close to the Central assembly hall. In symmetrical layouts, his residence was the northernmost building of the central axis. In Kökeqota monasteries the abbot lived in the Jiujian lou, which also comprised of a library on the second floor. Monks' quarters: Ordinary monks' dwellings were simple mud-brick huts or and/yurts, with an area around 10-12 m2 (3 bays). There was a bed heated by a kang, an altar for individual daily prayers plus a small kitchen. The roof was flat or slightly inclined and covered with clay and straw. A monk might have a small compound with a hut used for storage and study, coupled with a yurt for living quarters. Fortunate monks had a brick house with a sloping roof covered with tiles, north of the compound. Lamas lived with their disciples and with their servants. In Tibetan style monasteries like Badγar coyiling süme [4], the dwellings were big cubic houses of two or three storeys, in brick or stone covered with limestone (ill. 20). Monks dwellings were located around the main temples; they could form small streets when attached to each other. In the symmetrical layouts they were arranged in the private lateral axis, and often out of the compound too. There were no dormitories like in Chinese monasteries. High-ranking visitors (rich donators, important monks and reincarnations) stayed in a reincarnation's residence or in a yurt. Other visitors were accommodated in monks' dwellings, laymen yurts or houses, or brought their own tent for festivals. The kitchen and a small contiguous storeroom were located either near the assembly hall or in the southern part of the monastery. It was specialised in the preparation of milk tea and food for the services. Monks had a small kitchen in their room to prepare independent meals; they ate meat and took food after noon. There was no refectory or dining room. Monasteries settled close to rivers or wells. There were usually no toilets or sanitary facilities. Monks normally relieved themselves by squatting anywhere they like (in order not to soil their long robes). Perhaps there were Chinese style toilets in urban monasteries, but none has been preserved. Some Chinese style monasteries had a pond or a garden (without the Chinese Buddhist religious background). Ill. 20. Badγar coyiling süme [4], monks' dwellings. © Gilles Béguin Inscriptions, trees Chinese style stone inscriptions in Mongolian, Chinese and Tibetan were made for the first foundations of the sixteenth century but this practice was soon abandoned by Mongols. Later, the Qing emperors offered stone inscriptions sheltered in Chinese pavilions in the main courtyard, to commemorate the foundation and restoration of the imperial and imperialised monasteries. The official title received by the largest monasteries was calligraphed (sometimes by the emperor himself) in four languages (Mongol, Chinese, Tibetan and Manchu) on a horizontal wooden board suspended above the monastery's gate. In Chinese style monasteries, every major building can bear a name written on a wooden board. From ten to several hundred meters outside the monastery, a stone tablet notified it was compulsory to dismount from one's horse. Many trees located in the courtyards were considered as sacred. People making vows used to tie red ribbons to the boughs, which is a Shamanistic practice. Pipals cannot grow in Mongolia (there is one in a temple-greenhouse in Ivolginsk datsan, Buriatia). 
Conclusion
Mongolian Buddhist architecture has a relatively short history compared with that of other Buddhist countries. Our very limited knowledge of Mongol secular or religious building traditions before the sixteenth century makes it difficult to find the local roots of Mongol monastic architecture. In any case, from the sixteenth century onwards, the mixture of various influences borrowed from China and Tibet led to typical Mongol features, such as the adaptation of a Tibetan multilevel structure with local materials and Chinese techniques, the creation of a framework and a sloping roof adapted to the large square structures of the assembly halls, a peculiar taste for external decoration with glazed tiles and brass objects, as well as some iconographical specificities. The role of the "Red" sects of Tibetan Buddhism, which were influential until the mid-seventeenth century, may explain some original features, such as the inner corridor for circumambulation around the shrine, which seem to contradict a certain uniformisation of Gelugpa architecture. The result, in the physical aspect of these buildings, is often a great heterogeneity of styles and techniques, ranging from almost pure Tibetan styles to various Sino-Tibetan styles and imitations of Chinese styles. Some are very original and prove to be unique technical and aesthetic achievements, which later served as models for Northern Mongolian temples and for the Sino-Manchu monasteries of Chengde and Beijing. Several structures organised around the central skylight pavilion are reminiscent of the mandala-based temples of India and of the First propagation of Buddhism in Tibet, like bSam-yas, bringing to light a typical Mongolian taste for symmetry.
Ill. 2. Kökeqota at the end of the 19th century. Guihua cheng ting zhi 歸化城廳志.
Ill. 4. Layout of the Siregetü juu [2]: 1. Pailou; 2. Entrance gate (shanmen); 3. Drum tower; 4. Bell tower; 5. Temple of the arhat; 6. Pavilions for stone inscriptions; 7. Γoul coγcin (central assembly hall); 8. Main shrine (destroyed); 9. Jiujian lou (destroyed); 10. Old Buddha hall (Gu fodian); 11. Temple of the Kanjur; 12. Labrang of the Siregetü qutuγtu; 13. Stables; 14. Labrang of the Darqan corji qutuγtu; 15. Monks' dwelling; 16. Stūpa; 17. Naicung (Pe-har) temple; 18. Temple of the Tanjur.
Ill. 5. The main courtyard of the Siregetü juu (Kökeqota): stone inscriptions under pavilions and Central assembly hall. © Isabelle Charleux
1. Screen-wall zhaobi (destroyed); 2. Taihe gate; 3. Hall of the Four guardian kings (lokapāla) (destroyed); 4. Γoul coγcin (central assembly hall); 5. Temple of the God of wealth (ruined); 6. Temple of Avalokiteshvara; 7. Temple of the eighteen arhat; 8. Liuli dian (hall with green glazed tiles), main shrine (also called Sanfo ge); 9. Stūpa (destroyed); 10. Gongye fu (Wangye fu, residence); 11. Temple of the Dalai Lama; 12. Eastern hall of the Ten thousand Buddhas; 13. Taihou miao (temple of the Empress, with a funerary stūpa inside) or Lingta dian; 14.
Dajiwa dian (destroyed); 15. Pavilions at the four corners; 16. Naicung süme (temple of Pe-har); 17. Qutuγtu's residence; 18. Western temple of the Ten thousand Buddhas; 19. Octagonal temple (Laojun miao); 20. Cross-section of the wall.
BIBLIOGRAPHY
CHARLEUX (Isabelle), 1998, Histoire et architecture des temples et monastères lamaïques de Mongolie méridionale, Ph.D. Dissertation, Sorbonne University (Paris), unpublished.
____, 1999, "La Peinture des donateurs du temple de Maitreya en Mongolie méridionale", Arts Asiatiques 54, p. 85-102.
____, 2000, "Un Exemple d'architecture mongole : le Siregetü juu de Kökeqota", Histoire de l'Art 46, p. 91-110.
____, 2001, forthcoming, "The Reconstruction of Buddhist Monasteries in the Chinese Autonomous Region of Inner Mongolia: Between Sanctuary and Museum", Proceedings of the International Conference "Revival of Buddhism in Mongolia after 1990" (Warsaw, 24-28th November 1999).
DELEGE 德勒格, 1998, Nei Menggu lamajiao shi 內蒙古喇嘛教史 ("History of 'Lamaism' in Inner Mongolia"), Kökeqota: Nei Menggu renmin chubanshe.
HEISSIG (Walther), 1953, "A Mongolian Source to the Lamaist Suppression of Shamanism in the 17th century", Anthropos 48 (Vienna & Fribourg), 1-2: p. 1-29 & 3-4: p. 493-536.
____, 1961, Erdeni-yin erike: Mongolische Chronik der lamaistischen Klosterbauten der Mongolei, trans. & commentary of the Mongolian chronicle written by Isibaldan in 1835, Copenhagen: Ejnar Munksgaard (Monumenta linguarum Asiae majoris, Series nova, II).
____, 1973 [1970], "Les Religions de la Mongolie", in Giuseppe TUCCI & Walther HEISSIG, Les Religions du Tibet et de la Mongolie, trans. from German by R. Sailley, Paris: Payot [Stuttgart].
HYER (Paul), 1982, "An Historical Sketch of Köke-Khota City, Capital of Inner Mongolia", Central Asiatic Journal 26(1-2) (Wiesbaden), p. 56-77.
Table 1. Number of monasteries compared to population and area at the beginning of the 20th century
League | Population (1911 census) | Area (km²) | Number of monasteries | Population density (per km²) | Monasteries per 1,000 km² | Monasteries per 1,000 inhab.
Tümed of Kökeqota | 56,337 | 20,000 | 41 | 2.82 | 2.05 | 0.73
Ordos (Yeke juu) | 66,096 | 100,000 | 274 | 0.66 | 2.74 | 4.15
Ulaγancab | 52,550 | 110,000 | 118 | 0.48 | 1.07 | 2.25
Alasha | 24,942 | 270,000 | 24 | 0.09 | 0.09 | 0.96
Caqar | 42,211 | 75,000 | 92 | 0.56 | 1.23 | 2.18
Sili-yin γoul | 54,297 | 105,000 | 130 | 0.52 | 1.24 | 2.39
Josutu | 237,195 | 30,000 | 282 | 7.91 | 9.40 | 1.19
Juu-uda | 274,633 | 90,000 | 107 | 3.05 | 1.19 | 0.39
Jerim | 561,909 | 120,000 | 197 | 4.68 | 1.64 | 0.35
Barγa | 60,933 | 250,000 | 42 | 0.24 | 0.17 | 0.69
Yeke mingγan | 4,110 | ? | 34 | | | 8.27
Buteha | 59,282 | ? | | | |
Southern Mongolia | 1,494,495 | 1,170,000 | 1,341 | 1.28 | 1.15 | 0.90
Table 2. Maximum number of monks per monastery in Southern Mongolia at the end of the Qing dynasty
League | Total number of monasteries | >1,000 monks: nb. | % of total | 500-1,000 monks: nb. | % of total | <500 monks: nb. | % of total
Tümed of Kökeqota | 41 | 3 | 7.3% | 1 | 2.4% | 37 | 90.2%
Ordos (Yeke juu) | 274 | 2 | 0.7% | 7 | 2.5% | 265 | 96.7%
Ulaγancab | 118 | 3 | 2.5% | 3 | 2.5% | 112 | 94.9%
Alasha | 24 | 4 | 16.6% | 2 | 8.3% | 18 | 75%
Caqar | 92 | 3 | 3.3% | 1 | 1.1% | 88 | 95.6%
Sili-yin γoul | 130 | 2 | 1.5% | 5 | 3.8% | 123 | 94.6%
Josutu | 282 | 3 | 1.1% | 8 | 2.8% | 271 | 96.1%
Juu-uda | 107 | 1 | 0.9% | 13 | 12.1% | 93 | 96.1%
Jerim | 197 | 3 | 1.5% | 3 | 1.5% | 191 | 97%
Barγa | 42 | 0 | | 1 | 2.4% | 41 | 97.6%
Yekemingγan | 34 | | | | | 34 | 100%
Total Southern Mongolia | 1,341 | 24 | 1.8% | 44 | 3.3% | 1,273 | 94.9%
The number of monasteries counting fewer than 500 monks is obtained by subtracting the two previous columns from the first one.
In rural areas where Chinese are more numerous than Mongols, or where Mongols are sinicised (the southern part of Inner Mongolia), there is no rebuilding. Mongol monasteries previously situated in remote areas, far from urban centres, are now integrated into Chinese villages or settlements, in towns or in industrial areas. Today's largest Inner Mongolian towns are mainly inhabited by Chinese people. The old capital city Kökeqota still has two active monasteries, but the "new" big towns (Baotou/Boγutu, Jining, Chifeng, Hailar) and smaller county seats (Xilinhot/Sili-yin qota, Dongsheng, Bayanhot/Bayan qota, Wulanhot/Ulaγan qota, Balinzuoqi and Balinyouqi/Baγarin Left and Right banners, etc.) have no monastery at all, or an old monastery turned into a museum.
Table 3 and map. The largest Buddhist monasteries of Southern Mongolia
This table gives the common names of the monasteries, their localisation (town, tribe or banner, with their early twentieth-century definition), the main dates of their foundation and the present state of preservation. Chinese names ending in si (monastery) are titles given by the Lifan yuan. Twenty-six "large" but poorly documented monasteries are not listed here.
W.P: Well preserved; D: Destroyed; P.P: Partly preserved; R: Rebuilt
Tümed of Kökeqota
1. Yeke juu (Dazhao 大召, Hongci si 弘慈寺, Wuliang si 無量寺), Kökeqota, 1579-1580, W.P
2. Siregetü juu ("Xilitu zhao", Yanshou si 延壽寺), Kökeqota, 1585-1616, W.P
3. Mayidari-yin süme ("Meidai zhao" 美岱召, Lingjue si 靈覺寺, Shouling si 壽靈寺), east of Baotou, 1575-1606, W.P
Ulaγancab league
4. Badγar coyiling süme (Udan juu, "Wudang zhao", Guangjue si 廣覺寺), north-east of Baotou, 1727-1749, W.P
5. Mergen juu (Guangfa si 廣法寺), west of Baotou, 1677 or 1702, P.P
6. Kündülün juu (Faxi si 法喜寺), Baotou district, 1713 or 1729, P.P
7. Beyile-yin süme (Bailing miao, Guangfu si 廣福寺), Darqan vang, 1703, rebuilt in 1925, P.P
8. Sira mören süme (Xilamulun miao, Puhe si 普和寺), Dörben keüked, 1758, P.P
9. Bayan shanda-yin süme ("Shanda" miao), Urad, Rear banner, 1738, D, R
Ordos: Yeke juu league
10. Jungγar juu (Baotang si 寶堂寺), Jungγar, 1623 - 1920-1922, W.P
11. Üüsin juu (Γanjuur nom-un süme), Üüsin, 1713-1764, P.P
Josutu league (part of this league is now included in Liaoning province)
21. Falun si 法輪寺, Qaracin, 1745-1803, P.P
22. Fuhui si 福會寺 (Wangfu miao 王府廟), Qaracin, Kangxi reign, P.P
23. Lingyue si 靈悅寺, Qaracin, 1692-1711, P.P
24. Longquan si 龍泉寺, Qaracin, Yuan - Qing, P.P
25. Xingyuan si 興源寺 (Kulun si 庫倫寺), Siregetü blama küriye, end of Ming or 1649, P.P
26. Fuyuan si 福緣寺, Siregetü blama küriye, 1733-1742, W.P
27. Caγan diyanci-yin keyid ("Folama" miao, Ruiying si 瑞應寺), East Tümed, 1640-1650, P.P
28. Huining si 惠寧寺, East Tümed, 1738-1757, W.P
29. Youshun si 佑順寺, East Tümed, 1698-1707, W.P
Juu uda league
30. Gilubar juu (Shanfu si 善福寺), Baγarin, Left banner, about 1770, P.P
31. Huifu si 薈福寺 (Dongda miao 東大廟), Baγarin, Right banner, 1706, W.P
32. Fanzong si 梵宗寺 (Beida miao 北大廟), Ongniγud, 1743-1755, P.P
33. Qan süme ("Han miao", Cheng'en si 誠恩寺), Aruqorcin, 1674, P.P
34. Gembi-yin süme (Guang'en si 廣恩寺), Aruqorcin, 1816, D, R
35. Balcirud süme (Baoshan si 寶善寺), Aruqorcin, 1665 or 1689, P.P
Jerim league
36. Morui-yin süme ("Moli miao", Jining si 集寧寺), Darqan vang, Shunzhi reign or 1679, 1801, 1826, P.P
37. Shongqoru-yin süme (Shuangfu si 雙福寺), Boγu vang, 1680, P.P
38. Gegeen süme (Fantong si 梵通寺), Jasaγtu vang, 1740, D, R?
39. Vang-yin süme (Wangye miao 王爺廟, Puhui si 普慧寺), Ulaγanqota, 1619 or 1691, D, R?
40. Bayan qosiγun keyid (Xiafu si 遐福寺), Tüsiyetü vang, 1813, D, R?
Barγa banners (Kölün buir)
41. Γanjuur süme (Shouning si 壽寧寺), New Barγa, Left banner, 1781-1784, D
42. Baraγun süme (Xi miao 西廟), Left banner, 1887, D, R
1.6. Ceremonies for the laity: festivals, rituals, and initiations
Linrothe 1995 and Linrothe (Rob), "Xia Renzong and the Patronage of Tangut Buddhist Art: The Stūpa and Ushnîshavijayâ Cult"; Sulla vita della seta 1993.
On the general history of Tibetan Buddhism in Mongolia: Heissig 1973 [1970].
Shangdu was imagined as a mandala, surrounded by eight monasteries at the cardinal plus intermediary points. The place is also known as "a hundred and eight monasteries". According to Delege (1998: 61), there were 167 monasteries in and around Shangdu, yet no archaeological evidence confirms their existence.
Serruys (Henry), "Early Lamaism in Mongolia" and "Additional Notes on the Origin of Lamaism in Mongolia"; Jagchid 1972; Charleux 1998: 15-24.
"The Virtuous", also called "Yellow hats" or "Yellow religion" in China, in Mongolia and in Western countries because of the colour of their hats. Compared to the "Red hats" or older schools, the Gelugpa insist on strict monastic discipline, on celibacy and on the "gradual way" of spiritual formation.
Heissig 1953. Shamanism was never completely eliminated, and Buddhism had to accommodate several Shamanistic practices.
"Disciples", laymen given by a prince to a monastery.
Ligdan Qan attempted to restore the glory of the Mongol empire by terrorising other Mongol groups into submitting to his harsh rule, causing the majority of his subjects to finally join the Manchus.
The Manchu emperor proclaimed himself Qan of Qan-s and the Yuan emperors' true heir, because he appropriated Qubilai's imperial seal and the sacred image of Mahākāla carved under the Yuan dynasty, then transmitted to Ligdan Qan. Moreover, he presented himself as an incarnation of the bodhisattva Mañjushrī, spiritually equal to the Dalai and to the Panchen Lama.
Monasteries are classified by Nagao Gajin [1947] into "academic" and "ritualistic": the first place their major emphasis on learning, the second on worship and rituals.
For all these imperial monasteries, the Lifan yuan enacted an ordinance fixing the status and income of the monastery, appointed its administrators, and gave an official title to the monastery and ordination certificates to a quota of monks. When monastic communities were created ex nihilo, every banner was ordered to send monks and money to support them. Besides the imperial monasteries, other large monasteries received an official title with a wooden board.
For the economic, institutional and socio-political aspects of Mongol Buddhism: Miller 1959.
The official monks received from the Lifan yuan an ordination certificate and a prebend. They were exempted from military service, taxation and corvée labour.
Initiation degrees were the same as in Tibet. A fully ordained monk is a gelung, from Tibetan dge-slong. Gelugpa monks are not supposed to marry.
According to estimations based on Mongolian, Chinese and Japanese sources. Charleux 1998: 218-222. The figures of Table 1 are global estimations.
For descriptions of the largest monasteries: Qiao Ji 1994; Delege 1998.
In 1984, according to a census, there were more than five thousand monks; 3,850 of them were very old. Among the 31 monasteries I visited, 20 were active. One claimed to have 100 monks, two had 40 monks, three had 30 monks, one had 20 monks, and 13 had between one and 15 monks.
Mongols represent only 5% of the population of Kökeqota city. The Chinese there have a few Buddhist (and Taoist) temples, but I do not know of any Chinese monastery.
Many monasteries were built in the foothills of the Daqing mountains (Yin range), facing the Yellow River: Mayidari-yin juu [3], Kündülün juu [6], Mergen juu [5], etc.
Small and medium-size monasteries are not documented.
As we have already stressed, the majority of the monks were herders or farmers and came to the monastery only to attend the festivals.
The main fuels used for building (baking bricks) and living (heating, cooking) were wood, charcoal, and coal after the discovery of mines in the Daqing mountains in the nineteenth century.
Documentation on mobile monasteries is restricted to a few descriptions in Chinese sources, in nineteenth- and twentieth-century travelogues, and some photographs.
In Siregetü juu [2] for instance (ill. 4), the first historical axis was the western one; in the seventeenth century the founder added two axes to the east, and the central one became the main axis. Charleux 2000.
The monks' seats are on the right and left sides of the central aisle; the seats of the reincarnated monk and of the important monks are close to the altar, on the west side and facing right (or east).
The central seat near the altar and facing south is reserved for the highest-ranking monk who is likely to visit the monastery, such as the Dalai Lama, the Panchen Lama, and occasionally the reincarnated lamas.
For the course of studies in Badγar coyiling süme [4] (twenty-one years for the college of doctrine): Nagao Gajin 1987 [1947].
"Stūpa sharira of the diamond throne" type, imitating in Chinese style the temple of the Mahābodhi in Bodhgayā.
Perhaps the reincarnated lama could also live on the first or second floor of the Central assembly hall, as in Tibetan monasteries.
Map of the largest Buddhist monasteries of Southern Mongolia
III. GENERAL LOCATION AND DENSITY OF THE MONASTERIES
The monastery and its environment
1.1. The ideal and concrete siting of a monastery
The choice of an auspicious site is an essential stage in the founding of a monastery (ill. 1). For Southern Mongols, there is no contradiction between Chinese and Tibetan rules of geomancy, the two systems being considered equivalent. The work requires a monk astrologer, who uses Tibetan and Mongol handbooks, the latter being translated from the Chinese. A famous Chinese fengshui 風水 specialist can be invited too. The astrologer is further consulted to tell where to search for building materials and to solve problems of "pollution" by building stūpa to tame evil forces, by changing the orientation of a building, or even by deciding to move the whole monastery to another place. The construction is then punctuated by a series of rituals, such as the propitiation of the deities of the soil and the consecration ceremony.
Indeed, as early as the seventh century, Tibetan geomancy was influenced by Chinese geomancy.
Ill. 1. Mural painting in Sira mören süme depicting the ideal features of a site. © Isabelle Charleux
Ill. 9. Typology of assembly halls and shrines.
Ill. 10. Siregetü juu [2], Kökeqota, cross section of the assembly hall (Type 2). From Liu Dunzhen 1991 [1984].
Ill. 12. The Mergen juu [5] (Type 2). © Isabelle Charleux
Ill. 11. Badγar coyiling süme [4], Duyingqur duγang (Type 2). From Nagao Gajin, 1991 [1947], fig. 8, p. 221. a. Elevation; b. Cross section; c. Ground-floor plan; d. First-storey plan.
Ill. 13. The Sira mören juu [8] (Type 2) (Puhui si, summer residence of the Sixth Siregetü qutuγtu).
Ill. 14. The Beyile-yin süme [7] (Type 3). View from above.
Ill. 15. Mayidari-yin juu [3], central assembly hall from west (Type 3). © Isabelle Charleux
04097512
en
[ "shs.edu", "shs.socio" ]
2024/03/04 16:41:18
2020
https://theses.hal.science/tel-04097512/file/va_Fang_Ke.pdf
Ke Fang
DR François Taddei, HDR Mathias Béjean, Luping Xu, Sophie Pene, Joël Chevrier, Valerie Chanal, Antonella Tufano
Constructing Co-Meaningfulness: Collaborative Learning Across Boundaries - Studying Innovation Communities of Designers and Scientists
Keywords: Co-Meaningfulness, Collaborative Learning, Boundary-crossing, Learning Sciences, Wicked Problem, Grounded Theory
This thesis studies an emerging collaborative learning model among innovation communities of designers and scientists, which we call Collaborative Learning Across Boundaries for Open Wicked Problems, or simply CoLAB. The "Open Wicked Problems" indicate the pressing problems of our time which resist either clear-cut definition or optimal solution, such as fast-growing AI technology and its problematic ethics, or the huge changes to the economy and work models after the pandemic crisis. At the same time these problems are also "open" and accessible to learners, who learn through trying to resolve them directly. New learning initiatives emerge in our universities and schools: for example, design and science students gather in workshops to learn collaboratively across discipline and experience boundaries while attempting to resolve these "open wicked problems". How do we understand the process of this new learning? What are the key difficulties in the learning, and how can they be overcome? Understanding CoLAB from a practice-based perspective is key to better facilitating this emerging model, which nowadays plays an increasingly significant role in education and the learning sciences. For this purpose, the author, as a researcher/practitioner of CoLAB, adopts constructivist grounded theory based on an in-depth case study and a long-term ethnographic observation. From coding, categorizing and conceptualizing, we generate the concept of "Co-Meaningfulness", which casts light on the key process in CoLAB: learners find it is not enough to communicate only at the level of the "Project" they currently work on; they also have to communicate and negotiate which parts of the project are more "meaningful". Their learning is essentially a process of not only constructing their joint project but also constructing their "Co-Meaningfulness". The process of constructing "Co-Meaningfulness" is then studied from both socio-cognitive and socio-cultural perspectives, which generates a framework of conceptual and visual tools for analyzing CoLAB. The concept and framework of Co-Meaningfulness surface and visualize the underlying Co-Meaningfulness process of CoLAB, making the "invisible" "visible". Based on a practice-rooted perspective and a structured analysis, this thesis proposes a novel way to understand and analyze CoLAB which opens opportunities for future research.
RÉSUMÉ
This thesis studies an emerging collaborative learning model that brings designers and scientists together. These innovation communities, which we call CoLAB, equip themselves to address "Open Wicked Problems". These thorny questions express pressing problems of our time that resist either precise definition or rapid solution. Among them are the development of AI, the ethical questions it raises, and the abrupt changes that the pandemic has already brought about in economies and in human work.
This topic should be related to the learning initiatives emerging in universities and schools, for example when design and science students gather in workshops to train in interdisciplinary collaboration and to experience going beyond the limits of their knowledge and know-how. It should be noted that these new formats of experimentation and training are spreading, as a challenge-based pedagogy meant to train young scientists to attempt to resolve these "Open Wicked Problems".
FOREWORD
We enter an era in which the pressing problems are complex and entangled, such as the climate challenge and AI technology and its ethics. These problems, which some refer to as "wicked problems" resisting either clear-cut definitions or optimal solutions (Rittel & Webber, 1973), are now "open wicked problems" that are open to the "laymen", the students, the common citizens, who are actively involved in learning about and solving them. New learning initiatives emerge where students learn through facing and making attempts to solve these problems directly. Unlike the traditional vertical learning model, these new learning initiatives enter our university and school pedagogy in the format of events such as workshops, bootcamps, summer schools, competitions, etc. Besides being oriented towards "open wicked problems", these learning initiatives feature "boundary-crossing collaboration", as in the real world no single person or perspective can solve these problems alone. This thesis specifically focuses on this new learning model, which we call CoLAB (Collaborative Learning Across Boundaries), in the era of "open wicked problems". As a researcher practitioner, I am interested in both understanding and better facilitating CoLAB. Practice-based understanding is essential, as the new learning is by itself a wicked problem, resisting simple theoretically derived understanding. The "openness" of the new learning invites a diverse practitioner community, which urgently calls for understandable research as well as pragmatic tools and guidance to better facilitate their practice. The practice-based understanding and the pragmatic need are the two motivations for our thesis. To reach this end, we adopt a practice-based research paradigm by being deeply involved in participating in and organizing CoLAB, which equips us well with insider practical experience. We use constructivist grounded theory to qualitatively analyze our participant observation, so that the research conveys the complexity of CoLAB rather than reducing it. Our key result develops from our initial coding of "Meaningfulness" in a case study, which later leads to the discovery of "Co-Meaningfulness", the key concept in this thesis. In CoLAB, learners seem to discuss and argue about the "Project" at hand, but underneath they are negotiating how the project is meaningful in divergent ways. To coordinate the different meaningfulness and to build a "Co-Meaningfulness" is the underlying but essential process in CoLAB. With further analysis, we extract two key properties of Co-Meaningfulness: the Project-Meaningfulness Intensity (P-M Intensity) and the Meaningfulness-Meaningfulness Coherence (M-M Coherence), with which we build a quadrant map and anchor the learning process with the Co-Meaningfulness trajectory. We further sample a long term observation of a
INTRODUCTION
The world nowadays is facing unprecedented changes that have a great influence on learning and education.
One of the most salient changes of our time is that the pressing problems are no longer "tame" problems with clear-cut definitions and optimized solutions. Instead, they are complex problems which are entangled with many different factors and easily accessible to everyone (e.g. ecological sustainability, disruptive technology such as AI and its ethics, climate change, etc.). These new problems challenge the traditional model of learning, and also urge the emergence of new learning models. There is a three-fold background to our thesis: First, the new problems of the world are "open wicked problems", which are ill-defined, complex in nature, entangled with multiple factors and accessible to everyone. Second, the open wicked problems bring challenges to our traditional learning and education in multiple ways. Third, in responding to the challenge of open wicked problems, new learning models are emerging. Open-wicked-problem based learning, collaborative learning, and boundary-crossing learning are the key features of the new learning model. This thesis is motivated to study the new model of learning that arises to face the new open wicked problems. The general motivation of this thesis is, first, to provide practice-based research with an insider understanding of the new learning model, and second, to provide analytical tools and guidance for practitioners and learners who practice the new model of learning. In the INTRODUCTION, we first present the general research background in the first three sections, then our research motivations, questions and key findings in the fourth section, followed by a thesis structure section which overviews all the parts of the whole thesis.
New Problems of the World - the Open Wicked Problems
The Story of Ziqi
In a remote corner of the countryside in Sichuan province, Southwest China, a young lady is doing her housework. She has some cotton at hand, which she just collected from her own farm. Six months ago, on the day of Guyu, one of the 24 festivals in the traditional Chinese calendar, she planted the cotton seeds in an early spring rain. Now, in the late autumn, the cotton is ready, and she is preparing to make a quilt for her grandma. One trouble emerges: the bamboo bow for fluffing the cotton is broken, but a trouble like this cannot trap our versatile lady. She picks up some new bamboo sticks and fixes the tool, just like an old-time craftswoman would do. She will need the bow tonight, when her grandma will accompany her in fluffing the cotton and making the quilt, in the countryside moonlight. The lady seems to live an old-fashioned life: she plants and collects the cotton, crafts the tool and makes the quilt all by herself. Her life looks calm, isolated and free, in a secret corner of Southwest China. So might her over 9 million Youtube subscribers believe or, more precisely, wish to believe. The lady is named Li Ziqi. The fact is, she lives with her grandma and does all the work we describe, but she is not isolated from the world. Instead, one might say she is just the opposite of an isolated figure: a key opinion leader and a cyber celebrity. She has 9 million followers on Youtube. Her videos are well made, showing traditional Chinese crafting, cooking, farming, clothes making, etc. We very often see a calm and beautiful countryside with only Ziqi doing the farm work. In 2019, her story was heatedly debated as an example of Chinese culture being exported to the western world, as many of her subscribers are westerners who love the calm life she appears to live.
There are supports, skepticism as well as grim criticism questioning the cultural export intention. The problem is even more sensitive given the gap between Chinese and the western social media, mainstream public opinion and sociopolitical ideologies. Ziqi must deal with it, balancing the public sentiment while keeping producing her content with the feedback she received. The world also has to deal with it: the phenomenon of Ziqi, which never appeared before, leaves a challenge and opportunity for inter-cultural exchange, and probably more deeply, for the clash between obviously gapped Chinese and western media, public value and opinion, and ideologies in general, in the 21 st century. Despite of the controversy, Li Ziqi is a big success. She attracts so many foreign subscribers on a foreign online platform (which is not even accessible in China), directly influencing millions of people speaking so many languages that she might not know. The popularity makes the seemingly isolated life only one side of the coin. Her life is not isolated, but closely connected to her audience and the public internationally. The influence is mutual: Ziqi also has to communicate in one way or another to the audience, live with the huge impact she makes (including responding to the controversial debate on cultural export) and become both a professional youtuber and the lady working in the farm. Not only Ziqi, the world is embracing growing entanglement and complexity, manifested in its every connected subject and object. Ziqi's story is just one example showing how profound connectivity and complexity can thrive from showcasing the opposite: an isolated and simple life. The seemingly isolation intentionally yet surprisingly results in profound connectivity, and the seemingly simplicity conveys huge complexity. And the world is driving towards that direction: simplicity and isolation diminishes, while complexity and connectivity pervades. The entanglement and hybridization is the new trend. The entanglement and hybridization have infiltrated every aspect of our lives. In <<Chapter 1.1 "The Proliferation of Hybrids">> of his book <<We have never been modern>>, Bruno Latour wrote: "On page four of my daily newspaper, I learn that the measurements taken above the Antarctic are not good this year: the hole in the ozone layer is growing ominously larger. Reading on, I turn from upperatmosphere chemists to Chief Executive Officers of Atochem and Monsanto, companies that are modifying their assembly lines in order to replace the innocent chlorofluorocarbons, accused of crimes against the ecosphere. A few paragraphs later, I come across heads of state of major industrialized countries who are getting involved with chemistry, refrigerators, aerosols and inert gases. ….. The same article mixes together chemical reactions and political reactions. A single thread links the most esoteric sciences and the most sordid politics, the most distant sky and some factory in the Lyon suburbs, dangers on a global scale and the impending local elections or the next board meeting. The horizons, the stakes, the time frames, the actorsnone of these is commensurable, yet there they are, caught up in the same story. For the others are multiplying, those hybrid articles that sketch out imbroglios of science, politics, economy, law, religion, technology, fiction. … " (Latour, 1993, pp. 1-2) The newspaper story of ecosphere damage he read reached from chemistry to politics, from economy to law, religion etc. 
The story is not a fiction that makes up plausible links, it is the reality with authentic entanglement, as Latour wrote: "these imbroglios do the mixing," … "they weave our world together!" (Latour, 1993, p.3). Both Ziqi's controversy and Latour's reading present new problems of our time. There are not problems people know how to solve with existing and fixed knowledge or knowledge frame, but ill-defined problems whose solution requires effective and wise integration of science, technology, society and life. The "Open Wicked Problems" As early as in the 1970s, scholars have noticed the rise of such problems in the field of social policy and urban planning. Rittel and Webber first coined the concept of "Wicked Problems", by which they indicated problems that are "no meaningfully correct or false", have "no sense to talk about optimal solution" and have "no agreed definition" [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF]. These problems emerged when there appeared a conflict between social protest of the "laity" and policy making of the professionals. The problems, such as equity in policy making, were not "tame" -definable, understandable or consensual. [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] challenged the modern tool such as professionalism based on traditional science and engineering in resolving such problems, proposing that traditional tools are designed for well defined "efficiency" problems, such as optimizing resources, with a "planned" system. But for the "wicked problems", the goals are uncertain; the definitions are inseparable to the solutions; the input and output factors are intertwined; and even the historical sense of progress is "eroded". A perfect "planned system is unattainable" and it is "even questionable if such a system is desirable" (Rittel & Webber, 1973, p.159). [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] further summarized the ten properties of wicked problems to distinguish them from classical problems. 1 Latour's problem of ecosphere damage can be seen as a typical wicked problem: its definition is elusive, depending on the perspectives of possible solutions; different disciplines of knowledge are useful but one cannot exhaust all useful knowledge and plan for a perfect solution; any attempt in solving the 1 1. There is no definitive formulation of a wicked problem. 2. Wicked problems have no stopping rules 3. Solutions to wicked problems cannot be true or false, only good or bad 4. There is no immediate and ultimate test of a solution to a wicked problem 5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly. 6. Wicked problems do not have a enumerable ( or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan. 7. Every wicked problem is essentially unique. 8. Every wicked problem can be considered to be a symptom of another problem 9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution 10. The Planner has no right to be wrong. (pp. 
161-166) problem will have consequences, and the problem evolves as we solve it; finally, the different perspectives of the problem, from chemistry to biology, politics and business are so intertwined that we cannot just "divide and conquer". Back then, [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] did not know if the phenomenon of "Wicked Problems" will last and become the significant problems of the world. But as we have experienced the huge change of the world during the past decades, featured by the fast growth of science and digital technology and the dramatic change of social life, the significance of the "wicked problems" only increase. Climate change, AI development and ethics, cyber and data security, pandemics, globalization and anti-globalization, social media and disinformation, Sustainable Development Goals(SDGs) 2 problems -to name a few of the pressing "wicked problems" of our time. Though the "wicked" nature of the problems persist, which is meant for the opposite of "benign" or "tame", nowadays the problems present new features. The most salient new feature is the active participation of the "laity": the general public constantly contributes to forming and solving the problem, and their role in the wicked problems changes from peripheral and passive to central and active. When [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] introduced the concept "wicked problem", the focus is still for the researchers and policy makers to realize the emergence of the new kinds of problems. The underlying intention is still for policy makers, researchers and professionals to solve/resolve the problems caused by the complexity of society. These policy makers, researchers and professionals are assumed to play the central and subject role whereas the public and society in the peripheral and object position. Nowadays, this premise is under question. With the rapid development of information technology, everyone can access vast information with which everyone understands one facet of the wicked problem. Then the invention of social media and online co-working open platform 3 greatly facilitates everyone to collectively share knowledge, construct project, and solve/resolve the wicked problems. The participation of the "laity" is not trivial to these problems. In Ziqi's case, the power is almost reversed: many professional diplomats or researchers in culture communication do not have the kind of influence as Ziqi does. Yet Ziqi is just an ordinary youtuber broadcasting her life. She might not know very much about the science or the professional practice of cultural communication, but she is put in the center of the issue, playing a key and active role. So do many of us who consciously or unconsciously contribute to solving the wicked problems. We will refer to the kind of "wicked problems" that everyone can actively access and contribute to as "Open Wicked Problems", which is the new form of the pressing problems in the current world. The "Open Wicked Problems" are pervasive: from Brexit to US-China rival, from climate change to AI ethics, from pandemic to cyber security, from new entrainment like TikTok to new application of cutting edge biotechnology. Solving problems of this kind not only relies on the work of researchers, the professionals, the government, the enterprises, but also students, makers, entrepreneurs and all of us who tweet, who mobilize, who speak, who influence and who empower. 
For example, using AI to analyze user data for multiple purpose (e.g. recommendation, advertisement, surveillance, behavior analysis, etc.) has become more and more popular, and how to use AI in an ethical and responsible way is one of the pressing open wicked problems of today. It is more than an "efficacy" problem to be solved with algorithm, but a design and policy problem. When AI is playing an increasing role in not only what to buy but also whom to vote, the ethics of using AI is not neglectable. Who should own the data and have the right to analyze the data? When it is entirely up to the giant tech companies, there might be an interest to inappropriately use the data for private benefit (e.g. the Facebook data privacy scandal with Cambridge Analytica). But if no one should have the right to collect and analyze the data, we might lose the opportunity to wisely use the AI analytical power for better understanding the public and for better policy making. Many countries believe that they should develop this ability as a key competitive power in the 21 st century, therefore completely abandoning the technology does not seem to be an option. Then it is important for the government to understand the problem and make policy to ensure appropriate use of the data for public benefit. But whether the political choice of governmentcompany collaboration is ethical and responsible is also questionable. In 2018, the collaboration between Microsoft and the U.S. Immigration and Customs Enforcement (ICE) irritated Microsoft employees when they found out their technology tools, such as face recognition APIs, are used to separate the immigrant children from their families. Many employees threatened to leave the company which pushed the change of the AI algorithm of the ICE project. The political stance of the employees of the tech companies are also part of the problem. The problem of AI ethics and responsibility is also largely influenced by the public opinion. In Europe, there is considerable concern of abusing private data by companies or by the government. The European General Data Protection Regulation (GDPR) aims to protect the privacy of individuals by ensuring the individual's control of their personal data. In China, the concern is much less. The use of face recognition in the street is accepted without much challenge. The CEO of Baidu, the biggest Chinese search engine, once said in public that he believed the Chinese people are willing to compromise privacy for convenience. Only the most apparently inappropriate scenario of AI use is challenged: for example, one of the face recognition project claims that they can detect children who are not happy or absent-minded in class so that the teacher can "correct" them with the facilitation of this technology. The scenario is widely criticized as people worry it will harm the children's mental health when they know their every face expression will be recognized and graded. The backfire of public opinion forces the company to rapidly withdraw their project. Besides the entanglement of companies, government and the public, this problem is also greatly impacted by the "Black-Swan Events". At the end of 2019, an epidemic heavily hit China which later develops into the worst global pandemic in one century. Billions people's lives are affected, both by the illness itself and by the economy shutdown. Many governments have to enforce strict measurement for social distancing, and the use of AI gives them powerful tools for that purpose. 
In China, technology giants Alibaba and TenCent soon developed a health barcode that can determine the possibility that you have been exposed to virus, based on your digital "footprint". The barcode will give a green light to whom they believe is less likely to be exposed and red light to the others. When cities with low risk started to lift the lockdown, the barcode helped to determine whom should be the first to go back to work, to use public transportation and to go to public spaces, etc. Although the measurements are for the emerging crisis, we do not know if it will become the new norm after the pandemic. The pandemic has inevitable impact on our future including how we regard the AI technology and its use. To appropriately deploy AI is extremely complex and entangled, and it is not just problems of government and tech giants, but also questions for every one of us who uses and contribute to both the problems and the solutions. The collective intelligence and collective wisdom is more than ever an urging demand for the era of "Open Wicked Problems". Ziqi along with every one of us is empowered and obliged to create a better future together for this complex and entangled world. Challenges of Learning in the Era of Open Wicked Problems As reaching collective intelligence and wisdom for open wicked problems becomes an urging task, our current learning and education system confronts profound challenges. The Tradition Paradigm of Science and Engineering The open wicked problems challenge the tradition paradigm of science and engineering. The contradiction between the complexity of "wicked problems" and our cognitive limits to solving them is partially due to the learning paradigm that we inherit from the traditional science and engineering, as [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] wrote: "One reason the publics have been attacking the social professions, we believe, is that the cognitive and occupational styles of the professionsmimicking the cognitive style and occupational styles of engineeringhave just not worked on a wide array of social problems" (p.160) The traditional cognitive and occupational style relies on a unitary and acute objective, on the process of planning and testing, on the closed system instead of an open complex system. But this learning and knowledge creating paradigm will face immediate problems in front of the open wicked problems. Every open wicked problem is unique, and everyone with a distinct perspective enhances this uniqueness. In terms of cyber and data security, a traditional problem definition might be how to design a perfect system that prevents hacking or any form of sabotage to the data security. But an open wicked problem perspective of the issue might not agree with the unitary objective. One might question if such a perfect system is good for social good; who owns the key to the perfect fortitude of data; even if data is secured, does the user feel as much secured; who owns the data and has the right to define and change the level of security, etc. When we take into consideration various perspectives, including enterprise's interests, user's right and privacy, power and decision making, the problem's definition becomes elusive. The traditional objective needs to incorporate with the open wicked problem perspectives in defining the problems. The traditional process of testing does not work. 
Every attempt of solving or resolving the problem will inevitably bring consequences that might change the problem itself [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF]. The tradition style of science and engineering can test many times in order to learn and create knowledge, but for the open wicked problems, the testing is just a one-time shot. When Ziqi reacts to the controversy of cultural exportation (see Liqi's controversy in INTRODUCTION.1), her reaction will change the perception of the public which might alleviate, intensify or even entirely redirect the problem. But she cannot withdraw and announce that reaction to be a "testing". The experience and knowledge learned in open wicked problems must be adapted to some type of more sophisticated form of knowledge but not immediate applied to the same problem assuming the condition will be the same. The way testing works for traditional learning does not work for open wicked problems. The open wicked problem is an open system with complex and unenumerable set of interconnected input and output [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF]. A typical traditional problem sets up the frame of inputs and rules so that the outputs are the results from the problem solving. The open wicked problems do not follow this pattern. The output can influence a larger social context so that the input and the rules are changed. Or the output may connect to external factors that significantly shift the center of the problem. For example, a typical traditional problem might be to optimize the bike recycling efficiency for city bike sharing system, according to the distribution of population and riding routes. But the improved efficiency and convenience in turn results in massive broken even sabotaged bikes so that efficiency does not matter that much. Instead, encouraging users to report broken bikes becomes the center of the problem. One might also consider the campaign and design to protect the bikes as one of the factors. We need to cope with the complexity and openness of the open wicked problems. To conclude, the first challenge owes to the properties of the open wicked problems themselves -plural and ambiguous objective, unable to test, open and intertwined inputs and outputs. Professionalism and its associated science and engineering learning paradigm is useful but not sufficient in facing the new problems. One of the key improvements to the learning paradigm is to incorporate the rationality-based learning with problem-based learning or project-based learning, which has been started in many areas where complex problems pervade, e.g. in medical and management school. The introduction of problem-based learning, especially open wicked problem-based learning, will complement the classical problem-solving paradigm. The Separation and Specialization of Knowledge The open wicked problems challenge the separation and specialization of knowledge. The separation and specification of knowledge is one of the major features of the current learning. Latour (1992) rhetorically explained the divide between nature science and social science and research: "They have cut the Gordian knot with a well-honed sword. The shaft is broken : on the left, they have put knowledge of things; on the right, power and human politics (p.3)". But the open wicked problems are demanding increasing holism "to retie the Gordian knot by crisscrossing." (p. 
3) The major form that results in the separation and specialization of today's learning and education is the disciplinary divide. But any discipline of knowledge alone is not enough in solving the open wicked problems. Learners need to collaborate and wisely integrate the knowledge they gain in their respective disciplinary learning. Interdisciplinarity becomes an oftenreferenced approach to solve open wicked problems. Understanding the historical reasons for disciplinarity and interdisciplinarity can help us understand this challenge. Disciplines form with different historical reasons: some based on common theoretical inquires, some with similar methodologies, and others with common problems [START_REF] Scott | Undisciplining Knowledge: Interdisciplinarity in the Twentieth Century[END_REF]. Once the discipline is mature, the theoretical thinking, methodology and the problem frame make it distinguishable from other disciplines of knowledge. A discipline will evolve with complex academic efforts, but it will not be adequately agile to adapt to emerging problems like the open wicked problems as we discussed. That is why many discoveries are likely to happen on the boundaries between disciplines where new techniques, perspectives and ideas clash [START_REF] Rylance | Grant giving: Global funders to focus on interdisciplinarity[END_REF]. Therefore, interdisciplinary research and education has been active in universities and research institutes. But how to implement interdisciplinary is not that easy. The cognitive and institutional obstacles are two of the main reasons for inefficient interdisciplinary learning [START_REF] Macleod | What makes interdisciplinarity difficult? Some consequences of domain specificity in interdisciplinary practice[END_REF]. In one of his interview, Harvey Graff, who did a historical review of 12 case studies of interdisciplinary efforts [START_REF] Graff | Undisciplining knowledge: Interdisciplinarity in the twentieth century[END_REF], explained that interdisciplines are historically not opposed to disciplines but the two are interdependent and coexistent: "one myth is that interdisciplinarity is based on the "integration" of disciplines or requires "mastery" of multiple disciplines. Another is that there is one path toward interdisciplinarity -a large group and expensive science. As the case studies in my book demonstrate, there is no one formula that has a higher chance for success. Nor is interdisciplinarity new. It is part of the history of the modern research university and the development of disciplines from the late nineteenth century on. Too often we frame the disciplines and interdisciplinarity as opposed; the reality is that one depends upon the other." (The undisciplinarian: A view From the Bridge, n.d.) Therefore, the real drive for interdiscipline is the emerging "problem", which should be put in the center of interdisciplinarity and be paid closely attention to in reaching meaningful collaboration with better communication, as Graff stated: "interdisciplinary research as what emerges from the effort to develop new answers to questions (or approaches to problems) that require elements from different disciplines, subdisciplines and fields. The questions and problems are central…. Researchers, institutions and funding agencies need to be more honest and modest. Forsake endless typologies (transdisciplinary, meta-disciplinary) and focus instead on specific questions and problems. Also, consider the physical and social organization of research. …. 
Interdisciplinary efforts do put much greater demands on the quality of communication." (The undisciplinarian: A view From the Bridge, n.d.) The open wicked problems in its nature challenge the disciplinary boundary and push interdisciplinary learning. Moreover, as Graff pointed out, the problems are not only the reason why disciplines should agilely cross boundary but also the reason why different disciplines CAN overcome the barriers by focusing on the problems. However the interdisciplinary efforts are not easy and many have reported the hurdles in crossing the boundaries(MacLeod, 2018) [START_REF] Evans | Researching the sustainable city: Three modes of interdisciplinarity[END_REF]) [START_REF] Rhoten | Interdisciplinary research: Trend or transition[END_REF] [START_REF] Pohl | Transdisciplinary collaboration in environmental research[END_REF]. There exists a great challenge in understanding how to cross the boundary of disciplines in practice. The Subject, the Scenario and the Purpose of Learning The open wicked problems challenge the subject, the scenario and the purpose of learning. Who learns? Where do they learn? Why do they learn? The open wicked problems, especially its "open" nature (see INTRODUCTION.1), challenge the most fundamental questions of learning. Everyone now has considerate access to vast information in order to learn knowledge, and they also have the digital tools to widely share their knowledge and skills. These are the digital infrastructure with which everyone has the potential to learn/teach and to contribute to solving the open wicked problems. The learning activity in the era of open wicked problems is profoundly enhanced to its traditional definition: it is not only for students, not only in classrooms and labs, and not only for gaining knowledge and skills to fit in the professionalized life. Everyone has the potential to learn, everywhere (particularly with the support of digital tools), to be involved in their social lives weaved by the open wicked problems and at the same time to develop themselves for the complex reality and the uncertain future. The vertical model of learning is then challenged. Traditionally, students learn from professors, apprentices learn from masters, and the unlearned learns from the learned. But in the new model, everyone learns from a variety of contents and from a variety sources, instead just following the traditional vertical development trajectory. Horizontal peer learning happens among anyone who knows something. Can a professor learn from a student? Sure, when the student knows more design knowledge than an engineering professor, or more about twitter for publicity. Boundary crossing collaboration becomes a common and significant approach to fulfil the horizontal learning in complementary to the traditional vertical development following an individual or competitive model. Multiple formats of horizontal learning emerge. Workshops, competitions, hackathons, and citizen science initiatives are becoming more and more important opportunities for learning in the emerging culture. At the same time, on-line learning tools largely facilitate digital horizontal learning: e.g. MOOC, webinar, open source hardware and software, open science toolkits, etc. All of these efforts have challenged the traditional situations of learning where students stay in classrooms or labs and follow the instructions of a teacher or a professor. 
Learning can happen in many situations where the communication and creation of knowledge are facilitated, either by off-line activities or on-line tools. The open wicked problems also accelerate the change of the professionalized world. Many jobs are emerging fast: the job of YouTuber did not really exist a few years ago, but now it is a choice for many young people, which even pushes traditional TV media to follow the trend. Learning as preparation for professionalism is challenged. Many encounter the problem that what they learn in formal learning does not contribute much to their later professional life. One of the reasons is that the fast-changing problems of our time require agile and adaptive learners rather than well-trained workers. Learning should meet the needs of the changing job market, which are ultimately attributed to the need to solve open wicked problems. Graduation is not the final stop of learning. Learners are required to keep learning in work and in life so that the complex problems we face can be resolved with agile and adaptive wisdom. Instead of profession-oriented learning, the new problems call for scalable life-long learning so that we can keep up our pace of innovating and creating knowledge to solve them. To conclude, the open wicked problems have profoundly challenged the way we learn in multiple aspects:
• The shift from classical problems to wicked problems calls for problem/project-based learning.
• The demand for holistic knowledge in solving wicked problems calls for boundary-crossing and interdisciplinary learning.
• The "openness" of the open wicked problems calls for a horizontal and collaborative life-long learning for everyone.
Learning in the era of open wicked problems is extended: from solving classical problems to resolving wicked problems, from a separation and specialization paradigm to a holistic and problem-based paradigm, from individual based to collaboration based, from vertical and unidirectional to horizontal and bidirectional, from profession-oriented learning to agile and adaptive learning, from studentship learning to life-long learning. New models of learning are crucial in our time for solving open wicked problems. The changes we mention are now attracting increasing attention. For example, interdisciplinarity is growing internationally and is receiving increasing grant funding [START_REF] Rylance | Grant giving: Global funders to focus on interdisciplinarity[END_REF] [START_REF] Frodeman | The Oxford handbook of interdisciplinarity[END_REF]. But the implementation is difficult and calls for in-depth study [START_REF] Macleod | What makes interdisciplinarity difficult? Some consequences of domain specificity in interdisciplinary practice[END_REF] [START_REF] Graff | Undisciplining knowledge: Interdisciplinarity in the twentieth century[END_REF]. The new model of learning is itself a wicked problem: there is no single body of knowledge for explaining it, nor is there a fixed method for studying it, or a final solution that we all hail. Studying the new learning for open wicked problems is an emerging and urgent task, at the crossroad of different recent learning approaches, e.g. project-based learning, interdisciplinary learning, and collaborative learning.
The Emerging New Learning Model: CoLAB (Collaborative Learning Across Boundaries for Open Wicked Problems)

Confronting the challenges of open wicked problems, society at all levels is motivated to innovate teaching and learning models: for policy makers, new learning models are seen as essential engines for national competitive power; at the industrial level, the job market anticipates a growing demand for learners who can agilely adapt to the fast change of jobs; in education, educators and practitioners look for new ways to cultivate the "21st century competencies". During the past few years, the author of this thesis has been an active practitioner in educational innovation in China and Europe. There is an increasing need for educational reform to meet the challenges of open wicked problems, especially in China, where society is undergoing fast changes at all levels. In China, the motivation to innovate learning and education is high for policy makers. After several decades of fast growth, China is now the second biggest economy. But since the 2010s, China's economic growth rate has shown a declining trend. The Chinese government believes the ultimate problem lies in the industrial structure, and therefore promotes the "supply-side reform" to replace the low-added-value, high-consumption and high-pollution industries with more innovative, green and high-tech industries. Innovative and creative talents are therefore considered a key national competitive power to reach this goal. This policy has led to the reform of the education system, especially the Gaokao (National College Entrance Examination) policy since 2014. The new Gaokao policy requires that K12 education should pay more attention to the "comprehensive practice competencies", which include collaboration, innovation, complex problem-solving, digital literacy, etc. Besides the political drive, the industry and job market also motivate the reform of learning and education. In 2019, Jack Ma, the founder of Alibaba, in a conversation with Steve Forbes, said that he believed that art, music and teamwork are more important than math and physics, given the background that machines and AI will replace humans in jobs that are based on memorizing and calculating. Therefore, humans will work on more complex tasks, based on skills that are more complex and "human", as in art, music and teamwork. The change of the job market has directly influenced the change of education; e.g. one of the education reforms in China has specifically required a focus on vocational preparation and planning. The traditional learning system tends to isolate textbook knowledge from the real-world needs of the job market. With the fast change of the job market and the relatively slow change of textbook knowledge, this tradition has created a giant gap between education and vocation nowadays. How does the educational system react to these political and economic motivations? There are many new learning initiatives emerging. The following sections introduce the new learning initiatives that we observed and co-organized during the first years of our research. We use these new learning initiatives to illustrate the new learning models in the era of open wicked problems.

Examples of "Co-lab"-Like Activities

One of the new learning initiatives is the "Co-lab" workshops (please note that "Co-lab" in small letters is NOT the same as the concept of "CoLAB" in the thesis title; we will discuss the difference in the following section of this chapter).
The Co-lab workshops are a series of workshops held during 2015-2017. "Co-lab" is the Japanese word for collaboration and at the same time short for "colaboratory". The workshops are experimental workshops that brought designers and biologists together to work on interdisciplinary projects. There are different sessions in the workshops: normally, there is a design session where the scientists can learn the designerly approach from the designers, a science session where designers can learn and practice science, and finally an interdisciplinary session where they group together to work on an interdisciplinary project of their choice. The workshops became quite popular in some design and biology communities in Europe and China and continued to host different editions in universities and institutes such as UCL, CRI, Cambridge, EPFL, Tsinghua, etc. The most special thing about the Co-lab workshops is that they were started entirely by students but succeeded in making considerable impact. The Co-lab workshops were co-founded by three young students, as shown in the following figure (Fig. 0.2). They are a design bachelor student from Japan, a bioengineering and education master student from Spain, and me, a 27-year-old Ph.D. candidate from physics and design who had just come from China to Paris. At the beginning, the student group did not have any money or human resources. But they were passionate about promoting in-depth communication between designers and biologists. They discussed the idea of starting a collaborative workshop with different people and received much useful advice, which helped them build their mentor team and raise funding from various grants. The student-led initiative soon became popular in the innovative design and biology communities, e.g. the synthetic biology community. With the help of these communities, this new format of learning entered universities in Europe and China and evolved into many editions covering more and more topics. The author of this research is both one of the co-founders of "Co-lab" and at the same time a participant-observer who studies the learning process in Co-lab. In Chapter 8, when we analyze the Co-lab workshops, we will present a more in-depth ethnography of the Co-lab workshops: their format, philosophy, organizers and participants. Please refer to the corresponding contents in Chapter 8 for more detail on the Co-lab workshops. But the "Co-lab workshops" were not the only learning initiative that we encountered and studied. During the past years in which we conducted our research, many similar learning initiatives emerged. As shown in the following figure (Fig. 0.3), we (co)organized, participated in, and observed many different learning initiatives. Although they differ in topic, participants and format, they do have underlying connections: e.g. core organizers of Co-lab started SDGo, a high school competition focused on the UN's Sustainable Development Goals. Although the basic format and participants are different, the organizers inherited their spirit and approach in the SDGo initiative. Practitioners, educators and learners started to target SDG problems in their learning activities following the announcement of the SDGs. The open-wicked-problem nature of the SDGs prompts educators and learners to adopt new learning models that are more problem-based, more collaborative, more horizontal, more agile and more adaptive.
In the summer of 2017, over 100 international students and teachers from across the globe gathered on the campus of Tsinghua University in Shenzhen, China, for a global gathering called the "iSDG Assembly". The gathering invited 5 learning initiatives originating from different universities and institutes, namely the Biopolis Summer School, the CRI-Open FIESTA SDG workshop series, the CHIC program, the Tsinghua-Geneva Initiative (TGI) Summer School, and the SDGo Competition. The different initiatives, although with different topics and pedagogical techniques, share the common features of a new model of learning facing the challenge of the Open Wicked Problems.

• The Biopolis Summer School
The Biopolis Summer School, with the slogan "biology and social innovation", is a summer school organized by Harvard, in collaboration with the Center for Research and Interdisciplinarity (CRI), SciencesPo, and the City of Paris. The Biopolis Summer School was started by Robert Lue and Alain Viel of Harvard in the summer of 2015. The overall aim of this summer school is to apply biological principles at all levels (e.g. evolutionary biology principles) to solving urban challenges in Paris. In the 2017 Biopolis Summer School, the target urban challenges had a specific focus on SDGs 3, 4 and 11, which are public health, quality education and sustainable cities. In two months, students from Harvard, CRI and SciencesPo formed interdisciplinary groups to identify a local problem by engaging in the urban life of Paris, and proposed an actionable project to innovate towards a more sustainable future. The final projects were presented with videos as prototypes, which integrate the biological principles, the real-world SDG problems and the solution. For example, the team PICKMEUP designed a vending machine that sells food at a discounted price related to the food's freshness in order to prevent food waste. They were inspired by the symbiotic relationship between plants and nitrogen-fixing bacteria.

• The CRI-Open FIESTA SDG Workshop Series
The CRI-Open FIESTA SDG Workshop Series is a series of workshops between the Center for Research and Interdisciplinarity (CRI) and Open FIESTA of Tsinghua University in Shenzhen. Since 2015, CRI students and professors have been visiting Shenzhen to hold a week-long workshop targeted at solving SDG problems with digital technology. The Open FIESTA students who joined the workshops have diverse backgrounds in engineering, science and design. They needed to target a problem in the first one or two days of the week and then work on a prototype to present in a public maker space at the end. In 2017, the students first visited the hardware market in Shenzhen to identify a local SDG problem and then applied digital technology to make their prototype for solving the problem. For example, one of the groups discovered that many kids in the hardware market were left unattended as their parents were busy with their business; they therefore designed a special maker kit for them to play and learn at the same time.

• The CHIC (China Hardware Innovation Camp) program
The CHIC program is a year-long program curated by EPFL. It is a program to solve real-world problems by designing and making connected devices. Students joined their interdisciplinary teams of engineering, business and design at the end of October in Switzerland, and identified the problem before they kick-started the prototyping phase in the spring.
They spent half a year prototyping the device before they arrived in the world's hardware capital, Shenzhen, in July and utilized the hardware there to finalize their design and prototype. In 2017, the CHIC program focused on various SDG problems, from SDG 6 Clean Water and Sanitation to SDG 11 Sustainable Cities. In one of their projects, Livelo, the students made a low-cost connected device for monitoring water levels.

• The TGI (Tsinghua Geneva Initiative) SDG summer school
The Tsinghua Geneva Initiative (TGI) is a two-month summer school, where students from all over the world join a Geneva-Beijing-Shenzhen journey for solving SDG problems. In 2017, the students teamed up in Geneva, where they visited different agencies of the United Nations and other international organizations, which presented important SDG problems that they were currently working on. The students developed their concept and designed a prototype for about a month before they went to China for the final two weeks of intercultural communication and testing, and finalized the prototype in Shenzhen with the hardware they needed.

• The SDGo Competition
The SDGo competition is a Beijing high school competition that lasts for 2 months. In 2017, the SDGo competition focused on SDG 6 Water and Sanitation and SDG 12 Responsible Production and Consumption. In the competition, the high schoolers teamed up to join a three-phase journey. In the first phase, the students need to identify a problem with field research and design thinking tools. They are asked to get an in-depth understanding of the context of the SDG problem through field observation and interviews. In the second phase, the students need to look for science and technology knowledge that is potentially useful in solving the problem, with the help of some undergraduate and graduate students. In the final phase, the team needs to incorporate their field understanding and science and technology knowledge and make a prototype to solve the problem. The SDGo competition was initiated by Beijing Normal University (BNU), which at the time hosted an educational grant for innovations in schools. The organizing team of the SDGo competition was a student club based at the CRI (Center for Research and Interdisciplinarity) in Paris, which regularly hosted a series of interdisciplinary weekend workshops between design and biology. The core organizers are a design bachelor student, a biology master student and a Ph.D. student (also the author of this thesis) who works at the interface between design and science. The workshop normally contains three parts:
1. A design section to identify the wicked problem
2. A science workshop/practicum to learn the scientific knowledge and skills
3. A project making / prototyping section to finalize an interdisciplinary project for the problem
The new model successfully brought together undergraduate, graduate and even high school students from different backgrounds, e.g. from science, art, business and design, to learn together and work on interdisciplinary projects. The workshop was repeated over 10 editions in different universities in Europe and China. During their workshop in China, the manager of the BNU school innovation grant met the team and saw the new model of learning, which she believed was the innovation she was looking for. After several rounds of communication, they decided to start a 2-month student competition with the new model, because, as advised by BNU, it was the best way to introduce an unfamiliar format to the school education system in China.
As the CRI was promoting education around the topic of the SDGs, the competition was finally named SDGo, which meant SDG + Go: to take action towards solving SDG problems. In terms of pedagogy, the organizing team soon adapted their successful three parts into three phases for the high school competition, which are the Grey Phase for identifying the problem, the Yellow Phase for analyzing the problem, and the Red Phase for solving the problem. The high schoolers need to form a group of 5-6 people to solve the problem they identify. The following content table gives details of the guidebook that explains the three phases. The Grey Phase includes an on-site launch workshop as a kick-starter for the competition and two weeks of fieldwork to identify the SDG problem. The launch workshop aims to equip students with the ability to critically view sustainability within their social, economic and cultural context, and to plan their own field study in this context. During the first week, students conduct their own field study based on the plan designed during the workshop and the guidance from the guidebook. The second week is the time for the students to integrate their first fieldwork results, improve and repeat the study if necessary, and finally redefine their initial questions with their analysis of the data. The Grey Phase integrates ethnographic methods and design thinking tools which the organizers often use in their design workshops. The Yellow Phase is a three-week section where the students need to learn interdisciplinary knowledge that is potentially useful for the problem. In week 3, the students re-examine their SDG topic and the questions brought from the Grey Phase, identify how their questions can relate to knowledge and references from different disciplines, and divide themselves into several small groups inside the team to work from different disciplines. The next week, students learn and exercise academic skills to research the existing knowledge and state-of-the-art solutions to the problem and try to build an interdisciplinary knowledge map from the references they found. In week 6, students need to integrate the different knowledge in a debate and learn how to really reach meaningful results in an interdisciplinary setting. The Red Phase is the final stage for the students to give their ideas a chance. During weeks 7 and 8 they utilize the discoveries they made in the Grey and Yellow Phases to create a prototype of their concepts. The focus is not on thinking and analyzing but on making and testing. During the final week of the competition, students are expected to focus on the presentation and the outreach & social media aspects of their project. Students are introduced to communication skills and methodologies to help them create a presentation and report. As the organizing team is mostly based in Paris, they use on-line tools for mentoring the high school students. The student team needs to hand in a working sheet weekly to summarize their progress in the week and get feedback from three advisors, mostly graduate and undergraduate students from different backgrounds who read their report every week.

Table 0.2 Hand-in template for the students to sum up their progress weekly

The above table (Table 0.2) is the third-week hand-in template for the students to summarize their weekly progress. In the template, the student team is asked to break into 3 sub-groups and search for knowledge that is potentially useful for their problem.
The following sheet is the answer of the second sub-group of the team "Heterogenius", which focused on the over-packaging problem of the growing delivery business in China. The second sub-group chose to look at the problem from the material science perspective and studied the structure of corrugated paper with the aim of finding new solutions that make cardboard recycling more friendly (Table 0.3).

Table 0.3 Week 3 hand-in answer of the second sub-group of the team "Heterogenius": from the material science perspective

After receiving the hand-in, three advisors gave their answers following an advice template. The template is deliberately designed for giving open and constructive advice. The following advice sheet is the Week 3 advice given to "Heterogenius" by one of the advisors (Table 0.4).

Table 0.4 Advice template and Week 3 advice from one of the advisors to "Heterogenius"

The advisors have diverse backgrounds ranging from business and design to engineering and science. The high school students are responsible for integrating the different advice into their project. Besides the online communication, the students also hold at least one physical gathering every week, where they discuss the past week's progress, the feedback from the advisors, and their plans for the next week. An undergraduate tutor from BNU joins the gathering to facilitate them when needed. The tutors from BNU are all trained by the core organizers to make sure they align with the general flow of the competition. Some of the high-quality advisors also call in when the student groups meet every week. The different levels of mentoring make sure the student groups get open and diverse support. The SDG problems the students chose are open wicked problems closely related to their lives. And their solutions also directly contributed to solving the problems. For example, the team of "Wen Jiajun" found out that most used textbooks and practice books are just thrown away, even though many of them are quite new and ready for reuse. They invented a loose-leaf notebook so that the textbooks and practice books can be reused. They not only invented the product as an idea, but prototyped it and tried it in their school. They went to their affiliated junior high school to promote their products and started a campaign around the loose-leaf concept. As a result, they influenced hundreds of students in their school, and loose-leaf paper sales tripled in the store near their school. The learning activity did have a real-world impact upon solving/resolving the open wicked problem. Learning towards solving open wicked problems is not easy. Very often the students find themselves in the situation of failing and iterating. The team "Reopt" explained the feeling of their learning journey: "At first, we believe we are going to change the world, at least for the problem we tackle, but soon we find out there are giant walls stopping us, and each time we try another direction, we find many more other walls. In the end, we finally understand that we are not breaking the walls but looking for slight gaps between the walls and see if we can grow our project there to change the status quo." The giant walls they referred to are the wicked problems and the entangled factors that contribute to the problem. The team started their project from the observation that high school students often change eyeglasses as they grow up, but the used glasses are never recycled. They wanted to promote the recycling of eyeglasses in the high school community.
They first learnt about the materials of eyeglasses and found out that neither the recycling of the frames nor of the lenses is economical. They tried another direction of recycling by collecting and donating used glasses, but then found out that this model has a major flaw: the donated glasses might not fit another person, and might even do harm to their eyes. Finally, they found out that there are a few NGOs in China raising money to buy glasses for kids living in the remote countryside. "Reopt" then proposed a donation campaign in their school and donated the eyeglass frames to the NGOs. This donation would considerably reduce the money needed for buying new glasses, so that the NGOs could help more kids with the money they have. In their donation activity, they also encouraged their student peers to write to the countryside kids. In this way the donated frames became a bridge connecting the two remote communities.

Abduction: From the Small Co-lab to the Big CoLAB

In the previous section, we illustrated some of the "Co-lab"-like activities. Although they are mostly different in format, they share similarities and connections. In this section, we will follow an abductive logic in understanding the different "Co-lab"-like activities by abducing their most significant features, which give rise to the concept of the big CoLAB learning.

• Open-Wicked-Problem-Based Learning
The objective and process of the new learning model are evidently different from the traditional model of learning, especially from instruction-based learning. The key distinction is that the new model of learning has an open-wicked-problem-based learning objective and process. The traditional objective of learning is knowledge-based and defined by the instructor. For example, in math classes, the teacher sets up the goal of understanding certain approaches for solving a type of equation. In this way the learning objective is relatively clear and predefined. But in the new learning model, the goal only becomes clear and substantial as the learning unfolds. The learners and mentors do not know beforehand what exact knowledge they will need; therefore it is not possible to define the objective in terms of knowledge in advance. As the learners get more involved in solving the open wicked problems, they start to define what they will need to learn in order to solve the problem. The mentor's role is to facilitate this process and help them better frame their learning objectives. As the open wicked problems are entangled with many aspects of knowledge, the learning objectives constantly change and iterate under various conditions. In the case of the "Heterogenius" team of the SDGo competition, one of their directions in Week 3 was to look for material science knowledge about corrugated paper. But as they furthered their research, they found out that the habit of recycling is crucial in their solution model. The open wicked problem concerns many aspects of knowledge during the learning. The learning process of the new learning model is also open-wicked-problem based. The learning process is inseparable from the process of solving the problem. In the traditional model of learning, the learning process is relatively isolated and independent of other activities. But in the new model of learning, learning happens at the same time as learners inquire, communicate, debate, inspire, prototype, validate, and present. All these activities towards solving wicked problems are at the same time learning activities.
The team "Reopt" of the SDGo competition discovered that their biggest chance of recycling eyeglasses is through donating eyeglass frames to NGOs who give eyeglasses to the countryside kids. The team had to communicate to the NGOs and persuade them the value of this project. But how should they do that? They had never done anything like that before, and they had to learn how to communicate to a NGO, to understand their need and to persuade them. Therefore, they consult people they know who have alike experience and learn through the whole process. Another team of "Homerun" wanted to solve the problem of food waste in restaurants and they had an idea to actually open a temporary restaurant in their high school to observe and test their solutions. In order to open the temporary restaurant, they have to persuade the school managers, apply for a funding, obtain food security qualifications from their supplier, write to the parent committee, at the same time to observe and analyze. These activities help them to better understand both the providers and the customs. None of these activities are traditional learning activities, yet they constitute the necessary process of solving the open-wicked-problems. And the learning accompanies the process of identifying, analyzing, and solving the problems. • Collaborative Learning Collaboration is one of the key features of the new learning model. In the above examples, all of the SDG learning initiatives adopt a collaborative learning format. The students are grouped into 2-6 people and compete their learning in the unit of groups. In the beginning of their learning, learners group based on different strategies, e.g. common interests, heterogenous skillset, etc. As the learning continues, learners work together in identifying, analyzing and solving the open wicked problems. In the new model of learning, the "openness" of the problem ensures that everyone can contribute to the problem. Therefore, in most cases, there is no "expert" who should dominate the group or a completely "outsider" who knows nothing to contribute. Collaboration naturally happen in this group dynamic where the leaners collectively define and construct their project in solving the open-wickedproblems. Adopting the collaborative learning format is based on the following considerations: 1. As open wicked problems challenge the separation and specialization of knowledge, individual model will likely fail to provide a comprehensive perspective. With the pervasive use of digital technology, the collaboration now can be both on-line and off-line. Many on-line technologies are being used to facilitate collaborative learning in the new model of learning. For example, the SDGo competition adopted the remote mentoring as most their mentors were in Europe while the learners were in China. Also, in the GTI summer school, the participants needed to attend a completely on-line session called open 17, before the two month summer school. The open 17 session used MOOC, online tutoring, and on-line discussion as the major forms to collaborate. Off-line collaboration is still the major format of the collaboration, but we anticipate that the on-line collaboration will be more and more important in the future as the need for remote communication, learning and collaboration increases. The collaboration in the new model of learning is different from the "Division of labor" in professionalism. The role of each learner is not predefined. 
As the problem and the solution are unknown to the learners before learning, they need to collaboratively work on identifying the problem and co-constructing the project to solve it. This process requires everyone to contribute their intelligence, which differs from the "division of labor" model. Division of labor is only used to a limited extent in the later stage of the project, when everyone has agreed on the problem and the project has moved to the realization stage.

• Boundary-Crossing Learning
The new model of learning features boundary-crossing learning. The collaboration in the new model of learning is more heterogeneous than homogeneous. Learners learn through boundary-crossing communication in their group: they learn from knowledge in another discipline or from other communities. In Biopolis, the Harvard and CRI students were more familiar with biology while the SciencesPo students were more familiar with social and political sciences. Also, as the teams needed to solve a problem in Paris, the Harvard students also learned from local students and communities about life in Paris. In the TGI summer school, the students were first grouped based on different skillsets so that they could complement each other when there was a need. Then the students went to the UN and other organizations, to local communities and to Shenzhen, where they could learn from many other people and organizations about what the problems are and how to implement their solution. In the SDGo competition, although the high school students do not have majors, they present different capacities in different areas: e.g. some are better at sciences while some are better at communicating and empathizing, etc. The design of the SDGo competition also required the learners to learn by crossing boundaries: e.g. in Weeks 3 and 4 they needed to break the team into 3 sub-groups to look for potentially useful knowledge in different areas, and in Week 5 they needed to integrate them to achieve a holistic perspective. Boundary-crossing learning challenges the vertical model of learning. Traditionally, learning is the preparation for solving the problem, and one can only start to solve the problem once one becomes a master in a vertical silo of knowledge. In the new model of learning, the situation is different. Learners and practitioners are constantly conducting peer learning and learn by crossing the boundaries of their past knowledge frames and experience. The horizontal model of learning is the norm for the new learning model. In the horizontal learning model, transgressing and expanding are the sources of learning and of constructing knowledge. Learners do not focus on how to progressively sharpen knowledge inside a vertical area of knowledge, either in a discipline or in a community of practice, but focus on integrating different knowledge and knowledge frames towards pragmatically useful knowledge and experience for the open wicked problems. This boundary-crossing learning feature means that there need not be a rigid frame of how knowledge should be constructed that everyone must follow in a vertical way. Instead, everyone's background, discipline and experience should be useful in a fluid way in the learning towards solving the problems. The direction of learning is also blurred, as we can observe that learning happens between students and professors in both directions.

Research Motivations, Questions and Key Findings

In this section, we will explain our research motivation and questions for this thesis.
Our research motivation for this thesis is to understand the process of CoLAB with a practice-based understanding and a pragmatic purpose. The practice-based understanding means that we do not approach CoLAB through existing theories; instead, we enter through close observation of CoLAB practice and through self-observation in practicing CoLAB ourselves. We treasure the insider understanding of the learning process, and we develop our understanding by observing and analyzing the subtle moments and their underlying mechanisms in front-line practice. The researcher is both the practitioner who participates and practices, and the research tool for extracting understanding from his practice. The pragmatic purpose means the research is also oriented to solving real-world problems, in our case, to facilitating CoLAB practitioners, particularly those who are unfamiliar with the new learning model. We are motivated to create easily usable analytical tools for these practitioners as well as for learners. The reasons for our practice-based and pragmatic motivation are closely associated with the general research background, as we have elaborated in the above sections, which are: First, the world is embracing growing entanglement and complexity. The "tame" problems that we are used to are evolving into "wicked" problems that are "ill-defined". Moreover, the development of digital technology has enabled everyone to be actively involved in solving the "wicked problems". The "open wicked problems" are becoming more and more significant in our modern life: from climate change to AI ethics, from new entertainment like TikTok to new applications of cutting-edge biotechnology. Second, the pervasive "open wicked problems" challenge our current learning and education system. The traditional science and engineering model is not sufficient for solving the problems. The "wicked" nature of the problems requires a holistic perspective instead of the separation of knowledge. The "open" nature of the problems results in horizontal and boundary-crossing learning in addition to the traditional vertical model. Third, to answer the challenge, new learning models are emerging, which is a result of both the political motivation for enhancing national competitive power and the economic motivation for fitting the fast-changing job market. The new learning models are now entering our official education system, e.g. the different SDG learning initiatives gathered in Shenzhen in 2017. There are three key features of the new model of learning: open-wicked-problem-based learning, collaborative learning, and boundary-crossing learning. Against this background, since 2015 the author has been an active practitioner of the new trend of learning, as a participant, an organizer, a promoter and a trainer. At the same time, the author is also a researcher trying to better understand the CoLAB phenomenon and improve the practice, which is the original motivation for this thesis. As the practice and research deepen, they have exposed two key issues of CoLAB that further motivate this thesis, especially in its research focus and methods. Furthermore, there is no definitive final "solution" or "recipe" for CoLAB either.

Research Motivations

The problem itself develops as we practice it. The entanglement and complexity of the problem make it difficult to adopt a traditional science approach to "measure" and "optimize" as for a classical "efficacy" problem.
Studying CoLAB needs to stay close to the subtle understanding that comes from the practice and embrace its complexity. The third reason for a practice-based perspective is that CoLAB adopts a learner-centered approach. The learners are the center of the learning activity, which aligns with the "open" property of the "open wicked problem". With the learners at its center, CoLAB learning is not understood as an instructional design, but as an entangled activity and complex social process where every factor jointly defines the problem. As [START_REF] Rittel | Dilemmas in a general theory of planning[END_REF] pointed out, understanding a wicked problem is inseparable from solving it. The understanding needs to incorporate the solving as part of the problem. At the same time, solving and practicing is one way to understand the problem. This understanding of CoLAB is the reason for our research focus and method to incorporate a practice-based perspective.

• Research Motivation 2: The Pragmatic Purpose
There exists a growing need for pragmatic guidance for a diversity of practitioners and learners of CoLAB. CoLAB activities are becoming more and more popular in universities and research centers. As exemplified by the SDG learning initiatives, the new model of learning is supported by universities and research councils as part of the official pedagogy. They spend a lot of research funding and teaching resources on these kinds of learning projects, which is not at all routine in the traditional university model. The organizers are diverse: there are professors who are interested in the new model of learning, professors who have been working on boundary-crossing subjects, research council managers who promote educational innovation, and even learners who feel the need to collaborate and change. The diversity of practitioners and learners requires the pragmatic knowledge of how to implement CoLAB to be easily accessible, especially for those who have just shifted from the traditional educational system or those who have not changed their mind but are given the task of establishing a CoLAB-like learning initiative. They will need to understand the study of CoLAB without scrutinizing a complicated theoretical background. The gap between the theoretical ground and the practice needs to be filled.

Research Questions

To conclude, with the above two considerations of CoLAB learning, we hold a practice-based and pragmatic motivation for our research, which is manifested in the following aspects:
- We focus on the complexity of CoLAB in its practice. We see CoLAB as a wicked problem rather than an "efficacy" problem.
- We ask questions about CoLAB and try to understand them from a practice-based and insider perspective. We focus on the real-world challenges of CoLAB practice: firstly because it is the preferred way to study an open wicked problem, and secondly because it is useful in filling the gap between practitioner and researcher, so that the pragmatic purpose can be fulfilled.
- We use qualitative research methods such as ethnography and grounded theory, which are widely used to tackle complex problems in social and design research. We use these methods to study the complexities of CoLAB rather than to reduce them.
- We regard our research as a useful tool to facilitate practitioners and learners who practice CoLAB. We understand that most practitioners do not have profound knowledge of learning and educational theories.
Therefore, we are motivated to build this pragmatic tool based on our practice-based understanding so that it will best fit the practitioners' use. With this motivation, we further develop our initial set of research questions as follows:
- What are the key challenges in CoLAB practice? How do they influence the process of CoLAB?
When we focus on the practice of CoLAB, we immediately notice that CoLAB learning is not easy. For organizers and learners, there are very often challenges in the process. Trying to understand these challenges became our initial motivation for this study: what are the key challenges? What factors influence these challenges? How do these challenges influence the organizers and learners and the process of CoLAB?
- How do CoLAB practitioners and learners act, react and interact when facing these challenges? How do their actions, feelings, and cognitions influence the process of CoLAB?
As CoLAB is a dynamic process, the challenges are not static. They evolve as the practitioners and learners act and react against them. We want to understand the practitioners' and learners' cognitive and interactional behavior in the CoLAB learning process, especially in its key moments, for example, when facing difficulties.
- Who are the CoLAB practitioners and learners? How does their social existence influence their CoLAB process? And how does CoLAB influence them?
The third part of the questions concerns the socio-cultural aspect and its relation to the practice. In the era of open wicked problems, learning is no longer an isolated activity in class. Therefore, the socio-cultural aspect of the learners is essential in understanding the CoLAB process. We need to understand who they are, why they come to CoLAB and how CoLAB develops them as learners.
- In practice, after better understanding CoLAB, how can practitioners better reflect on the process? How can we improve CoLAB learning to reach our goal of solving the open wicked problems? How can we facilitate practitioners who are new to CoLAB learning?
The final part of the questions indicates the pragmatic purpose of this research. One of the key issues of CoLAB learning is the evaluation of the process. As CoLAB is an open wicked problem, the evaluation is not "tame" either. How practitioners can reflect on and evaluate their practice is one of the big questions for us. We need to constantly improve CoLAB, as every CoLAB is unique. The final purpose of this research is to facilitate the practitioners to better understand and reflect on their practice and to improve it in the future. We use these questions to set up the initial frame of our inquiry and the purposive sampling in this thesis. But as this is qualitative research based on grounded theory (see Chapter 4 for more detail), these questions will develop and refine themselves as we unfold our research by sampling and analyzing. The purpose of presenting the research motivation and questions in this section does not mean we will mechanically go through these questions through step-by-step logical theorizing or hypothesis testing; we do not follow a logic-deduced method. Instead, the purpose is for the readers to understand what our main focused questions are and why they are important questions to be studied in relation to the general background and motivations of our research. The following illustration (Fig. 0.2) presents the relation map of the research background, motivation and questions.
Key Research Findings: The "Co-Meaningfulness"

In this section, we will give a brief introduction to our key findings, centered on the concept of "Co-Meaningfulness" in CoLAB. As we have explained in the research questions, we start our inquiry by observing and analyzing the key challenges of CoLAB. "Co-Meaningfulness" emerges as a key concept in our analysis of CoLAB challenges when students find that communicating on the "project" level alone is not enough, and that they have to communicate and negotiate which part of the project is more "meaningful". This process is especially salient when the group faces challenges in the collaboration. The challenge brings the seemingly divergent opinions and understandings of the "Project" to a more profound level of coordinating different "Meaningfulness" and constructing a "Co-Meaningfulness", a term we define as the collective, project-related and developing Meaningfulness of the learners. The phenomenon of constructing "Co-Meaningfulness" is discovered as an underlying but significant process in CoLAB learning. "Meaningfulness-Meaningfulness Coherence" ("M-M Coherence") and "Project-Meaningfulness Intensity" ("P-M Intensity"), two properties that characterize "Co-Meaningfulness", are discovered through comparative analysis in finding out why and how the learners' "Co-Meaningfulness" changes over time. "M-M Coherence" means how coherently the group members' meaningfulness relate to each other. "P-M Intensity" means how actively group members engage their meaningfulness in their project: e.g. are they fully engaging or compromising their meaningfulness, are they efficiently communicating their meaningfulness so that it will influence the joint project, etc. These two properties are then used as two axes to build up the "Co-Meaningfulness" quadrant map, on which the CoLAB process can be anchored with a Co-Meaningfulness trajectory. The "Co-Meaningfulness", the properties of "M-M Coherence" and "P-M Intensity", and the Co-Meaningfulness trajectory are the key results when we take a socio-cognitive perspective in understanding CoLAB. The discovery of Co-Meaningfulness and the framework of tools to understand CoLAB lead to the inquiry of what exactly "meaningfulness" means, which can hardly be answered with the socio-cognitive perspective alone. As we continue to analyze a long-term CoLAB workshop series, we conclude three key processes related to "Meaningfulness" from our coding and analyzing. They are the "Evoking", "Applying" and "Prioritizing" processes. "Evoking" and "Applying" indicate that the working project "Evokes" the learner's past related experience and associated social meaning, which the learner "Applies" to the current project. "Prioritizing" means there is not necessarily just one past project and one associated meaning to evoke and apply, and the final meaningfulness is the result of prioritizing the most relevant Project and Meaningfulness, according to the learner's intrinsic priority and the situational and environmental factors of the learning scenario. "Evoking", "Applying" and "Prioritizing" connect "Meaningfulness" to the socio-historical and socio-cultural perspective of CoLAB. When the learners implement multiple evoking, applying and prioritizing, they construct the Co-Meaningfulness. And the Co-Meaningfulness develops as the learners constantly re-evoke, re-apply and adapt their prioritizing.
The "Co-Meaningfulness" is a framework based on our coding and grounded theory with concepts, properties and visualization tools. The main contribution is two-fold: First, it is an easy understandable tool to identify the key challenges in CoLAB during and after CoLAB, which will help practitioners to improve their practice. Thesis Structure In this section, we will introduce the structure of this thesis, which provide readers with necessary guidelines for reading. • INTRODUCTION We explain the general background, motivation and questions for our research. The general background is three-fold: first, the "open wicked problems" become the pressing problems of our time, as our science and technology problems are inevitably entangled with complex social realities; second, the emerging "open wicked problems" challenge our learning and educational system, and the traditional model based on teaching and vertical learning is not sufficient; third, as policy makers, the industry and educators notice the challenges, new learning models are emerging. At the end of this chapter, we summarize the problems that we aims to solve in this thesis -we want to better understand the process of CoLAB with a practice-based perspective and we try to develop an analytical tool for practitioners who have trouble in understanding and implementing the new model of learning. • PART I. LITERATURE REVIEW AND METHODOLGY Though the CoLAB is new, it takes roots in different theoretical and practicebased research in the literature of education and learning sciences. We specifically review two key concepts of CoLAB, i.e. collaboration and boundary-crossing, and their crossing with the concept of "learning". The concept of "collaboration" and "learning" was not interrelated in the traditional education practice and research, which mainly took an individual and competitive paradigm. The study of "cooperative learning" began to analyze the phenomenon of learning in a team and has generally proved its better performance than individual learning. The study of "collaborative learning" stepped further trying to understand the learning as a process to co-construct knowledge based on constructivism. The focus shift to the process, the interaction and the socio-cultural aspect of learning in the form of collaboration. The crossing between "Boundary-crossing" and "learning" are most in interdisciplinary practices, as the disciplines are the most important form of boundary in education and learning. We review and different form of interdisciplinarity, their respective philosophy and their impact on CoLAB. In addition to the discipline boundary, we review expansive learning which transcends boundaries based on learners' prior experience and social existence. We apply the constructivist grounded theory (C-GT) in our thesis. In PART I, we review specific method of constructivist grounded theory as well as the constructivist paradigm of qualitative research methodology in general. The constructivist paradigm of qualitative research method constructs the interpretive knowledge of the social actors' action, thoughts and their social and cultural context through empathizing, understanding and interpreting. Therefore, the researcher's social and political existence, viewpoints, purpose and interests are important part of construction of reality. 
The rationale for using constructivist qualitative research is explained in Chapter 4.2: both the researcher's practice of facilitating participants from different backgrounds and the learner-centered CoLAB learning itself require the researcher to take a constructivist perspective. Therefore, the constructivist qualitative method matches the research content. More specifically, we apply constructivist grounded theory for constructing the knowledge of CoLAB. The grounded theory method has the advantage of generating theory from practice, which matches our practice-based motivation, and the pragmatic purpose is also one of the key pursuits of research based on grounded theory. In the last part of PART I, we introduce in detail the method of constructivist grounded theory (C-GT), which includes a general introduction to the different components of C-GT, as well as a step-by-step procedure of C-GT with examples from [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF]. This will help readers who are not familiar with the C-GT method to better understand how we construct the knowledge from the front-line practice and finally generate the theory.

• PART II. THE DISCOVERY OF CO-MEANINGFULNESS IN COLAB
Our main analysis starts from PART II. The first part of PART II (Chapters 5, 6 and 7) focuses on the socio-cognitive process of CoLAB, which means the microscopic collaboration and learning is at the center of our focus. The case of the "Jumping Video" is purposively sampled as it contains complex challenges and rich interactions in overcoming the challenges. The data include a full transcription of the conversations of the 4-day workshop as well as post-workshop interviews with the participants. If the reader is interested in the details of the "Jumping Video", it is recommended to read the full transcription; otherwise, we provide a brief introduction to the "Jumping Video" project, including the workshop, the mentors, the participants, and the general process of the workshop and its result. The analysis follows the method of constructivist grounded theory with a three-step coding and conceptualizing. "Co-Meaningfulness" emerges as a key concept in the coding and conceptualizing when students find that communicating on the "project" level alone is not enough, and that they have to communicate and negotiate which part of the project is more "meaningful". The phenomenon of constructing "Co-Meaningfulness" is discovered as an underlying but significant process in CoLAB learning, especially salient when the group faces challenges in the collaboration. "Meaningfulness-Meaningfulness Coherence" ("M-M Coherence") and "Project-Meaningfulness Intensity" ("P-M Intensity"), two properties that characterize "Co-Meaningfulness", are discovered through comparative analysis in finding out why and how the learners' "Co-Meaningfulness" changes over time. "M-M Coherence" means how coherently the group members' meaningfulness relate to each other. "P-M Intensity" means how actively group members engage their meaningfulness in their project: e.g. are they fully engaging or compromising their meaningfulness, are they efficiently communicating their meaningfulness so that it will influence the joint project, etc. These two properties are then used as two axes to build up the "Co-Meaningfulness" quadrant map, on which the CoLAB process can be anchored with a Co-Meaningfulness trajectory.
The "Co-Meaningfulness", the properties of "M-M Coherence" and 'P-M Intensity" and the Co-Meaningfulness trajectory are the key results when we take a socio-cognitive perspective in understanding CoLAB. Only the socio-cognitive perspective is not enough in understanding CoLAB. In the second part of PART II(chapters 8-11), we enter the socio-cultural aspect of CoLAB and the concept of Co-Meaningfulness, which concerns who these learners are, what they bring from the past learning and how do they engage them in the CoLAB, and what impact will the learning has upon the future and the development of the learner. By incorporate these issues, we will complete the grounded theory with a special focus on the socio-cultural perspective. We continue to use constructive grounded theory for this perspective. We use data based on an ethnographic study of a workshop series over one year, among which two typical workshops in the series are highlighted. Data includes notes, memos, voice recordings, formal and informal interviews, all documented in the appendix. Again, if readers are interested in the detailed ethnography, they can refer to the data in the appendix, otherwise, we also present a brief ethnography of the workshop series in the beginning of this part. Through analysis with grounded theory, we generate the basic pattern of "Evoking" and "Applying", which means that the meaningfulness is not purposely created to fit the "Project" at hand, but generated when learners evoke a (or multiple) socio-historical "Project" and apply its socio-cultural "Meaning" to the current Project, and the underlying pattern of "Prioritizing", which determines what "Project" to evoke and what "Meaning" (and how) to apply. With this understanding, Co-Meaningfulness is not only concerned with "Project" and "Meaning", but also a clashing of different leaners' "evoking" "applying" and "prioritizing" process. With focused coding, we are able to conclude a three-level framework of "Co-Meaningfulness" from both the sociocognitive and socio-cultural perspective, which provide an easy analytical tool for practitioners with which they can diagnose their CoLAB learning. In the concluding chapter 12, we conclude the whole grounded theorizing journey -how we develop from the first set of categories and complete the framework with socio-cognitive and socio-cultural perspectives, with a consistent visual presentation. • CONCLUSIONS In the CONCLUSIONS, we first summarize the whole thesis-the background, the motivation, and the questions and how our results of Co-Meaningfulness is an answer to these questions. We then conclude our key contributions of this thesis and the framework of Co-Meaningfulness: 1. Co-Meaningfulness helps to identify and understand the key challenges in CoLAB; 2. Co-Meaningfulness helps to understand the developmental role of CoLAB and help catalyze the chain reaction of CoLAB. We finally present the prospects of using "Co-Meaningfulness". Besides the two main contributions, we propose using Co-Meaningfulness as the basis for developing further grounded theory for other topics concerning CoLAB, and for extending to objectivist and bigdata based study. PART I. LITERATURE REVIEW AND METHODOLOGY In the INTRODUCTION we introduced the general background of this thesis: the "Open Wicked Problems" are challenging our current learning and education system, and the new learning models are emerging to face these challenges. 
We specifically focus on a new form of learning featured by "collaboration" and "boundary-crossing" in solving "open wicked problems", which we name "CoLAB". In the INTRODUCTION, we discussed the "open wicked problems" and their relation to the new learning model, while in PART I, we review the historical research related to the other two features of CoLAB, namely "collaborative learning" and "boundary-crossing learning", in order to frame our topic, CoLAB, in the research literature. As CoLAB is by itself a wicked problem, the related literature also covers a variety of research based on different theoretical and epistemological grounds. The literature review is an integration that frames and illuminates our research on CoLAB.
In Chapter 1, we review the field of collaborative learning and its relation to CoLAB. Under the name of "collaborative learning", there are actually different perspectives on understanding the phenomenon of "people learning together". The different perspectives reflect different foci and epistemologies. The "cooperative learning" perspective focuses on "intervention" and "performance", and is mainly based on an experimental psychology paradigm. The "socio-cognitive" perspective focuses on interaction and the process of constructing knowledge through interaction, while the "socio-cultural" perspective regards learning as a social activity, focusing on its role in a wider context, e.g. regarding learning as a social activity of participating in a community. Both the "socio-cognitive" perspective and the "socio-cultural" perspective inform our analysis of CoLAB.
In Chapter 2, we review the research related to the feature of "boundary-crossing". The first boundary we discuss is the disciplinary boundary. The study of interdisciplinarity categorizes the different forms of boundary-crossing in interdisciplinary learning. Besides the disciplinary boundary, in CoLAB, learners often have different social, cultural and political backgrounds, and these differences based on learners' prior experience and social existence set up the secondary boundaries for the learners, which they will need to cross in the collaborative learning. We further review the field of "expansive learning", which focuses on transcending this second boundary of learning. The historical study of "collaborative learning" and "learning across boundaries" provides a broad research background from which CoLAB can draw theoretical inspiration. At the same time, as an emerging learning model, the study of CoLAB also points out some promising new directions for these fields of research.
As this thesis is a grounded theory study, the function of the literature review is different from that in traditional social research based on deduction. The general roadmap of this research is "bottom-up" rather than "top-down", meaning that we start with our ethnographic data, analyze them, and finally generate our grounded theory. The literature review here does not serve as the theoretical frame from which the researcher selects one theory, makes a hypothesis and then tests it with empirical data. Instead, the literature introduces the prior network of research that is relevant to our research focus and questions, frames the research of CoLAB and its contribution within that network, and at the same time equips the researcher with in-depth theoretical sensitivity when conducting the grounded theory.
Therefore, the literature we review here consists of important references and theoretical inspirations, but not necessarily a theoretical frame that we follow in deduction and testing. In this thesis, we use qualitative research methods to study CoLAB learning. Qualitative methods in general are reviewed in Chapter 3, followed by Chapter 4, which reviews the specific method of constructivist grounded theory.
Chapter 1. Collaborative Learning
The history of learning and education is mainly based on an "individual" and "competitive" model. Students learn in a classroom, everyone with their own textbook and assignments. Exams follow, when everyone sits by themselves, answering their exam paper, which is to be graded for individual learning success or failure. "Collaboration" is largely absent from this learning scenario. But since the 1970s, "collaboration" has become more and more significant in learning, because we rely more and more on collaboration in working situations, which pushes educators to regard collaboration as an important aspect of education.
"There are many different kinds of learning theory. Each emphasizes different aspects of learning, and each is therefore useful for different purposes. To some extent these differences in emphasis reflect a deliberate focus on a slice of the multidimensional problem of learning, and to some extent they reflect more fundamental differences in assumptions about the nature of knowledge, knowing, and knowers, and consequently about what matters in learning." (Wenger, 1999, p. 4)
The first wave of "Collaborative Learning" (CL) research emerged under the name of "Cooperative Learning" as a special topic in educational psychology [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. Although many researchers now distinguish cooperation from collaboration [START_REF] Baker | Collaboration in collaborative learning[END_REF], the early "Cooperative Learning" research is an integral part of CL research. Therefore, we use "CL / Collaborative Learning" to represent the field in general, without excluding the research effort under the name of "Cooperative Learning". The early "Cooperative Learning" research mainly studies in-class scenarios where students are guided to learn in groups [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF]. This pedagogical method was compared with individual or competitive models where students in class are only responsible for their own learning performance (or for overperforming others in a competitive model). Johnson et al.'s meta-analysis [START_REF] Johnson | Cooperative learning methods: A meta-analysis[END_REF] of over a hundred research works has demonstrated that the "group learning" model is in general better than the individual/competitive model in terms of performance [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. This successful research story has inspired many researchers to follow the trend and conduct their own experiments to find out what kinds of interventions help in CL, and why and how they help. A large body of research as well as intervention methods has been produced, making a significant contribution to CL research.
But as we step into the new century, educators emphasize not only traditional performance but also more complex competencies such as collaborative creativity [START_REF] Sawyer | Extending sociocultural theory to group creativity[END_REF] [START_REF] Hämäläinen | Theoretical and pedagogical perspectives on orchestrating creativity and collaborative learning[END_REF]. This change has drawn more and more interest in the constructivist view of education rather than the traditional instruction-based view. It is true that in the cooperative learning wave many have referenced Piaget and constructivism, but the emphasis is mostly on seeing cooperative learning as an instruction method, not necessarily as learner-centered and constructivist. The new education promoting learner-centered pedagogy, e.g. Project-Based Learning, is not the key emphasis of cooperative learning. The constructivist view pays more attention to students' own cognitive process of constructing knowledge rather than passively accepting, understanding and transferring knowledge [START_REF] Reusser | Co-constructivism in educational theory and practice[END_REF]. When taking a constructivist view on CL, researchers are naturally not satisfied with studying intervention and performance but are instead interested in understanding in depth the cognitive processes of individuals and groups in constructing knowledge. Among the researchers with this view, two perspectives are most salient: 1. the constructivist developmental view originating from Piaget pays close attention to cognitive processes in CL scenarios through clinical observation [START_REF] Golbeck | Implications of Piagetian theory for peer learning[END_REF], and 2. influenced by Vygotsky's socio-cultural developmental perspective, many have extended socio-cultural theories into CL research [START_REF] Wells | Learning for life in the 21st century: Sociocultural perspectives on the future of education[END_REF]. These two perspectives, together with the educational psychology tradition, are considered the most active perspectives in CL research. In the meantime, CL research also takes in corresponding methodologies as necessary tools. These methods are aligned with the respective world views and epistemologies; e.g. scholars who share the socio-cultural perspective adopt ethnographic methods in studying the cultural influence on collaborative learning, while those who focus on the socio-cognitive perspective adopt methods based on interactionism, such as conversation analysis. The three waves of CL research have all made significant contributions to understanding the phenomena of "Collaborative Learning". But they stand on different ontological stances in understanding what "Collaborative Learning" is or, more deeply, what "learning" is, applying different epistemological tools in studying the phenomena. In chapters 1.1, 1.2 and 1.3, we will respectively review the "three waves": "Cooperative Learning", the "Socio-Cognitive Perspective" and the "Socio-Cultural Perspective". We will give the context of their respective theoretical grounds, their major topics and results, and their research methodologies.
The "Cooperative Learning" Perspective
The Questions of Cooperative Learning
The early efforts of the "experimentalists" who study students learning in groups often use the name "Cooperative Learning". The Cooperative Learning perspective is an early research effort in educational psychology/educational research mainly focusing on "learning in groups" as a pedagogical method in classrooms.
A typical Cooperative Learning scenario is a teacher asking students to learn in small groups, setting up specific goals with specific cooperative learning procedures. This was not a popular method several decades ago. Dating back to the 1970s and before, the strongly competitive model with a focus on individual development was the dominant method in pedagogy [START_REF] Johnson | Cooperative learning methods: A meta-analysis[END_REF]. Since the 1970s, thanks to CL practitioners' and researchers' continuous efforts, cooperative learning has gradually grown in importance, and now it has become an indispensable method in education. Its use, as we all observe, permeates K-12 and university education around the world. The research on Cooperative Learning, growing together with the blooming of the pedagogical practice, asks questions about (1) why learning in groups works (or not), especially compared to the "individual" or the "competitive" model widely used in traditional classes; and (2) how we can improve the pedagogy, e.g. how teachers can give better instructions to facilitate cooperative learning, etc. [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF] These questions reflect a common understanding of Cooperative Learning research: that effective cooperative learning is much more than just putting students together. A lot has to be considered concerning students' social-psychological status and individual learning processes. What social-psychological status promotes meaningful interaction that leads to learning? What instructional processes help students to conduct better social activities? Etc. Cooperative Learning researchers aim to decode these questions by understanding the mechanisms behind the cooperation and by testing interventions in class under different conditions. Now, with a century's development, especially the blooming since the 1970s, Cooperative Learning research has developed substantially, and researchers claim it to be "one of the greatest success in the history of educational research" (Slavin, 1996, p. 43). The collective research efforts have shown convincingly that the cooperative learning method in general outperforms the "individual" and the "competitive" models. [START_REF] Slavin | Research on cooperative learning: Consensus and controversy[END_REF] [START_REF] Johnson | Cooperative learning methods: A meta-analysis[END_REF] [START_REF] Gillies | Cooperative learning: Review of research and practice[END_REF] Or simply put, people learn better when in a group. This growing consensus can be better elaborated by answering the following key questions:
- What do we mean by "better"?
- Under what condition do we mean "learn in a group"?
- Why do "people learn better when in a group"?
- How can people learn even better than just being in a group?
Answering the first two questions in the "Cooperative Learning" perspective gives context to Cooperative Learning research, while the last two describe the key research questions. We will give short answers to the first two questions and explain the essential findings for the last two in the following sections. For the first question, the criterion of "better" in the "Cooperative Learning" perspective is relatively concrete.
Although some other dependent variables were sometimes considered [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF], researchers largely use "performance" to test and evaluate whether and how "better" learning occurs (or not) under cooperative learning conditions. This choice is natural and effective, and especially useful when one wants to compare different research results. Meta-analyses in the history of cooperative learning have compared and evaluated the different cooperative learning approaches proposed and studied in more than 1000 research papers [START_REF] Johnson | Cooperative learning methods: A meta-analysis[END_REF]. This large scale of research and meta-analysis would not be possible if not for the concrete criterion that determines a "better" achievement of the students. For the second question, the condition of "learning in a group" is usually studied in a classroom setting in Cooperative Learning. The teacher encourages students to learn with their peers (normally 2-5 students in a group) in a face-to-face manner. The mutuality (how deeply students interact) and equality (whether students have equal power) of the group learning vary from situation to situation [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF]. The third and fourth questions are, however, less obvious to answer. Cooperative Learning researchers have investigated these two questions for decades. The most mature and widely accepted answer to the "why" question concerns the motivationalist perspective of social interdependence. This perspective attributes the positive learning effect to a "positive social interdependence" situation in which individuals regard their own goal as dependent on the achievement of the team's goal, and therefore make efforts to help their peers learn [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. This motivational perspective has inspired a large body of research which constitutes the major part of the "Cooperative Learning" perspective.
The Social Interdependence Theory
Social interdependence exists when the outcomes of individuals are affected by their own and others' actions [START_REF] Johnson | Cooperation and competition: Theory and research[END_REF]. In cooperative learning scenarios, this means that a learner's achievement of individual goals depends not only on their own efforts but also on their group members' efforts. The emphasis on interdependence differentiates it from social dependence situations, where only a one-directional dependence affects the result (member A is affected by member B but not the other way around). A successful cooperative learning system needs a positive interdependence situation, where group members perceive that they can achieve their individual goals if and only if the goals of the group (as an entity as well as of each individual in the group) are achieved [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. A competitive learning situation, by contrast, might see the reverse: a negative interdependence, where members believe that achieving their individual goal depends on the others failing to achieve theirs. The motivational perspective has been used to answer the "why" question we posed in the previous section: why do "people learn better when in a group"?
As explained by social interdependence theory, group members who understand their correlated goals are self-motivated both to help their groupmates and to encourage them in making efforts to accomplish the learning task [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF]. Therefore, much research is concerned with structuring common goals and rewards, which are essential [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF]. To be more specific, researchers have identified the following aspects as sub-mechanisms of positive interdependence that account for the better learning effect: individual accountability and personal responsibility, promotive interaction, the appropriate use of social skills, and group processing [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF].
- Individual Accountability and Personal Responsibility
Positive interdependence can create the individual accountability and personal responsibility that motivate group members to make efforts [START_REF] Deutsch | A theory of co-operation and competition[END_REF]. Once positive interdependence is established, it creates an obligation for each individual in the group to make efforts and contribute to the group task. The obligation is not only to complete one's own work but, more importantly, to help and encourage others in the group to finish their share of the work. This responsibility structure binds personal responsibility to group performance and is therefore socially structured. If the group is positively interdependent, everyone who feels responsible for achieving their personal goals also feels responsible for the group performance, which includes their group members' performance [START_REF] Wentzel | Relations of social goal pursuit to social acceptance, classroom behavior, and perceived social support[END_REF].
- Promotive Interaction
The second mechanism that contributes to group performance is promotive interaction. Promotive interaction refers to those actions by which individuals encourage and facilitate other members of the group in achieving the group goal. As we have explained, personal responsibility leads not only to self-motivated learning but also to socially promoted learning. This socially promoted learning is implemented through promotive interaction. Aside from the socially coordinated responsibility, the interdependent reward structure is explained to be another reason for promotive interaction (e.g. praise and encouragement based on members' efforts) [START_REF] Slavin | Cooperative Learning[END_REF].
- Appropriate Use of Social Skills
Another important mechanism related to interdependence is the appropriate use of social skills. Often regarded as one of the 21st-century competencies, the ability to use social skills appropriately is essential to cooperative learning and better performance. The fact that positive interdependence promotes a more appropriate use of social skills is another reason why the cooperative model works. Research has shown that group members who know, trust and support each other better outperform those who do not [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF].
Scholars sometimes relate this element of social interdependence to the concept of "social cohesion", by which they explain that the motivation of the group lies not only in the "group goal" or in "accomplishing the task" but also in the good of the group per se. Therefore, the more cohesive the group is, the better it will perform, because the students will help each other learn as they care about each other [START_REF] Slavin | Research on cooperative learning and achievement: What we know, what we need to know[END_REF]. The social skill aspect of interdependence justifies team-building activities before and after the cooperative learning, a direct way to train the appropriate use of social skills and to improve social cohesion [START_REF] Cohen | Designing groupwork: Strategies for the heterogeneous classroom third edition[END_REF] [START_REF] Sharan | Expanding cooperative learning through group investigation[END_REF].
- Group Processing
Group processing means members reflect on the cooperative process in order to understand which actions benefit the cooperation, and thus the learning in general, and which actions are not helpful. Group processing is a socially coordinated action resulting from interdependence, and it in turn iterates and improves the interdependence structure. Research has found that group processing helps increase not only performance but also motivation, social cohesion in the group, and members' self-esteem [START_REF] Archer-Kath | Individual versus group feedback in cooperative groups[END_REF]. Group processing is sometimes teacher-induced and sometimes student-initiated. Research has found that the combination of instructor-comment-format and student-discussion-format processing outperforms either process alone or no processing at all [START_REF] Johnson | Cooperation and competition: Theory and research[END_REF].
To conclude, social interdependence has been widely applied by many researchers in explaining why cooperative methods are effective in terms of better performance, especially compared to the individual and competitive models. This research focuses on how the motivation and reward structures influence the social-psychological status of the group and therefore influence the cooperative learning through different sub-mechanisms such as personal responsibility, promotive interaction, appropriate use of social skills and group processing. These elements are key to structuring positive interdependence [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF], and hence key to establishing successful cooperative learning: in the end, cooperative learning is much more than just putting students together.
How can people learn even better than just being in a group?
The fourth question is a pedagogical concern. Once theorists have provided and proven the relevant theoretical insights, the next question naturally revolves around "how to achieve such success". In the study of cooperative learning, there is not only interest in understanding "why it works", but also great interest in actively creating and testing pedagogical methods based on the understanding of the "why" question. "Researcher-developers" [START_REF] Johnson | Cooperative learning methods: A meta-analysis[END_REF] see the relationship between practice and theory as a two-way relationship, in which practice is guided by theory while theory is improved by practice.
Many pedagogical methods have been proposed during the past decades and have been tested with lab experiments or field studies. The following reviews methods that have been widely applied and studied in the literature, mainly in North America during the last decades:
- Complex Instruction (CI) [START_REF] Cohen | Designing groupwork: Strategies for the heterogeneous classroom third edition[END_REF]
The Complex Instruction (CI) method was developed mainly for heterogeneous classrooms. The purpose of CI, besides promoting cooperative learning, is to increase equality in the cooperation. Many have observed situations where students learning in groups do not work equally, as they are inherently heterogeneous in terms of ability, knowledge, intelligence, etc. CI is designed to confront this issue by introducing: (1) curricula that encourage higher-order thinking, often around a central topic/concept that is open-ended and requires multiple abilities of the students; (2) specific instructional strategies that help the students participate better, with cooperative norms, role play, etc. The CI method emphasizes that every member of the cooperative group should learn with equal access and opportunity, regardless of their inherent differences. The complex central topic makes sure that no group member with specific outstanding knowledge can become the "dominant star" who suppresses others' voices or chances. The complexity also ensures that students of diverse abilities can participate with their own advantages. CI requires teachers to pay specific attention to unequal situations and to address the team status with instructions that convince students that everyone can make important contributions to the learning.
- Constructive Controversy (CC) [START_REF] Johnson | Conflict in the classroom: Controversy and learning[END_REF]
The Constructive Controversy (CC) approach focuses on the controversy emerging from the cooperation. The phenomenon of "controversy" refers to situations in which one group member has ideas, concepts, or conclusions that are incompatible with those of another member of the group. Constructive Controversy requires the two members with conflicting views to try to constructively resolve the controversy and reach an agreement, and the learning happens in the process of resolving it. From a cognitive perspective, the situation of "controversy" introduces a disequilibrium of inter-personal cognition in the group, and therefore stimulates the "epistemic curiosity" of individuals to explore another individual's conflicting ideas, concepts and thoughts, which is a starting point for new insight and discovery. The controversy situation is an opportunity for learning if appropriate "controversy mediation" is present. For example, researchers find strategies such as "perspective-taking" effective compared to "egocentrism" [START_REF] Johnson | Conflict in the classroom: Controversy and learning[END_REF]. Other conditions include a cooperative context, skilled disagreement, rational argument, and active participation [START_REF] Johnson | Constructive controversy: The value of intellectual opposition[END_REF], etc.
- Group Investigation (GI) [START_REF] Sharan | Expanding cooperative learning through group investigation[END_REF]
Group Investigation focuses on "investigation" in group form as the major part of learning. In group investigation, students actively participate in the investigation, and teachers facilitate the group investigation in more interactive ways than whole-class instruction. The GI method is characterized by 6 steps in 3 stages.
In the first stage, students mainly identify a central topic for their investigation while forming and organizing their group. The second stage is marked by activities that plan the investigation, followed by a third stage in which students actually implement the investigation. The steps in the first stage are the most complicated; they include: (1) an exploratory step in which the teacher presents a broad and multifaceted theme; (2) students formulating and choosing different subtopics by, e.g., asking questions; (3) the teacher giving and presenting suggestions on the sub-questions; (4) students categorizing their questions; (5) students reviewing all the questions proposed and joining the group whose subtopic they are most interested in. The GI method leaves space for students to investigate their own interests, and it has been proven effective for academic performance in several studies. [START_REF] Sharan | Group investigation expands cooperative learning[END_REF]
- The Jigsaw Classroom [START_REF] Aronson | The jigsaw classroom[END_REF]
The Jigsaw Classroom (Jigsaw) is a pedagogic technique developed for better small-group interaction in cooperative learning. The Jigsaw proposes, just like the jigsaw puzzle, that every piece contributes an indispensable part to the whole picture, and every student is like one of those pieces. In the Jigsaw method, 5-6 students form a "Jigsaw group", in which each member learns only one segment of the whole lesson. Then the students break up the "Jigsaw group" and form temporary "expert groups" with those who learn the same segment. In the "expert group", all the members therefore learn the same segment, and they exchange and prepare how they can present it to their original "Jigsaw group" so as to finish the picture (the jigsaw puzzle). In the last step, the "expert groups" break up and students go back to their own "Jigsaw" group and present what they have learned. As each member owns one part of the knowledge, everyone is important for the "Jigsaw group" to get a whole picture of the entire class.
- Student Teams Achievement Divisions (STAD) [START_REF] Slavin | Student teams and comparison among equals: Effects on academic performance and student attitudes[END_REF]
The Student Teams Achievement Divisions (STAD) method puts motivational interdependence directly into practice. STAD structures a reward system that rewards individuals with the average score of the group. In STAD, students work in a group on a task provided by the teacher. In the next step, students take individual quizzes on the task. The individual score is, however, not the final destination: students have to make sure that everyone in the group improves their score and has learned. Therefore, the main activity in the group is socially structured because of the reward. Team members focus on helping each other by explaining, practicing and encouraging their peers.
Many more approaches have existed in the history of cooperative learning, and the above are only a small part of the pedagogy. To conclude, the Cooperative Learning perspective over the past century has been an important reference for the Collaborative Learning field. Although much research was done under the name of "cooperative learning" and some researchers try to distinguish cooperation from collaboration, there are clear overlaps in the field.
One of the most prominent contributions of Cooperative Learning is that, with some hundreds of studies, researchers have convincingly shown the effectiveness of "learning in a group" compared to "learning alone" or "learning competitively". More research has been done to examine which intervention works and why. Social interdependence theory stands as the most important explanation in the "Cooperative Learning" perspective. It not only gives answers to the why question, but also guides practice. Some warn that cooperative learning research should not be reduced to a singular perspective. E.g., in his review of the future problems of cooperative learning, Slavin writes:
"In particular, there are researchers who emphasize the changes in incentive structure brought about by certain forms of cooperative learning, while others hold that changes in task structure are all that is required to enhance learning. The problem is that applications of cooperative learning typically change many aspects of both incentive and task structures, so disentangling which is responsible for which outcomes can be difficult." (Slavin, 1996, p.44)
The "Socio-Cognitive" Perspective
The second perspective arises with the idea that Collaborative Learning is "centrally concerned with meaning and the practices of meaningmaking" (Koschmann, 2002, p.20). Seeing "collaborative learning" as the practice of meaning-making immediately brings a new perspective. The traditional perspective on education regards "learning" as essentially a knowledge-acquisition practice [START_REF] Sfard | On two metaphors for learning and the dangers of choosing just one[END_REF]. Knowledge is transmitted from the more knowledgeable agent to the less knowledgeable through instruction. Following the "acquisition" metaphor of learning, collaborative learning is no more than knowledge acquisition in a group setting. But the notion of "meaning-making" is inherently different: it places the learner at the center of the "learning" activity and highlights learners' role in constructing knowledge. This view aligns with the constructivist developmental view of learning, from Piaget to Vygotsky (Piaget & Cook, 1952) [START_REF] Vygotsky | Mind in society: The development of higher psychological processes[END_REF]. Following the "meaning-making" metaphor of learning, collaborative learning is a process in which the group of learners constructs their joint knowledge through group meaning-making [START_REF] Koschmann | Dewey's contribution to the foundations of CSCL research[END_REF]. The move from the study of "outcome" to the study of "meaning-making" marks a great change in the focus of CL research. The "Cooperative Learning" studies pay great attention to comparing predefined "learning performances" under different predefined conditions. But the process through which these different conditions work is largely left unexamined, like a "black box". In the "meaning-making" view, by contrast, the learning outcome is less important than the socio-cognitive process. How do learners learn through meaning-making? How does interaction help them to (co-)construct knowledge? These questions all concern the central process in the second perspective: the socio-cognitive process of collaborative learning.
The change was a result of the recent trend towards a "learner-centered" view on learning, particularly in studies highlighting new technology interacting with humans, such as Human-Computer Interaction (HCI), Computer Mediated Communication, Computer Supported Collaborative Work, and Computer Supported Collaborative Learning [START_REF] Hmelo-Silver | The international handbook of collaborative learning[END_REF]. In the HCI field, for example, Suchman proposed the concept of situated action, which no longer regards interaction as a universal cognitive process planned beforehand but as situated actions happening between agents [START_REF] Suchman | Plans and situated actions: The problem of humanmachine communication[END_REF]. In CSCL, the introduction of technology as a medium challenged the traditional teacher-student paradigm. For example, Roschelle and Teasley studied students learning physics concepts, namely "velocity" and "acceleration", through a gamified computer program [START_REF] Roschelle | The Construction of Shared Knowledge in Collaborative Problem Solving[END_REF]. There are no traditional teachers in the learning scenario, and the two students have to play the game to learn. The whole learning process happens through their interaction with the computer as well as between themselves. The absence of teacher and instruction immediately places the "socially coordinated actions" at the center of investigation: how do they communicate with each other through playing? What actions and interactions mark learning through collaboration? With an in-depth micro-analysis of the conversation and interaction in great detail, they concluded that the students' co-construction of a "Joint Problem Space" is the core activity of their learning and cognition [START_REF] Roschelle | The Construction of Shared Knowledge in Collaborative Problem Solving[END_REF]. This particular emphasis on the socio-cognitive process is a typical paradigm of the second perspective on Collaborative Learning, which differs substantially from the traditional educational perspective that compares and evaluates the learning outcome: students' achievements. Besides the shift in research focus, the socio-cognitive process perspective brings in a few other distinguishable changes to "Collaborative Learning": 1. it is fundamentally linked with the co-constructivist view on learning; 2. it brings the division between collaboration and cooperation; 3. it has a strong intention to zoom in.
The (Co-)Constructivist View of Learning
The constructivist view on learning focuses on the learner constructing knowledge while building their mental structure through interacting with the environment. In this regard, constructivist pedagogy is often related to project-based methods. Learners learn in self-directed, hands-on activities while constructing their knowledge and knowledge structure (Piaget & Cook, 1952). The co-constructivist view of learning regards this construction as not isolated within one learner, but as happening in a shared space of knowledge, society and culture, and through interaction with peers [START_REF] Reusser | Co-constructivism in educational theory and practice[END_REF]. The constructivist view on learning is not entirely absent in the first perspective [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF], which, however, does not directly reflect the core of the constructivist view.
The socio-cognitive trend, to the contrary, directly investigates how students co-construct knowledge in a group. The co-constructivist view of learning brings a two-fold change to collaborative learning. The first, as we have elaborated, lies in the shift in research focus, while the second lies in the pedagogical shift towards a "learner-centered" direction. Again, the learner-centered view is not entirely absent in the "outcome" studies; in fact, we can find traits of "learner-centered" and "project-based learning" in many approaches. But this "learner-centered" approach is not the purpose of practitioners in the first trend. Their purpose is to adopt and compare predefined methods and evaluate better/worse performance. This to some degree contradicts the constructivist "learner-centered" view, which to a great degree centers on students' active construction with their prior knowledge. In the constructivist view, the process is studied with and through the students in a situated way, with the intention to escape the universality deduced from outcomes, for which the process remains a black box.
Collaboration as Deeper Mutuality
The second, "socio-cognitive" perspective is at the same time manifested in the change of the name of the field: from "cooperative" to "collaborative". The differences between the two notions were mentioned by [START_REF] Roschelle | The Construction of Shared Knowledge in Collaborative Problem Solving[END_REF] and elaborated by [START_REF] Baker | Collaboration in collaborative learning[END_REF]. In short, researchers of the second wave regard cooperation as a larger concept in which students learn through working together, e.g. through division of labor, while collaboration indicates scenarios where students deeply exchange on the ideas of the problem from the beginning [START_REF] Baker | Collaboration in collaborative learning[END_REF]. This consensus implies a shift in the focus of CL research: from low-level mutuality to high-level mutuality. Roschelle and Teasley define "collaboration" as "a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem (Roschelle & Teasley, 1995, p.70)". This definition has later been widely referenced in the Collaborative Learning field. In their elaboration, they specifically distinguish collaboration from cooperation in that collaboration emphasizes deep interaction and the co-construction of a joint problem space. [START_REF] Baker | Collaboration in collaborative learning[END_REF] elaborated further on the difference with the following imagined example:
"The (invented) example of three students (X, Y and Z) writing a short joint project report on flora and fauna in their local park will help to clarify this distinction between cooperation and collaboration. Students could cooperate on this joint task, in the sense of working towards the shared goal of producing the report by dividing up responsibilities for achieving subtasks: for example, "you, Y, write up the data; you, Z write the references and describe the park; I, X, will write the introduction and conclusion". Or they could collaborate, in the sense of studying the park, analysing data, writing the report together, side by side, from beginning to end, discussing as they go along, with respect to what they are trying to do, how they should do it and what is meant by elements of the task (such as "flora", "the park", "the report")."
(Baker, 2015, p.6)
In this regard, researchers taking a "socio-cognitive" perspective also see the process of collaboration as mutually more in-depth than the process of cooperation.
The Intention to Zoom in
As we have mentioned, the "Cooperative Learning" study of collaborative learning regards learning as a kind of black box, and the studies in general compare different black boxes by evaluating predefined performance. To the contrary, the "socio-cognitive" perspective has a strong intention to zoom in and open the box. This intention to zoom in leads to the application of "micro-analysis", which focuses on the micro-scenarios of group interaction. In Roschelle and Teasley's seminal work on collaborative learning [START_REF] Roschelle | The Construction of Shared Knowledge in Collaborative Problem Solving[END_REF], they studied different micro-scenarios with only two students interacting, in great detail. This focused micro-analysis of dyad cases is not common in traditional educational research on "cooperative learning". In cooperative learning, the study usually takes a relatively longer time, and the outcomes are carefully examined under different conditions. But in the wave of the "collaborative process" perspective, the study of shorter-period interactions with methods such as conversation analysis is more salient. The zooming-in perspective resonates with the constructivist perspective that CL is a study of the construction process rather than a study of "outcomes". It is not only about explaining what leads to good performance but also about the process that leads to the results, with an eye to zooming in. The intention to delve deeply into the interaction can be seen in Suthers's concept of "intersubjective learning":
"A more radically interactional epistemology, which I shall call intersubjective learning, goes beyond an information sharing conception of collaborative learning in two ways: it can be about sharing interpretations as well as information, and these interpretations can be jointly created through interaction, in addition to being formed by individuals before they are offered to the group" (Suthers, 2005, pp. 662-663)
The model of "intersubjective learning" concerns not only the interaction of information in the learning but also the interaction of interpretations. This epistemology of collaborative learning goes one step further than the interactional model, leading researchers to take an insightful, zoomed-in perspective on the socio-cognitive process.
The "Socio-Cultural" Perspective
From "Socio-cognitive" to "Socio-Cultural" perspective
The "Cooperative Learning" perspective does not focus on the "meaning-making" process, which is a key element for the constructivists. The "socio-cognitive" perspective highlights "meaning-making" as essential for learning, but it misses the situated context in which learning happens. Is an analysis of merely "individual and interactional cognition" enough to understand collaborative learning? Some researchers do not agree. The collaborative learning process should not be analyzed without looking at the social and political context where learning happens. How can we properly understand the meaning behind the conversation without understanding the social and political context? How can we understand the verbal and nonverbal communication without deep insight into the community? How can we infer the tacit knowledge without immersing ourselves in the culture?
That's why the "Socio-cultural" perspective places the social context at the center of Collaborative Learning research and understand learning as essentially a social practice inseparable from its real life situation. The Sociocultural perspective is radically different from the previous perspectives. It is different from the "Cooperative Learning" perspective as it emphasizes the emergent meaning-making aspect of learning rather than predefined evaluation of controlled conditions. But it is also different from the socio-cognitive perspective as they believe the meaning-making should be studied in a wider social context. The natural context in which learning occur gives meaning to the situated practice. Apparently, this perspective takes a holistic view on learning. They are not satisfied with the reduction of learning activity to only "outcomes", "cognitions" or "interactions", but also ask questions about their social existence and relationships in their natural setting: What are the social and political context of the learning? Who are these people learning, for what purpose? What are their relationship in the collaborative learning? What are the functions of the cultural tools they use in the communication and learning ? How does learning take place as a community practice? Etc. The situation in which learning occurs is a complex issue. For example, Fjuk and Ludvigsen introduced the complex elements affecting the "distributed collaborative learning" (collaborative learning distributed through time/space/other medium or tools, etc.) are interconnected: "distributed collaborative learning is a product of complex interconnections between several aspects, such as: theories of learning and instruction, subject domains, teacher's roles, delivery institution's educational praxis and tradition, organisational and administrative arrangements, costs, properties of ICT (information-and communication technology) and available software, geographical distances between co-learners, etc. Any changes associated with one of these aspects will inevitably influence and change the others." (Fjuk & Ludvigsen, 2001, p.1) The intention to include the many elements from "theories of learning and instruction" to the "geographical distances between co-learners" is a typical concern for the sociocultural perspective. Experiments or conversation analysis only reflects part of the whole story. The interconnecting nature of the elements as they presents in the example also brings extra complexity to the perspective. The main task for socio-culturalist of collaborative learning is to structure the real-life complexity to cast light to the situated collaborative learning. Why does researchers introduce this perspective? The motive for understanding learning in real-life situations differs from the first two perspectives. In the "cooperative learning" perspective, the main purpose is to understand the "cooperative" model versus individual / competitive model, and to design interventions that can evidently bring better performance. The sociocognitive perspective is to understand how learners interact and how to facilitate especially with the intruding of technology that tremendously changes our ways to interact to each other. The third perspective however intend to cast light to the individual history, the environment, the community that impacts the learning. 
This purpose in itself expands the scenarios from schooling scenarios to more general situations where learning can happen: in companies, hospitals, competitions or even at home with families. In a word, learning in informal scenarios and lifelong learning scenarios is a constant topic in the third perspective. As Wenger puts it in the book "Communities of Practice", the socio-cultural perspective does not presume that learning is:
"an individual process, that it has a beginning and an end, that it is best separated from the rest of our activities, and that it is the result of teaching" (Wenger, 1999, p.3)
but instead assumes that
"our lived experience of participation in the world is as much a part of our human nature as eating or sleeping, that it is both life-sustaining and inevitable, and that -given a chance -we are quite good at it? And what if, in addition, we assumed that learning is, in its essence, a fundamentally social phenomenon, reflecting our own deeply social nature as human beings capable of knowing?" (Wenger, 1999, p.3)
In the third perspective, the motive for studying collaborative learning is much bigger: learning is seen beyond the school and beyond the cognitive process, and the question is essentially: how can learning differ from one community to another, from one topic to another, from one structure of community to another? Is it the same, or do we need to adapt according to the situation? In this regard, learning is "not a separate activity: it is not something we do when we do nothing else or stop doing when we do something else" (Wenger, 1999, p.8)
Socio-cultural Theories of Collaborative Learning
• The Origin: Vygotsky's theory
Much research taking this perspective owes greatly to the influence of Vygotsky's theory of learning, which ultimately inherits from Hegel's and Marx's cultural theory. After Vygotsky, various anthropologists, ethnographers and social psychologists adopted this perspective and developed their own theories and studies in this socio-cultural tradition. The most relevant work in this tradition inspiring Collaborative Learning and the Learning Sciences is Vygotsky's study of children's learning and development [START_REF] Vygotsky | Mind in society: The development of higher psychological processes[END_REF]. Vygotsky posits that learning should be studied within the context of culture. A comprehensive analysis of a child's development should include the study of the context in which the child and their family are situated. Cultural tools, e.g. language, are essential features of such analysis. Children learn knowledge and develop their mental abilities through constant communication with the cultural context, mediated by cultural tools, and learning is hence inherently "social". For Vygotsky, "social" means not only the "social interaction" among more than one person, e.g. child and adult, but also the "social context" such as community and culture [START_REF] Vygotsky | Mind in society: The development of higher psychological processes[END_REF]. The concept of "mediation" mentioned above is key to Vygotsky. An unmediated act might be a direct response to the stimulus (the S-R chain), but a mediated act is a more complex process in which the subject uses the mediation "X" to respond to the stimulus. The middle "X" is the mediating tool, both physical tools such as a hammer and semiotic tools such as language, math, etc. These tools, carrying cultural-historical components, are objectified knowledge accumulated in cultural evolution, useful in the S-R scenario.
The mediation of cultural components will inevitably "draw" the subject to the more complex mediated act requiring higher-order mental abilities. The appropriation of the mediating tools is one key feature of learning for Vygotsky [START_REF] Vygotsky | Mind in society: The development of higher psychological processes[END_REF].
Fig. 1.1 Vygotsky's concept of mediation (Vygotsky, 1978, p.40)
Vygotsky then explains that this appropriation in learning happens in the Zone of Proximal Development (ZPD), another key concept that has had a great influence on the field of Collaborative Learning. The ZPD is defined as "the difference between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers" (Vygotsky, 1978, p.86). Vygotsky posits that learning happens when the subject is faced with problems they cannot solve independently but that are solvable with the help of more capable peers through social interaction. For Vygotsky, the learning begins when the child interacts with the adult in their ZPD and appropriates the mediating tools needed for the scenario, and ends with the child independently solving the problem. [START_REF] Vygotsky | Mind in society: The development of higher psychological processes[END_REF] Although Vygotsky's theory of learning and development says a lot about individual cognition, as Vygotsky was himself a psychologist, it leaves a rich heritage for the socio-cultural way of thinking through its remarkable attention to social context and cultural tools as mediation. The concepts of cultural evolution, the mediated act and the ZPD, which Vygotsky developed from Hegel and Marx, have inspired much of the recent socio-cultural progress. We will review a few of these developments to get a glimpse of the socio-cultural perspective active in state-of-the-art Collaborative Learning research.
• Activity Theory
Activity theory was developed in direct relation to Vygotsky. As we pointed out, although Vygotsky's thinking is the starting point of the social turn of collaborative learning, Vygotsky still based his unit of analysis on individual cognition and action, as in the mediated action model. This model of "action" is useful but not enough to "account for the collective nature of human activities" (Cole & Engeström, 1993, p.7). Activity Theory expands the individual concept of "action" to the more social and collective concept of "activity". Leont'ev explains the mutual relationship between the two concepts of "individual action" and "collective activity" using a hunting example:
"When members of a tribe are hunting, they individually have separate goals and they are in charge of diverse actions. Some are frightening a herd of animals towards other hunters who kill the game, and other members have other tasks. These actions have immediate goals, but the real motive is beyond hunting. Together these people aim at obtaining food and clothing-at staying alive. To understand why separate actions are meaningful one needs to understand the motive behind the whole activity. Activity is guided by a motive. " (Leont'ev, 1978, pp. 62-63)
This concept of "activity", instead of "action", constructs the basic structure for analyzing human activity in collaboration. Activity theory extends Vygotsky's triangle to a multiple-triangle model which consists of 6 elements: Tools, Subject, Object / Objective, Rules of Interaction, Community, and Division of Labor.
[START_REF] Cole | A cultural-historical approach to distributed cognition[END_REF] As illustrated by the above figure (Fig. 4.2), the key elements of the activity system are the subject, the object and the community in which subject and object exist. The tools are the mediation with which the subject approaches the object, similar to Vygotsky's small triangle (Fig. 4.1). The subject interacts with the community according to the rules of interaction of the community. The final element is the division of labor, which presents the relation between the objects and the community: how tasks and powers are distributed [START_REF] Cole | A cultural-historical approach to distributed cognition[END_REF]. Activity Theory is used as a theoretical framework for analyzing the socio-cultural perspective of collaborative learning. It emphasizes the mediating tools as well as the community's role in the activity, which is manifested in the rules of interaction and the division of labor.
• Communities of Practice
The communities of practice theory regards learning and collaborative learning as playing an important role in constructing a community of practice [START_REF] Wenger | Communities of practice: Learning, meaning, and identity[END_REF]. Wenger defines Communities of Practice as: "Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly." (Wenger, 2011, p.1) The basic rationale for introducing Communities of Practice into the discussion of collaborative learning is, like the other socio-cultural perspectives, to regard learning from a different view and relate learning to its situation and context. For the Communities of Practice theory, this situation and context is the learning community:
"Participation here refers not just to local events of engagement in certain activities with certain people, but to a more encompassing process of being active participants in the practices of social communities and constructing identities in relation to these communities. Participating in a playground clique or in a work team, for instance, is both a kind of action and a form of belonging. Such participation shapes not only what we do, but also who we are and how we interpret what we do." (Wenger, 1999, p.4)
The primary metaphor of learning in this regard is that learning is a form of social participation, which is essentially a process in which people actively participate in a community of practice and construct their social identity, as Wenger illustrated in the following figure: [START_REF] Wenger | Communities of practice: Learning, meaning, and identity[END_REF]
Fig. 1.3 Components of learning in Communities of Practice Theory (Wenger, 1999, p.5)
In the above figure, four components of social participation are integrated by learning. They are meaning, identity, community and practice, as Wenger defines them in the following:
"1) Meaning : a way of talking about our (changing) ability -individually and collectively -to experience our life and the world as meaningful. 2) Practice: a way of talking about the shared historical and social resources, frameworks, and perspectives that can sustain mutual engagement in action. 3) Community: a way of talking about the social configurations in which our enterprises are defined as worth pursuing and our participation is recognizable as competence. 4) Identity: a way of talking about how learning changes who we are and creates personal histories of becoming in the context of our communities."
(Wenger, 1999, p.5)
By proposing these interconnected concepts, Wenger argues that learning is essential to the community of practice. For individuals, learning is the way they become engaged in communities of practice, while for the community and the organization, learning serves to renew the rules of interaction and to accept new members. Learning is not separate from the other activities of a community of practice; it is embodied in the other activities of the community and the organization. It is the way we participate in and become involved in a community [START_REF] Wenger | Communities of practice: Learning, meaning, and identity[END_REF].

To conclude Chapter 1, we have reviewed three waves of collaborative learning research, each with a different research focus and method based on a different understanding of what learning is. The cooperative learning perspective focuses mainly on "learning in groups" as a pedagogical method in classrooms. Cooperation is mostly treated as a condition to compare with individual or competitive conditions. Positive interdependence theory is one of the most influential motivational theories in cooperative learning research and explains the better performance of cooperative learning. The socio-cognitive perspective of collaborative learning focuses on the process of learning rather than on comparing performance. Collaborative learning is seen as a meaning-construction process in which interaction plays an essential role. In the socio-cultural perspective, this meaning construction is not isolated, but essentially connected to the social world: the learner's personal historical experience and socio-cultural context. Learners' social existence is taken into consideration in this perspective.

CoLAB is one form of collaborative learning, so we inherit many useful results and methods from the collaborative learning literature. For example, in categorizing Co-Meaningfulness, the motivational perspective is critical for understanding the learner's underlying purpose and meaningfulness. The socio-cognitive perspective focuses on interaction and the co-construction of knowledge, which is also a key point in our analysis in Chapters 5, 6, and 7. The socio-cultural perspective emphasizes the socio-cultural context and its role in collaborative learning, which we take as a starting point of our focused coding in Chapter 8. At the same time, CoLAB is a complex activity that we do not study through a single perspective or theory. The purpose of CoLAB is for learners to directly address open wicked problems, which brings very distinct challenges that much of the reviewed collaborative learning research does not cover. As CoLAB is itself a wicked problem, we are inspired by the theories and frameworks of collaborative learning, but our understanding is generated from grounded theorizing of the data directly rather than deduced from these theories.

Chapter 2. Learning Across Boundaries

In the previous chapter, we reviewed the different perspectives researchers take in studying Collaborative Learning. These perspectives essentially reflect different epistemologies of learning and collaboration. Empiricists regard learning as a way to process instructed information, while constructivists believe learning is a construction of knowledge through interaction with the environment and other people.
Social theories of learning study learning in a broader context, where learning is believed to play an essential role in activity and in the community. All of these perspectives are important theoretical lenses through which we can study different angles of learning and collaborative learning.

In this chapter, beyond collaborative learning, we review the other side of CoLAB, namely "Learning Across Boundaries" (LAB). Learning Across Boundaries has a double meaning. First, the learners come from multiple disciplinary backgrounds, which sets up the first boundary they need to cross. Second, CoLAB is not only about traditional scholarly projects. In many cases, CoLAB is Project-Based Learning aimed at Open Wicked Problems, e.g. a student entrepreneurship project or a student-centered workshop. The learners will meet real-life problems, solve them, create new solutions, or even redefine the wicked problem. This requires the learners' prior experience and knowledge beyond disciplinary training. In CoLAB today, learners often have different social, cultural, and political backgrounds, so their prior experience and knowledge will differ, e.g. in how they understand and frame the problem, how they search for knowledge, and how they use knowledge in solving the problem. These differences, rooted in learners' prior experience and social existence, set up the second kind of boundary the learners need to cross in collaborative learning. The two boundaries exist essentially for the same reason: the open wicked problem, which requires disciplinary and interdisciplinary knowledge as well as knowledge and experience beyond them. The following two sections review the literature concerning learning across these two kinds of boundaries. The first section, on interdisciplinary learning, mostly concerns the boundary of disciplinary knowledge, while the second section, on expansive learning, concerns the boundary beyond disciplinary knowledge.

Interdisciplinary Learning

In this section we review the specific topic of interdisciplinary learning, including its history and the current state of the art: what causes it to happen, the current situation and problems, and how researchers deal with them. This literature will help us understand the key challenges of interdisciplinary learning and the efforts that have been made to study and address them.

Interdisciplinary vs disciplinary. It is essential to understand modern discipline culture in order to frame and define interdisciplinarity in our time. The history of categorizing knowledge dates back to the ancient Greek philosophers. Research in the sciences and the humanities was categorized by its different purposes: to look for reasons for human actions, to build typologies of living things, or to explore the rules of the physical world. This tradition, aimed at ordering human knowledge, has been maintained over history and has given rise to the modern discipline culture. Despite this long historical tradition, the modern disciplinary landscape was never static. Research interests have gone through several big shifts along with numerous small ones. In this evolution, emerging new problems and questions, scientific or societal, are the major forces driving disciplinary change, demanding new means for new ends.
Interdisciplinarity has been a natural route for disciplinary innovation: borrowing methods from other existing disciplines, looking for knowledge that can be used in other ways, communicating with other experts for enlightenment, integrating the knowledge of different disciplines for new problems that seem more complex than one discipline can handle, and expanding one promising approach to other fields. Practices of these kinds prevailed in disciplinary culture whenever the new questions and problems were more important than continuing to specialize existing knowledge. [START_REF] Frodeman | The Oxford handbook of interdisciplinarity[END_REF]

Harvey Graff, a comparative historian who studied over ten cases of twentieth-century interdisciplinary practice, starts his analysis of interdisciplinarity with this understanding: "interdisciplinarity is part of the historical making and ongoing reshaping of modern disciplines. It is inseparable from them, not oppositional to them." (Graff, 2015, p.5) Graff's case studies of different interdisciplines, from the life sciences to the humanities, strengthened his view of this entanglement. The entanglement makes clear that the function of interdisciplinarity, to innovate on the disciplinary status quo by providing new and different ways of addressing emerging problems, outweighs its form, whether to "integrate" or to "transcend". The reality of interdisciplinarity exists in this entanglement, which should not be overlooked when any definitions of interdisciplinarity are proposed. [START_REF] Graff | Undisciplining knowledge: Interdisciplinarity in the twentieth century[END_REF]

This understanding reveals the disruptive nature of interdisciplinarity. In our time, when interdisciplinarity is widely invoked, it remains ultimately a "revolutionary" power, with innovation and creativity at its core. This transformative role is vital to interdisciplinarity and to its evaluation, which should address not only the effectiveness of problem-solving results, but also the effectiveness of proposing new approaches to new problems.

Multidisciplinarity: a weak form of interdisciplinarity
Interdisciplinary work, benchmarked by the interaction and integration of different disciplines, is more than the addition of multiple disciplines. Many multidisciplinary practices carried out under the name of interdisciplinarity do not satisfy its essence. Margaret Boden called some of these practices false, or at best weak, forms of interdisciplinarity [START_REF] Boden | Interdisciplinarity and the Organisation of Knowledge in Europe[END_REF]. Multidisciplinarity, as described by the OECD, is a "juxtaposition" of disciplines, which avoids further interaction or integration [START_REF] Klein | A taxonomy of interdisciplinarity[END_REF]. A juxtaposition means that the disciplines, intact in their knowledge, its organization, their inherent structure, and their cognitive traditions, skip the potential "collision" of disciplinary knowledge, epistemology, or values, and stand as separate and independent bodies of knowledge in front of new problems. One typical example would be a curriculum targeting one topic with a series of lectures from different experts, leaving little space or academic support for discussing how to integrate them and innovate toward new understanding. Why are these forms of interdisciplinarity sometimes labeled false? The answer is partly related to our explanation of the function of interdisciplinarity.
Multidisciplinary work makes minimal effort to innovate on existing disciplinary methods for new problems, sometimes even leaving the work of integration to outsiders who lack the knowledge and academic background. A British study of the sustainable city has shown the drawbacks of such practices: multiple proposals from different disciplinary perspectives, with heterogeneous and even contrasting results, were placed before policy makers, who found it impossible to follow any of the suggestions if they had to integrate them themselves [START_REF] Evans | Researching the sustainable city: Three modes of interdisciplinarity[END_REF]. The means fail to reach the ends. It is worth mentioning that work claimed as interdisciplinary might turn out to be multidisciplinary even when it wishes to encourage interaction and integration; the integration simply does not happen. This phenomenon is one of the big challenges in modern interdisciplinary culture, and the reasons for it will be explained in later parts.

Interdisciplinarity: major forms of interdisciplinarity
Real interdisciplinarity demonstrates a motivation for interaction and integration among disciplines, usually driven by the aim of solving complex research questions or problems. For example, the question "how do learners learn better in contemporary society" needs sociological as well as psychological and biological knowledge, and at the same time welcomes design studies from anthropologists and designers through practice-based research. Knowledge from different disciplines is shared and communicated, along with a trend in which people dispute and innovate on the methods they use to produce knowledge. Results are evaluated from different perspectives, often by peer review, to allow a comprehensive discussion of what is really valuable and relevant in the field, and of which approaches should be allowed and encouraged in reaching it. However, the extent to which interaction and integration happen differs from one piece of research to another. Some believe the interaction should only happen at the level of mature disciplinary knowledge and methods, since only well-thought-out and well-established disciplinary knowledge and methods are useful, while others hold a much looser criterion, believing interdisciplinary discussion should happen bottom-up. The former, more careful stance usually takes a top-down approach, while the latter, more radical one favors a grass-roots approach [START_REF] Klein | A taxonomy of interdisciplinarity[END_REF].

The "safe" mode (the former) usually expands existing, often successful, methods to other disciplines: the use of statistics in the social sciences, ethnographic methods in design research, etc. Expanding knowledge and methods in this way does not guarantee success, but it relies less on radical interdisciplinary innovation, as the knowledge and methods are already mature and useful. Whether the contextualization will be successful depends on many factors, e.g. the flexibility of the original knowledge and methods, and the gap between the disciplines: physics to biology is much easier and more compatible than physics to music. In most cases, this borrowing and contextualizing does not require close and in-depth collaboration between researchers, provided the original knowledge and methods are a good fit. Otherwise, if the contextualization meets fundamental difficulties, the "safe" mode might convert into the "radical" mode [START_REF] Klein | A taxonomy of interdisciplinarity[END_REF].
The "radical" mode (the later mode) challenges the status quo of disciplines and focus intensively on the questions and look for new ways rather than borrowing an existing one. Typical sign of this higher level of interdisciplinarity is the joint discussion of the fundamentals in the research -key directions, questions, approaches, variables, models, applications, and ethics, etc. A fusion of disciplines takes place as people mutually grow their understanding of other disciplines. Target research questions/problems as well as the ways to tackle them are constantly challenged and redefined. One sign of "radical" mode is its openness to new researchers and students. As everyone has knowledge in the interdisciplinary pool that has value in creating new ways of understanding, the process does not exclude immature disciplinary knowledge, instead welcome peer learning and creation. Creativity can be found in both modes. Core interdisciplinary creativity is however more significantly needed in the "radical" mode. In-depth collaboration happens more frequently in the "radical" mode, which demands a high level of mutual understanding, meaning making, synthesizing and creating without losing the coherent research validity. Different purpose and understanding of the function of interdisciplinarity causes the different models -from multi-disciplinary to interdisciplinary. The CoLAB often falls into the radical side of the spectrum as resolving open wicked problems will need different and distant disciplinary knowledge. In our CoLAB, designers and scientists are the two major communities, and their disciplinary training will construct certain degree of boundaries in their CoLAB practice. Interdisciplinary learning is often needed. The study of interdisciplinary, its history and development, typical challenges such as the "contextualization" we reviewed are important theoretical inspirations in our grounded theory Expansive Learning The study of expansive learning, although not at the center of the mainstream learning sciences, is a significant reference of Learning Across Boundaries. The theory of expansive learning, coined by Engeström in 1987 [START_REF] Engeström | Learning by Expanding: An Activity Theoretical Approach to Developmental Research[END_REF], inherits the activity theory of learning and knowledge creation from individuals and groups. The main focus of expansive learning transcends the school scenario where learners are obliged with a predefined task and a defined learning process, and focus on the working and organization learning scenarios where learners have to understand, communicate, and create solutions for emerging unstable problem that might not even exist ever before. The expansive learning is not only a theory in learning sciences that aims to analyze the process, but also advocate for change as a design methodology. Therefore, much work has been done inside organizations of different kinds, e.g. health care center, hospital, schools, library, etc, in order to analyze, theorize as well as to advocate for change. In this section, we will review the expansive learning as the other side of the reference of Learning Across Boundaries. The Qualitative Changes in life In the introduction of expansive learning, Engeström points out that the traditionally highest level of learning -"problem solving" [START_REF] Gagné | Conditions of learning[END_REF], is still a reactive form of learning [START_REF] Engeström | Learning by expanding[END_REF]). 
Engeström gives an example of what he considers the "problem-solving" mode of learning: Donald Norman's self-analysis of learning Morse code, in which Norman focuses on words instead of letters and thereby greatly improves his learning speed [START_REF] Norman | Learning and memory[END_REF]. Engeström argues that "problem solving", even when regarded as the most advanced level of cognitive learning, is still safeguarded within a given context with a predefined learning task; in Norman's case, the task is to learn a pre-existing input system, Morse code, quickly. This kind of learning, according to Engeström, "is defined so as to exclude the possibility of finding or creating new contexts" (Engeström, 2015, p.2). But this presumption is challenged in today's social practices: in many situations we need swift learning to reframe and redefine problems in an emerging context and to reinvent solutions, whether as individuals or as communities and organizations. Such learning is not safeguarded by a predefined context, nor does it have an existing task that we know or can anticipate. These situations are what Engeström calls qualitative changes in life [START_REF] Engeström | Learning by expanding[END_REF].

The qualitative changes in life immediately bring in his radical argument about the "futility of learning". Whatever is learned in the traditional sense, with a given context and a predefined task, may become obsolete under a qualitative change of life, even if it is a complex skill acquired over a long period, because the assumed context has changed. Learning is futile for users who accept and learn the designer's perfectly designed, user-friendly system but fail to adapt to new scenarios and new problems. Learning is only meaningful when it enables users to work out their own plans for the qualitative changes in their own use of the system [START_REF] Engeström | Learning by expanding[END_REF]. Engeström's criticism of the reactive form of learning reflects his emphasis on the learner's active role in life and production, which places the learner in the role of advocate rather than passive receiver. This stance, whose origins can be traced to Hegel and Marx, is even more significant in today's fast-changing context. One extreme example of the qualitative change in life that Engeström proposes is the "runaway object" [START_REF] Engeström | The future of activity theory: A rough draft[END_REF], by which he refers to significant objects with massive distribution over space and time, such as climate change and pandemics. These are objects that no single person, community, or even nation can deal with, and they must be seriously confronted by humanity as a whole for our collective future. The extreme example of "runaway objects" exemplifies qualitative change and requires all of us, as learners, to cross our boundaries and learn in order to create solutions. This process of learning transcends the traditional modes of learning, as Engeström argues:

"The basic argument is that traditional modes of learning deal with tasks in which the contents to be learned are well known ahead of time by those who design, manage and implement various programs of learning. When whole collective activity systems, such as work processes and organizations, need to redefine themselves, traditional modes of learning are not enough. Nobody knows exactly what needs to be learned.
The design of the new activity and the acquisition of the knowledge and skills it requires are increasingly intertwined. In expansive learning activity, they merge." (Engeström, 2016, p.39)

Learning Beyond the School Scenario
What does expansive learning look like? Who are the learners? And under what circumstances do they engage in expansive learning? The following examples cast light on typical expansive learning scenarios:

"(1) The municipal home care in the City of Helsinki supports elderly people who live at home with various kinds of medical problems. Home care workers visit their clients to dispense medications and conduct various routine chores such as showering, preparing meals, and so on. The home care managers and workers are now struggling to redefine their work and services so as to meet such demanding problems as increasing loneliness and social exclusion, loss of physical mobility and dementia. The challenge is complicated by the fact that the population of Finland is aging very rapidly and it is increasingly difficult to recruit and retain competent home care workers. How can the managers, workers and clients learn to work in such a way that the new needs are met and the society can afford to provide the service?" (Engeström, 2016, p.35)

"(2) As journals and books have increasingly become available through the Internet, researchers seldom need to visit university libraries physically. University libraries are becoming automatic mediators of digital information for researchers on the one hand, and physical book repositories or reading halls for students on the other hand. This threatens the professional competencies and jobs of librarians. The managers and workers of the libraries of University of Helsinki are struggling to redefine their work and services on the basis of creating partnerships and flexible practices of collaboration with research groups in need of comprehensive design and maintenance of their information management. How can librarians and research groups learn to operate in such a new way?" (Engeström, 2016, p.35)

These two examples were used by Engeström to illustrate typical scenarios of expansive learning. The home care managers and workers in Helsinki are no longer students, nor are they novices at the job, yet they face increasing challenges in their work. New situations and problems emerge that require them to change swiftly and solve the problem, as individuals and as a team: they have to learn. The same happens to the university librarians, whose job has to evolve as the way people use libraries changes. Learning should happen among themselves and with the people and environment involved in the new situation. Learning of this kind happens beyond the traditional school scenario, where students learn under the instruction of teachers who design and define what they should learn and how they should learn it. Such learning takes place almost anywhere a new "object" is needed to solve or redefine new problems. Engeström's theory of "expansive learning" is one theory for these untraditional learning scenarios, through which he argues that expansive learning should "transform and create culture", promote "horizontal hybridization", and form "theoretical knowledge and concepts": [START_REF] Engeström | Studies in expansive learning: Learning what is not yet there[END_REF]

"Is learning primarily a process that transmits and preserves culture or a process that transforms and creates culture?
Is learning primarily a process of vertical improvement along some uniform scales of competence or horizontal movement, exchange and hybridization between different cultural contexts and standards of competence? Is learning primarily a process of acquiring and creating empirical knowledge and concepts or a process that leads to the formation of theoretical knowledge and concepts? The theory of expansive learning puts the primacy on communities as learners, on transformation and creation of culture, on horizontal movement and hybridization and on the formation of theoretical concepts" (Engeström, 2016, p. 36)

In this regard, expansive learning is inherently different. Learning is not seen as serving to preserve culture; it is instead the process through which people reinvent objects, rules, tools, and ultimately culture. In the traditional view of learning, people learn from those who are more experienced and knowledgeable, thereby reinforcing existing knowledge and culture and learning what is already there. Expansive learning, on the contrary, is aimed at reinventing what is already there, and through the learning process it ultimately transforms and creates culture. The learning of the Helsinki home care workers and librarians is of this kind: they need to learn to face a new challenge for which no precedented solution exists. [START_REF] Engeström | Studies in expansive learning: Learning what is not yet there[END_REF]

Horizontal hybridization matters more here than vertical improvement. For the home care workers, since the challenge they face is new, there are no experts from whom they can learn a ready-made solution. Vertical improvement, as in standard training with predefined criteria, does not apply. Learning takes place more often in horizontal communication among the learners and with others in the environment, e.g. with the patients and their relatives, other social workers, policy makers, etc. This emphasis on horizontal hybridization makes boundary-crossing essential to expansive learning. The learners have to learn from those who have different experience, hold different viewpoints, and live in different cultures. No one is the expert from whom everyone learns; everyone learns from everyone and from every object. Engeström summarizes this difference with a distinct metaphor of learning, the metaphor of "expanding". He disputes that expansive learning can be reduced to either the "acquisition" or the "participation" metaphor, the former of which inherits an empiricist tradition of knowledge acquisition and processing, while the latter regards learning as the way to enter and participate in a community [START_REF] Sfard | On two metaphors for learning and the dangers of choosing just one[END_REF]. The main argument is that both "acquisition" and "participation" assume a directional process from being incompetent to becoming competent. In expansive learning, this directional process is blurred and replaced by a creative process that generates new objects and concepts:

"In expansive learning, learners learn something that is not yet there. In other words, the learners construct a new object and concept for their collective activity, and implement this new object and concept in practice." (Engeström, 2016, p.37)

Activity Theory of Expansive Learning
The study of expansive learning is based on activity theory, which we reviewed in chapter 1.4.
Activity theory argues that activities are the basic analytical structure of human practice, essentially different from the smaller unit of practice, "actions". An action is individual and short-term, such as a one-time kill during a hunt. An activity is collective and generic, such as the hunting activity, which consists of multiple actions in the form of divided labor, including chasing, killing, etc. An activity can repeat time and time again and will also evolve as human practice changes for various reasons. [START_REF] Leont'ev | Activity, consciousness, and personality[END_REF]

The role of expansive learning is manifested in the evolution of activity. When people face qualitative changes, they need to change their activity to meet the new challenges, whether in work or in life. The change can happen gradually or suddenly through the expansive learning activity. From this perspective, Engeström defines expansive learning as an activity-producing activity:

"The essence of [expansive] learning activity is production of objectively, societally new activity structures (including new objects, instruments, etc.) out of actions manifesting the inner contradictions of the preceding form of the activity in question. [Expansive] learning activity is mastery of expansion from actions to a new activity. While traditional schooling is essentially a subject-producing activity and traditional science is essentially an instrument-producing activity, [expansive] learning activity is an activity-producing activity." (Engeström, 1987, p.125)

Activity theory provides a basic frame for the study of expansive learning, and it is an important theoretical reference for CoLAB. In this thesis, CoLAB activity is studied not mainly in work situations but in universities and schools. The specific situation therefore differs from many expansive learning scenarios. But the features of expansive learning reviewed here align closely with the features of CoLAB. The qualitative change of life explains the reason for, and the existence of, the "open wicked problems", and collaboration and boundary-crossing are also central topics in expansive learning. Expansive learning "transform[s] and create[s] culture" and promotes "horizontal hybridization" [START_REF] Engeström | Studies in expansive learning: Learning what is not yet there[END_REF], which is also the ultimate purpose and approach of CoLAB. The difference, besides the learning situation, lies in the perspective and method we adopt. Expansive learning focuses particularly on the socio-cultural perspective and follows activity theory. In the CoLAB study, we are still interested in the socio-cognitive and interactionist perspectives, so we partially inherit the socio-cognitive perspective and use grounded theory to generate our theory, since hybrid-perspective theories of CoLAB are very rare. But we are also highly interested in the socio-cultural perspective and draw inspiration from expansive learning's emphasis on "object" and "culture", which is reflected in our coding of "Project" and "Meaningfulness", especially from chapter 8 onwards.

Chapter 3. Qualitative Research Methodology

In this chapter, we review qualitative research methodology in general. Qualitative research methods are a collection of multiple methods that are widely used in various areas of social research, e.g. anthropology, sociology, psychology, and education research.
In this research we adopt the constructivist qualitative paradigm, so we mainly review the basic epistemological ground of this paradigm and its most prominent features and benefits. With this general understanding, we then explain why constructivist qualitative research methods are used in this thesis.

The Constructivist Qualitative Research Paradigm
The first questions of qualitative research methods concern the ontological and epistemological lens through which we view the world. In the tradition of qualitative research, there are several different paradigms. Each of them is distinct in how it views the nature of reality and knowledge, and hence has a different methodology. In this section, we review the constructivist paradigm of qualitative methods in comparison with the objectivist paradigm and explain why we choose the constructivist paradigm for our research.

The Constructivist Paradigm
Before reviewing the constructivist paradigm, we would like to mention the objectivist research paradigm, which many researchers adopt. The objectivist paradigm holds that reality exists objectively in the world, and that the task of researchers is to discover it with the right tools and methods as they observe, measure, and prove it. Reality exists whether or not a human observes it. The purpose of objectivist qualitative research is to generate reliable knowledge of social reality that is generalizable to other situations; it is therefore crucial to triangulate findings with different data and methods to make sure the knowledge is accurate no matter who is observing. As objective knowledge of reality is the main pursuit, human bias in the research process should be minimized. Researchers with this world view therefore say little about who they are in their research, as the identity and social existence of the researcher should be independent of the knowledge; if there is an influence, it can only undermine the reliability of the research. [START_REF] Tracy | Qualitative research methods: Collecting evidence, crafting analysis, communicating impact[END_REF]

The objectivist epistemology is dominant in experimental science, where researchers prove or disprove material knowledge mainly with quantitative tools such as statistics. The success of objectivist epistemology and quantitative methods has also influenced the sciences of humans and society. In many areas of the social sciences, the positivist perspective with quantitative methods pervades [START_REF] Bhattacherjee | Social science research: Principles, methods, and practices[END_REF]. Qualitative methods can also adopt the objectivist paradigm when the researcher believes his or her role is to discover and prove objective knowledge of humans and society with qualitative tools, such as interviews, qualitative surveys, focus groups, etc. For a more detailed review of qualitative methodology under the objectivist paradigm, one can refer to the comprehensive sourcebook by Miles et al. [START_REF] Miles | Qualitative data analysis: An expanded sourcebook[END_REF]

But there are also fundamental challenges to the objectivist world view, one of which is the inability of humans to grasp the complex reality of humans and society. For example, post-positivism, which develops from the objectivist paradigm and focuses on the causality and patterns of social reality, accepts that the researcher's understanding of social reality is inherently partial and biased.
A complete and holistic understanding of reality as an object is impossible.

The constructivist world view (or the interpretive world view, as some researchers call it), however, denies that reality exists independently of the researcher's engagement with the context of research. On the contrary, reality is constructed through the process of communicating, interacting, and understanding. The following metaphor by [START_REF] Tracy | Qualitative research methods: Collecting evidence, crafting analysis, communicating impact[END_REF] illustrates the basic world view of the constructivist (or interpretive, in the text) perspective:

If you asked an interpretive scholar, "If a tree falls in the woods and there is no one there to hear it, did it really make a sound?" answers would be less clear-cut and more involved than the positivist answer. Interpretive scholars might say that the issue depends on the meaning of the word "sound." Given that sound requires a listener, perhaps the tree did not have sound if no one was listening; or maybe it had a different sound, depending on who or what was present at the scene (a baby, a chipmunk, a researcher, a digital tape-recorder, or a journalist). Also, interpretive researchers might argue that what is classified as having a sound differs from person to person. Does the air conditioner in the background create "sound"? What about the sound of your own breath or heartbeat? Perhaps you are getting bored or hungry or agitated; do any of these states have sounds? Interpretivists would ask and gain insight from multiple points of view, from multiple participants, and from themselves, to answer the question. (Tracy, 2019, pp. 40-41)

In this explanation, the researcher's purpose and understanding are essential in constructing even the question itself, not to mention the knowledge constructed. In this regard, the researcher cannot be isolated from the knowledge of reality. It is precisely in the process of empathizing, understanding, and interpreting that researchers construct interpretive knowledge of the social actors' actions and thoughts and of their social and cultural context. Reality therefore does not exist as an absolute object. The researcher's social and political existence, viewpoints, purposes, and interests will inevitably be included in the construction of reality. In contrast to the objectivist viewpoint, the inclusion of the researcher is not seen as a lack of reliability; it is instead an essential part of the interpretation. The ultimate rationale for this epistemology is the ontology of reality in the constructivist world view. Reality is not material to be measured, but socially constructed for reading and interpreting. The constructed reality is first lived by social actors and then understood and interpreted by the researcher. [START_REF] Tracy | Qualitative research methods: Collecting evidence, crafting analysis, communicating impact[END_REF]

In the following table (Table 3.1), Marvasti lists the theoretical stance and the goal of research in the objectivist and constructivist paradigms respectively (Marvasti, 2004, p.8).

Theoretical stance on social reality:
  Objectivist: How can we use objective research methods to capture the essence of social reality?
  Constructivist: How is reality socially constructed?
Goal of research:
  Objectivist: What are the universal laws that explain the causes of human behavior?
  Constructivist: How do situational and cultural variations shape reality?

Table 3.1 Points of emphasis of the objectivist and constructivist world views.
(adapted from Marvasti, 2004, p.8)

In this thesis, we adopt a constructivist framework. The detailed rationale for adopting this framework is elaborated in Chapter 3.2. At this stage, however, an explicit comparison between objectivist and constructivist ontology and epistemology is important. The pragmatic reason for the comparison concerns the purposes of this research, one of which is to inform educators who are promoting interdisciplinary learning, collaborative learning across boundaries, and similar initiatives. In many of these initiatives, educators are not necessarily social scientists familiar with the constructivist paradigm. Typically, an educator trained in experimental science might assume an objectivist perspective. As the concepts and knowledge in this thesis are constructed rather than objectively measured or proved, we need to present the theoretical and paradigmatic ground, which is inherently related to the constructivist framework of qualitative research methods. Besides the two world views we review, there are also critical theory and postmodern world views that are widely adopted in qualitative research. Although some theories of learning that we reference (e.g. expansive learning) have close connections to these paradigms, we do not rely on their methodology in this thesis. We therefore do not specifically review critical theory and postmodern epistemology here; interested readers can refer to comprehensive handbooks of qualitative research, e.g. [START_REF] Denzin | The Sage handbook of qualitative research[END_REF], for further detail.

Features of Qualitative Research Under the Constructivist Paradigm
What does constructivist (or interpretive) qualitative research look like? Merriam summarized four essential features of constructivist (or interpretive) qualitative research: 1. deep understanding, 2. the researcher as research instrument, 3. inductive theory, and 4. rich description [START_REF] Merriam | Introduction to qualitative research[END_REF].

• Deep understanding
The essential feature of constructivist (or interpretive) qualitative methods, and of qualitative methods in general, is to obtain a deep understanding of the target phenomenon. Unlike quantitative inquiry, which measures social reality with numbers, qualitative research very often focuses on "why" and "how" questions, the answers to which can hardly be measured. Deep understanding of the social actors, their situated actions, and the cultural context therefore becomes the premise of qualitative research. For example, one can measure even the most detailed actions of students in a classroom with numbers, but qualitative research concerns questions such as why they perform such actions, what meaning lies behind them, and how teacher-student interaction influences them. In the interpretive qualitative research perspective, these questions concern symbolic meaning constructed through the communication and interaction of the social actors, which can be understood by the researchers through a further interaction between the researchers and the students, such as in-depth interviews and observation. [START_REF] Blumer | Symbolic interactionism: Perspective and method[END_REF]

Qualitative researchers aim to obtain a deep understanding of the social phenomenon. To this end, they adopt methods such as in-depth interviews and participant observation.
For example, in-depth interviews are often semi-structured: the researcher not only asks prepared questions about a given issue, but also interacts with the interviewee when it seems necessary for the interviewee to explain in greater detail and reflect on the issue. Participant observation allows researchers to become immersed in the natural setting of the people and context, providing a perspective close enough for deep understanding. For example, in this thesis the author works and lives with the other CoLAB organizers and can therefore obtain deep understanding through close interaction with them (see chapter 5 for more details). These methods can be used together to gain a deeper understanding of the phenomenon.

• Researcher as research instrument
The second feature of qualitative research methodology is that the researchers are themselves the instruments for research activities, e.g. sampling, collecting data, and analyzing data. Quantitative research instruments are often objects: experimental instruments, quantitative surveys, physiological measurement instruments, etc. They are useful for collecting and analyzing objective data. But in qualitative methods, especially interpretive qualitative methods, the researcher himself or herself is the key instrument. Researchers develop their knowledge of the target phenomenon as they research; they respond to the situation and adapt themselves to collect and analyze the data with insight. The researcher's own understanding and involved role is not seen as detrimental to objectivity. Instead, knowledge is constructed with the researcher's subjective understanding.

• Inductive theory
Interpretive qualitative research starts with an interest in the phenomenon and very often does not require a theory from which to deduce hypotheses for testing. Instead, understanding is generated through the research process, where inductive theory and hypotheses form iteratively. This inductive orientation is also a result of the interpretive epistemology, which emphasizes the construction of knowledge, itself an inductive process. In this thesis, we mainly adopt a constructivist grounded theory method, which has a strong preference for induction. The data is closely examined by the researcher with detailed coding, and categories and concepts are then generated from the iterative coding. The theories thus stay close to the data, so that there is an inherent match.

• Richly descriptive
In quantitative research, numbers are the most significant representation of the phenomenon. In qualitative research, one can instead expect a rich description of the phenomenon, which means that data is presented primarily in the form of text and pictures that are richly descriptive. Researchers use fieldnotes, memos, transcriptions, document excerpts, ethnographies, autoethnographies, and so on to describe the situation.

Research Methodology Rationale
In this section, we explain the rationale for choosing the constructivist paradigm and qualitative methods.

The researcher's world view from facilitating collaboration across boundaries
The world view of researchers ultimately determines the world view they choose for their research. We explain here how the researcher's constructivist world view, gained through his practice, is appropriate and suitable for this practice-based research.
I started with a bachelor's degree in physics and continued with a master's in interaction design, in which I worked extensively with artists and designers. In my Ph.D. study, these two backgrounds merge in my thesis topic: studying collaborative learning across boundaries, exemplified by science-design collaboration. The research journey is, however, not a conventional social science study, but heavily practice-based. I participate in the organization of CoLAB activities and observe and study them at the same time. In most of my practice during the Ph.D., I worked to facilitate collaboration between students of different backgrounds to help them learn across boundaries. This mix of backgrounds ultimately made me question where reality resides. In order to help them, I must understand their distinct perspectives deeply. I must also empathize deeply with what they believe and how they came to believe it, which ultimately concerns their different ontologies and epistemologies. I have to understand and empathize with the artists' ways of expressing as well as the scientists' pursuit of "truth", with their different facets of "reality" and "knowledge", explicit or tacit. To facilitate the fusion, I need to keep up constant conversation and interaction with different people and appreciate their respective "realities" without "choosing a side". This fusionist work is very difficult if I hold an objectivist world view: I might become too preoccupied with "objective" reality and the ways to get there, and thus fail to "grasp" the other realities. In fact, during the practice, even the objectivist perspective of a social scientist might be seen as too subjective by a "hard-core" natural scientist, not to mention artists and designers. The different world views already clash so much that the collaboration facilitator must introduce a certain degree of relativism and pragmatism to defuse the clash over where "reality" resides. The constructivist world view is more useful and suitable in this regard. Once one recognizes that reality is socially constructed in different situations and cultures and by different individuals, one faces far fewer obstacles in utilizing and integrating knowledge wherever it comes from, whether from rigorous scientific experiments or from artistic introspection.

One might also argue for the possibility of implementing the practice on a constructivist basis and then converting to an objectivist paradigm for analysis. But we argue this would create more trouble than benefit. First, it would inevitably lose the constructivist thinking generated in the practice. The objectivist perspective requires the researcher to be as neutral as possible so that his or her subjective thinking does not interfere with the analysis. But in that case, my own perspective as both facilitator and observer is lost. What I constructed as the reality of the collaboration, from the perspective of someone who understands and empathizes with different backgrounds, would no longer matter in the analysis; the emphasis would shift to what can be objectively measured and analyzed. Second, it would create discrepancies between data and theory, as the data is collected during participant observation while I facilitate the collaboration with a constructivist world view. The final result would still have the weakness of lacking objectivity even if we took a positivist perspective in the analysis, because the data collection would be flawed from the objectivist point of view.
In conclusion, I acknowledge that one can be an objectivist social scientist who builds fine objective models and theories of collaborative learning across boundaries. But my situation is different. I am both a practitioner facilitating CoLAB and a social researcher curious about how to improve the collaboration. My own constructed reality is therefore part of the whole research. I recognize that the lack of objectivity will be challenged as a weakness, as with much research taking a constructivist point of view. But I also hold that it is important to contribute my insider/practitioner/fusionist "constructed reality" to the understanding of the CoLAB phenomenon, in addition to other, positivist studies.

Learning from the Constructivist Perspective
The second rationale for a constructivist perspective relates to the historical paradigms of learning. The positivist paradigm regards knowledge as objective, to be acquired by the researcher through measuring and proving. This world view also has its roots in a particular understanding of what learning is, closely related to the acquisition metaphor of learning [START_REF] Sfard | On two metaphors for learning and the dangers of choosing just one[END_REF]. The phenomenon of Collaborative Learning Across Boundaries follows a different pattern from the acquisition metaphor. It is bound up with a constructivist (or co-constructivist) view of learning [START_REF] Reusser | Co-constructivism in educational theory and practice[END_REF]. In CoLAB, knowledge is mostly not acquired but constructed and co-constructed. Learning happens while learners collectively make their prototype and communicate about their different understandings of the project. There is little concrete learning that fits the acquisition metaphor, with a clear direction from the unlearned to the learned. Another phenomenon reflecting constructivism in CoLAB is that learners learn through solving Open Wicked Problems, which are themselves complex and entangled, such as sustainable development problems (see chapter 1.2 for more explanation of Open Wicked Problems). Because the problem is so complex and difficult, everyone is more or less in the same position in terms of knowledge of the issue, but everyone also has the potential to contribute their knowledge to the open wicked problem in the co-construction of the joint project. The challenge encourages the group to follow a pattern that favors constructing knowledge rather than teaching and acquiring it.

We argue that the constructivist qualitative method is suitable for studying CoLAB learning based on a (co-)constructivist model. Constructivist qualitative methods give the researcher a consistent epistemological tool for both the phenomenon itself and the process of understanding the phenomenon. This consistency is needed for in-depth understanding. For example, I once witnessed a creativity researcher being asked, right after his presentation, whether he thought creativity researchers themselves master the key to being creative. The researcher answered, surprisingly quickly and decisively, that they do not, since their method, based on neuroscience, does not allow them to be creative themselves in the way their research subjects are. The anecdote illustrates the dilemma that arises when the method for researching a phenomenon mismatches the phenomenon itself. The result is still useful, but in-depth understanding is lacking.
In creativity research, it is a great challenge to find a widely accepted scientific method that also fits the creativity phenomenon. In our case, however, the constructivist paradigm is a good fit for both the phenomenon and the study. For example, the Constructivist Grounded Theory reviewed in the following chapter is a widely accepted method suited to our purpose.

Chapter 4. Constructivist Grounded Theory (C-GT)

In addition to the qualitative research methods reviewed in the previous chapter, we adopt constructivist grounded theory in this thesis as our primary method. Constructivist grounded theory is one of the research tools under the interpretive or constructivist qualitative framework. Grounded theory has become a very successful method since Barney Glaser and Anselm Strauss invented it in the 1960s in their three books introducing both grounded theory and their study of "dying" with this method (Barney Galland [START_REF] Glaser | Awareness of dying[END_REF]) (Barney G. [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF]) (B. G. [START_REF] Glaser | Time for dying[END_REF]). Since then, many qualitative researchers have developed grounded theory on the basis of different epistemological frameworks and research purposes. In this thesis, we mainly adopt the constructivist version of Grounded Theory developed by Charmaz [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF]. The main difference between Constructivist Grounded Theory (C-GT) and other versions lies in the constructivist premise it takes: data collection and analysis are not objective, as objectivist researchers would argue, but constructed, with the researcher's social existence taken into consideration. C-GT thus aligns with our choice of the constructivist paradigm in this thesis.

Grounded Theory
Grounded theory, since its coinage in the 1960s, has been a successful qualitative tool for many social science researchers and students. Its wide use has also expanded from sociology to many other areas of research. Although the most prominent feature of grounded theory is its systematic data collection and analysis methods, grounded theory is more than just a set of methods with guided procedures. In this section we review the origin of this method, its purpose, and its epistemology, before entering into the detailed methods in the following sections.

Generating Theory versus Verifying Theory
When Glaser and Strauss first invented grounded theory in the 1960s, they emphasized that generating theory, rather than verifying theory, was its most significant purpose (Barney G. [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF]). To understand this purpose, we briefly review the historical reasons for this need. The first piece of historical background is the gap between data and theory at that time. [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF] cite the example of Herbert Blumer's critique [START_REF] Blumer | An Appraisal of Thomas and Znaniecki's The Polish Peasant in Europe and America[END_REF] of Thomas and Znaniecki's monograph, The Polish Peasant in Europe and America [START_REF] Thomas | The Polish peasant in Europe and America: Monograph of an immigrant group[END_REF].
Blumer's major critique concerns the gap between the data and the theory of the monograph: theoretical conceptions were not grounded in the data, and the data could not adequately test the theory used. This concern aligned with the trend of the time toward better verification of theories through better (quantitative) measurement and thus better data (Barney G. [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF]). But [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF] argued that the concern should be reversed:

"What might have happened if Blumer had focused less on the problem of verification and more on generation. He did, of course, come close to emphasizing the latter, since he raised the issue of how to theorize from data rather than from the armchair. But, as we see it, whatever his intent, Blumer threw the weight of his analysis toward an examination of verification, rather than toward the question of how to generate grounded theory. He left that latter problem largely untouched, apparently assuming that the most one could say was that good theory is produced by a fortunate combination -an inquiring mind, rich experience, and stimulating data." (p. 14)

Glaser and Strauss's focus is clear: we should consider the other direction in filling the gap between data and theory, not just verifying theories with data but generating theories from data. This created a need for a suitable method for generating theories, especially from data that is highly grounded, such as ethnographic observation. The grounded theory method was built against this historical background, with the purpose of generating theory directly from data in a systematic way. It proposed an alternative to the sociological focus of the time, namely testing "grand theories" deduced by logic. As [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF] point out, the difficulty of testing grand theories lies in the fact that these theories were generated by the "great men" (Weber, Durkheim, Simmel, Marx, etc.) through logical deduction, and that it is difficult to significantly question them as a whole or to gain deep insight into how they were generated. As a result, the task for researchers became testing these grand theories in various situations with different data, making minor modifications, but not generating their own theories or even understanding how to generate theories from data. Problems emerge because, as [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF] suggested, these grand theories were not enough to cover every aspect of social life, especially contexts that the "great men" had not lived through. There are also problems in understanding grand theories in specific situations: because they are not grounded, they might not be easily understandable when applied to a specific situation and context. Finally, [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF] claimed that it does not require a "great man" or "genius" to generate a theory. Sociological theory can be generated and understood by students and by "significant" laymen. But to achieve that goal, we need a systematic method to help us generate theory from data, which is the grounded theory method they later proposed and which has been successful for the past decades.
To conclude, the rationale for grounded theory resides in the following arguments concerning the generation of grounded theory:
1. Grand theory has gaps from data; verification sometimes does not work.
2. Grand theory cannot reach every aspect of life, especially in emerging phenomena.
3. Generating grounded theory is one way to fill the gap.
4. We should learn to generate theory with systematic methods.
5. Everyone, not necessarily a genius, can learn to generate theory from data.

Pragmatism and Grounded Theory Method

Before we go into the detail of the grounded theory method in the next section, we would like to review the relation between pragmatism and grounded theory, which casts light on the further purpose of grounded theory. From a pragmatist perspective, theory serves the purpose of facilitating action or increasing understanding [START_REF] James | Pragmatism: A new name for old ways of thinking[END_REF]. As Glaser and Strauss argued, the ultimate position of grounded theory is that it serves as "a way of arriving at theory suited to its supposed uses" (Barney G. Glaser & Strauss, 1967, p.3). What is the "supposed use" then? Glaser and Strauss further gave their definition: (1) to enable prediction and explanation of behavior; (2) to be useful in theoretical advance in sociology; (3) to be usable in practical applications-prediction and explanation should be able to give the practitioner understanding and some control of situations; (4) to provide a perspective on behavior-a stance to be taken toward data; and (5) to guide and provide a style for research on particular areas of behavior. (Barney G. Glaser & Strauss, 1967, p.3) There is a strong sense of "usefulness" in the above definition, especially in (3) and (4). This purpose is obvious when they claim that grounded theory "fits empirical situations, and is understandable to sociologists and layman alike." (Barney G. Glaser & Strauss, 1967, p.1). Grounded theory should be understandable to sociologists and laymen alike, laymen who, as important practitioner readers, will make significant practical use of the theory. This practical use is sometimes hard to achieve for the all-inclusive grand theories, which appear formidable to laymen. In the later development of grounded theory, there was a divergence towards either an objectivist (Barney G. [START_REF] Glaser | Basics of grounded theory analysis: Emergence vs forcing[END_REF]) or a constructivist [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF] direction. The difference clearly resides in researchers' epistemological choices, as we reviewed in Chapter 6.1. But the close relation with pragmatism provides epistemological ground for both paradigms. Antony Bryant (2017) explained how pragmatism can set up the criteria for generating grounded theory without falling into the dichotomy of the objectivist and constructivist paradigms: From the perspective of Pragmatism, the issue of the relationship between theory and practice is one of how useful the former is with regard to the latter; theories are judged in terms of their utility, as examples of "enacted truth." If a new theory has no impact on existing practices, then we are in the realm of Dewey's difference principle, which states that any dispute between proponents of the old theory and the new one is not of any practical concern, although it may be that these differences do prove to be important at a later time.
New theoretical insights, whether in the form of grand theories, conceptual models, or some such, need to be judged in terms of the differences they make to people's practical understanding and actions. Strauss's continuing interest in theories of action and interaction in social settings provided evidence of this need. (Bryant, 2017, p.343) The purpose of grounded theory has a strong bond with the pragmatist philosophical tradition. Theories are judged by their utility, which must improve people's understanding of practical actions. This purpose is also significant in our thesis, as we will constantly mention in our analysis in the later PARTs. In addition to its purpose, grounded theory has an inherently strong relation with pragmatism in terms of its understanding of data, coding, theory as process, and iterative inquiry, which are discussed in detail in [START_REF] Strübing | Research as pragmatic problem-solving: The pragmatist roots of empirically-grounded theorizing[END_REF]. As we focus here on the purpose of grounded theory from a pragmatist perspective, interested readers can refer to Strübing's paper for further discussion. To conclude, grounded theory is a set of systematic methods that Glaser and Strauss developed to generate theory directly from data. It was proposed as an alternative to testing hypotheses deduced from existing (grand) theories. Glaser and Strauss explained that the rationale was to fill the gap between theory and data and to allow more new theories to come not only from the "genius" but also from students and the "significant layman". Grounded theory generated in this way has an inherent link with the data and the substantive context (in Glaser and Strauss's case, the study of the dying and death of seriously ill patients); hence such theories are mostly substantive theories [START_REF] Bryant | Grounded theory and grounded theorizing: Pragmatism in research practice[END_REF]. The pragmatic purpose of grounded theory is emphasized so that the theories are used to provide new understanding, perspectives and guidance for practice.

Constructivist Grounded Theory (C-GT) Method

The canonical grounded theory and its development, especially by Glaser, are based on objectivist or positivist assumptions. In the 2000s, Charmaz and others developed grounded theory based on an interpretive or constructivist epistemology [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF]. Constructivist grounded theory (C-GT) holds that both the data and the analysis are socially constructed by the researcher rather than objective, as the researcher is part of the world they study. In objectivist grounded theory, data is independent of the researcher and the aim is to generate context-free generalizations of the grounded theory, while in constructivist grounded theory, observation and analysis are socially constructed and the aim is to create reliable, original and useful theory. In this section, we introduce the constructivist grounded theory method in detail, with its specific techniques and rationale. C-GT is an offspring of grounded theory in general; therefore it inherits the general principles and purpose of grounded theory, which is to generate theory from grounded data through a systematic method. The method is characterized by a systematic, iterative and flexible use of different strategies of coding, categorizing, memo-writing, conceptualizing, sampling and theorizing.
[START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF] summarized the defining components of the systematic methods as follows:
- "Simultaneous involvement in data collection and analysis.
- Constructing analytic codes and categories from data, not from preconceived logically deduced hypotheses
- Using the constant comparative method, which involves making comparisons during each stage of the analysis
- Advancing theory development during each step of data collection and analysis
- Memo-writing to elaborate categories, specify their properties, define relationships between categories, and identify gaps
- Sampling aimed toward theory construction, not for population representativeness
- Conducting the literature review after developing an independent analysis." (pp. 5-6)

The above components present a few key understandings of grounded theory. First, there is a strong dependence on the data. All the components more or less emphasize this understanding, as data needs to be simultaneously involved and compared in order to generate the codes and categories that ultimately advance the theory. Second, there is little dependence on preconceived concepts, theories or hypotheses. This aligns with the purpose of generating theories instead of testing deduced hypotheses. Charmaz emphasized that the codes and categories need to be constructed from data, not from preconceived theories, and that the literature review should come after developing the analysis, which precludes dependence on preconceptions during analysis. Third, it is a developing and iterative process. Theory generation is a constant and iterative effort; it is manifested in abstracting the codes and categories, in comparing different data, codes and categories, in self-reflexive processes such as memo-writing, and in further sampling to advance the theory. The advance from data to codes, then to categories and concepts, and eventually to theories is a continuous yet iterative abstraction. These components together construct the whole grounded theory methodology, which we review in detail in the next sections. For Charmaz, most of these components of the method align with the canonical grounded theory despite their epistemological difference. Instead of seeing these components as a concrete process that one should follow in strict sequence, Charmaz regards them as "a set of principles and practices" that one can follow in one's own grounded theory [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF].

Initial Coding

The purpose of C-GT is to generate theories from data in a systematic way. Coding is the first analytic tool researchers use in grounded theory. Charmaz defines coding in C-GT as categorizing "the segments of data with a short name that simultaneously summarizes and accounts for each piece of data" (Charmaz, 2006, p.43). Coding consists of at least two stages: (1) initial coding and (2) focused coding. Initial coding allows researchers to closely scrutinize data, word by word, line by line or incident by incident, to define what is happening in the data. Focused coding means researchers select the most salient and significant codes emerging from the initial codes and test them with more data for further comparative analysis. Initial coding helps the researcher to abstract actions, relations, key objects, conflicts, meanings and feelings in the data.
The generated initial codes are the first analytical abstraction, which should stick closely to the data so that the gap between theory and data remains small. Once this initial abstraction is made, the codes represent the researcher's selection and segmentation of data with analytical potential. The set of initial codes helps the researcher to surface the meaningful components in the large amount of data. It is important to note that in grounded theory, especially constructivist grounded theory, codes are determined by the researcher through the constructive interaction between the researcher and the data, not by preconceived theories or concepts that the researcher passively follows. This approach is typically different from coding in a positivist paradigm, in which a logically deduced framework is present before coding. Initial coding in C-GT does not rely on such a guiding framework or rationale. Instead, the rationale and framework are generated from the initial coding rather than imposed on it, which exemplifies a highly abduction-based rather than deduction-based process. In this process, two key points are essential: first, the true analytical value is driven by the data and its meaning construction rather than by theory-testing; second, initial coding encourages generative coding so that one can identify important analytical insights for new areas that the researcher might not have been aware of before scrutinizing the data. We use the following initial coding (Table 4.1), taken from [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF], to exemplify what initial coding is: In this interview, the interviewee Bonnie, who lives alone, suffers from a chronic illness (systemic lupus erythematosus) that became quite serious during the past year. In the interview, Bonnie explained how difficult it was to communicate with her daughter about the illness, even when it was life-threatening. Charmaz's initial coding was done in a line-by-line fashion. She scrutinized the data line by line and developed her understanding by abstracting the key elements of the story. The codes cover multiple dimensions of actions, feelings, rationales and consequences. The initial codes do not follow a strict preconceived guide; they are instead used for generating analytical categories from the abstraction. The codes are not final, and some follow a preliminary format (e.g. "sounding fine", "a moral lapse?"), which is acceptable as they serve as intermediate products in the process of iterative generation. The main use of initial codes is for the researcher to really immerse themselves in the data, to read through and compare, and to extract key categories. The language of the codes is also essential, as it represents the analytical thinking of the researcher. The initial codes as a whole present how Charmaz reads the story, which is centered on "telling" and "self", as [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF] suggests.

Focused Coding and Categories

Focused coding is the second phase of coding in C-GT. [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF] defines focused coding as a process to "use most significant and/or frequent earlier codes to sift through large amounts of data. Focused coding requires decisions about which initial codes make the most analytic sense to categorize your data incisively and completely."
(p.57) In focused coding, two important steps are essential: 1. to select the most salient and/or significant codes from initial coding. 2. to test the code with larger data for further analysis. If the initial coding is a divergent phase which opens our analytical space, the focused coding is a convergence phase which concentrates the analysis for fine-tuning. As we see from the above example of Bonnie's story (see table 4.2), although Charmaz made the first abstraction, the whole set of codes still remain very open. We will need one or several pivotal codes to construct the categories or framework with which we can sort and organize the codes into analytic use. Focused coding serves this purpose. Like initial coding, in C-GT, focused coding is a constructive process. Choosing the salient and significant codes is both grounded in understanding the data and in the researchers own meaning making. Sometimes, the process is iterative, researchers might not be able to locate the most significant codes at first sight, he or she may need to continuously interact with the data, try to test different analytical directions and finally come to the focused codes that has the most analytical potential. Sometimes, new ideas emerge as the researcher iteratively review the codes and data so that he or she might continuously evolve the analytical direction while converging. The converging and selecting is constant comparing process, researcher compare different codes with data, data with data, codes with codes to see their relationship, dependence and generalization. A good focused code is pivotal to different other codes and has the potential to be applied to more data. In the following example (Table 4.2), [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF] made such focused coding "avoiding disclosure", "managing emotions", "assessing potential loss and risks of disclosing", etc. These codes are one step further to the initial codes that present key actions and feelings that is central to their chronic illness experience. With a further review, one can find two of focused codes, namely "avoiding disclosure" and "assessing potential loss and risks of disclosing", have connection to the code "demand self-disclosure" in the previous interview with Bonnie(Table 4.1), which illuminate the researcher to focus on the action of "disclosure" and "self-disclosure" for further analysis -what is the different situations in the "disclosure" action, why they are different and how, etc. Focused codes like "disclosure" can develop into "categories", which means they apply to different data and illuminate further comparative analysis -to generate more sub-categories based on the categories, to fit in more data and codes to the category, to develop more properties to the category, or to understand the relationship between one category to other categories. Categories are essential to grounded theory, as they construct an analytical cluster for the initial codes that represent a significant action, process, feeling and objects. Confirming categories is an iterative process, as one will need to test this category with new data, which possibly evolve the existing category. The evolve also includes developing new sub-categories and properties for the category.. For example, a "disclosure" of illness can have sub-categories as "self-disclosure" or "other-disclosure", and properties like "calm", "painful", "reluctant", etc. 
based on different data and on how the researcher defines the data. Once the category is confirmed, the researcher will need to test it with extensive data to develop a complete set of sub-categories and properties for the substantive field. This process of sampling for complementary data is called "theoretical sampling" (Barney G. [START_REF] Glaser | Discovery of grounded theory: Strategies for qualitative research[END_REF]).

Generating Theory, Grounded Theory and Concepts

The final step of C-GT is to generate theory from the first two steps of coding. Generating the grounded theory requires the researcher to analyze in depth the relationships between the different emerging categories in a theoretical direction. Ultimately, the categories are not just for clustering: the grounded theory must introduce a final analytical abstraction to define the relationships between categories, their sub-categories and properties. In other words, the categories cannot be left as isolated labels for different actions, feelings, objects and consequences; they have to construct an analytical story with dynamic relationships between the categories to explain the substantive field we study. The following figure (Fig. 4.1) illustrates the level of abstraction that pinpoints the relation between categories. Sometimes researchers consciously guide their theoretical construction with words such as "context", "condition", "cause", "self-other", etc. These words help the researcher to think in a theoretical direction while trying to generate theory from the categories. They are called "theoretical codes" (B. G. [START_REF] Glaser | Time for dying[END_REF]). On other occasions, researchers might not be aware of the theoretical codes they use in developing their final theory; in Charmaz's case, she reflects that her theory was influenced by symbolic interactionism without her deliberately stating it [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF]. In developing the final theory, one or two core categories might emerge that pivot the relationships between the categories. These categories are sometimes designated as Concepts. Therefore, the final theoretical abstraction can be accomplished through a conceptualizing process. In the following figure (Fig. 4.2), Bryant explained and illustrated the whole grounded theory process as "a spiral abstraction" (Bryant, 2017, p.97). The process features increasing levels of abstraction from data to codes, to categories, and to the final concepts. Throughout the process, the researcher needs to constantly compare different data, codes and categories, and determine the significance of and relations between them, which ultimately constructs the theory.

PART II. THE DISCOVERY OF CO-MEANINGFULNESS IN CoLAB

In PART II, we enter our study of CoLAB learning. The whole of PART II can be divided into two sub-parts. The first sub-part, from chapter 5 to 7, concerns the socio-cognitive perspective of CoLAB, whereas the second sub-part, from chapter 8 to 11, concerns the socio-cultural perspective of CoLAB. As we reviewed in chapter 1, the socio-cognitive perspective and the socio-cultural perspective are the two key perspectives in studying collaborative learning in the constructivist paradigm. Here we inherit these two perspectives, as we believe they are both important and relevant in answering our initial questions.
Since we take both perspectives, our final result of "Co-Meaningfulness" presents a hybrid feature of both, which we will discuss at the concluding chapter, chapter 12, of this PART. We use the constructivist grounded theory method. Through a detailed case study of "Jumping Video" from the socio-cognitive perspective, we generate the key category of "Co-Meaningfulness" and its key properties. Then the "Co-Meaningfulness" is studied with a socio-cultural perspective, which connects this concept to the socio-historical experience of the learner and the sociocultural context of the learner community. The whole grounded theory study is a continuous and iterative inquiry and theorizing, the development of which is summarized and visualized at the end of this PART in chapter 12. Chapter 5. The Socio-cognitive Perspective of CoLAB Introduction to the Socio-cognitive Perspective of CoLAB The socio-cognitive perspective of CoLAB is studied in Chapter 5, 6 and 7. The purpose of studying CoLAB from the socio-cognitive perspective as the starting point, is to understand how people interact and collaboratively learn in the CoLAB setting. We do not hold too much preconception in the study, except for understanding the process is essentially complex and very often not easy. This leads us to the initial questions around the challenges of CoLAB learning process, for example: • What are the main challenges do they find in the communication and collaboration? • What works (or not) in the communication and interaction in overcoming these challenges? • What process in overcoming these challenges has led to learning and creativity? With these initial questions, we start to study the socio-cognitive process in CoLAB. With the socio-cognitive perspective, the microscopic collaboration and learning is at the center of our focus. As we are particularly interested in zooming into the collaborative learning process, unlike many other grounded theory research that focuses on personal reflection(e.g. from interpreting interviews), we focus on the conversation and interaction detail and try to compare and analyze them with grounded theory. This approach is from the research tradition in the socio-cognitive perspective of collaborative learning. We start with the micro analysis of the small group collaboration but bearing in mind that the socio-cognitive process is inherently related to a larger social context. Therefore, the final concepts generated in the grounded theorizing process present both an interactionist and a socio-cultural perspective. The interactionist perspective is more thoroughly studied from chapter 5 to 7, whereas the socio-cultural perspective is more comprehensively discussed in the chapter 8 to 11. We adopt the constructivist grounded theory approach. In this sub-part, the study is based on an in-depth investigation into a case project: "Jumping Video". This project took place in one of the workshops we observed and appeared to be a typical CoLAB learning containing rich group interactions of different patterns. It is also a CoLAB process that features typical challenges and how learners interact to face these challenges. Therefore the "Jumping Video" is purposively sampled for the socio-cognitive study. By a two-step grounded theorizing (initial coding -> focused coding), we are able to generate a key category, i.e. the Co-Meaningfulness, and its two key properties, i.e. 
"M-M Coherence" (short for Meaningfulness-Meaningfulness Coherence) and "P-M Intensity" (short for Project-Meaningfulness Intensity) to theorize the socio-cognitive process. Moreover, we generate a visual presentation for these concepts and properties as a framework for further categorizing and grounded theorizing. The "Co-Meaningfulness" emerges as a key category in the coding when students find it not enough just communicating on the "project" level, and that they have to communicate and negotiate which part of the project is more "meaningful". The understanding of "meaningfulness" appeared quite divergent in the beginning but gradually changed in the course of the workshop. The phenomenon of constructing "Co-Meaningfulness" is discovered as an underlying but significant process in CoLAB learning, especially salient when the group faces challenges in the collaboration. "M-M Coherence" and "P-M Intensity", as two properties that characterize "Co-meaningfulness" are discovered as we closely compare the different episodes of the project in finding out why and how their "Co-meaningfulness" changes over time. The "M-M Coherence" means how coherent the group member's meaningfulness are related to each other. The "P-M Intensity" means how actively group members engage their meaningfulness into their project -e.g. are they fully engaging, efficiently communicating or are they compromising their meaningfulness. We propose these two properties as two dimensions to describe the different status of the evolving "Comeaningfulness". The "Co-Meaningfulness" and the properties of "M-M Coherence" and 'P-M Intensity" are the key results of the socio-cognitive perspective study of CoLAB. All these concepts were generated with close reading, coding and conceptualizing of our data, which includes interviews of key participants and key conversation during the workshop, all voice recorded, transcribed and translated into English (originally in Chinese). Also, research memos written during the closing reading are also used as part of the data. All participants received oral inquiry for consent on the data collection and interviews and confirmed their consent. We receive and follow the research ethics guidance from the Center for Research and Interdisciplinarity for conduction this research. The initial study in this part generates the first set of categories and concepts that are used to analyze the socio-cognitive process. It builds the first bones and skeleton of our study but does not adequately answer a few important questions like what exactly "Meaningfulness" means for a learner, especially concerning to their historical experience. These questions need to be answered with more theoretical sampling for the socio-cultural perspective of CoLAB, which is elaborated in chapter 8,9,10 and 11. We give the basic structure of the socio-cognitive perspective from Chapter 5 to 7. Chapter 5 introduces the general structure of chapter 5 to 8 as well as the project "Jumping Video", on which we base the grounded theory of this subpart. It gives the context of the workshop and the project, e.g. information of mentors and participants, as well as the main content and development of the project during four days. The introduction will present the key elements of the "Jumping Video" so that readers can be more familiarized with the case in the following grounded theorizing. Chapter 6 presents the grounded theory methods we apply to the data. We apply Charmaz's "constructivist grounded theory" approach. 
We present our method in detail, which unveils how the key category of "Co-Meaningfulness" emerges. Chapter 7 is a detailed analysis of the case "Jumping Video" with regard to the category of "Co-Meaningfulness". We compare the different episodes and conclude that "M-M Coherence" and "P-M Intensity" are the two properties of "Co-Meaningfulness".

The Case of "Jumping Video"

We base our socio-cognitive study on a case study of "Jumping Video". In this section, we introduce the project "Jumping Video", based on the field notes, conversation recordings and research memos (see Appendix A). Choosing the project "Jumping Video" is a purposive sampling decision. One of our main motivations for the investigation is to understand the challenges in the CoLAB process. The project "Jumping Video" is one of the most representative CoLAB learning experiences, featuring rich types of challenges. These challenges, and the attempts to solve them, drive the whole learning experience of the group. There are a number of key moments in the whole process, through which the group clearly transits from one stage to another in their collaborative learning. These stages are different: sometimes the group appears quite silent, sometimes they are very excited, and sometimes they argue fiercely over a certain topic in their learning. In the post-interview, they also report that they believe their process was the most complicated and most interesting compared to other groups or to their other experiences of within-boundary learning, and that their process presents typical situations and challenges in CoLAB (Appendix A2.1). This self-report aligns with our observation, so we decided to purposively sample this project as our key case study in the socio-cognitive perspective of the CoLAB study. We divide the whole development of the project into 6 episodes. The whole learning is a 4-day-long collaborative learning process, so dividing this very long process is helpful for our analysis. We divide the process primarily on the basis of the natural division of time: Episode 1 is the first day, Episodes 2 and 3 are the second day's morning and afternoon, Episode 4 is the third day's morning, Episode 5 is the third day's evening, and Episode 6 is the fourth day. But as the natural division of time also brings changes of learning focus and topic, these episodes also feature the different stages of the learning process, presenting very different group dynamics in each episode. In this chapter, we present the important events in each episode to give a general outline of the project. We strongly recommend that readers refer to Appendix A if they are interested in the details of the project.

Workshop "Smart Movement"

"Jumping Video" was a student project created in one of the interdisciplinary workshops we observed. The workshop "Smart Movement" was designed to explore the integration of design, smart device engineering, and the physics concept of "movement". Students from design, science and engineering formed interdisciplinary groups to work on a project that had to be related to measuring movement with a smart device. Beyond that requirement, students were free to define their own project. Students were encouraged to come up with solutions to open wicked problems, for example in education, health and digital entertainment, using the "Smart Movement" device. The workshop features all three characteristics of CoLAB: orientation towards open wicked problems, collaborative learning, and learning across boundaries.
The workshop took place from Dec 13 to 17, 2016, in an interdisciplinary design master program at the Open FIESTA center, Tsinghua University, Shenzhen, China. The 5-day workshop started with a brief introduction and design prompts on Day 1, and ended with a concluding presentation to the general public on Day 5, leaving roughly 4 whole days for the groups to build their own projects from scratch. Our observation focused on the 4 days in which the group worked on their project. There were in total two mentors for the workshop: Professor Jo and Ph.D. candidate Ke (Ke is also the author of this research). Professor Jo organized the workshop, while Ke facilitated the students and at the same time observed the workshop. Jo gave the general topic in the beginning and facilitated the students' discussion and prototyping throughout the workshop. The mentors also brought in a hardware tool called "movuino" to help the students prototype. This tool is an easy-to-program chip for movement measurement and recognition. Ke gave guidance on this tool in an optional session for those who needed help with implementation. The group of "Jumping Video" consisted of three members: a male engineer, a female mechanical engineer and a female designer, all in the first year of their Master's. They had all been well trained during their bachelor's degrees in their respective disciplines and were then just entering their interdisciplinary master program "Innovative Design for the Internet Plus Era".

Mentors and Participants

Xid, student, design background, team member who followed the whole project
Ming, student, engineering background, team member who followed the whole project
Ren, student, mechanical engineering background, team member who followed part of the project
Ke, mentor, Ph.D. candidate, background in physics, interaction design and social science
Jo, mentor, a physics professor who also hosts many interdisciplinary design workshops
Li, external observer, bioengineering background
(All names are pseudonyms)

Xid, Ming and Ren are the three master students working on the project. Ke and Jo are the mentors, and Li is a staff member of Open FIESTA who sometimes joined the conversation. Jo is a French physics professor, but he is also keen on new ways of teaching physics, especially with the help of smartphone applications. Before the workshop, he had developed a smartphone application with which students can explore physics concepts such as velocity, acceleration and oscillation using the built-in accelerometer and gyroscope. Inspired by this new way of teaching and learning, Jo started to explore other possibilities that make use of "movement" measurement, e.g. in health, education and sports. That was where Jo started to work with designers, as they can sensitively discover and design these concepts and applications. The exploration process then became CoLAB learning, as Jo organized workshops to bring together designers, engineers and scientists to co-create their own projects. This Open FIESTA workshop was one of his early (but not the first) attempts. Ke is a Ph.D. candidate and also the author of this research. He participated in, and at the same time observed, several of Jo's "Smart Movement" workshops. Ke studied physics in his bachelor's and interaction design in his master's; he therefore has both backgrounds to understand the different perspectives. Ke is Chinese, and before his Ph.D. in Paris, he did his master's at Tsinghua University, China. He is familiar with both the academic and the social context of this workshop.
Xid is a design background student, she studied industrial design and interaction design in her undergraduate. She is quite familiar with the workshop format, but she also felt this workshop was quite unique as the mentor has a physics background thus brought new perspective she had never experienced in a pure design environment. Ming has an engineering background, and he studied optics and electronics in his undergraduate. He had also joined design workshops a few times before this workshop. In one of his internship during undergraduate, the technology company often invited designers to discuss and he enjoyed the mixed perspective. Ren has a mechanical engineering background, and she is a classmate with Xid and Ming in the interdisciplinary program. Ren only joined part of the workshop as she had other duties. The Project "Jumping Video" by Episode Here, we present the project "Jumping Video" by episodes. As we said the episodes are mainly divided by time. We correspond the episode with the line number of the transcription in the Appendix A1. Episode 1. The Selfie Stick (Line. 1-28) The Episode 1. (corresponding to Line 1-28 in the conversation transcription) took place on Day 1 in the classroom. The group just started their brainstorming and they were discussing on one idea of a special selfie stick(see Fig. 5.1). The selfie stick was essentially Ming's idea. Ming wanted to mount a lighting device (the red frame-like part in Fig. 5.1) onto the selfie stick so that the selfie stick itself can fill light (the red lines emitted towards the user in Fig. 5.1) when needed, e.g. when the user is in dark environment or wants to fill extra light. The group was mainly discussing how to implement this idea but not so much on the purpose of the project. The episode ended as they found out their idea cannot meet the workshop requirement, as it did not address enough the concept of "movement". They had to change the project. The next day, as the group found out their previous idea did not fit the workshop, they started to look for new directions. As Ming was the main contributor of the first idea, Xid wanted to propose something she is interested in this time. Inspired by the lighting concept, Xid wanted to design a camera system that corresponds to the movement of the subject being photographed. Xid focused on the experience of the subject while Ming was focusing on what specific lighting effect to design. This divergence gradually became a collaboration challenge for them. This Episode features how they coordinated their divergence. Episode 3. Getting Stuck (L. 61-170) The Episode 3. (corresponding to Line 61-170 in the conversation transcription) took place on Day 2. Li came and joined the conversation. But his constant challenging did not help very much in the project progress. They seemed to get stuck. With little progress, they even considered giving up and changing again. The new idea however did not last long as it too did not fit the workshop requirement. This episode featured a creative moment after they successfully made their first prototype. All of the participants and Ke were quite satisfied with the final effect. The idea originated from Xid's idea of associating the subject's movement to the camera setting. And during the outdoor prototyping, they found out the best association would be to move the camera as the subject moves so as to create a special moving effect. For example, Xid rotated the camera while Ming at the same time rotated his skateboard he stepped on. Episode 6. 
The Big Fight (L. 275-443)

Episode 6 (corresponding to Lines 275-443 in the conversation transcription) took place on Day 4. Although they had fixed most parts of the project, they got stuck when naming it. Xid and Ren had different opinions on the naming. The naming conflict soon turned into a big fight over the general purpose of the project. Both Xid and Ren, who held conflicting opinions, felt very frustrated, as each sensed that the other did not understand her perspective. Ke helped resolve the conflict and at the same time explained to them the nature of the conflict. The group could finally fix their project before the presentation on Day 5. The different episodes present several ups and downs in this CoLAB process, with rich types of challenges and attempts to resolve these challenges, which we analyze in chapters 6 and 7.

Chapter 6. Coding and the Emergence of the "Co-Meaningfulness" Category

In this chapter, we start to analyze the project "Jumping Video". The analysis method we follow here is "constructivist grounded theory" [START_REF] Charmaz | Constructing grounded theory: A practical guide through qualitative analysis[END_REF], a branch of grounded theory taking a constructivist perspective (see chapter 4 for a full review). The grounded theory method helps us to sort and categorize the data as we construct a grounded theory useful for interpreting the socio-cognitive process. As we want to zoom in and understand the micro scenarios in depth from the socio-cognitive perspective, we largely rely on analyzing the recorded conversation, supplemented by interviews and the researcher's memos. This choice inherits the tradition of collaborative learning research in the socio-cognitive perspective (see chapter 1.3 for more detail).

Initial and Focused Coding

Initial Coding

In this chapter, we follow the constructivist grounded theory and adopt a line-by-line coding strategy for initial coding. We present two examples of the conversation and coding to illustrate how we code the data. For a full review of the conversation and coding, please see the APPENDIX. The initial codes exhibit the primary analytical choice we make, which is to highlight the actions as well as the objects we code. The coding of the conversation is a first attempt to generate a set of raw analytic codes; the next step is to read and compare extensively to find the most useful codes that can be applied in further analysis. We come to the analysis of these codes in the next chapters and sections. Here is a second example of initial coding (Lines 161-170):

"The interaction itself should have meaning. Her idea is the trigger itself is movement. Photography itself can connect to movement. Can be the angle, can be the shutter speed, can be settings, can be blurred at some moment. Some movements can be defined by the camera to be wide FOV, some movements defined to be like from an animals' eyes. There is a lot to play here, but not talk. You need to play with the movements. Like the can't touch example, we designed all the movement, for a whole day. There's a lot of space to play it. I think the idea is good, worth playing and testing."

Initial codes: Approving the project idea; Associating moment to meaning; Mentioning more parameters; Encouraging "playing"; Recalling "example"; Emphasizing testing.

Focused Coding

A close reading of the codes leads to focused coding of the socio-cognitive process in "Jumping Video".
Two focused codes emerge from our initial coding.

The "Project", the most salient code

The most frequently emerging code is the "Project" the students worked on. Learners constantly propose and accept/reject new ideas for the "Project" (e.g. Describing the effect, L35; Keep proposing new function and effect, L37), as well as comment on, approve or criticize the existing "Project" (e.g. Indicating confusion, L40). The "Project" is not a single object. For example, students can discuss the "Effect" of the final presentation or the "Function" of certain parts, or they can argue about the "general purpose" of the "Project". There are also different actions associated with the "Project". The following table (Table 6.1) summarizes the focused code of "Project" and its sub-categories emerging from the initial coding (see APPENDIX A1).

Table 6.3 The focused code of "Project" and its sub-categories in terms of "Project" and "Project-associated actions", categorized from the initial codes

The blue part in the above Table 6.1 presents the sub-categories of "Project" as an object, whereas the yellow part presents the sub-categories of "Project-associated actions". Since the "Project" is a very common code, all these sub-categories emerge from the extensive initial coding (see APPENDIX A1 for the full initial codes), which we present in the column to the right of the sub-categories column. We explain each of the sub-categories in the following:

1. Project: "Project Components" and "Work-in-Progress Project"

The project does not always appear as a whole; very often the learners propose a part: a function, an idea, a specific effect. The sub-category of "Project Components" indicates these different parts of the "Project". The following table (Table 6.2) lists the different components in the "Jumping Video" case.

Table 6.4 List of Different Components of the "Project" in "Jumping Video"

Besides the "Project Components", there are also occasions when the learners need to refer to the whole work-in-progress project. This sub-category often appears as "project", "whole project" or "general project" in the initial coding.

2. Project-associated actions: "Proposing", "Accepting", "Falling Problematic", "Coordinating"

Among the sub-categories of "Project-associated actions", we select four important sub-categories which categorize the different actions learners conduct in their learning. The following relation map (Fig. 6.1) summarizes their relations to the "Project" in the CoLAB process. In this illustration, first, the different learners propose their different project components to construct their project, actions which we categorize as "Proposing". Initial codes such as "propose new…", "introducing", etc. are categorized as "Proposing" (see table 6.1 for more detail). Two situations can happen after proposing: either the proposed component is accepted into the work-in-progress Project, which is the sub-category of "Accepting", or it becomes a problem, which is the sub-category of "Falling Problematic". The "Accepting" category often features actions such as "admitting", "supporting", "agreeing", etc., whereas "Falling Problematic" is often associated with initial codes such as "questioning", "denying", "confusing", etc. After "Falling Problematic", the group needs to coordinate their problems, a process represented by initial codes such as "explaining", "choosing", "advising", etc. We code this process as "Coordinating".
The result is either accepted or remains a problem, which leads to a new round of iteration by proposing new and more project components.

The "Meaningfulness", the important but underlying focused code

The second focused code is the "Meaningfulness": students do not only communicate on the "Project" level, they also explicitly or implicitly exchange on the "Meaningfulness" level. They state or imply which part of the "Project" they believe to be more meaningful. The code of "Meaningfulness" is important, as it appears in situations where the students find it difficult to understand each other or where the students feel urged to express their preference of meaningfulness. Often these situations mark important turns or key moments. But it is also not so frequent, as students do not very often explicitly discuss at the meaningfulness level. The code of "Meaningfulness" is important but underlying. Hence we cannot present a full summary of the sub-categories of "Meaningfulness" emerging from the extensive initial codes, as we did for the code "Project" in Table 6.1. A more comprehensive analysis of the sub-categories and properties of this code follows in the next chapters. Here we use a few examples to explain the basic meaning of this key focused code. In Table 6.5, Xid and Ming were debating how the light should be placed, a "function" and "Technical Detail" of the "Project". But the discussion was upgraded to the level of implicit "meaningfulness" exchange when Xid explained that she believed "We are for fun… I think a design should first touch people." (L. 57), while Ming opposed her because she "omit all the detail" and it was "not such fun" (L. 58). Apparently they held a divergence in understanding which part of the "Project" is more meaningful while tackling the same component of the "Project".

52 Ming: "But the prototype is after the written description right. Prototype is for demonstrating the idea, which include the light." (Rejecting, insisting on including the setting)
53 Xid: "I think B light (?) should work. We can buy a curtain, and then we can take the picture." (Proposing)
(Naming not understandable, lacking context)
295 Xid: "But 'you jump I jump' is just an association." (Repeating "association")

Table 6.7 "Meaningfulness" Example 3: Effect of "Jumping" Being Meaningless

As we have extracted from the coding, the "Project" is not a single object, but a developing object with different components. Therefore, when different people approach the "Project", they naturally look for the "components" that best fit what they believe is meaningful. For example, Xid would approach the "Project" from the perspective of "association" and "fun experience" (Examples 1, 3), while Ming would grab the handle of "lighting detail" (Example 1). The differences in "meaningfulness" result in different perspectives, different emphases and even different interpretations of the "Project" itself, which often cause difficulty in mutual understanding and key turns, as shown in Examples 1 and 3. The Meaningfulness in "Jumping Video" is inherently related to the "interdisciplinarity" of the workshop. As Xid explained in her post-interview (see A2.2 in the Appendix), the "association" is not only what she found meaningful in this particular Project, it was also where she believed the essence of interaction design lies. For example, she believed the whole workshop theme was conceived in a designerly way because it "associated" the "movement" with the "smart technology".
At the same time, Ming's meaningfulness of "working out the detail" was also not only present in this project; he explained that the biggest fulfilment in his undergraduate course was to independently implement many types of technology in a single project, which inevitably included working out different technology details. To conclude, there are two major focused codes from the initial and focused coding: (1) the pervasive code of "Project" and (2) the underlying but important code of "Meaningfulness". A few conclusions can be drawn from analyzing the two codes and the data where the codes emerge:
• The "Project" is not a single object; it is a complicated and developing object, with multiple layers of "Project Components".
• The "Project Components" enter the "Work-in-Progress Project" through the coded processes of "Proposing", "Accepting", "Falling Problematic" and "Coordinating", and sometimes this requires iteration.
• Participants often grab the "Project Components" that best fit their meaningfulness. If they have different meaningfulness, this might eventually cause different interpretations and misunderstanding.
• Divergent meaningfulness causes coordination and communication of Meaningfulness in addition to the communication and coordination of the "Project".

Analyzing One Challenge and the Category of "Co-Meaningfulness"

A "Meaningfulness" Case

Our focused coding selects "Meaningfulness", and particularly its relation to the "Project", as a significant analytical starting point. In this section, we analyze one of the cases where the code of "Meaningfulness" emerges in order to deepen our understanding with further coding and sub-categorizing. This step is necessary because "Meaningfulness" is an underlying phenomenon, so a simple "counting" and "categorizing" strategy will not generate an insightful analysis of the underlying mechanism of the phenomenon. We need to delve into the specific case and rely on deep and iterative analysis to guide the grounded theory. The excerpts are taken from Episode 2, from lines 38 to 60 (see the excerpt below):

(Emphasizing the detailed effect as part of the prototype)
Xid: "the prototype is mainly for describing the basic function and then a good video or picture to showcase." (Rejecting the understanding of prototype; providing an alternative understanding of prototype)
Ming: "but if the function includes the light, then we must decide where the light came from and where it projects to." (Insisting on the specific setting)
51 Xid: "this should be included in the written description. We don't need to include it in the prototype." (Excluding the detailed setting from the prototype)
52 Ming: "But the prototype is after the written description right. Prototype is for demonstrating the idea, which include the light." (Rejecting, insisting on including the setting)
53 Xid: "I think B light (?) should work. We can buy a curtain, and then we can take the picture." (Proposing a specific setting)
54 Ming: "yes, we could think twice on the details." (Agreeing)

The first key moment of misunderstanding can be detected in Line 42, when Ming invited Xid to discuss one specific problem: "how to place these lights on the floor". In response, Xid simply passed over the literal meaning of Ming's question and started to introduce a scenario (a dark room) into the "project", which did not answer Ming's question. This was a very important moment that marked the starting point of the misunderstanding. Xid interpreted Ming's question as a question concerning the "scenario" rather than the installation "detail".
Therefore Xid answered by giving a "dark room scenario", which was not what Ming expected. Not being answered, Ming repeated and clarified his question (L. 44), which made Xid realize that what she proposed did not answer Ming's question. From the later discussion, we can infer that the reason for this mismatch is that Ming believed the specific Effect was more important, while Xid believed the abstract interaction was more meaningful. We use the following further coding (in blue) on the meaningfulness in Table 6.9 to present this analysis of Ming and Xid's underlying meaningfulness.

42 Ming: "We haven't reached how we place these lights on the floor."
43 Xid: "We could describe the scenario as a dark room, we could also find such a place."
44 Ming: "no. I mean how you can place and fill the light."
45 Xid: "it's the realistic part, but we are now doing a prototype, we don't need to go that far right?" (Implying meaningfulness: the abstract interaction)
46 Ming: "But even for a prototype, you need to think about these in order to present/express the idea." (Implying meaningfulness: the specific Effect)

Table 6.9 Meaningfulness Coding 1: Discussion on Xid's new idea @Ep. 2

Why, in the first place, did Xid pass over Ming's point? From what she explained later we know she was able to understand what Ming tried to introduce to the "Project", but she failed to interpret it correctly. When Ming asked about how to set the light, Xid interpreted the question as "how to set up a scenario to best facilitate the interaction and have a good user experience", which led to her answer of a "dark room". She couldn't understand why the details of the light position were worth discussing there: "Why would Ming bother to think about the light? This is irrelevant to the meaningful part of the project." Xid focused on the abstract association between technology and the human; this intuitive thinking is very natural for her, since she was trained in interaction design (see Appendix A2.2). She knew how this interaction might hold rich content for further development. Ming, on the contrary, focused on the specific Effect: how exactly they were going to place the light so that it would be a creative and meaningful project for him. This intuitive thinking is due to his engineering background, which emphasizes a more realistic and practical meaningfulness. Up to this point, we have not seen any explicit explanation of meaningfulness; Xid and Ming were trying to communicate their meaningfulness through discussion on the "Project" level. The second key moment followed right after. In Line 57, Xid said, "We are for fun", "I think a design should first touch people", all central to her meaningfulness. Xid clarified her meaningfulness because she sensed that Ming did not share the same ground. Therefore she had to make explicit what she believed to be more meaningful. In clarifying "having fun" and "touching people", she tried to justify her omission of the realistic part. To her, Ming's proposition, focusing on the realistic part, conflicted with her intuitive thinking. She did not believe that digging into the detail would be of any help in making her meaningful Project, but felt compelled to apply her principles of "having fun" and "touching people" to the Project. Ming's immediate answer was even more interesting. Instead of disagreeing with Xid, Ming actually agreed with Xid's meaningfulness of "having fun", saying "yes, this point is very important" (L. 58).
But Ming did not agree that omitting the detail would lead to "having fun", saying: "You say it would be fun, but you omit all the details, it's very abstract now. One beam of light is not that fun" (L. 58). His response implies that his ultimate meaningfulness lies in the Specific Effect, not in the interaction: the Specific Effect would trigger and support the meaningfulness of "having fun", but not the "having fun in interacting" that Xid implied. We use further coding (in blue) on the meaningfulness to present this analysis. Ming seemed to agree with Xid's meaningfulness ("having fun"), but in fact they held different meaningfulness. More importantly, this meaningfulness is not isolated, but exists in connecting their "Meaningfulness" to the "Project". When Ming tried to connect "having fun" to the project, he could not imagine the same "fun" as Xid would imagine. Therefore, even if Ming could agree on the meaningfulness of "having fun", he could not agree on the connection between the project which "omits detail" and the meaningfulness of "having fun". In the very end (L. 59), Xid's proposition was an attempt to incorporate the two different "meaningfulness" together. She finally gave details such as "spot light" and "music", but also did not throw away her meaningfulness of "having fun" ("it might be fun", L. 59). In this way, she tried to repair the divergence by proposing something that Ming might also find aligned with his meaningfulness.

The Category of "Co-Meaningfulness" Emerges

Analyzing the "Meaningfulness" case above helps us to extract a few important understandings that foreshadow the further categorizing and analyzing of the code "Meaningfulness":

Meaningfulness appears as the driving force for the "Project" interpretation and progress: The misunderstanding caused by meaningfulness was not something the participants could just ignore, because they interpret the "Project" with their own glasses of "meaningfulness" on. For example, when Ming asked specifically about the light setting, Xid did not interpret it as a question about lighting but about a general scenario, as she believed the latter to be more meaningful and worth developing. How participants interpret every proposition in the "Project", and in which direction they want to develop every step of progress in the "Project", inevitably has something to do with their distinct meaningfulness. This phenomenon makes Meaningfulness a driving force for the "Project" interpretation and progress.

Meaningfulness is manifested in its association and application to the "Project", not alone: In most cases, Meaningfulness is communicated implicitly in the "Project" space. Only when discrepancies are big enough will people start to reflect on and foreground the meaningfulness. But even then, the Meaningfulness alone does not make independent sense; it has to be established and communicated with regard to the "Project". Ming's "agree-and-disagree" reaction was a perfect example of this phenomenon: the meaningfulness of "having fun" was interpreted differently by Ming and Xid when they connected it to the actual "Project". We can compare the "Project" to an ever-growing object with many handles, and participants' Meaningfulness to hands that grab these handles. The influence of "meaningfulness" exists in the hands-grasping-handles action, not in their separated status or in the hands alone.
The collective status of the "Meaningfulness" matters: If we only see "meaningfulness" as separate properties of each participants which independently influences the collective "Project", we miss to see the rich dynamics of the collective meaningfulness that largely influence the group collaboration process. For example, In the beginning, it is not only Xid's own meaningfulness that produced the debate, but also her feeling of conflicting meaningfulness that compel her to state explicitly about "fun" and "touch people". Later, it was also the "seemingly-align-actually-divergent" meaningfulness status that drove Ming to clarify his interpretation. In the very end, Xid proposed "color" and "spot light" with the intention to combine the meaningfulness of "fun" and "specific Effect", and we can reasonably infer that the successfully combined meaningfulness will influence differently from the current conflicting status. Therefore, the relation of different meaningfulness and their interaction do influence on the "Project". The collective status of the "Meaningfulness" matters. Meaningfulness has the potential to develop: As the "Project" is developing in the course of CoLAB, so is the related "Meaningfulness". In the above example, we can see at the end of the incident, Xid tried to combine their meaningfulness. The action has demonstrated participants intentionally change their meaningfulness. The develop of meaningfulness is very important if we are taking a developmental view of CoLAB: will their meaningfulness develop during the learning with each other? We will need to understand further with this direction. With the above coding and analyzing, we find that: Meaningfulness matters to the Project, and it functions not alone or isolated to the Project, but in a way that is "Project"-relevant, collective, and potentially developing. As the "Meaningfulness" is an underlying and less frequent code, we rely on the in-depth case study to guide our grounded theory. The "Project"-relevant, collective and developing feature of "Meaningfulness" is useful in guiding directions for our further analysis. We define this "Project"-relevant, collective, and potentially developing status of Meaningfulness as "Co-Meaningfulness". It is important to note that here, we define Co-Meaningfulness as it is one of our key category which sets up the initial direction of our grounded theory. But it is far from a comprehensive and mature category or concept at this stage. As we develop our grounded theory, we will further investigate the "Projectrelevant", "collective" and "developing" aspects through comparative analysis of more data and coding. But we also argue that it is important to define Co-Meaningfulness here. First, we need a name for this underlying phenomenon, and "Meaningfulness" alone is not sufficient, especially for the "collective" feature. Second, the "Co-Meaningfulness" defines the key focused categories that foreshadow our grounded theory. In the constructivist grounded theory, the researcher iteratively selects key coding as categories, compare them with more data and coding to develop the grounded theory. 
With this purpose, "Co-Meaningfulness" is the pivotal category that guides our further analysis. The illustration below shows the theorizing so far:

• Meaningfulness evolves into Co-Meaningfulness: "Co-Meaningfulness" is different from "Meaningfulness", as it represents the "Project"-relevant, collective and developing aspects of "Meaningfulness".
• Co-Meaningfulness is an emerging pivotal category that connects to different focused codes of CoLAB. Project-relevant connects to the "Project"; Collective Status concerns the meaningfulness dynamics in the group; Potential to Develop reaches to the past experience and its development.

Chapter 7. Co-Meaningfulness and its Trajectories

In the previous chapter, we presented the key category of "Co-Meaningfulness". In this chapter, we will expand this category through an extensive study of the project "Jumping Video". Unlike the previous chapter, where one case was studied in depth, in Chapter 7 we study all six episodes and generate useful categories and properties from comparing situations across the different episodes. This process in grounded theory is called theoretical sampling, as the sampling is directed by certain theoretical and analytical results, which in our case is the emerging category of "Co-Meaningfulness". The theoretical sampling is important as it expands the sub-categories and properties of our grounded theory with more comprehensive data and codes. We base our comparative analysis on the "conversation" and the "memo of the conversation", supplemented by the interviews. The purpose is to generate more categories and properties related to the category of Co-Meaningfulness through further grounded theorizing.

Further Coding for Categorizing Co-Meaningfulness

In this section, we further code the data across all the Episodes in order to categorize "Co-Meaningfulness". In particular, we build on the previously foreshadowed direction and try to generate sub-categories from two perspectives: the "Project-relevant" perspective and the "Collective Status" perspective.

The "Project-relevant" Perspective

The "Project-relevant" perspective of Co-Meaningfulness means that the Co-Meaningfulness is not independent of the "Project" students are working on. On the contrary, it is essentially manifested in its relation to the project. But how do they relate to each other? What sub-categories of process and properties can we generate from the data? These are the questions for this section.

- Implicit and Explicit: How is Meaningfulness Presented

The first code that categorizes the relation between Co-Meaningfulness and the "Project" is the implicit or explicit presentation of meaningfulness. In some situations, students find it necessary to explicitly state their meaningfulness; in other situations, meaningfulness is implicitly discussed. In both the explicit and the implicit presentation of meaningfulness, the purpose is to influence the "Project". For example, in Episode 1, the meaningfulness was not explicitly presented, but the whole group was discussing the feasibility issue. There was an unspoken consensus on that meaningfulness. It did not create any communication problems, as the group more or less agreed on that implicit meaningfulness, or they did not agree but did not bother to state it explicitly.

Episode 1. The Selfi Stick (Line 1-28)
There are almost no explicit meaningfulness communication. But there is some hint of implicit meaningfulness. 1. Most conversation is around the feasibility issue of the project.
There seems to be a consensus on that. implicit meaningfulness (from A3.1 Memo of Conversation) However, in Episode 2, Xid and Ming started to make more explicit on the meaningfulness. As they sense the other did not understand well enough and the communication on the "Project" level is not sufficient. In the following memo, the status of the Co-Meaningfulness was described as a "tug a war", the two students were arguing back and forth on their respective meaningfulness. They both tried hard to clarify their meaningfulness in the "war", aiming to drag the "Project" to their side of meaningfulness. Episode 2. New Proposition (Line 30-60) This part, in the beginning, I wanted to describe as a conversation of "meaningfulness". But then I changed my mind, as conversation doesn't say much about their will to drag the Project to their preferred direction. It is more like "tug a war" of "meaningfulness". Xid and Ming try to drag back and forth. It seems in this part, no one wins the war. But during the war, both of them tried to explain more clearly their intention and meaningfulness. "tug a war" of meaningfulness back and forth clarify meaningfulness (from A3.1 Memo of Conversation) The Episode 3 saw a stuck situation, where the Project suspended grow as the Co-Meaningfulness was in a very entangled status. Unlike in the previous episode, where students were at least trying to clarify the meaningfulness, in Episode 3, the progress in clarifying meaningfulness was slow. The Meaningfulness was unable to be present clearly so as to influence the Project. Episode 3. Getting Stuck (Line 61-170) It is a pity that even at this point, when Xid talks about "quality does not matter", Li and Xid couldn't dig deeper about their different meaningfulness that drag the project to different direction. Li answers "Li: it looks complicated. Some one can take the job"(L 105), which goes back to his old statement, without progress in clarify his meaningfulness. The Meaningfulness is inherently related to students' motivation in the project. If a student perceives the "Project" to be meaningful, he or she will very probably actively participate in the "Project" making. On the other hand, if the student does not think his or her meaningfulness has something to do with the current, he or she might just stay passive in the "Project". In the first Episode, we can see Ming actively participated in the "Project" construction while Xid was relatively passive, which is the opposite in the second Episode. Episode 1. The Selfi Stick (Line 1-28) The major discussion here is the implementation problem of the selfie stick. From the postinterview we know this selfie stick is largely an idea of Ming's interests. Ming seems to be very interested in the specific idea alone. However, in Episode 5, we saw a big turn of attitude. Because the group was quite successful in working out the concrete idea in their prototyping session. All of them seemed to be very active in the "Project" making. Episode 5. Going outside (Line, 232 -273) The Episode 5 documented the discussion right after their prototyping session outside in the campus. The whole conversation was in a fast pace, everyone was talking fast and continuously, giving response instantly. Everyone was excited and active. They know they have made something interesting and meaningful. 
active participation (from A3.1 Memo of Conversation)

- Effective: How well is "Meaningfulness" connected to the "Project"

Even when students want to clarify "meaningfulness" explicitly, there is an issue of effectiveness. The effectiveness of connecting their "Meaningfulness" to the "Project" level is an important feature of Co-Meaningfulness. Effective learners can present their meaningfulness well and, more importantly, propose proper "Project" ideas that demonstrate its relation to meaningfulness. In Episode 3, we see a situation where Li and Ming failed to connect their meaningfulness to the new "scenario". By comparing different Episodes, we can see the different ways in which interaction between Meaningfulness and "Project" happens, which is a sign of their connection. The first example happened in Episode 1. In defending the Project, Ming put forward a new meaningfulness, the feeling of being artistic, to justify one of his propositions.

Episode 1. The Selfi Stick (Line 1-28)
2. Ming was very active in defending and creating meaningfulness for the project. When they face the problem of light being in the picture. His first reaction was to ignore. But when Ren insisted on the problem, he propose to make a frame for the light so that it looks artistic. This proposition does not directly solve the problem but accepting the problem and reuse the problem to add new dimension of meaningfulness. [codes: Introducing new meaningfulness to justify a proposition to the "Project"; Project -> new meaningfulness -> justify Project]
(from A3.1 Memo of Conversation)

The second interaction we want to exemplify appeared in Episode 2. In this example, Xid tried to combine her meaningfulness with Ming's meaningfulness through a "Project" function: a colorful "spot light". This is a combination of meaningfulness that potentially changes the Co-Meaningfulness as well as the "Project". Not every interaction has a positive result. In Episode 4, we see an interaction that decouples Co-Meaningfulness and "Project". In this example, Ming tries to compromise both his and Xid's meaningfulness and their connection to the "Project". He thought working out the specific technical part was enough, and that they could just leave the divergence of meaningfulness aside. But this act of compromising was criticised in his own reflection: "Another lesson I learned from this workshop is to keep communicating with our group members. Otherwise, we will probably diverge into two directions very far before we notice." (see A2.1)

Episode 4. Mutual Understanding (Line 171-231)
"Ming: I think it is not very necessary to think about its objective, be it practical application or having fun. Because we express the idea through the same implementation." M started to compromise and think that they can put aside the disagreement on the objective, and the meaningfulness behind, and just focus on the implementation. This is a compromising attitude, weakening their meaningfulness and its influence on the project. In the post-interview, Ming opposed this attitude, by saying: "Another lesson I learned from this workshop is to keep communicating with our group members. Otherwise, we will probably diverge into two directions very far before we notice." (see A2.1) [codes: compromising meaningfulness; avoiding further exchange]
(from A3.1 Memo of Conversation)

In Episode 5, when the group came back from the prototyping, they brought back a very good effect.
Ke was very excited and immediately connect the project to his other meaningfulness and forgot about his previous meaningfulness of "powerfulness". This is an example of "Project" raising new meaningfulness. Episode 5. Going outside (Line, 232 -273) So actually he did not think it is absolutely necessary to attach every meaningfulness, the fun and imagination of the effect itself evoke his other meaningfulness which is already enough to be very active and propose ideas -e.g. "servo" Effect raising new meaningfulness (from A3.1 Memo of Conversation) The "Collective Status" Perspective -Conflicting The first status of "Collective Meaningfulness" we observe is a conflicting meaningfulness, when the meaningfulness students hold easily drag the "Project" to divergent or even opposite directions. In Episode 2, as we have already studied in Chapter 11, Xid held the meaningfulness of "associating movement as interaction experience" and Ming held the meaningfulness of "specific effect". They argue for quite a long time before they can finally understood each other and started to look for combined solutions. In the process, their meaningfulness, collectively, present a conflicting status. The same happened to the following episode 3 and 6. In Episode 3, Li was focusing on the practicality whereas Xid focuses on the "having fun", "new interaction experience", which is not so practical and useful. In Episode 6, the situation was similar. Episode 2. New Proposition (Line 30-60) The misunderstanding from L42 started as Ming did not really understand enough Xid's underlying meaningfulness. Ming was focusing on the specific effect, which Xid believe does not even matter. What Xid believes as important is the association of movements -subject can control the lighting and camera with their movement and supplemented with a playfulness feeling. But Li started from a very practical perspective. His meaningfulness is very centered on the practicality side (e.g. he gave an example of capturing wild animals, a very practical application that looks like Xid's idea). This practicality meaningfulness is very different from Xid's meaningfulness of "fun (even crazy) experience". The last episode is a big fight on the naming. Xid wants to call in "You jump I jump" while Ren wants to call it "jumping video" The rationale for Xid is that "jumping video" is not fun, does not present her key consideration of "association", But Ren did not seem to agree much on that meaningfulness. For her the naming should be easily understandable for users. divergent meaningfulness not agreeing on meaningfulness (from A3.1 Memo of Conversation) Meaningfulness In the last example, we see a special case where their meaningfulness posit an opposite direction. Ren believed the name "you jump I jump" was not explicit, and it did not say anything about the Project. But Xid however believe not being straightforward is on the contrary an advantage as it makes people curious. The reason of their opposite interpretation to the same "Project" idea was that they hold an opposite meaningfulness. Episode 6. The big fight (Line. 275-443) For Xid, not being straightforward is on the contrary an advantage. It makes people curious about the content in the video. And people would understand the content after they see the video. 
opposite meaningfulness to the same "project" idea (from A3.1 Memo of Conversation) - Supporting In many cases, friendly and supportive meaningfulness is proposed, sometimes by the same participant and sometimes by other participants to support their peers. The supportive meaningfulness create the second status of meaningfulness status. Episode 2. New Proposition (Line 30-60) What Xid believes as important is the association of movements -subject can control the lighting and camera with their movement and supplemented with a playfulness feeling. These two meaningfulness get along as the playfulness Meaningfulness : association supportive meaningfulness: playfulness feeling is kind of experience and so is the association. (from A3.1 Memo of Conversation) In episode 6, we see a situation of multiple supportive meaningfulness, which I noted as family meaningfulness in my memo. They are not the same but they do support each other and lead the project to similar direction. Episode 6. The big fight (Line. 275-443) Another disagreement appears when Ke and Ming are arguing if they can use the software to represent the movement of the actual camera. Ming's argument is that the movement of a camera might change the habit of photographer, therefore he might not be so used to the new interaction. besides this point, Ming (together with R) also propose some other practical meaningfulness: e.g. calculating the relative speed, calculating the center of the video "Ming: Or it is contrasting to human nature." (L 375) "Ming: I think there is another point that Joel mentioned, that taking video is a good way to recording the movement(speed) of the subject relative to the earth." (L.382) "Ren: in fact, can it be like this, from technology pov. It can detect the center of the picture, so that if the photo is constructed unbalanced, the camera would move automatically to put the focus center to the real center" (L. 389) "Ren: can that people would understand better the rationale of the design , which is to capture the center of mass of the picture. Because normally people would ask why the camera move according to the movement of the subject. People wouldn't understand what we try to express here. However, if we add the concept of center of mass, then the people standing here, the picture is not balanced, and then the camera would family meaningfulness meaningfulness practicality useful application not challenging status quo automatically adjust so that the center of mass is balanced. " (L. 391) But Ke has the opposite opinion. He agrees on Xid's meaningfulness , that the association is important. and from his point of view, the two conflicts are two sides of one problem -what is the actual essence of the project. holding the same meaningfulness, Xid also do not think the challenge of Ming is a big problem, as the purpose is to challenge the current photographing and make new ways of photographing. Ke and Xid's meaningfulness can be indicated from the following statement. "Xid: I don't think it's a problem. There's a lot of anti-instinct design." "Xid: and this design . I really haven't thought about its practical value. Or the practical details. Do you need to go that deep? Considering its value " "Ke: OK, I think the point is to make people uncomfortable." family meaningfulness meaningfulness association, new experience uncomfortable design(challenging the status quo ) (from A3.1 Memo of Conversation) -Coherent The last situation is when the meaningfulness is aligned in the group. 
In the coherent status, the group usually agrees with each other on the meaningfulness and can communicate on the "Project" level more easily.

Episode 5. Going outside (Line 232-273)
The consensus in the group is that this effect did make sense. it was result of Xid's original idea plus inspiration from the accident plus going out and trying out. In the final effect we did see a sense of "fun" therefore Xid successful realized her part of meaningfulness. At the same time the effect is very specific, so Ming was also quite excited about it. [code: agreed meaningfulness - fun] Nevertheless, for Ren it is still a "movie with fun". So fun is an accepted meaningfulness to Ren.

In this section, we have made a comprehensive comparative analysis across the different episodes and generated a set of sub-categories in relation to the "Project-relevant" and "Collective Status" perspectives of "Co-Meaningfulness". The set of sub-categories gives us clues about the different statuses of "Co-Meaningfulness" and their influence on the socio-cognitive process.

The Co-Meaningfulness Quadrant Map

In the previous section, we generated a few sub-categories of Co-Meaningfulness concerning the Project-relevant and collective perspectives of Co-Meaningfulness. In this section, we will generate two key properties of "Co-Meaningfulness" based on the sub-categories we have: the Project-Meaningfulness Intensity (P-M Intensity) and the Meaningfulness-Meaningfulness Coherence (M-M Coherence). As these two properties are indicators of degrees, we build a quadrant map with these properties as the two axes, in which we can anchor the different stages of a CoLAB process and make a visual presentation of the Co-Meaningfulness development in CoLAB. The purpose of this step is to provide researchers with a visual tool to compare different processes in CoLAB, and to provide practitioners with an accessible and easy tool for reflecting on their practice.

Project-Meaningfulness Intensity (P-M Intensity) and Meaningfulness-Meaningfulness Coherence (M-M Coherence)

We name the first key property "Project-Meaningfulness Intensity" or "P-M Intensity", by which we indicate how intensively the "Co-Meaningfulness" and the "Project" mutually influence each other. It is generated to summarize the degree related to the sub-categories of the "Project-relevant" perspective, i.e. "implicit and explicit", "active and passive", "effective" and "interactive". The "Intensity" provides a degree of the mutual influence between the "Project" and "Meaningfulness" in the CoLAB process. To determine the P-M Intensity, one needs to examine the different sub-categories holistically. Is their Co-Meaningfulness well presented or perceived? Do learners actively participate? Does their Meaningfulness have the power to effectively drive the "Project"? Does the "Project" effectively reinforce, challenge, or renew the Co-Meaningfulness and grow? The "Intensity" property summarizes the degree to which the "Co-Meaningfulness" relates to the "Project", and it is related to all the properties we develop in the "Project-relevant" perspective. As we define "Co-Meaningfulness" as a collective status, "P-M Intensity" also refers to the general P-M Intensity level of the group. Therefore, if everyone is contributing their meaningfulness actively, the P-M Intensity of their Co-Meaningfulness is higher than in situations where only one member pushes his or her own meaningfulness. The "P-M Intensity" is a qualitative degree rather than a quantifiable one.
We generate this property from qualitatively comparing different learning situations, not from quantitative measurement. Within the scope of this thesis, we cannot provide a quantifiable method for the property, because our data and method do not support such a study; but as the property is interpreted as a degree, it has the potential to be developed into a quantifiable variable, as we will discuss in the chapter CONCLUSIONS.3. As a degree or a grade, it makes sense to compare the different episodes and discuss how the group proceeds as the project progresses: whether their Co-Meaningfulness becomes better communicated or becomes more alienated from their "Project". These changes are qualitatively comparable through our coding and memo writing, as we have done and presented in this research. The detailed comparison and grading of "Jumping Video" will be presented later in this chapter. The property of "P-M Intensity" is an indicator for all the sub-categories under the "Project-relevant" perspective, namely "implicit and explicit", "active and passive", "effective" and "interactive". That does not mean that "P-M Intensity" is a replacement for the sub-categories, but that it is an extra property generated from them for further analytical purposes. We will keep the other properties to maintain a more comprehensive set of properties.

The property of Meaningfulness-Meaningfulness Coherence (M-M Coherence), generated from the "Collective Status" feature, indicates how coherently the different meaningfulness relate to each other. It provides a degree by which we indicate the coherence of "Meaningfulness" in the process. For example, the "conflicting" state of Co-Meaningfulness indicates a low level of M-M Coherence, "supporting" means a medium level, and the "coherent" state implies a high degree of M-M Coherence. Again, the M-M Coherence is a property that we generate from qualitatively comparing and analyzing the different states of the collaborative learning. It is not yet a quantifiable property for measuring. To obtain the degree of M-M Coherence, one needs to qualitatively compare different situations through observing, coding, and analyzing, as we illustrated in chapter 7.1. The researcher is also the research tool for determining the "degree" of the properties, based on the structured process of grounded theory. As "Meaningfulness" is in many cases tacit, the researcher will need to gain a deep understanding of the context and nuances of the real-world practice.

The Co-Meaningfulness Quadrant Map

We combine the "P-M Intensity" and the "M-M Coherence", the two degrees by which we measure the Co-Meaningfulness, and generate the Co-Meaningfulness Quadrant Map. We define the y-axis as "P-M Intensity", so that it presents the intensity level of the Co-Meaningfulness. We use "vigorous" to denote a higher level of "intensity" and "inert" to denote a lower level. The upper quadrants present the more "vigorous" Co-Meaningfulness status, whereas the lower quadrants present the "inert" status. The "M-M Coherence" is defined as the x-axis. We use "aligning" to denote a higher level of coherence and "contrasting" for a lower level. The right quadrants are therefore the more "aligning" Co-Meaningfulness statuses, while the left quadrants are the "contrasting" quadrants. With the Co-Meaningfulness Quadrant Map, we can quickly generate four different quadrants (the VA, IA, VC and IC quadrants, as shown below in Fig. 7.1) in which we can locate the Co-Meaningfulness of different statuses.
- Vigorous-Aligning (VA) Co-Meaningfulness

When a group reaches vigorous and aligning Co-Meaningfulness, it means that the team has communicated well, filled their disciplinary gaps, and willingly agreed on the "Project". This state can mark an ideal CoLAB learning.

- Inert-Aligning (IA) Co-Meaningfulness

IA Co-Meaningfulness often appears when part of the team compromises or becomes inactive in realizing their meaningfulness. The team needs to move on, and some of the members choose to compromise or become inactive to produce an aligning Co-Meaningfulness. But this Co-Meaningfulness is not stable, and if this state persists to the end of the project, some of the members will not learn as much as they would in a VA Co-Meaningfulness.

- Vigorous-Contrasting (VC) Co-Meaningfulness

The VC Co-Meaningfulness status is often marked by explicit arguments about meaningfulness. In this state it is very hard for a team to progress their "Project", since the team does not agree on what is meaningful and the members are motivated to fight for their different meaningfulness. Although this state brings challenges for progressing the "Project", it can help the team to communicate their deep understanding of the project. Normally this kind of communication goes beyond the "Project" level and reaches the core of Meaningfulness.

- Inert-Contrasting (IC) Co-Meaningfulness

IC Co-Meaningfulness is the most difficult status. It means the group cannot find a way to agree and is not energetic enough to progress their "Project", because their meaningfulness is weak or they do not have enough skill to realize their meaningfulness in the "Project". This state usually marks a failure and needs to be mentored.

The Co-Meaningfulness Trajectory

In the previous section, we generated a new set of properties by emphasizing the comparison of the degrees of "P-M Intensity" and "M-M Coherence" across different episodes. These properties are featured in a quadrant map. In this section, we will apply the quadrant map and analyze the different episodes of "Jumping Video". We present the Co-Meaningfulness Trajectory of "Jumping Video" in the map.

Figure 7.2 Co-Meaningfulness Trajectories in "Jumping Video"

In the trajectory, we illustrate each episode and its process. Every episode starts with a dot followed by arrowed lines indicating where the Co-Meaningfulness heads. The positions of each start, end, and turn are carefully considered through constant comparison of the different statuses of the episodes.

Ep. 1 (Quadrants: IA). The team starts to work on a project about a selfie stick. The Co-Meaningfulness is aligning (on the feasibility), but only Ming is actively engaged. After a discussion on how to implement it, the group becomes a little more aligned on the meaningfulness and more active, but the state ends as they find out their idea cannot meet the workshop requirement.

Ep. 2 (Quadrants: IA->VC). The episode starts as Xid proposes to make a camera that automatically takes pictures when it senses a special movement of the subject. In the beginning only Xid is actively engaging. But soon Ming disagrees on the meaningfulness and they start to argue explicitly, with Xid focusing on the association and interaction while Ming focuses on the specific effect. The explicit argument moves the Co-Meaningfulness to the VC quadrant. At the end, they are able to communicate well and agree as Xid proposes a "Project" idea that combines the two. The line has a small turn towards VA.

Ep. 3 (Quadrants: VC->IC). The episode starts with Li's participation. It worsens the situation because Li constantly and passively disagrees with Xid's meaningfulness.
They hold different meaningfulness but are not able to communicate actively. The meaningfulness cannot contribute to the "Project". The episode sees a sharp decline from VC to IC.

Ep. 4 (Quadrants: IC->IA). In Episode 4, the "intensity" problem does not improve. Students participate inactively. They try to communicate about the meaningfulness, but the effect is not very good. Ming focuses on the technical details, but mentor Ke persuades him that the idea is more important; therefore Ming compromises his meaningfulness, and they agree to go out and try the prototype. In this episode, the group grows the coherence of the Co-Meaningfulness by trying to understand each other's work mutually, but at the price of compromising some meaningfulness. The trajectory moves from IC to IA.

Ep. 5 (Quadrants: IA->VA). Episode 5 sees a radical increase of both "intensity" and "coherence" when they discover a very good effect in the Project. The active participation and the constant application of meaningfulness to create Project ideas demonstrate this change. The group reaches a small climax.

Ep. 6 (Quadrants: VA->VC->VA). In the last episode, Xid and Ren have a very fierce argument about the name, and this argument reflects strongly contrasting meaningfulness as well as their intention to influence the project with their respective meaningfulness. The contrasting Co-Meaningfulness did not appear in the previous episodes but is evoked by the naming incident. The group is about to fall apart, and mentor Ke decides to intervene by explicitly pointing out and comparing the different meaningfulness. The comparison successfully persuades the group and helps them reach a more aligning and vigorous status.

Table 7.2 The Co-Meaningfulness status and quadrant by episode

The trajectory is a visual analytical tool that helps us identify key moments of the process. In the example of "Jumping Video" we can see how the group has changed from episode to episode and identify the problems and good practices. It is an analytical tool for researchers and also a reflexive tool for practitioners. The properties and transformations of the process are directly seen and easily documented. In this section, we have presented how we can locate the different states of CoLAB and generate the Co-Meaningfulness trajectory. The generation follows a coding and categorizing method, especially using our sub-categories of "implicit and explicit", "active and passive", "effective" and "interactive" to determine the P-M Intensity, and "coherent", "supporting" and "conflicting" to determine the M-M Coherence. We also note that the Co-Meaningfulness trajectory is not complete here and remains to be developed in the following chapters. After we analyze CoLAB from the socio-cultural perspective in the following chapters, we will further develop the tool and explain how to use it in chapter 11.

Conclusion of the Grounded Theory of the Socio-cognitive Perspective

The grounded theory in chapters 6 and 7 makes the following contributions to understanding CoLAB from the socio-cognitive perspective:

1. It generates a key concept, "Co-Meaningfulness", for understanding the cognitive process of CoLAB. The concept is particularly important when we try to understand the communication difficulties caused by the heterogeneous backgrounds of the team.
2. It generates two key properties of "Co-Meaningfulness", "P-M Intensity" and "M-M Coherence", for comparatively analyzing the different stages of the socio-cognitive process.
The quadrant map and the trajectory provide visualization tools that are useful for both researchers to analyze the process and for practitioners to easily reflect the process. We so far only discuss the socio-cognitive process of the CoLAB largely from an interactionist perspective. But we have not discussed CoLAB from a sociohistorical or socio-cultural perspective to relate the process to its social context. We will discuss the socio-cultural aspect of CoLAB in the following chapters in this PART II. Chapter 8. The Socio-Cultural Perspective of CoLAB and Co-Meaningfulness Introduction of the Socio-Cultural Perspective of CoLAB and Co-Meaningfulness In the previous chapters, we enter the analysis from a socio-cognitive perspective. That means we focus on the interactive conversation between the learners during their CoLAB learning. From the grounded theory on their interaction, we generate the important concept of "Co-Meaningfulness", which indicates: "Project"-relevant, collective, and potentially developing status of Meaningfulness The "Co-Meaningfulness" derived from the socio-cognitive grounded theory helps to analyze the interaction especially the conflicts in the learning. For example, the two properties "M-M Coherence" and "P-M intensity" is useful in understanding the different interaction stages, their trajectories, the key moments and their drives in the interaction. But we also recognize that only the socio-cognitive perspective is not enough. The interaction and cognitive process does not give enough on who these learners are, why they organize and participate in the new way of learning, what do they bring from the past learning and how do they engage them in the CoLAB, and what impact will the learning has upon the future and the development of the learner. All these issues reach beyond the socio-cognitive interaction per se. From a socio-cultural perspective, learning is not isolated from the learner's socio-historical experience, nor it is isolated from the sociocultural context of the learning. There are deeper reasons and motive for the trajectories and turns. Therefore, in the following chapters, we will complete the grounded theory with a special focus on the socio-cultural perspective. Particularly, we use our key concept "Co-Meaningfulness" as a starting point for our analysis. In the previous analysis, we have encountered several times the socio-cultural perspective of the "Co-Meaningfulness" awaits for further development. The following presents the socio-cultural perspective of "Co-Meaningfulness" before, during and after the learning. "Meaningfulness" The "Co-Meaningfulness" is generated from the initial coding of "Project" and "Meaningfulness" in the Chapter 6. From the coding and analysis, we understand that the individual "Meaningfulness" plays an important role when each learner communicates and learns, especially when they meet potential challenges. But what is exactly the "individual Meaningfulness"? How does individual obtain this meaningfulness in the first place and bring it to the learning? We cannot answer these questions by only analyzing the interactions. We need to know who the learners are before, and why do they feel meaningful in certain ways and in what social context. We will need to relate to the socio-cultural aspect of the "meaningfulness". "Co-Meaningfulness" In the previous chapters, we understand that the Co-Meaningfulness is potentially developing in the learning process. 
The "M-M Coherence" and "P-M Intensity" model is a set of properties to present such development from a socio-cognitive perspective. This development is not isolated from the socio-cultural context as well. We will need to understand how this development is related to the socio-cultural perspective. What is the impact of this new Co-Meaningfulness on learner's future development, on the learning community and eventually on the sociocultural context? As we can see, the concept of "Co-Meaningfulness", though generated from the socio-cognitive analysis, is however not a concept solely concerned with the interactions. What a learner believes as "meaningful" is essentially related to the learner's past experience and the social context. The "Meaningfulness" is potentially a key link between the socio-cognitive and socio-cultural understanding of the learning activity. We further develop this perspective of "Co-Meaningfulness" with grounded theory. We continue to use constructivist grounded theory as our research method. The data we choose is an ethnographic study of a workshop series over one year. We also choose two typical workshops and two projects in the series as the primary data for analyzing the CoLAB process. Data includes notes, memos, voice recordings, formal and informal interviews. The detailed conversation analysis is however not used in the socio-cultural perspective, as our focus in no longer the interaction. Although the method we use is the similar as the previous chapters, we would like to mention two distinguishable focus of our data sampling in the following chapters. These focuses are reflected in our selection of data and memo writing. We will focus on activities in relation to the learners' socio-historical experience, instead of just study the interaction per se. The interaction or cognitive operation are seen in a larger picture in the social context. That does not mean, we completely omit the interaction or socio-cognition of the learners. Instead, we take a hybrid perspective, but with a keen eye on the link between the interaction and the social context: the social identity and existence, their socio-historical learning experience, etc.. We will focus on the workshops, as well as relevant extension activities, namely the workshop preparation, workshop recruitment, built-in lectures, informal discussion, recapitulation, etc. These extended activities as well as daily life ethnographies are considered as important parts in the socio-cultural perspective, which can cast light to the learners' learning and living. With this focus, we can better understand the overall context of CoLAB, which is lacking from what we present in the previous chapters. With a thorough grounded theory analysis, we come to the following analysis results: • The Basic Pattern of "Evoking" and "Applying": Individual Meaningfulness is not purposely created to fit the "Project" at hand, but generated when learners evoke a (or multiple) socio-historical "Project" and apply its socio-cultural "Meaning" to the current Project. Evoking Socio-historical "Project" means the current project evokes the learner's past experience that has certain relevance to the project at hand. Applying Socio-cultural "Meaning" means the learner apply the associated meaning of the evoked experience to the current Project. The basic pattern is ubiquitous in our initial coding. It explains the basic process in which the learner engage their past learning and the attempt to apply their meaning in the current "Project". 
• The Underlying Pattern of "Prioritizing": Choosing what "Project" to evoke and what "Meaning" (and how) to apply is subject to an underlying pattern of "Prioritizing". a) Multiple potential P/M: the learner has a large pool of past experience and associated social meaning, much of which might have certain relevance to the project. This pool weaves a P-M network with complex inter-connection and extensional potential. b) Prioritizing: Not all "Project" and "Meaning" in the network will be evoked and applied. One might omit some projects and compromise some meaning over others in the network. There is an underlying priority according to which the learner selects to "evoke and apply". • Co-Meaningfulness is then not just a process concerned with "Project" and "Meaning" on the table, but a clashing of different leaners' "evoking" "applying" and "prioritizing" process. With focused coding, we are able to conclude a three-level framework of "Co-Meaningfulness" from both the socio-cognitive and socio-cultural perspective(Table V.1). The framework is both a framework of coding for researchers and a practical tool for practitioners with which they can diagnose their CoLAB learning. Co-lab Bio-design Workshop Series From chapter 8 on, we will use ethnographic data from a long term observation (19 months) of a series of interdisciplinary workshop : Co-lab biodesign workshops, or Co-lab. The workshop series were interdisciplinary workshops with different topics of science, and mostly biology and bioengineering, combined with different methods of design. The average Co-lab lasts between 2 and 4 days and involve participants mostly between 20 and 30 years old from in life sciences, art, design, business, and engineering. The workshop is a place where artists, designers, and scientists meet to initiate collaboration, and work together to deliver a project with a tangible output. The purpose of the workshop was to encourage real interdisciplinary collaboration, as the founders put in their manifesto: "We aggregate designers to learn science. We encourage scientists to value and learn artistic approach and design thinking." (from Co-lab internal documents) The community organizing it is physically dispersed around the world, and is mainly based in an NGO that we have been participant observing, together with other partners that change depending on the location and topic of each workshop. The organizer team summarize their principles of the workshop as: "1. Horizontality and diversity: Co-labs are co-organized by participants and mentors together. We encourage diversity of background, gender, and age among the participants to achieve a rich environment. 2. Interdisciplinarity: Co-lab use methodologies, lectures and knowledge that belong to several disciplines. The mentors and organizers make mixed groups of scientists and designers to prepare each of the science and design activities. 3. Openness and documentation: All the works are shared in a CC BY 4.0 license. We know that open source requires more effort than just a good license, so we prepare booklets describing the lectures, workshops, and projects created, and share everything in an open Google Drive repository and on social media. Practicality and experimentation: We prefer activities and outputs that involve the actual doing or experience of something rather than theories or ideas. Making is a media for facilitating interdisciplinarity. We leverage the diversity among participants to promote peer to peer teaching and learning. 
We are fearless in trying out innovative educational methods and developing our own content based on previous experience." (from Co-lab internal documents) I (Ke, the author of this research) have been an active participant observer in the workshop. In most of the workshops, I have participated as a core coorganizer who have contributed designing and organizing part of the workshop. At the same time, I have been documenting the process during workshop preparation, implementation, recapitulation, and informal discussion, etc. I have been living with the core co-organizers, communicating on a daily basis, contributing my knowledge and skill in the workshop series and at the same time keeping observation as an ethnographer. The workshop idea came out in an informal meeting after the IGEM 2016 student competition between Lina, a designer based in London, and Juanmat, a bioengineer based in Paris. Both have been involved in interdisciplinary projects and thought they lacked a space where life sciences and design could meet in a fair and tangible way. The first workshop was quickly organized in two sessions in the London Biohacker Space and the Faculty of Medicine of Paris Descartes University about the topic of synthetic biology. Soon, through the student-lead NGO Open Science School based in the Center for Research and Interdisciplinarity of Paris, a self-organized and volunteer community started to gather, and more projects came out to bring co-lab workshops to several locations around the world, in partnerships with both academic institutions (University College London, John Innes Centre in Norwich, University of Cambridge, Design School of Tsinghua University in Beijing, Shenzhen Tsinghua Graduate School, Center for Research and Interdisciplinarity of Paris, Institut Pierre Gilles de Gennes, Ecole Polytechnique Federale de Lausanne) and noninstitutional actors (Makerversity London, London Biohackerspace, Cambridge makerspace, Hackuarium Biohackerspace Lausanne, Institute of Making, Volumes coworking space in Paris). During the 19 months,10 editions of co-lab have been organized in different topics depending on the location, partners, funding, and interests of the organizers. The workshop usually takes 2 to 3 days on a weekend. A typical workshop consists of 4 parts: Introduction -> Biology -> Design -> Making An Interdisciplinary Project The Introduction part gave context of the general purpose of Co-lab workshop. It was done through a lecture of introducing our ideas and beliefs in interdisciplinarity, and completed by a discussion on general concepts. For example, in the first Co-lab, the topic was about "what exactly everyone perceive interdisciplinarity as it is becoming a buzz word". The introduction was followed by some Biology section, e.g. a lecture on cell biology and transformation. The purpose of this section is to give designers some context of the biology knowledge that can be used in the later project making. Then, Design sections is held to familiarize the biologists with designerly ways of working. This part was practice based. For example, everyone was given instructions to draw a story based on previous discussions. This narrative approach was taken from one of the organizer's design class. The last and most important part was the Making An Interdisciplinary Project, where participants were asked to form an interdisciplinary group and make their own project during the last two days, with whatever they learned in the first day. 
Table 8.3 Typical schedule for a Co-lab workshop The ethnographic fieldnotes and code of two co-lab projects the DNAish food and the Chrome-air attack (see Appendix B and C) will be constantly referred in the later analysis, therefore it is recommended to read the two examples before reading the analysis. "Our workshop is a place where artists, designers, and scientists meet to initiate collaboration. We aggregate designers to learn biology. We encourage scientists to value and learn artistic approach. We bring artists, designers, and scientists together to explore the possibilities of biological design. The goal of the project is to foster the creation of truly interdisciplinary projects around synthetic biology. Interdisciplinarity is a tool to solve complex problems that are beyond the reach of any discipline alone. To be able to do this, many soft-skills need to be developed. University and school often forgets about some of them. However, we believe that they are central to face the challenges of this emerging new world: conceptualization, inter-cultural communication, project-based learning, adaptation, and willingness to learn. conflict is the source of creativity. We also believe that 1 + 1 is not equals 2, but sums much more. Being able to exchange knowledge is the most valuable tool that a community can have. Being able to use the skills that we have learned from our field in another discipline makes us valuable." Co-Lab in total had more than 300 participants in all of the workshops, among which there are 15 mentors and participants actively involved in organizing and iterating the workshops, forming the core team. Co-Lab has received great support from early-career scholars as advisors, who help to host several workshops, as well as others from international institutions. Co-Lab relies on constantly evolving itself according to its topic and methods each time. When Co-lab evolves, it invites a new group of participants, normally experts in the new topic, to the core organizing team, or the participating organizers. The new group brings in not only new knowledge, new ways of thinking, that helps connecting the co-lab spirit to a new group of audiences. Participants in co-lab communities are often young scientists, designers, engineers, artists, businesspeople, who share openness in interdisciplinarity and collaboration, or life-long learners who eagers to explore in a new field. Access to the project are open and well-documented. Co-lab especially encourages participants to adapt their co-lab projects to other uses. In the following chapters, we largely use the data collected from the long term ethnographic observation. In our long term observation, we receive and follow the research ethics guidance from the Center for Research and Interdisciplinarity for conducting this research. Before any data collection, we orally inquire for consent of our participants and inform their use of data, and received their consent before we conduct the data collection. For analyzing the process, we purposively sampled two projects in detail. Our ethnography (in APPENDIX B.C) is based on our fieldnotes, voice recording and interviews, all collected under the research ethics guidance from the Center for Research and Interdisciplinarity(All names are pseudonyms). Chapter 9. "Evoking" and "Applying": the Basic Pattern In this chapter, we will use grounded theory to analyze CoLAB from the sociohistoric and socio-cultural perspective. 
We will see what role past experiences play in the learning, how they are brought in and how they became a part of the collective project space, and how they construct the learner's meaningfulness and the group's Co-Meaningfulness. With these questions in mind, we conduct initial and focused coding and summarize two typical patterns of process that underpins the "Meaningfulness" concept, which are Evoking Socio-historical Project and Applying Socio-cultural Meaning. Evoking Socio-historical Project means the current project evokes the learner's past experience that has certain relevance to the project at hand. Applying Socio-cultural "Meaning" means the learner apply the associated meaning of the evoked experience to the current Project. The Applied Meaning then becomes (at least part of) his or her recognition why the project at hand is meaningful. The meaningfulness is not created, but associated with the socio-historical "Project" and socio-cultural "Meaning" through the basic pattern of "Evoking" and "Applying". Fig. 9.1 Basic pattern of "meaningfulness": Evoking and applying Upon abstracting the two key codes from the initial coding, we are able to summarize the typical pattern of socio-historical past experience, and its relation to the "Meaningfulness" that the meaningfulness is not created specifically for the project at hand, but generated through a process we call "evoking and applying". Evoking means that the learner evoke a sociohistorical experience that has certain relevance to the current Project while applying means Evoking Socio-historical Project In analyzing the socio-historical experience and its relation to the current learning, we use grounded theory to study how learners engage their past experience. The initial codes such as "Recalling past experience", "Recalling relevant past knowledge" (Appendix B1 Chris's background) "Recalling project example" (Appendix B3 What's interdisciplinarity) "Referring to real lab experience" (Appendix B2 Does Science Communication Need Emotion), etc. have evidently demonstrate that past experience and projects are commonly engaged in the collaborative learning. We extract the focused code "Evoking Socio-historical Project" to summarize the process where the current project evokes the learner's past project or experience with certain relevance to the project at hand. The gerund "Evoking" indicates the current discussion is seen as a cue for individual learners to evoke a relevant "Project" they made, participated in, observed, saw, or heard. This evoked Project was part of their socio-historical experience, obtained in formal or informal learning. It can be a previous student project, an exemplary project in competition, an example in class, or even a social event, a YouTube video, a movie, a piece of news, etc. Calling it a "Project" is sometimes not exactly accurate. Sometimes learners will evoke the "Project" as a whole in the collaborative learning. But in other cases, learners may just reveal and engage a specific component of a project in the collaborative learning. Therefore it is present as a partial project, e.g. a scenario, a specific effect, a function, etc (e.g. Fig. 6.1). However, in order to keep the simplicity and consistency of the focused codes, we will hereafter use only "Evoking Socio-historical Project" (or the short version "Evoking") to refer to the above two cases. 
The following (Table 9.1) is one example taken from the second Co-lab, when Juanmat and Matt were discussing "science communication" in the workshop and multiple instances of "Evoking" took place.

Just before Lina starts the workshop, she shares with the group a movie she found very inspiring. It is a movie called My American Uncle (Mon oncle d'Amérique), written by the French scientist Henri Laborit. To Lina, the movie constantly compares mice experiments to social relationships, which explains the science, especially neuroscience, in a way that relates it to society. "The meaning behind the movie is 'big' - concerning the science of emotion and how it affect our inner organs, which compares to the social stories which gave pains to people through hierarchy relationships." The movie is a perfect example which makes science and society relatable to each other, presented in a way that is easily understandable and with emotions. The movie raises the interest of Matt: "are you saying when science communicate to the public, they are lacking some sort of emotional thing and just talk about facts?" Juanmat makes a small change to Matt's statement. He believes there is a lack of "perspective". As a scientist, he understands that science is complex, but the way science is presented to the public often appears to be very simple and even boring. "We say that it is the DNA is like this, and that it makes the protein.. and forms Ebola, we say it in a way as if it is that simple, but it is not." While Juanmat implies that scientific facts should not be presented in a brief and "dry" way, Matt points out his concern about overdoing it. What troubles Matt is that he often observes media articles that report a prospect of a scientific finding, instead of reporting the scientific fact. "When we find a molecular that can better recognize cancer, the media says cure for cancer discovered". Therefore he believes the truth is neither in the "dry" fact, nor in the overdecorated media-report manner. But what is at the core of the problem? Juanmat points to the problem of the research paper, which often takes a fixed structure of literature, method, experiment, results, etc., which is not how science is actually carried out. "Science at lab is very messy, always in the cloud, we make a mistake, we throw away result…" The core of the problem is: how do we extract that? How do we present it? Should the outside know about it? It is a yes for Juanmat. He believes the way we present our research should include incidents like the daily dialogue "Hey Rebecca, nice discovery!". Science should be presented as a whole, not just the results in a hollow way. "I imagine to have science paper as movies, so that people can really emphasize on what is really research in science" The "movie" statement soon catches Matt's eye, and he points out that some people are already taking action: "some nature movie about science process". But Juanmat does not think this approach is what he meant. He also knows of a journal which accepts a video instead of a paper. But that is NOT what he meant. He does not want a video to present a paper; he wants a form of publication that carries the complexity of science research. "There will be part where there is nothing, there will be part that is contradictory. It is complex and I think movie is a right form for this. It is not like 'hi, I am Juanmat, I am the researcher and I am going to explain to you about the protein that .. this kind of videos.'"

The discussion started as Lina and Juanmat presented the movie "Mon oncle d'Amérique" as the probe for the discussion. The movie successfully presented scientific knowledge through a movie, "presented in a way that is easily understandable and with emotions." Matt was a little uncertain about Lina and Juanmat's intention.
Juanmat, sensing Matt's confusion, makes his first "Evoking" of a past experience to support his initial statement that "there is a lack of perspective" in science communication.

"We say that it is the DNA is like this, and that it makes the protein.. and forms Ebola, we say it in a way as if it is that simple, but it is not." (Table 9.1)

Juanmat is referring to the tone of science communication, but he uses a specific past experience to explain what he actually means by "lacking a perspective", which is slightly different from the statement "lacking emotion". Matt also makes his own version of "Evoking":

Matt pointed out his concern of overdoing it. What troubles Matt is that he often observes media articles that report a prospect of a scientific finding, instead of reporting the scientific fact. "When we find a molecular that can better recognize cancer, the media says cure for cancer discovered". Therefore he believes the truth is neither in the "dry" fact, nor in the over-decorated media-report manner. (Table 9.1)

This "Evoking" helps Matt place his point of view on the table. He clearly demonstrates his concern that an overly decorated science communication may undermine its authenticity. Juanmat, however, does not believe they are talking about the same level of the problem, so he makes a further "Evoking". This time, what he presents is the messy daily routine of his lab life; he even evokes everyday scenarios like "hey Rebecca, nice discovery" to support and communicate his statement.

Juanmat points to the problem of the research paper, which often takes a fixed structure of literature, method, experiment, results, etc., which is not how science is actually implemented. "Science at lab is very messy, always in the cloud, we make a mistake, we throw away result…" ……He believes the way we present our research should include incidents of daily dialogue like "Hey Rebecca, nice discovery!". (Table 9.1)

We can see that the current "Project" at hand is to build a grounding for "science communication lacking emotion". As the group discusses the abstract concept and the probing movie, learners engage their personal past experiences, which are constantly referred to as evidence to support their statements. Each "Evoking" is built upon the previous context of the discussion, so that the current "Project" and its meaning evolve.

Applying Socio-cultural Meaning

Applying Socio-cultural "Meaning" means that the learner applies the meaning associated with the evoked experience to the current Project. Learners incorporate their socio-historical experience by "Evoking": they "Evoke" to explain themselves, to comprehend a Project proposition made by others, to ground a mutual understanding of a certain project or concept, and so on. While they do this, they also apply a certain socio-cultural meaning to the current project, directly or implicitly. The socio-cultural meaning is associated with the project they evoke; it is therefore not created for the new project, but was already established when they first encountered the evoked project in their past learning. The learner applies this established meaning to the current Project because he or she believes it will fit. For example, in the previous excerpt (Table 9.1), when Matt evokes his experience of media articles that over-decorate scientific results, he also applies the meaning that scientific facts need to be respected with objectivity and accuracy. This meaning was not his immediate creation; it was already formed when he first read those articles.
Matt evokes the relevant experience together with its social meaning and applies it to the project under discussion. Juanmat follows Matt's "Evoking" and "Applying" by mentioning the "dry" expression of scientific publication.

"We say that it is the DNA is like this, and that it makes the protein.. and forms Ebola, we say it in a way as if it is that simple, but it is not." (Table 9.1)

Juanmat wants to apply the social meaning of bringing science closer to the language of the general public. This meaning was also not an immediate creation. On other occasions in our observation, he mentioned many times the problems of scientific publication, which inhibit the public from getting to know science itself.

The "Applying" is ubiquitous. Without applying socio-cultural meaning, the evoked projects would remain scattered without clear interconnection. The socio-historical project is personal, but the socio-cultural context is common and overlapping. Only when the learner applies the socio-cultural meaning of the project can the other learners comprehend the function of that project and its relation to the current Project.

Categories of Evoking and Applying

In this section, we compare the different "Evoking" and "Applying" cases in our ethnographic data and derive categories of "Evoking" and "Applying". These categories help us better understand the purposes and operations of this basic pattern of "Meaningfulness".

Relevance Evoking and Applying

Every "Evoking" and "Applying" needs to establish a certain degree of relevance, but "Relevance Evoking and Applying" places establishing relevance as its most significant element. It is the most common form of "Evoking" and "Applying". Learners use it to approach the current Project, to explain their own statements, to comprehend others' proposals, and to ground the mutual understanding of the conversation. Establishing relevance is the major task of this type of "Evoking" and "Applying", and each learner understands it from his or her own perspective. The following is one example of Relevance Evoking and Applying.

The "movie" statement soon catches Matt's eye, and he points out that some people are already taking action: "some nature movie about science process". But Juanmat does not think this approach is what he meant. He also knows of a journal which accepts a video instead of a paper. But that's NOT what he meant. He does not want a video to present a paper; he wants a form of publication which carries the complexity of science research. "There will be part where there is nothing, there will be part that is contradictory. It is complex and I think movie is a right form for this. It is not like 'hi, I am Juanmat, I am the researcher and I am going to explain to you about the protein that .. this kind of videos.'"

Initial codes: Referring to news reading; Opposing the misunderstanding; Carrying the wholeness and complexity; Making example with jokes

Table 9.2 Fieldnotes taken from B5 "Does Science Communication Need Emotion?"

At the end of the discussion "Does Science Communication Need Emotion?" (Table 9.2), Matt evokes a past experience of something he read:

The "movie" statement soon catches Matt's eye, and he points out that some people are already taking action: "some nature movie about science process". (Table 9.2)

In evoking this specific experience, Matt tries to comprehend what Juanmat means by "presenting science with movies".
The "Evoking", as a part of Matt's grounding, needs to present a Project at least Matt believe to be relevant to the previous discussion -communicating science with a movie. But Matt's relevance does not necessarily present other's definition of relevance. In what follows, Juanmat again makes a "Relevance Evoking and Applying" to clarify what he means by "science movies", which is different from the Project Matt just evoked and applied. He specifically evokes a counter Project -a plain descriptive video. Juanmat's "Relevance Evoking" is on one hand to comprehend what Matt mentions as "some people is already taking actions" and at the same time to make it relevant to his own previous statement He doesn't want a video to present a paper, he wants a form of publication in which it carries the complexity of science research. "There will be part where there is nothing, there will be part that is contradictory. It is complex and I think movie is a right form for this. It is not like 'hi, I am Juanmat, I am the researcher and I am going to explain to you about the protein that .. this kind of videos.'" (Table 9.2) The following "Relevance Evoking and Applying" (Table 9.3) is made by Chris, who was trying to explain his own research field in art and music. Chris, who hadn't finished his introduction then felt necessary to complement on his background, as it is not that easy to comprehend than a simple "biology". He mentioned a coming exhibition in the next week called "living object", in which artists use installation to explain what they believe as "living object". He use this exhibition to explain the "ecology" where he works in. The group started to look at the exhibition website when Juanmat immediate found someone (an artist) who he's heard of but forgot why. Before Chris even had the time to explain, Ke found the work of the artist to be one of his impressive experience. Ke started to Needing to clarify Similar project in he "ecology" Impressive experience explain his impressive experience: when the art work was exhibited in China, Ke once saw it was broken by one of the audience, which in turn made it accidentally more respective. One of his friend wrote a review about the accident which she believe adds more meaning to the art work's origin meaning: The "death" of this object (the art work) reflect its "living" status. Adding meaning to original project Table 9.3 Fieldnotes taken from B1. Chris's background Before this "Evoking", he has tried many efforts, such as mention scenography, art and sound combination etc. But as his work is not commonly known to other discipline, many other learners has evoked multiple relevant experience, including an interdisciplinary field, a specific form of art, etc. But Chris still found they are not representative enough. Therefore he himself made a "Relevance Evoking" to present the "ecology". Chris, who hadn't finished his introduction then felt necessary to complement on his background, as it is not that easy to comprehend than a simple "biology". He mentioned a coming exhibition in the next week called "living object", in which artists use installation to explain what they believe as "living object". He use this exhibition to explain the "ecology" where he works in. (Table 9.3) "Ecology" was Chris's original wording, which means a set of similar projects to what he does. 
Familiarity Evoking and Applying

Unlike "Relevance Evoking and Applying", which places establishing relevance as its most prominent feature and purpose, "Familiarity Evoking and Applying" summarizes the type of "Evoking and Applying" that mainly evokes and presents a Project the learner is familiar with and feels particularly compelled to evoke and share. "Familiarity Evoking and Applying" is often associated with personal interests and preferences. In the following example (Table 9.4), Juli makes a "Familiarity Evoking and Applying" when Chris tries to explain "his background as a scenography with the integral of sound, light, music, and art installation."

The group was in the introduction session, where everyone takes turns to introduce their background. But it did not follow a strict one-by-one manner. As everyone has a different background, the rest of the group needed a short period of time to absorb the information while one member was introducing themselves. As they tried to comprehend, they interrupted the speaker and started to express their understanding of the issue at hand; the speaker then also responded to the questions. Chris explained his background as scenography with the integration of sound, light, music, and art installation. This was, however, not immediately comprehended by the rest of the group. Matt, a biology researcher, did not know about "scenography", and Juli, a science student, tried to relate it to a field she knows that seemed similar. The discussion then took a slight detour as Juli started to explain her knowledge of the interdisciplinary field between cognitive science and design. "you know there is a school,… a field that connect cognitive science and design… it is (related to) visual art, sound design" (Juli) Juli is a science student with a great interest in cognitive science. She is also quite interested in its application to more practical fields, e.g. design. Her mention of the interdisciplinary field started a new round of comprehension within the group. Juanmat mentioned the term "audio-visual" to summarize what he understood from Juli's description, and Chris recalled some artists using brain-wave detectors to make art, which seemed related to Juli's description. Juli approved everyone's attribution and further elaborated that it is about using cognitive science and physics to compose design, art, and music. This small detour became part of Chris's introduction with Juli's contribution. In the end, Juli concluded the detour by saying "I just know this", meaning she had just recalled this interdisciplinary field, believed it relevant to Chris's background, and thought it was worth sharing. It means: "I just know this (which I want to share with you guys)".

Initial codes: Comprehending others; Interrupting; Communicating to match understanding

The discussion then took a slight detour as Juli started to explain her knowledge of the interdisciplinary field between cognitive science and design. "you know there is a school,… a field that connect cognitive science and design… it is (related to) visual art, sound design" Juli is a science student who has a great interest in cognitive science. She was also quite interested in its application to more practical fields, e.g. design. Her notion of the interdisciplinary field started a new round of comprehension among the group. (Table 9.4)

Juli's evoking of the interdisciplinary field even created a small sub-discussion of the field she mentioned.
Although it has a certain relevance to the topic at hand, the evoking more significantly addresses Juli's own interest, especially in cognitive science. Chris did not mention cognitive science, nor does his field actually have anything to do with cognitive science. But Juli successfully inserted her interest into the conversation, which turned the "Project" into a hybrid of both Chris's background and Juli's interest in the field. Though part of the purpose was to establish relevance, the evoking eventually triggered a detour in the discussion. At the end of this sharing, Juli concluded her actual contribution:

This small detour was part of Chris's introduction with Juli's contribution. In the end, Juli concludes the small detour by saying "I just know this", which means she just recalled this interdisciplinary field that she believes is relevant to Chris's background and thought it was worth sharing. It means: "I just know this (which I want to share with you guys)". (Table 9.4)

Juli's example demonstrates that a learner who conducts a "Familiarity Evoking" is likely to have a "want to share" intention. Sometimes, Familiarity Evoking and Applying can bring in a completely different perspective, as illustrated in the following example (Table 9.5).

Juanmat was the first to present his story. He wanted to think of something new, something outside bioengineering, with which he is so familiar. "I want to get rid of all the preconception I have." He focuses on what he calls the "social impact of scientific vocabulary", and particularly the concept of DNA. He imagined a daily scenario of a future kitchen where a family is having breakfast. While everything else seems normal, one thing is strange: the nutritional fact table on the "Cheerios" (a cereal brand) marks the DNA content. Juanmat drew the "Cheerios" out, and everyone laughed when they noticed the DNA content on the label: how funny! Normally, DNA is not put on the nutritional fact table, as it does not count as nutrition. But Juanmat rationalized this as a mark of the "naturalness" of the food: the more DNA it has, the more natural the food is, since it has more organic compounds and fewer chemical ones. So DNA becomes as good a kind of nutrition as protein! But soon this speculation was challenged by Pratek, a participant with a biology background. Pratek pointed out that DNA is phosphoric acid, which "isn't cool" for people's health. This argument soon evoked a round of discussion on the chemical components of DNA and their nutritional value. The "phosphoric acid" argument was debunked by Juanmat by saying that the "phosphoric acid" is not free in the DNA, but instead confined in the chain as part of a larger chemical: "It is like saying coca cola has carbohydrates because it has CO2 (which is not correct because CO2 is just a part of the carbohydrates and not free to move away from the main part)." But Caline, a participant with both biology and design background, argued that another component of DNA, the "purine", might add to the kidneys' burden, as she once learned in a medical course. Her argument seemed more evidential to the other participants, but Juanmat argued that the burden also exists when one has too much protein. The biological discussion continued as Matt wanted to calculate the exact number of the DNA content. The biological discussion was not expected in Juanmat's original idea. His intention was about the "social impact of scientific vocabulary".
But when he made the claim that DNA could be seen as a kind of nutrition, a sign of healthier food, people started the above discussion.

Initial codes: Comparing a daily example; Referring to past knowledge; Unexpected different perspective; Raising discussion

Table 9.5 Fieldnotes taken from B6. Labelling the DNA content

In the above example, Juanmat, as required by the "narrative workshop", first proposes an abnormal narrative: the "nutritional fact table on the 'Cheerios' (cereal food brand) marks the DNA content." Juanmat wants to justify this speculation by stating that DNA is a mark of nutrition, as "the more DNA it has, the more natural the food is, since it has more organic compound and less chemical ones." But this statement is immediately challenged by Pratek, who makes a "Familiarity Evoking" of biological and chemical knowledge of DNA:

Pratek pointed out that DNA is phosphoric acid, which "isn't cool" for people's health. This argument soon evoked a round of discussion on the chemical components of DNA and their nutritional value. The "phosphoric acid" argument was debunked by Juanmat by saying that the "phosphoric acid" is not free in the DNA, but instead confined in the chain as part of a larger chemical: "It is like saying coca cola has carbohydrates because it has CO2…." But Caline, a participant with both biology and design background, argued that another component of DNA, the "purine", might add to the kidneys' burden, as she once learned in a medical course. (Table 9.5)

Pratek has a biological science background. His claim about nutritional value is not entirely correct, as we can see in the ensuing debate, but he successfully guided the discussion into a biological and chemical one, diverting it from Juanmat's original Project (the relevance is low). Pratek made this "Evoking" because he believed it was important to check the biological knowledge. It is a "Familiarity Evoking" based on Pratek's disciplinary training.

Under the category of "Familiarity Evoking and Applying", there is a special type associated with what we code as "impressive Project". It denotes a Project that is so impressive to the learner that he or she feels compelled to evoke and share it. In the previous example of Chris's background (Table 9.3), Ke made a "Familiarity Evoking - Impressive Project".

Before Chris even had the time to explain, Ke found the work of the artist to be one of his impressive experiences. Ke started to explain this impressive experience: when the art work was exhibited in China, Ke once saw it broken by one of the audience, which in turn, accidentally, made it more respected. One of his friends wrote a review about the accident, which she believed added more meaning to the art work's original meaning: the "death" of this object (the art work) reflects its "living" status. (Table 9.3)

Ke's mention of this project does not really contribute to Chris's background. He made the "Evoking" because the event was so impressive: when Ke saw the picture of the Project, he immediately shared it with the others.

Boundary Evoking and Applying

"Boundary Evoking and Applying" lies in between "Relevance Evoking and Applying" and "Familiarity Evoking and Applying". It normally evokes a Project that both connects to one's own interest or point of view and, at the same time, establishes a connection to the group. "Boundary Evoking and Applying" is particularly associated with Projects that are easily understandable to different groups of learners. The following example (Table 9.6) marks a "Boundary Evoking and Applying" of a daily scenario.
The biological discussion was not expected in Juanmat's original idea. His intention was about the "social impact of scientific vocabulary". But when he made the claim that DNA could be seen as a kind of nutrition, a sign of healthier food, people started the above discussion. When the biological discussion ended, Juanmat continued his story. This future kitchen has a green window full of algae and another window with red bacteria that is good for health. The mother was talking about engineering and fitness, while the son told mom that these topics were so "80s" and not fashionable any more; science vocabulary is the new fashion! When Juanmat finished his future scenario, he also speculated how it could be realized at the present moment by implementing "science activism" actions. He proposed that the group could go to the street and put stickers (like "containing DNA") on food. He got this inspiration from an artist group who made fake corporate advertisements that ironically reflect the companies' indifferent attitude towards the environment. The artists and activists put those fake advertisements in the light boxes of bus stations during the COP21 conference in Paris, as if they were real advertisements by the companies (this happened in Paris around the time the workshops took place, so everyone had real-life experience of it). "Ah, that's nice!" The group hailed this proposition and started to discuss the fake-advertisement activism, which many of them knew of or had even witnessed. Inspired by this, Matt proposed that designers could design the stickers to mimic real stickers on food, just as the artists mimicked the advertisements. Ke proposed designing not only "containing DNA" stickers but also "no DNA at all" stickers, which was immediately echoed by Pratek: "Yes, like DNA free" "Yeah, yeah". Creating a science activism ignited everyone's imagination, and we saw ideas after ideas coming up. Pratek proposed that it is possible to actually extract the DNA with alcohol or soap (he knows this from his biology experiments) and sell it as a kind of food additive. This idea soon inspired more ideas: a DNA concept shop, DNA products, a DNA-level certificate, etc. The rationale is that all this activism can push people to think more about the science concept, so that people can really understand the science behind it instead of following false understandings, one of which, Juanmat knew, was a report saying carrots have a lot of DNA and eating carrots will make people yellow as the DNA starts to translate inside the human body. The yellow effect is true, but that is not how DNA works.

Initial codes: Raising discussion; Daily expression; Science vocabulary; Future to current; Performing street art; Inspired from on-going street art; Reference of daily acquaintance; Receiving accomplishment; Resembling past reference; An opposite proposition; Igniting passion; Outburst of ideas; Inspired from past experience; Extending narrative; Presenting rationale

Table 9.6 Fieldnotes taken from B6. Labelling the DNA content

After Juanmat's introduction of the speculative story of DNA as nutrition (see Table 9.5 for more detail), he evoked a piece of street art to connect his story to reality:

When Juanmat finished his future scenarios, he also speculated how his speculation could be realized at the moment by implementing "science activism" actions.
He proposed that the group could go to the street and put stickers (like "containing DNA") on food. He got this inspiration from an artist group who made fake corporate advertisements that ironically reflect the companies' indifferent attitude towards the environment. The artists and activists put those fake advertisements in the light boxes of bus stations during the COP21 conference in Paris, as if they were real advertisements by the companies (this happened in Paris around the time the workshops took place, so everyone had real-life experience of it). (Table 9.6)

"Boundary Evoking and Applying" does not only concern oneself; it serves as a medium between the speaker and the others. In this example, Juanmat has just finished the speculative story, and he wants the story to be exhibited in reality. His solution is a kind of "science activism", and he uses the example of the "COP21 fake advertisements" as a bridging Project for his idea. COP21 perfectly represents Juanmat's idea of science activism, but it is also a public event, widely reported at the time of the workshop. Most of the participants therefore know the event and quickly establish their understanding. Juanmat could have evoked an older, less known piece of art activism, but that would more likely have been a Relevance Evoking and Applying. By managing to use a current public event as a bridge to the rest of the group, he made a successful Boundary Evoking and Applying.

The following example (Table 9.7) presents another "Boundary Evoking and Applying", made by Lina as she introduces the "narrative workshop".

Lina is presenting a few examples before the narrative workshop. She sets out the purpose of the workshop, which is to generate conversations and allow a design perspective in the scientific "fabrication of facts" (refer to D1, fieldnote 3). She has to make the science people understand the importance of this purpose before she can deliver the workshop, so her presentation is essential. She chose to directly showcase design projects to present the "design perspective". These examples are not random; one can see she purposely chose design projects related to life science so that the biologists feel more attached to their own work. The first project is about the utilization of a bio-degradable material, mycelium: low-tech but quite strong in terms of rigidity and fire resistance. The exploration of such materials, not necessarily high-tech but useful, is one promising direction for designers. Another similar project features the use of natto, a traditional Japanese fermented food, as a natural material to detect humidity. The exploration of materials is one thread of the design-science collaboration story. Then Lina introduces another thread, which intends to bring design narrative to science. For example, one design project speculates a technology to help homosexual parents have children with both their genes, which is inspired by real science. The designer not only speculates the technology, she also creates the life story of a couple and what the family's life is like. She creates a film with computer graphics, which moves Lina a lot. It is touching because Lina can actually see the technology being used and feel the reality of the science. Another designer makes a genetically modified flower which reverses a GM flower to its original status.
It creates a paradox of whether it is GM: on the one hand it is made with GM technology, but on the other hand it de-modifies its artificial features to return to its origins. The paradox further provokes the question of what GM is and how we should see and deal with it. In the third project, the designers collect chewing gum on the street and visualize its owners based on the DNA on the gum. The visualization is on the one hand a technology, and on the other a question about the security of, and the rights to, the information we randomly give away everywhere. For Lina, these are the stories that illustrate her purpose in this narrative workshop. These speculative designs, or design fictions, help to relate the technology to people, to life, and to the socio-cultural context where it is being or will be used. They help to imagine a future where technology is considered with a design perspective. For Lina, they either ask a question or frame the technology in a context, so that it is easier for people to comprehend it with their daily life experience. And in this workshop, she wants everyone to make these kinds of narratives so that different perspectives can clash and, hopefully, the group can find a way to learn from each other and integrate.

Initial codes: Making examples; Expressing purpose; GM discussion; Raise awareness of bioinfo; Using example projects; Relating technology to life; Future with design perspective; Framing technology to a context; Making narrative to engage different perspective

Table 9.7 Fieldnotes taken from B4. Lina's design examples

When facing a group of biologists, Lina wants to present the narrative power of design. She chose "Projects" that demonstrate the design perspective but are at the same time deeply concerned with the biologists' daily work:

These examples are not random; one can see she purposely chose design projects related to life science so that the biologists feel more attached to their own work. … For example, one design project speculates a technology to help homosexual parents have children with both their genes, which is inspired by real science. The designer not only speculates the technology, she also creates the life story of a couple and what the family's life is like. ….. Another designer makes a genetically modified flower which reverses a GM flower to its original status. It creates a paradox of whether it is GM: on the one hand it is made with GM technology, but on the other hand it de-modifies its artificial features to return to its origins. …. In the third project, the designers collect chewing gum on the street and visualize its owners based on the DNA on the gum. The visualization is on the one hand a technology, and on the other a question about the security of, and the rights to, the information we randomly give away everywhere. (Table 9.7)

To conclude:
1. "Relevance Evoking and Applying" is the most common form, which puts relevance at the center. Learners use this evoking to ground their understanding of the joint Project and of the other learners.
2. "Familiarity Evoking and Applying" centers on the speaker's own interest and perspective. Although the evoking must have some relevance to the current Project and to the others in the group, that relevance is not as important as the speaker's own familiarity with the project. It is sometimes associated with a change of topic, or with a very impressive project.
3. "Boundary Evoking and Applying" combines making relevance, explaining oneself, and connecting with others. Its purpose is hybrid, and it is very often associated with a project that is easily understandable to the other learners.
It is important to note that the three categories are not mutually exclusive; most cases work in a hybrid way. When the learner feels compelled to share, he or she might make a "Familiarity Evoking and Applying", but soon find that a further Relevance or Boundary Evoking and Applying is needed to supplement it. In reality the categories always appear in a mix.

Chapter 10. "Prioritizing": the Underlying Pattern

In this chapter, we study the underlying pattern of "Evoking" and "Applying". What "Project" do learners "Evoke" and what "Meaning" do they "Apply"? What rules do they follow when they choose what to "Evoke" and "Apply"? Is it a formula-like rule, so that the same "Project" and "Meaning" would come up every time given the learner's inherent knowledge and experience? Is "Evoking" and "Applying" a one-to-one mapping, so that given a current "Project", a learner will come up with only one pair of past "Project" and associated "Meaning"? In this chapter, we further study the underlying mechanism to address these questions. We compare the different situations in our ethnographic data and generate a focused code for the underlying mechanism: "Prioritizing", which summarizes the following phenomena:

Fig. 10.1 Illustration of Evoke, Apply, and Prioritize

1. "Evoking" and "Applying" have a manifold potential. Whatever the learner evokes and applies is essentially rooted in his or her broader socio-historical background and socio-cultural context, which consist of multiple potential P and M that could be relevant.
2. The P/M that emerges is only the most prioritized P and M. The prioritized Project and Meaning represent the project and meaning that are most significant and suitable for the current project.
3. We summarize two categories of "Prioritizing": "Inherent Prioritizing" and "Situated Prioritizing". "Inherent Prioritizing" means the selection is subject to the inherent preference that the learner built up in his or her socio-historical learning. "Situated Prioritizing", on the contrary, forms during the workshop. It is subject to incidental factors of the workshop: the discussion, the reactions of others, incidental interruptions, personal and environmental factors, etc.

Therefore, individual Meaningfulness is constructed through a dynamic process of "Prioritizing", "Evoking" and "Applying". "Prioritizing" compares the different Projects and Meanings that might be relevant and selects those one finds most suitable. Finally, "Prioritizing" does not follow a formula-like rule; it is both "inherent" and "situated".

The Multiple Potential P/M (Project/Meaning)

The Emerged Multiple P/M

In most cases in our ethnographic data, the learner evokes one "Project" and applies one "Meaning" to it. This is the simple, singular pattern of "Evoking" and "Applying". But the singular form is not the only form. In some cases, multiple "Projects" are evoked and multiple "Meanings" applied. In the following example (Table 10.1), Lina uses two similar projects to explain the meaning of the designer's perspective of "fabricating the fact".

Lina's project was inspired by a group of scientists who are working on growing human organs in pigs for transplant use. More specifically, they are finding ways to use CRISPR (a gene-editing tool) to get rid of viruses so that pig organs are safe for transplanting into humans. But as a designer, Lina's eye on the facts turns to other perspectives: e.g.
she asks who is going to manage the transplants and where they will be implemented. When she found out the government is in charge of managing transplants, she realized that the conflict between the private company that funded the research and the government might need a solution. She also wants to know how it will actually be implemented in society: people might view pigs very differently; there is the conflict between societal use and natural living, and the hospital scenario where the patient will need to choose between waiting for a natural human organ transplant or accepting a pig organ transplant; etc. She is concerned with these issues and believes her perspective can be involved in the science research, or, as she quotes Bruno Latour, in the "fabrication of facts."

The pig example raised Juanmat's interest, and he reminded the group that in some areas pigs are regarded as unclean, while in other areas they carry religious significance. Growing human organs in pigs might create political and religious issues. Matt starts to understand this. He thought about the insulin grown in pigs that is used for diabetes, which is not a big problem to scientists. For him, the science world seems to accept those issues quite fast. But he also understands what Lina meant by the different perspective on the issue. "It makes sense to ask those questions." Another example Lina gave is a synthetic vanilla made by a group of scientists. They claim it to be closer to natural and hence healthier, as the vanilla we now use is basically chemicals. But then Haagen-Dazs said they would not use it, as it is genetically modified, and consumers also did not want to buy it. The conflict is between the scientists' fact, which is a healthier and probably cheaper vanilla, and another fact, which is that GM is not acceptable to Haagen-Dazs or to the consumer. Lina says she does not think the research is a waste, but she suggests a collective perspective and better communication once the scientific research is actually implemented in society.

Initial codes: Using past example to clarify the "fabricating facts"; Designer looking at a real science project; An alternative perspective; Social reality; Sensing conflict; Implementing science in reality is an issue; Social controversies; Involving different perspective; Reminding social norms; Raising political aspect; Starting to understand; Bringing in relevant but slightly different example; Admitting designers' perspective has meaning; Giving another example; Natural vs GM

Table 10.1 Fieldnotes taken from B3. What's interdisciplinarity

The first "Project" was Lina's own project, the pig organ transplant. The scientists are working on the biological solution of safe pig organ transplants, while Lina's project is to investigate who will actually implement the technology, and where and how it will be done. She believes her perspective should be part of "fabricating the fact", which is now often the job of the scientists alone. While the two scientists, Matt and Juanmat, are starting to understand her point, Lina evokes another, similar project: a GM vanilla flavour produced by scientists, claimed to be healthier and more natural than chemical vanilla, was refused by Haagen-Dazs because a GM product is not easily accepted in the market. Lina evokes this "Project" to reinforce the meaning mentioned above. Both the pig organ project and the GM vanilla project represent her perspective that scientists need to incorporate more perspectives when "conceiving" and "fabricating" the "facts".
The only one "Project" was maybe enough in explaining her "Meaning", but two similar "Project" might be more complete and accurate. From the later progress of the workshop, we know that Lina's explanation was well accepted by Matt who had the confusion in the beginning. Lina conducted a multiple evoking again in her presentation where she used three new design examples to make her point of a similar meaning. Unlike the above mentioned science "Projects", these "Projects" are all design projects. They are the reflections by the designers to science projects. Lina evoke these projects to apply a similar meaning from the other side of the coin. Fig 10.2 Lina's design examples Besides similar Projects for the same "Meaning". There are also multiple similar meaning evoked and applied in the collaborative learning. In the second CoLAB example, Denqing presents a cluster of related meaning that he generate from his past project experience, which are "designing rules", "interaction", "connectivity", "non-linear narration" and "transformation". (table 10.2) We add one more meaning to his summary: "Rule reinventing". For him, the most meaningful interaction design seems to be a design that reinventing the rules: Therefore, the basic feature units are "interaction " "connectivity" "nonlinear narration" "transformation" that describe the entry and perspective that a designer might take as entering tools. And the "rule designing" is a higher level feature which is the emphasis on all the above features. The rule to interact, to establish rule…. Denqing seems to present a stance that Interaction designer should know that their ultimate focus is the rules of these different entry point. And the final implicit feature is that the designing should to some degree to be creative and challenging the status quo, to have the power to say that I am making a new and radical change to the existing rules, to reinvent the rules (taken from fieldnotes C1.) Fig10.3 Denqing's cluster of "Meaning" The different meaning he tries to applies to the workshop are different, but still, it is related. Denqing use this cluster of "Meaning" to approach what he believes as meaningful. He hopes the learners can comprehend these different but related meaning. The multiple meaning as a cluster does have a coherent and strong impression to the student, as the students often recall that the design of Denqing is "untraditional". Also, there are situations, one learner might evoke and apply significantly different "Project" and "Meaning" towards a single current Project. In the following example, Pratek and Juanmat both have evoked "Project" and "Meaning" is two separate directions. The first line of the "Project" and "Meaning" is about biological fact of DNA, while the other is more concerned with the science communication in a society. When Juanmat explained his narrative that DNA label can be seen as a sign of natural food, Pratek did not immediately evoke his experience of science communication, instead, he evokes his experience in biological learning and started a thread of discussion of whether more DNA is better. But it does not mean he did not have any experience about science communication that can be evoked and applied. In fact, in just a few minutes later, when the discussion shifted to science activism, Pratek was approving the idea and gave his contribution to "extract DNA and sell" as a part of the positive labelling of DNA. This double thread also appeared to Juanmat who started the "Project". 
He did not expect the debate on biological facts to happen, but once the separate thread started, he also joined the discussion and made a serious analysis:

The "phosphoric acid" argument was debunked by Juanmat by saying that the "phosphoric acid" is not free in the DNA, but instead confined in the chain as part of a larger chemical: "It is like saying coca cola has carbohydrates because it has CO2 (which is not correct because CO2 is just a part of the carbohydrates and not free to move away from the main part)." (taken from fieldnotes B6)

Juanmat is a student with a biology background, so his disciplinary knowledge allows him to react quickly to Pratek's question and to debunk it. But he is also interested in science communication and society's perception of science; that is why he started this project and guided the group back to this line of "Meaning" after the biological debate. Both "Meanings" were applied by Juanmat as well as by Pratek.

These different situations of multiple "Evoking" and "Applying" show that the basic pattern is not a singular process. The current project has potentially multiple ways to connect to the learners' broader socio-historical ground and socio-cultural context, which include:
1. Multiple similar Projects
2. Multiple similar Meanings
3. Multiple heterogeneous Projects and Meanings

There is more than one option, and the multiple possibilities, whether similar or distant from each other, are attributed to the learner's own socio-historical learning and the socio-cultural context on which their understanding builds. They are not confined to disciplinary knowledge, as we can see in the examples, but exist because the learner has experienced the "Projects" and established their meaning in life.

The Oppressed P/M

In the previous cases, multiple "Projects" and "Meanings" emerge from the collaborative learning, which shows that "Evoking" and "Applying" have a manifold potential. In this section, we focus on another significant code, "Oppressed P/M", which we generate from the initial codes. "Oppressed P/M" casts light on situations where a learner's "Evoking" and "Applying" was neglected or oppressed for various reasons: being cut off, interrupted, redirected, or forgotten. The following example showcases one such situation.

There is one small but noticeable incident that took place at this stage: Xiaoding (the designer) asked if the sensor could be something in the city, e.g. an oil container. The answer was quickly given by one of the biology students: a sensor is a receptor, a receptor reacting to a specific trigger, with all the scientific explanation that follows. He was not considering why Xiaoding asked the question in the first place. But if we look back at the question: why does Xiaoding ask about something in the city, and why does she give a specific example like an oil container? As a designer, instinct tells me that she has something to say about the example, which is not so much about the science, but about the functionality of the object itself. It might be that the oil container has some special meaning worth exploring.

Initial codes: Asking alternative function; Quick answering scientific meaning; Reason for original question; Alternative meaning

The idea was not even developed; it remained just a "baby proposition".
this "baby proposition" just take the form of one very easily neglectable sentence, and its full meaning that might exist in the designer' mind but soon disappearing in the driving meaningfulness of the discussion happening at the moment. We will never know what the implication of the girl, which might has the potential to redirect the project, to nurture new creativity. It is not a pity, these small incidence happens all the time, and it is impossible to get every implication spoken out and developed, if that is to happen, there is not time, and the project will go nowhere. But this small incidence is unnoticeable, that some ideas are easily overwhelmed by the driving meaningfulness. Xiaoding's self-explanation aligns with the fieldnote above. Her proposal of using a city object, e.g. an oil container, as a carrier of bio-sensor, was quickly neglected. And this neglection does not happen on the "Project" level, as one of the biology student did give his answer about the technical detail of a biosensor. But he neglected the fact that Xiaoding was not proposing the city object as she trying to find a technical solution, but trying to propose a scenario where human and object interact. Xiaoding actually agrees with the technical aspect of the project, but she also constantly has the intention to propose the "scenario" consideration. This intention appeared again in a later discussion. When discussing on the project, the designer Xiaoding kept on mentioning scenarios. She was more comfortable to put the object into a scenario or a story then discussion on the object of its own. For example, when the other members of the group was focusing on the specific object that can change color, they mentioned the air itself can change color. But Runda oppose to that idea saying that a room of red color air would make it scary to breathe. This scenario soon caught Xiaoding's attention. She immediately added: "So we are looking at a scenario where there will be some human interaction right?" While the others were discussing on the object, Xiaoding was thinking about the scenario, whether there will be people and whether they will interact in it. (fieldnotes taken from C6. the science fiction) The focus of scenario and human interaction came from Xiaoding's disciplinary training. In her design lesson, the projects she made emphasized a lot on the interaction scenario, e.g. how does a person interact inside a blood donation van. She tries to "evoke" and "apply", but her "Project" and "Meaning" was oppressed as the main meaningfulness during that time was the technical part of the object, her proposal from an alternative was not comprehended nor developed. Similar incidence happened multiple times in this workshop, and that's why Xiaoding felt only 20% of her work was effective. In the case of Oppressed P/M, there emerges a borderline of "Project" of "Meaning". The multiple P/M is not salient as in the previous example of DNA labelling where both "Meaning" appears on the table, but it also leaves a trace of the Oppressed P/M which could have the potential to be developed, but disappears in a flash. The Multiple Potential To conclude, the emerged multiple P/M evidently reveals that the P/M actually evoked is not the only P/M that has the potential to be evoked. They are more "Project" and "Meaning" to be evoked and applied. The oppressed P/M presented a borderline Project and Meaning between the emerge and the immersive one. 
These Oppressed Projects and Meanings also had the potential to be evoked, but were neglected or oppressed by other "Projects" and "Meanings". All this is evidence that the basic pattern of "Evoking" and "Applying" has a multiple potential: what emerges is only a fraction of what actually has the potential to be established. The potential P/M lurks in the socio-historical background and socio-cultural context, as shown in the following illustration. Though the potential is broad, the learner cannot evoke and apply every potentially connected P/M. From the examples of emerged multiple P/M we know there are certain conditions under which a P/M is chosen to emerge, while the oppressed P/M example shows that the emerging is limited and can be suppressed due to the intensive pace of collaborative work, time limits, misunderstanding, neglect, etc. As the opportunity to emerge is limited, what leads a P/M to be successfully selected for evoking and applying? What rules determine this selection? We analyze these questions in the next section.

"Prioritizing": Selecting P/M from the Multiple Potential

Which Project and Meaning from the potential pool of P/M are actually selected to evoke and apply, and why? Which P/M will be oppressed or neglected, and why? In answering these questions, we further study the ethnographic data of the Co-lab workshops and derive a focused code, "Prioritizing", that interprets the selection of P/M from the socio-historical background and socio-cultural context. Two key categories of "Prioritizing" are generated: Inherent Prioritizing and Situated Prioritizing. "Inherent Prioritizing" means the selection is subject to the inherent preference that the learner built up in his or her socio-historical learning. Inherent Prioritizing can form long before the workshop takes place and is often associated with personal interests, intrinsic motivation and attitude. The learner will select the P/M that fits his or her inherent preference and oppress those irrelevant to his or her constant values or attitude. "Situated Prioritizing", on the contrary, forms during the workshop. It is subject to incidental factors of the workshop: the discussion, the reactions of others, incidental interruptions, personal and environmental factors, etc. In Situated Prioritizing, learners are influenced by the situation and prioritize P/M that would not normally fit their inherent preference, value or attitude. This type of "Prioritizing" marks the development of the learner's inherent prioritizing, which will possibly impact his or her future learning.

Inherent Prioritizing

We generate the category "Inherent Prioritizing" from the ethnographic study of key learners in the Co-lab workshops. In comparing the ethnographic observations of different workshops as well as daily conversations, we find consistencies in personal preference for the Projects and Meanings that are constantly evoked and applied, which we code as "Inherent Prioritizing". The following ethnographies give examples of inherent prioritizing as we study the individual motivations and learners' prioritizing in the workshops.

Juanmat is one of the founders of the Co-lab workshop. Before he started the workshop series, he received his master's degree in interdisciplinary biology, a degree that encourages interdisciplinary collaboration.
Therefore, during his master's, Juanmat joined the iGEM competition, in which he worked with designers to explore the societal aspect of their project. In his spare time, he also co-founded the student association Open Science School to promote open education for all. Most co-organizers of the Co-lab workshop join as members of Open Science School. Juanmat has broad personal interests besides his biology studies, e.g. art, design and politics, and he enjoys practising art and design. In some of the workshops, he wanted to take on the designers' job of making the poster. He learns as he works with people of different backgrounds; during the Co-lab workshops, he got into the habit of using tools that designers often use, such as post-its and mind maps, which he claims to have learned from the designers. His open attitude also extends to scientific education and research. The Open Science School was first established to teach high school students about synthetic biology through open-source hardware, online courses, and hands-on practice with synthetic biology. Open education is a big part of his academic life, although it is not the focus of his lab when he continues with his Ph.D. after his master's. He joined the first three editions of GOSH, the global conference on open-source hardware. These practices have often been evoked in the workshops. For example, the following fieldnote was taken from one of the workshops:

Juanmat complements this point from a similar but slightly different perspective. He mentions using synthetic biology lessons for high schoolers as an introductory lesson to science in general. He had one such experience the previous summer, when he gathered dozens of high schoolers in a student lab and taught them about synthetic biology. He also mentions an NGO, "bio-builder", from which he got the inspiration. To Juanmat, synthetic biology brings openness through its simplicity. "It is so simple, like a black box, you have the genes and then what see what you do and get the result, sometimes dangerous, yes, but very simple." This simplicity, to Juanmat, is a good feature that allows high schoolers and outsiders to open a door to the science world. He believes it is not only the laws, facts and results that make science, but also the meta-knowledge: how scientists work, how to write science, how to talk like a scientist, etc. (fieldnotes from B2. Matt's motivation)

The high school course Juanmat gave is a prioritized Project that he had worked on in the summer before the Co-lab. This course was very often brought up by Juanmat as one of his preferred examples. He and other members of the student association kept developing an open hardware kit after the course and, three years later, even started their own company in Shenzhen, China, to produce the hardware.

The prioritizing concerns not only a preferred Project, but also the social meaning one values in one's past experience. Juanmat has been actively involved in the Open Science movement, which holds the political stance of democratizing science: science should be open not only to the science community but also to the general public. This political perspective is reflected in one of the informal discussions he had with his design friends about scientific publication, in which he argued that the publication and citation system has become a rating system for scientists and has partly lost the meaning of communicating, especially communicating openly to the general public.
He also inquired what the designers' world is like, to see if science can learn from the publication practices of the design world. This political stance also emerges from his "evoking" and "applying" when he criticizes the language of scientific publication of our time:

As a scientist, he understands science is complex, but the way science is presented to the public often appears to be simple and even boring. "We say that it is the DNA is like this, and that it makes the protein.. and forms Ebola, we say it in a way as if it is that simple, but it is not." … But what is at the core of the problem? Juanmat points to the problem of the research paper, which often takes a fixed structure of literature, method, experiment, results, etc., which is not how science is actually implemented. "Science at lab is very messy, always in the cloud, we make a mistake, we throw away result…" The core of the problem is how we extract that, how we present it, and whether the outside should know about it. It is a yes for Juanmat. He believes the way we present our research should include incidents like the daily dialogue "Hey Rebecca, nice discovery!". Science as a whole should be presented, not just the results in a hollow way. "I imagine to have science paper as movies, so that people can really emphasize on what is really research in science" (taken from fieldnote B5. Does science communication need emotion)

He also evokes NASA's strategy in science communication, which he believes has a political purpose:

Juanmat then points out another side of the issue, the political side. He believes that some of NASA's projects are not only for science but also for public communication, e.g. looking for water on Mars. He thinks NASA needs these publicity projects to get attention and funding, and on the other side the public also needs science. "We now live in democracy and the public needs to know that science is important, and also science needs the public to understand it, to think about science, so it is a political issue." (from fieldnote B3. What's interdisciplinarity)

The political perspective of democratizing science is one of the prioritized meanings Juanmat applies consistently, in daily life and in the workshop, as we have seen in the ethnographic observation.

Situated Prioritizing

The workshop format is an event with fluidity and accidental factors. Nothing in the workshop, especially a workshop where learners of different backgrounds aggregate, is rigidly planned. Prioritizing, besides being inherent, has a situated side, which allows the prioritized Project to change and evolve. Situational factors can partly be attributed to the learner's own incidental situation; e.g. in the fieldnote "Labelling the DNA content", Juanmat explained he just wanted to think of something new, to get rid of preconceptions.

Juanmat was the first to present his story. He wanted to think of something new, something outside bio-engineering, with which he is so familiar. "I want to get rid of all the preconception I have." He focuses on what he calls the "social impact of scientific vocabulary", and particularly the concept of DNA. (fieldnotes taken from B6. Labelling the DNA content)

But in most cases, situated prioritizing can be attributed to the group dynamics. In the previous example where Juanmat and Pratek had two directions of prioritizing, Pratek was later convinced that the biological facts were not the priority, and he started to actively contribute to the science activism thread instead.
This situated reaction was attributed to Juanmat's successful example of science activism, which led Pratek to re-prioritize his originally prioritized Project and Meaning, namely the biological facts about food nutrition.
Denqing's bubble is another example of situated prioritizing. Before Denqing presented the highly visual music video, the group had been focusing on the feasibility and practical use of the project. Denqing's bubble example was an unexpected situation for the group. In the later discussion, although they still did not give up the practical side, they gave the aesthetic consideration of the Project a comparatively equal role.
In general, the group accepted Denqing's idea and believed it made sense, especially in terms of artistic presentation. But the group kept their concerns about the feasibility issue, and a lot of the discussion went in this direction. Occasionally, when the group focused too much on the details and got stuck, they were able to get back to the artistic part and find their way out from that perspective. The group's interaction slightly changed from "totally scientific" to a mixed status of both "keeping scientific" and "accepting romantic". ... There were also times when the group got stuck in the details and then questioned the whole idea. But instead of turning away as they did in their early discussion, the group would pick up the artistic and romantic side to complete the narrative and reconfirm that it was meaningful. (fieldnotes taken from C4. Denqing's bubble)
The final science fiction also demonstrates that their prioritized P/M came to include the artistic consideration, which would not have emerged without Denqing's input.
This intention led to her proposal of a scenario very different from the previous discussion. She proposed a dark future where the entire world was polluted, and the only color one could see in the air was the red alert of the bubbles... The sci-fi ignited the group; everyone looked very happy with the story. "OK, that's it!" They were all applauding the story. The story was so attractive that people started to add more to it. "Bitterly romantic!" Jinrong described the sci-fi as follows: "it looks like romantic, but in fact it is an irony, the more color there is, the more bacteria there will be." Scientific knowledge was also added to the final scenario: how the bubble would be degradable, easy to burst, without pollution. The group did not give up their many scientific details, but they started to attach the science part to the sci-fi story. (fieldnotes taken from C6. The science fiction)
Evoking, Applying, Prioritizing and Meaningfulness
In this section, we present the grounded theory of the construction of "Individual Meaningfulness" from the socio-cultural perspective. "Meaningfulness" represents what the learner recognizes as meaningful in the collaborative learning. From the socio-cognitive perspective, we understand that individual meaningfulness concerns the Project-relevant meaning the learner constructs, and that it is related to the learner's past experience, e.g. disciplinary learning. Here we specifically study what constitutes "Meaningfulness" and how it is constructed from the socio-cultural perspective. We use grounded theory to summarize the key codes that emerge in the construction of "Meaningfulness", namely:
• "Evoking": the learner evokes a past experience/project that he or she believes relevant. Evoking establishes the relevance of the current Project to a past project.
• "Applying": the learner apply the evoked project to established a similar meaning to the current project. The applying establishes a familiar meaning for the current project. • "Prioritizing": the learner prioritize what to evoke and apply based on his or her inherent preference and the specific situation. The prioritizing selects the most relevant from the many possible relevance one can evoke and apply. The "evoking" and "applying" is a salient code, which commonly appears in our initial code in forms like "recalling experience", "giving example", "a youtube video" etc. It constructs the basic pattern of "Meaningfulness" construction. In CoLAB, learner will often face the challenge that the current Project is often to some degree alien, not as easy to comprehend, or that his or her own proposition is to some degree alien to others, therefore requiring more explanation. That is why evoking and applying becomes very common. The learner will need to establish, communicate and ground the meaning through the "evoking" and "applying" process. The socio-historical Project allows the learner to approach the current project from a perspective he or she knew and the socio-cultural context provide a ground when they applies a similar meaning. The individual Meaningfulness is constructed in the meaning they apply. The "prioritizing" pattern is however a relatively underlying code. With further comparative study, the "evoking" and "applying" is proved to have a multiple potential, meaning not only one project or one meaning can be evoked and applied given the specific Project at hand and the specific learner. What the learner actually chooses to evoke and apply is subject to a prioritizing process. Because the prioritizing concerns P/M that does not explicitly appear, it is not as common as we can observe for the basic pattern. We use codes such as "oppressed P/M" to extract and explain the phenomenon. Though underlying, the "prioritizing" is important because it reveals the deeper connection between the learner's socio-historical background and the meaningfulness her or she constructs in the learning. The historical background serves two roles in meaningfulness: 1. It consists a pool of potentially evocable Project and applicable meaning, 2. It is also a network that determines which project and meaning to prioritize in the "evoking" and "applying" process. The prioritizing pattern is found to be both inherent and situated. It is inherent because the network is built before the learning and many of the prioritized project and meaning remains their high priority. It is also situated because the network is flexible, it is subject to changes and situational actions of the learner, their peers as well as environmental factors. "Meaningfulness", as our most essential concept, can now be associated with its socio-cultural perspective from the three codes: 1. "Meaningfulness" presents what the learner recognize as meaningful in the "meaning" he or she applies from an evoked "Project". "Meaningfulness" presents what the learner recognize as meaningful in prioritizing the "Project" and "Meaning" from the other potential options. To conclude the socio-cultural perspective of the "Meaningfulness", we will complete the code we made in the socio-cognitive perspective about the project relevant "Meaningfulness": the P-M Intensity. It is important to integrate the different perspectives of "Meaningfulness" to make a comprehensive understanding. Chapter 11. 
Evoking, Applying and Prioritizing and the Co-Meaningfulness Trajectory
In the socio-cognitive perspective, we defined Co-Meaningfulness as the project-related, collective and developing status of "Meaningfulness". From the socio-cultural perspective, Chapters 9 and 10 have extensively analyzed the "project-related" aspect of Co-Meaningfulness: the project-relevance is manifested in the combined processes of "Evoking", "Applying" and "Prioritizing". Co-Meaningfulness is the collective state of Meaningfulness, and it develops as the project evolves. From Chapters 9 and 10, we understand that it concerns not only the current project and its possible associated meaning, but also: 1. the socio-historical Project evoked to establish relevance; 2. the socio-cultural Meaning associated with the evoked project; 3. a network of potential P/M to prioritize from.
In the socio-cultural perspective, Co-Meaningfulness therefore encompasses a more complex network of P/M and their relationships. Learners ground not only their understanding of the current Project and Meaning, but also the evoking and applying of past experiences, which ultimately extend to an underlying network of P/M and the prioritizing process. Mutual communication can happen at all levels: they may communicate about the relevance of past P/M to the current ones, or negotiate at the prioritizing level. The heterogeneity of Co-Meaningfulness is manifold: in the different "Projects" learners evoke, in the different "Meanings" they apply, in the different prioritizing they adopt, as well as in their different underlying socio-historical backgrounds and socio-cultural contexts (illustrated in Fig. 11.1). This multi-level heterogeneity provides us with a three-level framework for studying Co-Meaningfulness from a socio-cultural perspective.
In the previous PART, we concluded two essential properties of Co-Meaningfulness, i.e. "M-M Coherence" and "P-M Intensity". In this PART, we will use the three-level framework to study Co-Meaningfulness further. The questions raised at each level set up the framework for theorizing the Co-Meaningfulness quadrant and trajectories in both the socio-cognitive and socio-cultural perspectives, which we present in section 11.1. The framework (Table 16.1) also serves as the grounded frame that guides the coding for the "Co-Meaningfulness" trajectory.
Three levels of Co-Meaningfulness and its properties
In the socio-cognitive perspective, we generated two important properties for the concept of "Co-Meaningfulness": "P-M Intensity" and "M-M Coherence". In this chapter, having generated the key codes for the socio-cultural perspective, we combine both perspectives and understand Co-Meaningfulness in a hybrid way. We develop a three-level framework for understanding Co-Meaningfulness. The framework consists of eight categories covering the different perspectives of Co-Meaningfulness; the purpose of setting up these categories is to build a framework we can use to code different learning scenarios and analyze them with respect to the socio-cognitive and socio-cultural perspectives of Co-Meaningfulness.
The Evoking-Level
As we have shown, the process of evoking is common in CoLAB learning. Learners very often evoke a personal historical "Project" that they believe relevant to the current Project.
This relevance is essential in determining what the learner wants for the current "Project": to modify it, to understand it, or to explain it. The relevance is built by the learners themselves from their own experience and needs to be integrated into the group's "Co-Meaningfulness". However, in CoLAB, since different learners may have divergent socio-historical backgrounds, the introduction of a certain "Project" into the "Co-Meaningfulness" may not be as efficient as the learner supposes. Will the learner present the evoked "Project" well? Will the "Project" be understandable to others? Will the group agree on the relevance of the "Project" that one of them evokes? The first two questions concern the P-M Intensity, whereas the last one concerns the M-M Coherence.
1. Present Evoking
Does the learner present his or her evoked project? If so, how well? This is the first question to ask when analyzing the P-M Intensity. A good presentation successfully brings the evoked project into the space of joint discussion, which is the premise for further understanding and agreeing on the evoking, whereas an unsuccessful presentation will probably lead to confusion, misunderstanding and hence difficulty in constructing the Co-Meaningfulness.
2. Understand Evoking
Once the evoked project is presented, does the group understand the Project and its relevance to the current Project? In CoLAB, because of the heterogeneous backgrounds, there is no guarantee that a presented Project will be immediately understood by the group, even if it is presented accurately. An engineer might miss some key information or terminology when presented with a design project, and vice versa. Therefore, accurate understanding of the evoked project and of the relevance established by the learner who proposes it is another important factor of the P-M Intensity. In most cases, the degree of understanding is not easy to distinguish, but we can still infer it from conversation analysis, relying on analyses of grounding techniques such as noticing positive and negative turn-taking. When we lack conversation data, we can use research fieldnotes and post-interviews to code and distinguish.
3. Agree Evoking
Does the group agree on the relevance one learner builds between the evoked project and the current project? In CoLAB, the answer is not necessarily positive. Others might understand differently and reject the relevance, or build an alternative relevance. Disagreement about evoking often happens when group members debate at the functionality level, which may or may not involve an underlying difference in social meaning.
The Applying-Level
The purpose of evoking a Project is normally to apply the social meaning associated with it to the current Project. But again, that social meaning is not necessarily well presented, understood or agreed upon. In CoLAB, an evoked Project can have divergent readings in terms of its social meaning. What meaning did the learner make when they first experienced the project? Why do they apply that meaning to the current Project? Questions like these are rarely discussed systematically, since the time for making the project is limited. The meaning is therefore often communicated through grounding, and sometimes learners take it for granted that the meaning will be automatically understood and agreed upon, which is not the real situation. Co-Meaningfulness at the applying level concerns the collective state of the social meanings applied to the current project.
4. Present Applying
One way to communicate the social meaning being applied is to present it directly. Sometimes people do not realize that they need to present it directly, which leaves the evoked project and its associated meaning obscure to the group. When presented well together with the evoked project, the meaning can easily be introduced into the Co-Meaningfulness. But in many cases time is limited and the pace of discussion is intense, leaving no room for a thorough presentation of the meaning. In these cases, good presentation, unclear presentation, or even no presentation at all make a difference in terms of the P-M Intensity.
5. Understand Applying
Whether the meaning is presented or implied, the learner who proposes and communicates it wants it to be understood and applied by the group. The social meaning can be understood because the socio-cultural contexts of different learners overlap; even for disciplinary knowledge, there is a grounded understanding of its social meaning, as we live in an entangled society. But how much the other learners understand the meaning varies across situations. Again, we can use conversation analysis and ethnographic data to interpret the degree of understanding.
6. Agree Applying
Do learners agree on the meaning applied to the current Project? Many debates happen at this level. It is essentially a question of how different learners make meaning. When learners of different backgrounds gather, they are all the more likely to have this kind of disagreement.
The Prioritizing-Level
The third level of Co-Meaningfulness is the prioritizing level. To understand how learners prioritize, one needs not only to understand the conversation or interaction, but also to know the learners well enough. Who are they? What are their interests? Why do they perceive some P/M as having higher priority than others?
7. Present Prioritizing
A learner can choose to compromise their own prioritized Project and Meaning for multiple reasons, e.g. time limits, the group dynamic, being interrupted, or feeling discouraged. Whether learners present their prioritized P/M is an important mark of P-M Intensity. Presentation with strong motivation indicates a vigorous P-M Intensity, while suppressed presentation or no presentation indicates an inert state. It is, however, not easy to tell whether someone is presenting or hiding their prioritized P/M. We have to understand the learner's intrinsic motivation as well as their status in the workshop. To determine the Present Prioritizing level, socio-cognitive analysis alone is not enough; we need to combine it with the socio-cultural perspective in our ethnographic study.
8. Agree Prioritizing
The final aspect of the three-level framework of Co-Meaningfulness is whether different learners agree that a specific P/M should be prioritized. Even when learners agree that the evoked project is relevant and the associated meaning is legitimate, there can still be different stances on whether the P/M is trivial or significant.
Generating Co-Meaningfulness Trajectory with the Three-level Framework
The "Co-Meaningfulness" concept was generated through grounded theory from the socio-cognitive perspective. It serves as an analytical tool for understanding the underlying mechanism of CoLAB learning, as well as a framework for categorizing the different states of CoLAB learning.
The different states can be identified with the two properties we generated: P-M Intensity and M-M Coherence, the former indicating how intensively learners engage their Meaningfulness in the CoLAB learning and the latter indicating how coherent the collective state of Co-Meaningfulness is.
With the socio-cultural perspective, "Co-Meaningfulness" becomes more complete: it casts light on Meaningfulness with respect to learners' socio-historical backgrounds and socio-cultural contexts. Meaningfulness is regarded as an integration of the prioritized "evoked" Project and "applied" Meaning. Combining the socio-cognitive and socio-cultural perspectives, we generate a three-level framework that comprehensively categorizes the "Co-Meaningfulness" states. This final framework is a grounded theory result of the initial coding, focused coding and theorizing.
In constructivist grounded theory, the researcher's choice of the essential codes is crucial in generating the theory. In our theory, the codes of "P-M Intensity" and "M-M Coherence", as well as "Prioritizing", "Evoking" and "Applying", set up the basic categories. Besides these codes, there is another important underlying "code" we chose in order to make the categories more understandable and useful: the Co-Meaningfulness quadrant map, in the form of a visual presentation. As both researcher and practitioner, we consider communicating and applying the analytical tool as important as making it. The quadrant map can help practitioners understand CoLAB processes and their development in a sensible and visible way, which is another purpose of this research.
In this section, we apply this tool to more cases from the co-lab workshops. In addition to the example demonstrated in the socio-cognitive perspective, we illustrate how the three-level framework can be used. The way to use it is to make focused coding with respect to the three-level framework, e.g. coding for a process that marks a good presentation of evoking, which improves the P-M Intensity, or a misunderstanding of the meaning applied. After the focused coding, we can use the quadrant map to visualize the different states. The visualization is an interpretation of the researchers' comparison of the different states and has only a qualitative meaning. The following table and figure (Table 11.2 and Fig. 11.3) are one example of using the framework to generate a Co-Meaningfulness trajectory.
We propose the following steps for researchers/practitioners to use the Co-Meaningfulness trajectory to reflect on and analyze their CoLAB process:
Step 1. Initial coding, to get familiar with the data and extract initial categories with analytical power.
Step 2. Focused coding, to extract key interpretations of the learning process with respect to the three-level framework. For example, in Table 11.2, "Not so much explaining on the meaning" is one focused code extracted from the Applying-Present category.
Step 3. Review and compare the different codes and determine the different states. For example, "Not so much explaining on the meaning" and "Juli actively presents her prioritizing topic" are clustered into one episode, "Juli evokes an interdisciplinary field".
Step 4. Further compare the different states and codes, determine the qualitative degrees of P-M Intensity and M-M Coherence, and draw the Co-Meaningfulness trajectory.
Step 5. Further analyze the process based on the key turns of the trajectory.
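As a minimal illustration of Steps 4 and 5, the sketch below shows how the qualitative degrees assigned to each episode could be drawn as a trajectory on the quadrant map. It is not part of the original coding procedure: the episode labels and numeric positions are hypothetical placeholders standing in for a researcher's comparative, qualitative judgments.

```python
import matplotlib.pyplot as plt

# Hypothetical episodes with qualitative degrees mapped onto a -1..1 scale:
# (episode label, M-M Coherence, P-M Intensity). The numbers only encode the
# researcher's comparative judgments, not measured quantities.
episodes = [
    ("Ep.1", -0.2, -0.6),
    ("Ep.2",  0.1, -0.4),
    ("Ep.3", -0.6,  0.3),
    ("Ep.4", -0.5, -0.5),
    ("Ep.5",  0.6,  0.7),
]

xs = [x for _, x, _ in episodes]
ys = [y for _, _, y in episodes]

fig, ax = plt.subplots(figsize=(5, 5))
ax.axhline(0, color="grey", linewidth=0.8)  # quadrant axes
ax.axvline(0, color="grey", linewidth=0.8)
ax.plot(xs, ys, "-o", color="tab:blue")     # trajectory through the episodes
for label, x, y in episodes:
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_xlabel("M-M Coherence")
ax.set_ylabel("P-M Intensity")
ax.set_title("Co-Meaningfulness trajectory (qualitative sketch)")
plt.show()
```

Such a plot is only a visual aid for the comparative analysis; the placement of each episode remains an interpretive act by the researcher or practitioner.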
With the three-level framework, practitioners can review their practice in a structured way without losing their acute observation and deep understanding of the process, because the coding and analysis are essentially the practitioner's own interpretation rather than a ready-made coding system. At the same time, the structured framework helps practitioners focus on the construction of "Co-Meaningfulness" at different levels, in both the socio-cognitive and socio-cultural perspectives. Finally, the Co-Meaningfulness trajectory, which integrates all the comparative analysis, is easy to understand and to present as a visual tool. It highlights the different states of the CoLAB process and the key turns to which researchers/practitioners should pay attention. For more examples of Co-Meaningfulness trajectories generated with this approach, please refer to APPENDIX D.
Coding for the Co-Meaningfulness trajectory also demonstrates the developing nature of "Co-Meaningfulness" in the socio-cultural perspective. Constructing Co-Meaningfulness in this perspective means that learners need to constantly re-evoke, re-apply and re-prioritize in order to reach a high level of P-M Intensity and M-M Coherence (see re-evoking, re-applying and re-prioritizing in the focused coding of Table 11.2 and in other examples in APPENDIX D). This constant re-evoking, re-applying and re-prioritizing invites more "Projects" and "Meanings" from the learners' potential P/M networks into the construction of the Project and the Co-Meaningfulness. The final "Project" and "Co-Meaningfulness" will in turn enter the learners' socio-historical network of P/M, for them to evoke, apply and prioritize in their next CoLAB. The socio-cultural perspective of Co-Meaningfulness thus connects the CoLAB process to the learners' socio-historical learning as well as to their development through continuous CoLAB learning and cultivation. This developmental perspective of CoLAB will be discussed further in the final part of the Conclusions.
Table 12.1 Visualization of the 6 Key Steps of the Grounded Theory of "Co-Meaningfulness"
Each step of the grounded theorizing process adds to the comprehensiveness and depth of the theory, through iterative and sustained inquiry, purposive sampling, theoretical sampling, coding, categorizing and conceptualizing. The following explains the six steps illustrated above.
The Socio-cognitive Perspective of Co-Meaningfulness
Step 1: Project Components, the Work-in-progress "Project", and the iterative process. In CoLAB, the basic form of learning is collaboratively constructing a project that addresses the open wicked problem; the learning process is associated with the maturing of that project. In the initial coding (the first step of grounded theory, coding data extensively to extract its preliminary analytical meaning), "Project" emerged as a key code in the form of "Project Components", e.g. a certain function one learner wants to add to the "Project". The different "Project Components" proposed by different learners set up the initial "Project" space, from which the learners need to assemble an agreed and consistent "Project" at the end. However, the process of constructing the "Project" is not linear: when the intermediate and very often imperfect "Work-in-progress Project" causes disagreement and conflicting opinions, the learners iterate to add, modify or remove part (or, in extreme situations, all) of the "Project Components".
This non-linear iteration often marks the important moments of CoLAB learning, when learners need to resolve conflicting ideas about the "Project". In the following visualization (Table 12.2), we use three small circles to represent the three learners (each with a distinct color). The triangles below them represent their proposed project components; the work-in-progress project is represented by the components put together, but as they may cause problems, the work-in-progress Project may lead to iteration. The final project is shown in blended color, meaning the different components finally merge.
Table 12.2 Visualization of Step 1 of the Grounded Theory of "Co-Meaningfulness"
Step 2: The underlying driving force of "Meaningfulness" and the emergence of "Co-Meaningfulness". The initial coding of the iterative process presents another key code, "Meaningfulness", particularly at the moments when learners have to resolve conflicting views on the "Project Components". In our case study (PART IV), although the design student and the engineering student are discussing the same work-in-progress project (an automatic camera), their focuses are different. The design student wants to focus on the interaction between the person being photographed and the camera, while the engineering student wants to focus on the specific lighting effect the camera can trigger. The conflict gradually leads to a discussion of why and how they believe the project is meaningful, which unveils the ultimate divergence between the two: the designer thinks a conceptual association and interaction between human and machine is meaningful, while the engineer believes inventing an original effect is the essential meaning of their project.
Through purposive sampling of several other similar key moments, we find that "meaningfulness" plays a key role in driving the work-in-progress Project, especially when learners face difficulties with "Project" conflicts. With further analysis, we summarize three key aspects of how "Meaningfulness" impacts the "Project": first, Meaningfulness is not independent but associated with the "Project"; second, the collective status of "Meaningfulness" matters; third, "Meaningfulness" has the potential to develop in the learning process. We therefore define this project-relevant, collective and developing status of "Meaningfulness" as "Co-Meaningfulness".
Co-Meaningfulness emerges as the key drive of the project building and learning process, since the conflicts between "Project Components" reside not only at the project level but also at the Meaningfulness level, both in the different meaningfulness learners hold and in their different approaches to engaging their meaningfulness in the project. It is important to note that this definition of Co-Meaningfulness sets the initial direction of our grounded theory, but it is far from a comprehensive and mature category or concept at this stage. As we develop our grounded theory, the "project-relevant", "collective" and "developing" aspects are developed iteratively through comparative analysis of more data and coding. We nonetheless deem it important to define Co-Meaningfulness here. First, we need a name for this underlying driving force, and "Meaningfulness" alone is not sufficient, especially for the "collective" feature. Second, "Co-Meaningfulness" defines the key focused categories that foreshadow our grounded theory.
In constructivist grounded theory, the researcher iteratively selects key codes as categories and compares them with more data and coding to develop the theory. For this purpose, "Co-Meaningfulness" is the pivotal category that guides our further analysis. In the following visualization, the step from Project Components to the work-in-progress Project is mediated by a "Co-Meaningfulness" box, where the quadrilaterals below the triangles represent the "Meaningfulness". The underlying process of Co-Meaningfulness is represented by the inserted box; the double arrows in the box indicate the "project-relevant" and "collective" aspects of Co-Meaningfulness.
The Socio-cultural Perspective of Co-Meaningfulness
Step 4: Where does Meaningfulness come from: the basic pattern of Evoking and Applying, and the underlying pattern of Prioritizing. Step 4 of the grounded theory answers the basic questions concerning "Meaningfulness": what it is and where it comes from. With further theoretical sampling, we generate three focused codes to explain individual "Meaningfulness" in CoLAB: the basic pattern of "Evoking" and "Applying" and the underlying pattern of "Prioritizing". "Meaningfulness" is not a deliberate creation for the "Project" per se, but is deeply associated with the learner's socio-historical experience and socio-cultural context. The work-in-progress project evokes the learner's related past experience and its associated social meaning, which the learner applies to the current project. Through the processes of "Evoking" and "Applying", two very salient codes in the initial coding, the learner builds their individual meaningfulness.
Through further scrutiny and comparison of the data and codes, the underlying but significant code "Prioritizing" emerges: there is not necessarily just one past project and one associated meaning to evoke and apply, and the final meaningfulness is the result of prioritizing the most relevant P/M according to the learner's intrinsic priorities and the situational and environmental factors of the learning scenario. The focused codes of "Evoking", "Applying" and "Prioritizing" cast light on "Meaningfulness" by connecting it to the socio-historical and socio-cultural perspective of CoLAB. From Step 4 on, we switch to a socio-cultural perspective. Although "Meaningfulness" initially emerged as a focused code from the socio-cognitive perspective, its deeper meaning inevitably concerns the socio-cultural aspect, which cannot be neglected in understanding Co-Meaningfulness in CoLAB.
In the following visualization, a similarly colored (red) triangle and quadrilateral represent B's related Project and Meaning, which B evokes and applies to the current Project to obtain his or her Meaningfulness (purple). The pair of red triangle and quadrilateral are prioritized from the pool below them, where many of B's other potential triangles and quadrilaterals exist.
Table 12.5 Visualization of Step 4 of the Grounded Theory of "Co-Meaningfulness"
Step 5: The socio-cultural aspect of Co-Meaningfulness: multiple Evoking, Applying and Prioritizing. Co-Meaningfulness is the result of multiple instances of Evoking, Applying and Prioritizing. The internal tension of Co-Meaningfulness is then understood as the tension between different evoking, applying and prioritizing. Further theoretical sampling of data and coding spotlights the key moments when "Co-Meaningfulness" develops, as the learners develop their evoking and applying and finally re-prioritize.
Learning across boundaries happens when learners can understand and appreciate an alternative evoking, applying and prioritizing, and internalize this new evoking, applying and prioritizing in the project at hand. In the following visualization, the Co-Meaningfulness box is extended to multiple evoking, applying and prioritizing of different triangles (Projects) and quadrilaterals (Meanings), presenting Co-Meaningfulness in its socio-cultural aspect.
Table 12.6 Visualization of Step 5 of the Grounded Theory of "Co-Meaningfulness"
Step 6: Advancing the Co-Meaningfulness Quadrant and Trajectory with the Socio-cultural Perspective. The final step of our grounded theory couples the socio-cognitive and socio-cultural perspectives. P-M Intensity in this sense means how actively and effectively the learners engage their evoking, applying and prioritizing, while M-M Coherence means how coherent the different learners are in their evoking, applying and prioritizing. The final theorizing builds an evaluation matrix for the Co-Meaningfulness trajectory as follows:
Evoking level. P-M Intensity: 1. Does the learner present the evoked project? How well? 2. Does the group understand the evoked project and its relevance to the current Project? How well? M-M Coherence: 3. Does the group agree on the relevance of the evoked project?
Applying level. P-M Intensity: 4. Does the learner present the applied meaning? How well? 5. Does the group understand the applied meaning and its relevance to the current Project? How well? M-M Coherence: 6. Does the group agree on the relevance of the applied meaning?
Prioritizing level. P-M Intensity: 7. Can the learner evoke and apply the P/M that he or she prioritizes? M-M Coherence: 8. Does the group agree on the prioritizing?
Table 12.7 Evaluation matrix for P-M Intensity and M-M Coherence relative to Evoking, Applying and Prioritizing
In the following visualization, the double arrows still represent the P-M Intensity and M-M Coherence. The P-M Intensity arrows now connect "evoking", "applying" and "prioritizing" from the quadrilateral to the triangle, whereas the M-M Coherence arrows now compare "evoking", "applying" and "prioritizing" between the different quadrilaterals.
Table 12.8 Visualization of Step 6 of the Grounded Theory of "Co-Meaningfulness"
The whole process of the grounded theory is consistent with an internal logic that fits the motivation of this thesis, which is to understand CoLAB through a practical lens and to build pragmatic tools for practitioners.
CONCLUSIONS
1. Thesis Summary: Constructing Co-Meaningfulness in CoLAB
This thesis is a grounded theory study of the new learning model of CoLAB. The CoLAB (Collaborative Learning Across Boundaries) model emerges in the era of open wicked problems: problems that are ill-defined, that resist a single optimized solution, and that the general public, students and citizens contribute to and learn to solve. CoLAB emerges as a new learning model that gathers learners across disciplinary and experiential boundaries to work collaboratively on open wicked problems. It adopts a problem- or project-based learning approach, since open wicked problems are so complex and entangled that traditional instruction-based learning is not sufficient. It also highlights collaborative and boundary-crossing learning, since these open wicked problems, in the real world, cannot be solved by a single person or from a single perspective.
As new CoLAB initiatives such as workshops, competitions, hackathons and summer camps are emerging fast in our universities and schools, it is important to understand CoLAB and its process from a practice-based perspective, since CoLAB is itself an "open wicked problem": understanding an open wicked problem is essentially inseparable from resolving it. This practice-based understanding is the first motivation of this thesis. Also, as more and more practitioners from a variety of backgrounds join to participate in and organize CoLAB, it is essential to meet their pragmatic needs by providing them with easily understandable analytical tools. Meeting this pragmatic need is the second main motivation of our thesis.
With these motivations, we adopt the constructivist grounded theory method, which allows us to delve into the complexity of the issue and which is also oriented toward pragmatic purposes. Through the literature review, CoLAB is framed at the crossroads of collaborative learning and learning across boundaries. In particular, we inherit the socio-cognitive and socio-cultural perspectives of the constructivist collaborative learning research tradition in our CoLAB study. With the socio-cognitive perspective, we focus on the interaction and micro-scenarios of the CoLAB process through an in-depth case study. With the socio-cultural perspective, we rely on a grounded theory built from long-term ethnographic observation of a series of CoLAB workshops.
We generate the core concept of "Co-Meaningfulness" in our grounded theory study. Rather than only constructing a joint project while they learn, learners in CoLAB are also, explicitly or implicitly, constructing a Co-Meaningfulness. The process is especially salient when learners meet difficulties in communication and collaboration, which is also the situation where the code of "meaningfulness" first emerged. When learners meet such difficulties in CoLAB, they do not only communicate at the "Project" level; they also need to communicate at the "Meaningfulness" level, to negotiate which parts or components of the Project are more meaningful. The underlying "meaningfulness" is crucial, as it drives each learner's expectations as well as their interpretation of the project. However, the negotiation is not always explicit or efficient; sometimes learners might not even realize that the real problem is a divergence in understanding the "meaningfulness".
Further analysis of a typical challenging CoLAB case from the socio-cognitive perspective leads us to conclude that meaningfulness is project-related, that it matters as a collective status, and that it develops over the course of CoLAB. We use Co-Meaningfulness to denote this project-relevant, collective and developing status of the learners' meaningfulness. Two key properties are generated from the coding and categorizing: the P-M Intensity and the M-M Coherence. The P-M Intensity indicates how intensively the learners engage their meaningfulness in the project, e.g. how well they present their meaningfulness, how effectively they communicate it, and how actively they motivate themselves to engage it. The M-M Coherence indicates how coherent their meaningfulness is with each other's, e.g. whether their meaningfulness conflicts with each other, whether fulfilling one meaning sacrifices another, or whether their meaningfulness supports each other.
The two properties are qualitative degrees that can be derived from practitioners' observation and reflection. To visualize the trajectory of CoLAB, we use the two properties as two axes and build the Co-Meaningfulness quadrant, on which the trajectory of a CoLAB process can be located. We further study the socio-cultural perspective of CoLAB, starting by asking what exactly "meaningfulness" is. Through further grounded theory, we find three key codes in the CoLAB process that are especially important for understanding "meaningfulness" in CoLAB: the processes of "Evoking", "Applying" and "Prioritizing". "Evoking" and "Applying" indicate that the current project "evokes" the learner's similar past experience and associated social meaning, which the learner "applies" to the current project. "Prioritizing" means that there is not only one past project and one associated meaning to evoke and apply; the final meaningfulness results from prioritizing the most relevant Project and Meaning, with respect to the learner's intrinsic priorities and the environmental factors of the specific learning situation. "Evoking", "Applying" and "Prioritizing" connect "Meaningfulness" to the socio-historical and socio-cultural perspective of CoLAB. When learners carry out multiple evoking, applying and prioritizing, they construct the Co-Meaningfulness, and the Co-Meaningfulness develops as the learners constantly re-evoke, re-apply and adapt their prioritizing.
The whole Co-Meaningfulness grounded theory answers our initial motivation and questions, especially for understanding the key challenges and processes of CoLAB from a practice perspective. It also provides a visual tool for reflecting on and analyzing CoLAB that will help practitioners and learners practice better. In the next sections, we focus on these key contributions of "Co-Meaningfulness" and on the prospective uses of Co-Meaningfulness.
2. The Key Contributions of "Co-Meaningfulness"
In INTRODUCTION.4, we explained our primary motivation for this thesis, which is to generate a practice-based understanding of CoLAB and to meet the pragmatic needs of CoLAB learners and practitioners. In this section, we summarize the key contributions of "Co-Meaningfulness" and how they answer our initial motivation.
Co-Meaningfulness for Understanding the Key Challenges in CoLAB
The category of "Co-Meaningfulness" was generated when we closely scrutinized the challenges of CoLAB. It explains how and why CoLAB is difficult in practice and what causes the challenges in communication, collaboration and learning. Problems seemingly happening at the project level in fact reflect underlying conflicts over what is believed to be more meaningful. In CoLAB, everyone proposes different "Project Components" (e.g. a specific function of the project), which they have to assemble into a coherent project to solve/resolve their target open wicked problem. But since the learners have different backgrounds, the meaningfulness behind each project component is often divergent. When communication happens only at the project level, this divergence of meaningfulness often makes CoLAB a challenging process. Two key CoLAB challenges emerge when we focus on the "meaningfulness" level. The first is the explicit and implicit conflicts in understanding and assembling the different project components.
In the second episode of "Jumping Video", Xid, as a designer, believes the "experience of associating the user and the camera" is the most meaningful component, while Ming, as an engineer, focuses on the specific lighting effect. Their divergent meaningfulness leads to divergent understandings of their project components and eventually of the whole project. The second key challenge is inefficiency in communicating and engaging one's meaningfulness in the project building. In the third episode of "Jumping Video", both Ming and Ji try to present many different project components but are not able to communicate their meaningfulness efficiently or sufficiently. Xid can understand their propositions at the Project level, but she cannot fully understand their underlying meaningfulness. Sometimes learners also compromise their meaningfulness by not engaging it in the project building, which likewise triggers the second type of challenge.
From the comparative analysis of the different challenging learning processes, we generate two key properties of "Co-Meaningfulness": the Project-Meaningfulness Intensity (P-M Intensity) and the Meaningfulness-Meaningfulness Coherence (M-M Coherence). P-M Intensity indicates how intensively the "Co-Meaningfulness" and the "Project" mutually influence each other, especially how well learners are able to present, embed, engage and communicate their meaningfulness. M-M Coherence indicates how coherent the different individual meaningfulness are: whether they align with each other, support each other, or contrast with each other.
When we use the two properties to form the Co-Meaningfulness quadrant map (with P-M Intensity as the Y axis and M-M Coherence as the X axis), we see clearly that the challenges can be located in the different quadrants of the map. The Co-Meaningfulness trajectory helps us visually locate these problems and challenges on the map, as illustrated in the following figure, which presents the trajectory of the "Jumping Video" project: Episodes 1, 2 and 5 have comparatively low P-M Intensity, Episode 3 and the middle of Episode 6 have low M-M Coherence, and Episode 4 encounters both challenges.
Fig. 13.1. Co-Meaningfulness quadrant map
Understanding the key challenges of CoLAB has two pragmatic uses. First, it helps educators who want to practice, or have been practicing, CoLAB to understand the socio-cognitive barriers in practice, and more importantly to understand them from a deeper perspective: problems do not just happen at the Project level, but at the meaningfulness level, namely the divergence of meaningfulness and the inefficiency in engaging meaningfulness in the project. Designers might not just be arguing with engineers about the appearance of the prototype, but in fact about a specific meaningfulness that is reflected through the appearance. These kinds of problems are not so obvious, so new CoLAB learners and practitioners might fail to present them skillfully or even to recognize them. But because they influence the underpinning socio-cognitive process, these problems cause fundamental challenges in CoLAB, as we observed and analyzed in the last episode of the "Jumping Video" Project. When these challenges occur, CoLAB practitioners need to understand what they are and why they appear. Using the Co-Meaningfulness map can help us categorize and understand these problems, and hence fix them.
If it is a problem of M-M Coherence, we need to guide learners to better understand the divergent meaningfulness each one holds, and help them either to merge the meaningfulness by assembling the project or to select the meaningfulness that is relevant and discard what is irrelevant. If it is a problem of P-M Intensity, we need to guide them to present and engage the different meaningfulness, e.g. by asking why they propose this specific project component, why it is meaningful or important to them, what past project is similar to the components they propose and why those were meaningful, and to communicate the meaningfulness, e.g. by asking whether there is a good way to animate or visualize the meaningful part so that everyone else can understand it.
The second use of Co-Meaningfulness is for practitioners to review and reflect on their CoLAB process by analyzing the good and the less productive practices. By drawing the Co-Meaningfulness quadrant and trajectory, learners and practitioners can visually retrace the journey and identify the key moments of the CoLAB process. For example, in "Jumping Video" (see Fig. Con.1), Episode 5 sees rapid growth in both the P-M Intensity and the M-M Coherence. This episode marks a big change in the group's working dynamic and environment. As suggested by Xid, the group went outside and tested their idea during this period. The test prototype presented both Xid's project component idea and her meaningfulness so well that Ming and Ren could finally understand the underlying meaning of Xid's "experience of associating the user and the camera". All the learners agree that it was a key moment of their project building, and they conclude that it was crucial to "play" outside and present one's idea with a work-in-progress prototype. The Co-Meaningfulness map is useful when learners and practitioners reflect on their learning by studying the key moments and turns of the process.
We emphasize, however, that the Co-Meaningfulness framework does not intend to give universal guidelines for good CoLAB practice. During our research, we found that every CoLAB is unique (which is also a feature of wicked problems), and that different mentors, topics and environments all contribute to the uniqueness of each CoLAB and influence the dynamics of the learning process. Sometimes the same practice might lead to entirely different processes with different learners or in different learning situations. Co-Meaningfulness is a framework of merits for evaluating different practices, but it does not generate a recipe for all. In fact, trying to generate an all-in-one recipe is dangerous, as it may reduce the complexity of CoLAB practice as an open wicked problem. Co-Meaningfulness is useful for promoting a profound and holistic understanding of the CoLAB process in all its complexity, not a reduced one with parameters to measure and a simple solution to follow based on the measurement. We believe the most suitable solution is attainable only by the practitioners and learners who actually learn and practice in the specific CoLAB and understand in depth the topic and dynamics of the learning process. We encourage practitioners to reflect deeply with the Co-Meaningfulness framework and to analyze the key moments of Co-Meaningfulness trajectories, which is the most beneficial way of using this concept and framework.
Co-Meaningfulness for Understanding the Developmental Role of CoLAB
The previous section focused on the micro-process of CoLAB and explained the key contribution and usefulness of Co-Meaningfulness in the socio-cognitive perspective: the framework of Co-Meaningfulness (the concept and the associated tools) is useful for understanding the key challenges and practices of CoLAB. In this section, we explain the contribution of Co-Meaningfulness in the socio-cultural perspective, focusing on the developmental role of CoLAB and on how CoLAB practitioners can develop collaborative and boundary-crossing creativity with the framework of Co-Meaningfulness.
Co-Meaningfulness in the socio-cultural perspective first focuses on what "meaningfulness" is. From the coding and analysis, we find that meaningfulness is related to three processes: "Evoking", "Applying" and "Prioritizing". Evoking means that the current work-in-progress project "evokes" a past experience; for example, an engineer might evoke a past engineering project with relevant methods and tools. The learner then "applies" the social meaning of that past project to the current work-in-progress project. The "Evoking-Applying" pattern connects the learner's past experience and related social context to the current work and learning. But these evoked and applied Project/Meaning (P/M) are only a prioritized fraction of the learner's larger potential network of personal experience and its social meaning. The final "Meaningfulness" is manifested in the prioritized P/M emerging from this larger network of potential P/M. The underlying process of "Prioritizing" is essential for understanding "Meaningfulness", because it distinguishes "Meaningfulness" from any "Meaning" that happens to fit: it features the prioritized meaning given the learner's socio-historical experience and the current learning situation. In the meantime, the rest of the larger network of potential P/M is temporarily oppressed/neglected.
"Prioritizing" is what makes CoLAB possible, as different learners have to "re-evoke", "re-apply" and "re-prioritize" to integrate their divergent evoking, applying and prioritizing. When different learners bring in their different "Meaningfulness", they also bring in their distinct "evoking", "applying" and "prioritizing". Building Co-Meaningfulness is therefore a process in which the learners need to incorporate different past P/M as well as different ways of prioritizing. The situated learning activity will trigger the temporarily oppressed/neglected P/M; if they fit, they will be re-evoked and re-applied, which finally leads to a re-prioritizing of the oppressed/neglected P/M.
For example, in the "Chrome-Air Attack" project, we observe several processes that feature the reorganization of "evoking", "applying" and "prioritizing". The most influential reorganization happens when Denqing presents the scenario of the colorful bouncing balls and proposes attaching a bio-sensor to a colorful bubble. The very beautiful scenario triggers what the learners later describe as "the romantic feeling", which eventually evokes the science fiction idea and applies the associated meaning of "speculating a plausible future with a romantic and ironic bio-tech artifact". This final P/M was initially oppressed/neglected but, as we have seen in the final result, it had the potential to be re-evoked, re-applied and finally re-prioritized.
In the post-interview, the learners also prioritized the meaning of "romantic", which had not been a priority before. The developmental role of CoLAB lies in the reorganization of evoking, applying and prioritizing. To understand this developmental view of CoLAB, we need to ask what exactly learners learn from CoLAB. There are three levels of learning objectives in CoLAB. First, we learn through making a project oriented toward solving/resolving an open wicked problem, and that project, as one potential solution to the open wicked problem, is itself one learning objective. Second, we learn the knowledge and skills needed for making the Project, from all sorts of learning materials and from our mentors and peer learners, through making the project. Third, we learn from building Co-Meaningfulness, which includes the practice of overcoming the challenges of meaningfulness divergence and lack of meaningfulness communication, and the practice of reorganizing the "evoking", "applying" and "prioritizing".
As a result of this third level of learning, both the new project and its social meaning enter the learners' "potential network of P/M", and at the same time the process of building Co-Meaningfulness expands the learners' original ways of prioritizing. The learners expand their original network of P/M with the new Project and Co-Meaningfulness, and their original prioritizing with the new prioritizing brought by the new Co-Meaningfulness. This expansion is otherwise difficult through vertical learning, which reinforces rather than expands the original P/M network and prioritizing. The expansion of boundary-crossing P/M, together with more adaptable and comprehensive prioritizing, marks the learner's development.
The ultimate purpose of CoLAB is to develop learners to be more capable of solving/resolving open wicked problems, as CoLAB aims to promote boundary-crossing innovation and creativity to solve these problems. The developmental role of CoLAB serves this purpose. Our individual experience is limited, and so is our within-boundary experience in vertical learning. CoLAB helps us break the boundary and makes possible new connections between the open wicked problem and our own experience, by introducing new Projects and Meanings and by reorganizing our ways of prioritizing. From the socio-cultural perspective of Co-Meaningfulness, collaborative and boundary-crossing creativity is obtained by constantly developing our experience and its associated meaning, and by improving the prioritizing versatility of our P/M network.
Practicing CoLAB is not one-time work; it has to be a consistent and sustained practice. The Co-Meaningfulness framework, especially its socio-cultural perspective, helps learners and practitioners understand that each time a CoLAB is done, the Project and Co-Meaningfulness are part of the development, as they enter the learners' P/M networks and develop the versatility of their prioritizing. The next time in CoLAB, learners are equipped with a more entangled network of experience and meaning, as well as more adaptable and comprehensive ways of prioritizing, from which they build the next Co-Meaningfulness, and possibly lead the development of novice CoLAB learners through this new Co-Meaningfulness. Development within a learner and between learners can become a chain reaction when learners and practitioners understand the developmental role of Co-Meaningfulness in CoLAB.
3. Prospects of Using "Co-Meaningfulness"
Before we discuss the prospects of using "Co-Meaningfulness", we recall that "Co-Meaningfulness", by the end of the thesis, indicates two aspects: 1. "Co-Meaningfulness" is the key concept of our grounded theory and is often used to stand for the grounded theory as a whole; in this sense, "Co-Meaningfulness" is associated with its properties and tools, which collectively form a framework. 2. "Co-Meaningfulness" as a standalone concept also indicates the substantive "collective meaningfulness" generated in and emerging from the CoLAB learning. Both the framework aspect and the substantive aspect have prospective uses, discussed below.
Using the Co-Meaningfulness Grounded Theory as a Conceptual and Reflective Tool
The first prospect of using Co-Meaningfulness was proposed in the previous section: Co-Meaningfulness is a conceptual and reflective tool. We first recommend using Co-Meaningfulness to identify and understand the key problems in CoLAB, which is often difficult for novice learners and practitioners. Using "Co-Meaningfulness" in this sense does not mean presenting the complete theory to learners when they meet difficulties, which we suspect would only cause confusion. It means practitioners need to consider the conflicts and inefficiencies at the "Co-Meaningfulness" level when difficulties appear, and give guidance accordingly. Co-Meaningfulness as a reflective tool helps practitioners/researchers review the trajectory of the learning process; using it requires them to record or keep notes during the learning.
Using Co-Meaningfulness as a Chain Reaction Catalyst for CoLAB
The second prospect of using Co-Meaningfulness was also briefly mentioned in the previous section, when we explained how Co-Meaningfulness casts light on the developmental role of CoLAB: "The Co-Meaningfulness framework, especially its socio-cultural perspective, helps learners and practitioners understand that each time a CoLAB is done, the Project and Co-Meaningfulness are part of the development, as they enter the learners' P/M networks and develop the versatility of their prioritizing. The next time in CoLAB, learners are equipped with a more entangled network of experience and meaning, as well as more adaptable and comprehensive ways of prioritizing, from which they build the next Co-Meaningfulness, and possibly lead the development of novice CoLAB learners through this new Co-Meaningfulness. Development within a learner and between learners can become a chain reaction when learners and practitioners understand the developmental role of Co-Meaningfulness in CoLAB."
In the discussion here, we focus on how to use "Co-Meaningfulness" (as the substantive "collective meaningfulness" generated in a CoLAB) as a chain reaction catalyst for the next CoLAB. Experienced CoLAB practitioners such as Denqing and Juanmat often use the "Co-Meaningfulness" built in one CoLAB to catalyze the next. For example, the bouncing ball video (see Appendix C.4 Denqing's bubble) is often mentioned by Denqing when he wants to explain the meaningfulness of designing for interaction at a large scale. In the workshop series "Colab bio-design", the Co-Meaningfulness of the Project "eatable book" integrated the meaningfulness of innovation in material science, playfulness, and educational and communicational purposes, and was highlighted in multiple versions of the workshops as prompts and even special practicum sessions.
The "Co-Meaningfulness" in these Projects are naturally catalyzers because they have successfully resulted in re-evoking, re-applying and re-prioritizing in a CoLAB, which proves their complex and adaptable connections to the open wicked problems. They serves as a hub of the P/M network, not only in one individual, but also in a boundary-crossing group. Their role is similar to the "boundary object" for learners across different background, which bears the boundary-crossing potential. To use good "Co-Meaningfulness" as catalyzers, we want to remind three things: First, Co-Meaningfulness is not separated from the "Project". Since the very beginning we give the name "Co-Meaningfulness", it has remained closely related to the "Project". For using "Co-Meaningfulness" as a catalyzer, it is not much helpful to just talk about meaningfulness without presenting the project. When Denqing presents the bouncing ball example, the visual and animation is a big part in understanding the meaningfulness. The "Project" helps to better present the "Meaningfulness", as well as to present how "Meaningfulness" is associated to a real "Project", and finally the "Project" can facilitate novice learners in (re-)evoking and (re-)applying. Second, it is also not useful to just talk about the "Project" without mentioning the associated "Meaningfulness". It is often troublesome when learners and practitioners just present the "Project" hoping meaningfulness will naturally follow. In CoLAB, this is not what actually happen, as we have observed and studied in many situations in this thesis. When the CoLAB practitioners want to use "Co-Meaningfulness" as catalyzers, they should take their time and explain deeper on the "why" questions: why do I want to present the project here? Why is it important for us and the problems? The catalyzing power will be best stimulated when "Project" and "Meaningfulness" combine in the Co-Meaningfulness. Third, practitioners should understand that each CoLAB is unique, thus the best way to use "Co-Meaningfulness" as catalyzers is through evoking and adaptive prioritizing -meaning that the "Co-Meaningfulness" is not prepared as a reference, but timely evoked in response to the current learning and working project. In this way, the catalyzer "Co-Meaningfulness" carries both its original meaning and its new connection to a different topic and a different community of leaners, through the experienced practitioners' evoking, applying and adaptive prioritizing. This ultimate chain reaction also implies that the catalyzing power does not only exist in the "Co-Meaningfulness" as an "substance" but also is manifested in the learner and practitioners' adaption and using. The catalyzing and scaling of CoLAB is never isolated from the specific learner and the situation, which is ultimately because of the openness and wickedness of CoLAB as an open wicked problem. Developing the Co-Meaningfulness Grounded Theory The Grounded Theory of Co-Meaningfulness is not ended here. Its structure and content is highly related to our initial motivation and questions, due to our personal interests with personal reasons. In a constructivist perspective, this personal standing point is important in understanding and constructing social reality, which is adaptable to other's use and development as the construction is in its essence socially coordinated. 
Seen from a Co-Meaningfulness perspective, our grounded theory also constructs a Co-Meaningfulness that everyone can evoke and apply with their own experience, knowledge and meaningfulness, and that everyone can develop according to their own interests. If learners/practitioners/researchers hold a constructivist perspective as we do, they can follow the research steps we present in the thesis and develop the framework. As we understand that most learners and practitioners might not know the method we use, we deliberately present the data, method and analysis in detail (see the original coding and categorizing in the analysis and the Appendix). For example, one might be interested in CoLAB creativity specifically from a socio-cognitive perspective. He or she might use the CoLAB trajectory to locate the key turning moments they find important for constructing an understanding of socio-cognitive creativity. The researcher might then add more coding and categorizing on their topic of interest to the P-M Intensity and M-M Coherence coordinates. The framework of "Co-Meaningfulness" focuses on the process, which might inspire practitioners who work on the frontline to build their own understanding of their distinctive CoLAB processes. We keep an open attitude and welcome any further development of the grounded theory.

Extension to the Objectivist and Big-data Based Study

What happens when a practitioner is going to organize a CoLAB with 500 participants, or even more? How do they understand the different processes? How is "Co-Meaningfulness" going to be useful in this case? This is a question that CoLAB practitioners will face, especially as CoLAB is becoming more and more popular. Although our intention in this thesis is to present a zoom-in perspective, I constantly meet situations where I have to host CoLAB at a large scale. These opportunities are exciting, but they also create a great challenge for understanding and evaluating the process. How are we going to analyze so many different processes and generate useful knowledge of CoLAB in this situation? How can we integrate these understandings without losing the complexity of the issue?

One proposal is to extend the "Co-Meaningfulness" framework to objectivist and big-data based study. The "Co-Meaningfulness" grounded theory is built with "practice sensibility", which means the researcher himself is the research tool, understanding and analyzing on the basis of his considerable experience in CoLAB practice. The benefit is apparent: the research focuses on the most important questions of practitioners and understands them with "frontline" insight and a zoom-in intention. But the shortcoming is also obvious: it is hard to scale to a large number of situations across time and space. Conversely, an objectivist researcher might lose the "practice sensibility" by distancing himself/herself from the practice as a neutral observer. Hence it is helpful to develop a mixed perspective that derives objectivist observations from the Co-Meaningfulness framework. Co-Meaningfulness is a concept associated with qualitative observation and coding in the constructivist perspective, but it is also presented with properties that have the potential to be quantified. One would need to set up a protocol for observation, and use objectivist tools such as surveys and scales to ensure minimum bias in the study.
When sufficient objectivist measurements are made, the P-M Intensity and M-M Coherence can be used at a large scale to evaluate the process, and this use also inherits the practice sensibility, as it presents to some degree the important qualitative changes in the process. When the data are large and high-dimensional, we can also extend the framework into an analysis based on artificial intelligence, e.g. supervised learning to distinguish different trajectory types based on features extracted from behaviors that influence the P-M Intensity and M-M Coherence. There are, however, challenges when researchers want to analyze further with evoking, applying and prioritizing, as these processes require understanding from a socio-cultural perspective, which is inherently related to the learners' learning history and cultural context. A more ambitious longitudinal comparative study might help us understand these perspectives.

To conclude, the first two prospects of using "Co-Meaningfulness" align with our previous explanation of the "key usefulness and contributions of Co-Meaningfulness". We believe the benefits we bring to our practitioners and learners in learning, facilitating, and developing in CoLAB are the most essential prospects of our thesis. In addition, we also discuss the potential use of Co-Meaningfulness as a grounded theory for researchers/practitioners with either a constructivist perspective or the need to mine big data: this thesis tries to meet the pragmatic needs of CoLAB learners, whoever they are and whatever presumptions they hold. We know that many more people from divergent backgrounds will join the CoLAB adventure, and our thesis is a proposal not to exclude anyone just because they have a different perspective, but to focus on how to help and inspire as many CoLAB learners/practitioners/researchers as possible.

Ming: Or we don't let the movement control the camera, we can let the movement control the light. Emphasizing Light control Ke: I think we had enough talk. I like the idea. One thing I want to add here is special moments have special meaning. The interaction itself should have meaning. Her idea is the trigger itself is movement. Photography itself can connect to movement. Can be the angle, can be the shutter speed, can be settings, can be blurred at some moment. Some movements can be defined by the camera to be wide FOV, some movements defined to be like from an animals' eyes. There is a lot to play here, but not talk. You need to play with the movements. Like the can't touch example, we designed all the movement, for a whole day. There's a lot of space to play it. I think the idea is good, worth playing and testing. Approving the project idea Associating moment to meaning Mentioning more parameter Encouraging "playing" Recalling "example" Emphasizing testing 2016.12.15 in the classroom (R, Ming Xid & K) (Ke push Xid to understand XMing's idea on implementation) Xid: Look, here is XMing's idea, he said we are going to use movuino for movement detection. And then Matt for recognition. Explaining working progress and plan Ming: and then we use the web to control camera, or with Arduino. Adding technical details Ke: so do you understand it ? Asking degree of understanding "fun" as meaningful Ke: hmm. The interaction itself. Think of the "collider" that we did, we can't open the door with machine, so we pull it with our hands. It's all fake but the Ars Electronica jury liked it. We can imagine we had the technologies.
Referring to example prototyping with "faking" Ming: So, let's have someone control the camera for us, it captures the picture every time the subject made some movements, and then we present the idea. Borrowing strategies from example Xid: I think it is OK to think about practical application, if he(XM) insist. Allowing different perspective Ming: I think it is not very necessary to think about its objective, be it practical application or having fun. Because we express the idea through the same implementation. Reducing deep meaning Implementation is the same Xid: I want to say it's good that smartphone can have this function. Because sometimes when I dance I don't have anyone taking photos for me. Like one application Xid: so where is our point ? Clueless on the general idea Ke: I think first the camera itself can be free from just one angle. There is a lot of parameter to adjust, e.g. Ming: yes, like the interaction expression is related to camera movement and many more ways ? Ming: the camera is the tool to produce the time. Using the movuino to detect your motion or gesture, and use that information to control the camera may capture some special moment, such as we are doing some special moment that consists something unusual, might be fun or a kind of art. Explaining project Capturing moments Resembling art Jo: but I see in details how you do this. You are going to film or capture and then you will have a kind of picture or movie coming out of this ? my question now is what are you doing with this kind of image. Confusion about function Ming: we are just creating them and show special moments. Clarifying on the purpose Jo: the question now is how you prepare the situation before you do the picture. When you do the picture, there is something going on. Asking about the scenario Ming: actually, there's some parameter in the camera, such as focus distance, aperture, iso, we consider using movuino to get the data of the movement, and then use the data to control the camera. Using the information to change the parameter and produce some unusual picture. Clarifying Jo: Ke told me that you have difficulty with the technology, what is that ? Technical difficulty Ming: in this system, there are two key device, the movuino and the camera, we haven't had a good idea how to do send the data to the phone and change the camera's parameter. After discussion, we want to take pictures to illustrate our idea. Explaining working progress Jo: I do agree. Once again, if after this week, you want to continue, you can work on it. But now one minor technology problem might take hours. It's nice also that you have this evolution of your project from yesterday to today. I like it very much. Confirming Approving the progress Ming: we have discussed on the formal idea, there is some problem of that idea. Problem of previous idea Jo: it's just perfect to do that, sometimes you work on the project and then you just decide to change it. Good. Confirming changing Ming: Renju, I think yesterday we should buy the soft camera fixer. Do you have it. Let's go and play. Going out 2016.12.15 in the cafe, after prototyping (R, Ming Xid & K) servo. But I think the movement of the camera is most important. Ming: so we can design this structure… Ke: Firstly, do you think the movement of the camera is important? I think, the movement is important, but since you say you want to use software for the same effect, so I want to ask your opinion. Ren: because let's recall the video last night. 
When I jump, the camera also jumps, which ensures that I am always in the center of the picture. Explaining jumping effect and center of mass Li: For example if the subject is leaning, the camera should also leaning. Proposing special scenario Ke: any comments? Asking for comments Ming: I think we need to go back and rethink the scenario. In the beginning, we proposed that it was a big room where there is a automatic camera. E.g. if you move in some way, the camera would rotate and trigger a picture. Now we omit one discussion: whether it is a camera standing there or a hand held camera for outside ? Proposing rewinding Reminding initial scenario Asking for using scenario Xid: first. Answering Ren: so one person can finish all the shooting. Confirming Ming: are you sure ? because the first situation, the camera is hidden, so you won't see the camera. Asking for confirmation Pointing our controversy Xid: no. it can hide or not. When we can to hide it, it's just a setting possibility. Either way is ok, controversy not relevant Ming:whether you hide it or not, the subject just move, there is no one operating the camera, you just get the final result after the processing. Explaining the controversy Xid: i didn't get it. Not understanding Ming: we were discussing on why the subject moving would lead to the camera's moving. One opinion is that the photographer who is holding the camera can also feel the interaction or joy. Focusing on the effect . Ming: no matter you do post-processing or you move the phone when taking the photography, the video you get is the same. Maybe the video is bigger when the phone is moving, because it hasn't been chopped. So the final video, the feedback to the subject are very similar. The difference might exist, when someone is holding the camera, the interaction might influence the photographer. Same effect with different approach. Ming: So we have two modes. One is phone moves, the other is the phone doesn't move and we add postprocessing. Proposing inclusion of both modes Ren: how do we differentiate the two modes. ? Li: from the implementation perspective, the postprocessing approach is easier. It might sacrifice some effect. Comparing the two modes from easy of difficult Xid: in the beginning I think about the first mode. Ren: the association is really important, I now realise our project, the most important is the association between the subject, the photographer and the phone. But then not only these three, we also need to show to the audience. Let them know about the idea. So I believe the video and association should combine. Confirming on the final meaning of the project A2.2 [OF1-2] Interview Xid Ke: Hello, this interview is for collecting your opinions upon this workshop we just had. Xid: I think the workshop is great, because I had similar workshop before, an international mentor came and provided us with Arduino and a topic, guiding us to do interaction design. But that was just interaction design workshop, and if you compare the two, I think the this workshop had more practical implication and flexibility. So this one is more interesting. Ke: Can you remember in details the workshop you just mentioned. What did you do ? Ke: OK, the topic is music player. They gave us Arduino to play with. We were assigned with one emotion. Other teams have "surprise", "dramatic", etc, and we had "fear". So we had to design a music player associated with the emotion: fear. So our final design is a music player with the a shark mouth. 
The mouth is dark and you have to put your hand in it to trigger interaction. The further you put your hand, the louder it plays. Ke: So it is the first time you knew about interaction design (ID)? Xid: No we had interaction design classes in the second year of undergraduate. They taught theory, methods about ID, (there is a lot of them), and then we had a practise on adding interaction design to robots, drones, etc. Ke: So how do you understand interaction design after all these activities ? Xid: In my undergraduate, my understanding of ID is : it is a relation between human and machine -how I acted and it reacted to my action, this association construct the interaction. A good ID is when the interaction makes you feel comfortable. It is an interaction between me and an interface, be it an interface of a machine or a screen. Now my understanding doesn't change much, but there is one change: Now I think the interaction is more important than the appearance of the product. I held the opposite opinion before. Before, though I understood form follows function, but I really think a product should be goodlooking. Now I think the essence is the interaction, we should focus on that. Ke: But this is not so much related to the "movement", the theme of the workshop, right? Ke Xid: exactly, it is something he just wants to do, and we just follows, not so related to the workshop itself. (so have you considered this in the group?) Yes, we have. I particularly mentioned this to him. His answer was: sure it is related, because you need to control it, so you can control with one hand's movement. And then I asked "why you have to use movuino?" and he answered, because it is the requirement of the workshop(laugh). "OK…" so he is quite 神奇. Ke: So how do you relate your project to the "theme". Xid:The theme itself is not so rigid itself. It's just to combine sensors with movement. I don't really think this is a theme, but more like a limit, a limit to the way of thinking. So we can choose our topic. It is quite free. Ke: Then what was your second proposal ? Xid: So, with your help, we succeeded persuading him to change a topic, because the last we couldn't continue. So I had two proposals, first, to use people's(e.g. dancer's) movement to trigger the shooting. It could be a specific movement, not known to the subject. The sensor senses the special movement and then take the picture, since the subject is unaware of the shooting, so their action is natural. XM, however, he understood this idea directly from the engineering sidethe implementation. I couldn't remember all, but to me, it seems he understood it as only a process of sensing with sensor, then telling the machine, and then machine triggering the camera, (there is a lot lost). Because for this design we need to complete it with a lot of design scenarios(dance, stage etc), but he omits these possibilities, and the rest is …. (not valuable) But I also realise it's not enough time to finish the idea and design the stage scenarios within two days. So, as you suggested, we went out and try taking pictures for inspiration. My ideas was actually two steps, the second step, the movement can influence the setting of camera (e.g. aperture brightness, etc) However, my phone doesn't not have The technique detail those function, so I can only move the phone when we took videos, but surprisingly we found it is an interesting effect. Ke: It was a success right away? Or you tried several times? 
Xid: Because it's me who took the shooting, so I purposely tried many times with different ways and finally found that if we move the phone according to the movement of the subject, it's the best effect. Ke: What did you try ? Xid: random movement -move in a circle, he rotate himself and I rotate the phone, or adjust the brightness according to his movement : static->dark, move->bright., also the distance to the subject. We had several tries, and focused on the movement, it seemed to be the only good option. Ke: So did you agree on this design when you shoot? Xid: No, it's just me who shoot and tried, and I showed to them the The major discussion here is the implementation problem of the selfie stick. From the post-interview we know this selfie stick is largely an idea of Ming's interests. Ming seems to be very interested in the specific idea alone. The process in this episode follows a problem solving pattern: Feasibility / implementation problem -> proposing solution. The solution has multiple levels. e.g. new function: to make a frame to hide the light new meaningfulness: artistic new scenario: outside, express emotion new technical: folding the selfie stick There are almost no explicit meaningfulness communication. But there is some hint of implicit meaningfulness. 1. Most conversation is around the feasibility issue of the project. There seems to be a consensus on that. 2. Ming was very active in defending and creating meaningfulness for the project. When they face the problem of light being in the picture. His first reaction was to ignore. But when Ren insisted on the problem, he propose to make a frame for the light so that it looks artistic. This proposition does not directly solve the problem but accepting the problem and reuse the problem to add new dimension of meaningfulness. -Introducing new meaningfulness to justify a proposition to the "Project". First of all, Ming must hold some degree of the artistic, otherwise he wouldn't propose that. But we should also be aware the fundamental meaningfulness is not "artistic" , it is a supportive meaningfulness that align with his original idea of the "Project" Actively involved in the Project Meaningfulnessfeasibility raising supportive meaningfulness implicit meaningfulness Agreeing on the meaningfulnessfeasibility Meaningfulness -artistic Project -> new meaningfulness ->justify Project This meaningfulness was not responded by Ren but agreed by Xid. ("Xid: For me, I think if the light have to be in the photo, it's better to make it look like a frame") 3. The new function of colorful light was proposed here by Xid. The color element was used in the later episode as a symbol of playfulness. But Xid did have the chance to develop her idea any further, so we cannot infer from the conversation about this meaningfulness. The project idea had a very big problem of little relation to "movement" and was abandoned. But Xid even tried to prototype it himself after the workshop. The proposition Xid was a camera system(light and shutter) that can react automatically to the subject's movement. She didn't explain why she want to propose the idea, except an intention to borrow some of components from the previous idea, so that Ming wouldn't think too much. Some of the words Xid uses might imply her underlying meaningfulness, .e.g (L35 crazy). -at least she does believe the craziness is meaningful not the other way around. X was constantly proposing ideas in the first part, she is now the active one. 
The misunderstanding from L42 started as Ming did not really understand enough Xid's underlying meaningfulness. Ming was focusing on the specific effect, which Xid believe does not even matter. What Xid believes as important is the association of movements -subject can control the lighting and camera with their movement and supplemented with a playfulness feeling. These two meaningfulness get along as the playfulness feeling is kind of experience and so is the association. But that does not Meaningfulnesscraziness Active and passive Meaningfulness: good effect Meaningfulness : association align with the Ming's focus on the specific light effect, therefore whenever Xid proposed anything, Ming would interpret from his perspective deeply related to his meaningfulness of effect (e.g. visual effect). As long as Xid does not address this meaningfulness they can not communicate well. This part, in the beginning, I wanted to describe as a conversation of "meaningfulness". But then I changed my mind, as conversation doesn't say much about their will to drag the Project to their preferred direction. It is more like "tug a war" of "meaningfulness". Xid and Ming try to drag back and forth. It seems in this part, no one wins the war. But during the war, both of them tried to explain more clearly their intention and meaningfulness. At the end of the conversation, Xid started to combine. Xid propose spot light and colorful, It is in her mind part of the experience but also proposed because Ming demand specific effect. This is an example of meaningfulness communication change the collective status of meaningfulness and then change the Project The conversation started as Li joined the discussion. From the start we know, Xid and Ming agree that the interaction is interesting, but Ming also mentioned that it should develop more details for the Project, which aligns with his previous view. But Li started from a very practical perspective. His meaningfulness is very centered on the practicality side (e.g. he gave an example of capturing wild animals, a very practical application that looks like Xid's idea). This practicality meaningfulness is very different from Xid's meaningfulness of "fun (even crazy) experience". But what makes the conversation even harder is that they did not exchange of the meaningfulness level as Li did not join Xid and Ming's previous argue so Li did not know about all the "having fun" statement. Meaningfulnesspractical different meaningfulness lack of communication on the meaningfulness Li's reaction :" I don't understand the need. Why there is a camera ?" "it reminds me of some people put cameras in the wild and the animal would trigger it. But I don't know with human" "looks like it's becoming more complicated. Rather have someone take the picture for you" Li's reaction shows that he didn't know or understand the "having fun", just focus on the practical side "the need" "take the picture for you" There is no good or wrong meaningfulness, every meaningfulness might have potential. But there is meaningfulness well incorporated to the group and to the project, and meaningfulness standing alone inside someone's head and refuse to connect in an easily understandable way. This first part of this episode (L.60-100) features the later status. It is largely because Li's constant rejection and not being able to understand the other meaningfulness at the same time not being able to explain his own meaningfulness. 
He mentioned five times about complicated and nearly every time he did not give rationale to back up his statement. Therefore It is hard for the team member to capture and really incorporate this "complicated" meaningfulness behind. Xid tried several times to explain herself by giving examples, and to argue on the rationale, and Ming also tried to work out the detail and supported Xid's meaningfulness. But their attempt did not seem to evoke the echo of Li in empathizing the meaningfulness of "fun interaction". The group is getting stuck! X finally gave up her idea and asked Ming and Li to come up with their own. And they finally did find an idea all of them are excited about. The idea was started by Ming and was accepted by Xid quite easily, as the idea can easily addressed Xid's meaningfulness of "fun, unique experience". L did not comment a lot on the idea, only an implementation question. Li helped them to try the idea on their phone as prototype. Li made a mistake, he used the video mode instead of take picture. But it turned out to be a stylish video. Ke's suggestion was about the "power" of the interaction. He believed it is meaningful to make symbolic meaning to the interaction. He gave the example of the mouse trap camera. Li showed his meaningfulness of practicality again here by proposing a practical function "shooing jumping", which is not Ke's intention, but Ke did not oppose to that, he tried to connect to another project that is related. Project inherently connected to meaningfulness Meaningfulnesspowerful interaction giving example to explain meaningfulness. Episode 4. Mutual Understanding (Line. 171-231) We see here in the fourth episode some attempts to increase the mutual understanding. Ming is in charge of the technical implementation part while Xid is responsible for the design part. Ke wants Xid to learn a bit about the implementation from Ming, therefore Ke ask about the implementation questions to Xid. Xid's answer indicated that she does not understand very well on the implementation "Xid: I don't know how he can implement it with the hardware, I understood his general idea. So all of this is meant for triggering the camera" (L Ke also refer to his past experience how they faked the prototype in an important art festival competition and successfully won the time and chance to really made it in the exhibition. Ming immediately came out the idea of a prototype with someone faking the automatic association: "Ming: So, let's have someone control the camera for us, it captures the picture every time the subject made some movements, and then we present the idea" (L190) Xid and Ming then exchange a little more on the meaningfulness level. "Xid: I think it is OK to think about practical application, if he(XM) insist." "Ming: I think it is not very necessary to think about its objective, be it practical application or having fun. Because we express the idea through the same implementation" X started to agree with Ming's insist on the practical side, which at least means she can understand it is the practicality that is meaningful to Ming. Ming started to compromise and think that they can put aside the disagreement on the objective, and the meaningfulness behind, and just focus on the implementation. The Episode 5 documented the discussion right after their prototyping session outside in the campus. The whole conversation was in a fast pace, everyone was talking fast and continuously, giving response instantly. Everyone was excited and active. 
They know they have made something interesting and meaningful. What they discovered is a special effect when they shoot. They chose video mode inspired the accident in Episode 3, and they started to move the camera when the subject moves. For example, in one scenario Ming was rotating his skateboard he steps on. and in another scenario Ren was jumping while the cameras moves up and down as if it was imitating R's movement, making a funny effect. Ke got excited the second he saw the effect. Although the effect was not that "powerful" as Ke used to suggest. But it did have a funny effect, which made Ke laugh and think of the name "you jump I jump". active participation Meaningfulness -fun The consensus in the group is that this effect did make sense. it was result of Xid's original idea plus inspiration from the accident plus going out and trying out. In the final effect we did see a sense of "fun" therefore Xid successful realized her part of meaningfulness. At the same time the effect is very specific, so Ming was also quite excited about it. For Ke, the meaningfulness of powerful did not appear , but he just ignored that. So actually he did not think it is absolutely necessary to attach every meaningfulness, the fun and imagination of the effect itself evoke his other meaningfulness which is already enough to be very active and propose ideas -e.g. "servo" X present it in a very emotional way "Xid: these are not good enough. You can see me taking videos of them. This one. They are jumping and the camera is also jumping. Isn't it fun ??" (P256) while, Ren conclude the meaningfulness in the more logical way, trying to presenting it through reason, "Ren: we call it "fun with movie", referring to the TV show "the big bang theory" There are two goals -1. Let the subject control the picture, 2. we set three modes of fun movies. -three modes that we just saw. And just now, the enlargement effect that XM mentioned before is a kind of visual effect, can't be categorized into one of our goals, it's just a visual effect" (P 258) So when something can not be included in the reasoning, Ren considered as irrelevant. Nevertheless, for Ren it is still a "movie with fun". So fun is an accepted meaningfulness to Ren. The last episode is a big fight on the naming. Xid wants to call in "You jump I jump" while Ren wants to call it "jumping video" The rationale for Xid is that "jumping video" is not fun, does not present her key consideration of "association", But Ren did not seem to agree much on that meaningfulness. For her the naming should be easily understandable for users. "Xid: I think our product's focus is the association so "you jump, I jump" can highlight the association …. " (L. 289) explicit meaningfulness for X divergent meaningfulness not agreeing on meaningfulness meaningfulnessassociation "Ren: but the users wouldn't know the activity then" (L290) Ren: "Jumping video" is the visual effect (L.292) it is a clear sign that Ren did not think presenting the association is an important meaningfulness here. It is not as important as to make the name understandable. At the same time, Ren defends the name "Jumping Video" as the visual effect so that readers will immediately understand the content of the video. This straightforward attitude is again challenged by Xid. "Xid: but "jumping video" doesn't sound catchy. will be disadvantaged when selling. Secondly "jumping video" doesn't have a reason. Our point, the focus is the association, not that it can jump. 
" (L.293) "Xid: then it makes you curious, so it is like a slogan" (L.297) For Xid, not being straightforward is on the contrary an advantage. It makes people curious about the content in the video. And people would understand the content after they see the video. This disagreement even upgrade to the level of judging the character of Ren "Xid: Renju is 又红又专 (holding a orthodox mindset) , she's from Mechanical Engineering." (L301) "Xid: Therefore she is 又红又专 , I can't persuade her, making me really annoying" (L.303) There was an emotional tension in the discussion and Xid could not stand the situation of not being understood. She blame this situation to the fact that Ren is from mechanical Engineering and holds a very strict orthodox on everything. R did hold a strong preference on rationalizing the project. She supports the meaningfulness using a very rationalized speech and tried to name the project in a clear and direct way. Another disagreement appears when Ke and Ming are arguing if they can use the software to represent the movement of the actual camera. Ming's argument is that the movement of a camera might change the habit of photographer, therefore he might not be so used to the new interaction. besides this point, explicit stating meaningfulness arguing on the "Project" name meaningfulness influence name opposite meaningfulness to the same "project" idea emotion tension upgrading to personal character Ming (together with R) also propose some other practical meaningfulness: e.g. calculating the relative speed, calculating the center of the video "Ming: Or it is contrasting to human nature." (L 375) "Ming: I think there is another point that Joel mentioned, that taking video is a good way to recording the movement(speed) of the subject relative to the earth." (L.382) "Ren: in fact, can it be like this, from technology pov. It can detect the center of the picture, so that if the photo is constructed unbalanced, the camera would move automatically to put the focus center to the real center" (L. 389) "Ren: can that people would understand better the rationale of the design , which is to capture the center of mass of the picture. Because normally people would ask why the camera move according to the movement of the subject. People wouldn't understand what we try to express here. However, if we add the concept of center of mass, then the people standing here, the picture is not balanced, and then the camera would automatically adjust so that the center of mass is balanced. " (L. 391) But Ke has the opposite opinion. He agrees on Xid's meaningfulness , that the association is important. and from his point of view, the two conflicts are two sides of one problemwhat is the actual essence of the project. holding the same meaningfulness, Xid also do not think the challenge of Ming is a big problem, as the purpose is to challenge the current photographing and make new ways of photographing. Ke and Xid's meaningfulness can be indicated from the following statement. "Xid: I don't think it's a problem. There's a lot of anti-instinct design." "Xid: and this design . I really haven't thought about its practical value. Or the practical details. Do you need to go that deep? Considering its value " "Ke: OK, I think the point is to make people uncomfortable." "Xid: exactly, I meant that, because it is a anti-human nature design" (L. 
377-381) different meaningfulness to the same "project" idea family meaningfulness meaningfulness practicality useful application not challenging status quo family meaningfulness meaningfulness association, new experience uncomfortable design(challenging the status quo ) K, in his final speech present very strong opinion of his meaningfulness (that aligns with the "association" meaningfulness)by introducing a new meaningfulness -not straightforward. If it is just the movement of the final effect, the audience will not be very persuasive of the effect, but if the they see the association, they will understand and they might be more interested. Ke wanted not only to design an association of designer, he wants the user to know about the design, why they should play this. And the user should know it from direct see the association. Ke also uses an example to explain this idea "Ke: I can give you an example, there is one video in the photodrama workshop. This guy tied the camera to a two meter long rope , and then rotate the rope horizontally, him being the center, camera facing him. And then he got a video, and he is in the middle of the picture, not moving much, but background rotating greatly. OK, now I only give you this video, you see the effect, but you don't know at all what he want to express here, right? If you haven't seen the other video shooting how he played with the camera (L.434)" The strong elaboration help the team to finally agree on the meaningfulness of "association": "Ren: the association is really important, I now realise our project, the most important is the association between the subject, the photographer and the phone. But then not only these three, we also need to show to the audience. Let them know about the idea. So I believe the video and association should combine" They finally agree partially because Ke is mentor and his word has more authority. But also because he used several different examples to back up his opinion. And he was able to point out the key inconsistency in R's argument. The meaningfulness was well presented and manage to make everyone back to a lined up situation. K was not a very strong mentor who gave ideas to participants, instead he tried to not interfere too much the project. But in the end of this episode, Ke presented a very strong opinion and persuasive attitude. His attitude is because he believe the argument has to end, as the team does not have enough time, and the team was heading towards the plain idea that overlook explicit clarify meaningfulness providing supportive meaningfulness to connect referring to example agreeing on meaningfulness the creative part -association , while Xid does not have enough persuasion power to bring it back. good persuasion meaningfulness successfully influence "project" good persuasion A3.2 Memo of [OF1-1] Interview Ming Memo Ming, male, engineering background, M1 student, group member of the movement-camera with Xid (designer) and Renjun in Mechanical Engineering design. M is an engineering background student. His experience in a technology patent company had a lot of influence on him. In the company, he was surrounded by discussions on cutting edge technology with application potentials. Though there is no designer in the company, the discussion occasionally concerns human and society, which he believe is related to design. From one designer(Denqing) he once worked with, he learned that design is about the power in expression. 
In this workshop, he proposed the first idea (lighting controlled by movement) to the group, but after one day brainstorming, the idea was discarded. Then Xid proposed another idea, which he failed to understand in the beginning. Especially in the interview, he used more the specific scenario and object to explain the design than the abstract concept of "association", which Xid believe is the essence. about himself • I like better the patent company where people from different backgrounds gather and talk about interesting cutting-edge technologies than the research center doing only basic research. • People in this company sometimes talk about the human side and application, therefore it is related to designerly thinking. • (from what I learned from Denqing) Design is to be expressive, to have power in the expression. • Denqing always brings the artistic sensibility, his design is magic and fantasy. about the project • My first impressive moment is when we went out and took pictures for inspiration, we were able to come up with good ideas • Xid didn't specify the reason of her design, which made me confused. I got the idea when Ke Fang rephrase Xid's idea. • My two group members have different design languages. object and human / movement and reaction). For her, design is more abstract concept than artistic skills. In this workshop, she believe her group was the most dynamic, because they had a lot of discussion, change of topic and heated discussion. The main dispute was around the core of their designwhether it is the association between movement and camera or the prototype implementation or the details of final video. Being the only designer in an interdisciplinary group, she felt her perspective was not agreed by the others, which pushed her to communicate. Though the interdisciplinary environment was difficult, she seemed to enjoy and learn. On one hand, she believe workshop with more perspectives, e.g. Joel's physics perspective, is more open-minded, on the other hand, she realised being able to think in other's shoes and communicate in another "language" is a crucial part of design as well. • Interaction design is an association between human and machine • Design is a 理念(concept/philosophy), thinking of this made me anxious, because it is not very concrete or rigid skills, and can be quickly learned, therefore cannot secure my career. • Workshops with only designers are narrow-minded, people like Joel can bring in new perspectives. • The first idea -filling light for selfies -is not good, because using scenario is not realistic. • The core of the second idea -movement controls camera -is the association between movement and camera, not the implementation or well organised details or the quality of the videos. • We saw the same, experienced the same, but we understood differently. • Understanding and communication is a big part of design as well, which I overlooked in past workshops where only designers attended. Some Excerpts and Memo Xid: "I think the this workshop had more practical implication and flexibility." This flexibility is hard to understand without a context. But it was reflected later when she mentioned that design workshops with only designers are sometimes narrow-minded Xid: In my undergraduate, my understanding of ID is : it is a relation between human and machinehow I acted and it reacted to my action, this association construct the interaction. 
"you know there is a school,… a field that connect cognitive science and design… it is (related to ) visual art, sound design" (Juli) Juli is a science student who has a great interest in the cognitive science. She was also quite interested in its application to more practical field, e.g. design. Her notion of the interdisciplinary field started a new round to comprehension among the group. Juanmat mentioned the term "audio-visual" to conclude what he understood for Juli's description, and Chris recall some artist using brain wave detectors to make art, which seems related to Juli's description. Juli approves everyone's attribution and further elaborate that it is about using cognitive science and physics to compose design, art, and music. This small detour was part of Chris's introduction with Juli's contribution. In the end, Juli conclude the small detour by saying "I just know this", which means she just recalled this interdisciplinary that is related to what she believe as relevant to Chris's background and she thought it was worth to share. It means : "I just know this (which I want to share with you guys)". Chris, who hadn't finished his introduction then felt necessary to complement on his background, as it is not that easy to comprehend than a simple "biology". He mentioned a coming exhibition in the next week called "living object", in which artists use installation to explain what they believe as "living object". He use this exhibition to explain the "ecology" where he works in. The group started to look at the exhibition website when Juanmat immediate found someone (an artist) who he's heard of but forgot why. Before Chris even had the time to explain, Ke found the work of the artist to be one of his impressive experience. Ke started to explain his impressive experience: when the art work was exhibited in China, Ke once saw it was broken by one of the audience, which in turn made it accidentally more respective. One of his friend wrote a review about the accident which she believe adds more meaning to the art work's origin meaning: The "death" of this object (the art work) reflect its "living" status. Again, Ke's story was a small detour to Chris's story. But again, it seems that Ke felt compulsory to share his story, which he believed relevant and worth sharing. Impressive experience Adding meaning to original project Detouring not feel interrupted, as they were reflecting on the accident and discuss on the notion of the "living object" -the name of the exhibition Chris mentioned. The topic of "living object" then drives the discussion into one of Chris's motivation in the workshop. Chris was working closely with digital technology, at the same time he also wanted to collaborate with biological technology. For example, the topic of "living object" seemed to him a good topic to work on: "I wonder if there is anyway to interact with living organisms, e.g. plants, say if we can genetic modify them and see the change in real time." Juanmat gave his answer. He believed even if there is any way to do that, it cannot be presented in the public but only existed in labs (due to GM regulations). But then he indicated there are much more ways to "play with" the natural living things. Juanmat made an example of "biofilm", which is natural but at the same time has a lot of interesting features (e.g. appearance change according to different situations, e.g. temperature, etc. ) that might fit in Chris's definition of interaction. 
Juanmat's notion of biofilm also reminded Matt of a specific plant that might has the same feature. When Juanmat and Matt discussed about the time it needs for plant and bacteria to have visible reaction, Chris implied that several months or even days is too long, which gave Juli another inspiration. She recalled that other than living things, there are also special "material" that could fit in Chris's requirement, despite that Chris's original demand was to look for "living organisms". Juli was the founder of the "Material Club" in her university, and she often organized activities around the topic of material science and their application. Inspiration from personal interest traditional biology research. He mentioned several times, as he furthered his studies, he became interested in the "creative aspect" of biology science. The "creative aspect" means , in his own words :"to have an idea and then implement it" rather than to study "what's already there". This intention fits well the purpose of synthetic biology -the more engineer version of biology than science. Matt did not agree that the "creative aspect" means to the end of artistic purpose, but he suggested that scientists should learn from the designers and artists about the creativity and "openness". Juanmat made a minor correction to Matt's statement by saying that he believes it is not scientists should learn from designers about openness, but both should learn to be open to each other. Juanmat recall his working experience with designers, and he believes not all designers are open and some are even quite closed. Lina, a design student in the UK, agreed with Juanmat. When she first introduce biology to her designer peers, almost none of them is open to this subject. As for Lina, their rejection is a sign of not being open. After Matt explains his motivation, Ke complemented that he believe this "creative aspect" of biology, to make some things rather than just analyzing something, makes synthetic biology a pioneering field in terms of interdisciplinary collaboration: "The physics, for example, is much harder to work with designers". Matt agrees, and mention that once the field is rigid in methods and ways of working, it loses its flexibility to be open, and that is why he is interested in synthetic biology and its creative aspect. Juanmat complement on this point from a similar but slightly different perspective. He mentioned using synthetic biology lessons to high schoolers as an introductory lesson to science in general. He had one experience with that in the previous summer, when he aggregate dozens of high schoolers in a student lab and teach them about synthetic biology. He also mention a NGO "bio-builder", from whom he got the inspiration. To Juanmat, the synthetic biology brings in the openness from its simplicity. This simplicity, to Juanmat is a good feature that allows high schoolers and outsiders to open a door to the science world. He believe it is not only the laws, facts, results -but the meta knowledge that makes science: how scientist work, how to write science, how to talk like a scientist, etc. "When we learn, we do not only learn concepts, but also the meta knowledges, like in school, the kids does not only learn about the specific rules like how to sit, but also the meta knowledge." 
Simple for general understanding An open tool Meta knowledge as essential Comparing meta knowledge to schools B3 What's interdisciplinarity Juanmat invites the group, before entering the science part, to join a group discussion on the concept of "interdisciplinarity", as he believe although the concept is now more and more popular, different people might have different perspectives on it. He believes it is necessary for the group to communicate and ground the basic understanding of it: "when a word is so much used in so many context, it might lose its meaning, so what is interdisciplinarity to you?" Discussing on interdisciplinarity Multiple meaning and no meaning Matt believes the purpose of interdisciplinarity is to reach a better solution: "It might generate a solution much better than any field alone could have." Then Lina put forward her purpose by quoting Bruno Latour's "thingness theory", which she read in a paper and became fond of it. She explained that the fact is not only about the fabrication of the scientists but also the others who participate in the social word. Everyone has their own perspective on the fact and they together fabricated the collective reality. The social reality, and the real reality, is not confined to one perspective. It then need interdisciplinary to happen in the beginning so that different views can be taken in when the "fact" are being fabricated. The saying of "fabrication of fact" is inspired from her read of Bruno Latour, but it speaks about her motivation in interdisciplinarity, which is letting everyone takes their perspectives in the process of knowledge generating and technology creation and find the "intersection". Matt seemed a bit confused about the "fabrication of facts" : "So does that mean the fact is a better fact? A more valuable one?" "so this intersection is what is real? Objective?" If you are a trained scientist, you might get confused upon the notion of "fabricated fact" with "manifold perspective", as we often hear "fact is fact" and why should scientific fact be fabricated in ways beyond the scientific methods? When Lina sensed this confusion, she gave an example of her own third year project to explain. Lina's project got inspired by a group of scientists who are working on grow human organs on pigs for the use of transplant. More specifically, they are finding ways to use CRISPR(a gene editing tool) to get rid of the virus so that pig organs are safe for transplanting to humans. But as a designer, Lina's eye on the facts turns to other perspectives: e.g. she asks who is going to manage the transplant and where will it be implemented. When she found out the government is in charge of managing transplanting, she realized the conflicts between the private company who funded the research and the government might need a solution. She also wants to know how in reality it will be implemented in the society: People might view pig very differently; the conflict between society use and nature living, the hospital scenario where the patient will need to choose from wait for natural human organ transplant or pig organ transplant; etc. She is concerned with these issues and believe her perspective can be involved in the science The pig example raised Juanmat's interest, and he reminded that in some areas, pigs are somewhat unsacred, and in some other areas, pigs are religious. Growing human organs on pig might create political and religious issues. Matt starts to understand this. 
He thought about the insulin grown in pig that is used in diabetes, which is not a big problem to scientists. For him, the science world seems to accept those issues quite fast. But he also understand what Lina meant by the different perspective of the issue. "It makes sense to ask those questions" Another example Lina made is a synthetic vanilla made by a group of scientist. They claim it to be closer to natural hence healthier, as the vanilla now we use are basically chemicals. But then Haagen-Dazs says they will not use it as it is genetically modified. And the consumers also does not want to buy it. The conflicts is between the scientists' fact, which is a healthier and probably cheaper vanilla, and the another fact, which is GM not acceptable to Haagen-Dazs or the consumer. Lina says she does not think the research is a waste, but she suggest a collective perspective and a better communication once the science research is actually implemented in society. Juanmat views this topic with a deeper perspective. "I think we can see this in a more philosophical way, the epistemology aspect -that is one person enough to know the truth?" Juanmat thinks we need people of different perspectives to know the truth -the philosophical truth. He mentioned an old Asian story in which several blind people wants to know about elephant. With one touching the leg, one touching the tail, one touching the nose and one touching the ear, no one actually get a correct answer what an elephant is like. The story aligns with his quote of a Spanish philosopher, that the only knowledge that is close to true knowledge is common knowledge. Juanmat then point out another side of the issue, the political side. He believed that some of NASA's project is not only for science but also for public communication, e.g. looking for water on Mars. He thinks NASA need this publicity projects to Raising political perspective Bringing NASA's example The exploration of material is one thread of the design science collaboration story. Then Lina introduces another thread which intends to bring design narrative to science. For example, one of the design project speculates a technology to help homosexual parents to have children with both their genes, which is inspired by a true science. The designer not only speculates the technology, she also create the life story for a couple, what is the families' life like. She creates a film with computer graphics, which moves Lina a lot. It is touching because Lina can actually see the technology being used and feel the reality of the science. Another designer makes a genetic modified flower which reverse a GM flower to its original status. It creates a paradox of whether it is GM, as at one hand it is made with GM technology, but at the other hand it de-modify its artificial features to return to its origins. The paradox further provoke the question of what GM is and how should we see and deal with it. In the third project, the designers collect gums on the street and visualize their owners based on the DNA on the gums. The visualization is on one hand a technology, but on the other a question on the security and the rights of such information we randomly give away everywhere. For Lina, These are the stories illustrate her purpose in this narrative workshop. These speculative design, or design fictions, help to relate the technology to people, to life and to the socio-cultural context of where it is being or will be used. 
It helps to imagine a future in which technology is considered from a design perspective. For Lina, these narratives either ask a question or frame the technology in a context, so that it is easier for people to comprehend it through their daily-life experience. In this workshop, she wants everyone to make these kinds of narratives so that different perspectives can clash and, hopefully, the group can find a way to learn from each other and integrate.

A video instead of a paper, however, was NOT what Juanmat meant. He doesn't want a video that simply presents a paper; he wants a form of publication that carries the complexity of science research: "There will be parts where there is nothing, there will be parts that are contradictory. It is complex and I think a movie is a right form for this. It is not like 'hi, I am Juanmat, I am the researcher and I am going to explain to you the protein that ...', this kind of video."

Design narrative into science; Opposing the misunderstanding; Carrying the wholeness and complexity; Making example with jokes

B6 Labelling DNA content

Juanmat was the first to present his story. He wanted to think of something new, something outside the bio-engineering he is so familiar with: "I want to get rid of all the preconceptions I have." He focuses on what he calls the "social impact of scientific vocabulary", and particularly the concept of DNA. He imagines a daily scenario in a future kitchen where a family is having breakfast. While everything else seems normal, one thing is strange: the nutritional fact table on the "Cheerios" (a cereal brand) lists the DNA content.

Outside of comfort zone; Trying new things; Getting rid of preconception; Society and science; Abnormal reality

Juanmat drew the "Cheerios" box, and everyone laughed when they noticed the DNA content on the label: how funny! Normally, DNA is not put on the nutritional fact table, as it does not count as nutrition. But Juanmat rationalized it as a mark of the "naturalness" of the food: the more DNA it has, the more natural the food is, since it has more organic compounds and fewer chemical ones. So DNA becomes as good as a kind of nutrition, like protein! But this speculation was soon challenged by Pratek, a participant with a biology background. Pratek pointed out that DNA contains phosphoric acid, which "isn't cool" for people's health. This argument soon evoked a round of discussion on the chemical components of DNA and their nutritional value. The "phosphoric acid" argument was rebutted by Juanmat, who noted that the phosphoric acid is not free in the DNA but confined in the chain as part of a larger chemical: "It is like saying Coca-Cola has carbohydrates because it has CO2" (which is not correct, because the CO2 is just a part of the carbohydrate and not free to move away from the main part). But Caline, a participant with both biology and design background, argued that another component of DNA, purine, might add to the kidneys' burden, as she once learned in a medical course. Her argument seemed more evidential to the other participants, but Juanmat replied that the same burden also exists when one eats too much protein. The biological discussion continued as Matt wanted to calculate the exact amount of DNA content.

Referring to past knowledge

The biological discussion was not expected in Juanmat's original idea. His intention concerned the "social impact of scientific vocabulary". But when he made the claim that DNA is seen as a kind of nutrition, as a sign of healthier food, people started the discussion above.
When the biological discussion ended, Juanmat continued his story. This future kitchen has a green window full of algae and another window with red bacteria that are good for health. The mother was talking about engineering and fitness, while the son told his mom that these topics were so "80s" and not fashionable any more; science vocabulary is the new fashion!

When Juanmat finished his future scenario, he also speculated on how it could be realized right now by implementing "science activism" actions. He proposed that the group could go out on the street and put stickers (like "containing DNA") on food. He got this inspiration from the artist group who made fake corporate advertisements that ironically reflected the companies' indifferent attitude towards the environment. The artists and activists put those fake advertisements in the light boxes of bus stops during the COP21 conference in Paris, as if they were real advertisements by the companies (this happened in Paris around the time the workshops took place, so everyone had real-life experience of it). "Ah, that's nice!" The group hailed this proposition and started to discuss the fake-advertisement activism, which many of them knew or had even witnessed.

Reference of daily acquaintance

Inspired by this, Matt proposed that designers could design the stickers to mimic real stickers on food, just as the artists mimicked the advertisements. Ke proposed to design not only "containing DNA" stickers but also "no DNA at all" stickers, which was immediately echoed by Pratek: "Yes, like DNA free." "Yeah, yeah." Creating a science activism ignited everyone's imagination, and we saw idea after idea coming up. Pratek proposed that it is possible to actually extract the DNA with alcohol or soap (he knows this from his biology experiments) and sell it as a kind of food additive. This idea soon inspired more ideas: a DNA concept shop, DNA products, a DNA-level certificate, and so on. The rationale is that all this activism can push people to think more about the science concept, so that they can really understand the science behind it instead of following false understandings. One such misunderstanding Juanmat knew of is a report claiming that carrots contain a lot of DNA and that eating carrots will make people turn yellow because the DNA starts to translate inside the human body. The yellow effect is true, but that is not how DNA works.

Interaction design concerns designing interactions between human and human as well as between human and object, especially computers. Most interaction design in current design schools deals with human-computer interaction. But Denqing didn't start with any orthodox textbook definition of "interaction design"; he started with his own understanding as a design practitioner, through several features of interaction design that he summarizes, namely "designing rules", "interaction", "connectivity", "non-linear narration" and "transformation".

The first feature he summarizes is that interaction design is essentially designing rules, instead of designing the end product. This is the first and most frequently mentioned feature in his subsequent mentoring. To Denqing, the rule is more essential than the object or its appearance, and in many cases rules can generate the end product; designing rules therefore leads to the final product, but at an inherently higher level. He gave a very simple example to illustrate what "rule" means: "Imagine you cannot find one of the 'knights' when you want to play chess, what would you do?"
Denqing believes most people will just find a replacement, for example an eraser. In this case the specific object doesn't matter: the eraser will function as a "knight" on the chessboard, exactly as if it were a knight. Hence the "rule" defines the functionality, the playfulness, the logic and the game, not the appearance or material or anything specific in the making of the physical "knight". "Traditional design deals with the physical part, but that is not the focus of interaction design", as Denqing put it.

Denqing believes, and presented to the whole workshop, that interaction designers play with rules. One example he gave is a font design project in which no specific font was designed. Instead, the designers worked out a rule for generating new fonts from old fonts. The metaphor they use is DNA: the designers define a rule by which two fonts can give birth to a new, third font by exchanging their "font DNA". The newly created font has features of both its parent fonts but is different from either of them. The designers created new fonts not by the traditional font design method, but by inventing a new rule (a minimal code sketch of such a crossover rule is given after this passage).

Interdisciplinary project; Relevant to audience

Denqing seemed very excited in explaining this point, and he again used one of his favourite design works to back up the argument. The "rule-designing" perspective mentioned here appears constantly as the workshop unfolds; we will come back to it later and argue that it is the most essential feature among his five.

Denqing then takes his time explaining the second feature, "interaction", and gives three examples of how designers play with interactions. The first is his own work, "Collider", exhibited at Ars Electronica 2014. The installation is a door locked by a magnetic lock, which opens instantly when people run into it without decelerating. It is intentionally designed so that people have to believe it will open at the last moment; the interaction is quite risky and provocative. The second example is a fashion design that protects people from smoke. The original design reacted to smoke and raised an alert; Denqing then helped to improve it into an active interaction: the clothes emit an anti-smoke gas which smells, and only smells, bad to people who are smoking. The upgraded interaction works like a weapon to fight back. The third example is a furniture design for a hospital. The original design was an iPad attached to a chair so that people can play and interact while they wait. Denqing helped to adapt it into a more sensitizing interaction design that helps patients in severe pain to be prioritized in the queue. The design is a line of chairs, more uncomfortable towards the front and more comfortable towards the rear. A patient in real pain doesn't care whether the chair is uncomfortable, because they already feel pain; they will choose the uncomfortable chair at the front and get treated faster. The interaction is implicit but very subtle and creative.

As we can see here, although Denqing was talking about interaction, it can still be interpreted as a way of designing rules: rules of interacting between human and object, and between human and human through designed objects. Denqing's emphasis is still on the "rule" side of the interaction, not the object. As we will see later, this emphasis appears again in his elaboration of the following features.
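To make the "designing rules, not objects" idea concrete, here is a minimal, purely illustrative Python sketch of a crossover rule of the kind described above. The parameter names, the values and the random-mixing rule are assumptions made for illustration only; they do not come from the actual font project.

```python
# Illustrative sketch only: the real font project is not documented here,
# so the "font DNA" parameters and the mixing rule below are hypothetical.
import random

def crossover(parent_a: dict, parent_b: dict, seed: int = 0) -> dict:
    """Generate a 'child' font by exchanging 'font DNA' (parameters) of two parents."""
    rng = random.Random(seed)
    child = {}
    for key in parent_a.keys() & parent_b.keys():
        # Each parameter is inherited from one parent at random, so the child
        # carries features of both parents but equals neither of them.
        child[key] = parent_a[key] if rng.random() < 0.5 else parent_b[key]
    return child

# Hypothetical "font DNA": a few numeric design parameters.
serif_font = {"stroke_width": 0.08, "x_height": 0.48, "slant_deg": 0.0, "serif_size": 0.12}
grotesk_font = {"stroke_width": 0.11, "x_height": 0.53, "slant_deg": 4.0, "serif_size": 0.0}

new_font = crossover(serif_font, grotesk_font, seed=42)
print(new_font)  # a third font generated by the rule, rather than drawn by hand
```

The point of the sketch is that the designer's deliverable is the `crossover` rule itself: any pair of fonts can be fed into it, and the "end product" follows from the rule.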
Exemplary/impressive projects; Using metaphor "Weapon"; Consistent focus on rules - cluster of meaning; Similar different

The next feature, "connectivity", refers to the fact that when we design interactions and rules in this digital world, designers are interested in designing collective and connected activity. "Telegarden" is an example where people can tend their plants online; the community of the virtual garden actually creates a collective garden and generates collective gardening activity, such as helping others tend their plants. The next feature, "non-linear narration", refers to designs that change the traditional way of narration. For example, in one of his designs, "Sound Theatre", the physical space of the theatre is thrown away and the audience is thrown onto the stage, experiencing the actors at zero distance, which completely changes the narrative compared with traditional theatre. The final feature, transformation, means designers can transform what is perceived into another form in a meaningful way. The example he gives is an artificial-intelligence photobooth: instead of making a picture, the photobooth senses the user and writes a poem describing the user's face. Transformation emphasizes the mediating role of design; the photobooth is a mediating tool from one sensation to another.

The five features, which seem to stand in parallel, have, as I read them, an internal logic, and from his use of examples we can find an important implicit feature. We need to point out this feature to better understand Denqing's perspective, and hence his influence on the following workshops. The implicit feature is the intention to challenge the status quo: to make radical change, to create new things, especially new rules, to resist plain application in conventional ways and to crave creativity. There is a strong intention to challenge the status quo as he elaborates his definition of design. The first feature he uses is designing rules rather than the final object, but to me, what he calls designing rules is more like reinventing the rules. He is not satisfied with the status quo; he constantly challenges it with new rules and new metaphors, for example "connectivity is not only virtual, but also physical".

Connected activity; Exemplary projects; Past experience / project; Meaning introduced to workshop; Challenging the status quo; Attitude as connecting social meaning; "designing rules, not result"; "interaction not necessarily digital"

Denqing doesn't seem satisfied with taking the status quo for granted. For example, the last point, "interaction not necessarily digital", referring to the chair example, challenges the normal interaction design paradigm. Besides the very creative idea, the design quietly removes the digital device, leaving only the chairs. But the concept and the functionality of the design are complete and rich, leaving no room for a digital device. He provocatively asks why interaction must be digital. To him, an interaction, once fully established, doesn't need a digital tool, which challenges the normal understanding of interaction design as the design of digital interaction tools. Denqing describes this lack as a plus: the creative nature of the design and the rule-breaking spirit far outweigh the status quo. When we review his other examples, the photobooth, the collider, the anti-smoke clothes, we can sense in all of them a hint of the creative intention to resist plain and straightforward application.
Therefore, the basic feature units are "interaction", "connectivity", "non-linear narration" and "transformation", which describe the entry points and perspectives a designer might take as entering tools. "Rule designing" is a higher-level feature emphasized across all of the features above: the rule to interact, the rule to connect, and so on. Denqing seems to present the stance that interaction designers should know that their ultimate focus is the rules behind these different entry points. And the final, implicit feature is that the designing should, to some degree, be creative and challenge the status quo: it should have the power to say "I am making a new and radical change to the existing rules, I am reinventing the rules." Understanding the design perspective presented here, along with the examples Denqing used, is essential for understanding the following episodes. There was relatively little interaction between Denqing and the students, except for a very brief Q&A section.

Provoking the status quo; Less is more; Spirit of rule breaking; Connecting social meaning; Meaning relevant and introduced to working project
... design principle, knowledge or techniques they had learnt from the previous lectures.

In a post-interview, Denqing explained his rationale for introducing the weapon concept. He described the "weapon" as an object that does not solve any problem that either designers or scientists are familiar with. As Denqing observes in his past teaching practice, students are normally quite comfortable rushing into problem solving with the knowledge at hand. This pre-knowledge narrows the scope through which they see the problem. The narrowed perspective is often the one to which their knowledge can immediately apply, whereas we need to stay open to the complexity of the problem and its plausible alternative solutions. A "weapon" is first of all an alienating tool, which forces the participants to take into consideration an unfamiliar problem or a new perspective; after all, one rarely thinks about designing a weapon in a peaceful life. Furthering his rationale, Denqing considers the "weapon" a neutralizer. The "weapon", as an alien object, functions as a "hammer" (as Denqing put it) to break both the designers' and the scientists' structured knowledge. The "weapon" then forces them to forget where they come from, to deconstruct their knowledge and to reuse it in ways neither of them is comfortable with. But thanks to this discomfort, it leaves space for both perspectives to enter. The "weapon" makes sure both disciplines have a voice, since no one can now be the expert. It neutralizes the situation and invites both perspectives to enter from an equal standing point. With this rationale, one can better understand why this seemingly distant metaphor makes sense in this interdisciplinary workshop. When Denqing explained the rationale to Juanmat and Ke before the workshop, it didn't take long for the two to agree with the idea; all of the organizers believed it was at least an idea worth trying. After the workshop, when the organizers gathered to reflect on the result, they agreed that the weapon did prove its usefulness. It was not as perfect as expected, but it did in some way help the students to open their eyes.

With the above rationale, Denqing prepared a lecture specifically to introduce the metaphor. In this workshop, Denqing used extensive examples to illustrate how the "weapon" can help learners look for more appropriate solutions.

Lecturing knowledge
All the organizers were OK with that flexibility, as the purpose is to encourage exchange between science and design, between different communities, not necessarily within one specific group. The last element, Science Fiction, means that the students need to come up with a science fiction in which the "weapon" will be used or presented. It requires the students to give context to their final solution; moreover, since the context has to be a "science fiction", it implicitly encourages more creative and imaginative ideas.

Understanding the five elements is easy; understanding the internal logic and the relationships among the elements in the design process is hard. Which element should I start with? Are all elements absolutely necessary? Is there an order of the elements when brainstorming? Which elements should I emphasize while compromising or even omitting others? Denqing immediately tried to address these potential confusions by describing what he believed to be an ideal design process with the different elements: "You can start with whatever interests you the most, do not follow the order we give you, but focus on finding the connections between the elements. In the end, we recommend you integrate everything into a coherent story... You should try to start as broad as possible, but end with a more specific angle that is most creative and appealing." (Denqing explaining the logic of the elements in the design process)

In this sense, we can understand that Denqing's five elements are flexible rather than hard requirements; they serve as an inspiration more than as a must-do. However, this doesn't mean Denqing emphasizes nothing but a random process. There are at least two emphases in the excerpt above, as well as in his later mentoring: 1. students should find ways to connect the different elements; 2. they should try hard to make the result "creative" and "appealing". We will see in later episodes what exactly these mean to him.

- Time bomb: the rule, not the impact. The first weapon example is the "time bomb". "What is unique about a time bomb?" Denqing believes the uniqueness is not about how impactful the bomb is, or what material it is made of, but about the link between time and destruction. Again, Denqing emphasizes the rule, not the physical impact of the bomb.

Weapon Examples

- Fuse: weapon and biology. In the next example, Denqing demonstrated how a weapon can be related to biological knowledge. He compared the fuse to neurotransmission and encouraged the students to look for more connections of this kind.

- Sniper and Parametric Speaker: weapon and interaction design.

Emphasizing the rule; Connecting biology and design

In the last example, Denqing compared the sniper to an interactive artwork. The work, named "Parametric Speaker", explored the possibility of directional listening and directional speaking in public spaces with a special speaker. In a loose comparison, Denqing believes the speaker is like a sniper, and one might get some inspiration from this metaphor. In exemplifying the weapon metaphor, we can see that Denqing did not come up with the weapon idea from zero: he had already been exploring some simple prototypes himself. In his examples, rules and connections remain the focus. To Denqing, one should focus on which rules make the weapon special, the "time", the "transmission" or the "direction"; these rules can connect the different elements of the workshop. To conclude his points, Denqing presented an ideal model: Is there anything related to time in the environmental conflict?
Is there anything related to time in the rule of the weapon? In synthetic biology, or in interaction design? Try to connect the dots through the rule and get inspired.

Using examples; Prototype developing; Focusing on main social meaning; Connecting the dots as creating

The metaphor of the weapon is a key concept in this workshop, largely attributable to Denqing and his thinking on interdisciplinary learning. There was no similar concept in the first Co-lab workshop. It serves as more than a simple metaphor: it is a potential tool to break up pre-knowledge and reconnect the disciplines as well as the wicked problem. It is an invention. In the next episode, we will unveil how this strategy actually works in reality through close participant observation of the "Chrom-Air Attack" project.

Pedagogical tool; Using an invention

C3. The start of the idea

The group work started right after Denqing's lecture on the weapon (4pm, Monday). The students were encouraged to go out and start their brainstorming in a beautiful sunny garden.

Going out

The group started in a very divergent manner, proposing idea after idea. It seems they listened to Denqing's advice to think as broadly as possible in the beginning. Some of the ideas were direct solutions to an environmental problem, while others seemed to emerge from the "weapon" metaphor. Although the ideas were emerging fast, I observed some features of the discussion that showed a biased focus towards science. The imbalance is partly because the group was formed mostly of science students. For example, whenever an idea was proposed, a serious discussion on the feasibility of the proposition seemed to appear. As new ideas are always immature and vulnerable, critiques of technical feasibility often stopped the group from going deep. The first round of topics shared a similarity and followed a pattern: a direct observation of a problem and the application of a direct technology. At this stage, the design side of the narrative was weak. Denqing did advise the students to be divergent, but not at the price of discussing only superficially without discovering the potential of the ideas. The strategy the group used was more like "proposing, criticizing feasibility, proposing a new idea". The connection between science and design therefore wasn't fully established, as they did not take the time to dig deep enough.

The discussion lasted around one hour and generated, surprisingly, seven ideas, presented shortly after. After some further discussion, the group finally converged on one project which they found both feasible and adequately interesting: a home decoration, e.g. a wall painting, that changes its color according to different environmental conditions, e.g. air pollution. The color change is the result of a biological process, e.g. bacteria reacting to the environmental change. This idea became the initial prototype of their final proposition. It was challenged by Denqing the next day, when he presented the bubble idea.

C4. Denqing's Bubble

A critical turning point of the project came in a discussion with Denqing at the beginning of the third day. Denqing liked their idea of a color-changing wall painting, but proposed an alternative object to carry the feature. Before explaining his idea, Denqing first showed an artistic music video of the song "Heartbeats" by José González. In the video, thousands of colorful balls bounce freely in every corner of a city.
The enormous bright-colored balls bouncing across the streets of a calm city are quite mind-blowing in their outstanding visual imagination (see the picture below and the music video for reference).

Speculating based on the project

For example, the group investigated different possible bubble forms: liquid bubbles, half-liquid bubbles (liquid when made, becoming solid when exposed to air), and solid-state material that can float (e.g. like snow). They carefully discussed how a bio-sensor or another color-changing method could be embedded in the different bubbles. In discussing how to kill the bubble before it lands, the group investigated different methods, such as pressure, temperature, fluid mechanics, oxidation, etc. For the installation, they investigated the idea of a floating island over the city controlled by light pressure. Very probably, these detailed discussions would never have happened outside the interdisciplinary workshop: scientists would rarely think of making bubbles that change color in the open air and kill themselves before landing; the whole story doesn't make sense for the sake of science alone. Nor would designers with similar ideas seriously discuss the scientific details: "why should they?", they might question the necessity of such detail in such an imaginary design. The discussion can be a scientific research topic, for fun; and it can also inspire the design to advance, as we will see in the next parts. There were also times when the group got stuck in the details and then questioned the whole idea. But instead of turning away, as they did in their early discussions, the group would pick up the artistic and romantic side to complete the narrative and reconfirm that it is meaningful.

One small incident that took place at this stage is worth noticing: Xiaoding (the designer) asked whether the sensor could be something in the city, e.g. an oil container. The answer was quickly given by one of the biology students: a sensor is a receptor, a receptor reacting to a specific trigger, with all the scientific explanation that follows. He was not considering why Xiaoding had asked the question in the first place. But if we look back at the question: why did Xiaoding ask about something in the city, and why did she give a specific example like an oil container? As a designer, my instinct tells me that she had something to say about the example, something that is not really about science but about the functionality of the object itself; the oil container might have some special meaning worth exploring. The idea was not even developed, just a "baby proposition". This "baby proposition" took the form of one very easily neglected sentence, and its full meaning, which might have existed in the designer's mind, soon disappeared in the driving meaningfulness of the discussion happening at that moment. We will never know the designer's implication, which might have had the potential to redirect the project and nurture new creativity. It is not a pity; these small incidents happen all the time, and it is impossible to get every implication spoken out and developed. If that were to happen, there would be no time, and the project would go nowhere. But this small incident shows something hardly noticeable: some ideas are easily overwhelmed by the driving meaningfulness.

small project proposition neglected; project oppressed meaning; driving meaningfulness; potential new direction; regular neglecting; under-developed project proposition
C6. The science fiction

With a few rounds of discussion on both the scientific part and the romantic feeling, the group developed a certain degree of agreement and confidence in the project. Their passion, however, was ignited by the final science fiction. When discussing the project, the designer Xiaoding kept mentioning scenarios. She was more comfortable putting the object into a scenario or a story than discussing the object on its own. For example, when the other members of the group were focusing on a specific object that can change color, they mentioned that the air itself could change color. But Runda opposed that idea, saying that a room full of red-colored air would make it scary to breathe. This scenario soon caught Xiaoding's attention. She immediately added: "So we are looking at a scenario where there will be some human interaction, right?" While the others were discussing the object, Xiaoding was thinking about the scenario, whether there would be people in it and whether they would interact. This intention led to her proposal of a scenario very different from the previous discussion. She proposed a dark future in which the entire world is polluted, and the only color people see in the air is the red alert of the bubbles.

Proposing new scenario

This dark-future scenario soon caught everyone's attention. Runda quickly developed a continued version of the story based on it: still in the dark future, everywhere is polluted by pathogens, everyone has to wear a mask, and people are hunting for spaces with a safe color. The more colorful the bubbles are, the more harmful the air is. In the end, they finally find a place where the bubbles are colorless. Seeing the colorless bubbles is like seeing an oasis in the desert; people take off their masks and breathe the clean air. The sci-fi ignited the group, and everyone looked very happy with the story. "OK, that's it!" They all applauded the story, which was so attractive that people started to add more to it. "Bitterly romantic!" Jinrong described the sci-fi as follows: "it looks romantic, but in fact it is an irony; the more color there is, the more bacteria there will be." Scientific knowledge was also added to the final scenario: how the bubble would be degradable, easy to explode, without pollution. The group did not give up their many scientific details, but they started to attach the science part to the sci-fi story. Xiaoding also added the question of how humans can interact with it, which led to the idea of a protection suit that changes color, so people can visualize their degree of being polluted. Xu believed this idea was not as "romantic" as the bubble, but agreed with the others that it could be included as a "side product", the bubble being the most attractive and the others more practical uses of the same technology. The "Bitterly Romantic" became a commonly accepted description of their final product, alongside their very detailed technique for realizing the bubble.

Impressive scenario; Completing scenario; Aha moment; Achieving agreement; Summative description; Keeping the science part; Attaching science to scenario; Adding human interaction perspective; Reaching side product

Fig. 0.1 The format and different editions of Co-lab workshops
Fig. 0.2 The three co-founders of Co-lab workshops
Fig. 0.3 The "Co-lab" like activities
Fig. 0.4 The three defining features of the new model of learning - CoLAB
Fig. 0.2 Relation Map of the Research Background, Motivations and Questions
Fig. 1.2 Activity Theory Triangle System (Cole, "A cultural-historical approach to distributed cognition")
Fig. 4.1 Charmaz's (2006) categories and their relation map on telling chronic illness (p. 62)
Fig. 4.2 The Spiral Abstraction: Codes, Categories, Concepts - increasing levels of abstraction (Bryant, 2017, p. 97)
Fig. 5.1 Illustration of the Selfie Stick Idea in Episode 1

Episode 4. Mutual Understanding (L. 171-231): Episode 4 (corresponding to Lines 171-231 in the conversation transcription) took place in the daytime of Day 3. The team gradually fixed the idea, and they started to work on the details. Ke helped them to grow the mutual understanding in this episode. At the end of the episode, Ke urged the group to go out and try out their first prototype.

Episode 5. Going Outside (L. 232-273): Episode 5 (corresponding to Lines 232-273 in the conversation transcription) took place in the evening of Day 3.

Fig. 6.1 The socio-cognitive CoLAB process presented by the sub-categories of "Project" and "Project-associated actions"

Episode 2. New Proposition (Lines 30-60): At the end of the conversation, Xid started to combine. Xid proposed spot light and colorful light; it is in her mind part of the experience, but also proposed because Ming demanded a specific effect. This is an example of meaningfulness communication changing the collective status of meaningfulness and then

Figure 7.1 Co-Meaningfulness Quadrant Map
Fig. 8.1 Constructing "individual Meaningfulness" by Evoking, Applying and Prioritizing
Fig. 8.2 Constructing "Co-Meaningfulness" from multiple Meaningfulness

... the "dry" fact, nor in the over-decorated media-report manner.

Fig. 10.4 Example of the two threads of "Project" and "Meaning" from B6

C5. Science and romantic: In the example above, Xiaoding was the only designer in the team. She actively tried to participate in the group discussion, but often felt she could not be effectively involved. In a later interview, she explained where she believes the reason lies: "I really wanted to participate, to contribute my knowledge and skill, but they are all very professional in the technology … I felt only 20% of my participation was somewhat useful, and the reason is that I think I do not have the knowledge to quickly respond. I need time to reflect on what they say." (interview, Xiaoding)

Fig. 10.5 Illustration of multiple potential
Fig. 10.6 Prioritizing from the socio-historical background and socio-cultural context
Fig. 11.1 Constructing "Co-Meaningfulness" from multiple "Meaningfulness"
Fig. 11.2 Questions for the three-level framework of Co-Meaningfulness by Index
Fig. 11.3 Example of Co-Meaningfulness trajectory of B1

Table 12.3 Visualization of Step 2 of the Grounded Theory of "Co-Meaningfulness": With theoretical sampling (meaning further sampling for more data with the purpose of completing a certain theoretical direction in grounded theory; see PART III for more details) into more data and coding, two key properties of Co-Meaningfulness are selected: the P-M Intensity (Project-Meaningfulness Intensity) and the M-M Coherence (Meaningfulness-Meaningfulness Coherence).
P-M Intensity indicates the degree to which learners actively and effectively engage their individual meaningfulness in the project, while M-M Coherence indicates the relationship between different individual meaningfulnesses: whether they contradict each other, support each other or completely align. P-M Intensity concerns the "Project-relevant" aspect of Co-Meaningfulness, whereas M-M Coherence concerns the "collective status" of Co-Meaningfulness. The two dimensions of Co-Meaningfulness construct the Co-Meaningfulness Quadrant, in which the trajectory of a CoLAB learning process can be anchored. The trajectory is both an analytical tool for researchers and a reflective tool for learners and practitioners to review the good and bad practices in their learning (a minimal illustrative plotting sketch of such a trajectory appears a few paragraphs below). The Co-Meaningfulness Quadrant and Trajectory is a special visual coding that relates and integrates the different properties of Co-Meaningfulness: the two dimensions present the P-M Intensity and the M-M Coherence, and the trajectory presents the development of Co-Meaningfulness. The generation of the Co-Meaningfulness Quadrant and Trajectory is the most essential step in our grounded theory, as it plays a pivotal role in relating different concepts and categories. In the following visualization, the P-M Intensity is presented by the double arrow between the triangle (Project) and the quadrilateral (Meaningfulness), whereas the M-M Coherence is presented by the double arrow between different quadrilaterals. The Co-Meaningfulness trajectory is presented in the quadrant on the right.

Table 12.4 Visualization of Step 3 of the Grounded Theory of "Co-Meaningfulness"

The big fight (Line …): "It is so simple, like a black box, you have the genes and then you see what you do and get the result, sometimes dangerous, yes, but very simple."

Since the students participating in the workshop were of science background, Denqing believes it is important to have a lecture and explain what design and interaction design mean to him, in contrast to what design normally means. In clarifying the five elements, especially the "connection" between the five elements, Denqing illustrates a few weapon examples:

Table 0.1 Content of SDGo guidebook in three phases

There are two major forms of boundary-crossing learning in the new model of learning. The first is interdisciplinary learning. Disciplines are still the major form of boundary in learning scenarios: we are quite used to referring to disciplinary knowledge when we learn, and in solving open wicked problems, disciplinary knowledge is also indispensable. However, no single discipline is enough for open wicked problems; we need to cross the boundaries of existing disciplines and integrate useful knowledge. The second form of boundary-crossing learning is crossing communities of practice. Open wicked problems, and especially their "openness", require learning to incorporate experience across different communities of practice. In the SDGo competition, for instance, the high schoolers need to cross the boundary and learn from the community of street-cleaning workers in order to better understand the problem.

The learning model of "CoLAB" does not indicate one specific learning approach or pedagogical method, nor does it indicate learning activities with specific and fixed rules or formats. Instead, it is an umbrella name for a group of learning activities that share the same features.
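As a concrete illustration of the quadrant and trajectory described above, the following is a minimal Python sketch showing how a researcher might record hypothetical (P-M Intensity, M-M Coherence) ratings per episode and plot them as a trajectory. The rating scale, the episode labels and the numeric values are assumptions made for illustration; they are not data from the actual workshops, and the sketch is not part of the original study.

```python
# Illustrative sketch only: episode labels and ratings are hypothetical.
import matplotlib.pyplot as plt

# Each episode is rated on the two Co-Meaningfulness dimensions,
# here on an arbitrary -1..1 scale (an assumption for illustration).
trajectory = [
    ("Ep.1", 0.2, 0.1),   # (episode, P-M Intensity, M-M Coherence)
    ("Ep.2", 0.5, -0.3),  # meaningfulness engaged, but members diverge
    ("Ep.3", 0.1, -0.6),  # getting stuck: low intensity, low coherence
    ("Ep.6", 0.7, 0.5),   # after the "big fight": intense and re-aligned
]

xs = [p for _, p, _ in trajectory]
ys = [m for _, _, m in trajectory]

fig, ax = plt.subplots()
ax.axhline(0, linewidth=0.5)  # quadrant axes
ax.axvline(0, linewidth=0.5)
ax.plot(xs, ys, marker="o")   # the trajectory through the quadrant
for label, p, m in trajectory:
    ax.annotate(label, (p, m))
ax.set_xlabel("P-M Intensity")
ax.set_ylabel("M-M Coherence")
ax.set_title("Co-Meaningfulness trajectory (illustrative)")
plt.show()
```

Such a plot can serve the reflective purpose mentioned above: practitioners can look back at which episodes drifted into the low-intensity or low-coherence quadrants and discuss what happened there.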
It is, however, difficult to give a clear-cut definition of CoLAB, as the phenomenon of CoLAB is by itself a wicked problem in today's learning. The three features are interrelated: the key purpose of the learning is to solve open wicked problems. To achieve that goal, we need learners to collaborate to authentically solve the problems, as in the real world none of these problems can be solved by one person. The collaboration is boundary-crossing, so that different backgrounds and experience become a source for learning, not a barrier, since boundary-crossing experience and knowledge are crucial in solving open wicked problems. Hereafter in this thesis, we will use CoLAB to indicate the new model of learning for solving open wicked problems.

We conclude three indispensable features of the new model of learning that emerge from facing the challenges of open wicked problems (see Fig. 0.4). None of the three features is entirely new. For example, problem-based learning can be traced back to Dewey and the Project Method in the progressive education wave, and more recently to its wide use in universities for complex tasks such as surgery in medical schools. Collaborative learning can be dated back to the 70s, when teachers started to use cooperative learning skills to promote better student performance. Boundary-crossing learning can be traced to interdisciplinarity and expansive learning. For a detailed review of the different learning models, please refer to PART I. LITERATURE REVIEW AND METHODOLOGY.

The three defining features shown above help us define the new model of learning. We call this emerging learning model "CoLAB", which is short for "Collaborative Learning Across Boundaries for Open Wicked Problems".
• CoLAB is Open-Wicked-Problem-Based Learning
• CoLAB is Collaborative Learning
• CoLAB is Boundary-crossing Learning

A study of CoLAB oriented towards improving the learning experience is in great demand among practitioners. Implementing CoLAB is difficult for two reasons: 1. the open wicked problems people try to solve with CoLAB are complex and difficult; 2. CoLAB is by itself an open wicked problem and hence difficult to deal with: it involves many entangled challenges concerning institutional and cognitive barriers. This pragmatically oriented study should consider the fact that the vast majority of CoLAB practitioners do not have a firm background in the learning sciences or educational research. Filling the gap between practice and theory requires the study of CoLAB, firstly, to have an in-depth understanding of the real-world problems of the practitioners and, secondly, to be presented in a way that the divergent practitioners can easily understand.

The pragmatic purpose of the study of CoLAB also aligns with the personal motivation for this thesis. Before the Ph.D. study, the author was a learner in a CoLAB learning initiative in China. As China's economy develops, the need to solve open wicked problems also increases. The trend has infiltrated many universities and research centers in China, which are asked to design and implement new learning initiatives to solve complex problems through the collaborative efforts of people from different backgrounds. However, many new learning initiatives face practical difficulties: they do not know what CoLAB learning is like, what the key issues are, and how to overcome the barriers.
During his Ph.D. study in Europe, the author of this thesis developed his experience in CoLAB organizing and curation through constant practice with different learners, professors and the CoLAB communities. Therefore, one motivation for this thesis is to materialize the experience and understanding gained in practice, and to help practitioners with less experience to improve their CoLAB practice. The Co-Meaningfulness concept can help practitioners, when they meet challenges in CoLAB, to understand the challenge from a more profound perspective, at the "Co-Meaningfulness level". The Co-Meaningfulness trajectory can help practitioners to better locate the key moments and understand the good and bad practices. Second, from a sociocultural and developmental perspective, the Co-Meaningfulness framework helps learners and practitioners to understand that each time CoLAB is done, the Project and the Co-Meaningfulness are part of their development, as they enter the learners' P/M "library" and develop the adaptability of their prioritizing. The next time in CoLAB, learners are equipped with a more entangled network of experience and meaning, as well as more flexible and adaptable ways of prioritizing, from which they build the next Co-Meaningfulness, and possibly lead the development of novice CoLAB learners through this new Co-Meaningfulness. Understanding this developmental role of "Co-Meaningfulness" will help us better catalyze the development of individuals and communities in CoLAB.

At the beginning of the INTRODUCTION, we use examples from our daily lives to illustrate the new "open wicked problems": what are they? What are their features? And why are they becoming the pressing problems of our time? We then review the traditional learning models and the challenges they face in solving open wicked problems. These challenges have resulted in new learning models, exemplified by a group of new learning initiatives around the topic of the SDGs, one of which, the SDGo competition, is introduced in detail so that we know exactly what the new learning model is. The new learning model is open-wicked-problem-based, in the form of collaborative and boundary-crossing learning, which we call CoLAB (Collaborative Learning Across Boundaries). The emergence of the new learning models encourages many learners and practitioners to transition from the traditional model to the new one, which calls for a better understanding of the nature of this learning model. As CoLAB is an open wicked problem by itself, we are motivated to study CoLAB from a practice-based perspective, with the pragmatic purpose of illuminating the educators in transition.

Table 4.1 Charmaz (2006) initial coding on the interview of Bonnie (p. 52)
Table 4.2 Charmaz (2006) focused coding (p. 58)

Transcription of Conversation / Coding
170 Ke: I think we had enough talk. I like the idea. One thing I want to add here is special moments have special meaning.
161 Ke: you know there is a camera, it's actually a ball, with a fish-eye. You can throw it in the air, and it can take a panorama at the top. (Coding: Relating to examples with technology)
162 Xid: look this is just now, my two friends takes the capture picture. (Coding: Friends prototyping)
163 Ke: if we leave out the light, we can refer to the drama workshop. The title means the camera itself is a drama. (Coding: following directing to the camera examples)
164 Xid: if you fix the camera to the bicycle, you would get a very rotating video. (Coding: Proposing new scenario and effect)
165 Ming: Like astronauts training. (Coding: Making metaphor)
166 Xid: so we need a conclusion. (Coding: Urging to conclude)
167 Ming: we had already a lot of change of ideas now, before camera shake, light. Tonight, we focus on movement and camera. (Coding: Concluding previous work)
168 Xid: let me state again what I believe is meaningful in my proposal. First is, the randomness. You don't know when the camera will take the picture. Second, the moment you record is very natural. Sometimes when we take pictures, they all look the same, as if people can have only one expression on their face. You can change the form but express the same feeling. (Coding: Clarify meaningfulness; Highlighting random as meaningful; Natural picture meaningful; Dislike same expression)
169 Ming: Or we don't let the movement control the camera, we can let the movement control the light. (Coding: Emphasizing light control)

Table 6.2 Transcription and Coding of Episode 2, Xid explaining her Meaningfulness

In the example above, the group was struggling to fix a concrete project idea, and Ke suggested they play and test (Encouraging "playing", Emphasizing testing, L. 170). Besides working on the specific project idea, Xid also explains explicitly what is really meaningful to her in the project, e.g. randomness and taking pictures with a natural feeling (Clarify meaningfulness, Highlighting random as meaningful, Natural picture meaningful, L. 168).

From Table 6.6: "this point is very important. And this actually contribute to my points. Because you say it would be fun, but you omit all the details, it's very abstract now. One beam of light is not such fun."

In the following excerpts of Table 6.6, the "Meaningfulness" is much more explicit, as Xid deliberately states her belief in what is meaningful: randomness and natural pictures (L. 168).

"Meaningfulness" Example 2: Randomness as Meaningfulness

In the following excerpts of Table 6.7, Xid and Ren were debating the naming of the "Project". Ren insisted on a simple name, "Jumping Video", which to her is more understandable (L. 290, 292). But Xid preferred another name, "You Jump I Jump", because she believed the "association" of "movements" is the key meaningful aspect of the Project and should be reflected in the name (L. 289, 295).

In a specific setting (Table 6.8), Xid and Ming were arguing over a new proposition of Xid's. In the first part of this conversation (lines 38-46), Xid and Ming start to realize they have a disagreement beyond the "Project" level, which leads them to communicate their meaningfulness. In the second part (lines 47-60), they start to communicate deeply about the "Meaningfulness", trying to find out what caused the disagreement and trying to fix it.

Xid: let's think about this scenario: in a dark photography studio, there's some colorful line light, some people are dancing. Maybe some movements would trigger the shooting, and then we can have very natural photo. (Coding: Proposing scenario - dark studio with colorful light; Proposing effect and purpose - natural photo)
Xid: just now, we were thinking about one person, but we could have multiple people interacting in it. I am just proposing this idea, you can also change it. (Coding: Proposing scenario and interaction - multiple people; inviting for advice)
Ming: the trigger? (Coding: Indicating confusion)
Xid: just the shutter (Coding: Brief explaining)
Ming: We haven't reached how we place these lights on the floor. (Coding: Asking question on technical detail - how to place light)
Xid: We could describe the scenario as a dark room, we could also find such a place. (Coding: Answering by giving a scenario)
Ming: no. I mean how you can place and fill the light. (Coding: Refusing answer; repeating questions)
Xid: it's the realistic part, but we are now doing a prototype, we don't need to go that far, right? (Coding: Rejecting the need for further discussion on the realistic part; providing rationale: prototype)
Ming: But even for a prototype, you need to think about these in order to present/express the idea. (Coding: Rejecting rationale; arguing on the prototype's presentation)
Xid: what do you mean? (Coding: expressing confusion)
Ming: meaning that if you want to present the idea, you need to, for example, the light projected on the dancer can have certain effect. Dancer... the certain movement …

Table 6.10 presents the analysis of Ming and Xid's underlying meaningfulness in this second part of the conversation.

Table 6.10 Meaningfulness Code 2: Discussion on Xid's new idea @Ep. 2
56 Ming: yes. I am now thinking maybe we can just use top light.
57 Xid: can be multiple light, we might also refer to the MF room. But that is very professional. We are for fun. How people can really play with it. I think a design should first touch people. (Coding: explain Meaningfulness - fun, touch people)
58 Ming: yes, this point is very important. And this actually contribute to my points. Because you say it would be fun, but you omit all the details, it's very abstract now. One beam of light is not such fun. (Coding: imply Meaningfulness - Specific Effect)
59 Xid: I meant to have colorful light as new function. It can change according to music … it might be fun…. Can it rotate? No, right? Actually I was thinking about spot light. That might be fancy. But now we have only point light …. In the worst case, we post-process.
60 Ming: we need to have a demo live, right?

What Types of Interaction Between Meaningfulness and "Project"

Although the purpose of Xid's "roller skater" example was to connect her meaningfulness of "associating movements" to a new scenario that Li might understand, it apparently did not work. It is partially because Xid's example and her way of connecting were not good enough, and partially because Li did not receive or understand it. One of the biggest problems of Episode 3 is that Li kept rejecting Xid and Ming's propositions while providing only a very thin rationale: he felt the "Project" was complicated, as I wrote in the Memo: "He mentioned five times about complicated and nearly every time he did not give rationale to back up his statement. Therefore it is hard for the team members to capture and really incorporate this 'complicated' meaningfulness behind." (A3.1 Memo of Conversation)

Li showed a lack of effectively presenting his meaningfulness, and of receiving and understanding the others' meaningfulness. There seemed to be a barrier that stopped Li's meaningfulness from entering the discussion. He tried to repeat the feeling of "complicated", but the lack of examples and of elaboration made it very difficult to propose new project ideas that could represent this meaningfulness, or to combine the meaningfulnesses. There is no good or wrong meaningfulness; every meaningfulness might have potential. But there is meaningfulness well incorporated into the group and the project, and meaningfulness standing alone inside someone's head, refusing to connect in an easily understandable way. The first part of this episode (L. 60-100) features the latter status. It is largely because of Li's constant rejection, his not being able to understand the other meaningfulnesses and, at the same time, his not being able to explain his own meaningfulness.
He mentioned five times about complicated, and nearly every time he did not give a rationale to back up his statement. Therefore it is hard for the team members to capture and really incorporate this "complicated" meaningfulness behind it.

Episode 6. The big fight (Line …): They finally agree, partially because Ke is the mentor and his word has more authority, but also because he used several different examples to back up his opinion, and he was able to point out the key inconsistency in R's argument. The meaningfulness was well presented and managed to bring everyone back to a lined-up situation. (Coding: disconnecting meaningfulness from entering Project; good persuasion; meaningfulness successfully influences "Project") (from A3.1 Memo of Conversation)

(Coding: meaningfulness - interactive: disconnected from project or group; new scenario; failing to connect meaningfulness: not giving rationale; it makes their project get stuck) (from A3.1 Memo of Conversation)

Episode 3. Getting Stuck (Line 61-170): Xid tried several times to explain herself by giving examples and arguing on the rationale, and Ming also tried to work out the details and supported Xid's meaningfulness. But their attempts did not seem to evoke an echo in Li, who did not empathize with the meaningfulness of "fun interaction". (Coding: project fails to present meaningfulness) (from A3.1 Memo of Conversation)

Xid tried to fix the problem by proposing new scenarios, such as the "roller skater", but that didn't work, because Li still did not think about the fun side. They tried to resolve the problem by working out the specific details, and that did not work either. In Episode 6, Ke, as the mentor, presented his effective skill in connecting the meaningfulness to the Project. This was also one of the most impressive moments of learning that Xid mentioned in her interview.

Table 9.1 Fieldnotes Taken from B5. "Does Science Communication Need Emotion?": "… is complex and I think movie is a right form for this. It is not like 'hi, I am Juanmat, I am the researcher and I am going to explain to you about the protein that ..', this kind of videos." (Coding: Referring to real lab experience; Comparing reality with presentation; Presentation of a complex problem; Valuing daily dialogue; Presenting the wholeness; Movie as new media for science presentation; Referring to news reading; Opposing the misunderstanding; Carrying the wholeness and complexity; making example with jokes)

Fieldnotes taken from B1. Chris's background: Chris's background is a bit difficult for outsiders to understand, therefore the other learners start to comprehend it through what they are familiar with. Juli made the following "Familiarity Evoking and Applying" from her own interest in cognitive science and design. (Coding: Mutual communicating; Inadequate knowledge; Approaching from similar experience; detouring; interdisciplinary as connecting knowledge; personal interest; initiating sub-discussion; mentioning familiar term; recalling past experience; grounding everyone's understanding; Recalling relevant past knowledge; Wanting to share)

Table 9.4

The table and figure below (Table 11.1, Fig. 11.2) present the questions we summarize for the two properties of Co-Meaningfulness.
Levels: P-M Intensity / M-M Coherence

Evoking: 1. Does the learner present the evoked project? How well? 2. Does the group understand the evoked project and its relevance to the current Project? How well? 3. Does the group agree on the relevance of the evoked project?
Applying: 4. Does the learner present the applied meaning? How well does he or she present it? 5. Does the group understand the applied meaning and its relevance to the current Project? How well? 6. Does the group agree on the relevance of the applied meaning?
Prioritizing: 7. Can the learner evoke and apply the P/M that he or she prioritizes? 8. Does the group agree on the prioritizing?

Table 11.1 Questions for the three levels of Co-Meaningfulness
Table 11.2 Example of Co-Meaningfulness trajectory coding of B1.

Ke: (to R) you were not there yesterday. Yesterday Xid and Ming had a lot of discussion on the core of the interaction. The very key idea is: before, movement was not organically involved in the design. Using movement to control the stick doesn't look that natural. Why use a movement to control the brightness? The association is not obvious or natural, we can just use a button. So the question now is that the movement itself should be involved, not as a redundant element, but as a necessity. So yesterday, one result is whether the movement can be a trigger, and at the same time (Coding: Explaining previous discussion; Reason of abandoning; Involving movement)
Xid: influence the same factors in the photography. (Coding: Confirming)
Ke: we mentioned an example, jumping, right? Maybe jump is one movement; every time we jump there will be a picture. But this is the most simplified, I can imagine it won't be so much interesting. But there is a lot of space left for discussing. Firstly, what movement is worth shooting; second, in what way would we photograph the movement so that it is appropriate. There is a lot, e.g. angle. You can imagine I have cameras all over my body: which camera is taking the picture when I am making certain movements? Does the camera on my foot shoot, or the camera in the air? This is one possibility. Another is the other elements: aperture, speed of shutter, etc. (Coding: Mentioning example; Rejecting too simple idea; Raising questions as prompt; Proposing new function; Opening to more parameters - aperture)
Ke: And then I suggest that you don't stay in the classroom and think; let's go out and play. (Coding: Encouraging "playing" outside)
Xid: let's go out. XM being like this makes me very anxious. He says to me "what you want to do is like this, right?" (Coding: feeling anxious)
Ke: It's quite normal, I can understand. X, can you understand the idea now? (Coding: Understanding difficulty; Asking for understanding)
Xid: ah yes, he totally omitted this part, I almost forgot. (Coding: Forgetting previous proposition)
Ming: aperture? (Coding: Not sure)
Xid: yes, it can influence many factors; the movement can influence many factors of the camera. (Coding: Confirming)

Do you think of this idea because it's hard to implement? Or is it just that you think they are the same?
Ke: but you don't have the knowledge? I don't have it either. But let's assume we have the knowledge. You propose a good question, that we can do it with software. What's your understanding of this?
Ming: hard to implement.
Ke: but what if it's easy? Do you think it's equivalent? (Coding: speculating and digging true feeling)
Xid: I think it is usable; first, it's more fun than the jumping video. (Coding: reason - fun thus usable)
Ming: Yesterday we saw the cradle head can rotate. Only the control is more stable, so I am thinking about an alternative way to fix it.
Ming: When I think about this design yesterday, to use the mechanics, firstly, I really can't make it; secondly, I thought two people are taking photos, so that the photographer would have his own traditional way of taking pictures. If the camera is moving, it might change the regular way they take pictures. (Coding: not able to make within time)
Ke: OK, let's discuss together. (Coding: proposing group discussion)
Ming: But if you just fix the camera, and you are moving, in fact the camera's movement won't be very impressive. You just get the final video that has an effect. The final result is similar, but the process is different. If it is hand-held, the photographer can see the camera vibrating or rotating, which leaves them a non-traditional feeling of the photographing. If it is fixed, the subject being photographed doesn't feel the movement of the camera; you are just focusing on your movement. (Coding: Pointing our experience; process difference; Movement not associated)
Ren: so now we are going back to the question: do we make the phone move or not? If the phone moves, and the photographer can move the phone, then we don't need this machine. So the effect we need could be that the camera is not moving and the picture is moving. It might have a bigger effect. If the camera itself is moving, in fact it is a physical effect, making the video a bit dizzy. (Coding: Asking key function - movement of camera; Elaborating the controversy; Explaining using scenario)
Xid: Ah, this … I think.. First of all, the details, I didn't go that far. If we are now discussing the details, I think it could be me myself playing with the camera, or you can hold it. Actually I didn't think about how it moves, the exact scenario. So you think there is a lot of difference between these two modes? (Coding: Ignoring detail; Proposing new scenario - self playing)
Ming: what?
Xid: the two modes you just mentioned. Is there a big difference? We can combine. (Coding: Proposing merging as solution)
Ren: so do we propose a good idea or do we make it? (Coding: Questioning reason)
Xid: me neither. (Coding: Agreeing)
Ke: you don't need to think about the implementation. You only consider the design or the idea. (Coding: Focusing on design)
Ren: if like this, I think both can exist. (Coding: Proposing both)
Ke: not necessarily the first mode, you can discuss which one is better. (Coding: Asking for comparing)
Ren: if it is only concept, I think we can keep both. (Coding: Proposing keeping both)
Ming: the first, the demo itself is difficult.
Ke: you assume that you solve any technology problem. The problem in implementation is no problem. If you have time, you would solve it as well. (Coding: Not considering implementation problem)
Xid: don't you see a structure that can make the camera move like this?
Ren: It makes sense, let's move the phone then. (Coding: Expressing understanding)

Ming: actually, last night, I didn't understand "you jump i jump"; I heard of it, but didn't know the story behind it. (Coding: not understanding the wordplay)
Li: how do you tell the center of mass of the picture? (Coding: Asking technique detail)
Ren: see if we can have a function.
Li: I don't get the center of mass… is it visual center? (Coding: Asking technique detail)
Ke: I proposed that because Xid showed me a video where Ren was jumping, and the camera was also jumping. (Coding: explaining rationale for naming)
Ren: So then we borrowed "you jump i jump" from the TV show "The Big Bang Theory", right? (Coding: borrowing naming; believe movement as important)
Xid: no, "fun with movie" was from the TV show. (Coding: correcting misunderstanding)
Ke: "you jump i jump" is a famous line from the movie Titanic ... (explaining the story) … (Coding: explaining wordplay)
Ren: The reason why I disagree with "you jump I jump" is like XMing's point: because I don't know what it means. (Coding: naming not understandable)
Ke: I think it is a very valid point. (Coding: approving)
Xid: I don't understand why you don't understand. (Coding: not agreeing)
Ming: because the movie is very old; the young people, who are our potential users, might never see this movie, and we have to consider them. If two of us three didn't hear about this, we can imagine the title is not as catchy as you imagine. (Coding: giving reasons; wordplay out of date; not catchy as thought)
Xid: you can't use yourselves as an example. (Coding: self not representative)
Ren: I've seen Titanic twice, but still I am not familiar with the line, though I can recall the scene when Ke mentions it. It's not self-explaining at first sight. Why don't we use a simple way to let people understand it at first sight? (Coding: having seen the movie but forgetting lines; asking reason for different understanding) (the girls are talking about names)....
Xid: I think our product's focus is the association, so "you jump, I jump" can highlight the association …. (Coding: presenting "association" as meaningful naming)
Ren: but the users wouldn't know the background story then. (Coding: naming not understandable; lacking context)
Xid: Because there is no need. If we were told something that we don't know about, we will keep the question in mind, and then when we explain it with videos, people get the idea, with a surprise. There is a change of emotion. Only the jumping video is very dumb. (Coding: inviting curiosity; emotion arousing; understanding when seeing)
Xid: but if you put "jumping video", it's still strange: why would it jump? (Coding: feeling awkward not knowing; naming meaningless)
Ren: "Jumping video" is the visual effect. How do I know what you meant by "you jump I jump"? (Coding: naming not understandable)
Xid: but "jumping video" doesn't sound catchy; it will be disadvantaged when selling. Secondly, "jumping video" doesn't have a reason. Our point, the focus, is the association, not that it can jump. Everything can jump. (Coding: naming not catchy; naming meaningless; emphasizing "association")
Ren: but what if users never heard of the background of "you jump I jump", would they understand? … (Coding: naming not understandable)
Ke: it's a very good discussion. For Xid, "you jump i jump" is a symbolic and metaphoric slogan. For you two, it is hard for the user to understand. Actually I come across similar situations. Sometimes, I think of a name, and I ask advice from many people. And the most … (Coding: approving both rationales; referring to own experience; asking for many advice)
Ke: firstly, why should the subject be in the center of a photo? I can give you an example: there is one video in the photodrama workshop. This guy tied the camera to a two-meter-long rope, and then rotated the rope horizontally, with himself at the center and the camera facing him. And then he got a video in which he is in the middle of the picture, not moving much, but the background rotating greatly. OK, now if I only give you this video, you see the effect, but you don't know at all what he wants to express here, right? If you haven't seen the other video showing how he played with the camera. This is association. Just like what you say, you want the audience to understand what it is; a video alone is not enough. Think of that example. Do you know how he made the video when you just see the video? Do you really know what the point is there? But once you see how he made it, you understand immediately, and then you get the idea, and then you find it interesting. So is the point the final video, or the two added together? (Coding: Using example; referring to example)
you don't need to think too much on Ke: I think the core or the meaning of the project is .. Reaffirming the structure. Ke: for me, my opinions : I think software is Xid: is to express this association. Expressing meaningless meaningless, because… not straightforward. Why the Explaining reason -not movement of mine would lead to a video like that? I straightforward don't see the causality. You can take the data, and then post-process to get the effect, but why? Why Asking why would i add this specific effect, it is not straightforward. The movement of the hardware is seen by our eyes, everyone sees it, it is moving, this is about everything. Many arts is not understandable, why, because it is not direct. It forcely made a link between two things. Why Xid: the point is the association, I from the very start stated that. would I have this link then ? You three know, because Ke: but you need express in a way that they can you went through this, you played and generated the idea. But to anyone who hasn't seen you play for yesterday afternoon, how would they know. See here understand. Xid: OK. Asking reason for function Thinking from the user in your video. The first video, I am rotating my Ke: of course you need to make comment, and you Mentioning example skateboard, and then the phone is rotating, so I got this also, if you disagree with me, you can also propose it. Preferring one of the mode Ming: Last night we first think about the first mode, but implementation is a problem. Ke's point is the rotating effect. If the phone is accordingly rotating, you instantly get the point, get the association. At least you know how this video is produced. What if I only give movement of the phone would influence the you this video, I am rotating my skateboard. You never photographer, providing interaction. Like if you hold it know why the video is rotating, you don't even know and you dance, and camera is moving, so the camera it's the movement of the skateboard that cause the has minor interaction with the photographer. It's the rotation of the video. If there is no physical movement difference. And then the difference lead me to think about the very first idea, where we don't have a of the camera, there is no such association. Your point doesn't exist, your design never exist. photographer. So this difference was not considered at there is not so much difference. So we can consider the design by saying it put the subject in the center. all. If we base on Xid's idea in the beginning, i think Li: but Renju mentioned that we can also rationalize Mentioning the difficulty of implementation Making strong assertion on Reminding the original idea meaningfulness Proposing one of the function Mentioning function the not moving the camera. Ke: I don't think the rationale exist. Denying function Ke But she didn't specify the reason, what is the artistic expression here. If it is random, we didn't get the idea. And then later, when we get the point to have some randomness in the artistic expression, some augmented effect of certain action in the video, and then we gradually understood the design. A2. Interview and Coding A2.1 [OF1-1] Interview Ming Transcription they were working on basic technology research, which I am not very interested in. Ke: So in the Beijing company it's different. Ming: Yes, in fact every week, they have a brainstorm … technology review.. (..not clear) . The company is a patent operation company. 
Every week, staff from different background gather to discuss on their recent capture your action. Ke: What do you think of the whole workshop? Was it difficult for you? Coding feeling boring in pure tech brainstorming regularly sharing of divergent help explaining scenario of specific setting missing rationale research and interesting technology they hear about. Like an weekly Ming: I think the most difficult part is expression. The ability to express my Ke: Hello, this interview is for collecting your opinions upon this workshop meeting, but because their backgrounds are very diverse, so it's more own ideas so that other people can understand. Another lesson I learned we just had. Ming: OK interesting. (any example?) Glasses, with projector, projected to retina, so that people can see the screen(voice not clear). Another one is a ball on from this workshop is to keep communicating with our group members. Otherwise, we will probably diverge into two directions very far before we perspectives pure random making no sense state of art tech rationale explained with seeing examples. hot water, they can control the movement of the ball by controlling the notice. Ke: Can you introduce your background? temperature of the water, kind of related to touch screen. Ming: OK, I studied optics and electronics in my undergraduate, which is applied physics. interesting and fun technology feeling difficult to express keeping communicating Ke: Can you tell me your most impressive projects in your undergraduate? unnoticed divergence Ming: The first one is to build a 3D microscope with light field camera (e.g. escalating Lytro ), improve its optics for biology tissue imaging. Including optics, biology and algorithm. pure scientist group bio-technology optical technology inviting designer for discussion Ke: OK. So, I remember you participated in one workshop with a Denqing Ke: What was interesting in this project? (a designer), how do you feel about him? Ming: (In the group) We used to study applied physics, applied optics (theory). This project is the first time we bought lens, we set up the light Ming: He likes things that are expressive, sometimes when we proposed him an idea, and he would think it is too normal, not expressive, not path ourselves. In this way we could understand better, many knowledges powerful enough, and he would push us to something that is more not taught in class, we could associate (the knowledge). There is a lot of interesting, I like this sense of his. A friend who knows him says Denqing is work, and we managed to finish it step-by-step. It functioned at end, and now trying to find applications for his work. His courses were a great we processed the images as well. We learned light path design, microscopy, light field , biology knowledge, light platform experience, etc. experience. mixing perspective divergent language applying theory to use attributing to time limit mentioning a design teacher liking expressive things independent working truly mastering knowledge and dislike too normal idea demanding make space gaining experience require powerful and more mentor guiding in the trying to apply functioning as success expressive idea beginning Ke: So what is impressive working with him? Ke: Are there any other projects you want to share? Ming: Keep saying no. We kept coming up with ideas that ourselves are Ming: When I was in second year, I did an internship in a company in satisfied with. But after a second thought in the group, we found these beijing, now merged into xiaomi. 
There are many Ph.Ds with different ideas were not good enough, or someone else already had them before. background. Occasionally they invite design professors to share their So we had to develop further. (What was he pursuing?) An emotional ideas. I felt their research is interesting, not only technology oriented, sometimes we consider human society as well, It's attractive. Before that, I expression. Every group he gave different suggestion. For our group, learning multiple knowledge in making a project liking the experience interning constant rejecting to ideas self rejecting -not good inviting designer to tech company enough or not original enough had another internship in the National Optics and Electronics Center, there feeling interesting beyond tech emphasizing on emotional considering societal issue expression feeling attracted Ke: Do they have designers in the team? Ming: they are mostly phd (on science) discussing together, they don't have a designer, but they used to invite designer for a discussion. They would consider human side and application, but many of them is also pure technology. It's kind of mixed, you can not define. because one of the group member is medicine background, he tried to push us to think about pills. He likes to combine two different things. He dares to really connect two very distant things and dig deep. More Ke: How do you think of your two partners? Ming: I think they don't agree with each other. I think each of them have their own design philosophy and language. Maybe also because time is limited. Ke: How can we improve the workshop. Ming: Next time it would be good to have a fablab for us to work in. It would be good that in the first day each team can talk to a mentor for advice. should be portal, always for travel, so I think the application value is not big. Additionally, camera has its own flash, why do we need to fill light? I don't quite understand his point, I don't think it is a very good design. When he is alone doing a selfie and he doesn't have a good lighting, he must be very bothered. Xid: I think he is very good. I felt workshop by designers alone are went out without telling anyone. comparing past interaction associating movement is sometimes narrow-minded. Sometimes they constrained themselves. Ke: So let's talk about your project in the workshop. It's very interesting, design designly For example, when Joel came to the workshop, and he talked about the especially the process. practicality relative motion of camera and subject, and the camera perspective of Xid:I think among all groups, our group is the most interesting one in flexibility human motion, etc, this is something that I would never imagine as a terms of "process". All other groups proposed an idea in the beginning feeling more interesting designer or in a design workshop. and finished it without a lot of changes. Their ideas were proposed by narrowed-minded with only designers, and other members followed the idea and implemented. We designers Ke: You mentioned narrow-minded, what exactly did you mean by that? are different, our engineer (M) is quite imganitive, and quite insisted on Xid: Like a limitation on the range. For example, we see a lot a good what he wanted to accomplish. And we, as a group, of course we product design from architects. Many good products nowadays came need to be supportive to group members. We need to try the idea, and if playing with prototyping tool mind-blowing from people with diverged background. 
it didn't work at last, we change/modified together. So, we worked given topic of "fear" interesting process. together, and then changed together. music player making people Ke:Your first proposal is about light right, can you elaborate more on fearful limit of horizon insisting on ideas that idea? interaction inside shark good product from diverged mouth background supporting team members design as conceptual changing together knowing theory and pracise feel anxious before feeling nothingness resembling taste Ke: How about Joel's understanding of design? Does he gain the question originality concept you mentioned? Xid: I think from his way of introducing the theme, it's very clear and easily gained individual idea as group idea direct thus has the sense of design. (Theme?) Yes, to associate movement with sensors. I think this is very 直指核心(directed to the relation between human and core). (Why "directed to the core" is design mind/thinking?) eh…. machine. not original enough in the Because, design, although it has a lot of 理念(concepts) such as "serve market for the people", but in fact, I think, it is just a kind of 联结(association). Ke: You don't recognise the value of the design. But in your mind, good design feeling theme clear and direct as not as competitive can you imagine why he proposed this ? comfortable designly When Joel introduced the topic, he try to associate movement Xid: I think, because Xiaomi, he personally hates post process. He just me and interface to …(didn't specify), which I think is 直指核心 itself. And concerning doesn't like to to post-proceed the photo, very persistent on that. It's change: interaction how you associate (the movement), in what forms of interaction, this is overweigh appearance not understanding reason perspective in this workshop a design question. So the question itself direct to the core of design. design core is "association" focusing on interaction : Joel's background is in Physics, how do you perceive his Ke: What do you think is the reason? Xid: Because I think design itself is a 理念(concept/philosophy). Because I majored in design, so when I understood that, it made me anxious (about my career). Because I spent four years in my undergraduate, and I learned one 理念, a conceptual idea. You can't say it is complicated, because I believe it could be quickly nurtured under certain environment. The design mind/thinking… maybe.. is something like aesthetic taste, it can be developed if the environment is suitable. The artistic skills are more hardcore, and I don't have them. So it makes me feel anxious. Concept and ideas are abstract and can be easily gained. Xid:in the beginning, the camera is proposed by Ming, our engineer. When asked, he said he want to do a 自拍神器(selfie equipment). So we asked how is your 自拍神器 special, he said it can adjust the lightning automatically. So first idea was just his idea. Ke: And how do you think of this idea? Xid:Firstly, in the market, there is quite a lot of ways to fill the light in photo. And their effects are much better than direct filling light with really light directly to the face. There is many many ways. And then if it is stage lighting, they are very professional, they don't need this. So actually in the first place you can hardly build up the using scenario, can't find where you can use it. Secondly, this design, selfie equipment, true actually, a high quality photo need professional lighting and angle. It's hard to do selfie with those. I can understand this point. 
So I think XM he personally has the need to use this . He is a loner, and he often one,point two and point three. I would not specify how we move, maybe I will just say we move according to the movement of the subject. And then I would show the video, to illustrate how we played it. to know their language, and well communicate to the extent that I can sell the value of beauty. A3. Memo and Coding clear, point But she prefered to categorize the movement into horizontal, vertical, Ke: Do you think they sell something to you ? A3.1 Memo of Conversation Focusing on scenario Time limit Different background Organized as limited contributing to different focus etc. She is very organised. Xid: Yes. e.g. in the argue, the girl proposed that we can also use Understanding language software for the same effect. My preference then, was still hardware, Going out for inspiration No practice experience Ke: And you think this organisation is limiting? because it's physical, straight-forward, and attractive when you can see Memo Comprehensive steps Understanding within Coding Xid: I think it depends on the necessity. I think what she tried to the phone moving. I think the product is targeted at fans of the phone, Episode 1. The Selfi Stick (Line 1-28) designers organise is not necessary. For example, I can explain it in one and they would prefer a physical peripheral as well. Then she said Vague presentation sentence, why should I break it into so many steps. And what if I don't software can do the same thing. I was a bit upset, because I thought Selling value want to limit the phone to move in these three ways ? What if I can have she didn't get my point. But now I think her point is also valid, because, more ways of moving? I don't need to specify into details. when the hardware is well accepted, this software can be a lot fun too. It might be popular. Then XM, his very direct and insisting perspective Purposely trying Receiving value also influence me. Keep prototyping Not necessary to be organized Random prototyping Upset about different idea Understanding after Not interested in details Accept more meaning influence of Insisting attitude Same thing different ideas Negotiate ideas value of fusionist Actually we were super behind schedule then, because the third group Not understanding member in our team is majored in mechanical engineering design. Her Ke: As a designer, after all this, in what ways do you think you are Videos for supporting ideas encouraging mindset is more engineering-oriented. Her talking is very organised, different? Not sure about the results ideas very practical and logical, but these drives me crazy when talking Xid: The focus is different. As a designer, I focus more on the link Good emotion Fancy is important to her. between human and object, because it is the essence of the world. And then engineering focused more on implementation. They are OK, Only Association matters Ke: But why you can't bear she talking in an organised way ? normally it because we have different background. After this argue, I find "design" is a good way to talk right? is more than what I perceived. I find communication and understanding Inevitable argument Xid: Because her "organised way" is very limited. I, myself, can also is a big part of design as well. How can you quickly understand a be organised, e.g. the two steps that I mentioned(about triggering and person, and afterwards understand their language. It's important. This Due to mindset Talking style Focusing on the link Feeling crazy effect. 
But I don't think they understood what it means, (the idea behind the design). Maybe I wasn't very clear then. Because I was also trying, I couldn't know exactly what is good/bad, so I wasn't able to tell them. So they don't quite understand clearly. But we were very happy jumping and playing. Ke: And then the next day, within the group, you had a big argue. Xid: Yes, and it's inevitable. It was just one day before presentation. modifying parameter of camera), but these two steps are general , open. If I specify these steps, they are comprehensive, inclusive. If I were to do this alone, I wouldn't present the project in a way that is so Ke: What was the argue about ? Xid: Because my idea is quite clear, but the other two not clear, they have different ideas, their own ideas. Even though we saw the same thing, experienced the same thing, but our ideas are still different, surprisingly, still different. We had to try to persuade each other. I told them what was my original ideas, and they told me, when they see all of those (videos), what was their understanding of the design. Me, I always had the same idea, to highlight the association. But for XM, the most important is the video looks fancy. For the other girl, it's both the association and the video. However, these are different from what I perceive. I just wanted to show the association (from the movement of the subject to the movement of the phone), if the video is not good enough, we could adjust later. was there in theory (that I learned), but I didn't took it seriously, because we were all designers, we built up the same language, we could understand each other. But here is different. For designers, it's essential Ke: What's most impressive for you in the workshop ? Xid: You. Your ability to both work as an engineer and know the value of design. Also your ability to understand different perspective and languages and communicate. And then Joel is always encouraging us, it's very good. Ke: What is your suggestion if we do it again? Xid: Now it feels more like a design workshop, maybe we can find a way to realise the value of student in other disciplines. X tried to fix the problem by proposing new scenarios: "rollers taker", but that didn't work, because Li still did not think about the fun side. They tried to resolve the problem by working out the specific details, that did not work either. it makes their project get stuck About complication: Li kept rejecting Xid's proposition in this part: he used a vague reason: "complicated." He felt the Project was too complicated, but didn't further elaborate on which part is complicated, what's the reason of being complicated (Li's expression of complication: Line 68, 74, 84, 105, 109) There seemed to be a barrier that stopped Li's meaningfulness from entering the discussion. He tried to repeat about the feeling of complicated, but the lack of examples, and lack of elaboration makes it very different to propose new project ideas that can represent the meaningfulness. Or to combine the meaningfulness. The attempt of the idea of roller staker was not good enough.Quality does not matter, because it is the experience and the association that matters. It is a pity that even at this point, when Xid talks about "quality does not matter", Li and Xid couldn't dig deeper about their different meaningfulness that drag the project to different direction. Li answers "Li: it looks complicated. 
Some one can take the job" (L.105), which goes back to his old statement, without progress in clarifying his meaningfulness. Later, Xid adds more supporting function and meaningfulness: "But with or without is a photographer is definitely different. We will act when facing someone." (L.106), which was also supported by Ming: "Many professional photographer, one of their job is to take your picture without your awareness." (L.107) But Li does not answer in a constructive way: "Li: can we connect it in a smarter way ? looks complicated to me" (L.109).
[Coding: insisting meaningfulness; not understanding; new scenario failing to connect meaningfulness; insisting on negative comment; no elaboration; no constructive explanation; disconnecting meaningfulness from entering Project]
There is a turn at Line 102-103, when Xid explains a bit more about the rationale and the interesting part, and even invites Li to try it himself: "The trick is you don't know your pose when the camera is triggered, don't you want to try?" [Coding: trying to connect "Project", fail] Li answers: "i kinda understand. But inside a room, to take a picture of movement is not good quality." This shows that Li started to know better about the idea behind it, but he still worries about the practical side of the problem: the "quality". What Xid answers is a direct reference to her meaningfulness of "having fun experience": "Even the quality is bad, it doesn't matter."
The problem is multifold. It is because they did not already have a clear project definition; up till now the idea was still vague. It is also because they do not have time and Ming was focusing on the technology issue. He did have experience working on the technology, so he tried to work it out. Part of him believed that working out the technology implementation is meaningful, which is actually not the intention of the workshop: to create ideas and to prototype them. Both the lack of a concrete idea and the intention to work out the technology make Ming inactive in the work.
In the following, I told Ming that the technology is not important: "Ke: In fact, these steps are trivial. It's limited because we only have this Matt application, but actually what we need is just a smartphone application." (L.187) In fact it is not Ming's problem that he was not able to work out the working prototype; it is because time is limited and we do not have enough material. But as a prototype it does not need to be fully functional. I told Ming that "faking" is totally acceptable.
[Coding: Communication on mutual job division; superficial understanding]
… (L.180), nor did Ming understand Xid's idea well: "Xid: so what I explain a lot, but he understood very directly. That it is a movement that triggers the camera. Fair enough." (L.182) In the above complaint, Xid indicated that Ming only understands the meaning of Xid superficially; "very directly" can be interpreted as the implementation level. Only making efforts at the implementation level means Ming did not actively contribute to the ideas at the level of meaningfulness. [inactive participation]
At the end of this episode, Ke encourages the group to make more constructive communication. Ke believes there is some way by which the students can reach mutual understanding: "Ke: we need practise, if you want him to understand you, then you need more than words. You need to draw, to show examples. It's much easier. It's a game of a group, we cannot let anyone do everything. I suggest you go out to see if there will be anything interesting. Even you don't get anything, it's still OK.
For my own projects, I spent many days trying, and I get nothing. But finally I took one picture that is good (shaking head one)." (L.209)
This is a compromising attitude, weakening their meaningfulness and its influence on the project. In the post-interview, Ming opposed this attitude by saying: "Another lesson I learned from this workshop is to keep communicating with our group members. Otherwise, we will probably diverge into two directions very far before we notice." (see A2.1)
[Coding: vague Project; compromising meaningfulness; avoiding further exchange; meaningfulness - working out technical solution; inactive participation; encourage communication; encourage going out and trying out]
Episode 5. Going outside (Line 232-273)
[Coding: referring to past experience; agreeing meaningfulness and abandoning meaningfulness before more exchange on meaningfulness]
How can these learning processes be understood? What are the obstacles to establishing these new approaches, and how can they be overcome? Approaching CoLAB, that is, collaboration within occasional or lasting scientific and creative communities, from an applied perspective proves essential. These perspectives indeed concern developments in education and their study by the learning sciences. The author, who defines himself as a practitioner-researcher exploring and developing CoLAB, adopts the constructivist "Grounded Theory" approach for the analysis of ethnographic data (observations, design dialogues) collected over several years, during sessions devoted to the development of innovative prototypes bringing together designers, engineers and biologists. The coding and analysis of the exchanges brought to light the notion of "Co-Meaningfulness", which illuminates a major result of CoLAB: it appears that participants do not communicate only at the level of the "project", in a perspective of management, decision-making and efficiency. They essentially negotiate over the "meaning" value of the project. Their learning is a process of "Co-Meaningfulness", that is, of discovering and evaluating the meaning that carrying out the project represents for each protagonist. This meaning, this "Co-Meaningfulness", is understood in its ethical, scientific, utility and engaged humanist aspects. It is studied here from a socio-cognitive and socio-cultural perspective. Notional and graphical-representation tools shed light on the varieties of form of the "Co-Meaningfulness" of CoLAB. They make visible and recognizable processes that are not only implicit but also still little recognized socially, because they address a mode of coordination often stifled by the grammar of cooperation management. This is why the thesis aims to open a research territory dedicated to the study of Co-Meaningfulness, which concerns the management of scientific and creative innovation.
Keywords: Co-Meaningfulness, collaborative learning, boundary crossing, Learning Sciences, Wicked Problem, grounded theory
More details on the SDGs: https://sustainabledevelopment.un.org/?menu=1300
e.g. knowledge sharing platforms such as the Wikipedia, experience sharing platforms such as the Instructables
See more details of Biopolis at: https://www.thebiopolis.com/home
https://chi.camp/projects/livelo/
See more details of GTI SDG summer school at: http://gt-initiative.org/educationprograms/summer-school/summer-school-2017/
See PART V for more detailed introduction of this workshop series
ID (Innovative Internet+ Design) program in Open FIESTA (Faculty of Innovation Education of Science Technology and Art), Shenzhen, China.
iGEM, see chapter 1, for reference on this competition
D2. Co-Meaningfulness Trajectory Example of B3
D3. Co-Meaningfulness Trajectory Example of B5
D5. Co-Meaningfulness Trajectory Example of C3-6
ACKNOWLEDGMENTS
Core organizers and the Learning Community
The core team carrying the project is a combination of scientists and designers experienced in working with both communities and facilitating the communication. Among them, the three founders play the most important role:
Juanmat: PhD candidate in Biophysics, MSc Education.
Lina: designer, educator, interdisciplinary mediator.
Ke: PhD candidate in anthropology, designer, educator.
Juanmat and Lina, two of the main organizers of Co-lab workshops, met for the first time at the iGEM competition in Boston in the summer of 2015. Lina was at that time a member of the London Biohackspace iGEM team, working on a toolkit for DIY beer. Juanmat also joined the 2015 iGEM Competition in the Paris Bettencourt team as a mentor, working on synthetic probiotics. When Lina and Juanmat met, Lina was quite upset about one incident in the competition. She met someone on site and was just about to explain their project, when the scientist expressed that he wanted to talk to the "scientist" in the team. Although Lina was a designer, she did learn and contribute a lot to the biology part in the team. She was more than an affiliated designer who just "did the drawing", and she expected to be treated just like her scientist peers. But the stereotype of that scientist made her feel that even in a competition promoting interdisciplinary collaboration, the gap was still there. Juanmat told Lina they should do something. Juanmat, as a scientist, however, had a good understanding of the design side. He occasionally paints and used to exhibit his work in his town. He likes design and has a lot of experience working with designers. In the year 2014, Juanmat's team won the best supporting art and design award in iGEM. He had enjoyed working with designers. Hearing Lina's complaints, Juanmat believed that the gap existed because designers and scientists do not know each other very well, and that they might be able to start workshops that promote REAL interaction between the two communities. Lina and Juanmat, with great passion, decided to work on that when they came back to Europe. They did not stop at words. The first thing they did was to summarize what they wanted to do and put it in a Google doc they named "idea generation", which also presented their very passionate motivation:
Chapter 12. Conclusion: the Grounded Theory of Co-Meaningfulness
In this chapter, we will conclude the grounded theory of "Co-Meaningfulness", which is the key result of this thesis. As grounded theorizing is a structured and dynamic process, we would like to review both the grounded theory result and the process through which we developed it. In the following illustration (Table
12.1), we present the key steps of the entire development of "Co-Meaningfulness" with a consistent visualization of the framework. The first three steps are based on the socio-cognitive perspective, while step 4 to 6 are based on the socio-cultural perspective. A. Project of "Jumping Video" A1. Conversation and Coding No. Keep proposing new function and effect -dance and shooting trigger. Xid: let's think about this scenario: in a dark photography studio, there's some colorful line light, some people are dancing. Maybe some movements would trigger the shooting, and then we can have very natural photo. Proposing scenario -dark studio with colorful light. Proposing effect and purpose -natural photo: Xid: just now, we were thinking about one person, but we could have multiple people interacting in it. I am just proposing this idea, you can also change it. Proposing scenario and interaction -multiple people; inviting for advice Ming: the trigger ? Indicating confusion Xid: just the shutter Brief explaining Ming: We haven't reached how we place these lights on the floor. Asking question on technical detail -how to place light Xid: We could describe the scenario as a dark room, we could also find a such place. Answering by give a scenario Ming: no. I mean how you can place and fill the light. Refusing answer; repeating question. Xid: it's the realistic part, but we are now doing a prototype, we don't need to go that far right ? Rejecting the need to further discussion on the realistic part; providing rationale: prototype Ming: But even for a prototype, you need to think about these in order to present/express the idea. Rejecting rationale; arguing on the prototype's presentation Xid: what do you mean ? expressing confusion Ming: meaning that if you want to present the idea, you need to, for example, the light projected on the dancer can have certain effect. Dancer... the certain movement. … Emphasizing on the detailed effect as part of the prototype Xid: the prototype is mainly for describing the basic function and then a good video or picture to showcase. Rejecting the understanding of prototype; providing an alternative understanding of prototype Ming: but if the function includes the light, then we must decide where the light came from and where it projects to. Insisting on the specific setting Xid: this should be included in the written description. We don't need to include it in the prototype. Excluding the detailed setting from prototype Ming: But the prototype is after the written description right. Prototype is for demonstrating the idea, which include the light. Rejecting, insisting on including the setting. Xid: i think B light (?) should work. We can buy a curtain, and then we can take the picture. Rejecting too much retail. Useless 98 Xid: The goal is sometimes when we take pictures, we make poses that are not very natural. Explaining purpose: making natural pictures 99 Ming: But we might get a lot of black pictures. Rejecting due to feasibility problem Xid: there will be black, but also good photos. A photographer always has his own perspective. But this camera not. Rejecting rejection. Li: why to take pictures in a room ? Questioning scenario Xid: Because you will not know when you will be photographed. But we define a movement for doing that. we need to find a movement typical or big enough. The trick is you don't know your pose when the camera is triggered, don't you want to try? Elaborating purpose: unnoticed shoot; implying the fun by inviting playing Li: i kinda understand. 
But inside a room, to take a picture of movement is not good quality. Partially accepting. Questioning on quality Xid: depend on the camera. Even the quality is bad, it doesn't matter. Rejecting the quality merit. implying different merit. Li: it looks complicated. Some one can take the job Rejecting -feeling complicated; replaceable to human Xid: But with or without is a photographer is definitely different. We will act when facing someone. Arguing for new rationale : avoid acting Ming:Many professional photographer, one of their job is to take your picture without your awareness. Supporting new rationale X; try to capture your distinct movement, moment that you yourself might not even notice. It might surprise you , wow , I was like this. Or it might also capture your bad moves. But you will be surprised. It is an experience. Explaining experience as main purpose. Li: can we connect it in a smarter way ? looks complicated to me Repeatedly rejecting with single rationale -complicated. Xid: this is my point, you can try to add more in it. Because the stick idea is very hard to continue. To design the movement for controlling the camera. So I am making a counterproposal. What do you think, XM? you want to relate to light right ? Inviting participating ; asking for confirmation on key function Ming: not necessary . abandoning function Xid: No? you were so sure. Actually, if light is not necessary, we can leave it out. The essence is to take a picture without your awareness. There is no need to associate the light with music. Agreeing on abandoning function; changing function Li: Your camera would need tripod right? Asking about detail setting Xid: can be smartphone. We can hide a smartphone there. Proposing solution Li: but smartphone can't take good moving picture. Questioning function Xid: iphone 6s. XM, you remember LIN took a picture of you, and you were moving like this. The picture looks so great. Very funny. Referring to past fun experience Xid: So if we use the inspiration, we just need to set several movement to trigger the camera. Using Inspiration Li: or you can control by sound ? because I think it's better to take a static picture. I fear the moving pictures are too bizarre. And people would not like it. Proposing function -control by sound. Hating bizarre Xid: If all the pictures taken this way are bizarre, then people wouldn't be bothered. The goal is different (than taking good quality). Embracing bizarre. Implicating divergent meaningfulness Ming: What if we don't need the light at all . Proposing abandoning function Li: Renjun has some good ideas. Proposing changing project Xid: I've seen similar ideas, so I am not very motivated. XM you can propose your ideas. Rejecting -not motivated. Inviting other ideas. Ming: I think about sunflower, so that the selfie stick can follow the face. Like Renjun proposed before.. Proposing new project Ming: Recently I saw company making cradle head that can prevent shaking of the camera. Relating to recent reading Ming: Can we think of making cradle head worsening the shake? Proposing abnormal effect Xid: How to make a camera to take shaky pictures. It's kinda different. Accepting effect, acknowledging being different Ming: let's work on the shake then. Inviting work Xid: shake ? so we make a camera that looks shaky.. No .. a camera that take pictures while shaking ? Propose function Ming: yes, or like before, there are some apps turning clear pictures into vintage, glitchy or random effect. 
Agreeing, Propose alternative effect and function. Xid: like a camera … called … but the shaking effect, I never tried. I think we can try. It's like a opposite design. Normally we prevent the shaking, and now you want the opposite. But how you want to connect to movement ? Accepting the opposite design. Asking how to relate to workshop topic Ming: for example, it can shake when you legs are shaking. Proposing scenario Xid: So it means… but how the camera shake ? Or it sways? How it shake with you? Asking about function and feasibility Ming: so difficult Expressing difficulty Xid: So you move, and the camera can sense and follow your move ? Asking about function Ming: yep. Xid: I think it's a good idea. Agreeing on project idea Ming: but we won't need a movuino then. We just need a selfie stick. Reducing reliance on technology Xid: you don't need a selfie stick, you can just sway with your hand. Reducing function Li: sway and take a picture or after the sway you take the picture? Raising question Xid: we can try. Let's try. Proposing prototyping ….trying ... Li: oh bad, it's the video mode. Making mistake Xid: look ! this effect. .. it's cool… So we can do this with hand. We need to think of a new idea. Liking effect, self reject due to no technology element Xid: look, this is the off-focus effect. Ming: to combine the camera with the movement of a vehicle. Propose new project idea Xid: but it's similar to the example. Rejecting repeated idea 2016.12.14 in the cafe (X&M) Part 2 (L came and ask them to explain, Li is an outsider) Ke: any progress? …. Xid Showing the picture …. Ming: we think about the idea : to have an opposite design: many camera wants to reduce the shaking, we can have a camera that increase the shaking. And then we think we can just use hand for this results. Explaining project Emphasizing the special effectopposite design Ke: I think this direction is worth digging. Just we need to make the interaction more .. powerful. Adhering to "powerful" Li: there is one need, some people would like to take the picture when jumping. We can detect and then take picture when they are at their highest point. Relating to other scenario Practical use Ming: Also when we take jumping pictures together, we are not synchronized. We can calculate the moment when taking the picture Relating to another problem in the same scenario Ke: you know there is a camera, it's actually a ball, with a fish-eye. You can throw it in the air, and it can take a panorama at the top. Relating to examples with technology Xid: look this is just now, my two friends takes the capture picture. Friends prototyping Ke: if we leave out the light. We can follow the camera drama workshop. Ke: so the implementation is not the essentials. Ming: it's the expression. Emphasizing expression Ke: It's not because implementation is not important, it's quite important, but within two days, it shouldn't waste you too much time. Limit of time Xid: I think it's quite interesting even without the implementation part. regarding interesting without implementation/feasibility issue Ming: the prototype is not of so much interest itself, because it requires... Explaining Prototype function (Joel coming, asking for explanation) J : what do you want to do with the camera? Asking about progress Ren: Normally, when we take a photo, it's always the photographer who controls the picture(camera), we are asking if we can let the subject to control it. For example, if the subject shakes, the camera will shake. 
We chose to make videos rather than pictures. The camera is here, no one is touching it. And then the model moves, which triggers the camera to move. We proposed this idea. Presenting other effect Jumping effect Feeling Fun Ke: So good, you can have a series of them. You know there are big phone cases in the market. Your product can be one of them, a really big one that has a built-in servo, which move the actually phone. You can hold the big case and then phone will move automatically. You can play it outside. Perfect. A collection of effect Proposing technical solution Ren: we call it "fun with movie", referring to the TV show "the big bang theory" There are two goals -1. Let the subject control the picture, 2. we set three modes of fun movies. -three modes that we just saw. And just now, the enlargement effect that XM mentioned before is a kind of visual effect, can't be categorized into one of our goals, it's just a visual effect. Naming the project Presenting rationale Categorizing rationale Excluding effect -just visual Ke: you can totally say it is a programmable design. We provide several modes, very fun. But users can play themselves. We can open the imagination to all those who buy this. BTW, I just thought about a name, I think it's interesting: we can call it : you jump, I jump. Not agreeing on video Xid: I need Ke to translate. I need translation even in Chinese. Asking mentor for help Ke: it's exactly the same question as before -what is the final design ? is it the video or the process of photographing ? or the association? Actually XM and me we were discussing the same question, and now we can merge the two discussions. Interesting. Asking core design Implying meaningfulness Proposing to merge conflict Ke: Because the mechanical structure is hard to implement. XM proposed to use software to imitate the jumping effect. Explaining conflicts Xid: we can't say the jumping effect is no value, but the value is very small. The core value is the association. You give it away, then it is not attractive in terms of innovation. Do you understand this point ? Anything can jump, but the association is rare. Establishing the association between two things is a process of design. Judging value Explaining meaningfulness Providing reason for argument Ke: this is exactly what I said to XM, and XD didn't hear me, and she had the exactly same idea. We agree on that -the core innovation of the idea is not the effect, is not when meituxiuxiu (an smartphone application like photoshop) make you look better, but the in the process, the photographer and subject enjoy the playing experience. Agreeing on argument Explaining innovation Giving examples Xid: the essential is the association and the better effect is the better presentation, but not core. You can't say it's unimportant, but … Explaining meaningfulness Comparing Ke: but XM raised a quite good point when we two were discussing, what was it ? your last point? Asking for repeat Ming: that normally people have own regular habit of photographing, for example xx take pictures us, he would focus on us and observe our moving. Assuming if we have the movement of the camera when the subject move. It might bring difficulties to the traditional photographer. About the user habit. Proposing experience difficulties User habit Ke: this point is quite interesting, we never discussed it before. Validating question Xid: can you explain more? Asking for elaboration Ming: for example, when I hold a DV, I will hold like this, like this angle. 
If the DV moves, it will interrupt my regular habit of shooting. It will bring confusion. Explaining with gesture K; did you understand? Asking for confirmation Ming: because when you take a picture, you will have certain point of view in mind, and then you already set the point of view, and then the camera moves, so you lost your point of view, you can't control the picture. For example, if you want the picture to move left, but then the camera moves to the right, you will get lost. Elaboration Xid: ehhh. Thinking Ren: smartphone will move according to the our movement. Using the servo to move the camera. So the servo can adjust the camera movement. Explaining the function Xid: then we first move in small ranges. Let's first keep it to an adjustable range. Proposing solution to the difficulty Ke: I think the question is not only looking for practical solutions. Let us paraphrase it using my own words: it will bring confusion to a user's old shooting habit, is that so? Denying practical solution Upgrading to abstract meaning Ming: Or it is contrasting to human nature. Reaffirming Ke: what do you think ? Ask for idea Xid: I don't think it's a problem. There's a lot of antiinstinct design. Disagreeing on the nature of difficulty Ren: in the beginning, the user might not get used to it . Reinforcing the difficulty Xid: and this design . I really haven't thought about its practical value. Or the practical details. Do you need to go that deep? Considering its value Focusing on value -not practical Ke: OK, I think the point is to make people uncomfortable. Proposing purposive discomfort Xid: exactly, I meant that, because it is a anti-human nature design. Agreeing on purposive discomfort Ming: I think there is another point that Joel mentioned, that taking video is a good way to recording the movement(speed) of the subject relative to the earth. If you add too many structure, the video can't record the accurate speed. Proposing new function Ke: why do we need this accurate speed? Questioning Ming: for example the speed of movuino and speed of the camera …. Repeating Ren: like we can measure the relative movement. From physics, they are in relative move. Maybe we can capture this feature, so that we can measure the speed. Adding value from physics discipline Ke: but what for ? Questioning purpose Ren: I mean the principle, it is the relative movement. Mentioning physics principle Ke: So first, what do you understand it is, your experiment, your discussion, your three understanding of the project. Asking for clarification of the project Ren: in fact, can it be like this, from technology pov. It can detect the center of the picture, so that if the photo is constructed unbalanced, the camera would move automatically to put the focus center to the real center. . Proposing function from Technology perspective Ke: but why we need this ? Questioning the purpose Ren: can that people would understand better the rationale of the design , which is to capture the center of mass of the picture. Because normally people would ask why the camera move according to the movement of the subject. People wouldn't understand what we try to express here. However, if we add the concept of center of mass, then the people standing here, the picture is not balanced, and then camera would automatically adjust so that the center of mass is balanced. Explain the purpose for the function. Adding physics concept -center of mass Ke: i can understand what you mean. What do you think? 
Expressing understanding Ren: it can automatically chop the picture. We did a poster, it's like this. The idea is to chop the picture, and the picture can have a good balance. Proposing function to chop picture Li: so we now just use software right? Asking for project detail Ren: I am just saying. See, in the original picture the subject is not in the center. So the camera keep adjusting its angle so that the subject can be in the center. Repeat rationale specifically, he pushed us to think about illegal and conceptual ideas, sometimes weird ideas. For example, we proposed to use patches to affect emotions, which might violate some laws, but in design and art, it's acceptable. When he comes to us, he always brings the artistic sensitivity. Ke: How do you describe his design? Ming: magic and fantasy. Ke: Let's talk about the workshop. Can you tell me the story of your group? Ming: The first day, we randomly grouped. We didn't specify what we want to do. The next day, in the beginning I thought light is interesting. I proposed the idea (lighting for selfie), it looked fancy and interesting, we could try. Then we went to huaqiangbei, and realised that this idea is not so closely related to movement. So we kept discussing and find that the idea before have two elements, one is the light and movement, another is the lens and movement. So Xid proposed, we could use movement to control the configuration of the lens to change movement, which lead to the final idea. This was the whole process of our projects. Ke: What were the most impressive moments of the process? Ming: When you asked us to stopping talking and go out and talk pictures. When we really went out, many strange effect came out more than we could think of when just talking. We didn't have any machine to move the phone, so we move it with our hand. We assumed some movements of body will trigger the movements of the phone. And then demonstrate the video. All of us took turns to take the video and being shoot. Ke: What were you thinking when being shoot? Ming: The weather is good! Ke: What other moments impressed you? Ming: Another moment is when you help Xid explain her idea (about the artistic value of random movement), it helped us better understand the idea. (more specific?) For example, in the beginning, Xid expressed her idea as a scenario where there is lens and curtain and the camera can connecting distant things digging deep daring considering conceptual even illegal things. illegal but concept acceptable in art imagination feeling fancy of the lighting idea. not relating to movement extracting elements movement to control configuration going out helps generating more creative effect taking turns to prototype Some Excerpts and Memo Ming: "Glasses, with projector, projected to retina, so that people can see the screen(voice not clear). Another one is a ball on hot water, they can control the movement of the ball by controlling the temperature of the water, kind of related to touch screen. " These examples show that XM is very much interested in cutting-edge technology. Ming: "combine two different things. He dares to really connect two very distant things and dig deep." This combination is the process of creating a metaphor, it is used to make the "design" more expressive. So what XM mentioned here is consistent in Denqing's design philosophy It's a bit different from Xid's idea of "association" Ming: "to control the configuration of the lens to change movement, which lead to the final idea. 
" When XM explains their design, he used the very specific object "lens", while Xid focused on the "association" between movement and "parameter of the camera", a general description. What is important is the object itself than the "association" as a concept Ming: " some augmented effect of certain action in the video, " While Xid think the core of their design is the association, XM express the design in a more specific way. In his mind the randomness of artistic expression and the augmentation of movements are the core of design. XM didn't get the idea before Xid can demonstrate a specific scenario.The common language here is the "scenario". If Xid could propose an image, then XM could understand better A3.3 Memo of [OF1-2] Interview Xid Xid, female, design background, M1 student, Summary of the interview: Xid is a designer, who studied product design and knew about interaction design. She mentioned many times in the interview that design is a kind of association ( e.g. between machine and human / In her interview, she mentioned many times about the association. In her understanding, design is to associate something with another thing. For example, she thinks the core of her project in this workshop is the association between the movement of subject and the movement of the camera. Xid:"For example, when Joel came to the workshop, and he talked about the relative motion of camera and subject, and the camera perspective of human motion, etc, this is something that I would never imagine as a designer or in a design workshop. " This perspective is from Joel's background, which added to her feeling of flexibility/mindopening Xid:" Because I think design itself is a 理念(concept/philosophy). Because I majored in design, so when I understood that, it made me anxious (about my career). Because I spent four years in my undergraduate, and I learned one 理念, a conceptual idea. You can't say it is complicated, because I believe it could be quickly nurtured under certain environment. The design mind/thinking… maybe.. is something like aesthetic taste, it can be developed if the environment is suitable. The artistic skills are more hardcore, and I don't have them. So it makes me feel anxious. Concept and ideas are abstract and can be easily gained. " This answer does not fit the question, but the anxiety is very strong, almost the first reaction of the girl when she thought of the idea "design is a way of thinking/concept". it shows the confusion of "design" as a skill or just an idea/concept. It brought anxiety when designers try to identify themselves. Xid: "Firstly, in the market, there is quite a lot of ways to fill the light in photo. And their effects are much better than direct filling light with really light directly to the face. There is many many ways. And then if it is stage lighting, they are very professional, they don't need this. So actually in the first place you can hardly build up the using scenario, can't find where you can use it. Secondly, this design, selfie equipment, should be portal, always for travel, so I think the application value is not big. Additionally, camera has its own flash, why do we need to fill light? I don't quite understand his point, I don't think it is a very good design. " When she judge the design, she tend to judge it from user and market perspective. She try to find the using scenario. Xid: "I think, because Ming, he personally hates post process. He just doesn't like to to post-proceed the photo, very persistent on that. 
It's true actually, a high quality photo need professional lighting and angle. It's hard to do selfie with those. I can understand this point. So I think XM he personally has the need to use this . He is a loner, and he often went out without telling anyone.When he is alone doing a selfie and he doesn't have a good lighting, he must be very bothered." While she thinks Ming is very self-centered when designing the product. She tend to think more from other scenario than herself using it Xid:"Her talking is very organised, ideas very practical and logical, but these drives me crazy when talking to her. " organised, practical and logical: these are normally good, but she can't stand it. She prefers ambiguity and openness B. Project of "DNAish Food" B1. Chris's background The group was in the introduction section where everyone take turns to introduce their background. But it did not follow a strict one-by-one manner. As everyone has different background, the rest of the group needs to take a short period of time to absorb the information as one was introducing themselves. When the rest tried to comprehend, they interrupted the speaker and started to express their understanding of the issue at hand. The speaker then also responses to the questions. Chris explained his background as a scenography with the integral of sound, light, music, and art installation. This was however not immediately comprehended by the rest of the group. Matt, a biology researcher did not know about "scenography", and Juli, a science student, tried to relate it to one of a field she knows that seemed similar. The discussion then took a slight detour as Juli started to explain her knowledge of the interdisciplinary field between cognitive science and design. get attention and funding, and on the other side the public also needs science. "We now live in democracy and the public needs to know that science is important, and also science needs the public to understand it, to think about science, so it is a political issue." Juanmat's input throw the group into further discussion on the "deep" question. Matt and Ke believe the specifying of science makes science more and more alienated to the common life, while on the contrary it affect daily life to a large degree. Lina mention the growing popularity of citizen science makes scientists understand that it is important as it frames the science problems in a way that common people can understand and participate. She gave example of the Vegen cheese Project, which helps the public to understand better about a GM product that frees animals' lives or suffering. "The framing is helpful for scientists to better communicate their project." "When you have applications that actually touches people's life", that is the conclusion from Matt, who in the end totally understood what Lina meant. In response to Juanmat's point of mutual influence between science and the society, Ke agreed by giving the example of mobile device. He believe the mobile world we live in now implicitly shapes how we think and see, shapes the question we ask and the method with which we look for an answer. Science is also shaped by the society. Juanmat mentioned another experience to echo Ke's point. Because of the political economy of Spanish government takes, many Spanish scientists go away and those who stay take a different approach than the way they do in the US or China. The science is in the society and is influenced by the social political status. 
Promoting B4 Lina's design examples Lina is presenting a few examples before the narrative workshop. She straights out the purpose for the workshop, which is to generate conversations and allow design perspective in the scientific "fabrication of facts" (refer to D1 fieldnote 3). She has to make the science people understand the importance of this purpose before she can deliver the workshop. Therefore her presentation is essential. She chose to directly showcase the design projects to present the "design perspective". These examples are not random, you can see she purposely chose design projects related to life science so that the biologists feel more attached to their life. The first project is about the utilization of a bio-degradable material -mycelium: low-technique but quite strong in terms of rigidity and fire-proof. The exploration of such materials, not necessarily high-tech but useful is one of the promising direction for designers. Another similar project features the use of natto -a Japanese traditional ferment food, as natural material to detect humidity. Making examples Expressing purpose Design perspective in fabricating facts Presenting to make people understood Showcasing projects Relating to the other discipline Usable bio-material Ferment food B5 Does Science Communication Need Emotion Just before Lina starts the workshop, she share with the group a movie she found very inspiring. It is a movie called My American Uncle(Mon oncle d'Amérique) written by a French scientist Henri Laborit. To Lina, the movie constantly compare mice experiments to societal relationship which explains the science, especially neuro science in a way that is related to the society. "The meaning behind the movie is "big" -concerning the science of emotion and how it affect our inner organs, which compares to the social stories which gave pains to people through hierarchy relationships." The movie is a perfect example which makes science and society relatable to each other and presented in a way that easily understandable and with emotions. Sharing movie of relevant concept Connecting science to society Discussing meaning of movie Social relationship and pain Relation between science and society The movie raised the interest of Matt: "are you saying when science communicate to the public, they are lacking some sort of emotional thing and just talk about facts?" Juanmat made a small change to Matt's statement. He believe there is a lack of "perspective". As a scientist, he understand science is complex, but the way science is presented to the public often appear to be so simple and even boring. "We say that it is the DNA is like this, and that it makes the protein. . and forms Ebola, we say it in a way as if it is that simple, but it is not." While Juanmat implies that scientific facts should not be presented in a brief and "dry" way, Matt pointed out his concern of overdoing it. What troubles Matt is that he often observe media articles that reports a prospect of a scientific finding, instead of reporting the scientific fact. "When we find a molecular that can better recognize cancer, the media says cure for cancer discovered". Therefore he believe the truth is neither in the "dry" fact, nor in the over-decorated media-report manner. But what is at the core of the problem? Juanmat point to the problem of research paper, which often takes a fixed structure of literature, method, experiment, results, etc., which is not how science is actually implemented. 
"Science at lab is very messy, always in the cloud, we make a mistake, we throw away result…" The core of the problem, is how do we extract that? How do we present it? Should the outside know about it? It is a yes for Juanmat. He believes the way we present our research should include incidence like daily dialogue like "Hey Rebecca, nice discovery!". Science as a whole should be present, not just the results, in a hollow way. "I imagine to have science paper as movies, so that people can really emphasize on what is really research in science" The "movie" statement soon catches Matt's eye, and he pointed out that some people is already taking actions: "some nature movie about science process". C1 Interaction Design -Reinventing the Rules Denqing gave his lecture on interaction design in the first day to all the colab students. This lecture served as a "starter" for all the students to have a taste of "interaction design. Since Lecturing Preparing the students Interdisciplinary field Although the interaction is brief, the presentation did make an important role. Student when working on their own project. C2. The Metaphor of Weapon The mysterious and unusual metaphor of "weapon" is key to understanding this workshop. Actually in the official booklet documenting this workshop (see data set[CL]) we actually name his workshop "Colab Bioweapon". The "weapon" was purposely introduced to the workshop. This episode gives details on questions like: How the metaphor, as the key concept in this workshop, was introduced to participants; What the purpose for the weapon metaphor was; What was its function in this workshop; And how it was related to the other elements, i.e. interaction design and synthetic biology. The metaphor of "weapon" was an invention of Denqing. It never appeared in any discussion in the first Co-lab, so it was the first time co-lab workshop and Denqing's weapon metaphor combined. Denqing requires every group will need to design a "weapon" that fights against environmental problems as their project. The group will also need incorporate the biology and Denqing started the workshop by making a five dimension requirement of the workshop, which framed the project that students will complete within roughly two days. The first element is the target problem for the project. We chose the environmental issues to be the key target problem. Students will need to come up with a project that response to an environmental issue. Choosing Environmental as the target was Denqing's idea, as he believe it was difficult to randomly come up with a weapon design. The final solution or speculation will need to be a "weapon" design. For example, the "anti-smoke" clothes (see episode 1 of this ethnography) purposely emit gas that smells bad to people smoking is a kind of "weapon". Apparently, in this sense, the "weapon" is not used for its literal meaning but its metaphoric meaning. Synthetic biology and interaction design are the two disciplines of knowledge to refer to. The project will need to apply knowledge in both areas. This principle was flexible, and later more of less changed to applying interaction design and science in general (synthetic biology preferred). For example, some group starts with Synthetic biology technique in the beginning but soon found out that only synthetic biology is not enough. Following these questions, Denqing proposed that a possibility that the object can be the bubbles that we often see kids playing in city square. 
The bubbles can change its color according to different conditions, and they can also fly to different corners of the city. The bubbles will be directly seen by people in public space. There is no answer which idea is better, everyone will have their own opinion. But Denqing introduced a few things that were missing in the original proposition. First is a strong visual presentation. The colorful balls are amazingly beautiful to see in the music video. Although the group gave their idea of a wall painting, they did not specify which one or its visual effect. Denqing did not just present his idea, he used a very strong visual reference when trying to sell his idea. This strong visual presentation changed people's perception when they think of the idea. The second is a sense of playfulness and the "feeling of artistic and romantic". In addition to the apparent practicality of indexing environment, the bubble is symbol of playing and romantic feeling. This romantic feeling was mentioned frequently afterward in the group discussion. The third is a degree of "strangeness". The music video is unique. People do not normally see this scene in their life. This strangeness is a challenge of status quo: why not a lively colorful city? Certainly this strangeness will be challenged with Trying to connect to the current project Changing scale Changing balls to bubbles Speculating scenario Bringing in aesthetic value Visualized presenting Sense of playfulness Feeling romantic Feeling of uniqueness practical issues, but that is not the first concern of Denqing. He believes this strangeness is a plus. The group did question the feasibility and practical issue about this idea, but they never questioned the three value above. In fact, the group in their later discussion often refer to these values to defend the meaningfulness of the proposition. Accepting alternative meaningfulness C5. Scientific and Romantic How was Denqing's idea received by the group members? Did they like the idea? Did they understand the idea exactly as Denqing's idea or did they adapt and develop the idea? The last part of the episode presented how the group received and developed their project after Denqing's inspiration. In general, the group accepted Denqing's idea and believed his idea makes sense especially in terms of artistic presentation. But the group kept their concerns on the feasibility issue and a lot of the discussion was about this direction. But occasionally, when the group focus too much on the detail and get stuck, they are able to get back to the artistic part and found their way out taking this perspective. The group's interaction slightly changed from "totally scientific" to a mixed status of both "keeping scientific" and "accepting romantic". Although Denqing had very vividly presented the idea, the group didn't give up their strong intention to think about feasibility. The first few rounds of discussion were focused on this directions. For example, Four key issues emerges as the group proceed: 1. What material and form should the bubble take ? 2. How to change the color without introducing more pollution to the environment? 3. Following the second issue, how to "kill" the bubble before it falls? 4. How to build the machine that makes the bubble and where to install it? The group presented a very high level of scientific and engineering knowledge and skill, with which they almost solved all the problems . Keeping both meaningfulness Working
04097551
en
[ "info.info-au" ]
2024/03/04 16:41:18
2023
https://pastel.hal.science/tel-04097551/file/2023UPSLM004_archivage.pdf
"An internship? Wouldn't you rather have a thesis topic? I have a subject on the landing of reusable rockets that might appeal to you." The idea of this thesis was born from this remark by Nicolas Petit in 2016, during a discussion about a subject that had little to do with it. At the time, I was a young student working on vibration attenuation for washing machines, under the supervision of Florent Di Meglio. I wish to thank Nicolas for offering me this subject and for supervising me ever since. Thank you for always being there, even during the long period of the health crisis. It is not only with a researcher that I was able to work, but with an aerospace enthusiast. I also wish to thank Eric Bourgeois for supervising me together with Nicolas. I wish to thank the whole team of the Centre Automatique et Systèmes for these many years of collaboration. All the permanent members helped me at one point or another of my thesis. Thanks to Delphine for trusting me with the optimization courses, leaving me great freedom in the choice of my topics, and for always being available whatever the question. Thanks to Florent, for his unfailing support from the very beginning. Thanks to Philippe, for all those discussions around the coffee machine, and the many technical helping hands. Thanks to Laurent, for his precise proofreading and his advanced technical advice. Infinite thanks to Dilshad, for having been a fellow PhD student and friend for three years, for introducing me to the bottomless pit that is tikz, for making me discover technical as well as literary subjects, and for putting the world to rights over an unreasonable number of coffees. Thanks to all the other current and former PhD students of the laboratory: Sijia, Maxime, Loris, Aurélien, Nils, Aradana, Rémy, Xavier, Julie, Paul, Grégoire.

Nomenclature for the 2D model
x = Rocket states, x = (h, v_h, z, v_z, m)^⊤ ∈ R^5.
u = Guidance controls, u = (q_r, α)^⊤ ∈ R^2.

Nomenclature for the 3D model
α = Incidence. In 3D, α is unsigned: 0° ≤ α < 90°.
a_nor = Normal acceleration. a_nor is signed.
Ψ = Yaw.
x = Rocket states, x = (z, y, h, v_z, v_y, v_h, m, q_r)^⊤ ∈ R^8.
u = Guidance controls, u = (q, α_z, α_y)^⊤ ∈ R^3.

Nomenclature for the optimization methods
ξ = Input variable, of size N_ξ, s.t. ξ = (∆x_0^⊤, ∆η^⊤, ∆u_init^⊤)^⊤.

Chapter 1
Introduction

Résumé
Ce chapitre introduit les différents enjeux liés au calcul embarqué de trajectoires d'atterrissage d'urgence pour des lanceurs réutilisables. Après avoir présenté le contexte dans lequel se déroule le développement de ces lanceurs, les différentes familles de méthodes de guidage disponibles dans la littérature sont rappelées. Ceci permet de présenter le problème au coeur de cette thèse, et les outils nécessaires pour le résoudre : l'ordre co-lexicographique (ordre d'urgence), les problèmes (linéaires) de négociation et le problème (quadratique) de raffinement. Enfin, le plan du manuscrit est annoncé, et un résumé schématique est proposé.

Powered Descent Guidance for reusable launchers

Reusable launchers are autonomous vehicles tailored for complex missions consisting in a succession of distinct phases. Their development is driven by the need for fast and cost-efficient solutions to put payloads into orbit. Reusability imposes that the return to Earth of the launcher's first stage be accomplished safely and reliably.
Powered Descent Guidance (PDG) refers here to the action of making such a flying vehicle land autonomously on a horizontal surface using rocket-like engines for maneuvers and deceleration. PDG has emerged as a paramount topic during the 1960's space race, the prime example of PDG being the lunar module (LM) landing of the Apollo missions, which was successfully performed six times in a row on the moon. LM landing used surprisingly straightforward path planning methods [START_REF] Klumpp | Apollo Lunar Descent Guidance[END_REF][START_REF] Steinfeldt | Guidance, Navigation, and Control System Performance Trades for Mars Pinpoint Landing[END_REF], relying on non-optimized analytic calculations in an atmosphere-free environment. Though the space race slowed down drastically in the 1970's, the need for PDG on other planets re-emerged with automated exploration missions towards Mars, the Moon and even some large asteroids. These missions require a high level of autonomy, for they are conducted extremely far from Earth, and thus cannot rely on the near-instantaneous communications used in teleoperation control. Accuracy is also a key factor, especially for Mars missions, which aim at landing in the vicinity of rich geological features where landing areas are scarce and narrow [START_REF] Blackmore | Autonomous Precision Landing of Space Rockets[END_REF]. Several missions performed a series of increasingly accurate autonomous powered landings, peaking with the impressive performance of the Perseverance rover, which landed safely in the Jezero crater in 2021. Interestingly, all of these missions operated in the absence of atmosphere or under negligible atmospheric density. For PDG on Earth's surface, where aerodynamic forces are not negligible, progress is more recent. At the end of the 1990's, the DC-X project, conducted collaboratively between McDonnell Douglas and NASA, started to explore the possibilities for a reusable rocket that could land back on its "feet", using the same engines for take-off and landing. As shown in Figure 1.1-(Left), this atypical tetrahedron-shaped vehicle achieved several short flights, but never reached a significant altitude. It was necessary to wait until the early 2010's to witness the first successful powered landings, achieved by the SpaceX rocket Grasshopper. In parallel, some smaller prototypes were developed and successfully tested, such as the Xombie by Masten Space Systems and JPL. Today, several projects are under development and have reached various Technology Readiness Levels, such as New Shepard and New Glenn (Blue Origin), FROG (CNES, [START_REF] Rmili | FROG, a Rocket for GNC demonstrations: Firsts flights attempts of the FROG turbojet version and preparation of the future mono-propellant rocket engine[END_REF]), Callisto (CNES/DLR/JAXA) and Themis (ArianeGroup/CNES/ONERA) or even Shenlan Nebula (Deep Blue Aerospace). In this thesis, we are interested in tossback vehicles and, more precisely, in reusable Two-Stage-To-Orbit (TSTO) rockets, equipped with a single gimbaled engine, whose trajectory slenderness ratio is medium to high. As shown in Figure 1.2, this type of launcher typically goes through six different flight phases: take-off, boost-back, ballistic, re-entry-burn, re-entry glide and final-burn. Our work focuses on the last flight phase, the final burn, which starts at a few kilometers of altitude and lasts for 10 seconds to a couple of minutes, during which the engine is always turned on and (at least partially) controllable.
To land properly at a desired site, the typical Guidance and Control (G&C) strategy considers two objectives: i) the guidance problem, i.e. the determination of a high-level guidance trajectory, and ii) the control problem, i.e. the tracking of this trajectory using a low-level controller. This thesis addresses the guidance problem only, under the assumption of full knowledge of the state of the system, making low-level control and other estimation topics out of scope. Providing guidance for the final burn is a challenging and necessary task. Indeed, solving the PDG in-flight must be achieved using limited time and computational power. Only single-CPU methods are studied in the thesis, as opposed to methods requiring the use of GPUs. Further, PDG has to deal with the accumulation of the tracking errors -i.e. the distance between the actual rocket states and the expected reference trajectory -during the flight phases prior to the final burn, which have to be shrunk to zero during this last phase [START_REF] Blackmore | Autonomous Precision Landing of Space Rockets[END_REF]. This must be done under strong disturbances (e.g. wind speed or changes of atmosphere density profile, among others) and under multiple constraints: the incidence -or angle-of-attack -must be limited to avoid hazardous regions of the aerodynamic flight domain, the engine flow is mechanically bounded and its internal dynamics is not instantaneous, the low-level controllers actuating the rocket have limited capabilities, which requires bounding the normal acceleration, the landing site is small, the available mass is limited, etc. Combining all these detrimental effects, it is likely that the rocket starts its final burn too far from its reference trajectory to get a landing trajectory strictly meeting all these requirements. However, at the expense of sacrificing some requirements and/or loosening well-chosen constraints, it is possible to successfully land and maintain the reusability of the vehicle. A key concept to preserve the launcher's reusability is to "maximize the launcher's integrity during landing". To illustrate it, consider the following example, pictured in Figure 1.3. The reference trajectory A is shown in plain black. In a typical nominal scenario, the final burn would start reasonably close to the reference trajectory, such that a proper guidance algorithm would provide the rocket B with a trajectory that reaches the desired landing site at null speed while satisfying all the constraints listed earlier. However, if the rocket starts its descent farther away from the reference trajectory -e.g. with a large lateral displacement -then, at some point, it is impossible to land properly and to satisfy the whole system of constraints, which are in fact incompatible. Allowing sharper turns, which can be achieved by relaxing the incidence bound (or the normal acceleration bound), may make the system of constraints compatible again, at the cost of a trajectory alteration. Indeed, this relaxation enables the rocket C to land on the desired landing site. If we push this reasoning further, the rocket could start its final burn far enough from the reference trajectory so that even relaxing the incidence bound would not be enough. If the area neighboring the desired landing site is uninhabited and flat enough, one can allow the landing location to be relaxed too. This is what happens in the case of rocket D.
The first point highlighted by this example is that there is a list of quantifiable elements that can help to relax the constraints (to some extent). These will be referred to as negotiable parameters in this thesis. Maximizing the launcher's integrity means choosing the values of these parameters in an optimal way. The negotiable parameters can refer to many different factors, such as the incidence bound, the normal acceleration bound, the landing site location or even the final vertical speed. The set of negotiable parameters can differ depending on the mission requirements, the rocket structural capabilities or the landing site surroundings, for instance. In the thesis we develop a generic methodology able to handle different sets of landing parameters, and not only a single particular set. Further, the above-mentioned example illustrates why these parameters are not equally critical. Indeed, relaxing the incidence bound by a couple of degrees will have a much lower impact on the rocket structure than relaxing the maximum final vertical speed by a few meters per second. A classic way to handle this relative importance while solving the PDG problem is to penalize the constraint incompatibility [START_REF] Chinneck | Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods[END_REF]. Computationally, it means that: i) the negotiable parameters are added to the decision variables of the optimization problem describing the descent trajectory, and ii) the use of these parameters is penalized in the cost of the optimization problem. In this case, the relative importance of the parameters is partially ensured by the difference in the weights associated with each parameter. However, this is a heuristic, and it does not guarantee that the relative importance of the parameters is exactly satisfied. Instead, we propose to mathematically define the relative importance of the negotiable parameters via the introduction of a strict hierarchy between these parameters. In the thesis, this hierarchy is represented by a specific order relation over the set of negotiable parameters, which we call the emergency order below. A challenge for the guidance methodology to be developed is to compute the smallest constraint alteration that recovers feasibility in the sense of the latter order. Following the discussion above, our main objective in this thesis is formulated as follows: design a method to update the guidance trajectory at the beginning of the final burn, while maximizing the launcher's integrity, in a computationally efficient way. To address this objective, the thesis proposes a guidance method that performs online trajectory optimization for the final burn, while minimizing the use of negotiable parameters according to a certain hierarchy, making sure that their relative importance is strictly respected.

Mathematical programming for online PDG

1.2.1 Current state-of-the-art

The Powered Descent Guidance (PDG) problem is a well-studied applied mathematics problem, most often tackled from the Optimal Control perspective. The decision variables belong to infinite-dimensional function spaces (the control law u and the state x), and the scalar time-of-flight (t_f) over which these functions are defined. The goal in such Optimal Control Problems (OCPs) is to find the triplet (x, u, t_f) that minimizes a certain criterion, under multiple constraints.
Several performance indexes (or cost functions) have been considered in the literature: minimum-fuel [START_REF] Açikmeşe | G-FOLD: A Real-Time Implementable Fuel Optimal Large Divert Guidance Algorithm for Planetary Pinpoint Landing[END_REF][START_REF] Carson | Lossless convexification of Powered-Descent Guidance with non-convex thrust bound and pointing constraints[END_REF][START_REF] Leparoux | Structure of optimal control for planetary landing with control and state constraints[END_REF][START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF][START_REF] Szmuk | Successive Convexification for Fuel-Optimal Powered Landing with Aerodynamic Drag and Non-Convex Constraints[END_REF][START_REF] Wang | Optimal Rocket Landing Guidance Using Convex Optimization and Model Predictive Control[END_REF], minimum-acceleration [START_REF] Steinfeldt | Guidance, Navigation, and Control System Performance Trades for Mars Pinpoint Landing[END_REF], minimum time-of-flight [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF][START_REF] Steinfeldt | Guidance, Navigation, and Control System Performance Trades for Mars Pinpoint Landing[END_REF], minimum error-to-reference [START_REF] Song | Survey of autonomous guidance methods for powered planetary landing[END_REF], minimum-landing-error [START_REF] Blackmore | Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization[END_REF], or any weighted combinations of these [START_REF] Souza | An optimal guidance law for planetary landing[END_REF][START_REF] Sagliano | Guidance and Control Strategy for the CALLISTO Flight Experiment[END_REF]. The constraints depend on the rocket design and the mission requirements. They can take the form of inequalities (e.g. engine flow upper and lower bounds) as well as equalities (e.g. final horizontal and vertical speeds). As discussed earlier, these constraints can be quantified by several parameters. Among these, the ones that may possibly be modified are called the negotiable parameters. For instance, the incidence bound is a safety requirement regarding mechanical and flight quality aspects, whose value may be negotiated if necessary. On the contrary, the engine flow bound is a typical example of a non-negotiable parameter, since it is a physical limitation. Mathematically, these cases are treated as follows: the constraints can be tuned using a vector parameter denoted p, which is defined such that p = 0 when the nominal requirements are met. The PDG problem has several inputs: the initial state of the rocket -i.e. its initial position, speed, orientation and mass -and other external parameters, such as the wind speed for example. These cover all the above-mentioned sources of disturbances. Mathematically, the inputs are conveyed by a parameter ξ. From a general perspective, the PDG problem, seeking a guidance trajectory for the final burn, is written as an OCP such that

OCP(ξ, p) :=
    min_{x, u, t_f}   J(x, u, t_f, ξ)                   (Cost),
    s.t.   ẋ = f(x, u)   on [0, t_f]                    (Rocket dynamics),
           x(0) = x_0 + ξ                               (Initial conditions),
           Ψ(x(t_f), p) = 0                             (Final state),
           C(x, u, p) ≤ 0   on [0, t_f]                 (Other constraints),

where J is the performance index (cost), f is the right-hand side of the ordinary differential equations, Ψ represents the terminal condition and C conveys the various constraints.
The special case OCP(ξ, 0) is the nominal PDG problem, and OCP(0, 0) corresponds to the reference trajectory. This problem having an infinite-dimensional decision variable and an infinite number of constraints, it is discretized for numerical resolution (for some given values of ξ and p). Two well-known and dual discretization approaches exist (see e.g. [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF][START_REF] Hull | Optimal control theory for applications[END_REF][START_REF] Rao | Trajectory Optimization: A Survey[END_REF][START_REF] Trélat | Optimal control and applications to aerospace: some results and challenges[END_REF]). On the one hand, for indirect methods, stationary conditions are derived from OCP(ξ, p) in the form of Ordinary Differential Equations (ODEs), which are then discretized and solved. The celebrated Pontryagin Maximum Principle [START_REF] Hartl | A Survey of the Maximum Principles for Optimal Control Problems with State Constraints[END_REF] is the fundamental tool to form the stationary conditions. On the other hand, for direct methods, OCP(ξ, p) is first discretized into the sub-problem DSC(ξ, p), using a finite-dimensional variable z representing the unknowns (x, u, t_f) of the initial problem, where DSC(ξ, p) is an optimization problem of the form

DSC(ξ, p) :=
    min_z   J(z, ξ)
    s.t.    h(z, ξ, p) ≤ 0,
            g(z, ξ, p) = 0,

which is then solved using Non-Linear Programming (NLP). Though the indirect methods are known to be more accurate, they are also very sensitive and usually are not used in real-time. Currently, for online applications, PDG problems are solved almost exclusively using direct methods. We follow this trend and will focus on the parametric problem DSC(ξ, p). Still, the main challenges for direct methods are the choice of the decision variable z, the design of the associated functions J, h and g, and the selection of the proper numerical method used to solve DSC(ξ, p). These decisions directly determine whether DSC(ξ, p) can be used for online applications with sufficient accuracy. A broad spectrum of direct methods has been used to solve OCP(ξ, p) for aerospace applications [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF]. Direct trajectory optimization methods relying on NLP have been explored since the 1980's [START_REF] Hargraves | Direct trajectory optimization using nonlinear programming and collocation[END_REF]. Since then, many methods have emerged. Pseudospectral methods -consisting in discretizing the states and the controls of OCP(ξ, p) at well-chosen time points using polynomial approximations [START_REF] Fahroo | Direct Trajectory Optimization by a Chebyshev Pseudospectral Method[END_REF] to transform OCP(ξ, p) into an NLP -have been investigated from various perspectives [START_REF] Ross | A review of pseudospectral optimal control: From theory to flight[END_REF][START_REF] Sagliano | Onboard Guidance for Reusable Rockets: Aerodynamic Descent and Powered Landing[END_REF][START_REF] Sostaric | Powered Descent Guidance Methods For The Moon and Mars[END_REF][START_REF] Sostaric | Powered Descent Trajectory Guidance and Some Considerations for Human Lunar Landing[END_REF][START_REF] Wang | A Pseudospectral-Convex Optimization Algorithm for Rocket Landing Guidance[END_REF].
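To make the structure of such a direct transcription concrete, the sketch below discretizes a deliberately simplified, purely vertical landing OCP into an NLP of the DSC form and solves it with SciPy's SLSQP solver. It is only an illustration of the direct approach: the toy dynamics, the cost, the forward-Euler scheme and every numerical value are placeholder assumptions, not the discretization used in this thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy vertical landing: states (h, v), control a (commanded acceleration).
# Decision vector z packs the N control samples and the time of flight.
N, g = 20, 9.81
h0, v0 = 1000.0, -80.0          # assumed initial altitude [m] and vertical speed [m/s]

def unpack(z):
    return z[:N], z[N]          # controls a_0..a_{N-1}, time of flight t_f

def rollout(z):
    """Forward-Euler integration of the toy dynamics over [0, t_f]."""
    a, tf = unpack(z)
    dt = tf / N
    h, v = h0, v0
    for k in range(N):
        h += dt * v
        v += dt * (a[k] - g)
    return h, v

def cost(z):                     # crude proxy for fuel consumption
    a, tf = unpack(z)
    return (tf / N) * np.sum(a)

def terminal_constraint(z):      # Psi(x(t_f)) = 0: land at h = 0 with v = 0
    h, v = rollout(z)
    return np.array([h, v])

z0 = np.concatenate([np.full(N, g), [20.0]])    # initial guess: hover thrust, 20 s
bounds = [(0.0, 30.0)] * N + [(5.0, 60.0)]      # actuator and time-of-flight bounds
res = minimize(cost, z0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": terminal_constraint}])
print(res.success, unpack(res.x)[1])
```

The choice of z (here, piecewise-constant controls plus the time of flight) and of the integration scheme is exactly the kind of design decision discussed above; it directly conditions accuracy and solve time.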
Sensitivity-based parametric optimization has been used for Mars atmospheric entry problems [START_REF] Seelbinder | On-board Trajectory Computation for Mars Atmospheric Entry Based on Parametric Sensitivity Analysis of Optimal Control Problems[END_REF]. Recent work also focused on applying learning-based methods to PDG [START_REF] Furfaro | Waypoint-Based Generalized Zem/Zev Feedback Guidance For Planetary Landing Via A Reinforcement Learning Approach[END_REF]. A noteworthy state-of-the-art direct method aiming at solving OCP(ξ, p) for rocket landing problems is Successive Convexification [START_REF] Açikmeşe | Convex Programming Approach to Powered Descent Guidance for Mars Landing[END_REF]. This method solves OCP(ξ, p) by successively solving linearized and discretized versions of it [START_REF] Mao | Successive convexification of non-convex optimal control problems and its convergence properties[END_REF][START_REF] Reynolds | Optimal Planar Powered Descent with Independent Thrust and Torque[END_REF][START_REF] Szmuk | Successive Convexification for Real-Time 6-DoF Powered Descent Guidance with State-Triggered Constraints[END_REF]. In other words, this method is a variant of Successive Convex Programming (SCP) methods, tailored and optimized for the requirements of PDG. This method is presented in great detail in [START_REF] Malyuta | Convex Optimization for Trajectory Generation[END_REF]. Theoretical guarantees were established by B. Açikmeşe and his fellow researchers. Informally speaking, the main statement can be rephrased as follows: "if the method converges and if some intermediate slack variables are sufficiently penalized and tend to zero, then the solution is the optimal landing method in the sense of the Karush-Kuhn-Tucker conditions and the constraints are satisfied" [START_REF] Malyuta | Convex Optimization for Trajectory Generation[END_REF], Thm. 8]. Superlinear convergence rates have been established under mild assumptions [START_REF] Mao | Successive Convexification: A Superlinearly Convergent Algorithm for Non-convex Optimal Control Problems[END_REF]. Specifically designed interior point solvers have also been implemented [START_REF] Dueri | Customized Real-Time Interior-Point Methods for Onboard Powered-Descent Guidance[END_REF]. These results are rather powerful and the method proved to work correctly even on real vehicles [START_REF] Açikmeşe | G-FOLD: A Real-Time Implementable Fuel Optimal Large Divert Guidance Algorithm for Planetary Pinpoint Landing[END_REF]. It is often used along with Lossless Convexification, which aims at performing an exact convex relaxation of the thrust magnitude constraints [START_REF] Blackmore | Lossless convexification of control constraints for a class of nonlinear optimal control problems[END_REF][START_REF] Carson | Lossless convexification of Powered-Descent Guidance with non-convex thrust bound and pointing constraints[END_REF][START_REF] Malyuta | Convex Optimization for Trajectory Generation[END_REF]. The above-mentioned list of direct methods for PDG is not exhaustive. For further details on the available methods, see the recent survey by Song et al. [START_REF] Song | Survey of autonomous guidance methods for powered planetary landing[END_REF]. All the above-mentioned methods tackle different versions of the PDG problem, but face the same bottleneck when it comes to landing on Earth's surface: the presence of the aerodynamic forces.
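To illustrate the successive convexification idea only (not the actual algorithm of the references above), the sketch below repeatedly linearizes a toy nonlinear drag term around the previous iterate and solves the resulting convex subproblem with CVXPY; trust regions, virtual controls and the other safeguards of the real method are omitted, and every numerical value is a placeholder assumption.

```python
import numpy as np
import cvxpy as cp

# Toy 1D descent with quadratic drag: h' = v, v' = a - g + c_d * v**2 (v < 0).
N, dt, g, c_d = 30, 0.5, 9.81, 1e-3
h0, v0 = 800.0, -70.0

h_prev = np.linspace(h0, 0.0, N + 1)          # straight-line initial guess
v_prev = np.linspace(v0, 0.0, N + 1)

for it in range(5):                            # SCP outer loop
    h, v = cp.Variable(N + 1), cp.Variable(N + 1)
    a = cp.Variable(N)
    cons = [h[0] == h0, v[0] == v0, h[N] == 0, v[N] == 0,
            a >= 0, a <= 25]
    for k in range(N):
        # Drag linearized around the previous iterate: v_prev**2 + 2*v_prev*(v - v_prev)
        drag_lin = c_d * (v_prev[k] ** 2 + 2 * v_prev[k] * (v[k] - v_prev[k]))
        cons += [h[k + 1] == h[k] + dt * v[k],
                 v[k + 1] == v[k] + dt * (a[k] - g + drag_lin)]
    prob = cp.Problem(cp.Minimize(cp.sum(a)), cons)
    prob.solve()
    h_prev, v_prev = h.value, v.value          # re-linearize around the new trajectory

print(prob.status, float(prob.value))
```

Each pass of the loop is a convex problem, which is why such schemes are attractive for on-board use; the open questions mentioned in the text concern how well this machinery behaves when the full atmospheric model enters the dynamics.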
Without atmosphere, i.e. for Moon and Mars landing, the problem is called planetary landing, and one can safely say that the above-mentioned literature is mature enough to provide efficient and proven PDG methods for real-time usage. A noteworthy example is the full characterization of the reachable set for Mars landing [START_REF] Eren | Constrained Reachability and Controllability Sets for Planetary Precision Landing via Convex Optimization[END_REF], obtained by combining convex analysis and Successive Convexification. Though a few papers have tackled atmospheric PDG in the past decade, there are still many open questions regarding the best way to perform online PDG in the presence of non-negligible aerodynamic effects. In this thesis, this is the case we consider.

Hierarchy and the emergency problem

In this thesis we address cases when ξ is such that there exist no trajectories (x, u, t_f) that satisfy all the constraints of OCP(ξ, p). This situation, referred to as the emergency problem, is also sometimes called the abort planning problem in the air and space field [START_REF] Calise | Generation of Launch Vehicle Abort Trajectories Using a Hybrid Optimization Method[END_REF][START_REF] Hanson | Ascent, transition, entry, and abort guidance algorithm design for the X-33 vehicle[END_REF][START_REF] Lampazzi | Intact Ascent Aborts Workbook 21002[END_REF][START_REF] Lu | Abort Guidance during Powered Descent for Crewed Lunar Missions[END_REF][START_REF] Shapiro | Survivability of Emergency Escape from a Simulated Shuttle Entry Trajectory[END_REF][START_REF] Vana | Any-Time Trajectory Planning for Safe Emergency Landing[END_REF]. To the best of our knowledge, a single paper has tackled this topic for planetary PDG, for Mars landings [START_REF] Blackmore | Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization[END_REF]. Using Lossless and Successive Convexification, the latter paper presents a method that first minimizes the landing error -i.e. the minimum distance between the feasible landing sites and the targeted one -and then computes the fuel-optimal trajectory among the ones having a minimum landing error. Our goal is to design a more general methodology to perform emergency guidance by dealing with more than a single parameter to relax. From a modeling perspective, in this thesis, the above-mentioned vector of negotiable parameters p is decomposed into R vector parameters of possibly different dimensions, ranked with respect to their relative importance, such that

p = (p^(1), . . . , p^(R)) = (least critical, . . . , most critical).

For instance, one can consider that p^(1) = ∆α_max (the incidence bound) is less critical than p^(2) = ∆v_h^f (the final vertical speed). The elements p^(j) can be vectors. For instance, p^(j) = (∆z_f, ∆y_f) is the vector denoting the landing site location on a map. The hierarchy among the parameters is defined using a variant of the lexicographic order: the negotiable parameters are compared using the 1-norms of their sub-parameters, by comparing their most critical sub-parameters first. In practice, a vector p_a is said to be larger -or more negotiated -than another vector p_b, which is denoted p_a ⪰_e p_b, if and only if

    ∥p_a^(R)∥_1 > ∥p_b^(R)∥_1,
    or   ∥p_a^(R)∥_1 = ∥p_b^(R)∥_1  and  ∥p_a^(R-1)∥_1 > ∥p_b^(R-1)∥_1,
    or   . . .
    or   ∥p_a^(R)∥_1 = ∥p_b^(R)∥_1  and  . . .  and  ∥p_a^(1)∥_1 > ∥p_b^(1)∥_1,
    or   ∥p_a^(R)∥_1 = ∥p_b^(R)∥_1  and  . . .  and  ∥p_a^(1)∥_1 = ∥p_b^(1)∥_1.        (1.1)
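As a small illustration of definition (1.1), the helper below compares two negotiable-parameter vectors, each represented as a Python list of sub-parameter arrays ordered from least to most critical; this is only a sketch of the order relation, not part of the thesis's guidance algorithm, and the example values are made up.

```python
import numpy as np

def more_negotiated(p_a, p_b):
    """Return True iff p_a is at least as negotiated as p_b in the sense of (1.1).

    p_a, p_b: lists [p(1), ..., p(R)] of 1-D arrays, least critical first.
    Sub-parameters are compared through their 1-norms, most critical first.
    """
    norms_a = [np.sum(np.abs(np.asarray(p))) for p in p_a]
    norms_b = [np.sum(np.abs(np.asarray(p))) for p in p_b]
    for na, nb in zip(reversed(norms_a), reversed(norms_b)):  # j = R, ..., 1
        if na > nb:
            return True
        if na < nb:
            return False
    return True   # all 1-norms equal: equally negotiated

# Example: p(1) = incidence-bound relaxation, p(2) = landing-site shift (most critical).
p_a = [np.array([3.0]), np.array([0.0, 0.0])]   # relaxes only the incidence bound
p_b = [np.array([0.5]), np.array([10.0, 0.0])]  # moves the landing site by 10 m
print(more_negotiated(p_b, p_a))                # True: p_b sacrifices the more critical parameter
```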
In the context of this thesis, the order ⪰_e defined by (1.1) is called the extended colexicographic order, or simply the emergency order. Maximizing the launcher's integrity by sacrificing the parameter p according to the hierarchy of importance between the sub-parameters translates into finding the smallest p in the sense of ⪰_e such that there is a feasible trajectory (x, u, t_f) satisfying the constraints of OCP(ξ, p). There are no generic methods yet available that recover feasibility in OCP(ξ, p) while enforcing such a hierarchy. This is the subject and main contribution of this thesis.

Proposed contribution: Hierarchical Emergency Guidance Optimization

The main contribution of this thesis is an online method that computes the optimal trajectory and the optimal values of the negotiable parameters, in the sense of the extended colexicographic order, using only Linear and Quadratic programming techniques. In the manuscript, this method is demonstrated on two rocket models of increasing complexity: from a planar model with non-trivial aerodynamic effects, to a richer three-dimensional model with non-negligible engine transients. This contribution is obtained in two steps: first a nominal guidance method computing the landing trajectory z for a given p is presented, then an emergency guidance method aiming at computing the best value for p is studied.

Fast PDG for fixed values of constraint parameters: a sensitivity-based approach

To perform nominal guidance, i.e. to compute z for given values of ξ and p, we use a sensitivity-based approach. In this approach, OCP(ξ, p) is described as an optimal correction problem PDG(ξ, p), defined w.r.t. a reference trajectory (x̄, ū, t̄_f) (e.g. rocket A in Figure 1.3). The latter trajectory is mission-specific, and its design is out of scope for this manuscript. Using handy notations, PDG(ξ, p) is sketched below as an infinite-dimensional optimization problem of the form

PDG(ξ, p) :=
    min_{δu, ∆t_f}   J(δu, ∆t_f)
    s.t.   ẋ(t) = f(x(t), ū(t) + δu(t)),
           x(0) = x̄_0 + ξ,
           Ψ(x(t̄_f + ∆t_f), p) = 0,
           C(x(t), ū(t) + δu(t), p) ≤ 0.

The decision variable δu, the ODE of the dynamics and the constraints are discretized using a finite-dimensional variable z, which conveys a parametric description µ of the control change w.r.t. the reference control ū and the implicit time-of-flight change ∆t_f. Through this discretization procedure, the state x is expressed as a function of ξ and z using the flow of the ODE defined by f. The problem PDG(ξ, p) is approximated by its discrete non-linear version NLP(ξ, p), which writes

NLP(ξ, p) :=
    min_z   J(z, ξ)
    s.t.    h(z, ξ) ≤ H_p p,
            g(z, ξ) = B_p p,

where it is stressed that the negotiable parameter p has a linear influence on the constraint right-hand sides of the guidance problem tackled in this thesis. Then, sensitivity analysis is used to approximate the solution of NLP(ξ, p). Revisiting results from the literature, we introduce a Quadratic Program (QP) which writes

QP(ξ, p) :=
    min_z   (1/2) z^⊤ P z + q^⊤ z
    s.t.    G z ≤ h_0 + H_ξ ξ + H_p p,
            A z = b_0 + B_ξ ξ + B_p p,

and whose solution is shown to be a local approximation to the solution of NLP(ξ, p), under mild assumptions. Interestingly, Strict Complementary Slackness, often assumed in the literature, is not needed here. Approximating NLP(ξ, p) by QP(ξ, p) is shown to be relevant and usable even for large -non-local -values of ξ, when used with the two above-mentioned rocket models.
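To fix ideas on how a problem of the form QP(ξ, p) can be exploited, the sketch below builds a generic parametric QP with CVXPY; the random matrices merely stand in for the sensitivity data that would be computed offline around the reference trajectory (they are placeholders with no physical meaning), and only ξ and p change between solves.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_z, n_xi, n_p, n_in, n_eq = 12, 4, 2, 8, 3

# "Offline" data: placeholders standing in for the sensitivity matrices
# computed before the flight around the reference trajectory.
P = np.eye(n_z)                                   # positive-definite cost Hessian
q = np.zeros(n_z)
G = rng.standard_normal((n_in, n_z))
A = rng.standard_normal((n_eq, n_z))
h0, b0 = np.ones(n_in), np.zeros(n_eq)            # z = 0 is feasible when xi = p = 0
H_xi, H_p = 0.2 * rng.standard_normal((n_in, n_xi)), 0.2 * rng.standard_normal((n_in, n_p))
B_xi, B_p = 0.2 * rng.standard_normal((n_eq, n_xi)), 0.2 * rng.standard_normal((n_eq, n_p))

# Parametric problem: built once, then re-solved for each new (xi, p).
xi, p = cp.Parameter(n_xi), cp.Parameter(n_p)
z = cp.Variable(n_z)
qp = cp.Problem(cp.Minimize(0.5 * cp.quad_form(z, P) + q @ z),
                [G @ z <= h0 + H_xi @ xi + H_p @ p,
                 A @ z == b0 + B_xi @ xi + B_p @ p])

xi.value, p.value = 0.1 * rng.standard_normal(n_xi), np.zeros(n_p)   # "online" inputs
qp.solve()
print(qp.status, qp.value)
```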
The problem QP(ξ, p) contains a convenient linear description of the constraints, which makes the emergency guidance method described thereafter tractable. Problems such as QP(ξ, p) can be solved in an online/offline fashion, where the defining constant matrices are computed before the flight for the prescribed reference trajectory, and where the QP itself is solved for given ξ and p on-board using off-the-shelf QP solvers. Numerical resolution can be achieved with confidence as QP solvers are now considered mature and reliable technology [START_REF] Boyd | Convex Optimization[END_REF][START_REF] Mattingley | CVXGEN: a code generator for embedded convex optimization[END_REF][START_REF] Stellato | OSQP: an operator splitting solver for quadratic programs[END_REF].

Computing the best constraint alteration achieving feasibility

When ξ is such that the nominal guidance problem QP(ξ, 0) has no solutions, we propose a method computing a value of p such that QP(ξ, p) is feasible. Finding the smallest p in the sense of ⪰_e guaranteeing the existence of a trajectory is achieved by solving the following R different negotiation problems. The idea is to iteratively minimize each sub-parameter p^(j) under the 1-norm (consistently with the mathematical definition of ⪰_e), and memorize the associated optimal value P*_j. For each negotiation problem, the constraints are those of NLP(ξ, p), plus new constraints that ensure that the optimal values P*_{j+1}, . . ., P*_R of the previous negotiation problems are reached. In detail, the j-th negotiation problem writes

P*_j ← min_{z, p}   ∥p^(j)∥_1                                   (1.2a)
       s.t.   G z ≤ h_0 + H_ξ ξ + H_p p,                        (1.2b)
              A z = b_0 + B_ξ ξ + B_p p,                        (1.2c)
              ∥p^(i)∥_1 = P*_i,   i = j + 1, . . . , R.          (1.2d)

The HEGO algorithm: main contribution of the thesis

Once this problem has been solved for each j, starting from R down to 1, we minimize z under the original performance index J, such that

z* ← arg min_{z, p}   (1/2) z^⊤ P z + q^⊤ z                     (1.3a)
     s.t.   G z ≤ h_0 + H_ξ ξ + H_p p,                          (1.3b)
            A z = b_0 + B_ξ ξ + B_p p,                          (1.3c)
            ∥p^(i)∥_1 = P*_i,   i = 1, . . . , R.                (1.3d)

This latter problem is the refine problem. It only returns the value of z. The value of p is not necessarily unique. The "negotiation and refine" problems are gathered into the HEGO algorithm, which is a nominal and emergency guidance algorithm, described using pseudo-code below. Overall, HEGO is a numerical method that provides nominal and emergency guidance, relying on Linear and Quadratic programming solvers only.

Manuscript outline

A high-level summary of the approach with references to the associated chapters is presented in Figure 1.4. Chapter 2 introduces the dynamic model of the class of reusable launchers under study, with two levels of complexity (2D and 3D rocket models). Among others, the non-trivial aerodynamic model is detailed, and special care is taken to define the 3D model. Then, after discussing the various mission constraints, the PDG problem is formulated as an OCP with respect to a reference trajectory. Chapter 3 takes a side turn and tackles the problem of optimal thrust programming for the special case of a purely vertical atmospheric flight. By applying the Pontryagin Maximum Principle, necessary and sufficient conditions are derived, showing that the optimal thrust program is min-max for the class of launchers that we study. The by-product of this result is a full characterization of the reachable set of the rocket for the vertical landing problem. For a first reading, this chapter can be skipped without loss of continuity.
Chapter 4 details the method that performs nominal trajectory planning, sketched in Section 1.3.1. Using a finite-dimensional decision variable to describe the trajectory of the PDG, an NLP is derived. Its optimal solution is then approximated using parametric sensitivity analysis and computed by solving a single QP. The mathematical formulation of this QP is instrumental in the rest of the thesis. Chapter 5 presents the core topic of this thesis. After discussing the available negotiation parameters, the HEGO algorithm is introduced in the LP/QP framework. Its behavior is explained on a detailed toy example. Then, theoretical guarantees are proposed and proved. Among others, the Lipschitz-continuity of the optimal solution z* is established, guaranteeing the absence of "jumps" in the solutions, a desirable property in practice. Also, high-level comments on the emergency guidance method are proposed, to distinguish what is generalizable from what comes from the underlying nominal guidance problem. Finally, several examples are presented to illustrate the various modeling possibilities offered by HEGO, and to visualize the quality of the results. Chapter 6 offers a quantitative assessment of the performance of HEGO. The inputs of the algorithm are dispersed over wide uncertainty intervals, and the results are analyzed pairwise. Also, a comparison with the vertical landing problem from Chapter 3 is presented. A concluding chapter discusses a few topics that have not been detailed in the previous chapters; it also presents possible future research directions and draws conclusions.

[Figure 1.4: High-level summary of the approach: the Hierarchical Emergency Guidance Optimization of Chapter 5, i.e. the negotiation problem (solved in a loop for j = R, . . . , 1) followed by the refine problem, built on the discrete approximation of PDG(ξ, p) from Chapter 2 and on the negotiable parameters p = (p^(1), . . . , p^(R)), ordered from least to most critical.]

Chapter 2
Dynamic models and the PDG problem

Résumé
Ce chapitre contient une description mathématique du problème de guidage pour l'atterrissage (PDG). Il introduit les modèles dynamiques décrivant la fusée et les contraintes. Tout d'abord, les choix généraux de modélisation sont discutés, ce qui permet de souligner le rôle de l'atmosphère, une question rarement abordée dans le cadre du guidage pour l'atterrissage. Deux modèles de fusée, avec différents niveaux de complexité, sont présentés. Un modèle de fusée dans le plan (2D) et un modèle de fusée tridimensionnel (3D) sont construits. Le modèle de fusée 2D suppose que la fusée reste toujours dans un seul plan. Il servira à illustrer les principes de guidage des chapitres suivants d'une manière beaucoup plus accessible que le modèle 3D. En revanche, le modèle 3D est utilisé dans les exemples avancés des Chapitres 4 et 5 et pour l'ensemble du Chapitre 6, ce dernier contenant une évaluation des performances numériques et vérifiant l'applicabilité de la méthodologie proposée. Un problème général de calcul de trajectoire est présenté à la fin de ce chapitre, sous la forme d'un OCP en temps final libre, défini par rapport à une trajectoire de référence. La résolution de ce problème sera l'objet principal du Chapitre 4. L'importance relative des contraintes du PDG sera examinée plus loin dans le Chapitre 5.
This chapter contains a mathematical description of the PDG problem. It introduces the dynamic models governing the rocket and the constraints. First, general modeling choices are discussed, which serve to stress the role of the atmosphere, an issue seldom looked at in PDG. Two rocket models, with different levels of complexity, are presented. A planar (2D) and a three-dimensional (3D) rocket model are constructed. The 2D rocket model assumes that the rocket always remains in a single plane. It will serve to illustrate the guidance principles of the next chapters in a much more accessible way than the 3D model. On the other hand, the 3D model is used in the advanced examples of Chapters 4 and 5 and for the whole of Chapter 6, which contains the numerical results assessing the numerical performance and the applicability of the proposed methodology. A general trajectory design problem is presented at the end of this chapter, as a free-final-time OCP defined w.r.t. a reference trajectory. The resolution of this problem will be the main concern of Chapter 4. The relative importance of the PDG constraints will be discussed later in Chapter 5.

Atmospheric flight dynamics

Here are presented the features shared by both the 2D and 3D models. Some notions, such as those regarding the environment and the aerodynamic model, are also used in Chapter 3. The typical flight phases of a tossback vehicle are illustrated in Figure 1.2. In this thesis, we are interested in the last part of the flight: the final burn until landing. It starts a few kilometers above the ground [START_REF] Brendel | Optimal guidance for toss back concepts of Reusable Launch Vehicles[END_REF]. In both the 2D and 3D rocket models, which will be used in the guidance algorithms, the rocket is assumed to be a point mass having an orientation (and thus having an aerodynamic incidence). This conceptual approach is presented in detail below.

Earth and atmosphere model

Since we are only interested in the last flight phase, the Earth is considered locally flat, non-rotating, with a constant gravity field of magnitude g. The atmosphere is described via the pressure P_a, the density ρ, the temperature and the speed of sound S_SP, which are functions of the altitude. They are computed using linear interpolation of data samples. Denoting by V_r the norm of the relative speed of the rocket, we define M_a := V_r / S_SP(h), the Mach number at a given altitude h. The wind is assumed to be horizontal, with a speed depending on the altitude only. It is assumed null at null altitude. In practice, the wind map is described by its value at three reference altitudes, for each direction:
• w_z,0, w_z,1 and w_z,2 for the first horizontal direction (2D and 3D models),
• w_y,0, w_y,1 and w_y,2 for the second horizontal direction (3D model only).
For these wind parameters, index 0 (resp. 1, 2) corresponds to an altitude of h = 2 km (resp. 5 km, 10 km).

Aerodynamic model

The aerodynamic model of a rocket moving in the direction of its thrust flame is notoriously hard to determine. Early works from 1966 started to describe the aerodynamic effect of an air jet pushing in front of a body in a supersonic flow. Later works from the early 2000's from JAXA have complemented these observations [START_REF] Nonaka | Vertical Landing Aerodynamics of Reusable Rocket Vehicle[END_REF]. They were followed and corroborated by recent experiments of the DLR [START_REF] Marwege | First Wind Tunnel Data of CALLISTO Reusable VTVL Launcher First Stage Demonstrator[END_REF].
The common findings of [START_REF] Nonaka | Vertical Landing Aerodynamics of Reusable Rocket Vehicle[END_REF] and [START_REF] Marwege | First Wind Tunnel Data of CALLISTO Reusable VTVL Launcher First Stage Demonstrator[END_REF] are that, for a sufficiently strong jet flow, its wrapping around the rocket body drastically lowers the drag along the body axis, though orthogonal effects remain strong for non-zero incidences. This can be partially explained by the fact that the air jet creates an air cushion in front of the rocket that helps the boundary layer stick to the fuselage from its lower end. A qualitative explanation of this observation is presented in Figure 2.1.
(Figure 2.1: Qualitative change of the wind flow with (left) and without (right) an air jet pushing in front of a moving rocket. See [START_REF] Nonaka | Vertical Landing Aerodynamics of Reusable Rocket Vehicle[END_REF] and [START_REF] Marwege | First Wind Tunnel Data of CALLISTO Reusable VTVL Launcher First Stage Demonstrator[END_REF] for experimental results.)
For the sake of this thesis, the aerodynamic effects of the air flow around the rocket are taken into account via:
1. A non-trivial lift coefficient C_Lift, to account for the aerodynamic effect orthogonal to the drag and in the opposite direction to the relative speed,
2. A small-magnitude drag coefficient C_Drag, to account for the aerodynamic effect in the direction of the rocket body,
3. Drag and lift coefficients C_Drag and C_Lift depending on two parameters: the rocket incidence α and the Mach number M_a,
4. An altitude-corrected expression of the thrust. Denoting by T the thrust magnitude along the rocket body, we will consider that
T = g Isp q - S_E P(h)    (2.1)
where g is the gravity acceleration, Isp the engine specific impulse, q the engine flow, S_E the nozzle section surface and h the altitude.
5. A lift vector s.t. the effective aerodynamic forces on the rocket are contained in the plane defined by the relative speed vector and the rocket body longitudinal axis.
The functions (M_a, α) → C_Drag(M_a, α) and (M_a, α) → C_Lift(M_a, α) considered below for the 2D and 3D models convey the same aerodynamic model.
Remark 1. Few articles have tackled PDG where drag is not negligible. See for instance [START_REF] Szmuk | Successive Convexification for Fuel-Optimal Powered Landing with Aerodynamic Drag and Non-Convex Constraints[END_REF], [START_REF] Leparoux | Structure of optimal control for planetary landing with control and state constraints[END_REF] or [START_REF] Brendel | Optimal guidance for toss back concepts of Reusable Launch Vehicles[END_REF].
Noteworthy particularities
Engine dynamics
The engine is assumed to generate the only control forces available on the rocket (i.e. aerodynamic grid fins or Reaction Control Systems (RCS) are not considered in this thesis). The output flow and the angular position of the engine are actuated. As far as the flow is concerned, its dynamics is not instantaneous and should not be neglected. It is assumed that the real flow is q_r, that the controlled flow (or input signal) is q_c, and that they are related via a first-order low-pass dynamics with time constant τ_q s.t.
q̇_r = (q_c - q_r) / τ_q.    (2.2)
The dynamics of the angular position of the engine is not rigorously instantaneous. However, the time constant of its transient is sufficiently small compared to the other time constants of the guidance problem to be neglected.
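To make Equations (2.1) and (2.2) concrete, here is a minimal Python sketch, not taken from the thesis toolchain, that evaluates the altitude-corrected thrust and propagates the first-order flow lag with an explicit Euler step; every numerical value, including the exponential pressure law, is an illustrative placeholder.

```python
import numpy as np

# Illustrative constants (placeholders, not the values used in the thesis)
G = 9.81        # m/s^2, gravity
ISP = 330.0     # s, engine specific impulse
S_E = 0.9       # m^2, nozzle exit section
TAU_Q = 0.3     # s, engine flow time constant

def pressure(h):
    """Crude exponential stand-in for the atmospheric pressure P_a(h) [Pa]."""
    return 101325.0 * np.exp(-h / 8500.0)

def thrust(h, q_r):
    """Altitude-corrected thrust T = g*Isp*q_r - S_E*P_a(h), cf. Equation (2.1)."""
    return G * ISP * q_r - S_E * pressure(h)

def flow_lag_step(q_r, q_c, dt, q_min, q_max):
    """One explicit Euler step of the first-order lag (2.2), with the commanded
    flow clipped to its bounds (anticipating Remark 2 below)."""
    return q_r + dt * (np.clip(q_c, q_min, q_max) - q_r) / TAU_Q

# Example: step response of the engine flow, then net thrust at 2 km altitude
q_r = 150.0                                    # kg/s, current real flow
for _ in range(100):                           # 1 s of simulation with dt = 0.01 s
    q_r = flow_lag_step(q_r, q_c=240.0, dt=0.01, q_min=140.0, q_max=250.0)
print(f"q_r = {q_r:.1f} kg/s, T = {thrust(2000.0, q_r) / 1e3:.0f} kN")
```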
In the following, it will only be required that the attitude of the rocket (which the rocket control system will track) be a continuous function of time.
Remark 2. Consider some lower and upper bounds q^- and q^+ on the engine flow. If q_r(0) ∈ [q^-, q^+] and q_c(t) ∈ [q^-, q^+], then from Equation (2.2) we get that q_r(t) remains within these bounds for all t ≥ 0. However, if q_c(t) is not guaranteed to be between q^- and q^+, the proper version of Equation (2.2) becomes
q̇_r = (Sat_{q^-}^{q^+}(q_c) - q_r) / τ_q.
Thrust direction
The thrust is assumed colinear to the rocket longitudinal axis. This assumption is an approximation, and corresponds to the equilibrium of the low-level controllers (out of the scope of this thesis). More precisely, non-zero nozzle gimbal angles would slightly deviate the thrust vector, creating a momentum and in the end a rotation of the rocket. However, the rotation time constant is assumed significantly smaller than the translation time constant involved in the guidance problem.
Thrust dominance
The rocket engine is powerful compared to its weight, and the rocket incidence always remains sufficiently low s.t. the vertical speed is always negative and slowing down. Among others, this assumption prevents hovering maneuvers.
Dynamic equation and parameters choice
The dynamic equation of each model will be described by an ODE of the form ẋ = f(x, u, η), where x denotes the rocket states, u its controls, and η its dynamics parameters. As will be detailed next, the rocket states are the position, the speed and the total mass, plus the real engine flow for the 3D rocket model only. The controls are the controlled engine flow, and one or two variables that convey the rocket incidence, depending on which model is considered (2D or 3D). The states and controls chosen to describe each model will be made explicit. The choice of dynamics parameters presented in this chapter is taken arbitrarily wide, to illustrate the various modeling possibilities. However, for the numerical examples of Chapters 4, 5 and 6, only relevant sub-sets of these parameters will be analyzed.
(Figure 2.2: Planar rocket geometry, showing the body frame (C_N, C_A), the relative-speed frame (e_Vr, e_Or), the attitude θ, the incidence α, the forces T, D, L and mg, the wind w, with V = (v_h, v_z)^T and V_r = (v_h, v_z - w(h))^T.)
Planar rocket model
The planar rocket model (a.k.a. 2D model) is described by its altitude h, its vertical speed v_h, its horizontal position z, its horizontal speed v_z and its total mass m. As mentioned above, v_h ≤ 0. The rocket is equipped with its own orthonormal frame (C_A, C_N), where C_A is parallel to the rocket body and oriented towards the engine, as pictured in Figure 2.2. To alleviate the writing, the vector pointing in the direction opposite to C_A is noted e_A := -C_A. The rocket orientation, or attitude, is defined by a single signed angle θ between the vertical axis e_h and the rocket main axis. Thus, when the rocket flies purely vertically, one has θ = 0°. The incidence is defined as the signed angle α between the relative speed and the vector C_A. The unit vector associated to the relative speed is denoted e_Vr. The unit vector orthogonal to e_Vr is denoted e_Or, and is s.t. (e_Vr, e_Or) is positively oriented, as shown in Figure 2.2. A geometric relation gives
tan(θ + α) = (v_z - w(h)) / |v_h|.
As introduced in Section 2.1.1, the wind map is denoted w(h) and parametrized by the three values w_z,0, w_z,1 and w_z,2.
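As a small companion to the planar model just introduced, the following sketch shows one possible implementation of the wind map w(h) by linear interpolation between the reference altitudes of Section 2.1.1, together with the attitude/incidence relation above; the wind values and the flight state are made up for illustration, and holding the wind constant above 10 km is an arbitrary choice.

```python
import numpy as np

REF_ALTITUDES = np.array([0.0, 2000.0, 5000.0, 10000.0])   # m, wind null at the ground

def wind(h, w0, w1, w2):
    """Horizontal wind w(h), linearly interpolated from (w_z,0, w_z,1, w_z,2).
    Above the last reference altitude the value is held constant (arbitrary choice)."""
    return np.interp(h, REF_ALTITUDES, [0.0, w0, w1, w2])

def attitude_from_incidence(alpha, v_h, v_z, h, w0, w1, w2):
    """Attitude theta realizing a given incidence alpha, by inverting
    tan(theta + alpha) = (v_z - w(h)) / |v_h| (2D model, v_h < 0)."""
    return np.arctan2(v_z - wind(h, w0, w1, w2), abs(v_h)) - alpha

# Example: 10 / 20 / 30 m/s of wind at 2 / 5 / 10 km, descent at 3 km altitude
theta = attitude_from_incidence(np.deg2rad(3.0), v_h=-150.0, v_z=20.0,
                                h=3000.0, w0=10.0, w1=20.0, w2=30.0)
print(np.rad2deg(theta))
```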
The equations of motion (EoM) are ḣ = v h (2.3a) vh = -g + L sin θ + (T + D) cos θ m (2.3b) ż = v z (2.3c) vz = L cos θ -(T + D) sin θ m (2.3d) where T := gIspq r -P a (h)S E , (2.4a) V r := v 2 h + (v z -w(h)) 2 , ( 2.4b ) D := 1 2 ρ(h)V 2 r S ref C Drag (M a , α), ( 2.4c ) L := 1 2 ρ(h)V 2 r S ref C Lift (M a , α). (2.4d) Adding the mass dynamics and using the fact that the vertical speed is assumed to be always negative (see Section 2.1.3.3), we can write the EoM ḣ = v h vh = -g + ((T + D)|v h | + Lv z ) cos α + ((T + D)v z -L|v h |) sin α V r m ż = v z vz = (-(T + D)v z + F L |v h |) cos α + ((T + D)|v h | + Lv z ) sin α V r m ṁ = -q r The 2D rocket variables are (States) x := (h, v h , z, v z , m) ⊤ ∈ R 5 , (Controls) u := (q r , α) ⊤ ∈ R 2 , (Parameters) η := (∆Isp, w z,0 , w z,1 , w z,2 ) ⊤ ∈ R 4 , where the parameter ∆Isp is incorporated into Equation (2.4a) s.t. it becomes T = g(Isp + ∆Isp)q r -S E P (h). This yields the dynamic function f 2d of the 2D rocket model: ẋ = f 2d (x, u, η). Remark 3. Since the 2D rocket model serves illustrative purposes, it relies on the simplifying assumption that the engine flow dynamics (2.2) is instantaneous, and that q c is its control variable. However, this engine dynamic will not be neglected in the 3D model. Normal acceleration The normal acceleration a nor is defined as the non-gravitational acceleration orthogonal to the relative speed. Thus, it equals a nor = e Or •    vh + g vz    (2.6) where • is the inner product. For the 2D model, a nor is signed. Three-dimensional rocket model The 3D rocket model has been designed to match the 2D model as closely as possible when the rocket trajectory remains in a plane. Thus, some concepts easily translate from one model to the other. Transposing the notions of incidence and attitude in 3D is, however, a delicate part. First, we introduce a series of frames that are necessary to define the rocket orientation in 3D. Then, we express the aerodynamic model, and formulate the associated EoM. Finally, some comments on the specifics of the 3D model are provided. Orientation frames The rocket is axially symmetric, making the notion of roll irrelevant. Two angles are used to describe its orientation. First, we introduce the rocket yaw and pitch using Euler angles, which enables us to define a frame attached to the rocket body. Then, this frame is re-written using projected angles, which are more convenient for our applications. Rocket orientation frame As shown in Figure 2.3, the Earth's frame is (e z , e y , e h ). The rocket, initially3 positively colinear to e h , is oriented using the yaw Ψ first, and then using the pitch ξ, as explained in Figure 2.3. Mathematically, this translates into C P := R (e z , Ψ) e y C N := R (C P , ξ) e z e A := R (C P , ξ) R (e z , Ψ) e h C A := -e A where the vectors (C N , C A , C P ) define a new direct orthonormal frame, attached to the rocket body. Expressed in the frame (e z , e y , e h ), the latter gives C N =       cos ξ sin ξ sin Ψ -sin ξ cos Ψ       , C A =       -sin ξ cos ξ sin Ψ -cos ξ cos Ψ       , C P =       0 cos Ψ sin Ψ       . (2.7) Projected angles The above-defined angles Ψ and ξ define the orientation of e A . When projected onto the plane (e h , e z ) (respectively (e y , e h )), the vector e A has an angle ζ y (resp. ζ z ) with the vector e h . Note that, in terms of vector labeling, the angle on the plane (e h , e z ) corresponds to a rotation on the axis e y . The angles are illustrated in Figure 2.4. 
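As an aside before deriving the projected-angle expressions, the short sketch below evaluates the Euler-angle construction of Equation (2.7) numerically and checks its basic properties (orthonormality, direct orientation, e_A = e_h at zero yaw and pitch); it is a sanity check with arbitrary angle values, not part of the guidance method.

```python
import numpy as np

def body_frame(psi, xi):
    """Body frame (C_N, C_A, C_P) expressed in (e_z, e_y, e_h), cf. Equation (2.7)."""
    c_n = np.array([np.cos(xi), np.sin(xi) * np.sin(psi), -np.sin(xi) * np.cos(psi)])
    c_a = np.array([-np.sin(xi), np.cos(xi) * np.sin(psi), -np.cos(xi) * np.cos(psi)])
    c_p = np.array([0.0, np.cos(psi), np.sin(psi)])
    return c_n, c_a, c_p

c_n, c_a, c_p = body_frame(psi=np.deg2rad(10.0), xi=np.deg2rad(5.0))
assert np.allclose([c_n @ c_a, c_n @ c_p, c_a @ c_p], 0.0)      # orthogonality
assert np.allclose(np.cross(c_n, c_a), c_p)                     # direct orthonormal frame
assert np.allclose(-body_frame(0.0, 0.0)[1], [0.0, 0.0, 1.0])   # e_A = -C_A = e_h at rest
```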
The expression of e A writes e A = 1 1 + tan 2 ζ z + tan 2 ζ y       tan ζ y -tan ζ z 1       (2.8) and is valid for |ζ z | < 90 • and |ζ y | < 90 • . The steps used to derive formula (2.8) are as follows. Consider the square pyramid defined by a square on plane (e z , e y ) with its summit being at the top of e A . As shown in Figure 2.4, denote by v its height, and h z (resp. h y ) its base length orthogonal to e z (resp. e y ). Here, v, h z and h y are taken unsigned. Then, using basic geometry, one has 1 = v 2 + h 2 z + h 2 y , tan ζ z = h z v , tan ζ y = h y v which gives v = 1 1 + tan 2 ζ z + tan 2 ζ y , h z = v tan ζ z , h y = v tan ζ y . Using the proper signs for e A , as shown in Figure 2 C N = 1 t z,y       1 cos ζz t y sin ζ z -t y cos ζ z       , C A = 1 t z,y       -t y t z -1       and C P =       0 cos ζ z sin ζ z       . (2.10) Dynamics To express the aerodynamic forces, the orientation of the relative speed vector has to be defined. Then, lift and drag vectors are formulated. Finally, the dynamic equations are expressed. Relative speed definition To describe the orientation of the relative speed vector w.r.t. the rocket body, we introduce the incidence variables for the 3D model. The speed vector is V := ( ż, ẏ, ḣ) ⊤ = (v z , v y , v h ) ⊤ . The horizontal wind vector is w := (w z (h), w y (h), 0) ⊤ . Thus, the relative speed is V r := V -w =       v z -w z (h) v y -w y (h) v h       . The unit vector of the relative speed vector V r is denoted e Vr . We can define the orientation of e Vr using projected angles. The angles α y and α z are introduced s.t. the angles defining the position of e Vr w.r.t. the base frame using projected angles are ζ y + α y and ζ z + α z . They are represented in Figure 2.5. Since e Vr is defined equivalently as e A in Equation (2.8), its expression is e Vr = 1 1 + tan 2 (α z + ζ z ) + tan 2 (α y + ζ y )       -tan(α y + ζ y ) tan(α z + ζ z ) -1       . (2.11) Then, knowing the expressions of C A and e Vr , we can define the incidence as the unsigned angle between these vectors, leading to the expression α = arcsin     (T z -t z ) 2 + (T y -t y ) 2 + (T z t y -T y t z ) 2 1 + t 2 z + t 2 y 1 + T 2 z + T 2 y     . ( 2 C A × e Vr = 1 (1 + t 2 z + t 2 y )(1 + T 2 z + T 2 y )       T z -t z t y -T y T y t z -T z t y       . where × is the cross product. Then, from the definition of V r and the definition of the projected angles, we have tan(α z + ζ z ) = v y -w y (h) |v h | and tan(α y + ζ y ) = - v z -w z (h) |v h | . ( 2 ζ z = -α z + arctan v y -w y (h) |v h | and ζ y = -α y -arctan v z -w z (h) |v h | . (2.14) Aerodynamic effects Lift and drag can be defined using (C N , C A , C P ). Indeed, the drag D is colinear to C A , in the opposite direction to V r . Only low-incidence flight is considered, and D is positively colinear with -C A . Moreover, in consistency with the aerodynamic model described in Section 2.1.2, the axial symmetry of the 3D rocket model implies that L, C A and V r are linearly dependent, and that L must belong to the plane (C P , C N ). Therefore, as illustrated in Figure 2.6, one can define the lift and drag vectors as L = L.e L and D = -D.C A , where the magnitudes L and D equal L = 1 2 ρ(h) V 2 r S ref C Lift (M a , α) and D = 1 2 ρ(h) V 2 r S ref C Drag (M a , α). The direction e L of the lift requires further attention, since the lift orientation is defined only when the incidence is not zero. 
When it is well defined, the vector e L is a unit vector, positively colinear to the projection of -e Vr lying in the plane (C P , C N ). Thus, it can be expressed as e L := - (e Vr • C N )C N + (e Vr • C P )C P ∥(e Vr • C N )C N + (e Vr • C P )C P ∥ = - e Vr -(e Vr • C A )C A ∥e Vr -(e Vr • C A )C A ∥ (2.15) when e Vr is not colinear to C A and e L = 0 otherwise. This apparent discontinuity is actually not troublesome. Indeed, for α > 0, the vector e L can be equivalently defined as e L = C A × C A × e Vr ∥C A × e Vr ∥ = C A × (C A × e Vr ) sin α which yields the following expression for the lift L = L.e L = 1 2 ρ(h)V 2 r S ref C Lift (M a , α) sin α C A × (C A × e Vr ). For any fixed Mach number M a , the map α → C Lift (M a , α) is assumed continuously differentiable and it equals zero at α = 0. Thus, the ratio C Lift (Ma,α) sin α remains bounded when α tends to zero. Also, C A × e Vr tends to zero when e Vr tends to C A . Consequently, the expression of e L does not matter when α = 0, which is a false singularity. Remark 5. Here, the coefficient C Lift is positive and only evaluated for positive values of α. However, note that in the 2D model, α → C Lift (M a , α) is taken odd for any fixed Mach number M a and is evaluated on signed values of α. Rocket dynamics in 3D With g = (0, 0, -g) ⊤ denoting the gravity vector, the acceleration vector a equals where the thrust vector is T = -T C A and its magnitude T is defined in Equation (2.1). a := d dt V = g + T + L + D m C N C A C P D V r e Vr The dynamics parameters conveyed by the variable η are the Isp via ∆Isp, multiplicative factors for the aerodynamic coefficients (m L , m D ) and the wind parameters (w z,0 , w z,1 , w z,2 ) and (w y,0 , w y,1 , w y,2 ) defined in Section 2.1.1. The first three parameters must be incorporated in the equations s.t. T = gIspq r -S E P (h) becomes g(Isp + ∆Isp)q r -S E P (h) L = 1 2 ρ(h)V 2 r S ref C Lift (M a , α) becomes 1 2 ρ(h)V 2 r S ref (1 + m L )C Lift (M a , α) D = 1 2 ρ(h)V 2 r S ref C Drag (M a , α) becomes 1 2 ρ(h)V 2 r S ref (1 + m D )C Drag (M a , α) It allows us to define the 3D rocket variables as (States) x := (z, y, h, v z , v y , v h , m, q r ) ⊤ ∈ R 8 , (Controls) u := (q r , α z , α y ) ⊤ ∈ R 3 , (Parameters) η := (∆Isp, m L , m D , w z,0 , w z,1 , w z,2 , w y,0 , w y,1 , w y,2 ) ⊤ ∈ R 9 Then, written using blocks, the dynamic equation equals ẋ = f 3d (x, u, η) =           V g + T+L+D m -q r q c -q r τ q           . ( 2 Normal acceleration, downrange and attitude As it will be needed later in our work, we now focus on the acceleration component normal to the relative speed vector. The unsigned normal acceleration a u.s. nor is defined as the norm of the part of the non-gravitational acceleration vector normal to the relative speed. Denoting F = T + L + D yields Compared to the 2D model, this expression is naturally unsigned. However, the norm in this expression may bring differentiation issues. Indeed, it will be needed to differentiate this term later when considering optimization problems (see e.g. (4.6g) in Chapter 4). Instead of a u.s. nor , we consider an alternate expression that remains signed (and thus differentiable), see below Equation (2.17). First remark that, according to the aerodynamic model and as shown in (2.17) s.t. a u.s. nor = |a nor |. Thanks to Equation (2.17), the 3D model also has a term a nor describing the normal acceleration that is signed. 
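The following sketch, with made-up values, assembles the 3D aerodynamic directions discussed above: C_A and e_Vr obtained from the projected angles as in Equations (2.10) and (2.11), the unsigned incidence computed directly as the angle between these two vectors, as defined before Equation (2.12), and the lift direction e_L of Equation (2.15) with the removable singularity at zero incidence handled explicitly.

```python
import numpy as np

def projected_unit(a_y, a_z):
    """Unit vector of the form (-tan a_y, tan a_z, -1)/norm in (e_z, e_y, e_h):
    gives C_A for (zeta_y, zeta_z), cf. (2.10), and e_Vr for (zeta_y + alpha_y,
    zeta_z + alpha_z), cf. (2.11). Valid for angles strictly below 90 degrees."""
    t_y, t_z = np.tan(a_y), np.tan(a_z)
    return np.array([-t_y, t_z, -1.0]) / np.sqrt(1.0 + t_y**2 + t_z**2)

def incidence(c_a, e_vr):
    """Unsigned angle between the rocket axis C_A and the relative speed e_Vr."""
    return np.arctan2(np.linalg.norm(np.cross(c_a, e_vr)), c_a @ e_vr)

def lift_direction(c_a, e_vr, eps=1e-9):
    """e_L = C_A x (C_A x e_Vr) / sin(alpha), cf. (2.15); zero at zero incidence."""
    s = np.linalg.norm(np.cross(c_a, e_vr))        # equals sin(alpha) for unit vectors
    return np.cross(c_a, np.cross(c_a, e_vr)) / s if s > eps else np.zeros(3)

# Illustrative attitude (zeta) and incidence (alpha) projected angles
zeta_y, zeta_z = np.deg2rad(4.0), np.deg2rad(-2.0)
alpha_y, alpha_z = np.deg2rad(3.0), np.deg2rad(1.0)
c_a = projected_unit(zeta_y, zeta_z)
e_vr = projected_unit(zeta_y + alpha_y, zeta_z + alpha_z)
e_l = lift_direction(c_a, e_vr)
print(np.rad2deg(incidence(c_a, e_vr)), e_l @ c_a)   # e_L is orthogonal to C_A
```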
Downrange The downrange, or distance of the vertical projection of the rocket to the landing site, equals d := z 2 + y 2 . Attitude The attitude θ is defined as the angle between e h and e A . It is unsigned (contrary to the planar rocket model). Its expression equals θ := arcsin sin 2 ξ + cos 2 ξ sin 2 Ψ. (2.18) PDG as an Optimal Control Problem First, we present the general PDG objectives and then write the main OCPs. Mission goals and constraints In this sub-section, we present all the constraints of our PDG problem. Landing site target It is desired to find a trajectory -i.e. states x(t) and controls u(t) -that steers the rocket from a given initial condition x 0 to the landing site in a time t f . Assuming that the landing site is located at the origin of our coordinate system, the following end-point constraints are introduced z(t f ) = 0 y(t f ) = 0 (For the 3D model only) h(t f ) = 0 v z (t f ) = 0 v y (t f ) = 0 (For the 3D model only) v h (t f ) = -ε f v Note that a small non-zero vertical speed at landing (ε f v > 0) is desired. Among others, this serves to avoid singularity of incidence at t f and gives a margin regarding the thrust dominance assumption 4 . These conditions can be written as a linear equality constraint A f x(t f ) = b f where A f is filled with zeros and ones only, and b f contains zeros and -ε f v only. Mechanical bounds As noted in Section 2.1.3, the rocket has several mechanical limitations. Its engine flow magnitude and rate of change are bounded. These impose that the real and controlled flows satisfy q -≤ q r ≤ q + and q -≤ q c ≤ q + , ( where q + > q -> 0. Remark 7. It should be noted that in the literature, the decision variables often considered (especially for non-atmospheric missions) are the components of the thrust vector T itself. The bounded flow constraints are then taken into account through the constraint T min ≤ ∥T∥ ≤ T max . In the latter, the lower bound defines an artificially non-convex constraint. Lossless convexification [START_REF] Açıkmeşe | Lossless convexification of a class of optimal control problems with non-convex control constraints[END_REF][START_REF] Carson | Lossless convexification of Powered-Descent Guidance with non-convex thrust bound and pointing constraints[END_REF], a method using an intermediate slack variable, is often used to overcome this problem. However, with our modeling choices, picking the engine flow as one of the decision variables makes the constraints described in Equation (2.19) convex, since the constraints of the shape "q -≤ q ≤ q + " are used for scalar values of q. Bounded mass The fuel tank being finite, there are upper and more importantly lower bounds on the mass m dry ≤ m ≤ m wet . (2.20) Safety bounds For safety reasons, it is also desirable to remain within limited incidences. This constraint is one of the most important difference between planetary and atmospheric landing. It writes |α| ≤ α max . This constraint has a straightforward interpretation for the 2D model, since it only implies a single control variable, α itself. However, it is more intricate for the 3D model, since α is defined through Equation (2.12) and depends non-linearly on state and control variables. We choose to impose the following constraints for the 3D model |α z | ≤ α max and |α y | ≤ α max , which is not equivalent but conservative. 
Moreover, since the guidance trajectory must be tracked by the underlying rocket control system, it is necessary that this trajectory does not exceed prescribed thresholds of normal accelerations. Thus, we impose |a nor | ≤ a max nor where a nor is defined in Equation (2.6) for the 2D rocket model, and in Equation (2.17) for the 3D one. Constraints not considered Other types of constraints can be found in the literature, such as5 : • Landing cone (a.k.a. glideslope) constraints [START_REF] Açikmeşe | Flight Testing Of Trajectories Computed By G-FOLD: Fuel Optimal Large Divert Guidance Algorithm For Planetary Landing[END_REF][START_REF] Eren | Constrained Reachability and Controllability Sets for Planetary Precision Landing via Convex Optimization[END_REF][START_REF] Szmuk | Successive Convexification for Real-Time 6-DoF Powered Descent Guidance with State-Triggered Constraints[END_REF], • Pointing (a.k.a. attitude) constraints [START_REF] Eren | Constrained Reachability and Controllability Sets for Planetary Precision Landing via Convex Optimization[END_REF], • Thermal flux (a.k.a. heating rate) constraints [START_REF] Brendel | Optimal guidance for toss back concepts of Reusable Launch Vehicles[END_REF][START_REF] Wang | A Pseudospectral-Convex Optimization Algorithm for Rocket Landing Guidance[END_REF][START_REF] Wang | Optimal Rocket Landing Guidance Using Convex Optimization and Model Predictive Control[END_REF], • Dynamic pressure constraints [START_REF] Brendel | Optimal guidance for toss back concepts of Reusable Launch Vehicles[END_REF][START_REF] Wang | Optimal Rocket Landing Guidance Using Convex Optimization and Model Predictive Control[END_REF]. However, as detailed below, these constraints are not considered here, though this would be possible as natural extensions. Landing cone constraints are not considered due to the thrust dominance assumption. Indeed, the class of rockets studied in this thesis naturally performs landing trajectories with a high slenderness ratio. The same motivation rules out the pointing constraints. Thermal flux constraints are critical mechanical requirements for re-entry problems at hypersonic speeds, when the vehicle directly relies on the atmosphere to brake [START_REF] Bonnard | Optimal control of the atmospheric arc of a space shuttle and numerical simulations with multiple-shooting method[END_REF]. Since the speeds involved for our landing scenarios are high (approximately between 0 and Mach 2) but not hypersonic6 and considering the unusual air flow around the rocket, as depicted in Figure 2.1, the thermal flux constraints are not needed. The same arguments apply for the dynamic pressure constraint. Formulation as an optimal correction problem The PDG problem is formulated as an OCP in free-final time, w.r.t. a reference trajectory. Regarding the notations, whatever the chosen model is (2D or 3D), the dynamic function is noted f . The lower and upper control bounds are respectively denoted u - and u + . The mixed state-control constraints are conveyed by a function c. As mentioned in the Introduction, we consider that all of the above-mentioned constraints can be tuned by a parameter p that allows one to adjust their nominal value. For example, let us say that we parametrize the incidence and the normal acceleration bounds in the planar rocket model, i.e. p = (p 1 , p 2 ) ⊤ = (∆α max , ∆a max nor ) ⊤ . Then, the parameterized constraints become |α| ≤ α max + p 1 and |a nor | ≤ a max nor + p 2 . 
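To fix ideas on this parameterization, here is a tiny sketch, with arbitrary nominal bounds, of how the two path constraints above can be encoded as a function c(·) ≤ 0 whose thresholds are shifted by the negotiation parameters p = (p_1, p_2).

```python
import numpy as np

ALPHA_MAX = np.deg2rad(8.0)   # nominal incidence bound (illustrative value)
A_NOR_MAX = 15.0              # nominal normal-acceleration bound (illustrative, m/s^2)

def c_2d(alpha, a_nor, p):
    """Parameterized path constraints, written as c <= 0, for the planar model:
    the nominal bounds are shifted by p = (delta alpha_max, delta a_nor_max)."""
    return np.array([abs(alpha) - (ALPHA_MAX + p[0]),
                     abs(a_nor) - (A_NOR_MAX + p[1])])

# p = 0 is the nominal landing; a positive p1 relaxes the incidence bound
print(c_2d(np.deg2rad(9.0), 12.0, p=np.zeros(2)))                       # first entry > 0: violated
print(c_2d(np.deg2rad(9.0), 12.0, p=np.array([np.deg2rad(2.0), 0.0])))  # both <= 0: feasible
```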
From a general point of view, we say that all the constraints are parameterized by p. Therefore, the dependency on p of the right-hand side vector b f , the control bounds u -and u + , and the mixed state-control constraint c will be highlighted whenever necessary. It is assumed that p = 0 for a nominal landing. The exact choice of p varies depending on the mission needs, and will be discussed extensively in Chapter 5. Let us consider a reference trajectory, which is described as a quadruplet (x, ū, η, tf ). Such a trajectory can be computed offline, for any given mission, using well-known and possibly time-consuming numerical methods [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF]. It is assumed that this trajectory satisfies the constraints, i.e. that it satisfies the control constraints and c(x(t), ū(t), η, p) ≤ 0 for all times, and that it is dynamically feasible, i.e. that it satisfies the Initial Value Problem (IVP)      x(0) = x0 , ẋ(t) = f (x(t), ū(t), η), ∀t ∈ [0, tf ]. (2.21) At the beginning of the final burn, the gap between the current state and the current dynamics parameters and their reference values is denoted ∆x 0 and ∆η. They are conveyed by the input variable7 ξ. Knowing ξ, and the value of p, the mathematical goal of PDG, when formulated w.r.t. this reference trajectory, is to find a control correction δu and a time-offlight correction ∆t f making the rocket land while satisfying all the constraints and minimizing a certain performance index8 J . For completeness, note that δu belongs to a functional space defined over [0, tf + ∆t f ]. We will consider that this space9 equals U(∆t f ) := L ∞ ([0, tf + ∆t f ], R m ) for the definition below, although we will restrict the problem to a much more specific class of control corrections in Chapter 4. Definition 1 (Infinite dimensional problem, PDG (ξ, p)). Given a reference de- scribed by its quadruplet (x, ū, η, tf ), where ū is defined over [0, tf ], find the optimal time-of-flight change ∆t f and the optimal control correction δu ∈ U(∆t f ) for the optimization problem PDG (ξ, p) defined by min δu,∆t f J (δu, ∆t f ) (2.22a) s.t. ẋ(t) = f (x(t), ū(t . tf /( tf + ∆t f )) + δu(t), η + ∆η), (2.22b ) x(0) = x0 + ∆x 0 (2.22c) A f x( tf + ∆t f ) = b f (p) (2.22d) u -(p) ≤ ū(t . tf /( tf + ∆t f )) + δu(t) ≤ u + (p), (2.22e) c(x(t), ū(t . tf /( tf + ∆t f )) + δu(t), η + ∆η, p) ≤ 0, (2.22f) where conditions (2.22b), (2.22e) and (2.22f) are meant for all t ∈ [0, tf + ∆t f ]. Remark 8. J is assumed strictly convex. Moreover, for null inputs (i.e. ∆x 0 = 0 and ∆η = 0) it has a null minimum (i.e. δu * = 0 and ∆t * f = 0). Remark 9. Written this way, the formulation of PDG (ξ, p) suggests that the bounded mass constraint is enforced over the whole interval [0, t f ]. However, since the engine flow is positive, the mass is decreasing, and thus it is only necessary to enforce the simpler condition m( tf + ∆t f ) ≥ m dry in practice. Written in this format, PDG (ξ, p) falls into the field of perturbation methods for OCPs [START_REF] Bensoussan | Perturbation methods in optimal control[END_REF][START_REF] Deshpande | Directional Input Adaptation in Parametric Optimal Control Problems[END_REF], which is the ground base of Chapter 4. However, this problem is studied for any values of ∆x 0 and ∆η, because we are interested in the solutions of this problem even for non small values of these inputs. Summary In this chapter, we have defined dynamic models of the rocket, for 2D and 3D motions. 
The aerodynamic model is the main difference between the planetary landing problems studied in the literature and atmospheric landing problems. The general, infinite-dimensional version, of the PDG problem has been defined as finding the optimal correction (δu, ∆t f ) w.r.t. a reference trajectory, while satisfying all the problem constraints, for given values of ξ and p. A simpler version of PDG (ξ, p), considering only vertical motion, is considered in Chapter 3 (which can be skipped without loss of continuity), and Chapter 4 presents a method to compute an approximation of the solution of the general problem. Chapter 3 Mathematical properties of the optimal vertical descent Résumé Ce chapitre se concentre sur un cas particulier du problème PDG présenté précédemment : l'optimisation de la consommation de carburant pour l'atterrissage vertical. Comme nous l'avons déjà vu, le rapport entre la translation latérale et la translation verticale est faible. En tant que cas limite, il est tentant d'étudier d'abord le problème purement vertical. Se concentrer sur le problème purement vertical permet plusieurs simplifications : il n'y a plus qu'une seule variable de décision (le débit du moteur) et le modèle aérodynamique est grandement simplifié. Cela rend l'analyse analytique du problème envisageable. Le problème de l'atterrissage atmosphérique vertical optimal en termes de carburant est ici étudié en tant que problème de Commande Optimale en temps final libre. La principale contribution établit la nature de la loi de poussée optimale en consommation de carburant, en étendant les résultats de la littérature sur les problèmes sans atmosphère. Des conditions suffisantes et nécessaires sont fournies qui garantissent la nature Min-Max des extremums normaux. Il est également démontré que les extremales anormales -au sens du principe du maximum de Pontryagin (PMP) -sont soit Min, soit Max. Un sous-produit utile de cette étude est une caractérisation de l'ensemble atteignable pour les atterrissages verticaux. Cette notion sera réutilisée plus tard dans le chapitre 6 à des fins d'évaluation des performances. This chapter focuses on a special case of the PDG problem presented previously: the fuel-optimal vertical landing. As already discussed, the ratio of lateral vs vertical translation is small. As a limit case, it is tempting to study the limit case of the purely vertical problem first. Focusing on the purely vertical problem enables several simplifications: there is only one decision variable left (the engine flow) and the aerodynamic model is greatly simplified. This makes the analytic analysis of the problem tractable. The vertical fuel-optimal vertical atmospheric landing problem is here studied 35 as a free-final time OCP. The main contribution establishes the nature of the fueloptimal thrust program, extending results from the literature on atmosphere-free problems. Sufficient and necessary conditions are provided that guarantee the Min-Max nature of the normal extremals. Abnormal extremals -in the sense of the Pontryagin Maximum Principle (PMP) -are also shown to be either Min or Max. A useful by-product of this study is a characterization of the reachable set for vertical landings. This notion will be re-used latter in Chapter 6 for performance assessment purposes. If necessary, the reader can skip directly to page 49 for a summary of the important results. 
Also, note that this chapter is a detailed version of [START_REF] Ménou | Fuel-optimal program for atmospheric vertical powered landing[END_REF]. Vertical descent Historically, Meditch [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF] and then Shi & Eckstein [START_REF] Shi | An exact solution for optimum controlled soft lunar landing[END_REF] have offered analytic solutions for the (atmosphere-free) vertical Moon landing problem. Since then, due to the spectacular development of reusable launcher technologies, powered landing strategies have been successfully addressed using numerical methods [START_REF] Blackmore | Lossless convexification of control constraints for a class of nonlinear optimal control problems[END_REF][START_REF] Brendel | Optimal guidance for toss back concepts of Reusable Launch Vehicles[END_REF][START_REF] Lee | Constrained Autonomous Precision Landing via Dual Quaternions and Model Predictive Control[END_REF][START_REF] Ross | A review of pseudospectral optimal control: From theory to flight[END_REF][START_REF] Szmuk | Successive Convexification for Real-Time 6-DoF Powered Descent Guidance with State-Triggered Constraints[END_REF][START_REF] Ulybyshev | Optimization of Three-Dimensional Lunar Landing Trajectories and Accessible Area Computation[END_REF]. Due to the non-negligible effects of the atmosphere, the analytic results derived for the Moon landing problem can not be directly adapted to the problem of Earth landing. Yet, analytical results on this problem would still represent valuable assets. On the one hand, analytic solutions are very useful to assess the quality of the numerical methods, by providing well-described reference solutions to standardized problems, see e.g. [START_REF] Bonnard | Optimal control of the atmospheric arc of a space shuttle and numerical simulations with multiple-shooting method[END_REF][START_REF] Souza | An optimal guidance law for planetary landing[END_REF][START_REF] Goddard | A Method Of Reaching Extreme Altitudes[END_REF][START_REF] Graichen | Solving the Goddard problem with thrust and dynamic pressure constraints using saturation functions[END_REF][START_REF] Reynolds | Optimal Planar Powered Descent with Independent Thrust and Torque[END_REF]. Further, when analytical investigations establish the switching structure of the solution, very efficient numerical methods can be employed, using a reduced number of unknown variables [START_REF] Souza | An optimal guidance law for planetary landing[END_REF][START_REF] Reynolds | Optimal Planar Powered Descent with Independent Thrust and Torque[END_REF][START_REF] Trélat | Optimal control and applications to aerospace: some results and challenges[END_REF]. For complex dynamics and high-dimensional systems, obtaining such analytical results is usually considered as out-of-reach [START_REF] Schättler | Geometric Optimal Control: Theory, Methods and Examples[END_REF]. It is thus of importance to select only dominant factors while leaving out unnecessary details in the modeling. Following this modus operandi, we consider a simplified (but not simplistic) representation of the general powered landing problem and establish a non-trivial result. The analysis presented in this chapter considers one key element: the effects of atmosphere. 
The model under study builds upon the variable-mass model of a rocket considered in [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF] and incorporates atmospheric effects in the form of an altitudedependent bias of the thrust only, as introduced in Equation (2.1) in Chapter 2. In this model of the final phase of the powered landing, the thrust generator is always turned on1 and the thrust is upper and lower-bounded in a way that prevents hovering flight (according to the Thrust dominance assumption already presented). Intuitively, one could expect that it is more efficient to wait until the last feasible moment to use maximal thrust, as early efforts trying to slow down the rocket are likely to be less effective due to the varying mass scaling of the dynamics. The contribution of this chapter is to establish conditions under which fuel-optimal vertical powered landing through the atmosphere is indeed of this expected Min-Max nature. The arguments of proof are as follows. Under simple assumptions on the atmosphere pressure model (decreasingness, convexity), the optimal thrust program is first shown to have a Max-Min-Max structure, based on the PMP. Compared to [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF], some sharper differential inequalities on the adjoint states are necessary to obtain a conclusion. Also, both normal and abnormal extremals need to be tackled. Then, using additional inequality constraints derived from the Implicit Function Theorem (IFT), Min-Max structures are proven to be more fuel-optimal than Max-Min-Max structures. These conditions can be checked numerically, over a finite domain. It is also shown that these conditions hold for zero atmosphere (and scarce atmosphere, using a continuity argument), which makes a connection with [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF]. The chapter is organized as follows. In Section 3.1, the dynamics and the powered landing problem are summarized, in harmony with Chapter 2. In Section 3.2, the flight envelope is described based on flow analysis and differential inequalities. In Section 3.3, the optimal thrust program is shown to be Min-Max using the PMP, the IFT and mild assumptions. Finally, we provide numerical details in Section 3.4, and concluding remarks in Section 3.5. Single dimensional rocket model Following Chapter 2, we describe the rocket having a purely vertical motion by its altitude h, speed v and total mass m. The dynamics write ḣ = v, v = -g + T (h, q) m , ṁ = -q where q is the engine flow, and the thrust T defined in Equation (2.1). Because the engine is firing, the sole effect of the atmosphere is through the atmospheric pressure in the expression of T . Recall that negative speed conveys descending movement. During the powered landing, the rocket engine is always firing and q is bounded. In the problem setup under consideration, the engine flow bounds are s.t. the net thrust is always positive, i.e. q -is s.t. a cc := g Isp q --S E max h≥0 P a (h) > 0. (3.1) Normalized dynamics The following normalized variables are introduced u := 2 q -q - q + -q --1, y 1 := h g Isp , y 2 := v g Isp , y 3 := 2m q + -q - where y := (y 1 , y 2 , y 3 ) ⊤ denotes the normalized2 states and where κ := 1 Isp , r := q + + q - q + -q -, π(y 1 ) := P a (g Isp y 1 ) 2S E g Isp (q + -q -) . 
This yields the control-affine dynamics in R 3 ẏ = f (y) + ug(y), ( 3.2) where |u| ≤ 1 and (Altitude) ẏ1 = y 2 (3.3a) (Speed) ẏ2 = r + u -π(y 1 ) y 3 -κ (3.3b) (Mass) ẏ3 = -(r + u). (3.3c) Also, recall that the mass is bounded s.t. m -≤ y 3 ≤ m + . Assumptions specific to the vertical descent The problem under study is also described by the two following assumptions. Assumption 1 (Pressure model properties). The normalized pressure function π, is of class C 2 , and π > 0, π ′ < 0, π ′′ > 0. This assumption is very general and holds for all reference Earth atmosphere models, such as [START_REF] Leslie | The NASA Marshall Space Flight Center Earth Global Reference Atmospheric Model, 2010 Version[END_REF]. Then, the thrust dominance assumption from Chapter 2 is re-written as follows. Assumption 2 (Thrust dominance). ẏ2 ≥ a cc > 0. Assumption 2 implies condition (3.1), shows that the ratio r is greater than 1 and it also prevents hovering. Reaching null speed at a positive altitude is thus an undesired behavior and is not a steady state. Optimal Control Problems A natural goal for rocket landing is to maximize the final mass [START_REF] Meditch | On the problem of optimal thrust programming for a lunar soft landing[END_REF], or equivalently to minimize the fuel consumption. Landing is defined as final null altitude and (vertical) velocity. A constrained optimal control problem in free final time depending on an initial state y 0 can then be formulated. Problem 1 (Fuel optimal landing with state inequality path constraints). min u(.),t f t f 0 r + u(s)ds (3.4a) s.t. ẏ = f (y) + ug(y), (3.4b) |u| ≤ 1 (3.4c) y(0) = y 0 , y 1 (t f ) = y 2 (t f ) = 0 (3.4d) y 1 (t) ≥ 0, y 2 (t) ≤ 0, y 3 (t) ∈ [m -, m + ] (3.4e) State constraints (3.4e) are meant for any t in [0, t f ]. Additionally, we will consider another formulation where the state constraints (3.4e) have been removed, as they will be shown to be automatically satisfied. Problem 2 (Fuel optimal landing). min u(.),t f t f 0 r + u(s)ds s.t. ẏ = f (y) + ug(y), |u| ≤ 1 y(0) = y 0 , y 1 (t f ) = y 2 (t f ) = 0 Studying Problem (2) will help us describe the solutions of Problem 1. Remark 10 (Terminology). In the following proofs, maximal solutions of an ordinary differential equation are the solutions that cannot be extended in time. Premilinaries on the dynamics This section aims at describing conditions under which state path inequalities (3.4e) can be ignored. A detailed study of the dynamics is conducted. First, the altitude and speed dynamics are studied using surfaces of R3 s.t. any trajectory that lands must start between these surfaces. This region is called the flight envelope. Then, the mass constraint is discussed. Let us denote the domain D := R + × R -× (0, m + ]. Below, we say that a trajectory starting at some y 0 ∈ D lands applying the thrust u(.) if it reaches y 1 (t f ) = y 2 (t f ) = 0 for some t f > 0. Note that the minimum mass constraint is not included in the first part of this discussion. Beforehand, remark that there is a unique time T u associated to a control u(.) s.t. y 0 3 - Tu 0 r + u(s)ds = 0. ( 3.5) Since r + u ≥ r -1 > 0, the map t → 1/y 3 (t) is not integrable near T u because of (3.3c). Thus, the maximal solution of (3.2) starting at y 0 ∈ D is defined on the interval [0, T u ). If u ≡ σ is constant, then T σ = y 0 3 /(r + σ). Lemma 1. Let σ ∈ [-1, 1] a constant parameter. For any y 0 2 and y 0 3 , there is a unique y 0 1 (σ, y 0 2 , y 0 3 ) s.t. 
a trajectory starting at (y 0 1 (σ, y 0 2 , y 0 3 ), y 0 2 , y 0 3 ) ⊤ ∈ D lands when applying the constant thrust u ≡ σ. Proof. The maximal solution y of (3.2) with u ≡ σ, starting at y 0 ∈ D, is defined on [0, T σ ). y 2 (.) is continuous, increasing and diverges to +∞ as t tends to T σ . Thus, there is a unique time, denoted t * (y 0 1 ) ∈ [0, T σ ) s.t. y 2 (t * (y 0 1 )) = 0. The IFT applied with Assumption 2, on equation 3 Φ f +σg t * (y 0 1 ), (y 0 1 , y 0 2 , y 0 3 ) ⊤ 2 = 0 (3.6) shows that the application that maps y 0 1 into t * is actually continuous, and differentiable, for all y 0 1 ≥ 0. Then, define η : z ∈ R + → Φ f +σg (t * (z), (z, y 0 2 , y 0 3 ) ⊤ ) 1 ∈ R. (3.7) From the regularity of f + σg, the flow Φ f +σg is continuous and thus η is continuous. Necessarily, η(0) < 0. Moreover, since the acceleration is lower-bounded by a cc , it is possible to find an altitude y crit 1 > 0 large enough s.t. η(y crit 1 ) > 0. Therefore, there is a y * 1 ≥ 0 s.t. η (y * 1 ) = 0. Using t f = t * (y * 1 ), one has y 1 (t f ) = y 2 (t f ) = 0 by construction of η. A comparison argument, as in the proof of Proposition 1, shows that η is actually increasing, proving the uniqueness of y * 1 . It yields y * 1 = y 0 1 (σ, y 0 2 , y 0 3 ) using the above-mentioned variables. Let us denote Σ max (respectively Σ min ) the set of initial conditions s.t. landing is successful, at mass y f 3 ∈ (0, m + ], when applying a constant maximum (resp. minimum) thrust. Denoting y max 1 (y 2 , y 3 ) := y 0 1 (1, y 2 , y 3 ), y min 1 (y 2 , y 3 ) := y 0 1 (-1, y 2 , y 3 ), yields Σ max := {(y max 1 (y 2 , y 3 ), y 2 , y 3 ) : y 2 ≤ 0, y 3 ∈ (0, m + ]}, Σ min := {(y min 1 (y 2 , y 3 ), y 2 , y 3 ) : y 2 ≤ 0, y 3 ∈ (0, m + ]}. Moreover, using flows of the backward-time dynamics, for σ ∈ [-1, 1], define Σ σ := Φ -(f +σg) t, (0, 0, y f 3 ) ⊤ : 0 ≤ t ≤ m + -y f 3 r + σ , 0 < y f 3 ≤ m + which provides the relations Σ max = Σ 1 and Σ min = Σ -1 . It is noteworthy that y max 1 (y 2 , y 3 ) ≤ y min 1 (y 2 , y 3 ), implying that Σ max is always "below" Σ min , as pictured in Figure 3.2. Note that the applications y min 1 and y max 1 are continuous: this property stresses the continuity of the flows and the formal definition of Σ σ . Continuity can also be proven using the IFT on function η from Equation (3.7), considering y 0 2 and y 0 3 as variables. Proposition 1. For any y 0 ∈ D, if y min 1 (y 0 2 , y 0 3 ) < y 0 1 then for any control u(.) in [-1, 1] the dynamics reaches null speed at a positive altitude. Proof. Consider y 0 ∈ D s.t. y min 1 (y 0 2 , y 0 3 ) < y 0 1 and denote ỹ0 := (y min 1 (y 0 2 , y 0 3 ), y 0 2 , y 0 3 ) ⊤ . Let ỹ be the maximal solution of (3.2) with u ≡ 1 and y be the maximal solution of (3.2) for some measurable function u satisfying |u| ≤ 1 at all times. y starts at y 0 and ỹ at ỹ0 . They are respectively defined on [0, T 1 ) and [0, T u ), where T u ≤ T 1 . Using mass as a time-varying scaling, we get    ẏ1 (t) ẏ2 (t)    ≥ K   t,    y 1 (t) y 2 (t)       :=    y 2 (t) -κ + r-1-π(y 1 (t)) y 0 3 -t(r-1)    for any t ∈ [0, T u ). By construction, ỹ satisfies the equality version of this equation. Thus, comparison Lemma 13 (in Appendix) yields ỹ1 (t) ≤ y 1 (t), ỹ2 (t) ≤ y 2 (t), ∀t ∈ [0, T u ). Since y 2 is continuous, increasing and diverges as t → T u , there is a unique t * ∈ [0, T u ) s.t. y 2 (t * ) = 0. Therefore, y 1 (t * ) ≥ ỹ1 (t * ) ≥ 0. 
Using a Taylor expansion on (3.3a) with the initial conditions shows that the last inequality is strict, whence the proposition. Using a very similar proof, one shows the following result. Proposition 2. For any y 0 ∈ D, if y max 1 (y 0 2 , y 0 3 ) > y 0 1 then for any control u(.) in [-1, 1] the dynamics reaches null altitude at a negative speed. Proposition 1 defines the notion of being too high, meaning that if the rocket starts its powered descent above Σ min (in terms of altitude), then it will either lack fuel before reaching null speed, or go back up before touching the ground and then lack fuel at a non-zero altitude. In both cases, landing fails. Proposition 2 is the exact equivalent for the notion of being too low, meaning that the rocket will hit the ground at a non-zero speed if it starts below Σ max . Further, note that if a trajectory lands s.t. the mass remains in [m -, m + ], then the acceleration is upper-bounded by ācc := -κ + r+1 m -for any positive time. Since the fuel flow is lower-bounded, the mass can remain in [m -, m + ] for at most T max := m + -m - r-1 Therefore, for any positive time, the speeds are lower-bounded by y 2 and the altitudes are upper-bounded by ȳ1 s.t. Consequently, if y 0 ∈ F and Problem (2) has a solution, then altitude and speed constraints are enforced. If y 0 ∈ D\F, then Problems 1 and 2 cannot have solutions. Leaving out the limit cases of Σ max and Σ min , for which landing can be achieved by applying, respectively, the maximum and the minimum thrust, for the whole duration of the flight, we introduce F * := F\(Σ max ∪ Σ min ). (3.10) The following result discusses feasibility of the landing. Optimality will be studied later on in Section 3.3. Proposition 3. If y 0 belongs to F * , then there is always a control u of structure Min-Max that lands. Proof. The Min-Max structure denotes a 2 step sequence starting with minimum value of the control and ending with maximum value. For such y 0 , denote ỹ0 := (y min 1 (y 0 2 , y 0 3 ), y 0 2 , y 0 3 ) ∈ Σ min . Since y 0 ∈ F * , then y max 1 (y 0 2 , y 0 3 ) < y 0 1 < y min 1 (y 0 2 , y 0 3 ). Let us denote y and ỹ the maximal solutions of Equation (3.2) with u ≡ -1, starting respectively at y 0 and ỹ0 . Then, using similar comparisons as in the previous proof, one obtains y 1 (t) < ỹ1 (t) and y 2 (t) < ỹ2 (t) for all positive times. Thus, one deduces that y 1 reaches zero at some time t ′ > 0, before y 2 does. Moreover, the map ξ : t ∈ [0, t ′ ] → y 1 (t) -y max 1 (y 2 (t), y 3 (t)) ∈ R (3.11) is continuous, and satisfies ξ(0) > 0 since the trajectory starts strictly above Σ max , and ξ(t ′ ) < 0 since (0, y 2 (t ′ ), y 3 (t ′ )) is necessarily below Σ max in terms of altitude (recall that y 2 (t ′ ) < 0). Thus, there exists a time t ′′ ≤ t ′ s.t. y(t ′′ ) ∈ Σ max . The desired Min-Max control law equals -1 on [0, t ′′ ) and +1 for times t ≥ t ′′ . As far as the mass is concerned, since it is a continuous decreasing function of time, enforcing the terminal constraint y 3 (t f ) ≥ m -is sufficient to guarantee the mass constraint (3.4e). Therefore, only the simplified Problem (2) needs to be solved. If there is a solution that satisfies y 3 (t f ) ≥ m -, then Problem (1) shares the same solution. Otherwise, if y 3 (t f ) < m -, then Problem (1) has no solutions. Indeed, since the solution is fuel-optimal, there is no other way to land with a greater final mass. 
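To visualize these boundaries numerically, one may integrate the normalized dynamics (3.3) backward in time from the landing point (0, 0, y_3^f) under constant thrust, following the definition of Σ_σ. The sketch below does so with an illustrative exponential stand-in for the normalized pressure π (the actual model is tabulated) and order-of-magnitude values for κ, r and the mass bounds.

```python
import numpy as np
from scipy.integrate import solve_ivp

KAPPA, R = 0.00285, 4.0            # illustrative normalized parameters
M_MINUS, M_PLUS = 458.3, 520.3     # normalized mass bounds

def pi_atm(y1):
    """Illustrative normalized pressure (positive, decreasing, convex)."""
    return 0.62 * np.exp(-y1 / 2.9)

def rhs(t, y, u):
    """Normalized dynamics (3.3) with a constant control u in [-1, 1]."""
    y1, y2, y3 = y
    return [y2, (R + u - pi_atm(y1)) / y3 - KAPPA, -(R + u)]

def sigma_arc(u, y3_final, n=200):
    """Backward-time arc from the landing point (0, 0, y3_final) under constant
    thrust u; its trace lies on Sigma_max for u=+1 and on Sigma_min for u=-1."""
    t_max = (M_PLUS - y3_final) / (R + u)      # mass budget bounds the arc duration
    sol = solve_ivp(lambda t, y: [-d for d in rhs(t, y, u)], (0.0, t_max),
                    [0.0, 0.0, y3_final], t_eval=np.linspace(0.0, t_max, n), rtol=1e-8)
    return sol.y                               # rows: altitude y1, speed y2, mass y3

sig_max = sigma_arc(+1.0, M_MINUS)             # lower boundary of the envelope
sig_min = sigma_arc(-1.0, M_MINUS)             # upper boundary of the envelope
print("Sigma_max arc ends at (y1, y2) =", sig_max[0, -1], sig_max[1, -1])
print("Sigma_min arc ends at (y1, y2) =", sig_min[0, -1], sig_min[1, -1])
```

Sampling the landing mass y_3^f over [m^-, m^+] and plotting such arcs reproduces the two boundary surfaces of the flight envelope.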
Optimal thrust programs This section focuses on Problem (2) exclusively, which, according to the previous discussion gives an answer to Problem (1) or proves its infeasibility. We aim at proving that optimal controls are of Min-Max nature, where one of the min or max arcs may be absent. To establish this result (Theorem 1), we proceed as follows. First, stationary conditions are derived from the PMP. Then, using properties of the second adjoint state variable, the optimal thrust program is shown to be Max-Min-Max. Finally, the first maximum arc is shown to be absent under one (mild) additional assumption on the atmosphere model (Assumption 4). Fuel Optimal Landing Consider y 0 ∈ F. Let u be an optimal thrust program for Problem (2) and y be the corresponding trajectory. Let t f be the time-of-flight. It is assumed that y 3 (t f ) ≥ m -. Thus, from the previous section, y lies in F. The Hamiltonian of Problem ( 2) is defined as H := λ 0 (r + u) + λ ⊤ (f (y) + ug(y)) (3.12) where λ 0 ∈ R and λ : [0, t f ] → R 3 denote the adjoint states. To study the controlaffine Hamiltonian, consider the switching function Γ(t) := λ 0 + λ(t) ⊤ g(y(t)) = λ 0 + λ 2 (t) y 3 (t) -λ 3 (t). (3.13) The PMP, as stated in [84, Thm. 2.2.1], yields (λ 0 , λ(t)) ̸ = 0 R 4 , ∀t ∈ [0, t f ] (3.14a) λ1 = λ 2 π ′ (y 1 ) y 3 (3.14b) λ2 = -λ 1 (3.14c) λ3 = λ 2 y 2 3 (r + u -π(y 1 )) (3.14d) u = -Sgn (Γ(t)) , when Γ(t) ̸ = 0 (3.14e) λ(t f ) = ν 1 ν 2 0 ⊤ , (ν 1 , ν 2 ) ∈ R 2 (3.14f) Equation (3.14a) states the non-triviality of the adjoint states. Following [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF] and [102, Thm. 7.8.1], since the integral cost, the dynamics and the end-point constraints are time-invariant, the Hamiltonian is constant along the extremals and for such a free time, fixed endpoint problem, this constant is zero: H(t) ≡ 0, ∀t ∈ [0, t f ]. (3.15) The optimal pairs (y, u) are called abnormal extremals [START_REF] Agrachev | On Abnormal Extremals for Lagrange Variational Problems[END_REF][START_REF] Montgomery | Abnormal minimizers[END_REF] if λ 0 = 0, and normal extremals if λ 0 ̸ = 0. We now proceed to establish some intermediate results on the adjoint states. Proposition 4. (λ 1 (t), λ 2 (t)) ̸ = (0, 0) for all t ∈ [0, t f ]. Proof. The linear time-varying dynamics of (λ 1 , λ 2 ) is Lipschitz in (λ 1 , λ 2 ) and continuous in time. Therefore, from the Cauchy-Lipschitz theorem, any maximal solution is unique. Thus, if there is a t 0 s.t. (λ 1 , λ 2 )(t 0 ) = 0, then λ 2 ≡ 0 over [0, t f ], implying λ 3 ≡ 0 from (3.14d) and (3.14f) and then λ 0 = 0 from (3.15), violating (3.14a). Now, remark that the sign of λ 2 and λ2 can be extrapolated from the following second-order equation λ2 = a(t)λ 2 , where a(t ) := - π ′ (y 1 (t)) y 3 (t) > 0. (3.16) Indeed, the cones R + × R + and R -× R -are both invariant through the dynamics (3.16). This shows that if λ 2 or λ2 is null at some t λ ∈ (0, t f ), then they will both remain in one of these cones after t λ . Further, from Proposition 4 and (3.16), they will actually remain in interior subsets of these cones for times t > t λ . Hence, by an exhaustive enumeration of possible cases we can state the following result. Proposition 5. (λ 2 , λ2 ) necessarily match one of these conditions, as illustrated in 1. λ 2 and λ2 are never zero on (0, t f ): (b) λ 2 > 0 and λ2 < 0 on (0, t f ), (a) λ 2 > 0 and λ2 > 0 on (0, t f ), (c) λ 2 < 0 and λ2 > 0 on (0, t f ), (d) λ 2 < 0 and λ2 < 0 on (0, t f ), 2. 
There is a unique t λ ∈ (0, t f ) s.t. λ 2 (t λ ) = 0 and λ2 ̸ = 0 on [0, t f ]: (a) Sgn (λ 2 (t)) = -Sgn (t -t λ ) and λ2 < 0, (b) Sgn (λ 2 (t)) = Sgn (t -t λ ) and λ2 > 0, 3. There is a unique t λ ∈ (0, t f ) s.t. λ2 (t λ ) = 0 and λ 2 ̸ = 0 on [0, t f ]: (a) Sgn λ2 (t) = -Sgn (t -t λ ) and λ 2 < 0, (b) Sgn λ2 (t) = Sgn (t -t λ ) and λ 2 > 0. Note that for scenarios 1a and 1d (resp. scenarios 1b and 1c), either λ 2 or λ2 can be zero at t = 0 (resp. at t = t f ). Also, note that for scenario 2, λ 2 is necessarily non-zero at t = 0 and t = t f since λ2 is of constant sign. The same kind of remark applies to scenario 3 as well. The goal is to state whether these scenarios are consistent with conditions (3.14a)-(3.14f), and if so to what control structure they refer to. Proposition 6. Abnormal extremals are optimal programs of constant thrust. Proof. For abnormal extremals, λ 0 = 0. Using Equation (3.15) at t = t f yields ν 2 = 0 = λ 2 (t f ). Thus, from Proposition 5, λ 2 has a constant non-zero sign over [0, t f ). Moreover, from (3.14d) and (3.14f), one has Sgn (λ 3 (t)) = -Sgn (λ 2 (t)) , ∀t ∈ [0, t f ). (3.17) Therefore, for any t ∈ [0, t f ): Sgn(Γ(t)) = Sgn(λ 2 ). Hence, u has a constant value in {-1, +1} over [0, t f ). This latest proposition shows that abnormal extremals require the initial state y 0 to be on constant thrust trajectories achieving landing, i.e y 0 ∈ Σ max or y 0 ∈ Σ min must hold for these extremals. From now on, we consider normal extremals only, and, without loss of generality 4 , we consider λ 0 = 1. Let us define b(t) := π(y 1 (t)) y 3 (t) Note that from Assumption 1 and the sign of y 2 , one can show that a(.) and b(.) are increasing from a study of their derivatives. Also, for all times in [0, t f ], a and b are respectively lower and upper-bounded by a := - π ′ (ȳ 1 ) m + and b := π(0) m -. (3.18) Let us define γ(t) := λ2 (t) + λ 2 (t)b(t), which satisfies dΓ dt (t) = Γ ′ (t) = γ(t) y 3 (t) (3.19) Since y 3 is positive, γ carries the sign of Γ ′ . From this point, Γ is the subject of our investigations. Lemma 2. If γ < 0 over (0, t f ), then Γ is null at most on a single t ∈ [0, t f ]. Lemma 3. Γ < 0 in the left-neighborhood of t f . Proof. Equation (3.15) at t f yields ν 2 = -(r + u(t f ))/ ẏ2 (t f ). Thus, one gets Γ(t f ) = - κy 3 (t f ) + π(0) y 3 (t f ) ẏ2 (t f ) < 0. The conclusion follows from the continuity of Γ(.). λ 2 must be non-positive in a neighborhood of t f . Indeed, let us assume that there is a time t ′ s.t. λ 2 is positive on [t ′ , t f ). Note that λ 2 (t f ) may be null. Then, using (3.14d) and (3.14f), λ 3 would necessarily be negative on [t ′ , t f ), leading to Γ(t) > 0 for t in [t ′ , t f ], which contradicts Lemma 3. This eliminates scenarios 1a, 1b, 2b and 3b. Moreover, note that scenario 1d necessarily corresponds to Min-Max programs, where one arc may be absent, for it satisfies Lemma 2. Then, the three remaining scenarios, namely 1c, 2a and 3a, require a refined sign study of λ 2 and λ2 . Using differential equations bounding λ 2 , we can establish bounds on γ tight enough to derive valuable sign information. Definition 2. For a constant c > 0 and t 0 ∈ (0, t f ), the C 2 scalar function x c is defined over [0, t f ] as the unique solution of the initial value problem ẍc = cx c with x c (t 0 ) = λ 2 (t 0 ) and ẋc (t 0 ) = λ2 (t 0 ) which yields x c (t) = λ 2 (t 0 ) cosh √ c(t -t 0 ) + λ2 (t 0 ) √ c sinh √ c(t -t 0 ) . 
Inspired from the definition of γ (from (3.19) and before), let us denote γ c (t) := ẋc (t) + x c (t)b(t) (3.20) and introduce z λ := (λ 2 , λ2 ) ⊤ and z := (x a , ẋa ) ⊤ s.t. żλ = F (t, z λ ) :=    0 1 a(t) 0    z λ . (3.21) The next proofs require the following assumption. Proof. Here, λ2 < 0 and Sgn (λ 2 (.)) = -Sgn (.t λ ), where t λ ∈ (0, t f ). In this proof only, we consider the functions from Definition 2 with t 0 = t λ . It leads to γ a (t) = λ2 (t λ ) cosh ( √ a(t -t λ )) + b(t) √ a sinh ( √ a(t -t λ )) . For Lemma 3 and Proposition 7 imply that the sign of Γ changes at most once over [0, t f ] for scenario 2a. Lemma 4. If λ 2 < 0 over [0, t f ], if γ(t γ ) = 0 for some t γ ∈ (0, t f ) and if Assump- tion 3 holds, then: γ(t) < 0, ∀t > t γ . Proof. By construction λ 2 (t γ ) = -λ2 (t γ )/b(t γ ). Necessarily, λ2 (t γ ) > 0. In this proof only, we consider the functions from Definition 2 with t 0 = t γ . It yields γ a (t) = λ2 (t γ ) 1 - b(t) b(t γ ) cosh ( √ a(t -t γ )) + b(t)b(t γ ) -a b(t γ ) √ a sinh √ c(t -t γ ) Since b increases, the factor associated to the cosh term is negative. Also, Assumption 3 yields b(t)b(t γ ) -a ≤ (b(t) + √ a)(b(t) - √ a) < 0 (3. Optimality of Min-Max Programs We shall now discuss under which conditions Min-Max trajectories are always more fuel-optimal than Max-Min-Max trajectories, for some y 0 ∈ F * . Let us consider a trajectory y starting at y 0 , with thrust structure Max-Min-Max. Denote t 1 its first time of switch (from max to min). The last max arc may be of null duration. Then, for every time t ′ 1 ∈ [0, t 1 ], there is a trajectory with thrust structure Max-Min-Max, with first time of switch t ′ 1 , that lands, which is guaranteed by applying Proposition 3 at t ′ 1 . Below, we derive conditions under which the trajectory having the smallest first time of switch has the highest final mass, showing that the Min-Max trajectory starting from y 0 is fuel-optimal. The second time of switch, denoted t 2 , and the final time t f are implicitly imposed by t 1 so that the rocket lands. This relation will be given later. For the time being, note that the final mass, denoted y f 3 , satisfies y 0 3 -y f 3 (t 1 ) = (r + 1)t 1 + (r -1)(t 2 (t 1 ) -t 1 ) + (r + 1)(t f (t 1 ) -t 2 (t 1 )). (3.26) The first two components of y are collected in µ(y), i.e. µ(y) := (y 1 , y 2 ) ⊤ . The landing condition is simply µ(y(t f )) = 0. Define L(τ 1 , τ 2 , τ f ) := µ Φ f +g (τ f -τ 2 , Φ f -g (τ 2 -τ 1 , Φ f +g (τ 1 , y 0 ))) . Then, the landing condition boils down to L(t 1 , t 2 , t f ) = 0. ( 3 dt 2 dt 1 , dt f dt 1 ⊤ = - ∂L ∂[t 2 , t f ] -1 • ∂L ∂t 1 . (3.28) To express these derivatives w.r.t. t 1 , intermediate quantities are introduced. The transition matrices M (t f ) and N (t 2 ) are respectively defined as the unique solutions to the matrix initial value problems Ṁ (t) = ∂(f + g) ∂y (y(t)) • M (t) and M (t 2 ) = I 3 , (3.29) Ṅ (t) = ∂(f -g) ∂y (y(t)) • N (t) and N (t 1 ) = I 3 . (3.30) Let us define R 1 , R 2 , S 1 and S 2 by R 1 R 2 ⊤ := µ (M (t f ) • (f -g)(y(t 2 ))) , (3.31) S 1 S 2 ⊤ := µ (M (t f ) • N (t 2 ) • (f + g)(y(t 1 ))) . (3.32) Since Assumption 2 holds, the invertibility condition of ∂L ∂[t 2 ,t f ] needed to apply the IFT boils down to R 1 ̸ = 0. Then, one can provide a detailed version of (3.28) dt 2 dt 1 = 1 - S 1 R 1 , ( 3.33 ) dt f dt 1 = 1 - R 1 S 2 -R 2 S 1 + S 1 ẏ2 (t f ) ẏ2 (t f )R 1 . 
(3.34) Using the previous terms with Equation (3.26) yields dy f 3 dt 1 (t 1 ) = r + 1 ẏ2 (t f )R 1 R 1 (S 2 -ẏ2 (t f )) + S 1 ẏ2 (t f ) r -1 r + 1 -R 2 . (3.35) The conditions that enable us to state that Min-Max thrust programs are always more fuel-optimal than the Max-Min-Max ones, by allowing us to apply the IFT on L, are thus conveyed by the assumption below Assumption 4. The parameter defined in (3.31) is s.t. R 1 ̸ = 0, and one has dy f 3 dt 1 (0) < 0 for any y 0 ∈ F * s.t. the rocket lands at y 3 (t f ) ≥ m -. Note that, since it is formulated for any y 0 ∈ F * , it is sufficient to check these conditions for t 1 = 0 only. Moreover, these conditions can be either checked through (3.35), analytically -if the pressure model is known well enough and tractable -or numerically. Remark 12. For illustration purposes only, let us check the validity of Assumption 4 when there is no atmosphere. When π ≡ 0, every term from (3.33) and (3.34) can be explicitly written using the fact that r+u y 3 = -ẏ3 y 3 for intermediate integrations, which yields R 1 = 2 r + 1 1 - y 3 (t f ) y 3 (t 2 ) + log y 3 (t f ) y 3 (t 2 ) , R 2 = -κ + r -1 y 3 (t f ) , S 1 = - 2 r -1 log y 3 (t 2 ) y 3 (t 1 ) , S 2 = -κ + r + 1 y 3 (t f ) . R 1 is negative since y 3 (t 2 ) > y 3 (t f ). Thus, (3.35) becomes dy f 3 dt 1 (t 1 ) = - 4κ ẏ2 (t f )(r -1) 1 R 1 log y 3 (t 2 ) y 3 (t 1 ) < 0. (3.36) The negativity of this quantity gives the desired conclusion. By continuity, the assumption also holds for scarce atmospheres. Further, an example based on a nonscarce tabulated pressure model is treated in Section 3.4. Main result Under Assumptions 1, 2, 3, and 4, if the final mass y f 3 of the landing Min-Max trajectory, starting from a y 0 in F, satisfies y f 3 ≥ m -, then the optimal thrust program of Problem ( 1) is Min-Max, where one arc may be absent. Conversely, if 1) has no solution. Henceforth, it is possible to describe the whole set of feasible initial conditions. Define y f 3 < m -or if y 0 / ∈ F, then Problem ( Ω y f 3 := Φ -(f -g) τ 1 ,Φ -(f +g) τ 2 , (0, 0, y f 3 ) ⊤ : τ 1 ≥ 0, τ 2 ≥ 0, (r -1)τ 1 + (r + 1)τ 2 ≤ m + -y f 3 which denotes the set of states landing at final mass y f 3 ≤ m + applying a Min-Max control. Minimum (resp. maximum) arcs last for τ 1 (resp. τ 2 ). Thus, the solution set F sol of the initial conditions y 0 s.t. Problem (1) has a solution is F sol := m -≤y f 3 ≤m + Ω y f 3 (3.37) The following theorem summarizes this discussion. y 2 < 0 0 M a s s ( y 3 ) y - 3 y + 3 Altitu de (y 1 ) 0 y 1 > 0 Σ max Σ min Ω(y - 3 ) Figure 3.2: Flight envelope. F sol is delimited by Σ max , Σ min , Ω y - 3 and closed by the constraint y 3 ≤ y + 3 on the last side. For Σ max and Σ min , only the trajectories that land with a mass y 3 (t f ) ≥ m -are represented. The vertical axis conveys the altitude to ease the visualization. Numerical illustrations Let us consider the following (normalized) parameters κ = 0.00285 s -1 , r = 4.0, m -= 458.3 s, m + = 520.3 s. In this example, the engine can be used at 60-100% of its maximum flowrate. Also, κ is taken close to the values of actual reusable launcher engines [START_REF] Onel | Liquid rocket engine performance assessment in the context of small launcher optimisation[END_REF], such as the Merlin (Falcon 9) or the BE-4 (New Glenn). We consider a pressure model describing Earth's atmosphere from tabulated values, satisfying Assumption 1, s.t. π(0) = 6.2 × 10 -1 . Assumption 2 and 3 are satisfied since a cc = 2.90 × 10 -3 > 0 and b/ √ a = 3.37 × 10 -1 < 1. 
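Since Ω y f 3 and hence F sol in (3.37) are defined through backward flows from the landed state, they can be sampled numerically. The following Python sketch illustrates this construction only; the vector fields f and g of the vertical model (3.2) and the constants r, m + are assumed to be provided by a hypothetical module vertical_model and are not re-implemented here.

```python
# Minimal sketch: sample Omega_{y3f} from (3.37) by backward integration.
# The vector fields f and g of the vertical model (3.2) and the constants
# r, m_plus are assumed to come from a hypothetical module 'vertical_model'.
import numpy as np
from scipy.integrate import solve_ivp
from vertical_model import f, g, r, m_plus

def backward_flow(field, tau, y_end):
    """Phi_{-field}(tau, y_end): the state reached tau units of time 'before' y_end."""
    if tau <= 0.0:
        return np.asarray(y_end, dtype=float)
    sol = solve_ivp(lambda t, y: -field(y), (0.0, tau), y_end, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

def sample_omega(y3_f, n=30):
    """Grid the Min/Max backward arc durations under (r-1)tau1 + (r+1)tau2 <= m+ - y3_f."""
    y_land = np.array([0.0, 0.0, y3_f])          # landed state (y1, y2, y3) = (0, 0, y3_f)
    pts = []
    for tau2 in np.linspace(0.0, (m_plus - y3_f) / (r + 1.0), n):
        y_mid = backward_flow(lambda y: f(y) + g(y), tau2, y_land)        # undo final Max arc
        tau1_max = (m_plus - y3_f - (r + 1.0) * tau2) / (r - 1.0)
        for tau1 in np.linspace(0.0, tau1_max, n):
            pts.append(backward_flow(lambda y: f(y) - g(y), tau1, y_mid))  # undo Min arc
    return np.array(pts)
```

Taking the union of such samplings over y f 3 ∈ [m -, m + ] reproduces the region delimited by Σ max, Σ min and Ω(y - 3) shown in Figure 3.2.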
F sol is pictured in Figure 3.2 for these values. Assumption 4 is then checked numerically, by computing R 1 and dy f 3 dt 1 (0) Comments The results presented above call for a few comments. Vertical flight envelope applications u(y) =            1 if y ∈ Σ max , -1 if y ∈ Σ min ∪ Ω(y - 3 ), v(y) otherwise, guarantees that a rocket starting inside the vertical flight envelope will land at null vertical speed. This could be used to design safe-set control laws [START_REF] Garcia | A Comprehensive Survey on Safe Reinforcement Learning[END_REF][START_REF] Wabersich | Safe exploration of nonlinear dynamical systems: A predictive safety filter for reinforcement learning[END_REF]. Also, in terms of software design, using the structure where a control "u" supervises another control "v" would help increase the Run-Time Assurance of the rocket G&C system [START_REF] Clark | A Study on Run Time Assurance for Complex Cyber Physical Systems[END_REF][START_REF] Sha | Using simplicity to control complexity[END_REF], but it brings us out-of-the-scope of this thesis. Non robustness of the Min-Max trajectory The optimal thrust program being of a bang-bang nature, it is non-robust to some sources of uncertainty, due to the proximity of the ground. For instance, if the switch from the Min to the Max arc is delayed, the rocket necessarily comes out of the vertical flight envelope, which guarantees that there are no thrust programs that allow a proper landing. Given the uncertain and complex environment in which the actual rocket final burn occurs, it can be of high interest to consider more robust thrust programs, that are not too close to the actuators limits, or, in other words, not too close to the vertical flight envelope boundary. dt 1 (0) while dispersing most of the involved parameters, make us conjecture that Assumption 4 could be made much weaker: it is sufficient but not necessary. Conjecture regarding assumption 4 Recent results on the topic The results presented in this chapter correspond to the paper [START_REF] Ménou | Fuel-optimal program for atmospheric vertical powered landing[END_REF] published in 2021. Since then, Leparoux et al. published an article on a really close topic [START_REF] Leparoux | Structure of optimal control for planetary landing with control and state constraints[END_REF]. They explored the structure of optimal thrust programs of 3D rocket models for planetary landing. Among others, they showed that the optimal thrust programs were generally of the Max-Min-Max nature, and that it was not sensitive to several model changes. Among the model changes that they considered, they used an atmospheric model that resembles the thrust bias law T = gIspq -S E P (h) that we use in this manuscript, but with a constant pressure term instead. Other approach for optimal thrust programs The optimal control problem studied in this chapter has a state of dimension 3, which makes the theoretical results from H. J. Sussmann and H. Schättler applicable as well [START_REF] Schättler | The Local Structure of Time-Optimal Trajectories in Dimension Three under Generic Conditions[END_REF][START_REF] Schättler | Geometric Optimal Control: Theory, Methods and Examples[END_REF][START_REF] Sussmann | Time-optimal control in the plane[END_REF][START_REF] Sussmann | Lie Brackets and real analiticity in control theory[END_REF]. 
They provided a variety of results based on the Lie brackets of the functions f and g involved in Equation (3.2), which can help characterize the optimal solution nature of Problem ( 2), but for a minimum-time criterion only. Chapter 4 Nominal guidance via Quadratic Programming Résumé Le problème général de guidage par descente motorisée PDG (ξ, p), défini dans l'équation (2.22), est un OCP, c'est-à-dire un problème d'optimisation de dimension infinie, en temps libre. Dans ce chapitre, ce problème est réécrit sous la forme d'un problème de dimension finie, aussi simple que possible, afin qu'il puisse être résolu rapidement et de manière fiable en vol. Cette approche est la méthode de guidage par descente motorisée que nous proposons. Tout d'abord, PDG (ξ, p) est réécrit sous la forme d'un problème d'optimisation non-linéaire (NLP) paramétrique de faible dimension. Cette réécriture nécessite, entre autres, une description des variables d'état basée sur le flot d'équations différentielles ordinaires (ODE), une représentation à dimension finie de la variable de contrôle, et une remise à l'échelle de la variable temporelle pour tenir compte des variations du temps final. Deuxièmement, la solution de ce dernier NLP est approximée par une expansion directionnelle du premier ordre, en utilisant les résultats classiques de l'analyse de sensibilité des NLP. Étant donné que des variations générales et non infiniment petites des paramètres ξ et p doivent être prises en compte dans l'application, il est nécessaire de traiter les changements dans l'ensemble des contraintes actives. Il s'agit là d'une caractéristique essentielle de l'analyse de sensibilité. Une méthode de calcul basée sur la programmation quadratique (QP) est décrite pour traiter cette question. Enfin, des commentaires importants concernant l'utilisation offline/online de ce QP sont discutés et illustrés par trois exemples numériques. La méthode de guidage nominal décrite ici est générale et peut être appliquée aux problèmes 2D et 3D. 53 The general Powered Descent Guidance problem PDG (ξ, p), defined in Equation (2.22), is an OCP, i.e. an infinite dimensional optimization problem, in free-final time. In this chapter, this problem is re-written as a finite dimensional problem, as simple as possible, so that it can be solved quickly and reliably in-flight. This approach is our proposed nominal Powered Descent Guidance method. First, PDG (ξ, p) is re-written as a low dimensional parametric Non-Linear Program (NLP). Among others, this rewriting requires a description of the state variables based on the flow of Ordinary Differential Equations (ODEs), a finitedimensional representation of the control variable, and a re-scaling of the time variable to account for the variations of the free-final time. Second, the solution of the latter NLP is approximated by a directional first-order expansion, using classic sensitivity analysis results of NLPs. Because general and non-infinitely small variations of the parameters ξ and p must be considered in the application, it is necessary to deal with changes in the set of active constraints. This appears as a critical feature of the sensitivity analysis. A computational method based on Quadratic Programming (QP) is described to handle this. Finally, important comments regarding the offline/online use of this QP are discussed, and illustrated on three numerical examples. The nominal guidance method described here is general, and can be applied to both the 2D and 3D problems. 
This chapter is an updated version of [START_REF] Ménou | Sensitivity Analysis for Powered Descent Guidance: Overcoming degeneracy[END_REF], with enhanced examples and a new application to the 3D rocket model. Non-Linear Programming formulation for PDG The goal of this section is to explain how PDG (ξ, p) can be approximated using a NLP with few variables. As recalled in Chapter 1, OCPs are commonly solved using either direct or indirect methods. On one hand, direct methods first discretize the optimization problem and then solve it using NLP techniques. On the other hand, indirect methods consist in formulating infinite dimensional stationary conditions first, and then discretizing them. The former is often more robust but less accurate than the latter. For both approaches, it is highly difficult to guarantee convergence times for complex nonlinear problems. We propose to approximate PDG (ξ, p) using a method in-between these two classic approaches. As used in certain direct methods [START_REF] Hargraves | Direct trajectory optimization using nonlinear programming and collocation[END_REF][START_REF] Kraft | On Converting Optimal Control Problems into Nonlinear Programming Problems[END_REF][START_REF] Vlassenbroeck | A chebyshev polynomial method for optimal control with state constraints[END_REF], we discretize the control variable using an interpolation method. However, we do not use a coarse1 discretization scheme to convey the dynamic equation. The latter is here expressed exactly, as the flow of a certain ODE. A point worth special care is that the ODE is defined over a time domain whose endpoint is an unknown of the problem. This representation will help us form a finite dimensional problem denoted NLP (ξ) thereafter. The sensitivity analysis based method used to solve this latter problem will be the matter of the next section. Here is represented a scalar Cubic Spline, described by its values µ 0 , . . . , µ 3 at several time instances, and by its slopes µ 4 and µ 5 at the starting and end-points. The inequality constraints are enforced on the subdivision τ ′ 0 , . . . , τ ′ Nc . τ Parametric control u µ (τ ) • • • • | τ 0 τ 1 . . . . . . τ Nc τ 0 = 0 τ 1 τ 2 τ 3 = 1 µ 0 µ 1 µ 2 µ 3 µ 4 µ 5 • • Discretization of the decision variable Free-final time First, recall that in PDG (ξ, p), the time-of-flight is an optimization variable implicitly defined by the constraints and the cost. Its change w.r.t. the reference time of flight tf is denoted ∆t f . We scale the time variable t by considering τ := t tf + ∆t f The final time is considered as an extra state of null dynamics: ṫf = 0. The augmented state equals x := (x ⊤ , t f ) ⊤ , and satisfies the dynamics ẋ = f (x, u, η) :=    t f f (x, u, η) 0    where the variables are here defined for times τ ∈ [0, 1]. The unknown ∆t f is now taken into account as an initial condition, s.t. x(0) =    x0 + ∆x 0 tf + ∆t f    To alleviate the writing, the constraint A f x(1) = b f (p) will be written A f x(1) = b f (p) as well, where the latter matrix A f is simply the former matrix A f with an extra column of zeros on the right-hand side. Here, b f (p) remains unchanged. Likewise, the notation c(x, u, η, p) will be used to refer to the former constraint c(x, u, η, p) ≤ 0. Parametric description of the control We choose to describe the infinite dimensional variable δu using a smooth parametric description. Let us describe δu via a function (µ, τ ) ∈ R Nµ × [0, 1] → u µ (τ ) ∈ R m where µ → u µ (τ ) is linear for any fixed τ . 
There is a matrix valued function M (.) s.t. u µ (τ ) = M (τ )µ. This framework encompasses the use of many interpolation methods, from piecewise constant interpolation to Cubic Splines and Hermite polynomials. Our choice is detailed below. For N ≥ 2, consider a subdivision of the normalized time interval [0, 1] denoted by the N + 1 time instances τ 0 = 0 < τ 1 < . . . < τ N = 1. We chose to describe u µ as a Cubic Spline defined on the subdivision (τ i ) 0≤i≤N , as represented in Figure 4.1. These Splines are constructed using the classic method from [START_REF] Kraft | On Converting Optimal Control Problems into Nonlinear Programming Problems[END_REF]Sec. 3.3]. Thus, the vector µ embeds the values u k of the corrections at τ k and the slopes u0 and uN s.t. µ := (u 0 ) ⊤ , (u 1 ) ⊤ , . . . , (u N ) ⊤ , ( u0 ) ⊤ , ( uN ) ⊤ ⊤ ∈ R Nµ . (4.1) where N µ = m(N + 3). The bounds on the engine flow must be discussed differently depending on the engine model considered. On the one hand, for the 2D rocket model, it was decided that the engine flow was directly controlled via q r . Even though this simplifying choice was made for illustration purposes only, choosing a smooth parametric description for u µ has a convenient by-product. Since the controlled flow can be expressed from the real flow s.t. q c = τ q qr + q r , then it will be possible to express exactly the controlled flow, as long as • q r remains differentiable, • τ q qr + q r remains within [q -, q + ], • the estimated value qr (0) of the initial real flow is imposed, i.e.: qr (0) + (q r ) µ (0) = qr (0). On the other hand, the 3D model already takes into account the first-order dynamics on the engine flow. If q -≤ q 0 r ≤ q + and that q -≤ q 0 c ≤ q + , then the real flow will remain within [q -, q + ]. Therefore, imposing the flow bounds via the constraint u -(p) ≤ ū(τ ) + u µ (τ ) ≤ u + (p) is sufficient to guarantee that q c and q r lie in the proper interval. Moreover, it was previously mentioned that the engine flow dynamics was taken into account but not the orientation dynamics, since the time constant of the latter was significantly much smaller than the former. However, to remain as feasible as possible by the real system, it is of high interest to have a continuous and smooth control law description. Picking a parametric description such as the Cubic Splines ensures the smoothness. Moreover, imposing the initial value of the incidence -or the projected incidences for the 3D model -to be the same as the current estimate ensures the continuity. Therefore, we consider a constraint of the shape ū(0) + u µ (0) = û(0), which also writes u µ (0) = ∆u init (4.2) where ∆u init is the gap between the reference control at time 0 and the current control. Remark 14. When converting OCPs to NLPs, Cubic Splines and other similar discretization methods are often used to approximate the control variable and the state itself [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF][START_REF] Fahroo | Direct Trajectory Optimization by a Chebyshev Pseudospectral Method[END_REF][START_REF] Kraft | On Converting Optimal Control Problems into Nonlinear Programming Problems[END_REF][START_REF] Vlassenbroeck | A chebyshev polynomial method for optimal control with state constraints[END_REF]. However, here, only the variable δu is described by finitely many values, and the state will be described exactly, using Equation (4.7) below. The state is not a Cubic Spline. 
This choice is motivated by the need to evaluate with a very high accuracy the states, especially at touchdown. Parametric problem The control variable of the parametric description of our problem is z := (µ ⊤ , ∆t f ) ⊤ . (4.3) It is of dimension N z = N µ + 1. From the previous discussion, let us denote the whole input vector by ξ := (∆x 0 ⊤ , ∆η ⊤ , ∆u init ⊤ ) ⊤ ∈ R N ξ . (4.4) Describing the control correction by u µ (.) and imposing the control constraints u -≤ ū + u µ ≤ u + (4.5) does not necessarily imply that µ is bounded. Indeed, the slopes of u µ at τ = 0 and τ = 1 are not directly bounded by these constraints, especially if these constraints are enforced on a badly chosen subset of time instances. Likewise, neither the bounds from Equation (4.5) nor the other above-mentioned constraints necessarily imply that ∆t f is bounded. To guarantee that both µ and ∆t f remain bounded, additional constraints are imposed on the decision variable z s.t. z low ≤ z ≤ z up . By extending the mixed state-control constraints conveyed by the function c in Equation (2.22f), an approximation of PDG (ξ, p) is min z∈R Nz J(z, ξ) (4.6a) s.t. ẋ(τ ) = f (x(τ ), ū(τ ) + u µ (τ ), η + ∆η), ∀τ ∈ [0, 1] (4.6b) x(0) =    x 0 + ∆x 0 tf + ∆t f    (4.6c) A f x(1) = b f (p) (4.6d) u µ (0) = ∆u init (4.6e) u -(p) ≤ ū(τ ) + u µ (τ ) ≤ u + (p), ∀τ ∈ [0, 1] (4.6f) c(x(τ )), ū(τ ) + u µ (τ ), η + ∆η, p) ≤ 0, ∀τ ∈ [0, 1] (4.6g) z low ≤ z ≤ z up (4.6h) Problem (4.6) has a finite-dimensional decision variable, but an infinite number of constraints. Formulation of the finite dimensional guidance problem Description of the state as the flow of an ODE Our goal is to remove x from the description of (4.6). To this purpose, let us introduce a classic notation used for the flow of the ODE following e.g. [START_REF] Bonnans | Course on Optimal Control, Part I: the Pontryagin approach[END_REF][START_REF] Sontag | Mathematical Control Theory[END_REF]. Definition 3 (Flow of f ). Consider the subsets X ⊂ R n (assumed open), U ⊂ R m (assumed compact) and Ω ⊂ R nη . Given a differentiable function f : X ×U ×Ω → R n , vectors (x 0 , η) ∈ X × Ω and a control function 2 u ∈ L ∞ ([0, 1], U) , the flow of the ODE defined by f is defined using the following Initial Value Problem (IVP) ∀t ∈ [0, 1], x(t) = Φ f t, x 0 , η; u ⇔      x(0) = x 0 ẋ(s) = f (x(s), u(s), η), ∀s ∈ [0, t]. Using this notation, let us describe the extended state x[τ, z, ξ] ∈ R n+1 as x[τ, z, ξ] := Φ f   τ,    x 0 + ∆x 0 t f + ∆t f    , η + ∆η; ū + u µ    , ∀τ ∈ [0, 1]. (4.7) The latter notation x[τ, z, ξ] will prove to be handy when writing the inequality and equality constraints (4.8) and (4.9) below. Note also that the hypothesis of Definition 3 guarantee that x[τ, z, ξ] is uniquely defined. For formal results on the existence and uniqueness of Φ f (t, x 0 , η; u), see e.g. [START_REF] Sontag | Mathematical Control Theory[END_REF]Appendix C.3]. Provided that f is continuously differentiable w.r.t. all of its inputs, since µ → u µ (t) is also assumed continuously differentiable for all times t, then x[τ, z, ξ] is continuously differentiable w.r.t. all of its inputs. For detailed properties of Φ, see Appendix A.2.2 recalling some useful classic results. PDG as a NLP To alleviate the writing, let us first consider that the constraint parameter p equals zero (the case with p ̸ = 0 will be handled afterwards). The last ingredients that must be discretized in (4.6) are the inequality constraints. 
Indeed, constraints (4.6f) and (4.6g) are defined on an infinite number of points. We decide to enforce these constraints on a number of N c + 1 times instance τ ′ 0 = 0 < τ ′ 1 < . . . < τ ′ Nc = 1 s.t. the subdivision (τ ′ i ) 0≤i≤Nc is an uniform oversampled version of (τ i ) 0≤i≤N , as shown in Figure 4.1. In other words, every interval [τ i , τ i+1 ] is split into several sub-intervals, and the constraints (4.6f) and (4.6g) are enforced at their borders. Thus, the discretized version of the inequality constraints can be re-written as h(z, ξ) ≤ 0 where h(z, ξ) :=                           ū(τ ′ 0 ) + u µ (τ ′ 0 ) -u + (0) u -(0) -(ū(τ ′ 0 ) + u µ (τ ′ 0 )) c (x[τ ′ 0 , z, ξ], ū(τ ′ 0 ) + u µ (τ ′ 0 ), η + ∆η, 0) . . . ū(τ ′ Nc ) + u µ (τ ′ Nc ) -u + (0) u -(0) -(ū(τ ′ Nc ) + u µ (τ ′ Nc )) c x[τ ′ Nc , z, ξ], ū(τ ′ Nc ) + u µ (τ ′ Nc ), η + ∆η, 0 z -z up z low -z                           . (4.8) Moreover, using the same notations, the equality constraints (4.6d) and (4.6e) are conveyed by the condition g(z, ξ) = 0 where g(z, ξ) :=    A f x[1, z, ξ] -b f (0) u µ (0) -∆u init    . ( 4.9) Thus, we get an expression of the finite-dimensional constraints approximating PDG (ξ, 0), i.e. for the special case where p = 0. The general case where p ̸ = 0 is then straightforward. Indeed, there is no loss of generality in assuming that the constraint parameter p has a linear influence on the constraints of the original optimization problem, as explained below. For instance, for the 2D rocket model, the final horizontal position in the terminal constraint (4.6d) can be parametrized by a variable ∆z f , s.t. A f x(1) = b f + 0 0 ∆z f 0 ⊤ . Likewise, to negotiate the incidence limit, the control bounds are changed by a variable ∆α max s.t. u --(0, ∆α max ) ⊤ ≤ ū(t) + u µ (t) ≤ u + + (0, ∆α max ) ⊤ . Therefore, using this assumption of linear influence of p, we assume that there are matrices H p and B p s.t. the original constraints h(z, ξ) ≤ 0 and g(z, ξ) = 0 actually write h(z, ξ) ≤ H p p and g(z, ξ) = B p p when p ̸ = 0. The matrices H p and B p are basically filled by zeros and a few ones. To sum up all the preceding steps, PDG (ξ, p) is converted into a NLP as follows • change the time variable from t ∈ [0, t f ] to τ ∈ [0, 1], • consider a parametric description u µ (.) of the infinite dimensional variable δu, • describe the dynamic equation and the initial condition constraints through Equation (4.7), • enforce the inequality constraints on the time instances τ ′ i for i = 0, . . . , N c , instead of enforcing them for all τ ∈ [0, 1], • enforce bounds on the decision variable z. not usually recommended from a numerical point of view (e.g. [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF]). Any iterative method aiming at solving NLP (ξ, p) (e.g. Successive Quadratic Programming [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF]) requires the evaluation of x[τ, z, ξ] and its derivatives at each iteration, which means solving multiple ODEs at each iteration. However, as it will be detailed below, our goal is to provide an approximation of the solutions of NLP (ξ, p) using a (directional) first-order expansion. Therefore, it is only needed to evaluate these computationally expensive terms once and offline, making the use of x[τ, z, ξ] appropriate in this context. 
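To fix ideas, the two numerical ingredients of NLP (ξ, p), namely the clamped Cubic Spline parametrization u µ of (4.1) and the exact flow evaluation x[τ, z, ξ] of (4.7), can be assembled as follows. This is a minimal Python sketch for a single control channel; the dynamics f, the reference (x̄ 0, ū, η̄, t̄ f) and all numerical values are placeholders (a toy double integrator), not the rocket models of Chapter 2, and in practice the derivatives of x̃ w.r.t. z and ξ are obtained from the transition-matrix formulas of the Appendix rather than by repeated calls to this routine.

```python
# Minimal sketch: evaluate x[tau, z, xi] of (4.7) for one scalar control channel.
# The correction delta_u is the clamped Cubic Spline u_mu of (4.1) (values at
# tau_0..tau_N plus the two end slopes); the state is the exact flow of the
# time-scaled dynamics (4.6b)-(4.6c), not an interpolated quantity (Remark 14).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp

def u_mu_factory(taus, mu):
    values, d0, dN = mu[:-2], mu[-2], mu[-1]          # cf. (4.1)
    return CubicSpline(taus, values, bc_type=((1, d0), (1, dN)))

def evaluate_state(tau, z, xi, f, x_ref0, u_ref, eta_ref, tf_ref, taus):
    """Augmented state (x(tau), t_f) obtained by integrating the scaled ODE up to tau."""
    mu, dtf = z[:-1], z[-1]
    dx0, deta, _du_init = xi                          # (delta x_0, delta eta, delta u_init)
    u_mu = u_mu_factory(taus, np.asarray(mu))
    t_f = tf_ref + dtf
    rhs = lambda s, x: t_f * f(x, u_ref(s) + u_mu(s), eta_ref + deta)
    sol = solve_ivp(rhs, (0.0, tau), x_ref0 + dx0, rtol=1e-10, atol=1e-12)
    return np.append(sol.y[:, -1], t_f)

# Toy example (double integrator standing in for the rocket dynamics of Chapter 2):
f_toy = lambda x, u, eta: np.array([x[1], u + eta])
taus = np.linspace(0.0, 1.0, 4)                       # subdivision with N = 3 (four nodes)
z = np.array([0.0, 0.1, 0.0, -0.1, 0.0, 0.0, 0.2])    # (mu, delta t_f)
xi = (np.zeros(2), 0.0, 0.0)                          # (delta x_0, delta eta, delta u_init)
print(evaluate_state(1.0, z, xi, f_toy, np.zeros(2), lambda s: 0.0, 0.0, 1.0, taus))
```

The key point, consistent with Remark 14, is that only the control correction is interpolated; the state is obtained by integrating the ODE to whatever accuracy is required, especially near touchdown.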
Sensitivity analysis for degenerate parametric NLP Without loss of generality, to simplify the exposition, we consider in this section the special case where p = 0. The direct extension to the case p ̸ = 0 will be discussed in Section 4.3. Let us study the standard problem (4.10), but where the constraint right-hand sides have been simplified s.t. NLP (ξ) :=            min z J(z, ξ) s.t. h(z, ξ) ≤ 0, g(z, ξ) = 0. In fact, NLP (ξ) is a parametric NLP formulated in a standard form [25, Eq. ( 1)]. When it is too hard to compute its exact solution, an alternative approach is to provide a reasonable approximation of it w.r.t. the parameter ξ, near a known solution. This section aims at answering the three following questions. First, if z * denotes the optimal solution of NLP Then, how can we compute this (local) expansion? Finally, how can this be applied to solve NLP (ξ) non-locally (at least approximately)? An introductory toy example Let us consider a low-dimensional example that will help us illustrate the challenges of NLP sensitivity. This toy problem has only two variables and one parameter ξ which appears non-linearly. Basic Example 1. For a scalar parameter ξ > 0, consider the parametric NLP min x∈R 2 1 2 (x 2 1 + x 2 2 ) (4.11) s.t. c 1 (x, ξ) = -x 1 -x 2 + ξ 1 + ξ 2 ≤ 0 (4.12) c 2 (x, ξ) = x 1 -x 2 1 + ξ 2 -ξ ≤ 0 (4.13) This problem posseses a unique solution and can be solved analytically. The solution is illustrated in Figure 4.2. For ξ > 0, the constraint c 1 is active (with associated multiplier λ 1 ) and the optimum is x * 1 (ξ) = ξ 2 1 + ξ 2 , x * 2 (ξ) = ξ 2 1 + ξ 2 , λ 1 (ξ) = ξ 2 1 + ξ 2 . For ξ < 0, the constraint c 2 is active (with associated multiplier λ 2 ) and the optimum is x * 1 (ξ) = ξ 2 + ξ 2 , x * 2 (ξ) = -ξ 1 + ξ 2 2 + ξ 2 , λ 2 (ξ) = - ξ 2 + ξ 2 . Finally, for ξ = 0, since the point (x 1 , x 2 ) = (0, 0) globally minimizes the cost and satisfies the constraints, it is the optimum. Both constraints are active, and the associated multipliers equal 0. A first remark is that for all values ξ ̸ = 0, the function ξ → x * (ξ) is continuously differentiable. This is pictured in Figure 4.2, and corresponds to the smooth parts of the black curve. Also, note that the multipliers of the active constraints are positive for ξ ̸ = 0. When ξ = 0, then λ 1 = λ 2 = 0 even though both constraints are active, which is what we will call a degenerate scenario following [START_REF] Jittorntrum | Solution point differentiability without strict complementarity in nonlinear programming[END_REF]. Then, the main point of interest is when ξ = 0, i.e. point A in Figure 4.2. The set of active constraints changes when the sign of ξ changes. The optimal solution x * is not differentiable, but has a left-hand side and right-hand side derivatives. Therefore, the value of x * can be inferred in the neighborhood of ξ = 0 using the following (directional) expansions, for ε ≥ 0: x * (ε) = x * (0) + dx * dξ (0 + )ε + o(ε) and x * (-ε) = x * (0) - dx * dξ (0 -)ε + o(ε). The goal of the next sub-section is to highlight the conditions under which x * (.) is at least directionally differentiable. The need to be able to handle degenerate scenarios for PDG -i.e. when some multipliers are zero when the input parameter is zerowill be demonstrated in Section 4.3, and is directly related to local changes in the active set of constraints. Known results in parametric NLP sensitivity In this section, we consider the problem NLP (ξ) from a general perspective, i.e. 
not necessarily expressed by the above-mentioned expressions for h and g. The goal is to present sufficient conditions that enable us to compute an expansion of the solutions of NLP (ξ). Without loss of generality, we seek an expansion in the neighborhood of ξ = 0. First, after recalling necessary conditions for the existence of a minimizer, a classic theorem that gives conditions for the differentiability of the solution is recalled. A discussion on the key aspects of its proof stresses the need of a more general theorem, as its main assumption needs to be relaxed in view of application to PDG. With this view in mind, an alternative result from the literature, based on directional derivatives -a.k.a. Dini derivatives -is presented. Finally, computational aspects are discussed for the evaluation of the solution expansions. Sensitivity analysis with Strict Complementary Slackness Introduce the multipliers ν ∈ R n in and λ ∈ R neq and the Lagrangian L(z, ν, λ, ξ) := J(z, ξ) + ν ⊤ h(z, ξ) + λ ⊤ g(z, ξ). (4.14) Classically, a tuple (z, ν, λ, ξ) is said to satisfy the Karush-Kuhn-Tucker (KKT) conditions 3 if L z (z, ν, λ, ξ) = 0, (4.15a) g(z, ξ) = 0, (4.15b) ν ⊤ h(z, ξ) = 0 and ν ≥ 0. (4.15c) At ξ = 0, the multipliers are denoted ν 0 and λ 0 and are assumed to satisfy the KKT conditions. The compact notation L[0] = L(z 0 , ν 0 , λ 0 , 0) is used if necessary to alleviate the writing. Likewise, h[0] = h(z 0 , 0), h z [0] = h z (z 0 , 0), etc. Lemma 5 (Second-Order Sufficient Conditions (SOSC), [37, Lemma 3.2.1]). If the functions defining Problem NLP (0) are twice continuously differentiable in a neighborhood of z 0 , if there exist multipliers ν 0 ∈ R n in and λ 0 ∈ R neq s.t. the KKT conditions (4.15) hold and, further, if κ ⊤ L zz (z 0 , ν 0 , λ 0 , 0)κ > 0 (4.16) for any non-zero vector κ ∈ R Nz that satisfies [h i ] z (z 0 , 0)κ ≤ 0, ∀i : h i (z 0 , 0) = 0, (4.17a) [h i ] z (z 0 , 0)κ = 0, ∀i : (ν 0 ) i > 0, (4.17b) g z (z 0 , 0)κ = 0. (4.17c) then, z 0 is a strict local minimizing point of P(0). Definition 5 (SCS). For a pair (z, ν), Strict Complementary Slackness (SCS) holds when ν i > 0, ∀i : h i (z, 0) = 0. The conditions from Lemma 5 and the Strict Complementary Slackness condition allow one to present the following theorem, adapted from [37, Thm. 3.2.2], which states a well-known NLP sensitivity result. Theorem 2 (Continuous differentiability, with SCS from [START_REF] Fiacco | Introduction to Sensitivity and Stability Analysis in Nonlinear Programming[END_REF]). Assume that J, h and g are twice continuously differentiable in z and that their gradients w.r.t. z and the constraints are continuously differentiable in ξ in a neighborhood of (z 0 , 0). (c) for ξ near 0, the set of binding inequalities is unchanged, SCS holds, and the binding constraint gradients are linearly independent at z * (ξ). If (i) [h i ] z (z 0 , 0) (for i s.t. h i (z 0 , 0) = 0) The proof of this theorem conveys several keys features helping to understand what makes the solution z * continuously differentiable. Precisely, Equation (4.19) below is an highly useful by-product of the proof, providing an explicit formula for the derivative of z * and the associated multipliers. Remark 16. The SCS assumption, combined with the conditions of Lemma 5, implies that L zz [0] is positive definite on the kernels of both matrices h z a [0] and g z [0]. 
Denoting by N K the dimension of the intersection of these kernels, and by Q ∈ R Nz×N K the matrix that generates this vector space, the conditions (4.16) and (4.17) can be re-written equivalently as Q ⊤ L zz [0]Q ≻ 0. Sketch of proof of Theorem 2, adapted from [START_REF] Fiacco | Introduction to Sensitivity and Stability Analysis in Nonlinear Programming[END_REF] and [START_REF] Büskens | Sensitivity Analysis and Real-Time Optimization of Parametric Nonlinear Programming Problems[END_REF]. The idea of the proof is to use the Implicit Function Theorem (IFT) on a subset of the KKT conditions, which defines functions z * (ξ), ν * (ξ), λ * (ξ) that are then shown to be locally unique minimizers of Problem NLP (ξ) using the conditions of Lemma 5. At z 0 , the active inequality constraints are denoted with an exponent "a". Their cardinal is denoted n a . For instance, h a denotes the rows of h that are active at z 0 , and ν 0 a are the corresponding multipliers. Without loss of generality, we assume that the active inequality constraints are the first n a components of h and ν. The main function of interest in this proof is4 K(z, ν a , λ, ξ) :=       L z (z, ν a , λ, ξ) ν a • h a (z, ξ) g(z, ξ)       =              J z (z, ξ) + (ν a ) ⊤ h z a (z, ξ) + λ ⊤ g z (z, ξ) ν 1 h 1 (z, ξ) . . . ν na h na (z, ξ) g(z, ξ)              and is often referred to as the Kuhn-Tucker matrix [START_REF] Büskens | Sensitivity Analysis and Real-Time Optimization of Parametric Nonlinear Programming Problems[END_REF]. The conditions of Lemma 5 imply that the matrix ∂K ∂[z, ν a , λ] (z 0 , ν 0 a , λ 0 , 0) =              L zz [h 1 ] z ⊤ . . . [h na ] z ⊤ g z ⊤ (ν 0 ) 1 [h 1 ] z h 1 0 0 . . . . . . . . . (ν 0 ) na [h na ] z 0 h na g z 0 . . . 0              (4.18) is invertible. Note that all the elements on the right-hand side of the latter matrix are evaluated on (z 0 , ν 0 a , λ 0 , 0), but this has been omitted to alleviate the writing. Thus, the IFT applies on the condition K(z, ν a , λ, ξ) = 0 at (z 0 , ν 0 a , λ 0 , 0), which guarantees the existence of a differentiable function y : ξ → (z * (ξ), ν * (ξ), λ * (ξ)) s.t. z * (0) = z 0 , ν * (0) = ν 0 , λ * (0) = λ 0 and K(z * (ξ), (ν * ) a (ξ), λ * (ξ), ξ) = 0 in the neighborhood of ξ = 0. Finally, the strict inequalities of SCS enables us to show that the set of active constraints does not change in the neighborhood of ξ = 0, which implies that y(ξ) locally satisfies the sufficient conditions of Lemma 5, hence the theorem. This proof has a useful by-product. Indeed, the construction of the triplet (z * (ξ), ν * (ξ), λ * (ξ)) via the use of the IFT directly provides the derivatives of these variables at ξ = 0         dz * dξ (0) d(ν * ) a dξ (0) dλ * dξ (0)         = - ∂K ∂[z, ν a , λ] (z 0 , ν 0 a , λ 0 , 0) -1 ∂K ∂ξ (z 0 , ν 0 a , λ 0 , 0) . (4.19) However, as we have seen with Toy Example 1, SCS does not necessarily hold everywhere, which becomes a problem in the above-mentioned arguments. Indeed, if one of the multipliers of the active constraints becomes zero, then the corresponding line in Equation (4.18) becomes zero, leading to a singular Kuhn-tucker matrix. In this scenario, the IFT does not apply anymore. Sensitivity analysis without Strict Complementary Slackness To overcome the above-mentioned difficulty, another set of assumptions is needed. Definition 6 (Strong SOSC). There exists a scalar a > 0 s.t. κ ⊤ L zz (z 0 , ν 0 , λ 0 , 0)κ ≥ a∥κ∥ 2 (4.20) for any κ ∈ R Nz s.t. 
g z (z 0 , 0)κ = 0, [h j ] z (z 0 , 0)κ = 0, ∀j s.t. h j (z 0 , 0) = 0 and (ν 0 ) j > 0. The SOSC property is weaker than the SCS property. As shown by Jittorntrum [START_REF] Jittorntrum | Solution point differentiability without strict complementarity in nonlinear programming[END_REF], relaxing the SCS using Strong SOSC instead eventually leads to directional differentiability properties. To formulate this, let us introduce the following. For a function γ : R p → R q , when it exists, the upper Dini derivative of γ at x in the direction d is a vector of R q and is denoted by D + d γ(x) := lim ε↓0 γ(x + εd) -γ(x) ε ∈ R q . For any given direction ξ, consider the directional problem NLP (εξ), where ε ≥ 0 is a scalar. Theorem 3 (Directional differentiability, without SCS). Let us assume that J, h and g are twice continuously differentiable in a neighborhood of z 0 . If (i) [h i ] z (z 0 , 0) (for i s.t. h i (z 0 , 0) = 0) and [g j ] z (z 0 , 0) (all j) are linearly independent, (ii) Strong SOSC holds at z 0 , then there exists a unique continuous function ε → (z * (εξ), ν * (εξ), λ * (εξ)) that locally minimizes NLP (εξ), for ε ≥ 0, s.t. z * (0) = z 0 , ν * (0) = ν 0 and λ * (0) = λ 0 . Furthermore, its right-hand derivative at ε = 0 exists, i.e. the upper Dini derivatives D ξ + z * (0), D ξ + ν * (0) and D ξ + λ * (0) exist. For a proof of this theorem, see the discussions after the Theorems 3 and 4 in [START_REF] Jittorntrum | Solution point differentiability without strict complementarity in nonlinear programming[END_REF]. A direct consequence of Theorem 3 is that for any ξ, for ε ≥ 0, the following expansion holds: z * (εξ) = z 0 + D + ξ z * (0)ε + o(ε). (4.21) The same way Theorem 2 provided an expression for the derivatives of the solution tuple, the proof of Theorem 3 by Jittorntrum provides a way to compute the latter expansion, which will be used later in Equation (4.25). While Theorem 2 requires to inverse a linear system as shown in Equation (4.19), Theorem (3) needs to solve a QP, as indicated in the Proposition below. Proposition 10 (Adapted from [START_REF] Jittorntrum | Solution point differentiability without strict complementarity in nonlinear programming[END_REF]Eq. 24]). The vector D + ξ z * (0) in Equation (4.21) is the optimal solution of min ∆z∈R Nz 1 2 ∆z ⊤ L zz [0]∆z + ξ ⊤ L ξz [0]∆z s.t. h a z [0]∆z + h a ξ [0]ξ ≤ 0, g z [0]∆z + g ξ [0]ξ = 0, whose uniqueness is guaranteed by Assumption (ii) in Theorem 3. Proposition 10 provides a way to compute a local, directional, expansion of the optimal solution z * (ξ) using QP. Qualitatively, it boils down to linearizing the active inequality constraints, forgetting the inactive inequality constraints, linearizing the equality constraints, and taking into account the second-order approximation of the cost. Therefore, aside from the non-linear aspect of NLP (ξ), the only missing features of Problem NLP (ξ) in Proposition 10 are the inactive inequality constraints. This result can be improved and made more useful for practical applications. To alleviate the writing, and without loss of generality, we assume for the rest of this section that z 0 = 0. If instead of considering only the active inequality constraints we consider all inequality constraints, i.e. that we solve QP (ξ) := min then we have a more useful tool. On one hand, the result of Proposition 10 is preserved. Indeed, denote by i any (strictly) inactive inequality constraint at ξ = 0. Consider any arbitrary value of ξ. 
Since ε → z * (εξ) is continuous on a certain right-neighborhood of ε = 0 by Theorem 3, then h i (z * (εξ), εξ) < 0 holds in this neighborhood. By reducing this neighborhood if necessary, the condition z∈R Nz 1 2 z ⊤ L zz [0]z + ξ ⊤ L ξz [0]z (4.22a) s.t. h[0] + h z [0]z + h ξ [0]ξ ≤ 0 (4.22b) g z [0]z + g ξ [0]ξ = 0 (4.22c) h i [0] + [h i ] z [0]z * (εξ) + [h i ] ξ [0]εξ < 0 holds as well in the vicinity of ε = 0 and will not affect the local property of Proposition 10. On the other hand, the linearized conditions (4.22b) provide a linear approximation of the inequality constraints, giving a non-local estimation of changes in the active set 5 . As an intermediate summary, if a parametric NLP described by NLP (ξ) satisfies Strong SOSC and some other mild conditions described in Theorem 3, its solution point is locally uniquely defined, and admits a directional first-order expansion, that can be computed by solving QP (ξ) detailed in (4.22). This latter Quadratic Program consists simply in modifying NLP (ξ) by linearizing the constraints in both the parameter and the decision variable, and by taking a second-order expansion of the cost. Locally, the approximations provided by QP (ξ) are very accurate (in the sense of (4.21)), and can handle active set changes that depend on the direction ξ. Globally, QP (ξ) can keep providing an approximation of the solution, that will be as good as the approximations made in (4.22b) and (4.22c) are. From a high-level point of view, this Chapter describes how to convert an infinite dimensional problem into a finite one, which in turns is solved using parametric sensitivity analysis. However, the converse approach which consists in performing parametric sensitivity analysis on the infinite dimensional problem directly is an alternative that has been explored in the literature. See e.g. [START_REF] Deshpande | Directional Input Adaptation in Parametric Optimal Control Problems[END_REF], that describes how directional differentiability can be used on the stationary conditions of OCPs with multiple types of constraints. Remark 17. A direct application of QP (ξ) to the Basic Example 1 is shown in Fast nominal guidance method Up to here, we have presented a method to re-formulate the original problem PDG (ξ, p) into the finite-dimensional problem NLP (ξ, p). For p = 0, the solutions of NLP (ξ, 0) are approximated by solving QP (ξ). Let us extend this sensitivity-based method to the case where p ̸ = 0, and summarize how the newly formed problem QP (ξ, p) is intended to be used in practice to provide nominal guidance. An offline/online approach for nominal guidance First, note that the role of p in the definition of NLP (ξ, p) is equivalent to the one of ξ, in the sense that both are given parameters. Thus, by introducing convenient intermediate matrices and vectors, we can extend the QP from Equation (4.22) to the general case under the form QP (ξ, p) :=            min z∈R Nz 1 2 z ⊤ P z + ξ ⊤ Qz s.t. Gz ≤ h 0 + H ξ ξ + H p p Az = b 0 + B ξ ξ + B p p (4.23) where H p and B p are the same as the ones used in the definition of the constraints of NLP (ξ, p) and where P := J zz (0, 0) Q := J ξz (0, 0) G := h z (0, 0) A := g z (0, 0) h 0 := -h(0, 0) H ξ := -h ξ (0, 0) b 0 := -g(0, 0) B ξ := -g ξ (0, 0) It is this expression of QP (ξ, p) that will be used from now on. Remark 20. Note that the reason why p does not appear in the cost of QP (ξ, p) is that the cost J in Equation (4.10a) does not depend on p, but only on z and ξ. Remark 21 (Shortcuts). 
In the remainder of this thesis, the dependency of the linear part of the cost and the constraint right-hand side on ξ in QP (ξ) may be omitted to alleviate the writing, by using the vectors q, h and b s.t. q := J ξz (0, 0) ⊤ ξ, h := h 0 + H ξ ξ, and b := b 0 + B ξ ξ. To maximize computational efficiency, the constant matrices of QP (ξ, p) are computed offline, for a reference trajectory, using the transition matrices formulas from the Appendix. As described in Section 4.1, such a reference trajectory (x, ū, η, tf ) must satisfy the constraints of NLP (ξ, p), and its design depends critically on the mission goals. The reference may be defined as the solution of a more complex optimization problem, out-of-the-scope of the thesis, solved using indirect methods [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF]. Its computation can take anything from several seconds to hours, depending on the desire accuracy, the optimization solver or the computer used for it. On the other hand, computing the transition matrices, even to a good accuracy, only needs a few seconds, even with a non efficiency-optimized code. Guidance law As introduced in (4.3), the optimal values µ * and ∆t * f returned by QP (ξ, p) via z * enable us to describe the guidance control law as a continuous time function. Indeed, interpreting ū and u µ as functions defined on [0, 1], the guidance law becomes u * (t) = ū t tf + ∆t * f + u µ * t tf + ∆t * f , ∀t ∈ [0, tf + ∆t * f ]. (4.24) Directional first-order estimate of waypoints Let us denote by z nlp (ξ, p) the hypothetical solution of NLP (ξ, p) and by z qp (ξ, p) the solution of QP (ξ, p). Then, from Section 4.2, the following directional expansion holds z nlp (εξ, εp) = z qp (εξ, εp) + o(ε). Moreover, using the definition of the augmented state x[τ, z, ξ] from Equation (4.7), for any τ ∈ [0, 1] we have x[τ, z, ξ] =    x[τ ] tf    + ∂ x ∂z [τ, 0, 0]z + ∂ x ∂ξ [τ, 0, 0]ξ + o(∥(z, ξ)∥). Let us introduce xlin as x lin (τ, ξ, p) xlin (τ, ξ, p) :=    x[τ ] tf    + ∂ x ∂z [τ, 0, 0]z qp (ξ, p) + ∂ x ∂ξ [τ, 0, 0]ξ ∈ R n+1 . ( 4 t lin f (τ, ξ, p)    . Horizontal position z x 0 = x(τ 0 ) • • • • • • Reference trajectory, x using ū during tf with η x(τ 1 ) x(τ 2 ) x(τ 3 ) x(τ 4 ) h(0)- • Altitude h Solving QP (ξ, p) at x 0 + ∆x 0 gives µ * and ∆t * f (Here, p = 0) ∆x 0 • • • • • x 1 x 2 x 3 x 4 Way points, expressed using ∆x 0 , ∆η, µ * and ∆t * f . Note that t lin f does not actually depend on τ by construction of the augmented state, and Equation (4.25) applied to t lin f simply yields t lin f (ξ, p) = tf + ∆t * f (ξ, p). In practice, the solution z * of QP (ξ, p) can be used either to form the guidance law from Equation (4.24), or by computing waypoints x k using the state approximation provided by x lin s.t. x k := x lin (τ k , ξ, p) (4.26) where (τ k ) k is a prescribed subdivision of [0, 1] of normalized time instance. The values τ k are equivalent to the non-normalized time instances t k s.t. t k = τ k . tf + ∆t * f (ξ, p) . Numerical examples Let us discuss the three following examples, each highlighting a distinct feature of the solutions of QP (ξ, p). Since the case where p ̸ = 0 is the matter of Chapter 5, these examples focus exclusively on nominal descents, i.e. where p = 0. For all the examples, the cost J that we consider is defined directly by giving its matrices P and Q. We consider that Q = 0, and that the matrix P is a diagonal matrix of positive weights, which favors early corrections (i.e. 
the weight associated to u(τ 0 ) is smaller than the one of u(τ N )). The reference trajectory chosen for each example is represented by a black line. Moreover, for the examples using the 3D model, the reference trajectory is assumed to lie in the plane (e z , e h ) (though corrections implying out of plane trajectories are discussed in the last example). The data have been normalized for all the examples. Effectiveness of calculated guidance The first example aims at showing that in some neighborhood of the reference parameters, the expansion (4.25) provides accurate corrections enforcing the terminal constraints, even when using the corrections in open-loop only. It is directly illustrated on the 3D rocket model. Due to the terminal constraints, the six states corresponding to the positions and speeds are expected to be null at the final time (except v h (t f ) = -ε f v h ). If the reference guidance law (ū, tf ) is applied directly to a scenario where ξ ̸ = 0 -i.e. no corrections are applied -then strong constraint violation errors will appear. However, if the change in parameter is corrected using u * from (4.24) (i.e. using QP (ξ, p)), the terminal constraints are supposed to be approximately satisfied up to the first-order. These two behaviors are well observed in the sub-figure (a) of Figure 4.5, which represents the terminal horizontal position z(t f ). The other terminal constraints (on y, h, v z , v y and v h ) have voluntarily been omitted, as their correction curve is much flatter than for z. In other words, it means that even though this terminal constraint component has the worst open-loop correction curve, it still demonstrates that the first-order corrections brought by QP (ξ, p) work well in a non-trivial neighborhood of ξ = 0 in practice. Changes in the active set The second example aims at showing that the optimal solutions of QP (ξ, p) are indeed only Dini-differentiable and not smooth, even in a standard landing scenario. Let us use the 2D rocket model, and consider that ξ only varies in horizontal position ∆z 0 = z 0 -z0 . The directional-derivatives of α at the middle point t 2 in the two directions ∆z 0 = 1 and ∆z 0 = -1 differ, as shown in Figure 4. 6-(d). This behavior has a real-world interpretation. When ∆z 0 > 0, using more incidence on α(t 2 ) is not a possible option as the constraint is already active and becomes strictly active (the associated multiplier becomes positive when ∆z 0 > 0). However, when ∆z 0 < 0, lowering α(t 2 ) is possible, allowing the presented corrections. Non-local constraint satisfaction This third example aims at showing the behavior of QP (ξ, p) for large values of ξ. Let us consider the 3D model, with a reference trajectory contained in the plane (e z , e h ). As illustrated in Figure 4.7, we are here interested in the behavior of QP (ξ, p) for values of ξ covering a grid in (∆z 0 , ∆y 0 ), the change in initial horizontal positions. Considering a such choice of inputs allows us to stress that QP (ξ, p) is able to compute out-of-plane trajectories. As pointed out in the sub-Figures 4.7 If one considers even larger values of ξ in this example, it comes a point where the constraints of QP (ξ, p) are infeasible, at least as long as we keep p = 0. Hence the need for an algorithm that computes the proper value for p according to a hierarchy of objectives, which will be the matter of the next chapter. Chapter 5 Emergency guidance via Linear and Quadratic Programming Résumé Le problème QP (ξ, 0) a été le sujet principal jusqu'à présent. 
À un moment donné, si ξ est trop grand, alors QP (ξ, 0) peut être infaisable. Par exemple, si la position horizontale initiale est trop éloignée du site d'atterrissage, il n'est pas possible de concevoir une trajectoire qui satisfasse toutes les contraintes en même temps : ces dernières sont incompatibles. La question centrale de cette thèse est donc la suivante : que faire dans cette situation ? Lorsque QP (ξ, p) est infaisable à p = 0 et pour une valeur donnée de ξ, le problème de l'atterrissage doit être modifié pour retrouver la faisabilité, dans une certaine mesure. Cela peut se faire en relâchant les contraintes, c'est-à-dire en modifiant p. Une stratégie de relaxation doit être définie. Tout d'abord, nous identifions les principaux paramètres négociables qui peuvent être relâcher, dans la Section 5.1, et nous soulignons leur importance relative. Cette liste ordonnée découle des connaissances préalables et de la compréhension commune des ingénieurs chargés de la réussite de la mission. Ensuite, à partir de cette liste ordonnée, une suite de problèmes d'optimisation visant à minimiser l'amplitude des paramètres négociables est présentée dans la Section 5.2 et nommée HEGO pour Optimisation Hiérarchique pour le Guidage d'Urgence. L'utilisation de cette suite produit une trajectoire de guidage, tout en imposant une hiérarchie prescrite entre les paramètres négociables sélectionnés, au sens de l'ordre extended colexicographic order présenté au Chapitre 1. La formulation des problèmes sous-jacents repose uniquement sur la programmation linéaire et quadratique. De plus, des garanties théoriques sur le caractère bien posé et la régularité de la fonction qui à ξ associe la trajectoire optimale sont fournies. Comme il est d'un grand intérêt pour les applications pratiques, où la régularité des méthodes numériques est toujours souhaitable, il est démontré que cette dernière fonction est Lipschitz-continue, évitant ainsi les sauts dans la trajectoire de guidage pour des entrées arbitrairement proches. D'autres propriétés peuvent être établies, et la monotonie directionnelle de l'amplitude du paramètre de négociation est examinée dans un cas particulier. L'Algorithme HEGO est généralisé dans la Section 5.6. Cette généralisation permet de distinguer ce qui est lié à la méthode de guidage nominal sousjacente de ce qui est propre aux problèmes de négociation eux-mêmes. On montre notamment comment HEGO pourrait être utilisé avec un autre choix de méthode de guidage nominal (i.e. au lieu de QP (ξ, p)), parmi les méthodes directes pour les OCP. Enfin, des résultats numériques sont présentés dans la Section 5.7. Ils mettent en avant divers aspects de la méthode de guidage d'urgence, à la fois sur les modèles de fusée 2D et 3D. Plusieurs ensembles de paramètres négociables sont utilisés pour illustrer les différentes stratégies d'urgence accessibles par la méthodologie HEGO ainsi proposée. The problem QP (ξ, 0) has been the main topic so far. At some point, if ξ is too large, then QP (ξ, 0) may be infeasible. For instance, if the initial horizontal position is too far from the landing site, then it is not possible to design a trajectory that satisfies all the constraints at the same time: they are inconsistent. Hence, the central question of this thesis is: what should one do in this case? When QP (ξ, p) is infeasible at p = 0 for a given value of ξ, the landing problem has to be modified to recover feasibility to some extent. This can be done by revising the constraints, i.e. by modifying p. 
Some revision strategy has to be defined. First, we identify the main negotiable parameters that can be relaxed, in Section 5.1, and emphasize their relative importance. This ordered list stems from prior knowledge and common understanding between engineers in charge of the mission success. Then, from this ordered list, a sequence of optimization problems aiming at minimizing the magnitude of the negotiable parameters is introduced in Section 5.2 and denoted HEGO for Hierarchical Emergency Guidance Optimization. Using the sequence produces a guidance trajectory, while enforcing a prescribed hierarchy between the selected negotiable parameters, in the sense of the extended colexicographic order introduced in Chapter 1. Formulating the underlying problems relies only on Linear and Quadratic Programming. Moreover, theoretical guarantees on the well-posedness and the smoothness of the mapping from ξ to the optimal trajectory are provided. As it is of high interest for a practical application where smoothness of numerical calculation procedures is always desirable, the latter map is shown to be Lipschitz-continuous, preventing jumps in the guidance trajectory for arbitrarily similar inputs. Some further properties can be established, e.g. the directional monotonicity of the negotiation parameter magnitude is discussed for a special case. The HEGO algorithm is generalized in Section 5.6. It distinguishes what is linked to the underlying nominal guidance method, from what is fundamental to the negotiation problems themselves. In details, it is shown how HEGO could be used with another choice of nominal guidance method (i.e. instead of QP (ξ, p)), among direct methods for OCPs. Finally, numerical results are presented in Section 5.7. They highlight various aspects of the emergency guidance method, on both the 2D and the 3D rocket models. Several sets of negotiable parameters are employed, to illustrate the various emergency policies achievable by the proposed HEGO methodology. [START_REF] Ménou | Nominal And Emergency Rocket Landing Guidance Using Quadratic Programming[END_REF]. However, the last sections, including the theoretical proofs and the numerical simulations, are new (unpublished) elements. The first sections of this chapter are a detailed version of Negotiable parameter choices For a given input ξ, when landing has been declared infeasible by QP (ξ, p), it is necessary to loosen some of the constraints. First, the parameters that can be negotiated are listed and it is shown how they modify the constraints of QP (ξ, p). Then, a model describing their relative importance is proposed. Negotiated constraints The parameters that can be negotiated are the ones describing the goals of the landing. The physics-based equations of motion are not negotiable. However, the location of the landing site is, at least partially, negotiable. Indeed, if the landing site is located in a wide and flat area, it is of interest to allow touchdowns in a neighborhood of the ideal landing site1 . Some of the other parameters defining the constraints can be partially loosened. For example, the incidence limit should be seen more as a safety constraint -and a way to limit long-term fatigue of the rocket -and could be slightly widened if necessary, whereas the engine flow limitations are non-negotiable mechanical constraints. The negotiable parameters are already conveniently conveyed by p, the variable introduced in Chapter 2. 
The purpose of this chapter is to discuss extensively how these negotiable parameters are mathematically modeled, what are the possible choices for its components, and how its value is chosen. Let us recall that p has a linear influence on the Right-Hand Side (RHS) of the constraints of QP (ξ, p), s.t. the nominal constraints      Gz ≤ h 0 + H ξ ξ Az = b 0 + B ξ ξ are transformed into the negotiated constraints by the action of p      Gz ≤ h 0 + H ξ ξ + H p p Az = b 0 + B ξ ξ + B p p (5.1) For any such p, the matrices H p and B p are basically filled with zeros and a few ones, which makes them sparse. These negotiated constraints come with an extra condition: it is assumed that all negotiable parameters are negotiable within prescribed limits, i.e. that p is bounded If necessary, this cube-like constraint could be modeled as a bounded polytope, described by an inequality Dp ≤ d for some pair (D, d), without changing any of the reasoning below. For example, as it will be discussed in more details in Example 1 at the end of this chapter, a possible choice for the variable p is the pair (∆α max , ∆z f ) ⊤ for the 2D rocket model. From a more exhaustive point of view, the list of all parameters that can or cannot be loosened in practice is presented in Table 5.1. On the relative importance of the parameters Adapting Orwell's Animal Farm quote, all parameters are important but some are more important than others (in (5.1)). It is necessary to enforce a hierarchy of importance between the negotiation parameters. Thus, let us separate the negotiation parameters in R different sub-parameters p (j) of ranked (increasing) importance, as p = (p (1) ) ⊤ . . . (p (R) ) ⊤ ⊤ ∈ R nneg . The higher the index j, the more critical p (j) is. The dimension of p (j) is noted n j . Mathematically speaking, the negotiable parameters are compared using the 1norm of their sub-parameters, by comparing their most critical sub-parameters first. As first introduced in (1.1) in Chapter 1, a vector p a is said to be more negotiated than another vector p b , which is denoted p a ⪰ e p b , if and only if ∥p (R) a ∥ 1 > ∥p (R) b ∥ 1 or ∥p (R) a ∥ 1 = ∥p (R) b ∥ 1 and ∥p (R-1) a ∥ 1 > ∥p (R-1) b ∥ 1 , or . . . or ∥p (R) a ∥ 1 = ∥p (R) b ∥ 1 and . . . and ∥p (1) a ∥ 1 > ∥p (1) b ∥ 1 , or ∥p (R) a ∥ 1 = ∥p (R) b ∥ 1 and . . . and ∥p (1) a ∥ 1 = ∥p (5. 3) The relation ⪰ e is an extended colexicographic order, that we will eventually refer to as the emergency order in this thesis2 . (1) , p (2) ) ⊤ , where p (1) ∈ R and p (2) ∈ R 2 . Then (Least critical) (1, 2, 1) ⪯ e (1, 3, 1 ) ⪯ e (2, -4, 0 ) ⪯ e (-3, 0, 5) (Most critical). Basic Example 2. Let us consider negotiable parameters ∥p (2) ∥ 1 = 4 ∥p (2) ∥ 1 = 4 For illustration purposes, let us consider again p = (∆α max , ∆z f ) for the 2D rocket model. Considering that it is less critical to sacrifice a few degrees of incidence limit than to land outside the desired landing site yields p (1) = ∆α max and p (2) = ∆z f . Remark 22. To some extent, recovering feasibility has been tackled from different perspectives. In a seminal paper by Blackmore et al. [START_REF] Blackmore | Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization[END_REF], the problem of finding the actual landing site that minimizes the distance to the desired landing site is described in details, for Mars landing (i.e. without atmosphere), using Successive Convexification. 
Translated into the above-mentioned taxonomy, one can write that they have two different negotiable parameters that were negotiated at the same time, which were the two final horizontal positions of their 3D lander model. Their taxonomy also implies that R = 1 with ours, meaning that they do not use any notion of hierarchy. A hierarchical negotiation Now that it has been discussed what the levers available to modify the constraints of QP (ξ, p) are -using Equation (5.1) -there comes the follow-up question: how does one negotiate these parameters? When the input ξ makes the nominal constraints infeasible -i.e. when nominal landing is not feasible anymore -the goal is to find the smallest change in the negotiable parameters that recovers feasibility. There are two salient expectations regarding the method that computes these negotiable parameters and the associated trajectory that will be called emergency trajectory. On the one hand, this method must pick the smallest negotiable parameter as possible, in the sense of the emergency order. This aims at using as little negotiable parameters as possible while enforcing the relative importance of the parameters. On the other hand, the map ξ → z * giving the optimal emergency trajectory should be as smooth as possible (continuity is a minimum), in order to prevent jumps in the trajectory between arbitrarily close values of ξ. Such jumps could be very detrimental and cause serious issues to the control algorithms (out-of-the-scope of the thesis). With these objectives in mind, we introduce the method HEGO (i.e. Algorithm 2 below), composed of a finite sequence of negotiation problems labeled LP j and a refinement problem labeled Refine, which aims at fulfilling the two latter goals. HEGO is then tested on a low-dimensional example, helping to understand some of the methodology design choices. Finally, several aspects of this algorithm are put into perspective. The theoretical proofs showing that this algorithm behaves as expected will be discussed in Section 5.3. Algorithmic principle of HEGO From a high-level perspective, finding the smallest negotiation parameters that satisfy the constraints could take the form of a single optimization problem s.t. z * , p * ←-min z,p Penalty(p) s.t. Negotiated constraints for (z, p), from Eq. (5.1). However, there are several limitations to this strategy. First, nothing guarantees that neither z * nor p * are unique when coming out of such a procedure. Moreover, since the cost of the latter problem differs from the cost of the nominal guidance problem QP (ξ), it is highly likely that the map ξ → z * encounters a discontinuity when the emergency is triggered, i.e. when going from the most inner set to the intermediate set of Figure 5.1. Finally, and most importantly, the hierarchy is ignored with this kind of problem description. At best, the parameters can be non-homogeneously weighted to reflect their importance, but not their strict ranking. Instead, we propose to successively minimize the magnitude of the negotiable parameters while enforcing the existence of at least one feasible trajectory at each step. This is a penalty-free approach. As required by the emergency order, this minimization procedure will focus sequentially on the sub-parameters, starting by the last one (i.e. p (R) ), down to the first one (i.e. p (1) ). This means that the most critical negotiable parameters are minimized first. At each step, only the proper sub-parameter p (j) is minimized, s.t. 
the result be p (j) = 0 if it is not necessary to use this parameter to recover feasibility. Also, to enforce the desired hierarchy, a kind of memory effect is needed so that each step takes into account the results of the previous steps, by preserving the negotiation levels. This will be the role of condition (5.4e) below. As will appear, Linear Programming (LP) will play a key role to implement these negotiation steps. Quantitatively, let us first introduce the negotiation problems LP j as LP j := min z,p ∥p (j) ∥ 1 (5.4a) s.t. Gz ≤ h 0 + H ξ ξ + H p p (5.4b) Az = b 0 + B ξ ξ + B p p (5.4c) p low ≤ p ≤ p up (5.4d) ∥p (i) ∥ 1 = P * i , i = j + 1, . . . , R (5.4e) where P * i denotes the optimal value of LP i . To make this problem definition wellposed, note that the constraint (5.4e) does not exist when j = R. Moreover, note that the inputs of each LP j are ξ and P * i for i = j + 1, . . . , R. To alleviate the writing, these inputs are omitted wherever the context is clear enough. Finally, it is important to highlight the fact that z and p are both optimization variables in LP j , even though only a few coefficients of p are involved in the cost (5.4a). The role of LP j is to minimize a cost on the j -th negotiable sub-parameters, while making sure that there are still feasible trajectories z, and that the already determined levels of relaxation of the previous negotiation problems are unchanged in the process. The reason why constraint (5.4e) must be satisfied instead of a constraint of the type "p (j) = p (j) * " is that LP j does not necessarily have a unique solution. Imposing ∥p (j) ∥ 1 = P * i encompasses all the solutions of LP j and makes sure that the level of negotiation reached at step i remains satisfied in the follow-up negotiations. Descending from j = R to j = 1 and imposing this latter constraint guarantees that the emergency order is enforced. Thus, solving successively LP j for j = R, . . . , 1 (decreasing indices) provides a way to recover feasibility, while hierarchically minimizing what needs to be sacrificed from the original guidance problem, by building the sequence P * 1 , . . . , P * R starting from the end. Among others, a noteworthy property of this process is that if landing is feasible without any negotiation, then solving LP j will return ∥p (j) ∥ 1 = 0 at each step, implying that the overall vector p equals zero. Once this negotiation sequence has been computed, there may be many possible values for p, and for z as well. It is thus necessary to pick the best trajectory among these ones, by solving Refine := min z,p 1 2 z ⊤ P z + ξ ⊤ Qz (5.5a) s.t. Gz ≤ h 0 + H ξ ξ + H p p (5.5b) Az = b 0 + B ξ ξ + B p p (5.5c) p low ≤ p ≤ p up (5.5d ) ∥p (i) ∥ 1 = P * i , i = 1, . . . , R (5.5e) Like LP j , the problem Refine takes as inputs ξ and P * i for i = 1, . . . , R. These two optimization problems are the basic bricks of the central algorithm of this thesis, defined below. Algorithm 2 Hierarchical Emergency Guidance Optimization (HEGO) Require: Difference w.r.t. reference values: ξ = (∆x 0 , ∆η, ∆u init ). for j = R, . . . , 1 (decreasing indices) do P * j ← min LP j ξ, P * j+1 , . . . , P * R // From definition (5.4). end for z * ← argmin Refine (ξ, P * 1 , . . . , P * R ) // From definition (5.5). return z * For the argmin operation, the value of p is voluntarily ignored, since it is not needed nor unique. 
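To make the structure of Algorithm 2 concrete, the following sketch shows one possible Python implementation of the HEGO loop. It is only illustrative, not the actual implementation: the problem data (G, h0, H_xi, H_p, A, b0, B_xi, B_p, p_low, p_up, the cost matrix P) and the list `groups` of index arrays partitioning p into p^(1), . . . , p^(R) (least critical first) are assumed to be provided by the nominal guidance layer, the linear cost term in ξ is dropped, and the memory constraints (5.4e)/(5.5e) are written as inequalities, in the spirit of the relaxation discussed later around Eq. (5.26).

```python
# A minimal sketch of Algorithm 2 (HEGO) using cvxpy.  All problem data and
# the `groups` partition are assumed given; this is not the thesis code.
import cvxpy as cp

def hego(xi, G, h0, H_xi, H_p, A, b0, B_xi, B_p, p_low, p_up, P, groups):
    Nz, nneg = G.shape[1], H_p.shape[1]
    z, p = cp.Variable(Nz), cp.Variable(nneg)
    base = [G @ z <= h0 + H_xi @ xi + H_p @ p,
            A @ z == b0 + B_xi @ xi + B_p @ p,
            p >= p_low, p <= p_up]
    levels = {}                                   # optimal levels P*_j
    for j in reversed(range(len(groups))):        # most critical group first
        memory = [cp.norm1(p[groups[i]]) <= levels[i]
                  for i in range(j + 1, len(groups))]
        prob = cp.Problem(cp.Minimize(cp.norm1(p[groups[j]])), base + memory)
        prob.solve()
        if prob.status not in ("optimal", "optimal_inaccurate"):
            raise RuntimeError("negotiation infeasible at level %d" % (j + 1))
        levels[j] = prob.value
    memory = [cp.norm1(p[groups[i]]) <= levels[i] for i in range(len(groups))]
    refine = cp.Problem(cp.Minimize(0.5 * cp.quad_form(z, P)), base + memory)
    refine.solve()
    return z.value, [levels[i] for i in range(len(groups))]
```

On an input ξ for which the nominal constraints are feasible, all the returned levels are zero and the refinement step coincides with the nominal QP, in line with Proposition 11 discussed next.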
It is important to observe that this guidance procedure provides, depending on the situation, either a nominal guidance or an emergency guidance, as stated in the following proposition. Proposition 11. If ξ is s.t. the constraints of the nominal guidance problem QP (ξ, 0) are feasible, then HEGO and QP (ξ, 0) return the same value z * . Proof. If the constraints of QP (ξ, 0) are feasible, then there exists at least one pair (z, p) with p = 0 that satisfies the negotiated constraints (5.1). Thus, all the problems LP j will necessarily return P * j = 0, and constraint (5.5e) will subsequently impose p = 0, making Refine and QP (ξ, 0) coincide. Uniqueness of the solutions yield the conclusion. Qualitatively, the nominal guidance method based solely on QP (ξ, 0) can provide landing trajectories on the set A from Figure 5.1 whereas HEGO is able to do it on the sets A and B. HEGO therefore provides landing guidance for a wider range of input values ξ. It also meets the requirements regarding the negotiable parameter minimization and the hierarchy enforcement. These features make it an "universal" guidance algorithm. The analysis of the map ξ → z * defined by this algorithm is presented later in Section 5.3. Among others, Proposition 12 will show the well posedness of the algorithm, and Theorem 7 proves the Lipschitz-continuity of ξ → z * (ξ) for HEGO. Before getting into these technical details, let us discuss the following example. An illustrative toy example Consider a low-dimensional example that resembles the landing problem, illustrating how HEGO works and why the negotiation problem hierarchy matters. Although this example could seem greatly over-simplified at first glance, its similarities with the actual landing problem are noteworthy, especially when comparing the curves from Figure 5.3 (left) and Figure 5.5 (d) reporting the negociation of parameters. The problem takes as input a scalar ξ ≥ 0, and aims at minimizing the norm of z = (z 1 , z 2 ) ⊤ ∈ R 2 , under some constraints min z, p z 2 1 + z 2 2 s.t. (Ineq 1 ) z 1 ≥ 0 (Ineq 2 ) z 2 ≥ 0 (Eq) z 2 = 1 -ξ -z 1 This problem is represented in Figure 5.2 (0). By analogy, let us imagine that z 1 conveys the incidence, z 2 the engine flow and ξ the initial horizontal position error. Therefore, • (Ineq 1 ) conveys the incidence bound, that may be negotiated by a variable p 1 , s.t. z 1 ≥ -p 1 . • (Ineq 2 ) conveys the mechanical limits of the engine flow, which are nonnegotiable. • (Eq) represents the terminal condition on the horizontal position and is directly influenced by ξ. It may be negotiated by a parameter p 2 if necessary, s.t. z 2 = 1 -ξ -z 1 + 2p 2 . Moreover, note that the negotiable parameters p = (p 1 , p 2 ) ⊤ ∈ R 2 can be negotiated up to some extent, so we decide to bound them between 0 and 1. More precisely, imposing 0 ≤ p 2 ≤ 1 means that we allow the rocket to eventually land farther of the landing site, but on one side only and up to a certain limit. Finally, as in the actual landing problem, we consider that negotiating p 2 is more critical than p 1 (i.e. R = 2). Let us now review the possible scenarios depending on the input value. Nominal scenario When ξ remains low, i.e. 0 ≤ ξ ≤ 1, then there is no need for parameter negotiation, because the problem is feasible when p 1 = p 2 = 0. Running Algorithm 2 will give null values for the negotiation penalties. In this case, the optimal solution is z * 1 (ξ) = z * 2 (ξ) = (1 -ξ)/2. This scenario is represented in Figure 5.2 (A). 
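As a quick numerical cross-check of this nominal scenario, one may solve the toy problem directly; the sketch below assumes scipy is available (any NLP or QP solver would do) and recovers z1* = z2* = (1 - ξ)/2 for 0 ≤ ξ ≤ 1.

```python
# Minimal numerical check of the nominal scenario (p1 = p2 = 0), assuming
# scipy: the expected optimum is z1 = z2 = (1 - xi) / 2.
from scipy.optimize import minimize

def toy_nominal(xi):
    cons = [{"type": "eq",   "fun": lambda z: z[1] - (1.0 - xi - z[0])},
            {"type": "ineq", "fun": lambda z: z[0]},    # z1 >= 0
            {"type": "ineq", "fun": lambda z: z[1]}]    # z2 >= 0
    res = minimize(lambda z: z[0] ** 2 + z[1] ** 2, x0=[0.5, 0.5],
                   constraints=cons, method="SLSQP")
    return res.x

for xi in (0.0, 0.5, 1.0):
    print(xi, toy_nominal(xi), (1.0 - xi) / 2.0)  # both components match (1-xi)/2
```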
First negotiation scenario When 1 < ξ ≤ 2, the initial constraints are not compatible anymore and must be negotiated, whence the need for Algorithm 2. The first step of the latter is min z, p |p 2 | s.t. z 1 ≥ -p 1 z 2 ≥ 0 z 2 = 1 -ξ -z 1 + 2p 2 0 ≤ p 1 ≤ 1 0 ≤ p 2 ≤ 1 and will result in a null optimal negotiation, i.e. P * 2 (ξ) = 0, since it is possible to recover feasibility without using p 2 . The second step will be the problem min z, p |p 1 | s.t. z 1 ≥ -p 1 z 2 ≥ 0 z 2 = 1 -ξ -z 1 + 2p 2 0 ≤ p 1 ≤ 1 0 ≤ p 2 ≤ 1 0 = |p 2 | where "0 = |p 2 |" is here to enforce the result from the former negotiation problem. The latter will return the optimal negotiation P * 1 (ξ) = ξ -1. Thus, the optimization variable z can be re-optimized over the newly negotiated set -which is actually a singleton. In the end, it yields z * 1 (ξ) = 1 -ξ and z * 2 (ξ) = 0. This scenario is represented in Figure 5.2 (B). Mild negotiation scenario Then, let us consider a scenario that requires even more negotiations, i.e. when 2 < ξ ≤ 4. In this case, the first step of Algorithm 2 gives a non-zero optimal negotiation s.t. P * 2 (ξ) = ξ-2 2 , and the second step gives the result P * 1 (ξ) = 1. Qualitatively, this must be interpreted as: "the parameter p 2 must be modified just enough so that there is a value of p 1 that provides a non-empty feasible set for z". In this scenario, z * 1 (ξ) = -1 and z * 2 (ξ) = 0, which are plotted in Figure 5.2 (C). Infeasible scenario Finally, for ξ > 4, there are no possible solutions for the first negotiation problem. Therefore, Algorithm 2 does not return anything, because there are no solutions to this over-constrained problem. Note that the limit p 2 ≤ 1 is the ultimate constraint that makes the negotiation problem infeasible. The importance of the parameters hierarchy The values of the variables z and p are represented in Figure 5.3 (left), w.r.t. the input ξ. The chosen parameters hierarchy is crucial. Indeed, if instead of considering that "p 2 is more critical than p 1 " it was considered that the whole vector (p 1 , p 2 ) could be negotiated at once, the results would have been completely different, as shown in Figure 5.3 (right). In the latter case, there would be only one negotiation problem min z, p |p 1 | + |p 2 | s.t. z 1 ≥ -p 1 , z 2 ≥ 0, z 2 = 1 -ξ -z 1 + 2p 2 , 0 ≤ p 1 ≤ 1, 0 ≤ p 2 ≤ 1. Doing this would imply that the variable p 2 would be used to recover feasibility before p 1 . Noteworthy remarks Remark 23 (Linear Programs). Using standard material from the literature, such as [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF]Example 1.13], decomposing p = ρ + -ρ -with ρ + , ρ -≥ 0 makes it possible to solve LP j using Linear Programming, whence the name LP j . Indeed, this decomposition yields the convenient re-writing ∥p∥ 1 = 1 ⊤ (ρ + + ρ -). See also the re-writing method in the Lipschitz-continuity proofs below. Remark 24 (In the literature). From a very general mathematical programming point of view, recovering feasibility in Linear Programming has been discussed extensively by Chinneck [START_REF] Chinneck | Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods[END_REF] for instance. Problem (5.4) builds upon right-hand side constraint "alteration" methods, by exploiting the available levers conveyed through the parameter p, the matrices H p and B p , and the need to enforce the parameter hierarchy. Remark 25 (Hierarchy does not impact feasibility). 
Though the hierarchy notion is important, it does not change the set of inputs s.t. there exists at least one value of the negotiable parameters that makes the constraints (5.1) feasible. Mathematically, it means that the set of ξ ∈ R^N_ξ s.t.

∃(z, p) ∈ R^N_z × R^nneg :  Gz ≤ h_0 + H_ξ ξ + H_p p,  Az = b_0 + B_ξ ξ + B_p p,  p_low ≤ p ≤ p_up,

does not depend on ⪰_e, by construction. The latter order only determines which values of p will be used for a given ξ. This fact can be observed in Figure 5.3, by remarking that the regions (D) and (C′) correspond to the same sets of ξ.

[Figure 5.2: feasible sets for z in the toy example; panel (0) shows the feasible set when p_1 = p_2 = 0 and ξ = 0, and panels (A), (B), (C) show how it changes as ξ grows and p_1, then p_2, are negotiated.]

[Figure 5.3: optimal penalties P*_1, P*_2 and optimal values z_1, z_2 as functions of ξ. Left: with the hierarchy (regions A to D). Right: without the hierarchy (regions A′ to C′), where both variables p_1 and p_2 are negotiated at the same time; the penalties differ from the previous case, and so do the optimal values z_1 and z_2.]

Remark 26 (Limits of the hierarchy). HEGO guarantees that the most critical parameters are minimized first. However, since the negotiable parameters can have very different influences on the constraints, it is still possible to have scenarios where a critical parameter is non-zero while a less critical parameter is zero. For instance, let us consider the following trivial example where the negotiated constraints take the form z_1 = ξ_1 + p_1, z_2 = ξ_2 + p_2, -1 ≤ z_1 ≤ 1, -1 ≤ z_2 ≤ 1. Since there are no links between the variables indexed by 1 and the ones indexed by 2, even imposing that p_2 be more critical than p_1 does not guarantee that p_1 will ever be used when the magnitude of ξ_2 increases. To summarize, the least critical negotiable parameters will be used in lieu of the most critical ones only if the fundamental nature of the problem makes it possible.

Remark 27 (Post-defense note). As noted by one of the jury members after the thesis defense, trying to minimize the value of a parameter with respect to a lexicographic or co-lexicographic order is similar to some methods used in multiobjective optimization. For instance, see [START_REF] Cococcioni | Lexicographic multiobjective linear programming using grossone methodology: Theory and algorithm[END_REF] for more details on the topic.

Smoothness of the HEGO algorithm

For the reasons mentioned previously, the fact that the outputs of Algorithm 2 -i.e. HEGO -do not change too fast when its inputs vary is of high interest. The goal of this section is to prove that the map ξ → z* defined by Algorithm 2 is globally Lipschitz on its definition domain. Proving this property relates to QP sensitivity analysis w.r.t. the constraints RHS and the linear part of the cost. When the cost is defined with a positive definite matrix, this property holds and is a well-known result [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]. However, this has to be adapted to our framework, where only a part of the optimization variable is unique (i.e. z is unique, but p is not necessarily unique). To alleviate the writing, and without loss of generality, we make the assumption that the cost of Refine only has a quadratic term in z, i.e. that Q = 0 in Equation (5.5a).
Indeed, extending the constraints RHS sensitivity results to perturbations in the linear part of the cost is obtained using well-known dualization methods of QPs [START_REF] Boyd | Convex Optimization[END_REF][START_REF] Gauvin | Formulae for the Sensitivity Analysis of Linear Programming Problems[END_REF][START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]. First, a re-writing of the problems LP j and Refine is introduced, to show that the result z * from Algorithm 2 is well-defined and unique. Then, we show that the optimal negotiation maps giving P * i and the optimal solution maps giving z * are Lipschitz continuous functions of the RHS of their constraints. This is proved using a series of well-known results and by adapting a theorem by Mangasarian & Shiau [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF] to our framework. Finally, we conclude on the properties of Algorithm 2. Problem re-writing The constraints of LP j and Refine can be rewritten into a unified, standard and linear framework. Denote by C j these constraints C j :=                  Gz ≤ h 0 + H ξ ξ + H p p, Az = b 0 + B ξ ξ + B p p, p low ≤ p ≤ p up , ∥p (i) ∥ 1 = P * i , i = j + 1, . . . , R. For j = 1, . . . , R, the constraints C j convey those of LP j , and C 0 convey those of Refine. Slack variables, denoted s G , s up , s low , can be used to transform the inequalities of C j into the constraints Gz + s G = h 0 + H ξ ξ + H p p, p + s up = p up , p -s low = p low , s G , s up , s low ≥ 0 Let us define the variable x = (z + , z -, ρ + , ρ -, s G , s up , s low ) where z = z + -z -, (5.6a ) p = ρ + -ρ -, (5.6b) x ≥ 0. ( and introduce the matrices Āj and rj as Āj :=              G -G -H p H p I dim s G O O A -A -B p B p O O O O O I nneg -I nneg O I nneg O O O I nneg -I nneg O O -I nneg O O I j+1 I j+1 O O O              , rj :=              h 0 + H ξ ξ b 0 + B ξ ξ p up p low P R j+1              where I j+1 and P R j+1 denote P R j+1 := P * R . . . P * j+1 ⊤ ∈ R R-j and I j+1 =          O . . . O 1 ⊤ n R 1 ⊤ n R-1 O . . . . . . . . . O 1 ⊤ n j+1 . . . O          ∈ R (R-j)×nneg . Note that for j = R, the last line of ĀR and rR is absent. Also, for j = 0, the first column of zeros is absent for I 1 . Recall for the rest of the proofs that rj is the vector bearing all the dependency of the problems LP j and Refine on ξ. Finally, note A j and r j the following matrices A j :=    Āj -Āj    and r j :=    rj -r j    . Using the new non-negative variable x, and the notations above, the constraints C j can be re-written under the two equivalent forms      Āj x = rj x ≥ 0 or      A j x ≥ r j x ≥ 0 (5.7) Further, for j = 1, . . . , R -1, by construction of the constraints C j , the following property holds {x | Āj x = rj , x ≥ 0} = {x | Āj+1 x = rj+1 , x ≥ 0, c ⊤ j+1 x = P * j+1 } (5.8) and will be used later to prove Proposition 12. Regarding the cost, define the vector c j of the same dimension as x s.t. c j :=      (. . . , 0, 1 ⊤ n j , O 1×(nneg-n j ) , 1 ⊤ n j , 0, . . .) ⊤ if j = 1, . . . , R O if j = 0 where the terms 1 ⊤ n j correspond to the position of the j th negotiable variable p (j) , here conveyed by the corresponding parts of ρ + and ρ -from Equation (5.6). Moreover, let us define D :=       P -P -P P O O O       ∈ R dim x×dim x . 
It is straightforward to verify that, for all P positive definite, the matrix D is positive semidefinite. Thus, the optimization problems LP j and Refine can be re-written in a more standard form than their original definition, respectively (5.4) and (5.5). For any j = 1, . . . , R, LP j can be described as a Linear Program of the form (Primal LP) j            min x c ⊤ j x s.t. Āj x = rj x ≥ 0 (5.9) and Refine as a Quadratic Program which is (Primal QP)            min x 1 2 x ⊤ Dx + c ⊤ 0 x s.t. Ā0 x = r0 , x ≥ 0.            min x 1 2 x ⊤ Dx + c ⊤ 0 x s.t. A 0 x ≥ r 0 , x ≥ 0. (5.11) Certainly, in view of numerical implementation, there are more memory-efficient ways to translate Refine into these formats, but the formulations above are handy in the theoretical proof below. Also, note that we keep the same definition for P * j as in Problem (5.4), i.e. P * j denotes the optimal value of the cost of Problem (5.9) (its well-posedness will be established in Proposition 12). Uniqueness of the optimal trajectory Recalling that c 0 = 0, let us introduce the dual 4 of Problem (5.10) s.t. (Dual QP)            max x,µ,λ -1 2 x ⊤ Dx -λ ⊤ r0 s.t. Dx -µ + Ā0 ⊤ λ = 0, µ ≥ 0. (5.12) Using the Strong Duality Theorem 5 we get the following property. Proof, adapted from Lemma 2.1 in [START_REF] Berkelaar | Sensitivity analysis in (degenerate) quadratic programming[END_REF]. Since the tuples from the statement are optimal and since there is no duality gap, then the primal and dual cost are equal s.t. 1 2 (x * ) ⊤ Dx * = - 1 2 (x) ⊤ Dx -λ ⊤ r0 Since the tuples are optimal solutions, complementary slackness holds, i.e. µ ⊤ x = 0 and µ ⊤ x * = 0 (see e.g. Proposition 5.1.5 in [START_REF] Bertsekas | Nonlinear Programming[END_REF]). Applying the equality constraints from (5.12) at x, and multiplying it by (x * ) ⊤ to the left gives Proof. Let us prove by induction for j = R, . . . , 1 (with decreasing indices), that "Problem (5.9) has a finite optimal value at index j, and constraints (5.7) are feasible at index j -1". Beforehand, note that the cost function of all Problems 5.9 is lowerbounded by 0. For j = R, since χ(ξ) is assumed non-empty, and since the cost is lower-bounded by 0, then it has a finite optimal value, denoted P * R . Moreover, this minimum is attained, as guaranteed by Lemma 12 in the Appendix, at a point denoted x R . Thanks to the Equation (5.8), we get that x R is a feasible point for the constraints (5.7) at index R -1. (x * ) ⊤ Dx = (x * ) ⊤ µ -(x * ) ⊤ Ā0 ⊤ λ. Thus 1 2 (x * -x) ⊤ D(x * -x) = -λ ⊤ r0 + (x * ) ⊤ Ā0 ⊤ λ = λ ⊤ ( Ā0 x * -r0 ) = To prove the induction, let us assume that Problem (5.9) has a finite optimal value at index j, and constraints (5.7) are feasible at index j -1, for some j ≥ 2. Using the relation from Equation (5.8), and by induction, the set x | Āj-1 x = rj-1 , x ≥ 0 is non-empty. Thus, since its cost is lower-bounded by 0, Problem (5.9) at index j -1 has a finite optimal value P * j-1 , also attained at some point denoted x j-1 . The latter being a feasible point for constraints (5.7) at index j -2, one concludes the induction proof. This shows that the optimal penalties P * 1 , . . . , P * R are well defined, and that the constraints (5.7) are feasible at index 0. Consequently, Problem (5.10) also has a minimum. By Lemma 7, and using the expression of D, we get that any solution of Problem (5.10) has a unique value for z + and z -, showing that z * = z +z -exists and is unique. This concludes the proof. 
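To make the above re-writing tangible, the sketch below shows how a single negotiation problem LP_j can be assembled for an off-the-shelf LP solver, using the split p = ρ+ - ρ- so that ∥p^(j)∥_1 becomes the linear cost 1^⊤(ρ+ + ρ-) restricted to the j-th group (Remark 23). For brevity, z is kept as a free (unsplit) variable, the dependence on ξ is assumed to be already folded into the right-hand sides h and b, and the memory constraints (5.4e) are omitted; the index array `grp` and the choice of scipy's HiGHS solver are illustrative assumptions.

```python
# Sketch of the sign-split re-writing: min ||p^(j)||_1 over the negotiated
# constraints, as a standard LP in x = (z, rho_plus, rho_minus).
import numpy as np
from scipy.optimize import linprog

def negotiation_lp(G, h, A, b, Hp, Bp, p_low, p_up, grp):
    Nz, nneg = G.shape[1], Hp.shape[1]
    cost = np.zeros(Nz + 2 * nneg)
    cost[Nz + grp] = 1.0                   # rho_plus entries of p^(j)
    cost[Nz + nneg + grp] = 1.0            # rho_minus entries of p^(j)
    # G z - Hp (rho+ - rho-) <= h, and p_low <= rho+ - rho- <= p_up.
    split = np.hstack([np.zeros((nneg, Nz)), np.eye(nneg), -np.eye(nneg)])
    A_ub = np.vstack([np.hstack([G, -Hp, Hp]), split, -split])
    b_ub = np.concatenate([h, p_up, -p_low])
    A_eq = np.hstack([A, -Bp, Bp])         # A z - Bp (rho+ - rho-) = b
    bounds = [(None, None)] * Nz + [(0.0, None)] * (2 * nneg)
    return linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                   bounds=bounds, method="highs")
```

At the optimum the two halves of the split do not overlap on the minimized group, so the returned objective value equals the optimal negotiation level P*_j of the corresponding LP_j.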
Among others, this last proposition points out that as long as the first-to-becomputed negotiation problem is feasible -i.e. LP R -then Algorithm 2 will necessarily terminate. This is summarized as follows, using the set Λ := {ξ ′ | χ(ξ ′ ) ̸ = ∅}. Proof. Using the definition of rR , there is a constant vector b and a constant matrix B s.t. rR = b + Bξ, where b and B directly depend on h 0 , H ξ , b 0 and B ξ appearing in the formulation of QP (ξ, p) defined in (4.23). Let us assume that ξ 1 and ξ 2 are s.t. χ(ξ 1 ) ̸ = ∅ and χ(ξ 2 ) ̸ = ∅. For x 1 ∈ χ(ξ 1 ) and x 2 ∈ χ(ξ 2 ), the condition (1 -t)x 1 + tx 2 ∈ χ (1 -t)ξ 1 + tξ 2 , ∀t ∈ [0, 1], holds due to the linearity of (5.7) w.r.t. x and ξ, which shows the desired result. It is noteworthy that Λ is not necessarily bounded. Indeed, if the intersection of the kernels of matrices B ξ and H ξ (defined in Problem (4.23)) is wider than the singleton {0}, then the set Λ is even guaranteed to be unbounded. Regularity w.r.t. the right-hand side of the constraints The proof that z * is a Lipschitz-continuous map of its inputs can be split in two steps. First, we need to show that the optimal penalties P * j are Lipschitz-continuous maps of their inputs, and then that z * is also a Lipschitz-continuous map of ξ and P * i for i = 1, . . . , R. The former result is rather straightforward and will be dealt with in Lemma 8. However, the latter result requires a bit more attention, and will be detailed in Theorem 6. Remark 28. The results used below are expressed with both the Euclidean and the maximum norms, respecting their original formulation, whereas the main result is given in an homogeneous form -i.e. using only the Euclidean norm -in Theorem 7. RHS regularity of the negotiation maps The following theorem is adapted from [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF], and applies to the standard LP min ∥x 1 * -x 2 * ∥ ∞ ≤ L    b 1 -b 2 d 1 -d 2    2 (5.14) A direct corollary is the following. -p * 2 | ≤ K    b 1 -b 2 d 1 -d 2    2 Lemma 8. For any j = 1, . . . , R, the maps ξ ∈ Λ → P * j are Lipschitz continuous. Proof. In order for these maps to be well-defined, recall that each value P * j depends on ξ and the preceding values P * j+1 , . . . , P * R , (except for P * R that only depends on ξ). Thus, for all j = 1, . . . , R, we are precisely interested in the maps Γ j : Γ j : Λ → R + ξ → P * j (ξ, Γ j+1 (ξ, . . .), . . . , Γ R (ξ)). where Γ R (ξ) = P * R (ξ). Consider j = R. By composition, since the RHS of Problem (5.9) is affine dependent on ξ, and since the optimal value of Problem (5.9) is Lipschitz-continuous w.r.t. its RHS (due to Corollary 2), then Γ R is Lipschitz continuous. Then, by induction, let us assume that at each step j = 1, . . . , R, the previous functions Γ j+1 , . . . , Γ R are Lispchitz continuous. Using the same composition argument, we obtain the desired result. RHS regularity of the Linear Complementary Problem To show that the map (ξ, P * 1 , . . . , P * R ) → z * is Lipschitz continuous, we proceed in three steps. First, we recall that polytopes satisfy a Lipschitz continuity-like property (Theorem 5) w.r.t. their RHS. Then, we show how the former map is related to a certain Linear Complementary Problem (LCP, in Lemma 9). Finally, the Lipschitz-continuity of the uniquely defined components of the LCP is established (Theorem 6). respectively. There exists a constant µ, that depends only on A and C, s.t. 
for each x 1 ∈ F 1 , there exists an x 2 ∈ F 2 closest to x 1 in the ∞-norm s.t. ∥x 1 -x 2 ∥ ∞ ≤ µ    b 1 -b 2 d 1 -d 2    2 Proof of the following lemma can be found in [START_REF] Murty | Linear and Combinatorial Programming[END_REF]Sec. 16.4.4]. Lemma 9 (Linear Complementary Problem, [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF][START_REF] Murty | Linear and Combinatorial Programming[END_REF]). Assume that D is positive semidefinite. Then, x is a solution of Problem (5.11) if and only if there exists a vector η s.t. x :=    x η    (5.15) is a solution to the following LCP M x + q ≥ 0, x ≥ 0, (M x + q) ⊤ x = 0 (5.16) where M :=    D -A 0 ⊤ A 0 O    and q :=    c 0 -r 0   . Given a subset J ⊂ {1, . . . , dim x}, any solution of the following linear system6 M j x + q j ≥ 0, xj = 0, j ∈ J, (5.17a) M j x + q j = 0, xj ≥ 0, j / ∈ J, (5.17b) is a solution to the LCP (5.16) for (M, q). For such sets J, denote Q(J) the set of all q vectors for which (5.17) has a solution 7 . Lemma 10 (Active set partitions [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]Lemma 3.1]). Let q 1 and q 2 be two distinct vectors and let q(t) := (1-t)q 1 +tq 2 for every t ∈ [0, 1]. Assume that the LCP (5.16) for (M, q(t)) is solvable for t ∈ [0, 1]. Then, there exists a partition 0 = t 0 < . . . < t N = 1 s.t. for i = 1, . . . , N q(t i-1 ) ∈ Q(J i ), q(t i ) ∈ Q(J i ), for some J i ⊂ {1, . . . , n}. The constructive proof of this result can be found in [59, p.592]. Its main purpose is to provide a characterization of the active set changes along [0, 1]. When D is positive definite, then Lemma 10 is instrumental to show that the solutions of the LCP (5.16) are Lipschitz w.r.t. q [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]. Among others, the latter reasoning relies on the fact that the positive definiteness of D uniquely defines the solution. However, in our framework, D is only positive semidefinite, and the result can not be applied directly. Instead of using much more abstract results of the literature (see e.g. [START_REF] Aubin | Lipschitz Behavior of Solutions to Convex Minimization Problems[END_REF][START_REF] Lee | Continuity of the Solution Map in Quadratic Programs under Linear Perturbations[END_REF]), we decided to adapt the proofs of Mangasarian & Shiau [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF] and to focus only on the part of the optimization variable that is uniquely defined. More precisely, the variable x defined in (5.15) equals x = (z + , z -, ρ + , ρ -, s G , s up , s low , η), where the first part "z + , z -" is uniquely defined, as pointed out in Proposition 12. Therefore, we consider that x can be decomposed in two parts x u and x m , s.t. x u is uniquely defined and x = (x u , x m ). Let us introduce the following theorem, which is a generalized version of [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]Thm. 3.2]. Theorem 6 (Lipschitz continuity of the uniquely defined components of the LCP). Let q 1 and q 2 be points s.t. the LCP (5.16) for (M, q(t)) with q(t) = (1t)q 1 + tq 2 has a solution x(t) = (x u (t), x m (t)) s.t. 
x u (t) is unique, for every t ∈ [0, 1]. Then, there exists a constant σ > 0, depending only on M , s.t. any solutions x1 = (x 1 u , x 1 m ) and x2 = (x 2 u , x 2 m ) of (5.16) with respective vectors q 1 and q 2 satisfy ∥x 1 u -x 2 u ∥ ∞ ≤ σ∥q 1 -q 2 ∥ 2 (5. 18) Proof. There exists a subdivision 0 = t 0 < t 1 < . . . < t N = 1 satisfying the properties of Lemma 10. For i = 0, ..., N , let x(t i ) = (x u (t i ), x m (t i )) be a solution of (5.16) for (M, q(t i )). Note that the x u (t i ) are unique, but the x m (t i ) are not necessarily unique. For any set of indices J ⊂ {1, . . . , dim x} and any matrix A, denote by A J (resp. A J ) the matrix composed of the rows of A whose indices are in J (resp. in {1, . . . , dim x}\J). With this notation, for any 1 ≤ i ≤ N and for any t ∈ [t i-1 , t i ], the LCP (5.16) reduces to the linear problem    M J i I Ji    x(t) +    q(t) J i O Ji    ≥ 0 and    M Ji I J i    x(t) +    q(t) Ji O J i    = 0 (5.19) by construction of J i and Q(J i ). Then, according to Theorem 5, stating the Lipschitz-continuity of feasible points of linear constraints, there exists a solution ŷ(t i-1 ) := (y u (t i-1 ), y m (t i-1 )) of (5.16) for (M, q(t i-1 )), i.e. a feasible point of (5.19) at t = t i-1 , s.t. ∥x(t i ) -ŷ(t i-1 )∥ ∞ ≤ µ i ∥q(t i ) -q(t i-1 )∥ 2 for some µ i > 0. Let us define σ := max{µ i | 1 ≤ i ≤ N }. Since the first part of x(t i-1 ) and ŷ(t i-1 ) is uniquely defined, i.e. x u (t i ) = y u (t i ), the following inequality holds ∥x 1 u -x 2 u ∥ ∞ ≤ N i=1 ∥x u (t i ) -x u (t i-1 )∥ ∞ = N i=1 ∥x u (t i ) -y u (t i-1 )∥ ∞ ≤ N i=1 ∥x(t i ) -ŷ(t i-1 )∥ ∞ ≤ N i=1 µ i ∥q(t i ) -q(t i-1 )∥ 2 ≤ σ N i=1 ∥(t i -t i-1 )(q 1 -q 2 )∥ 2 = σ∥q 1 -q 2 ∥ 2 . Hence the desired result. Remark 29. The constants of Theorem 4 and Theorem 6 are actually defined via a constructive approach, which is detailed in [START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF]. Conclusion on the Lipschitz-continuity of HEGO ∥(z * ) 1 -(z * ) 2 ∥ 2 ≤ L∥ξ 1 -ξ 2 ∥ 2 . Proof. This theorem links all the previously stated results. The optimal function z * is well defined on Λ due to Proposition 12. Its value is the minimum of Problem (5.11), whose RHS is affinely dependent on the optimal penalties P * 1 , . . . , P * R . The latter are Lipchitz-continuous maps of ξ, as shown in Lemma 8. Moreover, the solutions of Problem (5.11) are solutions of the LCP (5.16), as recalled in Lemma 9. The uniquely defined components of the solutions of the LCP -i.e. the variables z + and z -in x -are Lipschitz-continuous functions of the vector q from (5.16), in the sense of Theorem 6. Also, the latter vector q is affinely dependent on the vector r 0 (see Lemma 9), which is affinely dependent on ξ and the optimal penalties P * 1 , . . . , P * R . Thus, by composition, z + and z -are Lipschitz-continuous maps of ξ, in the sense of Equation (5.18). The desired result, i.e. with the 2-norm on both sides of the Lipschitz inequality, stems from the equivalence of the norms in finite dimension. A direct corollary of Theorem 7 is the following. Corollary 3. Let z * (ξ) denote the value returned by Algorithm 2 with input ξ. Then, the optimal value function ξ → J(z * (ξ)) is Lipschitz-continuous on Λ. Monotonicity of the optimal negotiations This section aims at giving a mathematical meaning to the sentence "the farther from the reference trajectory, the higher the negotiation", for the special case R = 1, i.e. 
when there is no hierarchy involved. Getting farther from the reference is modeled directionally by considering the map t → tξ where t ≥ 0 and ξ is an arbitrary input direction. Among others, we aim at showing that the map t → P * 1 (tξ) is non-decreasing, for non-negative values of t, which is what we call the negotiation monotonicity. Formally, when R = 1, there is only a single negotiation problem, that writes P * ←-min z,p ∥p∥ 1 (5.20a) Gz ≤ h 0 + H ξ ξ + H p p (5.20b) Az = b 0 + B ξ ξ + B p p (5.20c) p low ≤ p ≤ p up (5.20d) where P * is the optimal value. Using the same kind of re-writing technique as in Section 5.3.1, one can show that there are matrices M and Q, and a vector r, s.t. Problem (5.20) is equivalent to the following LP in its standard primal form V * P (r) ←-min x,y 1 ⊤ x (5.21a) s.t. M x + Qy = r, (5.21b) x, y ≥ 0. (5.21c) where V * P (r) denotes its optimal value function, and r = r 0 + Kξ ∈ R nr for some matrix K and some vector r 0 . The problems are equivalent in the sense that P * (ξ) = V * P (r 0 + Kξ). (5.22) Following Proposition 15 in the Appendix, the dual associated to Problem (5.21) is V * D (r) ←-max µ µ ⊤ r (5.23a) s.t. M ⊤ µ ≤ 1, (5.23b ) Q ⊤ µ ≤ 0. ( 5 (r + td) = V * D (r + td) (5.24) which highlights the absence of duality gap. Now that the directional RHS sensitivity function is well-defined, thanks to Theorem 11, we can use standard results from the literature (as recalled by Proposition 16 in Appendix) to formulate the following theorem. Theorem 8 (Direct application of [START_REF] Adler | A geometric view of parametric linear programming[END_REF][START_REF] Murty | Linear Programming[END_REF]). W is a closed interval (possibly unbounded). v is a continuous convex function, which is piecewise affine on a finite number of sub-intervals of W. Theorem 9. Consider any ξ and define d := Kξ. Assume that (i) V * P (r 0 ) = 0, (ii) ξ is s.t. V * P (r 0 + td) is defined for t > 0, on a (small) non-trivial interval. Then, the negotiation map t → P * (tξ) := V * P (r 0 + td) is non-decreasing on W ∩ R + . Proof. We proceed by combining the local affine description of t → V * P (r 0 + td) with the above mentioned results. Let us introduce the quantity D d (r) s.t. D d (r) ←-max λ λ ⊤ d s.t. M ⊤ λ ≤ 1, Q ⊤ λ ≤ 0, λ ⊤ r = V * P (r). 8 Recalled as Theorem 11 in Appendix. Thanks to the absence of duality gap over W, as shown in Equation (5.24), Theorem 12 used with assumption (ii) states that there exists a t ′ > 0 s.t. for any scalar t ∈ [0, t ′ ] we have V * P (r 0 + td) = V * P (r 0 ) + tD d (r 0 ) where D d (r 0 ) is finite. This formula is what we call Gauvin's formula in the Appendix. Since V * P (r 0 ) = 0 by assumption (i), then 0 is a feasible vector for D d (r 0 ), and consequently D d (r 0 ) ≥ 0. Finally, since t → P * (tξ) is convex (from Theorem 8) and has a non-negative slope at t = 0, it is necessarily non-decreasing on W ∩ R + . Remark 32. For the general emergency problem -i.e. when there are R ≥ 2 negotiable sub-parameters -the negotiation maps t → P * j (tξ) behave differently. Showing that they are all continuous and piecewise affine can be achieved with little effort, but they are generally not convex, as the introductory example of Section 5.2.2 shows. This latter example also shows that even t → R j=1 P * j (tξ) is not necessarily convex. However, it has been conjectured that the negotiation maps t → P * j (tξ) remain non-decreasing functions of t. This conjecture is currently under investigation, at the time of writing this manuscript. 
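Theorem 9 can be illustrated numerically on the toy example of Section 5.2.2 with R = 1, i.e. when p_1 and p_2 are gathered in a single group; here the direction is simply the scalar input ξ itself, with P*(0) = 0 as required by assumption (i). The sketch below (assuming scipy is available) evaluates the total negotiation ξ → P*(ξ) on a grid and checks that it is non-decreasing; infeasible inputs are mapped to +∞.

```python
# Numerical illustration of Theorem 9 on the toy example with R = 1.
# Variables are ordered (z1, z2, p1, p2); p >= 0 here, so ||p||_1 = p1 + p2.
import numpy as np
from scipy.optimize import linprog

def total_negotiation(xi):
    c = [0.0, 0.0, 1.0, 1.0]
    A_ub = [[-1.0, 0.0, -1.0, 0.0]]            # -z1 - p1 <= 0, i.e. z1 >= -p1
    b_ub = [0.0]
    A_eq = [[1.0, 1.0, 0.0, -2.0]]             # z1 + z2 - 2 p2 = 1 - xi
    b_eq = [1.0 - xi]
    bounds = [(None, None), (0.0, None), (0.0, 1.0), (0.0, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.fun if res.success else np.inf

values = [total_negotiation(t) for t in np.linspace(0.0, 4.0, 41)]
assert all(v2 >= v1 - 1e-9 for v1, v2 in zip(values, values[1:]))
```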
Remark 30. Assumption (i) in Non-monotonicity of the optimal trajectories Contrary to the negotiation maps, the directional optimal solution maps t → z * (tξ) defined by the outputs of HEGO are not necessarily component-wise monotonous. In the same fashion as in Section 5.2.2, consider the following example which illustrates this property. It is one of the reason why the heatmaps of the optimal time-of-flight variation, i.e. Figures B.9 Basic Example 4. Consider an optimization problem with z = (z 1 , z 2 ) ⊤ ∈ R 2 , Emergency guidance method generalization HEGO, as presented in Algorithm 2, is an emergency guidance method built upon a nominal guidance method that relies on Quadratic Programming. This convenient framework, with its linear constraints, allows the use of mature LP and QP solvers for the implementation of the problems LP j and Refine. However, it is possible to take some distance from this presentation, as other kinds of mathematical programming are often used to provide an approximation of PDG (ξ, p), as discussed in Chapter 1. Thus, let us describe the link between nominal and emergency guidance methods from a high-level perspective. This section can be skipped with no loss of continuity. Generalized notations Let us denote by J(z, ξ) the cost that must be minimized, and by F eas (ξ, p) ⊂ R Nz the feasible set, which conveys the various control constraints, the dynamic model, etc. We insist that this set can convey much more general constraints than Equation (5.1). For example, it could stem from any direct collocation method, as presented in well-known references [START_REF] Betts | Practical methods for optimal control and estimation using nonlinear programming[END_REF][START_REF] Hargraves | Direct trajectory optimization using nonlinear programming and collocation[END_REF][START_REF] Stryk | Numerical Solution of Optimal Control Problems by Direct Collocation[END_REF], or other state-of-the-art methods [START_REF] Malyuta | Convex Optimization for Trajectory Generation[END_REF]. Also, it it desired that p ∈ Ω for some set Ω ⊂ R nneg . Generalized emergency order The goal is still to provide a trajectory, even when F eas (ξ, 0) is empty. To that purpose, instead of the 1-norm, let us introduce more general negotiation functions γ j : R n j → R + , for j = R, . . . , 1, assumed convex. It helps us define a generalized emergency order denoted ⪰ γ . The latter is also co-lexical, as ⪰ e , but in the sense of the negotiation functions γ j . Like in (5.3), a vector p a is said to be more negotiated than another vector p b s.t. p a ⪰ γ p b , if and only if γ R p (R) a > γ R p (R) b or γ R p (R) a = γ R p (R) b and γ R-1 p (R-1) a > γ R-1 p (R-1) b , or . . . or γ R p (R) a = γ R p (R) b and . . . and γ 1 p (1) a > γ 1 p (1) b , or γ R p (R) a = γ R p (R) b and . . . and γ 1 p (1) a = γ 1 p (1) b . Therefore, the general meaning of maximizing the launcher's integrity is to find the smallest p in the sense of ⪰ γ . Generalized sequence of optimization problems Let us here define a more general version of LP j and Refine. With the above-defined notations, the generalized nominal guidance problem is the following optimization problem (Nominal) min z J(z, ξ), s.t. z ∈ F eas (ξ, 0) . As before, what is of interest here is what happens when ξ is s.t. F eas (ξ, 0) is empty. Thus, we will seek to minimize a cost γ i for each parameter p (i) . Let us introduce the first negotiation problem, dealing with the most critical negotiation parameter p (R) , s.t. 
Γ * R ←-min z, p γ R (p (R) ), s.t. z ∈ F eas (ξ, p) , p ∈ Ω, where Γ * R denotes the optimal value of this problem. Let S R be the set of points p s.t. there is a z ∈ F eas (ξ, p) and s.t. (z, p) minimizes γ R . In other words, S R is the set of minimizers of the latter problem, but projected on the negotiable parameters set. By successively negotiating the other parameters, in a similar fashion as LP j , we define the follow-up negotiation problems as Γ * j ←-min z, p γ j (p (j) ), s.t. z ∈ F eas (ξ, p) , p ∈ Ω, p (j+1) ∈ S j+1 , where j = 1, . . . , R-1, and where S i denotes the set of minimizers of the i th problem, projected onto the negotiable parameters set (which is, by its recursive definition, a subset of the previous sets S i for i = j + 1, . . . , R). Consequently, the generalized version of the Refine problem becomes z * , ⋆ ←-argmin z, p J(z, ξ), s.t. z ∈ F eas (ξ, p) , p ∈ Ω, p (1) ∈ S 1 where the notation z * , ⋆ means that the value of p is ignored, since it is not necessarily unique. The two latter problems can be re-written in a more convenient form. Since the conditions z ∈ F eas (ξ, p) and p ∈ Ω do not change between these problems, it is possible to simplify the above-mentioned notations by switching the abstract condition p (j+1) ∈ S j+1 into γ i (p (i) ) = Γ * i , ∀i = j + 1, . . . , R. (5.25) However, even for a convex function γ i , the equality constraint γ i (p (i) ) = Γ * i is numerically ill-posed, since it defines a non-convex level-set in general (e.g. when γ i (.) = ∥.∥ 2 ). Thankfully, it can be relaxed without loosing generality, by using only the inequality γ i (p (i) ) ≤ Γ * i . ( 5 γ i (p (i) ) ≤ Γ * i , ∀i = 1, . . . , R (5.28d) High-level description of safety margins Enforcing the condition z ∈ F eas (ξ, p) for some value ξ may sometimes bring the variables of the problem to the frontier of what is feasible for a given p. Therefore, given a set M that contains 0, we would like to impose that if z is feasible for a value of p, then for every ∆p ∈ M there is another z ′ feasible for p + ∆p. Consequently, the condition z ∈ F eas (ξ, p) from the previous problems (5.27) g ξ (z) ≤ K k=1 σ k g ξ (z k ) ≤ K k=1 σ k H(ξ).(p + ∆p k ) = H(ξ).(p + ∆p) and likewise: A(ξ)z = b(ξ)+B(ξ).(p+∆p). Therefore, z belongs to F eas (ξ, p + ∆p), whence (5.29b). Constraint (5.29a) also holds, since it corresponds to the sub-case ∆p = 0, hence the conclusion. Therefore, incorporating Proposition 14 into Problems 5.27 and 5.28 is a way to enforce safety margins while performing nominal an emergency guidance, using only a finite number of constraints. Illustrations In this section, four numerical examples are proposed along with qualitative discussions. Example 1 illustrates the basic principles of HEGO. Example 2 shows that a wide selection of negotiable parameters can be used together. Example 3 demonstrates the modeling capabilities offered by HEGO. Finally, Example 4 shows how emergency guidance scales to the 3D model. A quantitative analysis of this last example is proposed in Section 6.2 of Chapter 6. Note that all of these examples have the same number of discretization points, i.e. N = 4, where N is defined in p. 56 right before Equation (4.1). For the 2D model (resp. the 3D model), it means that the size of z is N z = 15 (resp. N z = 22). All the data presented in the examples below is normalized. With the 2D model Example 1 (Basic 2D scenario). 
Let us consider a simple choice of negotiable parameters, consisting in the incidence limit ∆α max and the final horizontal position ∆z f s.t. p = ∆α max , ∆z f ⊤ . In terms of hierarchy, we impose • p (1) = ∆α max (least critical), • p (2) = ∆z f (most critical). This example is illustrated in Figures 5. [START_REF] Adler | A geometric view of parametric linear programming[END_REF] • p (1) = ∆α max (least critical), • p (2) = ∆a max nor , • p (3) = ∆z f , • p (4) = ∆h f (most critical). This example is illustrated in Figure 5.7. Recall that ∆h f denotes the final altitude, which can seem surprising at first site. Why do not we use the final vertical speed ∆v f h instead? The reason is linked to the linearization used when we define QP (ξ). Indeed, ∆v f h appears, in practice, to be a (τ 1 ) = µ 3 , α(τ 1 ) = µ 4 , qr (t f ) = µ 2(N +3)-1 and α(t f ) = µ 2(N +3 ) . The Lispschitz-continuity stated in Theorem 7 can be observed on all the charts, except (g) whose Lipschitz-continuity is related to Corollary 3. Note that nominal guidance is performed up to ∆z 0 ≈ 0.3 (see Figure 5.5-(d)), highlighting the fact that significant active set changes can occur even with nominal guidance. Also, as one can observe in (h) between ∆z 0 = 0.0 and 0.2, ∆t * f has a non-zero though very small slope. This is mostly due to the fact that changing ∆t * f has a strong influence on simultaneously the vertical and the horizontal components of the trajectory, meanwhile ∆z 0 influences (almost only) the horizontal one. less useful lever than ∆h f . Mathematically speaking, it means that the image of the matrix 9 B ∆v f h does not describe the same vector space as the one of B ∆h f , and thus will not have the same ability to recover the changes in ξ. This remark also applies to the matrices B ∆v f h and B ∆h f . In practice, negotiating ∆h f is blindly allowed in the optimization problems of HEGO. However, when HEGO says that ∆h f < 0 is necessary, the optimal trajectory z * will reach the ground before reaching the new final altitude ∆h f . The state x at which the altitude of this trajectory reaches null altitude may be defined as the actual negotiated landing state. Note that in Figure 5.7, dispersing the inputs w.r.t. ∆z 0 does not trigger the use of ∆h f , whose negotiation curve P * 4 remain flat in the sub-Figure (d). However, this example is the ground base of the numerical assessment provided in Chapter 6, where sufficiently rich scenarios are considered, and show how ∆h f is used. Example 3 (2D scenario with repeated negotiable parameters). To demonstrate the modeling capabilities that Algorithm 2 offers, let us consider an example where several negotiable parameter are "repeated": p = ∆α 1 max , ∆z f 1 , ∆α 2 max , ∆z f 2 ⊤ In terms of hierarchy, we impose • p (1) = ∆α 1 max (least critical), • p (2) = ∆z f 1 , • p (3) = ∆α 2 max , • p (4) = ∆z f 2 (most critical). This example is illustrated in Figure 5.8. This choice of negotiation parameters allows to negotiate the incidence and the final horizontal position alternatively. This may be helpful when the nature of the area neighboring the landing site becomes increasingly worse when moving away. To some extent, it can be applied to the terrain presented in Figure 1.3 from Chapter 1: the order of priority is to first negotiate the incidence, then the horizontal position (as long as it remains within the crops or the beach), then the incidence again, and finally trying to land in the forest or in the ocean. 
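As a side illustration of the order ⪰_e that drives these examples, the small helper below compares two candidate negotiation vectors by looking at their most critical sub-parameter first; the grouping shown (four scalar sub-parameters, mirroring Example 3) and the test vectors are purely illustrative assumptions.

```python
# Sketch of the extended colexicographic comparison (emergency order):
# returns True when pa is at least as negotiated as pb.  `groups` lists the
# index arrays of the sub-parameters p^(1), ..., p^(R), least critical first.
import numpy as np

def more_negotiated(pa, pb, groups, tol=1e-12):
    for grp in reversed(groups):               # most critical group first
        na, nb = np.abs(pa[grp]).sum(), np.abs(pb[grp]).sum()
        if abs(na - nb) > tol:
            return na > nb
    return True                                # every level is equal: tie

# Grouping mirroring Example 3: (d_alpha1_max, d_zf1, d_alpha2_max, d_zf2).
groups = [np.array([0]), np.array([1]), np.array([2]), np.array([3])]
pa = np.array([0.0, 0.0, 0.0, 0.5])   # touches the most critical level only
pb = np.array([0.9, 1.0, 0.2, 0.0])   # larger, but less critical, negotiations
print(more_negotiated(pa, pb, groups))         # True
```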
• p (1) = ∆α max (least critical), With the 3D model • p (2) = ∆a max nor , • p (3) = (∆z f , ∆y f ) ⊤ , • p (4) = ∆h f (most critical). This example is illustrated in Figure 5.9. To illustrate behaviors that are not shared with the 2D rocket model, the input are dispersed w.r.t. ∆y 0 , the initial horizontal position component that is out-of-plane compared to the reference trajectory. These negotiation parameters are used for the quantitative analysis in Chapter 6. Summary In this chapter, we exposed a method to provide emergency guidance. It boils down to a sequential minimization of the amplitude of negotiable parameters, enforcing a prescribed hierarchy between these parameters, and is implemented using a finite number of LPs and a single QP. The method HEGO is capable of producing both nominal (when possible) and emergency guidance solutions thanks to a unified formulation. Theoretical guarantees prove that the outputs of HEGO are Lipschitz-continuous w.r.t. its inputs, preventing the solutions from varying too fast for small changes in the inputs. Numerical simulations have demonstrated how HEGO behaves on 2D and 3D examples, and that its outputs are consistent with the above-mentioned theoretical guarantees. Here, we propose quantitative assessments for HEGO. First, we provide highlevel comments regarding the implementation of HEGO. Then, we present a quantitative analysis of Example 4 from Chapter 5, by computing bivariate dispersions of the inputs on the 3D rocket model. Also, since HEGO aims at dealing with infeasible landing scenarios, its outcomes are compared with the vertical flight envelopes introduced in Chapter 3. To improve readability, the figures of this chapter have been moved at its end. General comments The time required to run HEGO depends directly on the design choices presented in Chapters 4 and 5. HEGO has been implemented in python and tested with cvxopt [START_REF] Andersen | CVXOPT: Convex Optimization[END_REF], mosek [START_REF] Andersen | The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm[END_REF], glpk [START_REF] Fsf | GNU Linear Programming Kit[END_REF] and qpSWIFT [START_REF] Pandala | qpSWIFT: A real-time sparse quadratic program solver for robotic applications[END_REF] as underlying solvers for the LP and QP solvers. The benchmarks have been forced to run on a single CPU (in practice, tested on an Intel® Core™ i9-9900K, at 3.60GHz, and on an Intel® Core™ i7-8550U CPU at 1.80GHz). Its run time typically ranges between • a few milliseconds for the 2D rocket model with 2 negotiable parameters, • and ≈ 60 ms for the 3D rocket model with 5 negotiable parameters.1 Major influence on run time The parameters that naturally affect the computation time are the dimensions N z (i.e. main decision variable z) and n neg (i.e. negotiation parameter p). Let us denote by N opt := N z + n neg the dimension of the optimization variable involved in the sub-problems of HEGO. Recall that m is the dimension of the control variable, and that N is the number of discretization points (as shown in Figure 4.1). Since Cubic Splines are used for the description of the control corrections in Chapter 4, we get N opt = (m + 3)N + 1 + n neg showing the relative importance of the above-mentioned sizes. Also, the computation time may be influenced by the number of constraints, which is directly proportional to N c , as defined in Equation (4.8), and by the number of partitions of the negotiable parameter (i.e. 
R defined in Chapter 5). Minor or no influence on run time It is noteworthy that some parameters do not influence, or in a negligible way, the computation time. Among others, the dimension of ξ has a negligible impact on the run time of HEGO. Indeed, it only matters when computing the constraints right-hand side Gz ≤ h 0 + H ξ ξ + H p p, Az = b 0 + B ξ ξ + B p p. A useful application would be to handle thinner descriptions of the atmosphere parameters, such as the wind for instance. Let us assume that one is able to measure the horizontal wind component at a high resolution for the low atmosphere layers (let us say 1 point of measure per 10 m, up to 10 km, for illustration purposes). Then, ξ would resemble: ξ = (∆x 0 ) ⊤ , w 0 m , w 10 m , w 20 m , , . . . , w 10 000 m ⊤ The dimension of ξ now equals 1009 in this case (when ∆x 0 is of of dimension 8 for the 3D rocket model of Chapter 2), though this has a non-significant impact on HEGO run-time. Input dispersion on 3D rocket model Let us consider the same reference trajectory as in Example 4, for the 3D rocket model. We pick the same choices of negotiable parameters as in Example 4, i.e. p = ∆α max , ∆a max nor , ∆z f , ∆y f , ∆h f ⊤ with R = 4 s.t. • p (1) = ∆α max (least critical), • p (2) = ∆a max nor , • p (3) = ∆z f , ∆y f ⊤ , • p (4) = ∆h f (most critical). The purpose of this section is to quantify and assess the quality of the outputs of HEGO, over bivariate dispersions of the inputs. We consider the 15 inputs presented in Table 6.2. From the 105 pairs of inputs that can be formed from this list, we report results obtained with a selection of 20 pairs, enumerated in Table 6.3. These pairs have been selected for their representativeness, and because they demonstrate a wide variety of behaviors. To categorize the outputs of HEGO, we use the notion of emergency mode. It is defined as the index of the most critical negotiable sub-parameter that has a nonzero 1-norm, thus taking its values within {0, . . . , R + 1}. Mathematically, using handy notations, it is defined s.t. where the values P * i (ξ) come from negotiation problems of HEGO. The color coding associated with the emergency modes is presented in Table 6.1. Selection of figures First, two pairs are detailed in Figures 6.1 and 6.2. They present the outputs of HEGO for two pairs, respectively (∆z 0 , ∆y 0 ) and (∆q 0 r , ∆q 0 c ). The first pair yields a typical collection of inputs that require the negotiation capabilities of HEGO, whereas the second pair shows that in some cases no negotiation is needed. To ease the visualization, the pairs from Table 6.3 have been split in two batches, labeled A and B. For each batch, the following charts are provided: • Emergency mode map (Figures 6. Several complementary comments are provided along with the Figures below. Conclusion on the example This example demonstrated that the negotiable parameters have very different impacts on how they help solve the emergency problem. • ∆α max and (∆z f , ∆y f ) appear to be extremely relevant to recover feasible trajectories when inputs influencing the horizontal behavior of the landing change. ∆a max nor has a non-negligible though small influence on these changes. • ∆h f has an influence on the vertical part of the landing, but the trajectories relaxed using ∆h f are not doable in practice, for the margin it offers is too thin. These remarks are summarized in Table 6.4. 
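For completeness, the emergency-mode classification used in the figures above boils down to a few lines. The sketch below assumes that the optimal levels P*_1, . . . , P*_R returned by HEGO are available (or None when LP_R itself is infeasible); the tolerance value is an arbitrary numerical threshold.

```python
# Sketch of the emergency-mode classification: 0 stands for nominal guidance,
# j in {1, ..., R} means that p^(j) is the most critical sub-parameter
# actually negotiated, and R + 1 flags an infeasible negotiation.
def emergency_mode(levels, R, tol=1e-9):
    if levels is None:                 # LP_R infeasible: no trajectory at all
        return R + 1
    used = [j + 1 for j, value in enumerate(levels) if value > tol]
    return max(used) if used else 0

print(emergency_mode([0.0, 0.0, 0.3, 0.0], R=4))   # -> 3
print(emergency_mode([0.0, 0.0, 0.0, 0.0], R=4))   # -> 0
print(emergency_mode(None, R=4))                   # -> 5
```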
Comparison with vertical flight envelopes As detailed in Chapter 5, Algorithm HEGO provide landing guidance for nominal and emergency problems. Thus, the set of inputs that do not require emergency guidance define the "nominal reachable set", in the sense of HEGO. Due to the approximations made in Chapter 4 (such as the discretization and the linearization), the set of inputs that are considered feasible without negotiation differ from the actual reachable set. The full characterization of the reachable set of the "complete" rocket model (i.e. the 2D or 3D rocket model) is not computationally tractable, due to the nonlinearities and the number of states involved. However, as detailed at the end of Chapter 3, it is possible to fully describe the reachable set of the rocket for purely Note that all of these trajectories have been computed using the 3D rocket model, even if it has been used with a purely vertical motion. These trajectories are represented in Figure 6.17. vertical motion. Due to the high slenderness ratio of the rocket final-burn, this latter set is a coarse approximation of the actual reachable set of the complete model. Thus, we propose a comparison between the classification offered by HEGO in terms of emergency modes, and the vertical flight envelope defined in Chapter 3. We consider four different reference trajectories, which are listed in Table 6.5 and displayed in Figure 6.17. For all four trajectories, a slice of the reachable set is considered, and HEGO is computed over a grid of inputs on each of these slices. In the chart (a) of their respective, the Max, Min and Min-Max surfaces defining the vertical flight envelope are defined at the end of Chapter 3. The main observation is that HEGO is conservative. Indeed, on one hand, it declares some inputs "non-nominal" when they would be expected to be feasible without negotiating the constraints, as shown analytically in Chapter 3. On the other hand, their are no "false-positive", i.e. all the inputs declared "nominal" by HEGO are indeed inside the vertical flight envelope. Also, it is noteworthy that the reference trajectory significantly influences the outcomes of HEGO, though it does not change the latter conservative property. Contrary to what the chart (e) suggests, there is no discontinuity between the various scenarios. The fact that one can see a "gap" in the curves is only a matter of mesh refinement: a thinner mesh would show that this gap is continuously filled with curves. However, this gap shows that on some input subsets, the algorithm may have stiff output variations, as discussed in Figure 6.15. Conclusion Résumé Ce chapitre propose un résumé succinct de ce manuscrit, quelques commentaires sur les sujets qui ont été volontairement omis, et des pistes de recherche pour d'éventuels travaux futurs. The main focus of this thesis is on computing a landing trajectory for a reusable tossback vehicle in response to changes in the flight parameters. When this guidance problem is infeasible, one faces an emergency situation. In such cases it is acceptable to sacrifice some of the constraints originally formulated in the nominal (non emergency) situation. A methodology (Algorithm HEGO) has been developed in the manuscript to solve this emergency guidance problem. It is applicable in the -relative vast-vicinity of a reference trajectory having a reasonably high slenderness ratio. It is capable of handling the sacrifice of constraints according to predefined extended colexicographic order. 
As an illustration, results have been presented and have served to compute performance charts in Chapter 6. For actual missions, aerospace engineers would be interested in the quantitative version of Table 6.4.
A significant advantage of the proposed method, i.e. Algorithm HEGO, is that it is deterministic, and its implementation relies on mature technologies (LP and QP) for which off-the-shelf solvers are available. Nearly no heuristics are needed to make the method work, apart from the tuning of the termination condition tolerances of the numerical solvers.
As announced in the introduction, several topics have been voluntarily considered out of scope in this thesis. Some of them represent possible future research directions, and are sketched here.
The modeling choices of Chapter 4 could be tailored to other frameworks, for instance to handle singular arcs in the parametric control description. As far as the overall G&C system is concerned, the control part has been considered out of scope for this manuscript. The interplay between the control and the guidance algorithms has been briefly discussed in [START_REF] Ménou | Nominal And Emergency Rocket Landing Guidance Using Quadratic Programming[END_REF]. Simulations gathering both parts of the G&C system would provide a more complete performance assessment.
Negotiating vertical components (∆v f h or ∆h f ) has proved to be ill-posed, as shown in Chapter 6. To obtain robustness regarding the vertical motion, one needs to go back to the mission design itself, and possibly consider landing strategies other than the classic tossback trajectory structure pictured in Figure 1.2. More generally, the choice of the reference trajectory has a strong impact on the final performances. The sensitivity-based PDG method presented in Chapter 4 has sufficient accuracy for our application (i.e. final-burn guidance of a tossback vehicle with a high slenderness ratio), but it depends significantly on the chosen reference trajectory and its generalization to more complex maneuvers is limited. Its application to large diverts of high-agility vehicles would require a dedicated study with extensive numerical benchmarks. However, it may be of interest to apply this sensitivity-based guidance method to multi-phase problems. Indeed, combining the re-entry glide and the final-burn phases in a single guidance problem would be a relevant problem, for which the sensitivity-based approach would have great potential.
Our implementation of HEGO conveniently builds upon the linear description of the constraints. However, following the generalization discussed in Section 5.6, it would be interesting to develop a version of HEGO with other underlying guidance methods, such as successive convexification or pseudo-spectral methods, which could be applied to large-divert problems.
Finally, the emergency problem that we formulated is only one way to describe the problem of landing guidance relaxation. Our modeling is not sufficient to handle disjunctive scenarios, where one would need to choose between two separate landing sites for instance. This represents an important research direction that could be worth exploring.
The proof of this property is standard material that can be found in most optimization textbooks (see e.g. [START_REF] Boyd | Convex Optimization[END_REF], Ch. 5). It is provided here for completeness and for its tutorial aspect.
Also, beware of the fact that the role of µ and λ is inverted in the previous proposition and its proof compared to Equation (A.2). Note that if the condition c -A ⊤ µλ = 0 is not feasible -or equivalently that A ⊤ µ ≤ c is not feasible -then g equals -∞ for any value of µ and λ, and therefore the optimal value of the dual is -∞. A direct consequence of Proposition 15 and the Strong Duality Theorem presented above is that v(b) = min{c ⊤ x | Ax = b, x ≥ 0} = max{µ ⊤ b | A ⊤ µ ≤ c}. whenever the primal problem is feasible and finite. By convention, when the primal problem is infeasible, v takes the optimal value of the dual problem (possibly ±∞). - 1 1 Saturates x between a and b. Sgn (.) = Sign function For a ∈ R, Sgn (a) := if a < 0. IVP = Initial Value Problem OCP = Optimal Control Problem STM = State Transition Matrix See Appendix A.2.2. LP = Linear Programming QP = Quadratic Programming NLP = Non Linear Programming RHS = Right-hand side OoP = Out of Plane Mathematical nomenclature specific to Chapter 2 e ⋆ = Unit vector In direction of ⋆. X, X = Vector, Norm For X a vector of R 3 , then X = ∥X∥ 2 . R (a, γ) = Rotation 3 × 3 rotation matrix of axis a and angle γ. Nomenclature for the 2D model α = Incidence (In 2D) α is signed: -90 • < α < 90 • . (e z , e h ) = Earth frame xv xvi Nomenclature (C A , C N ) = Rocket body frame θ = Attitude (In 2D) θ is signed: -90 • < θ < 90 • . h = Altitude v h = Vertical speed z = Horizontal position v z = Horizontal speed q r = Engine flow m = Total mass a nor = Normal acceleration a nor is signed. ξ = Pitch (e z , e y , e h ) = Earth frame (C N , C A , C P ) = Rocket body frame θ = Attitude (In 3D) θ is unsigned: 0 • ≤ θ < 90 • . ζ z , ζ y = Projected attitudes Defined in Figures 2.4 and 2.5. α z , α y = Projected incidences Defined in Figure 2.5. z = Horizontal position (With respect to e z .) y = Horizontal position (With respect to e y .) h = Altitude v z = Horizontal speed (With respect to e z .) v y = Horizontal speed (With respect to e y .) v h = Vertical speed m = Total mass q r = Engine flow (Real) q c = Engine flow (Controlled) Reference state State of size n. ū = Reference control Control of size m. η = Reference parameter Parameter of size n η . tf = Reference time-of-flight µ = Discretized control Size N µ . For Cubic Splines, N µ = m(N + 3). τ i = Control time instance N elements s.t. 0 = τ 0 < τ 1 < . . . < τ N = 1. τ ′ i = Constraint time instance N c elements s.t. 0 = τ ′ 0 < τ ′ 1 < . . . < τ ′ Nc = 1. ∆t f = Final time change z = Decision variable Size N z = N µ + 1, s.t. z = (µ ⊤ , ∆t f ) ⊤ . Figure 1 . 1 : 11 Figure 1.1: (Left) DC-X take-off and landing in 1993. (© New Mexico Museum of Space History) (Right) Perseverance landing site, 2021. (© ESA/DLR/FU-Berlin/NASA/JPL-Caltech) Figure 1 . 2 : 12 Figure 1.2: Typical flight phases of a tossback vehicle, from take-off to landing (trajectory not to scale). The higher the slenderness ratio, the more vertical the flight trajectories. (© Google Earth V 9.168.0.0, (July 28, 2022), France, Landsat/Copernicus) Forest Figure 1 . 3 : 13 Figure 1.3: Trying to maximize launcher's integrity by relaxing well-chosen constraints. Figure 1 . 4 : 14 Figure 1.4: High-level summary. Nominal and emergency guidance methods presented in this thesis, with references to the associated chapters. Figure 2 . 1 : 21 Figure 2.1: Air jet influence.Qualitative change of the wind flow with (left) and without (right) an air jet pushing in front of a moving rocket. 
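Returning to the linear-programming duality identity v(b) = min{c ⊤ x | Ax = b, x ≥ 0} = max{µ ⊤ b | A ⊤ µ ≤ c} recalled above, the following minimal Python check illustrates it numerically. The data (A, b, c) are made-up illustration values, not taken from the manuscript, and scipy's linprog is used only as an off-the-shelf LP solver (its default bounds already enforce x ≥ 0).

import numpy as np
from scipy.optimize import linprog

# Illustrative standard-form LP data (feasible and bounded); not thesis data.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0])

# Primal: v(b) = min c'x  s.t.  Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b)

# Dual: max mu'b  s.t.  A'mu <= c, mu free, solved as a minimization of -mu'b.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * len(b))

print(primal.fun, -dual.fun)  # both equal v(b) up to solver tolerance: no duality gap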
See[START_REF] Nonaka | Vertical Landing Aerodynamics of Reusable Rocket Vehicle[END_REF] and[START_REF] Marwege | First Wind Tunnel Data of CALLISTO Reusable VTVL Launcher First Stage Demonstrator[END_REF] for experimental results. Figure 2 . 2 : 22 Figure 2.2: Planar rocket model. Axis, angles and forces. e Vr (respectively e Or ) denotes the unit vector parallel (resp. orthogonal) to V r . Figure 2 . 4 : 24 Figure 2.4: Angles ζ z and ζ y in the frame (e z , e y , e h ), for the proof of Equation (2.8). (Figure 2 . 6 : 26 Figure 2.6: Relations between the lift, the drag and the relative speed vectors. Here, the angle ν denotes the oriented angle between C N and L, defined only for α ̸ = 0. a u.s. nor = ∥F -(F • e Vr )e Vr ∥ m . Fig- ure 2 . 6 , 26 T, D, L, e Vr and e L all belong to the plane (e Vr , e L ), or equivalently to the plane (e Or , e Vr ), where e Or and e Vr are orthogonal. Consequently, the term F -(F • e Vr )e Vr only has a component along e Or , which yields ∥F -(F • e Vr )e Vr ∥ = |F • e Or | Using the latter expression with the definitions of T, D, L, e Vr , e L and C A yields a nor := L cos α -(T + D) sin α m . y 2 : 2 max 2 . ( 3 . 8 ) 22238 = -ā cc T max and ȳ1 := ācc T Let us define F ⊂ D the flight envelope, as the set of states y lying between Σ max and Σ min (in terms of altitude), and satisfying y 1 ≤ ȳ1 , y 2 ≥ y 2 , and m -≤ y 3 ≤ m + .(3.9) Figure 3 3 Figure 3.1: Figure 3 . 1 : 31 Figure 3.1: Possible scenarios for (λ 2 , λ2 ). The origin is prohibited due to Proposition 4. Figure 3 . 3 : 33 Figure 3.3: Three Max-Min-Max trajectories, with varying first time of switch t ′ 1 . Maximum final mass is obtained for t ′ 1 = 0, i.e. for a Min-Max thrust program. A useful by-product of the proofs from above is the full characterization of the reachable set for the vertical motion (3.37), as shown in Figure 3.2. Using the notations from Figure 3.2, for any map v(y) : R 3 → [-1, 1], every feedback control law u : R 3 → R s.t. Theorem 1 relies on Assumption 4 to rule out the Max-Min-Max programs, and keep the Min-Max programs only. A greater number of numerical simulations, aiming at computing dy f 3 Figure 4 . 1 : 41 Figure 4.1: Control discretization and time instances of the constraints for N = 3.The correction is described by a parametric function τ → u µ (τ ). Here is represented a scalar Cubic Spline, described by its values µ 0 , . . . , µ 3 at several time instances, and by its slopes µ 4 and µ 5 at the starting and end-points. The inequality constraints are enforced on the subdivision τ ′ 0 , . . . , τ ′ Nc . Figure 4 . 2 : 42 Figure 4.2: Representation of the optimal points of Basic Example 1 for -1 ≤ ξ ≤ 1, in the plane (x 1 , x 2 ). Point A is sometimes called a cusp in the literature [23]. and [g j ] z (z 0 , 0) (all j) are linearly independent, (ii) the conditions of Lemma 5 are satisfied at z 0 for the multipliers ν 0 and λ 0 , (iii) SCS holds for (z 0 , ν 0 ), then (a) z 0 is a local isolated minimizing point of problem NLP (0) and the associated Lagrange multipliers ν 0 and λ 0 are unique, (b) for ξ in a neighborhood of 0, there exists a unique, once continuously differentiable vector function y(ξ) = [z * (ξ), ν * (ξ), λ * (ξ)] ⊤ satisfying the second-order sufficient conditions for a local minimum of NLP (ξ) s.t. y(0) = (z 0 , ν 0 , λ 0 ), and hence z * (ξ) is a locally unique local minimum of NLP (ξ) with associated unique Lagrange multipliers ν * (ξ) and λ * (ξ), Figure 4 . 
3 : 43 Figure 4.3: Comparison between the exact solution of Example 1 and the solution returned by QP (ξ), for -1 ≤ ξ ≤ 1. Figure 4 . 3 . 18 . 4318 Figure 4.3. Remark 18. Similar methods aiming at computing an expansion of z * (ξ) can be found in more recent work. See for example the work of Bonnans & Shapiro [19, Sec. 5.2]. Remark 19. From a high-level point of view, this Chapter describes how to con- . 25 ) 25 Composing the previous expressions yields the directional first-order expansion of the statex[τ, z nlp (εξ, εp), εξ] = xlin (τ, εξ, εp) + o(ε).Since xlin is an approximation of the augmented state, it can be split in half: the first n components form the state approximation x lin , and the last component form the time-of-flight approximation t lin f xlin (τ, ξ, p) =    Figure 4 . 4 : 44 Figure 4.4: Summary of the nominal guidance method, as presented in Chapter 4. Figures 4 . 4 5 illustrate a dispersion of the input variable ξ along a single component, namely ∆v 0 h , for both positive and negative values. The inputs are voluntarily dispersed over a small range of values. -(c) and (d), non-local constraints are activated for sufficiently large values of the input, as demonstrated by the trajectories that reach the incidence bounds. Figure 4 . 5 : 45 Figure 4.5: First-order correction, for ∆v 0 h varying in the vicinity of 0. The sub-Figure (a) is the main purpose of this example, showing that the green curve is a second order residual w.r.t. to the input ∆v 0 h . Note that sub-Figure (c) shows very late corrections in the engine flow. The weighting matrix P has been chosen to favorearly corrections. Since QP (ξ, p) returned late corrections anyway, it means that earlier flow corrections would have required even higher incidence corrections (which is partially explained by the high dynamic pressure at the beginning of the descent). Figure 4 . 6 : 46 Figure 4.6: Changes in the input variable leading to local changes in the active set.The input variable used in this example is ∆z 0 . Figure 4 . 7 : 47 Figure 4.7: Non-local behavior of QP (ξ, p), when ∆z 0 and ∆y 0 change. For large values of ∆z 0 and ∆y 0 , several constraints start to be triggered, such as the incidence bounds (upper and/or lower bounds). s.t. n neg = 2 and R = 2, making both sub-parameters are thus scalars. Then (Least critical) (1, 1) ⪯ e (3, 2) ⪯ e (4, 2) ⪯ e (1, 3) ⪯ e (-2, 3) (Most critical). Basic Example 3. Let us consider negotiable parameters s.t. n neg = 3 and R = 2 s.t. p = (p Figure 5 . 1 : 51 Figure 5.1: Pictorial representation of the possible values for the input ξ. Figure 5 . 2 : 52 Figure 5.2: Illustration of the constraint for the problem of Section 5.2.2 w.r.t. the input values, using HEGO. Figure 5 . 3 : 53 Figure 5.3: Curves associated to the example of Section 5.2.2. (Left)The first remark is that all the quantities displayed are indeed continuous w.r.t. the input ξ. On the bottom chart, there are four distinguishable areas. (A) corresponds to the nominal scenario, when no negotiation is needed. (B) and (C) corresponds to scenarios that require respectively 1 and 2 non-zero negotiation parameter values to recover feasibility. Finally, scenario (D) is when there are no options left., and no allowed values of p 1 or p 2 can help recover feasibility for z. (Right) Without enforcing any hierarchy, the previous zones (B), (C) merge into a single zone (B ′ ), where both variables p 1 and p 2 are negotiated at the same time. 
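To make the hierarchy idea of the example above concrete, here is a small self-contained sketch of a lexicographic sequence of negotiation problems on a toy feasibility set, written with cvxpy as an off-the-shelf modeling layer. The constraint structure, the bounds and the ordering convention (the more critical parameter is minimized first and then frozen) are simplifying assumptions made for this illustration; they are far simpler than the PDG constraints and do not claim to reproduce the exact formulation used by HEGO.

import cvxpy as cp

def negotiate(xi, p1_max=2.0, p2_max=1.0):
    # Toy stand-in for the negotiation stage: z must satisfy z >= xi,
    # z <= 1 + p1 and z <= 1.5 + p2, with p2 treated as more critical than p1.
    z = cp.Variable()
    p1 = cp.Variable(nonneg=True)
    p2 = cp.Variable(nonneg=True)
    feas = [z >= xi, z <= 1 + p1, z <= 1.5 + p2, p1 <= p1_max, p2 <= p2_max]
    # Stage 1: keep the most critical parameter as small as possible.
    stage_most = cp.Problem(cp.Minimize(p2), feas)
    stage_most.solve()
    if stage_most.status not in ("optimal", "optimal_inaccurate"):
        return None                       # even full negotiation fails
    P2 = p2.value
    # Stage 2: with the critical negotiation frozen, minimize the less critical one.
    stage_least = cp.Problem(cp.Minimize(p1), feas + [p2 <= P2 + 1e-9])
    stage_least.solve()
    return float(p1.value), float(P2)

for xi in (0.5, 1.2, 2.0, 3.0):
    print(xi, negotiate(xi))   # nominal, then p1 only, then p1 and p2, then no option left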
The penalties differ from the previous case, and so do the optimal values z 1 and z 2 . (5. 10 ) 10 It is also possible to re-write Problem (5.10) into the following canonical QP (Canonical QP) Lemma 6 .Lemma 7 . 67 If {x | Ā0 x = r0 , x ≥ 0} is not empty, then Problem (5.10) has no duality gap, i.e. Problem (5.10) and Problem (5.12) have optimal values and they are equal. Under the assumption of Lemma 6, let (x * , µ * , λ * ) and (x, μ, λ) be optimal solutions of both Problems 5.10 and 5.12. Then, Dx * = Dx. 4Proposition 12 . 12 0 which gives D(x * -x) = 0 since D ⪰ 0, hence the desired result. See e.g. [21, Section 5.2]. 5 Theorem 11, recalled in the Appendix.Let us now introduce the setχ(ξ) := x | ĀR x = rR , x ≥ 0 . If χ(ξ) is non-empty, then the value of z * = z +z -computedby Algorithm 2, obtained by successively solving Problems 5.9 and 5.10, exists and is unique. Corollary 1 . 1 Algorithm 2 returns a solution for the input ξ if and only if ξ ∈ Λ.Proof. If χ(ξ) ̸ = ∅, Proposition 12 shows that Algorithm 2 has a solution. Otherwise, χ(ξ) = ∅, implies that LP R is infeasible, and then Algorithm 2 fails. The set Λ is not empty, since 0 belongs to Λ by definition of QP (ξ, p). Proposition 13. Λ is convex. ATheorem 4 ( 4 feasible point for Problem (5.13) is a point x that satisfies (5.13b) and (5.13c). A solution point for Problem (5.13) is a feasible point that is minimal for (5.13a). Adapted from [59, Thm. 2.4]). Let the Linear Program (5.13) have non-empty solution sets S 1 and S 2 for right-hand sides (b 1 , d 1 ) and (b 2 , d 2 ), respectively. There exists a constant L > 0 s.t. for each x 1 * ∈ S 1 , there exists an x 2 * ∈ S 2 s.t. Theorem 5 ( 5 Adapted from [59, Thm. 2.2], Lipschitz-continuity of feasible points of linear inequalities and equalities). Let the conditions Ax = b and Cx ≤ d have non-empty feasible sets F 1 and F 2 for the right-hand sides (b 1 , d 1 ) and (b 2 , d 2 ), and B.20, and the curve of ∆t * f , in Figure B.23, exhibit such complex patterns. Figure 5 . 4 : 54 Figure 5.4: Illutration of Basic Example 4. This example shows that there can be components z * i among the outputs of HEGO that are non-monotonous. and 5.6, where the input ξ is dispersed for positive values of the change in initial horizontal position ∆z 0 . Example 2 (Advanced 2D scenario). Let us consider a more advanced version of Example 1, where the negotiable parameter is now p = ∆α max , ∆a max nor , ∆z f , ∆h f ⊤ In terms of hierarchy, we impose 2 Figure 5 . 5 : 255 Figure 5.5: Dispersion of ∆z 0 over [0, 1], for the 2D rocket model from Example 1. The curves of sub-Figures (a), (b) and (c) are plotted for 30 values of ∆z 0 . The similarity between the introductory example of Section 5.2.2 and this actual example is clear in the sub-Figure (d). Figure 5 . 6 : 56 Figure 5.6: Dispersion of ∆z 0 over [0, 1], for the 2D rocket model from Example 1.All these charts have very different y-scales. The charts (c) and (d) are zoomed views of (a) and (b). Except for the charts (c) and (d), where the blue dots represent the computed values, all the curves have been plotted for 301 values of ∆z 0 . The charts (a) to (f) shows a subset of µ, of size 2(N + 3). Precisely, using the nomenclature from Equation (4.1), q r (τ 1 ) = µ 3 , α(τ 1 ) = µ 4 , qr (t f ) = µ 2(N +3)-1 and α(t f ) = 4 Figure 5 . 
7 : 457 Figure 5.7: Dispersion of ∆z 0 over [-2, 1], for the 2D rocket model from Example 2.In sub-Figure(c), a thinner mesh (not required for the purpose of this example) would reveal the presence of blue-dot trajectories between the orange and the red ones, on the left of the reference trajectory. The asymmetry in the negotiation maps (sub-Figure(d)) comes from the reference trajectory itself, which is a non-trivial curve in the plane (z, h). H 2 | 3 | h f | = * 4 Figure 5 . 9 : 23459 Figure 5.9: Dispersion of ∆y 0 over [-1, 1], for the 3D rocket model. Dispersing the inputs along ∆y 0 leads to the computation of out-of-plane trajectories. The dispersion of the inputs of HEGO will be analyzed quantitatively on this exact same scenario in Chapter 6. EmergencyMode 1 1 (ξ) = . . . = P * R (ξ) = 0, R + 1 if LP R (ξ) is not feasible, arg max i=1,...,R { i | P * i (ξ) ̸ = 0} otherwise.(6.1) 3 3 and 6.4),• Projected incidence α y (Figures 6.5 and 6.6), Figure 6 . 1 : 61 Figure 6.1: Dispersion of (∆z 0 , ∆y 0 ). Dispersing this pair leads to in-plane and out-of-plane trajectories. The negotiation maps (charts (f) to (i)) have a structure deeply linked to the partition described in the emergency mode map from chart (b). Figure 6 . 2 : 62 Figure 6.2: Dispersion of (∆q 0 c , ∆q 0 r ). The inputs correspond to purely nominal scenarios. The negotiation maps are not displayed, since they are all constant and equal to zero. Figure 6 . 3 : 63 Figure 6.3: Emergency modes. Batch A. Each chart represents the emergency modes obtained by dispersing a given pair of inputs. For instance, in (a), ∆v 0 z w.r.t ∆z 0 means that ∆z 0 is in the x-axis, and ∆v 0z is in the y-axis. Since the role of the outof-plane variables (such as ∆y 0 , ∆v 0 y or w y,0 ) is symmetric in this problem having a planar reference trajectory, their associated charts have an axis of symmetry. Figure 6 . 4 : 64 Figure 6.4: Emergency modes. Batch B. Finding purely horizontal or purely vertical lines separating different emergency modes for some specific pairs reveals that the associated input variables are uncorrelated. Among others, the variables conveying the vertical motion (h, v h , m, Isp) are mostly uncorrelated with the ones conveying the horizontal motion (z, v z , y, v y ), especially when considering the emergency mode of ∆h f . The pairs (i) and (m) are typical supporting examples. Moreover, contraryto what appears to be, charts (p) and (q) are not feature-less. The specific case of the pair (q) is detailed above in Figure6.2. Figure 6 . 5 : 65 Figure 6.5: Projected incidence α y (in-plane) w.r.t. normalized time. Batch A. In most charts, the curves describe a sort of pivot around a the normalized time τ = 0.5. This behavior is directly linked to the Cubic Spline description of the correction, detailed in Chapter 4. Note that for some pairs -e.g. pair (e) -the incidence is negotiated but not always up to its maximum value, even when more critical parameters are negotiated first. The main reason is due to the way the active set changes w.r.t. to the input, which is illustrated in detail in Figure B.23. Figure 6 . 6 : 66 Figure 6.6: Projected incidence α y (in-plane) w.r.t. normalized time. Batch B. It is normal that the pair (l), with purely out-of-plane variables, has nearly zero impact on α y , since the latter conveys the in-plane projected incidence. Figure 6 . 7 : 67 Figure 6.7: Projected incidence α z (out-of-plane) w.r.t. normalized time. 
Batch A.Contrary to what the chart (e) suggests, there is no discontinuity between the various scenarios. The fact that one can see a "gap" in the curves is only a matter of mesh refinement: a thinner mesh would show that this gap is continuously filled with curves. However, this gap shows that on some input subsets, the algorithm may have stiff output variations, as discussed in Figure6.15. Figure 6 . 8 : 68 Figure 6.8: Projected incidence α z (out-of-plane) w.r.t. normalized time. Batch B. Figure 6 . 9 : 69 Figure 6.9: Controlled engine flow q c w.r.t. normalized time. Batch A. The highly different influence of the horizontal variables and the vertical ones on the engine flow is clearly pictured by these charts. A change on the input pair (c) only implies little changes in the controlled engine flow. However, the pair (e) has a high impact on the engine flow, especially due to the presence of ∆v 0 h . Figure 6 . 10 : 610 Figure 6.10: Controlled engine flow q c w.r.t. normalized time. Batch B. Figure 6 . 11 : 611 Figure 6.11: Heatmap of the first negotiation, i.e. P * 1 = |∆α max |. Batch A. Figure 6 . 12 : 612 Figure 6.12: Heatmap of the first negotiation, i.e. P * 1 = |∆α max |. Batch B. Figure 6 . 13 : 613 Figure 6.13: Heatmap of the third negotiation, i.e. P * 3 = ∥(∆z f , ∆y f ) ⊤ ∥ 1 . Batch A. These negotiation maps have a structure deeply linked to the partition described in the emergency mode maps, as shown in Figure 6.3. Figure 6 . 14 : 614 Figure 6.14: Heatmap of the third negotiation, i.e. P * 3 = ∥(∆z f , ∆y f ) ⊤ ∥ 1 . Batch B. Figure 6 . 15 : 615 Figure 6.15: Several zooms on stiff parts of the pair (∆z 0 , ∆h 0 ), illustrating that the negotiation maps are indeed Lipschitz-continuous in practice. All the x-axis convey ∆z 0 , and all the y-axis convey ∆h 0 . Figure 6 . 16 : 616 Normal load a nor on two different input pairs. H o ri z o n ta l p o s it io n z -1 -0.5 0 0.5 1 H o r iz o n t a l p o s it io n y time: = t/tf mdry (f) Total mass m Figure 6 . 17 : 617 Figure 6.17: Reference trajectories selected for the benchmark against the vertical flight envelope, as listed in Table6.5. Figure 6 . 18 : 618 Figure 6.18: Vertical envelope and associated slices (Reference A). As shown in the chart (b), the red slice shows what happens when dispersing the total mass m w.r.t. the vertical speed v h . The light yellow area conveys the area which is out of the flight envelope. On the contrary, the blue area is the area inside it. The dashed line conveys the separation between these areas. Figure 6 . 19 : 619 Figure 6.19: Vertical envelope and associated slices (Reference B). Figure 6 . 20 : 620 Figure 6.20: Vertical envelope and associated slices (Reference C). Figure 6 . 21 : 621 Figure 6.21: Vertical envelope and associated slices (Reference D). Proof. Introduce the LagrangianL(x, µ, λ) := c ⊤ x + µ ⊤ (b -Ax)λ ⊤ x.Let us form the function g s.t.g(µ, λ) := inf x L(x, µ, λ) Since g(µ, λ) = µ ⊤ b + inf x (c -A ⊤ µλ) ⊤ x, then g(µ, λ) =      µ ⊤ b if c -A ⊤ µλ = 0 -∞ otherwise.Therefore, the dual of (A.5) is sup µ,λ g(µ, λ),s.t. λ ≥ 0.Using the Strong Duality Theorem (Theorem 11 above) with the fact that v(b) is finite and x → c ⊤ x is convex, makes the sup of the latter problem a max. Finally, after simplification of λ = c -A ⊤ µ ≥ 0, ones gets (A.6). Theorem 12 (. 7 ) 33 .• 12733 From [41, Thm. 1]). 
Under the assumption of Proposition 15, for any direction d and for any scalar t > 0 sufficiently small, we always havev(b + td) = v(b) + t sup{λ ⊤ d | A ⊤ λ ≤ c, λ ⊤ b = v(b)}.(ARemark Theorem 12 must be understood as follows: If it is known that t → v(b + td) exists on some interval [0, t ′ ] for t ′ > 0, then the "sup" term in Equation (A.7) is a "max", and Equation (A.7) holds (at least) on a non-trivial sub-interval of [0, t ′ ]. Figure B. 10 : 10 Figure B.10: Heatmap of second negotiation P * 2 = |∆a max nor |. Batch A. .2) ∧ ROCKET BODY ∧ ∧ ROCKET BODY ∧ Boundary layer stays attached to the body ∧ ENGINE ∧ ∧ ENGINE ∧ Boundary layer detaches from the body ∧ , e y , e h ) is rotated by Ψ around e z , giving (e z , C P , e h ′ ). Then, the later is rotated by ξ around C P to give (C N , C P , e A ). Finally C A = -e A , leading to the orthonormal basis (C N , C A , C P ). ξ e h ′ Ψ e h e A = -C A Yaw: Ψ Pitch: ξ Here, Ψ > 0, ξ > 0. ξ e z Ψ • e y Ψ C P ξ V r C N C A Figure 2.3: Rocket orientation, based on yaw (Ψ) and pitch (ξ) angles. First, (e z A = -C A convey enough information to establish the change of variables between (Ψ, ξ) and (ζ z , ζ y ). By taking the ratio of the first two components of e A , the following relations are obtained Equation (2.7) and (2.8) and e Ψ = ζ z and tan ξ = cos Ψ tan ζ y . (2.9) Then, using the shortcuts t z := tan ζ z , t y := tan ζ y and t z,y := 1 + t 2 z + t 2 y , we get hence (2.8). .4, one has e A = (h y , -h z , v) ⊤ , Remark 4. Equation (2.7) is well defined whatever the values of Ψ and ξ. However, Equation (2.8) has a singular definition when ζ z = 90 • or ζ y = 90 • . This raises multiple comments: 1. In all the scenarios studied in the thesis, only close-to-vertical trajectories are considered, where the rocket remains far from these singularities. 2. The singularity in Equation (2.8) is only a problem of definition. Indeed, e A can be extended by continuity everywhere, except when ζ z and ζ y equal ±90 • simultaneously. For example, for a given |ζ y | < 90 • , then e A (ζ z ) -→ ζz↑90 • e z . 3. One could legitimately wonder whether a numerical method could fall into one of these singularities during intermediate computations. As far as this thesis is concerned, the guidance methods exposed below only require the evaluation of the dynamic function f and its derivatives along a prescribed reference tra- jectory before the flight (more details on this topic in Section 4.3). Since the evaluation of f is not required on-board, this singular definition at ±90 • does not present any risk for our applications. .12) Representation of the projected angles: ζ z , ζ y , α z , α y . Note that C A , e Vr and V r are not coplanar to (e z , e h ) nor (e y , e h ).wheret z = tan ζ z , t y = tan ζ y , T z = tan(α z + ζ z ) andT y = tan(α y + ζ y ), to alleviate the writing. This expression stems from Equations (2.10) and (2.11) and ∥C A × e Vr ∥ = ∥C A ∥ ∥e Vr ∥ sin C A , e Vr = sin α e A e h e A e h ζ y ζ z Here, ζ z > 0 and ζ y > 0. e z e y • e z • e y ζ y α y e Vr |v h | ζ z α z e Vr |v h | α y + ζ y C A α z + ζ z C A V r V r -(v z -w z (h)) v y -w y (h) Figure 2.5: where Assumption 3 depends on b and a, which depend on the bounds on y 2 and y 1 . Though the estimates of y 2 and ȳ1 provided in Equation (3.8) are coarse, Assumption 3. The constants in (3.18) are s.t. b < √ a. Remark 11. they are sufficient for the numerical application discussed below. If needed, analytic bounds sharper than (3.8) could be computed. Proposition 7. 
For scenario 2a, γ(t) < 0, ∀t ∈ [0, t f ]. Proof. γ can be zero at most on an isolated point. Indeed, γ is continuous and if there is t 0 s.t. γ(t 0 ) = 0, then, from Lemma 4, it cannot be zero for greater times. Therefore, from (3.19), Γ can be zero at most on two isolated points.Proposition 8 shows that the two remaining scenarios (1c and 3a) correspond to Max-Min-Max structures. It enables us to state the main result below. Under Assumptions 1, 2 and 3, and for y 0 in F, any solution of Problem (2) is necessarily a Max-Min-Max thrust program, where one or two arcs may be absent. (3.25) The conclusion stems from comparison Lemma 13. Proposition 8. Under the assumptions of Lemma 4, the sign of Γ changes at most twice on [0, t f ]. Proposition 9. [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF] Thus, γ a (t) < 0 for t > t γ . Moreover, z λ (t γ ) = z a (t γ ) holds and since λ 2 < 0, for any t ∈ [t γ , t f ] żλ (t) = F (t, z λ (t)) and ża (t) ≤ F (t, z a (t)). .27) It describes the above-mentioned implicit dependence of (t 2 , t f ) on t 1 . When applicable, the IFT used on (3.27) provides us with the differentiability and the value of the derivatives of t 2 and t f w.r.t. t 1 , as Definition 4 . 4 The finite-dimensional approximation of the problem PDG (ξ, p) is defined by NLP (ξ, p), which is the following non-linear optimization problem min z J(z, ξ) (4.10a) s.t. h(z, ξ) ≤ H p p, g(z, ξ) = B p p. (4.10b) (4.10c) Remark 15 (State modeling choices). Describing the state using Equation (4.7) is Table 5 . 5 Parameter name Variables Negotiable? Comment Final horizontal positions Final horizontal speeds Final altitude Final vertical speed Incidence bound (2D) Projected incidence bound (3D) ∆z f , ∆y f ∆v f z , ∆v f y ∆h f ∆v f h ∆α max ✓ × × ≈ ✓ If landing area is solid and flat Otherwise the rocket would tilt at landing. See Example 2 for a discussion. Tiny margin, imposed by landing gear design Safety and flight quality bound. Engine flow bounds Normal acceleration ∆q min , ∆q max ∆a max nor × ✓ Physical constraint. Safety and flight quality bound. 1: List of negotiable parameters in PDG (ξ, p). under the form p low ≤ p ≤ p up . (5.2) Corollary 2 (Lipschitz continuity of the optimal value function of LPs w.r.t. RHS perturbations). Let the Linear Program (5.13) have non-empty solution sets S 1 and S 2 for right-hand sides (b 1 , d 1 ) and (b 2 , d 2 ), respectively, with associated optimal values p * 1 and p * 2 . Then, there exists a constant K > 0 s.t. |p * 1 Theorem 7 (Lipschitz-continuity of Algorithm 2). There exists a constant L > 0 s.t. for any inputs ξ 1 and ξ 2 in Λ, the unique solutions (z * ) 1 and (z * ) 2 returned by Algorithm 2 satisfy If V * P (r + td) is feasible and finite, then V * D (r + td) is finite by the Strong Duality Theorem 8 . For the converse case, note that V * D (r + td) is always feasible since µ = 0 satisfies (5.23b) and 5.23c. Also recall that the Dual of V * D (r + td) is V * P (r + td). Therefore, using once more the Strong Duality Theorem, if V * D (r + td) is finite then V * P (r + td) is feasible and finite. This gives the conclusion. For t ∈ W, let us define the function v s.t. .23c) Also, we will consider a RHS change in an arbitrary direction d. Let us define W := { t ∈ R : V * P (r + td) is feasible and finite} = {t ∈ R : V * D (r + td) is finite}. Lemma 11. For any r, V * P (r + td) is feasible and finite if and only if V * D (r + td) is finite. Proof. 
v(t) := V * P Theorem 9 is guaranteed by the assumption stating that the reference trajectory must satisfy the constraints of NLP (ξ, p): for null inputs -i.e. RHS equals r 0 in Problem (5.21) -there is no need for negotiation (i.e. V * P (r 0 ) = 0). Remark 31. Assumption (ii) in Theorem 9 is not over restrictive. It means that we consider only inputs ξ s.t. the negotiation problem remains feasible when exploring the inputs in this direction. To alleviate the writing, we denote by a function "c" the latter constraints s.t. these are satisfied if and only if c(ξ, z, p) ≤ 0. In this case, the (single) negotiation problem associated to these constraints is simply min z 1 ,z 2 ,p s.t. c(ξ, z, p) ≤ 0, p p ≥ 0. Moreover, given the arbitrary cost J(z) := 1 P * ←-2 (z 1 -2) 2 + 1 (z 2 + 2) 2 , the refine problem 2 writes min z 1 ,z 2 ,p J(z) s.t. c(ξ, z, p) ≤ 0, p ≥ 0, p ≤ P with a scalar input ξ ≥ 0, and with a scalar negotiable parameter p ≥ 0. Consider four constraints s.t. (Const. 1) (Const. 2) (Const. 3) (Const. 4) z 2 ≥ ξ z 2 ≥ z 1 z 2 ≤ 2 + p -z 1 z 2 ≤ 2 + z 1 * . Using HEGO with the two latter problems, we observe that the first component of the optimal solution, i.e. z * 1 , has a non-monotonous behavior w.r.t. ξ, as illustrated in Figure 5.4. The latter modeling falls into the context of robust optimization (see e.g.[START_REF] Ben-Tal | Robust Optimization[END_REF] Ch.1]). It cannot be used as-is, since it conveys an infinite number of constraints. Let us make two further assumptions in order to simplify (5.29)1. The set F eas is convex in z, and linearly influenced by p. For example, we assume the existence of a (possibly non-linear) convex function g modified into and (5.28) must be z ∈ F eas (ξ, p) , F eas (ξ, p + ∆p) ̸ = ∅, ∀∆p ∈ M (5.29a) (5.29b) ξ and (possibly non-linear) matrix-valued maps H(.), A(.), b(.) and B(.) s.t. F eas (ξ, p) = {z ∈ R Nz : g ξ (z) ≤ H(ξ).p and A(ξ).z = b(ξ) + B(ξ).p} 2. The set M is convex, and assumed to have a finite number of extreme point. In other words, there are K vectors ∆p i ∈ R nneg s.t. M = ConvexHull(∆p 1 , . . . , ∆p K ). Proposition 14. Under the two latter assumptions, there are elements z k s.t. z k ∈ F eas ξ, p + ∆p k , ∀k = 1, . . . , K if and only if conditions (5.29a) and (5.29b) are satisfied. Proof. The return implication of the equivalence is guaranteed by construction. To prove the direct implication, let us consider any arbitrary vector ∆p ∈ M. By construction of M, there are K non-negative scalars σ k s.t. σ k = 1 and ∆p = K k=1 σ k ∆p k . Let us denote by z k an element of F eas ξ, p + ∆p k and consider the vector z := K k=1 σ k ∆p k . Using the convexity of g ξ , one gets Table 6 . 3 : 63 List of pairs of input components dispersed in the charts below. For example, using the data from Table6.2, there are 81 × 81 = 6561 pairs of values of (∆z 0 , ∆v 0 z ) that have been considered. Usefulness parameter Overall Horizontal Vertical Negotiable ∆α max +++ +++ 0 ∆a max nor ++ ++ 0 (∆z f , ∆y f ) +++ +++ 0 ∆h f + 0 + Table 6 . 4 : 64 Performance summary. The qualitative scale goes from 0 (useless) to + + + (extremely useful).by HEGO correspond to sharper turns, but does not require to negotiate ∆a max nor by a lot. The corresponding heatmaps (i.e. the heatmaps representing P * 2 ) are in the Appendix. Table 6 . 5 : 65 Reference trajectories considered for the benchmark w.r.t. the vertical flight envelope. All four trajectories share the same reference time-of-flight tf . 
Label Figure Trajectory type Engine flow structure Comment A 6.18 Vertical Near Min-Max B 6.19 Vertical Max-Min-Middle C 6.20 Planar Max-Min-Middle Reference trajectory of Section 6.2. D 6.21 Fully 3D Max-Min-Max Out-of-plane reference. See[START_REF] Açikmeşe | G-FOLD: A Real-Time Implementable Fuel Optimal Large Divert Guidance Algorithm for Planetary Pinpoint Landing[END_REF] and the associated video: youtube.com/watch?v=BqXFzVVCSCU. For trajectories, it is the ratio between maximum flight altitude and maximum downrange. An initial guess is needed for the adjoint state or the input u, t f and numerical methods are sensitive to these guesses. Mars atmosphere is often treated as a small disturbance, for it is more than 100 times thinner than Earth's. The exact definition of problem PDG (ξ, p), presented here in a simplified shape, is in Chapter 2. Picking three values to describe the wind profile is an arbitrary design choice. If it is needed to consider a finer description of the wind profile in practice, significantly increasing the number of variables describing the wind profile does not change the approach presented in the next chapters, as discussed in Chapter 6. One gimbal angle for the 2D model, two on the 3D model In the sense of the successive rotations defining the rocket body frame. If the rocket reaches v h = 0 before h = 0, the engine must be stopped, since hovering is prevented by the thrust dominance assumption. Thus, seeking v h (t f ) = -ε f v leaves a small safety margin. Both pointing and landing cone constraints are well discussed in[START_REF] Malyuta | Convex Optimization for Trajectory Generation[END_REF] Fig.19] for instance. Note that subsonic, supersonic and hypersonic speeds are usually defined as below Mach 1, between Mach 1 and Mach 5, and above Mach 5. The exact definition of ξ will be detailed later, in Equation (4.4). The exact performance index used in this thesis is detailed in the examples of Chapter 4. L ∞ denotes the set of essentially bounded measurable functions. Note that the ignition time optimization is a different topic. In the numerical examples of Chapter 4, 5 and 6, the model is normalized for numerical stability, whereas, in this chapter, it aims at simplifying the writing of the dynamics. Here ⋆| i denotes the i th component of ⋆. Equations being linear in λ, one can consider λ/λ0 instead of λ. Here, coarse refers to discretization schemes such as Euler methods with few collocation points. L ∞ ([0, 1], U) denotes the set of essentially bounded measurable functions. Recall that the derivatives are denoted by putting the variable of differentiation as an index, e.g. Lz = ∂L/∂z. x • y denotes the component-wise product of vectors x and y, a.k.a. the Hadamard product. See[START_REF] Büskens | Sensitivity Analysis and Real-Time Optimization of Parametric Nonlinear Programming Problems[END_REF] Sec. 4.3] for a complete discussion on that aspect. Note that this would not apply to landings on offshore platforms, for obvious reasons. Naturally, p b ⪯e pa is equivalent to pa ⪰e p b . I would like to thank Laurent Pfeiffer for its attention to detail and its rich feedback. Mj denotes the j th row of M . If necessary, more details on the role of Q(J) are provided in[START_REF] Mangasarian | Lipschitz Continuity of Solutions of Linear Inequalities, Programs and Complementarity Problems[END_REF] Sec. 3]. These performances have been demonstrated live during the thesis defense, with on-demand initial condition changes. 
Remerciements [-4, 2], for the 2D rocket model from Example 3. Note that the negotiation maps from sub-Figure (d) are exactly the kind of negotiation structure that support the conjecture presented in Remark 32. The Lipschitz-continuity of the optimal cost function, stated in Corollary 3, can be observed on the same sub-Figure . Also, remark that the cost function variations convey several active set changes within the nominal guidance mode, that can be visualized through the slope changes. The reason why the cost function remains constant when the horizontal final positions are being negotiated is that the optimal trajectory is shifted purely horizontally in these cases. • Projected incidence α z (Figures 6.7 and 6.8), Color Value • Controlled engine flow q c (Figures 6.9 and 6.10), • Heatmap of the first negotiation map: P * 1 (Figures 6.11 and 6.12), • Heatmap of the third negotiation map: P * 3 (Figures 6.13 and 6.14). For the figures having the input pairs as their x and y-axis, the black dot • conveys the origin (0, 0) (for example the emergency mode maps). Also, the reference trajectory x and the reference control ū are represented in plain black. Additional data is provided in Appendix, regarding the various states for these dispersions, the heatmap of the optimal time-of-flight change ∆t f * , and the other negotiation maps. Observations and comments Variable combinations. Some variables combinations have a natural constructive or destructive behavior. The pair (∆z 0 , ∆v 0 z ) is the clearest one. Starting farther away from the landing site but with a greater horizontal speed (or closer but with a lower horizontal speed) is a typical constructive behavior that explains the green strip in Figure 6.3-(a). Continuity. The negotiation maps ξ → P * i (ξ) discussed in Chapter 5 are Lispchitz-continuous. The reader might think that some of the maps below would suggest the contrary, due to abrupt changes. It is only a matter of zoom and Lipschitz constants. The detailed example pictured in Figure 6.15 provides various zoom levels to see how the negotiation maps can vary. Normal acceleration negotiation. As indicated by the black line in Figure 6.16, the ratio between the maximum normal acceleration of the reference trajectory and the normal acceleration bound is close to 1, leaving little room for negotiations. However, it is not always necessary to negotiate ∆a max nor , as pictured in Figure 6.1. For a large number of values of the pair (∆z 0 , ∆y 0 ), the guidance trajectories provided A.1 Optimization results A.1.1 Duality gap The results presented here are taken from D. P. Bertsekas' book [START_REF] Bertsekas | Nonlinear Programming[END_REF], which contains all the proofs of the theorems and lemmas presented below. Let f , g and h be functions defined over R n , where f has scalar values, g has m components, and h has r components. Let us define the primal optimization problem where f * denotes its optimal value. An optimization problem is said to be feasible if there exists at least one element satisfying its constraints. An optimization problem is said to be finite if its optimal value exists and does not equal ±∞. The Lagrangian of Problem (A.1) is defined by where µ ∈ R m and λ ∈ R r . Definition 7. Vectors µ * and λ * are said to be Lagrange multipliers for the primal Problem The dual of Problem (A.1) is the following problem sup Let us denote by q * the optimal value of Problem (A.2). Here, q * ∈ R ∪ {±∞}. 
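Before the duality theorems recalled next, a toy numerical illustration of the Lagrangian dual function and of weak duality may help. The example problem (minimize x^2 subject to 1 - x <= 0, whose optimal value is f* = 1) is made up for this purpose and is not taken from the thesis; only an inequality constraint with a nonnegative multiplier is used.

import numpy as np
from scipy.optimize import minimize_scalar

def dual_function(mu):
    # q(mu) = inf_x  x^2 + mu * (1 - x); the infimum is attained at x = mu / 2.
    res = minimize_scalar(lambda x: x**2 + mu * (1.0 - x))
    return res.fun

mus = np.linspace(0.0, 4.0, 401)
q_vals = np.array([dual_function(m) for m in mus])

print(q_vals.max())           # q* = 1 = f*: weak duality holds and there is no duality gap here
print(mus[q_vals.argmax()])   # the maximizing multiplier, mu* = 2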
Theorem 10 (Weak Duality Theorem [START_REF] Bertsekas | Nonlinear Programming[END_REF]Prop. 5.1.3]). Assume that the primal Problem (A.1) is feasible, but not necessarily finite. Then: The duality gap is defined as the following non-negative quantity We say that there is no duality gap if ∆ gap = 0, and that there is a duality gap if ∆ gap > 0. Let us introduce the matrices and vectors E, d, A, b of proper dimensions, assume that f is convex, and define the problem Assume that the primal Problem (A.4) is feasible and its optimal value f * is finite. Let also f be convex over R n . Then, there is no duality gap and there exists at least one Lagrange multiplier. Lemma 12 (Existence of Primal Optimal Solutions of LPs [13, Lemma 5.2.1]). Assume that Problem (A.4) is feasible and its optimal value f * is finite. Let also f be linear. Then, Problem (A.4) has at least one optimal solution. A.1.2 Right-hand side sensitivity of Linear Programs Consider the following primal LP in its standard form and its optimal value function v (viewed as a function of its right-hand side b) s.t. By convention, if A ⊤ µ ≤ c is infeasible, then the optimal value of the latter problem is -∞. • On the contrary, if the "sup" term equals +∞ , then the primal problem with right-hand side b + td is infeasible for any t > 0. Equation (A.7), that we also refer to as Gauvin's formula in this manuscript, is a powerful tool to analyze the RHS sensitivity of LPs. It is important to remark that the maximization problem in (A.7) is deeply linked to the dual LP from (A.6), s.t. Proposition 16 (Adapted from [5, Prop. 2.3]). There exists a closed interval [α, β] (possibly empty) s.t. A.2 Differential Equations A.2.1 Comparison theorem For n ≥ 1, a function F : I × X ⊂ R × R n → R n is said to be quasi-monotone increasing if, for every pair (t, x) and (t, v) in I × X and every i = 1, . . . , n, one gets Lemma 13 (Adapted from [80, IX.2.6]). Let F be a continuously-differentiable, quasi-monotone increasing function and x : A.2.2 Flow of Ordinary Differential Equations The sensitivity computations described below are standard material in the literature. The results recalled below focus mainly on the computational aspects of the different derivatives of the flow. For further references, complementary approaches can be found in [18, [START_REF] Oniki | Comparative Dynamics (Sensitivity Analysis) in Optimal Control Theory[END_REF] for more applied methods. Useful material about the flow properties can also be found in [84, Sec. 4.5] (properties of the flow linked to Lie brackets). Let us consider a dynamic function f : R n × R m × R nη → R n with state x, control u and parameter η, which defines the ODE ẋ = f (x, u, η). where x is defined over [0, 1] by the following IVP x(0) = x 0 . The non-relevant inputs of Φ f may be omitted to alleviate the writing. For instance, if f depends only on its state, its flow will be denoted Φ f (t, x 0 ). The propositions below present how to compute the derivatives of Φ f w.r.t. each of its variables. Basically, these derivatives are the results of Initial Value Problems (IVPs) involving the derivatives of f . Note, first, that the time derivative of Φ f is a direct consequence of its definition and, for any t ∈ [0, 1], it equals A.2.2.1 Sensitivity w.r.t. the initial condition The derivative of Φ f w.r.t. the initial condition is referred to as the state transition matrix (STM), or simply the transition matrix [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF]. 
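A minimal numerical sketch of this transition-matrix computation is given below; the formal statement follows in Proposition 17. The dynamics used here are an arbitrary damped pendulum, not the rocket model, and scipy's solve_ivp integrates the state together with the variational equation dM/dt = f_x(x(t)) M, M(0) = I; the result is cross-checked against central finite differences of the flow.

import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    # Toy dynamics (illustrative only): a damped pendulum.
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def f_x(x):
    return np.array([[0.0, 1.0], [-np.cos(x[0]), -0.1]])

def augmented(t, y):
    # y stacks the state x (2 components) and the transition matrix M (4 components).
    x, M = y[:2], y[2:].reshape(2, 2)
    return np.concatenate([f(x), (f_x(x) @ M).ravel()])

x0 = np.array([0.5, 0.0])
y0 = np.concatenate([x0, np.eye(2).ravel()])
sol = solve_ivp(augmented, (0.0, 2.0), y0, rtol=1e-9, atol=1e-9)
M = sol.y[2:, -1].reshape(2, 2)          # dPhi/dx0 at T = 2 via the variational IVP

# Finite-difference cross-check on the flow itself.
eps = 1e-6
cols = []
for i in range(2):
    dx = np.zeros(2); dx[i] = eps
    plus = solve_ivp(lambda t, x: f(x), (0.0, 2.0), x0 + dx, rtol=1e-9, atol=1e-9).y[:, -1]
    minus = solve_ivp(lambda t, x: f(x), (0.0, 2.0), x0 - dx, rtol=1e-9, atol=1e-9).y[:, -1]
    cols.append((plus - minus) / (2 * eps))
print(np.max(np.abs(M - np.array(cols).T)))   # small residual: both derivatives agree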
Proposition 17. The derivative of Φ f w.r.t. the initial condition x 0 is ∂Φ f ∂x 0 (T, x 0 ) = M (T ) where the transition matrix M satisfies the matrix-IVP When the context is clear enough, the following notation is sometimes used (e.g. in Chapter 3, or in [START_REF] Bryson | Applied optimal control: optimization, estimation and control[END_REF]): where The IVP of Proposition 19 can be derived from Proposition 17, by considering an extended state composde of the original state, and one state having null dynamics for each component of the parameter η. For further details on this, see e.g. [START_REF] Demailly | Analyse numérique et équations différentielles[END_REF]. Using this remark, the smoothness of the flow w.r.t. its parameters can be derived from Proposition 18, as stated in the corollary below. Corollary 4. Pick any A.2.2.3 Sensitivity w.r.t. the control Proposition 20 (Adapted from [18, Sec 3.2]). Consider When the control depends on a vector parameter, the following corollary holds. where z is defined by the matrix-IVP where x(t) = Φ f (t, x 0 ; u(θ, .)). A.2. Differential Equations 153 Since the variable u belongs to a functional space, it is more technical to state how smooth is the flow w.r.t. to the control. Hence the need for the following proposition, which formally describes to what extent the function z u,v from Equation (A.10) defines the first-order expansion of Φ f w.r.t. u. Proposition 21 (Adapted from [START_REF] Pytlak | Numerical methods for optimal control problems with state constraints[END_REF]Prop. 1.3]). Consider a bounded set Ω ⊂ R m and a vector x 0 ∈ R n . Let us assume that f : R n × R m → R n is differentiable, that f , ∂f ∂x and ∂f ∂u are continuous and that there exists K < ∞ s.t. Then there exists a function ε : R * where 1 • z u,v is defined in Equation (A.10). A.2.2.4 Technical summary Consider a parametric control t → u µ (t), continuously differentiable in t and µ. The following expansion holds where ε is a function s.t. ε(⋆)/∥ ⋆ ∥ 2 tends to zero when ⋆ = (∆x 0 , ∆η, µ) tends to zero, and where A, B and C are matrix valued functions defined by the following IVPs where x(t) = Φ f (t, x 0 , η; u 0 ). 1 Here, L ∞ ([0, 1], R m ) denotes the vector space of essentially bounded measurable functions, for the essential supremum norm: ∥u∥L∞ := inf{C ≥ 0 : ∥u(t)∥2 ≤ C a.e. on [0, 1]}. Similarly, L ∞ ([0, 1], R m ) denotes the vector space of essentially bounded measurable functions, for the 1norm: Additional data Résumé Ce chapitre contient des données complémentaires concernant l'exemple détaillé dans la Section 6.2 du Chapitre 6. This chapter contains complementary data regarding the example detailed in Section 6.2 of Chapter 6. Are displayed in these extra charts: • the height states with respect to the normalized time, • the heatmap of the optimal time-of-flight change ∆t * f , • and the heatmap of the second and fourth negotiations (P * 2 and P * 4 ). These charts are presented for each pairs of Table 6. ABSTRACT This thesis studies emergency Powered Descent Guidance (PDG) for reusable launchers, as an Optimal Control Problem in free final-time with constraints. For such a launcher, subject to strong aerodynamic effects and having limited maneuverability, we wish to perform « emergency » trajectory planning by relaxing some negotiable parameters, such as the incidence safety bound, the normal acceleration load, or the landing site location. 
To this end, a hierarchy between the parameters is introduced and an algorithm, Hierarchical Emergency Guidance Optimization (H.E.G.O.), is developed to enforce it. The algorithm consists of a finite sequence of negotiation Linear Programs, followed by a refinement Quadratic Program. The rocket is modeled by eight states, and three controls. The flight parameters are the initial conditions of the rocket states and other parameters, such as the Engine Specific Impulse and the wind profile. The user-defined hierarchy is conveyed via a co-lexicographic order. The methodology is theoretically studied. Among others, the Lipschitz-continuity of the guidance trajectory with respect to the input flight parameters is established. Extensive numerical results serve to quantify the performance and relevance of the methodology. KEYWORDS powered descent guidance, optimal control, NLP sensitivity, emergency guidance
04097640
en
[ "shs.eco", "shs" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04097640/file/0-WP-%20Female%20Early%20Marriage%20and%20Son%20Preference%20in%20Pakistan.pdf
Mazhar Mughal email: [email protected] Rashid Javed email: [email protected] Thierry Lorey email: [email protected] Female Early Marriage and Son Preference in Pakistan Keywords: Child marriage, Age at marriage, gender bias, son preference, Pakistan JEL codes: D13, J13, O15, C13, Z13 In this study, we employ pooled data from four rounds of Pakistan Demographic and Health Survey (PDHS) to examine whether, and to what extent, does the incidence of early marriage shape the married women's perspectives on gender preference associated with reproduction. We employ a number of econometric techniques (Probit, OLS, Cox Hazard Model, IV Probit and treatment effects) and a large set of model specifications, and find significant evidence supporting the role of early marriage in perpetuating disproportionate preference for boys. Women who married before turning 18 not only state a greater desire for boys but are also less likely to stop reproduction as long as they do not have a boy. Early-age marriage is associated with 7.7 -12.5% higher incidence of fertility discontinuation among women without a son. This son-preferring behaviour is stronger at higher birth order and also reflects in differential spacing patterns. Women's education appears to be the strongest channel through which these effects are mediated. The divergence between early-and late-marrying women appears to have sharpened over time. The findings of this study underscore the role played by early marriage in altering the gender-specific attitudes prevalent in the society, and highlight existing gender inequality traps. Introduction 39% of Pakistani women of child-bearing age are reported to get married before reaching the age of 18 (PDHS 2017-18). Though lower than 50% reported in 1990 (PDHS 1990-91), this incidence still remains high by world average. The practice of early-age marriage, also called child marriage, in part results from a higher perceived value of the young bride. In traditional societies, younger women are believed to be more fertile, sexually inexperienced and easy to 'control'. For parents, early marriage of daughters implies lower spending on education, less effort chaperoning the girl in order to protect virginity and guard 'family honour', and smaller dowry requirement [START_REF] Allendorf | Early Women, Late Men: Timing Attitudes and Gender Differences in Marriage[END_REF]Caldwell, 2005). Early marriage has important consequences for the health and well-being of the mother and the child. Women who marry early produce more children than who marry later [START_REF] Maitra | Effect of Socioeconomic Characteristics on Age at Marriage and Total ertility in Nepal[END_REF][START_REF] Nasrullah | Girl child marriage and its effect on fertility in Pakistan: Findings from Pakistan Demographic and Health Survey, 2006-2007[END_REF][START_REF] Raj | Prevalence of child marriage and its effect on fertility and fertility-control outcomes of young women in India: a cross-sectional, observational study[END_REF]. They are younger at the time of first birth and have subsequent births at shorter intervals [START_REF] Jensen | Early female marriage in the developing world[END_REF][START_REF] Koski | Has Child Marriage Declined in sub-Saharan Africa? An Analysis of Trends in 31 Countries[END_REF][START_REF] Raj | When the mother is a child: The impact of child marriage on the health and human rights of girls[END_REF]. 
Early marriage is associated with greater risk of still birth and miscarriages [START_REF] Kamal | Child marriage and its association with adverse reproductive outcomes for women in Bangladesh[END_REF]. There is increasing evidence for adverse health outcomes among children born to women who married at an early age, including higher risk of premature birth, neo-natal, infant or child mortality [START_REF] Adhikari | Early marriage and childbearing: risks and consequences[END_REF][START_REF] Garcia-Hombrados | Child Marriage and Infant Mortality: Evidence from Ethiopia[END_REF][START_REF] Raj | The effect of maternal child marriage on morbidity and mortality of children under 5 in India: Cross sectional study of a nationally representative sample[END_REF] as well as negative effects on child weight, height and general health [START_REF] Chari | The causal effect of maternal age at marriage on child wellbeing: Evidence from India[END_REF][START_REF] Palloni | Childhood Health and the Wantedness of Male and Female Children[END_REF][START_REF] Wachs | Mechanisms linking parental education and stunting[END_REF]. Women with early marriages show higher psychological distress, whereas women with late marriages tend to adjust better [START_REF] Shaud | Marital adjustment , convergent communication patterns , and psychological distress in women with early and late marriage[END_REF]. Early marriage can also limit women's economic empowerment and education outcomes of their children [START_REF] Sekhri | Intergenerational Consequences of Early Age Marriages of Girls: Effect on Children's Human Capital[END_REF][START_REF] Yount | Women's Age at First Marriage and Long-Term Economic Empowerment in Egypt[END_REF]. A related area of investigation pertains to the influence of female early marriage in sustaining prevailing gender bias in general, and gender-specific reproductive outcomes in particular. [START_REF] Asadullah | Early Marriage, Social Networks and the Transmission of Norms[END_REF] find that early marriage increases agreement with statements supportive of traditional gender roles and gender bias in the allocation of resources. They hypothesize four potential pathways through which female early marriage in developing countries can affect women's beliefs and attitudes towards traditional gender norms: 1) less schooling and exposure to a school curriculum presenting alternative views; 2) fewer social networks; 3) lower likelihood of matching with more progressive men and 4) earlier experience of marital responsibilities. The gender perspectives of early-married women get shaped by their degree of empowerment in important issues such as contraceptive use and reproductive choices [START_REF] Larsson | Women's Education, Empowerment, and Contraceptive Use in sub-Saharan Africa: Findings from Recent Demographic and Health Surveys[END_REF][START_REF] Upadhyay | Women's Empowerment and Ideal Family Size : An Examination of DHS Empowerment Measures In Sub-Saharan Africa[END_REF]. In this study, we employ pooled data from four Pakistan Demographic and Health Surveys (PDHS) to examine whether, and to what extent, does the incidence of early marriage shape the married women's perspectives on gender preference associated with reproduction. We investigate how early-age marriage influences both the reported or revealed son preference of the mother (observed in differential stopping) as well as the stated or desired preference that the interviewed women state. 
From a medical point of view, the likelihood of bearing sons does not depend on the age a woman marries. Any significant variation in the number of sons she bears should reflect differential fertility stopping patterns ultimately resulting from differences in the socioeconomic profile of early and late-marrying women1 . To our knowledge, this is the first study that addresses this subject. We employ a number of econometric techniques (Probit, OLS, Cox Hazard Model, IV Probit and treatment effects) and a large set of model specifications to support our analysis. We find significant evidence supporting the role of female early marriage in perpetuating disproportionate preference for boys in Pakistan. Women who married before turning 18 not only state a greater desire for boys but are also more likely to end reproduction only after obtaining the desired number of boys. This son-preferring behaviour is stronger at higher birth order and also visible in differential spacing patterns. The divergent trend seems to be stronger in the post-2000 cohort compared to the women who married before 2000, reflecting increasing social pressures associated with demographic transition. Women's schooling appears to be the most important channel through which these gender-specific reproductive effects are mediated. We also find evidence for reported son preference among early-married men. These findings are robust to the use of alternative definitions and empirical procedures. Our study is organized as follows: Section 2 presents the survey data and shows relevant salient statistics. Section 3 describes the empirical model and the outcome and control variables employed in the estimations. Section 4 reports key findings followed by robustness checks and additional results in Section 5. Section 6 concludes. Data We pool data of all the four rounds (1990-91, 2006-07, 2012-13 and 2017-18) of the Pakistan Demographic and Health Survey (PDHS). The PDHS are household surveys representative at the national level, containing information about fertility, family planning, maternal and child health. A two-stage stratified sample design was adopted for the survey. The pooled sample consists of 45,260 women who married between 1951 and 2018. Out of these, 21,849 women are considered to have completed their fertility. The latter group corresponds to the women who gave the answer "want no more children" in response to the question "Do you desire more children?", those who report to be infecund or who or their husbands had undergone sterilization procedure. In our sample, 17,528 women reported to desire no more children, 1,348 reported to be infecund whereas 2,973 were sterilized. According to the dataset, 43% of the women of child-bearing age interviewed got married before the age of 18 while 24% gave birth to their first child before turning 20. The data show substantial difference between the profile of women who married before the age of 18 and those who married later (Table 1). Fewer early-marrying women and their husbands went to school than did their later-marrying counterparts. On average, early marrying women are poorer (46% of early-marrying women vs 30% of late-marrying women) and less urban (28% earlymarrying women vs 39% late-marrying women). Besides, a greater proportion of them work for a living than do women who married later. 
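As an indication of how the analysis sample described above can be assembled, the short pandas sketch below builds the early-marriage indicator and the completed-fertility flag from a pooled DHS-style women's file and compares group profiles in the spirit of Table 1. The file name and all column names (age_at_marriage, desire_more, infecund, sterilized, education_years, wealth_quintile, urban) are hypothetical placeholders for the corresponding PDHS variables.

import pandas as pd

# Pooled women's recode file from the four PDHS rounds (hypothetical file and columns).
df = pd.read_csv("pdhs_pooled_women.csv")

df["early_marriage"] = (df["age_at_marriage"] < 18).astype(int)

# Completed fertility: wants no more children, or is infecund, or (self/husband) sterilized.
df["completed_fertility"] = (
    (df["desire_more"] == "want no more children")
    | (df["infecund"] == 1)
    | (df["sterilized"] == 1)
).astype(int)

# Profile comparison by marriage timing, in the spirit of Table 1.
profile = df.groupby("early_marriage")[["education_years", "wealth_quintile", "urban"]].mean()
print(df["early_marriage"].mean())   # share married before 18 (about 0.43 in the pooled data)
print(profile)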
[Insert Tables 1 & 2 here]
Table 2 sheds light on the divergent reproductive behaviours of the two groups of women by comparing the actual and desired numbers of sons born to early- and late-marrying women. On average, early-marrying women are found to have more boys (3.21) than late-marrying women (2.51). The difference in the mean number of sons is clearer at higher parities. Similarly, women who marry early state a greater desire for boys (2.29 boys) than those who marry later (2.00 boys). The spacing pattern of early-marrying women also differs from that of their late-marrying counterparts. Except for the interval between the first and the second birth, the spacing for all intervals is longer among early-marrying women.
Empirical Framework
We estimate early marriage's association with son preference by regressing indicators of women's revealed and stated son preference on the early marriage indicator, controlling for individual and household socioeconomic factors. The empirical model for son preference can be written in general form as:

SonPref_ih = β0 + β1 EarlyMarriage_ih + γ′X_ih + δ_r + τ_t + ε_ih,    (1)

where SonPref_ih is one of the revealed or stated preference outcomes for woman i in household h, EarlyMarriage_ih indicates marriage before age 18, X_ih is the vector of individual and household controls, and δ_r and τ_t denote region and survey-round fixed effects.
[Insert Table 3 here]
Outcome variables
1. Revealed son preference: We employ three indicators to represent different dimensions of revealed son preference. Following [START_REF] Javed | Changing patterns of son preference and fertility in Pakistan [END_REF], we define the baseline indicator of differential birth stopping for revealed or reported son preference as a binary variable which takes the value of 1 if a woman has no son at a given birth order and does not pursue further childbirth, and 0 otherwise. The second variable modifies this definition to focus only on the sex of the latest child. For example, the variable takes the value of 1 if a woman who has four children, of which the fourth is a girl, does not proceed to a subsequent birth. The third indicator of revealed son preference is a count variable that pertains to succeeding birth spacing. It is defined as the succeeding birth space in months at a given parity if a woman has no son. Estimation results for these three outcomes are reported for the first four birth orders.
2. Stated son preference: We employ two indicators to measure stated or desired son preference. First, following [START_REF] Behrman | The relationship between women's paid employment and women's stated son preference in India [END_REF], we define the baseline indicator of stated or desired son preference as a binary variable which takes the value of 1 if the woman's desired number of sons exceeds her desired number of daughters. Following [START_REF] Gaudin | Son preference in Indian Families: Absolute versus relative wealth effects [END_REF], we also use a ratio to denote desired son preference. This alternative measure is defined as the ratio of the difference between the ideal number of boys and girls to the ideal number of children.
In our sample, 14%, 16%, 18% and 20% of women without at least one son stop their childbearing at the first, second, third and fourth birth order, respectively. The mean succeeding birth space of women without at least one son ranges from 27.31 to 28.63 months at the first four birth orders. The average spacing is shortest at the fourth birth order. 37% of women reported desiring more boys than girls.
[Insert Figure 1 here]
Figure 1 represents the gender-specific progression to subsequent birth. Almost all (98%) of the women moved on to the second parity regardless of whether the firstborn was a boy or a girl (Figure 1).
At higher parities, however, the reproductive patterns of girls-only women diverge from those of women with one or more sons: 96% of women with no boys proceed to a third birth compared to 91% of women with one or two sons, while 93% of girls-only women move on to a fourth birth compared to 83% of women with three sons.
Methodology
Given the binary nature of the first two revealed-preference outcomes and of the first stated-preference outcome, the corresponding models are initially estimated using the Probit estimator, while the stated-preference ratio is estimated using OLS. Later, these estimations are carried out using instrumental-variable and matching estimators. The spacing outcome is regressed using the Cox hazard model. All estimations are carried out first without, and then with, the full set of controls and region- and time-fixed effects.
Findings
Revealed Son Preference
In son-preferring societies, discriminatory reproductive patterns manifest themselves either in the form of sex-selective abortions or in differential birth stopping. Even though preferential attitudes towards boys are widespread in the Indian Subcontinent, the practice does not usually enjoy good press [START_REF] Robitaille | Determinants of Stated Son Preference in India: Are Men and Women Different [END_REF]. Islam, Pakistan's dominant religion, does not promote sex-selective reproductive practices [START_REF] Aydede | Son Preferring Fertility Behaviours in Turkey [END_REF]. There is little evidence supporting a widespread practice of sex-selective abortion in Pakistan (see for instance [START_REF] Zaidi | In the Pursuit of Sons: Additional Births or Sex-Selective Abortion in Pakistan? [END_REF]). However, differential stopping is reported to be widely practised [START_REF] Hussain | The role of son preference in reproductive behaviour in Pakistan [END_REF][START_REF] Javed | Changing patterns of son preference and fertility in Pakistan [END_REF]. In the presence of a disproportionate preference for male offspring, early-marrying women continue childbearing as long as the desired number of sons is not attained. Table 4 reports partial results for the association between women's early marriage and their child-stopping behaviour in the situation where all the existing children are girls. We report both the estimates and the marginal effects (ME). Columns 1 to 8 alternately show results of Probit estimations, without and with controls and region- and time-fixed effects, for the likelihood of childbearing after the first, second, third and fourth birth respectively. The results are negative and statistically significant at all birth orders. At the first birth order, an early-marrying woman with a girl child is 7.7% to 10.5% less likely to stop childbearing than a late-marrying woman. The effect is found to be stronger at higher birth orders. At the second birth order, the presence of no son is associated with a 9.2% (without controls) to 12.4% (with controls) lower probability of stopping childbearing. In other words, early-marrying women whose first two children are both girls are 9.2-12.4% less likely to discontinue fertility than late-marrying women. The corresponding Probit estimates for birth orders 3 and 4 show an 8.3% to 12.5% lower likelihood of stopping childbearing among early-marrying women without at least one son.
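To make the construction of the outcome and the baseline estimation concrete, the following Python sketch builds the at-risk sample (women with no son at a given parity) and the birth-stopping outcome, then fits a Probit and reports average marginal effects. All variable names are placeholders standing in for the controls listed in the table notes, not actual PDHS codes, and the snippet is only meant to mirror the specification described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

def at_risk_sample(women: pd.DataFrame, parity: int) -> pd.DataFrame:
    """Women with at least `parity` births and no son among the first `parity`;
    the outcome `stopped` is 1 if she did not proceed to a further birth."""
    sample = women[(women["total_births"] >= parity)
                   & (women[f"sons_first_{parity}"] == 0)].copy()
    sample["stopped"] = (sample["total_births"] == parity).astype(int)
    return sample

parity2 = at_risk_sample(women, parity=2)

# Probit in the spirit of Table 4 (specification with controls and fixed effects).
probit = smf.probit(
    "stopped ~ early_marriage + age + age_diff + C(education) "
    "+ C(spouse_education) + employed + media_exposure + hh_size "
    "+ urban + C(wealth) + C(region) + C(survey_round)",
    data=parity2,
).fit(disp=0)

# Average marginal effect of early marriage, comparable to the
# 'Marginal effect' rows reported in the tables.
print(probit.get_margeff(at="overall").summary())
```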
This differential birth-stopping effect is similar for rural and urban women (Table 5), except for the fourth birth, for which the effect is much stronger among urban women (17.2% lower likelihood) than among rural women (9% lower likelihood). The proportion of early-marrying women living in urban areas (0.28) is much lower than that of late-marrying women (0.39). This, combined with the fact that early-marrying women have on average more children and a greater stated desire for sons, makes the revealed son preference effect of early marriage stronger in the cities.
[Insert Tables 4 & 5 here]
These results could be challenged on the grounds that the fertility preferences of early-marrying women might be over-represented in the sample. At a given time, women who marry early are more likely to have achieved their desired fertility than women who married late and thereby began their reproductive phase later. In Table 6, we show results of estimations, first without and then with the set of controls and fixed effects, carried out on the sub-sample of women whose fertility can be considered complete, i.e. women who gave the answer "want no more children" in response to the question "Do you desire more children?", those who report being infecund, and those who, or whose husbands, had undergone a sterilization procedure. The association between early marriage and reported preference remains negative and statistically significant as before. However, the marginal effects are much lower (1.3-6.7%) than those observed in the full sample.
[Insert Tables 6 & 7 here]
The definition of 'completed fertility' used in the above estimations is based on the interviewed women's self-reported fertility desires and infecundity. In Table 7, we employ an alternative definition of complete fertility by restricting our sample to women aged 40 or above. One can assume that by that age most women have completed their fertility, and that many of those who have not are nearing menopause and face difficulty conceiving. As before, the association between early marriage and revealed son preference remains negative and significant. The marginal effects range between 1.2% and 6.0%. Table 8 presents the estimates of childbearing behaviour with respect to the sex of the last child. Compared to late-marrying women, early-marrying women are more likely to continue childbearing if the last child happens to be a girl, with marginal effects ranging from 7.7% to 13%.
[Insert Tables 8 & 9 here]
Another dimension of fertility, though one for which we do not find much evidence, is the differential child spacing practised by early-married Pakistani women. Javed and Mughal (2020) report strong evidence of differential behaviour at early parities. They find that women whose first or second child is a son have significantly longer subsequent birth intervals than women with no sons. In this study, our interest lies not in the spacing patterns of women with one or more sons per se, but rather in their interaction with early marriage. Table 9 reports partial results of Cox hazard model estimations for the first four parities. We observe a significant difference between early- and late-marrying women at the first two birth orders. Early-marrying women without a son have 6.7-8.4% shorter subsequent birth intervals than late-marrying women. The effect is not visible at higher birth orders.
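The spacing analysis can be illustrated with a short sketch using the lifelines implementation of the Cox proportional hazards model, reusing the at-risk sample helper from the previous sketch. The interval and censoring columns (`interval_months`, `next_birth_observed`) and the covariate list are placeholders: the duration is the interval in months to the next birth among women with no son at the first parity, censored if no further birth is observed.

```python
from lifelines import CoxPHFitter

parity1 = at_risk_sample(women, parity=1)  # women with no son at parity 1
cols = ["interval_months", "next_birth_observed", "early_marriage",
        "age", "age_diff", "educ_years", "hh_size", "urban"]
spacing = parity1[cols].dropna()

cph = CoxPHFitter()
cph.fit(spacing, duration_col="interval_months",
        event_col="next_birth_observed")
cph.print_summary()
# A hazard ratio above 1 on early_marriage corresponds to shorter subsequent
# birth intervals, i.e. faster progression to the next birth.
```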
All in all, these findings show a clear difference between women who married early and those who married later in terms of their revealed son preference, reflected in a greater incidence of differential birth stopping and spacing.
Stated Son Preference
Next, we examine whether the divergence in preference for a male child found between women who married early and those who married later is also reflected in their stated desires. Stated preference, to some extent, reflects the woman's perception of gender equality, and should plausibly decrease with the greater maturity and autonomy that accompany later marriage. Partial results of Probit estimations for stated son preference reported in Table 10 support this argument. The results shown in Columns 1 and 2 point to a positive and mostly significant relationship between desired son preference and early marriage. The marginal effects are similar to those observed for revealed preference, and range from 1.8% to 6.3%.
[Insert Table 10 here]
Here, the survey questions on which the stated preference variable used in the above set of estimations is based merit scrutiny. Women who had already given birth to a child were asked the following questions to elicit their desired fertility preferences: "If you could go back to the time you did not have any children and could choose exactly the number of children to have in your whole life, how many would that be?", "How many of these children would you like to be boys?" and "How many would you like to be girls, and for how many would it not matter if it's a boy or a girl?" Responses to such questions, constituting direct measures of son preference, are criticized in the literature for being subject to rationalization bias: a woman's perception of the ideal number of sons and daughters may be driven by the number of sons she has already borne [START_REF] Dasgupta | Son Preference and Gender Gaps in Child Nutrition: Does the Level of Female Autonomy Matter? [END_REF][START_REF] Pritchett | Desired Fertility and the Impact of Population Policies [END_REF]. In our data, however, we do not find support for this assertion. The correlation between the indicators of stated and revealed preference is low (correlation coefficient = -0.0382 for parity 1, -0.06 for parity 2, -0.04 for parity 3 and -0.08 for parity 4). Furthermore, the results of the stated son preference model are not much affected if the sex of existing children is controlled for. For this, we include the son ratio variable, defined as the number of sons as a proportion of the total number of children born to the woman. The coefficient of the variable is found to be significant, and its inclusion, if anything, improves the statistical significance of the models. An early-marrying woman with completed fertility is 1.8% to 6.3% more likely than a later-marrying woman to declare a greater desire for sons than for daughters (Columns 3-4). Next, we employ another definition of stated son preference, defined as the ratio of the difference between the ideal number of boys and girls to the ideal number of children. Partial results of OLS estimations carried out with this alternative measure (shown in Columns 5-8) again point to a positive and statistically significant association between female early marriage and stated son preference. The coefficients of the early-marriage variable for the models with or without controls or fixed effects, and with or without the inclusion of the sex of existing children, are all significant and lie in the 1-3.5% range.
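As an illustration of the alternative stated-preference measure, the sketch below constructs the ratio outcome and the son-ratio control and fits the OLS model; `ideal_boys`, `ideal_girls`, `ideal_children`, `sons` and `total_births` are placeholder column names standing in for the corresponding survey items, and the covariate list is only indicative of the controls described above.

```python
import statsmodels.formula.api as smf

# Ratio of the boy-girl gap in ideal children to the ideal family size.
denom = women["ideal_children"].replace(0, float("nan"))
women["stated_ratio"] = (women["ideal_boys"] - women["ideal_girls"]) / denom

# Sex composition of existing children, included to address rationalization bias.
women["son_ratio"] = women["sons"] / women["total_births"].replace(0, float("nan"))

ols = smf.ols(
    "stated_ratio ~ early_marriage + son_ratio + age + age_diff + C(education) "
    "+ C(wealth) + urban + C(region) + C(survey_round)",
    data=women,
).fit(cov_type="HC1")
print(ols.params["early_marriage"], ols.pvalues["early_marriage"])
```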
The results for women residing in rural and urban areas (shown in Table 11) are quite similar, and suggest that the gap between early- and late-marrying women's stated preferences does not substantially differ by place of residence.
[Insert Table 11 here]
Husband's Early Marriage
Next, we compare the gender-specific fertility effects of female early marriage examined thus far with those associated with men's early marriage. Early marriage is much less common among Pakistani men: only 10% of men in our sample got married before age 18, compared to 43% of women. This notwithstanding, male early marriage's association with revealed son preference does not appear much different from that observed in the case of female early marriage (Table 12). The marginal effects range from 3.4% to 7.6%. This finding is in line with the observation that, historically, son preference and the demand for additional children have been strong among both Pakistani men and women [START_REF] Khan | Son Preference and the Demand for Additional Children in Pakistan [END_REF].
[Insert Tables 12 & 13 here]
Interestingly, the association of men's early marriage with stated son preference is mostly insignificant (Table 13), suggesting that, unlike for women, men's stated desire for boys does not significantly differ by age at marriage. In other words, while early-marrying women state a desire for boys that is significantly greater than that expressed by late-marrying women, early- and late-marrying men exhibit no such difference in their stated gender preference. Compared to men, women in patriarchal societies face more pressure to produce sons [START_REF] Javed | Have a Son, Gain a Voice: Son Preference and Female Participation in Household Decision Making [END_REF], which is expressed in their greater stated desire for boys.
Demographic Transition
Women in our sample got married between 1951 and 2018. During this period, Pakistan went through a demographic transition, with fertility rates falling from over 6 in the 1950s to less than 4 in 2018 (World Bank, 2020). Contraceptive prevalence increased from 12% in 1991 to 34% in 2018 (PDHS, 2018). In the presence of son-preferring norms, smaller desired family sizes can aggravate gender-specific fertility stopping. The change can be expected to affect early-marrying women disproportionately, leading to an increasing divergence from late-marrying women. We find evidence for this argument by comparing the pre- and post-2000 marriage cohorts. Table 14 reports the partial results of Probit estimations for these two cohorts, without and with the set of controls and fixed effects. The association between early marriage and reported son preference remains negative and statistically significant. The effect is substantially stronger in post-2000 marriages. The likelihood of stopping childbearing without a son among the more recent (post-2000) early-marrying women is lower by as much as 21% (parity 2/3) compared to their later-marrying counterparts.
[Insert Table 14 here]
Mediating Channels
We examine the role of the four mediating channels suggested by [START_REF] Asadullah | Early Marriage, Social Networks and the Transmission of Norms [END_REF] through which early marriage can affect women's beliefs and attitudes towards traditional gender norms.
1) Women's schooling: We compare women with no schooling to those with at least some education.
2) Social network: We compare women who, in response to the question "Who usually decides on visits to family or relatives?", answer "respondent alone" or "respondent and husband/partner jointly", with those who answer "family elders", "husband/partner alone" or "others".
3) Progressive spousal matching: We compare women whose husbands have acquired at least five more years of education than them with those whose husbands have not.
4) Earlier experience of marital responsibilities: We compare women who gave birth to their first child within twelve months of marriage with those who did not.
Of the four channels, women's schooling appears to be the strongest, especially for women's revealed son preference. The education profiles of early- and late-marrying women are substantially different. Only 26% of early-marrying women have ever been to school, compared to 47% of late-marrying women. Early-marrying women hold, on average, 1.68 years of schooling compared to 4.2 years for late-marrying women. The differences in gender-specific birth-stopping effects between early- and late-marrying women are stronger among educated women, particularly at higher birth orders (Table 15). For example, at the third birth order, illiterate early-marrying women without a son are 8.6% less likely to stop fertility than the corresponding late-marrying women, while their educated counterparts are 18.2% less likely to stop childbearing than the corresponding late-marrying women. The difference is much less pronounced in the case of stated preference (Table 16).
[Insert Tables 15 & 16 here]
The difference in the gender-specific birth-stopping behaviour of early- and late-marrying women is weaker for the other three mediating channels examined. The difference among women with a weak social network ranges between 12.8% and 13.9% at the four birth orders, and between 7.6% and 13.6% among women with a stronger social network (Table 17). Both the revealed and stated preference effects are somewhat stronger among women with a weak network (Table 18). Likewise, the effects are stronger among women who take up marital responsibilities soon after marriage (Tables 21-22). In contrast, the differential child-stopping impact of early marriage is similar across women with and without progressive spousal matches (Tables 19-20).
[Insert Tables 17-22 here]
Robustness Measures and Additional Results
In this section, we show that the main results are robust to a wide range of checks.
Instrumental Variable Estimations
The estimations reported in the previous section may be subject to endogeneity concerns. Personal and family values, local traditions and the cultural norms prevalent in society influence both the age at which girls get married and the importance attached to the birth of boys. The age at which a woman marries therefore cannot be treated as a random event, and may plausibly be correlated with unobserved factors which also affect the woman's reproductive preferences. We employ an instrumental strategy to tackle the potential endogeneity present in our estimated models. We construct a community-level instrument for this purpose. The instrument is defined as the percentage incidence of early marriage observed in the Primary Sampling Unit (PSU) among the women who married before the respondent. This instrument takes its inspiration from [START_REF] Delprato | Intergenerational Education Effects of Early Marriage in Sub-Saharan Africa [END_REF], who analyze the intergenerational education effects of early marriage in Sub-Saharan Africa.
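A sketch of how such a community-level share could be computed is given below. The columns `psu`, `marriage_year` and `early_marriage` are placeholders, ties in the year of marriage are handled only approximately, and the restricted variants discussed below (IV2 and IV3) would simply add a five- or ten-year window condition on `marriage_year` before averaging.

```python
import pandas as pd

def psu_prior_share(df: pd.DataFrame) -> pd.Series:
    """Share of early marriages in the woman's PSU among women who
    married before she did (a leave-out mean over earlier marriages)."""
    def one_psu(group: pd.DataFrame) -> pd.Series:
        g = group.sort_values("marriage_year")
        # Expanding mean over earlier-married women; shift(1) excludes the
        # woman herself (and, approximately, same-year marriages).
        prior = g["early_marriage"].expanding().mean().shift(1)
        return prior.reindex(group.index)
    return df.groupby("psu", group_keys=False).apply(one_psu)

women["iv_psu_share"] = psu_prior_share(women)
```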
The logic for this instrument goes as follows: in traditional societies, marriage is an institution meant to maintain and promote social ties and within-group connections [START_REF] Unfpa | Ending Child Marriage: A Guide for Global Policy Action [END_REF]. Early-age marriage, in this context, serves as a means to preserve community values related to gender and sexuality. The community is expected to adhere to the practice, and failure to conform can be socially costly [START_REF] Bayisenge | Early marriage as a Barrier to Girl's Education: A Developmental Challenge in Africa [END_REF][START_REF] Bicchieri | Norms and Beliefs : How Change Occurs [END_REF][START_REF] Srinivas | Social change in modern India [END_REF]. The proportion of early-married women in the community can therefore act as a useful community-level predictor of female age at marriage, while not being directly related to individual reproductive outcomes. Table A1 presents the first-stage regressions of the instrumental-variable estimation, which confirm this association. The community-incidence variable is strongly associated with the female age at marriage variable, with all coefficients significant at the 1% level. The F-statistic is above 10 across all specifications, implying that the instrument is strong. Table 23 shows the results of the IV Probit estimations for fertility stopping, without and with the full set of controls and fixed effects. The impact of female early marriage on fertility stopping among women without a son at a given birth order is negative and significant as before, with coefficients ranging between -1.06 and -1.78 (Columns 1-8). The impact of female early marriage on the stated desire for sons is likewise statistically significant, with coefficients of around 0.15 (Table 24).
[Insert Tables 23 & 24 here]
We compute the proportion of early-marrying women among all the women in the PSU who got married prior to the surveyed woman. The trends and norms prevalent at the time of the surveyed woman's marriage might closely relate to those present at the time of marriages that took place in the near past, in which case the instrument could not be considered exogenous. We consider this possibility by limiting the marriages in the PSU to those that took place within five and within ten years prior to the respondent's marriage, respectively. The results of IV Probit estimations for revealed and stated son preference carried out with these two instruments are presented in Tables 25 and 26 (IV2) and Tables 27 and 28 (IV3). The relationship between female early marriage and son preference obtained using these instruments is similar to the one found previously, and again points to a lower likelihood of birth stopping among early-marrying women without a son.
[Insert Tables 25-28 here]
Treatment Effects
It is possible that women who marry early self-select based on their individual and household characteristics. As previously shown in Table 1, early- and late-marrying women differ substantially on a number of observables including schooling, employment, wealth status and place of residence. Women marrying early may therefore differ from those marrying later in ways that could be considered non-random. We account for this possibility of selection bias by estimating the baseline model using different treatment effect estimators, including Propensity Score Matching (PSM), Regression Adjustment (RA), Inverse Probability Weighting (IPW) and Augmented IPW (AIPW). PSM is a matching technique which matches treated individuals (women who married early) to non-treated counterparts (women who married later) based on a propensity score estimated from the individual's observable characteristics. RA estimates the average treatment effect (ATE) and the potential-outcome means (POMs) from observational data, using contrasts of averages of treatment-specific predicted outcomes. IPW estimates the ATE and the POMs by applying probability weights that correct for the missing potential outcomes. Finally, AIPW estimators have the double-robust property, as they combine aspects of the regression-adjustment and inverse-probability-weighting methods. The results of the treatment effect estimations for revealed son preference are given in Table 29. The corresponding Average Treatment Effects (ATE) obtained for all four birth orders are negative and significant.
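As an illustration of the weighting logic behind these estimators, the following is a minimal normalized inverse-probability-weighting (IPW) sketch of the ATE of early marriage on birth stopping, reusing the parity-2 at-risk sample from the earlier sketch. The propensity-score specification and variable names are placeholders, and a full implementation would also compute appropriate standard errors (for example by bootstrapping) and the doubly robust AIPW correction.

```python
import numpy as np
import statsmodels.formula.api as smf

# Propensity score for early marriage from observable characteristics.
ps_model = smf.logit(
    "early_marriage ~ C(education) + C(spouse_education) + employed "
    "+ urban + C(wealth) + hh_size + C(region)",
    data=parity2,
).fit(disp=0)
ps = ps_model.predict(parity2).to_numpy()

t = parity2["early_marriage"].to_numpy()
y = parity2["stopped"].to_numpy()

# Normalized (Hajek) IPW estimates of the two potential-outcome means.
mean_treated = np.average(y[t == 1], weights=1.0 / ps[t == 1])
mean_control = np.average(y[t == 0], weights=1.0 / (1.0 - ps[t == 0]))
print(f"IPW ATE on birth stopping: {mean_treated - mean_control:.3f}")
```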
The ATEs of early marriage on fertility stopping at a given parity for the four treatment effect estimators range between 9.6% and 13.8% (PSM), 10.5% and 12.4% (RA), 9.2% and 12.3% (IPW), and 9.2% and 12.3% (AIPW), respectively. These findings are highly similar, both in sign and in significance, to the baseline estimates, and give strong evidence in favour of a higher revealed son preference among early-marrying women in Pakistan. Likewise, the results of the treatment effects estimated for stated son preference (Table 30) are similar to the baseline estimates.
[Insert Tables 29 & 30 here]
After the treatment effect estimations, the balancing of the treatment groups is checked using kernel density plots. The covariates of the two groups are found to be well balanced.
Alternative Female Early Marriage Indicator
We further check the robustness of our estimates by employing an alternative indicator of female early marriage. We use the woman's age at marriage as a count variable rather than a binary indicator. We again find a significant impact of women's marriage age on the revealed and stated preference for boys. A one-year increase in women's age at marriage is associated with a 1-2% higher likelihood of stopping childbearing without a son (Table 31). Likewise, a one-year delay in women's marriage is associated with a 0.4% (outcome 2) to 0.7% (outcome 1) lower stated preference for boys when estimated without controls (Table 32). The significance of the impact on stated preference disappears, however, when the full set of controls is included.
[Insert Tables 31 & 32 here]
Alternative Definition of Female Early Marriage
Pakistani law sets the female marriageable age at 16 years, as against 18 years for men. About 19% of the women interviewed in our survey got married before turning 16. These women are on average less educated and come from poorer households, and can be expected to show a greater preference for boys than late-marrying women. Table 33 (Columns 1-8) gives the results of the revealed son preference model. The coefficients of the early marriage variable are highly significant and retain the negative sign. The marginal effects for the first four birth orders vary from 7% to 10.2%. Similarly, the marginal effects for the stated son preference models given in Table 34 range between 1.2% and 7.5%.
[Insert Tables 33 & 34 here]
Sex-selective Abortion
A possible threat to our estimation could come from sex-selective abortion. As discussed in Section 4.1, there is little evidence suggesting a widespread practice of sex-selective abortion in Pakistan. Reliable data are scarce, as women are reluctant to report abortions given the social stigma attached to the practice. We explore this possibility through two strategies. First, in our pooled sample, mothers reported the deaths of 16,198 children. Of these, 7,930 (4,540 boys and 3,390 girls) were reported to have died at zero months. This number may presumably include abortions in addition to stillbirths and neo-natal deaths. A significant association between death at birth of a female child and the mother's early-age marriage may suggest the presence of a discriminatory practice. We fail to find any such association. Table 35 shows the coefficients of the early-marriage variable, first for the estimation without and then with various mother- and household-level factors, and including time, region and mother-specific effects. The coefficients are invariably insignificant, with p-values in excess of 0.5.
Second, we limit our sample to women who do not report any child death, thereby precluding any cases of gender-specific abortion misreported as miscarriage or stillbirth. We again obtain results similar to our baseline estimations (Table 36).
[Insert Tables 35 & 36 here]
Conclusion
The fifth Sustainable Development Goal (SDG) of the United Nations, which deals with gender inequality, calls for giving women and girls equal rights to economic resources and for ensuring their full participation at all levels in economic decisions (UN, 2015). A prerequisite for achieving the goal of women's economic empowerment is eliminating harmful practices such as marriage before age 18. In this study, we compared the reproductive behaviour and fertility preferences of early- and late-married women using data on married women from four PDHS surveys. The findings of this study underscore the role played by early marriage in shaping the gender-specific attitudes prevalent in society. There is substantial evidence of a disproportionate son preference among early-married women. This evidence points in the direction of gender-inequality traps. Previous research from Pakistan has shown an association between women's say in household decision making and son preference [START_REF] Javed | Have a Son, Gain a Voice: Son Preference and Female Participation in Household Decision Making [END_REF]. Likewise, early-marrying women, who are themselves victims of existing patriarchal customs, help perpetuate gender bias through son-preferring reproductive practices. These son-preferring norms have non-negligible social and demographic consequences. The desired sex ratio is increasing, the sex ratio at last birth is worsening, and son preference's association with modern contraceptive use has become stronger. This can aggravate existing gender gaps in children's anthropometric, health and development outcomes. Tackling these traps requires policy interventions aimed at empowering women. Raising the female marriage age to 18 years could be one option. However, merely passing laws against child marriage is not sufficient to end the practice in developing countries [START_REF] Wodon | CHILD MARRIAGE LAWS AND THEIR CHILD MARRIAGE LAWS AND THEIR [END_REF]. It is equally important to provide parents with better incentive structures that lead to greater school enrolment and higher labour-force participation among their daughters. The incidence of female early marriage in Pakistan has decreased over time and the age at first marriage has risen. This demographic transition owes less to any sustained policy initiative or public awareness campaign than to socioeconomic pressures related to urbanization, improved girls' education and increased female participation in the labour market [START_REF] Javed | Girls not brides: Evolution of child marriage in Pakistan [END_REF]. Investing in girls' education can help reduce gender disparities while at the same time delaying marriages, thereby contributing to a further reduction in the incidence of early-age marriage [START_REF] Qureshi | Additional Returns to Investing in Girls' Education: Impact on Younger Sibling Human Capital [END_REF].
Tables and Figures:
Table 3 provides the definitions of the variables included and their means and proportions. About 43% of the women reported having married before age 18, compared to 10% of the husbands. The mean age of women is 32 years. The average age difference between husband and wife is 5.51 years. The majority of women in our sample possessed no formal education (60%), compared to 35% of the husbands. Likewise, only 8% of women report having acquired tertiary-level education, compared to 15% of husbands. Around one-fourth (22%) of the women in our sample participate in the labour market. 41% of women report listening to the radio or watching television at least once a week. The average household size is 8.26. About two-thirds of the households (66%) live in rural areas.

Table 1: Individual and household characteristics by Age of Marriage
                                   Early Marriage   Late Marriage   Two-sample t-test
Overall                            0.43             0.56
Education: Schooling               0.26             0.47            -34.13
Education: Years of Schooling      1.68             4.20            -45.34
Spouse Education: Schooling        0.56             0.70            -21.62
Women Employed: Yes                0.26             0.19            11.17
Place of Residence: Urban          0.28             0.39            -17.97
Economic Status: Poor              0.46             0.30            23.82
Source: Authors' calculations using pooled data from the four rounds of PDHS. The means are reported in columns 1 and 2. Column 3 reports the t-statistic for the early marriage-late marriage mean comparison test.

Table 2: Number of Boys and Spacing by Women's Age at Marriage
Source: Authors' calculations using pooled data from the four rounds of PDHS. The means are reported in columns 1 and 2. Column 3 reports the t-statistic for the early marriage-late marriage mean comparison test.

Figure 1: Progression to Subsequent Birth by Child Sex (%)
Source: Authors' calculations using pooled data from the four rounds of PDHS.
Early Marriage Late Marriage Two sample t-test Source Table 3 : 3 Data description Variables Description Proportion/Mean Revealed Son Preference Birth Stopping 1 Dummy variable, takes the value of 1 if the woman whose first child is a girl and stops 0.14 child birth, 0 otherwise 0.85 2 Dummy variable, takes the value of 1 if the woman whose first two children are girls 0.16 and stops child birth, 0 otherwise 0.83 3 Dummy variable, takes the value of 1 if the woman whose first three children are girls 0.18 and stops child birth, 0 otherwise 0.81 4 Dummy variable, takes the value of 1 if the woman whose first four children are girls 0.20 and stops child birth, 0 otherwise 0.79 Birth Spacing 1 Succeeding birth space in months at parity 1 if first child is a girl 27.53 2 Succeeding birth space in months at parity 2 if first two children are girls 28.30 3 Succeeding birth space in months at parity 3 if first three children are girls 28.63 4 Succeeding birth space in months at parity 4 if first four children ate girls 27.31 Stated Son Dummy variable, takes the value of 1 Ideal number of boys greater than ideal number 0.37 Preference of girls, 0 otherwise 0.62 Stated Son The ratio of the difference between Ideal number of boys and girls to the Ideal number 0.14 Preference: alternate of children proxy Early Marriage Dummy variable, takes the value of 1 if female age at marriage below 18, 0 otherwise 0.43 0.56 Early Marriage - Dummy variable, takes the value of 1 if female age at marriage below 16, 0 otherwise 0.19 Alternative Measure 0.80 Husband Early Dummy variable, takes the value of 1 if husband age at marriage below 18, 0 otherwise 0.10 Marriage 0.89 Age Woman's current age in completed years 32.20 Age Difference Age difference between husband and wife in years 5.51 Education Categorical variable, takes the value of 0 if the woman has no education, 1 if the woman 0.60 possesses primary education, 2 if the woman possesses secondary education, 3 if the 0.14 woman possesses higher education 0.16 0.08 Spouse Education Categorical variable, takes the value of 0 if the husband possesses no education, 1 if the 0.35 husband possesses primary education, 2 if the husband possesses secondary education, 0.16 3 if the husband possesses higher education 0.33 0.15 Women Employed Dummy variable, takes the value of 1 if the woman is employed, 0 otherwise 0.22 0.77 Media Exposure Dummy variable. takes the value of 1 if the woman read newspaper or listens radio or 0.41 watches television once a week, 0 otherwise 0.58 Household Size Total number of family members in the household 8.36 Place of Residence Dummy variable, takes the value of 1 if the household resides in urban area, 0 0.33 otherwise 0.66 Wealth Status Categorical variable, takes the value of 1-5 for households belonging to poorest, poorer, 0.18 middle, rich and richest household wealth groups. 0.19 0.19 0.21 0.21 Source: Authors' calculations using pooled data from the four rounds of PDHS. 
Table 4 : 4 Early Female Marriage and Differential Birth Stopping (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 1 order 2 order 2 order 3 order 3 order 4 order 4 Early -0.360*** -0.650*** -0.382*** -0.682*** -0.319*** -0.558*** -0.363*** -0.552*** Marriage (ref: Late Marriage) (0.024) (0.032) (0.035) (0.047) (0.051) (0.068) (0.079) (0.099) Marginal -0.077*** -0.105*** -0.092*** -0.124*** -0.083*** -0.113*** -0.104*** -0.125*** effect (0.004) (0.004) (0.008) (0.007) (0.013) (0.013) (0.022) (0.022) Constant -0.933*** 2.765*** -0.813*** 3.268*** -0.752*** 3.171*** -0.629*** 3.001*** (0.014) (0.101) (0.022) (0.157) (0.034) (0.236) (0.054) (0.373) Observatio 18,798 17,942 7,941 7,596 3,299 3,148 1,318 1,255 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. Columns 1-8 present results for the subsequent birth at the nth birth order, first without and then with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). The sample is restricted to women without a son at the nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Table 5 : 5 Early Female Marriage and Differential Birth Stopping-Place of Residence (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 2 order 3 order 4 order 1 order 2 order 3 order 4 Early -0.691*** -0.622*** -0.665*** -0.430*** -0.608*** -0.782*** -0.478*** -0.741*** Marriage (ref: Late Marriage) (0.043) (0.061) (0.096) (0.140) (0.048) (0.073) (0.099) (0.147) Marginal -0.111*** -0.114*** -0.125*** -0.090*** -0.098*** -0.137*** -0.103*** -0.172*** effect (0.006) (0.108) (0.017) (0.029) (0.007) (0.011) (0.020) (0.032) Constant 2.893*** 3.159*** 3.757*** 3.151*** 2.710*** 3.280*** 2.664*** 2.933*** (0.141) (0.214) (0.339) (0.513) (0.160) (0.254) (0.360) (0.604) Observatio 9,467 4,073 1,691 669 8,475 3,523 1,457 586 ns Controls YES YES YES YES YES YES YES YES Region YES YES YES YES YES YES YES YES Fixed Effects Time Fixed YES YES YES YES YES YES YES YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. The table presents results for the subsequent birth at the nth birth order. Columns 1-4 show results for women living in rural areas while Columns 5-8 show results for women living in urban areas. All estimations include the full set of controls and fixed effects. Controls include woman's birth order, with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical feature (region). The sample is restricted to women without a son at the nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. 
Table 6 : 6 Early Female Marriage and Differential Stopping -Complete Fertility Sample (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 1 order 2 order 2 order 3 order 3 order 4 order 4 Early -0.282*** -0.383*** -0.417*** -0.511*** -0.508*** -0.531*** -0.256** -0.201 Marriage (ref: Late Marriage) (0.061) (0.079) (0.073) (0.095) (0.090) (0.114) (0.123) (0.159) Marginal -0.013*** -0.014*** -0.034*** -0.034*** -0.067*** -0.059*** -0.044** -0.027 effect (0.002) (0.002) (0.005) (0.006) (0.011) (0.0124) (0.021) (0.021) Constant -1.929*** 0.279 -1.582*** 1.364*** -1.235*** 1.243*** -1.170*** 1.147* (0.036) (0.286) (0.042) (0.357) (0.053) (0.422) (0.084) (0.652) Observatio 9,842 9,472 4,406 4,244 1,938 1,890 801 770 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. Columns 1-8 present results for the subsequent birth at the nth birth order, first without and then with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). The sample is restricted to women without a son at the nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Table 7 : 7 Early Female Marriage and Differential Birth Stopping -Subsample of women age 40 and above (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 1 order 2 order 2 order 3 order 3 order 4 order 4 Early -0.289*** -0.258** -0.406*** -0.391*** -0.468*** -0.418** -0.352** -0.320 Marriage (ref: Late Marriage) (0.077) (0.103) (0.109) (0.135) (0.132) (0.172) (0.155) (0.215) Marginal -0.018*** -0.012** -0.029*** -0.025*** -0.053*** -0.039** -0.060** -0.043 effect (0.004) (0.004) (0.007) (0.008) (0.014) (0.015) (0.262) (0.028) Constant -1.772*** 0.638 -1.646*** 0.346 -1.357*** 2.862** -1.120*** -0.299 (0.044) (0.733) (0.059) (0.937) (0.072) (1.303) (0.096) (1.669) Observatio 4,883 4,409 2,299 2,075 1,078 933 515 444 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. Columns 1-8 present results for the subsequent birth at nth birth order, first without and then with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). The sample is restricted to women without a son at nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. 
Table 8 : 8 Early Female Marriage, Sex of Last Child and Differential Stopping (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 1 order 2 order 2 order 3 order 3 order 4 order 4 Early -0.360*** -0.650*** -0.406*** -0.673*** -0.362*** -0.492*** -0.314*** -0.421*** Marriage (ref: Late Marriage) (0.024) (0.032) (0.024) (0.031) (0.024) (0.030) (0.027) (0.032) Marginal -0.077*** -0.105*** -0.103*** -0.130*** -0.104*** -0.117*** -0.100*** -0.112*** effect (0.004) (0.004) (0.005) (0.005) (0.007) (0.007) (0.008) (0.008) Constant -0.933*** 2.765*** -0.746*** 3.099*** -0.615*** 2.481*** -0.502*** 2.617*** (0.014) (0.101) (0.015) (0.104) (0.016) (0.105) (0.019) (0.119) Observatio 18,798 17,942 16,347 15,647 13,594 12,979 10,300 9,816 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Table 9 : 9 Early Female Marriage and Differential Birth Spacing -Cox Estimates : Authors' calculations using pooled data from the four rounds of PDHS. Columns 1-4 present results for the subsequent birth space at the nth birth order with the set of controls. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). The sample is restricted to women without a son at the nth birth order. Robust standard errors in parentheses. (1) (2) (3) (4) VARIABLES Duration 1 to Duration 2 Duration Duration 2 to 3 3 to 4 4 to 5 Early Marriage (ref: -0.084*** -0.067** -0.042 0.060 Late Marriage) (0.017) (0.027) (0.042) (0.067) Observations 15,414 6,316 2,567 986 Controls YES YES YES YES *** p<0.01, ** p<0.05, * p<0.1. Source Table 10 : 10 Early Female Marriage and Stated Son Preference (1) (2) (3) (4) (5) (6) (7) (8) VARIABLES Son Son Son Son Son Son Son Son Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc e 01 e 01 e 01 e 01 e 02 e 02 e 02 e 02 Early 0.163*** 0.050*** 0.167*** 0.052*** 0.035*** 0.010*** 0.035*** 0.011*** Marriage (ref: Late Marriage) (0.015) (0.017) (0.015) (0.017) (0.003) (0.004) (0.003) (0.004) Marginal 0.063*** 0.018*** 0.063*** 0.018*** 0.035*** 0.010*** 0.035*** 0.011*** effect (0.005) (0.006) (0.005) (0.006) (0.003) (0.004) (0.003) (0.004) Son ratio 0.716*** 0.751*** 0.183*** 0.179*** (0.024) (0.028) (0.005) (0.006) Constant -0.306*** -0.117* -0.686*** -0.526*** 0.144*** 0.191*** 0.049*** 0.096*** (0.010) (0.063) (0.016) (0.066) (0.002) (0.013) (0.003) (0.013) Observatio 30,418 29,156 30,147 28,895 30,418 29,156 30,147 28,895 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. Columns 1-8 present results for the stated son preference, first without and then with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. 
Table 11 : 11 Early Female Marriage and Stated Son Preference-Place of Residence (1) (2) (3) (4) (5) (6) (7) (8) VARIABLES Son Son Son Son Son Son Son Son Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc Preferenc e 01 e 01 e 02 e 02 e 01 e 01 e 02 e 02 Early 0.041* 0.041* 0.010** 0.010** 0.055** 0.059** 0.010** 0.010** Marriage (ref: Late Marriage) (0.022) (0.022) (0.005) (0.004) (0.025) (0.025) (0.005) (0.005) Marginal 0.015* 0.015* 0.010** 0.010** 0.019** 0.020** 0.010** 0.010** effect (0.008) (0.008) (0.005) (0.004) (0.008) (0.008) (0.005) (0.005) Son ratio 0.717*** 0.179*** 0.797*** 0.178*** (0.035) (0.007) (0.037) (0.007) Constant 0.095 -0.273*** 0.238*** 0.142*** -0.450*** -0.912*** 0.131*** 0.037** (0.074) (0.080) (0.015) (0.016) (0.088) (0.093) (0.017) (0.018) Observatio 15,397 15,281 15,397 15,281 13,759 13,614 13,759 13,614 ns R-squared 0.073 0.111 0.048 0.093 Controls YES YES YES YES YES YES YES YES Region YES YES YES YES YES YES YES YES Fixed Effects Time Fixed YES YES YES YES YES YES YES YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. The table presents results for stated son preference. Columns 1-4 show results for women living in rural areas while Columns 5-8 show results for women living in urban areas. All estimations include the full set of controls and fixed effects. Controls include woman's birth order, with the set of controls and fixed effects. Controls include woman's characteristics (age, age difference with husband, education, employment status, media exposure), spouse education, household size, wealth status, and geographical feature (region). The sample is restricted to women without a son at the nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. Table 12 : 12 Husband's Early Marriage and Differential Birth Stopping (1) (2) (3) (4) (5) (6) (7) (8) VARIABLE Subseque Subseque Subseque Subseque Subseque Subseque Subseque Subseque S nt birth nt birth nt birth nt birth nt birth nt birth nt birth nt birth at birth at birth at birth at birth at birth at birth at birth at birth order 1 order 1 order 2 order 2 order 3 order 3 order 4 order 4 Early -0.298*** -0.530*** -0.147*** -0.323*** -0.330*** -0.407*** -0.151 -0.336** Marriage (ref: Late Marriage) (0.043) (0.054) (0.057) (0.072) (0.090) (0.108) (0.124) (0.151) Marginal -0.057*** -0.076*** -0.034*** -0.056*** -0.076*** -0.076*** -0.041 -0.071** effect (0.007) (0.006) (0.012) (0.011) (0.018) (0.017) (0.032) (0.029) Constant -1.058*** 2.298*** -0.954*** 2.604*** -0.873*** 2.689*** -0.788*** 2.490*** (0.012) (0.095) (0.018) (0.145) (0.027) (0.222) (0.042) (0.352) Observatio 17,991 17,942 7,619 7,596 3,159 3,148 1,257 1,255 ns Controls NO YES NO YES NO YES NO YES Region NO YES NO YES NO YES NO YES Fixed Effects Time Fixed NO YES NO YES NO YES NO YES Effects Source: Authors' calculations using pooled data from the four rounds of PDHS. The unit of analysis is husband. Columns 1-8 present results for the subsequent birth at the nth birth order, first without and then with the set of controls and fixed effects. Controls include husband's characteristics (age, age difference with wife, education, employment status, media exposure), wife education, household size, wealth status, and geographical features (place of residence, region). The sample is restricted to those without a son at the nth birth order. Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1. 
Table 13: Husband's Early Marriage and Stated Son Preference. Dependent variable: Son Preference 01 (columns 1-4) and Son Preference 02 (columns 5-8); unit of analysis is the husband; pooled data from the three rounds of PDHS; columns alternate without/with controls and region and time fixed effects.
Early Marriage (ref: Late Marriage), columns (1)-(8): 0.132** (0.055); 0.062 (0.058); 0.144** (0.060); 0.076 (0.066); 0.012 (0.012); -0.004 (0.013); 0.007 (0.012); -0.005 (0.012). Observations: 6,026; 5,570; 4,764; 4,376; 6,496; 6,007; 5,142; 4,726.

Table 14: Early Female Marriage and Differential Birth Stopping, Pre- and Post-2000 Cohorts. Dependent variable: subsequent birth at birth orders 1-4, estimated separately for the pre-2000 and post-2000 cohorts; all columns include controls and fixed effects; sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage), columns (1)-(8): -0.467*** (0.051); -0.772*** (0.045); -0.453*** (0.068); -0.790*** (0.073); -0.422*** (0.090); -0.746*** (0.123); -0.488*** (0.119); -0.562** (0.230). Observations: 10,607; 7,335; 4,864; 2,667; 2,273; 875; 997; 255.

Table 15: Early Female Marriage and Differential Birth Stopping, Role of Women's Education. Subsequent birth at birth orders 1-4 for women with no education (columns 1-4) and women with at least some schooling (columns 5-8); all columns include controls and fixed effects; sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage): -0.680*** (0.042); -0.688*** (0.060); -0.490*** (0.086); -0.506*** (0.119); -0.657*** (0.050); -0.731*** (0.075); -0.756*** (0.113); -0.679*** (0.190). Observations: 10,666; 4,725; 2,052; 888; 7,276; 2,871; 1,096; 366.

Table 16: Early Female Marriage and Stated Son Preference, Role of Women's Education. Son Preference 01 and 02 for women with no education (columns 1-4) and with at least some schooling (columns 5-8); all columns include controls and fixed effects.
Early Marriage (ref: Late Marriage): 0.057*** (0.020); 0.059*** (0.021); 0.014*** (0.004); 0.014*** (0.004); 0.056** (0.027); 0.057** (0.028); 0.008 (0.005); 0.008 (0.005). Observations: 16,626; 16,469; 16,626; 16,469; 12,530; 12,426; 12,530; 12,426.

Table 17: Early Female Marriage and Differential Birth Stopping, Role of Social Networks. Subsequent birth at birth orders 1-4 for women without (columns 1-4) and with (columns 5-8) a role in decisions involving social networks; sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage): -0.679*** (0.053); -0.685*** (0.082); -0.605*** (0.121); -0.607*** (0.196); -0.562*** (0.062); -0.774*** (0.092); -0.523*** (0.120); -0.632*** (0.181). Observations: 5,708; 2,305; 904; 357; 5,590; 2,436; 1,028; 399.

Table 18: Early Female Marriage and Stated Son Preference, Social Networks. Son Preference 01 and 02 for women without (columns 1-4) and with (columns 5-8) a role in decisions involving social networks.
Early Marriage (ref: Late Marriage): 0.066** (0.028); 0.068** (0.028); 0.021*** (0.006); 0.021*** (0.006); 0.051* (0.030); 0.055* (0.030); 0.010* (0.006); 0.010* (0.005). Observations: 9,955 (columns 1-4); 9,737 (columns 5-8).

Table 19: Early Female Marriage and Differential Birth Stopping, Progressive Spousal Matching. Subsequent birth at birth orders 1-4 for women whose husbands do not hold at least five more years of schooling (columns 1-4) and whose husbands have at least five more years of schooling (columns 5-8); sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage): -0.646*** (0.040); -0.684*** (0.058); -0.551*** (0.085); -0.634*** (0.131); -0.698*** (0.053); -0.717*** (0.079); -0.601*** (0.112); -0.488*** (0.162). Observations: 11,705; 4,923; 2,003; 797; 6,235; 2,673; 1,145; 458.

Table 20: Early Female Marriage and Stated Son Preference, Progressive Matches. Same sample split as Table 19, for Son Preference 01 and 02.
Early Marriage (ref: Late Marriage): 0.054*** (0.021); 0.058*** (0.021); 0.011*** (0.004); 0.012*** (0.004); 0.045* (0.026); 0.044 (0.027); 0.010* (0.005); 0.009* (0.005). Observations: 18,954; 18,792; 18,954; 18,792; 10,200; 10,101; 10,200; 10,101.

Table 21: Early Female Marriage and Differential Birth Stopping, Early Assumption of Marital Responsibilities. Subsequent birth at birth orders 1-4 for women who gave birth within twelve months of marriage (columns 1-4) and at least twelve months after marriage (columns 5-8); sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage): -1.078*** (0.071); -0.813*** (0.095); -0.450*** (0.132); -0.775*** (0.190); -0.610*** (0.036); -0.724*** (0.054); -0.696*** (0.080); -0.516*** (0.119). Observations: 5,475; 2,285; 954; 383; 12,467; 5,311; 2,194; 872.

Table 22: Early Female Marriage and Stated Son Preference, Early Assumption of Marital Responsibilities. Same sample split as Table 21, for Son Preference 01 and 02.
Early Marriage (ref: Late Marriage): 0.066** (0.031); 0.068** (0.031); 0.013** (0.006); 0.013** (0.006); 0.040** (0.019); 0.042** (0.020); 0.009** (0.004); 0.009** (0.004). Observations: 8,972 (columns 1-4); 20,184; 19,923; 20,184; 19,923 (columns 5-8).

Table 24: Community Prevalence of Early Marriage and Stated Son Preference, IV Probit Estimates. Son Preference 01 (columns 1-4) and 02 (columns 5-8); early marriage instrumented by the proportion of early-marrying women among all women in the PSU who got married prior to the surveyed woman; columns alternate without/with controls and fixed effects.
Early Marriage (ref: Late Marriage): 0.608*** (0.063); 0.156*** (0.096); 0.635*** (0.063); 0.159* (0.097); 0.104*** (0.014); 0.016 (0.017); 0.105*** (0.014); 0.016 (0.017). Observations: 29,792; 28,598; 29,521; 28,337; 28,792; 28,598; 29,521; 28,337.

Table 26: Early Female Marriage and Stated Son Preference, IV2 Probit Estimates. As Table 24, with early marriage instrumented by the proportion of early marriages in the PSU up to five years prior to the respondent's marriage.
Early Marriage (ref: Late Marriage): 0.783*** (0.063); 0.358*** (0.103); 0.799*** (0.063); 0.347*** (0.103); 0.160*** (0.016); 0.089*** (0.020); 0.157*** (0.015); 0.083*** (0.019). Observations: 29,094; 27,922; 28,827; 27,665; 29,094; 27,922; 28,827; 27,665.

Table 28: Early Female Marriage and Stated Son Preference, IV3 Estimates. As Table 24, with early marriage instrumented by the proportion of early marriages in the PSU up to ten years prior to the respondent's marriage.
Early Marriage (ref: Late Marriage): 0.704*** (0.058); 0.224*** (0.090); 0.721*** (0.058); 0.216*** (0.090); 0.104*** (0.014); 0.051*** (0.017); 0.138*** (0.013); 0.047*** (0.016). Observations: 29,695; 28,505; 29,425; 28,245; 29,695; 28,505; 29,425; 28,245.

Table 29: Early Female Marriage and Differential Birth Stopping, Treatment Effects. ATE of early vs. late marriage on subsequent birth, by birth order, from propensity score matching, regression adjustment, inverse-probability weighting and augmented IPW.
Birth order 1: -0.096*** (0.005); -0.105*** (0.004); -0.092*** (0.004); -0.092*** (0.004). Observations: 17,942.
Birth order 2: -0.114*** (0.010); -0.124*** (0.007); -0.111*** (0.008); -0.110*** (0.008). Observations: 7,596.
Birth order 3: -0.101*** (0.015); -0.114*** (0.012); -0.104*** (0.013); -0.104*** (0.013). Observations: 3,148.
Birth order 4: -0.138*** (0.029); -0.122*** (0.022); -0.123*** (0.022); -0.123*** (0.022). Observations: 1,255.

Table 30: Early Female Marriage and Stated Son Preference, Treatment Effects. ATE of early vs. late marriage: 0.020*** (0.007) with propensity score matching; 0.025*** (0.006) with regression adjustment; 0.027*** (0.006) with inverse-probability weighting; 0.027*** (0.006) with augmented IPW. Observations: 29,156.

Table 31: Early Female Marriage and Differential Birth Stopping, Alternate Definition of Early Marriage. Age at marriage (continuous) as regressor; subsequent birth at birth orders 1-4; columns alternate without/with controls and fixed effects; sample restricted to women without a son at the nth birth order.
Age at Marriage: 0.058*** (0.003); 0.145*** (0.004); 0.061*** (0.004); 0.152*** (0.007); 0.053*** (0.007); 0.123*** (0.010); 0.050*** (0.011); 0.091*** (0.015). Observations: 18,798; 17,942; 7,941; 7,596; 3,299; 3,148; 1,318; 1,255.

Table 32: Early Female Marriage and Stated Son Preference, Alternate Definition of Early Marriage. Age at marriage (continuous) as regressor; Son Preference 01 (columns 1-4) and 02 (columns 5-8); columns alternate without/with controls and fixed effects.
Age at Marriage: -0.021*** (0.002); -0.003 (0.002); -0.021*** (0.002); -0.003 (0.002); -0.004*** (0.000); -0.000 (0.000); -0.004*** (0.000); -0.000 (0.000). Observations: 30,418; 29,156; 30,147; 28,895; 30,418; 29,156; 30,147; 28,895.

Table 33: Early Female Marriage and Differential Birth Stopping, Legal Marriage Age. Subsequent birth at birth orders 1-4; columns alternate without/with controls and fixed effects; sample restricted to women without a son at the nth birth order.
Early Marriage (ref: Late Marriage): -0.420*** (0.031); -0.592*** (0.040); -0.398*** (0.043); -0.611*** (0.056); -0.334*** (0.062); -0.508*** (0.077); -0.259*** (0.090); -0.351*** (0.111). Observations: 18,939; 18,073; 8,003; 7,653; 3,326; 3,172; 1,329; 1,266.

Table 34: Early Female Marriage and Stated Son Preference, Legal Marriage Age. Son Preference 01 (columns 1-4) and 02 (columns 5-8); columns alternate without/with controls and fixed effects.
Early Marriage (ref: Late Marriage): 0.192*** (0.017); 0.070*** (0.020); 0.177*** (0.018); 0.061*** (0.020); 0.037*** (0.004); 0.012*** (0.004); 0.035*** (0.004); 0.011*** (0.004). Observations: 34,522; 33,097; 30,405; 29,132; 34,522; 33,097; 30,405; 29,132.

Notes to Tables 13-34: Authors' calculations using pooled data from the PDHS rounds indicated in each table note. Controls include the respondent's characteristics (age, age difference with spouse, education, employment status, media exposure), spouse education, household size, wealth status, and geographical features (place of residence, region). Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
The same outcome would result if the two groups of women differed in terms of sex-selective abortion. In Pakistan, child-birth generally occurs only in matrimony. For the corresponding subsequent-birth estimates, the sample is restricted to women with a female child at the nth birth order.
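The probit coefficients reported in these tables are usually read through marginal effects, i.e. the change in the predicted probability of the outcome when the binary regressor switches from late to early marriage. As a rough illustration only, here is a minimal sketch of how an average marginal effect can be obtained from a fitted probit model on simulated data; the variable names (early_marriage, son_preference, wife_education) and the use of statsmodels are assumptions for the example and do not reproduce the authors' estimation code.

```python
# Minimal sketch (not the authors' code): average marginal effect of a binary
# regressor in a probit model, using simulated data and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
early_marriage = rng.integers(0, 2, n)            # hypothetical binary regressor
wife_education = rng.normal(0.0, 1.0, n)          # hypothetical control variable
latent = -0.3 + 0.15 * early_marriage - 0.1 * wife_education + rng.normal(0.0, 1.0, n)
son_preference = (latent > 0).astype(int)         # hypothetical binary outcome

X = sm.add_constant(np.column_stack([early_marriage, wife_education]))
res = sm.Probit(son_preference, X).fit(disp=False)

# Average marginal effect of the binary regressor: mean change in the predicted
# probability when it switches from 0 to 1, holding the controls at their values.
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1.0, 0.0
ame = np.mean(res.predict(X1) - res.predict(X0))
print("probit coefficients:", res.params, "average marginal effect:", ame)
```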
01384833
en
[ "scco", "scco.neur" ]
2024/03/04 16:41:18
2009
https://hal.science/hal-01384833/file/Save2008.pdf
Etienne Save email: [email protected]
Bruno Poucet
Role of the parietal cortex in long-term representation of spatial information in the rat
The processing of spatial information in the brain requires a network of structures within which the hippocampus plays a prominent role by elaborating an allocentric representation of space. The parietal cortex has long been suggested to have a complementary function. An overview of lesion and unit recording data in the rat indicates that the parietal cortex is involved in different aspects of spatial information processing, including allocentric and egocentric processing. More specifically, the data suggest that the parietal cortex plays a fundamental role in combining visual and motion information, a process that would be important for an egocentric-to-allocentric transformation process. Furthermore, the parietal cortex may also have a role in the long-term storage of representations, although this possibility needs further evidence. The data overall show that the parietal cortex occupies a unique position in the brain at the interface of perception and representation.
Introduction
Spatial behaviors are essential to the survival of most animal species. Evolution yielded the emergence of spatial strategies that allow animals to maintain their navigational capability and their spatial memory in spite of environmental modifications. Understanding how the brain processes spatial information has motivated a huge amount of work. It is now well established that the processing of spatial information in the brain requires a network of cortical and subcortical structures within which the hippocampus plays a central role by implementing an allocentric representation of space. One of the most striking pieces of evidence in favor of such a role comes from the existence in CA1 and CA3 of pyramidal neurons characterized by location-specific firing, the so-called place cells [START_REF] O'keefe | The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely moving rat[END_REF][START_REF] O'keefe | The hippocampus as a cognitive map[END_REF]. The discovery of place cells in the 1970s has had a great conceptual influence and contributed to promote a ''hippocampus-centered" view of the processing of spatial information in the brain. However, that a phylogenetically preserved, paleocortical structure such as the hippocampus could be the neural substrate of high-level cognitive processes implicitly raised the question of the role of the neocortex in rodents. In this respect, the influence of Lashley's theories was still perceptible in the 1970s [START_REF] Mcdaniel | Posterior association cortex and visual pattern discrimination in the rat[END_REF][START_REF] Thomas | Mass function and equipotentiality: A reanalysis of Lashley's retention data[END_REF][START_REF] Thomas | The effects of lesions in the frontal or posterior association cortex of rats on maze III[END_REF]. As the main proponent of a holistic view of cortical functions in learning twenty years before, Lashley had postulated that cortical areas do not have specific functions as far as learning is concerned and can substitute for each other when a lesion is made (the equipotentiality principle). Several decades later, this theory motivated studies that examined the effects of lesioning various parts of the cortex on learning performance.
Lesions of the posterior association cortex, frontal cortex and temporal cortex produced different effects on various learning tasks, thus questioning Lashley's equipotentiality principle and, perhaps more importantly, suggesting a specific contribution of the posterior association cortex (McDaniel & Thomas, 1978;[START_REF] Thomas | Mass function and equipotentiality: A reanalysis of Lashley's retention data[END_REF][START_REF] Thomas | The effects of lesions in the frontal or posterior association cortex of rats on maze III[END_REF]. In the context of a strong disagreement in the literature regarding the existence of a posterior association cortex in the rat, these seminal studies are among the first to propose that this region has a distinct role in spatial learning and memory. This renewal of interest in the parietal cortex in the rat produced a large number of studies that sought to characterize this cortical area both neuroanatomically and functionally. The results provide a great deal of evidence in favor of a role in the formation of long-term spatial representations. The aim of this review is to summarize this evidence and to suggest possible directions for further work.
The parietal cortex is involved in multimodal processing: Anatomical evidence
The hypothesis of the existence of a posterior association cortex (hereafter referred to as parietal cortex) in the rat was initially founded on neuroanatomical grounds. Using cytoarchitectonic characteristics, Krieg described a parietal region subdivided into six areas [START_REF] Krieg | Connexions of the cerebral cortex. 1. The albino rat. A. Topography of the cortical areas[END_REF], three primary somatosensory areas (labeled 1, 2, 3 according to Brodmann's nomenclature) and three areas putatively involved in multisensory integration (labeled 7, 39, 40). Subsequently, Krieg's area 7 was considered to correspond to the parietal cortex by [START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF]. This area, lying between the rostral primary somatosensory areas and the caudal secondary visual areas, would differ from the neighboring regions by a reduction in layer thickness and fewer fibers [START_REF] Kolb | Posterior parietal and temporal association cortex[END_REF]. The parietal cortex was also described on the basis of its thalamic inputs. Authors agreed that the thalamic projections to the parietal cortex originated from the lateroposterior and laterodorsal nuclei [START_REF] Chandler | Thalamocortical connexions of rat posterior parietal cortex[END_REF][START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF][START_REF] Lashley | Thalamo-cortical connections of the rat's brain[END_REF](McDaniel, McDaniel, & Thomas, 1978);[START_REF] Reep | Rat posterior parietal cortex: Topography of corticocortical and thalamic connexions[END_REF]. However, such connections are not specific, since the lateroposterior nucleus also has extensive projections to various cortical areas including the primary and secondary visual cortex, medial prefrontal and anterior cingulate cortex (Musil & Olson, 1988a, 1988b) and subcortical regions such as the striatum [START_REF] Kamishina | Striatal projections from the rat lateral posterior thalamic nucleus[END_REF]. Whether there is a topographic organization of the neurons within the lateroposterior thalamus with respect to their cortical site of projection is not clearly established.
Most importantly, strong support for the hypothesis of an associative function in the parietal cortex is provided by the pattern of corticocortical connections. As shown in Fig. 1, the parietal cortex receives inputs from various sensory regions including the somatosensory cortex (Par 1 according to Zilles' nomenclature, Zilles, 1985), primary and secondary visual cortex (Oc1, Oc2L, Oc2M), and the auditory cortex (Te1) [START_REF] Kimura | Efferent connections of ''posterodorsal'' auditory area in the rat cortex: Implications for auditory spatial processing[END_REF][START_REF] Kolb | Posterior parietal and temporal association cortex[END_REF][START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF][START_REF] Miller | Direct connexions of rat visual cortex with sensory, motor, and association cortices[END_REF][START_REF] Reep | Rat posterior parietal cortex: Topography of corticocortical and thalamic connexions[END_REF][START_REF] Torrealba | Cortical connexions of the anteromedial extrastriate visual cortex in the rat[END_REF]. It is also connected to cortical regions involved in goal-directed behavior such as the orbitofrontal and medial prefrontal cortices (LO, VLO, Fr2) [START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF][START_REF] Nelson | Prefrontal cortical modulation of acetylcholine release in posterior parietal cortex[END_REF][START_REF] Reep | Rat posterior parietal cortex: Topography of corticocortical and thalamic connexions[END_REF]. Interestingly, the parietal cortex receives inputs from the cerebellum, suggesting a direct link with motor systems [START_REF] Giannetti | Cerebellar input to the posterior parietal cortex in the rat[END_REF]. It may also have some connections with the vestibular system, either monosynaptically [START_REF] Guldin | Monosynaptic input from the cerebral cortex to the vestibular brainstem nuclei in the rat[END_REF] or polysynaptically via the lateroposterior thalamic nucleus [START_REF] Smith | The effects of vestibular lesions on hippocampal function in rats[END_REF], but this remains to be clarified. Consistent with the hypothesis of a role in spatial memory, the parietal cortex is connected to the limbic system and in particular to the hippocampal formation via the retrosplenial and the postrhinal cortex [START_REF] Burwell | Cortical afferents of the perirhinal, postrhinal, and entorhinal cortices of the rat[END_REF]. Note however that nothing is known about the topographical organization of the projections within the parietal cortex. One can assume that the projections are not intermingled over the whole parietal surface but are, on the contrary, segregated, but this hypothesis remains to be confirmed. Overall, this complex pattern of connections strongly suggests that the parietal cortex is part of various networks involved in the processing of sensory and motor information and in memory. It may therefore play a unique role in multimodal processing and, as a result, would be an important actor in many cognitive processes in the rat.
Effects of parietal cortex lesions in the processing of allocentric information
Parietal cortex lesion studies were performed not only to uncover the role of this structure in spatial learning but also to discriminate it from that of the hippocampus. The possibility that the cognitive map, or at least an elementary form of it, was elaborated in the parietal cortex before being fully realized in the hippocampus was raised.
To investigate the contribution of the parietal cortex to the long-term representation of spatial information, a number of studies examined the effects of parietal lesions in place navigation tasks that involve the formation and use of an allocentric spatial representation. Most of these studies used the Morris water maze, but a few used alternative situations such as the cheese board task, a dry version of the water maze [START_REF] Kesner | Place and taste aversion learning: Role of basal forebrain, parietal cortex, and amygdala[END_REF]. In the Morris water maze, the animals are required to locate a submerged platform by using a configuration of environmental cues. Lesions yielded variable effects. Rats with parietal cortex lesions were at best unaffected [START_REF] Compton | The flexible use of multiple cue relationships in spatial navigation: A comparison of water maze performance following hippocampal, medial septal, prefrontal cortex, or posterior parietal cortex lesions[END_REF][START_REF] Kolb | A comparison of the contributions of the frontal and parietal association cortex to spatial localization in rats[END_REF](Save & Poucet, 2000a) and at worst mildly impaired in the acquisition of this task [START_REF] Kolb | Dissociation of the medial prefrontal, posterior parietal, and posterior temporal cortex for spatial navigation and recognition memory in the rat[END_REF][START_REF] Kolb | Recovery from early cortical lesions in rats. III. Neonatal removal of posterior parietal cortex has greater behavioral and anatomical effects than similar removals in adulthood[END_REF][START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF][START_REF] Save | Effects of lesions of the associative parietal cortex in the acquisition and use of spatial memory in egocentric and allocentric navigation tasks in the rat[END_REF]. In contrast, a marked deficit was found by DiMattia and Kesner [START_REF] Dimattia | Spatial cognitive maps: Differential role of posterior parietal cortex and hippocampal formation[END_REF] and by Hoh and co-workers (2003). However, in the DiMattia and Kesner study, it is possible that the deficits resulted from a larger lesion size and a more anterior lesion location than in the other studies. We also showed that the parietal cortex is not recruited when the hippocampus is inactivated during place learning (Parron, Poucet, & Save, 2001). Using a distributed learning procedure, we found that short-lasting reversible inactivation of the dorsal hippocampus during the navigation trials in the Morris water maze did not prevent learning of a platform location, therefore suggesting the involvement of another structure for acquisition, storage and off-line processing. The possibility that this structure could be the parietal cortex was ruled out, since rats with parietal cortex lesions that had their hippocampus inactivated were able to perform the task as well as rats with just an inactivated hippocampus. Thus, the parietal cortex does not appear to be the brain area that compensates for a dysfunctioning hippocampus. As a whole, the results suggest that the parietal cortex plays a role in the formation of spatial representations, but this role does not appear to be critical for place learning and navigation. That hippocampal lesions had much more deleterious effects on place navigation than parietal cortex lesions (e.g.
[START_REF] Compton | The flexible use of multiple cue relationships in spatial navigation: A comparison of water maze performance following hippocampal, medial septal, prefrontal cortex, or posterior parietal cortex lesions[END_REF][START_REF] Morris | Place navigation impaired in rats with hippocampal lesions[END_REF] suggested that these two regions contribute to different aspects of the processing of spatial information. Interestingly, Kolb and colleagues observed that rats with parietal cortex lesions were able to learn the general location of the platform by using room cues but had difficulties in adjusting their movement toward the goal [START_REF] Kolb | Dissociation of the medial prefrontal, posterior parietal, and posterior temporal cortex for spatial navigation and recognition memory in the rat[END_REF][START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF]. Because this deficit did not appear when the animals had to navigate toward a visible goal, the possibility of a pure motor impairment is unlikely. A more appealing hypothesis would consider the possibility that the parietal cortex is involved in the conjoint integration of spatial features and motion information, a process that may be important for the formation of the hippocampal cognitive map [START_REF] Mcnaughton | Cortical-hippocampal interactions and cognitive mapping: A hypothesis based on reintegration of the parietal and inferotemporal pathways for visual processing[END_REF][START_REF] Mcnaughton | Cortical representation of motion during unrestrained spatial navigation in the rat[END_REF]. A study in which rats were trained to navigate to a submerged platform with cues/objects directly placed in the pool provided results consistent with this hypothesis. This study was based on the assumption that building an allocentric representation requires extraction of spatial invariants in the environment. It has been proposed that this process involves conjoint integration of different views of the environment with the movements connecting these views [START_REF] Poucet | Spatial cognitive maps in animals: New hypotheses on their structure and neural mechanisms[END_REF][START_REF] Poucet | The neuropsychology of spatial cognition in the rat[END_REF]. Because of parallax effects, using intramaze cues may require enhanced association between views and motion to form a spatial representation. Rats with parietal cortex lesions were impaired when they had to use intramaze cues but not when they had to use distant cues, whereas rats with hippocampal lesions were impaired regardless of the kind of cues they used (Save & Poucet, 2000a). The notion that the parietal cortex cooperates with the hippocampus and is involved in the formation of spatial representations based on proximal objects is further supported by a study that examined the effects of parietal lesions on hippocampal place cell activity. Place cells were recorded in parietal-lesioned rats as the animals performed a pellet-chasing task in a circular arena containing three objects [START_REF] Save | Functional interaction between the parietal cortex and hippocampal place cell firing in the rat[END_REF]. Room cues were made irrelevant by placing a curtain around the arena. We found that place field stability was perfectly controlled by the intramaze objects in control rats. Ninety-degree rotation of the set of objects in the absence of the rat resulted in equivalent rotation of the place fields.
In contrast, in parietal-lesioned rats, the control of place fields by the objects was much weaker, since a majority of fields did not rotate but remained stable relative to the room reference frame. We also examined whether the rats could use olfactory and idiothetic cues to maintain stable fields by removing the objects. This was the case for control rats, in which place fields remained stable. In contrast, place fields in lesioned rats shifted to their initial position, suggesting that place cells did not use combined olfactory and idiothetic cues but background cues to maintain stable fields (see Fig. 2). We assumed that because parietal-lesioned rats were unable to properly use proximal objects, place cells eventually used uncontrolled cues. If this hypothesis is correct, then place cell activity in parietal-lesioned rats should be normally controlled by distal cues. Beyond this assumption, this study is a demonstration that the parietal cortex and the hippocampus are functionally related in spite of indirect neuroanatomical relationships. As a whole, the data are compatible with the idea that the parietal cortex plays a role in the combination of visuo-spatial and motion information but also provide some new hints on the organization of cue encoding processes. Proximal cues, i.e. objects placed in the animal's locomotor space, and distal cues, i.e. room cues, may be differentially encoded by different structures in the brain. These results and others suggest that the parietal cortex is preferentially involved in the processing of proximal cues whereas the entorhinal cortex is preferentially involved in the processing of distal cues. The hippocampal system would have a major role in combining the two kinds of information to form an integrated spatial representation [START_REF] Parron | Entorhinal cortex lesions impairs the use of distal but not proximal landmarks during navigation in the rat[END_REF](Save & Poucet, 2000a);[START_REF] Van Cauter | Unstable CA1 place cell representation in rats with entorhinal cortex lesions[END_REF]. The effects of parietal cortex lesions were also examined in non-associative tasks. Save and colleagues used a habituation/dishabituation situation in which the rats explored an arena containing several objects [START_REF] Save | Objects exploration and reaction to a spatial and a non-spatial change in the rat following damage to the posterior parietal cortex or the dorsal hippocampus[END_REF]. The objects formed a particular configuration that remained constant during the habituation sessions but was modified during the spatial change and non-spatial change sessions. It is important to note that each animal was run only once and that the whole sequence of sessions was completed in approximately 1 h. When repeatedly exposed to the initial configuration, both sham-operated and parietal-lesioned rats exhibited habituation, i.e. a decrease in object exploration and locomotor activity. It was assumed that rats would form a spatial representation during habituation. To test this hypothesis, the effects of changing the spatial configuration were examined by displacing some objects. Such a manipulation induced a renewal of exploration specifically directed toward the displaced objects in sham-operated rats. In contrast, rats with parietal lesions did not display such a renewal, suggesting that they were impaired in elaborating a spatial representation on the basis of the object configuration.
In addition, it was demonstrated that this deficit is specific to the manipulation of spatial relationships, since a non-spatial change (replacing a familiar object with a novel object) induced a renewal of exploration in the two groups. In this situation, rats with hippocampal lesions displayed a similar pattern of performance as rats with parietal lesions: they were impaired in the detection of the spatial but not the non-spatial change.
Fig. 2. Examples of firing rate maps of two hippocampal place cells recorded in four successive sessions in control and parietal-lesioned rats. Rats were foraging in a circular arena containing three objects (represented as a black square, white circle, and white polygon). Dark pixels indicate the place field of the cell. In control rats, place fields were controlled by the objects, i.e. rotated an equivalent amount after object rotation, and remained stable after object removal, suggesting a control by non-visual (e.g. idiothetic, olfactory) cues. In contrast, in parietal-lesioned rats, a number of place fields were not controlled by the objects (not illustrated). In addition, after object removal, place fields shifted back to their initial location (Standard 1/Standard 2).
At this point, the comparison of the impact of parietal lesions in associative and non-associative tasks is instructive. First, the parietal cortex seems to play a greater role when the animal has to use a configuration of intramaze objects than a configuration of extramaze cues to form a spatial representation. This supports the hypothesis that the parietal cortex is involved in the association between visuo-spatial and motion-related information. Second, the contribution of the parietal cortex would be fully revealed when encoding of spatial information is performed within a limited amount of time that does not allow for compensatory processes and/or neural activation. Such processes may account for the reduced deficits when training is distributed over days. Both the parietal cortex and the hippocampus are involved in the processing of complex spatial information, but their specific contributions remain unclear. With the aim of disentangling their respective roles, studies compared the effects of parietal cortex and hippocampal lesions in various spatial tasks. In a scene discrimination task, rats with parietal lesions and rats with hippocampal lesions both exhibited impaired detection of spatial and spatial/object changes but not of object changes [START_REF] Decoteau | Effects of hippocampal and parietal cortex lesions on the processing of multiple-object scenes[END_REF], an effect in line with Save et al.'s (1992) findings. Later, it was suggested that this effect could result from the inability of parietal-lesioned rats to process topological information and of hippocampal-lesioned rats to process metric information [START_REF] Goodrich-Hunsaker | Human topological task adapted for rats: Spatial information processes of the parietal cortex[END_REF][START_REF] Goodrich-Hunsaker | Dissociating the role of the parietal cortex and dorsal hippocampus for spatial information processing[END_REF].
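The place cell analyses summarized above (and illustrated in Fig. 2) rest on occupancy-normalized firing rate maps and on comparisons of maps across sessions, for example after a 90-degree rotation of the objects. The sketch below shows, in broad strokes, how such a rate map and a rotation comparison might be computed from tracked positions and spike times; the bin size, sampling interval and variable names are illustrative assumptions and do not reproduce the authors' actual analysis pipeline.

```python
# Minimal sketch (not the original analysis): occupancy-normalized firing rate
# map of a place cell, and a correlation test for a cue-rotation session.
import numpy as np

def rate_map(x, y, spike_idx, arena=100.0, nbins=20, dt=0.02):
    """x, y: tracked positions per video frame (same length); spike_idx: frame
    index of each spike; dt: duration of one frame in seconds."""
    edges = np.linspace(0.0, arena, nbins + 1)
    occupancy, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    spikes, _, _ = np.histogram2d(x[spike_idx], y[spike_idx], bins=[edges, edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        rmap = spikes / (occupancy * dt)          # spikes per second in each bin
    return np.nan_to_num(rmap)

def rotation_similarity(map_std, map_rot):
    """Correlate the standard-session map with the rotation-session map after
    rotating the latter back by 0, 90, 180 and 270 degrees. A correlation peak
    at the angle of the cue rotation (rather than at 0) indicates that the
    place field followed the objects."""
    sims = {}
    for k in range(4):
        rotated = np.rot90(map_rot, k)
        sims[90 * k] = np.corrcoef(map_std.ravel(), rotated.ravel())[0, 1]
    return sims

# Hypothetical usage, once position and spike data are available:
# m_std = rate_map(x_std, y_std, spikes_std)
# m_rot = rate_map(x_rot, y_rot, spikes_rot)
# print(rotation_similarity(m_std, m_rot))
```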
In contrast to these findings, neither the parietal cortex nor the hippocampus seems to be involved in the discrimination of spatial location or allocentric distance as measured in a delayed-matching-to-sample go/no-go task [START_REF] Long | The effects of dorsal versus ventral hippocampal, total hippocampal, and parietal cortex lesions on memory for allocentric distance in rats[END_REF][START_REF] Taube | Head direction cells and the neurophysiological basis for a sense of direction[END_REF]. However, the parietal cortex would play a weaker role in the memory for allocentric distance than the hippocampus in a similar paradigm [START_REF] Long | The effects of dorsal versus ventral hippocampal, total hippocampal, and parietal cortex lesions on memory for allocentric distance in rats[END_REF]. Rats with parietal lesions were not impaired in learning an allocentric version of a Hebb-Williams maze but were impaired in an egocentric version. Rats with hippocampal lesions were impaired in the allocentric but not the egocentric version [START_REF] Rogers | Lesions of the dorsal hippocampus or parietal cortex differentially affect spatial information processing[END_REF]. These results do not provide a clear picture of the respective roles of the parietal cortex and hippocampus and of parietal-hippocampal relationships. They nevertheless suggest that the parietal cortex has a more subtle role in spatial representation than the hippocampus. In particular, parietal lesions appeared to affect or spare the ability to process and memorize spatial information depending on the experimental setup and task used. Thus, a possible explanation for these inconsistent effects is that the involvement of the parietal cortex, and therefore of the parietal-hippocampal interaction, would be highly dependent on the cognitive demand, task contingencies, and behavioral constraints. This hypothesis is supported by the results of a recent study in which Rogers and Kesner used a disconnection procedure to more directly examine the importance of cortical-hippocampal interaction in the processing of spatial information [START_REF] Rogers | Hippocampal-parietal cortex interactions: Evidence from a disconnection study in the rat[END_REF]. In this procedure, the rats received unilateral lesions of both the hippocampus and the parietal cortex, either in the ipsilateral or in the contralateral hemisphere. If the two structures interact, contralateral lesions are expected to completely disrupt the parietal-hippocampal relationships, therefore producing profound deficits, whereas ipsilateral lesions, which only partially damage the relationships, would produce only mild deficits. Rats were trained in different spatial tasks including an object-place paired associate learning task, a dry-land place navigation task and a reaction-to-change task. Rats with contralateral lesions were more impaired than rats with ipsilateral lesions in the object-place learning task and the dry-land task. In contrast, both groups were similarly impaired in the reaction-to-change task. Although all three tasks have been shown to be sensitive to both parietal cortex lesions and hippocampal lesions, the results suggest that the parietal cortex and the hippocampus cooperate in the object-place and dry-land navigation tasks but not in the reaction-to-change task. As pointed out by the authors, one possible explanation is that parietal-hippocampal interaction would be important in spatial tasks that induce a gradual (over days) formation of a spatial representation.
On the contrary, rapid acquisition of environmental information during exploration involves both the parietal cortex and the hippocampus but does not require an interaction between these two structures. Interestingly, in a recent study using a similar disconnection procedure and equivalent tasks, we looked at the interaction between the entorhinal cortex and the hippocampus [START_REF] Parron | Cooperation between the hippocampus and the entorhinal cortex in spatial memory: A disconnection study[END_REF] and obtained opposite effects, suggesting entorhinal-hippocampal interactions in the reaction-to-change task but not in the place navigation task. This suggests that the formation of a representation and the detection of a spatial change during object exploration require cooperation between the entorhinal cortex and the hippocampus but not between the parietal cortex and the hippocampus. This outcome is consistent with the notion that the cortical-hippocampal interactions are modulated by the task requirements.
Effects of parietal cortex lesions in the processing of egocentric information
As in allocentric tasks, parietal cortex lesions produced variable effects in egocentric tasks. Once again, the diversity of the tasks used may account for such inconsistency. Actually, egocentric tasks encompass a heterogeneous set of behavioral situations, ranging from visually guided navigation in the water maze to path integration in an arena. Thus, these tasks involve different sensory inputs and processes that may be mediated by different brain structures. It is not surprising, therefore, that parietal cortex lesions do not disrupt all egocentric learning tasks. Parietal cortex lesions did not impair navigation to a visible platform in the water maze [START_REF] Kolb | Behavioral and anatomical studies of the posterior parietal cortex in the rat[END_REF](Save & Poucet, 2000a) or learning of an egocentric version of the radial maze [START_REF] Kesner | Double dissociation of egocentric and allocentric space following medial prefrontal and parietal cortex lesions in the rat[END_REF][START_REF] Kolb | Dissociation of the medial prefrontal, posterior parietal, and posterior temporal cortex for spatial navigation and recognition memory in the rat[END_REF]. Memory for egocentric distance in a delayed-matching-to-sample go/no-go task was not affected [START_REF] Long | Effects of hippocampal and parietal cortex lesions on memory for egocentric distance and spatial location information in rats[END_REF]. In contrast, deficits were found in the acquisition of a response-learning task in a Greek-cross-shaped water maze [START_REF] Mcdaniel | Unilateral injury of posterior parietal cortex and spatial learning in hooded rats[END_REF] and of a route-learning task in a Hebb-Williams maze [START_REF] Rogers | Lesions of the dorsal hippocampus or parietal cortex differentially affect spatial information processing[END_REF]. Few studies were conducted to investigate the possibility that the parietal cortex is involved in the processing of motion information. Generally, in these studies, allothetic cues are removed or made inconsistent in order to encourage the use of idiothetic cues by the animals. For example, [START_REF] Save | Effects of lesions of the associative parietal cortex in the acquisition and use of spatial memory in egocentric and allocentric navigation tasks in the rat[END_REF] trained rats to reach a platform in the water maze from a fixed start position in total darkness.
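Navigation based on idiothetic cues, as in the darkness experiment just mentioned, is usually interpreted in terms of path integration: the animal keeps a running estimate of its position by integrating self-motion, so that a vector pointing back to the starting point is available at any moment. The following sketch illustrates the principle with an idealized, noise-free dead-reckoning update from step lengths and heading changes; it is only an illustration of the computation, not a model used in the studies cited here.

```python
# Minimal sketch: ideal path integration (dead reckoning). The agent integrates
# step lengths and heading changes; the homing vector is the negative of the
# accumulated displacement from the start (e.g. the home cage).
import numpy as np

def integrate_path(step_lengths, turn_angles):
    """step_lengths[i]: distance moved at step i; turn_angles[i]: change of
    heading (radians) applied before step i. Returns the trajectory and the
    homing vector at the end of the outward path."""
    heading = 0.0
    pos = np.zeros(2)
    trajectory = [pos.copy()]
    for step, turn in zip(step_lengths, turn_angles):
        heading += turn
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        trajectory.append(pos.copy())
    homing_vector = -pos                      # vector pointing back to the start
    return np.array(trajectory), homing_vector

# Example: an outward search path; the homing vector gives the direct return.
steps = np.array([10.0, 15.0, 8.0, 12.0])
turns = np.radians([0.0, 60.0, -30.0, 90.0])
path, home = integrate_path(steps, turns)
print("final position:", path[-1], "homing vector:", home)
```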
Lesioning the parietal cortex resulted in inaccurate trajectories, so that lesioned rats could not learn the task. Consistent with these results, parietal lesions had a deleterious effect in rats that were trained, by using a disorientation procedure, to neglect allocentric cues and to rely on egocentric cues to navigate to the platform [START_REF] Commins | Disorientation combined with bilateral parietal cortex lesions causes path integration deficits in the water maze[END_REF]. The hypothesis that the parietal cortex is involved in the processing of idiothetic information was further investigated in studies that examined the rats' capability to navigate by path integration. Rats were trained to perform a homing task in a large circular arena. They had to climb onto the arena, explore it to find a piece of food hidden in one of 17 food wells and carry the food back, straight to the home cage. Because distant visual cues, directional auditory cues and local olfactory cues were made irrelevant, it was assumed that the animals would rely on path integration to return to their home cage. Rats with parietal lesions made more errors, i.e. did not exhibit a correct return, suggesting a path integration deficit. Note that both hippocampal lesions and entorhinal cortex lesions also produced an impairment in this task (Parron & Save, 2004;[START_REF] Save | Dissociation of the effects of lesions of the dorsal hippocampus and parietal cortex on path integration in the rat[END_REF]. However, unlike parietal and entorhinal cortex-lesioned rats, hippocampal-lesioned rats exhibited slower acquisition of the basic requirements of the task, which may reflect a more general learning impairment. Together, these results suggest that the parietal cortex plays an important role in path integration and that path integration is dependent on the recruitment of a large functional network including at least the parietal cortex, the entorhinal cortex and the hippocampus.
Acquisition vs. retention: A role in memory storage?
There is a large consensus that a dialogue between the neocortex and the hippocampus is essential for the formation of long-term memory [START_REF] Mcclelland | Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory[END_REF]. The hippocampus would be necessary for rapid acquisition of new information and short-term storage, whereas the neocortex would be involved in the storage of remote memories [START_REF] Frankland | The organization of recent and remote memories[END_REF]. The parietal cortex has been hypothesized to be activated during consolidation [START_REF] Maviel | Sites of neocortical reorganization critical for remote spatial memory[END_REF]. Accordingly, post-acquisition parietal cortex lesions should affect retention in spatial tasks. This aspect of parietal functioning has been poorly investigated, most studies reporting the effects of lesions made before learning. Among the studies that addressed the acquisition vs. retention issue, Cho and Kesner showed that parietal cortex lesions did not interfere with the retention of two previously learned spatial discriminations in the radial maze. In addition, lesioned rats were able to learn new discriminations [START_REF] Cho | Retrograde and anterograde amnesia for spatial discrimination in rats: Role of hippocampus, entorhinal cortex and parietal cortex[END_REF].
In a subsequent study using a similar task, the same authors found a non-temporally graded retention deficit in rats with parietal lesions [START_REF] Cho | Involvement of entorhinal cortex or parietal cortex in long-term spatial discrimination memory in rats: Retrograde amnesia[END_REF]. Post-acquisition lesions affected the retention of an egocentric navigation task in the water maze and produced a transient deficit in the place navigation (allocentric) version of the task [START_REF] Commins | Disorientation combined with bilateral parietal cortex lesions causes path integration deficits in the water maze[END_REF][START_REF] Save | Effects of lesions of the associative parietal cortex in the acquisition and use of spatial memory in egocentric and allocentric navigation tasks in the rat[END_REF]. [START_REF] Hoh | Role of the neocortex in the water maze task in the rat: A detailed behavioral and Golgi-Cox analysis[END_REF] did not find any retention deficit in the place navigation task. Parietal-lesioned rats were found to be impaired in both acquisition and retention of allocentric and egocentric maze learning using the Hebb-Williams maze [START_REF] Rogers | Lesions of the dorsal hippocampus or parietal cortex differentially affect spatial information processing[END_REF]. Thus, in tasks that yielded no or mild acquisition deficits, such as the place navigation task in the water maze, parietal lesions were not found to significantly impair retention. In contrast, in tasks that yielded acquisition deficits, such as egocentric navigation in darkness or route learning in the Hebb-Williams maze, parietal lesions produced retention deficits. Clearly, more data are needed, but those available suggest that when the parietal cortex is necessary, it is involved in both initial acquisition and long-term storage of spatial information. Neural activity in the parietal cortex: Unit recordings Only a few studies have managed to record unit activity in the parietal cortex in the rat. McNaughton and collaborators recorded parietal neurons as the rats performed a radial maze task. They found that a substantial number of cells exhibited movement correlates, discriminating between right turns, left turns and forward motion. Interestingly, there were cells that appeared to be modulated by a combination of motion and spatial correlates. For example, some cells were preferentially activated for outwardly directed forward motion [START_REF] Mcnaughton | Cortical representation of motion during unrestrained spatial navigation in the rat[END_REF]. Other cells fired selectively during specific turns at the end but not at the center of the maze. Still more selective spatial correlates were found, since there were cells that fired during specific turns in circumscribed parts of the maze, for example in the western arms [START_REF] Mcnaughton | Cortical-hippocampal interactions and cognitive mapping: A hypothesis based on reintegration of the parietal and inferotemporal pathways for visual processing[END_REF]. Thus, the parietal cortex contains cells that have correlates ranging from pure motion to conjunctions of motion and spatial correlates. Further investigations led to the identification of a small number of cells in the parietal cortex that had head direction firing properties (Chen, Lin, Barnes, & McNaughton, 1994b; Chen, Lin, Green, Barnes, & McNaughton, 1994a).
Similar cells were also found in the retrosplenial cortex (see also [START_REF] Cho | Head direction, place, and movement correlates for cells in the rat retrosplenial cortex[END_REF]). Some cells exhibited behavioral modulations in addition to their direction-specific firing. For example, there were head direction neurons that were more active for specific movements, e.g. right turns. Head direction-specific firing of parietal neurons was shown to be controlled by environmental and idiothetic cues (Chen et al., 1994b). Some of these neurons showed activity modulation in response to vestibular stimulation [START_REF] Chen | Head-centered representation and spatial memory in rat posterior parietal cortex[END_REF]. This is consistent with the properties of head direction cells recorded in other regions of the brain ([START_REF] Taube | Head direction cells and the neurophysiological basis for a sense of direction[END_REF], for a review) and suggests that parietal neurons incorporate information from both environmental cues and movement-related cues, including motor and vestibular cues. Pursuing the idea that parietal neurons encode motion information, Nitz recorded parietal neurons as rats ran along a familiar path in a complex maze including one right and one left turn [START_REF] Nitz | Tracking route progression in the posterior parietal cortex[END_REF]. He found that parietal neuron activity was modulated by a variety of motor behaviors (straight run, right turn, left turn, straight + right turn, etc.). Such activity was correlated with the sequence of movements irrespective of the places where these movements occurred. A direction-dependent activity was found only in restricted portions of the paths, suggesting a limited influence of the spatial context on parietal neuron activity. Thus, the finding suggests that parietal neurons encode representations of routes mainly in terms of movement sequences. Consistent with previous data, Nitz's results indicate that the parietal cortex is not only involved in the processing of motion information but also encodes representations of complex sensorimotor behavior. The role of the parietal cortex in the formation of spatial representation Overall, lesion and electrophysiological studies provide a complex pattern of results that may reflect the multiple facets of parietal cortex functioning. It may also reflect the neuroanatomical heterogeneity of this area. Note that little is known about the organization of inputs and outputs within the parietal cortex. It is likely that there is a subregional specificity. For example, a subregion may be preferentially involved in the processing of sensory information (e.g. visual) and another subregion in the association between tactile and visual information. Uncovering this organization may be very helpful to enhance our understanding of the multiple functional aspects of the parietal cortex. The picture that emerges from the data ascribes to the parietal cortex a role in the processing of both allocentric and egocentric information. It is clear, however, that the parietal cortex does not play a role in all allocentric and egocentric processes but has a more specific function. The data first suggest that the parietal cortex is important when the animal has to form an allocentric spatial representation by using nearby cues.
We have previously suggested that, in the nearby object situation, extraction of spatial invariants is strongly dependent on the association between different views of the environment and the movements connecting these views [START_REF] Poucet | Spatial cognitive maps in animals: New hypotheses on their structure and neural mechanisms[END_REF][START_REF] Poucet | The neuropsychology of spatial cognition in the rat[END_REF]. Thus, one basic function of the parietal cortex would be to perform associations between visual information and motion information, a process that is an early and important step for the formation of a spatial representation [START_REF] Mcnaughton | Neural association of movement and space: Preliminary steps toward a non-cartographic theory of spatial representation and learning[END_REF]. This hypothesis is consistent with neuroanatomical and functional data, accounting for the diversity of behavioral lesion effects and firing correlates of parietal neurons reviewed above. In a previous theoretical work [START_REF] Save | Hippocampal-parietal cortical interactions in spatial cognition[END_REF], we proposed that the parietal cortex primarily processes spatial information in an egocentric frame of reference. Indeed, visual input from an animal's point of view as well as movement-related information are basically egocentric. Associations between views and motions are assumed to initiate the transformation of egocentric into allocentric information. Evidence for such a gradual process in the parietal cortex is provided, in particular, by unit recordings that have identified firing correlates ranging from pure motor to complex visuo-spatial-motor combinations (Chen et al., 1994a; [START_REF] Mcnaughton | Cortical-hippocampal interactions and cognitive mapping: A hypothesis based on reintegration of the parietal and inferotemporal pathways for visual processing[END_REF][START_REF] Mcnaughton | Cortical representation of motion during unrestrained spatial navigation in the rat[END_REF][START_REF] Nitz | Tracking route progression in the posterior parietal cortex[END_REF]). Accordingly, a number of deficits in allocentric tasks may be a consequence of upstream alteration in the transformation process. The egocentric-to-allocentric hypothesis still holds in the light of the most recent results, even if it raises several questions. First, to what extent does the parietal cortex mediate the egocentric-to-allocentric conversion? Second, provided that the conversion is important for elaborating an allocentric representation, why should a parietal lesion not impair all behaviors requiring such a representation? Third, how is it possible to relate this particular function to other potential functions of the parietal cortex, for example in attentional processes (e.g. [START_REF] Cabeza | The parietal cortex and episodic memory: An attentional circuit[END_REF])? Fourth, is this role compatible with a possible involvement in the long-term storage of allocentric spatial representations? Clearly, any global model of parietal functioning must answer these questions and integrate all these aspects. The parietal cortex as an element of the cortical-hippocampal interaction Understanding the role of the parietal cortex in spatial information processing requires taking into account its interactions with other cortical and subcortical regions.
The data demonstrate that the parietal cortex is functionally related to the hippocampus, thus supporting the idea that the parietal cortex is part of a functional network that allows a continuous dialog between the neocortex and the hippocampus. The data also indicate that the parietal cortex and the hippocampus have distinct roles in spatial tasks. As proposed by [START_REF] Burgess | Integrating hippocampal and spatial functions: A spatial point of view[END_REF], there are two extreme theoretical views of the cooperation between the parietal cortex and the hippocampus. On the one hand, these two structures can be hypothesized to work in series. The parietal cortex would process sensory information in a format that could be used and further processed by the hippocampus. On the other hand, these two structures can be hypothesized to work in parallel. The parietal cortex and the hippocampus would mediate the formation of complementary spatial representations, and parietal-hippocampal cooperation would then take place at all levels of processing. According to the serial hypothesis, a parietal cortex lesion should produce deficits equivalent to those resulting from hippocampal lesions, which is clearly not the case, as shown by the short review of the literature presented above. According to the parallel hypothesis, a parietal lesion would be less disruptive than in the serial model, due to the spared capacity of the hippocampus to generate a spatial representation. This hypothesis accounts only partially for the behavioral effects of parietal cortex lesions. Thus, it could be useful to consider a model that comprises both serial and parallel properties. One possibility for integrating these two aspects would be to ascribe to the parietal cortex a role both in the processing of sensory information (serial processing in the egocentric-to-allocentric hypothesis) and in the long-term storage of spatial representations (parallel processing). This latter aspect is supported by metabolic imaging studies [START_REF] Bontempi | Time-dependent reorganization of brain circuitry underlying long-term memory storage[END_REF][START_REF] Maviel | Sites of neocortical reorganization critical for remote spatial memory[END_REF] but needs to be further investigated. Interest in the function of the parietal cortex in spatial information processing may gain new impetus as a consequence of the discovery of grid cells in the entorhinal cortex. These cells exhibit location-specific activity and generate multiple fields with regular spacing, therefore forming a grid-like firing pattern [START_REF] Hafting | Microstructure of a spatial map in the entorhinal cortex[END_REF]. It has been hypothesized that grid cells are involved in path integration, a basic navigation strategy requiring the use of movement-related information (see [START_REF] Mcnaughton | Path integration and the neural basis of the ''cognitive map[END_REF] for a review). How grid cell activity is generated remains unknown so far. That the parietal cortex is involved in the processing of movement-related information and in path integration [START_REF] Save | Dissociation of the effects of lesions of the dorsal hippocampus and parietal cortex on path integration in the rat[END_REF] suggests that it could contribute to the generation of the grid cell signal. Other cortical areas projecting to the entorhinal cortex may also contribute to grid cell activity, in particular the retrosplenial cortex.
This region has been shown to be connected to both the parietal and entorhinal cortices and contains spatial, head direction and movement-related signals (Chen et al., 1994a; [START_REF] Cho | Head direction, place, and movement correlates for cells in the rat retrosplenial cortex[END_REF]). The parietal cortex occupies a unique position in the brain, linking perception to spatial representations. Data have accumulated over the years, but the role of this region remains unclear. The difficulty is to integrate the different facets of its function into a coherent model. This model would necessarily also have to integrate the notion of a functional cooperation with the hippocampus and other cortical areas. Further anatomical, lesion and recording studies are needed to enter into the details of the parietal contribution to spatial processing. In particular, the possibility of functional subdivisions remains to be investigated. Because there are similarities between rodents and primates, a functional model of the parietal cortex in the rat may be useful to understand the normal and pathological functioning of the human parietal cortex. Fig. 1. Main cortical and subcortical connections of the parietal cortex in the rat.
04097904
en
[ "qfin" ]
2024/03/04 16:41:18
2016
https://hal.science/hal-04097904/file/nadia-mouna-im%C3%A8ne.pdf
Mouna Abdelhamid email: [email protected] Nadia Farjallah # email: [email protected] Imène Guetat email: [email protected] Economic growth, government size and political instability Keywords: Economic growth, Political instability, government size, GMM system estimator and MENA 1. INTRODUCTION To empirically determine the effects of political instability and size of government on economic growth, we use the GMM system estimator for linear dynamic panel data models on a sample covering up to 19 countries from 1980 to 2012. The major empirical finding is that higher degrees of political instability are associated with lower growth rates of GDP per capita, unlike the size of government, which has a positive effect on economic growth. We also find that political instability adversely affects growth by lowering the rates of productivity growth and of physical and human capital accumulation. Finally, democracy and inflation have a negative effect, while economic freedom is beneficial to growth. Political instability is a field of research which has attracted the attention of several researchers in economics and the social sciences, particularly in the 1980s with the proliferation of coups in Africa. The economic dimension of political instability has generated many papers in the literature, in particular on its relation to and interaction with economic performance. It is in this context that [START_REF] Rodrik | Policy uncertainty and private investment in developing countries[END_REF] affirms that political instability has a negative impact on macroeconomic indicators, such as investment, unemployment, and inflation. In fact, a low economic growth rate may be the result of political unrest during a change of government [START_REF] Kuznets | Modern Economic Growth: Rate, Structure, and Spread[END_REF]. As a result, a politically unstable economy is likely to cause corruption and other distorting activities. Therefore, political instability is likely to have a negative impact on economic growth. Empirically, [START_REF] Aisen | Political Instability and Inflation Volatility[END_REF] have shown that higher inflation volatility is associated with higher levels of political instability, fragmentation of the political system and lower economic freedom. Furthermore, they argued that policies in politically unstable countries tend to be interrupted more frequently than in countries that are politically stable. [START_REF] Alesina | Why are stabilizations delayed?[END_REF] have shown that the delay in the implementation of inflation stabilization programs is associated with greater political instability. In fact, several empirical studies have shown that political instability has a negative impact on the main macroeconomic variables such as GDP, private investment, and inflation. Jong-A-Pin (2009) examined the causal effect of political instability on economic growth (using the GMM method). He showed that an unstable political regime has a significant negative effect on economic growth. Political instability considerably reduced economic growth, both statistically and economically [START_REF] Aisen | How does political instability affect Economic Growth?[END_REF]. Similar studies have reported a negative and significant correlation between political instability and economic growth (e.g. [START_REF] Gupta | The Economics of Political Violence[END_REF]; Barro, 1991; Alesina et al., 1996; [START_REF] Ades | Thy Neighbor's curse: regional instability and economic growth[END_REF]).
Various economists (Alesina et al., 1996; [START_REF] Mauro | Corruption and growth[END_REF][START_REF] Özler | External shocks, politics and private investment: Some theory and empirical evidence[END_REF][START_REF] Alesina | Income distribution, political instability, and investment[END_REF]) showed that GDP growth is much weaker in countries where there is a significant propensity for government collapse than in other countries. According to the empirical study of Barro (1996), democracy has a slightly negative effect on economic growth, with evidence of nonlinearity whereby democracy increases growth at low levels of democracy but reduces it at higher levels [START_REF] Helliwell | Empirical Linkages between Democracy and Economic Growth[END_REF]. According to [START_REF] Azam | Risque politique et croissance en Afrique[END_REF], the emergence of political disturbances is determined by economic variables such as health spending, defense spending, the enrollment rate in primary and secondary education, etc. The estimated model is $y_{i,t} - y_{i,t-1} = \alpha y_{i,t-1} + \beta' x_{i,t} + \mu_i + \varepsilon_{i,t}$ (1), where $y$ is the logarithm of GDP per capita, $x$ represents the explanatory variables other than lagged GDP per capita, $\mu_i$ is a country-specific effect and $\varepsilon_{i,t}$ is the error term. System-GMM is a useful method to estimate the effects of political instability on growth because it provides a clear solution to the endogeneity problem involving these two variables. Taking first differences of equation (1), Arellano and Bond (1991) propose: $(y_{i,t} - y_{i,t-1}) - (y_{i,t-1} - y_{i,t-2}) = \alpha (y_{i,t-1} - y_{i,t-2}) + \beta' (x_{i,t} - x_{i,t-1}) + (\varepsilon_{i,t} - \varepsilon_{i,t-1})$ (2). Although differencing eliminates the country-specific effect, it introduces a new error term, $(\varepsilon_{i,t} - \varepsilon_{i,t-1})$, which is correlated with the lagged dependent variable $(y_{i,t-1} - y_{i,t-2})$. Under the assumptions that the error term is not serially correlated and that the explanatory variables are weakly exogenous, Arellano and Bond (1991) propose the following moment conditions: $E[y_{i,t-s}(\varepsilon_{i,t} - \varepsilon_{i,t-1})] = 0$ for $s \geq 2$ and $t = 3, \ldots, T$, and $E[x_{i,t-s}(\varepsilon_{i,t} - \varepsilon_{i,t-1})] = 0$ for $s \geq 2$ and $t = 3, \ldots, T$ (3). Arellano and Bond (1991) propose a two-step GMM estimator using these moment conditions. In the first step, the error term is assumed to be independent and homoscedastic across countries and over time. In the second step, the residuals obtained in the first step are used to construct a consistent estimate of the variance-covariance matrix, thus relaxing the assumptions of independence and homoscedasticity; the resulting estimator is asymptotically more efficient than the first-step estimator. The instruments for the regression in levels of the system estimator are the lagged differences of the corresponding variables, and the additional moment conditions for the system estimator are: $E[(y_{i,t-s} - y_{i,t-s-1})(\mu_i + \varepsilon_{i,t})] = 0$ for $s = 1$ and $E[(x_{i,t-s} - x_{i,t-s-1})(\mu_i + \varepsilon_{i,t})] = 0$ for $s = 1$. The error term $\varepsilon_{i,t}$ is assumed not to be serially correlated, and failure to reject the null hypotheses of the two specification tests gives support to our model. Both the difference estimator and the system estimator present some problems in small samples; for two-step estimators, asymptotic standard errors are biased (Arellano and Bond, 1991; Blundell and Bond, 1998). Table 1 provides details on all variables along with their definitions and the sources used in this paper. The democracy index ranges from strongly autocratic (-10) to strongly democratic (+10) [START_REF] Tang | The impacts of tourism, energy consumption and political instability on Economic Growth in MENA Countries[END_REF]; this variable is our proxy for democracy.
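To make the estimation mechanics above concrete, the following is a minimal sketch, not the authors' code, of how the difference-GMM moment conditions of equations (2)-(3) can be assembled on a balanced panel. The paper itself relies on the Blundell-Bond system GMM, whose additional level equations are omitted here, only lagged levels of y are used as instruments, and all function names are hypothetical.

```python
import numpy as np

def difference_gmm_moments(y, X):
    """Stack the first-differenced equations and the Arellano-Bond style
    instrument matrix E[y_{i,t-s} * delta eps_{i,t}] = 0, s >= 2, on a
    balanced panel.  y has shape (N, T); X has shape (N, T, K)."""
    N, T = y.shape
    n_inst = sum(range(1, T - 1))          # one block of columns per period
    dy, dX, Z_blocks = [], [], []
    for i in range(N):
        rows_Z = []
        for t in range(2, T):              # difference equations for t = 2..T-1
            dy.append(y[i, t] - y[i, t - 1])
            dX.append(np.concatenate((
                [y[i, t - 1] - y[i, t - 2]],        # lagged differenced y
                X[i, t] - X[i, t - 1])))            # differenced regressors
            z = np.zeros(n_inst)
            start = sum(range(1, t - 1))
            z[start:start + (t - 1)] = y[i, :t - 1]  # levels dated t-2 and earlier
            rows_Z.append(z)
        Z_blocks.append(np.vstack(rows_Z))
    return np.array(dy), np.vstack(dX), np.vstack(Z_blocks)

def one_step_gmm(dy, dX, Z):
    """Simple GMM step with weighting W = (Z'Z)^{-1} (a 2SLS-style choice);
    the textbook one-step Arellano-Bond estimator instead uses a weighting
    matrix reflecting the MA(1) structure of the differenced errors."""
    W = np.linalg.pinv(Z.T @ Z)
    A = dX.T @ Z @ W @ Z.T
    return np.linalg.solve(A @ dX, A @ dy)
```

Calling `difference_gmm_moments(y, X)` on arrays of shape (N, T) and (N, T, K) returns the stacked differenced data and the block-diagonal instrument matrix that feed `one_step_gmm`, whose first coefficient corresponds to the autoregressive parameter of equation (2).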
Database ( The final consumption expenditure of government (formerly general government consumption) includes all government current expenditures for purchases of goods and services (including compensation of employees). These expenditures also include most of the expenses for defense and national security, but does not include military expenditures that are part of government capital formation of government. World Bank's World Development Indicators (WDI) Index of Economic Freedom The Index of Economic Freedom Takes a broad and comprehensive view of Economic Freedom, measuring country performance in 10 separate Areas. The 10 measured aspects of Economic Freedom may be grouped Into four broad categories: 1. Rule of Law (property rights, freedom from corruption); 2. Government size (fiscal freedom, government Spending) 3. Regulatory efficiency (business freedom, labor freedom, monetary freedom); 4. Market and openness (trade freedom, investment freedom, and financial freedom). Freedom House political risk The measure of political constraints employed in this paper estimates the feasibility of policy change (the extent to which a change in the preferences of any one actor may lead to a change in government policy) Henisz, W. J. (2002). The assessment and measurement of the role of government changes or adjustments is not an easy target to achieve. In this paper, we used the concept of measuring the political instability as the propensity for government change, which has attracted a considerable attention in previous research (Alesina et al., 1996). However, our measures of political instability are somewhat different from those of previous works. In fact we used two variables. The first called political risk which estimates the feasibility of policy change (the extent to which a change in the preferences of any one actor may lead to a change in government policy), and the second is the Democracy (Polity IV) which is our proxy for democracy. However, we believe that application of those variables may also provide some interesting conclusions as it allows comprehensive comparisons between the effects of major and regular government transfers. Table 2 contains some basic information about our data, which shed some light on the possibility of the existence of simultaneous relations between economic growth and both indicators of political instability. The average growth rate of the studied countries was around 3.677 current US $. On the other hand the average of political risk was at a level of 0.126, which indicates that the change in government policy frequency was prominent. In addition, the democracy level was too low since it did not exceed the rate of -5. Moreover, to determine the strength of the statistical relationships between all the variables, table 3 represents the correlation matrix. 4). Inflation, GDP deflator (annual %). [START_REF] Alesina | Distributive Politics and Economic Growth[END_REF]. Democracy (polity IV). ( 6). Political risk. [START_REF] Alesina | Political instability and economic growth[END_REF]. Index of Economic Freedom. [START_REF] Bergh | Government Size and Growth: A Survey and Interpretation of the Evidence[END_REF]. size of government. At the correlation matrix, it can be seen that the factors moderately correlate, which implies that they indeed reflect different dimensions of political instability, although some correlation coefficients do significantly differ from 0. 
THE EMPIRICAL RESULTS We choose a growth model on a panel of 19 MENA developing countries (Annex 1) selected according to data availability for the period from 1980 to 2012. The empirical analysis is divided into two parts. First, we test the hypothesis that the instability of political institutions has a negative effect on economic growth. In the second part of the empirical analysis, we study the channels through which political instability affects economic growth. Our primary interest is to determine the tests of the validity of instruments (Sargan) and the lack of residuals serial autocorrelation (Arellano and Bond (1991). Political instability and economic growth Empirically, we accept the presence of an AR [START_REF] Ades | Thy Neighbor's curse: regional instability and economic growth[END_REF] for residues and the absence of an AR (2) effect. This is in accordance with the formulated hypotheses. Besides, the tests of Sargan validate the choices of instruments. For this study, variables: size of government and political instability are represented by proxy variables which are respectively the final consumption expenditure of general government as a percentage of GDP and political risk [START_REF] Henisz | The Institutional Environment for Infrastructure Investment[END_REF]. Specifically, we use the dynamic panel GMM system estimator of [START_REF] Blundell | Initial Conditions and Moment Restrictions in Dynamic Panel Data Models[END_REF]. This choice is motivated by the fact that this estimator allows us to model both the lagged dependent variable and the fixed country effects. In our opinion, including country fixed effects in the model is particularly important because most of the significant variables identified by the empirical growth literature (such as the ethnolinguistic or geographical splitting variables) are time-invariant (an overview of determinants of economic growth can be found in [START_REF] Durlauf | Growth Econometrics, Handbook of Economic Growth[END_REF]. In addition, the GMM approach can be used to take into account the potential endogeneity of political instability by using political instability lagged variables as instrumental variables. Results of the data regressions are represented in Table 1. In fact, considering macroeconomic variables, the results of the different models are similar to those provided. The hypothesis that political instability negatively affects economic growth gets a clear empirical support. The estimated coefficient implies when there is an additional change in political risk, the annual growth rate decreases. Consequently, the low economic growth may increase the volatility of government (Alesina et al., 1996).The initial GDP per capita has a negative coefficient, which is compatible with the conditional convergence income across countries. Investment [START_REF] Mankiw | A contribution to the empirics of Economic Growth[END_REF]) and tertiary enrollment rates have positive and statistically significant coefficients, indicating that investment and education promote growth. Inflation has a negative and statistically significant effect on economic growth due that high inflation negatively affects growth [START_REF] Edison | International financial integration and economic growth[END_REF][START_REF] Elder | Another perspective on the effects of inflation uncertainty[END_REF]. The Index of Economic Freedom is included in the model in column 2 to explain the favorable economic institutions. 
This index is statistically significant and has a positive sign as expected. Similarly, the size of government has a positive and significant effect on economic growth. In contrast, democracy has a significant and negative effect on economic growth. The one-unit increase in the index of democracy decreases the economic growth rate of 1.9 per cent. Similarly, the empirical analysis of Barro (1996) shows a negative relationship between democracy and economic growth. This implies that democracy promotes economic growth to low levels of political freedom, although tends to decrease it when a certain level of freedom is achieved. Transmission channels We study mechanisms by which political instability affects economic growth since political instability is associated with greater uncertainty about future economic policy. Hence, it is likely to affect negatively on investment and thus on physical capital. Various studies have verified a negative relationship between political instability and investment [START_REF] Alesina | Income distribution, political instability, and investment[END_REF][START_REF] Mauro | Corruption and growth[END_REF][START_REF] Özler | External shocks, politics and private investment: Some theory and empirical evidence[END_REF][START_REF] Perotti | Growth, income distribution, and democracy. What the data say?[END_REF]. The accumulation of human capital could be disturbed by political instability because uncertainty about the future can encourage less investing in education. In developing countries, human capital formation may be adversely affected by political instability in two ways (Gyimah-Brempong and Camacho (1998)). First, a greater political instability can bring those who have high levels of human capital to emigrate. The second source is due to the allocation of resources by the government. Devereux and Wen (1998) argue that greater political instability leads to a higher share of public spending in GDP, which may require a misallocation of resources and slow productivity growth. Political instability affects growth through the accumulation of physical and human capital, note that the first having a slightly greater effect than the second. Government stability is an important feature of political systems. Political instability leads to uncertainty about future political which encourages leaders to adopt a predatory behavior towards the resources of private economic resources. One of the main characteristics of democracy is providing transparent rules to facilitate the transaction between political forces. Consequently, democracies can have a peaceful and predictable transfer of political power; nevertheless, autocracies may experience violent and irregular changes. Empirically, Alesina et al. (1996) found that political instability has a negative effect on growth. The estimation results of the regressions are shown in the table 5. So far, we have examined the overall effect of political instability on economic growth without trying to exactly distinguish the influence of the accumulation of production factors. By adding the interaction term between the production factors and political instability, we find that the interaction term between political instability and human capital is positive and significant at 5% level. Then the interaction term between political instability and physical capital is significant and negative at 5% level. In addition, there is a term negative and significant interaction between democracy and political instability. 
In contrast, there is not an interaction term between political instability and government size. Finally, we can conclude that political instability can influence economic growth, through indirect effects via human capital. In other words, greater political stability will affect economic growth, if and only if these countries are characterized by a productive human capital. CONCLUSION In this study, we have tried to contribute to the resolution of fundamental questions concerning impact of political instability on economic growth and the transmission channels through which it affects economic growth. To do this, we used the GMM system estimator for linear dynamic panel data models on a sample covering up to 19 countries in the MENA region during the period 1980-2012. As part of this empirical study, we tested the effects of political instability, democracy and government size on economic growth. The key findings emerged from this empirical analysis show a negative impact of political instability and democracy on economic growth, which is opposite to the effect of government size on economic growth. In addition, increased political instability is aggregated to a decline in economic growth through the channels of human capital, physical capital, government size and democracy. More generally, since 2011, several countries in the MENA region recorded a significant slowdown in tourism, lower remittances from migrants, worsening budget deficits and a raise of their debt level. These factors explain the decline in the growth rate in these countries which record negative rates in the case of transitions countries (Libya and Syria), and others weak and volatile ones for most countries in the region. We conclude, that this analysis allowed us, even in part, to show the existence of a relationship between political instability and economic performance and some key channels through which the effects of political instability could affect the performance of countries in the MENA region. Although, it is important to note that, despite the importance of empirical results that led this work, shortcomings might arise namely existence of other possible mechanisms to examine this relationship that have not been considered and the causality problem that also has not been treated. and Abosedra (2014) found that political instability prevents the process of growth and economic development in the MENA region (use of 24 countries in the MENA region). Economic theory suggests several mechanisms by which government activities can affect growth. Literature concerning the relationship between government size and economic growth is full of contradictory results. This conflict is explained by variations in definitions and studied countries. There are many reasons to expect a relationship that is inversely U-shaped, a hypothesis that is sometimes indicated under the name of Armey curve (Armey 1995). For less developed countries, there is a positive link between tax revenue and growth because a state managed typically to collect taxes if it succeeded in providing the stability necessary for economic activity begins to grow (Besley and Persson 2009). The most basic government functions such as protection of property rights and Law enforcement can be performed at low levels of taxation. If productive public spending is characterized by diminishing returns, the negative impact of taxes financing public spending can dominate the positive impact of government activities promoting growth. 
Generally, in poor countries, the public sectors are insignificant, and the relationship between government size and growth is positive. Unlike in rich countries, public sectors are great, and the relationship between government size and growth is less positive than in poor countries, and possibly negative1. Concerning the interaction between 1 Andreas Bergh and Magnus Henrekson (2011). Government Size and Growth: A Survey and Interpretation of the Evidence. IFN Working Paper No. 858, 2011 democracy and economic growth, Acemoglu et al. (2014) found a positive effect of democracy on growth. Economists, such as Alesina and Rodrik (1994) and Persson and Tabellini (1994), said that the democratic redistribution is a distortion and it will discourage economic growth. As a matter of fact, Acemoglu et al. (2008) argues that democratic institutions can create distortions because of their redistributive tendencies. Figure 1 :Figure 1 , 11 Figure 1: Growth and political stability in MENA 2014 Concerning, the case where the explanatory variables persistence,[START_REF] Blundell | Initial Conditions and Moment Restrictions in Dynamic Panel Data Models[END_REF] and Alonso-Borrego andArellano (1996), verified that the delayed levels of these variables are weak instruments for the regression of the difference equation. Asymptotically, it will have an increase in the variance of the coefficients. Monti Carlo simulations for small sample sizes verified that the weaknesses of the instruments can establish biased coefficients.[START_REF] Arellano | Another Look at the Instrumental-Variable Estimation of Error Components Models[END_REF],[START_REF] Blundell | Initial Conditions and Moment Restrictions in Dynamic Panel Data Models[END_REF] have proposed an estimator system to reduce the potential bias and imprecision associated with the difference estimator. consistency of the GMM estimator depends on the validity of hypotheses of autocorrelation absence of error terms and instruments. For the validity of these assumptions, we use two tests proposed by Arellano and Bond (1991), Blundell and Bond (1998) together with Arellano and Bover (1995). The first Sargan test of over-identification tests the complete validity of the instruments and the second test verifies the assumption that the error Annual data on economic and political variables from 1980 to 2012 were collected for 19 countries, covering the MENA region (the Middle East and North Africa). Economic data sources were found on the World Development Indicators of the World Bank (WDI, 2007). The political data was obtained by Henisz (2002) and the Polity IV database (Marshall and Jaggers, 2009). The following work will focus on determining the impact of political instability and the size of government on economic growth. Inflation as Measured by the annual growth rate of the GDP implicit deflator shows the rate of price change in the economy as a Whole. The GDP implicit deflator is the ratio of GDP in current local currency to GDP in constant local currency.The total Enrollment in tertiary education (ISCED 5 and 6) Regardless of age, Expressed as a percentage of the total population of the five-year age group Following on from secondary school leaving. Table1: List of variables, definitions and sources Indicator Definition Source Inflation, GDP World Bank's World deflator (annual %) Development Indicators (WDI) Investment Share of The share of investment as a percentage of GDP. 
World Bank's World GDP (%) Development Indicators (WDI) School Enrollment, World Bank's World tertiary (% gross) Development Indicators (WDI) Democracy (Polity IV) GDP per capita is gross domestic product divided by midyear population. GDP is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for depreciation of fabricated assets or for depletion and degradation of natural resources. Data are in current U.S. dollars. Marshall and Jaggers, 2009) GDP per capita World Bank's World (current US $) Development Indicators (WDI) General government final consumption expenditure (% of GDP) Table 2 : 2 Descriptive statistics of examined data Variables Obs Mean Standard Min Max deviation growth of GDP per capita 500 3.677 0.563 2.452 4.967 Investment Share of GDP (%) 400 29.698 11.442 7.976 80.120 School Enrollment, tertiary (% gross). 411 19.536 13.196 -12.732 62.375 Inflation, GDP deflator 476 10.893 26.818 -25699 390.678 (Annual %) Democracy (polity IV) 509 -5.332 5.208 -10 10 political risk 523 0.126 0.201 0 0.647 Index of Economic Freedom 213 6.446 1.141 3.1 8.1 size of government 458 19.338 7.116 5.745 76.222 Table 3 : 3 Correlation Matrix Variable 1 2 3 4 5 6 7 8 1 1 2 0.004 1 3 0.099 -0.028 1 4 -0.069 -0.045 0.121 1 5 -0.078 -0.036 0.633 0.302 1 6 -0.047 -0.104 0.375 0.118 0.645 1 7 0.633 0.150 0.184 -0.304 -0.059 -0.148 1 8 0.184 0.018 0.075 0.165 0.089 0.146 -0.109 1 [START_REF] Ades | Thy Neighbor's curse: regional instability and economic growth[END_REF] . growth of GDP per capita. [START_REF] Aisen | Political Instability and Inflation Volatility[END_REF] .Investment Share of GDP (%). [START_REF] Aisen | How does political instability affect Economic Growth?[END_REF] . School enrollment, tertiary (% gross). ( Table 4 : 4 Economic growth and instability of political institutions Dependent Variable : real GDP growth per capita 1 2 3 4 5 lagged real GDP growth per capita -0.101 (0.000)** -0.069 (0.032)** -0.059 (0.000)** -0.028 (0.006) * * -0.159 (0.037)** Investment Share of GDP (%) 0.001 (0.000)** -0.001 (0.440) 0.012 (0.000)** 0.002 ( 0.002) * * 0.027 ( 0.005) * * School enrollment, tertiary (% gross) 0.001 (0.000)** 0.017 (0.000)** 0.008 (0.000)** 0.004 (0.000)** 0.021 (0.003) * * political risk -1.279 -0.463 -0.882 -0.980 -0.987 (0.000)** (0. 009) * * (0.000)** (0.020) * * (0.000)** Index of Economic Freedom 1.448 (0.000)** Inflation, GDP deflator (annual %) -0.003 -0.005 (0.010) * * (0.001) * * size of government 0.004 (0.000)** Table 5 : 5 Effect of political instability Dependent Variable : real GDP growth per capita 1 2 3 4 lagged real GDP growth per capita -0.086 -0.097 -0.098 -0.091 (0.000)** (0.000)** (0.000)** (0.000)** Investment Share of GDP (%) 0.008 0.008 0.008 0.008 (0.000)** (0.000)** (0.000)** (0.000)** political risk* School enrollment, tertiary 0.007 (% gross) (0.000)** -0.006 Political risk* Investment Share of GDP (0.085) * (%) political risk* size of government -0.007 (0.002) * * political risk* democracy -0.001 (0.031) * * constant 3.781 3.834 3.828 3.813 (0.000)** (0.000)** (0.000)** (0.000)** Number of observations 276 276 276 276 Number of countries 31 31 31 31 Hansen test (p-value) 0.14 0.14 0.13 0.13 Arellano-Bond test for AR(1) (p-value) 0.000 0.000 0.000 0.000 Arellano-Bond test for AR(2) (p-value) 0.17 0.147 0.164 0.141 Notes:  System-GMM estimates for dynamics panel-data models. 
Sample period: 1980-2012 Annex: List of countries used in the sample We used the MENA countries as the sample of our study; however, the region has no standardized definition, and different organizations define it as consisting of different territories, so we chose to limit this analysis to the following list of countries:
04098015
en
[ "info" ]
2024/03/04 16:41:18
2021
https://hal.science/tel-04098015/file/Rapport-Th%C3%A8se-Soulef-Bouaafia-Final.pdf
Keywords: Augmented Reality; Application Specific Integrated Circuit; AVC, Advanced Video Coding; AXI, Advanced eXtensible Interface; BD-BR, Bjontegaard Delta bitrate; Context-Adaptive Binary Arithmetic Coding; Configurable Logic Blocks; DCT, Discrete Cosine Transform Video content visualization has been revolutionized over the last decade with the advent of video-on-demand services, web-TV, video-sharing sites, live streaming services for individuals, and broadcast platforms offered by social networks. This has led to an explosion of internet traffic. According to a recent Cisco study, video-driven internet traffic will quadruple between 2017 and 2022 and will represent 82% of overall internet traffic. The appearance of new video content, such as 360° video, Virtual Reality (VR) and High Frame Rate (HFR), and the advent of very high spatial resolutions of 8K or even 16K, lead to a significant increase in the amount of data to be transmitted. Consequently, efficient compression is essential to store or transmit this huge amount of data. Despite the considerable performance achieved by the video coding standards, the existing compression techniques have shown their limitations and it is becoming increasingly difficult to meet the growing demand for data. Therefore, the adoption of new approaches such as machine learning based methods has great potential to address this challenge and can provide very promising results. The objective of this thesis is to introduce advanced techniques to significantly reduce the complexity of the High Efficiency Video Coding (HEVC) and the Versatile Video Coding (VVC) standards, while preserving the bitrate gain and ensuring a better video quality for users. These techniques, based on machine learning, provide better performance in classification, prediction and compression efficiency than classical algorithms. First, I would like to express my gratitude to Almighty God, who gave me the courage and strength to pursue this thesis and opened the gates of knowledge for me. I started my final internship project for my master's degree in the Electronics and Microelectronics Research Lab (EµE). It was March 2017. I never thought this training would be the reason for choosing me to pursue a long and fruitful Ph.D. journey full of fluctuations (the first registration was on January 23, 2019). What is for sure is that I will never forget this opportunity, which was wonderful, overwhelming, and full of lessons and responsibilities. Thankfully, my Ph.D. was completed successfully, during which we delivered many journal and international conference papers. This would not have happened without the help and support of countless people over the past three years. First of all, I am deeply grateful to my supervisors Mrs. Fatma Ezahra SAYADI and Mrs. Randa KHEMIRI, who gave me this opportunity and believed in me from the very beginning. Knowing that this Ph.D. experience was not always easy, it was always a pleasure, with a lot of excitement, to be up to the challenge and to beat paper deadlines. Working with you improved me a lot as a student, as a researcher, and as a person. It was not straightforward for me to understand how to think like a researcher. You have taught me, both consciously and unconsciously, how good work is done. I appreciate all your contributions of time and ideas to make my Ph.D. experience productive. You have always listened to my ideas and discussions with you frequently led to progress.
Your ability to approach research problems and your high scientific standards set an example. I admire your ability to balance research interests and personal pursuits. I am thankful for the excellent example you have provided me as a successful and ambitious researcher. I would first of all like to thank the members of the jury for their presence, for their careful reading of my thesis as well as for the remarks they will address to me during ii this defense in order to improve my work. I thank Professors Ali DOUIK and Khaled BEN KHALIFA for finding the time to read my thesis and for their valuable feedback. I would like to thank Professor Chokri SOUANI for being examinator of this jury. And finally, I would like to thank Professor Nejib HASSEN for the honor he gave me when he agreed to chair this jury. I would like to thank all EµE laboratory members, especially the Lab head Pr. Mohsen MACHHOUT and Pr. Mohamed ATRI for their advice and their human qualities of listening and understanding. This work would never be completed without the support of my family. Words cannot describe my gratitude for my parents, my brothers, and my sisters. Without them, I could never have reached this current level of success. To all of you, thank you for your continuous encouragements and devotion. I would like to give special thanks to my friend Dr. Seifeddine MESSAOUD who have always encouraged me. Soulef BOUAAFIA iii """"""' Résumé L a visualisation de contenus vidéo a été révolutionnée au cours de la dernière décennie avec l'apparition des services de vidéo à la demande, de web-TV, de sites de partage de vidéos, de service de diffusion en direct pour les particuliers, et des plateformes de diffusion offertes par les reseaux sociaux. Ceci a conduit à une explosion du trafic internet. Selon une étude récente de Cisco, le trafic internet lié à la vidéo quadruplera entre 2017 et 2022 et représentera 82% du trafic internet global. L'apparition de nouveaux contenus vidéo, tels que la vidéo 360°, la Réalité Virtuelle (VR), le High Frame Rate (HFR) et l'avènement de très grandes résolutions spatiale 8K voire 16K conduit à une augmentation significative de la quantité de données à transmettre. Par conséquent, une compression efficace est essentielle pour stocker ou transmettre cette énorme quantité de données. Malgré les performances considérables obtenues par les normes de codage vidéo, les techniques de compression existantes ont montré leurs limites et il devient de plus en plus difficile de répondre aux demandes croissantes de données. Par conséquent, l'adoption de nouvelles approches telles que les méthodes d'apprentissage automatique représente un grand potentiel pour relever ce défi et peut fournir des résultats très prometteurs. L'objectif de cette thèse est d'introduire des techniques avancées pour réduire significativement la complexité des normes de codage vidéo High Efficiency Video Coding (HEVC) et Versatile Video Coding (VVC) tout en préservant le gain en débit et assurant une meilleure qualité de vidéo aux utilisateurs. Ces techniques basées sur l'apprentissage automatique offrent de meilleures performances en classification, en prédiction et en efficacité de compression par rapport aux algorithmes classiques. 
With the development of multimedia computing, communication and display technologies, many video applications have emerged, such as TV broadcasting, video-on-demand, video conferencing, mobile video, video surveillance, 3D video and Augmented Reality (AR), which can provide immersive telepresence and realistic visual perception experiences. These video applications have been widely employed for multiple roles in human daily life, such as manufacturing, communication, national security, military, education, medicine, and entertainment. Nowadays, video data has become the majority of data traffic over the internet and its volume grows explosively each year. The latest Cisco Visual Networking Index reports that Internet Protocol video traffic accounted for 75% of all Internet traffic in 2017, and it is expected to rise to 82% by the year 2022 [START_REF] Cicero | Cisco predicts more ip traffic in the next five years than in the history of the internet[END_REF]. By then, millions of minutes of video content will be delivered over the network every second. To further enhance immersive and realistic visual experiences, more high-end video applications are emerging, such as High and Ultra-High Definition content (HD, UHD), Virtual Reality (VR), High Frame Rate (HFR) and 360° video, together with very high spatial resolutions of 8K or even 16K, which require larger data volumes to represent higher fidelity and more details. Meanwhile, the number of video clients and cameras in use grows rapidly as video demand keeps booming in recent years, with HDTV, surveillance cameras, laptops and smartphones. The total amount of global video data doubles every two years, which is the bottleneck for data processing, storage and transmission. Video coding is one of the basic technologies in video applications that allows video data to be structured and compressed more efficiently for computation, transmission and storage. It has been developed over three decades through four generations, and coding efficiency doubles every ten years. But there is still a big gap compared with the rapid growth of global video data, which doubles every two years. Achieving much higher compression efficiency and narrowing this gap in an effective way have become urgent missions for video coding. Machine learning is a field of study that can learn from data, discover hidden patterns and make data-driven decisions. Due to its superior performance in learning from data, many emerging works have applied machine learning algorithms to video coding to further improve coding performance, which has become one of the most promising directions in both academic and industrial communities. In this context, the advent of the video coding standard High Efficiency Video Coding (HEVC), standardized in January 2013 [START_REF] Gary J Sullivan | Overview of the high efficiency video coding (hevc) standard[END_REF], has made it possible to broadcast UHD content over communication networks. HEVC provides nearly 50% bitrate gain in comparison to the H.264/AVC standard for the same quality. However, HEVC is still not efficient enough to endure the burden of video transmission and storage for various large popular applications based on 8K and 360° videos. For this reason, Versatile Video Coding (VVC) appears as the most recent video coding standard, developed by the JVET and also known as H.266 [BCO + 21]. It is based on the same hybrid video coding block structure as its predecessors from MPEG-2 to HEVC.
VVC is designed to be both efficient and versatile to address today's media needs. This includes approximately 30% -50% bitrate reduction over HEVC [WHB + 20], as well as versatility by efficient coding of a wide range of video content and applications. The main objective of this thesis is to significantly improve the coding efficiency of HEVC and VVC standards based on fast machine learning algorithms. This manuscript is structured in four chapters: Chapter I : Video Coding and Artificial Intelligence Backgrounds General Introduction The first chapter introduces the most emerging video technologies nowadays. For this purpose, the hybrid aspects of the video coding standards are discussed first and then some essential modules to build a codec with this structure are detailed. In order to emphasize on the similarity of general structure between different video coding standards, some sections also provide equivalent historical and descriptive information from HEVC along with VVC. Meanwhile, we summarize the most challenges in video coding standards. After that, we introduce the recent advancements in machine learning and deep learning models and their categories. Finally, some related research based on video coding techniques are reviewed. Finally, the achieved results are discussed and a comparative study is made. Chapter II : Machine Finally, the last part of this thesis will be reserved for a general conclusion that summarizes the results found and lists the different perspectives. Chapter I Video Coding and Artificial Intelligence I.1 Introduction I n this chapter, a brief review of the video compression structure is presented. For this purpose, the hybrid video coding standards scheme is first discussed and then some essential modules to build a codec with this structure are detailed. In order to emphasize on the similarity of general structure between different video coding standards, some sections also provide equivalent historical information from High Efficiency Video Coding (HEVC) along with Versatile Video Coding (VVC). Meanwhile, we summarize the most challenges in video coding standards. After that, we introduce the recent advancements in machine learning and deep learning models and their categories. Finally, this chapter provides a detailed literature review on different advanced video coding approaches that have been proposed. The remainder of this chapter is organized as follows. Section I.2 presents the video compression history. The HEVC standard structure is described in Section I.3. Section I.4 introduces the VVC coding tools. Then, Section I.5 provides the video coding challenges. Afterwards, a detailed overview of artificial intelligence technique is exposed in Section I.6. The related research of video coding approaches is presented in Section I.7. Finally, Section I.8 concludes this chapter. I.2 Video Compression History Every second in year 2021, more than a million minutes of video content will cross the network. It would take a person more than 5 million years to watch all videos of one month [START_REF] Cicero | Cisco predicts more ip traffic in the next five years than in the history of the internet[END_REF]. This forecast is convincing for video coding experts to think of more efficient compression tools and technologies. These technologies are expected to address various emerging video formats, namely High Dynamic Range (HDR), High Frame Rate (HFR), high resolution videos (e.g. 4K, 8K and beyond), immersive 360°videos, screen content and more. 
Video coding standards aim at bringing format compatibility between devices. This enables the playback of any video file conforming the syntax of a given standard, with any device supporting it. From the industrial point of view, such unity facilitates the interaction between all components of the broadcast chain, including consumer electronics manufactures, broadcasters, content providers etc. This convenience in interaction, if achieved, can significantly accelerate the progress of the broadcast industry as a whole. CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS Most successful standardization acts of the MPEG were accomplished after its collaboration with the ITUT, in the late 90's. This joint collaboration, initially called Joint Video Team (JVT), then Joint Collaboration Team on Video Coding (JCT-VC), resulted in some of the most successful video coding standards in the family of "H.26x", notably H.264/Advanced Video Coding (AVC), developed in May 2003 [START_REF] Chen | Introduction to h. 264 advanced video coding[END_REF], and H.265/High Efficiency Video Coding (HEVC), finalized in January 2013 [START_REF] Gary J Sullivan | Overview of the high efficiency video coding (hevc) standard[END_REF]. In October 2015, another collaboration between MPEG and VCEG formed the Joint Video Exploration Team (JVET) [START_REF] Amir | Jvet encoder complexity analysis[END_REF] that was tasked with assessing the available compression technologies and exploring the requirements for a next-generation video compression standard. Hence, the new video coding standard called H.266/Versatile Video Coding (VVC) was standardized in July 2020 [BCO + 21]. After the history presentation of the different video coding standards, the two latest HEVC and VVC will be detailed in next sections, since they will be used in this thesis. I.3 HEVC Standard High Efficiency Video Coding (HEVC) is the sophisticated video coding standard, also known as H.265, standardized in 2013 by the JCT-VC [START_REF] Richardson | An introduction to high efficiency video coding[END_REF]. HEVC saves approximately 50% of bitrate for the same subjective video quality, with respect to its predecessor H.264/AVC standard. Thus, the HEVC codec is expected to ease the burden on global networks where High Definition (HD) and Ultra High Definition (UHD) video content is becoming more and more popular. HEVC is based on the basic hybrid structure as employed by previous standards since H.261. However, the standard contains series of incremental improvements [START_REF] Richardson | An introduction to high efficiency video coding[END_REF] In accordance, a typical video encoder compliant with the HEVC standard would start by dividing each frame into block-shaped regions, with the exact block partitioning being conveyed to the decoder. The first picture of the video sequence is coded using only intra picture prediction, i.e., the prediction of the blocks in the picture is only BACKGROUNDS From Figure I.2, the result from the prediction is subtracted from the original block and the residual information is then transformed by a linear spatial transform. The transform coefficients are then scaled, quantized, compressed and transmitted in the receiver, together with the prediction information. The encoder also integrates the processing loop of the decoder in order to generate the same pictures as the output of the decoder. 
These pictures are then stored in a decoded picture buffer, and will be used for the prediction of the subsequent pictures. In the following, the general features of the hybrid video coding scheme used in HEVC will be described with more details. I.3.1 Sampled Representation of Pictures Video sequence is typically captured using the RGB color space, which is not a particularity efficient representation for video coding. On the contrary, HEVC uses a more video coding friendly color space, the YCbCr, which divides the color space in 3 components: Y, known as luma, representing brightness; Cb and Cr, also known as chroma, which represent how much color deviates from gray towards blue and red, respectively. CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS As the human visual system is more sensitive to brightness, the typically used sampling scheme follows the 4:2:0 structure, meaning that four luma components are sampled for every chroma component. HEVC also supports each sample pixel value with 8 or 10 bits precision, with 8 bits being the most commonly used for HEVC standard [B + 13] and 10 bits used for VVC standard [FJK + 20]. I.3.2 Block Partitioning in the HEVC Standard In the former video coding standard H.264/AVC, Variable Macro-Block (MB) sizes ranging from 4×4 to 16×16 are supported [START_REF] Chen | Introduction to h. 264 advanced video coding[END_REF]. Whereas larger block sizes, reached at 64×64, are used in HEVC standard to facilitate the high definition video compression. Additionally, more flexible partitioning of video frames is supported to improve the (CTU), is splitted into CUs using a quad-tree partitioning structure, and a CU can be further sub-divided into Prediction Units (PU) for inter-frame or intra-frame prediction and its transformation is performed using one or more Transform Units (TU). I.3.3 Intra Prediction In intra picture prediction, the information of adjacent CTU from the same picture is used for spatial prediction, as shown in Figure I.4. There are a total of 35 intra picture prediction modes available in HEVC, corresponding to 33 different directional modes, a DC and a planar mode. For directional mode encoding, the spatially neighboring decoded blocks are used as reference for the prediction, using the selected angle to cover the current PU. This mode is the most used for regions with strong directional edges. Directional mode prediction is consistent across all block sizes and prediction directions. DC mode encoding simply uses a single value matching the mean value of boundary samples for the prediction. Finally, the planar mode assumes an amplitude surface with a horizontal and a vertical slope derived from the boundaries. This mode is supported for all block sizes in HEVC. I.3.4 Inter Prediction In order to exploit the redundancies in the temporal adjacent images, inter-picture prediction based on previously coded pictures is an essential technique to obtain high compression rates. It consists of the application of the following two techniques: motion compensation and motion estimation. By using these techniques, pictures are predicted from previously encoded frames (uni-directional) or from previous and future frames (bi-directional), as shown in Figure I.5. The use of the bidirectional prediction is more complex, since it requires the video frames to be coded and stored out of order, so that future frames may be available. 
Before the application of motion compensation technique, the encoder has to find a block similar to the one it is encoding on a previous/future encoded frame, referred to as a reference frame. Such searching procedure is known as motion estimation, resulting in the identification of a motion vector, which points to the position of the best prediction block in the reference frame. However, since the identified block will most likely not be an exact match of the encoding block, the resulting difference (residue) has to be encoded and transmitted to the decoding end, so that it can be read by the decoder. These residuals, originated from the difference between the predicted block and the actual block, are known as prediction errors. The actual position of the prediction in the neighboring frames may be out of the sampling grid (where the intensity is unknown), so the intensities of the positions in between the integer pixels must be interpolated and the resolution of the motion vector CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS increased accordingly. For the interpolation in fractional luma sample positions, an 8-tap filter is used, while a 4-tap filter is used for chroma samples. I.3.5 Transform and Quantization After the motion estimation, all the prediction error residuals are transformed into a set of coefficients for efficient transmission and storage. In the HEVC standard, as indicated in I.3.6 Entropy Coding In the HEVC standard a bitstream is produced using motion parameters, prediction modes, quadtree partitioning information, quantized transform coefficients and some other control data through entropy coding. Only one entropy coding method, Context-Adaptive Binary Arithmetic Coding (CABAC), is specified in the standard. Although there is no change made on the core algorithm of CABAC, it is optimized on the aspects of context modeling, adaptive coefficient scanning, coefficient coding, sign data hiding and so on to improve its throughput. I.3.7 In-Loop Filters Before writing the samples in the decoded picture buffer, they are processed first by a deblocking filter (DBF) and then by a sample adaptive offset filter (SAO). Block based coding schemes tend to produce blocking artifacts due to the fact that inner blocks are coded with more accuracy than outer blocks. To mitigate such artifacts, the decoded CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS samples are filtered by a DBF. After the deblocking has been processed, the samples are processed by SAO, a filter designed to allow for better reconstruction of the original signal amplitudes, reducing banding and ringing artifacts. SAO is performed on a per CTU basis and may or may not be applied, depending on the filtering type selected. I.4 VVC Standard Versatile Video Coding (VVC) [BCO + 21] [HBA + 21] is the new generation video coding developed in July 2020, by the Joint Video Experts Team (JVET), as a successor of the HEVC [START_REF] Gary J Sullivan | Overview of the high efficiency video coding (hevc) standard[END_REF]. As the next standard for sophisticated video coding technology, VVC allows up to 30% -50% for bitrate savings while maintaining the same quality as HEVC. VVC has been designed to achieve improved compression capacity over previous standards such as HEVC, and at the same time to be highly versatile for effective use in a broadened range of applications. Some key application areas for the use of VVC particularly include UHD video (e.g. 
4K or 8K resolution), video with a high dynamic range, and video for immersive media applications such as 360° omnidirectional video, in addition to the applications that have commonly been addressed by prior video coding standards. Similar to its predecessor HEVC, VVC uses a block-based hybrid coding architecture in which coding tools may be included or removed. The VVC architecture includes inter-picture and intra-picture prediction, and transform coding with entropy coding.

I.4.1 Block Partitioning in the VVC Standard
In VVC, each picture is split into non-overlapping squares called CTUs. The largest CTU size allowed in VVC is 128 × 128 pixels, larger than the 64 × 64 maximum allowed in HEVC. Large blocks improve the efficiency of coding flat areas such as backgrounds, especially for high-resolution videos such as HD and 4K. In order to represent highly detailed areas such as textures and edges efficiently, VVC employs a flexible partitioning scheme that can split a 128 × 128 CTU down to CUs as small as 4 × 4 pixels by combining quadtree, binary and ternary splits. The first is the quadtree split, also available in HEVC, which recursively splits a CTU into square CUs down to 4 × 4 pixels, smaller than the 8 × 8 minimum CU size in HEVC. The second consists of binary-tree and ternary-tree splits that partition a block into two and three rectangles, respectively. Both binary and ternary splits can operate in either the horizontal or the vertical direction, be applied recursively, and be mixed together in a nested multi-type tree. The block partitioning in VVC is highly flexible and provides about 8 percent bitrate reduction over HEVC. However, this flexibility comes at a computational cost, especially on the encoder side, where many more permutations need to be evaluated to select the optimal partition.

I.4.2 Intra Prediction
The number of directional intra modes in VVC is extended from the 33 used in HEVC to 65.

I.4.3 Inter Prediction
The basic concepts of uni-directional and bi-directional motion compensation from one or two reference pictures are mostly unchanged. However, there are some new tools that were not used in the previous video coding standard. For each inter-predicted CU, motion parameters consisting of motion vectors, reference picture indices and a reference picture list usage index, together with additional information needed for the new coding features of VVC, are used for inter-predicted sample generation. The motion parameters can be signalled in an explicit or implicit manner. When a CU is coded with skip mode, the CU is associated with one PU and has no significant residual coefficients, no coded motion vector delta and no reference picture index. A merge mode is specified whereby the motion parameters of the current CU are obtained from neighbouring CUs, including spatial and temporal candidates, as well as additional candidates introduced in VVC. The merge mode can be applied to any inter-predicted CU, not only in skip mode. The alternative to merge mode is the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, the reference picture list usage flag and other needed information are signalled explicitly for each CU.
Beyond the inter coding features in HEVC, VVC includes a number of new and refined inter prediction coding tools listed as follows; Extended merge prediction, 1/16th luma sample MV storage and 8×8 motion field compression, Bi-prediction with CU-level weight (BCW), and Bi-directional optical flow (BDOF), etc. I.4.4 Transform and Quantization The size of transform block is increased from 4 × 4 to 64 × 64 in the VVC standard compared to the HEVC standard. In addition to the DCT-II used in HEVC, a multiple transformation selection (MTS) scheme is also used for residual coding of intra and inter I.4.5 Entropy Coding In VVC, the CABAC technique is improved in comparison to the HEVC design. The three main modifications are: modified context modeling for transform coefficients, multi-hypothesis probability estimation with context-dependent updating speed and CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS adaptive initialization for context models (e.g. initial probability states of context models for inter coded slices can be initialized by copying states from previously coded pictures). Table I.1: Coding Tools of VVC vs HEVC I.4.6 In-Loop Filters In VVC, a remapping operation and three in-loop filters can be applied sequentially to the reconstructed picture to modify its representation domain and alleviate different types of artifacts. First, a new sample-based process called LMCS (Luma Mapping with Chroma Scaling) is performed. Then, a DBF is used to reduce blocking artifacts. CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS SAO is then applied to the deblocked picture to attenuate ringing and banding artifacts. Finally, an Adaptive Loop Filter (ALF) reduces other potential distortion introduced by the quantization and transform processes. The deblocking filter design is based on the one in HEVC but is extended with longer deblocking filters and a luma-adaptive filtering mode designed specifically for HDR video. While SAO is the same as in HEVC, and the deblocking is very similar, LMCS and ALF are new compared with previous standards. The design of ALF in VVC consists of two operations: ALF with block-based filter adaption for both luma and chroma samples and a cross-component ALF (CC-ALF) for chroma samples. I.5 Video Coding Challenges During the last decade, multimedia services and video applications have significantly increased due to the huge progress in digital technologies. The emerging video applications and image representation offer an immersive and more natural viewing experience. However, these new services require both higher quality and resolution (4K, 8K) to satisfy the quality of service required by the end users. To meet the increasing demands for video content at better qualities and higher resolutions, video compression technology is being researched and developed, due to its higher performance. However, this unmatched performance is achieved by increasing the encoder computational complexity mainly due to its block partition structure. Indeed, the complexity reduction has always been a popular challenge in the video coding field. For example, Figure I.9 shows that the greatest complexity lies in the selection of the optimal prediction mode, especially in the inter-mode [START_REF] Gabriel Cebrian-Marquez | Adaptive inter cu partitioning based on a look-ahead stage for hevc[END_REF]. 
In this context, many researchers aim to reduce the complexity for each standard I.6 Artificial Intelligence: New Advancements and Innovations Artificial Intelligence (AI) is a branch of computer science that deals with simulation of human intelligence by machines processes and computational rationality. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving. AI is a computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages. Machine learning and deep learning are subsets of AI, which are described in the following sections. I.6.1 Machine Learning The learning activity is essential for the human beings in order to understand and recognize various parameters such as a voice, a person, an object, and others. Supervised learning systems make use of labeled datasets [START_REF] Sotiris B Kotsiantis | Supervised machine learning: A review of classification techniques[END_REF]. This training set of input-output pairs is used to find a deterministic function that maps any input to an output, predicting future input-output observations while minimizing errors as much as possible. While Unsupervised learning systems use unlabeled datasets to train the system [START_REF] Hastie | Unsupervised learning. the elements of statistical learning[END_REF]. The objective of unsupervised learning is to derive structure from unlabeled data by investigating the similarity between pairs of objects, and is usually associated with density estimation or data clustering. Reinforcement learning systems do not experience a fixed dataset, but a feedback loop between the system and its experiences [START_REF] Pack | Reinforcement learning: A survey[END_REF]. A dynamic environment is considered in which state-action-reward triples are observed as the data. The objective of reinforcement learning is mapping situations to actions with the goal of maximizing rewards. Other existing learning systems that are a combination of two categories, such as semi-supervised learning that uses both labeled and unlabeled data [START_REF] Chapelle | Semisupervised learning[END_REF]. Here, we limit our focus to supervised learning algorithms. There are a wide variety of tasks exist that could be solved with machine learning. However, two popular machine learning tasks are regression analysis and classification. Commonly used algorithms for classification technique [START_REF] Sotiris B Kotsiantis | Supervised machine learning: A review of classification techniques[END_REF] include k-Nearest Neighbor, Support Vector Machine, Naïve Bayes, and Decision Trees, etc...In the following, we focus on describing the Support Vector Machine (SVM) considered as the most useful algorithm, due to its CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS ability to solve classification task problems. Indeed, SVM is a class of learning algorithm, initially used for discrimination that is, predicting a binary qualitative variable which is then generalized forecast a quantitative variable. In case of discriminating a dichotomous variable, they are based on the search for the optimal margin hyperplane. 
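As a small, self-contained illustration of this maximum-margin idea (an added example, not part of the original chapter text), a linear SVM can be fitted on two-class toy data and its separating hyperplane w·x + b = 0 inspected directly:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two toy classes in 2-D, roughly linearly separable.
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane normal w =", w, "  bias b =", b)
print("support vectors per class:", clf.n_support_)
print("margin width =", 2.0 / np.linalg.norm(w))   # margin of the optimal hyperplane
```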
I.6.2 Deep Learning
The past decade has witnessed the emergence and booming of Deep Learning (DL), a class of techniques increasingly adopted in the hope of approaching the ultimate goal of artificial intelligence [START_REF] Arif Wani | Advances in deep learning[END_REF]. DL belongs to machine learning technology and is distinguished by its computational models, known as deep artificial neural networks or deep networks for short, which are composed of multiple (usually more than three) processing layers, each layer being further composed of multiple simple but non-linear basic computational units. One benefit of such deep networks is believed to be the capacity to process data at multiple levels of abstraction and to convert data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network, including its processing layers, is learned from massive data using a general machine learning procedure. DL eliminates the need for handcrafted representations and is therefore regarded as especially useful for processing natively unstructured data, such as acoustic and visual signals, whose processing has long been considered a hard problem in the field of artificial intelligence. Specifically for image and video processing, DL using Convolutional Neural Networks (CNN) has revolutionized the paradigm in computer vision and image processing [START_REF] Bouaafia | Fast cu partition-based machine learning approach for reducing hevc complexity[END_REF]. CNN is one of the most commonly used supervised deep learning models and is described in this chapter. This network structure was first proposed by Fukushima in 1988 [START_REF] Fukushima | Neocognitron: A hierarchical neural network capable of visual pattern recognition[END_REF], and in the 1990s LeCun et al. popularized it with the LeNet architecture for handwritten character recognition.

I.6.2.1 Convolutional Layer
The convolutional layer is the core building block of a CNN, and its parameters consist of a set of learnable filters, also known as kernels. The main task of the convolutional layer is to detect features found within local regions of the input image that are common throughout the dataset, and to map their appearance to a feature map. A feature map is obtained for each filter in the layer by repeated application of the filter across sub-regions of the complete image, i.e., convolving the filter with the input image, adding a bias term and then applying an activation function. Four important hyperparameters govern the convolutional layer: filter size, number of filters, stride and zero padding. The following equation shows the convolution operation:

$x_j^l = f\big(\sum_{i \in M_j} x_i^{l-1} \times k_{ij}^l + b_j^l\big)$   (I.1)

where $x_j^l$ is the output of the current layer, $x_i^{l-1}$ is the output of the previous layer, $k_{ij}^l$ is the kernel of the present layer and $b_j^l$ are the biases of the current layer. $M_j$ represents a selection of input maps.

I.6.2.2 Pooling Layer
In a CNN, the sequence of convolution layer and activation function layer is followed by an optional pooling or down-sampling (also sub-sampling) layer, which reduces the spatial size of the input and thus the number of parameters in the network. A pooling layer takes each feature map output by the convolutional layer and down-samples it, i.e., it summarizes a region of neurons of the convolution layer. Two types of operations are mostly performed in this layer: average pooling or max pooling. A short numerical sketch of the convolution and pooling operations is given below.
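To make the convolution of equation (I.1) and the pooling operation just introduced (formalized as equation (I.2) below) concrete, the following minimal NumPy sketch, an illustrative toy rather than the implementation used later in this thesis, computes one feature map with a ReLU activation and then applies 2 × 2 max pooling; the input and kernel values are arbitrary.

```python
import numpy as np

def conv2d_single(x, k, b):
    """One term of equation (I.1): valid 2-D convolution (cross-correlation, as is
    conventional in CNN frameworks) of input map x with kernel k, plus bias b,
    followed by a ReLU activation f(z) = max(0, z)."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.empty((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            y[r, c] = np.sum(x[r:r + kh, c:c + kw] * k) + b
    return np.maximum(y, 0.0)

def max_pool2d(y, size=2):
    """Non-overlapping max pooling, i.e. equation (I.2) with down(.) = max."""
    h, w = y.shape[0] // size, y.shape[1] // size
    return y[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.random.rand(8, 8)            # toy 8x8 input map
k = np.random.randn(3, 3) * 0.1     # learnable 3x3 kernel (random here)
feature_map = conv2d_single(x, k, b=0.05)
pooled = max_pool2d(feature_map)
print(feature_map.shape, pooled.shape)   # (6, 6) (3, 3)
```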
In the case of the average pooling approach, the function usually sums over N×N patches of the feature maps from the previous layer and takes the average value. In the case of max pooling, on the other hand, the highest value of each N×N patch of the feature maps is selected. The pooling operation can be defined as in equation (I.2), where down(·) represents a sub-sampling function:

$x_j^l = \mathrm{down}\big(x_i^{l-1}\big)$   (I.2)

I.6.2.3 Fully Connected Layer
At the end, the stack of convolutional and pooling layers acts as the feature extraction stage, while the classification stage is composed of one or more fully connected layers followed by an activation function layer. The process of convolution and pooling continues until enough features are detected; the next step is to make a decision based on these detected features. In a classification problem, the task uses the detected features in the spatial domain to obtain the probability that these features represent each class, i.e., the class score. This is done by adding one or more fully connected layers at the end. In a fully connected layer, each neuron of the previous layer is connected to every neuron of the next layer, and every value contributes to predicting how strongly the input matches a particular class. Since a fully connected layer is connected to all features, it is prone to overfitting. Overfitting refers to the problem of a model that fits its training data so well that it fails to generalize to new, unseen data.

I.6.2.4 Activation Functions
The output of each convolutional layer is fed to an activation function layer. This layer consists of an activation function that takes the feature map produced by the convolutional layer and generates the activation map as its output. The activation function transforms the activation level of a neuron into an output signal. Among the many existing activation functions, some commonly used ones are as follows. The Rectified Linear Unit (ReLU) has gained importance in recent years and is currently the most popular activation function for deep neural networks; networks with ReLU train much faster than with other activation functions. ReLU simply computes the activation by thresholding the input at zero: a rectified linear unit outputs 0 if the input is less than 0, and the raw input otherwise. It is denoted as

$f(x) = \max(0, x)$   (I.3)

The sigmoid function maps its input to the interval (0, 1) and is defined as $\sigma(x) = 1/(1+e^{-x})$. Training such networks relies on backpropagation, which propagates the prediction errors from the output layer back towards the input layer and uses these errors to calculate the desired gradients. This makes clear the utility and computational efficiency of the backpropagation algorithm: all the derivatives can be calculated using a single "forward" and "backward" pass of the neural network; the corresponding equations are summarized in Appendix A. This computational efficiency is crucial, since the gradient with respect to all parameters of the network must be calculated at each step of gradient descent.

I.7 Related Research
Existing video coding complexity reduction works can generally be classified into two categories: heuristic and learning-based approaches. This section reviews the complexity reduction approaches in these two categories.

I.7.1 Heuristic Methods
In heuristic methods, several fast decision algorithms have been introduced, typically based on statistical properties and on temporal and spatial correlations; such hand-crafted criteria limit their applicability and make it difficult to handle content with varied characteristics and complex coding structures.
I.7.2 Learning Methods
The past few years have seen great success in applying machine learning tools to enhance video coding. In this vein, great efforts have been made to integrate machine learning tools in order to predict the CU partition and reduce HEVC complexity. Overall, the main target of video coding is to minimize the bitrate while maintaining the visual quality. There are three key requirements on video coding [START_REF] Ohm | Vision, applications and requirements for high efficiency video coding (hevc)[END_REF]: high compression ratio, low complexity and high visual quality. In this context, this thesis proposes to integrate such advanced techniques into the video coding standards, in order to achieve a compression efficiency considerably higher than that of older video compression technologies.

I.8 Conclusion
This chapter introduced the basic building blocks of a video compression system and gave a detailed description of the HEVC and VVC standards, which are the current state-of-the-art video coding standards. A comparison of the coding tools of HEVC and VVC was provided. The chapter then presented the video coding challenges. Moreover, recent advanced technologies, such as artificial intelligence, machine learning and deep learning, and their tools were surveyed. Finally, the related research on HEVC and VVC complexity reduction was reviewed. In the next chapter, in order to overcome the HEVC complexity, we propose to integrate machine learning solutions into the HEVC standard to predict the CU partition at inter-mode. That chapter provides more details about the three machine learning algorithms proposed in place of the traditional rate-distortion optimization search of the HEVC standard.

In HEVC, a parent CU is split into four sub-CUs when its rate-distortion (RD) cost exceeds the sum of the RD costs of its sub-CUs:

$RD_{cost}(CU) > \sum_{k=0}^{3} RD_{cost}(subCU_k)$   (II.1)

The CU partition can therefore be considered as a combination of binary classifiers $\{F_l\}_{l=1}^{3}$ at three decision levels $l \in \{1, 2, 3\}$ on whether to split a parent CU into sub-CUs. Within a CTU, the CUs are denoted CU, CU_i and CU_{i,j} at the successive depths, where $i, j \in \{0, 1, 2, 3\}$ index the sub-CUs. At each CU depth, we need to determine whether or not to split the current CU. The overall CU partition of a CTU is extremely complex because of the large number of possible pattern combinations. For example, for a 64 × 64 CU, if $F_1(CU) = 1$, it is split into four 32 × 32 CUs, i.e., $\{CU_i\}_{i=0}^{3}$. Since for each CU_i there exist $1 + 2^4 = 17$ splitting patterns of $\{CU_{i,j}\}_{j=0}^{3}$, the total number of splitting patterns for the CU is $1 + 17^4 = 83522$. There are too many types of CU partitions for the problem to be solved by a single multi-class classification in one step. Instead, a prediction is made at each decision level, yielding $\hat F_1(CU)$, $\{\hat F_2(CU_i)\}_{i=0}^{3}$ and $\{\hat F_3(CU_{i,j})\}_{i,j=0}^{3}$, which denote the predicted $F_1(CU)$, $\{F_2(CU_i)\}_{i=0}^{3}$ and $\{F_3(CU_{i,j})\}_{i,j=0}^{3}$, respectively.

II.3 Proposed CU Partition based on Machine Learning
II.3.1 CU Partition based on SVM
In machine learning theory, the SVM is a supervised learning tool that performs classification analysis [START_REF] Cortes | Support-vector networks[END_REF]. In particular, the video coding mode decision process can be considered as a classification problem; a toy illustration of this split/non-split classification with an off-the-shelf SVM is given below.
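As a toy illustration of this formulation (not the feature set, training data or implementation actually used in this work), the following scikit-learn sketch trains an RBF-kernel SVM to label synthetic CU feature vectors as split (1) or non-split (0):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-CU features (e.g. residual variance, RD-cost ratio,
# neighbouring-CU depths); the real features are those defined for the online SVM.
n = 2000
X = rng.normal(size=(n, 4))
# Hypothetical ground-truth rule: CUs with strong "texture/motion" features tend to split.
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF kernel as in equation (II.3); C and gamma would need tuning in practice.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("split/non-split accuracy:", clf.score(X_te, y_te))
```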
A hyperplane technique is used in the SVM to separate data, possibly after mapping it from the input space to a higher-dimensional space. If the data points are not linearly separable in the input space, the SVM can transform them into a higher-dimensional space by a nonlinear transformation; to separate the two classes of data points, the SVM maps the sample data into this feature space. The main goal of the SVM is thus to solve linear and nonlinear problems by finding an optimal hyperplane: the SVM classifier creates a hyperplane that maximizes the margin between the hyperplane and the support vectors [HCP + 18]. The CU split decision can be modeled as a binary classification problem with the classes split and non-split [START_REF] Bouaafia | Svm-based inter prediction mode decision for hevc[END_REF]. Here, we propose an online SVM as the machine learning technique, since it is robust and popular for binary classification and has significant computational advantages. The main idea is to find a hyperplane that separates the training samples of the different classes while maximizing the margin between these classes, in order to determine the CU splitting level. According to equation (II.2), the ideal weight vector w is a linear combination of support vectors; the support vectors are therefore the training points that determine the classifier while minimizing misclassification. Given a training set with N samples $\{x_i, y_i\}_{i=1}^{N}$, $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$, the hyperplane parameterized by the normal vector w that maximizes the margin can be found by solving the optimization problem

$\min_{w} \; \frac{\gamma}{2}\,\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n} \max\big(0,\, 1 - y_i\,(w \cdot x_i)\big)$   (II.2)

where $\gamma \ge 0$ is the smoothing parameter, defined by $\gamma = 1/(nC)$, with C the parameter that needs to be tuned during SVM training. Mathematically, SVMs handle non-linearly separable data by using a kernel function which maps the data to a different space where a linear hyperplane can separate the classes. In this work, the Gaussian Radial Basis Function (RBF) is applied as the kernel function, defined as

$K(x_i, x_j) = \exp\big(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\big)$   (II.3)

Our approach therefore consists in determining whether a 2N × 2N block has to be "Split" or "Not-Split". The input feature vector is denoted by $x_i$ and $y_i$ is the output label indicating whether the CU is split or not. The discriminant function is

$f(x) = w^{T}\phi(x) + b$   (II.4)

where w denotes the normal vector, $\phi(x)$ maps the feature vector x, and b is the bias. The current CU splitting decision has to be determined at each CU depth; therefore, an SVM classifier is used at each CU depth to obtain the best combination of CU, PU and TU by evaluating the RD cost of all possible modes.

The second proposed approach relies on a Deep CNN. Its preprocessing layer takes as input the residual CUs of CU, CU_i or CU_{i,j}, corresponding to the three levels. The residual block is subtracted by its mean intensity value to reduce the variation of the input CTU samples. Specifically, at the first level of the CU partition, the mean value of CU is removed in accordance with the output $\hat F_1(CU)$. At the second level, the four CUs $\{CU_i\}_{i=0}^{3}$ are subtracted by their corresponding mean values, matching the 2 × 2 output $\{\hat F_2(CU_i)\}_{i=0}^{3}$. At the third level, the $\{CU_{i,j}\}_{i,j=0}^{3}$ remove the mean values in each CU for the 4 × 4 output $\{\hat F_3(CU_{i,j})\}_{i,j=0}^{3}$; a small sketch of this mean-removal step is given below. After the preprocessing layer, three convolutional layers are used to extract features from the data at all levels.
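For illustration only (assuming a 64 × 64 luma residual CTU as input; this is not the exact preprocessing code of the thesis), the mean removal at the three levels can be written as:

```python
import numpy as np

def remove_means(ctu):
    """Subtract per-CU mean intensities at the three partition levels:
    level 1 = the whole 64x64 CU, level 2 = four 32x32 CUs, level 3 = sixteen 16x16 CUs."""
    outputs = []
    for blocks_per_side in (1, 2, 4):
        size = ctu.shape[0] // blocks_per_side
        level = ctu.astype(np.float32).copy()
        for r in range(blocks_per_side):
            for c in range(blocks_per_side):
                blk = level[r * size:(r + 1) * size, c * size:(c + 1) * size]
                blk -= blk.mean()              # zero-mean each CU at this level
        outputs.append(level)
    return outputs

ctu = np.random.randint(0, 256, (64, 64))
level1, level2, level3 = remove_means(ctu)
print([lvl.shape for lvl in (level1, level2, level3)])   # three 64x64 zero-mean inputs
```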
The convolution layer is a mathematical operation that takes two inputs, the CU samples and the filters. In each layer, the convolution kernels of all three levels have the same size. In our work, 16 kernels are used at the first convolutional layer to extract the low-level feature maps for the CU partition. At the second and third layers, the feature maps are sequentially convolved twice with 2×2 kernels (24 filters for the second layer and 32 filters for the third layer) to generate higher-level features. The strides of all the above convolutions are equal to the widths of the corresponding kernels, yielding non-overlapping convolutions. This design of the convolutional layers is in accordance with all possible non-overlapping CUs of different sizes in a CTU partition. At the end of the convolutions, the final feature maps are concatenated together through the concatenation layer and then flattened into a vector. In the following fully connected layers, the features generated from the whole CTU are all considered in order to predict the CU partition at each level: $\hat F_1(CU)$ as a 1 × 1 output, $\{\hat F_2(CU_i)\}_{i=0}^{3}$ as a 2 × 2 output and $\{\hat F_3(CU_{i,j})\}_{i,j=0}^{3}$ as a 4 × 4 output for the three levels, respectively. In the Deep CNN structure, early termination may allow the calculation of the fully connected layers at levels 2 and 3 to be skipped, thus saving computation time. Specifically, if the CU is decided not to be split at level 1, the calculation of $\{\hat F_2(CU_i)\}_{i=0}^{3}$ is terminated early at level 2. If the $\{CU_i\}_{i=0}^{3}$ are all not split, the $\{\hat F_3(CU_{i,j})\}_{i,j=0}^{3}$ at level 3 do not need to be computed, again by early termination. The ReLU function is used to activate all convolutional layers and hidden fully connected layers, since this function has better convergence speed [START_REF] Glorot | Deep sparse rectifier neural networks[END_REF]. Moreover, since all the split/non-split labels are binary, the output layers of the three levels are activated with the sigmoid function.

II.3.3 Training Phase
This section presents the training process of the proposed Deep CNN. For learning our Deep CNN model, the cross entropy is applied as the loss function, defined in equations (II.5) and (II.6):

$L = \frac{1}{N}\sum_{n=1}^{N} L_n$   (II.5)

$L_n = Y\big(F_1^n(CU), \hat F_1^n(CU)\big) + \sum_{i\in\{0,1,2,3\}} Y\big(F_2^n(CU_i), \hat F_2^n(CU_i)\big) + \sum_{i,j\in\{0,1,2,3\}} Y\big(F_3^n(CU_{i,j}), \hat F_3^n(CU_{i,j})\big)$   (II.6)

where Y denotes the cross entropy between the ground-truth labels and the predicted labels, the labels predicted by our Deep CNN for the n-th sample being $\hat F_1^n(CU)$, $\{\hat F_2^n(CU_i)\}_{i=0}^{3}$ and $\{\hat F_3^n(CU_{i,j})\}_{i,j=0}^{3}$. We use the TensorFlow-GPU deep learning framework to train the proposed Deep CNN on an NVIDIA GeForce GTX 480 GPU, which dramatically improves the training speed compared with the CPU. We adopt batch-mode learning with a batch size of 64, and the momentum of the stochastic gradient descent optimizer is set to 0.9. To train our Deep CNN, the base learning rate is set to 0.01 and decays exponentially every 1,000 iterations. The total number of iterations was 2,000,000. Finally, the trained model can be used to predict the CU partition at HEVC inter-mode; a training sketch reflecting this setup is given below.
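The following Keras sketch mirrors this training setup (batch size 64, SGD with momentum 0.9, exponentially decaying learning rate, binary cross-entropy over the three output levels). The layer sizes and the decay rate are indicative only; the actual architecture, feature preprocessing and training database are those defined in Section II.3 and Table II.1.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(64, 64, 1))                        # preprocessed 64x64 residual CTU
x = layers.Conv2D(16, 4, strides=4, activation="relu")(inp)  # non-overlapping kernels
x = layers.Conv2D(24, 2, strides=2, activation="relu")(x)
x = layers.Conv2D(32, 2, strides=2, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)

# Three output levels: 1 label for the 64x64 CU, 4 for the 32x32 CUs, 16 for the 16x16 CUs,
# all sigmoid-activated because the split/non-split labels are binary.
out1 = layers.Dense(1,  activation="sigmoid", name="level1")(x)
out2 = layers.Dense(4,  activation="sigmoid", name="level2")(x)
out3 = layers.Dense(16, activation="sigmoid", name="level3")(x)

model = Model(inp, [out1, out2, out3])

lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.96)  # decay_rate assumed
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9),
              loss="binary_crossentropy")                    # cross-entropy of (II.5)-(II.6)

# model.fit(x_train, [y1, y2, y3], batch_size=64, ...)       # trained on the CPIH database
```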
II.3.4 CU Partition based on CNN-LSTM
At level l, $FC_{1-l}(t)$ represents the Deep CNN input features at frame t, and $FC_l(t-1)$ is the output feature vector of the LSTM model at frame t-1. The three gates of the LSTM cell are given by the following equations:

$i_l(t) = \sigma\big(W_i \cdot [FC_{1-l}(t), FC_l(t-1)] + b_i\big)$
$o_l(t) = \sigma\big(W_o \cdot [FC_{1-l}(t), FC_l(t-1)] + b_o\big)$
$f_l(t) = \sigma\big(W_f \cdot [FC_{1-l}(t), FC_l(t-1)] + b_f\big)$   (II.7)

where $\sigma(\cdot)$ denotes the sigmoid function. The cell state is updated as

$c_l(t) = i_l(t) \odot \tanh\big(W_c\,[FC_{1-l}(t), FC_l(t-1)] + b_c\big) + f_l(t) \odot c_l(t-1)$   (II.8)

where $\odot$ signifies element-wise multiplication. The output of the LSTM cell, $FC_l(t)$, is then determined as

$FC_l(t) = o_l(t) \odot c_l(t)$   (II.9)

II.3.5 Training Phase
In the training phase, the LSTM model was trained on the training set of the inter database given in Table II.1, minimizing the loss between the ground truth and the prediction of the CTU partition:

$L_n(t) = Y\big(F_1^n(CU,t), \hat F_1^n(CU,t)\big) + \sum_{i\in\{0,1,2,3\}} Y\big(F_2^n(CU_i,t), \hat F_2^n(CU_i,t)\big) + \sum_{i,j\in\{0,1,2,3\}} Y\big(F_3^n(CU_{i,j},t), \hat F_3^n(CU_{i,j},t)\big)$   (II.10)

Over the N training samples and T frames, the LSTM network is trained by optimizing the cost function defined in equation (II.11):

$L = \frac{1}{NT}\sum_{n=1}^{N}\sum_{t=1}^{T} L_n(t)$   (II.11)

The training parameters were defined as follows: the batch size, learning rate and LSTM length (T) were set to 64, 0.001 and 20, respectively. Finally, the trained model was saved for later use in the framework, where it predicts the inter-coding CU partition. At test time, the LSTM model works in stages: when the prediction of the CU partition at frame t-1 is complete, the state and the output at frame t are computed.

II.4.2 Performance Metrics
The encoding time saving is measured as

$\Delta T = \frac{T_{Proposed} - T_{Original}}{T_{Original}} \times 100\ (\%)$   (II.12)

where $T_{Proposed}$ and $T_{Original}$ are the coding times of the proposed approach and of the original HEVC algorithm, respectively. For further performance evaluation of the proposed scheme, Table II.4 compares the coding performance of the proposed CNN-LSTM framework with that of the Deep CNN [START_REF] Bouaafia | Fast cu partition-based machine learning approach for reducing hevc complexity[END_REF]. The proposed CNN-LSTM scheme is better than the Deep CNN in terms of both computational complexity and RD performance: its time saving is 58.60% on average, exceeding the 53.99% obtained with the Deep CNN alone. Overall, time savings of 75% and of 58.60% on average are achieved, with a BD-BR increase of 1.78% and a small BD-PSNR reduction of -0.053 dB.
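The BD-BR and BD-PSNR figures quoted in these comparisons follow the standard Bjøntegaard delta measurements. As a reference for the reader, a common way of computing BD-BR from four (bitrate, PSNR) points per codec is sketched below; this is an illustrative implementation, not necessarily the exact script used to produce the reported numbers, and the sample points are hypothetical.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bitrate (%): average bitrate difference between two RD
    curves over their common PSNR range, using cubic fits of log-rate versus PSNR."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)          # log-rate as cubic polynomial of PSNR
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))      # common PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)          # mean log-rate difference
    return (np.exp(avg_diff) - 1.0) * 100.0         # positive value = bitrate overhead

anchor_rate = [9000, 5000, 2800, 1500]; anchor_psnr = [40.1, 38.0, 35.9, 33.7]
test_rate   = [9200, 5100, 2850, 1520]; test_psnr   = [40.2, 38.0, 35.8, 33.6]
print(f"BD-BR = {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.2f} %")
```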
In addition, the work proposed in [XLW + 18] achieves 67.23% encoding time with 1.55% BD-BR on average. With regard to the ultra-high definition sequences like "PeopleOnStreet", the computational complexity reduction of our proposed approach is slightly lower, since these sequences have high motion and camera movement, which are encoded in a small CU partition. Hence, the proposed scheme performs better in terms of both RD performance and complexity reduction of HEVC as compared to the previous works. Overall, all approaches are better adapted to low-motion video content. On the other hand, on average, 43% time saving is reduced by [LZZ + 19] with an increase in BD-BR of 2.56% and a decrease in BD-PSNR of -0.099dB. The proposed method presented in [TTAA19] allows 54.57% encoding time while the BD-BR increases by 2.97% and the BD-PSNR degradation reaches -0.107dB. Regarding the work presented in [XLW + 18], the proposed method surpasses ours in terms of BD-BR and BD-PSNR, while our proposed approach allows a significant coding time saving of 58.60% compared to this work. When comparing our work to the state of the art schemes ], and [XLW + 18] describing earlier, we can conclude that the proposed CNN-LSTM-based learning method proves the best coding efficiency of HEVC at inter-mode in order to predict the CU partition. [LZZ + 19], [ TTAA19 CHAPTER II. MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY II.5 Conclusion In this chapter, we proposed a fast CU partition based on machine learning approaches to reduce the HEVC complexity of inter-mode. An online SVM-based fast CU partition method was proposed to reduce the encoding complexity of HEVC. Then, to predict the CU partition of HEVC, a Deep CNN was proposed, which reduces the HEVC complexity at inter-mode. Unfortunately, these two machine learning algorithms do not explore the correlation of the CU partition across neighboring frames. Therefore, a deep learning approach was proposed to predict the CU partition at inter-mode, which combines the CNN and the LSTM structures. Simulations results show and prove the efficiency of the proposed framework in saving a significant encoding complexity, compared to other previous approaches-based machine learning tools. In the next chapter, we provide a deep learning technique to improve the visual quality of reconstructed video for the VVC standard in order to meet the demands required by the users. we evaluate the proposed method in section III.5. Finally, section III.6 refers to the conclusion of this chapter. III.2 Background The growing multimedia portfolio, including Big Data processing, Cloud Computing and the IoT [MBB + 20], has a direct impact on our lifestyle. M-IoT is considered as a major network technology enabling the interconnection and interaction between humans, health-centers, industries, and objects like cameras, transport, and sensors [START_REF] Cao | A survey of emerging m2m systems: Context, task, and objective[END_REF]. In addition, M-IoT systems combine the networking technologies for computer vision, image processing, and connectivity. Yet, they can be used in driving assistance, surveil- QoE BR = a × log(BR) + b (III.1) where a and b denote coefficients determined during the experiment. However, this parameter, just like the PSNR metric, will also be used for the proposed WSE-DCNNbased in-loop filtering to evaluate video quality. 
III.4 Proposed Method
This section introduces the proposed method and describes the WSE-DCNN architecture and how it is integrated into the VVC standard to replace the traditional in-loop filtering technique in order to improve the video quality. At the CTU level, the filtering flag is enabled when the filtered CTU has a lower RD cost than the unfiltered one; otherwise the flag is disabled. After all the CTUs of one frame are determined, the frame-level RD costs before and after filtering, denoted J1 and J2 respectively, are calculated using equation (III.2). If J1 > J2, the frame-level flag is enabled; the corresponding frame-level flag is then encoded in the slice header and the CTU-level control flags are signaled in the corresponding CTU syntax. Otherwise, the frame-level flag is disabled and the CTU-level flags are not encoded for transmission.

III.4.2 WSE-DCNN Architecture
The concept of the proposed architecture is illustrated in Figure III.4. The WSE unit consists of the following phases, as depicted in Algorithm 1, given a feature map X of shape H × W × C, where C is the number of channels:
• A wide 3 × 3 convolution followed by ReLU, and a convolution layer with kernel size 1 × 1. In the notation of Algorithm 1, $Y_1 = \mathrm{ReLU}(W_1 X + b_1)$ is the output of the wide convolution and $Y_2 = W_2 Y_1 + b_2$ is the output of the second convolution layer.
• Each channel obtains a single value through the squeeze operation, using Global Average Pooling (GAP) to produce $Y_3(k)$, as shown in Algorithm 1.
• The excitation operation is described by two fully connected layers followed by ReLU and sigmoid (σ) activation functions, respectively. As shown in Algorithm 1, $Y_4$ is the output of the first fully connected layer followed by ReLU, which is then refined by the second fully connected layer and the sigmoid to produce the gating ratio r.
• According to the WSE function, each channel of $Y_2$ is multiplied by the gating ratio r, as defined in Algorithm 1.
• Finally, when the number of input channels equals the number of output channels C, a skip connection is added directly from the input to the output to learn the residue; otherwise, no skip connection is used.

The network parameters θ are learned by minimizing the mean squared error between the filtered output and the original frame:

$L(\theta) = \frac{1}{N}\sum_{i=1}^{N} \big\|F(Y_i, \theta) - X_i\big\|_2^2$   (III.5)

where $Y_i$ denotes a reconstructed (unfiltered) input, $X_i$ the corresponding original frame and $F(\cdot, \theta)$ the output of the WSE-DCNN.

III.5.4 Comparative Study
We also compared the proposed approach with other filtering models based on CNN networks. Deep learning algorithms such as CNNs reach significantly higher accuracy than traditional algorithms, but they require huge amounts of computational resources and memory accesses because of the large number of parameters involved in the layer operations, which represents a computational challenge. Therefore, many hardware accelerators, such as Field Programmable Gate Arrays (FPGAs), and especially the new FPGA-SoC technology, are considered the most promising platforms for accelerating CNNs, thanks to their high performance, energy efficiency and reconfigurability. This chapter provides a hardware-software architecture based on an accelerated CNN model for a video compression application. We first accelerate the CNN layers to build Intellectual Property (IP) cores using Vivado High Level Synthesis (HLS). Then, we create a hardware-software architecture based on the designed CNN IP cores, integrated in the Programmable Logic zone (PL), which is connected to the Xilinx Processing System (PS) that manages all processing tasks on the FPGA-SoC board. The remainder of this chapter is organized as follows. Section IV.2 provides the background, in which the preliminary study is included. Section IV.3 discusses the proposed CNN accelerator on FPGA-SoC.
Section IV.4 describes the experimental results and the discussions. Section IV.5 concludes this chapter. IV.2 Background Recent AI methods, such as deep learning, have enjoyed considerable success in various machine learning tasks because of their powerful learning ability [START_REF] Kang | Energy efficiency of machine learning in embedded systems using neuromorphic hardware[END_REF]. They have been broadly applied in many signal processing areas including computer vision, image processing, data mining. However, computationally intensive deep learning algorithms had to be run on embedded devices [START_REF] Hong | Design of power-efficient training accelerator for convolution neural networks[END_REF], such as FPGAs. Especially, the new technology FPGA-SoC are considered as the most promising platforms for accelerating AI methods, due to their real-time performance, high energy efficiency and flexible designs. CHAPTER IV. DEEP CNN CO-DESIGN FOR HEVC CU PARTITION PREDICTION ON FPGA-SOC (AXI) connections. IV.2.1.1 Processing System (PS) All Zynq devices have the same basic architecture, and all of them contain, as the basis of the processing system, a dual-core ARM Cortex-A9 processor [START_REF] Crockett | The Zynq Book: Embedded Processing with the Arm Cortex-A9 on the Xilinx Zynq-7000 All Programmable Soc[END_REF]. eral, the advantage of soft processors is that the number and precise implementation of processor instances are flexible. On the other hand, hard processors can achieve higher performance, as is the case with Zynq's ARM processor. It is important to note that the Zynq processing system encompasses not only the ARM processor, but a set of associated processing resources forming an application processing unit, as well as other peripheral interfaces, cache memory, memory interfaces, IV.2.1.2 Programmable Logic (PL) PL is the second principal part of the Zynq architecture [START_REF] Crockett | The Zynq Book: Embedded Processing with the Arm Cortex-A9 on the Xilinx Zynq-7000 All Programmable Soc[END_REF], which is based on the Artix®-7 and Kintex®-7 FPGA fabric. The PL part of the Zynq device is illustrated in IV.2.2 Direct Memory Access (AXI DMA) AXI DMA transfers data between memory and AXI4-Stream-type target peripherals IV.2.3 FPGA-SoC: PYNQ-Z1 The PYNQ-Z1 FPGA is the chosen hardware platform, which is based on Xilinx ZYNQ SoC technology [START_REF]PYNQ Xilinx. Python productivity for zynq[END_REF]. It provides a Python environment, to make it easier for designers to exploit the PL and PS of the FPGA board. Xilinx offers Python packages and associated libraries to facilitate the interaction with hardware modules based on Overlays. Overlays, or hardware libraries, are designed to be programmable and reusable FPGA designs to extend the user application from The PS into the PL of the ZYNQ. An overlay is a PL design class developed by hardware designers. PYNQ overlays can be customized the hardware platform for a certain application. IV.4.2 Hardware Cost of the Proposed Co-Design This section presents the percentage of hardware resources consumed by the proposed co-design on the PYNQ-Z1 platform. After the implementation phase, the hardware resource occupancy of our proposed design is shown in Table IV.4. FFs, and more than 25% BRAMs. This design achieves an on-chip power consumption of 15.8 W under a 120 MHz working frequency of the FPGA. 
In reference [START_REF] Zhang | Fpga implementation for cnn-based optical remote sensing object detection[END_REF], an efficient hardware implementation method for optical remote sensing object detection was proposed. However, this design has extremely high requirements in terms of resource utilization, consuming more than 70% of the BRAMs; it achieves 5.96 W of on-chip power consumption at a clock frequency of 200 MHz. In addition, the authors of [LZF + 19] proposed a CNN accelerator that accelerates both the standard convolution and the depthwise separable convolution. This method was implemented on the Xilinx ZYNQ 7100 hardware platform and achieves an on-chip power consumption of 3.99 W at a clock frequency of 100 MHz. Meanwhile, it consumed the highest amount of hardware resources, using almost all DSPs, 50% of the LUTs and over 40% of the BRAMs. From this comparative study, we observe that our proposed design achieves high performance with a low power consumption of around 1.69 W while occupying few FPGA hardware resources. Therefore, our design achieves a satisfactory balance between resource cost and power consumption and is suitable for deployment on embedded devices with a limited resource budget. Future work will explore accelerating the overall Deep CNN architecture and implementing it in real time on the PYNQ-Z1 using a customized overlay.

General Conclusion and Perspectives
The expansion of the Internet, coupled with the rapid introduction of UHD, HDR and 360° video contents into daily life, has caused an explosion of video traffic. A recent Cisco study [START_REF] Cicero | Cisco predicts more ip traffic in the next five years than in the history of the internet[END_REF] predicted that video traffic will increase from 75% of the global IP traffic in 2017 to 82% in 2022. This increasing demand for video content brings new challenges to compression, especially to enhance the coding efficiency and enable a high QoE for video services. Driven by these requirements, various technologies have appeared as potential solutions for video coding deployment. In this thesis, the main concern is reducing the computational complexity and improving the video quality of the HEVC and VVC standards using artificial intelligence. We have focused on these video coding standards and, in particular, tackled the problems of complexity reduction and video quality using machine learning solutions. Four main contributions have been integrated in this work. Firstly, we have reviewed the basic building blocks of a video compression system.

where in the last line we have used the fact that $\partial z_j^l / \partial b_j^l = 1$. This is the second of the four backpropagation equations. We now derive the final two backpropagation equations using the chain rule, since the error depends on the neurons in layer l only through the activations of layer l + 1.
II.3 Performances Comparison between Deep CNN and Online SVM
II.4 Performances Comparison between Deep CNN and CNN-LSTM
II.5 Comparative Study
III.1 Key Features of BVI-DVC Video Training Database [MZB20]
III.2 Performance Evaluation of the Proposed Model under RA Configuration
III.3 Coding Performance Comparison with other Approaches
IV.1 CNN_1 Model Summary
IV.2 Hardware Resource Occupation of the CONV-IP
IV.3 Hardware Resource Occupation of the FC-IP
IV.4 Hardware Cost
IV.5 Comparative Study

General Introduction

Chapter II: Machine Learning Approach-based Fast CU Partition for Reducing HEVC Complexity
The second chapter proposes a fast Coding Unit (CU) partition scheme based on machine learning approaches to reduce the inter-mode complexity of HEVC. An online Support Vector Machine (SVM)-based fast CU partition method is first proposed to reduce the encoding complexity of HEVC. A Deep Convolutional Neural Network (CNN) is then proposed to predict the CU partition of HEVC, which further reduces the complexity at inter-mode. However, these two machine learning algorithms do not exploit the correlation of the CU partition across neighboring frames. A Long Short-Term Memory (LSTM) model is therefore developed to learn the temporal dependency of the inter-mode CU partition, leading to a deep learning approach that combines the CNN and LSTM structures to predict the CU partition at inter-mode. Finally, the obtained results are discussed in order to evaluate the performance of the proposed algorithms.

Chapter III: Deep Learning based Video Quality Enhancement for the New Versatile Video Coding
The third chapter proposes a deep learning algorithm integrated into the VVC standard to enhance visual video quality while improving the user's Quality of Experience (QoE). The proposed Wide-activated Squeeze-and-Excitation Deep Convolutional Neural Network (WSE-DCNN) model replaces the in-loop filtering of VVC in order to alleviate coding artifacts such as ringing, blocking, and blurring. The proposed VVC filtering technique is used in a Multimedia-Internet of Things (M-IoT) smart-city scenario to help the centralized cloud meet the video quality required by users. Finally, all simulation results are interpreted and compared to related existing methods.

Chapter IV: Deep CNN Co-Design for HEVC CU Partition Prediction on FPGA-SoC
The last chapter proposes a deep CNN-based hardware-software design for HEVC CU partition prediction on FPGA-SoC. This work aims to accelerate CNNs, which are computationally intensive. A hardware Intellectual Property (IP) core is created for each CNN layer using the Vivado HLS tool, and a hardware-software architecture is then designed by importing the hardware IP cores on the PYNQ-Z1 board.

Summary
I.1 Introduction
I.2 Video Compression History
I.3 HEVC Standard
I.3.1 Sampled Representation of Pictures
I.3.2 Block Partitioning in the HEVC Standard . . . . . . . . . . . . I.3.3 Intra Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . I.3.4 Inter Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . I.3.5 Transform and Quantization . . . . . . . . . . . . . . . . . . . I.3.6 Entropy Coding . . . . . . . . . . . . . . . . . . . . . . . . . . I.3.7 In-Loop Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . I.4 VVC Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . I.4.1 Block Partitioning in the VVC Standard . . . . . . . . . . . . . I.4.2 Intra Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . I.4.3 Inter Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . I.4.4 Transform and Quantization . . . . . . . . . . . . . . . . . . . I.4.5 Entropy Coding . . . . . . . . . . . . . . . . . . . . . . . . . . CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS I.4.6 In-Loop Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 I.5 Video Coding Challenges . . . . . . . . . . . . . . . . . . . . . . 15 I.6 Artificial Intelligence: New Advancements and Innovations . 16 I.6.1 Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . 16 I.6.2 Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 I.7 Related Research . . . . . . . . . . . . . . . . . . . . . . . . . . 23 I.7.1 Heuristic Methods . . . . . . . . . . . . . . . . . . . . . . . . . 23 I.7.2 Learning Methods . . . . . . . . . . . . . . . . . . . . . . . . . 24 I.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Historically, two major video coding standardization organizations have coexisted: Moving Picture Experts Group (MPEG), which belongs to the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC), and the Visual Coding Experts Group (VCEG), which belongs to the International Telecommunication Union, Telecommunication Standardization Sector (ITUT). The history of the different H.26x and MPEG-x families established by ITUT and ISO/IEC is shown in Figure I.1. Figure Figure. I.1: History of Video Coding Standardization over H.264/AVC in order to achieve better compression efficiency. The block diagram of a hybrid video coding layer conforming with the HEVC standard is illustrated in Figure I.2. Figure Figure. I.2: Block Diagram of the Hybrid Video Coding Layer for HEVC Figure. I. 4 : 4 Figure. I.4: Intra Prediction Modes in the HEVC Standard Figure. I. 5 : 5 Figure. I.5: Example of Uni and Bi-directional Inter Prediction Figure I. 3 , 3 TUs of size 4×4, 8×8, 16×16 and 32×32 are supported. The 2D transforms based on Discrete Cosine Transform (DCT) are designed for them and special efforts are particularly spent on selecting the value of the transfrom matrix for retaining the property of easy-to-implementation [SOHW12]. In addition, when transforming for 4×4 block size in intra-frame prediction mode, another integer transformation based on Discrete Sine Transform (DST) is available for use. The resulting transform coefficients are then quantized, before being sent to the construction of the coded bitstream. Quantization is a compression technique which converts a range of values into a single quantum value. The maximum Quantization Parameter of HEVC standard is set to 51. Figure I. 6 6 Figure I.6 shows one CTU divided into multiple CUs with a QuadTree plus Multi- 65. 
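To make the quantization stage described above more concrete, the sketch below maps a QP value to its approximate quantization step size (the step roughly doubles every six QP values, i.e. Qstep ≈ 2^((QP-4)/6)) and applies a dead-zone uniform quantizer to a block of transform coefficients. This is a simplified floating-point illustration rather than the integer-arithmetic quantizer of the HM reference software; the dead-zone offset and the random test block are assumptions made only for the example.

```python
import numpy as np

def q_step(qp: int) -> float:
    """Approximate quantization step size; it doubles every 6 QP values."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs: np.ndarray, qp: int, dead_zone: float = 1.0 / 6.0) -> np.ndarray:
    """Dead-zone uniform scalar quantization of transform coefficients."""
    step = q_step(qp)
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step + dead_zone)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Reconstruct coefficient values from the quantized levels."""
    return levels * q_step(qp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(scale=40.0, size=(4, 4))   # stand-in for 4x4 DCT/DST coefficients
    for qp in (22, 27, 32, 37):
        rec = dequantize(quantize(block, qp), qp)
        mse = float(np.mean((block - rec) ** 2))
        print(f"QP={qp:2d}  Qstep={q_step(qp):6.2f}  reconstruction MSE={mse:8.2f}")
```

As expected, the reconstruction error grows with QP, which is the distortion side of the rate-distortion trade-off controlled by the quantization parameter.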
The new directional modes, not used in HEVC, are depicted by red dotted arrows, as mentioned in Figure I.7, whereas the planar and DC modes are unchanged for both video encoder. These denser directional intra prediction modes apply for all block sizes andCHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDSfor both luma and chroma intra predictions. In HEVC, 33 angular prediction directions are defined from 45°to 135°in a clockwise direction. In VVC, the angular precision is basically doubled to produce 65 angles within that same range, and another 28 "wideangle" prediction modes beyond this angular range can be used for non-square blocks, as illustrated in Figure I.8. Figure. I. 7 : 7 Figure. I.7: Intra Directional Modes in VVC coding blocks. The newly introduced transformation matrices are DST-VII and DCT-VIII. The change of the Quantization stage is the increase in the maximum Quantization Parameter (QP) from 51 to 63. Figure. I.9: Example of HEVC Time Profile The hyperplane, where possible, classifies or separates the data correctly while being as far as possible from all observations, based on the training set. The principle is therefore to find a classifier or a discrimination function whose generalization capacity (forecast quality) is acceptable for the specific application. Therefore, the purpose of SVM is the reduction of discrimination problem to the linear problem of finding an optimal hyperplane [HCP+ 18]. In Figure I.10, the principle of the SVM algorithm has been shown. Finding the optimal hyperplane to differentiate classes is the major functionality of SVM techniques. The Figure I.10 (a) presents two classes consisting of circles and stars which need to be separated. SVM is a frontier which best segregates the two classes (hyperplane). Now the important question is how can one identify the right hyperplane?. The response is in the Figure I.10 (b) which maximizes the distance between the nearest data point (either class) and hyperplane will help us to decide the right hyperplane. This distance is defined as margin. So, the margin of the hyperplane C is the highest as compared to A and B. Hence, the hyperplane C is the optimal hyperplane that can classify data set. Figure Figure. I.10: Example of SVM Classifier [START_REF] Lecun | Gradientbased learning applied to document recognition[END_REF] applied a gradient-based learning algorithm to CNNs and obtained successful results for the handwritten digit classification problem. After that, researchers further improved CNNs and reported state-of-the-art results in many recognition tasks. CNNs have several advantages over Deep Neural Networks (DNNs), including being more like the human visual processing system, being highly optimized in the structure for processing 2D and 3D images, and being effective at learning and extracting abstractions of 2D features. Figure I. 11 11 Figure I.11 introduces the overall architecture of CNNs consisting of two main parts:Feature extraction and classification. In the feature extraction layers, each layer of the network receives the output from its immediate previous layer as its input and passes its output as the input to the next layer. In the classification part, the feature maps of CHAPTER I. 
VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDSwell on training data that it negatively impacts the performance of the model on new data.In order to overcome the problem of overfitting, a dropout layer can be introduced in the model in which some neurons along with their connections are randomly dropped from the network during training. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights. Dropout notably reduces overfitting and improves the generalization of the model. 4. It squashes the input into the range [0, 1]. σ (x) = 1 1 + e -x (I.4) In addition, the hyperbolic tangent function (tanh) is similar to sigmoid function but its output lies in the range [-1, 1]. The advantage of tanh over sigmoid is that the negative inputs will be mapped strongly negative and the zero inputs will be mapped near zero. Moreover, softmax function is often used in the output layer of a neural network for classification. It is a more generalized logistic activation function which is CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS used for multiclass classification, which is defined as. σ (x j ) = e x j n k=1 e x k (I.5) I.6.2.5 Backpropagation Algorithm For the training phase, the common algorithm used is the backpropagation [RZ85]. The training procedure requires us to be able to calculate the derivative of the cost function with respect to all the parameters of the neural network (the weights and biases of all the neurons in the input, hidden, and visible layers). The backpropagation algorithm is a clever procedure that exploits the layered structure of neural networks to more efficiently compute gradients [MBW + 19]. This algorithm consists of a forward pass from the bottom layer to the top layer where one calculates the weighted inputs and activations of all the neurons. One then backpropagates the error starting with the top layer down to [CMMC19, WLM + 16, XLWM13, CK13, FSK + 20, PK19]. To reduce the HEVC computational complexity, authors in [CMMC19] introduced a look-ahead stage-based fast partitioning and CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS mode decision algorithm. Wang et al. in [WLM + 16] proposed a threshold-based splitting decision scheme with respect to the RD cost of each CU. It reduces the number of available intra candidates, adaptive reference frame selection and early termination of coding unit splitting. In [XLWM13], authors proposed a fast algorithm to split CU based on pyramid motion divergence at inter prediction. In addition, a fast early CU-splitting and pruning method with low complexity and full RD cost was developed by Cho et al.in[START_REF] Cho | Fast cu splitting and pruning for suboptimal cu partitioning in hevc intra coding[END_REF]. In a similar way, authors in [FSK+ 20] proposed a fast QTMT partition algorithm based on variance and gradient to reduce the computational complexity brought in by the novel MT partitions in VVC. Reference[START_REF] Park | Context-based ternary tree decision method in versatile video coding for fast intra coding[END_REF] proposes a context-based ternary trees (TT) decision (C-TTD) method to significantly reduce TT computational complexity in VVC intra-coding. These methods are based on the statistics on the RD cost scheme for VVC intra encoders. 
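As a rough illustration of this family of heuristics, the sketch below decides whether a CU should be split, kept unsplit, or handed to the full RDO search, from a simple texture statistic and the RD cost of the unsplit CU. The threshold values are placeholders invented for the example and are not taken from any of the cited works.

```python
import numpy as np

def early_split_decision(block: np.ndarray,
                         rd_cost_no_split: float,
                         var_low: float = 20.0,
                         var_high: float = 800.0,
                         rd_threshold: float = 5000.0) -> str:
    """Return 'no_split', 'split' or 'full_rdo' for one CU (illustrative heuristic)."""
    variance = float(np.var(block))
    if variance < var_low and rd_cost_no_split < rd_threshold:
        return "no_split"      # smooth, cheap block: terminate the recursion early
    if variance > var_high:
        return "split"         # highly textured block: skip evaluating the large CU
    return "full_rdo"          # ambiguous case: fall back to the exhaustive RDO search

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.full((32, 32), 128.0)                               # homogeneous area
    textured = rng.integers(0, 255, size=(32, 32)).astype(float)  # noisy area
    print(early_split_decision(flat, rd_cost_no_split=1200.0))      # -> no_split
    print(early_split_decision(textured, rd_cost_no_split=9000.0))  # -> split
```

Such checks cost far less than evaluating every candidate partition, which is where the complexity saving of heuristic methods comes from.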
Similarly, reference [LXT + 21] proposes a deep learning approach to predict the QTMT-based CU partition, for drastically accelerating the encoding process of intra-mode VVC. A fast CU partition decision algorithm based on the improved Directed Acyclic Graph Support Vector Machine model to reduce the complexity of CU partition [ZWH + 21]. In addition, other components of HEVC and VVC, such as in-loop filtering, are simplified to reduce the encoding complexity. Figure. II.1: CU Partition Structure in HEVC Figure. II.2: Flowchart of the Proposed Algorithm Figure. II. 3 : 3 Figure. II.3: Online Training Mode Figure. II. 4 : 4 Figure. II.4: Deep CNN Architecture Figure II. 5 . 5 Figure II.5. We train our model in a supervised learning manner, in which the Deep CNN has been learned based on labeled data. In this context, we create the database for training the proposed model, which satisfy highly performances (high accuracy and low loss). Afterwards, we establish a large-scale database for CU partition of the inter-mode HEVC (CPIH), in order to increase the prediction accuracy. However, to construct our CPIH database, we selected 114 raw video sequences with various resolutions from 352 × 240 to 2560×1600 [XDLW14, OSS + 12, ML94, B + 13]. These sequences are gathered into three sub-sets: 86 sequences for training, 10 sequences for validation, and 18 sequences for test. Table II.1 summarizes the chosen videos and the number of frames (41, 349) in our CPIH database. First, we encoded the original database (114 video sequences) by original HEVC encoder common test condition at different Quantization Parameters (QP=22, 27, 32, 37) using Low Delay P configuration (using encoder -lowdelay -P -main.cf g ) to obtain the residue and the ground truth CU depth. The ground truth CU depth files Figure. II.6: Proposed Framework , as shown in Figure II.6. The ReLU and the sigmoid activation functions are used to activate the hidden and the output layers, respectively [GBB11]. When predicting CTU partition, the long short-term dependency of CTU partition across frames can be taken into consideration in the LSTM network. As seen in Figure II.6, the temporal dependency is modeled by the LSTM cells, which are processed along with the encoded frames. Here, we take the LSTM cell of level l at frame t as an example to discuss the internal mechanism of the proposed LSTM. In fact, the LSTM cell consists of three gates, as shown in Figure II.7; the input gate i l (t), the forget gate f l (t), and the output gate o l (t). Figure. II.7: LSTM Cell Figure. II.8: Learning Process RD performance analysis is performed based on the Bjontegaard Delta bitrate (BD-BR) and the Bjontegaard Delta Peak Signal-to Noise Ratio (BD-PSNR) [Bjo01]. The BD-BR represents the average bitrate savings that calculated between two RD curves for the same video quality, where negative BD-BR values indicate actual bitrate savings and positive values indicate how much the bitrate is increased. BD-PSNR is the overall PSNR difference of RD curves with the same bitrate in decibel. Not forgetting that the coding time is modeled as the critical metric for the validation performance of the HEVC at inter-mode, as shown in the following equation: Figure Figure. II.9: Encoding Time of the Proposed CNN-LSTM and Deep CNN (M-IoT) is an emerging type of Internet of Things (IoT) relaying multimedia data (image, video, audio and speech, etc...) [NQA + 20]. 
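The time-saving and rate-distortion metrics mentioned above can be written down explicitly; the formulation sketched below is the commonly used one and is assumed here, since the original equation is not legible in this extraction. The BD-BR fits a cubic polynomial to log-bitrate as a function of PSNR for the two encoders and averages the difference over the common PSNR interval, while ΔT is the relative encoding-time difference with respect to the anchor (negative when time is saved). The rate/PSNR points in the example are invented for illustration.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bitrate (%): average bitrate difference at equal quality."""
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)  # cubic fit of log-rate vs PSNR
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))                # common PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0

def time_saving(t_proposed: float, t_anchor: float) -> float:
    """Relative encoding-time difference (%); negative values mean time saved."""
    return (t_proposed - t_anchor) / t_anchor * 100.0

if __name__ == "__main__":
    # one rate/PSNR point per QP in {22, 27, 32, 37}, purely illustrative numbers
    r_hm = np.array([12000.0, 6200.0, 3300.0, 1800.0]); p_hm = np.array([41.2, 38.9, 36.5, 34.1])
    r_fx = np.array([12300.0, 6350.0, 3380.0, 1830.0]); p_fx = np.array([41.1, 38.8, 36.4, 34.0])
    print(f"BD-BR  = {bd_rate(r_hm, p_hm, r_fx, p_fx):+.2f} %")
    print(f"DeltaT = {time_saving(1900.0, 4100.0):+.2f} %")
```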
The rapid growth of M-IoT devices enables the creation of a massive volume of multimedia data with different characteristics and requirements [MBB + 20]. With the development of Artificial Intelligence (AI), AI-based M-IoT systems have been recently designed and deployed for various video-based services for contemporary daily life, like video surveillance with HD and UHD and mobile multimedia streaming. These new services need higher video quality in order to meet the Quality of Experience (QoE) required by users [ARM18]. This chapter proposes a deep CNN-based in-loop filtering approach, denoted as the Wide-activated Squeeze-and-Excitation Deep Convolutional Neural Network (WSE-DCNN). The proposed approach provides new powerful in-loop filtering without exploiting traditional ones for the VVC standard. Indeed, the main goal is to effectively remove compression artifacts and enhance the compressed video quality. The proposed method improves the QoE of end-users. The remainder of this chapter is organized as follows. Section III.2 presents the background. Section III.3 introduces the proposed M-IoT scenario. Then, the proposed deep CNN-based in-loop filtering in VVC standard is defined in section III.4. Next, CHAPTERFigure Figure III.1. However, several issues, such as interoperability, security, data size, reliability, storage and computational capacity need to be well resolved to process multimedia data [ZKH + 19]. Figure . . Figure. III.4: WSE-DCNN Architecture CHAPTER III. DEEP LEARNING BASED VIDEO QUALITY ENHANCEMENT FOR THE NEW VERSATILE VIDEO CODING Algorithm 1 WSE-Unit Input: X ∈ {H, W, C} Output: Y ∈ {H, W, C} 1 for number of Epochs do 2 2 (i, j, k). return(Y 3 ) Call-Excitation-Operation(Y 3 ): Y 4 = ReLU (W 4 Y 3 + b 4 ). Y 5 = σ(W 5 Y 4 + b 5 ). return(Y 5 ) 2 , Y 5 ): Y 6 (i, j, k) = Y 2 (i, j, k) × Y 5 (k), ∀i ∈{1, ..., H}, ∀j ∈ {1, ..., W }, ∀k ∈ {1, ..., C}. return(Y 6 ) ratio r. Then, the second fully connected layer followed by the sigmoid activation function which is denoted by Y 5 , and it gives each channel a smoothing gating ratio in the range of [0,1]. Features Figure. III.5: Sample Frames of Sequences from the BVI-DVC Database Figure. III.7: Ablation Study. Subjective Visual Quality Comparison (the 12th frame of BQSquare with QP =37: (a) Original; (b) VVC without in-loop filtering (P SN R=31.17dB); (c) VVC (P SN R=31.37dB); (d) VVC-based proposed model (P SN R=31.68dB) Figure. III.8: Comparison of QoE Variation with Respect to bitrate Figure Figure. III.9: RD-performance Curves of the Proposed Model Compared to other three Approaches FPGAs have been used to improve CNN performance, which is the purpose of the next chapter. are widely used, due to their excellent performance, in many computer vision applications, such as facial recognition, image classification tasks, speech recognition programs, video gaming, etc. However, CNNs require a large number of memory resources and they are also computationally intensive. Figure. IV.2: Zynq Processing System CHAPTER IV. DEEP CNN CO-DESIGN FOR HEVC CU PARTITION PREDICTION ON FPGA-SOC interconnect, and clock generation circuitry [CEES14]. Figure Figure IV.3, with various features highlighted. The PL is mainly composed of a general purpose FPGA logic structure, which is made up of slices and Configurable Logic Blocks (CLBs), as well as Input/Output Blocks (IOBs) for interfacing. 
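Before continuing with the Zynq fabric, the channel gating performed by the WSE-Unit of Algorithm 1 above can be sketched as follows: the feature maps are squeezed by global average pooling, passed through two fully connected layers (ReLU then sigmoid) to obtain a per-channel gating ratio in [0, 1], and the input channels are rescaled by these ratios. The layer sizes and the reduction ratio r below are illustrative assumptions, not the exact WSE-DCNN configuration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(feat, w1, b1, w2, b2):
    """Squeeze-and-excitation gating of a (H, W, C) feature map."""
    z = feat.mean(axis=(0, 1))            # squeeze: one descriptor per channel, shape (C,)
    s = relu(w1 @ z + b1)                 # first FC layer, reduced to C // r units
    g = sigmoid(w2 @ s + b2)              # second FC layer, smooth gating ratio in [0, 1]
    return feat * g[None, None, :]        # excitation: rescale every channel of the input

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    H, W, C, r = 16, 16, 32, 4
    feat = rng.normal(size=(H, W, C))
    w1 = rng.normal(scale=0.1, size=(C // r, C)); b1 = np.zeros(C // r)
    w2 = rng.normal(scale=0.1, size=(C, C // r)); b2 = np.zeros(C)
    print(se_gate(feat, w1, b1, w2, b2).shape)   # (16, 16, 32)
```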
Indeed, CLBs are small, regular groupings of logic elements that are laid out in a two-dimensional array on the PL, and connected to other similar resources via programmable interconnects. Figure. IV. 3 : 3 Figure. IV.3: Zynq Programmable Logic [ Joh14]. AXI DMA in Vivado provides high-bandwidth direct memory access between an AXI4 memory-mapped and an AXI4-Stream ports on IPs interfaces [Xil19a]. PYNQ supports the AXI central DMA IP with the PYNQ DMA class [PYN19]. DMA can be used for high performance burst transfers between PS DRAM and PL. It helps to offload data from the Central Processing Unit (CPU) in processor-based systems [Xil19a]. AXI DMA data movement between system memory and stream target is through the AXI4 Read Master to AXI4 memory-mapped to stream (MM2S) Master, and AXI stream to memory-mapped (S2MM) Slave to AXI4 Write Master. CHAPTER IV. DEEP CNN CO-DESIGN FOR HEVC CU PARTITION PREDICTION ON FPGA-SOCLUT slices consumed by our three accelerators, including FC1-IP, FC2-IP and FC3-IP is 3%. The hardware cores use 1% of FFs and 2% of DSPs. In addition, 1% of BRAMs are mainly used by the FC1-IP accelerator. These IPs operate at a frequency of 150 MHz. In this chapter, we have proposed a deep CNN based hardware-software design for HEVC CU prediction on FPGA-SoC. Our proposed work aims to accelerate the CNNs due to their computationally intensive. However, we have created a hardware IP core for each CNNs layer using the Vivado HLS tool. Then, we have designed a hardware-software architecture by importing the hardware IP cores based on the PYNQ-Z1 board. Compared to other designs, our architecture has shown clear advantages in terms of power consumption and hardware resources, which is suitable for deployment on embedded devices with limited resources. A scheme saved a significant encoding complexity, compared to other previous approachesbased machine learning tools. CHAPTER I. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS used in the HEVC standard, in which the analogous structure, called Coding Tree Unit PU 2Nx2N 2NxN Nx2N NxN 2NxnU 2NxnD nLx2N nRx2N CTU TU TU CTU 4 5 0 1 2 3 6 7 8 9 10 11 12 Figure. I.3: HEVC Quadtree Partitioning Structure, including CU, PU and TU (solid line for CU, dashed line for TU) coding efficiency, where available block size varies from 4×4 up to 64×64, including symmetric partitioning, such as 2N×2N, 2N×N, N×2N and N×N, and also asymmetric motion partitioning (AMP) for instance 2N×nU, 2N×nD, nL×2N and nR×2N. In par- ticular N×N is only allowed for minimum coding unit (CU) and AMP is not applied to CUs smaller than 16×16. Figure I.3 introduces the partitioning and quadtree structure Table I I .1 summarizes the main coding tools of HEVC and VVC. VVC has adopted many new coding tools in each coding stage [MMS + 21]. VIDEO CODING AND ARTIFICIAL INTELLIGENCE BACKGROUNDS to handle a large amount of data but difficult to build a good model which is able to effectively recognize new objects in a new test. Machine learning (ML) is an attempt to understand and reproduce this learning facility in an artificial system. It therefore seems appropriate to use techniques from this field to discover and model knowledge and reduce the semantic gap[START_REF] Mitchell | An artificial intelligence approach[END_REF]. 
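Returning to the AXI DMA path described above, the sketch below shows how such a design is typically driven from Python with the PYNQ DMA class on the PYNQ-Z1: a buffer allocated in PS DRAM is streamed to the PL through the MM2S channel and the result is written back through the S2MM channel. The bitstream file name and the IP instance name (axi_dma_0) are assumptions about the block design, and the snippet only runs on a board with such an overlay loaded.

```python
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("cnn_accel.bit")      # load the bitstream; name assumed for the example
dma = overlay.axi_dma_0                 # AXI DMA bridging PS DRAM and the AXI4-Stream ports

in_buf = allocate(shape=(64 * 64,), dtype=np.float32)    # physically contiguous input buffer
out_buf = allocate(shape=(16 * 16,), dtype=np.float32)   # buffer for the accelerator output
in_buf[:] = np.random.rand(64 * 64).astype(np.float32)   # e.g. one 64x64 luminance CTU

dma.sendchannel.transfer(in_buf)        # MM2S: memory-mapped DRAM -> stream input of the IP
dma.recvchannel.transfer(out_buf)       # S2MM: stream output of the IP -> DRAM
dma.sendchannel.wait()
dma.recvchannel.wait()

print(out_buf[:8])                      # first feature values produced by the hardware core
```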
ML is at the crossroads of various fields such as artificial intelligence, statistics, cognitive science, probability theory, optimization, signal and information, and so on [B + 01] [RN16][START_REF] Fulcher | Computational intelligence: an introduction[END_REF]. It is therefore very difficult to give taxonomy of machine learning categories. Then, we briefly present in this section the four main types of machine learning techniques [DB17]: Supervised Learning [KZP07], One generally distinguishes the learning which consists of memorizing information [AM05] [SS94], and the learning by generalization [Wit74] [OW83] in which we usually build a model from learning examples to recognize new examples and scenarios. For the machines, it is easy CHAPTER I. Unsupervised Learning [HTF09], Semi-supervised Learning [CSZ09], and Reinforcement Learning [KLM96]. Table II II .1: Sequences in CPIH Database Resolutions Train Data Valid Data Test Data N. N. N. N. N. N. of video of frame of video of frame of video of frame 352x240(SIF) 4 677 - - - - 352x288(CIF) 23 6,530 2 550 - - 704x576(4CIF) 4 2,280 1 600 - - 720x486(NTSC) 6 1,800 1 300 - - 416X240(240p) - - - - 4 1,900 832x480(480p) - - - - 4 1,900 1280x720(720p) 5 1,327 2 1,100 3 1,800 1920x1080(1080p) 28 8,417 2 540 5 2,080 2048x1080(2k) 16 8,048 2 1,200 - - 2560x1600(WQXGA) - - - - 2 300 Total 86 29,079 10 4,290 18 7,980 where N is the number of training samples and Ln represents the sum of the cross entropy: CHAPTER II. MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY each CU. Specifically, F1 (CU, t) at level 1 indicating whether the 64 × 64 size CU will be split into 32 × 32 size sub-CUs or not. At level 2, { F2 (CU i , t)} 3 i=0 and { F3 (CU i,j , t)} 3At each level, two fully connected layers that contains a hidden layer and an output layer are followed the LSTM cells. In addition, the output features of the LSTM cells are denoted by (F C l ) 3 l=1 at frame t. If the CU of the current level is predicted to be split, the LSTM classifier of the next level is activated to make decisions on the four subsequentCUs at the next level. Otherwise, the prediction on partitioning the current CTU is terminated, in order to save computational time. Finally, the CU splitting results of three levels are combined to represent the CTU partition in the form of 21-dimensional i,j=0 designate respectively the CU partition labels from 32 × 32 to 16 × 16 and from 16 × 16 to 8 × 8. vector, which is composed of F1 (CU, t), { F2 (CU i , t)} 3 i=0 and { F3 (CU i,j , t)} 3 i,j=0 Table II II .2: Test Sequences Class Resolutions Sequences A 2560 × 1600 PeopleOnStreet, Traffic B 1920 × 1080 Kimono, ParkScene, Cactus, BQTerrace, BasketballDrive C 832 × 480 BasketballDrill, BQMall, PartyScene, RaceHorses D 416 × 240 BasketballPass, BQSquare, BlowingBubbles, RaceHorses E 1280 × 720 FourPeople, Johnny, KristenAndSara MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY performance TableII.3 gives a comparison of our two proposed methods, Deep CNN and Online SVM, in terms of complexity reduction and RD performance using LDP configuration. of inter-mode HEVC, as seen in TableII.3. This implies that the proposed Deep CNN is robust in reducing complexity of inter-mode HEVC when compared to the online SVM. This is due to the fact that the CNN works well with visual images recognition while SVM is used widely in classification problems. 
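As a small illustration of the training target discussed above, the sketch below packs the ground-truth splits of one CTU into the 21-dimensional vector (1 decision at level 1, 4 at level 2 and 16 at level 3) and evaluates the sum of binary cross entropies against the predicted split probabilities. The example labels and predictions are arbitrary.

```python
import numpy as np

def cu_label_vector(split_64, split_32, split_16):
    """Pack the three-level ground-truth CU splits of one CTU into a 21-dim vector."""
    # split_64: scalar in {0,1}; split_32: four values; split_16: 4x4 values (one per 16x16 CU)
    return np.concatenate(([split_64], np.ravel(split_32), np.ravel(split_16))).astype(float)

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Sum of the per-output cross entropies used to train the split classifiers."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.sum(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred)))

if __name__ == "__main__":
    y = cu_label_vector(1, [1, 0, 0, 1], np.zeros((4, 4)))
    p = np.full(21, 0.5)          # an untrained classifier outputs roughly 0.5 everywhere
    print(y.shape, round(binary_cross_entropy(y, p), 2))   # (21,) 14.56
```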
Additionally, it is difficult to parallelize SVM but the CNN architecture inherently supports parallelization. CHAPTER II. MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY Table II.3: Performances Comparison between Deep CNN and Online SVM Sequences Online SVM Deep CNN BD-BR BD-PSNR T BD-BR BD-PSNR T (%) (dB) (%) (%) (dB) (%) PeopleOnStreet 3.65 -0.154 -56.56 1.20 -0.051 -50.67 Traffic 3.94 -0.106 -58.07 1.49 -0.041 -57.90 Kimono 1.12 -0.036 -44.18 1.38 -0.044 -43.26 ParkScene 1.67 -0.048 -52.60 1.43 -0.041 -64.14 Cactus 1.68 -0.034 -41.38 2.44 -0.047 -52.57 BQTerrace 1.67 -0.029 -41.45 2.22 -0.034 -58.43 BasketballDrive 1.72 -0.039 -51.17 2.28 -0.051 -51.30 BasketballDrill 2.72 -0.098 -55.87 1.43 -0.052 -53.54 BQMall 5.11 -0.192 -55.96 2.24 -0.085 -52.25 PartyScene 4.38 -0.169 -52.93 1.48 -0.057 -51.54 RaceHorses 4.67 -0.173 -51.25 1.41 -0.053 -42.22 BasketballPass 4.94 -0.217 -55.49 1.85 -0.083 -52.42 BQSquare 6.63 -0.227 -55.92 2.09 -0.073 -52.79 BlowingBubbles 4.56 -0.157 -51.87 1.71 -0.061 -46.55 RaceHorses 7.48 -0.315 -50.81 1.32 -0.058 -38.01 FourPeople 1.78 -0.054 -51.90 1.06 -0.029 -67.54 Johnny 3.60 -0.076 -60.12 3.99 -0.083 -69.66 KristenAndSara 2.51 -0.070 -53.57 1.31 -0.082 -67.20 Overall 3.55 -0.121 -52.28 1.80 -0.057 -53.99 As it can be seen, our proposed Deep CNN obtains significantly best results in terms of execution time for class E sequences, this is caused by the low motion activities displayed in these sequences, which leads to larger partitions. For the same reason, it is possible to The experimental results show that our Deep CNN model achieves a significant observe a slightly higher encoding time for high-resolution sequences compared to lower complexity reduction of around 53.99% with 1.80% BD-BR compared to the online SVM resolution ones. at LDP configuration. On the other hand, our online SVM demonstrates significant coding losses in BD-BR of 3.55% and an average decrease of 52.28% in time reduction. From the overall performance evaluation, we can find that the proposed method Deep CNN outperforms the online SVM in terms of both complexity reduction and RD CHAPTER II. Table II II CHAPTER II. 
MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY Proposed CNN-LSTM T BD-BR BD-PSNR T (%) (%) (dB) (%) 27.22 1.70 -0.017 -48.88 59.66 1.53 -0.059 -66.38 50.49 1.65 -0.052 -47.77 -2.79 -0.081 -70.82 53.06 1.73 -0.033 -53.85 51.89 1.75 -0.030 -65.62 49.16 2.02 -0.045 -52.77 46.55 1.67 -0.061 -48.23 43.32 1.38 -0.090 -48.07 30.33 0.96 -0.038 -59.30 27.08 1.47 -0.055 -54.32 38.24 1.26 -0.056 -56.67 34.30 1.27 -0.046 -60.79 39.87 0.97 -0.034 -50.14 -1.60 -0.018 -50.33 78.98 2.71 -0.071 -72.42 79.31 2.46 -0.083 -73.59 77.58 2.93 -0.094 -74.91 54.57 1.78 -0.053 -58.60 .5: Comparative Study [LZZ + 19] [TTAA19] T BD-BR BD-PSNR T BD-BR BD-PSNR (%) (%) (dB) (%) (%) (dB) -47.50 5.45 -0.250 29.84 1.85 -0.085 -60.60 ---3.20 -0.102 -56.03 0.35 -0.010 34.74 2.43 -0.079 -58.72 2.84 -0.090 46.42 -- -56.87 2.79 -0.065 43.22 2.46 -0.060 -60.01 2.15 -0.038 38.70 1.74 -0.032 -55.84 2.06 -0.046 39.45 1.93 -0.046 -55.19 3.90 -0.148 32.12 2.24 -0.089 -50.74 5.56 -0.227 37.11 1.86 -0.071 -46.83 4.74 -0.210 31.58 2.58 -0.108 -46.22 2.10 -0.180 26.03 1.25 -0.048 -49.04 ---3.12 -0.147 -46.91 3.38 -0.145 35.72 3.02 -0.117 -45.62 3.41 -0.136 24.73 3.99 -0.154 -41.86 ----- -64.37 1.66 -0.058 65.28 1.83 -0.063 -66.49 0.90 -0.020 64.05 5.45 -0.139 -67.23 1.58 -0.050 64.67 3.30 -0.108 -54.2 2.56 -0.099 43.33 2.97 -0.107 Sequence [XLW + 18] BD-BR BD-PSNR (%) (dB) PeopleOnStreet 1.05 -0.045 Traffic 1.99 -0.052 Kimono 1.49 -0.048 ParkScene 1.47 -0.042 Cactus 2.07 -0.043 BQTerrace 1.09 -0.017 BasketballDrive 2.26 -0.052 BasketballDrill 1.95 -0.072 BQMall 1.91 -0.071 PartyScene 1.01 -0.039 RaceHorses 0.87 -0.032 BasketballPass 1.45 -0.066 BQSquare 0.77 -0.028 BlowingBubbles 1.29 -0.044 RaceHorses 1.11 -0.047 FourPeople 1.83 -0.052 Johnny 1.69 -0.038 KristenAndSara 1.55 -0.045 Overall 1.49 -0.046 DEEP LEARNING BASED VIDEO QUALITY ENHANCEMENT FOR THE NEW VERSATILE VIDEO CODING Table III.3 shows the comparison of encoding performance with other approaches CHAPTER III. Table IV IV .3: Hardware Resource Occupation of the FC-IP IP cores Resource BRAM_18k DSP FF LUT FC1-IP Total 4 5 1151 1618 Available 280 220 106400 Utilization (%) 1 2 1 3 FC2-IP2 Total 1 5 1151 1618 Available 280 220 106400 Utilization (%) 0 2 1 3 FC3-IP Total 0 5 1149 1618 Available 280 220 106400 Utilization (%) 0 2 1 3 Table IV IV CNNs based on Xilinx VC709 board. This proposed method almost entirely uses the hardware resources of the FPGA, consuming almost all the DSPs, over 60% LUTs, 50% CHAPTER IV. DEEP CNN CO-DESIGN FOR HEVC CU PARTITION PREDICTION ON FPGA-SOC Table IV.5: Comparative Study [LCX + 19] [ZWCL21] [LZF + 19] Our Design Platform XC7VX690T XC7Z035 7100 XC7Z020 Frequency (MHz) 120 200 100 142 LUTs 62.9% 48.4% 51 47% FFs 50.2% 31.7% 38 26% BRAMs 26.6% 74% 46 16% DSPs 99.8% 21.3% 95 14% Power (W) 15.8 5.96 3.99 1.69 .4: Hardware Cost Resource LUT LUTRAM FF BRAM DSP IO BUFG Total 25026 1374 27979 22 30 4 1 Available 53200 17400 106400 140 220 125 32 Utilization (%) 47 8 26 16 14 3 3 FPGA-Frequency (MHz) 142 PS7-Frequency (MHz) 525 Bitstream (Ko) 3951 II.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 (a) ClassA1 (b) ClassA2 (c) ClassB (d) ClassC (e) ClassD Acknowledgments List of CHAPTER II. MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY computational complexity, the bi-threshold decision scheme was adopted at three levels. Note that the upper and the lower thresholds at level l are represented by (γ l ) 3 l=1 and (γ l ) 3 l=1 . 
At three levels, the LSTM network provides the predicted CU partition probability P l (CU ). Consequently, the CU decides to be split only when P l (CU ) > γ l . If P l (CU ) < γl , the CU is not split. The interval [γ l , γ l ] represents the uncertain zone, in which the possible splitting patterns of the current CU need to be traversed by HEVC for the RDO search. In this way, the HEVC complexity is reduced considerably by skipping the most redundant verification of the RD cost. II.4 Experimental Results This section introduces the evaluation performance of the proposed HEVC method. Specifically, we first present the experimental settings. Then, the performance metrics are provided. Finally, the evaluation performance of the proposed machine learning approaches are discussed and compared to other related algorithms. II.4.1 Experimental Settings In this section, we present the obtained results to validate the coding efficiency of the proposed deep learning framework. Our experiments were performed in the HM16.5 reference test model [Mod] using the Low Delay P (LDP) configuration. The QP values tested were 22, 27, 32 and 37 for encoding process. All simulations were tested on 18 JCT-VC videos from class A (2650×1600) to class E (1280×720) [B + 13], as illustrated in Table II.2. The frames number used for each video sequences is 100. All implementations were executed on windows 10 OS platform with Intel ®core TM i7-3770 @ 3.4 GHz CPU and 16 GB RAM. To accelerate the speed of the network model training phase, we also used the NVIDIA GeForce GTX 480 GPU, but it was not used in the HEVC complexity reduction test. In the experiments, the Tensorflow-GPU deep learning framework was used. CHAPTER II. MACHINE LEARNING APPROACH-BASED FAST CU PARTITION FOR REDUCING HEVC COMPLEXITY On the other hand, the proposed approach can reduce the BD-PSNR performance by -0.053dB, which is better than -0.057dB achieved by the proposed Deep CNN. Furthermore, our proposed approach has an average BD-BR performance of 1.78%, better than that of [START_REF] Bouaafia | Fast cu partition-based machine learning approach for reducing hevc complexity[END_REF]; 1.80%. In our experiments, we note that the proposed deep learning CNN-LSTM achieves high HEVC complexity reduction at inter-coding, because it is capable to predict all the CU splitting of an entire CTU at the same time. The proposed algorithm also performs well in terms of BD-PSNR performance, due to the high accuracy of the predicted CU partition. Consequently, the learning scheme based on CNN-LSTM achieves a good compromise between RD performance and coding complexity in order to predict inter-mode CU partition of HEVC. This is mainly due to the LSTM ability to resolve the temporal correlation through adjacent frames. For more evaluation, the reducing complexity of CNN-LSTM versus deep CNN under all video sequences (A -E) at LDP configuration is proved in Figure II.9. As shown in this Figure, the proposed approach allows higher encoding time when the QP value increases from 22 to 37. Overall, the proposed deep learning approach outperforms best in terms of time saving than the deep CNN. Consequently, the proposed scheme is better for reducing the HEVC complexity of inter-coding and for finding an optimal CU partition, compared to traditional RDO research. II.4.5 Comparative Study To evaluate the encoding performance of the proposed learning approach, our experimental results are compared to other state of the art methods. 
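The bi-threshold scheme described at the beginning of this section reduces to the small decision function below: the predicted split probability of a CU is compared with the lower and upper thresholds of its level, and only the uncertain zone is handed back to the RDO search. The threshold values in the example are placeholders, not the (γ_l, γ̄_l) actually tuned in the experiments.

```python
def split_decision(p_split: float, gamma_low: float, gamma_high: float) -> str:
    """Three-way decision from the predicted split probability of one CU."""
    if p_split > gamma_high:
        return "split"       # confident: split without evaluating the current CU's RD cost
    if p_split < gamma_low:
        return "no_split"    # confident: keep the CU and skip checking its four sub-CUs
    return "rdo"             # uncertain zone [gamma_low, gamma_high]: full RDO search

if __name__ == "__main__":
    thresholds = {1: (0.2, 0.8), 2: (0.3, 0.7), 3: (0.4, 0.6)}   # illustrative per-level values
    for level, p in ((1, 0.93), (2, 0.55), (3, 0.12)):
        low, high = thresholds[level]
        print(f"level {level}: p={p:.2f} -> {split_decision(p, low, high)}")
```

Narrowing the uncertain zone saves more encoding time but risks more RD loss, which is the trade-off discussed in the experimental results.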
The CTU level on/off control is adopted to avoid a reduction in RDO performance. The frame level filtering would be shut off to prevent over-signal, if the enhancement quality is not worth to cost the signaled bits. Specifically, the control flags at the CTU- III.5 Experimental Results In this section, we will present the performance of the proposed loop filtering scheme based on WSE-DCNN in the VVC standard through experimental results. We evaluated the performances, in terms of RD performance, and QoE of the proposed scheme. Specifically, we first present the description of the dataset collection in the experiments. Then, model training, testing, and evaluation are provided. Finally, objective evaluations and subjective visualizations are presented. Furthermore, the performance of the proposed method is illustrated and compared with other CNN based in-loop filter algorithms. III.5.1 Dataset Collection In III.5.3 WSE-DCNN Evaluation The RD performance results of the proposed model compared to the original VVC standard are shown in Table III The BQSquare video sequence encoded with QP equals to 37 under RA configuration is deployed in order to show the subjective visual quality and to further verify the effec- IV.2.1 FPGA-SoC Nowadays, the limitations of Application Specific Integrated Circuit (ASIC) Systemon-Chips (SoCs) make them incompatible with a large number of applications in terms of time-to-market, flexibility and upgrade capability. There is a clear need for a more flexible solution, and this is what motivates the System-Programmable-Chip, a specific flavour of SoC implemented on a programmable, reconfigurable device. However, the FPGAs is the popular solution used, which are inherently flexible devices that can be configured to implement any arbitrary system, including embedded processors if needed. FPGAs can also be reconfigured as often as desired, thus offering a more fundamentally flexible platform than ASICs for implementing SoCs. IV.2.1.3 PS-PL Interfaces In this section, we introduce the connections between the PS and PL and consider how they can be used. We start by introducing the AXI standard, on which most of these connections are adopted. Suite. AXI buses can be used flexibly, and in general are used to connect the processor to other IP blocks in an embedded system. In fact, there are three versions of AXI4, each representing a different bus protocol, as summarised below. The choice of AXI bus protocol for a particular connection depends on the desired properties of that connection. In fact, AXI4 is for memory-mapped links and providing the highest performance; an address is supplied followed by a data burst transfer of up to 256 data words. AXI4-Lite is a simplified link supporting only one data transfer per connection. AXI4-Lite is also memory-mapped; in this case an address and single data word are transferred. Then, AXI4-Stream is for high-speed streaming data and supporting burst transfers of unrestricted size. There is no address mechanism; this bus type is best suited to direct data flow between source and destination (non memory mapped). In addition, IV.2.4 High Level Synthesis (HLS) Vivado High Level Synthesis (HLS) allows functions written in C, C++, SystemC and OpenCL kernels to be synthesized into a Register-Transfer-Level (RTL) implementation [START_REF] Vh Xilinx | Vivado design suite user guide-high-level synthesis[END_REF]. Vivado HLS provides a number of optional C libraries to enable higher productivity and high performance RTL design. 
These include arbitrary precision libraries allowing operations to be performed at any arbitrary precision. IV.3.1 Proposed Deep CNN Architecture As mentioned in the previous chapter, the HEVC standard adopts a quadtree structure, known as CTU. CTU supports CU partition from 64 × 64 to 8 × 8 at four levels. The CU quadtree partition consumes much of the HEVC encoding complexity, due to the adoption of a wide variety of CU sizes at the RDO level. Therefore, the cost of computational complexity remains a critical issue that must be properly considered in The proposed CNN has better performance compared to conventional techniques, but it requires higher computational complexity which is due to the large number of parameters in convolutional and fully connected layers. To solve this issue, this model will be accelerated on a hardware platform. IV.3.2 CNN Accelerator based on Vivado HLS In this chapter, we have chosen to accelerate the first CNN model (CNN_1) corresponding to level 1 for a 64 × 64 CU partition. In the proposed deep CNN_1 model, three convolutional layers are used, each one is characterized by specific parameters such as the filters number and the kernel size, as mentioned in Table IV.1. All the processed data pass through the CONV operation to extract the feature maps in a 16 × 16 format. For CONV_2, the input image sized to 16 × 16, is convoluted with 2 × 2 kernels to generate the 8 × 8 output data. Additionally, for CONV_3, the input image sized to 8 × 8 is convoluted with 2 × 2 kernels to extract the 4 × 4 output data. Following the above acceleration steps mentioned in the section IV.2.4, each CONV layer is accelerated using Vivado HLS tool and then exported as an RTL core (CONV1_0, CONV2_0, and CONV3_0). Therefore, these generated IP cores will be integrated and used in the Vivado design. On the other hand, the proposed CNN_1 model contains three fully connected layers with different parameters, as shown in Table IV.1. We accelerate these three fully Therefore, these generated IP cores will be integrated as a new repository to be used in the design. IV.3.3 Hardware-Software Co-Design for CNN on FPGA-SoC Currently, there are two implementation CNN modes due to its hierarchical structure. The first one is the Streaming Architectures and the second is the Single Computation Engine. The former has the ability to allocate corresponding hardware resources to each (network layer) IP core (CONV-IPs and FC-IPs) and it has the following characteristics. Firstly, it can realize the inter-layer pipeline and flexible control and management within each IP core with a high customization degree. Secondly, it can be applied only to small network layers (minimum neurons by each layer), since, it is characterized by its higher demand for resources. The latter indicates that different network layers share the same accelerator (one CNN-IP) through resource reuse, which is a non-highly customized architecture, is more inflexible but it is easier. Considering all these advantages, we buffer uses the AXI-stream interface to receive data and weights from the DMA. After the processing task by the accelerator, this IP core will provide outputs that will be sent back to the DDR via the stream_output and AXI-DMA. Thus, this output data, from the current IP core, will be the input for the second layer (second IP core) with the corresponding weight and parameters. 
That is why each IP core in the co-design includes tow data exchanges which are the input_stream and the output_stream within the AXI-lite for the parameter configurations and the stream_kernel for kernel's values programming (only for CONV-IP cores). The above operation will be repeated until the data processing of the network model is completed. On the other hand, the PS reset core, within the AXI-gpio interfaces, generates a customized reset for the entire system, such as the AXI-Interconnect and peripherals. CHAPTER IV. DEEP CNN CO-DESIGN FOR HEVC CU PARTITION PREDICTION ON FPGA-SOC IV.4 Experimental Results This section introduces the performance evaluation of the hardware accelerators and the full co-design system. The provided results are implemented on Vivado pack v2016.1 within PYNQ-Z1 board (Xilinx Zynq-7000 device). IV.4.1 IP Cores Hardware Resource The three CONV-IP hardware resource occupation of the target FPGA are summarized in Table IV.2. The hardware CONV1-IP core occupies 3% of BRAMs to store the parameters, 3% of LUTs, 2% of DSP, while using only 1% of FFs. In addition, the CONV2-IP uses 1%, 2% and 4% of DSPs, FFs, and LUTs, respectively. For CONV3-IP core, the resources used are 2% of DSP, 1% of FFs, and 4% of LUTs. To sum up, the CONV1-IP presents 9% of hardware cost with a frequency of 150 MHz. While the CONV2-IP and CONV3-IP occupies 7% of hardware cost on the PYNQ-Z1 with a frequency of 150 MHz also. This is considered a low occupancy on the PYNQ-Z1 FPGA with high processing speed. According to the power report, the static power of our proposed system is about 0.160W and the dynamic power is 1.539W . However, the total on-chip power is 1.699W of the proposed architecture. IV.4.3 Comparative Study The proposed design has also been compared to other related works. List of Publications Appendix A Convolutional Neural Networks CNNs have achieved remarkable success in the fields of image processing and computer vision [START_REF] Bouaafia | Fast cu partition-based machine learning approach for reducing hevc complexity[END_REF][START_REF] Bouaafia | Cnn-lstm learning approach-based complexity reduction for highefficiency video coding standard[END_REF]. Indeed, the key terminology and operations involved in CNNs architecture, including convolution, normalization, pooling, activation functions, and fully connected layers have been introduced [START_REF] Kumar | Recent deep learning techniques, challenges and its applications for medical healthcare system: A review[END_REF]. A.1 Convolutional Layer Convolutional Layers are the basic building blocks, which are the most computationally A.2 Activation Functions The commonly used activation function is the Rectified Linear Unit (ReLU) which clips all negative values to zero. It is a non-linear activation function used after the convolutional layer, which can be defined as. where the weighted sum of the neuron inputs denoted by x. The goal of ReLU is to converge faster in training and has low computational complexity compared to other functions. Besides, sigmoid and tanh functions are another popular activation function, which can be given as. A.5 Backpropagation Algorithm The backpropagation algorithm is resumed using four equations. In order to see this, we must first establish some useful notation. 
We will assume that there are L layers The four backpropagation equations are defined by the equations A.8, A.9, A.10, and A.11, relating the gradients of the activations of various neurons a l j , the weighted inputs z l j in A.6, and the errors ∆ l j . This algorithm consists of a forward pass from the bottom layer to the top layer where one calculates the weighted inputs and activations of all the neurons. One then backpropagates the error starting with the top layer down to the input layer and uses these errors to calculate the desired gradients. This description makes clear the incredible utility and computational efficiency of the backpropagation algorithm. We can calculate all the derivatives using a single "forward" and "backward" pass of the neural network. This computational efficiency is crucial since we must calculate the gradient with respect to all parameters of the neural network at each step of gradient descent. 116
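As a compact illustration of the forward and backward passes described in this appendix, the sketch below trains a tiny two-layer network (ReLU hidden layer, sigmoid output) by gradient descent: the forward pass computes the weighted inputs and activations layer by layer, and the backward pass propagates the output error down to the input layer, yielding all gradients in a single sweep. The toy data set and layer sizes are chosen only for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                        # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # toy binary labels

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(200):
    # forward pass: weighted inputs z and activations a, layer by layer
    z1 = X @ W1 + b1
    a1 = np.maximum(0.0, z1)          # ReLU hidden layer
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)                  # sigmoid output layer

    # backward pass: propagate the error from the top layer down to the input layer
    d2 = (a2 - y) / len(X)            # output error for a sigmoid output with cross-entropy loss
    dW2 = a1.T @ d2;  db2 = d2.sum(axis=0)
    d1 = (d2 @ W2.T) * (z1 > 0)       # hidden-layer error (ReLU derivative)
    dW1 = X.T @ d1;   db1 = d1.sum(axis=0)

    # gradient-descent update of all weights and biases
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"training accuracy: {float(((a2 > 0.5) == (y > 0.5)).mean()):.2f}")
```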
04098047
en
[ "spi.mat" ]
2024/03/04 16:41:18
2023
https://cea.hal.science/cea-04098047/file/EDTM2023_vf_v2.pdf
G Bourgeois V Meli R Antonelli C Socquet-Clerc T Magis F Laulagnet B Hemard M Bernard L Fellouh P Dezest J Krawczyk S Dominguez F Baudin J Garrione C Pellissier J.-A Dallery N Castellani M.-C Cyrille C Charpin F Andrieu G Navarro Crossbar Arrays based on "Wall" Phase-Change Memory (PCM) and Ovonic-Threshold Switching (OTS) Selector: a Device Integration Challenge Towards New Computing Paradigms in Embedded Applications Keywords: Crossbar, "Wall" PCM, OTS selector) HAL is Introduction Phase-Change Memory (PCM) and Ovonic-Threshold Switching (OTS) selector have been successfully co-integrated in 3D stacked Crossbar arrays definitely reaching a high maturity, until the mass production of devices targeting stand-alone Storage Class Memory applications [1]. In order to enable neuromorphic computing architecture design, for example to speed up matrix-vector multiplication operations [2], Crossbar Resistive Memory is considered a valuable solution to be implemented in future embedded architectures, in order to support recognition and prediction tasks required in autonomous systems (e.g. driving). In this work, we demonstrate the co-integration of a PCM resistive device (1R) based on "self-aligned" (SA) Wall structure [3] with an OTS selector device (1S) by adding a "Cross" patterning (i.e. "Double-Patterned Self-Aligned" or DPSA integration) perpendicular to the first SA one. We address the issues inherent to our DPSA integration and in particular related to the bitline realization, which should reliably recover the contact over the 1S1R devices, and to the lithography strict alignment to allow device scaling down to tens of nm. We show a good control of the effects of etching/stripping chemistry on the integrated materials by TEM/EDX analyses and finally we provide electrical results demonstrating successful programming operations in devices with a surface size down to 60 nm×80 nm. Results and Discussions A. PCM and OTS alloys Our 1S1R DPSA structure is schematically represented in Fig. 1a. The PCM layer is integrated over a heater element (H), and it is followed by the OTS layer based on a GeSbSeN alloy [4], [5] (Fig. 1b). The intermixing between the two layers is hindered by an intermediate TiN electrode. We fabricated 1S1R devices integrating two different GeSbTe PCM alloys A and B, respectively with low and high Ge content. They were coupled with two OTS thicknesses (i.e. t1 and t2 with t1 < t2), leading to a total of four different PCM-OTS stacks. B. DPSA 1S1R cell fabrication We fabricated our 1S1R devices and arrays in the BEOL of LETI Memory Advanced Demonstrator based on 130 nm CMOS technology. Fig. 2 summaries the key steps of the DPSA process integration described by Coventor SEMulator3D  modeling platform. A first Wall (W) SA patterning (x direction) of the PCM-OTS stack involving also the heater definition is followed by a Cross (C) perpendicular patterning (y direction). The recovery of the contact on the top electrode after Wall patterning represents the first challenge in such integration. We compared two approaches: "Dry Etch-back" (Fig. 3a) and "Full CMP" (Fig. 3b) (i.e. chemical-mechanical polishing). The first one demonstrated (in SEM analyses) a high dependency on the devices pitch, with a consequent high variability. The second one, based on the selectivity of SiO 2 polishing wrt a SiN layer used as stop layer demonstrated a good reproducibility. 
In order to obtain fast prototyping of scaled devices in x and y directions (targeting devices down to 50 nm), we took advantage of Variable Shape Beam VISTEC  SB3054 lithography tool allowing alignments control thanks to modelling and die-to-die correction. Being the C/H alignment crucial for the device functionality (i.e. to obtain an active region perfectly centered into the PCM layer), we designed dedicated structures with a large Wall size and for different Cross sizes, connecting many devices in parallel for the electrical evaluation of the C/H alignment at end of fabrication. Fig. 4a shows the results obtained on a wafer integrating only a PCM layer (w/o OTS). The resistance increases with the decrease of the Cross size, likely related to a gradual loss of reliability on the C/H alignment. The inset reports the typical overlay (OV) obtained in a whole lot for both W/H and C/W alignments with a mean OV+3 < 30 nm for both. This is affecting the alignment for a Cross size below 60 nm (a solution for a direct C/H alignment is under validation). On the contrary, we observe the expected almost linear relation between the devices virgin resistance (R V ) and the reciprocal of the Wall size (Fig. 4b). TEM analyses performed on a 60×60 device (i.e. Wall×Cross sizes are expressed in nm) integrating the PCM-OTS stack (Fig. 5a) highlight that alignment control is mandatory to target small sizes functionality. Indeed, we observe that H is not perfectly aligned with C in the device observed. EDX profiles (Fig. 5b) evidence a uniform distribution of the elements perpendicularly to the OTS/SiN interface. Indeed, chalcogenide materials are sensitive to halogens [6], then a step-by-step etching strategy with dedicated chemistry recipes for each step was applied during the Wall etching (i.e. starting from the top electrode: TiN / OTS / TiN / PCM / heater) and Cross etching (i.e. TiN / OTS / TiN / PCM) to prevent from OTS and PCM layers damage. C. DPSA 1S1R electrical characterization The firing operation (i.e. initialization step) was investigated in the four PCM-OTS stacks, showing a decreasing of the firing voltage (V FIRING ) when reducing the OTS thickness from t2 to t1, and when reducing the Ge content in the PCM alloy from B to A (Fig. 6a). As expected, V FIRING does not depend on the size of the device (i.e. electric-field-dominated phenomenon) [7], as shown in the results that are equivalent for 60×60 and 300×300 devices. Device functionality was statistically verified in 1 kb arrays of transistor-selected 1S1R devices (1T1S1R). As an example, in Fig. 6b we report the array map of the current after the devices activation, meaning the dynamic current measured during the pulse application. Only four devices are not switching (< 0.4%). Threshold Voltage (V th ) versus programming current plots of Fig. 7 show the typical SET-RESET characteristics for 1S1R devices. We compared the behavior of 60×80 and 300×300 cells, both showing a reading window of about 1.5 V, compatibly with the V th of Ge-rich GeSbTe devices [8] (i.e. the 1S1R max voltage reading window is equal to the PCM V th [9]), with well distinguishable SET and RESET programming current regions. Current reduction in 60×80 cells is in line with the reduced heater surface wrt 300×300. SET/RESET programming operations were performed in 1T1S1R 1 kb arrays based on 60×80 devices along 100 cycles, and the statistics of the obtained V th distributions are reported in Fig. 8. 
We show almost no overlapping among SET and RESET distributions, without the use of smart programming or Program&Verify protocol. Conclusions A "Double-Patterned Self-Aligned" PCM-OTS structure was presented, targeting the co-integration of PCM "Wall" structure with OTS in the BEOL of the integration. The optimization of the fabrication process was discussed, presenting some of the implemented solutions. 1S1R devices, with dimensions down to 60 nm×80 nm, were realized and tested at statistical level in 1 kb 1T1S1R arrays. A reliable reading window of 1.5 V obtained with almost no SET/RESET distributions overlapping along 100 cycles. Fig. 4 : 4 Fig. 4: a) Resistance (median values) measured wrt Cross size on dedicated test structures (PCM layer w/o OTS i.e. 1R devices) for alignment control. b) Evolution of mean R V vs reciprocal of Wall size in 1R devices. The smallest Wall size of 40 nm shows a higher variability. Fig. 2 :Fig. 5 :Fig. 7 : 257 Fig.2: DPSA PCM-OTS process integration highlighting the "Wall" SA patterning (top) and the following "Cross" patterning (bottom). SCU Fig. 3 :Fig. 8 :Fig. 1 :Fig. 6 : 3816 Fig.3: Contact recovery after "Wall" patterning and encapsulation: a) "Etch-back" approach: strong dependency on the devices density. In the TEM it is compared the nominal (good) density wrt 4x (bad). b) "Full CMP" approach: good uniformity, confirmed by SEM (section + BSE and SE). Acknowledgments This work was partially funded by European commission, French State and Auvergne-Rhone Alpes region through ECSEL-IA 101007321 project StorAIge and French Nano2022 program.
02954405
en
[ "info.info-ai", "info.info-hc", "info.info-tt", "scco.ling", "sdv.neu.sc" ]
2024/03/04 16:41:18
2020
https://hal.science/hal-02954405/file/Blache_IVA2020.pdf
Philippe Blache email: [email protected] Massina Abderrahmane email: [email protected] Stéphane Rauzy email: [email protected] Roxane Bertrand email: [email protected] An integrated model for predicting backchannel feedbacks Keywords: Backchannels, embodied conversational agent, multimodal feedback, rule-based model teaching and research institutions in France or abroad, or from public or private research centers. INTRODUCTION Backchanneling constitutes one the keys for intelligent virtual agents naturalness, which is closely related to their capacity in generating prompt and adequate feedbacks to the speaker's production. Several predictive models have been proposed, involving different cues from prosody, gestures, lexicon, syntax or semantics. Unfortunately, acquiring all such cues in real time, which is mandatory in the case of conversational agents [START_REF] Kopp | An Architecture for Fluid Real-time Conversational Agents: Integrating Incremental Output Generation and Input Processing[END_REF], can be difficult. One simple solution consists in using temporal features [START_REF] Poppe | Backchannel strategies for artificial listeners[END_REF]. However, we also need to determine backchannels' type: generic or specific (initially called continuers or assessment) [START_REF] Bavelas | Listeners as co-narrators[END_REF] . These types involve different levels of processing, depending on whether they require or not deep understanding of the speaker's message. Some backchannels (esp. generic) are produced almost automatically and can be predicted by low-level cues. Some others require a certain level of semantic processing. The problem is that the production of these two types of backchannels is based on two different predictive mechanisms that are potentially in conflict because applied in parallel. We propose in this paper an approach avoiding this issue by implementing a single-route backchannel predictive model, generating appropriate backchannels in real time at a fine-grained level. The method we propose relies on a generic architecture. However, all dialogue systems being dependent from the task, we illustrate the approach with a specific application aiming at training human doctors to break bad news to virtual patients [START_REF] Ochs | Training doctors' social skills to break bad news: Evaluation of the impact of virtual environment displays on the sense of presence[END_REF]. In this system, the virtual patient is mainly a listener, capable of answering questions, requesting for clarification and producing backchannels. BACKCHANNELS TYPES Two BC categories are usually distinguished: generic (displaying attention to the speaker) and specific (expressing responses to the content of the speaker's production) [START_REF] Bavelas | Listeners as co-narrators[END_REF][START_REF] Bertrand | Co-narration in French conversation storytelling: A quantitative insight[END_REF][START_REF] Tolins | Addressee backchannels steer narrative development[END_REF]. They correspond to different communicative functions and can be expressed in different modalities: verbal, visual or multimodal. Our IVA being developed for French, we mainly focus the most frequent BCs given in [? ]: oui (yes) ah oui (oh yes), mh, d'accord (agree), ok, voilà (that's it), non (no), oh non (oh no), bon (well) ah bon (is that so?) (note that this list is comparable to that in English). In terms of types, oui and mh are typically generic whereas oh non or d'accord are specific. 
Visual backchannels on their side correspond to many different types. They are mainly head movement (nod, jerk, shake, tilt, turn, waggle), facial expressions (smile, laughter) or eyebrow (frowning, raising). As noted in [START_REF] Ferré | Unimodal and Bimodal Backchannels in Conversational English[END_REF], they are less disruptive and in a large majority used as generic BCs. Types of backchannels Bimodal backchannels (involving both visual and verbal productions) are also very frequent and play an important role [START_REF] Ferré | Unimodal and Bimodal Backchannels in Conversational English[END_REF]. [START_REF] Bevacqua | Multimodal Backchannels for Embodied Conversational Agents[END_REF] has specifically studied a subset of such bimodals for IVA, associated with their functions: nod+yeah (agreement), shake+no (disagreement), smile+ok (interest), raise eyebrows+ooh (understanding), etc. In some cases, bimodality can reinforce the function: for example, bimodal BCs show a stronger agreement than unimodal ones. BC positions in the discourse also have an influence on their modality [START_REF] Ferré | Unimodal and Bimodal Backchannels in Conversational English[END_REF]: verbal BCs are preferably used during pauses, whereas visual BC are more likely to occur during speech. In terms of temporal features, verbal BCs are preferred soon after the beginning of a turn whereas visual BCs are usually produced later. We propose in this paper, beside the generic BC type, to precise the specific one by only using certain subtypes: agreement and disagreement (both being the most frequent in our corpus), completed with two other subtypes fitting with the specific purpose of our study: surprise and fear. The table 1 summarizes the different backchannels according to their modality and type. BACKCHANNELS PREDICTIVE FEATURES Many studies showed that the number of BCs has consequence on agent's naturalness. It is then important to identify as many BC triggers as possible for elaborating a natural model. In terms of temporal cues [START_REF] Poppe | Backchannel strategies for artificial listeners[END_REF] suggests that a BC may occur in average every 6.5 seconds. Another direct placement model consists in generating feedbacks systematically during breaks. These models are very simple, but produce unnatural behaviors of the agent. We have proposed more elaborated approach taking into account the temporal context in the framework of ACORFORMed doctor/patient dialogues [START_REF] Penteado | Evaluating Temporal Predictive Features for Virtual Patients Feedbacks[END_REF], based on the duration of the doctor's last silent pause, the duration since the doctor's last silent pause and the duration since last patient feedback. On their side, [START_REF] Poppe | Backchannel strategies for artificial listeners[END_REF][START_REF] Ward | Prosodic features which cue back-channel responses in English and Japanese[END_REF] have presented more detailed mechanisms based on prosodic and duration features: • After a region of pitch less than the 26th-percentile pitch level, lasting 110ms, after 700ms of speech, with no BC within the preceding 800ms, after 700ms wait. • After a pause of 400ms preceded by at least 1000ms of speech, where the last 100ms contain a rising or falling pitch of at least 30Hz with no BC within the preceding 1400ms. 
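These timing rules can be written directly as predicates over a few running quantities. The sketch below is a hypothetical Python transcription of the second rule above; how the pitch slope and the durations are measured is left to the audio front-end and is not specified here.

def pause_rule_triggers(pause_ms, preceding_speech_ms, pitch_change_hz_last_100ms, ms_since_last_bc):
    # BC after a pause of 400 ms preceded by at least 1000 ms of speech,
    # whose last 100 ms contain a rising or falling pitch of at least 30 Hz,
    # with no BC within the preceding 1400 ms
    return (pause_ms >= 400
            and preceding_speech_ms >= 1000
            and abs(pitch_change_hz_last_100ms) >= 30
            and ms_since_last_bc >= 1400)

print(pause_rule_triggers(450, 1200, 42.0, 2000))   # True: a backchannel may be produced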
Several works [START_REF] Gravano | Turn-taking cues in task-oriented dialogue[END_REF][START_REF] Meena | Data-driven models for timing feedback responses in a Map Task dialogue system[END_REF] propose models based in particular on different prosodic backchannelling cues such as final rising intonation, higher intensity level, higher pitch level, inter-pausal duration, etc. We propose to use a selection of such features (that can be recognized in real time): Prosody (time since IPU began, silent pause for over 200ms), Discourse (last dialogue act), Syntax (type of the last POS bigram), Semantic (semantic type of the last entity). Moreover, we also identified sequences of multimodal features (mainly POS tags and gestures) that could be used as backchannel predicting cues [START_REF] Porhet | Mining a Multimodal Corpus of Doctor's Training for Virtual Patient's Feedbacks[END_REF]. From these sequences, several rules have been extracted for generating backchannels: doctor_head_nod ⇒ patient_head_nod doctor_verb, doctor_head_nod, doctor_noun ⇒ patient_head_nod doctor_adverb ⇒ patient_head_nod doctor_medical_vocabulary ⇒ patient_head_nod 4 A PREDICTIVE RULE-BASED MODEL Instead of a classical architecture based on two types of processing (an automatic loop generating low-level BC vs. a deeper one for semantic processing), we propose a unique model integrating both levels. This approach is not only more efficient, but also closer to a parallel human-like processing. At the technical level, the different techniques now available in dialogue technology give access to many types of linguistic information in real time. More precisely, besides low-level classical cues (such as breaks, turn length, POS, etc.), we can also recognize during the dialogue semantic and discourse-level cues such as dialog acts, slot-based semantic structures as well as the position in the discourse ( [START_REF] Bertrand | Co-narration in French conversation storytelling: A quantitative insight[END_REF] shows how discourse phases influence BC types, generic BCs being favored when reaching the core of the narration). These different features all contribute to the generation of backchannels by deciding their type and placement. The interest of such an integrated model, on top of its homogeneity and its naturalness, is to avoid conflicts in the generation process, occurring when different backchannels can be generated simultaneously from two different loops. Our proposal consists in integrating this BC generation model into an embodied conversational agent equipped with a dialogue system. The first question consists in identifying the span of each processing step that is traditionally a temporal window in the signal (e.g. each 3 seconds), an inter-pausal unit (i.e. between pauses of at least 200ms), a turn, etc. Embodied dialogue systems mainly have two input streams: the audio signal and its transcription (note that, in spite of the important role it can play, we put aside at this stage the gestural speaker's behavior in order to simplify the type of cues to be acquired in real time). The audio stream makes it possible to acquire temporal and prosodic features (silent pauses, pitch, IPU duration, etc.) where the transcription stream leads to the other linguistic cues. In this second stream, features from different domains can be acquired. First, at the lexical level, we have seen that morphosyntax (POS tags) plays an important role: for example, certain POS bigrams such as V-Adv favors the production of a generic BC. 
In the same vein, some terms (or more generally some semantic types) can be associated to a specific BC (surprise, fear, etc.). We also know that the introduction of new terms in the discourse (new referents) may be associate to a specific BC, indicating for example an agreement. At the discourse level, different studies have shown that the structure of the conversation and more precisely its phasing (opening, closing, argumentation, etc.) can also be associated with specific listener's reactions [START_REF] Bertrand | Co-narration in French conversation storytelling: A quantitative insight[END_REF]. This information can be directly obtained thanks to dialog act (DA) classification [START_REF] Chen | Dialogue Act Recognition via CRF-Attentive Structured Network[END_REF][START_REF] Stolcke | Dialogue act modeling for automatic tagging and recognition of conversational speech[END_REF] : in the case of our application domain, DAs correspond to dialogue phases. It is thus possible to trigger specific BCs upon phase change (e.g. when the doctor starts announcing a bad new). Finally, semantics plays a central role in generating specific BCs: many listener's reactions are triggered upon instantiation of the semantic structure, which is represented in task-oriented dialogues by a common ground (CG) [START_REF] Blache | Dialogue management in task-oriented dialogue systems[END_REF]. In such approaches, the CG consists in a set of structured frames made of different variables. The construction of the semantic structure consists in instantiating these variables during the discourse, which can elicit specific BCs (related to the content itself or to the global process of CG construction). We propose to implement a semantic-based BC generation by associating CG slots to specific BCs. For example, a BC expressing fear can be triggered when the slot "urgency" is instantiated. The implementation of a unique loop for generating BC requires an appropriate segmentation of the input stream in order to take advantage of all different types of information we want to bring into the model. The processing span need to be large enough to capture semantic and discourse-level information, but not too long in order to allow real-time reactions. This means that neither arbitrary short segments nor entire IPUs returned by the speech recognition system (that can last in some cases more than 20 seconds) are adequate for our model. We propose an intermediate segmentation approach for answering these needs consisting in segmenting the input flow by using discourse markers (mais, donc, puis, alors, etc.) that indicate approximately a change between different discourse units. The list of such items being closed and very short, the mechanism simply consists in checking whether a marker appears. In such case, the current segment is parsed and the linguistic processing tools extract in real time the different features such as POS tags, semantic types of the substantives, dialogue acts and slot instantiation of the common ground in order to generate adequate BCs. Beside them, the different audio features are kept updated, in particular the duration since the last feedback, the indication of the current state of the production (speech or silent pause), the duration of the speech since the last pause, the duration of the pause, etc. Note that in the type of data we are working on, most of BCs are triggered by linguistic cues, only few of them being generated by temporal features. 
As a consequence, we give them priority in the algorithm, temporal cues being used only as secondary triggers. Concretely, during speech we first look at linguistic BC cues, whereas during a pause the BC generator is mainly based on pause duration. When none of these contexts triggers any BC, the most general temporal cue (duration since the last BC) is applied. Algorithm 1 presents the general mechanism for controlling BC generation. This algorithm makes it possible to process all types of BC generation in a single loop. It distinguishes different situations, hierarchically ordered (avoiding concurrent processing that could generate conflicts). At the highest level, we have seen that slot filling (i.e. instantiation of the semantic structure) may directly trigger backchannels. In such a case, the BC type associated with the slot becomes the current BC type value (e.g. agreement, surprise, etc.). If no BC comes from slot filling, then the different BC cues are extracted from the analysis of the current segment (e.g. dialogue acts, semantic types, POS sequences, etc.). These cues serve as input to a BC type identification function, based on a set of production rules. Note that the cues as well as the rules can apply to the audio signal as well as to the transcription (in other words, they take into account prosody, discourse, syntax, etc.). Figure 2 gives examples of such rules. Given the BC type to be generated and the current mode (pause or speech), the last step consists in generating the BC itself, by choosing among a list of possible candidates (as described in Figure 1). Note that this list lives in a probability space which also depends on the current state (visual or bimodal BCs will be preferred during speech whereas verbal BCs will be favored during pauses). CONCLUSION We have presented in this paper a method for generating backchannels which, unlike other approaches, is integrated into a single loop. This approach takes advantage of different NLP technologies for the real-time acquisition of the linguistic features involved in a high-level BC generation model. We have implemented this model in a virtual agent equipped with a dialogue system. The resulting application offers a very reactive artificial listener, capable of generating a large variety of multimodal BCs. This system is currently under evaluation. Figure 1: Types of backchannels. Figure 2: BC generation rules.
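To make Algorithm 1 concrete, a minimal Python sketch of its decision step is given below. It only mirrors the hierarchy described above; the 400 ms and 6.5 s thresholds come from the cues discussed in this paper, but the rule set and the function and field names are hypothetical placeholders rather than the actual implementation of the system.

def choose_backchannel(cues, rules, in_pause, pause_duration_s, time_since_last_bc_s):
    # 1. Highest priority: a filled common-ground slot directly imposes a specific BC type
    bc_type = rules["slot_to_bc"].get(cues.get("filled_slot"))
    # 2. Otherwise, linguistic cues (dialogue act, POS bigram, semantic type) feed production rules
    if bc_type is None:
        bc_type = next((bc for condition, bc in rules["linguistic"] if condition(cues)), None)
    # 3. Fallback on temporal cues only (pause duration, elapsed time since the last BC)
    if bc_type is None and ((in_pause and pause_duration_s > 0.4) or time_since_last_bc_s > 6.5):
        bc_type = "generic"
    if bc_type is None:
        return None
    # 4. Realisation: verbal BCs favored during pauses, visual or bimodal BCs favored during speech
    candidates = rules["realisations"][bc_type]["pause" if in_pause else "speech"]
    return bc_type, candidates[0]

# Toy rule set and example call
rules = {
    "slot_to_bc": {"urgency": "fear"},
    "linguistic": [(lambda c: c.get("pos_bigram") == "V-Adv", "generic")],
    "realisations": {"generic": {"pause": ["mh"], "speech": ["head_nod"]},
                     "fear": {"pause": ["oh non"], "speech": ["frown"]}},
}
print(choose_backchannel({"pos_bigram": "V-Adv"}, rules,
                         in_pause=False, pause_duration_s=0.0, time_since_last_bc_s=2.0))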
04090557
en
[ "phys", "info.info-lg", "info.info-mo" ]
2024/03/04 16:41:18
2023
https://hal.science/hal-04090557/file/main.pdf
Samir Beneddine Nonlinear input feature reduction for data-based physical modeling Keywords: deep learning, mutual information, dimensional analysis, physical modeling This work introduces a novel methodology to derive physical scalings for input features from data. The approach developed in this article relies on the maximization of mutual information to derive optimal nonlinear combinations of input features. These combinations are both adapted to physicsrelated models and interpretable (in a symbolic way). The algorithm is presented in detail, then tested on a synthetic toy model. The results show that our approach can effectively construct relevant combinations by analyzing a strongly noisy nonlinear dataset. These results are promising and may significantly help training data-driven models. Finally, the last part of the paper introduces a way colorblackto account for the physical dimension of data. The test case is a synthetic dataset inspired by the Law of the Wall from turbulent boundary layer theory. Once again, the algorithm shows that it can recover relevant nondimensional variables colorblackfor data-base modeling. Introduction Open physical modeling questions have recently benefited from the accelerating developments of Machine Learning tools. In particular, Deep Learning (DL) carries high hopes of tackling numerous unresolved physical problems. For instance, for the specific field of fluid mechanics, the work of [START_REF] Singh | Machine-learning-augmented predictive modeling of turbulent separated flows over airfoils[END_REF] has been among the precursors for many papers attempting to propose new DL-augmented Reynolds-Averaged Navier-Stokes (RANS) models [START_REF] Wu | Physics-informed machine learning approach for augmenting turbulence models: A comprehensive framework[END_REF][START_REF] Volpiani | Machine learning-augmented turbulence modeling for rans simulations of massively separated flows[END_REF]. To mention a few other applications, DL has also been considered as a tool to produce new Wall Models [START_REF] Yang | Predictive large-eddy-simulation wall modeling via physics-informed neural networks[END_REF], Subgrid-Scale Models [START_REF] Vollant | Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures[END_REF], and Transition Models [START_REF] Yang | Improving the k-ω-γ-ar transition model by the field inversion and machine learning framework[END_REF]. Needless to say that this scientific trend goes well beyond the sole topic of fluid mechanics or even physics and concerns virtually all open modeling problems (for instance, in biotechnology [START_REF] Gao | Deep learning in protein structural modeling and design[END_REF], solid mechanics [START_REF] Haghighat | A physicsinformed deep learning framework for inversion and surrogate modeling in solid mechanics[END_REF], molecular dynamics [START_REF] Zhang | Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics[END_REF], epidemiology [START_REF] Shorten | Deep learning applications for covid-19[END_REF], etc.). Yet, relying on a purely data-based approach has shown mitigated results. To our knowledge, no data-driven model has definitively answered any of the topics mentioned above. 
Indeed, neural networks (NN) are known to have poor extrapolation capabilities [START_REF] Hettiarachchi | The extrapolation of artificial neural networks for the modelling of rainfall-runoff relationships[END_REF], which results in low accuracy when a DL-model is used for physical conditions that have been unseen during training. Additionally, NNs are often black boxes from which it is hard to gain new scientific knowledge. This lack of generality and understandability of DL models may be one of the core challenges in modern Machine Learning applied to physical problems. Consequently, recent papers have attempted to incorporate physical knowledge into these data-based approaches. Several works have directly included the governing equations of a dynamical system within the loss function of neural networks. This led for instance to the so-called Physics-Informed Neural Networks (PINN) [START_REF] Raissi | Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[END_REF], used for a wide range of physics-related problems [START_REF] Cai | Physics-informed neural networks (pinns) for fluid mechanics: A review[END_REF][START_REF] Mao | Physics-informed neural networks for high-speed flows[END_REF][START_REF] Sahli Costabal | Physics-informed neural networks for cardiac activation mapping[END_REF]. Another way to include physics within machine learning approaches has been done through the input features. For instance, [START_REF] Yang | Predictive large-eddy-simulation wall modeling via physics-informed neural networks[END_REF] showed in their works that scaling input features based on prior knowledge of the physics involved leads to far better results than just blindly feeding the NN with all available quantities. Similarly, [START_REF] Volpiani | Machine learning-augmented turbulence modeling for rans simulations of massively separated flows[END_REF] wrote in their paper that "the choice of the input features is crucial" to obtain data-based RANS models with acceptable fidelity. These are just two examples of many works demonstrating that successful DL approaches for physical modeling depend on the judicious choice of input quantities. Therefore, the adequate definition of the input features is a central question for data-driven modeling. Note that feature selection is a research topic highly active in the Machine Learning community, especially on non-physics-related topics and classification tasks, with plenty of articles dedicated to the question (to name a few, [START_REF] Dash | Feature selection for classification[END_REF][START_REF] Blum | Selection of relevant features and examples in machine learning[END_REF][START_REF] Battiti | Using mutual information for selecting features in supervised neural net learning[END_REF][START_REF] Sindhwani | Feature selection in mlps and svms based on maximum output information[END_REF][START_REF] Bollacker | Linear feature extractors based on mutual information[END_REF][START_REF] Tadist | Feature selection methods and genomic big data: a systematic review[END_REF]). 
Researchers have gathered existing methods into five main categories: filter methods, wrapper methods, embedded methods, ensemble methods, and integrative methods (see, for instance, [START_REF] Naik | A novel sensitivity-based method for feature selection[END_REF][START_REF] Tadist | Feature selection methods and genomic big data: a systematic review[END_REF][START_REF] Bolón-Canedo | A review of feature selection methods on synthetic data[END_REF] for details). For this work, we are particularly interested in finding methods to identify a priori (before any training) some relevant features just by analyzing existing data. The techniques developed in this article can be viewed as assistive tools for classical or data-driven modeling. It falls into the filter methods category, where numerous approaches have been considered based on linear correlation, Fisher score, etc. (see, for instance, [START_REF] Naik | A novel sensitivity-based method for feature selection[END_REF]). Several particularly promising approaches for filter methods rely on information theory. Mutual information maximization (a notion explained in the paper) has been extensively used to eliminate redundant inputs among large sets of inputs. For instance, information gain is a standard tool for text classification [START_REF] Yang | A comparative study on feature selection in text categorization[END_REF], where mutual information between a single feature and a class variable estimates the relevance of each feature one by one. Along the same line, [START_REF] Battiti | Using mutual information for selecting features in supervised neural net learning[END_REF] developed the mutual information feature selection (MIFS) algorithm for classification problems that searches greedily a set of features with high-mutual information with class labels but low mutual information among chosen features. colorblack Other similar algorithms exist, such as the Maximum Relevance Minimum Redundancy (MRMR) approach proposed by [START_REF] Peng | Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy[END_REF], or the Joint Mutual Information (JMI) method from [START_REF] Yang | Feature selection based on joint mutual information[END_REF] to derive relevant subsets of features. These approaches allow for the elimination of redundant information within features. Among other works, one may cite [START_REF] Bollacker | Linear feature extractors based on mutual information[END_REF], which used mutual information between class variables to form optimal linear combinations of inputs. However, as mentioned by [START_REF] Sindhwani | Feature selection in mlps and svms based on maximum output information[END_REF], these methods, which deal with continuous variables in classification tasks, demand large amounts of data and high-computational complexity. colorblackMany other algorithms exist and they have been widely used in particular for classification tasks (e.g. medical image analysis [START_REF] Allam | A study on optimization techniques in feature selection for medical image analysis[END_REF], phenotype classification [START_REF] Ding | Minimum redundancy feature selection from microarray gene expression data[END_REF], etc.). A complete review may be found in [START_REF] Li | Feature selection: A data perspective[END_REF]. The present work is inspired by this existing literature, except that the developed approach is tailored explicitly for physics-related problems. 
In contrast to the articles cited above, this study focuses on regression tasks, and we do not want to perform a simple elimination of input features. Instead, it aims to form specific nonlinear feature combinations that are both relevant for physical modeling and understandable (in a symbolic way). Additionally, the last part of the paper proposes an approach that accounts for the physical dimension of input features. The introduced algorithms are new, and they come with several implementation challenges that are addressed in the paper. Finally, one major novelty is that all methods are based on the recent work of [START_REF] Belghazi | Mutual information neural estimation[END_REF], which provides innovative techniques to estimate the mutual information between two random variables. The paper is organized as follows. The first part defines the context of the study and illustrates the issues addressed in this article using a synthetic toy model. The complete approach and corresponding algorithm are then detailed. This algorithm is tested in the next section on the previously introduced toy model. A variant of this model is also considered to present a methodology that produces several reduced variables instead of a single one. Finally, before concluding, the last section presents a way to colorblack include dimensional knowledge about the input features in the algorithm. The method is tested on a synthetic case inspired by the Law of the Wall from the turbulent boundary layer theory. All results of the paper have been produced using the pyTorch library [START_REF] Ketkar | Introduction to pytorch[END_REF]. Definitions and proposed strategy Notations and definitions Let us consider a quantity f = (f 0 , f 1 , . . . , f n ) ∈ R n to model. The set of input variables of the sought model is q = (q 0 , q 1 , . . . , q m ) ∈ R m . In the context of fluid dynamics, f may be for instance a corrective term for a RANS model (as done in [START_REF] Parish | A paradigm for data-driven predictive modeling using field inversion and machine learning[END_REF] or [START_REF] Volpiani | Machine learning-augmented turbulence modeling for rans simulations of massively separated flows[END_REF]), and q may be some local fluid variables such as the density, velocity, molecular viscosity, shear stress tensor components, etc. Let us assume that f and q are linked through an unknown stochastic model M: M(q, η) = f , with η a stochastic variable associated with some incompressible noise (that is not modeled). The initial variable set is assumed to be not optimal, i.e., it is hard to learn a data-based model M from (q 0 , q 1 , . . . , q m ). It may be because this set of inputs contains redundant or useless information or because there exists a reduced set of variables that makes the mathematical model simpler (this aspect is further developed in section 2.2). Such improper input quantities will be called "naive" input variables in the following. As mentioned in the introduction, choosing physically relevant input features is a common issue for data-driven physical modeling. More generally, the proper selection of inputs is a known problem in deep learning that may drastically affect the convergence, generality, and accuracy of the learned model (see section 2.2 as an illustration of this). Finally, let us say that some reduced sets of more relevant input features exist. In physics, this would typically be a set of non-dimensional quantities or dimensional quantities involving an appropriate scaling. 
Therefore, the unknown model M may be split into two functions E and M ′ , such that M = M ′ •E, with E : R n → R l the mapping of the naive input vector q to a bettersuited l-dimensional space that feed the model M ′ . This new model M ′ is ideally easier to learn from data and may have enhanced understandability. The black square represents the range of values for (q 0 , q 1 ) used for training neural networks to estimate f (q 1 = 0 excluded). Illustrative example 2.2.1. Definition of the toy model This section presents a toy model used for a regression task using deep learning. It illustrates some intuitive ideas mentioned earlier, and it will be used later in the article to test the new approaches developed in this work. Let us consider a database containing 4000 triplets (q (i) 0 , q (i) 1 , f (i) ) obtained from the following nonlinear stochastic toy model f = M(q 0 , q 1 , η) = q 2 0 q 1 + 3 cos(2π q 2 0 q 1 )(1 + η), (1) with η a white noise of amplitude 0.5, and (q (i) 0 , q (i) 1 ) taking values in [-1, 1]× [-1, 1] \ {0}. A graphic representation of this function is shown in figure 1. This toy model has been designed to be strongly nonlinear, very noisy, and such that a simple regression task by a neural network may become challenging if not done adequately. This model is particularly interesting due to its behavior near q 1 = 0: f diverges and oscillates very rapidly in this neighborhood, causing difficulties to perform a regression using a neural network. A first "naive" strategy to predict f consists of training a neural network N directly from (q 0 , q 1 ) using available data. Alternatively, since q = q 2 0 q 1 is a reduced variable that changes M into a simpler model M ′ : f = M ′ (q, η) = (q + 3) cos(2π q)(1 + η), (2) one may train a network Ñ to estimate f from q using the same training data. This approach is called hereafter a "model-informed" strategy (since it requires some prior knowledge of the model). We will demonstrate some clear advantages of this latter strategy in the following. Comparison of the naive and model-informed regression To perform a fair comparison between the "naive" and the "model-informed" strategy, N and Ñ have the same number of hidden layers, units per layer, and activation functions. Training is performed the same way: the database is split in two for the validation (20% of samples) and training (80% of samples), and an Adam optimizer [START_REF] Kingma | A method for stochastic optimization[END_REF] is used with the same learning rate to minimize the mean square error between the network output and the actual value of f . No extra penalization or more advanced technique is used here. Training is performed until an early-stopping criteria (based on the validation loss) is met. Full details are given in Appendix A. Learning curves from figure 2 show that the naive strategy has a poor and slow convergence. By comparison, the model-informed approach is much easier to train. The interpolation and extrapolation capability of each network may be seen in figure 3. The naive approach is not only unable to extrapolate outside from the training range (q (i) 0 , q (i) 1 ) ∈ [-1, 1] × [-1, 1] \ {0} , but even its interpolation capabilities are unsatisfactory. On the other hand, the model-informed strategy has much better interpolation capabilities (figure 3(right)). 
It is even able to extrapolate to some extent: while the center-left and center-right parts of the domain in figure 3(right) are not well predicted, the rest shows good accuracy even for unseen values of (q_0, q_1), because the corresponding values of q have actually been seen during training. These results do not demonstrate that one cannot develop a more advanced training strategy or network architecture to make the naive strategy work with this database. Still, they highlight that using specific input features may significantly ease training. This numerical experiment is a simple illustration of known neural network training behavior. Yet, it highlights well the motivations behind the present paper: the need for a strategy to deduce the "proper" reduced variable(s) to use from a database. For instance, for this toy model, it may ideally tell to use q_0^2/q_1 as an input feature. This would ease the network's training and improve its interpolation/extrapolation capabilities. Figure 4: (Left) estimation of f obtained using q^{-1} as input feature. (Right) estimation of f obtained using q^2 as input feature. The black square represents the range of values for (q_0, q_1) used for training (everything outside the square shows extrapolation capabilities). Non-unicity of reduced variables The previous results show that the reduced variable q is a much better input feature choice than (q_0, q_1). But one may wonder if other choices could be even better. In particular, for this paper, it is interesting to see whether variables of the form q^x are also good choices (the reason for this particular form is explained in section 3.2). In general, as soon as x ≠ 0, such a variable makes a better input feature than (q_0, q_1): the dimensional reduction of the input space mechanically gives a network trained with such a variable some extrapolation capability. The only issue is that for non-integer values of x, q^x is not defined for negative values of q_0 or q_1, so one may favor integer values for x. Nonetheless, most variables of the form q^x yield nearly equivalent results in terms of interpolation/extrapolation performance. For instance, figure 4(left) shows the result field obtained from a network trained with the input feature q^{-1}: except for the zone near q_0 = 0 (harder to learn due to the discontinuous behavior induced by this input feature choice), the overall result is similar to that of the previous section. In contrast, using q^2 provides significantly better results (figure 4(right)). This is because the relation between (q_0, q_1) and f displays some symmetries, but the network is unaware of this and has to learn it from data. When using q^2 as an input feature, the reduced model to learn M′ no longer has any symmetry. Thus the network has less information to learn and reaches better regression capabilities. The conclusion will further discuss these remarks since they provide interesting insights and possible future work directions about the general question of choosing proper input features, which is the central topic of this article.
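For readers wishing to reproduce this illustration, a minimal NumPy sketch of the synthetic database of equation (1) is given below, together with a quick check that anticipates the remark of the next paragraph: the linear (Pearson) correlation between the reduced variable and f is small, whereas a nonparametric mutual-information estimate is clearly positive. The uniform noise distribution, the random seed and the use of scikit-learn's nearest-neighbor MI estimator are illustrative assumptions, not choices made in this paper.

import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 4000
q0 = rng.uniform(-1.0, 1.0, n)
q1 = rng.uniform(-1.0, 1.0, n)
q1[np.abs(q1) < 1e-3] = 1e-3                     # exclude q1 = 0, as in the paper
eta = 0.5 * rng.uniform(-1.0, 1.0, n)            # noise of amplitude 0.5 (distribution assumed)
q_red = q0**2 / q1                               # reduced variable q of the model-informed strategy
f = (q_red + 3.0) * np.cos(2.0 * np.pi * q_red) * (1.0 + eta)   # equations (1)-(2)

X_naive = np.stack([q0, q1], axis=1)             # inputs of the naive network N
X_informed = q_red[:, None]                      # input of the model-informed network

r, _ = pearsonr(q_red, f)                        # small: linear correlation misses the dependence
mi = mutual_info_regression(X_informed, f, random_state=0)[0]   # positive: q is informative about f
print(f"Pearson r = {r:.3f}, estimated MI = {mi:.3f}")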
Remark on correlation analysis to find relevant input features One widely used tool to determine the relevancy of a given input variable to model another is the computation of correlation coefficients. The choice of this particular toy model is interesting because it shows that a correlation analysis cannot indicate that q is a well-suited variable: even if η = 0, the Pearson correlation coefficient between q and f for (q (i) 0 , q (i) 1 ) ∈ [-1, 1] × [-1, 1] \ {0} would only be 0.04. This limitation is due to the linear nature of a correlation analysis, which would lead here to discard q as a relevant input variable to model f . Mutual information concept Before introducing a new approach for finding good input features for physical models, let us define the notion of "proper" input variable better. This paper bases the answer on information theory: a well-chosen input feature should provide as much information as possible on the quantity to model. This can be quantified using mutual information, a concept defined below. In information theory, the mutual information I(X, Y ) between two random variables X, Y from a space X × Y is defined as I(X, Y ) = D KL (P Y,X ∥P X ⊗ P Y ) , (3) where P (X,Y ) is the joint probability distribution of the pair (X, Y ), P X and P Y are the marginal distribution of the pair (X, Y ), and D KL (•∥•) is the Kullback-Leibler (KL) divergence. After expending this expression into differential form I(X, Y ) = X ,Y P X (X)P Y |X (Y ) log P Y |X (Y ) P Y (Y ) dXdY = - Y P Y (X) log P Y (Y )dY + X P X (X) Y P Y |X (Y ) log P Y |X (Y )dY dX = E Y ∼P Y [-log P Y (Y )] -E X∼P X ,Y ∼P Y |X [-log P Y |X (Y )], (4) where P Y |X is the conditional distribution of Y knowing X, a well-known alternative definition of I involving Shannon's entropy H appears I(X, Y ) = H(Y ) -H(Y |X). (5) Equation ( 5) gives an alternative view on I: it is linked to the remaining uncertainty on Y (respectively X) once X (respectively Y ) is known. In other words, I(X, Y ) is the amount of information contained in one random variable about the other. For instance, if the two variables are independent, then H(Y |X) = H(Y ) and I(X, Y ) = 0. Contrary to linear correlation, it is a measure of true dependence since it is able to correctly quantify nonlinear statistical dependency [START_REF] Kinney | Equitability, mutual information, and the maximal information coefficient[END_REF] (which is not the case for linear correlation as illustrated with the toy problem from section 2.2). The proposed strategy of this paper, detailed in the next section, relies on maximizing the mutual information between some input combinations and the quantity to model: some parameters A (the weights of a neural network E A ) will be optimized to maximize I(E A (q), f ) (using notations from 2.1), thus providing the mapping from naive to relevant input features. Mutual information maximization strategy In most cases, it is impossible to directly maximize I(E A (q), f ) because it involves the unknown posterior distribution P q|f (using the notation q = E A (q)). To overcome this issue, we propose an approach based on the recent work of [START_REF] Belghazi | Mutual information neural estimation[END_REF], where the mutual information is first estimated using a dual representation of the KL-Divergence (providing a lower bound). This estimation is then used to maximize the mutual information. 
The estimation of the mutual information relies on the Donsker-Varadhan (DV) representation [START_REF] Donsker | Asymptotic evaluation of certain markov process expectations for large time[END_REF], based on the following theorem (see for instance [START_REF] Belghazi | Mutual information neural estimation[END_REF] for the proof) Theorem 1. Let Ω be a sample space, and P and Q two given probability distributions on Ω. Then, the KL-divergence admits the following dual representation: D KL (P ∥Q) = sup T :Ω→R E P [T ] -log(E Q [e T ]), (6) with the supremum taken over all functions such that the expected values are finite. Since equation (3) defines the mutual information as the KL-divergence of the joint distribution and the product of the marginals, the DV-representation straightforwardly provides lower bounds that may be used to maximize the mutual information I(E A (q), f ). To estimate I(X, Y ), the idea is to consider a large family of function T ψ : X , Y → R parametrized as a neural network with parameters ψ ∈ R K (with K the number of weights and bias of the network), and set ψ such that it maximizes the quantity Ĩψ (X, Y ) = E P X,Y [T ψ ] -log(E P X ⊗P Y [e T ψ ]). ( 7 ) This quantity is a lower bound for I(X, Y ), and such a network is called a statistics network in the Mutual Information Neural Estimator (MINE) framework developed by [START_REF] Belghazi | Mutual information neural estimation[END_REF]. Given the universal approximation theorem for neural networks [START_REF] Cybenko | Approximation by superpositions of a sigmoidal function[END_REF][START_REF] Hornik | Multilayer feedforward networks are universal approximators[END_REF], one may expect to get through Ĩψ (X, Y ) an arbitrarily tight bound given that the network complexity is high enough. Details about the network architecture for T ψ used in the paper are given in the appendices. Adequate choice of function space to represent input features This section proposes a particular network architecture for E A suited for physical modeling, and that provides interpretable results. When looking at existing physical models, relevant input variables come from scalings of the form q = i q α i i . It generally does not involve more complex functions for dimensional reasons. For instance, the logarithmic Law of the Wall, which models a part of the velocity profile of turbulent boundary layers in fluid mechanics, is a relation linking the streamwise velocity u of the flow to the distance from the wall y that reads u + = 1 κ ln(y + ) + C + , (9) with τ w = µ ∂u ∂y y=0 , u * = τ w ρ , y + = yρu * µ , u + = u u * where ρ is the fluid density, µ the dynamic viscosity, and κ and C + two constant values. This law provides u from the intermediary variables y + and u * , which are combinations of the simpler variables y, ρ, µ, ∂u ∂y y=0 of the form [START_REF] Haghighat | A physicsinformed deep learning framework for inversion and surrogate modeling in solid mechanics[END_REF]. Therefore, we propose to use a logarithm representation of the quantity to model (we focus on log(f ) instead of f ), such that the input features are searched under the form q = i α i log(q i ), (10) instead of using equation [START_REF] Haghighat | A physicsinformed deep learning framework for inversion and surrogate modeling in solid mechanics[END_REF]. A neural network forming such quantities can easily be designed: it consists of a log activation followed by a linear layer with no bias. 
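A possible PyTorch sketch of such a logarithmic network, for a single output (m = 1), is given below. The exact implementation used in this work is only specified in its appendices, so this is an illustrative transcription of the description above; the small clamping constant is an assumption used to avoid log(0).

import torch
import torch.nn as nn

class LogarithmicNetwork(nn.Module):
    # Computes sum_i a_i * log(|q_i|), i.e. the log of prod_i |q_i|^(a_i); the weights a_i are the exponents
    def __init__(self, n_inputs: int, n_outputs: int = 1):
        super().__init__()
        self.linear = nn.Linear(n_inputs, n_outputs, bias=False)   # weights = exponents alpha_i

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        return self.linear(torch.log(q.abs().clamp_min(1e-12)))    # |.| handles negative inputs

    def exponents(self) -> torch.Tensor:
        return self.linear.weight.detach().squeeze()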
The weights of such a network are directly the exponents of equation ( 8). This architecture, called in the following a logarithmic network (LN), is represented in figure 5. Note that the inputs are first made positive by taking their absolute value to handle negative numbers properly. This has no impact in the method since it is the same set of exponents a i that relates q with q i and |q| with |q i |. As a side remark, an alternative solution could have been to use the complex definition of the logarithmic function log(z = |z|e iθ ) = log(|z|) + iθ. (11) One limitation remains: the LN cannot handle null values, which have to be removed or replaced by some ϵ value in the training database considered. Note that although it is not explored in this paper, by adding a linear layer before the log activation, this approach may be straightforwardly extended to the more general form of input features which may encompass some specific cases of physical scaling that may be not covered by relation [START_REF] Haghighat | A physicsinformed deep learning framework for inversion and surrogate modeling in solid mechanics[END_REF]. In any case, different functional spaces that would be more fitted to a given problem may easily be designed and used in the framework presented in this paper. Note that in the following, LN with a single output will be exclusively used (corresponding to m = 1 in figure 5) due to the particular strategy used for models with multiple reduced variables (explained in section 3.3). q = i j a ij q j α i , (12) q 0 q 1 q n . . . log(|q 0 |) log(|q 1 |) log(|q n |) . . . log(| • |) Linear layer n × m matrix A n i=1 a 0i log(|q i |) n i=1 a 1i log(|q i |) Feature Discovery algorithm This section combines the notion and techniques introduced above to propose the Feature Discovery (FeDis) algorithm. The idea is to maximize the mutual information between the output of an LN and a quantity to model using a MINE statistical network. The overall procedure to compute a single reduced variable is presented in algorithm 1. One may see that a penalization of the LN weights is added to the loss function. In the cases treated in the paper, a L 1 penalization was systematically considered since it promotes sparsity (and therefore helps to discard naive inputs that do not contain information about the modeled quantity) and was found to improve convergence overall. Other more advanced regularizations to handle multiple reduced variables are discussed in section 3.3. Additionally, algorithm 1 shows one more implementation specificity: instead of maximizing the mutual information between E(q) and f , the algorithm uses E(q) and log(|f |). This has no impact on the algorithm (maximizing the mutual information between these transformed variables still provides the input that contains the most information about f ) and has the advantage Algorithm 1 Feature Discovery (FeDis) algorithm m ← 1 Set the dimension of the latent space for the reduced variables to 1 A, ψ ← Initialize network parameters for the 2 networks E A (LN network, output dim. = m) and T ψ while loss L not converged do Draw a batch of randomly sampled pair (q 1 , f 1 ), . . . , (q b , f b ) Eliminates/process pairs with (q i ) having null component(s) Form the pairs c 1 = (E A (q 1 ), log(|f 1 |)), . . . , c b = (E A (q b ), log(|f b |)) (joint distribution) Form (f ′ 1 , . . . , f ′ b ) by shuffling (f 1 , . . . , f b ) (Marginal distribution) Form the pairs c ′ 1 = (E A (q 1 ), log(|f ′ 1 |))), . . . 
, c'_b = (E_A(q_b), log(|f'_b|))
Evaluate the cost function L ← -(1/b) Σ_{i=1}^{b} T_ψ(c_i) + log( (1/b) Σ_{i=1}^{b} exp(T_ψ(c'_i)) )
[Optional] Add regularization terms L ← L + R(θ)
Jointly update θ and ψ to minimize L (gradient-based optimization)
end while
of yielding entries for T_ψ of lesser magnitude (which helps avoid float-overflow issues that may occur when evaluating exp(T_ψ(·))). Note that the original paper of [START_REF] Belghazi | Mutual information neural estimation[END_REF] about MINE networks introduced some advanced training techniques, such as gradient clipping and a modified loss formulation to avoid bias during the gradient descent. These techniques have not brought any significant improvement to the results of this paper. They are therefore neither introduced nor used in the present article. Test of the FeDis algorithm Results on the augmented toy model The toy model from section 2.2 is used to demonstrate the ability of the FeDis approach to find relevant reduced inputs from data generated by a noisy nonlinear model. We consider an augmented set of 14 naive input features (q_0, q_1, ..., q_13) that adds 12 useless inputs to the original dataset (to assess the ability of the approach to sort useful/useless variables). The model is therefore f = M(q_0, q_1, q_2, ..., q_13, η) = (q_0^2/q_1 + 3) cos(2π q_0^2/q_1)(1 + η), (13) with η a white noise of amplitude 0.5. Consistently with section 2.2, the training database is made of 4000 samples (q^(i)_0, ..., q^(i)_13, f^(i)), with each input q_j randomly sampled in [-1, 1] \ {0}. All details to reproduce the test case are given in the appendices. Figure 6 shows the corresponding evolution of the exponent value of each input during training. As expected, all useless variables are quickly discarded, and the algorithm yields the reduced variable q_1^{3.01} / q_0^{5.99} (i.e., approximately q^{-3}). Normalization of the exponents The FeDis approach does not yield q = q_0^2/q_1, but instead a variable of the form q^x. This was expected since I(q, f) = I(q^x, f) for x ≠ 0 (the mutual information I(X, Y) is preserved by any invertible deterministic transformation of X, see [START_REF] Belghazi | Mutual information neural estimation[END_REF]). Therefore, given the structure of the LN network, the maximization strategy followed here leaves the exponent x undetermined (the convergence of x is eventually due to the L1 penalization of the weights, which forbids the exponents from becoming too large). Finding exactly q_0^2/q_1 would have been coincidental. But as mentioned in section 2.2, using q^x as input feature provides the same advantages as using q. The only limitation is that non-integer exponents are not defined for negative numbers. With this in mind, it is easy to rescale the exponent a posteriori to overcome this issue, since the method provides a symbolic representation of the input. One could also imagine penalizing non-integer exponents (for instance, using the function p(a) = -λ_int cos(2πa)), but that would add one extra hyperparameter λ_int for an uncertain and arguable improvement of the original algorithm.
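Coming back to Algorithm 1, a compact PyTorch sketch of one optimization step is shown below, reusing the LogarithmicNetwork sketched earlier. The statistics-network architecture and the hyperparameter values are assumptions chosen for illustration; the ones actually used in this work are given in its appendices.

import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    # T_psi: takes a pair (reduced variable, log|f|) and returns a scalar score
    def __init__(self, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ELU(),
                                 nn.Linear(hidden, hidden), nn.ELU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

def fedis_step(ln_net, t_psi, optimizer, q_batch, f_batch, l1_weight=1e-3):
    # One iteration of Algorithm 1: maximize the Donsker-Varadhan lower bound of I(E_A(q), log|f|)
    x = ln_net(q_batch)                                          # reduced variable E_A(q), shape (b, 1)
    y = torch.log(f_batch.abs().clamp_min(1e-12)).reshape(-1, 1)
    y_shuffled = y[torch.randperm(y.shape[0])]                   # samples of the product of marginals
    dv_bound = t_psi(x, y).mean() - torch.log(torch.exp(t_psi(x, y_shuffled)).mean())
    loss = -dv_bound + l1_weight * ln_net.linear.weight.abs().sum()   # L1 penalty promotes sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

A joint optimizer such as torch.optim.Adam(list(ln_net.parameters()) + list(t_psi.parameters()), lr=1e-3) then updates θ and ψ together, as in Algorithm 1.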
A simple a posteriori normalization to remove the undetermined nature of the exponent is given by algorithm 2, which sets the smallest exponents to 1 (discarding nearly zero values). If applied to the results obtained in section 3.1, the resulting reduced variable is q-1 = q 1 /q 2 0 . Algorithm 2 Exponent normalization procedure NormalizeExponents(a = (a 0 , a 1 , . . . , a n-1 )) a max ← max i (|a i |) Dealing with multiple reduced variables Only one reduced variable was needed for the toy model of section 3.1. But more than a single reduced variable may be needed for an actual physical model. To address this question, let us consider the following modified toy model f = M(q 0 , q 1 , q 2 , . . . , q 13 , η) = q 2 0 q 1 + 3 cos(2πq 0 q 1 )(1 + η). Then, one may want to use reduced variables of the form (q a 0 , qb 1 ), with q0 = q 0 q 1 and q1 = q 2 0 q 1 . A first idea would be to use an LN network with an output dimension m = 2 (see section 2.5) such that the FeDis algorithm would yield a 2-dimensional vector whose components are feature combinations maximizing the information about f . In practice, this idea does not work. Indeed, the most straightforward pair of variables maximizing I(•, f ) is the naive couple (q 0 , q 1 ) (the deterministic part of f is fully defined once these two variables are known). Therefore, since the algorithm promotes sparsity, the result is likely to be (q 0 , q 1 ) (or a trivial variant). Actually, the algorithm could output nearly any pair of independent combinations of q 0 and q 1 and would still maximize the mutual information with f . We need a way of promoting a combination set where each component taken individually also maximize the mutual information. The solution is to compute one reduced variable at a time. A first run of the algorithm provides the nonlinear combination of features that maximizes I(•, f ) (standard FeDis procedure). Then, the algorithm is rerun to give a different nonlinear combination that maximizes the mutual information, and so on. This one-by-one procedure forbids the production of trivial feature combinations such as (q 0 , q 1 ) and ensures that at each step, the next reduced variable is the best nonlinear combination given the previous ones. For this approach to work, at each step, an additional penalization term needs to be added to forbid the algorithm from producing over and over the same combination of features. To illustrate the proposed approach, let us consider the second toy model defined by equation [START_REF] Mao | Physics-informed neural networks for high-speed flows[END_REF]. We proceed the same way as before: the training database is made of 4000 samples (q (i) 0 , . . . , q (i) [START_REF] Cai | Physics-informed neural networks (pinns) for fluid mechanics: A review[END_REF] , f (i) ), with each input q j randomly sampled in [-1, 1] \ {0}. The database is completed by the corresponding values of f generated from equation ( 14) with η = 0.5. Then, reusing the same techniques as before, the FeDis algorithm is run to get a first reduced variable (details regarding the neural networks are given in Appendix C). The result is shown in figure 7(top). The algorithm yields the reduced variable q -3.9 0 q -4.1 1 , which after normalization (defined in section 3.2) gives q0 ≈ q 0 q 1 (note that if rerun and initialized differently, the network would sometimes converged toward the second reduced variable ( q 2 0 q 1 ) x ). Then, the algorithm is rerun with a regularization promoting different feature combinations. 
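Since the normalization of section 3.2 is used repeatedly from here on, a short NumPy transcription of Algorithm 2 is given below for reference; the 10% threshold is the one stated in the text, and the rest is a direct translation offered only as a sketch.

import numpy as np

def normalize_exponents(a, threshold=0.1):
    # Algorithm 2: discard near-zero exponents, then set the smallest remaining one to 1 (in magnitude)
    a = np.asarray(a, dtype=float)
    a_max = np.abs(a).max()                               # highest exponent in magnitude
    a = np.where(np.abs(a) > threshold * a_max, a, 0.0)   # zero out values below 10% of a_max
    a_min = np.abs(a[a != 0]).min()                       # smallest nonzero exponent in magnitude
    return a / a_min

print(normalize_exponents([-5.99, 3.01, 0.02, 0.0]))      # approximately [-1.99, 1.0, 0.0, 0.0]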
Note that any set of exponents that would be a multiple of those already obtained is unwanted (the exponents maximizing the mutual information are defined up to a multiplicative constant). The additional penalization term to achieve this is the following. Consider that the set of exponents obtained during the first run of the algorithm is the vector a = (a_0, a_1, ..., a_n). Proportionality between the current exponents A = (A_0, A_1, ..., A_n) during the rerun and a may be quantified using the normalized scalar product s = ⟨a, A⟩ / (∥a∥ ∥A∥), (15) with ⟨·,·⟩ the classical dot product and ∥·∥ the associated norm. When a and A are orthogonal, s = 0, and when they are aligned, s = 1. Then, the following penalization is added to the loss function of the FeDis algorithm: L_a = λ_a exp( -(s - 1)^2 / σ^2 ), (16) with λ_a a weighting factor for the penalization. Note that the use of a Gaussian function instead of s directly allows not to promote orthogonality of a and A (which is unwanted), but only to penalize too significant alignments. For the present study, σ^2 = 0.2. An exhaustive study on the optimal choice of σ^2 has not been conducted, but other values near 0.2 (0.15, 0.25) have been tested and they hardly changed the results. The results obtained using this penalization are shown in figure 7(bottom) (implementation details given in Appendix C). The algorithm behaves as expected and yields the second reduced variable q_0^{-2.8} q_1^{1.4}, which gives after normalization q_1 / q_0^2. Dimension-aware algorithm Finding relevant reduced variables to produce a physical model echoes the well-studied question of dimensional analysis. The Buckingham π-theorem states that any physically meaningful governing equation involving k input variables can be reduced to an equation involving k - l dimensionless parameters, with l the number of physical dimensions involved. Therefore, it is natural to see if the techniques introduced in this paper can be used to automatically perform a feature reduction accounting for the physical dimension of the input variables. Physics-inspired test case The following proposes an algorithm that answers the question mentioned above. To introduce it, let us focus on the already-presented Law of the Wall, a relation between the streamwise velocity u in turbulent boundary layers and the distance from the wall y, which links two dimensionless quantities y^+ and u^+ defined as y^+ = y √(ρ ∂u/∂y|_{y=0}) / √µ, (17) and u^+ = u √ρ / √(µ ∂u/∂y|_{y=0}), (18) where ρ is the fluid density and µ the dynamic viscosity. The relation, already detailed in section 2.5, reads u^+ = (1/κ) ln(y^+) + C^+, (19) with κ and C^+ two constant values.
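For reference, these nondimensional variables can be evaluated directly from the raw features; the short NumPy sketch below is only an illustration of definitions (17)-(18), with arbitrary names and array shapes.

import numpy as np

def wall_units(y, dudy_wall, mu, rho):
    # Scalings entering equations (17)-(19)
    tau_w = mu * dudy_wall              # wall shear stress
    u_star = np.sqrt(tau_w / rho)       # friction velocity
    y_plus = y * rho * u_star / mu      # same as y * sqrt(rho * dudy_wall) / sqrt(mu)
    return y_plus, u_star

# u_plus for a velocity sample u is then simply u / u_star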
Figure 8: Schematic representation of the FeDis variant algorithm that accounts for physical dimensions, applied to the synthetic Law of the Wall case. Two LN networks produce f_y = log(|y|) + Σ_{i=1}^{7} A_i log(|X_{y,i}|) and f_u = log(|u|) + Σ_{i=1}^{7} B_i log(|X_{u,i}|); a statistics network T_Ψ evaluates the loss L = -Ĩ_Ψ(f_y, f_u) plus regularization terms, and A_1, ..., A_7, B_1, ..., B_7 and Ψ are updated by gradient-based minimization. In the following, we consider synthetic data generated using this relation. A database made of 10000 sets of values (y, ∂u/∂y|_{y=0}, µ, ρ, ω_0, ω_1, ω_2, ω_3) randomly sampled from ]0, 1]^8 has been generated. Note that this range of values has been chosen for simplicity since the case serves illustrative and demonstration purposes (thus the name of "physics-inspired" test case). It does not correspond to realistic values for each feature. Then, u is evaluated for each sample using the following relation: u = u^* ( (1/0.41) ln(y^+) + 5 ) (1 + η), (20) with η a random white noise of amplitude 0.2. The chosen values κ = 0.41 and C^+ = 5 are the standard constant values. Note that the variables ω_i are extra dummy quantities that are not used to compute u (they allow assessing the ability of the approach to discard useless data). Table 1: Dimension of the features defining q_y and q_u, given as exponents of length L, time T and mass M. For q_y: y (L: 1, T: 0, M: 0); ∂u/∂y|_{y=0} (L: 0, T: -1, M: 0); µ (L: -1, T: -1, M: 1); ρ (L: -3, T: 0, M: 1); ω_0 (L: 1, T: 0, M: 0); ω_1 (L: 0, T: 0, M: 1); ω_2 (L: 0, T: -1, M: 0); ω_3 (L: 1, T: 1, M: 1). For q_u, the table is identical except that u (L: 1, T: -1, M: 0) replaces y. These tables define the so-called dimension matrices M_{d,y} and M_{d,u} associated with q_y and q_u respectively. They are used to promote dimensionless combinations. As explained in section 4.2.2, a degenerate alignment of the two feature combinations (q_y ≈ q_u^α) is unwanted because it overlooks the relation between u and y. Therefore, the alignment of the exponent subvectors (A_1, ..., A_7) and (B_1, ..., B_7) needs to be penalized. This is done using the same alignment penalization from section 3.3 (equation (16)). Results Details and hyperparameters used to produce the results are given in Appendix D. Figure 9 shows the evolution of the exponents A and B during training. One may see that it produces the wanted result: the algorithm is able to nearly recover the expressions of y^+ and u^+ (equations (17) and (18)). One downside of the approach is that it includes multiple penalization terms. Therefore, extra hyperparameters need to be tuned. These hyperparameters have been set by trial and error. Typical behaviours when they are not set correctly are either all exponents going to zero, or the two exponent subvectors becoming proportional with a very high amplitude (see section 4.2.2 for the explanation).
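Before concluding, the two extra penalization terms of section 4.2.2 can be sketched in a few lines of NumPy. Equation (16) and the value σ^2 = 0.2 follow the text; the squared-norm form of the dimensional penalty and the values of λ_d and λ_a are assumptions, since the text only specifies an L2-norm-based penalty weighted by λ_d.

import numpy as np

def dimensional_penalty(A, M_d, lambda_d=1.0):
    # Promote dimensionless combinations: penalize the resulting (L, T, M) exponents D = A M_d
    D = A @ M_d
    return lambda_d * np.sum(D**2)

def alignment_penalty(A, a_prev, lambda_a=1.0, sigma2=0.2):
    # Equation (16): penalize exponent vectors that are too aligned with a previously found one
    s = np.dot(a_prev, A) / (np.linalg.norm(a_prev) * np.linalg.norm(A) + 1e-12)
    return lambda_a * np.exp(-(s - 1.0)**2 / sigma2)

# Dimension matrix M_{d,y} of Table 1: rows (y, du/dy|0, mu, rho, w0, w1, w2, w3), columns (L, T, M)
M_dy = np.array([[1, 0, 0], [0, -1, 0], [-1, -1, 1], [-3, 0, 1],
                 [1, 0, 0], [0, 0, 1], [0, -1, 0], [1, 1, 1]], dtype=float)
A_expected = np.array([1.0, 0.5, -0.5, 0.5, 0.0, 0.0, 0.0, 0.0])   # exponents of y+ (section 4.2.1)
print(dimensional_penalty(A_expected, M_dy))                        # 0: y+ is dimensionless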
As mentioned in the article, the framework could be modified to generate different forms of feature combinations that may be better adapted to some specific modeling problem, distinct from those considered in the paper. Note that section 2.2 has shown that some feature combinations that eliminate the symmetries of the model may provide significantly enhanced results over other reduced variables. As is, the algorithm does not account for possible symmetries in the data. This may be a future direction to explore to attempt to improve the algorithm. One identified downside of the approach developed here is that it involves several hyperparameters that need to be tuned adequately (but this is an issue for most of the existing data-driven techniques, unfortunately). The paper's results were relatively robust with respect to these hyperparameters. But the synthetic cases considered were rather simple. It may be helpful to dedicate future studies to a more thorough analysis of the hyperparameter robustness in more complex cases. As the paper's primary goal was to introduce a new methodology for input feature design, exhaustive studies on the optimal choices of hyperparameters are left for future work, which may help to find empiric rules or automatic techniques to set them. But arguably, the most interesting future work concerns the application of the FeDis technique to some of the open modeling problems mentioned in the introduction, such as RANS modeling for turbulent flows (open-source databases are already available to explore this path, see [START_REF] Mcconkey | A curated dataset for data-driven turbulence modelling[END_REF]). The mathematical and numerical framework introduced here is general and could be applied theoretically to any dataset. Nonetheless, it is not excluded that processing actual physical data involving advanced feature dependencies may raise implementation challenges that did not appear in this first study. For instance, since the size of the synthetic datasets from the present study was small, the question of the computational cost and convergence speed has not been investigated. Another interesting aspect is that actual non-synthetic datasets may involve "non-homogeneous" data stemming from multiple physical phenomena. Therefore, it may sometimes be hard to get a single model for the whole dataset, and designing multiple coexisting data-driven models may be better. Each of these models may have its own distinct set of relevant features. Therefore, exploring how the present approach may be coupled with clustering techniques may be an interesting question for future work. Acknowledgment This work is funded by ONERA as a part of the MODDA (MOdelization Data-Driven for Aerodynamics) project. state is dumped on disk and used for testing. This criterion is deactivated during the first fifteen epochs to avoid premature stopping. Figure 1 : 1 Figure1:(Left): Toy model f . The black square represents the range of values for (q 0 , q 1 ) used for training neural networks to estimate f (q 1 = 0 excluded). Figure 2 : 2 Figure 2: (Left): learning curves of N (naive strategy). (Right): learning curves of Ñ (model-informed strategy). The loss is the mean square error between the network output and the target value. Training is stopped based on a criterion defined in Appendix A. Figure 3 : 3 Figure 3: (Left): estimation of f produced by N (naive strategy). (Right): estimation of f produced by Ñ (model-informed strategy). 
The black square represents the range of values for (q 0 , q 1 ) used for training (everything outside the square shows extrapolation capabilities). n i=1 a mi log(|q i |) Figure 5 : 5 Figure 5: Logarithmic Network (LN) structure. The hyper-parameter m is chosen depending on the number of input features chosen. In this article, given the strategy proposed, m will always be 1. Figure 6 : 6 Figure 6: Evolution of the learned exponents for each input variable. Figure 7 : 7 Figure 7: Evolution of the learned exponents for each input variable, second toy model (equation (14)). (Top): first run producing the first reduced variable. (Bottom): second run producing the second reduced variable, obtained by adding a penalization promoting new input feature combinations. Figure 9 : 9 Figure 9: Evolution of the learned exponents for the Law of the Wall test case. (Top):results from the first LN network (q y ) (Bottom): results from the second LN network (q u ). The results are close to the wanted expressions defined by equations (17) and[START_REF] Battiti | Using mutual information for selecting features in supervised neural net learning[END_REF]. get the highest exponent m ← |a| > 0.1a max build a boolean mask for values lower than 10% of a max a ← a • m set all small values to zero ã ← nonzero(a) store all nonzero values a min ← min i (|ã i |) get the smallest nonzero exponent a ← a/a min normalize exponents end procedure Description of the algorithm 4.2.1. Strategy The algorithm is described in figure 8. It involves two LN networks, respectively producing a feature combination q y and q u of the form q y = y ∂u ∂y q u = u ∂u ∂y Note that the first weights A 0 and B 0 (the exponents of y and u respectively) are frozen to 1. Given the considered model (equations ( 17) and ( 18)), the expected outcome is A = (1, 0.5, -0.5, 0.5, 0, 0, 0, 0) and B = (1, -0.5, -0.5, 0.5, 0, 0, 0, 0). The rest of the algorithm is similar to the FeDis approach: the exponents are computed by maximizing a lower bound of the mutual information I(q y , q u ), following the same approach as that of algorithm 1. The specificity here comes from the penalization terms detailed in the next section. Penalization of the loss In addition to the classical L1 penalization of the exponents (used for all results in the paper), two specific penalization terms have been added for this algorithm. The first one aims to promote non-dimensional combinations of the features. The dimensional matrices M d,y and M d,u associated with q y and q u , respectively, are presented in table 1. They gather the physical dimension of each feature. The dimension has been set arbitrarily for the dummy variables ω i . The dimension of the LN combinations corresponding to A and B is then simply given by the vector-matrix products D y = AM d,y and D u = BM d,u (yielding a 3 × 1 vector). The extra penalization is then based on the L2-norm of these vectors: with λ d a weighing coefficient for the penalization. The algorithm has an easy (but unwanted) way to maximize the mutual information between q y and q u . If the subvectors à = (A 1 , . . . , A 7 ) and B = (B 1 , . . . , B 7 ) become aligned and their components becomes really large, it may lead to q y ≈ q α u , yielding a very high mutual information I(q y , q u ). This Appendix A. Regression task: section 2.2 The networks are made of four layers with 150 hidden units, eLU activation functions, followed by a last linear layer to produce the output. 
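The exponent post-processing procedure quoted above (threshold at 10% of the largest magnitude, zero out the rest, rescale by the smallest surviving value) can be made runnable. The following NumPy version is a minimal sketch; the function name is chosen here for illustration.

    import numpy as np

    def postprocess_exponents(a, rel_tol=0.1):
        a = np.asarray(a, dtype=float)
        a_max = np.abs(a).max()                             # largest exponent magnitude
        a = np.where(np.abs(a) > rel_tol * a_max, a, 0.0)   # zero out values below 10% of the max
        a_min = np.abs(a[a != 0.0]).min()                   # smallest surviving magnitude
        return a / a_min                                    # rescale so the exponents are O(1)

    # Second run of the second toy model (figure 7, bottom): the raw exponents
    # (-2.78, 1.40, 0, ...) become approximately (-2, 1, 0, ...), i.e. q1 / q0**2.
    print(postprocess_exponents([-2.78, 1.40, 0.0, 0.0]))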
The loss function is the standard mean square error between output and target values. The network is trained using mini-batches of 100 samples. The optimizer is based on the Adam algorithm (learning rate of 10 -4 ). Training is stopped when the validation loss exceeds the lowest value encountered so far by more than 2% for more than thirty consecutive epochs. Then, the model's current The function T ψ from the algorithm is a neural network made of four layers with 200 hidden units, eLU activation functions, followed by a last linear layer to produce the output (scalar value). The optimizer is a Stochastic Gradient Algorithm (SGD) with a learning rate of 7×10 -2 . As mentioned in the article, an L1 penalization on the weights of the LN is added, this penalization is weighted in the total loss using a discount factor λ L1 = 5 × 10 Appendix C. FeDis algorithm: section 3.3 The function T ψ from the algorithm is a neural network made of four layers with 200 hidden units, eLU activation functions, followed by a last linear layer to produce the output (scalar value). The optimizer is a Stochastic Gradient Algorithm (SGD) with a learning rate of 5 × 10 -2 . As mentioned in the article, an L1 penalization on the weights of the LN is added, this penalization is weighted in the total loss using a factor λ L1 = 5 × 10 -3 . Training is performed using mini-batches of size 200. For the second run of the algorithm, the extra penalization (alignment penalization, equation ( 16)) is weighted in the total loss using factor λ a = 3 × 10 -2 . All other hyperparameters are unchanged. The losses evolution, not shown in the paper, is visible in figure C.11. The function T ψ from the algorithm is a neural network made of four layers with 200 hidden units, eLU activation functions, followed by a last linear layer to produce the output (scalar value). The optimizer is a Stochastic Gradient Algorithm (SGD) with a learning rate of 2 × 10 -2 . As mentioned in the article, an L1 penalization on the weights of each LN is added, this penalization is weighted in the total loss using a factor λ L1 = 5 × 10 -2 . Training is performed using mini-batches of size 400. The first extra penalization (dimensionless promotion [START_REF] Bolón-Canedo | A review of feature selection methods on synthetic data[END_REF]) is weighted in the total loss using factor λ d = 5 × 10 -2 . The second extra penalization (alignment penalization, equation ( 16)) is weighted in the total loss using factor λ a = 1. The loss evolution, not shown in the paper, is visible in figure D.12.
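As a complement to these appendices, the pieces they describe can be sketched as follows. PyTorch is an assumption (the text does not name the framework), make_T_psi and alignment_penalty are illustrative names, and the input dimension of T_ψ (the pair of LN outputs) is inferred rather than stated.

    import torch
    import torch.nn as nn

    def make_T_psi(input_dim=2, hidden=200):
        # Statistical network of Appendices B-D: four eLU layers of 200 units,
        # followed by a linear layer producing a scalar.
        layers, d = [], input_dim
        for _ in range(4):
            layers += [nn.Linear(d, hidden), nn.ELU()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        return nn.Sequential(*layers)

    def alignment_penalty(A, a_prev, lam=3e-2, sigma2=0.2):
        # Equation (16): Gaussian bump penalizing exponent vectors A (1-D tensors)
        # that align with a previously found direction a_prev.
        s = torch.dot(A, a_prev) / (torch.norm(A) * torch.norm(a_prev) + 1e-12)
        return lam * torch.exp(-(s - 1.0) ** 2 / sigma2)

    T_psi = make_T_psi()
    opt = torch.optim.SGD(T_psi.parameters(), lr=5e-2)   # learning rate of Appendix C
    # Total loss, schematically: -I_hat + lambda_L1 * A.abs().sum() + alignment_penalty(A, a_prev)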
00409816
en
[ "phys.meca.stru", "spi.meca.stru" ]
2024/03/04 16:41:18
2002
https://hal.science/hal-00409816/file/sdee22_semblat_et_al.pdf
J F Semblat email: [email protected] P Dangla M Kham A M Duval . The amplification of seismic motion is analysed in terms of level, occuring frequency and location. For both sites, the amplification factor is found to reach maximum values of 20 (weak motion). Site effects nevertheless have very different features concerning the frequency dependence and the location of maximum amplification. For the shallow deposit in Nice, the amplification factor is very small for low frequencies and fastly increases above 1.0 Hz. The irregular Caracas basin gives a much different frequency dependence with many different peaks at various frequencies. The model for Caracas deep alluvial basin also includes a part of the local topography such as the nearest mountain. One can estimate seismic site effects due to both velocity contrast (between the basin and the bedrock) and local topography of the site. Furthermore, the maximum amplification is located on the surface for Nice, whereas some strong amplification areas also appear inside the basin itself in the case of Caracas. One investigates the influence of this focusing effect on the motion vs depth dependence. This is of great interest for the analysis of seismic response of underground structures. The form and the depth of alluvial deposits are then found to have a great influence on the location of maximum amplification on the surface but also inside the deposit for deep irregular basins. It is essential for the analysis of the seismic response of both surface and underground structures. Introduction The analysis of seismic site effects considers amplification versus frequency curves showing the range of the spectrum leading to large motion amplification. Experimental measurements are generally performed along the surface with various methods : microtremor recordings, real earthquakes measurements [START_REF] Duval | Relation between curves obtained from microtremor and site effects observed after Caracas 1967 earthquake[END_REF]. Information on in-depth motion could sometimes be obtained thanks to specific measurement networks [START_REF] Kashima | Underground earthquake recording at Kushiro JMA observatory. 9th World Conference on Seismic Zonation[END_REF][START_REF] Lussou | Seismic design regulation codes: contribution of K-net data to site effect evaluation[END_REF]. Through numerical methods, one can also study the amplification process in various types of geological structures. It is for instance possible to consider the vibratory resonance of alluvial basin [START_REF] Bard | The two-dimensional resonance of sediment filled valleys[END_REF][START_REF] Paolucci | Fundamental vibration frequencies of 2D geological structures[END_REF]. Otherwise, one can perform numerical analyses on site effects through explicit wave propagation models. In this paper, we try to study the influence of the basin geometry on site effects. Both surface and in-depth motion are especially considered to find out how they can be modified by some specific motion amplification for a typical basin geometry. The focusing effects are for instance taken into account to explain the possible increase of in-depth motion in some areas [START_REF] Sommerville | Seismic hazard evaluation. 12th World Conf. on Earthquake Eng[END_REF]. To perform such an analysis, seismic wave amplification is investigated in various types of alluvial basins considering the boundary element method. 
Shallow and deep alluvial basins For the analysis of in-depth motion amplification and focusing effect, we chose two alluvial basins with very different profiles : the first one is located in the centre of Nice (France) and is a wide flat basin (width 2 km, depth 60 m) [START_REF] Semblat | Numerical analysis of seismic wave amplification in Nice (France) and comparisons with experiments[END_REF], the second one, located in Caracas (Venezuela), is a deep irregular valley surrounded by mountains (width 3.6 km, depth 300 m) [START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF]. Some experimental or numerical investigations were performed previously for both basins [START_REF] Semblat | Numerical analysis of seismic wave amplification in Nice (France) and comparisons with experiments[END_REF][START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF][START_REF] Duval | Relation between curves obtained from microtremor and site effects observed after Caracas 1967 earthquake[END_REF]. We found that the amplitude versus frequency dependence is very different in each case. We will then try to analyse the variations of in-depth motion in both cases and to find out if focusing effects can actually influence motion amplification in a deep irregular basin. Modelling site effects by the BEM The numerical analysis of site effects for both types of basin was performed by the Boundary Element Method [START_REF] Semblat | Numerical analysis of seismic wave amplification in Nice (France) and comparisons with experiments[END_REF][START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF][START_REF] Bonnet | Boundary integral equation methods for solids and fluids[END_REF][START_REF] Dangla | A plane strain soil-structure interaction model[END_REF]. The method is very powerful since it allows the modelling of seismic wave propagation for large geological structures without such drawbacks as numerical dispersion for some other methods [START_REF] Semblat | Efficiency of higher order finite elements for the analysis of seismic wave propagation[END_REF]. The numerical analysis was performed considering plane seismic waves of various types [START_REF] Dangla | A plane strain soil-structure interaction model[END_REF][START_REF] Betbeder-Matibet | In-depth attenuation of seismic ground motion[END_REF]. The shear wave velocities were chosen as follows: for Nice [1] C 1 =300m/s in the deposit and C 2 =1400m/s in the bedrock ; for Caracas [START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF] C 1 =450m/s and C 2 =2500m/s respectively. Fig. 1 gives the isovalues of the amplification factor for both sites. The first one (Nice, French Riviera) is shallow and its geometry is very regular. Site effects are found to be strong in the deepest part of the deposit (left) between 1 and 2 Hz and in the thinnest part (right) for frequencies above 2 Hz [START_REF] Semblat | Numerical analysis of seismic wave amplification in Nice (France) and comparisons with experiments[END_REF]. For the second one (Caracas, Venezuela), there is a significant influence of the local topography (nearest mountains) as shown in Fig. 1 for 0.6 Hz [START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF]. 
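As an order-of-magnitude check on these two configurations (an estimate added here, not part of the BEM computations), the classical one-dimensional result for a homogeneous soft layer over rigid bedrock, f0 = C1/(4h), already separates the two sites:

    # Quarter-wavelength estimate f0 = C1 / (4 h); densities and the 2D basin
    # geometry are ignored in this 1D approximation.
    def f0_1d(c1, h):
        return c1 / (4.0 * h)

    print(f0_1d(300.0, 60.0))    # Nice:    1.25 Hz
    print(f0_1d(450.0, 300.0))   # Caracas: 0.375 Hz

These values are consistent with the amplification appearing above 1.0 Hz in Nice and with the low-frequency peaks (0.4-0.6 Hz) in Caracas, whereas the amplification levels themselves exceed such 1D estimates, as discussed below.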
The surface motion amplification has a very complex dependence on frequency. The irregular form of this basin as well as the large velocity contrast suggest that focusing effects could occur in the basin itself and influence the amplification process. In the following, we will estimate in-depth motion variations to determine if they can lead to deep amplification areas. This issue is very important for the design of earthquake resistant underground structures [START_REF] Semblat | Amplification and diffraction of seismic waves from underground structures[END_REF]. Occurrence of focusing effect Focusing effect is related to particular geological structures that can focus seismic energy because of their geometrical and mechanical features. Some unexpected localised zones of damage were especially observed after Northridge earthquake [START_REF] Sommerville | Seismic hazard evaluation. 12th World Conf. on Earthquake Eng[END_REF]. To analyse potential focusing effect, we compare the seismic motion amplification at various depths for both alluvial basins (shallow regular ; deep irregular). Fig. 2 displays the amplification factor in the whole shallow basin (Nice) at various frequencies. It is given versus depth and distance (along the free surface). For the lowest frequency (1.0 Hz), there is only one amplification area on the free surface and in-depth motion decreases regularly. For frequencies values of 1.4 and 1.6 Hz, there are several amplification areas along the free surface in the left deepest part of the basin. No significant amplification is observed in the thinnest part (right) for those frequency values. For larger frequency values (2.0, 2.2 and 2.4 Hz), many different amplification areas are obtained along the free surface in the left part of the basin except for the last frequency value leading to low amplification in this part. In the right part, there is a strong increase of the amplification factor values for the three largest frequencies. Nevertheless, the seismic motion amplification is always decreasing with depth inside the basin. For all frequencies, there is a monotonic decrease of ground motion values from maximum surface motion values. Since the shallow basin is very flat, no focusing effects is observed but there is still a basin effect leading to amplification values much larger than those obtained from 1D analytical estimation considering the mechanical features of the deposit [START_REF] Semblat | Numerical analysis of seismic wave amplification in Nice (France) and comparisons with experiments[END_REF]. For the shallow regular basin in Nice, site effects are then influenced by basin effects leading to seismic waves trapped in the deposit. However, there is no energy focusing effect due to the basin geometry and consequently no large in-depth amplification. In the case of the deep irregular basin in Caracas [START_REF] Semblat | Seismic site effects in a deep alluvial basin: numerical analysis by the boundary element method[END_REF], amplification values versus depth and distance are given in Fig. 3. For both first frequencies (0.4 and 0.8 Hz), there are one or several (respectively) amplification areas along the free surface and in-depth motion decreases regularly down to the bedrock. For the second value (0.8 Hz), the deep deposit also appears more sensitive to some basin edge effects than the shallow basin. 
For the third frequency (1.2 Hz), we can suspect some little focusing effect since there is a very slow decrease of in-depth motion on the right part of the basin. At the bottom of the deepest part of the basin, there is a rather large value of seismic amplification. The focusing effects is much clearer for frequency 1.4 Hz : in the deepest part of the basin there is a strong increase of in-depth motion. It corresponds to an area of strong motion amplification located inside the alluvial deposit. For larger frequencies (1.8 and 2.0 Hz), there are several parts of the basin where in-depth motion increases. For some places, deep amplification can reach similar values to those obtained along the free surface. At 1.8 Hz, three main areas lead to in-depth motion increase and there are six of them at 2.0 Hz. These results (Fig. 3) show a strong influence of focusing effects on in-depth motion amplification. In the next section, we will discuss the dependence of seismic motion on depth by comparing in-depth motion curves for this deep site at various frequencies. Fig. 2 : Amplification in the whole shallow deposit (Nice) at various frequencies. Fig. 3 : Amplification in the whole deep deposit (Caracas) at various frequencies. Influence on in-depth motion In the case of the shallow basin, there is always a regular in-depth motion decrease in agreement with the classical rules (Fig. 2). For horizontally multilayered media, a simple analytical analysis leads to explicit decreasing laws for in-depth seismic motion [START_REF] Betbeder-Matibet | In-depth attenuation of seismic ground motion[END_REF]. For a two-dimensional shallow regular basin, results given in Fig. 2 follow the same trend than analytical results in the multilayered case. To investigate the influence of focusing effects on in-depth motion for the deep irregular basin, several curves giving seismic motion versus depth are considered (Fig. 4). The variations of in-depth motion are very different. In some places, there could be a strong increase of in-depth motion due to focusing effect. As shown in Fig. 4, at 1.4 Hz, there is a maximum of the seismic motion inside the basin in its deepest part. For larger frequencies (1.8 and 2.0 Hz), the wavelength is shorter and in-depth motion local maxima appear in other parts of the deposit with different energy focusing processes. For the largest frequency (Fig. 4), there are even several different large motion areas along the maximum depth. In Fig. 4, the local maximum is shown to appear between 200 and 250m. For large frequency values, there are then several related focusing effects corresponding to the focus of seismic waves in shallower areas of the basin at shorter wavelengthes. The focusing effects are then influenced by the geometry of the basin as well as the depth/wavelength aspect ratio. Fig. 4 : In-depth motion at various locations and frequencies for the deep deposit (Caracas). Conclusion The analysis of seismic site effects for two very different alluvial basins (shallow, deep) gives interesting results on potential energy focusing effects [START_REF] Sommerville | Seismic hazard evaluation. 12th World Conf. on Earthquake Eng[END_REF]. For the shallow regular basin considered in Nice, there is no focusing effect and larger amplification is obtained along the free surface. The influence of the basin geometry, vs wavelength, is only observed on the location of maximum amplification areas (deepest part for low frequencies and thinnest part for higher frequencies). 
For the deep irregular basin in Caracas, various amplification areas are observed inside the basin itself starting in the deepest part of the basin at some intermediate frequency. For larger frequencies (shorter wavelengths), different parts of the basin lead to large deep amplification. In-depth motion variations are consequently influenced by focusing effects. The shallow basin gives a classical decrease of in-depth motion whereas the deep basin can lead to some in-depth motion increases due to energy focusing effects. It is of great interest for the design of earthquake resistant underground structures [START_REF] Semblat | Amplification and diffraction of seismic waves from underground structures[END_REF] as well as the analysis of seismic hazard in urban areas [START_REF] Guéguen | From soil-structure to site-city interaction[END_REF]. Fig. 1 : 1 Fig.1 : BEM modelling of site effects for shallow and deep alluvial deposits : amplification factor in the case of Nice (top) and Caracas (bottom) Seismic Site Effects for Shallow and Deep Alluvial Basins: In-Depth Motion and Focusing Effect
03660204
en
[ "spi" ]
2024/03/04 16:41:18
2021
https://hal.science/hal-03660204/file/Guilbert2021.pdf
B Guilbert • P Velex Influence of thin-rimmed/-webbed gears on transmission dynamic behaviour-Approximate dynamic factor formula Keywords: die es ermöglicht, dynamische Zahnkraftungen aus Finite-Elemente-Modellen, Dehnungsenergien und quasistatischen Übertragungsfehlern abzuschätzen die getesteten Geometrien die dynamischen Kopplungen zwischen dynamischen Eingriffskräften und Radkörperelastizität moderat bleiben The objective of this paper is to analyse the effect of thin-webbed/-rimmed and consequently flexible gear bodies on dynamic tooth loads. To this end, an approximate dynamic factor formula is used, which makes it possible to estimate dynamic mesh force amplifications from Finite Element models, strain energies and quasi-static transmission errors. It is shown that, whenever solid gears are considered, the dynamic factor derived from the complete FE model results agrees well with those given by the analytical formula. When thin-rimmed/webbed gears are considered, the outcomes from the approximate dynamic factor formula are still in reasonable agreement with those of the complete FE model although the influence of rotating gear body cannot be accounted for. This good agreement also reveals that, for the tested geometries, dynamic couplings between dynamic mesh forces and gear body elasticity remain moderate. Einfluss von dünnstegigen/-berandeten Zahnrädern auf das dynamische Verhalten des Getriebes -Näherungsweise Formel für den Dynamikfaktor Zusammenfassung Ziel dieser Arbeit ist es, den Einfluss von dünnstegigen/-berandeten und damit flexiblen Zahnradkörpern auf dynamische Zahnbelastungen zu analysieren. Dazu wird eine näherungsweise dynamische Faktorformel verwendet, 1I n t r o d u c t i o n The vast majority of the gear dynamics models are based on rigid discs connected by a time-varying elastic mesh interface [START_REF] Özgüven | Mathematical models used in gear dynamics-A review[END_REF][START_REF] Kahraman | Non-linear dynamics of a spur gear pair[END_REF][START_REF] Velex | A mathematical model for analysing the influence of shape deviation and mounting errors on gear behaviour[END_REF][START_REF] Vedmar | A method to determine dynamic loads on spur gear teeth and on bearings[END_REF][START_REF] Kubur | Dynamic analysis of a multi-shaft helical gear transmission by finite elements: model and experiment[END_REF]. Recently, however, efforts have been made to account for gear body flexibility. Li [START_REF] Li | Deformation and bending stress analysis of a threedimensional, thin-rimmed gear[END_REF][START_REF] Li | Effects of centrifugal load on tooth contact stresses and bending stresses of thin-rimmed spurs gears and with inclined webs[END_REF][START_REF] Li | Effects of misalignment error, tooth modifications and transmitted torque on tooth engagements of a pair of spur gears[END_REF] built a full FE model of lightweight gears and studied the influence of the corresponding additional deflections on tooth contact con-B. Guilbert [email protected] 1 LaMCoS-INSA de Lyon, 27 bis Avenue Jean Capelle, 69621 Villeurbanne, France ditions. Parker et al. [START_REF] Parker | Free vibration and stability of a spinning disk-spindle system[END_REF] used an elastic ring on constant stiffness foundation for flexible ring-gears and developed a 2D spur gear model combining finite element and analytical models [START_REF] Parker | Non-linear dynamic response of a spur gear pair: modelling and experimental comparisons[END_REF]. Still in the context of planetary gears, Abousleiman et al. 
[START_REF] Abousleiman | Modelling of spur and helical gear planetary drives with flexible gears and planet carriers[END_REF] inserted the results of condensed 3D finite element models of planet-carriers whereas Wu and Parker [START_REF] Wu | Modal properties of planetary gear with an elastic continuum ring gear[END_REF] used the thin ring theory to account for ring gear elasticity. Bettaieb et al. [START_REF] Bettaieb | A static and dynamic model of geared transmission by combining substructures and elastic foundations-Application on thin-rimmed gears[END_REF]a n dL i ue ta l . [START_REF] Liu | Hybrid dynamic modelling and analysis of high-speed thin-rimmed gears[END_REF] included a condensed 3D finite element of gear body into spur and helical gear models. In this paper, the semi-analytical formulation of the dynamic factor for mesh forces proposed by Velex and Ajmi [START_REF] Velex | On the modelling of excitations in geared systems by transmission errors[END_REF][START_REF] Velex | Dynamic tooth loads and quasi-static transmission errors in helical gears-Approximate dynamic factor formulae[END_REF] is revisited in the context of models integrating gear body flexibility. The formula was initially established for solid gears under the conditions of ab Fig. 1 a Hybrid gear model and b base plane [START_REF] Guilbert | Modular hybrid model to simulate the static and dynamic behaviour of high-speed thin-rimmed gears[END_REF] small relative variations of mesh stiffness, hence mainly for helical gears, and linear behaviour (no contact losses). Within this domain of validity, the results from the dynamic factor formula are critically assessed and confronted with those delivered by the hybrid gear dynamic models proposed in [START_REF] Guilbert | A mortar based mesh interface for hybrid finite element/lumped parameter gear dynamic models-Application to thin-rimmed geared systems[END_REF][START_REF] Guilbert | Modular hybrid model to simulate the static and dynamic behaviour of high-speed thin-rimmed gears[END_REF], which can capture the static and modal contributions of thin-rimmed/-webbed gears. Model presentation-analytical dynamic ratio formula Hybrid dynamic model The modular hybrid gear dynamic model (Fig. 1a) includes lumped parameter and finite elements. It is assumed that all the contacts between the teeth occur on the theoretical line of actions in the base plane (Fig. 1b), which are discretised into small segments centred on the potential points of contact Mij, where subscript i refers to the line of contact and j to the segment. A local stiffness kij [START_REF] Weber | Formänderung und Profilrücknahme bei Gerad-und Schrägverzahnten Antriebstechnik [Change in shape and profile modifications in spur and helical gears[END_REF] is attributed to every ij segment and distributed along the contact lines, making it possible to connect the pinion and gear degreesof-freedom by a time-varying, possibly non-linear, Winkler foundation representative of the mesh interface elasticity [START_REF] Guilbert | A mortar based mesh interface for hybrid finite element/lumped parameter gear dynamic models-Application to thin-rimmed geared systems[END_REF]. The mechanical environment is simulated by combining lumped parameter and shaft elements along with substructures (super-elements), which account for gear body contributions [START_REF] Herting | A general purpose, multi-stage component modal synthesis method[END_REF] as illustrated in Fig. 1a in the particular case of a thin-rimmed gear. 
The corresponding state equations are solved by a time-step Newmark's integration scheme combined with a unilateral contact algorithm verifying that all contact forces on the mating teeth are compressive [START_REF] Velex | A mathematical model for analysing the influence of shape deviation and mounting errors on gear behaviour[END_REF]. Dynamic ratio approximate formula The gear dynamic factor formula in [START_REF] Velex | Dynamic tooth loads and quasi-static transmission errors in helical gears-Approximate dynamic factor formulae[END_REF] is based on a classical 3D shaft finite element-lumped parameter dynamic model with rigid body gears [START_REF] Velex | A mathematical model for analysing the influence of shape deviation and mounting errors on gear behaviour[END_REF]. Assuming that the relative mesh stiffness variation is small, a main order approxima-tion of the instant dynamic factor is derived under the form [START_REF] Velex | Dynamic tooth loads and quasi-static transmission errors in helical gears-Approximate dynamic factor formulae[END_REF]: r Š 1+ X k s k k k k m Z .k/ (1) with ρk and kϕk, the percentage of modal strain energy in the mesh interface and the modal stiffness respectively for Φk, kth mode shape obtained with a time-averaged stiffness matrix (over one mesh period in this paper), km, the average (scalar) mesh stiffness Ζ(k), the contribution of the kth mode shape to the dynamic response when considering transmission error-based excitations, i.e., Z .k/ = X n1 N A n Ö k n 2 -1 +2 N B n k Ö k n Ö k n 2 -1 2 +4 2 k Ö k n 2 sin n# + N B n Ö k n 2 -1 -2 N A n k Ö k n Ö k n 2 -1 2 +4 2 k Ö k n 2 cos n# (2) where ξk is the damping factor of mode k -# = ˝1t, is an angular position (Ω1 constant pinion rotation speed) -Ö k = 1 ˝1 q k k m k , kϕk and mϕk modal stiffness and mass, is the kth natural frequency (with time-averaged mesh stiffness) -N A n = A n x0 and N B n = B n x0 , with x 0 the average static mesh deflection, are the dimensionless coefficients derived from transmission errors as: k TE 00 S =- X n1 n 2 A n sin n# + B n cos n# (3) k = cos ˇb x0 T M _ X 0 =k m k , _ X 0 = N K -1 F S V, N K averaged stiffness matrix, FS static mesh force, TE 00 S is the second order derivatives (with respect to ϑ) of the quasi-static transmission error under load. In contrast with [START_REF] Velex | Dynamic tooth loads and quasi-static transmission errors in helical gears-Approximate dynamic factor formulae[END_REF], the no-load transmission error does not appear in the present study as it has been conducted without any errors or tooth modifications. Test case definition The test case defined in Fig. 2 is an aeronautical power transmission with a solid pinion and a lightweight gear. The pinion torque is 400 Nm and the gear data are in Table 1. Solid gear For validation purposes, the original lightweight gear is replaced in the section by the solid gear shown in Fig. 3. The modes with the highest percentages of strain energy stored in the mesh interface are listed in Table 2.T h e s e modes are those which contribute mostly to dynamic tooth forces hence to the dynamic factor. Most of them are bending modes for the pinion shaft, as their maximum energy is stored in the beam elements of the pinion. The 4th mode corresponds to a 2 Nodal Diameter (2 N.D.) mode for the gear [START_REF] Guilbert | Hybrid models for the study if gear body dynamic deflections-Modes of the gear body[END_REF], showing that even solid gears can contribute to dynamic tooth loads via their deformable bodies. 
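The displayed formulae (1)-(3) of Sect. 2.2 above were damaged during text extraction. A plausible reconstruction, inferred from the symbol definitions accompanying them and to be checked against [16], reads

    r(\vartheta) \simeq 1 + \sum_{k} \rho_k \, \frac{k_{\varphi k}}{k_m} \, Z(k)    (1)

with \omega_k = \sqrt{k_{\varphi k}/m_{\varphi k}} the k-th natural frequency (time-averaged mesh stiffness) and \tilde{\Omega}_k = \omega_k / \Omega_1,

    Z(k) = \sum_{n \ge 1}
      \frac{\bar{A}_n\bigl[(\tilde{\Omega}_k/n)^2 - 1\bigr] + 2\,\bar{B}_n\,\xi_k\,(\tilde{\Omega}_k/n)}
           {\bigl[(\tilde{\Omega}_k/n)^2 - 1\bigr]^2 + 4\,\xi_k^2\,(\tilde{\Omega}_k/n)^2}\,\sin n\vartheta
      + \frac{\bar{B}_n\bigl[(\tilde{\Omega}_k/n)^2 - 1\bigr] - 2\,\bar{A}_n\,\xi_k\,(\tilde{\Omega}_k/n)}
             {\bigl[(\tilde{\Omega}_k/n)^2 - 1\bigr]^2 + 4\,\xi_k^2\,(\tilde{\Omega}_k/n)^2}\,\cos n\vartheta    (2)

    \kappa\,TE''_S = -\sum_{n \ge 1} n^2 \bigl( A_n \sin n\vartheta + B_n \cos n\vartheta \bigr),
    \qquad \bar{A}_n = A_n/x_0, \quad \bar{B}_n = B_n/x_0,    (3)

where x_0 is the average static mesh deflection and \kappa is the scalar coefficient whose (equally garbled) definition involves \cos\beta_b, the averaged stiffness matrix \bar{K}, the static mesh force F_S and k_m. With this reading, the tooth critical speeds correspond to n\,\Omega_1 \approx \omega_k, and Z(k) vanishes in the quasi-static limit \Omega_1 \to 0, so that r \to 1.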
It can be noticed, however, that the dominant modes for mesh forces are mainly those which would be obtained by considering rigid discs, thus validating this modelling option for solid pinions and gears. The dynamic responses in Fig. 4 have been calculated between 0 and 2500 rad/s on the pinion shaft. Two different parameters are represented versus speed, which are the maximum dynamic factor R (maximum dynamic-to-static mesh force ratio at a given speed or the maximum of r defined in ( 2)) and the RMS of the axial displacement at the node on the gear rim shown in Fig. 3b. The dynamic factor has been calculated by numerical time integration of the hybrid model equations of motion (solid line) and by using the maximum of r derived from the approximate formula (2) (dotted line with cross marks) using the hybrid model modes (hence a deformable solid gear). A third curve has been superimposed (dotted line), which corresponds to the results obtained by assimilating the gear to a rigid disc. The dynamic factor using the ISO standard [START_REF]Calculation of load capacity of spur and helical gears-Part 1: Basic principles, introduction and general influence factors[END_REF] with the theoretical critical speed nE1 given in [START_REF]Calculation of load capacity of spur and helical gears-Part 1: Basic principles, introduction and general influence factors[END_REF] (red dotted curve in Fig. 4) and that obtained based on the modes with more than 20% of strain energy in the mesh interface (green dotted curve) have been added. It can be noticed that the three first sets of results are very close. In this example of solid gear, the rigid disc or deformable gear body models give similar results but, as expected, the major critical speeds for tooth loading are slightly lower when gear flexibility is simulated. Interestingly, the approximate analytical dynamic factor calculated by keeping the few modes with more than 5% of strain energy in the mesh interface only (Table 2), agrees very well with the hybrid model results. Slight deviations can be observed with a peak around 1300-1400 rad/s, not reproduced by the analytical formula [START_REF] Kahraman | Non-linear dynamics of a spur gear pair[END_REF], which corresponds to a critical speed associated with gear body displacements as illustrated in Fig. 4b and the 2 ND mode in Table 2.T h e very limited couplings between mesh force and gear body modes probably explains the very good agreement between the analytical and numerical results in this example with a solid gear. The critical speed derived from the theoretical Table 3 Strain energy distribution for modes with energy greater or equal to 5% in the meshing for the transmission with the thin-rimmed/webbed gear, only the modes with more than 20% of the total strain energy in the mesh interface are kept for nE1 ISO 6336- ISO curve is not correctly positioned but the ISO dynamic factor formulae using the modes with maximum energy in the tooth mesh are better. For both curves, the amplitudes comparable reasonably well with those obtained by numerical simulations. Analysis of the thin-rimmed/-webbed gear Similar comparisons have been extended to the thinrimmed/-webbed gear shown in Fig. 2. The corresponding percentages of modal strain energy in the mesh interfaces are given in Table 3. Because of the web flexibility, the 2 N.D. gear body mode (number 4 in both Tables 2 and3) has dramatically moved from 6129 to 2134 Hz. 
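A quick consistency check on these critical speeds (an estimate added here, not taken from the reference computation): the tooth-mesh critical speed associated with a mode of natural frequency f_k is the pinion speed at which the fundamental mesh harmonic Z1 Ω1/(2π) reaches f_k, with Z1 = 29 teeth from Table 1.

    import math

    def mesh_critical_speed(f_k_hz, z1=29):
        # Pinion speed (rad/s) at which the fundamental mesh harmonic z1*Omega1/(2*pi)
        # coincides with the natural frequency f_k (Hz).
        return 2.0 * math.pi * f_k_hz / z1

    print(mesh_critical_speed(6129.0))   # ~1328 rad/s, matching the solid-gear peak reported near 1300-1400 rad/s
    print(mesh_critical_speed(2134.0))   # ~462 rad/s, the analogous estimate for the softened 2 N.D. mode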
The previous bending modes for the pinion shaft with the highest percentages of strain energy in the mesh obtained for a solid gear are replaced by coupled modes affecting more components in the transmission, thus showing that highly flexible thin-rimmed/-webbed gears can influence dynamic tooth loads. These coupled modes still are the most energetic re the mesh interface, whereas gear body mode contributions remain limited. The response curves in Fig. 5 show results of the same nature as those in Fig. 4 but for the thin-rimmed gear arrangement. In spite of the more complex interactions between the gear body and the mesh interface, the approximate formula (2) can still capture most of the dynamic tooth loads and particularly the two major response peaks. Some secondary peaks are not predicted by [START_REF] Kahraman | Non-linear dynamics of a spur gear pair[END_REF], which correspond to amplifications of gear body vibrations (see the second graph in Fig. 5) and are dynamically coupled to the mesh. Their amplitudes, however, remain limited. The displacement curve shows far more dynamic activity than in the case of a solid gear, thus highlighting the role of coupled modes. The curves deduced from the ISO standard formulae [START_REF]Calculation of load capacity of spur and helical gears-Part 1: Basic principles, introduction and general influence factors[END_REF] have been superimposed and it can be observed that they fail to correctly predict the dynamic mesh forces. The agreement is worse than compared with the solid gear example in Fig. 4, as expected based on the conditions of validity of the ISO formulae (for solid gears only). 5C o n c l u s i o n s A modular hybrid dynamic gear model which includes lumped parameters and 3D finite elements to account for the flexibility of thin-rimmed/-webbed gears has been used to assess the validity of the approximate formula of Velex and Ajmi [START_REF] Velex | On the modelling of excitations in geared systems by transmission errors[END_REF][START_REF] Velex | Dynamic tooth loads and quasi-static transmission errors in helical gears-Approximate dynamic factor formulae[END_REF] to calculate mesh force dynamic factors. This formula is based on the hypotheses of the 3D rigid body gear model, and relies on the hypothesis of linear behaviour (no tooth contact losses) and small mesh stiffness time-variations (hence, more adapted to helical gears). The comparisons have been conducted on an aeronautical power transmission, by considering: a) a solid gear simulated as a rigid disc and by using 3D block finite elements and, b) the actual thin-rimmed, thin-webbed gear. Overall, it seems that the analytical formula (2) can give a reasonable estimate of the dynamic mesh forces versus speed and can identify the major tooth critical speeds even for thin-rimmed applications. As expected, it fails to account for gear body mode contributions as it was developed in the context of pinions and gears assimilated to rigid discs connected by a time-varying elastic Winkler foundation. One major advantage of (2) is clearly the computational time as it only takes a few minutes for a complete response curve as opposed to days for the more involved hybrid model. 
It is therefore believed that this analytical formula can be useful at the early design stage and could probably be combined with the analytical formulations for transmission errors such as those proposed by in [START_REF] Velex | Some analytical results on transmission errors in narrow-faced spur and helical gears -influence of profile modifications[END_REF]inorder to account for more realistic excitations including tooth shape deviations and errors. Fig. 2 Fig. 3 23 Fig. 2 Aeronautical transmission, a solid pinion and b lightweight gear Fig. 4 4 Fig. 4 Approximate formula with 5 harmonics and 5% and more of energy Fig. 5 5 Fig. 5 Comparison between Dynamic Ratio (R) and R.N. radial displacement RMS over a speed range Table 1 1 Gear data Pinion Gear Module m (mm) 2.5 Number of teeth 29 71 Pressure Angle (°) 25 Helix Angle (°) 14 Addendum coefficient 1 Dedendum coefficient 1.25 Profile shift coefficient 0.275 0.320 Fillet Radius/module 0.25 Rim width b (mm) 52 50 Table 2 2 Modal strain energy percentages (above 5% in the mesh interface), only the modes with more than 20% of the total strain energy in the mesh interface are kept for nE1 ISO 6336-1:2006 calculation[START_REF]Calculation of load capacity of spur and helical gears-Part 1: Basic principles, introduction and general influence factors[END_REF] Mode Number Pinion Speed (rad/s) Mode Fre-quency (Hz) Strain Energy (%) Conflict of interest B. Guilbert and P. Velex declare that they have no competing interests.
04098231
en
[ "math.math-gm" ]
2024/03/04 16:41:18
2022
https://theses.hal.science/tel-04098231/file/TheseCabezas.pdf
Pancho, Coba, Mati, Iriar, Enzo, Mario Emilio Hasson precious to me. Thank you also to Arrianne, who along with Afaf became an inseparable friends. We have spent so many things together, from the evenings at my house to the vacations we took and every memory by your side is something I will treasure with time. How can I forget Bastian and Josefa, with whom I have not been able to share as much as I would like, especially because time moves fast and does not stop, but I like them just the same for everything they have given me and the moments we have spent together. They are very valuable friends to me and they can always count on me. To the people in LAMFA, Sebastian, Mariem, Clément, Gauthier, Cheryl, Jihade, Marouan, Henry, Yohan for the time we have spent together, especially at parties and the forgotten after-lunch board games before covid attacked. I would like to make It has been a long path since I started this part of my life, but the final moments have arrived. Many things happened during this time, but above all I want to thank the people who were there with me during the whole process. First, I want to thank my thesis director Samuel Petite for always believing in me, despite the obstacles and for all the help and constant support he has given me over these years. Thank you professor for your meticulous revisions and the long meetings where we debated about the best way to express what I wanted to say and for always paying attention to the details that I missed, it is because of your advice that I have been able to forge my path as a researcher. Without you I would not be the professional I believe I am today. I would like to extend my sincere thanks to the reviewers of this thesis, Valérie Berthé and Reem Yassawi, for taking time in reading and the good comments received for my work. I also like to extend my thanks to the examiners Bryna Kra, María Isabel Cortez and Fabien Durand for accepting the invitation to be part of my thesis's jury. I would also like to thank professor Alejandro Maass, because apart from all the help he gave me working in Chile and his trust, he was there for me in a very difficult moment of my life and I will always appreciate him for that. I would like to extend my sincere thanks to professor Sebastian Donoso because he was the first person who believed in me, and although we met very randomly, he is always willing to work together and support me in my projects. To my friends, those from farther and nearer who have always been there to share good times. To Ambuli for all his years of friendship. We have been through many things together that it is hard for me to summarize it in a couple of lines, but you have always been my companion, in adventures and misadventures and that is why you will always be one of my best friends. If I look back, I can imagine that my life would not be the same if I had not met you and I am one hundred percent sure that our friendship will last no matter the time that passes and the distance that separates us. To Afaf, I'm not quite sure how it all started, but suddenly you became an indispensable person in my life. You are very important to me and despite our differences and arguments, you are always by my side and that makes our friendship so special. Thank you for the company and the good times, which are countless, and for being by my side since I arrived in Amiens. 
You are very Contents Introduction The central objects of study of this thesis are homomorphisms between topological Z dactions T : X × Z d → X on a compact metric space X. A homomorphism is a continuous surjective map φ : (X, T, Z d ) → (Y, T, Z d ) such that for some Z d -automorphism M ∈ GL(d, Z), we have φ • T n = T M n • φ for all n ∈ Z d . This notion extends the classical dynamical one of morphism like factor, when M is the identity and conjugacy when φ is an invertible factor. Invertible homomorphisms, which will be called isomorphisms, are then conjugacies of Z d -actions, up to a GL(d, Z)-transformation. For Z-actions, isomorphisms are nothing else than flip-conjugacies, i.e., homeomorphisms φ such that φ • T is equal to T • φ or T -1 • φ. Factors and conjugacies are referred to as endomorphisms and automorphisms, respectively when the dynamical systems are the same. While endomorphisms represent a kind of internal symmetries in the system, such as permutations, homomorphisms represent furthermore symmetries of the orbits, such as rotations and reflections. That is why they are also sometimes called extended symmetries (see for example [START_REF] Baake | A brief guide to reversing and extended symmetries of dynamical systems[END_REF][START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF][START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF][START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]). From an algebraic point of view, while the automorphism group is the centralizer of the action group T in the group Homeo(X) of self-homeomorphisms in X, the isomorphism group is the normalizer of the action group T in Homeo(X), that is, the set of self-homeomorphisms φ such that φ T φ -1 = T . The study of homomorphisms of a dynamical system (X, T, Z d ) is a classical problem. The elements in the group T generated by the action are automorphisms of the system, hence the automorphism group is always nonempty. However the existence of homomorphisms for a particular matrix M ∈ GL(d, Z) is generally an open problem. Classical questions concern their dynamical and algebraic properties in relation with the dynamical ones of (X, T, Z d ). The description for these isomorphisms of their possible subgroups, their quotients, their amenability or their action on the T -invariant measures depending on the properties of (X, T, Z d ) are natural problems. In a general context these questions are widely open. Another classical question is the determination of topological factors of a particular system. Their explicit description can be used to unravel the structure of the system. For certain aspects, they carry relevant information and enable to do concrete calculations or to study some specific structures (for example in spectral theory [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF]). The study of isomorphisms of the particular class of minimal systems where X is a Cantor set is motivated by algebraic reasons. Indeed, to any Z-minimal Cantor system (X, T, Z) there is an associated countable group called the topological full group and denoted by [[T ]]. The elements of this group are self-homeomorphisms of the space X that are locally power of the transformation T . Homomorphisms of the system (X, T, Z) induce group homomorphisms in [[T ]]. 
Conversely, the topological full group is a complete invariant of flip-conjugacy [START_REF] Giordano | Full groups of Cantor minimal systems[END_REF]. It appears such full groups [[T ]] present remarkable properties. H. Matui proved in [START_REF] Matui | Some remarks on topological full groups of Cantor minimal systems[END_REF] that its commutator subgroup is simple. Moreover, he also showed that this group is finitely generated if and only if (X, T, Z) is conjugated to a minimal subshift. Thus, these groups enable to construct finitely generated simple ones with unexpected algebraic properties. For instance any topological full group is amenable [START_REF] Juschenko | Cantor systems, piecewise translations and simple amenable groups[END_REF], giving the first examples of finitely generated simple groups in this class. Using isomorphisms of some low complexity Z-subshifts (namely linearly recurrent ones with arbitrarily long palindromic words like the substitutive Fibonacci subshift) V. Nekrashevych in [START_REF] Nekrashevych | Palindromic subshifts and simple periodic groups of intermediate growth[END_REF] succeed to construct with full groups a finitely generated simple group with intermediate growth, i.e., which is neither of polynomial nor of exponential growth. Moreover it is of Burnside type, that is, an infinite group where each element is periodic. Another motivation for the study of isomorphisms comes from theoretical physics. The discovery in 1984 by D. Schechtman et.al. [START_REF] Shechtman | Metallic phase with long-range orientational order and no translational symmetry[END_REF] of a metal alloy structure similar to ideal crystal ones deeply influenced the study of multidimensional aperiodic structure. This alloy presented a discrete diffraction pattern, like for crystals, but had a five-fold rotational symmetry which is forbidden for ideal crystals. The term quasicrystal was then invented to describe these new classes of crystals with "forbidden" symmetry, although there is little agreement on the precise definition of a quasicrystal. Roughly speaking, a quasicrystal is a solid, which exhibits sharp bright spots (called Bragg peaks) in their X-ray diffraction pattern but has an aperiodic structure (usually manifested by the presence of a non-quasicrystallographic symmetry). The presence of Bragg peaks indicates the presence of "long-range order" in the structure (see [START_REF] Levine | Quasicrystals: a new class of ordered structures[END_REF]). This work earned D. Schechtman the Wolf Prize in Physics in 1999 and the Nobel Prize in Chemistry in 2011. One can assume that (at least approximately) a quasicrystal consists of atoms located at the vertices of an almost periodic tiling. It is possible to recover quantitative physical properties by studying a dynamical system associated with the tiling. Such systems were first introduced by D. Rudolph in [START_REF] Rudolph | Markov tilings of R n and representations of R n actions[END_REF]. Since then, a series of articles are devoted to their study (see [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF][START_REF] Frank | Substitution and tiling dynamics: introduction to self-inducing structures[END_REF][START_REF] Robinson | Symbolic dynamics and tilings of R d . In Symbolic dynamics and its applications[END_REF][START_REF] Solomyak | Dynamics of self-similar tilings[END_REF] for an extensive bibliography on this subject). 
The diffraction pattern is then essentially the point spectrum of the corresponding translation action [START_REF] Levine | Quasicrystals: a new class of ordered structures[END_REF]. Moreover its symmetries are reflected by the isomorphisms of its dynamical system [START_REF] Robinson | The dynamical theory of tilings and quasicrystallography[END_REF]. In 1982, A. Mackay [START_REF] Mackay | Crystallography and the Penrose pattern[END_REF] published the diffraction pattern of a tiling created by R. Penrose several years before [START_REF] Penrose | The role of aesthetics in pure and applied mathematical research[END_REF], which has very similarities with the ones discovered by D. Schechtman. Figure 1: The quasicrystal diffraction images appearing in the original article of D. Shechtman et.al. [START_REF] Shechtman | Metallic phase with long-range orientational order and no translational symmetry[END_REF] Figure 2: The diffraction pattern of the Penrose tiling as it appears in the original article of A. Mackay [START_REF] Mackay | Crystallography and the Penrose pattern[END_REF] The Penrose tiling is then a good mathematical model of quasicrystals. It is build with only 2 tiles, up to rotations and translations. The generation of patterns is obtained by means of an algorithmic method so-called substitution. Roughly speaking, this process consists in substituting tiles by a union of tiles and applying the same rule to this new pattern. Such construction provides, at the limit, most of the simplest aperiodic tilings, in the sense that they have the lowest complexity [START_REF] Lagarias | Local complexity of Delone sets and crystallinity[END_REF]. Another interesting property of the Penrose tiling, meaningful in the crystallographic context of short range interaction, is that all the allowed tilings of the associated system are the ones verifying a finite set of local rules. This is a geometrical analogue of onedimensional subshifts of finite type. However as a difference with one-dimensional subshifts of finite type that always contains periodic points, Penrose tiling system is aperiodic. This highlights a fundamental difference between the one and two dimensional combinatorial properties which are linked with logic and computability. The seminal work of H. Wang [START_REF] Wang | Proving theorems by pattern recognition -ii[END_REF] already established relation between decidability of certain first-order logic formulas and domino problems. His student R. Berger [START_REF] Berger | The undecidability of the domino problem[END_REF] showed the undecidability of the domino problem by exhibiting a (huge) family of tiles with adjancies rules, called Wang tiles, allowing only aperiodic tilings. Later, this example was simplified by R. Robinson [START_REF] Robinson | Undecidability and nonperiodicity for tilings of the plane[END_REF] with [START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF] Wang tiles (5 up to rotation and reflection). More recently by E. Jeandel and M. Rao [START_REF] Jeandel | An aperiodic set of 11 Wang tiles[END_REF] provided a similar example with only 11 Wang tiles, and showed it is the optimal bound. A powerful method to generate aperiodic Wang tiles is through the use of so-called constant-shape substitution, where the shape of the image of tiles are the same. The result of S. 
Mozes [START_REF] Mozes | Tilings, substitution systems and dynamical systems generated by them[END_REF] illustrates this procedure, giving sufficient conditions for a (constant-shape) substitutive subshift to be a factor of a subshift of finite type. Let us also mention that both the example of R. Robinson and the one of Jeandel-Rao contain minimal systems that are substitutive subshifts [START_REF] Gähler | Combinatorics and topology of the Robinson tiling[END_REF][START_REF] Labbé | Substitutive structure of Jeandel-Rao aperiodic tilings[END_REF]. Motivated by all these reasons, in this thesis we focus on the study of homomorphisms between some specific multidimensional subshifts: the substitutive ones, generated by multidimensional constant-shape substitutions. Let us recall some historical results, first in the one-dimensional context. For Z-actions, isomorphisms are closely related to automorphisms: either the two groups coincide, or the isomorphism group is an index-2 group extension of the automorphism group. So most of the properties of isomorphisms can be deduced from those of automorphisms. Automorphisms of symbolic systems present rigidity properties already in the one-dimensional case. For instance, the famous Curtis-Hedlund-Lyndon theorem [START_REF] Hedlund | Endomorphisms and automorphisms of the shift dynamical system[END_REF] ensures that any factor map between subshifts is a sliding block code, or cellular automaton. Actually, homomorphisms are also induced by local maps, but the center changes according to the matrix of the homomorphism (Theorem 1.9). This shows that both the automorphism group and the isomorphism group of a subshift are countable, discrete subgroups of the group of self-homeomorphisms of the phase space. The automorphism group of symbolic systems was initially studied for subshifts of finite type by G. Hedlund in [START_REF] Hedlund | Endomorphisms and automorphisms of the shift dynamical system[END_REF]. This group is infinitely generated and very large: it contains all finite groups, free groups, the direct sum of countably many copies of Z, any countable collection of finite groups, etc. In particular, it is not an amenable group. However, it is residually finite, so it does not contain divisible groups (like Q) or the infinite symmetric group [START_REF] Boyle | The automorphism group of a shift of finite type[END_REF][START_REF] Kim | On the automorphism groups of subshifts[END_REF]. Nevertheless, there is still no general description of the automorphism group of a given subshift, nor of its generators. For example, whether the automorphism groups of the two-letter full-shift and the three-letter full-shift are algebraically isomorphic is still an open problem. Large complexity is not enough to have a large automorphism group: in [START_REF] Bulatek | Strictly ergodic Toeplitz flows with positive entropies and trivial centralizers[END_REF][START_REF] Donoso | On automorphism groups of Toeplitz subshifts[END_REF] the authors gave a family of Toeplitz subshifts with arbitrarily large positive entropy and trivial automorphism group. Also, the size of the automorphism group imposes no restrictions on the entropy, as shown in [START_REF] Donoso | On automorphism groups of Toeplitz subshifts[END_REF]: a large class of infinite finitely generated abelian groups can be realized as the automorphism group of Toeplitz subshifts with arbitrarily large or with zero entropy. Conversely, low complexity of the subshift restricts the algebraic properties of the automorphism group.
In [START_REF] Coven | Computing automorphism groups of shifts using atypical equivalence classes[END_REF][START_REF] Cyr | The automorphism group of a shift of linear growth: beyond transitivity[END_REF][START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF] it was proved that the automorphism group is virtually Z for minimal subshifts with non-super-linear complexity, i.e., such that lim inf_{n→∞} p_X(n)/n < ∞, where p_X(n) denotes the number of words of length n. This hypothesis implies that the subshift has finitely many asymptotic pairs, i.e., pairs of distinct points x, y ∈ X with a common past. The strategy in [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF] is based on the facts that automorphisms permute asymptotic pairs and that subshifts with non-super-linear complexity have only finitely many of them. For subshifts of higher complexity, the growth rate of the automorphism group is bounded by the complexity of the subshift; in particular, it is amenable for a large class of zero-entropy subshifts, as proved in [START_REF] Cyr | The automorphism group of a minimal shift of stretched exponential growth[END_REF][START_REF] Cyr | The automorphism group of a shift of slow growth is amenable[END_REF]. Beyond quantitative properties, other algebraic restrictions exist for zero-entropy subshifts. For example, [START_REF] Cyr | Distortion and the automorphism group of a shift[END_REF] provided the first examples of countable groups (such as the Baumslag-Solitar groups BS(1, n)) that cannot embed into the automorphism group of any zero-entropy subshift. But for some countable groups, such as the discrete Heisenberg group, it is still not known whether they can embed into the automorphism group of a one-dimensional subshift. Let us recall some examples of subshifts having a non-trivial isomorphism, i.e., one that is not an automorphism (see [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF]): the full-shift, and any palindromic subshift, such as the Sturmian shifts, the period-doubling shift and the Thue-Morse shift. The first one has a huge automorphism group, not even amenable, whereas the automorphism groups of the second and third ones are trivial. The automorphism group of the last one is isomorphic to Z ⊕ C_2. These examples suggest that algebraic properties of the automorphism group do not imply the existence of non-trivial isomorphisms. Few general results are known in the multidimensional context. In [START_REF] Hochman | On the automorphism groups of multidimensional shifts of finite type[END_REF] M. Hochman proved that most of the one-dimensional properties of the automorphism group are preserved for the class of subshifts of finite type with positive entropy. Nevertheless, he presented a remarkable example: a subshift of finite type with automorphism group isomorphic to Z^2 ⊕ G, where G is locally finite and factors onto a virtually simple group. This is in contrast to the one-dimensional setting, where the group has to be residually finite. Isomorphisms of the chair tiling have been studied in [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF]: it turns out that its automorphism group is trivial and its isomorphism group is a semi-direct product of Z^2 by the symmetry group of the square. The study of automorphisms is also a classical subject in the context of ergodic theory.
In the measure-theoretic framework we consider (X, µ, T, Z), where (X, F, µ) is a standard probability space, T represents the action generated by a measure-preserving transformation, and automorphisms are defined almost everywhere and preserve the measure µ. Let us recall some important results (we refer the reader to [START_REF] Ferenczi | Systems of finite rank[END_REF] for an overview of this theme). D. Ornstein [START_REF] Ornstein | On the root problem in ergodic theory[END_REF] proved that a mixing rank-one dynamical system has a trivial (measurable) automorphism group. Later, A. del Junco [START_REF] Del Junco | A simple measure-preserving transformation with trivial centralizer[END_REF] showed that the example given by Chacon [START_REF] Chacon | Weakly mixing transformations which are not strongly mixing[END_REF] also has this property. Then, J. King and J.-P. Thouvenot [START_REF] King | A canonical structure theorem for finite joining-rank maps[END_REF] proved that for mixing systems of finite rank, the measurable automorphism group is virtually Z. In the class of substitutive subshifts, homomorphisms present even stronger rigidity properties than the ones mentioned above. Let us give a brief description of substitutions. These are combinatorial objects which produce infinite sequences by an iteration process. Their deep understanding took several decades. We refer to [START_REF] Fogg | Substitutions in dynamics, arithmetics and combinatorics[END_REF][START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] for extensive bibliographies on the earlier developments of the subject. Using the shift as the action on these infinite sequences, we obtain the substitutive subshifts, which are the simplest nontrivial zero-entropy symbolic systems. They were introduced by W.H. Gottschalk in [START_REF] Gottschalk | Substitution minimal sets[END_REF]. Their simplicity makes them appear in many different fields of mathematics, such as combinatorics on words (see [START_REF] Berstel | The origins of combinatorics on words[END_REF]), number theory (especially transcendental number theory [1]), numeration systems (see [START_REF] Cobham | On the base-dependence of sets of numbers recognizable by finite automata[END_REF]), Diophantine approximation (see [2]), and computer science (especially automata theory [3]). B. Host and F. Parreau in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] gave a complete description of the factors between subshifts arising from certain constant-length substitutions, like for instance the Thue-Morse substitution defined by 0 → 01, 1 → 10. They proved that any measurable factor induces a continuous one, and that the automorphism group is isomorphic to a direct product of Z with a finite group. Moreover, any finite group can be realized as a quotient group Aut(X, S, Z)/⟨S⟩ for these subshifts, as proved by M. Lemańczyk and M. K. Mentzen in [START_REF] Lemańczyk | On metric properties of substitutions[END_REF]. Later, I. Fagnot [START_REF] Fagnot | Sur les facteurs des mots automatiques[END_REF] proved that the problem of whether there exists a factor map between two constant-length substitutive subshifts is decidable, using the first-order logic framework of Presburger arithmetic. Some years later, F.
Durand in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] showed that linearly recurrent subshifts (in particular substitutive subshifts) have finitely many subshift factors, up to conjugacy. Also in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] it was proved that topological Cantor factors of substitutive subshifts are either substitutive subshifts or odometer systems. V. Salo and I. Törmä provided in [START_REF] Salo | Block maps between primitive uniform and Pisot substitutions[END_REF] a renormalization process for factor maps, extending the description obtained in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Next, C. Müllner and R. Yassawi [START_REF] Müllner | Automorphisms of automatic shifts[END_REF] demonstrated that any aperiodic symbolic factor of a constant-length substitutive subshift is conjugate, via a letter-to-letter map, to a constant-length substitutive subshift. More recently, F. Durand and J. Leroy [START_REF] Durand | Decidability of the isomorphism and the factorization between minimal substitution subshifts[END_REF] showed the decidability of the existence problem of a factor map between two minimal substitutive subshifts.

Presentation of main results

In this thesis, we study homomorphisms between multidimensional substitutive subshifts generated by constant-shape substitutions. In our context, L ∈ M(d, Z) is an integer expansion matrix, i.e., ‖L‖ > 1 and ‖L^{-1}‖ < 1. A constant-shape substitution ζ is a map A → A^F, where A is a finite alphabet and F is a fundamental domain of L(Z^d) in Z^d. The set F is called the support of the substitution. Constant-shape substitutions are a multidimensional analogue of one-dimensional constant-length substitutions; here the "length" of the substitution is represented by the expansion matrix L. For every n > 0, any iteration ζ^n of the substitution can also be obtained by a constant-shape substitution, with expansion matrix L^n and support F_n. In contrast with the one-dimensional case, these substitutions may not be linearly recurrent (Example 3.6), and can have topological Cantor factors that are neither expansive nor equicontinuous (Example 4.3). Some known results of the one-dimensional case are still preserved for these constant-shape substitutions: they are finite extensions of a specific odometer system given by the data of the substitution (Lemma 3.9), and their maximal equicontinuous factors have a structure similar to the one-dimensional case (Proposition 3.15). In [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF] the authors studied isomorphisms, called extended symmetries, for a class of constant-shape substitutions called bijective block substitutions. A constant-shape substitution is bijective if for any index f ∈ F we have |{ζ(a)_f : a ∈ A}| = |A|. Block substitutions are constant-shape substitutions with a diagonal expansion matrix and a parallelepiped support. The authors proved in [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF] that the set of matrices which define isomorphisms is a finite group. Although it is not proved in the article, it can be deduced that the group of isomorphisms is virtually generated by the shift action. A small example illustrating these notions is given below.
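To fix ideas, here is a minimal illustrative example; it is the standard two-dimensional Thue-Morse block substitution, and not one of the examples treated later in this thesis. Take A = {a, b}, the expansion matrix L = 2·Id_2, and the support F = {(0,0), (1,0), (0,1), (1,1)}, which is a fundamental domain of L(Z^2) = 2Z^2 in Z^2. Define ζ : A → A^F by

ζ(a)_(0,0) = a, ζ(a)_(1,0) = b, ζ(a)_(0,1) = b, ζ(a)_(1,1) = a,
ζ(b)_(0,0) = b, ζ(b)_(1,0) = a, ζ(b)_(0,1) = a, ζ(b)_(1,1) = b.

For every f ∈ F the map a ↦ ζ(a)_f is a permutation of A, so ζ is bijective; since L is diagonal and F is a square, it is a block substitution. The associated substitutive subshift X_ζ consists of all configurations whose patterns appear in some iterate ζ^n(a), a ∈ A.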
We extend the study of homomorphisms by describing the isomorphism group of general constant-shape substitutions. In contrast with [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF], our strategy consists in describing the set of nondeterministic directions (Theorem 5.2); the matrices defining isomorphisms preserve this set (Proposition 1.12). A vector v ∈ S^{d-1} is said to be nondeterministic for the subshift (X, S, Z^d) if there exist two different points x ≠ y ∈ X which coincide on the half-space H_v = {t ∈ R^d : ⟨t, v⟩ < 0}. This is a multidimensional analogue of asymptotic pairs [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF]. Nondeterministic directions were introduced in [START_REF] Boyle | Expansive subdynamics[END_REF], together with the notion of nonexpansive subspaces, to study sub-actions of a given Z^d-action, for d > 1. This notion has proved to be meaningful in symbolic dynamics: as an example, let us mention [START_REF] Cyr | Nonexpansive Z 2 -subdynamics and Nivat's conjecture[END_REF], in which these objects were used to prove a weak version of Nivat's conjecture. We give a description of the set of nondeterministic directions for bijective constant-shape substitutions.

Theorem A (Theorem 5.2). Let ζ be an aperiodic bijective primitive constant-shape substitution. The set of nondeterministic directions of its substitutive subshift is the intersection of S^{d-1} with a nonempty union of limits of opposite normal cones of faces of the convex hull of the support of ζ^n, for integers n > 0.

This theorem gives topological constraints on the set of nondeterministic directions for bijective substitutions. Under geometrical conditions on the support we get stronger properties about nondeterministic directions. A bijective constant-shape substitution is polytope when the convex hull of the digit tile, i.e., of the compact set defined as the limit of L^{-n}(F_n) with respect to the Hausdorff metric (see Section 1.7), is a polytope; an illustration is given below. In this case the set of nondeterministic directions is much more restricted: it is a finite union of closed balls (possibly degenerate). For instance, in the two-dimensional case it cannot be a Cantor set (Corollary 5.3). This contrasts with the results of M. Boyle and D. Lind in [START_REF] Boyle | Expansive subdynamics[END_REF] and of M. Hochman in [START_REF] Hochman | Non-expansive directions for Z 2 actions[END_REF], who proved that any compact subset of S^1 can be realized as the set of nonexpansive directions of a subshift. The work in this thesis gives the first descriptions of the set of nondeterministic directions for minimal Z^d-actions. When the rank of the set of nondeterministic directions is maximal, thanks to the former description, we get the following constraints on the homomorphisms of substitutive subshifts.

Theorem B (Proposition 5.15 and Theorem 5.17). Let ζ be an aperiodic bijective primitive polytope substitution. If the substitutive subshift (X_ζ, S, Z^d) has d linearly independent nondeterministic directions, then:
1. Any homomorphism on the substitutive subshift (X_ζ, S, Z^d) is invertible.
2. The group of isomorphisms is virtually generated by the shift action.
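For the block substitution sketched above (again only as an illustration, not an example treated later in the thesis), the supports of the iterates are F_n = {0, 1, ..., 2^n - 1}^2, so L^{-n}(F_n) = {k/2^n : 0 ≤ k < 2^n}^2, which converges in the Hausdorff metric to the unit square [0, 1]^2. The digit tile is therefore the unit square, its convex hull is a polytope, and this substitution is a polytope substitution in the above sense.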
Block substitutions are particular cases of polytope substitutions, and it is easy to check that they satisfy the hypothesis on the rank of the set of nondeterministic directions, so Theorem B generalizes the results of [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]. Actually, we can relax the notion of bijectivity to the notions of bijectivity on extremities (see Chapter 5) and reducibility (see Chapter 4), and this is enough to keep the same conclusion. This hypothesis of reducibility is almost optimal, because we provide in Chapter 6 an example of a constant-shape substitutive subshift with an infinite set of matrices defining isomorphisms. The hypothesis on the rank of the set of nondeterministic directions is weak, since we do not know of any substitution that does not satisfy it. We provide an algorithm to check whether this hypothesis is satisfied (Lemma 5.12). Furthermore, by the result in [START_REF] Guillon | Determinism in subshifts[END_REF], the hypothesis is true for a generic family of two-dimensional bijective constant-shape substitutions. In a private communication, P. Guillon [62] mentioned that this result was already proved for higher dimensions, but it has not been published. To get Theorem B, we need some control on the block maps defining the isomorphisms. For this, we follow the strategy of B. Host and F. Parreau in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Moreover, we also get a strong rigidity property when the matrices commute with the expansion matrix of the substitution. Recall that substitutive subshifts are uniquely ergodic, so any continuous endomorphism induces a measurable one. We provide a partial converse:

Theorem C (Theorem 4.1, simplified version). Let (X_ζ, S, Z^d) be a subshift generated by an aperiodic, primitive reduced constant-shape substitution. For every measurable endomorphism φ, there exists j ∈ Z^d such that S^j φ is equal to a continuous endomorphism ψ satisfying the following two properties:
1. The endomorphism ψ has a bounded radius given by the substitution.
2. There exist an integer n > 0 and p ∈ Z^d such that S^p ψ ζ_1^n = ζ_2^n ψ.

Theorem C implies that, under reducibility, any measurable endomorphism induces a continuous one. In fact, we prove that the set of measurable endomorphisms is countable. Theorem C is a multidimensional analogue of the result proved by B. Host and F. Parreau in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Counterexamples to Theorem C are provided by substitutive subshifts that are metrically isomorphic to their maximal equicontinuous factors. This occurs when a substitution satisfies a combinatorial condition called coincidence [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] (see Example 4.3). The set of measurable endomorphisms of odometer systems is indeed uncountable, since any element of the odometer system defines a measurable endomorphism via addition. So, as in the original article [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], reducibility is an optimal hypothesis for Theorem C. We then derive some dynamical consequences of Theorem C. Under the reducibility condition, substitutive subshifts are coalescent (Proposition 4.7), i.e., any endomorphism of the substitutive subshift is invertible.
This was already known for linearly recurrent subshifts, first in the one-dimensional case in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF], then in higher dimensions in [START_REF] Cortez | Linearly repetitive Delone systems have a finite number of nonperiodic Delone system factors[END_REF]. The chair tiling (see [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF]) and the half-hex substitutive subshift, studied in Chapter 6, are examples of substitutive subshifts having a coincidence. Hence their measurable endomorphisms form an uncountable set. Nevertheless, both examples are coalescent and their automorphism groups are virtually generated by the shift action. Without the reducibility condition, we do not know whether all aperiodic substitutive subshifts satisfy these last properties. The half-hex substitutive subshift, mentioned before, satisfies neither the hypothesis of Theorem C nor that of Theorem B. Nevertheless, we are able to describe its maximal equicontinuous factor and characterize its isomorphisms. The symmetry group of a subshift is the set of matrices M ∈ GL(d, Z) defining an isomorphism. Thanks to this example we get the following result.

Theorem D (Theorem 6.3). There exists a minimal aperiodic subshift (in fact a substitutive one) with an infinite symmetry group. More precisely, the isomorphism group of the half-hex substitutive subshift is isomorphic to a semidirect product between Z^2 and GL(2, Z), so its symmetry group is the largest possible.

Subshifts with infinite symmetry groups have been found before, as in [START_REF] Baake | Number-theoretic positive entropy shifts with small centralizer and large normalizer[END_REF], which studies their relation with topological entropy. But these examples are far from being minimal. Finally, concerning the factors of substitutive subshifts, we have the following characterization.

Theorem E (Theorem 3.22). Let (Y, S, Z^d) be an aperiodic symbolic factor of a subshift generated by an aperiodic primitive constant-shape substitution ζ. Then there exists an aperiodic primitive constant-shape substitution ζ', with the same structure as a power of ζ, generating a system (X_ζ', S, Z^d) conjugate to the symbolic factor (Y, S, Z^d).

This is a multidimensional analogue of a result proved by C. Müllner and R. Yassawi [START_REF] Müllner | Automorphisms of automatic shifts[END_REF] in the one-dimensional case, which is itself a refinement of a result proved in [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF]. This result leaves open what can be said about the other topological Cantor factors of substitutive subshifts. Example 4.3 gives a substitutive subshift with a topological Cantor factor that is neither expansive nor equicontinuous. Also, Example 4.3 has a symbolic factor with a non-trivial period and an infinite phase space. This is in contrast with the one-dimensional dichotomy proved in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF]. Theorem A, Theorem B, Theorem C and Theorem E are published in [START_REF] Cabezas | Homomorphisms between multidimensional constant-shape substitutions[END_REF].

Organization of this thesis

This thesis is organized as follows. The basic definitions and background to be used throughout this thesis are introduced in Chapter 1.
We recall some classical notions of topological dynamical systems, ergodic theory and symbolic dynamics. We develop the relation between homomorphisms and topological factors of dynamical systems. Then, we present the examples of odometer systems and Toeplitz subshifts, and we obtain a characterization of homomorphisms between odometers. We finish this chapter with a brief survey of multidimensional constant-shape substitutions, which are the systems whose homomorphisms we mainly study. In Chapter 2, we study the symmetry semigroup of two-dimensional constant-base odometer systems given by a matrix L. In this case, we get a description of a bifurcation phenomenon at the level of the symmetry semigroup with respect to arithmetical relations between invariants of the matrix L. The main theorem (Theorem 2.2) shows that in most cases the symmetry semigroup is the centralizer of the matrix L. This will help to get a characterization of the isomorphism semigroup of aperiodic primitive constant-shape substitutions, using the relation between homomorphisms and their maximal equicontinuous factors (Lemma 1.7). The main result of Chapter 3 is the characterization of aperiodic symbolic factors of substitutive subshifts given by an aperiodic, primitive constant-shape substitution: they are conjugate to substitutive subshifts generated by aperiodic primitive constant-shape substitutions (Theorem E). Substitutive subshifts are not necessarily linearly repetitive (Example 3.6). Nevertheless, we prove a polynomial growth bound on the repetitivity function of constant-shape substitutions (Lemma 3.7). Chapter 4 is devoted to proving rigidity properties of measurable factors and homomorphisms between substitutive subshifts (Theorem C). Then, we deduce that these substitutive subshifts are coalescent (Proposition 4.7) and that their automorphism group is virtually generated by the shift action (Proposition 4.8). In Chapter 5 we describe the isomorphism group of general constant-shape substitutions. We prove it is virtually generated by the shift action (Theorem B). To do this, we relate the symmetry group with different types of supports of the substitution and non-diagonal expansion matrices, via the nondeterministic directions. We characterize the nondeterministic directions through the digit tile for a weaker version of bijective substitutions (Theorem A). Moreover, these directions are computable in terms of the combinatorics of the substitution (Corollary 5.13). Finally, in Chapter 6 we characterize the isomorphism group of two examples of constant-shape substitutions. The first one, called the table substitution, satisfies the hypotheses of the results in Chapter 4 and Chapter 5. The second example does not satisfy the hypotheses of the previous results; nevertheless, a description of its isomorphism group is provided by Theorem D.

Résumé

Cette thèse traite des homomorphismes entre des Z^d-actions topologiques T : X × Z^d → X sur des espaces métriques compacts X. Un homomorphisme est une surjection continue φ : (X, T, Z^d) → (Y, T, Z^d) telle que, pour un Z^d-automorphisme M ∈ GL(d, Z), on ait φ•T^n = T^{Mn}•φ pour tout n ∈ Z^d. Cette notion étend le concept de facteur, lorsque M est l'identité, et de conjugaison, lorsque φ est un facteur inversible. Les isomorphismes (homomorphismes inversibles) sont alors des conjugaisons de Z^d-actions, à une transformation de GL(d, Z) près. Pour les Z-actions, les isomorphismes sont des flip-conjugaisons, c'est-à-dire des homéomorphismes φ tels que φ•T est égal à T•φ ou à T^{-1}•φ.
Les facteurs et les conjugaisons sont appelés respectivement endomorphismes et automorphismes lorsque les systèmes dynamiques sont identiques. Alors que les endomorphismes représentent une sorte de symétrie interne du système, comme les permutations, les homomorphismes représentent en plus des symétries des orbites, comme les rotations et les réflexions. C'est pourquoi ils sont aussi parfois appelés symétries étendues (voir par exemple [START_REF] Baake | A brief guide to reversing and extended symmetries of dynamical systems[END_REF][START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF][START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF][START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]). D'un point de vue algébrique, alors que le groupe d'automorphisme est le centralisateur du groupe d'action T dans le groupe Homeo(X) des homéorphismes de X, le groupe d'isomorphisme est le normalisateur du groupe d'action T dans Homeo(X), c'est-à-dire l'ensemble des homéorphismes φ tels que φ T φ -1 = T . L'étude des homomorphismes d'un système dynamique (X, T, Z d ) est un problème classique. Les éléments du groupe T engendré par l'action sont des automorphismes du système. Le groupe d'automorphisme est donc toujours non-vide. Cependant, l'existence d'homomorphismes pour une matrice particulière M ∈ GL(d, Z) est généralement un problème ouvert. Les questions classiques concernent leurs propriétés dynamiques et algébriques en relation avec la dynamique de (X, T, Z d ). La description pour ces isomorphismes de leurs sous-groupes, de leurs quotients, de leur moyennabilité ou de leur action sur les mesures invariantes de l'action T en fonction des propriétés de (X, T, Z d ) sont des problèmes naturels. Dans un contexte général, ces questions sont largement ouvertes. Une autre question classique est la détermination des facteurs topologiques d'un système particulier. Leur description explicite peut être utilisée pour décrire la structure du système. Par certains aspects, ils contiennent des informations pertinentes et permettent de faire des calculs concrets ou d'étudier certaines structures spécifiques (par exemple en théorie spectrale [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF]). L'étude des isomorphismes de la classe particulière des systèmes minimaux où X est un ensemble de Cantor est motivée par des raisons algébriques. En effet, à tout système minimal de Cantor (X, T, Z) il existe un groupe dénombrable associé appelé groupe topologique plein et noté [[T ]]. Les éléments de ce groupe sont des homéomorphismes de l'espace X qui sont localement des puissances de la transformation T . Les homomorphismes du système (X, T, Z) induisent des homomorphismes de groupe dans [[T ]]. Réciproquement, le groupe topologique plein est un invariant complet de flip-conjugaison [START_REF] Giordano | Full groups of Cantor minimal systems[END_REF]. Il apparaît que de tels groupes pleins [[T ]] présentent des propriétés remarquables. H. Matui a prouvé dans [START_REF] Matui | Some remarks on topological full groups of Cantor minimal systems[END_REF] que son sous-groupe dérivé est simple. De plus, il a également montré que ce groupe est finiment engendré si et seulement si (X, T, Z) est conjugué à un sous-shift minimal. 
Ainsi, ces groupes permettent de construire des groupes simples finiment engendrés avec des propriétés algébriques inattendues. Par exemple, tout groupe topologique plein est moyennable [START_REF] Juschenko | Cantor systems, piecewise translations and simple amenable groups[END_REF], ce qui donne les premiers exemples de groupes simples finiment engendrés dans cette classe. En utilisant les isomorphismes de certains sous-shifts de complexité faible (à savoir les sous-shifts linéairement récurrentes avec des mots palindromiques arbitrairement longs comme le sous-shift substitutive de Fibonacci), V. Nekrashevych dans [START_REF] Nekrashevych | Palindromic subshifts and simple periodic groups of intermediate growth[END_REF] réussit à construire avec les groupes pleins un groupe simple finiment engendré avec une croissance intermédiaire, c'est-à-dire qui n'est ni de croissance polynomiale ni de croissance exponentielle. De plus il est de type Burnside, c'est-à-dire un groupe infini où chaque élément est périodique. Une autre motivation pour l'étude des isomorphismes provient de la physique théorique. La découverte en 1984 par D. Schechtman et.al. [START_REF] Shechtman | Metallic phase with long-range orientational order and no translational symmetry[END_REF] d'une structure d'alliage métallique similaire à celle des cristaux idéaux a profondément influencé l'étude de la structure apériodique multidimensionnelle. Cet alliage présentait un diagramme de diffraction discret, comme les cristaux, mais avait une symétrie rotationnelle d'ordre 5, ce qui est interdit pour les cristaux idéaux. Le terme quasi-cristal a alors été inventé pour décrire ces nouvelles classes de cristaux avec des symétries "interdites", s'il n'y a pas consensus sur la définition précise d'un quasi-cristal. En substance, un quasi-cristal est un solide qui présente des points brillants nets (appelés pics de Bragg) dans son diagramme de diffraction aux rayons X, mais qui a une structure apériodique généralement manifestée par la présence d'une symétrie non-quasi-cristallogique. La présence de pics de Bragg indique la présence d'un "ordre à longue portée" dans la structure (voir [START_REF] Levine | Quasicrystals: a new class of ordered structures[END_REF]). Ces travaux ont valu à D. Schechtman le prix Wolf de physique en 1999 et le prix Nobel de chimie en 2011. On peut supposer que (au moins approximativement) un quasi-cristal est constitué d'atomes situés aux sommets d'un pavage presque périodique. Il est possible de retrouver des propriétés physiques quantitatives en étudiant un système dynamique associé au pavage. De tels systèmes ont été présentés pour la première fois par D. Rudolph dans [START_REF] Rudolph | Markov tilings of R n and representations of R n actions[END_REF]. Dès lors, une série d'articles ont été consacrés à leur étude (voir [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF][START_REF] Frank | Substitution and tiling dynamics: introduction to self-inducing structures[END_REF][START_REF] Robinson | Symbolic dynamics and tilings of R d . In Symbolic dynamics and its applications[END_REF][START_REF] Solomyak | Dynamics of self-similar tilings[END_REF] pour une vaste bibliographie sur ce sujet). Le diagramme de diffraction est alors essentiellement le spectre ponctuel de l'action de translation correspondante [START_REF] Levine | Quasicrystals: a new class of ordered structures[END_REF]. 
De plus, ses symétries sont reflétées par les isomorphismes de son système dynamique [START_REF] Robinson | The dynamical theory of tilings and quasicrystallography[END_REF]. En 1982, A. Mackay [START_REF] Mackay | Crystallography and the Penrose pattern[END_REF] a publié le schéma de diffraction d'un pavage créé par R. Penrose quelques années plus tôt [START_REF] Penrose | The role of aesthetics in pure and applied mathematical research[END_REF], qui présente de grandes similitudes avec ceux découverts par D. Schechtman (voir figure 1 et figure 2). Le pavage de Penrose est alors un bon modèle mathématique de quasi-cristaux. Il est construit avec seulement 2 tuiles, aux rotations et translations près. La génération des motifs est obtenue par une méthode algorithmique appelé substitution. Cette procédure consiste à substituer des tuiles par une union de tuiles et appliquer la même règle à ce nouveau motif. En itérant ce processus ad infinitum, cette construction fournit la plupart des pavages apériodiques les plus simples, dans le sens où ils ont la plus faible complexité possible [START_REF] Lagarias | Local complexity of Delone sets and crystallinity[END_REF]. Une autre propriété intéressante du pavage de Penrose, est que tous les pavages autorisés du système associé sont ceux qui vérifient un ensemble fini de règles locales. C'est l'analogue géométrique des sous-shifts de type fini unidimensionnels. Cependant, le système de pavage de Penrose est apériodique, à l'opposé des sous-shifts de type fini unidimensionnelles qui contiennent toujours des points périodiques. Cela met en évidence une différence fondamentale entre les propriétés combinatoires unidimensionnelles et bidimensionnelles qui sont liées à la logique et à la calculabilité. Le travail fondateur de H. Wang [START_REF] Wang | Proving theorems by pattern recognition -ii[END_REF] a déjà établi une relation entre la décidabilité de certaines formules de la logique du premier ordre et les problèmes de domino. Son étudiant R. Berger [START_REF] Berger | The undecidability of the domino problem[END_REF] a montré l'indécidabilité du problème de domino en exposant une énorme famille de tuiles avec des règles d'adjacence, appelées tuiles de Wang, ne permettant que des pavages apériodiques. Plus tard, cet exemple a été simplifié par R. Robinson [START_REF] Robinson | Undecidability and nonperiodicity for tilings of the plane[END_REF] avec 20 tuiles de Wang (5 à rotation près). Plus récemment, E. Jeandel et M. Rao [START_REF] Jeandel | An aperiodic set of 11 Wang tiles[END_REF] ont fourni un exemple similaire avec seulement 11 tuiles de Wang et ont montré que c'est la borne optimale. Une méthode puissante pour générer des tuiles de Wang apériodiques est l'utilisation de ce que l'on appelle substitution de forme constante, où la forme de l'image des tuiles est la même. Le résultat de S. Mozes [START_REF] Mozes | Tilings, substitution systems and dynamical systems generated by them[END_REF] illustre cette procédure en donnant des conditions suffisantes pour qu'un sous-shift substitutif (de forme constante) soit un facteur d'un sous-shift de type fini. Nous mentionnons également que les deux exemples de R. Robinson et Jeandel-Rao contiennent des sous-shifts substitutifs minimaux [START_REF] Gähler | Combinatorics and topology of the Robinson tiling[END_REF][START_REF] Labbé | Substitutive structure of Jeandel-Rao aperiodic tilings[END_REF]. 
Motivés par toutes ces raisons, nous nous concentrerons dans cette thèse sur l'étude des homomorphismes entre certains sous-shifts spécifiques: les substitutifs, générés par des substitutions multidimensionnelles de forme constante. Rappelons quelques résultats historiques dans le contexte unidimensionnel. Pour les Z-actions, les isomorphismes sont étroitement liés aux automorphismes. Le groupe d'isomorphisme est une extension de groupe d'indice au plus 2 du groupe d'automorphisme. Ainsi, la plupart des propriétés des isomorphismes peuvent être déduites de celles-ci. Les automorphismes des systèmes symboliques présentent des propriétés de rigidité déjà dans le cas unidimensionnel. Par exemple, le célèbre théorème de Curtis-Hedlund-Lyndon [START_REF] Hedlund | Endomorphisms and automorphisms of the shift dynamical system[END_REF] assure que tout facteur entre sous-shifts est un fonction de bloc glissant ou un automate cellulaire. En fait, les homomorphismes sont aussi induits par des fonctions locales, mais le centre change selon la matrice de l'homomorphisme (théorème 1.9). Cela montre que le groupe d'automorphisme et le groupe d'isomorphisme d'un sous-shift sont des sous-groupes dénombrables, discrets dans le groupe des homéomorphismes de l'espace des phases. Le groupe d'automorphisme des systèmes symboliques a été initialement étudié pour les sous-shifts de type fini par G. Hedlund dans [START_REF] Hedlund | Endomorphisms and automorphisms of the shift dynamical system[END_REF]. Ce groupe est infiniment engendré et contient de nombreux subgroupes: n'importe quel groupe fini, les groupes libres, la somme directe d'un nombre dénombrable de copies de Z, toute collection dénombrable de groupes finis, etc. Cependant, il est résiduellement finie, et ne contient donc pas de groupes divisibles (comme Q) ou le groupe symétrique infini [START_REF] Boyle | The automorphism group of a shift of finite type[END_REF][START_REF] Kim | On the automorphism groups of subshifts[END_REF]. Néanmoins, il n'existe pas de description générale du groupe d'automorphisme ni de leurs générateurs pour un sous-shift donné. Par exemple, la question de savoir si les groupes d'automorphisme du full-shifts sur 2 et 3 lettres sont algébriquement isomorphes est un problème encore ouvert. Une grande complexité n'est pas suffisante pour avoir un grand groupe d'automorphisme. Dans [START_REF] Bulatek | Strictly ergodic Toeplitz flows with positive entropies and trivial centralizers[END_REF][START_REF] Donoso | On automorphism groups of Toeplitz subshifts[END_REF], les auteurs ont donné une famille de sous-shifts Toeplitz d'entropie positive arbitrairement grande presentant un groupe d'automorphisme trivial. De plus, la taille du groupe d'automorphisme n'impose aucune restriction sur l'entropie, comme prouvé dans [START_REF] Donoso | On automorphism groups of Toeplitz subshifts[END_REF]. Une grande classe de groupes abéliens infinis finiment engendrés peut être réalisée comme le groupe d'automorphisme d'un sous-shift Toeplitz d'entropie arbitrairement grande ou nulle. À l'inverse, la complexité faible du sous-shift restreint les propriétés algébriques du groupe d'automorphisme. 
Il est prouvé dans [START_REF] Coven | Computing automorphism groups of shifts using atypical equivalence classes[END_REF][START_REF] Cyr | The automorphism group of a shift of linear growth: beyond transitivity[END_REF][START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF] que le groupe d'automorphisme est virtuellement Z pour les sous-shifts minimaux de complexité non super-linéaire, c'est-à-dire tels que lim inf_{n→∞} p_X(n)/n < ∞ où p_X(n) correspond au nombre de mots de longueur n. Deux suites différentes x, y ∈ X sont dites asymptotiques lorsqu'elles ont le même passé. La stratégie de [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF] est basée sur la propriété que les automorphismes permutent les paires asymptotiques et que les sous-shifts de complexité non super-linéaire ont un nombre fini de paires asymptotiques. Pour les sous-shifts de complexité plus élevée, le taux de croissance du groupe d'automorphisme est limité par la complexité du sous-shift. En particulier, il est moyennable pour une grande classe de sous-shifts à entropie nulle, comme prouvé dans [START_REF] Cyr | The automorphism group of a minimal shift of stretched exponential growth[END_REF][START_REF] Cyr | The automorphism group of a shift of slow growth is amenable[END_REF]. Au-delà des propriétés quantitatives, d'autres restrictions algébriques existent pour les sous-shifts à entropie nulle. Par exemple, dans [START_REF] Cyr | Distortion and the automorphism group of a shift[END_REF] ont été fournis les premiers exemples de groupes dénombrables (comme les groupes de Baumslag-Solitar BS(1, n)) qui ne peuvent pas se plonger dans le groupe d'automorphisme d'un sous-shift d'entropie nulle.

L'étude des homomorphismes est également un sujet classique dans le contexte de la théorie ergodique. Dans le cadre de la théorie de la mesure, nous considérons le système dynamique (X, µ, T, Z), où (X, F, µ) est un espace de probabilité standard, T est l'action générée par une transformation préservant la mesure µ et les automorphismes sont définis presque partout et préservent la mesure µ. Rappelons quelques résultats importants (nous renvoyons le lecteur à [START_REF] Ferenczi | Systems of finite rank[END_REF] pour un aperçu de ce thème). D. Ornstein [START_REF] Ornstein | On the root problem in ergodic theory[END_REF] a prouvé qu'un système dynamique mélangeant de rang un possède un groupe d'automorphisme mesurable trivial. Plus tard, A. del Junco [START_REF] Del Junco | A simple measure-preserving transformation with trivial centralizer[END_REF] a montré que l'exemple donné par Chacon [START_REF] Chacon | Weakly mixing transformations which are not strongly mixing[END_REF] possède également cette propriété. Ensuite, J. King et J.-P. Thouvenot [START_REF] King | A canonical structure theorem for finite joining-rank maps[END_REF] ont prouvé que le groupe d'automorphisme mesurable est virtuellement Z pour les systèmes mélangeants de rang fini. Dans la famille des sous-shifts substitutifs, les homomorphismes présentent des propriétés de rigidité plus fortes que celles mentionnées ci-dessus. Faisons une brève description des substitutions. Ce sont des objets combinatoires qui produisent des suites infinies par un processus d'itération. Leur compréhension profonde a pris plusieurs décennies. Nous renvoyons à [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF][START_REF] Fogg | Substitutions in dynamics, arithmetics and combinatorics[END_REF] pour des bibliographies complètes sur le sujet.
En utilisant le shift comme action sur ces suites infinies, nous obtenons les sous-shifts substitutifs, qui sont les plus simples systèmes symboliques non triviaux d'entropie nulle. Ils ont été introduits par W.H. Gottschalk dans [START_REF] Gottschalk | Substitution minimal sets[END_REF]. Leur simplicité les fait apparaître dans de nombreux domaines des mathématiques, tels que la combinatoire des mots (voir [START_REF] Berstel | The origins of combinatorics on words[END_REF]), la théorie des nombres (en particulier la théorie des nombres transcendants [1]), les systèmes de numération (voir [START_REF] Cobham | On the base-dependence of sets of numbers recognizable by finite automata[END_REF]), les approximations diophantiennes (voir [2]), et l'informatique (en particulier la théorie des automates [3]). B. Host et F. Parreau dans [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] ont donné une description complète des facteurs entre les sous-shifts résultant de certaines substitutions de longueur constante, comme par exemple la substitution de Thue-Morse définie par 0 → 01, 1 → 10. Ils ont prouvé que tout facteur mesurable induit un facteur continu, et que le groupe d'automorphisme est isomorphe à un produit direct de Z avec un groupe fini. De plus, tout groupe fini peut être réalisé comme un groupe quotient Aut(X, S, Z)/⟨S⟩ pour ces sous-shifts, comme l'ont prouvé M. Lemańczyk et M. K. Mentzen dans [START_REF] Lemańczyk | On metric properties of substitutions[END_REF]. Plus tard, I. Fagnot [START_REF] Fagnot | Sur les facteurs des mots automatiques[END_REF] a prouvé que le problème de savoir s'il existe un facteur entre deux sous-shifts substitutifs de longueur constante est décidable, en utilisant le cadre de la logique du premier ordre de l'arithmétique de Presburger. Quelques années plus tard, F. Durand dans [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] a montré que les sous-shifts linéairement récurrents (en particulier les sous-shifts substitutifs) ont un nombre fini de facteurs symboliques, à conjugaison près. De plus, dans [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF], il a été prouvé que les facteurs de Cantor topologiques des sous-shifts substitutifs sont soit des sous-shifts substitutifs, soit des odomètres. V. Salo et I. Törmä ont fourni dans [START_REF] Salo | Block maps between primitive uniform and Pisot substitutions[END_REF] un processus de renormalisation des facteurs étendant la description obtenue dans [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Ensuite, C. Müllner et R. Yassawi [START_REF] Müllner | Automorphisms of automatic shifts[END_REF] ont démontré que tout facteur symbolique apériodique d'un sous-shift substitutif de longueur constante est conjugué par une fonction lettre à lettre à un sous-shift substitutif de longueur constante. Plus récemment, F. Durand et J. Leroy [START_REF] Durand | Decidability of the isomorphism and the factorization between minimal substitution subshifts[END_REF] ont montré la décidabilité du problème d'existence d'un facteur entre deux sous-shifts substitutifs minimaux.

Présentation des principaux résultats

Dans cette thèse, nous étudions les homomorphismes entre les sous-shifts substitutifs multidimensionnels générés par les substitutions de forme constante.
Dans notre contexte, L ∈ M(d, Z) est une matrice d'expansion entière, c'est-à-dire L > 1 et L -1 < 1. Une substitution de forme constante est une fonction ζ : A → A F , où A est un alphabet fini et F est un domaine fondamental de L(Z d ) dans Z d . L'ensemble F est appelé le support de la substitution. Les substitutions de forme constante sont les analogues multidimensionnels des substitutions de longueur constante unidimensionnel. Ici, la "longueur" de la substitution est représentée par la matrice d'expansion L. Pour chaque n > 0, toute itération ζ n des substitutions peut également être obtenue par une substitution de forme constante de une matrice d'expansion L n et de support F n . À la différence du cas unidimensionnel, ces substitutions peuvent ne pas être linéairement récurrentes (exemple 3.6) et peuvent avoir des facteurs de Cantor non expansifs ni équicontinus (exemple 4.3). Des résultats connus du cas unidimensionnel sont encore préservés pour ces substitutions de forme constante: elles sont des extensions finies d'un odomètre spécifique donné par la substitution (lemme 3.9) et leurs facteurs équicontinus maximaux ont une structure similaire au cas unidimensionnel (proposition 3.15). Dans [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF], les auteurs ont étudié les isomorphismes, appelés symétries étendues, pour une classe de substitutions de forme constante, appelée substitutions par blocs bijectifs. Une substitution de forme constante est bijective si tout indice f ∈ F vérifie |{ζ(a) f : a ∈ A}| = |A|. Les substitutions de blocs sont des substitutions de forme constante avec une matrice d'expansion diagonale et un support parallélépipédique. Les auteurs ont prouvé dans [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF] que l'ensemble des matrices qui définissent les isomorphismes est un groupe fini. Bien que cela ne soit pas prouvé dans l'article, on peut en déduire que le groupe des isomorphismes est virtuellement engendré par l'action du shift. Nous étendons l'étude des homomorphismes en décrivant le groupe d'isomorphisme pour les substitutions générales de forme constante. À la différence de [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF], notre stratégie consiste à décrire l'ensemble des directions non déterministes (théorème 5.2). Les matrices définissant les isomorphismes préservent l'ensemble des directions non déterministes (proposition 1.12). Un vecteur v ∈ S d-1 est dit nondéterministe pour le sous-shift (X, S, Z d ) s'il existe deux points différents x = y ∈ X qui sont égaux dans le demi-espace H v = {t ∈ R d : t, v < 0}. C'est un analogue multidimensionnel des paires asymptotiques [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF]. Les directions non déterministes ont été introduites dans [START_REF] Boyle | Expansive subdynamics[END_REF] avec la notion de sous-espace non expansif pour étudier les sous-actions d'une action Z d donnée, pour d > 1. Cette notion est important en dynamique symbolique. À titre d'exemple, mentionnons [START_REF] Cyr | Nonexpansive Z 2 -subdynamics and Nivat's conjecture[END_REF] où ces objets ont été utilisés pour prouver une version faible de la conjecture de Nivat. Nous donnons une description de l'ensemble des directions non déterministes pour les substitutions bijectives de forme constante. Théorème A (théorème 5.2). 
Soit ζ une substitution apériodique bijective primitive de forme constante. L'ensemble des directions non déterministes de son sous-shift substitutif est l'intersection de S d-1 avec une union non vide des limites des cônes normaux des faces de l'enveloppe convexe du support de ζ n , pour les entiers n > 0. Ce théorème donne des contraintes topologiques sur l'ensemble des directions non déterministes pour les substitutions bijectives. Sous des conditions géométriques du support, nous obtenons des propriétés plus fortes sur les directions non déterministes. Une substitution bijective de forme constante est polytope si l'enveloppe convexe de l'ensemble limite de L -n (F n ) (appelé digit tile, voir la section 1.7) est un polytope. Dans ce cas, l'ensemble des directions non déterministes est beaucoup plus restreint: c'est une union finie de boules fermées (éventuellement dégénérées). Par exemple, dans le cas bidimensionnel, il ne peut pas être un ensemble de Cantor (corollaire 5.3). Ceci diffère du le résultat prouvé par M. Boyle et D. Lind [START_REF] Boyle | Expansive subdynamics[END_REF] et M. Hochman [START_REF] Hochman | Non-expansive directions for Z 2 actions[END_REF]. Ils assurent que tout ensemble compact de S 1 peut être réalisé comme l'ensemble des directions non expansifs d'un sousshift. Le travail de cette thèse donne les premiers description de l'ensemble des directions non déterministes pour des Z d -actions minimaux. Lorsque le rang des directions non déterministes est maximal, grâce à la description précédente, nous obtenons les contraintes suivantes sur les homomorphismes des sous-shifts substitutifs. Théorème B (proposition 5.15 et théorème 5.17). Soit ζ une substitution polytope primitive bijective apériodique. Si le sous-shift substitutif (X ζ , S, Z d ) a d directions non déterministes linéairement indépendantes, alors: 1. tout homomorphisme du sous-shift substitutif (X ζ , S, Z d ) est inversible. Le groupe d'isomorphisme est virtuellement engendré par l'action du shift. Les substitutions de blocs sont des cas particulier des substitutions polytopes et il est simple de vérifier qu'elles vérifient l'hypothèse sur le rang de l'ensemble des directions non déterministes. Ainsi le théorème B généralise les résultats de [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]. En fait, nous pouvons affaiblir la notion de bijectivité par les notions de bijectivité sur les extrémités (voir le chapitre 5) et réductibilité (voir le chapitre 4). Cela suffit pour garder la même conclusion. Cette hypothèse de réductibilité est presque optimale car nous fournissons dans l'exemple 6 un exemple de sous-shift substitutif de forme constante dont l'ensemble des matrices définissant les isomorphismes est infini. L'hypothèse sur le rang de l'ensemble des directions non déterministes est faible puisque nous ne connaissions pas de substitution qui ne la satisfasse pas. Nous fournissons un algorithme pour vérifier si cette hypothèse est satisfaite (lemme 5.12). De plus, par le résultat dans [START_REF] Guillon | Determinism in subshifts[END_REF], cette hypothèse est vérifiée pour une famille générique de substitutions bijectives de forme constante bidimensionnelles. Dans une communication privée, P. Guillon [62] a mentionné que ce résultat étant également pour les dimensions supérieures. Malheureusement la preuve n'a jamais été publié. 
Pour obtenir le théorème B, nous avons besoin d'un certain contrôle sur les fonctions de blocs définissant les isomorphismes. Pour cela, nous suivons la stratégie de B. Host et F. Parreau dans [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Nous obtenons également une propriété remarquable de rigidité lorsque les matrices commuent avec la matrice d'expansion de la substitution. Nous rappelons que les sous-shifts substitutifs sont uniquement ergodiques, donc tout endomorphisme continu induit un endomorphisme mesurable. Nous fournissons une réciproque partielle. Théorème C (théorème 4.1, version simplifiée). Soit (X ζ , S, Z d ) un sous-shift généré par une substitution apériodique primitive de forme constante. Pour tout endomorphisme mesurable φ, il existe j ∈ Z d tel que S j φ est égal à un endomorphisme continu ψ, satisfaisant les deux propriétés suivantes: 1. l'endomorphisme ψ a un rayon borné par la substitution. Il existe des entiers n > 0 et p ∈ Z d tels que S p ψζ n 1 = ζ n 2 ψ. Le théorème C implique que pour une substitution réduite tout endomorphisme mesurable induit un endomorphisme continu. Ainsi l'ensemble des endomorphismes mesurés est un ensemble dénombrable. Le théorème C est un analogue multidimensionnel de celui prouvé par B. Host et F. Parreau dans [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Des contre-exemples au théorème C sont fournis par des sous-shifts substitutifs qui sont métriquement isomorphes à leur facteur équicontinu maximal. Cela se produit lorsqu'une substitution possède une condition combinatoire appelée coïncidence [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] (voir l'exemple 4.3). L'ensemble des endomorphismes mesurés des odomètres est alors indénombrable et tout élément de l'odomètre représente un endomorphisme mesurable par addition. Ainsi, comme dans l'article original [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], la réductibilité est une hypothèse optimale pour le théorème C. Nous obtenons ensuite quelques conséquence dynamique du théorème C. Sous la condition de réductibilité, les sous-shifts substitutifs sont coalescents (proposition 4.7), c'està-dire que tout endomorphisme du sous-shift est inversible. Ceci était déjà connu pour les sous-shifts linéairement récurrents, d'abord pour le cas unidimensionnel dans [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF], puis par les dimensions supérieures dans [START_REF] Cortez | Linearly repetitive Delone systems have a finite number of nonperiodic Delone system factors[END_REF]. Le pavage de la chaise (voir [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF]) et les sous-shifts substitutifs du demi-hexagone, étudiées dans le chapitre 6, sont des exemples de sous-shifts substitutifs avec coïncidence. Par conséquent, leurs endomorphismes mesurés forment un ensemble indénombrable. Néanmoins, les deux exemples sont coalescents et leurs groupes d'automorphisme sont virtuellement engendrés par l'action de shift. Sans la condition de réductibilité, nous ne savons pas si tous les sous-shift substitutifs apériodiques satisfont ces dernières propriétés. Le sous-shift substitutif du demi-hexagone, mentionné précédemment, ne satisfait pas les hypothèses des théorèmes C et B. 
Nevertheless, we are able to describe its maximal equicontinuous factor and to characterize its isomorphisms. The symmetry group of a subshift is the set of matrices M ∈ GL(d, Z) defining an isomorphism. Thanks to this example we obtain the following result.
Theorem D (Theorem 6.3). There exists a minimal aperiodic subshift (in fact a substitutive subshift) with an infinite symmetry group.
More precisely, the isomorphism group of the half-hex substitutive subshift is isomorphic to the semidirect product of Z 2 with GL(2, Z). Its symmetry group is therefore the largest possible. Subshifts with infinite symmetry groups had been found before, as in [START_REF] Baake | Number-theoretic positive entropy shifts with small centralizer and large normalizer[END_REF], by studying their relation with topological entropy, but those examples are far from minimal. Finally, concerning factors of substitutive subshifts, we have the following characterization.
Theorem E (Theorem 3.22). Let (Y, S, Z d ) be an aperiodic symbolic factor of a subshift (X ζ , S, Z d ) generated by an aperiodic primitive constant-shape substitution ζ. Then there exists an aperiodic primitive constant-shape substitution ζ', with the same structure as a power of ζ, whose system (X ζ' , S, Z d ) is conjugate to the symbolic factor (Y, S, Z d ).
This is a multidimensional analogue of a result proved by C. Müllner and R. Yassawi [START_REF] Müllner | Automorphisms of automatic shifts[END_REF] in the one-dimensional case, which is itself a refinement of a result proved in [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF]. On the other hand, this result says nothing about the other topological Cantor factors of substitutive subshifts. Example 4.3 gives a substitutive subshift with a topological Cantor factor that is neither expansive nor equicontinuous. Likewise, Example 4.3 has a symbolic factor with a nontrivial period and an infinite phase space. This contrasts with the one-dimensional dichotomy proved in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF]. Theorems A, B, C and E are published in [START_REF] Cabezas | Homomorphisms between multidimensional constant-shape substitutions[END_REF].
Organization of this thesis
This thesis is organized as follows. The basic definitions and background used throughout the thesis are introduced in Chapter 1. We recall some classical notions from topological dynamical systems, ergodic theory and symbolic dynamics. We develop the relation between homomorphisms and topological factors of dynamical systems. We then present the example of odometer systems and Toeplitz subshifts, and we obtain a characterization of homomorphisms between odometers. We end that chapter with a brief survey of multidimensional constant-shape substitutions, whose homomorphisms are our main object of study. In Chapter 2 we study the symmetry semigroup of two-dimensional constant-base odometers given by a matrix L. In this case we obtain a description of a bifurcation phenomenon at the level of the symmetry semigroup with respect to arithmetical relations among the invariants of the matrix L.
The main theorem (Theorem 2.2) shows that in most cases the symmetry semigroup is the centralizer of the matrix L. This will help to obtain a characterization of the normalizer semigroup of aperiodic primitive constant-shape substitutions, using the relation between homomorphisms and their maximal equicontinuous factors (Lemma 1.7). The main result of Chapter 3 is the characterization of the aperiodic symbolic factors of substitutive subshifts given by an aperiodic primitive constant-shape substitution: they are conjugate to substitutive subshifts generated by aperiodic primitive constant-shape substitutions (Theorem E). Substitutive subshifts are not necessarily linearly repetitive (Example 3.6); nevertheless, we give a polynomial growth bound on the repetitivity function for constant-shape substitutions (Lemma 3.7). Chapter 4 is devoted to the proof of rigidity properties of measurable factors and homomorphisms between substitutive subshifts (Theorem C). We then deduce that these substitutive subshifts are coalescent (Proposition 4.7) and that their automorphism groups are virtually generated by the shift action (Proposition 4.8). In Chapter 5 we describe the normalizer of general constant-shape substitutions. We prove that the normalizer is virtually generated by the shift action (Theorem B). To do so, we relate the normalizer to different types of supports of the substitutions, via the nondeterministic directions. We characterize the nondeterministic directions through the digit tile for a weakened version of bijective substitutions (Theorem A). Moreover, these directions are computable in terms of the combinatorics of the substitution (Theorem 5.13). Finally, in Chapter 6 we characterize the normalizer group for two examples of constant-shape substitutions. The first one, called the table substitution, satisfies the hypotheses of the results of Chapters 4 and 5. The half-hex tiling does not satisfy the hypotheses of the previous results; nevertheless, a description of its isomorphism group is provided by Theorem D.
Chapter 1
Definitions and background
In this chapter, we fix some notation and definitions and prove some general properties to be used throughout this thesis. We start with some notions of discrete, convex and fractal geometry. Then we recall some classical notions of topological dynamical systems and of homomorphisms between them. In particular, we will see some relations between homomorphisms and topological factors of dynamical systems (Lemma 1.6). We also recall some basics of ergodic theory, symbolic dynamics, and nondeterministic directions (also called nonexpansive half-spaces). In Section 1.2 we define the central objects of study of this thesis, which are homomorphisms between topological Z d -actions T : X × Z d → X on a compact metric space X. These types of morphisms have been studied for both one-dimensional and higher-dimensional actions. For Z-actions, the isomorphism group (the group of invertible homomorphisms) of a topological dynamical system is either the automorphism group Aut(X, T, Z), or an index-2 extension of Aut(X, T, Z).
Therefore, the study of isomorphisms for Z-actions focuses on the existence of an isomorphism, which are sometimes called reversors (see [START_REF] Baake | A brief guide to reversing and extended symmetries of dynamical systems[END_REF] for a brief guide to the study of these isomorphisms). Since GL(d, Z) is an infinite group for d > 1, the relation between automorphisms and isomorphisms becomes less clear. Isomorphisms have been studied for particular subshifts [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF] and more recently for a class of substitutive subshifts under strong geometrical and combinatorial restrictions [START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF][START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]. In this thesis we extend the study of homomorphisms by describing the isomorphism group (called the normalizer group) for general constant-shape substitutions that are defined in this chapter. We then describe the nondeterministic directions for a subshift. This notion was introduced in [START_REF] Guillon | Determinism in subshifts[END_REF] to the study of two-dimensional subshifts. In [START_REF] Cyr | Nonexpansive Z 2 -subdynamics and Nivat's conjecture[END_REF] they were used to prove a weak version of Nivat's conjecture. This notion is motivated by the work of M. Boyle and D. Lind, about nonexpansive subspaces. When the space X is infinite such subspaces always exist [START_REF] Boyle | Expansive subdynamics[END_REF]Theorem 3.7]. In fact, they can be described only by hyperplanes [17, Theorem 3.6], hence the term of nondeterministic directions. In this thesis, nondeterministic directions are characterized for bijective substitutions (Theorem 5.2) and used to describe the isomorphisms for a big family of substitutive subshifts (Theorem 5.17). We also present the example of odometer systems and Toeplitz sequences in Section 1.6. Odometer systems are the most natural equicontinuous systems in the study of minimal Cantor systems. In fact, they are the maximal equicontinuous factor for a big family of symbolic systems, such as, some substitutions and Toeplitz sequences. Toeplitz subshifts are symbolic sytems that are the orbit closures of the regular quasi-periodic points of the subshift. We refer to [START_REF] Downarowicz | Survey of odometers and Toeplitz flows[END_REF][START_REF] Cortez | Z d Toeplitz arrays[END_REF][START_REF] Cortez | G-odometers and their almost one-to-one extensions[END_REF] for the study of odometer systems for different actions. We also obtain a characterization of homomorphisms between two odometer systems (Lemma 1.14), useful to describe isomorphisms for substitutive subshifts. We finish this chapter with a brief survey of multidimensional constant-shape substitutions. They represent the simplest nontrivial zero-entropy symbolic systems, since they are generated by finite data. By this fact, ergodic and topological properties of substitution dynamical systems have been extensively studied. They were introduced by W.H. Gottschalk in [START_REF] Gottschalk | Substitution minimal sets[END_REF] (see [START_REF] Fogg | Substitutions in dynamics, arithmetics and combinatorics[END_REF][START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] for a good bibliography on this subject). We also provide some good finite sets useful to the structure of the sequences in substitutive subshifts. 
As a corollary, we get a canonical form of symbolic factors of constant-shape substitutions.
Basics on discrete, convex and fractal geometry
We will call a sequence of finite sets (F n ) n>0 ⊆ Z d a Følner sequence if for all m ∈ Z d we have that lim n→∞ |F n ∆(m + F n )|/|F n | = 0. For any r > 0 and any finite F ⊆ Z d we denote by F •r the set of all elements f ∈ F such that f + (B(0, r) ∩ Z d ) ⊆ F , i.e., F •r = {f ∈ F : f + (B(0, r) ∩ Z d ) ⊆ F }. Note that the Følner assumption implies, for any r > 0, lim n→∞ |F •r n |/|F n | = 1. (1.1)
Convex geometry
In the following we give some basics on convex geometry that will be used in the rest of the thesis. We refer to [START_REF] Auslender | Asymptotic cones and functions in optimization and variational inequalities[END_REF] for a survey of results about this field. A set C ⊆ R d is said to be convex if for all x, y ∈ C the segment [x, y] = {z ∈ R d : z = tx + (1 -t)y, t ∈ [0, 1]} is included in C. Recall that the image of a convex set under an affine map is also a convex set, and the intersection of an arbitrary family of convex sets is also a convex set. This leads to the notion of the convex hull of a set. If A ⊆ R d we define the convex hull of A, denoted by conv(A), as the intersection of all convex sets containing A. A set S ⊆ R d is an affine set if for any x, y ∈ S the line {tx + (1 -t)y : t ∈ R} is contained in S. For any set A ⊆ R d we define the affine hull of A, denoted by Aff(A), as the intersection of all affine sets containing A. A fundamental characterization of convex sets is provided by Carathéodory's theorem. Theorem 1.1 (Carathéodory's theorem). For any A ⊆ R d , any element of conv(A) can be represented as a convex combination of no more than (d + 1) elements of A. We now recall some basic topological concepts associated with convex sets. We say that a point x ∈ A is a relative interior point of A if A contains the intersection of a ball centered at x with Aff(A), i.e., there exists r > 0 such that B(x, r) ∩ Aff(A) ⊆ A. The set of all relative interior points of A is called its relative interior, and is denoted by ri(A). We can also define the relative boundary ∂ ri (A) as the set difference of the closure and the relative interior, i.e., ∂ ri (A) = cl(A) \ ri(A). An important notion for convex sets is that of supporting hyperplanes. We now recall some basic notions about cones and polyhedral sets. A nonempty set C ⊆ R d is said to be a cone if for every x ∈ C, the set C contains the positive ray R + x = {tx : t > 0} spanned by x. A translation of a cone by a nonzero vector is called an affine convex cone. A cone C ⊆ R d is said to be finitely generated if there exist u 1 , . . . , u p ∈ R d such that C = {t 1 u 1 + • • • + t p u p : t i ≥ 0, i = 1, . . . , p}. For a given nonempty set A ⊆ R d , the smallest cone containing the set A is called the positive hull (or conical hull) of A; it is given by cone A = {tx : x ∈ A, t > 0} ∪ {0}. The positive hull is also said to be the cone generated by A. A set P ⊆ R d is called polyhedral if it has the form P = {x ∈ R d : u i , x ≤ a i , i = 1, . . . , p}, where u i ∈ R d and a i ∈ R. The following is a characterization of polyhedral sets. Theorem 1.2 (Minkowski-Weyl Theorem). A cone C is polyhedral if and only if it is finitely generated. Closed convex sets admit a representation theorem, but it requires the notions of extreme points and extreme rays.
A point x in a convex set C is called an extreme point, if can not be written as the convex combination of two different points in C, i.e., if x is equal to tu + (1 -t)v for some 0 ≤ t ≤ 1, with u, v ∈ C, then u = v = x. We denote by Ext(C) the set of the extreme points of a convex set C. Extreme points are special cases of faces of a convex set. A compact convex set is called a polytope if it has a finite number of extreme points. A convex set F ⊆ C is called a face of C if for every x ∈ F and every y, z ∈ C such that x = ty + (1 -t)z, with 0 < t < 1, we have that y, z ∈ F . The dimension of a face F of C is the dimension of its affine hull. The 0-dimensional faces of C are exactly the extreme points of C, and the bounded 1-dimensional faces are called segments or edges. An extreme ray of a convex set C is the direction of a half-line that is a face of C. A useful result about representation of closed convex sets in R d is the following Theorem 1.3 (Krein-Milman theorem for unbounded convex sets). If a nonempty closed convex set C ⊆ R d has at least one extreme point, i.e., does not have an affine line. Then C can be written as the sum of the convex hull of its extreme points and the cone generated by its extreme rays. Some useful notion for convex sets corresponds to the normal cone. Let F be a nonempty face of a closed convex polytope C. The opposite normal cone 2 NF (C) of C at F is defined as NF (C) = {v ∈ R d : min t∈C v, t = v, p , ∀p ∈ F }. The opposite normal fan of C is the collection of all opposite normal cones of C: N (C) = { NF (C) : F is a proper face of C}. The following are simple statements on the normal fan • If F is a face of C, then dim( NF (C)) = d -dim(F ). • If F is a face of G, which is a face of C, then NG (C) is a face of NF (C). • The set F face of C NF (C) is equal to R d . Fractal Geometry In the following we present some definitions and some properties satisfied for some fractals sets which are defined by iterated function systems or IFS. We refer to [START_REF] Kirat | Remarks on self-affine fractals with polytope convex hulls[END_REF][START_REF] Strichartz | Geometry of self-affine tiles[END_REF][START_REF] Vince | Digit tiling of Euclidean space[END_REF] for some results that will be used throughout this thesis. Let C(R d ) denote the collection of all nonempty compact subsets of R d . The Hausdorff metric H on C(R d ) is defined as follows: H(A, B) = inf{ε : A ⊆ B ε ∧ B ⊆ A ε }, where A ε = {t ∈ R d : t -y ≤ ε, for some y ∈ A}. We have that (C(R d ), H) is a complete metric space. A map f : R d → R d is said to be a contraction if there exists 0 < c < 1 such that f (x) -f (y) ≤ c x -y for all x, y ∈ R d . Let {f i } N i=1 be a set of contraction maps on R d , and define the map F : C(R d ) → C(R d ) A → N i=1 f i (A) This map is a contraction on (C(R d ), H), so by the Banach fixed-point Theorem, there exists a unique set T ∈ C(R d ) (called digit tile) such that T = N i=1 f i (T ). A way to approximate this set is by iterations T = lim n→∞ F n (T 0 ), (1.2) where T 0 is an arbitrary compact set in R d and the limit is with respect to the Hausdorff metric. Since the convex hull of a compact set in R d is compact, the map conv : C(R d ) → C(R d ) which gives for any set A ∈ C(R d ) its convex hull is well defined, and is well known to be continuous. Topological dynamical systems In this section, we will present the basic definitions and some properties of topological dynamical systems. 
We also define the central object of study of this thesis, which are homomorphisms between topological dynamical systems, and present some basic results about them. We finish this section with a survey on some results about the compatibility of homomorphisms between topological factors of topological dynamical systems. We mention [4] for an extensive bibliography of this area. Basic definitions A topological dynamical system is a triple (X, T, G), where (X, ρ) is a compact metric space, G is a group of homeomorphisms of the space X into itself, and T : X × G → X is a continuous map, satisfying T (x, e) = x, and T (T (x, g), h) = T (x, gh) for all x ∈ X, and g, h ∈ G. We will denote T g to the homeomorphism T (•, g). If (X, ρ) is a compact metric space, we denote Homeo(X) the group of all homeomorphisms from X to itself, and if T ∈ Homeo(X), we use (X, T, Z) to denote the topological dynamical system (X, T, {T n : n ∈ Z}). Similarly, if T 1 , . . . , T d are d commuting homeomorphisms on X, we denote (X, T, Z d ) to denote the topological dynamical system (X, T, {T 1 , . . . , T d } ). For a point x ∈ X, we define its orbit as the set O(x, G) = {T g (x) : g ∈ G}. If A ⊆ X, we say that A is G-invariant if for all x ∈ A, O(x, G) is included in A. If (X, T, G) is a topological dynamical system, a subset K ⊆ X is called a minimal set if K is closed, nonempty, G-invariant, and has no proper closed nonempty invariant subsets, i.e., if N ⊆ K is closed and G-invariant, then N = ∅ or N = K. In this case, we say that (K, T | K , G) is a minimal system, where T | K : K × G → K corresponds to the restriction of T in K. It is easy to see that a system is minimal if and only if it it is the closure orbit of all of its points. An important type of topological dynamical systems are the so-called equicontinuous systems. A topological dynamical system (X, T, Z d ) is said to be equicontinuous if the set of maps {T n : n ∈ Z d } forms an equicontinuous family of homeomorphisms. The equicontinuous systems are, in some sense, the simplest dynamical systems. In fact, there exists a complete characterization of them [4]. Homomorphisms between topological dynamical systems In the following, we define the homomorphisms between topological dynamical systems, which are the central object of study on this thesis. Homomorphisms represent internal symmetries of a given topological dynamical system, such as rotations and reflections. Invertible homormophisms, which will be called isomorphisms, are then conjugacies of Z d -actions, up to a GL(d, Z)-transformation. We refer to [START_REF] Baake | A brief guide to reversing and extended symmetries of dynamical systems[END_REF] for a brief guide to homomorphisms both for one-dimensional systems and higher-dimensional dynamical systems. Notation and basic properties Definition 1.4. Let (X, T, Z d ), (Y, T, Z d ) be two topological dynamical systems and M ∈ GL(d, Z). A homomorphism associated with M is a surjective continuous map φ : X → Y such that for all n ∈ Z d we have that φ • T n = T M n • φ. If φ is invertible, then φ is an isomorphism. In the following we fix the different notations that we will used throughout this thesis: • We denote the set of all homomorphisms associated with M between (X, T, Z d ) and (Y, T, Z d ) by Hom M (X, Y, T, Z d ). • The set of homomorphisms between two dynamical systems, is defined as the collection of all of homomorphisms, i.e., Hom(X, Y, T, Z d ) = M ∈GL(d,Z) Hom M (X, Y, T, Z d ). 
• In the special case M is the identity matrix, homomorphisms are called factors and we denote Fac(X, Y, T, Z d ) the collection of all factors between (X, T, Z d ) and (Y, T, Z d ). If a factor is invertible, then it is called a conjugacy. • In the case (X, T, Z d ) = (Y, T, Z d ), we simply denote these sets as N M (X, T, Z d ) and N (X, T, Z d ) and we call the last one, the normalizer semigroup of (X, T, Z d ). A factor map is called an endomorphism, and a conjugacy is called an automorphism. We denote the set of all endomorphisms and automorphisms of a topological dynamical system as End(X, T, Z d ) and Aut(X, T, Z d ), respectively. • We define the symmetry semigroup N (X, T, Z d ) of (X, T, Z d ) as the collection of all matrices M ∈ GL(d, Z) with N M (X, T, Z d ) = ∅. • A topological dynamical system (X, T, Z d ) is said to be coalescent if every endomorphism of (X, T, Z d ) is an automorphism. Note that the symmetry semigroup of a topological dynamical system is an invariant under conjugation. As an example, we can define an isomorphism for the Z-action T : S 1 → S 1 given by the rotation T α (x) = x + α, α ∈ S 1 , by the map φ(x) = -x. Indeed, we have that φ • T = T -1 • φ. For the Z 2 -action on the torus T 2 generated by the actions T 1 (x) = x+α, T 2 (x) = x+β, an isomorphism is given by the map ψ : T 2 → T 2 defined as ψ(x) = -x. Isomorphisms of a dynamical system has been studied before. In [START_REF] Baake | A brief guide to reversing and extended symmetries of dynamical systems[END_REF] they are called as reversors and the normalizer group N * (X, T, Z) (generated by isomorphisms) is called the reversing symmetry group. Their study was inspired by the time-reversal symmetry of many fundamental equations in physics. In this case N * (X, T, Z) = Aut(X, T, Z) or N * (X, T, Z) is an index-2 extension of Aut(X, T, Z). Isomorphisms are always elements of even or infinite order. The existence of isomorphisms have been studied for particular subshifts. Evidence suggest that algebraic properties of the automorphism group does not affect its existence: • The full shift (A Z , S, Z). The automorphism group is huge (not amenable). • Any sturmian subshift, which is always palindromic [START_REF] Droubay | Palindromes and Sturmian words[END_REF]. Its automorphism group is trivial, i.e., Aut(X, S, Z) = S . • The period doubling shift, defined by the primitive substitution 0 → 01, 1 → 00. • The Thue-Morse shift, defined by 0 → 01, 1 → 10. • The square-free shift, obtained as the orbit closure of the characteristic function of the square-free integers. For higher-dimensional systems they are sometimes called extended symmetries (see for example [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF][START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF][START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]). Since GL(d, Z) is an infinite group for d > 2, the relation between N * (X, T, Z d ) and Aut(X, T, Z d ) is less clear than in the one-dimensional case. If φ ∈ N M 1 (X, T, Z d ), and ψ ∈ N M 2 (X, T, Z d ), then φψ is in N M 1 M 2 (X, T, Z d ), so the sets N M (X, T, Z d ) are not semigroups (except if M is the identity matrix). 
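Returning to the circle-rotation example above, the defining relation φ • T n = T M n • φ can be checked directly. The following Python sketch is an illustrative aside (not part of the thesis); the rotation number and the sample points are arbitrary choices. It verifies numerically that the reflection φ(x) = -x on R/Z is an isomorphism of the rotation by α associated with the matrix M = (-1) ∈ GL(1, Z), i.e., that φ • T n = T -n • φ.

    import math, random

    alpha = math.sqrt(2) % 1.0          # an irrational rotation number (arbitrary choice)

    def T(x, n):                        # the Z-action: T^n(x) = x + n*alpha (mod 1)
        return (x + n * alpha) % 1.0

    def phi(x):                         # the candidate isomorphism: x -> -x (mod 1)
        return (-x) % 1.0

    M = -1                              # the associated matrix in GL(1, Z)

    def dist(a, b):                     # distance on the circle R/Z
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)

    # check phi(T^n(x)) == T^{M n}(phi(x)) on random samples
    for _ in range(1000):
        x = random.random()
        n = random.randint(-50, 50)
        assert dist(phi(T(x, n)), T(phi(x), M * n)) < 1e-9
    print("phi o T^n = T^{-n} o phi verified on samples")

The check is of course only numerical; the exact identity -(x + nα) = -x - nα (mod 1) proves the relation for every n.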
Now, even though the matrices M ∈ GL(d, Z) are invertible over Z, the symmetry semigroup N (X, T, Z d ) is not necessarily a group, since the existence of a homomorphism associated with a matrix M does not necessarily imply the existence of a homomorphism associated with M -1 . Nevertheless, we get the following result when a dynamical system is coalescent. Proposition 1.5. Let (X, T, Z d ) be a coalescent system. If N (X, T, Z d ) is a group, then any homomorphism in N (X, T, Z d ) is invertible. Proof. Let φ, ψ be two homomorphisms onto (X, T, Z d ) associated with M , M -1 , respectively. Then φψ is a factor map onto (X, T, Z d ). Since (X, T, Z d ) is coalescent, φψ is invertible. We conclude that φ and ψ are invertible maps. Equicontinuous systems are examples of coalescent systems [4]. See [START_REF] Parry | Minimal skew-product homeomorphisms and coalescence[END_REF] for an example of a non-coalescent system. When the action generated by T is free, the normalizer group N * (X, T, Z d ) (which is the group of isomorphisms of (X, T, Z d )) is the normalizer of T in Homeo(X). This is because g T g -1 = T , for g ∈ Homeo(X), is only possible if the generators of T , which are the T e i for 1 ≤ i ≤ d, are conjugated into generators of T . In the particular case where T is not free, the normalizer of T in Homeo(X) contains N * (X, T, Z d ) as a subgroup, but possibly further elements (see [START_REF] Baake | The structure of reversing symmetry groups[END_REF] for some examples). Nevertheless, in this thesis we are only interested in free actions. The groups T and Aut(X, T, Z d ) are normal subgroups of N * (X, T, Z d ), and the centers of N * (X, T, Z d ) and Aut(X, T, Z d ) are the same. In fact, we have the following short exact sequences:
1 → T → Aut(X, T, Z d ) → Aut(X, T, Z d )/ T → 1 (1.3)
1 → Aut(X, T, Z d ) → N * (X, T, Z d ) → N * (X, T, Z d )/ Aut(X, T, Z d ) → 1. (1.4)
For every topological dynamical system, there exists at least one equicontinuous factor, which is the system given by one point. Furthermore, for every topological dynamical system, there exists its maximal equicontinuous factor, i.e., a factor π eq : (X, T, Z d ) → (X eq , T eq , Z d ) such that (X eq , T eq , Z d ) is an equicontinuous system, and for every equicontinuous factor π : (X, T, Z d ) → (Y, T, Z d ), there exists a factor map φ : (X eq , T eq , Z d ) → (Y, T, Z d ) such that π = φ • π eq . In the one-dimensional case, the maximal equicontinuous factor of a minimal system is a rotation on a compact monothetic topological group [96, Theorem 2.11], that is, a group G for which there exists an element g ∈ G such that the subgroup generated by g is dense. Such groups are always abelian. Also, in the particular case where π : (X, T, Z d ) → (Y, T, Z d ) is an almost 1-to-1 factor and (Y, T, Z d ) is an equicontinuous system, (Y, T, Z d ) is the maximal equicontinuous factor of (X, T, Z d ). For instance, odometer systems (defined in Section 1.6) are almost 1-to-1 factors of Toeplitz systems [START_REF] Cortez | Z d Toeplitz arrays[END_REF][START_REF] Downarowicz | Survey of odometers and Toeplitz flows[END_REF].
Compatibility properties of homomorphisms
We say a factor map π : (X, T, Z d ) → (Y, T, Z d ) is compatible if for any endomorphism φ ∈ End(X, T, Z d ), and every x, y ∈ X, if π(x) is equal to π(y), then π(φ(x)) is equal to π(φ(y)).
With the same spirit, we say a factor π is compatible with homomorphisms if for any homomorphism φ ∈ N (X, T, Z d ), and every x, y ∈ X, if π(x) is equal to π(y), then π(φ(x)) is equal to π(φ(y)). The compatibility property allow us to study homomorphisms of some topological dynamical system as shown in the following result. Lemma 1.6. Let (X, T, Z d ), (Y, T, Z d ) be two minimal systems, such that π : (X, T, Z d ) → (Y, T, Z d ) is a compatible factor. Then, there is a semigroup homomor- phism π : End(X, T, Z d ) → End(Y, T, Z d ) such that 1. π(φ)(π(x)) = π(φ(x)) for all φ ∈ End(X, T, Z d ) and x ∈ X. 2. π(Aut(X, T, Z d )) ≤ Aut(Y, T, Z d ). 3. For all ψ ∈ End(Y, T, Z d ), |π -1 ({ψ})| ≤ min y∈Y |π -1 (y)|. Moreover, if π is compatible with homomorphisms, there is an extension of π : N (X, T, Z d ) → N (Y, T, Z d ) defined as in Item 1. for all φ ∈ N (X, T, Z d ), such that π(N M (X, T, Z d ) ≤ N M (Y, T, Z d ), for any M ∈ GL(d, Z). Furthermore, if c = min y∈Y |π -1 (y)|, then for each M ∈ GL(d, Z), the map π : N M (X, T, Z d ) → N M (Y, T, Z d ) is at most c-to-1. Proof. Set φ ∈ End(X, T, Z d ). By definition, the map π(φ) : Y → Y given by π(φ)(π(x)) = π(φ(x)) is well defined and, by minimality of (Y, T, Z d ), it is a surjective map, so π(φ) is an endomorphism of (Y, T, Z d ). Moreover, if φ is an automorphism of (X, T, Z d ), then π(φ) is invertible. Indeed, π(φ) • π(φ -1 ) • π = π • φ • φ -1 = π, so we conclude that π(φ) • π(φ -1 ) = id Y . Now, set ψ ∈ End(Y, T, Z d ) and suppose that min y∈Y |π -1 ({y})| = c < ∞ (if not, then there is nothing to prove). Let x 0 ∈ X and y 0 ∈ Y be such that |π -1 ({y 0 })| = c, and y 0 = ψ(π(x 0 )). Assume there exists c + 1 endomorphisms φ 0 , . . . , φ c of (X, T, Z d ), in π({ψ}) -1 . By the pigeonhole principle, y 0 = ψ(π(x 0 )) = π(φ 0 (x 0 )) = • • • = π(φ c (x 0 )). So, there must exists two different indices 0 ≤ i, j ≤ c such that φ i (x 0 ) = φ j (x 0 ), which, by minimality of (X, T, Z d ), implies φ i = φ j . Finally, note that the proof for homomorphisms use similar arguments. We will use Lemma 1.6 to describe isomorphisms of particular examples in Chapter 6. It is known that factor maps between equicontinuous systems are compatible [4], but as we will see in the next section, they are not necessarily compatible with homomorphisms (see Remark 2.7). Nevertheless, the maximal equicontinuous factor is an example of a factor compatible with homomorphisms as proved in [10, Theorem 5 and Corollary 3]. The next result summarizes the compatibility properties of the maximal equicontinuous factor. Lemma 1.7. [10, Theorem 5 and Corollary 3] For any minimal topological dynamical system (X, T, Z d ) such that the action of the maximal equicontinuous factor T eq is free, the maximal equicontinuous factor π eq : (X, T, Z d ) → (X eq , T eq , Z d ) is compatible with homomorphisms. In this case, there exists a semigroup homomorphism θ : (X, T, Z d ) → Homeo(X eq ), with θ(End(X, T, Z d )) = id such that π eq (φ(x)) = π(φ) + θ(φ)(π eq (x)), for all x ∈ X, and φ ∈ N (X, T, Z d ), i.e., any homomorphism φ ∈ N (X, T, Z d ) induces a unique homomorphism in X eq given by z → π(φ) + θ(φ)(z). Moreover, if c = min z∈Xeq |φ -1 eq ({z})| is finite, then for all n ≥ c, and φ ∈ N (X, T, Z d ), we have that {z ∈ X eq : |π -1 eq ({z})| = n} = θ(φ)({z ∈ X eq : |π -1 eq ({z})| = n}) + π(φ), In some cases the quantity min y∈Y |π -1 (y)| can be computed. 
We refer to [START_REF] Coven | Computing automorphism groups of shifts using atypical equivalence classes[END_REF] for the class of substitutive systems generated by constant-length substitutions. Also Lemma 3.9 is the analogue for constant-shape substitutions. Measure-preserving systems In the following, we present the basics on ergodic theory. We mention [START_REF] Petersen | Ergodic theory, volume 2 of Cambridge Studies in Advanced Mathematics[END_REF][START_REF] Walters | An introduction to ergodic theory[END_REF] for classical references on this theme. A measure-preserving system is a 4-tuple (X, µ, T, G), where (X, F, µ) is a probability space and G is a countable group of measurable and measure-preserving transformations acting on X (where the action is denoted by T ), i.e., ∀A ∈ F, ∀g ∈ G, µ(T g -1 A) = µ(A). We say that (X, µ, T, G) is ergodic if for all A ∈ F we have that (∀g ∈ G) µ(T g -1 (A)∆A) = 0 =⇒ µ(A) = 0 ∨ µ(A) = 1. We now recall the notions of measurable homomorphisms in the measure-theoretic framework. Let (X, µ, T, G) and (Y, ν, T, G) be measure-preserving systems and M ∈ GL(d, Z). A measurable homomorphism associated with M is a measure-preserving map φ : X → Y where X , Y are measurable subset of X, Y respectively, µ(X ) = ν(Y ) = 1 and for any g ∈ G, T g (X ) ⊆ X , T g (Y ) ⊆ Y such that for any n ∈ Z d we have that φ • T n = T M n • φ in X . If there is a measurable factor φ between X and Y , then X is said to be an extension of Y . If φ is a bi-measurable bijection, we say that φ is a measurable conjugacy and in this case (X, µ, T, G) and (Y, ν, T, G) are metrically isomorphic. For Z d -actions, we always have at least one invariant probability measure for topological dynamical systems (in fact, at least one ergodic probability measure). We define M(X, T, Z d ) the set of all invariant probability measures. This set is convex and compact on the weak-* topology. We say that (X, T, Z d ) is uniquely ergodic if |M(X, T, Z d )| = 1, and strictly ergodic if it is minimal and uniquely ergodic. In the special case of strictly ergodic topological dynamical systems (X, T, Z d ), (Y, T, Z d ) we denote m Hom(X, Y, T, Z d ), m Fac(X, Y, T, Z d ) the collection of all measurable homomorphisms and factors between X and Y , respectively. We recall that a map φ is in m Hom M (X, Y, T, Z d ), for a particular matrix M ∈ GL(d, Z), if φ is measure-preserving and there exists two subsets X ⊆ X, Y ⊆ Y with µ X (X ) = 1, µ Y (Y ) = 1 such that for all n ∈ Z d , φ • S n = S M n • φ for µ X -a.e in X . Symbolic Dynamics In this section, we will present the basic definitions and some background about symbolic dynamics that will be used in the rest of this thesis. We refer to [START_REF] Lind | An introduction to symbolic dynamics and coding[END_REF] for a classical reference in the one-dimensional case, and [START_REF] Ceccherini-Silberstein | Cellular automata and groups[END_REF] for actions on more abstract groups. Let A be a finite alphabet and d ≥ 1 be an integer. We define a topology on A Z d by endowing A with the discrete topology, and considering in A Z d the product topology, which is generated by cylinders. Since A is finite, A Z d is a metrizable compact space. In this space Z d acts by translations, defined for every n ∈ Z d by: S n (x) k = x n+k , x ∈ A Z d , k ∈ Z d . The Z d -action (A Z d , S, Z d ) is called the fullshift. Let P ⊆ Z d be a finite subset. A pattern is an element p ∈ A P . We say that P is the support of p, and we denote P = supp(p). 
A pattern occurs in x ∈ A Z d , if there exists n ∈ Z d such that p = x| n+P , in this case we denote it p x, and we call this n an occurrence in x of p. A subshift (X, S, Z d ) is given by a closed subset X ⊆ A Z d which is invariant by the Z d -action. A subshift can also be defined by its language. For P Z d we define L P (X) = {p ∈ A P : ∃x ∈ X, p x}. We define the language of a subshift X by L(X) = P Z d L P (X). Let (X, S, Z d ) be a minimal subshift and x ∈ X. We say that p ∈ Z d is a period of x if for all n ∈ Z d , x n+p = x n . We say that (X, S, Z d ) is aperiodic if there are no nontrivial periods. Let B be other finite alphabet, and Y ⊆ B Z d be a subshift. For P Z d , we define a P -block map as a map of the form Φ : L P (X) → B. This map induce a factor φ : X → Y given by φ(x) n = Φ(x| n+P ). The map φ is called the sliding block code induced by Φ, and P is the support of the map φ. In most of the cases we may assume the support of the sliding block codes is a ball of the form B(0, r), for r ∈ N. We define the radius (and we denote by r(φ)) as the infimum of r ∈ N such that we can define a B(0, r)-block map which induced it. The next theorem characterizes the factor maps between two subshifts. Theorem 1.8 (Curtis-Hedlund-Lyndon theorem). Let (X, S, Z d ) and (Y, S, Z d ) be two subshifts. A map φ : (X, S, Z d ) → (Y, S, Z d ) is a factor if and only if there exists a B(0, r)block map Φ : L B(0,r) (X) → L 1 (Y ), such that φ(x) n = Φ(x| n+B(0,r) ), for all n ∈ Z d and x ∈ X. For homomorphisms we have a similar characterization, but we need to make a slight variation of this theorem. Theorem 1.9 (Curtis-Hedlund-Lyndon theorem for homomorphisms). Let (X, S, Z d ) and (Y, S, Z d ) be two subshifts, and M ∈ GL(d, Z). A map φ : (X, S, Z d ) → (Y, S, Z d ) is a homomorphism associated with M if and only if there exists a B(0, r)-block map Φ : L B(0,r) (X) → L 1 (Y ), such that φ(x) n = Φ(x| M -1 n+B(0,r) ), for all n ∈ Z d and x ∈ X. Proof. Let (x n ) n∈N be a convergent sequence to x ∈ X. This implies, (∀p > 0)(∃N ∈ N)(∀n ≥ N ) x n | B(0,p) = x| B(0,p) . Set m ∈ Z d . Let p(m) ∈ N be large enough such that M -1 m + B(0, r) ⊆ B(0, p(m)). Hence, for any n ≥ N (p(m)) we have that φ(x n ) m = Φ(x n | M -1 m+B(0,r) ) = Φ(x| M -1 m+B(0,r) ) = φ(x) m . We conclude that φ is continuous. Now, set m, n ∈ Z d . Then for all x ∈ X φ(S n x) m = Φ((S n x)| M -1 m+B(0,r) ) = Φ(x| n+M -1 m+B(0,r) ), and S M n φ(x) m = φ(x) M n+m = Φ(x| n+M -1 m+B(0,r) ). We conclude that φ • S n = S M •n • φ, so φ is a homomorphism associated with M . On the other hand, let φ : (X, S, Z d ) → (Y, S, Z d ) be a homomorphism associated with M , and let r > 0 be such that x| B(0,r) = y| B(0,r) , implies φ(x) 0 = φ(y) 0 . Then, the local map Φ(x| B(0,r) ) = φ(x) 0 is well defined by the very definition of r. Finally, note that φ(x) h = S h φ(x) 0 = φ(S M -1 h x) 0 = Φ(x| M -1 h+B(0,r) ), which proves the claim. This means, for any homomorphism φ we can define a radius (also denoted by r(φ)), as the infimum of r ∈ N such that we can define a B(0, r)-block map which induced it. An immediate consequence of Theorem 1.9 is that for any subshift (X, S, Z d ), its normalizer group N * (X, S, Z d ) is a countable and discrete subset in Homeo(X). Nondeterministic directions of a topological dynamical system An interesting notion in the study of higher-dimensional dynamical systems, with the objective to study sub-actions of one, is the so-called nonexpansive subspaces, introduced by M. Boyle and D. 
Lind in [START_REF] Boyle | Expansive subdynamics[END_REF]. When the space X is infinite such subspaces always exist [START_REF] Boyle | Expansive subdynamics[END_REF]Theorem 3.7]. In fact, nonexpansive subspaces are always subsets of nonexpansive hyperplanes [START_REF] Boyle | Expansive subdynamics[END_REF]Theorem 3.6]. A key step in the proof of this result is a theorem of S. Schwartzman, who proved that there are no "one-sided expansive" homeomorphisms except on finite spaces; although he never published this result, a proof can be found in [START_REF] Gottschalk | Topological dynamics[END_REF]. We will only focus on hyperplanes in R d , which leads to the notion of deterministic/nondeterministic directions. Let S d-1 be the unit (d -1)-dimensional sphere. For v ∈ S d-1 define H v = {x ∈ R d : x, v < 0} to be the open half-space with outward unit normal v. We identify the set H d of all half-spaces in R d with the sphere S d-1 using the parametrization v ←→ H v . Definition 1.10. Let (X, S, Z d ) be a subshift and v be a unit vector of R d . Then v is deterministic for (X, S, Z d ) if for all x, y ∈ X we have that x| Hv∩Z d = y| Hv∩Z d =⇒ x = y. If v does not satisfy this condition, we say that v is nondeterministic for (X, S, Z d ). In this thesis, we are only interested in the set of nondeterministic directions. In the one-dimensional case, this leads to the notion of asymptotic pairs, and in the finitary version to the notion of special words [START_REF] Morse | Symbolic dynamics II. Sturmian trajectories[END_REF]. For a subshift (X, S, Z d ) we denote by ND(X, S, Z d ) the set of nondeterministic directions of (X, S, Z d ). An analogue of the following result was proved in [START_REF] Boyle | Expansive subdynamics[END_REF]; we state it in our context. Theorem 1.11. [17, Lemma 3.4 and Theorem 3.7] For any subshift (X, S, Z d ) with an infinite phase space X, the set of nondeterministic directions ND(X, S, Z d ) is a non-empty compact set. The notion of direction of determinism was introduced in [START_REF] Guillon | Determinism in subshifts[END_REF] for two-dimensional subshifts, and in [START_REF] Cyr | Nonexpansive Z 2 -subdynamics and Nivat's conjecture[END_REF] nondeterministic directions were used to prove a weak version of Nivat's conjecture. The following result establishes a link between nondeterministic directions and the symmetry group of a subshift, which we will use to describe the symmetry group for a big family of substitutive systems (Theorem 5.17). Proposition 1.12. Let (X, S, Z d ) be a subshift. Then, for all v ∈ ND(X, S, Z d ) and M ∈ N (X, S, Z d ), we have that (M * ) -1 v/||(M * ) -1 v|| ∈ ND(X, S, Z d ). Proof. If v is in ND(X, S, Z d ), there exist x ≠ y ∈ X with x| Hv∩Z d = y| Hv∩Z d . Set M ∈ N (X, S, Z d ) and φ ∈ N M (X, S, Z d ). Then, we have that φ(x)| (M Hv)+n = φ(y)| (M Hv)+n , where n is a vector whose norm is bounded in terms of r(φ). We note that S n φ(x)| M Hv = S n φ(y)| M Hv , and we conclude that (M * ) -1 v/||(M * ) -1 v|| ∈ ND(X, S, Z d ).
Odometer systems
In the following we present the example of odometer systems and Toeplitz sequences. Odometer systems are the most natural equicontinuous systems in the study of minimal Cantor systems. They can be described in purely algebraic terms, and they enjoy interesting recurrence properties: the return times to clopen sets contain infinite arithmetic progressions. In fact, they are the maximal equicontinuous factor for a big family of symbolic systems, such as some substitutions and Toeplitz sequences.
In this section we describe the symmetry semigroup of odometer systems (Lemma 1.14) which we then use to completele characterize it for two-dimensional constant-base odometer systems (Theorem 2.2). Note that, in [START_REF] Giordano | Z d -odometers and cohomology[END_REF] there was a first attempt of studying homomorphisms between two-dimensional odometer systems. Toeplitz sequences have been introduced in dynamical systems by K. Jacobs and M. Keane in [START_REF] Jacobs | 0 -1-sequences of Toeplitz type[END_REF]. Toeplitz subshifts are symbolic sytems that are the orbit closures of the regular quasi-periodic points of the subshift. N. Markley and M. Paul characterize them in [START_REF] Markley | Almost automorphic symbolic minimal sets without unique ergodicity[END_REF] as the minimal almost 1-1 extensions of odometer systems. They have been used to provide a series of examples with interesting dynamical properties. We refer to [START_REF] Downarowicz | Survey of odometers and Toeplitz flows[END_REF] for the study of odometer systems and Toeplitz sequences in the one-dimensional case, [START_REF] Cortez | Z d Toeplitz arrays[END_REF] for higher dimensional actions, and [START_REF] Cortez | G-odometers and their almost one-to-one extensions[END_REF] for more abstract actions given by residually finite groups. Here we will follow the same notation than in [START_REF] Cortez | Z d Toeplitz arrays[END_REF]. Let Z 0 ≥ Z 1 ≥ . . . ≥ Z n ≥ Z n+1 ≥ . . . be a nested sequence of finite index subgroups of Z d such that n≥0 Z n = {0}, and let α n : Z d /Z n+1 → Z d /Z n be the function induced by the inclusion map. Following the notation in [START_REF] Cortez | G-odometers and their almost one-to-one extensions[END_REF], we consider the inverse limit of these groups ← - Z d (Zn) = lim ←n (Z d /Z n , α n ), i.e., ← - Z d (Zn) is the subset of the product n≥0 Z d /Z n consisting of the elements ← -g = (g n ) n≥0 such that α n (g n+1 ) = g n (mod Z n ) for all n ≥ 0. This set is a group equipped with the addition defined coordinate-wise, i.e., ← -g + ← - h = (g n + h n ) n≥0 . Every group Z d /Z n is endowed with the discrete topology, then n≥0 (Z d /Z n ) is a compact metric space, and ← -Z d (Zn) is a compact topological group whose topology is spanned by the cylinder sets [a] n = ← -g ∈ ← - Z d (Zn) : g n = a , with a ∈ Z d /Z n , and n ≥ 0. Now, consider the group homomorphism κ (Zn) : Z d → n≥0 Z d /Z n defined for n ∈ Z d by κ (Zn) (n) = [n (mod Z n )] n≥0 . The image of Z d by κ (Zn) is dense in ← - Z d (Zn) , so the Z d -action n( ← -g ) = κ (Zn) (n)+ ← -g , with n ∈ Z d , ← -g ∈ ← - Z d (Zn) , is well defined and ( ← - Z d (Zn) , + (Zn) , Z d ) is a minimal equicontinuous system as proved in [START_REF] Cortez | G-odometers and their almost one-to-one extensions[END_REF]. We call ( ← -Z d (Zn) , + (Zn) , Z d ) an odometer system. From now on, we will denote the odometer system ( ← - Z d (Zn) , + (Zn) , Z d ) just as ← - Z d (Zn) , and in the constant base case (where Z n = L n (Z d ), for some matrix L ∈ M(d, Z)), we will denote it as ← -Z d (L n ) . Odometer systems have been extensively studied before (see [START_REF] Cortez | Z d Toeplitz arrays[END_REF][START_REF] Cortez | G-odometers and their almost one-to-one extensions[END_REF][START_REF] Downarowicz | Survey of odometers and Toeplitz flows[END_REF]). The next result characterizes the factor odometers systems of a fixed odometer system. Lemma 1.13. [28, Lemma 1] Let ← - Z d (Z j n ) be two odometer systems (j = 1, 2). 
There exists a factor map π : ← - Z d (Z 1 n ) → ← - Z d (Z 2 n ) if and only if for every Z 2 n there exists some Z 1 m such that Z 1 m ≤ Z 2 n . The proof of Lemma 1.13 can be modified to provide a characterization of the matrices M ∈ GL(d, Z) defining a homomorphism φ : ← - Z d (Z 1 n ) → ← - Z d (Z 2 n ) . Lemma 1.14. Set M ∈ GL(d, Z). There exists a homomorphism associated with M from ← - Z d (Z 1 n ) to ← - Z d (Z 2 n ) if and only if for all n ∈ N, there exists m M (n) ∈ N such that M Z 1 m M (n) ≤ Z 2 n . (Normalizer Condition) Proof. First we prove the necessity. Let φ : ← - Z d (Z 1 n ) → ← - Z d (Z 2 n ) be a homomorphism associated with a matrix M ∈ GL(d, Z). By continuity, for any n ≥ 0 and g ∈ Z d /Z 2 n , there exist m ≥ 0 and f ∈ Z d /Z 1 m such that [f ] m ⊆ φ -1 ([g] n ). Set h ∈ Z 1 m . Note that for all ← - f ∈ [f ] m , we have that h( ← - f ) ∈ [f ] m , which implies φ(h( ← - f )) = (M h)(φ( ← - f )) ∈ [g] n . Since φ( ← - f ) is in [g] n , the set {m ∈ Z d : m(φ( ← - f )) ∈ [g] n } is equal to Z 2 n , which implies M h ∈ Z 2 n . For the sufficiency, assume that M ∈ GL(d, Z) satisfies (Normalizer Condition). Since the sequences {Z i n } n>0 , i = 1, 2, are decreasing, we may assume that m M (n) ≤ m M (n + 1) for all n > 0. Thus, we have a homomorphism φ m M (n) : Z d /Z 1 m M (n) → Z d /Z 2 n given by φ m M (n) (m) = M m. To finish the proof, we remark that φ : ← - Z d (Z 1 n ) → ← - Z d (Z 2 n ) defined as φ( ← -g ) = (φ m M (n) (g m M (n) )) n>0 , for ← -g = (g m ) m>0 , is a homomorphism associated with M . In [START_REF] Giordano | Z d -odometers and cohomology[END_REF] there was a first attempt at studying homomorphisms between two-dimensional odometer systems. Note that for all n ≥ 0, we may assume that m M (n) is arbitrarily large. Consider, for all n ∈ N, a matrix L n,i ∈ M(d, Z) such that L n,i (Z d ) = Z i n for i ∈ {1, 2}. This matrix is unique up to right multiplication by a matrix in GL(d, Z). Then, (Normalizer Condition) is equivalent to: for all n ≥ 0, there exists m M (n) ≥ 0 such that L -1 n,2 M L m M (n),1 defines an endomorphism of Z d . Since det(L)L -1 = adj(L), where adj(L) is the adjugate matrix of L, (Normalizer Condition) is equivalent to ∀n ∈ N, ∃m M (n) ∈ N, adj(L n,2 )M L m M (n),1 ≡ 0 (mod det(L n,2 )). (NC 2) In the constant-base case, ignoring the condition M ∈ GL(d, Z), the matrix L itself satisfies (NC 2). Moreover, the set of integer matrices M ∈ M(d, Z) satisfying (NC 2) is an additive group. In particular, any polynomial in L with integer coefficients also satisfies (NC 2). A direct corollary of Lemma 1.14 is the following. Corollary 1.15. The following consequences of Lemma 4.10 are easily verified. 1. If L = pM is an integer expansion matrix, with p ∈ Z and M ∈ GL(d, Z), then the symmetry semigroup N ( ← - Z d (L n ) ) is GL(d, Z).
We now turn to Toeplitz sequences. Given x ∈ A Z d , a finite index subgroup Z ⊆ Z d and a letter a ∈ A, let Per(x, Z, a) = {k ∈ Z d : x k+z = a for all z ∈ Z}. Then Per(x, Z) = ∪ a∈A Per(x, Z, a) for all x ∈ A Z d . When Per(x, Z) is non-empty, we say Z is a group of periods of x. We say Z ⊆ Z d is a group generated by essential periods of x if for every finite index subgroup Z' ⊆ Z d , Per(x, Z) ⊆ Per(x, Z') implies that Z' ⊆ Z. We say that x is a Z d -Toeplitz sequence if for all n ∈ Z d , there exists a finite index subgroup Z ⊆ Z d such that n ∈ Per(x, Z). The following is a characterization of Toeplitz sequences. Proposition 1.16. [28, Proposition 14] The following statements concerning x ∈ A Z d are equivalent: 1. x is a Toeplitz sequence. 2. There exists a sequence of positive integers {p n } n≥0 such that p n < p n+1 , p n divides p n+1 and {-n, . . . , n} d ⊆ Per(x, p n Z d ) for all n ≥ 0.
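To illustrate these definitions in the simplest one-dimensional setting, the following Python sketch is an illustrative aside (not taken from the thesis; the window size and the choice of sequence are arbitrary). It builds a finite window of the period-doubling sequence, the fixed point of the substitution a → ab, b → aa, which is a classical Toeplitz sequence, and computes for each position k a power-of-two period p with k ∈ Per(x, pZ) on that window, in the spirit of item 2 of Proposition 1.16. Since only a finite window is available, all the periodicity checks below are necessarily finite-window checks.

    # Period-doubling sequence: fixed point of the substitution a -> ab, b -> aa.
    def period_doubling(length):
        u = "a"
        while len(u) < length:
            u = "".join("ab" if c == "a" else "aa" for c in u)
        return u[:length]

    W = 1 << 12                      # window size (arbitrary)
    x = period_doubling(W)

    def is_periodic(k, p):
        """Finite-window version of 'k is in Per(x, pZ)': x[m] = x[k] for all m = k (mod p)."""
        return all(x[m] == x[k] for m in range(k % p, W, p))

    for k in range(64):              # the first few positions of the window
        p = 2
        while p < W and not is_periodic(k, p):
            p *= 2
        print(f"position {k}: periodic with period {p} on the window")

Every inspected position receives a power-of-two period, which is exactly the Toeplitz property restricted to this window; the associated odometer here is the dyadic odometer.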
Toeplitz subshifts, which are subshifts generated by a Toeplitz sequence, are almost 1-to-1 extensions of odometer systems. In fact Proposition 1.17. [28, Proposition 20] Let x ∈ A Z d be a Toeplitz sequence. If {Z n } n≥0 is a sequence of groups generated by essential periods of x such that Z n+1 ⊆ Z n and n≥0 Per(x, Z n ) = Z d , then ( ← - Z d Zn , + Zn , Z d ) is the maximal equicontinuous factor of the subshift generated by x. Multidimensional constant-shape substitutions In this section we will define the main object where we study homomorphisms in this thesis, which are multidimensional substitutive subshifts. They represent the simplest nontrivial zero entropy symbolic systems, since they are generated by finite data. By that, ergodic and topological properties of substitution dynamical systems have been extensively studied. They were introduced by W.H. Gottschalk in [START_REF] Gottschalk | Substitution minimal sets[END_REF] (see [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] for a good bibliography on this subject). Unlike the one-dimensional case, the notion of multidimensional substitutions is not clear. In our case we extend the notion of constant-length substitutions to the multidimensional framework and we call them constant-shape substitution. To do this, we use the fact that if L ∈ M(d, Z) is an integer matrix with det(L) = 0 and F is a fundamental domain of L(Z d ) (with 0 ∈ F ), i.e., a set of representative classes of Z d /L(Z d ), then for any n ∈ Z d , there exists unique p ∈ Z d and f ∈ F such that n = L(p) + f . This is a multidimensional interpretation of the Euclidean division. In our context, L ∈ M(d, Z) is an integer expansion matrix, i.e, L > 1 and L -1 < 1, such that L(Z d ) ⊆ Z d . Let F be a fundamental domain of L(Z d ) with 0 in F , and A be a finite alphabet. A multidimensional constant-shape substitution ζ is a map A → A F . The set F is called the support of the substitution. An analogue of constant-shape substitutions in the one-dimensional case are the constant-length substitutions. For any f ∈ F ζ 1 we define p f the restriction of ζ in f , i.e., for a ∈ A, we have that p f (a) = ζ(a) f . We say the substitution is bijective if for all f ∈ F ζ 1 the maps p f are bijective. A substitution is called primitive if there exists a positive integer n > 0, such that for every a, b ∈ A, b occurs in ζ n (a). Fig. 1.2 is an example of a bijective and primitive constant-shape substitution. The language of a substitution is the set of all patterns that appear in ζ n (a), for some n > 0, a ∈ A, i.e., L ζ = {p : p ζ n (a), for some n > 0, a ∈ A}. Using the language we define the subshift X ζ associated with a substitution as the set of all sequences x ∈ A Z d such that every pattern occurring in x is in L ζ , and we denote (X ζ , S, Z d ) the substitutive subshift. If ζ is a primitive constant-shape substitution, the existence of periodic points is well known, i.e., there exists at least one point x 0 ∈ X ζ such that ζ p (x 0 ) = x 0 for some p > 0. In the primitive case, the subshift is preserved replacing the substitution by a power of it, i.e., X ζ n is equal to X ζ for all n > 0. Then, we may assume that the substitution possess at least one fixed point, i.e., there exists a point x ∈ X ζ such that x = ζ(x). 
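As a concrete illustration of iterating a constant-shape substitution (a sketch only; the rule below is an arbitrary two-dimensional Thue-Morse-like block substitution, not one of the examples of this thesis), the following Python code computes the patterns ζ n (a), supported on F ζ n , using the recursion ζ n+1 (a) L(j)+f = ζ(ζ n (a) j ) f for j ∈ F ζ n and f ∈ F ζ 1 .

    # A bijective primitive block substitution on {0, 1}, chosen only for
    # illustration: expansion matrix L = 2*Id and support F_1 = {0,1}^2.
    L = ((2, 0), (0, 2))
    F1 = [(0, 0), (1, 0), (0, 1), (1, 1)]

    def apply_matrix(M, v):
        return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

    def zeta(a):
        """The pattern zeta(a) in A^{F_1}."""
        return {f: (a + f[0] + f[1]) % 2 for f in F1}

    def iterate(a, n):
        """zeta^n(a) as a dict from F_n (a subset of Z^2) to letters, via the recursion
        zeta^{k+1}(a)_{L j + f} = zeta(zeta^k(a)_j)_f for j in F_k and f in F_1."""
        pattern = {(0, 0): a}                      # zeta^0(a), with F_0 = {0}
        for _ in range(n):
            new_pattern = {}
            for j, b in pattern.items():
                Lj = apply_matrix(L, j)
                for f, c in zeta(b).items():
                    new_pattern[(Lj[0] + f[0], Lj[1] + f[1])] = c
            pattern = new_pattern
        return pattern

    p = iterate(0, 3)                              # zeta^3(0), supported on F_3 = {0,...,7}^2
    for y in range(7, -1, -1):                     # print the pattern row by row
        print("".join(str(p[(x, y)]) for x in range(8)))

For this particular choice the supports are F ζ n = {0, . . . , 2 n -1} 2 , so the rescaled supports L -n (F ζ n ) fill out the unit square, which is the digit tile of this substitution.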
It is well known that this subshift is uniquely ergodic (in [START_REF] Lee | Consequences of pure point diffraction spectra for multiset substitution systems[END_REF] can be found a proof for substitution tiling systems seen as substitution Delone sets for R d -actions that can be adapted for our context with the assumption of the supports (F ζ n ) n>0 being a Følner sequence). The unique ergodic measure is characterized in terms of the expansion matrix of ζ, and we denote this measure as µ ζ . For a cylinder set [p] n , where p is a pattern in L ζ , the quantity µ ζ ([p]) represents the frequency of the pattern p in any sequence in X ζ . In the literature, constant-shape substitutions with a positive diagonal expansion matrix L = diag(l i ) i=1,...,d and support equal to the standard d-dimensional parallelepiped F 1 = d i=1 0, l i -1 are called block substitutions. Examples of constant-shape substitutions can be generated via constant-length substitutions as follows: Let {ζ i } d i=1 be d aperiodic one-dimensional constant-length substitutions with alphabet A i and length q i for 1 ≤ i ≤ d. We define the product substitution of {ζ i } d i=1 as the constant-shape substitution ζ with alphabet A = d i=1 A i , expansion matrix given by L ζ = diag(l i ) and support F ζ 1 = d i=1 0, l i -1 , defined as ζ(a 1 , . . . , a d ) j = (ζ 1 (a 1 ) j 1 , . . . , ζ d (a d ) j d ). It is straightforward to check that if {ζ i } d i=1 are primitive, then the product substitution is also primitive. The same is true for bijectivity. Since L ζ is an expansion matrix, then L -1 ζ is a contraction map in R d . For any g ∈ F ζ 1 define the map f g (•) = L -1 ζ (• + g). As mentioned in Section 1.1.3, there exists a nonempty compact subset T ζ (or denoted T (L, F 1 ) when there is no substitution defined) in R d such that L ζ (T ζ ) = g∈F ζ 1 T ζ + g. As in [START_REF] Vince | Digit tiling of Euclidean space[END_REF] we call this set the digit tile of the substitution. Using T 0 = {0} in (1.2) we get that T ζ = lim n→∞ n-1 i=0 L -i ζ (F ζ 1 ) = lim n→∞ L -n ζ (F ζ n ), (1.5) where the limit is with respect to the Hausdorff metric. The expansion matrix and fundamental domains of these examples are the following: (a) L (a) = 2 0 0 2 , F (a) 1 = 0 0 , 1 0 , 0 1 , -1 -1 . (b) L (b) = 3 0 0 3 , F (b) 1 = 0 0 , 1 1 , 2 2 , -1 0 , -2 0 , - 1 1 , 0 -1 , 0 -2 , 1 -1 . (c) L (c) = 3 0 0 3 , F (c) 1 = 0 0 , 1 0 , 2 0 , 0 1 , 0 2 , 2 2 , 4 4 , 2 1 , 1 2 . (d) L (d) = 1 -1 1 1 , F (d) 1 = 0 0 , 1 0 . As in the one-dimensional case, the following proposition shows that for any multidimensional constant-shape substitution there exists a finite subset K Z d whose iteration under the substitution fill the whole Z d . This set determine the fixed points of a primitive constant-shape substitution. Proposition 1.18. Let ζ be a multidimensional constant-shape substitution. Then, the set K ζ = m>0 ((id -L m ζ ) -1 (F ζ m ) ∩ Z d ) is finite and satisfies n≥0 L n ζ (K ζ ) + F ζ n = Z d , using the notation F ζ 0 = {0}. Proof. Set n ∈ Z d and consider the sequence (a n ) n≥0 ⊆ Z d given by a 0 = n and for any n ≥ 0, a n+1 is defined as the unique element in Z d such that there exists an element f n ∈ F ζ 1 with a n = L ζ (a n+1 ) + f n . Note that for any n ≥ 0, a n+1 ≤ L -1 ζ • a n + L -1 ζ • F ζ 1 , which implies a n ≤ L -1 ζ n n + L -1 ζ F ζ 1 (1 -2 L -1 ζ n ) 1 -L -1 ζ , hence (a n ) n≥0 is a bounded sequence in Z d . 
By the Pigeonhole Principle there exist n ≥ 0 and k > 0 such that a n = a n+k , i.e., a n = L k ζ (a n ) + f , for some f ∈ F ζ k . It follows the set K ζ = m>0 ((id -L m ζ ) -1 (F ζ m ) ∩ Z d ) satisfies the property. Now we prove that K ζ is finite. Note that for any m > 0 (id -L m ζ ) -1 (F ζ m ) = m-1 i=0 (id -L m ζ ) -1 L i (F ζ 1 ) ≤ F ζ 1 m-1 i=0 (id -L m ζ ) -1 L i ζ ≤ F ζ 1 (id -L m ζ ) -1 (id -L m ζ )(id -L ζ ) -1 ≤ F ζ 1 (id -L ζ ) -1 . We conclude that K ζ is a finite set. Remark 1.19. The following statements can be easily verified. (1) In the one-dimensional case for any constant-length substitution ζ, we have that K ζ = {-1, 0}. Moreover, for any d-dimensional block substitution, i.e., L ζ equal to the diagonal matrix diag(l 1 , . . . , l d ), and F ζ 1 equal to the standard d-dimensional parallelepiped support d i=1 0, l i -1 , then K ζ = -1, 0 d . (2) If L ζ is such that | det(L ζ -id)| = 1, then K ζ is equal to (id -L -1 ζ ) -1 (F ζ 1 ). (3) Since K ζ is a finite set, there exists j > 0 such that K ζ = j m=0 ((id -L m ζ ) -1 (F ζ m ) ∩ Z d ). When the substitution ζ is primitive, up to considering a power of ζ, we may assume that the set K ζ is of the form (id -L ζ ) -1 (F ζ 1 ) ∩ Z d . The argument used in the proof of Proposition 1.18 is inspired by the Euclidean Division Algorithm. A similar idea can be used to find different sets satisfying specific statements involving the supports (F ζ n ) n>0 that will be needed for some proofs. From now on for any n ∈ Z d we denote by d n ∈ Z d and f n ∈ F ζ 1 the unique elements such that n = L ζ (d n ) + f n . The following result will be useful in a series of results throughout this thesis. Proposition 1.20. Set A Z d and let F Z d be such that F ζ 1 ⊆ F . Define B = {d n } n∈F +A . Then, there exists a finite subset C of Z d satisfying the following conditions: 1. B ⊆ C. 2. C + F + A ⊆ L ζ (C) + F ζ 1 . 3. C ≤ B + L -1 ζ A + F + F ζ 1 / 1 -L -1 ζ . Proof. We define two sequences of finite sets of Z d , (B n ) n≥0 , (C n ) n≥0 , with B 0 = B, C 0 = B + F + A, and for any n ≥ 0, set B n+1 Z d such that B n+1 = {d n } n∈Cn , and C n+1 Z d such that C n+1 = B n+1 + F + A. Note that B n+1 ≤ L -1 ζ C n + F ζ 1 ≤ L -1 ζ B n + A + F + F ζ 1 ≤ L -1 ζ B n + L -1 ζ A + F + F ζ 1 . Hence, for any n > 0 we have that B n ≤ B L -1 ζ n + 1 -L -1 ζ n 1 -L -1 ζ L -1 ζ A + F + F ζ 1 . Since L -1 ζ is strictly smaller than 1, then B n ≤ B + L -1 ζ A + F + F ζ 1 / 1 -L -1 ζ . This implies there exists N ∈ N such that n≤N B n = n≤N +1 B n . We conclude the proof taking C = N n=0 B n . Remark 1.21. The following statements will be useful on the rest of the thesis. (1) Condition 2. implies C + A + F ζ 1 ⊆ L ζ (C) + F ζ 1 , and a direct induction proves that for all n ≥ 0, we have that L n ζ (C + A) + F ζ n ⊆ L n+1 ζ (C) + F ζ n+1 . ( ) Using F = F ζ 1 + F ζ 1 and A = {0}, we obtain a set C Z d such that C + F ζ 1 + F ζ 1 ⊆ L ζ (C) + F ζ 1 . Since 0 ∈ F ζ 1 , then 0 is in C, which implies F ζ 1 + F ζ 1 ⊆ L ζ (C) + F ζ 1 . 2 Assume that for some n > 0 we have that F ζ n + F ζ n ⊆ L n ζ (C) + F ζ n . Then, we obtain that F ζ n+1 + F ζ n+1 = F ζ n + F ζ n + L n ζ (F ζ 1 ) + L n ζ (F ζ 1 ) ⊆ L n ζ (C) + F ζ n + L n ζ (F ζ 1 ) + L n ζ (F ζ 1 ) ⊆ L n+1 ζ (C) + L n ζ (F ζ 1 ) + F ζ n = L n+1 ζ (C) + F ζ n+1 , so, by induction we prove that for all n > 0, F ζ n + F ζ n ⊆ L n ζ (C) + F ζ n . The Chapter 2 The symmetry semigroup of Z 2 -odometers In this chapter, we will study the symmetry semigroup of some odometer systems. 
In [START_REF] Giordano | Z d -odometers and cohomology[END_REF] there is a first attempt to characterize this semigroup for Z 2 -odometers. We start with the d-dimensional universal odometer system. This odometer is universal in the sense that any d-dimensional odometer system is a topological factor of this one. We get that its symmetry semigroup is the largest one and equal to GL(d, Z). Then, we will restrict on two dimensional constant base odometer systems ← -Z 2 (L n ) , where L ∈ M(2, Z) is an integer expansion matrix. In this case, we get a description of a bifurcation phenomenon at the level of the symmetry semigroup with respect to arithmetical relations of invariants of the matrix. The main theorem (Theorem 2.2) shows that in most cases the symmetry semigroup is the centralizer of the matrix L. This will help to get a characterization of the normalizer semigroup of aperiodic primitive constant-shape substitutions using the relation between homomorphisms and their maximal equicontinuous factors (Lemma 1.7). The universal odometer Let (Γ n ) n∈N be a enumeration of all finite index subgroups of Z d . We define the ddimensional universal odometer system as follows: Set Z 0 = Γ 0 , and for any n > 1 set Z n = Λ n-1 ∩ Γ n . Then, we define the d-dimensional universal odometer as ← - Z d (Zn) . This odometer is universal in the sense that by Lemma 1.13 any odometer system is a topological factor of the universal odometer. For instance, the 1-dimensional universal odometer system is equal to ← -Z (n!) . With respect to its symmetry semigroup, (NC 2) implies the following result. Proposition 2.1. The symmetry semigroup of the universal odometer is GL(d, Z). Proof. Consider L n ∈ M(d, Z) such that L n (Z d ) = Λ n . A matrix M ∈ GL(d, Z) is in N ( ← - Z d (Λn) ) if and only if M satisfies (NC 2). Now, for any n ∈ N we can choose m(n) ∈ N large enough such that Λ m(n) ≤ det(L n )Z d , so adj(L n )M L m(n) ≡ 0 (mod det(L n )). We then conclude that N ( ← - Z d (Λn) ) = GL(d, Z). 2.2 Bifurcation phenomenon of the normalizer semigroup for constant base Z 2 -odometer systems To describe the different cases of the normalizer semigroup of odometer systems of the form ← - Z 2 (L n ) , where L ∈ M(2, Z) is an integer expansion matrix, we need some extra notations. For any positive integer n > 0, the radical rad(n) of n is defined as the product of the distinct prime numbers dividing n, if n < 0 we define rad(n) just as rad(-n). We define the centralizer of a matrix L on GL(2, Z) Cent GL(2,Z) (L) as the subgroup of all matrices in GL(2, Z) commuting with L. Recall by Corollary 1.15 the centralizer Cent GL(2,Z) (L) is always a subgroup of the symmetry semigroup N ( ← - Z d (L n ) ) . From now on, we will fix the following notation. An integer expansion matrix will be denoted as L = p q r s and its powers as L n = p(n) q(n) r(n) s(n) . We will denote a matrix in GL(2, Z) as M = m 11 m 12 m 21 m 22 . The following theorem summarizes the different cases of the symmetry group N ( ← -Z 2 (L n ) ) depending on the matrix L. Theorem 2.2. Let L ∈ M(2, Z) be an integer expansion matrix. • If rad(det(L)) divides trace(L), then N ( ← - Z 2 (L n ) ) is equal to GL(2, Z). • Otherwise 1. If the spectrum of the matrix L is disjoint from the integers, then the symmetry semigroup N ( ← - Z 2 (L n ) ) is the centralizer Cent GL(2,Z) (L). Moreover, if the spec- trum of L is disjoint from the real line, then N ( ← - Z 2 (L n ) ) is a finite group. 2. 
When the spectrum of L contains an integer value, then the matrix coefficients of elements in N ( ← - Z 2 (L n (Z 2 )) ) are characterized by linear relations with respect to the coefficients given by the one of the matrix L. In particular, under explicit arithmetical properties of the coefficients of L, N ( ← - Z 2 (L n (Z 2 )) ) can be isomorphic to Z/2Z, Z 2 /2Z × 2Z, or N ( ← - Z 2 (L n (Z 2 )) )/(Cent GL(2,Z) (L)) can be virtually to Z. Along the proof of Theorem 2.2 we will get more precise information about the symmetry semigroup, when we have more restrictions on the matrix L. Since odometer systems are coalescent systems, then as a consequence of the proof of Theorem 2.2 and Proposition 1.5 we get the following result. Proposition 2.3. Let L ∈ M(2, Z) be an integer expansion matrix. Then, the symme- try semigroup of ← - Z 2 (L n ) is a group. In particular, any homomorphism in N ( ← - Z 2 (L n ) ) is invertible. Proposition 2.3 will be proved in Section 2.5. The following examples illustrate the different consequences of Theorem 2.2 according to the expansion matrix L. . We have that trace(L 1 ) = 5, and det(L 1 ) = 7, so by Theorem 2.2 a matrix M ∈ GL(2, Z) is in N ( ← - Z 2 (L n 1 ) ) if and only if M commutes with L 1 . Note that trace(L 1 ) 2 -4 det(L 1 ) = -3, so L 1 has complex eigenvalues (which are 5/2 ± i √ 3/2), and Cent GL(2,Z) (L 1 ) is equal to        1 1 -1 0 , -1 -1 1 0 , 0 -1 1 1 0 -1 1 -1 , 1 0 0 1 , -1 0 0 -1        . ( 2 ) The diagonal case We will decompose the proof of Theorem 2.2 starting with the case where the expansion matrix has integer eigenvalues. First we will study the diagonal case, i.e., q = r = 0, since we will get more precise results about the symmetry semigroup. Note that det(L) = ps and trace(L) = p + s, so the condition rad(det(L)) divides trace(L) is equivalent to rad(p) = rad(s). The following result is a particular case of Theorem 2.2 for diagonal matrices. Proposition 2.6. Let L = diag(p, s) be an expansion 2 × 2 diagonal matrix such that rad(det(L)) does not divides trace(L). We have the following: 1. If (rad(p) does not divide s and rad(s) divides p), or (rad(p) divides s and rad(s) does not divide p), then N ( ← - Z 2 (L n ) )/(Cent GL(2,Z) (L)) is virtually Z. 2. If rad(p) does not divide s and rad(s) does not divide p, then N ( ← - Z 2 (L n ) ) is isomorphic to Z/2Z × Z/2Z. Proof. Let M ∈ GL(2, Z) satisfying (NC 2). Then, for any n > 0, we get the following equations adj(L n )M L m(n) = m 11 p m(n) s n m 12 s m(n)+n m 21 p m(n)+n m 22 s m(n) p n ≡ 0 0 0 0 (mod p n s n ), (2.1) Assuming m(n) > 0 is large enough, we simplify the equations in the anti-diagonal, and we get m 12 s m(n) ≡ 0 (mod p n ), (2.2) m 21 p m(n) ≡ 0 (mod s n ) (2.3) If rad(p) does not divide s, there exists a prime number t dividing p such that for all m > 0, and all n > 0, s m is an invertible element in Z/t n Z. This implies m 12 ≡ 0 (mod t n ) for all n > 0, hence m 12 = 0. On the other hand, if rad(p) divides s, then for any n > 0, there exists m(n) > 0 large enough such that p n divides s m(n) . In this case any m 12 ∈ Z is solution of (2.2). Using the same arguments we have the same conclusion for m 21 with respect to rad(s) and p. Assuming rad(p) does not divide s and rad(s) does not divide p, we get m 12 = 0 and m 21 = 0, so the only matrices satisfying (NC 2) are diagonal matrices. We conclude N ( ← - Z 2 (L n ) ) is isomorphic to Z 2 /(2Z × 2Z). Remark 2.7. 
The proof of Proposition 2.6 can be easily generalized to higher dimensions in the following way: Suppose that L = diag(p 1 , . . . , p d ) is an expansion d × d diagonal matrix. If rad(p i ) does not divide p j for some 1 ≤ i, j ≤ d, then for any M ∈ N ( ← - Z d (L n ) ), we have that m i,j = 0. In particular, if for any pair of distinct indices 1 ≤ i = j ≤ d, rad(p i ) does not divide p j , then by (NC 2), N ( ← - Z d (L n ) ) is isomorphic to Z d /(2Z × • • • × 2Z ), as can be deduced by the proof of Theorem 28 in [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]. The triangular case We will next consider the case where the expansion matrix is a triangular matrix. We will focus on the upper triangular case, i.e., q = 0 and r = 0, since the lower triangular case can be deduce by this one via conjugation with the matrix 0 1 1 0 . For all n ∈ Z we have that L n = p n q(n) 0 s n , where q(n) = q(p n -s n )/(p -s) = q n-1 i=0 p i s n-1-i . As in the diagonal case, det(L) = ps and trace(L) = p + s, so the condition rad(det(L)) divides trace(L) is equivalent to rad(p) = rad(s). In this case, there is a similar result to the one obtained for the diagonal case, as it is shown in the following proposition: Proposition 2.8. Let L ∈ M(2, Z) be an expansion upper triangular matrix such that rad(det(L)) does not divide trace(L). Then, we have the following: 1. If rad(p) does not divide s and rad(s) divides p, then a matrix M ∈ GL(2, Z) is in N ( ← - Z 2 L n ) if and only if (p -s) 2 m 12 = m 21 q 2 + (p -s)(m 11 -m 22 )q. In particular, if (p -s)|q, then N ( ← - Z 2 L n )/(Cent GL(2,Z) (L)) is virtually Z. 2. Assume that rad(p) divides s and rad(s) does not divide p. Then N ( ← - Z 2 (L n ) )/(Cent GL(2,Z) (L)) is virtually Z. 3. If rad(p) does not divide s and rad(s) does not divide p, we have two cases: (2.4) Suppose rad(s) does not divide p. Then, there exists a prime number t dividing s such that for all n > 0 and m > 0, p m is an invertible element in Z/t n Z. Hence, m 21 ≡ 0 (mod t n ), which implies m 21 = 0, so m 21 = 0. Now, by (2.4) we get that m 12 s m ≡ 0 (mod p n ). • If 2q ∈ (p -s)Z then N ( ← - Z 2 (L n ) ) is isomorphic to Z/2Z × Z/2Z. • Otherwise, N ( ← - Z 2 (L n ) ) is isomorphic to Z/2Z. Proof. Let M be in N ( ← - Z 2 (L n ) ). Define the matrix M = (p -s)M -(m 11 -m 22 )L -(p • m 22 -m 11 • s) id ( We have two cases: • If rad(p) does not divide s, then (2.5) implies m 12 = 0. We conclude that M = 0 0 0 0 , i. It is not difficult to see that M 2 is the identity matrix. We conclude that Finally, if rad(s) divides p, then for any n > 0, and any m large enough s n divides p m and q(m). Let t be a prime number dividing p that does not divide s. Then, by (2.4) we obtain (p -s) 2 m 12 s n+m ≡ m 21 q 2 s n+m (mod t n ). N ( ← - Z 2 (L n ) ) is isomorphic to Z/2Z × Z/2Z. If 2q / ∈ (p -s)Z, then N ( ← - Z 2 (L n ) ) is isomorphic to Z/2Z. (2.6) Since t does not divide s, then for any n, m > 0, s n+m is an invertible element in Z/t n Z, so (2.6) is reduced to (p -s) 2 m 12 ≡ m 22 q 2 (mod t n ), (2.7) which implies (p -s) 2 m 12 = m 21 q 2 . Thus, we get that (p -s) 2 m 12 = m 21 q 2 + (p -s)(m 11 -m 22 )q. (2.8) Now, if (p -s) divides q, we write q = k(p -s) for some k ∈ Z. By (2.8), we have that        1 -m • k -mk 2 m 1 + m • k , 1 -m • k 2k -mk 2 m m • k -1 -1 -m • k -2k -mk 2 m 1 + m • k , -1 -m • k -mk 2 m -1 + m • k : m ∈ Z        . 
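A quick numerical sanity check of this family may be helpful. The sketch below (in Python; the data is our own arbitrary choice and does not come from the text) takes p = 6, q = 4, s = 2, so that rad(p) does not divide s, rad(s) divides p and q = k(p - s) with k = 1, and verifies that the matrices of the first one-parameter family displayed above lie in GL(2, Z) and satisfy (NC 2), in the form adj(L^n) M L^{m(n)} ≡ 0 (mod det(L)^n), for the first few values of n.

# Sketch: check adj(L^n) M L^{m(n)} = 0 (mod det(L)^n) for members of the
# one-parameter family above.  The data p = 6, q = 4, s = 2, k = 1 is an
# arbitrary choice satisfying the hypotheses of item 1 of Proposition 2.8.
def mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)] for i in range(2)]

def power(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mul(R, A)
    return R

def adj(A):
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

L, detL, k = [[6, 4], [0, 2]], 12, 1
for m in range(-3, 4):
    M = [[1 - m * k, -m * k * k], [m, 1 + m * k]]
    assert M[0][0] * M[1][1] - M[0][1] * M[1][0] in (1, -1)        # M is in GL(2, Z)
    for n in range(1, 4):
        P = mul(mul(adj(power(L, n)), M), power(L, 6 * n))          # m(n) = 6n is large enough
        assert all(P[i][j] % detL ** n == 0 for i in range(2) for j in range(2))
print("all tested members satisfy (NC 2) for n = 1, 2, 3")

Of course this only tests finitely many n and m; the general statement is the content of the proposition.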
In this case, L is conjugate to the matrix p 0 0 s , via the matrix 1 k 0 1 , so by the diagonal case (Proposition 2.6) we conclude that N ( ← - Z 2 (L n ) )/(Cent GL(2,Z) (L)) is virtually Z. Proof of Theorem 2.2 Now we will prove Theorem 2.2. Since we already proved the triangular case, we assume that q = 0 and r = 0. Proof of Theorem 2.2. Let L ∈ M(2, Z) be an integer expansion matrix. The Cayley-Hamilton theorem implies L 2 = trace(L) • L -det(L) • id R 2 . If rad(det(L)) divides trace(L), then L 2 ≡ 0 (mod rad(det(L))). Hence, for all n > 0 and any m(n) > 0 large enough, we get that L m(n) ≡ 0 (mod det(L) n ). So any matrix in GL(2, Z) satisfies (NC 2), and we conclude that N ( ← -Z 2 (L n ) ) = GL(2, Z). Suppose now that rad(det(L)) does not divide trace(L). By induction, for any n > 0 we have that L n ≡ trace(L) n-1 L (mod det(L)). Since rad(det(L)) does not divide trace(L), there exists a prime number t dividing det(L) that does not divide p or s. Without loss of generality (up to a conjugation with 0 1 1 0 ) we may assume that t does not divide s. Since s(n) ≡ trace(L) n-1 s (mod det(L)), then, for all n > 0, and m > 0, s(m) is an invertible element in Z/t n Z. Let M = m 11 m 12 m 21 m 22 be in N ( ← - Z 2 (L n ) ). Define the matrix M = rM -m 21 L -(r • m 11 -p • m 21 ) id R 2 . Then, the matrix M satisfies (Normalizer Condition) and has the form M = 0 m 12 0 m 22 , where m 12 = r • m 12 -q • m 21 , m 22 = r • m 22 -s • m 21 -r • m 11 + p • m 21 , with m 12 , m 21 ∈ Z. Now, for all n, m > 0 adj(L n )M L m = r(m)(m 12 s(n) -m 22 q(n)) s(m)(m 12 s(n) -m 22 q(n)) r(m)(-m 12 r(n) + m 22 p(n)) s(m)(-m 12 r(n) + m 22 p(n)) ≡ 0 0 0 0 (mod det(L) n ). (2.9) Since s(m) is an invertible element in Z/t n Z, (2.9) implies m 12 s(n) -m 22 q(n) ≡ 0 (mod t n ), (2.10) -m 12 r(n) + m 22 p(n) ≡ 0 (mod t n ), (2.11) which is equivalent to , so E is an adj(L)-invariant Z-module of rank at most 1. adj(L n ) m 12 m 22 ≡ 0 0 (mod t n ). ( 2 If L does not have integer eigenvalues, E must have rank 0. This implies m 12 = m 22 = 0. Hence M = 0 0 0 0 and then rM (so M ) commutes with L. Since m 12 = m 22 = 0, we have that r • m 12 -q • m 21 = 0 r • m 22 -s • m 21 -r • m 11 + p • m 21 = 0. (2.13) • Suppose p = s. In this case, (2.13) implies m 11 = m 22 and m 21 = m 11 • r/q. Note that L has complex eigenvalues if and only if (2p) 2 -4(p 2 -qr) < 0, i.e., qr < 0. Since | det(M )| = 1, then m 2 11 -m 2 12 • r/q = ±1, so the condition qr < 0 implies there exists a finite number of points (m 11 , m 12 ) ∈ Z 2 satisfying (2.13). • If p = s, then (2.13) implies m 12 = q(m 11 -m 22 )/(p-s) and m 21 = r(m 11 -m 22 )/(ps). Since M ∈ GL(2, Z), we get that m 11 m 22 -(m 11 -m 22 ) 2 qr (p -s) 2 = ±1. (2.14) In this case, there is a finite number of solutions if (p -s) 2 -4qr < 0, which is equivalent to trace(L) 2 -4 det(L) < 0. Since trace(L) 2 -4 det(L) is the discriminant of the characteristic polynomial of L, this is equivalent to L having complex eigenvalues. Now we will prove Proposition 2.3 Proof of Proposition 2.3. First note that if M = m 11 m 12 m 21 m 22 is in GL(2, Z), then M -1 =            m 22 -m 12 -m 21 m 11 if det(M ) = 1, -m 22 m 12 m 21 -m 11 if det(M ) = -1. Then, we will prove that if M satisfies the arithmetical relations given by the proof of Theorem 2.2, then M -1 also satisfies them. We will do it by cases as in the proof of Theorem 2.2: • The statement is easily to verifies when L is a diagonal matrix. • If L = p q 0 s is an upper triangular matrix. 
We only need to prove it for Item 1 in Proposition 2.8. The rest of the cases are easily to verify. We recall that a matrix M satisfies (Normalizer Condition) in this case if and only if (p -s) 2 m 12 = m 21 q 2 + (p -s)(m 11 -m 22 )q. We will see that either of the cases det(M ) = 1 or det(M ) = -1, M -1 also satisfies this arithmetical relation. Indeed -Assume that det(M ) = 1, then (p -s) 2 (-m 12 ) = -m 21 q 2 + (p -s)(m 22 -m 11 )q (p -s) 2 m 12 = m 21 q 2 + (p -s)(m 11 -m 22 )q. -If det(M ) -1, we have that (p -s) 2 m 12 = m 21 q 2 + (p -s)((-m 22 ) -(-m 11 ))q (p -s) 2 m 12 = m 21 q 2 + (p -s)(m 11 -m 22 )q. We conclude that the statement is true when L is an upper triangular case. We recall the lower triangular case is deduced by this one. • In the general case, we need to separate it in two cases: - If -q • m 21 r • m 22 -s • m 21 -r • m 11 + p • m 21 is an eigenvector of adj(L) (with associated eigenvalue having a prime divisor not dividing trace(L)). A direct computation shows that in both cases (det(M ) = 1 or det(M ) = -1), if M ∈ GL(2, Z) satisfies this property, then the inverse matrix M -1 also satisfies it. We conclude that the statement is true for all the different cases. Remark 2.9. 1. In the particular case gcd(trace(L), det(L)) = 1, we can simplify the proof noting that (2.10), (2.11) imply the existence of two sequences (k 1 n ) n>0 , (k 2 n ) n>0 ⊆ Z such that det(L) n k 1 n = m 12 s(n) -m 22 q(n) and det(L) n k 2 n = -m 12 r(n) + m 22 p(n), i.e., k 1 n = m 12 s(n) det(L) n -m 22 q(n) det(L) n k 2 n = -m 12 r(n) det(L) n + m 22 p(n) det(L) n . (2.15) Since L is an expansion matrix, then L -1 is a contraction, so we have that lim n→∞ p(n) det(L) n = lim n→∞ q(n) det(L) n = lim n→∞ r(n) det(L) n = lim n→∞ s(n) det(L) n = 0, this implies for all n large enough, k 1 n = k 2 n = 0, and we conclude that m 12 = m 22 = 0. Chapter 3 The recognizability property of constant-shape substitutions The recognizability property of substitutions is a combinatorial property that provides a form of invertibility of the morphism, to uniquely decompose points in the substitutive subshift. For Z-actions, this property has a dynamical interpretation, since it implies the presence of refining Kakutani-Rokhlin partitions which allows encoding the dynamics in an infinite graph [START_REF] Herman | Ordered Bratteli diagrams, dimension groups and topological dynamics[END_REF] (called Bratteli diagrams). This coding enabled, among other results, the study of topological orbital equivalence of minimal Cantor systems [START_REF] Giordano | Topological orbit equivalence and C *crossed products[END_REF], to characterize continuous and measurable eigenvalues [START_REF] Durand | Eigenvalues of minimal Cantor systems[END_REF] and to analyze their invariant measures [START_REF] Bezuglyi | Finite rank Bratteli diagrams: structure of invariant measures[END_REF]. It was first proved for any aperiodic primitive substitution by B. Mossé in [START_REF] Mossé | Puissances de mots et reconnaissabilité des points fixes d'une substitution[END_REF], and then in [START_REF] Bezuglyi | Aperiodic substitution systems and their Bratteli diagrams[END_REF] for the non-primitive case. Also, V. Berthé, W. Steiner, J. M. Thuswaldner, and R. Yassawi [START_REF] Berthé | Recognizability for sequences of morphisms[END_REF] studied a recognizability property for different types of one-dimensional morphisms ζ : A → B Z , where A and B can be different alphabets. In the multidimensional context, B. 
Solomyak showed in [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF] that aperiodic translationally finite self-affine tilings of R d satisfy a recognizability property (called unique composition property). In this chapter, we will get a similar result as [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF], but for aperiodic symbolic factors (Proposition 3.3). We then present several consequences on invariants of factors and homomorphisms of substitutive subshifts: There exists a finite number of orbits in X ζ which are invariant by the substitution map (Proposition 3.4). They are not necessarily linearly repetitive (Example 3.6). Nevertheless, there is a polynomial growth of the repetitivity function of substitutive subshifts (Lemma 3.7). We determine their maximal equicontinuous factors, which is an explicit odometer system (Proposition 3.15). Thanks to these last descriptions, we get that any aperiodic symbolic factor of a substitutive subshift is conjugate to a substitutive subshift via a letter-to-letter map (Theorem 3.22). This extends the result proved in [START_REF] Müllner | Automorphisms of automatic shifts[END_REF]. In the original article, a key part is a characterization of periodic sequences by their complexity function. Our proof does not require this characterization, which is known as Nivat's conjecture in the two-dimensional case (and it is known to be false for higher dimensions [START_REF] Cassaigne | Subword complexity and periodicity in two or more dimensions[END_REF]). The recognizability property of aperiodic symbolic factors of substitutive subshifts The substitution ζ seen as a map from X ζ to ζ(X ζ ) is continuous. Moreover, when the constant-shape substitution is aperiodic and primitive this map is actually a homeomorphism. This comes from the notion of recognizability of a substitution. Definition 3.1. A constant-shape substitution ζ is said to be recognizable, if there exists R > 0 such that for any x, y ∈ X ζ satisfying x| B(0,R)∩Z d = y| B(0,R)∩Z d , then x is in ζ(X ζ ) if and only if y is in ζ(X ζ ). This implies for every x ∈ X ζ there exist a unique x ∈ X ζ and a unique j ∈ F ζ 1 such that x = S j ζ(x ). With this, the set ζ(X ζ ) is a clopen subset of X ζ and {S j ζ(X ζ ) : j ∈ F ζ 1 } is a clopen partition of X ζ (a proof for the one-dimensional case can be found in [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF], which can be adapted to our case). This property is also satisfied for the iterations ζ n , for all n > 0. The recognizability property was first proved for any aperiodic primitive substitution by B. Mossé in [START_REF] Mossé | Puissances de mots et reconnaissabilité des points fixes d'une substitution[END_REF] for the one-dimensional case, and in the multidimensional case by B. Solomyak in [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF]. In this section we will prove it for aperiodic symbolic factors of substitutive subshifts (Proposition 3.3). This property will allows us to determine its maximal equicontinuous factors. The proof of Proposition 3.3 is a multidimensional analogue of the one given by P. 
Kůrka in [START_REF] Kurka | Topological and symbolic dynamics, volume 11 of Cours Spécialisés[END_REF] with the use of the following repulsion property for constant-shape substitutions proved in [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF]. Proposition 3.2 (Repulsion Property ). [107, Lemma 3.2] Let ζ be an aperiodic primitive constant-shape substitution, and x ∈ X ζ . Then, there exists N > 0 such that, for all n > 0, and every neighborhood V ⊆ R d of the origin, if a pattern p x, with (L n ζ (V ) ∩ Z d ) + s ⊆ supp(p) for some s ∈ Z d , has two occurrences j 1 , j 2 ∈ Z d such that j 1 -j 2 ∈ L n-N ζ (V ), then j 1 is equal to j 2 . As proved in Lemma 1.22, we may assume that an aperiodic symbolic factor of a substitutive subshift is induced by a letter-to-letter map. Proposition 3.3 (Recognizability property of aperiodic symbolic factors of substitutive subshifts). Let A, B be two finite alphabets, ζ be an aperiodic primitive constant-shape substitution from the alphabet A, and T : A → B be a map such that (τ (X ζ ), S, Z d ) is an aperiodic subshift. Let x ∈ X ζ be a fixed point of ζ and y = τ (x). Then, there exists R > 0 such that if y| i+B(0,R) = y| j+B(0,R) and i ∈ L ζ (Z d ), then j ∈ L ζ (Z d ). Proof. Let N given by Proposition 3.2 and V ⊆ R d be a neighborhood of 0 such that L N ζ ([-1, 1] d ) ⊆ V and let B = d i=1 I i be a box containing (V + [-1, 1] d ) , where I i is a finite interval of Z. Assume the contrary, then for every n > 0 there exist i n ∈ L ζ (Z d ), j n / ∈ L ζ (Z d ) and a pattern u n ∈ L B (X ζ ) such that y| in+L n ζ (B∩Z d ) = τ (ζ n (u n )) = y| jn+L n ζ (B) . τ (ζ n (u n )) • • • • • • • • • • • • • • • • • • τ (ζ n (u n )) • • • • • • • • • • • • • • • • • • i n × j n × j n -i n Figure 3.1: Illustration of the patterns τ (ζ n (u n )). The pattern in i n (black) comes from the fact that x = ζ(x). We will prove that the pattern in j n (blue) also comes from this fact. Since x = ζ(x), there exist a finite subset C (n) Z d , a pattern v n ∈ L C (n) (X ζ ) and an occurrence c n ∈ L ζ (Z d ) of ζ(v n ) with j n + (L n ζ (B) ∩ Z d ) ⊆ c n + L n ζ (C (n) ) ⊆ (F ζ n +n)∩(jn+L n ζ (B)) =∅ n∈Z d F ζ n + n. In particular, ζ n (u n ) ζ n (v n ), as illustrated in Fig. 3.2: τ (ζ n (v n )) τ (ζ n (u n )) • • • • • • • • • • • • • • • • • • j n × Figure 3.2: Illustration of the patterns τ (ζ n (v n )) and τ (ζ n (u n )) in j n . By the Pigeonhole Principle, there are an infinite set E ⊆ N, a finite set C Z d and patterns u ∈ L B (X ζ ), v ∈ L C (X ζ ) such that for all n ∈ E, C (n) is equal to C, u n = u, and v n = v. Consider D = {n ∈ C : n + -1, 1 d ⊆ C}, and let w = x| kn+D where k n ∈ Z d is such that L n ζ (k n + D) is strictly contained in j n + (L n ζ (B) ∩ Z d ) and set a n = x| (jn+L n ζ (B))\(L n ζ (kn+D)) . We have that supp(a n ) ⊆ (∂(j n + L n ζ (B)) + L n ζ ([-1, 1] d )) as illustrated in Fig. 3.3: τ (ζ n (v)) τ (ζ n (u)) τ (ζ n (w)) • • • • • • • • • • • • • • • τ (a n ) L n ζ (k n ) × j n × m-n (ζ n (w)) = ζ m (w). If ζ m-n (a n ) = a m , there is two occurrences of ζ m (w) in ζ m (u). Since the supports of ζ m-n (a n ) and a m are in the same set, the distance between these two occurrences is smaller than max t∈[-1,1] d L m ζ (t) . If these occurrences are not the same, the repulsion property (Proposition 3.2) gives a contradiction, so ζ m-n (a n ) = a m as illustrated in Fig. 
3.4: τ (ζ m (v)) τ (ζ m (u)) τ (ζ m (w)) • • • • • • • • • • • • • • • τ (ζ m-n (a n )) = τ (a m ) L m-n ζ (k n ) L m-n ζ (j n ) × × Figure 3.4: Illustration of the patterns ζ m-n (a n ) in L m-n ζ (j n ). Now, since x = ζ m-n (x), y| L m-n ζ (jn+L n ζ (B)) is equal to τ (ζ m (u)). Hence, we have L m-n ζ (j n ) -L m-n ζ (k n ) = j m -L m ζ (k m ). This implies that j m ∈ L ζ (Z d ) which gives a contradiction. Invariant orbits of substitutive subshifts As mentioned in the last proof, we assume that primitive constant-shape substitutions admit at least one fixed point for the map ζ : X ζ → X ζ . The orbits of these fixed points lead to the notion of ζ-invariant orbits. An orbit O(x, Z d ) is called ζ-invariant if there exists j ∈ Z d such that ζ(x) = S j x, i.e., the orbit is invariant under the action of ζ in X ζ . Since for every n ∈ Z d we have ζ • S n = S L ζ n • ζ, the definition is independent of the choice of the point in the Z d -orbit of x. The orbit of a fixed point of the substitution is an example of an invariant orbit. In the following, we will prove that for aperiodic primitive constant-shape substitutions there exist finitely many ζ-invariant orbits. This property will be used to prove other properties about some constant-shape substitutions such as coalescence (Proposition 4.7) and the automorphism group of some substitutive subshifts is virtually generated by the shift action (Proposition 4.8). Proof. Let x ∈ X ζ be such that ζ(x) = S jx x, for some j x ∈ Z d . For any m ∈ Z d , we have . Inductively, we obtain for every n ≥ 0 ζ(S m x) = S L ζ m ζ(x) = S L ζ m+jx x = S (L ζ -id)m+jx S m x, and thus j x -j S m x ∈ (L ζ -id)Z d . Let H Z d be a fundamental domain of (L ζ -id)(Z d ) in Z d with 0 ∈ H. We may assume that x ∈ X ζ is in a ζ-invariant orbit with j x ∈ H. Set K ζ Z d be x| n k=0 L k ζ j +L n+1 ζ (D)+F ζ n+1 = y| n k=0 L k ζ j +L n+1 ζ (D)+F ζ n+1 . Let E 0 be equal to D and for all n > 0, define E n = n-1 k=0 L k ζ j + L n ζ (D) + F ζ n . We will prove that n≥0 E n = Z d . This implies x = y, which is a contradiction. To do this, we will prove that for every n ≥ 0 that Note that L n ζ (K ζ ) + F ζ n ⊆ E L n ζ (K) + F ζ n ⊆ E n+1 if and only if L n ζ (K -j) + n-1 k=0 L k ζ (F ζ 1 -j) ⊆ L n+1 ζ (D) + F ζ n+1 . Claim 1. For every n ≥ 0 we have n k=0 L k ζ (F 1 -j) ⊆ L n+1 ζ (C) + F ζ n+1 . Proof of Claim. For n = 0, note that F 1 -j is included in L ζ (C) + F ζ 1 by Proposition 1.20. Assume that for some n ≥ 0 n k=0 L k ζ (F 1 -j) ⊆ L n+1 ζ (C) + F ζ n+1 . We have n+1 k=0 L k ζ (F 1 -j) = n k=0 L k ζ (F 1 -j) + L n+1 ζ (F 1 -j) ⊆ L n+1 ζ (C) + F ζ n+1 + L n+1 ζ (F 1 -j) ⊆ L n+1 ζ (C + F ζ 1 -j) + F ζ n+1 ⊆ L n+2 ζ (C) + L n+1 ζ (F ζ 1 ) + F ζ n+1 (by Proposition 1.20) = L n+2 ζ (C) + F ζ n+2 We conclude that for every n ≥ 0, n k=0 L k ζ (F 1 -j) ⊆ L n+1 ζ (C) + F ζ n+1 By Claim 1 L n ζ (K -j) + n-1 k=0 L k ζ (F 1 -j) is included in L n ζ (K -j) + L n ζ (C) + F ζ n , and L n ζ (K + C -j) is a subset of L n+1 ζ (D) + L n ζ (F ζ 1 ), so we have L n ζ (K -j) + L n ζ (C) + F ζ n ⊆ L n+1 ζ (D) + L n ζ (F ζ 1 ) + F ζ n = L n+1 ζ (D) + F ζ n+1 and we conclude the proof. Remark 3.5. Let ζ be an aperiodic primitive substitution with an expansion matrix L ζ such that | det(L ζ -id R d )| = 1. This implies (L ζ -id R d )(Z d ) = Z d . Let x ∈ X ζ be a point in a ζ-invariant orbit, i.e., there exists j ∈ Z d such that ζ(x) = S j x and set m ∈ Z d such that (L ζ -id R d )(m) = -j. Then ζ(S m x) = S L ζ m+j ζ(x) = S m+(L ζ -id R d )(m)+j x = S m x. Hence S m x is a fixed point of ζ. 
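For a concrete instance, take an expansion matrix as in example (d) above, L_ζ = (1 -1; 1 1). Then L_ζ - id = (0 -1; 1 0) has determinant 1, and for a point with ζ(x) = S^j x it suffices to take m = (L_ζ - id)^{-1}(-j) = (-j_2, j_1), so that S^{(-j_2, j_1)} x is fixed by ζ.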
We conclude that the only ζ-invariant orbits in this case are the ones given by the fixed points of the substitution. The repetitivity function of substitutive subshifts Let ζ be an aperiodic primitive constant-shape substitution, and assume that x ∈ X ζ is a fixed point of the substitution. The minimality property implies the substitutive subshift is repetitive, i.e., for every pattern p x there is a radius R > 0 such that for every n ∈ Z d , the ball B(n, R) contains an occurrence of p in x. The repetitivity function is the map M X ζ : R + → R + defined for R > 0 as the smallest radius such that every ball B(n, M X ζ (R)) contains an occurrence of every pattern with diam(supp(p)) ≤ 2R. We say the substitution is linearly recurrent or linearly repetitive if the repetitivity function has a linear growth, i.e., there exists C > 0 such that M X ζ (R) ≤ C • R. The notion of linearly recurrent was introduced by F. Durand in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF], to study the relations between the substitutive dynamical systems and the stationary dimension groups. For the multidimensional context, the same notion appears in [START_REF] Lagarias | Local complexity of Delone sets and crystallinity[END_REF] for Delone sets of R d . In fact, according to a result in [START_REF] Lagarias | Local complexity of Delone sets and crystallinity[END_REF]Theorem 2.3], the linear growth of the repetitivity function is the slowest possible for an aperiodic Delone set. F. Durand proved one-dimensional substitutive subshift are linearly recurrent [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF], but in the multidimensional case this is no longer true, as we can see in Example 3.6. Example 3.6 (A nonlinearly repetitive constant-shape substitution). Consider the substitution σ 1 , given by L σ 1 = 2 0 0 3 , and F σ 1 1 = 0, 1 × 0, 2 defined by σ 1 : b c a c c b a → c b b → c b c → a c a b b c c b. For p > 1, we consider the pattern w p = σ p 1 (a)| 0,2 p-1 ×{0} ∈ L 0,2 p-1 ×{0} (X ζ ). A direct induction enable to prove that if w p σ p 1 α β γ δ , for α, β, γ, δ ∈ {a, b, c, ε} (where ε denote the empty pattern), then one of the letters must be a. Moreover, w p only appears in the lower left corner of the pattern σ p 1 (a). These properties imply that there is only one occurrence of w p in σ p 1 (w p ), which is in the lower corner of σ p 1 (w p ) as seen in Fig. 3.5: σ p 1 (w p ) • • • • • • • • • • • • • • • • • • w p 3 p 4 p σ p 1 (a) σ p 1 (b) Figure 3.5: Decomposition of σ p 1 (w p ). Then, there is a ball of radius 3 p /2 in the support of σ p 1 (w p ) with no occurrences of w p . Since this is true for any p, this implies that this substitution is not linearly recurrent. However, the repetitivity function has at most polynomial growth, with exponent depending only on the expansion map of the substitution. Lemma 3.7. Let ζ be an aperiodic primitive constant-shape substitution. Then M X ζ (R) is O   R - log( L ζ ) log ( L -1 ζ )   . Proof. Using A = K ζ + K ζ , and F = F ζ 1 + F ζ 1 in Proposition 1.20 , where K ζ is given by Proposition 1.18, we obtain a subset C Z d such that for all n > 0, L n ζ (K ζ + K ζ ) + F ζ n + F ζ n ⊆ L n ζ (C) + F ζ n . The recognizability property implies for every pattern p ∈ L(X ζ ) such that its support is contained in L n ζ (K ζ ) + F ζ n , there exists a pattern w ∈ L C (X ζ ) with p ζ n (w). Set T = M X ζ (diam(C)) . 
By definition, any ball of radius T contains an occurrence of every pattern in L L n ζ (K ζ )+F ζ n (X ζ ). Set R > 0. Fix n > 0 such that L n-1 ζ (K ζ ) + F ζ n-1 ⊆ B(0, R) ⊆ L n ζ (K ζ ) + F ζ n . By definition of n, we get that inf t =1 L n-1 ζ (t) ≤ 2R, hence 1/ L -1 ζ n-1 ≤ 2R. Thus M X ζ (R) ≤ L n ζ T ≤ L ζ n T ≤ C(ζ)R -(log( L ζ ))/(log( L -1 ζ )) , for some constant C(ζ) independent of R. Remark 3.8. The following statements can be easily verified. (1) In the case of a symmetric expansion map for the substitution, a bound for M X ζ (R) is given by the eigenvalues of the expansion matrix: (2) In the case of a self-similar tiling (where the expansion map satisfies L ζ (t) = λ t , for some λ > 0), the norm matrix satisfies M X ζ (R) = O R (log(|λ 1 |))/(log(|λ d |)) , L ζ = ( L -1 ζ ) -1 = λ, so the repetitivity function is O(R), i.e., has a linear growth. Hence self-similar substitutions are linearly recurrent, as it was proved in [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF]. (3) The sufficiency of the previous case is not true, there exist constant-shape substitutions that are not self-similar, but are linearly recurrent. Substitutive subshifts as extension of d-dimensional odometers In this section, we will describe substitutive subshifts as symbolic extensions of odometer systems, which are given by the data of the substitution. Actually, the maximal equicontinuous factor of a substitutive subshift is an explicit odometer system (Proposition 3.15). This was first made by F. Dekking in [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF] for the one-dimensional case, where he introduced the notion of height to describe the return times of a letter which are coprime with the K2: Suppose that ←g does not satisfy K1. Proceeding as in the previous case, we assure the existence of two indices j 1 = j 2 such that y n j 1 | K ζ = y n j 2 | K ζ for any n in an infinite subset E ⊆ N. This will imply that x j 1 | -gn+L n ζ (K ζ )+Fn = x j 2 | -gn+L n ζ (K ζ )+F ζ n for any n ∈ E. Now, note that for any n ∈ N, g n+1 -g n ∈ L n (F ζ 1 ) , and a direct induction prove that - g n + L n (K ζ ) + F ζ n is included in -g n+1 + L n+1 (K ζ ) + F ζ n+1 . Since K ζ ⊆ K ζ , we conclude that x j 1 = x j 2 which will be a contradiction. Remark 3.10. Note that (1) The set of points ← -g ∈ ← - Z d (L n ζ ) satisfying K1 is a G δ -set. Indeed, for any M > 0 define U M = ← -g ∈ ← - Z d (L n ζ ) : -M, M d ⊆ -g n + F ζ n , for some n > 0 . Note that U M is an open subset of ← - Z d (L n ζ ) and M >0 U M are exactly the points satisfying K1. Now, for any n in Z d and M > 0 we have that U M is included in U M +|n| , hence the set of points on the odometer satisfying K1 is invariant by the Z d -action. (2) We will prove that the existence of a point ← -g ∈ ← - Z d (L n ζ (Z d )) satisfying K1 is equivalent to the existence of N ∈ N and f ∈ F ζ N such that B(0, R L ζ ) ∩ Z d ⊆ F ζ N -f , with R L ζ = max L ζ 2 , L -1 ζ 1 -L -1 ζ 1 + L ζ 2 . Which is always true by the Følner assumption. We prove the nontrivial implication. We will show by induction that for every r ≥ R L ζ there exists n ∈ N and f r ∈ F ζ n such that B(0, r)∩Z d ⊆ F ζ n -f r . By hypothesis, this is true for r = R L ζ . Now, assume that for some r ≥ R L ζ there exists n ∈ N and f r ∈ F ζ n such that B(0, r) ∩ Z d ⊆ f ζ n -f r . Then, L ζ (B(0, r) ∩ Z d ) is included in L ζ (F ζ n ) -L ζ (f r ). 
This implies L ζ (B(0, r) ∩ Z d ) + B 0, L ζ 2 ∩ Z d ⊆ F ζ n+N +1 -L ζ (f r ) -f . Now, we prove that B(0, r + 1) ∩ Z d ⊆ L ζ (B(0, r) ∩ Z d ) + B (0, L ζ /2) ∩ Z d . Set n ∈ B(0, r + 1) ∩ Z d . Since L ζ (Z d ) is L ζ /2-relatively dense, we can write n = m + L ζ (p), with m ∈ B (0, L ζ /2) ∩ Z d and p ∈ Z d . We have that p ≤ L -1 ζ L ζ (p) = L -1 ζ n -m ≤ L -1 ζ r + 1 + L ζ 2 . The last expression is smaller than r whenever r ≥ (L -1 ζ /(1-L -1 ζ ))•(1 + L ζ /2), which implies p ∈ B(0, r) ∩ Z d . Since L ζ (f r ) + f is in F ζ n+N +1 , we conclude that there exist m ∈ N and f ∈ F ζ m such that B(0, r + 1) ∩ Z d ⊆ F ζ m -f . Finally, using classical compactness arguments we conclude the existence of a point ←g ∈ ← - Z d (L n ζ (Z d )) such that n>0 F ζ n -g n = Z d . (3) Let ζ be an aperiodic bijective primitive constant-shape substitution. The map p 0 is a permutation of A and we consider a power of p 0 such that p n 0 is equal to the identity. Since ζ is primitive, we replace it by ζ n and with this we may assume that ζ possess at least |A| fixed points. Now, let ←g satisfying K1. For any a ∈ A, consider a fixed point of ζ, denoted as x a , with x a (0) = a and define x ←g a = lim nm→∞ S gn m x a for some convergent subsequence. Since the sets {S gn [ζ n (a)]} a∈A are disjoint for any a = b ∈ A, x ← -g a is different from x ← -g b . Finally, noticing that π(x ← -g a ) = ← -g , we have that π -1 ({ ← -g }) ≥ |A|. By Lemma 3.9 we conclude that π -1 ({ ←g }) = |A| for any ←g satisfying K1, and then the factor map π : (X ζ , S, Z d ) → ( ← - Z d (L n ζ ) , + (L n ζ ) , Z d ) for aperiodic bijective constant-shape substitutions is almost |A|-to-1. In general, this d-dimensional odometer is not the maximal equicontinuous factor of aperiodic constant-shape substitutions. In some particular cases we can explicitly compute the cardinality of the fibers given by the topological factor in Lemma 3.9, such as in the two examples studied in [START_REF] Robinson | On the table and the chair[END_REF]. The following result shows other examples where we can compute the cardinality of the fibers. Lemma 3.11. Let L ∈ M(d, Z) be an integer expansion matrix with det(L) ≥ 3 and F 1 be a fundamental domain of L(Z d ) in Z d . Let A be a finite alphabet with cardinality |A| = |F 1 | -1 and τ : F 1 \ {0} → A be a bijection. We define a substitution σ L as the following: ∀a ∈ A, σ(a) f = a f = 0, τ (f ) f = 0. Under the hypothesis that the sequence of supports of the iterations σ n L is a Følner sequence, σ L is an aperiodic primitive constant-shape substitution and the factor map π : (X σ L , S, Z d ) → ( ← - Z d (L n ) , + (L n ) , Z d ) is almost 1-to-1. Moreover, we have that |π -1 ({ ← -g })| = |A| ← -g ∈ O( ← - 0 , Z d ), 1 ← -g / ∈ O( ← - 0 , Z d ). Proof. Since τ is a bijection, by definition σ L is a primitive substitution. Now, we prove that σ L is aperiodic. To prove this we prove σ L is recognizable. We follow similar ideas of the proof of the recognizability property in [START_REF] Kurka | Topological and symbolic dynamics, volume 11 of Cours Spécialisés[END_REF]. Let x be a fixed point of σ L and n, m ∈ Z d such that n ∈ L(Z d ) and x| n+F 1 = x| m+F 1 . We prove that m ∈ L(Z d ). Indeed, since x is a fixed point of the substitution and n ∈ L(Z d ) we have that for all f ∈ F 1 \ {0}, x m+f = x n+f = τ (f ). Set g ∈ F \ {0} such that m + g / ∈ L(Z d ) (such g ∈ F 1 \ {0} exists since |F 1 | ≥ 3 ). We write m + g = L(p) + h, with h ∈ F 1 . Note that h = 0. 
Hence x m+g = x n+g = τ (g), but x is a fixed point of σ L so x m+g = σ L (x p ) h = τ (h), i.e., τ (g) = τ (h). By bijectivity of τ we have that g = h, so m = L(p). We conclude that σ L is a recognizable substitution, i.e., σ L is aperiodic. Now we study the fibers π -1 ({ ←g }). Assume that ←g = ← -0 . By the proof of Lemma 3.9 we need to compute σ n L (w) for patterns w ∈ L Kσ L (X σ L ) and n > 0. Let w 1 , w 2 be two patterns in L Kσ L (X σ L ). Note that, by definition of σ L , the cardinality of the coordinates where the patterns σ n L (w 1 ), σ n L (w 2 ) differ is constant on n > 0 and is at most |K σ L |. Indeed, if W = {k ∈ K σ L : w 1 (k) = w 2 (k)}, then for any k ∈ W , σ n L (w 1 ) L n (k) = σ n L (w 2 ) L n (k) and for any a ∈ (L n (K σ L ) + F σ L n ) \ L n (W ) we have that σ n L (w 1 ) a = σ n L (w 2 ) a . Since the distance between these differences increase (exponentially on n > 0) and for any n > 0 and a ∈ A the patterns σ n L (a)| F σ L n and σ n+1 L (a)| F σ L n are the same, we get two cases: • If w 1 (0) = w 2 (0), the patterns σ n L (w 1 ), σ n L (w 2 ) converge to two points x 1 , x 2 ∈ X σ L with x 1 (0) = x 2 (0) and for any n ∈ Z d , x 1 (n) = x 2 (n). • At the opposite, if w 1 (0) = w 2 (0), the patterns σ n L (w 1 ), σ n L (w 2 ) converge to the same point x ∈ X σ L . This implies |π -1 ({ ← -0 })| ≤ |A| and since for any letter a ∈ A we have a fixed point x a such that x a (0) = a, then |π -1 ({ ← - 0 })| ≥ |A|. We conclude that |π -1 ({ ← - 0 })| = |A|. Since this property is invariant under translation, we conclude that for any ← -g ∈ O( ← - 0 , Z d ) that |π -1 ({ ← -g })| is equal to |A|. For the other case, consider ← -g = (g n ) n>0 / ∈ O( ← - 0 , Z d ). This implies for every n > 0 exists m > n such that g m = 0. Set w 1 , w 2 ∈ L Kσ L (X σ L ) and let W = {k ∈ K σ L : w 1 (k) = w 2 (k)} be their set of differences. By definition of σ L , for any n > 0 the coordinates where σ n L (w 1 ) and σ L (w 2 ) differ is L n (W ). Then, for any M > 0 we can find n > 0 such that g n + -M, M d ⊆ L n (K σ L )+F σ L n and g n + -M, M d ∩L n (Z d ) = ∅. Hence σ n L (w 1 )| gn+ -M,M d = σ n L (w 2 )| gn+ -M,M d . Moreover, a direct computation shows that for any n > 0, the patterns σ n L (w 1 )| gn+ -M,M d , σ n+1 L (w 1 )| g n+1 + -M,M d are the same. This implies, taking M > 0 arbitrarily large, for any w ∈ L Kσ L (X σ L ), the patterns σ n L (w 1 )| gn+ -M,M d are the same and then converge to a unique point x ∈ X σ L such that π(x) = ←g . We conclude that |π -1 ( ←g )| = 1. We will use this result to a study the fibers of a particular case in Chapter 6. The maximal equicontinuous factor of a substitutive subshift In this section, we will describe the maximal equicontinuous factor of substitutive subshifts (see [START_REF] Frank | Multidimensional constant-length substitution sequences[END_REF] for the case with diagonal expansion matrices and [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF] for the one-dimensional case). A subgroup L ≤ Z d is called a lattice if it is isomorphic to Z d , i.e., it has finite index. For a lattice L of Z d , we define the dual lattice of L, as the subgroup L * = {x ∈ R d : x, n ∈ Z, ∀n ∈ L}. We have that (Z d ) * = Z d , and for any R ∈ M(d, Z) the set R(Z d ) is a lattice of Z d with dual lattice equal to (R * ) -1 (Z d ), where R * stands for the algebraic adjoint of R. Let L 1 , L 2 be two lattices of Z d . 
We denote by L 1 ∨ L 2 the smallest lattice that contains L 1 and L 2 , i.e., if a lattice L containing L 1 and L 2 , then must contains L 1 ∨ L 2 . Fix x ∈ X ζ . We define the set of return times as R(X ζ ) = {j ∈ Z d : ∃k ∈ Z d , x k+j = x k }. By minimality, this set is well-defined independently of x ∈ X ζ , and it is syndetic, i.e., there exists a finite subset A Z d such that R(X ζ )+A = Z d . We define L(R(X ζ )) as the smallest lattice containing R(X ζ ). The height lattice H(X ζ ) of a constant-shape substitution ζ is the smallest lattice containing L(R(X ζ )) such that H(X ζ ) ∩ L ζ (Z d ) ≤ L ζ (H(X ζ )). Notice that the last property is equivalent to H(X ζ ) ∩ L n ζ (Z d ) ≤ L n ζ (H(X ζ )), for any n > 0. The height lattice is trivial whenever H(X ζ ) = Z d . In the following, we will give a description for the height lattice. For k ∈ Z d , we define R k (X ζ ) = {j ∈ Z d : x j+k = x k }. Let L(R k (X ζ )) be the smallest lattice containing R k (X ζ ) and H k (X ζ ) be the smallest lattice containing L(R k (X ζ )) such that H k (X ζ ) ∩ L ζ (Z d ) ≤ L ζ (H k (X ζ )) . We adapt the proof for the one-dimensional case [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF][START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF] and obtain the following result. Lemma 3.12. Let ζ be an aperiodic primitive constant-shape substitution. Then, for any k 1 , k 2 ∈ Z d the sets H k 1 (X ζ ), H k 2 (X ζ ) are the same. In particular, for any k ∈ Z d , H k (X ζ ) is equal to H(X ζ ). Proof. Let x ∈ X ζ be a fixed point of the substitution, k 1 , k 2 ∈ Z d and N be large enough such that x k 2 ζ N (x k 1 ). Set m such that x k 2 = ζ N (x k 1 ) m . Since x = ζ N (x) for any j ∈ R k 1 (X ζ ), L N ζ (k 1 + j) + m ∈ R k 2 (X ζ ). Hence L N ζ (j) is in L(R k 2 (X ζ )) and therefore in H k 2 (X ζ ). By definition of H k 2 (X ζ ) and invertibility of L N ζ , we conclude that j ∈ H k 2 (X ζ ). Since it is the smallest lattice satisfying the property, H k 1 (X ζ ) is a subgroup of H k 2 (X ζ ), and by reciprocity we have that these sets are the same. We conclude the second equality by observing that H( X ζ ) = k∈Z d H k (X ζ ). As in the one-dimensional case, to study the maximal equicontinuous factor of substitutive subshifts, we study theirs eigenvalues. A vector x ∈ R d is said to be an eigenvalue for the topological dynamical system (X ζ , S, Z d ) (the measure-preserving system (X ζ , µ, S, Z d )) if there exists a continuous function f : X ζ → C (f ∈ L 2 (X ζ , µ ζ )) such that for every n ∈ Z d , f • S n = e 2πi x,n f in X ζ (f • S n = e 2πi x,n f in X ζ , µ ζ -a.e. in X ζ ). In [START_REF] Solomyak | Eigenfunctions for substitution tiling systems[END_REF] it was proved the following result, generalizing the characterization of eigenvalues for the one-dimensional case [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF]. 1. The vector x is a continuous eigenvalue for the topological dynamical system (X ζ , S, R d ). 2. The vector x is a measurable eigenvalue for the measure-preserving system (X ζ , µ ζ , S, R d ). 3. The vector x satisfies the following condition: lim n→∞ e 2πi L n ζ j,α = 1, ∀ j ∈ R(X ζ ). 4. The vector x satisfies the following condition: lim n→∞ e 2πi L n ζ j,α = 1, ∀ j ∈ H(X ζ ). Remark 3.14. We have that (1) Condition 4. 
in Theorem 3.13 is not proved in [START_REF] Solomyak | Eigenfunctions for substitution tiling systems[END_REF] but it can be easily checked noticing the set of points satisfying (L * ζ ) N α, j ∈ Z is a lattice. (2) The same results is satisfied for the topological dynamical system (X ζ , S, Z d ) with the same arguments given in [START_REF] Solomyak | Eigenfunctions for substitution tiling systems[END_REF]. This implies the set of continuous (and measurable) eigenvalues of (X ζ , S, Z d ) corresponds to the set n≥0 (L * ζ ) -n (H * (X ζ )). In particular the set of eigenvalues E(X ζ , S, Z d ) of (X ζ , S, Z d ) is a subset of Q d . A direct consequence of Theorem 3. 13 is a description of the maximal equicontinuous factor of (X ζ , S, Z d ). Proposition 3.15. Let ζ be an aperiodic primitive constant-shape substitution. The maximal equicontinuous factor of the substitutive subshift of (X ζ , S, Z d ) is the odometer system ( ← - Z d (L n ζ (H(X ζ ))) , + L n ζ (H(X ζ )) , Z d ) = lim ←n (Z d /L n ζ (H(X ζ )), α n ), + (L n ζ (H(X ζ ))) , Z d . For completeness, we provide another description for the maximal equicontinuous factor in a general setting. Let (X, T, Z d ) be a topological dynamical system, where X is a Cantor set, and Γ ≤ Z d a subgroup with finite index. We say (X, T, Z d ) admits a Γ-minimal partition if there exists a closed partition g∈Z d /Γ X g = X such that for all g ∈ Z d /Γ the set of return times RT (X g ) = {n ∈ Z d : T -n (X g ) ∩ X g = ∅} is equal to Γ and the topological dynamical system (X g , Γ) is a minimal system. Note that, since its a finite partition, the sets X g are clopen. The minimality of the induced actions implies there is at most one Γ-minimal partition up to a permutation of the sets {X g } g∈Z d /Γ . A Γ-minimal partition is associated with an equicontinuous factor of (X, T, Z d ). To see this we enumerate X g such that for all g ∈ Z d /Γ and n ∈ Z d we have that T n X g = X g+n mod Γ . Then, the map π : (X, T, Z d ) → (Z d /Γ, + Γ , Z d ), such that π(x) = g if and only if x ∈ X g , is a factor map onto (Z d /Γ, + Γ , Z d ) , where Z d acts by quotient translations onto Z d /Γ. The following proposition shows the connection between Γ-minimal partitions and eigenvalues of a topological dynamical system. Proposition 3.16. Let (X, T, Z d ) be a minimal topological dynamical system and Γ ≤ Z d be a finite index subgroup. The system (X, T, Z d ) admits a Γ-minimal partition, if and only if Γ * ⊆ E(X, T, Z d ). Proof. Let R ∈ M(d, Z) be such that Γ = R(Z d ). Let {X g } g∈R([0,1) d )∩Z d be a Γ-minimal partition satisfying X g = T g (X 0 ) for all g ∈ M ([0, 1) d )∩Z d . Let x be in (R * ) -1 (Z d ). Define f as the map f = g∈M ([0,1) d )∩Z d e 2πi x,g 1 Xg . Since the sets {X g } g∈R([0,1) d )∩Z d are clopen, the map f is continuous. Let x ∈ X 0 , m ∈ Z d and m 1 ∈ Z d , m 2 ∈ R([0, 1) d ) ∩ Z d be such that m = R(m 1 ) + m 2 . Note that T m x is in X m 2 and since x is in (R * ) -1 (Z d ), we have that e 2πi α,m = e 2πi α,R(m 1 )+m 2 = e 2πi( α,R(m 1 ) + α,m 2 ) = e 2πi α,m 2 . We find a continuous map such that f (T m x) = e 2πi x,m f (x) for all x ∈ X and all m ∈ Z d . We conclude that x ∈ E(X, T, Z d ). On the other hand, let x be in X. For j ∈ {1, . . . , d} we denote x j = (R * ) -1 (e i ). Since Γ * ⊆ E(X, T, Z d ) there exists a map f j : X → C such that f j (T m x) = e 2πi x j ,m f j (x) for all m ∈ Z d . Since the eigenspaces are one-dimensional we choose f j such that f j (x) = 1. 
By the previous formula, the values of e 2πi x j ,m only depend on m ∈ R([0, 1) d ) ∩ Z d . Now, for any j ∈ {1, . . . , d} and m ∈ R([0, 1) d )∩Z d we denote X j m = f -1 j ({e 2πi α j ,m }). For each j ∈ {1, . . . , d}, the set X is equal to m∈R([0,1) d )∩Z d X j m . We define X m = j∈{1,...,d} X j m . We will prove that {X m } m∈R([0,1) d )∩Z d is a Γ-minimal partition. Note that X m is Γ-invariant. First, assume that there exist n 1 , n 2 ∈ R([0, 1) d )∩Z d such that n 1 = n 2 and X n 1 ∩X n 2 = ∅. Using the Z d -action on X, this is equivalent to the existence of m ∈ R([0, 1) d ) ∩ Z d with m = 0 and X m ∩ X 0 = ∅. This means that for all j ∈ {1, . . . , d}, e 2πi x j ,m is equal to 1, i.e., x j , m ∈ Z. Since Γ * = {x 1 , . . . , x d } , we have that m ∈ (Γ * ) * = Γ which is a contradiction, so all of these sets are disjoint. By minimality, we have that X = m∈R([0,1) d )∩Z d O(T m x, Γ). Since O(T m x, Γ) is included in X m , we conclude that {X m } m∈R([0,1) d )∩Z d is a clopen partition of X and O(T m x, Γ) = X m , so the action of Γ on X m is minimal. A direct computation shows that the set of return times of each X m is Γ. Hence {X m } m∈R([0,1) d )∩Z d is a Γ-minimal partition. Remark 3.17. The following statements can be easily verified. (1) In the case of an aperiodic primitive constant-shape substitution ζ, the recognizability property implies for all n > 0, the sets {S j ζ n (X ζ )} j∈F ζ n are a L n ζ (Z d )-minimal partition. (2) By Theorem 3.13 and Proposition 3.16 for any aperiodic primitive constant-shape substitution there exists a H(X ζ )-minimal partition. Aperiodic symbolic factors of substitutive subshifts are conjugate to substitutive subshifts In the following we will prove Theorem E. This is a multidimensional analogue of a result proved by C. Müllner and R. Yassawi [START_REF] Müllner | Automorphisms of automatic shifts[END_REF], which is a refinement of a result proved in [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF] for constant-length substitutions. We follow the same strategy of [START_REF] Müllner | Automorphisms of automatic shifts[END_REF], with a slight difference. By Proposition 3.3 aperiodic symbolic factors of substitutive subshifts are recognizable. In the original article, this is not mentioned. They proved an odometer system (the one defined in Section 3.4) is an equicontinuous system of aperiodic symbolic factors of substitutive subshifts, following the ideas developed by T. Kamae [START_REF] Kamae | A topological invariant of substitution minimal sets[END_REF] about minimal partitions. To get this, they used a characterization of periodic sequences and the complexity function, known as Morse-Hedlund theorem [START_REF] Morse | Symbolic dynamics II. Sturmian trajectories[END_REF]. Until now, this characterization is not known to be true in the two-dimensional case, and it is called as the Nivat's conjecture. However, in higher dimensions is known to be false [START_REF] Cassaigne | Subword complexity and periodicity in two or more dimensions[END_REF]. Then, using the fact that we can always assume that the factor between a substitutive subshift and an aperiodic symbolic system is induced by a letter-to-letter map (Lemma 1.22), we define an equivalence relation on the alphabet, calling two letters equivalent if and only if they have the same image via the local map given by the factor map. 
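To fix ideas, the following small sketch (in Python, on a toy one-dimensional constant-length substitution of our own; the mechanics are the same for constant-shape substitutions, with patterns indexed by F_1) groups letters that the factor map cannot distinguish on any iterate of the substitution, tested here up to a finite horizon as a stand-in for the indistinguishability relation used in the proof below, and forms the induced substitution on the classes.

# Toy quotient construction: zeta is a constant-length substitution, tau a
# letter-to-letter map; letters are grouped when tau(zeta^n(a)) = tau(zeta^n(b))
# for all n up to a finite horizon, and the induced substitution is built.
zeta = {"a": "ab", "b": "ba", "c": "cb", "d": "da"}   # hypothetical example
tau  = {"a": "0", "b": "1", "c": "0", "d": "0"}

def iterate(w, n):                       # zeta^n applied letterwise to a word
    for _ in range(n):
        w = "".join(zeta[x] for x in w)
    return w

def indistinguishable(a, b, N=6):        # finite-horizon test
    return all("".join(map(tau.get, iterate(a, n))) ==
               "".join(map(tau.get, iterate(b, n))) for n in range(N + 1))

letters = sorted(zeta)
classes = []                             # greedy grouping into equivalence classes
for x in letters:
    for cl in classes:
        if indistinguishable(x, cl[0]):
            cl.append(x); break
    else:
        classes.append([x])

rep = {x: cl[0] for cl in classes for x in cl}          # class representatives
zeta_quot = {rep[x]: "".join(rep[y] for y in zeta[x]) for x in letters}
# well-definedness check: equivalent letters induce the same quotient image
assert all(zeta_quot[rep[x]] == "".join(rep[y] for y in zeta[x]) for x in letters)
print(classes, zeta_quot)    # [['a','c'], ['b'], ['d']] and the induced substitution

In this toy example the letters a and c are identified, and the induced substitution on the three classes is again constant-length; the construction carried out below is the constant-shape version of exactly this procedure.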
We consider the set of equivalence classes as a new alphabet, and we define a new constant-shape substitution on it. We finally prove that the substitutive subshift generated by this new substitution is conjugate to the aperiodic symbolic factor (Theorem 3.22). Now, we proceed to the proof. First, we have the following consequence of Proposition 3.3. Remark 3.18. It is straightforward to check that Proposition 3.3 implies if x, x ∈ X ζ are such that τ (x) = τ (x ), then π(x) is equal to π(x ), where π : (X ζ , S, Z d ) → ( ← - Z d (L n ζ ) , + (L n ζ ) , Z d ) is the factor map. To prove the result we will introduce some notions as in [START_REF] Müllner | Automorphisms of automatic shifts[END_REF]. Let ζ be an aperiodic primitive constant-shape substitution. We consider a labeled directed graph G ζ with vertex set E = A 2 and there exists an edge (a, b) to (c, d) with label f ∈ F ζ 1 if ζ(a) f = c, ζ(b) f = d. Note that the diagonal ∆ A = {(a, a) : a ∈ A} is a stable set, i.e., E(∆ A ) = ∆ A . Let P = (a 0 , b 0 )(a 1 , b 1 )(a 2 , b 2 ) be a path, by definition, there is an edge from (a 0 , b 0 ) to (a 1 , b 1 ) with a label f 0 and an edge (a 1 , b 1 ) to (a 2 , b 2 ) with label f 1 , i.e, ζ(a 0 ) f 0 = a 1 , ζ(b 0 ) f 0 = b 1 , and ζ(a 1 ) f 1 = a 2 , ζ(b 1 ) f 1 = b 2 . Then, we have that ζ 2 (a 0 ) L ζ f 0 +f 1 = a 2 , ζ 2 (b 0 ) L ζ f 0 +f 1 = b 2 . This means, the paths indicate the simultaneous positions of the letters in the iterates of the substitution. Remark 3.20. The following statements can be easily verified (1) As for the case of periodic points for the substitution, we can replace the substitution ζ for an appropriate power, i.e, ζ n(ζ) , so we may assume that the substitution is pairaperiodic. (2) If the substitution ζ is bijective, every (a, b) ∈ A 2 \ ∆ A is an asymptotic disjoint pair. ( With these definitions we are ready to prove the next result which is the multidimensional analogue to Theorem 22 in [START_REF] Müllner | Automorphisms of automatic shifts[END_REF]. We define an equivalence relation a ∼ b in A, such that a ∼ b if a, b are indistinguishable. By definition, the substitution ζ ([a]) f = [ζ(a) f ], f ∈ F ζ 1 . in A/ ∼ and the map T : A/∼ → B, given by T ([a]) = τ (a) are well defined. These maps satisfy the following property: Every pair in A/ ∼ is distinguishable. ( It is straightforward to check that primitivity of ζ implies primitivity of ζ . Assume now that ([a], [b] ) is a periodic pair, i.e, there exists a cycle (c,d) repeating the labels of the path P 1 with a period k. By the Pigeonhole Principle, there exist two subpaths P 4 = (e 0 , f 0 ) . . . (e l 1 k , f l 1 k ), P 5 = (g 0 , h 0 ) . . . (g l 1 k, h l 2 k ) of P 3 , having the same labels of the edges as P 1 repeating with period k, such that e 0 = e l 1 k , P 1 = ([a 0 ], [b 0 ]) . . . ([a k ], [b k ]) in G ζ with ([a 0 ], [b 0 ]) = ([a k ], [b k ]) = ([a], [b]). We can consider a path P 2 = (c 0 , d 0 ) . . . (c k , d k ) in G ζ with [c i ] = [a i ] and [d i ] = [b i ] for 0 ≤ i ≤ k h 0 = h l 2 k and [e 0 ] = [a 0 ], [h 0 ] = [b 0 ]. Now con- sider the cycle in G ζ (u 0 , v 0 ) . . . (u l 1 l 2 k 2 , v l 1 l 2 k 2 ) where u l 1 kj . . . u l 1 k(j+1) = e 0 , . . . , e l 1 k and v l 2 km . . . v l 2 k(m+1) = h 0 . . . h l 2 k for all 0 ≤ j < l 2 k, 0 ≤ m < l 1 k. Since ζ is pair- aperiodic, there exists f ∈ F ζ 1 such that ζ(e 0 ) f = e 0 , ζ(h 0 ) f = h 0 . We then conclude that ζ ([a]) f = [a], ζ ([b]) f = [b], i.e., ζ is pair-aperiodic. 
On the other hand, for all n > 0, we have that τ (ζ n (a)) = τ (ζ ([a] )), hence Y has the same language as τ (X ζ ), so they are equal, since subshifts are uniquely determined by their language. Finally, we prove that τ : (X ζ , S, Z d ) → (Y, S, Z d ) is a conjugacy. Let x, x ∈ X ζ , with τ (x) = τ (x ). By the recognizability property of X ζ , we can write x = S f 1 ζ 2|A| 2 (x), x = S f 2 ζ 2|A| 2 (x ). By Remark 3.18 we have that f 1 = f 2 . For every n ∈ Z d , (x n , x n ) ∈ A 2 are not asymptotic disjoint pair. Assume the contrary, i.e., there exists n ∈ Z d such that (x n , x n ) is an asymptotic disjoint pair. Then we can find a periodic pair ([a], [b]) and a path P = ([a 0 ], [b 0 ]) . . . ([a k ], [b k ]) in G ζ with ([a 0 ], [b 0 ]) = (x n , x n ) and ([a k ], [b k ]) = ([a], [b]). with k ≤ |A| 2 . Since ζ is pair-aperiodic, we have that there exists f ∈ F ζ 2|A| 2 such that (ζ ) 2|A| 2 (x n ) f = [a], (ζ ) 2|A| 2 (x n ) f = [b]. Since τ (ζ 2|A| 2 (x n )) = ζ 2|A| 2 (τ (x n )) = ζ 2|A| 2 (τ (x n )) = τ (ζ 2|A| 2 (x n )), we have that τ (a) = τ (b), then ([a], [b]) are indistinguishable, which contradicts 3.1. Thus we have that (x n , x n ) is not an asymptotic disjoint pair for any n ∈ Z d . By Remark 3.20 we have that ζ 2|A| 2 (x n ) = ζ 2|A| 2 (x n ), i.e., x = x . For the one-dimensional case, F. Durand proved [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] that Cantor topological factors of substitutive subshifts are either substitutive subshifts (when the action is expansive) or odometer systems (when the action is equicontinuous). This dichotomy is no longer true in the multidimensional context. Example 4.3 shows an example of substitutive subshift with a Cantor topological factor that is neither expansive nor equicontinuous. Also the example in Example 4.3 has a symbolic factor (in fact substitutive subshift) which has a non-trivial period, and the phase space is still infinite. Chapter 4 Measurable morphisms between substitutive subshifts In this chapter, we study different types of homomorphisms between substitutive subshifts. Note that since these subshifts are uniquely ergodic, any topological endomorphism is also a measurable endomorphism preserving the ergodic measure. First, we will extend a result of B. Host and F. Parreau [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] to the multidimensional case (Theorem 4.1), showing some rigidity properties: Any measurable factor between two substitutive subshifts given by two aperiodic constant-shape substitutions with some combinatorial property induces a continuous one. We follows the same strategy for the proof. It is based on the property of these substitutive subshifts being self-induced systems, i.e., there exists non-empty clopen proper subsets (in our case ζ n (X ζ ), for all n > 0) such that the induced system (ζ n (X ζ ), S ζ n (X ζ ) , Z d ) is conjugate to the system (X ζ , S, Z d ). Here, the action S ζ n (X ζ ) is given by S L n ζ (m) : m ∈ Z d . This implies, any measurable endo- morphism φ of (X ζ , S, Z d ) is associated with an induced measurable endomorphism φ n of (ζ n (X ζ ), S ζ n (X ζ ) , Z d ). We prove that these induced measurable endomorphisms are stationary, i.e., there exists n = m > 0 such that φ n = φ m . Then, we prove that we can approximate these induced measurable endomorphisms by endomorphisms of radius F ζ 1 1 + L -1 ζ 2 + 1/(1 -L -1 ζ ) . 
The finiteness of sliding block codes of a specific radius will let us conclude the theorem (Theorem 4.1). The result of B. Host and F. Parreau was then extended by V. Salo and Törmä in [START_REF] Salo | Block maps between primitive uniform and Pisot substitutions[END_REF], for topological factors between constant-length substitutions and Pisot substitutions whose associated incidence matrices have the same dominant eigenvalue. As in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], there exists a bound R such that any factor map is the composition of a sliding block code of radius R with a power of the shift map. They used a renormalization process, where they could reduce the radius of a given factor map. Then, F. Durand and J. Leroy extended this result for any pair of aperiodic substitutions [START_REF] Durand | Decidability of the isomorphism and the factorization between minimal substitution subshifts[END_REF]. This was one of the key steps to prove the decidability of the isomorphism problem between substitutive subshifts. Our result shows the decidability of the isomorphism problem between multidimensional substitutive subshifts, when both constant-shape substitutions has the same expansion matrix and same support, and satisfy a combinatorial condition (called reducibility). This condition is always satisfied for bijective substitutions. Nevertheless, this result is far from representing the complete picture about the decidability of the isomorphism problem between multidimensional substitutive subshifts. Then, we will deduce restrictions on endomorphisms and homomorphisms, using the finiteness of the number of invariant orbits (Proposition 3.4). Every substitutive subshift given by an aperiodic constant-shape substitution satisfying the combinatorial property is coalescent (Proposition 4.7). This was already proved in the one-dimensional case in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF]. Also we proved the automorphism group of substitutive subshift is virtually generated by the shift action (Proposition 4.8). It was already known in the one-dimensional context by [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] and [START_REF] Lemańczyk | On metric properties of substitutions[END_REF]. In the multidimensional framework was proved under more restrictive geometrical and combinatorial properties [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]. We also give some conditions to get that the automorphism group of a substitutive subshift is isomorphic to a direct product of Z d with a finite group (Corollary 4.9). Finally, we extend Theorem 4.1 to homomorphisms associated with matrices commuting with a power of the expansion matrix of the substitution (Theorem 4.12). This leads to the same rigidity properties about these homomorphisms (Proposition 4.16) and for a restricted normalizer group (Proposition 4.16). Notice that in the next chapter we will give sufficient conditions to ensure the former result is a complete characterization of the normalizer group. Measurable factors implies continuous ones for substitutive subshifts In this section, we will extend the result proved in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] about measurable factors between substitutive subshifts in the multidimensional context. 
As in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], we will use the notion of reducibility of a substitution. Let ζ be a constant-shape substitution. For any pair of letters a, b ∈ A, and n > 0, we consider the sequence d n (ζ n (a), ζ n (b)) = f ∈ F ζ n : ζ n (a) f = ζ n (b) f |F ζ n | . This sequence is decreasing for all of the pairs a, b ∈ A. We say the constant-shape substitution is reduced if min n∈N a =b∈A d n (ζ n (a), ζ n (b)) > 0. For instance, every bijective constantshape substitution is reduced. As mentioned in Section 1.7, the substitutive subshift (X ζ , S, Z d ) is uniquely ergodic. Denote µ ζ its unique ergodic invariant measure. Using the recognizability property, as in the one-dimensional case [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF], this unique ergodic measure satisfies ∀U ∈ F X ζ , µ ζ (U ) = 1 |F ζ n | X ζ f ∈ F ζ n : S f ζ n (x) ∈ U dµ ζ (x), where F X ζ corresponds to the Borel sets of X ζ . The following theorem is a multidimensional analogue of Theorem 1.3 in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]: Theorem 4.1. Let (X ζ 1 , S, Z d ), (X ζ 2 , S, Z d ) be two substitutive subshift from two aperiodic primitive constant-shape substitutions ζ 1 , ζ 2 from finite alphabets A and B, with the same expansion matrix L and same support F 1 . If (X ζ 2 , S, Z d ) is reduced, then for every measurable factor φ : (X ζ 1 , µ ζ 1 , S, Z d ) → (X ζ 2 , µ ζ 2 , S, Z d ), there exists j ∈ Z d such that S j φ is equal µ ζ 1 -a.e. to a continuous factor ψ : (X ζ 1 , S, Z d ) → (X ζ 2 , S, Z d ), satisfying the following two properties: 1. ψ is a sliding block code of radius F ζ 1 1 + L -1 ζ 2 + 1/(1 -L -1 ζ ) . 2. There exist an integer n > 0 and p ∈ F ζ n such that, S p ψζ n 1 = ζ n 2 ψ. In Chapter 6 we present an example where Theorem 4.1 can be applied, describing the automorphisms of it. Remark 4.2. The following statements can be easily verified. (1) If L ζ is a diagonal matrix, then L -1 ζ ≤ 1/2, so ψ is a sliding block code of radius 3 F ζ 1 . (2) Since the set of sliding block codes of radius (3) As also mentioned in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], we may assume that p / ∈ (L n ζ -id)(Z d ), because it is equivalent to find a factor map commuting with the substitution map, i.e., F ζ 1 1 + L -1 ζ 2 + 1/(1 -L -1 ζ ) between X ζ 1 and X ζ 2 is ψζ n = ζ n ψ, with p ∈ F ζ 1 . We follows the same strategy of [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF]. Substitutive subshifts are self-induced systems, i.e., there exists non-empty clopen proper subsets (in our case ζ n (X ζ ), for all n > 0) such that the induced system (ζ n (X ζ ), S ζ n (X ζ ) , Z d ) is conjugate to the system (X ζ , S, Z d ). We recall, the action S ζ n (X ζ ) is given by S L n ζ (m) : m ∈ Z d . This implies, any measurable endomorphism φ of (X ζ , S, Z d ) is associated with an induced measurable endomorphism φ n of (ζ n (X ζ ), S ζ n (X ζ ) , Z d ). We prove that these induced measurable endomorphisms are stationary, i.e., there exists n = m > 0 such that φ n = φ m . Then, we prove that we can approximate these induced measurable endomorphisms by endomorphisms of radius F ζ 1 1 + L -1 ζ 2 + 1/(1 -L -1 ζ ) . The finiteness of sliding block codes of a specific radius will let us conclude the theorem (Theorem 4. ) ) f = [ζ(a) f ] for f ∈ F ζ 1 . 
This substitution is reduced. We have a natural letter-to-letter factor map φ : (X ζ , S, Z d ) → (X ζ , S, Z d ), and is called the reduced substitution of ζ. In the one-dimensional case if (X ζ , S, Z) does not have purely discrete spectrum, it can be proved using the results in [START_REF] Dekking | The spectrum of dynamical systems arising from substitutions of constant length[END_REF] that (X ζ , S, Z) is aperiodic. In the multidimensional case this is not true in general, as we can see in Example 4.3. Example 4.3 (An aperiodic constant-shape substitution, with a periodic reduced substitution). Consider the substitution σ 2 with L σ 2 = 2 0 0 2 and F σ 2 1 = 0, 1 2 , given by σ 2 : 0 → 1 3 0 2 , 1 → 0 2 0 2 , 2 → 3 1 2 0 , 3 → 2 0 2 0 . This substitution corresponds to the product substitution between the Thue-Morse substitution (σ 3 : 0 → 01, 1 → 10) and the doubling sequence substitution (σ 4 : a → ab, b → aa). The substitution does not have purely discrete spectrum, since (X T M × ← - Z (2 n Z) , S ×+ (2 n Z) , Z 2 ) is a factor of (X σ 2 , S, Z 2 ), where (X T M , S, Z) corresponds to the onedimensional Thue-Morse substitutive subshift. The reduced substitution for σ 2 is defined with the same expansion matrix and support, given by: σ2 : a → a b a b , b → b a b a , where every element in {0} × Z is a nontrivial period of σ2 . However, as proved in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] for the one-dimensional case, if the reduced substitution system is aperiodic, then (X ζ , µ ζ , S, Z d ) is metrically isomorphic to (X ζ , µ ζ , S, Z d ). X ζ , S, Z d ) is a metric isomorphism. Proof. Let π : (X ζ , S, Z d ) → ( ← - Z d (L n ζ ) , + (L n ζ ) , Z d ) defined in Section 3.4.1. By Proposition 3.3, for every n > 0, π n (x) is equal to π n ( φ(x)). In particular, if x, y ∈ X ζ satisfies φ(x) = φ(y), then π n (x), π n (y) are equal for any n > 0. Set U = {x ∈ X ζ : ∃y ∈ X ζ , φ(x) = φ(y), x 0 = y 0 }. It is enough to prove that U is a null-set. Let n > 0, f ∈ F ζ n and x ∈ X ζ be such that S f ζ n (x) ∈ U . Then, there exists y ∈ X ζ with φ(y) = φ(S f ζ n (x)) and y 0 = ζ n (x 0 ) f . Then π n (y) = π n (S f ζ n (x)) and is equal to f . Moreover, there exists z ∈ X ζ with y = S f ζ n (z), so φ(x) is equal to φ(z). This implies (ζ n z 0 ) j , (ζ n x 0 ) j are equivalent for all j ∈ F ζ n . Note that (ζ n z 0 ) f = y 0 , so is different from (ζ n x 0 ) f . We define the set G n = a,b∈A f ∈ F ζ n : [(ζ n a) f ] = [(ζ n b) f ], (ζ n a) f = (ζ n b) f . We deduce from the previous paragraph that µ ζ (U ) = 1 |F ζ n | f ∈ F ζ n : S f ζ n (x) ∈ U dµ(x) ≤ |G n | |F ζ n | . For any a, b ∈ A we denote D a,b n = (c, d) ∈ A 2 : ∃f ∈ F ζ n , (ζ n a) f = c, (ζ n b) f = d, [c] = [d] and E a,b n = (c, d) ∈ A 2 : ∃f ∈ F ζ n , (ζ n a) f = c, (ζ n b) f = d, [c] = [d] . Set ε > 0 and let j > 0 be large enough such that for any a, b ∈ A we have that = 1 d j (ζ j (a), ζ j (b)) ≤ lim k→∞ d k (ζ k (a), ζ k (b)) + ε. |F ζ n |   (c,d)∈D a,b n d j (ζ j (c), ζ j (d)) + (c,d)∈E a,b n d j (ζ j (c), ζ j (d))   ≤ 1 |F ζ n |   (c,d)∈D a,b n ( lim k→∞ d k (ζ k (c), ζ k (d)) + ε) + (c,d)∈E a,b n ε   ≤ ε(|D a,b n | + |E a,b n |) |F ζ n | + 1 |F ζ n | (c,d)∈D a,b n lim k→∞ d k (ζ k (c), ζ k (d)). Since this is for every ε > 0 and lim When n → ∞, the right expression goes to zero, and we conclude µ ζ (U ) = 0. 
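Example 4.3 can be reproduced by machine. The sketch below builds σ_2 as the product of the Thue–Morse and period-doubling substitutions (the identification of the four letters with pairs is our reading of the example, chosen so that the displayed 2×2 blocks match) and evaluates the quantities d_n of the reducibility criterion; their minimum over pairs of letters tends to 0, confirming that σ_2 is not reduced.

```python
from itertools import product, combinations

# sigma_2 of Example 4.3 as the product of Thue-Morse and period-doubling.
# Letters of sigma_2 are encoded as pairs (t, d); this encoding is our reading
# of the example, chosen so that the four displayed 2x2 blocks are recovered.
TM = {0: (0, 1), 1: (1, 0)}               # 0 -> 01, 1 -> 10
DB = {"a": ("a", "b"), "b": ("a", "a")}   # a -> ab, b -> aa
F1 = [(0, 0), (1, 0), (0, 1), (1, 1)]     # support, expansion matrix 2*Id

def sigma2(letter):
    t, d = letter
    return {(i, j): (TM[t][i], DB[d][j]) for (i, j) in F1}

def iterate(letter, n):
    """sigma_2^n(letter) as a dict: position in F_n -> letter."""
    patch = {(0, 0): letter}
    for _ in range(n):
        patch = {(2 * p[0] + f[0], 2 * p[1] + f[1]): b
                 for p, a in patch.items() for f, b in sigma2(a).items()}
    return patch

A = list(product([0, 1], ["a", "b"]))

def d_n(a, b, n):
    pa, pb = iterate(a, n), iterate(b, n)
    return sum(pa[f] != pb[f] for f in pa) / len(pa)

for n in (1, 2, 3, 4):
    print(n, min(d_n(a, b, n) for a, b in combinations(A, 2)))
# prints 0.5, 0.25, 0.125, 0.0625: the minimum goes to 0, so sigma_2 is not
# reduced; identifying letters with the same Thue-Morse component gives the
# (periodic) reduced substitution of Example 4.3.
```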
It is known that if the substitutive subshift has a coincidence, then it is metrically isomorphic to its maximal equicontinuous factor [START_REF] Queffélec | Substitution dynamical systems-spectral analysis[END_REF]. A constant-length substitution has a coincidence, if there exists n > 0 and an index j on the support of ζ n such that for all pair of letters a, b ∈ A, ζ n (a) j = ζ n (b) j . The doubling sequence substitutive subshift (see Example 4.3) is an example of a substitution having a coincidence. If a substitution has a coincidence, then cannot be reduced. In fact its reduced substitution is trivial, i.e., has an alphabet of cardinality 1. The set of measurable endomorphisms of odometer systems is not discrete. In fact it is uncountable and any element of the odometer system represent a measurable endomorphism via addition. So, as in the original article [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF], reducibility is an optimal hypothesis for Theorem C. Now, to prove Theorem 4.1, we assume that ζ 2 is an aperiodic primitive reduced constant-shape substitution. We denote by η = min (φ) in F ζ 1 n . The set S pn(φ) φζ n 1 (X ζ 1 ) is included in ζ n 2 (X ζ 2 ) up to a µ ζ 2 -null set. Since ζ n 1 is a homeomorphism from X ζ 1 to ζ n 1 (X ζ 1 ), for µ ζ 1 -almost all x ∈ X ζ 1 there exists a unique point y ∈ X ζ 2 such that S pn(φ) φζ n 1 (x) = ζ n 2 (y), which we denote φ n (x). So, for every φ ∈ m Fac(X ζ 1 , X ζ 2 , S, Z d ) we consider a sequence (p n (φ)) n≥0 and a sequence of maps (φ n ) n ∈ m Fac(X ζ 1 , X ζ 2 , S, Z d ) such that p n (φ) ∈ F ζ 1 n , S pn(φ) φζ n 1 (x) = ζ n 2 (φ n (x)). It is straightforward to check that the sequence satisfies the recurrence p n+1 (φ) = p n (φ) + L n ζ 1 p 1 (φ n ) (mod L n+1 ζ 1 (Z d )) . We also have the recurrence (φ n ) 1 = φ n+1 . Now, for φ, ψ ∈ m Fac(X ζ 1 , X ζ 2 , S, Z d ), we denote d(φ, ψ) = µ ζ 1 ({x ∈ X ζ 1 : (φx) 0 = (ψx) 0 }). We also denote for any r > 0 the quantity C(r) = |B(0, r) ∩ Z d |. Applications of rigidity results on endomorphisms of substitutive subshifts In this section, we prove two consequences of Theorem 4.1 on endomorphisms of substitutive subshifts. First, we prove that substitutive subshifts given by aperiodic primitive reduced constant-shape substitutions are coalescent. This was first proved in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] for one-dimensional linearly recurrent subshifts (in particular aperiodic subshifts). Linearly recurrent substitutive subshifts (such as the self-similar ones) are also coalescent as a consequence of a result in [START_REF] Cortez | Linearly repetitive Delone systems have a finite number of nonperiodic Delone system factors[END_REF]. Then, we deduce the automorphism group of a substitutive subshift of an aperiodic primitive reduced constant-shape substitution is virtually generated by the shift action. As a corollary, we get a condition for the automorphism group to be isomorphic to a direct product of Z d (given by the shift action) and a finite group. As we will see in the next chapter, this condition is always true for aperiodic bijective primitive constant-shape substitutions. From now on, during this section we will always assume, up to consider a power of the substitution, that any factor map ψ ∈ End(X ζ , S, Z d ) satisfying Property 2. in Theorem 4.1 is true for n = 1. 
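Representatives modulo L_ζ^n(Z^d) inside F_ζ^n, such as p_n(φ) above, are computed by a base-(L_ζ, F_ζ^1) digit expansion. The following sketch performs this expansion; it assumes, as for any constant-shape substitution, that F_ζ^1 is a complete set of residues modulo L_ζ(Z^d), and the matrix and support used are illustrative only.

```python
import numpy as np

L = np.array([[2, 0], [0, 2]])                       # illustrative expansion matrix
F1 = [np.array(f) for f in [(0, 0), (1, 0), (0, 1), (1, 1)]]

def digit(m):
    """Return (f, z) with f in F_1, z in Z^d and m = L z + f."""
    for f in F1:
        z = np.linalg.solve(L, (m - f).astype(float))
        if np.allclose(z, np.round(z)):
            return f, np.round(z).astype(int)
    raise ValueError("F_1 is not a complete residue system mod L(Z^d)")

def represent(m, n):
    """Write m = L^n z + r with r in F_n = F_1 + L(F_1) + ... + L^{n-1}(F_1)."""
    r, z = np.zeros_like(np.asarray(m)), np.asarray(m)
    for i in range(n):
        f, z = digit(z)
        r = r + np.linalg.matrix_power(L, i) @ f
    return r, z            # representative in F_n, quotient in Z^d

print(represent((5, -3), 3))   # (5, -3) = 8*(0, -1) + (5, 5), with (5, 5) in F_3
```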
Coalescence of substitutive subshifts In [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF] it was proved that one-dimensional linearly recurrent subshifts (in particular aperiodic substitutions) are coalescent. In the multidimensional context, linearly recurrent substitutive subshifts (such as the self-similar ones) are also coalescent as a consequence of a result in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF]. Here we will use Theorem 4.1 and the finiteness on the number of invariant orbits under the action of ζ (Proposition 3.4), to obtain that substitutive subshifts are also coalescent, for aperiodic primitive reduced constant-shape substitutions. Proof. Set φ ∈ End(X ζ , S, Z d ). Theorem 4.1 ensures there exists j ∈ Z d such that S j φ is equal to a sliding block code ψ of fixed radius satisfying S p ψζ = ζψ, for some p ∈ F ζ 1 . Let x ∈ X ζ be in a ζ-invariant orbit, i.e., there exists j ∈ Z d such that ζ(x) = S j x. Note that S p ψζ(x) = S p+j ψx = ζψx, so, if the orbit of x is in a ζ-invariant orbit, then ψx satisfies the same property. By Proposition 3.4, there exist finitely many ζ-invariant orbits, hence for n large enough, we can find x ∈ X ζ with x and ψ n (x) being in the same orbit, i.e., there exists m ∈ Z d such that S m ψ n (x) = x. By minimality of X ζ , ψ n = S -m , hence ψ is invertible, which implies φ is invertible. m Hom M (X ζ 1 , X ζ 2 , S, Z d ) being non empty, whenever ζ 1 , ζ 2 are two aperiodic primitive constant-shape substitutions with the same expansion matrix and support. This condition is similar to the one for odometer systems (Lemma 1.14), although the proof is very different. In this case we use a relation between measurable eigenvalues and homomorphisms (which we will prove in the proof of Lemma 4.10). Then we conclude by the fact that measurable eigenvalues and continuous eigenvalues are the same for substitutive subshifts (Theorem 3.13). In some cases this condition implies the matrix M commutes with the expansion matrix L (as shown for constant-base odometer systems in Theorem 2.2). Then, we prove an analogue of Theorem 4.1 (Theorem 4.12) establishing that measurable homomorphisms induced continuous ones for homomorphisms associated with matrices in the centralizer of some power of the expansion matrix. Finally, we give an explicit bound on the norm of these matrices for the quotient group of a restricted normalizer semigroup with respect to the shift action (Proposition 4.16). Lemma 4.10. Let ζ 1 , ζ 2 be two aperiodic primitive constant-shape substitutions having the same expansion matrix L and same support F . If M ∈ GL(d, Z) is such that m Hom M (X ζ 1 , X ζ 2 , S, Z d ) = ∅, then for all n > 0 there exists m(n) > 0 such that M L m(n) (H(X ζ 1 )) ≤ L n (H(X ζ 2 )). (Normalizer Condition for substitutions) Proof. Let φ be in m Hom M (X ζ 1 , X ζ 2 , S, Z d ), and x ∈ E(X ζ 2 , µ ζ 2 , S, Z d ). We will prove that M * x ∈ E(X ζ 1 , µ ζ 1 , S, Z d ). Indeed, let f ∈ L 2 (X ζ 2 , µ ζ 2 ) be such that for all m ∈ Z d , f • S m = e 2πi x,m • f , µ ζ 2 -a.e. in X ζ 2 . Then, we have that (f • φ) • S m = (f • S M m ) • φ = e 2πi x,M m • f • φ = e 2πi M * x,m • f • φ, µ ζ 1 -a.e. in X ζ 1 . 
By Theorem 3.13 and Proposition 3.16, for any n > 0, the system (Z d /M -1 L n (H(X ζ 2 )), +, Z d ) is a finite factor of the odometer sys- tem ( ← - Z d (L n (H(X ζ 1 ))) , + (L n (H(X ζ 1 ))) , Z d ), which implies the odometer system ( ← - Z d (M -1 L n (H(X ζ 2 ))) , + (M -1 L n (H(X ζ 1 ))) , Z d ) is a factor of ( ← - Z d (L n (H(X ζ 1 ))) , + (L n (H(X ζ 1 ) )) , Z d ). By Lemma 1.13, we conclude that for any n > 0, there exists m(n) > 0 such that L m(n) (H(X ζ 1 )) ≤ M -1 L n (H(X ζ 2 )). A consequence of Proposition 4.7 is that a homomorphism is associated with a matrix with finite order, then is an isomorphism. Proof. Since M has finite order, there exists n > 0 such that M n = id R d . This implies φ n ∈ End(X ζ , S, Z d ). By Proposition 4.7, φ n is invertible, which implies φ is invertible. Now, we will prove an analogue of Theorem 4.1 for homomorphisms associated with matrices commuting with a power of the expansion matrix L. As mentioned before a priori this does not cover all the homomorphisms between substitutive subshifts. Theorem 4.12. Let (X ζ 1 , S, Z d ), (X ζ 2 , S, Z d ) be two substitutive subshifts from two aperiodic primitive constant-shape substitutions ζ 1 , ζ 2 from finite alphabets A and B, with the same support F 1 and expansion matrix L. Let M ∈ GL(d, Z) be a matrix commuting with a power of L, i.e., there exists n > 0 such that M L n = L n M . If (X ζ 2 , S, Z d ) is reduced, then for every measurable homomorphism associated with M , φ : (X ζ 1 , µ ζ 1 , S, Z d ) → (X ζ 2 , µ ζ 2 , S, Z d ), there exists j ∈ Z d such that S j φ is equal µ ζ 1 -a.e. to a homomorphism associated with M ψ ∈ N M (X ζ 1 , X ζ 2 , S, Z d ), satisfying the following two properties: 1. ψ is given by a block map of radius F ζ 1 L -1 ζ 1 (1 + M ) 2 -L -1 ζ / 1 -L -1 ζ . 2. There exist an integer n > 0 and q ∈ F ζ n such that S q ψζ n 1 = ζ n 2 ψ. In Chapter 6 we provide an example where the hypothesis are not satisfied, so we cannot apply Theorem 4.12. Nevertheless, we are able to describe its normalizer semigroup. For any m in Z d , we have that S q ψ(ζ n 1 (S m x)) = ζ n 2 (ψ(S m x)), and S q ψ(ζ n 1 (S m x)) = S q+M L n ζ 1 m ψ(ζ n 1 (x)), ζ n 2 (ψ(S m x)) = S L n ζ 1 M m ζ n 2 (ψ(x)), it follows, M L n ζ 1 m = L n ζ 1 M m, i. e., M and L n ζ 1 commute, hence this hypothesis is optimal to obtain property (2). Note that if L is an integer multiple of the identity, then any matrix M ∈ GL(d, Z) commutes with L. The proof of Theorem 4.12 follows the same strategy as the one of Theorem 4.1, except some small modifications. Since the substitution ζ 1 is primitive, we can replace the substitution by some power ζ n 1 , so we may assume that M commutes with the expansion matrix of ζ 1 . We will replace the term p n (φ) by the map π n (x) -M -1 π n (φx) (mod L n ζ (Z d )), with π n (x) and M -1 π n (φx) being the representative classes in F ζ n . The commutation assumption implies, for any n > 0 the map M defines a bijection in Z d /L n (Z d ), also denoted by M , i.e., n = m (mod L n (Z d )), if and only if M n = M m (mod L n (Z d )). With this, the map p n (φ) is invariant under the shift action. Since (X ζ 1 , µ ζ 1 , S, Z d ) is ergodic, the map p n (φ) ∈ F ζ n is a constant map µ ζ 1 -a.e. in X ζ 1 and the set S M pn(φ) φζ n 1 (X ζ 1 ) is included, up to a µ ζ 2 -null set, in ζ n 2 (X ζ 2 ). We can define the map φ n for µ ζ 1 -a.e. in X ζ 1 as the unique point y ∈ X ζ 2 such that S M pn(φ) φζ n 1 x = ζ n 2 y, where M p n (φ) is the representative element in F ζ n . 
It is straightforward to check that φ n • S n = S M n • φ n for all n ∈ Z d , so φ n is in mN M (X ζ 1 , X ζ 2 , S, Z d ). The sequences p n (φ) and (φ n ) satisfies the same recurrences given in Section 4.1: p n+1 (φ) = p n (φ) + L n ζ p 1 (φ n ), (φ n ) 1 = φ n+1 . As in Theorem 4.1 we need the following adaptations of Lemma 4.5 and Lemma 4.6 for homomorphisms. The proof are the same, so we omit them. L -1 ζ 1 (1 + M ) 2 -L -1 ζ / 1 -L -1 ζ such that d(φ n , ψ n ) → 0. To finish the proof of Theorem 4.12, we proceed exactly as in the proof of Theorem 4.1 Proof of Theorem 4.12. For the fixed alphabets A and B, there exists a finite number of homomorphisms associated with M of radius F ζ 1 L -1 ζ 1 (1 + M ) 2 -L -1 ζ / 1 -L -1 ζ . By Lemma 4.15, there exist two different integers m, k ≥ 0 such that d(φ m , φ m+k ) < η/C(R), so by Lemma 4.14, we have that φ m = φ m+k , µ ζ 1 -a.e.. Let n ≥ m be a multiple of k. We have that ( φ n ) k = φ n+k = (φ m+k ) n-m = (φ m ) n-m = φ n , µ ζ 1 -a.e. This implies for all r ∈ N, φ n is equal to (φ n ) rk , µ ζ 1 -a.e. Since the number of sliding block codes of radius F ζ 1 L -1 ζ 1 (1 + M ) 2 -L -1 ζ / 1 -L -1 ζ is finite, by Lemma 4.15 we have that φ n is equal to a homomorphism associated with M of radius F ζ 1 L -1 ζ 1 (1 + M ) 2 -L -1 ζ / 1 -L -1 ζ , µ ζ 1 -a.e. in X ζ 1 . Note that φ n is equal to φ 2n , µ ζ 1 -a.e. We denote ψ = φ n and p = p n (ψ). By definition of p, we have that S M p ψζ n 1 = ζ n 2 ψ. Set j = M (p n (φ) -p), then S j φζ n 1 = S M (pn(φ)-p) φζ n 1 = S -M p ζ n 2 ψ = ψζ n 1 , µ ζ 1 -a.e, this implies that S j φ and ψ coincides in ζ n 1 (X ζ 1 ) µ ζ 1 -almost everywhere, and by ergodicity in the whole set X ζ 1 µ ζ 1 -almost everywhere. In the case ζ 1 = ζ 2 , we can consider a restricted normalizer group as all the invertible homomorphisms associated with matrices commuting with a power of L ζ 1 N C(X ζ 1 , S, Z d ) = M ∈GL(d,Z) M L n ζ 1 =L n ζ 1 M, for some n (N M (X, T, Z d ) ∩ Homeo(X)), This set is a group under composition and S , Aut(X ζ 1 , S, Z d ) are normal subgroups of N C(X ζ 1 , S, Z d ). Following the same proof as Proposition 4.8, we have the map φ → (j φ , ψ φ ) is unique, so we obtain the same result with respect to this restricted normalizer group. A substitution with expansion matrix equal to L 1 in Example 2.4 is an example where Proposition 4.16 gives a complete characterization for the normalizer group. Chapter 5 Precisions on bijective constant-shape substitutions Bijective substitutions are of great interest because of their mixed dynamic spectrum. They are never extensions almost 1-to-1 of its maximal equicontinuous factor. Bijective substitutions were studied before in [START_REF] Frank | Multidimensional constant-length substitution sequences[END_REF] for block substitutions, where it was proved that the subshift generated by a bijective constant-shape substitution, with a diagonal expansion matrix is measurable-theoretic isomorphic to a skew product of one-dimensional odometers. Also, in [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF] it was studied the normalizer group of bijective block substitutions. We extend the study by describing the normalizer group for general constant-shape substitutions. To do this, we relate the symmetry group with different types of supports of the substitution and non-diagonal expansion matrices. 
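Bijectivity of a constant-shape substitution is a finite check: for every position f ∈ F_ζ^1 the map a ↦ ζ(a)_f must be a permutation of the alphabet. A small sketch, applied to two hypothetical substitutions on the 2×2 support:

```python
# A constant-shape substitution is encoded as  letter -> {position f: letter}.
def is_bijective(zeta):
    """True iff for every position f, the map a -> zeta(a)_f is a bijection of A."""
    letters = set(zeta)
    positions = {f for image in zeta.values() for f in image}
    return all({zeta[a][f] for a in letters} == letters for f in positions)

# Hypothetical examples on the support {(0,0), (1,0), (0,1), (1,1)}:
bij = {0: {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0},
       1: {(0, 0): 1, (1, 0): 0, (0, 1): 0, (1, 1): 1}}
non = {0: {(0, 0): 0, (1, 0): 1, (0, 1): 0, (1, 1): 1},
       1: {(0, 0): 0, (1, 0): 0, (0, 1): 1, (1, 1): 1}}
print(is_bijective(bij), is_bijective(non))   # True False
```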
First, we prove that the automorphism group of substitutive subshifts of bijective substitutions are direct products of the shift action and finite groups, given by a permutation of letters (Proposition 5.1). This is a well known result for one-dimensional and block substitutions ( [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF][START_REF] Lemańczyk | On metric properties of substitutions[END_REF][START_REF] Bustos | Extended symmetry groups of multidimensional subshifts with hierarchical structure[END_REF][START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]). In the rest of the chapter, under some geometrical conditions (called polytope substitutions), we prove that the normalizer group is virtually generated by the shift action. We then describe the symmetry group of these substitutive subshifts (Proposition 5.15 and Theorem 5.17). The strategy is to use the fact that nondeterministic directions, viewed as asymptotic pairs in the one-dimensional context in [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF], are preserved by the symmetry group (Proposition 1.12). We determine the nondeterministic directions of these substitutive subshifts thanks to the supporting hyperplanes to conv(F ζ n ) (Theorem 5.2). As we see in Section 5.3, the polytope assumption imply strong constraints on such supporting hyperplanes. Conversely, we provides a checkable combinatorial condition to ensure a half-space to be nonexpansive for (X ζ , S, Z d ). (Corollary 5.13). This is the first characterization of nondeterministic directions for a family of minimal systems. = n-1 i=0 L i (p). Set m ∈ Z d such that (p n + (F n ) •C ) ∩ (L n (m) + F n ) = ∅. Since S pn ψζ n 1 = ζ n 2 ψ and ζ 2 is bijective, we have that x 0 determines ψ(x) m , which implies S -m ψ is a factor map of radius 0 (or a letter-to-letter factor map). Set φ = S -m ψ, we have that S pn+m-L n (m) φζ n 1 = ζ n 2 φ, and by bijectivity the coordinate x 0 determined two coordinates of ψ, unless for any n ∈ N large enough p n + m is in L n ζ (Z d ), i.e., for any n large enough there exists r n ∈ Z d such that p n + m = L n (r n ). Note that p + r n = L(r n+1 ), which implies r n+1 ≤ L -1 ( r n + p ), so (r n ) is a bounded sequence. Hence there exist n > 0 and N > 1 such that r n+N = r n , which implies p N -1 ∈ (L N -id)(Z d ) which is not possible by Remark 4.2. If p n + m / ∈ L n ζ (Z d ), then x 0 determines two coordinates n 1 , n 2 of ψ(x) and then the coordinates 0 and n 2 -n 1 of ψ 1 = S -n 1 ψ, since ψ 1 is also induced via a 0-block map. Note that the map Ψ 1 : A → B inducing ψ 1 is bijective, if not two fixed points x, y with Ψ 1 (x 0 ) = Ψ 2 (y 0 ) and x 0 = y 0 generate two points with the same image, which is a contradiction. It follows that x 0 determines x n 2 -n 1 , and then x k(n 2 -n 1 ) for all k ∈ Z, so x has a nontrivial period, which is a contradiction. Finally, we conclude by Corollary 4.9. Nondeterministic directions of substitutive subshifts of bijective on extremities constant-shape substitutions In this section we give a characterization of the nondeterministic directions (defined in Section 1.5) of a substitutive subshift (X ζ , S, Z d ) in the case ζ is a bijective on extremities constant-shape substitution. A starting remark is that for each n > 0, the set of directions S d-1 is stratified by the opposite normal fan N (conv(F ζ n )) (see Section 1.1.2). 
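In the plane this stratification can be computed from the support alone: one takes the convex hull of F_ζ^n and records the outward unit normals of its edges (up to sign, these are the directions singled out by the normal fan). A sketch, run on a hypothetical support and its second iterate:

```python
import numpy as np

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull2d(points):
    """Andrew's monotone chain: vertices of conv(points), counterclockwise."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def edge_normals(points):
    """Unit outward normals of the edges of conv(points)."""
    V = hull2d(points)
    out = []
    for i in range(len(V)):
        ex = V[(i + 1) % len(V)][0] - V[i][0]
        ey = V[(i + 1) % len(V)][1] - V[i][1]
        n = np.array([ey, -ex], float)         # outward normal for a ccw polygon
        out.append(n / np.linalg.norm(n))
    return out

# Hypothetical support and its second iterate under L = 2*Id:
F1 = [(0, 0), (1, 0), (0, 1), (1, 1)]
F2 = [(2 * a + c, 2 * b + d) for (a, b) in F1 for (c, d) in F1]
print(edge_normals(F1))
print(edge_normals(F2))   # same four normals: for this data the fan is stable in n
```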
Our description of the nondeterministic directions is given in terms of union of these fans. We say a constant-shape substitution ζ is bijective on extremities if the restriction p f of ζ in f is bijective for all f ∈ Ext(conv(F ζ 1 )). Since Ext(conv(A + B)) ⊆ Ext(conv(A)) + Ext(conv(B)) ), a substitution is bijective on extremities if and only if for any n > 0 and f ∈ Ext(conv(F ζ n ) the restriction p f of ζ n in f is bijective. Since (F ζ n ) n>0 is a Følner sequence, there exists n > 0 such that conv(F ζ n ) is a nondegenerate polytope, so up to considering a power of ζ, we may assume that conv(F ζ 1 ) is a nondegenerate polytope. Using the recognizability property of substitutions and some basic results in convex geometry we prove the following result. Theorem 5.2. Let ζ be an aperiodic primitive constant-shape substitution which is bijective on extremities. Then, the set of nondeterministic directions ND(X ζ , S, Z d ) of the substitutive subshift is the intersection of S d-1 with a nonempty union of limits of opposite normal cones of the form NGn (conv(F ζ n )), with G n a face of conv(F ζ n ), for some integer n > 0. This theorem gives topological constraints on the set of nondeterministic directions. Actually, we will see when L ζ = λ id R d , that the convex hull of any digit tile is a polytope, i.e. it has a finite number of extreme points (Theorem 5.6). In this case by Theorem 5.2, the set of nondeterministic directions ND(X ζ , S, Z d ) is a finite union of closed balls (eventually degenerated). More explicitly, in the two-dimensional case we have the following corollary, showing in particular, it cannot be a Cantor set. This in contrast with the result proved by M. Boyle and D. Lind in [START_REF] Boyle | Expansive subdynamics[END_REF], where they proved any compact set with cardinality at least 2 can be realized as the set of nondeterministic directions of some two-dimensional subshift. Corollary 5.3. In the two dimensional case, under the hypothesis of Theorem 5.2, either the set ND(X ζ , S, Z 2 ) has nonempty interior, either it has at most 2 accumulation points. Proof. Assume that the set of nondeterministic directions ND(X ζ , S, Z 2 ) has empty interior. By Theorem 5.2, the elements of ND(X ζ , S, Z 2 ) are limits of normal vectors to edges of conv(F ζ n ) for some n > 0. In [START_REF] Strichartz | Geometry of self-affine tiles[END_REF] it was proved such vectors are normalized vectors of the form (L * ζ ) -k u k , with u k ∈ S 1 a normal vector to an edge of conv(F ζ 1 ) for some k > 0. Hence, their accumulation points are accumulation points of orbits of the projective action L * ζ on the circle S 1 . A standard analysis of this action provides the cardinality of in the boundary of conv(F ζ n -π n (x 1 ) + L n ζ (z n (t))). Since this last set is a translated of conv(F ζ n -π n (x 1 )) and are both subsets of conv(D), a basic geometrical argument ensures for any n > 0, h n (t) and π n (x 1 ) are in the same face of conv(F ζ n ). The same arguments imply that h n (t 1 ), h n (t 2 ) are in the same face for any t 1 , t 2 ∈ F 1 ∩ D. Furthermore, h n (t) and π n (x 1 ) are different for large enough n ∈ E. Indeed, assume the converse, taking R > t , we have that x 1 | t = ζ n (w i ) L n ζ (k 3 )+πn(x 1 ) , for some k 3 ∈ K ζ for infinitely many n ∈ E. Since t = L n ζ (k 1 -k 3 ) , (conv(D)) = n∈E NHn (conv(F n ) ζ ). Suppose now that conv(D) is not closed. Set F 1 be a face of conv(D) of codimension 1 containing 0 and w ∈ NF 1 (conv(D)). 
We will find a sequence of faces H n of conv(F ζ n ) such that NF 1 (conv(D)) = n>0 NHn (conv(F ζ n )). f m i = h(m, i, n) -π n (x 1 ) + L n ζ (z(m, i, n)), with h(m, i, n) ∈ F ζ n and z(m, i, n) ∈ Z d . Since w, h(m, i, n) -π n (x 1 ) ≥ 0 and w, L n ζ (z(m, i, n)) ≥ 0, we have that for all n > 0 w, h(m, i, n) -π n (x 1 ) ----→ m→∞ 0 ∧ w, L n ζ (z(m, i, n)) ----→ m→∞ 0. Since F ζ n is finite, we conclude that for all n ∈ E, there exists m(n) such that for all m ≥ m(n), w, h(m, i, n) = w, π n (x 1 ) = 0. (5. 2) The same argument as the former one when conv(D) is closed, gives h(m, i, n) = π n (x 1 ) and if i = j, h(m, i, n) = h(m, j, n) for all n large enough. Now, for any n > 0, we define H n as the face of conv(F ζ n ) of smallest dimension containing π n (x 1 ) and h(m, i, n ) : t ∈ F 1 , 0 ≤ i ≤ d, m ≥ m(n) with lim inf m→∞ t m i > 0 . In particu- lar, (5.2) shows that NF 1 (conv(D)) ⊆ n>0 NHn (conv(F ζ n )). We claim n>0 NHn (conv(F ζ n )) = NF 1 (conv(D)) . First, note that taking subsequences if its necessary, we get that for all n ∈ E the following limits lim m→∞ d i=1 t m i h(m, i, n) = h n (t) ∧ lim m→∞ d i=1 t m i z(m, i, n) = z n (t). Hence h n (t) ∈ H n for all n ∈ E and z n (t) ∈ conv(K ζ -K ζ ) for all n ∈ E large enough. A geometric argument shows that there exists t ∈ F 1 and ε > 0 small enough such that for all t ∈ F 1 , we have that z n (t ) = z n (t), so H n is a face of codimension 1 for all n ∈ E large enough. We then conclude that Then, to determine the nondeterministic directions for (X ζ , S, Z d ) we will study the supporting hyperplanes to conv(F ζ n ). To do this we will focus on the convex hull of the digit tile of the substitution. In general, this convex hull is not a polytope, i.e., can have at least a countable number of extreme points, even if the expansion matrix is diagonal, as we see in Example 5.4: A direct computation shows that for any n > 0, the extreme points of conv(F n ) is the set {(0, 0), (0, 3 n -1), ( 2 n -1, 3 n -1)} ∪ {(2 n -2 k , 3 k -3 n ) : 0 ≤ k ≤ n -1}, which implies Ext(conv(T (L, F 1 ))) = {(0, 0), (0, 1), (1, 1), (1, -1)} ∪ {(1 -2 -k , -1 + 3 -k ) : k ≥ 0}. The polytope case In this section, we will focus in the case when the convex hull of the digit tile is a polytope. We will present some known results about the convex hull of the digit tile that we will use in the rest of this thesis. Definition 5.5. We say a substitution ζ is a polytope substitution if it is bijective on extremities, and the convex hull of the digit tile T ζ = T (L ζ , F ζ 1 ) is a polytope. From now on, we will only consider polytope substitutions. As we will see, this geometrical hypothesis implies several algebraic restrictions on the expansion matrix L ζ (Proposition 5.10) and some dynamical consequences for (X ζ , S, Z d ) (Theorem 5.17). We recall here some results characterizing the polytope case in terms of the extreme points of conv(F ζ n ) [START_REF] Kirat | Remarks on self-affine fractals with polytope convex hulls[END_REF], and the inward unit normal vectors of the (d-1)-dimensional faces of conv(T ζ ) [START_REF] Strichartz | Geometry of self-affine tiles[END_REF]. A big family for the polytope case is when a power of L is an integer multiple of the identity, because for any fundamental domain F of L, the convex hull of the digit tile generated by L and F is a polytope. In particular, all the convex hull of the digit tiles of the examples in Fig. 1.4 are polytopes. Although it is not the only case where Theorem 5.6 can be applied. 
(2) As an example where the statement (3) in Theorem 5.6 to be applied is not necessarily satisfied in n = 1, consider L = -2 0 0 -2 and F 1 = {(0, 0), (1, 0), (0, 1), (-1, -1)}. We have that F 2 = {(-1, -3), (0, -2), ( 1 , -2), (-3, -1), (-1, -1), (0, -1), (-2, 0), (-1, 0), (0, 0), (1, 0), (-2, 1), (0, 1), (1, 1), (2, 2), (3, 2), (2, 3)}, so conv(F 2 ) has 3 extreme points, while conv(F 1 ) has 6 extreme points as shown in In [START_REF] Kirat | Remarks on self-affine fractals with polytope convex hulls[END_REF] it was proved the following result about the extreme points of conv(F m ) for any m > n, where n is such that | Ext(conv(F n ))| = | Ext(conv(F n+1 ))| and conv(T (L, F 1 )). Proposition 5.9. [77, Theorem 4.8] If | Ext(conv(F n ))| = | Ext(conv(F n+1 ))|, then all the extreme points of conv(T (L, F 1 )) are of the form j>0 L -(n+1)j n i=0 L i (f i ) , with n i=0 L i (f i ) being an extreme point of conv(F n+1 ). This implies conv(T (L, F 1 )) is equal to (L m -id) -1 conv(F m ) for all m > n. Now, assume that we are under the condition | Ext(conv(F 1 ))| = | Ext(conv(T (L, F 1 ))| and for all n > 0, conv(T (L, F 1 )) = (L n -id) -1 conv(F n ). Let u be an inward unit normal vector of a (d-1)-dimensional face of conv(T (L, F 1 )). For each n > 0, ((L n -id) * ) -1 u is an inward normal vector of a (d -1)-dimensional face of F n . By Theorem 5.6 (1), there exists k > 0 such that ((L -id) * ) -1 u are eigenvectors of (L * ) k . Hence by commutation, u is an eigenvector of (L * ) k . Since conv(T (L, F 1 )) is a polytope, we can take n > 0 large enough such that any of the inward unit normal vectors of conv(F 1 ) is an eigenvector of the same power (L * ) n . Hence, by the same arguments, up to considering a power of L, we may assume that any of the inward unit normal vectors of the (d -1)-dimensional faces of conv(F 1 ) are eigenvector of L * . This is equivalent to the hyperplane ∂H[u] = {t ∈ R d : t, u = 0} (the vector space of an affine hull of a face of conv(F 1 )) generated by u being preserved by L, i.e., L∂H[u] = ∂H[u]. This implies the normal fan N (conv(F n )) is the same for all n > 0 and it is equal to the one of conv(T (L, F 1 )). Since for some n > 0, conv(F n ) is nondegenerate (by the Følner condition), it has d linearly independent inward normal vectors (that has integer coordinates with no common divisor) which are eigenvectors of L * . The polytope condition implies then the following algebraic restrictions on the expansion matrix L. The proof is left to the reader. Moreover, if u 1 , u 2 , u 3 are linearly dependent inward unit normal vectors of (d -1)dimensional faces of conv(T (L, F 1 )), then L restricted to the vector space generated by these vectors acts as a integer multiple of the identity. In particular, in the two-dimensional case if the digit tile has 3 or at least 5 edges, then it follows that the expansion matrix is an integer multiple of the identity. Finally, up to taking an appropriate power of a substitution, we may assume the following hypothesis (PC 1) The expansion matrix L is diagonalizable with positive eigenvalues. (PC 4) The set K given by Proposition 1.18 is equal to (id -L) -1 (F 1 ) ∩ Z d , i.e., for any k ∈ K, there exists f ∈ F 1 such that k = L ζ (k) + f . Dynamical properties of substitutive subshifts of polytope substitutions In this section, we prove the normalizer group of substitutive subshifts given by polytope substitutions is virtually generated by the shift action. 
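Before doing so, note that the set K of condition (PC 4) above, which also enters the combinatorial conditions used below, can be computed directly from L and F_1 by solving k = L(k) + f over Z^d for each f ∈ F_1. A short sketch on illustrative data:

```python
import numpy as np

def K_set(L, F1):
    """(id - L)^{-1}(F_1) ∩ Z^d : the k in Z^d with k = L k + f for some f in F_1."""
    L = np.asarray(L, float)
    K = []
    for f in F1:
        k = np.linalg.solve(np.eye(len(L)) - L, np.asarray(f, float))
        if np.allclose(k, np.round(k)):
            K.append(tuple(int(round(c)) for c in k))
    return sorted(set(K))

# Illustrative data: L = 2*Id with the square fundamental domain.
print(K_set([[2, 0], [0, 2]], [(0, 0), (1, 0), (0, 1), (1, 1)]))
# -> [(-1, -1), (-1, 0), (0, -1), (0, 0)]
```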
We then describe the symmetry group of these substitutive subshifts (Proposition 5.15 and Theorem 5.17). To do this, we determine the nondeterministic directions of these substitutive subshifts thanks to the supporting hyperplanes to conv(F ζ n ) (Theorem 5.2). Conversely, we provides a checkable combinatorial condition to ensure a vector to be nondeterministic for (X ζ , S, Z d ). (Corollary 5.13). As we saw in the previous section, under the hypothesis (PC 1), (PC 2), (PC 3), and (PC 4), the normal fan is the same for all the supports of ζ n (for any n > 0 large enough) and the digit tile, so we have that the following interpretation of Theorem 5.2 in the polytope case. Hence, in the two-dimensional case the former corollary implies strong restrictions on the set of nondeterministic directions. For instance, the number of its connected components is bounded by the number of edges of conv(T ζ ). Now, as shown in the proof of Theorem 5.2, to establish which opposite normal vectors of conv(F ζ 1 ) appeared as nondeterministic directions for (X ζ , S, Z d ) we will study the convex sets conv(L n ζ (k) + F ζ n ) generated by the points k in K ζ , which depend on the combinatorics of the substitution. We say a subset W ⊆ K ζ is a set of differences if there exist two patterns w 1 , w 2 ∈ L K ζ (X ζ ) such that w 1 (k) is equal to w 2 (k) if and only if k is in K ζ \ W . The next lemma gives a sufficient condition to ensure a vector v to be a nondeterministic for (X ζ , S, Z d ), seen as the converse of Theorem 5.2 in the polytope case. As in Lemma 3.9, consider a set C Z d such that for all n > 0, C +F ζ n +F ζ n ⊆ L n ζ (C)+F ζ n and K ζ = K ζ +C. Lemma 5.12. Let W ⊆ K ζ be a set of differences, k ∈ W , n > 0, a point f ∈ ∂ conv(L n ζ (k) + F ζ n ) and v ∈ S d-1 be such that f + ∂H v is supporting to conv(L n ζ (k) + F ζ n ) at f . Suppose that f satisfies the following conditions: H1. f + K ζ ⊆ L n ζ (K ζ ) + F ζ n , H2. (f + K ζ ) ∩ (f + H v ) ⊆ L n ζ (K ζ \ W ) + F ζ n . Then v is nondeterministic for (X ζ , S, Z d ). Proof. Let w 1 , w 2 be two patterns such that w 1 (k ) = w 2 (k ) if and only if k ∈ K ζ \W . Note that Condition H1 is equivalent to for all m > 0, L m ζ (f ) + L m ζ (K ζ ) + F ζ m ⊆ L n+m ζ (K ζ ) + F ζ n+m . Since K ζ = K ζ + C, Remark 1.21 (2) implies for all m > 0, L m ζ (f ) + F ζ m + (L m ζ (K ζ ) + F ζ m ) ⊆ L n+m ζ (K ζ ) + F ζ n+m . (5.3) If f is an extreme point of conv(L n ζ (k) + F ζ n ), there exists g ∈ Ext(conv(F ζ 1 )) such that f = L n ζ (k) + n-1 i=0 L i ζ (g). If f is in the relative interior of a k-dimensional face of conv(L n ζ (k) + F ζ n ), (1 ≤ k ≤ d -1), we consider g ∈ Ext(conv(F ζ 1 )) such that L n ζ (k) + n-1 i=0 L i ζ (g) and f are in the same k-dimensional face of conv(L n ζ (k) + F ζ n ) as shown in Fig. 5.4. f L n ζ (k) + n-1 i=0 L i ζ (g) H v Figure 5.4: The hyperplane ∂H v supporting to conv(L n ζ (k) + F ζ n ) at f and L n ζ (k) + n-1 i=0 L i ζ (g). Now, Condition H2 is equivalent to for all m > 0 L m ζ ((f + K ζ ) ∩ (f + H v )) + F m ⊆ L n+m ζ (K ζ \ W ) + F ζ n+m . (5.4) We will prove that for all m > 0 (L m ζ (f )+ m-1 i=0 L i ζ (g)+L m ζ (K ζ )+F ζ m )∩(L m ζ (f )+ m-1 i=0 L i ζ (g)+H v ) ⊆ (L m ζ (f )+K ζ )∩(f +H v )+F m . (5.5) We have that v, g = min matrix M in the symmetry group of (X ζ , S, Z d ) permutes those hyperplanes defined by these (d -1)-dimensional faces of conv(T ζ ). By condition (PC 3), the normal vectors of these hyperplanes are invariant by a power of the expansion matrix L * ζ . 
Hence M * permutes n eigenspaces {Qv 1 , . . . , Qv n } of some power of L * ζ . Moreover, we can assume that these vectors have integer coordinates not having common divisors except ±1. Note that each vector is unique up to a sign and does not depend on M . So, (M n! ) * leaves invariant these eigenspaces, i.e., (M n! ) * v i = α i v i . Since the vectors v i have integer coordinates with no common divisor, we have that α i ∈ Z, for all 1 ≤ i ≤ n. The same being true for (M -1 ) * , we get that each α i is invertible in Z, which implies |α i | = 1 so M 2n! is equal to the identity matrix, and then M has a finite order. By Proposition 4.11 any homomorphism of (X ζ , S, Z d ) is invertible. Furthermore, for any v i , we have that M * v i = λ i v j , for some 1 ≤ i, j ≤ n and λ i ∈ Q. Since M ∈ GL(d, Z) and the coordinates of v j does not have common divisors, we have that λ i ∈ Z. Moreover, note that (M 2n! ) * v i = λ i • λ i 1 • • • λ i 2n!-1 v i = v i for some λ i 1 . . . λ i 2n!-1 ∈ Z, hence |λ i | = 1 for all 1 ≤ i ≤ n. Up to a change of indices, we can assume that {v 1 , . . . , v d } is a R d -basis, so M * has the form M * = P Q M P -1 , where Q M is the matrix with columns equal to the coordinates of M v i (which for all 1 ≤ i ≤ d is equal to some v j or -v j ) in the basis {v 1 , . . . , v d }. We then conclude that the symmetry group N (X ζ , S, Z d ) is finite and by Minkowski's theorem we have that N (X ζ , S, Z d ) ≤ GL(d, Z/3Z). Finally, note that M ≤ P • Q M • P -1 , where P , P -1 and sup M ∈ N (X ζ ,S,Z d ) Q M < ∞ only depend on the convex hull of the digit tile T ζ . Remark 5. [START_REF] Bezuglyi | Finite rank Bratteli diagrams: structure of invariant measures[END_REF]. In particular, it follows from the proof that if n = d each matrix Q M is a permutation matrix, so N (X ζ , S, Z d ) is conjugate to a subgroup of the hyperoctaedral group W d . These recover results in [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF] for block substitutions. By the realization result [START_REF] Bustos | Admissible reversing and extended symmetries for bijective substitutions[END_REF]Theorem 35] these results obtained are optimal. The following theorem summarizes all the properties satisfied for aperiodic primitive reduced polytope substitutions. Until now, we didn't find an aperiodic d-dimensional primitive constant-shape substitution with less than d linearly independent nondeterministic directions. In the case of aperiodic primitive block substitutions, it can be easily proved that this hypothesis is true. Moreover, given the result in [START_REF] Guillon | Determinism in subshifts[END_REF] for the two-dimensional case, by Theorem 5.2, the hypothesis is true for all of the cases where conv(T ζ ) does not have two parallel edges. In a private communication, P. Guillon [62] mentioned this result is already proved for higher dimensions, but nowhere published. This implies, we only have to deal in the case conv(T ζ ) has two parallel (d -1)-dimensional faces. Theorem 5.17 deal only with polytope substitutions, and therefore leaves open how to characterize the normalizer semigroup for nonpolytope substitutions, i.e., where the convex hull of the digit tile is nonpolytope. In [START_REF] Strichartz | Geometry of self-affine tiles[END_REF], the authors gives a description of the convex hull of the digit tile on the nonpolytope case. 
This description may be useful to obtain a characterization for the normalizer semigroup and symmetry semigroup. Nevertheless, until now there are no good descriptions of the convex hull of the digit tile for higher dimensions. The work in this thesis corresponds to the first examples of realization results about the set of nondeterministic directions for minimal actions. In [START_REF] Boyle | Expansive subdynamics[END_REF] it was proved that for any compact set of S 1 that is not a singleton containing one line with irrational slope can be realized as the set of nonexpansive directions of a Z 2 -action, and the singleton case was after proved by M. Hochmann in [START_REF] Hochman | Non-expansive directions for Z 2 actions[END_REF]. If aperiodic bijective on extremities primitive constantshape substitutions have d linearly independent nondeterministic directions, then we cannot obtain a unique nondeterministic direction with these substitutions, so we will need to use other types of substitutions, or other type of subshifts (such as Toeplitz sequences) to obtain other realization results with minimal subshifts. In Chapter 6 we describe the normalizer group for two examples. The first one is a bijective polytope substitution, so we may apply the results proved in Chapter 4 and Chapter 5. The second example is a non-reduced constant-shape substitution, such that its reduced substitution is trivial, i.e., the subshift generated is conjugate to the one-point system. This implies the techniques developed in Chapter 4 and Chapter 5 does not apply. Nevertheless, we are able to characterize its maximal equicontinuous factor and its normalizer group. Chapter 6 Some examples of constant-shape substitutions As we have seen the normalizer group is very restrictive (Theorem 5.17) under strong combinatorial and geometric conditions on the constant-shape substitutions. In this chapter we characterize the normalizer group for two examples of constant-shape substitutions. The first one is called the table substitution, which is a discretization of the table tiling. Its maximal equicontinuous factor was determined in [START_REF] Robinson | On the table and the chair[END_REF]. Here, we prove that the normalizer group of the table tiling is isomorphic to a direct product of the shift action and the dihedral group D 4 , given by the symmetries of the square (Proposition 6.1), using the techniques developed in Chapter 4 and Chapter 5. The second example is called the half-hex substitution, which is a discretization of the so-called half-hex tiling. This is a non-reduced constant-shape substitution, such that its reduced substitution is trivial, i.e., the subshift generated is conjugate to the onepoint system. Then, one cannot apply the rigidity results developed in Chapter 4 and Chapter 5 cannot be applied. Nevertheless, we characterize its maximal equicontinuous factor. Furthermore, we prove it is a coalescent system, and its symmetry semigroup is a group. By Proposition 1.5 we have that the normalizer semigroup is actually a group. Moreover, we prove that its normalizer group is a semi-direct product of the shift action and GL(2, Z) (Theorem 6.3). This is a first example of a minimal aperiodic subshift with an infinite symmetry group. 
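A computation that recurs in this chapter is deciding whether a finite set of integer matrices generates a finite group, as for the dihedral group D_4 below, or an infinite one, as for the half-hex example. The sketch closes a set of generators under multiplication; the two reflection generators are hypothetical stand-ins and are not the matrices M_1, M_2 of the table substitution.

```python
import numpy as np
from itertools import product

def generated_group(gens, max_size=200):
    """Close a set of integer matrices under multiplication.  If every generator
    has finite order this closure is the generated group; returns None once more
    than max_size elements appear (an "infinite-looking" answer)."""
    gens = [np.asarray(g, dtype=int) for g in gens]
    d = gens[0].shape[0]
    elems = {tuple(np.eye(d, dtype=int).ravel())}
    frontier = list(elems)
    while frontier:
        new = []
        for e, g in product(frontier, gens):
            m = tuple((np.array(e).reshape(d, d) @ g).ravel())
            if m not in elems:
                elems.add(m)
                new.append(m)
        if len(elems) > max_size:
            return None
        frontier = new
    return elems

# Hypothetical reflections: they generate a dihedral group of order 8 (D_4).
M1, M2 = [[0, 1], [1, 0]], [[1, 0], [0, -1]]
print(len(generated_group([M1, M2])))          # 8
# A shear of infinite order, in contrast, never closes up:
print(generated_group([[[1, 1], [0, 1]]]))     # None
```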
The table substitution The table tiling is a well known example of a rep-tile tiling (see [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF]Section 4.9] for more properties about this tiling), i.e., it is a polygon that can be tiled by a finite number of smaller, congruent copies of itself. The tile-substitution of the table tiling is shown in Fig. 6.1, 3 → 1. This permutation is given by reflecting the rectangles defining the table tiling by the x-axis. We now define the map ψ : X t → ψ(X t ) as ψ(x) n = Ψ(x M -1 2 n ) for all x ∈ X t , n ∈ Z d . A direct computation shows that S (-1,0) φ permutes the fixed points of ζ 2 t . Hence, by minimality of (X t , S, Z d ), we have that ψ(X t ) = X t , so ψ is a homomorphism associated with the matrix M 2 of (X t , S, Z d ). We also have that S (0,3) ψζ 2 t = ζ 2 t ψ. So ψ is the homomorphism associated with M 2 given by Theorem 4.12. Since D 4 = M 1 , M 2 and the homomorphisms are induced by letter-to-letter maps, the composition of arbitrary maps in {φ, ψ} is a homomorphism induced by a letter-to-letter map. This implies M 1 , M 2 ∼ = φ, ψ . To conclude, this last statement let us define a right split on the exact sequence (1.4), i.e., a group homomorphism from D 4 to N (X t , S, Z 2 ). By this, we conclude that N (X t , S, Z 2 ) = S {φ, ψ} ∼ = Z 2 D 4 . The half-hex substitution Another well known inflation rule is the half-hex inflation shown in Fig. 6.3 (see [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF]Section 6.4] for more properties about this tiling). In tiling vocabulary, it is a edge-to-edge inflation, which means that each inflated tile is precisely dissected into copies of the tiles so the vertices of any tile intersect only the vertices of the adjacent tiles. This inflation defines an aperiodic tiling of the plane as proved in [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF]. The following shows a pattern of the half-hex tiling: Since the largest edge of any half-hex can only meet the largest edge of the adjacents half-hexes, two half-hexes always join to form a regular hexagon, through the largest edge. With this procedure, the half-hex tiling can be decomposed in three types of hexagons which are distinguished by a single diagonal line (see [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF]). Using this full hexagons, we can define a pseudo inflation (using the vocabulary on [START_REF] Baake | Encyclopedia of Mathematics and its Applications[END_REF]), which is conjugated to the half-hex tiling as the following. From this pseudo inflation, we consider an inflation with only the four shaded hexagons in Fig. 6.6. On this tiling there is an invariant lattice (by translation of hexagons) Λ = Since { ←g ∈ ← -Z 2 (2 n Z×2 n Z) : |π -1 hh ({ ←g })| = 3} is the orbit O( ← -0 , Z 2 ), we have that πhh (φ) is in O( ← -0 , Z 2 ). Moreover, for any n ∈ Z 2 , πhh (S n ) is equal to [n (mod2 n Z × 2 n Z)] which is in O( ← -0 , Z 2 ). By injectivity of πhh we conclude that End(X hh , S, Z 2 ) = S . In particular any endomorphism of (X hh , S, Z 2 ) is an automorphism. Hence (X hh , S, Z 2 ) is a coalescent system. Now, for homomorphisms. Let τ : Z 2 /2Z × 2Z \ {(0, 0)} → A given by above equations. It is important to note that τ is a bijection. Proof of the Claim. Since the sign does not change the parity of the coordinates (m, n), the statement is true for k = 1 and k = -1. We only need to prove the claim for k > 0. 
Suppose that for any 1 ≤ k < k the statement is true. We separate the proof in two cases: 1. If k is even, we write k = 2j. Then x(km, kn) = x((2j)m, (2j)n) = ζ hh (x)((2jm), 2jn) = ζ hh (x(jm, jn)) (0,0) = x(jm, jn), where the last equality is by the definition of the substitution. We conclude that x(km, kn) = x(m, n). Question. 1. Is there a recognizability property satisfied by periodic infinite symbolic factors of a constant-shape substitution? B. Solomyak [START_REF] Solomyak | Nonperiodicity implies unique composition for self-similar translationally finite tilings[END_REF] already proved this result for primitive constant-shape substitutions. An idea would be to see if we can extend this proof for symbolic factors of substitutive subshifts. The answer to this question will help to understand multidimensional substitutive subshifts and their topological Cantor factors. 2. Do non-primitive constant-shape substitutions producing an aperiodic subshift satisfy a recognizability property? In the one-dimensional case this property is true [START_REF] Bezuglyi | Aperiodic substitution systems and their Bratteli diagrams[END_REF]. 3. In the non-minimal case, are the constant-shape substitutions recognizable for aperiodic points as proved in [START_REF] Berthé | Recognizability for sequences of morphisms[END_REF] for the one-dimensional case? To a Cobham theorem for constant-shape substitutions An open question of this thesis is how different two substitutions producing the same subshift but with different expanding matrix and/or different fundamental domains can be. For example, if a substitution is defined with a diagonal matrix, but with a triangle support, is there any square substitution (diagonal matrix and the standard square support) producing the same shift? In [START_REF] Fagnot | Sur les facteurs des mots automatiques[END_REF] it was proved that if ζ 1 , ζ 2 are two aperiodic primitive constant-length substitutions, and (X ζ 2 , S, Z) is a symbolic factor of (X ζ 1 , S, Z), then their lengths have a common power (greater than 1). This is still true for multidimensional substitutions with expansion matrix equal to a multiple of the identity and the standard d-dimensional cubic support [START_REF] Durand | Cobham-Semenov theorem and N d -subshifts[END_REF], but there is no generalized version for all constant-shape substitutions. Note that by Theorem 3.22, if ζ 1 , ζ 2 are two aperiodic primitive constant-shape substitutions such that (X ζ 2 , S, Z d ) is a symbolic factor of (X ζ 1 , S, Z d ), then there is another constant-shape substitution ζ 3 with the same expansion matrix and support of ζ 1 such that (X ζ 2 , S, Z d ) and (X ζ 3 , S, Z d ) are conjugate. However, this does not imply combinatorial conditions for ζ 2 . See [START_REF] Durand | Decidability of the isomorphism and the factorization between minimal substitution subshifts[END_REF]Section 8.4] for an example of a non constant-length substitution which is conjugate to a constant-length one. Moreover, it is known that constant-length substitutions are strongly related to automatic sequences. Cobham showed [START_REF] Cobham | Uniform tag sequences[END_REF] that automatic sequences are exactly letter-to-letter projection of fixed points of constant-length substitutions. 
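Cobham's characterization is easy to illustrate on the Thue–Morse sequence, which is both the fixed point of the constant-length substitution 0 → 01, 1 → 10 and the 2-automatic sequence recording the parity of the sum of binary digits of n. The sketch below checks that the two descriptions agree on a prefix.

```python
def tm_substitution(n_iter=6):
    """Prefix of the Thue-Morse sequence as a fixed point of 0 -> 01, 1 -> 10."""
    word = [0]
    for _ in range(n_iter):
        word = [b for a in word for b in ((0, 1) if a == 0 else (1, 0))]
    return word

def tm_automatic(n):
    """2-automatic description: parity of the number of 1s in the binary expansion."""
    return bin(n).count("1") % 2

word = tm_substitution()
print(all(word[n] == tm_automatic(n) for n in range(len(word))))   # True
```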
Automatic sequences are generated by finite automatas, which are one of the most basic models of computation and they have a large number of interesting connections with number theory, such as transcendence theory in positive characteristic (see for example [3]), and expansion in integer bases [1]. Question. 1. Is there a version of Cobham's theorem for constant-shape substitutions, regarding the shape of their supports? Since in the one-dimensional case, substitutions are defined using only intervals as supports of them, there is no known version considering also the geometry of the supports of substitutions. 2. Are the letter-to-letter codings of all constant-shape substitutions multidimensional automatic sequences? In particular, are all constant-shape substitutions generated by a DFA? Topological factors of constant-shape substitutions In the one-dimensional case, substitutions (and its topological factors on Cantor sets) are included in a broader class of systems called finite rank systems. In [START_REF] Downarowicz | Finite-rank Bratteli-Vershik diagrams are expansive[END_REF] it was proved that finite rank systems are either expansive or equicontinuous. This classification result is no longer true in the multidimensional framework (Example 4.3 is an example of a constantshape substitution with a Cantor factor which neither expansive neither equicontinuous). In [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF] it was proved that expansive Cantor factors of substitutions are conjugate to substitutions, and equicontinuous Cantor factors are conjugate to an odometer. Also, it is shown in [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF] that aperiodic substitutions have a finite number of aperiodic Cantor factors up to conjugacy. In this thesis, it was proved that aperiodic symbolic factors of aperiodic primitive constant-shape substitutions are conjugate to aperiodic primitive constant-shape substitutions Theorem 3.22, extending the result proved in [START_REF] Müllner | Automorphisms of automatic shifts[END_REF]. It can be proved using Theorem 4.1 and Theorem 3.22 that if we only consider reduced substitutions, for a substitutive subshift which is conjugate to an aperiodic primitive reduced substitution, there is a finite number of aperiodic symbolic factors, which are conjugate to a substitutive subshift given by an aperiodic primitive reduced substitution. But this does not cover all of the cases. Moreover, it is known the result is true for linearly repetitive primitive constant-shape substitutions as proved in [START_REF] Durand | Linearly recurrent subshifts have a finite number of non-periodic subshift factors[END_REF]. Question. 1. Substitutive subshifts have a finite number of aperiodic symbolic factors? 2. Does there is a classification theorem for topological factors of constant-shape substitutions, as the one proved in [START_REF] Durand | Substitutional dynamical systems, Bratteli diagrams and dimension groups[END_REF] for one-dimensional substitutions? 3. Are all substitutive subshifts coalescent? 4. Is the automorphism group of any substitutive subshift virtually generated by the shift action? 5. Is there a substitutive subshift with a nontrivial topological factor with a connected phase space? Until now, even in the one-dimensional case this question is open. 
Decidability problems on constant-shape substitutions The decidability of a problem corresponds to the existence of an algorithm to give a positive (or negative) answer to the problem. Since substitutions are defined by finite objects, it is natural to ask about the decidability of some properties about them. On the study of homomorphisms between Z d -topological dynamical systems We study in this thesis homomorphisms between Z d -symbolic dynamical systems generated by constant-shape substitutions. This notion extends the classical dynamical one of morphism like factor and conjugacy. Isomorphisms are conjugacies of Z d -actions, up to GL(d, Z)-transformations. We show the class of substitutive subshifts is stable under aperiodic symbolic factors. We prove any measurable factor induces a continuous one. We also get strong restrictions on the homomorphisms of a generic family of substitutive subshifts: they are invertible, the normalizer group is virtually generated by the shift action and its quotient by the automorphisms is limited by the digit tile of the substitution. We prove this by describing their set of nondeterministic directions. Finally, we show the optimality of the hypotheses by exhibiting an example of a minimal subshift with an infinite symmetry group. Keywords: Symbolic dynamics, substitutive subshift, homomorphism, automorphism group, normalizer group, symmetry group, digit tile, nonexpansive half-space. Sur l'étude des homomorphismes entre Z d -systèmes dynamiques topologiques Nous étudions les homomorphismes entre des Z d -systèmes symboliques engendré par des substitutions de forme constante. Cette notion étend les concepts de facteur et conjugaison. Les isomorphismes sont alors des conjugaisons de Z d -actions, à une transformation de GL(d, Z) près. Nous montrons que la classe de sous-shifts substitutifs est stable par les facteur symbolique aperiodique. Nous prouvons que tout facteur mesuré induit un facteur continu. Nous obtenons également des restrictions fortes des homomorphismes d'une famille générique des sous-shifts substitutifs: ils son inversibles, le normalisateur est virtuellement engendré par l'action du shift et son quotient par des automorphismes est limité par le digit tile de la substitution. Nous prouvons ceci en décrivant leurs ensembles des directions non déterministes. Finalement, nous prouvons que nous hypothèses sont optimales en donnant un example d'un sous-shift substitutif avec un groupe de symétrie infini. Mots-clés: dynamique symbolique, sous-shift substitutif, homomorphisme, groupe de automomorphisme, normalisateur, groupe de symétrie, digit tile, demi-espace nonexpansif. Introduction Résumé 1 1 Definitions and background 1.1 Basics on discrete, convex and fractal geometry . . . . . . . . . . . . . . . . 1.1.1 Discrete geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.2 Convex geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.3 Fractal Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Topological dynamical systems . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 Basic definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.2 Homomorphisms between topological dynamical systems . . . . . . . 1.3 Measure-preserving systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Symbolic Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Nondeterministic directions of a topological dynamical system . . . . . . . . 
1.6 Odometer systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Multidimensional constant-shape substitutions . . . . . . . . . . . . . . . . 2 The symmetry semigroup of Z 2 -odometers 2.1 The universal odometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Bifurcation phenomenon of the normalizer semigroup for constant base Z 2odometer systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 The diagonal case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 The triangular case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Proof of Theorem 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 The recognizability property of constant-shape substitutions 3.1 The recognizability property of aperiodic symbolic factors of substitutive subshifts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Invariant orbits of substitutive subshifts . . . . . . . . . . . . . . . . . . . . 3.3 The repetitivity function of substitutive subshifts . . . . . . . . . . . . . . . Figure 3 : 3 Figure 3: The five basic Robinson tiles (up to rotation and reflection). 1. 1 . 1 11 Discrete geometryIf F ⊆ Z d is a finite set, it will be denoted by F Z d , and we use the notation F = max n∈F n , where • is the standard Euclidean norm of R d . The standard Cartesian product in R d will be denoted by •, • . If L ∈ M(d, R) is a matrix, we denote L = max x∈R\{0} L(x) / x as the matrix norm of L. We denote GL(d, Z) as the set of d × d matrices M with integer coefficientes such that | det(M )| = 1. and x ∈ C is a point in the relative boundary of C, An affine hyperplane ∂H[a; c] = {y ∈ R d : a, y = c}, for some a ∈ R d \ {0} and c ∈ R is supporting to C at x, if x ∈ ∂H[a : c] and inf y∈C a, y < a, x = c = sup y∈C a, y . Fig. 1 . 1 Fig. 1.1 illustrate the opposite normal cones of a triangle. Figure 1 . 1 : 11 Figure 1.1: Example of the opposite normal cones of a triangle, and the stratification of the circle S 1 given by them. If π : (X, T, Z d ) → (Y, T, Z d ) is a factor map between minimal systems, and there exists y ∈ Y such that |π -1 ({y})| = 1, then this property is satisfied in a G δ dense subset Y 0 ⊆ Y . In this case, we say that π is almost 1-to-1. If |π -1 ({y})| = K for all y in a G δ dense subset of Y , then we say that π is almost K-to-1. If |π -1 ({y})| ≤ K < ∞ for all y ∈ Y , we say that π is finite-to-1. 2 . 1 )(P L n 1 P 211 If L 1 ∈ M(d, Z) is an integer expansion matrix and M ∈ N ( ), then for any P ∈ GL(d, Z), P M P -1 is in the symmetry semigroup N ( ← -Z d -1 ) ). 3. If M ∈ GL(d, Z) commutes with some power of the expansion matrix L, then M is in the symmetry semigroup N ( ← -Z d (L n ) ) Now we define Toeplitz sequences. Let A be a finite alphabet and Z ⊆ Z d be a finite index subgroup of Z d . For x ∈ A Z d and a ∈ A we define Per(x, Z, a) = {n ∈ Z d : x(n + m) = a, for all m ∈ Z}. Fig. 1 .Figure 1 . 2 :Figure 1 . 3 : 11213 Figure 1.2: An example of a constant-shape substitution over a four-letter alphabet. Fig. 1 .Figure 1 . 4 : 114 Figure 1.4: Approximation of some digit tiles: (a) Gasket, (b) Rocket, (c) Shooter, (d) Twin Dragon. 
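As an illustration of how such approximations of digit tiles can be computed, the following sketch (an illustration only; the expansion matrix and digit set are the standard twin-dragon data, not values quoted from the text) iterates F_n = L F_{n-1} + F_1 and rescales by L^{-n}, a point cloud which converges to the digit tile T(L, F_1).

```python
# Illustrative sketch (assumption: twin-dragon data L = [[1,-1],[1,1]], F1 = {(0,0),(1,0)};
# this is a standard example and is not taken from the text above).
import numpy as np

L = np.array([[1, -1],
              [1,  1]])
F1 = [np.array([0, 0]), np.array([1, 0])]

def approximation(n):
    """F_n = L F_{n-1} + F_1, the n-th discrete approximation of the digit tile."""
    Fn = [np.zeros(2)]
    for _ in range(n):
        Fn = [L @ p + f for p in Fn for f in F1]
    return Fn

n = 10
points = approximation(n)                       # |F_n| = |F_1|**n points
Ln = np.linalg.matrix_power(L, n)
rescaled = np.array([np.linalg.solve(Ln, p) for p in points])   # L^{-n} F_n -> T(L, F1)
print(len(rescaled), rescaled.min(axis=0), rescaled.max(axis=0))
# Optional plotting: plt.scatter(rescaled[:, 0], rescaled[:, 1], s=0.1)
```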
The names of these tiles comes from[START_REF] Vince | Digit tiling of Euclidean space[END_REF] next result shows that for any aperiodic symbolic factor (Y, S, Z d ) of (X ζ , S, Z d ) we can change ζ for an appropriate substitution ζ with the same expansion matrix and fundamental domain such that (X ζ , S, Z d ) and (X ζ , S, Z d ) are conjugate, and there exists a factor π : (X ζ , S, Z d ) → (Y, S, Z d ) induced by a 0-block map. Lemma 1.22. Let ζ be an aperiodic primitive constant-shape substitution and φ : (X ζ , S, Z d ) → (Y, S, Z d ) be an aperiodic symbolic factor of (X ζ , S, Z d ). Then, there exists a substitution ζ having the same support and expansion matrix such that (X ζ , S, Z d ) and (X ζ , S, Z d ) are conjugate and a factor map π : (X ζ , S, Z d ) → (Y, S, Z d ) induced by a 0-block map. Proof. Suppose that φ : (X ζ , S, Z d ) → (Y, S, Z d ) is a factor map via a B(0, r)-block map. Set A = B(0, r), by Proposition 1.20 there exists a set C Z d such that B(0, r)+F ζ 1 +C ⊆ L ζ (C) + F ζ 1 . Set D = L ζ (C) + F ζ 1 . We will define a substitution ζ (D) considering the set L D (X ζ ) as the alphabet with the same expansion matrix and support of ζ in the following way: If p ∈ L D (X ζ ), then for any j ∈ F ζ 1 we set ζ (D) (p) j = ζ(p)| j+D . It is straightforward to check that x ∈ X ζ is a fixed point of the substitution ζ, if and only if y ∈ L D (X ζ ) Z d such that y n = x| n+D for all n ∈ Z d is a fixed point of the substitution ζ (D) . With this, we can define the following sliding block codes ψ 1 : (X ζ , S, Z d ) → (X ζ (D) , S, Z d ) given by the D-block map Ψ 1 (p) = p, and ψ 2 : (X ζ (D) , S, Z d ) → (X ζ , S, Z d ) given by the 0-block map Ψ 2 (p) = p 0 . These maps commute with the shift action and define a conjugacy between X ζ and X ζ (D) . We then, define a factor map φ (D) : (X ζ (D) , S, Z d ) → (Y, S, Z d ) given by a 0-block map equal to ψ 2 φ. Example 2 . 4 ( 24 Different results for Theorem 2.2). (1) Consider the matrix L 1 = 2 -1 1 3 R 2 .where m 12 = 212 Then, M satisfies (Normalizer Condition). Note that M has the form (p -s)m 12 -(m 11 -m 22 )q and m 21 = (p -s)m 21 , with m 12 , m 21 ∈ Z. Now, for all n, m > 0, (NC 2) implies -m 21 p m q(n) m 12 s n+m -m 21 q(m)q(n) m 21 p n+m m 21 pq(m) p n s n ). m 22 .= m 11 (m 11 -m 22 22111122 e., (p -s)M = (m 11 -m 22 )L + (p • m 22 -m 11 • s) id R 2 . Since m 21 = 0, then M has the form M = m 11 m 12 (m 11 , m 22 ) 0 where m 12 (m 11 , m 11 ) satisfies (p -s)m 12 (m 11 , m 11 ) = (m 11 -m 22 )q. -Note that m 11 = m 22 if and only if m 12 = 0. -If m 11 = m 22 , then m 11 -m 22 ∈ {-2, 2}, so (p -s)m 12 = ±2q. Since M has integer coefficients, this necessarily implies 2q ∈ (p -s)Z. If this condition is satisfied, then M has the form M m 12 = 12 m 21 k 2 + k(m 11 -m 22 ). Since | det(M )| = 1, and det(M ) = (m 11 +m 21 •k)(m 22 -m 21 •k), we get that |m 11 +m 21 •k| = 1 and |m 22 -m 21 • k| = 1. We then can parameterize the matrices in N ( ← -Z 2 (L n ) ) as follows: Figure 3 . 3 : 33 Figure 3.3: Illustration of the patterns ζ n (w) and a n in j n . Proposition 3 . 4 . 34 Let ζ be an aperiodic primitive constant-shape substitution. Then, there exist finitely many ζ-invariant orbits in the substitutive subshift X ζ . The bound is explicit and depends only on d, |A|, L -1 ζ , F ζ 1 and det(L ζ -id). 1 = 1 = 11 from Proposition 1.18. Using -H as the set A and F = F ζ 1 in Proposition 1.20 we obtain a set C Z d such that L n ζ (-H + C) + F ζ n ⊆ L n+1 ζ (C) + F ζ n+1 for all n > 0. 
Define D Z d be such that D = C + K ζ -H. Suppose that there are more than |A| |D| • |H| ζ-invariant orbits. By the Pigeonhole Principle, there exist j ∈ H and two different points x = y ∈ X ζ such that x| D = y| D and ζ(x) = S j x, ζ(y) = S j y. Note that ζ(x| D ) = ζ(x)| L ζ (D)+F ζ x| j+L ζ (D)+F ζ 1 Hence, we have x| j+L ζ (D)+F ζ y| j+L ζ (D)+F ζ 1 n+1 and we conclude by Proposition 1.18. where |λ 1 | 1 , |λ d | are the maximum and minimum of the absolute values of the eigenvalues of L ζ , respectively. Theorem 3 . 3 13. [108, Theorem 3.13] Let ζ be an aperiodic primitive tiling substitution with expansion matrix L ζ , which has a fixed point. Then the following are equivalent for x ∈ R d : Definition 3 . 19 . 2 . 3192 Let ζ be an aperiodic primitive constant-shape substitution. 1. We say that a pair (a, b) ∈ A 2 \ ∆ A is a periodic pair if there is a cycle in G ζ which starts and ends in (a, b). We define n(a, b) = min{|P | : P is a cycle in (a, b)} and we denote n(ζ) = lcm{n(a, b) : (a, b) is a periodic pair}, we call the substitution pair-aperiodic if n(ζ) = 1. We call a pair (a, b) ∈ A \ ∆ A an asymptotic disjoint pair if for any k > 0, there exists a path P = (a 0 , b 0 ) . . . (a k , b k ) in G ζ of length k with (a 0 , b 0 ) = (a, b) and (a k , b k ) / ∈ ∆ A . ) Assume that ζ is pair-aperiodic. If (a, b) ∈ A 2 is not an asymptotic disjoint pair, let k be the minimum length of a path from (a, b) such that any path of length k has an end in ∆ A . If k > |A| 2 , there exists a cycle as a subpath in P , i.e., one of the vertex (c, d) is a periodic pair. Since ζ is pair-aperiodic, there exists f ∈ F ζ 1 such that ζ(c) f = c = d = ζ(d) f . So we can create a path of length arbitrarily large from (a, b) that not reach ∆ A , which is a contradiction of not being asymptotic disjoint pair. We then have that k ≤ |A| 2 , which implies ζ |A| 2 (a) = ζ |A| 2 (b). Definition 3.21. Let A, B be two finite alphabets, ζ be an aperiodic primitive constantshape substitution with alphabet A and τ : A → B. We say that a, b ∈ A with a = b are indistinguishable (by (ζ, τ )) if for all n ≥ 0 we have that τ (ζ n (a)) = τ (ζ n (b)). Theorem 3 . 22 . 322 Let (Y, S, Z d ) be an aperiodic symbolic factor (with alphabet B) of a substitutive subshift (X ζ , S, Z d ), with ζ being an aperiodic primitive constant-shape substitution with alphabet A. Then, there exists an aperiodic primitive constant-shape substitution ζ with alphabet C having the same expansion matrix and support of a power of ζ and a conjugacy τ : (X ζ , S, Z d ) → (Y, S, Z d ) via a 0-block map. Proof. By Lemma 1.22 we can assume that the factor map τ : (X ζ , S, Z d ) → (Y, S, Z d ) is induced via a 0-block map and by Remark 3.20 we can assume that ζ is a pair-aperiodic substitution. having the same label of edges as P 1 . Now, repeating this process we get a path P 3 in G ζ of length (max{|[a]|, |[b]|} + 1)k from finite, we may consider an appropriate iteration of ζ 1 and ζ 2 such that any factor ψ satisfying 2. in Theorem 4.1 satisfies S p ψζ 1 = ζ 2 ψ. 1). If a substitution is not reduced, we consider an equivalence relation calling two letters a, b equivalent when d n (ζ n (a), ζ n (b)) → 0. If two letters a, b ∈ A are equivalent, then (ζ(a)) f ∼ (ζ(b)) f for all f ∈ F ζ 1 . We define a substitution ζ on A/ ∼ given by ( ζ([a] Proposition 4 . 4 . 44 Let ζ be an aperiodic primitive constant-shape substitution. If ζ is aperiodic, the natural factor between (X ζ , S, Z d ) and ( Fix a, b ∈ A. 
Note that d n+j (ζ n+j (a), ζ n+j (b)) = 1 |F ζ n+j | (c,d)∈D a,b n |F ζ j |d j (ζ j (c), ζ j (d)) + (c,d)∈E a,b n |F ζ j |d j (ζ j (c), ζ j (d)) k→∞ d k k (ζ k (c), ζ k (d)) ≤ 1 we have that d n+j (ζ n+j (a), ζ n+j (b)) ≤ |D a,b n |/|F ζ n | and this is true for every j large enough, so lim k→∞ d k (ζ k (a), ζ k (b)) ≤ |D ad n (ζ n (a), ζ n (b)) -lim k→∞ d k (ζ k (a), ζ k (b)) . n∈N a =b∈B d n (ζ n 2 2 (a), ζ n 2 (b)) and R the radius from the recognizability property of X ζ 2 . Recall that the recognizability property implies the substitutions maps are injective. Let φ be in m Fac(X ζ 1 , X ζ 2 , S, Z d ). The map π n (x) -π n (φx) (mod L n (Z d ))is invariant under the Z d -action, so is constant µ ζ 1 -a.e. We denote this constant by p n Proposition 4 . 7 . 47 Let ζ be an aperiodic primitive reduced constant-shape substitution. Then (X ζ , S, Z d ) is coalescent. Proposition 4 . 11 . 411 Let ζ be an aperiodic primitive reduced constant-shape substitution. If M ∈ GL(d, Z) has finite order, then any homomorphism φ ∈ N M (X ζ , S, Z d ) is invertible. Remark 4 . 13 . 413 Let ψ ∈ Hom M (X ζ 1 , X ζ 2 , S, Z d ) satisfying property (2) of Theorem 4.12. Lemma 4 . 14 . 414 If φ, ψ ∈ mN M (X ζ 1 , X ζ 2 , S, Z d ) are such that d(φ, ψ) is smaller than η/C(R), then φ, ψ are equal µ ζ 1 -a.e in X ζ 1 . Lemma 4.15. Let φ ∈ mN M (X ζ 1 , X ζ 2 , S, Z d ).Then there exists a sequence (ψ n ) of homomorphisms associated with M of radius F ζ 1 Proposition 4 . 16 . 416 Let (X ζ , S, Z d ) be a subshift from a reduced aperiodic primitive constant-shape substitution ζ from a finite alphabet. If the set of matrices M ∈ N (X ζ , S, Z d ) commuting with a power of the expansion matrix L ζ is finite, then the quotient group N C(X ζ , S, Z d )/ S is finite. A bound for |N C(X ζ , S, Z d )/ S | is given by an explicit formula depending only on d, |A|, L -1 ζ , F ζ 1 , and sup M ∈Cent GL(d,Z) (L ζ ) M . Proof. Let ψ ∈ N C(X ζ , S, Z d ), satisfying Property 2. of Theorem 4.12. Following the proof of Proposition 4.7, ψ acts as a permutation of the ζ-invariant orbits. Since the set of matrices M ∈ N (X ζ , S, Z d ) commuting with a power of L ζ is finite, there exists n > 0 such that ψ n is an automorphism of X ζ . By Proposition 4.8, we have that ψ n has finite order, which implies ψ has finite order. n∈E NHn (conv(F ζ n )) = NF 1 (conv(D)).Thus, the extremal rays of NF 0 (conv(D)) are equal to sets of the form n>0 NHn (conv(F ζ n )), with H n being faces eventually of codimension 1 of conv(F ζ n ) containing π n (x 1 ). Example 5 . 4 (F 1 =Figure 5 . 1 : 54151 Figure 5.1: The fundamental domain and an approximation of the digit tile of Example 5.4. Theorem 5 . 6 . 56 Let T be the digit tile for an expansion matrix L ∈ M d (R) and a fundamental domain F 1 ⊆ R d . The following statements are equivalent:1. The convex hull of the digit tile T (L, F 1 ) is a polytope.2. [109, Theorem 4.2]The inward unit normal vectors of the (d -1)-dimensional faces of conv(F 1 ) are eigenvectors of (L * ) k for some k. 3. [ 77 ,Remark 5 . 7 . 7757 Theorem 2.2] The cardinality of Ext(conv(F n )) and Ext(conv(F n+1 )) are the same for some n > 0. In such a case, for any m > n, | Ext(conv(F m ))| = | Ext(conv(F n ))|, and then | Ext(conv(T (L, F 1 ))| = | Ext(conv(F n ))|. In the case L = λ id R d , with λ > 1, a direct computation shows that the statements (2) and (3) of Theorem 5.6 are satisfied without taking any power of L. Example 5 . 8 .Figure 5 . 
2 : 5852 Figure 5.2: The fundamental domain and an approximation of the digit tile of a non self-similar matrix. Fig. 5 2 Figure 5 . 3 : 5253 Figure 5.3: The sets F 1 and F 2 . Proposition 5 . 10 . 510 If | Ext(conv(F ζ 1 )| = | Ext(conv(T (L, F 1 )))|, then the eigenvalues of L are integer numbers. (PC 2 ) 2 The convex set conv(F 1 ) is nondegenerate and| Ext(conv(F 1 ))| = | Ext(conv(T (L, F 1 ))|.(PC 3) Any inward unit normal vector of a (d -1)-dimensional face of conv(F 1 ) is an eigenvector of L * . Corollary 5 . 11 ( 511 Nondeterministic directions in the polytope case). Let ζ be an aperiodic primitive polytope substitution. The set of nondeterministic directions ND(X ζ , S, Z d ) is the intersection of S d-1 with a nonempty union of opposite normal cones of the form NG (conv(T ζ )), where G is a face of conv(T ζ ). ) + L m ζ (K ζ ) + F ζ m ) ∩ (L m ζ (f ) + m-1 i=0 L i ζ (g) + H v ), there exist k 1 ∈ K ζ , j ∈ F ζ m and h ∈ H v such that n = L m ζ (f ) + exist c ∈ K ζ and l ∈ F ζ m such that m-1 i=0 L i ζ (g) + L m ζ (k 1 ) + j = L m ζ (c) + l. Theorem 5 . 17 . 517 Let ζ be an aperiodic reduced primitive polytope substitution. Then 1. The system (X ζ , S, Z d ) is coalescent, and any homomorphism inN (X ζ , S, Z d ) is invertible.If there are d linearly independent vectors that are nondeterministic directions for (X ζ , S, Z d ), we have that 2. The normalizer group is virtually generated by the shift action. 3 . 3 The symmetry group N (X ζ , S, Z d ) acts as a permutation group in the set{N G (conv(T ζ )) : N G (conv(T ζ )) ⊆ ND(X ζ , S, Z d )}. In particular, if ND(X ζ , S, Z d ) = S d-1, then the symmetry group N (X ζ , S, Z d ) is isomorphic to a subgroup of the automorphism group of the normal fan of conv(T ζ ).Proof. The statement 1. is true by Proposition 4.7 and Proposition 4.11. Now, by the third isomorphism theorem we have thatN (X ζ , S, Z d ) /Aut(X ζ , S, Z d ) ∼ = N (X ζ , S, Z d ) / S /( Aut(X ζ , S, Z d ) / S ).Then, by Proposition 5.15 gives the quotient group N (X ζ , S, Z d )/ Aut(X ζ , S, Z d ) is finite and Proposition 4.8 implies Aut(X ζ , S, Z d )/ S is also finite, so we conclude that N (X ζ , S, Z d )/ S is a finite group. Finally, statement 3. is true by Proposition 5.15. Figure 6 . 3 : 63 Figure 6.3: Tile-substitution of the half-hex tiling. Figure 6 . 4 : 64 Figure 6.4: A pattern of the half-hex tiling. Figure 6 . 5 : 65 Figure 6.5: The three tiles as a new alphabet for the half-hex tiling. Figure 6 . 6 : 66 Figure 6.6: New tile-substitution conjugate to the half-hex tiling, with a discrete 2dimensional subaction in R 2 . Claim 2 . 2 For any (m, n) ∈ Z 2 \ 2Z 2 , the letter in x(m, n) (hence y(m, n) and z(m, n)) only depend on the parity of the coordinates.Proof. Let (m, n) ∈ Z 2 \ 2Z 2 . We can write (m, n) = 2(a, b) + (f 1 , f 2 ) with (a, b) ∈ Z 2 and (f 1 , f 2 ) ∈ F hh 1 \ {(0, 0)}. Since ←x is a fixed point of ζ hh we have that ←x (m, n) = ζ hh (x(a, b)) (f 1 ,f 2 ) ,which only depend on (f 1 , f 2 ) by the very definition of ζ hh .In particular we get that(m, n) ≡ (1, 0) (mod 2) =⇒ x(m, n) = y(m, n) = z(m, n) = 2 (m, n) ≡ (0, 1) (mod 2) =⇒ x(m, n) = y(m, n) = z(m, n) = 0 (m, n) ≡ (1, 1) (mod 2) =⇒ x(m, n) = y(m, n) = z(m, n) = 1. Claim 3 . 3 Let (m, n) ∈ Z 2 with gcd(m, n) = 1. Then for any k ∈ Z we have that x(km, kn) = x(m, n) (and the same for y, z). 2 . 2 If k is odd, we write k = 2j + 1. 
Using the Euclidean division we have that km = (2j + 1)m = 2a + g_1, m = 2a' + f_1, kn = (2j + 1)n = 2b + g_2, n = 2b' + f_2, with (a, b), (a', b') ∈ Z^2 and (f_1, f_2), (g_1, g_2) ∈ F_1^{hh}. Since this decomposition is unique, we get a = jm + a', b = jn + b', f_1 = g_1 and f_2 = g_2. By the definition of the substitution we conclude that x(km, kn) = x(2a + f_1, 2b + f_2) = ζ_hh(x(a, b))_{(f_1, f_2)} = x(m, n), the last equality because gcd(m, n) = 1 forces (f_1, f_2) ≠ (0, 0), so by Claim 2 the letters x(km, kn) and x(m, n) are both determined by the parity of the coordinates, which is the same for (km, kn) and (m, n). Few general results exist in the multidimensional context. In [67] M. Hochman proved that most of the one-dimensional properties of automorphism groups are preserved for the class of shifts of finite type with positive entropy. However, he presented a remarkable example of a zero-entropy shift of finite type whose automorphism group is isomorphic to Z^2 ⊕ G, where G is locally finite and factors onto a virtually simple group. This differs from the one-dimensional case, where the group must be residually finite. But there still exist some countable groups, such as the discrete Heisenberg group, for which it is not known whether they can be a subgroup of the automorphism group of a one-dimensional subshift. Let us recall some examples of subshifts having a nontrivial isomorphism, that is, one which is not an automorphism (see [10]): the full shift, any palindromic subshift such as Sturmian shifts, the period-doubling subshift and the Thue-Morse subshift. The first has a huge automorphism group, not even amenable, whereas the automorphism groups of the second and the third are trivial. The automorphism group of the last one is isomorphic to Z ⊕ C_2. These examples suggest that the algebraic properties of the automorphism group do not imply the existence of nontrivial isomorphisms. The isomorphisms of the chair tiling were studied in [START_REF] Baake | Reversing and extended symmetries of shift spaces[END_REF], where it was shown that the automorphism group is trivial and the isomorphism group is a semidirect product of Z^2 by the symmetry group of the square. Remark 2.5. Theorem 2.2 implies that the factor map between equicontinuous systems is not necessarily compatible with homomorphisms. Consider X as the 2-dimensional universal odometer, and Y = ←Z^2_(L_1^n), where L_1 = ( 2 −1 ; 1 3 ), so Y is an equicontinuous factor of X. Now, by Proposition 2.1, we can define a homomorphism associated with the matrix ( 2 1 ; 1 1 ) in X, but by Theorem 2.2 and Lemma 1.6, this is not possible in Y. (2) Consider the matrix L_2 = ( 2 −1 ; 1 5 ). In this case trace(L_2) = 7 and det(L_2) = 11, hence by Theorem 2.2 the matrices in N(←Z^2_(L_2^n)) are the ones commuting with L_2. Note that L_2 has real eigenvalues (equal to 7/2 ± √5/2) and Cent_{GL(2,Z)}(L_2) is an infinite group containing ( 2 −1 ; −1 1 ). (3) Set L_3 = ( 0 6 ; −3 9 ). This matrix has integer eigenvalues, which are 3 and 6. We will get that a matrix M = ( m_11 m_12 ; m_21 m_22 ) is in N(←Z^2_(L_3^n)) if and only if |m_11 m_22 − m_12 m_21| = 1 and m_12 = 4m_21 + 2m_22 − 2m_11. This set of matrices is conjugate, via the matrix ( 1 −1 ; −1 2 ), to the set of matrices { ( m_11 m_12 ; 0 m_22 ) : |m_11 m_22| = 1, m_12 ∈ Z }. It is easy to see that • If rad(p) divides s, then any m_12 ∈ Z satisfies (2.5). Thus, any matrix M = ( m_11 m_12 ; 0 m_22 ) with |m_11 m_22| = 1 satisfies (NC 2).
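To illustrate the kind of verification behind these examples, the following sketch (an illustration only; the finite search window is an arbitrary choice) brute-forces the integer matrices M with |det M| = 1 that commute with the matrix L_2 above, i.e. candidates for Cent_{GL(2,Z)}(L_2).

```python
# Illustrative sketch (assumption: a search window of entries in [-5, 5];
# L is the matrix L_2 = ( 2 -1 ; 1 5 ) discussed above).
import itertools
import numpy as np

L = np.array([[2, -1],
              [1,  5]])

def commuting_unimodular(L, bound=5):
    """Integer matrices M with |det M| = 1 and M L = L M, entries in [-bound, bound]."""
    found = []
    for a, b, c, d in itertools.product(range(-bound, bound + 1), repeat=4):
        M = np.array([[a, b], [c, d]])
        if abs(a * d - b * c) == 1 and np.array_equal(M @ L, L @ M):
            found.append(M)
    return found

for M in commuting_unimodular(L):
    print(M.ravel())
```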
5.1 The automorphism group of substitutive subshifts from bijective constant-shape substitutions Since bijective substitutions are reduced, Proposition 4.8 implies the automorphism group of (X ζ , S, Z d ) is virtually Z d . In fact, we have a more rigid result in the bijective case as shown in the following proposition. Proposition 5.1. Let ζ 1 , ζ 2 be two aperiodic bijective primitive constant-shape substitutions with the same expansion matrix L and support F 1 . Then, any factor ψ : X ζ 1 → X ζ 2 satisfying Property 2. in Theorem 4.1 has radius 0. In particular, the automorphism group Aut(X ζ , S, Z d ) of a substitutive subshift from an aperiodic bijective primitive constant-shape substitution is isomorphic to the direct product of Z d , generated by the shift action, with a finite group given by a permutation of letters. Proof. Let ψ ∈ Fac(X ζ 1 , X ζ 2 , S, Z d ) satisfying Property 2. in Theorem 4.1, i.e., there exists p ∈ F 1 such that S p ψζ 1 = ζ 2 ψ. Suppose that p = 0. Let n > 0 be large enough such that the set F •C n = {f ∈ F n : f +C ⊆ F n } is nonempty, where C is the set defined in Lemma 4.6. Then for any x ∈ X ζ 1 , the coordinate x 0 determines the pattern ζ n 1 (x)| Fn , hence the pattern (S pn ψζ n 1 )| pn+(Fn) •C is also completely determined by x 0 , where p n for infinitely many n ∈ E, we have that necessarilyk 1 = k 3 , which is a contradiction. Consider the face H n of conv(F ζ n ) of smallest dimension generated by {h n (t)} t∈F 1 .Notice that NF 1 (conv(D)) ⊆ By construction of x 1 , x 2 , the set z n is bounded (for all n large enough it belong to K ζ -K ζ ), so there exists t ∈ F 1 and ε > 0 small enough such for all t ∈ B(t, ε) ∩ F 1 we have that z n (t ) = z n (t) for all n ∈ E large enough. Hence H n is a face of codimension 1 and an argument of dimensions let us conclude that NF 1 NHn (conv(F ζ n )). We will prove that NF 1 (conv(D)) = n>0 NHn (conv(F ζ n )). n>0 In the literature, especially Group Theory, is common to also ask that the union of the sequence of sets (Fn)n>0 is equal to Z d for a sequence to be Følner, but we will not use it in this thesis. The word opposite comes from the fact that the usual normal cone is related to the outward normal vectors of convex sets, and in this thesis we will use the inward normal vectors. ) )/ Cent GL(2,Z) (L 3 ) is virtually Z. Acknowledgments length of the substitution. Then, N. P. Frank in [START_REF] Frank | Multidimensional constant-length substitution sequences[END_REF] followed the same ideas to describe the maximal equicontinuous factor of block substitutions. Here, we follow the same ideas in [START_REF] Frank | Multidimensional constant-length substitution sequences[END_REF], using the notion of height lattice. It is a lattice of Z d containing the set of return times of a letter, satisfying a specific property with the expansion matrix of the constant-shape substitution. Using a characterization of the eigenvalues of a substitutive subshift proved in [START_REF] Solomyak | Eigenfunctions for substitution tiling systems[END_REF] we characterized the maximal equicontinuous factor of substitutive subshift. Substitutive subshifts as a finite-to-1 extension of a d-dimensional odometer by the recognizability property The recognizability property establishes a factor map from the substitutive subshift to a d-dimensional odometer as follows. For every n > 0, let π n : X ζ → F ζ n be the map satisfying x ∈ S πn(x) ζ n (X ζ ). 
This map is well defined ( In particular, the factor map π : (X ζ , S, ) is finite-to-1. Proof. We separate the proof in these two cases. K1: Assume that |π -1 ({ ←g })| > |A|. Let x 0 , . . . , x |A| be in π -1 ({ ←g }). For all n > 0 and all j ∈ {0, . . . , |A|}, there exist y n j ∈ X ζ such that x j = S gn ζ n (y n j ). By the Pigeonhole Principle, there exist j 1 = j 2 ∈ {0, . . . , |A|} and an infinite set E ⊆ N with y n j 1 0 = y n j 2 0 . This implies for any n ∈ E, x To complete the proof, note that for any n > 0, we have that g n+1 -g n is in Proof. For all n ≥ 0, we denote U n = {x ∈ X ζ 1 : (φ n x) 0 = (ψ n x) 0 }. We will prove by induction on n ≥ 0 that p n (φ) = p n (ψ) and . By hypothesis this is true for n = 0. Now, suppose that p n (φ) = p n (ψ) and By the recognizability property, this map vanishes on the set {x ∈ X ζ 1 : (φ n x)| B(0,R) = (ψ n x)| B(0,R) } which has a positive measure by hypothesis, and we conclude that p n+1 (φ) = p n+1 (ψ). Now, let x be in U n+1 . Then, there exist at least η|F , so we have that Finally, the set {x ∈ X : φ(x) = ψ(x)} is the decreasing intersection of these sets, so has a positive measure. By ergodicity, φ, ψ are equal Then there exists a sequence (ψ n ) of sliding block codes of radius Proof. Set ε > 0. By Lusin's theorem, there exist an integer > 0 and a continuous map f : X ζ 1 → B such that f (x) only depends on x| B(0, ) and the measure of the set So we have that and by Remark 1.21 we have that n | elements in J(x) ∩ J(y), so (φ n x) 0 is equal to (φ n y) 0 by definition of η. Hence, for every x in W , (φ n x) 0 only depends on x| C . Finally, to prove Theorem 4.1 we use similar arguments given in [START_REF] Host | Homomorphismes entre systèmes dynamiques définis par substitutions[END_REF] that we describe for completeness. Proof of Theorem 4.1. For the fixed alphabets A and B, there exist a finite number of sliding block codes of radius ) . By Lemma 4.6, there exist two different integers m, k ≥ 0 such that d(φ m , φ m+k ) < η/C(R), so by Lemma 4.5, we have that φ m = φ m+k , µ ζ 1 -a.e.. Let n ≥ m be a multiple of k. We have that ( This implies for all r ∈ N, Since the number of sliding block codes of radius Note that φ n is equal to φ 2n , µ ζ 1 -a.e. We denote ψ = φ n and p = p n (ψ). By definition of p, we have that this implies that S j φ and ψ coincides in ζ n 1 (X ζ 1 ) µ ζ 1 -almost everywhere, and by ergodicity in the whole set X ζ 1 µ ζ 1 -almost everywhere. The automorphism group of substitutive subshifts. Since the set of sliding block codes of radius between X ζ to itself is finite, we have the following result as a direct corollary of Theorem 4.1. In the special case, where any automorphism of (X ζ , S, Z d ) satisfies Property 2. of Theorem 4.1 with p = 0, we have a more rigid result. Proof. Note that an automorphism φ commutes with the substitution map if and only if φ is equal to φ n , for all n > 0. Now, by Lemma 4.6, the Property 2. implies Property 1. of Theorem 4.1. So, the group of automorphisms commuting with the substitution map is finite. To conclude we just need to observe that the pair (j φ , ψ φ ) in Theorem 4.1 associated with any automorphism φ is unique. which implies (id -L n ζ )(j 2 -j 1 ) = 0, so j 2 = j 1 and then ψ 1 = ψ 2 . We will show in Proposition 5.1 that for bijective substitutions, the automorphism group is isomorphic to a direct product of the shift map and a finite group, generated by permutation of letters. 
Rigidity properties for homomorphisms between substitutive subshifts and applications This section is devoted to homomorphisms between substitutive subshifts. We recall that for M ∈ GL(d, Z), the map φ : (X, T, Z d ) → (Y, T, Z d ) between two topological dynamical systems is said to be a homomorphism associated with M if for all m ∈ Z d we have that φ • S m = S M m • φ. We can also define measurable homomorphisms in the measuretheoretic setting. First, we establishes a necessary condition for the matrices M with the accumulation points is at most 2, when a power of one of the L ζ -eigenvalues is a real number. Otherwise, the projective orbits of L * ζ are dense in the circle. Since ND(X ζ , S, Z 2 ) is closed, it is the whole circle, which is a contradiction. Proof of Theorem 5.2. Let v be a nondeterministic direction for (X ζ , S, Z d ), and 2. If conv(D) does not have extreme points, then contains a line. In this case the hyperplane ∂H v must be parallel to this line contained in conv(D) and using the same argument, we can assume that x 1 (0) = x 2 (0). So we can assume that 0 is in a face F 0 of smallest dimension of conv(D) and then v ∈ NF 0 (conv(D)). In fact any element in NF 0 (conv(D)) ∩ S d-1 is a nondeterministic direction for (X ζ , S, Z d ). Now, for any k > 0 consider R (k) > 0 be the recognizability radius for ζ k given by Proposition 3.3 and R = 4R (k) . Since x 1 and x 2 coincide in an arbitrarily large ball, they have the same image under the maximal equicontinuous factor, hence π n (x 1 ) = π n (x 2 ) ∈ F ζ n for any n > 0. By Lemma 3.9, there exist n > 0 and two words w ). Letting k to infinity we have that for all n > 0, the origin is in the boundary of conv( We will show now the converse. We separate the proof in two cases. Suppose that conv(D) is closed. Let F 1 be a face of conv(D) containing 0 of codimension 1. We have that Since h ∈ H v , then v, h ≤ 0, and by (5.6) we have that v, gl (i) ≤ 0, for all 1 ≤ i ≤ m -1. Since λ > 0, we conclude that v, c ≤ 0, which implies c ∈ H v . By (5.3) and Proposition 1.18, the iterations of the substitution on the patterns w 1 , w 2 leads to two points , for i ∈ {1, 2}. Finally, (5.4) implies x 1 | Hv = x 2 | Hv , and we conclude that v is nondeterministic for (X ζ , S, Z d ). As described in Theorem 5.2, depending on the faces where the point f satisfying Condition H1. belongs in conv(L n ζ (k)+F ζ n ), we may have more nondeterministic directions, obtaining the following corollary: In the following we present two examples with different behaviors given by Lemma 5.12 Example 5.14 (Different behaviors for the nondeterministic directions). (1) Consider the 2D-Thue Morse substitution with In this case K ζ = -1, 0 2 , and we have that The sets of differences for the 2D-Thue Morse substitution are {{(0, -1), (0, 0)}, {(-1, 0), (0, 0)}, {(-1, 0), (0, -1)}, {(-1, -1), (0, 0)}, {(-1, -1), (-1, 0)}}. By Lemma 5.12 it can be proved 1 0 , -1 0 , 0 1 , 0 1 are the only nondeterministic directions for (X ζ T M , S, Z 2 ). (2) Consider the substitution of the table tiling [START_REF] Robinson | On the table and the chair[END_REF], with L ζt = 2 0 0 2 , F ζt 1 = 0, 1 2 , given by ζ t : The set K ζt is equal to -1, 0 2 and the sets of differences is equal to 2 K ζ t \ {∅, K ζt , {(-1, -1), (0, 0)}, {(0, -1), (-1, 0)}}. By Lemma 5.12, it can be proved the set of nondeterministic directions for (X ζ , S, Z 2 ) is the whole circle S 1 . 
Now, we proceed to determine the normalizer semigroup N (X ζ , S, Z d ) of substitutive subshifts of polytope substitutions. Set M ∈ N (X ζ , S, Z d ). By Proposition 1.12 if v is a nondeterministic direction for (X ζ , S, Z d ), then M * v/ M * v is also a nondeterministic direction for (X ζ , S, Z d ). Moreover, Theorem 5.2 ensures the matrix M acts on the opposite normal cones of conv(T ζ ) that appeared as nondeterministic directions for (X ζ , S, Z d ). In particular, the matrix M * permutes the hyperplanes defined by the (d - The following is a pattern of the table tiling: Figure 6.2: A pattern of the table tiling, using red and green rectangles. The table tiling forms an aperiodic tiling [START_REF] Robinson | On the table and the chair[END_REF]. Also in [START_REF] Robinson | On the table and the chair[END_REF], it was proved that the Z 2subaction of the table tiling is conjugate to the following constant-shape substitution (called table substitution) ζ t with expansion matrix L t = 2 0 0 2 and support F ζt 1 = 0, 1 2 : This is an aperiodic bijective primitive polytope constant-shape substitution. By Remark 3.10 (3), the factor map π t : (X t , S, ) is almost 4-to-1. In fact, by [START_REF] Robinson | On the table and the chair[END_REF] we have that |π The table substitution has 24 patterns with support K ζt = -1, 0 2 , which generate the 24 fixed points under the square of the substitution: In Example 5.14 it was mentioned that the set of nondeterministic directions for (X t , S, Z 2 ) is the whole circle S 1 . Since the table substitution is a polytope substitution and has 2 linearly independent nondeterministic directions we can apply Theorem 5.17. This implies, its symmetry group N (X t , S, Z 2 ) is isomorphic to a subgroup of 0 -1 1 0 , 1 0 0 -1 which is isomorphic to D 4 . More precisely, we have the following result: Proposition 6.1. The normalizer group of the table substitution is isomorphic to Z 2 D 4 . In particular Aut(X ζt , S, Z 2 ) is the group generated by the shift action. Proof. First, we will prove that Aut(X t , S, Z 2 ) = S . Indeed, the table substitution is bijective. By Proposition 5.1, any automorphism φ can be written as φ = S n τ , where n ∈ Z 2 and τ is defined by a permutation map T on the alphabet {0, 1, 2, 3}. In particular, τ acts as a permutation on the set of fixed points of ζ 2 t . Now, note that 2 0 2 0 is the only pattern of the form i j i j , i, j ∈ {0, 1, 2, 3}. This implies T (0) = 0 and T (2) = 2. In the same way, 3 3 1 1 is the only pattern of the form i i j j , i, j ∈ {0, 1, 2, 3}, which implies T (1) = 1 and T (3) = 3, i.e., T = id {0,1,2,3} . By minimality of (X t , S, Z 2 ), we have that τ is the identity on X t . We conclude that Aut(X t , S, Z 2 ) = S . Now, for homomorphisms. Set M 1 = 0 -1 1 0 . Consider the permutation Φ : 0 → 3, This permutation comes by the rotation of the rectangles defining the table tiling. We define the map φ : X t → φ(X t ), given by φ(x) n = Φ(x M -1 1 n ) for all x ∈ X t , n ∈ Z d . We will prove that φ is a homomorphism on X t . A direct computation shows that the map S (-1,0) φ permutes the fixed points of ζ 2 t . By minimality of (X t , S, Z 2 ) we conclude that S (-1,0) φ(X t ) = X t . Then φ is a homomorphism associated with M 1 onto (X t , S, Z 2 ). Note that, also by minimality, S (-1,0) φζ 2 t = ζ 2 t (S (-1,0) φ), which implies S (3,0) φζ 2 t = ζ 2 t φ. So φ is the homomorphism given by Theorem 4.12. ) , generated by the center of these hexagons, using the vectors u and v in Fig. 6.6. 
We can define a constant-shape substitution with expansion matrix L hh = 2 • id R 2 and F hh 1 = {(0, 0), (1, 0), (0, 1), (1, -1)} where, for convenience, we identify the hexagons in Fig. 6.5 with the letters {0, 1, 2}. The associated substitutive subshift is conjugated to the Λ-subaction of the half-hex tiling. The set K hh (defined in Section 1.7) is equal to {(0, 0), (-1, 0), (0, -1), (-1, 1)}. Since this substitution has coincidences (the definition is given in Section 4.1) in all except one coordinate, it has exactly three fixed points x, y, z such that x(0, 0) = 0, y(0, 0) = 1, z(0, 0) = 2 and for all (m, n) In fact, we can characterize the maximal equicontinuous factor of the half-hex substitutive subshift. Proposition 6.2. The half-hex substitutive subshift (X hh , S, Z 2 ) is a Toeplitz subshift. Moreover, its maximal equicontinuous factor is the odometer system Proof. This is a particular case of Lemma 3.11. Since the factor map π hh : (X hh , S, ) is the maximal equicontinuous factor of the half-hex substitutive subshift (X hh , S, Z 2 ) and (X hh , S, Z 2 ) is a Toeplitz subshift. In fact, by Lemma 3.11, we have that ). We could show, using the rotational symmetries of the substitution generated by the matrix 0 -1 1 1 that Z/6Z can embed in N (X hh , S, Z 2 ). But, as shown in Theorem 6.3 these are not the only isomorphisms this tiling presents. Theorem 6.3. The normalizer semigroup of the half-hex substitutive subshift is a group and it is isomorphic to Z 2 GL(2, Z). Moreover, Aut(X hh , S, Z 2 ) is equal to the shift group S . Proof. First we will prove that End(X hh , S, Z 2 ) = S . In fact, since the factor map ) is injective (Lemma 1.6). Now, by Lemma 1.7, for any endomorphism φ, we have that In fact, since M commutes with 2 • id R 2 , the matrix M maps the 2Z 2 cosets onto 2Z 2 cosets. Moreover, any matrix M ∈ GL(2, Z) induces a bijection in Z/2Z × Z/2Z via one of the following matrices in GL(2, Z/2Z): Each of these matrices define a permutation τ M 1 on Z 2 /2Z × 2Z \ {(0, 0)}. Identifying (1, -1) in F hh 1 with (1, 1) (mod 2), for any matrix M ∈ GL(2, Z), we have a unique permutation τ M 2 on the alphabet A = {0, 1, 2} given by τ ), for any x ∈ X hh and n = (m, n) ∈ Z 2 . We prove that φ M is a homomorphism on (X hh , S, Z 2 ) associated with M . Indeed, by Claim 2, since τ is a bijection we have that for any (m, Then, by Claim 3 we get that the image of a fixed point of ζ hh via φ M is a fixed point of ζ hh . Now, by minimality of (X hh , S, Z 2 ), we conclude that φ M (X hh ) = X hh for any M ∈ GL(2, Z). So φ M is a homomorphism onto (X hh , S, Z 2 ) associated with M . Note that for any (m, n) ∈ Z 2 we have that φ M (S (m,n) ) = S M (m,n) φ M . If φ M is the identity in X hh , this would imply that S (m,n) = S M (m,n) for any (m, n) ∈ Z 2 , i.e., M = id. This implies φ M is a nontrivial isomorphism. Hence, any matrix M ∈ GL(2, Z) has a homomorphism φ M induced by a letter-to-letter local map. Since (X hh , S, Z 2 ) is a coalescent system, by Proposition 1.5 the normalizer semigroup N (X hh , S, Z 2 ) is a group. Furthermore, for any ). This defines a group homomorphism from GL(2, Z) to N (X hh , S, Z 2 ), so using the exact sequence (1.4) we conclude that N (X hh , S, Z 2 ) ∼ = Z 2 GL(2, Z). Perspectives We present here some perspectives that remain open after the results of this thesis. 
On non-deterministic directions of multidimensional subshifts Beyond polytope substitutions Until now, we didn't find an aperiodic d-dimensional primitive constant-shape substitution with less than d linearly independent nondeterministic directions. In the case of aperiodic primitive block substitutions, it can be easily proved that this hypothesis is true. Moreover, given the result in [START_REF] Guillon | Determinism in subshifts[END_REF] for the two-dimensional case, by Theorem 5.2, the hypothesis is true for all of the cases where conv(T ζ ) does not have two parallel edges. In a private communication, P. Guillon [62] mentioned this result is already proved for higher dimensions, but nowhere published. This implies, we only have to deal in the case conv(T ζ ) has two parallel (d -1)-dimensional faces. Another open problem is the study of the normalizer semigroup for nonpolytope substitutions, i.e., where the convex hull of the digit tile is nonpolytope. Using the description of the convex hull of the digit tile given in [START_REF] Strichartz | Geometry of self-affine tiles[END_REF] on the two-dimensional case it may be possible to obtain similar results for the normalizer semigroup and symmetry semigroup. Nevertheless, until now there are no good descriptions of the convex hull of the digit tile for higher dimensions. Motivated by the classification of full-shifts, in [START_REF] Hartman | The stabilized automorphism group of a subshift[END_REF] was introduced the notion of stabilized automorphism group of a topological dynamical system, which is the group of selfhomeomorphisms commuting with a power of T . The authors could distinguished, up to isomorphism, various stabilized automorphism groups of non-trivial mixing shift of finite type. Moreover, in the class of fullshifts, they proved if the stabilized automorphism group of the fullshift on n and m letters are isomorphic, then n and m have the same number of distinct prime divisors. In [START_REF] Schmieding | Local P entropy and stabilized automorphism groups of subshifts[END_REF], S. Schmieding studied the relation between topological entropy and the stabilized automorphism group. Moreover, he introduced a certain kind of entropy (in fact a whole family of entropies) for groups which he called local P entropy. For Z d -actions we can define the stabilized automorphism group of a topological dynamical system as the following. Let M ∈ M(d, Z) be an invertible integer matrix with Realization results of nondeterministic directions for minimal actions The work in this thesis corresponds to the first examples of realization results about the set of nondeterministic directions for minimal actions. In [START_REF] Boyle | Expansive subdynamics[END_REF] it was proved that for any compact set of S 1 that is not a singleton containing one line with irrational slope can be realized as the set of nonexpansive directions of a Z 2 -action, and the singleton case was after proved by M. Hochmann in [START_REF] Hochman | Non-expansive directions for Z 2 actions[END_REF]. If aperiodic bijective on extremities primitive constantshape substitutions have d linearly independent nondeterministic directions, then we cannot obtain a unique nondeterministic direction with these substitutions, so we will need to use other types of substitutions, or other type of subshifts (such as Toeplitz sequences) to obtain other realization results with minimal subshifts. Question. 1. 
Is there an aperiodic primitive constant-shape substitution with a unique nondeterministic direction? 2. Is there an aperiodic primitive constant-shape substitution such that its set of nondeterministic directions is homeomorphic to a Cantor set? Recognizability property of constant-shape substitutions As mentioned in Section 3.1, the recognizability property is a combinatorial one, useful to prove other properties satisfying the constant-shape substitutions. In this thesis we proved aperiodic symbolic factors of aperiodic primitive constant-shape substitutions are recognizable. This left open the following questions: On minimal multidimensional S-adic subshifts All the previous problems can be expressed for multidimensional minimal S-adic subshifts. In the one-dimensional case, this class of minimal subshifts is one of the most natural containing minimal subshifts of sublinear complexity, but it is much broader as was shown in [START_REF] Donoso | On automorphism groups of low complexity subshifts[END_REF][START_REF] Donoso | Interplay between finite topological rank minimal cantor systems, s-adic subshifts and their complexity[END_REF]. The class contains several well studied systems, such as substitutive subshifts, symbolic codings of interval exchange transformations, dendric subshifts, and some Toeplitz sequences. In the multidimensional setting, we can consider the case where the morphisms σ n : A n+1 → A F n 1 n are given by a constant-shape morphism. In this context the following are open questions. Question. 1. Is there a recognizability property for minimal multidimensional constantshape S-adic systems? 2. Is there an analogue definition of finite rank systems in the multidimensional framework? 3. What rigidity properties do homomorphisms satisfy in the context of minimal multidimensional constant-shape S-adic systems? Moreover, if the morphisms are bijective, it is possible to extend Theorem 5.17 in this context?
V Maillet A Joksimović E Benichou X Carbonneau
BODY-FORCE MODELLING OF MULTIPLE DISTRIBUTED PROPULSORS WITH BOUNDARY LAYER INGESTION
Keywords: Body-Force Modelling, Aero-Propulsive Performance, Preliminary Design, Boundary-Layer Ingestion
Nomenclature: BFM: Body-Force Modelling; ρ: fluid density; Ω: rotational speed.
A coupled wing-propulsor configuration is modelled in order to extract the coupled lift, drag or thrust coefficients and to investigate a methodology for in-flight energy recovery via loaded windmilling. The baseline wing geometry and flight conditions of the configuration are representative of the Daher TBM900 aeroplane. The propulsors are modelled using RANS Body-Force Modelling (BFM) in order to analyse the non-homogeneous incoming flow at a computational cost coherent with preliminary design. The main coupled phenomena such as Boundary Layer Ingestion (BLI), suction-side flow acceleration, propulsor-to-propulsor influence, lateral loads and increased operability are discussed; numerical modifications to the baseline Hall-Thollet model are introduced, providing control-by-power of the propulsor. In continuation of previous in-house efforts, the simulations were carried out for a typical takeoff case. The obtained results provide preliminary evidence for the possibility of energy recovery by loaded windmilling, which lays the groundwork for further investigations at other flight conditions and more complex configurations. INTRODUCTION The growing demand for environmentally-acceptable means of transportation has been driving the aeronautical industry actors to investigate a variety of technological concepts beyond the conventional "Tube and Wing (with podded gas-turbine engines)" paradigm. [START_REF] Kellari | Influence of Technology Trends on Future Aircraft Architecture[END_REF] Two disruptive ideas are Boundary Layer Ingestion (BLI) and Distributed Propulsion (DP). [START_REF] Bijewitz | A Review of Recent Aircraft Concepts Employing Synergistic Propulsion-Airframe Integration[END_REF] While the two are a priori separate concept families, they are often employed together, since DP involves a greater number of small propulsors whose distribution over the airframe can enable different levels of absorption of portions of the upstream boundary layer. Moreover, vehicle-level performance gains from aero-propulsive synergies can potentially contribute to offsetting the penalising effects of the (hybrid-)electric propulsive systems [START_REF] Kim | A Review of Distributed Electric Propulsion Concepts for Air Vehicle Technology[END_REF] which are often regarded as an enabler for zero-emissions flying, but whose power densities still make them prohibitive for large-scale airborne applications. To carry such a philosophy of functional synergies a step further: in addition to enabling aerodynamic vehicle-level gains, the DP could also be functionally related to the aeroplane control and stability characteristics. Given that the DP propulsors would likely be powered by an electrically distributed power system, the "one engine inoperative" failure mode would not necessarily be the relevant limiting sizing case anymore. Furthermore, in the previous work by the current authors, where the contribution presented in this paper was first demonstrated [START_REF] Benichou | Numerical Low-Fidelity Method for Improved Analysis of Breakthrough Aero-Propulsive Systems AEC[END_REF], a notable behaviour was observed.
Simulations indicated that distorted flow at the inlet of a BLI distributed propulsor, going through the propulsor assembly, results in a lateral force component at the propulsor nozzle. While already by itself this is a valuable observation, it does not provide any insight into how such emergent effects could be fully leveraged with multi-propulsor arrangements, notably with the DP, which intrinsically relies on various layouts of two or more adjacent propulsors under mutual influence. If the most beneficial concepts are to be revealed to the designer, it is crucial to render the possible system synergies transparent in early design. That is, the preliminary-design level methods need to contain all the necessary information to capture complex correlations such as the ones described previously. However, the sheer number of different ways one could configure airframe aerodynamics with distributed propulsion and various types of hybrid-electric propulsive architectures over different flight profiles opens a tremendously big design space. Confident discrimination between different possible solutions within such a huge possibility space requires very fast computing methods. The authors have been developing one such low-fidelity method for aero-propulsive conceptual design [START_REF] Bommidala | Development of a Panel Method for Preliminary Design of Aero-Propulsive Systems[END_REF], but due to the lack of real-life experience to validate and/or calibrate the models, intermediary solutions are sought. One such approach is the one presented in the current paper. In particular, we present a Body Force Modelling (BFM) approach, employed for modelling the propulsors of an aero-propulsive assembly. This choice allows for an acceptable trade-off between computation costs and fidelity of the results. The BFM methods date back to the 1990s; they consist of replacing the blades by their mean azimuthal effect on the flow. The modelling used in this paper is taken from Hall and Thollet (Thollet 2017). The case study presented in this article was carried out on an aeroplane model similar to a Daher TBM900. In particular, the explored propulsive concept was developed by replacing the single nose-mounted propeller with eight smaller ducted fans, four on each wing, mounted on the suction side at the trailing edge. Such a setup enables studying the coupling between the propulsor behaviour and the wing aerodynamics, along with the coupling between neighbouring propulsors. To that end, the article is organised as follows: the first chapter presents the theoretical background for the models employed in this study; then, the details of the study at hand are provided; this is followed by a summary of the results for the different investigated operating scenarios. THEORETICAL PRINCIPLES Body-Force Modelling The core idea of Body-Force Modelling (BFM) is to substitute the physical presence (i.e. explicit model) of the blades with volume sources that represent the mean azimuthal effect that the blades exert on the flow (Fig. 1). With this method, no phenomena at length scales smaller than the blade-to-blade distance are resolved. However, it allows for much lighter simulations, since the need to mesh the turbomachine blade rows is eliminated. In BFM, the source terms depend on both the local flow and the local geometry of the blade. This allows for the simulation of non-homogeneous flows and their interaction with the propulsors without resorting to URANS simulations, which is why this approach was chosen to conduct the study.
Further insight into expressions for the source terms can be found in the literature, see [START_REF] Thollet | Body-force modeling for aerodynamic analysis of air intake - fan interactions[END_REF][START_REF] Reichstein | Estimation of axial compressor body forces using three-dimensional flow computations[END_REF][START_REF] Hale | A Three-Dimensional Turbine Engine Analysis Compressor Code (TEACC) for Steady-State Inlet Distortion[END_REF][START_REF] Gong | A computational model for rotating stall and inlet distortions in multistage compressors[END_REF]. The formulation used in this paper was developed by Thollet [START_REF] Thollet | Body-force modeling for aerodynamic analysis of air intake - fan interactions[END_REF], who had in turn improved on the model by Hall [START_REF] Hall | Analysis of Fan Stage Design Attributes for Boundary Layer Ingestion[END_REF]:

$\frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot (\rho \vec{V}) = B$ (1)

$\frac{\partial (\rho \vec{V})}{\partial t} + \vec{\nabla} \cdot (\rho \vec{V} \otimes \vec{V}) + \vec{\nabla} P = \rho (\vec{f}_n + \vec{f}_p) + B \vec{V}$ (2)

$\frac{\partial (\rho e_t)}{\partial t} + \vec{\nabla} \cdot (\rho h_t \vec{V}) = E_{source} + B h_t$ (3)

Where the different terms are described as follows:
• $B = -\frac{1}{b} \rho \vec{V} \cdot \vec{\nabla} b$ is the mass source term
• $\vec{f}_n = K_{Mach} \, \frac{1}{2} W^2 \, \frac{2 \pi \delta}{s \, b \, |n_\theta|}$ is the normal force applied to the flow, with a correction term $K_{Mach}$ for compressibility effects
• $\vec{f}_p = 0.0592 \, Re^{-0.2} \, \frac{W^2}{s \, b \, |n_\theta|}$ is the parallel force, corresponding to losses and viscosity
• $E_{source} = \rho r \Omega (f_{n,\theta} + f_{p,\theta})$ is the volume energy source, derived from Euler's theorem
• $b$ is the blockage term defined as shown on the right of figure 1

In this formulation of the BFM, only the geometry of the blade is necessary to create the source terms; no previous CFD simulations are needed to extract coefficients. These source terms depend on the blockage factor b and on the normal vector to the surface of the simulated blade. Windmilling and Control by Power As the above equations indicate, the main way to control the body forces at the user's disposal is to vary the rotation speed Ω, which will also control the relative velocity $\vec{W}$. However, in aeroplane preliminary design and in the power consumption characterisation of the whole system, it is more relevant to control the power delivered to the propulsor. Indeed, in the broader framework of the presented study, the goal was to determine whether the baseline TBM900 aeroplane retrofitted with a wing-mounted BLI DP concept would be able to carry out a standard mission with the same power and energy characteristics as the baseline aeroplane. Using an analysis by Dufour [START_REF] Dufour | Body Force Modeling of the Aerodynamics of the Fan of a Turbofan at Windmill[END_REF], an angular-momentum-based equation can be derived to describe the evolution of the rotational speed based on the power received by the propeller:

$\frac{d\Omega}{dt} = \frac{\dot{W}_{fan} - \dot{W}_{fluid}}{\Omega J_\Theta}$ (4)

Where the term $J_\Theta$ represents the inertia of the rotating parts. The above differential equation can be approximated as follows:

$\Omega_{n+1} = \Omega_n - \frac{\delta t}{J_\Theta \Omega_n} (\dot{m} \Delta h_t - \dot{W})$ (5)

The relaxation term needs to be physical in the case of an unsteady simulation, but can be regarded as a purely numerical constant otherwise:

$\Omega_{n+1} = \Omega_n - K \left( \Delta h_t - \frac{\dot{W}}{\dot{m}} \right)$ (6)

Where K is the aforementioned relaxation factor, tuned for the convergence of the calculation (here around 0.005). The simulation does not appear to be too sensitive to this value. $\dot{W}$ is the power command for the propulsor (positive for a net increase of the energy in the rotor), $\Delta h_t$ is the computed enthalpy rise through the rotor, and $\dot{m}$ is the mass flow.
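To show how the control-by-power relaxation of Eq. (6) behaves in practice, the following minimal sketch (not the implementation used in the study; the fan characteristic, relaxation factor, mass flow and speeds below are assumed toy values) iterates the update with a simple linear model of the enthalpy rise, so that the rotational speed settles where the absorbed power matches the power command; a zero or negative command drives the rotor towards free or loaded windmilling.

```python
# Minimal sketch of the control-by-power relaxation (Eq. 6). All numbers below are
# illustrative assumptions (toy fan characteristic, relaxation factor, mass flow),
# not values from the paper.

def delta_ht(omega, a=50.0, omega_windmill=400.0):
    """Toy enthalpy rise through the rotor [J/kg]: positive above the free-windmilling
    speed, negative below it (loaded windmilling)."""
    return a * (omega - omega_windmill)

def relax_to_power(power_command, omega0=600.0, m_dot=3.0, K=0.005, n_iter=2000):
    """Iterate Omega_{n+1} = Omega_n - K*(delta_ht - W/m_dot) until convergence."""
    omega = omega0
    for _ in range(n_iter):
        omega -= K * (delta_ht(omega) - power_command / m_dot)
    return omega

m_dot = 3.0
for W in (60e3, 0.0, -30e3):   # thrust-producing, free windmilling, loaded windmilling
    omega = relax_to_power(W, m_dot=m_dot)
    print(f"command {W/1e3:+6.1f} kW -> Omega = {omega:7.1f} rad/s, "
          f"absorbed power = {m_dot * delta_ht(omega)/1e3:+6.1f} kW")
```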
Thus, if the power command is less than the actual energy output, the rotational speed will decrease until both terms are equal. By assigning a null or negative value to \( \dot{W} \), free and loaded windmilling situations can be attained, respectively.

STUDY SETUP

Aero-Propulsive Assembly Description
The presented aero-propulsive assembly was conceived to be representative of a concept that could be envisioned for a small aeroplane like the TBM900. The original MS-0313 wing root profile of the TBM900 was kept and extruded to make a constant-section basic wing structure for the current model. The current aero-propulsive assembly design employs 8 propulsors, 4 on each wing, whose total combined power matches the power provided by the Pratt&Whitney PT6 engine to the baseline aircraft. Each propulsor is composed of a rotor and a stator designed by an in-house program whose purpose is to generate basic blades that produce one eighth of the thrust of the baseline aeroplane while consuming one eighth of its power. The resulting unitary aero-propulsive segment is shown in Fig. 2; the base profile chord is 2 m long and the segment is 0.375 m wide. It was designed to enable modular stacking of as many of these segments as necessary, to conduct studies such as the one described in the following paragraph. As such, the nacelle and the wing are symmetrical with respect to the centre plane, and the walls have continuous tangency constraints in order to avoid sharp angles and corners.

Numerical Setup
The simulations were carried out using StarCCM+ 13.04. This software was chosen for two main reasons. Firstly, the versatility of its unstructured mesher allows meshing of complex 3D shapes, such as the transition between the nacelle, the propulsor and the wing for the current case. However, this capability comes at the cost of controllability, i.e. it is more difficult to control the mesh in the geometrically complicated areas (such as the junction between the propulsor and the wing) without a drastic increase in the number of cells. Secondly, the straightforward addition of User Field Functions that compute custom scalar or vector fields facilitates the creation of the source terms needed in Body Force Modelling. All simulations were done using a 3D steady RANS solver with a second-order implicit scheme; the turbulent flow was simulated using the k-ω SST model with the second-order Gamma-ReTheta transition. The air was modelled as an ideal gas following Sutherland's law for the viscosity. Since capturing the interaction between the propulsor and the boundary layer is one of the main objectives of this study, the mesh was constructed so as to have y+ ∼ 1 on every wall in the domain. Given that the studied configuration is a combination of a propulsor and a wing, the fluid domain needs to have a large extent in order to capture the jet and the aerodynamic forces on the physical (airframe) surfaces. A rectangular domain was chosen, with 5 chords upstream, above and below, and 20 chords downstream of the aero-propulsive assembly. For the single-propulsor configuration, the width is equal to 0.375 m, which corresponds to the diameter of the fan and its cowling. Such simulations are composed of 4 domains: the rotor, the stator, the region in between the two, and the exterior. The source terms described above were applied in the rotor and stator domains (red and dark blue zones depicted in figure 3, as well as the yellow zones in figure 2).
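The y+ ∼ 1 target mentioned above translates into a first-cell height that can be estimated from standard flat-plate correlations. The sketch below is such a rough sizing estimate, not the meshing procedure actually used in StarCCM+; the free-stream values are chosen only to resemble the take-off conditions of the study.

```python
import math

def first_cell_height(y_plus, u_inf, length, rho=1.177, mu=1.85e-5):
    """Estimate the wall-adjacent cell height for a target y+ using a
    turbulent flat-plate skin-friction correlation (rough sizing only)."""
    re_x = rho * u_inf * length / mu
    cf = 0.058 * re_x ** (-0.2)             # flat-plate skin-friction correlation
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus * mu / (rho * u_tau)

# Example: roughly M = 0.11 at sea level (~37 m/s) over the 2 m chord
print(f"first cell height ~ {first_cell_height(1.0, 37.0, 2.0) * 1e6:.1f} micrometers")
```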
On the lateral boundaries of the domain, a periodic condition was applied. Thus, the simulation is analogous to an infinite wing with an infinite number of propulsors integrated on the suction side. For reference, the total number of cells is approximately 17 million and the computational cost is around 2000 CPU hours for a single operating-point simulation on one single segment. In comparison with an equivalent URANS case, the cell count is much smaller in the bladed area, while everywhere else the number of cells is approximately the same. The number of CPU hours is considerably smaller than for an equivalent URANS case. A more comprehensive description of the meshing method and criteria can be found in the previous paper by the authors [START_REF] Benichou | Numerical Low-Fidelity Method for Improved Analysis of Breakthrough Aero-Propulsive Systems AEC[END_REF]. For the bi-propulsor simulations, the domain width was doubled in order to fit the two segments. Consequently, this simulation consisted of 7 domains, shown in figure 3: 2 rotors in dark blue, 2 stators in red, 2 intermediate spaces in yellow and 1 exterior domain in light blue. The number of cells and the computational cost are doubled with respect to the single-segment case.

Figure 3: Visualisation of the mesh on the X1Z1 plane at the centre of the propulsor.

Conservativity Correction
An issue that might arise is the non-conservation of the mass flow across the rotor and the stator. Indeed, the mass source term B from Eq. (1) theoretically has a null global production, but the discretisation can result in an imbalance and the creation (or destruction) of mass flow. During the early simulations, the exiting mass flow was around 1% higher than the one at the inlet. In order to correct this numerical error, the term B was slightly modified as follows:

\[ B_B = B - \frac{1}{V}\int_V B\,dV \quad (7) \]

Where V is the volume of the region in which the source terms are applied. The goal is to subtract the mean value of the mass flow production in the domain (rotor or stator) from the local value of B. By construction, the total contribution is then null and the conservativity is now within 0.01% of the mass flow. This modification effectively changes the geometry of the blade by increasing (or decreasing) its thickness. Since the term \( \frac{1}{V}\int_V B\,dV \) is very small, the overall shape of \( B_B \) is very similar to the original B and it does not impact the simulation results.

Lift, Drag and Thrust
In order to analyse and assess the performance of the aero-propulsive configuration at hand, a classical resultant-force decomposition into independent lift, drag and thrust contributions is no longer relevant. The nature of the forces applied on this segment is roughly three-fold: pressure forces, viscous forces, and momentum increase due to the work added by the propulsor. The first two contributions can be obtained by integrating the pressure and the friction on all wetted surfaces, noted \( \vec{S}_F \) (Eq. 8), while the third one comes from the integration of the generated thrust between the rotor inlet and the stator outlet, noted \( \vec{S}_T \) (Eq. 9). Indeed, in BFM, the blades are not physically present in the fluid domain. As such, the momentum variations they impart to the flow can only be obtained by integrating the dynalpy between the inlet and the outlet of the propulsor.
The resulting forces are then decomposed into a streamwise and an upward-oriented stream-normal component (relative to the free-stream flow), in order to draw a comparison with the usual lift and drag of a wing [START_REF] Agard | Guide to in-flight thrust measurement of turbojets and fan engines[END_REF], as shown in Figure 2. The described force decomposition is given in the following equations:

\[ \vec{F} = \int_{S_F} \left(p + \vec{\tau}\cdot\vec{n}\right)\,d\vec{S}_F \quad (8) \]

\[ \vec{T} = \int_{S_T} \left(\rho\,\vec{v}\cdot\vec{v} + (p - p_\infty)\right)\,d\vec{S}_T \quad (9) \]

From the above, it follows that the streamwise component can be negative, meaning that the segment accelerates in the forward direction by producing more thrust than drag. The coefficients (Eqs. 10 and 11) are normalised conventionally using the dynamic pressure and the reference surface area \( S_{ref} \), defined as a rectangular surface spanned by the chord and the width of the aero-propulsive segment (or double the width for the bi-propulsor simulations).

\[ C_L = \frac{(\vec{F} + \vec{T})\cdot\vec{Y}_0}{\frac{1}{2}\rho_\infty V_\infty^2 S_{ref}} \quad (10) \]

\[ C_D = \frac{(\vec{F} + \vec{T})\cdot\vec{X}_0}{\frac{1}{2}\rho_\infty V_\infty^2 S_{ref}} \quad (11) \]

A comprehensive analysis of these phenomena can be found in the previous paper by the current authors [START_REF] Benichou | Numerical Low-Fidelity Method for Improved Analysis of Breakthrough Aero-Propulsive Systems AEC[END_REF].

WING-PROPULSOR COUPLING: RESULTS

Test Matrix
The investigated cases are summarised in Tables 1 and 2. All the upstream conditions are representative of the take-off of a TBM900, with P∞ = 101325 Pa and T∞ = 300 K for the atmospheric conditions, and a Mach number M∞ = 0.11. The angle of attack is set to 0°.

Table 1: Summary of the simulations for the single-segment case.
Case name | Imposed condition for propulsor | Imposed value
Ω_N  | Rotor speed | 13 900 rpm
Ω_WM | Rotor power | 0 kW

Table 2: Summary of the simulations for the bi-propulsor case.
Case name | Imposed conditions for propulsors 1 and 2 | Imposed value
Ω_N/Ω_N     | Speed                       | 13 900 rpm
Ω_WM/Ω_N    | Power for 1 and speed for 2 | 0 kW and 13 900 rpm
Ω_WM/Ω_WM   | Power                       | 0 kW for both
P_10%/P_10%   | Power                     | 12.3 kW for both
P_100%/P_100% | Power                     | 123 kW for both
Ω_WM/P_100% | Power                       | 0 kW and 123 kW

The single-segment simulations are the focus of the precedent paper by the authors (ibid.) and are solely used here to add some degree of validation to the double-segment computations. For the investigation of the mutual influence of the propulsors, the last case Ω_WM/P_100% is used to perform a sweep in angle of attack from 0° to 16°.

Coupled Behaviour of the Adjacent Propulsors
The simulations presented in this paper aim to study the mutual interactions between the propulsors. The computational domain was changed to include two aero-propulsive segments whose respective power settings could be controlled independently. The currently presented setup makes it possible to impose different fan power settings and to analyse the overall behaviour of the resulting flow. As for the unitary aero-propulsive assembly case, a periodicity condition was imposed on the lateral boundaries of the domain, which means that in this case the simulation represented an infinite wing composed of double-segment blocks. Consequently, these simulations were evidently twice as large in terms of memory and twice as costly in CPU hours as the previous ones; the mesh target sizes were unchanged, and the number of cells was approximately doubled. Table 3 summarises a selection of the results from the simulations outlined in the test matrix (Tables 1 and 2).

Table 3: Results of the simulations. Ω is the rotational speed, ṁ is the mass flow, Δh_t the rise of enthalpy through the propulsor and P the power setting of the given propulsor.
Case name | Ω1/Ω2 (RPM) | ṁ1/ṁ2 (kg/s) | Δh_t1/Δh_t2 (kJ/kg) | P1/P2 (kW)
Ω_N           | 13 900          | 14.7        | 8.4       | 123
Ω_WM          | 2 650           | 4.1         | 0         | 0
Ω_N/Ω_N       | 13 900 / 13 900 | 14.7 / 14.7 | 8.4 / 8.4 | 123 / 123
Ω_WM/Ω_N      | 3 300 / 13 900  | 5 / 14.3    | 0 / 9.4   | 0 / 135
Ω_WM/Ω_WM     | 2 650 / 2 650   | 4.1 / 4.1   | 0 / 0     | 0 / 0
P_10%/P_10%   | 6 500 / 6 500   | 7.7 / 7.7   | 1.6 / 1.6 | 12.3 / 12.3
P_100%/P_100% | 13 900 / 13 900 | 14.7 / 14.7 | 8.4 / 8.4 | 123 / 123
Ω_WM/P_100%   | 3 200 / 13 400  | 5 / 14      | 0 / 9     | 0 / 123

The following section will firstly compare the results of single-segment simulations with their uniform-power double-segment counterparts.
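Before moving to the results, the bookkeeping of Eqs. (8)-(11) above can be summarised in a short sketch. It is a didactic example rather than the post-processing used in the study: the force and dynalpy vectors below are placeholders standing in for the CFD surface integrations, and the axis and surface names are our own.

```python
import numpy as np

def aero_coefficients(F, T, v_inf, rho_inf, s_ref, x0, y0):
    """Project the sum of the surface force F (Eq. 8) and the propulsor
    dynalpy T (Eq. 9) onto the free-stream axes to get CL and CD (Eqs. 10-11)."""
    q_inf = 0.5 * rho_inf * v_inf ** 2
    total = np.asarray(F) + np.asarray(T)
    cl = float(np.dot(total, y0)) / (q_inf * s_ref)
    cd = float(np.dot(total, x0)) / (q_inf * s_ref)
    return cl, cd

# Placeholder resultants: pressure + friction force, and rotor-inlet to stator-outlet dynalpy
F = np.array([60.0, 900.0, 0.0])     # N
T = np.array([-250.0, 30.0, 0.0])    # N
cl, cd = aero_coefficients(F, T, v_inf=37.0, rho_inf=1.177,
                           s_ref=2.0 * 0.375,
                           x0=np.array([1.0, 0.0, 0.0]),
                           y0=np.array([0.0, 1.0, 0.0]))
# With these placeholder numbers the streamwise component is negative,
# illustrating a segment that produces more thrust than drag.
```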
Following this pseudo-validation step, we went on to analyse the double segment with asymmetrical power settings, as presented in the second part of the section. In the framework of preliminary validation of these simulations, the first computations consisted in setting the same operating point for the two propulsors, and then comparing the results with the ones from the single-segment case. Comparison between cases Ω_N and Ω_N/Ω_N, and between Ω_WM and Ω_WM/Ω_WM, provides evidence of the coherence between the single and double-segment simulations. Indeed, the mass flow, the power consumption and the predicted free-windmilling velocity are almost identical, as are the lift and drag coefficients (not shown here for clarity). These results provide some confidence for going further with asymmetrical power settings. Cases Ω_WM/Ω_N and Ω_WM/P_100% highlight the coupling of the flow between adjacent segments. They show that, with an asymmetrical power-setting combination, the mass flow is higher in the windmilling propulsor and lower in the full-power one compared to their baseline counterparts (cases Ω_WM and Ω_N). Some of the airflow is forced into the windmilling fan by the suction of the other. Thus, with the same power input, the powered fan experiences a higher load on its blades. Comparison between lines Ω_WM/Ω_N and Ω_WM/P_100% provides compelling evidence that the control-by-power of the BFM model keeps the power consumption under control. In the case of Ω_WM/Ω_N, the power needed to keep the baseline rotational speed is higher, since some of the work is absorbed by the windmilling rotor (whose windmilling speed is thus higher). Case Ω_WM/P_100% is the counterpart, highlighting the lower rotational speed of the powered rotor caused by the presence of the windmilling propulsor. Since this was preliminary work, dedicated to the methodology assessment, only four bi-propulsor simulations were run with different angles of attack. Even though Body Force Modelling drastically reduces the number of cells in the rotor and stator, the rest of the simulated domain needs a fine mesh, which is the main contributor to the duration of the simulations. We can also consider other global indicators such as the lift or drag coefficient. Figure 4 shows the variations of the lift coefficient, as defined in Eq. 10, for asymmetrical settings of the bi-propulsor case versus the single propulsor. For the rest of this paper, the bi-propulsor results correspond to the Ω_WM/P_100% case at four different angles of attack. A flow field is presented in figure 5 to provide some visualisation of the explored configuration. The single propulsor corresponds to P_100% in Figure 4 (a) and to Ω_WM in (b). The mutual interactions of the propulsors at different power settings can then be investigated. Firstly, we can study each segment independently. In Figure 4 (a), we compare the Ω_N single segment against the segment at nominal power of the bi-propulsor simulation. This highlights that the coupling effect continues to take place at different angles of attack, with the windmilling segment absorbing some of the power and thus decreasing the lift of the powered segment. On the contrary, Figure 4 (b) compares Ω_WM to the windmilling segment of the bi-propulsor simulation. In this case, the lift is higher in the asymmetrical configuration, since there is a power source next to it.
We can also study the mean performance of the bi-propulsor simulation compared to the uniform wing at different fractions of the rotational speed (power fraction is not employed here because these simulations predate the formulation of the control-by-power method).

Figure 5: Visualisation of the axial velocity on the Z1X1 plane at the centre of the propulsor for the bi-propulsor configuration at zero angle of attack. The top propulsor is at windmilling condition while the bottom one is at nominal power.

The values summarised in Figure 6 indicate the equivalent of the bi-propulsor with regard to a mono-propulsor. From the point of view of the lift coefficient, the bi-propulsor is practically coincident with the 50%Ω mono-propulsor trend. Given that a major part of the lift is generated by the upper-surface suction effect, having only half the propulsive power generates as much lift as the single propulsor at half nominal speed. However, for the drag coefficient, the bi-propulsor is closer to a rotational setting of 75%Ω. This is due to the fact that the lift is in part created by the suction, while the drag of the windmilling fan is underestimated since the mechanical losses are not simulated. Finally, the mass flow indicates a midpoint between 75%Ω and 50%Ω. Indeed, some of the flow through the windmilling propulsor comes from the upstream velocity. These results show potential for energy recovery in flight, with a possibility to load the rotors until C_D becomes null or even negative.

CONCLUSIONS AND PERSPECTIVES
The paper describes the application of a Body Force Model to the aero-propulsive behaviour of a highly integrated propulsor-airframe (wing and nacelle) assembly, with a computational cost compatible with preliminary and early detailed design. The particular contribution of the presented work is a new boundary condition formalism which enables control of the propulsor operation through power manipulation rather than rotating speed, as has been the case so far. The described formalism was employed to examine the traditionally non-linear coupled behaviour of aero-propulsive assemblies, this time for a configuration consisting of a double aero-propulsive wing segment. The configuration was simulated at non-symmetric conditions, with the particular aim of investigating the behaviour when one of the propulsors operates in windmilling mode. The obtained preliminary results indicate that power recovery via loaded windmilling might be possible and needs to be investigated. The coupling between two propulsors at different power settings was also highlighted, and a more complete parametric study could reveal new interesting synergies. This work presents the next step towards better visibility of the functional synergies enabled by the airframe-propulsor configuration composed of an array of ducted propulsors mounted in the boundary layer zone of the wing suction surface. Aerodynamics and handling qualities enabled by combined propulsor-airframe behaviour need to be rendered transparent in the overall aeroplane design schemes, which are beyond the scope of this paper. To that end, the next step in this research work would consist of the creation of a full wing model with a full array of propulsors, which could be independently simulated at any flight conditions of interest.
The experience summarised in this paper tells us that, even at this lower-fidelity level of modelling, the CPU cost of covering the number of possible configurations of interest would remain completely out of reach of any preliminary design effort. Nevertheless, the presented method can be used to identify preliminary tendencies as well as to calibrate lower-level decision-making methods used in preliminary aero-propulsive configuration selection. While simulation of a full finite wing-propulsor configuration is currently not envisioned, since the associated computation cost would be prohibitive for such endeavours, preliminary feasibility studies of a finite wing with 4 independently controlled distributed propulsors were carried out. Notwithstanding rather extensive meshing, such calculations seem perfectly feasible if the CPU constraint is alleviated. Furthermore, the experience gathered from the current and the preceding works does not seem to favour any optimisation effort on such configurations using the presented frameworks, be it for unitary or for more complicated aero-propulsive segments. Creation of such geometries involves a lot of variables, for both the propulsor and the wing but mainly for the transition zone between the nacelle, the fan and the aerofoil, and more work needs to be invested in the definition of meaningful free variables that define such configurations, if any optimisation is to be undertaken in the long run.

Figure 1: Illustration of the BFM: the blade passage is modelled by its mean effect on the flow. [START_REF] Dufour | Body Force Modeling of the Aerodynamics of the Fan of a Turbofan at Windmill[END_REF]

Figure 2: Left: Visualisation of the investigated aero-propulsive assembly consisting of two identical unitary segments; Right: Description of the different coordinate systems for the analysis of the aero-propulsive coupling (Benichou et al. 2020).

Figure 4: Comparison of the lift coefficient for the nominal power setting (P_100%) (a), and the windmilling power setting (Ω_WM) (b).

Figure 6: Comparison of the lift coefficient (a), the drag coefficient (b), the mass flow (c) and the polars (d) at multiple angles of attack.

ACKNOWLEDGEMENTS
The authors wish to acknowledge and thank SAFRAN Group for the AEGIS research sponsorship framework (2016-2021) which made this research work possible.
04098396
en
[ "info.info-ni", "info.info-ai" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098396/file/ZECCHIN_Matteo_these_2022.pdf
Abbreviations: CFL, Clustered Federated Learning; EM, Expectation Maximization; IID, Independent and Identically Distributed; MAML, Model-Agnostic Meta-Learning.

Abstract
Artificial intelligence (AI) is widely viewed as a key enabler of sixth generation (6G) wireless systems. The main drivers of its adoption are the increasing complexity and specialization of the services offered by wireless networks to end users. The original premise behind the incorporation of AI in 6G networks is the possibility of establishing a mutually gainful synergy between wireless communication systems and the tools belonging to the machine learning (ML) literature. Specifically, the edge of wireless networks offers unprecedented data availability and computational power that ML algorithms can potentially tap into. At the same time, a plethora of wireless networking problems lacking analytical solutions can benefit from data-driven techniques originating from the image and audio signal processing domains. This thesis targets fundamental problems arising in this domain, with the end goal of paving the way towards the adoption of reliable AI in future wireless networks. The first part of this thesis is devoted to wireless communication for ML. It focuses on the development of distributed training algorithms that can be deployed at the edge of wireless networks to fully harness its potential. Future wireless networks are envisioned to be heavily reliant on device-to-device (D2D) communication. For that reason, we first investigate the implications of performing distributed optimization of ML models over wireless communication systems comprising unreliable computing devices restrained to intermittent peer-to-peer connectivity. We propose and formally analyze an implementation of distributed stochastic gradient descent that leverages asynchronous model updates and a time-varying consensus strategy to mitigate the detrimental effect of computational and communication impairments. While D2D communication is in principle a challenge, we demonstrate that it brings a new degree of flexibility to the network infrastructure that can be exploited to speed up the training of ML models at the network edge. Specifically, given the increasingly important role of unmanned aerial vehicles (UAVs) in infrastructureless wireless networks, we introduce a UAV-aided training procedure in which the UAV trajectory is designed to promote the diffusion of locally optimized models across devices. In the second part of this thesis, we switch focus to learning aspects associated with the distributed nature of the data generation processes, in particular data heterogeneity. Data heterogeneity entails the problem of producing ML models capable of generalizing to multiple and different data sources. We consider two different approaches to attain this desideratum. Our first solution is a user-centric federated learning protocol that sidesteps the issue of finding a universal ML solution by outputting tailored models for different groups of devices that have similar learning goals. We then recommend an alternative approach, based on a distributionally robust reformulation of the learning problem, that has the goal of producing a unique and fair ML model with satisfactory worst-case performance. To achieve this, we develop an agnostic decentralized gradient descent-ascent algorithm that solves the underlying minimax optimization problem in a communication-efficient manner by employing a compressed consensus scheme.
In the third and final part of this thesis, we turn to the paradigm of ML for wireless communication. We take a critical look at frequentist learning and its applications to wireless communication problems. This stance is motivated by the unreliability of the frequentist framework under the challenging learning conditions that characterize wireless communication problems. The main contribution of this section is a novel robust Bayesian learning paradigm, that concurrently counteracts three prominent challenges arising in wireless communication learning: data scarcity, the presence of outliers and model misspecification. Finally, after over-viewing its main theoretical underpinnings and formally investigating its properties, we showcase the merits of the proposed robust Bayesian learning over a range of prototypical wireless communication problems. Résumé L'intelligence artificielle (IA) est largement considérée comme un élément fondamental des technologies de communication sans fil de sixième génération (6G). La hausse de la complexité et de la spécialisation des services offerts par les réseaux sans fil aux utilisateurs ont été des facteurs déterminants derrière son utilisation répandue. L'intégration de l'IA dans les réseaux 6G est motivée par la possibilité d'établir une relation mutuellement bénéfique entre les systèmes de communication sans fil et les outils appartenant à la littérature sur l'apprentissage automatique (AA). Plus précisément, la périphérie des réseaux sans fil offre une disponibilité de données et une puissance de calcul sans précédent que les algorithmes d'AA peuvent exploiter. En parallèle, une multitude de problèmes liés aux réseaux sans fil, pour lesquels il n'existe pas de solution analytique, peuvent bénéficier des techniques d'AA appartenant au domaine du traitement des images et des signaux audio. Cette thèse vise à résoudre les problèmes fondamentaux liés à ce domaine, afin de faciliter l'adoption d'une IA fiable dans les futurs réseaux sans fil. La première partie de cette thèse est consacrée à la communication sans fil pour l'AA. Elle se concentre sur le développement d'algorithmes d'apprentissage distribués qui peuvent être déployés à la périphérie des réseaux sans fil afin d'en exploiter pleinement le potentiel. En effet, il est prévu que les futurs réseaux sans fil soient fortement tributaires de la communication de dispositif à dispositif (D2D) dans un futur proche. Pour cette raison, nous étudions l'optimisation distribuée des modèles d'AA sur les systèmes de communication sans fil comprenant des dispositifs informatiques non fiables limités à un système pair-à-pair intermittent. Nous proposons et analysons un algorithme de la descente de gradient stochastique distribuée qui exploite les mises à jour asynchrones des modèles et une stratégie de consensus variable dans le temps afin d'atténuer l'effet indésirable des dysfonctionnements en matière de calcul et de communication. Nous montrons ensuite que, même si la communication D2D est en principe un obstacle, elle apporte un nouveau degré de flexibilité à l'infrastructure du réseau qui peut être exploitée pour accélérer l'entraînement des modèles d'AA à la périphérie du réseau. Plus précisément, compte tenu du rôle de plus en plus important que tiennent les drones dans les réseaux sans fil ad hoc, nous proposons une procédure d'apprentissage assistée par drone dans laquelle la trajectoire de ce dernier est conçue pour favoriser la diffusion de modèles optimisés localement dans le réseau. 
Dans la deuxième partie de cette thèse, nous nous concentrons sur les aspects d'apprentissage associés à la nature distribuée des processus de génération des données, et en particulier à l'hétérogénéité des données. L'hétérogénéité des données pose le iii problème de la production de modèles d'AA avec une bonne capacité de généralisation sur des sources de données multiples et différentes. Nous proposons deux approches différentes pour atteindre ce desideratum. Notre première solution est une méthode d'apprentissage fédéré centrée sur l'utilisateur. Celle-ci contourne le problème de la recherche d'une solution d'AA universelle en produisant des modèles sur mesure pour différents groupes de dispositifs ayant des objectifs d'apprentissage similaires. Nous suggérons par la suite une approche alternative, basée sur une reformulation distributionnellement robuste du problème d'apprentissage, qui a pour but de produire un modèle d'AA unique et éthique avec une performance satisfaisante dans le pire des cas pour tous les dispositifs collaboratifs. Pour y parvenir, nous développons un algorithme de descente de gradient décentralisé agnostique qui résout le problème d'optimisation minimax sous-jacent d'une manière efficace en termes de communication en utilisant un schéma de consensus compressé. Dans la troisième et dernière partie de cette thèse, nous nous tournons vers le paradigme de l'AA pour les communications sans fil. Nous jetons un regard critique sur l'apprentissage fréquentiste et son utilisation sur les problèmes de communication sans fil. Cette prise de position est motivée par le manque de fiabilité que le paradigme fréquentiste démontre lors de conditions d'apprentissage difficiles caractérisant les problèmes de communication sans fil. La principale contribution de cette section est un nouveau paradigme d'apprentissage bayésien robuste qui relève simultanément trois défis proéminents dans l'apprentissage des communications sans fil : la rareté des données, la présence de données aberrantes et la mauvaise spécification du modèle.Enfin, après avoir passé en revue les principaux fondements théoriques de l'apprentissage bayésien robuste et avoir étudié formellement ses propriétés, nous démontrons ses mérites sur une série de problèmes prototypiques de communication sans fil. First, I would like to express my gratitude to David Gesbert and Marios Kountouris. Their crystalline view on problems helped me move my first steps in the world of research, and their constant support provided me with opportunities for which I am extremely grateful. I want to also thank EURECOM and the great researchers that I had the chance to collaborate with. In particular, I would like to thank the past and present members of the M3 group, and Davit, Maurizio and Motonobu from the data science department. I am very grateful to professor Osvaldo Simeone for welcoming me to his research group at King's College. During these six extraordinary months, I had the chance to meet fantastic new collaborators and friends. Thanks, Sangwoo, Mikolaj, Riccardo, Ivana, Nicolas, Sharu, Kfir, Yunchuan, Hari and Clement for all the good times spent in London and at The Vault! I would like to thank the European Union for funding my research, and all the members of the ITN Windmill with whom I shared lots of unforgivable experiences. I would also like to thank my friends that backed me up from Italy, Grazie! And all the ones that I have met here in France, Merci! 
Last but not least, I would like to thank my family and Alexane for their unconditional support and love.

List of Figures
[START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF]. We consider a network of 5 devices with one device sampling images using a different microscope from the rest of the collaborating devices. CHOCO-SGD (solid lines), a non-robust decentralized learning scheme, yields a model with highly imbalanced performance between the two types of instruments, while AD-GDA (dashed curves), the proposed distributionally robust algorithm, drastically reduces the accuracy gap and improves fairness among the collaborating devices.
In the last two scenarios, in which users inherently belong to 4 different clusters, the scores indicate the necessity of at least 4 personalized streams.
6.1 t-logarithm loss, or log_t-loss, of a predictive distribution p(x) for different values of t. For t = 1, the samples x corresponding to low predictive probability p(x) → 0 have a potentially unbounded loss value. On the contrary, for t < 1, the t-logarithm loss is bounded by (1-t)^{-1}, which limits their influence.
The optimized predictive distributions are displayed in shades of gray. In (a), we plot the predictive distribution associated with (m, 1)-robust Bayesian learning obtained by minimizing the m-free energy criterion J^m of [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] with m = 20 using only samples from the ID measure (i.e., there are no outliers). In (b), we show the predictive distribution obtained by minimizing the same criterion when using samples from the ID measure and the OOD measure with a contamination ratio ϵ = 0.1. In (c) and (d) we consider the same scenario as in (b), but with the proposed (m, t)-robust Bayesian learning based on the robust m-free energy criterion J^m_t with m = 20, when setting t = 0.9 and t = 0.8, respectively.
6.6 Test accuracy (top) and expected calibration error (ECE) (bottom) as a function of t under the contamination ratio ϵ = 0.3 for: (i) deep ensembles [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF]; (ii) the robust Gibbs predictor, which minimizes the free energy criterion J^1_t [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF]; and (iii) (m, t)-robust Bayesian learning, which minimizes the free energy criterion J^10_t.
6.7 Distribution of the negative log-likelihood of ID and OOD training data samples for an ensemble model minimizing (on the left) the log-loss based criterion J^10_1, and (on the right) the proposed robust objective J^10_{0.7} based on the log_t-loss with t = 0.7.
6.8 Negative log-likelihood computed on an uncorrupted data set for: (i) deep ensembles [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF]; (ii) the robust Gibbs predictor, which minimizes J^1_t [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF]; and (iii) (m, t)-robust Bayesian learning, which minimizes J^10_t. The models are trained on an ϵ-contaminated data set for ϵ ∈ {0, 0.1, 0.2, 0.3}.
7.1 Average test accuracy and ECE for AMC over the DeepSIG: RadioML 2016.10A data set [START_REF] O'shea | Convolutional radio modulation recognition networks[END_REF] for frequentist and (m, t)-robust Bayesian learning as a function of the parameter t. The test set is free from interference, while the training set is subject to interference (ϵ = 0.5).
The samples from the ID measure are represented as green dots, while data points sampled from the OOD component are in red. The optimized predictive distributions: the predictive distribution obtained by minimizing the standard m-free energy is denoted by J^m, while the predictive distributions yielded by the minimization of the robust m-free energy are denoted by J^m_{0.9}, J^m_{0.7}, J^m_{0.5}, J^m_{0.3} and J^m_{0.1} for t = {1, 0.9, 0.7, 0.5, 0.

Introduction
As the fifth generation (5G) network roll-out is ramping up around the world, research on sixth generation (6G) mobile systems is expected to deliver technological advancements able to sustain increasing demands for massive connectivity, increased reliability and reduced latency, while at the same time satisfying imperative energy-efficiency requirements [START_REF] Saad | A vision of 6g wireless systems: Applications, trends, technologies, and open research problems[END_REF][START_REF] Chowdhury | 6g wireless communication systems: Applications, requirements, technologies, challenges, and research directions[END_REF]. These apparently contrasting needs bring about complex engineering problems that frequently lack models that are at once descriptive and analytically tractable. Researchers have therefore considered complementing the classical model-based design with a data-driven one [START_REF] Wang | Machine learning for 5g and beyond: From model-based to data-driven mobile wireless networks[END_REF][START_REF] Zappone | Wireless networks design in the era of deep learning: Model-based, ai-based, or both?[END_REF]. The major underpinning of this paradigm shift is the adoption of machine learning (ML) solutions. ML allows one to sidestep the challenges of modelling and solving complex wireless communication problems by relying upon data-driven solutions obtained from the optimization of parametric models, i.e. neural networks, based on large amounts of data. This technology is expected to be sustained by the enormous data availability and computational power provided by 6G networks, and to penetrate all levels of the protocol stack [START_REF] Du | Machine learning for 6g wireless networks: Carrying forward enhanced bandwidth, massive access, and ultrareliable/low-latency service[END_REF]. The interaction between ML and wireless communication is not limited to an application of the tools of the former field to the problems arising in the latter. In fact, 6G networks also bear an opportunity to scale machine learning technologies up to an unprecedented level. Massive connectivity, combined with the advent of the Internet of Things (IoT), will provide a sheer amount of new data and will contribute to producing one of the largest and most powerful distributed computing platforms available [START_REF] Lovén | Edgeai: A vision for distributed, edge-native artificial intelligence in future 6g networks[END_REF][START_REF] Letaief | Edge artificial intelligence for 6g: Vision, enabling technologies, and applications[END_REF].
This opportunity calls for a novel network design that, instead of serving as a simple pipeline for data, can support distributed ML at the edge of the network. Overall, from the intersection between ML and wireless communication stem two different and complementary research fields: wireless communication for machine learning, focusing on repurposing wireless communication systems for the distributed training of ML models, and machine learning for wireless communication, mapping the tools from the ML literature to the problems emerging from the development of 6G networks.

Wireless Communication for Machine Learning
The current spring of AI has been fueled by the availability of powerful computing frameworks and the big data revolution [START_REF] Buchanan | A (very) brief history of artificial intelligence[END_REF]. These two ingredients are traditionally leveraged following the centralized machine learning (CML) paradigm. Accordingly, the training data is collected at a single processing unit, or a cluster of processing units interconnected by wired links, and the optimization process is run locally. However, as the number of smartphones and IoT devices soars, data has started being generated in a distributed fashion at the edge of the network by increasingly powerful devices. In principle, any distributed data set can be collected at a central node and processed according to the CML framework; however, the volume of the data generated at the network edge, the unreliability of its wireless links and the privacy concerns associated with off-loading personal data pose an insurmountable obstacle to the application of CML in 6G networks. This limitation of the CML approach motivates the search for novel communication protocols and physical layer technologies to enable efficient distributed machine learning (DML) at the wireless network edge [START_REF] Hellström | Wireless for machine learning[END_REF].

Distributed Machine Learning
DML is a training technique that allows scaling out the training of ML models. Differently from the scale-up approach, which works by increasing the computational power and storage at a single device, DML parallelizes the optimization of an ML model by leveraging a group of computing nodes while keeping the training data distributed. The practical upshot is that devices with limited data and computing capabilities can aggregate their resources without off-loading sensitive data [START_REF] Verbraeken | A survey on distributed machine learning[END_REF]. The archetypical DML protocol consists of an iterative and coordinated optimization scheme that encompasses multiple rounds, each comprising a computation and a communication phase. During the former, a portion of the collaborating devices locally optimize the ML model based on in-situ data and computing resources. A communication phase then follows, during which devices share the result of the optimization and a new model is created through the aggregation of the received parameters. This procedure repeats until the validation performance of the ML model stabilizes; a minimal sketch of one such round is given below. There exist different types of distributed learning frameworks, which differ depending on the number and reliability of their computing nodes, the way data is distributed, the speed of the communication links and the type of communication topology that interconnects the devices. In Table 1.1 we provide a taxonomy of the most popular DML schemes.

The 6G network constitutes the ideal environment for the deployment of DML, in particular of the federated and decentralized strategies, which are expected to be able to cope with the massive population of heterogeneous workers [START_REF] Yang | Federated learning for 6g: Applications, challenges, and opportunities[END_REF] (see Table 1.1).
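As a concrete illustration of the generic round structure described above (local computation followed by aggregation), the following sketch implements one synchronous round of parameter-server-based averaging on a toy least-squares problem. It is a didactic example and not code from the thesis; all names and hyperparameters are placeholders.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """Computation phase on one device: a few SGD steps on the local data."""
    x, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(y)   # gradient of the local quadratic loss
        w -= lr * grad
    return w

def aggregation_round(global_w, device_data):
    """Communication phase: collect the locally optimized models and average them."""
    local_models = [local_update(global_w, d) for d in device_data]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(8)]
w = np.zeros(4)
for _ in range(20):          # repeat until validation performance stabilizes
    w = aggregation_round(w, devices)
```

Decentralized variants replace the server-side average with local gossip averaging over whichever D2D links happen to be available.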
The 6G network constitutes the ideal environment for the deployment of DML, in particular of the federated and decentralized strategies which are expected to be able to cope with the massive population of heterogeneous workers [START_REF] Yang | Federated learning for 6g: Applications, challenges, and opportunities[END_REF] (see Table 1.1). end of 2025 [START_REF] Hasan | State of IoT 2022: Number of connected IoT devices growing 18% to 14.4 billion globally[END_REF]. Similarly, the number of smartphones is steadily increasing since their advent [START_REF] Ericsoon | Ericsson Mobility Report[END_REF]. Massive Device to Device Connectivity Connectivity for this vast amount of devices has been traditionally provided using the cellular networks paradigm, with the base station (BS) serving all the devices within its coverage area. While this network design is compatible with the federated learning (FL) and the multi-stage federated learning (MS-FL) protocols, the star communication topology is affected by an inherent communication bottleneck hindering massive connectivity [START_REF] Abad | Hierarchical federated learning across heterogeneous cellular networks[END_REF][START_REF] Hosseinalipour | Multi-stage hybrid federated learning over large-scale d2d-enabled fog networks[END_REF]. To cope with this enormous amount of connected devices, the standard cellular design is then expected to be complemented by more flexible communication topologies based on device-to-device (D2D) communication [START_REF] Asadi | A survey on device-to-device communication in cellular networks[END_REF][START_REF] Tehrani | Device-to-device communication in 5g cellular networks: challenges, solutions, and future directions[END_REF]. For this new communication paradigm, decentralized learning protocols are envisioned to play a crucial role in virtue of the absence of parameter servers (PS) and their flexibility with respect to the underlying communication topology. In fact, the communication phase of decentralized learning protocols requires nodes to communicate in a P2P fashion with other devices that are within range, and to perform aggregation locally [START_REF] Xing | Decentralized federated learning via sgd over wireless d2d networks[END_REF][START_REF] Shi | Over-the-air decentralized federated learning[END_REF]. To illustrate the advantages of decentralized learning in D2D network deployments, consider the scenario in Figure 1.1. A platoon of autonomous intelligent vehicles (AIV) wish to collaboratively train a ML model relying upon the short-range D2D links. Federated learning can be applied, by either resorting to a BS at the side of the road or by defining a vehicle as a PS. The first solution is often infeasible due to the mobility of the platoon, the second limits the number of collaborative devices to the AIVs that are within range of the PS, unless one allows multi-hop links. In contrast, decentralized learning enables the platoon to organize in a line topology and to harness the entirety of the resource using only the available D2D links. Challenges From the above discussion it is clear the essential nature of decentralized learning protocols for the future smart edge. However, the application of DML to 6G communication systems brings about fundamental challenges that have to be addressed in order to scale out ML and unleash the full potential of future IoT networks. The first challenge is the communication bottleneck introduced by unreliable wireless communication links. 
As the computing hardware develops, the communication phase of DML protocols becomes the most time-consuming step [START_REF] Van Berkel | Multi-core for mobile phones[END_REF]. Devices can also become unavailable for long periods, e.g. to perform additional tasks other than training, causing delays or biases in the training results [START_REF] Bonawitz | Towards federated learning at scale: System design[END_REF]. Furthermore, being an iterative procedure, the training of ML models can require numerous rounds of communication and large energy expenditures that are not always affordable by edge IoT devices [START_REF] Liu | Toward green iot: Energy solutions and key challenges[END_REF]. These aspects related to the application of DML to unreliable wireless networks will be the focus of the first part of this thesis. Another fundamental challenge derives from the collaboration among network devices that are heterogeneous. In fact, the 6G smart edge is expected to comprise devices with different sensing capabilities that sample processes influenced by geographical or user dependent factors [START_REF] Li | Federated learning: Challenges, methods, and future directions[END_REF]. Learning theory states that the aggregation of different data sources can heavily hinder the quality of the final model at testing time [START_REF] Mansour | Domain adaptation: Learning bounds and algorithms[END_REF]. Therefore, decentralized learning procedures have to be carefully designed in order to make collaboration fruitful rather than detrimental. The development of such decentralized learning algorithms will constitute the content of the second part of this thesis. Machine Learning for Wireless Communications The design of 5G and previous generations networks has been reliant on mathematical models of communication systems, with the role of real-world data being integrative and limited to the fine-tuning of their parameters. However, the increased level of complexity and flexibility of 6G communication systems have rendered the model-based design ineffective [START_REF] Zappone | Wireless networks design in the era of deep learning: Model-based, ai-based, or both?[END_REF]. The extent to which 6G networks will leverage complex physical layer technologies and adapt communication protocols based on user-centric and contextual information makes the mathematical description of communication models challenging [START_REF] Kato | Ten challenges in advancing machine learning technologies toward 6G[END_REF]. As such, the role of data and ML algorithms has become prominent in the design of future wireless networks as it allows to bypass explicit problem formulations. This paradigm shift allows us to obtain solutions to wireless communication problems in a model-free fashion, leveraging large collections of network measurements and by optimizing expressive parametric models. Complexity of Future Network Services In virtue of its versatility, ML is envisioned to be employed at all layers of the wireless protocol stack. Specifically, data-driven algorithms can reproduce the output of complex iterative optimization procedures at a smaller computational cost. 
For example, new physical layer technologies, such as intelligent reflecting surface (IRS) and millimeter wave communication, lead to complex optimization problems [START_REF] Liu | Reconfigurable intelligent surfaces: Principles and opportunities[END_REF][START_REF] Alghamdi | Intelligent surfaces for 6g wireless networks: A survey of optimization and performance analysis techniques[END_REF][START_REF] Roh | Millimeter-wave beamforming as an enabling technology for 5g[END_REF][START_REF] Kutty | Beamforming for millimeter wave communications: An inclusive survey[END_REF]. In these cases, classical solvers bear a large computational cost which is incompatible with 6G low-latency requirements. On the other hand, ML techniques such as deep unfolding are capable of efficiently providing solutions to many high-dimensional signal processing problems by the means of a single forward pass [START_REF] Hershey | Deep unfolding: Model-based inspiration of novel deep architectures[END_REF][START_REF] Balatsoukas-Stimming | Deep unfolding for communications systems: A survey and some new directions[END_REF]. Similarly, the cross-layer design of communication protocols gives rise to complicated resource allocation problems. In this context, reinforcement learning (RL) has demonstrated its ability to cope with the large action space characterizing these sequential decision-making processes, and to cut down the delays of block-wise optimized solutions [START_REF] Musaddiq | Reinforcement learning-enabled cross-layer optimization for low-power and lossy networks under heterogeneous traffic patterns[END_REF][START_REF] Mei | Intelligent radio access network slicing for service provisioning in 6g: A hierarchical deep reinforcement learning approach[END_REF]. At the same time, data-driven methods can also help tackle in a novel way problems that were out-of-reach or that were solved using costly iterative search procedures. Considering for example the case of millimeter wave beam alignment, ML can take advantage of high-dimensional contextual information to greatly reduce the search space compared to classical solutions [START_REF] Klautau | Lidar data for deep learningbased mmwave beam-selection[END_REF][START_REF] Zecchin | Lidar and position-aided mmwave beam selection with non-local cnns and curriculum training[END_REF]. The data-driven design can also enable accurate localization and sensing with massive antenna arrays by extracting spatial information from high-dimensional channel fingerprints [START_REF] Prasad | Machine learning methods for rss-based user positioning in distributed massive mimo[END_REF][START_REF] Vaca-Rubio | Assessing wireless sensing potential with large intelligent surfaces[END_REF]. These are key building blocks for context-aware networking protocols for which the model-based paradigm cannot provide general, yet adaptable, solutions. At the higher layer of the protocol stack, ML can be used to extract user-centric features from the application data and provide personalized services [START_REF] Fiorini | Unsupervised machine learning for developing personalised behaviour models using activity data[END_REF][START_REF] Goldenberg | Personalization in practice: Methods and applications[END_REF]. 
Uncertainty Quantification and Robustness in Wireless Systems
As we have seen, ML naturally finds application in many wireless communication problems; however, in these scenarios, standard figures of merit such as the prediction accuracy have to be weighted against other performance indicators that are specific to communication systems. The first is uncertainty quantification; namely, the ability of an ML model to faithfully quantify the uncertainty of its outputs [START_REF] Beven | The future of distributed models: model calibration and uncertainty prediction[END_REF]. This capability is essential in safety-critical decision-making processes, e.g. real-time control via the tactile internet [START_REF] Fettweis | The tactile internet: Applications and challenges[END_REF], and it can be used to enhance the network performance, e.g. in the context of cognitive radio, to adopt conservative behaviour upon uncertain spectrum sensing outcomes [START_REF] Wang | Advances in cognitive radio networks: A survey[END_REF]. Uncertainty quantification also lays the foundation of network self-monitoring [START_REF] Masur | Artificial intelligence in open-radio access network[END_REF]. ML communication systems with good uncertainty quantification capabilities can detect the deterioration of their performance (low confidence) and trigger timely retraining of their modules, for example when the operating conditions mutate. A second important prerequisite for ML to be applied to wireless communication systems is robustness against mismatches between the design assumptions and the real-world operating conditions [START_REF] Steinhardt | Resilience: A criterion for learning in the presence of arbitrary outliers[END_REF][START_REF] Martinez-Cantin | Practical Bayesian optimization in the presence of outliers[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF]. When deployed in wireless communication systems, ML models need to be operational with little to no human intervention over a variety of application scenarios that are often outside the designer's control. As a result, data-driven solutions to communication problems are obtained based on model assumptions that frequently do not capture the testing conditions, and therefore learning usually happens based on model classes that poorly approximate the phenomenon of interest. This learning condition, termed model misspecification, is known to greatly hamper the performance of ML solutions, and it renders the search for robust ML algorithms extremely important in wireless communication problems [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF].

Challenges
The major obstacles to the application of ML solutions in 6G networks are the strict ML reliability requirements and the adverse learning conditions characterizing wireless communication problems. In wireless systems, data generation processes often have short stationary intervals that impose strict upper bounds on the length of data acquisition procedures and the size of training data sets. In the limited data regime, the frequentist learning approach is known to perform poorly and to lead to over-confident predictors [START_REF] Guo | On calibration of modern neural networks[END_REF]. Therefore, in spite of its popularity, the application of frequentist ML solutions in wireless communication systems is incompatible with most of the ML requirements.
On the other hand, the Bayesian learning approach provides a mathematically grounded framework to reason about epistemic uncertainty, the uncertainty due to the limited amount of data [START_REF] Theodoridis | Machine learning: a Bayesian and optimization perspective[END_REF][START_REF] Madigan | Bayesian model averaging[END_REF]. This merit of the Bayesian framework is promising for the application of ML in 6G. However, the uncertainty quantification properties of Bayesian learning are reliant on two fundamental assumptions: the model class is well specified and the training data distribution matches the testing one. These two conditions are often violated in wireless communication systems. Strict energy efficiency and computation complexity requirements require the usage of simple models that are often rough approximations of the complex real-world phenomena the designer wishes to model. In this condition, the Bayesian learning rule does not retain good uncertainty quantification capabilities and is incapable of delivering calibrated models [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF][START_REF] Kalyani | OFDM channel estimation in the presence of NBI and the effect of misspecified NBI model[END_REF]. Additionally, the data collection procedures in realistic wireless systems are autonomous with little or no human interventions. Therefore, in stark contrast with the computer science domain, learning in wireless systems happens by the means of data sets that are small and frequently corrupted by outliers introduced by exogenous noise sources, and by malicious or inaccurate reporting [START_REF] Fawzy | Outliers detection and classification in wireless sensor networks[END_REF][START_REF] Jin | An RSSI-based localization algorithm for outliers suppression in wireless sensor networks[END_REF]. This mismatch between training and testing conditions is known to greatly degrade the performance of ML models. The above limitations of the frequentist and standard Bayesian framework motivate the last part of this thesis and the development of a robust Bayesian framework able to concurrently counteract model misspecification and outliers. Contributions and Thesis Outline This thesis work comprises three separate parts, each focusing on particular challenges resulting from the integration of machine learning algorithms and wireless communication. In Part I, we consider the challenges and opportunities deriving from the application of decentralized learning in D2D networks. In particular, in Chapter 2 we propose an implementation of the distributed stochastic gradient descent (DSGD) algorithm that is designed to cope with the inherent communication and computation impairments characterizing the edge of the wireless networks. The proposed algorithm leverages asynchronous updates and time-varying consensus strategies that allow it to tackle both the presence of straggling computing nodes and unreliable communication links. For the proposed algorithm, we derive a convergence guarantee for non-convex objectives, which allows us to quantify the impact of key network performance indicators on the convergence properties of the algorithm. The main takeaway of this section is that asynchronicity can speed-up training in spite of the aforementioned network impairments. 
In Chapter 3 we consider the bright side of D2D connectivity, and we explore the role of UAVs as potential promoters of edge intelligence. We show that the sparse and local connectivity of IoT networks, which potentially hinder decentralized learning, can be mitigated by a UAV relay. We derive an optimized trajectory for the UAV that speeds up training by promoting the diffusion of the locally optimized model across the network. This first part is based on the papers: In Part II, we focus on critical learning aspects deriving from the heterogeneity of the data generated at the edge. Specifically, in Chapter 4 we consider a D2D IoT network with devices sampling data from different processes, in turn leading to differently distributed local training data sets. In this scenario, we consider the task of producing a ML model that guarantees satisfactory performance for all devices. To this end, we formulate a decentralized distributionally robust problem and we propose AD-DGA, a decentralized learning algorithm to solve the associated minimax optimization problem in a communication-efficient manner. We establish non-asymptotic convergence guarantees both in the case of convex and non-convex objectives. The theoretical results are corroborated by experimental results highlighting the merits of the distributionally robust learning procedure. • E. Jeong † , M. In Chapter 5 we address statistical heterogeneity by adopting a different approach based on personalization. Starting from theoretical results derived from the domain adaptation literature, we replace the standard federated learning aggregation rule with a set of user-centric ones that serve groups of statistically homogeneous users. Each user-centric aggregation rule produces a model that is tailored for the target distribution associated with the group of users. Tuning the number of personalized rules allows trading personalization for communication resources. We show that the optimal number of user-centric rules can be obtained using clustering techniques. Our algorithm is shown to outperform state-of-the-art solutions both in terms of personalization capabilities and communication-efficiency. This part is based on the works: Finally in Part III, we focus on the application of machine learning to wireless communications problems. We first highlight the fundamental challenges characterizing this application domain; namely, small training data sets affected by outliers and misspecified model classes. These learning conditions render the frequentist learning rule inadequate to deliver reliable models with good uncertainty quantification capabilities in 6G systems. Therefore, the main contribution of this part is the development of the (m, t)-Bayesian learning framework, an extension of the generalized Bayesian inference that is robust to outliers and misspecified model classes. The proposed learning rule is shown to enjoy advantageous theoretical properties and to provide better ML models on a range of supervised and unsupervised wireless communication problems. • M. This final part of the thesis is based on the papers: • M. Asynchronous Decentralized Learning over Unreliable Wireless Networks Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device (D2D) communication. 
Prior works have been limited to the analysis of the performance of these algorithms over wireless networks with fixed topologies and reliable workers, which do not resemble realistic sixth generation (6G) network deployments. In this Chapter, we propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm, which is robust to the inherent computation and communication failures occurring at the wireless network edge. We theoretically analyze its performance and establish a non-asymptotic convergence guarantee. Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and outdated gradient information reuse in decentralized learning over unreliable wireless networks. Introduction Distributed learning algorithms empower devices in wireless networks to collaboratively optimize the model parameters by alternating between local optimization and communication phases. Leveraging the aggregated computational power available at the wireless network edge in a communication efficient [START_REF] Mcmahan | Communication-efficient learning of deep networks from decentralized data[END_REF] and privacy preserving manner [START_REF] Yan | Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties[END_REF], distributed learning is considered to be a key technology enabler for future intelligent networks. A promising paradigm, which enables collaborative learning among edge devices communicating in a peer-to-peer (P2P) manner, is decentralized learning [START_REF] Tsitsiklis | Distributed asynchronous deterministic and stochastic gradient optimization algorithms[END_REF]. Differently from federated learning, decentralized algorithms do not require a star topology with a central parameter server (PS), thus being more flexible with respect to the underlying connectivity [START_REF] Koloskova | A unified theory of decentralized sgd with changing topology and local updates[END_REF]. This feature renders decentralized learning particularly appealing for future wireless networks with D2D communication. Several decentralized learning schemes over wireless networks have been proposed and analyzed [START_REF] Xing | Decentralized federated learning via sgd over wireless d2d networks[END_REF][START_REF] Shi | Over-the-air decentralized federated learning[END_REF][START_REF] Ozfatura | Decentralized SGD with over-the-air computation[END_REF][START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF], highlighting the key role of over-the-air computation (AirComp) [START_REF] Amiri | Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air[END_REF] for low-latency training at the edge. Prior works have mainly considered wireless networks of reliable workers communicating in a fixed topology throughout the entire training procedure. Nevertheless, these assumptions are hardly met in practical systems, in which communication links can be intermittent or blocked, and devices may become temporarily unavailable due to computation impairments or energy saving reasons. Asynchronous distributed training has been shown to mitigate the effect of stragglers (slow workers) [START_REF] Dutta | Slow and stale gradients can win the race[END_REF][START_REF] Nadiradze | Decentralized SGD with asynchronous, local and quantized updates[END_REF][START_REF] Adikari | Decentralized optimization with non-identical sampling in presence of stragglers[END_REF]. 
However, harnessing the potential benefits of asynchronism in decentralized learning over unreliable wireless networks remains elusive. In this chapter, we propose an asynchronous implementation of decentralized stochastic gradient descent (DSGD) as a means to address the inherent communication and computation impairments of heterogeneous wireless networks. In particular, we study decentralized learning over a wireless network with a random time-varying communication topology, comprising unreliable devices that can become stragglers at any point of the learning process. To account for communication impairments, we propose a consensus strategy based on time-varying mixing matrices determined by the instantaneous network state. At the same time, we design the learning rates at the edge devices so as to preserve the stationary points of the original network objective in spite of the devices' heterogeneous computational capabilities. Finally, we provide a non-asymptotic convergence guarantee for the proposed algorithm, demonstrating that decentralized learning is possible even when outdated information from slow devices is used to locally train the models. Experimental results confirm our analysis and show that reusing stale gradient information can speed up the convergence of asynchronous DSGD.

System Model

We consider a network consisting of m wireless edge devices, in which each node i is endowed with a local loss function f_i : \mathbb{R}^d \to \mathbb{R} and a local parameter estimate \theta_i \in \mathbb{R}^d. The network objective consists in minimizing the aggregate network loss subject to a consensus constraint

\[
\underset{\theta_1,\dots,\theta_m}{\text{minimize}} \;\; f(\theta_1,\dots,\theta_m) := \frac{1}{m}\sum_{i=1}^{m} f_i(\theta_i)
\qquad \text{s.t.} \quad \theta_1 = \theta_2 = \dots = \theta_m.
\tag{2.1}
\]

This corresponds to the distributed empirical risk minimization problem whenever f_i is a loss term over a local dataset. In the following, we denote the network objective evaluated at a common parameter vector \theta as

\[
f(\theta) := f(\theta_1,\dots,\theta_m)\big|_{\theta_1=\dots=\theta_m=\theta},
\tag{2.2}
\]

and the mean parameter vector as

\[
\bar\theta = \frac{1}{m}\sum_{i=1}^{m}\theta_i.
\tag{2.3}
\]

To solve (2.1), we consider a DSGD algorithm according to which devices alternate between a local optimization based on gradient information (computation phase) and a communication phase.

Computation model

To locally optimize the model estimate \theta_i, we assume that each device can query a stochastic oracle satisfying the following properties.

Assumption 1. At each node i, the gradient oracle g_i(\theta) satisfies the following properties for all \theta \in \mathbb{R}^d:

\[
\mathbb{E}[g_i(\theta)] = \nabla_\theta f_i(\theta) \quad \text{(unbiasedness)} \tag{2.4}
\]
\[
\mathbb{E}\big\|g_i(\theta) - \nabla_\theta f_i(\theta)\big\|^2 \le \sigma^2 \quad \text{(bounded variance)} \tag{2.5}
\]
\[
\mathbb{E}\big\|g_i(\theta)\big\|^2 \le G^2. \quad \text{(bounded magnitude)} \tag{2.6}
\]

We admit the existence of straggling nodes and assume that a random subset of devices can become inactive or postpone local optimization procedures, e.g., due to computation impairments or energy saving reasons. As a result, devices may join the communication phase and disseminate a model that has been updated using gradient information computed at previous model estimates, or a model that has not been updated at all since the previous iteration(s). Formally, at every optimization round t, the local update rule is

\[
\theta_i^{(t+\frac{1}{2})} =
\begin{cases}
\theta_i^{(t)}, & \text{if device } i \text{ is a straggler at round } t\\[2pt]
\theta_i^{(t)} - \eta_i^{t}\, g_i\big(\theta_i^{(t-\tau_i)}\big), & \text{otherwise}
\end{cases}
\tag{2.7}
\]

where \eta_i^{t} is a local learning rate and the delay \tau_i \ge 0 accounts for the staleness of the gradient information at device i.
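To make the computation model concrete, the following minimal Python sketch simulates the straggler-aware local update (2.7) on a toy quadratic objective; the number of devices, straggling probabilities, learning rate, and oracle noise level are illustrative assumptions and not values used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, T = 5, 3, 200                       # devices, dimension, optimization rounds (assumed)
eta, sigma = 0.05, 0.1                    # learning rate and oracle noise level (assumed)
rho = rng.uniform(0.1, 0.4, size=m)       # per-device straggling probabilities (assumed)
centers = rng.normal(size=(m, d))         # toy local losses f_i(theta) = 0.5 * ||theta - c_i||^2

def oracle(i, theta):
    """Unbiased stochastic gradient of f_i with bounded variance (Assumption 1)."""
    return (theta - centers[i]) + sigma * rng.normal(size=d)

theta = np.zeros((m, d))                  # local estimates theta_i^{(t)}
pending = [None] * m                      # gradient in progress, possibly computed at a stale iterate

for t in range(T):
    for i in range(m):
        if pending[i] is None:            # start a new gradient computation at the current iterate
            pending[i] = oracle(i, theta[i])
        if rng.random() < rho[i]:         # computation not finished: device i is a straggler
            continue                      # theta_i^{(t+1/2)} = theta_i^{(t)}
        theta[i] -= eta * pending[i]      # apply the (possibly stale) gradient g_i(theta_i^{(t-tau_i)})
        pending[i] = None
    # the communication / consensus phase of the algorithm is omitted here (see the next section)

print("distance of the mean estimate from the minimizer:",
      np.linalg.norm(theta.mean(axis=0) - centers.mean(axis=0)))
```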
Communication model

The channel between any pair of devices i and j follows a Rayleigh fading model. At every communication iteration t, devices can exchange information according to a connectivity graph \mathcal G^{(t)} = (\mathcal V, \mathcal E^{(t)}), where \mathcal V = \{1, 2, \dots, m\} indexes the network nodes and (i, j) \in \mathcal E^{(t)} if devices i and j can communicate during round t. We consider symmetric communication links; therefore, the communication graph is undirected. While the connectivity graph is assumed to remain fixed within an optimization iteration, it may vary across optimization iterations due to deep fading, blockage, and/or synchronization failures.

Asynchronous Decentralized SGD

The proposed asynchronous DSGD procedure, which takes into account both computation and communication failures, is detailed in Algorithm 1 (inputs: number of devices m, number of iterations T, learning rates, and initial estimates \theta_i^{(0)} \in \mathbb{R}^d; output: the averaged iterate \bar\theta^{(T)} = \frac{1}{T}\sum_{t=0}^{T-1}\bar\theta^{(t)}, accumulated over the iterations t = 0, \dots, T-1). At the beginning of each training iteration t, non-straggling devices update the local estimate \theta_i^{(t)} according to (2.7), using potentially outdated gradient information. Subsequently, based on the current connectivity graph \mathcal G^{(t)} = (\mathcal V, \mathcal E^{(t)}), devices agree on a symmetric and doubly stochastic mixing matrix W^{(t)} using a Metropolis-Hastings weighting scheme [START_REF] Xiao | Distributed average consensus with time-varying Metropolis weights[END_REF]. In particular, the (i, j) entry of the mixing matrix W^{(t)} is obtained as

\[
w_{i,j}^{(t)} =
\begin{cases}
\dfrac{1}{1+\max\big\{d_i^{(t)},\, d_j^{(t)}\big\}}, & \text{if } (i,j)\in\mathcal E^{(t)} \text{ and } i\neq j\\[4pt]
1-\sum_{j\neq i} w_{i,j}^{(t)}, & \text{if } i=j\\[2pt]
0, & \text{otherwise,}
\end{cases}
\tag{2.8}
\]

where d_i^{(t)} is the degree of node i at communication round t. These weights are very simple to compute and are amenable to distributed implementation. In particular, each device requires only knowledge of the degrees of its neighbors to determine the weights on its adjacent edges. A communication phase then follows, in which devices exchange the updated estimates and employ a gossip scheme based on W^{(t)}. To leverage AirComp capabilities, devices employ analog transmission together with the scheduling scheme proposed in [START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF]. Accordingly, the communication phase is divided into multiple pairs of communication slots. Each pair consists of an AirComp slot and a broadcast slot, as illustrated in Fig. 2.1. During the AirComp slot s, the star center i receives the superposition of the signals transmitted by its neighboring devices \mathcal N^{(t)}(i) = \{j \in \mathcal V : (i, j) \in \mathcal E^{(t)}\}. In particular, each scheduled node j \in \mathcal N^{(t)}(i) transmits to the star center i the precoded signal

\[
x_j^{(s,t)} = \frac{\gamma_i^{(s,t)}}{h_{i,j}^{(s,t)}}\, w_{i,j}^{(t)}\, \theta_j^{(t+\frac{1}{2})}.
\]

After the AirComp and broadcast slots, the local estimate at device i is updated as

\[
\theta_i^{(t+1)} = (1-\zeta)\,\theta_i^{(t+\frac{1}{2})} + \zeta\left( \sum_{j=1}^{m} w_{i,j}^{(t)}\, \theta_j^{(t+\frac{1}{2})} + \tilde n_i^{(t)} \right)
\tag{2.15}
\]

where \tilde n_i^{(t)} \sim \mathcal N\big(0, \sigma_{w,i}^{(t)}\, \mathbf I_d\big) is a noise vector that accounts for the aggregation of noise components during the AirComp and broadcast transmissions at device i during communication phase t.
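The Metropolis-Hastings weights in (2.8) and the gossip update in (2.15) are straightforward to prototype. The following Python sketch builds W^{(t)} from one realization of a random connectivity graph, checks that it is symmetric and doubly stochastic, inspects its spectral gap, and applies a single noisy consensus step; the link probability, consensus step size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 6, 4                                    # devices and model dimension (assumed)
zeta, noise_std = 0.5, 0.01                    # consensus step size and aggregated noise level (assumed)

# one realization of the connectivity graph G^(t): each link is up with probability 0.6 (assumed)
A = (rng.random((m, m)) < 0.6).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric edge set, no self-loops

# Metropolis-Hastings mixing matrix, eq. (2.8)
deg = A.sum(axis=1)
W = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)

# the spectral gap of W controls how quickly disagreement is averaged out
eigvals = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
print("spectral gap:", 1.0 - eigvals[1])

# one noisy gossip step as in (2.15)
theta_half = rng.normal(size=(m, d))           # stand-in for the updated estimates theta_i^{(t+1/2)}
noise = noise_std * rng.normal(size=(m, d))    # aggregated AirComp/broadcast noise n_i^{(t)}
theta_next = (1 - zeta) * theta_half + zeta * (W @ theta_half + noise)
print("disagreement before/after the gossip step:",
      np.linalg.norm(theta_half - theta_half.mean(0)),
      np.linalg.norm(theta_next - theta_next.mean(0)))
```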
Convergence Analysis

In this section, we study the effect of communication and computation failures on the asynchronous DSGD procedure and prove its convergence.

Effect of Communication Failures

Communication impairments amount to a random connectivity graph with an edge set that differs at each optimization iteration. From an algorithmic perspective, random communication impairments result in DSGD with stochastic mixing matrices. A particular class of stochastic mixing matrices are those that satisfy the expected consensus property.

Definition 1 (Expected Consensus Rate [START_REF] Koloskova | A unified theory of decentralized sgd with changing topology and local updates[END_REF]). A random matrix W \in \mathbb{R}^{m\times m} is said to satisfy the expected consensus with rate p if, for any X \in \mathbb{R}^{d\times m},

\[
\mathbb{E}_W \big\| W X - \bar X \big\|_F^2 \le (1-p)\, \big\| X - \bar X \big\|_F^2
\tag{2.16}
\]

where \bar X = X\,\frac{\mathbf 1 \mathbf 1^{\mathsf T}}{m} and the expectation is with respect to the random matrix W.

Lemma 1. If the event that the connectivity graph \mathcal G^{(t)} is connected at round t has probability q > 0 and the Metropolis-Hastings weighting is used to generate the mixing matrix W^{(t)}, the expected consensus property is satisfied with rate p = q\delta > 0, with \delta being the expected consensus rate in case of a connected topology.

Proof. See Appendix A.1.

If the expected consensus property is satisfied, it is then possible to establish a convergent behavior for the estimates generated by the proposed algorithm.

Lemma 2 (Consensus inequality). Under Assumption 1, after T iterations, decentralized SGD with a constant learning rate \eta and consensus step size \zeta satisfies

\[
\sum_{i=1}^{m}\left\|\theta_i^{(T)} - \bar\theta^{(T)}\right\|^2 \le \eta^2\,\frac{12\, m\, G^2}{(p\zeta)^2} + \frac{\zeta^2}{p}\sum_{i=1}^{m}\sigma_{w,i}^2
\tag{2.17}
\]

where \sigma^2_{w,i} = \max_{t=0,\dots,T} \mathbb{E}\big\|\tilde n_i^{(t)}\big\|^2.

Proof. See Appendix A.2.

Overall, communication failures amount to a reduced expected consensus rate compared to the scenario with perfect communication. At the same time, dropping users that are delayed and unable to synchronize and perform AirComp renders the communication protocol more flexible. For instance, in Fig. 2.2, we consider a network of nine nodes organized according to different topologies and show the evolution of the average spectral gap of the mixing matrix with Metropolis-Hastings weights whenever devices not satisfying a certain delay constraint are dropped. As expected, stricter delay requirements result in sparser effective communication graphs and mixing matrices with smaller spectral gaps.

Here, \eta_i denotes the specified learning-rate value at device i in case of successful computation. Furthermore, to ensure that the procedure converges to stationary points of the network objective even when edge devices have different computing capabilities, the expected learning rates have to be equalized. In particular, if \mathbb{E}[\eta_i^{(t)}] = \eta for all i, stationary points are maintained in expectation, namely

\[
\sum_{i=1}^{m} \mathbb{E}\big[\eta_i^{(t)}\big]\, \nabla f_i(\theta) = 0 \;\Longrightarrow\; \nabla f(\theta) = 0.
\tag{2.19}
\]

Finally, the existence of straggling devices introduces asynchronicity in the decentralized optimization procedure. In particular, a device i that fails to complete the gradient computation at a given optimization iteration is allowed to apply the result in a later one, without discarding the computation results. While we do not specify the delay distribution, we instead introduce the following assumption regarding the staleness of gradients.

Assumption 2. For all iterations t, there exists a constant \gamma \le 1 such that

\[
\mathbb{E}\left\| \nabla f\big(\bar\theta^{(t)}\big) - \frac{1}{m}\sum_{i=1}^{m} \nabla f_i\big(\theta_i^{(t-\tau_i)}\big) \right\|^2
\le \gamma\, \mathbb{E}\left\| \nabla f\big(\bar\theta^{(t)}\big) \right\|^2 + \frac{L^2}{m} \sum_{i=1}^{m} \mathbb{E}\left\| \theta_i^{(t)} - \bar\theta^{(t)} \right\|^2.
\tag{2.20}
\]

The above assumption is similar to the one in [START_REF] Dutta | Slow and stale gradients can win the race[END_REF] with an additional consensus error term. Note that the value of \gamma is proportional to the staleness of the gradients and, in case of perfect synchronization (\gamma = 0), the bound amounts to a standard consensus error term.

Convergence Guarantee

In this subsection, we demonstrate the convergence of the decentralized optimization procedure to a stationary point of problem (2.1).

Theorem 1.
Consider a network of unreliable communicating devices in which the expected consensus rate is satisfied with constant p and each device can be a straggler with probability \rho_i < 1. If Assumptions 1 and 2 are satisfied, asynchronous DSGD with constant learning rates \eta_i = \min_j(1-\rho_j)\big/\big(\sqrt{4LT}\,(1-\rho_i)\big) and consensus rate \zeta = 1/T^{3/8} satisfies the following stationarity condition

\[
\frac{1}{T}\sum_{t=1}^{T} \left\| \nabla f\big(\bar\theta^{(t)}\big) \right\|^2
\le \frac{8\sqrt{L}\,\big(f(\bar\theta^{(T)}) - f^*\big)}{\gamma' \rho_{\min} \sqrt{T}}
+ \frac{3 G^2 L}{T^{1/4}\, p^2\, \gamma'}
+ \frac{L}{4T}\, \frac{\sigma^2}{m \gamma' \min_j (1-\rho_j)}
+ \sum_{i=1}^{m} \frac{\sigma^2_{w,i}}{m\gamma'} \left( \frac{2 L^2 \gamma}{p\, T^{3/8}} + \frac{4 L\sqrt{L}}{m\, T^{1/4}\, \rho_{\min}} \right)
\tag{2.21}
\]

where \gamma' = 1-\gamma, \rho_{\min} = \min_j(1-\rho_j), and f^* = \min_{\theta\in\mathbb{R}^d} f(\theta).

Proof. See Appendix A.3.

The above theorem establishes a vanishing bound on the stationarity of the returned solution, which involves quantities related to both communication and computation impairments. In particular, the constant of the slowest vanishing term T^{-1/4} contains the term p related to random connectivity, as well as \gamma' and \rho_{\min} due to stragglers.

Numerical Results

The effectiveness of the proposed asynchronous DSGD scheme is assessed using a network of m = 15 devices that collaboratively optimize the parameters of a convolutional neural network (CNN) for image classification on Fashion-MNIST. Gradients are calculated using batches of 16 data samples and the performance is evaluated using a test set of 500 images. We model the channel gain between each device pair as Rayleigh fading and we assume a shifted exponential computation time at each device, i.e., T_comp = T_min + Exp(µ) with T_min = 0.25 s and µ = 1. In Fig. 2.3, nodes communicate only when the channel is in favorable conditions, i.e., when the channel gain exceeds a certain minimum threshold h_min. This allows energy savings; however, while higher threshold values result in lower average energy consumption, they also produce mixing matrices with a smaller consensus rate, thus increasing the convergence time.

To study the effect of computation impairments, our proposed asynchronous learning algorithm is compared with: (i) synchronous DSGD, which waits for all devices to finish their computations; and (ii) synchronous DSGD with a delay barrier T_max, which discards computations from users that violate the maximum computing time. Compared to the latter, our asynchronous procedure allows slow devices to reuse stale gradient computations during later iterations. In Fig. 2.4, we plot the evolution of the test accuracy of the aforementioned algorithms under two different values of T_max. For a moderate delay constraint T_max = E[T_comp], asynchronous DSGD and synchronous DSGD with a delay barrier perform similarly, as the fraction of slow users is modest. Nonetheless, imposing a delay constraint and discarding slow devices greatly reduces the training time compared to the synchronous DSGD case. On the other hand, for a stringent delay requirement, T_max = (4/5) E[T_comp], reusing stale gradients turns out to be beneficial and the proposed asynchronous DSGD attains higher accuracy faster compared to synchronous DSGD with a delay barrier.

Conclusion

In this chapter, we have proposed and analyzed an asynchronous implementation of DSGD, which enables decentralized optimization over realistic wireless networks with unreliable communication and devices that are heterogeneous in terms of computation capabilities. We have studied the effect of both communication and computation failures on the training performance and proved non-asymptotic convergence guarantees for the proposed algorithm.
The main takeaway is that reusing outdated gradient information from slow devices is beneficial in asynchronous decentralized learning.

Chapter 3
UAV-Aided Decentralized Learning over Mesh Networks

In Chapter 2 we have shown that decentralized learning algorithms can be used to collaboratively train a machine learning (ML) model over realistic wireless networks affected by computation and communication impairments. As shown in Theorem 1, the convergence speed of the decentralized optimization algorithm depends severely on the degree of network connectivity, with denser network topologies leading to shorter convergence times. Consequently, the local connectivity of real-world mesh networks, due to the limited communication range of their wireless nodes, undermines the efficiency of decentralized learning protocols, rendering them potentially impractical. In this chapter we investigate the role of an unmanned aerial vehicle (UAV), used as a flying relay, in facilitating decentralized learning procedures in such challenging conditions. We propose an optimized UAV trajectory, defined as a sequence of waypoints that the UAV visits sequentially in order to transfer intelligence across sparsely connected groups of users. We then provide a series of experiments highlighting the essential role of UAVs in the context of decentralized learning over mesh networks.

Introduction

Most decentralized learning schemes over wireless networks have been proposed and analyzed under the assumption that the network topology is strongly connected on average [START_REF] Xing | Decentralized federated learning via sgd over wireless d2d networks[END_REF][START_REF] Shi | Over-the-air decentralized federated learning[END_REF][START_REF] Ozfatura | Decentralized SGD with over-the-air computation[END_REF][START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF][START_REF] Jeong | Asynchronous decentralized learning over unreliable wireless networks[END_REF]. However, real-world mesh networks are characterized by local, rather than global, connectivity, and groups of nodes are often isolated or sparsely connected to the rest of the network due to their limited communication range. In these scenarios, decentralized learning is either not possible or its performance is severely hampered. At the same time, unmanned aerial vehicles (UAVs) represent an appealing solution to mitigate limited ground connectivity. UAVs have been used as smart flying relays to improve multi-hop routing capabilities [START_REF] Esrafilian | Autonomous UAV-aided mesh wireless networks[END_REF], to self-organize in flying mesh networks [START_REF] Behnke | Comparison of distributed ad-hoc network planning algorithms for autonomous flying robots[END_REF], and to improve coverage for ground users [START_REF] Sabino | Topology control of unmanned aerial vehicle UAV mesh networks: A multi-objective evolutionary algorithm approach[END_REF]. In this chapter, we investigate the role of UAVs in aiding decentralized learning protocols over ground mesh wireless networks.
The combination of FL and UAV assisted communication has recently been explored; however, these studies have been limited to scenarios in which the UAV has the role of a parameter server (PS), i.e., aggregating model estimates received from ground nodes and subsequently broadcasting the aggregated model back to the ground [START_REF] Donevski | Federated learning with a drone orchestrator: Path planning for minimized staleness[END_REF][START_REF] Mrad | Federated learning for UAV swarms under class imbalance and power consumption constraints[END_REF]. The results presented in this chapter differ from these previous works as it considers the UAV serving as a relay and it assumes that ground nodes are able to carry out learning even in the absence of a UAV, by exploiting the already existing ground D2D links. This feature dramatically improves the convergence speed, versatility and fault tolerance of the proposed solution. We propose an optimized trajectory, given as a sequence of waypoints visited by the UAV, which is designed to intelligently provide relaying opportunities to ground nodes and to diffuse locally optimized model across subsets of users with limited connectivity. We provide experiments showing that with the aid of a UAV following the proposed trajectory, it is possible to harness the full potential of the mesh network in spite of sparse and local connectivity, and to accelerate learning compared to UAV-aided federated learning algorithms. System Model We consider a network of m + 1 devices comprising m ground users plus a UAV serving as a flying relay. We index the ground user by 1, . . . , m and we denote the location of the i-th ground device by p i = [x i , y i , z i ] ∈ R 3 , where x i and y i are the horizontal coordinates while z i denotes the elevation. We assume that ground devices are static, namely their position is not a function of time. On the other hand, the UAV location is denoted p uav = [x, y, z] ∈ R 3 and is assumed to be time-varying in the horizontal coordinates x and y, but not in the vertical one z. Furthermore, the UAV elevation z is set to be larger than a safety altitude z min . Communication Model Communication among network nodes takes place in rounds. At every communication round t ∈ {τ, 2τ, . . . }, the channel gain coefficient g (t) i,j ∈ R, expressed in dB, between each pair of distinct user (i, j) ∈ [1 : m] 2 is given by g (t) i,j = g (t) j,i = β g -α g 10 log 10 d i,j + η (t) g (3.1) where α g is the path loss exponent, β g is the average channel gain in dB at a reference distance d = 1, d i,j = ∥p ip j ∥ 2 is the distance between nodes i and j, and η g ∼ N (0, σ 2 g ) models the shadowing effects. For simplicity, we assume that the link parameters α g , β g and σ g are homogeneous across pairs of ground users; however, the proposed solution can easily accommodate heterogeneous channel parameters. At communication round t, the channel gain link between the UAV and a ground node i under Line-of-Sight (LoS) conditions is modeled as g (t) i,L = β L -α L 10 log 10 d (t) i + η (t) L , (3.2) while under Non-Line-of-Sight (NLoS) propagation it follows g (t) i,N = β N -α N 10 log 10 d (t) i + η (t) N (3.3) where d (t) i = p i -p (t) uav 2 denotes the time-dependent distance between the UAV and user i, α L , β L and η (t) L ∼ N (0, σ 2 L ) are the channel parameters under LoS, while α N , β N and η (t) N ∼ N (0, σ 2 N ) describe the channel under NLoS propagation. These parameters are assumed to be homogeneous across users for simplicity. 
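As a quick illustration of the channel models (3.1)-(3.3), the following Python sketch draws ground-to-ground and UAV-to-ground channel gains for one communication round; the numerical parameter values and the toy deployment are illustrative assumptions, not the ones used in the simulations of this chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

def ground_gain(p_i, p_j, alpha_g=3.0, beta_g=-30.0, sigma_g=1.0):
    """Ground-to-ground channel gain in dB as in (3.1); parameter values are assumed."""
    d = np.linalg.norm(p_i - p_j)
    return beta_g - alpha_g * 10.0 * np.log10(d) + sigma_g * rng.normal()

def uav_gain(p_i, p_uav, los, alpha=(2.5, 3.0), beta=(-30.0, -30.0), sigma=(1.0, 1.0)):
    """UAV-to-ground gain in dB under LoS (3.2) or NLoS (3.3) propagation; parameters assumed."""
    d = np.linalg.norm(p_i - p_uav)
    k = 0 if los else 1
    return beta[k] - alpha[k] * 10.0 * np.log10(d) + sigma[k] * rng.normal()

# toy deployment: two ground users at z = 0 and a UAV hovering at 10 m altitude
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([20.0, 5.0, 0.0])
p_uav = np.array([10.0, 2.0, 10.0])

print("g_12 [dB]:", ground_gain(p1, p2))
print("g_1,LoS [dB]:", uav_gain(p1, p_uav, los=True))
print("g_1,NLoS [dB]:", uav_gain(p1, p_uav, los=False))
```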
The LoS probability between the UAV at position p_uav^{(t)} and user i is modeled using the s-model [START_REF] Al-Hourani | Optimal LAP altitude for maximum coverage[END_REF]

\[
\rho_i^{(t)} = \frac{1}{1 + e^{-\left(a_i \theta_i^{(t)} + b_i\right)}}
\tag{3.4}
\]

where a_i and b_i are model coefficients related to the propagation environment, and \theta_i^{(t)} is the elevation angle between the UAV and ground user i.

At every communication round t, a link between network nodes is modeled using a simple, yet classical, on-off channel model: two nodes can communicate if and only if the associated channel gain exceeds a threshold g_th. Therefore, the resulting ground connectivity matrix A_{gr}^{(t)} \in [0,1]^{m\times m} is symmetric, has diagonal elements equal to 1, and has Bernoulli-distributed off-diagonal entries

\[
\big[A^{(t)}_{\mathrm{gr}}\big]_{j,i} = \big[A^{(t)}_{\mathrm{gr}}\big]_{i,j} \sim \mathrm{Bern}\left( 1 - \Phi\left( \frac{g_{\mathrm{th}} - \bar g^{(t)}_{i,j}}{\sqrt{\sigma_g}} \right) \right)
\tag{3.5}
\]

where \Phi(\cdot) denotes the standard Gaussian cumulative distribution function and \bar g^{(t)}_{i,j} = \mathbb{E}[g^{(t)}_{i,j}]. Similarly, the connectivity between the UAV and the ground users is described by a vector a_{uav}^{(t)} \in [0,1]^{1\times m} with entries

\[
\big[a^{(t)}_{\mathrm{uav}}\big]_i \sim \mathrm{Bern}\left( 1 - \bar\rho_i\, \Phi\left( \frac{g_{\mathrm{th}} - \bar g^{(t)}_{i,N}}{\sqrt{\sigma_N}} \right) - \rho_i\, \Phi\left( \frac{g_{\mathrm{th}} - \bar g^{(t)}_{i,L}}{\sqrt{\sigma_L}} \right) \right)
\tag{3.6}
\]

where \bar\rho_i = 1 - \rho_i, \bar g^{(t)}_{i,L} = \mathbb{E}[g^{(t)}_{i,L}] and \bar g^{(t)}_{i,N} = \mathbb{E}[g^{(t)}_{i,N}]. Based on the instantaneous connectivity status, determined by the realization of a_{uav}^{(t)}, the UAV serves as a one-hop relay for the communication among ground users. The connectivity matrix resulting from the relaying opportunities offered by the UAV to ground users is obtained as

\[
A^{(t)}_{\mathrm{uav}} = \big(a^{(t)}_{\mathrm{uav}}\big)^{\mathsf T} a^{(t)}_{\mathrm{uav}}.
\tag{3.7}
\]

It follows that A_{uav}^{(t)} is a symmetric random binary matrix whose entry (i, j) is 1 if and only if there exists a relaying opportunity between ground users i and j, and 0 otherwise. Overall, the aggregated connectivity matrix, accounting for link existence either by D2D ground communication or thanks to UAV relaying, is given by

\[
A^{(t)} = J_m - \big(J_m - A^{(t)}_{\mathrm{uav}}\big) \odot \big(J_m - A^{(t)}_{\mathrm{gr}}\big)
\tag{3.8}
\]

where J_m is the m \times m all-one matrix and \odot denotes the Hadamard product. For every realization of the connectivity matrix A^{(t)}, the set of devices connected to node i is

\[
\mathcal N^{(t)}(i) := \big\{ j : [A^{(t)}]_{i,j} = 1 \big\}.
\tag{3.9}
\]

Note that every ground user is connected to itself.

Learning procedure

We assume that the goal of the ground devices is to collaboratively train a machine learning model in order to benefit from the aggregation of local computational resources and in-situ data. In particular, we assume that each ground device is endowed with a local loss function f_i : \mathbb{R}^d \to \mathbb{R}, and the network objective is

\[
\underset{\theta_1,\dots,\theta_m}{\text{minimize}} \;\; f(\theta_1,\dots,\theta_m) := \frac{1}{m}\sum_{i=1}^{m} f_i(\theta_i)
\qquad \text{s.t.} \quad \theta_1 = \theta_2 = \dots = \theta_m.
\tag{3.10}
\]

In the following, we denote the average network estimate as \bar\theta = \frac{1}{m}\sum_{i=1}^{m}\theta_i. To solve (3.10), we consider the asynchronous decentralized stochastic gradient descent (DSGD) algorithm proposed in [START_REF] Jeong | Asynchronous decentralized learning over unreliable wireless networks[END_REF]. According to this optimization scheme, ground devices alternate between a local optimization phase based on gradient information (computation phase) and a communication phase to exchange the updated local estimates with one-hop neighbours. To locally optimize the model estimate \theta_i, we assume that each device i can query a stochastic oracle that is unbiased,

\[
\mathbb{E}[g_i(\theta_i)] = \nabla_\theta f_i(\theta_i),
\tag{3.11}
\]

and has bounded variance and magnitude,

\[
\mathbb{E}\big\|g_i(\theta_i) - \nabla_\theta f_i(\theta_i)\big\|^2 \le \sigma^2,
\tag{3.12}
\]
\[
\mathbb{E}\big\|g_i(\theta_i)\big\|^2 \le G^2.
\tag{3.13}
\]
Furthermore, to account for computation impairments and energy constraints, we admit the existence of straggling ground users that can become inactive or postpone the local optimization computation. At each communication round t, the local update rule at device i becomes

\[
\theta_i^{(t+\frac{1}{2})} =
\begin{cases}
\theta_i^{(t)}, & \text{if device } i \text{ is a straggler at round } t\\[2pt]
\theta_i^{(t)} - \eta_i^{(t)}\, g_i\big(\theta_i^{(t-\tau_i)}\big), & \text{otherwise}
\end{cases}
\tag{3.14}
\]

where \eta_i^{(t)} is a local learning rate and the delay \tau_i \ge 0 accounts for the staleness of the gradient information at device i. Subsequently, each device i shares its updated local estimate \theta_i^{(t+\frac{1}{2})} with its neighbours \mathcal N^{(t)}(i) using either a digital or an analog communication protocol [START_REF] Ozfatura | Decentralized SGD with over-the-air computation[END_REF][START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF]. The received estimates are then averaged to obtain the new local estimate

\[
\theta_i^{(t+1)} = \sum_{j\in\mathcal N^{(t)}(i)} w_{i,j}\, \theta_j^{(t+\frac{1}{2})}
\tag{3.15}
\]

where w_{i,j} are the entries of the mixing matrix W^{(t)} obtained using the Metropolis-Hastings weighting scheme [START_REF] Xiao | Distributed average consensus with time-varying Metropolis weights[END_REF] given in (2.8). In [START_REF] Jeong | Asynchronous decentralized learning over unreliable wireless networks[END_REF], it has been shown that the performance of the asynchronous DSGD optimization procedure depends both on the activity of the users and on the degree of wireless network connectivity. In particular, more connected network topologies converge faster than sparser ones. This motivates the use of a UAV to facilitate the diffusion of locally optimized models, and to render the decentralized learning protocol more efficient in spite of sparse and local ground connectivity.

Trajectory Optimization

At every optimization round t, the connectivity matrix A^{(t)} associated with the network of ground users depends on the UAV location p_uav^{(t)}, and it can be enhanced thanks to the relaying opportunities the UAV provides to ground devices. In the following, we propose to optimize the UAV trajectory during the optimization process so that the distributed learning procedure is facilitated. A key quantity used to measure the information diffusion capabilities of a network is the expected consensus rate [START_REF] Koloskova | A unified theory of decentralized sgd with changing topology and local updates[END_REF]. While this quantity can be used to characterize the rate of convergence of the DSGD procedure, it does not provide a tractable optimization objective from which to derive the UAV trajectory. For this reason, we define a more tractable surrogate objective that yields the optimized trajectory as a sequence of waypoints \{w_i\}_{i=1}^{n}. In particular, we assume that the initial waypoint w_0 is equal to the initial UAV location p_uav^{(0)}, and that the sequence of waypoints is determined on-the-fly, with the waypoint w_{i+1} being computed when the UAV reaches the location specified by the previous waypoint w_i. The waypoints are designed so as to hover the UAV at a position that maximizes the probability of creating relaying opportunities between users that, up to communication round t, have not been able to communicate. To this end, we recursively define a link activity rate matrix R^{(t)} as

\[
R^{(0)} = \mathbf 0
\tag{3.16}
\]
\[
R^{(t+1)} = \gamma R^{(t)} + (1-\gamma)\, \mathbb{E}\big[A^{(t)}\big],
\tag{3.17}
\]

for \gamma \in (0, 1).
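To make the link model and the activity-rate bookkeeping concrete, the following Python sketch samples one realization of the connectivity matrix A^{(t)} along the lines of (3.5)-(3.8) (with a simplified LoS-only UAV link) and applies the recursion (3.16)-(3.17); the deployment, channel parameters, threshold, and discount factor are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
m, gamma, g_th = 5, 0.9, -60.0                      # devices, discount factor, gain threshold (assumed)
positions = rng.uniform(0, 30, size=(m, 2))         # toy ground deployment (z = 0)
p_uav, z_uav = np.array([15.0, 15.0]), 10.0         # UAV horizontal position and altitude (assumed)

def mean_gain(d, alpha=3.0, beta=-30.0):
    return beta - alpha * 10.0 * np.log10(d)

# ground link activation probabilities, as in (3.5) with unit shadowing variance (assumed)
P_gr = np.eye(m)
for i in range(m):
    for j in range(i + 1, m):
        d = np.linalg.norm(positions[i] - positions[j])
        P_gr[i, j] = P_gr[j, i] = 1.0 - norm.cdf(g_th - mean_gain(d))

# UAV link activation probabilities (LoS-only simplification of (3.6))
d_uav = np.sqrt(np.sum((positions - p_uav) ** 2, axis=1) + z_uav ** 2)
p_uav_link = 1.0 - norm.cdf(g_th - mean_gain(d_uav, alpha=2.5))

# one realization of the connectivity matrices
A_gr = (rng.random((m, m)) < P_gr).astype(int)
A_gr = np.triu(A_gr, 1); A_gr = A_gr + A_gr.T + np.eye(m, dtype=int)
a_uav = (rng.random(m) < p_uav_link).astype(int)
A_uav = np.outer(a_uav, a_uav)                      # (3.7): relay link iff both ends reach the UAV
J = np.ones((m, m), dtype=int)
A = J - (J - A_uav) * (J - A_gr)                    # (3.8): union of ground and relayed links

# link activity rate recursion (3.16)-(3.17), using the expected connectivity matrix
E_A = 1.0 - (1.0 - np.outer(p_uav_link, p_uav_link)) * (1.0 - P_gr)
R = np.zeros((m, m))
R = gamma * R + (1 - gamma) * E_A
print(A, "\n", np.round(R, 2))
```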
Denoting by t i the communication round in which the UAV reaches the waypoint w i , the subsequent waypoint is obtained solving the following optimization problem maximize pt (1 -R (t i ) ) ⊙ E[A uav ] 1 (3.18) where ∥•∥ 1 denotes the entry-wise 1-norm; namely, the sum of absolute values of the matrix entries. The optimization problem (3.18) determines the next waypoint so as to maximize the relaying opportunities between pair of users associated to links with a low activity rate. The main challenges in solving (3.18) are the lack of a close form expression for Φ(•) and the non-convexity of the objective. In order to make the objective differentiable, we approximate Φ(•) using the sigmoid function S(x) := 1 1 + e -αx (3.19) where α is a fitting parameter set to α = -1.702 as proposed in [START_REF] Lieberman | Introduction to operations research[END_REF]. This approximation of the objective allows us to employ efficient gradient based solvers to generate the sequence of waypoints. Nonetheless, the optimization objective (3.18) remains non-convex. In order to reduce the probability of obtaining a waypoint associated to poor local maxima, we employ gradient descent with restarts. The number of restart points is chosen to meet the UAV computation constraints and the restart points are sampled uniformly at random inside the convex hull determined by ground user locations. Simulations To test the proposed solution, we consider a network deployment of 30 × 60m 2 with 23 ground devices deployed at the ground level (z = 0m) as depicted in Figure 3.1. The propagation parameters describing the ground links channel gain are set to α g = 3, β g = -30 dB and σ 2 g = 1, and the channel gain threshold determining active/inactive links is fixed to g th = -60 dB. In the considered deployment, ground users are naturally clustered together in 3 distinct groups and the path exponent is such that communication within each cluster is possible, but links between users belonging to different clusters are active with negligible probability. Furthermore, we consider an obstruction (gray vertical line) that amounts to a 35 dB attenuation for the links between users residing on opposite side of the line. A UAV flying at a fixed altitude of 10m serves as a relay to enhance ground connectivity. The channel gain parameters describing the link between ground users and the UAV under LoS propagation are α L = 2.5, β L = -30 dB and σ 2 L = 1, while under NLoS are α N = 3, β N = -30 dB and σ 2 N = 1. We assume that the devices store only 10 data samples from the FashionMNIST dataset, which alone would not guarantee good inference capabilities. Therefore, they wish to harness the distributed dataset to jointly train a machine learning model. In the following experiments we consider a fully connected neural network with one hidden layers comprising 25 neurons. Ground devices update the model employing a gradient descent optimizer and a geometrically decaying learning rate η (t) i = 0.1 • (0.995) t . In this setting, the role of the UAV is to intelligently create relaying opportunities so to promote collaborative learning, and to facilitate the ground users to harness the entire distributed dataset by global diffusion of the locally optimized models, in spite of sparse and local connectivity. 
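A minimal sketch of the waypoint search in (3.18) is given below: the probit terms are smoothed with a sigmoid as in (3.19), and the smoothed objective is maximized by gradient ascent from several restart points sampled inside the convex hull of the user locations. The objective construction, step size, and restart count are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
m, g_th, z_uav = 5, -60.0, 10.0
users = rng.uniform(0, 30, size=(m, 2))                 # toy ground user positions
R = rng.uniform(0, 1, size=(m, m)); R = (R + R.T) / 2   # stand-in link activity rate matrix

def sigmoid(x, a=1.702):                                # smooth stand-in for Phi(.), cf. (3.19)
    return 1.0 / (1.0 + np.exp(-a * x))

def relay_prob(p):
    """Smoothed probability that each user reaches the UAV hovering at horizontal position p."""
    d = np.sqrt(np.sum((users - p) ** 2, axis=1) + z_uav ** 2)
    g_mean = -30.0 - 2.5 * 10.0 * np.log10(d)           # LoS mean gain (assumed parameters)
    return 1.0 - sigmoid(g_th - g_mean)

def objective(p):
    """Surrogate of (3.18): reward relay opportunities between pairs with low activity rate."""
    q = relay_prob(p)
    return np.sum((1.0 - R) * np.outer(q, q))

def grad(p, eps=1e-4):                                  # finite-difference gradient of the surrogate
    e = np.eye(2) * eps
    return np.array([(objective(p + e[k]) - objective(p - e[k])) / (2 * eps) for k in range(2)])

best_p, best_val = None, -np.inf
for _ in range(5):                                      # gradient ascent with random restarts
    p = rng.dirichlet(np.ones(m)) @ users               # restart point inside the users' convex hull
    for _ in range(200):
        p = p + 0.5 * grad(p)
    if objective(p) > best_val:
        best_p, best_val = p.copy(), objective(p)

print("next waypoint:", np.round(best_p, 2), "objective:", round(best_val, 3))
```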
We benchmark the proposed solution, corresponding to a UAV visiting the sequence of waypoints returned by (3.18) and serving users for 20 optimization rounds each time a waypoint is reached, against alternative trajectory optimization schemes. In particular, we consider the cluster mid-points traversal trajectory, according to which the UAV first runs a k-means clustering algorithm to detect natural clusters of the ground device positions, and then sequentially visits the mid-points between each pair of cluster centers. Once a mid-point is reached, the UAV serves ground users for 20 optimization rounds and then flies to the next location. We also consider the barycenter placement, in which the UAV hovers at a fixed location p_uav^bar determined by the mean ground user location. Finally, we consider the maximum connectivity placement, in which the UAV location is fixed and set equal to the coordinate that maximizes the probability of creating relay links. This is obtained by solving the following maximization problem

\[
\underset{p}{\text{maximize}} \;\; \big\| (1 - A_{\mathrm{gr}}) \odot \mathbb{E}[A_{\mathrm{uav}}] \big\|_1.
\tag{3.20}
\]

For both the barycenter and the maximum connectivity placements, the UAV serves ground users at every communication round, as it hovers at a fixed location for the entire learning procedure. We consider the centralized learning solution, corresponding to a fully connected topology, as a performance upper bound. For all listed approaches, we run the distributed optimization protocol for 500 rounds and track the testing accuracy of the mean network estimate \bar\theta. We also consider the network consensus error metric

\[
\varepsilon(\theta_1,\dots,\theta_m) = \frac{1}{dm}\sum_{i=1}^{m} \big\| \theta_i - \bar\theta \big\|_2^2,
\tag{3.21}
\]

which measures the degree of disagreement among the ground nodes' estimates and is a proxy to assess how effective the UAV trajectory is in satisfying the consensus constraint in (3.18).

In Figure 3.1 we plot the ground user locations (black dots) and overlay the UAV trajectories during the training process. Specifically, in the top left corner, we report in green the UAV trajectory returned by the proposed scheme. The UAV frequently hovers between the disconnected components in order to relay information between the clusters and to diffuse model estimates between groups of interconnected ground nodes that rarely communicate using D2D ground links. In the top right corner, we provide the trajectory of the cluster mid-points traversal solution (gray). The UAV successfully identifies the clusters and sequentially visits the mid-points. This strategy enhances the ground connectivity, but it fails at providing relaying opportunities across the two bottom components that are disconnected due to the propagation obstacle. In the bottom row, we plot the barycenter (left plot) and maximum connectivity (right plot) placements. Both solutions yield a static UAV placement that is fixed throughout the entire training phase.

In Figure 3.2a we report the testing accuracy attained by the mean network estimate \bar\theta when decentralized learning is assisted by a UAV flying according to the different trajectories. The testing accuracy allows us to quantify the extent to which the UAV is beneficial to the collaborative learning process. The proposed solution (in green) is able to take full advantage of the distributed dataset, and it successfully enables fast distributed training with a final accuracy level that matches the accuracy of the centralized solution (in black).
The barycenter solution (in red) converges slowly to a lower accuracy level, highlighting the necessity of a dynamic UAV placement to take full advantage of the network resources. The cluster mid-points traversal solution, despite enhancing ground connectivity, it is not able to connect all the disjoint components and therefore it converges to suboptimal solution. Similarly, the maximum connectivity placement is not able to connect all the disjoint network components and to transfer intelligence across different clusters. In Figure 3.2b we report the network consensus error evolution attained by the different UAV trajectories. While the proposed trajectory is able to reduce the consensus error during training and it eventually ensures that the edge devices reach a common learning goal, the other baselines are not able to drive network nodes to a globally shared model estimate. Finally, we propose a comparison between the proposed decentralized learning scheme and a UAV-aided federated learning protocol, as in [START_REF] Donevski | Federated learning with a drone orchestrator: Path planning for minimized staleness[END_REF]. In particular, for the federated learning algorithm, we assume that the UAV serves as a PS, and it orchestrates the learning procedure by collecting locally optimized models by the network devices and broadcasting aggregated estimates back to the ground users. On the other hand, in case of decentralized learning, the UAV serves as a relay and the ground devices can also exploit the available D2D ground links to exchange model estimates, in principle being able to perform learning without the presence of a UAV. As a result, the proposed protocol is more flexible with respect to the communication topology, it can easily accommodate multiple assisting UAVs, and converges faster. To compare these two approaches we study the same deployment as in Figure 3.1. We assume that the relaying UAV follows the proposed trajectory, while the UAV serving as orchestrator follows the trajectory obtained solving (3.18) setting A (t) gr = 0, trying to serve large groups of users prioritizing stale ones, akin to [START_REF] Donevski | Federated learning with a drone orchestrator: Path planning for minimized staleness[END_REF]. In Figure 3.3 we report the testing accuracies attained by the protocols. The proposed approach drastically reduces the training time, halving the number of iterations required to reach the final performance obtained by the federated learning protocol. Conclusion In this chapter, we have studied the benefits that a flying relay can bring to a network of wireless devices that are jointly training a machine learning model. We have proposed a trajectory optimization scheme that enhances the ground connectivity so as to facilitate the diffusion of locally optimized model estimates, and that enables ground users to take full advantage of network computational and data resources. We have also provided a series of experiments highlighting how a properly designed UAV trajectory can greatly promote decentralized training and outperform UAV-aided federated learning protocols. Part II Robust Learning for Heterogeneous Data Chapter 4 Communication-Efficient Distributionally Robust Decentralized Learning As shown in Chapter 2 and 3, decentralized learning algorithms empower interconnected edge devices to share data and computational resources to collaboratively train a machine learning model without the aid of a central coordinator (e.g. an orchestrating basestation). 
In the case of heterogeneous data distributions at the network devices, collaboration can yield predictors with unsatisfactory performance for a subset of the devices. For this reason, in this chapter we consider the formulation of a distributionally robust decentralized learning task and we propose a decentralized single-loop gradient descent/ascent algorithm (AD-GDA) to solve the underlying minimax optimization problem. We render our algorithm communication-efficient by employing a compressed consensus scheme and we provide convergence guarantees for smooth convex and non-convex loss functions. Finally, we corroborate the theoretical findings with empirical evidence of the ability of the proposed algorithm to provide unbiased predictors over a network of collaborating devices with highly heterogeneous data distributions.

Introduction

Decentralized learning algorithms have gained an increasing level of attention, mainly due to their ability to harness, in a fault-tolerant and privacy-preserving manner, the large computational power and data availability at the network edge by exploiting device-to-device (D2D) communication [START_REF] Ozfatura | Decentralized SGD with over-the-air computation[END_REF][START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF][START_REF] Jeong | Asynchronous decentralized learning over unreliable wireless networks[END_REF]. According to this framework, a set of interconnected devices (e.g., smartphones, IoT devices, health monitors, research labs, etc.) collaboratively trains a machine learning model, alternating between local model updates, based on in-situ data, and D2D communication to exchange model-related information. Compared to federated learning, in which a swarm of edge devices communicates with a central parameter server (e.g., a shared access point) at each communication round, fully decentralized learning has the benefits of removing the single point of failure and of alleviating the communication bottleneck inherent to the star topology.

Figure 4.1: Example on the cells out of sample (COOS) dataset [START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF]. We consider a network of 5 devices, with one device sampling images using a different microscope from the rest of the collaborating devices. CHOCO-SGD (solid lines), a non-robust decentralized learning scheme, yields a model with highly imbalanced performance between the two types of instruments, while AD-GDA (dashed curves), the proposed distributionally robust algorithm, drastically reduces the accuracy gap and improves fairness among the collaborating devices.

The heterogeneity of the data generated in a distributed fashion by the Internet-of-Things (IoT) entails a major challenge, tied to the notions of fairness [START_REF] Dwork | Fairness through awareness[END_REF] and robustness [START_REF] Quiñonero-Candela | Dataset shift in machine learning[END_REF]. In the distributed setup, the customary global loss function is the weighted sum of the local empirical losses, with each term weighted by the fraction of samples that the associated device stores. However, in the case of data heterogeneity across participating parties, a model minimizing such a definition of risk can lead to unsatisfactory and unfair inference capabilities for certain subpopulations. Consider the example given in Fig.
4.1 in which a network of IoT devices with different sensing capabilities (e.g., IoT devices with heterogeneous measuring instruments) wishes to collaboratively train a machine learning model. In this setting, a model obtained by myopically minimizing the standard notion of risk defined over the aggregated data can be severely biased towards some devices at the expense of others, leading to potentially dangerous or unfair decision making processes. To tackle this issue, distributionally robust learning (DRL) aims at maximizing the worst-case performance over a set of distributions, termed as uncertainty set, which possibly contains the testing distribution of interest. Typical choices of the uncertainty sets are perturbed version of the training distribution [START_REF] Esfahani | Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations[END_REF] or, whenever the training samples come from a mixture of distributions, the set of potential subpopulations resulting from such mixture [START_REF] Duchi | Distributionally robust losses against mixture covariate shifts[END_REF][START_REF] Duchi | Learning models with uniform performance via distributionally robust optimization[END_REF]. Robust distributed learning with heterogeneous data in which different distributions exist at the various devices falls in the latter category, as the natural ambiguity set is the one represented by the convex combination of the local data distributions. In this case, minimizing the worst-case risk is equivalent to trying to ensure a minimum level of performance for each participating device. Specifically for the federated case, Mohri et al. [START_REF] Mohri | Agnostic federated learning[END_REF] introduced agnostic federated learning (AFL) as a means to ensure fairness and proposed a gradient based algorithm to solve the underlying minimax optimization problem. Later, in [START_REF] Deng | Distributionally robust federated averaging[END_REF], a communication-efficient version of the optimization algorithm, which avoids frequent retransmission of the dual variables, was proposed. In virtue of the advantages of the fully decentralized setup and advocating the necessity for robust and fair predictors in future generation networks, in this chapter we propose and analyze a distributionally robust learning procedure for D2D communication networks. In contrast to previous works on collaborative distributional robust learning, our algorithm operates in the absence of a central aggregator and with devices limited to local and possibly sparse communication; therefore, it exhibits increased scalability, adaptability and tolerance against network failures. Despite the additional complexity stemming from the minimax nature of the distributionally robust decentralized optimization problem, our solution is computationally lightweight and communication-efficient as it alternates between local single-loop stochastic gradient descent/ascent model updates and compressed consensus steps in order to cope with local connectivity. We establish convergence guarantees for the proposed algorithm both in the case of smooth convex and smooth non-convex local loss functions. In the former case, the algorithm returns an ϵ-optimal solution after O(1/ϵ 2 ) iterations. 
In the latter, the output is guaranteed to be an ϵ-stationary solution after O(1/ϵ²) iterations whenever the stochastic gradient variance is also bounded by ϵ; otherwise, the same guarantee can be obtained by increasing the number of calls to the stochastic gradient oracle. Furthermore, we demonstrate the effectiveness of the proposed algorithm in finding a robust predictor under different compression schemes, network topologies, and model architectures. We also compare the proposed approach against its distributionally robust federated learning counterpart and show that the proposed solution attains higher worst-case distribution accuracy for the same number of transmitted bits, effectively reducing the communication burden of the distributionally robust learning procedure at the edge of the network.

Related work

Initiated in the 80s by the work of Tsitsiklis [START_REF] Tsitsiklis | Distributed asynchronous deterministic and stochastic gradient optimization algorithms[END_REF][START_REF] Tsitsiklis | Problems in decentralized decision making and computation[END_REF], the study of decentralized optimization algorithms was spurred by their adaptability to various network topologies, resilience to link failures, privacy-preserving capabilities, and potentially superior convergence properties compared to the centralized counterpart [START_REF] Yan | Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties[END_REF][START_REF] Chen | Diffusion adaptation strategies for distributed optimization and learning over networks[END_REF][START_REF] Olfati-Saber | Consensus and cooperation in networked multi-agent systems[END_REF][START_REF] Ling | Decentralized jointly sparse optimization by reweighted ℓ q minimization[END_REF][START_REF] Lian | Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent[END_REF]. This growing interest and the advent of large-scale machine learning brought forth an abundance of optimization algorithms, both in the deterministic and stochastic settings [START_REF] Nedic | Distributed subgradient methods for multi-agent optimization[END_REF][START_REF] Wei | Distributed alternating direction method of multipliers[END_REF][START_REF] Duchi | Dual averaging for distributed optimization: Convergence analysis and network scaling[END_REF][START_REF] Shamir | Distributed stochastic optimization and learning[END_REF][START_REF] Rabbat | Multi-agent mirror descent for decentralized stochastic optimization[END_REF]. With the intent of extending their applicability, a concurrent effort has been made to devise techniques able to reduce the delay due to inter-device communication.
Notable results in this direction are the introduction of message compression techniques, such as sparsification and quantization [START_REF] Stich | Sparsified sgd with memory[END_REF][START_REF] Aji | Sparse communication for distributed gradient descent[END_REF][START_REF] Alistarh | The convergence of sparsified gradient methods[END_REF][START_REF] Alistarh | Qsgd: Communicationefficient sgd via gradient quantization and encoding[END_REF][START_REF] Bernstein | signsgd: Compressed optimisation for non-convex problems[END_REF][START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF], and event-triggered communication to allow multiple local updates between communication rounds [START_REF] Stich | Local sgd converges fast and communicates little[END_REF][START_REF] Yu | Parallel restarted sgd with faster convergence and less communication: Demystifying why model averaging works for deep learning[END_REF]. Decentralized learning algorithms have also been studied in the context of wireless communication as an enabler of edge intelligence for beyond 5G (B5G) networks. [START_REF] Ozfatura | Decentralized SGD with over-the-air computation[END_REF][START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF][START_REF] Jeong | Asynchronous decentralized learning over unreliable wireless networks[END_REF]. Distributional robustness copes with the frequent mismatch between training and testing distributions by posing the training process as a game between a learner and an adversary, which has the ability to choose the testing distribution within an uncertainty set [START_REF] Scarf | A min-max solution of an inventory problem[END_REF]. Restraining the decisional power of the adversary is crucial to obtain meaningful and tractable problems and a large body of the literature deals with uncertainty sets, represented by balls centered around the training distribution and whose radius are determined by f -divergences [START_REF] Namkoong | Stochastic gradient methods for distributionally robust optimization with f-divergences[END_REF][START_REF] Hu | Kullback-leibler divergence constrained distributionally robust optimization[END_REF] or Wasserstein distance [START_REF] Esfahani | Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations[END_REF][START_REF]A framework for optimization under ambiguity[END_REF][START_REF] Jiang | Data-driven chance constrained stochastic program[END_REF]. Distributional robustness is deeply linked with the notion of fairness as particular choices of uncertainty sets allows to guarantee uniform performance across the latent subpopulations in the data [START_REF] Duchi | Distributionally robust losses against mixture covariate shifts[END_REF][START_REF] Duchi | Learning models with uniform performance via distributionally robust optimization[END_REF]. In the case of federated learning, robust optimization ideas have been explored to ensure uniform performance across all participating devices [START_REF] Mohri | Agnostic federated learning[END_REF] but, to the best of our knowledge, not in the context of fully decentralized learning. Distributionally robust learning typically entails saddle point optimization problems. 
The convergence properties of saddle point optimization algorithms have also been studied in the decentralized scenario for the convex-concave setting [START_REF] Koppel | A saddle point algorithm for networked online convex optimization[END_REF][START_REF] Mateos-Núnez | Distributed subgradient methods for saddle-point problems[END_REF]. More recently, the assumptions on the convexity and concavity of the objective function have been relaxed. In [START_REF] Tsaknakis | Decentralized min-max optimization: Formulations, algorithms and applications in network poisoning attack[END_REF] an algorithm for non-convex strongly-concave objective functions has been proposed; however, the double-loop nature of the solution requires solving the inner maximization problem with an increasing level of accuracy, rendering it potentially slow. On the other hand, our algorithm is based on a single-loop optimization scheme -with dual and primal variables being updated at each iteration in parallel -and, consequently, has a lower computational complexity. For the non-convex, non-concave case, [START_REF] Liu | A decentralized proximal point-type method for saddle point problems[END_REF] provides a proximal point algorithm, while a simpler gradient-based algorithm is provided in [START_REF] Liu | A decentralized parallel algorithm for training generative adversarial nets[END_REF] to train generative adversarial networks in a decentralized fashion. None of these works take communication efficiency into consideration in their algorithms. System Model We consider a network of m edge devices in which each device i is endowed with a local objective function f_i : R^d → R given by E_{z∼P_i} ℓ(θ, z), with P_i denoting the local distribution at device i and θ ∈ R^d being the model parameter to be optimized. Whenever P_i is replaced by an empirical measure P̂_{i,n_i}, the local objective function coincides with the empirical risk computed over n_i samples. Network devices are assumed to be interconnected according to a communication topology specified by a connected graph G := (V, E), in which V = {1, . . . , m} indexes the devices and (i, j) ∈ E if and only if devices i and j can communicate. For each device i ∈ V, we define its set of neighbors by N(i) := {j : (i, j) ∈ E} and, since we assume self-communication, we have i ∈ N(i) for all i in V. At each communication round, the network devices exchange messages with their neighbors and average the received messages according to a mixing matrix W ∈ R^{m×m}. Assumption 3. The mixing matrix W ∈ R^{m×m} is symmetric and doubly-stochastic; we denote its eigengap by ρ ∈ (0, 1] and define β = ∥I − W∥_2 ∈ [0, 2]. Since the communication phase is the major bottleneck of decentralized training, we assume that devices transmit only compressed messages instead of sharing uncompressed model updates. To this end, we define a, possibly randomized, compression operator Q : R^d → R^d that satisfies the following assumption. Assumption 4. For any x ∈ R^d and for some δ ∈ [0, 1], E_Q ∥Q(x) − x∥² ≤ (1 − δ)∥x∥². (4.1) The above definition is quite general, as it entails both biased and unbiased compression operators. For instance, random quantization [START_REF] Alistarh | Qsgd: Communicationefficient sgd via gradient quantization and encoding[END_REF] falls into the latter class and satisfies (4.1) with δ = 1/τ. For a given vector x ∈ R^d and 2^b quantization levels, it yields the compressed message x_b = sign(x) (∥x∥/(2^b τ)) ⌊2^b |x|/∥x∥ + ξ⌋, (4.2) with τ = 1 + min(d/2^{2b}, √d/2^b) and ξ ∼ U[0, 1]^{⊗d}, where the sign, absolute value, and rounding operations are applied entrywise. 
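As an illustration, the following is a minimal NumPy sketch (ours, not the thesis code) of the b-bit random quantizer in (4.2) and of the top-K sparsifier introduced next; the function names and the handling of the all-zero vector are our own choices.

    import numpy as np

    def random_quantize(x: np.ndarray, b: int, rng: np.random.Generator) -> np.ndarray:
        """Unbiased b-bit random quantization of the form (4.2); satisfies (4.1) with delta = 1/tau."""
        norm = np.linalg.norm(x)
        if norm == 0.0:                              # assumption: map the zero vector to itself
            return np.zeros_like(x)
        d, s = x.size, 2 ** b                        # s = number of quantization levels
        tau = 1.0 + min(d / s ** 2, np.sqrt(d) / s)
        xi = rng.uniform(0.0, 1.0, size=x.shape)     # dithering noise xi ~ U[0,1]^d
        return np.sign(x) * (norm / (s * tau)) * np.floor(s * np.abs(x) / norm + xi)

    def top_k_sparsify(x: np.ndarray, k: int) -> np.ndarray:
        """Biased top-K sparsification: keep the K largest-magnitude entries; satisfies (4.1) with delta = K/d."""
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]    # indices of the K largest-magnitude components
        out[idx] = x[idx]
        return out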
A notable representative of the biased category is the top-K sparsification [START_REF] Stich | Sparsified sgd with memory[END_REF], which for a given vector x ∈ R^d returns the K largest-magnitude components and satisfies (4.1) with δ = K/d. Operators of this type have been previously considered in the context of decentralized learning, and the effect of compressed communication in decentralized stochastic optimization has been previously investigated [START_REF] Stich | Sparsified sgd with memory[END_REF][START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF][START_REF] Koloskova | Decentralized deep learning with arbitrary communication compression[END_REF]. The resulting communication cost savings have been showcased in the context of decentralized training of deep neural networks [START_REF] Koloskova | Decentralized deep learning with arbitrary communication compression[END_REF]. However, to the best of our knowledge, there are no applications of compressed communication to distributionally robust training in the decentralized setup. The pseudocode of the proposed procedure, Algorithm 2 (AD-GDA), which is described in detail below, reads as follows.
Algorithm 2 (AD-GDA).
Input: . . . , λ^0 ∈ ∆^{m−1}.
Output: θ^o = (1/T) Σ_{t=0}^{T−1} θ̄^t, λ^o = (1/T) Σ_{t=0}^{T−1} λ̄^t.
Initialize θ_i^0 = θ^0, λ_i^0 = λ^0 and s_i^0 = 0 for i = 1, . . . , m.
For t in 0, . . . , T − 1 do (in parallel at each device i):
    θ_i^{t+1/2} ← θ_i^t − η_θ ∇_θ g_i(θ_i^t, λ_i^t, ξ_i^t)    // descent step
    λ_i^{t+1/2} ← P_Λ(λ_i^t + η_λ ∇_λ g_i(θ_i^t, λ_i^t, ξ_i^t))    // projected ascent step
    θ_i^{t+1} ← θ_i^{t+1/2} + γ (s_i^{t+1} − θ̂_i^{t+1})    // gossip
    q_i^t ← Q(θ_i^{t+1} − θ̂_i^t)    // compression
    Send (q_i^t, λ_i^{t+1/2}) to j ∈ N(i) and receive (q_j^t, λ_j^{t+1/2}) from j ∈ N(i)    // message exchange
    θ̂_i^{t+1} ← q_i^t + θ̂_i^t    // public variable update
    s_i^{t+1} ← s_i^t + Σ_{j=1}^m w_{i,j} q_j^t
    λ_i^{t+1} ← Σ_{j=1}^m w_{i,j} λ_j^{t+1/2}    // dual variable averaging
End for
In order to obtain a final predictor with satisfactory performance for all local distributions {P_i}_{i=1}^m, the common objective is to learn a global model which is distributionally robust with respect to the ambiguity set P := { Σ_{i=1}^m λ_i P_i : λ ∈ ∆^{m−1} }, where ∆^{m−1} denotes the (m − 1)-dimensional probability simplex. As shown in [START_REF] Mohri | Agnostic federated learning[END_REF], a network objective function that effectively works as a proxy for this goal is given by min_{θ∈R^d} max_{λ∈∆^{m−1}} g(θ, λ) := (1/m) Σ_{i=1}^m g_i(θ, λ), with g_i(θ, λ) := λ_i f_i(θ) + α r(λ), (4.3) in which r : ∆^{m−1} → R is a strongly-concave regularizer and α ∈ R_+. For instance, in the empirical risk minimization framework in which each device i is endowed with a training set D_i ∼ P_i^{⊗n_i} and the overall number of training points is n = Σ_i n_i, a common choice of r(λ) is χ²(λ) := Σ_i (λ_i − n_i/n)²/(n_i/n). In what follows, we refer to θ and λ as the primal and dual variables, respectively, and make the following fairly standard assumptions on the local functions g_i and the stochastic oracles available at the network devices. Assumption 5. Each function g_i(θ, λ) is differentiable in R^d × ∆^{m−1}, L-smooth, and µ-strongly concave in λ. Assumption 6. Each device i has access to the stochastic gradient oracles ∇_θ g_i(θ, λ, ξ_i) and ∇_λ g_i(θ, λ, ξ_i), with randomness w.r.t. ξ_i, which satisfy the following assumptions: • Unbiasedness: E_{ξ_i}[∇_θ g_i(θ, λ, ξ_i)] = ∇_θ g_i(θ, λ) (4.4) and E_{ξ_i}[∇_λ g_i(θ, λ, ξ_i)] = ∇_λ g_i(θ, λ). (4.5) • Bounded variance: E_{ξ_i} ∥∇_θ g_i(θ, λ, ξ_i) − ∇_θ g_i(θ, λ)∥² ≤ σ_θ² (4.6) and E_{ξ_i} ∥∇_λ g_i(θ, λ, ξ_i) − ∇_λ g_i(θ, λ)∥² ≤ σ_λ². 
(4.7) • Bounded magnitude E ξ i ∥∇ θ g i (θ, λ, ξ i )∥ 2 ≤ G 2 θ (4.8) E ξ i ∥∇ λ g i (θ, λ, ξ i )∥ 2 ≤ G 2 λ . (4.9) The above assumption implies that each network device can query stochastic gradients that are unbiased, have finite variance, and have bounded second moment. The last assumption is rather strong but it is often made in distributed stochastic optimization [START_REF] Deng | Distributionally robust federated averaging[END_REF][START_REF] Stich | Sparsified sgd with memory[END_REF][START_REF] Koloskova | Decentralized deep learning with arbitrary communication compression[END_REF]. Distributionally Robust Decentralized Learning Algorithm Problem (4.3) entails solving a distributed minimax optimization problem in which, at every round, collaborating devices store a private value of the model parameters and the dual variable, which are potentially different from device to device. We denote the estimate of the primal and dual variables of device i at time t by θ t i and λ t i and the network estimates at time t as θt = 1 m m i=1 θ t i and λt = 1 m m i=1 λ t i , respectively. The main challenge resulting from the decentralized implementation of the stochastic gradient descent/ascent algorithm consists in approaching a minimax solution or a stationary point (depending on the convexity assumption on the loss function) while concurrently ensuring convergence to a common global solution. To this end, the proposed procedure, given in Algorithm 2, alternates between a local update step and a consensus step. At each round, every device i queries the local stochastic gradient oracle and, in parallel, updates the model parameter θ i by a gradient descent step with learning rate η θ > 0 and the dual variable λ i by a projected gradient ascent one with learning rate η λ > 0. Subsequently, a gossip strategy is used to share and average information between neighbors. In order to alleviate the communication burden of transmitting the vector of model parameters, which is typically high dimensional and contributes to the largest share of communication load, a compressed gossip step is employed. To implement the compressed communication, we consider the memory efficient version of CHOCO-GOSSIP [START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF] in which each device needs to store only two additional variables θi and s i , each of the same size as θ i . The first one is a public version of θ i , while the second is used to track the evolution of the weighted average, according to matrix W, of the public variables at the neighboring devices. Instead of transmitting θ i , each device first computes an averaging step to update the value of the private value using the information about the public variables encoded in θi and s i . It then computes q i , a compressed representation of the difference between θi and θ i , and shares it with the neighboring devices to update the value of θi and s i used in the averaging step in the next round. As the number of participating devices is usually much smaller than the size of the model (m ≪ d), the dual variable λ i is updated sending uncompressed messages and then averaged according to matrix W . Note that AD-GDA implicitly assumes that collaborating parties are honest and for this reason it does not employ any countermeasure against malicious devices providing false dual variable information in order to steer the distributional robust network objective at their whim. 
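To make one round of these operations concrete, the following is a minimal NumPy sketch that runs the descent/ascent updates and the compressed gossip for all m devices synchronously. It is our own simplification, not the thesis implementation: grad_theta, grad_lam and the compression operator Q are assumed callables, and the ordering of the compression and gossip steps follows the memory-efficient CHOCO-GOSSIP description above, which may differ in minor details from Algorithm 2.

    import numpy as np

    def proj_simplex(v: np.ndarray) -> np.ndarray:
        """Euclidean projection of v onto the probability simplex (the operator P_Lambda of the ascent step)."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / (np.arange(v.size) + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    def adgda_round(Theta, Lam, Theta_pub, S, W, grad_theta, grad_lam, Q, eta_th, eta_lm, gamma):
        """One synchronous AD-GDA round (sketch).
        Theta: (m, d) private models, Lam: (m, m) local dual vectors,
        Theta_pub, S: (m, d) public copies and gossip-tracking variables."""
        m = Theta.shape[0]
        G_th, G_lm = grad_theta(Theta, Lam), grad_lam(Theta, Lam)
        # local stochastic descent / projected ascent steps
        Theta_half = Theta - eta_th * G_th
        Lam_half = np.stack([proj_simplex(Lam[i] + eta_lm * G_lm[i]) for i in range(m)])
        # compress the innovation with respect to the public copy and exchange it with neighbours
        Qmsg = np.stack([Q(Theta_half[i] - Theta_pub[i]) for i in range(m)])
        Theta_pub_new = Theta_pub + Qmsg                 # public variable update
        S_new = S + W @ Qmsg                             # weighted average of neighbours' compressed messages
        Theta_new = Theta_half + gamma * (S_new - Theta_pub_new)   # compressed gossip correction
        Lam_new = W @ Lam_half                           # uncompressed dual averaging
        return Theta_new, Lam_new, Theta_pub_new, S_new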
Convex loss function We now provide a convergence guarantee for the solution output by Algorithm 2 for the case in which the loss function ℓ(•) is convex in the model parameter θ. The result is given in the form of a sub-optimality gap bound for the function Φ(θ) := g(θ, λ*(θ)), where λ*(·) := arg max_{λ∈∆^{m−1}} g(·, λ), (4.10) and it can be promptly derived from a primal-dual gap type of bound provided in the Appendix. In the bound we also refer to θ*(·) ∈ arg min_{θ∈R^d} g(θ, ·). Theorem 2. Under Assumptions 5 and 6, we have that for any θ* ∈ arg min_θ Φ(θ) the solution θ^o returned by Algorithm 2 with learning rates η_θ = η_λ = 1/√T and consensus step size γ = ρ²δ/(16ρ + ρ² + 4β² + 2ρβ² − 8ρδ) satisfies E[Φ(θ^o) − Φ(θ*)] ≤ O((D_θ + D_λ + G_θ² + G_λ²)/√T) + O(L D_λ G_θ/(c√T) + L D_θ G_λ/(ρ√T)) + O(L G_λ²/(ρ²T) + L G_θ²/(c²T)), (4.11) where D_λ := max_t E∥λ̄^t − λ*(θ^o)∥, D_θ := max_t E∥θ̄^t − θ*(λ^o)∥ and c = ρ²δ/82. The bound establishes an O(1/√T) non-asymptotic optimality gap guarantee for the output solution. Compared to decentralized stochastic gradient descent (SGD) in the convex scenario, we obtain the same rate but with a dependency on the network topology and compression also in the lower-order terms. Moreover, whenever θ and λ are constrained to convex sets, the diameters of the two sets can be used to explicitly bound D_θ and D_λ. Non-convex loss function We now focus on the case where the relation between the model parameters θ and the value of the loss function is non-convex. In this setting we provide a bound on the stationarity of the randomized solution, picked uniformly over time. Here, carefully tuning the relation between the primal and dual learning rates is key to establishing a convergent recursion (see B.2). This technical condition allows us to derive the following result. Theorem 3. Under Assumptions 5 and 6, the solution returned by Algorithm 2 with learning rates η_θ, η_λ on the order of 1/√T, chosen as specified in B.2, and consensus step size γ = ρ²δ/(16ρ + ρ² + 4β² + 2ρβ² − 8ρδ) satisfies (1/T) Σ_{t=1}^T E∥∇Φ(θ̄^{t−1})∥² ≤ O(L ∆Φ_T/√T + L²κ²(D_λ^0)²/√T) + O(D_λ L G_θ/(c√T) + (σ_θ² + κσ_λ²)/(m√T)) + O(G_θ²/(c²T) + κG_λ²/(ρ²T) + σ_θ²/m), (4.12) where ∆Φ_T = E[Φ(θ̄^0)] − E[Φ(θ̄^T)] and c = ρ²δ/82. We note that the bound decreases at a rate O(1/√T), except for the last variance term, which is non-vanishing. Nonetheless, whenever the variance of the stochastic gradient oracle for the primal variable is small or the number of participating devices is large, this term becomes negligible. Otherwise, at the cost of increased gradient complexity, each device can query the oracle O(1/ϵ²) times every round and average the results, which scales the stochastic gradient variance down by a factor O(ϵ²). This procedure makes the bound vanish and leads to a gradient complexity matching the one of [START_REF] Sharma | Federated minimax optimization: Improved convergence analyses and algorithms[END_REF] given for the federated learning scenario. Experiments In this section, we empirically evaluate the capabilities of AD-GDA in producing a robust predictor for different learning models, communication network topologies, and message compression schemes. Lacking a baseline for the distributionally robust fully decentralized setup, we compare the effectiveness of the proposed solution against the distributionally robust federated baseline (DRFA) [START_REF] Deng | Distributionally robust federated averaging[END_REF] under similar communication constraints. 
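The consensus step size and the constant c appearing in Theorems 2 and 3 depend only on the network eigengap ρ, the compression parameter δ, and β = ∥I − W∥_2, so they can be computed before running the experiments below; a small helper (ours, directly transcribing the expressions above):

    def consensus_step_size(rho: float, delta: float, beta: float) -> float:
        """gamma = rho^2 * delta / (16*rho + rho^2 + 4*beta^2 + 2*rho*beta^2 - 8*rho*delta), as in Theorems 2-3."""
        return rho ** 2 * delta / (16 * rho + rho ** 2 + 4 * beta ** 2 + 2 * rho * beta ** 2 - 8 * rho * delta)

    def contraction_constant(rho: float, delta: float) -> float:
        """c = rho^2 * delta / 82, the constant appearing in bounds (4.11)-(4.12)."""
        return rho ** 2 * delta / 82.0

    # usage with illustrative (not measured) values for a ring with 4-bit quantization:
    # gamma = consensus_step_size(rho=0.1, delta=0.6, beta=1.3)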
Setup We perform our experiments using the Fashion-MNIST dataset [START_REF] Xiao | Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms[END_REF], a popular dataset made of images of 10 different clothing items, which is commonly used to test distributionally robust learners [START_REF] Mohri | Agnostic federated learning[END_REF][START_REF] Deng | Distributionally robust federated averaging[END_REF]. In order to introduce data heterogeneity, samples are partitioned across the network devices using a class-wise split. Namely, we simulate a network of 10 devices, each storing data points coming from one of the 10 classes. In this setting, we train a logistic regression model and a two-layer fully connected neural network with 25 hidden units in order to investigate both the convex and the non-convex case. In both cases, we use the SGD optimizer and, in order to ensure consensus at the end of the optimization process, we consider a geometrically decreasing learning rate η_θ^t = r^t η_θ^0 with ratio r = 0.995 and initial value η_θ^0 = 1. The metrics that we track are the final worst-device distribution accuracy and the average accuracy over the aggregated data samples of the network estimate θ̄^t. Effect of compression We assess the effect of compression with a fixed budget in terms of communication rounds by organizing devices in a ring topology and training the logistic model and the fully connected network for T = 2000 iterations. As a representative of the unbiased compression operators, we consider the b-bit random quantization scheme for b ∈ {16, 8, 4} bits, while for the biased category we implement the top-K sparsification scheme saving K ∈ {50%, 25%, 10%} of the original message components. For each compression scheme and compression level, we tune the consensus step size γ by performing a grid search. We train the different models for 20 different random placements of the data shards across the devices using the distributionally robust and standard learning paradigms. In Table 4.1 we report the average worst-case accuracy attained by the final averaged model θ̄^T. AD-GDA almost doubles the worst-case accuracy compared to the non-robust baseline CHOCO-SGD [START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF]. This gain holds for both compression schemes and across different compression levels. For increased compression ratios the worst-case accuracy degrades; however, for a comparable saving in communication bandwidth, the unbiased quantization scheme results in superior performance compared to the biased sparsification operator. For a fixed optimization horizon, compression degrades performance. Nonetheless, compression makes it possible to obtain the same accuracy level with fewer transmitted bits, as shown in Fig. 4.3a, where we plot the average worst-case accuracy of the fully connected model as a function of the transmitted bits using the random quantization scheme. Furthermore, in Fig. 4.3b we compare the average accuracy of the robust predictor against the standard one. The price to pay in terms of average performance in order to ensure robustness of the predictor is modest, around 2.5%. Effect of topology We now turn to investigating the effect of device connectivity. Sparser communication topologies slow down the consensus process and therefore hamper the convergence of the algorithm. 
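As a concrete example of the topologies considered here, the sketch below builds a symmetric doubly-stochastic mixing matrix for a ring (our own uniform-weight construction) and reads off the spectral quantities of Assumption 3; computing the eigengap as 1 − |λ₂(W)| is one common convention and an assumption on our part.

    import numpy as np

    def ring_mixing_matrix(m: int) -> np.ndarray:
        """Symmetric doubly-stochastic mixing matrix for a ring of m >= 3 devices (self-loop + two neighbours)."""
        W = np.zeros((m, m))
        for i in range(m):
            W[i, i] = 1.0 / 3.0
            W[i, (i - 1) % m] = 1.0 / 3.0
            W[i, (i + 1) % m] = 1.0 / 3.0
        return W

    def spectral_quantities(W: np.ndarray):
        """Return rho = 1 - |lambda_2(W)| (assumed eigengap definition) and beta = ||I - W||_2 of Assumption 3."""
        eigvals = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
        rho = 1.0 - eigvals[1]
        beta = np.linalg.norm(np.eye(W.shape[0]) - W, 2)
        return rho, beta

    # usage: W = ring_mixing_matrix(10); rho, beta = spectral_quantities(W)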
In the previous batch of experiments we considered a sparse ring topology, in which each device is connected to only two other devices. Here, we explore two other network configurations with a more favorable spectral gap: a communication topology in which each device is connected to 4 other devices, and the mesh case, in which all devices communicate with each other. For these configurations we consider the 4-bit quantization and top-10% sparsification compression schemes. In Table 4.2 we report the final worst-case performance for the different network configurations. As expected, network configurations with a larger device degree lead to higher worst-case accuracy owing to a more favorable spectral gap, which leads to faster convergence rates. Effect of regularization Following a two-player game interpretation of the minimax optimization problem (4.3), the regularizer r(λ) reduces the freedom that an adversary has in choosing the weighting vector λ so as to maximize the training loss at every iteration t. As a result, the less constrained the adversary, the larger the emphasis on the worst-performing devices. In the following we consider a regularizer of the form χ²(λ) := Σ_i (λ_i − n_i/n)²/(n_i/n) and study the effect of the regularization parameter α on the robustness of the yielded solution. For this specific experiment, we consider the biological dataset COOS7 [START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF], which contains images of mouse cells captured using two different microscopes. We consider a network of 5 collaborating parties (e.g., research labs) connected according to a ring topology. We endow 4 of these parties with images sampled using microscope 1, while we give the remaining one images taken from microscope 2. We train the model using AD-GDA for α ∈ {∞, 1, 0.1, 0.01} and report the average accuracy along with the 95% confidence intervals in Table 4.3. For the case α → ∞, which corresponds to CHOCO-SGD, we observe a large test accuracy gap between images taken from microscope 1 and microscope 2, with the classifier attaining 25% higher accuracy on the latter. This accuracy mismatch showcases how standard decentralized optimization schemes are unable to guarantee uniform performance across participating parties. On the other hand, using AD-GDA and smaller regularization parameters, the gap between the two instruments is effectively reduced, eventually reaching a 3% performance mismatch for α = 0.01. At the same time, the improved fairness brought by AD-GDA does not significantly hamper the average performance of the model when tested on both instruments. Comparison with the federated baseline Lacking a term of comparison for the distributionally robust fully decentralized setting, we consider the communication-efficient distributionally robust federated learning scheme (DRFA) [START_REF] Deng | Distributionally robust federated averaging[END_REF] and standard federated averaging (FedAvg) [START_REF] Mcmahan | Communication-efficient learning of deep networks from decentralized data[END_REF]. In the federated scenario, network devices are connected according to a star topology, with the star center representing the central aggregator. Communication efficiency is obtained by allowing network devices to perform multiple local updates of the primal variable between subsequent synchronization rounds at the central aggregator. 
We run DRFA allowing devices to perform 10 local gradient steps before sending their local models for the distributionally robust averaging steps, and we consider half user participation at each round. To have the same per-round communication cost, we run FedAvg allowing 10 local gradient steps between aggregations, but considering full user participation. Recall that the random sketching technique employed by DRFA requires devices to send two model updates to the central aggregator at each round, therefore doubling the communication cost. To match this setting from a communication standpoint, we consider AD-GDA with a mesh network topology. Moreover, in order to have a comparable communication cost per device, we consider the quantization compression operator with b = 4 in combination with the sparsification scheme saving K ∈ {25%, 50%} of the components. In Fig. 4.4a we compare the worst-case distribution accuracy attained by the different algorithms on the Fashion-MNIST dataset (with data split as in Sec. 4.5.1) as a function of the number of stochastic gradients that each device needs to query. DRFA and AD-GDA have similar gradient complexity, while FedAvg needs considerably more gradient calls to obtain the same worst-case performance. In Fig. 4.4b we compare the communication efficiency of the algorithms and report the worst-case accuracy versus the average number of bits transmitted by each device. For the same communication budget, AD-GDA attains higher worst-case distribution accuracy than DRFA while transmitting only a fraction of the bits required by the federated counterparts. Conclusion We provided a provably convergent decentralized single-loop gradient descent/ascent algorithm to tackle the distributionally robust learning problem over a network of collaborating devices with heterogeneous local data distributions. Unlike previously proposed solutions, which are limited to the federated scenario with a central coordinator, our algorithm restricts devices to D2D communication and attains communication efficiency by employing compressed communication techniques. Experiments showed that the proposed solution produces distributionally robust predictors with higher worst-case accuracy, while attaining superior communication efficiency compared to previously proposed algorithms that reduce the communication load by allowing multiple local updates at participating devices. The proposed framework is a promising decentralized learning solution over edge devices in B5G IoT networks. Chapter 5 User-Centric Federated Learning Data heterogeneity across participating devices poses one of the main challenges in federated learning, as it has been shown to greatly hamper its convergence time and generalization capabilities. In this chapter we address this limitation by enabling personalization using multiple user-centric aggregation rules at the parameter server. Our approach potentially produces a personalized model for each user at the cost of some extra downlink communication overhead. To strike a trade-off between personalization and communication efficiency, we propose a broadcast protocol that limits the number of personalized streams while retaining the essential advantages of our learning scheme. Through simulation results, our approach is shown to enjoy higher personalization capabilities, faster convergence, and better communication efficiency compared to other competing baseline solutions. 
Introduction Federated learning [START_REF] Mcmahan | Communication-efficient learning of deep networks from decentralized data[END_REF] has seen great success, being able to solve distributed learning problems in a communication-efficient and privacy-preserving manner. Specifically, federated learning offers clients (e.g., smartphones, IoT devices, and organizations) the possibility of collaboratively training a model under the orchestration of a parameter server (PS) by iteratively aggregating locally optimized models and without off-loading local data [START_REF] Kairouz | Advances and open problems in federated learning[END_REF]. The original aggregation policy, implemented by Federated Averaging (FedAvg) [START_REF] Mcmahan | Communication-efficient learning of deep networks from decentralized data[END_REF], was devised under the assumption that clients' local datasets are statistically identical, an assumption that is hardly met in practice. In fact, clients typically store datasets that are statistically heterogeneous and different in size [START_REF] Sattler | Clustered federated learning: Modelagnostic distributed multitask optimization under privacy constraints[END_REF], and are mainly interested in learning models that generalize well over their local data distribution through collaboration. Generally speaking, FedAvg exhibits slow convergence and poor generalization capabilities in such non-IID settings [START_REF] Li | Federated optimization in heterogeneous networks[END_REF]. To address these limitations, a large body of literature deals with personalization as a technique to reduce the detrimental effect of non-IID data. A straightforward solution consists in producing adapted models at the device scale by local fine-tuning procedures. Borrowing ideas from Model Agnostic Meta-Learning (MAML) [START_REF] Finn | Model-agnostic meta-learning for fast adaptation of deep networks[END_REF], federated learning can be exploited in order to find a launch model that can later be personalized at each device using a few gradient iterations [START_REF] Fallah | Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach[END_REF][START_REF] Jiang | Improving federated learning personalization via model agnostic meta learning[END_REF]. Alternatively, local adaptation can be obtained by tuning only the last layer of a globally trained model [START_REF] Arivazhagan | Federated learning with personalization layers[END_REF] or by interpolating between a global model and locally trained ones [START_REF] Deng | Adaptive personalized federated learning[END_REF][START_REF] Hanzely | Federated learning of a mixture of global and local models[END_REF]. However, these methods can fail to produce models with an acceptable generalization performance even for synthetic datasets [START_REF] Caldas | Leaf: A benchmark for federated settings[END_REF]. Adaptation can also be obtained by leveraging user data similarity to personalize the training procedure. For instance, a Mixture of Experts formulation has been considered to learn a personalized mixing of the outputs of a commonly trained set of models [START_REF] Reisser | Federated mixture of experts[END_REF]. Similarly, [START_REF] Marfoq | Federated multi-task learning under a mixture of distributions[END_REF] proposed a distributed Expectation-Maximization (EM) algorithm that concurrently converges to a set of shared hypotheses and a personalized linear combination of them at each device. 
Furthermore, [START_REF] Zhang | Personalized federated learning with first order model optimization[END_REF] proposed a personalized aggregation rule at the user side based on the validation accuracy of the locally trained models at the different devices. In order to be applicable, these techniques need to strike a good balance between communication overhead and the amount of personalization in the system. In fact, if on one hand, the expressiveness of the mixture is proportional to the number of mixed components; on the other, the communication load is linear in this quantity. Clustered Federated Learning (CFL) measures the similarity among the model updates during the optimization process in order to lump together users in homogeneous groups. For example, [START_REF] Sattler | Clustered federated learning: Modelagnostic distributed multitask optimization under privacy constraints[END_REF][START_REF] Briggs | Federated learning with hierarchical clustering of local updates to improve training on non-iid data[END_REF] proposed a hierarchical strategy in which the original set of users is gradually divided into smaller groups and, for each group, the federated learning algorithm is branched in a new decoupled optimization problem. In this chapter, we propose a different approach to achieve personalization by allowing multiple user-centric aggregation strategies at the PS. The mixing strategies account for the existence of heterogeneous clients in the system and exploit estimates of the statistical similarity among clients that are obtained at the beginning of the federated learning procedure. Furthermore, the number of distinct aggregation rules -also termed personalized streams -can be fixed in order to strike a good trade-off between communication and learning efficiency. Finally, we provide simulation results for different scenarios and demonstrate that our approach exhibits faster convergence, higher personalization capabilities, and communication efficiency compared to other popular baseline algorithms. Learning with heterogeneous data sources In this section, we provide theoretical guarantees for learners that combine data from heterogeneous data distributions. The set-up mirrors the one of personalized federated learning and the results are instrumental to derive our user-centric aggregation rule. In the following, we limit our analysis to the discrepancy distance (5.4) but it can be readily extended to other divergences [START_REF] Mestoukirdi | User-centric federated learning: Trading off wireless resources for personalization[END_REF]. In the federated learning setting, the weighted combination of the empirical loss terms of the collaborating devices represents the customary training objective. Namely, in a distributed system with m nodes, each endowed with a dataset D i of n i IID samples from a local distribution P i , the goal is to find a predictor f : X → Ŷ from a hypothesis class F that minimizes L(f, ⃗ w) = m i=1 w i n i (x,y)∈D i ℓ(f (x), y) (5.1) where ℓ : Ŷ × Y → R + is a loss function and ⃗ w = (w 1 , . . . , w m ) is a weighting scheme. In case of identically distributed local datasets, the typical weighting vector is ⃗ w = 1 i n i (n 1 , . . . , n m ), the relative fraction of data points stored at each device. This particular choice minimizes the variance of the aggregated empirical risk, which is also an unbiased estimate of the local risk at each node in this scenario. 
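To fix ideas, the snippet below (ours; the dataset format and the loss callable are assumptions) evaluates the ⃗w-weighted empirical objective (5.1); it also covers the personalized objective introduced next once ⃗w is replaced by a user-specific vector ⃗w_i.

    from typing import Callable, List, Sequence, Tuple
    import numpy as np

    Dataset = List[Tuple[np.ndarray, float]]   # assumed format: list of (feature, label) pairs

    def weighted_empirical_loss(f: Callable[[np.ndarray], float],
                                loss: Callable[[float, float], float],
                                datasets: Sequence[Dataset],
                                w: Sequence[float]) -> float:
        """Evaluate L(f, w) = sum_i (w_i / n_i) * sum_{(x,y) in D_i} loss(f(x), y), as in (5.1)."""
        total = 0.0
        for w_i, D_i in zip(w, datasets):
            n_i = len(D_i)
            total += (w_i / n_i) * sum(loss(f(x), y) for x, y in D_i)
        return total

    def fedavg_weights(datasets: Sequence[Dataset]) -> np.ndarray:
        """FedAvg-style weights w = (n_1, ..., n_m) / sum_j n_j, the choice discussed above."""
        sizes = np.array([len(D) for D in datasets], dtype=float)
        return sizes / sizes.sum()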
However, in the case of heterogeneous local distributions, the minimizer of the ⃗w-weighted risk may transfer poorly to certain devices whose target distribution differs from the mixture P_⃗w = Σ_{i=1}^m w_i P_i. Furthermore, there may not exist a single weighting strategy that yields a universal predictor with satisfactory performance for all participating devices. To address the above limitation of a universal model, personalized federated learning allows adapting the learned solution at each device. In order to better understand the potential benefits and drawbacks coming from the collaboration with statistically similar but not identical devices, let us consider the point of view of a generic node i that has the freedom of choosing the degree of collaboration with the other devices in the distributed system. Namely, identifying the degree of collaboration between node i and the rest of the users by the weighting vector ⃗w_i = (w_{i,1}, . . . , w_{i,m}) (where w_{i,j} defines how much node i relies on data from user j), we define the personalized objective for user i as L(f, ⃗w_i) = Σ_{j=1}^m (w_{i,j}/n_j) Σ_{(x,y)∈D_j} ℓ(f(x), y) (5.2) and the resulting personalized model as f̂_{⃗w_i} = arg min_{f∈F} L(f, ⃗w_i). (5.3) We now seek an answer to the question: "What is the proper choice of ⃗w_i in order to obtain a personalized model f̂_{⃗w_i} that performs well on the target distribution P_i?". This question is deeply tied to the problem of domain adaptation, in which the goal is to successfully aggregate multiple data sources in order to produce a model that transfers positively to a different and possibly unknown target domain. In our context, the dataset D_i is made of data points drawn from the target distribution P_i and the other devices' datasets provide samples from the sources {P_j}_{j≠i}. Leveraging results from domain adaptation theory [START_REF] Ben-David | A theory of learning from different domains[END_REF], we provide learning guarantees on the performance of the personalized model f̂_{⃗w_i} to gauge the effect of collaboration, which we later use to devise the weights for the user-centric aggregation rules. In order to avoid negative transfer, it is crucial to upper bound the performance of the predictor w.r.t. the target task. The discrepancy distance introduced in [START_REF] Mansour | Domain adaptation: Learning bounds and algorithms[END_REF] provides a measure of similarity between learning tasks that can be used to this end. For a hypothesis set of functions F : X → Ŷ and two distributions P, Q on X, the discrepancy distance is defined as d_F(P, Q) = sup_{f,f′∈F} | E_{x∼P} ℓ(f, f′) − E_{x∼Q} ℓ(f, f′) |, (5.4) where we streamlined notation denoting f(x) by f. For bounded and symmetric loss functions that satisfy the triangle inequality, the previous quantity allows one to obtain the following inequality: E_{(x,y)∼P}[ℓ(f, y)] ≤ E_{(x,y)∼Q}[ℓ(f, y)] + d_F(P, Q) + γ, (5.5) where γ = inf_{f∈F} { E_{(x,y)∼P}[ℓ(f, y)] + E_{(x,y)∼Q}[ℓ(f, y)] }. We can exploit this inequality to obtain the following risk guarantee for f̂_{⃗w_i} w.r.t. the true minimizer f* of the risk for the distribution P_i. Theorem 4. 
For a loss function ℓ with B-bounded range, symmetric and satisfying the triangle inequality, with probability 1 − δ the function f̂_{⃗w_i} satisfies E_{z∼P_i}[ℓ(f̂_{⃗w_i}, z)] − E_{z∼P_i}[ℓ(f*, z)] ≤ √(Σ_{j=1}^m w_{i,j}²/n_j) √((2d/Σ_i n_i) log(e Σ_i n_i/d) + log(2/δ)) + 2 Σ_{j=1}^m w_{i,j} d_F(P_i, P_j) + 2γ, (5.6) where γ = min_{f∈F} { E_{z∼P_i}[ℓ(f, z)] + E_{z∼P_{⃗w_i}}[ℓ(f, z)] } and d is the VC-dimension of the function space resulting from the composition of F and ℓ. Recently, an alternative bound based on an information-theoretic notion of dissimilarity, the Jensen-Shannon divergence, has been proposed [START_REF] Shui | Beyond h-divergence: Domain adaptation theory with jensen-shannon divergence[END_REF]. It is based on less restrictive assumptions, as it only requires the loss function ℓ(f, Z) to be sub-Gaussian with some parameter σ for all f ∈ F; therefore, whenever ℓ(•) is bounded, the requirement is automatically satisfied. Measuring similarity by the Jensen-Shannon divergence, the following inequality is available: E_{X∼P}[X] ≤ E_{X∼Q}[X] + βσ² + D_JS(P||Q)/β for β > 0, (5.7) where D_JS(P∥Q) = KL(P ∥ (P + Q)/2) + KL(Q ∥ (P + Q)/2). Exploiting the above inequality, we obtain the following estimation error bound. Theorem 5. For a loss function ℓ with B-bounded range, the function f̂_{⃗w_i} satisfies E_{z∼P_i}[ℓ(f̂_{⃗w_i}, z)] − E_{z∼P_i}[ℓ(f*, z)] ≤ B √(Σ_{j=1}^m w_{i,j}²/n_j) √((2d/Σ_i n_i) log(e Σ_i n_i/d) + log(2/δ)) + B √(2 Σ_{j=1}^m w_{i,j} D_JS(P_i||P_j)). (5.8) Figure 5.1: Personalized Federated Learning with user-centric aggregates at round t. The proofs of Theorems 4 and 5 are given in Appendix C.1. The theorems highlight that a fruitful collaboration should strike a balance between the bias terms due to the dissimilarity between local distributions and the risk estimation gains provided by the data points of other nodes. Minimizing the upper bounds in Theorems 4 and 5 with respect to the user-specific weights, and using the optimal weights in our aggregation rule, seems an appealing solution to tackle the data heterogeneity during training; however, the distance terms (d_F(P_i, P_k) and D_JS(P_i||P_j)) are difficult to compute, especially under the privacy constraints that federated learning imposes. For this reason, in the following we consider a heuristic method based on the similarity of the readily available users' model updates to estimate the collaboration coefficients. User-centric aggregation For a suitable hypothesis class parametrized by θ ∈ R^d, federated learning approaches use an iterative procedure to minimize the aggregate loss (5.1) with ⃗w = (1/Σ_i n_i)(n_1, . . . , n_m). At each round t, the PS broadcasts the parameter vector θ^{t−1} and then combines the locally optimized models {θ_i^{t−1}}_{i=1}^m returned by the clients according to the following aggregation rule: θ^t ← Σ_{i=1}^m (n_i/Σ_{j=1}^m n_j) θ_i^{t−1}. As mentioned in Sec. 5.2, this aggregation rule has two shortcomings: it does not take into account the data heterogeneity across users, and it is bound to produce a single solution. For this reason, we propose a user-centric model aggregation scheme that takes into account the data heterogeneity across the different nodes participating in training and aims at neutralizing the bias induced by a universal model. Our proposal generalizes the naïve aggregation of FedAvg by assigning a unique set of mixing coefficients ⃗w_i to each user i and, consequently, a user-specific model aggregation at the PS side. Namely, at the PS side, the following set of user-centric aggregation steps is performed: θ_i^t ← Σ_{j=1}^m w_{i,j} θ_j^{t−1/2} for i = 1, . . .
, m, (5.9) where θ_j^{t−1/2} is the locally optimized model at node j starting from θ_j^{t−1}, and θ_i^t is the user-centric aggregated model for user i at communication round t. As we elaborate next, the mixing coefficients are heuristically defined based on a distribution similarity metric and the dataset size ratios. These coefficients are calculated before the start of federated training. The similarity score we propose is designed to favor collaboration among similar users and takes into account the relative dataset sizes, as more intelligence can be harvested from clients with larger data availability. Using these user-centric aggregation rules, each node ends up with its own personalized model that yields better generalization for the local data distribution. It is worth noting that the user-centric aggregation rule does not produce a minimizer of the user-centric aggregate loss given by (5.2): at each round, the PS aggregates model updates computed starting from different sets of parameters. Nonetheless, we find it to be a good approximation of the true update, since personalized models for similar data sources tend to propagate in a close neighborhood. The aggregation in [START_REF] Zhang | Personalized federated learning with first order model optimization[END_REF] capitalizes on the same intuition. Computing the collaboration coefficients Computing the discrepancy distance (5.4) can be challenging in high dimensions, especially under the communication and privacy constraints imposed by federated learning. For this reason, we propose to compute the mixing coefficients based on the relative dataset sizes and the distribution similarity metric given by ∆_{i,j}(θ̃) = ∥ (1/n_i) Σ_{(x,y)∈D_i} ∇ℓ(f_θ̃, y) − (1/n_j) Σ_{(x,y)∈D_j} ∇ℓ(f_θ̃, y) ∥² ≈ ∥ E_{z∼P_i}[∇ℓ(f_θ̃, y)] − E_{z∼P_j}[∇ℓ(f_θ̃, y)] ∥², where the quality of the approximation depends on the number of samples n_i and n_j. The mixing coefficients for user i are then set to the following normalized exponential function: w_{i,j} = (n_j/n_i) e^{−∆_{i,j}(θ̃)/(2σ_i σ_j)} / Σ_{j′=1}^m (n_{j′}/n_i) e^{−∆_{i,j′}(θ̃)/(2σ_i σ_{j′})} for j = 1, . . . , m. (5.10) The mixing coefficients are calculated at the PS during a special round prior to federated training (a minimal implementation sketch is given below). During this round, the PS broadcasts a common model, denoted by θ̃, to the users, which compute the full gradient on their local datasets. At the same time, each node i locally estimates the value σ_i² by partitioning the local data into K batches {D_i^k}_{k=1}^K of size n_k and computing σ_i² = (1/K) Σ_{k=1}^K ∥ (1/n_k) Σ_{(x,y)∈D_i^k} ∇ℓ(f_θ̃, y) − (1/n_i) Σ_{(x,y)∈D_i} ∇ℓ(f_θ̃, y) ∥², (5.11) where σ_i² is an estimate of the gradient variance computed over local datasets D_i^k sampled from the same target distribution. Once all the necessary quantities are computed, they are uploaded to the PS, which proceeds to calculate the mixing coefficients and initiates the federated training using the custom aggregation scheme given by (5.9). Note that the proposed heuristic embodies the intuition provided by Theorems 4 and 5: in the case of homogeneous users, it falls back to the standard FedAvg aggregation rule, while in the case where node i has an infinite amount of data it degenerates to the local learning rule, which is optimal in that case. Reducing the communication load A full-fledged personalization by means of the user-centric aggregation rule (5.9) would introduce an m-fold increase in communication load during the downlink phase, as the original broadcast transmission is replaced by unicast ones. 
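The sketch below illustrates the pre-training round and the user-centric aggregation step, i.e., (5.9)-(5.11); the flattening of gradients into vectors, the function names, and the batching choices are our own assumptions rather than the exact thesis implementation.

    import numpy as np

    def gradient_variance(batch_grads, full_grad):
        """sigma_i^2 estimate (5.11): mean squared deviation of per-batch gradients from the full local gradient."""
        return float(np.mean([np.sum((g - full_grad) ** 2) for g in batch_grads]))

    def collaboration_weights(grads, sigmas, sizes):
        """Mixing coefficients (5.10). grads[i]: flattened full local gradient at the common model;
        sigmas[i]: sigma_i, the square root of the estimate (5.11); sizes[i]: n_i."""
        m = len(grads)
        W = np.zeros((m, m))
        for i in range(m):
            scores = np.array([
                (sizes[j] / sizes[i]) * np.exp(
                    -np.sum((grads[i] - grads[j]) ** 2) / (2.0 * sigmas[i] * sigmas[j]))
                for j in range(m)])
            W[i] = scores / scores.sum()             # normalized exponential over j
        return W

    def user_centric_aggregate(W, local_models):
        """User-centric aggregation (5.9): theta_i^t = sum_j w_{i,j} * theta_j^{t-1/2}."""
        return W @ local_models                      # local_models: (m, d) array of locally optimized models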
Although from a learning perspective the user-centric learning scheme is beneficial, it is also possible to consider the overall system performance from a learning-communication trade-off point of view. The intuition is that, for small discrepancies between the user data distributions, the same model transfers positively to statistically similar devices. In order to strike a suitable trade-off between learning accuracy and communication overhead, we hereby propose to adaptively limit the number of personalized downlink streams. In particular, for a number of personalized models m_t, we run a k-means clustering scheme with k = m_t over the set of collaboration vectors {⃗w_i}_{i=1}^m and we select the centroids {ŵ_i}_{i=1}^{m_t} to implement the m_t personalized streams. We then proceed to replace the unicast transmissions with group broadcast ones, in which all users belonging to the same cluster c receive the same personalized model, aggregated according to the centroid weights ŵ_c. Choosing the right value for the number of personalized streams is critical in order to save communication bandwidth while at the same time obtaining satisfactory personalization capabilities. It can be experimentally shown that clustering quality indicators, such as the silhouette score over the user-centric weights, can be used to guide the search for a suitable number of streams m_t. Choosing the number of personalized streams Choosing an insufficient number of personalized streams can yield unsatisfactory performance, while concurrently learning many models can prohibitively increase the communication load of personalized federated learning. Therefore, properly tuning this free parameter is essential in order to obtain a well-performing but still practical algorithm. Since we are agnostic w.r.t. the underlying data-generating distributions at the devices, there does not exist a universal number of personalized streams that fits all problems. However, we now illustrate that the silhouette coefficient, a quality measure of the clustering, provides a useful guide. For a clustering C = {C_1, . . . , C_{m_t}} of the collaboration vectors, we define the mean distance between ⃗w_i ∈ C_k and the other members of its cluster as a(⃗w_i) = (1/(|C_k| − 1)) Σ_{⃗w_j∈C_k, ⃗w_j≠⃗w_i} ∥⃗w_j − ⃗w_i∥ (5.12) and the smallest mean distance between the collaboration vector ⃗w_i ∈ C_k and the closest cluster as b(⃗w_i) = min_{C_j≠C_k} (1/|C_j|) Σ_{⃗w_j∈C_j} ∥⃗w_j − ⃗w_i∥.
The function c(k, s) is a system dependent function typically decreasing in k and increasing in s k . Experiments We now provide a series of experiments to showcase the personalization capabilities and communication efficiency of the proposed algorithm. Set-up In our simulation we consider a handwritten character/digit recognition task using the EMNIST dataset [START_REF] Cohen | Emnist: Extending mnist to handwritten letters[END_REF] and an image classification task using the CIFAR-10 dataset [START_REF] Krizhevsky | Learning multiple layers of features from tiny images[END_REF]. Data heterogeneity is induced by splitting and transforming the dataset in a different fashion across the group of devices. In particular, we analyze three different scenarios: • Character/digit recognition with user-dependent label shift in which 10k EMNIST data points are split across 20 users according to their labels. The label distribution follows a Dirichlet distribution with parameter 0.4, as in [START_REF] Marfoq | Federated multi-task learning under a mixture of distributions[END_REF][START_REF] Wang | Tackling the objective inconsistency problem in heterogeneous federated optimization[END_REF]. • Character/digit recognition with user-dependent label shift and covariate shift in which 100k samples from the EMNIST dataset are partitioned across 100 users each with a different label distribution, as in the previous scenario. Additionally, users are clustered in 4 group and at each group images are rotated of {0 • , 90 • , 180 • , 270 • } respectively. • Image classification with user-dependent concept shift in which the CIFAR-10 dataset is distributed across 20 users which are grouped in 4 clusters, for each group we apply a different random label permutation. For each scenario, we aim at solving the task at hand by leveraging the distributed and heterogeneous datasets. We compare our algorithm against four different baselines: FedAvg, local learning, CFL [START_REF] Sattler | Clustered federated learning: Modelagnostic distributed multitask optimization under privacy constraints[END_REF] and FedFomo [START_REF] Zhang | Personalized federated learning with first order model optimization[END_REF]. In all scenarios and for all algorithms, we train a LeNet-5 convolutional neural network [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF] using a stochastic gradient descent optimizer with a fixed learning rate η = 0.1 and momentum β = 0.9. Personalization performance We now report the average accuracy over 5 trials attained by the different approaches. We also study the personalization performance of our algorithm when we restrain the overall number of personalized streams, namely the number of personalized models that are concurrently learned. In Fig. 5.2a we report the average validation accuracy in the EMNIST label shift scenario. We first notice that in the case of label shift, harvesting intelligence from the datasets of other users amounts to a large performance gain compared to the localized learning strategy. This indicates that data heterogeneity is moderate and collaboration is fruitful. Nonetheless, personalization can still provide gains compared to FedAvg. Our solution yields a validation accuracy which is increasing in the number of personalized streams. Allowing maximum personalization, namely a different model at each user, we obtain a 3% gain in the average accuracy compared to FedAvg. 
CFL is not able to transfer intelligence among different groups of users and attains performance similar to the FedAvg. This behavior showcases the importance of soft clustering compared to the hard one for the task at hand. We find that FedFOMO, despite excelling in the case of strong statistical heterogeneity, fails to harvest intelligence in the label shift scenario. In Fig. 5.2b we report the personalization performance for the second scenario. In this case, we also consider the oracle baseline, which corresponds to running 4 different FedAvg instances, one for each cluster of users, as if the 4 groups of users were known beforehand. Different from the previous scenario, the additional shift in the covariate space renders personalization necessary in order to attain satisfactory performance. In fact, the oracle training largely outperforms FedAvg. Furthermore, as expected, our algorithm matches the oracle final performance when the number of personalized streams is 4 or more. Also, CLF and FedFOMO are able to correctly identify the 4 clusters. However, the former exhibits slower convergence due to the hierarchical clustering over time while the latter plateaus to a lower average accuracy level. We turn now to the more challenging CIFAR-10 image classification task. In Fig. 5.2c we report the average accuracy of the proposed solution for a varying number of personalized streams, the baselines, and the oracle solution. As expected, the label permutation renders collaboration extremely detrimental as the different learning tasks are conflicting. As a result, local learning provides better accuracy than FedAvg. On the other hand, personalization can still leverage data among clusters and provide gains also in this case. Our algorithm matches the oracle performance for a suitable number of personalized streams. This scenario is particularly suitable for hard clustering, which isolates conflicting data distributions. As a result, CFL matches the proposed solution. FedFOMO promptly detects clusters and therefore quickly converges, but it attains lower average accuracy compared to the proposed solution. The performance reported so far is averaged over users and therefore fails to capture the existence of outliers performing worse than average. In order to assess the fairness of the training procedure, in Table 5.1 we report the worst user performance in the federated system. The proposed approach produces models with the highest worst case in all three scenarios. Silhouette score In Fig. 5.4 we plot the average silhouette score obtained by the k-means algorithm when clustering the federated users based on the procedure proposed in Sec. 5.3.3. In the labels shift scenario, for which we have seen that a universal model performs almost as good as the personalized ones, the silhouette scores monotonically decreases with k. In fact, in this simulation setting, a natural cluster-like structure among clients tasks does not exist. On the other hand, in the covariate shift and the concept shift scenarios, the silhouette score peaks around k = 4. In Sec. 5.4.2 this has has shown to be the minimum number of personalized models necessary to obtain satisfactory personalization performance in the system. This behaviour of the silhouette score is expected and desired, in this case the number of clusters matches exactly the number of underlying different tasks among the participants in FL that was induced by the rotation of the covariates and the permutation of the labels. 
We then conclude that the silhouette score provides meaningful information to tune the number of user-centric aggregation rules prior to training. Communication Efficiency Personalization comes at the cost of increased communication load in the downlink transmission from the PS to the federated user. In order to compare the algorithm convergence time, we parametrize the distributed system using two parameters. We define by ρ = T ul T dl the ratio between model transmission time in uplink (UL) and downlink (DL). Typical values of ρ in wireless communication systems are in the [START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF][START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF] because of the larger transmitting power of the base station compared to the edge devices. Furthermore, to account for unreliable computing devices, we model the random computing time T i at each user i by a shifted exponential r.v. with a cumulative distribution function P [T i > t] = 1 -1(t ≥ T min ) 1 -e -µ(t-T min ) where T min representing the minimum possible computing time and 1/µ being the average additional delay due to random computation impairments. Therefore, for a population of m devices, we then have T comp = E [max{T 1 , . . . , T m }] = T min + H m µ where H m is the m-th harmonic number. To study the communication efficiency we consider the simulation scenario with the EMNIST dataset with label and covariate shift. In Fig. 5.3 we report the time evolution of the validation accuracy in 3 different systems. A wireless systems with slow UL ρ = 4 and unreliable nodes T min = T dl = 1 µ , a wireless system with fast uplink ρ = 2 and reliable nodes T min = T dl , 1 µ = 0 and a wired system ρ = 1 (symmetric UL and DL) with reliable nodes T min = T dl , 1 µ = 0. The increased DL cost is negligible for wireless systems with strongly asymmetric UL/DL rates and in these cases, the proposed approach largely outperforms the baselines. In the case of more balanced UL and DL transmission times ρ = [START_REF] Guo | On calibration of modern neural networks[END_REF][START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF] and reliable nodes, it becomes instead necessary to properly choose the number of personalized streams in order to render the solution practical. Nonetheless, the proposed approach remains the best even in this case for k = 4. Note that FedFOMO incurs a large communication cost as personalized aggregation is performed at the client-side. Conclusion In this chapter, we presented a novel federated learning algorithm that exploits multiple user-centric aggregation rules to produce personalized models. The aggregation rules are based on user-specific mixture coefficients that can be computed during one communication round prior to federated training. Additionally, in order to limit the communication burden of personalization, we propose a simple strategy to effectively limit the number of personalized streams. We experimentally study the performance of the proposed solution across different tasks. Overall, our solution yields personalized models with higher testing accuracy while at the same time being more communication-efficient compared to the competing baselines. 
Part III Robust Bayesian Learning Chapter 6 Robust Bayesian Learning Bayesian learning provides a principled framework to account for uncertainty quantification, an essential enabler for reliable AI. However, standard Bayesian learning is known to have suboptimal generalization capabilities under model misspecification and in the presence of outliers. PAC-Bayes theory demonstrates that the free energy criterion minimized by Bayesian learning is a bound on the generalization error for Gibbs predictors (i.e., for single models drawn at random from the posterior) under the assumption of sampling distributions uncontaminated by outliers. This viewpoint provides a justification for the limitations of Bayesian learning when the model is misspecified, requiring ensembling, and when data is affected by outliers. In recent work, PAC-Bayes bounds -referred to as PAC m -were derived to introduce free energy metrics that account for the performance of ensemble predictors, obtaining enhanced performance under misspecification. This chapter introduces a novel robust free energy criterion that combines the generalized logarithm score function with PAC m ensemble bounds. The proposed free energy training criterion produces predictive distributions that are able to concurrently counteract the detrimental effects of model misspecification and outliers. Introduction Key assumptions underlying Bayesian inference and learning are that the adopted probabilistic model is well specified and that the training data set does not include outliers, so that training and testing distributions are matched [START_REF] Theodoridis | Machine learning: a Bayesian and optimization perspective[END_REF]. Under these favorable conditions, the Bayesian posterior distribution provides an optimal solution to the inference and learning problems. In contrast, optimality does not extend to scenarios characterized by misspecification [START_REF] Walker | Bayesian inference with misspecified models[END_REF][START_REF] Grünwald | Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it[END_REF] or outliers [START_REF] Martinez-Cantin | Practical Bayesian optimization in the presence of outliers[END_REF]. The framework developed in this chapter aims at addressing both problems by integrating the use of ensemble predictors [START_REF] Madigan | Bayesian model averaging[END_REF] with generalized logarithm score functions [START_REF] Sypherd | A loss function for robust classification: Calibration, landscape, and generalization[END_REF] in Bayesian learning. The proposed learning framework -termed (m, t)-robust Bayesian learning -is underpinned by a novel free energy learning criterion parameterized by integer m ≥ 1 and scalar t ∈ [0, 1]. The parameter m controls robustness to misspecification by determining the size of the ensemble used for prediction. In contrast, parameter t controls robustness to outliers by dictating the degree to which the loss function penalizes low predictive probabilities. The proposed learning criterion generalizes the standard free energy criterion underlying generalized Bayesian learning, which is obtained for m = 1 and t = 1 [START_REF] Knoblauch | Generalized variational inference: Three arguments for deriving new posteriors[END_REF][START_REF] Simeone | Machine Learning for Engineers[END_REF]; as well as the m-free energy criterion, obtained for t = 1, which was recently introduced in [3]. 
Related Work

Recent work has addressed the problem of model misspecification for Bayesian learning. In particular, references [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] have argued that the minimization of the standard free energy criterion - which defines generalized Bayesian learning [START_REF] Knoblauch | Generalized variational inference: Three arguments for deriving new posteriors[END_REF][START_REF] Simeone | Machine Learning for Engineers[END_REF] - yields predictive distributions that do not take advantage of ensembling, and thus have poor generalization capabilities for misspecified models. To mitigate this problem, references [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] introduced alternative free energy criteria that account for misspecification. The author of [START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] leveraged a second-order Jensen's inequality to obtain a tighter bound on the cross entropy loss, while the work [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] proposed an m-free energy criterion that accounts for the performance of an ensemble predictor with m constituent models. Both optimization criteria were shown to be effective in overcoming the shortcomings of Bayesian learning under misspecification, by yielding posteriors that make better use of ensembling. The free energy metrics introduced in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] are defined by using the standard log-loss, which is known to be sensitive to outliers. This is because the log-loss grows unboundedly on data points that are unlikely under the model [START_REF] Jewson | Principles of Bayesian inference using general divergence criteria[END_REF]. Free energy metrics based on the log-loss amount to Kullback-Leibler (KL) divergence measures between data and model distributions. A number of papers have proposed to mitigate the effect of outliers by replacing the classical KL-divergence-based criteria with more robust divergences, such as the β-divergences [START_REF] Basu | Robust and efficient estimation by minimising a density power divergence[END_REF][START_REF] Ghosh | Robust Bayes estimation using the density power divergence[END_REF] and the γ-divergence [START_REF] Fujisawa | Robust parameter estimation with a small bias against heavy contamination[END_REF][START_REF] Nakagawa | Robust Bayesian inference via γ-divergence[END_REF]. These criteria can be interpreted as substituting the log-loss with generalized logarithmic scoring rules. To optimize such criteria, variational methods have been proposed that were shown to be robust to outliers, while not addressing model misspecification [START_REF] Futami | Variational inference based on robust divergences[END_REF].
This chapter extends (generalized) Bayesian learning by tackling both model misspecification and the presence of outliers. Specifically, we propose the (m, t)-robust Bayesian learning framework, which is underpinned by a novel free energy criterion based on generalized logarithmic scoring rules and multi-sample objectives. The predictive distribution resulting from the minimization of the proposed objective takes full advantage of ensembling, while at the same time reducing the effect of outliers. The proposed robust m-free energy criterion is justified by following a PAC-Bayes approach, and its enhanced robustness is also proved through the lens of its influence function [START_REF] Hampel | The influence curve and its role in robust estimation[END_REF]. The theoretical findings are corroborated by experiments that highlight the enhanced generalization capabilities and calibration performance of the proposed learning criterion under model misspecification and with data sets corrupted by outliers.

Chapter Organization

The rest of the chapter is organized as follows. In Section 6.2, we review the generalized logarithm function and the associated entropy and divergence measures. In Section 6.3, we define the learning setup and provide a tutorial-style comparison between the frequentist and Bayesian learning frameworks. In Section 6.4.1, we introduce the concept of model misspecification and review the m-free energy criterion [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] as a tool to mitigate the effect of misspecified model classes. In Section 6.4.2, we define outliers and illustrate the role of robust losses in reducing the influence of outlying data samples. In Section 6.5, we introduce the (m, t)-robust Bayesian learning framework that tackles both model misspecification and the presence of outliers, and that overcomes the limitations of the standard Bayesian learning rule. We theoretically analyze the proposed learning criterion, providing PAC-Bayesian guarantees for the ensemble model with respect to the contaminated and the in-distribution measures. Finally, in Section 6.6, we provide regression and classification experiments to quantitatively and qualitatively measure the performance of the proposed learning criterion.

Preliminaries

Generalized Logarithms

The t-logarithm function, also referred to as the generalized or tempered logarithm, is defined as

log_t(x) := (1/(1 - t)) (x^{1-t} - 1) for x > 0, (6.1)

for t ∈ [0, 1) ∪ (1, ∞), and

log_1(x) := log(x) for x > 0, (6.2)

where the standard logarithm (6.2) is recovered from (6.1) in the limit lim_{t→1} log_t(x) = log(x). As shown in Figure 6.1, for t ∈ [0, 1), the t-logarithm is a concave function, and for t < 1 it is lower bounded as log_t(x) ≥ -(1 - t)^{-1}. Largely employed in classical and quantum physics, the t-logarithm has also been applied to machine learning problems. Specifically, t-logarithms have been used to define alternatives to the log-loss as a score function for probabilistic predictors with the aim of enhancing robustness to outliers [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF][START_REF] Sypherd | A loss function for robust classification: Calibration, landscape, and generalization[END_REF][START_REF] Amid | A more globally accurate dimensionality reduction method using triplets[END_REF]. Accordingly, the loss associated to a probabilistic model q(x) is measured as -log_t q(x) instead of the standard log-loss -log q(x).
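To make the behavior of the log_t-loss concrete, the following minimal Python sketch (purely illustrative; none of the names below come from the experiments of this chapter) implements (6.1)-(6.2) and evaluates the associated loss, showing that it saturates at (1 - t)^{-1} for t < 1 while the standard log-loss diverges as the predictive probability vanishes.

```python
import numpy as np

def log_t(x, t):
    """Generalized (tempered) logarithm (6.1); reduces to log(x) for t = 1."""
    x = np.asarray(x, dtype=float)
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def log_t_loss(p, t):
    """log_t-loss -log_t(p) of a predictive probability p."""
    return -log_t(p, t)

# The log-loss grows without bound as p -> 0, while for t < 1 the
# log_t-loss saturates at 1 / (1 - t), limiting the influence of
# samples that are unlikely under the model.
for p in [1e-6, 1e-2, 0.5, 1.0]:
    print(p, log_t_loss(p, t=1.0), log_t_loss(p, t=0.7))
```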
Note that we have the upper bound -log_t q(x) ≤ (1 - t)^{-1} for t < 1. In information theory, the t-logarithm was used by [START_REF] Tsallis | Possible generalization of Boltzmann-Gibbs statistics[END_REF] to define the t-Tsallis entropy

H_t(p(x)) := - ∫ p(x)^t log_t p(x) dx, (6.3)

and the t-Tsallis divergence

D_t(p(x)||q(x)) := ∫ p(x)^t [log_t p(x) - log_t q(x)] dx. (6.4)

For t = 1, the t-Tsallis entropy and the t-Tsallis divergence coincide respectively with the Shannon (differential) entropy and with the Kullback-Leibler (KL) divergence.

Figure 6.1: t-logarithm loss, or log_t-loss, of a predictive distribution p(x) for different values of t. For t = 1, the samples x corresponding to low predictive probability p(x) → 0 have a potentially unbounded loss value. On the contrary, for t < 1, the t-logarithm loss is bounded by (1 - t)^{-1}, which limits their influence.

When using (6.4) as an optimization criterion in machine learning, the concept of escort distribution is often useful [START_REF] Sears | Generalized maximum entropy, convexity and machine learning[END_REF]. Given a probability density p(x), the associated t-escort distribution is defined as

E_t(p(x)) = p(x)^t / ∫ p(x)^t dx. (6.5)

We finally note that the t-logarithm does not satisfy the distributive property of the logarithm, i.e., log(xy) = log(x) + log(y). Instead, we have the equalities [START_REF] Umarov | On a q-central limit theorem consistent with nonextensive statistical mechanics[END_REF]

log_t(xy) = log_t x + log_t y + (1 - t) log_t x log_t y (6.6)

and

log_t(x/y) = y^{t-1} (log_t x - log_t y). (6.7)

Frequentist vs. Bayesian Learning

Throughout this chapter we consider a standard learning set-up in which the learner has access to a data set D of n data points {z_i}_{i=1}^n sampled in an independent and identically distributed (IID) fashion from a sampling distribution ν_s(z). As we will see, owing to the presence of outliers, the sampling distribution may differ from the target distribution ν(z). The general goal of learning is that of optimizing models that perform well on average with respect to the target distribution ν(z). In this section, we assume that the sampling distribution ν_s(z) equals the target distribution ν(z), and we will address the problem of outliers - which arises when ν_s(z) ≠ ν(z) - in the next section. We will consider both supervised learning problems and the unsupervised learning problem of density estimation, which, as we will see in Chapter 7, have many applications to wireless communications. In supervised learning, a data sample z ∈ Z corresponds to a pair z = (x, y) that comprises a feature vector x ∈ X and a label y ∈ Y. In contrast, for density estimation, each data point z ∈ Z corresponds to a feature vector z = x ∈ X. Supervised learning is formulated as an optimization over a family of discriminative models defined by a parameterized conditional distribution p(y|x, θ) of target y given input x. The conditional distribution, or model, p(y|x, θ) is parameterized by a vector θ ∈ Θ in some domain Θ. In contrast, density estimation amounts to an optimization over a model defined by parameterized densities p(x|θ). In both cases, optimization targets a real-valued loss function ℓ(θ, z), which is used to score the model θ when tested on a data point z.

Frequentist Learning

The goal of frequentist learning consists in finding the model parameter vector θ that minimizes the training loss evaluated on the data set D, i.e., the empirical loss

L̂(θ, D) = Σ_{z∈D} ℓ(θ, z), (6.8)

yielding the empirical risk minimization (ERM) solution

θ_freq = arg min_{θ∈Θ} L̂(θ, D). (6.9)
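As an illustration of the ERM recipe (6.8)-(6.9), the following sketch fits a hypothetical scalar Gaussian density model p(x|θ) = N(x|θ, σ²) under the log-loss by plain gradient descent; all numerical values are toy choices and are not taken from the experiments. For this model the minimizer is simply the sample mean, which the iteration recovers.

```python
import numpy as np

# Hypothetical scalar Gaussian model p(x|theta) = N(x|theta, sigma2)
# scored with the log-loss l(theta, x) = -log p(x|theta).
sigma2 = 0.25
data = np.array([0.45, 0.52, 0.48, 0.79, 0.81])   # toy training set D

def training_loss(theta):
    # Log-loss accumulated over D, i.e., the training loss (6.8)
    return np.sum(0.5 * np.log(2 * np.pi * sigma2)
                  + (data - theta) ** 2 / (2 * sigma2))

# ERM solution (6.9) via gradient descent
theta = 0.0
for _ in range(200):
    grad = np.sum(-(data - theta) / sigma2)   # derivative of (6.8) w.r.t. theta
    theta -= 0.01 * grad
print(theta, data.mean())   # both coincide with the sample mean
```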
To simplify the discussion, we assume that the solution to the ERM problem is unique, although this assumption does not affect the generality of the presentation. ERM is motivated by the fact that the training loss (6.8) is an empirical, finite-sample counterpart of the true, unknown, population loss

L(θ) = E_{ν(z)}[ℓ(θ, z)], (6.10)

which averages the loss over the target distribution ν(z), which in this section also coincides with the sampling distribution. The discrepancy between the population loss and its approximation given by the training loss introduces uncertainty regarding the optimal model parameter

θ* = arg min_{θ∈Θ} L(θ), (6.11)

which is also assumed to be unique to simplify the discussion. The error between the optimal solution θ* and the frequentist solution θ_freq is a form of epistemic uncertainty, which can be reduced by increasing the size of the data set D. In practice, the short stationarity intervals of the data-generating distributions associated with wireless communications often limit the size of training data sets. In this scarce data regime, epistemic uncertainty may be significant. By selecting a single model, frequentist learning neglects epistemic uncertainty, as it discards information about other plausible models that fit the training data almost as well as the ERM solution (6.9). As a result, frequentist learning is known to lead to poorly calibrated decisions [START_REF] Guo | On calibration of modern neural networks[END_REF][START_REF] Cohen | Learning to learn to demodulate with uncertainty quantification via Bayesian meta-learning[END_REF], resulting in over- or under-confident outputs that may cause important reliability issues.

Bayesian Learning

Bayesian learning adopts a probabilistic reasoning framework by scoring all members of the model class by means of a distribution q(θ) over the model parameter space Θ. Through this distribution, Bayesian learning summarizes information obtained from the data D, as well as prior knowledge about the problem, e.g., about the scale of the optimal model parameter vector θ* or about sparsity patterns in θ*. Mathematically, given a prior distribution p(θ) on the model parameter space, Bayesian learning can be formulated as the minimization of the free energy criterion

Ĵ(q) = E_{q(θ)}[L̂(θ, D)] + (1/β) KL(q(θ)||p(θ)), (6.12)

where KL(q(θ)||p(θ)) denotes the Kullback-Leibler (KL) divergence between the posterior distribution q(θ) and the prior distribution p(θ), i.e.,

KL(q(θ)||p(θ)) = E_{q(θ)}[log(q(θ)/p(θ))], (6.13)

while β > 0 is a constant, also known as the inverse temperature. Accordingly, through the problem

minimize_q Ĵ(q), (6.14)

Bayesian learning minimizes a weighted sum of the average training loss and of the discrepancy with respect to the prior distribution p(θ). The KL term in the free energy (6.12) plays an essential role in differentiating between Bayesian learning and frequentist learning for small data set sizes. In fact, the KL divergence term acts as a regularizer, whose influence on the solution of problem (6.14) is inversely proportional to the data set size n. When the regularizer is removed, i.e., when we set β → ∞, the solution of problem (6.14) reduces to the frequentist solution (6.9). More precisely, the distribution q(θ) that solves problem (6.14) reduces to a point distribution concentrated at θ_freq. The optimization (6.14) of the free energy criterion (6.12) can be theoretically justified through the PAC-Bayes generalization framework.
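Before turning to the PAC-Bayes justification, it is instructive to see how the criterion (6.12) is evaluated in practice. The sketch below is illustrative only; it assumes a Gaussian variational distribution q(θ) = N(μ_q, σ_q²), a Gaussian prior, and the toy scalar Gaussian likelihood used in the previous snippet, so that the KL term is available in closed form while the expected training loss is estimated by Monte Carlo sampling of θ.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL(N(mu_q, var_q) || N(mu_p, var_p)), cf. (6.13)."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def free_energy(mu_q, var_q, data, beta, sigma2=0.25, mu_p=0.0, var_p=5.0, n_mc=500):
    """Monte Carlo estimate of the free energy (6.12) under the log-loss."""
    thetas = np.random.normal(mu_q, np.sqrt(var_q), size=n_mc)       # theta ~ q(theta)
    nll = 0.5 * np.log(2 * np.pi * sigma2) \
          + (data[None, :] - thetas[:, None]) ** 2 / (2 * sigma2)    # per-sample log-losses
    avg_training_loss = nll.sum(axis=1).mean()                        # E_q[ training loss (6.8) ]
    return avg_training_loss + gaussian_kl(mu_q, var_q, mu_p, var_p) / beta

data = np.array([0.45, 0.52, 0.48, 0.79, 0.81])   # toy data set
# The variational parameters (mu_q, var_q) would then be optimized, e.g., by
# gradient descent with the reparametrization trick, to address (6.14).
print(free_energy(mu_q=0.5, var_q=0.1, data=data, beta=1.0))
```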
Within the PAC-Bayes framework, the KL term is proved to quantify an upper bound on the discrepancy between training loss and population loss on average with respect to the random draws of the model parameter vector θ ∼ q(θ). Mathematically, the free energy provides an upper bound on the average population loss (when neglecting constants that are inessential for optimization), i.e.,

E_{q(θ)}[L(θ)] ≤ Ĵ(q) + const. (6.15)

As we have discussed in the previous subsection, epistemic uncertainty is caused by the difference between training and population losses, and hence between the corresponding minimizers (6.11) and (6.9). By incorporating a bound on this error, the free energy criterion (6.12), unlike the frequentist training loss (6.8), provides a way to account for epistemic uncertainty. Specializing problem (6.14) to the log-loss

ℓ(x, y, θ) = -log p(y|x, θ) (6.16)

for supervised learning, and

ℓ(x, θ) = -log p(x|θ) (6.17)

for density estimation, the minimization of the free energy in (6.14) leads to the β-tempered posterior distribution

q_Bayes(θ|D) ∝ p(θ) ∏_{(x,y)∈D} p(y|x, θ)^β (6.18)

for supervised learning, and a similar expression applies to unsupervised learning for density estimation. The distribution (6.18) reduces to the standard posterior distribution when β = 1. In practice, computing the posterior distribution or, more generally, solving problem (6.14) is a computationally prohibitive task. A common approach to address this issue is through variational inference (VI) [START_REF] Blei | Variational inference: A review for statisticians[END_REF]. VI limits the scope of the optimization to a tractable set of distributions q(θ), such as jointly Gaussian variables with free mean and covariance parameters.

Let us now assume that we have obtained a distribution q(θ) as a, generally approximate, solution of problem (6.14). We focus first on supervised learning. Given a test input x, the ensemble predictor obtained from distribution q(θ) is given by

p(y|x, q) = E_{q(θ)}[p(y|x, θ)]. (6.19)

The average in (6.19) is in practice approximated by drawing multiple, say m, samples θ ∼ q(θ) from distribution q(θ), obtaining the m-sample predictor

p(y|x, θ_1, ..., θ_m) = (1/m) Σ_{i=1}^m p(y|x, θ_i), (6.20)

where the samples θ_i are generated from distribution q(θ) for i = 1, ..., m, which we write as θ_1, ..., θ_m ∼ q(θ)^{⊗m}. In the case of density estimation, the ensemble density p(x|q) is similarly defined as

p(x|q) = E_{q(θ)}[p(x|θ)], (6.21)

which can be approximated as

p(x|θ_1, . . . , θ_m) = (1/m) Σ_{i=1}^m p(x|θ_i), (6.22)

with θ_1, . . . , θ_m ∼ q(θ)^{⊗m}. Henceforth, when detailing expressions for supervised learning, it will be implied that the corresponding formulas for density estimation apply by replacing p(y|x, θ) with p(x|θ), as done above to define ensemble predictors. Given a distribution q(θ), we define the log-loss of the ensemble model (6.19) as

R(q, x, y) := -log p(y|x, q) = -log E_{q(θ)}[p(y|x, θ)], (6.23)

and the m-sample log-loss as

R̂_m(q, x, y) := E_{q(θ)^{⊗m}}[-log p(y|x, θ_1, ..., θ_m)] = E_{q(θ)^{⊗m}}[-log((1/m) Σ_{i=1}^m p(y|x, θ_i))], (6.24)

which measures the log-loss of the m-sample predictor (6.20). Note that for m = 1 in (6.24), we obtain the log-loss of the Gibbs predictor

R̂_1(q, x, y) = R̄(q, x, y) := E_{q(θ)}[-log p(y|x, θ)]. (6.25)

Example 1: To illustrate the difference between the frequentist and Bayesian learning paradigms, let us consider the problem of estimating the probability distribution of the channel gain of a scalar wireless channel.
This is an example of unsupervised learning for density estimation. Let us assume that the channel gain density follows a true, unknown, target distribution given by the mixture of two Gaussians ν(x) = 0.7 N(x|0.5, 0.05) + 0.3 N(x|0.8, 0.02). This is shown in the left part of Fig. 6.2 as a dashed green line. The two components may correspond to line-of-sight (LOS) and non-line-of-sight (NLOS) propagation conditions [START_REF] Xiao | Identification and mitigation of non-line-of-sight conditions using received signal strength[END_REF]. We fix a Gaussian model class p(x|θ) = N(x|θ, 0.25) and a prior distribution p(θ) = N(θ|-5, 5). Given the data points represented as crosses in the left part of Figure 6.2, the estimated distribution obtained by frequentist learning is reported as a dash-dotted black curve in the left panel. In contrast, Bayesian learning returns the posterior distribution (6.18), which in turn yields the ensemble density (6.21). The resulting distributions are shown in the left and right parts of Figure 6.2, respectively, for inverse temperature parameters β ∈ {1, 0.1}. The Bayesian predictive distribution is still unimodal, but it has a larger variance, which results from the combination of multiple Gaussian models according to the Bayesian posterior, which does not collapse to a point distribution by virtue of the KL regularization term whose influence is controlled by β. ■

Robust Bayesian Learning

As we have seen in the previous section, Bayesian learning optimizes the free energy by tackling problem (6.14). By (6.15), the free energy provides a bound on the population loss as a function of the training loss when averaging over the distribution q(θ) in the model parameter space [START_REF] Catoni | PAC-bayesian supervised classification: the thermodynamics of statistical learning[END_REF]. This approach has two important limitations:

• Model misspecification: The bound (6.15) provided by the free energy is known to be loose in the presence of model misspecification. Model misspecification occurs when the assumed probabilistic model p(y|x, θ) cannot express the conditional target distribution ν(y|x) = ν(x, y)/ν(x), where ν(x) = ∫ ν(x, y) dy [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF]. This causes the β-tempered posterior distribution to be generally suboptimal when the model is misspecified [START_REF] Knoblauch | Generalized variational inference: Three arguments for deriving new posteriors[END_REF].

• Discrepancy between sampling and target distributions: The sampling distribution ν_s(z) that underlies the generation of the training data set D may not match the target distribution ν(z) used to test the trained model, due to the presence of outliers in the training data. This discrepancy is not accounted for in the derivation of the free energy criterion, causing Bayesian learning to be suboptimal in the presence of outliers [START_REF] Knoblauch | Generalized variational inference: Three arguments for deriving new posteriors[END_REF].

We observe that these two causes of suboptimality are distinct. In fact, model misspecification may reflect the ignorance of the learner concerning the data generation process, or it may be caused by constraints on the computational resources of the device implementing the model.
In contrast, the presence of outliers amounts to an inherent source of distortion in the data, which cannot be removed even if the learner acquired more information about the data generation process or more computing power. In this section, we review robust Bayesian learning solutions that address these two issues.

(m, 1)-Robust Bayesian Learning Against Model Misspecification

In this subsection, we describe a recently proposed method that robustifies Bayesian learning against model misspecification. We start by providing a formal definition of misspecification. Recall that we are focusing on supervised learning, but the presentation also applies to density estimation by replacing the discriminative model p(y|x, θ) with the density model p(x|θ).

Definition 2 (Misspecification). A model class F = {p(y|x, θ) : θ ∈ Θ} is said to be misspecified with respect to the target distribution ν(x, y) whenever there is no model parameter vector θ ∈ Θ such that ν(y|x) = p(y|x, θ), where ν(y|x) is the conditional target distribution obtained from the joint target distribution ν(x, y).

Under model misspecification, the free energy criterion has been shown to yield a loose bound (6.15) on the population loss obtained by the ensemble predictor (6.19) [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF]. To address this problem, the m-sample free energy criterion was introduced in [3], whose minimization yields (m, 1)-robust learning. The reason for the notation "(m, 1)" will be made clear in the next two subsections. The key observation underlying this approach is that the training loss L̂(θ, D) in the standard free energy (6.12) does not properly account for the performance of ensemble predictors. In fact, the log-loss of an m-sample ensemble predictor is given by R̂_m(q, x, y) in (6.24), and not by the Gibbs log-loss R̄(q, x, y) in (6.25). By leveraging the results of [START_REF] Burda | Importance weighted autoencoders[END_REF] and [START_REF] Mnih | Variational inference for Monte Carlo objectives[END_REF], the multi-sample criterion R̂_m(q, x, y) can be shown to provide a sharper bound on the ensemble risk R(q, x, y) in (6.23) as compared to the Gibbs risk R̄(q, x, y) in (6.25), i.e.,

R(q, x, y) ≤ R̂_m(q, x, y) ≤ R̄(q, x, y). (6.26)

Furthermore, the first inequality in (6.26) becomes asymptotically tight as m → ∞, i.e.,

lim_{m→∞} R̂_m(q, x, y) = R(q, x, y). (6.27)

Using PAC-Bayes arguments, the m-sample free energy is obtained by replacing the training loss E_{q(θ)}[L̂(θ, D)] in the free energy (6.12) with the m-sample training loss

L̂(θ_1, . . . , θ_m, D) = Σ_{(x,y)∈D} R̂_m(q, x, y) = Σ_{(x,y)∈D} E_{q(θ)^{⊗m}}[-log p(y|x, θ_1, ..., θ_m)]. (6.28)

The m-sample free energy is then defined as

Ĵ_m(q) = L̂(θ_1, . . . , θ_m, D) + (m/β) KL(q(θ)||p(θ)), (6.29)

in which the m-sample training loss is averaged over the distribution of the m samples θ_1, ..., θ_m ∼ q(θ)^{⊗m} used in the ensemble predictor (6.20). We note that the m-sample free energy coincides with the standard free energy (6.12) for m = 1. Finally, the (m, 1)-robust Bayesian learning problem is defined by the optimization

minimize_q Ĵ_m(q). (6.30)

Example 1 (continued): Let us return to Example 1. The problem is characterized by model misspecification, since the target distribution ν(x) is a mixture of two Gaussian components, while the model class comprises only unimodal Gaussian models p(x|θ).
In contrast to standard Bayesian learning, the ensemble density (6.22) obtained with the distribution q(θ) returned by (m, 1)-robust Bayesian learning for m = 10 (red curve in the right panel) is able to take advantage of ensembling to approximate both the NLoS and LoS components of the target distribution. ■

Accordingly, (1, t)-robust Bayesian learning is defined by the minimization [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF]

minimize_q Ĵ_t(q). (6.36)

Example 2: To highlight the effect of outliers, we consider the same channel gain estimation problem described in Example 1, but we now assume that the original training data set (black crosses) is contaminated by an outlying data point (red cross). The (m, 1)-robust Bayesian learning solution (red curve, with m = 10) is based on the standard log-loss and is observed to be significantly affected by the presence of the outlier. As a result, the estimated distribution for (m, 1)-robust Bayesian learning concentrates a relevant fraction of its mass around the outlier. In contrast, the (1, t)-robust Bayesian solution (gray curve) with t = 0.4 is less influenced by the outlying data point. However, like Bayesian learning, it is not able to take advantage of ensembling and to approximate both LoS and NLoS components. This observation justifies the (m, t)-robust Bayesian learning approach described next. ■

(m, t)-Robust Bayesian Learning

In the previous section, we reviewed the m-free energy criterion introduced by [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF], which was argued to produce predictive distributions that are more expressive, providing a closer match to the underlying sampling distribution ν_s(x). However, the approach is not robust to the presence of outliers. In this section, we introduce (m, t)-robust Bayesian learning and the associated novel free energy criterion, which addresses both expressivity in the presence of misspecification and robustness in settings with outliers. To this end, we study the general setting described in Section 6.4, in which the sampling distribution ν_s(x) satisfies both Assumption 2 and Assumption 7, and we investigate the use of the log_t-loss with t ∈ [0, 1), as opposed to the standard log-loss assumed in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF].

Robust m-free Energy

For a proposal posterior q(θ), generalizing (6.24), we define the multi-sample empirical log_t-loss evaluated at a data point (x, y) as

R̂_t^m(q, x, y) := E_{q(θ)^{⊗m}}[-log_t p(y|x, θ_1, ..., θ_m)] = E_{q(θ)^{⊗m}}[-log_t((1/m) Σ_{i=1}^m p(y|x, θ_i))]. (6.37)

From the concavity of the t-logarithm with t ∈ [0, 1), in a manner similar to (6.26), the loss (6.37) provides an upper bound on the ensemble log_t-loss R_t(q, x, y) (cf. (6.23)), i.e.,

R_t(q, x, y) ≤ R̂_t^m(q, x, y). (6.38)

Furthermore, the bound becomes increasingly tighter as m increases, and we have the limit

lim_{m→∞} R̂_t^m(q, x, y) = R_t(q, x, y) (6.39)

for t ∈ [0, 1). The m-sample log_t-loss (6.37) is used to define the (m, t) training loss

L̂_t(θ_1, . . . , θ_m, D) = Σ_{(x,y)∈D} R̂_t^m(q, x, y), (6.40)

based on which, for β > 0, it is possible to define the robust m-free energy as

J_t^m(q) := L̂_t(θ_1, . . . , θ_m, D) + (m/β) KL(q(θ)||p(θ)). (6.41)
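In practice, the expectation over q(θ)^{⊗m} in (6.37), and hence the training loss (6.40), is estimated by Monte Carlo sampling. The following sketch is illustrative only; it reuses the hypothetical Gaussian variational posterior and scalar Gaussian likelihood of the earlier snippets, and it estimates the multi-sample log_t-loss at a single data point while showing its dependence on m.

```python
import numpy as np

def log_t(x, t):
    return np.log(x) if t == 1.0 else (x ** (1.0 - t) - 1.0) / (1.0 - t)

def multi_sample_logt_loss(x, mu_q, std_q, m, t, sigma2=0.25, n_mc=2000):
    """Monte Carlo estimate of the m-sample log_t-loss (6.37) at point x,
    assuming q(theta) = N(mu_q, std_q^2) and p(x|theta) = N(x|theta, sigma2)."""
    losses = []
    for _ in range(n_mc):
        thetas = np.random.normal(mu_q, std_q, size=m)        # theta_1..theta_m ~ q
        lik = np.exp(-(x - thetas) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        losses.append(-log_t(lik.mean(), t))                   # -log_t of the m-sample mixture
    return float(np.mean(losses))

# As m grows, the estimate decreases towards the ensemble log_t-loss,
# consistently with the bound (6.38) and the limit (6.39).
for m in [1, 5, 20]:
    print(m, multi_sample_logt_loss(x=0.8, mu_q=0.5, std_q=0.2, m=m, t=0.7))
```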
The proposed free energy generalizes the standard free energy criterion (6.12), which corresponds to the training criterion of (m, t)-robust Bayesian learning for m = 1 and t = 1, as well as the m-free energy criterion (6.29), which corresponds to the training criterion of (m, t)-robust Bayesian learning for t = 1. In the following we provide theoretical results for the proposed learning framework. The analysis is specialized to the unsupervised setting in order to simplify the notation. Nonetheless, the results can be readily extended to the supervised setting by replacing p(x|θ) with p(y|x, θ). Following similar steps as in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF], the robust m-free energy can be proved to provide an upper bound on the population log_t-risk, as detailed in the following lemma.

Lemma 3. With probability 1 - σ, with σ ∈ (0, 1), with respect to the random sampling of the data set D, for all distributions q(θ) that are absolutely continuous with respect to the prior p(θ), the following bound on the population risk of the ensemble model holds:

E_{ν_s(x)}[R_t(q, x)] ≤ J_t^m(q) + ψ(ν_s, n, m, β, p, σ), (6.42)

where

ψ(ν_s, n, m, β, p, σ) := (1/β) [log E_{D,p(θ)}[e^{β ∆_{m,n}}] - log σ] (6.43)

and

∆_{m,n} := (1/n) Σ_{x∈D} log_t E_{j∼U[1:m]}[p(x|θ_j)] - E_{ν_s(x)}[log_t E_{j∼U[1:m]}[p(x|θ_j)]]. (6.44)

Furthermore, the risk with respect to the ID measure ν(x) can be bounded as

E_{ν(x)}[R_t(q, x)] ≤ (1/(1 - ϵ)) (J_t^m(q) + ψ(ν_s, n, m, β, p, σ)) + ϵ(C^{1-t} - 1) / ((1 - ϵ)(1 - t)), (6.45)

if the contamination ratio satisfies the inequality ϵ < 1.

Lemma 3 provides an upper bound on the log_t-risk (6.32), which is defined with respect to the sampling distribution ν_s(x) corrupted by outliers, as well as on the ensemble log_t-risk (6.23) evaluated with respect to the ID measure ν(x). Reflecting the fact that the data set D contains samples from the corrupted measure ν_s(x), the bound (6.42) vanishes as n → ∞, while a non-vanishing term appears in the bound (6.45).

Minimizing the Robust m-free Energy

Using standard tools from the calculus of variations, it is possible to express the minimizer of the robust m-free energy,

q_t^m(θ) := arg min_q J_t^m(q), (6.46)

as the fixed point of an operator acting on the ensembling distribution q(θ).

Theorem 6. The minimizer (6.46) of the robust m-free energy objective (6.41) is the fixed point of the operator

T(q) := p(θ_j) exp(β Σ_{x∈D} E_{{θ_i}_{i≠j}}[log_t((1/m) Σ_{i=1}^m p(x|θ_i))]), (6.47)

where the average in (6.47) is taken with respect to the IID random vectors {θ_i}_{i≠j} ∼ q(θ)^{⊗(m-1)}.

Theorem 6 is useful to develop numerical solutions to problem (6.46) for nonparametric posteriors, and it resembles standard mean-field variational inference iterations [START_REF] Bishop | Pattern recognition and machine learning[END_REF]. Alternatively, we can tackle problem (6.46) over a parametric family of distributions using standard tools from variational inference [START_REF] Blei | Variational inference: A review for statisticians[END_REF]. To further characterize the posterior minimizing the robust m-free energy criterion, and to showcase the beneficial effect of the generalized logarithm, we now consider the asymptotic regime in which m → ∞ and then n → ∞. In this limit, the robust m-free energy (6.41) coincides with the log_t-risk R_t(q).
From the definition of the t-Tsallis divergence (6.4), minimizing the log_t-risk can in turn be shown to be equivalent to minimizing the divergence

D_t(E_t(ν_s(x))||p(x|q)) (6.48)

between the t-escort distribution (6.5) associated to the sampling distribution ν_s(x) and the ensemble predictive distribution p(x|q). Therefore, unlike the standard Bayesian setup with t = 1, the minimizer of the robust m-free energy does not seek to approximate the sampling distribution ν_s(x). Instead, the minimizing ensembling posterior q(θ) aims at matching the t-escort version of the sampling distribution ν_s(x). In the case of corrupted data generation procedures, i.e., when ν_s(x) ≠ ν(x), recovering the sampling distribution ν_s(x) is not always the end goal, and, as shown by [START_REF] Sypherd | A loss function for robust classification: Calibration, landscape, and generalization[END_REF], escort distributions are particularly effective at reducing the contribution of OOD measures.

Example 2 (continued): Returning to Example 2, we now consider the performance of (m, t)-robust Bayesian learning for m = 10 and t = 0.4. The resulting distribution (blue line) is able to better approximate the target distribution by reducing the effect of the outliers, while also taking advantage of ensembling to combat misspecification.

Influence Function Analysis

In this section, we study the robustness of the proposed free energy criterion by using tools from classical statistics. The robustness of an estimator is typically measured by means of its influence function [START_REF] Hampel | The influence curve and its role in robust estimation[END_REF]. The influence function quantifies the extent to which an estimator derived from a data set D changes when a data point z is added to D. We are specifically interested in quantifying the effect of data contamination, via the addition of a point z, on the ensembling distribution q_t^m(θ) that minimizes the proposed robust m-free energy objective (6.41). To this end, given a set D of n data points {x_1, . . . , x_n} ∈ X^n, we define the empirical measure

P^n(x) = (1/n) Σ_{i=1}^n δ(x - x_i), (6.49)

where δ(•) denotes the Dirac function, and we introduce its γ-contaminated version for an additional data point z ∈ X as

P^n_{γ,z}(x) = ((1 - γ)/n) Σ_{i=1}^n δ(x - x_i) + γ δ(x - z), (6.50)

with γ ∈ [0, 1]. The following analysis is inspired by [START_REF] Futami | Variational inference based on robust divergences[END_REF], which considered Gibbs models trained using generalized free energy criteria based on the β-divergence and γ-divergence.

Figure 6.4: Magnitude of the contamination-dependent term |∂/∂ϕ R̂_t^m(q_ϕ, z)|, evaluated at ϕ = ϕ_t^{m*}(0), as a function of the contaminating data point z.

To compute the influence function we consider parametric ensembling distributions q_ϕ(θ) defined by a parameter vector ϕ ∈ Φ ⊆ R^d. We denote the robust m-free energy (6.41) evaluated using the empirical distribution (6.50) as

J_t^m(γ, ϕ) = E_{P^n_{γ,z}(x)}[R̂_t^m(q_ϕ, x)] + (m/β) D_1(q_ϕ(θ)||p(θ)), (6.51)

and its minimizer as

ϕ_t^{m*}(γ) = arg min_{ϕ∈Φ} J_t^m(γ, ϕ). (6.52)

The influence function is then defined as the derivative

IF_t^m(z, ϕ, P^n) = dϕ_t^{m*}(γ)/dγ |_{γ=0} (6.53)
= lim_{γ→0} (ϕ_t^{m*}(γ) - ϕ_t^{m*}(0)) / γ. (6.54)

Accordingly, the influence function measures the extent to which the minimizer ϕ_t^{m*}(γ) changes for an infinitesimal perturbation of the data set.
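The derivative in (6.53)-(6.54) can also be approximated numerically by refitting the model under the contaminated measure (6.50) with a small γ. The sketch below is a rough illustration of this finite-difference view; the routine fit(points, weights) is a hypothetical placeholder for any procedure that minimizes the weighted robust m-free energy (6.51) and returns the optimal parameter ϕ.

```python
import numpy as np

def influence_estimate(fit, data, z, gamma=1e-3):
    """Finite-difference approximation of the influence function (6.53)-(6.54).

    `fit(points, weights)` is assumed to minimize the robust m-free energy
    (6.51) under the given weighted empirical measure and to return phi.
    """
    n = len(data)
    base_weights = np.full(n, 1.0 / n)
    phi_0 = fit(data, base_weights)                       # phi*(0), clean empirical measure
    # gamma-contaminated empirical measure (6.50): down-weight D, add point z
    points = np.concatenate([data, [z]])
    weights = np.concatenate([(1.0 - gamma) * base_weights, [gamma]])
    phi_gamma = fit(points, weights)                      # phi*(gamma)
    return (phi_gamma - phi_0) / gamma
```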
Theorem 7. The influence function of the robust m-free energy objective (6.51) is

IF_t^m(z, ϕ, P^n) = -[∂²J_t^m(γ, ϕ)/∂ϕ²]^{-1} × [∂²J_t^m(γ, ϕ)/∂γ∂ϕ] |_{γ=0, ϕ=ϕ_t^{m*}(0)}, (6.55)

where

∂²J_t^m(γ, ϕ)/∂ϕ² = E_{P^n_{γ,z}(x)}[∂²/∂ϕ² R̂_t^m(q_ϕ, x)] (6.56)
+ (m/β) ∂²/∂ϕ² KL(q_ϕ(θ)||p(θ)) (6.57)

and

∂²J_t^m(γ, ϕ)/∂γ∂ϕ = ∂/∂ϕ [E_{P^n(x)}[R̂_t^m(q_ϕ, x)] - R̂_t^m(q_ϕ, z)]. (6.58)

Table 6.1: Total variation (TV) distance between the ID measure ν(x) and the predictive distribution p_q(x) obtained from the optimization of the different free energy criteria for the setting in Figure 6.5 (the TV values are scaled by 10^4).

                      t = 1, ϵ = 0    t = 1, ϵ = 0.1    t = 0.9, ϵ = 0.1    t = 0.8, ϵ = 0.1
TV(ν(x)||p_q(x))          1.38             2.15               1.88                1.79

Theorem 7 quantifies the impact of the data point z through the contamination-dependent term ∂/∂ϕ R̂_t^m(q_ϕ, z). We study the magnitude of this term to illustrate the enhanced robustness deriving from the proposed robust m-free energy objective. For ease of tractability, we consider the limit m → ∞. In this case, the contamination-dependent term can be expressed as

∂/∂ϕ lim_{m→∞} R̂_t^m(q_ϕ, z) = -∂/∂ϕ log_t E_{q_ϕ(θ)}[p(z|θ)] (6.59)
= -E_{q_ϕ(θ)}[p(z|θ)]^{-t} ∂E_{q_ϕ(θ)}[p(z|θ)]/∂ϕ. (6.60)

The effect of the t-logarithm function thus appears in the first multiplicative term, and it is that of reducing, relative to the standard log-loss, the influence of anomalous data points to which the ensemble predictive distribution p_q(x) assigns low probability.

Example: To illustrate how the t-logarithm improves the robustness to outlying data points, we consider again the example of Figure 6.3 and we assume a parametrized ensembling posterior q_ϕ(θ) = N(θ|ϕ, 1). In Figure 6.4, we plot the magnitude of the contamination-dependent term evaluated at the parameter ϕ_t^{m*}(0) that minimizes the robust m-free energy J_t^m(0, ϕ) for m = ∞ and different values of t. For all values of t, the optimized predictive distribution concentrates around 0, where most of the sampled data points lie. However, as the value of the contaminated data point z becomes smaller and moves towards regions where the ensemble assigns low probability, the contamination-dependent term grows linearly for t = 1, while it flattens out for t ∈ (0, 1). This showcases the role of the robust m-free energy criterion as a tool to mitigate the influence of outlying data points by setting t < 1.

Figure 6.5: Samples from the ID measure are represented as green dots, while data points sampled from the OOD component are in red. The optimized predictive distributions are displayed in shades of gray. In (a), we plot the predictive distribution associated to (m, 1)-robust Bayesian learning obtained by minimizing the m-free energy criterion J^m of [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] with m = 20, using only samples from the ID measure (i.e., there are no outliers). In (b), we show the predictive distribution obtained by minimizing the same criterion when using samples from the ID measure and the OOD measure with a contamination ratio ϵ = 0.1. In (c) and (d), we consider the same scenario as in (b), but with the proposed (m, t)-robust Bayesian learning based on the robust m-free energy criterion J_t^m with m = 20, setting t = 0.9 and t = 0.8, respectively.

Experiments

In this section, we first describe a simple regression task with an unimodal likelihood, and then we present results for larger-scale classification and regression tasks.
The main aim of these experiments is to provide qualitative and quantitative insights into the performance of the (m, 1)-robust Bayesian learning of [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] and of the proposed (m, t)-robust Bayesian learning. Both examples are characterized by model misspecification and outliers.

Multimodal Regression

For the first experiment, we modify the regression task studied by [START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] and [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] in order to capture not only model misspecification but also the presence of outliers, as in the contamination model (6.31). To this end, we assume that the ID distribution ν(x), with x = (a, b), is given by ν(a, b) = p(a)ν(b|a), where the covariate a is uniformly distributed in the interval [-10.5, 10.5] - i.e., p(a) = 1/21 in this interval and p(a) = 0 otherwise - and the response variable b is conditionally distributed according to the two-component mixture

ν(b|a) = N(b|α μ_a, 1), (6.61)

with

α ∼ Rademacher, μ_a = 7 sin(3a/4) + a/2. (6.62)

The parametric model is given by p(x|θ) = p(a, b|θ) = p(a) N(b|f_θ(a), 1), where f_θ(a) is the output of a three-layer fully connected Bayesian neural network with 50 neurons and Exponential Linear Unit (ELU) activation functions [START_REF] Clevert | Fast and accurate deep network learning by exponential linear units (elus)[END_REF] in the two hidden layers. We consider a Gaussian prior p(θ) = N(0, I) over the neural network weights and use a Monte Carlo estimator of the gradient based on the reparametrization trick [START_REF] Kingma | Auto-encoding variational Bayes[END_REF], as in [START_REF] Blundell | Weight uncertainty in neural network[END_REF].

Consider first only the effect of misspecification. The parametric model assumes a unimodal likelihood N(b|f_θ(a), 1) for the response variable, and is consequently misspecified with respect to the ID measure (6.61). As a result, standard Bayesian learning leads to a unimodal predictive distribution that approximates the mean value of the response variable, while (m, 1)-robust Bayesian learning can closely reproduce the data distribution [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF]. This is shown in Figure 6.5a, which depicts the predictive distribution obtained by minimizing the m-free energy criterion J^m with m = 20 when using exclusively samples from the ID measure (green dots). By virtue of ensembling, the resulting predictive distribution becomes multimodal, and it is seen to provide a good fit to the data from the ID measure.

Let us now also evaluate the effect of outliers. To this end, in Figure 6.5b we consider (m, 1)-robust Bayesian learning and minimize again the m-free energy criterion, but this time using a data set contaminated with samples from the OOD component (red points) and with a contamination ratio ϵ = 0.1. The predictive distribution is seen to cover not only the ID samples but also the outlying data points. In Figures 6.5c and 6.5d, we finally plot the predictive distributions obtained by (m, t)-robust Bayesian learning with m = 20, when setting t = 0.9 and t = 0.8, respectively.
The proposed approach is able to mitigate the effect of the outlying component for t = 0.9, and, for t = 0.8, it almost completely suppresses it. As a result, the proposed energy criterion produces predictive distributions that match the ID measure more closely. This qualitative behavior is quantified in Table 6.1, where we report the total variation distance from the ID measure for the setting and predictors considered in Figure 6.5.

MNIST and CIFAR-10 Classification Tasks

We now address the problem of training Bayesian neural network classifiers in the presence of misspecification and outliers. We consider three different experimental setups entailing distinct data sets and model architectures:

• Classification of MNIST digits [START_REF] Lecun | The MNIST database of handwritten digits[END_REF] based on a fully connected neural network comprising a single hidden layer with 25 neurons.

• Classification of Extended MNIST characters and digits [START_REF] Cohen | Emnist: Extending mnist to handwritten letters[END_REF] based on a fully connected neural network with two hidden layers of 25 neurons each.

• Classification of CIFAR-10 [133] images using a convolutional neural network (CNN) with two convolutional layers, the first with 8 filters of size 3 × 3 and the second with 4 filters of size 2 × 2, followed by a fully connected hidden layer with 25 neurons.

All hidden units use ELU activations [START_REF] Clevert | Fast and accurate deep network learning by exponential linear units (elus)[END_REF], except the last, classifying, layer, which implements the standard softmax function. Model misspecification is enforced by adopting neural network architectures with small capacity. As in [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF], outliers are obtained by randomly modifying the labels for a fraction ϵ of the data points in the training set. Additional details for the experiments can be found in the supplementary material. We measure the accuracy of the trained models, as well as their calibration performance. Calibration refers to the capacity of a model to quantify uncertainty (see, e.g., [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF]). We specifically adopt the expected calibration error (ECE) [START_REF] Guo | On calibration of modern neural networks[END_REF], a standard metric that compares model confidence to actual test accuracy (see the supplementary material for the exact definition). We train the classifiers using corrupted data sets with a contamination ratio ϵ = 0.3, and then we evaluate their accuracy and ECE as a function of t ∈ [0, 1] based on a clean (ϵ = 0) holdout data set. We compare the performance of (m, t)-robust Bayesian learning, based on the minimization of the robust m-free energy J_t^m with m = 10, to: (i) deep ensembles [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF], also with 10 models in the ensemble; and (ii) the robust Gibbs predictor of [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF], which optimizes over a single predictor (not an ensemble) by minimizing the free energy metric J_t^1. The inverse temperature parameter β is set to 0.1 in the (m, t)-robust Bayesian and the Gibbs predictor objectives.
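For completeness, the ECE values reported below follow the usual binning construction: predictions are grouped by confidence, and the ECE is the confidence-weighted average gap between per-bin accuracy and per-bin confidence. A minimal sketch is given below (illustrative, with 10 equally spaced bins; the exact definition used in the experiments is the one in the supplementary material).

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """ECE: average |accuracy - confidence| over confidence bins,
    weighted by the fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```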
Figure 6.7: Negative log-likelihood histograms of ID and OOD training samples for an ensemble model minimizing (on the left) the log-loss based criterion J_1^10, and (on the right) the proposed robust objective J_0.7^10 based on the log_t-loss with t = 0.7.

In Figure 6.6 we report the performance metrics attained by the trained models in the three different setups listed above. From the top panels we conclude that (m, t)-robust Bayesian learning is able to mitigate model misspecification by improving the final accuracy as compared to the robust Gibbs predictor and the deep ensemble models. Furthermore, the use of the robust loss for a properly chosen value of t leads to a reduction of the detrimental effect of outliers and to an increase in the model accuracy as compared to the standard log-loss (t = 1). In terms of calibration performance, the lower panels demonstrate the capacity of robust ensemble predictors with t < 1 to drastically reduce the ECE as compared to deep ensembles. In this regard, it is also observed that the accuracy and ECE performance levels depend on the choice of the parameter t. In practice, the selection of t may be addressed using validation or meta-learning methods, in a manner akin to [START_REF] Zhang | Meta-learning divergences for variational inference[END_REF]. Additional results on calibration in the form of reliability diagrams [START_REF] Degroot | The comparison and evaluation of forecasters[END_REF] can be found in the supplementary material.

As shown theoretically in Section 6.5.3, the effect of the log_t-loss is to reduce the influence of outliers during training for t < 1. We empirically investigate the effect of the robust loss in Figure 6.7, in which we compare the distribution of the negative log-likelihood for ID and OOD training data samples. We focus on the CIFAR-10 data set, and we compare the histogram of the negative log-likelihood under a CNN model trained based on the m-free energy J_1^m, with m = 10 and the standard logarithmic loss, with that of a CNN minimizing the proposed robust m-free energy J_t^m, with m = 10 and t = 0.7. The (m, 1)-robust Bayesian model based on the standard log-loss tries to fit both ID and OOD samples and, as a result, the two components have similar likelihoods. In contrast, (m, t)-robust Bayesian learning is able to downweight the influence of outliers and to better fit the ID component.

Figure 6.8: Comparison of (i) deep ensembles; (ii) the robust Gibbs predictor, which minimizes J_t^1 [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF]; and (iii) (m, t)-robust Bayesian learning, which minimizes J_t^10. The models are trained on ϵ-contaminated data sets for ϵ ∈ {0, 0.1, 0.2, 0.3}.

California Housing Regression Task

Finally, we consider the problem of training a robust regressor based on training data sets corrupted by outliers and in the presence of model misspecification. We consider the California housing dataset, which is characterized by response variables y normalized in the [0, 1] interval, and we fix a unimodal likelihood p(y|x, θ) = N(y|f_θ(x), 0.1), where f_θ(x) is the output of a three-layer neural network with hidden layers comprising 10 units with ELU activation functions [START_REF] Clevert | Fast and accurate deep network learning by exponential linear units (elus)[END_REF]. The model class is misspecified since the response variable is bounded and hence not Gaussian. Outliers are modeled by replacing the labels of a fraction ϵ of the training samples with random labels picked uniformly at random within the [0, 1] interval.
We consider training based on data sets with different contamination ratios ϵ ∈ {0, 0.1, 0.2, 0.3}, and we measure the ability of the trained models to approximate the ID data by computing the negative log-likelihood on a clean holdout data set (ϵ = 0). As in the previous subsection, we compare models trained using (m, t)-robust Bayesian learning, with m = 5, to: (i) deep ensembles [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF], also with 5 models in the ensemble; and (ii) the robust Gibbs predictor of [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF], which minimizes the free energy metric J_t^1. The inverse temperature parameter β is set to 0.1 in the (m, t)-robust Bayesian and the Gibbs predictor objectives. In Figure 6.8 we report the negative log-likelihood of an uncontaminated data set for models trained according to the different learning criteria. The leftmost panel (ϵ = 0) corresponds to training based on an uncontaminated data set. For this case, the best performance is obtained for t = 1 - an expected result due to the absence of outliers - and the proposed criterion outperforms both the Gibbs predictor and deep ensembles, as it is capable of counteracting misspecification by means of ensembling. In the remaining panels, training is performed based on ϵ-contaminated data sets, with the contamination ϵ increasing from left to right. In these cases, learning criteria based on robust losses are able to retain performance similar to the uncontaminated case for suitably chosen values of t. Furthermore, the optimal value of t is observed to increase with the fraction of outliers in the training data set.

Conclusion

In this chapter, we addressed the problem of training ensemble models under model misspecification and in the presence of outliers. We proposed the (m, t)-robust Bayesian learning framework, which leverages generalized logarithm score functions in combination with multi-sample bounds, with the goal of deriving posteriors that are able to take advantage of ensembling, while at the same time being robust with respect to outliers. The proposed learning framework is shown to lead to predictive distributions characterized by better generalization capabilities and calibration performance in scenarios in which the standard Bayesian posterior fails. The proposed robust Bayesian learning framework can find application in learning scenarios that benefit from uncertainty quantification in their decision-making processes and are characterized by the presence of outliers and model misspecification. Examples include inference in wireless communication systems [START_REF] Zecchin | Robust bayesian learning for reliable wireless ai: Framework and applications[END_REF], medical imaging [START_REF] Liu | Time sequence learning for electrical impedance tomography using bayesian spatiotemporal priors[END_REF] and text sentiment analysis [START_REF] Onan | A hybrid ensemble pruning approach based on consensus clustering and multi-objective evolutionary algorithm for sentiment classification[END_REF][START_REF] Onan | Biomedical text categorization based on ensemble pruning and optimized topic modelling[END_REF]. We conclude by suggesting a number of directions for future research. (m, t)-robust Bayesian learning has been shown to lead to the largest performance gains for properly chosen values of t.
The optimal values of t depend on the particular task at hand, and deriving rules to automate the tuning of these parameters represents a practical and important research question. Furthermore, (m, t)-robust Bayesian learning can be extended to reinforcement learning, as well as to meta-learning, for which Bayesian methods have recently been investigated (see, e.g., [START_REF] Yoon | Bayesian model-agnostic meta-learning[END_REF][START_REF] Jose | Information-theoretic analysis of epistemic uncertainty in bayesian meta-learning[END_REF] and references therein).

Chapter 7 Robust Bayesian Learning Applications to Wireless Communication

This chapter takes a critical look at the application of conventional machine learning methods to wireless communication problems through the lens of reliability and robustness. Deep learning techniques adopt a frequentist framework, and are known to provide poorly calibrated decisions that do not reproduce the true uncertainty caused by limitations in the size of the training data. Bayesian learning, while in principle capable of addressing this shortcoming, is in practice impaired by model misspecification and by the presence of outliers. Both problems are pervasive in wireless communication settings, in which the capacity of machine learning models is subject to resource constraints and training data is affected by noise and interference. In this context, we explore the application of the framework of robust Bayesian learning developed in Chapter 6. We showcase the merits of robust Bayesian learning on several important wireless communication problems in terms of accuracy, calibration, and robustness to outliers and misspecification.

Introduction

Artificial intelligence (AI) is widely viewed as a key enabler of 6G wireless systems. Research on this topic has mostly focused on identifying use cases and on mapping techniques from the vast literature on machine learning to given problems [START_REF] Simeone | A very brief introduction to machine learning with applications to communication systems[END_REF][START_REF] Sun | Application of machine learning in wireless networks: Key techniques and open issues[END_REF][START_REF] Gündüz | Machine learning in the air[END_REF]. At a more fundamental level, there have been efforts to integrate well-established communication modules, e.g., for channel encoding and decoding, with data-driven designs, notably via tools such as model unrolling [START_REF] Jiang | Learn codes: Inventing low-latency codes via recurrent neural networks[END_REF][START_REF] Kim | Deepcode: Feedback codes via deep learning[END_REF]. All these efforts have largely relied on deep learning libraries and tools. The present chapter takes a critical look at the use of this conventional methodology through the lens of reliability and robustness. To this end, we explore the potential benefits of the alternative design framework of robust Bayesian learning by focusing on several key wireless communication applications, namely modulation classification, indoor and outdoor localization, and channel modeling and simulation.

Frequentist vs. Bayesian Learning

In frequentist learning, the output of the training process is a single model - typically, a single vector of weights for a neural network - obtained by minimizing the training loss. This approach is justified by the use of the training loss as an estimate of the population loss, whose computation would require averaging over the true, unknown distribution of the data.
This estimate is only accurate in the presence of sufficiently large data sets. While abundant data is common in the benchmark tasks studied in the computer science literature, the reality of many engineering applications is that data are often scarce. In wireless communications, the problem is particularly pronounced at the physical layer, in which fading dynamics imply short stationarity intervals for data collection and training [START_REF] Park | Meta-learning to communicate: Fast end-to-end training for fading channels[END_REF][START_REF] Park | Learning to demodulate from few pilots via offline and online meta-learning[END_REF][START_REF] Simeone | From learning to meta-learning: Reduced training overhead and complexity for communication systems[END_REF][START_REF] Yuan | Transfer learning and meta learning-based fast downlink beamforming adaptation[END_REF]. The practical upshot of the reliance on frequentist learning is that, in the presence of limited data, decisions made by AI models tend to be poorly calibrated, providing confidence levels that do not match their true accuracy [START_REF] Guo | On calibration of modern neural networks[END_REF][START_REF] Cohen | Learning to learn to demodulate with uncertainty quantification via Bayesian meta-learning[END_REF]. As a result, an AI model may output a decision with some level of confidence, say 95%, while the accuracy of the decision is significantly lower. This is a problem in many engineering applications, including emerging communication networks (e.g., 5G and beyond), in which a more or less confident decision should be treated differently by the end user [START_REF] Masur | Artificial intelligence in Open Radio Access Network[END_REF]. The framework of Bayesian learning addresses the outlined shortcomings of frequentist learning [START_REF] Mackay | Information theory, inference and learning algorithms[END_REF][START_REF] Osawa | Practical deep learning with Bayesian principles[END_REF]. At its core, Bayesian learning optimizes over a distribution over the model parameter space, which enables it to quantify uncertainty arising from limited data. In fact, if several models fit the data almost equally well, Bayesian learning does not merely select one of the models, disregarding uncertainty; rather, it assigns similar distribution values to all such models [START_REF] Nikoloska | BAMLD: Bayesian Active Meta-Learning by Disagreement[END_REF]. This way, decisions produced by AI modules trained via Bayesian learning can account for the "opinions" of multiple models by averaging their outputs using the optimized distribution [START_REF] Madigan | Bayesian model averaging[END_REF][START_REF] Simeone | Machine Learning for Engineers[END_REF]. Bayesian learning has recently been applied in [START_REF] Cohen | Learning to learn to demodulate with uncertainty quantification via Bayesian meta-learning[END_REF], focusing on the problem of demodulation over fading channels, as well as in [START_REF] Zilberstein | Annealed Langevin dynamics for massive mimo detection[END_REF] for detection over multiple-antenna channels.

Robust Bayesian Learning

Like frequentist learning, Bayesian learning assumes that the distribution underlying training data generation is the same as that producing test data.
Robust Bayesian Learning
Like frequentist learning, Bayesian learning assumes that the distribution underlying training data generation is the same as that producing test data. Furthermore, Bayesian learning implicitly assumes that the posited model -namely, likelihood and prior distribution -is sufficiently close to the true, unknown data-generating distribution to justify the use of the posterior distribution as the optimized distribution in the model parameter space. As a result, the benefit of Bayesian learning is degraded when data is affected by outliers and/or when the model is misspecified. In Chapter 6 we have addressed both of these limitations, introducing a generalized framework that we will refer to as robust Bayesian learning. Robust Bayesian learning aims at providing well-calibrated, and hence reliable, decisions even in the presence of model misspecification and of discrepancies between training and testing conditions. Model misspecification has been addressed in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF]. These papers start from two observations. The first is that Bayesian learning can be formulated as the minimization of a free energy metric, which involves the average of the training loss, as well as an information-theoretic regularizing term dependent on a prior distribution. The conventional free energy metric can be formally derived as an upper bound on the population loss within the theoretical framework of PAC Bayes theory [START_REF] Jose | Free energy minimization: A unified framework for modeling, inference, learning, and optimization [lecture notes[END_REF][START_REF] Catoni | A PAC-Bayesian approach to adaptive classification[END_REF][START_REF] Alquier | User-friendly introduction to PAC-Bayes bounds[END_REF]. The second observation is that, in the presence of model misspecification, model ensembling can be useful in combining the decisions of different models that may be specialized to distinct parts of the problem space. Using these two observations, references [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF] introduced alternative free energy criteria that are based on a tighter bound on the population loss for ensemble predictors. To address the problem of outliers (see, e.g., [START_REF] Knoblauch | Generalized variational inference: Three arguments for deriving new posteriors[END_REF]), different free energy criteria have been introduced, which are less sensitive to the presence of outliers. These metrics are based on divergences, such as β-divergences [START_REF] Basu | Robust and efficient estimation by minimising a density power divergence[END_REF][START_REF] Ghosh | Robust Bayes estimation using the density power divergence[END_REF] and the γ-divergence [START_REF] Fujisawa | Robust parameter estimation with a small bias against heavy contamination[END_REF][START_REF] Nakagawa | Robust Bayesian inference via γ-divergence[END_REF], which generalize the Kullback-Leibler divergence underlying the standard free energy metric.
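For reference, and loosely following the notation of Chapter 6, the conventional free energy criterion mentioned above can be written, up to constants, as

J(q) = E_{θ∼q(θ)}[ L(θ, D) ] + (1/β) KL( q(θ) || p(θ) ),

where L(θ, D) denotes the training loss, p(θ) the prior, and β > 0 an inverse-temperature parameter; its minimizer is the generalized (Gibbs) posterior q*(θ) ∝ p(θ) exp(-β L(θ, D)). This standard form is recalled here only as a baseline; the precise definitions adopted in this thesis are those given in Chapter 6.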
Finally, a unified framework has been introduced in [START_REF] Zecchin | Robust bayesian learning for reliable wireless ai: Framework and applications[END_REF] that generalizes the free energy metrics introduced in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF][START_REF] Masegosa | Learning under model misspecification: Applications to variational and ensemble methods[END_REF]. The approach is robust to misspecification, while also addressing the presence of outliers.
Main Contributions
In the following, we explore the application of robust Bayesian learning to wireless communication systems, focusing on automated modulation classification (AMC), received signal strength indicator (RSSI)-based localization, as well as channel modeling and simulation. These applications have been selected in order to highlight the importance of considering uncertainty quantification, in addition to accuracy, while also emphasizing the problems of model misspecification and outliers in wireless communications [START_REF] Kalyani | OFDM channel estimation in the presence of NBI and the effect of misspecified NBI model[END_REF][START_REF] Fawzy | Outliers detection and classification in wireless sensor networks[END_REF][START_REF] Jin | An RSSI-based localization algorithm for outliers suppression in wireless sensor networks[END_REF]. Our specific contributions are as follows.
• As a first application, we focus on the AMC problem for intelligent spectrum sensing [START_REF] Liang | Cognitive radio networking and communications: An overview[END_REF]. In this setting, the necessity of deploying lightweight models that satisfy the strict computational requirements of network edge devices can give rise to model misspecification. At the same time, the training data sets often contain non-informative outliers due to interfering transmissions from other devices. We demonstrate that robust Bayesian learning yields classifiers with good calibration performance despite model misspecification and the presence of outliers.
• As a second application, we study node localization based on crowdsourced RSSI data sets [START_REF] Lohan | Wi-Fi crowdsourced fingerprinting dataset for indoor positioning[END_REF]. Such data sets typically contain inaccurately reported location measurements due to imprecise or malicious devices. Furthermore, owing to the complex relation between RSSI measurements and device locations, learning often happens using misspecified model classes. In this context, we demonstrate that robust Bayesian learning is able to properly estimate residual uncertainty about the transmitters' locations in spite of the presence of outliers and misspecified model classes.
• Finally, we apply robust Bayesian learning to the problem of channel modeling and simulation. We show via experiments that robust Bayesian learning produces accurate and well-calibrated generative models even in the presence of outlying data points.
Robust and Calibrated Automatic Modulation Classification
As a first application of robust Bayesian learning we consider the AMC problem. This is the task of classifying received baseband signals in terms of the modulation scheme underlying their generation. The relation between the received signal and the chosen modulation scheme is often mediated by complex propagation phenomena, as well as hardware non-idealities at both the receiver and the transmitter side.
As a result, model-based AMC methods often turn out to be inaccurate because of the overly simplistic nature of the assumed models [START_REF] O'shea | Over-the-air deep learning based radio signal classification[END_REF]. In contrast, machine learning based AMC has been shown to be extremely effective in correctly classifying received signals based on signal features autonomously extracted from data [START_REF] O'shea | Convolutional radio modulation recognition networks[END_REF]. We refer to [START_REF] Zhou | Deep learning for modulation recognition: A survey with a demonstration[END_REF] and references therein for a comprehensive overview. All prior works on learning-based AMC, reviewed in [START_REF] Zhou | Deep learning for modulation recognition: A survey with a demonstration[END_REF], have adopted frequentist learning. In this section, we consider the practical setting in which AMC must be implemented on resource-constrained devices, entailing the use of small, and hence mismatched, models; and in which the training data sets are characterized by the presence of outliers due to interference.
Problem Definition and Performance Metrics
The AMC problem can be framed as an instance of supervised classification, with the training data set D comprising pairs (x, y) of discrete-time received baseband signal x and modulation label y, with Y being the set of possible modulation schemes. Each training data point (x, y) ∈ D is obtained by transmitting a signal with a known modulation y ∈ Y over the wireless channel, and then recording the received discrete-time vector x at the receiver end. The outlined procedure determines the unknown sampling distribution ν s (x, y). We evaluate the performance of AMC on a testing data set D te in terms of accuracy and calibration. To describe calibration performance metrics, let us consider a predictive distribution p(y|x), which may be the frequentist distribution p(y|x, θ freq ), or the ensemble distribution (6.20) in the cases of Bayesian learning and robust Bayesian learning. A hard prediction ŷ is obtained as the maximum-probability solution
ŷ = arg max_{y∈Y} p(y|x). (7.1)
The corresponding confidence score assigned by the predictor p(y|x) is the probability p(ŷ|x) ∈ [0, 1]. The calibration of a classifier measures the degree to which the confidence score p(ŷ|x) ∈ [0, 1] reflects the true probability of correct classification P[ŷ = y|x] conditioned on the input x. We adopt the standard reliability diagrams [START_REF] Degroot | The comparison and evaluation of forecasters[END_REF] and the expected calibration error as diagnostic tools for the calibration performance [START_REF] Guo | On calibration of modern neural networks[END_REF]. Both metrics require binning the output of the classifier confidence score p(ŷ|x) into M intervals of equal size, and then grouping the testing data points (x, y) ∈ D te based on the index of the bin for the confidence score p(ŷ|x). For each bin B m , the within-bin accuracy is defined as
Acc(B_m) = (1/|B_m|) Σ_{(x,y)∈B_m} 1{ŷ = y}, (7.2)
which measures the fraction of test samples within the bin that are correctly classified; and the within-bin confidence as
Conf(B_m) = (1/|B_m|) Σ_{(x,y)∈B_m} p(ŷ|x), (7.3)
which is the average confidence level for the test samples within the bin. The reliability diagram plots within-bin accuracy and within-bin confidence as a function of the bin index m. As a result, a reliability diagram visualizes the relation between confidence and accuracy of a predictor, establishing whether a classifier is over-confident (Conf(B_m) > Acc(B_m)), under-confident (Conf(B_m) < Acc(B_m)) or well-calibrated (Conf(B_m) ≈ Acc(B_m)).
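The binning procedure underlying reliability diagrams can be sketched as follows; the function name and the equal-width binning of the unit interval are the only assumptions beyond the definitions above.

```python
import numpy as np

def reliability_bins(confidences, correct, num_bins=10):
    """Group test points into equal-width confidence bins and return the
    within-bin accuracy (7.2), within-bin confidence (7.3) and bin sizes."""
    bin_idx = np.minimum((confidences * num_bins).astype(int), num_bins - 1)
    acc = np.zeros(num_bins)
    conf = np.zeros(num_bins)
    count = np.zeros(num_bins, dtype=int)
    for m in range(num_bins):
        mask = bin_idx == m
        count[m] = mask.sum()
        if count[m] > 0:
            acc[m] = correct[mask].mean()        # fraction of correct predictions in bin m
            conf[m] = confidences[mask].mean()   # average confidence score in bin m
    return acc, conf, count

# Toy usage: confidence scores p(y_hat|x) and correctness indicators 1{y_hat == y}
conf_scores = np.array([0.95, 0.80, 0.55, 0.99, 0.40])
is_correct = np.array([1, 1, 0, 1, 0])
print(reliability_bins(conf_scores, is_correct, num_bins=5))
```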
The expected calibration error (ECE) summarizes the calibration performance of a classifier as a single number, obtained as the weighted sum of the absolute differences between within-bin accuracy and within-bin confidence, namely
ECE = Σ_{m=1}^{M} ( |B_m| / Σ_{m'=1}^{M} |B_{m'}| ) |Conf(B_m) - Acc(B_m)|. (7.4)
By this definition, a lower ECE indicates a better calibrated predictor.
Data Set
We adopt the DeepSIG: RadioML 2016.10A data set [START_REF] O'shea | Convolutional radio modulation recognition networks[END_REF]. This is a synthetic data set that contains 220K vectors of I/Q samples of signals comprising 8 digital modulation schemes (BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK) and 3 analog modulations (WB-FM, AM-SSB, AM-DSB). We focus on the problem of classifying the 8 digital modulation schemes using received signals recorded at different SNR levels ranging from 0 dB to 18 dB. Furthermore, we model the presence of interference during training by generating an ϵ-contaminated version of the original data set. In it, with probability ϵ ∈ [0, 1), the original training sample x is summed with an interfering signal x ′ picked uniformly at random from the data set. Note that the interfering signal can possibly be generated from a different modulation scheme. Using Definition 2, the samples affected by interference represent outliers, since no interference is assumed during testing. We consider 30% of the available samples for training; 20% of the samples for validation; and the remaining 50% for testing. The use of a small training data set is intentional, as we wish to focus on a regime characterized by data scarcity.
Figure 7.1: Average test accuracy and ECE for AMC over the DeepSIG: RadioML 2016.10A data set [START_REF] O'shea | Convolutional radio modulation recognition networks[END_REF] for frequentist and (m, t)-robust Bayesian learning as a function of the parameter t. The test set is free from interference, while the training set is subject to interference (ϵ = 0.5).
Implementation
We implement a lightweight convolutional neural network (CNN) architecture comprising two convolutional layers followed by two linear layers with 30 neurons each. The first convolutional layer has 16 filters of size 2 × 3, and the second layer has 4 filters of size 1 × 2. We adopt the Exponential Linear Unit (ELU) activation with parameter α = 1. The lightweight nature of the architecture is motivated by the strict computational and memory requirements at network edge devices. As a result, the CNN model is generally misspecified, in the sense that, following Definition 1, the complex relation between received signal and chosen modulation scheme cannot be exactly represented using the model. In the training data set, half of the samples are affected by interference, i.e., ϵ = 0.5. For Bayesian learning, we adopt a Gaussian variational distribution q(θ) = N (θ|µ, Σ) over the CNN model parameter vector θ. Accordingly, the mean µ and diagonal covariance matrix Σ are optimized, while we fix the prior p(θ) = N (θ|0, I). Optimization for both frequentist and Bayesian methods is carried out via Adam with a learning rate η = 0.001, and the reparametrization trick is implemented for Bayesian learning [START_REF] Kingma | Auto-encoding variational Bayes[END_REF]. In our experiments we set β = 0.01. The number of samples used to evaluate the ensemble prediction (6.20) is m = 10. Note that this may differ from the value of m used to define the training criterion.
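The reparametrization trick mentioned above can be sketched as follows; the parametrization of the diagonal covariance through a log-standard-deviation vector and the PyTorch-style interface are assumptions made here for illustration only.

```python
import torch

def sample_weights(mu, log_sigma):
    """Reparametrized draw from q(theta) = N(theta | mu, diag(sigma^2)):
    theta = mu + sigma * eps with eps ~ N(0, I), so that gradients of the
    training criterion can be backpropagated to the variational parameters."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_sigma) * eps

# Toy usage for a model with 4 scalar weights
mu = torch.zeros(4, requires_grad=True)
log_sigma = torch.full((4,), -2.0, requires_grad=True)
theta = sample_weights(mu, log_sigma)   # one of the m samples used during training
loss = (theta ** 2).sum()               # placeholder loss, for illustration only
loss.backward()                         # gradients flow into mu and log_sigma
```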
Results
In Figure 7.1 we observe that, with suitably chosen parameters (m, t), robust Bayesian learning can outperform standard frequentist learning both in terms of accuracy and calibration for t < 1. The smallest ECE is obtained by robust Bayesian learning for t = 0.7, and it is five times smaller compared to the one obtained using conventional Bayesian learning (t = 1). Overall, the (m, t)-robust Bayesian learning paradigm is able to improve the final accuracy by 5% and to reduce the ECE by five times via a suitable choice of the parameters (m, t). To further elaborate on the calibration performance, in Figure 7.2 we compare the reliability diagrams obtained via frequentist and (m, t)-robust Bayesian learning for m = 4 and t = 0.7. While frequentist learning provides under-confident predictions, robust Bayesian learning offers well-calibrated predictions that consistently exhibit a small discrepancy between accuracy and confidence levels.
Robust and Calibrated RSSI-Based Localization
In this section, we turn to the problem of localization. In outdoor environments, accurate localization information of a wireless device can be obtained leveraging the global navigation satellite system (GNSS). However, the performance of satellite-based positioning is severely degraded in indoor environments [START_REF] Mautz | Overview of current indoor positioning systems[END_REF], and its power requirements are not compatible with IoT applications characterized by ultra-low power consumption [START_REF] Aernouts | Sigfox and Lo-RaWAN datasets for fingerprint localization in large urban and rural areas[END_REF]. For this reason, alternative techniques have been investigated that rely on so-called channel fingerprints, i.e., features extracted from the received wireless signals [START_REF] Pecoraro | CSI-based fingerprinting for indoor localization using LTE signals[END_REF]. Among such methods, the use of received signal strength indicators (RSSI) measured at multiple wireless access points has been shown to provide an accessible, yet informative, vector of features. Owing to the complexity of defining explicit models relating the device location y ∈ Y with the RSSI-measurements vector x ∈ X , data-driven RSSI-based localization techniques have been recently explored [START_REF] Hoang | Recurrent neural networks for accurate RSSI indoor localization[END_REF][START_REF] Sinha | Comparison of CNN applications for RSSI-based fingerprint indoor localization[END_REF]. The outlined prior work in this area has focused on machine learning models trained using the conventional frequentist approach. In this section, we study a setting in which the training data set is collected using noisy, e.g., crowd-sourced, fingerprints. As such, the training set contains outliers. Furthermore, we aim at developing strategies, based on robust Bayesian learning, which can offer accurate localization, while also properly quantifying residual uncertainty.
Problem Definition and Performance Metrics
The RSSI-based localization problem is a supervised regression task. In it, a training sample (x, y) is obtained by measuring the RSSI fingerprint x corresponding to the transmission of a reference signal by a device located at a known position y. The general goal is to train a machine learning model p(y|x) to predict the location y associated with an RSSI vector x so as to optimize accuracy and uncertainty quantification. Given a test data set D te , and assuming that the predictive location ȳ is the mean of the predictive distribution, i.e., ȳ = E_{p(y|x)}[y], accuracy is measured by the test mean squared error
MSE = (1/|D te|) Σ_{(x,y)∈D te} ||y - ȳ||^2, (7.5)
while uncertainty quantification is evaluated via the test negative log-likelihood
NLL = -(1/|D te|) Σ_{(x,y)∈D te} log p(y|x). (7.6)
Note that the negative log-likelihood is large if the model assigns a small probability density p(y|x) to the correct output y.
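The following is a minimal sketch of how (7.5) and (7.6) can be evaluated for an ensemble of Gaussian predictive models of the kind used below; the function names, the array-based interface, and the fixed variance value are illustrative assumptions rather than the evaluation code used for the reported results.

```python
import numpy as np

def evaluate_localization(test_pairs, theta_samples, mean_fn, var=0.01):
    """Test MSE (7.5) and negative log-likelihood (7.6) for the ensemble
    predictive density p(y|x) = (1/m) sum_j N(y | mean_fn(x, theta_j), var*I)."""
    mse, nll = 0.0, 0.0
    for x, y in test_pairs:
        mus = np.stack([mean_fn(x, th) for th in theta_samples])   # shape (m, dim(y))
        d = y.size
        dens = np.exp(-np.sum((y - mus) ** 2, axis=1) / (2 * var)) / (2 * np.pi * var) ** (d / 2)
        y_bar = mus.mean(axis=0)               # predictive mean of the mixture
        mse += np.sum((y - y_bar) ** 2)
        nll += -np.log(dens.mean() + 1e-300)   # ensemble density at the true location
    n = len(test_pairs)
    return mse / n, nll / n

# Toy usage with a linear "network" mean_fn and two parameter samples
mean_fn = lambda x, th: th[0] @ x + th[1]
theta_samples = [(np.eye(2), np.zeros(2)), (0.9 * np.eye(2), 0.1 * np.ones(2))]
test_pairs = [(np.array([0.2, -0.1]), np.array([0.25, -0.05]))]
print(evaluate_localization(test_pairs, theta_samples, mean_fn))
```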
Data Sets
We experiment on different publicly available RSSI fingerprint data sets, encompassing both outdoor and indoor conditions:
• The SigfoxRural data set [START_REF] Aernouts | Sigfox and Lo-RaWAN datasets for fingerprint localization in large urban and rural areas[END_REF] comprises 25,638 Sigfox messages measured at 137 base stations and emitted from vehicles roaming around a large rural area (1068 km 2 ) between Antwerp and Gent.
• The UTSIndoorLoc data set [START_REF] Song | A novel convolutional neural network based indoor localization framework with WiFi fingerprinting[END_REF] contains 9494 WiFi fingerprints sampled from 589 access points inside the FEIT Building at the University of Technology of Sydney, covering an area of 44,000 m 2 .
• The UJIIndoorLoc data set [START_REF] Torres-Sospedra | Ujiindoorloc: A new multi-building and multi-floor database for WLAN fingerprint-based indoor localization problems[END_REF] contains 21,049 WiFi fingerprints measured at 520 access points and collected from 3 buildings of the Jaume I University, spanning a total area of 108,703 m 2 .
To model the presence of outliers, we modify the training data sets described above, producing ϵ-contaminated data sets D as per Definition 2. This is done by replacing the target variable y for a fraction ϵ of the data points (x, y) ∈ D with a uniformly random location y within the deployment area.
Implementation
We consider a model class specified by a Gaussian likelihood p(y|x, θ) = N (y|f θ (x), 0.01), where the mean f θ (x) is the output of a neural network with two hidden layers, each with 50 neurons with ELU activations. Despite the expressive power of the neural network model, each model p(y|x, θ) in this class can only account for unimodal, Gaussian distributed, residual uncertainties around the estimated position f θ (x). Therefore, whenever the residual uncertainty about the receiver location is multimodal, the model class is misspecified by Definition 1. As we will see, given the complex relation between RSSI vector and location, particularly when the number of RSSI measurements is sufficiently small, residual uncertainty tends to be multimodal, making this an important problem. Training for frequentist and Bayesian learning is carried out as described in the previous section, and ensembling uses m = 50 samples at testing time.
Results
We start by considering the case in which there are no outliers, i.e., ϵ = 0, thus focusing solely on the problem of misspecification. In Figure 7.3, we plot the predictive distribution obtained via Bayesian learning (m = 1, left panel) and robust Bayesian learning with m = 10 and t = 1 (right panel) for a testing sample x corresponding to the position shown as a green cross. The black dots correspond to the positions covered by the training set in the SigfoxRural data set. The resulting predictive distribution for conventional Bayesian learning provides a poor estimation of the true device position, and is unable to properly quantify uncertainty. In contrast, robust Bayesian learning is able to counteract model misspecification, producing a more informative predictive distribution. The distribution correctly suggests that the receiver can be in two possible areas, one of which indeed contains the true node location.
To further elaborate on the capacity of robust Bayesian learning for uncertainty quantification, in Table 7.1 we report the negative log-likelihood (7.6) attained by Bayesian learning (m = 1), as well as by robust Bayesian learning with t = 1 and m = 2 or m = 10, on the three data sets. Increasing the value of m is seen to yield lower negative log-likelihood scores, confirming that robust Bayesian learning provides a more precise quantification of uncertainty. We now introduce outliers by carrying out training on contaminated data sets with different levels of contamination ϵ. Recall that the trained models are tested on a clean (ϵ = 0) test data set D te . In Figure 7.4, we plot the test MSE (7.5) for frequentist and (m, t)-robust Bayesian learning with m = 10 and t ∈ {1, 0.96} as a function of ϵ. The MSE of frequentist learning and of (10, 1)-robust Bayesian learning is seen to degrade significantly for increasing values of ϵ. The performance loss is particularly severe for (m, 1)-robust Bayesian learning. This is due to the mass-covering behavior entailed by the use of the m-sample training loss, which in this case becomes detrimental due to the presence of outliers. In contrast, robust Bayesian learning with t = 0.96 is able to counteract the effect of outliers, retaining good predictive performance even in the case of largely corrupted data sets.
Robust and Calibrated Channel Simulation
The design of communication systems has traditionally relied on analytical channel models obtained via measurement campaigns. Due to the complexity of multipath propagation scenarios, in recent years generative machine learning models have been introduced as an alternative to analytical models. Generative models can be trained to produce samples that mimic hard-to-model channel conditions. Applications of deep generative models in the form of variational autoencoders (VAEs) [START_REF] Kingma | Auto-encoding variational Bayes[END_REF] and generative adversarial networks (GANs) [START_REF] Goodfellow | Generative adversarial nets[END_REF] were specifically reported in the context of end-to-end simulation of wireless systems in [START_REF] Aoudia | End-to-end learning of communications systems without a channel model[END_REF][START_REF] Ye | Deep learning-based end-to-end wireless communication systems with conditional GANs as unknown channels[END_REF] and for channel modeling in [START_REF] O'shea | Approximating the void: Learning stochastic channel models from observation with variational generative adversarial networks[END_REF][START_REF] Orekondy | MIMO-GAN: Generative MIMO channel modeling[END_REF][START_REF] Yang | Generative-adversarialnetwork-based wireless channel modeling: Challenges and opportunities[END_REF]; see [START_REF] Ibnkahla | Applications of neural networks to digital communications-a survey[END_REF] for earlier applications to satellite communications. The outlined prior work has focused on frequentist methods and has assumed the availability of clean data sets that are free from outliers. In this section, we explore the use of robust Bayesian learning to account for both outliers and model misspecification.
Problem Definition and Performance Metrics
Generative models are trained in an unsupervised manner by assuming the availability of a training set D of examples x corresponding to channel impulse responses. We focus on VAEs, i.e., on generative models with latent variables.
VAEs comprise a parameterized encoder q(h|x, θ e ), mapping an input x ∈ X into a lower-dimensional latent vector h ∈ H; as well as a parameterized decoder p(x|h, θ d ) that reconstructs the input sample x ∈ X from the latent representation h ∈ H. Note that the vector of model parameters encompasses both encoding and decoding parameters as θ = (θ e , θ d ). Let us define as p(h) a fixed prior distribution on the latent variables h. Once training is complete, samples x of channel responses can be generated from the model as follows. For frequentist learning, given the trained model θ freq , one generates a sample h ∼ p(h) for the latent vector, and then produces a channel sample x ∼ p(x|h, θ freq d ). For Bayesian learning, given the optimized distribution q(θ), we produce a random sample θ d ∼ q(θ d ), generate a latent sample h ∼ p(h), and then generate a channel sample x ∼ p(x|h, θ d ). The role of the encoder q(h|x, θ e ) will be made clear in Section 7.4.3 when discussing the training method. According to the discussion in the previous paragraph, the channel distribution implemented by the model is given by
p(x) = E p(h) [p(x|h, θ freq d )] (7.7)
for frequentist learning; and by
p(x) = E p(h)q(θ d ) [p(x|h, θ d )] (7.8)
for Bayesian learning. Note that the average is taken only over the latent vector h ∼ p(h) for frequentist learning; while in Bayesian learning the expectation is also taken over the optimized distribution q(θ d ) for the decoder's parameters θ d . To evaluate the performance of the generative model, we consider two different metrics, accounting for accuracy and uncertainty quantification. Accuracy is measured by the "distance" between the target distribution ν(x) and the distribution p(x) produced by the model. We measure the "distance" between ν(x) and p(x) via the maximum mean discrepancy (MMD) [START_REF] Gretton | A kernel method for the two-sample-problem[END_REF], which is defined as
MMD(p, ν) = E x,x ′ ∼p(x) [k(x, x ′ )] + E x,x ′ ∼ν(x) [k(x, x ′ )] - 2E x∼ν(x),x ′ ∼p(x) [k(x, x ′ )], (7.9)
where k(x, x ′ ) is a positive definite kernel function. In the experiments reported below, we have approximated the MMD based on empirical averages. These are evaluated using samples from distribution p(x), which are generated as explained above, as well as samples from the sampling distribution ν(x), i.e., examples from the training set D. Moreover, we use the Gaussian kernel k(x, x ′ ) = N (∥x - x ′ ∥|0, 1). To evaluate the performance in terms of uncertainty quantification, we focus on the problem of out-of-distribution (OOD) detection (see, e.g., [START_REF] Daxberger | Bayesian variational autoencoders for unsupervised out-of-distribution detection[END_REF]). A well-calibrated model p(x), when fed with an input x, should return a small value if x is an OOD sample, that is, if it has a low target distribution ν(x). To obtain a quantitative measure, we consider the task of distinguishing between samples drawn from the target distribution ν(x) and from the OOD distribution ξ(x). Specifically, we adopt the model probability distribution p(x) as the test statistic, classifying x as in-distribution (ID) if p(x) is larger than some threshold γ and as OOD otherwise. As in [START_REF] Fawcett | An introduction to ROC analysis[END_REF], we take the area under the receiver operating characteristic curve (AUROC) score for this test as a measure of how distinguishable the two samples are. The AUROC metric is obtained by integrating the ROC traced by the probability of detection versus the probability of false alarm as the threshold γ is varied. A larger AUROC indicates that the model provides a better quantification of uncertainty, as reflected in its capacity to detect OOD samples against ID samples.
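A minimal sketch of the empirical MMD estimate in (7.9) with the Gaussian kernel above is as follows; the plug-in (V-statistic) form and the function names are illustrative choices, not necessarily those used to produce the results reported below.

```python
import numpy as np

def gaussian_kernel(a, b):
    """k(x, x') = N(||x - x'|| | 0, 1), i.e., a standard Gaussian density
    evaluated at the Euclidean distance between the two samples."""
    d2 = np.sum((a - b) ** 2)
    return np.exp(-d2 / 2.0) / np.sqrt(2.0 * np.pi)

def mmd_estimate(samples_p, samples_nu):
    """Plug-in estimate of (7.9) from samples of p(x) and of nu(x)."""
    def mean_k(A, B):
        return np.mean([gaussian_kernel(a, b) for a in A for b in B])
    return (mean_k(samples_p, samples_p) + mean_k(samples_nu, samples_nu)
            - 2.0 * mean_k(samples_nu, samples_p))

# Toy usage with two small sets of 128-dimensional "channel responses"
rng = np.random.default_rng(0)
xp = rng.normal(size=(20, 128))
xq = rng.normal(loc=0.1, size=(20, 128))
print(mmd_estimate(xp, xq))
```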
Data Set
We consider the simulation of the magnitudes of a frequency-selective channel response x ∈ R 128 that mimics the target distribution ν(x) defined by the 3GPP TDL-A channel model distribution [START_REF]Study on channel model for frequencies from 0.5 to 100 GHz[END_REF] with a delay spread of τ = 100 ns. Outliers are accounted for by constructing an ϵ-contaminated training set D that contains a fraction ϵ = 0.2 of samples distributed according to the same channel model but with a larger delay spread τ = 300 ns (see the top row in Fig. 7.5).
Implementation
For models with latent variables, the direct adoption of the log-loss generally yields intractable optimization problems (see, e.g., [START_REF] Simeone | Machine Learning for Engineers[END_REF]). To address this problem, training of VAEs replaces the training loss (6.8) with the variational lower bound
L VAE (θ, D) = Σ_{x∈D} E_{q(h|x,θ e )} [log p(x|h, θ d )] - Σ_{x∈D} KL(q(h|x, θ e ) || p(h)). (7.10)
The prior latent variable distribution is p(h) = N (h|0, I 5 ). We implement both the encoder and the decoder by using fully connected neural networks with a single hidden layer with 10 units. Specifically, the encoder distribution q(h|x, θ e ) = N (h|µ θe (x), Σ θe (x)) has mean vector µ θe (x) ∈ R 5 and diagonal covariance matrix Σ θe (x) ∈ R 5×5 obtained from the output of the neural network. The decoder p(x|h, θ d ) = N (x|µ θ d (h), σI 128 ) has mean vector µ θ d (h) obtained as the output of the neural network, with a fixed variance value σ = 0.1. For Bayesian learning, we optimize distribution q(θ d ) as in the previous sections, while we consider a distribution q(θ e ) concentrated at a single vector θ e . Ensembling during testing time is carried out with m = 50 samples.
Results
To start, in Figure 7.5 we illustrate a sample of the magnitude for the TDL-A channel response given a delay spread τ = 100 ns in panel (a), while an outlier sample corresponding to the larger delay spread τ = 300 ns is depicted in panel (b). The bottom row of Figure 7.5 reports a sample from the trained model for frequentist learning in panel (c) and for (4, 0.7)-robust Bayesian learning in panel (d). Visual inspection of the last two panels confirms that (m, t)-robust Bayesian learning can mitigate the effect of outliers, as it reduces the spurious multipath components associated with larger delays. For a numerical comparison, Figure 7.6 compares frequentist and (4, t)-robust Bayesian learning in terms of both accuracy -as measured by the MMD -and uncertainty quantification -as evaluated via the AUROC. For t < 0.85, robust Bayesian learning is confirmed to have the capacity to mitigate the effect of the outlying component, almost halving the MMD obtained by frequentist learning. Furthermore, robust Bayesian learning has a superior uncertainty quantification performance, with gains increasing for decreasing values of t.
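To make the training objective (7.10) and the sampling procedures in (7.7)-(7.8) concrete, the following is a minimal sketch of a VAE matching the architecture described above; the PyTorch-style implementation, the ELU activation of the hidden layers, the log-variance parametrization of the encoder covariance, and the reading of σ as the decoder variance are assumptions made here for illustration only.

```python
import torch
from torch import nn

class ChannelVAE(nn.Module):
    """VAE sketch: 128-dim channel magnitude, 5-dim latent, 10 hidden units."""
    def __init__(self, x_dim=128, h_dim=5, hidden=10, sigma=0.1):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ELU())
        self.enc_mu = nn.Linear(hidden, h_dim)
        self.enc_logvar = nn.Linear(hidden, h_dim)
        self.dec = nn.Sequential(nn.Linear(h_dim, hidden), nn.ELU(),
                                 nn.Linear(hidden, x_dim))
        self.sigma = sigma  # decoder variance (assumed interpretation of sigma = 0.1)

    def elbo(self, x):
        e = self.enc(x)
        mu, logvar = self.enc_mu(e), self.enc_logvar(e)
        h = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparametrized h ~ q(h|x)
        rec = -0.5 * ((x - self.dec(h)) ** 2).sum(-1) / self.sigma  # log p(x|h, theta_d) up to a constant
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(-1)  # KL(q(h|x) || N(0, I))
        return (rec - kl).mean()

    def sample(self, n=1):
        h = torch.randn(n, self.enc_mu.out_features)  # h ~ p(h)
        return self.dec(h)                            # mean of p(x|h, theta_d)

vae = ChannelVAE()
x = torch.randn(32, 128)       # placeholder batch of channel magnitudes
loss = -vae.elbo(x)            # maximize the variational lower bound (7.10)
loss.backward()
print(vae.sample(2).shape)
```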
Conclusion
This chapter has focused on the problem of ensuring that AI models trained for wireless communications satisfy reliability and robustness requirements. We have specifically addressed two important problems: model misspecification, arising from limitations on the available knowledge about the problem and on the complexity of the AI models that can be implemented on network devices; and outliers, which cause a mismatch between training and testing conditions. We have argued that standard frequentist learning, as well as Bayesian learning, are not designed to address these requirements, and we have explored the application of robust Bayesian learning to achieve robustness to model misspecification and to the presence of outliers in the training data set. Robust Bayesian learning has been shown to consistently provide better accuracy and uncertainty estimation capabilities in a range of important wireless communication problems. These results motivate a range of extensions of robust Bayesian learning and of its applications; for instance, the integration of robust Bayesian learning into the meta-learning framework, in order to enable robust and sample-efficient learning, or the application of robust Bayesian learning to higher layers of the protocol stack as a tool to empower semantic communication.
Chapter 8
Conclusion
The content of this thesis illustrates how vast and heterogeneous the range of problems arising from the application of machine learning in wireless communication networks is. For some of these challenges, we provided solutions that we hope will contribute to the adoption of reliable machine learning solutions in 6G networks. In the following, we summarize the contributions and potential directions for future work. In Part I of this manuscript, we focused on decentralized training of machine learning models over device-to-device 6G networks. In Chapter 2, we proved that wireless networks, even when characterized by straggling nodes and unreliable communication links, are a suitable infrastructure for the training of machine learning models in a decentralized manner. In particular, we showed how asynchronous updates can greatly reduce the convergence time of optimization procedures without hampering the quality of the final model. As shown by our analysis, the training procedure converges despite the dissemination of outdated updates and sparse communication between workers. This achievability result provides us with the flexibility of designing energy-efficient optimization procedures in which devices communicate only in opportune slots; for example, when the wireless channel is in favourable conditions or when the model updates are relevant. Studying the trade-off between energy-efficiency performance indicators and the convergence properties of decentralized optimization represents an interesting research direction. In Chapter 3, we have investigated the potential role of UAVs in decentralized learning procedures. While the literature on UAV-aided communication is vast, the applications of UAVs in the context of edge learning are mostly unexplored. Our results are presented for a single drone and its optimized trajectory is obtained accordingly. Analyses of multi-drone scenarios and the derivation of a jointly optimized trajectory are natural extensions of the results presented in this chapter. Part II has been devoted to addressing one of the fundamental limitations of collaborative learning procedures: data heterogeneity. We provided two possible algorithms to mitigate the detrimental effects due to the aggregation of heterogeneous data.
In Chapter 4 we formulated the learning problem as a distributionally robust optimization problem and provided a communication-efficient algorithm to solve it. The outcome of this procedure is a single machine learning model that is fair, i.e. it has satisfactory performance on all devices. The concept of fairness is fundamental in wireless commu-nication protocols, therefore we expect that the tools derived in this chapter can find application in many networking problems. In Chapter 5 we tackled data heterogeneity by proposing a training procedure that outputs personalized models to serve groups of users with different needs. The main underpinning of the algorithm is the estimation of the similarity between users' learning tasks. Evaluating similarity scores between users poses a threat to the privacy guarantees of federated algorithms, therefore investigating the trade-off between personalization and privacy may shed light on the fundamental limitations of this approach. In Part III, motivated by the necessity of quantifying uncertainty in wireless communication learning problems, we proposed the (m, t)-robust Bayesian learning framework, a Bayesian learning procedure capable of addressing both model misspecification and the presence of outliers. The proposed methodology produced well-calibrated and robust predictive posteriors over a range of wireless communication problems. In Chapter 7, we showed that (m, t)-robust Bayesian learning greatly outperforms the frequentist and standard Bayesian learning approaches. Despite the superior uncertainty quantification capabilities, the (m, t)-robust Bayesian learning relies on ensembling, which comes with a potentially large computational cost. This is due to the necessity of sampling and aggregating the output of multiple components to perform inference. Therefore, it become essential to validate the merits of the (m, t)-robust Bayesian learning when applied to more computationally efficient ensembling approaches. Furthermore, (m, t)-robust Bayesian learning can be directly applied to reinforcement learning, as well as to meta-learning, for which Bayesian methods have recently been investigated. F ≤ (1 -δ) W (t) X -X 2 F It follows that, for any X ∈ R d×m E W (t) W (t) X -X 2 F =qE W (t) |E (t) W (t) X -X 2 F + (1 -q)E W (t) | Ē(t) X -X 2 F ≤q(1 -δ) W (t) X -X 2 F + (1 -q) X -X 2 F where we have lower bounded the consensus rate by zero in case of disconnected topologies. Grouping terms and having assumed q > 0, we obtain that the expected consensus is satisfied with rate (1qδ) > 0. A.2 Proof of Lemma 2 Similarly to [START_REF] Xing | Federated learning over wireless device-todevice networks: Algorithms and convergence analysis[END_REF][START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF] we establish the following recursive inequality i=1 E θ (t) -θ(t) 2 ≤ 1 - pζ 2 m i=1 E θ (t-1) -θ(t-1) 2 + η 2 pζ 6mG 2 + ζ 2 m i=1 E ñ(t) i 2 . and then solving the recursion we obtain the final expression. A.3 Proof of Theorem 1 We denote stale gradients by g i ( θ(t) i ) = g i (θ (t-τ i ) i ). According to the update rule, at each iteration t + 1, we have E[f ( θt+1 )] = E f θt - 1 m m i=1 η(t) i g i ( θ(t) i ) + ζ ñ(t) i where the expectation is w.r.t. the stochastic gradients, the communication noise Ξ (t) , and the computation and communication failures at iteration t + 1. 
For an L-smooth objective function, we have E[f ( θ(t+1) )] ≤ f ( θ(t) ) - 1 m m i=1 ∇f ( θ(t) ), E[η (t) i g i ( θ(t) i ))] :=T 1 + L 2m 2 E m i=1 η(t) i g i ( θ(t) i )) 2 :=T 2 + L 2m 2 ζ 2 m i=1 E ñ(t) i 2 where we used the fact that the communication noise has zero mean and is independent across users. Adding and subtracting ∇f i ( θ(t) ) to each summand of T 1 and since E[η (t) i g i ( θ(t) i )] = η∇f i ( θ(t) i ), with η = min j (1 -ρ j )/( √ 4LT ), we obtain T 1 = -η ∇f ( θ(t) ), 1 m m i=1 ∇f i ( θ(t) i ) = η 2 ∇f ( θ(t) ) - 1 m m i=1 ∇f i ( θ(t) i ) 2 - η 2 ∇f ( θ(t) ) 2 - η 2m 2 m i=1 ∇f i ( θ(t) i ) 2 ≤ ηγ 2 ∇f ( θ(t) ) 2 + ηL 2 2m m i=1 θ (t) i -θ(t) 2 - η 2 ∇f ( θ(t) ) 2 - η 2m 2 m i=1 ∇f i ( θ(t) i ) where we have used the staleness assumption. The last term can be bounded using the property of the stochastic gradient and the fact that η (t) i ≤ 1/( √ 4LT ) ≤ 1/( √ 4L) as T 2 ≤ L 2m 2 E m i=1 η(t) i [g i ( θ(t) i ) -∇f i ( θ(t) i )] 2 + L 2m 2 E m i=1 η(t) i ∇f i ( θ(t) i ) 2 ≤ σ 2 8mT + η 8m 2 E m i=1 ∇f i ( θ(t) i ) 2 . Summing T 1 and T 2 we obtain where P Λ is applied column-wise. The compressed gossip algorithm CHOCO-GOSSIP [START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF] used to share model parameters preserves averages and satisfies the following recursive inequality with c = ρ 2 δ 82 T 1 + T 2 ≤ - η 2 (1 -γ) ∇f ( θ(t) ) 2 + σ 2 8mT + ηL 2 2m m i=1 θ (t) i -θ(t) 2 - η 4m 2 m i=1 ∇f i ( θ(t) i ) 2 . Defining γ ′ = (1 -γ), E Θ t+1 -Θt+1 2 F + Θ t+1 -Θt+1 2 F ≤(1 -c) E Θ t+ 1 2 -Θt+ 1 2 2 F + Θ t+ 1 2 -Θt 2 F . (B.19) The uncompressed gossip scheme used to communicate Λ satisfies LG where D λ = max t=0...,T E λtλ . E Λ t+1 -Λt+1 2 F ≤(1 -ρ) E Λ t+ 1 2 -Λt+ 2 θ c 2 + 1 √ T √ 12 D λ LG θ c + 2 D θ LG λ ρ + 1 √ T D θ + D λ 2 + G 2 θ + G 2 Proof The proof follows similarly as in Lemma ( 6) ∇ θ g i (θ t-1 i , λ ∇ λ g i (θ t-1 i , λ t-1 i ) ± ∇ λ g i ( θt-1 , λt-1 ) -∇ λ g i ( θt-1 , λ * ( θt-1 )) E ξ t λt+1 2 (B.86) ≤ 2η 2 λ m m i=1 ∇ λ g i (θ t-1 i , λ t-1 i ) -∇ λ g i ( θt-1 , λt-1 ) where the last inequality follows from choosing η λ ≤ 1/(2L). Substituting the expressions we get T 6 = 1 - µη λ 2 λ * ( θt-1 ) -λt-1 2 + η 2 λ σ 2 λ m + Lη λ m Ξ t-1 θ + 3Lη λ m Ξ t-1 λ + 2η λ LD t-1 λ 1 m Ξ t- C.2 Proof of Theorem 5 Thanks to the upper bound on the target domain risk and the fact that the sum of two sub-Gaussian random variables of parameter σ is also sub-Gaussian with parameter 2σ, we can decompose the excess risk as Exc( f ⃗ w i , P i ) = E z∼P i [ℓ( f ⃗ w i , z)] -inf f ∈F E z∼P i [ℓ(f, z)] = E z∼P i [ℓ( f ⃗ w i , z) -ℓ(f * , z)] ≤ E z∼P ⃗ w i [ℓ( f ⃗ w i , z) -ℓ(f * , z)] + 2βσ 2 + D JS (P i ||P ⃗ w i ) β From the convexity of the KL-divergence we can bound the Jensen-Shannon divergence as follows where we have used the simplified notation ∆ m,n = ∆ m,n (Θ, D), and the equality follows from the basic properties of the KL divergence. A direct application of Markov's inequality is then used to bound the last term of (D.7) with high probability. 
Namely, with probability greater then 1σ with respect to the random drawn of the data set D ∼ ν(x) ⊗n , the following holds Finally, the result above can be translated to a guarantee with respect to the ID measure ν(x) = ν(x) 1-ϵ -ϵ 1-ϵ ξ(x) via the sequence of inequalities E ν(x),q(θ) ⊗mlog t E j∼U [1:m] p(x|θ j ) = E ν(x),q(θ) ⊗mlog t E j∼U [1:m] p(x|θ j ) E p( 1ϵ + ϵ E ϵ(x),q(θ) ⊗mlog t E j∼U [1:m] p(x|θ j ) 1ϵ (D.12) ≤ E ν(x),q(θ) ⊗mlog t E j∼U [1:m] p(x|θ j ) 1 -ϵ + ϵ C 1-t -1 (1 -ϵ)(1 -t) , (D.13) where the last inequality follows by having assumed the probabilistic model being uniformly upper bounded by C (Assumption 2). ■ Finally, with regard to the comparison between the PAC m bound in Theorem 1 in [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] and the guarantee with respect to the ID measure, we observe that it is not in general possible to translate a guarantee on the log t -risk to one on the log-risk. This can be illustrated by the following counter-example. Consider the following discrete target distribution parametrized by integer k, which defines the size of its support, as and therefore that an ensemble optimized for a value of t in the range [0, 1) can incur in an unboundedly large loss when scored using the log-loss. ν k (x) = 1 -1 k , for x = 0 D.2 Proof of Theorem 6 Theorem. The minimizer of the robust m-free energy objective where the average in (6.47) is taken with respect to the i.i.d. random vectors {θ i } i̸ =j ∼ q(θ) ⊗m-1 . J m t ( Proof: The functional derivative of the multi-sample risk is instrumental to computation of the minimizer of the robust m-free energy objective (6.41). This is given as Rm t (q ϕ , x) -Rm t (q ϕ , z) . d (D.28) The proof of Theorem 7 directly follows from the Cauchy implicit function theorem stated below. Theorem 8 (Cauchy implicit function theorem). Given a continuously differentiable function F : R n × R m → R m , with domain coordinates (x, y), and a point (x * , y * ) ∈ R n ×R m such that F (x * , y * ) = 0, if the Jacobian J F,y (x * , y * ) = ∂F 1 (x * ,y * ) ∂y 1 , . . . , ∂Fm(x * ,y * ) ∂ym is invertible, then there exists an open set U that contains x * and a function g : U → Y such that g(x * ) = y * and F (x, g(x)) = 0, ∀x ∈ U . Moreover the partial derivative of g(x) in U are given by ∂g ∂x i (x) = -[J F,y (x, g(x))] -1 ∂F ∂x i (x, g(x)) (D.29) Proof: Replacing F (x, y) with for t = {1, 0.9, 0.7, 0.5, 0.3, 0.1} respectively. Table D.1: Total variation (TV) distance between the ID measure ν(x) and the predictive distribution p q (x) obtained from the optimization of the different free energy criteria. 38 4 . 2 42 IoT network comprising edge devices with different sampling capabilities and operating in different conditions. The network goal consists in exploiting the heterogeneous distributed dataset and the D2D links to collaboratively train a robust and fair machine learning model. . . . . . . 41 4.3 Average and worst-case accuracies of a fully connected neural network vs. number of transmitted bits using the random quantization compression scheme. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.4 Comparison between distributionally robust federated averaging (DRFA), federated averaging (FedAvg) and the proposed algorithm (AD-GDA) for different compression techniques. . . . . . . . . . . . . . . . . . . . . . . . 48 5.1 Personalized Federated Learning with user-centric aggregates at round t. . 
3 : 33 Figure 3.3: Testing accuracy, averaged over 5 experiments, obtained by the mean network estimate when training is aided by a UAV serving as a relay to assist the decentralized learning protocol (green), or as an orchestrator to perform federated learning (gray dashed). Figure 4 . 1 : 41 Figure 4.1: Validation accuracy of a mouse cell image classifier trained on the COOS7 dataset[START_REF] Lu | The cells out of sample (coos) dataset and benchmarks for measuring out-of-sample generalization of image classifiers[END_REF]. We consider a network of 5 devices with one device sampling images using a different microscope from the rest of the collaborating devices. CHOCO-SGD (solid lines), a not robust decentralized learning scheme, yields a model with highly imbalanced performance between the two type of instruments, while AD-GDA (dashed curves), the proposed distributionally robust algorithm, drastically reduces the accuracy gap and improve fairness among the collaborating devices. Figure 4 . 2 : 42 Figure 4.2: IoT network comprising edge devices with different sampling capabilities and operating in different conditions. The network goal consists in exploiting the heterogeneous distributed dataset and the D2D links to collaboratively train a robust and fair machine learning model. Algorithm 2 : 2 Agnostic Decentralized GDA with Compressed Communication (AD-GDA) Input : Number of devices m, number of iterations T , learning rates η θ and η λ , mixing matrix W , initial values θ 0 ∈ R d and λ 0 Theorem 3 . 3 Under Assumptions 5, 6, we have that the solution θ o returned by Algorithm 2 with learning rates η θ = η λ 16(κ+1) 2 and η λ = 1 2L √ Figure 4 . 3 : 43 Figure 4.3: Average and worst-case accuracies of a fully connected neural network vs. number of transmitted bits using the random quantization compression scheme. AD-GDA 4 bit Quantization AD-GDA 4 bit Quant.+Top-50% Spars. AD-GDA 4 bit Quant.+Top-25% Spars. Figure 4 . 4 : 44 Figure 4.4: Comparison between distributionally robust federated averaging (DRFA), federated averaging (FedAvg) and the proposed algorithm (AD-GDA) for different compression techniques. Algorithm 3 : 3 Silhouette based scoring Input : Collab. vectors { ⃗ w i } m i=1 from (5.10) and trade-off function c(k, s k ). Output : Number of clusters m t and personalized streams for k = 1, 2, . . . , m do C k ← K-means clustering of { ⃗ w i } m i=1 s k ← the silhouette score of s(C) end return m t = arg max k=1,...,m c(k, s k ) and cluster centers of C mt a rule of thumb to choose the number of personalized streams. In order to compute the silhouette score of a clustering C 1 , . . . , C mt of the clustering we define the intra-cluster similarity of the collaboration vector ⃗ Figure 5 . 2 : 52 Figure 5.2: Evolution of the average validation accuracy in the three simulation scenarios. (c) ρ = 1 , 0 Figure 5 . 3 : 1053 Figure 5.3: Evolution of the average validation accuracy against time normalized w.r.t. T dl for the three different systems. range Figure 5 . 4 : 54 Figure 5.4: Average silhouette scores of the k-means clustering in the three scenarios. In the last two scenarios, in which user inherently belongs to 4 different cluster, the scores indicates the necessity of at least 4 personalized streams. Figure 6 . 
2 : 62 Figure 6.2: Estimated distribution over a scalar channel gain (left panel) and corresponding posterior distribution q(θ) over the model parameter θ (right panel) for frequentist learning, Bayesian learning with β ∈ {1, 0.1} and (m, 1)-robust Bayesian learning with m = 10. The training data set, represented as crosses, is sampled from the target distribution ν(x). 8 ) 8 This optimization follows the empirical risk minimization (ERM) principle. Accordingly, the frequentist solution is a single model parameter θ freq ∈ Θ that minimizes the training loss, i.e., θ freq = arg min θ∈Θ L(θ, D).(6.9) Figure 6 . 3 : 63 Figure 6.3: Estimated distribution over channel gains (left panel) and posterior distribution over the model parameter θ (right panel) of a density model trained following (m, 1)-robust Bayesian learning, the (1, t)-robust Bayesian learning and the (m, t)-robust Bayesian learning. The training data set, represented as crosses, comprises samples from the sampling distribution ν(x) (black) and an outlier (red). ) t = 1 tFigure 6 . 4 : 164 Figure 6.4: Absolute value of the contamination dependent term ∂ ∂ϕ Rm t (q ϕ , z) evaluated at ϕ m * t (0) for different values of t. The predictive distribution of the ensemble model concentrates around 1. 8 Figure 6 . 5 : 865 Figure 6.5: Ensemble predictive distribution obtained minimizing different free energy criteria. The samples from the ID measure are represented as green dots, while data points sampled from the OOD component are in red. The optimized predictive distributions are displayed in shades of gray. In (a), we plot the predictive distribution associated to (m, 1)-robust Bayesian learning obtained minimizing the m-free energy criterion J m of[START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF] with m = 20 by using only samples from the ID measure (i.e., there are no outliers). In (b), we show the predictive distribution obtained by minimizing the same criterion when using samples from the ID measure and OOD measure with a contamination ratio ϵ = 0.1. In (c) and (d) we consider the same scenario as in (b), but we consider the proposed (m, t)-robust Bayesian based on the robust m-free energy criterion J m t with m = 20, when setting t = 0.9 and t = 0.8, respectively. OOD component ξ(x) = ξ(a, b) = p(a)ξ(b) also has a uniformly distributed covariate a in the interval [-10.5, 10.5], but, unlike the ID measure, the response variable b is independent of a, with a distribution concentrated around b = 0 as ξ(b) = N (b|0, 0.1). (6.64) Figure 6 . 6 :Figure 6 . 7 : 6667 Figure 6.6: Test accuracy (top) and expected calibration error (ECE) (bottom) as a function of t under the contamination ratio ϵ = 0.3 for: (i) deep ensembles[START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF]; (ii) robust Gibbs predictor, which minimizes the free energy criterion J 1 t[START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF]; and (iii) (m, t)-robust Bayesian learning, which minimizes the free energy criterion J 10 t . Figure 6 . 
Figure 6.8: Negative log-likelihood computed on an uncorrupted data set for: (i) deep ensembles [START_REF] Lakshminarayanan | Simple and scalable predictive uncertainty estimation using deep ensembles[END_REF]; (ii) robust Gibbs predictor, which minimizes J^1_t [START_REF] Amid | Robust bi-tempered logistic loss based on Bregman divergences[END_REF]; and (iii) (m, t)-robust Bayesian learning, which minimizes J^10_t. The models are trained on ϵ-contaminated data sets for ϵ ∈ {0, 0.1, 0.2, 0.3}.
The predictor selects the class ŷ with the largest predictive probability. (7.1) The corresponding confidence score assigned by the predictor p(y|x) is the probability p(ŷ|x) ∈ [0, 1]. The calibration of a classifier measures the degree to which the confidence score p(ŷ|x) ∈ [0, 1] reflects the true probability of correct classification P[ŷ = y|x] conditioned on the input x.
Figure 7.1: Average test accuracy and ECE for AMC over the DeepSIG: RadioML 2016.10A data set [START_REF] O'shea | Convolutional radio modulation recognition networks[END_REF] for frequentist and (m, t)-robust Bayesian learning as a function of the parameter t. The test set is free from interference, while the training set is subject to interference (ϵ = 0.5).
Figure 7.3: Predictive distribution p(y|x) as a function of the estimated position of the transmitter y, where x is the RSSI vector associated to the true location shown as a green cross. The black dots correspond to the locations recorded in the SigfoxRural data set. The left panel shows the predictive distribution for Bayesian learning, while the right panel depicts the predictive distribution for (m, t)-robust Bayesian learning with m = 10 and t = 1. No outliers are considered in the training set, i.e., ϵ = 0.
Figure 7.4: Test mean squared error (7.5) for frequentist and (m, t)-robust Bayesian learning with m = 10 and t ∈ {1, 0.96} as a function of the corruption level ϵ for RSSI-based localization. As ϵ increases, the training data sets are increasingly affected by outliers.
Figure 7.5: The top row shows a sample of the magnitude of the TDL-A channel response given a delay spread τ = 100 ns in panel (a), while an outlier sample corresponding to the larger delay spread τ = 300 ns is depicted in panel (b). The bottom row reports a sample from the trained model for frequentist learning in panel (c) and for (4, 0.7)-robust Bayesian learning in panel (d).
Lemma 4 (Consensus inequality for compressed communication [START_REF] Koloskova | Decentralized deep learning with arbitrary communication compression[END_REF]). For a fixed η_θ > 0 and γ = ρ^2 δ / (16ρ + ρ^2 + 4β^2 + 2ρβ^2 - 8ρδ), the iterates of Algorithm 2 satisfy a consensus inequality on the model variables.
Lemma 5 (Consensus inequality for uncompressed communication [START_REF] Koloskova | Decentralized stochastic optimization and gossip algorithms with compressed communication[END_REF]). For a fixed η_λ > 0, the iterates of Algorithm 2 satisfy a consensus inequality on the dual variables.
B.2 Proof of Theorem 2: Convex case. Define Φ(•) = max_{λ∈∆^{m-1}} g(•, λ); under Assumptions 5, 6 and if the local objective functions {f_i(θ)}_{i=1}^m are convex, Theorem 2 guarantees a bound on the sub-optimality gap E[Φ(θ^o)] - min_{θ∈Θ} E[Φ(θ)] of the output solution (θ^o, λ^o). The bound follows from decomposing the gap (B.26)-(B.28) into the averaged terms (1/T) Σ_{t=0}^{T-1} E[(g(θ̄^t, λ) - g(θ̄^t, λ̄^t)) + (g(θ̄^t, λ̄^t) - g(θ, λ̄^t))]. Thanks to Lemmas (6) and (7) proved below, the two summands can be bounded to obtain the result, where D_θ = max_{t=0,...,T} E[D^t_θ] = max_{t=0,...,T} E‖θ̄^t - θ‖.
Lemma 7. For T > 0 and any λ, the sequence {θ̄^t, λ̄^t}_{t=0}^T generated by Algorithm 2 satisfies a bound on E[g(θ̄^t, λ) - g(θ̄^t, λ̄^t)] involving δ^0_λ, D̄_λ, the smoothness constant L and the step size η_λ. (B.61)
The remaining term T_4 is bounded by telescoping E[Φ(θ̄^0)] - E[Φ(θ̄^T)], controlling the terms ⟨∇Φ(θ̄^{t-1}), ∇Φ(θ̄^{t-1}) - (1/m) Σ_{i=1}^m ∇_λ g_i(θ̄^{t-1}, λ̄^{t-1})⟩ and the differences g_i(θ̄^{t-1}, λ*(θ̄^{t-1})) - g_i(θ̄^{t-1}, λ̄^{t-1}), and then simplifying with D̄_λ = max_{t=0,...,T} D^t_λ.
Consider the distribution ν_k(x) = 2^{-k^2} for x = 1, ..., 2^{k^2} (D.14) and the optimization of the log_t-loss over a predictive distribution p(x). The following limit holds: lim_{k→∞} min_p E_{ν_k(x)}[-log_t p(x)] = 0 for t ∈ [0, 1) and ∞ for t = 1. (D.15)
The robust m-free energy J^m_t(q) := L^m_t(q, D) + (m/β) D_1(q(θ)||p(θ)) (D.16) is minimized at the fixed point of the operator T(q) := p(θ_j) exp(β Σ_{x∈D} E_{{θ_i}_{i≠j}}[log_t (1/m) Σ_{i=1}^m p(x|θ_i)]). (D.17)
The functional derivative of the robust m-free energy follows from the derivative of the nonlocal functional E_{θ_1,...,θ_{m-1}∼q(θ)^{⊗(m-1)}}[log_t E_{j∼U[1:m]} p(x|θ_j)] (D.20)-(D.24), where the integrand is invariant under permutation of {θ_i}_{i≠k}; imposing that the functional derivative equals the zero function, the optimized posterior must satisfy q(θ_m) = p(θ_m) · exp(β E_{θ_1,...,θ_{m-1}∼q(θ)^{⊗(m-1)}}[log_t E_{j∼U[1:m]} p(x|θ_j)]). (D.25)
Proof of Theorem 7.
Figure D.1: Ensemble predictive distribution obtained minimizing different free energy criteria and different values of m. The samples from the ID measure are represented as green dots, while data points sampled from the OOD component are in red. The predictive distribution obtained minimizing the standard m-free energy is denoted by J^m, while the predictive distributions yielded by the minimization of the robust m-free energy are denoted by J^m_{0.9}, J^m_{0.7}, J^m_{0.5}, J^m_{0.3} and J^m_{0.1}.
1.1 Comparison between the communication topology induced over a platoon of smart vehicles by the federated (left) and the decentralized learning protocols (right). Federated learning cannot harness the full platoon resources when constrained to one-hop communication with the orchestrator. Decentralized learning allows to connect the entire platoon exploiting short-range device-to-device links.
Testing accuracy averaged over 5 runs, obtained by the mean network estimate, using different UAV-aided decentralized learning protocols.
Evolution of the average consensus error (3.21) attained by the benchmarked trajectories. A smaller consensus error corresponds to less disagreement between network nodes.
2.1 An example of the timeline for one training iteration composed of alternate Broadcast and AirComp slots.
2.2 Average spectral gap under different delay constraints for mesh, ring, and two-dimensional torus topologies with 9 nodes. Each link is associated to a completion time ∼ Exp(1) and is dropped if it exceeds the delay tolerance value.
2.3 Test accuracy versus time under different channel gain thresholds. Smaller thresholds result in larger average consensus rates and therefore in faster convergence.
2.4 Test accuracy for the asynchronous, synchronous with delay barrier, and synchronous schemes under two different values of T_max.
3.1 Different UAV trajectory and placements. Black dots represent ground users and the gray vertical line is a propagation obstacle that corresponds to a 35 dB attenuation.
3.2 M. Zecchin†, and M. Kountouris, "Asynchronous Decentralized Learning over Unreliable Wireless Networks," ICC 2022 - IEEE International Conference on Communications, 2022.
• M. Zecchin, D. Gesbert, and M. Kountouris, "UAV-Aided Decentralized Learning over Mesh Networks," EUSIPCO 2022 - European Signal Processing Conference, 2022.
(Plot: testing accuracy vs. iteration for the Fully Connected Topology, Proposed Trajectory, Cluster Mid-points Traversal, Barycenter Placement and Maximum Connectivity Placement schemes.)
Table 4.1: Worst-case distribution accuracy attained by AD-GDA and CHOCO-SGD for different compression schemes.
                       Quantization                                    Sparsification
                       16 bit         8 bit          4 bit             50%            25%            10%
Logistic AD-GDA        59.19 ± 2.05   57.43 ± 1.44   55.75 ± 2.09      57.05 ± 0.68   54.02 ± 1.14   51.51 ± 2.88
Logistic CHOCO-SGD     30.69 ± 0.96   30.06 ± 0.83   29.46 ± 0.05      30.28 ± 0.60   28.56 ± 0.54   26.39 ± 0.67
F.C. AD-GDA            54.99 ± 1.92   48.99 ± 2.30   47.08 ± 2.53      51.85 ± 2.11   43.65 ± 2.97   38.95 ± 3.21
F.C. CHOCO-SGD         30.83 ± 2.22   28.08 ± 2.50   28.01 ± 2.59      29.92 ± 2.54   27.11 ± 2.96   25.91 ± 3.20
Table 4.2: Worst-case distribution accuracy attained by AD-GDA and CHOCO-SGD for different network topologies.
                       Top-10% Sparsification              4-bit Quantization
                       2D Torus        Mesh               2D Torus        Mesh
Log. AD-GDA            54.00 ± 0.61    54.07 ± 0.03       56.94 ± 0.38    57.11 ± 0.03
Log. CHOCO-SGD         26.82 ± 0.41    29.00 ± 0.02       30.82 ± 0.24    30.97 ± 0.03
F.C. AD-GDA            44.31 ± 2.47    45.21 ± 2.22       50.16 ± 1.85    50.80 ± 1.83
F.C. CHOCO-SGD         26.02 ± 2.29    26.38 ± 2.65       28.79 ± 2.22    28.96 ± 1.87
Table 4.3: Testing accuracy attained at convergence for different regularization values α. The first two columns represent the accuracy when the model is tested on images produced by microscope 1 and microscope 2. The last column is the average accuracy when tested on a 50/50 test dataset.
            Microscope 1     Microscope 2     Mean
α = ∞       76.03 ± 1.45     65.86 ± 1.26     70.73 ± 1.33
α = 1       73.30 ± 2.20     91.11 ± 0.63     84.30 ± 1.6
α = 0.1     79.78 ± 2.30     79.02 ± 1.40     78.48 ± 0.96
α = 0.01    77.51 ± 1.51     76.54 ± 2.25     77.52 ± 1.43
Table 5.1: Worst user performance averaged over 5 experiments in the three simulation scenarios.
                                  Local   FedAvg   Oracle   CFL    FedFOMO   Proposed
EMNIST + label shift              58.8    68.9     -        70.3   70.0      73.2 (k = 20)
EMNIST + cov. & label shift       56.0    67.5     77.4     76.1   73.6      76.4 (k = 4)
CIFAR10 + concept shift           35.7    19.6     49.1     48.6   45.5      48.8 (k = 4)
In Figure 7.1 we report the average test accuracy and ECE for frequentist and (m, t)-robust Bayesian learning with different values of m as a function of t.
Figure 7.2: Reliability diagrams for frequentist (left) and (m, t)-robust Bayesian learning for m = 4 and t = 0.7 (right) for AMC over the DeepSIG: RadioML 2016.10A data set [6].
Table 7.1: Test negative log-likelihood for RSSI localization (7.6) with t = 1 and no outliers (ϵ = 0). The case m = 1 corresponds to conventional Bayesian learning.
              m = 1          m = 2           m = 10
SigfoxRural   1.70 ± 1.03    -0.43 ± 0.61    -1.59 ± 0.36
UTSIndoor     4.33 ± 2.32    2.25 ± 1.69     2.17 ± 1.76
UJIIndoor     4.86 ± 1.02    2.74 ± 0.46     1.44 ± 0.33
Proposition 3. Given two vectors a, b ∈ R^d, for β > 0 we have 2⟨a, b⟩ ≤ β^{-1}‖a‖^2 + β‖b‖^2 (B.8) and ‖a + b‖^2 ≤ (1 + β^{-1})‖a‖^2 + (1 + β)‖b‖^2. (B.9)
Proposition 4. Given two matrices A ∈ R^{p×q}, B ∈ R^{q×r}, we have ‖AB‖_F ≤ ‖A‖_F ‖B‖_2 (B.10), where ‖•‖_F denotes the Frobenius norm.
Proposition 5. Given a set of vectors {a_i}_{i=1}^n we have ‖Σ_{i=1}^n a_i‖^2 ≤ n Σ_{i=1}^n ‖a_i‖^2. (B.11)
Consensus inequalities. To streamline the notation we define ∇̃g_i(θ^t_i, λ^t_i) = ∇g_i(θ^t_i, λ^t_i, ξ^t_i) and introduce the matrices Θ^t = [θ^t_1, ..., θ^t_m] ∈ R^{d×m} (B.12), Θ̄^t = [θ̄^t_1, ..., θ̄^t_m] ∈ R^{d×m} (B.13), Λ^t = [λ^t_1, ..., λ^t_m] ∈ R^{m×m} (B.14), ∇̃_θ G(Θ^t, Λ^t) = [∇̃_θ g_1(θ^t_1, λ^t_1), ..., ∇̃_θ g_m(θ^t_m, λ^t_m)] ∈ R^{d×m} (B.15) and ∇̃_λ G(Θ^t, Λ^t) = [∇̃_λ g_1(θ^t_1, λ^t_1), ..., ∇̃_λ g_m(θ^t_m, λ^t_m)] ∈ R^{m×m} (B.16); for a matrix X we define X̄ = X(11^T/m). The local update rule of Algorithm 2 can be rewritten as Θ^{t+1/2} = Θ^t - η_θ ∇̃_θ G(Θ^t, Λ^t) (B.17) and Λ^{t+1/2} = P_Λ(Λ^t + η_λ ∇̃_λ G(Θ^t, Λ^t)). (B.18)
The dual-gap terms E[g(θ̄^t, λ) - g(θ̄^t, λ̄^t)] and E[g(θ̄^t, λ̄^t) - g(θ, λ̄^t)] are then bounded using the consensus inequalities (B.21) and (B.22), the smoothness of the local objectives and the bounded variance of the stochastic gradients; telescoping from t = 0 to T - 1, bounding the consensus and noise terms (with σ^2_{w,i} = max_t E‖ñ^{(t)}_i‖^2 and Lemma 2), and choosing the step sizes η = 1/√(4LT) and ζ = 1/T^{3/8} yields the claimed rate, with D̄_λ = max_{t=0,...,T} E[D^t_λ] = max_{t=0,...,T} E‖λ̄^t - λ‖.
B.3 Proof of Theorem 3: Non-convex case. In the case of non-convex functions {f_i}_{i=1}^m, Theorem 3 provides an ϵ-stationarity guarantee on the randomized solution of Algorithm 2. The proof is inspired from recent results in [START_REF] Lin | On gradient descent ascent for nonconvex-concave minimax problems[END_REF]: exploiting the properties of λ*(•) and Φ(•) established in Lemma 4.3 of that work, the terms ‖∇Φ(θ̄^{t-1})‖^2, the consensus errors Ξ^{t-1}_θ and Ξ^{t-1}_λ, and the distance δ^{t-1}_λ = ‖λ*(θ̄^{t-1}) - λ̄^{t-1}‖^2 between the optimal dual variable for the current averaged model and the current averaged dual variable are combined into a descent recursion. Lemma 9, stated below, bounds Σ_{t=1}^T δ^t_λ; plugged into (B.57) and combined with the consensus inequalities, the relation η_θ = η_λ/(16(κ+1)^2) ≤ 1/(2L) and κ ≥ 1, this gives
(1/T) Σ_{t=1}^T E‖∇Φ(θ̄^{t-1})‖^2 ≤ (2L/√T) [256 (E[Φ(θ̄^0)] - E[Φ(θ̄^T)]) + 45Lκ^2 D^0_λ / 2] plus lower-order terms of order 1/√T and 1/T.
Lemma 9. The sequence {δ^t_λ}_{t=1}^T generated by Algorithm 2 satisfies Σ_{t=1}^T E[δ^t_λ] ≤ 5δ^0_λ/(η_λ µ) plus terms proportional to Σ_t E‖∇Φ(θ̄^{t-1})‖^2, the stochastic-gradient variances σ^2_θ and σ^2_λ, and the consensus errors Ξ^{t-1}_θ and Ξ^{t-1}_λ. The proof uses the strong concavity of g(θ, •) and the choice b = 2/(η_λ µ) - 1 > 0, for which (1 + 1/b)(1 - η_λµ/2) ≤ 1 - η_λµ/4, (1 + b) ≤ 4/(η_λµ) and 1 + 1/b ≤ 2 (B.108)-(B.110); fixing η_θ = η_λ/(16(κ+1)^2) gives the contraction factor ν = 1 - η_λµ/4 + 16κ^2 η^2_θ L^2/(η_λµ) ≤ 1 - η_λµ/5 (B.112), and applying this contraction recursively and summing from t = 1 to T gives the bound (B.113)-(B.114).
Summing the resulting recursion for δ^t_λ (B.99)-(B.101) from t = 1 to T and using (B.112) gives the bound stated in Lemma 9.
For the collaboration vectors, the Jensen-Shannon divergence satisfies D_JS(P_i || P_{w_i}) = (1/2) KL(P_i || (P_i + P_{w_i})/2) + (1/2) KL(P_{w_i} || (P_i + P_{w_i})/2), with P_{w_i} = Σ_j w_{i,j} P_j.
From the compression lemma, the gap is bounded by m D_1(q(θ)||p(θ)) + log E_{p(θ)^{⊗m}}[e^{β∆_{m,n}}] (D.7), and with probability at least 1 - σ, log E_{p(θ)^{⊗m}}[e^{∆_{m,n}}] ≤ log E_{ν(x)^{⊗n}, p(θ)^{⊗m}}[e^{∆_{m,n}}] - log σ (D.8)-(D.9). Combining (D.7) with (D.9), the following upper bound on the predictive risk holds with probability 1 - σ:
R_t(q) ≤ E_{q(θ)^{⊗m}}[(1/n) Σ_{x∈D} -log_t E_{j∼U[1:m]} p(x|θ_j)] + (m/β) D_1(q(θ)||p(θ)) + (log E_{ν(x)^{⊗n}} E_{p(θ)^{⊗m}}[e^{∆_{m,n}}] - log σ)/β. (D.10)-(D.11)
Theorem. The influence function of the robust m-free energy objective (6.51) is
IF^m_t(z, ϕ, P_n) = -[∂^2 J^m_t(γ, ϕ)/∂ϕ^2]^{-1} × [∂^2 J^m_t(γ, ϕ)/∂γ∂ϕ] evaluated at ϕ = ϕ^{m*}_t(0), γ = 0, (D.26)
where ∂^2 J^m_t(γ, ϕ)/∂ϕ^2 = E_{P^n_{γ,z}(x)}[∂^2/∂ϕ^2 R̂^m_t(q_ϕ, x)] + ∂^2/∂ϕ^2 (m/β) KL(q_ϕ(θ)||p(θ)) (D.27) and ∂^2 J^m_t(γ, ϕ)/∂γ∂ϕ = ∂/∂ϕ E_{P^n(x)}[...]. (D.28)
Table D.1: Total variation distance between the ID measure and the predictive distribution for different values of m and t.
          t = 1   t = 0.9   t = 0.7   t = 0.5   t = 0.3   t = 0.1
m = 1     0.59    0.42      0.27      0.18      0.16      0.18
m = 2     0.44    0.32      0.22      0.17      0.15      0.15
m = 5     0.34    0.32      0.23      0.18      0.15      0.14
m = 10    0.34    0.30      0.24      0.19      0.15      0.16
† indicates equal contribution. In the machine learning community, the notion of fairness has many facets. In this chapter, we will use the term "fair" in accordance with the notion of good-intent fairness as introduced in [START_REF] Mohri | Agnostic federated learning[END_REF]. The Fashion-MNIST dataset is released under the MIT License.
(1, t)-Robust Bayesian Learning Against Outliers
We now turn to methods that robustify Bayesian learning against the presence of outliers in the training set. As in [START_REF] Huber | Robust estimation of a location parameter[END_REF], we model the presence of outliers by assuming that the training data is generated from a sampling distribution ν_s(x, y) that is given by the contamination of the in-distribution (ID) distribution ν(x, y) by an out-of-distribution (OOD) distribution ξ(x, y). A formal definition follows.
Assumption 7 (Outliers). The sampling distribution is given by ν_s(x, y) = (1 - ϵ)ν(x, y) + ϵξ(x, y), (6.31) where ν(x, y) is the target distribution; ξ(x, y) is the OOD distribution accounting for the presence of outliers; and ϵ ∈ [0, 1] denotes the contamination ratio.
In order for model (6.31) to be meaningful, one typically assumes that the OOD measure ξ(x, y) is large for pairs (x, y) at which the target measure ν(x, y) is small. This ensures that outlying data points (x, y) ∼ ξ(x, y) tend to lie in parts of the domain that are not covered by the target distribution.
The performance of both frequentist and Bayesian learning is known to be sensitive to outliers when the log-loss is adopted to evaluate the training loss. This sensitivity is caused by the unbounded value of the log-loss (6.16) when evaluated on anomalous data points to which the model assigns low probabilities p(y|x, θ). This is illustrated in Figure 6.1 for a general conditional distribution p(y|x).
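As a minimal numerical illustration of the contamination model (6.31) and of the unboundedness of the log-loss on outlying points, the following Python sketch draws samples from an ϵ-contaminated mixture and evaluates the log-loss of a fixed Gaussian model on ID and OOD points; the specific densities, parameter values and function names are illustrative assumptions rather than quantities taken from this chapter.

import numpy as np

rng = np.random.default_rng(0)

def sample_contaminated(n, eps):
    # draw n points from nu_s = (1 - eps) * nu + eps * xi, as in Assumption 7
    from_ood = rng.random(n) < eps                    # which samples are outliers
    x_id = rng.normal(loc=0.0, scale=1.0, size=n)     # ID measure nu (illustrative choice)
    x_ood = rng.normal(loc=8.0, scale=0.1, size=n)    # OOD measure xi, concentrated far from nu
    return np.where(from_ood, x_ood, x_id), from_ood

def log_loss(x, mu=0.0, sigma=1.0):
    # standard log-loss -log p(x|theta) for a Gaussian model p(x|theta) = N(x|mu, sigma^2)
    return 0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * ((x - mu) / sigma) ** 2

x, is_ood = sample_contaminated(n=10_000, eps=0.1)
print("mean log-loss on ID points :", log_loss(x[~is_ood]).mean())
print("mean log-loss on OOD points:", log_loss(x[is_ood]).mean())   # much larger: the log-loss is unbounded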
A number of papers have proposed to mitigate the effect of outliers by replacing the log-loss with more robust losses [START_REF] Jewson | Principles of Bayesian inference using general divergence criteria[END_REF][START_REF] Basu | Robust and efficient estimation by minimising a density power divergence[END_REF][START_REF] Ghosh | Robust Bayes estimation using the density power divergence[END_REF][START_REF] Fujisawa | Robust parameter estimation with a small bias against heavy contamination[END_REF][START_REF] Nakagawa | Robust Bayesian inference via γ-divergence[END_REF]. A well-explored solution is to adopt the t-log-loss introduced in Section 6.2. Using the t-log-loss in lieu of the standard log-loss in the loss definitions (6.23) and (6.25), we obtain the log_t-loss of the ensemble model (6.21) as
R_t(q, x, y) := -log_t p(y|x, q) = -log_t E_{q(θ)}[p(y|x, θ)], (6.32)
and, analogously, the log_t-loss of the Gibbs predictor R̂_t(q, x, y). (6.33) By (6.2), the above definitions generalize the ones based on the standard log-loss, which are obtained with t = 1. On the other hand, for t < 1 the associated loss function is bounded by (1 - t)^{-1}, as shown in Figure 6.1. Based on (6.33), we obtain the t-training loss as the empirical average of R̂_t(q, x, y) over the training set (6.34), which leads to the corresponding t-free energy criterion.
Appendices. Appendix of Chapter 2.
A.1 Proof of Lemma 1. Define the event E^(t) := {G^(t) is connected} and its complementary event Ē^(t). Whenever the Metropolis-Hastings weights are obtained from a connected graph, the resulting mixing matrix W^(t) has a consensus rate greater than zero. Therefore, there exists δ > 0 bounding the expected consensus rate away from zero.
Useful inequalities. This section contains a collection of ancillary results that are useful for the subsequent proofs, including the first-order optimality conditions when x* is a minimizer or a maximizer, and the property satisfied by a differentiable and µ-strongly concave function g(x).
Lemma 6. For T > 0 and any θ, the sequence {θ̄^t, λ̄^t}_{t=0}^T generated by Algorithm 2 satisfies a bound on E[g(θ̄^t, λ̄^t) - g(θ, λ̄^t)]; the proof plugs the update back into (B.33), rearranges the terms and takes the expectation over the previous iterate. The non-convex analysis is inspired from recent results in [START_REF] Lin | On gradient descent ascent for nonconvex-concave minimax problems[END_REF]; specifically, Lemma 8, stated and proved below, provides a descent inequality for Φ.
Denote by f* the arg min_{f∈F} E_{z∼P_i}[ℓ(f, z)] and bound the estimation error of f_{w_i}. We recognize the estimation error of f_{w_i} w.r.t. the measure P_{w_i}, which can be bounded following fairly standard approaches. In particular, the bound involves the uniform deviation term ∆(G, Z), where G is the class resulting from the composition of the loss function ℓ(•) and F. The uniform deviation can be bounded in different ways, depending on the type of knowledge available about the random variable g(Z); in the following we assume that the loss function is bounded with range B and we exploit Azuma's inequality. In particular, the Doob martingale associated to the weighted loss still has bounded increments. Plugging this back into the previous expression and minimizing with respect to β, we identify the estimation error and bound it as previously done for the preceding theorem.
Lemma. With probability 1 - σ, with σ ∈ (0, 1), with respect to the random sampling of the data set D, for all distributions q(θ) that are absolutely continuous with respect to the prior p(θ), a bound on the population risk of the ensemble model holds. Furthermore, the risk with respect to the ID measure ν(x) can be bounded accordingly if the contamination ratio satisfies the inequality ϵ < 1.
Proof: The proof follows in a manner similar to [START_REF] Morningstar | PAC m -Bayes: narrowing the empirical risk gap in the misspecified Bayesian regime[END_REF]. For a data set of size n, and for an ensemble of models Θ = {θ_i}_{i=1}^m, we define the quantity ∆_{m,n}. From the compression lemma [START_REF] Banerjee | On Bayesian bounds[END_REF], we have that for any distribution q(θ) which is absolutely continuous with respect to the prior p(θ), and for any β < 0, the change-of-measure inequality holds.
The probabilistic model is a unit-variance Gaussian p(x|θ) = N(x|θ, 1), the ensembling distribution q(θ) is represented by a discrete probability supported on 500 evenly spaced values in the interval [-30, 30], and the prior is p(θ) = N(θ|0, 9). For a given m, β and t, the optimized ensembling distribution is obtained by applying the fixed-point iteration in Theorem 6 with damping factor α ∈ (0, 1). In Figure D.1 we report the optimized predictive distributions produced by the above procedure for β = 1, m = {1, 2, 5, 20} and t = {1, 0.9, 0.7, 0.5, 0.3, 0.1}. As m grows larger, the multi-sample bound on the predictive risk becomes tighter. As a result, the predictive distribution becomes more expressive, and it covers all the data points. The use of generalized logarithms offers increased robustness against the outlier data point, and leads to predictive distributions that are more concentrated around the ID measure. In Table D.1 we report the total variation distance between the ID measure and the predictive distribution p_q(x). The proposed robust m-free energy criterion consistently outperforms the standard criterion, halving the total variation distance from the ID measure for t = 0.3.
D.6 Details and Further Results for the Classification Example in Sec. 6.6.2
In Figure 6.6, we used the expected calibration error (ECE) [START_REF] Guo | On calibration of modern neural networks[END_REF] to assess the quality of uncertainty quantification of the classifier. In this section, we formally define the ECE, along with the related visual tool of reliability diagrams [START_REF] Degroot | The comparison and evaluation of forecasters[END_REF], and present additional results using reliability diagrams. The confidence score assigned to the prediction given the covariate a is given as [START_REF] Guo | On calibration of modern neural networks[END_REF] p(a) = max_b p(b|a, θ). Note that a perfectly calibrated model p(b|a, θ) would have acc(B_k) = conf(B_k) for all k ∈ {1, . . . , K} in the limit of a sufficiently large data set.
D.6.1 Expected Calibration Error (ECE). ECE quantifies the amount of miscalibration by computing the weighted average of the differences between accuracy and confidence levels across the bins, i.e., ECE = Σ_{k=1}^K (|B_k|/n) |acc(B_k) - conf(B_k)|, as illustrated in the short sketch given below.
D.6.2 Reliability Diagrams. Since the ECE quantifies uncertainty by taking an average over the bins, it cannot provide insights into the individual calibration performance per bin. In contrast, reliability diagrams plot the accuracy acc(B_k) versus the confidence conf(B_k) as a function of the bin index k, hence offering a finer-grained understanding of the calibration of the predictor.
D.6.3 Additional Results. For the MNIST image classification problem considered in Section 6.6.2, reliability diagrams are reported in addition to the ECE results.
It is also noted that setting t = 1 is seen to yield underconfident predictions due to the presence of outliers, while a decrease in t leads to overconfident decisions due to the reduced expressiveness of t-logarithms. A proper choice of t leads to well-calibrated, robust predictions.
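The ECE and reliability-diagram quantities used in the calibration discussion above can be computed with a short routine. The sketch below follows the usual equal-width binning recipe (per-bin accuracy acc(B_k), per-bin confidence conf(B_k), weighted absolute gap); the function name, bin count and toy inputs are illustrative assumptions.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: weighted average over confidence bins of |acc(B_k) - conf(B_k)|
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()          # acc(B_k): fraction of correct predictions in the bin
            conf = confidences[in_bin].mean()     # conf(B_k): average confidence in the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# toy example: confidence scores p(y_hat|x) and whether each prediction was correct
conf = np.array([0.95, 0.90, 0.70, 0.60, 0.55, 0.80])
hit = np.array([1, 1, 0, 1, 0, 1])
print("ECE =", expected_calibration_error(conf, hit, n_bins=5))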
04098415
en
[ "spi.meca.geme", "info.info-ia" ]
2024/03/04 16:41:20
2021
https://hal.science/hal-04098415/file/FMANU-MD-20-1169.pdf
Yuan Liu email: [email protected] Guolei Zheng Nikita Letov email: [email protected] Yaoyao Fiona Zhao email: [email protected]
A Survey of Modeling and Optimization Methods for Multi-Scale Heterogeneous Lattice Structures
Keywords: modeling, optimization, multi-scale, heterogeneous lattice structures
This paper aims to provide a comprehensive review of the state-of-the-art modeling and optimization methods for multi-scale heterogeneous lattice structures (MSHLS) in order to further exploit the design freedom they offer. In this survey, a design process including optimization and modeling for MSHLS is proposed. Material composition and multi-scale geometric modeling methods for the representation of material and geometry information are separately discussed. Moreover, the optimization methods, including multi-scale and multi-material optimization design methods, as well as the simulation methods suitable for MSHLS, are respectively reviewed. Finally, the relationships, advantages and disadvantages of MSHLS modeling and optimization methods are summarized with discussion and comparison, which provides guidance for software developers and researchers, concerned with the design approaches and strategies currently available, to further take advantage of MSHLS to improve the performance and multifunctionality of products.
Introduction
Recently, Additive Manufacturing (AM) technologies, which fabricate parts by joining materials, usually layer by layer, have attracted great interest from both industry and academia. Compared to other manufacturing methods, such as machining or casting, AM processes have the following unique capabilities. Firstly, parts with extremely complex shapes can be built by AM processes without increasing fabrication cost. Secondly, AM technologies are suitable for processing multiple materials either simultaneously or sequentially; therefore, parts with complex material compositions can be fabricated by this manufacturing method. Thirdly, manufacturing preparation time can be substantially reduced, since the part is directly fabricated from its 3D model. These unique capabilities of AM technologies have brought great application potential in several major industries such as aerospace [START_REF] Angrish | A critical analysis of additive manufacturing technologies for aerospace applications[END_REF] and medical implant manufacturing [START_REF] Jardini | Cranial reconstruction: 3D biomodel and custom-built implant created using additive manufacturing[END_REF].
These methods aim to optimize the performance and/or other key product life-cycle considerations such as manufacturability, reliability, and cost with respect to the capabilities of AM. A recently published literature review [START_REF] Tang | A survey of the design methods for additive manufacturing to improve functional performance[END_REF] surveys most existing DfAM methods, such as topology optimization and inverse homogenization. However, these methods only focus on single-component design with a single type of material. This review also indicates that, in order to further improve the performance of AM-fabricated parts, structures with multiscale heterogeneous material distributions are a good choice. As shown in Fig. 1, the heterogeneity referred to in this paper has two aspects: heterogeneous structure (lattice cell and strut size and dimensions vary) but with a single material composition, and heterogeneous materials (multiple material constituents in the lattice structure). A heterogeneous material is composed of different constituent materials and exhibits continuously varying composition and/or microstructure. In this paper, we refer to lattice structures which are made of heterogeneous structures or heterogeneous materials as heterogeneous lattice structures. The heterogeneous information of such lattice structures can be described and categorized into composition and microstructure [START_REF] Kumar | An approach to modeling and representation of heterogeneous objects[END_REF]. The composition of a graded composite material can be described by the volume fractions of the individual constituents that compose the material [START_REF] Li | A Review on Functionally Graded Materials and Structures via Additive Manufacturing: From Multi -Scale Design to Versatile Functional Properties[END_REF]. Heterogeneity is often employed in design when both aspects of the resulting lattice structure vary across scales. The length-scale breakdown of multiscale structures with hierarchical architectures is shown in Fig. 2. On the macroscale (∼5 cm) (Fig. 2c), the bulk metallic structure is comprised of a network of hierarchical stretch-dominated octet unit cells which are designed to carry load via axial stress (Fig. 2d). Each stretch-dominated unit cell (Fig. 2e,f) from the hierarchical lattice network contains ∼200 μm hierarchical strut members (Fig. 2g) that are comprised of a network of stretch-dominated unit cells (Fig. 2h). These first-order unit cells (Fig. 2i) are comprised of microscale thin-walled hollow-tube nickel-phosphorus struts with thickness ranging from 50 to 700 nm at the lowest hierarchy (Fig. 2j). Therefore, a multiscale lattice structure usually has the following features: the overall structure can be further divided into multiple design scales, and the structures defined at an upper scale are made of material whose microstructure is defined at the lower scale. Compared to structures made of homogeneous materials or even Functionally Graded Materials (FGM) [START_REF] Zhang | Additive manufacturing of functionally graded objects: a review[END_REF], multi-scale heterogeneous lattice structures (MSHLS) provide more design freedom for designers to further control the distribution of the effective properties of materials on the macroscale by controlling both material compositions and their microstructures. Thus, the performance of this type of structure can be further improved compared to structures made of conventional materials.
The results of some recent primary research [START_REF] Sivapuram | Simultaneous material and structural optimization by multiscale topology optimization[END_REF][START_REF] Zhu | Two-scale topology optimization with microstructures[END_REF] have already proven that multiscale heterogeneous structures can exhibit better performance. This type of structure is extremely useful, especially for multifunctional purposes [START_REF] Yan | Two-scale optimal design of structures with thermal insulation materials[END_REF]. Even though the merits of MSHLS have already been unveiled, it is still difficult for designers to take full advantage of this type of structure in their product design. There are two major barriers: modelling and optimization. Among these two major barriers, modelling is the most important, as it is the key to linking fabrication, simulation and optimization. However, most existing commercial CAD software cannot be used to model lattice structures with heterogeneous material or structural variation. This is mainly due to the underlying geometric modeling methods used by the software, which cannot efficiently handle and manipulate heterogeneous material or property information. To deal with this issue, several heterogeneous object modelling methods [START_REF] Sigmund | Materials with prescribed constitutive parameters: An inverse homogenization problem[END_REF] have been developed. However, most of these methods only focus on a single scale, where only the distribution of material compositions is considered. These methods cannot be used to describe the microstructure distribution of materials on multiple design scales. Moreover, the size of the geometric models represented by these methods is usually large. On one hand, MSHLS provide great design freedom for designers. On the other hand, they also bring challenges for optimization. There are two main challenges in optimizing MSHLS. Firstly, there are a large number of design parameters that need to be considered. These parameters come from different design scales. If designers want to solve the defined optimization problem directly, it usually requires high computational resources. Secondly, the design parameters from different design scales may be coupled together, which makes the optimization problem even harder to solve. The two major issues summarized above must be solved in order to allow designers to take further advantage of MSHLS. The aim of this review is to reflect on the MSHLS design process, detailing its various phases. Indeed, the current literature on the design of MSHLS is mainly focused on discussing a specific phase of the process, lacking a comprehensive view. To create such a view, this work has been conceived to build a link among the main research findings available in the literature related to the optimization design and modeling of MSHLS. In general, this review can be beneficial for various experts working in the AM field and provides a comprehensive review of the state-of-the-art approaches to further take advantage of MSHLS to improve the performance of designed products. For example, designers could retrieve indications for better organizing their decisional process when looking for the most suitable MSHLS considering functional and manufacturing constraints.
A heterogeneous object is commonly modeled in a space that combines geometric and material information [14]. Within such a modeling space, every point in the base space is attached with a vector describing its material composition and properties; symbolically, a point can be described as a pair (p, m), where p is a point of the geometric domain in three-dimensional Euclidean space and m = (m_1, m_2, ..., m_n) is the material vector whose component m_i gives the fraction of the i-th constituent material.
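A minimal data-structure sketch of this point-plus-material-vector description is given below; the class and field names are our own illustration and not a prescribed format from the cited modeling frameworks.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class MaterialPoint:
    # a point of a heterogeneous object: geometric coordinates plus a material composition vector
    position: Tuple[float, float, float]    # p, a point of the geometric domain in E^3
    fractions: Tuple[float, ...]            # m = (m_1, ..., m_n), volume fractions of the constituents

    def __post_init__(self):
        if any(f < 0 for f in self.fractions) or abs(sum(self.fractions) - 1.0) > 1e-9:
            raise ValueError("volume fractions must be non-negative and sum to 1")

# a point made of 70% of constituent A and 30% of constituent B
p = MaterialPoint(position=(1.0, 0.5, 0.0), fractions=(0.7, 0.3))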
Without considering the differences in the definition scales of heterogeneous objects, heterogeneous objects regarded as components are analyzed, recognized and defined uniformly at three levels: integration, element and information (Fig. 3). Among these, the element level is subdivided into two sub-levels, namely the component element and the material unit, and the information level consists of geometric, material and property information [START_REF] Liu | Material-Unit Network for Multi-Material-Property and Multiscale Components[END_REF]. In general, heterogeneous object modeling embraces two fundamental processes: material composition modeling and geometric modeling [START_REF] Zhang | Additive Manufacturing of Functionally Graded Material Objects: A Review[END_REF]. Geometric modeling is concerned with the geometric representation of the objects, while material modeling is targeted at modeling the material composition distributions defined over the geometric domain. Therefore, material composition and multi-scale geometric modeling methods for the representation of material and geometry information are separately discussed in Subsections 2.1 and 2.2.
Fig. 3 Multi-material-property configuration [START_REF] Liu | Material-Unit Network for Multi-Material-Property and Multiscale Components[END_REF].
2.1 Material composition modeling
2.1.1 Decomposition-based models
Based on the idea of space segmentation, in decomposition-based models the space body is divided into grid elements, such as voxel elements [START_REF] Hiller | Design and analysis of digital materials for physical 3D voxel printing[END_REF][START_REF] Huang | A digital material design framework for 3D-printed heterogeneous objects[END_REF][START_REF] Leung | Digital Material Design Using Tensor-Based Error Diffusion for Additive Manufacturing[END_REF] and volume mesh elements [START_REF] Wang | Finite element-based approach to modeling heterogeneous objects[END_REF][START_REF] You | Adaptive meshing for finite element analysis of heterogeneous materials[END_REF], and the material composition is defined within each element. The material information of a voxel element is stored at the voxel center point, which is suitable for irregular distributions of the material. As shown in Fig. 4, 2D [START_REF] Andreassen | How to determine composite material properties using numerical homogenization[END_REF] and 3D [START_REF] Dong | A 149 line homogenization code for three-dimensional cellular materials written in matlab[END_REF] heterogeneous lattice structures with multi-material composition are represented by voxel-based models. This representation not only supports efficient querying of the material composition, but also facilitates visualization. However, the pitfall of the voxel model is that the accuracy of the method is directly related to the voxel resolution. Moreover, voxel methods are inexact in terms of geometric and material accuracy. In contrast, in the volume mesh-based model the material information is stored at the grid nodes, and the material information of non-grid points is obtained by interpolation functions. Therefore, the representation is a more compact data structure, which alleviates the huge storage problem of voxel models to a certain extent.
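The decomposition-based idea can be sketched as a dense voxel grid storing one volume-fraction vector per cell, as below; the grid resolution, the number of materials and the linear gradation used in the example are illustrative assumptions only.

import numpy as np

class VoxelMaterialModel:
    # decomposition-based model: one volume-fraction vector stored per voxel center
    def __init__(self, shape, n_materials, cell_size=1.0):
        self.cell_size = cell_size
        self.fractions = np.zeros(shape + (n_materials,))   # fractions[i, j, k, :] = (m_1, ..., m_n)

    def set_linear_gradation(self, axis=0):
        # illustrative two-material composition varying linearly along one axis
        n = self.fractions.shape[axis]
        ramp = np.linspace(0.0, 1.0, n).reshape([-1 if a == axis else 1 for a in range(3)])
        self.fractions[..., 0] = 1.0 - ramp
        self.fractions[..., 1] = ramp

    def composition_at(self, point):
        # query the composition stored in the voxel containing a Cartesian point
        idx = tuple(int(c // self.cell_size) for c in point)
        return self.fractions[idx]

model = VoxelMaterialModel(shape=(32, 32, 32), n_materials=2, cell_size=0.5)
model.set_linear_gradation(axis=0)
print(model.composition_at((8.0, 3.0, 3.0)))   # roughly half/half midway along the x direction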
Representation of material composition using these decomposition-based models depends on the resolution of the mesh or voxels, which may not conform to the material distribution, leading to discretization error. In addition, any change in the material function would lead to rediscretization of the whole geometry to re-approximate the new material distribution.
2.1.2 Set-based models
According to the geometric representation, set-based models are divided into two types. The first one is the rm-set, which is an extension of solid modeling to handle heterogeneous objects by using r-sets (a form of B-rep modeling) with information about material variation. This geometry model is further decomposed into atlases over which material variations are mapped. In order to define FGM models, a set of Boolean operators [START_REF] Sun | Reasoning Boolean operation based modeling for heterogeneous objects[END_REF] was defined for modifying the geometry and composition stored in the data structure; therefore, heterogeneous objects with more complex geometry and material distributions are obtained. Another type of set-based model is based on implicit functions, such as level sets [START_REF] Sethian | Structural Boundary Design via Level Set and Immersed Interface Methods[END_REF], R-functions [START_REF] Biswas | Heterogeneous material modeling with distance fields[END_REF] and convolution surfaces [START_REF] Gupta | Heterogeneous object modeling with material convolution surfaces[END_REF], to generate exact geometric data representations. These models use the functional representation (F-Rep) as the basic model for both the point set geometry and the material distribution. Essentially, the implicit function is used to model spatial partitions, and the function determines whether a given point is "inside" or "outside" a subset. When a point is within the subset, the corresponding material distribution is then assigned to that point/point set. Tsukanov and Shapiro [START_REF] Tsukanov | Meshfree modeling and analysis of physical fields in heterogeneous media[END_REF] proposed the construction of field models using sampled distance fields and interpolated physical fields by way of extended R-functions; however, the limitations of R-functions result in difficulties in modeling complex-shaped heterogeneous objects. A level set based scheme was proposed by Wang et al. [START_REF] Wang | A level-set based variational method for design and optimization of heterogeneous objects[END_REF], which adopted a variational model as the objective function to locate any point in a material region of well-defined gradient or on the boundary edges and surfaces of discontinuities. The set of discontinuities was represented implicitly, using a multi-phase level set model. The compact and exact data representation makes the level set model suitable for multi-material structural representation. In addition, the parameterized level set method can be used for CSG (Constructive Solid Geometry) type feature-based CAD modeling. The rectangular frame and the interior hole are the two zero-value level set contours, i.e. the material source profiles. All results demonstrated in Fig. 5 have realized smooth material transitions between the source profiles [START_REF] Liu | Level set-based heterogeneous object modeling and optimization[END_REF].
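A small function-based (F-Rep style) sketch in the spirit of the distance-field and level-set schemes above is shown next: an implicit function classifies a point as inside or outside, and the composition is blended between two material source profiles according to normalized distances. The annular geometry and the blending rule are illustrative assumptions.

import math

def circle_sdf(x, y, cx, cy, r):
    # signed distance to a circle: negative inside, positive outside
    return math.hypot(x - cx, y - cy) - r

def annulus_material(x, y, r_in=1.0, r_out=3.0):
    # membership query plus a two-material composition graded between the
    # inner contour (material A) and the outer contour (material B)
    d_out = circle_sdf(x, y, 0.0, 0.0, r_out)    # distance to the outer boundary
    d_in = -circle_sdf(x, y, 0.0, 0.0, r_in)     # distance to the inner hole boundary
    if d_out > 0.0 or d_in > 0.0:
        return None                              # outside the solid region
    w = abs(d_in) / (abs(d_in) + abs(d_out))     # 0 at the inner contour, 1 at the outer contour
    return {"material_A": 1.0 - w, "material_B": w}

print(annulus_material(2.0, 0.0))   # graded composition between the two source profiles
print(annulus_material(0.5, 0.0))   # None: the point lies in the interior hole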
2.1.3 Control point-based models
In control point-based models, the material information is stored in the control points of Bézier, B-spline and NURBS parametric curves, surfaces and volumes, and a representation based on parametric control points is obtained [START_REF] Yang | A B-spline-based approach to heterogeneous objects design and analysis[END_REF][START_REF] Samanta | Optimized normal and distance matching for heterogeneous object modeling[END_REF][START_REF] Sasaki | Adaptive direct slicing of volumetric attribute data represented by trivariate B-spline functions[END_REF][START_REF] Yoo | Heterogeneous object modeling using the radial basis functions[END_REF]. Control point-based models have some appealing properties. They are compact in both geometry and material representations. Given the parametric coordinates (u, v, w), the material composition of a point can also be efficiently interrogated. In addition, this representation method can effectively represent complex (2D- or 3D-dependent) material distributions. Local modifications of both geometries and material definitions are also straightforward. The drawback of this method is that it relies heavily on spatial parameterizations, and for arbitrary 3D objects such parameterizations remain a rather non-trivial task. Kou et al. [START_REF] Kou | A hierarchical representation for heterogeneous object modeling[END_REF] proposed a more general definition for 1D heterogeneous features (Fig. 8), described in Eq. (2):
m(P) = Σ_i W_i(P) m(P_i), (2)
where P is an arbitrary point on the 1D feature, P_i is a constructive point (e.g., the starting point and ending point of a line, or the control points of a B-spline curve), and W_i is the blending weight of the i-th constructive point in the material gradation. For different types of 1D features, the weight generation methods might be different. Similarly, these approaches could be extended to 2D and 3D features. To more clearly compare the methods that investigate the representation abilities of material heterogeneity, both the advantages and disadvantages of existing models for heterogeneous materials are summarized in Table 1. It serves as a reference tool for selecting appropriate models in downstream applications. As heterogeneous materials become more pervasive, it is foreseen that materials modeling modules will be embedded in future CAD systems so that the geometry and materials of a product can be designed concurrently [START_REF] Huang | A multiscale materials modeling method with seamless zooming capability based on surfacelets[END_REF]. Rosen et al. [START_REF] Huang | Material feature representation and identification with composite surfacelets[END_REF][START_REF] Jeong | Microstructure Feature Recognition for Materials Using Surfacelet-Based Methods for Computer-Aided Design-Material Integration[END_REF][START_REF] Wang | Multiscale Heterogeneous Modeling with Surfacelets[END_REF] first proposed a new multi-scale geometric and materials modeling method that uses a surfacelet-based implicit representation to efficiently capture internal and boundary information of materials. It serves as the foundation for modeling structure-property relationships for materials design. Therefore, multi-scale geometric modeling methods are reviewed in the following section.
2.2 Multi-scale geometric modeling
2.2.1 Multi-scale problem
Before approaching the topic of multi-scale geometric modeling, it is important to cover issues emerging from multi-scale mathematical models describing reality.
These issues come from the fact that different laws of physics act on different scales, and different mathematical models are used to describe what happens on these scales and how different entities interact with each other on them. The main challenge of multi-scale modeling is the lack of a unified theory that could correctly describe geometry from its fine structure (~10^-9 m) up to the boundary conditions and processes present in it (~10^0 m) [START_REF] Raabe | Multi-scale modeling in materials science and engineering[END_REF]. A common example of a multi-scale problem comes from theoretical physics and lies in finding a theory of everything, a hypothetical ultimate framework that would allow describing physical relations on every possible scale. Nowadays, in physics there are two major models describing our world: quantum field theory (which comes from merging quantum mechanics with special relativity) and general relativity (which extends special relativity to describe gravitation); there is nothing similar between these models, as the first one works well for nano- and lower scales while the second one works for macro- and higher scales [START_REF] Schombert | Quantum gravity[END_REF]. The ability to model at meso- and macro-scale is crucial in many research fields involving computer graphics and geometric modeling. For example, Prada et al. [START_REF] Prada | Superficial 3D mesh generation process using multimedia software for multiscale bone analysis[END_REF] apply multi-scale modeling concepts to the 3D mesh generation process for multiscale bone analysis. Engquist et al. [START_REF] Engquist | Multiscale methods in science and engineering[END_REF] identified the need for an appropriate multi-scale geometric modeling tool for porous materials. Predictive nonlinear theories are being applied for multiscale modeling of heterogeneous materials [START_REF] Matouš | A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials[END_REF]. Multi-scale geometric modeling methods are also critical in medical research. For example, Youssef [START_REF] Youssef | Parallelization of a bio-inspired computational model for the simulation of 3-D multicellular tissue growth[END_REF] introduced a bio-inspired framework for the simulation of 3D multicellular tissue growth. However, there is no geometric modeling tool to implement this framework due to high geometric complexity. The need for such a geometric modeling approach for modeling a human heart both as a whole and in detail (as even small defects are crucial) has been identified [START_REF] Sacks | On the need for multi-scale geometric modelling of the mitral heart valve[END_REF]. Another common application of multi-scale geometric modeling is the visualization of online geographic maps: when a user zooms in to a map, e.g. Google Maps, more and more details appear, to the extent that even 3D shapes of buildings are rendered [START_REF]Developer Guide: Maps Static API[END_REF].
2.2.2 Geometric modeling of heterogeneous lattice structures
Geometric modeling in engineering is mainly used for visualizing engineering systems such as parts, assemblies, etc., and is supported by computer-aided design (CAD) software.
While conventional CAD software is normally sufficient for engineering design in industry, there is no geometric modeling tool able to appropriately represent complex heterogeneous lattice structures, due to the inability of existing geometric modeling tools to represent a model at multiple scales, which forms a research gap that is yet to be filled. Another issue is the computational cost of such geometric modeling tools, because heterogeneous lattice structures have high geometrical complexity, resulting in high computational needs when using conventional methods [START_REF] Dong | A survey of modeling of lattice structures fabricated by additive manufacturing[END_REF]. Multi-scale modeling becomes a crucial add-on to conventional geometric modeling tools when dealing with lattice structures. It is required to consider not only the whole structure at its macro-scale, but also each joint and strut of the lattice as they form at the meso-scale [START_REF] Zhang | Extended multiscale finite element method for mechanical analysis of heterogeneous materials[END_REF].
2.2.3 Level of detail
In computer graphics, a far-away object in a virtual environment appears blurred (similar to the effect people with myopia experience), which means that there is no need to represent all details of the object on the screen [START_REF] Marschner | Fundamentals of computer graphics[END_REF]. However, the details appear to be more concrete when approaching the object or zooming into it. Such an effect of manipulating the level of detail (LOD) can be achieved, for example, by reducing the polygon count for distant objects by vertex and edge removal [START_REF] Luebke | Level of detail for 3D graphics[END_REF]. However, decreasing the LOD of a lattice structure down to a certain level can result in complete homogenization of the lattice, i.e. removing from view all geometrical features corresponding to lattice structures [START_REF] Ptc | Material homogenization for lattice simulation in additive manufacturing[END_REF]. Normally, LOD is manually or automatically associated with CAD features of solid bodies, e.g. extrusions, revolutions, etc., as illustrated in Fig. 9 [START_REF] Borrmann | Multi-scale geometric-semantic modeling of shield tunnels for GIS and BIM applications[END_REF]. However, lattice structures are designed not with CAD features but with nodes connected by struts, which are not so well defined from the CAD perspective [START_REF] Vayre | Designing for additive manufacturing[END_REF]. Moreover, in a heterogeneous lattice even the periodicity of the lattice is not obvious. As mentioned, simplifying a lattice structure by reducing the LOD of its model eventually results in complete homogenization of the structure. Such a model does not provide the user with sufficient information about structure and topology. Another way to simplify a lattice structure is to represent it as a beam-element model. This approach, however, omits detailed geometrical information that can be important for an appropriate analysis of a lattice structure, especially in node regions, as beam models simply do not take the high material concentration at nodes into account [START_REF] Tang | A survey of the design methods for additive manufacturing to improve functional performance[END_REF]. Setting up a limit for the maximum distance at which an object is shown on screen is a common practice in computer graphics. This limit is called the draw distance [START_REF] Connell | 3D for Graphic Designers[END_REF].
It is mainly used for optimizing the overall performance of a 3D model [START_REF] Limper | The pop buffer: Rapid progressive clustering by geometry quantization[END_REF]. The described model complexity reduction methods have proven their effectiveness in the video game industry when developing game engines, such as AnvilNext developed by Ubisoft Montreal, in particular for rendering 2D or 3D graphics and providing a physics engine [START_REF] Gregory | Game engine architecture[END_REF]. Some other geometric modelling approaches also can be adapted from game engines that are being used for videogames design. Considering applying the LOD concept and multi-scale modeling to lattice structures, Liu and Shapiro [START_REF] Liu | Multiscale shape-material modeling by composition[END_REF] propose a framework for multi-scale modeling of parts with lattice structures. The framework's first scale describes the shape-material model; the second scale describes identity mapping for the solid regions of the part, as well as it includes downscaling and neighborhood functions for lattice structures; the third scale includes downscaling and neighborhood functions for lattice structures in case beams of the lattice structures are made of the fine scale structures. Such division into scales and phases essentially corresponds to the LOD concept, as the complexity of the solid model iteratively rises with downscaling. Voxel modeling One more topic worth considering when discussing multi-scale geometric modelling is application of multi-scale voxel modeling. The word 'voxel' derives from combining words 'volumetric' and 'pixel' and it is defined as 'a unit of volume containing the value of the corresponding raster element in a 3D space' [START_REF] Kaufman | Volume graphics[END_REF]. Voxelized objects allow the same set of operations with polygonal objects [START_REF] Cohen-Or | Fundamentals of surface voxelization[END_REF]. Voxel-based object simplification has been used for eliminating high-frequency details of the object ever since the introduction of voxels [START_REF] He | Voxel based object simplification[END_REF]. Similar voxel-based approaches are used for simplification and repair of polygonal models [START_REF] Nooruddin | Simplification and repair of polygonal models using volumetric techniques[END_REF]. Voxels have an advantage in terms of downsampling and acquisition of real-world data, i.e. any geometrical complexity is feasible which, however, comes at a cost due to rise of computational resources required for rendering [START_REF] Laine | Efficient sparse voxel octrees[END_REF]. Note that as lattice structures cannot be produced without AM techniques, there is no need in voxel size higher than 3D-printer resolution [START_REF] Telea | Voxel-based assessment of printability of 3D shapes[END_REF]. Voxel models do not handle zooming efficiently [START_REF] Laine | Efficient sparse voxel octrees[END_REF]. One of the most popular voxel-based simplification methods involves using sparse voxel octrees which are based on generating multi-scale voxels which could be visible or invisible depending on the resolution and size of the screen. Moreover, recent researches show that octree-based neural networks can be applied for 3D shape analysis and learning and thus can be adapted for feature recognition. For example, Wang et al. 
introduced octree-based convolutional neural networks (CNN) for 3D shape analysis [START_REF] Wang | O-cnn: Octree-based convolutional neural networks for 3d shape analysis[END_REF]. Liu et al. [START_REF] Liu | Learning a hierarchical latent-variable model of 3d shapes[END_REF] proposed an octree-based variational shape learning model for the same purpose. Wu et al. [START_REF] Wu | 3d shapenets: A deep representation for volumetric shapes[END_REF] developed a Bayesian volumetric shape recognition method based on voxel octrees. Recent research on voxel-based surface approximation investigates the possibility of applying the previously discussed LOD concept to voxel models. The results are encouraging: a 12 GB voxel model can be approximated down to 2 GB while still visualizing the details accurately [START_REF] Marcus | Level-of-Detail Independent Voxel-Based Surface Approximations[END_REF]. However, in this method the depth of rendering (which is required for fully adaptive multi-scale voxel modeling [START_REF] Tian | Adaptive voxels: interactive rendering of massive 3D models[END_REF]) is assumed to be given and the voxelization algorithm is not adaptive, which prevents genuine multi-scale modeling. Moreover, current approaches are unable to represent crucial features of a part at a larger scale with voxels [START_REF] Seemann | Simplification of Multi-Scale Geometry using Adaptive Curvature Fields[END_REF] and become significantly slow at a smaller scale [START_REF] Kauker | VoxLink-Combining sparse volumetric data and geometry for efficient rendering[END_REF]. A lattice structure is essentially a structure of one size whose topology is defined at a significantly smaller size. Thus, voxelization of lattices introduces visible discreteness in the model, as illustrated in Fig. 10, which degrades the user experience. Voxels normally have a cubic shape, which is explained by rendering simplicity: a cubic voxel can be defined solely by its position in 3D space (assuming that every voxel has the same size). The cubic shape also fits simulations well, e.g. a cube is a common unit element for stress analysis within structures, as noted in Fig. 11a, since it easily allows modeling of the normal stresses σ_x, σ_y and σ_z, as well as the shear stresses τ_xy, τ_xz, τ_yz, τ_yx, τ_zx and τ_zy. However, representing the cylindrical shape illustrated in Fig. 11b as a combination of cubes introduces an obvious distortion, which only decreases as the number of cubes approaches infinity and is therefore not feasible from a computational point of view. Similarly, cylindrical-sector rather than cubic unit elements are used for simulations of cylindrical shapes, which allows modeling of the normal stresses σ_rr, σ_θθ and σ_zz, as well as the shear stresses σ_θz and σ_zθ [START_REF] Xu | A three-dimensional soil-water coupled FE analysis of hollow cylinder test concerning non-uniform deformation[END_REF]. Strand [START_REF] Strand | Surface skeletons in grids with non-cubic voxels[END_REF] investigated the use of non-cubic voxels and tested voxel shapes such as body-centered cubic (BCC) and face-centered cubic (FCC) grids. Note that BCC is similar to a truncated octahedron and FCC is similar to a rhombic dodecahedron. Non-cubic grids appear sparser than cubic grids, which provide the most information about the structure. However, there is no evidence of research on combining different voxel shapes within one model.
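As a minimal illustration of the sparse-voxel-octree idea discussed above, the sketch below stores only the occupied voxels of a 2^depth cubic grid and can report occupancy at any coarser level, which is essentially what LOD-dependent voxel rendering needs. The class and its interface are hypothetical and not taken from the cited implementations.

```python
import random

class SparseVoxelOctree:
    """Minimal sparse voxel octree over a (2**depth)^3 grid.
    Children are stored in dicts keyed 0..7, so empty space costs nothing."""

    def __init__(self, depth: int):
        self.depth = depth
        self.root = {}

    def insert(self, x: int, y: int, z: int) -> None:
        """Mark one voxel as occupied."""
        node = self.root
        for level in range(self.depth - 1, -1, -1):
            idx = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
            if level == 0:
                node[idx] = True          # leaf voxel
            else:
                node = node.setdefault(idx, {})

    def occupied_at_level(self, level: int):
        """Occupancy at `level` subdivisions (0 = whole grid as one cell,
        depth = full voxel resolution). A coarse cell is reported if any
        occupied voxel lies inside it."""
        cells = set()
        if not self.root:
            return cells

        def walk(node, x, y, z, d):
            if d == level:
                cells.add((x, y, z))
                return
            for idx, child in node.items():
                walk(child if isinstance(child, dict) else {},
                     (x << 1) | ((idx >> 2) & 1),
                     (y << 1) | ((idx >> 1) & 1),
                     (z << 1) | (idx & 1),
                     d + 1)

        walk(self.root, 0, 0, 0, 0)
        return cells

if __name__ == "__main__":
    tree = SparseVoxelOctree(depth=4)              # 16 x 16 x 16 grid
    random.seed(0)
    for _ in range(200):
        tree.insert(random.randrange(16), random.randrange(16), random.randrange(16))
    for lvl in range(5):
        print(f"level {lvl}: {len(tree.occupied_at_level(lvl))} occupied cells")
```

Combining different voxel shapes within one model, as suggested above, would amount to storing a per-cell shape tag in such a structure, which is one way the identified research gap could be explored.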
Such research would contribute to not only AM of heterogeneous lattice structures but also to 4D printing technique, i.e. AM with smart materials [START_REF] Sossou | Design for 4D printing: A voxel-based modeling and simulation of smart materials[END_REF]. There is evidence that volumetric meshes can support multi-scale modeling. Volumetric meshes are similar to polygon meshes that are used in the majority of CADsoftware with the difference in discretizing the whole solid body and not only its surface, i.e. the whole body is considered to be subdivided with polyhedrons instead of polygons [START_REF] Rom | Volume Mesh Generation for Numerical Flow Simulations Using Catmull-Clark and Surface Approximation Methods[END_REF]. For instance, volumetric mesh models of macromolecules are able to provide sufficient visual information in chemistry [START_REF] Feng | Multiscale geometric modeling of macromolecules II: Lagrangian representation[END_REF]. However, the computation of curvature values of polygon and polyhedron meshes is non-trivial due to their discrete nature. Moreover, a volumetric finite element is always convex since it is bounded by a finite number of planes. Therefore, FEV introduces even more distortion when applied to non-convex geometric shapes such as lattice structures and surfaces with genus 1 and higher [START_REF] Tewari | Meshing genus-1 point clouds using discrete one-forms[END_REF]. This implies the need in a proper meshing algorithm that takes convexity and curvature into the account and affects quality of meshed models, especially ones requiring multi-scale modeling. Note that non-convex volume selection is a topic of interest in geometric modeling [START_REF] Fuchs | Non-convex polyhedral volume of interest selection[END_REF][START_REF] Gortler | Discrete one-forms on meshes and applications to 3D mesh parameterization[END_REF]. Table 2 compares polygon mesh methods with voxel-based method when applied to MSHLS. Note that the table does not include F-Rep method as it is incompatible with other modeling formats and cannot store topology information [START_REF] Pasko | Procedural function-based modelling of volumetric microstructures[END_REF]. The hybrid modeling approach introduced by Tang et al. [START_REF] Tang | A hybrid geometric modeling method for lattice structures fabricated by additive manufacturing[END_REF] has certain features (e.g. using boundary representation for lattice struts) that can assuage the disadvantages of both geometric modeling methods described in Table 2. Note that as the size of a lattice structure approaches nano-scale, new properties arise, some of which are of quantum nature [START_REF] Bauer | Nanolattices: an emerging class of mechanical metamaterials[END_REF]. As there is no tool that can efficiently produce a lattice structure with tolerance down to its nanoscale, this work does not support geometric modeling of lattice structures at nanoscale. Optimization methods of heterogeneous lattice structures Lattice structures are often produced to solve engineering optimization problems and AM itself prompts research in computational optimization [START_REF] Tang | A hybrid geometric modeling method for lattice structures fabricated by additive manufacturing[END_REF]. 
In heterogeneous lattice structures, optimization of the structural topology and of the thickness of lattice struts allows given functional requirements to be satisfied [START_REF] Tang | Bidirectional Evolutionary Structural Optimization (BESO) based design method for lattice structure to be fabricated by additive manufacturing[END_REF]. Heterogeneity of a structure provides unique properties that are required to be optimal and that cannot be provided by homogeneous structures, e.g. gradual elasticity or graded structural stiffness [START_REF] Martínez | Procedural voronoi foams for additive manufacturing[END_REF]. Research on optimizing the internal structure to provide an optimal strength-to-weight ratio is identified as crucial, for example, in the aerospace sector [START_REF] Vasiliev | Anisogrid composite lattice structures-Development and aerospace applications[END_REF]. Topology optimization methods are often applied to lattice structures [START_REF] Liu | Current and future trends in topology optimization for additive manufacturing[END_REF]. For example, Zhu et al. [START_REF] Zhu | Two-scale topology optimization with microstructures[END_REF] proposed a two-scale topology optimization framework of microstructures. One way to optimize a lattice structure is by adding porosity to the internal structure. Moreover, the approaches used for porosity generation (such as those based on Voronoi tessellation [START_REF] Lu | Build-to-last: strength to weight 3D printed objects[END_REF]) produce a tessellation that is visually similar to the tessellation of cells in a living organism. Based on this, it is crucial to analyze the multi-scale and multi-material optimization methods for heterogeneous lattice structures covered in Subsections 3.2 and 3.3, respectively, as well as the ability to perform computational simulation on them, as covered in Subsection 3.1.

Simulation methods of multi-scale heterogeneous lattice structures

This part of the design workflow evaluates the performance of the MSHLS through simulation methods in order to support design optimization. The simplest way is to directly apply the rule of mixtures, where a property of the heterogeneous composite is considered as a linear or non-linear combination of its base materials [START_REF] Kim | On the rule of mixtures for the hardness of particle reinforced composites[END_REF][START_REF] Sola | Functionally graded materials for orthopedic applications -an update on design and manufacturing[END_REF]. Compared with other ways of evaluating the effective properties of heterogeneous objects, the rule of mixtures is the simplest and most straightforward method. The effective property X at a point with base-material property values (X_1, X_2, ..., X_n) and corresponding composition (volume) fractions ρ_1, ρ_2, ..., ρ_n can be calculated by Eq. (3):

X = ρ_1 X_1 + ρ_2 X_2 + ... + ρ_n X_n    (3)

However, rules of mixtures are not accurate and flexible enough for MSHLS. Some numerical analysis methods suitable for MSHLS, including the physics-based method, the homogenization method, the finite element analysis (FEA) method, and isogeometric analysis (IGA), are reviewed as follows.

Physics-based method

There are many established methods and implementations for simulating deformable soft bodies. Considering the large deformations and relatively low stiffness of the materials involved, the physics-based dynamics are often significant.
When our proposed MSHLS consist of multiple materials, it's important to consider deformation simulation for further application such as feature films, video games, and virtual surgery, among others. Much of the development in simulating soft bodies has been driven by the computer graphics community. In this method each voxel is modeled as a lattice point with mass and rotational inertia. Voxels are connected (from their centers) by three-dimensional beam elements with appropriate translational and rotational stiffness leading to realistic deformation under applied forces and moments. These beams form a 3D lattice frame, which acts as a backbone (or a control structure) of the whole shape. The frame is what holds the multi-material objects together and that governs the deformation of the whole geometry. Based on this method Sossou G et al. [START_REF] Sossou | Design for 4D printing: A voxel-based modeling and simulation of smart materials[END_REF][START_REF] Sossou | Design for 4D printing: Modeling and computation of smart materials distributions[END_REF] developed a computational design tool, which has been implemented in the Rhinoceros® add-on Grasshopper®(GH). Homogenization method The homogenization usually refers to a way to replace the composite with a kind of equivalent material model, which can overcome the difficulty in the analysis of the boundary value problem with high heterogeneities. It is used to obtain the effective properties of homogenized material for periodic heterogeneous continuous media in many physical and engineering applications [START_REF] Dong | A survey of modeling of lattice structures fabricated by additive manufacturing[END_REF]. Homogenization methods can also be applied to calculate the effective properties of the heterogeneous objects at any points [START_REF] Allaire | Homogenization of a conductive, convective, and radiative heat transfer problem in a heterogeneous domain[END_REF][START_REF] Hassani | A review of homogenization and topology optimization I -Homogenization theory for media with periodic structure[END_REF]. It can be applied to the cells with any arbitrary material compositions and can also be applied for the calculation of several different types of material effective properties including stiffness, heat transfer coefficients and Poisson's ratio. This method is applied for material effective properties simulation of lattice cell with geometric heterogeneity [START_REF] Xu | Design of lattice structures with controlled anisotropy[END_REF] but also material composition heterogeneity [START_REF] Andreassen | How to determine composite material properties using numerical homogenization[END_REF][START_REF] Dong | A 149 line homogenization code for three-dimensional cellular materials written in matlab[END_REF]. Based on the calculated effective properties of the lattice unit, the overall performance of designed objects can be easily evaluated. A typical procedure of simulation of solidlattice hybrid structure with homogenization method has been introduced by Dong [START_REF] Dong | Design and optimization of solid lattice hybrid structures fabricated by additive manufacturing[END_REF]. However, this method has its own limitations. It cannot be applied to the structure based on heterogeneous density gradients, and it is not easy to manage in case of complex geometries. 
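As a rough numerical companion to the rule of mixtures in Eq. (3) and to the homogenization discussion above, the sketch below computes the Voigt (arithmetic, i.e. rule-of-mixtures) and Reuss (harmonic) bounds on the effective Young's modulus of a voxelized two-material unit cell. The cell geometry and material values are made up for illustration; a real homogenization of a lattice cell would solve a periodic boundary value problem and land between these bounds.

```python
import numpy as np

def effective_modulus_bounds(voxels: np.ndarray, e_solid: float, e_void: float):
    """voxels: boolean array, True where the stiff phase sits."""
    rho = voxels.mean()                                   # volume fraction of the stiff phase
    e_voigt = rho * e_solid + (1.0 - rho) * e_void        # rule of mixtures, cf. Eq. (3)
    e_reuss = 1.0 / (rho / e_solid + (1.0 - rho) / e_void)
    return e_voigt, e_reuss

if __name__ == "__main__":
    # Hypothetical 20^3 cell: three orthogonal struts meeting at one corner.
    n = 20
    x, y, z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
    t = 3  # strut thickness in voxels (assumed)
    strut = ((x < t) & (y < t)) | ((y < t) & (z < t)) | ((x < t) & (z < t))
    ev, er = effective_modulus_bounds(strut, e_solid=2.0e9, e_void=1.0e6)
    print(f"volume fraction = {strut.mean():.3f}, Voigt = {ev:.3e} Pa, Reuss = {er:.3e} Pa")
```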
FEA-based method FEA-based simulation method is usually used in interactive inverse design method to create continuous heterogeneous material distributions on 1D beam [START_REF] Yu | A Computational Method for the Design of an Additively Manufactured Personalized Artificial Spinal Disc With Physiological Stiffness Under Multiple Loading Conditions[END_REF], truss element with truss-like lattice structure [START_REF] Lumpe | Computational Design of 4D Printed Shape Morphing Multi-State Lattice Structures[END_REF] or 3D volumetric meshes with arbitrary geometry [START_REF] Xu | Interactive material design using model reduction[END_REF][START_REF] Bickel | Design and fabrication of materials with desired deformation behavior[END_REF][START_REF] Bickel | Capture and modeling of non-linear heterogeneous soft tissue[END_REF]. The material distributions are designed to conform to prescribed displacements and internal elastic forces at the selected vertices of the 1D element or 3D mesh [START_REF] Fang | Geometry-based direct simulation for multi-material soft robots[END_REF]. Such a capability has numerous applications as many mechanical components, structures, and mechanisms have to produce predictable displacements under known forces or pressure distributions. With smooth varying heterogeneous material distributions, multiple properties (functions) can be obtained in the same object, and high-stress regions at the material boundaries can be avoided. The method is applicable not only to small deformations, linear mesh elements but also to nonlinear materials under large deformations. Each mesh element is assumed to be homogeneous and made of an isotropic material parameterized by Young's modulus and Poisson's ratio. Two applications of FEA-based simulation aided design with 1D truss element and 3D tetrahedral mesh, respectively are shown in Fig. 12. FEA method with 1D element has a moderate computational cost and can be used for MSHLS with heterogeneous density gradients. Its limitations are the following: when the strut is not thin enough, it is difficult to provide accurate results and the ability to predict the stress distribution at struts joints is poor. While, FEA method with 3D mesh is of high computation cost and time consuming. IGA-based method A common approach to the analysis and optimization of metamaterials and lattices is multiscale simulation through homogenization of unit cell behavior. However, due to the functional grading and curved topology, which lead to non-periodicity of the microstructure, as well as the nonlinearity of a soft lattice, common homogenization approaches cannot be applied, and the lattice needs to be simulated at full scale. In addition, nonlinear simulation based on continuum mechanics and 3D finite elements is not feasible due to the huge computational effort involved and linear modelling with 3D truss elements will not capture the nonlinear behavior of a soft lattice [START_REF] Weeger | Isogeometric shape optimization of nonlinear, curved 3D beams and beam structures[END_REF]. While, IGA uses B-spline function and NURBS basis function, which are commonly used in CAD environment, to accurately describe any complex geometry and approximate unknown solution fields in the analysis [START_REF] Tavakkoli | Isogeometric topology optimization by using optiMality criteria and implicit function[END_REF]. 
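Before continuing with IGA, the 1D-element idea mentioned above can be illustrated with a minimal sketch: a chain of two-node bar elements whose Young's modulus varies element by element, so a graded strut is approximated at very low cost. This is a standard textbook assembly under assumed data, not the formulation of any specific cited work.

```python
import numpy as np

def solve_bar_chain(lengths, youngs, areas, tip_force):
    """Axial displacements of a clamped chain of two-node bar elements."""
    n_el = len(lengths)
    n_nodes = n_el + 1
    K = np.zeros((n_nodes, n_nodes))
    for e, (L, E, A) in enumerate(zip(lengths, youngs, areas)):
        k = E * A / L
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = tip_force
    # Dirichlet condition u_0 = 0: solve on the free nodes only.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u

if __name__ == "__main__":
    n = 10
    lengths = np.full(n, 1.0e-3)                     # 1 mm segments (assumed)
    youngs = np.linspace(2.0e9, 0.5e9, n)            # graded stiffness along the strut
    areas = np.full(n, np.pi * (0.25e-3) ** 2)       # 0.25 mm strut radius (assumed)
    u = solve_bar_chain(lengths, youngs, areas, tip_force=1.0)
    print("tip displacement [m]:", u[-1])
```

IGA, by contrast, discretizes the same kind of problem with the B-spline and NURBS bases already used by the CAD geometry.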
It provides an accurate and efficient numerical discretization of the model and enables a seamless integration of the simulation method with the design approach through the concept of IGA. The rod formulation and the IGA method also allow graded design parameters of the cross-sections, such as a spatially variable material distribution, e.g. in terms of Young's modulus, to be represented accurately. These properties are highly desirable for many types of applications, such as reusable energy-absorbing devices based on nonlinear elastic rather than plastic deformations, vibration mitigation through curved ligaments, tailored energy absorption response through functional grading of material properties or hierarchical microstructures, tissue-like medical implants, soft robotic actuators and devices, 4D printing, or structures with complex constitutive behavior such as negative Poisson's ratio effects. Weeger et al. [START_REF] Weeger | Digital design and nonlinear simulation for additive manufacturing of soft lattice structures[END_REF] proposed a digital design and manufacturing framework for soft lattice structures (Fig. 13). The design approach was implemented in the 3D modeling software Rhinoceros® (also Rhino® or Rhino3D®) using the algorithmic design environment Grasshopper™ and the open-source plugin IntraLattice. Based on the simulation methods introduced above, a comparison between the different simulation approaches is summarized in Table 3; it helps designers select a suitable simulation method for MSHLS.

Multi-scale optimization methods of heterogeneous lattice structures

Regarding the optimization of heterogeneous lattice structures, the existing methods can be roughly divided into three groups according to their design scales: optimization design of microstructures, optimal distribution of material compositions or microstructures at the macroscale, and two-scale concurrent optimization.

Optimization design of microstructures

The inverse homogenization technique can be applied to determine the optimal distribution of material in the micro-cell at a certain point of the structure in order to achieve the desired properties [START_REF] Gao | Topology optimization of micro-structured materials featured with the specific mechanical properties[END_REF]. In this process, a microstructure periodically arranged in the macro design domain actually behaves like a material whose length scale is much smaller than that of the macroscopic structure. This method was originally developed for structure-related properties, such as extreme elastic moduli, Poisson's ratios, and extreme thermal expansion coefficients. It can also easily be extended to many other fields for multifunctional purposes. Li et al. [START_REF] Li | Integrated design of cellular composites using a level-set topology optimization method[END_REF] produced a periodic cellular composite for each layer by integrating numerical homogenization into a level set approach (Fig. 14a). Kazemi et al. [START_REF] Kazemi | Multi-material topology optimization of lattice structures using geometry projection[END_REF] proposed a computational method for the design of architected truss lattice materials where each strut can be made of one of a set of available materials (Fig. 14b). Schumacher et al.
[START_REF] Schumacher | Microstructures to control elasticity in 3D printing[END_REF] solved the inverse problem to the numerical coarsening method, obtaining a microstructure that coarsens to a given stiffness tensor (Fig. 14c). The difference among the three methodology is that the representations of microstructures are continuous method (Fig. 14a) and discrete method, in which the discrete method uses 1D truss element (Fig. 14b) and voxel element (Fig. 14c), respectively. Optimal distribution of material compositions or microstructures in macroscale Optimal distribution of material compositions. Instead of only focusing on microscale, the second group of optimization methods is developed to determine the optimal distribution of material compositions or microstructures. Due to the limitations of traditional manufacturing technologies, most of the existing optimization methods mainly focus on the optimization of material compositions distributions. During the optimization of material compositions, the material distribution functions are usually preselected. Then, the optimization formulation is generated to find the optimal coefficients which are defined in the preselected functions. For example, in the optimization of Functional Graded Materials (FGM) which is a typical heterogeneous object, power law and exponential functions are widely used [START_REF] Sola | Functionally graded materials for orthopedic applications -an update on design and manufacturing[END_REF][START_REF] Elishakoff | Three-dimensional analysis of an all-round clamped plate made of functionally graded materials[END_REF][START_REF] Na | Volume fraction optimization of functionally graded composite panels for stress reduction and critical temperature[END_REF]. Instead of using functions, SIMP-based topology optimization results with relative density distribution have also been used for the design of the heterogeneous lattice structure. For example, Brackett et al. [START_REF] Brackett | Topology optimization for additive manufacturing[END_REF] suggested mapping volume fractions of the lattice unit cells onto the intermediate densities of an un-penalized SIMP solution, making the greyscale density solution of topology optimization possible to manufacture with AM. This work utilized the tessellation of the unit cell for the lattice generation, i.e. where a selected unit cell template was tessellated across the design domain in a regular fashion. To generate lattice with varying cell sizes, Brackett et al. [START_REF] Brackett | An error diffusion based method to generate functionally graded cellular structures[END_REF] offered an error diffusion-based method which enabled the mapping of irregular unit cell lattice onto a grey scale input. This was achieved by generating dithered points from the greyscale image followed by connecting them, using either Delaunay triangulation or Voronoi tessellation. Tang et al. [START_REF] Tang | Multifunctional design of heterogeneous cellular structures[END_REF] developed a function-performance-property-design parameter model (F-P-P-D) (Fig. 15) for heterogeneous lattice structures where the result of density-based topology optimization has been mapped to the relative density of lattice structures. Its result shows the optimized heterogeneous lattice structures can significantly achieve both low thermal conductivity and high stiffness without increasing its weight. Alzahrani et al. 
[START_REF] Alzahrani | Design of truss-like cellular structures using relative density mapping method[END_REF] proposed method utilizes the relative density information (Fig. 16) obtained from a solid topology optimization to automatically determine the diameter of each individual strut in the structure, which collectively represents the set of design variables. This allows the method to produce lattice structures that can perform reliably under multiple loading conditions and also reduce the computational cost associated with the design of these structures. Li et al. [START_REF] Li | Optimal design and modeling of gyroid-based functionally graded cellular structures for additive manufacturing[END_REF] presented a novel optimization strategy for designing functionally graded cellular structures with desired mechanical properties. In the strategy a complex functionally graded gyroid lattice structure is generated by implicit surface reconstruction algorithm in 3D space, which maps the corresponding material properties (Fig. 17a) to geometric parameter (Fig. 17b) perfectly. Panesar et al. [START_REF] Panesar | Strategies for functionally graded lattice structures derived using topology optimisation for additive manufacturing[END_REF] presented a number of strategies that enable lattice structures to be derived from topology optimization results suitable for AM. As shown in Fig. 18, the structures with different strategies such as solid (Fig. 18a), intersected lattice of D-P (Fig. 18b), intersected lattice of BCC (Fig. 18c), graded lattice of D-P (Fig. 18d), scaled lattice of D-P (Fig. 18e), and uniform lattice of D-P (Fig. 18f) are used for FEA and made a comparison for several objectives. Instead of using relative density distribution results, stress distribution results of the structures could also be used for optimization. Teufelhart and Reinhart [START_REF] Teufelhart | Optimization of strut diameters in lattice structures[END_REF] optimized the strut diameters of an irregular lattice structure by capitalizing on the flux of force within a solid structure. They showed that enhancement in performance can be achieved when compared to a regular (uniformly tessellated) counterpart, mainly due to a much smoother distribution of stresses owing to the stress conformal nature of the lattice. Also Tang et al. [START_REF] Tang | Bidirectional Evolutionary Structural Optimization (BESO) based design method for lattice structure to be fabricated by additive manufacturing[END_REF] proposed a BESO-based design method for truss-like lattice structure optimization. In this method, Functional Volumes (FVs) and Functional Surfaces (FSs) are first determined based on an analysis of the functional requirements. FVs can be further decomposed into several sub-FVs. These sub-FVs can be divided into two types: FV with solid and FV with lattice. Optimal distribution of microstructures. Besides material compositions, the microstructure distribution is another type of design parameter which are considered during the optimization process. For instance, to achieve the given elasticity gradients, an inverse-homogenization technique-based design optimization is proposed for functional graded cellular structures [120]. In this method, a precomputed data-based of tiled cell structures is built to cover a wide range of elastic properties. 
Then, a global optimization algorithm is proposed to synthesize the cells from the pre-computed database to achieve desired elasticity distribution with the consideration of connections between neighborhood cells. To simultaneously update the topology of unit cell as well as the morphology of cells in the different region of design domain, Liu et al. proposed a new approach to generate functional graded cellular structures based on moving morphable components/voids (MMV/MMC) topology optimization [START_REF] Guo | Doing topology optimization explicitly and geometrically-a new moving morphable components based framework[END_REF][START_REF] Liu | Additive Manufacturing-Oriented Design of Graded Lattice Structures Through Explicit Topology Optimization[END_REF]. In this method, the topology of the unit cell is represented by a set of explicit functions, while the basis perturbation functions have been applied to control the morphology of cell distribution. This innovative design parameterization method enables designers to control both cell topology and its morphology distribution simultaneously without a large number of design parameters. Similarly, Schumacher et al. [START_REF] Schumacher | Microstructures to control elasticity in 3D printing[END_REF] proposed a method for fabricating deformable objects with spatially varying elasticity using 3D printing. A database of microarchitectures is assembled prior to large-scale optimization, consisting of different families derived through topology optimization targeting defined material properties. A tiling algorithm then maps the discrete microarchitectures to the large-scale domain in order to fulfil functional objectives. Liu et al. [START_REF] Liu | Rapid modeling and design optimization of multi-topology lattice structure based on unit-cell library[END_REF] considered the connectivity between lattice cells, and divided the lattice cells of constructed unit-cell library into seven series to ensure that lattice cells of each series could be connected. Panetta et al. [START_REF] Panetta | Elastic textures for additive fabrication[END_REF] explored a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. As shown in Fig. 19, the microstructures from same family are mapped to each voxel according to the property distribution of the designed objects. Two-scale concurrent optimization However, above works are restricted to a single scale (either macro or microscale), while the concurrent optimization at both scales is limited. The concurrent topology optimization refers to simultaneously devise the best structural topology with the most compatible material microstructure, with the aim of increasing the design freedom to find the optimal performance in the macrostructure [START_REF] Li | A new multiscale topology optimization method for multiphase composite structures of frequency response with level sets[END_REF]. 
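Looping back to the density-mapping strategies reviewed in the previous subsection, the sketch below shows one simple way a relative density coming from a SIMP-type solution could be translated into a strut radius of a BCC-like cell by inverting an approximate volume formula. The cell model (four body-diagonal struts, node overlap ignored) and all numbers are assumptions for illustration only; the published mappings cited above use more careful cell models.

```python
import math

def bcc_relative_density(radius: float, cell: float) -> float:
    """Approximate relative density of a cubic cell with four body-diagonal struts."""
    strut_length = math.sqrt(3.0) * cell
    strut_volume = math.pi * radius ** 2 * strut_length
    return min(4.0 * strut_volume / cell ** 3, 1.0)

def radius_for_density(target_rho: float, cell: float, tol: float = 1e-9) -> float:
    """Invert the density formula by bisection on the strut radius."""
    lo, hi = 0.0, 0.5 * cell
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bcc_relative_density(mid, cell) < target_rho:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    cell = 5.0  # mm, hypothetical unit cell size
    for rho in (0.1, 0.2, 0.3):        # e.g. densities taken from a SIMP field
        r = radius_for_density(rho, cell)
        print(f"target density {rho:.2f} -> strut radius ~ {r:.3f} mm "
              f"(check: {bcc_relative_density(r, cell):.3f})")
```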
The development of AM provides a feasible solution to fabricate concurrent multiscale designs with either a single material [START_REF] Wang | Multiscale reliability-based topology optimization methodology for truss-like microstructures with unknown-but-bounded uncertainties[END_REF] or multiphase materials [START_REF] Vogiatzis | Topology optimization of multi-material negative Poisson's ratio metamaterials using a reconciled level set method[END_REF][START_REF] Vicente | Concurrent topology optimization for minimizing frequency responses of twolevel hierarchical structures[END_REF]. In the existing concurrent topology optimization framework, macro and micro densities are introduced as independent design variables for the macro structure and the material microstructure, respectively. The optimizations at the two scales are integrated into one system with homogenization theory. Penalization approaches are adopted at both scales to ensure clear topologies, i.e., SIMP at the micro scale and PAMP (Porous Anisotropic Material Penalization) at the macro scale. Two classes of design variables (density fields) are independently defined [START_REF] Deng | Concurrent topology optimization of multiscale structures with multiple porous materials under random field loading uncertainty[END_REF]. As illustrated in Fig. 20, the two-scale concurrent topology optimization framework, where the material microstructure is assumed to be the same throughout the macroscopic structure, is the focus of this work. The macrostructure is assumed to be composed of uniform materials, whose material microstructures can be regarded as composed of repetitive unit cells in which a voxel-based representation specifies whether a finite element contains base material (voxel=1) or not (voxel=0). The most commonly applied strategy is designing a uniform material microstructure at the microscopic scale, either for a fixed or for a concurrently changed structure at the macroscopic scale. Obviously, such designs have not yet released the full potential of concurrent two-scale designs. A further step has been made by Xia and Breitkopf [START_REF] Xia | Recent Advances on Topology Optimization of Multiscale Nonlinear Structures[END_REF], where several different cellular materials are designed for a layered structure following a two-step design procedure. Another concurrent topology optimization model with multiple porous materials is proposed by Deng and Chen [START_REF] Deng | Concurrent topology optimization of multiscale structures with multiple porous materials under random field loading uncertainty[END_REF]. By combining the concurrent topology optimization framework and the discrete material optimization interpolation model, the distribution of subdomains at the macroscale and the topology of the material microstructure in each subdomain can be determined automatically. An L-beam optimized with a generic approach [START_REF] Sivapuram | Simultaneous material and structural optimization by multiscale topology optimization[END_REF] allowing any number of unique microstructures indicates that increasing the number of allowable microstructures leads to a solution with an improved objective. Li et al. [START_REF] Li | Design of Architected Materials for Thermoelastic Macrostructures Using Level Set Method[END_REF] introduced a larger number of material microstructures in architected thermoelastic materials optimization; the overall performance was improved owing to the expanded design space. However, when multiple architected materials with spatial variations in a structure are considered, a challenge arises in the topological solutions, which may not be connected between adjacent material architectures. Du et al.
[START_REF] Du | Connecting microstructures for multiscale topology optimization with connectivity index constraints[END_REF] introduced a connectivity index (CI) to quantify topological connectivity, and added it as a constraint in multiscale topology optimization to achieve connected architected materials. However, in these works the unit cell of the microstructure is composed of a single base material with a single base material property. In order to further explore the design freedom provided by multi-material AM (e.g., PolyJet), Zhu et al. [START_REF] Zhu | Two-scale topology optimization with microstructures[END_REF] proposed a two-scale topology optimization with microstructures consisting of two base materials (a soft material and a rigid material). As shown in Fig. 21, a continuous gamut of material properties is derived from a precomputed database of discrete microarchitectures prior to large-scale optimization. Topology optimization is then used to optimize the material properties of the large-scale domain within the gamut to satisfy functional objectives. The continuous solution from the topology optimization is then mapped to discrete microarchitectures from within the database. This effective approach requires large computational resources to assemble the database prior to the large-scale optimization.

Multi-material optimization methods of heterogeneous lattice structures

From the multi-material point of view, the material space of multi-material objects is no longer confined to a single material; it has been extended to a multi-material space. Therefore, compared with the topology optimization of a single-material object, multi-material topology optimization (MMTO) refers to finding the optimal layout of a variety of materials in the design space under a given design domain, given constraints and loads. The best load path is then obtained to achieve a predetermined function or motion and to reach a certain design goal. For MMTO, the problem can be generalized as Eq. (4):

Min φ(Ω_k, u),   Ω_k ⊂ D,   k = 1, 2, ..., M
Subject to   Ku = f
             g_i ≤ 0
             Ω_i ∩ Ω_j = ∅   (i ≠ j)    (4)

where φ is the objective, such as compliance, stress, or volume; Ω_k is the topology to be computed for material k; M is the number of materials; D is the domain within which the topology must lie; u is the finite-element displacement vector; K is the finite-element stiffness matrix; f is the external force vector; and g_i are the constraints on volume, stress, buckling, etc. The MMTO problem assumes that every point within the design space has a distinct material associated with it (or is void). This differs from functionally graded material optimization, where a mixture of base materials is allowed [START_REF] Mirzendehdel | A pareto-optimal approach to multimaterial topology optimization[END_REF]. The main approaches include Solid Isotropic Microstructure with Penalty (SIMP), the Level Set Method (LSM) and the Discrete Element Method (DEM).

SIMP for MMTO

The SIMP method needs different material interpolation schemes to meet the requirements of different base materials and material properties. Thomsen [START_REF] Thomsen | Topology optimization of structures composed of one or two materials[END_REF] first extended SIMP to multiple materials in 1992. Sigmund [START_REF] Sigmund | Design of multiphysics actuators using topology optimization-Part II: Two-material structures[END_REF] proposed a two-material interpolation scheme for designing electrically and thermally driven microactuators. Kruijf et al.
[START_REF] De Kruijf | Topological design of structures and composite materials with multiobjectives[END_REF] found optimal structures with maximum stiffness and minimum resistance to heat dissipation, and tailored composite materials with effective thermal conductivity and bulk modulus. Two-phase ill-ordered base materials (i.e. one has a higher Young's modulus but lower thermal conductivity, while the other has a lower Young's modulus but higher conductivity) are assumed in order to observe competition in the phase distribution defined by stiffness and conduction. Hvejsel et al. [START_REF] Hvejsel | Material interpolation schemes for unified topology and multi-material optimization[END_REF] proposed two multi-material interpolation schemes as direct generalizations of the well-known SIMP and RAMP material interpolation schemes originally developed for isotropic mixtures of two isotropic material phases. Jeong et al. [START_REF] Jeong | Separable stress interpolation scheme for stress-based topology optimization with multiple homogenous materials[END_REF] presented a new regional constraint method based on a sorting algorithm, and the applicability and limitations of the newly developed framework were discussed in the context of its application to several stress-based topology optimizations with multiple materials. Gaynor et al. [START_REF] Gaynor | Multiple-material topology optimization of compliant mechanisms created via PolyJet three-dimensional printing[END_REF] manufactured compliant mechanism designs based on three-phase (two solid phases plus void) topology optimization using PolyJet AM technology, which can print bulk materials covering a wide range of elastic moduli (Fig. 22). The advantages of SIMP are as follows: the design variables (cell densities) in the SIMP model are directly linked to the optimization problem, that is, the displayed topology depends directly on the design variables; the optimization algorithm converges well and the discrete design sensitivities can be calculated directly with the finite element method; moreover, it is suitable for any shape of the design domain and is easy to integrate with existing commercial finite element software. However, the SIMP method extended to m-phase materials requires (m-1) design variables for each finite element, which results in complex interpolation expressions and a large computational cost [START_REF] Zuo | Multi-material topology optimization using ordered SIMP interpolation[END_REF]. In order to enforce the selection of, at most or exactly, one material in each design subdomain, a large number of sparse linear constraints are needed.

LSM for MMTO

The level-set function was originally developed by Osher and Sethian [START_REF] Osher | Fronts propagating with curvaturedependent speed: algorithms based on Hamilton-Jacobi formulations[END_REF] with the fundamental goal of tracking the motion of curves and surfaces. In this approach, the structural boundary is implicitly represented by the zero level set. For multi-material structures, multiple level set functions φ_k, k = 1, 2, ..., M, are employed to denote the different phases. These level set functions are utilized to define the following subdomains [START_REF] Cui | A level-set based multi-material topology optimization method using a reaction diffusion equation[END_REF]:

Φ_k(x) > 0,  ∀x ∈ Ω_k \ Γ_k
Φ_k(x) = 0,  ∀x ∈ Γ_k                (5)
Φ_k(x) < 0,  ∀x ∈ D \ Ω_k,   k = 1, 2, ..., M

where D represents the design domain including all admissible shapes; Ω_k denotes the kth material region, in which the kth level set function takes positive values; Γ_k is the boundary of the kth material; a negative value of the kth level set function signifies the domain not containing the kth material; x represents a point located in D; and M is the number of level set functions. An example of a design domain containing three level set functions is illustrated in Fig. 23. Wang et al. [START_REF] Wang | Color" level sets: a multi-phase method for structural topology optimization with multiple materials[END_REF] also introduced a novel level-set approach referred to as a "color" level-set representation, which has the unique benefit of handling complex topologies flexibly. The multiple level set functions φ_k, k = 1, 2, ..., M, are used to represent n = 2^M distinct material phases, which substantially reduces the number of model functions. Liu et al. [START_REF] Liu | Level set-based heterogeneous object modeling and optimization[END_REF] employed multiple level set functions to build the geometry, utilized zero-value level set contours as material source profiles, and realized FGM blending with a signed distance-based blending function. More importantly, this model supports concurrent structure and material optimization because of the unified level set framework for both structure and material composition representation. The initial problem setups are demonstrated in Fig. 24a and Fig. 24c. The heterogeneous source profile in Fig. 24a employs a Young's modulus combination of 1.3 and 0.33, while the heterogeneous source profile in Fig. 24c employs a Young's modulus combination of 1.3 and 0.98. The optimization results are demonstrated in Fig. 24b and Fig. 24d, respectively. The advantages of the level set method are as follows: the level set-based model is one of the few methods that can support both aspects, modeling and optimization. The gray-area problem is solved thanks to its smooth and clear boundaries between the various materials and the convenient extraction of topological configurations, and the optimization results can be used directly. However, since the design variables are only indirectly linked to the optimization problem, the method involves the approximation of finite elements that are cut by the level set, which affects the optimization accuracy. Level-set-based topology optimization methods also suffer from local minima and from dependency of the final result on the initial guess. This property is a consequence of the shape optimization character of LSM. Seeding an initial design with a large number of holes or inclusions may lead to numerical issues and does not necessarily mitigate the dependency of the final result on the initial guess.

DEM for MMTO

Unlike continuous topology optimization methods such as SIMP, discrete elements such as 1D truss elements (ground structure method) [START_REF] Zhang | Material nonlinear topology optimization using the ground structure method with a discrete filtering scheme[END_REF] or 3D bar elements (geometry projection method) [START_REF] Norato | A geometry projection method for continuum-based topology optimization with discrete elements[END_REF] can be directly applied to MMTO for MSHLS. Recently, several papers on this topic have been published. Zhang et al. [START_REF] Zhang | Multi-material topology optimization with multiple volume constraints: a general approach applied to ground structures with material nonlinearity[END_REF] proposed an efficient MMTO formulation considering material nonlinearity. The proposed formulation handles an arbitrary number of candidate materials with flexible material properties, features freely specified material layers, and includes a generalized volume constraint setting. An application to MMTO of truss networks considering multiple load cases and nonlinear constitutive behavior is shown in Fig. 25. Kazemi et al. [START_REF] Kazemi | Topology optimization of structures made of discrete geometric components with different materials[END_REF] presented a geometry projection method for the simultaneous topology optimization and material selection of structures made by the union of discrete geometric components, where each component is made of one of multiple available materials. A Michell cantilever example shows that the proposed formulation can be readily extended to any number of materials (Fig. 26). The advantages of DEM are as follows: the optimal design of truss-like structures with discrete design variables shares many mathematical properties with multi-material topology optimization. In the ground structure method, the final optimal truss structure is obtained by selecting an optimal substructure from a pre-defined ground structure. Typically, strut cross-section dimensions (such as the diameter) are the design variables.
Local strain or stress values from the FEA results are used to compute the strut sizes. If a strut size approaches the specified threshold, it can be removed, effectively changing the topology of the design. This conveniently turns a topology optimization problem into a sizing problem. Unlike ground structure methods, the discrete elements in the geometry projection method need not be connected during the optimization, which gives the optimizer more freedom to optimally place and size the bars. By having an independent explicit geometry, this method is capable of producing designs with members of predefined cross-sections. Moreover, it can accommodate the case where the thicknesses of the bars are added at their intersections (overlapping bars), and it can compute the volume fraction and its sensitivities directly from the bar geometry rather than from a composite density. However, the discrete nature of the problem makes it difficult to solve. There exist a large number of both mathematically well-founded and heuristic approaches, such as genetic algorithms, swarm methods and differential evolution techniques; however, none of them is well suited to the problems with thousands and up to millions of variables that are encountered in topology optimization, so they are not viable alternatives for the vast majority of topology optimization problems. Based on the optimization design methods introduced above, a comparison between the different optimization approaches is summarized in Table 4; this comparison helps designers select a suitable optimization method for different cases.

Discussion

A general design process for MSHLS, including the input of design and functional requirements, initial design, optimized design and geometric & material composition models, is proposed in Fig. 27. This process is based on the design and fabrication methods for AM proposed by Rosen [START_REF] Rosen | Computer-Aided Design for Additive Manufacturing of Cellular Structures[END_REF], Tang [START_REF] Tang | A survey of the design methods for additive manufacturing to improve functional performance[END_REF] and Yang [149]. In this paper, we focus only on the design aspect. With the input of design and functional requirements, an initial design including Functional Volumes (FVs) and Functional Surfaces (FSs), either with solid & skin or with lattice, can be obtained. The initial design result is then the input to the simulation step. The simulation result then goes back to the modeling stage for the optimization of the initial design, yielding the optimized design. During this design process, optimization methods (multi-scale optimization and multi-material optimization), simulation support (mixtures rule and numerical analysis), and modeling methods (material composition modeling and multi-scale geometric modeling) support the optimized design phase as the main design methods, in order to obtain the final geometric & material composition models of the MSHLS.

Fig. 27 The design process of multi-scale heterogeneous lattice structure.

To facilitate the use of the above-mentioned modeling, simulation and optimization methods for MSHLS design, it is necessary to describe the scope of application of each method.
According to the minimum basic elements describing the material information of MSHLS, voxel-based, volumetric mesh-based, truss- or beam-based and curve feature-based design methods are summarized. Among them, voxel-based modeling, simulation and optimization methods include the voxel model (Section 2.1.1), the physics-based method (Section 3.1.1), the homogenization method (Section 3.1.2), and the SIMP method (Section 3.3.1); volumetric mesh-based methods include the volume mesh model (Section 2.1.1) and the FEA-based method (Section 3.1.3); truss- or beam-based methods include the control feature-based model (Section 2.1.3), the FEA-based method (Section 3.1.3) and DEM (Section 3.3.3); curve feature-based methods include the control point-based model (Section 2.1.4) and the IGA-based method (Section 3.1.4). In particular, as an implicit function, the level set-based method (Section 2.1.2) is not only applicable to Boolean operations but also to MMTO (Section 3.3.2). Summarising the overview of modelling methods, there is a gap between the geometrical complexity of heterogeneous lattices and the inability of current geometric modelling methods to handle such complexity. Moreover, volumetric modelling allows a direct transition to structural simulations but is even more complicated. Thus, there is a need for a novel volumetric geometric modelling approach to support MSHLS. Such an approach should be extensively tested on numerous use cases of lattice structures and other complex geometries, measuring its performance and speed compared with existing approaches. Moreover, the concept of level of detail should be adapted for lattice structures, as they do not have the CAD features that are commonly associated with the level of detail in conventional modelling. Simulation methods such as homogenization can be applied based on the calculated effective properties. However, how to accurately predict the effective properties of MSHLS is a critical issue, since the micro cell of this type of structure may change dramatically in a given local region. A Representative Volume Element (RVE) for this type of structure is difficult to define: there is no cell that strictly satisfies the periodic boundary condition, and directly choosing the smallest unit cell as the RVE will cause a significant discrepancy between the simulation result and the physical testing result. The rule of mixtures is the simplest alternative; however, its accuracy is limited, since it fails to consider the geometric configuration of the microstructures of the material composition. In most cases, it can only provide a rough estimate of the upper or lower bound of the effective properties. FEA methods can be used based on decomposition-based models (Section 2.1.1); however, the computational cost for a finely meshed model is too heavy to be practical. For the optimization methods, efficiently solving the proposed optimization formulation is another critical issue. Compared with existing optimization methods for homogeneous objects, the proposed optimization formulation contains more design parameters. Moreover, those design parameters may be coupled with each other. For example, during the optimization of the stiffness of heterogeneous structures, the material distribution at the macroscale is coupled with the microstructure of the materials in the different regions. On the one hand, the optimized microstructure can be generated based on predefined material compositions; on the other hand, the effective properties of the optimized structures will in turn affect the distribution of material compositions. To solve this complex optimization problem, several potential methods can be adopted. Given that MSHLS design has to deal with a large number of design parameters, one strategy for MMTO is to introduce a multi-resolution method to obtain multiple discretization levels. The discretization levels for the displacement elements, the density elements, and the design variables can be chosen independently; high-resolution MMTO can thus be realized at relatively low computational cost by employing a coarser discretization for the finite elements and a finer discretization for both the density elements and the design variables. Another strategy is to reduce the number of design parameters by using heuristic algorithms.
Generally, the heuristic algorithm may be applied, based on an initial analysis of the structure, to divide the initial design domain on a single design scale into several sub-regions. The material or its microstructure is assumed to be homogeneous in each sub-region during the design optimization process. This method can significantly decrease the number of design parameters and the computational time for the defined optimization problem. A third strategy is to optimize the structures on the different scales sequentially. This method (Section 3.2.3) tries to decouple the relation between design parameters on different design scales. The effects of the design parameters defined on the lower scale are pre-estimated during the optimization iteration for the design parameters defined on the upper scale. An additional optimization constraint is added to limit the discrepancy between the estimated effects and the real effects of the design parameters defined on the lower scales. This strategy has been widely used in Multidisciplinary Design Optimization (MDO), where it is known as IDF (Individual Discipline Feasible) analysis. It can be used for the design and optimization of complex coupled systems. The developed geometric & material composition model for MSHLS is a universal model. It can be used not only to represent the information of designed and optimized structures for industry and engineering applications, but also to represent the information of bio-tissues with hierarchical complexity, such as human bone. Based on the developed model, researchers from the bioengineering and biomedical fields can further investigate the mechanical behaviour of these tissues.

Conclusion and perspectives

The contribution of this review is to provide a comprehensive view of the MSHLS design process, and of the modeling, simulation and optimization strategies currently available, which are still lacking.

Permission to reprint from IEEE copyright 2018 [START_REF] Leung | Approximate functionally graded materials for multi-material additive manufacturing[END_REF]. Permission to reprint from ACM copyright 2015 [START_REF] Schumacher | Microstructures to control elasticity in 3D printing[END_REF]. Permission to reprint from Elsevier copyright 2019 [START_REF] Deng | Concurrent topology optimization of multiscale structures with multiple porous materials under random field loading uncertainty[END_REF].

Figure Captions List

Fig. 1 Heterogeneous lattice structure: a) Heterogeneous lattice with gradient strut dimensions, b) Heterogeneous lattice with variational material compositions.
Fig. 2 Multiscale structures with hierarchical lattice architectures. Permission to reprint from Springer Nature copyright 2016 [8].
Many scholars have carried out in-depth research on the techniques of material composition modeling. It mainly includes decomposition-based models, set-based models, explicit function-based models, control feature-based models and control point-based models. A brief review of the methods is provided here to clarify the differences and limitations of present works:
Fig. 4 2D and 3D heterogeneous lattice structures. Permission to reprint from Elsevier copyright 2014 and ASME copyright 2019 [22, 23].
Fig. 5 Material blending with three material source profiles. Permission to reprint from Elsevier copyright 2019 [30].
Regarding the multi-material level set-based optimization formulation (Section 3.3.2), the design domain D includes all admissible shapes; the kth material region is the domain where the kth level set function takes positive values, its boundary is the boundary of the kth material, and a negative value of the kth level set function signifies the domain that does not contain the kth material; x represents a point located in D and M is the number of level set functions. An example of a design domain containing three level set functions is illustrated in Fig. 23. The advantages of the level set description are as follows: the level set-based model is one of the few exceptions that can support both modeling and optimization; the gray-area problem is avoided thanks to the smooth and clear boundaries between the various materials and the convenient extraction of topological configurations, so the optimization results can be directly used for manufacturing. However, since the design variables are only indirectly linked to the optimization problem, the method involves the approximation of finite elements that are cut by the level set, which affects the optimization accuracy. Level set-based topology optimization methods also suffer from local minima and from the dependency of the final result on the initial guess; this property is a consequence of the shape-optimization character of the LSM. Seeding an initial design with a large number of holes or inclusions may lead to numerical issues and does not necessarily mitigate the dependency of the final result on the initial guess. The developed geometric and material composition model for MSHLS is a universal model. It can not only be used to represent the information of designed and optimized structures for industry and engineering applications; it can also be used to represent the information of biological tissues that exhibit similar hierarchical complexity, such as human bone. Based on the developed model, researchers from the bioengineering and biomedical fields can further investigate the mechanical behaviour of these tissues.
Conclusion and perspectives
The contribution of this review is to provide a comprehensive view of the MSHLS design process and of the modeling, simulation and optimization strategies currently available, which are still lacking in several respects. Software developers could get insights for the development or adjustment of design tools to represent the information of designed and optimized structures for industry and engineering applications. Researchers could get a comprehensive view on the topic together with indications for developing more integrated and evolutionary design methods.
In the end, based on the summaries of the different modeling, simulation and optimization methods, it comes out that further research is needed, and several future work perspectives concerning the design of MSHLS have been pointed out:
 To develop a multi-scale model which can represent the information of MSHLS. This model should satisfy the following requirements. Firstly, it should be able to store both material and geometry information on multiple design scales; the relations between the different design scales also need to be clearly defined in this model. Secondly, the developed model should have a compact and efficient data structure, which enables quick access to and visualization of data on the different design scales. Thirdly, the developed model should be general enough to incorporate different types of MSHLS. Fourthly, this model should also be easily converted into a manufacturing-readable data model, which enables direct fabrication via the selected AM process.
 To develop a multiscale simulation model of MSHLS. In this model, the effective properties of the material defined on the bottom scale need to be calculated based on numerical homogenization methods. Different numerical homogenization methods will be developed or modified to evaluate the different types of microstructures. For example, if the microstructures are lattice structures, numerical homogenization methods can be applied. Carefully selecting suitable numerical homogenization methods can significantly shorten the computational time. Moreover, in order to improve the accuracy of the calculation, the effective properties calculated from numerical homogenization methods also need to be corrected; the detailed correction algorithm needs to be further investigated. Based on the evaluated effective properties, a numerical simulation model such as an FEA model can be applied to evaluate the performance of the modeled MSHLS.
 To investigate a general optimization parameterization formulation. On the different design scales, the parameters which can efficiently control the distribution of material compositions and their microstructures should be identified. For example, the coefficients of an exponential function can be used to control the material distribution in the design domain, while implicit level set functions can be used to parameterize the microstructures of the materials. The selected design parameters can be directly converted to the generalized units defined on each level.
Acknowledgment
This work is supported by the Program of the Academic Excellence Foundation of BUAA for PhD Students, the China Scholarship Council (No. 201806020095), and the Natural Sciences and Engineering Research Council of Canada (Discovery Grant RGPIN-2018-05971).
Figure Captions List
Fig. 1 Heterogeneous lattice structure: a) Heterogeneous lattice with gradient strut dimensions, b) Heterogeneous lattice with variational material compositions.
Fig. 2 Multiscale structures with hierarchical lattice architectures. Permission to reprint from Springer Nature copyright 2016 [8].
Fig. 3 Multi-material-property configuration [START_REF] Liu | Material-Unit Network for Multi-Material-Property and Multiscale Components[END_REF].
Fig. 4 2D and 3D heterogeneous lattice structures. Permission to reprint from Elsevier copyright 2014 and ASME copyright 2019 [22, 23].
Fig. 5 Material blending with three material source profiles. Permission to reprint from Elsevier copyright 2019 [30].
Fig. 6 Common material composition variation features in MSHLS. Permission to reprint from Elsevier copyright 2019 [31].
Fig. 7 Effect of different material functions on material distribution. Permission to reprint from IEEE copyright 2018 [33].
Fig. 8 1D heterogeneous features: a) a heterogeneous line, b) a heterogeneous B-spline. Permission to reprint from Elsevier copyright 2005 [37].
Fig. 9 An example of LODs associated with CAD features. Permission to reprint from John Wiley.
Fig. 10 Lattice structure voxelization, adapted from [74].
Fig. 11 A standard cubic unit element, a), and a cylindrical shape with its cylindrical sector unit element, b), used for stress analysis. Permission to reprint from Elsevier copyright 2013 [75].
Fig. 12 FEA-based simulation aided design: a) multi-material fiber network of artificial disc. Permission to reprint from ASME copyright 2019 [98], b) medical tweezers with spatial material distribution. Permission to reprint from ACM copyright 2015 [100].
Fig. 13 Overview of the digital design and manufacturing framework for soft lattice structures. Permission to reprint from Elsevier copyright 2019 [106].
Fig. 14 Material microstructures design: a) level-set-based design. Permission to reprint from Elsevier copyright 2016 [108], b) design of architected truss lattice materials with three base materials. Permission to reprint from Elsevier copyright 2020 [109], and c) voxel-wise material distribution. Permission to reprint from ACM copyright 2015 [110].
Fig. 15 An example of F-P-P-D model. Permission to reprint from Springer Nature copyright 2018.
Fig. 16 Relative density mapping to strut diameter. Permission to reprint from Elsevier copyright 2015 [116].
Fig. 17 Example of mechanical properties mapping process and its manufacturability verification by 3D printing: a) material properties diagram, b) 3D mapping result, c) 3D printing result. Permission to reprint from Elsevier copyright 2018 [117].
Fig. 18 Structures used for FEA: a) Solid (SIMP solution), b) Intersected Lattice of D-P, c) Intersected Lattice of BCC, d) Graded Lattice of D-P, e) Scaled Lattice of D-P, and f) Uniform Lattice of D-P [118].
Fig. 19 Deformation of an object with varying material properties per voxel, and the same object with the material in each voxel replaced with the corresponding pattern. Permission to reprint from ACM copyright 2015 [124].
Fig. 20 Two-scale hierarchical structure with uniform microstructure. Permission to reprint from Elsevier copyright 2019 [130].
Fig. 21 Multi-scale topology optimization framework with microstructure database. Permission to reprint from ACM copyright 2018 [11].
Fig. 22 3-phase inverter result by multiphase SIMP approach. Permission to reprint from ASME copyright 2014 [140].
Fig. 23 A schematic diagram of the design domain which includes three level set functions. Permission to reprint from Elsevier copyright 2016 [143].
Fig. 24 Optimization results of the Michell structure problem with heterogeneous source profiles. Permission to reprint from Elsevier copyright 2019 [30].
Fig. 25 3D crane design subjected to multiple load cases: a) optimized structure for the 3D crane design with three materials, b) printed model using FDM process. Permission to reprint from Springer Nature copyright 2017 [147].
Fig. 26 Michell cantilever optimal designs.
Fig. 27 The design process of multi-scale heterogeneous lattice structure.
Table 1 Comparison of existing models for heterogeneous materials
Decomposition-based models (voxel-based model, volume mesh-based model). Advantages: the voxel model is suitable for visualization; the volume mesh model is easy to use for FEA. Disadvantages: inexact and computationally expensive; non-trivial to manually manipulate the material distribution.
Set-based models (R_m-set model, F-rep model). Advantages: the R_m-set model is easy to integrate with constructive operations; the F-rep model facilitates topology optimization. Disadvantages: incompatible with other modeling formats and cannot store topology information.
Control feature-based models (single and multiple feature-based models). Advantages: more favorable in capturing the designers' intents; suitable for FGM representation. Disadvantages: heavily based on features with predefined material distributions.
Control point-based models (B-spline and radial basis function). Advantages: compact in both geometry and material representations. Disadvantages: heavily based on spatial parameterizations.

Table 2 Comparison between polygon mesh modeling and voxel-based modeling approaches for MSHLS
Polygon mesh modeling. Advantages: computational efficiency; the LOD concept is applicable. Disadvantages: not suitable for complex geometrical shapes.
Voxel-based modeling. Advantages: handles high complexity; no need for a resolution higher than required for printing; efficient Boolean operations; feature recognition is well applicable. Disadvantages: not computationally efficient; introduces distortion when modeling at low resolution.

Table 3 Comparison of existing simulation methods for MSHLS
Physics-based method. Advantages: realistic visual effects in real time; able to simulate multiple interspersed materials of varying properties; computationally efficient. Disadvantages: low accuracy.
Homogenization method. Advantages: computationally efficient; easy to conduct multiscale simulation through repeated unit cell behavior. Disadvantages: satisfying the local periodicity hypothesis is difficult owing to the scale effect.
FEA-based method, 3D element. Advantages: suitable for arbitrary geometry; high accuracy and heterogeneity. Disadvantages: high computation cost and time consuming.
FEA-based method, 1D element. Advantages: suitable for truss-like structures; relatively low computation cost. Disadvantages: unable to simulate multiple interspersed materials at joints; low heterogeneity.
IGA-based method. Advantages: suitable for curve frames; the parameters and control points of the model can be directly optimized.

Table 4 Comparison of existing optimization methods for MSHLS
SIMP-based method. Advantages: the algorithm has good convergence, is suitable for any design domain, and is easy to integrate with commercial FEA software. Disadvantages: complex interpolation expressions and large computational cost.
Multi-material optimization methods, LSM-based method. Advantages: the boundary of the optimization results is clear, which can be directly used in manufacturing. Disadvantages: final results strongly depend on the starting guess.
Multi-material optimization methods, DEM-based method. Advantages: suitable for truss-like MSHLS.

Table Caption List
Table 1 Comparison of existing models for heterogeneous materials
Table 2 Comparison between polygon mesh modeling and voxel-based modeling approaches for MSHLS
Table 3 Comparison of existing simulation methods for MSHLS
Table 4 Comparison of existing optimization methods for MSHLS
04098416
en
[ "info.info-ia", "phys.meca.geme" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04098416/file/FMANU-MD-22-1253.pdf
Nikita Letov email: [email protected]
Yaoyao Fiona Zhao email: [email protected]
* Corresponding author.
Beam-based lattice topology transition with function representation
A lattice structure is a porous periodic structure with unit cells organized according to a pattern. Lattice structures are lightweight parts that are commonly produced by additive manufacturing techniques. Lattice structures require their topology to be defined, which effectively defines the connectivity of their unit cell. Many of these topologies are beam-based, i.e. their unit cell is represented by a network of nodes connected with beams. Such lattice structures require a geometric modeling tool capable of generating their solid model. This paper presents a method to support the topology transition for beam-based lattice structures by controlling the geometric parameters of topologies. This control is made possible with the function representation of the geometry. The work also analyzes how suitable different beam-based lattice topologies are to support the transition. A few case studies are carried out to demonstrate the feasibility of the proposed method.
NOMENCLATURE
3D three-dimensional
AM additive manufacturing
ASCII American Standard Code for Information Interchange
B-rep boundary representation
BCC body-centered cubic
BCCz body-centered cubic with additional 4 z-direction oriented beams
CAD computer-aided design
CPU central processing unit
DFAM design for additive manufacturing
F-rep function representation
FBCC face- and body-centered cubic
FCC face-centered cubic
FCCz face-centered cubic with additional 4 z-direction oriented beams
FOSS free and open-source software
GMK geometric modeling kernel
GPU graphics processing unit
OCCT Open CASCADE Technology
RAM random-access memory
S-FBCCz self-supporting face- and body-centered cubic without horizontal beams with additional 4 z-direction oriented beams
S-FCC self-supporting face-centered cubic without horizontal beams
S-FCCz self-supporting face-centered cubic without horizontal beams with additional 4 z-direction oriented beams
STL stereolithography
SSD solid-state drive
TPMS triply periodic minimal surface
INTRODUCTION
Ever since its introduction, additive manufacturing (AM) has been able to push manufacturing freedom to new frontiers. AM has found its application in part consolidation, rapid prototyping, and the manufacturing of geometrically complex and hollow structures [START_REF] Yang | A new part consolidation method to embrace the design freedom of additive manufacturing[END_REF][START_REF] Lam | The impact of 3D printing implementation on stock returns: a contingent dynamic capabilities perspective[END_REF][START_REF] Noronha | Hollow-walled lattice materials by additive manufacturing: Design, manufacture, properties, applications and challenges[END_REF]. Lattice structures are an example of a complex geometric object that can be produced with AM. Some lattice structures can be manufactured with subtractive manufacturing techniques [START_REF] Queheillalt | Cellular metal lattices with hollow trusses[END_REF]. However, more geometrically complex lattice structures, such as heterogeneous and multiscale lattice structures, often require AM for their manufacturing.
Fig. 1: An example of a lattice structure with multiple topologies that are inspired by topology optimization [START_REF] Liu | Rapid modeling and design optimization of multi-topology lattice structure based on unit-cell library[END_REF]
Lattice structures provide an increased strength-to-weight ratio without a significant decrease of strength properties [START_REF] Yazdi | Optimization of geometrical parameters in a specific composite lattice structure using neural networks and abc algorithm[END_REF]. Moreover, other physical and mechanical properties which are different from those of their parent material can emerge in a lattice structure [START_REF] Chen | Multi-material additive manufacturing of metamaterials with giant, tailorable negative poisson's ratios[END_REF][START_REF] Maconachie | SLM lattice structures: Properties, performance, applications and challenges[END_REF]. Lattice structures can be classified as either homogeneous or heterogeneous. Homogeneous lattice structures have their topology and geometric parameters constant across the structure, while heterogeneous lattice structures have varying topologies or geometric parameters. Geometric and mechanical properties of lattice structures have secured their place in such industries as aerospace [START_REF] Maconachie | SLM lattice structures: Properties, performance, applications and challenges[END_REF], automotive [START_REF] Aslan | Optimum design of automobile components using lattice structures for additive manufacturing[END_REF], prosthetics [START_REF] Kandil | A novel bioinspired hydrogel-based lattice structure to mechanically mimic human annulus fibrosus: A finite element study[END_REF], and more [START_REF] Dong | A survey of modeling of lattice structures fabricated by additive manufacturing[END_REF]. A lattice structure with multiple homogeneous regions is an example of a heterogeneous lattice structure. Different topologies have different mechanical properties in different directions [START_REF] Li | Architecture design of periodic truss-lattice cells for additive manufacturing[END_REF]. Assigning various topologies to different regions of the same structure ensures the variation of mechanical properties within that structure. This effect is often utilized in design for additive manufacturing (DFAM) [START_REF] Liu | Rapid modeling and design optimization of multi-topology lattice structure based on unit-cell library[END_REF]. For example, topology optimization can often be used to identify the optimal topology for each unit cell of a lattice depending on the loading conditions [START_REF] Hu | Two-scale con-current topology optimization method of hierarchical structures with self-connected multiple latticematerial domains[END_REF][START_REF] Wei | Topology optimization for design of hybrid lattice structures with multiple microstructure configurations[END_REF]. Figure 1 illustrates an example of a lattice structure with topologies selected based on a topology optimization algorithm. Geometric modeling of lattice structures has been a challenge in AM due to the inability of conventional tools to model periodic structures [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF]. Even more challenging is the geometric modeling of heterogeneous lattice structures [START_REF] Liu | A survey of modeling and optimization methods for multi-scale heterogeneous lattice structures[END_REF].
While AM continuously allows manufacturing of an ever-increasing variety of lattice structures [START_REF] Tao | Design of lattice structure for additive manufacturing[END_REF], the design freedom is still limited for the design of lattice structures. Such a limitation is often associated with the geometric modeling functionality of the existing computer-aided design (CAD) tools for lattice structures [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF]. Conventional CAD tools which are based on features are well-suited for subtractive manufacturing but fail to provide sufficient design flexibility when modeling complex periodic structures [START_REF] Liu | A survey of modeling and optimization methods for multi-scale heterogeneous lattice structures[END_REF]. There is a research gap between manufacturing freedom and design freedom that is yet to be crossed from both sides. First, not all designed parts can be manufactured and the manufacturing freedom is thus limited [START_REF] Velivela | of domain integrated design methodology for bio-inspired design-a case study of suture pin design[END_REF]. Second, not everything that is manufactured can be easily modeled, which is a gap that can be crossed by future developments in geometric modeling for AM and that has been identified in the literature [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF]. This gap can be crossed by providing an appropriate geometric modeling approach that would allow efficient and convenient control over the geometric parameters and the topology of a lattice structure. The topic of topology transition in lattice structures is not a new one [START_REF] Leonardi | Additive manufacturing of heterogeneous lattice structures: an experimental exploration[END_REF]. Geometric modeling of the transition between areas with different topologies within the same lattice structure is of particular interest in research on this topic. A smooth transition between topologies that are arbitrarily oriented with respect to each other finds its application in, for example, the design of bone implants [START_REF] Lu | Relationship between the morphological, mechanical and permeability properties of porous bone scaffolds and the underlying microstructure[END_REF]. Trabecular bone has multiple porous microstructures which are oriented depending on the direction of the load which is commonly applied to the bone. The regions of the bone which are subject to the same type of load possess similar mechanical properties, which are achieved by a seemingly randomized natural lattice structure with arbitrarily oriented beams. This paper presents the research work that is a direct continuation of a previously published work that was focused on providing a framework for functional control over the geometric parameters of a lattice structure [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]. The purpose of this paper is to analyze the possible combinations of beam-based topologies within a single lattice and to extend the previously developed framework to enable functional control over the topology in it. Figure 2 illustrates various ways of enabling heterogeneity of lattice structures with the main topic of this paper highlighted in bold.
To summarize, this paper attempts to provide a methodology that can be applied in the geometric modeling of beam-based lattice structures with multiple topologies. Note that even though material heterogeneity is a noticeable topic of interest in the research on lattice structures [START_REF] Liu | Material-unit network for multi-materialproperty and multiscale components[END_REF], this paper focuses only on the geometric aspect of modeling. The proposed paper does not focus on topology optimization itself. However, the proposed method can eventually be applied to topology optimization. For example, it has been shown that if a solid body is defined with function representation, then the optimization of the arguments of the underlying function results in a topologically optimized structure [START_REF] Popov | Cad/cam system for additive manufacturing with a robust and efficient topology optimization algorithm based on the function representation[END_REF].
Fig. 2: Various ways to parametrize heterogeneous lattice structures with the direction chosen for this paper encircled with bold lines
Even though surface-based topologies, such as the topologies based on the triply periodic minimal surfaces (TPMS), are a topic of interest in the research on topology transition [START_REF] Al-Ketan | Functionally graded and multi-morphology sheet tpms lattices: Design, manufacturing, and mechanical properties[END_REF][START_REF] Ren | Transition boundaries and stiffness optimal design for multi-tpms lattices[END_REF], this paper only focuses on the beam-based lattice topologies. The rest of this paper is organized as follows. Section 2 reviews the literature on the existing geometric modeling methods that support AM of lattice structures with multiple topologies. Section 3 describes the method that is proposed to be used as a support for the geometric modeling of lattice structures with multiple topologies. Section 4 documents the technical aspects of the implementation, as well as provides several use cases of lattice structures modeled with the proposed approach. The conclusions and directions of future research are provided in section 5.
GEOMETRIC MODELING OF MULTI-TOPOLOGY BEAM-BASED LATTICE STRUCTURES
Normally, the unit cells in lattice structures are aligned in a three-dimensional (3D) pattern in which every two neighboring unit cells share a side. This work, however, focuses on the geometric modeling of more unconventional methods of aligning the unit cells. This includes the modeling of lattice structures in which the topologies are aligned at an angle to each other, as well as the variation of topologies that is ensured by the variation of geometric parameters. The rest of this section is organized as follows. Concepts that are relevant to the geometric modeling of lattice structures are reviewed in section 2.1. Section 2.2 reviews the function representation (F-rep) methods that are applicable to the geometric modeling of lattice structures. One of the main challenges that were identified in the literature corresponds to the so-called connectivity issue, which is introduced in detail in section 2.3.
Geometric modeling concepts
The geometric modeling of lattice structures has been extensively reviewed in the literature [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF][START_REF] Liu | A survey of modeling and optimization methods for multi-scale heterogeneous lattice structures[END_REF][START_REF] Savio | Geometric modeling of lattice structures for additive manufacturing[END_REF]. Similarly to every CAD tool, any lattice modeling tool has a geometric modeling kernel (GMK) at its core. A GMK is a software representation of a set of geometric theorems and axioms that is crucial to defining the shape of a solid body. While multi-topology beam-based lattice structures have been a topic of interest in AM research, the geometric modeling methods used to generate their respective solid models are limited [START_REF] Lertthanasarn | Hierarchical strengthening of polycrystal-inspired lattice materials[END_REF]. The literature review performed in the preceding works has revealed that not many tools allow variation of geometric parameters other than the thickness of the lattice beams [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF][START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]. In this work, the solid model S ⊂ R^3 is defined in the 3D design space X ⊂ R^3 as a set of points bounded by its boundary surface ∂S, which is defined as
∂S := S ∩ (X - S)   (1)
and is a 2-manifold M_g^2 of finite genus g. The design space X is considered to be defined by the design and environment constraints. Consider that the boundary surface is defined by a function F(X) such that all X satisfying F(X) ≥ 0 lie inside the solid body. Then,
S := {X | F(X) ≥ 0}   (2)
and Eqn. (1) converges to
∂S = {X | F(X) = 0}.   (3)
Geometric modeling in general can be classified into surface and volumetric modeling. Surface modeling represents a solid body solely by its surface boundary. The most common way of surface modeling is based on boundary representation (B-rep), in which only ∂S is required to be defined, i.e. all points satisfying F(X) = 0 are required to be modeled. Similarly, volumetric modeling requires finding a solution for F(X) ≥ 0. The key differences and advantages of both approaches are covered in the preceding works [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF][START_REF] Letov | Volumetric cells: A framework for a bio-inspired geometric modelling method to support heterogeneous lattice structures[END_REF].
Function representation applicability for geometric modeling of beam-based lattice structures
Geometric modeling approaches that define the geometric shape of the target solid body by providing either an explicit or an implicit form of F(X) are called the F-rep methods [START_REF] Pasko | Function representation in geometric modeling: concepts, implementation and applications[END_REF]. F-rep is a powerful approach that allows direct control over the shape of the desired solid body given that its shape can be described by a mathematical equation.
In F-rep, a solid body is defined by the inequality
F(X) ≥ 0,   (4)
where X = (x, y, z) ⊂ R^3 is the design space and F(X) is defined in such a way that F(X) ≥ 0 is the solid itself, F(X) = 0 corresponds to the surface of the solid, and F(X) < 0 is the rest of the design space [START_REF] Pasko | Function representation in geometric modeling: concepts, implementation and applications[END_REF]. Moreover, F-rep has proved itself applicable for the geometric modeling of heterogeneous lattice structures [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]. This is achieved by expanding the classical F-rep definition of geometry given by Eqn. (4), composing it of a function P(X) that defines the geometric parameters of the lattice unit cells and a function T(X) that defines the topology of the lattice, i.e.
F(X) = (P ∘ T)(X) ≥ 0.   (5)
Figure 3 illustrates the sequential mapping described by Eqn. (5). Note that the order of the composition matters. First, the topology of the whole multi-topology lattice is defined by the mapping t = T(X), where t ∈ Z^3 is a scalar integer space that describes the distribution of topologies in the design space X. t is discrete because the set of unit cells is a countable set, i.e. each unit cell can be assigned a finite number. No topology at a certain x ∈ X is a case of 0-topology where no geometry exists. These points x, however, lie outside the solid where F(X) < 0 and are thus redundant to be defined. Then, the geometric parameters of every region of the lattice are applied to the topology by the mapping p = P(t). If p is a geometric parameter and F = p - T(X), then P(T(X)) is a trivial function, and the resulting lattice structure is equivalent to its skeletal graph with 0 thickness, or T. In general, p ∈ R^g, where g is the number of geometric parameters required to define a solid model of a lattice unit cell. For example, the lattice thickness and the unit cell size can be parameters that are needed to fully define the solid model of a unit cell.
Fig. 3: The mapping of the function T that defines the topology and the function P that defines the geometric parameters of a single heterogeneous lattice structure
Functions P and T define two different aspects of the heterogeneous geometry of a lattice structure as shown in Fig. 2. The mathematical foundations of this approach are described in more detail in the preceding work [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]. This approach allows setting a constant topology T and a custom set of geometric parameters P. Topology in this approach is defined by its skeletal graph, which is a set of straight line segments that are defined by mathematical equations and intersect at nodes. In other words, a skeletal graph is a single unit cell with the 0 thickness of beams.
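Before turning to a concrete topology, the following minimal sketch illustrates the composition F = (P ∘ T)(X) in code. It is an illustration only, not the released implementation: the skeletal graph T is given as a hypothetical list of straight segments, and P turns the distance to the nearest segment into a beam of diameter d, so that F(X) ≥ 0 holds inside the beams.

```python
import numpy as np

# Illustrative sketch: a skeletal graph as a list of segments (p0, p1),
# thickened into beams of diameter d via an F-rep field F(X) >= 0.
# The segment list below is a placeholder, not a predefined topology
# from the published tool.

def point_segment_distance(x, p0, p1):
    """Euclidean distance from point x to the segment p0-p1."""
    x, p0, p1 = map(np.asarray, (x, p0, p1))
    v = p1 - p0
    t = np.clip(np.dot(x - p0, v) / np.dot(v, v), 0.0, 1.0)
    return np.linalg.norm(x - (p0 + t * v))

def lattice_field(x, segments, d):
    """F(x) = d/2 - dist(x, skeletal graph); F >= 0 lies inside a beam."""
    return d / 2.0 - min(point_segment_distance(x, p0, p1) for p0, p1 in segments)

if __name__ == "__main__":
    u = 10.0  # unit cell size, mm (placeholder)
    segments = [((0, 0, 0), (u, u, 0)), ((u, 0, 0), (0, u, 0))]  # two face diagonals
    for probe in [(u / 2, u / 2, 0.0), (u / 2, u / 2, 3.0)]:
        inside = lattice_field(probe, segments, d=1.0) >= 0
        print(probe, "inside" if inside else "outside")
```

Changing the segment list changes T, while changing d (or replacing it with a spatially varying function) changes P, which is exactly the separation of concerns used in the rest of this section.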
For example, a topology defined by
T(X) : { x ∈ {0, u}, z ∈ {y, -y + u};  y ∈ {0, u}, z ∈ {x, -x + u};  z ∈ {0, u}, y ∈ {x, -x + u} },  x, y, z ∈ [0, u],   (6)
where u is the side of the cubic unit cell, describes 12 straight line segments bounded by the cubic region x, y, z ∈ [0, u], with each segment going from one vertex to the opposite one within the same face of the cube. In this notation, α ∈ {β, γ} means a union of α = β and α = γ. So, x ∈ {0, u} means x = 0 and x = u. Eqn. (6) describes the skeletal graph of the FCC topology, which is sketched in Fig. 4. The definition of the geometric parameter function P allows adding thickness and other geometric parameters to the skeletal graph, thus enabling the modeling of solid bodies. Figure 5 shows some of the topologies inspired by the metal crystal structure and which are supported by default in this approach. All the illustrated unit cells have the unit cell size u = 10 mm, the beam diameter d = 1 mm, and the node diameter D = 1.1 mm. Note that the FCC topology observed in Fig. 5a is obtained by adding the thickness parameter to the skeletal graph defined by Eqn. (6) and illustrated in Fig. 4. Figure 6 illustrates other beam-based topologies that are supported by the approach. Note that the rhombicuboctahedron and the truncated cube topologies can optionally require the truncation size parameter as they are based on truncated polyhedrons. These unit cells have the unit cell size u = 10 mm, the beam diameter d = 1 mm, and the node diameter D = 1.1 mm. F-rep is a powerful geometric modeling technique that greatly expands the complexity of the solid models it can produce [START_REF] Kartasheva | An implicit complexes framework for heterogeneous objects modelling[END_REF]. This method, however, introduces additional complexity to the design process itself by providing more complex tools to model solid bodies. The advantage of the method that is proposed to be used in this work lies in its simplification of the F-rep for the modeling of lattice structures. The software implementation of the approach, which has been released as free and open-source (FOSS) software [START_REF] Letov | jalovisko/LatticeQuery 0.1LQ[END_REF], has a list of predefined topology functions T with the ability to extend them at will. For instance, Eqn. (6) is essentially simplified to T(X) : FCC(X). While providing a powerful tool for the modeling of heterogeneous lattice structures with custom geometric parameters P, this approach is still limited to supporting only constant values of T within each region of the structure. Note that this approach does not yet support stochastic lattice structures, and thus this work does not focus on the connectivity of stochastic topologies.
Connectivity issue
In beam-based topologies, the connectivity issue arises when there is no well-defined physical connection between the beams of two neighboring topologies. The connectivity issue affects the quality of the solid model of the lattice structure, as well as its manufacturability. Figure 7 sketches a scheme of a lattice structure with multiple topologies that have the connectivity issue. This work focuses only on the connectivity of the beam-based topologies.
To address this issue appropriately, a list of such topologies should be made. A substantial number of the beam-based topologies are inspired by the cubic crystal system in crystallography due to their ability to reinforce the structure in specific directions [START_REF] Maskery | An investigation into reinforced and functionally graded lattice structures[END_REF]. These topologies include simple cubic, body-centered cubic (BCC), face-centered cubic (FCC), as well as variations of these topologies such as self-supporting FCC without horizontal beams (S-FCC), BCC with additional 4 z-direction oriented beams (BCCz), FCC with additional 4 z-direction oriented beams (FCCz), S-FCCz, face- and body-centered cubic (FBCC), S-FBCC, and S-FBCCz [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]. Any of these topologies can be combined within a lattice structure without connectivity issues given the parallel translation of unit cells within it. All the topologies inspired by the metal crystal structure which are listed above share the same cubic shape of their unit cell, as well as at least 4 common nodes. Thus, the connectivity of these topologies can be efficiently achieved. However, countless other beam-based topologies that are not inspired by crystallography exist. Among the topologies that are extensively used in AM, there are the diamond, rhombicuboctahedron, truncated cube, and truncated cuboctahedron topologies. All of these except for the diamond topology are normally able to transition in parallel from one to another without significant connectivity issues. The diamond topology is not plane symmetrical, i.e. it cannot be obtained by mirroring a subset of that topology about a plane. This effect limits the application of the diamond topology in lattice structures with multiple topologies. The transition of unit cells with different topologies is not limited to the parallel case. For example, assigning BCC and FCC topologies oriented in different directions to different regions of the same lattice has been shown to mimic the damage-resisting properties of crystal structures [START_REF] Pham | Damage-tolerant architected materials inspired by crystal microstructure[END_REF]. The connectivity issue, in this case, is often mitigated by introducing additional beams between the unmatched nodes in the transition plane for support [START_REF] Lertthanasarn | Influence of the base material on the mechanical behaviors of polycrystal-like meta-crystals[END_REF]. However, these additional beams in the transition region can affect the mechanical properties that arise in it, thus making the outcome of the design process less predictable. In particular, it has been found that the mechanical properties of homogeneous lattice structures are easily predictable by various techniques such as the homogenization technique [START_REF] Somnic | Status and challenges in homogenization methods for lattice materials[END_REF]. On the contrary, the connectivity region between various topologies is greatly affected by the geometric properties of the transition region, which are more difficult to predict if stochastic [START_REF] Wang | Locally resonant band gaps in periodic beam lattices by tuning connectivity[END_REF].
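As a small illustration of the shared-node argument above, the sketch below lists the nodes that FCC and BCC unit cells expose on a common cube face and reports the matching ones. The node sets are written out under the usual FCC/BCC conventions for a cubic cell of side u (corners plus face centres for FCC, corners plus body centre for BCC); they are an assumption for illustration, not output of the released tool.

```python
# Sketch: which nodes do two crystal-inspired unit cells share on a common face?

def fcc_nodes(u):
    corners = [(x, y, z) for x in (0, u) for y in (0, u) for z in (0, u)]
    face_centers = [(u/2, u/2, 0), (u/2, u/2, u), (u/2, 0, u/2),
                    (u/2, u, u/2), (0, u/2, u/2), (u, u/2, u/2)]
    return corners + face_centers

def bcc_nodes(u):
    corners = [(x, y, z) for x in (0, u) for y in (0, u) for z in (0, u)]
    return corners + [(u/2, u/2, u/2)]

def shared_on_face(nodes_a, nodes_b, axis=2, value=0.0, tol=1e-9):
    """Nodes lying on the plane {X[axis] = value} that both cells contain."""
    on_face = lambda pts: {p for p in pts if abs(p[axis] - value) < tol}
    return sorted(on_face(nodes_a) & on_face(nodes_b))

if __name__ == "__main__":
    u = 10.0
    common = shared_on_face(fcc_nodes(u), bcc_nodes(u), axis=2, value=0.0)
    print(f"{len(common)} shared nodes on the interface plane:", common)
```

For these two cells the script reports the 4 corner nodes of the shared face, which is consistent with the "at least 4 common nodes" observation made above.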
THE PROPOSED FUNCTION REPRESENTATION APPROACH
Multiple commercial software packages that can model heterogeneous lattice structures exist. Many of those have been extensively reviewed in the literature [START_REF] Letov | Challenges and opportunities in geometric modeling of complex bio-inspired threedimensional objects designed for additive manufacturing[END_REF][START_REF] Liu | A survey of modeling and optimization methods for multi-scale heterogeneous lattice structures[END_REF]. One of the most notable examples of such a tool is nTopology [START_REF]Next-Generation Engineering Design Software Online[END_REF], which allows the modeling of highly complex lattice structures with heterogeneous parameters. However, the only geometric parameter that is allowed to be controlled in the majority of such tools is the thickness of the lattice. Moreover, there are not many open-source geometric modeling tools that would be able to model such complex shapes, while the ability to extend the software with additional functionality is of the essence. Thus, it is proposed to extend the F-rep approach described in section 2.2 to implement the connectivity of multiple topologies within the same lattice structure.
Fig. 7: A schematic of a lattice structure with multiple topologies [START_REF] Liu | Two-scale concurrent topology optimization of lattice structures with connectable microstructures[END_REF] (Permission to reprint from Elsevier © 2020)
The advantages of this approach include:
- Its ability to model a large number of beam-based topologies.
- Its ability to extend the number of supported topologies by defining a custom function T that defines their skeletal graphs.
- Its support for defining a custom distribution of geometric parameters by providing control over the function P that defines them.
- Its ability to control geometric modeling parameters other than the thickness of the beam, such as, for example, the shape of the beam cross-section. This is one of the key differences between this approach and other alternative approaches.
- Its release as a software prototype in the form of a free and open-source tool [START_REF] Letov | jalovisko/LatticeQuery 0.1LQ[END_REF], which allows further improvements of the software.
Connectivity of beam-based topologies by the transition plane
A special case of topology transition at certain angles can be supported by the proposed approach. It is proposed to define the transition between the beam-based topologies by a transition plane. Since it is proposed to utilize the F-rep framework described in section 2.2, this transition plane ought to be defined as a function. Consider two topologies for which the skeletal graphs T_1 and T_2 are known. The transition plane P between them is a plane of the scalar form
P(X) : a_t (x - x_0t) + b_t (y - y_0t) + c_t (z - z_0t) = 0,   (7)
where a_t, b_t, and c_t are the components of the normal vector n_t = (a_t, b_t, c_t) of the transition plane, and x_0t, y_0t, and z_0t are the coordinates of an arbitrary point on P. In this case, the skeletal graph of the lattice structure with the two topologies that are separated by a transition plane P can be described as
T(X) = { T_1(X) | P < 0;  T_2(X) | P > 0;  (T_1 ∪ T_2)(X) | P = 0 },   (8)
or, in the general case,
T(X) = { T_i(X) | P_ij < 0;  T_j(X) | P_ij > 0;  (T_i ∪ T_j)(X) | P_ij = 0 },  ∀ i, j ∈ [1, ..., N],   (9)
where P_ij is the transition plane between topologies T_i and T_j, and N is the total number of regions with different topologies.
Note that P_ij = -P_ji is assumed to account for the change of the direction of the normal vector for each corresponding transition plane. As an example, consider a lattice structure consisting of two topologies T_1 and T_2 which are oriented as sketched in Fig. 8. Let T_1 be a beam-based topology with a cubic unit cell with the side u_1. Let the transition plane P between them be defined as follows:
P(X) : x + z - p u_1 = 0,   (10)
where u_1 is the side of the cubic unit cell with the T_1 topology and p is the number of unit cells between the origin and the transition plane along the x-axis. In this case, the normal vector n_t of the transition plane P forms 45° with the positive direction of the x-axis. The T_2 topology has a cuboid shape of its unit cell with the dimensions of u_2, u_1, and u_2 in the x_2, y_2, and z_2 directions, respectively, where u_2 = u_1 / √2. X_2 = (x_2, y_2, z_2)^T is obtained by the Euclidean plane transformation of rotation as X_2 = R X_1 = R X, where R is the rotation matrix defined as
R = (√2 / 2) [ 1 0 1 ; 0 √2 0 ; -1 0 1 ].   (11)
Note that the additional translation matrix is optional since T_2 is not defined for P < 0.
Controllable truncation as a means to achieve topology transition
Some topologies have an optional truncation parameter required to fully define their skeletal graph T. For example, the rhombicuboctahedron (Fig. 6a) and the truncated cube (Fig. 6b) can have an additional truncation parameter τ that can be defined by the function P that defines the geometric parameters. The strict definition of many truncated polyhedrons assumes that every edge in them has an equal length; thus, the truncation in these truncated polyhedrons is considered to be fully determined. This work, however, steps back from the strict definitions of truncated polyhedrons. Consider a skeletal graph T of the truncated cube topology with the unit cell size u sketched in Fig. 9. The skeletal graph is defined according to the F-rep principles [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF] as follows:
T(X) : {
x ∈ {0, u}: -y + τ = z,  y - u + τ = z,  y = z - u + τ,  -y + u = z - u + τ;
y ∈ {0, u}: -x + τ = z,  x - u + τ = z,  x = z - u + τ,  x - u + τ = -z + u;
z ∈ {0, u}: -x + τ = y,  x - u + τ = y,  x = y - u + τ,  -x + u = y - u + τ;
x ∈ [τ, u - τ], y ∈ {0, u}, z ∈ {0, u};
y ∈ [τ, u - τ], x ∈ {0, u}, z ∈ {0, u};
z ∈ [τ, u - τ], x ∈ {0, u}, y ∈ {0, u}.
}   (12)
Here, the first 3 subsystems of equations correspond to the line segments that define the truncated faces and the other 3 subsystems of equations correspond to the edges of the cube. The truncated cube, if defined as an Archimedean solid, assumes that τ is defined in such a way that every edge has an equal length. Thus
u = 2τ + √2 τ   (13)
or
τ = u / (2 + √2).   (14)
This work assumes that the truncation can take any real value in the range τ ∈ [0, u/2], or, for simplicity, between 0% and 100%. Observe that in the two extreme cases, when the value of τ takes 0 or u/2, Eqn. (12), which defines the truncated cube illustrated in Fig. 6b, converges to the simple cubic topology defined as
T(X) : { x ∈ {0, u}, y ∈ {0, u};  y ∈ {0, u}, z ∈ {0, u};  x ∈ {0, u}, z ∈ {0, u} }   (15)
and to the cuboctahedron topology, respectively. This effect is known as the complete quasi-truncation.
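The sketch below is a numerical illustration of Eqns. (13)-(14) and of the quasi-truncation limits just discussed; it only evaluates the truncation parameter for a cubic cell of side u and is not part of the modeling tool itself.

```python
import math

# Truncation parameter of the truncated-cube topology (Eqns. 13-14) and its
# quasi-truncation limits for a cubic cell of side u.

def archimedean_truncation(u: float) -> float:
    """tau such that all edges of the truncated cube have equal length."""
    return u / (2.0 + math.sqrt(2.0))

if __name__ == "__main__":
    u = 10.0  # mm
    tau = archimedean_truncation(u)
    edge_face = u - 2.0 * tau        # straight edge left on a cube face
    edge_cut = math.sqrt(2.0) * tau  # edge created by the corner cut
    print(f"Archimedean tau = {tau:.3f} mm")
    print(f"edge lengths: {edge_face:.3f} mm vs {edge_cut:.3f} mm (equal by construction)")
    print("tau = 0       -> simple cubic topology")
    print(f"tau = u/2 = {u / 2:.1f} -> cuboctahedron topology")
```

For u = 10 mm the Archimedean value is about 2.93 mm, well inside the range τ ∈ [0, u/2] over which the quasi-truncated skeletal graph is allowed to vary in this work.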
Equations (12) and (15) are defined on x, y, z ∈ [0, u]. The approach is similarly applied to the rhombicuboctahedron topology. In the two extreme complete quasi-truncation cases, when the value of τ takes 0 or u/2, the rhombicuboctahedron topology converges to the simple cubic topology and the octahedron topology, respectively. By considering the truncation τ as a variable instead of a constant, the proposed approach can achieve the transition of one topology T_1 into another topology T_2 by defining the function P that defines the geometric parameters. Note that the skeletal graph is different for any two different values of truncation τ ∈ [0, u/2]. This effect blurs the differences between defining the geometric parameters with the function P and defining the topology with the function T.
IMPLEMENTATION
It was decided to extend the functionality of a previously developed implementation of the described F-rep approach [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF][START_REF] Letov | jalovisko/LatticeQuery 0.1LQ[END_REF]. This approach is discussed in detail in section 2.2 and its framework is described by Eqn. (5). However, the implementation has only supported the variation of geometric parameters P and not the variation of topologies T. Thus, adjustments to the developed tool needed to be made. The developed tool is based on CadQuery [START_REF] Urbańczyk | Innovations Technology Solutions[END_REF] with the Open CASCADE Technology (OCCT) [START_REF] Cascade | Open Cascade -software development company[END_REF] GMK. This allows the implementation of the same software development practices as in the previous work. Moreover, the software prototype developed this way remains cross-platform, thus enhancing the flexibility and applicability of the software.
Connectivity of beam-based topologies by the transition plane
As described in section 3.1, it is possible to achieve the transition of topology T_1 into topology T_2 by defining the transition plane P with an arbitrary position and orientation. This also enables support for non-cubic unit cells, as one of the topologies may have the form of a cuboid. As an example of such a topology transition, consider a heterogeneous lattice structure with a total size of 37.5 × 37.5 × 37.5 mm^3 which consists of topologies T_1 and T_2. Let T_1 and T_2 transition in the transition plane P defined as
P(X) : x + z - 37.5 = 0,   (16)
so that the normal vector of P forms 45° with the positive direction of the x-axis. Let the topology T_1 correspond to the cubic FCC with the unit cell size of u_1 = 3.75 mm and the topology T_2 correspond to the cuboid BCC aligned with the transition plane. Figure 10 illustrates the transition in detail. In this case, u_2 = u_1 / √2 ≈ 2.66 mm. The connectivity issue in this case is resolved as the nodes of the two topologies are continuously connected in the transition plane. According to the framework, after the definition of the topology T, the geometric parameters P need to be defined. The beams of the topologies in Fig. 10 are set to have the diameter d = 0.7 mm and the node diameter is set to D = 0.75 mm.
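A minimal sketch of the region selection behind Eqns. (8), (9) and (16) is given below: it assigns a topology label to each query point from the sign of the transition-plane function. The labels and the plane follow the 37.5 mm example above; the actual generation of beams from the selected skeletal graph is left to the modeling tool.

```python
# Sketch of the piecewise topology assignment T(X) driven by the sign of the
# transition plane P(X) (Eqns. 8-9), using the plane of Eqn. (16).

def transition_plane(x, y, z):
    """P(X): x + z - 37.5 = 0, normal at 45 degrees to the x-axis."""
    return x + z - 37.5

def topology_label(x, y, z, eps=1e-9):
    p = transition_plane(x, y, z)
    if p < -eps:
        return "T1 (cubic FCC, u1 = 3.75 mm)"
    if p > eps:
        return "T2 (cuboid BCC, u2 = u1 / sqrt(2))"
    return "T1 and T2 (nodes shared on the transition plane)"

if __name__ == "__main__":
    for probe in [(5.0, 10.0, 5.0), (30.0, 10.0, 30.0), (18.75, 10.0, 18.75)]:
        print(probe, "->", topology_label(*probe))
```

The same selector generalizes to several planes P_ij by evaluating each plane in turn and picking the region whose sign conditions are all satisfied.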
Controllable truncation as a means to achieve topology transition
As described in section 3.2, it is possible to achieve the transition of topology T by controlling other geometric parameters P, such as the truncation τ of topologies that are based on the truncated polyhedrons. This can be done by defining P(X) : τ(X). As an example of such a topology transition, consider a 10 × 10 × 10 lattice structure with the truncated cube topology with the unit cell size u = 10 mm. Let the truncation τ of the truncated cube topology linearly change from τ_min = 0 (0%) to τ_max = u/2 = 5 mm (100%), i.e.
P(X) : τ(z) = z,   (17)
where z ∈ [0, 1] is the variable corresponding to the z-axis. In this approach, z ∈ [0, 1] is mapped to the actual coordinate z_a ∈ [1, N_z] with z_a ∈ N^+. The beam diameter is set to 1 mm and the node diameter is set to 1.05 mm. The resulting heterogeneous lattice structure is illustrated in Fig. 11. Note that at z = 0 the topology T that is described by Eqn. (12) converges to the simple cubic topology defined by Eqn. (15), and at z = 1 it converges to the cuboctahedron topology. The approach allows simultaneous control over different geometric parameters in different directions. In this example, the beam thickness is an additional parameter that linearly increases from 0.5 mm on the left to 5.0 mm on the right, similarly to the truncation. Note that the lattice nodes have a diameter larger than the diameter of the beams, and it is set to linearly increase from 0.55 mm to 5.5 mm. The truncation can be one of the potential output parameters of a topology optimization algorithm. For example, different regions of the lattice can be assigned a different truncation depending on whether the region is subject to bending, compression, or tension loads [START_REF] Alghamdi | Effect of additive manufactured lattice defects on mechanical properties: an automated method for the enhancement of lattice geometry[END_REF]. Additionally, since the control over the truncation allows a smooth and continuous transition between topologies, this approach can find its application in lattice embedding [START_REF] Sanders | Optimal and continuous multilattice embedding[END_REF]. The estimation of mechanical properties in the lattice transition region can be more accurate due to the lack of stochastic geometric parameters. Another example of a topology that can support the truncation-based topology transition is the rhombicuboctahedron topology. Consider a 10 × 10 × 10 lattice structure with the rhombicuboctahedron topology with the unit cell size u = 10 mm. Let the truncation τ of the rhombicuboctahedron topology linearly change similarly to the previous example, according to Eqn. (17). Similarly, the beam diameter is set to linearly increase from left to right. The resulting heterogeneous lattice structure is illustrated in Fig. 12. Note that at z = 0 the topology T converges to the simple cubic topology, and at z = 1 it converges to the octahedron topology.
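The grading of Eqn. (17) can be written as a short parameter function. The sketch below maps each of the N_z = 10 cell layers of the example above to its normalized coordinate z ∈ [0, 1], the corresponding truncation τ(z) expressed in millimetres, and the linearly graded beam diameter quoted in the text; the layer-to-z mapping is an assumption made for illustration.

```python
# Sketch of the layer-wise parameter grading used in the truncated-cube example:
# tau grows linearly with the normalized z (Eqn. 17), and the beam diameter is
# graded between the values quoted in the text.

def graded_parameters(layer: int, n_layers: int, u: float = 10.0,
                      d_min: float = 0.5, d_max: float = 5.0):
    """Return (z, tau in mm, beam diameter in mm) for a 1-based cell layer."""
    z = (layer - 1) / (n_layers - 1)   # map layer index to z in [0, 1]
    tau = z * u / 2.0                  # 0 % ... 100 % truncation, in mm
    d = d_min + z * (d_max - d_min)    # graded beam diameter
    return z, tau, d

if __name__ == "__main__":
    n_layers = 10
    for layer in range(1, n_layers + 1):
        z, tau, d = graded_parameters(layer, n_layers)
        print(f"layer {layer:2d}: z = {z:.2f}, tau = {tau:.2f} mm, d = {d:.2f} mm")
```

The first layer reproduces the simple cubic limit (τ = 0) and the last one the cuboctahedron limit (τ = u/2), matching the two extreme cells visible in Fig. 11.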
Performance analysis The performance testing of the implemented approach was performed on a machine equipped with the AMD Ryzen™ 7 3700X central processing unit (CPU) with a 3.20 GHz of clock rate, the NVIDIA® GeForce® RTX 2070 Super graphics processing unit (GPU) with 8 GB of memory, 16 GB of random-access memory (RAM), a solid-state drive (SSD) and the Linux operating system. The modeling precision is customizable and can be set from the software settings. In this work, the modeling precision was set to be 0.1 mm. The performance of the developed approach when applied to some of the covered examples is listed in Table 1. The previously developed software prototype supports the export of the resulting solid models into a stereolithography (STL) file encoded with the American Standard Code for Information Interchange (ASCII), and a STEP file defined by the ISO 10303-21 standard [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 21: Implementation methods: Clear text encoding of the exchange structure Standard, International Organization for Standardization[END_REF]. The manufacturability of the lattice is also an important aspect to consider when analyzing the resulting models [START_REF] Zhang | Hybrid sparse convolutional neural networks for predicting manufacturability of visual defects of laser powder bed fusion processes[END_REF]. The manufacturability check is run on the resulting STL models using the Preform 3D printing software [START_REF]Preform 3D Printing Soft-ware[END_REF]. It was decided to run the manufacturability checks for the transition plane connectivity similar to the example shown in Fig. 10 with all combinations of topologies Table 1: The performance metrics of the geometric modeling with the proposed approach that are inspired by the crystal metal structure and that are listed in Fig. 5. Also, the same manufacturability checks have been performed for the truncation-based topology transition. The basic manufacturability checks within the software have been successfully passed. Moreover, a case with multiple transitions of topologies was decided to be manufactured with the Formlabs Form 2 stereolithography 3D printer [START_REF]Form 2: Affordable Desktop SLA 3D Printer. Online[END_REF]. Stereolithography 3D printers allow highly accurate AM with a smooth finish [START_REF] Bhattacharjee | Desktop-stereolithography 3D-printing of a poly (dimethylsiloxane)-based material with sylgard-184 properties[END_REF]. The material was chosen to be Formlabs Elastic 50A [START_REF]Material data sheet. Elastic 50A. Online[END_REF]. It was decided to have 5 layers of FCC and 5 layers of BCC each with an equal layer thickness and oriented at 45 • to the horizontal plane as illustrated in Fig. 13a. The FCC unit cell is cubic, and its size is set to u 1 = 3.75 mm. The BCC unit cell is cuboid and according to the framework described in section 3.1 and Fig. 8, its smaller side is u 2 = u 1 / √ 2 ≈ 2.65 mm. The beam diameter for both topologies is set to d = 0.7 mm and the node diameter is set to D = 0.75 mm. Additional 3.75 mm thick plates were added at the bottom and at the top of the lattice structure to support it during the AM process. The resulting print is illustrated in Fig. 13b. 
Note that one of the plates is bent in an arc, which can be explained by its shrinkage due to the residual stress occurring in it during the print, as this plate was the one attached to the printing platform [START_REF] Milovanović | Experimental dimensional accuracy analysis of reformer prototype model produced by FDM and SLA 3D printing technology[END_REF]. CONCLUSION AND FUTURE RESEARCH This work provides a novel geometric modeling approach that allows the generation of lattice structures with multiple topologies. The topology transition based on the transition plane has been shown to enable cuboid unit cells. A novel approach for transforming one topology into another, based on the variation of geometric parameters, has been proposed and implemented in a software prototype. Several use cases of topology transition through the variation of the truncation parameter have been covered. These topology transformations have been implemented with a novel F-rep approach. The integration in the design loop is proposed to be achieved by providing access and guidance to the design community and to a list of manufacturers which utilize lattice structures. The aerospace sector is one of the sectors of industry that heavily rely on lightweight structures with unique thermophysical properties, and it is thus proposed as a potential venue for exploration. The tool is released as documented FOSS, which should enhance the user experience and foster further agile development of the tool. It is proposed to investigate the topology transition between TPMS-based topologies and the applicability of F-rep to this transition. While this topic has been of interest and has seen several successful implementations, there may be ways to improve the existing methods with F-rep approaches. It is also important to find industrial use cases of extreme heterogeneity of lattice structures that could benefit from the proposed approach. The diamond beam-based lattice structure is an example of a beam-based topology that does not have conventional nodes at the edges of its cubic unit cell. Moreover, the properties of such a lattice depend on the orientation of the unit cells. It is proposed to investigate the application of the proposed approach to this topology. The software prototype that embeds the proposed approach has been developed according to object-oriented programming principles, which enables further extension of the framework. There is evidence that lattice structures with multiple topologies can possess unique properties. It is proposed to model geometrically complex lattice structures that would be challenging to manufacture with other existing means. The future work is also proposed to focus on supporting multi-scale hierarchical lattice structures with the F-rep approach.
Fig. 4: An FCC unit cell described by Eqn. (6). The thicker lines correspond to the beams that are located in visible faces of the arbitrary cube with the side u. The edges of the arbitrary cube are represented as dotted lines. The nodes are represented as circles.
Fig. 5: Various beam-based topologies inspired by the cubic crystal system supported by the F-rep approach [START_REF] Letov | A geometric modeling framework to support the design of heterogeneous lattice structures with non-linearly varying geometry[END_REF]
Fig. 8: A diagram of two topologies transitioning in a plane
Fig. 10: The transition between the FCC topology with the cubic unit cell to the BCC topology with the cuboid unit cell along the transition plane P. Note that the BCC topology is rotated 45°.
Fig. 11: A heterogeneous lattice structure with the topology based on a truncated cube with the truncation parameter τ varying along the z-axis. The topology converges to the simple cubic at the bottom and the cuboctahedron topology shifted to half of the unit cell size at the top. The thickness of the beams is linearly varying along the x-axis.
Fig. 12: A heterogeneous lattice structure with the topology based on a rhombicuboctahedron with the truncation parameter τ varying along the z-axis. The topology converges to the simple cubic at the bottom and the octahedron topology at the top. The thickness of the beams is linearly varying along the x-axis.
Fig. 13: A use-case showing the lattice transition with the proposed approach. (a) The orientation of topologies for the manufacturability performance test. (b) The resulting print with a zoomed-in view on the topology transition region. In the zoomed-in view, the transition planes are marked with dotted lines and the instances of the FCC and BCC unit cells are marked with dashed lines.
ACKNOWLEDGEMENTS This research work is supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN-2018-05971.
04098579
en
[ "chim.theo" ]
2024/03/04 16:41:20
2023
https://theses.hal.science/tel-04098579/file/113960_BIZOT_2023_archivage.pdf
Chapter 1 Context of the study Because of climate change and sustainable development, it is necessary to adopt new processes for the manufacture of metallic parts, lowering material losses and reducing the cost of the final product. Unlike traditional techniques based on substractive processes, powder metallurgy offers new perspectives. Powder metallurgy consists of three basic steps: powder blending, die compaction, and sintering, and encompasses a large number of techniques such as Spark Plasma Sintering (SPS) or Hot Isostatic Pressing (HIP). Additive manufacturing (AM) is a relatively novel family of techniques which use metal powders (among other materials, such as plastics) to make parts by laser sintering or melting. Since the advent of industrial production-scale metal powder-based additive manufacturing (AM) in the 2010s, metal AM processes are a new category of commercially important powder metallurgy applications. A key step in Powder Metallurgy is powder preparation. Ball milling is commonly used to grind or blend metallic powders. High-energy ball milling changes the reactivity of as-milled solids (mechanical activation), induces phase transformations and defects in starting powders, or provokes chemical reactions (mechanochemistry). In material sciences, the understanding of the relationship process-microstructureperformance is of primary importance. In this respect, microstructure modeling is useful for predicting the impact of processing conditions on performance of the resulting material. As the microstructure is associated with many processes running on different scales, a multi-scales approach is necessary. As an example, multiscale modeling of additive manufacturing of metal-based materials includes: macroscale modeling to extract temperature profiles, mesoscale modeling for evaluating melt flow, microscale modeling for investigating microstructure development, and nanoscale modeling to describe solid/liquid interfaces. As shown in Fig. 1.1, different simulation techniques were developed to address a specific problem at relevant time and length scales. In the present manuscript, we focused on the atomic level by means of classical molecular dynamics simulations. That approach can be considered as an in-situ tool that allows us to follow the elemental mechanisms governing kinetics aspects, at the mi-Figure 1.1: Schematic representation of the different numerical methods with respect to their length and time scales. From [START_REF] Goel | Horizons of modern molecular dynamics simulation in digitalised solid freeform fabrication with advanced materials[END_REF]. croscopic level. It consists in solving numerically the equations of motion of a set of atoms for which the interaction law is specified. The main limitations are the reliability of the interaction potential, the number of atoms (a few millions) and the time scale (nanosecond). This method will be presented in details in Chapter 2 together with numerous case studies in Materials Science. In this chapter, we give an overview of the processes on which we focused. Additive manufacturing is first introduced in 1.1 together with the state of the art of modeling. The mechanical treatment of powders is presented in 1.2. Additive manufacturing Additive manufacturing (AM) is a promising technology that enables complex-shaped parts with tailored properties to be produced using layer-by-layer deposition. This technology uses a local high-power heat source to melt material in the form of powder or wire. 
The melted material solidifies as the heat source moves away from the molten region [START_REF] Felix | Literature review of metal additive manufacturing defects[END_REF][START_REF] Lewandowski | Metal Additive Manufacturing: A Review of Mechanical Properties[END_REF][START_REF] Tepylo | Laser-Based Additive Manufacturing Technologies for Aerospace Applications[END_REF][START_REF] Anant | Additive manufacturing: A review on 3 D printing of metals and study of residual stress, buckling load capacity of strut members[END_REF]. The main advantage of metal AM compared to conventional manufacturing is that it is possible to design complex shapes in one step, while reducing waste, and eliminating assembly time and cost. The impact on the environment is limited by reducing energy consumption and carbon footprint. The first AM technology was developed in the 80s to manufacture three dimensional objects layer-by-layer with photopolymers using UV light beam. Since many technologies have been developed and several materials can be used such as polymers, metals, ceramics, biochemicals, or glass. Frazier et al. [START_REF] Frazier | Metal Additive Manufacturing: A Review[END_REF] published in 2014 a review article that classified the metal AM technologies regarding the material feed stock, the energy source, and the build volume. Among all the available approaches, the two main categories are power bed fusion (PBF) and direct energy deposition (DED) (see Fig. 1.2). In PBF, a thermal energy source such as a laser or an electron beam is used to melt or fuse materials previously deposited on a bed in powder form. Once a powder layer has been processed by the laser, it is lowered and a new layer of fresh powder feed again the bed. This process is repeated until the complete metallic part is designed. In DED, the powder or wire is streamed through a nozzle and a heat source is focused on this stream to melt the metal. The melted material is then deposited layer by layer with the desired shape on a substrate. Different sub-categories exist depending on their specificity such as the type of heat source (laser, arc or electron beam) and material feed-stock (wire or powder). duced by an energy source (here a laser). From [START_REF] Zheng | Melt pool boundary extraction and its width prediction from infrared images in selective laser melting[END_REF]. On the right, microstructure associated to gradient and velocity in AM technologies: Selective Laser Melting (SLM), Wire Arc Additive Manufacturing (WAAM), Laser Metal Deposition (LMD). From [START_REF] Bermingham | Revealing the Mechanisms of Grain Nucleation and Formation During Additive Manufacturing[END_REF]. SLM is a Power Bed Fusion technology while WAAM and LMD are Direct Energy Deposition techniques. WAAM uses electric arc as heat source and wire as feed-stock. LMD technology uses a laser as heat source and the feed-stock is either powder either wire. The number of metallic materials used in metallic AM is constantly increasing. 
The most used are titanium alloys [START_REF] Dutta | The Additive Manufacturing (AM) of titanium alloys[END_REF], aluminum alloys [START_REF] Nesma | 3D printing of Aluminium alloys: Additive Manufacturing of Aluminium alloys using selective laser melting[END_REF], nickel-based alloys [START_REF] Suresh | Additive Manufacturing of Nickel Superalloys: Opportunities for Innovation and Challenges Related to Qualification[END_REF], HEA [START_REF] Ocelík | Additive Manufacturing of High-Entropy Alloys by Laser Processing[END_REF], steels [START_REF] Haghdadi | Additive manufacturing of steels: a review of achievements and challenges[END_REF], precious metal alloys [START_REF] Ferreira | Additive Manufacturing in Jewellery Design[END_REF], copper alloys [START_REF] Horn | Additive Manufacturing of Copper and Copper Alloys[END_REF], cobalt-based alloys [START_REF] Mostafaei | Additive Manufacturing of Cobalt Alloys[END_REF], bimetals [START_REF] Bai | Additive manufacturing of bimetallic structures[END_REF] and gradient alloys [START_REF] Douglas C Hofmann | Compositionally graded metals: A new frontier of additive manufacturing[END_REF]. As shown on Fig. 1.3, innovative metal AM has already found numerous applications in the automotive, aerospace, electronics, and biomedical industries [START_REF] Vafadar | Advances in Metal Additive Manufacturing: A Review of Common Processes, Industrial Applications, and Current Challenges[END_REF][START_REF] Bhavar | A review on powder bed fusion technology of metal additive manufacturing[END_REF][START_REF] Ahn | Direct metal additive manufacturing processes and their sustainable applications for green technology: A review[END_REF][START_REF] Gisario | Metal additive manufacturing in the commercial aviation industry: A review[END_REF]. A typical AM process involves the local melting of the metal and the solidification of the melt pool (Fig. 1.4). Because the heat source is usually not uniform (a Gaussian in the case of a laser), a temperature gradient develops at the melt pool surface with a significant variation of surface tension. This effect leads to thermocapillary convection in the melt pool. Once the energy source is stopped or focus on another location, heat transfer by conduction on the solid side begins and induces progressive solidification of the meltpool. In addition, there exist radiative heat losses at the surface of the melt pool. Solidification in AM technologies is directional due to high temperature gradients and rapid due to high cooling rates. Moreover, the layers treated successively give rise to repeated melting of the materials resulting in a very complex thermal history. Thus, solidification carries out in non-stationary and non-equilibrium conditions. Yet, the way solidification develops is crucial because it determines the microstructure and performance of the final product. In general, two specific microstructures are observed during AM processes, namely a structure composed of columnar grains and/or equiaxed grains. Columnar grains (dendritic or cellular) are often associated with rapid solidification and steep thermal gradient (Fig. 1.4). Equiaxed grains result from nucleation processes (inoculant particles or impurities) or from dendrite fragments. Indeed, dendrite fragments can detach during solidification under the effect of the constraints. Later, these fragments are transported by convections within the meltpool. For both mechanisms, equiaxed grains are formed blocking columnar growth. 
Structures with columnar grains present anisotropic mechanical properties. Structures with fine equiaxed grains exhibit more homogeneous properties [START_REF] Liu | Insight into the mechanisms of columnar to equiaxed grain transition during metallic additive manufacturing[END_REF]. For this reason, structures with equiaxed grains are particularly interesting and several studies aim at the production of these structures. This represent the longitudinal section of solidified Ni-Al alloy obtained in a furnace. From [START_REF] Jung | Columnar to equiaxed transition during directional solidification in refined Al-based alloys[END_REF]. Therefore, the columnar-to-equiaxed transition (CET) is central to the control of the solidified microstructure (see Fig. 1.5). In alloy solidification, the main parameters governing CET are the temperature gradient, the velocity of the solidification front, the cooling rate, and constitutional undercooling. The critical gradient condition for fully steady-state equiaxed growth with a given density of inoculants has been estimated by Hunt and is commonly used as a rough estimate of CET [START_REF] Hunt | Steady state columnar and equiaxed growth of dendrites and eutectic[END_REF]. Kurz et al. [START_REF] Kurz | Columnar to equiaxed transition in solidification processing[END_REF] and Gaumann et al. [START_REF] Gäumann | Single-crystal laser deposition of superalloys: processing-microstructure maps[END_REF] improved Hunt's model with a dendritic growth model more appropriate for the rapid solidification observed in AM processes. In order to deal with the specific characteristics of AM, extensive experimental work (see, for example [START_REF] Bermingham | Promoting the columnar to equiaxed transition and grain refinement of titanium alloys during additive manufacturing[END_REF]) and theoretical studies have been carried out to predict favorable CET conditions. Modeling of Additive Manufacturing Modeling AM processes requires a multiscale approach: the macroscopic scale to record thermal history, the mesoscopic scale to predict melt pool dynamics, and the microscale to describe microstructure formation. Process modeling based on finite element methods or computational fluid dynamics often integrates experimental measurements to predict process defects such as cracks and pores as well as to estimate surface roughness of the final parts [START_REF] Heang Kuan Tan | Microstructure modelling for metallic additive manufacturing: a review[END_REF]. At the microscale, stochastic models such as cellular automata and kinetic Monte Carlo can provide micrographs of grain structures close to those observed experimentally. Such models nevertheless leave aside many physical phenomena, relying on effective parameters that are not transferable, but have to be fitted for each new material of interest [START_REF] Spittle | Columnar to equiaxed grain transition in as solidified alloys[END_REF]. In contrast, phase field modeling provides a much more comprehensive description of solidification at the microscale that involves the formation of dendrites, microsegregation, and precipitation [START_REF] Kurz | Progress in modelling solidification microstructures in metals and alloys. Part II: dendrites from 2001 to 2018[END_REF]. 
Phasefield can also be combined with cellular automaton to study competitive growth of columnar dendritic grains under temperature gradient [START_REF] Dorari | Growth competition between columnar dendritic grains -The role of microstructural length scales[END_REF]. The multiscale phase field method can also be used to describe heterogeneous nucleation, grain selection, and epitaxial growth to assess the role of AM parameters in CET [START_REF] Liu | Insight into the mechanisms of columnar to equiaxed grain transition during metallic additive manufacturing[END_REF]. However, at nanoscale, few models report CET, except for a recent work by Kurian et al. [START_REF] Kurian | Selective laser melting of aluminum nanopowder particles, a molecular dynamics study[END_REF], which studied the micro-selective laser melting process using molecular dynamics (MD) simulations in order to investigate melting and solidification of a randomly-distributed aluminum nano-powder bed. Modeling of solidification at the microscopic level Many solidification processes have nevertheless been investigated at the nanoscale by means of MD simulations. The pioneering work of Lu and Spzunar [START_REF] Lu | Molecular-dynamics simulation of rapid solidification of aluminum[END_REF] described the structural changes induced by rapid solidification. Fast quenching leads to a nonequilibrium glassy state, while a slow cooling rate results in a crystalline state [START_REF] Tian | Molecular dynamics simulation for cooling rate dependence of solidification microstructures of silver[END_REF][START_REF] Pei | The rapid solidification of Ti 3 Al : a molecular dynamics study[END_REF][START_REF] Hou | Cooling rate dependence of solidification for liquid aluminium: a large-scale molecular dynamics simulation study[END_REF]. The characteristics of a solidification front have been measured by MD in two-phase metallic systems under isothermal conditions for different orientations of the solid-liquid interface in order to obtain the kinetic coefficients [START_REF] Celestini | Measuring kinetic coefficients by molecular dynamics simulation of zone melting[END_REF][START_REF] Sun | Kinetic coefficient of Ni solid-liquid interfaces from molecular-dynamics simulations[END_REF]. The case of directional solidification has been studied using non-equilibrium MD simulations for Lennard-Jones systems [START_REF] Celestini | Nonequilibrium molecular dynamics simulation of rapid directional solidification[END_REF], Al-Cu alloys [START_REF] Mahata | Effects of solidification defects on nanoscale mechanical properties of rapid directionally solidified Al-Cu Alloy: A large scale molecular dynamics study[END_REF], and CrNi-alloyed steels [START_REF] Bahramyan | Nano-scale simulation of directional solidification in TWIP stainless steels: A focus on plastic deformation mechanisms[END_REF]. Large-scale MD simulations with several million atoms have been used to study the very first steps of solidification in various metallic systems. Homogeneous nucleation has been studied at constant undercooling temperature or by applying a constant cooling rate on a melted metal [START_REF] Shibuta | Homogeneous nucleation and microstructure evolution in million-atom molecular dynamics simulation[END_REF][START_REF] Mahata | Understanding homogeneous nucleation in solidification of aluminum by molecular dynamics simulations[END_REF]. 
Athermal heterogeneous nucleation through grain refiners in undercooled metal has also been investigated by MD simulations [START_REF] Fujinaga | Molecular dynamics simulation of athermal heterogeneous nucleation of solidification[END_REF]. The columnar-to-equiaxed transition was not systematically investigated by MD in the case of pure metal, in the typical non-stationary thermal conditions specific to AM. In pure metals, the microstructure results exclusively from thermal effects occurring during solidification processes. The simulations will allow us to understand the different thermal mechanisms leading to different solidified microstructures. The results obtained on the topic in the framework of my PhD and their direct comparison to classical theories of solidification/nucleation will be presented in Chapter 3. Mechanical treatment of powders Mechanical treatment of powders in planetary ball mills is a very common process used for several purposes including the synthesis of inorganic compounds and metallic nanocomposites, the formation of supersaturated solid solutions or metastable crystalline phases, and the elaboration of nanostructured materials or amorphous alloys [START_REF] Suryanarayana | Mechanical alloying and milling[END_REF]. Mechanosynthesis, also termed mechanochemistry or mechanical alloying, supposes that the reaction is completed during the mechanical treatment [START_REF] Baláž | Hallmarks of mechanochemistry: from nanoparticles to technology[END_REF]. In contrast, preliminary mechanical activation is used to fabricate reactive materials [START_REF] Rogachev | Mechanical activation of heterogeneous exothermic reactions in powder mixtures[END_REF], and the reaction follows the activation process. As an example, the combination of mechanical activation and reactive sintering leads to the formation of nanostructured intermetallics [START_REF] Paris | Spark plasma synthesis from mechanically activated powders: a versatile route for producing dense nanostructured iron aluminides[END_REF], and nanostructured High Entropy Alloys [START_REF] Fourmont | Effects of planetary ball milling on AlCoCrFeNi high entropy alloys prepared by Spark Plasma Sintering: Experiments and molecular dynamics study[END_REF]. Nanocomposites prepared by mechanical milling are also used in additive manufacturing [START_REF] Nepapushev | Production of Rounded Reactive Composite Ti/Al Powders for Selective Laser Melting by High-Energy Ball Milling[END_REF][START_REF] Nguyen | Nanocomposite thermite powders with improved flowability prepared by mechanical milling[END_REF], where attention is paid to particle size and shape. Popov et al. described the fundamental requirements of powders used in AM. The shape of the powders needs to be as spherical as possible with a smooth surface. Powder size must be controlled to range from few micrometers to 240 micrometers depending on the AM technology. The powder size distribution is also important because finer powders in the distribution enable to fill the void between the larger powders resulting in a higher packing density [START_REF] Popov | Powder Bed Fusion Additive Manufacturing Using Critical Raw Materials: A Review[END_REF]. In addition, the powder flowability is often described as the crucial parameter to produce high quality powder layer [START_REF] Spierings | Powder flowability characterisation methodology for powder-bed-based metal additive manufacturing[END_REF]. 
Because of its ability to produce regular spherical powders, atomization is currently the most widely used technique [START_REF] Kassym | Atomization processes of metal powders for 3D printing[END_REF]. However, this technology finds limitation to produce composite reactive particles due to the difference of mechanical properties of the different reagents. One way to solve this limitation is the high energy ball milling (HEBM) such as planetary ball mills. Planetary ball milling In the planetary ball mills, the jars containing the powder and the grinding balls are arranged on a sun wheel. The sun wheel rotates in one direction while the jars rotate in the reverse direction (Fig. 1.6). In practice, powders processed in planetary ball mills are submitted to a series of transformations induced by contact with grinding balls. The Figure 1.6: Schematic representation of a planetary ball mill. From [START_REF] Burmeister | Process engineering with planetary ball mills[END_REF]. different elemental load modes observed in milling experiments include compression, shear, shock, cutting, and impact. Beinert et al. [START_REF] Beinert | Analysis and modelling of bead contacts in wet-operating stirred media and planetary ball mills with CFD-DEM simulations[END_REF] classified the contacts between beads or bead and wall into four contact types: impact, torsion, shearing, and rolling, as a function of the relative velocity of colliding bodies (Fig. 1.7, right). In addition, in situ observations have demonstrated that milling balls undergo complex motions in the jar [START_REF] Rosenkranz | Experimental investigations and modelling of the ball motion in planetary ball mills[END_REF]. Three milling regimes were identified as a function of milling parameters in the case of a large number of grinding balls [START_REF] Burmeister | Process engineering with planetary ball mills[END_REF]. In cascading regime, the balls are taken along the container and unroll upon each other from the bulk top to its base. Friction between grinding balls is dominant. In cataracting regime, the balls detach from the wall and impact the other balls or the opposite wall with high intensity. In centrifugal regime, the balls are stuck to the inner wall of the vial and move together with the vial (Fig. 1.7, left). The specific milling regime was found to influence the microstructure and reactivity of the final powder mixture [START_REF] Rogachev | Experimental investigation of milling regimes in planetary ball mill and their influence on structure and reactivity of gasless powder exothermic mixtures[END_REF]. The action of balls on the blended elemental powders is essential to control the effects of milling. The two main activation factors are the collision of grinding balls with one another and with the milling wall, and friction between the grinding balls. The fundamental question is to determine how the mechanical energy delivered by the mechanical process is stored in the powder, leading to enhanced reactivity in activated powders. The most common explanation is to assert that mechanical energy is stored in defects created during severe plastic deformation associated with mechanical treatment. According to Hoffman et al. [START_REF] Hoffmann | Reactive Comminution[END_REF], the defects due to mechanical treatment include: zero-dimensional point defects, such as vacancies; one-dimensional line defects, such as dislocations; two-dimensional area defects, such as stacking faults, grain boundaries, and contact areas with other phases. 
Three-dimensional volume defects include amorphous regions, pores, other phases, and metastable regions, corresponding to the development of structural transformations. The energy associated with different types of point defects, dislocations, and grain boundaries has been evaluated by Khina [START_REF] Khina | Effect of mechanical activation on SHS: Physicochemical mechanism[END_REF][START_REF] Mukasyan | Mechanical activation and gasless explosion: Nanostructural aspects[END_REF]. Low-dimensional defects appear to have a negligible effect on subsequent reactivity, in comparison with metastable phases, for which the formation mechanism is not yet understood. From [START_REF] Rogachev | Experimental investigation of milling regimes in planetary ball mill and their influence on structure and reactivity of gasless powder exothermic mixtures[END_REF]. On the right: specific motions of the grinding balls observed in the container. Four distinct loading modes are represented namely the impact, the torsion, the shearing and the rolling. From [START_REF] Beinert | Analysis and modelling of bead contacts in wet-operating stirred media and planetary ball mills with CFD-DEM simulations[END_REF]. Characterizing the reactive properties of activated powders is important to design reactive material with tailored reactivity, and to choose the most adequate parameters for the elaboration process (e.g., milling speed). Reactivity of activated powders In order to understand the effect of mechanical activation on reactivity, the Ni-Al or Ti-Al systems have been widely studied experimentally. It has been shown that highenergy ball milling affects the reaction kinetics of these reactive composites [START_REF] Rogachev | Influence of the high energy ball milling on structure and reactivity of the Ni+Al powder mixture[END_REF][START_REF] Rogachev | Reactivity of mechanically activated powder blends: Role of micro and nano structures[END_REF]. In the case of Ni-Al, it was demonstrated that the effective activation energy of the reaction is significantly reduced after mechanical treatment [START_REF] Shuck | Reactive Ni/Al Nanocomposites: Structural Characteristics and Activation Energy[END_REF]. In the case of Ti-Al, Nepapushev et al. presented the difference of temperature of ignition regarding the milling time and the size of the grinding balls as shown in Fig. 1.8 . They reported that temperature of ignition can be lowered to 640 C compared to 850 C without mechanical activation [START_REF] Nepapushev | Production of Rounded Reactive Composite Ti/Al Powders for Selective Laser Melting by High-Energy Ball Milling[END_REF]. This reduction may be attributed to structural refining that leads to an increase in the number of contact surfaces (Fig. 1.9). In addition, microstructural analysis has shown that nanosized clusters are formed, which can serve as highly reactive precursors of new phases, and thus reduce the potential barrier to the initiation of exothermic reaction. Context of the study A problem of topical interest is to understand how impact and friction (the two prevalent actions during milling) act on the powder and modify microstructure evolution. Several studies have already been conducted at the atomic level. The plastic deformation induced by milling was modeled by cyclic deformation [START_REF] Odunuga | Forced Chemical Mixing in Alloys Driven by Plastic Deformation[END_REF][START_REF] Delogu | Forced chemical mixing in model immiscible systems under plastic deformation[END_REF]. 
These studies have demonstrated that cyclic deformation forces chemical mixing due to dislocation gliding. The progressive amorphization and mixing of a binary system through extensive plastic straining has been studied in [START_REF] Lund | Molecular simulation of amorphization by mechanical alloying[END_REF], by means of a strain-and-stack process similar to cold-rolling. Since the pioneering work of Holian and coworkers [START_REF] Hammerberg | Studies of sliding friction in compressed copper[END_REF], the friction between sliding metallic blocks has been extensively studied (see for instance [START_REF] Rigney | The Evolution of Tribomaterial During Sliding: A Brief Introduction[END_REF]). The frictional sliding of solid surfaces involves large plastic strains and strain gradients, high strain rates and strain rate gradients, and mechanical mixing from both contacting solids. Mechanical mixing was attributed to different mechanisms: interface instability that leads to local vorticity, dislocation and twin activity, and amorphization. Atomic mixing in metals under shear deformation has been studied at various interfaces in [START_REF] Nhon | Atomic Mixing in Metals Under Shear Deformation[END_REF]. The mixing was found to be diffusive or "superdiffusive", depending on the coherence between interfaces. Heat dissipation due to sliding was investigated by means of non-equilibrium Molecular Dynamics simulations [START_REF] Lin | Molecular dynamics simulation of nano-scale interfacial friction characteristic for different tribopair systems[END_REF][START_REF] Chen | Molecular dynamics simulation of microstructure evolution and heat dissipation of nanoscale friction[END_REF]. Atomic-scale mechanical mixing and generation of mixing layer were observed in the regions near the contact interface. The enhanced reactivity of mechanical activated systems has also been considered in the literature. The mechanisms of loading and chemical processes resulting from shock compaction have been investigated in the case of Ni/Al composites by means of Molecular Dynamics simulations [START_REF] Cherukara | Shock Loading of Granular Ni/Al Composites. Part 1: Mechanics of Loading[END_REF][START_REF] Cherukara | Shock Loading of Granular Ni/Al Composites. Part 2: Shock-Induced Chemistry[END_REF]. The role of porosities in such systems during deformation has been considered in [START_REF] Khachatur | Tailored Reactivity of Ni+Al Nanocomposites: Microstructural Correlations[END_REF]. The reactivity of Ni-Al composites prepared by mechanical activation has been studied in order to detect the role played by the nanoscale mixing of the reagents [START_REF] Fourmont | Reactivity of Ni-Al nanocomposites prepared by mechanical activation: A molecular dynamics study[END_REF]. But, in these studies, a nano-laminated structure or premixing of reagents was assumed. So a complete description of the effects induced by the mechanical treatment of elemental powders from activation to reaction is still lacking. From [START_REF] Maurice | The physics of mechanical alloying: A first report[END_REF]. In order to obtain a better understanding of the mechanisms that take place during mechanical activation at the atomic level, Molecular Dynamics simulations mimicking the effects of grinding balls on powder particles can be developed. 
We modeled the impact of grinding balls and their action on blended powders following Maurice and Courtney's idea [START_REF] Maurice | The physics of mechanical alloying: A first report[END_REF] (see Fig. 1.10). Flat surface tools act on a large number of individual powder particles and cause uniaxial compression of the powder. Despite unavoidable limitations in system size and simulation time, large-scale Molecular Dynamics simulations are a useful tool to provide in situ observations during mechanical activation. We considered a set of metallic binary systems: Ni-Al, Ti-Al, Fe-Cr, and Fe-Ni [START_REF] Baras | Mechanical activation of metallic powders and reactivity of activated nanocomposites: a molecular dynamics approach[END_REF]. It is expected that a layered structure would be observed in ductile/ductile binaries. In the case of couples with dissimilar ductility, the resulting composite represents a metallic matrix with inclusions of the less ductile phase. The aim is to understand the specific behavior of each couple as a function of the mechanical and structural properties of its elemental constituents. Understanding the specific behavior of metallic couples during mechanical activation is important when this process is used to fabricate complex alloys. The efficiency of mechanically induced deformation will be investigated by measuring amorphization rate, chemical mixing efficiency, and the creation of defects. In order to evaluate their reactivity, activated Ni-Al and Ti-Al systems were studied at different temperatures, below and above the melting point of the less refractory element (Al). Both diffusion and reaction mechanisms will be evaluated in Ni-Al and Ti-Al systems. The results will be presented in Chapter 4. Reactive laminated particles (RLPs) produced by planetary ball mill present very close similarities with Reactive multilayer nanofoils (RMNFs) (see for instance Fig. 1.11 for Ni-Al mixtures). They are often used as model systems to study the processes occurring at the atomic scale in metallic nano-systems presenting a huge amount of interfaces. The RMNFs are composed of hundreds of stacked thin metallic layers, varying in thickness from 4 to 100 nm. They can be prepared by several methods such as sput-tering, vapor deposition or electrodeposition [START_REF] Adams | Reactive multilayers fabricated by vapor deposition. A critical review[END_REF][START_REF] Weihs | 5 -Fabrication and characterization of reactive multilayer films and foils[END_REF]. The pure metals are arranged alternately with layer of one metal and layer of another metal (Fig. 1.11). The reaction occurs after an ignition process such as mechanical loading, thermal heating, laser heating or electrostatic discharge [START_REF] Adams | Reactive multilayers fabricated by vapor deposition. A critical review[END_REF]. The exothermic reaction propagates along the foil without any further energy supply, leading to the transformation of reactants into intermetallics. Trevizo et al. listed the different reactive nanometric multilayers materials [START_REF] Sáenz-Trevizo | Nanomaterials by design: a review of nanoscale metallic multilayers[END_REF], including Ti-Al systems. In such bimetallic Ti/Al nanofoils, a short-term local heating will induce a rapid and self-sustaining reaction [START_REF] Mukasyan | Combustion synthesis in nanostructured reactive systems[END_REF]. 
The propagation velocity of the reaction is up to 3 m/s [START_REF] Sen | Al-based binary reactive multilayer films: Large area freestanding film synthesis and self-propagating reaction analysis[END_REF][START_REF] Sen | Synthesis and characterization of Ti/Al reactive multilayer films with various molar ratios[END_REF]. The overall reaction temperature is higher than the melting temperature of Al (T_m = 933 K) and lower than the melting temperature of Ti (T_m = 1923 K). Thus, the reaction proceeds in a liquid/solid system. A Ti-Al foil is transformed into intermetallics in less than 50 ms. The kinetics of intermetallic phase formation in the Ti/Al multilayers has been investigated using differential scanning calorimetry and time-resolved X-ray diffraction [START_REF] Illeková | Kinetics of intermetallic phase formation in the Ti/Al multilayers[END_REF][START_REF] Gachon | On the mechanism of heterogeneous reaction and phase formation in Ti/Al multilayer nanofilms[END_REF]. Reactivity of multilayer nanofoils Chapter 5 is dedicated to the reactivity of the Al-Ti system and the formation of intermetallic compounds above a certain ignition temperature. This part allows us to obtain a full microscopic understanding of the reactivity in the case of a perfect layered Ti-Al structure. Different observations at the microscopic level were also conducted in order to understand how the dissolution process occurs. That study, at the microscopic scale, also makes it possible to explain the various observations of our Russian colleagues who did the experimental part. Chapter 2 Molecular Dynamics simulations Molecular Dynamics Method Molecular Dynamics (MD) allows one to simulate the time evolution of a system made of atoms. It has been proven to be a valuable tool to predict thermodynamic or kinetic properties and to observe in situ atomic processes in order to understand physicochemical mechanisms. The classical MD approach is based on the Born-Oppenheimer approximation: the motions of the electrons adapt themselves instantaneously to the displacements of the cores. Hence, electron motion is not taken into account and MD treats atomic nuclei as classical particles whose trajectories are determined by integrating Newton's equations:

f_i(t) = m_i d²r_i(t)/dt²  (2.1)

where m_i is the mass, f_i is the force and d²r_i/dt² the acceleration of the atom i. Newton's equations are integrated, with a time step dt, in a discrete time domain. dt must be small enough to avoid instability and conserve energy, but not too small in order to avoid wasting computing time (typically dt = 1-2 fs). The forces f_i applied to the atoms are conservative, meaning that the work between two points does not depend on the path followed by the force between these two points. The forces are given by the derivative of the potential energy U, which depends only on the relative positions of the atoms:

f_i(t) = -∇_{r_i} U(r_1, ..., r_n)  (2.2)

The MD algorithm is divided into three main parts: initialization, simulation and post-processing, as summarized in Fig. 2.1: 1. The positions of the atoms and their velocities are initialized. Atomic positions will depend on the system and its specific structure. Initial velocities are assigned randomly according to a Maxwell-Boltzmann distribution at a given temperature. 2. The potential energy U is computed with an interatomic potential. The force f_i acting on each atom is then determined according to Eq. (2.2). 3.
After the displacement of the atoms, system properties are computed, analyzed and visualized with appropriate software. Initialization Crystal lattice Before starting a simulation, atomic positions must be set, usually according to a particular crystalline structure that is composed of a periodic ordered stacking of atoms in space. The periodicity is defined by a basis - defined by 3 vectors A_ℓ with ℓ = 1, 2, 3 - that is replicated over space. Basis vectors are not necessarily normed or orthogonal and form a parallelepiped called a unit cell. Unit cells are divided into 7 crystal classes: cubic, tetragonal, monoclinic, orthorhombic, rhombohedral, hexagonal or triclinic, depending on the angles and lattice parameters (see Fig. 2.2). Moreover, there exist 4 types of unit cells known as Primitive, Body-centered, Face-centered and Side-centered. Thus, one can distinguish 14 Bravais lattices. Finally, to build a crystalline lattice, the unit cell is replicated by a vector:

R_{i,j,k,ℓ} = i A_1 + j A_2 + k A_3 + a_ℓ  (2.3)

where a_ℓ are the coordinates of the atoms in the unit cell. To perform MD simulations of non-crystalline systems (e.g., amorphous or liquid phase systems), the initial box can be filled with a random distribution of atoms corresponding to a given density. Another approach will consider the melting of a solid system at high temperature to create a liquid phase which will be cooled down to reach the temperature of interest. Velocity and temperature initialization The initial velocities of the atoms follow the Maxwell-Boltzmann distribution:

P(v_{i,α}) = (m / (2π k_B T))^{1/2} exp(−m v_{i,α}² / (2 k_B T))  (2.4)

with v_{i,α} the velocity of the atom i and α one of the three components along the directions x, y and z. The distribution of velocities is also directly related to the instantaneous temperature T(t) of the system through the equipartition theorem:

⟨m v_α² / 2⟩ = (1/2) k_B T  (2.5)

which relates the kinetic energy of atoms to the temperature of the system. In order to avoid significant fluctuations in temperature, a large number of atoms is required. It can be estimated that the fluctuations on T(t) follow:

ΔT(t) / ⟨T(t)⟩ = (⟨T²(t)⟩ − ⟨T(t)⟩²)^{1/2} / ⟨T(t)⟩ ∼ N^{−1/2}.  (2.6)

In the present work, the smallest number of atoms used in a simulation is 40000, corresponding to a relative temperature fluctuation of 0.5 %. Interatomic potentials The interatomic potential is the cornerstone of MD simulations as it captures all the physical characteristics of the system such as lattice parameters, cohesive energy, elastic constants, defects, stacking faults and surface energies, diffusion barriers, etc. These quantities, obtained by experimental measurements or by ab initio calculations, are collected in the fitting database. In classical approaches, the potential energy is expressed in terms of empirical interatomic potentials with many-body notations:

U(r_1, r_2, ..., r_N) = Σ_i U_1(r_i) + Σ_i Σ_{j>i} U_2(r_i, r_j) + Σ_i Σ_{j>i} Σ_{k>j} U_3(r_i, r_j, r_k) + ...  (2.7)

where U_1 refers to a one-body term equivalent to an external force. U_2 represents a two-body term in which the interaction between any pair of atoms ignores the influence of other atoms. This pair term, including Morse, Born-Meyer or Lennard-Jones terms, has been widely used for rare gases but is inefficient to reproduce metallic, covalent or ionic interactions. It was only in the 1980s that a three-body term U_3 was introduced. This term takes into account the presence of a third atom which influences the pairwise interaction of the atoms.
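To make the two-body term U_2 of Eq. (2.7) concrete, the minimal NumPy sketch below evaluates a Lennard-Jones pair sum for a small cluster of atoms. The parameters ε and σ and the positions are arbitrary illustrative values in reduced units, not fitted to any of the materials studied in this work.

```python
import numpy as np

def lj_pair_energy(positions, epsilon=1.0, sigma=1.0):
    """Sum of the two-body term U2 over all distinct pairs (Lennard-Jones form)."""
    n = len(positions)
    u2 = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            u2 += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return u2

# Four atoms forming a small illustrative cluster
pos = np.array([[0.0, 0.0, 0.0],
                [1.1, 0.0, 0.0],
                [0.0, 1.1, 0.0],
                [0.0, 0.0, 1.1]])
print(f"U2 = {lj_pair_energy(pos):.3f} (reduced units)")
```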
The kind of potential used depends on the type of interaction. For example, the Stillinger-Weber, Tersoff or Brenner potentials are more suited to covalent interactions. Finnis-Sinclair (FS), the Embedded Atom Method (EAM) or modified-EAM (MEAM) potentials are used when considering metallic interactions. The differences among these potentials arise from the way the parameters are calculated or from their analytical forms. In this manuscript, the potentials used are of the EAM and FS types. Although their analytical forms are similar, the Finnis-Sinclair potential is based on the second-moment tight-binding approach, whereas the EAM potential is based on density-functional theory (DFT). The potential energy of the EAM or FS type is expressed as follows:

U = (1/2) Σ_{i,j (i≠j)} φ(r_ij) + Σ_i F_i(ρ_i).  (2.8)

The potential is composed of two terms: the first one, φ(r_ij), represents the pair interaction between 2 atoms. It makes it possible to reproduce the repulsion between atoms. The second one, F, corresponds to the embedding energy of the atom i in the host electron density ρ_i produced by neighboring atoms at the atom i site (see Fig. 2.3). This term corresponds to the cohesion of the system and yields the metallic interactions between atoms. The functional F differs between FS and EAM type potentials. Indeed, in the FS approach, the binding energy is proportional to the square root of the electron density, whereas in the EAM approach, F corresponds to the energy required to place an atom i in the local electron density area produced by the neighbor atoms. This electronic density for a given atom i is given by the relation:

ρ_i = Σ_{j=1 (j≠i)}^{N} ϕ_j(r_ij)  (2.9)

where ϕ_j(r_ij) is the electronic contribution of the neighbor atoms j of the atom i. The potentials are designed to reproduce certain material properties at the expense of others. For example, a potential may be created specifically to capture the transport properties of a material, but it will not be transferable to simulations that wish to calculate the mechanical properties. One major difficulty is to obtain an appropriate potential that is suited to the system of interest. Sometimes, there is no potential or it is not parameterized for the purpose of the simulation. It is then necessary to validate the chosen potential by first calculating basic properties such as melting temperature, thermal conductivity and elastic constants. Nevertheless, software tools have been created to enable the proper development (testing and standardisation) of potentials. These include potfit [START_REF] Brommer | Classical interaction potentials for diverse materials from ab initio data: a review of potfit[END_REF], KLIFF [START_REF] Wen | KLIFF: A framework to develop physics-based and machine learning interatomic potentials[END_REF] and Atomicrex [START_REF] Stukowski | Atomicrex-a general purpose tool for the construction of atomic interaction models[END_REF]. Potentials are also available for a wide variety of material choices (metals, semiconductors, oxides, carbon, etc.) on sites such as the Interatomic Potentials Repository (NIST) and the Knowledgebase of Interatomic Models (OpenKIM) [START_REF] Tadmor | The potential of atomistic simulations and the knowledgebase of interatomic models[END_REF]. Simulation Integrators Many methods are available to integrate the equations of motion including leapfrog, predictor-corrector, Verlet, etc. Each is characterized by its own accuracy and efficiency.
Among them, the velocity Verlet algorithm stands out for its efficiency, stability, convergence and low truncation error at long simulation times. This method is divided into two steps. The first step consists in computing the velocity at a half time-step:

v_i(t + dt/2) = v_i(t) + (dt/2) a_i(t)  (2.10)

Positions and velocities of atoms are then computed at the next time step:

v_i(t + dt) = v_i(t + dt/2) + (dt/2) a_i(t + dt),  (2.11)

r_i(t + dt) = r_i(t) + dt v_i(t + dt/2).  (2.12)

Statistical ensembles The "natural" ensemble for molecular dynamics simulations is the micro-canonical ensemble (NVE). In this ensemble, the system is isolated, i.e. there is no exchange of energy with the external environment. The number of atoms N, the volume V and the total energy E of the system are constants. Sometimes, it is desirable to be able to control the temperatures and pressures of a system to ensure that simulations more closely reproduce physical and chemical experiments. It is thus necessary to modify the equations of motion to allow the system to exchange energy with a larger external system commonly referred to as a "thermostat". Different methods have been developed to ensure temperature control such as velocity scaling [START_REF] Woodcock | Isothermal molecular dynamics calculations for liquid salts[END_REF], the Berendsen thermostat [START_REF] Berendsen | Molecular dynamics with coupling to an external bath[END_REF], the Andersen thermostat [START_REF] Hans | Molecular dynamics simulations at constant pressure and/or temperature[END_REF] or the Nosé-Hoover thermostat [START_REF] Nosé | A unified formulation of the constant temperature molecular dynamics methods[END_REF][START_REF] William | Canonical dynamics: Equilibrium phase-space distributions[END_REF]. In this work, we principally used the Nosé-Hoover (NH) formalism in which a friction term, ζ, is added to the equations of motion:

f_i(t) = m_i d²r_i(t)/dt² + ζ m_i v_i.  (2.13)

The temperature is then controlled by the difference between the actual kinetic energy (E_c) of the system and the "target" kinetic energy (E_c^target). When E_c is greater than E_c^target, the kinetic energy drops, and vice versa when it is smaller. The dynamic friction term ζ is linked to the kinetic energy by:

dζ/dt = (E_c − E_c^target) / Q_T  (2.14)

where Q_T is the mass parameter of the thermostat that determines the degree of coupling of the system to the thermostat. In other words, it determines how rapidly the temperature is relaxed. The ensemble using this method is the canonical ensemble (NVT) with the number of atoms N, the volume V and the temperature T as constants. However, other simulations require controlling the pressure and therefore varying the volume. The Nosé-Hoover approach also makes it possible to introduce a barostat to act on the pressure. The approach consists in introducing a degree of freedom acting on the volume of the simulation box by:

dV/dt = (P − P^target) / Q_P  (2.15)

with V the volume and Q_P the response speed of the barostat. The corresponding thermodynamic ensemble is isoenthalpic-isobaric (NPH) with constant atomic number N, pressure P and enthalpy H. When a thermostat and barostat are considered, the simulations are performed in the isothermal-isobaric ensemble (NPT). This ensemble allows the system to adjust its volume and to control the temperature through pressure and kinetic energy. Moreover, certain simulations have been carried out using a Langevin thermostat.
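As a concrete illustration of Eqs. (2.10)-(2.13), the sketch below performs one velocity-Verlet step with a Nosé-Hoover-style friction term. It is a minimal pedagogical sketch, not the LAMMPS implementation used in this work: the force routine `forces(r)` is a placeholder for the evaluation of Eq. (2.2), and the simple update of ζ from the kinetic-energy mismatch follows Eq. (2.14) in its most naive discretized form.

```python
import numpy as np

def velocity_verlet_nh(r, v, forces, m, zeta, dt, kB_T_target, Q_T):
    """One velocity-Verlet step (Eqs. 2.10-2.12) with a Nose-Hoover-type friction term.

    r, v: (N, 3) arrays of positions and velocities; m: (N,) array of masses.
    forces(r) is a placeholder returning the (N, 3) conservative forces of Eq. (2.2)."""
    a = (forces(r) - zeta * m[:, None] * v) / m[:, None]      # Eq. (2.13) rearranged
    v_half = v + 0.5 * dt * a                                  # Eq. (2.10)
    r_new = r + dt * v_half                                    # Eq. (2.12)
    a_new = (forces(r_new) - zeta * m[:, None] * v_half) / m[:, None]
    v_new = v_half + 0.5 * dt * a_new                          # Eq. (2.11)
    # Update the friction coefficient from the kinetic-energy mismatch, Eq. (2.14)
    e_kin = 0.5 * np.sum(m[:, None] * v_new**2)
    e_target = 1.5 * len(m) * kB_T_target
    zeta_new = zeta + dt * (e_kin - e_target) / Q_T
    return r_new, v_new, zeta_new
```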
The principle remains similar to that of an Nosé-Hoover thermostat: it consists in adding a term to the equations of motion according to a Gaussian probability. This thermostat is no longer deterministic but rather stochastic. Boundary conditions Molecular Dynamics simulations are usually limited to system sizes of a few hundred nanometers (sub-micrometers). Therefore, periodic boundary conditions are applied to overcome finite size effects, the system becoming pseudo-infinite. The application of periodic conditions considers the simulation box as a primary cell. This cell is replicated in all directions (x,y,z) by images with the same properties (number of atoms, position of atoms, moment of atoms, size and geometry) as shown in Fig. 2.4. If an atom leaves the primary cell, it will re-enter from the opposite side. Note that an atom must not interact with its image. Therefore, the size of the primary cell must be at least 2 times larger than the cut-off radius of the interatomic potential (i.e., the minimum distance for which the attractive part of a potential is zero). LAMMPS software In the present work, we used the large-scale massively parallel atomic/molecular simulator (LAMMPS) that was developed since 1995 at the Sandia National Laboratory to simulate the time evolution of a very large number of atoms [START_REF] Thompson | LAMMPS -a flexible simulation tool for particlebased materials modeling at the atomic, meso, and continuum scales[END_REF]. This opensource software is widely used to simulate a large variety of materials such as metals, semi-conductors and even biomolecules or polymers. LAMMPS is computationally efficient as it supports most parallel technologies such as MPI, Open MP, Cuda, etc. It is also possible to build on LAMMPS as a library to include or couple with many codes to extend its range of application. Moreover, LAMMPS incorporates state of the art methods thanks to an active developers' community, allowing specific commands to be improved or added. Post-processing 2.4.1 Analysis Analysis of simulation data is necessary to detect elementary mechanisms at the nanoscale. One of the keys to better understanding such mechanisms is to visualise the atomic displacements using visualisation software such as Ovito [START_REF] Stukowski | Visualization and analysis of atomistic simulation data with OVITO-the Open Visualization Tool[END_REF]. Ovito reads atomic coordinates and individual properties of atoms in order to visualise the structure at different times during the simulation. This is free-distribution software that incorporates a wide variety of analysis methods such as crystalline structure, the radial distribution function, atomic trajectory and cluster analysis, among others. • Structure analysis Many methods are available to identify crystalline structures, each with its own specificity and accuracy. In the present work, we have used three different methods to identify the local atomic environment of an atom. The first method is based on the analysis proposed by Ackland and Jones. For an atom i (or central atom), the Ackland and Jones analysis calculates the number of neighbors and the angles with the central atom to determine whether its structure is hcp, bcc, fcc or amorphous [START_REF] Ackland | Applications of local crystal structure measures in experiment and simulation[END_REF]. 
The advantage of the Ackland and Jones analysis is that it is less sensitive to thermal fluctuations than other methods, because it considers the angles between atoms and not their distances. However, characterisation of the liquid phase is difficult, as many atoms are assigned a defined structure even when the phase is amorphous. Another method used below is the Common Neighbor Analysis (CNA) approach [START_REF] Dana | Molecular dynamics study of melting and freezing of small Lennard-Jones clusters[END_REF]. In this method, a central atom is chosen and a sphere is drawn around it to select the first neighboring shell. Each pair is then characterized by three criteria: the number of atoms common to both atoms, the number of bonds between their common neighbors, and the number of bonds in the longest chain among the common neighbors. These criteria are then compared to reference structures to identify the structure of the chosen atom. The adaptive Common Neighbor Analysis (a-CNA) is an evolution proposed by Stukowski to study binary systems [START_REF] Stukowski | Structure identification methods for atomistic simulations of crystalline materials[END_REF]. However, a-CNA sometimes leads to ambiguities in the identification of the structure, particularly in the presence of thermal fluctuations or stresses. Finally, the last and most robust method, Polyhedral Template Matching (PTM), allows for the identification of crystalline structures with less noise than the other methods [START_REF] Peter Mahler Larsen | Robust structural identification via polyhedral template matching[END_REF]. This method is divided into two steps: a graph isomorphism test is first used to identify potential structure matches; the deviation between the local structure (in the simulation) and a model of the ideal lattice structure is then calculated. As clearly shown in Fig. 2.5, the PTM method is much more efficient than a-CNA in distinguishing fcc and disordered phases in systems at different temperatures.

• Radial distribution function
An efficient method to analyse the structure of a system is the radial distribution function (RDF), g(r). The RDF is defined as the probability of finding an atom at a distance r_0 from another atom chosen as reference (see Fig. 2.6, left). This function is expressed as follows:

g(r) = ρ(r)/ρ    (2.16)

with ρ = N/V the average density over the entire system, where N is the total number of atoms and V the volume. The local density, ρ(r) = dn(r)/V_shell, corresponds to the probability of finding dn(r) atoms in a small volume V_shell. The volume of a shell of thickness dr is approximated as V_shell = (4/3)π(r_0 + dr)³ − (4/3)πr_0³ ≈ 4πr_0² dr. The radial distribution function is efficient in detecting the phases of a system. Crystalline structures exhibit well-defined peaks corresponding to the neighbor shells. In the solid phase, the peaks remain separated from each other, even at long range. The liquid or amorphous phases are characterized by a smoothing of the peaks at long range and by a broad peak at short range (see Fig. 2.6, right). Moreover, the RDF can provide information about the progressive amorphization of a system or about the local rearrangement of atoms. (Fig. 2.6: on the left, the atom in the center is the reference atom and dr is the thickness of the shell located at distance r_0 from it; on the right, typical RDFs of the liquid and solid phases, here for argon from [START_REF]The Thomas Group -PTCL[END_REF].)
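A minimal way to estimate g(r) from a single snapshot, following Eq. (2.16), is sketched below in Python. It is illustrative only: an orthorhombic periodic box is assumed, the neighbor search is brute force, and the bin width and cut-off values are arbitrary.

```python
import numpy as np

def radial_distribution(r, box, r_max=8.0, dr=0.05):
    """Estimate g(r) for positions r (N, 3) in an orthorhombic periodic box (3,)."""
    n = len(r)
    rho = n / np.prod(box)                              # average density N/V
    bins = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = r[i + 1:] - r[i]
        d -= box * np.round(d / box)                    # minimum-image convention
        dist = np.linalg.norm(d, axis=1)
        hist += 2.0 * np.histogram(dist, bins=bins)[0]  # each pair counted for both atoms
    r0 = 0.5 * (bins[1:] + bins[:-1])
    v_shell = 4.0 * np.pi * r0**2 * dr                  # V_shell ~ 4 pi r0^2 dr
    return r0, hist / (n * rho * v_shell)               # g(r) = rho(r) / rho
```

Sharp peaks of the returned g(r) at the successive neighbor shells indicate a crystalline phase, while a single broad first peak and a flat tail indicate a liquid or amorphous phase, as described above.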
• Degree of mixing
The degree of mixing is a powerful method to analyse the level of mixing between different elements at short-range order. The first step is to define the limit distance (r_cut) between short-range order and long-range order (see Fig. 2.7). This distance is chosen to be larger than the first-neighbor shell for fcc, hcp and amorphous atoms, and larger than the first and second neighbor shells for bcc atoms. This criterion is defined via the radial distribution function. The short-range order parameter is then introduced to follow the degree of mixing during the simulation:

Ω = [ (c/(1−c)) N_i^j + ((1−c)/c) N_j^i ] / ( N_i^j + N_j^i )    (2.17)

with Ω the chemical mixing of the system, c the atomic fraction and N_i^j the average number of neighbors of type j in the sphere of radius r_cut around atoms of type i (and vice-versa for N_j^i). These atoms are considered to belong to the zone of mixing between the different elements.
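The short-range order parameter of Eq. (2.17) can be evaluated directly from a configuration, as in the following Python sketch (a brute-force illustration for a binary system; r_cut is assumed to have been chosen from the RDF as described above, and periodic images are handled with the minimum-image convention):

```python
import numpy as np

def mixing_parameter(r, types, box, r_cut, type_i=1, type_j=2):
    """Short-range order parameter of Eq. (2.17) for a binary system.
    r: (N, 3) positions, types: (N,) atom types, box: (3,) orthorhombic box lengths."""
    idx_i = np.where(types == type_i)[0]
    idx_j = np.where(types == type_j)[0]
    c = len(idx_i) / len(types)                      # atomic fraction of species i

    def mean_neighbors(centers, others):
        """Average number of 'others' atoms within r_cut of the 'centers' atoms."""
        count = 0
        for a in centers:
            d = r[others] - r[a]
            d -= box * np.round(d / box)             # minimum-image convention
            count += np.sum(np.linalg.norm(d, axis=1) < r_cut)
        return count / len(centers)

    n_ij = mean_neighbors(idx_i, idx_j)              # type-j neighbors around type-i atoms
    n_ji = mean_neighbors(idx_j, idx_i)              # type-i neighbors around type-j atoms
    return (c / (1 - c) * n_ij + (1 - c) / c * n_ji) / (n_ij + n_ji)
```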
• Number density profiles
The number density profile (NDP) is an analytical tool which provides abundant information about the dynamics of multiphase systems. The NDP consists in dividing the space into several bins of constant thickness, and then counting the number of atoms of each type in each bin. The bin thickness needs to be large enough for each crystalline plane to be resolved, but small enough to remain accurate; in the ideal case, the thickness is chosen slightly smaller than the interplanar distance. The NDP provides useful information about the ordering at the interface between two phases, the formation of crystalline phases (such as intermetallic phases, for example), the stoichiometry and the amorphization. Note that other profiles, such as the temperature profile or the fcc fraction profile, can be obtained in a way similar to the NDP approach. The difference is essentially the way in which the variables of interest in each bin are computed. Indeed, these variables are averaged over each bin, providing accurate information about the location of interfaces, the width of interfaces, local transformations, increases in temperature, etc.

Calculations of specific properties

As mentioned in Section 2.2.3, interatomic potentials are fitted to a limited database of properties determined experimentally or with ab-initio calculations at zero or room temperature. The calculation of specific properties that were not explicitly fitted (e.g. melting temperature, thermal conductivity, latent heat of fusion, solid/liquid interface energy, lattice parameter, solid/liquid interface velocity) allowed us to evaluate the transferability of the potential at high temperature. These values, determined with atomistic simulations, will later serve as inputs in models based on the well-established classical nucleation theory (CNT). The use of CNT with values determined by MD calculations allows us to extend our results to large-scale systems. Last but not least, this procedure also demonstrates that classical nucleation theory, developed at the micro/macro-scale, remains valid at the atomistic scale.

• Latent heat of fusion of Ni
In order to estimate the latent heat of fusion, L_V, we considered a bulk system of 5.28 nm × 5.28 nm × 5.28 nm at zero pressure, thermalized at T = 100 K. Periodic boundary conditions were imposed in all directions. The sample was then heated with a temperature ramp of 50 K over 50 ps in the NPT ensemble, then relaxed at constant temperature in the NPT ensemble over 50 ps and in the NVT ensemble over 10 ps. Quantities were measured in the NVE ensemble over 10 ps. The corresponding average heating rate was ≈ 450 K/ns. This procedure was repeated until complete melting of Ni at 3000 K. Note that the melting temperature of the bulk is higher than the thermodynamic melting temperature (T_m), because the crystal is perfect, without any free surfaces or imperfections. The system was then cooled with a temperature ramp similar to that imposed for the heating process. A rough estimate of the melting temperature, T_m = T⁺ + T⁻ − √(T⁺T⁻), is deduced from the temperatures of maximum superheating (T⁺) and supercooling (T⁻) [START_REF] Luo | SolidLiquid Interfacial Energy and Melting Properties of Nickel under Pressure from Molecular Dynamics[END_REF]. The latent heat of fusion is determined at the melting point of the bulk system (1701 K) as the difference between the enthalpy of the solid phase and the enthalpy of the liquid phase. The latent heat of fusion is estimated at 22.00 × 10⁻² eV/atom, equivalent to 21.23 × 10⁶ J/kmol (Fig. 2.8a). The experimental value, reported in the range between 17.03 × 10⁶ J/kmol and 18.90 × 10⁶ J/kmol [START_REF] Thurnay | Thermal properties of transition metals[END_REF], is slightly lower than the value measured in MD. The volume per atom (V_at) is estimated at 11.55 Å³ (Fig. 2.8b) at the melting temperature of Ni (1701 K, value of the potential [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF]). The molar volume is V_m = V_at × N_A = 0.006955 m³/kmol. Note that the molar volume estimated with the potential at ambient temperature, 0.006612 m³/kmol, is close to the experimental value of 0.006590 m³/kmol. Finally, the latent heat of fusion, L_V, expressed in appropriate units, is equal to 21.23 × 10⁶/V_m = 3.052 × 10⁹ J/m³ (a short numerical check of this conversion is sketched after the procedure below). This value differs slightly from the experimental estimate (2.530 × 10⁹ J/m³). Nevertheless, this indicates a good adequacy of the potential to determine the latent heat of fusion.

• Velocity of the solid/liquid interfaces of Ni
Two-phase systems were considered, with a solid/liquid interface perpendicular to the x direction. The crystallization velocity of Ni was determined for three orientations, (100), (110), and (111), of the solid/liquid interface. The following procedure was considered:
1. We built an fcc-Ni system elongated in the x-direction at the target temperature T, with the appropriate lattice parameter and orientation. Typical sizes (L_x × L_y × L_z) of the system for the orientations (100), (110) and (111) were 85.36 × 4.27 × 4.27 nm, 120.71 × 6.04 × 4.27 nm and 147.84 × 6.03 × 6.97 nm, respectively. Periodic boundary conditions were imposed in all directions. The whole system was then relaxed at the target temperature T in the NPT ensemble for 1 ns.
2. One half of the sample was frozen and the other half (i.e. between 0 and L_x/2) was melted in the NPT ensemble for 1 ns at a temperature T_sup = 3000 K, much higher than the melting temperature estimated with the EAM potential, in order to ensure complete melting and produce a liquid phase.
3. Finally, the simulation was carried out in the NPT ensemble for 2.5 ns at the target temperature T for the whole system.
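The numerical check announced above amounts to a few lines of arithmetic; in the Python sketch below, the hysteresis formula is also shown for completeness (the superheating and supercooling temperatures used there are purely illustrative placeholders, not the values measured in this work):

```python
import numpy as np

# Unit conversion for the latent heat of fusion, as quoted in the text.
L_at = 22.00e-2                              # eV/atom, MD estimate at the bulk melting point
eV, N_A = 1.602176634e-19, 6.02214076e26     # J per eV, atoms per kmol
L_molar = L_at * eV * N_A                    # ~ 21.2e6 J/kmol
V_at = 11.55e-30                             # m^3/atom at T_m
V_m = V_at * N_A                             # ~ 0.006955 m^3/kmol
print(f"L_V ~ {L_molar / V_m:.3e} J/m^3")    # ~ 3.05e9 J/m^3

# Rough melting-point estimate from the heating/cooling hysteresis, T_m = T+ + T- - sqrt(T+ T-).
T_plus, T_minus = 2300.0, 1300.0             # K, placeholder superheating/supercooling values
print(f"T_m ~ {T_plus + T_minus - np.sqrt(T_plus * T_minus):.0f} K")
```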
If the target temperature is below the melting temperature (T_m = 1701 K), the interface moves toward the right and the system crystallizes (Fig. 2.9). If the target temperature T is above the melting temperature, a melting front propagates to the left and the system melts completely. We considered 7 target temperatures below the melting temperature: 1100 K, 1200 K, 1300 K, 1400 K, 1500 K, 1600 K, and 1700 K. In order to estimate the solid/liquid interface velocity, the front position was followed and compared to the position of a reference atom in the solid phase (Table 2.1, which lists v(100), v(110) and v(111) as a function of T). We observed that the solid/liquid interface velocity depends on the degree of undercooling. The Wilson-Frenkel model was used to approximate the temperature dependence:

v = K (T_m − T)/T exp(−Q/(k_B T))    (2.18)

where Q is the activation enthalpy, k_B is the Boltzmann constant, and K is a constant. In addition, the sequence of crystallization velocities v(100) > v(110) > v(111) is consistent with that reported in the literature. The two-phase method also gives the melting temperature of the material. The target temperature is chosen around the estimated melting temperature. If the temperature is below the melting temperature, the solid phase grows. If the temperature is above the melting temperature, the liquid phase invades the entire system. The melting temperature is estimated as the temperature for which the solidification/melting front is stagnant.

• Solid/liquid interfacial energy of Ni
In order to estimate the solid-liquid interfacial energy γ_SL, we built a pseudo-2D system with a solid Ni cylinder surrounded by liquid Ni. Periodic boundary conditions were imposed in all directions. The entire sample was thermalized at a target temperature in the NPT ensemble for 200 ps, followed by the NVT ensemble for 100 ps. The liquid phase around the cylinder was introduced by heating the liquid region to 2500 K in the NPT ensemble for 200 ps and the NVT ensemble for 100 ps (Fig. 2.10a). The entire sample was then maintained in the NPT ensemble for 5 ns at the target temperature. Several radii were considered: 1.79 nm, 3.58 nm, 7.17 nm, 14.34 nm, and 17.93 nm. Given a cylinder radius, we chose a large set of target temperatures in order to find its melting point. According to classical nucleation theory, since the flat faces of the cylinder do not contribute to the solid/liquid interaction, the Gibbs free energy for a cylinder can be expressed as ΔG = −πr²hΔg_V + 2πrhγ_SL, where h is the cylinder height, r the cylinder radius, and Δg_V the Gibbs free energy gain per unit volume. The critical radius is obtained when the first derivative of the Gibbs free energy is zero. Moreover, Δg_V can be approximated by Δg_V ≈ L_V ΔT/T_m. The critical radius r* is then expressed as r* = (γ_SL × T_m)/(L_V × ΔT), where ΔT = T_m − T is the degree of undercooling at which the cylinder melts. By considering the values of the latent heat and the melting temperature, the simulation results were fitted by this expression in order to obtain the solid-liquid interfacial energy γ_SL = 284 mJ·m⁻² (Fig. 2.10b).
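The last fitting step can be written in a few lines; the sketch below uses scipy's curve_fit with placeholder melting undercoolings for each cylinder radius (illustrative numbers only, chosen to be of the order of those implied by the fitted γ_SL):

```python
import numpy as np
from scipy.optimize import curve_fit

T_m, L_V = 1701.0, 3.052e9                   # K, J/m^3 (MD values quoted above)

def critical_radius(dT, gamma_sl):
    """r* = gamma_SL * T_m / (L_V * dT): radius of the cylinder that melts at undercooling dT."""
    return gamma_sl * T_m / (L_V * dT)

radii = np.array([1.79e-9, 3.58e-9, 7.17e-9, 14.34e-9, 17.93e-9])   # m, cylinder radii
dT_melt = np.array([85.0, 45.0, 22.0, 11.0, 9.0])                    # K, placeholder undercoolings

(gamma_fit,), _ = curve_fit(critical_radius, dT_melt, radii, p0=(0.3,))
print(f"gamma_SL ~ {gamma_fit * 1e3:.0f} mJ/m^2")
```

The same curve_fit call can be reused for the Wilson-Frenkel expression of Eq. (2.18), with the front velocities of Table 2.1 as input data.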
• Phonon thermal conductivity of Ni
Phonon thermal conductivity was computed with non-equilibrium molecular dynamics (NEMD) [START_REF] Juan | The impact of the thermostats on the non-equilibrium computer simulations of the interfacial thermal conductance[END_REF]. We considered a periodic simulation box elongated in the z direction, with a cross-section of 20 × 20 lattice units (i.e. 7.04 nm × 7.04 nm). This system was equilibrated at T in the isothermal-isobaric ensemble (NPT) over 0.2 ns. It was then divided into 20 bins. The first bin defines the hot region, maintained at T + 50 K, while the middle bin (i.e. bin 11) is the cold region, thermalized at T − 50 K. Two distinct Langevin thermostats were used. The Langevin damping parameter was set equal to 0.1 ps. The equations of motion of the atoms outside the thermostated regions were integrated in the NVE ensemble. A run of 0.28 ns was long enough to obtain a steady-state temperature profile between heat sink and heat source. The energies added/removed by the two thermostats and the temperature profile between the two regions were monitored and averaged over 0.1 ns. The thermal conductivity was computed using Fourier's law:

κ = −J/(dT/dz)    (2.19)

where J is the heat flux and dT/dz is the thermal gradient between the two thermostated regions. In the case of solids, finite-size effects have to be taken into account. The distance between the two thermostats limits the phonon mean free path, which lowers the thermal conductivity compared to bulk values. This was overcome by computing the thermal conductivity for samples of different lengths. The bulk thermal conductivity was then estimated by extrapolating the thermal resistivity 1/κ versus the reciprocal system length 1/L_z. Note that, in metallic melts (or liquids), the phonon mean free path for heat transport is much shorter than the distance between the two thermostats, since heat transport is mainly due to collisions between close neighbors. This led us to simulate samples of between 40 and 100 lattice units (i.e. 14.08 nm to 35.2 nm) along the z-direction; ten temperature values were considered: T = 300 K, 500 K, 800 K, 1100 K, 1300 K, 1500 K, 1700 K, 1900 K, 2100 K, and 2300 K (see Fig. 2.11). For comparison, Levchenko et al. [START_REF] Levchenko | Phonon-mediated heat dissipation in a monatomic lattice: case study on Ni[END_REF] and Turlo et al. [START_REF] Turlo | Comparative study of embedded-atom methods applied to the reactivity in the Ni-Al system[END_REF] computed κ with the Green-Kubo approach, using the Mishin 2004 (M04) [START_REF] Mishin | Atomistic modeling of the and -phases of the Ni-Al system[END_REF] and Purja Pun and Mishin 2009 (M09) [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF] potentials for Ni.

Non-exhaustive literature review of MD simulations on solidification, mechanical activation and nanocomposite reactivity

In this section, selected Molecular Dynamics studies are presented in order to contextualize the chapters presented later. Firstly, this allows our work to be situated in relation to other studies already carried out using Molecular Dynamics and, secondly, to be compared with what has already been done. The three main themes of this work will be discussed, namely solidification, mechanical activation and the reactivity of Ti-Al.

The solidification process at atomic scale

Various aspects of solidification processes at the nanoscale have already been investigated, namely homogeneous and heterogeneous nucleation, the effect of cooling rates on the final microstructure, directional solidification and the calculation of solidification properties. Many atomistic simulations for numerous materials, including metals, have been undertaken to study homogeneous and heterogeneous nucleation.
They have focused on fundamental aspects such as nucleation rates or nucleation barriers (see for instance [START_REF] Jakse | Machine learning interatomic potentials for aluminium: application to solidification phenomena[END_REF][START_REF] Becker | Unsupervised topological learning approach of crystal nucleation[END_REF]). Two main procedures for achieving homogeneous nucleation are usually used in MD approaches: at a constant undercooled temperature (isothermal conditions) or by applying a constant cooling to the melt. • Isothermal homogeneous nucleation Shibuta et al. studied homogeneous nucleation at isothermal undercooled temperatures in large-scale simulations. These authors simulated a billion-atom iron melt with a Finnis-Sinclair potential [START_REF] Shibuta | Heterogeneity in homogeneous nucleation from billionatom molecular dynamics simulation of solidification of pure metal[END_REF]. They considered two temperatures in order to observe the structural evolutions over time by tracking the number of grains created and their orientation with respect to the (100) orientation (see Fig. 2.12). They demonstrated the formation of grains at the surface of previously formed grains. In parallel, they also investigated homogeneous nucleation in iron for a larger range of temperatures [START_REF] Shibuta | Homogeneous nucleation and microstructure evolution in million-atom molecular dynamics simulation[END_REF][START_REF] Shibuta | Submicrometer-scale molecular dynamics simulation of nucleation and solidification from undercooled melt: Linkage between empirical interpretation and atomistic nature[END_REF]. This work allowed them to follow the nucleation rates, the incubation time for the nucleation as well the microstructural evolution with respect to temperature. Another notable MD work was published by Mahata et al. [START_REF] Mahata | Understanding homogeneous nucleation in solidification of aluminum by molecular dynamics simulations[END_REF]. They investigated homogeneous nucleation in an undercooled aluminum melt using a MEAM potential. Under isothermal conditions, they derive important values such as critical nucleus size, number of nuclei or induction time. Furthermore, Mahata et al. succeeded in comparing their numerical values obtained at constant temperature or with different cooling rates with those of classical nucleation theory. • Isothermal heterogeneous nucleation Few approaches focus on heterogeneous nucleation and unfortunately most of them use pair potentials that are not appropriate for metals [START_REF] Sheng | On the onset of surface condensation: formation and transition mechanisms of condensation mode[END_REF][START_REF] Kimura | MOLECULAR DYNAMICS SIMU-LATION OF HETEROGENEOUS NUCLEATION OF A LIQUID DROPLET ON A SOLID SURFACE[END_REF][START_REF] Yasuoka | Molecular dynamics simulation of supersaturated vapor nucleation in slit pore[END_REF]. Nevertheless, one can cite the approach reported by Fujinaga et al. for titanium particles surrounded by liquid aluminium using an EAM potential [START_REF] Fujinaga | Molecular dynamics simulation of athermal heterogeneous nucleation of solidification[END_REF]. This work considered heterogeneous nucleation for various particle geometries and sizes under different isothermal undercooling conditions. Two different cases were characterized: stagnation and free growth. Free growth was observed when the undercooling temperature was sufficiently high with respect to the size of the nucleant particle (Fig. 2.13). 
Their approach was validated by a direct comparison with the classical theory of nucleation. • Solidification at different cooling rates According to solidification theory, the solidified microstructure will vary according to the applied cooling rate. A very fast cooling rate will result in a glassy structure while a slower cooling rate will result in a crystalline structure. In the mid-90s, a pioneering work on aluminium melts was restricted to two cooling rates and the use of pair potentials because of limited computing power [START_REF] Lu | Molecular-dynamics simulation of rapid solidification of aluminum[END_REF]. Later on, more systematic studies were carried out to precisely determine the specific cooling rates giving rise to a crystalline or glassy structure on metals and alloys such as Ag [START_REF] Tian | Molecular dynamics simulation for cooling rate dependence of solidification microstructures of silver[END_REF] with the Sutton-Chen potential, Al [START_REF] Hou | Cooling rate dependence of solidification for liquid aluminium: a large-scale molecular dynamics simulation study[END_REF] or Ti 3 Al [START_REF] Pei | The rapid solidification of Ti 3 Al : a molecular dynamics study[END_REF] with EAM potentials. In all of these studies, the initial stage was obtained by heating the system of interest to an equilibrium liquid state (T > T m ). Then, several cooling rates were applied to observe the effect of the cooling rate on the final microstructure (see Fig. 2.14). • Solidification front properties Molecular dynamics also provides access to characteristic values of the solidification front such as the kinetic coefficient or the solid-liquid interface energy. Neither quantity is easily accessible via experiments and their evaluation requires the use of numerical methods at nanoscale. The kinetic coefficient expresses the dependence of the interface velocity on the interface temperature. It is particularly important to know this value for the various orientations at which velocities are different. Sun et al. and Celestini et al. computed the kinetic coefficients for several interface orientations under isothermal conditions in nickel using an EAM potential [START_REF] Sun | Kinetic coefficient of Ni solid-liquid interfaces from molecular-dynamics simulations[END_REF] and in gold using the Glue potential [START_REF] Celestini | Measuring kinetic coefficients by molecular dynamics simulation of zone melting[END_REF]. Both studies considered an elongated system in one direction with a temperature gradient. They observed the difference in velocity of the solid-liquid interface as a function of orientation. The solid-liquid interface energy has been calculated using different techniques such as the cleaving technique [START_REF] Broughton | Molecular dynamics investigation of the crystal-fluid interface. VI. 
Excess surface free energies of crystal-liquid systems[END_REF], the capillarity fluctuation method [START_REF] Asta | Calculation of alloy solid-liquid interfacial free energies from atomic-scale simulations[END_REF][START_REF] Hoyt | Method for Computing the Anisotropy of the Solid-Liquid Interfacial Free Energy[END_REF], theoretical calculations based on density functional theory [START_REF] Curtin | Density-functional theory of crystal-melt interfaces[END_REF][START_REF] Marr | Planar density-functional approach to the solidfluid interface of simple liquids[END_REF] or the critical nucleus method [START_REF] Xia | Molecular dynamics studies on the correlation of undercoolability and thermophysical properties of liquid Ni-Al alloys[END_REF][START_REF] Bai | Calculation of solid-liquid interfacial free energy: A classical nucleation theory based approach[END_REF]. An accurate estimation of the solid-liquid interface energy is of great importance, as it serves as an input parameter for more realistic simulations of solidification with, for example, the phase field approach [START_REF] Boettinger | Phase-Field Simulation of Solidification[END_REF].

• Directional solidification
The first work on rapid directional solidification was published by Celestini et al. in 2000 [START_REF] Celestini | Nonequilibrium molecular dynamics simulation of rapid directional solidification[END_REF]. It focused on the growth of a solid binary alloy from its liquid phase. This approach was developed using LJ potentials, with an argon melt containing a random dispersion of solute atoms. Using non-equilibrium molecular dynamics (NEMD), the authors established a temperature gradient by heating parts of the system to different temperatures. It was not until much later (i.e., 2019) that studies on rapid directional solidification were carried out on metallic systems. Among these, one can mention studies of Al-Cu alloys [START_REF] Mahata | Effects of solidification defects on nanoscale mechanical properties of rapid directionally solidified Al-Cu Alloy: A large scale molecular dynamics study[END_REF] (see Fig. 2.15) and of CrNi-alloyed steels [START_REF] Bahramyan | Nano-scale simulation of directional solidification in TWIP stainless steels: A focus on plastic deformation mechanisms[END_REF].

• Columnar to equiaxed transition in MD
To our knowledge, only one study has dealt with the transition between columnar and equiaxed structures where equiaxed grains are formed by homogeneous nucleation. In this work, the temperature distribution reproduced that observed in additive manufacturing techniques, with a temperature rise due to the heating of the laser followed by an imposed cooling on a bed of aluminum powder. A finite element approach determined the temperature distribution, which served as input parameters for the Molecular Dynamics using Langevin thermostats. This work reported columnar grain solidification in the direction of the hottest point, as well as the formation of equiaxed grains close to the surface of the melt pool during cooling (see Fig. 2.16, from [START_REF] Kurian | Selective laser melting of aluminum nanopowder particles, a molecular dynamics study[END_REF]). Further observations were also formulated on porosity, dislocations and cracks [START_REF] Kurian | Selective laser melting of aluminum nanopowder particles, a molecular dynamics study[END_REF].
Deformation studied by MD

Most of the studies related to mechanical deformation concentrate on the mechanical properties of materials, and very few are related to milling mechanisms. For example, sliding giving rise to friction was studied in [START_REF] Delogu | Molecular dynamics investigation on the role of sliding interfaces and friction in the formation of amorphous phases[END_REF][START_REF] Kim | A simulation study of the mixing, atomic flow and velocity profiles of crystalline materials during sliding[END_REF][START_REF] Li | Molecular dynamics calculation of heat dissipation during sliding friction[END_REF]. These works highlighted the creation of an amorphous mixing zone mainly localized at the interface regions during deformation. Chen et al. also studied sliding in a Ni-Al system and observed that the softest material (Al) is the location at which the mixing zone is formed [START_REF] Chen | Molecular dynamics simulation of microstructure evolution and heat dissipation of nanoscale friction[END_REF]. Plastic deformation induced by milling was also studied through cyclic deformations [START_REF] Lund | Molecular simulation of amorphization by mechanical alloying[END_REF]. In this study, dislocation motions induced the chemical mixing of the elements.

• Mechanical activation
Mechanical activation has been studied in detail by Cherukara et al. [START_REF] Cherukara | Shock Loading of Granular Ni/Al Composites. Part 1: Mechanics of Loading[END_REF][START_REF] Cherukara | Shock Loading of Granular Ni/Al Composites. Part 2: Shock-Induced Chemistry[END_REF] in the context of the shock loading of a granular material consisting of laminated Ni-Al grains (see Fig. 2.17). The results of this MD work with an EAM potential were reported in two papers. The first one outlines the elementary mechanisms related to the compaction of the system, such as stress, temperature and localized velocity increases. The authors also highlighted the importance of pores, which play an important role during collision by locally increasing the temperature. Indeed, during the collapse of pores, high impact velocities cause the void to be filled with Ni-Al fluid. In the second paper, the influence of different impact velocities on the reaction is discussed. In the case of low impact velocities, the initial voids are filled by the deformation of adjacent grains. The Ni and Al atoms mix only partially, leading to no significant reaction in the sample. In contrast, at high impact velocities, the void filled by the fluid of Ni-Al atoms results in intimate mixing. This mixing induces a localized increase in temperature, permitting the adjacent solid grains to melt and thus start the reaction. This reaction, observed at various locations in the sample, then propagates rapidly throughout the sample. Activating factors determine the increase in reactivity of a mixture due to mechanical treatment. Nevertheless, most of the studies published in the literature have no direct link with mechanical activation, but rather isolate mechanisms encountered in mechanical processes giving rise to improved reactivity. One such example considered an aluminium layer surrounded by two nickel layers, as might be observed experimentally. In this particular system, the authors reported that premixing at the nanoscale improved reactivity when compared to samples with no premixing.
Reactivity of Ti-Al systems Many molecular dynamics studies have been performed on the reactivity of metallic systems using geometries such as nanoparticles, nanowires, nanofilms and nanometric multilayers [START_REF] Baras | SHS in Ni/Al Nanofoils: A Review of Experiments and Molecular Dynamics Simulations[END_REF]. Indeed, MD is a very pertinent method as it allows observation in-situ of the elementary mechanisms that occur at the nanometric scale, namely melting, dissolution, mixing, phase transformations, microstructure formation, nucleation, growth of intermetallics, etc. Nevertheless, few studies focus on the particular Ti-Al system. • Ti-Al coated nanoparticles Levchenko et al. studied the alloying reaction in Ti-coated Al nanoparticles with a particle size of 4.8 nm and an equivalent atomic fraction [START_REF] Levchenko | Molecular dynamics simulation of alloying in a Ticoated Al nanoparticle[END_REF]. They heated their samples to a temperature of 900 K (see Fig. 2.18,a.). At this temperature, the aluminium was completely melted, allowing for the dissolution of Ti in Al(see Fig. 2.18,b. and c.). This dissolution caused an exothermic reaction that raised the temperature to 1352 K in a self-sustaining manner. Finally, after this heat release, the system was cooled down to create a Ti-Al alloy. The authors also compared their study with one they had previously conducted on Al-coated Ti nanoparticles [START_REF] Levchenko | Molecular dynamics simulation of the alloying reaction in Al-coated Ni nanoparticle[END_REF]. With similar nanoparticle sizes, they concluded that the interfaces play an important role in reactivity. Indeed, reactivity is better in the Al-coated Ti because the aluminium layer disturbs the structure while in the Ti-Coated Al, the titanium layer is harder and confines the aluminium core. • Multilayers Studies on nanoscale multilayers in the Ti-Al system have also been carried out by Kiselev et al. [START_REF] Kiselev | Molecular-dynamics simulation of the synthesis of intermetallic Ti-Al[END_REF]. These authors considered a system featuring a Ti layer and an Al layer of equal size, 8 nm in length, 8 nm in width and 2 nm in thickness. The sample was then heated to a temperature of 1000 K, close to the melting point of Al, to start the reaction. They observed that the temperature rise was slow up to 1400 K due to the limited solubility of Ti in Al. From 1400 K, the reaction was accelerated considerably due to a much higher solubility of Ti in Al in this temperature range. The exothermic reaction was always more pronounced, allowing for a rise in temperature sufficient to reach the melting point of Ti (1700 K). The 2 components then mixed intimately, maintaining a heat release up to 2100 K. The samples were cooled to observe the formation of the Ti-Al intermetallic phase. Chapter 3 Solidification at nanoscale in the context of Ni additive manufacturing In the present chapter, we developed MD simulations of pure nickel submitted to typical non-stationary thermal conditions specific to Additive Manufacturing (AM) in order to study the columnar-to-equiaxed transition. Among metals and alloys, nickelbased alloys are commonly used in AM [START_REF] Bandyopadhyay | Recent developments in metal additive manufacturing[END_REF]. As this study is restricted to pure metal, the microstructure will result exclusively from thermal effects occurring during solidification processes. 
The first representative model (model A) was set up to describe the rapid directional solidification of a textured Ni substrate. The second model (model B) was used to investigate the effect of the cooling rate on the evolution of the microstructure. This approach allowed us to assess the effects of process parameters on the resulting microstructure at the nanoscale. In addition, MD simulations were compared to predictions based on the classical theory of solidification of pure metals and on classical nucleation theory.

Details of the simulations

The simulations were performed using the Embedded Atom Method (EAM) potential, developed by Purja Pun and Mishin for Ni-Al systems [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF]. This potential has been extensively used to study reactivity in Ni-Al nanofoils [START_REF] Politano | Reaction front propagation in nanocrystalline Ni/Al composites: A molecular dynamics study[END_REF][START_REF] Politano | Molecular Dynamics Studies in Nanojoining: Self-Propagating Reaction in Ni/Al Nanocomposites[END_REF]. In addition, this potential is known to reproduce well the physical and thermodynamical properties of nickel [START_REF] Turlo | Comparative study of embedded-atom methods applied to the reactivity in the Ni-Al system[END_REF] (see Table 3.1). We considered a pseudo 2D system in a simulation box of 100 nm × 1.4 nm × 200 nm. As shown in Fig. 3.1, the system was divided into three regions along the z direction: a lower region located between 0 and 20 nm kept at a fixed temperature and named the substrate, a middle region from 20 to 50 nm, and an upper region that was effectively affected by the laser beam. The y direction was slightly larger than twice the cut-off distance (r_cut) of the potential to avoid multiple interactions. Periodic boundary conditions were applied in the x and y directions, while the boundary condition was nonperiodic and shrink-wrapped in the z direction. The system was built with the Atomsk software [START_REF] Hirel | Atomsk: A tool for manipulating and converting atomic data files[END_REF]. The procedure used to create this polycrystalline sample is analogous to that used to obtain Voronoi cells, which has already been applied in previous works [START_REF] Politano | Reaction front propagation in nanocrystalline Ni/Al composites: A molecular dynamics study[END_REF][START_REF] Perron | Oxidation of nanocrystalline aluminum by variable charge molecular dynamics[END_REF]. Each grain had its [010] axis aligned with the y direction and could rotate along the y axis. The equations of motion were integrated using a timestep of 1 fs. The local atomic environment (i.e. fcc, hcp, or unknown) was determined using Polyhedral Template Matching (PTM). Two representative models were prepared: with and without cooling. Different temperature values of the substrate were considered in order to explore a broad range of microstructures. Typically, the first model (A) allowed us to understand rapid directional solidification in a temperature gradient, without cooling. In the second model (B), a cooling rate was applied to force the system to cool down at a controlled rate, as observed in AM solidification processes (see [START_REF] Bahramyan | Nano-scale simulation of directional solidification in TWIP stainless steels: A focus on plastic deformation mechanisms[END_REF]).
In model A, the system was first equilibrated at T_sub for 200 ps in the NPT (isothermal-isobaric) ensemble, followed by another equilibration in the NVT ensemble for 100 ps. After thermalization, the two extremities of the sample were thermostated in an NVT ensemble for 2 ns. The laser region extended from 50 to 200 nm (see Fig. 3.1) and its temperature was set at 1750 K. The temperature of the substrate was set at T_sub. Note that, in the central part of the sample, atoms evolved freely, as their equations of motion were integrated in the NVE ensemble. After 2 ns, the hot thermostat was removed but the cold thermostat was kept until the end of the simulation. We considered two different temperatures for the substrate, T_sub = 300 K and 1000 K, in order to consider two very different gradients.

In model B, the system was first equilibrated at T_sub for 200 ps in the NPT (isothermal-isobaric) ensemble, followed by another equilibration in the NVT ensemble for 100 ps. After thermalization, the two extremities of the sample were thermostated as in model A. The laser region was set at 1750 K whereas the substrate was set at T_sub for 2 ns. Then the laser thermostat was removed and replaced by a temperature ramp corresponding to cooling rates ranging from 75 K/ns to 750 K/ns (see Table 3.2 for more details). The substrate thermostat was kept until the end of the cooling process. At the end of the cooling ramp, which is when the whole system reaches the substrate temperature T_sub, the two thermostats were removed, and the simulation of the entire system was carried out in the NVE ensemble. Six different temperatures T_sub were considered: 800 K, 900 K, 1000 K, 1100 K, 1200 K, and 1300 K.

(Table 3.1 caption, partly recovered: parameters obtained with the potential of Purja Pun and Mishin [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF] and compared to experimental values or thermodynamic calculations ([a] estimated in the review of Jiang and Lu [START_REF] Jiang | Size dependent interface energy and its applications[END_REF]; [b] Thermo-Calc; [c] Assael et al. [START_REF] Assael | Reference correlations for the thermal conductivity of liquid copper, gallium, indium, iron, lead, nickel and tin[END_REF]). Experimental values are given for information only; most of these values are discussed in Thurnay [START_REF] Thurnay | Thermal properties of transition metals[END_REF]. Temperature-dependent parameters are evaluated at the bulk melting temperature T_m. † Cohesive energies are evaluated at 0 K.)

Molecular dynamics simulations enable characterization of solidification processes by means of thermal effects and nucleation. In order to evaluate quantities such as critical nuclei, nucleation rates, or thermal fluxes, it is necessary to estimate structural parameters (lattice parameter), thermodynamic properties (melting temperature, solid/liquid interfacial energy, latent heat of fusion, heat capacity, etc.) and thermal transport (heat conductivity). For this purpose, we developed separate MD simulations (see Section 2.4.2 for more details). The parameters for Ni are reported in Table 3.1. The structural order of the Ni crystalline phase is fcc. The temperature evolution of the lattice parameter a_0 was evaluated, and its value at the melting point, a(T_m), is reported. The melting temperature was determined with the isothermal two-phase method. The heating-cooling cycle procedure was used to obtain the other thermodynamic properties: the bulk heat of fusion L_V and the heat capacity C_p (see Section 2.4.2).
The heat capacity, which is proportional to the slope of the enthalpy as a function of temperature, was in good agreement with experimental data. Reasonable agreement between the experimental value and the MD calculation was also obtained for the latent heat of fusion. The large discrepancy between the experimental and MD data for the heat conductivity can easily be understood if one considers that the conduction of heat observed in this MD study with a classical potential accounts only for phonon transport and not for electronic transport. In the present approach, the phonon thermal conductivity was computed with non-equilibrium molecular dynamics (NEMD) (see Section 2.4.2 for details). In the case of our Ni system, the MD value leads to an effective characteristic length of heat conduction that is shorter than the real one by a factor √(D_th^exp/D_th^MD) = 4.2. This can be considered as an asset in simulations, because we can investigate thermal effects on MD scales, which allows the refinement and validation of existing theories and the development of new theories for nanoscale systems [START_REF] Turlo | Modeling self-sustaining waves of exothermic dissolution in nanometric Ni-Al multilayers[END_REF]. After the theoretical model has been refined, experimental values for the material properties can be inserted to generate reliable predictions, which can be further validated experimentally. In addition, such theoretical models, solved analytically or numerically, are much more computationally efficient, which allows us to cover much greater length and time scales, well beyond the MD capabilities. The solid-liquid interfacial energy γ_sl was determined using the critical nucleus method [START_REF] Xia | Molecular dynamics studies on the correlation of undercoolability and thermophysical properties of liquid Ni-Al alloys[END_REF]. Details are given in Section 2.4.2.

In AM, laser heating imposes dynamic temperature conditions that affect the solidification process. In order to understand their influence on microstructure formation, we first considered the role of a temperature gradient between the solidified material and the melt pool (model A). Next, we introduced the effect of global cooling (model B).

Solidification under free cooling (model A)

After the complete melting of the upper region through laser heating, the system was divided into 3 regions: a polycrystalline substrate from 0 to 20 nm maintained at T_sub, a melt pool at 1750 K (> T_m(Ni)), and a solid region between the two (middle region). The resulting temperature profile at t = 0 ns, after the passage of the heat source, is depicted in Fig. 3.2, showing that a steep temperature gradient was established in the solid part between 20 and 80 nm. The abrupt decrease in the fraction of fcc atoms in Fig. 3.2 corresponds to the solid/liquid interface. In the specific initial microstructure of this simulation, the system was composed of three columnar grains at the interface, as shown in the snapshot at t = 0 ns. After the release of the thermostat from the melt pool, solidification proceeded with the propagation of a planar solid/liquid interface in the z direction. We observed that the melted region shrank while the solidified region increased. This progressively induced a smoother temperature gradient in the solidified region. As one can see in Fig. 3.2, the fraction of fcc atoms in the solid region is not strictly equal to 1 because of grain boundaries and point defects.
Thus, the thickness of the front was measured as the zone with a fraction of fcc atoms between 0.02 and 0.95, which never exceeded 10 nm. The position of the solidification front, associated with the decrease in the fraction of fcc atoms, was defined as the point where the fraction of fcc atoms is equal to 0.8. This intersection corresponds to 1650 K, which is slightly lower than the melting point of Ni (1701 K). The resulting position of the solid/liquid interface is plotted in Fig. 3.3 as a function of time for two distinct substrate temperatures of 300 K and 1000 K. The crystallization velocity was found to be strongly dependent on the temperature of the substrate but with common features, including a change in slope after some time. By analyzing the regions with linear time dependence of the position of the solid/liquid interface, we extracted the corresponding solidification velocities, listed in Fig. 3.3. During a transient period (before 3 ns for T_sub = 300 K and 5 ns for T_sub = 1000 K), the instantaneous velocities were larger than the stationary ones by 2.5-3 m/s. In the stationary regime, the solidification front velocities were reduced to 11.5 m/s for T_sub = 300 K and 5.5 m/s for T_sub = 1000 K, suggesting a linear relationship between substrate temperature and steady-state solidification front velocity. Extrapolating the linear trend set by these two points to zero velocity would result in a temperature of 1642 K, close to the melting temperature of Ni (1701 K).

Solidification in pure metals is entirely controlled by thermal processes [START_REF] Porter | Phase Transformations in Metals and Alloys[END_REF]. In the absence of convection, the latent heat of solidification released during solidification is transported by conduction away from the solid/liquid interface. Conduction can take place through either the solid or the liquid, depending on the temperature gradients at the interface. For solid growth at a velocity v with a planar interface, the balance between heat flow through the solid, heat flow through the liquid, and latent heat released at the interface reads

κ_sol G_s = κ_liq G_l + v L_V    (3.1)

where G_s = (dT/dz)_sol is the local temperature gradient in the solid, G_l = (dT/dz)_liq is the local temperature gradient in the liquid, and L_V is the latent heat of fusion. In Model A, the liquid is superheated, and the temperature gradients at the interface in the liquid and the solid are both positive. If we assume that the thermal conductivity is very similar in the liquid and in the solid, κ_sol = κ_liq = κ, eq. (3.1) gives an estimate of the velocity v:

v = (κ/L_V) [G_s − G_l]    (3.2)

Since the local gradient in the melt pool is weak, the velocity mainly depends on the temperature gradient in the solidified region. The relationship (3.2) explains why the velocity is greater at the beginning of the solidification process, when the gradient is steeper (see Fig. 3.3). We also understand why the crystallization velocity increases for decreasing substrate temperatures, since the temperature gradient is larger in this case.
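As a quick numerical illustration of Eq. (3.2), the sketch below inverts the balance to estimate which gradient difference sustains the measured stationary front velocity; the phonon conductivity value is an assumed order of magnitude, not a fitted quantity:

```python
# Interfacial heat balance of Eq. (3.2): v = (kappa / L_V) * (G_s - G_l).
kappa = 30.0        # W/m/K, assumed order of magnitude for the phonon conductivity
L_V = 3.052e9       # J/m^3, latent heat of fusion from the MD estimate
v = 11.5            # m/s, stationary front velocity measured for T_sub = 300 K

dG = v * L_V / kappa                       # K/m, required gradient difference G_s - G_l
print(f"G_s - G_l ~ {dG / 1e9:.2f} K/nm")  # of the order of 1 K/nm for these inputs
```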
In MD studies, the crystallization velocity is usually estimated at constant temperature using the two-phase method [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF]. The crystallization velocity as a function of temperature and interface orientation was evaluated using this method (see Section 2.4.2). The velocity depends on the orientation of the Ni interface, v(001) > v(101) > v(111), and on the degree of undercooling. The difference in velocity as a function of orientation becomes smaller as the temperature approaches the melting temperature. In the case of directional solidification, investigated in the present work, we estimated the solidification velocity as a function of orientation in the case of an initial system composed of a single grain. The velocity was almost the same for the three orientations: v(001) ≈ v(101) ≳ v(111). This is not surprising, because the solid/liquid interface corresponds to a temperature value close to the melting point. It is also supported by the fact that the velocities measured in the polycrystalline samples (see Fig. 3.3) are in the same range as the crystallization velocities measured by the two-phase method between 1600 K and 1700 K. It is also important to note that the (111) interface was the least mobile and, thus, was not considered in our polycrystalline sample, as crystal rotations around the direction [010] did not allow us to establish such an interface. The crystallization velocity in the case of directional solidification (model A) in polycrystalline samples is fixed by the temperature gradient rather than by the temperature value at the solidification front or by the grain orientation. This observation corroborates the fact that the front remained flat in the case of a polycrystalline system.

Solidification under imposed cooling rate (model B)

In Model B, a progressive global cooling was imposed in order to mimic the thermal conditions experienced during AM. As in the previous model, A, the system was composed of a substrate maintained at a given temperature, T_sub, a melt pool, and a solidified region between the substrate and the melt pool (middle region). Both the solidified and melted regions were affected by the progressive cooling. Two parameters define the thermal conditions: the temperature of the substrate, T_sub, and the cooling rate α, measured in K/ns. All the conditions considered in the present work, i.e. the pairs of T_sub (K) and α (K/ns), are reported in Table 3.2.

The temperature profile and the associated fraction of fcc atoms are depicted in Fig. 3.4 at different times for T_sub = 1000 K and α = 375 K/ns. Time t = 0 ns corresponds to the time at which the heat source was released. We observed a solidified region ranging from 15 to 85 nm between the substrate and the solidification front. The solidification front separating the solid and liquid regions was located at 1650 K, close to the melting point. A temperature gradient of about 8.75 K/nm was established in this solid region, whereas most of the liquid region was at the melting temperature of Ni. Because of the imposed cooling, the liquid went from a superheated to a supercooled state. The temperature profile at 0.8 ns showed a peak reflecting the exothermicity of the solidification process, whereas most of the liquid region was at a temperature lower than the melting point of Ni. The solidification front was located precisely at the peak of the temperature profile. From 0.8 ns to 1.6 ns, the solidification front propagated forward, resulting in columnar grain growth. At 1.6 ns, a small peak in the fcc fraction was observed in the liquid region, reflecting the appearance of a nucleus of the crystalline phase.
At 2 ns, the exothermic peak in the temperature profile was less pronounced, because the homogeneous nucleation and growth of crystalline grains increased the overall temperature in this previously liquid region. After 2.4 ns, the upper region was completely filled with randomly oriented and randomly distributed Ni grains. By comparison with model A, the cooling imposed in model B induced a drastic change in the microstructure, with the coexistence of columnar and equiaxed grains (see Fig. 3.4b). In the case of a substrate at T_sub = 1000 K, the influence of the cooling rate α was evaluated over one order of magnitude, from 75 K/ns to 750 K/ns. We observed that the cooling rate modified the time at which the first nucleus appeared (t_2 in Table 3.2). The higher the cooling rate, the sooner nucleation occurred. Consequently, the zone of equiaxed grains ΔL_z was thicker for greater cooling rates. The position of the solidification front as a function of time is reported in Fig. 3.5. The typical curve is convex, in contrast with model A (see Fig. 3.3). This means that the instantaneous velocity progressively increased with the degree of undercooling of the melt pool. Furthermore, greater cooling rates correspond to faster solidification. After the appearance of the first nucleus (black diamond in Fig. 3.5), columnar grain solidification ceased in favor of equiaxed grains.

The only difference between model A and model B was the imposed cooling rate. In model A, solidification developed in a superheated liquid and the front kept its planar shape. In contrast, in model B, the solid grew in a supercooled liquid and the heat was conducted away from the front into the liquid, due to the negative gradient G_l < 0, as shown in Fig. 3.5 in the case of a cooling rate equal to 375 K/ns. Equation (3.1) still holds and the velocity reads

v = (κ/L_V) [G_s + |G_l|]    (3.3)

The negative temperature gradient in the liquid, G_l, significantly decreased until it reached a minimum when the first nucleus appeared. This observation explains why the instantaneous velocity increased during the associated solidification stage. After that stage, the heat released by the formation of solid grains in the liquid thwarted the cooling, which became less efficient. In Fig. 3.4 at t = 2 ns, we noted that the peak corresponding to solidification was less pronounced when compared to previous times. Each grain formation released heat in the liquid region and, consequently, the melt pool temperature increased. Figure 3.6 shows the instantaneous solidification front velocity of columnar grains in representative cases of Table 3.2. For a given cooling rate (α = 375 K/ns), the substrate temperature, T_sub, has a limited impact on the front velocity. This is due to the fact that G_s in eq. (3.3) is the local temperature gradient evaluated on the left side of the exothermic peaks (see Fig. 3.4a) rather than the global temperature gradient established in the solidified region. In the case of a columnar and equiaxed grain microstructure, the velocity increased as a function of time. This is related to the increase in magnitude of the temperature gradient in the liquid, G_l, during cooling. The increase in v is thus more pronounced for greater cooling rates. Due to the formation of equiaxed grains in the melt, the front associated with columnar grains either stopped or slowed down before stopping. For a very low cooling rate (75 K/ns), the velocity was almost constant over 4 ns before a slight increase.
The front reached the system edge before the degree of undercooling (ΔT = T_m − T = 450 K) was sufficient for homogeneous nucleation. The crystallization velocity is also a function of the degree of undercooling (see Section 2.4.2 for more details):

v = K (T_m − T)/T exp(−Q/(k_B T))    (3.4)

According to eq. (3.4), the velocity increases as a function of the degree of undercooling T_m − T. In contrast, the formation of equiaxed grains, which occurred at around the same degree of undercooling (500-600 K) for all systems (see Table 3.2), led to a drastic change in the local temperature gradient due to the release of heat associated with nucleation. The nucleation provoked a slowing down of the velocity at a characteristic time corresponding to the target degree of undercooling. As a consequence, the local temperature gradient in the liquid reached about the same minimum value (-20 K/nm), whatever the cooling rate or substrate temperature (see Fig. 3.5). The velocity at the columnar-to-equiaxed transition was in the range of 60-65 m/s, corresponding to this specific degree of undercooling. During the propagation of solidification from 0 to 1.6 ns, the front progressively lost its planar shape, developing a characteristic rounded grain shape at the interface. This is also reflected by the greater thickness of the front. We observed a typical microstructure of columnar and equiaxed grains for most of the parameters considered, except for a high substrate temperature value (see Table 3.2).

Homogeneous nucleation began when the degree of undercooling of the melt pool reached a value in the range 500 K-600 K, whatever the substrate temperature T_sub and the cooling rate α. Is it possible to interpret this observation in terms of Classical Nucleation Theory (CNT)? In CNT, the rate of homogeneous nucleation I reads [START_REF] Mahata | Understanding homogeneous nucleation in solidification of aluminum by molecular dynamics simulations[END_REF]:

I = I_0 exp(−A/(T³(ΔT)²))    (3.5)

with

A = 16π γ_SL³ T_m⁴ / (3 k_B L_V²)    (3.6)

where I_0 is a coefficient that depends on the interface temperature and free energy. The factor A is a constant that depends on the solid-liquid surface energy and the latent heat of solidification. The normalized rate I/I_0 is plotted in Fig. 3.7(a) as a function of the degree of undercooling for the parameter values corresponding to the MD simulations (see Table 3.1). The nucleation rate strongly depends on the degree of undercooling. No nucleation was expected below ΔT = 500 K. The critical temperature T_cr corresponds to the maximum of the nucleation rate: T_cr = 3T_m/5 = 1026 K, i.e. ΔT = 684 K. This estimate is in good agreement with the range of undercooling in which nucleation was observed in the MD simulations. The experimental estimate is that the critical nucleation temperature lies between 0.5 and 0.6 times the melting temperature. According to CNT, the critical radius r* of a spherical nucleus depends on the degree of undercooling ΔT = T_m − T:

r* = (2γ_sl/L_V)(T_m/ΔT)    (3.7)

The corresponding number of atoms in a critical nucleus is

N* = 4 × (4π/3) r*³/a_0³    (3.8)

where the factor 4 stands for the number of fcc atoms per unit cell. In the simulations, the degree of undercooling ΔT at which nucleation appears is in the range 500-600 K. The critical radius r* is thus between 0.66 nm and 0.75 nm, corresponding to N* between 100 and 160 atoms. The critical radius predicted by CNT with MD parameters is plotted in Fig. 3.7(b) for the degree of undercooling corresponding to the appearance of the first nucleus.
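The CNT estimates of Eqs. (3.5)-(3.8) follow directly from the Table 3.1 parameters. The Python sketch below evaluates them for a few degrees of undercooling; the lattice parameter near T_m is an assumed round value, so the printed numbers are indicative rather than exact reproductions of Fig. 3.7.

```python
import numpy as np

k_B = 1.380649e-23           # J/K
T_m = 1701.0                 # K, melting point of the potential
gamma_sl = 0.284             # J/m^2, solid/liquid interfacial energy
L_V = 3.052e9                # J/m^3, latent heat of fusion
a0 = 3.6e-10                 # m, lattice parameter near T_m (assumed)

A = 16.0 * np.pi * gamma_sl**3 * T_m**4 / (3.0 * k_B * L_V**2)   # Eq. (3.6)

def rate_ratio(dT):
    """Normalized homogeneous nucleation rate I/I0, Eq. (3.5)."""
    T = T_m - dT
    return np.exp(-A / (T**3 * dT**2))

def critical_radius(dT):
    """Critical radius of a spherical nucleus, Eq. (3.7)."""
    return 2.0 * gamma_sl / L_V * T_m / dT

for dT in (450.0, 500.0, 600.0, 684.0):
    r_star = critical_radius(dT)
    n_star = 4.0 * (4.0 * np.pi / 3.0) * r_star**3 / a0**3        # Eq. (3.8)
    print(f"dT = {dT:5.0f} K: I/I0 = {rate_ratio(dT):.2e}, "
          f"r* = {r_star * 1e9:.2f} nm, N* ~ {n_star:.0f}")
```

The maximum of I/I_0 with respect to T occurs at T = 3T_m/5, which is the critical temperature quoted above.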
3.7(b) for the degree of undercooling corresponding to the appearance of the first nucleus. We note that the critical nucleus is almost the same in all simulations and smaller than the size of the simulation box along the y axis, thus avoiding any simulation artifacts. Nucleus formation dynamics is of primary importance in the final microstructure after solidification. In order to describe the dynamics, we analyzed the instantaneous system state at different times after the first nucleation during the growth process. Nuclei were identified by deleting all non-fcc/liquid atoms and then using the cluster analysis available in Ovito. A nucleus is defined here as a set of connected atoms, each of which is within the reach of the other atoms in the same cluster. The cutoff distance was set at 2.5 Å, which is the distance between nearest neighbors in the fcc structure. The results are presented in Fig. 3.8. From 1.6 ns, the number of clusters increased, reaching its maximum at 1.8 ns. A maximum of fifteen clusters was observed. Note that the number of clusters does not correspond to the rate of nucleation (eq. (3.5)): it corresponds to the cumulative nucleation rate minus the clusters that disappeared due to coalescence. The mean cluster size remained lower than 5000 atoms. It corresponds to small clusters that are larger than the critical size. Note that the critical nucleus size decreases during system cooling, which ends at 2.5 ns (see eq. 3.7). We also followed the size of the largest cluster. At the very beginning, there is competition between clusters of similar size and the largest one may differ from time to time. At 1.75 ns, the largest cluster was clearly identified. It grew significantly because of adhesion of Ni atoms that diffused in the liquid and came into contact with existing clusters. After 1.8 ns, the number of clusters decreased following coalescence between isolated clusters or with columnar grains. Coalescence was associated with an increase in mean cluster size. Fluctuations in cluster numbers correspond to a detach/attach process between 2 or more adjacent grains. After 2 ns, the number of clusters decreased and their mean size stabilized. During the growth process of equiaxed grains, columnar grains extended in the direction of solidification (z > 0), but their growth progressively slowed due to the growth of equiaxed grains. Fluctuations in the size of the largest cluster reflect the attachment/detachment of the grain with columnar grains. At 2.3 ns, all grains were connected. At 2.5 ns, most of the system was solidified, with the coexistence of columnar and equiaxed grains. Grain boundaries were occupied by atoms with no well-defined structure (termed unknown atoms) and can be considered disordered grain boundaries. Figure 3.4b shows that nuclei appeared randomly in the melt. In the front view provided by the snapshots, we first see the formation of round nuclei. These initial critical spherical nuclei (r < 0.7 nm) grew in the form of disks due to their extent being limited in the y direction. Each grain is made of fcc (green) and hcp (red) atoms. As the difference in cohesive energy per atom for fcc and hcp is very small (less than 0.03 eV; see Table 3.1), any thermal fluctuation may cause the formation of hcp Ni atoms, leading to the formation of nanotwinned grains. Let us now consider the case of a substrate at 1300 K (see Fig. 3.9). Here we considered a system twice as long in the z direction (L z = 426 nm). 
At t = 0 ns, laser heating was removed, and the region ranging from 50 nm to 400 nm was submitted to a cooling rate of 375 K/ns. The temperature profile at t = 0.4 ns depicted in Fig. 3.9a corresponds to a liquid region at 1600 K and a solid region submitted to a temperature gradient. The limit between the solidified region and the liquid pool is delineated by the abrupt decrease in the fraction of fcc atoms. In about 1 ns, the melt pool cooled down to the substrate temperature T_sub = 1300 K. This corresponds to an undercooled melt pool. The degree of undercooling, ∆T = 410 K, did not allow spontaneous nucleation (see Fig. 3.7a): the nucleation rate is almost zero, contrasting with the formation of nuclei observed for a substrate temperature of 1000 K. When cooling ended, the system evolved under adiabatic conditions (NVE ensemble). Figure 3.9a depicts the temperature profiles and the fraction of fcc atoms at different times under these specific conditions. At 1.4 ns, the temperature peak due to the release of latent heat was evacuated into the solid and liquid parts. At 2.2 ns, this peak became higher, while the temperature gradient in the solidified region became smoother. At later times, this tendency became more pronounced, with a smooth gradient in the solid part and a sharp one in the liquid around the solidification front. In solidification theory [START_REF] Porter | Phase Transformations in Metals and Alloys[END_REF], it is well known that when a solid grows in an undercooled liquid, a planar solid/liquid interface is unstable. At 2.2 ns, we observed that protrusions located in the central columnar grain developed at the interface to form a cellular structure with fingers (snapshot in Fig. 3.9). At later times (for instance, 3 ns and 5 ns), these fingers extended, with a marked bottom and tip. In Fig. 3.9b, we plotted the temperature profiles and the fraction of fcc atoms calculated in a slice containing the tip or the bottom of the finger. A striking fact is that the thickness of the solidification front between the tip and the bottom increased significantly between 3 and 5 ns. In Fig. 3.9b, we see that the heat released at the tip can only be conducted into the liquid (Fig. 3.9b, circle in red), where the local gradient is very steep. The local temperature at the tip, T_tip ≈ 1450 K, is lower than the melting point. The heat released at the finger bottom is conducted into both the solid and liquid parts. The local temperature, T_bottom ≈ 1698 K, is close to the melting point, T_m (Fig. 3.9b, circle in red). The positions of the tip and the bottom are plotted as a function of time in Fig. 3.10(a). As expected, the velocity of the tip, v_tip, is greater than that of the bottom, v_bottom. The difference in position between the tip and the bottom increases as a function of time, indicating the progressive extension of the fingers observed in the snapshots. Heat is conducted away from the tip into the liquid, owing to the locally steep temperature gradient in the liquid part. Due to the Gibbs-Thomson effect [START_REF] Porter | Phase Transformations in Metals and Alloys[END_REF], the solidification velocity at the tip reads

v_tip = (κ_liq/L_V) (∆T_0/r) (1 − r*/r)    (3.9)

where ∆T_0 is the degree of undercooling, r* is the corresponding critical radius, and r is the characteristic size of the tip. Given the parameters (see Table 3.1), the theoretical value of the tip velocity is plotted in Fig. 3.10b. The measured tip velocity is close to 50 m/s, corresponding to a characteristic size of 6 nm, in good agreement with our simulations.
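The CNT estimates of eqs. (3.7)-(3.8) and the tip-velocity law of eq. (3.9) can be reproduced with a few lines of code. The sketch below uses placeholder values for γ_SL, L_V, a_0 and κ_liq (stand-ins for Table 3.1; only the orders of magnitude are meaningful) and illustrates two points: the critical nucleus contains on the order of 10² atoms, and v_tip(r) goes through a maximum at r = 2r* before decaying as 1/r.

```python
import numpy as np

# Placeholder parameters (assumed, standing in for Table 3.1)
gamma_SL = 0.30      # J m^-2, solid/liquid interfacial energy
L_V      = 2.5e9     # J m^-3, volumetric latent heat of fusion
a0       = 3.52e-10  # m, fcc lattice parameter of Ni
kappa_l  = 3.0       # W m^-1 K^-1, liquid conductivity (assumed)
T_m      = 1710.0    # K, melting point of EAM Ni

# --- Critical nucleus, eqs. (3.7)-(3.8) ---
for dT in (500.0, 600.0):
    r_star = 2.0 * gamma_SL * T_m / (L_V * dT)               # m
    N_star = 4.0 * (4.0 * np.pi / 3.0) * r_star**3 / a0**3   # atoms (4 atoms per fcc cell)
    print(f"dT = {dT:.0f} K : r* = {r_star*1e9:.2f} nm, N* = {N_star:.0f} atoms")

# --- Tip velocity, eq. (3.9): v_tip = (kappa_l/L_V) * (dT0/r) * (1 - r*/r) ---
dT0 = 410.0                                    # K, undercooling for T_sub = 1300 K
r_star = 2.0 * gamma_SL * T_m / (L_V * dT0)
r = np.linspace(1.01 * r_star, 20e-9, 500)     # tip radii to scan
v_tip = kappa_l / L_V * (dT0 / r) * (1.0 - r_star / r)
r_opt = r[np.argmax(v_tip)]
print(f"r* = {r_star*1e9:.2f} nm, fastest tip at r = {r_opt*1e9:.2f} nm (= 2 r*)")

r6 = 6e-9
v6 = kappa_l / L_V * (dT0 / r6) * (1.0 - r_star / r6)
print(f"v_tip(r = 6 nm) ~ {v6:.0f} m/s")
```

The maximum of eq. (3.9) at r = 2r* follows directly from dv_tip/dr = 0; with these assumed parameters, a tip of r ≈ 6 nm lies on the slower, decaying 1/r branch beyond the optimum, consistent with the measured value of about 50 m/s.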
As shown in Table 3.2, the microstructure observed after solidification depends on both the substrate temperature, T_sub, and the cooling rate, α. The value of T_sub modifies the steepness of the temperature gradient in the solid region between the substrate and the solidification front. This temperature gradient becomes smoother as the solid/liquid interface moves forward. In addition, since T_sub is the temperature of the melt pool after completion of cooling, its value sets the degree of undercooling. For low values, T_sub ≤ 1200 K, the final structure corresponds to the coexistence of columnar and equiaxed grains. For T_sub ≥ 1300 K, we observed columnar grains and the development of protrusions associated with solidification front instability. In this case, the degree of undercooling was never high enough for homogeneous nucleation, since the rate of homogeneous nucleation is almost zero.

Summary

Molecular dynamics (MD) simulations were used to provide in situ observations of solidification during additive manufacturing (AM). The thermal conditions to which the system is submitted during AM are dynamic. The temperature gradient between the solidified region and the melt pool is constantly evolving. Thus, conduction of heat away from the solid/liquid interface takes place under non-stationary conditions. In addition, thermal radiation and convective cooling of the melt pool induce a progressive cooling of the liquid region. Molecular dynamics simulations allow us to reproduce these conditions in a nano-box system corresponding to the close vicinity of the solidification front. We first analyzed the case of directional solidification in a non-stationary temperature gradient. The steepness of the temperature gradient is directly related to the substrate temperature T_sub. A low T_sub value increases the temperature gradient in the solid region (G_s), leading to rapid propagation of the solid/liquid interface in the direction of the gradient. The evacuation of heat by conduction through the solidified region determines the front characteristics (velocity and stability). The microstructure is a typical columnar structure. The solid/liquid interface remains flat, except at grain boundaries, where grain boundary grooving was observed. When the cooling of the melt pool is taken into account, the situation is completely different. Heat is conducted away from the interface in both the solidified and melted regions, leading to an intrinsically unstable solid/liquid interface, with the development of rounded columnar grains. In addition, when the undercooling induced by the cooling was sufficient, we observed spontaneous nucleation of seeds in the melt pool and growth of equiaxed grains. These randomly oriented grains impede further propagation of columnar grains. The faster the cooling, the faster the equiaxed grains appeared, leading to a large region with equiaxed grains. The propagation velocity of columnar grains is directly related to the temperature gradients on either side of the interface, in the solid (G_s) and liquid (G_l) regions. A greater cooling rate promotes the propagation of columnar grains by increasing their velocity. So the same operating parameter (cooling rate) influences the two competing solidification modes (directional solidification and homogeneous nucleation). The maximum degree of undercooling ∆T_max depends on the substrate temperature: ∆T_max = T_m − T_sub. If ∆T_max is not in the appropriate range, the nucleation rate vanishes.
In this case, the columnar grains continued to grow, but thermal instability led to grain fingering, very similar to dendrite formation. Although the length scale is nanometric, molecular dynamics simulations reproduce the complex behavior associated with solidification during additive manufacturing. With complementary simulations, we were able to evaluate the structural and thermodynamic parameters of Ni associated with the EAM potential [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF]: latent heat of fusion, solid/liquid interfacial energy, heat conductivity, etc. (See Section 2.4.2). This study allowed us to compare the behaviors at nanoscale to those expected in the framework of the classical solidification theory. The main conclusion is that predictions of the solidification theory of pure metals still hold at the nanoscale. Another interesting conclusion of the present work is the observation of CET in a pure metallic system. Here, the CET is driven exclusively by thermal conditions. The next step is to consider systems with inoculants in the melt pool, in order to further control the CET, as is usual in experiments. In addition, through the detailed characterization of relevant parameters, we could consider supplementing this study with simulations at mesoscopic scales [START_REF] Kavousi | Quantitative prediction of rapid solidification by integrated atomistic and phase-field modeling[END_REF][START_REF] Kavousi | A temperature-dependent atomistic-informed phasefield model to study dendritic growth[END_REF]. Chapter 4 Mechanical activation of metallic powders and reactivity of activated nanocomposites We proposed a description, at the atomic level, of a mechanical treatment on a mixture composed of two metallic powders. We used Molecular Dynamics to simulate the very first impact of grinding balls involving compaction and plastic deformation. Two binary mixtures were considered: Ni-Al and Ti-Al, in order to assess the influence of the mechanical and structural properties of these pure elements on the characteristics of the activated mixture. We observed the formation of nanometric mixing zones over deformation steps. The effects induced by the mechanical treatment were found to be specific for each binary system, and depended on both the mechanical and structural properties of the pure elements. Mechanical activation induces solid-state solubility, structural transformations, and defects. In this chapter, we evaluated reactivity and transport properties at different temperatures in Ni-Al and Ti-Al nanocomposites fabricated by mechanical activation. We assessed the extent of their mixing zones, together with solubility, mobility, and the formation of intermetallics within these zones. Details of the simulations The equations of motion of the atoms were integrated with a timestep of 1 fs. As shown in Fig. 4.1, we studied the mechanical deformation of a parallelepipedal simulation box filled with rounded particles1 . The typical box size was 58.6 nm x 58.6 nm x 16.8 nm. To build this initial system, we randomly distributed 12 spheres of 16.6 nm diameter in the middle plane of the simulation box. Each sphere was filled with either a monocrystalline or polycrystalline metal. 
The polycrystalline systems were obtained by Voronoï tessellation [START_REF] Politano | Reaction front propagation in nanocrystalline Ni/Al composites: A molecular dynamics study[END_REF][START_REF] Perron | Oxidation of nanocrystalline aluminum by variable charge molecular dynamics[END_REF]. The EAM potentials used for the Ni-Al and Ti-Al systems are those developed by Purja Pun et al. [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF] and Zope et al. [START_REF] Rajendra | Interatomic potentials for atomistic simulations of the Ti-Al system[END_REF]. The total numbers of atoms in the Ti-Al and Ni-Al polycrystalline systems are 1 658 148 and 2 110 942, respectively. Moreover, the mole fraction (i.e. N(Ni)/N(Al) or N(Ti)/N(Al)) is 52 at.%. The properties of the elements are summarized in Table 4.1. The system was first equilibrated at 300 K by a run of 50 ps in the NVT (canonical) ensemble, followed by 50 ps in the NPT (isothermal-isobaric) ensemble. After this preliminary step, the atoms are integrated in the NPT ensemble. In order to mimic high-energy ball milling, in which the average impact velocity of grinding balls is typically between 2 and 14 m/s, the box was shrunk at the constant velocity of 13.17 m/s, corresponding to a compression (strain) rate of 4.5 × 10¹⁰ s⁻¹ along the x-direction [START_REF] Delogu | Molecular dynamics investigation on the role of sliding interfaces and friction in the formation of amorphous phases[END_REF]. The length of the simulation box was fixed along the z-direction, and adjusted along the y-direction by the Nose-Hoover barostat to maintain zero pressure in the system. The system deformation was monitored by computing the relative compression ε_xx = |∆L/L_0| = |(L − L_0)/L_0|. In the following, we will use ε instead of ε_xx, for simplicity. The system was deformed up to 80% of its initial length in 1.78 ns before studying its reactivity at a finite temperature. The reactivity simulations were carried out at constant temperature using the anisotropic NPT ensemble along all three directions. The mechanical alloying observed during high-energy ball milling is necessarily related to the mechanical properties (e.g. Young's modulus, hardness, and tensile strength) of the elemental metals. Some of these properties can be evaluated by computing stress-strain curves. Nevertheless, hardness calculations would require nanoindentation simulations, which are complex and beyond the scope of this study. As a first approximation, stress-strain curves give a rough estimate of the hardness [START_REF] Baras | Mechanical activation of metallic powders and reactivity of activated nanocomposites: a molecular dynamics approach[END_REF] in a given binary by using the empirical linear dependence, H ≈ 3σ, reported between tensile strength and hardness [START_REF] Zhang | General relationship between strength and hardness[END_REF] (see Table 4.1). The local environment (i.e. fcc, bcc, or unknown/amorphous) was determined for each atom with the adaptive Common Neighbor Analysis (a-CNA). In the initial configuration depicted in Fig. 4.1, each atom was tagged with an indicator referencing its type and local structure.

Effects of mechanical activation at the nanoscale

The system composed of nanometric particles of K and L (Ni-Al and Ti-Al) was subjected to compaction and plastic deformation mimicking the action of grinding balls during mechanical treatment.
We focused on the 3D defects created by mechanical activation, in particular mixing zones (MZs) where the two elements are in direct contact (see Fig. 4.2) [START_REF] Baras | Mechanical activation of metallic powders and reactivity of activated nanocomposites: a molecular dynamics approach[END_REF]. Mixing zones are of special interest because they could play the role of precursor in subsequent alloying processes. We observed that the effect of deformation depends both on mechanical properties and on the structural characteristics of K and L. The main characteristics of mixing zones after deformation are summarized as follows: -In the Ni-Al system, both elements are fcc but Ni is harder than Al (H Ni > H Al ). Mechanical treatment produces thick and amorphous MZs with an excess of Al. The other crystallized atoms adopt the fcc-Ni lattice. In this case, the ductile element adheres easily to hard particles, as in a wetting process. This is corroborated by the high mobility of Al atoms that already occurs during the compaction stage. After plastic deformation is completed, hcp planar defects (stacking faults) are created in Ni particles. -In the Ti-Al system, there is a considerable difference in hardness (H Ti H Al ), as in the Ni-Al system, but the elements have a different crystallographic structure (hcp-fcc). The consequences of mechanical treatment are very different in comparison with the Ni-Al system. Mixing zones have a limited extent, with only half of the atoms being amorphous, and a moderate excess of Al. Atom mobility is limited, and occurs during the plastic deformation stage. Crystallized atoms in MZs are fcc-Al, fcc-Ti, hcp-Al, or hcp-Ti. In other words, Al(Ti) can adopt the original Ti(Al) structure. Aluminum atoms are stabilized when they are in close vicinity to Ti atoms. This feature leads to the creation of fcc seeds that could play the role of nucleus in the formation of an intermetallic. After deformation, fcc stacking faults (2D) and fcc regions (3D) are created in Ti particles. In the next section, the reactivity and transport properties at different temperatures in Ni-Al and Ti-Al nanocomposites fabricated by mechanical activation are evaluated. We assessed the extent of their mixing zones, together with solubility, mobility, and the formation of intermetallics within these zones Reactivity and mobility in activated powders Two of the systems studied are reactive: Ti-Al and Ni-Al. Mechanical activation of elemental powders was found to increase their reactivity [START_REF] Rogachev | Mechanical activation of heterogeneous exothermic reactions in powder mixtures[END_REF][START_REF] Fourmont | Reactivity of Ni-Al nanocomposites prepared by mechanical activation: A molecular dynamics study[END_REF]. This result could be attributed to the formation of premixed nanoclusters, nano-sized precursors and/or defects, including amorphous and metastable regions. In order to understand the role of activation in reactivity, we studied the evolution of MZs in a temperature range close to ignition temperature, together with atom mobility in MZs, and reactive mechanisms as a function of microstructure. We first considered the Ti-Al activated system composed of polycrystalline particles after a deformation of ε = 73%. The system evolution was investigated at temperatures ranging from 850 K to 1150 K, in 50 K steps. For this purpose, the system was heated up to the target temperature and simulations were carried out in the NPT ensemble. 
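The mixing zones analyzed below are, in essence, sets of atoms that have at least one unlike-species atom in direct contact. A minimal way to flag such atoms is sketched here; the neighbor cutoff and the synthetic configuration are placeholders, and the actual criterion used in this work may include additional conditions (e.g. on the local structure).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Synthetic configuration (placeholder): positions in nm, species 0 = Al, 1 = Ni
pos = rng.uniform(0.0, 10.0, size=(5000, 3))
species = rng.integers(0, 2, size=len(pos))

cutoff = 0.35                                   # nm, ~ first-neighbor distance (assumed)

tree = cKDTree(pos, boxsize=10.0)               # periodic boundaries
pairs = tree.query_pairs(r=cutoff, output_type="ndarray")   # (n_pairs, 2) atom indices

# An atom belongs to a mixing zone if at least one neighbor is of the other species
in_mz = np.zeros(len(pos), dtype=bool)
mixed = species[pairs[:, 0]] != species[pairs[:, 1]]
in_mz[pairs[mixed, 0]] = True
in_mz[pairs[mixed, 1]] = True

n_mz = 100.0 * in_mz.sum() / len(pos)
print(f"atoms in mixing zones: {n_mz:.1f} at.%")
```

The same per-atom flag, combined with the a-CNA structure label, gives the composition and crystallinity statistics of the MZs reported below.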
We focused on MZs because they reflect the reactivity between pure elements. The evolution of MZs as a function of time is represented in Fig. 4.3a. The number of atoms in MZs is 10 at.% after deformation. At low T, in the range 850 K-950 K, the number of atoms in MZs reached a plateau value in 5 ns. For larger T values, the number of atoms in MZs continued to increase slowly over the time scale considered. Temperature was found to strongly influence the population in MZs: temperatures close to 1150 K must be reached for the whole system to be mixed. We next analyzed in detail the system at T = 850 K, 950 K, and 1100 K, on each side of the melting temperature of the less refractory element (T_m(Al) = 870 K for the Ti-Al EAM potential [START_REF] Rajendra | Interatomic potentials for atomistic simulations of the Ti-Al system[END_REF]). What is the reason for the increase in MZs? We expected partial or complete amorphization of Al and diffusion of Ti in Al, resulting in an increase in the number of dissimilar close neighbors.

Ti-Al system

At T = 950 K (T_m(Al) < T < T_m(Ti)) and 10 ns, atoms in MZs represent more than half of the system (n_MZ = 55 at.%). Figure 4.3c shows the distribution of atoms according to their local configuration and type as a function of time. The number of unk-atoms sharply increased and reached a maximum at 4 ns, with a majority of Al atoms. After that stage, MZ reorganization took place. Some amorphous atoms of Ti and Al progressively adopted an fcc local configuration. The ratio between fcc-Ti and fcc-Al in MZs, ξ_fcc = N_fcc(Ti)/(N_fcc(Ti) + N_fcc(Al)) = 0.28, is close to the typical value required for the formation of the TiAl₃ intermetallic (ξ_fcc = 0.25). At 10 ns and later, a majority of atoms in MZs were amorphous (u_MZ = 65 at.%). We identified two transformations:
- melting of Al and partial dissolution of Ti in the liquid
- formation of the intermetallic phase TiAl₃.
Note that the D0₂₂ structure is based on a face-centered cubic structure. II. As the system temperature was increased to 950 K, we observed the rapid amorphization of Al surrounding Ti particles (0.2 ns). Titanium atoms at the particle periphery started to dissolve. We noted that a small region of fcc-Ti promoted the formation of fcc-Al in its vicinity. III. From t = 0.2 ns to 5.25 ns, we observed a progressive coarsening of hcp grains in Ti particles and the disappearance of fcc defects. Amorphous atoms in GBs either became hcp atoms or dissolved in the surrounding liquid. After 5.25 ns, the transformation of unk-Al to fcc-Al accelerated in MZs. IV. At the end, Ti particles were surrounded by a few hcp-Al, a layer of fcc atoms, and a liquid solution of Ti and Al. The final microstructure reflects the two processes described by (4.1). We now analyze the behavior at 1100 K. In less than 2.5 ns, more than 50 at.% of atoms are in MZs (Fig. 4.3a), with a majority of amorphous atoms (Fig. 4.3d). The snapshots in Fig. 4.4 (0.2 ns and 2.5 ns) depict the rapid dissolution of Ti in the liquid and the reorganization of the Ti particles, with larger grains and a reduction of fcc defects. From 2.5 ns, the number of atoms in MZs further increased, reaching 70 at.% at 12.5 ns (Fig. 4.3a). In Fig. 4.3d, we noted that the number of fcc-Al atoms increased continuously from 2.5 ns to 12.5 ns, followed by a smooth increase in fcc-Ti atoms. The incoming atoms in MZs are fcc atoms, with a ratio ξ_fcc ≈ 0.33 at 12.5 ns.
A massive recrystallization of TiAl₃ around the Ti particle took place at 15 ns. At 1100 K, reactive dissolution and formation of the intermetallic TiAl₃ were both observed. For the two temperatures above the melting point of Al, we observed the spontaneous formation of the intermetallic phase TiAl₃. The solubility of Ti in the solution, x_MZ = 0.22-0.26, is independent of temperature. At 850 K, the situation is quite different. Because the temperature is below the melting point of Al, we did not expect a global amorphization of Al atoms. A limited number of atoms (n_MZ = 35 at.%) were in MZs at 12.5 ns (Fig. 4.3a). This result indicates limited reactivity at that temperature. As shown in Fig. 4.3b, the number of unk-atoms increased, reaching a maximum at around 2 ns; the decrease in unk-atoms was then followed by an increase in fcc-atoms (Fig. 4.3b). In order to study the mobility of atoms in MZs, we computed the mean square displacement (MSD) of the atoms in MZs as a function of time:

MSD = (1/N_MZ) Σ_{i=1}^{N_MZ} |r_i(t) − r_i(0)|²    (4.2)

where N_MZ is the number of particles in the MZ, r_i(0) is the reference position of atom i, and r_i(t) is its position at time t. Note that the number of particles in MZs increases as a function of time. The MSD log-log plot at different temperatures is depicted in Fig. 4.5a. Typically, a diffusive mode corresponds to a linear dependence, MSD ∝ t. Three characteristic slopes are observed in the MSD log-log plot that delineate three stages in the system evolution (the very beginning, up to 0.1 ns, corresponds to the suppression of voids between particles):

I. First stage (≈ 0.1 ns - ≈ 2 ns): the MSD log-log plot is fitted by a straight line, characteristic of a diffusion regime in MZs composed of unk-atoms.
II. Second stage (2 ns - 8 ns): there is a change of slope and the curve bends. This reflects lower atom mobility associated with partial recrystallization in MZs.
III. Third stage (after 8 ns): there is another change of slope, corresponding to a further slowing down of mobility.

Atom mobility can be directly related to the microstructure evolution of MZs depicted in Fig. 4.3b-d. The first stage corresponds to the increase in unk-atoms in MZs. In the second stage, the number of unk-atoms decreases in favor of fcc-atoms. In the third stage, the microstructure of MZs does not evolve significantly. The diffusion coefficients are estimated for the time interval [0.1 ns - 3 ns], prior to recrystallization in MZs (Table 4.2). Figure 4.6 gives ln(D) for Ti and Al in MZs as a function of 1/T. We noted two slopes, associated either with high temperatures (950 K - 1150 K) or with low temperatures (850 K - 900 K). The linear fit gives the corresponding activation energies Q according to the Arrhenius law:

D = D_0 exp(−Q/(RT))    (4.3)

where D_0 is the prefactor and R the universal gas constant. The activation energies are given in Table 4.3. Below the melting point of Al (850 K), the diffusion coefficient is noticeably lower than at higher temperatures. The existence of two slopes in Fig. 4.6 reflects the change of microstructure in the MZs, with a transition from an amorphous system to a pure liquid. For purposes of comparison, the diffusion coefficients were also evaluated with Dictra (Thermo-Calc). The diffusion activation energy in pure liquid Al is 19 kJ/mol, close to the value measured in MD simulations.
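Equations (4.2) and (4.3) translate directly into a short analysis script. The sketch below computes the MSD of a fixed set of MZ atoms from stored positions, extracts D from the linear regime (assuming MSD = 6Dt in three dimensions), and fits Q with the Arrhenius law; the trajectory and the D(T) values are synthetic placeholders, not the data of Figs. 4.5-4.6.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trajectory of N_MZ atoms: positions[t, i, :] in nm (placeholder data)
n_frames, n_mz, dt_ns = 200, 500, 0.01
steps = rng.normal(0.0, 0.05, size=(n_frames, n_mz, 3))
positions = np.cumsum(steps, axis=0)                 # random walk stands in for MD output

# --- eq. (4.2): MSD(t) = <|r_i(t) - r_i(0)|^2> over the MZ atoms ---
disp = positions - positions[0]
msd = np.mean(np.sum(disp**2, axis=2), axis=1)       # nm^2
time = np.arange(n_frames) * dt_ns                   # ns

# Diffusion coefficient from the linear (diffusive) regime: MSD = 6 D t
slope = np.polyfit(time[10:], msd[10:], 1)[0]        # nm^2 / ns
D = slope / 6.0 * 1e-9                               # nm^2/ns -> m^2/s
print(f"D ~ {D:.2e} m^2/s")

# --- eq. (4.3): Arrhenius fit of D(T) ---
R = 8.314                                            # J mol^-1 K^-1
T = np.array([850.0, 900.0, 950.0, 1000.0, 1100.0])  # K (placeholder)
D_T = np.array([2e-10, 4e-10, 1.5e-9, 2.5e-9, 4e-9]) # m^2/s (placeholder)
a, b = np.polyfit(1.0 / T, np.log(D_T), 1)           # ln D = ln D0 - Q/(R T)
print(f"Q ~ {-a * R / 1000.0:.0f} kJ/mol, D0 ~ {np.exp(b):.1e} m^2/s")
```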
The diffusion coefficient of Ti in liquid Al at 1100 K, with mole fraction x(Ti) = 0.2, is 3.4 × 10⁻⁹ m²/s, of the same order of magnitude as the value measured in MD simulations at high temperatures. Note, however, that this molar fraction is unrealistic in a system at equilibrium, due to the limited solubility of Ti in liquid Al. In an activated system, Ti solubility in Al is greater than at equilibrium, which actually enhances reactivity. Figure 4.7 depicts the location of Ti atoms that will later be embedded in MZs. We identified Ti atoms in MZs at 2 ns (snapshot not shown), and reported their positions in black on a snapshot at 0.06 ns. Two locations were identified: internal grain boundaries and the external shell of the particle. Around the Ti particle, we observed a dissolution front in the liquid. Inside the Ti particle, GBs expanded with incoming Al and became channels for effective diffusion of Ti atoms toward the liquid solution. The GB expansion weakened particle stability, so that the particle fragmented into small Ti grains, which are more likely to dissolve. Both mechanisms promote the dissolution of Ti beyond the expected equilibrium value.

Ni-Al system

We next considered the Ni-Al activated system composed of polycrystalline particles after a deformation of ε = 73%. Mechanical treatment induces the formation of amorphous regions where atoms of Ni and Al are mixed, occupying 15 at.% of the entire system. Mixing zones generally contain the same amount of Ni and Al. In order to investigate the reactivity of the activated system, the system evolution was studied at temperatures ranging from 700 K to 1200 K, in 100 K steps. For this purpose, the system was heated up to the target temperature, and simulations were carried out in the NPT ensemble. The extent of MZs as a function of time is represented in Fig. 4.8a. The mole fraction x_MZ = N_MZ(Ni)/N_MZ, expressed in at.%, is given in Fig. 4.8b. The behavior depends on temperature:
- At low temperatures (700 K and 800 K), the number of atoms in MZs (n_MZ) remained limited and did not exceed 22 at.% at 700 K and 30 at.% at 800 K. The proportion of Ni in MZs measured by x_MZ decreased as a function of time (Fig. 4.9b). This result indicates that more and more Al atoms are in MZs. Comparing snapshots of the system prior to and after heating at 700 K confirms that the change in the mixing zones remains limited.
- At an intermediate temperature (900 K, below the melting point of aluminum, T_m(Al) = 1055 K for the Ni-Al EAM potential [START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF]), the number of atoms in MZs showed an intermediate behavior between the low- and high-temperature cases.
- The behavior of the number of atoms in MZs is more or less the same at high temperatures (1000 K, 1100 K, and 1200 K). The number of atoms in MZs (n_MZ) increased before reaching a plateau value. At 1200 K, almost 80 at.% of atoms are in MZs. As shown in Fig. 4.8b, the mole fraction x_MZ decreased abruptly before reaching a minimum. First, Al melted, and then Ni atoms progressively dissolved in the liquid solution.

The mobility of atoms in MZs was studied for the temperature range 700 K-1200 K. The MSD log-log plot is presented in Fig. 4.5b. At high temperatures (1100 K and 1300 K), we noted a diffusive regime followed by a slowing down of mobility, directly associated with the formation of the solid B2-NiAl intermetallic. By contrast, at 900 K, the diffusive regime was followed by an increase in atom mobility.
This behavior is actually associated with the polycrystallinity of Al particles: Al GBs played the role of diffusion paths and thus enhanced atom mobility. Diffusion coefficients were evaluated in the time interval [0.1 ns - 1 ns], prior to recrystallization. Results are summarized in Table 4.2, Table 4.3, and Fig. 4.9. At high temperatures, above the melting point of Al, the activation energy is close to 19 kJ/mol for pure liquid Al, the value given by Dictra (Thermo-Calc) (D = 6.95 × 10⁻⁹ m²/s at 1100 K). In the case of a liquid solution Ni + Al at 1100 K, the diffusion coefficient of Ni in the solution given by Dictra ranges from 1 × 10⁻⁸ m²/s to 2 × 10⁻⁸ m²/s, for a mole fraction x(Ni) in the range [0 - 0.3]. In the MD simulations, the estimated diffusion coefficient is smaller by one order of magnitude. Note that the melting temperature given by the Purja Pun and Mishin potential is 1050 K, which is 170 K higher than the experimental value. This temperature difference will necessarily produce discrepancies when computing diffusivities close to the expected melting point. In addition, the discrepancy could be attributed either to the amorphous state of MZs, which occurs prior to complete melting in the time range over which the diffusion coefficient was estimated, or to the progressive dissolution of Ni into the liquid solution. At low temperatures, the diffusion coefficient measured in MZs is of the order of 10⁻¹¹-10⁻¹⁰ m²/s, larger than the characteristic value in the fcc solid phase, 10⁻¹⁵-10⁻¹³ m²/s. This higher diffusivity results from the almost completely amorphous atomic arrangement of the Ni-Al MZs. We observed three different behaviors as a function of temperature:
- At high temperatures (1000 K - 1200 K), aluminum was completely melted. Large MZs with a liquid solution (Ni+Al)_liq were formed. Note that, at 1000 K, the system behaved as if it were above the melting point of Al. The nanometric size of MZs induces their melting.
- At low temperatures (700 K - 800 K), atoms diffused in the amorphous regions formed around Ni particles. In this case, MZs can be considered as an amorphous solid solution (Ni+Al)_sol. Nevertheless, the diffusion coefficients are larger than values characteristic of a solid, and of the typical order of magnitude of diffusion in GBs.
- At 900 K, below the melting point of aluminum, we noted an intermediate situation: extended amorphous regions in which Ni atoms diffuse more efficiently than at 800 K but less than in a liquid phase.

It appears that mechanical activation induced increased reactivity below the melting point of aluminum.

Summary

The mechanical treatment produces Ti-Al and Ni-Al nanocomposites whose reactivity has been investigated in the present work. For this purpose, we focused on the evolution of mixing zones at different temperatures, just below and above the melting point T_m(Al) of the less refractory element (here Al). We also quantified the mobility in MZs by measuring the diffusion coefficients. The typical behavior can be summarized as follows:
- For the Ti-Al system, MZs increase during annealing. Annealing leads to the coarsening of Ti grains inside Ti particles, with suppression of defects and grain boundaries. For T < T_m(Al), the reactivity is limited, with partial amorphization of Al, formation of an fcc solid solution, and dissolution of Ti in amorphous Al. The diffusion coefficient prior to massive recrystallization corresponds to typical mobility in a dual-phase system: here, an amorphous phase.
For T T m (Al), the system is more reactive, with complete melting of Al, dissolution of Ti into the liquid solution, and formation of the intermetallic TiAl 3 . The diffusion coefficient prior to crystallization corresponds to a melt. The mechanical treatment promotes the solubility of Ti in Al in comparison with equilibrium. -For the Ni-Al system, annealing promotes broad MZs. The reactive behavior is directly related to the temperature. For T < T m (Al), the reactivity is limited. Mixing zones remain narrow, with the dissolution of Ni in the amorphous solid solution. The diffusion coefficient in MZs is of the same order of magnitude as diffusion in grain boundaries. For T ⇠ T m (Al), the reactivity is more pronounced, with broader MZs composed of unk-Al and unk-Ni. The diffusion coefficient in MZs remains smaller than in a liquid. For T > T m (Al), the activated system becomes very reactive, with the full amorphization of Al. Mixing zones extend over Al regions with effective dissolution of Ni into the liquid solution. The intermetallic B2-NiAl is formed around the solid Ni particles. The diffusion coefficient in MZs becomes close to that of a liquid. Microscopic simulations provide an interesting tool to observe the effects of a mechanical treatment, and to study the reactivity of nanocomposites fabricated by a mechanical process. This work could be extended in order to take into account the activation associated with the friction of sliding interfaces. It would be interesting to estimate the efficiency of amorphization and chemical mixing, in the case where heat dissipation is taken into account. Another important issue is to understand the fragmentation process during HEBM. Alternative approach such as Extended Finite Element method [START_REF] Khader | Key Parameters for Fracture Toughness of Particle/Polymer Nanocomposites; Sensitivity Analysis via XFEM Modeling Approach[END_REF] could be undertaken to handle fracture and crack propagation at the mesoscopic scale. Chapter 5 Reactivity of Ti-Al reactive laminated particles: Experimental study and molecular dynamics simulations In this chapter, we focused on the reactivity of the Ti-Al system in the case of reactive laminated particles (RLPs) produced by high energy ball milling (HEBM), by combining an experimental investigation and MD study. In the experimental part that was performed by our collaborators, the aim is to detect the key aspects of the exothermic reaction as the reaction onset temperature and the characteristic activation energy. In the MD part developed during my PhD, the understanding of the reaction mechanism is directly related to the question of phase transformations associated with the selfpropagating reactive wave. To handle this problem by means of molecular dynamics, the atomic and microstructure evolution inside the stacked layers was simulated at a fixed temperature. Two representative systems were prepared: a small system (reference system) and a thick system. Different initial temperatures close to the melting temperature of Al were imposed for the same stoichiometry N Ti /N Al ⇠ 3 with an excess of Ti. This allowed accurate description of reactive dissolution and crystallization of intermetallic compound during the progress of the reaction. Experimental study Composite Ti/Al powders were produced by means of HEBM technique in the planetary mill "Activator-2S" ("Activator", available at ISMAN laboratory inRussia), from the mixture of Ti and Al elemental powders. 
The powders were placed in steel jars together with steel balls. The volume of the jar was 250 ml (filled with Ar at 4 bar), the diameter of the steel balls 6 mm, the ball-to-mixture mass ratio 20:1, the rotation speed of the sun disc 200 rpm, and the milling time 120 min. After HEBM, the powder consisted of bimetallic particles in which flattened Ti inclusions were embedded in an Al matrix: thus, Al layers separated Ti islands in each bimetallic particle. The reactivity of the composite powders was evaluated by heating mini-pellets (3 mm in diameter, 0.3-1.0 mm thick), consolidated from the composite powder, in an Ar atmosphere, and measuring the reaction onset temperature T_i. Details of the methods were published earlier [START_REF] Nepapushev | Production of Rounded Reactive Composite Ti/Al Powders for Selective Laser Melting by High-Energy Ball Milling[END_REF]. The microstructure and the distribution of the elements inside the bimetal particles were studied using SEM and EDS analyses. Based on the Kissinger method [START_REF] Kissinger | Reaction Kinetics in Differential Thermal Analysis[END_REF], the activation energy E was evaluated from the formula

ln(b/T_i²) = const − E/(R T_i)    (5.1)

where b = dT/dt is the heating rate, T_i the reaction onset temperature, and R the gas constant. We used T_i instead of the temperature corresponding to the maximum reaction rate, T_m, because, as distinct from differential scanning calorimetry, the temperature in our experiments increased very sharply after reaction initiation. Thus, the value of T_i was close to T_m, which allowed the evaluation of E from eq. (5.1). After HEBM, both metals (Ti and Al) formed bimetal particles (Fig. 5.1a). The more ductile metal, Al, forms the matrix, in which the Ti layers (inclusions) have an irregular shape. No intermetallic phases were observed at the boundaries between Al and Ti (Fig. 5.1b). At the same time, some intermixing of these two metals was revealed by the line-scan mode of EDS microanalysis (Fig. 5.2). A typical microstructure of the intermixed areas looks similar to the metastable phases that formed in the Ni-Al system during HEBM due to intense friction [START_REF] Rogachev | Influence of the high energy ball milling on structure and reactivity of the Ni+Al powder mixture[END_REF]. This allows us to assume that metastable phases (perhaps solid solutions) can also be formed by HEBM in the Ti-Al system, which should decrease the reaction initiation temperature in this system. In order to measure the reaction initiation temperature, the mini-pellets were placed in an h-BN crucible and heated using a carbon-strip heater with an average rate from 18 K/s up to 119 K/s. A typical temperature-time profile of the process is shown in Fig. 5.3. When the temperature of the sample reached some value T_i, the exothermic reaction started and increased the temperature sharply. The heating regime below T_i does not strictly follow a linear law; however, an average heating rate can be calculated as a rough approximation:

b = <dT/dt> = (T_i − T_0)/t_i    (5.2)

where T_i is the reaction onset temperature, T_0 the initial temperature of the sample, and t_i the heating time. The carbon heater was switched off 1 second after exothermic reaction initiation, and the sample cooled down. The value of T_i increased continuously with increasing heating rate b (Fig. 5.4). Based on these data, the Kissinger equation (5.1) was applied to evaluate the activation energy of the process:

E ≈ −R d[ln(b/T_i²)]/d(1/T_i)    (5.3)

Two linear regions appear in the Kissinger plot (Fig.
5.5) that correspond to the values of activation energy E 1 =(28.4 ± 7.3) kJ/mol for higher temperature, and E 2 =(92.1 ± 14.4) kJ/mol for lower temperature regions. A transition point between these two regions corresponds to a temperature of about 950 -960 K, which is close to the melting point of Al (933.5 K). Thus, we can assume that limiting stage of the process changes, when T i exceeds melting temperature of Al. Probably, under slow heating conditions (smaller b), reaction has enough time to form some solid intermetallic product on the boundary between Ti and Al, which later limits the reaction rate even after Al melts. At fast heating (larger b), no solid product appears, and the reaction is limited only by diffusion in the melt, with small energy of activation. the other hand, some experimental study of interfacial reactions in Ti/Al multilayers during diffusion welding, shown that effective activation energy of diffusion decreases dramatically with decreasing atomic fraction of Ti [START_REF] Acoff | Interfacial reactions of titanium and aluminum during diffusion welding[END_REF]. Thus, the activation energy for diffusion in the TiAl phase was determined as 250 kJ/mol, while diffusion in the TiAl 3 phase had activation energy about 100 kJ/mol. Moreover, TiAl 3 was a major phase formed during the diffusion welding [START_REF] Acoff | Interfacial reactions of titanium and aluminum during diffusion welding[END_REF]. Recent works confirmed the leading role of TiAl 3 in the interface reaction between Ti and Al below melting point of Al [START_REF] Thiyaneshwaran | Nucleation and growth of TiAl3 intermetallic phase in diffusion bonded Ti/Al Metal Intermetallic Laminate[END_REF][START_REF] Hossein | Microstructure and Kinetics of Intermetallic Phase Formation during Solid State Diffusion Bonding in Bimetal Ti/Al[END_REF]. Measurements of the linear growth rate of TiAl 3 layer in the temperature range 823 -923 K gave activation energy 128.7 kJ/mol [START_REF] Hossein | Microstructure and Kinetics of Intermetallic Phase Formation during Solid State Diffusion Bonding in Bimetal Ti/Al[END_REF]. Since the forming has polycrystalline structure with well-developed grain boundaries network [START_REF] Thiyaneshwaran | Nucleation and growth of TiAl3 intermetallic phase in diffusion bonded Ti/Al Metal Intermetallic Laminate[END_REF], the grain boundary diffusion must be also taken into account. Basing on to the normal parabolic growth of the TiAl 3 layer (which was the only phase appeared on the Ti/Al boundary at 823-923 K), activation energy of 33.1 kJ/mol was obtained for the low temperature grain boundary diffusion controlled growth, and 296.2 kJ/mol -for the high temperature bulk diffusion controlled growth [START_REF] Mirjalili | On the kinetics of TiAl3 intermetallic layer formation in the titanium and aluminum diffusion couple[END_REF]. Overall activation energy of 76.8 kJ/mol was accepted for the whole region of the applied annealing temperatures. It is worth noting that all these data were obtained using isothermal annealing method. Comparison with our data measured at non-isothermal conditions allows assumptions that the value of 92.1 kJ/mol corresponds to combination of solid-state bulk and grain boundary diffusion, while 28.4 kJ/mol may correspond to grain diffusion solely or to liquid phase diffusion in the Al melt. 
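In practice, the Kissinger analysis of eqs. (5.1)-(5.3) amounts to a linear fit of ln(b/T_i²) against 1/T_i. The sketch below illustrates this fit on invented (b, T_i) pairs; the numbers are placeholders and do not reproduce the measured data of Fig. 5.4, and the two temperature regions of Fig. 5.5 would simply be fitted separately.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Placeholder (heating rate b in K/s, onset temperature T_i in K) pairs;
# these are invented values, not the experimental data.
b  = np.array([18.0, 35.0, 60.0, 90.0, 119.0])
Ti = np.array([920.0, 945.0, 975.0, 1000.0, 1020.0])

x = 1.0 / Ti
y = np.log(b / Ti**2)

slope, intercept = np.polyfit(x, y, 1)   # eq. (5.1): y = const - (E/R) * x
E = -slope * R
print(f"apparent activation energy E ~ {E/1000.0:.0f} kJ/mol")
```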
From analogy with other metallic melts, we may expect that activation energy of diffusion in the Al melt is approximately 5 times smaller that that for the solid-state diffusion [START_REF] Gòrecki | Changes in the activation energy for self-and impurity-diffusion in metals on passing through the melting point[END_REF]. The following MDS results allow deeper insight in the mechanism of interaction. Numerical study Design of the numerical experiments The system was analyzed at the atomistic level using the EAM interatomic potential developed by Zope and Mishin in 2003 [START_REF] Rajendra | Interatomic potentials for atomistic simulations of the Ti-Al system[END_REF]. The potential of Zope and Mishin was initially fitted for Al, α-Ti and γ-TiAl using extended experimental and ab-initio databases and its transferability was then tested by considering other intermetallic compounds (i.e., L1 0 -TiAl, L1 2 -TiAl 3 and D0 22 -TiAl 3 ). A preliminary step consisted in computing the melting temperature (see 5.1) using a classical liquid-solid coexistence (two-phase) method [START_REF] Turlo | Comparative study of embedded-atom methods applied to the reactivity in the Ni-Al system[END_REF]. Thermal expansion coefficient was also determined to characterize the behavior of the system as a function of temperature (see Section 2.4.2). z" x" y" In order to model the Ti-Al reactive interfaces in RLPs, a simplified bilayer Ti-Al-Ti system was considered. The initial system (see Fig. 5.6), referred to here as the small sample, is made of an inner layer of fcc-Al (7 atomic planes containing 10 188 Al atoms) in between two outer layers of hcp-Ti (8 and 9 atomic planes each containing all together 31 280 atoms) with N Ti /N Al ⇠ 3. The size of the simulation box is L x = L y = 11.8 nm in the x-direction and L z = 6 nm in the z-direction. The interface is oriented normal to the [002] direction of the Ti layers and to the [START_REF] Thurnay | Thermal properties of transition metals[END_REF] direction of Al layer. Periodic boundary conditions are applied in all directions. The empty space around the Al layer allows the system to avoid any artificial constraint due to the application of periodicity on the Al and Ti [START_REF] Baras | Molecular dynamics simulations of nanometric metallic multilayers: Reactivity of the Ni-Al system[END_REF]. Both Al and Ti lattice parameters were determined as a function of temperature on bulk systems. Hence, the Ti-Al-Ti system was created with the appropriate lattice parameter corresponding to the initial temperature. The simulation is thermalized in the canonical statistical ensemble (NVT) over 400 ps at the initial temperature using a Nosé-Hoover thermostat (damping parameter of 0.1). The simulation is then carried out in the microcanonical statistical ensemble (NVE) over more than 15 ns. NVE simulation corresponds to adiabatic conditions: no external reservoir is interacting with the system. This procedure allows us to observe the spontaneous dynamics of the system that may imply a variety of elemental mechanisms. The time step to integrate the equation of motions with a Verlet algorithm was fixed at 0.001 ps. A thick sample was also considered (see Table 5. -The system is thermalized in the NVT ensemble before the simulation in adiabatic conditions (NVE ensemble). -The system is then cooled down by step of 5 K in the NPT ensemble during 300 ps and the NVT ensemble during 100 ps. 
After each step, the simulation is carrying out in the microcanonical ensemble (NVE) in order to observe the spontaneous dynamics of the system. The cooling process begins at 1315 K and ends at 1200 K. Different indicators are used to follow the evolution of the system. The number density profiles along the z axis gives a good indication of the crystallinity state and of the local composition in Ti or Al in each slice. Well-defined peaks are associated with a system structured in atomic planes. Each atom was labeled by two indices: one associated with the chemical species (type) and one with its local lattice structure (Ackland and Jone's indicator, see Section 2.4.1). It is also useful to compute the potential energy per atom, which is very sensitive to the local environment of a given atom. Global indicators were also followed during the evolution of the system: temperature and the stoichiometry in the inner layer. The stoichiometry is defined as the ratio ξ = N Ti /(N Ti + N Al ). Reference sample results The reference system was prepared at different initial temperatures, close to the melting temperature of Al, T m (Al) = 870 K (see Table 5.1). For initial temperatures below the melting temperature of Al, the system remains stable. Just a small number of defects appear in the inner layer. The stability of the interface is quite unexpected as compared to prior works on Ni-Al system where exothermic phase transformations already appear in solid-state [START_REF] Baras | Molecular dynamics simulations of nanometric metallic multilayers: Reactivity of the Ni-Al system[END_REF]. This is probably due to the very low misfit between (002) Ti-plane and ( 111) Al-plane (see Fig. For an initial temperature of 950 K, the situation is completely different. During the thermalization at 950 K, the Al inner melts and wets the Ti-free surface. During the simulation in adiabatic conditions, the temperature is followed as well as the fraction (at%) of atoms in the different local configurations (fcc, hcp or unknown). The typical shape of curves in Fig. 5.8 suggests that the dynamics of the system can be divided into 4 stages: I Just after thermalization, from t = 0.4 to 11 ns (stage I), the number of amorphous atoms (unknown) and fcc atoms is more or less constant. Amorphous atoms correspond to liquid atoms. The fcc-atoms are located at free surfaces or interfaces. The temperature slowly increases up to 1050 K and ξ reaches 0.18. II During stage II (from t = 11 to 12 ns), the number of hcp-atoms starts to decrease. The temperature reaches 1100 K and ξ exceeds 0.2. This stage corresponds to a rapid dissolution of Ti in the inner layer. III The stage III is short (less than 0.2 ns, from t = 12.24 to 12.46 ns). More hcp-Ti atoms disappear together with a sudden and short decrease in amorphous atoms. This corresponds to a reorganisation of the inner layer in fcc atoms. The burst in temperature is associated with the crysallization of the inner layer with a stoichiometry around 0.23. IV In the last stage (IV), the temperature reaches a plateau. Other indicators are stable. The system is completely crystallized and its structure doesn't changes over time. The number density profile at t = 0.4 ns in Fig. 5.9a shows the amorphization of Al atoms in the inner layer due to melting. Although the temperature was larger than the melting temperature of Al, the Al atoms remained arranged in layers close to the interface. 
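The two indicators used throughout this analysis, the number density profile along z and the stoichiometry ξ of the inner layer, can be computed with a few lines of code. The sketch below uses synthetic coordinates and assumed z bounds for the inner layer; it only illustrates the bookkeeping, not the actual system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Ti-Al-Ti configuration (placeholder): z in nm, species 0 = Al, 1 = Ti
n_atoms = 40000
z = rng.uniform(0.0, 6.0, size=n_atoms)
species = np.ones(n_atoms, dtype=int)                   # Ti outer layers
inner = (z > 2.0) & (z < 3.5)                           # assumed inner-layer bounds
species[inner] = (rng.random(inner.sum()) < 0.2).astype(int)  # ~20% Ti dissolved in Al

# --- Number density profiles along z (0.05 nm slices) ---
bins = np.linspace(0.0, 6.0, 121)
rho_Al, _ = np.histogram(z[species == 0], bins=bins)
rho_Ti, _ = np.histogram(z[species == 1], bins=bins)

# --- Stoichiometry of the inner layer: xi = N_Ti / (N_Ti + N_Al) ---
N_Ti = np.count_nonzero(inner & (species == 1))
N_Al = np.count_nonzero(inner & (species == 0))
xi = N_Ti / (N_Ti + N_Al)
print(f"inner layer: N_Ti = {N_Ti}, N_Al = {N_Al}, xi = {xi:.2f}")
```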
During the adiabatic evolution of the system, the Ti atoms invade the inner layer as the stoichiometric factor ξ progressively increases, while the Al atoms are reorganized in planes (see Fig. 5.9b). At the interfaces, the transient formation of a solid solution (Ti+Al)_ss was observed. Figure 5.9c demonstrates the recrystallization of the inner layer (6 planes) with a well-defined ratio between Al and Ti atoms. This ratio is slightly less than 0.25, the value expected for the formation of TiAl₃. The Ackland and Jones analysis shows the formation of an fcc-like structure, except for a twin defect formed by planes with an hcp-like structure. The increase in temperature is associated with two processes: the reactive dissolution between Ti and Al, and the spontaneous crystallization of a new phase. Reactive dissolution plays an important role in the autocatalytic character of the reaction, because an increase in temperature favors further dissolution of Ti in Al. In this simulation, the dissolution process was interrupted by the phase transformation into an intermetallic compound. Indeed, the stoichiometry in the inner layer increases up to a plateau value of 22 at.% of titanium. The phase transformation could be associated with an exothermic reaction giving rise to the formation of a new compound. If we suppose that TiAl₃ is formed,

3Al + Ti → TiAl₃    (5.4)

the energy balance expressed in terms of cohesive energies reads

(1/4) [3 E_0(Al) + E_0(Ti)] = E_0(TiAl₃) + q    (5.5)

where q = 0.30 eV is the excess energy released by the atomic rearrangement or, equivalently, the formation energy of the compound TiAl₃. The intermetallic TiAl₃ is characterized by a D0₂₂ structure at high temperature, but the potential developed by Zope and Mishin [START_REF] Rajendra | Interatomic potentials for atomistic simulations of the Ti-Al system[END_REF] gives a very small difference between the two structures D0₂₂ and L1₂ of TiAl₃: the L1₂ structure is only 0.001 eV/atom lower in energy than the D0₂₂ structure. According to this observation, the distinction between these two structures cannot really be established in the context of the simulation. Another simulation was carried out with an initial temperature of 1000 K. The number of atoms and the geometry remained the same, and the system was created with the appropriate lattice parameter corresponding to 1000 K. As shown in Fig. 5.10, the dynamics follows the four stages described previously. During stage II, the temperature rise is even more pronounced. Dissolution is more extensive here, and the stoichiometry reaches a plateau value of 34 at.%, with a maximum temperature T_ad = 1300 K. The evolution towards the final temperature occurs over a shorter time.

Thick sample results

In order to investigate the size effect, a thick sample was created at 1000 K. We performed the same analysis as previously. In this case, Fig. 5.11a shows only three stages: (I) a slow increase in temperature with no configuration change, (II) rapid dissolution of Ti in the inner layer, and (III) saturation toward the adiabatic temperature. The dissolution of Ti in the inner layer produces a release of heat. The adiabatic temperature is 1315 K and the saturation of the liquid solution is ξ = 0.35, as shown in Fig. 5.11b. No spontaneous crystallization was observed. After the NVE simulation in adiabatic conditions, the inner layer is a liquid solution (0.35 Ti + 0.65 Al)_liq. Figure 5.12 at t = 5 ns gives the number density profile and the corresponding snapshot of the system during stage (I).
Although the inner layer is liquid Al, a structural ordering close to the interfaces was noticed. Figure 5.12 at t = 20 ns corresponds to the coexistence of solid outer layers surrounding the liquid solution of Ti and Al. The profile shown in Fig. 5.12 at 3.5 ns after the start of the cooling demonstrates the complete reorganization into atomic planes, with a constant ratio between Ti and Al in each plane. All atoms of the inner layer have an fcc local configuration, as shown in Fig. 5.11c. Dissolution is the main process in the self-sustained character of the reaction in the Ti-Al system. Above the melting point of Al, Ti dissolves in the liquid Al layer. Figure 5.13 gives the coverage of 4 planes below the lower interface. A similar behavior was observed for the upper interface. The following features were identified:
1. The Ti atoms close to the interfaces leave the solid substrate more easily than atoms located deeper. Nevertheless, the number of liquid atoms (marked as amorphous atoms) does not significantly increase.
4. In about 10 ns, 50 at.% of the Ti of the first atomic plane below the interface is dissolved. At 12 ns, the number of Ti atoms reaches a minimum value. The remaining Ti (< 20 at.%) is located on the free surfaces.
5. When the number of Ti atoms in plane (# -1) is significantly reduced, the second plane (# -2) starts to be depleted, and the number of Ti atoms in plane (# -1) increases again, reaching a value of 32 at.%. The dissolution process operates plane by plane, as shown in Fig. 5.13.
6. After the dissolution of 70 at.% of the Ti atoms in 3 planes, the system reaches the state Ti_sol - (Ti_x + Al_{1-x})_ss - (Ti_y + Al_{1-y})_liq.

In the case of liquid mixtures, titanium and aluminum are characterized by a negative mixing enthalpy:

∆H_mix(T) = H_{Ti-Al}(T) − [(1 − x_Ti) H_Al(T) + x_Ti H_Ti(T)]    (5.6)

At the atomistic level, the scheme in Fig. 5.14 shows the elemental process associated with dissolution: a liquid Al atom is exchanged with a solid Ti atom. As an example, a set of atoms was followed. For instance, the potential energy of an Al atom in the liquid layer is around −3.28 eV and decreases to −5.48 eV when it substitutes a Ti atom.

Summary

The reactivity of the Ti-Al system has been studied by means of molecular dynamics simulations and experiments. The main features are as follows:
- No reaction occurs at a temperature below the melting point of Al. The interface between the (002) Ti plane and the (111) Al plane is very stable due to the small mismatch between the two structures. In addition, the metallic radius of Ti is slightly larger than the metallic radius of Al:

r_metal(Ti) = 1.47 Å > r_metal(Al) = 1.43 Å    (5.8)

This fact explains the lack of solubility of Ti in solid Al.
- When the system is heated above the melting temperature of Al, we observed an exothermic self-sustained dynamics until a plateau value of T is reached. The behavior is similar to the phenomenon of adiabatic explosion.
- The first phase transformation to occur at high temperature is the amorphization of the Al inner layer, except close to the interfaces, where Al exhibits a structural ordering.
- The dissolution of Ti in Al is associated with an exothermic mixing of the two metals. The heat released by dissolution is the main heat source.
- The dissolution operates as an exchange of Ti and Al atoms at the interfaces. There is no net flux at the interface.
- The adiabatic temperature depends on the initial temperature value. The same is true for the saturation value of the liquid solution.
- The spontaneous formation of an intermetallic compound was only observed in the small systems. The phase transformation is very rapid (less than 0.2 ns).
- In the large system, the crystallization of the intermetallic compound occurs during the cooling simulation at 2988 K. The growth of the intermetallic develops plane by plane, starting from the interface.
- The intermetallic compound is (Ti_y Al_(1-y)) with y ∼ 0.

Chapter 6 General conclusions and perspectives

This work focused on the numerical modelling of powder metallurgy processes by means of molecular dynamics (MD) simulations. Two processes were investigated, namely the solidification process in the context of additive manufacturing and the mechanical treatment of powders (i.e., mechanical activation due to milling). The results of the MD simulations are reported in Chapters 3 to 5. These simulations required 7 million hours of computing time on the Linux cluster at the computer center of the University of Burgundy. The simulations were performed using many-body potentials that are well suited for metallic systems. These potentials are fitted on specific physical properties provided by experimental and numerical (ab-initio calculations) data. The reliability of MD simulations depends on the accuracy of the interatomic potentials. As explained in Chapter 2, we performed specific calculations to evaluate the validity of the potentials used in this work. Moreover, in order to interpret the results, we computed useful properties: the latent heat of fusion, the melting temperature, the thermal conductivity, the thermal expansion and the solid/liquid interface energy.

Chapter 3 deals with the solidification of Ni in the context of additive manufacturing. For this purpose, two specific systems were designed to model the directional solidification of the melt pool above a polycrystalline substrate. In the model without imposed cooling, columnar growth with a planar solid/liquid interface was observed whatever the value of the substrate temperature. When a cooling rate was imposed, we observed three different microstructures according to the temperature gradient, the substrate temperature and the cooling rate. Columnar microstructures followed by the formation of equiaxed grains were observed when the undercooling in the liquid exceeded 500 K with a sufficiently high cooling rate. The formation of equiaxed grains is directly related to the homogeneous nucleation event (a minimal illustration of the corresponding classical nucleation estimates is sketched at the end of this chapter summary). On the other hand, when the cooling rate was too low (in our case 75 K/ns), the columnar growth reached the end of the simulation box: the undercooling in the liquid was too small (below 500 K) to observe any nucleation event. Moreover, a substrate temperature of 1300 K resulted in the formation of cells. As discussed in the corresponding chapter, instabilities of the front in pure metals are directly related to the negative gradient in the liquid. These instabilities progressively give rise to the formation of protrusions which transform into cells. Results are summarized in Fig. 6.1 according to the cooling mode, substrate temperature and cooling rate. The originality of the present work relies on the interpretation of our numerical results, obtained at the nanoscale, with theories of solidification and nucleation developed at the micro/macroscale.

In the last two chapters, the reactivity of composite powders was studied via two different approaches. In Chapter 4, the reactivity and diffusion of titanium-aluminum (Ti-Al) and nickel-aluminum (Ni-Al) systems after severe plastic deformation due to mechanical activation by milling were studied.
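As announced above, the link between undercooling and homogeneous nucleation can be illustrated with the textbook classical nucleation theory (CNT) expressions. The sketch below is only a generic illustration: the functional forms are the standard ones, and the material parameters (interfacial energy, latent heat per unit volume, melting temperature) are placeholders that would have to be replaced by the values actually computed in Chapter 2 for the Ni potential.

```python
import math

kB = 1.380649e-23  # J/K

def cnt_estimates(gamma_sl, L_v, T_m, dT):
    """Classical nucleation theory estimates for a pure metal.

    gamma_sl : solid/liquid interfacial energy (J/m^2)
    L_v      : latent heat of fusion per unit volume (J/m^3)
    T_m      : melting temperature (K)
    dT       : undercooling T_m - T (K)
    Returns the critical radius (m), the barrier (J) and the Boltzmann factor.
    """
    T = T_m - dT
    dG_v = L_v * dT / T_m                                      # driving force per unit volume
    r_star = 2.0 * gamma_sl / dG_v                             # critical nucleus radius
    dG_star = 16.0 * math.pi * gamma_sl**3 / (3.0 * dG_v**2)   # nucleation barrier
    boltzmann = math.exp(-dG_star / (kB * T))                  # factor in I = I0 * exp(-dG*/kT)
    return r_star, dG_star, boltzmann

# Placeholder values, for illustration only.
gamma_sl, L_v, T_m = 0.3, 2.5e9, 1710.0
for dT in (300.0, 500.0, 600.0):
    r_star, dG_star, f = cnt_estimates(gamma_sl, L_v, T_m, dT)
    print(f"dT = {dT:4.0f} K : r* = {r_star*1e9:.2f} nm, exp(-dG*/kT) = {f:.3e}")
```

The strong increase of the Boltzmann factor with undercooling is the reason why equiaxed grains only appear once the liquid is deeply undercooled, while at small undercooling the columnar front simply keeps advancing.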
In Chapter 5, we designed a laminated system appropriate to study the reactivity of titanium-aluminum reactive laminated particles. In order to investigate the enhanced reactivity due to mechanical activation, we elaborated a system containing a large number of interfaces and defects as observed after high-energy ball milling. The action of the grinding balls on the powder is very complex. Ball-to-ball and ball-to-wall collisions are an important source of modification to the initial microstructure of the powder. To understand such modifications, we developed a mechanical impact model on a set of metallic particles at the microscopic scale in order to identify the modifications induced in the microstructure as a result of mechanical activation and to evaluate the reactivity of the activated powders. In a first step, polycrystalline particles were subjected to progressive uniaxial compression to produce severe plastic deformation. In a second step, the deformed samples were heated to the temperatures of interest (i.e., around the melting temperature of aluminum). We were able to demonstrate that compression induces compaction and severe plastic deformation that creates complex mixing zones between elements at the surface contact between particles. The contacts between the particles create defects such as twins, dislocations or stacking faults in fcc-structures (Ni particles). After the plastic deformation, initial hcp-Ti particles show a high number of fcc atoms, indicating the presence of a huge number of linear/planar defects. Temperature setting allowed us to study the diffusion in activated Ti-Al and Ni-Al powders; the diffusion coefficients were evaluated in the mixing zones for these systems. Reactivity was studied in order to better understand mechanisms including the formation of intermetallics, grain coarsening, etc. In Chapter 5, we considered a simplified model consisting of an aluminum layer between two titanium layers in order to mimic lamellar particles obtained via a planetary ball mill. Different mechanisms according to temperature were in this way revealed. For temperatures below the melting temperature of aluminum, the system remained stable and no exothermic reaction appeared in the solid state. For temperatures above the melting temperature of aluminium, a self-sustained behavior arising from the exothermic dissolution of titanium in the liquid aluminium layer was observed. The dissolution remained limited by the low solubility of titanium in liquid aluminium. The exothermic reaction continued until the recrystallization of the system and the formation of the intermetallic TiAl 3 . In order to understand the size effects (thickness of the layer), we considered a thicker sample. The system reacted similarly to the smaller sample, although no spontaneous recrystallization was observed. Cooling was therefore necessary for recrystallization and the formation of the TiAl 3 intermetallic to occur. This thicker system also allowed us to analyze more finely the dissolution mechanisms of Ti in Al. Titanium dissolves plane by plane until saturation. The molecular dynamics approach thus highlighted the fact that the dissolution takes place via exchange between an aluminium atom and a titanium atom at the interfaces. Working in parallel, our Russian collaborators1 contributed to this study by developing an experimental approach. They studied the reactivity of lamellar particles produced via high-energy ball milling. 
By evaluating the activation energies, they identified two different behaviors. The first is associated with solid state transformation at interfaces below the melting temperature of aluminum. The second corresponds to the dissolution of titanium in liquid aluminum and the formation of the intermetallic TiAl 3 . • Perspectives As discussed in Chapter 3, columnar grains cause anisotropic properties, reduce mechanical performance and increase hot tear tendency. One of the main objectives here was thus to promote the columnar to equiaxed transition. It is possible to form equiaxed grains using adequate parameters of the AM process that directly modify the temperature gradient and the solidification rate, as reported by [START_REF] Kobryn | The laser additive manufacture of Ti-6Al-4V[END_REF]. However, the low thermal gradient required for the formation of equiaxed grains remains difficult to achieve, even when adapting the process parameters. One of the strategies to promote the columnar to equiaxed transition is to inject nanoparticles, which serve as nucleation sites, into the melt pool to promote heterogeneous nucleation. Indeed, the energy barrier related to heterogeneous nucleation is lower than that of homogeneous nucleation. For instance, certain particles have demonstrated their efficiency in the context of AM of Al-based metals, namely Al 3 Sc [165], TiB 2 [START_REF] Li | Selective laser melting of nano-TiB2 decorated AlSi10Mg alloy with high fracture strength and ductility[END_REF], Al 3 Zr [START_REF] Martin | 3D printing of high-strength aluminium alloys[END_REF], and TiC [START_REF] Lin | Aluminum with dispersed nanoparticles by laser additive manufacturing[END_REF]. The main difficulty is finding sufficiently stable and powerful nucleant particles. In order to reproduce these phenomena at the atomic scale, we considered adding NiAl particles to an Al melt pool based on the models we had previously developed. One of the main difficulties related to these materials stems from the progressive dissolution of NiAl particles (see Fig. 6.2). Indeed, the melting temperature difference between the NiAl intermetallic and Al is not sufficient to preserve the stability of the NiAl nanoparticles. It is for this reason that we chose to put this study aside at first. It is, however, quite frequent to observe this phenomenon of dissolution during the injection of particles carried out in the course of experiments. An interesting perspective would include performing a more systematic study of the dissolution of NiAl particles as a function of their size while they are subjected to different thermal conditions. There are several questions that arise in this context. Does the progressive dissolution of NiAl prevent the particles from acting as an inoculant promoting heterogeneous nucleation? Does dissolution, which is an exothermic process, necessarily prevent heterogeneous nucleation because of a local temperature increase? What is the role of the composition gradient generated by the progressive dissolution of nanoparticles? What are the effects of the cooling rates on the two mechanisms, namely heterogeneous nucleation and dissolution? An alternative approach would be to choose a system in which the dissolution is limited, allowing the particles to act as powerful nucleants. 
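To make the barrier-lowering argument mentioned above concrete (heterogeneous nucleation having a lower energy barrier than homogeneous nucleation), the classical spherical-cap model writes the heterogeneous barrier as the homogeneous one multiplied by a catalytic factor f(θ) that depends only on the contact angle θ between the nucleus and the substrate. The snippet below is a generic textbook illustration, not a description of how the inoculant simulations discussed here were actually analysed; the contact angles are arbitrary examples.

```python
import math

def catalytic_factor(theta_deg):
    """Spherical-cap catalytic factor f(theta), with 0 < f <= 1, such that
    dG*_het = f(theta) * dG*_hom."""
    t = math.radians(theta_deg)
    return (2.0 + math.cos(t)) * (1.0 - math.cos(t))**2 / 4.0

# Arbitrary example angles: a strongly wetting substrate (small theta)
# drastically lowers the barrier, which is why potent nucleant particles
# are so effective; theta = 180 deg recovers the homogeneous case.
for theta in (30.0, 60.0, 90.0, 180.0):
    print(f"theta = {theta:5.1f} deg -> f = {catalytic_factor(theta):.3f}")
```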
For instance, Al 3 Ti particles are known to be strong nucleants for the solidification of the Al melt [START_REF] Wang | Heterogeneous nucleation of solid Al from the melt by Al 3 Ti: Molecular dynamics simulations[END_REF]. To avoid the problems arising from dissolution, one of the possible options is to consider metallic systems in which the difference between melting temperatures is significant, as for example copper (Cu) and tungsten (W). Experimentally, copper and tungsten have melting temperatures of 1358 K and 3695 K, respectively. In addition, tungsten has a bcc structure and copper has an fcc structure. There are different interatomic potentials for the Cu-W system and we considered the one developed by Wei et al. [START_REF] Wei | Strain-stress relationship and dislocation evolution of W-Cu bilayers from a constructed n-body W-Cu potential[END_REF] that is known to give an abnormally high melting temperature for tungsten. This, however, is an advantage for our study in which W particles play the role of nucleation sites and have to remain stable at high temperature. Using this potential, we computed crucial properties for solidification: the solid/liquid interface energy of Cu, the thermal expansion of both metals, the melting temperature of Cu and the thermal conductivity of Cu following the methods presented in Chapter 2. We studied heterogeneous nucleation using a simplified quasi-3D model containing a single W particle surrounded by Cu liquid. Various sizes of tungsten were considered, namely 1.59 nm, 7.95 nm and 23.85 nm, and different isothermal temperatures were applied to estimate the undercooling degree related to the free growth of copper. The degree of undercooling depends on the radius of the particle. For the smallest radius of 1.59 nm, the undercooling degree required to observe free growth was estimated to be 133 K, while the other two sizes were estimated at an undercooling degree between 40-50 K. Note that this method is similar to the one used in the context of liquid/solid interfacial energy presented in Chapter 2. The larger particles have lower undercooling temperatures than the smaller ones, as is frequently observed in heterogeneous nucleation processes. These results pinpointed more precisely the range of temperatures giving rise to the solidification of Cu from the W surfaces. In a second step, we developed simulations similar to the ones used to study Ni solid-ification but in this new case, several W particles were randomly added to a Cu melt. The model without imposed cooling gave interesting results that merit further investigation: • The solid/liquid interface is locally disturbed where it crosses the W particles. A solidification delay is observed after the passage of a particle. Indeed, the Cu remains amorphous while the front around the particles continues to propagate with a flat shape. This solidification delay is directly related to the size of the particle: the larger the particle, the larger the zone in which the front is disturbed. The delay is nonetheless caught up in a few picoseconds because the temperature gradients in the liquid and in the solid are positive: the heat is extracted in the solid part of the Cu. The engulfment by the solidification interface is clearly shown in Fig. 6.3, right. • Heterogeneous nucleation is not observed because the solidification temperature is relatively high (close to the melting temperature of Cu). 
• A "groove" is observed at the grain boundaries located at the solid/liquid interfaces (also called triple junction). The grooving induces an opening angle depending on the solid/liquid surface tensions and the energy of the grain boundary. The grain boundaries cling to the particles when the opening angle is close enough. A progressive curvature of the grain boundary is also observed. This phenomenon seems very similar to the Zener pinning resulting from the interaction between particles and grain boundaries in polycrystalline materials (see Fig. 6.3, left). The pinning of boundaries by obstacles is an important effect in materials processing and its characterization requires further investigation. The model with imposed cooling rate gives rise to several observations: • The solidification front is also disturbed when it passes through particles. The delay is significantly different from that observed in the model without cooling. Indeed, the gradient in the liquid is negative: heat is released in the solid and the liquid. The solidification delay is therefore propagated over a longer period of time, even throughout the entire solidification process. An interesting next step would be to continue this work to better understand the effect of particles on a solidification front with a negative gradient in the liquid. How will these particles influence the solidified microstructure? Could the growth transform more easily into cellular/dendritic growth? How do the grain boundaries adapt to the presence of the particles? • The same pinning by the dispersed particles is observed as in the model without imposed cooling. • The temperature range giving rise to heterogeneous nucleation is relatively small. Indeed, the smallest particles favor heterogeneous nucleation at a temperature of 1050 K and the largest at 1160 K. The solidification rate of Cu is therefore low in this temperature range as it is relatively close to the melting tempera- ture (1181K). We were nevertheless able to observe heterogeneous nucleation by properly choosing the thermostat temperatures and cooling rates (see Fig. 6.4). However, this results in a major problem related to the thermostats we use. For example, the temperature in the laser region is averaged over all atoms in the liquid region. The solidification front and heterogeneous nucleation are both exothermic processes releasing a very substantial amount of heat in the liquid region. Consequently, the atoms still in the liquid state have abnormally low temperatures to compensate for this heat release, results that may be questionable from a physical point of view. Note that we considered 10 particles. A smaller number of particles would be more appropriate to reduce the heat release effect and avoid this artifact. Further investigation of the method used should therefore be carried out in continuing this study. Remaining in the context of solidification, several perspectives to my work might be considered: • Solidification of alloys: Metal alloys are mainly used in additive manufacturing techniques. During the solidification of a pure metal, the effects are exclusively thermal. With an alloy, other phenomena such as chemical undercooling can be studied. The solidification rate will also be different from that of a pure metal. A preliminary work was carried out by adding 5% of Al atoms to the Ni samples, adopting the same model as in the study of Ni in the presence of an imposed cooling. 
In the case of pure Ni, with a substrate temperature of 1300 K, dendritic/cellular growth has been observed. Under the same thermal conditions (substrate temperature and cooling rate), the microstructure was completely different for the Ni-Al alloy. We observed the formation of equiaxed grains upstream of the front. It would certainly be interesting to carry out an in-depth study on Ni-Al alloys. Several questions would have to be clarified: What is the influence of the Al percentage? What microstructures will be formed? Will a compositional gradient upstream of the front be set up during solidification as is commonly observed in experiments? Dendrites, which are often associated with alloys, could also be studied in further detail. • Solidification maps: In the course of our study of solidification, we observed different microstructures related to solidification rates, thermal gradients, substrate temperatures and cooling rates. It would be interesting to quantify more precisely the transition between flat columnar growth and cellular growth in metallic alloys. Indeed, experimenters rely on solidification maps; such maps are particularly based on thermal gradients and solidification rates. An interesting perspective would be to compare these maps with results obtained from molecular dynamics. • Particle inoculation: Crystal growth on inoculating particles is an exothermic process. The heat, released locally in the liquid, causes a significant increase in temperature. This phenomenon is also a function of the density of the inoculating particles where heterogeneous nucleation occurs. The undercooling degree depends on the size of the particles. It could be interesting to introduce several particles of different sizes into a liquid bath by imposing a cooling. Different questions would have to be considered: Will the heat released by the growth around the larger particles prevent the growth on the smaller ones? What would the distances between particles need to be in order to observe growth on all particles? It might also be interesting to consider the size distribution of the particles. • Convection: Additive manufacturing techniques cause convection in the liquid bath. A comparison between the models we have studied on the solidification of Ni with models that take convection into account could provide a better understanding of the influence of convection on solidification. Moreover, the addition of inoculant particles in the liquid bath would also be an interesting situation to consider. What would the movements of these particles be? What would their positions at the end of solidification be? Would it be possible to observe heterogeneous nucleation in the presence of convective phenomena? As mentioned above, most of the items listed in the context of future perspectives have been the subject of preliminary calculations to design the simulations, to determine what is feasible and to identify technical difficulties. Beyond these preliminary studies that open up many opportunities, it will be necessary to perform a systematic and thorough study in order to better understand the role played by the inoculation of particles in the melt pool of pure metals and alloys. Abstract: The process-microstructure relationship is central in materials science because the microstructure will determine the properties of the materials developed by the processes. In our work, we focused on different metallurgical processes by adopting a description at the atomic scale. 
This approach allows us to detect the elementary mechanisms that are at the origin of the observed microstructures without having to postulate macroscopic mechanisms or estimate the associated parameters. In this respect, molecular dynamics simulations provide a tool for "in-situ" observation of metallic systems as long as an atomic interaction potential is available. The originality of our approach consists in modeling the characteristics of the processes at nanometric scales. In the context of powder metallurgy, we focused on the additive manufacturing of metallic materials and the activation of metallic powders by high-energy milling. We performed molecular dynamics simulations to understand the directional solidification processes at the nanoscale of a pure Ni polycrystalline metal in the context of additive manufacturing. Various microstructures were observed as a function of thermal conditions. Solidification and nucleation were also compared to classical solidification and nucleation theories to establish their validity at the nanoscale. We modeled the milling process by mechanical treatment with compaction and plastic deformation to observe the action of grinding balls on a binary mixture of powders. This approach allows us to understand the behavior of mechanically activated powders (Ti+Al and Ni+Al) by characterizing atom mobility, structural transformations and reactivity. We also studied the reactivity of a Ti-Al nanometric multilayer model similar to the materials obtained after high-energy milling. We have highlighted several elementary mechanisms responsible for their increased reactivity, such as dissolution at the Ti(solid)/Al(liquid) interfaces and the formation of an intermetallic (TiAl3).

Figure 1.2: Schematic representation of the main additive manufacturing techniques (a) power bed fusion (PBF) and (b) direct energy deposition (DED) categories. From [6].
Figure 1.3: Distribution of industrial fields related to metals AM technologies. From [8].
Figure 1.4: On the left, schematic representation of the meltpool with thermal processes in-
Figure 1.5: Example of columnar (bottom part of the sample) and equiaxed structure (upper part of the sample) with the transition between both structures delimited by a white dashed line. This represents the longitudinal section of a solidified Ni-Al alloy obtained in a furnace. From [25].
Figure 1.7: On the left: in-situ observations of the specific regimes recorded via high speed video camera: (a) the cascading regime, (b) the cataracting regime and (c) the centrifugal regime. From[START_REF] Rogachev | Experimental investigation of milling regimes in planetary ball mill and their influence on structure and reactivity of gasless powder exothermic mixtures[END_REF]. On the right: specific motions of the grinding balls observed in the container. Four distinct loading modes are represented, namely the impact, the torsion, the shearing and the rolling. From[START_REF] Beinert | Analysis and modelling of bead contacts in wet-operating stirred media and planetary ball mills with CFD-DEM simulations[END_REF].
Figure 1.8: Thermograms of the Ti/Al mixtures after mechanical treatment. The size of the milling balls are 6 mm (red and green curve) and 2 mm (black and blue curve). The milling times are 60 min and 120 min represented by square and circle, respectively. The black dashed line represents the melting temperature of Al close to the beginning of the reaction. From [52].
Figure 1.9: SEM images showing nanostructured Ni-Al system obtained after (a) 10 min and (b) 40 min of milling. The corresponding structure exhibits a very large number of surface contacts between Ni and Al (especially after 40 min), which lowers the ignition temperature. From[START_REF] Shuck | Reactive Ni/Al Nanocomposites: Structural Characteristics and Activation Energy[END_REF].
Figure 1.10: Schematic representation of the impact of grinding balls acting on the powders.
Figure 1.11: On left, STEM image of the microstructure of Ni-Al mixtures obtained after planetary ball milling process. From [64]. On right, TEM image of Ni-Al reactive multilayer nano-foils produced by magnetron sputtering. From [81].
2. Positions and velocities are updated by integrating the equations of motion (velocity Verlet algorithm).
Figure 2.1: Schema of the basic molecular dynamics simulation [90].
Figure 2.2: List of the different Bravais unit cells. From [91].
Figure 2.3: Potential functions of the EAM Ti-Al potential. From [92]. Top left, the pair interaction function, φ(r_ij), for three different interactions between Al-Al, Ti-Ti and Ti-Al. Top right, the embedding energy function, F_i, of Ti and Al. Below, the electronic density function, ρ_i, for Ti and Al.
Figure 2.4: Schematic representation of the Periodic Boundary Conditions (PBC). From [102]. The central cell is replicated in all directions by images with the same properties (number of atoms, position of atoms, moment of atoms, size and geometry).
Figure 2.5: Comparison between a-CNA and PTM. From [106].
Figure 2.6: On the left, 2D representation of the radial distribution function (RDF). The black
Figure 2.7: 2D representation of the atoms considered in the mixing zone. Red and blue atoms represent the atoms of type i and j, respectively. Dashed lines represent the distance r_cut between short- and long-range orders.
Figure 2.8: (a) Enthalpy as a function of temperature. (b) Volume per atom as a function of temperature. Latent heat of fusion L_V and volume per atom V_at are evaluated at melting point (dotted red line).
Figure 2.9: (a) Schematic representation of the system. (b) Solid/liquid interface velocity of the three orientations (100), (110), and (111) as a function of the degree of undercooling.
Figure 2.10: (a) Snapshot of the solid particle, in green, surrounded by the liquid, in gray. (b) Inverse of the critical radius as a function of the degree of undercooling. Red dots are the simulation results. Black curve is the fit used for the interfacial energy estimate.
Figure 2.11: (a) κ as a function of the simulation box length for the considered temperatures. Symbols are the numerical results. Lines are the corresponding fits. (b) Bulk thermal conductivity results as a function of temperature compared with the values reported by Levchenko et al.[START_REF] Levchenko | Phonon-mediated heat dissipation in a monatomic lattice: case study on Ni[END_REF] and Turlo et al.[START_REF] Turlo | Comparative study of embedded-atom methods applied to the reactivity in the Ni-Al system[END_REF]. These two authors computed κ with the Green-Kubo approach, using Mishin 2004 (M04)[START_REF] Mishin | Atomistic modeling of the and -phases of the Ni-Al system[END_REF] and Purja Pun and Mishin 2009 (M09)[START_REF] Purja Pun | Development of an interatomic potential for the Ni-Al system[END_REF] potentials for Ni.
Figure 2.12: Overview of the homogeneous nucleation process in an iron melt at a temperature of 0.67 T_m. From [119]. (a) Representative snapshot of the entire billion-atom simulation box. (b) Zoom on the part of the simulation highlighted by the red box. The white color represents the liquid atoms and the different colors represent relative disorientation with respect to the coordination axis.
Figure 2.13: Heterogeneous nucleation simulations with titanium particles surrounded by liquid aluminium. From[START_REF] Fujinaga | Molecular dynamics simulation of athermal heterogeneous nucleation of solidification[END_REF]. The crosses represent the stagnation for which the liquid phase remains the same over the entire simulation, the circles represent the free growth of the liquid aluminum on the titanium surface and the triangles the transition from the two preceding behaviors. The solid black line is a fit from the Classical Nucleation Theory (CNT).
Figure 2.14: Snapshots of the microstructures obtained for different cooling rates on pure aluminum at the final temperature (here 293 K). From[START_REF] Hou | Cooling rate dependence of solidification for liquid aluminium: a large-scale molecular dynamics simulation study[END_REF]. Cyan, purple and pink represent fcc, hcp and amorphous atoms, respectively. Crystalline structures are observed for the cooling rates below 4 K/ps. As shown in (a), the fastest cooling rate (10 K/ps) exhibits a glassy structure.
Figure 2.15: Directional solidification on Al-Cu alloys. From [42]. (a) and (b) represent the microstructure evolution (2D and 3D representations, respectively) with fcc atoms in green, hcp atoms in red and amorphous atoms in white. (c) represents the different orientations of grains during solidification.
Figure 2.16: Solidification process of laser powder bed fusion technologies on Al powder.
Figure 2.17: Overview of the Cherukara et al. study [76]. At top, initial system containing laminated grains of Ni and Al. The arrows represent the direction of the deformation. Below, the final microstructure obtained after deformation with the maps of the temperature and chemical reaction. The two colors within each grain denote Ni (thin bands) and Al (thick bands).
Figure 2.18: Ti-coated Al nanoparticles at 3 different temperatures: a. 900 K, b. 1000 K, c. 1100 K. From [139]. Al is in dark blue and Ti in yellow.
Figure 3.1: Schematic representation of the model. The laser region is represented in red between 50 nm and 200 nm. The substrate region is represented in blue between 0 and 20 nm.
Figure 3.2: System (model A) with a substrate maintained at T_sub = 1000 K. (a) Temperature profiles (solid lines) and the fraction of fcc atoms (dashed lines) at different moments in time. (b) Snapshots of the system at 0, 4, and 8 ns.
Figure 3.3: Position of the solid/liquid interface as a function of time in the polycrystalline system (Model A) with a substrate at 300 K and 1000 K.
Figure 3.4: System (Model B) submitted to a cooling rate α = 375 K/ns with a substrate maintained at T_sub = 1000 K. (a) Temperature profiles (solid lines, left axis) and fraction of fcc atoms (dashed lines, right axis) at different times during the global cooling. (b) Snapshots of the system at 0.3, 1.6, 2, and 2.4 ns. Color coding: fcc atoms are in green, hcp in red, and unknown/liquid atoms in light gray.
Figure 3.5: Position of the solid/liquid interface as a function of time for a substrate at 1000 K; different cooling rates are shown as solid lines (left axis). Local temperature gradients in the solid G_s and the liquid G_l around the solidification front at a cooling rate of 375 K/ns are shown as dash-dotted and dashed lines in blue, respectively (right axis).
Figure 3.6: Instantaneous velocity as a function of time for different T_sub and different cooling rates α.
Figure 3.7: Normalized nucleation rate I/I_0 (eq. (3.5)) (a) and critical radius (b) as a function of the degree of undercooling.
Figure 3.8: Cluster analysis: number of clusters, mean cluster size, and size of the largest cluster as a function of time. Size is measured in terms of the number of atoms in the cluster.
Figure 3.9: System (Model B) submitted to a cooling rate α = 375 K/ns with a substrate maintained at T_sub = 1300 K. (a) Temperature profiles (solid lines, left axis) and fraction of fcc atoms (dashed lines, right axis) at different times during global cooling. (b) Enlarged view of temperature profiles (solid lines, left axis) and fraction of fcc atoms (dashed lines, right axis) restricted to the bottom and tip of the protrusion, at 5 ns. The slice over which spatial binning was calculated is shown in the snapshot. (c) Snapshots of the system at 0.4, 2.2, 3, and 5 ns.
Figure 3.10: (a) Position of the finger tip and bottom as a function of time. (b) Theoretical estimate of tip velocity as a function of the characteristic size of the finger (see eq. 3.9).
Figure 4.1: Simulated system. Initial state with fcc-Ni and fcc-Al polycrystalline particles (left). System elongated along the y-direction after deformation (right).
Table 4.1 (see its caption below):
structure | Exp. Tm (K) | Sim. Tm (K) | Exp. Ttr (K) | Sim. σ (GPa) | Exp. H (HV)
Al: fcc 933 1055 [112] 8.0 [112] 18 870 [92] 6.6 [92]
Ni: fcc 1728 1710 [112] 19.4 [112] 60
Ti: hcp → fcc 1713 1531 [92] 1155 14.8 [92] 99
Figure 4.2: Snapshots of a slice of 10 Å, around z = 100 Å after deformation (ε = 73 %): (a) Ni-Al monocrystalline particles (b) Ni-Al polycrystalline particles (c) Ti-Al monocrystalline particles (d) Ti-Al polycrystalline particles.
Figure 4.3: (a) Evolution of the number of atoms (at.%) in MZs as a function of time, for different temperatures. (b) Evolution of the number of fcc- and unk-atoms (at.%) in MZs at T = 850 K. (c) T = 950 K. (d) T = 1100 K. The atomic percentage of atoms is relative to the number of atoms in the entire system.
Al(sol) + Ti(sol) → Al(liq) + (Al + Ti)(sol) + TiAl3   (4.1)
A representative slice was chosen to analyze the local dynamics. Corresponding snapshots are given in Fig. 11. We identified the following characteristic behaviors at 950 K: (I) After deformation, Ti particles are composed of hcp-Ti, fcc-Ti, and unk-Ti at internal grain boundaries (GB). Defects in hcp-Ti grains are fcc-Ti linear defects (stacking faults). Aluminum particles lost their spherical shape and are composed of fcc-Al and unk-Al in GBs. A few hcp-Al atoms correspond to defects in Al grains.
Figure 4.4: Snapshots of a slice of 40 nm in the y-direction and 10 nm in the x-direction:
22 in amorphous phase and 0.35 in fcc-phase. The microstructures depicted in Fig. 4.4 show a transient amorphization of Al atoms around Ti particles. At 2.5 ns, Ti atoms were either dissolved in unk-Al or substituted in the fcc-Al phase. At 10 ns, the Ti particle mainly recovered its hcp-structure, with a thin layer of hcp-Al at the periphery. Aluminum hcp-atoms correspond to atoms that occupy vacancies liberated by outgoing Ti atoms. Dissolution operates as an exchange of Ti and Al atoms at interfaces. At the end, the particle was surrounded by an fcc solid solution, an amorphous solution, and fcc-Al with defects.
Figure 4.5: MSD log-log plot for Al inside the MZs in the Ti-Al system (a) and in the Ni-Al system (b) at representative temperatures. Log-log plots for Ti in the Ti-Al system and Ni in the Ni-Al system are given in Supporting Information. Vertical lines delineate the 3 stages observed in the system evolution (see text). Solid line corresponds to the time interval over which diffusion coefficients are computed (see Table 4.2).
Figure 4.6: Plot of ln(D) as a function of 1/T for the Ti-Al system.
Figure 4.7: Two detailed views of the snapshot after deformation (0.06 ns). Black atoms are Ti atoms in MZs at 2 ns. (a) Visualization of Ti atoms in grain boundaries (b) Visualization of Ti atoms in the external shell. Left: All atoms are represented with the color coding of Fig. 4.4. Right: Ti atoms in MZs at 2 ns are shown in black.
4.9a and Fig. 4.9b), we note a coarsening of Al grains and the creation of amorphous regions around Ni grains with the mixing of Ni and Al atoms. As shown in Fig. 4.8c, MZs contained twice as much unk-Al as unk-Ni atoms.
Figure 4.8: (a) Evolution of the number of atoms (at.%) in MZs as a function of time, for different temperatures. (b) Evolution of mole fraction x_MZ (at.%) in MZs (c) Evolution of the number of unk-atoms (at.%) in MZs at T = 800 K and T = 900 K. (d) Evolution of the number of bcc- and unk-atoms (at.%) in MZs at T = 1200 K. The atomic percentage of atoms is relative to the number of atoms in the entire system.
Figure 4.8d shows the increase in the number of unk-Al followed by the increase in unk-Ni in less than 2 ns. After 2 ns, the number of bcc-Al and bcc-Ni started to increase with the formation of intermetallic B2-NiAl. The number of unk-Al decreased slightly, while the number of unk-Ni remained constant. This result indicates that unk-Al atoms in MZs transformed into bcc-Al, and that unk-Ni atoms that became bcc-Ni were replaced in MZs by incoming Ni atoms that dissolved in the liquid region. At 10 ns, the snapshot (Fig. 4.9d) depicts shrinking fcc-Ni particles surrounded by a liquid solution composed of Ni and Al. Disoriented B2-NiAl grains surrounded the Ni particles and eventually formed a neck between particles. Grain boundaries progressively disappeared inside the Ni particles. We suppose that internal GBs acted as channels for outgoing unk-Ni that were dissolved in the melt. The mole fraction of unk-Ni in MZs is 0.34 at 10 ns, close to the expected equilibrium value.
Figure 4.9: Snapshots of the system (slice at x = 169 nm) after deformation (a), at 700 K (b), 900 K (c), and 1200 K (d) at t = 10 ns.
Figure 4.10: Plot of ln(D) as a function of 1/T for the Ni-Al system.
Figure 5.1: Microstructure of the bimetal particle after HEBM (a) and boundaries between Ti and Al layers (b). SEM, backscattered electrons. Dark phase is Al, white is Ti.
Figure 5.2: SEM and results of the EDS scanning along the line. Dark phase: Al, white: Ti, grey: intermediate metastable phase.
Figure 5.3: Temperature-time profile of the heating up and reaction in Ti/Al composite powder sample measured with WRe5/WRe20 thermocouple.
Figure 5.4: Reaction onset temperature as function of heating rate.
Figure 5.5: Evaluation of activation energy in Kissinger coordinates.
Figure 5.6: Initial configuration of the simulated system with one slice of Al in between two Ti layers. Al and Ti are shown as blue and red spheres, respectively.
2). Initially, an inner layer of fcc-Al with 15 atomic planes (N_Al = 51 230) was placed in between two outer layers of hcp-Ti with 40 atomic planes each (N_Ti = 160 480) with N_Ti/N_Al ∼ 3. The size of the simulation box is L_x = L_y = 17.5 nm in the x-direction and L_z = 13.4 nm in the z-direction. The orientations of Ti and Al at the interface was (002)/(111) as in the previous simulations. The simulation procedure is in two parts: 5.7).
Figure 5.7: Atomic arrangement of Ti and Al at the interface.
Figure 5.8: (a): Temperature T (scale in K, on the right) and number of atoms in at% (on the left) as a function of time (scale in ns). The number of atoms in fcc, hcp and unknown configuration are represented. The different stages in the evolution are indicated by the dashed lines. (b): Temperature T (scale in K, on the right) and stoichiometry (on the left) as a function of time (scale in ns).
Figure 5.9: Number density profiles at t = 0.4, 12.24, 12.46 ns in the z-direction, perpendicular to the interfaces. The dashed lines indicate the limits of the inner layer in the initial configuration. Corresponding snapshots of the system. Position is measured in Angstroms.
Figure 5.10: Initial temperature of 1000 K. (a): Temperature T (scale in K, on the right) and number of atoms in at% (on the left) as a function of time (scale in ns). The number of atoms in fcc, hcp and unknown configuration are represented. The different stages in the evolution are indicated by the dashed lines. (b): Temperature T (scale in K, on the right) and stoichiometry (on the left) as a function of time (scale in ns).
Figure 5.11: Thick sample. Initial temperature of 1000 K. (a): Temperature T (scale in K, on the right) and number of atoms in at% (on the left) as a function of time (scale in ns) during the heating. The number of atoms in fcc, hcp and unknown configuration are represented. (b) Stoichiometry (on the left) as a function of time (scale in ns). (c): Temperature T (scale in K, on the right) and number of atoms in at% (on the left) as a function of time (scale in ns) during the cooling.
Figure 5.12: Number density profiles at t = 5, 20 ns in the z-direction, perpendicular to the interfaces, and 3.5 ns after the start of the cooling. The dashed lines indicate the limits of the inner layer in the initial configuration. Corresponding snapshots of the system. Position is measured in Angstroms.
Figure 5.13: Number of Ti atoms (at%) in the atomic planes close to the interface. The plane (# -1) is the plane just below the interface, the plane (# -2) is below the plane (# -1), etc.
Figure 5.14: Schematic representation of the dissolution process.
Figure 5.16: D0_22 crystallographic structure of TiAl3 and two consecutive (111) planes in TiAl3. The (002) plane of pure Ti is shown in dashed. Al in blue and Ti in red.
The interatomic potential developed by Zope and Mishin predicts a single Al-rich phase Ti_0.25 Al_0.75 ≡ TiAl3. The simulation results demonstrate the formation of an intermetallic with a well-defined composition and a local fcc configuration, close to the TiAl3 compound. As shown in Fig. 5.16, the TiAl3 intermetallic can easily grow on a Ti plane. Substitution of Ti atoms by Al ones and a slight displacement of Ti gives the specific structure. Moreover the D0_22 structure is based on a face centered cubic structure.
Figure 6.1: Schematic representation of the several microstructures of nickel obtained after solidification as a function of the cooling mode, substrate temperature and cooling rate.
Figure 6.2: Snapshot of the system at the end of the simulation (7 ns). Left: fcc, bcc (NiAl) and amorphous atoms are represented in green, blue and white colors, respectively. Right: snapshot zooms of the red squares with Al and Ni atoms represented in red and blue colors, respectively. The Ni atoms progressively dissolved around NiAl particles.
Figure 6.3: Snapshot of the system at the end of the simulation (18.5 ns) with bcc, fcc and amorphous atoms represented in blue, green and white, respectively. Left: snapshot zooms of the evolution of the grain boundary with progressive pinning on the particle of W. Right: snapshot zooms on the evolution of the Cu solid/liquid interface in presence of the W particle.
Figure 6.4: Snapshots of Cu-W system with an imposed cooling ramp of 215 K/ns at 2.9 ns, 3.1 ns and 3.5 ns (end of the simulation). The bcc, fcc, hcp, and amorphous atoms are represented in blue, green, red and white, respectively. Heterogeneous nucleation is observed around the particles of W.
Table 2.1: Left: Crystallization velocity (in m/s) calculated with the two-phase method at different temperatures and for different interface orientations. Right: Wilson-Frenkel parameters obtained from the fit by eq. (2.18).
Table 3.1: Parameters of pure Ni computed with the EAM-09 interatomic potential (Purja and
Table 3.2: Summary of MD results for Model B as a function of the temperature of the substrate T_sub and the cooling rate α expressed in K/ns. The morphology is named 'C' for columnar grains, 'C+E' for columnar and equiaxed grains, 'C+D' for columnar and dendrites; t_1 is the time at which cooling ended, t_2 is the time at which the first nucleus appeared, ΔT is the degree of undercooling at which first nucleation appeared, ΔL_z is the width of the zone with equiaxed grains.
T_sub (K) | α (K/ns) | morphology | t_1 (ns) | t_2 (ns) | ΔT (K) | ΔL_z (nm)
800  | 750   | C+E | 1.27  | 1.00 | 551 | 61.5
800  | 375   | C+E | 2.53  | 1.73 | 527 | 47.8
800  | 250   | C+E | 3.8   | 2.55 | 525 | 27.8
900  | 375   | C+E | 2.27  | 1.60 | 527 | 55
1000 | 750   | C+E | 1.00  | 0.84 | 545 | 79.4
1000 | 375   | C+E | 2.00  | 1.57 | 528 | 60.9
1000 | 250   | C+E | 3.00  | 2.34 | 530 | 39.4
1000 | 187.5 | C+E | 4.00  | 2.99 | 522 | 34.4
1000 | 150   | C+E | 5.00  | 3.72 | 520 | 15.4
1000 | 75    | C   | 10.00 | -    | -   | -
1100 | 375   | C+E | 1.73  | 1.50 | 533 | 66.1
1200 | 375   | C+E | 1.47  | 1.46 | 528 | 70.9
1300 | 750   | C+D | 0.6   | -    | -   | -
1300 | 375   | C+D | 1.2   | -    | -   | -
Table 4.1: Structural, thermodynamics and mechanical properties: Structure of pure elements at ambient temperature; Melting temperature T_m of pure elements: Experimental value and theoretical value corresponding to the specific potential; Experimental value of the transition temperature T_tr; Simulated tensile strength, σ; Experimental value of hardness, H.
Table 4.2: Diffusion coefficients ×10⁻⁹ (m²/s) in MZs for the Ti-Al system and the Ni-Al system. Values corresponding to temperatures above the melting point of Al are in bold.
T (K): 700 800 850 900 950 1000 1050 1100 1150 1200
Al/Ti-Al: 0.78 1.35 1.97 2.07 2.46 2.71 3.02
Ti/Ti-Al: 0.42 0.72 1.21 1.37 1.80 2.19 2.61
Al/Ni-Al: 0.016 0.057 0.38 1.16 1.90 2.26
Ni/Ni-Al: 0.015 0.061 0.45 1.36 2.29 3.05
Table 4.3: Diffusion activation energy (kJ/mol) for the Ti-Al system and the Ni-Al system (eq. (4.3)).
          | Al in Ti-Al | Ti in Ti-Al | Al in Ni-Al | Ni in Ni-Al
Q low T   | 69.84       | 69.45       | 100.54      | 103.95
Q high T  | 20.32       | 36.08       | 18.86       | 31.76
Table 5.1: Melting temperatures and cohesive energies evaluated using the Zope and Mishin EAM potential [92] and experimental values.
metal/phase | two-phase method (K) | other works (K) | experimental (K) | cohesive energy, EAM potential vs. experimental (eV/atom)
Al          | 870  | 870  | 933  | 3.36 / 3.36
Ti          | 1531 | -    | 1941 | 4.85 / 4.85
            | 1510 | 1494 | 1713 | 4.51 / 4.51
(D0_22)     | 1175 | -    | 1388 | 4.02 / 4.06
Table 5.2: Summary of the simulation details.
sample           | L_x = L_y (nm) | L_z (nm) | Al atomic planes | # atoms (with N_Ti/N_Al ∼ 3)
Reference system | 11.8           | 6.1      | 7                | 41 468
Thick sample     | 17.5           | 13.4     | 15               | 211 710
Summary of Fig. 6.1:
Imposed cooling: Yes | No
Substrate temperature: Tsub ≤ 1200 K | Tsub > 1200 K
Cooling rate: Low (75 K/ns) | High (> 75 K/ns)
Microstructure: Columnar | Columnar + equiaxed | Columnar cellular | Columnar (planar shape)
Here, "particle" means powder particle.
In the framework of a PHC Kolmogorov RECIPES 2018-2021 with Pr. A. S. Rogachev and coworkers at MISiS University, Functional Nanoceramics Laboratory.
Acknowledgements
A thesis is often the fruit of the many encounters one makes and the many experiences one lives through. These few lines will no doubt be insufficient to thank all the people I have met during these years, the same people without whom this work would not have been carried out the way it was. They will nevertheless allow me to thank those who took part in this work, whether on a scientific or a personal level.
03593009
en
[ "shs.info", "info", "shs.hisphilso" ]
2024/03/04 16:41:20
2018
https://hal-lara.archives-ouvertes.fr/hal-03593009/file/About-the-proposal-for-software-indicators-in-OSM.pdf
About the proposal for software indicators in OSM

Summary of the proposed indicators

The indicators proposed by the OSM for software are the following:
1. Number of code projects with DOI (Mozilla Codemeta)
2. Number of scientific API (Programmableweb)
3. % of journals with open code policy (Stodden 2013; Stodden, V., Guo, P. and Ma, Z. (2013), "Toward reproducible computational research: an empirical analysis of data and code policy adoption", PLoS One, Vol 8 No. 6, pp. E67111, doi: 10.1371/journal.pone.0067111)
4. Number of scientific projects on Github (Github)

Analysis of the proposal

Terminology and proposal 3)

In terms of terminology, it should first be noted that "open code" is an unknown neologism in the world of software. It has no precise definition; it is used neither by software developers nor by the lawyers and experts in the domain. Thus, the term "open code" could be interpreted as referring to code that can simply be browsed on the Internet, without any guarantee that it comes with the additional rights of modification and redistribution of changes that characterize open source and free software licenses. It is obvious that, in the absence of a precise definition of what one wants to measure, it will not be possible to obtain reliable, effective measures or to recommend explicit policies in this field. We therefore strongly discourage the use of this neologism, which is undefined and subject to misinterpretation. We recommend using the terms "free software" or "open source software", which are well defined and have been used for decades. The definition of these terms is maintained by specialized NGOs (OSI, Open Source Initiative, and FSF, Free Software Foundation). They correspond to actual usage in the world of software development and IP law. Some institutions of the European Union promote the aggregative term "FLOSS", for Free/Libre and Open Source Software (see https://cordis.europa.eu/publication/rcn/9015_en.html), which can also be used. In the framework of Open Science, the term "Open Source Scientific Software" could therefore be used. Consequently, proposal 3) must, of course, be revised. We now detail below our comments on proposals 1), 2) and 4).

1) Using DOIs to count / identify software

DOIs are totally unknown identifiers in the world of software. Using them as indicators would not only be inoperative, but also very dangerous, for the following reasons.

An incomplete measure

There are tens of thousands of scientific software programs openly available, but hardly any of them has an associated DOI: neither the scientific community nor the developer community uses DOIs for software. The number of DOIs associated with software would therefore in no way measure the amount of research software distributed under a free license.

An incorrect measure

Some data repositories, such as Zenodo, propose mechanisms to issue DOIs associated with each version of a software project hosted on GitHub (and GitHub only). If the number of DOIs were to become an indicator, it would be trivial to create dozens, or hundreds, of versions of the same software, with the aim of generating a lot of DOIs and therefore artificially inflating this indicator. Such an indicator would not measure the number of software projects, but the number of versions, without identifying the reasons why these are created (major improvement or minor correction), making this indicator irrelevant.

Distortion of competition

In the field of software, DOIs are not suitable (see the article "Identifiers for Digital Objects: the Case of Software Source Code Preservation", iPres 2018), nor justified by current practice. HAL, for example, does not issue DOIs, but its own identifiers; the BNF, which is in charge of the legal deposit of software in France, uses Ark identifiers, which are free of charge unlike DOIs.
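As a brief technical aside, the iPres 2018 article cited above advocates intrinsic identifiers, which are computed from the content itself rather than issued (and charged for) by a registration agency. The sketch below is only an illustration of this idea and is not taken from any of the cited documents: it computes a git-style content hash, the kind of cryptographic, decentralized identifier on which source-code archives such as Software Heritage build their identifiers.

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Intrinsic identifier of a file content, computed as git does for blobs:
    SHA1 over the header "blob <size>\\0" followed by the raw bytes.
    The same content always yields the same identifier, with no registration
    authority involved."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Stand-in for a real source file (placeholder content).
content = b"#include <stdio.h>\nint main(void) { return 0; }\n"
digest = git_blob_hash(content)
# Software Heritage content identifiers are built on this kind of hash,
# e.g. swh:1:cnt:<digest>.
print("swh:1:cnt:" + digest)
```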
Beyond these technical considerations, adopting DOIs as the only recognized measure of scientific software would not only promote an inappropriate technology: it would create a monopoly position, with annuities (the issuance of DOIs being charged for) accruing to a single actor, in the new field that is the measurement of scientific software production and use.

Systemic and strategic risks

The only platform currently able to provide DOIs on software is the GitHub silo, via its connection with Zenodo. Promoting the use of DOIs would therefore boil down to encouraging all scientific software developers to migrate to GitHub, to the detriment of platforms that could provide more innovative and relevant services in the longer term. At a time when GitHub has just been bought by a major player in North American software publishing, it is not in the interest of Europe to encourage its developers to use a single non-European platform, controlled by private actors that have no obligation nor commitment to maintain a public service in the long term, as the recent shutdowns of Gitorious and Google Code have made self-evident.

4) GitHub as the only reference for counting software projects

This proposal is to be rejected absolutely, for the following reasons.

An incomplete indicator

There is a large amount of scientific software available on public forges other than GitHub (which has only existed for 10 years) or on institutional forges; many projects are only accessible from the web page of their authors. It is precisely for collecting all of this scattered software in one place that the French national open access portal, HAL, has partnered with the Software Heritage Foundation to offer a scientific software deposit that is going to be open to all at the end of September 2018. Counting only the scientific software present on GitHub would therefore amount to ignoring a large mass of existing scientific software available under an open source license.

An indicator hard to build

It is very unclear how one can identify all scientific projects by just looking at GitHub, which already (as seen above) introduces a huge bias.

Distortion of competition and systemic risks

Using GitHub as the only official repository would harm competing platforms and institutional repositories, introducing systemic and strategic risks in the long term (see the arguments detailed in the previous points).

Alternative Proposals

There are several alternatives available to build far better indicators for the scientific software that is made available under an open source license. Some are immediately actionable, others are more in the medium term.

Actionable alternatives

Leverage the OpenAire European research infrastructure

OpenAire is a European Union-funded infrastructure, in place for more than ten years, that aggregates all the open access publications available in the many open access portals across Europe and beyond. In collaboration with Software Heritage, OpenAire is building a catalogue of software projects mentioned in the scientific articles indexed in its database, which provides a concrete first step to build a transparent, independently verifiable indicator. Considering that OpenAire is an EU-funded infrastructure, it would be difficult not to leverage it for building the software indicators for the OSM.

Leverage the Software Heritage universal software archive

Identifying open source research software is a preliminary step to building a software indicator for the OSM.
Leverage the Software Heritage universal software archive
Identifying open source research software is a preliminary step towards building a software indicator for the OSM. Ensuring that the identified software is properly available to other researchers requires an extra, essential step: archiving it. Software Heritage (www.softwareheritage.org) is building the universal archive of software source code, and has already been identified as an essential infrastructure in the French National Plan for Open Science. It is essential that the OSM count the amount of open source research software that is safely archived in Software Heritage. Software Heritage also provides a high-quality harvesting process covering many software repositories; it can be seen as a better starting point than GitHub.

Medium term initiatives

Leverage existing catalogues of scientific software
There are tens of thousands of important scientific software packages that have been available under an open source license for years, and even decades. Many scientific communities have built manually curated catalogues whose quality is far superior to any automated index one can imagine today. Building an aggregate index of these catalogues is the way to go if we want to create a meaningful indicator in the medium to long term.

Conclusion and recommendation

The analysis provided in this report shows that the set of indicators proposed under the OSM for the software activity of researchers is very far from satisfactory, and cannot even be considered as a baseline, because of the methodological and economic biases they entail. Furthermore, we stress that scientific software is a small part of a much bigger ecosystem of software development that involves industries and developer communities well outside the research sector. Hence, any proposal made concerning research software must take into account the practices and standards of the developer community at large. We therefore strongly advise the creation of a working group involving a representative panel, including researchers with long experience in software and hardware development, actors of the transfer of software and research materials to industry, representatives of open source developer communities, legal experts from the world of software and free hardware, and scientists with the necessary skills to assess the feasibility of constructing the proposed indicators.

Contacts:
- Roberto Di Cosmo [email protected]
- François Pellegrini [email protected]
- Marin Dacos [email protected]
03963784
en
[ "math.math-ds" ]
2024/03/04 16:41:20
2022
https://hal.science/tel-03963784v2/file/2022UPSLD047.pdf
En premier lieu, je tiens à remercier mes directeurs de thèse, Abed Bounemoura et Jacques Féjoz. Vous avez choisi un très bon sujet de thèse. Je vous remercie pour avoir partagé votre connaissance et votre expérience. Discuter de mathématiques avec vous est fascinant. Je suis très reconnaissant envers Massimiliano Berti et Marcel Guardia qui ont accepté d'être rapporteurs de cette thèse. Merci pour votre intérêt pour mon travail, vos questions et commentaires ont été précieux. C'était un plaisir parler de mathématiques avec vous. Un grand merci à Alain Chenciner et Anna Florio. Vos remarques et questions ont donné lieu à des échanges très interessantes. Merci de m'avoir fait l'honneur d'accepter de faire partie du jury. Surtout, je tiens à te remercier Anna pour ta disponibilité et ta gentillesse. Je remercie Patrick Bernard et Raphael Krikorian, je suis honoré de vous avoir parmi mon jury de thèse. Merci beaucoup au staff de MIDO, l'équipe de CEREMADE et à l'école doctorale La théorie de Kolmogorov-Arnold-Moser (KAM) montre la persistence de solutions quasipériodiques dans les systèmes hamiltoniens presque intégrables. Cette théorie est intéressante, surtout en ce qui concerne les applications dans les problèmes classiques de mécanique céleste, comme par exemple le problème à n-corps. Nous renvoyons à [START_REF] Bost | Tores invariants des systemes dynamiques hamiltoniens[END_REF], [START_REF]Smooth ergodic theory and its applications[END_REF] [DlL01] et [START_REF]Introduction to KAM theory with a view to celestial mechanics, Variational methods[END_REF] pour de très bonnes introductions. Nous notons qu'un système hamiltonien est intégrable si des variables action-angle appropriées existent de telle sorte que le hamiltonien ne dépend que des variables d'action. Nous faisons référence à [START_REF] Igorevich | [END_REF]. Soit B ⊂ R n une boule centrée à l'origine et 0 < ε 1 un petit paramètre approprié. Nous considérons les hamiltoniens de la forme H : T n × B -→ R, H(q, p) = h(p) + εf (q, p), (H) où p ∈ B sont les actions, alors que q ∈ T n sont les angles conjuguées variant sur le tore T n = R n /Z n . Nous associons au hamiltonien H le suivant système hamiltonien X H (q, p) = (∂ p H(q, p), -∂ q H(q, p)), où ∂ p et ∂ q représentent respectivement les dérivées partielles par rapport à p et q. Les équations du mouvement prennent la forme suivante q = ∂ p H(q, p), ṗ = -∂ q H(q, p), où le point indique la dérivée par rapport au temps t. Si ε = 0, le hamiltonien H coïncide avec le hamiltonien non perturbé h. Dans ce cas, les équations du mouvement sont de telle forme q = ω(p), ṗ = 0 où ω = ∂ p h(p). Nous pouvons facilement intégrer ce dernier et pour tout (q 0 , p 0 ) ∈ T n × B q(t) = q 0 + ω(p 0 )t, p(t) = p 0 c'est la solution du système précédent. Nous observons que chaque solution donne lieu à un flot linéaire et, à cause de l'identification des coordonnées q modulo 1, chaque orbite tourne autour du tore invariant T ω(p 0 ) = T n × {p 0 } 1 Introduction avec vecteur fréquence ω(p 0 ) constant. Soit α = dp ∧ dq la forme symplectique standard associée à l'espace de phase T n × B ⊂ T n × R n , alors ces tores sont lagrangiens. Cela signifie que leur dimension est n = 1 2 dim (T n × R n ) et la restriction de la forme symplectique α à l'espace tangent s'annule. Plus précisément, dans ce cas particulier, l'espace de phase est feuilleté en une famille à n paramètres de tores lagrangiens invariants T ω(p 0 ) supportant des flots linéaires avec vecteurs fréquence constants ω(p 0 ). 
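À titre d'illustration (exemple ajouté, qui ne figure pas tel quel dans le texte original), le cas modèle le plus simple de hamiltonien intégrable rend explicites l'application fréquence et le feuilletage en tores lagrangiens invariants décrits ci-dessus.

```latex
% Exemple illustratif : hamiltonien integrable modele
\[
  h : \mathbb{T}^n \times B \longrightarrow \mathbb{R}, \qquad
  h(q,p) = \tfrac{1}{2}\,|p|^{2},
\]
% application frequence : \(\omega(p) = \partial_p h(p) = p\) ; le flot s'ecrit
\[
  q(t) = q_0 + p_0\,t \ (\mathrm{mod}\ 1), \qquad p(t) = p_0,
\]
% de sorte que chaque tore \(\mathcal{T}_{\omega(p_0)} = \mathbb{T}^n \times \{p_0\}\)
% est lagrangien, invariant, et porte un flot lineaire de vecteur frequence
% \(\omega(p_0) = p_0\). Ici \(\partial_p^2 h = \mathrm{Id}\), donc l'application
% frequence est un diffeomorphisme et l'hypothese de non-degenerescence du
% theoreme KAM est satisfaite.
```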
La question suivante s'ensuit naturellement. Parmi cette famille de tores invariants, lesquels persistent si nous considérons une petite perturbation εf (q, p) du hamiltonien non perturbé h? Dit autrement, combien de ces tores invariants persistent si 0 < ε 1? A. N. Kolmogorov a été le premier à répondre à cette question et, en 1954, il a prouvé ce qui est maintenant généralement connu comme le théorème KAM [START_REF] Kolmogorov | On conservation of conditionally periodic motions for a small change in Hamilton's function[END_REF] (dans le cadre des hamiltoniens analytiques réels). La preuve repose sur deux idées principales. La première consiste à prouver la persistance des tores invariants avec une dynamique quasipériodique dont le vecteur fréquence satisfait certaines conditions arithmétiques. La deuxième concerne la convergence. L'auteur a prouvé ce résultat avec un schéma itératif basé sur l'algorithme de Newton caractérisé par une convergence quadratique. À cette fin, nous introduisons la définition de vecteur (γ, τ )-diophantien. Définition. Pour γ > 0 et τ > 0, un vecteur ω ∈ R n est (γ, τ )-diophantien si |k • ω| ≥ γ |k| τ pour tout k ∈ Z n /{0}, où |k| = |k 1 | + ... + |k n |. Pour tout γ > 0 et τ ≥ 0, soit D τ γ l'ensemble de tous les vecteurs ω ∈ R n qui satisfont cette infinité de conditions. En outre, nous définissons les ensembles de vecteurs suivants D γ = τ >0 D τ γ , D τ = γ>0 D τ γ . Si τ < n -1, alors D τ est vide. Nous devons assumer τ ≥ n -1 afin d'avoir D τ = ∅. De plus, quand τ > n -1, alors D τ est un ensemble de mesure de Lebesgue totale. D'un autre côté, τ = n -1 implique que D τ est un ensemble de mesure de Lebesgue nulle mais la dimension de Hausdorff est maximale. Nous supposons n ≥ 2, parce que les systèmes à un degré de liberté sont toujours intégrables. Soit Ω = ∂ p h(B) l'ensemble des fréquences, nous définissons Ω γ = {ω ∈ Ω ∩ D γ : d(ω, ∂Ω) ≥ γ} l'ensemble des vecteurs diophantiens avec au moins une distance γ du bord de Ω. Théorème (KAM). Soit H le hamiltonien défini par (H). Nous supposons que H est analytique réel et l'application fréquence ∂ p h : B → Ω est un difféomorphisme. Alors il existe une constante c > 0 telle que si ε < cγ 2 tous les tores invariants T ω du système non perturbé avec ω ∈ Ω γ persistent comme tores lagrangiens invariants, n'étant que légèrement déformés. 1.1 Théorie KAM Plus précisément, pour tout ω ∈ Ω γ il existe un plongement analytique Φ ε ω : T n → T n × B tel que l'image ImΦ ε ω = T ε ω est un tore invariant par H et Φ ε ω conjugue la restriction de X H à T ε ω au champ de vecteurs constant ω sur T n . De plus, le déplacement des variables action du tore perturbé par rapport à celui non perturbé on peut l'estimer comme O( √ ε). Cette formulation du théorème KAM est due à Pöschel et nous nous référons à [START_REF]Smooth ergodic theory and its applications[END_REF] pour la preuve. En ce qui concerne l'acronyme KAM, en 1963, Arnold a publié une autre démonstration du théorème de Kolmogorov [Arn63a] avec une hypothèse de nondégénérescence differente pour les systèmes hamiltoniens analytiques réels. La même année, il a appliqué cette théorie aux questions de stabilité pour le problème à n-corps. Il a démontré l'existence d'un ensemble de mesure positive des conditions initiales dans l'espace de phase donnant lieu à des mouvements quasipériodiques proches à des trajectoires képlériennes non perturbées, circulaires et coplanaires [Arn63b]. 
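Pour illustrer la définition de vecteur diophantien rappelée plus haut, voici un exemple classique (ajouté à titre indicatif) fondé sur le nombre d'or.

```latex
% Exemple : pour n = 2, le vecteur construit sur le nombre d'or
\[
  \omega = (1,\varphi), \qquad \varphi = \tfrac{1+\sqrt{5}}{2},
\]
% est diophantien. Comme \(\varphi\) est mal approche par les rationnels,
% c'est-a-dire \(|q\varphi - p| \ge c/q\) pour un certain c > 0 et tout p/q,
% on obtient
\[
  |k_1 + k_2\,\varphi| \;\ge\; \frac{\gamma}{|k|^{\tau}},
  \qquad \tau = 1 = n-1,
\]
% pour tout \(k \in \mathbb{Z}^2\setminus\{0\}\) et un \(\gamma>0\) convenable :
% \(\omega \in D^{1}_{\gamma}\), soit le cas critique \(\tau = n-1\) mentionne
% ci-dessus (mesure de Lebesgue nulle, dimension de Hausdorff maximale).
```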
En 1962, Moser a démontré un théorème sur la persistance des courbes invariantes pour twist maps du plan [START_REF] Moser | On invariant curves of area-preserving mappings of an annulus[END_REF]. C'était surprenant parce que, en mélangeant les idées de Kolmogorov et Nash, l'auteur a prouvé ce résultat en différentiabilité finie, remplaçant l'hypothèse analytique assumée par Kolmogorov. La théorie KAM a connu de nombreux développements au cours des années. Pour une preuve détaillée du théorème de Kolmogorov basé sur ses idées, nous renvoyons à [START_REF] Benettin | A proof of Kolmogorov's theorem on invariant tori using canonical transformations defined by the Lie method[END_REF]. Plusieurs autres preuves et raffinements sont apparus dans la littérature. Concernant un travail plus récent, celui de Pöschel [START_REF]Smooth ergodic theory and its applications[END_REF] est remarquable. Tenant compte de l'idée de Moser [START_REF] Moser | Convergent series expansions for quasi-periodic motions[END_REF], l'auteur introduit les fréquences ω = ∂ p h comme des paramètres indépendants, donnant un énoncé très subtil et une preuve élégante du théorème. D'autre part, une série de preuves sont données par l'introduction d'un théorème des fonctions implicites adapté dans une échelle d'espace de Banach qui remplace le système itératif inauguré par Kolmogorov. Nous renvoyons aux travaux de Zehnder [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF], [START_REF]Generalized implicit function theorems with applications to some small divisor problems[END_REF], Herman [START_REF] Bost | Tores invariants des systemes dynamiques hamiltoniens[END_REF] et Féjoz [START_REF]Démonstration du 'théorème d'Arnold' sur la stabilité du système planétaire (d'après Herman)[END_REF]. Salamon et Zehnder [START_REF] Salamon | KAM theory in configuration space[END_REF] proposent une autre approche au théorème KAM en utilisant le formalisme lagrangien au lieu du formalisme hamiltonien afin d'éviter la composition dénombrable de changement de coordonnées. Nous concluons avec quelques remarques sur les hypothèses du théorème KAM. Tout d'abord, nous savons déjà que ce résultat peut être prouvé pour des hamiltoniens en différentiabilité finie [START_REF] Moser | On invariant curves of area-preserving mappings of an annulus[END_REF]. Il suffit de supposer que le système soit de classe C k avec k > 2(τ + 1) > 2n (voir [START_REF] Pöschel | Integrability of Hamiltonian systems on Cantor sets[END_REF], [START_REF] Dietmar | The Kolmogorov-Arnold-Moser theorem[END_REF], [START_REF] Bounemoura | Positive measure of KAM tori for finitely differentiable Hamiltonians[END_REF], [START_REF] Koudjinan | A KAM theorem for finitely differentiable Hamiltonian systems[END_REF]). Nous mentionnons également le théorème KAM dans le cas Gevrey [START_REF] Popov | KAM theorem for Gevrey Hamiltonians[END_REF] et ultra différenciable [START_REF] Bounemoura | Hamiltonian perturbation theory for ultra-differentiable functions[END_REF]. Rüssmann a un rôle important dans la théorie KAM avec plusieurs contributions remarquables. Pour ce qui concerne l'hypothèse de non-dégénérescence, dans le cas analityque, il suffit de supposer que localement l'image de l'application fréquence n'est pas contenue dans aucun hyperplan de R n [START_REF] Rüssmann | Nondegeneracy in the perturbation theory of integrable dynamical systems, Number theory and dynamical systems[END_REF]. 
Par ailleurs, en utilisant des résultats d'approximation très précis de fonctions analytiques par des polynômes, il a amélioré la condition diophantienne sur le vecteur fréquence en supposant la plus générale qui est maintenant appelée condition Bruno-Rüssmann [START_REF] Rüssmann | Invariant tori in non-degenerate nearly integrable Hamiltonian systems[END_REF]. Dans ce contexte, nous faisons référence au résultat de Bounemoura- Introduction Fischler [START_REF] Bounemoura | The classical KAM theorem for Hamiltonian systems via rational approximations[END_REF]. Ils donnent une preuve du théorème KAM, qui, en utilisant des approximations rationnelles, ne comporte pas des estimations des petits diviseurs et des expansions en séries de Fourier. Il s'agit d'une autre approche à la preuve du théorème KAM où la condition Bruno-Rüssmann mentionnée ci-dessus remplace la condition diophantienne sur le vecteur fréquence. Maintenant, nous prenons un peu de recul et proposons un problème, résolu par Arnold [START_REF] Arnold | Small denominators. I. Mapping the circle onto itself[END_REF], sur les petites perturbations des champs de vecteurs constants sur le tore (théorème de forme normale d'Arnold pour les champs de vecteurs sur le tore). Soit 0 < ε 1 un paramètre suffisamment petit, nous considérons le champ de vecteurs sur T n suivant Z : T n -→ R n , Z(q) = ω + εf (q) (Z) avec ω ∈ R n . Il est légitime de se demander s'il existe un difféomorphisme ϕ de T n , ϕ = id + v avec v : T n → R n , qui conjugue le champ de vecteurs Z avec le champ de vecteurs constant ω. C'est-à-dire, Z • ϕ(q) = ∇ϕ(q)ω pour tout q ∈ T n , où ∇ϕ est la différentielle de ϕ. En général, ce n'est pas possible. Même si f est constante et ε arbitrairement petit, nous ne pouvons pas conjuguer ω + εf à ω avec un difféomorphisme homotope à l'identité. Cela a forcé Arnold à prouver un théorème légèrement différent et plus artificiel. Théorème (Arnold). Soit Z comme dans (Z). Nous supposons que Z est analytique réel et ω ∈ D τ γ . Alors, pour ε suffisamment petit, il existe un vecteur α ∈ R n et un difféomorphisme analytique réel ϕ : T n → T n tel que (Z + α) • ϕ(q) = ∇ϕ(q)ω pour tout q ∈ T n . Dans la preuve de ce théorème nous retrouvons certains problèmes similaires à ceux du théorème KAM. Alors que dans le théorème KAM nous recherchons un tore invariant avec une dynamique quasipériodique, ici nous cherchons seulement une conjugaison au champ de vecteurs constant ω. Théorie KAM non-autonome En ce qui concerne cette thèse, nous nous sommes intéressés aux perturbations non autonomes satisfaisant de bonnes propriétés de décroissance lorsque t → ±∞. Dans ce cas, nous ne cherchons pas des solutions quasipériodiques, mais des différents types de solutions qui sont encore liées à des solutions quasipériodiques. Pour être plus précis, introduisons la définition du tore KAM asymptotique analytique. Pour certains s > 0, nous définissons les domaines complexes suivants T n s := {q ∈ C n /Z n : | Im(q)| ≤ s} B s := {p ∈ C n : |p| ≤ s}, 1.2 Théorie KAM non-autonome et pour tout υ > 0, nous introduisons l'intervalle qui suit J υ = [υ, +∞) ⊂ R. Pour chaque fonction f définie sur T n s × B s × J υ et pour t ∈ J υ fixé, soit f t la fonction définie sur T n s × B s telle que f t (q, p) = f (q, p, t). Nous gardons cette notation pour le reste de ce travail, également lorsque T n s × B s est remplacé par T n × B ou une variété lisse appropriée. Soit P égal à T n ×B, T n ou à un sous-ensemble ouvert de R 2n . 
Nous considérons les champs de vecteurs analytiques réels dépendant du temps X t et X t 0 sur P, pour tout t ∈ J υ , et un plongement analytique de T n sur P tel que lim t→+∞ |X t -X t 0 | s = 0, (1.1) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)ω pour tout (q, t) ∈ T n × J υ , (1.2) où ω ∈ R n et | • | s est la norme analytique (voir Appendix B). Par souci de clarté, nous précisons que ∂ q ϕ 0 (q)ω est l'élément de R 2n ayant la composante j égale à ∂ q ϕ 0 (q)ω j = ∂ q ϕ 0,j (q) • ω, pour tout j = 1, ..., 2n. Autrement dit, nous considérons un champ de vecteurs X t convergent asymptotiquement dans le temps vers un champ de vecteurs X t 0 ayant un tore invariant supportant une dynamique quasipériodique de vecteur fréquence ω. Définition 1.1. Nous supposons que (X, X 0 , ϕ 0 ) satisfont (1.1) et (1.2). Une famille de plongements analytiques réels ϕ t : T n → P est un tore KAM (positif ) asymptotique analytique associé à (X, X 0 , ϕ 0 ) s'il existe 0 < s ≤ s et υ ≥ υ ≥ 0 tel que X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω + ∂ t ϕ(q, t), (1.3) lim t→+∞ |ϕ t -ϕ 0 | s = 0, (1.4) pour tout (q, t) ∈ T n ×J υ . Lorsque P est une variété symplectique avec dimP = 2n, alors nous disons que ϕ t est lagrangien si ϕ t (T n ) est lagrangien pour tout t. Si nous définissons J υ = (-∞, -υ] et nous remplaçons (1.1) et (1.4) avec lim t→-∞ |X t -X t 0 | s = 0, lim t→-∞ |ϕ t -ϕ 0 | s = 0, nous obtenons la définition du tore KAM (négatif) asymptotique analytique associé à (X, X 0 , ϕ 0 ). Dans ce qui suit, nous discutons le cas pour les temps positifs (t ∈ R + ). Des résultats similaires sont vrais pour les temps négatifs. Cette définition est due à M. Canadell et R. de la Llave (see [START_REF] Canadell | KAM tori and whiskered invariant tori for non-autonomous systems[END_REF]). Dans cet article, ils utilisent l'expression tore KAM non-autonome. Nous préférons tore 1 Introduction KAM asymptotique pour souligner les propriétés asymptotiques de cette famille de plongements. La définition originale est pour des champs de vecteurs définis sur des variétés lisses. Cependant, dans ce travail nous nous intéressons aux champs de vecteurs hamiltoniens ou aux champs de vecteurs sur le tore. C'est pour cette raison que P est égal à T n × B, T n ou à un sous-ensemble ouvert de R 2n . Dans ce qui suit, nous faisons une série de remarques sur la définition précédente toujours due à M. Canadell et R. de la Llave. Nous observons que nous pouvons réécrire (1.3) en termes du flot de X. En fait, soit ψ t t 0 ,X le flot au moment t avec un temps initial t 0 de X. Proposition 1.1. Si le flot ψ t t 0 ,X est défini pour tout t, t 0 ∈ J υ , alors (1.3) est équivalent à ψ t t 0 ,X • ϕ t 0 (q) = ϕ t (q + ω(t -t 0 )), (1.5) pour tout t, t 0 ∈ J υ et q ∈ T n . Démonstration. Nous nous référons à la Section 3.1. Grâce à la proposition précédente, il est facile à voir que (1.3) est trivial. Proposition 1.2. Si ψ t t 0 ,X est défini pour tout t, t 0 ∈ J υ , il est toujours possible de trouver une famille de plongements satisfaisant (1.3). Démonstration. Soit φ : T n → P un plongement alors, pour tout t, t 0 ∈ J υ et q ∈ T n , nous considerons ϕ t (q) = ψ t t 0 ,X • φ(q -ω(t -t 0 )). Il s'agit d'une famille de plongements satisfaisant (1.5). En effet, par la précédente définition de ϕ t nous avons que ϕ t 0 (q) = φ(q) pour tout q ∈ T n . Ensuite, par construction, ϕ t satisfait (1.5) et donc (1.3). Une autre conséquence importante de (1.5) est la suivante. Proposition 1.3. Nous supposons que ψ t t 0 ,X est défini pour tout t, t 0 ∈ R. 
S'il existe un tore KAM asymptotique analytique ϕ t défini pour tout t ≥ υ , alors nous pouvons étendre l'ensemble de définition pour tout t ∈ R. Démonstration. Pour tout q ∈ T n , nous considerons φ t (q) = ϕ t (q) pour tout t ≥ υ ψ t υ ,X • ϕ υ (q -ω(t -υ )) pour tout t ≤ υ . (1.6) Il s'agit d'une famille de plongements qui vérifie (1.3) et (1.4). Malheureusement, nous ne pouvons déduire aucune information asymptotique pour la famille de plongements (1.6) quand t → -∞. Pour ce qui concerne la dynamique associée à un tore KAM asymptotique analytique, nous introduisons la définition de solution asymptotiquement quasipériodique et discutons certaines propriétés de ces orbites. 1.2 Théorie KAM non-autonome Définition 1.2. Nous supposons que (X, X 0 , ϕ 0 ) satisfont (1.1) et (1.2). Une courbe intégrale g(t) de X est une solution asymptotiquement quasipériodique (positive) associée à (X, X 0 , ϕ 0 ) s'il existe q ∈ T n de telle sorte que lim t→+∞ |g(t) -ϕ 0 (q + ω(t -t 0 ))| = 0. La proposition suivante prouve que si ϕ t est un tore KAM asymptotique analytique associé à (X, X 0 , ϕ 0 ), alors chaque point initial ϕ t 0 (q) donne lieu à une solution asymptotiquement quasipériodique associée à (X, X 0 , ϕ 0 ). Proposition 1.4. Soit ϕ t un tore KAM asymptotique analytique associé à (X, X 0 , ϕ 0 ). Alors, pour tout q ∈ T n et t 0 ∈ J υ , g(t) = ψ t t 0 ,X • ϕ t 0 (q) est une solution asymptotiquement quasipériodique associée à (X, X 0 , ϕ 0 ). Démonstration. Grâce à (1.5) g(t) = ψ t t 0 ,X • ϕ t 0 (q) = ϕ t (q + ω(t -t 0 )) et donc, par (1.4), nous obtenons ce qu'on voulait démontrer. Nous concluons cette section avec une propriété importante concernant le cas dans lequel X et X 0 sont des champs de vecteurs hamiltoniens. Soit P = T n × B et nous supposons que ϕ t est un tore KAM asymptotique analytique associé à (X, X 0 , ϕ 0 ). Dans le cas particulier des systèmes hamiltoniens, si le tore invariant ϕ 0 est lagrangien, alors ϕ t est lagrangien pour tout t. Plus concrètement, nous avons la proposition suivante Proposition. Soit ϕ t un tore KAM asymptotique analytique associé à (X, X 0 , ϕ 0 ). Si ϕ 0 est lagrangien, alors ϕ t est lagrangien pour tout t ∈ J υ . M.Canadell et R. de la Llave le prouvent dans le cas discret. Dans la Section 3.1, nous le prouvons dans le cas continu. Le premier résultat concernant l'existence d'un tore KAM asymptotique analytique associé à des hamiltoniens dépendants du temps est dû à A. Fortunati et S. Wiggins [START_REF] Fortunati | Persistence of Diophantine flows for quadratic nearly integrable Hamiltonians under slowly decaying aperiodic time dependence[END_REF]. Ils considèrent des hamiltoniens analytiques réels dépendant du temps de la forme suivante                          H : T n × B × R + → R, H(q, p, t) = ω • p + 1 2 Γ • p 2 h +εf (q, p, t), ω ∈ D γ τ , Γ est réelle, symétrique et non-singulaire, f est quadratique en p, |f t | s ≤ Ce -at pour tout t ∈ R + , (FW) 1 Introduction pour certains s > 0, une constante positive C et a ∈ (0, 1). Nous soulignons que Γ•p 2 représente le vecteur p donné deux fois comme argument de la forme bilinéaire symétrique Γ. Soit ϕ 0 le plongement trivial suivant ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0). Théorème (Fortunati-Wiggins). Soit H comme dans (FW). Alors, si ε est suffisamment petit, il existe un tore KAM asymptotique analytique associé à (X H , X h , ϕ 0 ). Rappelons que X H représente le champ de vecteurs hamiltonien associé au hamiltonien H. 
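À titre d'exemple (illustration ajoutée, avec des constantes choisies pour simplifier), un hamiltonien vérifiant les hypothèses (FW) est donné ci-dessous.

```latex
% Exemple illustratif d'un hamiltonien du type (FW), avec Gamma = Id et a = 1/2 :
\[
  H(q,p,t) \;=\; \omega\cdot p \;+\; \tfrac12\,|p|^{2}
  \;+\; \varepsilon\, e^{-t/2}\, p_1^{2}\cos q_1,
  \qquad \omega \in D^{\tau}_{\gamma}.
\]
% La perturbation f(q,p,t) = e^{-t/2} p_1^2 cos(q_1) est analytique reelle,
% quadratique en p, et verifie |f^t|_s <= C_s e^{-t/2} pour tout t dans R_+ ;
% pour epsilon assez petit, le theoreme de Fortunati-Wiggins fournit donc un
% tore KAM asymptotique analytique associe a (X_H, X_h, phi_0).
```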
Dans la preuve, ils convertissent le hamiltonien H en un hamiltonien autonome en introduisant η ∈ R comme le momentum conjugué à ξ = t. La preuve repose sur un schéma à la Kolmogorov avec des modifications appropriées. La partie intéressante consiste en la solution de l'équation homologique correspondante, qui est résolue en utilisant des estimations des petits diviseurs et des expansions en séries de Fourier. Environ un an plus tard, M. Canadell and R. de la Llave [START_REF] Canadell | KAM tori and whiskered invariant tori for non-autonomous systems[END_REF] ont publié un résultat qui généralise celui de Fortunati-Wiggins. Dans cet article, ils travaillent en différentiabilité finie. Ils considèrent des champs de vecteurs dépendant du temps qui convergent dans le temps à des champs de vecteurs autonomes ayant un tore invariant supportant une dynamique quasipériodique. À cette fin, introduisons la définition de C σ -tore KAM asymptotique qui est la version à différentiabilité finie de la Définition 1.1. Étant donné σ ≥ 0 et un entier positif k ≥ 0, nous considerons les champs de vecteurs dépendant du temps X t et X t 0 de classe C σ+k sur une variété lisse P, pour tout t ∈ J υ , et un plongement ϕ 0 de T n sur P de classe C σ tel que lim t→+∞ |X t -X t 0 | C σ+k = 0, (1.7) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)ω pour tout (q, t) ∈ T n × J υ , (1.8) où ω ∈ R n , C σ est l'espace des fonctions Hölder et | • | C σ est la norme Hölder (voir Appendix A). Définition 1.3. Nous supposons que (X, X 0 , ϕ 0 ) satisfont (1.7) et (1.8). Une famille de C σ -plongements ϕ t : T n → P est un C σ -tore KAM asymptotique associé à (X, X 0 , ϕ 0 ) s'il existe υ ≥ υ ≥ 0 tel que lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω + ∂ t ϕ(q, t), pour tout (q, t) ∈ T n ×J υ . Lorsque P est une variété symplectique avec dimP = 2n, alors nous disons que ϕ t est lagrangien si ϕ t (T n ) est lagrangien pour tout t. Toutes les remarques et considérations faites ci-dessus pour ce qui concerne des tores KAM asymptotique analytique restent également vrai pour des C σ -tores KAM asymptotiques. Théorie KAM non-autonome Pour revenir au travail de Canadell-de la Llave, soit P une variété lisse arbitraire et σ ≥ 1. Nous considérons un champ de vecteurs dépendant du temps X t de classe C σ+1 sur P, un champ de vecteurs autonome X 0 de classe C σ+1 sur P et un plongement de T n sur P de classe C σ tel que |X t -X 0 | C σ+1 ≤ ελ t pour tout t ∈ R + X 0 • ϕ 0 (q) = ∂ q ϕ 0 (q)ω pour tout q ∈ T n (CdlL) pour des paramètres positifs appropriés ε et 0 ≤ λ < 1. Soit φ t X 0 le flot de X 0 et Dφ t X 0 son différentiel. Théorème (Canadell-de la Llave). Soit X comme dans (CdlL). Nous supposons qu'il existe µ > 0 tel que |Dφ d X 0 • ϕ 0 • • • Dφ 0 X 0 • ϕ 0 | C 0 ≤ Cµ d (1.9) µ σ λ < 1 (1.10) pour tout d ∈ N et une constante positive appropriée C. Alors, pour ε suffisamment petit, il existe un C σ -tore KAM asymptotique associé à (X, X 0 , ϕ 0 ) Les auteurs considèrent un champ de vecteurs dépendant du temps convergeant, avec une vitesse exponentielle par rapport au temps, vers un champ de vecteurs autonome ayant un tore invariant supportant une dynamique quasipériodique. Ils ont besoin d'un certain contrôle, donné par le paramètre µ, sur la dynamique transverse au tore invariant ϕ 0 . Ce paramètre est en competition avec le paramètre λ qui mesure la vitesse de décroissance du champ de vecteurs dépendant du temps X t . 
Ils démontrent aussi que le C σ -tore KAM asymptotique trouvé converge avec une vitesse exponentielle par rapport au temps vers le plongement autonome ϕ 0 . Cela signifie que, pour tout t ∈ R + , |ϕ t -ϕ 0 | C σ ≤ Cλ t pour une costante appropriée C qui peut différer de celle dans l'énoncé du théorème. La preuve repose sur le théorème des fonctions implicites. Ce théorème généralise celui de Fortunati-Wiggins. Tout d'abord, Fortunati-Wiggins prouvent un théorème concernant des systèmes hamiltoniens dépendant du temps, tandis que Canadell-de le Llave travaillent avec des champs de vecteurs plus généraux sur des variétés lisses. Nous notons que Canadell-de la Llave ne supposent aucune hypothèse de non dégénérescence sur X 0 et aucune condition arithmétique sur le vecteur fréquence ω. Par contre, ils maintiennent la décroissance exponentielle par rapport au temps du champ de vecteurs dépendant du temps X t . Concernant (1.9) et (1.10), nous avons quelques considérations à faire. Rappelons le hamiltonien H considéré par Fortunati et Wiggins H : T n × B × R + → R, H(q, p, t) = ω • p + 1 2 Γ • p 2 h(p) +εf (q, p, t) 2 Résultats Principaux où Γ est une matrice réelle, symétrique, non-singulaire et h est le hamiltonien non perturbé. Par ailleurs, pour un certain s > 0 |f t | s ≤ Ce -at pour tout t ∈ R + , où a est un paramètre positif pris conventionnellement plus petit que 1. Soit φ t X h le flot du hamiltonien X h associé au hamiltonien h, alors pour tout (q, p) ∈ T n × B φ t X h (q, p) = (q + (ω + Γp)t, p). Donc, en prenant la différentielle de φ t X h Dφ t X h (q, p) = Id Γt 0 Id . Cela signifie que, pour tout d ∈ N |Dφ d X h • ϕ 0 • • • Dφ 0 X h • ϕ 0 | C 0 ≤ C(1 + ... + d)|Γ| = C|Γ| d(d + 1) 2 , où |Γ| = max 1≤i,j≤n |Γ ij | et C est une constante qui ne dépend pas de d. Dans ce cas, nous observons que la norme précédente est contrôlée par le terme d(d+1) qui, quand µ > 1, diverge plus lentement que µ d si d → +∞. Alors, dans cette situation particulière, nous pouvons remplacer (1.10) avec λ < 1. Cela équivaut à supposer a > 0 dans le résultat de Fortunati-Wiggins. Autrement dit, si nous considérons des perturbations dépendant du temps des systèmes hamiltoniens intégrables, il suffit de supposer que la perturbation decroît avec une vitesse exponentielle par rapport au temps, sans aucune hypothèse sur le taux de décroissance. Cela se traduit en a > 0 dans le théorème de Fortunati-Wiggins et λ < 1 dans celui de Canadell-de la Llave. Cela nous amène à nous poser la question suivante. Est-il possible de relaxer l'hypothèse de décroissance exponentielle par rapport au temps dans le cas de perturbations dépendant du temps des systèmes hamiltoniens intégrables? En d'autres termes, si nous considérons une perturbation dépendant du temps d'un système hamiltonien ayant un tore invariant supportant une dynamique quasipériodique, est-il possible de prouver un résultat similaire lorsque la perturbation decroît avec une vitesse plus faible qu'exponentielle? C'est bien cette question qui a motivé le début de ce travail. L'intérêt pour ce genre de perturbations n'est pas artificiel. Ces types de systèmes seraient intéressants en astronomie. L'exemple qui a motivé ce travail est celui d'un système planétaire perturbé par une comète provenant et retournant à l'infini, sur une orbite asymptotiquement hyperbolique. Dans la partie suivante, nous présenterons les principaux résultats contenus dans cette thèse. Résultats Principaux Les résultats principaux de cette thèse sont organisés en quatre parties. 
La première partie, appelée Asymptotic Motions Converging to Quasiperiodic Dynamics (voir II), traite certains résultats qui améliorent les théorèmes de Fortunati-Wiggins et Canadell-de la Llave dans le cas des champs de vecteurs hamiltoniens et champs de vecteurs sur le tore dépendant du temps. La décroissance exponentielle par rapport au temps est relaxée et l'hypothèse de petitesse sur la perturbation est supprimée. Dans la deuxième partie, Biasymptotic Motions Converging to Quasiperiodic Dynamics (voir III), en utilisant une version différente du résultat mentionné cidessus, nous prouvons l'existence de solutions biasymptotiquement quasipériodiques pour des champs de vecteurs hamiltoniens dépendant du temps. Plus précisément, nous montrons l'existence de certaines orbites convergeant vers une dynamique quasipériodique caractérisée par un certain vecteur fréquence ω + ∈ R n dans le futur (t → +∞) et vers une dynamique quasipériodique de vecteur fréquence ω -∈ R n dans le passé (t → -∞). La troisième partie, Applications to Celestial Mechanics (voir IV), est consacrée à l'étude de l'exemple d'un système planétaire (le problème à trois corps dans le plan) perturbé par une comète donnée provenant et retournant à l'infini, asymptotiquement le long d'une orbite képlérienne hyperbolique. Nous démontrons qu'il existe des solutions où la comète attire le centre de masse du système planétaire avec une vitesse asymptotiquement nulle quand le temps tend vers +∞. En outre, si nous considérons un repère attaché au centre de masse du système planétaire, les mouvements des planètes convergent vers certaines dynamiques qui sont proches (dans un sens que nous préciserons plus tard) de certains mouvements quasipériodiques associés au hamiltonien du problème à trois corps dans le plan, dont l'existence est garantie par [Arn63b] La partie Asymptotic Motions Converging to Arbitrary Dynamics (voir V) conclut cette thèse. Nous prouvons certains résultats qui sont une variation du théorème de Canadell-de la Llave dans les cas particuliers des champs de vecteurs hamiltoniens et champs de vecteurs sur le tore dépendant du temps. Plus précisément, nous considérons des champs de vecteurs dépendant du temps appropriés X convergeant avec une vitesse exponentielle par rapport au temps vers des champs de vecteurs X 0 ayant un tore invariant ϕ 0 supportant une dynamique quelconque. La nouveauté repose sur le fait que la dynamique sur ϕ 0 est arbitraire (donc, en général, pas quasipériodique). La décroissance exponentielle par rapport au temps est maintenue, mais nous n'avons pas besoin d'hypothèses de petitesse. Dans ce qui suit, nous examinerons plus en profondeur ce qui a été dit jusqu'à présent. Plus précisément, la dernière partie de ce chapitre est divisée en quatre sections. Chacune d'elles est consacrée à une partie de cette thèse, où nous énoncerons et commenterons chaque résultat contenu dans ce travail. Nous rappelons que B ⊂ R n est une boule centrée à l'origine et, pour un certain υ > 0, nous introduisons l'intervalle suivant J υ = [υ, +∞) ⊂ R. Pour chaque fonction f définie sur T n × B × J υ et pour t ∈ J υ fixé, soit f t la fonction définie sur T n × B tel que f t (q, p) = f (q, p, t) pour tout (q, p) ∈ T n × B. Par ailleurs, pour p 0 ∈ B fixé, soit f p 0 la fonction définie sur T n × J υ tel que f p 0 (q, t) = f (q, p, t) pour tout (q, t) ∈ T n ×J υ . Nous gardons cette notation aussi dans le cas où T n ×B est remplacé par un voisinage complexe T n s × B s de T n × B ou une variété lisse appropriée. 
De plus, nous utilisons cette notation également dans le cas où J υ est remplacé par R. Mouvements Asymptotiques Convergeant vers une Dynamique Quasipériodique Cette section est divisée en deux sous-sections. La raison de cette division est que nous prouvons les mêmes résultats dans le cadre à différentiabilité finie et analytique. Nous sommes intéressés à des perturbations dépendant du temps de certains hamiltoniens non autonomes et des champs de vecteurs constants sur le tore. Plus précisément, nous considérons les hamiltoniens dépendant du temps de la forme H : T n × B × J 0 → R, H(q, p, t) = h(q, p, t) + f (q, p, t), où h est dans la ω-forme normal de Kolmogorov, avec ω ∈ R n . Cela signifie que, pour tout (q, t) ∈ T n × J 0 h(q, 0, t) = c, ∂ p h(q, 0, t) = ω, avec c ∈ R. Soit ϕ 0 le plongement trivial suivant ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0), alors ϕ 0 est un tore lagrangien invariant pour X h supportant une dynamique quasipériodique de vecteur fréquence ω. Nous rappelons que X h est le champ de vecteurs hamiltonien associé au hamiltonien h. Nous définissons K ω comme étant l'ensemble des Hamiltoniens h : T n × B × J 0 → R sous la ω-forme normale de Kolmogorov. D'autre part, nous considérons les champs de vecteurs sur le tore dépendant du temps Z de telle sorte que Z : T n × J 0 → R n , Z(q, t) = ω + P (q, t) avec ω ∈ R n . Cas à Différentiabilité Finie Nous nous intéressons aux fonctions holderiennes C σ . Nous nous référons à Appendix A pour une petite introduction. Plus précisément, afin de quantifier la régularité des fonctions, nous introduisons l'espace suivant. Étant donné σ ≥ 0, υ ≥ 0 et un entier positif k ≥ 0, nous avons la définition suivante Définition. Soit Sυ σ,k l'espace de fonctions f définies sur T n × B × J υ tel que, pour tout t ∈ J υ f t ∈ C σ+k (T n × B) et ∂ i (q,p) f ∈ C(T n × B × J υ ) pour tout 0 ≤ i ≤ k. Mouvements Asymptotiques Convergeant vers une Dynamique Quasipériodique Conventionnellement f = ∂ 0 (q,p) f . En d'autres termes, f ∈ Sυ σ,k si f t ∈ C σ+k (T n × B), pour tout t ∈ J υ , et f est continue et ses dérivés partielles par rapport à (q, p) le sont aussi jusqu'à l'ordre k. Nous utiliserons cette notation également pour des applications définies sur T n × J υ . Ceci sera spécifié par le contexte. Pour mesurer la décroissance par rapport au temps des perturbations, nous introduisons des fonctions positives, décroissantes et intégrables u sur J 0 et nous notons ū(t) = +∞ t u(τ )dτ pour tout t ∈ J 0 . Maintenant, nous avons tout ce dont nous avons besoin pour énoncer le théorème suivant. Étant donné ω ∈ R n et σ ≥ 1, nous considérons l'hamiltonien dépendant du temps H de la forme                      H : T n × B × J 0 -→ R H(q, p, t) = h(q, p, t) + f (q, p, t), h ∈ K ω f 0 , ∂ p f 0 , ∂ 2 p H ∈ S0 σ,2 sup t∈J 0 |f t 0 | C σ+2 < ∞, sup t∈J 0 |∂ 2 p H t | C σ+2 < ∞, |∂ q f t 0 | C σ+1 ≤ a(t), |∂ p f t 0 | C σ+2 ≤ b(t) pour tout t ∈ J 0 , ( * A ) où a et b sont des fonctions positives, décroissantes et intégrables sur J 0 . Nous supposons qu'il existe υ ≥ 0 tel que a et b satisfont les conditions suivantes ā(t) ≤ b(t) ā(t)b(t) ≤ a(t) b(t) (#) pour tout t ∈ J υ . Théorème A. Soit H comme dans ( * A ) avec a et b qui satisfont (#). Alors, il ex- iste h ∈ K ω et un C σ -tore KAM asymptotique lagrangien ϕ t associé à (X H , X h, ϕ 0 ). Nous commençons avec deux exemples des fonctions a et b qui vérifient (#). Tout d'abord, nous considérons le cas à décroissance exponentielle. 
Soient a et b les fonctions suivantes a(t) = e -λ 1 t , b(t) = e -λ 2 t λ 1 , pour certains paramètres positifs λ 1 ≥ λ 2 > 0. Nous pouvons vérifier facilement que (#) est satisfait pour tout t ∈ J 0 . L'exemple suivant est plus intéressant que le précédent. Il s'agit de décroissance polynomiale. Nous considérons a(t) = 1 t l+1 , b(t) = 1 t l , 13 pour un paramètre réel positif l > 1. Ce couple de fonctions satisfait (#) pour tout t ∈ J 1 . Le théorème précédent prouve l'existence d'un C σ -tore KAM asymptotique ϕ t de la forme ϕ t (q) = (q + u t (q), v t (q)) pour tout q ∈ T n et t suffisamment grand, où id + u t est un difféomorphisme du tore pour tout t fixé. En outre, nous obtenons également des informations sur la décroissance par rapport au temps de u et v. Plus précisément |u t | C σ ≤ C b(t), |v t | C σ ≤ Cā(t), pour tout t assez grand et pour une constante appropriée C. Ce théorème généralise celui de Canadell-de la Llave dans le cas des champs de vecteurs hamiltoniens dépendant du temps. Concernant la preuve, elle est basée sur le théorème des fonctions implicites. Par ailleurs, une partie importante repose sur la résolution du problème linéaire associé. Étant donné σ ≥ 0, υ ≥ 0 et ω ∈ R n , il consiste dans la solution de l'équation suivante d'inconnu κ : T n × J υ → R      ω • ∂ q κ(q, t) + ∂ t κ(q, t) = g(q, t), g ∈ Sυ σ,0 , |g t | C σ ≤ g(t), for all t ∈ J υ , (HE A ) où g(t) est une fonction positive, décroissante et intégrable sur J υ et g : T n × J υ → R est donnée. Nous pouvons facilement trouver une solution à l'équation précédente par intégration grâce à un changement de coordonnées approprié qui rectifie la dynamique sur le tore. Cette solution κ existe et satisfait |κ t | C σ ≤ ḡ(t) pour tout t ∈ J υ . Dans un premier lieu, nous observons que κ a la même régularité que g (elle est de classe C σ ). Si g(t) = e -λt avec λ > 0 (comme dans le théorème de Canadell-de la Llave), alors ḡ(t) = 1 λ e -λt . Par conséquent, dans ce cas particulier, la solution κ décroit avec une vitesse exponentielle et le même taux de décroissance λ de g. Maintenant, si nous considerons g(t) = 1 t l avec l > 1, alors κ décroit comme 1 t l-1 (ḡ(t) = 1 l-1 1 t l-1 ). Dans ce cas, nous perdons en décroissance par rapport au temps. En retournant à la preuve du Théorème A, nous considerons a(t) = 1 t l+1 et b(t) = 1 t l avec l > 1. Nous savons que (#) sont satisfaits. Le problème de perte de décroissance par rapport au temps est résolu dans la partie non linéaire. Les solutions de l'équation homologique, caractérisées par une lente décroissance dans le temps (c'est-à-dire 1 t l et 1 t l-1 ), sont toujours multipliées par d'autres termes satisfaisant de bonnes propriétés de décroissance ( 1 t l+1 ou 1 t l ). Donc, grâce à une série de multiplications, nous pouvons regagner ce que nous perdons en matière de décroissance par rapport au temps. De même que dans le cas général (a(t) et b(t) sont comme en (#)). Ici, l'hypothèse (#) assure que cette série de multiplications dans la partie non linéaire résolve la perte de décroissance par rapport au temps. Contrairement au théorème de Canadell-de la Llave, nous n'avons pas besoin d'assumer une condition de petitesse sur la perturbation. 
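Pour fixer les idées sur l'équation homologique (HE_A) évoquée ci-dessus, voici un complément illustratif (ajouté) : une solution possible s'obtient par intégration le long du flot linéaire, et elle se calcule en forme close dans un cas simple.

```latex
% Une solution possible de  omega . d_q kappa + d_t kappa = g  est
\[
  \kappa(q,t) \;=\; -\int_{t}^{+\infty} g\bigl(q+\omega(\tau-t),\,\tau\bigr)\,d\tau,
  \qquad \text{d'ou } |\kappa^{t}|_{C^{0}} \le \bar g(t).
\]
% Cas explicite (n = 1) : pour g(q,t) = e^{-\lambda t}\cos q, \lambda > 0,
\[
  \kappa(q,t) \;=\; -\,e^{-\lambda t}\,
  \frac{\lambda\cos q - \omega\sin q}{\lambda^{2}+\omega^{2}},
\]
% et un calcul direct donne \(\omega\,\partial_q\kappa + \partial_t\kappa
% = e^{-\lambda t}\cos q = g\) : la solution decroit a la meme vitesse
% exponentielle que g, tandis que pour g(t) = t^{-l} on ne recupere que
% \(\bar g(t) = t^{-(l-1)}/(l-1)\), d'ou la perte de decroissance discutee plus haut.
```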
Nous introduisons υ ≥ 0 afin d'avoir Pour ce qui concerne les perturbations dependant du temps des champs de vecteurs constants sur le tore, étant donné σ ≥ 1 et ω ∈ R n , nous considérons le champ de vecteurs dépendant du temps suivant |∂ q f t 0 | C σ+1 , |∂ p f t 0 | C σ+2 14          Z : T n × J 0 -→ R n Z(q, t) = ω + P (q, t) P ∈ S0 σ,1 , |P t | C σ+1 ≤ P(t) pour tout t ∈ J 0 , (Z A ) où P est une fonction positive, décroissante et intégrable sur J 0 . Corollaire A. Soit Z comme dans (Z A ). Alors, il existe un C σ -tore KAM asymptotique ψ t associé à (Z, ω, Id). De façon similaire à comment nous avons procédé dans le Théorème A, nous prouvons le résultat ci-dessus. Nous concluons cette partie en observant que l'hypothèse d'intégrabilité sur P est optimale. Étant donné Ẑ le champ de vecteurs dépendant du temps défini sur T 1 × J 0 de la forme Ẑ(q, t) = ω + P (t), où ω ∈ R et P (t) > 0 pour tout t > 0. Nous supposons que +∞ t 0 P (τ )dτ = +∞, pour tout t 0 ≥ 0. Soit ψ t t 0 ,X le flot au moment t avec un temps initial t 0 de Ẑ. Alors, pour tout q ∈ T n et t > 0 ψ t 0 +t t 0 , Ẑ (q) = q + ωt + t 0 +t t 0 P (τ )dτ et donc, pour tout q ∈ T n ψ t 0 +t t 0 , Ẑ (q) -q -ωt = t 0 +t t 0 P (τ )dτ. Par conséquent, en prenant la limite pour t → +∞, le côté droit de l'égalité précédent diverge à +∞. Cela signifie qu'il n'existe pas une solution asymptotiquement quasipériodique associée à ( Ẑ, ω, Id) et donc, grâce à la Proposition 1.4, il n'existe pas un C σ -tore KAM asymptotique associé à ( Ẑ, ω, Id). Résultats Principaux Cas Analytique Réel Comme mentionné ci-dessus, nous présentons la version analytique réelle des résultats précédents. Étant donné s > 0 et υ ≥ 0, nous introduisons l'espace de fonctions suivant Définition. Soit A υ s l'espace des fonctions f définies sur T n s × B s × J υ tel que f ∈ C(T n s × B s × J υ ) et, pour tout t ∈ J υ , f t est analytic réel sur T n s × B s . Nous utilisons cette notation aussi pour des applications définies sur T n s × J υ . Étant donné ω ∈ R n et un paramètre réel positif s 0 > 0, nous considérons le hamiltonien dépendant du temps H suivant                    H : T n × B × J 0 -→ R H(q, p, t) = h(q, p, t) + f (q, p, t), h ∈ K ω h, f ∈ A 0 s 0 sup t∈J 0 |f t 0 | s 0 < ∞, sup t∈J 0 |∂ 2 p H t | s 0 < ∞ |∂ q f t 0 | s 0 ≤ a(t), |∂ p f t 0 | s 0 ≤ b(t), pour tout t ∈ J 0 ( * B ) où a, b sont des fonctions positives, décroissantes et intégrables sur J 0 . Théorème B. Soit H comme dans ( * B ) avec a et b qui satisfont (#). Alors, il existe h ∈ K ω et un tore KAM asymptotique analytique lagrangien ϕ t associé à (X H , X h, ϕ 0 ). Comme dans Théorème A, nous prouvons l'existence d'un tore KAM asymptotique analytique ϕ t de la forme ϕ t (q) = (q + u t (q), v t (q)) pour tout q ∈ T n et t suffisamment grand, où, pour chaque t fixé, id + u t est un difféomorphisme du tore. En outre, pour une constante appropriée C |u t | s 4 ≤ C b(t), |v t | s 4 ≤ Cā(t) , pour tout t suffisamment grand. La preuve de ce théorème est essentiellement la même que celle du Théorème A avec des petites modifications. De même que le cas à différentiabilité finie, nous résolvons le problème linéaire associé par un changement de coordonnées approprié qui dépend du flot linéaire sur le tore généré par le champ de vecteurs constant ω. En outre, si t est réel, nous pouvons résoudre l'équation homologique sans réduire le domaine où la solution est analytique. 
Pour cette raison, nous trouvons un tore KAM asymptotique analytique ϕ t qui est juste continu par rapport au temps. Encore une fois, nous prouvons un résultat analogue pour des perturbations analytiques réelles dépendant du temps des champs de vecteurs constants sur le tore. Soit Z le champ de vecteur non autonome sur T n × J 0 de la forme          Z : T n × J 0 -→ R n , Z(q, t) = ω + P (q, t) P ∈ A 0 s 0 , |P t | s 0 ≤ P(t) pour tout t ∈ J 0 (Z B ) 2.2 Mouvements Biasymptotiques Convergeant vers une Dynamique Quasipériodique où ω ∈ R n et 0 < s 0 < 1. Nous supposons que P est une fonction positive, décroissante et intégrable sur J 0 . Corollaire B. Soit Z comme dans (Z B ). Alors il existe un tore KAM asymptotique analytique ψ t associé à (Z, ω, Id). Nous démontrons ce résultat comme conséquence du Théorème B. Mouvements Biasymptotiques Convergeant vers une Dynamique Quasipériodique Ici, nous nous intéressons aux systèmes non autonomes X t definis pour tout t ∈ R. En particulier, cette section vise à prouver l'existence de solutions définies pour tout t ∈ R et convergeant vers certaines solutions quasipériodiques dans le futur (t → +∞) et dans le passé (t → -∞). Pour cette raison, introduisons la définition de C σ -tore KAM biasymptotique. Étant donné σ ≥ 0 et un entier positif k ≥ 0, nous considerons les champs de vecteurs dépendant du temps X t , X t 0,+ , X t 0,-de classe C σ+k sur T n × B, pour tout t ∈ R, et des plongements ϕ 0,+ , ϕ 0,-de T n sur T n × B de classe C σ tel que lim t→±∞ |X t -X t 0,± | C σ+k = 0, (2.1) X 0,± (ϕ 0,± (q), t) = ∂ q ϕ 0,± (q)ω ± pour tout (q, t) ∈ T n × R, (2.2) où ω + , ω -∈ R n . Définition 2.1. Nous supposons que (X, X 0,± , ϕ 0,± ) satisfont (2.1) et (2.2). Pour tout t ∈ R, une famille continue de C σ -plongements ϕ t : T n → T n × B est un C σtore KAM biasymptotique associé à (X, X 0,± , ϕ 0,± ) s'il existe υ ≥ 0 tel que lim t→±∞ |ϕ t -ϕ 0,± | C σ = 0, X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω + + ∂ t ϕ(q, t), pour tout (q, t) ∈ T n × (υ, +∞) X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω -+ ∂ t ϕ(q, t), pour tout (q, t) ∈ T n × (-∞, -υ). De plus, nous disons que ϕ t est lagrangien si ϕ t (T n ) est lagrangien pour tout t. Malheureusement, nous ne sommes pas en mesure de prouver l'existence de C σ -tores KAM biasymptotiques sans faire l'hypothèse que ω + = ω -= ω et que les hamiltoniens H : T n × B × R → R satisfont des symétries très fortes. Plus précisément, nous devons supposer que H(q + ωt, p, t) = -H(q -ωt, p, -t) pour tout (q, p, t) ∈ T n × B × R et un certain ω ∈ R n . Alors, nous pouvons démontrer l'existence d'un C σ -tore KAM biasymptotique avec ω + = ω -= ω. Dans ce qui suit, nous démontrerons des résultats plus faibles. Pour cette raison, nous avons la définition suivante. Résultats Principaux Définition 2.2. Nous supposons que (X, X 0,± , ϕ 0,± ) satisfont (2.1) et (2.2). Une courbe intégrale g(t) de X est une solution biasymptotiquement quasipériodique associée à (X, X 0,± ϕ 0,± ) s'il existe q -, q + ∈ T n de telle sorte que lim t→±∞ |g(t) -ϕ 0,± (q ± + ω ± t)| = 0. (2.3) Donc, nous ne prouverons pas l'existence de C σ -tores KAM biasymptotiques mais plutôt des solutions biasymptotiquement quasipériodiques associées à certains systèmes hamiltoniens dépendant du temps. Dans ce qui suit, nous analyserons deux cas. Dans le premier, nous considérons des perturbations dépendant du temps des hamiltoniens intégrables. Le second concerne des perturbations dépendant du temps des hamiltoniens ayant un large (au sens de la mesure) sous-ensemble des tores invariants. 
Tout d'abord, nous rappelons que, pour tout (p, t) ∈ B × R fixé, nous considérons f t p comme la fonction définie sur T n telle que f t p (q) = f (q, p, t) pour tout q ∈ T n . Cas Intégrable Étant donné un paramètre réel positif σ ≥ 1, nous considérons l'espace de fonctions suivant. Définition. Soit B σ l'espace des fonctions f définies sur T n × B × R tel que f , ∂ p f ∈ C(T n × B × R) et f t p ∈ C σ (T n ) pour tout (p, t) ∈ B × R. Nous introduisons une norme spéciale qui joue un rôle central dans cette partie. Pour tout f ∈ B σ and l > 1, nous définissons |f | σ,l = sup (p,t)∈B×R |f t p | C σ (1 + |t| l ) + sup (p,t)∈B×R | (∂ p f ) t p | C 0 (1 + |t| l-1 ). (2.4) Définition. Étant donné σ ≥ 1 et un entier positif k ≥ 0, nous définissons Bσ,k comme l'espace des fonctions f tel que f ∈ B σ+k , et ∂ i q f ∈ B σ+k-i pour tout 0 ≤ i ≤ k. Dans la définition précédente, ∂ i q f indique les dérivés partielles d'ordre i par rapport aux variables q = (q 1 , ..., q n ) de f . Nous utilisons la convention ∂ 0 q f = f . Évidemment, nous avons Bσ,0 = B σ . Pour tout f ∈ Bσ,k et l > 1, nous concluons cette partie avec l'introduction de la norme suivante f σ,k,l = max 0≤i≤k |∂ i q f | σ+k-i,l . Soit B r ⊂ R n une boule centrée à l'origine de rayon r > 0. Si une fonction f définie sur T n × B r × R appartient à B σ , nous considérons que f satisfait les propriétés dans la définition précédente avec B remplacée par B r . Mouvements Biasymptotiques Convergeant vers une Dynamique Quasipériodique Étant donné σ ≥ 1, l > 1 et 0 < ε < 1, nous considérons le hamiltonien suivant                          H : T n × B 1 × R -→ R H(q, p, t) = h(p) + f (q, p, t) f, ∂ p f ∈ Bσ,2 , |f | σ+2,0 + ∂ q f σ,1,l+2 + ∂ p f σ,2,l+1 < ε, ∂ 2 p H t ∈ C σ+2 (T n × B 1 ) pour tout t ∈ R fixé ∂ i qp ∂ 2 p H ∈ C(T n × B 1 × R) pour tout 0 ≤ i ≤ 3. sup t∈R |∂ 2 p H t | C σ+2 < ∞. ( * C ) Pour chaque p ∈ B 1 , nous considérons le plongement trivial suivant ϕ 0,p : T n → T n × B 1 , ϕ 0,p (q) = (q, p). Théorème C. Soit H comme dans ( * C ). Alors il existe un hamiltonien dépendant du temps h tel que, si ε est suffisamment petit, pour tout (q, p) ∈ T n × B1 2 il existe p -, p + ∈ B 1 et une solution biasymptotiquement quasipériodique g(t) associée à (X H , X h, ϕ 0,p ± ) tel que g(0) = (q, p). En ce qui concerne la régularité de H, nous soulignons que si nous prenons l'hypothèse plus forte f ∈ C σ+3 (T n × B 1 × R) et ∂ 2 p H ∈ C σ+2 (T n × B 1 × R), alors les hypothèses de régularité du théorème précédent sont satisfaites. Pour chaque p ∈ B 3 4 , nous prouvons l'existence d'un C σ -tore KAM asymptotique ϕ t +,p associé à (X H , X h, ϕ 0,p ) défini pour tout t ≥ 0. En plus, soit ϕ t + (q, p) = ϕ t +,p (q) pour tout (q, p) ∈ T n × B 3 4 , nous vérifions que sup t≥0 |ϕ t + -Id| C 1 < C 0 ε, (2.5) pour une constante appropriée C 0 qui dépend de n, l, sup t∈R |∂ 2 p H t | C σ+2 et |∂ p h| C 1 . De la même manière, pour tout t ≤ 0, il existe une famille de C σ -tores KAM asymptotiques ϕ t -définie sur T n × B3 4 qui satisfont sup t≤0 |ϕ t --Id| C 1 < C 0 ε. (2.6) Les estimations précédentes, (2.5) et (2.6), sont les clés pour prouver que T n × B 1 2 ⊂ ϕ 0 ± (T n × B3 4 ). Ceci conclut la preuve. En effet, si nous nous rappelons des propriétés des C σtores KAM asymptotique, chaque (q, p) ∈ T n × B 1 2 donne lieu à des solutions biasymptotiquement quasipériodiques associées à H. Comparé au Théorème A, nous supposons ici une décroissance par rapport au temps plus forte. 
Cela est dû au contrôle sur les variables p obtenu par (2.5) et (2.6). Par contre, ce théorème est perturbatif parce que nous avons besoin que |ϕ t ± -Id| C 1 soit petit pour t = 0 et pas seulement pour |t| grand. 2 Résultats Principaux Cas Presque Intégrable Soit A ⊂ R n un ensemble fermé et E égal à T n × B ou T n , pour chaque fonction f : E × A → R nous définissons la norme suivante |f | L(A) = sup z∈B sup x,y∈A,x =y |f (z, x) -f (z, y)| |x -y| + |f | C 0 . Étant donné un paramètre positif réel σ ≥ 1 et A ⊂ R n , nous avons la définition qui suit. Définition. Soit D σ l'espace de fonctions f définies sur T n × A × R tel que f ∈ C(T n × A × R) et f t p ∈ C σ (T n ) pour tout (p, t) ∈ A × R. Comme dans le cas précédent, nous définissons la norme suivante. Pour tout f ∈ D σ et l ≥ 1, |f | σ,l,L(A) = sup (p,t)∈A×R |f t p | C σ (1 + |t| l ) + sup t∈R |f t | L(A) (1 + |t| l-1 ). (2.7) Soit f ∈ B σ (où B σ est l'espace défini précédemment). Pour chaque A ⊂ B fermé, évidemment f ∈ D σ et si |f | σ,l < ∞ alors |f | σ,l,L(A) ≤ |f | σ,l , où | • | σ,l est la norme définie par (2.4). Définition. Étant donné σ ≥ 1 et un entier positif k ≥ 0, nous définissons Dσ,k comme l'espace de fonctions f tel que f ∈ D σ+k , et ∂ i q f ∈ D σ+k-i pour tout 0 ≤ i ≤ k. De plus, pour tout f ∈ Dσ,k et l > 1, nous considérons la norme qui suit f σ,k,l,L(A) = max 0≤i≤k |∂ i q f | σ+k-i,l,L(A) , Dans ce cas aussi, le hamiltonien suivant et ses composants sont des fonctions définies sur T n × B r × R, pour certains r > 0. Ensuite, si une fonction f appartient à D σ , nous considérons que f satisfait la régularité démandée dans la définition précédente avec A remplacé par B r . Ici, nous considérons le cas où le hamiltonien intégrable h(p) est remplacé par un hamiltonien autonome ayant un large ensemble des tores invariants. Mouvements Biasymptotiques Convergeant vers une Dynamique Quasipériodique Étant donné σ ≥ 1, l > 1, 0 < ε < 1 et µ > 0. Nous considérons le hamiltonien suivant                                        H : T n × B 1 × R -→ R H(q, p, t) = h(p) + R(q, p) + f (q, p, t) f, ∂ p f ∈ Dσ,2 , |f | σ+2,0,L(B 1 ) + ∂ q f σ,1,l+2,L(B 1 ) + ∂ p f σ,2,l+1,L(B 1 ) < ε, D ⊂ B 1 , Leb(B 1 \D) < µ, R ∈ C 2 (T n × B 1 ) R(q, p) = ∂ p R(q, p) = 0 pour tout (q, p) ∈ T n × D, ∂ 2 p H t ∈ C σ+2 (T n × B 1 ) pour tout t ∈ R fixé ∂ i qp ∂ 2 p H ∈ C(T n × B 1 × R) pour tout 0 ≤ i ≤ 2. sup t∈R |∂ 2 p H t | C σ+2 < ∞, ( * D ) Théorème D. Soit H comme dans ( * D ). Alors il existe un hamiltonien dépendant du temps h tel que, si ε est suffisamment petit, nous avons l'existence d'un ensemble W ⊂ T n × B 1 de telle sorte que, pour tout (q, p) ∈ W, il existe p -, p + ∈ D et une solution biasymptotiquement quasipériodique g(t) associée à (X H , X h, ϕ 0,p ± ) tel que g(0) = (q, p). De plus, nous avons Leb ((T n × B 1 ) \W) ≤ 4µ. L'hypothèse sur le hamiltonien précédent H, concernant la partie autonome h + R, n'est pas artificielle. Pöschel, dans son travail [START_REF] Pöschel | Integrability of Hamiltonian systems on Cantor sets[END_REF], considère une petite perturbation H 1 et C ∞ d'un hamiltonien intégrable analytique réel non-dégénéré H 0 de la forme H : T n × B -→ R, H(q, p) = H 0 (p) + H 1 (q, p). L'auteur démontre l'existence d'un symplectomorphisme φ de classe C ∞ tel que Concernant la preuve du Théorème D, nous définissons D ⊂ D comme les éléments de D suffisamment loin du bord de B 1 . Ensuite, pour chaque p ∈ D , nous démontrons l'existence d'un C σ -tore KAM asymptotique ϕ t +,p (resp. 
ϕ t -,p ) associé à (X H , X h, ϕ 0,p ) défini pour tout t ≥ 0 (resp. t ≤ 0). En posant ϕ t ± (q, p) = ϕ t ±,p (q) pour tout (q, p) ∈ T n × D , nous vérifions que H • φ : T n × B -→ R, H(q, p) = h(p) + R(q, p), sup t≥0 |ϕ t + -Id| L,T n ×D < C 0 ε, sup t≤0 |ϕ t --Id| L,T n ×D < C 0 ε (2.8) pour une constante appropriée C 0 qui dépend de n, l, sup t∈R |∂ 2 p H t | C σ+2 et |∂ p h| L(D) . Maintenant, soit W le suivant sous-ensemble de T n × B 1 W = ϕ 0 + (T n × D ) ∩ ϕ 0 -(T n × D ) . Donc, la conclusion du théorème précédent est due à (2.8) et Leb(B 1 \D) < µ. Applications à la Mécanique Céleste Dans cette partie, nous étudions l'existence d'orbites pour un système planétaire (problème à trois corps dans le plan) perturbé par une comète donnée, venant et revenant à l'infini asymptotiquement le long d'une orbite képlérienne hyperbolique. Sur un espace de phase approprié, nous considérons le hamiltonien H = H 0 + H c , où H 0 joue le rôle du hamiltonien du problème à trois corps dans le plan et H c l'interaction avec la comète donnée. Malheureusement, la perturbation H c ne satisfait pas de bonnes propriétés de décroissance lorsque t → +∞. Dans une norme appropriée, |H t c | ∼ 1 t l avec l = 2, alors que nous avons besoin de l > 2 pour appliquer le Théorème A. Pour résoudre ce problème, nous sommes obligés de démontrer un autre théorème abstrait qui est une version plus faible du Théorème A. Nous introduisons les définitions de C σ -cylindre faiblement asymptotique et de solution faiblement asymptotiquement quasipériodique. À cette fin, nous définissons B ⊂ R n+m comme une boule centrée à l'origine et nous notons q ∈ T n × R m et p ∈ B. Soit P égal à T n × R m × B ou à un sous-ensemble ouvert de R 2(n+m) et J = [1, +∞) ⊂ R. Étant donné σ ≥ 0 et un entier positif k ≥ 0, nous considérons les champs de vecteurs dépendant du temps X t , X t 0 de classe C σ+k sur P, pour tout t ∈ J, un plongement ϕ 0 de T n × R m sur P de classe C σ et un champ de vecteurs dépendant du temps γ t de classe C σ sur T n × R m , pour tout t ∈ J, tel que lim t→+∞ |X t -X t 0 | C σ+k = 0, (2.9) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)(ω + γ(q, t)) pour tout (q, t) ∈ T n × R m × J, (2.10) lim t→+∞ |γ t | C σ = 0, (2.11) où ω = (ω, 0) ∈ R n+m avec ω ∈ R n . En d'autres termes, X t -X t 0 converge vers zéro lorsque t → +∞. De plus, le champ de vecteurs X 0 possède un cylindre invariant ϕ 0 et la restriction de X 0 est conjuguée au champ de vecteurs non-autonome ω + γ, qui est un champ de vecteurs dépendant du temps convergeant vers ω lorsque t → +∞. Définition 2.3. Nous supposons que (X, X 0 , ϕ 0 ) satisfont (2.9), (2.10) et (2.11). Une famille de C σ -plongements ϕ t : T n × R m → P est un C σ -cylindre faiblement asymptotique associé à (X, X 0 , ϕ 0 ) s'il existe un champ de vecteurs dépendant du temps Γ t de classe C σ sur T n × R m , pour tout t ∈ J, tel que lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, (2.12) X(ϕ(q, t), t) = ∂ q ϕ(q, t)(ω + Γ(q, t)) + ∂ t ϕ(q, t), (2.13) lim t→+∞ |Γ t | C σ = 0, (2.14) 2.3 Applications à la Mécanique Céleste pour tout (q, t) ∈ T n × R m × J. De plus, ϕ t est lagrangien si ϕ t (T n × R m ) est lagrangien pour tout t. Cette définition est une version plus générale de celle du C σ -tore KAM asymptotique (Définition 1.3). En effet, en prenant m = 0, γ ≡ 0 et Γ ≡ 0, nous obtenons la Définition 1.3. Par ailleurs, nous observons que si m = 0 et |Γ t | C σ+1 est intégrable sur J, alors, grâce au Corollaire A, on peut prouver l'existence d'un C σ -tore KAM asymptotique associé à (X, X 0 , ϕ 0 ). 
Here, P is equal to T n × R m × B or to an open subset of R 2(n+m) . On the other hand, we are interested in a family of embedded cylinders and not in a family of embedded tori. This is due to the example in celestial mechanics that we will study in the next section. Unlike the definition of C σ -asymptotic KAM torus, here we look for families ϕ t of embeddings of class C σ defined for all t ∈ J and not only for t large. However, this is not a substantial difference. We will see that, as for C σ -asymptotic KAM tori, if ϕ t is a C σ -weakly asymptotic cylinder defined for all t sufficiently large, then we can extend the set of definition to all t ∈ R.

We have a series of properties in common with C σ -asymptotic KAM tori. In this case also, we can rewrite (2.13) in terms of the flow of X. Indeed, let ψ t t 0 ,X be the flow at time t with initial time t 0 of X and ψ t t 0 ,ω+Γ the flow at time t with initial time t 0 of ω + Γ. We assume that ψ t t 0 ,X and ψ t t 0 ,ω+Γ are defined for all t , t 0 ∈ J. Then, (2.13) is equivalent to ψ t t 0 ,X • ϕ t 0 = ϕ t • ψ t t 0 ,ω+Γ (2.15) for all t , t 0 ∈ J. In particular, given any embedding φ of T n × R m into P, ϕ t = ψ t t 0 ,X • φ • ψ t 0 t,ω+Γ is a family of embeddings satisfying (2.15) and hence (2.13). If ψ t t 0 ,X is defined for all t , t 0 ∈ R and if there exists a C σ -weakly asymptotic cylinder ϕ t defined for all t ≥ 1, then we can extend the set of definition to all t ∈ R. More precisely, we define φ t = ϕ t for all t ≥ 1 and φ t = ψ t 1,X • ϕ 1 • ψ 1 t,ω+Γ for all t ≤ 1. We observe that φ t is a C σ -weakly asymptotic cylinder defined for all t ∈ R.

Now, in order to give some information concerning the dynamics associated to a C σ -weakly asymptotic cylinder, we introduce the definition of weakly asymptotically quasiperiodic solution.

Définition 2.4. We assume that (X, X 0 , ϕ 0 ) satisfy (2.9), (2.10) and (2.11). An integral curve g(t) of X is a weakly asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ) if there exist a time-dependent vector field Γ : T n × R m × J → R n+m and q ∈ T n × R m such that lim t→+∞ |g(t) -ϕ 0 • ψ t t 0 ,ω+Γ (q)| = 0.

We observe that this kind of orbits of X do not converge to the motions associated to X 0 on ϕ 0 , but to the dynamics on ϕ 0 generated by the time-dependent vector field ω + Γ, whence the term weakly. Obviously, taking m = 0, γ ≡ 0 and Γ ≡ 0, we obtain the definition of asymptotically quasiperiodic solution (Définition 1.2). Similarly to Définition 2.3, if m = 0 and |Γ t | C σ+1 is integrable on J, then one can prove the existence of asymptotically quasiperiodic solutions associated to (X, X 0 , ϕ 0 ). As one may expect, we have the following proposition, whose proof is similar to that of Proposition 1.4.

Proposition 2.1. Let ϕ t be a C σ -weakly asymptotic cylinder associated to (X, X 0 , ϕ 0 ). Then, for all q ∈ T n × R m and t 0 ∈ J, g(t) = ψ t t 0 ,X • ϕ t 0 (q) is a weakly asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ).

This proposition, and in particular these orbits, plays an important role, especially in the part devoted to the Hamiltonian of the planar three-body problem plus comet. We divide what follows into two subsections.
The first is devoted to the statement of a weaker version of Theorem A, while the second is dedicated to the applications in celestial mechanics.

The Abstract Theorem

We begin with the introduction of the following space of functions. Given a positive real parameter σ ≥ 0, we have the following definition. Définition. Let S σ be the space of functions f defined on T n × R m × B × J such that f t ∈ C σ (T n × R m × B) for all fixed t ∈ J and ∂ i (q,p) f ∈ C(T n × R m × B × J) for all 0 ≤ i ≤ [σ]. In the previous definition, ∂ i (q,p) stands for the partial derivatives with respect to (q, p) of order i and [σ] is the integer part of σ. By convention, ∂ 0 (q,p) f = f . As usual, we also use this notation for functions defined on T n × R m × J; this will be specified by the context. Unlike Theorem A, we consider functions that are continuous and whose partial derivatives with respect to (q, p) are also continuous up to order [σ]. Moreover, for all f ∈ S σ and a positive parameter l > 0, we introduce the following norm |f | σ,l = sup t∈J |f t | C σ t l .

Now, let s, λ, ρ, β and α be positive parameters satisfying the following conditions 1 ≤ ρ < λ < s, s > max α α -1 , λ + α β -1 , 1 < β < 2, α > 1, λ > 2β 2 -β , ρ < λ -β β 2 . (# E ) Given ω ∈ R n and positive real parameters δ > 0 and ε > 0, we consider the following time-dependent Hamiltonian H : T n × R m × B × J -→ R H(q, p, t) = ω • p + a(q, t) + (b 0 (q, t) + b r (q, t)) b(q,t) •p + m(q, p, t) • p 2 a, b 0 , b r , ∂ 2 p H ∈ S s+1 |b 0 | 2,1 < δ, |b 0 | s+1,1 < ∞, |a| λ+1,0 + |∂ q a| λ,2 < ε, |b r | λ+1,1 < ε, |a| s+1,0 + |∂ q a| s,2 < ∞, |b r | s+1,1 < ∞, |∂ 2 p H| s+1,0 < ∞. ( * E ) Let ϕ 0 be the following trivial embedding ϕ 0 : T n × R m → T n × R m × B, ϕ 0 (q) = (q, 0). Moreover, we consider the Hamiltonian h : T n × R m × B × J → R such that h(q, p, t) = ω • p + m(q, p, t) • p 2 . (2.16)

Théorème E. Let H be as in ( * E ). Then, for δ sufficiently small and for ε small enough with respect to δ, there exists a C ρ -weakly asymptotic cylinder associated to (X H , X h, ϕ 0 ).

Here, b 0 plays the role of γ in Définition 2.3. Moreover, we observe that the perturbative terms ∂ q a and b r (in suitable norms) decay in time like 1/t 2 and 1/t, respectively. Even taking b 0 ≡ 0 and m = 0, the Hamiltonian H does not satisfy the hypotheses of Theorem A. The idea of finding solutions localized in the phase space for these kinds of systems gives rise to the previous theorem. However, the price to pay is a weaker conclusion than that of Theorem A, because we lose some information on the dynamics of the orbits we find. Moreover, because of the slower decay in time of the perturbative terms, we need a more regular Hamiltonian H and smallness conditions on the perturbative terms. The proof relies on a version of the Nash-Moser theorem due to Zehnder [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF]. The conditions (# E ) are a consequence of this theorem (Theorem 6.1). For the sake of clarity, we will give an example of parameters satisfying (# E ).
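Pending that discussion, and only as a quick illustration under one natural reading of the flattened constraints in (# E ), namely s > max{α/(α -1), λ + α/(β -1)}, λ > 2β/(2 -β) and ρ < (λ -β)/β 2 , an admissible choice of parameters appears to be

α = 2, β = 3/2, ρ = 2, λ = 7, s = 20.

Indeed 1 ≤ ρ < λ < s, 1 < β < 2, α > 1, 2β/(2 -β) = 6 < λ, (λ -β)/β 2 = 5.5/2.25 > ρ, and max{α/(α -1), λ + α/(β -1)} = max{2, 11} = 11 < s.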
Concerning the proof of Théorème E, the associated linear problem consists in solving the following equation for the unknown κ : T n × R m × J → R n+m ∂ q κ(q, t) (ω + f (q, t)) + ∂ t κ(q, t) + g(q, t)κ(q, t) = z(q, t) f, g, z ∈ S σ , |f | 1,1 ≤ µ, |g| 1,1 ≤ µ, |z| σ,2 < ∞, |f | σ,1 < ∞, |g| σ,1 < ∞ (HE E ) with ω = (ω, 0) ∈ R n+m . The functions f : T n ×R m ×J -→ R n , z : T n ×R m ×J -→ R n and g : T n ×R m ×J -→ M n+m are given, where M n+m is the set of matrices of dimension (n + m). If µ = 0, the equation takes the much simpler form ∂ q κ(q, t)ω + ∂ t κ(q, t) = z(q, t), z ∈ S σ , |z| σ,2 < ∞. When m = 0, this is the linear problem solved in the proof of Theorem A. However, also in the case m ≠ 0, following the lines of the proof of Theorem A, a solution of the previous equation exists and satisfies |κ| σ,1 ≤ |z| σ,2 . Going back to the general case µ ≥ 0, (HE E ) looks more complicated than the latter. We have to assume µ sufficiently small in order to solve it. More precisely, if µ is small enough, we find a solution of (HE E ) by integration thanks to a suitable change of coordinates φ (which consists in straightening the flow on the cylinder). The most laborious part consists in finding a good estimate for the solution, which satisfies |κ| σ,1 ≤ C(σ) |z| σ,2 / (1 -c κ σ µ) + C(σ) (|f | σ,1 + |g| σ,1 ) / (1 -c κ σ µ) 2 |z| 1,2 for suitable constants C(σ) and c κ σ depending on σ. In this case too, the solution κ is less regular (in terms of decay in time) than z. As in Theorem A, we solve this problem in the nonlinear part: the solutions of the homological equations are always multiplied by other terms satisfying good decay properties, and this multiplication allows us to recover what we lose in terms of decay in time. Unfortunately, we have a loss of derivatives because of the new terms γ and Γ (both are equal to zero in Theorem A). For this reason, we are not able to prove this theorem by the implicit function theorem, and we must resort to one of the variants of the Nash-Moser theorem.

This proof does not work if we deal with real analytic Hamiltonians, because the change of coordinates φ, used to solve the homological equation, depends on the flow of ω + f . The term f is responsible for a loss of regularity for the solution of (HE E ), which adds to the loss of decay in time, so that our proof has no hope of working in that setting. This proof does not work in the case of C ∞ Hamiltonians either, because the larger σ is, the smaller we have to take µ in order to solve (HE E ). We conclude this part with a remark concerning the smallness assumption on the perturbative terms. It seems reasonable to think that this assumption is not essential and that, arguing as in Theorem A, one should be able to remove it. On the other hand, the example of a planetary system perturbed by a given comet is a perturbative problem; therefore, Théorème E suffices for the application.

The Three-Body Problem plus Comet

We consider three points of fixed masses m 0 , m 1 and m 2 undergoing gravitational attraction in the plane, and a comet of fixed mass m c . The comet comes from and returns to infinity along a hyperbolic Keplerian orbit.
We assume that the motion of the comet is a given function c(t) and that only the planetary system is influenced by the comet. We assume |c(t)| → ∞ and d/dt |c(t)| → v > 0 as t → +∞. If the comet is on a hyperbolic Keplerian orbit, d/dt c(t) itself has a limit, but we will not use this stronger assumption. Given 0 < ε < 1 and J = [1, +∞), the phase space is the space of ((x i , y i ) 0≤i≤2 , t) ∈ (R 2 × R 2 * ) 3 × J such that, for all 0 ≤ i < j ≤ 2, x i ≠ x j and |x i | / |c(t)| < ε, of linear momentum covectors (y 0 , y 1 , y 2 ) and position vectors (x 0 , x 1 , x 2 ) of each body. The Hamiltonian of the planar three-body problem plus comet (P3BP+C for short) is H(x, y, t) = Σ 2 i=0 |y i | 2 / (2m i ) -Σ 0≤i<j≤2 G m i m j / |x i -x j | -Σ 2 i=0 G m i m c / |x i -c(t)|, where the first two sums form H 0 (x, y), the last sum gives H c (x, t), and G is the universal gravitational constant, which we can assume to be equal to 1. As mentioned previously, H is the sum of the Hamiltonian of the planar three-body problem H 0 and of the Hamiltonian H c , which is responsible for the interaction with the comet.

Let ϕ 0 be a 1-parameter family of invariant tori for H 0 supporting quasiperiodic dynamics with four frequencies, and ψ t t 0 ,H the flow at time t with initial time t 0 of H.

Théorème F. Let H be the Hamiltonian of P3BP+C. Then, if |c(1)| and v are sufficiently large with respect to ε, for ε small enough, there exists an open subset W ⊂ (R 2 × R 2 * ) 3 such that, for all x ∈ W, ψ t 1,H (x) is a weakly asymptotically quasiperiodic solution associated to (X H , X H 0 , ϕ 0 ).

Concerning ϕ 0 , in 1963 Arnold proved the existence of quasiperiodic solutions for the Hamiltonian of the planar three-body problem [Arn63b]. Here, we follow Féjoz [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF], who provides more general solutions. In a rotating frame, the author proves the existence of quasiperiodic orbits with three frequencies for the Hamiltonian of the planar three-body problem. Before the symplectic reduction by the symmetry of rotations, these quasiperiodic motions have one additional frequency, which is the angular velocity of the simultaneous rotation of the three ellipses. Moreover, before the symplectic reduction by the symmetry of translations, each of these invariant tori gives rise to a 1-parameter family of invariant tori parametrized by the centre of mass of the planetary system.

The proof of Théorème F relies on [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF] and Théorème E. The first part is devoted to the Hamiltonian H 0 . We begin by introducing a linear symplectic change of coordinates φ 0 . In these new variables {(X i , Y i )} i=0,1,2 , we can split the Hamiltonian H 0 of the planar three-body problem as follows H 0 • φ 0 (X, Y ) = |Y 0 | 2 / (2M) + K(X 1 , X 2 , Y 1 , Y 2 ), (2.17) where K is the Hamiltonian of the planar three-body problem after the reduction by the symmetry of translations, X 0 is the centre of mass of the planetary system, Y 0 is the linear momentum of the planetary system and M = m 0 + m 1 + m 2 . As mentioned previously, K has an invariant torus supporting quasiperiodic dynamics with four frequencies. Thanks to (2.17), this invariant torus gives rise to a 1-parameter family of invariant tori parametrized by the centre of mass of the planetary system.
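For the reader's convenience, here is the elementary computation behind this last claim; it is only a sketch and uses nothing beyond (2.17). Since H 0 • φ 0 does not depend on X 0 , Hamilton's equations give d/dt Y 0 = -∂ X 0 (H 0 • φ 0 ) = 0 and d/dt X 0 = ∂ Y 0 (H 0 • φ 0 ) = Y 0 /M. Hence {Y 0 = 0} is invariant, X 0 is constant on it, and there the dynamics reduces to that of K. If T is an invariant torus of K carrying quasiperiodic dynamics with four frequencies, then, for each value x 0 of the centre of mass, the set {X 0 = x 0 , Y 0 = 0} × T is invariant for H 0 • φ 0 , which is precisely the 1-parameter family of invariant tori mentioned above.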
The second part of the proof is devoted to the perturbation H c . We prove that in a certain subset of the phase space, where we expect the orbits of interest to take place, the perturbation H c satisfies good decay properties as t → +∞, but not all the hypotheses of Théorème E. In order to solve this problem, we introduce a suitable smooth extension H of the Hamiltonian H and we prove the existence of a C 1 -weakly asymptotic cylinder for this extension. We then conclude the proof of Théorème F by verifying the existence of an open subset of initial points giving rise to weakly asymptotically quasiperiodic solutions for the vector field of this extension, which are also orbits of X H .

Let ω ∈ R 4 be the frequency of a quasiperiodic solution for the Hamiltonian of the planar three-body problem. This result (Théorème F) guarantees the existence of orbits along which the centre of mass of the planetary system is attracted by the comet with a velocity that is asymptotically zero as time tends to +∞. Moreover, in a reference frame attached to the centre of mass of the planetary system, the motions of the planets converge to certain dynamics that are conjugated to orbits generated by a small time-dependent perturbation of the constant vector field ω on the torus.

Asymptotic Motions Converging to Arbitrary Dynamics

We consider time-dependent vector fields converging with exponential speed in time to vector fields having an invariant torus supporting arbitrary dynamics. We prove results that are variations of the theorem of Canadell-de la Llave in the particular case of Hamiltonian vector fields and of vector fields on the torus. Let B ⊂ R n be a ball centred at the origin and P equal to T n × B or T n . For υ > 0, we recall that J υ = [υ, +∞) ⊂ R. Let us begin with the definition of C σ -asymptotic torus. Given σ ≥ 0, υ ≥ 0 and a positive integer k ≥ 0, we consider time-dependent vector fields X t and X t 0 of class C σ+k on P, for all t ∈ J υ , an embedding ϕ 0 of T n into P of class C σ and a vector field W on the torus of class C σ such that lim t→+∞ |X t -X t 0 | C σ+k = 0, (2.18) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)W (q) for all (q, t) ∈ T n × J υ . (2.19) This means that X t -X t 0 converges to zero as t → +∞. Moreover, the vector field X 0 has an invariant torus ϕ 0 supporting dynamics generated by the autonomous vector field W .

Définition 2.5. We assume that (X, X 0 , ϕ 0 , W ) satisfy (2.18) and (2.19). A family of C σ -embeddings ϕ t : T n → P is a C σ -asymptotic torus associated to (X, X 0 , ϕ 0 , W ) if there exists υ ≥ υ ≥ 0 such that lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, (2.20) X(ϕ(q, t), t) = ∂ q ϕ(q, t)W (q) + ∂ t ϕ(q, t), (2.21) for all (q, t) ∈ T n × J υ . When dimP = 2n, we say that ϕ t is Lagrangian if ϕ t (T n ) is Lagrangian for all t ∈ J υ .

Unlike the definition of C σ -asymptotic KAM torus (Définition 1.3), here we do not assume W constant. Indeed, if W (q) ≡ const, we recover the definition of C σ -asymptotic KAM torus. Concerning some properties of C σ -asymptotic tori, if X is complete then, in the same way as for C σ -asymptotic KAM tori, we can rewrite (2.21) in terms of the flow of X.
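Explicitly, arguing exactly as in Proposition 1.1 below and assuming that the flows involved are defined for all t , t 0 ∈ J υ , equation (2.21) amounts to the conjugacy ψ t t 0 ,X • ϕ t 0 = ϕ t • ψ t t 0 ,W for all t , t 0 ∈ J υ , where ψ t t 0 ,X is the flow at time t with initial time t 0 of X and ψ t t 0 ,W that of the autonomous vector field W on T n . This is only a restatement of (2.21) and is not used in what follows.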
Moreover, (2.21) is trivial and, if ϕ t is a C σ -asymptotic torus defined for all t ≥ υ , then we can extend the set of definition to all t ∈ R. Furthermore, as one may expect, concerning the dynamics in the projection onto the phase space P, the trajectories converge asymptotically in time to the orbits of X 0 generated by W on the invariant torus ϕ 0 . Given σ ≥ 0, υ ≥ 0 and a positive integer k ≥ 0, we recall the following definition. Definition. Let Sυ σ,k be the space of functions f defined on T n × B × J υ such that, for all t ∈ J υ , f t ∈ C σ+k (T n × B) and ∂ i (q,p) f ∈ C(T n × B × J υ ) for all 0 ≤ i ≤ k. Now, we consider the following Hamiltonian h : T n × B × J 0 → R such that, for all (q, t) ∈ T n × J 0 , h(q, 0, t) = c, ∂ p h(q, 0, t) = W (q) (2.22) for some c ∈ R and W ∈ C σ+2 (T n ). Let K W be the set of Hamiltonians h : T n × B × J 0 → R satisfying (2.22). We observe that, for all h ∈ K W , the trivial embedding ϕ 0 given by ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0), is an invariant torus for X h and the restricted vector field is W . Given σ ≥ 1 and λ ≥ 0, we consider the following Hamiltonian H H : T n × B × J 0 → R H(q, p, t) = h(q, p, t) + f (q, p, t) h ∈ K W , W ∈ C σ+2 (T n ), f 0 , ∂ p f 0 , ∂ 2 p H ∈ S0 σ,2 , |f 0 | 0 σ+2,0 + |∂ q f 0 | 0 σ+1,λ < ∞, |∂ p f 0 | 0 σ+2,λ < ∞, |∂ 2 p H| 0 σ+2,0 < ∞. ( * G )

Théorème G. Let H be as in ( * G ). Then, there exist a Hamiltonian h ∈ K W and a constant C(σ), depending on σ, such that if λ > C(σ)|∂ q W | C 0 , (λ) there exists a Lagrangian C σ -asymptotic torus associated to (X H , X h, ϕ 0 , W ).

Unlike the theorem of Canadell-de la Llave, here the vector field W is not constant. Moreover, we observe that if W ≡ const we obtain λ > 0, which is the hypothesis of Canadell-de la Llave in the case of Hamiltonian systems. Obviously, Theorem A shows that in this case we do not need exponential decay. We prove the existence of a C σ -asymptotic torus ϕ t of the form ϕ t (q) = (q + u t (q), v t (q)) for all q ∈ T n and t sufficiently large, where id + u t is a diffeomorphism of the torus for each fixed t. Concerning u and v, we have |u t | C σ ≤ Ce -λt , |v t | C σ ≤ Ce -λt , for all t large enough and for a suitable constant C. The proof is essentially the same as that of Theorem A. The main difference lies in the solution of the associated linear problem, which is more complicated to solve in this case. Given σ ≥ 1, λ > 0 and υ ≥ 0, it consists in solving the following equation for the unknown κ : T n × J υ → R n ∂ q κ(q, t)W (q) + ∂ t κ(q, t) + ∂ q W (q)κ(q, t) = z(q, t) W ∈ C σ+1 (T n ), z ∈ Sυ σ,0 , sup t∈Jυ |z t | C σ e λt < ∞. (HE G ) When W (q) ≡ W ∈ R n is constant, the previous expression takes the following simpler form ∂ q κ(q, t)W + ∂ t κ(q, t) = z(q, t) z ∈ Sυ σ,0 , sup t∈Jυ |z t | C σ e λt < ∞. This system is a particular case of (HE A ) and we already know how to solve it. As far as the general case is concerned, we can find a solution of (HE G ) by integration thanks to a suitable change of coordinates, depending on the flow of W , which straightens the dynamics on the torus.
More precisely, we prove the existence of a constant c κ σ , depending on σ, such that if λ > c κ σ |∂ q W | C 0 , then a solution of (HE G ) exists and sup t∈Jυ |κ t | C σ e λt ≤ C(σ) sup t∈Jυ |z t | C σ e λt / (λ -c κ σ |∂ q W | C 0 ) + C(σ) [ |∂ q W | C σ / (λ -c κ σ |∂ q W | C 0 ) 2 + |∂ q W | C 1 |∂ q W | C σ-1 / (λ -c κ σ |∂ q W | C 0 ) 3 ] sup t∈Jυ |z t | C σ e λt for a constant C(σ) depending on σ. Unlike Theorem A, the solution κ has the same regularity as z. As for Theorem A, we do not need the perturbation to be small for all t ∈ J 0 . We rather look for a C σ -asymptotic torus associated to (X H , X h, ϕ 0 , W ) defined for all t sufficiently large such that |∂ q f t 0 | C σ+1 and |∂ p f t 0 | C σ+2 are sufficiently small. As for Théorème E, this proof does not work in the analytic setting. Indeed, the change of coordinates used in the solution of the homological equation (HE G ) depends on the flow of W . This proof does not work in the case of C ∞ Hamiltonians either, because the larger σ is, the larger we have to take λ in order to solve (HE G ).

We also have a result concerning time-dependent perturbations of autonomous vector fields on the torus. Given σ ≥ 1, let Z be a non-autonomous vector field on T n × J 0 of the form Z(q, t) = W (q) + P (q, t) W ∈ C σ+1 (T n ), P ∈ S0 σ,1 , |P | 0 σ+1,λ < ∞. (Z C )

Corollaire C. Let Z be as in (Z C ). Then, there exists a constant C(σ), depending on σ, such that if λ > C(σ)|∂ q W | C 0 , there exists a C σ -asymptotic torus ψ t associated to (Z, W, Id, W ).

Part II
Asymptotic Motions Converging to Quasiperiodic Dynamics

This part is devoted to a series of results which improve those of Fortunati-Wiggins [START_REF] Fortunati | Persistence of Diophantine flows for quadratic nearly integrable Hamiltonians under slowly decaying aperiodic time dependence[END_REF] and Canadell-de la Llave [START_REF] Canadell | KAM tori and whiskered invariant tori for non-autonomous systems[END_REF] in the case of time-dependent Hamiltonian vector fields and time-dependent vector fields on the torus. The decision to work with these systems has allowed us to relax the exponential decay in time in view of the possible applications in celestial mechanics. We do not need the perturbation to be small for all times but just for times large enough. This led us to prove the existence of a C σ -asymptotic KAM torus defined for all t sufficiently large. This part is divided into two chapters. First, we prove the existence of a C σ -asymptotic KAM torus for finitely differentiable time-dependent Hamiltonian vector fields and for time-dependent vector fields on the torus. Then, in the second chapter, we prove the same results for real analytic systems.

Finitely Differentiable Case

This chapter is divided into four sections. We begin with a part where we recall and explain the definition of C σ -asymptotic KAM torus (Section 3.1). We continue with a section (Section 3.2) dedicated to the main results of this chapter (Theorem A and Corollary A). The last sections contain the proofs of these results (Sections 3.3 and 3.4).

C σ -asymptotic KAM torus

We begin with the definition of C σ -asymptotic KAM torus. Let B ⊂ R n be a ball centred at the origin, P equal to T n × B or T n and, for all υ ≥ 0, J υ = [υ, +∞) ⊂ R.
Given σ ≥ 0, υ ≥ 0 and a positive integer k ≥ 0, we consider time-dependent vector fields X t and X t 0 of class C σ+k on P, for all t ∈ J υ , and an embedding ϕ 0 from T n to P of class C σ such that lim t→+∞ |X t -X t 0 | C σ+k = 0, (3.1) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)ω for all (q, t) ∈ T n × J υ , (3.2) where ω ∈ R n . Definition (Définition 1.3). We assume that (X, X 0 , ϕ 0 ) satisfy (3.1) and (3.2). A family of C σ embeddings ϕ t : T n → P is a C σ -asymptotic KAM torus associated to (X, X 0 , ϕ 0 ) if there exists υ ≥ υ ≥ 0 such that lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, (3.3) X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω + ∂ t ϕ(q, t), (3.4) 3 Finitely Differentiable Case for all (q, t) ∈ T n × J υ . When dimP = 2n, then we say that ϕ t is Lagrangian if ϕ t (T n ) is Lagrangian for all t. For all q ∈ T n and t, t 0 ∈ J υ , let ψ t t 0 ,X be the flow at time t with initial time t 0 of X and ψ t t 0 ,ω (q) = q + ω(t -t 0 ). We assume that ψ t t 0 ,X is defined for all t, t 0 ∈ J υ . Then, we can rewrite (3.4) in terms of the flow of X. That is to say, that (3.4) is equivalent to ψ t t 0 ,X • ϕ t 0 (q) = ϕ t • ψ t t 0 ,ω (q) (3.5) for all q ∈ T n and t, t 0 ∈ J υ . For the sake of clarity, we prove it. Proposition (Proposition 1.1). If the flow ψ t t 0 ,X is defined for all t, t 0 ∈ J υ , then (3.4) is equivalent to (3.5). Proof. In this proof, we denote the time dependence by indexes. We assume (3.4) and we prove (3.5). For fixed t 0 , let ϕ -t 0 be the inverse map of ϕ t 0 . It suffices to show that ψ t t 0 ,X and ϕ t • ψ t t 0 ,ω • ϕ -t 0 verify the same differential equation. For all x ∈ ϕ t 0 (T n ) d dt ϕ t • ψ t t 0 ,ω • ϕ -t 0 (x) = ∂ q ϕ t ψ t t 0 ,ω ϕ -t 0 (x) ψt t 0 ,ω ϕ -t 0 (x) + ∂ t ϕ t ψ t t 0 ,ω ϕ -t 0 (x) = ∂ q ϕ t ψ t t 0 ,ω ϕ -t 0 (x) ω + ∂ t ϕ t ψ t t 0 ,ω ϕ -t 0 (x) = X t • ϕ t • ψ t t 0 ,ω • ϕ -t 0 (x), where ψt t 0 ,ω stands for the derivative with respect to t of ψ t t 0 ,ω , it is obviously equal to ω. The last equality is a consequence of (3.4). This concludes the first part of the proof. Now, we assume (3.5) and we prove (3.4). We fix t 0 ∈ J υ , for all t ∈ J υ and x ∈ ϕ t 0 (T n ) d dt ϕ t • ψ t t 0 ,ω • ϕ -t 0 (x) = ψt t 0 ,X (x) = X t • ψ t t 0 ,X (x) = X t • ϕ t • ψ t t 0 ,ω • ϕ -t 0 (x). On the other hand, by the chain rule d dt ϕ t • ψ t t 0 ,ω • ϕ -t 0 (x) = ∂ q ϕ t ψ t t 0 ,ω ϕ -t 0 (x) ω + ∂ t ϕ t ψ t t 0 ,ω ϕ -t 0 (x) . We know that ϕ t 0 is an embedding, then there exists q ∈ T n such that ϕ t 0 (q) = x. Thanks to the above equations X t • ϕ t • ψ t t 0 ,ω (q) = ∂ q ϕ t ψ t t 0 ,ω (q) ω + ∂ t ϕ t ψ t t 0 ,ω (q) for all q ∈ T n and for all t ∈ J υ . Letting t = t 0 we have the claim. We observe that (3.4) is trivial and if ϕ t is a C σ -asymptotic KAM torus defined for all t ≥ υ , then we can extend the set of definition for all t ∈ R. Concerning the dynamics associated to a C σ -asymptotic KAM torus, we recall the definition of asymptotically quasiperiodic solution and discuss some properties of these motions. 3.1 C σ -asymptotic KAM torus Definition 3.1. We assume that (X, X 0 , ϕ 0 ) satisfy (3.1) and (3.2). An integral curve g(t) of X is an asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ) if there exists q ∈ T n in such a way that lim t→+∞ |g(t) -ϕ 0 • ψ t t 0 ,ω (q)| = 0. As mentioned in the introduction of this work, the following proposition provides an evident link between a C σ -asymptotic KAM torus and the above-mentioned motions. Proposition 3.1. Let ϕ t be a C σ -asymptotic KAM torus associated to (X, X 0 , ϕ 0 ). 
Then, for all q ∈ T n and t 0 ∈ J υ , g(t) = ψ t t 0 ,X • ϕ t 0 (q) is an asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ). Let X, X 0 and ϕ 0 in such a way that (X, X 0 , ϕ 0 ) satisfy (3.1) and (3.2). We consider the case when X and X 0 are Hamiltonian vector fields. Letting P = T n × B, we assume that, for all t ∈ J υ , ϕ t is a C σ -asymptotic KAM torus associated to (X, X 0 , ϕ 0 ). We prove the following property mentioned in the introduction of this thesis. Canadell-de la Llave prove it in the discrete case. Here, we prove it in the continuous case. Proposition 3.2. Let ϕ t be a C σ -asymptotic KAM torus associated to (X, X 0 , ϕ 0 ). If ϕ 0 (T n ) is Lagrangian, then ϕ t (T n ) is Lagrangian for all t ∈ J υ . Proof. Let α = dp ∧ dq be the standard symplectic form on (q, p) ∈ T n × B. For all fixed t, t 0 ∈ J υ , the map ψ t t 0 ,X is a symplectomorphism. This means that (ψ t t 0 ,X ) * α = α for all fixed t, t 0 ∈ J υ . Since (3.5), for all t 0 ∈ J υ and t ≥ 0 ψ t 0 +t t 0 ,X • ϕ t 0 = ϕ t 0 +t • ψ t 0 +t t 0 ,ω (3.6) and, taking the pull-back with respect to the standard form α on both sides of the latter, we obtain (ϕ t 0 ) * (ψ t 0 +t t 0 ,X ) * α = (ψ t 0 +t t 0 ,ω ) * (ϕ t 0 +t ) * α. We know that ψ t 0 +t t 0 ,X is symplectic then, replacing (ψ t 0 +t t 0 ,X ) * α = α on the left hand side of the above equation, we have (ϕ t 0 ) * α = (ψ t 0 +t t 0 ,ω ) * (ϕ t 0 +t ) * α. We want to prove that for all q ∈ T n , ((ϕ t 0 ) * α) q = 0, where ((ϕ t 0 ) * α) q stands for the symplectic form calculated on q ∈ T n . The idea consists in verifying that, for all fixed q ∈ T n , the limit when t → +∞ on the right-hand side of the above equation converges to zero. Then, taking the limit for t → +∞ on both sides of the latter, we have the claim. We introduce the following notation ϕ t (q) = (U t (q), V t (q)), ϕ 0 (q) = (U 0 (q), V 0 (q)) for suitable families of functions U t , V t : T n → R n and U 0 , V 0 : T n → R n . One can see that, for all q ∈ T n (ψ t 0 +t t 0 ,ω ) * (ϕ t 0 +t ) * α q = 1≤i<j≤n α t 0 +t ij (q)dq i ∧ dq j , where for all 1 ≤ i < j ≤ n α t 0 +t i,j (q) = ∂ q i V t 0 +t • ∂ q j U t 0 +t -∂ q j V t 0 +t • ∂ q i U t 0 +t • ψ t 0 +t t 0 ,ω (q). We observe that, for all q ∈ T n and for all fixed 1 ≤ i < j ≤ n ∂ q i V 0 (q) • ∂ q j U 0 (q) -∂ q j V 0 (q) • ∂ q i U 0 (q) = 0 because ϕ 0 is Lagrangian. Then, α t 0 +t i,j C 0 ≤ ∂ q i V t 0 +t • ∂ q j U t 0 +t -∂ q j V t 0 +t • ∂ q i U t 0 +t C 0 = ∂ q i V t 0 +t • ∂ q j U t 0 +t -∂ q j V t 0 +t • ∂ q i U t 0 +t -∂ q i V 0 • ∂ q j U 0 -∂ q j V 0 • ∂ q i U 0 C 0 ≤ ∂ q i V t 0 +t • ∂ q j U t 0 +t -∂ q i V 0 • ∂ q j U 0 C 0 + ∂ q j V t 0 +t • ∂ q i U t 0 +t -∂ q j V 0 • ∂ q i U 0 C 0 , and we can estimate each term in the last line of the latter by V t 0 +t C 1 U t 0 +t -U 0 C 1 + |U 0 | C 1 V t 0 +t -V 0 C 1 multiplied by a suitable constant C. This concludes the proof of this proposition because, by (3.3), the latter converges to 0 if t → +∞. Results This section is dedicated to the main results of this chapter. First, we need to introduce some notations. Given positive real parameters σ ≥ 0 and υ ≥ 0, we have the following definition Definition 3.2. Let S υ σ be the space of functions f defined on T n × B × J υ such that f ∈ C(T n × B × J υ ) and, for all t ∈ J υ , f t ∈ C σ (T n × B). We use this notation also for functions defined on T n × J υ , this will be specified by the context. Furthermore, for a positive integer k ≥ 0, we have the following space of functions Definition 3.3. 
Let Sυ σ,k be the space of functions f such that f ∈ S υ σ+k and ∂ i (q,p) f ∈ S υ σ+k-i for all 0 ≤ i ≤ k. We conventionally let f = ∂ 0 (q,p) f . In other words, f ∈ Sυ σ,k if f ∈ S υ σ+k and ∂ i (q,p) f ∈ C(T n × B × J υ ) for all 0 ≤ i ≤ k. That is, f t ∈ C σ+k (T n × B ) for all t ∈ J υ and the partial derivatives of f with respect to (q, p) are continuous until the order k. It is straightforward to verify that Sυ σ,0 = S υ σ . Results In order to measure the decay in time of the perturbations, we introduce positive, decreasing, integrable functions u on J 0 and we denote ū(t) = +∞ t u(τ )dτ for all t ∈ J 0 . We recall that K ω is the set of the Hamiltonians in ω-Kolmogorov normal form and we consider the following trivial embedding ϕ 0 : T n -→ T n × B, ϕ 0 (q) = (q, 0). Given ω ∈ R n and σ ≥ 1, we consider a time-dependent Hamiltonian H of the form                      H : T n × B × J 0 -→ R H(q, p, t) = h(q, p, t) + f (q, p, t), h ∈ K ω f 0 , ∂ p f 0 , ∂ 2 p H ∈ S0 σ,2 sup t∈J 0 |f t 0 | C σ+2 < ∞, sup t∈J 0 |∂ 2 p H t | C σ+2 < ∞, |∂ q f t 0 | C σ+1 ≤ a(t), |∂ p f t 0 | C σ+2 ≤ b(t) for all t ∈ J 0 , ( * A ) where a, b are positive, decreasing, integrable functions on J 0 . We assume that there exists υ ≥ 0, such that a and b satisfy the following conditions ā(t) ≤ b(t) ā(t)b(t) ≤ a(t) b(t) (#) for all t ∈ J υ . Theorem A. Let H be as in ( * A ) with a and b satisfying (#). Then, there exist h ∈ K ω and a Lagrangian C σ -asymptotic KAM torus ϕ t associated to (X H , X h, ϕ 0 ). About time-dependent perturbations of constant vector fields on the torus, given σ ≥ 1 and ω ∈ R n , we consider the following time-dependent vector field          Z : T n × J 0 -→ R n Z(q, t) = ω + P (q, t) P ∈ S0 σ,1 , |P t | C σ+1 ≤ P(t) for all t ∈ J 0 , (Z A ) where P is a positive, decreasing, integrable function on J 0 . Corollary A. Let Z be as in (Z A ). Then, there exists a C σ -asymptotic KAM torus ψ t associated to (Z, ω, Id). 3 Finitely Differentiable Case Proof of Theorem A This section is devoted to the proof of Theorem A. To this end, we expand the Hamiltonian H in ( * A ) in a small neighbourhood of 0 ∈ B, h(q, p, t) = h(q, 0, t) + ∂ p h(q, 0, t) • p + 1 0 (1 -τ )∂ 2 p h(q, τ p, t)dτ • p 2 f (q, p, t) = f (q, 0, t) + ∂ p f (q, 0, t) • p + 1 0 (1 -τ )∂ 2 p f (q, τ p, t)dτ • p 2 , we can assume without loss of generality that h(q, 0, t) = 0 for all (q, t) ∈ T n × J 0 . Letting ω = ∂ p h(q, 0, t) a(q, t) = f (q, 0, t) b(q, t) = ∂ p f (q, 0, t) m(q, p, t) = 1 0 (1 -τ ) ∂ 2 p h(q, τ p, t) + ∂ 2 p f (q, τ p, t) dτ = 1 0 (1 -τ )∂ 2 p H(q, τ p, t)dτ, for a positive real parameter Υ ≥ 1, we can rewrite the Hamiltonian H in the following form                H : T n × B × J 0 -→ R H(q, p, t) = ω • p + a(q, t) + b(q, t) • p + m(q, p, t) • p 2 , a, b ∈ S0 σ,2 , sup t∈J 0 |a t | C σ+2 < ∞, sup t∈J 0 |∂ 2 p H t | C σ+2 ≤ Υ, |∂ q a t | C σ+1 ≤ a(t), |b t | C σ+2 ≤ b(t), for all t ∈ J 0 ( * * A ) where ∂ 2 p H ∈ S0 σ,2 and a(t), b(t) are the functions introduced in ( * A ) satisfying (#). This Hamiltonian is our new starting point. Furthermore, let h be the following Hamiltonian h(q, p, t) = h(q, p, t) + 1 0 (1 -τ )∂ 2 p f (q, τ p, t)dτ • p 2 for all (q, p, t) ∈ T n ×B×J 0 . Obviously h ∈ K ω . Moreover, X H and X h verify (3.1). 
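For completeness, let us sketch why this last claim holds; the computation below is only an unwinding of the definitions in ( * * A ). By construction, the Hamiltonian h defined just above satisfies h(q, p, t) = ω • p + m(q, p, t) • p 2 , so H -h = a(q, t) + b(q, t) • p and X H -X h = (b(q, t), -∂ q a(q, t) -∂ q b(q, t)p). Since |∂ q a t | C σ+1 ≤ a(t), |b t | C σ+2 ≤ b(t) and p ranges in the bounded set B, the C σ -norm of X t H -X t h is bounded by C (a(t) + b(t)) for a constant C depending only on n, σ and B. As a and b are positive, decreasing and integrable, they tend to 0 as t → +∞, which gives the decay required in (3.1).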
Outline of the Proof of Theorem A We are looking for a C σ -asymptotic KAM torus ϕ t associated to (X H , X h, ϕ 0 ), where H is the Hamiltonian in ( * * A ), h is the Hamiltonian previously defined and ϕ 0 is the trivial embedding ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0). More specifically, for given H, we are searching for υ ≥ 0 sufficiently large and suitable functions u, v : T n × J υ → R n such that ϕ(q, t) = (q + u(q, t), v(q, t)) Proof of Theorem A and in such a way that ϕ, u and v satisfy the following conditions X H (ϕ(q, t), t) -∂ q ϕ(q, t)ω -∂ t ϕ(q, t) = 0, (3.7) lim t→+∞ |u t | C σ = 0, lim t→+∞ |v t | C σ = 0, (3.8) for all (q, t) ∈ T n × J υ . The parameter υ is free and it will be chosen large enough in Lemma 3.5 below. The proof rests on the implicit function theorem. To this end, we need to introduce a suitable functional F given by (3.7). We consider m(q, p, t)p = 1 0 ∂ 2 p H(q, τ p, t)dτ p = ∂ p m(q, p, t) • p 2 . This is well defined because ∂ p m(q, p, t) • p 2 = ∂ p 1 0 (1 -τ )∂ 2 p H(q, τ p, t)dτ • p 2 = ∂ p 1 0 (p -ξ)∂ 2 p H(q, ξ, t)dξ = 1 0 ∂ 2 p H(q, ξ, t)dξ = 1 0 ∂ 2 p H(q, τ p, t)dτ p, where the second equality of the latter is due to the change of variables ξ = τ p. Going back to the definition of the functional F, we observe that the Hamiltonian system associated to the Hamiltonian H is equal to X H (q, p, t) = ω + b(q, t) + m(q, p, t)p -∂ q a(q, t) -∂ q b(q, t)p -∂ q m(q, p, t)p 2 , where we recall that H is the Hamiltonian defined by ( * * A ). We introduce φ(q, t) = (q + u(q, t), v(q, t), t), ũ(q, t) = (q + u(q, t), t), for all (q, t) ∈ T n × J υ . Composing the Hamiltonian system X H with φ, we can write X H • φ in the following form X H • φ(q, t) = ω + b • ũ(q, t) + m • φ(q, t)v(q, t) -∂ q a • ũ(q, t) -∂ q b • ũ(q, t)v(q, t) -∂ q m • φ(q, t) • v(q, t) 2 for all (q, t) ∈ T n × J υ and moreover, ∂ q ϕ(q, t)ω + ∂ t ϕ(q, t) = ω + ∂ q u(q, t)ω + ∂ t u(q, t) ∂ q v(q, t)ω + ∂ t v(q, t) for all (q, t) ∈ T n × J υ . We define ∇u(q, t)Ω = ∂ q u(q, t)ω + ∂ t u(q, t), ∇v(q, t)Ω = ∂ q v(q, t)ω + ∂ t v(q, t) for all (q, t) ∈ T n × J υ . Then, we can rewrite (3.7) in the following form b • ũ + ( m • φ) v -(∇u) Ω -∂ q a • ũ -(∂ q b • ũ) v -(∂ q m • φ) • v 2 -(∇v) Ω = 0 0 . (3.9) This is composed of sums and products of functions defined on (q, t) ∈ T n × J υ , we have omitted the arguments (q, t) in order to achieve a more elegant form. We keep this notation for the rest of this proof. Over suitable Banach spaces, that we will specify later, let F be the following functional F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + ( m • φ) v -(∇u) Ω, F 2 (a, b, m, u, v) = ∂ q a • ũ + (∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) Ω. The latter is obtained by (3.9) and we observe that for all m and m, F(0, 0, m, m, 0, 0) = 0. We can reformulate our problem in the following form. For fixed m and m in a suitable Banach space and for (a, b) sufficiently close to (0, 0), we are looking for some functions u, v in such a way that F(a, b, m, m, u, v) = 0 and the asymptotic conditions (3.8) are satisfied. Concerning the associated linearized problem, the differential of F with respect to the variables (u, v) calculated on (0, 0, m, m, 0, 0) is equal to D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇û) Ω, (∇v) Ω) where, in according to the notation previously introduced, for all (q, t) ∈ T n × J υ we let m0 (q, t) = m(q, 0, t). 
The proof of this theorem is a straightforward application of the implicit function theorem if we assume the following norm sup t∈J 0 |a t | C σ+2 + sup t∈J 0 |∂ q a t | C σ+1 a(t) , sup t∈J 0 |b t | C σ+2 b(t) , to be sufficiently small. To avoid this smallness assumption, we study the problem from another point of view. We are looking for a C σ -asymptotic KAM torus defined for t sufficiently large in such a way |∂ q a t | C σ+1 , |∂ q b t | C σ+1 are sufficiently small. It suffices for proving the existence of functions (u, v) satisfying (3.7) and (3.8). The following four sections are devoted to the proof of Theorem A. In the first, we introduce suitable Banach spaces on which the previous functional is defined. The second is dedicated to solving the homological equation, which is the main tool to prove that D (u,v) F(0, 0, m, m, 0, 0) is invertible. In the penultimate section, we verify that F is well defined and satisfies the hypotheses of the implicit function theorem. Finally, the last section concludes the proof of this theorem. Proof of Theorem A Preliminary Settings Given positive real parameters σ ≥ 0, υ ≥ 0 and a positive integer k ≥ 0, we recall that S υ σ and Sυ σ,k are respectively the spaces of functions defined by Definition 3.2 and Definition 3.3. We introduce the following norm that we will widely use in the rest of this section. For every f ∈ S υ σ and for a positive real function u(t) defined on J υ , we let |f | υ σ,u = sup t∈Jυ |f t | C σ u(t) . (3.10) Furthermore, we recall that for all positive, decreasing, integrable functions u on J υ , we let ū be ū(t) = +∞ t u(τ )dτ for all t ∈ J υ . Now, let σ ≥ 1, υ ≥ 0 and Υ ≥ 1 be the positive parameters introduced in ( * * A ) and (#). For υ ≥ υ ≥ 0 that will be chosen later, we consider the following Banach spaces (A, | • |), (B, | • |), (U, | • |), (V, | • |), (Z, | • |) and (G, | • |) (see Appendix C) A = a : T n × J υ → R | a ∈ Sυ σ,2 and |a| = |a| υ σ+2,1 + |∂ q a| υ σ+1,a < ∞ B = b : T n × J υ → R n | b ∈ Sυ σ,2 , and |b| = |b| υ σ+2,b < ∞ U = u : T n × J υ → R n | u, (∇u) Ω ∈ S υ σ and |u| = max{|u| υ σ, b, | (∇u) Ω| υ σ,b } < ∞ V = v : T n × J υ → R n | v, (∇v) Ω ∈ S υ σ and |v| = max{|v| υ σ,ā , | (∇v) Ω| υ σ,a } < ∞ Z = z : T n × J υ → R n | z ∈ S υ σ , and |z| = |z| υ σ,b < ∞ G = g : T n × J υ → R | g ∈ S υ σ and |g| = |g| υ σ,a < ∞ where, in the definition of A, the norm |a| υ σ+2,1 = sup t∈J υ |a t | C σ+2 . This means that 1 stands for the function identically equal to 1 for all t ∈ J υ . Let M n be the set of the n-dimensional matrices. We introduce another Banach space (M, | • |) in such a way that M = m : T n × B × J υ → M n | m ∈ Sυ σ,2 and |m| = |m| υ σ+2,1 ≤ Υ where Υ is the positive parameter in ( * * A ). Now, we have everything we need to define more precisely the functional F introduced in the previous section. Let F be the following functional F : A × B × M × M × U × V -→ Z × G 3 Finitely Differentiable Case F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + ( m • φ) v -(∇u) Ω, F 2 (a, b, m, u, v) = ∂ q a • ũ + (∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) Ω. Homological Equation Given σ ≥ 0, υ ≥ 0 and ω ∈ R n , in this section, we solve the following equation for the unknown κ : T n × J υ → R ω • ∂ q κ(q, t) + ∂ t κ(q, t) = g(q, t), g ∈ S υ σ |g| υ σ,g < ∞ (HE A ) where g(t) is a positive, decreasing, integrable function on J υ and g : T n × J υ → R is given. Lemma 3.1 (Homological Equation). 
There exists a unique solution κ ∈ S υ σ of (HE A ) such that lim t→+∞ |κ t | C 0 = 0. (3.11) Moreover, |κ| υ σ,ḡ ≤ |g| υ σ,g . Proof. Existence: Let us define the following transformation h : T n × J υ → T n × J υ , h(q, t) = (q -ωt, t), that is the key to solve the homological equation. We claim that it is enough to prove the first part of this lemma for the much simpler equation ∂ t κ = g(q + ωt, t). (3.12) As a matter of fact, if κ is a solution of the latter satisfying the asymptotic condition (3.11), then χ = κ • h is a solution of (HE A ) satisfying the same asymptotic condition and viceversa. For the sake of clarity, we prove this claim. Let κ be a solution of (HE A ) satisfying the asymptotic condition (3.11), then ∂ t (κ • h -1 ) = ∂ q κ • h -1 • ω + ∂ t κ • h -1 = g • h -1 , where the last equality is due to (HE A ). This implies that κ = κ •h -1 is a solution of (3.12) and by |κ t | C 0 = | κ • h -1 t | C 0 ≤ |κ t | C 0 κ = κ • h -1 satisfies the asymptotic condition because κ does. Viceversa, let κ be a solution of (3.12) satisfying the asymptotic condition (3.11), then ∂ q (κ • h) • ω + ∂ t (κ • h) = ∂ q κ • h • ω -∂ q κ • h • ω + ∂ t κ • h = g. Proof of Theorem A By (3.12), we have the last equality of the latter and hence κ • h is a solution of (HE A ). Moreover, thanks to |κ t | C 0 = | (κ • h) t | C 0 ≤ |κ t | C 0 κ = κ • h satisfies the asymptotic condition (3.11 ). This proves the claim. For all q ∈ T n a solution of (3.12) exists and κ(q, t) = e(q) + t υ g(q + ωτ, τ )dτ with a function e defined on the torus. We have to choose e in such a way that κ satisfies the following asymptotic condition for all fixed q ∈ T n 0 = lim t→+∞ κ(q, t) = e(q) + +∞ υ g(q + ωτ, τ )dτ. There is only one possible choice for e, that is e(q) = - +∞ υ g(q + ωτ, τ )dτ. This implies that κ(q, t) = - +∞ t g(q + ωτ, τ )dτ is the solution of (3.12) we are looking for. Therefore, e is well defined, indeed +∞ υ g(q + ωτ, τ )dτ ≤ |g| υ σ,g +∞ υ g(τ )dτ = |g| υ σ,g ḡ(υ) < ∞. Moreover, |κ t | C 0 ≤ +∞ t |g τ | C 0 dτ ≤ |g| υ σ,g +∞ t g(τ )dτ = |g| υ σ,g ḡ(t), since ḡ(t) converges to 0 when t → +∞, taking the limit for t → +∞ on both sides of the latter, we have that |κ t | C 0 → 0 when t → +∞. This concludes the first part of the proof because κ(q, t) = κ • h(q, t) = - +∞ t g(q + ω(τ -t), τ )dτ is the unique solution of (HE A ) satisfying (3.11) that we are looking for. Regularity and Estimates: We observe that g ∈ S υ σ implies κ ∈ S υ σ and hence κ = κ • h ∈ S υ σ . Moreover, for all fixed t ∈ J υ |κ t | C σ ≤ |g| υ σ,g ḡ(t). Multiplying both sides of the latter by 1 ḡ(t) and taking the sup for all t ∈ J υ , we prove the second part of this lemma. 3 Finitely Differentiable Case Regularity of F We recall the definition of the functional F, F : A × B × M × M × U × V -→ Z × G F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + ( m • φ) v -(∇u) Ω, F 2 (a, b, m, u, v) = ∂ q a • ũ + (∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) Ω. We verify that the functional F satisfies the hypothesis of the implicit function theorem. The proof consists of three lemmas. We prove that F is well defined, F is differentiable with respect to the components (u, v) and this differential calculated on (0, 0, m, m, 0, 0) is invertible. In what follows, we will widely use the properties contained in Proposition A.2 (see Appendix A). For this reason, we recall it. Let D be equal to T n or T n × B. 
To avoid a flow of constants, let C(•) be constants depending on n and the other parameters in brackets. On the other hand, C stands for constants depending only on n. Proposition (Proposition A.2). We consider f , g ∈ C σ (D) and σ ≥ 0. 1. For all β ∈ N n , if |β| + s = σ then ∂ |β| ∂x 1 β 1 ...∂xn βn f C s ≤ |f | C σ . 2. |f g| C σ ≤ C(σ) (|f | C 0 |g| C σ + |f | C σ |g| C 0 ). Now we consider composite functions. Let z be defined on D 1 ⊂ R n and takes its values on D 2 ⊂ R n where f is defined. If σ < 1, f ∈ C 1 (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 3. |f • z| C σ ≤ C(|f | C 1 |z| C σ + |f | C 0 ). If σ < 1, f ∈ C σ (D 2 ), z ∈ C 1 (D 1 ) then f • z ∈ C σ (D 1 ) 4. |f • z| C σ ≤ C(|f | C σ |∇z| σ C 0 + |f | C 0 ). If σ ≥ 1 and f ∈ C σ (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 5. |f • z| C σ ≤ C(σ) |f | C σ |∇z| σ C 0 + |f | C 1 |∇z| C σ-1 + |f | C 0 . Upon choosing υ large enough, we can assume b(υ ) ≤ 1 and b(υ ) ≤ 1. We will take a stronger restriction after. Therefore, for all (u, v) ∈ U × V, thanks to (#), we have the following estimates |u t | C σ ≤ |u| υ σ, b, |v t | C σ ≤ |v| υ σ,ā (3.13) for all t ∈ J υ . Lemma 3.2. F is well defined. Proof of Theorem A Proof. We begin by proving that for all (b, m, u, v) ∈ B×M×U ×V, F 1 (b, m, u, v) ∈ Z. Concerning the regularity F 1 (b, m, u, v) is the sum of functions in S υ σ then F 1 (b, m, u, v) ∈ S υ σ . We have to find an upper bound for |F 1 (b, m, u, v)| υ σ,b , then for all t ∈ J υ |F 1 (b, m, u, v) t | C σ b(t) ≤ (b • ũ) t C σ b(t) + ( m • φ) t v t C σ b(t) + |(∇u t ) Ω| C σ b(t) . (3.14) Now, we have to estimate each member on the right-hand side of the latter. The last one is obviously bounded, we have to estimate the others. Concerning the member in the middle of the previous inequality, ( m • φ) t v t C σ b(t) ≤ C(σ) ( m • φ) t C σ |v t | C σ b(t) ≤ C(σ)Υ |v t | C σ b(t) 1 + |∂ q ϕ t | σ C 0 + |∂ q ϕ t | C σ-1 ≤ C(σ)Υ |v t | C σ ā(t) 1 + 1 + |u| υ σ, b σ + |u| υ σ, b + |v| υ σ,ā + |v| υ σ,ā σ ≤ C(σ)Υ|v| υ σ,ā 1 + 1 + |u| υ σ, b σ + |u| υ σ, b + |v| υ σ,ā + |v| υ σ,ā σ for all t ∈ J υ . We observe that the first and the second line of the latter are due, respectively, to properties 2. and 5. of Proposition A.2. The third line is a consequence of the first condition in (#), the form of ϕ t and (3.13). The last inequality follows by the definition of the norm | • | υ σ,ā (see (3.10)). Similarly to the previous case, by Proposition A.2 and (3.13) (b • ũ) t C σ b(t) ≤ C(σ) |b t | C σ b(t) 1 + |∂ q u t | C σ-1 + 1 + |∂ q u t | C 0 σ ≤ C(σ)|b| υ σ+2,b 1 + |u| υ σ, b + 1 + |u| υ σ, b σ for all t ∈ J υ . Taking the sup for all t ∈ J υ on the left-hand side of the above estimates, we prove the existence of an upper bound for the first two terms on the right-hand side of (3.14). This implies that |F 1 (b, m, u, v)| υ σ,b < ∞ and hence, for all (b, m, u, v) ∈ B×M×U ×V, F 1 (b, m, u, v) ∈ Z. Similarly, for all (a, b, m, u, v) ∈ A × B × M × U × V, F 2 (a, b, m, u, v) ∈ G. We point out that in this case we use both conditions in (#). This proves that F is well defined, moreover, one can prove that it is continuous. As we mentioned before, in the following lemma, we show that F is differentiable with respect to the variables (u, v). Let D (u,v) F be the differential with respect to (u, v) Lemma 3.3. 
F is differentiable with respect to (u, v) with D (u,v) F 1 (b, m, u, v)(û, v) = D u F 1 (b, m, u, v)û + D v F 1 (b, m, u, v)v = (∂ q b • ũ) û + v T (∂ q m • φ) û + v T (∂ p m • φ) v + ( m • φ) v -(∇û) Ω D (u,v) F 2 (a, b, m, u, v)(û, v) = D u F 2 (a, b, m, u, v)û + D v F 2 (a, b, m, u, v)v = ∂ 2 q a • ũ û + v T ∂ 2 q b • ũ û + (v T ) 2 ∂ 2 q m • φ û + (∂ q b • ũ) v + (v T ) 2 ∂ 2 pq m • φ v + 2v T (∂ q m • φ) v + (∇v) Ω, where T stands for the transpose of a vector and D u , D v are respectively the differentials with respect to u and v. Proof. We begin by proving that F 1 is differentiable with respect to u with D u F 1 (b, m, u, v)û = (∂ q b • ũ) û + v T (∂ q m • φ) û -(∇û) Ω. First, let us introduce the following notation ũ(q, t) + τ û(q, t) = (q + u(q, t) + τ û(q, t), t) φ+τ û(q, t) = (q + u(q, t) + τ û(q, t), v(q, t), t), for all (q, t) ∈ T n × J υ and τ ∈ (0, 1). Now, thanks to Taylor's formula F 1 (b, m, u + û, v) -F 1 (b, m, u, v) -(∂ q b • ũ) û -v T (∂ q m • φ) û + (∇û) Ω = 1 0 ∂ q b • (ũ + τ û) -∂ q b • ũdτ û + v T 1 0 ∂ q m • φ+τ û -∂ q m • φdτ û. This implies the claim because, for all fixed t ∈ J υ , by the property 2. of Proposition A.2, the first condition in (#) and (3.13) F 1 (b, m, u + û, v) -F 1 (b, m, u, v) -(∂ θ b • ũ) û -v T ∂ θ m • ψ û + (∇ θt û) Ω t C σ b(t) ≤ C(σ) 1 0 (∂ q b • (ũ + τ û) -∂ q b • ũ) t C σ b(t) dτ ût C σ + C(σ) |v t | C σ b(t) 1 0 (∂ q m • φ+τ û -∂ q m • φ) t C σ dτ ût C σ ≤ C(σ) 1 0 |∂ q b • (ũ + τ û) -∂ q b • ũ| υ σ,b dτ |û| υ σ, b + C(σ) |v| υ σ,ā 1 0 (∂ q m • φ+τ û -∂ q m • φ) t υ σ,1 dτ |û| υ σ, b and, by the regularity of b and m, we have the claim. Similarly F 1 is differentiable with respect to v and F 2 is differentiable with respect (u, v). Proof of Theorem A This shows that F is differentiable with respect to the variables (u, v). Furthermore, one can show that D (u,v) F is continuous. This differential calculated on (0, 0, m, m, 0, 0) is equal to D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇û) Ω, (∇v) Ω). (3.15) In the following lemma, we verify that for all fixed m, m ∈ M, the latter is invertible. Lemma 3.4. For all (z, g) ∈ Z × G there exists a unique (û, v) ∈ U × V such that D (u,v) F(0, 0, m, m, 0, 0)(û, v) = (z, g). Moreover, there exists a suitable constant C such that |û| ≤ CΥ|g| υ σ,a + |z| υ σ,b , |v| ≤ |g| υ σ,a , (3.16) where we recall that |û| = max{|û| υ σ, b, | (∇û) Ω| υ σ,b } and |v| = max{|v| υ σ,ā , | (∇v) Ω| υ σ,a }. Proof. The proof of this lemma rests on Lemma 3.1. By (3.15), we can reformulate the problem in the following form. Given (z, g) ∈ Z × G, we are looking for the unique solution (û, v) ∈ U × V of the following system (∇û) Ω = m0 v -z (∇v) Ω = g. (3.17 Now, it remains to solve the first equation of (3.17) where v is known. For all fixed t ∈ J υ and thanks to property 2. of Proposition A.2, the first condition of (#) and (3.18) |( m0 v -z) t | C σ b(t) ≤ CΥ |v t | C σ b(t) + |z t | C σ b(t) ≤ CΥ |v t | C σ ā(t) + |z t | C σ b(t) ≤ CΥ|v| υ σ,ā + |z| υ σ,b ≤ CΥ|g| υ σ,a + |z| υ σ,b , for a suitable constant C. Taking the sup for all t ∈ J υ on the left-hand side of the latter, we obtain | m0 v -z| υ σ,b ≤ CΥ|g| υ σ,a + |z| υ σ,b and hence | (∇û) Ω| υ σ,b = | m0 v -z| υ σ,b ≤ CΥ|g| υ σ,a + |z| υ σ,b . Thanks to Lemma 3.1 the unique solution û of the first equation of (3.17) exists verifying |û| υ σ, b ≤ | m0 v -z| υ σ,b(t) ≤ CΥ|g| υ σ,a + |z| υ σ,b . This concludes the proof of this lemma with C = C because |û| = max{|û| υ σ, b, | (∇û) Ω| υ σ,b } ≤ CΥ|g| υ σ,a + |z| υ σ,b . 
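Before turning to the last step of the proof, it may help to see the homological equation (HE A ) of Lemma 3.1, which was used twice in the proof of Lemma 3.4, on a completely explicit example; it is not needed in what follows. Take n = 1, υ = 0, ω ∈ R and g(q, t) = cos(q) e -t , so that we may take g(t) = e -t and ḡ(t) = e -t . The formula of Lemma 3.1 gives κ(q, t) = -∫ t +∞ cos(q + ω(τ -t)) e -τ dτ = -e -t (cos q -ω sin q)/(1 + ω 2 ), and a direct computation shows ω ∂ q κ(q, t) + ∂ t κ(q, t) = e -t cos q = g(q, t), while |κ t | C 0 = e -t /(1 + ω 2 ) 1/2 ≤ ḡ(t), consistently with the estimate |κ| υ σ,ḡ ≤ |g| υ σ,g .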
C σ -asymptotic KAM torus In the previous section, we proved that the functional F satisfies the hypotheses of the implicit function theorem. Here, we prove the existence of a C σ -asymptotic KAM torus associated to (X H , X h, ϕ 0 ) and we conclude the proof of Theorem A. F(x, m, m, y) = D (u,v) F(0, 0, m, m, 0, 0)y + R(x, m, m, y). The aim is to find y ∈ Y in such a way that F(x, m, m, y) = 0, where we recall that we have fixed x, m and m. This is equivalent to find y ∈ Y such that y = -D (u,v) F(0, 0, m, m, 0, 0) -1 R(x, m, m, y) = y -D (u,v) F(0, 0, m, m, 0, 0) -1 F(x, m, m, y). This is well defined because we have already proved that D (u,v) F(0, 0, m, m, 0, 0) is invertible (see Lemma 3.4). To this end, we introduce the following functional L(x, m, m, •) : Y -→ Y in such a way that L(x, m, m, y) = y -D (u,v) F(0, 0, m, m, 0, 0) -1 F(x, m, m, y). (L) This is well defined and, by the regularity of F, we deduce that L is continuous, differentiable with respect to y = (u, v) with differential D y L continuous. The proof is reduced to find a fixed point of the latter. For this purpose, we introduce the following lemma. Lemma 3.5. There exists υ large enough with respect to n, σ, Υ and b, such that, for all y * ,y ∈ Y with |y * | ≤ 1, |D y L(x, m, m, y * )y| ≤ 1 2 |y|. (3.20) Proof. The proof relies on Lemma 3.4. By (L), for all y * ,y ∈ Y D y L(x, m, m, y * )y = D (u,v) F(0, 0, m, m, 0, 0) -1 D (u,v) F(0, 0, m, m, 0, 0) -D (u,v) F(x, m, m, y * ) y. We can reformulate this problem in terms of estimating the unique solution ŷ = (û, v) ∈ Y of the following system D (u,v) F(0, 0, m, m, 0, 0)ŷ = D (u,v) F(0, 0, m, m, 0, 0) -D (u,v) F(x, m, m, y * ) y. (3.21) Now, it suffices to estimate the right-hand side of the latter and apply Lemma 3.4. First, let us introduce the following notation. We observe that y * = (u * , v * ) ∈ Y and, for all (q, t) ∈ T n × J υ , we let ũ * (q, t) = (q + u * (q, t), t), φ * (q, t) = (q + u * (q, t), v * (q, t), t). By Lemma 3.3, the right-hand side of (3.21) is equal to D (u,v) F(0, 0, m, m, 0, 0)-D (u,v) F(x, m, m, y * ) y = m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y (∇v) Ω -D (u,v) F 2 (x, m, y * )y where m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y = ( m0 -m • φ * ) v -(∂ q b • ũ * ) u -v T * (∂ q m • φ * ) u -v T * (∂ p m • φ * ) v (∇v) Ω -D (u,v) F 2 (x, m, y * )y = -∂ 2 q a • ũ * u -v T * ∂ 2 q b • ũ * u -(v T * ) 2 ∂ 2 q m • φ * u -(∂ q b • ũ * ) v -(v T * ) 2 ∂ 2 pq m • φ * v -2v T * (∂ q m • φ * ) v. Thanks to property 2. of Proposition A.2, we can estimate the first member on the left-hand side of the latter as follows m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y t C σ ≤ C(σ) mt 0 -m • φ * t C σ v t C σ + (∂ q b • ũ * ) t C σ u t C σ + |v t * | C σ (∂ q m • φ * ) t C σ |u t | C σ + |v t * | C σ (∂ p m • φ * ) t C σ |v t | C σ for all t ∈ J υ . We point out that |y * | = max{|u * |, |v * |} ≤ 1 and we find an upper bound for each member on the right-hand side of the previous inequality. For all t ∈ J υ mt 0 -m • φ * t C σ |v t | C σ ≤ C(σ) |∂ q mt (id + τ u * , τ v * )u t * | C σ + |∂ p mt (id + τ u * , τ v * )v t * | C σ |v t | C σ ≤ C(σ)Υ 1 + b(υ ) + ā(υ ) |u t * | C σ |v t | C σ + C(σ)Υ 1 + b(υ ) + ā(υ ) |v t * | C σ |v t | C σ ≤ C(σ)Υ |u * | b(t) + |v * |ā(t) |v|ā(t) ≤ C(σ)Υ b(υ )|y|b(t) + C(σ)Υb(υ )|y|b(t) The first line of the latter is a consequence of the mean value theorem for a suitable τ ∈ [0, 1]. Concerning the second inequality, it is due to properties 2. and 5. of Proposition A.2. 
Moreover, we use also that, thanks to (#) and for υ large enough, we may assume b(υ ) ≤ 1 and ā(υ ) ≤ b(υ ) ≤ 1. In the penultimate line on the right-hand side of the previous inequalities, we apply the following estimate 1 + b(υ ) + ā(υ ) ≤ 3 for all t ∈ J υ . In the last line, we use the first condition in (#). Similarly to the previous case, thanks to property 5. of Proposition A.2, the first condition in (#), (3.19) and b(υ ) ≤ 1, ā(υ ) ≤ 1, we obtain (∂ q b • ũ * ) t C σ u t C σ ≤ C(σ)|b| υ σ+2,b b(t) 1 + b(υ ) |u| b(t) ≤ C(σ) b(υ )|y|b(t) |v t * | C σ (∂ q m • φ * ) t C σ |u t | C σ ≤ C(σ)|v * |ā(t)Υ 1 + b(υ ) + ā(υ ) |u| b(t) ≤ C(σ)Υ b(υ )|y|b(t) |v t * | C σ (∂ p m • φ * ) t C σ |v t | C σ ≤ C(σ)|v * |ā(t)Υ 1 + b(υ ) + ā(υ ) |v|ā(t) ≤ C(σ)Υb(υ )|y|b(t), for all t ∈ J υ . Now, for υ large enough, the previous estimates imply m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y t C σ ≤ 1 4 |y|b(t) for all t ∈ J υ . Multiplying both sides of the latter by 1 b(t) and taking the sup for all t ∈ J υ , we obtain m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y υ σ,b ≤ 1 4 |y|. (3.22) Similarly to the previous case, (∇v) Ω -D (u,v) F 2 (x, m, y * )y t C σ ≤ C(σ) ∂ 2 q a • ũ * t C σ u t C σ + v t * C σ ∂ 2 q b • ũ * t C σ u t C σ + v t * 2 C σ ∂ 2 q m • φ * t C σ |u t | C σ + (∂ q b • ũ * ) t C σ v t C σ + v t * 2 C σ ∂ 2 pq m • φ * t C σ v t C σ + v t * C σ (∂ q m • φ * ) t C σ v t C σ , for all t ∈ J υ . Therefore, we have to estimate each member on the right-hand side of the latter. We begin with the element in the second line. For all t ∈ J υ |v t * | C σ ∂ 2 q b • ũ * t C σ |u t | C σ ≤ C(σ)|v t * | C σ |b| υ σ+2,b b(t) 1 + b(υ ) |u t | C σ ≤ C(σ)ā(t)|b| υ σ+2,b b(t)|u| b(t) ≤ C(σ) b(υ ) 2 |y|a(t). The first line of the above estimate is due to property 5. of Proposition A.2 and b(υ ) ≤ 1. In the second line we use |v t * | C σ ≤ ā(t) and |u t | C σ ≤ |u| b(t) for all t ∈ J υ . The last inequality is a consequence of the second condition of (#). Thanks to property 5. of Proposition A.2, (#), (3.19) and b(υ ) ≤ 1, ā(υ ) ≤ 1, 50 3.3 Proof of Theorem A in the same way we have ∂ 2 q a • ũ * t C σ u t C σ ≤ C(σ)|∂ q a| υ σ+1,a a(t)|u| b(t) ≤ C(σ) b(υ )|y|a(t), v t * 2 C σ ∂ 2 q m • φ * t C σ |u t | C σ ≤ C(σ)|v * | 2 ā(t) 2 Υ|u| b(t) ≤ C(σ)Υā(t)b(t) b(t)|u| ≤ C(σ)Υ b(υ ) 2 |y|a(t) (∂ q b • ũ * ) t C σ v t C σ ≤ C(σ)|b| υ σ+2,b b(t)|v|ā(t) ≤ C(σ) b(υ )|y|a(t) v t * 2 C σ ∂ 2 pq m • φ * t C σ v t C σ ≤ C(σ)|v * | 2 ā(t) 2 Υ|v|ā(t) ≤ C(σ)Υb(t) 2 |v|ā(t) ≤ C(σ)Υb(υ ) b(υ )|y|a(t) v t * C σ (∂ q m • φ * ) t C σ v t C σ ≤ C(σ)|v * |ā(t)Υ|v|ā(t) ≤ C(σ)Υb(t)|v|ā(t) ≤ C(σ)Υ b(υ )|y|a(t) for all t ∈ J υ . Then, for υ large enough (∇v) Ω -D (u,v) F 2 (x, m, y * )y t C σ ≤ 1 4 CΥ |y|a(t) for all t ∈ J υ . We recall that C is the constant introduced in Lemma 3.4. Multiplying both sides of the latter by 1 a(t) and taking the sup for all t ∈ J υ , we obtain ((∇v) Ω -D (u,v) F 2 (x, m, y * )y υ σ,a ≤ 1 4 CΥ |y|. (3.23) This concludes the proof of this lemma because, thanks to Lemma 3.4, the unique solution of (3.21) exists and by (3.22), (3.23) |û| ≤ m0 v -(∇u) Ω -D (u,v) F 1 (b, m, y * )y υ σ,b + CΥ ((∇v) Ω -D (u,v) F 2 (x, m, y * )y υ σ,a ≤ 1 2 |y| |v| ≤ ((∇v) Ω -D (u,v) F 2 (x, m, y * )y υ σ,a ≤ 1 4 CΥ |y| ≤ 1 2 |y|. We observe that the choice of the constant 1 in the ball |y * | ≤ 1 is completely arbitrary. One can choose another threshold provided to take υ sufficiently large. Now, the previous lemma proves that L(x, m, m, •) is a contraction of a complete subset of Y. 
Then, there exists a unique fixed point y ∈ Y with |y| ≤ 1. This concludes the proof of Theorem A. Proof of Corollary A The proof is essentially the same as that of Theorem A. Because of that, we will not give all the details. However, we will provide the necessary elements to reconstruct the proof. We are looking for a C σ -asymptotic KAM torus ψ t associated to (Z, ω, Id), where Z is the vector field defined by (Z A ). This means that, for given Z, we are searching for υ ≥ 0 sufficiently large and a suitable function u : T n × J υ → R n such that ψ(q, t) = q + u(q, t) and in addition, ψ and u satisfy Z(ψ(q, t), t) -∂ q ψ(q, t)ω -∂ t ψ(q, t) = 0, (3.24) lim t→+∞ |u t | C σ = 0. (3.25) for all (q, t) ∈ T n ×J υ . We will choose υ sufficiently large in Lemma 3.6. Similarly to the proof of Theorem A, we introduce a suitable functional F given by (3.24). To this end, we define ψ(q, t) = (q + u(q, t), t), for all (q, t) ∈ T n × J υ . The composition of Z with ψ is equal to Z • ψ(q, t) = ω + P • ψ(q, t) and ∂ q ψ(q, t)ω + ∂ t ψ(q, t) = ω + ∂ q u(q, t)ω + ∂ t u(q, t) for all (q, t) ∈ T n × J υ . We recall the notation introduced in the previous section ∇u(q, t)Ω = ∂ q u(q, t)ω + ∂ t u(q, t) for all (q, t) ∈ T n × J υ . Then, we can rewrite (3.24) in the following form P • ψ -(∇u) Ω = 0. (3.26) This is the sum of functions defined on (q, t) ∈ T n × J υ , we have omitted the arguments (q, t) in order to achieve a more elegant form. Before the introduction of the functional F, let υ ≥ 0 and σ ≥ 1 be the positive parameters defined in Corollary A. For υ ≥ υ ≥ 0 that will be chosen later, we introduce the following Banach spaces (P, | • |), (U, | • |) and (Z, | • |) P = P : T n × J υ → R n | P ∈ Sυ σ,1 , and |P | = |P | υ σ+1,P < ∞ U = u : T n × J υ → R n | u, (∇u) Ω ∈ S υ σ and |u| = max{|u| υ σ, P, | (∇u) Ω| υ σ,P } < ∞ Z = z : T n × J υ → R n | z ∈ S υ σ , and |z| = |z| υ σ,P < ∞ Let F be the following functional F : P × U -→ Z F(P, u) = P • ψ -(∇u) Ω. This is obtained by (3.26) and we observe that F(0, 0) = 0. We can reformulate our problem in the following form. For P ∈ P sufficiently close to 0, we are looking for u ∈ U in such a way that F(P, u) = 0. Regarding the differential of F with respect to the variable u calculated in (0, 0), this is equal to D u F(0, 0)û = -(∇û) Ω. The functional F is well defined, continuous, differentiable with respect to u with D u F(P, u) continuous. Moreover, as a straightforward consequence of Lemma 3.1, D u F(0, 0) is invertible. Then, F satisfies the hypotheses of the implicit function theorem. Now, similarly to the proof of Theorem A, we fix P as in Corollary A and we introduce the following functional L(P, •) : U -→ U in such a way that L(P, u) = u -D u F(0, 0) -1 F(P, u). We recall that P is fixed and the proof of Corollary A is reduced to find a fixed point of the latter. To this end, we have the following lemma Lemma 3.6. There exists υ large enough with respect to n, σ and P, such that, for all u * ,u ∈ U with |u * | ≤ 1, |D u L(P, u * )u| ≤ 1 2 |u|. Therefore, L(P, •) is a contraction of a complete subset of P and this concludes the proof of Corollary A. Real Analytic Case This chapter contains the analytic version of the results proved in the previous chapter. However, in order to simplify the reading and the comparison with the previous results, we keep almost the same structure. This chapter is divided into three sections. We begin by introducing the definition of analytic asymptotic KAM torus (Section 4.1). 
The second section (Section 4.2) is dedicated to stating and analysing the main results (Theorem B and Corollary B). This section contains the proof of the result concerning time-dependent perturbations of constant vector fields on the torus. We dedicate the last (Sections 4.3) to the proof of the theorem about time-dependent Hamiltonian vector fields. The proof of this result is essentially the same as that contained in the previous chapter of this work. Here, we pay special attention to the modifications in the proof due to the different classes of functions. Analytic Asymptotic KAM Torus For some s > 0, we define the following complex domains T n s := {q ∈ C n /Z n : | Im(q)| < s}, B s := {p ∈ C n : |p| < s}. We recall the definition of analytic asymptotic KAM torus. Let P be equal to T n × B or T n . We consider time-dependent real analytic vector fields X t and X t 0 on P, for all t ∈ J υ , and a real analytic embedding ϕ 0 from T n to P such that lim t→+∞ |X t -X t 0 | s = 0 (4.1) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)ω, for all (q, t) ∈ T n × J υ , (4.2) where ω ∈ R n and | • | s is the analytic norm (see Appendix B). Definition (Définition 1.1). We assume that (X, X 0 , ϕ 0 ) satisfy (4.1) and (4.2). A family of real analytic embeddings ϕ t : T n → P is an analytic asymptotic KAM torus associated to (X, X 0 , ϕ 0 ) if there exist 0 < s ≤ s and υ ≥ υ ≥ 0 such that lim t→+∞ |ϕ t -ϕ 0 | s = 0, (4.3) X(ϕ(q, t), t) = ∂ q ϕ(q, t)ω + ∂ t ϕ(q, t), (4.4) for all (q, t) ∈ T n × J υ . When dimP = 2n, then we say that ϕ t is Lagrangian if ϕ t (T n ) is Lagrangian for all t. We conclude this section with the introduction of a suitable space of functions. For some s > 0 and υ ≥ 0, we have the following definition Definition 4.1. Let A υ s be the space of functions f defined on T n s × B s × J υ such that f ∈ C(T n s × B s × J υ ) and, for all t ∈ J υ , f t is real analytic on T n s × B s . Results We use this notation also for functions defined on T n s × J υ . For all k ∈ Z 2n with |k| ≥ 1, we let ∂ k (q,p) = ∂ k 1 q 1 ...∂ kn qn ∂ k n+1 p 1 ...∂ k 2n pn where |k| = |k 1 | + ... + |k 2n |. The following proposition is about an important property concerning each f ∈ A υ s , which we will widely use in the rest of this chapter. Proposition 4.1. Let f ∈ A υ s , then, for all k ∈ Z 2n with |k| ≥ 1, ∂ k (q,p) f ∈ A υ s for all 0 < s < s. Proof. For all t ∈ J υ , ∂ k (q,p) f t is real analytic on T n s × B s (see [START_REF] Rudin | Real and complex analysis[END_REF]) and hence on T n s ×B s . It remains to prove that ∂ k (q,p) f ∈ C(T n s ×B s ×J υ ). For all (q 1 , p 1 , t 1 ), (q 1 , p 1 , t 1 ) ∈ T n s × B s × J υ , by Cauchy's inequality |∂ k (q,p) f (q 1 , p 1 , t 1 ) -∂ k (q,p) f (q 2 , p 2 , t 2 )| ≤ |∂ k (q,p) f (q 1 , p 1 , t 1 ) -∂ k (q,p) f (q 1 , p 1 , t 2 )| + |∂ k (q,p) f (q 1 , p 1 , t 2 ) -∂ k (q,p) f (q 2 , p 2 , t 2 )| ≤ k 1 !...k 2n ! (s -s ) |k| |f t 1 -f t 2 | s + ∂ k (q,p) f t 2 (q 1 , p 1 ) -∂ k (q,p) f t 2 (q 2 , p 2 ) and hence by the continuity of f with respect to t and the continuity of ∂ k (q,p) f with respect to (q, p), we have the claim. Results Given ω ∈ R n and positive real parameters s 0 > 0, we consider the following time-dependent Hamiltonian H                    H : T n × B × J 0 -→ R H(q, p, t) = h(q, p, t) + f (q, p, t), h ∈ K ω h, f ∈ A 0 s 0 sup t∈J 0 |f t 0 | s 0 < ∞, sup t∈J 0 |∂ 2 p H t | s 0 < ∞ |∂ q f t 0 | s 0 ≤ a(t), |∂ p f t 0 | s 0 ≤ b(t), for all t ∈ J 0 ( * B ) where a, b are positive, decreasing, integrable functions on J 0 . 
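To fix ideas, here is a small numerical illustration of admissible decay data for ( * B ). The particular choice a(t) = b(t) = (1 + t)^{-3} is an assumption made only for this sketch; abar and bbar below are the integral tails of a and b over [t, +∞), which are the quantities entering the compatibility condition stated next.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative admissible data for (*_B): a(t) = b(t) = (1 + t)^(-3) is positive,
# decreasing and integrable on J_0 = [0, +infinity).  The exponent 3 is an
# assumption chosen only for this sketch.
a = lambda t: (1.0 + t) ** (-3)
b = lambda t: (1.0 + t) ** (-3)

def tail(f, t):
    # integral tail of f over [t, +infinity)
    value, _ = quad(f, t, np.inf)
    return value

for t in [0.0, 1.0, 10.0, 100.0]:
    abar, bbar = tail(a, t), tail(b, t)
    # here abar(t) = bbar(t) = (1 + t)^(-2) / 2, so the tails decay to 0, and
    # inequalities of the type used below (abar <= bbar, abar * bbar <= a) hold.
    print(t, abar, bbar, abar <= bbar, abar * bbar <= a(t))
```

Any other positive, decreasing, integrable pair can be tested in the same way.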
We assume that there exists υ ≥ 0 such that ā(t) ≤ b(t) ā(t)b(t) ≤ a(t) b(t) (#) for all t ∈ J υ . Let ϕ 0 be the following trivial embedding ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0), we have the following theorem. Theorem B. Let H be as in ( * B ) with a and b satisfying (#). Then, there exist h ∈ K ω and a Lagrangian analytic asymptotic KAM torus ϕ t associated to (X H , X h, ϕ 0 ). As a consequence of the above theorem, we have the following result concerning time-dependent perturbations of constant vector fields on the torus. Let Z be a non-autonomous vector field on T n × J 0 of the form          Z : T n × J 0 -→ R n , Z(q, t) = ω + P (q, t) P ∈ A 0 s 0 , |P t | s 0 ≤ P(t) for all t ∈ J 0 (Z B ) where ω ∈ R n and 0 < s 0 < 1. We assume that P is a positive, decreasing, integrable function on J 0 . Corollary B. Let Z be as in (Z B ). Then, there exists an analytic asymptotic KAM torus ψ t associated to (Z, ω, Id). Proof. The proof is a straightforward application of Theorem B. Let h(p) = ω • p, we consider the Hamiltonian H defined on T n × B × J 0 of the form H(q, p, t) = ω • p + P (q, t) • p. The latter satisfies the hypotheses of Theorem B. Then, there exists an analytic asymptotic KAM torus ϕ t associated to (X H , X h, ϕ 0 ), where ϕ 0 is the trivial embedding previously introduced. Moreover, ϕ t = (id + u t , v t ) and, for all fixed t, id + u t is a diffeomorphism of the torus. This concludes the proof of this theorem with ψ t = id + u t . Proof of Theorem B We expand the Hamiltonian H in ( * B ) in a small neighbourhood of 0 ∈ B. Then, thanks to Proposition 4.1 and for a positive parameter Υ ≥ 1, we can rewrite the Hamiltonian H in the following form                H : T n × B × J 0 -→ R H(q, p, t) = ω • p + a(q, t) + b(q, t) • p + m(q, p, t) • p 2 , a, b, ∂ 2 p H ∈ A 0 s , sup t∈J 0 |a t | s < ∞, sup t∈J 0 |∂ 2 p H t | s ≤ Υ, |∂ q a t | s ≤ a(t), |b t | s ≤ b(t), for all t ∈ J 0 ( * * B ) where s = s 0 2 and a(t), b(t) are the functions introduced in ( * B ) satisfying (#). We consider the following Hamiltonian h(q, p, t) = h(q, p, t) + 1 0 (1 -τ )∂ 2 p f (q, τ p, t)dτ • p 2 . for all (q, p, t) ∈ T n × B × J 0 . It is obvious that h ∈ K ω and X H , X h satisfy (4.1). Proof of Theorem B Outline of the Proof of Theorem B We are looking for an analytic asymptotic KAM torus ϕ t associated to (X H , X h, ϕ 0 ), where H is the Hamiltonian in ( * * B ), h is the Hamiltonian previously defined and ϕ 0 the trivial embedding ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0). More specifically, for given H, we are searching for υ ≥ 0 sufficiently large and suitable functions u, v : T n × J υ → R n such that ϕ(q, t) = (q + u(q, t), v(q, t)) and in such a way that ϕ, u and v satisfy X H (ϕ(q, t), t) -∂ q ϕ(q, t)ω -∂ t ϕ(q, t) = 0, (4.5) lim t→+∞ |u t | s 2 = 0, lim t→+∞ |v t | s 2 = 0, (4.6) for all (q, t) ∈ T n × J υ . Similarly to Theorem A, we will choose υ large enough in Lemma 4.5 (υ will be already required large in Lemma 4.2). The proof of this theorem rests on the implicit function theorem. For this reason, we begin by defining a suitable functional F given by (3.7). Similarly to the proof of Theorem A, we can rewrite (4.5) in the following form b • ũ + ( m • φ) v -(∇u) Ω -∂ q a • ũ -(∂ q b • ũ) v -(∂ q m • φ) • v 2 -(∇v) Ω = 0 0 . 
(4.7) Thanks to the latter, over suitable Banach spaces that we will specify later, we define the functional F in such a way that F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + ( m • φ) v -(∇u) Ω, F 2 (a, b, m, u, v) = ∂ q a • ũ + (∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) Ω. Moreover, we observe that for all m and m, F(0, 0, m, m, 0, 0) = 0. We reformulate our problem in the following form. For fixed m and m in a suitable Banach space and for (a, b) sufficiently close to (0, 0), we are looking for some functions u, v in such a way that F(a, b, m, m, u, v) = 0 and the asymptotic conditions (4.6) are satisfied. We can see that, the differential of F with respect to the variables (u, v) calculated in (0, 0, m, m, 0, 0) is equal to D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇û) Ω, (∇v) Ω), where, for all (q, t) ∈ T n × J υ , m0 (q, t) = m(q, 0, t). Also in this case, the proof of Theorem B is developed in the following four sections. In the first, we introduce suitable Banach spaces on which the previous functional is defined. These Banach spaces are the analytical version of those used in the proof of Theorem A. The second is dedicated to solving the homological equation. In the third section, we verify that F is well defined and satisfies the hypotheses of the implicit function theorem. In the last section, we conclude the proof of this theorem. Real Analytic Case Preliminary Settings Given s > 0 and υ ≥ 0, for every f ∈ A υ s and for positive real functions u(t) defined on J υ , we introduce the following norm |f | υ s,u = sup t∈Jυ |f t | s u(t) , where | • | s is the analytic norm and A υ s is the space of functions of Definition 4.1. It is the analytic version of the norm defined in the finitely differentiable case by (3.10). Let υ ≥ 0, s > 0 and Υ ≥ 1 be the positive parameters introduced by ( * * B ) and (#). For υ ≥ υ ≥ 0 that will be chosen later, we consider the following Banach spaces (A, | • |), (B, | • |), (U, | • |), (V, | • |), (Z, | • |), (G, | • |) and (M, | • |). A = a : T n × J υ → R | a ∈ A υ s and |a| = |a| υ s,1 + |∂ q a| υ s,a < ∞ B = b : T n × J υ → R n | b ∈ A υ s , and |b| = |b| υ s,b < ∞ U = u : T n × J υ → R n | u, (∇u) Ω ∈ A υ s 2 and |u| = max{|u| υ s 2 , b, | (∇u) Ω| υ s 2 ,b } < ∞ V = v : T n × J υ → R n | v, (∇v) Ω ∈ A υ s 2 and |v| = max{|v| υ s 2 ,ā , | (∇v) Ω| υ s 2 ,a } < ∞ Z = z : T n × J υ → R n | z ∈ A υ s 2 , and |z| = |z| υ s 2 ,b < ∞ G = g : T n × J υ → R | g ∈ A υ s 2 and |g| = |g| υ s 2 ,a < ∞ M = m : T n × B × J υ → M n | m ∈ A υ s and |m| = |m| υ s,1 ≤ Υ The risk of mixing the Banach space A with the space of functions A υ s is small. Concerning A, we have that |a| υ s,1 = sup t∈J υ |a t | s , similarly for M. Regarding the last Banach space M, we recall that M n is the set of n-dimensional matrices. These are the analytic version of the Banach spaces introduced in the finitely differentiable case (see Section 3.3.2). Contrary to the previous chapter, we have to define the functional F on a suitable subspace X of A × B × M × M × U × V. This is because we have to control the domain of analyticity of the components of F. Let X be equal to X = {(a, b, m, m, u, v) ∈ A × B × M × M × U × V : |∂ q a| υ s,a ≤ 1, |b| ≤ 1, |m| ≤ Υ, | m| ≤ Υ, |u| ≤ 1, |v| ≤ 1}. 
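Before turning to the functional F on X , the following sketch gives a numerical feeling for the weighted analytic norm introduced at the beginning of these preliminary settings, namely the supremum over t of |f^t|_s divided by the weight u(t). The function f, the weight u, the width s and the discretisation of the complex strip are all illustrative assumptions.

```python
import numpy as np

# Numerical illustration of the weighted norm sup_t |f^t|_s / u(t), where |.|_s
# is approximated by sampling q = x + i*y on the strip |Im q| < s.
s = 0.5
lam = 1.5
u = lambda t: np.exp(-lam * t)                   # prescribed decay (the weight)
f = lambda q, t: np.exp(-lam * t) * np.sin(q)    # a function decaying like u(t)

x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.linspace(-0.999 * s, 0.999 * s, 101)      # stay slightly inside the strip
X, Y = np.meshgrid(x, y, indexing="ij")
Q = X + 1j * Y

ts = np.linspace(0.0, 20.0, 81)
weighted = [np.max(np.abs(f(Q, t))) / u(t) for t in ts]
# for this f the quotient is essentially constant, close to cosh(s): the weighted
# norm is finite precisely because f decays at least as fast as the weight.
print(max(weighted), np.cosh(s))
```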
Let F be the following functional F : X -→ Z × G 4.3 Proof of Theorem B F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + ( m • φ) v -(∇u) Ω, F 2 (a, b, m, u, v) = ∂ q a • ũ + (∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) Ω. Homological Equation Given s > 0, υ ≥ 0 and ω ∈ R n , we are looking for a solution of the following equation for the unknown κ : T n s × J υ → R ω • ∂ q κ(q, t) + ∂ t κ(q, t) = g(q, t), g ∈ A υ s , |g| υ s,g < ∞ (HE B ) where g(t) is a positive, decreasing, integrable function on J υ and g : T n s × J υ → R is given. Lemma 4.1 (Homological Equation). There exists a unique solution κ ∈ A υ s of (HE B ) such that lim t→+∞ |κ t | C 0 = 0. Moreover, |κ| υ s,ḡ ≤ |g| υ s,g . Proof. The proof of this lemma is essentially the same as that of Lemma 3.1. Existence: Let us define the following transformation h(q, t) = (q -ωt, t), then h : T n s × J υ → T n s × J υ because ω ∈ R n , t ∈ J υ ⊂ R and thus q -ωt ∈ T n s if and only if q ∈ T n s . The fact that t is real and not complex is of fundamental importance. This ensures that the latter transformation is well defined. It is enough to prove the first part of this lemma for the much simpler equation ∂ t κ = g(q + ωt, t). The unique solution of the above equation satisfying the asymptotic condition is κ(q, t) = - +∞ t g(q + ωτ, τ )dτ and hence κ(q, t) = κ • h(q, t) = - +∞ t g(q + ω(τ -t), τ )dτ is the unique solution of (HE B ) that we are looking for. Regularity and Estimates : g ∈ A υ s implies κ ∈ A υ s and hence κ = κ • h ∈ A υ s . Moreover, for all fixed t ∈ J υ |κ t | s ≤ |g| υ s,g ḡ(t). We prove the second part of this lemma by multiplying both sides of the latter by 1 ḡ(t) and taking the sup for all t ∈ J υ . Regularity of F Here, we verify that the functional F is well defined and satisfies the hypotheses of the implicit function theorem. We begin with a quantitative lemma fundamental to verify that F is well defined. In the second part of this section, we state, without proving, that F is differentiable with respect to the variables (u, v) and this differential calculated in (0, 0, m, m, 0, 0) is invertible. The proofs are essentially the same as that in Section 3.3.4. The following lemma imposes the first restriction on υ . We will take a stronger one after. Lemma 4.2. For υ large enough with respect to s and b, if (u, v) ∈ U ×V satisfies the following estimates |u| ≤ 1 and |v| ≤ 1, then sup t∈J υ |u t | s 2 ≤ s 8 , sup t∈J υ |v t | s 2 ≤ s 8 . Proof. If |u| ≤ 1 and |v| ≤ 1, then by (#) |u t | s 2 ≤ b(t) ≤ b(υ ), |v t | s 2 ≤ ā(t) ≤ b(t) ≤ b(υ ) . for all t ∈ J υ . Now, for υ large enough, we have the claim. Lemma 4.3. F is well defined. Proof. We prove that, for all (a, b, m, m, u, v) ∈ X , F 1 (b, m, u, v) ∈ Z. By the estimates in Lemma 4.2, the compositions b • ũ(q, t) and m • φ(q, t) are well defined for all (q, t) ∈ T n s 2 × J υ . Concerning the regularity, thanks to Proposition 4.1, F 1 (b, m, u, v) ∈ A υ s 2 . It remains to prove that |F 1 (b, m, u, v)| υ s 2 ,b < ∞. First, let us remind that |F 1 (b, m, u, v)| υ s 2 ,b = sup t∈J υ |F 1 (b, m, u, v) t | s 2 b(t) . Moreover, for all (a, b, m, m, u, v) ∈ X , we have the following estimates |∂ q a| υ s,a ≤ 1, |b| = |b| υ s,b ≤ 1, |m| = |m| υ s,1 ≤ Υ, | m| = | m| υ s,1 ≤ Υ, |u| = max{|u| υ s 2 , b, | (∇u) Ω| υ s 2 ,b } ≤ 1, |v| = max{|v| υ s 2 ,ā , | (∇v) Ω| υ s 2 ,a } ≤ 1. Then, for all t ∈ J υ |F 1 (b, m, u, v) t | s 2 b(t) ≤ (b • ũ) t s 2 b(t) + ( m • φ) t v t s 2 b(t) + |(∇u t ) Ω| s 2 b(t) . 
(4.8) We have to estimate each member on the right-hand side of the latter. The last one is obviously bounded, we have to find an upper bound for the others. Thanks to Lemma 4.2, (#) and the properties contained in Appendix B (b • ũ) t s 2 b(t) ≤ |b t | s b(t) ≤ |b| ≤ 1, ( m • φ) t v t s 2 b(t) ≤ C ( m • φ) t s 2 |v t | s 2 b(t) ≤ C mt s |v t | s 2 ā(t) ≤ CΥ|v| ≤ CΥ, 4.3 Proof of Theorem B for all t ∈ J υ and a suitable constant depending on n. As a consequence of the latter, taking the sup for all t ∈ J υ on the left-hand side of (4.8), we prove |F 1 (b, m, u, v)| υ s 2 ,b < ∞. Similarly, for all (a, b, m, m, u, v) ∈ X , F 2 (a, b, m, u, v) ∈ G. We proved the previous lemma because F defined on X translates into a slightly different proof from that of Lemma 3.2. Furthermore, one can prove that F is continuous and, similarly to the proof of Theorem A, F is differentiable with respect to the variables (u, v) and this differential D (u,v) F(a, b, m, m, u, v) is continuous. Moreover, for all fixed m, m ∈ M, D (u,v) F(0, 0, m, m, 0, 0) is invertible. More specifically, we have the following lemma Lemma 4.4. For all (z, g) ∈ Z × G there exists a unique (û, v) ∈ U × V such that D (u,v) F(0, 0, m, m, 0, 0)(û, v) = (z, g). Moreover, for a suitable constant C |û| ≤ CΥ|g| υ s 2 ,a + |z| υ s 2 ,b , |v| ≤ |g| υ s 2 ,a , where we recall that |v| = max{|v| υ s 2 ,ā , | (∇v) Ω| υ s 2 ,a } and |û| = max{|û| υ s 2 , b, | (∇û) Ω| υ s 2 ,b }. Proof. The proof is essentially the same as that of Lemma 3.4. It relies on Lemma 4.1. Analytic Asymptotic KAM Torus This part is extremely similar to that of the differentiable case (see Section 3. L(x, m, m, y) = y -D (u,v) F(0, 0, m, m, 0, 0) -1 F(x, m, m, y). (L) This is well defined and, by the regularity of F, we deduce that L is continuous, differentiable with respect to y = (u, v) with differential D y L continuous. The proof is reduced to find a fixed point of the latter. For this purpose, we state the following lemma, which is the analytic version of Lemma 3.5. Real Analytic Case Lemma 4.5. There exists υ large enough with respect to s, Υ and b, such that, for all y * ,y ∈ Y with |y * | ≤ 1, |D y L(x, m, m, y * )y| ≤ 1 2 |y|. Proof. The proof of this lemma is extremely similar to that of Lemma 3.5. For this reason, it is omitted. The above lemma proves that L(x, m, m, •) is a contraction and this concludes the proof of Theorem B. Part III Biasymptotic Motions Converging to Quasiperiodic Dynamics 5 Biasymptotic Motions for Time Dependent Hamiltonians In this chapter, we prove the existence of biasymptotically quasiperiodic solutions for time-dependent Hamiltonian vector fields. We consider two different cases. In the first, we work with time-dependent perturbations of integrable Hamiltonians and in the second with time-dependent perturbations of autonomous Hamiltonians having a large (in the sense of measure) subset of invariant tori. The proofs rely on different versions of Theorem A. We need to require more conditions, with respect to Theorem A, to obtain more information. In the integrable case, we prove the existence of biasymptotically quasiperiodic solutions for every initial condition. Concerning the other case, when the unperturbed Hamiltonian has a large subset of invariant tori, we show the existence of a large subset of initial conditions giving rise to biasymptotically quasiperiodic solutions. This chapter has seven sections. We begin by stating and explaining the definition of biasymptotically quasiperiodic solutions (Section 5.1). 
Afterwards, we continue with a part dedicated to the introduction of the result about time-dependent perturbations of integrable Hamiltonians (Section 5.2). The following two sections are dedicated to the proof of this theorem (Section 5.3 and Section 5.4). Then, in the fifth section, we state the result concerning time-dependent perturbations of autonomous Hamiltonians having a large (in the sense of measure) subset of invariant tori (Section 5.5). The proof of this theorem is contained in the last sections (Section 5.6 and Section 5.7) Biasymptotically quasiperiodic solutions In the previous chapters, we focused on non-autonomous Hamiltonian systems defined for positive times. Here, we consider time-dependent Hamiltonian systems defined for all t ∈ R. According to that, it makes sense to distinguish between positive C σ -asymptotic KAM tori and negative C σ -asymptotic KAM tori, as well as introduce the definition of C σ -biasymptotic KAM torus (we refer to the introduction of this thesis for this series of definitions). Unfortunately, we are not able to prove the existence of a C σ -biasymptotic KAM torus. But, we prove some weaker results concerning the existence of biasymptotically quasiperiodic solutions associated with suitable time-dependent Hamiltonian systems. To this end, we recall the following definition. Given σ ≥ 0 and a positive integer k ≥ 0, we consider time-dependent vector fields X t , X t 0,+ , X t 0,-of class C σ+k on T n × B, for all t ∈ R, and embeddings ϕ 0,+ , ϕ 0,-from T n to T n × B of class C σ such that lim t→±∞ |X t -X t 0,± | C σ+k = 0, (5.1) X 0,± (ϕ 0,± (q), t) = ∂ q ϕ 0,± (q)ω ± , for all (q, t) ∈ T n × R (5.2) where ω + , ω -∈ R n . Definition (Définition 2.2). We assume that (X, X 0,± , ϕ 0,± ) satisfy (5.1) and (5.2). An integral curve g(t) of X is a biasymptotically quasiperiodic solution associated to (X, X 0,± ϕ 0,± ) if there exist q -, q + ∈ T n in such a way that lim t→±∞ |g(t) -ϕ 0,± (q ± + ω ± t)| = 0. (5.3) Roughly speaking, a biasymptotically quasiperiodic solution g is an orbit of the time-dependent vector field X, which converges to a quasiperiodic motion of frequency vector ω + ∈ R n in the future and to a quasiperiodic motion of frequency vector ω -∈ R n in the past. Therefore, we conclude this section with an obvious property concerning these motions. Let X, X 0,+ , X 0,-, ϕ 0,+ and ϕ 0,-be as in the previous definition Proposition 5.1. We assume the existence of a positive C σ -asymptotic KAM torus ϕ t + associated to (X, X 0,+ ϕ 0,+ ), a negative C σ -asymptotic KAM torus ϕ t associated to (X, X 0,-ϕ 0,-) and q + , q -∈ T n in such a way that ϕ 0 + (q + ) = ϕ 0 -(q -). Then, letting g(t) = ϕ t + (q + + ω + t) for all t ≥ 0 ϕ t -(q -+ ω -t) for all t ≤ 0, g is a biasymptotically quasiperiodic solution associated to (X, X 0,± ϕ 0,± ). Result (Integrable Case) We recall that, for every function f defined on T n × B × R and, for fixed t ∈ R, we let f t be the function defined on T n × B in such a way that f t (q, p) = f (q, p, t). In addition, for all fixed p 0 ∈ B, we let f p 0 be the function defined on T n × R such that f p 0 (q, t) = f (q, p 0 , t). Given a positive real parameter σ ≥ 1, we have the following definition. Definition 5.1. Let B σ be the space of functions f defined on T n × B × R such that f , ∂ p f ∈ C(T n × B × R) and f t p ∈ C σ (T n ) for all (p, t) ∈ B × R. 
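Returning for a moment to Proposition 5.1, the following toy computation illustrates the gluing it describes. The families phi_plus and phi_minus below are not orbits of any concrete vector field; they are assumptions chosen only so that a positive family and a negative family agree at time 0, which is all the gluing requires, and so that the distances appearing in (5.3) can be displayed.

```python
import numpy as np

# Toy illustration of Proposition 5.1 with n = 1, working with lifts of the angle
# (no reduction mod 2*pi, so that distances are easy to read).
omega_plus, omega_minus = 0.8, -0.3
phi0 = lambda q: np.array([q, 0.0])                                   # trivial embedding
phi_plus = lambda q, t: np.array([q + 0.3 * np.exp(-t) * np.sin(q),
                                  0.2 * np.exp(-t) * np.cos(q)])      # used for t >= 0
phi_minus = lambda q, t: np.array([q + 0.3 * np.exp(t) * np.sin(q),
                                   0.2 * np.exp(t) * np.cos(q)])      # used for t <= 0

q_plus = q_minus = 1.0    # phi_plus(., 0) = phi_minus(., 0), so the same point works

def g(t):
    # the glued curve of Proposition 5.1
    if t >= 0.0:
        return phi_plus(q_plus + omega_plus * t, t)
    return phi_minus(q_minus + omega_minus * t, t)

# the distances in (5.3): they decay as t -> +infinity and as t -> -infinity
for t in [0.0, 5.0, 15.0, -5.0, -15.0]:
    base = q_plus + omega_plus * t if t >= 0 else q_minus + omega_minus * t
    print(t, np.max(np.abs(g(t) - phi0(base))))
```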
Result (Integrable Case) For all f ∈ B σ and l ≥ 1, we define |f | σ,l = sup (p,t)∈B×R |f t p | C σ (1 + |t| l ) + sup (p,t)∈B×R | (∂ p f ) t p | C 0 (1 + |t| l-1 ), (5.4) |f | σ,0 = sup (p,t)∈B×R |f t p | C σ + sup (p,t)∈B×R | (∂ p f ) t p | C 0 . (5.5) In what follows, we provide a series of properties of the previous norms that we will widely use in the rest of this chapter. Nevertheless, first, we recall that C(•) stands for constants depending on n and the other parameters in brackets. On the other hand, C stands for constants depending only on n. Proposition 5.2. Given σ ≥ 1, for all f , g ∈ B σ and positive l, m ≥ 1 a. |f | σ,l ≤ |f | s,l for all 1 ≤ σ ≤ s, b. |f | σ,l ≤ C(l, m)|f | σ,l+m c. |f g| σ,l+m ≤ C(σ) (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ). Moreover, we consider g ∈ B σ such that, for all (q, p, t) ∈ T n × B × R, g(q, p, t) = (g(q, p, t), p, t). Then f • g ∈ B σ and d. |f • g| σ,l+m ≤ C(σ) |f | σ,l |g| σ 1,m + |f | 1,l |g| σ,m + |f | 0,l+m . Before the proof, we observe that the previous properties are still verified when l = m = 0 or only one of the two parameters l and m is zero. Proof. The proof rests on Proposition A.2 (see Appendix A). Properties a. and b. are obvious. Then, we verify the others. c. For all fixed (p, t) ∈ B × R, by property 2. of Proposition A.2 f t p g t p C σ 1 + |t| l+m ≤ C(σ) |f t p | C 0 |g t p | C σ + |f t p | C σ |g t p | C 0 1 + |t| l (1 + |t| m ) ≤ C(σ) |f t p | C 0 1 + |t| l |g t p | C σ (1 + |t| m ) + |f t p | C σ 1 + |t| l |g t p | C 0 (1 + |t| m ) ≤ C(σ) (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ) where in the second line we use 1 + |t| l+m ≤ 1 + |t| l (1 + |t| m ). Taking the sup for all (p, t) ∈ B × R on the left-hand side of the latter, we obtain sup (p,t)∈B×R f t p g t p C σ 1 + |t| l+m ≤ C(σ) (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ) . It remains to prove that the second term of the norm (see the right-hand side of (5.4)) also satisfies the same estimate. For all fixed (p, t) ∈ B × R (∂ p (f g)) t p C 0 1 + |t| l+m-1 = (∂ p f ) t p g t p + f t p (∂ p g) t p C 0 1 + |t| l+m-1 ≤ (∂ p f ) t p g t p C 0 + f t p (∂ p g) t p C 0 1 + |t| l+m-1 ≤ C (∂ p f ) t p C 0 1 + |t| l-1 g t p C 0 (1 + |t| m ) + C (∂ p g) t p C 0 1 + |t| m-1 f t p C 0 1 + |t| l ≤ C (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ) and taking the sup for all (p, t) ∈ B × R on the left-hand side of the latter, we prove the estimate. d. For all fixed (p, t) ∈ B × R and thanks to property 5. of Proposition A.2 |f t p • g t p | C σ 1 + |t| l+m ≤ C(σ) f t p C σ g t p σ C 1 + f t p C 1 g t p C σ + f t p C 0 1 + |t| l+m ≤ C(σ) f t p C σ 1 + |t| l g t p σ C 1 (1 + |t| m ) σ + f t p C 1 1 + |t| l g t p C σ (1 + |t| m ) + f t p C 0 1 + |t| l+m ≤ C(σ) |f | σ,l |g| σ 1,m + |f | 1,l |g| σ,m + |f | 0,l+m where in the second line we use (1 + |t| m ) ≤ (1 + |t| m ) σ . Taking the sup for all (p, t) ∈ B 1 × R on the left-hand side of the latter, sup (p,t)∈B×R |f t p • g t p | C σ 1 + |t| l+m ≤ C(σ) |f | σ,l |g| σ 1,m + |f | 1,l |g| σ,m + |f | 0,l+m . Concerning the second term of the norm (see (5.4)), for all fixed (p, t) ∈ B × R (∂ p (f • g)) t p C 0 1 + |t| l+m-1 = (∂ q f ) t p • g t p (∂ p g) t p C 0 1 + |t| l+m-1 + (∂ p f ) t p • g t p C 0 1 + |t| l+m-1 ≤ C (∂ q f ) t p C 0 1 + |t| l (∂ p g) t p C 0 1 + |t| m-1 + (∂ p f ) t p C 0 1 + |t| l+m-1 ≤ C (|f | 1,l |g| σ,m + |f | 0,l+m ) . Taking the sup over (p, t) ∈ B × R on the left-hand side of the latter, sup (p,t)∈B×R (∂ p (f • g)) t p C 0 1 + |t| l+m-1 ≤ C (|f | 1,l |g| σ,m + |f | 0,l+m ) . This concludes the proof of this lemma. 
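The following toy computation gives a numerical sanity check of the product estimate (property c.) in the simplest situation: functions independent of q, so that every C^σ norm reduces to the C^0 norm, and only the first supremum in the norm (5.4) is tested. The functions f, g, the exponents l, m and the grids are assumptions made only for the check.

```python
import numpy as np

# C^0-level check of property c.: the pointwise inequality
# (1 + t^(l+m)) <= (1 + t^l)(1 + t^m) gives the product estimate with constant 1.
l, m = 2.0, 3.0
t = np.linspace(0.0, 200.0, 20001)
p = np.linspace(-1.0, 1.0, 11)
T, P = np.meshgrid(t, p, indexing="ij")

f = P / (1.0 + T ** l)            # |f^t_p|_{C^0} decays like t^(-l)
g = np.cos(P) / (1.0 + T ** m)    # |g^t_p|_{C^0} decays like t^(-m)

def weighted_sup(h, weight_exponent):
    # sup over (p, t) of |h| * (1 + |t|^weight_exponent), the first term of (5.4)
    return np.max(np.abs(h) * (1.0 + T ** weight_exponent))

lhs = weighted_sup(f * g, l + m)
rhs = weighted_sup(f, l) * weighted_sup(g, m)
print(lhs, rhs, lhs <= rhs)
```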
Now, we recall the following space of functions. Definition 5.2. Given σ ≥ 1 and an integer k ≥ 0, we define Bσ,k the space of functions f such that f ∈ B σ+k , and ∂ i q f ∈ B σ+k-i for all 0 ≤ i ≤ k. We observe that B σ is the space of functions introduced by Definition 5.1. Furthermore, for all f ∈ Bσ,k and l > 1, we consider the following norm f σ,k,l = max 0≤i≤k |∂ i q f | σ+k-i,l , (5.6) where | • | σ,l is the norm defined by (5.4) Result (Integrable Case) In the previous definition and Definition 5.1, B ∈ R n is a ball with some unspecified radius. In what follows, we will pay attention to the radius of B. Let B r ⊂ R n be a ball centred at the origin with radius r > 0. If a function f defined on T n × B r × R belongs to B σ , we consider that f satisfies the properties in Definition 5.1 with B replaced by B r . Now, we have everything we need to state the main result of this first part. Let σ ≥ 1, Υ ≥ 1, l > 1 and 0 < ε < 1. We consider the following Hamiltonian                          H : T n × B 1 × R -→ R H(q, p, t) = h(p) + f (q, p, t) f, ∂ p f ∈ Bσ,2 , |f | σ+2,0 + ∂ q f σ,1,l+2 + ∂ p f σ,2,l+1 < ε, ∂ 2 p H t ∈ C σ+2 (T n × B 1 ) for all fixed t ∈ R ∂ i qp ∂ 2 p H ∈ C(T n × B 1 × R) for all 0 ≤ i ≤ 3. sup t∈R |∂ 2 p H t | C σ+2 ≤ Υ. ( * C ) For each p ∈ B 1 , we consider the following trivial embedding ϕ 0,p : T n → T n × B 1 , ϕ 0,p (q) = (q, p). Theorem C. Let H be as in ( * C ). Then, there exists a time-dependent Hamiltonian h such that, if ε is small enough with respect to n, l, Υ and |∂ p h| C 1 , for all (q, p) ∈ T n × B 1 2 there exist p -, p + ∈ B 1 and a biasymptotically quasiperiodic solution g(t) associated to (X H , X h, ϕ 0,p ± ) such that g(0) = (q, p). Instead of proving this theorem directly, we are going to deduce it from another theorem. Let σ ≥ 1, Υ ≥ 1, l > 1 and 0 < ε < 1. We consider the following family of Hamiltonians                                H : T n × B1 4 × R × B3 4 -→ R H(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I +a(θ, t; p 0 ) + b(θ, t; p 0 ) • I + m(θ, I, t; p 0 ) • I 2 ω ∈ C 1 (B3 4 ), a, b ∈ Bσ,2 , |a| σ+2,0 + ∂ θ a σ,1,l+2 < ε, b σ,2,l+1 < ε, ∂ 2 I H t ∈ C σ+2 (T n × B1 4 × B 3 4 ) for all fixed t ∈ R ∂ i θIp 0 (∂ 2 I H) ∈ C(T n × B 1 4 × R × B 3 4 ) for all 0 ≤ i ≤ 3. sup t∈R |∂ 2 I H t | C σ+2 ≤ Υ. ( ) We define the following family of trivial embeddings ψ 0 : T n × B 3 4 -→ T n × B1 4 , ψ 0 (θ, p 0 ) = (θ, 0) (5.7) and we consider the following family Hamiltonians h : T n × B1 4 × R × B3 4 → R such that h(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I + m(θ, I, t; p 0 ) • I 2 . 5 Biasymptotic Motions for Time Dependent Hamiltonians Theorem 5.1. Let H be as in ( ). Then, if ε is sufficiently small with respect to n, l, Υ and |ω| C 1 , for all fixed p 0 ∈ B3 4 , there exists a positive C σ -asymptotic KAM torus ψ t +p 0 and a negative C σ -asymptotic KAM torus ψ t -p 0 associated to (X Hp 0 , X hp 0 , ψ 0,p 0 ). Moreover, letting ψ t ± : T n × B 3 4 -→ T n × B 1 4 , ψ t ± (q, p 0 ) = ψ t ±,p 0 (q), there exists a constant C 0 depending on n, l, Υ and |ω| C 1 such that sup t≥0 |ψ t + -ψ 0 | C 1 < C 0 ε, sup t≤0 |ψ t --ψ 0 | C 1 < C 0 ε. (5.8) Proof of Theorem C assuming Theorem 5.1 In this section, we assume Theorem 5.1 and we deduce Theorem C. First, we introduce the following well-known property. Proposition 5.3. Given r > 0 and 0 < < δ < r, let φ be a map φ : B r -→ B r+δ of class C 1 such that |φ -Id| C 1 < . Then, for small enough, φ is a diffeomorphism onto its image and B r-δ ⊂ φ(B r ). 
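A one-dimensional numerical illustration of Proposition 5.3 is the following; the radius r, the margin δ and the perturbation are chosen arbitrarily for the sketch and are not tied to the Hamiltonian data.

```python
import numpy as np

# phi = Id + (small C^1 perturbation) on B_r = [-r, r]; here |pert|_{C^0} = 0.05
# and |pert'|_{C^0} = 0.1, both smaller than delta = 0.2.
r, delta = 1.0, 0.2
pert = lambda p: 0.05 * np.sin(2.0 * p)
phi = lambda p: p + pert(p)

p = np.linspace(-r, r, 2001)
values = phi(p)
injective = np.all(np.diff(values) > 0)     # phi' >= 1 - 0.1 > 0, so phi is monotone
covers = (values[0] <= -(r - delta)) and (values[-1] >= r - delta)
print("injective on B_r:", injective, " B_{r-delta} inside phi(B_r):", covers)
```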
For all p 0 ∈ B3 4 , we let p = p 0 + I and we expand the Hamiltonian ( * C ) around p 0 so that h(p) = h(p 0 ) + ∂ p h(p 0 ) • I + 1 0 (1 -τ )∂ 2 p h(p τ )dτ • I 2 f (q, p, t) = f (q, p 0 , t) + ∂ p f (q, p 0 , t) • I + 1 0 (1 -τ )∂ 2 p f (q, p τ , t)dτ • I 2 where p τ = p 0 + τ I and I ∈ B1 4 . For any p 0 ∈ B3 4 , we define e(p 0 ) = h(p 0 ) ω(p 0 ) = ∂ p h(p 0 ) a(q, t; p 0 ) = f (q, p 0 , t) b(q, t; p 0 ) = ∂ p f (q, p 0 , t) m(q, I, t; p 0 ) = 1 0 (1 -τ ) ∂ 2 p h(p τ ) + ∂ 2 p f (q, p τ , t) dτ = 1 0 (1 -τ )∂ 2 p H(q, p τ , t)dτ, for all (q, I, t) ∈ T n × B 1 4 × R. Writing θ instead of q for the angular variables, we can rewrite H in the following form as a family of Hamiltonians parametrized by p 0 ∈ B 3 4 , H : T n × B1 4 × R × B3 4 -→ R H(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I (5.9) + a(θ, t; p 0 ) + b(θ, t; p 0 ) • I + m(θ, I, t; p 0 ) • I 2 . , X H and X h satisfy (5.1). Furthermore, for all p 0 ∈ B3 4 , it is obvious that h has an invariant torus supporting quasiperiodic dynamics. It is straightforward to verify that the Hamiltonian H defined by (5.9) satisfies the hypotheses of Theorem 5.1. Then, there exists a family of positive C σasymptotic KAM tori ψ t + : T n × B 3 4 -→ T n × B1 4 associated to (X H , X h, ψ 0 ) and a family of negative C σ -asymptotic KAM tori ψ t -: T n × B 3 4 -→ T n × B1 4 associated to (X H , X h, ψ 0 ) , where ψ 0 is the family of trivial embeddings introduced by (5.7). Moreover, we have sup t≥0 |ψ t + -ψ 0 | C 1 < C 0 ε, sup t≤0 |ψ t --ψ 0 | C 1 < C 0 ε, where C 0 is a constant depending on n, l, Υ and |ω| C 1 . Therefore, by the latter, there exist u t ± , v t ± : T n × B 3 4 → R n such that we can rewrite ψ t + and ψ t -in the following form ψ t ± (θ, p 0 ) = (θ + u t ± (θ, p 0 ), v t ± (θ, p 0 )) for all (θ, p 0 ) ∈ T n × B 3 4 with sup t≥0 |u t + | C 1 < C 0 ε, sup t≥0 |v t + | C 1 < C 0 ε, sup t≤0 |u t -| C 1 < C 0 ε, sup t≤0 |v t -| C 1 < C 0 ε. By construction, an orbit (θ(t), I(t)) for the previous Hamiltonian at the parameter value p 0 ∈ B3 4 translates into a trajectory (q(t), p(t)) = (θ(t), p 0 + I(t)) for the Hamiltonian in (q, p)-coordinates. Then, letting ϕ 0 : T n × B 3 4 -→ T n × B 1 , ϕ 0 (q, p 0 ) = (q, p 0 ), (5.10) the following family of maps ϕ t ± : T n × B 3 4 -→ T n × B 1 , ϕ t ± (q, p 0 ) = (q + u t ± (q, p 0 ), p 0 + v t ± (q, p 0 ) ) is a family of positive (resp. negative) C σ -asymptotic KAM tori associated to (X H , X h, ϕ 0 ). In other words, for all p 0 ∈ B 3 4 , ϕ t +p 0 (resp. ϕ t -p 0 ) is a positive (resp. negative) C σ -asymptotic KAM torus associated to (X H , X h, ϕ 0,p 0 ). Thanks to Proposition 5.3, T n × B 1 2 ⊂ ϕ 0 ± (T n × B3 4 ). This concludes the proof of the theorem because, for all (q, p 0 ) ∈ T n × B 1 2 , there exist (q + , p 0+ ), (q -, p 0-) ∈ T n × B 3 4 such that ϕ 0 + (q + , p 0+ ) = (q, p 0 ) = ϕ 0 -(q -, p 0-). Then, by Proposition 5.1 there exists a biasymptotically quasiperiodic solution g(t) associated to (X H , X h, ϕ 0,p 0± ) such that g(0) = (q, p 0 ). Proof of Theorem 5.1 The proof of Theorem 5.1 is essentially the same as Theorem A with suitable modifications, especially in the section dedicated to the homological equation. In the third chapter of this thesis, we show the existence of a positive C σ -asymptotic KAM torus for a suitable time-dependent Hamiltonian. Here, the Hamiltonian in ( ) consists of a family of Hamiltonians parametrized by p 0 ∈ B3 4 . This section aims to prove the existence of a positive C σ -asymptotic KAM torus for each p 0 ∈ B 3 4 . 
Similarly, we have the claim concerning the existence of a family of negative C σ -asymptotic KAM tori. Therefore, we need these families to satisfy (5.8). This control on the variables (θ, p 0 ) is the reason why we need to assume a stronger time decay for a and b in ( ) than that of Theorem A. Outline of the proof of Theorem 5.1 We are looking for a family of positive C σ -asymptotic KAM tori ψ t associated to (X H , X h, ψ 0 ), where we drop the subscript + in order to obtain a more elegant form. Here, H is the Hamiltonian in ( ) and ψ 0 is the following family of trivial embeddings ψ 0 : T n × B3 4 → T n × B1 4 , ψ 0 (θ, p 0 ) = (θ, 0). More specifically, we are looking for u, v : T n × R + × B 3 4 → R n such that ψ(θ, t; p 0 ) = (θ + u(θ, t; p 0 ), v(θ, t; p 0 )) and, for all fixed p 0 ∈ B3 4 , ψ, u and v satisfy the following conditions X H (ψ(θ, t; p 0 ), t; p 0 ) -∂ θ ψ(θ, t; p 0 )ω(p 0 ) -∂ t ψ(θ, t; p 0 ) = 0 (5.11) lim t→+∞ |u t p 0 | C σ = 0, lim t→+∞ |v t p 0 | C σ = 0, (5.12) for all (θ, t) ∈ T n × R + . As mentioned before, this proof relies on the implicit function theorem. As we did in the previous chapters, we introduce a suitable functional F given by (5.11). To this end, we recall the following definitions. For all (θ, I, t; p 0 ) ∈ T n × B1 4 × R + × B3 4 , m(θ, I, t; p 0 )I = 1 0 ∂ 2 p H(θ, p 0 + τ I, t)dτ I = ∂ I m(θ, I, t; p 0 ) • I 2 , ψ(θ, t; p 0 ) = (ψ(θ, t; p 0 ), t; p 0 ) = (θ + u(θ, t; p 0 ), v(θ, t; p 0 ), t; p 0 ), ũ(θ, t; p 0 ) = (θ + u(θ, t; p 0 ), t; p 0 ) ∇ θt u(θ, t; p 0 )Ω(p 0 ) = ∂ θ u(θ, t; p 0 )ω(p 0 ) + ∂ t u(θ, t; p 0 ), ∇ θt v(θ, t; p 0 )Ω(p 0 ) = ∂ θ v(θ, t; p 0 )ω(p 0 ) + ∂ t v(θ, t; p 0 ). Similarly to the proof of Theorem A, we can rewrite (5.11) in the following form (see Section 3.3.1) This is composed by sums and products of functions defined of (θ, t; p 0 ) ∈ T n × R + × B 3 4 , we have omitted the arguments (θ, t; p 0 ) in order to achieve a more elegant form. Over suitable Banach spaces, that we will specify later, let F be the following functional   b • ũ + m • ψ v -(∇ θt u) Ω -∂ θ a • ũ -(∂ θ b • ũ) v -∂ θ m • ψ • v 2 -(∇ θt v) Ω   = 0 0 . ( 5 F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)), with F 1 (b, m, u, v) = b • ũ + m • ψ v -(∇ θt u) Ω, F 2 (a, b, m, u, v) = ∂ θ a • ũ + (∂ θ b • ũ) v + ∂ θ m • ψ • v 2 + (∇ θt v) Ω which is obtained by (5.13). We can reformulate our problem in the following form. For fixed m and m in a suitable Banach space and for (a, b) sufficiently close to (0, 0), we are looking for u, v : T n × R + × B 1 → R n satisfying (5.12) such that F(a, b, m, m, u, v) = 0. Here, contrary to Theorem A, we can not get rid of the smallness assumption over a and b (see ( )). This is because we are looking for solutions u and v with |u t | C 1 and |v t | C 1 sufficiently small for all t ∈ R + and not only for t sufficiently large. Concerning the associated linearized problem, the differential of F with respect to the variables (u, v) calculated in (0, 0, m, m, 0, 0) is equal to D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇ θt û)Ω, (∇ θt v)Ω), where, for all (θ, t, p 0 ) ∈ T n × B 1 4 × B 3 4 , we let m0 (θ, t; p 0 ) = m0 (θ, 0, t; p 0 ). As one can expect, this differential is invertible. In the following four sections we prove Theorem 5.1. First, we introduce suitable Banach spaces on which the functional F is defined. In the second section, we solve the homological equation, which is the key to prove that the latter operator is invertible. 
In the third section, we verify that F satisfies the hypotheses of the implicit function theorem. Finally, we verify the estimates and we conclude the proof of the theorem. Preliminary Settings We begin this section by introducing some notations, spaces of functions and suitable norms. Given σ ≥ 1, we have the following definition Definition 5.3. Let B + σ be the space of functions f defined on T n × R + × B3 4 such that f , ∂ p 0 f ∈ C(T n × R + × B3 4 ) and f t p 0 ∈ C σ (T n ) for all (t, p 0 ) ∈ R + × B 3 4 . For all f ∈ B + σ and l > 1, we define the following norms |f | + σ,l = sup (t,p 0 )∈R + ×B 3 4 |f t p 0 | C σ (1 + t l ) + sup (t,p 0 )∈R + ×B 3 4 | (∂ p 0 f ) t p 0 | C 0 (1 + t l-1 ), |f | + σ,0 = sup (t,p 0 )∈R + ×B 3 4 |f t p 0 | C σ + sup (t,p 0 )∈R + ×B 3 4 | (∂ p 0 f ) t p 0 | C 0 . These norms satisfy the properties in Proposition 5.2. As one can expect, we define the following subset of B + σ Definition 5.4. Given σ ≥ 1 and an integer k ≥ 0, we define B+ σ,k the space of functions f such that f ∈ B + σ+k , and ∂ i q f ∈ B + σ+k-i for all 0 ≤ i ≤ k. We conclude this part of settings with the following norm. For all f ∈ B+ σ,k and l > 1, we define f + σ,k,l = max 0≤i≤k |∂ i q f | + σ+k-i,l . Now, let σ ≥ 1 and l > 1 be the positive parameters introduced by ( ). We consider the following Banach spaces (A, | • |), (B, | • |), (U, | • |), (V, | • |), (Z, | • |) and (G, | • |) A = a : T n × R + × B 3 4 → R | a ∈ B+ σ,2 and |a| = |a| + σ+2,0 + ∂ θ a + σ,1,l+2 < ∞ B = b : T n × R + × B3 4 → R n | b ∈ B+ σ,2, , and |b| = b + σ,2,l+1 < ∞ U = u : T n × R + × B3 4 → R n | u, (∇ θt u) Ω ∈ B + σ and |u| = max{|u| + σ,l , | (∇ θt u) Ω| + σ,l+1 } < ∞ V = v : T n × R + × B3 4 → R n | v, (∇ θt v) Ω ∈ B + σ and |v| = max{|v| + σ,l+1 , | (∇ θt v) Ω| + σ,l+2 } < ∞ Z = z : T n × R + × B 3 4 → R n | z ∈ B + σ , and |z| = |z| + σ,l+1 < ∞ G = g : T n × R + × B3 4 → R n | g ∈ B + σ , and |g| = |g| + σ,l+2 < ∞ Similarly to what we did in Appendix C, it is straightforward to verify that the previous normed spaces are Banach spaces. Let M n be the set of the n-dimensional matrices and Υ ≥ 1 the positive parameter in ( ). We introduce another Banach space (M, | • |), such that M = m : T n × B 1 4 × R + × B 3 4 → M n | ∂ i θIp 0 m ∈ C(T n × B 1 4 × R + × B3 4 ) for all 0 ≤ i ≤ 3, m t ∈ C σ+2 (T n × B 1 4 × B3 4 ) for all fixed t ∈ R + and |m| = sup t∈R + |m t | C σ+2 ≤ Υ Let F be the following functional F : A × B × M × M × U × V -→ Z × G F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + m • ψ v -(∇ θt u) Ω, F 2 (a, b, m, u, v) = ∂ θ a • ũ + (∂ θ b • ũ) v + ∂ θ m • ψ • v 2 + (∇ θt v) Ω. 5.4 Proof of Theorem 5.1 The following section is devoted to the solution of the homological equation. Meanwhile, in the last part of the proof, we verify that F is well defined and it satisfies the hypotheses of the implicit function theorem. Homological equation Before analyzing the homological equation, let us prove the following estimates Lemma 5.1. Given m > 1, +∞ t 1 1 + τ m dτ ≤ C(m) 1 + t m-1 , +∞ t τ -t 1 + τ m+1 dτ ≤ C(m) 1 + t m-1 (5.14) for all t ≥ 0 and some constants C(m) depending on m. Proof. We define the following function f m : R + -→ R such that f m (t) = 1 + t m-1 +∞ t 1 1 + τ m dτ. It is straightforward to verify that f is continuous. We will prove the existence of a constant C(m), depending on m, such that f m (t) ≤ C(m) for all t ≥ 0, which implies the first estimate in (5.14). It suffices to prove that there exists lim t→+∞ f m (t) and it is finite. 
Thanks to l'Hôpital's rule lim t→+∞ f m (t) = lim t→+∞ d dt +∞ t 1 1+τ m dτ d dt 1 1+t m-1 = lim t→+∞ (1 + t m-1 ) 2 (m -1)t m-2 (1 + t m ) = 1 m -1 . Concerning the second inequality in (5.14), similarly to the previous case, we define the following function g m : R + -→ R such that g m (t) = 1 + t m-1 +∞ t τ -t 1 + τ m+1 dτ. One can see that g m is continuous. We have to verify that there exists lim t→+∞ g m (t) and it is finite. Applying l'Hôpital's rule twice lim t→+∞ g m (t) = lim t→+∞ d dt +∞ t τ -t 1+τ m+1 dτ d dt 1 1+t m-1 = lim t→+∞ +∞ t 1 1+τ m dτ (m-1)t m-2 (1+t m-1 ) 2 = lim t→+∞ d dt +∞ t 1 1+τ m dτ d dt (m-1)t m-2 (1+t m-1 ) 2 = lim t→+∞ (1 + t m-1 ) 4 (1 + t m+1 ) t 3m-5 h m (t) where h m (t) = (m -1) 2(m -1) 1 t m-1 + 1 -(m -2) 1 t m-1 + 1 2 . Then, by the latter lim t→+∞ g m (t) = lim t→+∞ (1 + t m-1 ) 4 (1 + t m+1 ) t 3m-5 h m (t) = lim t→+∞ 1 t m-1 + 1 4 1 t m+1 + 1 h m (t) = 1 m(m -1) . Biasymptotic Motions for Time Dependent Hamiltonians Given σ ≥ 1 and l > 1, we consider the following equation in the unknown κ : T n × R + × B 3 4 → R      ω(p 0 ) • ∂ q κ(θ, t; p 0 ) + ∂ t κ(θ, t; p 0 ) = g(θ, t; p 0 ), g ∈ B + σ , |g| + σ,l+1 < ∞, ω : B3 4 -→ R n , ω ∈ C 1 (B3 4 ). (HE C ) Lemma 5.2 (Homological Equation). There exists a unique solution κ ∈ B + σ of (HE C ) such that, for all fixed p 0 ∈ B 3 4 , lim t→+∞ |κ t p 0 | C 0 = 0. Moreover, |κ| + σ,l ≤ C|g| + σ,l+1 (5.15) for a suitable constant C depending on n, l and |ω| C 1 . Proof. Existence: We sketch this first part because it is similar to Lemma 3.1 (see Section 3.3.3). Let us define the following transformation h : T n × R + × B 3 4 → T n × R + × B 3 4 , h(θ, t; p 0 ) = (θ -ω(p 0 )t, t; p 0 ). It is enough to prove the first part of this lemma for the much simpler equation ∂ t κ(θ, t; p 0 ) = g(θ + ω(p 0 )t, t; p 0 ). The unique solution of the above equation satisfying the asymptotic condition is equal to κ(θ, t; p 0 ) = - +∞ t g(θ + ω(p 0 )τ, τ ; p 0 )dτ and hence composing k with h κ(θ, t; p 0 ) = κ • h(θ, t; p 0 ) = - +∞ t g(θ + ω(p 0 )(τ -t), τ, p 0 )dτ (5.16) is the unique solution of (HE C ) that we are looking for. Regularity and Estimates: g ∈ B + σ implies κ ∈ B + σ and thus κ = κ • h ∈ B + σ . Now, we have to verify the estimate (5.15). By (5.16) and Lemma 5.1 |κ t p 0 | C σ ≤ +∞ t |g τ p 0 | C σ dτ ≤ |g| + σ,l+1 +∞ t 1 1 + τ l+1 dτ ≤ C(l) |g| + σ,l+1 1 + t l , for all fixed (t, p 0 ) ∈ R + × B 3 4 . Multiplying both sides of the latter by 1 + t l and taking the sup for all R + × B 3 4 , we obtain sup (t,p 0 )∈R + ×B 3 4 |κ t p 0 | C σ (1 + t l ) ≤ C(l)|g| + σ,l+1 . 5.4 Proof of Theorem 5.1 It remains to estimate the second member of the norm | • | + σ,l . The partial derivate of κ with respect to p 0 is equal to ∂ p 0 κ(θ, t; p 0 ) = - +∞ t ∂ p 0 ω(p 0 )∂ θ g(θ + ω(p 0 )(τ -t), τ ; p 0 )(τ -t)dτ - +∞ t ∂ p 0 g(θ + ω(p 0 )(τ -t), τ ; p 0 )dτ. Then, thanks to Lemma 5.1, we can estimate the norm C 0 on the left-hand side of the latter as follows |∂ p 0 κ t p 0 | C 0 ≤ +∞ t |g τ p 0 | C 1 |ω| C 1 (τ -t) + |∂ p 0 g τ | C 0 dτ ≤ |g| + 1,l+1 |ω| C 1 +∞ t (τ -t) 1 + τ l+1 dτ + |g| + 1,l+1 +∞ t 1 1 + τ l dτ, ≤ C(l, |ω| C 1 ) |g| + 1,l+1 1 + t l-1 . Similarly to the previous case, by multiplying both sides of the previous inequality by 1 + t l-1 and taking the sup for all R + × B 3 4 , we have sup (t,p 0 )∈R + ×B 1 |∂ p 0 κ t p 0 | C 0 (1 + t l-1 ) ≤ C(l, |ω| C 1 )|g| + 1,l+1 . Moreover, reminding that |g| + 1,l+1 ≤ |g| + σ,l+1 , we conclude the proof of this lemma. 
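The explicit formula for κ lends itself to a direct numerical check. In the sketch below the parameter p_0 is dropped and n = 1 for readability; the datum g, the frequency ω, the exponent l and the quadrature are illustrative assumptions. The script verifies the equation by finite differences and displays the weighted decay of the solution, as in (5.15).

```python
import numpy as np
from scipy.integrate import quad

# kappa(theta, t) = - integral over [t, +infinity) of g(theta + omega*(tau - t), tau), n = 1.
omega, l = 0.7, 2.0
g = lambda theta, t: np.cos(theta) / (1.0 + t) ** (l + 1.0)

def kappa(theta, t):
    integrand = lambda tau: g(theta + omega * (tau - t), tau)
    value, _ = quad(integrand, t, np.inf, limit=200)
    return -value

# check omega * d_theta kappa + d_t kappa = g by centred finite differences
theta0, t0, h = 1.3, 2.0, 1e-3
d_theta = (kappa(theta0 + h, t0) - kappa(theta0 - h, t0)) / (2.0 * h)
d_t = (kappa(theta0, t0 + h) - kappa(theta0, t0 - h)) / (2.0 * h)
print("residual:", omega * d_theta + d_t - g(theta0, t0))   # small, up to quadrature error

# the quantity |kappa^t|_{C^0} * (1 + t^l) stays bounded, as the lemma predicts
for t in [0.0, 5.0, 20.0, 80.0]:
    sup_kappa = max(abs(kappa(th, t)) for th in np.linspace(0.0, 2.0 * np.pi, 60))
    print(t, sup_kappa * (1.0 + t ** l))
```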
Regularity of F The content of this section is essentially the same as that of Theorem A (see Section 3.3.4). By Proposition 5.2, F is well defined, continuous, differentiable with respect to the variables (u, v) and this differential D (u,v) F is continuous. As we have already seen, D (u,v) F calculated in (0, 0, m, m, 0, 0) is equal to D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇ θt û)Ω, (∇ θt v)Ω). (5.17) It remains to verify that, for all fixed m, m ∈ M, the latter is invertible. Lemma 5.3. For all (z, g) ∈ Z × G there exists a unique (û, v) ∈ U × V such that D (u,v) F(0, 0, m, m, 0, 0)(û, v) = (z, g). Moreover, for a suitable constant C depending on n, l and |ω| C 1 |û| ≤ C | m0 | + σ,0 |g| + σ,l+2 + |z| + σ,l+1 , |v| ≤ C|g| + σ,l+1 , (5.18) where we recall that |u| = max{|u| + σ,l , | (∇ θt u) Ω| + σ,l+1 } and |v| = max{|v| + σ,l+1 , | (∇ θt v) Ω| + σ,l+2 }. 5 Biasymptotic Motions for Time Dependent Hamiltonians Proof. The key of the proof is Lemma 5.2. By (5.17), the proof consists in searching for the unique solution to the following system m0 v -(∇ θt û)Ω = z (∇ θt v)Ω = g. (5.19) Thanks to Lemma 5.2, a unique solution v for the last equation of the above system exists and |v| + σ,l+1 ≤ C(|ω| C 1 , l)|g| + σ,l+2 . (5.20) Moreover, by |(∇ θt v) Ω| + σ,l+2 = |g| + σ,l+2 , we obtain the following estimate |v| = max{|v| + σ,l+1 , | (∇ θt v) Ω| + σ,l+2 } ≤ C(|ω| C 1 , l)|g| + σ,l+1 . Now, it remains to solve the first equation of the system (5.20) where v is known. We can rewrite this equation in the following form (∇ θt û)Ω = m0 v -z. (5.21) By Proposition 5.2 and (5.20), we can estimate the norm | • | σ,l+1 on the right-hand side of the latter as follows | m0 v -z| + σ,l+1 ≤ | m0 | + σ,0 |v| + σ,l+1 + |z| + σ,l+1 ≤ C(|ω| C 1 , l) | m0 | + σ,0 |g| + σ,l+2 + |z| + σ,l+1 . This implies |(∇ θt û) Ω| + σ,l+1 = | m0 v -z| + σ,l+1 ≤ C(|ω| C 1 , l) | m0 | + σ,0 |g| + σ,l+2 + |z| + σ,l+1 and thanks to Lemma 5.2, a unique solution of (5.21) exists satisfying |û| + σ,l ≤ C(|ω| C 1 , l) | m0 | + σ,0 |g| + σ,l+2 + |z| + σ,l+1 . This concludes the proof of this lemma because |û| = max{|û| + σ,l , | (∇ θt û) Ω| + σ,l+1 } ≤ C(|ω| C 1 , l) | m0 | + σ,0 |g| + σ,l+2 + |z| + σ,l+1 . Families of C σ -asymptotic tori The functional F satisfies the hypotheses of the implicit function theorem. Then, for ε small enough, there exists a family of positive C σ -asymptotic KAM tori ψ t + : T n × B 3 4 -→ T n × B1 4 associated to (X H , X h, ψ 0 ) , where ψ 0 is the following family of trivial embeddings We assume that ψ 0 : T n × B3 4 → T n × B1 F : X 0 × Y 0 -→ Z is continuous and has the property that D y F exists and is continuous at each point of X 0 × Y 0 . Moreover, D y F(x 0 , y 0 ) is invertible and sup x∈X 0 D y F(x 0 , y 0 ) -1 F(x, y 0 ) ≤ µ 2 (5.22) sup (x,y)∈X 0 ×Y 0 Id -D y F(x 0 , y 0 ) -1 D y F(x, y) ≤ 1 2 (5.23) where Id ∈ M n is the identity matrix and • stands for the operator norm. Then there exists a unique g ∈ C(X 0 , Y 0 ) such that g(x 0 ) = y 0 and F(g(x), x) = 0 for all x ∈ X 0 . Proof. We refer to [START_REF] Chierchia | Lezioni di analisi matematica[END_REF]. We observe that condition (5.22) tells us how to choose µ as a function of ε. Then (5.23) determines a threshold for ε. In order to conclude the proof of Theorem 5.1, we have to establish the relation between ε and µ. Lemma 5.4. The estimates (5.8) are satisfied. Proof. The proof relies on Lemma 5.3 and (5.22). 
For all fixed m, m ∈ M and for all (a, b) ∈ A × B, D (u,v) F(0, 0, m, m, 0, 0) -1 F(a, b, m, m, 0, 0) = D (u,v) F(0, 0, m, m, 0, 0) -1 b ∂ θ a . We want to estimate the right-hand side of the latter. Therefore, we can reformulate this problem in terms of estimating the unique solution (û, v) ∈ Y of the following system D (u,v) F(0, 0, m, m, 0, 0)(û, v) = b ∂ θ a . By Lemma 5.3, this solution exists and |û| ≤ C Υ|∂ θ a| + σ,l+2 + |b| + σ,l+1 ≤ CΥε |v| ≤ C|∂ θ a| + σ,l+2 ≤ Cε, where C is a constant depending on n, l and |ω| C 1 . Using the notation of Theorem 5.2, by (5.18), we can choose µ = 2 CΥε. Now, we observe that for all fixed m, m ∈ M Id -D (u,v) F(0, 0, m, m, 0, 0) -1 D (u,v) F(a, b, m, m, u, v) is continuous with respect to (a, b, u, v) ∈ A × B × U × V. Then, thanks to Lemma 5.3, there exists ε 0 depending on n, l, Υ and |ω| C 1 such that, for all ε ≤ ε 0 , (5.23) is satisfied. 5 Biasymptotic Motions for Time Dependent Hamiltonians Result (Near Integrable Case) Let A ⊂ R n be a closed set and E equal to T n or T n × B, for every function f : E × A ⊂ R n × R n → R we recall the definition of the following norm |f | L(A) = sup z∈B sup x,y∈A,x =y |f (z, x) -f (z, y)| |x -y| + |f | C 0 . We consider a real parameter σ ≥ 1 and A ⊂ R n . Definition 5.5. Let D σ be the space of functions f defined on T n × A × R such that f ∈ C(T n × A × R) and f t p ∈ C σ (T n ) for all (p, t) ∈ A × R. For all f ∈ D σ and l ≥ 1, we define |f | σ,l,L(A) = sup (p,t)∈A×R |f t p | C σ (1 + |t| l ) + sup t∈R |f t | L(A) (1 + |t| l-1 ), (5.24) |f | σ,0,L(A) = sup (p,t)∈A×R |f t p | C σ + sup t∈R |f t | L(A) . (5.25) Similarly to the previous case, the following proposition contains several properties of the previous norms. Proposition 5.4. Given σ ≥ 1, for all f , g ∈ D σ and positive l, m ≥ 1 a. |f | σ,l,L(A) ≤ |f | s,l,L(A) for all 1 ≤ σ ≤ s, b. |f | σ,l,L(A) ≤ C(l, m)|f | σ,l+m,L(A) c. |f g| σ,l+m,L(A) ≤ C(σ) |f | 0,l,L(A) |g| σ,m,L(A) + |f | σ,l,L(A) |g| 0,m,L(A) . Moreover, we consider g ∈ D σ such that, for all (q, p, t) ∈ T n × A × R, g(q, p, t) = (g(q, p, t), p, t). Then f f t p g t p C σ 1 + |t| l+m ≤ C(σ) |f | 0,l,L(A) |g| σ,m,L(A) + |f | σ,l,L(A) |g| 0,m,L(A) . We want to prove that the same estimate is also verified for the second term of the norm (see the right-hand side of (5.24)). For all fixed t ∈ R and x, y ∈ A such that x = y |f t (q, x)g t (q, x) -f t (q, y)g t (q, y)| |x -y| (1 + |t| l+m-1 ) = |f t (q, x)g t (q, x) -f t (q, y)g t (q, x) + f t (q, y)g t (q, x) -f t (q, y)g t (q, y)| |x -y| (1 + |t| l+m-1 ) ≤ C |f t | L(A) 1 + |t| l-1 |g t x | C 0 (1 + |t| m ) + |g t | L(A) 1 + |t| m-1 |f t y | C 0 1 + |t| l ≤ C |f | 0,l,L(A) |g| σ,m,L(A) + |f | σ,l,L(A) |g| 0,m,L(A) . Result (Near Integrable Case) Taking the sup for all q ∈ T n and x, y ∈ A with x = y on the left-hand side of the latter and then for all t ∈ R, we prove c. d. Also in this case, similarly to Proposition 5.2, one has sup (p,t)∈A×R |f t p • g t p | C σ 1 + |t| l+m ≤ C(σ)|f | σ,l,L(A) |g| σ 1,m,L(A) + C(σ) |f | 1,l,L(A) |g| σ,m,L(A) + |f | 0,l+m,L(A) . Now, we estimate the second term of the norm (see (5.24)). 
For all fixed t ∈ R and x, y ∈ A such that x = y |f t (g t (q, x), x) -f t (g t (q, y), y)| |x -y| (1 + |t| l+m-1 ) = |f t (g t (q, x), x) -f t (g t (q, x), y) + f t (g t (q, x), y) -f t (g t (q, y), y)| |x -y| (1 + |t| l+m-1 ) ≤ |f t (g t (q, x), x) -f t (g t (q, x), y)| |x -y| (1 + |t| l+m-1 ) + |f t (g t (q, x), y) -f t (g t (q, y), y)| |x -y| (1 + |t| l+m-1 ) ≤ C |f t | L(A) 1 + |t| l+m-1 + |f t y | C 1 1 + |t| l |g t | L(A) 1 + |t| m-1 ≤ C |f | 0,l+m,L(A) + |f | 1,l,L(A) |g| σ,m,L(A) , where we used |f t (g t (q, x), y) -f t (g t (q, y), y)| ≤ | (∂ q f ) t y | C 0 |g t | L(A) |x -y|. Taking the sup for all q ∈ T n and x, y ∈ A with x = y on the left-hand side of the latter and then for all t ∈ R, we conclude the proof of this lemma. Now, let us define the following space of functions. Definition 5.6. Given σ ≥ 1 and an integer k ≥ 0, we define Dσ,k the space of functions f such that f ∈ D σ+k , and ∂ i q f ∈ D σ+k-i for all 0 ≤ i ≤ k. Furthermore, for all f ∈ Dσ,k and l > 1, we consider the following norm f σ,k,l,L(A) = max 0≤i≤k |∂ i q f | σ+k-i,l,L(A) , (5.26) where | • | σ,l,L(A) is the norm defined by (5.24) Now, we state the main theorem of this section. Here, the Hamiltonian and its components are functions defined on T n × B r × R, for some r > 0. Then, if a function f belongs to D σ , we consider that f satisfies the properties in Definition 5.1 with A replaced by B r . Let σ ≥ 1, Υ ≥ 1, l > 1 and 0 < ε < 1. We consider 5 Biasymptotic Motions for Time Dependent Hamiltonians the following Hamiltonian                                        H : T n × B 1 × R -→ R H(q, p, t) = h(p) + R(q, p) + f (q, p, t) f, ∂ p f ∈ Dσ,2 , |f | σ+2,0,L(B 1 ) + ∂ q f σ,1,l+2,L(B 1 ) + ∂ p f σ,2,l+1,L(B 1 ) < ε, D ⊂ B 1 , Leb(B 1 \D) < µ, R ∈ C 2 (T n × B 1 ) R(q, p) = ∂ p R(q, p) = 0 for all (q, p) ∈ T n × D, ∂ 2 p H t ∈ C σ+2 (T n × B 1 ) for all fixed t ∈ R ∂ i qp ∂ 2 p H ∈ C(T n × B 1 × R) for all 0 ≤ i ≤ 2. sup t∈R |∂ 2 p H t | C σ+2 < Υ, ( * D ) Theorem D. Let H be as in ( * D ). Then, there exists a time-dependent Hamiltonian h such that, for ε small enough with respect to n, l, Υ, |∂ p h| L(D) and µ, we have the existence of a set W ⊂ T n × B 1 in such a way that, for all (q, p) ∈ W, there exist p -, p + ∈ D and a biasymptotically quasiperiodic solution g associated to (X H , X h, ϕ 0,p ± ) such that g(0) = (q, p). Moreover, Leb ((T n × B 1 ) \W) ≤ 4µ. Also in this case, we are going to deduce the latter from another theorem. Let σ ≥ 1, Υ ≥ 1, l > 1, 0 < ε < 1, µ > 0 and 0 < δ < 1 with δ ≤ µ. We consider the following family of Hamiltonians                                          H : T n × B δ × R × D -→ R H(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I +a(θ, t; p 0 ) + b(θ, t; p 0 ) • I + m(θ, I, t; p 0 ) • I 2 ω ∈ C(D ), |ω| L(D ) < ∞ a, b ∈ Dσ,2 , |a| σ+2,0,L(D ) + ∂ θ a σ,1,l+2,L(D ) < ε, b σ,2,l+1,L(D ) < ε, ∂ 2 I H t p 0 ∈ C σ+2 (T n × B δ ) for all fixed (t, p 0 ) ∈ R × D ∂ i θI (∂ 2 I H) ∈ C(T n × B δ × R × D ) for all 0 ≤ i ≤ 2. sup (t,p 0 )∈R×D |∂ 2 I H t p 0 | C σ+2 ≤ Υ, sup 0≤i≤2 sup t∈R |∂ i θI (∂ 2 I H t )| L(D ) ≤ Υ. ( ) We define the following family of trivial embeddings ψ 0 : T n × D -→ T n × B δ , ψ 0 (θ, p 0 ) = (θ, 0) and we consider the following family Hamiltonians h : T n × B δ × R × D → R such that h(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I + m(θ, I, t; p 0 ) • I 2 . 5.6 Proof of Theorem D assuming Theorem 5.3 Theorem 5.3. Let H be as in ( ). 
Then, if ε is sufficiently small with respect to n, l, Υ and |ω| L(D ) , for all fixed p 0 ∈ D , there exists a positive C σ -asymptotic KAM torus ψ t +p 0 and a negative C σ -asymptotic KAM torus ψ t -p 0 associated to (X Hp 0 , X hp 0 , ψ 0,p 0 ). Moreover, letting ψ t ± : T n × D -→ T n × B δ , ψ t ± (q, p 0 ) = ψ t ±,p 0 (q), there exists a constant C 0 depending on n, l, Υ and |ω| L(D ) such that sup t≥0 |ψ t + -ψ 0 | L(T n ×D ) < C 0 ε, sup t≤0 |ψ t --ψ 0 | L(T n ×D ) < C 0 ε. (5.27) Proof of Theorem D assuming Theorem 5.3 Here, we assume Theorem 5.3 and we prove Theorem D. The following well-known property is the Lipschitz version of Proposition 5.3 Proposition 5.5. Given r > 0 and 0 < < δ < r, let φ be a Lipschitz map φ : B r -→ B r+δ such that |φ -Id| L(Br) < . Then, for small enough, φ is a lipeomorphism onto its image and B r-δ ⊂ φ(B r+δ ). We define δ = 2C 0 ε, where C 0 is the constant introduced in Theorem 5.3. Therefore, we consider D = B 1-δ ∩ D. Now, for all p 0 ∈ D , we let p = p 0 + I. Similarly to the proof of Theorem C, we expand the Hamiltonian H in ( * D ) around p 0 in such a way that h(p) = h(p 0 ) + ∂ p h(p 0 ) • I + 1 0 (1 -τ )∂ 2 p h(p τ )dτ • I 2 R(q, p) = R(q, p 0 ) + ∂ p R(q, p 0 ) • I + 1 0 (1 -τ )∂ 2 p R(q, p τ , t)dτ • I 2 = 1 0 (1 -τ )∂ 2 p R(q, p τ , t)dτ • I 2 f (q, p, t) = f (q, p 0 , t) + ∂ p f (q, p 0 , t) • I + 1 0 (1 -τ )∂ 2 p f (q, p τ , t)dτ • I 2 . where p τ = p 0 + τ I and I ∈ B δ . For all p 0 ∈ D , we define e(p 0 ) = h(p 0 ) ω(p 0 ) = ∂ p h(p 0 ) a(q, t; p 0 ) = f (q, p 0 , t) b(q, t; p 0 ) = ∂ p f (q, p 0 , t) m(q, I, t; p 0 ) = 1 0 (1 -τ ) ∂ 2 p h(p τ ) + ∂ 2 p R(q, p τ , t) + ∂ 2 p f (q, p τ , t) dτ = 1 0 (1 -τ )∂ 2 p H(q, p τ , t)dτ for all (q, I, t) ∈ T n × B δ × R. Writing θ instead of q for the angular variables, we rewrite the Hamiltonian H, restricted to T n × B δ × R × D , in the following form as a family of Hamiltonians parametrized by p 0 ∈ D , Obviously, for each fixed p 0 ∈ D , X H and X h satisfy (5.1) and, for all p 0 ∈ D , h has an invariant torus supporting quasiperiodic dynamics. H : T n × B δ × R × D -→ R ( The Hamiltonian H in (5.28) satisfies the hypotheses of Theorem 5.3. Then, similarly to the proof of Theorem C, there exist u t ± , v t ± : T n × D → R n such that, in the (q, p)-coordinates, the following family of embeddings ϕ t + : T n × D -→ T n × B 1 , ϕ t + (q, p 0 ) = (q + u t + (q, p 0 ), p 0 + v t + (q, p 0 )) is a family of positive C σ -asymptotic KAM tori associated to (X H , X h, ϕ 0 ) and ϕ t -: T n × D -→ T n × B 1 , ϕ t -(q, p 0 ) = (q + u t -(q, p 0 ), p 0 + v t -(q, p 0 )) is a family of negative C σ -asymptotic KAM tori associated to (X H , X h, ϕ 0 ), where ϕ 0 is the family of trivial embeddings defined by (5.10). Moreover, sup t≥0 |u t + | L(T n ×D ) < C 0 ε, sup t≥0 |v t + | L(T n ×D ) < C 0 ε, sup t≤0 |u t -| L(T n ×D ) < C 0 ε, sup t≤0 |v t -| L(T n ×D ) < C 0 ε, where C 0 is a constant depending on n, l, Υ and |ω| L(D ) . Now, there exist ũt ± , ṽt ± : T n × B 1-δ → R n such that ũt ± , ṽt ± extend u t ± , v t ± without affecting their Lipschitz constant. This means that, ũt ± T n ×D = u t ± , ṽt ± T n ×D = v t ± , sup t≥0 |ũ t + | L(T n ×B 1-δ ) = sup t≥0 |u t + | L(T n ×D ) sup t≥0 |ṽ t + | L(T n ×B 1-δ ) = sup t≥0 |u t + | L(T n ×D ) . Therefore, letting φt ± : T n × B 1-δ -→ T n × B 1 , φt ± (q, p 0 ) = (q + ũt ± (q, p 0 ), p 0 + ṽt ± (q, p 0 )) 5.6 Proof of Theorem D assuming Theorem 5.3 we have sup t≥0 | φt ± -Id| L(T n ×B 1-δ ) = sup t≤0 |ϕ t ± -Id| L(T n ×D ) < C 0 ε. 
Now, we recall that Leb(B 1 \D) < µ, Leb φ0 ± (T n × B 1-δ )\ϕ 0 ± (T n × D ) = Leb φ0 ± (T n × B 1-δ )\ φ0 ± (T n × D ) ≤ C ε Leb (B 1-δ \D ) = C ε Leb (B 1-δ \ (B 1-δ ∩ D)) ≤ C ε Leb (B 1 \D) < C ε µ (5.29) where C ε is a constant converging to 1 if ε → 0. We observe that (T n × B 1 ) \ϕ 0 ± (T n × D ) = (T n × B 1 ) \ φ0 ± (T n × B 1-δ ) + φ0 ± (T n × B 1-δ )\ϕ 0 ± (T n × D ). (5.30) Thanks to Proposition 5.5 and the special form of φ0 ± T n × B 1-2δ ⊂ φ0 ± (T n × B 1-δ ) . Then, by the latter, (5.29) and (5.30) Leb (T n × B 1 ) \ϕ 0 ± (T n × D ) = Leb (T n × B 1 ) \ φ0 ± (T n × B 1-δ ) + C ε µ ≤ Leb ((T n × B 1 ) \(T n × B 1-2δ )) + C ε µ ≤ 2µ (5.31) for ε sufficiently small. Now, let us introduce the following set W = ϕ 0 + (T n × D ) ∩ ϕ 0 -(T n × D ) and by (5.31) Leb ((T n × B 1 ) \W) ≤ Leb (T n × B 1 ) \ϕ 0 + (T n × D ) + Leb (T n × B 1 ) \ϕ 0 -(T n × D ) ≤ 4µ. This concludes the proof of this theorem because, for all (q, p 0 ) ∈ W = ϕ 0 + (T n × D ) ∩ ϕ 0 -(T n × D ), there exist (q + , p 0+ ), (q -, p 0-) ∈ T n × D such that ϕ 0 + (q + , p 0+ ) = (q, p 0 ) = ϕ 0 -(q -, p 0-). Therefore, by Proposition 5.1, there exists a biasymptotically quasiperiodic solution g(t) associated to (X H , X h, ϕ 0,p 0± ) such that g(0) = (q, p 0 ), where ϕ 0 is the family of trivial embeddings defined by (5.10). Proof of Theorem 5.3 The proof of this theorem is the same as Theorem 5.1. However, we have some obvious differences in the estimation of the solution of the homological equation. Here, we prove the existence of a family of positive C σ -asymptotic KAM tori ψ t + parametrized by p 0 ∈ D . Similarly, we have the claim concerning negative times. In what follows, we drop the subscript + to obtain a more elegant form. We are looking for u, v : T n × D × R + → R n such that letting ψ(θ, t; p 0 ) = (θ + u(θ, t; p 0 ), v(θ, t; p 0 )), ψ, u and v satisfy the following conditions X H (ψ(θ, t; p 0 ), t; p 0 ) -∂ θ ψ(θ, t; p 0 )ω(p 0 ) -∂ t ψ(θ, t; p 0 ) = 0 (5.32) lim t→+∞ |u t p 0 | C σ = 0, lim t→+∞ |v t p 0 | C σ = 0, (5.33) To this end, given σ ≥ 1, let us introduce the following definitions Definition 5.7. Let D + σ be the space of functions f defined on T n × R + × D such that f ∈ C(T n × R + × D ) and f t p ∈ C σ (T n ) for all (t, p) ∈ R + × D . For all f ∈ D + σ and l > 1, we define the following norms |f | σ,l,L(D ) = sup (t,p)∈R + ×D |f t p | C σ (1 + |t| l ) + sup t∈R + |f t | L(D ) (1 + |t| l-1 ), |f | σ,0,L(D ) = sup (t,p)∈R + ×D |f t p | C σ + sup t∈R + |f t | L(D ) . These norms satisfy the properties in Proposition 5.4. As one can expect, we define the following subset of D + σ Definition 5.8. Given σ ≥ 1 and an integer k ≥ 0, we define D+ σ,k the space of functions f such that f ∈ D + σ+k , and ∂ i q f ∈ D + σ+k-i for all 0 ≤ i ≤ k. Therefore, for all f ∈ D+ σ,k and l > 1, we define f + σ,k,l,L(D ) = max 0≤i≤k |∂ i q f | + σ+k-i,l,L(D ) . Now, let σ ≥ 1 and l > 1 be the positive parameters introduced by ( ). 
We define the following Banach spaces (A, | • |), (B, | • |), (U, | • |), (V, | • |), (Z, | • |) and (G, | • |) 5.7 Proof of Theorem 5.3 A = a : T n × R + × D → R | a ∈ D+ σ,2 and |a| = |a| + σ+2,0,L(D ) + ∂ θ a + σ,1,l+2,L(D ) < ∞ B = b : T n × R + × D → R n | b ∈ D+ σ,2, , and |b| = b + σ,2,l+1,L(D ) < ∞ U = u : T n × R + × D → R n | u, (∇ θt u) Ω ∈ D + σ and |u| = max{|u| + σ,l,L(D ) , | (∇ θt u) Ω| + σ,l+1,L(D ) } < ∞ V = v : T n × R + × D → R n | v, (∇ θt v) Ω ∈ D + σ and |v| = max{|v| + σ,l+1,L(D ) , | (∇ θt v) Ω| + σ,l+2,L(D ) } < ∞ Z = z : T n × R + × D → R n | z ∈ D + σ , and |z| = |z| + σ,l+1,L(D ) < ∞ G = g : T n × R + × D → R n | g ∈ D + σ , and |g| = |g| + σ,l+2,L(D ) < ∞ Similarly to what we did in Appendix C, one can prove that the previous normed spaces are Banach spaces. Let M n be the set of the n-dimensional matrices and Υ ≥ 1 the positive parameter in ( ). We introduce another Banach space (M, | • |), such that M = m : T n × B δ × R + × D → M n | ∂ i θI m ∈ C(T n × B δ × R + × D ) for all 0 ≤ i ≤ 2, m t p 0 ∈ C σ+2 (T n × B δ ) for all fixed (t, p 0 ) ∈ R + × D and |m| = sup (t,p 0 )∈R + ×D |m t p 0 | C σ+2 + sup 0≤i≤2 sup t∈R + ∂ i θI m t L(D ) ≤ 2Υ Let F be the following functional F : A × B × M × M × U × V -→ Z × G F(a, b, m, m, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, u, v)) with F 1 (b, m, u, v) = b • ũ + m • ψ v -(∇ θt u) Ω, F 2 (a, b, m, u, v) = ∂ θ a • ũ + (∂ θ b • ũ) v + ∂ θ m • ψ • v 2 + (∇ θt v) Ω. It is obtained by (5.32). For fixed m, m ∈ M and for (a, b) sufficiently close to (0, 0), we are looking for (u, v) ∈ U×V satisfying (5.33) such that F(a, b, m, m, u, v) = 0. Following the lines of the proof of Theorem 5.1, one can prove that F is well defined, continuous, differentiable with respect to the variables (u, v) with D (u,v) F continuous. Moreover, for all fixed m, m ∈ M D (u,v) F(0, 0, m, m, 0, 0)(û, v) = ( m0 v -(∇ θt û)Ω, (∇ θt v)Ω). is invertible. The proof relies on the solution of the following homological equation. Given σ ≥ 1 and l > 1, we consider the following equation in the unknown κ : (5.34) T n × R + × D → R            ω(p 0 ) • ∂ q κ(θ, t; p 0 ) + ∂ t κ(θ, t; p 0 ) = g(θ, t; p 0 ), g ∈ D + σ , |g| + σ,l+1,L(D ) < ∞, ω : D -→ R n , ω ∈ C(D ), |ω| L(D ) < ∞. Moreover, |κ| + σ,l,L(D ) ≤ C|g| + σ,l+1,L(D ) for a suitable constant C depending on n, l and |ω| L(D ) . Proof. We know that κ(θ, t; p 0 ) = - +∞ t g(θ + ω(p 0 )(τ -t), τ ; p 0 )dτ is the unique solution of (HE D ) satisfying (5.34). Concerning the estimates, similarly to the proof of Lemma 5.2, we have sup (t,p 0 )∈R + ×D |κ t p 0 | C σ (1 + t l ) ≤ C(l)|g| + σ,l+1,L(D ) . It remains to estimate the second member of the norm. By Lemma 5.1, for all (θ, t, p 0 1 ), (θ, t, p 0 2 ) ∈ T n × R + × D with p 0 1 = p 0 2 , |κ(θ, t; p 0 1 ) -κ(θ, t; p 0 2 )| |p 0 1 -p 0 2 | ≤ +∞ t |g(θ + ω(p 0 2 )(τ -t), t, p 0 2 ) -g(θ + ω(p 0 2 )(τ -t), t, p 0 1 )| |p 0 1 -p 0 2 | dτ ≤ +∞ t |g(θ + ω(p 0 2 )(τ -t), t, p 0 1 ) -g(θ + ω(p 0 1 )(τ -t), t, p 0 1 )| |p 0 1 -p 0 2 | dτ ≤ C sup t∈R + |g t | L(D ) 1 + t l +∞ t 1 1 + τ l dτ + C sup (t,p 0 )∈R + ×D |g t p | C 1 1 + t l+1 |ω| C 1 +∞ t τ -t 1 + τ l+1 dτ, ≤ C(l, |ω| C 1 ) |g| + 1,l+1,L(D) 1 + t l-1 . Taking the sup for all θ ∈ T n , p 0 1 , p 0 2 ∈ D with p 0 1 = p 0 2 , and then for all t ∈ R + on the left hand side of the latter, we conclude the proof of this lemma. We proved that the functional F satisfies the hypotheses of the implicit function theorem. 
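Before concluding, the explicit solution of (HE D ) obtained above, κ(θ, t; p0) = -∫_t^{+∞} g(θ + ω(p0)(τ - t), τ; p0) dτ, lends itself to a quick numerical illustration. The sketch below (a toy example with n = 1 and an explicit datum g, both illustrative assumptions) evaluates κ by quadrature on a truncated time interval and checks the equation ω ∂_θ κ + ∂_t κ = g by finite differences, together with the decay encoded in the weighted norm.

```python
import numpy as np

omega, l = np.sqrt(2.0), 2.0
g = lambda theta, t: np.sin(theta) / (1.0 + t) ** (l + 1.0)    # toy datum with |g^t|_{C^0} <= (1+t)^-(l+1)

def kappa(theta, t, T=3000.0, n=300000):
    # kappa(theta, t) = - int_t^T g(theta + omega * (tau - t), tau) dtau   (tail beyond T is negligible)
    tau = np.linspace(t, T, n)
    return -np.trapz(g(theta + omega * (tau - t), tau), tau)

theta0, t0, h = 0.7, 3.0, 1e-3
residual = (omega * (kappa(theta0 + h, t0) - kappa(theta0 - h, t0)) / (2 * h)
            + (kappa(theta0, t0 + h) - kappa(theta0, t0 - h)) / (2 * h)
            - g(theta0, t0))
print("residual:", residual)                                    # close to 0
print("decay   :", abs(kappa(theta0, t0)) * (1.0 + t0) ** l)    # bounded, as in |kappa|_{sigma,l} <= C |g|_{sigma,l+1}
```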
Therefore, following the lines of the proof of Theorem 5.1, we conclude the proof of Theorem 5.3. Part IV Applications to Celestial Mechanics We consider the example of a planetary system (planar three-body problem) perturbed by a given comet coming from and going back to infinity, asymptotically along a hyperbolic Keplerian orbit. We prove the existence of orbits in such a way that the center of mass of the planetary system is attracted by the comet with zero asymptotic velocity when t → +∞. Moreover, in a frame of reference attached to the center of mass of the planetary system, the motion of the planets converges to some dynamics that are close (in the sense that we will recall later) to suitable quasiperiodic solutions associated to the Hamiltonian of the planar three-body problem. The Hamiltonian of the planar three-body problem plus comet (P3BP+C) is H = H 0 + H c , where H 0 is the Hamiltonian of the planar three-body problem and H c is the perturbation given by the interaction of the planets with the comet. On a suitable subset of the phase space, the Hamiltonian H c does not satisfy good decay properties. This makes Theorem A useless for this particular case. For this reason, we begin this part by proving another abstract theorem, a weaker version of Theorem A. This is the content of the following chapter. The consecutive chapter is devoted to the application in celestial mechanics. The Abstract Theorem This chapter is divided into three sections. The first is devoted to the definition of C σ -weakly asymptotic cylinder (Section 6.1). The above-mentioned weak theorem is introduced in the second section (Section 6.2) and the last section contains its proof (Section 6.3). C σ -weakly asymptotic cylinder Let B ⊂ R n+m be a ball centred at the origin. We denote q ∈ T n × R m and p ∈ B. Let P be equal to T n × R m × B or an open subset of R 2(n+m) and J = [1, +∞) ⊂ R be a real interval. Given σ ≥ 0 and a positive integer k ≥ 0, we consider time-dependent vector fields X t , X t 0 of class C σ+k on P, for all fixed t ∈ J, an embedding ϕ 0 from T n ×R m to P of class C σ and a time-dependent vector field γ t of class C σ on T n × R m , for all fixed t ∈ J, such that lim t→+∞ |X t -X t 0 | C σ+k = 0, (6.1) X 0 (ϕ 0 (q), t) = ∂ q ϕ 0 (q)(ω + γ(q, t)) for all (q, t) ∈ T n × R m × J, (6.2) lim t→+∞ |γ t | C σ = 0, (6.3) where ω = (ω, 0) ∈ R n+m with ω ∈ R n . 6 The Abstract Theorem Definition (Définition 2.3). We assume that (X, X 0 , ϕ 0 ) satisfy (6.1), (6.2) and (6.3). A family of C σ embeddings ϕ t : T n × R m → P is a C σ -weakly asymptotic cylinder associated to (X, X 0 , ϕ 0 ) if there exists a time-dependent vector field Γ t of class C σ on T n × R m , for all fixed t, such that lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, (6.4) X(ϕ(q, t), t) = ∂ q ϕ(q, t)(ω + Γ(q, t)) + ∂ t ϕ(q, t), (6.5) lim t→+∞ |Γ t | C σ = 0, (6.6) for all (q, t) ∈ T n ×R m ×J. Moreover, ϕ is Lagrangian if ϕ t (T n ×R m ) is Lagrangian for all t ∈ J. We observe that taking m = 0, γ ≡ 0 and Γ ≡ 0, we obtain Definition 1.3. As one can expect, all the properties and considerations discussed for C σ -asymptotic KAM tori remain true for C σ -weakly asymptotic cylinders with suitable modifications. We can rewrite (6.5) in terms of the flow of X. Let ψ t t 0 ,X and ψ t t 0 ,ω+Γ be the flow at time t with initial time t 0 of X and ω + Γ, respectively. We assume that ψ t t 0 ,X and ψ t t 0 ,ω+Γ are defined for all t, t 0 ∈ J. Then, (6.5) is equivalent to ψ t t 0 ,X • ϕ t 0 = ϕ t • ψ t t 0 ,ω+Γ . 
(6.7) Moreover, we can always find a family of embeddings satisfying (6.5). Furthermore, if ϕ t is a C σ -weakly asymptotic cylinder defined for all t large, we can extend the set of definition for all t ∈ R. Regarding the dynamics, we recall the definition of weakly asymptotically quasiperiodic solution. Definition (Définition 2.4). We assume that (X, X 0 , ϕ 0 ) satisfy (6.1), (6.2) and (6.3). An integral curve g(t) of X is a weakly asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ) if there exist a time-dependent vector field Γ : T n × R m × J → R n+m and q ∈ T n × R m such that lim t→+∞ |g(t) -ϕ 0 • ψ t t 0 ,ω+Γ (q)| = 0. The latter is a weakly version of the definition of asymptotically quasiperiodic solutions (Definition 3.1). Indeed, taking m = 0, γ ≡ 0 and Γ ≡ 0, we obtain Definition 3.1. We conclude by recalling the following proposition. Proposition (Proposition 2.1). Let ϕ t be a C σ -weakly asymptotic cylinder associated to (X, X 0 , ϕ 0 ). Then, for all q ∈ T n × R m and t 0 ∈ J, g(t) = ψ t t 0 ,X • ϕ t 0 (q) is a weakly asymptotically quasiperiodic solution associated to (X, X 0 , ϕ 0 ). Result Given a positive real parameter σ ≥ 0, we have the following definition 6.2 Result Definition 6.1. Let S σ be the space of functions f defined on T n ×R m ×B ×J such that f t ∈ C σ (T n × R m × B), for all fixed t ∈ J, and ∂ i (q,p) f ∈ C(T n × R m × B × J) for all 0 ≤ i ≤ [σ]. We point out that ∂ i (q,p) stands for the partial derivatives with respect to (q, p) of order i and [σ] for the integer part of σ. Given σ ≥ 0 and l ≥ 0, for every f ∈ S σ , we introduce the following norm |f | σ,l = sup t∈J |f t | C σ t l . (6.8) The following proposition proves some properties of this norm. Proposition 6.1. We consider f , g ∈ S σ and positive l, m > 0. a. For all β ∈ N 2n , if |β| + s ≤ σ, then ∂ |β| ∂q 1 β 1 ...∂qn βn ∂p 1 β n+1 ...∂pn β 2n f s,l ≤ |f | σ,l , b. |f | σ,l ≤ |f | σ,l+m , c. |f g| σ,l+m ≤ C(σ) (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ). If σ ≥ 1 and f , z ∈ S σ then f • z ∈ S σ , d. |f • z| σ,l+m ≤ C(σ) |f | σ,l |∇z| σ 0,m + |f | 1,l |∇z| σ-1,m + |f | 0,l+m . Proof. The proof consists of a straightforward application of Proposition A.2 (see Appendix A). Properties a. and b. are obvious, we verify the others c. |f g| σ,l+m = sup t∈J |f t g t | C σ t l+m ≤ C(σ) sup t∈J |f t | C 0 |g t | C σ + |f t | C σ |g t | C 0 t l+m ≤ C(σ) sup t∈J |f t | C 0 t l |g t | C σ t m + |f t | C σ t l |g t | C 0 t m ≤ C(σ) (|f | 0,l |g| σ,m + |f | σ,l |g| 0,m ) d. |f • z| σ,l+m = sup t∈J |f t • z t | C σ t l+m ≤ C(σ) sup t∈J |f t | C σ |∇z t | σ C 0 + |f t | C 1 |∇z t | C σ-1 + |f | C 0 t l+m ≤ C(σ) sup t∈J |f t | C σ t l |∇z t | σ C 0 t σm t (1-σ)m + |f t | C 1 t l |∇z t | C σ-1 t m + |f | C 0 t l+m ≤ C(σ) |f | σ,l |∇z| σ 0,m + |f | 1,l |∇z| σ-1,l + |f | 0,l+m notice that t ≥ 1 and 1 -σ ≤ 0 imply t (1-σ)m ≤ 1. Now, we state the main theorem that we shall prove in this chapter. Let s, λ, ρ, β and α be positive parameters satisfying the following conditions            1 ≤ ρ < λ < s, s > max α α -1 , λ + α β -1 , 1 < β < 2, α > 1, λ > 2β 2 -β , ρ < λ -β β 2 . 
(# E ) Given ω ∈ R n and real positive parameters δ > 0, ε > 0 and Υ ≥ 1, we consider the following time-dependent Hamiltonian                          H : T n × R m × B × J -→ R H(q, p, t) = ω • p + a(q, t) + (b 0 (q, t) + b r (q, t)) b(q,t) •p + m(q, p, t) • p 2 a, b 0 , b r , ∂ 2 p H ∈ S s+1 |b 0 | 2,1 < δ, |b 0 | s+1,1 < Υ, |a| λ+1,0 + |∂ q a| λ,2 < ε, |b r | λ+1,1 < ε, |a| s+1,0 + |∂ q a| s,2 < Υ, |b r | s+1,1 < Υ, |∂ 2 p H| s+1,0 ≤ Υ, ( * E ) Let ϕ 0 be the following trivial embedding ϕ 0 : T n × R m → T n × R m × B, ϕ 0 (q) = (q, 0). Furthermore, we consider the Hamiltonian h : T n × R m × B × J → R such that h(q, p, t) = ω • p + m(q, p, t) • p 2 . (6.9) Theorem E. Let H be as in ( * E ) and we assume that s, λ, ρ, β and α satisfy (# E ). Then, for δ small enough with respect to s, there exists ε 0 , depending on δ, s, λ, β, α and Υ, such that for all ε ≤ ε 0 there exists a C ρ -weakly asymptotic cylinder associated to (X H , X h, ϕ 0 ). Proof of Theorem E 6.3.1 The Nash-Moser Theorem (Zehnder) The proof of Theorem E relies on a version of the Nash-Moser theorem proved by Zehnder (see [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF]). For the sake of clarity, we dedicate this section to explaining this result. We consider a one-parameter family of Banach spaces {(X σ , | • | σ )} σ≥0 and we assume that for all 0 ≤ σ ≤ σ < ∞ X 0 ⊇ X σ ⊇ X σ ⊇ X ∞ = σ≥0 X σ |x| σ ≤ |x| σ for all x ∈ X σ . Now, we introduce the definition of C ∞ -smoothing, which plays a very important role in the proof of the Theorem. Proof of Theorem E Definition 6.2. A C ∞ -smoothing in {(X σ , | • | σ )} σ≥0 is a one-parameter family {S τ } τ >0 of linear mappings S τ : X 0 → X ∞ together with constants C(m, d), for positive integers m and d, satisfying the following conditions: |S τ x| m ≤ τ m-d C(m, d)|x| d (S1) for all x ∈ X d and 0 ≤ d ≤ m, |(S τ -1)x| d ≤ τ -(m-d) C(m, d)|x| m (S2) for all x ∈ X m and 0 ≤ d ≤ m. The existence of a C ∞ -smoothing implies the following well-known convexity property. Lemma 6.1. We assume that {(X σ , | • | σ )} σ≥0 has a C ∞ -smoothing. Then, for all 0 ≤ λ 1 ≤ λ 2 , α ∈ [0, 1] and x ∈ X λ 2 , |x| λ ≤ C(α, λ 1 , λ 2 )|x| 1-α λ 1 |x| α λ 2 with λ = (1 -α)λ 1 + αλ 2 . Proof. We refer to [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF] for the proof. In what follows, we report the hypotheses of the Zehnder theorem. We consider three one-parameter families of Banach spaces {(X σ , | • | σ )} σ≥0 , {(V σ , | • | σ )} σ≥0 and {(Z σ , | • | σ )} σ≥0 each with a C ∞ -smoothing, which we denote by the same letter {S τ } τ >0 . Let F be the following functional F : X 0 × V 0 -→ Z 0 and we assume that F(u 0 , v 0 ) = 0 for some (u 0 , v 0 ) ∈ X 0 × V 0 . Given a positive parameter 0 < ζ ≤ 1, we define O σ ζ = {(x, v) ∈ X σ × V σ : |x -x 0 | σ , |v -v 0 | σ < ζ} (6.10) and we consider F : O 0 ζ → Z 0 to be continuous. In his work, Zehnder takes ζ = 1. We will see that this does not really change the proof. But, as we shall see in the proof of Theorem E, we need ζ to satisfy a suitable smallness condition to define the right inverse of our functional F. For given x ∈ X 0 ∩ O 0 ζ , the aim of the Zehnder theorem is to solve the equation F(x, v) = 0 assuming x sufficiently close to x 0 . The author makes the following hypotheses. 
Hypotheses H.1-H.4 H.1 Smoothness: We assume that F(x, •) : V 0 → Z 0 is two times differentiable with the uniform estimate |D v F(x, v)| 0 , |D 2 v F(x, v)| 0 ≤ C for all (x, v) ∈ O 0 ζ and for some constant C ≥ 1, where D v is the differential with respect to the second component. H.2 F is uniformly Lipschitz in X 0 : For all (x 1 , v), (x 2 , v) ∈ O 0 ζ , |F(x 1 , v) -F(x 2 , v)| 0 ≤ C|x 1 -x 2 | 0 . H.3 Existence of a right-inverse of loss γ, 1 ≤ γ < s (s will be specified later): For every (x, v) ∈ O γ ζ there exists a linear map η(x, v) : Z γ → V 0 such that, for all z ∈ Z γ , D v F(x, v) • η(x, v)z = z |η(x, v)z| 0 ≤ C|z| γ . (η1) Moreover, for all γ ≤ σ ≤ s, if (x, v) ∈ O γ ζ ∩ (X σ × V σ ), then the linear map η : Z σ → V σ-γ is well defined and if |x -x 0 | σ , |v -v 0 | σ ≤ K, then |η(x, v)F(x, v)| σ-γ ≤ C(σ)K. (η2) H.4 Order: The triple (F, x 0 , v 0 ) is of order s, s > γ ≥ 1. Here, Zehnder uses the following Definition 6.3. (F, x 0 , v 0 ) is called of order s, 1 ≤ s < ∞, if the following three conditions are satisfies: 1. (x 0 , v 0 ) ∈ X s × V s , 2. F(O 0 ζ ∩ (X σ × V σ )) ⊂ Z σ , 1 ≤ σ ≤ s 3. there exist constants C(σ), 1 ≤ σ ≤ s, such that if (x, v) ∈ (X σ × V σ ) ∩ O 1 ζ satisfies |x -x 0 | σ , |v -v 0 | σ ≤ K then |F(x, v)| σ ≤ C(σ)K. Zehnder, in his paper, assumes the existence of an approximate right-inverse. The reason is that, in his works [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF] and [START_REF]Generalized implicit function theorems with applications to some small divisor problems[END_REF], he wants to apply generalized implicit function theorems to solve some small divisor problems. In particular, to prove Arnold's normal form theorem for vector fields on the torus and the KAM theorem. In the proof of the previous theorems, the author defines a functional F which does not admit a right-inverse but just an approximate right-inverse. Here, we do not have this problem; hence we prefer to write H.4 in this form. Theorem 6.1 (Zehnder). Let α, β, λ, ρ, γ and s be positive real numbers satisfying the following set of inequalities: 1 < β < 2, 1 < α, 1 ≤ γ ≤ ρ < λ < s, (6.11) λ > max{ 2βγ 2 -β , β(γ + ρβ)} (6.12) s > max{ αγ α -1 , λ + αγ β -1 }. (6.13) Proof of Theorem E Let (F, x 0 , v 0 ) be of order s and satisfy H.1-H.4 with a loss of γ. Then there exists ε 0 , depending on α, β, λ, γ, s and ζ, such that for all ε ≤ ε 0 we have the existence of an open neighborhood D λ ⊂ X λ of x 0 , D λ = {x ∈ X λ : |x -x 0 | λ ≤ ε} and a mapping ψ : D λ → V ρ such that F(x, ψ(x)) = 0, x ∈ D λ |ψ(x) -v 0 | ρ ≤ ζ, where ζ is the positive parameter defined by (6.10). Proof. The statement of the theorem is slightly different from the original. As mentioned before, Zehnder considers ζ = 1. For the sake of clarity, in what follows, we will report the parts that differ from the original proof. The proof uses an iteration technique similar to the Newton algorithm modified by a double C ∞ -smoothing. The first is introduced in V 0 to regain the loss of derivatives γ at each iteration step. The second approximates elements in D λ ⊂ X λ with smoother ones to keep the loss of regularity minimal in the X 0 space. Zehnder makes great use of Lemma 6.1, he estimates the lowest norms | • | 0 very carefully to keep them down and the highest norms | • | s are left to grow. Then, he uses the aforementioned Lemma 6.1 to estimate the intermediate norms. 
Let ε = υε 0 for some 0 < υ ≤ 1 and a sufficiently small parameter ε 0 to be determined later. We define D λ = {x ∈ X λ : |x -x 0 | λ ≤ υε 0 }. Following the lines of the proof of Zehnder, we define a sequence {φ j } j≥0 of linear mapping φ j : D λ → X ∞ in such a way that, φ 0 (x) = x 0 φ j (x) -x 0 = S τ j (x -x 0 ), for all j ≥ 1, where τ j = Q β j for some Q > 1 sufficiently large to be chosen later. Since β > 1 then τ j → +∞ if j → +∞. By the latter, we write φ j (x) -x in the following form φ j (x) -x = S τ j -1 (x -x 0 ). Thanks to (S2), for all 0 ≤ µ < λ |φ j (x) -x| µ = | S τ j -1 (x -x 0 )| µ ≤ τ µ-λ j C(λ, µ)|x -x 0 | λ ≤ τ µ-λ j C(λ, µ)υε 0 and taking the limit for j → +∞, we have lim j→+∞ |φ j (x) -x| µ = 0 for all 0 ≤ µ < λ. We construct inductively a sequence of mapping {ψ j } j≥0 , ψ j : D λ → V ∞ such that ψ 0 (x) = v 0 ψ j+1 (x) -ψ j (x) = S t j+1 η(φ j+1 (x), ψ j (x))F(φ j+1 (x), ψ j (x)) for all j ≥ 0, with t j = τ α j = Q αβ j . We use two different rates of approximations, S τ j and S t j , for the family of Banach spaces {(X σ , | • | σ )} σ≥0 and {(V σ , | • | σ )} σ≥0 , respectively. We shall show by induction that, if ε 0 is sufficiently small with respect to α, β, λ, γ, s and ζ, and x ∈ X λ satisfies |x -x 0 | λ ≤ υε 0 , then the following statements S(d) hold for d ≥ 1: 6 The Abstract Theorem S(d, 1) (φ d (x), ψ d (x)) ∈ O γ ζ ∩ (X ∞ × V ∞ ) and |F(φ d (x), ψ d (x))| 0 ≤ 1 2 υQ -λβ d S(d, 2) |ψ d (x) -ψ d-1 (x)| 0 ≤ CυQ -(λ-βγ)β d-1 , S(d, 3) |ψ d (x) -ψ d-1 (x)| s ≤ υQ (s-λ)β d+1 , for a suitable constant C. The only difference compared to the proof of Zehnder is in S(d, 1), where he considers ζ = 1. In what follows, we verify only the first part of S(d, 1). The rest follows from the original proof. We introduce the abbreviated notation x j = φ j (x) and v j = ψ j (x). We claim that x j ∈ O γ ζ ∩ X ∞ if ε 0 is sufficiently small. We remind that 1 ≤ γ < λ, then, by the definition of x j and (S1), |x j -x 0 | γ = |S τ j (x -x 0 )| γ ≤ C|x -x 0 | γ ≤ C|x -x 0 | λ for a suitable constant C. Letting Cε 0 < ζ, we have the claim. Zehnder proves the totality of the three statements S(d) by induction. Letting d = 1, S(1) follows from the smallness condition by choosing ε 0 sufficiently small with respect to α, β, λ, γ, s and ζ. Now, we assume S(d) for 1 ≤ d ≤ j and we prove S(j + 1). We do not provide any details concerning the proofs of S(j + 1, 2) and S(j + 1, 3) because they coincide with the original proof. We verify that |v j+1 -v 0 | γ < ζ. By S(j + 1, 2), S(j + 1, 3) and Lemma 6.1 |v j+1 -v j | γ ≤ C(γ, s)|v j+1 -v j | 1-γ s 0 |v j+1 -v j | γ s s ≤ υCQ -ξβ j for a suitable constant C and with ξ = λ -β(γ + βγ) + γ s (λ(β 2 -1) + βγ) > λ -β(γ + βγ) > λ -β(γ + βρ) where we have used γ ≤ ρ and β > 1. This means that, |v j+1 -v 0 | γ ≤ j d=0 |v d+1 -v d | γ ≤ Cυ d≥0 Q -ξβ d < ζ. for Q large enough. Zehnder distinguishes between the order of (F, x 0 , v 0 ) and the smoothness assumption quantified by λ. In his work, Zehnder states that the minimal order for which we can apply the previous theorem, and thus the minimal order which assures the convergence of the algorithm, is s ≥ 8γ, where we point out that s < ∞. Moreover, concerning the corresponding minimal smoothness assumption for λ, one has 3γ < λ < 4γ. Proof of Theorem E Corollary 6.1. 
For all s ≥ 8γ, the following holds: let λ(s) = 2γ + 14 γ 2 s , there exists in X λ(s) a neighborhood D λ(s) = {x ∈ X λ(s) : |x -x 0 | λ(s) ≤ ε(s, ζ)} and a mapping ψ s : D λ(s) → V γ such that, for all x ∈ D λ(s) , F(x, ψ s (x)) = 0, |ψ s (x) -v 0 | γ ≤ ζ. Proof. Taking α = 7 6 , β = 1 + 7γ 3s , ρ = γ, and λ = 2γ + 14 γ 2 s , these numbers satisfy the inequalities (6.11) -(6.13) if s ≥ 8γ and hence the result follows from Theorem 6.1. Outline of the Proof of Theorem E We are looking for a C ρ -weakly asymptotic cylinder ϕ associated to (X H , X h, ϕ 0 ), where H is the Hamiltonian defined by ( * E ), h the Hamiltonian in (6.9) and ϕ 0 the trivial embedding ϕ 0 : T n × R m → T n × R m × B, ϕ 0 (q) = (q, 0). More concretely, for given H, we are searching for v, Γ : T n × R m × J → R n+m such that for all (q, t) ∈ T n × R m × J ϕ(q, t) = (q, v(q, t)) and in such a way that ϕ, v and Γ satisfy X H (ϕ(q, t), t) -∂ q ϕ(q, t)(ω + Γ(q, t)) -∂ t ϕ(q, t) = 0, (6.14) lim t→+∞ |v t | C ρ = 0, lim t→+∞ |Γ t | C ρ = 0, for all (q, t) ∈ T n × R m × J. The vector ω = (ω, 0) ∈ R n+m where ω ∈ R n is the frequency vector introduced by ( * E ). As mentioned above, the proof rests on Theorem 6.1. To this end, we need to introduce a suitable functional F given by (6.14). First, we recall that m(q, p, t)p = 1 0 ∂ 2 p H(q, τ p, t)dτ p = ∂ p m(q, p, t) • p 2 . However, concerning the definition of the functional F, the Hamiltonian system associated to the Hamiltonian H is equal to X H (q, p, t) = ω + b(q, t) + m(q, p, t)p -∂ q a(q, t) -∂ q b(q, t)p -∂ q m(q, p, t)p 2 , where H is the Hamiltonian defined by ( * E ). Let φ(q, t) = (q, v(q, t), t) for all (q, t) ∈ T n × R m × J, then X H • φ takes the following form X H • φ(q, t) = ω + b(q, t) + m • φ(q, t) v(q, t) -∂ q a(q, t) -∂ q b(q, t)v(q, t) -∂ q m • φ(q, t)v 2 (q, t) 6 The Abstract Theorem for all (q, t) ∈ T n × R m × J. Moreover, for all (q, t) ∈ T n × R m × J, ∂ q ϕ(q, t) (ω + Γ(q, t)) + ∂ t ϕ(q, t) = ω + Γ(q, t) ∂ q v(q, t)(ω + Γ(q, t)) + ∂ t v(q, t) , and hence, we can rewrite (6.14) in the following form Γ -b -( m • φ) v ∂ q a + (∂ q b) v + ∂ q m • φ • v 2 -∂ q v(ω + Γ) -∂ t v = 0 0 . (6.15) This is composed of sums and products of functions defined on (q, t) ∈ T n ×R m ×J. We have omitted the arguments (q, t). We keep this notation for the rest of the proof. We define (∇v) Ω = (∂ q v) ω + ∂ t v and over suitable Banach spaces that we will specify later, let F be the following functional F(a, b, m, m, v) = (∇v) Ω + ∂ q v (b + ( m • φ) v) + ∂ q a + (∂ q b) v + ∂ q m • φ • v 2 . This is obtained by the second equation of (6.15), where we have replaced Γ with b + ( m • φ) v. This is our starting point. We observe that, for all b, m and m, F(0, b, m, m, 0) = 0. Therefore, we can reformulate our problem in the following form. For fixed m and m, for a suitable b 0 and for (a, b) sufficiently close to (0, b 0 ), we are looking for a function v : As one might expect, because of the term ∂ q v (b + ( m • φ) v), which does not appear in the proof of Theorem A, we are not able to prove Theorem E with the implicit function theorem. T n × R m × J → R n+m in The most technical part of the proof consists in showing that the differential of F with respect to the variable v admits a right-inverse. Let f = b + ( m • φ) v, g = ∂ q b + ∂ q v (∂ p m • φ) v + ∂ q v ( m • φ) + v T ∂ 2 pq m • φ v + 2 (∂ q m • φ) v, where T denotes the transpose. 
Over suitable Banach spaces, the differential of F with respect to the variable v is equal to D v F(a, b, m, v)v = (∇v) Ω + (∂ q v) f + gv. We will see that, assuming f and g sufficiently small, we are able to find a right inverse of the latter. The proof of Theorem E is split up into the following four sections. The first is dedicated to introducing the special one-parameter families of Banach spaces on which the functional F is defined. The second is about the solution of the homological equation. In other words, we prove the existence of a right-inverse for the latter differential. In the third section, we verify that the functional F satisfies the hypotheses of the Nash-Moser theorem proved by Zehnder (Theorem 6.1) and in the last section, we conclude the proof. Proof of Theorem E Preliminary Settings Given σ ≥ 0, we recall the following definition Definition. Let S σ be the space of functions f defined on T n × R m × B × J such that f t ∈ C σ (T n × R m × B) for all fixed t ∈ J and ∂ i (q,p) f ∈ C(T n × R m × B × J) for all 0 ≤ i ≤ [σ]. We use this notation also for functions defined on T n × R m × J. Given σ ≥ 0 and l ≥ 0, for every f ∈ S σ , we recall the definition of the following norm |f | σ,l = sup t∈J |f t | C σ t l . Some properties of this norm are contained in Proposition 6.1. Now, we consider the following families of Banach spaces {(A σ , |•| σ } σ≥0 , {(B σ , |• | σ )} σ≥0 ,{(V σ , | • | σ )} σ≥0 , {(Z σ , | • | σ )} σ≥0 (see Appendix C) such that, for all σ ≥ 0, A σ = a : T n × R m × J → R | a ∈ S σ+1 and |a| σ = |a| σ+1,0 + |∂ q a| σ,2 < ∞ B σ = b : T n × R m × J → R n+m | b ∈ S σ+1 and |b| σ = |b| σ+1,1 < ∞ V σ = v : T n × R m × J → R n+m | v ∈ S σ+1 , (∇v) Ω ∈ S σ and |v| σ = max{|v| σ+1,1 , | (∇v) Ω| σ,2 } < ∞ Z σ = z : T n × R m × J → R n+m | z ∈ S σ , and |z| σ = |z| σ,2 < ∞ It is straightforward to verify that, for all 0 ≤ σ ≤ σ < ∞, A 0 ⊇ A σ ⊇ A σ ⊇ A ∞ = σ≥0 A σ , B 0 ⊇ B σ ⊇ B σ ⊇ B ∞ = σ≥0 B σ , |a| σ ≤ |a| σ |b| σ ≤ |b| σ V 0 ⊇ V σ ⊇ V σ ⊇ V ∞ = σ≥0 V σ , Z 0 ⊇ Z σ ⊇ Z σ ⊇ Z ∞ = σ≥0 Z σ , |v| σ ≤ |v| σ |z| σ ≤ |z| σ for all a ∈ A σ , b ∈ B σ , v ∈ V σ and z ∈ Z σ . This part aims to prove the existence of a C ∞ -smoothing, see Definition 6.2, for these families of Banach spaces. This is not surprising because the behaviour of these norms is very similar to that of the Hölder norms. Lemma 6.2. There exists a C ∞ -smoothing for the latter families of Banach spaces. Proof. We begin by proving the existence of a C ∞ -smoothing for the family of Banach spaces {(Z σ , | • | σ )} σ≥0 . Following the lines of [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF], we take a function s ∈ C ∞ 0 (R n+m ) vanishing outside a compact set and identically equal to 1 in a neighbourhood of 0. Let s be its Fourier transform then, for all z ∈ S 0 , [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF]). Now, we verify that ∂ i q (S τ z) ∈ C(T n × R m × J) for all i ≥ 0. We observe that, for every m > 0 and p > 0, there exists a constant C(m, p) > 0 such that S τ z(q, t) = 1 τ n+m R n+m s q -ϑ τ z(ϑ, t)dϑ. For all fixed t ∈ J, S τ z t ∈ C ∞ (T n × R m ) = σ≥0 C σ (T n × R m ) (see |∂ m s(x)| ≤ C(m, p)(1 + |x|) -p , where ∂ m stands for partial derivatives of order m. The claim is a consequence of the regularity of z and the latter. 
Indeed, for all (q 1 , t 1 ), (q 2 , t 2 ) ∈ T n × R m × J and i ≥ 0, ∂ i q (S τ z) (q 1 , t 1 ) -∂ i q (S τ z) (q 2 , t 2 ) = 1 τ n+m+i R n+m ∂ i s q 1 -ϑ τ z(ϑ, t 1 )dϑ - 1 τ n+m+i R n+m ∂ i s q 2 -ϑ τ z(ϑ, t 2 )dϑ = 1 τ i R n+m ∂ i s(ρ) z(q 1 -ρτ, t 1 ) -z(q 2 -ρτ, t 2 ) dρ ≤ 1 τ i R n+m ∂ i s(ρ) z(q 1 -ρτ, t 1 ) -z(q 2 -ρτ, t 2 ) dρ where | • | stands for the standard Euclidean norm and, in the last line of the latter, we did the following change of coordinates q i -ϑ τ = ρ for i = 1, 2. This implies the claim. It remains to prove (S1) and (S2). For all z ∈ Z d , 0 ≤ d ≤ m and fixed t ∈ J |S τ z t | C m ≤ τ m-d C(m, d)|z t | C d (always look at [START_REF] Zehnder | Generalized implicit function theorems with applications to some small divisor problems[END_REF]), then |S τ z| m = sup t∈J |S τ z t | C m t 2 ≤ τ m-d C(m, d) sup t∈J |z t | C d t 2 = τ m-d C(m, d)|z| d and (S1) is verified. For all z ∈ Z m , 0 ≤ d ≤ m and fixed t ∈ J |(S τ -1)z t | C d ≤ τ -(m-d) C(m, d)|z t | C m (always see [Zeh75]), then |(S τ -1)z| d = sup t∈J |(S τ -1)z t | C d t 2 ≤ τ -(m-d) C(m, d) sup t∈J |z t | C m t 2 ≤ τ -(m-d) C(m, d)|z| m and (S2) is also verified. This implies the existence of a C ∞ -smoothing for {(Z σ , | • | σ )} σ≥0 . Remembering that S τ commutes with partial differential operators, similarly, we have the claim for the family of Banach spaces {(A σ , | • | σ )} σ≥0 and {(B σ , | • | σ )} σ≥0 . Proof of Theorem E It remains to prove the existence of a C ∞ -smoothing for {(V σ , | • | σ )} σ≥0 . Similarly to the previous case, because of S τ commutes with partial differential operators, S τ : V 0 → V ∞ . Now, we verify (S1) and (S2). We begin by remembering that, for all v ∈ V σ |v| σ = max{|v| σ+1,1 , | (∇v) Ω| σ,2 }. Similarly to the previous case, for all v ∈ V d , 0 ≤ d ≤ m and fixed t ∈ J |S τ v t | C m+1 ≤ τ m-d C(m, d)|v t | C d+1 , which implies |S τ v| m+1,1 = sup t∈J |S τ v t | C m+1 t ≤ τ m-d C(m, d) sup t∈J |v t | C d+1 t ≤ τ m-d C(m, d)|v| d . Noting that S τ commutes with partial differential operators, for fixed t ∈ J, |∇ S τ v t Ω| C m = |S τ ∇v t Ω | C m ≤ τ m-d C(m, d)|∇v t Ω| C d . Multiplying both sides of the latter by t 2 and taking the sup for all t ∈ J |∇ (S τ v) Ω| m,2 = sup t∈J |∇ S τ v t Ω| C m t 2 ≤ τ m-d C(m, d) sup t∈J |∇v t Ω| C d t 2 ≤ τ m-d C(m, d)|v| d . This implies (S1) because |S τ v| m = max{|S τ v| m+1,1 , |∇ (S τ v) Ω| m,2 } ≤ τ m-d C(m, d)|v| d . Concerning (S2), for all v ∈ V m , 0 ≤ d ≤ m and fixed t ∈ J, |(S τ -1)v t | C d+1 ≤ τ -(m-d) C(m, d)|v t | C m+1 . Multiplying both sides by t and taking the sup for all t ∈ J, |(S τ -1)v| d+1,1 = sup t∈J |(S τ -1)v t | C d+1 t ≤ τ -(m-d) C(m, d) sup t∈J |v t | C m+1 t ≤ τ -(m-d) C(m, d)|v| m+1,1 ≤ τ -(m-d) C(m, d)|v| m . The operator S τ commutes with partial differential operators, then for fixed t ∈ J, |∇ (S τ -1)v t Ω| C d = |(S τ -1) ∇v t Ω | C d ≤ τ -(m-d) C(m, d)|∇v t Ω| C m and hence |∇ ((S τ -1)v) Ω| d,2 = sup t∈J |∇ (S τ -1)v t Ω| C d t 2 ≤ τ -(m-d) C(m, d) sup t∈J |∇v t Ω| C m t 2 ≤ τ -(m-d) C(m, d)|∇v Ω| m,2 ≤ τ -(m-d) C(m, d)|v| m . This concludes the proof of this lemma because | (S τ -1) v| d = max{|(S τ -1)v| d+1,1 , |∇ ((S τ -1)v) Ω| d,2 } ≤ τ -(m-d) C(m, d)|v| m . Homological Equation We introduce some fundamental Gronwall-type inequalities that we widely use in this section. Proposition 6.2. Let J be an interval in R, t 0 ∈ J, and a, b, u ∈ C(J) continuous positive functions. for all t ≥ t 0 . 
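In the estimates below, (6.17) is used in the classical exponential form: if u(t) ≤ a(t) + |∫_{t0}^t b(s) u(s) ds| with a monotone in the direction of integration, then u(t) ≤ a(t) e^{|∫_{t0}^t b(s) ds|}; while (6.16) refers to the refined form u(t) ≤ a(t) + |∫_{t0}^t a(s) b(s) e^{|∫_s^t b(δ) dδ|} ds|, valid without monotonicity of a. A quick numerical sanity check of the exponential form, with a and b mimicking the ones appearing in Lemma 6.3 (the discretization and the specific choices are illustrative assumptions):

```python
import numpy as np

# Toy check of the exponential Gronwall bound with b(t) = mu / t, as in the flow estimates of Lemma 6.3.
t0, T, mu, n = 1.0, 50.0, 0.3, 200000
t = np.linspace(t0, T, n)
a = 1.0 + mu * np.log(t / t0)          # non-decreasing a(t)
b = mu / t

# u solves the integral equation u(t) = a(t) + int_{t0}^t b(s) u(s) ds, i.e. u' = a' + b u, u(t0) = a(t0).
u = np.empty(n)
u[0] = a[0]
dt = t[1] - t[0]
aprime = mu / t
for k in range(n - 1):
    u[k + 1] = u[k] + dt * (aprime[k] + b[k] * u[k])   # explicit Euler

bound = a * (t / t0) ** mu             # a(t) * exp(int_{t0}^t b) = a(t) * (t/t0)^mu
print("Gronwall bound holds:", np.all(u <= bound + 1e-4))   # tolerance covers the Euler error
```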
Given σ ≥ 1, µ > 0 and ω ∈ R n , this section is devoted to the solution of the following equation for the unknown κ : T n × R m × J -→ R n+m                ∂ q κ(q, t) (ω + f (q, t)) F (q,t) +∂ t κ(q, t) + g(q, t)κ(q, t) = z(q, t) f ∈ Sσ+1,1 , g, z ∈ S σ , |f | 1,1 ≤ µ |g| 1,1 ≤ µ, |z| σ,2 < ∞, |f | σ,1 < ∞ |g| σ,1 < ∞ (HE E ) where ω = (ω, 0) ∈ R n+m and | • | σ,l is the norm defined by (6.8). The functions f : T n ×R m ×J -→ R n+m , z : T n ×R m ×J -→ R n+m and g : T n ×R m ×J -→ M n+m are given, where M n+m is the set of (n + m)-dimensional matrices. For the sake of 100 6.3 Proof of Theorem E clarity, we recall that ∂ q κ(q, t) (ω + f (q, t)) is a vector in R n+m with j component equal to ∂ q κ(q, t) (ω + f (q, t)) j = ∂ q κ j (q, t) • (ω + f (q, t)) for all 1 ≤ j ≤ n + m. If µ = 0, the problem takes the easier form ∂ q κ(q, t)ω + ∂ t κ(q, t) = z(q, t) z ∈ S σ , |z| σ,2 < ∞. (6.18) When m = 0, this is the homological equation we have already studied in the third chapter of this thesis (Lemma 3.1). Letting m = 0, following the lines of the proof of Lemma 3.1, we have the following lemma Lemma. There exists a unique solution κ ∈ S σ of (6.18) such that lim t→+∞ |κ t | C 0 = 0. Moreover, |κ| σ,1 ≤ |z| σ,2 . We begin with several estimates. We define ψ t t 0 as the flow at time t with initial time t 0 of F (q, t). We continue to use the notation previously introduced where C(•) means constants depending on n, m and we indicate in brackets the other parameters on which these constants depend. On the other hand, C stands for constants depending only on n + m. We recall that σ ≥ 1 and we have the following estimates. Lemma 6.3. For all t, t 0 ∈ J, if t ≥ t 0 |∂ q ψ t t 0 | C σ-1 ≤ C(σ) 1 + |f | σ,1 ln t t 0 t t 0 cσµ , (6.19) whereas if t ≤ t 0 |∂ q ψ t t 0 | C σ-1 ≤ C(σ) 1 + |f | σ,1 ln t 0 t t 0 t cσµ . (6.20) with a positive constant c σ depending on n + m and σ. Before the proof of this lemma, we observe that when σ = 1 and t ≥ t 0 , by (6.19), we have |∂ q ψ t t 0 | C 0 ≤ C 1 + |f | 1,1 ln t t 0 t t 0 c 1 µ ≤ C 1 + ln t t 0 µ t t 0 c 1 µ ≤ C t t 0 c1 µ for a suitable constant C depending on n + m and c1 > c 1 . Similarly, when t 0 ≥ t, |∂ q ψ t t 0 | C 0 ≤ C t 0 t c1 µ . 6 The Abstract Theorem Proof. For all q ∈ T n × R m , we can write ψ t t 0 in the following form ψ t t 0 (q) = q + t t 0 F τ • ψ τ t 0 (q)dτ as a consequence of the fundamental theorem of calculus. Taking the derivative with respect to q ∂ q ψ t t 0 (q) = Id + t t 0 ∂ q f τ • ψ τ t 0 (q) dτ, where Id stands for the identity matrix. We assume t ≥ t 0 , and then the norm C σ-1 of the left-hand side of the latter can be estimated as follows |∂ q ψ t t 0 | C σ-1 ≤ 1 + t t 0 f τ • ψ τ t 0 C σ dτ. (6.21) Case σ = 1. By Proposition A.2 |∂ q ψ t t 0 | C 0 ≤ 1 + C t t 0 |f τ | C 0 dτ + C t t 0 |f τ | C 1 |∂ q ψ τ t 0 | C 0 dτ for a suitable constant C. We can easily estimate the first integral of the latter t t 0 |f τ | C 0 dτ ≤ µ t t 0 1 τ dτ = ln t t 0 µ and hence |∂ q ψ t t 0 | C 0 ≤ 1 + ln t t 0 Cµ + C t t 0 |f τ | C 1 |∂ q ψ τ t 0 | C 0 dτ. We know that |f | 1,1 ≤ µ and thus by Gronwall's inequalities (6.17) |∂ q ψ t t 0 | C 0 ≤ 1 + ln t t 0 Cµ e C t t 0 |f τ | C 1 dτ ≤ 1 + ln t t 0 Cµ e ln t t 0 Cµ ≤ C t t 0 c1 µ for a suitable constant c1 ≥ 1. Case σ > 1. Similarly to the previous case, by Proposition A.2 |∂ q ψ t t 0 | C σ-1 ≤ 1+C(σ) t t 0 |f τ | C 0 dτ + t t 0 |f τ | C σ |∂ q ψ τ t 0 | σ C 0 dτ + t t 0 |f τ | C 1 |∂ q ψ τ t 0 | C σ-1 dτ . We have to estimate the first two integrals. 
We have already calculated the first one, t t 0 |f τ | C σ |∂ q ψ τ t 0 | σ C 0 dτ ≤ C(σ) t t 0 |f | σ,1 τ τ t 0 c1 σµ dτ ≤ C(σ)|f | σ,1 t t 0 c1 σµ t t 0 τ -1 dτ = C(σ)|f | σ,1 ln t t 0 t t 0 c1 σµ . Proof of Theorem E In the second line of the latter, rather than directly integrating the member in the first line, we prefer using the trivial inequality τ t 0 c1 σµ ≤ t t 0 c1 σµ to avoid the term 1 µ because we do not assume that µ is not zero. Thus, we can estimate |∂ q ψ t t 0 | C σ-1 as follows |∂ q ψ t t 0 | C σ-1 ≤ 1 + C(σ) ln t t 0 µ + C(σ)|f | σ,1 ln t t 0 t t 0 c1 σµ + C(σ) t t 0 |f τ | C 1 |∂ q ψ τ t 0 | C σ dτ. Furthermore, thanks to Gronwall's inequality (6.17) |∂ q ψ t t 0 | C σ-1 ≤ 1 + C(σ) ln t t 0 µ + C(σ)|f | σ,1 ln t t 0 t t 0 c1 σµ e C(σ) t t 0 |f τ | C 1 dτ ≤ 1 + C(σ) ln t t 0 µ + C(σ)|f | σ,1 ln t t 0 t t 0 c1 σµ e ln t t 0 C(σ)µ ≤ C(σ) t t 0 µ + |f | σ,1 ln t t 0 t t 0 c1 σµ t t 0 C(σ)µ ≤ C(σ) 1 + |f | σ,1 ln t t 0 t t 0 cσµ , for a suitable constant c σ ≥ c1 σ. This concludes the proof when t ≥ t 0 . Similarly, we have the other case. We can see that the constant c σ ≥ c 1 σ. This means that c σ goes to infinity when σ → ∞. We consider R : T n × R m × J × J → M n+m , where M n+m is the set of the (n + m)-dimensional matrices. For all (q, τ, t) ∈ T n × R m × J × J, let R(q, t, τ ) = {r ij (q, t, τ )} 1≤i,j≤n+m . We define the following family of norms |R t τ | C s = max 1≤i,j≤n |r ij (q, t, τ )| C s , for positive real parameters s ≥ 0. We consider the following system Ṙ(q, t, τ ) = -g(ψ t t 0 (q), t)R(q, t, τ ) R(q, τ, τ ) = Id (R) where g is introduced in (HE E ). In what follows, for fixed t, τ ∈ J, we denote R t τ (q) = R(q, t, τ ). The following lemma is dedicated to studying the latter system and providing proper estimations of the solutions. Lemma 6.4. The latter system admits a unique solution. Moreover, for all τ , t ∈ J with τ ≥ t, letting R(q, t, τ ) = R(ψ t 0 τ (q), t, τ ), we have the following estimates |R t τ | C 0 ≤ τ t c R 0 µ (6.22) | Rt τ | C σ ≤ C(σ) 1 + (|f | σ,1 + |g| σ,1 ) ln τ t τ t c R σ µ (6.23) 103 6 The Abstract Theorem with a positive constant c R σ depending on n and σ, where σ and µ are those defined by (HE E ). Also in this case, before the proof, we observe that when σ = 1, by (6.23), we have the following estimate | Rt τ | C 1 ≤ C 1 + (|f | 1,1 + |g| 1,1 ) ln τ t τ t c R 1 µ ≤ C 1 + ln τ t µ τ t c 1 µ ≤ C τ t cR 1 µ for a suitable constant C depending on n + m and cR 1 > c R 1 . Proof. For all q ∈ T n × R m , by the theorem of existence and uniqueness, a unique solution of (R) exists. It remains to prove the estimates. We begin with the first and then we verify the other. Similarly to the proof of the previous lemma, by the fundamental theorem of calculus, we can write R in the following form R t τ (q) = Id + τ t g s • ψ s t 0 (q) R s τ (q)ds (6.24) for all q ∈ T n × R m and t, τ ∈ J with τ ≥ t. Taking the norm C 0 on the left-hand side of the latter, we obtain |R t τ | C 0 ≤ 1 + τ t |g s • ψ s t 0 R s τ | C 0 ds ≤ 1 + C τ t |g s | C 0 |R s τ | C 0 ds, for a suitable constant C depending on n + m. Thanks to Gronwall's inequality (6.17) and remembering that |g| 1,1 ≤ µ, |R t τ | C 0 ≤ e Cµ τ t 1 s ds ≤ τ t Cµ . Hence, letting c R 0 = C, (6.22) is proved. It remains to prove (6.23). The composition of R t τ with ψ t 0 τ (q) is R t τ • ψ t 0 τ (q) = Rt τ (q) = Id + τ t (g s • ψ s τ (q)) Rs τ (q)ds (6.25) for all q ∈ T n × R m and t, τ ∈ J with τ ≥ t. 
For σ ≥ 1, we can estimate the C σ norm of the right-hand side of the latter as follows | Rt τ | C σ ≤ 1 + τ t | (g s • ψ s τ ) Rs τ | C σ ds. (6.26) First of all, we estimate the norm into the integral. To this end, we use the properties in Proposition A.2. | (g s • ψ s τ ) Rs τ | C σ ≤ C(σ) |g s • ψ s τ | C σ | Rs τ | C 0 + |g s • ψ s τ | C 0 | Rs τ | C σ |g s • ψ s τ | C σ ≤ C(σ) (|g s | C σ |∂ q ψ s τ | σ C 0 + |g s | C 1 |∂ q ψ s τ | C σ-1 + |g s | C 0 ) 6.3 Proof of Theorem E and, thanks to the latter and (6.26), we can estimates | Rt τ | C σ in the following way | Rt τ | C σ ≤ 1 + C(σ) τ t |g s | C σ |∂ q ψ s τ | σ C 0 | Rs τ | C 0 ds + C(σ) τ t |g s | C 1 |∂ q ψ s τ | C σ-1 | Rs τ | C 0 ds + C(σ) τ t |g s | C 0 | Rs τ | C 0 ds + C(σ) τ t |g s | C 0 | Rs τ | C σ ds. Now, by (6.22) and Lemma 6.3, we can find upper bounds for the first three integrals on the right-hand side of the previous inequality τ t |g s | C σ |∂ q ψ s τ | σ C 0 | Rs τ | C 0 ds ≤ C(σ)|g| σ,1 τ t s -1 τ s c1 σµ τ s c R 0 µ ds ≤ C(σ)|g| σ,1 τ t (c1σ+c R 0 )µ τ t s -1 ds = C(σ)|g| σ,1 ln τ t τ t (c1σ+c R 0 )µ τ t |g s | C 1 |∂ q ψ s τ | C σ-1 | Rs τ | C 0 ds ≤ C(σ) τ t |g| 1,1 s 1 + |f | σ,1 ln τ s τ s (cσ+c R 0 )µ ds ≤ C(σ)µ τ t 1 s τ s (cσ+c R 0 )µ ds + C(σ)|f | σ,1 µ ln τ t τ t 1 s τ s (cσ+c R 0 )µ ds = C(σ) c σ + c R 0 τ t (cσ+c R 0 )µ -1 + C(σ) c σ + c R 0 |f | σ,1 ln τ t τ t (cσ+c R 0 )µ -1 τ t |g s | C 0 | Rs τ | C 0 ds ≤ µ τ t s -1 τ s c R 0 µ ds ≤ 1 c R 0 τ t c R 0 µ -1 . Similarly to the previous lemma, in the second line of the latter, we use the trivial estimate τ s (c1σ+c R 0 )µ ≤ τ t (c1σ+c R 0 )µ . Therefore, remembering that c1 σ ≤ c σ , | Rt τ | C σ ≤ 1 + C(σ)|g| σ,1 ln τ t τ t (c1σ+c R 0 )µ + C(σ) τ t (cσ+c R 0 )µ -1 + C(σ)|f | σ,1 ln τ t τ t (cσ+c R 0 )µ -1 + C(σ) τ t c R 0 µ -1 + C(σ) τ t |g s | C 0 | Rs τ | C σ ds ≤ C(σ) 1 + (|f | σ,1 + |g| σ,1 ) ln τ t τ t (cσ+c R 0 )µ + C(σ) t τ |g s | C 0 | Rs τ | C σ ds . 6 The Abstract Theorem Now, let a(t) = C(σ) 1 + (|f | σ,1 + |g| σ,1 ) ln τ t τ t (cσ+c 0 R )µ , we can rewrite the latter as follows | Rt τ | C σ ≤ a(t) + C(σ) t τ |g s | C 0 | Rs τ | C σ ds . We observe that a is a monotone decreasing function, and thus by the more general inequality (6.16) | Rt τ | C σ ≤ a(t) + C(σ) t τ a(s)|g s | C 0 e |C(σ) t s |g δ | C 0 dδ| ds ≤ a(t) + C(σ)a(t) τ t µ s e ln( s t ) C(σ)µ ds = a(t) + C(σ)a(t) τ t µ s s t C(σ)µ ds = a(t) 1 + C(σ) τ t C(σ)µ -1 ≤ C(σ)a(t) τ t C(σ)µ ≤ C(σ) 1 + (|f | σ,1 + |g| σ,1 ) ln τ t τ t c R σ µ for a suitable c R σ ≥ c σ + c 0 R . As for c σ of the previous lemma, we note that the constant c R σ goes to infinity if σ → ∞. To solve the homological equation, we must counter the growth of c R σ and c σ taking µ sufficiently small. It will be clear from the following lemma. Lemma 6.5 (Homological equation). There exists a solution κ ∈ S σ , (∇κ) Ω ∈ S σ-1 of (HE E ). Moreover, letting c κ σ = max{c R 0 + c σ , c R σ + c1 σ, cR 1 + c σ }, if µ < 1 c κ σ (6.27) then |κ| σ,1 ≤ C(σ) |z| σ,2 1 -c κ σ µ + C(σ) |f | σ,1 + |g| σ,1 (1 -c κ σ µ) 2 |z| 1,2 . (6.28) Proof. Existence: For fixed t 0 ∈ J, let us define the following transformation h : T n × R m × J -→ T n × R m × J h(q, t) = (ψ t 0 t ( q), t) where ψ t t 0 is the flow at time t with initial time t 0 of F (q, t) previously defined. We claim that it is enough to prove the first part of this lemma for the much simpler equation ∂ t κ(q, t) + g • h -1 (q, t)κ(q, t) = z • h -1 (q, t). (6.29) If κ is a solution of the latter, then κ = κ • h is a solution of (HE E ) and viceversa. 
For the sake of clarity, we prove this claim. Let κ be a solution of (HE E ), ∂ t (κ • h -1 ) + g • h -1 κ • h -1 = ∂ q κ • h -1 ∂ t ψ t t 0 + ∂ t κ • h -1 + g • h -1 κ • h -1 = ∂ q κ • h -1 F • h -1 + ∂ t κ • h -1 + g • h -1 κ • h -1 = ((∂ q κ) F + ∂ t κ + gκ) • h -1 = z • h -1 . 106 6.3 Proof of Theorem E Since (HE E ), we have the last equality of the latter. This implies that κ • h -1 is a solution for (6.29). Let us first show that ∂ q ψ t 0 t F + ∂ t ψ t 0 t = 0. We consider the following trivial equality ψ t t 0 • ψ t 0 t (q) = q, (6.30) for all t, t 0 ∈ J and q ∈ T n . Differentiating both sides of the latter with respect to the variable q ∈ T n , we obtain ∂ q ψ t t 0 • ψ t 0 t (q)∂ q ψ t 0 t (q) = Id and by the above equation ∂ q ψ t 0 t (q) = ∂ q ψ t t 0 • ψ t 0 t (q) -1 . (6.31) Taking the derivative with respect to t on both sides of (6.30) 0 = d dt ψ t t 0 • ψ t 0 t (q) = ∂ q ψ t t 0 • ψ t 0 t (q)∂ t ψ t 0 t (q) + ∂ t ψ t t 0 • ψ t 0 t (q). Hence, ∂ t ψ t 0 t (q) is equal to ∂ t ψ t 0 t (q) = -∂ q ψ t t 0 • ψ t 0 t (q) -1 ∂ t ψ t t 0 • ψ t 0 t (q). (6.32) Therefore, thanks to (6.31) and (6.32), we can rewrite ∂ q ψ t 0 t F + ∂ t ψ t 0 t = 0 in the following form ∂ q ψ t 0 t (q)F (q, t) + ∂ t ψ t 0 t (q) = ∂ q ψ t t 0 • ψ t 0 t (q) -1 F (q, t) -∂ t ψ t t 0 • ψ t 0 t (q) for all t, t 0 ∈ J and q ∈ T n . This implies the claim because F (q, t) -∂ t ψ t t 0 • ψ t 0 t (q) = F (q, t) -F (ψ t t 0 • ψ t 0 t (q), t) = 0 for all t, t 0 ∈ J and q ∈ T n . Now, let κ be a solution of (6.29) ∂ q (κ • h)F + ∂ t (κ • h) + g(κ • h) = (∂ q κ • h) ∂ q ψ t 0 t F + ∂ t ψ t 0 t + ∂ t κ • h + g(κ • h) = ∂ t κ • h + g(κ • h) = z. Hence κ • h is a solution of (HE E ), where the last equality of the latter is a consequence of (6.29). Now, we can reduce the proof of the first part of this lemma by studying the existence of a solution for the easier equation (6.29). For all (q, t 0 ) ∈ T n × R m × J, let R(q, t, t 0 ) be the unique solution of (R). Then, a solution κ of the above equation exists and κ(q, t) = R(q, t, t 0 )e(q) - t t 0 R(q, t, τ )z • h -1 (q, τ )dτ = R(q, t, t 0 ) e(q) - t t 0 R(q, t 0 , τ )z • h -1 (q, τ )dτ 107 6 The Abstract Theorem with a function e defined on T n × R m . Estimates: We choose e in such a way that e(q) = +∞ t 0 R(q, t 0 , τ )z • h -1 (q, τ )dτ. It is well defined because by Lemma 6.4 and (6.27), +∞ t 0 R(q, t 0 , τ )z • h -1 (q, τ )dτ ≤ C +∞ t 0 |R t 0 τ | C 0 |z τ | C 0 dτ ≤ C +∞ t 0 τ t 0 c R 0 µ |z| 0,2 τ 2 dτ = C |z| 0,2 t c R 0 µ 0 +∞ t 0 τ c R 0 µ-2 dτ = C |z| 0,2 1 -c R 0 µ 1 t 0 . Furthermore, κ(q, t) = κ • h(q, t) = - +∞ t R t τ • ψ t 0 t (q)z τ • ψ τ t (q)dτ = - +∞ t R t τ • ψ t 0 τ • ψ τ t (q)z τ • ψ τ t (q)dτ = - +∞ t Rt τ • ψ τ t (q)z τ • ψ τ t (q)dτ is the solution of (HE E ) we are looking for. The estimate (6.28) is a consequence of Proposition A.2, the two previous lemmas and (6.27). For all t ∈ J and by Proposition A.2, we can estimate |κ t | C σ as follows |κ t | C σ ≤ C(σ) +∞ t |R t τ | C 0 |z τ • ψ τ t | C σ + | Rt τ • ψ τ t | C σ |z τ | C 0 dτ. 
Moreover, |z τ • ψ τ t | C σ ≤ C(σ) (|z τ | C σ |∂ q ψ τ t | σ C 0 + |z τ | C 1 |∂ q ψ τ t | C σ-1 + |z τ | C 0 ) | Rt τ • ψ τ t | C σ ≤ C(σ) | Rt τ | C σ |∂ q ψ τ t | σ C 0 + | Rt τ | C 1 |∂ q ψ τ t | C σ-1 + |R t τ | C 0 and replacing the latter into the above integral |κ t | C σ ≤ C(σ) +∞ t |R t τ | C 0 |z τ | C σ |∂ q ψ τ t | σ C 0 dτ + C(σ) +∞ t |R t τ | C 0 |z τ | C 1 |∂ q ψ τ t | C σ-1 dτ + C(σ) +∞ t | Rt τ | C σ |∂ q ψ τ t | σ C 0 |z τ | C 0 dτ + C(σ) +∞ t | Rt τ | C 1 |∂ q ψ τ t | C σ-1 |z τ | C 0 dτ + C(σ) +∞ t |R t τ | C 0 |z τ | C 0 dτ. Proof of Theorem E Now, we have to estimate each integral on the right-hand side of the latter. First, we observe that, for all t ∈ J and x < 1 +∞ t τ x-2 ln τ t dτ = 1 1 -x +∞ t τ x-2 dτ. It is obtained by integrating by part. Then, using Lemma 6.3, Lemma 6.4, (6.27) and the latter +∞ t |R t τ | C 0 |z τ | C σ |∂ q ψ τ t | σ C 0 dτ ≤ C(σ) +∞ t |z| σ,2 τ 2 τ t (c R 0 +c 1 σ)µ dτ = C(σ) |z| σ,2 t (c R 0 +c 1 σ)µ +∞ t τ (c R 0 +c 1 σ)µ-2 dτ = C(σ) |z| σ,2 1 -(c R 0 + c1 σ) µ 1 t +∞ t |R t τ | C 0 |z τ | C 1 |∂ q ψ τ t | C σ-1 dτ ≤ C(σ) +∞ t |z| 1,2 τ 2 1 + |f | σ,1 ln τ t τ t (c R 0 +cσ)µ dτ = C(σ) +∞ t |z| 1,2 τ 2 τ t (c R 0 +cσ)µ dτ + C(σ) +∞ t |z| 1,2 τ 2 |f | σ,1 ln τ t τ t (c R 0 +cσ)µ dτ = C(σ) |z| 1,2 1 -(c R 0 + c σ )µ 1 t + C(σ) |z| 1,2 |f | σ,1 1 -(c R 0 + c σ )µ 1 t (c R 0 +cσ)µ +∞ t τ (c R 0 +cσ)µ-2 dτ = C(σ) |z| 1,2 1 -(c R 0 + c σ )µ 1 t + C(σ) |z| 1,2 |f | σ,1 (1 -(c R 0 + c σ )µ) 2 1 t +∞ t |R t τ | C 0 |z τ | C 0 dτ ≤ C +∞ t |z| 0,2 τ 2 τ t c R 0 µ dτ = C |z| 0,2 1 -c R 0 µ 1 t +∞ t | Rt τ | C σ |∂ q ψ τ t | σ C 0 |z τ | C 0 dτ ≤ C(σ) +∞ t 1 + (|f | σ,1 + |g| σ,1 ) ln τ t |z| 0,2 τ 2 τ t (c R σ +c 1 σ)µ dτ = C(σ) +∞ t |z| 0,2 τ 2 τ t (c R σ +c 1 σ)µ dτ + C(σ) (|f | σ,1 + |g| σ,1 ) +∞ t ln τ t |z| 0,2 τ 2 τ t (c R σ +c 1 σ)µ dτ = C(σ) |z| 0,2 1 -(c R σ + c1 σ)µ 1 t + C(σ) |z| 0,2 (|f | σ,1 + |g| σ,1 ) (1 -(c R σ + c1 σ)µ) 2 1 t 109 6 The Abstract Theorem +∞ t | Rt τ | C 1 |∂ q ψ τ t | C σ-1 |z τ | C 0 dτ ≤ C(σ) +∞ t |z| 0,2 τ 2 1 + |f | σ,1 ln τ t τ t (c R 1 +cσ)µ dτ = C(σ) +∞ t |z| 0,2 τ 2 τ t (c R 1 +cσ)µ dτ + C(σ) +∞ t |z| 0,2 τ 2 |f | σ,1 ln τ t τ t (c R 1 +cσ)µ dτ = C(σ) |z| 0,2 1 -(c R 1 + c σ ) µ 1 t + C(σ) |z| 0,2 |f | σ,1 (1 -(c R 1 + c σ ) µ) 2 1 t Then, thanks to the latter |κ t | C σ t ≤ C(σ) |z| σ,2 1 -(c R 0 + c1 σ) µ + |z| 1,2 1 -(c R 0 + c σ )µ + |z| 1,2 |f | σ,1 (1 -(c R 0 + c σ )µ) 2 + |z| 0,2 1 -c R 0 µ + |z| 0,2 1 -(c R σ + c1 σ)µ + |z| 0,2 (|f | σ,1 + |g| σ,1 ) (1 -(c R σ + c1 σ)µ) 2 + |z| 0,2 1 -(c R 1 + c σ ) µ + |z| 0,2 |f | σ,1 (1 -(c R 1 + c σ ) µ) 2 ≤ C(σ) |z| σ,2 1 -c κ σ µ + C(σ) (|f | σ,1 + |g| σ,1 ) (1 -c κ σ µ) 2 |z| 1,2 for all t ∈ J, where we recall that c κ σ = max{c R 0 + c σ , c R σ + c1 σ, cR 1 + c σ } . Taking the sup for all t ∈ J on the left-hand side of the latter, we conclude the proof. Regularity of F We begin this part by reminding the following families of Banach spaces introduced in Section 6. 3.3 {(A σ , | • | σ } σ≥0 , {(B σ , | • | σ )} σ≥0 ,{(V σ , | • | σ )} σ≥0 , {(Z σ d , | • | σ )} σ≥0 such that, for all σ ≥ 0, A σ = a : T n × R m × J → R | a ∈ S σ+1 and |a| σ = |a| σ+1,0 + |∂ q a| σ,2 < ∞ B σ = b : T n × R m × J → R n+m | b ∈ S σ+1 and |b| σ = |b| σ+1,1 < ∞ V σ = v : T n × R m × J → R n+m | v ∈ S σ+1 , (∇v) Ω ∈ S σ and |v| σ = max{|v| σ+1,1 , | (∇v) Ω| σ,2 } < ∞ Z σ = z : T n × R m × J → R n+m | z ∈ S σ , and |z| σ = |z| σ,2 < ∞ where (∇v) Ω = (∂ q v) ω + ∂ t v. 
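Before analysing the regularity of F, the representation of the solution of (HE E ) obtained in the previous section can be made concrete on a scalar model. In the sketch below (n + m = 1; the frequency ω and the data f, g, z are illustrative assumptions, not those of ( * E )), κ(q, t) is computed as -∫_t^{+∞} of z transported along the characteristics of ω + f and weighted by the factor built from g, as in the formula κ = -∫_t^{+∞} R̃^t_τ ∘ ψ^τ_t z^τ ∘ ψ^τ_t dτ appearing in the proof of Lemma 6.5; the script then checks the equation ∂_q κ (ω + f) + ∂_t κ + g κ = z by finite differences.

```python
import numpy as np

# Scalar model of (HE_E): d_q kappa * (omega + f) + d_t kappa + g * kappa = z  (toy data below).
omega = 1.3
f = lambda q, t: 0.10 * np.sin(q) / (1.0 + t)          # plays the role of f, |f|_{1,1} small
g = lambda q, t: 0.10 * np.cos(q) / (1.0 + t)          # plays the role of g, |g|_{1,1} small
z = lambda q, t: np.cos(q) / (1.0 + t) ** 3            # decaying right-hand side

def kappa(q, t, T=200.0, dtau=0.005):
    # kappa(q, t) = - int_t^T exp( int_t^tau g along the characteristic ) * z(characteristic, tau) dtau
    n = int((T - t) / dtau)
    taus = t + dtau * np.arange(n + 1)
    qs, Gs = np.empty(n + 1), np.empty(n + 1)
    qs[0], Gs[0] = q, 0.0
    for k in range(n):                                  # Heun step for the characteristic dq/ds = omega + f
        s, qk = taus[k], qs[k]
        k1 = omega + f(qk, s)
        k2 = omega + f(qk + dtau * k1, s + dtau)
        qs[k + 1] = qk + 0.5 * dtau * (k1 + k2)
        Gs[k + 1] = Gs[k] + 0.5 * dtau * (g(qk, s) + g(qs[k + 1], s + dtau))
    return -np.trapz(np.exp(Gs) * z(qs, taus), taus)

q0, t0, h = 0.5, 2.0, 1e-3
residual = ((kappa(q0 + h, t0) - kappa(q0 - h, t0)) / (2 * h) * (omega + f(q0, t0))
            + (kappa(q0, t0 + h) - kappa(q0, t0 - h)) / (2 * h)
            + g(q0, t0) * kappa(q0, t0) - z(q0, t0))
print("residual:", residual)                            # close to 0
print("t * |kappa|:", (1.0 + t0) * abs(kappa(q0, t0)))  # stays bounded, consistent with |kappa|_{sigma,1} < infinity
```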
Let s and Υ be the positive parameters introduced by ( * E ), we define the following Banach space M = m : T n × R m × B × J → M n+m | m ∈ S s+1 and |m| = |m| s+1,0 ≤ Υ where M n+m is the set of (n + m)-dimensional matrices. We consider an additional family of Banach spaces {(X σ , | • | σ )} σ≥0 such that, for all σ ≥ 0, X σ = A σ × B σ and for all x ∈ X σ , |x| σ = max{|a| σ , |b| σ }. Now, we have everything we need to define the functional F more precisely. Let F be the following functional F : X 0 × M × M × V 0 -→ Z 0 F(x, m, m, v) = (∇v) Ω + ∂ q v (b + ( m • φ) v) + ∂ q a + (∂ q b) v + ∂ q m • φ • v 2 . Thanks to Proposition 6.1, it is straightforward to verify that F is well defined. We observe that, for all (b, m, m) ∈ B 0 × M × M, letting x 0 = (0, b), F(x 0 , m, m, 0) = 0. Let δ and b 0 be as in ( * E ). Obviously, b 0 ∈ B s ⊂ B 0 . Moreover, we recall that b 0 satisfies the following estimate |b 0 | 1 < δ. We fix x 0 = (0, b 0 ). For all σ ≥ 0 and for a suitable parameter 0 < ζ < 1, that we will choose sufficiently small with respect to δ in Lemma 6.6, we define the following subset of X σ × V σ O σ ζ = {(x, v) ∈ X σ × V σ : |x -x 0 | σ , |v| σ < ζ} ⊂ X σ × V σ . Let m, m ∈ M be as in ( * E ), we consider the following functional F m, m : O 0 ζ -→ Z 0 such that, for all (x, v) ∈ O 0 ζ , F m, m(x, v) = F(x, m, m, v). It is well defined, continuous and F m, m(x 0 , 0) = 0, where x 0 = (0, b 0 ) is the element of X 0 previously introduced. This section aims to prove that F m, m satisfies hypotheses H.1-H.4 of Theorem 6.1. Then, Zehnder's Theorem ensures the existence of a C ρ -weakly asymptotic cylinder associated to (X H , X h, ϕ 0 ). In the proof of the following lemma, we widely use the properties contained in Proposition 6.1 without specifying them each time. Lemma 6.6. F m, m satisfies hypotheses H.1-H.4 of Theorem 6.1. Proof. H.1. Smoothness: F m, m(x, •) : V 0 -→ Z 0 is two times differentiable with respect to the variable v and D v F m, m(x, v)v = (∇v) Ω + (∂ q v) f + gv, D 2 v F m, m(x, v)(v, ṽ) = (∂ q v) f ṽ + (∂ q ṽ) f v + ṽT gv, with f = b + ( m • φ) v, g = ∂ q b + ∂ q v (∂ p m • φ) v + ∂ q v ( m • φ) + v T ∂ 2 pq m • φ v + 2 (∂ q m • φ) v, f = m • φ + (∂ p m • φ) v, f = (∂ p m • φ) v + m • φ, g = ∂ q v ∂ 2 p m • φ v + 2∂ q v (∂ p m • φ) + v T ∂ 2 p ∂ q m • φ v + 4 ∂ 2 pq m • φ v + 2∂ q m • φ, where T denotes the transpose. We have to verify that |D v F m, m(x, v)| 0 , |D 2 v F m, m(x, v)| 0 ≤ C for all (x, v) ∈ O 0 ζ and some C ≥ 1. In the latter, we consider the norm corresponding to the Banach space (Z 0 , | • | 0 ). Referring to the notation introduced by (6.8), this norm coincides with | • | 0,2 . For all v ∈ V 0 , |D v F m, m(x, v)v| 0,2 ≤ | (∇v) Ω| 0,2 + C (|f | 0,1 |v| 1,1 + |g| 0,1 |v| 0,1 ) ≤ |v| 0 (1 + C (|f | 0,1 + |g| 0,1 )) , where we recall that |v| 0 = max{|v| 1,1 , | (∇v) Ω| 0,2 }. Moreover, for all (x, v) ∈ O 0 ζ and m ∈ M, |f | 0,1 ≤ C (|b| 0,1 + |v| 0,1 | m| 0,0 ) ≤ C (|b| 0 + |v| 0 | m| 0,0 ) , ≤ C (|b 0 | 0 + |b -b 0 | 0 ) + C (|v| 0 | m| 0,0 ) ≤ C(δ + ζ) + CΥζ |g| 0,1 ≤ C |b| 1,1 + |v| 0,1 | m| 1,0 |v| 1,1 + |v| 1,1 | m| 0,0 + (|v| 0,1 ) 2 | m| 2,0 + |v| 0,1 |m| 1,0 , ≤ C |b| 0 + (|v| 0 + |v| 2 0 )Υ ≤ C(δ + ζ) + CΥζ. This implies the claim for D v F m, m(x, v). Similarly, we have the claim for D 2 v F m, m(x, v). H.2. 
F m, m is uniformly Lipschitz in X 0 : For all (x 1 , v), (x 2 , v) ∈ O 0 ζ , remem- bering that |x| 0 = max{|a| 0 , |b| 0 }, |F m, m(x 1 , v) -F m, m(x 2 , v)| 0,2 = |∂ q v(b 1 -b 2 ) + (∂ q a 1 -∂ q a 2 ) + (∂ q b 1 -∂ q b 2 )v| 0,2 ≤ C|b 1 -b 2 | 0,1 |v| 1,1 + C (|∂ q a 1 -∂ q a 2 | 0,2 + |b 1 -b 2 | 1,1 |v| 0,1 ) ≤ C (|b 1 -b 2 | 0 |v| 0 + |a 1 -a 2 | 0 + |b 1 -b 2 | 0 |v| 0 ) ≤ C(1 + ζ)|x 1 -x 2 | 0 , which proves H.2. Now, we verify H.4 before H.3. H.4. Order : The first two conditions of Definition 6.3 are satisfied, meaning (x 0 , 0) ∈ X s × V s and F m, m O 0 ζ (X σ × V σ ) ⊂ Z σ for all 1 ≤ σ ≤ s. We verify the tame estimate. Proof of Theorem E For all 1 ≤ σ ≤ s and ( x, v) ∈ O 1 ζ (X σ × V σ ), we rewrite the functional F m, m in the following form F m, m(x, v) = (∇v) Ω + ∂ q v (b 0 + (b -b 0 ) + ( m • φ) v) + ∂ q a + (∂ q b 0 ) v + ∂ q (b -b 0 ) v + ∂ q m • φ • v 2 . We assume |x - x 0 | σ , |v| σ ≤ K, then |F m, m(x, v)| σ,2 ≤ | (∇v) Ω| σ,2 + | (∂ q v) b 0 | σ,2 + | (∂ q v) (b -b 0 ) | σ,2 + |∂ q v ( m • φ) v| σ,2 + |a| σ,2 + | (∂ q b 0 ) v| σ,2 + |∂ q (b -b 0 ) v| σ,2 + |∂ q m • φ • v 2 | σ,2 . We have to estimate each term on the right-hand side of the latter. The terms |a| σ,2 and | (∇v) Ω| σ,2 are bounded by K. We estimate the others |b 0 (∂ q v) | σ,2 ≤ C(σ)|b 0 | s+1,1 |v| σ+1,1 ≤ C(σ)|b 0 | s |v| σ ≤ C(σ)|b 0 | s K | (∂ q v) (b -b 0 ) | σ,2 ≤ C(σ) (|∂ q v| 0,1 |b -b 0 | σ,1 + |∂ q v| σ,1 |b -b 0 | 0,1 ) ≤ C(σ) (|v| 1,1 |b -b 0 | σ,1 + |v| σ+1,1 |b -b 0 | 0,1 ) ≤ C(σ) (|v| 0 |b -b 0 | σ + |v| σ |b -b 0 | 0 ) ≤ C(σ)ζK ≤ C(σ)K |∂ q v ( m • φ) v| σ,2 ≤ C(σ) (|∂ q v ( m • φ) | σ,1 |v| 0,1 + |∂ q v ( m • φ) | 0,1 |v| σ,1 ) ≤ C(σ)|v| 0 (| m • φ| σ,0 |v| 0 + | m| 0,0 |v| σ ) + C(σ)|v| σ | m| 0,0 |v| 0 ≤ C(σ)|v| 0 | m| σ,0 (1 + |v| σ 0 + |v| σ )|v| 0 + C(σ)ζΥ|v| σ ≤ C(σ)ΥK | (∂ q b 0 ) v| σ,2 ≤ C(σ)|b 0 | s+1,1 |v| σ,1 ≤ C(σ)|b 0 | s K |∂ q (b -b 0 ) v| σ,2 ≤ C(σ) (|v| 0,1 |b -b 0 | σ+1,1 + |v| σ,1 |b -b 0 | 1,1 ) ≤ C(σ) (|v| 0 |b -b 0 | σ,1 + |v| σ |b -b 0 | 0 ) ≤ C(σ)ζK ≤ C(σ)K |∂ q m • φ • v 2 | σ,2 ≤ C(σ) (| (∂ q m • φ) v| 0,1 |v| σ,1 + | (∂ q m • φ) v| σ,1 |v| 0,1 ) ≤ C(σ)|m| 1,0 |v| 0 |v| σ + C(σ)|v| 0 (|∂ q m • φ| σ,0 |v| 0,1 + |∂ q m • φ| 0,0 |v| σ,1 ) ≤ C(σ)ζΥK + C(σ)|v| 0 |m| σ+1,0 (1 + |v| σ 0 + |v| σ )|v| 0 ≤ C(σ)ΥK. Therefore, H.4 is satisfied. Now, we fix δ < 1 c χ s , where s is the positive parameter defined by ( * E ) and c χ s is the constant in Lemma 6.5. Furthermore, we choose ζ depending on δ in such a way that δ + CΥζ < 1 c χ s (δζ) for a suitable constant C depending on n + m. This hypothesis is crucial if we want to define a right-inverse of F. H.3. Existence of a right-inverse of loss 1 : In this part, we prove that for all (x, v) ∈ O 1 ζ ∩ (X σ × V σ ) with 1 ≤ σ ≤ s, a right-inverse of loss 1 is well defined. This means that, for all (x, v) ∈ O 1 ζ ∩ (X σ × V σ ), there exists a liner map η m, m : Z σ → V σ-1 such that D v F m, m(x, v)η m, m(x, v)z = z for all z ∈ Z σ . In other words, for all z ∈ Z σ , we have to solve the following equation in the unknown v D v F m, m(x, v)v = (∇v) Ω + (∂ q v) f + gv = z, (6.33) where f and g are defined at the beginning of the proof of this lemma. If |f | 1,1 ≤ δ + CΥζ, |g| 1,1 ≤ δ + CΥζ, (6.34) thanks to Lemma 6.5 and (δζ), a solution to the above equation exists. It remains to verify the estimate on |f | 1,1 and |g| 1,1 . 
For all 1 ≤ σ ≤ s and (x, v) ∈ O 1 ζ , |f | σ,1 ≤ |b| σ,1 + | ( m • φ) v| σ,1 ≤ |b| σ,1 + C(σ) (| m • φ| σ,0 |v| 0,1 + | m • φ| 0,0 |v| σ,1 ) ≤ |b| σ-1 + C(σ) Υ 1 + |v| σ 1,1 + |v| σ,1 |v| 0,1 + Υ|v| σ,1 ≤ |b| σ-1 + C(σ)Υ|v| σ-1 |g| σ,1 ≤ |b| σ+1,1 + |∂ q v (∂ q m • φ) v| σ,1 + |∂ q v ( m • φ) | σ,1 + |v T ∂ 2 pq m • φ v| σ,1 + | (∂ q m • φ) v| σ,1 ≤ |b| σ + C(σ)Υ|v| σ . Taking σ = 1, we obtain |f | 1,1 ≤ |b| 0 + CΥ|v| 0 ≤ |b 0 | 0 + |b -b 0 | 0 + CΥ|v| 0 ≤ (δ + ζ) + CΥζ ≤ δ + CΥζ (6.35) |g| 1,1 ≤ |b| 1 + C(σ)Υ|v| 1 ≤ |b 0 | 1 + |b -b 0 | 1 + CΥ|v| 0 , ≤ (δ + ζ) + CΥζ ≤ δ + CΥζ (6.36) This implies the claim. The second part of this proof is dedicated to verifying (η1) and (η2). In what follows, we drop the indexes m, m from F and η to achieve a more elegant proof. For all (x, v) ∈ O 1 ζ and z ∈ Z 1 |η(x, v)z| 0 = max{|η(x, v)z| 1,1 , |∇ (η(x, v)z) Ω| 0,2 }. By Lemma 6.5 (more specifically (6.28)), (δζ) and the previous estimates concern- ing |f | 1,1 and |g| 1,1 |η(x, v)z| 1,1 ≤ C(δ, ζ)|z| 1,2 = C(δ, ζ)|z| 1 . Moreover, as a consequence of (6.33), the latter estimate, (6.35), (6.36) and (δζ) |∇ (η(x, v)z) Ω| 0,2 = |z -∂ q (η(x, v)z) f -g (η(x, v)z) | 0,2 ≤ |z| 0 + C|f | 0,1 |η(x, v)z| 1,1 + C|g| 0,1 |η(x, v)z| 0,1 ≤ |z| 0 + |η(x, v)z| 1,1 ≤ C(δ, ζ)|z| 1 . This implies (η1) because |η(x, v)z| 0 = max{|η(x, v)z| 1,1 , |∇ (η(x, v)z) Ω| 0,2 } ≤ C(δ, ζ)|z| 1 . 6.3 Proof of Theorem E Conserning (η2), for all 1 ≤ σ ≤ s and (x, v) ∈ O 1 ζ ∩ (X σ × V σ ), we assume |x -x 0 | σ , |v| σ ≤ K and we recall that |η(x, v)F(x, v)| σ-1 = max{|η(x, v)F(x, v)| σ,1 , |∇ (η(x, v)F(x, v)) Ω| σ-1,2 }. We shall prove that the two norms on the right-hand side of the latter are smaller or equal to K multiplied by a suitable constant. We consider the estimates of |f | σ,1 and |g| σ,1 calculated above |f | σ,1 ≤ |b| σ-1 + C(σ)Υ|v| σ-1 ≤ |b 0 | σ-1 + |b -b 0 | σ-1 + C(σ)Υ|v| σ-1 ≤ |b 0 | s + C(σ)ΥK |g| σ,1 ≤ |b| σ + C(σ)Υ|v| σ , ≤ |b 0 | σ + |b -b 0 | σ + C(σ)Υ|v| σ ≤ |b 0 | s + C(σ)ΥK. Furthermore, by (6.34) and (δζ) |f | 0,1 ≤ 1, |g| 0,1 ≤ 1. Moreover, thanks to H.4 and (δζ) |F(x, v)| 1,2 ≤ CΥζ ≤ 1, |F(x, v)| σ,2 ≤ C(σ)ΥK. Now, by (6.28) and the above estimates |η(x, v)F(x, v)| σ,1 ≤ C(σ, Υ, δ, ζ)|F(x, v)| σ,2 + C(σ, Υ, δ, ζ) (|f | σ,1 + |g| σ,1 ) |F(x, v)| 1,2 ≤ C(σ, Υ, δ, ζ)|F(x, v)| σ,2 + C(σ, Υ, δ, ζ) (|b 0 | s + C(σ)ΥK) |F(x, v)| 1,2 ≤ C(σ, Υ, δ, ζ) (1 + |b 0 | s ) |F(x, v)| σ,2 + C(σ, Υ, δ, ζ)K|F(x, v)| 1,2 ≤ C(σ, |b 0 | s , Υ, δ, ζ)K and by (6.33) |∇ (η(x, v)F(x, v)) Ω| σ-1,2 = |F(x, v) -∂ q (η(x, v)F(x, v)) f -g (η(x, v)F(x, v)) | σ-1,2 ≤ |F(x, v)| σ,2 + |∂ q (η(x, v)F(x, v)) f | σ-1,2 + |g (η(x, v)F(x, v)) | σ,2 . It remains to prove that each term on the right-hand side of the latter can be estimated by K multiplied by a suitable constant. By H.4, this is true for the first term |F(x, v)| σ,2 . We will prove it for the others using the estimates previously verified |∂ q (η(x, v)F(x, v)) f | σ-1,2 ≤ C(σ) (|η(x, v)F(x, v)| σ,1 |f | 0,1 + |η(x, v)F(x, v)| 1,1 |f | σ,1 ) ≤ C(σ, |b 0 | s , Υ, δ, ζ)K|f | 0,1 + C(δ, ζ)|F(u, v)| 1,2 |f | σ,1 ≤ C(σ, |b 0 | s , Υ, δ, ζ)K and similarly |g (η(x, v)F(x, v)) | σ,2 ≤ C(σ) (|g| 0,1 |η(x, v)F(x, v)| σ,1 + |g| σ,1 |η(x, v)F(x, v)| 0,1 ) ≤ C(σ) (|η(x, v)F(x, v)| σ,1 + |g| σ,1 C(δ, ζ)|F(x, v)| 1,2 ) ≤ C(σ, |b 0 | s , Υ, δ, ζ)K + C(σ, δ, ζ) (|b 0 | s + C(σ)ΥK) |F(x, v)| 1,2 ≤ C(σ, |b 0 | s , Υ, δ, ζ)K + C(σ, δ, ζ)|b 0 | s |F(x, v)| σ,2 ≤ C(σ, |b 0 | s , Υ, δ, ζ)K. This concludes the proof of H.3 and also of this lemma. 
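The hypotheses just verified are exactly what a Zehnder-type implicit function theorem requires: differentiability with respect to v, a uniform Lipschitz bound in the data x, tame estimates and a right-inverse of the linearization with loss of one derivative. As a purely illustrative, finite-dimensional caricature of the mechanism, the following sketch runs the Newton-type iteration v_{k+1} = v_k - η(x, v_k)F(x, v_k) for a toy map F satisfying F(x_0, 0) = 0. The map, the data and the exact inverse used below are arbitrary choices of ours and are not the functional F m, m studied above, where η is only a right-inverse and the iteration has to be run in a scale of Banach spaces.

```python
import numpy as np

# Toy finite-dimensional model of the iteration behind Theorem 6.1:
# solve F(x, v) = 0 in v, for data x close to x_0 = (0, 0), by
# v_{k+1} = v_k - eta(x, v_k) F(x, v_k) with eta = (D_v F)^{-1}.

def F(x, v):
    # arbitrary toy map with F(x_0, 0) = 0
    return np.array([v[0] + x[0] + 0.5 * v[0] * v[1],
                     v[1] + x[1] + 0.3 * v[0] ** 2])

def DvF(x, v):
    # Jacobian of F with respect to v
    return np.array([[1.0 + 0.5 * v[1], 0.5 * v[0]],
                     [0.6 * v[0],       1.0       ]])

x = np.array([1e-2, -2e-2])   # plays the role of data (a, b) close to (0, b_0)
v = np.zeros(2)               # trivial initial guess, as in (x_0, 0)

for k in range(6):
    residual = F(x, v)
    print(f"step {k}: |F(x, v)| = {np.linalg.norm(residual):.3e}")
    v = v - np.linalg.solve(DvF(x, v), residual)
```

The role of hypotheses H.1–H.4 is precisely to make such an iteration converge despite the loss of derivatives, at the price of the smallness conditions on δ and ζ fixed above.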
C ρ -Weakly Asymptotic Cylinder We proved that, for fixed m, m ∈ M as in ( * E ), the functional F m, m satisfies the hypotheses of Theorem 6.1. Then, there exists v : T n × R m × J → R n+m such that ϕ t = (id, v t ) is a C ρ -weakly asymptotic cylinder associated to (X H , X h, ϕ 0 ). We recall that H is the Hamiltonian defined by ( * E ), h is the Hamiltonian in (6.9) and ϕ 0 is the trivial embedding ϕ 0 : T n × R m → T n × R m × B n+m , ϕ 0 (q) = (q, 0). Moreover, letting Γ = b + ( m • φ) v (see Section 6.3.2), |v| ρ,1 ≤ ζ, |Γ| ρ,1 ≤ |b| ρ,1 + | ( m • φ) v| ρ,1 ≤ |b 0 | ρ,1 + |b -b 0 | ρ,1 + | ( m • φ) v| ρ,1 ≤ |b 0 | ρ,1 + |b -b 0 | λ,1 + | ( m • φ) v| ρ,1 ≤ |b 0 | ρ,1 + ε + CΥζ for a positive constant C ≥ 1 depending on n + m. It remains to prove that ϕ t is Lagrangian for all t ∈ J. First, we observe that |Γ| 1,1 ≤ |b 0 | 1,1 + ε + CΥζ ≤ δ + ε + CΥζ. Letting ψ t t 0 ,ω+Γ be the flow at time t with initial time t 0 of ω + Γ, we have the following proposition Proposition 6.3. For all t, t 0 ∈ J, if t ≥ t 0 |∂ q ψ t t 0 ,ω+Γ | C 0 ≤ C t t 0 C(δ+ε+CΥζ) , for a suitable positive constant C. Proof. Similarly to the proof of Lemma 6.3 we have the claim. We observe that the constant C in this proposition may differ from that in the estimate of |Γ| 1,1 previously calculated. Let ψ t t 0 ,H be the flow at times t with initial time t 0 of H, the following lemma concludes the proof of Theorem E. Proof of Theorem E Lemma 6.7. ϕ t 0 is Lagrangian for all t 0 ∈ J Proof. Let α = dp ∧ dq be the standard symplectic form associated to (q, p) ∈ T n ×R m ×B n ×B m . For all fixed t, t 0 ∈ J, the flow ψ t t 0 ,H is a symplectomorphisms. This means that, for all fixed t, t 0 ∈ J, (ψ t t 0 ) * α = α. By (6.7), ψ t 0 +t t 0 ,H • ϕ t 0 = ϕ t 0 +t • ψ t 0 +t t 0 ,ω+Γ (6.37) and taking the pull-back with respect to the standard form α on both sides of the latter, we obtain (ϕ t 0 ) * (ψ t 0 +t t 0 ,H ) * α = (ψ t 0 +t t 0 ,ω+Γ ) * (ϕ t 0 +t ) * α. We remind that ψ t 0 +t t 0 ,H is symplectic, then letting (ψ t 0 +t t 0 ) * α = α on the left-hand side of the above equation, we have (ϕ t 0 ) * α = (ψ t 0 +t t 0 ,ω+Γ ) * (ϕ t 0 +t ) * α. We want to prove that, for all q ∈ T n × R m , ((ϕ t 0 ) * α) q = 0, where ((ϕ t 0 ) * α) q stands for the symplectic form calculated on q ∈ T n × R m . The idea is to prove that, for all q ∈ T n × R m , the limit for t → +∞ on the right-hand side of the above equation converges to zero. We know that ϕ t 0 +t (q) = (q, v t 0 +t (q)), then for all q ∈ T n × R m (ψ t 0 +t t 0 ,ω+Γ ) * (ϕ t 0 +t ) * α q = 1≤i<j≤n+m 1≤k<d≤n+m α t i,j,k,d (q)dq k ∧ dq d where α t i,j,k,d (q) = ∂ q i v t 0 +t j • ψ t 0 +t t 0 ,ω+Γ (q) -∂ q j v t 0 +t i • ψ t 0 +t t 0 ,ω+Γ (q) × ∂ q k ψ t 0 +t t 0 ,ω+Γ,i (q)∂ q d ψ t 0 +t t 0 ,ω+Γ,j (q) -∂ q d ψ t 0 +t t 0 ,ω+Γ,i (q)∂ q k ψ t 0 +t t 0 ,ω+Γ,j (q) . In the latter × stands for the usual multiplication in R. Then, for t > 0 and fixed 1 ≤ i < j ≤ n + m, 1 ≤ k < d ≤ n + m, by Proposition 6.3 α t i,j,k,d C 0 ≤ ∂ q i v t 0 +t j • ψ t 0 +t t 0 ,ω+Γ -∂ q j v t 0 +t i • ψ t 0 +t t 0 ,ω+Γ C 0 × ∂ q k ψ t 0 +t t 0 ,ω+Γ,i ∂ q d ψ t 0 +t t 0 ,ω+Γ,j -∂ q d ψ t 0 +t t 0 ,ω+Γ,i ∂ q k ψ t 0 +t t 0 ,ω+Γ,j C 0 ≤ ∂ q i v t 0 +t j C 0 + ∂ q j v t 0 +t i C 0 × ∂ q k ψ t 0 +t t 0 ,ω+Γ,i C 0 ∂ q d ψ t 0 +t t 0 ,ω+Γ,j C 0 + ∂ q d ψ t 0 +t t 0 ,ω+Γ,i C 0 ∂ q k ψ t 0 +t t 0 ,ω+Γ,j C 0 ≤ C|v t 0 +t | C 1 |∂ q ψ t 0 +t t 0 ,ω+Γ | C 0 2 ≤ C ζ t 0 + t t 0 + t t 0 C(δ+ε+CΥζ) for a suitable constant C ≥ 1. 
Thanks to (δζ) and for ε small enough, taking the limit for t → +∞ on both sides of the latter, the term in the last line converges to zero. This concludes the proof of this lemma. 7 The Three-Body Problem plus Comet The Three-Body Problem plus Comet This chapter studies the existence of suitable motions for a planetary system (planar three-body problem) perturbed by a given comet coming from and going back to infinity, asymptotically to a hyperbolic Keplerian orbit. In [Arn63b] Arnold proves the existence of quasiperiodic motions for the Hamiltonian of the planar three-body problem. In this work, we follow the setting of Féjoz [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF], which provides more general solutions. In a rotating frame of reference, the author proves the existence of quasiperiodic orbits with three frequencies for the Hamiltonian of the planar three-body problem. Before the symplectic reduction by the symmetry of rotations, these quasiperiodic motions have one additional frequency, which is the angular speed of the simultaneous rotation of the three ellipses. Furthermore, before the symplectic reduction by the symmetry of translations, each of these invariant tori translates into a 1-parameter family of invariant tori parametrized by the center of mass of the planetary system. In a neighbourhood of these quasiperiodic orbits, we want to perturb the Hamiltonian of the planar three-body problem with a time-dependent Hamiltonian that quantifies the interaction of the planets with the given comet. In this chapter, we prove the existence of orbits which are close (in a sense that we will specify later) to the quasiperiodic motions associated with the Hamiltonian of the planar three-body problem. The proof relies on Theorem E proved in the previous chapter. Reduced Problem and Result Consider three points of fixed masses m 0 , m 1 and m 2 undergoing gravitational attraction in the plane and a comet of fixed mass m c . The comet comes from and goes to infinity, along a hyperbolic Keplerian orbit. We assume that the motion of the comet is a given smooth function c(t) and that only the planetary system is influenced by the comet. We assume, |c(t)| → t→+∞ ∞, d dt |c(t)| → t→+∞ v > 0. By the latter, there exists t 0 0 such that v 2 ≤ d dt |c(t)| ≤ 2v (7.1) for all t ≥ t 0 . At the risk of replacing t by t + t 0 -1, we can take t 0 = 1. Let J = [1, +∞), for a positive parameter 0 < ε < 1, the phase space is the space ((x i , y i ) 0≤i≤2 , t) ∈ R 2 × R 2 * 3 × J ∀0 ≤ i < j ≤ 2, x i = x j |x i | |c(t)| < ε (7.2) of linear momentum covectors (y 0 , y 1 , y 2 ) and position vectors (x 0 , x 1 , x 2 ) of each body. The Hamiltonian of the planar three-body problem plus comet (P3BP+C) 7.1 Reduced Problem and Result is H(x, y, t) = 2 i=0 |y i | 2 2m i -G 0≤i<j≤2 m i m j |x i -x j | H 0 (x,y) -G 2 i=0 m i m c |x i -c(t)| Hc(x,t) , where G is the universal constant of gravitation that we may suppose equal to 1. It is the sum of the Hamiltonian of the planar three-body problem H 0 and the Hamiltonian of the interaction with the comet H c . Let ϕ 0 be a 1-parameter family of invariant tori for H 0 supporting quasiperiodic dynamics with four frequencies and ψ t t 0 ,H be the flow at time t with initial time t 0 of H Theorem F. Let H be the Hamiltonian of the P3BP+C. 
Then, there exist constants C(H 0 , ϕ 0 ) depending on H 0 , ϕ 0 and C(H) depending on H such that if |c(1)| > C(H 0 , ϕ 0 ) ε , v > C(H) ε , (cv) for ε small enough, there exists an open subset W ⊂ (R 2 × R 2 * ) 3 such that, for all x ∈ W, ψ t 1,H (x) is a weakly asymptotically quasiperiodic solution associated to (X H , X H 0 , ϕ 0 ). As mentioned before, we observe that the existence of ϕ 0 is guaranteed by [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF]. Furthermore, we will see that the constants in (cv) are specified in Proposition 7.1, Lemma 7.3 and Lemma 7.9. Before going into the details of the proof, we will prove the following property Proposition 7.1. If |c(1)| > 1 ε , v > 2 ε , (7.3) then sup t≥1 t |c(t)| < ε. (7.4) Proof. By (7.1) and the fundamental theorem of calculus |c(1)| + v 2 (t -1) ≤ |c(t)| ≤ |c(1)| + 2v(t -1) (7.5) for all t ≥ 1. Thanks to the latter, we can estimate t |c(t)| as follows t |c(t)| ≤ 1 + (t -1) |c(1)| + v 2 (t -1) , for all t ≥ 1. Thanks to (7.3), the claim is true for t = 1. Now, we suppose that there exists t 0 > 1 such that 1 + (t 0 -1) |c(1)| + v 2 (t 0 -1) ≥ ε. We can rewrite the latter in the following form 1 -ε|c(1)| ≥ ε 2 v -1 (t 0 -1) and this is a contradiction because, by (7.3), 1-ε|c(1)| < 0 and ε 2 v -1 (t 0 -1) > 0. 7 The Three-Body Problem plus Comet 7.2 Proof of Theorem F Quasiperiodic Motions in the Planar Three-Body Problem This part is dedicated to a very brief introduction to the work of Féjoz [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF] concerning the existence, in a rotating frame of reference, of quasiperiodic motions with three frequencies for the Hamiltonian of the planar three-body problem. This result is an important element for the proof of Theorem F. In this work, the author splits the dynamic into two parts: a fast, called Keplerian dynamic, and a slow, called secular dynamic. The first describes the motion of the bodies along three ellipses as if each body underwents the attraction of only one fictitious center of attraction. The slow dynamic describes how the mutual attraction of each planet deforms these Keplerian ellipses. There is a natural splitting H 0 = H Kep + H per of the Hamiltonian when one uses the Jacobi coordinates. The author defines the perturbing region contained in the direct product of the phase and parameter spaces. In this region, the Hamiltonian of the planar three-body problem is C k -close to the dynamically degenerate Hamiltonian of two decoupled two-body problems. In order to get rid of the degeneracy of the Kepler Hamiltonian H Kep , in a suitable open subset of the above-mentioned perturbing region, he introduces the secular system. It is obtained by an averaging process. It consists in averaging along the Keplerian ellipses parametrized by the mean anomalies λ 1 and λ 2 of the two fictitious Kepler problems where the Keplerian frequencies are non-resonant. We thus obtain an integrable approximation. After the reduction by the symmetry of rotation and far from elliptic singularities, the phase space of the secular Hamiltonian contains a positive measure of Lagrangian diophantine invariant tori. The claim relies on a sophisticated version of KAM theorem, which is proved using a normal form theorem due to Herman. Jacobi's Splitting and Keplerian Dynamics. Let three points of masses m 0 , m 1 and m 2 undergoing gravitational attraction in the plane. 
We consider the following phase space (x i , y i ) 0≤i≤2 ∈ R 2 × R 2 * 3 | 0 ≤ i < j ≤ 2, x i = x j . The Hamiltonian of the planar three-body problem is H 0 (x, y) = 2 i=0 |y i | 2 2m i - 0≤i<j≤2 m i m j |x i -x j | . In order to carry out the reduction by the symmetry of translations, he chooses the following symplectic change of variables      X 0 = x 0 X 1 = x 1 -x 0 X 2 = x 2 -σ 0 x 0 -σ 1 x 1      Y 0 = y 0 + y 1 + y 2 Y 1 = y 1 + σ 1 y 2 Y 2 = y 2 where σ 0 = m 0 m 0 +m 1 and σ 1 = m 1 m 0 +m 1 . The coordinates {(X i , Y i )} i=0,1,2 are the wellknown Jacobi coordinates. In these variables, the Hamiltonian H 0 does not depend on X 0 (because of the symmetry by translations). This means that, considering the frame of reference attached to the center of mass and if X 2 = 0, the system is then described by the 4 Jacobi coordinates {(X i , Y i )} i=1,2 . The reduced Hamiltonian can be written as H 0 = H Kep + H per , where H Kep is the degenerate Hamiltonian of two decoupled two-body problems and H per is the perturbation. More specifically, H Kep is the completely integrable Hamiltonian of two fictitious bodies which revolve along ellipses around a fixed center of attraction without mutual interaction. Now, we introduce some notations concerning the Keplerian dynamics. This part is necessary in order to be able to define the above-mentioned perturbing region. For the ith fictitious body, with i = 1 or 2, the mean longitude will be designated by λ i , the semi-major axis by a i , the eccentricity by e i , the "centricity" 1 -e 2 i by i , the argument of the pericenter by g i , the mean motion by υ i and the difference of the arguments of the pericenter by g = g 1 -g 2 . We also introduce the well-known Poincaré coordinates (Λ i , λ i , ξ i , η i ), where we refer for example to the notes of A. Chenciner and J. Laskar [START_REF] Chenciner | Intégration du problème de kepler par la méthode de hamilton-jacobi: coordonnées action-angle de delaunay[END_REF][START_REF] Laskar | Les variables de poincaré et le développement de la fonction pertubatrice[END_REF] or the work of J. Féjoz [START_REF]On action-angle coordinates and the Poincaré coordinates[END_REF]. Perturbing Region. To measure how close the outer ellipse is from the inner ellipses when they are in opposition, the author defines ∆ = max (λ 1 ,λ 2 ,g)∈T 3 max{σ 0 , σ 1 } |X 1 | |X 2 | = max{σ 0 , σ 1 } a 1 (1 + e 1 ) a 2 (1 -e 2 ) . He assumes that ∆ < 1. This means that the outer ellipse does not meet the other two, whatever the difference g of the arguments of the pericenters. Moreover, the eccentricity e 2 of the outer ellipse cannot be arbitrarily close to 1. He also assumes that the eccentricity of the inner ellipses is upper bounded from 1. Let P be the reduced symplectically by translations phase space and M be the space described by the three mass parameters m 0 , m 1 and m 2 . Definition 7.1. For a positive parameter δ and a non negative integer k, the perturbing region Π k δ of parameters δ and k is the open subset of P × M defined by the following inequality max m 2 M 1 a 1 a 2 3 2 , µ 1 √ M M 3 2 1 a 1 a 2 2 1 3(2+k) 2 (1 -∆) 2k+1 < δ, (7.6) where M 1 = m 0 + m 1 , M = m 0 + m 1 + m 2 and µ 1 = m 0 m 1 m 0 +m 1 . Féjoz writes in his work that this inequality is not optimal and the given powers are not meaningful. He justifies this definition by proving that, inside the perturbing region, the perturbating function is δ-small in a suitable C k -norm. 
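As a quick numerical illustration of the quantities entering Definition 7.1, the sketch below computes ∆ and one of the mass factors of (7.6) for a strongly hierarchical configuration; the masses and orbital elements are arbitrary illustrative values, not data coming from the text.

```python
import numpy as np

# Illustration of the separation parameter Delta for a hierarchical configuration.
# All numerical values below are arbitrary choices.

m0, m1, m2 = 1.0, 1e-3, 1e-3        # one large primary and two small bodies
a1, e1 = 1.0, 0.2                    # inner ellipse
a2, e2 = 20.0, 0.4                   # outer ellipse, well separated

sigma0 = m0 / (m0 + m1)
sigma1 = m1 / (m0 + m1)

Delta = max(sigma0, sigma1) * a1 * (1 + e1) / (a2 * (1 - e2))
print(f"Delta = {Delta:.4f}  (Delta < 1: the outer ellipse avoids the inner ones)")

# one of the mass factors appearing in (7.6); it is small as soon as a1/a2 is small
M1 = m0 + m1
print(f"(m2/M1) * (a1/a2)**(3/2) = {m2 / M1 * (a1 / a2) ** 1.5:.3e}")
```

Shrinking a1/a2 makes both quantities as small as desired, which is how the inequality (7.6) is arranged in the hierarchical regime discussed below.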
Another remark is that 1 3(2+k) 2 prevents the outer body from getting too close from collisions with the fictitious center of attraction (X 2 = 0) and the factor 1 (1-∆) 2k+1 prevents the two outer bodies from getting too close from each other (x 2 = x 0 or x 1 ). For the proof of Theorem F, where the masses are fixed, note that the inequality (7.6) may be satisfied merely by assuming that a 1 , a 2 1 and e 2 ≤ Cst < 1. This is the so-called lunar or hierarchical regime. If z 1 and z 2 are two quantities, let ž = min{z 1 , z 2 }. When δ → 0, let Πk δ = Π k δ × R 2 ∩ { λ = O( λ0 )} ∩ {υ = O(υ 0 )} be open sets of Π k δ × R 2 , where ( Λ0 , υ0 ) stands for coordinates of R 2 . These open sets can be thought of as fiber bundles over the parameter space M × R 2 . The additional parameters ( Λ0 , υ0 ) are meant to localize the particular region on which we focus in the phase space. Elimination of Fast Angles. In order to get rid of the degeneracy of the Keplerian Hamiltonian and hence apply the well-known KAM theorem, the secular Hamiltonian is introduced. Let d and k be suitable positive integers. On a suitable open set Πk δ of Πk δ , the author proves the existence of a C ∞ -symplectomorphism φ d which is δ-close to the identity in a suitable C k -norm. The Hamiltonian H 0 • φ d can be split as follows H 0 • φ d = H d π + H d comp , where In what follows, we only give an idea of the proof. It consists of a finite iterating procedure. The symplectomorphism φ d is the composition of d time-one maps ψ 1 of a sufficiently small (on a suitable C k norm) autonomous Hamiltonian vector field. To this end, let F be a Hamiltonian to be determined. We denote X F as its vector field and ψ t as its flow. To describe the transformed Hamiltonian H • ψ 1 , we recall that for a function K H d dt K • ψ t = {K, F } • ψ t , where the right-hand side of the latter stands for the Poisson bracket of K and F evaluated at ψ t . Coming back to the work of Féjoz, he defines the (first order) complementary part H 1 comp of H by the equality H 0 • φ 1 = H Kep + (H per + {H Kep , F }) + H 1 comp . We remind that we need to eliminate the fast angles from H per modulo a complementary part previously defined. Letting H per = 1 4π 2 T 2 H per dλ 1 dλ 2 be the average of H per , this is equivalent to solve the following homological equation in the unknown F {F, H Kep } = H per -H per . Proof of Theorem F This partial differential equation has a unique solution on a suitable Cantor set of diophantine tori. The Hamiltonian F is of class Whitney-C ∞ and thus can be extended into a C ∞ -function which is rotating-invariant. Then, the Hamiltonian H 0 is conjugated to H 0 • φ 1 = H Kep + H per + H 1 comp . Iterating by induction, the claim is proved. Secular Dynamics As mentioned before, in a rotating frame of reference, Féjoz proves the existence of three-dimensional invariant tori supporting quasiperiodic dynamics for the Hamiltonian of the planar three-body problem. This means that, before the symplectic reduction by the symmetry of rotations, these quasiperiodic motions have one additional frequency, namely the angular speed of the simultaneous rotation of the three ellipses. More specifically, the author proves the following theorem Theorem 7.1. 
In a rotating frame of reference, there are integers k ≥ 1 and d ≥ 1 and real numbers δ > 0 and τ ≥ 1 such that inside the perturbing region Πk+d(τ+4) δ a positive measure of quasiperiodic Lagrangian tori of H d π survive in the dynamics of the planar three-body problem.1 Far from elliptic singularities and in the rotating frame of reference, the secular Hamiltonian H d π possesses a family of Lagrangian invariant tori supporting diophantine quasiperiodic dynamics. As mentioned before, the proof of this theorem is a consequence of a sophisticated KAM theorem proved by using a normal form theorem due to Herman. Introducing secular dynamics is the key to get rid of the degeneracy of the Hamiltonian of two decoupled two-body problems. Furthermore, investigating the secular dynamics allows us to find a series of solutions for the Hamiltonian of the planar three-body problem. In what follows, we will not provide other information concerning the proof of the above theorem. Instead, we will only use Theorem 7.1 as a blackbox to prove the existence of weakly asymptotically quasiperiodic solutions associated to the Hamiltonian of the planar three-body problem plus comet (see Theorem F). Outline of the proof of Theorem F Our proof relies on [START_REF] Féjoz | Quasiperiodic motions in the planar three-body problem[END_REF] and Theorem E proved in the previous chapter. We observe that we do not prove the existence of a weakly asymptotic cylinder but only a set of initial points giving rise to weakly asymptotically solutions associated to the Hamiltonian H of the planar three-body problem plus comet. The proof is divided into five parts. The first two concern the Hamiltonian of the planar three-body problem H 0 . In the first part, which we call Splitting, we introduce a linear symplectic change of variable φ 0 . Letting (X i , Y i ) i=0,1,2 be the new variables, which should not be confused with the Jacobi coordinates introduced in the previous section, we can split the Hamiltonian H 0 in such a way that H 0 • φ 0 (X, Y ) = |Y 0 | 2 2M + K(X 1 , X 2 , Y 1 , Y 2 ), where K is the Hamiltonian of the planar three-body problem after the reduction by the symmetry of translations, X 0 is the center of mass of the planetary system, Y 0 is the linear momentum of the planetary system and M = m 0 + m 1 + m 2 . We call the second part Quasiperiodic Dynamics Associated to K. Here, Theorem 7.1 ensures the existence of Lagrangian four-dimensional invariant tori in the phase space after the symplectic reduction by the symmetry of translations. As mentioned before, Féjoz proves the existence of quasiperiodic solutions with three frequencies for the Hamiltonian of the planar three-body problem in a rotating frame of reference. The additional frequency is given by the angular speed of the simultaneous rotations of the three ellipses. Therefore, we prove the existence of a symplectic change of variables φ F in such a way that K • φ F : T 4 × B 4 → R, K • φ F (θ, r) = c + ω • r + R 0 (θ, r) • r 2 where c ∈ R, ω ∈ R 4 and R 0 (θ, r) • r 2 stands for the vector given twice as an argument of the symmetric bilinear form R 0 (θ, r). We lift the above symplectic transformation φ F (θ, r) to a symplectic transformation φF (θ, ξ, r, η) defined on T 4 × R 2 × B 4 × B 2 , for some balls B 4 ⊂ R 4 and B 2 ⊂ R 2 of small radius, such that φF (θ, ξ, r, η) = (ξ, η, φ F (θ, r)), where ξ = X 0 and η = Y 0 . 
Letting φ = φ 0 • φF , we can rewrite the Hamiltonian of the planar three-body problem H 0 in the following form H 0 • φ : T 4 × R 2 × B 4 × B 2 -→ R, H 0 • φ(θ, ξ, r, η) = c + ω • r + R 0 (θ, r) • r 2 + |η| 2 2M . The third part of the proof is devoted to the perturbing function and, for this reason, we call it Perturbing Function. We introduce a suitable open subset U of T 4 × R 2 × B 4 × B 2 × J characterized by the orbits of H that stay sufficiently far from the comet. Letting φ(θ, ξ, r, η, t) = (φ(θ, ξ, r, η), t), H c • φ : U → R is well defined and satisfies good time-dependent estimates. Unfortunately, the Hamiltonian H • φ = H 0 • φ + H c • φ : U → R does not satisfy the hypotheses of Theorem E. This is because H • φ is not defined in the whole phase space T 4 × R 2 × B 4 × B 2 × J but only on a subset. To solve this problem, in the fourth part called Smooth Extension of the Perturbing Function, we introduce the Hamiltonian H, H : T 4 × R 2 × B 4 × B 2 × J -→ R 7.2 Proof of Theorem F in such a way that H coincides with H • φ on a suitable subset U1 2 of U where one expects the motions to take place, and H satisfies the same estimates of H • φ outside U 1 2 . We will see that H • φ satisfies the hypotheses of Theorem E. Letting ϕ 0 be the following trivial embedding ϕ 0 : T 4 × R 2 -→ T 4 × R 2 × B 4 × B 2 , ϕ 0 (θ, ξ, r, η) = (θ, ξ, 0, 0), we prove the existence of a C 1 -weakly asymptotic cylinder ϕ t associated to (X H , X H 0 •φ , ϕ 0 ). In the last part, called Weakly Asymptotically Quasiperiodic Solutions, we conclude the proof of Theorem F. We define B 2 1 /2 = ξ ∈ R 2 : |ξ| < ε 6 |c(1)| ⊂ R 2 , where |c(1)| stands for the distance of the comet from the origin when t = 1. Therefore, we show that, for all z ∈ ϕ 1 (T 4 × B 2 1 /2), ψ t 1, H (z) ∈ U1 2 for all t ∈ J. This is a consequence of (cv), |c(t)| ∼ vt and |ξ(t )| = |X 0 (t)| ∼ ln t. Then, by the definiton of H, for all z ∈ ϕ 1 (T 4 × B 2 1 /2), ψ t 1, H (z) = ψ t 1,H• φ(z) for all t ∈ J. Now, because of φ is symplectic, we conclude the proof verifying that for all w ∈ W = φ • ϕ 1 (T 4 × B 2 1 /2), ψ t 1,H (w) is a weakly asymptotically quasiperiodic solution associated to (X H , X H 0 , φ • ϕ 0 ). Splitting The phase space and the Hamiltonian of the planar three-body problem are respectively (x i , y i ) 0≤i≤2 ∈ R 2 × R 2 * 3 | 0 ≤ i < j ≤ 2, x i = x j , and H 0 (x, y) = 2 i=0 |y i | 2 2m i - 0≤i<j≤2 m i m j |x i -x j | . We would like to split the dynamics into the absolute motion of the center of mass and the relative motion of the three bodies. For this purpose, let us introduce the following linear symplectic change of coordinates φ 0      X 0 = m 0 M x 0 + m 1 M x 1 + m 2 M x 2 X 1 = x 0 -x 1 X 2 = x 0 -x 2      Y 0 = y 0 + y 1 + y 2 Y 1 = m 1 M y 0 -m 0 +m 2 M y 1 + m 1 M y 2 Y 2 = m 2 M y 0 + m 2 M y 1 -m 0 +m 1 M y 2 (7.7) where M = m 0 + m 1 + m 2 . The left-hand side of the latter recalls the well-known heliocentric coordinates (see for exemple [START_REF] Laskar | Les variables de poincaré et le développement de la fonction pertubatrice[END_REF]). We will see that, in these new coordinates, the Hamiltonian H is split into a Hamiltonian depending on Y 0 and a Hamiltonian depending on the variables {X i , Y i } i=1,2 (see (7.8)). 
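The following short sketch checks numerically that the linear map (7.7) is indeed symplectic: writing the transformation as X = Ax on the positions and Y = By on the momenta, one verifies S^T J S = J for the induced map S on the full planar phase space. The masses below are arbitrary positive values.

```python
import numpy as np

# Numerical check that the linear change of variables (7.7) is symplectic.
# Phase-space ordering: z = (x0, x1, x2, y0, y1, y2), each x_i, y_i in R^2.
# The masses are arbitrary positive values.

m0, m1, m2 = 1.0, 0.1, 0.05
M = m0 + m1 + m2

# (X0, X1, X2) = A (x0, x1, x2)
A = np.array([[m0 / M, m1 / M, m2 / M],
              [1.0,    -1.0,    0.0  ],
              [1.0,     0.0,   -1.0  ]])

# (Y0, Y1, Y2) = B (y0, y1, y2)
B = np.array([[1.0,      1.0,            1.0           ],
              [m1 / M, -(m0 + m2) / M,   m1 / M        ],
              [m2 / M,   m2 / M,        -(m0 + m1) / M ]])

I2 = np.eye(2)
S = np.block([[np.kron(A, I2), np.zeros((6, 6))],
              [np.zeros((6, 6)), np.kron(B, I2)]])
J = np.block([[np.zeros((6, 6)), np.eye(6)],
              [-np.eye(6), np.zeros((6, 6))]])

# the defect should be at machine-precision level
print("symplecticity defect:", np.max(np.abs(S.T @ J @ S - J)))
```

Equivalently, B = (A^T)^{-1}, which is the usual way of extending a linear point transformation to a symplectic map.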
In these new coordinates, if X 1 = 0, X 2 = 0 and X 1 = X 2 , the Hamiltonian of the planar three-body problem H 0 is equal to H 0 • φ 0 (X, Y ) = |Y 0 | 2 2M (7.8) + |Y 1 | 2 2µ 1 - m 0 m 1 |X 1 | + |Y 2 | 2 2µ 2 - m 0 m 2 |X 2 | + Y 1 • Y 2 m 0 - m 1 m 2 |X 2 -X 1 | K(X 1 ,X 2 ,Y 1 ,Y 2 ) where µ 1 = m 0 m 1 m 0 +m 1 and µ 2 = m 0 m 2 m 0 +m 2 . We observe that H 0 is the sum of two independent Hamiltonians. The first is responsible for the motion of the center of mass X 0 and the linear momentum Y 0 , the Hamiltonian K plays the role of the Hamiltonian of the planar three-body problem after the reduction by the symmetry of translations. Quasiperiodic Dynamics Associated with K For suitable integers k ≥ 1, d ≥ 1 and real numbers δ > 0 and τ ≥ 1, inside the perturbing region Πk+d(τ+4) δ (see Definition 7.1), Theorem 7.1 proves the existence of three-dimensional invariant tori for the Hamiltonian of the planar three-body problem K in a rotating frame of reference. As mentioned above, these quasiperiodic motions have one additional frequency before the symplectic reduction by the symmetry of rotations. This frequency plays the role of the angular speed of the simultaneous rotation of the three ellipses. Here, we fix m 0 , m 1 and m 2 as in Theorem F and we introduce the slice Πk+d(τ+4) δ,m = Πk+d(τ+4) δ m 0 ,m 1 ,m 2 ⊂ P where P is the phase space after the symplectic reduction by the symmetry by translations. In other words, Πk+d(τ+4) δ,m is the subset of P obtained by Πk+d(τ+4) δ once we have fixed the masses m 0 , m 1 and m 2 . We begin this second section with the following lemma concerning the dynamics of the Hamiltonian of the planar three-body problem K with the frame of reference attached to the center of mass. Lemma 7.1. There exists a symplectic transformation φ F defined on T 4 × B 4 , where B 4 is a 4-dimensional ball with some small unspecified radius, with φ F (T 4 × B 4 ) ⊂ P such that the Hamiltonian K • φ F : T 4 × B 4 → R can be written in the following form K • φ F (θ, r) = c + ω • r + R 0 (θ, r) • r 2 for some c ∈ R and ω ∈ R 4 . Proof. By Theorem 7.1, there exists a 4-dimensional Lagrangian invariant torus T ⊂ Πk+d(τ+4) δ,m for K supporting quasiperiodic dynamics. These tori form a set of positive Lebesgue measure, but we use only one such torus. We observe that φ F (T 4 × {0}) = T is a Lagrangian invariant torus for K. Hence K • φ F (θ, 0) = c for all θ ∈ T 4 and a suitable constant c ∈ R. Moreover, φ F (T 4 × {0}) = T support a quasiperiodic dynamics with some frequency vector ω ∈ R 4 , so ∂ r (K • φ F ) (θ, 0) = ω. In order to obtain suitable coordinates for the Hamiltonian H 0 of the planar three-body problem, we need to lift the symplectic change of variables φ F introduced in the previous lemma. Let φF (θ, ξ, r, η) be the symplectic transformation defined on T 4 × R 2 × B 4 × B 2 in such a way that φF (θ, ξ, r, η) = (ξ, η, φ F (θ, r)) (7.9) with ξ = X 0 and η = Y 0 . We recall that (X 0 , Y 0 ) are respectively the center of mass and the linear momentum of the planetary system defined in the previous section (see (7.7)). Letting φ = φ 0 • φF , (7.10) we have the following lemma Lemma 7.2. We can write the Hamiltonian of the planar three-body problem H 0 in the following form H 0 • φ : T 4 × R 2 × B 4 × B 2 -→ R, H 0 • φ(θ, ξ, r, η) = c + ω • r + R 0 (θ, r) • r 2 + |η| 2 2M . Proof. The proof of this lemma is a consequence of (7.8) and Lemma 7.1. 
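The normal form of Lemma 7.1 encodes the fact that {r = 0} is an invariant torus carrying the frequency vector ω: the remainder R_0(θ, r) • r^2 contributes nothing to the dynamics on r = 0. The following toy sketch, with a single angle instead of four and an arbitrary remainder of our own choosing, illustrates this; it is not the three-body Hamiltonian.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration of the normal form c + omega.r + R0(theta, r).r^2 of Lemma 7.1,
# with one angle.  omega and the remainder are arbitrary choices.

omega = 0.7

def vector_field(t, y):
    theta, r = y
    # K(theta, r) = omega*r + 0.5*(1 + 0.3*cos(theta))*r**2
    dtheta = omega + (1.0 + 0.3 * np.cos(theta)) * r     #  dK/dr
    dr = 0.15 * np.sin(theta) * r ** 2                   # -dK/dtheta
    return [dtheta, dr]

T = 50.0
sol = solve_ivp(vector_field, (0.0, T), [0.1, 0.0], rtol=1e-10, atol=1e-12)

print("max |r(t)| along the orbit started on the torus:", np.max(np.abs(sol.y[1])))
print("theta(T) - (theta(0) + omega*T) =", sol.y[0, -1] - (0.1 + omega * T))
```

Of course, the transformation φ_F provided by Theorem 7.1 is not explicit, being produced by a KAM-type argument.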
However, if the planetary system is composed of only two bodies, we can explicitly construct the change of coordinates φ (see Appendix D). We conclude this part dedicated to the Hamiltonian H 0 with a property that plays an important role in the following section. First, we observe that, for all i = 0, 1 sup (θ,r)∈T 4 ×B 4 |X i (θ, r)| < ∞ where X i are the coordinates defined in (7.7). This is because, for all i = 1, 2, X i , as a function of (θ, r) ∈ T 4 × B 4 , is continuous and 1-periodic with respect to θ j for all 0 ≤ j ≤ 4. 7 The Three-Body Problem plus Comet Lemma 7.3. We assume that |c(1)| > max i=1,2 sup (θ,r)∈T 4 ×B 4 |X i (θ, r)| 3 ε (7.11) where ε is the positive real parameter introduced in Section 7.1. Then, for all (θ, r) ∈ T 4 × B 4 and i = 1, 2 |X i (θ, r)| |c(t)| < ε 3 for all t ≥ 1. Proof. The proof is a straightforward computation. For all i = 1, 2 |X i (θ, r)| |c(t)| ≤ sup (θ,r)∈T 4 ×B 4 |X i (θ, r)| |c(1)| < ε 3 for all t ≥ 1. Therefore, thanks to (7.11), we conclude the proof of this lemma. Perturbing Function We begin this section with the introduction of a suitable neighbourhood U of T 4 × {0} × B 6 × J ⊂ T 4 × R 2 × B 6 × J where we expect the motions of interest to take place. For all fixed t ∈ J, we define B 2 t = {ξ ∈ R 2 : |ξ| |c(t)| < ε 3 }. (7.12) We consider U as the following subset of T 4 × R 2 × B 4 × B 2 × J, U = t∈J T 4 × B 2 t × B 4 × B 2 × {t} . (U) Let φ(θ, ξ, r, η, t) = (φ(θ, ξ, r, η), t), (7.13) where φ is the symplectic transformation defined by (7.10). This part aims to prove that the Hamiltonian H c • φ : U → R satisfies suitable estimates. First, let us show that H c • φ : U → R is well defined Lemma 7.4. For all t ∈ J and (θ, ξ, r, η) ∈ T 4 × B 2 t × B 4 × B 2 , |x i (θ, ξ, r)| |c(t)| < ε for all i = 0, 1, 2. Proof of Theorem F Proof. The proof is a straightforward computation. Because of (7.7) and Lemma 7.1, we can rewrite the cartesian coordinates (x 0 , x 1 , x 2 ) as follows      x 0 (θ, ξ, r) = ξ + m 1 M X 1 (θ, r) + m 2 M X 2 (θ, r) x 1 (θ, ξ, r) = ξ -m 0 +m 2 M X 1 (θ, r) + m 2 M X 2 (θ, r) x 2 (θ, ξ, r) = ξ + m 1 M X 1 (θ, r) -m 0 +m 1 M X 2 (θ, r), (7.14) for all t ∈ J and (θ, ξ, r, η) ∈ T 4 × B 2 t × B 4 × B 2 . By the latter, (U) and Lemma 7.3, for all t ∈ J and (θ, ξ, r, η) ∈ T 4 × B 2 t × B 4 × B 2 |x 0 (θ, ξ, r)| |c(t)| ≤ ξ + m 1 M X 1 (θ, r) + m 2 M X 2 (θ, r) |c(t)| ≤ |ξ)| |c(t)| + m 1 M |X 1 (θ, r)| |c(t)| + m 2 M |X 2 (θ, r)| |c(t)| ≤ |ξ| |c(t)| + |X 1 (θ, r)| |c(t)| + |X 2 (θ, r)| |c(t)| < ε for all t ∈ J. We recall that M = m 0 + m 1 + m 2 , which implies m i M ≤ 1 for all 0 ≤ i ≤ 2. Similarly, we have the claim for x 1 (θ, ξ, r) and x 2 (θ, ξ, r). The previous lemma ensures that H c • φ is well defined on U. Moreover, by (7.14), it is straightforward to verify that H c • φ does not depend on the variable η. The following lemma provides time-dependent estimations of H c . Lemma 7.5. For all k ∈ Z with k ≥ 0 and a suitable constant C(k) depending on k sup t≥1 H t c C k < C(k)M m c ε, sup t≥1 ∂ x H t c C k t 2 < C(k)M m c ε. For all t ∈ J, the above norms | • | C k are taken on φ (T 4 × B 2 t × B 2 × B 4 ), where φ is defined by (7.10). Proof. For all t ∈ J and (θ, ξ, r, η) ∈ T 4 × B 2 t × B 2 × B 4 , let us rewrite the Hamiltonian H c • φ in the following form H c • φ(θ, ξ, r, η, t) = 2 i=0 m i m c |x i (θ, ξ, r) -c(t)| . For the rest of this proof, x i = x i (θ, ξ, r) for all 0 ≤ i ≤ 2. We drop the coordinates (θ, ξ, r) to obtain a more elegant form. 
Using Legendre polynomials, we have that 1 |x i -c(t)| = 1 |c(t)| n≥0 P n (cos x i c(t)) |x i | |c(t)| n = 1 |c(t)| 1 + n≥1 P n (cos x i c(t)) |x i | |c(t)| n (7.15) 7 The Three-Body Problem plus Comet for all 0 ≤ i ≤ 2, where P n means the nth Legendre polynomial. This expansion hold if |x i | |c(t)| < 1 and this prerequisite is verified by Lemma 7.4. Now, we can conclude the proof of this lemma. We recall that C(•) means constants depending on some parameters indicated in brackets. Therefore, thanks to (7.15), Lemma 7.4 and Proposition 7.1, for all t ∈ J and (x, y) = (x 0 , x 1 , x 2 , y 0 , y 1 , y 2 ) ∈ φ(T 4 × B 2 t × B 4 × B 2 ), |H t c (x)| = 2 i=0 m i m c |x i -c(t)| = 2 i=0 m i m c |c(t)| 1 + n≥1 P n (cos x i c(t)) |x i | |c(t)| n ≤ 2 i=0 m i m c t |c(t)| 1 + n≥1 ε n < CM m c ε. In the last line, t ≥ 1 implies sup H t c C 0 < CM m c ε. (7.16) Now, for all 0 ≤ i ≤ 2, k ≥ 1, t ∈ J and (x, y) = (x 0 , x 1 , x 2 , y 0 , y 1 , y 2 ) ∈ φ(T 4 × B 2 t × B 4 × B 2 ) |∂ k x i H(x)|t 2 ≤ C(k) m i m c t 2 |x i -c(t)| k+1 = C(k)m i m c t 2 |c(t)| k+1 1 + n≥1 P n (cos x i c(t)) |x i | |c(t)| n k+1 ≤ C(k)m i m c t k+1 |c(t)| k+1 1 + n≥1 P n (cos x i c(t)) |x i | |c(t)| n k+1 ≤ C(k)ε k+1 m i m c 1 + n≥1 ε n k+1 ≤ ε k+1 C(k)m i m c , < ε k+1 C(k)M m c where t ≥ 1 and k + 1 ≥ 2 imply t 2 ≤ t k+1 in the third line. Similarly to the previous case, because of 0 < ε ≤ 1 2 , we can estimate 1 + n≥1 ε n k+1 by a suitable constant depending on k. Similarly to the previous case, taking the max for all 0 ≤ i ≤ 2 on the left-hand side of the latter we have sup t≥1 ∂ k x H t c C 0 t 2 < C(k)M m c ε k+1 . (7.17) Now, thanks to (7.16), (7.17) and remembering the definition of Hölder's norm (A.1), we conclude the proof of this lemma. 7.2 Proof of Theorem F Smooth Extension of the Perturbing Function This section is dedicated to the introduction of a suitable smooth extension of H c • φ. First, we consider the following subset of U. For all fixed t ∈ J, we define B 2 t /2 = {ξ ∈ R 2 : |ξ| |c(t)| < ε 6 } ⊂ B 2 t . (7.18) Let U 1 2 be the following subset of U, U 1 2 = t∈J T 4 × B 2 t /2 × B 4 × B 2 × {t} ⊂ U. ( U1 2 ) Lemma 7.6. There exists H ex : T 4 × R 2 × B 4 × B 2 × J -→ R such that H ex does not depend on η and H ex U 1 2 = H c • φ. (7.19) Moreover, If |c(1)| > 1 ε and v > 2 ε , then for all k ∈ Z with k ≥ 0 and some constants C(k) depending on k, we have the following estimates sup t≥1 |H t ex | C k < C(k)M m c ε, (7.20) sup t≥1 |∂ (θξr) H t ex | C k t 2 < C(k)M m c ε, (7.21) where the previous norms are taken on T 4 × R 2 × B 4 × B 2 . Before the proof, we have some comments. First, for the sake of clarity, (7.19) means that, for all (θ, ξ, r, η, t) ∈ U1 2 , H ex (θ, ξ, r, η, t) = H c • φ(θ, ξ, r, η, t). Furthermore, we observe that the constants in the last estimates may differ from those in Lemma 7.5 and they depend on the chosen extension. Proof. For all fixed t ∈ J, we consider the following family of functions          ρ t : R 2 -→ R 2 , ρ t ∈ C ∞ 0 (R 2 ), ρ t (ξ) = ξ for all |ξ| ≤ ε |c(t)| 6 , ρ t (ξ) = 0 for all |ξ| ≥ ε |c(t)| 3 . Hence, we define the following map π : T 4 × R 2 × B 4 × B 2 × J → T 4 × R 2 × B 4 × B 2 × J π(θ, ξ, r, η, t) = (θ, ρ t (ξ), r, η, t). It is straightforward to verify that = Id. This proves the first part of this lemma. Concerning the second part, we observe that, for all fixed t ∈ J and k ∈ Z with k ≥ 0 π T 4 × R 2 × B 4 × B 2 × J ⊂ U π U 1 2 = Id |ρ t | C k ≤ C(k) (ε|c(t)|) k ≤ C(k) t ε|c(t)| k ≤ C(k) for a suitable constant C(k) depending on k. 
We point out that the second inequality of the previous estimate is due to t ≥ 1 and the last is a consequence of Proposition 7.1. Thanks to the latter, one can see that sup t∈J |∇π t | C k < ∞. Now, we have everything we need to prove the second part of this lemma. We begin with the case k = 0. For all t ∈ J, we observe that by (7.22) and Lemma 7.5 |H t ex | C 0 (T 4 ×R 2 ×B 6 ) = H c • φ • π t C 0 (T 4 ×R 2 ×B 6 ) ≤ |H t c | C 0 (φ(T 4 ×B 2 t ×B 6 )) ≤ CM m c ε where B 2 t is defined by (7.12). For the sake of clarity, we have specified the domain where the Hölder norms are taken. Now, concerning the case k ≥ 1, for all t ∈ J |H t ex | C k (T 4 ×R 2 ×B 6 ) = H c • φ • π t C k (T 4 ×R 2 ×B 6 ) ≤ |H t c | C k (φ(T 4 ×B 2 t ×B 6 )) + |H t c | C k (φ(T 4 ×B 2 t ×B 6 )) |∇π t | k C 0 (T 4 ×R 2 ×B 6 ) + |H t c | C k (φ(T 4 ×B 2 t ×B 6 )) |∇π t | C k-1 (T 4 ×R 2 ×B 6 ) ≤ C k, sup t∈J |∇π t | C k-1 (T 4 ×R 2 ×B 6 ) M m c ε. We note that the first inequality (second line) is due to property 5. of Proposition A.2, while the last line is a consequence of Lemma 7.5. This proves (7.20). Similarly, one can prove (7.21). The rest of this section is devoted to showing that the Hamiltonian H : T 4 × R 2 × B 4 × B 2 -→ R, H = H 0 • φ + H ex satisfies the hypothesis of Theorem E, where H 0 is the Hamiltonian of the planar three-body problem. Obviously, by the previous lemma H U 1 2 = H • φ. (7.23) First, let us recall some definitions introduced at the beginning of this chapter. Let σ ≥ 0 be a positive real parameter 7.2 Proof of Theorem F Definition. Let S σ be the space of functions f defined on T 4 × R 2 × B 4 × B 2 × J such that f t ∈ C σ (T 4 × R 2 × B 4 × B 2 ) for all fixed t ∈ J and ∂ i (q,p) f ∈ C(T 4 × R 2 × B 4 × B 2 × J) for all 0 ≤ i ≤ [σ]. For every f ∈ S σ , we define the following norm |f | σ,l = sup t∈J |f t | C σ t l for a real positive parameter l. Obviously, for all t ∈ J, the above norm | • | C σ is taken on T 4 × R 2 × B 4 × B 2 . Lemma 7.7. We can rewrite H ex in the following form H ex (θ, ξ, r, t) = a(θ, ξ, t) + b(θ, ξ, t) • r + R c (θ, ξ, r, t) • r 2 , (7.24) for all (θ, ξ, r, t) ∈ T 4 × R 2 × B 4 × J. Moreover, for all k ∈ Z with k ≥ 1 |a| k+1,0 + |∂ (θ,ξ) a| k,2 < C(k)M m c ε, |b| k+1,2 < C(k)M m c ε, ∂ 2 r H ex k+1,2 < C(k)M m c ε, for some constants C depending on k. Proof. Expanding H ex in a small neighborhood of r = 0 H ex (θ, ξ, r, η, t) = H ex (θ, ξ, 0, t) + ∂ r H ex (θ, ξ, 0, t) • r + 1 0 (1 -τ )∂ 2 r H ex (θ, ξ, τ r, t)dτ • r 2 and letting a(θ, ξ, t) = H ex (θ, ξ, 0, t), b(θ, ξ, t) = ∂ r H ex (θ, ξ, 0, t)(θ, ξ, 0, η, t), R c (θ, ξ, r, t) = 1 0 (1 -τ )∂ 2 r H ex (θ, ξ, τ r, t)dτ we prove the first part of this lemma. The second part is a straightforward consequence of Lemma 7.6. More specifically, |a| k+1,0 ≤ sup t≥1 |H t ex | C k+1 , |∂ (θ,ξ) a| k,2 ≤ sup t≥1 |∂ (θ,ξ) H t ex | C k+1 t 2 , |b| k+1,2 ≤ sup t≥1 |∂ r H t ex | C k+1 t 2 , ∂ 2 r H ex k+1,2 ≤ sup t≥1 |∂ 2 r H t ex | C k+1 t 2 and thanks to (7.20) and (7.21) we conclude the proof of this lemma. 7 The Three-Body Problem plus Comet Summarizing the contents of the previous sections, we conclude this part with the following lemma Lemma 7.8. We can rewrite H in the following form H : T 4 × R 2 × B 4 × B 2 × J -→ R H(θ, ξ, r, η, t) = c + ω • r + a(θ, ξ, t) + b(θ, ξ, t) • r + m(θ, ξ, r, t) • r η 2 where m(θ, ξ, r, t) • r η 2 = R c (θ, ξ, r, t) • r 2 + R 0 (θ, r) • r 2 + |η| 2 2M . We note that c, ω and R 0 are defined in Lemma 7.1, while a, b and R c are introduced in Lemma 7.7 and M = m 0 + m 1 + m 2 . 
Moreover, for all k ∈ Z with k ≥ 1, where ∇φ = (∇φ 1 , ..., ∇φ 8 ) is the transposed of the Jacobian of φ. We will use this notation also for ∇φ 0 and ∇ φF . By Proposition A.2 concerning the properties of the Hölder norms and recalling that φ = φ 0 • φF (see (7.10)) |a| k+1,0 + |∂ (θ,ξ) a| k,2 < C(k)M m c ε, (7.25) |b| k+1,2 < C(k)M m c ε, (7.26) ∂ 2 (r,η) H k+1,0 < C(k, M, m c , ε, |∂ (x,y) H 0 • φ| C k+2 , |∇φ| C k+2 ), |∇φ| C k = ∇ φ 0 • φF C k = ∇φ 0 • φF ∇ φF T C k ≤ C(k)|∇φ 0 | C k |∇ φF | C k |∇ φF | C k-1 + |∇ φF | k C 0 + 1 , where T stands for the transpose of a matrix. We know that φ 0 is a linear transformation (see (7.7)), then |∇φ 0 | C k < ∞. On the other hand, φF is C ∞ -function, 1-periodic with respect to θ i for all 0 ≤ i ≤ 4, the variables r vary on a bounded subspace and it is the identity with respect to (ξ, η) (see Lemma 7.1 and (7.9)). This implies |∇ φF | C k < ∞ and hence the claim. Now, concerning the inequality (7.27), by Lemma 7.7 ∂ 2 r H ex k+1,2 < C(k)M m c ε. 7.2 Proof of Theorem F Now, we have to estimate |∂ 2 r (H 0 • φ)| k+1,0 . By the chain rule, (7.28) and properties 2. and 5. of Proposition A.2 ∂ 2 r (H 0 • φ) C k+1 ≤ C |∂ (x,y) H 0 • φ| C k+1 |∂ 2 r φ| C k+1 + |∂ 2 (x,y) H 0 • φ| C k+1 |∂ r φ| 2 C k+1 ≤ C (k, |∇φ| C k+2 ) |∂ (x,y) H 0 • φ| C k+2 . We observe that ∂ (x,y) H 0 • φ does not depend on ξ. Moreover, it is C ∞ , 1-periodic with respect to θ i for all 0 ≤ i ≤ 4 and the variable (r, η) vary on B 4 × B 2 that is bounded. This implies |∂ (x,y) H 0 • φ| C k+2 < ∞. Now summarizing the previous estimates we obtain ∂ 2 (r,η) H k+1,0 ≤ ∂ 2 r H ex k+1,0 + ∂ 2 r (H 0 • φ) k+1,0 ≤ C(k, M, m c , ε, |∂ (x,y) H 0 • φ| C k+2 , |∇φ| C k+2 ). Weakly Asymptotically Quasiperiodic Solutions In the previous section, for k sufficiently large, Lemma 7.8 ensures that H satisfies the hypotheses of Theorem E. Then, letting ϕ 0 : T 4 × R 2 → T 4 × R 2 × B 4 × B 2 be the trivial embedding ϕ 0 (θ, ξ) = (θ, ξ, 0, 0), for ε small enough, there exist v 1 : T 4 × R 2 × J → R 4 and v 2 : T 4 × R 2 × J → R 2 such that ϕ t (θ, ξ) = (θ, ξ, v t 1 (θ, ξ), v t 2 (θ, ξ)) (7.29) is a C 1 -weakly asymptotic cylinder associated to (X H , X H 0 •φ , ϕ 0 ). This means that there exist Γ 1 : . T 4 × R 2 × J → R 4 and Γ 2 : T 4 × R 2 × J → R 2 such that, letting Γ = (Γ 1 , Γ 2 ) and v = (v 1 , v 2 ), for all (θ, ξ, t) ∈ T 4 × R 2 × J, X H• φ(ϕ(θ, ξ, t), t) = ∂ (θ,ξ) ϕ(θ, ξ, t)(ω + Γ(θ, ξ, t)) + ∂ t ϕ(θ, We recall that B 2 t /2 is defined in (7.18). Letting ψ t 1,H be the flow at time t with initial time 1 of H, we have the following lemma. Lemma 7.9. We assume v > 12 1 + C ε . (7.32) Then, for all w ∈ W = φ • ϕ 1 (T 4 × (B 2 1 /2)), ψ t 1,H ( w) is a weakly asymptotically quasiperiodic solution associated to (X H , X H 0 , φ • ϕ 0 ). Proof. Let ψ t 1, H be the flow at time t with initial time 1 of H. For all (θ, ξ) ∈ T 4 × (B 2 1 /2), we define (θ t 1 (θ, ξ), ξ t 1 (θ, ξ), r t 1 (θ, ξ), η t 1 (θ, ξ)) = ψ t 1, H • ϕ 1 (θ, ξ). for all t ∈ J. Now, for all w ∈ W = φ • ϕ 1 (T 4 × (B 2 1 /2)) there exists (θ, ξ) ∈ T 4 × (B 2 1 /2) such that w = φ • ϕ 1 (θ, ξ ) and, because of φ is symplectic, by (7.34) and the latter, we can rewrite ψ t 1,H (w) in the following way ψ t 1,H (w) = ψ t 1,H • φ • ϕ 1 (θ, ξ) = φ • ψ t 1,H• φ • ϕ 1 (θ, ξ) = φ • ψ t 1, H • ϕ 1 (θ, ξ) = φ • ϕ t • ψ t 1,ω+Γ (θ, ξ). Moreover, for all t ∈ J ψ t 1,H (w) -φ • ϕ 0 • ψ t 1,ω+Γ (θ, ξ) ≤ φ • ϕ t • ψ t 1,ω+Γ (θ, ξ) -φ • ϕ 0 • ψ t 1,ω+Γ (θ, ξ) ≤ φ • ϕ t -φ • ϕ 0 C 0 ≤ C |∇φ| C 1 ϕ t -ϕ 0 C 0 for a suitable constant C. 
Therefore, reminding that |∇φ| C 1 < ∞ and taking the limit for t → +∞, thanks to (7.31), we conclude the proof of this lemma. This concludes the proof of Theorem F Part V Asymptotic Motions Converging to Arbitrary Dynamics This last chapter contains a variation of the result of Canadell-de la Llave [START_REF] Canadell | KAM tori and whiskered invariant tori for non-autonomous systems[END_REF] for time-dependent Hamiltonian vector fields and time-dependent vector fields on the torus. In the first case, we consider a time-dependent Hamiltonian vector field X converging exponentially fast in time to a Hamiltonian vector field X 0 having an invariant torus ϕ 0 supporting arbitrary dynamics generated by the vector field W . Unlike [START_REF] Canadell | KAM tori and whiskered invariant tori for non-autonomous systems[END_REF], we do not assume the dynamics associated with X 0 on ϕ 0 to be quasiperiodic. This situation already appears in Part IV, where the dynamics associated with X 0 on ϕ 0 are generated by time-dependent perturbations of constant vector fields on the torus. Here, we maintain the exponential decay but do not assume any smallness condition. We prove the existence of a C σ -asymptotic torus associated to (X, X 0 , ϕ 0 , W ) and thus the existence of solutions that asymptotically converge to the arbitrary dynamics associated to X 0 on ϕ 0 . Similarly, we have an analogous result concerning the case of time-dependent vector fields on the torus. The proofs are essentially the same as those of Theorem A and Corollary A. They rely on the implicit function theorem, where we look for a C σ -asymptotic torus defined for all t sufficiently large. Unlike the above-mentioned theorem, the solution of the associated homological equation is more complicated. Asymptotic Motions for Time Dependent Hamitlonians This chapter is divided into four sections. The first (Section 8.1) contains the definition of C σ -asymptotic torus. The above mentioned results (Theorem G and Corollary C) are stated in the second section (Section 8.2). The last two (Section 8.3 and Section 8.4) contain the proofs. C σ -Asymptotic Torus We recall the definition of C σ -asymptotic torus that generalizes that of C σ -asymptotic KAM torus (see Definition 1.3). Let B ⊂ R n be a ball centred at the origin, P be equal to T n or T n × B and, for all υ ≥ 0, J υ = [υ, +∞) ⊂ R. Given σ ≥ 0, υ ≥ 0 and a positive integer k ≥ 0, we consider time-dependent vector fields X t and X t 0 of class C σ+k on P, for all t ∈ J υ , an embedding ϕ 0 from T n to P of class C σ and a vector field on the torus W of class C σ such that lim t→+∞ |X t -X t 0 | C σ+k = 0, (8.1) X(ϕ 0 (q), t) = ∂ q ϕ 0 (q)W (q) for all (q, t) ∈ T n × J υ . (8.2) Definition (Définition 2.5). We assume that (X, X 0 , ϕ 0 , W ) satisfy (8.1) and (8.2). A family of C σ embeddings ϕ t : T n → P is a C σ -asymptotic torus associated to (X, X 0 , ϕ 0 , W ) if there exists υ ≥ υ ≥ 0 such that lim t→+∞ |ϕ t -ϕ 0 | C σ = 0, (8.3) X(ϕ(q, t), t) = ∂ q ϕ(q, t)W (q) + ∂ t ϕ(q, t), (8.4) for all (q, t) ∈ T n × J υ . When dimP = 2n, then ϕ t is Lagrangian if ϕ t (T n ) is Lagrangian for all t. First, we observe that if W (q) ≡ cst, then we obtain Définition 1.3. As one can expect, we can rewrite (8.4) in terms of the flow of X. Let ψ t t 0 ,X and ψ t t 0 ,W be the flow at time t with initial time t 0 of X and W , respectively. We assume that ψ t t 0 ,X is defined for all t, t 0 ∈ J υ . Then (8.4) is equivalent to ψ t t 0 ,X • ϕ t 0 = ϕ t • ψ t t 0 ,W (8.5) for all t, t 0 ∈ J υ . 
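An explicit example may help to fix ideas. On P = T^1, take a vector field W and a rate λ > 0, and define ϕ_t(q) = q + e^{-λt} together with X(Q, t) = W(Q - e^{-λt}) - λe^{-λt}; then (8.1), (8.3) and (8.4) hold by construction with X_0 = W and ϕ_0 = id, so ϕ_t is a C^σ-asymptotic torus, and the sketch below checks the conjugation (8.5) numerically. All the specific choices (W, λ, the initial point and the times) are ours and purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Explicit example of a C^sigma-asymptotic torus on P = T^1 and numerical check
# of the conjugation (8.5).  W, lam, the initial point and the times are
# arbitrary illustrative choices.

lam = 0.8
W = lambda q: 1.0 + 0.3 * np.sin(q)

phi = lambda q, t: q + np.exp(-lam * t)                   # the embeddings phi_t
X = lambda Q, t: W(Q - np.exp(-lam * t)) - lam * np.exp(-lam * t)   # built so that (8.4) holds

q0, t0, t1 = 0.4, 1.0, 15.0

# left-hand side of (8.5): flow of X starting from phi_{t0}(q0)
lhs = solve_ivp(lambda t, q: X(q, t), (t0, t1), [phi(q0, t0)],
                rtol=1e-10, atol=1e-12).y[0, -1]

# right-hand side of (8.5): phi_{t1} applied to the flow of W starting from q0
flow_W = solve_ivp(lambda t, q: W(q), (t0, t1), [q0],
                   rtol=1e-10, atol=1e-12).y[0, -1]
rhs = phi(flow_W, t1)

print("conjugation defect |lhs - rhs| =", abs(lhs - rhs))
```

Here |ϕ_t - ϕ_0|_{C^σ} = e^{-λt} and |X^t - W|_{C^{σ+k}} ≤ Ce^{-λt}, so all the requirements of the definition are met, and the numerical defect only measures the integration error.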
By the latter, (8.4) is trivial and if there exists a C σ -asymptotic torus ϕ t defined for all t large, then we can extend the set of definition for all t ∈ R. Results As usual, we begin with some definitions Definition. Let S υ σ be the space of functions f defined on T n × B × J υ such that f ∈ C(T n × B × J υ ) and, for all t ∈ J υ , f t ∈ C σ (T n × B). We use this notation also for functions defined on T n × J υ . For every f ∈ S υ σ and for fixed λ ≥ 0, we define the following norm |f | υ σ,λ = sup t∈Jυ |f t | C σ e λt . (8.6) We conclude this first part with the following proposition, which contains a series of properties of the previous norm. As one can expect, there is a significant similarity with those enumerated in Proposition A.2 (see Appendix A). Proposition 8.1. For all f , g ∈ S υ σ and positive parameters m, d ≥ 1, we have the following properties. a. For all β ∈ N 2n , if |β| + r ≤ σ then ∂ |β| ∂q 1 β 1 ...∂qn βn ∂p 1 β n+1 ...∂pn β 2n f υ r,λ ≤ |f | υ σ,λ b. |f | υ σ,λ ≤ |f | υ σ,kλ c. |f g| υ σ,dλ+mλ ≤ C(σ) |f | υ 0,dλ |g| υ σ,mλ + |f | υ σ,dλ |g| υ 0,mλ . Given σ ≥ 1, for all f , z ∈ S υ σ then f • z ∈ S υ σ 8.2 Results d. |f • z| υ σ,kλ+mλ ≤ C(σ) |f | υ σ,kλ |∇z| υ 0,mλ σ + |f | υ 1,kλ |∇z| υ σ-1,mλ + |f | υ 0,kλ+mλ , Proof. The proof is a straightforward application of Proposition A.2. Properties a. and b. are obvious. We verify the others. c. |f g| υ σ,kλ+mλ = sup t∈Jυ |f t g t | C σ e (kλ+mλ)t ≤ C(σ) sup t∈Jυ |f t | C 0 |g t | C σ + |f t | C σ |g t | C 0 e (kλ+mλ)t ≤ C(σ) sup t∈Jυ |f t | C 0 e kλt |g t | C σ e mλt + |f t | C σ e kλt |g t | C 0 e mλt ≤ C(σ) |f | υ 0,kλ |g| υ σ,mλ + |f | υ σ,kλ |g| υ 0,mλ d. |f • z| υ σ,kλ+mλ = sup t∈Jυ |f t • z t | C σ e (kλ+mλ)t ≤ C(σ) sup t∈Jυ |f t | C σ e kλt |∇z t | C 0 e mλt σ e (1-σ)mλt + C(σ) sup t∈Jυ |f t | C 1 e kλt |∇z t | C σ-1 e mλt + |f | C 0 e (kλ+mλ)t ≤ C(σ) |f | υ σ,kλ |∇z| υ 0,mλ σ + |f | υ 1,kλ |∇z| υ σ-1,mλ + |f | υ 0,kλ+mλ where we observe that if t ≥ 0 and σ ≥ 1 then e (1-σ)mλt ≤ 1. Given σ, υ ≥ 0 and an integer k ≥ 0, we conclude this part of set-up by reminding the following definition Definition. Let Sυ σ,k be the space of functions f such that f ∈ S υ σ+k and ∂ i qp f ∈ S υ σ+k-i for all 0 ≤ i ≤ k. In other words, we recall that this is the space of functions f ∈ S υ σ+k with partial derivatives with respect to (q, p) continuous until the order k. Let W ∈ C σ+2 (T n ), we recall that K W is the set of the Hamiltonians h : T n × B × J 0 → R such that, for all (q, t) ∈ T n × J 0 , h(q, 0, t) = c, ∂ p h(q, 0, t) = W (q) for some c ∈ R. Therefore, for all h ∈ K W , the trivial embedding ϕ 0 given by ϕ 0 : T n → T n × B, ϕ 0 (q) = (q, 0), is an invariant torus for X h and the restricted vector field is W . Given σ ≥ 1 and λ ≥ 0, let H be the Hamiltonian of the following form                    H : T n × B × J 0 → R H(q, p, t) = h(q, p, t) + f (q, p, t) h ∈ K W , W ∈ C σ+2 (T n ), f 0 , ∂ p f 0 , ∂ 2 p H ∈ S0 σ,2 , |f 0 | 0 σ+2,0 + |∂ q f 0 | 0 σ+1,λ < ∞, |∂ p f 0 | 0 σ+2,λ < ∞ |∂ 2 p H| 0 σ+2,0 < ∞ ( * G ) 141 Theorem G. Let H be as in ( * G ). Then, there exists a Hamiltonian h ∈ K W and a constant C(σ) depending on σ such that if λ > C(σ)|∂ q W | C 0 , (# G ) there exists a Lagrangian C σ -asymptotic torus associated to (X H , X h, ϕ 0 , W ). We will see that (# G ) plays a crucial role in the section dedicated to the solution of the homological equation. 
Furthermore, we will see that the previous constant C(σ) is defined in Lemma 8.3 Therefore, we have the following result concerning time-dependent vector fields on the torus. Given σ ≥ 1, let Z be a non-autonomous vector field on T n × J 0 of the form      Z(q, t) = W (q) + P (q, t) W ∈ C σ+1 (T n ), P ∈ S0 σ,1 , |P | 0 σ+1,λ < ∞. (Z C ) Corollary C. Let Z be as in (Z C ). Then, there exists a constant C(σ) depending on σ such that if λ > C(σ)|∂ q W | C 0 , there exists a C σ -asymptotic torus ψ t associated to (Z, W, Id, W ). Proof of Theorem G As in Theorem A, the proof rests on the implicit function theorem. As mentioned before, it is essentially the same as that of Theorem A. Following the ideas in the third chapter of this thesis, we are looking for a C σ -asymptotic torus defined for t sufficiently large so that the perturbative terms are small enough. To this end, by expanding the Hamiltonian H in ( * G ) in a small neighbourhood of 0 ∈ B, we can rewrite H in the following form                H : T n × B × J 0 -→ R H(q, p, t) = W (q) • p + a(q, t) + b(q, t) • p + m(q, p, t) • p 2 , a, b, ∂ 2 p H ∈ S0 σ,2 , W ∈ C σ+2 |a| 0 σ+2,0 + |∂ q a| 0 σ+1,λ ≤ Υ, |b| 0 σ+2,λ ≤ Υ |∂ 2 p H| 0 σ+2,0 ≤ Υ ( * * G ) for a suitable Υ ≥ 1. In the latter, we consider h(q, 0, t) = 0 for all (q, t) ∈ T n × J 0 . We can do it without loss of generality. Let h be the following Hamiltonian h(q, p, t) = W (q) • p + m(q, p, t) • p 2 for all (q, p, t) ∈ T n × B × J 0 . Obviously X H and X h satisfy (8.1). Proof of Theorem G Outline of the Proof of Theorem G We are looking for a C σ -asymptotic torus ϕ t associated to (X H , X h, ϕ 0 , W ). More concretely, for given H, we will choose υ ≥ 0 sufficiently large and we search for some functions u, v : T n × J υ → R n such that ϕ(q, t) = (q + u(q, t), v(q, t)), and to satisfy the following conditions X H (ϕ(q, t), t) -∂ q ϕ(q, t)W (q) -∂ t ϕ(q, t) = 0, (8.7) lim t→+∞ |u t | C σ = 0, lim t→+∞ |v t | C σ = 0. (8.8) for all (q, t) ∈ T n × J υ . The parameter υ is free and we will fix it large enough in Lemma 8.5. As expected, we introduce a suitable functional F given by (8.7). To this end, we define m(q, p, t)p = 1 0 ∂ 2 p H(q, τ p, t)dτ p = ∂ p m(q, p, t) • p 2 , φ(q, t) = (q + u(q, t), v(q, t), t), ũ(q, t) = (q + u(q, t), t), ∇u(q, t) W (q) = ∂ q u(q, t)W (q) + ∂ t u(q, t), ∇u(q, t) W (q) = ∂ q u(q, t)W (q) + ∂ t u(q, t). for all (q, p, t) ∈ T n × J υ . Similarly to Section 3.3.1, we define the following functional F(a, b, m, m, W, u, v) = (F 1 (b, m, W, u, v), F 2 (a, b, m, W, u, v)) with F 1 (b, m, W, u, v) = W • (id + u) -W + b • ũ + ( m • φ) v -(∇u) W , F 2 (a, b, m, W, u, v) = ∂ q a • ũ + (∂ q W • (id + u) + ∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) W . It is defined over suitable Banach spaces, which we will specify later. We observe that for all m, m and W , F(0, 0, m, m, W, 0, 0) = 0. As one can expect, we reformulate this problem in these terms. For fixed m, m and W in suitable Banach spaces and for (a, b) sufficiently close to (0, 0), we are looking for some functions u, v such that F(a, b, m, m, W, u, v) = 0 and in order to satisfy the asymptotic conditions (8.8). Concerning the associated linearized problem, the differential of F with respect to the variables (u, v) calculated in (0, 0, m, m, W, 0, 0) is equal to D (u,v) F(0, 0, m, m, W, 0, 0)(û, v) = ∂ q W û -(∇û) W + m0 v ∂ q W v + (∇v) W . The proof preserves the same structure as that of Theorem A. 
Given σ ≥ 1, λ > 0 and υ ≥ 0, this section aims to solve the following equation for the unknown κ : T n × J υ → R n      ∂ q κ(q, t)W (q) + ∂ t κ(q, t) ± ∂ q W (q)κ(q, t) = z(q, t) W ∈ C σ+1 (T n ), z ∈ S υ σ , |z| υ σ,λ < ∞. (HE G ) If W (q) ≡ W ∈ R n is constant, then the latter translates into the following easier problem ∂ q κ(q, t)W + ∂ t κ(q, t) = z(q, t) z ∈ S υ σ , |z| υ σ,λ < ∞. In the third chapter of this thesis (Section 3.3.3), we proved the existence of a unique solution to the above system satisfying suitable asymptotic conditions (see Lemma 3.1). We observe that (HE G ) is quite similar to the homological equation solved in the sixth chapter of this thesis (Lemma 6.5). The proof resembles that in Section 8.3.3 with suitable modifications. Therefore, we begin by proving several estimates. Let φ t W be the flow at time t of W (q). As usual, C(•) stands for constants depending on n and the other parameters into brackets. On the other hand, C means constants depending on n. Lemma 8.1. For all t ∈ R |∂ q φ t W | C σ-1 ≤ C(σ) (1 + |∂ q W | C σ-1 |t|) e cσ|∂qW | C 0 |t| , (8.11) with a positive constant c σ ≥ 1 depending on n and σ. By (8.11), we note that when σ = 1 and t ∈ R |∂ q φ t W | C 0 ≤ C (1 + |∂ q W | C 0 |t|) e c 1 |∂qW | C 0 |t| ≤ Ce c1 |∂qW | C 0 |t| for a suitable c1 > c 1 . Proof. The proof is quite similar to that of Lemma 6.3 (see Section 6.3.4). By the fundamental theorem of calculus, we can write φ t W in the following form φ t W (q) = q + t 0 W • φ τ W (q)dτ. Therefore, taking the derivative with respect to q ∂ q φ t W (q) = Id + t 0 ∂ q W • φ τ W (q)∂ q φ τ W (q)dτ, where Id stands for the identity matrix. We assume t ≥ 0. Then, we can estimate the norm C σ-1 of the left-hand side of the latter as follows |∂ q φ t W | C σ-1 ≤ 1 + t 0 |∂ q W • φ τ W ∂ q φ τ W | C σ-1 dτ. (8.12) Case σ = 1. By Proposition A.2 |∂ q φ t W | C 0 ≤ 1 + C t 0 |∂ q W | C 0 |∂ q φ τ W | C 0 dτ, for a suitable constant C. Then, thanks to (8.10) |∂ q φ t W | C 0 ≤ e c1 |∂qW | C 0 t (8.13) for a suitable constant c1 ≥ 1. It remains to verify (8.11) when σ > 1. By Proposition (A.2), we can estimate the norm on the right-hand side of (8.12) as follows |∂ q W • φ τ W ∂ q φ τ W | C σ-1 ≤ C(σ) (|∂ q W • φ τ W | C σ-1 |∂ q φ τ W | C 0 + |∂ q W | C 0 |∂ q φ τ W | C σ-1 ) . Hence, we can rewrite (8.11) in the following form |∂ q φ t W | C σ-1 ≤ 1 + C(σ) t 0 |∂ q W • φ τ W | C σ-1 |∂ q φ τ W | C 0 dτ + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 , dτ. (8.14) Unlike the proof of Lemma 6.5, we need to treat cases 1 < σ < 2 and σ ≥ 2 separately. Case 1 < σ < 2. Thanks to Proposition (A.2), |∂ q W • φ τ W | C σ-1 ≤ C(σ) |∂ q W | C σ-1 |∂ q φ τ W | σ-1 C 0 + |∂ q W | C 0 . Replacing the latter into (8.14), we can rewrite it in the following way |∂ q φ t W | C σ-1 ≤ 1 + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C 0 dτ + C(σ) t 0 |∂ q W | C σ-1 |∂ q φ τ W | σ C 0 dτ + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 dτ. Proof of Theorem G Now, (8.13) allows us to find an upper bound for the first two integrals on the right-hand side of the latter t 0 |∂ q W | C 0 |∂ q φ τ W | C 0 dτ ≤ |∂ q W | C 0 t 0 e c1 |∂qW | C 0 τ dτ = e c1 |∂qW | C 0 t -1 c1 t 0 |∂ q W | C σ-1 |∂ q φ τ W | σ C 0 dτ ≤ |∂ q W | C σ-1 t 0 e c 1 σ|∂qW | C 0 τ dτ ≤ |∂ q W | C σ-1 te c1 σ|∂qW | C 0 t . In the second line of the latter, rather than calculating the integral, we prefer using the trivial estimate e c 1 σ|∂qW | C 0 τ ≤ e c 1 σ|∂qW | C 0 t to avoid a division by |∂ q W | C 0 since we do not assume it is not zero. 
Hence, we can estimate |∂ q φ t W | C σ-1 as follows |∂ q φ t W | C σ-1 ≤ 1 + C(σ) e c1 |∂qW | C 0 t -1 c1 + C(σ)|∂ q W | C σ-1 te c1 σ|∂qW | C 0 t + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 , dτ, ≤ C(σ) (1 + |∂ q W | C σ-1 t) e c1 σ|∂qW | C 0 t + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 , dτ. Then, thanks to the Gronwall inequality (8.10) |∂ q φ t W | C σ-1 ≤ C(σ) (1 + |∂ q W | C σ-1 t) e c1 σ|∂qW | C 0 t e C(σ) t 0 |∂qW | C 0 dτ ≤ C(σ) (1 + |∂ q W | C σ-1 t) e cσ|∂qW | C 0 t for a suitable constant c σ ≥ c1 σ. This concludes the proof for the case 1 < σ < 2. The general case σ > 2 is quite similar to the previous one. The main difference lies in the estimation of |∂ q W • φ τ W | C σ-1 . Case σ > 2. By Proposition A.2, |∂ q W • φ τ W | C σ-1 ≤ C(σ) |∂ q W | C σ-1 |∂ q φ τ W | σ-1 C 0 + |∂ q W | C 1 |∂ q φ τ W | C σ-2 + |∂ q W | C 0 and replacing the latter into (8.14), we can estimate |∂ q φ t W | C σ-1 as follows |∂ q φ t W | C σ-1 ≤ 1 + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C 0 dτ + C(σ) t 0 |∂ q W | C σ-1 |∂ q φ τ W | σ C 0 dτ + C(σ) t 0 |∂ q W | C 1 |∂ q φ τ W | C σ-2 |∂ q φ τ W | C 0 dτ + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 dτ. We have already estimated the first two integrals on the right-hand side of the latter. It remains the integral in the second line. By the convexity property of the Hölder norms (Proposition A.1), for all fixed τ |∂ q W | C 1 |∂ q φ τ W | C σ-2 ≤ C(σ) |∂ q W | σ-2 σ-1 C 0 |∂ q W | 1 σ-1 C σ-1 |∂ q φ τ W | 1 σ-1 C 0 |∂ q φ τ W | σ-2 σ-1 C σ-1 147 and hence |∂ q W | C 1 |∂ q φ τ W | C σ-2 |∂ q φ τ W | C 0 ≤ C(σ) (|∂ q W | C 0 |∂ q φ τ W | C σ-1 ) σ-2 σ-1 (|∂ q W | C σ-1 |∂ q φ τ W | σ C 0 ) 1 σ-1 . From a λ b 1-λ ≤ C(a + b) for 0 < λ < 1, we have that |∂ q W | C 1 |∂ q φ τ W | C σ-2 |∂ q φ τ W | C 0 ≤ C(σ) (|∂ q W | C 0 |∂ q φ τ W | C σ-1 + |∂ q W | C σ-1 |∂ q φ τ W | σ C 0 ) . Furthermore, replacing the latter in the previous estimate of |∂ q φ t W | C σ-1 , we obtain |∂ q φ t W | C σ-1 ≤ 1 + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C 0 dτ + C(σ) t 0 |∂ q W | C σ-1 |∂ q φ τ W | σ C 0 dτ + C(σ) t 0 |∂ q W | C 0 |∂ q φ τ W | C σ-1 dτ. Now, similarly to the previous case (1 < σ < 2), we conclude the proof of (8.11) also in this general case. Similarly, we have the claim when t ≤ 0. Now, we consider R : T n × J υ × J υ → M n , where M n is the set of the ndimensional matrices. For all (q, τ, t) ∈ T n × J υ × J υ , R(q, t, τ ) is the matrix having elements equal to r ij (q, t, τ ) for all 1 ≤ i, j ≤ n. In other words, R(q, t, τ ) = {r ij (q, t, τ )} 1≤i,j≤n . We define the following family of norms |R t τ | C s = max 1≤i,j≤n |r ij (q, t, τ )| C s , for positive real parameters s ≥ 0. We consider the following system that plays an important role in the solution of the homological equation (HE G ) Ṙ(q, t, τ ) = ∓∂ q W • φ t W (q)R(q, t, τ ) R(q, τ, τ ) = Id. (R) where W is defined in (HE G ). For all fixed τ , t ∈ J υ , in what follows we denote R t τ (q) = R(q, t, τ ). Lemma 8.2. The latter system admits a unique solution. Moreover, for all τ , t ∈ J υ with τ ≥ t, letting R(q, t, τ ) = R(φ -τ W (q), t, τ ), we have the following estimates |R t τ | C 0 ≤ e c R 0 |∂qW | C 0 (τ -t) (8.15) | Rt τ | C σ ≤ C(σ) (1 + |∂ q W | C σ (τ -t)) e c R σ |∂qW | C 0 (τ -t) (8.16) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e c R σ |∂qW | C 0 (τ -t) with positive constants c R 0 > 0 and c R σ ≥ c σ . We point out that c σ is the positive constant introduced in the previous lemma. 
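Before turning to the proof of Lemma 8.2, the bound (8.13) just obtained can be checked numerically on a toy example. The sketch below (illustrative only; the vector field W on T^1 is an arbitrary smooth choice) integrates the flow of W together with its variational equation and compares sup_q |∂_q φ^t_W| with the Gronwall-type exponential bound; in this scalar case the bound holds with C = c = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy vector field on T^1 (illustrative choice).
W      = lambda q: 0.5 + 0.2*np.sin(q)
dW     = lambda q: 0.2*np.cos(q)
sup_dW = 0.2                              # |d_q W|_{C^0}

def rhs(t, y):
    # y = (q, D): the flow of W coupled with its variational equation
    #   q' = W(q),   D' = d_qW(q) D,   D(0) = 1,  so that D = d_q phi_W^t.
    q, D = y
    return [W(q), dW(q)*D]

T, ts = 30.0, np.linspace(0.0, 30.0, 301)
sup_D = np.zeros_like(ts)
for q0 in np.linspace(0.0, 2*np.pi, 50, endpoint=False):
    sol = solve_ivp(rhs, (0.0, T), [q0, 1.0], t_eval=ts, rtol=1e-9, atol=1e-12)
    sup_D = np.maximum(sup_D, np.abs(sol.y[1]))

# (8.13) with sigma = 1 predicts sup_q |d_q phi_W^t| <= C e^{c |d_qW|_{C^0} t};
# here Gronwall gives the bound with C = c = 1.
print(np.all(sup_D <= np.exp(sup_dW*ts) + 1e-8))   # expected: True
```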
Before the proof, we observe that when σ = 1, thanks to (8.16), | Rt τ | C 1 ≤ C 1 + |∂ q W | C 1 (τ -t) + |∂ q W | C 1 |∂ q W | C 0 (τ -t) 2 e c R 1 |∂qW | C 0 (τ -t) ≤ C (1 + |∂ q W | C 1 (τ -t)) e c R 1 |∂qW | C 0 (τ -t) + C|∂ q W | C 1 (τ -t)e |∂qW | C 0 (τ -t) e c R 1 |∂qW | C 0 (τ -t) ≤ C (1 + |∂ q W | C 1 (τ -t)) e cR 1 |∂qW | C 0 (τ -t) for a suitable cR 1 > c R 1 . Proof of Theorem G Proof. We prove this lemma in the case Ṙ(q, t, τ ) = ∂ q W • φ t W (q)R(q, t, τ ). The other case ( Ṙ(q, t, τ ) = -∂ q W • φ t W (q)R(q, t, τ )) can be proved similarly. For all q ∈ T n , a unique solution of (R) exists by the theorem of existence and uniqueness. It remains to prove the estimates. By the fundamental theorem of calculus, we can write R as follows R t τ (q) = Id - τ t ∂ q W • φ s W (q)R s τ (q)ds. (8.17) for all q ∈ T n and t, τ ∈ J υ with τ ≥ t. Then, thanks to the latter, we can estimate |R t τ | C 0 in the following way |R t τ | C 0 ≤ 1 + C τ t |∂ q W | C 0 |R s τ | C 0 ds. Therefore, by the Gronwall inequality (8.10), we have |R t τ | C 0 ≤ e τ t C|∂qW | C 0 ds ≤ e c 0 R |∂qW | C 0 (τ -t) for a suitable positive constant c R 0 . This concludes the proof of (8.15). Now, we prove (8.16). By (8.17), we can write Rt τ in the following form Rt τ (q) = R t τ • φ -τ W (q) = Id - τ t ∂ q W • φ s-τ W (q) Rs τ (q)ds. Hence, we can estimate | Rt τ | C σ in such a way that | Rt τ | C σ ≤ 1 + τ t |∂ q W • φ s-τ W Rs τ | C σ ds. (8.18) We will estimate the norm into the integral on the right-hand side of the latter using Proposition A.2 and (8.15). The claim is a consequence of the Gronwall inequality (8.9). As a consequence of Proposition A.2, |∂ q W • φ s-τ W Rs τ | C σ ≤ C(σ) |∂ q W • φ s-τ W | C σ |R s τ | C 0 + |∂ q W | C 0 | Rs τ | C σ |∂ q W • φ s-τ W | C σ ≤ C(σ) |∂ q W | C σ |∂ q φ s-τ W | σ C 0 + |∂ q W | C 1 |∂ q φ s-τ W | C σ-1 + |∂ q W | C 0 and replacing the latter into (8.18) | Rt τ | C σ ≤ 1 + C(σ) τ t |∂ q W | C σ |∂ q φ s-τ W | σ C 0 |R s τ | C 0 ds + C(σ) τ t |∂ q W | C 1 |∂ q φ s-τ W | C σ-1 |R s τ | C 0 ds + C(σ) τ t |∂ q W | C 0 |R s τ | C 0 ds + C(σ) τ t |∂ q W | C 0 | Rs τ | C σ ds. Now, by (8.15) and Lemma 8.1, we can estimate the first three integrals on the right-hand side of the latter τ t |∂ q W | C σ |∂ q φ s-τ W | σ C 0 |R s τ | C 0 ds ≤ C(σ)|∂ q W | C σ τ t e c1 σ|∂qW | C 0 (τ -s) e c R 0 |∂qW | C 0 (τ -s) ds ≤ C(σ)|∂ q W | C σ (τ -t)e (c 1 σ+c R 0 )|∂qW | C 0 (τ -t) τ t |∂ q W | C 1 |∂ q φ s-τ W | C σ-1 |R s τ | C 0 ds ≤ C(σ)|∂ q W | C 1 τ t e (cσ+c R 0 )|∂qW | C 0 (τ -s) ds + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 τ t (τ -s)e (cσ+c R 0 )|∂qW | C 0 (τ -s) ds ≤ C(σ)|∂ q W | C 1 (τ -t)e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e (cσ+c R 0 )|∂qW | C 0 (τ -t) τ t |∂ q W | C 0 |R s τ | C 0 ds ≤ |∂ q W | C 0 τ t e c R 0 |∂qW | C 0 (τ -s) ds = 1 c R 0 e c R 0 |∂qW | C 0 (τ -t) -1 . Similarly to the previous lemma, in the first two integrals on the left-hand side of the latter, we use some trivial inequalities to avoid the division by |∂ q W | C 0 . 
Then, by the above estimations and remembering that c1 σ ≤ c σ | Rt τ | C σ ≤ 1 + C(σ) e c R 0 |∂qW | C 0 (τ -t) -1 + C(σ)|∂ q W | C σ (τ -t)e (c 1 σ+c R 0 )|∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 (τ -t)e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ) τ t |∂ q W | C 0 | Rs τ | C σ ds ≤ C(σ) (1 + |∂ q W | C σ (τ -t)) e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ) t τ |∂ q W | C 0 | Rs τ | C σ ds . Similarly to the proof of Lemma 6.4, we define the following function a(t) = C(σ) (1 + |∂ q W | C σ (τ -t)) e (cσ+c R 0 )|∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e (cσ+c R 0 )|∂qW | C 0 (τ -t) and we rewrite the latter in the following way | Rt τ | C σ ≤ a(t) + C(σ) t τ |∂ q W | C 0 | Rs τ | C σ ds . However, it is straightforward to verify that a is a monotone decreasing func-8.3 Proof of Theorem G tion. Hence, by the more general inequality (6.16) | Rt τ | C σ ≤ a(t) + C(σ) t τ a(s)|∂ q W | C 0 e |C(σ) t s |∂qW | C 0 dδ| ds ≤ a(t) 1 + C(σ)|∂ q W | C 0 τ t e C(σ)|∂qW | C 0 (s-t) ds = a(t) 1 + e C(σ)|∂qW | C 0 (τ -t) -1 ≤ a(t) 1 + e C(σ)|∂qW | C 0 (τ -t) ≤ C(σ) (1 + |∂ q W | C σ (τ -t)) e c R σ |∂qW | C 0 (τ -t) + C(σ)|∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e c R σ |∂qW | C 0 (τ -t) for a suitable constant c R σ ≥ c σ + c R 0 . We observe that the constant c R σ , as for c σ , goes to infinity if σ → ∞. This means that, in order to solve the homological equation, we must counter the growth of c R σ and c σ assuming λ sufficiently large. Lemma 8.3 (Homological equation). There exists a solution κ, (∇κ ) W ∈ S υ σ of (HE G ). Moreover, letting c κ σ = max{c σ + c R 0 , c R σ + c R 0 , cR 1 + c σ }, if λ > c κ σ |∂ q W | C 0 (8.19) then, |κ| υ σ,λ ≤ C(σ) 1 λ -c κ σ |∂ q W | C 0 |z| υ σ,λ (8.20) + C(σ) |∂ q W | C σ (λ -c κ σ |∂ q W | C 0 ) 2 + |∂ q W | C 1 |∂ q W | C σ-1 (λ -c κ σ |∂ q W | C 0 ) 3 |z| υ σ,λ . Proof. Existence: Let us define the following transformation h : T n × J υ -→ T n × J υ h(q, t) = (φ -t W (q), t) where φ t W is the flow of W previously introduced. We claim that it suffices to prove the first part of this lemma for the much easier equation ∂ t κ(q, t) ± ∂ q W • φ t W (q)κ(q, t) = z • h -1 (q, t). (8.21) If κ is a solution of the latter, then κ = κ • h is a solution of (HE G ) and viceversa. We prove this claim. Let κ be a solution of (HE G ), ∂ t κ • h -1 ± ∂ q W • φ t W κ • h -1 = ∂ q κ • h -1 φt W + ∂ t κ • h -1 ± ∂ q W • φ t W κ • h -1 = ∂ q κ • h -1 W • φ t W + ∂ t κ • h -1 ± ∂ q W • φ t W κ • h -1 = z • h -1 where φt W stands for the derivative of φ t W with respect to t. It is obviously equal to W • φ t W . Furthermore, the last equality is a consequence of (HE G ). This proves that κ • h -1 is a solution of (8.21). Let us first show that ∂ q φ -t W W = φ-t W . We know that φ t W is the flow of W , then φ-t W = W • φ -t W . However, for all (q, t) ∈ T n × J υ , W • φ -t W (q) = ∂ q φ -t W (q)W (q). This is because the pull-back of W by φ -t W is equal to W . In others words, φ -t W * W = W where φ -t W * W = ∂ q φ -t W -1 W • φ -t W . Then, ∂ q φ -t W W = φ-t W . Now, let κ be a solution of (8.21), then ∂ q (κ • h) W + ∂ t (κ • h) ± ∂ q W (κ • h) = (∂ q κ • h) ∂ q φ -t W W -(∂ q κ • h) φ-t W + ∂ t κ • h ± ∂ q W (κ • h) = ∂ t κ • h ± ∂ q W (κ • h) = z. Hence κ • h is a solution of (HE G ), where the last equality of the latter is a consequence of (8.21). This proves the claim. 
For all q ∈ T n , let R(q, t, υ) be the unique solution of (R). For all (q, t) ∈ T n ×J υ a solution κ of (8.21) exists and κ(q, t) = R(q, t, υ)e(q) - t υ R(q, t, τ )z • h -1 (q, τ )dτ = R(q, t, υ) e(q) - t υ R(q, υ, τ )z • h -1 (q, τ )dτ where e is a function defined on the torus. Estimates: We choose e equal to e(q) = +∞ υ R(q, υ, τ )z • h -1 (q, τ )dτ for all q ∈ T n . It is well defined because, by Lemma 8.2 and (# G ), +∞ υ R(q, υ, τ )z • h -1 (q, τ )dτ ≤ C +∞ υ |R υ τ | C 0 |z τ | C 0 dτ ≤ C|z| υ 0,λ +∞ υ e (c R 0 |∂qW | C 0 -λ)s ds = C|z| υ 0,λ λ -c R 0 |∂ q W | C 0 e (c R 0 |∂qW | C 0 -λ)υ Therefore, for all (q, t) ∈ T n × J υ , κ(q, t) = κ • h(q, t) = - +∞ t R t τ • φ -t W (q)z τ • φ τ -t W (q)dτ = - +∞ t R t τ • φ -τ W • φ τ -t W (q)z τ • φ τ -t W (q)dτ = - +∞ t Rt τ • φ τ -t W (q)z τ • φ τ -t W (q)dτ is the solution of (HE G ) we are looking for. The estimate (8.20) is a consequence of Proposition A.2, Lemma 8.1, Lemma 8.2 and (8.19). For all fixed t ∈ J υ , by Proposition A.2, we can estimate |κ t | C σ as follows |κ t | C σ ≤ C(σ) +∞ t | Rt τ • φ τ -t W | C σ |z τ | C 0 + |R t τ | C 0 |z τ • φ τ -t W | C σ dτ. Always using Proposition A.2 |z τ • φ τ -t W | C σ ≤ C(σ)|z τ | C σ |∂ q φ τ -t W | σ C 0 + |∂ q φ τ -t W | C σ-1 + 1 | Rt τ • φ τ -t W | C σ ≤ C(σ) | Rt τ | C σ |∂ q φ τ -t W | σ C 0 + | Rt τ | C 1 |∂ q φ τ -t W | C σ-1 + |R t τ | C 0 and replacing the latter into the above integral |κ t | C σ ≤ C(σ) +∞ t |R t τ | C 0 |z τ | C σ |∂ q φ τ -t W | σ C 0 dτ + C(σ) +∞ t |R t τ | C 0 |z τ | C σ |∂ q φ τ -t W | C σ-1 dτ + C(σ) +∞ t | Rt τ | C σ |∂ q φ τ -t W | σ C 0 |z τ | C σ dτ + C(σ) +∞ t | Rt τ | C 1 |∂ q φ τ -t W | C σ-1 |z τ | C σ dτ + C(σ) +∞ t |R t τ | C 0 |z τ | C σ dτ. It remains to estimate each integral on the right-hand side of the latter. But, first, we note that for all t ∈ J υ and x < 0 where the latter is obtained by integrating by part. Now, thanks to Lemma 8.1, Lemma 8.2, (8.19) and the latter +∞ t |R t τ | C 0 |z τ | C σ |∂ q φ τ -t W | σ C 0 dτ ≤ C(σ)|z| υ σ,λ +∞ t e (c1σ+c R 0 )|∂qW| C 0 (τ -t) e -λτ dτ = C(σ) |z| υ σ,λ λ -(c 1 σ + c R 0 ) |∂ q W | C 0 e λt +∞ t |R t τ | C 0 |z τ | C σ |∂ q φ τ -t W | C σ-1 dτ ≤ C(σ)|z| |z| υ σ,λ |∂ q W | C σ (λ -(c R σ + c1 σ) |∂ q W | C 0 ) 2 e λt + C(σ) |z| υ σ,λ |∂ q W | C 1 |∂ q W | C σ-1 (λ -(c R σ + c1 σ) |∂ q W | C 0 ) 3 e λt +∞ t | Rt τ | C 1 |∂ q φ τ -t W | C σ-1 |z τ | C σ dτ ≤ C(σ)|z| υ σ,λ +∞ t (1 + |∂ q W | C 1 (τ -t)) (1 + |∂ q W | C σ-1 (τ -t)) e (c R 1 W | C 1 + |∂ q W | C σ-1 ) (λ -(c R 1 + c σ ) |∂ q W | C 0 ) 2 e λt + C(σ) |z| υ σ,λ |∂ q W | C 1 |∂ q W | C σ-1 (λ -(c R 1 + c σ ) |∂ q W | C 0 ) 3 e λt λ -(c 1 σ + c R 0 ) |∂ q W | C 0 + 1 λ -(c σ + c R 0 ) |∂ q W | C 0 + |∂ q W | C σ-1 (λ -(c σ + c R 0 ) |∂ q W | C 0 ) 2 + 1 λ -(c R σ + c1 σ) |∂ q W | C 0 + |∂ q W | C σ (λ -(c R σ + c1 σ) |∂ q W | C 0 ) 2 + |∂ q W | C 1 |∂ q W | C σ-1 (λ -(c R σ + c1 σ) |∂ q W | C 0 ) 3 + 1 λ -(c R 1 + c σ ) |∂ q W | C 0 + |∂ q W | C 1 + |∂ q W | C σ-1 (λ -(c R 1 + c σ ) |∂ q W | C 0 ) 2 + |∂ q W | C 1 |∂ q W | C σ-1 (λ -(c R 1 + c σ ) |∂ q W | C 0 ) 3 + 1 λ -c R 0 |∂ q W | C 0 |z| υ σ,λ ≤ C(σ) 1 λ -c κ σ |∂ q W | C 0 + |∂ q W | C σ (λ -c κ σ |∂ q W | C 0 ) 2 + |∂ q W | C 1 |∂ q W | C σ-1 (λ -c κ σ |∂ q W | C 0 ) 3 |z| υ σ,λ for all t ∈ J υ . Furthermore, taking the sup for all t ∈ J υ on the left-hand side of the latter, we conclude the proof. We observe that we do not find a unique solution to the homological equation in this case, unlike when W is constant. 
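The integral representation above becomes completely explicit in the constant-frequency case, where ∂_qW = 0, the resolvent R of (R) is the identity, and φ^{τ-t}_W(q) = q + W(τ-t). The following sketch (toy data; the right-hand side z and the frequency are arbitrary choices, not taken from the thesis) implements κ(q,t) = -∫_t^{+∞} z(q + W(τ-t), τ) dτ, i.e. integration of z along the characteristics of W, and checks numerically both the equation ∂_qκ·W + ∂_tκ = z and the decay |κ^t|_{C^0} ≤ e^{-λt}/λ.

```python
import numpy as np
from scipy.integrate import quad

# Constant-frequency toy example (illustrative): |z^t|_{C^0} <= e^{-lam t}.
lam, Wc = 1.5, 0.7
z = lambda q, t: np.exp(-lam*t)*np.sin(q + 0.3*t)

def kappa(q, t, horizon=60.0):
    # kappa(q, t) = -int_t^{+oo} z(q + Wc*(tau - t), tau) dtau, truncated numerically;
    # the neglected tail is of order e^{-lam*(t + horizon)}.
    val, _ = quad(lambda tau: z(q + Wc*(tau - t), tau), t, t + horizon, limit=200)
    return -val

def residual(q, t, h=1e-4):
    # finite-difference check of  d_q kappa * Wc + d_t kappa = z
    dq = (kappa(q + h, t) - kappa(q - h, t)) / (2*h)
    dt = (kappa(q, t + h) - kappa(q, t - h)) / (2*h)
    return dq*Wc + dt - z(q, t)

for t in (0.0, 2.0, 5.0):
    print(t, residual(1.1, t),                       # ~ 0 up to quadrature error
          abs(kappa(1.1, t)) <= np.exp(-lam*t)/lam)  # decay |kappa^t| <= e^{-lam t}/lam
```

In the genuinely non-constant case treated above, the term ±∂_qW κ and the free choice of the function e in the proof mean that the solution of (HE_G) is not singled out canonically.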
That is why we prove only the existence of a right inverse of the differential of F introduced in Section 8.3.1. The following two sections conclude the proof of Theorem G. As mentioned before, the proof is quite similar to that of Theorem A. ∂ q W (id + τ u)dτ u + b • ũ + ( m • φ) v -(∇u) W F 2 (a, b, m, W, u, v) = ∂ q a • ũ + (∂ q W • (id + u) + ∂ q b • ũ) v + (∂ q m • φ) • v 2 + (∇v) W , where the second line of the latter is a consequence of the Taylor formula. Thanks to Proposition 8.1, the functional F is well defined and continuous. Moreover, F is differentiable with respect to the components (u, v), with D (u,v) F 1 (b, m, W, u, v)(û, v) = D u F 1 (b, m, W, u, v)û + D v F 1 (b, m, W, u, v)v = (∂ q W • (id + u) + ∂ q b • ũ) û + v T (∂ q m • φ) û + v T (∂ p m • φ) v + ( m • φ) v -(∇û) W D (u,v) F 2 (a, b, m, W, u, v)(û, v) = D u F 2 (a, b, m, W, u, v)û + D v F 2 (a, b, m, W, u, v)v = ∂ 2 q a • ũ û + v T ∂ 2 q W • (id + u) + ∂ 2 q b • ũ û + (v T ) 2 ∂ 2 q m • φ û + (∂ q W • (id + u) + ∂ q b • ũ) v + (v T ) 2 ∂ 2 pq m • φ v + 2v T (∂ q m • φ) v + (∇v) W , where T stands for transpose. These differentials are continuous. Furthermore, the differential D (u,v) F calculated in (0, 0, m, m, W, 0, 0) is equal to D (u,v) F(0, 0, m, m, W, 0, 0)(û, v) = ∂ q W û -(∇û) W + m0 v ∂ q W v + (∇v) W , (8.22) where for all (q, t) ∈ T n × J υ we let m0 (q, t) = m(q, 0, t). The following lemma proves that, for all fixed m, m ∈ M and W ∈ W, D (u,v) F(0, 0, m, m, W, 0, 0) admits a right inverse. Lemma 8.4. For all (z, g) ∈ Z × G, there exists (û, v) ∈ U × V such that D (u,v) F(0, 0, m, m, W, 0, 0)(û, v) = (z, g). By Lemma 8.3, a solution of the first equation of (8.24) exists and we have that |û| υ σ,λ ≤ C(σ, λ, |∂ q W | C 0 , |∂ q W | C σ )| m0 v -z| υ σ,λ ≤ C(σ, λ, Υ, |∂ q W | C 0 , |∂ q W | C σ ) |g| υ σ,λ + |z| υ σ,λ . It remains to estimate | (∇û) W | υ σ,λ to conclude the proof of this lemma. This is a consequence of the previous estimates and (8.24) (L) | (∇û) W | υ σ,λ = | m0 v -z + ∂ q W û| υ σ,λ ≤ | m0 v -z| υ σ,λ + C(σ)|∂ q W | C σ |û| υ σ,λ ≤ C(σ, λ, Υ, |∂ q W | C 0 , |∂ q W | C σ ) As one can expect, it is well defined, and by the regularity of F, we deduce that L is continuous and differentiable with respect to y = (u, v) with differential D y L continuous. The proof is reduced to find a fixed point of the latter. In what follows, we will widely use Proposition A.2 (see Appendix A). Then, we recall it. Let D be equal to T n or T n × B. Proposition (Proposition A.2). We consider f , g ∈ C σ (D) and σ ≥ 0. 1. For all β ∈ N n , if |β| + s = σ then ∂ |β| ∂x 1 β 1 ...∂xn βn f C s ≤ |f | C σ . 2. |f g| C σ ≤ C(σ) (|f | C 0 |g| C σ + |f | C σ |g| C 0 ). Now we consider composite functions. Let z be defined on D 1 ⊂ R n and takes its values on D 2 ⊂ R n where f is defined. If σ < 1, f ∈ C 1 (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 3. |f • z| C σ ≤ C(|f | C 1 |z| C σ + |f | C 0 ). If σ < 1, f ∈ C σ (D 2 ), z ∈ C 1 (D 1 ) then f • z ∈ C σ (D 1 ) Proof of Theorem G In what follows, we estimate the right-hand side of the latter. Then, by Lemma 8.4, we conclude the proof. We point out that y * = (u * , v * ) ∈ Y and for all (q, t) ∈ T n × J υ , we let ũ * (q, t) = (q + u * (q, t), t), φ * (q, t) = (q + u * (q, t), v * (q, t), t). 
Thanks to (8.22), we can rewrite the right-hand side of (8.27) in the following form ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y ∂ q W v + (∇v) W -D (u,v) F 2 (x, m, W, y * )y (see Section 8.3.4), moreover ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y = ( m0 -m • φ * ) v -(∂ q b • ũ * ) u + (∂ q W -∂ q W • (id + u * )) u -v T * (∂ q m • φ * ) u -v T * (∂ p m • φ * ) v ∂ q W v + (∇v) W -D (u,v) F 2 (x, m, W, y * )y = (∂ q W -∂ q W • (id + u * )) v -∂ 2 q a • ũ * u -v T * ∂ 2 q W • (id + u * ) + ∂ 2 q b • ũ * u -(v T * ) 2 ∂ 2 q m • φ * u -(∂ q b • ũ * ) v -(v T * ) 2 ∂ 2 pq m • φ * v -2v T * (∂ q m • φ * ) v. Now, thanks to property 2. of Proposition A.2, ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y t C σ ≤ C(σ) mt 0 -m • φ * t C σ v t C σ + (∂ q b • ũ * ) t C σ u t C σ + | (∂ q W -∂ q W • (id + u * )) t | C σ |u t | C σ + |v t * | C σ (∂ q m • φ * ) t C σ |u t | C σ + |v t * | C σ (∂ p m • φ * ) t C σ |v t | C σ for all t ∈ J υ . We have to estimate each term on the right-hand side of the latter. We begin with the third because it is the only term that does not appear in the proof of Lemma 3.5. | (∂ q W -∂ q W • (id + u * )) t | C σ |u t | C σ ≤ C(σ)|∂ 2 q W • (id + τ u * ) t u t * | C σ |u t | C σ ≤ C(σ)|∂ q W | C σ+1 |y * |e -λt 1 + 1 + |∂ q u t * | C 0 σ + |∂ q u t * | C σ-1 |y|e -λt ≤ C(σ)|∂ q W | C σ+1 e -λυ |y|e -λt for all t ∈ J υ . The first line of the latter is due to the mean value theorem for a suitable τ ∈ [0, 1]. In the second line, we use properties 2. and 5. of Proposition A.2. The last line is due to |y * | ≤ 1. Similarly to the previous case and to the estimates in Lemma 3.5, thanks to the mean value theorem, properties 2. and 5. for all t ∈ J υ . Therefore, for υ large enough, the above estimates imply ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y t C σ ≤ 1 4 C |y|e -λt for all t ∈ J υ . We point out that C is the constant introduced in Lemma 8.4. Multiplying both sides of the latter by e λt and taking the sup for all t ∈ J υ , we obtain ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y υ σ,λ ≤ 1 4 C |y|. Similarly to the previous case, for υ large enough, we have We proved that L(x, m, m, •) is a contraction of a compact subset of Y. Then, there exists a unique fixed point y ∈ Y with |y| ≤ 1. ∂ q W v + ( More specifically, there exists (u, v) ∈ U × V such that, for all (q, t) ∈ T n × J υ ϕ t (q) = (q + u(q, t), v(q, t)) is a C σ -asymptotic torus associated to (X H , X h, ϕ 0 , W ). We conclude the proof by verifying that ϕ t is a Lagrangian C σ -asymptotic torus. Proof of Corollary C Lemma 8.6. ϕ t 0 is Lagrangian for all t 0 ∈ J υ . Proof. Let α = dp ∧ dq be the standard symplectic form associated to (q, p) ∈ T n × B. Similarly to Proposition 3.2, by (8.5) (ϕ t 0 ) * α = (ψ t 0 +t t 0 ,W ) * (ϕ t 0 +t ) * α for all fixed t 0 ∈ J υ and t ≥ 0. We want to prove that, for all q ∈ T n × R m , ((ϕ t 0 ) * α) q = 0. The proof rests on the same ideas of Proposition 3.2 and Lemma 6.7. We observe that, for all q ∈ T n , we can rewrite the right-hand side of the latter as follows (ψ t 0 +t t 0 ,W ) * (ϕ t 0 +t ) * α q = 1≤i<j≤n 1≤k<d≤n α t i,j,k,d (q)dq k ∧ dq d where α t i,j,k,d (q) = ∂ q i v t 0 +t • ∂ q j id + u t -∂ q j v t 0 +t • ∂ q i id + u t • ψ t 0 +t t 0 ,ω+Γ (q) × ∂ q k ψ t 0 +t t 0 ,W,i (q)∂ q d ψ t 0 +t t 0 ,W,j (q) -∂ q d ψ t 0 +t t 0 ,W,i (q)∂ q k ψ t 0 +t t 0 ,W,j (q) . and × stands for the usual multiplication in R. 
Then, for fixed 1 ≤ i < j ≤ n, 1 ≤ k < d ≤ n, by Lemma 8.1 α t i,j,k,d C 0 ≤ ∂ q i v t 0 +t • ∂ q j Id + u t 0 +t -∂ q j v t 0 +t • ∂ q i Id + u t 0 +t • ψ t 0 +t t 0 ,W (q) C 0 × ∂ q k ψ t 0 +t t 0 ,W,i ∂ q d ψ t 0 +t t 0 ,W,j -∂ q d ψ t 0 +t t 0 ,W,i ∂ q k ψ t 0 +t t 0 ,W,j C 0 ≤ ∂ q i v t 0 +t • ∂ q j Id + u t 0 +t -∂ q j v t 0 +t • ∂ q i Id + u t 0 +t C 0 × ∂ q k ψ t 0 +t t 0 ,W,i C 0 ∂ q d ψ t 0 +t t 0 ,W,j C 0 + ∂ q d ψ t 0 +t t 0 ,W,i C 0 ∂ q k ψ t 0 +t t 0 ,W,j C 0 = ∂ q i v t 0 +t j + ∂ q i v t 0 +t • ∂ q j u t 0 +t -∂ q j v t 0 +t i -∂ q j v t 0 +t • ∂ q i u t 0 +t C 0 × ∂ q k ψ t 0 +t t 0 ,W,i C 0 ∂ q d ψ t 0 +t t 0 ,W,j C 0 + ∂ q d ψ t 0 +t t 0 ,W,i C 0 ∂ q k ψ t 0 +t t 0 ,W,j C 0 ≤ C ∂ q v t 0 +t C 0 1 + ∂ q u t 0 +t C 0 ∂ q ψ t 0 +t t 0 ,W 2 C 0 ≤ C v t 0 +t C 1 1 + u t 0 +t C 1 ∂ q ψ t 0 +t t 0 ,W 2 C 0 ≤ Ce -λ(t 0 +t) e 2c 1 |∂qW | C 0 t for a suitable constant C ≥ 1. Thanks to (# G ), taking the limit for t → +∞ on both sides of the latter, the term in the last line converges to zero. This concludes the proof of this lemma. Proof of Corollary C The proof is quite similar to that of Theorem G and Corollary A. We are looking for a C σ -asymptotic torus ψ t associated to (Z, W, Id, W ). More specifically, for given Z, we are searching for υ ≥ 0 sufficiently large and a suitable function u : T n × J υ → R n such that ψ(q, t) = q + u(q, t) In what follows, we have some properties of these norms that we widely use in this thesis. First, we recall that C(•) stands for constants depending on n and other parameters into brackets. A Hölder classes of functions Proposition A.1. For all f ∈ C σ 1 (R n ), then |f | σ 1 -σ 0 C σ ≤ C(σ 1 )|f | σ 1 -σ C σ 0 |f | σ-σ 0 C σ 1 for all 0 ≤ σ 0 ≤ σ ≤ σ 1 . Proof. We refer to [START_REF] Hörmander | The boundary problems of physical geodesy[END_REF] for the proof. This is a fundamental property that plays a substantial role in this thesis. Furthermore, we have the following Proposition. Proposition A.2. We consider f , g ∈ C σ (D) and σ ≥ 0. 1. For all β ∈ N n , if |β| + s = σ then ∂ |β| ∂x 1 β 1 ...∂xn βn f C s ≤ |f | C σ . 2. |f g| C σ ≤ C(σ) (|f | C 0 |g| C σ + |f | C σ |g| C 0 ). Now we consider composite functions. Let z be defined on D 1 ⊂ R n and takes its values on D 2 ⊂ R n where f is defined. If σ < 1, f ∈ C 1 (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 3. |f • z| C σ ≤ C(|f | C 1 |z| C σ + |f | C 0 ). If σ < 1, f ∈ C σ (D 2 ), z ∈ C 1 (D 1 ) then f • z ∈ C σ (D 1 ) 4. |f • z| C σ ≤ C(|f | C σ |∇z| σ C 0 + |f | C 0 ). If σ ≥ 1 and f ∈ C σ (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 5. |f • z| C σ ≤ C(σ) |f | C σ |∇z| σ C 0 + |f | C 1 |∇z| C σ-1 + |f | C 0 . Proof. The proofs of the properties contained in this proposition are similar to those in [START_REF] Hörmander | The boundary problems of physical geodesy[END_REF]. The first is obvious. For the second, we refer to [START_REF] Hörmander | The boundary problems of physical geodesy[END_REF]. Properties 3. and 4. are quite trivial. We prove the last property. By (A.1), |f • z| C σ ≤ |f | C 0 + |(∇f • z) T ∇z| C σ-1 , (A.2) where T stands for the transpose and (∇f •z) T ∇z is the vector having i component equal to (∇f • z) T ∇z i = ∇f • z • ∂ x i z. Thanks to the property 2. |f • z| C σ ≤ |f | C 0 + |(∇f • z) T ∇z| C σ-1 ≤ |f | C 0 + C(σ)|∇f • z| C σ-1 |∇z| C 0 + C(σ)|∇f • z| C 0 |∇z| C σ-1 . The last term of the latter is bounded by |f | C 1 |∇z| C σ-1 , it remains to estimate |∇f • z| C σ-1 |∇z| C 0 . If σ ≤ 2, |∇f • z| C σ-1 ≤ |f | C σ |∇z| σ-1 C 0 + |f | C 1 thanks to 4.. 
Then |∇f • z| C σ-1 |∇z| C 0 ≤ C(σ) (|f | C σ |∇z| σ C 0 + |f | C 1 |∇z| C 0 ) ≤ C(σ) (|f | C σ |∇z| σ C 0 + |f | C 1 |∇z| C σ-1 ) , whence the property holds in this case. If σ > 2, assuming that 5. is already proven for σ -1, we find |∇f •z| C σ-1 |∇z| C 0 ≤ C(σ) (|∇f | C σ-1 |∇z| σ C 0 + |f | C 2 |∇z| C σ-2 |∇z| C 0 + |f | C 1 |∇z| C 0 ) . It remains to find a good estimate for the central term. By Proposition A.1 |f | C 2 |∇z| C σ-2 |∇z| C 0 ≤ C(σ) |f | σ-2 σ-1 C 1 |f | 1 σ-1 C σ |∇z| 1 σ-1 C 0 |∇z| σ-2 σ-1 C σ-1 |∇z| C 0 ≤ C(σ) (|f | C 1 |∇z| C σ-1 ) σ-2 σ-1 (|f | C σ |∇z| σ C 0 ) 1 σ-1 , since a λ b 1-λ ≤ C(a + b) for 0 < λ < 1, we have the claim. B Real analytic classes of functions This section will collect some well-known facts about real analytic functions. For some s > 0, we begin with the introduction of complex domains C Banach spaces Here, we prove that the normed spaces, introduced in the third chapter of this thesis, are Banach spaces. Similarly, we have the claim for those defined in the other chapters of this work. First, however, for clarity, let us remind some definitions. For a positive parameter υ ≥ 0, we define a real interval J υ = [υ, +∞) ⊂ R. For every function f defined on T n × J υ and for fixed t ∈ J υ , we let f t be the function defined on T n in such a way that f t (q) = f (q, t). Given σ ≥ 0, we have the following definition Definition. Let S υ σ be the space of functions f defined on T n × J υ such that f ∈ C(T n × J υ ) and for all t ∈ J υ , f t ∈ C σ (T n ). For all f ∈ S υ σ and a positive real function u(t) defined on J υ , we recall the definition of the following norm We conclude this first part with the following subset of S υ σ . Definition. Given σ, υ ≥ 0 and an integer k ≥ 0, we let Sυ σ,k be the space of functions f such that f ∈ S υ σ+k and ∂ i (q,p) f ∈ S υ σ+k-i for all 0 ≤ i ≤ k. C Banach spaces We prove that lim We have to verify that ∇w(q, t)Ω = f (q, t) for all (q, t) ∈ T n × J υ . Let us denote z = (q, t) and we remind that Ω = (ω, 1). We will prove that for all ε > 0 there exists δ > 0 such that w(z + τ Ω) -w(z) τ -f (z) < ε D Planar two body problem plus comet We introduce the following symplectic change of variable φ 0 defined by X 0 = σ 0 x 0 + σ 1 x 1 X 1 = x 0 -x 1 Y 0 = y 0 + y 1 Y 1 = σ 1 y 0 -σ 0 y 1 where 1/σ 0 = 1 + m 1 /m 0 and 1/σ 1 = 1 + m 0 /m 1 . It is symplectic because dY 1 ∧ dX 1 + dY 0 ∧ dX 0 = d (σ 1 y 0 -σ 0 y 1 ) ∧ d (x 0 -x 1 ) + d (y 0 + y 1 ) ∧ d (σ 0 x 0 + σ 1 x 1 ) = σ 1 dy 0 ∧ dx 0 -σ 1 dy 0 ∧ dx 1 -σ 0 dy 1 ∧ dx 0 + σ 0 dy 1 ∧ dx 1 + σ 0 dy 0 ∧ dx 0 + σ 1 dy 0 ∧ dx 1 + σ 0 dy 1 ∧ dx 0 + σ 1 dy 1 ∧ dx 1 = dy 0 ∧ dx 0 + dy 1 ∧ dx 1 . In the new coordinates, the Hamiltonian of the planar two-body problem is equal to H 0 • φ 0 (X, Y ) = |Y 0 | 2 2M + |Y 1 | 2 2µ - µM |X 1 | with M = m 0 + m 1 and 1/µ = 1/m 0 + 1/m 1 . Now, we introduce polar coordinates X 1 = x 0 -x 1 = r(cos ϕ, sin ϕ). The variables (r, ϕ) do not depend on the center of mass X 0 . This suggests the introduction of the following change of coordinates Pol : R + * × T × R 2 -→ R 2 * × R 2 , Pol(r, ϕ, X 0 ) = (r cos ϕ, r sin ϕ, X 0 ). Letting (X 0 , Y 0 ) = (ξ, η), in order to get a symplectic transformation, one is led to the symplectic map There are many references concerning the two-body problem and these kinds of symplectic transformations. We propose the work of Féjoz [START_REF]On action-angle coordinates and the Poincaré coordinates[END_REF]. 
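The symplecticity of the linear change of variables φ_0 introduced at the beginning of this appendix can also be checked numerically. In the sketch below (the masses are illustrative values only), the constant Jacobian of φ_0 is assembled blockwise for two planar bodies and the identity MᵀΩM = Ω is verified, which is equivalent to dY_1 ∧ dX_1 + dY_0 ∧ dX_0 = dy_0 ∧ dx_0 + dy_1 ∧ dx_1 and rests on σ_0 + σ_1 = 1.

```python
import numpy as np

# Illustrative masses; sigma_0 = m0/(m0+m1), sigma_1 = m1/(m0+m1), so sigma_0 + sigma_1 = 1.
m0, m1 = 1.0, 3.0e-3
s0 = 1.0/(1.0 + m1/m0)
s1 = 1.0/(1.0 + m0/m1)

I2, Z4 = np.eye(2), np.zeros((4, 4))
# Jacobian of the linear map phi_0 in the variables (x0, x1, y0, y1), each block in R^2:
#   X0 = s0*x0 + s1*x1,  X1 = x0 - x1,  Y0 = y0 + y1,  Y1 = s1*y0 - s0*y1.
A = np.block([[s0*I2, s1*I2], [I2, -I2]])        # action on the positions
B = np.block([[I2,    I2   ], [s1*I2, -s0*I2]])  # action on the momenta
M = np.block([[A, Z4], [Z4, B]])

# Standard symplectic form, positions first and momenta last.
Om = np.block([[Z4, np.eye(4)], [-np.eye(4), Z4]])

# phi_0 is symplectic iff M^T Om M = Om, i.e. the 2-form computation above.
print(np.allclose(M.T @ Om @ M, Om))             # expected: True
```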
In the new variables H 0 • φ 0 • φ pc (r, R, ϕ, Φ, ξ, η) = |η| 2 2M + R 2 2µ + Φ 2 2µr 2 - µM r V ef f (r;Φ) . (D.1) The variables ϕ and ξ are cyclic (this means that ∂ ϕ (H 0 • φ 0 • φ pc ) = 0 and ∂ x (H 0 • φ 0 • φ pc ) = 0, so that Φ = cst, η = cst). Then, H 0 • φ 0 • φ pc is one degree-of-freedom and thus integrable. Now, we let H0 (r, R, ϕ, Φ) = H 0 • φ 0 • φ pc (r, R, ϕ, Φ, ξ, η) -|η| 2 2M . Drawing from the work of Chenciner [START_REF] Chenciner | Intégration du problème de kepler par la méthode de hamilton-jacobi: coordonnées action-angle de delaunay[END_REF] or that of Celletti-Chierchia [START_REF] Celletti | KAM stability and celestial mechanics[END_REF] about the variables of Delaunay, there exists a symplectic change of coordinates φ D defined on (l, L, g, G) ∈ (T × R + * ) 2 | 0 < G < L où h est proche de H 0 et le jet infini de R est nul sur T n × D γ . Danc ce travail, D γ est un sous-ensemble approprié de B tel que Leb(B\D γ ) ≤ Cγ 2 , pour une certaine constante C. En faisant référence à cet article de Pöschel, dans le Théorème D, nous prenons D = D γ et µ = Cγ 2 . ) By Lemma 3.1, the unique solution v for the last equation of the latter system exists and satisfies |v| υ σ,ā ≤ |g| υ σ,a . Moreover, by | (∇v) Ω| υ σ,a = |g| υ σ,a , we have the second estimate in (3.16) |v| = max{|v| υ σ,ā , | (∇v) Ω| υ σ,a } ≤ |g| υ σ,a . (3.18) Let x = (a, b), where a and b are those defined by ( * * A ). Obviously (a, b) ∈ A × B and |∂ q a| υ σ+1,a ≤ 1, |b| υ σ+2,b ≤ 1. (3.19) We introduce the Banach space (Y, |•|) where Y = U ×V and for all y = (u, v) ∈ Y, |y| = max{|u|, |v|}. Let m, m ∈ M be as in ( * * A ) and we consider 3.5). Let a and b be the functions introduced by ( * * B ). It is straightforward to verify that (a, b) ∈ A × B and |∂ q a| υ s,a ≤ 1, |b| υ s,b ≤ 1. (4.9) We introduce the Banach space (Y, | • |), such that Y = U × V and for all y = (u, v) ∈ Y, |y| = max{|u|, |v|}. Following the lines of the differentiable case (Section (3.3.5)), we fix m, m ∈ M as in ( * * B ), we let x = (a, b) and we introduce the following functional L(x, m, m, •) : Y -→ Y in such a way that 5. 3 4 × R × B 3 4 . 344 Proof of Theorem C assuming Theorem 5.1 We consider the following family of Hamiltonians h(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I + m(θ, I, t; p 0 ) • I 2for all (θ, I, t; p 0 ) ∈ T n × B 1 For each fixed p 0 ∈ B3 4 .13) 5.4 Proof of Theorem 5.1 4, ψ 0 0 (θ, p 0 ) = (θ, 0). Similarly, for negative times. It remains to verify the estimates (5.8). Let us state a quantitative version of the implicit function theorem. 5.4 Proof of Theorem 5.1 Theorem 5.2 (Implicit Function Theorem). Let (X , | • |), (Y, | • |) and (Z, | • |) be Banach spaces. For some (x 0 , y 0 ) ∈ X ×Y and ε, µ > 0, we introduce the following spaces X 0 = {x ∈ X : |x -x 0 | ≤ ε}, Y 0 = {y ∈ Y : |y -y 0 | ≤ µ}. • g ∈ D σ and d. |f •g| σ,l+m,L(A) ≤ C(σ) |f | σ,l,L(A) |g| σ 1,m,L(A) + |f | 1,l,L(A) |g| σ,m,L(A) + |f | 0,l+m,L(A) . The previous properties are still verified when l = m = 0 or only one of the two parameters l and m is zero. Proof. The proof is quite similar to that of Proposition 5.2. The first two properties a. and b. are obvious. Hence, we prove c. and d. c. Similarly to Proposition 5.2, one has sup (p,t)∈A×R , I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I + a(θ, t; p 0 ) + b(θ, t; p 0 ) • I + m(θ, I, t; p 0 ) • I 2 . As in the proof of Theorem C, for all (θ, I, t; p 0 ) ∈ T n × B δ × R × D , let h be the following family of Hamiltonians h(θ, I, t; p 0 ) = e(p 0 ) + ω(p 0 ) • I + m(θ, I, t; p 0 ) • I 2 . 
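For the family h(θ, I, t; p_0) = e(p_0) + ω(p_0)·I + m(θ, I, t; p_0)·I² just written, the torus {I = 0} is invariant for X_h and carries the linear flow of frequency ω(p_0): indeed ∂_I h(θ, 0, t) = ω(p_0) and ∂_θ h(θ, 0, t) = 0, since the θ-dependence only enters through the term that is quadratic in I. The short sketch below (with an arbitrary two-dimensional choice of ω and of the symmetric matrix m, used purely for illustration) checks this by evaluating the Hamiltonian vector field at I = 0.

```python
import numpy as np

# Toy data for n = 2 (illustrative; not the thesis' Hamiltonian).
omega = np.array([0.618, 1.0])
e0    = 0.3
def m(theta, I, t):                       # a theta/t-dependent symmetric 2x2 matrix
    c = np.cos(theta[0] + theta[1] - 0.1*t)
    return np.array([[1.0 + 0.2*c, 0.1*c], [0.1*c, 0.8]])

def h(theta, I, t):
    return e0 + omega @ I + I @ m(theta, I, t) @ I

def grad(f, x, h_=1e-6):
    g = np.zeros_like(x)
    for k in range(x.size):
        dx = np.zeros_like(x); dx[k] = h_
        g[k] = (f(x + dx) - f(x - dx)) / (2*h_)
    return g

theta, I, t = np.array([0.4, 2.1]), np.zeros(2), 3.0
theta_dot = grad(lambda J: h(theta, J, t), I)       #  dh/dI at I = 0
I_dot     = -grad(lambda th: h(th, I, t), theta)    # -dh/dtheta at I = 0
print(np.allclose(theta_dot, omega, atol=1e-8), np.allclose(I_dot, 0.0, atol=1e-8))
# expected: True True, i.e. {I = 0} is invariant with frequency vector omega.
```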
( HE D ) Lemma 5.5 (Homological Equation). There exists a unique solution κ ∈ D + σ of (HE D ) such that, for all fixed p 0 ∈ D, lim t→+∞ |κ t p 0 | C 0 = 0. such a way that F(a, b, m, m, v) = 0 and lim t→+∞ |v t | C ρ = 0. Once we have v, we let Γ = b + ( m • φ) v and this concludes the proof. 7. 2 2 Proof of Theorem F Due to the Weinstein Lagrangian neighbourhood theorem (see e.g. McDuff-Salamon [MS17]), there exists a neighbourhood N (T ) of T and a symplectomorphism φ F : T 4 × B 4 -→ N (T ) such that ϕ(T 4 × {0}) = T . e xτ (τ -t)dτ = e xt x 2 , +∞ t e xτ (τ -t) 2 dτ = -2 e xt x 3 ( 1 + 1 |∂ q W | C σ-1 (τ -t)) e (cσ+c R 0 )|∂qW| C 0 (τ -t) e -λτ dτ = C(σ)|z| υ σ,λ +∞ t e (cσ+c R 0 )|∂qW| C 0 (τ -t) e -λτ dτ + C(σ)|z| υ σ,λ |∂ q W | C σ-1 +∞ t e (cσ+c R 0 )|∂qW| C 0 (τ -t) e -λτ (τ -t)dτ = C(σ) |z| υ σ,λ λ -(c σ + c R 0 ) |∂ q W | C 0 e λt + C(σ) |z| υ σ,λ |∂ q W | C σ-1 (λ -(c σ + c R 0 ) |∂ q W | C 0 ) 2 e λt1538 Asymptotic Motions for Time Dependent Hamitlonians+∞ t | Rt τ | C σ |∂ q φ τ -t W | σ C 0 |z τ | C σ dτ ≤ C(σ)|z| υ σ,λ +∞ t |∂ q W | C σ (τ -t)) e (c R σ +c 1 σ)|∂qW | C 0 (τ -t) e -λτ dτ + C(σ)|z| υ σ,λ +∞ t |∂ q W | C 1 |∂ q W | C σ-1 (τ -t) 2 e (c R σ +c 1 σ)|∂qW | C 0 (τ -t) e -λτ dτ = C(σ) |z| υ σ,λ λ -(c R σ + c1 σ) |∂ q W | C 0 e λt + C(σ) +cσ)|∂qW | C 0 (τ -t) e -λτ dτ ≤ C(σ)|z| υ σ,λ +∞ t e (c R 1 +cσ)|∂qW | C 0 (τ -t) e -λτ dτ + C(σ)|z| υ σ,λ |∂ q W | C 1 +∞ t (τ -t)e (c R 1 +cσ)|∂qW | C 0 (τ -t) e -λτ dτ + C(σ)|z| υ σ,λ |∂ q W | C σ-1 +∞ t (τ -t)e (c R 1 +cσ)|∂qW | C 0 (τ -t) e -λτ dτ + C(σ)|z| υ σ,λ |∂ q W | C 1 |∂ q W | C σ-1 +∞ t (τ -t) 2 e (c R 1 +cσ)|∂qW | C 0 (τ -t) e -λτ dτ = C(σ) |z| υ σ,λ λ -(c R 1 + c σ ) |∂ q W | C 0 e λt + C(σ) |z| υ σ,λ (|∂ q | C 0 |z τ | C σ dτ ≤ C|z| υ σ,λ +∞ t e c R 0 |∂qW | C 0 (τ -t) e -λτ dτ = C(σ) |z| υ σ,λ λ -c R 0 |∂ q W | C 0 e λt . Now, we remind that c κ σ = max{c σ + c R 0 , c R σ + c R 0 , cR 1 + c σ }.Hence, thanks to the 8.3 Proof of Theorem G latter |κ t | C σ e λt ≤ C(σ) 1 8.3.4 Regularity of FWe begin this section by reminding the definition of the functional F,F : A × B × M × M × W × U × V -→ Z × G F(a, b, m, m, W, u, v) = (F 1 (b, m, u, v), F 2 (a, b, m, W, u, v)) with F 1 (b, m, W, u, v) = W • (id + u) -W + b • ũ + ( m • φ) v - a suitable constant C depending on σ, Υ, λ, |∂ q W | C 0 and |∂ q W | C σ |û| ≤ C |g| υ σ,λ + |z| υ σ,λ , |v| ≤ C|g| υ σ,λwhere, we recall that|û| = max{|û| υ σ,λ , | (∇û) W | υ σ,λ } and |v| = max{|v| υ σ,λ , | (∇v) W | υ σ,λ }. Proof.The proof of this lemma relies on Lemma 8.3. Thanks to (8.22), we can rewrite equation (8.23) in terms of the following system in the unknown (û, v)(∇û) W -∂ q W û = m0 v -z (∇v) W + ∂ q W v = g. (8.24)These equations are decoupled, and hence we can study them separately. We begin by solving the last one. Then, we replace the found solution v in the first equation, which now can be solved, and we conclude the proof of this lemma. By Lemma 8.3, a solution v of the second equation of the above system exists and satisfies|v| υ σ,λ ≤ C(σ, λ, |∂ q W | C 0 , |∂ q W | C σ )|g| υ σ,λ .Moreover, thanks to Proposition 8.1, (8.24) and the latter| (∇v) W | υ σ,λ = |g -∂ q W v| υ σ,λ ≤ |g| υ σ,λ + C(σ)|∂ q W | C σ |v| υ σ,λ ≤ C(σ, λ, |∂ q W | C 0 , |∂ q W | C σ )|g| υ σ,λ .8.3 Proof of Theorem GAs a consequence of the previous estimates, we obtain|v| = max{|v| υ σ,λ , | (∇v) W | υ σ,λ } ≤ C(σ, λ, |∂ q W | C 0 , |∂ q W | C σ )|g| υ σ,λ ,which proves the first estimate of this lemma. Now, we can solve the first equation of (8.24) where v is known. 
Thanks to Proposition 8.1 and the previous estimate| m0 v -z| υ σ,λ ≤ C(σ)Υ|v| υ σ,λ + |z| υ σ,λ ≤ C(σ, λ, |∂ q W | C 0 , |∂ q W | C σ )Υ|g| υ σ,λ + |z| υ σ,λ ≤ C(σ, λ, Υ, |∂ q W | C 0 , |∂ q W | C σ ) |g| υ σ,λ + |z| υ σ,λ . |g| υ σ,λ + |z| υ σ,λ and thus |û| = max{|û| υ σ,λ , | (∇û) W | υ σ,λ } ≤ C(σ, λ, Υ, |∂ qW | C 0 , |∂ q W | C σ ) |g| υ σ,λ + |z| υ σ,λ .8.3.5 C σ -Asymptotic TorusFollowing Section 3.3.5, this part is devoted to proving the existence of a C σasymptotic torus associated to (X H , X h, ϕ 0 , W ). To this end, we fix x = (a, b), where a and b are those defined by ( * * G ). It is straightforward to verify that (a, b) ∈ A × B. Moreover, we define the following Banach space (Y, | • |) such that Y = U × V and, for all y = (u, v) ∈ Y, |y| = max{|u|, |v|}. Let m, m ∈ M and W ∈ W be as in ( * * A ), we rewrite F in the following form F(x, m, m, W, y) = D (u,v) F(0, 0, m, m, W, 0, 0)y + R(x, m, m, y). (8.25) For fixed x, m, m and W , the purpose of this section is to find y ∈ Y such that F(x, m, m, W, y) = 0. Let η(m, m, W ) be the right inverse of D (u,v) F(0, 0, m, m, W, 0, 0) whose existence is guaranteed by Lemma 8.4. Therefore, we are looking for y ∈ Y in such a way that y = y -η(m, m, W )F(x, m, m, W, y). 8 Asymptotic Motions for Time Dependent Hamitlonians To this end, we define the following functional L(x, m, m, W, •) : Y -→ Y where L(x, m, m, W, y) = y -η(m, m, W )F(x, m, m, W, y). 4 . |f • z| C σ ≤ C(|f | C σ |∇z| σ C 0 + |f | C 0 ). If σ ≥ 1 and f ∈ C σ (D 2 ), z ∈ C σ (D 1 ) then f • z ∈ C σ (D 1 ) 5. |f • z| C σ ≤ C(σ) |f | C σ |∇z| σ C 0 + |f | C 1 |∇z| C σ-1 + |f | C 0 .The following lemma is the main tool to conclude the proof of Theorem G. Lemma 8.5. There exists υ large enough with respect to n, σ, λ, |∂ q W | C σ+1 and Υ, such that, for all y * ,y ∈ Y with |y * | ≤ 1, |D y L(x, m, m, W, y * )y| ≤ The proof of this lemma is similar to that of Lemma 3.5 with some modifications due to the different homological equations. As one can expect, the proof rests on Lemma 8.4. By (L), for all y, y * ∈ YD y L(x, m, m, W, y * )y = Id -η(m, m, W )D (u,v) F(x, m, m, W, y * ) y.We can reformulate the problem in terms of estimating the solution ŷ = (û, v) ∈ Y of the following system D (u,v) F(0, 0, m, m, W, 0, 0)ŷ (8.27) = D (u,v) F(0, 0, m, m, W, 0, 0) -D (u,v) F(x, m, m, W, y * ) y. ∇v) W -D (u,v) F 2 (x, m, W, y * )y This concludes the proof of this lemma. Thanks to Lemma 8.4, a solution ŷ ∈ Y of (8.27) exists satisfying|û| ≤ C ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y υ σ,λ + ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y ∂ q W u -(∇u) W + m0 v -D (u,v) F 1 (b, m, W, y * )y |D y L(x, m, m, W, y * )y| ≤ 1 2 |y|. This part is dedicated to a very brief introduction to Hölder classes of functions C σ . Let D be equal to T n × B, T n or an open subset of R n . For integers k ≥ 0, we denote by C k (D) the spaces of functions f :D → R with continuous partial derivatives ∂ α f ∈ C 0 (D) for all α ∈ N n with |α| = α 1 + ... + α n ≤ k. We define the norm |f | C k = sup |α|≤k |∂ α f | C 0 ,where|∂ α f | C 0 = sup x∈D |∂ α f (x)| denotes the sup norm. For σ = k + µ, with k ∈ Z, k ≥ 0 and 0 < µ < 1, the Hölder spaces C σ (D) are the spaces of functions f ∈ C k (D) such that |f | C σ < ∞, where |f | C σ = sup |α|≤k |∂ α f | C 0 + sup |α|=k |∂ α f (x) -∂ α f (y)| |x -y| µ . (A.1)In the case of functions f = (f 1 , ..., f n ) with values in R n , we set |f | C σ = max 1≤i≤n |f i | C σ . 
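As a crude illustration of definition (A.1), the following sketch computes a grid approximation of |f|_{C^σ} for a smooth periodic function of one variable: the supremum norms of the derivatives of order at most k, obtained by spectral differentiation, plus the Hölder seminorm of the k-th derivative evaluated over grid pairs. This is only a numerical stand-in for the actual norm; the sample function, the grid size and the use of the circle distance are arbitrary choices made for the illustration.

```python
import numpy as np

def holder_norm(f_vals, sigma, L=2*np.pi):
    """Grid approximation of |f|_{C^sigma} for a periodic sample f_vals on [0, L)."""
    k, mu = int(np.floor(sigma)), sigma - np.floor(sigma)
    N = f_vals.size
    x = np.arange(N) * L / N
    freq = np.fft.fftfreq(N, d=L/N) * 2*np.pi          # spectral differentiation
    fhat = np.fft.fft(f_vals)
    norm = 0.0
    for j in range(k + 1):
        dj = np.real(np.fft.ifft(((1j*freq)**j) * fhat))
        norm = max(norm, np.max(np.abs(dj)))           # sup norm of the j-th derivative
        if j == k and mu > 0:                          # Hoelder seminorm of the k-th derivative
            dx = np.abs(x[:, None] - x[None, :])
            dx = np.minimum(dx, L - dx)                # distance on the circle
            diff = np.abs(dj[:, None] - dj[None, :])
            with np.errstate(divide="ignore", invalid="ignore"):
                quot = np.where(dx > 0, diff / dx**mu, 0.0)
            norm += np.max(quot)
    return norm

f = lambda x: np.sin(x) + 0.3*np.cos(3*x)
x = np.linspace(0.0, 2*np.pi, 128, endpoint=False)
print(holder_norm(f(x), sigma=1.5))                    # approximates |f|_{C^{1.5}}
```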
Moreover, in agreement with the convention made above, if M = {m ij } 1≤i,j≤n is a n × n matrix, we set |M | C σ = max 1≤i,j≤n |m ij | C σ . {q ∈ C n /Z n : | Im(q)| ≤ s} B s := {p ∈ C n : |p| ≤ s}, with T n = R n /Z n and B ⊂ R n a sufficiently large neighborhood of the origin. Let D be equal to T n × B or T n and we consider a real analytic function in a neighborhood of D f : D → R.Let D s be equal to T n s ×B s or T n s , for a suitable small s. It is known that f extends to a functionf : D s → Cthat is real, holomorphic and bounded. We define the following norm|f | s = sup z∈Ds |f (z)|.In the case of vector valued functionsf = (f 1 , ..., f n ) with values in C n , we set |f | s = max i |f i | s . Moreover, if C = {C ij } 1≤i,j≤n is a n × n matrix, we let |C| s = max ij |C ij | s .We define A s as the space of such functions. The rest of this section is devoted to a series of general well-known properties.Proposition B.1. Let f , g ∈ A s , then the product f g ∈ A s and|f g| s ≤ |f | s |g| s .Let f ∈ A s and 0 ≤ σ ≤ s. Then ∂ x f ∈ A s and we have|∂ x f | s-σ ≤ 1 σ |f | s . Let f ∈ A s , 0 ≤ σ ≤ sand φ ∈ A s-σ such that φ : D s-σ → D s . Then f • φ ∈ A s-σ and |f • φ| s-σ ≤ |f | s . 2 d 2 d→+∞ |g d -g| υ σ+k,b = 0. Let g k d be a subsequence of g d such that |g k d+1 -g k d | υ σ+k,b < 1 for all d ∈ N.We claim that it suffices to prove the above property for g k d . Indeed, we assume that limd→+∞ |g k d -g| υ σ+k,b = 0. Then, for all d ∈ N |g d -g| υ σ+k,b ≤ |g d -g k d | υ σ+k,b + |g k d -g| υ σ+k,b .Therefore, for all ε > 0, there existsD ∈ N such that, |g d -g k d | υ σ+k,b < ε 2 and |g k d -g| υ σ+k,b < ε 2 for all d ≥ D.Because g d is a Cauchy sequence, we have the first inequality. The second follows because we assumed that g k d converges to g in the norm | • | υ σ+k,b . This implies lim d→+∞ |g d -g| υ σ+k,b = 0 and hence the claim. Now, for all fixed t ∈ J υ|g t k d -g t | C σ+k b(t) ≤ +∞ i=d |g t k i -g t k i+1 | C σ+k b(t) ≤ +∞ i=d |g k i -g k i+1 | υ σ+k,b ≤ the sup for all t ∈ J υ ,we obtain |g k d -g| υ σ+k,every ε > 0 there exists D ∈ N such that |g k d -g| υ σ+k,b < ε for all d ≥ D. We prove that |g| υ σ+k,b < ∞. For all d ∈ N we can estimate |g| υ σ+k,b as follows |g| υ σ+k,b ≤ |g d -g| υ σ+k,b + |g d | υ σ+k,b . For d sufficiently large, |g d -g| υ σ+k,b < ∞ because lim d→+∞ |g d -g| υ σ+k,b = 0. Moreover, |g d | υ σ+k,b < ∞ because g d ∈ G. Then (G, | • |) is a Banach space. In the second part of this section we prove that (W, • ) is a Banach space. Let {w d } d≥0 ⊂ W be a Cauchy sequence. Similarly to the previous case, there exist w ∈ S υ σ and f ∈ S υ σ such that lim d→+∞ |w d -w| υ σ, b = 0, lim d→+∞ | (∇w d ) Ω -f | υ σ,b = 0. (C.1) φ pc : R + * × R × T × R × R 4 -→ R 2 * × R 2 × R 4 ,φ pc ((r, R), (ϕ, Φ), (ξ, η)) = ((r cos ϕ, r sin ϕ), (R cos ϕ -Φ r sin ϕ, R sin ϕ + Φ r cos ϕ), (ξ, η)). 2.1 Mouvements Asymptotiques Convergeant vers une Dynamique Quasipériodique suffisamment petit pour tout t ≥ υ . Par conséquent, nous étudions H pour tout t ≥ υ et nous prouvons l'existence d'un tore KAM asymptotique associé à (X H , X h, ϕ 0 ) défini pour tout t ≥ υ . Alors, grâce à la Proposition 1.3, nous pouvons étendre le domaine de définition pour tout t ∈ J 0 . The secular Hamiltonian is H d π , which is Pöschel integrable. It can be split into an integrable part H d int and a resonant part H d res of size O(δ). The infinite jet of H d res vanishes along a suitable Cantor set. d comp is of size O(δ d+1 ) on Πk+d(τ+4) δ . n≥1 ε n with a suitable constant C. 
Taking the sup on φ(T 4 × B 2 t × B 4 × B 2 ) and then for all t ∈ J on the left-hand side of the latter, we obtain sup t≥1 1 |c(t)| ≤ sup t≥1 t |c(t)| and, letting 0 < ε ≤ 1 2 , we can estimate 1 + t≥1 (7.27)for some constants C(k) depending on k and C(k, M, m c , ε, |∂ (x,y) H 0 •φ| C k+2 , |∇φ| C k+2 ) depending on k, M , m c , |∂ (x,y) H 0 • φ| C k+2 and |∇φ| C k+2 . Proof. The first part of this lemma is due to Lemme 7.2 and Lemma 7.7. Con- cerning the estimates, the first two are proved in Lemma 7.7. It remains to prove the last one. First, we claim that for all k ∈ Z with k ≥ 1, the symplectic change of coordinates φ satisfies |∇φ| C k < ∞, (7.28) Proposition (Proposition 6.2). Let J be an interval in R, t 0 ∈ J, and a, b, u ∈ C(J) continuous positive functions. If we assume that 8.3 Proof of Theorem G t u(t) ≤ a(t) + b(s)u(s)ds , ∀t ∈ J t 0 then it follows that u(t) ≤ a(t) + t a(s)b(s)e | t s b(τ )dτ | ds , ∀t ∈ J. (8.9) t 0 If a is a monotone increasing function and we assume that t u(t) ≤ a(t) + b(s)u(s)ds ∀t ≥ t 0 , t 0 then we obtain the estimate u(t) ≤ a(t)e t t 0 b(s)ds , ∀t ≥ t 0 . (8.10) of Proposition A.2 and |y * | ≤ 1, we obtain( m0 -m • φ * ) t C σ |v t | C σ ≤ C(σ) |∂ q mt (id + τ u * , τ v * )u t * | C σ + |∂ p mt (id + τ u * , τ v * )v t * | C σ |v t | C σ ≤ C(σ)Υ |u t * | C σ + |v t * | C σ |v t | C σ ≤ C(σ)Υe -λυ |y|e -λt (∂ q b • ũ * ) t C σ u t C σ ≤ C(σ)|b| υ * σ+2,λ e -λυ |y|e -λt ≤ C(σ)Υe -λt |y|e -λt |v t * | C σ (∂ q m • φ * ) t C σ |u t | C σ ≤ C(σ)|y * |e -λυ Υ|y|e -λt |v t * | C σ (∂ p m • φ * ) t C σ |v t | C σ ≤ C(σ)|y * |e -λυ Υ|y|e -λt Moreover, for any fixed masses, the set of quasiperiodic Lagrangian tori has positive Lebesgue measure. The Three-Body Problem plus Comet Remerciements We begin by proving that, for all (θ, ξ) ∈ T 4 × (B 2 1 /2), ψ t 1, H • ϕ 1 (θ, ξ) ∈ U 1 2 for all t ∈ J, where U 1 2 is defined in (U1 2 ). This is equivalent to show that, for all (θ, ξ) ∈ T 4 × (B 2 1 /2) for all t ∈ J. Let ψ t 1,ω+Γ be the flow at time t with initial time 1 of ω + Γ. We know that (7.30) is equivalent to for all t ∈ J. Now, for all t ∈ J, ϕ t is the identity with respect to (θ, ξ) (see (7.29)). Then, by (7.34) for all t ∈ J. Hence, thanks to the special form of ϕ t for all t ∈ J. Then, by (7.31) for all t ∈ J. This proves the first part of (7.33). Concerning the second part, thanks to (7.29) and (7.34), ξ t 1 (θ, ξ) is the unique solution of the following system ξt 1 (θ, ξ) = Γ 2 (θ t 1 (θ, ξ), ξ t 1 (θ, ξ), t) ξ 1 1 (θ, ξ) = ξ where ξt 1 (θ, ξ) stands for the derivative with respect to t of ξ t 1 (θ, ξ), Γ 2 is defined by (7.30) and ξ ∈ (B 2 1 /2). Now, thanks to (7.31) and the latter for all t ∈ J. It is true for t = 1. If we suppose the existence of t 0 > 1 in such a way that We can rewrite the latter in the following way 8 Asymptotic Motions for Time Dependent Hamitlonians Preliminary Settings Let σ, λ and Υ be the positive parameters introduced by ( * G ). For υ ≥ 0 that we will specify later, we consider the following Banach spaces (A, where we recall that the norm | • | υ σ,λ is defined by (8.6). Let M n be the set of the n-dimensional matrices. We introduce two other Banach spaces (M, where Υ is the positive parameter in ( * G ). Now, we can correctly introduce the previous functional F. Let F be the following functional with Homological Equation We begin this section by reminding Proposition 6.2 concerning some fundamental Gronwall-type inequalities. 
and in addition ψ and u satisfy Z(ψ(q, t), t) -∂ q ψ(q, t)W (q) -∂ t ψ(q, t) = 0, (8.28) lim for all (q, t) ∈ T n × J υ . We introduce a suitable functional F given by (8.28). To this end, we define ψ(q, t) = (q + u(q, t), t), for all (q, t) ∈ T n × J υ . The composition of Z with ψ is equal to for all (q, t) ∈ T n × J υ , moreover for all (q, t) ∈ T n × J υ . Then, we can rewrite (8.28) as follows This is the sum of functions defined for all (q, t) ∈ T n × J υ or q ∈ T n . As usual, we have omitted the arguments (q, t) and q in order to achieve a more elegant form. For the sake of clarity, we recall that, during the proof of Theorem G, we have introduced the following notation Before the introduction of the functional F, letting σ ≥ 1 be the positive parameter defined in Corollary C. For a suitable positive parameter υ ≥ 0 that we will choose large enough in Lemma 8.7, we introduce the following Banach spaces (P, Now, thanks to (8.30) and the previous Banach spaces, we have everything we need to introduce the functional F. Let F be the following functional Proof of Corollary C We observe that for all W ∈ W F(0, W, 0) = 0. Therefore, we can reformulate our problem in the following form. We fix W ∈ W and for P ∈ P sufficiently close to 0, we are looking for u ∈ U in such a way that F(P, W, u) = 0. Concerning the associated linearized problem, the differential of F with respect to the variable u calculated in (0, W, 0) is equal to The functional F is well defined, continuous, differentiable with respect to the coordinate u with D u F(P, W, u) continuous. Moreover, by Lemma 8.3, for all fixed W ∈ W, D u F(0, W, 0) admits a right inverse η(W ). Then, F satisfies the hypotheses of the implicit function theorem. We fix P ∈ P and W ∈ W as in Corollary C and we define the following functional The proof of Corollary C is reduced to find a fixed point of the latter. Similarly to the proof of Lemma 8.5, we have the following lemma Lemma 8.7. There exists υ large enough with respect to n, σ, λ and |∂ q W | C σ , such that, for all u * ,u ∈ U with |u * | ≤ 1, This concludes the proof of Corollary C. Now, let b be a positive, decreasing, integrable function on J υ . We define We may assume b(t) ≤ 1, b(t) ≤ 1 for all t ∈ J υ . For fixed σ ≥ 1 and an integer k ≥ 0, we consider the following spaces (G, | • |) and (W, • ) such that We recall that, for all (q, t) ∈ T n × J υ , ∇w(q, t)Ω = ∂ q w(q, t)ω + ∂ t w(q, t) with ω ∈ R n . We prove that these spaces are complete. We begin with the first. Let {g d } d≥0 ⊂ G be a Cauchy sequence. This means that, for all ε > 0 there exists Then, for all fixed t ∈ J υ , there exists g t ∈ C σ+k such that lim We have to verify that g ∈ G (that is g ∈ Sυ σ,k and |g| υ σ+k,b < ∞) and lim d→+∞ |g d -g| υ σ+k,b = 0. We prove that g ∈ Sυ σ,k . Obviously, for all fixed t ∈ J υ , for all 0 ≤ i ≤ k. Now, for all ε > 0 there exists D ∈ N such that, for all d ≥ D, the first and the last term on the right-hand side of the latter are smaller than ε 3 . This is because, for all fixed t ∈ J υ , g t d converges to g t in the norm C σ+k . Concerning the second term, we know that g d ∈ Sυ σ,k . Hence, by the definition of Sυ σ,k , ∂ i q g d ∈ C(T n × J υ ) for all 0 ≤ i ≤ k. Then, there exists δ > 0 such that if |(q 1 , t 1 ) -(q 2 , t 2 )| < δ also the second term on the right-hand side of the latter is smaller than ε 3 . This proves the claim. for all |τ | < δ. 
Thanks to the triangle inequality By (C.1) there exists D > 0, depending on ε and τ , such that the first and the third terms on the right-hand side of the latter are smaller than ε 3 for all d ≥ D. Now, thanks to Taylor's formula, we can rewrite the second term on the right-hand side of the latter as follows and using the triangle inequality We know that f is continuous, then there exists δ such that for all |τ | < δ the second term on the right-hand side of the latter is smaller than ε 9 . Since the uniform convergence of (∇w d ) Ω there exists D > 0, depending on ε and τ , such that the first and the third terms on the right-hand side of the latter are smaller than ε 9 . This concludes the proof. D Planar two body problem plus comet We consider the Hamiltonian of the planar two-body problem H 0 . We verify that, in this case, we can explicitly construct the symplectic transformation φ of Section 7.2.4. We consider two points of masses m 0 and m 1 undergoing gravitational attraction in the plane. The phase space is the space of linear momentum covectors (y 0 , y 1 ) and position vectors (x 0 , x 1 ) of each body. The Hamiltonian of the planar two-body problem is equal to The angles (l, g) are respectively the mean anomaly and the argument of the perihelion of the fictitious body x 0 -x 1 . Let a be the semi-major axis and e the eccentricity, then The symplectic transformation φ D is generated by where V ef f is the effective potential defined in (D.1) and r -(h(L)) is a root of the equation h(L)-V ef f (ρ; G) = 0. The previous generating function S 0 is independent of η. In particular, the symplectic change of variables φ D is defined by the following conditions Going back to the Hamiltonian It is the generating function of a symplectic transformation φD defined on Moreover, in these new variables, the Hamiltonian takes the following form This concludes this slight digression. ABSTRACT In this thesis, we are interested in time-dependent Hamiltonian systems. More specifically, we study the existence of orbits converging, when t → ±∞, to some quasiperiodic solutions. After the first part dedicated to the introduction, the results of this thesis are divided into four parts. In the first part, we analyze when the above-mentioned orbits exist for time-dependent Hamiltonians converging asymptotically in time t → +∞ to Hamiltonians having an invariant torus supporting quasiperiodic solutions. A second part is devoted to studying the conditions of existence of orbits defined for all times and converging asymptotically in time to some quasiperiodic solutions in the past t → -∞ and in the future t → +∞. The third part is dedicated to the applications in celestial mechanics. We consider the example of a planetary system perturbed by a comet coming from and going back to infinity asymptotically along a hyperbolic keplerian orbit. Here, the Hamiltonian which describes this system does not satisfy good time decay properties. Hence, the results of the previous parts do not apply. Therefore, we are forced to prove another abstract theorem to show the existence of suitable orbits associated with this system. We conclude this thesis by studying time-dependent Hamiltonians converging asymptotically in time t → +∞ to Hamiltonians having an invariant torus supporting arbitrary dynamics. Here, we prove the existence of some orbits converging asymptotically in time t → +∞ to the arbitrary dynamics associated with the above-mentioned invariant torus. 
KEYWORDS: Dynamical systems, KAM tori, time-dependence
04098769
en
[ "info.info-ai" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098769/file/TH2022DANIELCECILE.pdf
Professor Gio- Vanna Di Professor Marzo Serugendo Flavien Balbo Zahia Guessoum David Rey I would also like to thank Professor Eugenio Zimeo and Lorenzo Goglia for their collaboration on the BC work, I'm very glad I was able to spend some time with you in Benevento right before the lock-downs. Un immense merci à Andrès, pour son accompagnement sur la deuxième moitié de ma thèse, sa patience et sa pédagogie. Travailler avec toi aura étét r ès enrichissant et fort sympathique. J'espère que nos longues conversations continueront encore longtemps. Merci aussi àC écile B., qui m'a aidée à dompter SymuPy et qui a patiemment répondu à chacune de mes questions. Je remercie tous les membres du labo, avec qui j'ai passédetrès bons moments, autour de cafés, de gâteaux du lundi, de verres gagnés/perdus en paris. Un merci tout particulier àLo ïc et Elise, mes partenaires de galère du premier instant, nos pauses café, nos footings et nos verres sont des souvenirs précieux. Un autre merci particulier à Anne-Christine et Loic (encore), qui m'ont généreusement hébergée après mon exode post covid et avec qui la coloc est un régal au sens propre comme au figuré. Merci à ma famille, qui a vécu presque autant que moi les émotions de la thèse. Merci à mes parents pour leur soutien indéfectible et qui m'ont donnél eg oût des études, malgré leur insistance parfois un peu oppressante à me faire travailler... Enfin, last but not least, merci à Alma, la PénélopedemonOdyss ée, ton soutien dans les moments les plus durs, ta patience et tes encouragements m'auront portée jusqu'au bout. Promis je ne bouge plus. Abstract The economic and social development of modern cities relies on the efficiency, mobility and resilience of their transportation systems. The latter has thus become a major research challenge involving multiple disciplines, related to urban activities. Old infrastructures and their limited capacity make cities more and more vulnerable to unpredictable events and increasing demand. Congestions are more frequent, as a consequence of the growth of the urban population, vehicle emissions and air pollution create high stress on the infrastructures and increase time waste for travelers.Solutions to improve traffic conditions, in terms of health, security and traffic management are more and more precise, embracing the generalized use of Artificial Intelligence, jointly with Big Data technologies for data collection, storage and computing. Moreover, traffic simulations are now based on various data sources and on more accurate information to better reproduce traffic dynamics and travelers' behaviors. However, analyzing such complex data in a large scale context is still a significant research challenge that requires solutions based on agent-based modelling, distribution and parallelization. Moreover, the characterization and modelling of transport vulnerabilities for improving human mobility is still at early stages of research. To prevent congestion and identify vulnerable locations, i.e. areas or sections where failures would have high cost consequences, two types of vulnerability analyses are most common in the domain of transport: dynamic system-based, and static topological based. They are both studied in this thesis. The first approach is the dynamic system-based representation that simulates travelers, their trips and the infrastructure over a given period of time, supported by the large volume of data now collected. 
The second approach is a topological analysis based on graph theory, and static topological considerations. To reduce vulnerability inside road networks, we propose in this thesis a control strategy that dynamically protect identified areas and recommends new routes to drivers to avoid creation of congestion in such zones. Our strategy relies on a hierarchical cooperative multiagent algorithm. Road infrastructures and vehicles are modeled as agents that dynamically react to traffic conditions. This control strategy enables congestion avoidance and a reduction of the congestion duration. We take into consideration drivers behaviours to find a balance between system performance improvement (system optimum) and individual travel choices (individual optimum), as well as privacy constraints that are now necessary for realistic applications. We prove the robustness of our approach by testing it on different demand scenarios and show that identifying and protecting critical spots of the network improves our strategy. To identify such vulnerable spots, our solution integrates the computation of Betweenness Centrality (BC), a metric usually studied with topological approaches. It is indeed quite unusual to include BC in dynamic congestion avoidance approaches whereas the BC is a popular metric in many domains for critical spot identification in the context of static graph analysis. This is due to the high computation time and the difficulty of computing it on large graphs in a context of real-time applications. This second problem of computation of BC for static vulnerability analysis is addressed in this thesis with a distributed algorithm for the exact and fast computation of BC developed for large graphs. We provide mathematical proofs of our algorithm exactness and show the high scalability of our approach, developed in an optimized framework for parallel computation. Through distributed approaches, we can design a robust solution, based on a combination of control and topological study, to dynamically reduce vulnerability inside cities in a real-time context. The proposed solution for computation of BC on large-scale graphs can be extended for real-time computation of this metric on time-varying weighted graphs and further enhance our control solution for congestion avoidance based on dynamic vulnerability detection of road networks. Résumé Le développement économique et social des villes modernes repose sur l'efficacité, la fiabilitéetlar ésilience des systèmes de transport. Les transports sont donc devenus un défi crucial qui touche plusieurs domaines liés à l'activité des villes. L'ancienneté de certaines infrastructures et leur capacité d'accueil limitée rendent les villes de plus en plus vulnérables àdesévénements imprévisibles et à une demande croissante. Les embouteillages sont ainsi plus fréquents, augmentant les émissions des véhicules et donc la pollution atmosphérique, et augmentant la perte de temps pour les conducteurs. Les embouteillages sont ainsi devenus ces dernières décennies un problème majeur des réseaux urbains. Les solutions améliorant les conditions de transport d'un point de vue santé, sécurité et gestion d ut r a fi cs o n td ep l u se np l u sp r écises, adoptant en particulier l'usage généralisé de l'Intelligence Artificielle, conjointement au développement des technologies Big Data de stockage, de détection, de communication et de calcul. 
Traffic simulations are now based on varied data sources and on more precise information to better reproduce traffic dynamics and user behaviour. Analyzing such complex and voluminous data nevertheless remains a challenge that requires, in particular, solutions such as distributed computing, multi-agent modelling and parallelization. Moreover, the study of the characterization and modelling of transport network vulnerabilities for improving urban mobility is still in its early stages. To prevent congestion and identify the vulnerable spots of the network, i.e. the areas or sections where failures (congestion, inaccessibility) would have important consequences, two types of vulnerability analysis are the most common in the transport domain: a dynamic analysis, based on system modelling, and a static analysis based on network topology. Both aspects of vulnerability analysis are studied in this thesis. The first approach, encouraged by the volume and quality of the collected data, dynamically simulates travelers, their trips and the transport infrastructures over a period of time. The second approach is based on graph theory and static topological considerations. In this context, we first propose in this thesis a control strategy that dynamically computes and recommends new routes to drivers in order to avoid the creation of congestion and thus reduce the vulnerability of urban networks. Our strategy relies on a hierarchical cooperation algorithm in a multi-agent system where the road network and the vehicles are modeled as agents that react in real time to traffic conditions. This control strategy reduces the creation of congestion and reduces the duration of traffic jams when congestion cannot be avoided. We take driver behaviour into consideration to find a balance between the performance of the transport system and individual preferences, as well as the data protection constraints that are necessary in real applications. We show the robustness of our approach by testing it on different demand scenarios of our model, and we also show that identifying the vulnerable nodes of the network improves the quality of our strategy. We identify these vulnerable spots thanks to the Betweenness Centrality (BC), a resilience metric that normally belongs to the topological analysis of vulnerability. It is therefore rarely included in dynamic approaches, although it is used in many domains for the identification of critical nodes. This is explained by the difficulty of computing this metric in a real-time computation context on large-scale static networks. This second problem, the computation of BC for static vulnerability analysis, is then studied with a distributed algorithm for the exact and fast computation of BC on large graphs.
We provide mathematical proofs of the exactness of the BC computed in this way and show that, in an optimal distributed computing context, this approach is scalable. Based on distributed approaches, our solution for dynamically reducing the vulnerability of an urban network, via control combined with a topological study, is robust and works in a real-time context. The proposed solution for computing BC on large graphs can be extended to real-time computation on weighted graphs and thus complement the congestion reduction solution based on dynamic vulnerability detection.

First of all, I would like to thank my supervisors, who each contributed to this thesis in their own way and allowed me to move forward and progress throughout these three years:
• Nour-Eddin El Faouzi: thank you for your patience, and for the perspective and depth you brought to this thesis;
• Salima Hassas: thank you for your guidance in the multi-agent field, for the time you devoted and for the multitude of ideas you had; I hope that the ones we did not exploit can serve other works;
• Angelo Furno: a big thank you for your patience (apparently I grumble at times), for your rigour and your high standards. You often forced me to ask myself the right questions.

Chapter 1 Introduction

The urban population has rapidly grown from 751 million people in 1950 to 4.2 billion in 2018 and will probably grow by 2.5 billion more people, corresponding to 68% of the world population living in urban areas. The growing concentration of people translates into increasing mobility demand on the urban transport infrastructures. Hence, transport networks become a backbone of urban systems, with new transportation modes and new services such as scooter riding or bike sharing. The complexity and interdependence of all these modes make the transportation system, especially the road network, more vulnerable to failures of other modes. A recent example of strikes in France in 2019 showed that public transportation strikes created 23% more congestion in Lyon and provoked a rush on bike services in Paris, strongly deteriorating the service. Efficient, sustainable and reliable transport in large urban areas is vital for the economic and social development of modern cities and represents a major research challenge involving multiple disciplines, ranging from information and data science to economics and urban planning. The daily mobility of large masses of people and goods in, from and to urban areas strongly depends on a seamlessly available multimodal transport infrastructure, and on proper knowledge of the mobility demand. A system is considered to be vulnerable if its operations can be significantly reduced by failure (i.e. improper functioning or dysfunction of one or more elements). The probability and consequences of failure inside cities thus increase, making them more vulnerable to external events, especially extreme weather disasters that can damage infrastructures and provoke peaks of demand, reducing the performance of transportation systems. The New York subway, for example, was completely shut down because of Hurricane Sandy in October 2012, among many other consequences, paralyzing the city and obliging authorities to propose other modes of transportation. Especially because of climate change, the intensity and the frequency of climate disasters increase and force cities and their transportation systems to be more robust to unpredictable events.
Cities will face threats of different kinds that will have economical and social consequences, as well as pollution and thus health issues, enhanced by the population growth. Resilient cities is thus a goal for main metropolitan areas, but how can we identify vulnerabilities in large networks? What are the solutions to anticipate or recover from perturbations? With the amount of collected data, is it really possible to consider real-time analysis of large scale networks? The vulnerability of transportation systems is analyzed through two distinct traditions: static topological analysis, that focuses on topological and intrinsic vulnerabilities of the network using graph theory, and dynamic system-based analysis, which is a more realistic representation of transport network, dealing with demand and supply issues in a dynamic context. This latter requires more information to be effective, hence more computational power. The system-based vulnerability analysis is enhanced by the development of smart cities, enabling better simulations and offering new types of approaches (Artificial Intelligence in particular). Indeed, cities are becoming "smarter" and collect more and more data in realtime, via connected sensors, devices, people and infrastructures. Such massive data possess the great potential to provide highly valuable insights to improve demand characterization, to pinpoint system vulnerabilities and to help designing solutions increasing resilience of the transportation system. The increasingly available amount of data and computation power enable the development of new data-oriented and realistic solutions, with the constraint of real-time analysis in the context of Big Data. New technologies have made it possible to analyze those large data sets and to consider dynamic solutions, especially in Artificial Intelligence, which would have been impossible to implement due to computation capacity and memory limitation. The concrete applications for those solutions are the Intelligent Transportation Systems (ITS), that are more and more developed, due to the recent advances in Information and Communication Technologies (ICT) and the deployment of sensors inside vehicles and in infrastructures. The resilience of cities in the future relies on the understanding of vulnerabilities via proper indicators and on finding solutions to face threats of different kinds, using ITS. Objectives of the thesis The main objective of this thesis is to propose solutions to reduce the vulnerability of large scale transport networks and enhance network resilience, via artificial intelligence, complex networks and Big Data processing. The first aspect of vulnerability studied in this thesis is the traffic monitoring and demand vulnerability of the network. The solution we develop is a multiagent control strategy of rerouting. Embracing the recent technological growth (computational power, data availability, communication), our strategy takes into account network constraints, drivers behaviour and preferences, and topological bottlenecks of the network. The second contribution is related to vulnerability analysis and is a new algorithm of fast computation of a resilience metric in large static unweighted graphs. We place ourselves in a large scale context where many districts of an urban area are monitored, and in the context of real-time application. The analysis of large scale road networks in real-time can be done by aggregating information or by processing data with distributed approaches. 
This thesis is organized as follows:
• Chapter 2: we introduce the different notions of vulnerability and resilience, related to the serviceability and reliability of transportation systems. We then present what is done and what data are collected in cities to make transportation systems more resilient.
• Chapter 3: we present general concepts for control inside road networks and the need for distributed approaches, and describe our proposed solution for better traffic distribution to address demand-fluctuation vulnerability issues.
• Chapter 4: we describe our road network control solution prototype, the settings of the performed simulations and the results obtained.
• Chapter 5: we present our algorithm for the fast computation of a resilience metric and propose a performance evaluation on synthetic and real graphs. The control solution presented in Chapters 3 and 4 is improved by adding this resilience metric to the route choice.
• Chapter 6: we discuss the limitations of our work and the different perspectives that can compensate for those limitations.

Chapter 2 Vulnerability and Transport Network Management

With the increase of the urban population, transportation networks grow as well and become more complex. The dependency of the road network on the other transportation modes, and the recurrent congestion taking place in most urban networks, make it more vulnerable to disturbance and failure. The consequences of a failure impact more people and more infrastructures than before. Sources of failure are numerous: internal incidents such as strikes, technical failures or road maintenance, which are frequent, and external threats such as terrorist attacks, cyber attacks and climate disasters. The level and duration of the disturbance vary depending on the cause of the failure and on the ability of the transportation system to cope with these events and recover from them. New technologies enable better monitoring of environmental, weather or traffic conditions and of driver behaviours, offering new possibilities for resilience-oriented applications. However, the amount of collected data represents a challenge and requires distributed approaches to be studied and analyzed. In this chapter, we first define and introduce vulnerability, resilience and related concepts inside road networks; we then describe the context of applications enabling a resilient transportation system inside smart cities.

Vulnerability and Resilience of Transportation Systems

Different concepts are related to the characterization of transport system quality. We present in this section the main notions related to the level of service inside the network that lead to vulnerability analysis. Our work focuses on the reduction of demand vulnerability, and performance is evaluated in view of serviceability and reliability. We also introduce resilience, which encompasses serviceability, reliability and vulnerability.

Serviceability and Reliability

Perturbations inside road networks have consequences on the level of service of transportation systems, and several concepts related to transportation systems help characterize the consequences of a perturbation and the quality of the network. The performance of a road network is measured in terms of serviceability.
The serviceability of a road network is defined by Berdica in [START_REF] Berdica | An introduction to road vulnerability: what has been done, is done and should be done[END_REF] as "the possibility to use that link/route/road network during a given time period". An incident is an event that reduces the serviceability of the network, impeding the mobility of some travelers and blocking some parts of the network (road sections, subway station, etc). The consequences of failures are a decrease of serviceability of the network system as links or nodes become unavailable for a period of time. In road networks, failures result in reduction of accessibility of a road, causing (or caused by) traffic jams, leading to travel times increase. As an example, high demand will produce congestion and will lower performance in terms of speed and travel times, by blocking some road sections, forcing the drivers to find a new route, to wait, to choose another mode or worse, to cancel their trip. Another related concept of road network analysis is the reliability of the system, that can be measured as the variance of travel cost (travel times e.g.) for a given source and destination or the probability of duration lower than a threshold. More globally, reliability of an Origin-Destination pair (OD) is the comparison of the travel duration with a given time. The reliability of a network corresponds to its stability ( [START_REF] Mattsson | Vulnerability and resilience of transport systems-A discussion of recent research[END_REF]). Reliability focuses on "maintaining the performance of critical infrastructure elements" (Murray et al. [START_REF] Murray | Overview of reliability and vulnerability in critical infrastructure[END_REF]). Taylor describes different ways of measuring reliability in [START_REF] Michael | Travel through time: the story of research on travel time reliability[END_REF], related to travel time and capacity of links inside the network. It can be by considering the standard deviation of travel times (Bates et al. [START_REF] Bates | The valuation of reliability for personal travel[END_REF]) or considering percentile values (Lam et al. [START_REF] Terence | The value of time and reliability: measurement from a value pricing experiment[END_REF] or Tilahun et al. [START_REF] Nebiyou | A moment of time: reliability in route choice using stated preference[END_REF]). Chen et al. ( [START_REF] Chen | A fuzzy multi-objective model for reconstructing the post-quake road-network by genetic algorithm[END_REF]) define it as the probability a network can absorb the demand. The reliability of the network is affected by two main variables: the demand and the supply. More precisely, Kwon et al. ([9]) identify three sources of travel time reliability: • traffic influencing events, including traffic incidents, work zone activity, weather conditions • traffic demand, corresponding to fluctuation in day-to-day demand, or special events • physical road features like traffic control devices (railway crossings for example), inadequate base capacity (bottlenecks) Typically, after technical failures or extreme weather incidents (flooding, earthquake), infrastructures may be deteriorated and inaccessible, affecting the reliability of the network by forcing the users to find alternative routes. If the failure happened in a critical spot of the network, like a bridge, the alternative routes may be much longer, thus significantly reducing the reliability. 
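To make these reliability measures concrete, the short Python sketch below computes a few of them for a sample of observed travel times on a single origin-destination pair. The sample values, the 95th-percentile choice and the 30-minute threshold are illustrative assumptions, not data from this thesis.

```python
import statistics

def reliability_metrics(travel_times, threshold, percentile=95):
    """Compute simple travel-time reliability indicators for one OD pair.

    travel_times: observed travel times (minutes) over repeated trips
    threshold: acceptable travel time used for the on-time probability
    percentile: upper percentile used for the buffer-type indicator
    """
    mean_tt = statistics.mean(travel_times)
    std_tt = statistics.stdev(travel_times)          # spread of travel times
    sorted_tt = sorted(travel_times)
    # simple percentile by rank (no interpolation), enough for an illustration
    idx = min(len(sorted_tt) - 1, int(len(sorted_tt) * percentile / 100))
    high_tt = sorted_tt[idx]
    on_time = sum(t <= threshold for t in travel_times) / len(travel_times)
    return {
        "mean": mean_tt,
        "std_dev": std_tt,                 # variability-based reliability
        f"p{percentile}": high_tt,         # percentile-based reliability
        "buffer_ratio": (high_tt - mean_tt) / mean_tt,  # extra time to budget
        "on_time_probability": on_time,    # P(travel time <= threshold)
    }

if __name__ == "__main__":
    observed = [22, 24, 23, 30, 26, 41, 25, 27, 24, 35]  # fictitious minutes
    print(reliability_metrics(observed, threshold=30))
```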
Similarly, when the demand strongly varies inside road networks, the reliability is impacted because the travel times from origin to destination vary as well. This is typically the case when failures like strikes are observed on public transportation system: the demand inside road networks increases because more people will use their car instead of public transportation. The main focus of our work will be on the demand fluctuations. Vulnerability Serviceability and reliability are concepts related to the quality of mobility inside the transport system. A more general concept that includes the risks and consequences affecting serviceability and reliability of the network is the vulnerability. Many definitions of network vulnerability exist, taking into account probability of incidents, causes and consequences of failures. Berdica ([1], p. 119) defines it as follows: "Vulnerability in the road transportation system is a susceptibility to incidents that can result in considerable reductions in road network serviceability." A network is vulnerable when an unexpected event has high cost consequences and reduces serviceability and reliability of the system. Some nodes or links can for example be identified as vulnerable if on the path of many users. Indeed, consequences would be increased as failure would concern many people. Centrality measures like the Betweenness Centrality help identifying those vulnerable spots. More generally, the reduction of vulnerability can be done in two ways: failsafe approach, reducing the risk with the protection of vulnerable spots (bridge or tunnel for example) or safe-fail approach reducing the consequences of an incident ( [START_REF] Berdica | An introduction to road vulnerability: what has been done, is done and should be done[END_REF]). The reduction of vulnerability relies on the anticipation of incidents, the reactivity when an incident occurs and the quality of solutions, either to solve the problem or to find alternative ([2]). Resilience The resilience is a complementary concept to vulnerability as it is related to the cause and consequences of a disruption in a system but encompasses more parameters of the system as it includes also duration of disturbance, the reach of new equilibrium and is related to the performance of the system. It appeared in biology, when considering the ecosystems ability to recover from disturbance. The concept has now spread in other domains and is more and more studied in transportation systems. US National Academy of Sciences defines the resilience [START_REF] Cutter | Disaster resilience: A national imperative[END_REF] as "the ability to prepare and plan for, absorb, recover from and more successfully adapt to adverse events". Hollnagel [START_REF] Hollnagel | Resilience engineering in practice: A guidebook[END_REF] proposes a more precise definition that includes the dynamic of the system : the resilience is "the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions". The resilience is thus not only the ability to recover from a disruption but also to anticipate and prepare for unexpected events, and be able to rapidly react. 
Hollnagel highlights four cornerstones of resilience, illustrated in Figure 2.1:
• "responding": how to respond to normal or unexpected events by changing the functioning; it can be alerting people when an accident occurs so that they can change plans, or a deviation during road maintenance, for example;
• "monitoring": monitoring the current state of the system: speed, accumulation, air quality or weather conditions, for example;
• "learning": from past events, being able to address current situations through an understanding of the past: typically congestion anticipation with demand prediction, or knowledge of the consequences of previous events such as strikes;
• "anticipating": anticipating future events or failures (which falls under vulnerability analysis).

Recent actions are taken, especially for congestion avoidance: traffic speed regulation when high demand is expected (vacation departure days), route recommendation, or, during extreme weather events, the closure of nearby subway stations. Vulnerability analysis helps improve the resilience of a system, as it is focused on "what to expect". Usually, resilience is measured when analyzing the performance of the transportation system. In Figure 2.2 from Koren et al. [13], the performance drops after an incident, a natural disaster in the case studied by the authors. After the time of response, the system recovers until reaching an equilibrium, which can be the initial one (A) or a new one (B). If the system is not resilient, it will not be able to recover, like state C, or will take more time to recover. The resilience in this case is the area below the performance curve between the time of disturbance and the time of recovery, R = \int_{t_d}^{t_r} P(t) dt, where P(t) is the performance indicator, t_d the time of disturbance and t_r the time of recovery. The metric for performance assessment depends on the context of the study and is related to the Level Of Service (LOS). It can be the travel time reliability, the speed, the total travel time spent in the network, or any other metric able to provide indications on the performance of the transportation system. It can be related to the whole system or to specific components of it. For instance, in the context of an urban large-scale transportation network, one could focus on the resilience of specific transport modes or specific regions. In transportation systems, the main focus is maintaining an acceptable LOS. Resilience is increased by limiting the performance drop, reducing the duration of recovery and ideally reaching a new and better stable state by enhancing the LOS when it is possible.

Vulnerability Analysis and Performance of a Road Network

Cities are more and more vulnerable to external events, due to their size and the different sources of perturbation. Today, global warming in particular is a serious threat for transportation systems, as it increases the frequency and the intensity of natural disasters, which correspond to the two indicators of vulnerability (probability of occurrence and consequences). Extreme weather conditions, like storms or blizzards, may disrupt one or more transport modes, reducing supply and increasing demand on other transport modes, making the system more vulnerable. Many studies thus focus on the impact of extreme weather incidents, as they are a major threat for infrastructures. Cities have to be prepared and able to react rapidly; otherwise the transportation system would be paralyzed.
Their resilience is strongly dependent on the capacity of anticipation, the time of reaction and the quality of the solutions found. More globally, two aspects of a transportation system may affect the vulnerability of a road network: variation of demand and variation of supply of transport services. For example, during intense precipitation, driver behaviour changes and speeds are lowered, which decreases the capacity of areas ([14]), and the number of traffic accidents increases. In this case, the demand is steady but the supply is lowered, decreasing the performance of the system. In this thesis, we place ourselves in a more frequent risk context, where the source of vulnerability is the demand variation, as a day-to-day problem to solve.

Vulnerability Analysis: Static vs Dynamic Perspective

Traditionally, vulnerability analysis can be divided into two categories: topological analysis and system-based analysis ([2]). Topological analysis relies on the representation of the transport network as a graph, usually with links representing road sections and nodes representing road intersections. The advantage of this approach is its simplicity in terms of computation and input data. Graph theory and mathematical considerations enable various topological vulnerability analyses, such as the identification of vulnerable nodes or the topological consequences of node removal. The fast computation algorithms for resilience and vulnerability analysis make it very appealing, but the lack of dynamic information, such as demand and supply, hinders realistic applications. The consequences of a disruption from a user perspective are not well evaluated in this case. The second approach addresses the drawbacks of the first, but at the cost of complexity. System-based analysis takes into account the demand, the supply and other information about the dynamics of the network. As the demand is a source of vulnerability, it allows testing the robustness of infrastructures with increasing demand, or vulnerability measurements. From system-based analysis, consequences on travelers such as travel time delay or mode choice are easily measured. They are also useful to test the performance of the system with a varying demand or capacity. The recent advances in computation technologies reduce this drawback of system-based analysis and enable large quantities of information to be analyzed. Those two approaches are rarely combined, but each in its own way provides many insights for vulnerability studies. Recent studies, however, focus on combined spatio-temporal vulnerability analysis (for example Henry et al. [15]). Nowadays, such analyses are enhanced by the quantity and quality of collected data.

Intelligent Transportation System

Intelligent Transportation Systems (ITS) appeared with vehicular connectivity and aim at reducing issues related to transportation, via data collection, analysis and communication. They are part of the improvement of resilience inside transportation networks, increasing knowledge of the transport environment and offering solutions to different kinds of mobility issues.
Simple applications appeared decades ago, where traffic information was broadcast through the radio or via variable message signs (VMS), signaling traffic perturbations or advising travelers during their trips. To face the problem of urban mobility, ITS now benefit from the improvement of data collection and storage technologies, computation power and communication. Recent technological advances have indeed enabled the development of applications around mobility inside urban networks. Data collection can now be done in real time, and software is more powerful and able to perform computation on large data sets rapidly. ITS are composed of communication and data processing technologies to improve and analyze mobility data. Guerrero-Ibanez et al. ([16], [17]) highlight four principles for ITS (sustainability, integration, safety, responsiveness) and present the main objectives of ITS: access and mobility, environmental sustainability and economic development. ITS are thus main actors for resilience improvement inside cities, and address the four cornerstones of resilience. Cooperative ITS, or C-ITS, correspond to a new type of ITS where the mobility actors (vehicles or infrastructures) do not act individually but in cooperation. C-ITS are induced by the increasing connectivity and sensor equipment of vehicles and infrastructures. Infrastructures can exchange information with vehicles to alert them about real-time events affecting their trip. Cooperation between vehicles is also possible to improve general traffic conditions. Adaptive Cruise Control, for example, helps vehicles adapt their speed depending on the vehicles in front of them, and can regulate traffic speed on portions of roads ([18]). Applications for ITS are numerous, such as security (lane management, blind spot information), environment (weather prediction, pollution management) and assistance (parking spot location, pre-trip information). All these applications help cities become more resilient, storing past information, analyzing information in real time and communicating with users or infrastructures.

Data Collection and Sensors

Data used by ITS are collected from sensors, which can be in-road sensors, also referred to as infrastructure sensors, or in-vehicle sensors. Those sensors enable the monitoring of various indicators, giving information to travelers or city planners and, more globally, helping measure the performance of transportation systems.

In-road Sensors

The first and most mature category of sensors is the in-road sensors. Measuring outside conditions, they provide information about traffic or environmental conditions (weather), enabling a better risk evaluation of transportation systems. Guerrero et al. ([17]) distinguish two types of in-road sensors: intrusive (inductive loops, pneumatic tubes, magnetic sensors) and non-intrusive (radar sensors, cameras), depending on whether they are installed on or outside the road. In both cases, in-road sensors are accurate and rely on a well-known technology. They can measure occupancy, speed, vehicle length or flow in a road section.
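As a minimal illustration of how such raw detector output can be turned into aggregated traffic indicators, the sketch below converts a vehicle count and an occupancy measurement into flow, density and a space-mean speed estimate through the usual relation q = k v. The aggregation period and the effective vehicle length are assumed values chosen for the example.

```python
def aggregate_detector(counts, occupancy, period_s=300.0, effective_length_m=6.5):
    """Turn raw loop-detector output into macroscopic traffic indicators.

    counts: vehicles counted during the aggregation period
    occupancy: fraction of the period the detector was occupied (0..1)
    period_s: aggregation period in seconds (assumed 5 minutes here)
    effective_length_m: average vehicle + detection-zone length (assumption)
    """
    flow_veh_h = counts * 3600.0 / period_s                   # q [veh/h]
    density_veh_km = occupancy * 1000.0 / effective_length_m  # k [veh/km]
    # fundamental relation q = k * v gives a space-mean speed estimate
    speed_km_h = flow_veh_h / density_veh_km if density_veh_km > 0 else float("nan")
    return {"flow": flow_veh_h, "density": density_veh_km, "speed": speed_km_h}

if __name__ == "__main__":
    # e.g. 95 vehicles in 5 minutes with 18% occupancy on one lane
    print(aggregate_detector(counts=95, occupancy=0.18))
```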
Among the traffic-related sensors, Road Side Units (RSUs) measure traffic at fixed locations and are able to communicate with vehicles in a close perimeter or with other RSUs. They give partial information, as they are stationary, but can provide valuable traffic indications such as the flow or the speed of vehicles in a zone. The main issue of those infrastructures is their high cost, hindering their installation. Traffic State Estimation can reconstruct traffic flow patterns based on observed data ([19]) and can thus compensate for the limited deployment of infrastructures. The demand and the number of vehicles inside a road network can thus be evaluated with RSUs.

In-vehicle Sensors

With the development and spread of self-driving vehicles, more and more sensors are added to new vehicles. They are now able to compute the distance to other vehicles, for example, have proximity sensors to help with parking, and monitor tire pressure. RAdio Detection And Ranging (RADAR) sensors scan the road and notify drivers when obstacles appear, to avoid collisions. Self-driving vehicles are equipped with LIght Detection And Ranging (LIDAR), which scans the environment with 360-degree visibility. Most of the in-vehicle sensors for Advanced Driver Assistance Systems (ADAS) aim at helping the driver and increasing his or her safety, alerting the driver when the risk of collision is high, for example, or helping him or her to park. Lane management helps the driver stay in their lane, adaptive cruise control manages the distance between vehicles and avoids sudden speed changes, and blind spots are better checked via radars. In the end, all these applications, by protecting the driver, reduce the risk of accidents and in a way decrease the vulnerability of the road network by reducing the probability of incidents. Nonetheless, the lack of common standards between brands and the difficulty of connecting them with other components are their main drawbacks and are a limitation when designing strategies involving communication with the vehicles.

Communication Systems

In ITS in general, communication can be done in three different ways: infrastructure-to-infrastructure (I2I), vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V). Vehicles communicate with other vehicles or with infrastructure via vehicular ad-hoc networks (VANETs). V2I and V2V are often used in cooperative control strategies, where vehicles exchange information with other vehicles to adapt their speed, for example ([20]), or with infrastructure to send or receive traffic information. V2V approaches require connected vehicles, with the ability to communicate with each other and the same measurement system. V2I communications are more common, given the maturity of infrastructure technologies. Moreover, mobile applications (Waze, Google Maps or other navigation systems) are good vectors of communication between vehicles and infrastructures. These new types of interaction are, however, sources of two problems: privacy and security. Particular attention should be paid to the type of information communicated by vehicles and the way it is sent (receiver, data storage, communication protocol, etc.). Privacy is a main issue for new applications, as information related to mobility is particularly sensitive.
Laws protecting individuals privacy emerge, like GDPR in European Union, and force the design of new applications to take into account this constraint before any deployment. Fries et al. ( [START_REF] Ryan N Fries | Meeting privacy challenges while advancing intelligent transportation systems[END_REF]) explore the privacy issues that ITS will have or already have to address and the possible solutions (aggregation, masking id, law or third parties). One of the contributions of this thesis is a TMS on a large scale road network, this is the dimension of ITS we will focus on the rest of the thesis. Traffic Management Systems One of the fields of ITS applications (Figure 2.3) is Traffic Management Systems (TMS) that aim at improving traffic flow on road network and reducing or avoiding congestion. TMS focus on the improvement of user mobility in general, for public transportation, vehicles or other transportation modes. TMS are composed of sensors to gather information, applications or computation tools to analyze the traffic situation and offer a solution to the vehicles. TMS are articulated with three phases [START_REF] Souza | Traffic management systems: A classification, review, challenges, and future perspectives[END_REF]: information gathering via sensors or mobile applications, data processing to analyze the traffic situation, and service delivery which is the application of the management strategy. Traffic management includes lane management, surveillance, parking management, automatic tolling or intersection management. The common goal of all theses approaches is a better management of traffic flow, mainly of vehicles, in urban areas, addressing demand issues of cities. Depending on the application, the scale of TMS can differ, from a road section to a large urban area. Intersection management, as well as lane management for example, belongs to microscopic approaches, where only portion of roads are observed and vehicles included in the management are only considered for a short period. Larger zones including more roads and more vehicles focus for example on a better distribution of traffic inside the network by making some vehicles change their route in case of congestion, accident for example. Large scale TMS are difficult to deploy because of the large amount of data they can require. De Souza et al. ( [START_REF] Souza | Traffic management systems: A classification, review, challenges, and future perspectives[END_REF]) classify TMS in different categories. The two main categories are infrastructure-free and infrastructure-based. The infrastructure-free solutions are based on communication between vehicles (V2V) and enables congestion detection or traffic control. Toward a Large Scale Management Strategy Applications for vulnerability analysis are part of ITS and are more and more useful for large cities to improve their resilience. Traffic management systems especially help reducing vulnerability inside the network, avoiding accident or saturation of the network. Their dynamic makes the system more reactive and prepared for unexpected events. ITS enable communication between transportation network actors, large quantity of data and more precise information to be collected for simulation. 
Analyzing a whole city area increases the size of the data, the duration of observation and the size of the studied population. To address computation issues and be as close as possible to real interactions, decentralized approaches are a solution enabling the distribution of computation at large scale in a dynamic context. In this thesis, we mainly focus on three aspects of resilience in a large-scale context: responding, monitoring and anticipating. The first two are addressed with a control strategy in a TMS context, using a distributed multi-agent system, in Chapters 3 and 4. Anticipation is addressed in Chapter 5, in a topological vulnerability analysis context, where we describe and analyze the performance and scalability of a fast computation algorithm that we developed for a resilience indicator.

Chapter 3 Multi-Agent Control Framework for Vulnerability Mitigation

To design solutions for Traffic Management Systems contributing to a reduction of transportation system vulnerability, the representation of the transport network and of user interactions needs to be defined and contextualized. This modeling is a large part of the dynamic system-based vulnerability analysis, related to the choice of data used and the level of information considered, and thus to the accuracy of the approach. From simulations, one can evaluate the performance of a traffic management strategy, reproducing part of the behaviours and interactions of travelers. In this chapter, we introduce the context of dynamic analysis we choose, the existing control approaches, and the traffic theory hypotheses underlying our strategy. Finally, we present our hybrid control strategy.

Demand as an Enabler of Road Network Vulnerability

As introduced in the previous chapter, two phenomena may affect the reliability and vulnerability of a network: the demand and the supply (i.e. network topology, connectivity, etc.). In this chapter we focus only on the vulnerability of road networks with respect to the demand. Berdica ([1]) addresses demand vulnerability through reliability at a microscopic level: "Terminal reliability is addressed at link level, and link reliability of connectivity is the probability that demand does not exceed reference capacity on a certain link for given time periods." A congested link is thus considered a failure because it is not accessible to drivers for a period of time. To avoid congested links and, at a larger scale, congested areas, different solutions exist to better control traffic, by changing the departure time, the mode or the route of users. The main focus of this work is on user rerouting during day-to-day perturbations, when the demand is too high (typically morning peak hours). Our goal is to better distribute vehicles over the network by dynamically anticipating and locating congestion, while considering available data and realistic interactions between the actors of the rerouting strategy.

A Brief State of the Art and Motivations

Demand vulnerability in a large-scale context is difficult to capture because of the data volume (number of travelers, size of the network, precise traffic information, etc.) and the computation time required for simulation. To work at large scale, one common solution is to aggregate the information and simplify the modeling, but at the price of a lack of realism.
We present in the following sections solutions based on aggregated traffic information for large scale network, vehicle-oriented solution based on communication with vehicles and finally the context of multi-agent rerouting strategies. Traditional Traffic Control The state estimation and control a network level is a complex task and requires a huge effort in terms of sensor deployment, data processing and forecasting. Reducing model complexity and designing control strategies at this level has become an active research topic and has been one of the main challenges to efficiently cope with traffic congestion in urban cities. Approaches to solve this problem include aggregated directional control (Tumash et al. in [START_REF] Tumash | Boundary and VSL Control for Large-Scale Urban Traffic Networks[END_REF]) or reduction of scalability dependence, (Nikitin et al. in [26]) to name a few. Real-time data-driven solutions based on Intelligent Traffic Systems and connected Vehicles have attracted a lot for deployment effort, as their effectiveness increases with the development of smart city solutions and the growing availability of fine grained data to better monitor traffic conditions. From this perspective, traffic jams can be nowadays more precisely detected and, in some situations, proactively avoided. Boundary Control From the perspective of traffic control engineering, infrastructure-based approaches are more suitable as they establish specific boundary actions leaving the routing a decision to the driver. Boundary control has shown to be efficient to limit access to the network and improve its throughput (Boufous et al. in [START_REF] Boufous | Centralized and Distributed Multi-Region Traffic Flow Control[END_REF] for example). In [START_REF] Tumash | Boundary Control Design for Traffic with Nonlinear Dynamics[END_REF], Tumash et al. propose a solution of boundary control based on the solution of Hamilton-Jacobi equation on a single road. A subzone boundary control is presented by Yang et al. in [START_REF] Yang | Regional Boundary Control of Traffic Network Based on MFD and FR-PID[END_REF], based on Macroscopic Fundamental Diagram theory (or MFD, later further described). Gao et al. ( [START_REF] Gao | Resilient perimeter control for hyper-congested two-region networks with MFD dynamics[END_REF]) propose a resilience-oriented boundary control from MFD properties between two regions. They present the concept of traffic-flow resilience and develop a solution to optimize flow between two regions. A hierarchical approach was proposed by Yildirimoglu et al. ([31]) for routing assignment. The upper-level route guidance scheme optimizes network performance based on actuation via regional split ratios, whereas the lower-level path assignment mechanism recommends subregional paths for vehicles to follow, satisfying the regional split ratios in order to achieve performance. Through its hierarchical structure, this approach address the problem of translating regional guidance to a more precise route guidance. Nonetheless, these approaches suffer from low adaptability to sudden changes of the travel demand, variations of the transport supply and evolution of congestion patterns. Moreover, those macroscopic approaches are focused on flow optimization and do not consider individual preferences via microscopic driver behaviour. Vehicle-Oriented Solutions Traffic engineers are also focusing the effort of empowering new control schemes by the connectivity between vehicles. 
Among vehicle-oriented approaches, which are infrastructure-free, CoTEC, from Bauza and Gozalvez [32], and CARTIM, from Araujo et al. [33], propose solutions for intelligent rerouting to avoid congested areas. They do not require much infrastructure to be effective; only the vehicles need to be equipped. Similarly, Pan et al. ([34]) propose an infrastructure-free route guidance system based on three different strategies: DSP (Dynamic Shortest Path), which dynamically assigns shortest paths, RkSP (Random k Shortest Path), which randomly chooses a shortest path among the top k, and EBkSP (Entropy Based k Shortest Path), which takes into account the future positions of vehicles and assigns them to the least popular path, enabling a better distribution of vehicles inside the network. The complexity of this last approach leads to scalability issues. In [35], Wang et al. propose a Next Road Rerouting solution, based on VANET communications, where infrastructure agents suggest to nearby vehicles which road to take next after an unexpected event. Other V2V solutions are based on alerting vehicles when an accident occurs ([36], [37]). The main goal of those approaches is, however, to detect congestion rather than to avoid it. Another limitation of these applications is that they strongly depend on the number of connected vehicles driving inside the controlled zone. These kinds of actions keep traffic safe and try to smooth congestion based on messaging sources, but a proactive approach to handling congestion requires better and more precise advice for users on the road.

Multi-Agent Approaches for Distributed Intelligent Solutions

To compensate for the lack of user consideration in boundary control and to consider cooperative strategies for a global improvement of the system through anticipation, a multi-agent context offers a dynamic, reactive and robust framework, suited for complex rerouting strategies. We briefly present in this section the main multi-agent concepts and some multi-agent based control strategies.
More concretely, the main characteristics associated defined by Wooldridge and Jennings ([38]) to agents are: • autonomy: the agent performs its action without external intervention; • social ability: the agent can interact with other agents or external entities; • responsiveness: the agent is able to sense its environment and respond to changes inside this environment; • proactiveness: more than responding to the environment variation, the agent acts depending on a given goal, individual or global. Multi-agent System (MAS): group of agents which evolve in a environment, interact and act to achieve an individual or global purpose. Wooldridge and Jennings ( [START_REF] Wooldridge | Intelligent agents: Theory and practice[END_REF]) highlight four types of problem the MAS can solve and that are typically related to traffic assignment: • openness: inside unpredictable environments that can change quickly, solutions can not be found easily because of the dynamic and various states of the system; • complexity: decomposing a complex problem, because of its size or its unpredictable state, into simpler sub-problems that can be solved by agents and different types of actions; • distribution of data, control, expertise or resources: agents representation offers solution when the problem depends on many distributed entities that can interact to find a solution; • legacy systems: or modularity, the modification of an agent property or behaviour does not require changing the whole system. MAS offer robustness, simplification of problem solving, reactivity inside a changing environment and a realistic representation of individual behaviours, and is thus our choice for a traffic control strategy. Multi-agent representation is especially appropriate to address individual microscopic representation as it aims at individualize autonomous entities and behaviours. Moreover, distributing the demand over given supplies is a complex problem of resource allocation, that can be solved in a multi-agent context with cooperative approaches ( [START_REF] Edmund | Distributed problem solving and multi-agent systems: Comparisons and examples[END_REF]). Multi-agent modeling enables problem-solving distribution and thus computational distribution, a dynamic environment and a realistic representation while keeping the simplistic modeling. For these reasons, a multi-agent representation, more and more used in studies at microscopic traffic modeling level, seems to be suited for dynamic analysis on large scale networks. MAS have the advantages to introduce new kinds of interactions either by game theory in cooperative/competitive schemes or by specific communication protocols to reach consensus. We introduced previously (Chapter 2) different kinds of actors of TMS, vehicles or infrastructures, and their interactions: V2V, V2I or I2I. This can be easily mapped into a multi-agent framework where vehicles and infrastructures are seen as agents. The use of navigation systems makes the travelers even more aware of the environment, receiving information about traffic condition in real-time. The infrastructures such as traffic lights or sensors are also represented in traffic multi-agent modeling as fixed agents. Multi-Agent Control Strategies The usual types of multi-agent control strategies are traffic light management and vehicle rerouting, which is the scope of our analysis. 
Traffic light management is typically a way of applying boundary control and is thus efficient for system improvement but does not really consider traveler preference. Wiering et al. [START_REF] Marco | Multi-agent reinforcement learning for traffic light control[END_REF], and more recently Liu et al. [START_REF] Liu | Intelligent traffic light control using distributed multi-agent Q learning[END_REF] developed a multi-agent traffic light management strategy, using reinforcement learning to minimize the waiting time of cars inside urban networks. In [START_REF] Belbachir | Smart mobility using multi-agent system[END_REF], Belbachir et al. use three infrastructure agents that cooperatively manage intersections in a self-adaptative mechanism but at a microscopic scale. Traffic light management helps smoothing traffic flow inside urban areas but do not face high demand issues that require for the users to find a better route. For route recommendation, multi-agent frameworks are a way of modeling the autonomy of vehicles and enable cooperation strategies between vehicles and/or infrastructure. Recently Chavhan et al. [START_REF] Chavhan | Prediction based traffic management in a metropolitan area[END_REF] proposed a solution that predict traffic and manage flow distribution in the road network, divided into zones and subzones. The authors use mobile and static agents, corresponding to RSU and vehicles or pedestrians. One type of algorithms explored in multi-agent routing context is the Ant Colony Optimization (ACO): introduced in the nineties (Bonabeau et al. in [44]) to solve the problem of traveling sales man, ACO algorithms are now widely used to solve complex problems. Inspired by the behaviour of ants searching for food, their functioning is based on the exploration of a topological environment by simple agents, leaving "pheromone" on their way back from a given location (food in the real case of ants). The pheromone later attracts the other agents until it evaporates. The search of shortest paths can thus be done via ACO. In the context of traffic control solutions, ACO are used to anticipate congestion, evaluating paths and demand on roads with the pheromone, and improve the search of new shortest paths without greedy computations. Cao et al.([45]) propose a pheromone based detection of congestion on a microscopic level. Pheromone represents the current traffic state, by using the location of vehicles and another pheromone unveils the future state of congestion from their routes. The pheromone is used to predict potential congestion and reroute vehicles according to this information. An extension includes pheromone for traffic light control. The combination of those two approaches leads to a robust framework. In this case, the size of the network is not very large as every road section is evaluated. An other ant-based solution is developed by Tatomir et al. ( [START_REF] Tatomir | Hierarchical routing in traffic using swarm-intelligence[END_REF]) where the authors split the network into zones and apply a hierarchical rerouting algorithm. Shortest paths are dynamically updated and computed using local ants (moving only inside zones) and exploration ants (moving between zones). Clustering the network and working with a multi-agent system ensure a robust and scalable solution but in this case, the solution requires a lot of information from the driver. Indeed, the drivers provide traffic state information as well as their route, which could cause some privacy concerns. 
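The following sketch illustrates the pheromone bookkeeping that underlies these ant-based approaches: vehicles deposit pheromone on the road sections of their intended routes, the pheromone evaporates over time, and the resulting level acts as a congestion proxy when scoring candidate edges. The class, parameter values and evaporation rule are illustrative assumptions and do not reproduce the exact implementations of the cited works.

```python
class PheromoneMap:
    """Minimal pheromone table over road sections (edges), ACO-style."""

    def __init__(self, evaporation=0.1):
        self.evaporation = evaporation   # fraction lost at every update step
        self.level = {}                  # edge id -> pheromone amount

    def deposit(self, route, amount=1.0):
        """Each vehicle intending to use a route marks its edges."""
        for edge in route:
            self.level[edge] = self.level.get(edge, 0.0) + amount

    def evaporate(self):
        """Old markings fade, so the map tracks recent or expected load only."""
        for edge in list(self.level):
            self.level[edge] *= (1.0 - self.evaporation)

    def congestion_score(self, edge):
        """Higher pheromone means more expected vehicles, hence a less attractive edge."""
        return self.level.get(edge, 0.0)

if __name__ == "__main__":
    ants = PheromoneMap(evaporation=0.2)
    ants.deposit(["e1", "e2", "e3"])     # vehicle A's intended route
    ants.deposit(["e2", "e3", "e4"])     # vehicle B's intended route
    ants.evaporate()
    print({e: round(ants.congestion_score(e), 2) for e in ["e1", "e2", "e3", "e4"]})
```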
A hierarchical approach was also developed by Kammoun et al. ([47]), where ACO is used to reroute vehicles around congestion without creating new congestion. Based on the attraction/repulsion of road sections, these approaches optimize the distribution of vehicles through the network for congestion avoidance, but they require a large amount of precise information about the network and are therefore tested on small urban areas, because of computation and data storage issues: traffic conditions are evaluated at the road section scale. Our solution aims at reproducing this attractive/repulsive mechanism on a larger scale.

Hierarchical Multi-Agent Control Strategy

The main difficulty in control strategies is to consider both the user equilibrium, where route choices are individually optimized, and the system optimum, where the assigned routes are chosen to optimize a performance indicator (e.g. total travel time) over the whole system (Wardrop [48]). Centralized approaches tend to find a balance between these two optima but require comprehensive knowledge of the whole network and face scalability issues. On the other hand, user-centered approaches commonly found in routing applications focus their attention on providing a better individual experience at the cost of degrading the total network performance, which in the end also impacts the user experience. ACO approaches take the network performance into consideration, avoiding concentrations of vehicles in the same locations, but require full knowledge of the network and are thus less applicable to large networks. The combination of an aggregated traffic management strategy (at the region level), such as boundary control, with actions on vehicles considered individually has attracted little attention in the recent literature (Leclercq et al. [49]). One of the main concerns is the possibility of structuring an aggregated actuation mechanism that contributes to improving the performance of the system, reducing congestion, without constraining the freedom of drivers to use the network in specific ways. One possible way to perform this is by providing informed routing suggestions, based on the attraction or repulsion of monitored zones. Operators can thus provide an approach where the user-centered experience is as relevant as the traffic network performance. This combination is proposed and explored within this thesis.

Figure 3.1: Hierarchical Cooperation between Infrastructures and Vehicles

Our solution is to propose a hierarchical dynamic route recommendation algorithm for large-scale road networks. It dynamically computes and suggests new routes to vehicles using aggregated congestion proxy measures, computed in a distributed way by infrastructures. In a hierarchical context, with zones and local/global decisions, the traffic network is partitioned into different zones with homogeneous traffic conditions. Congestion indicators are then locally computed to grade the capability of a zone to accommodate incoming traffic or reroute it via other zones. The aggregated metrics determine the zones that may accept vehicles. Vehicles then compute new routes according to the traffic information provided by the zones, with an attraction/repulsion mechanism.
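A minimal sketch of the zone-level part of this mechanism is given below: each zone agent aggregates the flows reported by its sensors into a congestion indicator and publishes an attraction/repulsion status. The flow/capacity ratio and the two thresholds are illustrative assumptions rather than the exact metrics of our strategy.

```python
from dataclasses import dataclass

@dataclass
class ZoneAgent:
    """Infrastructure agent monitoring one zone of the partitioned network."""
    zone_id: str
    capacity_veh_h: float          # rough zone capacity (assumed known)
    repulse_above: float = 0.85    # illustrative thresholds on the indicator
    attract_below: float = 0.60

    def congestion_indicator(self, sensor_flows_veh_h):
        """Aggregate sensor flows into one proxy of how loaded the zone is."""
        return sum(sensor_flows_veh_h) / self.capacity_veh_h

    def status(self, sensor_flows_veh_h):
        """Grade the zone: can it accept incoming traffic or should it repel it?"""
        load = self.congestion_indicator(sensor_flows_veh_h)
        if load >= self.repulse_above:
            return "repulsive"     # broadcast: reroute around this zone
        if load <= self.attract_below:
            return "attractive"    # broadcast: spare capacity available
        return "neutral"

if __name__ == "__main__":
    zone = ZoneAgent("Z12", capacity_veh_h=4000.0)
    print(zone.status([1500.0, 1200.0, 900.0]))   # load 0.9 -> "repulsive"
```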
The difficulty with this general assumption will be to understand which cooperation caused which effects, as vehicles will not always comply with the recommendations of the zones. Moreover, we designed our strategy to be as realistic as possible, including driver preferences and privacy constraints, as these are major issues with ITS (see Hahn et al. [START_REF] Hahn | Security and privacy issues in intelligent transportation systems: Classification and challenges[END_REF]). This approach overcomes those limitations as it works on large-scale networks, through the cooperation of two types of agents in a distributed hierarchical framework, with limited vehicle information exchanged with the infrastructures. The combination of dynamic rerouting of microscopic agents with an aggregated control strategy makes it a new way of improving traffic conditions and hence reducing network vulnerability. From a resilience point of view, by better distributing the traffic flow, our control strategy facilitates the absorption of demand perturbations and reduces the duration of performance drops. Multi-Agent Infrastructure and Vehicle Interactions In this section, we present the general components of our model and their interactions. General Architecture of our Framework When people travel through road networks from their origin to their destination, they pass near sensors and produce data during their trip. Traffic state can be sensed by fixed sensors such as inductive loop detectors (ILDs), ultrasonic or radar sensors. Those installations are expensive but mature and accurate enough for traffic management applications ([START_REF] Guerrero-Ibáñez | Sensor technologies for intelligent transportation systems[END_REF]). These data can then be sent via cellular network to traffic management centers (or, if we consider a fully distributed approach, to one computer per zone for example). While sensors in the infrastructure are expensive, they can provide accurate information at fixed spots. GPS and mobile data provide less accuracy but a better spatial distribution at lower cost; both types of measurements are useful for estimating the traffic state, and they can be fused. Once this information is retrieved, it is aggregated and regularly updated on a higher level. The higher level is the abstract representation of reality and is where data processing and analysis are done. An illustration of those layers is shown in Figure 3.2, where the framework is presented. In our case, the higher level contains distributed agents of two types, representing vehicles and infrastructures. More precisely, we consider two types of agents: the zones, which correspond to groups of roads located in delimited areas defined by a given segmentation of the city, and the vehicles, which can be seen as an interface to actual vehicles, such as a navigation system or an application, capable of interacting with our framework by providing the area where the vehicle is located at a given timestamp and by delivering route suggestions to the vehicle itself. The road network is represented in the abstract layer as a graph with road intersections as nodes and road sections as links. The aim of this work is to combine, at the control level, interactions between infrastructures and vehicles.
Zone and Graph Definition The transcription of the road network inside the control framework is done with two components: the zones, which are the agents and actors of the multi-agent control strategy, and the road graph, which is based on the real network. The road network is split into zones. A zone is a delimited area, with sensors fixed at known locations. Each sensor is associated with a zone and sends information that will later be aggregated by zone. For example, the sensors measure the flow inside the zone, which will characterize its level of congestion. Zone characteristics, and especially critical performance indicators, are derived from observations. The graph is an abstract representation of the road network with road sections as links and road intersections as nodes. It is a common resource, broadcast and updated by all the zone agents and accessible to vehicle agents. In this graph representation, a zone corresponds to a connected sub-network. The weights of the links are the free-flow travel times. The graph is updated with aggregated metrics from sensor data collection. Its goal is to help the vehicles find a new route from the information communicated by the zones, not to represent the exact network in real time. Agent Description In our model, we consider two types of agents: the vehicle and the infrastructure. We define them so that their communications, their information sharing and their level of traffic understanding are as realistic as possible. Agents are usually characterized by the information they sense, the actions they can perform and the communication with other agents or entities of their environment. Vehicle Agent: • Agent actions: moves, computes new shortest paths, changes route; • Agent information: id, departure time, origin, destination, route, macro route, arrival time, number of reroutings; • Agent interaction: with its current zone, with its next zone. Mainly receives information from zones. A vehicle agent has an initial origin, destination, departure time and route. Its route and departure time are considered as an optimal solution for the vehicle (user equilibrium). The vehicle only communicates with its current zone and its next zone. The vehicle's precise location is not known by the zone; the zone only knows whether the vehicle is inside it or not. The constraint of privacy requires that the vehicle agent computes some information for rerouting on its own. The vehicle agent is able to compute a new route, using the resources it has access to. Here we use local information, such as the weighted graph representing the road network. It is worth noting that vehicle information such as speed, acceleration or precise location is neither used nor sent to external entities. Moreover, vehicles do not have to be connected, as our approach does not require knowledge of their speed or trajectories. Furthermore, our control strategy relies on the cooperation of the few vehicles that are willing to change their route; not all vehicles need to have a specific application or to activate their location via GPS to be detected by the zone.
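For concreteness, the sketch below shows one way the vehicle-agent information and actions listed above could be organized. It is an illustrative container only, not the actual class of our prototype; the names (VehicleAgent, accept_new_route, and the field names) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VehicleAgent:
    """Hypothetical container for the vehicle-agent information listed above."""
    vid: int
    departure_time: float        # seconds from simulation start
    origin: int                  # node id in the road graph
    destination: int             # node id in the road graph
    route: List[int]             # ordered link ids (initial user-equilibrium route)
    macro_route: List[int] = field(default_factory=list)   # ordered zone ids
    arrival_time: Optional[float] = None
    n_reroutings: int = 0

    def accept_new_route(self, route: List[int], macro_route: List[int]) -> None:
        """Adopt a route the vehicle computed itself from the weighted graph sent
        by its current zone; no precise position or speed is ever shared."""
        self.route = route
        self.macro_route = macro_route
        self.n_reroutings += 1
```

Only route-related information stays on board: in line with the privacy constraint above, a zone never receives the vehicle's speed, acceleration or exact location.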
Zone Agent (infrastructure): • Agent actions: monitors traffic inside its perimeter, computes its own indicators of congestion, auto-evaluates itself; • Agent information: id, graph of the road network, critical accumulation, critical spatial speed; • Agent interaction: with its neighbor zones, with vehicles driving in its perimeter. The zones aggregate local information such as the accumulation to evaluate their state of congestion. They can communicate with their neighbor zones, which are the zones located next to them. They also communicate with the vehicles inside them or driving toward them, to help them find a better route. The zones update and share common variables such as the graph. Communication Between Zones As defined previously, a zone agent can communicate with its neighbors. To have a fully distributed multi-agent system, we use gossip algorithms to compute global and local aggregated metrics such as the mean, min or max, via communication between the agents. The purpose of gossip algorithms is to propagate information among a large network of agents, via simple communications between each agent and part of its neighbors. Gossip algorithms can be implemented to propagate information or to compute a global metric in a network of agents in a distributed way. By communicating their local state and the information received from some of their neighbors, agents are able to converge to the exact value of a global metric. Agents compute an aggregate indicator and share the same information, computed in a distributed way, and achieve consensus or agreement. Through distributed computation, gossip algorithms reduce the dependency on a central system and increase the robustness of our solution. If an agent fails, the others are still able to perceive the surrounding environment. Kempe et al., in [START_REF] Kempe | Gossip-based computation of aggregate information[END_REF], describe the mechanism of gossip implemented with the Push/Sum algorithm. This algorithm enables the computation of the sum, mean, min or max of a metric inside a network of agents with iterative interactions. Let us consider the sum $x$ of the metric values $x_i$ of all the agents: $x = \sum_i x_i$. We want $x$ to be computed and known by all the agents in a distributed way. Each agent $i$ will send and update two variables, written $s_i$ and $w_i$. Those two variables are sent to neighbors and updated iteratively. At the first iteration ($t = 0$), $s_{0,i}$ is initialized with $x_i$ and $w_{0,i}$ is initialized with zero for all the agents except one, for which $w_{0,i} = 1$ (in the case of a "mean" computation, all the weights $w$ are initialized at 1). The pseudo-code of Kempe et al. is presented in Algorithm 1. In the paper, $x$ is a vector; we thus simplify the pseudo-code to a one-dimensional variable.

Algorithm 1 Pseudo-code of a Push/Sum iteration (from [START_REF] Kempe | Gossip-based computation of aggregate information[END_REF])
1: Let $\{(\hat{s}_r, \hat{w}_r)\}$ be all pairs sent to $a_i$ in round $t-1$
2: Let $s_{t,i} = \sum_r \hat{s}_r$, $w_{t,i} = \sum_r \hat{w}_r$
3: Choose shares $\alpha_{t,i,j}$ for each $j$
4: Send $(\alpha_{t,i,j} \cdot s_{t,i}, \alpha_{t,i,j} \cdot w_{t,i})$ to each $j$
5: $\hat{x}_{t,i} = s_{t,i} / w_{t,i}$

At each iteration, an agent broadcasts to $k$ random neighbors and to itself the information $\{\alpha_{t,i,j} \cdot s_{t,i}, \alpha_{t,i,j} \cdot w_{t,i}\}$, where $\sum_j \alpha_{t,i,j} = 1$. It receives from part of its neighbors the information $\{s_{t,r}, w_{t,r}\}$, with $r$ a neighbor of $i$. From the received information, agent $i$ updates its local variables $s_{t+1,i}$ and $w_{t+1,i}$:

$$s_{t+1,i} = \sum_r s_{t,r}, \qquad w_{t+1,i} = \sum_r w_{t,r}$$

where $s_{t,r}$ and $w_{t,r}$ are the information sent by neighbor $r$. At each iteration, the estimated sum computed by agent $i$ is $\hat{x}_{t,i} = s_{t,i} / w_{t,i}$. After a few iterations, for each agent $i$, the ratio $s_i / w_i$ converges to $x$, enabling every agent to have an estimation of the sum $x$.
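As an illustration of Algorithm 1, the following minimal Python sketch runs synchronous Push/Sum rounds with equal shares. The function push_sum and its arguments are ours and purely illustrative; they are neither the implementation of [54] nor part of our prototype.

```python
import random

def push_sum(values, neighbors, n_rounds=50, k=2, compute_mean=False, seed=0):
    """Synchronous Push/Sum sketch (after Kempe et al.); illustrative only.
    values    : dict agent_id -> local value x_i
    neighbors : dict agent_id -> list of neighbor agent ids
    Returns a dict agent_id -> estimate of sum(x) (or mean(x) if compute_mean)."""
    rng = random.Random(seed)
    agents = list(values)
    s = {i: float(values[i]) for i in agents}
    # For the sum, a single agent starts with weight 1; for the mean, all do.
    w = {i: (1.0 if compute_mean or i == agents[0] else 0.0) for i in agents}

    for _ in range(n_rounds):
        inbox = {i: [] for i in agents}
        for i in agents:
            targets = rng.sample(neighbors[i], min(k, len(neighbors[i]))) + [i]
            share = 1.0 / len(targets)            # equal shares alpha_{t,i,j}
            for j in targets:
                inbox[j].append((share * s[i], share * w[i]))
        for i in agents:                          # update s_{t+1,i} and w_{t+1,i}
            s[i] = sum(pair[0] for pair in inbox[i])
            w[i] = sum(pair[1] for pair in inbox[i])

    return {i: s[i] / w[i] if w[i] > 0 else float("nan") for i in agents}

# Example: five zone agents on a ring estimating their total accumulation.
accumulations = {i: 10 * (i + 1) for i in range(5)}       # 10, 20, ..., 50
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(push_sum(accumulations, ring))                      # every estimate tends to 150
```

Because each agent redistributes all of its current mass $(s_i, w_i)$ at every round, the totals $\sum_i s_i$ and $\sum_i w_i$ are conserved, which is what makes every ratio $s_i / w_i$ converge to the global sum.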
In this case, the agents communicating through gossip algorithms are the zones. They exchange information about their state of congestion and the global state of congestion of the whole network, computed via Push/Sum. Through this distributed algorithm, we remove the dependency of zones on a centralized component and ensure a robust system of information sharing and aggregated computation. The zones can work in a completely decentralized environment. Traffic Conditions Analysis and Network Partitioning Our goal is to homogenize traffic to avoid congestion in a spatial environment and reduce zone vulnerability to high-level demand. By controlling the distribution of vehicles inside the network, between zones, we aim at reducing the duration and intensity of congestion, and thus the total travel time spent inside the network by the vehicles. This section introduces how the individual choices of users impact the system optimum. We then describe the traffic model that characterizes the zone state, which is a cornerstone of our model and induces the partitioning of the road network. User Equilibrium and Aggregated Effects Users naturally tend to choose the route that minimizes their own cost (e.g., time spent) in the network. When all the users choose their best route, this situation is referred to as the user equilibrium. The problem with this equilibrium is that it does not guarantee the optimization of the transport system, which is desirable from a traffic management perspective: congestion then worsens and increases the travel time of other users. A situation where the sum of travel times among all the users is minimized is a system optimum. Our objective is to lower the congestion index in zones. The idea is still to provide optimal routes to the drivers, subject to a minimum degradation of the system optimum. The equilibrium assumptions are often made at a macro scale and rarely take microscopic behaviors into account. In our work, we combine macroscopic theories with microscopic actions to improve traffic conditions. Macroscopic Fundamental Diagram: Congestion Proxy To protect zones from congestion, we need to characterize the congestion state with the available data. Sensors, as previously introduced, enable the measurement of the out-flow and the accumulation inside zones. We choose the Macroscopic Fundamental Diagram (MFD), first introduced theoretically by Godfrey et al. ([START_REF] Godfrey | The mechanism of a road network[END_REF]) in 1969, as it connects the congestion state of a zone with its accumulation, spatial speed and out-flow. The existence of the MFD was later proved under dynamic homogeneous conditions by Geroliminis and Daganzo ([START_REF] Geroliminis | Existence of urban-scale macroscopic fundamental diagrams: Some experimental findings[END_REF]) in urban areas with a homogeneous distribution of congestion, and MFDs are now widely used for traffic flow optimization, and thus boundary control. The "free-flow regime" corresponds to accumulations lower than a critical value $n_i^c$.
When the accumulation is greater than $n_i^c$, the zone becomes unstable and congestion appears: the out-flow, or production, decreases and fewer vehicles leave the zone. When the accumulation reaches its maximum, no vehicle can leave the zone: the zone is fully congested. The maximum of production is reached at $n_i = n_i^c$. The critical speed corresponds to the slope of the line between $n_i = 0$ and $n_i = n_i^c$. The MFD is very convenient when working on delimited regions as it links the state of congestion of each region with its accumulation, and thus characterizes the congestion state with few, easily measurable data. In our case, we will base our cooperative rerouting strategy on the observed accumulation, compared with the critical accumulation value inside each region. The speed is later measured as an indicator of performance but is not an input to characterize congestion in our model. The MFD of an area and the critical values of the indicators can be obtained from real observations ([START_REF] Dakic | On the use of Lagrangian observations from public transport and probe vehicles to estimate car space-mean speeds in bi-modal urban networks[END_REF], [START_REF] Tsubota | Comparative analysis of traffic state estimation: cumulative counts-based and trajectorybased methods[END_REF]) or simulations ([START_REF] Yildirimoglu | Hierarchical control of heterogeneous large-scale urban road networks via path assignment and regional route guidance[END_REF], [START_REF] Mohajerpoor | H robust perimeter flow control in urban networks with partial information feedback[END_REF]). The MFDs help identify the state of congestion in regions of the network and are thus a good indicator for vulnerability analysis: we want to prevent zones from being in an unstable state. Detecting congestion and preventing it with the MFD plays a capital role in the partitioning of the road network. The elaboration of the MFD depends on the partitioning of the network into homogeneous zones ([START_REF] Saberi | Estimating network fundamental diagram using three-dimensional vehicle trajectories: extending edie's definitions of traffic flow variables to networks[END_REF]). Control Strategy for Vehicle Rerouting Now that we have introduced the functioning and interactions of the different components and characterized congestion identification, we present in this section our hierarchical control algorithm. First we describe the cooperation between zones, and then we present how rerouting is applied from the recommendations of the zones. Infrastructure Cooperation Mechanism By controlling the distribution of vehicles inside the network, we aim at reducing the duration and intensity of congestion, and thus the total travel time spent on the network. This problem does not only apply to traffic engineering. The general idea of targeting a metric inside a network through a redistribution among agents has been applied in other domains. An illustrative example is the work of Lequay et al. ([58]), which describes how a multi-agent system enables the reduction of energy consumption. The flexibility of a home (agent) is defined as the reduction of consumption the agent is willing to make without reducing its comfort. The agents have a common goal: the total consumption of electricity cannot exceed a certain quantity. By exchanging information and recalculating the common effort, agents converge to a consensus respecting the initial goal.
The whole mechanism and definitions are described in [START_REF] Lequay | Ajustement diffus et adaptatif de la consommation électrique résidentielle par un système multi-agent auto-adaptatif[END_REF]. At each round, agents engage a certain value of flexibility, with regard to the common goal and knowing the total engagement from gossip communications. Each agent adapts its flexibility depending on the engagement of the others. By comparing the actual value of flexibility with the initially engaged value, the agent computes its error, from which it derives a grade. This grade is later included in the computation of the newly engaged flexibility. Likewise, our multi-agent system aims at reproducing this mechanism of negotiation and redistribution of vehicles, in order to respect a global constraint, together with auto-evaluation. We keep the concepts of flexibility, engagement and grades from the work of Lequay et al., adapting them to our case. The main differences between our approach and that of Lequay et al. are the following. First of all, our situation has a topological constraint: the redistribution of vehicles is from one zone agent to a nearby zone agent. Receiving more or fewer vehicles than expected inside a zone will also affect traffic conditions in the neighbor zones. Moreover, the actions of the agents do not immediately impact the whole network and can thus not be as accurately evaluated as in the context of energy reduction. Another main difference is that we consider two types of agents, and the success of our strategy depends on the ability of the vehicles to fulfill the engagement of the zones. The vehicles do not have the same goal as the zones; we thus use resources common to both types of agents to link them and enable a better cooperation. These common resources are, for example, a graph updated by the zones with their congestion state and used later by the vehicles to compute new shortest paths. Finally, we do not consider an aggregated metric such as the sum of flexibilities as an order, but rather aim at reaching a better-distributed traffic. Traffic flow will be redistributed based on the accumulation of zones compared with the critical accumulation (from the MFD theory). Notations and Definitions The zone agents are denoted as $a_i$ and vehicle agents are denoted as $ve$. The set of neighbor zone agents of $a_i$ is written $N_i$. Time-dependent variables are subscripted with $(t)$ and are updated periodically with an interval of time equal to $\tau$. The road network is represented as the graph $G$, with $V$ the set of nodes (intersections) and $E$ the set of edges (road sections). $G$ is weighted with the free-flow travel times and partitioned into connected sub-networks (the zones). In our strategy, we characterize the zone agents with the following indicators: • total flexibility: their global state of congestion; • flexibility: the evolution of their traffic conditions; • grade: their ability to cooperate. Those three aspects and related concepts are described in the dedicated sections. Vehicle Distribution and Flexibility From the MFD of a zone, we can estimate the maximum number of vehicles that can be inside it before the emergence of congestion, the critical accumulation $n_i^c$. We thus define the total flexibility of a zone $i$ as follows:

$$f_i^{tot}(t) = n_i^c - n_i(t)$$

where $n_i(t)$ is the occupation (number of vehicles) of zone agent $a_i$ at time $t$. When the total flexibility of a zone is negative, it means that its current occupation is greater than the critical value and a congestion will occur or is already happening.
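As a small illustration of this definition (with function names of our own choosing), the total flexibility and the resulting congestion test can be computed directly from the accumulation measured by the zone's sensors and the critical accumulation read off its MFD:

```python
def total_flexibility(n_i: float, n_c_i: float) -> float:
    """f_i^tot(t) = n_i^c - n_i(t): vehicles the zone can still absorb before
    leaving the free-flow regime of its MFD (negative once congested)."""
    return n_c_i - n_i

def is_congested(n_i: float, n_c_i: float) -> bool:
    """The zone is unstable or congested when its occupation exceeds n_i^c."""
    return total_flexibility(n_i, n_c_i) < 0

print(total_flexibility(n_i=850, n_c_i=1000))   # 150 vehicles of spare capacity
print(is_congested(n_i=1200, n_c_i=1000))       # True: accumulation above critical
```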
At each time step, zones will engage part of their flexibility, $\hat{f}_i(t)$, depending on their current accumulation $n_i(t)$, their critical capacity $n_i^c$ and the congestion state of their neighbors. The engaged flexibility is the number of vehicles a zone is willing to accept or evacuate until the next time interval. The actual flexibility at time $t$ depends instead on both the current and the previous state of the transport network, and is defined as follows:

$$f_i(t) = n_i(t) - n_i(t - \tau) \qquad (3.1)$$

In other words, $f_i(t)$ corresponds to the number of vehicles that were actually accepted inside the zone during $\tau$, i.e., the time interval between two engagements of flexibility. The objective of each zone is to keep $n_i(t) < n_i^c$. Zone Performance and Engagement Once the actual flexibility is retrieved for each zone (via sensors), we compare it to the one engaged by the zone in the previous time interval. The relative error is computed by comparing those two values:

$$e_i(t) = \frac{\left(\hat{f}_i(t) - f_i(t)\right)^2}{\hat{f}_i(t)^2} \qquad (3.2)$$

Based on this error, each zone can grade itself by comparing itself to the others, as follows:

$$g_i(t) = \frac{e_{max}(t) - e_i(t)}{e_{max}(t) - e_{min}(t)} \qquad (3.3)$$

where $g_i(t)$ is the grade of zone $a_i$ and $e_{max}(t)$ and $e_{min}(t)$ are the maximum and minimum of the errors over all zones. This information is computed in a fully decentralized way via gossiping. Grades range from 0 to 1. As introduced earlier, the engagement is the number of vehicles a zone is willing to accept. We want the zones to cooperate so that a global optimum in terms of travel time spent in the network can be reached. Hence the engagement of a zone is designed to depend on its own total flexibility, its ability to respect the engagement (measured via the zone's grade) and the total flexibilities and grades of its neighbors. The engaged flexibility is thus defined as follows:

$$\hat{f}_i(t) = \frac{g_i(t) \cdot f_i^{tot}(t) + (1 - g_i(t)) \cdot f_i^{base}(t)}{\|N_i\| + 1} + \sum_{\gamma \in N_i} \frac{g_\gamma(t) \cdot f_\gamma^{tot}(t) + (1 - g_\gamma(t)) \cdot f_\gamma^{base}(t)}{\|N_i\| + 1} \qquad (3.4)$$

Specifically, the engagement of a zone corresponds to the weighted average of the total engagements of its neighbors including itself. The second term of Equation 3.4 corresponds to the cooperative mechanism. The weights here are the grades of each zone. In this formula, we consider $f_i^{base}$, the base flexibility of the zone, computed from the minimum and maximum flexibilities of all the zones:

$$f_i^{base}(t) = f^{min}(t) \cdot \frac{f^{max}(t) - f_i^{tot}(t)}{f^{max}(t) - f^{min}(t)} \qquad (3.5)$$

The use of $f_i^{base}$ forces non-reliable agents to still engage part of their flexibility. The minimum engagement $f_i^{base}$ depends on how the agent's flexibility $f_i^{tot}(t)$ compares to the minimum and maximum flexibilities among the population, respectively $f^{min}$ and $f^{max}$, which are also computed in a distributed way, via gossiping.
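The following sketch ties Equations 3.1 to 3.5 together. It is a simplified, illustrative transcription: the function names and the guards against zero denominators are ours, not part of the formal definitions above.

```python
def actual_flexibility(n_now: float, n_prev: float) -> float:
    """Eq. (3.1): vehicles actually accepted during the last interval tau."""
    return n_now - n_prev

def relative_error(f_engaged: float, f_actual: float) -> float:
    """Eq. (3.2): squared relative gap between engaged and actual flexibility."""
    return (f_engaged - f_actual) ** 2 / f_engaged ** 2 if f_engaged != 0 else 0.0

def grade(e_i: float, e_min: float, e_max: float) -> float:
    """Eq. (3.3): grade in [0, 1]; 1 for the most reliable zone."""
    return (e_max - e_i) / (e_max - e_min) if e_max > e_min else 1.0

def base_flexibility(f_tot: float, f_min: float, f_max: float) -> float:
    """Eq. (3.5): minimum engagement forced on non-reliable zones."""
    return f_min * (f_max - f_tot) / (f_max - f_min) if f_max > f_min else f_min

def engaged_flexibility(zone, neighbors, f_min, f_max):
    """Eq. (3.4): weighted average over the zone and its neighbors.
    `zone` and the elements of `neighbors` are (grade, f_tot) pairs."""
    members = [zone] + list(neighbors)
    return sum(g * f_tot + (1 - g) * base_flexibility(f_tot, f_min, f_max)
               for g, f_tot in members) / len(members)

# Toy example: a fairly reliable zone (grade 0.8) with two neighbors.
print(engaged_flexibility((0.8, 120.0), [(0.5, -40.0), (0.9, 200.0)],
                          f_min=-40.0, f_max=200.0))
```

In the toy call, the zone and its two neighbors contribute equally (division by $\|N_i\| + 1 = 3$), and low-grade zones are pulled toward their base flexibility rather than their raw total flexibility.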
Routing Mechanism Initial routes are computed based on a user equilibrium situation, with predefined paths and times corresponding to a free-flow state. When traffic conditions evolve, in particular in the presence of congestion, those times are altered. To help the zones achieve their engagement, vehicles are rerouted following the principle of attraction/repulsion. The vehicles are meant to be more attracted by zones without congestion, but access to congested zones must not be fully restricted. This is achieved by two mechanisms: the timeout for rerouting decision and the new path finding. Timeout for Rerouting Decision When a vehicle enters a zone, it calculates a timeout that sets when the vehicle will look for an alternative shortest path, allowing it to avoid congestion. This timeout depends on the congestion state of its next zone. Denoted as $t_r(ve)$, where $ve$ is a vehicle, it is computed as follows:

$$t_r(ve) = t + T_{base} \cdot \left( g_j(t) + \max\left(0, \frac{f_j^{tot}(t)}{n_j^c}\right) \right) \qquad (3.6)$$

where $T_{base}$ is a reference duration, $g_j(t)$ is the grade of the next zone $a_j$ of vehicle $ve$, and $f_j^{tot}(t)/n_j^c$ corresponds to the level of congestion of $a_j$. The rationale behind (3.6) is that when the zone is congested and currently exhibits a low grade, the timeout is low, and the vehicle should look for a new route almost instantaneously. On the contrary, if its next zone has no or low congestion, and accordingly a high grade, the driver might wait for conditions to evolve before searching for an alternative path. Potentially, the vehicle will enter the next zone before $t_r(ve)$ and thus will not change its route. Vehicles moving through non-congested zones have a low probability of being rerouted. It is worth noting that $f_j^{tot}(t)$ can be negative when the zone is congested, but (3.6) is designed to avoid a negative timeout. $T_{base}$, defined in Equation 3.6, represents the reactivity of the driver in finding a new route. The timeout for rerouting computed at $t$ is comprised between $t$ and $t + 2 \cdot T_{base}$. The higher $T_{base}$ is, the more likely the vehicle is to already be in a new zone when the timeout expires. On the contrary, with a low $T_{base}$, a vehicle will very likely still be in the same zone and will have to compute a new route. The number of potential reroutings should thus increase when the time window decreases. $T_{base}$ represents the anticipation time window of our strategy: a low $T_{base}$ will tend to produce a high number of rerouting decisions. New Path Finding At time $t_r$, if the vehicle has not reached a new zone, the vehicle will look for an alternative route, as illustrated in Figure 3.5. To compute its shortest path, the vehicle requests the updated graph from its zone. The new path is computed on $G$, with weighted links depending on the aggregated traffic metrics of their zone. A nearly congested zone will have links with higher weights than initially, whereas links of non-congested zones will have decreased weights. The vehicles are thus more attracted by zones without congestion, but access to congested zones is not fully restricted. We simulate a mechanism of attraction/repulsion depending on the aggregated level of congestion of the zones. The Choice of the Weights In this computation, we need to weight the time spent to cross zones so as to reflect their congestion state. Graph and Macro Graph: Common Resources for all the Agents To lower the complexity, we simplify the shortest path computation by introducing a macro graph. The macro graph enables us to compute a macro path, which is the succession of zones a vehicle will go through. When a vehicle is about to reroute, it first computes its macro path. The new path is then the shortest path computed on $G$ filtered on the zones of the macro path. The macro graph is built from the initial graph and the zone composition. Each zone contains border nodes, i.e., nodes with neighbors in other zones. A node is chosen as centroid when it is the closest to the barycenter of all the nodes of the zone.
From $G$, we define the macro graph $G_M$, containing the border nodes of the zones and their centroids. The centroid is connected to all the border nodes of its zone. The weight of the link between the centroid and a border node is computed as the total travel time in free-flow conditions between those two nodes when considering the shortest path inside the real graph $G$. This needs to be computed only once, and offline. We keep the existing links between border nodes. Graph of Manhattan 3X3; Macro graph of Manhattan 3X3. Updating the Travel-Time Computation The weights of $G$ and $G_M$ are updated depending on the total flexibility of the zone agent owning the links. At each time interval, the zones update only their own links with their local values of total flexibility. The weighted travel time of links belonging to a congested zone will increase with the level of congestion, whereas the weight will decrease if the total flexibility is high. When a vehicle needs to be rerouted, its current zone sends it the last updated version of $G$ and $G_M$. The graph and macro graph are common resources for all the agents and enable vehicles to cooperate with zone agents. At $t_r$, if the vehicle has not reached a new zone, it first computes a macro path from the macro graph $G_M$, corresponding to the succession of zones it will cross, and then a new route from the graph $G$, filtered with the zones of the macro path. The macro paths and the new shortest paths are computed on $G$ and $G_M$ with weighted links. To attract/repulse vehicles from free-flow/congested zones, the weight function is defined to reduce/increase link travel times. The weights of the links depend on the initial free-flow travel time and the total flexibility of the zones. We chose a sigmoid function, defined as follows:

$$tt_w(l) = tt(l) \cdot \left( \alpha - \frac{1}{1 + e^{-\beta \cdot f_i^{tot}(t)}} \right) \qquad (3.7)$$

where $l$ is a link inside zone agent $a_i$, $tt(l)$ is its free-flow travel time and $tt_w(l)$ is the travel time weighted by zone-related congestion information. The sigmoid enables the attraction/repulsion mechanism to be more or less smooth depending on its parameter $\beta$. Low values of $\beta$ correspond to a weight function close to affine, whereas high values correspond to a binary step function (see Fig. 3.7). If the weight applied to a zone is too high, the zone will not be reachable at all. Conversely, if the weight is too low, it will attract all the vehicles. In our case, the limits of the weights are set with a minimum of $0.5 \cdot tt(l)$ and a maximum of $1.5 \cdot tt(l)$, corresponding to $\alpha = 1.5$.
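The two ingredients just defined, the rerouting timeout of Equation 3.6 and the sigmoid link weighting of Equation 3.7, are sketched below. The function names and the numerical values in the example are illustrative only.

```python
import math

def rerouting_timeout(t, T_base, grade_next, f_tot_next, n_c_next):
    """Eq. (3.6): time at which the vehicle looks for an alternative route,
    between t and t + 2*T_base depending on the next zone's grade and congestion."""
    congestion_margin = max(0.0, f_tot_next / n_c_next)
    return t + T_base * (grade_next + congestion_margin)

def weighted_travel_time(tt_freeflow, f_tot_zone, beta, alpha=1.5):
    """Eq. (3.7): sigmoid weighting of a link's free-flow travel time,
    bounded in [(alpha - 1) * tt, alpha * tt] (0.5*tt to 1.5*tt for alpha = 1.5)."""
    return tt_freeflow * (alpha - 1.0 / (1.0 + math.exp(-beta * f_tot_zone)))

# A congested next zone (negative flexibility, low grade) triggers an early timeout ...
print(rerouting_timeout(t=3600, T_base=1500, grade_next=0.2, f_tot_next=-150, n_c_next=1000))
# ... and its links become repulsive, close to 1.5 times their free-flow travel time.
print(weighted_travel_time(tt_freeflow=60.0, f_tot_zone=-150, beta=0.025))
```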
Gridlocks may appear at the simulation level due to the circular paths existing in specific traffic networks and to the nature of the dynamic traffic assignment. In order to avoid them, we assign a random shortest path among the k-top shortest paths (like in [START_REF] Pan | Proactive vehicle re-routing strategies for congestion avoidance[END_REF]) with k = 5. Otherwise, some vehicles would choose the same paths and create even bigger congestion than without control action. Stochastic Acceptance of Rerouting As the initial routes correspond to a user equilibrium, we consider that few drivers actually accept to change their routes many times. The probability that they do should quickly decrease once they have already done one rerouting. We thus model their rerouting acceptance with a decreasing probability:

$$p(accept, ve) = e^{-r(ve) - 0.1} \qquad (3.8)$$

with $r(ve)$ the number of reroutings of the vehicle $ve$. If $r(ve) = 0$, vehicles have a 90% chance of accepting the rerouting; the confidence rate is hence set to 0.9. With $r(ve) = 1$, the probability drops to 0.33, as shown in Figure 3.8. Conclusion This approach embraces the new possibilities offered by recent technologies that enable communications between infrastructures and vehicles and a better monitoring of traffic conditions in a large-scale road network. The large-scale issue is addressed using hierarchical traffic control, with aggregated information used for microscopic rerouting. Zones of the network are characterized using the MFD and compare themselves with their neighbors. Through cooperation and auto-evaluation, they provide information to vehicles that helps them find the right moment to compute new routes, based on the graph updated with the traffic state of the zones. Vehicles do not share particularly sensitive information with the zones but are still able to take part in the cooperative control strategy. From this theoretical presentation of our strategy, we present in the next chapter its implementation and performance evaluation in a simulation context. Chapter 4 Implementation and Use Cases In this chapter, we propose an implementation of the control strategy described in the previous chapter. After presenting the prototype of our framework and the main functions of the agents, we analyze the results obtained after a sensitivity analysis, on synthetic and real networks. Finally, we include a topological resilience metric as part of our control strategy and show that this metric enables better reroutings and thus fewer driver solicitations. Prototype To test our strategy, we needed to implement a simulation on a large-scale road network for a duration of at least 3 hours, corresponding to rush hours inside cities. We present in this section the required components of our prototype and the choices made to run the simulation. We also present the traffic simulator and the data we used for our strategy. The traffic simulator we use, SymuPy, is based on a Lagrangian resolution ([START_REF] Leclercq | The Lagrangian coordinates and what it means for first order traffic flow models[END_REF]) of the LWR model ([START_REF] James | On kinematic waves II. A theory of traffic flow on long crowded roads[END_REF], [START_REF] Paul | Shock waves on the highway[END_REF]). SymuPy is not designed to be multi-agent; we thus created in Python the vehicle agents and zone agents outside of the simulator, as shown in Figure 4.1. The agents update their information from the simulator and process it, and once a new route is assigned, it is transmitted to the simulator. Our prototype, implemented in Python, wraps SymuPy in a multi-agent framework. The latter, in the end, becomes a multi-agent framework that integrates a powerful tool for realistic traffic simulations. Realistic simulations are generally a limitation of concurrent multi-agent solutions from the state of the art (like MATSim or GAMA). To improve our strategy and understand the impacts on performance, we
modify some hyper-parameters and evaluate the gain or loss compared with the situation without control. The two main hyper-parameters we focus on are $T_{base}$, the time window determining the time of rerouting, and $\beta$, a parameter of the link-weight function of the graph. We define in this section the hyper-parameters and the evaluation of our strategy for the system and the vehicles. We then present the results, with hyper-parameter modifications and on a single case, and then test our solution on unknown demand scenarios. Finally, we present the results for a real case scenario on a subnetwork of the Lyon road network. Hyper-Parameter Sensitivity Analysis To improve our strategy and understand the impacts on performance, we evaluate the gain or loss compared with the situation without control, within a set of selected values for two hyper-parameters. Those hyper-parameters were chosen as they affect the quantity and the nature of reroutings. The first hyper-parameter on which we focus is $T_{base}$, the time window determining the timeout for rerouting decision, which was presented in the previous chapter. $T_{base}$ can be seen as the reactivity of the vehicles and has a major impact on the number of reroutings, as shown in our experiments. The values of $T_{base}$ that we test are expressed in seconds. The second is $\beta$, a parameter of the sigmoid function applied to weight the links in the graph as a function of the congestion level (section 3.5.7). A high $\beta$ will distinguish zones as "congested" / "non-congested", close to a binary function ($\beta = +\infty$), while a low $\beta$ will smooth the weights of the links depending on the total flexibility (see Figure 3.7), up to the point where the weights are left unchanged ($\beta = 0$). We tested the extreme cases as well as values empirically chosen between 0.001 and 0.05. New paths thus strongly depend on $\beta$, as it affects the attraction/repulsion of zones. A last hyper-parameter is $\tau$, which corresponds to the time step used to update agent state variables. Experimentally, we set it at 60 seconds. Performance Evaluation The indicator of congestion reduction we consider to evaluate our strategy and compare the results is the reduction of Total Travel Time (TTT) in percentage:

$$\%TTT = \frac{TTT_{nocontrol} - TTT_{control}}{TTT_{nocontrol}} \qquad (4.1)$$

The best parameters are those with a high %TTT (Equation 4.1) and a limited impact on user equilibrium. To quantify the user disturbance, we consider the percentage of rerouted vehicles and the total number of reroutings. The variation of $T_{base}$ changes the anticipation of congestion avoidance (hence the number of reroutings), while $\beta$, as it changes the weights on the graph and macro graph, makes the choice of the new shortest paths vary. Case Study: the Manhattan Grid As a baseline, we consider a Manhattan grid network to test our algorithm, with 3712 nodes and 10324 edges, as it is representative of many city centers (see the work of Boeing et al. [START_REF] Boeing | A Multi-Scale Analysis of 27, 000 Urban Street Networks[END_REF] on US cities). This grid is split into 9 (3X3) and 25 (5X5) zones, representing a zone in the center surrounded by one circle of zones (3X3) or two circles (5X5). The two sizes of zones enable a comparison of our control strategy's performance depending on the two clusterings. The critical values of spatial speed and accumulation of the zones are computed in advance, from previous simulations.
A neutral zone surrounds the set of zones, containing the entry and exit points of the demand. The nodes belonging to the neutral zone are included in the graph and macro graph, but as the neutral zone is not an agent, it does not take part in the cooperative strategy and does not communicate with zones or vehicles. Vehicles driving inside the neutral zone do not compute new routes nor receive information from their next zone, and the weights of links inside the neutral zone do not change during the simulation. It is worth noting that the global performance of the system, however, includes the traffic indicators of the neutral zone. The vehicle demand is predefined and corresponds to a user equilibrium situation. Every driver agent starts the simulation with a given route and a given departure time. We consider in our base case simulation 16743 vehicles, in an approximately 4h simulation. Their origin and destination points are in the neutral zone. Most of the trips go through the center; the main trip axes are shown in Figure 4.3. We run our solution with varying hyper-parameters that modify the reactivity of our strategy and the penalization of congested zones, on one reference demand scenario, and later test it on different demand scenarios. Hyper-Parameter Choice The exploration of hyper-parameters allows us to first notice that clustering with 25 zones more often gives good results than with 9 zones. The more zones we have, the more precise the congestion detection and the rerouting, which explains the stability and robustness of the rerouting strategy when working with 25 zones. We can observe that the variation of one parameter when fixing the other does not induce a predictable %TTT reduction. There is no monotonic variation nor an optimum that could lead to globally optimal hyper-parameters. Logically, we can observe that when $T_{base}$ increases, the number of reroutings decreases. The outliers correspond to blank squares in the result figures. In Manhattan 3X3, we can observe that we have better results when $\beta$ is high, which corresponds to a strict filtering of congested zones. On the contrary, low values of $\beta$ improve the simulations in the case of 5X5. This can be explained by the fact that in Manhattan 3X3, due to the low number of zones, if the filtering is not binary, vehicles would still go to the congested zone. A clustering with few zones offers fewer alternatives to congestion, making the congested location not that repulsive. On the other hand, in the case of Manhattan 5X5, low values of $\beta$ offer a smooth filtering, more precise as the values of the links strongly depend on the total flexibility of their zone. In the extreme case where $\beta = 0$, the weights are constant and equal to the free-flow travel times. Results for Manhattan 3X3 are always lower than 5% with $\beta = 0$, showing that with fewer zones the penalization of congested zones is more important to improve the results, whereas a more precise clustering of the network does not require a special value of $\beta$ (and weight function) to be efficient. Moreover, higher values of $T_{base}$ reduce the %TTT improvement or, worse, deteriorate the initial situation. This is explained by the reactivity of vehicles: a large $T_{base}$ will provoke late reroutings if the vehicle is still in the zone. This means that when a congestion appears, vehicles entering a nearby zone will reduce their chance of avoiding the congestion by computing a new route. In both cases, $T_{base}$ has to be higher than 500s to ensure a %TTT reduction, except for extreme values of $\beta$.
In Figure 4.6, we can notice that the efficiency of our strategy depends on the number of reroutings. In particular, with too few reroutings (≤ 3000), most of the results are worse with control than without. A certain number of reroutings, around 3000, ensures an improvement. On the contrary, too many reroutings (more than 10000) worsen the initial situation and sometimes provoke more congestion than initially, which is the case when $T_{base}$ is lower than 500s. From the colors representing $T_{base}$, the relationship between the rerouting time window and the number of reroutings is confirmed: the lower $T_{base}$, the greater the number of reroutings. Many sets of $\{T_{base}, \beta\}$ enable a reduction of %TTT greater than 10%, but with a varying number of reroutings. We consider the rerouting action as a cost for the driver, and thus the best solution is not only the one maximizing the %TTT reduction but also the one having the least impact on vehicle rerouting. From Table 4.1, no perfect hyper-parameter calibration appears. The highest %TTT reduction (14.6%) provoked reroutings for 33% of the vehicles, which is quite high: we cannot assume that a third of the vehicles are willing to change their route. More globally, the best results are obtained for a $T_{base}$ between 1000s and 2500s. Single Case Comparison With and Without Control To better understand the impact of our control strategy on the agents and further analyze the dynamics between zones and vehicles, we focus on a single case with a high %TTT. We choose Manhattan 5X5 for this single case analysis and generalization, as this size of clustering gives more stable and robust results. From the previous results, the hyper-parameters maximizing the total travel time reduction without generating too many reroutings are $T_{base}$ around 1500s and low $\beta$ values. We here consider the case of $\beta = 0.005$ and $T_{base} = 1500s$ for a clustering of 25 zones. In this optimized scenario, we compare different indicators, such as spatial speeds or total travel time, to better understand the impacts of the hybrid cooperation of zone agents and vehicles on the traffic and on the trips. Considering the speed as the performance indicator for resilience characterization, we can see in Figure 4.7b that the drop of performance inside the whole network is reduced. The minimum speed is around 5 m/s, whereas without control it dropped below 3 m/s. The recovery is also shorter: without control, the stable state is recovered around time step 180 min, while it is recovered at 160 min with control. Considering the quantification of resilience as in Equation 2.1, where the speed is the performance indicator, we have the following:

$$R = \int_{T_0}^{T_r} v_s(t)\,dt \qquad (4.2)$$

where $v_s$ is the spatial speed, $T_0$ corresponds to the beginning of the perturbation and $T_r$ to the end of the recovery. We consider $T_0$ and $T_r$ from the initial simulation without control, and compare $R_{noControl}$ with $R_{control}$. From Figure 4.7b, $T_0 = 60$ min and $T_r = 180$ min, and the increase of resilience is 21%. Globally, when comparing the accumulation of vehicles in the whole network during the simulation, we can see in Figure 4.7a that the accumulation of the simulation with control decreases faster than in the initial situation, although the peak of accumulation is not reduced. This means that vehicles arrive sooner at their destination. This is confirmed by the global total travel time reduction, equal to 13% compared with the simulation without control.
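For reference, the resilience indicator of Equation 4.2 above can be approximated from a sampled spatial-speed series. The sketch below uses a simple trapezoidal rule and synthetic speed profiles, so the numbers it prints are illustrative and unrelated to the simulation results discussed here.

```python
import math

def resilience(speeds, t0_idx, tr_idx, dt_minutes=1.0):
    """Eq. (4.2): integral of the spatial speed between T0 and Tr,
    approximated with the trapezoidal rule on regularly sampled speeds."""
    window = speeds[t0_idx:tr_idx + 1]
    return dt_minutes * (sum(window) - 0.5 * (window[0] + window[-1]))

def resilience_gain(speeds_control, speeds_no_control, t0_idx, tr_idx):
    """Relative increase of resilience brought by the control strategy."""
    r_ctrl = resilience(speeds_control, t0_idx, tr_idx)
    r_none = resilience(speeds_no_control, t0_idx, tr_idx)
    return (r_ctrl - r_none) / r_none

# Synthetic speed profiles (m/s), sampled every minute over 3 hours:
# the controlled profile has a shallower and shorter performance drop.
no_ctrl = [10 - 7 * math.exp(-((t - 110) / 35) ** 2) for t in range(181)]
ctrl    = [10 - 5 * math.exp(-((t - 100) / 25) ** 2) for t in range(181)]
print(resilience_gain(ctrl, no_ctrl, t0_idx=60, tr_idx=180))   # positive gain
```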
The reroutings start around time step 60, right before the congestion of zones like $a_{22}$ or $a_{33}$ (see Figure 4.9), with a peak of reroutings corresponding to the beginning of the actual congestion. This shows the reactivity of the system, as actions are produced right when monitored indicators are reaching a critical value. The effects of the reroutings are seen with a certain delay, which explains why not all zone speeds remain higher than the critical speed. The rerouting strategy increased the resilience of the network under day-to-day demand perturbations, with high reactivity and accurate solutions. Figure 4.8 depicts the accumulation per zone and the number of reroutings induced by the zone (i.e., vehicles rerouted that would otherwise have gone through the zone). When the accumulation of a zone is close to its critical point, we can observe that many reroutings are triggered. In some cases, this reduces the accumulation ($a_{24}$) or maintains the accumulation around the critical value ($a_{32}$, $a_{34}$). The central zone, which most of the vehicles intend to go through, cannot avoid congestion and a higher accumulation than desired, even though $a_{22}$ triggered a lot of reroutings. Those new assignments are effective later, thus reducing the duration of congestion. This is a limitation due to the fact that vehicles can change their path only with respect to the next zone. One could consider zones that are two or three hops away in the macro path to improve the strategy. The zones that were not on the path of many vehicles have absorbed part of the reroutings, and we can see that they only accept vehicles if their accumulation is not too high. In zones $a_{31}$, $a_{44}$, $a_{53}$, reaching the critical accumulation induces a small number of reroutings, effective almost immediately. The consequences on the speed inside the zones are visible in Figure 4.9. We can see that even though congestion was not totally avoided, we managed to reduce its duration (e.g. $a_{22}$, $a_{44}$). Zones that were not congested at all show a reduction of their spatial speed but without congestion: indeed, the spatial speed remains close to the critical speed and barely lower. In zones such as $a_{23}$, $a_{34}$ or $a_{42}$, the congestion is totally avoided. From Figure 4.10a, we can observe that the number of long trips is reduced, as only a few vehicles spent more than 60 minutes inside the network. The control strategy has thus reduced the number of long trips and increased the number of shorter trips. In particular, the variance of the trip duration has decreased. Rerouting can be a constraint for drivers with no actual benefit. From Figure 4.10b, we can see first of all that most of the drivers are not rerouted (75.4%), which means that our solution does not require the cooperation of all the vehicles to be efficient. Furthermore, vehicles that have been rerouted once have reduced their trip duration, with a median gain of 5 minutes. The gain for vehicles having been rerouted twice is even higher but concerns fewer people (because of the probability of rerouting acceptance). Some outliers have increased their travel time, and in further developments the rerouting acceptance could also depend on the detour induced by the potential new route. Globally, 86% of vehicles have a shorter or equal trip duration; the mean gain is 13 min, whereas the median increase of travel time, for the vehicles whose trip got longer, is 5 min.
To conclude, from all the results shown, the cooperation between zone agents, at an aggregated level, and vehicles, at a microscopic level, is successful. The rerouting instructions are well transmitted from zones to vehicles and have a rapid effect on the global traffic state. This results in an increased level of performance inside the network for a longer duration. The system is thus more resilient to demand perturbations. Robustness Tests on Different Demand Scenarios Transport networks are usually sized to accommodate a recurrent travel demand, which is traditionally known by transport operators through different estimation methods, with some uncertainty. Even a regular travel demand can produce relevant performance drops, due to the way travellers distribute over the network over time. In the previous analysis, we focused on such a recurrent demand scenario and showed the effectiveness of our approach. However, to study the capability of our solution to sustain the resilience of the transport network, we focus in the following on more extreme configurations of the travel demand, which would stress the network and expose its vulnerability without a proper control action. We calibrated our control strategy on a reference case; we now test it with unknown demand scenarios. This allows us to have a global performance evaluation of our model, similarly to a train/test evaluation of machine learning models. Five different scenarios are proposed: scenario A is the one used for the calibration of the model (the reference case in the previous sections), and scenarios B, C, D, and E have varying numbers of vehicles (see Figure 4.11) and peaks of congestion. The worst case scenario is B, with more vehicles (around 18k), and is close to creating gridlocks; its peak of accumulation is 2.5% higher than that of scenario A. Scenario C corresponds to the case of few vehicles, and thus less congestion, with a lower peak of accumulation (-7.9%). Scenario D contains a similar amount of vehicles to scenario A but has a lower peak of accumulation (-2.5%). Scenario E is a scenario with no congestion. The main results are presented in Table 4.2. For scenario B, we obtain a reduction of total travel time of 17.9% with a third of the vehicles rerouted. Our strategy and the hyper-parameters are thus suited for a high-demand scenario. For scenarios with less congestion, the increase of performance is not as good, but the performance is still significantly increased for C and D, by more than 10% compared with the simulation without rerouting. However, the cost of rerouting in the case of demand E is too high (14.6% of rerouted vehicles) compared to the benefits for the system and the users. More globally, the TTT of all the scenarios has decreased, and the more congested the network was, the higher the reduction of %TTT. The accumulation and spatial speed during the simulation for every scenario are shown in Figure 4.12. In all cases, the drop of spatial speed is attenuated and the accumulation decreases faster with control. With the chosen set of hyper-parameters, we obtain a reduction of TTT, keeping the global performance of the system higher without requiring too many changes in driver behaviour. The main problem, however, arises when there is no need for control: even if our strategy does not deteriorate the initial situation, the benefits are too low compared with the user cost. In conclusion, our strategy performs better when strong demand perturbations occur than when the network is not congested.
Our approach is robust and can deal with unknown negative demand perturbations. To address its lack of efficiency for low-demand scenarios, the proposed strategy should be activated only once certain conditions of congestion or travel demand increase are met, with respect to a reference scenario (recurrent travel demand). Application on a Real Case Scenario To further validate our approach, we worked with the northern part of Lyon as a real case scenario. Lyon is the main city of an urban area of more than one million inhabitants in France. This network, illustrated in Figure 4.13, contains 1883 nodes and 3383 links and is divided into 17 zones. The number of trips for the simulation is 68573, between 7:30 and 10:30, corresponding to the morning peak hours. Origins and destinations are located on the perimeter and inside the network. The demand has been estimated based on real data, collected via surveys and loop detectors. The zones are delimited depending on the existence of an MFD and based on our knowledge of the area. A zone can contain only one critical intersection, which is often congested, and only one main arterial. Three zones are composed of a single arterial segment because their alternative paths are close to them and need to be in another zone to be chosen. There is a neutral zone, corresponding to a highway that vehicles can use but which is not an agent and thus does not trigger reroutings. Similarly to the Manhattan case, we performed a hyper-parameter sensitivity analysis on $T_{base}$ and $\beta$. Very few reroutings are necessary to improve the initial situation. The maximum percentage of rerouted vehicles is 6.7%, corresponding to 4700 rerouted vehicles. This is partially due to the fact that 46% of vehicles do not go through a zone agent and only drive in the neutral zone. Those vehicles can never be rerouted. Moreover, the mean number of zones in the vehicle paths is 4, whereas for the Manhattan network split into 25 zones, all vehicles drive through at least 11 zones. The low number of crossed zones reduces the possibility of rerouting for the vehicles. In Figure 4.16, we can see that for the case of $T_{base} = 300s$ and $\beta = 0.025$, the drop of spatial speed inside the network is reduced and the time of recovery is lower with control. The resilience is thus increased by 22%. This performance improvement is due to the accumulation reduction shown in Figure 4.15, where the peak of accumulation is reduced and decreases much faster with control than without. Figure 4.17 shows the accumulation per zone, and we can notice that 7 zones were highly congested initially. The control strategy enabled the reduction of accumulation below the critical capacity for zones 2, 5, 6 and 14. On the contrary, we can observe no real improvement in zones 1, 4, and 11, which were highly congested. This is explained by the fact that they are on the southern border of the network (see Figure 4.13) and the alternative paths are thus more limited for the drivers. The consequences on the performance of each zone in terms of spatial speed are shown in Figure 4.18. The control strategy reduced the time of recovery for many zones, but the zones where accumulation was not reduced show the same drop of spatial speed as without control. The improvements per zone are thus not as obvious as for the Manhattan network. This is due to topological differences: the Manhattan network offers more alternative zones for rerouting than Lyon and is more homogeneous.
Congestion is more easily avoided and vehicles are more often attracted by new paths than in the Lyon network. These results show that the reroutings were efficient in globally reducing the total travel time spent in the network, but the control strategy could not prevent some zones from being congested. Some further developments could help improve these results by changing other hyper-parameters, like α, which sets the limit of the weight function in G. Preliminary Conclusions The results showed great robustness and anticipation ability of our strategy for both real and synthetic networks. We managed to reduce or avoid congestion, especially for the high-demand scenario in the synthetic network. This is particularly interesting because a high-demand scenario is harder to manage, and the need for flow redistribution is greater. Furthermore, it is easier to detect a low-demand scenario and not perform control than the opposite situation, where the solution is less efficient when it is most needed. We modeled the acceptance of rerouting as dependent on the number of reroutings. In reality, drivers accept to change their route depending on the newly suggested route, which could be added to the acceptance function in further work. Another perspective would be to distinguish the types of drivers depending on their equipment: not all drivers activate a GPS device or navigation application, and it would be interesting to compare the performance of our strategy depending on the proportion of equipped vehicles. Resilience Inside the Controlled Area The results of our approach in a simulation context show that we were able to maintain a certain level of performance by modifying the traffic flow distribution inside the network. By cooperating, zone agents were able to reduce the duration and intensity of congestion. The traffic is better absorbed by the system, and the calibrated version of the algorithm showed robustness toward scenarios with a travel demand higher than usual. The algorithm thus reduces the road network vulnerability and offers a robust solution in case of exceptionally high demand. Nonetheless, some locations inside the network are more vulnerable than others. Their failures have more consequences or impact more people than those of other parts of the network. Reducing vulnerability inside a road network can be done by identifying vulnerable nodes or links and protecting them. Those nodes or links can be identified with a resilience metric, the most popular one being the Betweenness Centrality, used in topological analysis of various networks (not only road networks). Betweenness Centrality as a Metric of Resilience BC is widely used, with both directed and undirected graphs, to identify opinion leaders or influential people in social network analysis [START_REF] Stephen P Borgatti | Network analysis in the social sciences[END_REF], critical intersections in transportation networks [START_REF] King | Performance Metrics and Analysis of Transit Network Resilience in Toronto[END_REF]- [START_REF] Furno | A Graph-Based Framework for Real-Time Vulnerability Assessment of Road Networks[END_REF], vulnerabilities in computer networks [START_REF] Holme | Attack vulnerability of complex networks[END_REF], and threats from terrorist networks [START_REF] Carpenter | Practical Issues and Algorithms for Analyzing Terrorist Networks[END_REF].
However, in spite of its great potential, the computation time of BC often represents a barrier to the application of this metric in large-scale contexts, especially with dynamic graphs.

In graph theory, the BC of a node or a link is a topological metric that quantifies the proportion of shortest paths crossing the node or link. The number of shortest paths between a source node s and a destination node t is denoted by σ_{s,t}, whereas the number of shortest paths between s and t that cross a generic link l ∈ E is denoted by σ_{s,t}(l). The Betweenness Centrality (BC) of a link l ∈ E is defined as follows:

    BC(l) = \sum_{s \neq t \in V} \frac{\sigma_{s,t}(l)}{\sigma_{s,t}}        (4.3)

A high BC on a link means that a high proportion of shortest paths go through this link, making it a vulnerability for the system. High-BC links indeed tend to have higher probabilities of being chosen for new shortest paths and thus have more chances of becoming congested. Congestion on high-BC links creates a larger drop of performance, as it impacts more people and more shortest paths. In our case, rerouting vehicles onto links with high BC would create congestion on the new routes, which could be worse than the initial congestion. Those links need to be protected to prevent the propagation of congestion in the network.

The computation time of BC on all the links or nodes of a large graph is high due to the high number of explorations across the whole graph needed for shortest path computation. A main focus on the computation of BC is given in the next chapter, in a more global context than that of transport.

BC in the Control Strategy

Links with high BC are likely to attract vehicles as they are on many shortest paths. To maintain robustness inside the network and avoid creating new congestion, we modify the weight function of the graph by adding the BC, in order to protect the vulnerable links. They should remain attractive when no congestion appears, but we choose to make them much more unattractive when their zone is congested. This way, we do not prevent vehicles from going through them in a regular situation, but start limiting their access in an unstable state. The BC is thus used to help cars find new shortest paths without creating new congestion in vulnerable locations of the network. As the BC values depend on the size of the graph, we normalized the values by dividing by the maximum; this way, all BC values lie between 0 and 1.

The weight function, defined in Equation 3.7 and illustrated in Figure 3.7 in the previous section, is a sigmoid, with the parameter α = 1.5, defined as follows:

    tt_w(l) = tt(l) \cdot \left( \alpha - \frac{1}{1 + e^{-\beta \cdot f_i^{tot}(t)}} \right)

With α = 1.5, the limits of the weights are 0.5 \cdot tt(l) and 1.5 \cdot tt(l). We include the normalized BC, written \overline{BC}, in the weighted travel time by changing the formula of Equation 3.7 as follows:

    tt_{BC-w}(l) = tt(l) \cdot \left( \frac{e^{\overline{BC}(l)}}{1 + e^{\beta \cdot f_i^{tot}(t)}} + \alpha' \right)        (4.4)

The normalized BC of each link inside the Manhattan network is shown as an example in Figure 4.19a. Logically, the highest-BC links are those in the center of the graph. As a link with a high BC is a vulnerable link if congested, we increase its upper bound to protect it, making it more repulsive than links with lower BC. On the contrary, if the traffic is fluid, the weight decreases, reaching the same lower bound for all links. To maintain an upper bound of 1.5 \cdot tt(l), the value of α' is set to 0.5, and the coefficient formula differs slightly from Equation 3.7 in terms of signs.
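As a concrete reading of Equations 3.7 and 4.4, the sketch below computes both link weights. It is purely illustrative: function and parameter names such as baseWeight, bcWeight and the default value of β are ours, not those of the simulator used in the experiments. Note that for a link with normalized BC equal to 0 the two formulas coincide, as stated above.

```scala
// Illustrative sketch of the link-weight formulas (Equations 3.7 and 4.4).
// travelTime is tt(l), flexibility is f_i^tot(t) of the link's zone,
// normalizedBC is BC(l) normalized to [0, 1]. Names are ours.
object LinkWeights {

  // Equation 3.7: tt_w(l) = tt(l) * (alpha - 1 / (1 + exp(-beta * f)))
  def baseWeight(travelTime: Double, flexibility: Double,
                 alpha: Double = 1.5, beta: Double = 0.025): Double =
    travelTime * (alpha - 1.0 / (1.0 + math.exp(-beta * flexibility)))

  // Equation 4.4: tt_BC-w(l) = tt(l) * (exp(BC) / (1 + exp(beta * f)) + alpha')
  def bcWeight(travelTime: Double, flexibility: Double, normalizedBC: Double,
               alphaPrime: Double = 0.5, beta: Double = 0.025): Double =
    travelTime * (math.exp(normalizedBC) / (1.0 + math.exp(beta * flexibility)) + alphaPrime)
}

object LinkWeightsDemo {
  def main(args: Array[String]): Unit = {
    val tt = 60.0 // free-flow travel time, in seconds
    // With normalizedBC = 0 the two formulas give the same weight (alpha = 1.5, alpha' = 0.5).
    println(LinkWeights.baseWeight(tt, flexibility = 0.0))           // 60.0
    println(LinkWeights.bcWeight(tt, flexibility = 0.0, 0.0))        // 60.0
    println(LinkWeights.bcWeight(tt, flexibility = 0.0, 1.0))        // larger: high-BC link is penalized
  }
}
```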
Figure 4.19b illustrates the new BC-based weight function depending on the total flexibility of the link's zone, for a set of values of β. The upper curve corresponds to the weight function for a link with a normalized BC equal to 1, whereas the lower curve is for links with a normalized BC equal to 0. This latter case gives the same function as Equation 3.7 (with α = 1.5 and α' = 0.5). The lower bound is not modified. For computation's sake, we only consider a static topological BC, computed on the graph with free-flow travel time weights.

Use of the BC for Traffic Control (results)

We test the integration of the BC in the case of Manhattan 5X5 and demand scenario A. As the BC plays a role in the choice of new shortest paths, the goal is not only to improve the total travel time reduction but to increase the gain of each rerouting. This means that with fewer reroutings we would still be able to obtain the same total travel time reduction. First we perform the simulation on multiple sets of hyper-parameters as in Section 4.2. We reduced the number of hyper-parameters based on the results obtained without BC: we thus select higher values of T_base, greater than 1000s, and β ∈ {0.001, 0.005, 0.025}. We take 0.025 as the maximum because higher values of β in the case of Manhattan 5X5 did not give good results in general. When β = 0, the weights without considering the BC are constant and equal to the free-flow travel times; we thus do not evaluate the use of BC for this case.

Visualizing the percentage of rerouted vehicles against the percentage of total travel time, shown in Figure 4.20, we can see that the performances are in general quite similar but the best results are obtained using the BC. Indeed, only using the BC were we able to obtain a %TTT reduction greater than 14%, with low values of β keeping the proportion of rerouted vehicles close to 20%. Moreover, for a similar travel time reduction, simulations with the BC require fewer reroutings (except for some outliers). This shows that the BC helps finding new shortest paths that have a better impact in terms of resilience. We characterize this new-route improvement with the rerouting gain, written Γ_r:

    \Gamma_r = \frac{TTT_{nocontrol} - TTT_{control}}{\text{number of reroutings}}        (4.5)

The rerouting gain represents the total travel time saved per rerouting: the higher it is, the more positive the impact generated by one rerouting. We compare the gain per rerouting of simulations with and without BC by computing the difference, expressed in minutes. From Figure 4.21, we can see that the best performances in terms of %TTT and increase of gain per rerouting are observed for low values of β, when the weight function is close to an affine function (Figure 4.19b). As expected, for low values of T_base, the improvement is not very high because a lot of reroutings are done both with and without BC, which reduces the gain per rerouting; the number of reroutings is indeed the main source of %TTT reduction. What is actually interesting and useful is the improvement observed for high T_base: by increasing the gain per rerouting, it enables good performance for the system with fewer requests to the vehicles. Too much penalization of high-BC congested links is also not that efficient: with β too high, performances are lower than without considering the BC.
The highest increase of performance is found for β = 0.001 and T_base ∈ {4000s, 4500s}, but the overall %TTT is around 12%, which is low compared to the reduction of TTT of simulations with other values of {T_base, β}.

(Figure 4.21: difference of rerouting gain, Γ_r(BC) - Γ_r(noBC), against the %TTT reduction obtained using the BC.)

In conclusion, using the BC boosted the performance of our control strategy by fine-tuning the new path computation. The new choice of paths increased the travel time gain per rerouting, making the drivers' efforts more valuable. Even if the BC was static, and thus not dynamically updated with real-time traffic information, the improvements are significant, showing that combining static topological vulnerability analysis with a dynamic system-based approach gives interesting results and should be further studied.

Conclusion

To conclude this section and the global work on dynamic rerouting, our control strategy performs well after some parameter optimization. We are able to reduce the duration of congestion and, for some zones, to totally avoid it without creating new congestion, with a low impact on vehicles. In a fully distributed way, we manage to make zones cooperate and vehicles help them achieve their goals. The hierarchical control is efficient at an aggregated level (zones) and at a microscopic level (vehicles). The reactivity of both types of agents quickly reduced the drop of performance measured by the speed inside the whole network. It also reduced the duration of congestion, thus increasing overall resilience inside the network.

The use of BC has enabled a better stability of results and ensures a minimum level of performance. The combination of the topological properties of the road network and the dynamic rerouting solution enabled a higher reduction of vulnerability. The topological understanding of the network is thus useful for dynamic system-based vulnerability analysis, while still quite uncommon. In our case, the BC is computed on a static network, without considering the evolution of traffic conditions: we did not incorporate the dynamics of the network into the BC computation. To further improve the performance of control strategies using the BC, updating its values just before the rerouting process is a solution. Unfortunately, computing the BC means exploring the whole graph from all the nodes and, with state-of-the-art algorithms, the computation time is high. This explains why we only considered the BC computed beforehand, on a free-flow travel time network. To enable a real-time computation of BC, we developed an algorithm able to reduce computation time by decomposing the BC and dividing the network into clusters (similarly to zones). This algorithm is further described in the next chapter.

Introduction

In recent years, the Floyd method [START_REF] Floyd | Algorithm 97: Shortest path[END_REF], which requires O(n^3) computation time, has been overcome by the well-known Brandes' algorithm [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. Given a graph G(V, E), it exhibits O(n + m) space complexity, O(nm) time complexity for unweighted graphs and O(nm + n^2 log(n)) for weighted ones, where n = |V| is the number of nodes and m = |E| the number of edges. However, the polynomial complexity of Brandes' algorithm, which is almost quadratic for very sparse graphs, is still an obstacle to analyzing very large networks. Such a problem becomes even more evident and limiting if centrality is used for real-time analysis of dynamic networks.
In the last decade, many researchers have therefore worked on improving the performance of Brandes' algorithm. In this chapter, we propose an algorithm based on clustering, inspired by previous work on approximated BC computation [START_REF] Furno | Two-level clustering fast betweenness centrality computation for requirement-driven approximation[END_REF], [START_REF] Suppa | A Clustered Approach for Fast Computation of Betweenness Centrality in Social Networks[END_REF], which makes possible the exact computation of BC on large, undirected graphs with an impressive speedup when compared to Brandes' algorithm and a significant improvement over recent variants of Brandes' algorithm based on clustering [START_REF] Li | Hierarchical Decomposition for Betweenness Centrality Measure of Complex Networks[END_REF]. The algorithm leverages structural properties of graphs to find classes of equivalent nodes: by selecting one representative node for each class, we are able to compute BC while significantly reducing the number of single-source shortest path explorations required by Brandes' algorithm. We formally prove the graph properties that we exploit to define and implement two versions of the algorithm, based on Scala, for both sequential and parallel map-reduce executions. The experimental analysis has been conducted by testing both versions of the algorithm on synthetic and real-world graphs.

The algorithm we propose is able to work with undirected, weighted or unweighted graphs. In this chapter, we focus on unweighted graphs, while the extension to weighted ones can be easily obtained by substituting the breadth-first search (BFS) with Dijkstra's algorithm. Undirected graphs are very common in real-world systems; examples are social networks, communication networks, protein interaction graphs, people interaction graphs, finite element meshes, etc. Among these graphs, scale-free and Barabási-Albert graphs [START_REF] Barabási | Emergence of Scaling in Random Networks[END_REF] represent an important target of our analysis, since they model many real-world systems, such as the World Wide Web, the Internet and other computer networks, citation networks, social networks, airline networks, financial networks, etc.

The main contributions of this work are:
• Introduction of the general concept of equivalence class for reducing BC computation time.
• Formal proof of the existence of topological properties to identify an equivalence class with respect to clusters' border nodes in undirected graphs.
• Two variants of Brandes' back-propagation technique to avoid the direct computation of the dependency score on cluster nodes due to pivots.
• Scala-based implementations of the proposed algorithm for both sequential and parallel map-reduce executions.
• Extensive evaluation of the proposed algorithm with both synthetic and real large-scale graphs.

The rest of the chapter is organized as follows. Section 5.1 positions our work with reference to the main results from the literature. Section 5.2 introduces the notation we use in the rest of the chapter, as well as the background concepts and algorithms that are at the basis of the proposed solution. Section 5.3 presents the main properties we exploit to define the algorithm when clustering is used to identify classes of equivalent nodes.
Section 5.4 illustrates the rationale of the specific implementation of the algorithm and the main steps that characterize it, presenting also the constituent sub-algorithms exploited for the implementation. Section 5.5 reports on the experimental results obtained by running the proposed algorithm on several synthetic and real graphs characterized by different sizes and topological properties. Section 5.6 is dedicated to the formal proofs of the theorem and claims used in our algorithm for fast BC computation. Finally, Section 5.8 summarizes the results and discusses the limits of the proposed solution, highlighting possible future improvements.

Related Work

Brandes' algorithm is a fast and robust solution to compute BC, but it is not adequate for real-time processing of large graphs, since BC computation time increases very rapidly with the graph size, even in sparsely-connected configurations. Several approaches, either exact or approximated, have been developed to reduce BC computation time by improving Brandes' algorithm. The proposed solutions can be classified into five main categories: (a) exploiting and increasing parallelism; (b) updating the BC of specific nodes in dynamically evolving graphs; (c) estimating BC via a partial exploration of the graph in terms of nodes or edges; (d) exploiting structural properties of some kinds of graphs to compress them; (e) reducing complexity by decomposing graphs into clusters. It is worth noting that some proposals may belong to multiple categories since, in many cases, techniques falling in one category can be complemented with other ones to further reduce computation time.

Exploiting parallelism. Brandes' algorithm is extremely parallelizable due to the possibility of performing n independent breadth-first searches (BFS) or Dijkstra explorations on a shared graph structure. In [START_REF] David | Parallel algorithms for evaluating centrality indices in real-world networks[END_REF], an efficient parallel implementation of BC computation is provided. The solution leverages fine-grained multi-level parallelism by concurrently traversing the neighbors of a given node via a shared data structure with granular locking in order to increase concurrency. The improved version of the previous approach, proposed in [START_REF] Madduri | A faster parallel algorithm and efficient multithreaded implementations for evaluating betweenness centrality on massive datasets[END_REF], removes the need for locking in the dependency accumulation stage of Brandes' algorithm through the adoption of a successor list instead of a predecessor list for each node. In [START_REF] Van Der Grinten | Scaling Betweenness Approximation to Billions of Edges by MPI-based Adaptive Sampling[END_REF], the authors propose an MPI-based parallel implementation of the adaptive sampling algorithm KADABRA [START_REF] Borassi | KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation[END_REF]. Other efforts in this direction try to exploit the large number of cores available on GPUs to better exploit parallelism [START_REF] Shi | Fast network centrality analysis using GPUs[END_REF].

Incremental computation. These (stream-based) approaches try to avoid recomputing the BC values of all the nodes of a graph when they are known for a previous configuration, by performing computation over only a small portion of the graph that is impacted by some changes.
Recently, an efficient algorithm for streamed BC computation [START_REF] Kourtellis | Scalable online betweenness centrality in evolving graphs[END_REF] of evolving graphs has been proposed, based on edge addition or removal. However, the algorithm is efficient only when the new graph differs from the old one in a single edge. Continuous BC processing of large graphs to handle streamed changes of a significant number of edges is therefore inefficient. MR-IBC [START_REF] Behera | MR-IBC: MapReduce-based incremental betweenness centrality in large-scale complex networks[END_REF] is a MapReduce-based incremental algorithm for BC computation in large-scale dynamic networks that supports edge addition or removal. The paper exploits distributed computing to achieve scalability and reduce computing time. Even in this case, the focus is on changes related to one single edge. The solution proposed in [START_REF] Shukla | Efficient parallel algorithms for betweenness-and closeness-centrality in dynamic graphs[END_REF], instead, handles batches of updates in parallel. In particular, it exploits a bi-connected component decomposition technique along with some structural properties to improve performance.

Approximated computation. These algorithms aim at achieving low computation time by calculating approximated BC values. Brandes and Pich proposed in [START_REF] Brandes | Centrality estimation in large networks[END_REF] an approximated algorithm for faster BC calculation by choosing only k ≪ n nodes, called pivots, as sources for the single-source shortest path (SSSP) algorithm through different strategies, showing that random selection of pivots can achieve accuracy levels comparable to other heuristics. The approach has been further improved by other authors [START_REF] Geisberger | Better Approximation of Betweenness Centrality[END_REF]. The goal of these algorithms is to calculate the BC only for selected nodes called pivot nodes. The selection of these nodes depends on the problem to solve and may limit the use of BC. KADABRA [START_REF] Borassi | KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation[END_REF] is an adaptive sampling algorithm to approximate betweenness centrality. In particular, it adopts a probabilistic approach: the BC of a node v is seen as the probability that, given two randomly selected nodes s and t and a randomly selected shortest path p between them, v belongs to p. The algorithm allows specifying the maximum absolute error and the probability with which the error is guaranteed. A similar approach is followed in ABRA [START_REF] Riondato | ABRA: Approximating Betweenness Centrality in Static and Dynamic Graphs with Rademacher Averages[END_REF], a suite of algorithms to compute high-quality approximations of the betweenness centrality of all nodes (or edges) of both static and fully dynamic graphs by using progressive random sampling.

Topology manipulation. Some algorithms exploit topological properties of graphs to accelerate BC computation. Puzis et al. in [START_REF] Puzis | Topology manipulations for speeding betweenness centrality computation[END_REF] propose two heuristics to simplify BC computation: (a) identification of structurally equivalent nodes, i.e., nodes that have the same centrality index and contribute equally to the centrality of other nodes; (b) partitioning a large graph into smaller bi-connected sub-graphs.
Computation time on the graph partitions is significantly lower due to the quadratic-to-cubic complexity of Brandes' algorithm. The authors also combine the two techniques to improve the speedup when compared with Brandes' algorithm. In [START_REF] Erdem Sariyüce | Graph manipulations for fast centrality computation[END_REF], the authors use both compression and splitting techniques, including the ones developed in [START_REF] Puzis | Topology manipulations for speeding betweenness centrality computation[END_REF], to reduce the size of the input graph and of its largest connected component, since these are the main parameters that affect the computation time. In particular, they split the input graph by using bridges and articulation vertices, and compress it by removing degree-1, identical and side vertices. Bridges and articulation vertices are edges and nodes, respectively, whose removal from a graph leads to a new graph with a greater number of connected components; degree-1 vertices are leaf nodes which, considered as sources and targets, contribute equally to the computation of the BC of crossed nodes; identical vertices are the ones characterized by the same neighbors and, consequently, by the same BC values; side vertices are nodes such that the graphs induced by their neighbors are cliques and they are not crossed by shortest paths. By using all these techniques, the authors achieve significant speedups with different kinds of graphs. The authors in [START_REF] Baglioni | Fast Exact Computation of betweenness Centrality in Social Networks[END_REF] propose a variant of Brandes' algorithm based on topological characteristics of social networks where nodes belonging to particular tree structures are not considered for Brandes' SSSP explorations; their contribution is simply computed by counting. Topology manipulation and graph compression are very useful techniques with some types of graphs and are complementary to other solutions from the literature, including the one proposed in this chapter.

Reducing complexity by decomposing graphs into clusters. A way to compute BC is to cluster a large graph into smaller sub-graphs, calculate the BC inside these small graphs, and then compute the BC on the remaining part of the graph. A first paper based on this approach was proposed in [START_REF] Suppa | A Clustered Approach for Fast Computation of Betweenness Centrality in Social Networks[END_REF]. This technique exploits a fast clustering method [START_REF] Vincent D Blondel | Fast unfolding of communities in large networks[END_REF] to identify clusters inside a graph. The border nodes of the clusters are then used as reference nodes to discover, for each cluster, classes of nodes that contribute in the same way to the dependency score of the nodes outside the clusters. For each class, a pivot node is selected as the representative node for the computation of the dependency scores from the class nodes to the other graph nodes by exploiting the well-known SSSP exploration of the graph. Hence, the dependency score is multiplied by the cardinality of the class the source node belongs to and summed up with the local contribution of BC, computed by considering only nodes belonging to the clusters, to obtain the final approximated values of BC. This technique can also be classified among the ones based on pivots, typically used for computing approximated BC values, even if the strategy adopted to identify pivots is based on clustering.
The authors in [START_REF] Li | Hierarchical Decomposition for Betweenness Centrality Measure of Complex Networks[END_REF] propose a technique based on clustering to reduce the complexity of BC computation. They prove that, with a decomposition of graphs into hierarchical sub-networks (HSNs), time complexity can be reduced to O(n^2) for unweighted graphs under the hypothesis that the number of clusters c ≫ k/2. In that case, the speedup compared with Brandes' algorithm is in the order of one half of the graph's average degree k, since the number of edges m = k • n/2. This means that if the considered graph has a number of edges m ∼ n, then k ∼ 2 and the speedup is 1, that is, the algorithm is not able to improve on Brandes' algorithm. A very similar solution has been proposed in [START_REF] Erdős | A Divide-and-Conquer Algorithm for Betweenness Centrality[END_REF]. Differently from [START_REF] Li | Hierarchical Decomposition for Betweenness Centrality Measure of Complex Networks[END_REF], the authors propose to build a simplified hierarchical representation of the graph after clustering (named Skeleton) by substituting each cluster with a weighted clique connecting the cluster border nodes. This way, they reduce the number of nodes in the Skeleton but need the computationally more expensive Dijkstra algorithm for computing the shortest paths over the weighted graph. Moreover, the proposed solution computes exact BC values of nodes only with respect to a subset of nodes of a graph, named the target set. When the target set includes all the nodes of a given graph, the solution converges towards Brandes' algorithm, but with the additional overhead due to the creation and exploitation of the skeleton graph. Very recently, a Graph Neural Network (GNN) based model to approximate betweenness and closeness centrality has been proposed [START_REF] Kumar Maurya | Graph Neural Networks for Fast Node Ranking Approximation[END_REF]. This work, among other similar ones [START_REF] Fan | Learning to Identify High Betweenness Centrality Nodes from Scratch: A Novel Graph Neural Network Approach[END_REF], demonstrates that the efficient computation of the BC is a topic of great interest even in the field of deep learning and, particularly, graph neural networks.

In this chapter, we propose a technique to reduce the time needed for computing exact values of BC in undirected graphs by: i) computing BC as the sum of two main contributions, local for each cluster and global among clusters (category e); ii) reducing the SSSP explorations for the global phase through the identification of pivot nodes (category c); and iii) considering HSN-based corrections on local contributions and the properties of undirected graphs to completely remove errors during computation (which affected the first proposal in [START_REF] Suppa | A Clustered Approach for Fast Computation of Betweenness Centrality in Social Networks[END_REF]). This chapter extends our previous proposal in [START_REF] Daniel | Cluster-based Computation of Exact Betweenness Centrality in Large Undirected Graphs[END_REF] by significantly improving the algorithm and its implementations, which have now been tested also with real graphs. Moreover, we formally prove the correctness of the proposed technique.
It is worth noting that our algorithm could be complemented by algorithms from the different aforementioned categories, such as finer-grained parallelism from the first category or compression-based techniques exploiting graphs' topological properties as the ones falling within the second category. Conversely, incremental and approximated computations are approaches for specific classes of applications that regard slowly changing graphs or rank-based exploitation of BC, respectively, which we consider out of the scope of this chapter.

Background

In this section, we first introduce the notation used throughout the chapter, then we briefly describe Brandes' algorithm. Finally, we present the concept of equivalence class, which constitutes the basis of our algorithm.

Let G(V, E) be an undirected unweighted graph with V representing the set of n vertices (or nodes) and E the set of m edges (or links). Let s, t ∈ V be two generic nodes of G. We denote by e_{s,t} the edge connecting s and t. The neighbors of a vertex s are all vertices u such that e_{s,u} ∈ E. The distance between s and t, denoted by d_G(s, t), is the length of the shortest path(s) connecting them in G. The number of shortest paths between s and t is denoted by σ_{s,t}, whereas the number of shortest paths between s and t that cross a generic node v ∈ V is denoted by σ_{s,t}(v). It is worth noting that, since the graph is undirected, d_G and σ are symmetric functions, thus d_G(s, t) = d_G(t, s), σ_{s,t} = σ_{t,s} and σ_{s,t}(v) = σ_{t,s}(v). Given a generic node w ∈ V, P_s(w) = {u ∈ V : e_{u,w} ∈ E, d_G(s, w) = d_G(s, u) + 1} is the set of direct predecessors of vertex w on shortest paths from s. The Betweenness Centrality (BC) of a vertex v ∈ V is defined as follows:

    BC(v) = \sum_{s \neq v \neq t \in V} \frac{\sigma_{s,t}(v)}{\sigma_{s,t}}        (5.1)

BC(v) thus represents the fraction of shortest paths containing v among all the shortest paths in the graph between any generic pair of nodes s and t, summed over all possible pairs s and t with s ≠ v, s ≠ t and v ≠ t. We refer to Table 5.1 for a summary of the notation used in this chapter.

Table 5.1: Notation.
  G : undirected unweighted input graph
  Ĝ : a connected sub-graph of G
  V : set of vertices of G (|V| = n)
  V_Ĝ : set of vertices of G inducing Ĝ (set of vertices of Ĝ)
  V̄_Ĝ : set of vertices in V \ V_Ĝ
  V_HSN : set of vertices of the HSN
  δ^γ_{s,•}(v) : global dependency score of s on v due to all t ∉ V_C(s) (same as δ^γ_{s,V̄_C(s)}(v))
  δ^γ_{s,V_C(v)}(v) : global dependency score of s on v due to all t ∈ (V̄_C(s) ∩ V_C(v))
  δ^γ_{s,V̄_C(v)}(V_C(v)) : global dependency score of s on vertices in V_C(v) due to all t ∈ (V̄_C(s) ∩ V̄_C(v))
  δ^γ(v) : sum of all the global dependency scores (global BC) on v
  δ^γ(V) : sum of all the global dependency scores (global BC) on vertices in V
  δ^λ_{s,•}(v) : local dependency score of s on v due to all t ∈ V_C(s) = V_C(v)
  δ^λ_{s,•}(V) : local dependency score of s on vertices in V due to all t ∈ V_C(s) = V_C(v)
  δ^λ(v) : sum of all the local dependency scores (local BC) on v
  δ^λ(V) : sum of all the local dependency scores (local BC) on vertices in V
  δ^ε_{s,•}(v) : dependency score of s on v, as external node, due to all t ∈ V_C(s)
  δ^ε_{s,•}(EN) : dependency score of s on external nodes EN due to all t ∈ V_C(s)
  δ^ε(v) : sum of all the dependency scores on v as external node
  δ^ε(EN_C(s)) : sum of all the dependency scores on external nodes of cluster C(s)
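For the algorithmic sketches given in the remainder of this chapter, a minimal adjacency-list representation of an undirected, unweighted graph is sufficient. The following Scala snippet is only an illustration (the Graph type and its helper are ours, not the data structures of the actual implementation).

```scala
import scala.collection.mutable.ArrayBuffer

// Minimal undirected, unweighted graph over vertices 0 .. n-1, stored as adjacency lists.
final case class Graph(n: Int, adj: Vector[Vector[Int]]) {
  def neighbors(v: Int): Vector[Int] = adj(v)
}

object Graph {
  // Builds the adjacency structure from an edge list; each undirected edge is stored twice.
  def fromEdges(n: Int, edges: Seq[(Int, Int)]): Graph = {
    val buf = Vector.fill(n)(ArrayBuffer.empty[Int])
    edges.foreach { case (u, v) => buf(u) += v; buf(v) += u }
    Graph(n, buf.map(_.toVector))
  }
}
```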
Brandes' Algorithm

Brandes' algorithm is the fastest known general-purpose sequential algorithm for computing BC. It is based on the notions of pair-dependency and dependency score. Let us consider two generic nodes s, t ∈ V. Given the shortest path counts σ_{s,t}(v) and σ_{s,t}, the pair-dependency δ_{s,t}(v) of a pair s, t on an intermediary node v ∈ V is defined as follows:

    \delta_{s,t}(v) = \frac{\sigma_{s,t}(v)}{\sigma_{s,t}}        (5.2)

The pair-dependency represents the fraction of shortest paths between s and t crossing v. The dependency score δ_{s,•}(v) of a vertex s on a vertex v ∈ V is then defined as follows:

    \delta_{s,\bullet}(v) = \sum_{t \in V} \delta_{s,t}(v)        (5.3)

BC can thus be redefined in terms of dependency scores:

    BC(v) = \sum_{s \neq v \neq t \in V} \frac{\sigma_{s,t}(v)}{\sigma_{s,t}} = \sum_{s \neq v \neq t \in V} \delta_{s,t}(v) = \sum_{s \in V} \delta_{s,\bullet}(v)        (5.4)

The key observation of Brandes' algorithm is that the dependency score obeys a recursive formula. In particular, for each s ∈ V we have:

    \delta_{s,\bullet}(v) = \sum_{w : v \in P_s(w)} \frac{\sigma_{s,v}}{\sigma_{s,w}} \cdot (1 + \delta_{s,\bullet}(w))        (5.5)

Brandes' algorithm runs in two phases, exploiting Equation 5.5. For each (source) node s ∈ V, in the first phase, a single-source shortest-paths (SSSP) algorithm, based on breadth-first search (BFS), is executed on G to find all the shortest paths rooted in s. In the second phase, dependency scores are accumulated by backtracking along the discovered shortest paths using the recursive relation in Equation 5.5. In backtracking, nodes are visited in descending order of distance from the source. During these two phases, for each node v ∈ V, the algorithm builds and exploits the following data structures: the set of direct predecessors P_s(v) on shortest paths from the source, the distance d_G(s, v) from the source, the number of shortest paths σ_{s,v} from the source, and the dependency score δ_{s,•}(v) that accumulates the contribution of the source on node v due to all destinations during the back-propagation step.
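The two phases can be summarized by the following Scala sketch for the unweighted case, written against the Graph type introduced above. It is a compact illustration of Equations 5.2-5.5, not the thesis implementation.

```scala
import scala.collection.mutable

// Illustrative sketch of Brandes' algorithm for unweighted graphs.
object Brandes {
  def betweenness(g: Graph): Array[Double] = {
    val bc = Array.fill(g.n)(0.0)
    for (s <- 0 until g.n) {
      // Phase 1: BFS from s computes d_G(s, v), sigma_{s,v} and the predecessor sets P_s(v).
      val dist  = Array.fill(g.n)(-1)
      val sigma = Array.fill(g.n)(0.0)
      val preds = Array.fill(g.n)(List.empty[Int])
      val order = mutable.ArrayBuffer.empty[Int] // visit order = non-decreasing distance from s
      dist(s) = 0; sigma(s) = 1.0
      val queue = mutable.Queue(s)
      while (queue.nonEmpty) {
        val v = queue.dequeue(); order += v
        for (w <- g.neighbors(v)) {
          if (dist(w) < 0) { dist(w) = dist(v) + 1; queue.enqueue(w) }
          if (dist(w) == dist(v) + 1) { sigma(w) += sigma(v); preds(w) = v :: preds(w) }
        }
      }
      // Phase 2: back-propagation of dependency scores (Equation 5.5),
      // visiting nodes in decreasing order of distance from s.
      val delta = Array.fill(g.n)(0.0)
      for (w <- order.reverseIterator; v <- preds(w))
        delta(v) += sigma(v) / sigma(w) * (1.0 + delta(w))
      for (v <- 0 until g.n if v != s) bc(v) += delta(v) // Equation 5.4, summed over ordered pairs
    }
    bc
  }
}
```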
Equivalence Class

To reduce the number of explorations and thus lower the BC computation time, we exploit the concept of equivalence class. Formally, given a connected sub-graph Ĝ of G induced by the set of nodes V_Ĝ ⊂ V, we define an equivalence class K_i as any subset of nodes in V_Ĝ that produce the same dependency score on all nodes, and for destinations, outside sub-graph Ĝ when used as sources for SSSP explorations. By choosing only one representative node (called pivot) for each class, the correct dependency scores of nodes can be computed by multiplying the scores computed via the SSSP rooted in the pivot by the cardinality of the class, i.e., let k_i be a pivot of K_i and v ∉ V_Ĝ a node outside sub-graph Ĝ, we have:

    \sum_{s \in K_i} \sum_{t \notin V_{\hat{G}}} \delta_{s,t}(v) = |K_i| \cdot \sum_{t \notin V_{\hat{G}}} \delta_{k_i,t}(v)

which, according to our notation, can be rewritten as:

    \sum_{s \in K_i} \delta_{s,\bar{V}_{\hat{G}}}(v) = |K_i| \cdot \delta_{k_i,\bar{V}_{\hat{G}}}(v)        (5.6)

Equation 5.6 clearly shows that a low number of classes significantly reduces the computation time, by allowing to skip a high number of SSSP explorations.

Clustering and BC Computation

A possible technique to identify equivalence classes is to consider reference nodes. Given a generic sub-graph Ĝ, the reference nodes in V_Ĝ are those that need to be traversed to reach, via shortest paths from nodes in V_Ĝ, any other node in V̄_Ĝ. In this chapter, to easily identify reference nodes, we use clustering, and to increase the chances of identifying a low number of equivalence classes, we consider a clustering technique based on modularity, which allows reducing the amount of connections among groups of nodes belonging to different clusters and, consequently, lowers the number of reference nodes to be considered for discovering equivalence classes. The proposed approach relies on a set of mathematical properties that, for the sake of readability, are introduced and used in the following subsections, but proved in Sec. 5.6, at the end of the chapter.

Equivalence Class with Clustering

Let us assume a given graph G is split into a set of clusters C, where a single cluster C_i is a connected sub-graph of G induced by a set of nodes V_{C_i} ⊂ V. For each cluster C_i ∈ C, it is possible to identify a set of border nodes BN_{C_i}. A border node b_i ∈ BN_{C_i} is a node belonging to C_i and having at least one neighbor belonging to another cluster, as graphically presented in Figure 5.1 (circled nodes are border nodes). To discover equivalence classes, for each cluster C_i, we group nodes based on their distance and number of shortest paths to the border nodes. To this end, we can leverage the following theorem (see Section 5.6, Theorem 5.6.1, for the formal proof). Let k ∈ R^+ and l ∈ R, let C_i be a generic cluster of graph G with border nodes BN_{C_i}, and s, p ∈ V_{C_i}. If ∀ b_j ∈ BN_{C_i}, σ_{s,b_j} = k • σ_{p,b_j} and d_G(s, b_j) = d_G(p, b_j) + l, then δ_{s,V̄_{C_i}}(v) = δ_{p,V̄_{C_i}}(v), ∀v ∈ V̄_{C_i}. In other words, any given pair of nodes s, p belonging to the sub-graph induced by the nodes in cluster C_i (i.e., s, p ∈ V_{C_i}) produces the same dependency score on all nodes v ∉ V_{C_i} for destinations t ∉ V_{C_i} if the distances and the numbers of shortest paths from s and p to every border node of C_i are the same, except for an additive or a multiplicative factor, respectively.

From the previous theorem, we can derive the following corollary (formally proved in Sec. 5.6 as Corollary 5.6.1): if ∀ b_j ∈ BN_{C_i}, σ̃_{s,b_j} = σ̃_{p,b_j} and d̃_G(s, b_j) = d̃_G(p, b_j), then δ_{s,V̄_{C_i}}(v) = δ_{p,V̄_{C_i}}(v), ∀v ∈ V̄_{C_i}, where d̃_G(s, b_j) represents the normalized distance of the generic node s to the generic border node b_j, defined as follows:

    \tilde{d}_G(s, b_j) = d_G(s, b_j) - \min_{b_k \in BN_{C_i}} d_G(s, b_k)

and σ̃_{s,b_j} represents the normalized number of shortest paths from the generic node s to the generic border node b_j, defined as:

    \tilde{\sigma}_{s,b_j} = \sigma_{s,b_j} / \min_{b_k \in BN_{C_i}} \sigma_{s,b_k}

Normalized distances and normalized numbers of shortest paths simplify the identification of classes, since nodes of the same class are characterized by the same vector of normalized distances and normalized numbers of shortest paths; the representative nodes selected for the classes are the pivots.

Table 5.2: Normalized distances and normalized numbers of shortest paths for the blue cluster C_1.
  node v | d̃_{C_1}(v, b_1) | d̃_{C_1}(v, b_2) | σ̃_{v,b_1} | σ̃_{v,b_2}
  1      | 0 | 2 | 1 | 2
  2      | 2 | 0 | 2 | 1
  3      | 0 | 0 | 1 | 1
  4      | 0 | 0 | 1 | 1
  5      | 0 | 2 | 1 | 2
  6      | 0 | 0 | 1 | 1
  14     | 0 | 0 | 1 | 1
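As an illustration of the corollary, the following sketch groups the nodes of one cluster by their vectors of normalized distances and normalized shortest-path counts towards the cluster's border nodes. For simplicity it runs the intra-cluster BFS on the original cluster, whereas the thesis uses the extended clusters introduced below; all names are ours.

```scala
import scala.collection.mutable

// Illustrative grouping of cluster nodes into equivalence classes (Corollary 5.6.1).
object EquivalenceClasses {

  // BFS restricted to the nodes in `allowed`; returns distances and shortest-path counts from s.
  private def localBfs(g: Graph, s: Int, allowed: Set[Int]): (Map[Int, Int], Map[Int, Double]) = {
    val dist  = mutable.Map(s -> 0)
    val sigma = mutable.Map(s -> 1.0).withDefaultValue(0.0)
    val queue = mutable.Queue(s)
    while (queue.nonEmpty) {
      val v = queue.dequeue()
      for (w <- g.neighbors(v) if allowed(w)) {
        if (!dist.contains(w)) { dist(w) = dist(v) + 1; queue.enqueue(w) }
        if (dist(w) == dist(v) + 1) sigma(w) += sigma(v)
      }
    }
    (dist.toMap, sigma.toMap)
  }

  // Nodes sharing the same normalized vector towards the border nodes fall in the same class.
  def classes(g: Graph, clusterNodes: Set[Int], borderNodes: Seq[Int]): Map[Vector[(Int, Double)], Vector[Int]] = {
    val keyed = clusterNodes.toVector.map { v =>
      val (dist, sigma) = localBfs(g, v, clusterNodes)
      val ds = borderNodes.map(dist).toVector
      val ss = borderNodes.map(sigma).toVector
      // Normalization: subtract the minimum distance, divide by the minimum shortest-path count.
      val key = borderNodes.indices.map(j => (ds(j) - ds.min, ss(j) / ss.min)).toVector
      key -> v
    }
    keyed.groupMap(_._1)(_._2)
  }
}
```

One pivot per class can then be chosen arbitrarily among the grouped nodes, as done for node 14 in the example discussed later.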
Cluster-based Exact BC Computation

The equivalence classes allow us to compute the dependency score on nodes, and for destinations, that do not belong to the same cluster as the source node, which means that the contributions computed via this approach are only partial. To obtain the total BC, we rewrite Equation 5.4 as follows:

    BC(v) = \sum_{s \in V} \delta_{s,\bullet}(v)
          = \sum_{s \in V} \sum_{t \in V_{C(s)}} \delta_{s,t}(v) + \sum_{s \in V} \sum_{t \notin V_{C(s)}} \delta_{s,t}(v)
          = \underbrace{\sum_{s \in V_{C(v)}} \sum_{t \in V_{C(v)}} \delta_{s,t}(v)}_{\text{sum of local dependency scores} \,=\, \delta^{\lambda}(v)}
          + \underbrace{\sum_{s \in V} \sum_{t \notin V_{C(s)}} \delta_{s,t}(v)}_{\text{sum of global dependency scores} \,=\, \delta^{\gamma}(v)}
          + \underbrace{\sum_{s \notin V_{C(v)}} \sum_{t \in V_{C(s)}} \delta_{s,t}(v)}_{\text{sum of dependency scores on external nodes} \,=\, \delta^{\epsilon}(v)}        (5.7)

As a result, we can distinguish two main terms, the local and global dependency scores. The additional term is necessary to properly take into account the possible existence of shortest paths connecting nodes of the same cluster via nodes belonging to one or more different clusters, i.e., external nodes.

We define the local dependency score of a node s on a node v, δ^λ_{s,•}(v), as the sum of the pair-dependency scores for which the source s, the destinations and node v all belong to the same cluster. We define the local BC of a node v, δ^λ(v), as the BC of v computed on the sub-graph C(v). Local BC is computed using Brandes' algorithm inside each cluster, which generates, as a by-product, additional information (i.e., the number of shortest paths and distances to border nodes). This information is later used to group nodes into equivalence classes and to speed up the computation of global dependency scores, as further discussed (see Subsec. 5.4.2).

The global dependency score of a node s on a node v, δ^γ_{s,•}(v), is the sum of all the pair-dependency scores for which the destinations do not belong to the same cluster as source node s. The global BC of the generic node v, δ^γ(v), is thus the sum of the global dependency scores for source node s ranging over the whole set of nodes V.

The dependency score of a node s on an external node v, i.e., C(v) ≠ C(s), denoted δ^ε_{s,•}(v), is the sum of all the pair-dependency scores for which the destinations belong to the same cluster as the source node s. We denote by δ^ε(v) the sum of all the dependency scores on v when v is an external node and the sources and destinations are in the same cluster, different from C(v). This last term δ^ε(v) is equal to zero when the clustering is ideal, i.e., when all the shortest paths between any pair of nodes of a cluster only contain nodes from that same cluster. When this condition is not fulfilled, multiple side effects due to the presence of external nodes have to be taken into account, as discussed below.

External Nodes/Shortest Paths

Given a cluster C_i, two nodes s, t ∈ C_i and two border nodes b_1, b_2 ∈ C_i, there may exist shortest paths between s and t which exit C_i through b_1, cross a certain number of nodes belonging to other clusters and then re-enter C_i through b_2. We call these shortest paths external shortest paths, and the nodes lying on them which do not belong to C_i, EN_{C_i}, external nodes of C_i. If the existence of such external shortest paths is neglected, BC computation will be affected by an error due to incorrect values of the distances and the counts of shortest paths between pairs of nodes inside the same cluster. Consequently, an error in the computation of the local BC, δ^λ, and in the identification of equivalence classes will be introduced. This was one of the approximation errors that affected the previous version of our algorithm [START_REF] Suppa | A Clustered Approach for Fast Computation of Betweenness Centrality in Social Networks[END_REF]. To remove this intra-cluster error, we adopt the idea proposed by the authors in [START_REF] Li | Hierarchical Decomposition for Betweenness Centrality Measure of Complex Networks[END_REF]. After clustering, we build a Hierarchical Sub-Network (HSN), i.e., a sub-graph of G induced by the border nodes of all the clusters and by the nodes lying on the intra-cluster shortest paths between pairs of border nodes of the same cluster.
By retrieving all the shortest paths between pairs of border nodes of the same cluster via the HSN, we are able to identify the possible external nodes of that cluster. Afterwards, we can extend each cluster with the related external nodes and use the extended clusters as sub-graphs to identify equivalence classes and pivots. Thus, the local BC δ^λ can be correctly computed inside these extended clusters instead of the initial ones. Formally, an extended cluster C*_i of a cluster C_i ∈ C is defined as a connected sub-graph induced by the nodes V_{C*_i} = V_{C_i} ∪ EN_{C_i}.

To better understand how the HSN is built and how it is used to form the extended clusters, we provide an illustrative example. Let us consider again the clustered graph from Figure 5.1. In cluster C_1, nodes 1 and 2 are border nodes, while node 4 lies on the only intra-cluster shortest path between them. In cluster C_2, nodes 17 and 20 are border nodes and nodes 15, 21, 19 and 16 lie on the intra-cluster shortest paths between them. Finally, in cluster C_3, there is only border node 8. All the aforementioned nodes build up the HSN (see Figure 5.3a). If we now consider the shortest paths between border nodes 1 and 2 via the HSN, we notice that node 17 lies on a shortest path connecting the two former nodes. Consequently, it represents an external node of C_1 (see Figure 5.3b).

Dependency Score of Pivots

From the equivalence class relationship described in Subsec. 5.2.2, a pivot of such a class is representative only for the dependency scores on nodes v, and destinations t, which do not belong to its own cluster. In fact, given a cluster C_i ∈ C and all its equivalence classes K_{C_i}, from Equation 5.6 we have:

    \sum_{s \in K_i} \delta_{s,\bar{V}_{C_i}}(v) = |K_i| \cdot \delta_{k_i,\bar{V}_{C_i}}(v) \qquad \forall v \in \bar{V}_{C_i},\ K_i \in K_{C_i}        (5.8)

This equation can be exploited to speed up the computation of BC building on Brandes' algorithm and SSSP explorations, but it only holds if v ∉ V_{C_i}. Thus, it cannot be directly applied to correctly compute the values of global BC when v is in the same cluster as the source. Therefore, the algorithm requires a more elaborate approach to properly and efficiently calculate the contribution from the pivot of K_{C_i} to the BC of nodes v ∈ V_{C_i}. First of all, let us decompose the global dependency scores from Equation 5.7 based on the cluster of node v, as follows:

    \delta^{\gamma}(v) = \sum_{s \notin V_{C(v)}} \sum_{t \notin (V_{C(v)} \cup V_{C(s)})} \delta_{s,t}(v) + \sum_{s \notin V_{C(v)}} \sum_{t \in V_{C(v)}} \delta_{s,t}(v) + \sum_{s \in V_{C(v)}} \sum_{t \notin V_{C(v)}} \delta_{s,t}(v)        (5.9)

(In our previous version of the algorithm, we used Equation 5.8 without taking into account the cluster of v, and the pivots were chosen to minimize the error. Here, we avoid such an error during the computation of the global dependency scores by exploiting the properties of undirected graphs, as explained later.)

The previous equation can be further simplified by considering the following claim, which is proved in Sec. 5.6 as Claim 5.6.1. In undirected graphs:

    \sum_{s \in V_{C(v)}} \sum_{t \notin V_{C(v)}} \delta_{s,t}(v) = \sum_{s \notin V_{C(v)}} \sum_{t \in V_{C(v)}} \delta_{s,t}(v)        (5.10)

By relying on Equation 5.10, it becomes possible to replace with zero the sum of the pair-dependencies δ_{s,t}(v) for which s ∈ V_{C(v)} and t ∉ V_{C(v)} in Equation 5.9, and to compensate later for the lack of this term by doubling the sum of the pair-dependencies δ_{s,t}(v) for which s ∉ V_{C(v)} and t ∈ V_{C(v)}.
The global dependency scores in Equation 5.9 are therefore redefined as follows:

    \delta^{\gamma}(v) = \sum_{s \notin V_{C(v)}} \sum_{t \notin (V_{C(v)} \cup V_{C(s)})} \delta_{s,t}(v) + 2 \cdot \sum_{s \notin V_{C(v)}} \sum_{t \in V_{C(v)}} \delta_{s,t}(v)        (5.11)

With this further step, we can now use pivots to efficiently compute the exact global BC. In particular, let δ^γ_{s,V_C(v)}(v) and δ^γ_{s,V̄_C(v)}(v) be, respectively, the global dependency score from node s on node v for destinations not belonging to C(s) but belonging to C(v), and the global dependency score from node s on node v for destinations belonging neither to C(s) nor to C(v). Equation 5.11 can be rewritten as follows:

    \delta^{\gamma}(v) = \sum_{s \notin V_{C(v)}} \left[ 2 \cdot \delta^{\gamma}_{s,V_{C(v)}}(v) + \delta^{\gamma}_{s,\bar{V}_{C(v)}}(v) \right]        (5.12)

Therefore, given a cluster C_i ∈ C and all its equivalence classes K_{C_i}, we have, ∀v ∉ V_{C_i}, K_i ∈ K_{C_i}:

    \sum_{s \in K_i} \left[ 2 \cdot \delta^{\gamma}_{s,V_{C(v)}}(v) + \delta^{\gamma}_{s,\bar{V}_{C(v)}}(v) \right] = |K_i| \cdot \left[ 2 \cdot \delta^{\gamma}_{k_i,V_{C(v)}}(v) + \delta^{\gamma}_{k_i,\bar{V}_{C(v)}}(v) \right]        (5.13)

Equation 5.13 means that, during the back-propagation phase, we should distinguish between contributions due to destinations inside the same cluster as v and contributions due to destinations outside the cluster of v.

For a better understanding of the formulas above, let us consider an illustrative example by leveraging again the clustered graph from Figure 5.1 and the equivalence classes of cluster C_1 from Figure 5.2. The pivot node of the equivalence class composed of nodes {3, 4, 6, 14} is node 14. According to the proposed approach, we calculate the dependency scores from node 14 on all nodes of clusters C_2 and C_3 and multiply them by 4, avoiding the computation of the dependency scores from nodes 3, 4 and 6. This way, the computation time is divided by 4. However, while it is correct to multiply by 4 the dependency scores for nodes in C_2 and C_3, it is not for nodes belonging to the same cluster as the pivot (see Figure 5.4a), since nodes 14, 3, 4 and 6 of the class are equivalent only with reference to the border nodes of cluster C_1 (nodes 1, 2). Therefore, we cannot multiply by 4 the dependency scores on nodes 1, 2, 3, 4, 5, 6, since these scores are not the same when computed, for instance, from node 14 or from node 4. To avoid the problem, we set these dependency scores to 0 and we later compensate during the SSSP explorations from a pivot node in C_2 and C_3 (see Figure 5.4b).

(Figure 5.4: (a) problem with nodes belonging to the clusters of pivots; (b) solution for undirected graphs.)

A further issue concerns the back-propagation itself: the recursive formula of Equation 5.5 cannot be applied directly to propagate δ^γ_{s,•}(w), where v ∈ P_s(w). Indeed, when C(v) ≠ C(w) (i.e., when crossing a cluster), the set of destinations of w which do not belong to C(w) can be composed of both destinations belonging to C(v) and destinations not belonging to C(v): for the former, the pair-dependencies have to be multiplied by 2, whereas for the latter no further operation is needed (see Equation 5.13). To overcome this problem, we apply the classic recursive formula of Brandes' algorithm (Equation 5.5) to a vector of contributions, propagating the global dependency scores δ^γ_{s,•}(v). The dimensions of this vector of contributions correspond to the number of clusters, so that the contribution due to a destination t is assigned to δ^γ_{s,V_C(t)}(v).
Formally, we have the following recursive formula:

    \forall C_i \in C \setminus C(s): \quad \delta^{\gamma}_{s,V_{C_i}}(v) = \sum_{w : v \in P_s(w)} \frac{\sigma_{s,v}}{\sigma_{s,w}} \cdot \left( \mathbb{1}_{w \in C_i} + \delta^{\gamma}_{s,V_{C_i}}(w) \right)        (5.14)

where \mathbb{1}_{w \in C_i} represents a boolean variable equal to 1 if w ∈ C_i, and 0 otherwise. At the end of the back-propagation phase, we set to 0 the dependency scores of the nodes v belonging to the same cluster as the (pivot) source node, whereas the dependency scores of the nodes belonging to the other clusters are computed using the following formula:

    \delta^{\gamma}_{s,\bar{V}_{C(s)}}(v) = 2 \cdot \delta^{\gamma}_{s,V_{C(v)}}(v) + \sum_{C_i \neq C(v)} \delta^{\gamma}_{s,V_{C_i}}(v)        (5.15)

Finally, according to Equation 5.13, δ^γ_{s,•}(v) is multiplied by the cardinality of the equivalence class s belongs to.
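Read together, Equations 5.14 and 5.15 amount to a back-propagation over a per-cluster vector of contributions. The sketch below is an illustrative rendering of that idea, reusing the BFS of the earlier Brandes sketch; names such as globalDependencies are ours, and the returned scores still have to be multiplied by the class cardinality as in Equation 5.13.

```scala
import scala.collection.mutable

// Illustrative pivot back-propagation (Equations 5.14 and 5.15); not the thesis code.
object PivotBackPropagation {

  // Full-graph BFS from the pivot s (same structure as in the Brandes sketch).
  private def bfs(g: Graph, s: Int): (Array[Double], Array[List[Int]], Seq[Int]) = {
    val dist = Array.fill(g.n)(-1); val sigma = Array.fill(g.n)(0.0)
    val preds = Array.fill(g.n)(List.empty[Int]); val order = mutable.ArrayBuffer.empty[Int]
    dist(s) = 0; sigma(s) = 1.0
    val queue = mutable.Queue(s)
    while (queue.nonEmpty) {
      val v = queue.dequeue(); order += v
      for (w <- g.neighbors(v)) {
        if (dist(w) < 0) { dist(w) = dist(v) + 1; queue.enqueue(w) }
        if (dist(w) == dist(v) + 1) { sigma(w) += sigma(v); preds(w) = v :: preds(w) }
      }
    }
    (sigma, preds, order.toSeq)
  }

  def globalDependencies(g: Graph, s: Int, cluster: Int => Int, numClusters: Int): Array[Double] = {
    val (sigma, preds, order) = bfs(g, s)
    val cs = cluster(s)
    // Equation 5.14: one accumulated contribution per destination cluster C_i != C(s).
    val vec = Array.fill(g.n, numClusters)(0.0)
    for (w <- order.reverse; v <- preds(w); ci <- 0 until numClusters if ci != cs) {
      val indicator = if (cluster(w) == ci) 1.0 else 0.0
      vec(v)(ci) += sigma(v) / sigma(w) * (indicator + vec(w)(ci))
    }
    // Equation 5.15: zero inside C(s); elsewhere the own-cluster term is doubled (undirected graphs).
    Array.tabulate(g.n) { v =>
      if (cluster(v) == cs) 0.0
      else 2.0 * vec(v)(cluster(v)) + (0 until numClusters).collect {
        case ci if ci != cluster(v) && ci != cs => vec(v)(ci)
      }.sum
    }
  }
}
```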
E1C-FastBC Algorithm

In this section, we describe the E1C-FastBC algorithm, the implementation of the cluster-based solution introduced in the previous section. We also discuss a parallel version based on MapReduce.

Louvain Clustering

To group nodes into clusters and minimize the number of border nodes, |BN|, and consequently |BN_{C_i}| for each cluster C_i ∈ C, we exploit a modularity-based clustering algorithm. Modularity is a scalar metric, defined in the range between -1 and 1, which measures the density of links inside clusters as compared to links between them: the higher its value, the lower the number of inter-cluster links. Consequently, maximizing the modularity score reduces the number of border nodes in the clusters. This not only keeps the complexity of the algorithm low, by reducing the size of the HSN and the number of nodes against which topological properties (normalized distances and normalized numbers of shortest paths) have to be computed, but also maximizes the chances of having few equivalence classes, each with many nodes, since smaller vectors (those storing the topological properties) increase the probability of linear dependency among them and consequently lead to a smaller number of classes. This is highly beneficial from the perspective of reducing SSSP explorations.

The Louvain method [START_REF] Vincent D Blondel | Fast unfolding of communities in large networks[END_REF] is an example of modularity-based clustering technique. Its time complexity of O(n log_2 n) is very good compared to that of Brandes' algorithm. The Louvain algorithm runs in two phases, which are iteratively repeated. In the first phase, each node is initially assigned to its own cluster and then moved to the cluster of the neighbor which ensures the maximum increase of modularity with respect to the previous configuration. This phase terminates when all nodes have been explored and no further modularity improvement is possible. In the second phase, a new graph is generated by considering the identified clusters as nodes, and the loops inside them as self-loops. Phase one is then repeated using the graph generated by the second phase. The two phases are iterated until a maximum of modularity is reached and a hierarchical structure of clusters has been formed. The output of the algorithm, and consequently the modularity of the identified clusters, may be affected by the order in which nodes are evaluated. This order can also influence the computation time. To improve solutions that are sub-optimal in terms of modularity, multiple runs of the algorithm can be performed over the same network, each associated with a different order for the analysis of the nodes.

Algorithm Implementation

Algorithm 2 reports the pseudo-code of the E1C-FastBC algorithm, taking as input an undirected unweighted graph G and producing as output the exact values of BC for every node in V. The algorithm is composed of several phases. We provide a detailed description of all the intermediate phases of Algorithm 2, while the associated pseudo-code is provided for the most relevant ones.

Algorithm 2: Pseudo-code of the E1C-FastBC algorithm.

At line 2, the graph is organized into clusters by means of the Louvain method described above. These clusters do not need to be explicitly stored in a dedicated data structure, as they represent a view of the starting graph, filtered through membership information stored in every node. At line 3, we identify the set of border nodes BN by checking, for each node v ∈ V, the existence of at least one neighbor belonging to a different cluster. At line 4, the nodes building up the HSN, referred to as V_HSN, are retrieved. As detailed in the pseudo-code of Algorithm 3, to build the HSN we first execute |BN| local BFS explorations, each rooted in a border node used as source, i.e., s ∈ BN. The term local here refers to the fact that only nodes belonging to the same cluster as s, i.e., V_{C(s)}, are crossed during the explorations. Each BFS returns the set of direct predecessors P_s(V_{C(s)}) of every node in V_{C(s)} on shortest paths from s. These sets are later used at line 6 of Algorithm 3 to cross the discovered shortest paths backwards, starting from the destinations t. Each traversal returns the set of nodes lying on the shortest paths between a pair of border nodes of the same cluster: these nodes, together with the source and destination border nodes themselves, belong to the HSN and are therefore added to the set of all its nodes, i.e., V_HSN.

Algorithm 3: Pseudo-code of the HSN-building algorithm.

At line 5, we identify the external nodes EN, as detailed in Algorithm 4. First, a BFS is executed from each source s ∈ BN. In these explorations, only nodes of the HSN, V_HSN, are considered. Each BFS returns the set of direct predecessors of every node in V_HSN on shortest paths from s, i.e., P_s(V_HSN). Similarly to the previous step, shortest paths are crossed backwards from destination t to source s using the sets of predecessors, and every crossed node not belonging to the cluster of s and t is added to the set of external nodes EN.

Algorithm 4: Pseudo-code of the external-nodes identification algorithm.
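Concretely, the only change with respect to the back-propagation of the earlier Brandes sketch is that external nodes, acting as destinations, contribute no unit term. The following snippet is a minimal illustration of that modification (names are ours; isExternal is assumed to come from the external-node identification step above).

```scala
// Illustrative back-propagation used for local BC inside an extended cluster:
// destinations that are external nodes are skipped (their "+1" term is suppressed).
def localBackPropagation(order: Seq[Int], preds: Array[List[Int]], sigma: Array[Double],
                         isExternal: Int => Boolean): Array[Double] = {
  val delta = Array.fill(sigma.length)(0.0)
  for (w <- order.reverse; v <- preds(w)) {
    val asDestination = if (isExternal(w)) 0.0 else 1.0 // non-local destinations not propagated
    delta(v) += sigma(v) / sigma(w) * (asDestination + delta(w))
  }
  delta
}
```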
The extended clusters C* are generated at line 6 of Algorithm 2 from the original clusters, updated with the external nodes. At line 7, we compute in each (extended) cluster: i) the local BC of every node, i.e., δ^λ; ii) the BC contributions δ^ε on external nodes; and iii) the topological properties of every node, i.e., the normalized distances d̃ and the normalized numbers of shortest paths σ̃, computed with respect to the set of border nodes belonging to the cluster of the node. These topological properties are subsequently used at line 8 to find the equivalence classes (see Subsec. 5.3.1). A modified version of Brandes' algorithm enables the computation of all these metrics, as described in Algorithm 5. The only difference compared to the canonical implementation of Brandes' algorithm is related to the back-propagation phase: in our case, the contributions due to the external nodes (as destinations) are not propagated, since they represent non-local destinations.

Algorithm 5: Pseudo-code of local BC computation.
1: function computeLocalδ(C*, C, EN, BN, G)
2:   for s ← V do
3:     (δ^λ_{s,•}(V_C(s)), δ^ε_{s,•}(EN_C(s)), σ̃_{s,BN_C(s)}, d̃_G(s, BN_C(s))) ← BrandesModifiedV1(s, C*(s), C(s), BN, G)
4:     δ^λ(V_C(s)) ← δ^λ(V_C(s)) + δ^λ_{s,•}(V_C(s))
5:     δ^ε(EN_C(s)) ← δ^ε(EN_C(s)) + δ^ε_{s,•}(EN_C(s))
6:   end for

Finally, at lines 10-12 of Algorithm 2, all the previously computed partial terms are aggregated via a sum operation to obtain the exact BC values of each node.

Algorithm 6: Pseudo-code of global BC computation.
1: function computeGlobalδ(K, G, C)
2:   for K_i ← K do    ▷ For all classes
3:     k_i ← random(K_i)
4:     δ^γ_{k_i,V̄_C(k_i)}(V) ← BrandesModifiedV2(k_i, G, C)
5:     δ^γ(V̄_C(k_i)) ← δ^γ(V̄_C(k_i)) + δ^γ_{k_i,V̄_C(k_i)}(V̄_C(k_i)) · |K_i|
6:   end for

The proposed E1C-FastBC algorithm can be parallelized, since the execution of its sub-algorithms is highly parallelizable, with the only exception of the selected clustering algorithm. We can exploit data parallelism by performing the same operations on different partitions of a given graph, leveraging the MapReduce paradigm, since most of the computations are applied to each node of the graph.

Figure 5.5 reports the representation of the parallel version of E1C-FastBC, built using some key concepts introduced in Apache Spark, a popular big data processing engine that we use to run the tests reported in the experimental evaluation. (Figure 5.5: the main execution flow is represented by solid arrows, while dashed ones represent broadcast variables; jobs and tasks are depicted as rounded rectangles and cubes, respectively; RDDs are shown as non-rounded rectangles.) Spark applications are generally defined in terms of transformations and actions that are applied to Resilient Distributed Datasets (RDDs). RDDs are immutable collections of data partitioned across the worker nodes of a cluster. Transformations and actions can be processed in parallel on such partitions. In particular, transformations are functions that produce new RDDs starting from the ones they are invoked on, whereas actions are functions that return the result of a computation performed over an RDD. A Spark application is a collection of jobs, each created to perform an action and executed by one or more executors deployed on the worker nodes of the cluster by running, in parallel, tasks over the partitions of an RDD. The tasks of the jobs encapsulate all the transformations that have to be applied to the RDDs. The latter are then collected at the end of the jobs by the master node of the cluster: such a node hosts the driver, which is the process responsible for running the application. To process RDDs, jobs may also require other inputs that can possibly be shared across executors through the so-called broadcast variables: they are lazily copied between worker nodes during execution.

The parallel version of E1C-FastBC is a sequence of jobs, each implementing one or more sub-algorithms of the algorithm, as detailed in Figure 5.5. The main execution flow is represented with solid arrows, whereas with dashed arrows we represent data that are copied via broadcast among all Spark workers and needed to carry out the jobs. Each job executes a specific type of task, as illustrated in Figure 5.5, and receives two classes of inputs: i) RDDs, which are used to guide parallelism, i.e., the number of tasks, and ii) broadcast variables, which are used by every single task to process its own partition. In the following, we describe each job in terms of task behaviors and needed inputs.
In the following, we describe each job in terms of task behavior and needed inputs.

• Job 1 organizes the graph into clusters (Algorithm 2, line 2) by performing parallel executions of the Louvain method with different configurations, with the aim of selecting the one that produces the clustering with the best modularity score. The job takes as input the graph, passed as a broadcast variable, and outputs the clusters. The starting RDD, which does not contain data, only enables parallel execution of multiple runs of the Louvain method.

• Job 2 identifies the border nodes (Algorithm 2, line 3) by checking, for each node, the existence of at least one neighbor belonging to a different cluster. It requires as input the set of clusters and the graph, both passed as broadcast variables. The starting RDD contains all the nodes to analyze and is built from the whole set of graph vertices.

• Job 3 retrieves the HSN nodes (Algorithm 2, line 4) by performing, for each border node, a constrained (intra-cluster) BFS. It needs the border nodes, the clusters and the graph as inputs. A broadcast variable is used for all of them, but the set of border nodes is also used to build the starting RDD. Each execution nevertheless requires the availability of the whole set of border nodes i) to avoid leaving clusters while performing BFSs and ii) to check whether a destination is a border node.

• Job 4 discovers the external nodes (Algorithm 2, lines 5-6) through BFSs bound to nodes belonging to the HSN. Compared to Job 3, it therefore requires the HSN nodes as an additional input, passed as a broadcast variable, while the starting RDD is the same as that of Job 3. At the end, the job outputs the clusters extended with external nodes.

• Job 5 computes the local BC, the BC on external nodes, the normalized distances and the normalized numbers of shortest paths (Algorithm 2, line 7). The job receives the graph, the clusters, the extended clusters and the border nodes as inputs, all transferred as broadcast variables (the set of external nodes is not passed explicitly, as external nodes can be recognized by leveraging the extended clusters). The starting RDD of this job contains all the nodes of the graph.

• Job 6 identifies the equivalence classes and their pivots (Algorithm 2, line 8). The starting RDD contains the topological properties (normalized distances and normalized numbers of shortest paths) per node, while the inputs passed as broadcast variables are the same as for the previous job.

• Job 7 computes the global BC (Algorithm 2, line 9) by using a starting RDD containing pairs composed of a pivot and the cardinality of its equivalence class. The only inputs passed via broadcast variables are the graph and the clusters.

Final BC values are obtained by aggregating all the previously calculated values. This step is performed entirely on the driver in a sequential manner. In all cases, except for Job 1, we use a node-level grain: all functions encapsulated in the various tasks are defined to work starting from a single node (simple node, border node or pivot).

Figure 5.6 reports a detailed description of Job 2 to exemplify how a job is performed. Solid arrows represent elaboration phases, while dashed ones represent data transfer phases; the dotted box shows the set of transformations applied by the tasks hosted on the executors over the different partitions, and the dashed boxes report the source code of the job and of the first map task in the pipeline. The job is triggered by the collect action. The driver builds the initial RDD by executing the parallelize method (1). The number of partitions is equal to the number of executors, i.e., each executor works on a single partition.
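The step numbers referenced in this walkthrough, (1) to (5), are made concrete by the following PySpark rendition of Job 2 (the thesis implementation is in Scala, shown in Figure 5.6). The adjacency list indexed by node id and the cluster assignment array are assumed representations, broadcast so that every task can read them while processing its partition of node ids; the remaining steps of the walkthrough continue below.

from pyspark import SparkContext

def border_nodes_job(sc: SparkContext, adjacency, node_to_cluster, num_executors):
    # Broadcast the (read-only) graph and the node-to-cluster mapping.
    adjacency_b = sc.broadcast(adjacency)
    cluster_b = sc.broadcast(node_to_cluster)

    def is_border_node(node):
        # True if at least one neighbor lies in a different cluster.
        c = cluster_b.value[node]
        return any(cluster_b.value[n] != c for n in adjacency_b.value[node])

    border_nodes = (
        sc.parallelize(range(len(node_to_cluster)), num_executors)  # (1) initial RDD of node ids
          .map(lambda node: (is_border_node(node), node))           # (3a) tag each node
          .filter(lambda pair: pair[0])                             # (3b) keep border nodes only
          .map(lambda pair: pair[1])                                # (3c) drop the boolean tag
          .collect()                                                # (4) gather on the driver
    )
    return set(border_nodes)                                        # (5) store as a set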
Partitions are sent to the executors along with the operations to be performed on them (2). These operations are encapsulated in a task. In particular, the first map operation (3a) generates an intermediate RDD of key-value pairs in which the second element is the unique node identifier and the first element is a boolean value (true/false) indicating whether the node is a border node or not. Then, the filter operation creates a new intermediate RDD containing only the pairs whose key is true (3b). Finally, the second map operation produces an RDD by extracting only the second element of the previous pairs (3c). The border nodes are then collected on the driver (4) and stored in a set (5).

Experimental Evaluation

In this section, we report the main results of the experimental evaluation we conducted by testing the algorithm with both sequential and parallel executions, on graphs of different types and sizes.

We compare the execution times obtained with our algorithm to those obtained with other algorithms by using the Algorithmic Speedup (AS). Given two algorithms a1 and a2, the algorithmic speedup of a1 over a2 with p cores, noted AS^{a1/a2}_p, is defined as T^{a2}_p / T^{a1}_p, where T^{a2}_p and T^{a1}_p are the computation times obtained with p cores using algorithms a2 and a1, respectively. Hence, the larger the value of AS, the faster a1 is compared to a2 with the same computing resources. For example, AS = 2 means that the time taken by a1 is half the time taken by a2, i.e., that a1 is two times faster than a2. In particular, we compare the E1C-FastBC algorithm, labelled E, with Brandes' algorithm, labelled B, and with the solution proposed in [76], labelled H. We chose this algorithm for comparison because it belongs to the same category as ours (cluster-based computation) and it addresses the problem of exact BC computation. However, due to the unavailability of source or executable code for H, we only consider the AS metric in sequential mode, relying on the indications provided by the authors for its computation (see Equation 7 in [76]).

To further explore the performance of our solution, we also analyze the efficiency of the E1C-FastBC algorithm, based on the canonical definition of speedup. Specifically, the speedup obtained with p cores is defined as S_p = T_s / T_p, where T_s is the computation time in sequential mode and T_p is the computation time with p cores. The efficiency with p cores, noted E_p, is then defined as S_p / p. Efficiency may influence the AS metric, as demonstrated by the following analysis. From the definition of speedup we can write that T^{a2}_p = T^{a2}_s / S^{a2}_p, where T^{a2}_s and S^{a2}_p are the execution time in sequential mode and the speedup obtained with p cores of algorithm a2.
Similarly, we can write that T^{a1}_p = T^{a1}_s / S^{a1}_p. By using these equations in the definition of the AS metric, we obtain AS^{a1/a2}_p = (T^{a2}_s / T^{a1}_s) · (S^{a1}_p / S^{a2}_p). Since, for a given number of cores p, S^{a1}_p / S^{a2}_p = E^{a1}_p / E^{a2}_p, we have:

AS^{a1/a2}_p = (T^{a2}_s / T^{a1}_s) · (E^{a1}_p / E^{a2}_p)

The relationship between AS and E_p suggests that, if a1 and a2 have comparable efficiency with p cores, then the AS only depends on the ratio of the execution times of the two algorithms in sequential mode, thus providing interesting insights to comparatively analyze the two solutions. Finally, we also provide a breakdown analysis of the computation time of our solution, useful to investigate the contribution of each of its component sub-algorithms. In all reported tests, we checked the accuracy of our solution, always observing zero error on BC values.

Datasets

In our tests, we consider both synthetic and real graphs. For the first category, we focus on scale-free graphs generated using the implementation of the Barabási-Albert model provided by the Python library NetworkX. According to that model, a graph of n nodes is grown by attaching new nodes, one at a time, each with m′ edges that are preferentially attached to existing nodes with high degree. In our case m′, which is called the preferential attachment coefficient, is equal to 1. This way, we obtain graphs with m = n − 1 edges and an average degree approximately equal to 2, i.e., double the preferential attachment coefficient. This choice is motivated by the features of the current implementation of our algorithm, which benefits from high modularity; in other words, this class of datasets is considered a best-case scenario. However, as mentioned in the introduction, this does not limit the applicability of our solution, because many real-world systems can be represented with the Barabási-Albert model. In particular, to analyze the algorithm in terms of performance and scalability, we generate graphs of different sizes (see Table 5.3).

For the second category, we focus on some real graphs available in public datasets (for each graph, we extract the largest connected component and convert it into an unweighted undirected graph). Table 5.3 reports all the graphs we use, together with some relevant properties: for each graph we consider the average degree (d_avg), the max degree (d_max) and the average clustering coefficient (cc_avg). All the datasets, except the one related to the Lyon road network (supplied by the French National Institute of Geographic Information, IGN), are scale-free graphs.

Table 5.3: Topological information of synthetic & real graphs
           Graph                      n        m        d_avg  d_max  cc_avg
Synthetic  barabási-albert            6,250    6,249    1.999  126    0.000
           barabási-albert            12,500   12,499   1.999  225    0.000
           barabási-albert            25,000   24,999   1.999  344    0.000
           barabási-albert            50,000   49,999   1.999  463    0.000
           barabási-albert            100,000  99,999   1.999  1,138  0.000
           barabási-albert            200,000  199,999  1.999  676    0.000
           barabási-albert            400,000  399,999  1.999  1,142  0.000
           barabási-albert            800,000  799,999  1.999  1,587  0.000
Real       web-webbase-2001 [96]      16,062   25,593   3.187  1,679  0.224
           ego-twitter [97]           22,322   31,823   2.851  238    0.072
           internet [96]              124,651  193,620  3.107  151    0.062
           lyon-road-network          156,102  178,845  2.291  8      0.017
           email-euAll [98]           224,832  339,925  3.024  7,636  0.079

Experimentation Testbed

The platform for our experiments is a server machine with 128 GB of RAM and 2 sockets Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz, each with 14 physical cores and 2 threads per core, for a total of 28 logical cores per socket and 56 virtual cores in hyper-threading, running Linux Debian as operating system. Both Brandes' algorithm and E1C-FastBC are implemented in Scala and executed using Apache Spark 2.2.0 in standalone mode. In particular, we deploy a Spark cluster composed of the master node and one worker node holding all the available resources (56 cores and approximately 125 GB of memory). Tests are performed employing a driver with one core and 30 GB of memory, and a variable number of executors having a variable amount of memory and computing resources.
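The synthetic datasets described above can be reproduced with NetworkX as in the short sketch below; the seed value and the node counts in the loop are illustrative choices, while the preferential attachment coefficient m′ = 1 matches the setting used in the thesis.

import networkx as nx

# Barabási-Albert graphs with preferential attachment coefficient m' = 1:
# n nodes, n - 1 edges, average degree ~ 2 (twice the attachment coefficient).
for n in [6_250, 12_500, 25_000, 50_000]:
    g = nx.barabasi_albert_graph(n, 1, seed=42)
    avg_degree = 2 * g.number_of_edges() / g.number_of_nodes()
    print(n, g.number_of_edges(), round(avg_degree, 3))  # e.g. 6250 6249 2.0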
Regarding the executor configuration, except for the case with one core (sequential execution), where there is only one executor holding all the resources (i.e., the single core and 90 GB of memory), we fix the total number of cores for the experimentation and instantiate a number of executors such that each of them has 5 cores. The amount of memory is divided evenly among the executors: for instance, with 5 cores we deploy one executor with 90 GB of memory, while with 10 cores two executors, each with 45 GB of memory, are deployed. The RDDs are decomposed into a number of partitions equal to the total number of cores.

Synthetic Graphs Analysis

Figure 5.7 reports the algorithmic speedup of E1C-FastBC over Brandes' algorithm, AS^{E/B}_p, obtained on the synthetic graphs in both sequential and parallel modes. In particular, we double the number of nodes from 25,000 up to 800,000, and we consider a number of cores p equal to 1, 5, 10, 15, 20 and 25. We estimate by log-log regression the computation times of Brandes' algorithm for the graphs with 400,000 and 800,000 nodes, since those executions would require weeks to complete, whereas our algorithm ends in at most 31.5 minutes and 1.64 hours, respectively (sequential mode). The algorithmic speedup increases with the size of the graph, meaning that E1C-FastBC is not only faster than Brandes' algorithm but that its speedup grows with larger graphs. This is due to the fact that the computation of our algorithm strongly depends on the number of border nodes (|BN|), pivots (|K|) and external nodes (|EN|), in addition to the number of nodes (n) and edges (m). The first two variables increase slowly compared to the number of nodes and edges, while the third is almost always zero (in only one case it was equal to 2).

The drawback is that AS^{E/B}_p decreases as the number of cores increases. This behaviour is due to the fact that Brandes' algorithm is more efficient than E1C-FastBC (see Figure 5.8): the ratio E^{a1}_p / E^{a2}_p in the relationship between the AS metric and the efficiency is lower than 1. Consequently, the AS value of the sequential case is not preserved as the number of cores increases. However, as the following efficiency analysis will further clarify, this does not mean that E1C-FastBC is less scalable than Brandes' algorithm, but rather that it needs very large graphs to better exploit the available computing resources. This statement is also confirmed by Figure 5.7, which clearly shows that when the graph size is 400,000, a higher number of cores performs even better than a smaller one: in particular, AS^{E/B}_p is better with 5 cores than with 1 core. To observe a similar behavior with more than 5 cores, we should consider larger graphs.

To better understand the performance of E1C-FastBC, we investigate its efficiency with respect to that of Brandes' algorithm. Figure 5.8 reports the results of the efficiency analysis performed for the two algorithms. In both cases, it is possible to observe that: i) the efficiency decreases as the number of cores increases, and ii) for a given number of cores, it increases with the number of nodes. It is worth highlighting that in the efficiency analysis we use different but overlapping ranges of values for the number of nodes: for our solution we select larger graphs, since we aim at showing that our algorithm scales well especially with very large graphs. In fact, the efficiency trend is almost the same in the two cases reported in Figure 5.8. Moreover, given the maximum numbers of nodes considered for the two algorithms (800,000 for ours, 200,000 for Brandes'), efficiency values are approximately the same with 5 cores (i.e., the first parallel configuration considered) but significantly diverge as the number of cores increases; in particular, the efficiency of E1C-FastBC decreases at a higher rate. The reason for this behaviour lies in the reduced amount of computation required by our solution. Indeed, pivots allow to significantly decrease the number of (modified) Brandes SSSP explorations performed on the whole graph, which represent the heaviest part of the whole computation (see Figs. 5.13 and 5.14), thus reducing the workload of each core.

Our solution also introduces another benefit: it mitigates the variability of the computation times due to the different topological characteristics of the graphs and to the partitioning of data performed by Spark during executions. Indeed, there may exist some partitions of the RDDs characterized by a high concentration of nodes that generate the most complex shortest-path trees. The time required to process these partitions directly impacts the time required to process the whole RDD, since partitions are processed in parallel. However, Spark tasks process each single partition sequentially.
This aspect, combined with the fact that the number of partitions of an RDD is always equal to the number of cores and that the default partitioning scheme of Spark distributes data evenly across the partitions, explains the punctual efficiency drops that can be observed in the plot related to Brandes' algorithm when using graphs with 50,000 and 100,000 nodes and a low number of cores (see Figure 5.8b).

Figure 5.9 reports the algorithmic speedup of E1C-FastBC over Brandes' algorithm, AS^{E/B}_{p=1}, alongside the algorithmic speedup of the approach in [76] over Brandes' algorithm, AS^{H/B}_{p=1}, analytically computed based on Equation 7 provided in [76]. Using such equation, it is possible to observe that: i) AS^{H/B}_{p=1} depends on the number of clusters (|C|) and the average degree (d_avg), and ii) when |C| + 2 ≫ d_avg/2, it can be approximated with d_avg/2. Therefore, since for synthetic graphs the average degree is constant and the number of clusters increases with the number of nodes, AS^{H/B}_{p=1} is always approximately equal to 1 (the average degree is 2); in particular, the higher the number of clusters, the closer AS^{H/B}_{p=1} is to 1. This means that the algorithm proposed in [76] is not able to improve on that of Brandes, whereas ours improves on it by a large multiplicative factor. We can thus conclude that our solution always outperforms the one in [76] on synthetic graphs. Figure 5.10 reports the results of the analysis of AS^{E/B}_{p=1} and AS^{H/B}_{p=1} carried out on real graphs, where AS^{H/B}_{p=1} is computed again using Equation 7 provided in [76]. In all cases, our solution outperforms the one in [76].

Real Graphs Analysis

To further confirm the considerations on the scalability of our solution reported in the previous section, we analyze in the following both the algorithmic speedup and the efficiency of E1C-FastBC on the lyon-road-network graph, for which we observed a very high number of pivots (about 60% of the number of nodes) and the lowest algorithmic speedup factor. As shown in Figure 5.11, the AS is always greater than 1, thus confirming the usefulness of our solution, although the reported values are not comparable to those obtained on synthetic graphs (see Figure 5.7) with a similar number of nodes (100,000 and 200,000). In spite of this, the algorithm is more scalable and efficient than in the case of synthetic graphs with 100,000 and 200,000 nodes, due to the increased amount of computation resulting from the higher number of border nodes, pivots and external nodes (see Figure 5.12). Also in this case, by considering the two figures (Figure 5.11 and Figure 5.12), it is possible to note the dependency relationship between the AS metric and the efficiency: in particular, going from 15 to 20 cores, the difference between the efficiency values of Brandes' algorithm and of E1C-FastBC decreases while the AS increases.

Breakdown of Computation Time

In this section, we analyze the contributions of the different component sub-algorithms to the overall computation time of E1C-FastBC. The goal of this analysis is to find the bottlenecks that limit scalability and, consequently, room for further improvements. We split the algorithm into seven parts: Dataset Reading, Louvain Clustering, HSN & External Nodes, Local BC & Top. Props., Classes & Pivots, Global BC and Final Accumulation. Figures 5.13 and 5.14 report the breakdown of the computation time on synthetic graphs; the heaviest contributions are those of the two phases based on the modified Brandes explorations, Local BC & Top. Props. and Global BC. The third heaviest part is Louvain Clustering.
Its computation time does not keep decreasing when parallelism is increased beyond 5 cores. This is due to the fact that we have chosen to launch 10 parallel executions of the Louvain method with different configurations, with the aim of selecting the one that produces the clustering with the best modularity score. This aspect highlights a potential bottleneck, since the computation time of Louvain Clustering at high levels of parallelism becomes comparable with those of Global BC and Local BC & Top. Props.. As already discussed in Subsec. 5.5.3, the number of external nodes for our synthetic graphs is almost always equal to zero; therefore, the contribution of HSN & External Nodes is not relevant, and neither are those of Classes & Pivots and Final Accumulation. In particular, the latter is a sequential step entirely performed on the driver and therefore does not vary with the number of cores. For Dataset Reading, the computation time slowly increases with the number of cores, due to the overhead introduced when creating the initial RDD, by reading data from the file system, with a number of partitions equal to the number of cores. Even for this step, the computation time becomes comparable with those obtained for Local BC & Top. Props. and Global BC when the parallelism increases.

It is worth noting that synthetic graphs represent a sort of best-case scenario: the very low number of border nodes and pivots, together with the almost complete absence of external nodes, allows for excellent performance. A more realistic scenario is analyzed in the following, by focusing on a real graph. Figure 5.15 reports the time taken by each part of the algorithm on the ego-twitter graph (we do not use the lyon-road-network graph here because of an out-of-memory error that arises when running the algorithm in profiling mode), in sequential and parallel modes. In this case, HSN & External Nodes becomes the second heaviest contribution, with values comparable to those of Local BC & Top. Props.. This is mainly due to the increased number of external nodes and confirms the importance of achieving ideal clustering. Similar considerations as those made for synthetic graphs apply to the remaining contributions of the breakdown analysis for the ego-twitter graph.

Mathematical Foundations

As discussed in Sec. 5.3, our algorithm relies on multiple mathematical properties to allow for the fast exact computation of BC. In the following, we provide the proofs of the mathematical foundations (theorem, corollary and claim) at the basis of our algorithm. In Theorem 5.6.1, we prove that two nodes of the same cluster that satisfy some properties produce the same dependency score on nodes outside the cluster, for destinations that are outside the cluster.

Theorem 5.6.1. Let s and p be two nodes of cluster C_i, and t any node not in V_{C_i}. If, for all b_j ∈ BN_{C_i}, σ_{s,b_j} = k · σ_{p,b_j} and d_G(s, b_j) = d_G(p, b_j) + l (for some constants k and l), then δ_{s,t}(v) = δ_{p,t}(v) for every node v outside V_{C_i}.

Proof. Since δ_{s,t}(v) = σ_{s,t}(v)/σ_{s,t}, we can prove the theorem by proving that the two following conditions hold under its hypotheses: σ_{s,t} = k · σ_{p,t} and σ_{s,t}(v) = k · σ_{p,t}(v) (Equations 5.17 and 5.18). Indeed, if both hold, then δ_{s,t}(v) = σ_{s,t}(v)/σ_{s,t} = (k · σ_{p,t}(v)) / (k · σ_{p,t}) = σ_{p,t}(v)/σ_{p,t} = δ_{p,t}(v). Let us first prove the following Lemma 5.6.1, which permits to express the relationship on the distances to the cluster border nodes (i.e., d_G(s, b_j) = d_G(p, b_j) + l) as an equivalence of the sets of border nodes traversed from s and p to reach nodes t outside the given cluster.

Lemma 5.6.1. Let s and p be two nodes of cluster C_i such that d_G(s, b_j) = d_G(p, b_j) + l for all b_j ∈ BN_{C_i}, and let t be any node not in V_{C_i}. Then BN_{C_i}(s, t) = BN_{C_i}(p, t), where BN_{C_i}(s, t) denotes the set of border nodes of C_i lying on shortest paths between s and t.

Proof. Let us consider two border nodes b_j, b_k ∈ BN_{C_i} with b_j ∈ BN_{C_i}(s, t) and b_k ∉ BN_{C_i}(s, t).
By definition of shortest path between two nodes, we have:

d_G(s, b_k) + d_G(b_k, t) > d_G(s, t)    (5.19)

Given that b_j ∈ BN_{C_i}(s, t) by hypothesis, Equation 5.19 can easily be re-written as follows:

d_G(s, b_k) + d_G(b_k, t) > d_G(s, t) ⟺ d_G(s, b_k) + d_G(b_k, t) > d_G(s, b_j) + d_G(b_j, t)    (5.20)

Now, by relying on the hypothesis of the lemma, we exploit the relationship holding between the distances of the generic nodes s and p to each border node in BN_{C_i}, thus obtaining:

d_G(s, b_k) + d_G(b_k, t) > d_G(s, t) ⟺ d_G(p, b_k) + l + d_G(b_k, t) > d_G(p, b_j) + l + d_G(b_j, t) ⟺ d_G(p, b_k) + d_G(b_k, t) > d_G(p, b_j) + d_G(b_j, t)    (5.21)

From Equation 5.21, we can derive that b_k does not belong to any shortest path between p and t, i.e.:

d_G(s, b_k) + d_G(b_k, t) > d_G(s, t) ⟺ b_k ∉ BN_{C_i}(p, t)

As the relation above holds for any node b_k ∈ BN_{C_i} which does not belong to any shortest path between s and t, we can conclude that:

BN_{C_i}(p, t) ⊆ BN_{C_i}(s, t)    (5.22)

Likewise, it is possible to prove that, if b_j ∈ BN_{C_i}(p, t) and b_k ∉ BN_{C_i}(p, t), we have:

d_G(p, b_k) + d_G(b_k, t) > d_G(p, t) ⟺ BN_{C_i}(s, t) ⊆ BN_{C_i}(p, t)    (5.23)

Therefore, from Equation 5.22 and Equation 5.23, we can conclude that the following relationship holds:

BN_{C_i}(s, t) ⊆ BN_{C_i}(p, t) AND BN_{C_i}(p, t) ⊆ BN_{C_i}(s, t) ⟺ BN_{C_i}(s, t) = BN_{C_i}(p, t)    (5.24)

which proves the lemma.

To complete the proof of Theorem 5.6.1, we need now to prove Equation 5.17 and Equation 5.18. To that purpose, we consider the following lemma.

Lemma 5.6.2. Let s and p be nodes of cluster C_i, and t any node not in V_{C_i}, where BN_{C_i}(s, t) is the set of border nodes of cluster C_i that belong to the shortest paths between s and t. If BN_{C_i}(s, t) = BN_{C_i}(p, t) and σ_{s,b_j} = k · σ_{p,b_j} for all b_j ∈ BN_{C_i}, then σ_{s,t} = k · σ_{p,t} and σ_{s,t}(v) = k · σ_{p,t}(v).

Proof. By leveraging Bellman's criterion, the number of shortest paths between s and t can be expressed through the border nodes lying on them (Equation 5.25). From the hypothesis of Theorem 5.6.1, we know that σ_{s,b_j} = k · σ_{p,b_j} for all b_j ∈ BN_{C_i}, and equivalently for all b_j ∈ BN_{C_i}(s, t), as BN_{C_i}(s, t) ⊆ BN_{C_i}; therefore Equation 5.25 becomes Equation 5.26. By the hypotheses of this lemma, we also know that BN_{C_i}(s, t) = BN_{C_i}(p, t). Thus, we have:

σ_{s,t} = k · σ_{p,t}    (5.27)

With the same reasoning, it is also evident to prove the following:

σ_{s,t}(v) = k · σ_{p,t}(v)    (5.28)

Equations 5.27 and 5.28 from Lemma 5.6.2 prove, via Lemma 5.6.1, Equations 5.17 and 5.18. Therefore Theorem 5.6.1 is proved.

We now prove that the normalized distances and numbers of shortest paths fulfill the conditions of Theorem 5.6.1 and hence can be used to group nodes into classes of equivalence.

Proof. To prove the corollary, we only need to prove that the following two implications hold:

d̃_G(s, b_j) = d̃_G(p, b_j) ∀b_j ∈ BN_{C_i} ⟹ d_G(s, b_j) = d_G(p, b_j) + l ∀b_j ∈ BN_{C_i}    (5.29)

and:

σ̃_{s,b_j} = σ̃_{p,b_j} ∀b_j ∈ BN_{C_i} ⟹ σ_{s,b_j} = k · σ_{p,b_j} ∀b_j ∈ BN_{C_i}    (5.30)

Let us consider any two generic nodes s and p belonging to cluster C_i such that:

∀b_j ∈ BN_{C_i}, d_G(s, b_j) − min_{b_k ∈ BN_{C_i}} d_G(s, b_k) = d_G(p, b_j) − min_{b_k ∈ BN_{C_i}} d_G(p, b_k)    (5.31)

AND

σ_{s,b_j} / min_{b_k ∈ BN_{C_i}} σ_{s,b_k} = σ_{p,b_j} / min_{b_k ∈ BN_{C_i}} σ_{p,b_k}    (5.32)

By definition, Equation 5.31 can easily be re-written as d_G(s, b_j) = d_G(p, b_j) − min_{b_k ∈ BN_{C_i}} d_G(p, b_k) + min_{b_k ∈ BN_{C_i}} d_G(s, b_k), where the last two terms form a constant value, hence d_G(s, b_j) = d_G(p, b_j) + l with l ∈ R, which corresponds to Equation 5.29. Likewise, Equation 5.32 can be re-written as σ_{s,b_j} = σ_{p,b_j} · (min_{b_k ∈ BN_{C_i}} σ_{s,b_k} / min_{b_k ∈ BN_{C_i}} σ_{p,b_k}), where the ratio of the minima is a constant, hence σ_{s,b_j} = σ_{p,b_j} · k with k ∈ R+, which corresponds to Equation 5.30. As the two equations (Equation 5.29 and Equation 5.30) are jointly satisfied, the corollary is proved from Theorem 5.6.1.

Claim 5.6.1. In undirected graphs:

Σ_{s ∈ V_{C(v)}} Σ_{t ∉ V_{C(v)}} δ_{s,t}(v) = Σ_{s ∉ V_{C(v)}} Σ_{t ∈ V_{C(v)}} δ_{s,t}(v)    (5.33)

The proof of the claim above, used in Subsec. 5.3.2, is entirely based on the undirected nature of the graph. We removed part of the estimation errors we had in our previous implementations [Suppa et al.] by changing the computation of the global dependency score using this claim.

Proof. Thanks to the undirected nature of the graph, we have:

Σ_{s ∈ V_{C(v)}} Σ_{t ∉ V_{C(v)}} δ_{s,t}(v) = Σ_{s ∈ V_{C(v)}} Σ_{t ∉ V_{C(v)}} σ_{s,t}(v)/σ_{s,t} = Σ_{s ∈ V_{C(v)}} Σ_{t ∉ V_{C(v)}} σ_{t,s}(v)/σ_{t,s} = Σ_{t ∉ V_{C(v)}} Σ_{s ∈ V_{C(v)}} σ_{t,s}(v)/σ_{t,s}

Now, by renaming the variables s and t:

Σ_{s ∈ V_{C(v)}} Σ_{t ∉ V_{C(v)}} δ_{s,t}(v) = Σ_{t ∉ V_{C(v)}} Σ_{s ∈ V_{C(v)}} σ_{t,s}(v)/σ_{t,s} = Σ_{s ∉ V_{C(v)}} Σ_{t ∈ V_{C(v)}} σ_{s,t}(v)/σ_{s,t} = Σ_{s ∉ V_{C(v)}} Σ_{t ∈ V_{C(v)}} δ_{s,t}(v)

Complementary Information on the Implementation of E1C-FastBC

In our work, we implemented two alternative approaches to back-propagate contributions during the computation of the global BC. The first one, detailed in Subsec. 5.3.2, is based on using one vector of contributions per node, with one entry per cluster of destinations. The main limitation of this approach is its high memory consumption, as it implies working with n vectors (n = number of nodes), each including |C| contributions. To optimise our solution with respect to this limitation, we developed another approach based on a vector of only two elements per node, described below.

We can consider a vector of two elements instead of |C| elements, so that the contribution due to a destination t is assigned to δγ_{s,V_C(v)}(v) or δγ_{s,V̄_C(v)}(v) depending on whether t is in the same cluster as v, C(v), or not. During the back-propagation, the two contributions are propagated independently (Equation 5.34): in particular, when a node w propagates to a predecessor v belonging to a different cluster, its whole contribution, σ_{s,v}/σ_{s,w} · (1 + δγ_{s,V_C(w)}(w) + δγ_{s,V̄_C(w)}(w)), is accumulated in the out-of-cluster entry of v. At the end of the back-propagation, the dependency scores of the nodes v belonging to the same cluster as the (pivot) source node are set to 0, whereas the dependency scores of the nodes not belonging to the cluster of the source node are computed using the following formula:

δγ_{s,•}(v) = 2 · δγ_{s,V_C(v)}(v) + δγ_{s,V̄_C(v)}(v)    (5.35)

Finally, according to Equation 5.13, δγ_{s,•}(v) is multiplied by the cardinality of the equivalence class s belongs to.
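The corollary above justifies grouping nodes by their normalized distances and normalized numbers of shortest paths towards the border nodes of their cluster, which is exactly what the pivot selection relies on. A minimal Python sketch of this grouping, under assumed data structures and names (it is not the findClasses routine of the thesis), is the following; any member of a class can then serve as its pivot.

from collections import defaultdict

def find_classes(nodes, border_nodes, dist, sigma):
    # dist[v][b], sigma[v][b]: distance and number of shortest paths from
    # node v to border node b (as returned by the local modified Brandes).
    classes = defaultdict(list)
    for v in nodes:
        d_min = min(dist[v][b] for b in border_nodes)
        s_min = min(sigma[v][b] for b in border_nodes)
        key = tuple((dist[v][b] - d_min, sigma[v][b] / s_min) for b in border_nodes)
        classes[key].append(v)          # nodes with identical normalized profiles
    # One pivot per class, together with the class cardinality.
    return [(members[0], len(members)) for members in classes.values()]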
However, the two-element approach introduces an error: in Equation 5.34, δγ_{s,V̄_C(w)}(w) may contain contributions of destination nodes belonging to V_C(v), which should be moved from δγ_{s,V̄_C(v)}(v) to δγ_{s,V_C(v)}(v), so that they can be correctly multiplied by 2 as required by Equation 5.35. This situation is a consequence of the presence of external shortest paths that leave and then re-enter the clusters. The correction has to be performed during the back-propagation phase for each border node such that i) there is at least one external shortest path starting from that border node, and ii) all the contributions of the border node have already been computed, i.e., when the border node is the current node w. Consequently, for each pair of border nodes b1, b2 belonging to the same cluster C_i, we need to compute the distance d_G(b1, b2) and the number of external shortest paths σ^ext_{b1,b2} between them. We perform this operation when searching for external nodes.

Conclusion

In this chapter, we presented a very fast algorithm for the exact computation of BC in undirected graphs. The algorithm exploits clustering and structural properties of graphs to reduce the computing time. In particular, it exhibits an impressive speedup (compared with Brandes' algorithm and with the one labelled H), especially for very large scale-free graphs with an attachment coefficient m = 1. A significant speedup is achieved also with other kinds of graphs, as demonstrated by the results obtained on real graphs. The reduction of the computation time is mainly due to the adoption of pivots, i.e., nodes that contribute equally to the dependency score of other graph nodes. This chapter described both a sequential and a map-reduce parallel version of the algorithm, implemented in Scala over Spark. The experimental analysis, performed with reference to the number of cores exploited for computation, revealed that the efficiency is slightly lower than that of Brandes' algorithm but increases with graph size: the granularity per Spark task of the SSSP computations is small when graphs are not very large, due to the relatively low number of pivots. The speedup of E1C-FastBC strongly depends on the number of pivots; clustering and modularity therefore play a key role in the computation time of the algorithm. As future work, we aim to study other clustering methods to more effectively identify border nodes in (synthetic and real) graphs with different topologies. Finally, we will investigate a better mapping of the algorithm onto distributed resources, when data parallelism is exploited, by improving locality, especially when different Spark executors are used.

Chapter 6

Limitations, Conclusion and Prospects

We studied in this thesis two types of approaches for the vulnerability analysis of road networks, with the goal of making cities more resilient to incidents. The first, dynamical approach was a traffic management solution for large-scale road networks. This solution reduced congestion in critical zones and limited the drop of performance induced by a high peak of demand. Our approach showed robustness, as it also performed well in extreme scenarios, especially a highly congested one. The dynamic and cooperative multi-agent based decision process made our approach reactive to unpredictable events, thus increasing the global resilience of the network. Not only did we reduce the drop of performance measured by the mean speed, but the duration of the disturbance was reduced as well. The hierarchical cooperation between infrastructures and vehicles enabled a reduction of travel times for most of the users. This solution was later improved with the use of the resilience metric Betweenness Centrality, widely used to identify vulnerable nodes inside a graph: we added a stronger penalization to nodes with high BC when their zone was congested, to better protect vulnerable nodes.
Better choices of shortest paths, taking into account the topological vulnerability of road sections, improved the total travel time reduction per rerouting. This improved the overall performance of our strategy, showing that the combination of topological and system-based vulnerability analysis is possible and beneficial for increasing resilience. Studying the BC was the main focus of the second type of vulnerability analysis carried out during this thesis. Computing the BC in real time, to include it in a dynamical analysis, is a challenge because of its computation time. We thus developed an algorithm for the distributed computation of BC in large-scale networks, based on graph theory and parallelization of tasks. The algorithm showed great scalability and performed well for sparse graphs, in domains other than transport.

Resilience Scope

The main scope of our work was travel demand fluctuations over a road network. We tested our control strategy on different demand scenarios in order to evaluate the capability of our approach to handle perturbations that strongly increase demand on road networks, such as failures in the transport systems. Further work could focus on failures on the supply side, i.e., when strong weather incidents occur, blocking parts of the network. In this case, many nodes and links of the graph become unreachable, and some areas can even become totally inaccessible: the threat would not come from the demand but rather from the supply. Moreover, only the static BC, computed on the free-flow travel time weighted graph, was included in our strategy. One of the first possible improvements would be to update the BC according to traffic conditions and to evaluate scenarios of supply failures making an entire zone inaccessible. The dynamic computation of BC would be even more necessary for this kind of scenario, where major changes occur on the graph, which would undergo many link or node removals. The computation of BC over a dynamic graph would thus be a first step for improving the control strategy and is necessary for new scenarios of supply failure.

Framework Modeling and Other Alternatives

We chose the context of multi-agent systems to develop our strategy, but on a framework (SymuPy) that is not multi-agent oriented. Traditional transport models like SymuPy are based on aggregated flows between zones, and the demand is represented with an OD (Origin-Destination) matrix. When dealing with a complex population (age, activity, time of day), the demand becomes more complicated to represent and requires many OD matrices. Modeling the demand with agents allows more realistic and complex simulations (see Tajaddini et al. and Zheng et al.). For a few decades, multi-agent simulation frameworks have been developed that better model drivers' choices, preferences and reactions to their environment in a large-scale context. They derive from the activity-based modeling of demand, where the population has a succession of activities during one day and travels from one to another; this is opposed to the trip-based modeling, which our approach was closer to. Multi-agent tools enable a better representation of driver interactions and travel preferences, and of modeling solutions for traffic management.
Unpredictable behaviours are better represented, and large-scale simulations can be run modeling each traveler. These simulators can perform one-day simulations using the succession of trips and activities. We could thus consider zone agents that learn from previous days of simulation and demand, adapting their strategy with past information. Among multi-agent activity-based simulators, the main ones are TRANSIMS (TRansportation Analysis and SIMulation System), developed by the Los Alamos National Laboratory for the U.S. Department of Transportation, the environment-oriented ILUTE (Integrated Land Use, Transportation, Environment), maintained and contributed to by Canadian universities, and OpenAMOS (Open Activity-Mobility Simulator). Most multi-agent transportation simulation tools are activity-based, which means that the demand is not only a succession of trips but rather of scheduled activities per person, with trips to reach them.

One of the most popular activity-based mesoscopic simulators is MATSim (Multi-Agent Transport Simulation), an open-source tool used for dynamic traffic assignment. Its main advantage compared with the other simulators is its parallelization of computation, enabling large-scale simulation of millions of agents. MATSim is an open-source project, started with Kai Nagel at ETH Zürich when TRANSIMS was not yet open-sourced. It is developed in Java and is module-based, enabling users to plug in their own developments, and is thus highly customizable. MATSim performs a microscopic modeling of traffic, using a microscopic modelling of demand and a co-evolutionary algorithm to find a stochastic user equilibrium. The main reason we did not choose MATSim for our control strategy is that it omits car-following and lane-changing behaviour to reduce computation time. This simplification is due to the queue-based implementation, in which roadways are FIFO queues; with this representation, parallelization is feasible and computation times are short, but such a misrepresentation of traffic dynamics could have distorted the evaluation of our control strategy. Now that we have proved its performance, further developments could be done on MATSim. In particular, MATSim offers two solutions to compensate for the incorrect modeling of the backward wave: the first is to use JDEQSim as mobility simulation, or "mobsim" (see Figure 6.1), losing part of the parallelization efficiency; the other is to tune the QSim as suggested in [101].

In MATSim, the demand is activity-based (and not trip-based): the persons, modelled as agents, have a succession of activities, and their trips are no longer the goal but the means. At the end of a simulation, each agent has a score, computed using a scoring function which depends on the time spent in trips, the delays and the time spent during the activities. Basically, a plan is optimal when the agent spends a minimal time in transportation, which leaves him/her a maximal time for his/her activities. To optimize plans, MATSim runs several simulations with a co-evolutionary algorithm and modifies some plans, by changing the transportation mode, the departure time of some activities, or the routes. These parameters can be tuned; the user can for example choose not to change the transportation mode. At the end of the simulation, a user equilibrium is reached, where people have the best plan they can hope for.
To run simulations from large datasets (population, network, duration), MATSim is multi-threaded. The consequence is that the computation time is not as high as with other simulation tools, but the congestion propagation is not perfectly simulated, as explained earlier. As an example, by converting the SymuPy demand into MATSim inputs, MATSim ran the simulation in about 2 minutes. With such a computation improvement, we could consider more complete scenarios. With a shortened computation time, more complex simulations can be chosen, especially for multi-modal vulnerability analysis. The interdependence of transportation modes makes some parts of the network more vulnerable, such as road sections near big subway hubs. Moreover, buses or taxis suffer from jams and can be distinguished from other vehicles, as their constraints and objectives are completely different from those of other travelers. The modularity of MATSim also makes it easy to plug the fast BC computation algorithm, E1C-FastBC, into a simulation.

Future work

We were able to reproduce multi-agent behaviour and interactions, but the possibilities offered by multi-agent systems were not fully explored. The use of an object-oriented solution makes it easier to be close to a multi-agent context but does not capture driver behaviours. Our solution did not require a fully multi-agent oriented framework, and the first objective was to prove that our approach was pertinent to reduce vulnerability. For further developments, though, the use of multi-agent frameworks will be relevant. Multi-agent frameworks make it easier to model the diversity of the population (connected vehicles or not) and the diversity of behaviours. For example, in our case, we consider that all vehicles can be rerouted; in a more detailed context, we could differentiate the kinds of drivers, the connectivity of the vehicles, and the willingness of the drivers to accept recommendations. Moreover, the cornerstone of resilience not addressed in this thesis is the "learning" stage, which corresponds to the ability of the system to store past information and reuse it adequately in the present. The zone agents could for example act depending on the effects of their past actions, in a Reinforcement Learning context. The demand or the level of congestion can also be predicted from past data. We saw in Chapter 4 that the performance of our control was higher when the demand was high. By predicting the global demand on the network, we could make the level of control dependent on the prediction: with a low predicted demand, no rerouting would be suggested, whereas a high predicted demand would trigger our rerouting strategy. Many studies already focus on demand prediction: Raj et al. use k-NN and neural networks to predict demand from sensor data, and Polson et al. use deep learning for short-term flow prediction during two special events (a sport game and a storm), for example.

The tool and scope of our analysis allowed us to prove the robustness and efficiency of our approaches in a realistic traffic flow dynamic context. Nonetheless, for further developments that would include an enlarged scope, more suitable multi-agent frameworks should be considered, as they are more appropriate and efficient. MATSim would enable larger urban zones to be studied with a more diversified population. It would for example help to better characterize resilience and vulnerability from different points of view and to better evaluate the performance of transportation systems (including pollution, for example).
The use of activity-based modeling via multi-agent frameworks is a good perspective to continue the vulnerability analysis of road networks. As the combination of topological and dynamic vulnerability analysis is possible and benefits the transportation system, our algorithm for fast BC computation should be integrated into the control framework, and more critical scenarios should be tested, with further developments on consequence analysis. A multi-modal representation of the transportation network is also an interesting direction for further development, to better grasp the dependence between transportation modes; the analysis of the multi-modal transport network is interesting for both the dynamic and the static vulnerability approaches.
An optimal area of %TTT is observed for T base ∈ [300s, 1500s]a n dβ ∈ [0.01, 0.05]. The optimum is reached with β =0 .025 and T base ≤ 1000. The maximal %TTT is equal to 18%. Figure 4 . 15 : 415 Figure 4.15: Accumulation with β =0.025 and T base = 300s inside Lyon network Figure 4 . 16 :Figure 4 . 17 : 416417 Figure 4.16: Spatial Speed comparison with β =0 .025 and T base = 300s inside Lyon network Figure Figure 4.17 shows the accumulation per zone and we can notice that 7 zones where highly congested initially. The control strategy enabled the reduction of accumulation for zones 2, 5, 6, 14 bellow the critical capacity. On the contrary, we can observe in the zones 1, 4, and 11 which were highly congested, no real improvement. This is explained by the fact they are on the south border of the network (see Figure4.13) and the alternative paths are thus more limited for the drivers.The consequences on the performance of each zone in terms of spatial speed are shown in Figure4.18. The control strategy reduced the time of recovery for many zones but the zones where accumulation was not reduced have the same drop of spatial speed than without control. The improvements per zone are thus not as obvious than for Manhattan network. This is due to the topological differences: Manhattan network offers more alternative zones for rerouting than Lyon and is more homogeneous. Congestions are more easily avoided and vehicles are more often attracted by new paths than in Lyon network.These results show that the reroutings were efficient to globally reduce the total travel time spent in the network but the control strategy could not prevent some zones to be congested. Some further developments could help improving these results by changing other hyper-parameters like α that sets the limit of the weight function in G. Figure 4 . 18 : 418 Figure 4.18: Speed comparison with β =0 .025 and T base = 300s inside Lyon network for each zone Chapter 4 . 4 Implementation and Use Cases Normalized BC of Manhattan network Weight function depending on total flexibility and BC for different values of β Figure 4 . 19 : 419 Figure 4.19: BC inside Manhattan network and BC-weight function Figure 4 . 20 : 420 Figure 4.20: Results comparison with and without use of BC for different β and T base Figure 4 . 21 : 421 Figure 4.21: Increase of performance via BC adding Chapter 5 .Figure 5 . 2 : 552 Figure 5.2: Classes of equivalent nodes in the blue cluster C 1 Hierarchical Sub-Network from clusters in Figure5.1b Cluster C1, extended with the external node 17 Figure 5 . 3 : 53 Figure 5.3: Example of external node found through the HSN. Figure 5 . 4 : 54 Figure 5.4: Global SSSP explorations from pivots 94 Chapter 5 . 945 (v)i se q u i v a l e n tt oδ s,V C(s) (v)5 This is the part of contribution due to w as a destination Cluster-based Algorithm for Fast and Exact Computation of Betweenness Centrality 1 : 3 : 4 : 5 : 1345 function E1CFastBC(G)2: C ← modularityBasedClustering(G) BN ← f indBorderN odes(G, C) VHSN ← buildHSN (BN, C, G) EN ← f indExternalN odes(VHSN, C, BN, G) 6: C * ← updateClusters(C, EN) 7: δ λ ,δ ǫ , σ, d ← computeLocalδ(C * , C, EN, BN, G) 8: K ← f indClasses(σ,d) 9: δ γ ← computeGlobalδ(K, G, C) 10: for v ← 1, Vd o ⊲ For all nodes 11: BC(v) ← δ λ (v)+δ γ (v)+δ ǫ (v) end function At line 2, the Louvain, modularity-based clustering algorithm is exploited for splitting graph G into a set of clusters C (see Subsection 5.4.1 7 : 7 return δ λ ,δ ǫ , σ, d 8: end function At line 9, we compute the global BC, δ γ . 
As shown in Algorithm 6, a second modified version of Brandes' algorithm, which exploits Equation 5.13, Equation 5.14 and Equation 5.15, is run from one pivot k i , randomly selected from each class K i ∈ K. From the explorations rooted at the class pivots, the global dependency scores, δ γ k i ,V C(k i ) (V) are computed on every other node v of the graph. The global BC of every node v is then obtained by summing the global dependency scores deriving from all the pivots. Figure 5 . 5 : 55 Figure 5.5: Map-reduce, Spark-based description of E1C-FastBC. • Job 5 5 computes the local BC, the BC on external nodes, the normalized distances and normalized numbers of shortest paths (Algorithm 2, line 7). The job receives the graph, clusters, the extended clusters and the border Chapter 5. Cluster-based Algorithm for Fast and Exact Computation of Betweenness Centrality Figure 5 . 6 : 56 Figure 5.6: Map-reduce, detailed description of Job2 Figure 5 . 5 Figure 5.10 reports the results of the analysis of AS E/B p=1 and AS H/B p=1 carried out on real graphs. AS H/B Chapter 5 .Figure 5 . 10 : 5510 Figure 5.10: Comparison with the solution proposed in [76]. Algorithmic speedup analysis with real graphs in sequential mode -AS E/B p=1 and AS H/B p=1 2 ): 2 Figure 5.11: Algorithmic speedup analysis for lyon-road-network graph in parallel mode -AS E/B p=[START_REF] Berdica | An introduction to road vulnerability: what has been done, is done and should be done[END_REF][START_REF] Bates | The valuation of reliability for personal travel[END_REF][START_REF] Cutter | Disaster resilience: A national imperative[END_REF][START_REF] Henry | Approach to quantify the impact of disruptions on traffic conditions using dynamic weighted resilience metrics of transport networks[END_REF][START_REF] Guériau | How to assess the benefits of connected vehicles? A simulation framework for the design of cooperative traffic management strategies[END_REF][START_REF] Tumash | Boundary and VSL Control for Large-Scale Urban Traffic Networks[END_REF] Figure 5 . 12 : 512 Figure 5.12: Efficiency analysis for lyon-road-network graph -E p=[1,5,10,15,20,25] . Comparison with synthetic graphs having similar size (100k and 200k nodes) and with Brandes' algorithm on the same graph. Figure 5 . 13 :Figure 5 . 14 : 513514 Figure 5.13: Breakdown analysis of computation time on synthetic graph with 100,000 nodes 1 .Claim 5 . 6 . 1 . 1561 5.29) and: σs,b j =σ p,b j ∀b j ∈ BN C i =⇒ σ s,b j = k • σ p,b j ∀b j ∈ BN C i (5.30)Let us consider any two generic pair of nodes s and p belonging to cluster C i such that:∀b j ∈ BN C i , d G (s, b j )min b k ∈BN C i d G (s, b k )=d G (p, b j )min b k ∈BN C i d G (p, b k ) (5.31) AND σ s,b j min b k ∈BN C i σ s,b k = σ p,b j min b k ∈BN C i σ p,b k (5.32)By definition, Equation5.31 can be easily re-written as follows:∀b j ∈ BN C i , d G (s, b j )min b k ∈BN C i d G (s, b k )=d G (p, b j )min b k ∈BN C i d G (p, b k ) ⇐⇒ d G (s, b j )=d G (p, b j ) -min b k ∈BN C i d G (p, b k )+min b k ∈BN C i d G (s, b k ) constant value ⇐⇒ d G (s, b j )=d G (p, b j )+l with l ∈ Rwhich corresponds to Equation 5.29. Likewise, Equation5.32 can be re-written as:∀b j ∈ BN C i , σ s,b j min b k ∈BN C i σ s,b k = σ p,b j min b k ∈BN C i σ p,b k ⇐⇒ σ s,b j = σ p,b j • min b k ∈BN C i σ s,b k min b k ∈BN C i σ p,b k constant ratio ⇐⇒ σ s,b j = σ p,b j • k with k ∈ R +which corresponds to Equation 5.30. As the two equations (Equation 5.29 and Equation 5.30) are jointly satisfied, the corollary is proved from Theorem 5.6.Chapter 5. 
Cluster-based Algorithm for Fast and Exact Computation of Betweenness Centrality In undirected graphs:s∈V C(v) t/ ∈V C(v) δ s,t (v)= s/ ∈V C(v) t∈V C(v)δ s,t (v)(5.33) i f C(v)=C(w), σs,v σs,w • (1 + δ γ s,V C(w) (w)+δ γ s,V C(w) (w)) otherwise(5.34) At the end of back-propagation, we put the dependency scores of nodes v belonging to the same cluster of the (pivot) source node to 0, whereas the depend e n c ys c o r e so fn o d e sn o tb e l o n g i n gt ot h es a m ec l u s t e ro ft h es o u r c en o d ea r e computed using the following formula: e., w h e nt h eb o r d e rn o d ei st h ec u r r e n tw. Consequently, for each pair of border nodes b 1 , b 2 belonging to the same cluster C i , we need to compute the distance d G (b 1 ,b 2 ) and the number of external shortest paths σ ext b 1 ,b 2 Chapter 6 . 6 Limitations, Conclusion and Prospects Figure 6 . 1 : 61 Figure 6.1: MATSim loop ([101]) 6. 3 . 3 Future work 123 demand from sensor data, Polson et al. ([103]) using deep learning for short-term flow prediction during two special events (sport game and storm) for example. 2.2. Context of Application for Dynamic Vulnerability and Resilience Analysis 23 data and new applications in the scope of Intelligent Transportation Systems allow improving the resilience of cities. The global notion of resilience of a transport network, we can associate a more concrete field aiming at improving resilience addressing different issues (security, health, congestion,...): the Intelligent Transportation Systems (ITS). As the resilience is knowing what happened, what to look for, what to do and what to expect, technologies of collection, storage and computation of data, in a large scale, are necessary to have a resilient transportation network. 2.2 Context of Application for Dynamic Vulnerability and Resilience Analysis Table 3 . 3 1: Notations Notation Description Graph related notations G, G M graph, macro graph of road network V, V M set of nodes of G, G M E, E M set of edges of G, G M tt(l) free-flow travel time of link l ∈ E tt w (l) weighted free-flow travel time of link l ∈ E tt BC-w (l) BC-weighted free-flow travel time of link l ∈ E BC(l) Betweenness Centrality of link l ∈ E Zone agent related notations a i a zone agent N i set of neighbors zones of a i n c i critical accumulation inside zone a i n i (t) accumulation inside a i at time t f tot i (t) total flexibility of a i f i (t), fi (t) flexibility, engaged flexibility of a i e min|max (t) minimum or maximum moving average error among all zones g i (t) grade of a i f base i (t) base flexibility of a i Vehicle agent related notations ve a vehicle agent t r (ve) timeout for rerouting decision of vehicle ve r(ve) number of reroutings already done by vehicle ve Hyperparameters τ time interval for the communications between zones and vehicle β hyper-parameter of the link weight function T base base time window for the rerouting decision timeout Indicators TTT nocontrol Total Travel Time spent in the network during the simulation without control TTT control Total Travel Time spent in the network during the simulation with control %TTT percentage difference between TTT nocontrol and TTT control Γ r rerouting gain Table 4 . 
1 : 41 Results for set of parameters giving the the highest %TTT reduction size T base β %TTT %rerouted vehicles rerouting per vehicles 500 +∞ 11.9 % 40 % 0.43 3X3 1500 +∞ 14.2 % 22 % 0.23 2000 0.05 12.4 % 25 % 0.27 0 12.8 % 29 % 0.31 1000 0.01 12.2 % 33 % 0.36 5X5 1500 +∞ 0 0.005 13.1 % 25 % 12.4 % 30 % 14.6 % 33 % 0.31 0.36 0.25 2500 0.025 13.0 % 28 % 0.29 Table 4 . 4 Real Case Scenario 65 2: Results for other scenarios, with T base = 1500s, β =0.005 scenario vehicles %TTT %rerouted vehicles rerouting per vehicles % longer -shorter trips vehicles resilience increase A 16743 13.1 % 24.6 % 0.25 13.3% -43.5% 21% B 18475 17.9 % 30.8 % 0.33 11.2% -46.8% 37% C 15426 7.8 % 20.7 % 0.21 14.3% -37.0% 10% D 16470 9.3 % 27.4 % 0.27 17.0% -40.6% 14% E 14803 1.1 % 14.6 % 0.15 18.4% -25.2% 1% @param node_to_cluster_b the broadcasted array storing the node-cluster association * @return a tuple (true/false, node_id) depending on whether the node is a border node or not */ def is_bordernode_task(node: Int, dataset_b: org.apache.spark.broadcast.Broadcast[Array[Array[Int]]], node_to_cluster_b: org.apache.spark.broadcast.Broadcast[Array[Int]]) = { (dataset_b.value(node).tail.find(node_to_cluster_b.value(_) != node_to_cluster_b.value(node)).isDefined, node) } RDD of node ids <RDD[node_id]> RDD partition of node ids <List[node_id]> 1 4 2 3 Executor Task Executor Task Executor Record of a Task partition <node_id> 5 Set of border nodes <Set[node_id]> Driver (node_id) map 3a (true/false, node_id) 3b filter (node_id) map (true, node_id) A border node 3c <node_id> //IDENTIFYING BORDER NODES val bordernodes_set = sc.parallelize(0.to(node_to_cluster.length-1), parameterUtils.get_num_exec) .map(node => TaskFunctions.is_bordernode_task(node, dataset_b, node_to_cluster_b)) Legend .filter(_._1) .map(_._2) .collect().toSet RDD /* RDD partition wih its records * This function checks if a node is a border node. * A border node is a node having at least one neighbor belonging to another cluster. * @param node the node to check * @param dataset_b the broadcasted graph Set with its elements Elaboration Data transfer * a 2 p ). Since efficiency, for a given number of cores p,isS a 1 p /S a 2 p = E a 1 p /E a 2 p ,w eh a v e : AS a 1 /a 2 Table 5 . 5 3 reports all the graphs we use, together with some relevant Graph n m d avg d max cc avg barabási-albert 6,250 6,249 1.999 126 0.000 barabási-albert 12,500 12,499 " 225 " barabási-albert 25,000 24,999 " 344 " Synthetic barabási-albert barabási-albert 50,000 100,000 99,999 49,999 " " 463 1,138 " " barabási-albert 200,000 199,999 " 676 " barabási-albert 400,000 399,999 " 1,142 " barabási-albert 800,000 799,999 " 1,587 " web-webbase-2001[96] 16,062 25,593 3.187 1,679 0.224 ego-twitter[97] 22,322 31,823 2.851 238 0.072 Real internet[96] 124,651 193,620 3.107 151 0.062 lyon-road-network 7 156,102 178,845 2.291 8 0.017 email-euAll[98] 224,832 339,925 3.024 7,636 0.079 Table 5 . 3 53 : Topological information of synthetic & real graphs. 
/ ∈V C(v) σ s,t (v) σ s,t = s∈V C(v) t/ ∈V C(v) σ t,s (v) σ t,s = t/ ∈V C(v) s∈V C(v) σ t,s (v) σ t,s Now, by changing the name of variables s and t: s∈V C(v) t/ ∈V C(v) δ s,t (v)= t/ ∈V C(v) s∈V C(v) σ t,s (v) σ t,s = s/ ∈V C(v) t∈V C(v) σ s,t (v) σ s,t = s/ ∈V C(v) t∈V C(v) δ s,t (v) https://www.un.org/development/desa/en/news/population/2018-revision-of-worldurbanization-prospects.html 2 https://www.lemonde.fr/economie/article/2019/12/09/greve-des-transports-plus-de-500km-d-embouteillages-sur-les-routes-franciliennes 6022161 3234.html 3 https://www.lesechos.fr/industrie-services/tourisme-transport/les-velib-en-surchauffependant-la-greve-des-transports-1156790 https://www.liberation.fr/france/2016/06/03/ratp-une-trentaine-de-stations-au-bord-de-la-fermeture1457194/ The different consequences of global warming on network resilience are further developed by Jaroszweski([13]). We note that fi(t) is computed by each zone at time t -τ . https://github.com/licit-lab/symuvia In our previous version of the algorithm, the pivots were chosen to minimize the error. Here, as we will explain later, any node in a class can be a pivot. As explained later, in the special case where there are external shortest paths in the cluster, the local BC is actually computed inside the extended cluster. We do not explicitly pass the set of external nodes as it is trivial to recognize them by leveraging the extended clusters. This dataset was supplied by the French National Institute of Geographic Information (IGN). https://www.ign.fr For each graph, we extract the largest connected component. Then, the latter is converted in an unweighted undirected graph. We do not use lyon-road-network graph as in Subsection 5.5.4 because of the Out of Memory error that arise when running the algorithm in profiling mode. Remerciements Chapter 5 Cluster-based Algorithm for Fast and Exact Computation of Betweenness Centrality This chapter is dedicated to a contribution on the fast and exact computation of the Betweenness Centrality published in two papers ( [START_REF] Daniel | Cluster-based Computation of Exact Betweenness Centrality in Large Undirected Graphs[END_REF], [START_REF] Daniel | Fast cluster-based computation of exact betweenness centrality in large graphs[END_REF]). As the BC is a common resilience metric in traffic network, we developed an algorithm able to perform its exact computation in a short amount of time. The BC is not only used in traffic but in many other domains such as social networks, or internet networks, we thus present our work in a more general context than transportation. Example of clustered graph Three sub-graphs originated by clustering the graph Let G be the simple graph reported in Figure 5.1a, decomposed in three clusters, each separately shown in Figure 5.1b. We focus on the blue cluster, referred as C 1 , in order to illustrate the concept of equivalence class (see Table 5 5.2: for each node the normalized distances and normalized number of shortest paths to the border nodes are reported. According to our previous definitions, nodes 3, 4, 6, 14 and 5, 1 can be grouped in two classes respectively, whereas node 2 is assigned to a singleton class. Nodes 1, 2 and 14 Chapter 5. Cluster-based Algorithm for Fast and Exact Computation of Betweenness Centrality The amount of memory is divided evenly among executors. For instance, with 5 cores we only deploy one executor with 90GB of memory, while with 10 cores two executors, each with 45GB of memory, are deployed. 
The RDDs are decomposed into a number of partitions equal to the total number of cores. The results below were obtained on the synthetic graphs in both sequential and parallel modes. In particular, we repeatedly double the number of nodes from 25,000 to 800,000, and we consider a number of cores p equal to 1, 5, 10, 15, 20 and 25. We estimate by log-log regression the computation times of Brandes' algorithm for the graphs with 400,000 and 800,000 nodes, since the executions would require weeks to complete, whereas our algorithm ends in at most 31.5 minutes and 1.64 hours, respectively (sequential mode). The speedup increases with the size of the graph, meaning that E1C-FastBC is not only faster than Brandes' algorithm but that its speedup grows with larger graphs. This is due to the fact that the computation time of our algorithm strongly depends on the number of border nodes (|BN|), pivots (|K|) and external nodes (|EN|), in addition to the number of nodes (n) and edges (m). The first two variables increase slowly compared to the number of nodes. This, together with the fact that the number of partitions of an RDD is always equal to the number of cores and that the default partitioning scheme of Spark distributes data evenly across the partitions, explains the punctual efficiency drops that can be observed in the plot related to Brandes' algorithm when using graphs with 50,000 and 100,000 nodes and a low number of cores (see Figure 5.8b). Figure 5.9 reports the algorithmic speedup of E1C-FastBC over Brandes' algorithm, AS^{E/B}_{p=1}, alongside the algorithmic speedup over Brandes' algorithm of the approach in [START_REF] Li | Hierarchical Decomposition for Betweenness Centrality Measure of Complex Networks[END_REF].

Synthetic Graphs Analysis

Proof. By rewriting the statement of the theorem, we can prove it by proving that the two following conditions, Equation 5.17 and Equation 5.18, hold under the hypotheses of the theorem. Let us first prove the following Lemma 5.6.1, which permits to express the relationship on the distances to cluster border nodes (i.e., d_G(s, b_j) = d_G(p, b_j) + l) as an equivalence of the sets of border nodes traversed from s and p to reach nodes t outside the given cluster.

To complete the proof of Theorem 5.6.1, we now need to prove Equation 5.17 and Equation 5.18. To that purpose, we consider the following lemma.

Lemma 5.6.2. Let s be a node of cluster C_i, and t any node outside V_{C_i}. BN_{C_i}(s, t) is the set of border nodes of cluster C_i that belong to the shortest paths between s and t. If BN_{C_i}(s, t) = BN_{C_i}(p, t), then σ_{s,t} = k · σ_{p,t}.

Proof. By leveraging Bellman's criterion (recalled at the end of this section): (5.25) From the hypothesis of Theorem 5.6.1, we know that σ_{s,b_j} = k · σ_{p,b_j} for all b_j ∈ BN_{C_i}, and equivalently for all b_j ∈ BN(s, t), as BN(s, t) ⊆ BN_{C_i}. Therefore, Equation 5.25 can be rewritten with k · σ_{p,b_j} in place of σ_{s,b_j}. By the hypotheses of this lemma, we also know that BN_{C_i}(s, t) = BN_{C_i}(p, t). Thus, we have:

σ_{s,t} = k · σ_{p,t}   (5.27)

With the same reasoning, it is also straightforward to prove the following:

σ_{s,t}(v) = k · σ_{p,t}(v)   (5.28)

Equation 5.28 and Equation 5.27 from Lemma 5.6.2 prove, via Lemma 5.6.1, Equation 5.17 and Equation 5.18, respectively. Therefore Theorem 5.6.1 is proved. We now prove that the normalized distances and number of shortest paths fulfill the conditions of Theorem 5.6.1 and hence can be used to group nodes into classes of equivalence.
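For reference, the Bellman criterion invoked in the proof of Lemma 5.6.2 can be stated in its standard form (the exact display of Equation 5.25 is assumed to build on it): a node v lies on at least one shortest path between s and t if and only if d_G(s, v) + d_G(v, t) = d_G(s, t), and in that case the number of shortest s-t paths through v factorizes as

$$\sigma_{s,t}(v) = \sigma_{s,v} \cdot \sigma_{v,t}.$$

Applied to the border nodes b_j ∈ BN_{C_i}(s, t), this is what lets the hypothesis σ_{s,b_j} = k · σ_{p,b_j} be propagated from the border nodes to the destinations t outside the cluster.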
04098775
en
[ "shs.gestion" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098775/file/2022UPSLD048.pdf
Keywords: out-of-sample return predictability, efficient market hypothesis, conditional beta pricing model, alpha predictability. JEL: C22, C53, G12, G14, G17 factor model, forecasting, equity markets. JEL: C22, C53, G12, G14, G17 CDS, spillover, sovereign debt, SVAR, identification by heteroskedasticity C58, G01, G18, G21 ing agreed to join my dissertation defence, Marie Brière and Serge Darolles for their insightful comments as well as for their presence at our yearly meetings within the thesis committee, and finally my rapporteurs, Christian Brownlees and Éric Jondeau, who gave me extremely valuable feedbacks and advice. I am The academic time and the institutional time have different rhythms. Sometimes they are synchronized and harmonious, sometimes they simply are not. A bit like Giovanni Drogo in The Tartar Steppes, and with all the caution that should be taken in such comparisons, I tried to juggle during this PhD with two temporalities, the short term one and the long term one, navigating between quantitative policy tasks and long-horizon research objectives. Being at the same time a PhD student at Dauphine and an economist at Banque de France was an unbelievable opportunity for me, both intensive and extremely rewarding. Playing on both sides, as one friend framed it, can be difficult, but I like to think that I tried to give my best at it. At the end of this journey, I must say that I owe a particular intellectual debt to my supervisor, Gaëlle le Fol, who was constantly present throughout the thesis. Finally leaving my student life behind me, I can now assert from experience that a young researcher needs advice not only on the substance of his projects, but also on the implicit rules that govern the academic world. Gaëlle provided both. I cannot stress enough how her role was essential in the completion of this thesis. I am also indebted to my hierarchy at Banque de France, as they enabled me to pursue this project from the beginning. I would like to thank, among others, the people who opened for me the PhD door while I was still uncertain about its implications, Laurent Clerc and Julien Idier, as well as my line managers who supported me in the last few years, Pierre-Yves Gauthier, Natacha Isslame-Rocher, Caroline Jardet, Sophie Haincourt, Nicolas Même and Nicolas Chatelais. Nicolas also played on two sides here in some sense, as he acted both as my chef de pôle and as my co-author for the second chapter of my thesis. I definitely think he managed to juggle with the two roles though. My gratitude goes also to my two other co-authors Menzie Chinn and Lukas Boeckelmann. I have to say that having Lukas on my sides for my fist research paper definitely supplied me with the exact amount of research insights, new ideas, work motivation and humourous ironical detachment that I needed at that time. I am also grateful to the other economists who helped me at some point along the way, among which Cyril Couaillier, Hugues Dastarac, Albert Dorval, Tristan Jourde, Marko Novakovic, Dilyara Salakhova, Maéva Silvestrini and Valerio Scalone. General introduction The role of financial markets from a central bank point of view Financial markets play different roles in our contemporary economy. First, they are mainly designed to be an efficient tool for capital allocation, providing at the same time financial resources to companies and remunerations for capital-holders [START_REF] Wurgler | Financial markets and the allocation of capital[END_REF]. 
Second, this allocation operates mainly through the price discovery process. As market agents confront their opinions on financial markets, an equilibrium price will emerge from the aggregation of various bid and ask-offers, thus providing the agents with a signal of where capital is most needed. As a result, and in line with Hayek market theory of price aggregation [START_REF] Hirshleifer | Where are we in the theory of information?[END_REF], financial markets act as a powerful informational tool to reduce the multidimensional investor opinions into a single metric: the level of asset prices. Third, financial markets pool together economic agents that may have diverging consumption needs depending on the potential future states of the world. As financial assets may yield returns in some of these states only, agents can trade them with one another so as to smooth their consumption paths. Risk sharing constitutes thus an essential aspect of financial markets. It has yet been identified that, despite their usefulness, financial markets can also be seen as source of risk for the economy for two main reasons. On one hand, financial markets are characterized by booms and busts processes whereby, fueled by so-called market exuberance (Shiller, 2015), asset prices slowly increase and depart from their fundamental values before suddenly decreasing. These swift price variations are not painless for the real economy. During the buildup of the financial bubbles, as price increases are not related to fundamental factors, they may induce capital misallocation [START_REF] Miao | Asset bubbles, collateral, and policy analysis[END_REF]. When asset prices burst, the sharp repricing can lead to cascading defaults of financial institutions, and then have broader macroeconomic impacts. These bankruptcies may be caused by various non-linearities and threshold effects that are present in the financial sectors, such as unanticipated fund redemptions or sudden margin calls [START_REF] Malkiel | Bubbles in asset prices[END_REF]. On the other hand, as stated before, financial networks may be appropriate for risk-sharing purposes between agents, but they are also central in the transmission of shocks. In other words, vi financial markets bear a risk for the economy not only because of the boom/bust processes underlined above, but also because they can play a role in propagating shocks originating in one market to another market. These contagion patterns take place for various reasons, for example because different institutions exhibit cross-linkages between themselves (Brunetti et al., 2019), because they hold similar asset portfolios (Greenwood et al., 2015), or because two assets have related characteristics in the eyes of investors. As a result, financial markets and central banks share a deeply intertwined relationship, both when we consider the functions assigned to the financial markets or the risks they are associated with. Central bank mandates may differ over the world or across time, but most of the time they involve price stability [START_REF] Smets | Maintaining price stability: how long is the medium term?[END_REF], sometimes combined with an unemployment target, as it is notoriously the case for the FED [START_REF] Thornton | The Dual Mandate: Has the Fed Changed Its Objective? Federal Reserve Bank of St[END_REF], and, since the Great Financial Crisis, they often include financial stability objectives [START_REF] Kim | Monetary aggregates and the central bank's financial stability mandate[END_REF]. 
Consequently, the relationship between central banks' actions and financial markets is multiform and depends on the central bank target that we are considering. We shed light here on the three main dimensions, in our view, along which financial markets and central banks interact with each other. Firstly, the principal tool used by central banks to maintain price stability is the policy rate, sometimes combined with unconventional monetary measures such as asset purchase programs or forward guidance policies. In other words, in their fundamental functioning, central banks have to operate with financial markets, in this precise case with the bond markets, to meet their targets. Secondly, within their financial stability mandate, central banks have to monitor the buildup of risks in financial markets. This involves, inter alia, tracking asset price valuations to gauge if a bubble is forming [START_REF] Geis | Measuring and interpreting the cost of equity in the euro area[END_REF], assessing financial interconnections to evaluate the potential spillovers from one market to another (Alter and Beyer, 2014) or overseeing the resilience of market participants to determine if they can cope with significantly negative shocks. Thirdly, financial markets have an informational utility for central banks given their ability, as stated above, to aggregate investors' opinions. This use can take several different forms. Central banks for example rely on inflation-linked swaps (ILS) or on the pricing of inflation-protected securities to assess market participants' expectations of short-term or long-term future inflation [START_REF] Bauer | Can we rely on market-based inflation forecasts?[END_REF]. In this case, central banks exploit these market-based metrics as an indicator of the good transmission of their monetary policy, reflected here by the potential good anchoring of investors' inflation expectations. But central banks may also rely on financial markets variables as a quantitative input when they form their forecasts regarding the future states of the business cycle. Indeed, many market-based indicators have proven to yield predictive content for economic activity forecasting, such as the term spread (Chinn and Kucko, 2015) or the dividend yield (Lan et al., 2020) and are therefore part of the central bank toolkit for business cycle monitoring. This PhD thesis aims at shedding a new light on the use of financial market data from the central banks' perspective, essentially with respect to their financial stability mandates and to their need to gauge future economic activity. The main message throughout the thesis chapters is that a considerable amount of information is obscured if we only rely on aggregate variables (as is the case for a significant part of the macro-financial literature). As we will try to make it convincing in the thesis, the use of micro/sectoral financial data can prove useful for various purposes, including for answering research questions from a macro-perspective. Historically, a lot of attention in the macro-financial literature has been devoted to aggregate financial data Regarding the previous subjects discussed above (financial bubbles, market informativeness, financial contagion) the literature has historically relied on aggregate financial market variables, most of the time with a focus on the United States. Macro-variables are seen as very convenient for various reasons. They are indeed easily available and cover, compared to micro-datasets, relatively long time periods. 
Additionally, especially when it comes to financial market variables, micro-datasets may turn out to be relatively noisy. Fama and French (1988) and [START_REF] Polk | Cross-sectional forecasts of the equity premium[END_REF] for instance stressed how popular metrics like the dividend yield or the price earnings ratio can be unreliable at the stock-level, especially due to the difficulty to obtain trustful estimates of book data (earnings, asset size etc.). On top of that, on the methodological side, some of the econometric models used to deal with these questions, such as Vector Auto-Regressive models (VARs), may put an upper limit on the amount of variables that could be considered in the estimation. In that regard, aggregate variables appeared as natural candidates to study these questions within sparse models. viii With respect to the study of asset valuations, and of potential departures from fundamental values in financial bubble processes, most of the foundational papers relied on aggregate data. [START_REF] Shiller | Do stock prices move too much to be justified by subsequent changes in dividends?: Reply[END_REF] for example underlined that, after building ex-post estimates of the fundamental value of the Standard and Poor's Composite Stock Price Index or of the Dow Jones Industrial Average, macro-equity prices were too volatile to be only influenced by fundamental factors. In a similar fashion, subsequent studies also aimed at assessing the relationship between asset prices and their fundamental values, but again most of the time with index-level data. This is the case, among others, of Lee et al. (1999) who estimate proxies of the Dow Jones fundamental value with cointegration techniques. Another option to gauge the level of market price-efficiency is to evaluate to what extend stock prices are driven by future cash flow news. This approach has been notably used by Campbell (1991) and Campbell and Ammer (1993), also on index-level data. The informational role of the financial market prices with regards to future economic activity has been extensively studied in the literature too. Here again, macro-level data have been the main tool used to investigate on this issue. On the bond market side, a long stream of papers has reported the good performance of the term spread, the difference between short-and long-dated government security rates, to predict future recessions (Stock andWatson, 1989, Estrella and[START_REF] Estrella | The term structure as a predictor of real economic activity[END_REF], for the US, a result extended by Chinn and Kucko, 2015, on other advanced economies). Some index-level equity variables turned out to be helpful for business cycle forecasting as well, such as stock returns (Binswanger, 2000, Ólan Henry et al., 2004, Croux and Reusens, 2013, McMillan, 2021), stock price growth (Chen and Ranciere, 2019) or the dividend yield (Lan et al., 2020). Eventually, as underlined above, in the contagion literature seminal papers often relied on aggregate country-level data given the dimensionality problem that can occur if one wants to include sectoral variables in the framework. This is notably the case for papers relying on Structural VARs (SVARs) whose identification strategies often only allow for a limited amount of variables. 
As a result, original studies in this field privileged market-level data over sectoral time series, as in Diebold and Yilmaz (2009) for stock returns and stock volatility or in Diebold and Yilmaz (2012) for an extension of the framework towards the bond markets and the foreign exchange markets.

Relying on financial micro/sectoral data can prove profitable for answering macroeconomic research questions

Although potentially cumbersome in terms of data availability or of dimensionality, focusing on a micro/sectoral perspective on these issues may prove useful for three main reasons. First, financial variables are potentially affected by different factors depending on whether we consider them at the micro-level or at the macro-level. If we focus for example on the equity markets, one can show, following the present value formula of Campbell and Shiller (1988), that the index-level dividend yield (x_t) can be decomposed as follows:

$$x_t = \frac{\kappa}{1-\rho} + \sum_{j=1}^{\infty} \rho^{j-1} E_t[r_{t+j} - \Delta cf_{t+j}] \qquad (1)$$

where E_t[r_{t+j}] represents expected returns and E_t[Δcf_{t+j}] expected cash flows (κ and ρ are constant parameters; the log-linearization delivering these constants is sketched below). In other words, this identity states that current equity valuation ratios essentially depend on two factors: future cash flow growth and future expected returns/discount rates. The same equation can be written for the company or the sector i:

$$x_{i,t} = \frac{\kappa_i}{1-\rho_i} + \sum_{j=1}^{\infty} \rho_i^{j-1} E_t[r_{i,t+j} - \Delta cf_{i,t+j}] \qquad (2)$$

These two identities help us formalize why equity price behavior may differ between the micro- and the macro-stage. Samuelson initially had the intuition that stock-level returns were mainly driven by news about firms' profitability, and that the latter were averaged out in the aggregate, so that index-level returns were, on the contrary, mostly affected by variations in discount rates (Jung and Shiller, 2005). Such a contrast in behavior between micro-returns and macro-returns has been empirically documented. Sadka and Sadka (2009) for example reported that the positive relationship between earnings growth and returns at the micro-level turns negative at the macro-level. Kothari et al. (2006) underlined analogous findings between earnings surprises and contemporaneous returns, according to which stock-level returns are positively correlated with idiosyncratic earnings news, whereas this positive correlation vanishes at the index-level. In a similar fashion, Hirshleifer et al. (2009) stress that elevated accruals predict negative future returns at the stock-level, but null or positive future returns at the index-level. All these studies share a similar reasoning: cash flows are mainly idiosyncratic, whereas discount rates are common across firms. As such, the drivers of stock returns may differ greatly depending on the scale we are considering. Allegedly, these drivers are more associated with expectations about future profitability at the micro-level, but, due to diversification effects, depend more on discount rate factors (such as investor risk aversion) at the macro-level. This contrast between micro- and macro-equity behaviors can be of interest to researchers. It is thus possible to take advantage of the cross-sectional heterogeneity in the cash flow factors to extract a metric of investors' discount rates. While trying to predict future aggregate returns, Kelly and Pruitt (2013) for example estimate a factor model based on a cross-section of sectoral book-to-market ratios.
Relying on sectoral data enables them to filter out the cash flow components in equity prices and therefore to get a more precise estimates of aggregate discount rates (with the latter turning out to perform very well in index-return forecasting). Second, micro/sectoral data may be useful not only because micro-financial variables behave differently than macro-variables, but also because researchers can benefit from the heterogeneity of micro-asset price behaviors among themselves. In our view, this heterogeneity can serve various purposes. As sectoral future cash flows respond differently to macroeconomic shocks, evaluating how micro-stock prices react at specific dates may be profitable to identify these shocks. Venditti and Veronese (2020) for example rely on airline equity prices to identify oil supply shocks, given that oil price increases may affect relatively more airline stock returns than index-returns. In a similar fashion, sectoral equity future cash flows, and thus sectoral equity variables, respond differently to changes in future economic activity. As such, overweighting some sectors compared to other may prove fruitful in forecasting analysis. Andersson and Agostino (2008) showed for instance how Oil & Gas and Basic Materials industries exhibited stock returns that were more correlated with future Euro Area GDP than other sectors. Additionally, the heterogeneity in micro-equity prices matters beyond the two cash-flow channels outlined above. A more subtle argument, outlined again by Kelly and Pruitt (2013), is that xi growth sectors, typically composed of software companies, have future cash flows that are further away in time compared to value sectors, which exhibit more regular cash flow streams. As a result, growth sectors display a higher equity duration in comparison with value sectors, meaning that their equity valuation ratios are more sensitive to discount rate changes. This sensitivity difference can in turn be advantageous in order to purge sectoral equity valuation ratios from their discount rate components so as to better forecast future dividend growth (as in Kelly and Pruitt, 2013). Third, sectoral time series prove helpful in contagion-analysis as they can help disentangle the causality chain in spillover processes. As an example, various papers in the literature focused on the dynamics of European sovereign interest rates or Credit Default Swaps (CDS) during the European debt crisis by looking only at country-level sovereign variables (Ehrmann andFratzscher, 2017, De Santis andZimic, 2018). However, a substantial part of sovereign CDS dynamics in that period originated from the so-called sovereign bank-nexuses, where sovereign and bank credit risks fed each other. These nexuses can occur for various reasons. Among other, because bank risk can affect domestic sovereign risk through the "bailout channel", that is explicit or implicit public guarantees, in case of distress of the banking sectors (Alter and Schüler, 2012), or because banks hold a significant amount of domestic sovereign bonds in their balance sheets ("balance sheet channel", Angeloni andWolff, 2012, Buch et al., 2016). Therefore, omitting sectoral variables in the form of bank yields or bank CDS can result in a biased picture of the contagion processes, and, in turn, in a badly designed policy response. We tried, through the lens of the three arguments outlined above, to shed light on the importance of micro/sectoral financial market variables, even when dealing with macro-financial issues. 
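Before turning to the outline of the chapters, it is worth making explicit where the constants κ and ρ in identities (1) and (2) come from. They are delivered by the Campbell and Shiller (1988) log-linearization of returns, a standard statement of which is sketched below (with p_t and d_t the log price and log dividend, and the approximation taken around the average log dividend-price ratio; the sign convention for κ may differ slightly from the one used in the text):

$$r_{t+1} \approx \kappa + \rho\,(p_{t+1} - d_{t+1}) + \Delta d_{t+1} - (p_t - d_t), \qquad \rho = \frac{1}{1 + \exp(\overline{d-p})}, \qquad \kappa = -\log\rho - (1-\rho)\log\left(\frac{1}{\rho} - 1\right).$$

Iterating this approximation forward and ruling out explosive bubble terms yields the present-value identities used above, in which a high valuation ratio today must be matched either by low expected returns or by high expected cash flow growth.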
The three chapters of this PhD thesis aim at leveraging on this aspect and underlining the advantage of micro-data in three areas: stock return predictability, the use of equity variables for macroforecasting and the contagion processes between bank and sovereign CDS markets. Outline of the PhD thesis chapters The first chapter of this PhD thesis compares stock return predictability, that is our ability to forecast returns, in a macro-setting (at the index-level), compared to a micro-setting (at the sectoral or at the stock-level). Note however that economic theory identifies two potential sources of xii return predictability: time variation in expected returns, beta-predictability (Cochrane, 2008), or market inefficiencies, alpha-predictability. For the latter, Samuelson argued that macro-returns exhibit more inefficiencies than micro-returns, as individual stories are averaged out, leaving only harder-to-eliminate macro-mispricing at the index-level (Jung and Shiller, 2005). To evaluate this claim, we compare macro-and micro-predictability on US data to gauge if the former turns out higher than the latter. Additionally, we extend over time the methodology of Rapach et al. (2011) to disentangle the two sources of predictability. On the result side, we show that our interpretation of Samuelson's view appears incorrect, as micro-predictability is not structurally lower than macro-predictability. There is however a diversification mechanism that plays a role between the micro-and macro-predictability time series, given that pooling the different micro-predictability series together over time yields an index that is very close to our macro-predictability estimates. Second, we find that our estimated alpha-and beta-predictability indices are coherent with their corresponding theoretical implications, thus suggesting that the two mechanisms are at play in our dataset. Notably, the alpha-predictability index appears as a theoretically based and easily updatable metric in the financial stability toolkit to spot periods of irrational exuberance. The second chapter is dedicated to forecasting future economic activity, within a factor and on the basis of sectoral equity variables. The original idea of this work originated after the Covidshock in March 2020, when stock prices declined abruptly, reflecting both the deterioration of investors' expectations of economic activity as well as the surge in aggregate risk aversion. In the following months however, whereas economic activity remained sluggish, equity markets sharply bounced back. This disconnect between equity values and macro-variables can be partially explained by other factors, namely the decline in risk-free interest rates, and, for the US, the strong profitability of the IT sector. As a result, an econometrician trying to forecast economic activity with aggregate stock market variables during the Covid-crisis is likely to get poor results. The main idea of the chapter is thus to rely on sectorally disaggregated equity variables within a factor model to predict future US economic activity. We find, first, that the factor model better predicts future economic activity compared to aggregate equity variables or to usual benchmarks used in macroeconomic forecasting (both in-sample and out-of-sample). 
Second, we show that the strong performance of the factor model comes from the fact that the model filters out the expected returns component of the sectoral equity variables as well as the foreign component of aggregate future cash flows, and that it also overweights upstream and value sectors that are found to be closely linked to the future state of the US business cycle. Finally, in the third chapter of the thesis, we propose a novel approach to quantify spillovers on financial markets based on a structural version of the Diebold-Yilmaz framework (Diebold and Yilmaz, 2009). Key to our approach is an SVAR-GARCH model that is statistically identified by heteroskedasticity, economically identified by maximum shock contribution and that allows for time-varying forecast error variance decompositions. We analyze credit risk spillovers between Eurozone sovereign and bank CDS. This means that our SVAR model includes 16 endogenous variables, enabling us to encompass both sovereign and bank CDS, whereas papers close to our study focus only on sovereign CDS (De Santis and Zimic, 2018). Methodologically, we find the model to better match economic narratives compared with common spillover approaches and to be more reactive than models relying on rolling window estimations. On the economic side, we find credit risk in the Euro Area to be less integrated than suggested by estimates based on traditional Diebold-Yilmaz approaches. We estimate that, on average, credit risk spillovers explain about 37% of the total variation in our sample, amid strong variations of the spillovers over time and in the cross section.

General introduction

Crossed perspectives: financial markets and central banks

Financial markets play several roles in our contemporary economies. First, their main purpose is to allocate capital efficiently, both by remunerating capital holders and by providing financial resources to companies [START_REF] Wurgler | Financial markets and the allocation of capital[END_REF]. Second, this allocation mainly takes place through the price formation process. Indeed, as market participants confront their opinions on financial markets, an equilibrium price emerges from the aggregation of the various offers, thereby providing agents with a price signal indicating where capital is most needed. In this way, and in line with Hayek's theories on price formation mechanisms [START_REF] Hirshleifer | Where are we in the theory of information?[END_REF], markets act as a powerful informational tool, condensing the multiplicity of investors' opinions into a single metric: the price of assets. Finally, financial markets are also a meeting place for economic agents who may have different consumption needs depending on the future states of the world. If financial assets yield returns only in some of these states, economic agents can trade with one another in order to smooth their consumption. Risk sharing thus constitutes an essential dimension of financial markets. However, it has also been shown that, despite their usefulness, financial markets can also be regarded as a source of risks for two main reasons.
First of all, financial markets are characterized by speculative bubble processes in which, fueled by a form of "irrational exuberance" (Shiller, 2015), asset prices gradually increase and diverge from their fundamental value before falling abruptly. These sharp price variations are not without consequences for the real economy. During the build-up phase of the bubble, insofar as prices are disconnected from fundamental values, they can lead to an inefficient allocation of capital [START_REF] Miao | Asset bubbles, collateral, and policy analysis[END_REF]. When the bubble bursts, the sudden repricing of assets can lead to cascading defaults for financial institutions and, in turn, have significant macroeconomic effects. These defaults can occur because of the threshold effects and non-linear mechanisms present in the financial sphere, as in the case of massive redemptions for investment funds or of unanticipated margin calls [START_REF] Malkiel | Bubbles in asset prices[END_REF]. On the other hand, as mentioned above, financial interconnections may well be appropriate for risk sharing between economic agents, but they are also central to the transmission of shocks. In other words, financial markets can carry a risk for the economy not only because of the bubble mechanisms mentioned above, but also because they play a role in the propagation of negative shocks from market to market. These contagion phenomena can occur for several reasons, for example when a financial institution holds shares of another institution on its balance sheet (Brunetti et al., 2019), when two institutions hold relatively similar market portfolios (Greenwood et al., 2015), or when two assets have similar characteristics in the eyes of investors. Consequently, financial markets and central banks maintain a close relationship, both because of the functions assigned to financial markets and because of the risks associated with them. Indeed, although central bank mandates may vary across time and space, they often include price stability [START_REF] Smets | Maintaining price stability: how long is the medium term?[END_REF], sometimes combined with labor market objectives, as is the case for the FED [START_REF] Thornton | The Dual Mandate: Has the Fed Changed Its Objective? Federal Reserve Bank of St[END_REF], as well as, since the 2008 financial crisis, a financial stability mission [START_REF] Kim | Monetary aggregates and the central bank's financial stability mandate[END_REF] (Chinn and Kucko, 2015), or for the dividend yield (Lan et al., 2020). This PhD thesis thus aims to reconsider the usefulness of market variables for central banks, in particular within the framework of their financial stability mandate as well as in their need to forecast the future level of economic activity. The main message of this thesis is that a considerable proportion of the information drawn from market data is obscured if one only considers aggregate variables (as is the case for a significant part of the macro-financial literature).
As we will try to demonstrate in this thesis, the use of micro/sectoral market data can prove useful in many cases, including for the study of phenomena at the macroeconomic scale.

Historically, the attention of the macro-financial literature has mainly focused on aggregate market variables

Regarding the subjects discussed above (financial bubbles, the informational dimension of markets, financial contagion phenomena), the literature has historically used aggregate market data. Several reasons explain this preference for "macro" variables. First, they are relatively available and cover, compared with "micro" data, longer time periods. Second, particularly as far as financial market variables are concerned, micro data can be relatively noisy. Fama and French (1988) and [START_REF] Polk | Cross-sectional forecasts of the equity premium[END_REF] (Stock and Watson, 1989, Estrella and [START_REF] Estrella | The term structure as a predictor of real economic activity[END_REF], for the United States, a result extended to other advanced economies by Chinn and Kucko, 2015). For the equity market, other index-level variables have also proven useful for macroeconomic forecasting, such as stock returns (Binswanger, 2000, Ólan Henry et al., 2004, Croux and Reusens, 2013, McMillan, 2021), stock price growth (Chen and Ranciere, 2019) or the dividend yield (Lan et al., 2020). Finally, as underlined above, the literature on financial contagion phenomena

The use of micro/sectoral market data can prove relevant for the study of macroeconomic questions

Although potentially problematic in terms of availability or of dimensionality, micro/sectoral data drawn from financial markets can be particularly useful on these subjects for three main reasons. First of all, financial variables are potentially affected by different factors at the micro scale compared with the macro scale. If we focus for example on equity markets, one can show, following the present value formula of Campbell and Shiller (1988), that the index-level dividend yield (x_t) can be decomposed as follows:

$$x_t = \frac{\kappa}{1-\rho} + \sum_{j=1}^{\infty} \rho^{j-1} E_t[r_{t+j} - \Delta cf_{t+j}] \qquad (3)$$

with E_t[r_{t+j}] the expected returns and E_t[Δcf_{t+j}] the expected dividends (κ and ρ being constant parameters). In other words, this identity underlines that equity valuation ratios depend on two factors: future dividend growth and future returns/the discount rate. The same equation can be rewritten for sector i:

$$x_{i,t} = \frac{\kappa_i}{1-\rho_i} + \sum_{j=1}^{\infty} \rho_i^{j-1} E_t[r_{i,t+j} - \Delta cf_{i,t+j}] \qquad (4)$$

These two identities allow us to formalize the idea that equity valuations behave differently at the macro and at the micro level (Jung and Shiller, 2005). Such a differential in behavior between micro and macro returns has indeed been documented empirically. Sadka and Sadka (2009) [START_REF] Fama | Efficient Capital Markets: A Review of Theory and Empirical Work[END_REF][START_REF] Pesaran | Predictability of asset returns and the efficient market hypothesis[END_REF].
Since all available information is already embedded in asset prices, changes in the latter can only be caused by the arrival of new information, which is by definition unpredictable. In other words, prices should follow a random walk, and running a regression of future returns, r_{t+1}, on past information, X_t, should not yield predictive content. At the same time, stock market efficiency may differ between a macro-perspective and a micro-perspective. Paul Samuelson argued in this sense that markets are efficient at the level of individual stocks but inefficient at the aggregate level, since idiosyncratic mispricings are arbitraged away while macro-mispricings are much harder to eliminate (Jung and Shiller, 2005). If this view is correct, then we should observe higher levels of predictability at the macro- than at the micro-level. The contribution of this paper is twofold. First, we compare, over time, macro- and micro-series of return predictability. Although the literature on this subject is enormous, to our knowledge we are the first to conduct this exercise in a time-varying manner. Allowing for time variation in our results matters, as return predictability appears largely to be a regime-dependent phenomenon (Henkel et al., 2011[START_REF] Farmer | Pockets of predictability[END_REF]. Second, based on Rapach et al. (2011), we contribute to the literature aiming at identifying the drivers of return predictability by building a new indicator that is, theoretically, directly linked with market inefficiencies: the alpha-predictability index. On the result side, we first show that, contrary to Samuelson's view, aggregate returns do not exhibit higher levels of predictability compared to micro-returns. Second, we document that, as expected, our alpha-predictability index is positively linked with metrics of market effervescence. However, more precisely, modern views of the EMH underline that a certain extent of return predictability can persist even in an efficient market setting. The aforementioned no-predictability paradigm implied that stock prices followed a random walk, and that expected returns were constant. On the contrary, Cochrane (2008) argues that, as investors' risk aversion varies over time, expected returns vary as well. Taking into account time variation in expected returns along the business cycle can therefore generate return predictability even in the absence of market inefficiencies. To put it bluntly, in the midst of an economic crisis, investors become highly risk averse. This leads to a decline in stock prices and to an increase in expected returns. People could therefore predict that returns will be high in the future, but they are too concerned about their current situation to benefit from it. In the same strand of the literature, empirical papers also argued that this mechanism should be especially at play during economic downturns (Henkel et al., 2011[START_REF] Dangl | Predictive regressions with time-varying coefficients[END_REF][START_REF] Rapach | Out-of-sample equity premium prediction: Combination forecasts and links to the real economy[END_REF]. Therefore, the interpretation of predictability is delicate. A high level of predictability can reflect inefficiencies such as investors' irrationality or market frictions. But it can also mirror variations in aggregate risk aversion. Consequently, in order to clarify our framework, we present three hypotheses that summarise the different views on return predictability. In the first hypothesis, linked with Samuelson's view, macro-predictability should be higher than micro-predictability, especially in times of irrational exuberance (e.g. during the dot-com bubble).
The second hypothesis, in line with Cochrane's view, states that micro-and macropredictability should not behave differently as they are influenced by the same factor: changes in aggregate risk aversion. They should therefore evolve in tandem and be higher during recessions. A third "in-between" hypothesis assumes that returns at the micro-level are driven by idiosyncratic factors that can be either efficient (e.g. news about cash flows) or inefficient (e.g. illiquidity issues). The former decrease micro-predictability, whereas the latter increase it. At the aggregate level, both types of individual factors are averaged out, so that micro-predictability can either be higher or lower than macro-predictability. Eventually, this third view is agnostic regarding the sources of macro-predictability, which can therefore be high both during recessions and during market effervescence periods. We test the three different hypotheses on US post-war data, with an out-of-sample methodology that combines 23 models estimated on rolling windows. These models are commonly used in the return predictability literature and encompass both traditional econometric methods, factor modelling approaches and Machine Learning techniques. The large number of approaches considered here reflects the substantial model instability in forecasting returns exercises [START_REF] Timmermann | Forecasting methods in finance[END_REF]. We find overall that our results corroborate the third hypothesis for at least two reasons. First micro-predictability is neither structurally higher nor lower than macro-predictability. On the contrary, micro-predictability "bounces around" macro-predictability. This result is in line with a model where micro-predictability level depends on the relative importance of efficient or inefficient idiosyncratic component of returns. Second, we extend the methodology of Rapach et al. (2011) in a time-varying manner so as to disentangle the sources of macro-predictability. The two resulting series, the alpha-predictability and the beta-predictability indices, should track changes in macro-predictability due to market inefficiencies and due to time-varying risk aversion, respectively. In accord with the third hypothesis, we find that the alpha-predictability index is positively associated with metrics of market exuberance, whereas the beta-predictability index correlates with business cycle variables. This finding underlines that the two sources of return predictability are at play in our dataset, and therefore enables to reconcile the diverging views in the literature about the drivers of return predictability. The rest of the paper is structured as follows: Section 1.2 details how the current paper is located in the return predictability literature, Section 1.3 describes the three theoretical hypotheses outlined above, Section 1.4 presents the methodology and the datasets used, Section 1.5 reports the empirical results, Section 1.6 provides different robustness checks and Section 1.7 concludes. Return Predictability in the Literature The literature on stock return predictability is extensive and has considerably evolved over time. Seminal papers focused on aggregate stock returns, most of the time reporting in-sample results within linear regression approaches. 
Various macro-financial variables appeared to have some predictive power, such as the dividend yield [START_REF] Fama | Efficient Capital Markets: A Review of Theory and Empirical Work[END_REF]French, 1988, Campbell andShiller, 1988), the term structure of interest rates [START_REF] Campbell | Stock returns and the term structure[END_REF] or the consumption-wealth ratio [START_REF] Lettau | Consumption, aggregate wealth, and expected stock returns[END_REF]. Nevertheless, in sharp contrast with the previous studies, [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF] underline that the former results are hardly replicable. In a linear setting, return predictability appears as a spurious result, both in-sample and out-of-sample. However, relying on more sophisticated techniques, subsequent papers claim to forecast future returns, although most of the time with relatively low R 2 . These innovative approaches fall mainly in three non-exclusive categories. First, return prediction is a specific forecasting exercise in itself, as the use of a performing model by investors is likely to erase the predictability pattern the model is based upon [START_REF] Timmermann | Forecasting methods in finance[END_REF]. The resultant instability in the predicting relationship paved the way for forecast averaging techniques, since they enable the econometrician not to rely on the assumptions of a single model. This includes notably simple and advanced forecast combination methods [START_REF] Aiolfi | Persistence in forecasting performance and conditional combination strategies[END_REF][START_REF] Rapach | Out-of-sample equity premium prediction: Combination forecasts and links to the real economy[END_REF][START_REF] Elliott | Handbook of economic forecasting[END_REF][START_REF] Baetje | Does a lot help a lot? Forecasting stock returns with pooling strategies in a data-rich environment[END_REF] or Bayesian Model Averaging [START_REF] Dangl | Predictive regressions with time-varying coefficients[END_REF]. Second, in line with other financial market variables, stocks returns are mostly influenced by investors' expectations. These expectations constitute an unobserved variable, but can be included in the predictive model as a latent factor. Consequently, theory-driven approaches in the form of factor models have proven to perform relatively well at different frequencies (Binsbergen andKoijen, 2010, Kelly andPruitt, 2013). Third, given the complex structure of financial markets, it is unlikely that stock returns follow a linear process. As a result, different studies have explicitly investigated non-linear forecasting techniques. This comprises restricted linear models [START_REF] Campbell | Predicting excess stock returns out of sample: Can anything beat the historical average?[END_REF], nonlinear VARs (Henkel et al., 2011), non-parametric approaches [START_REF] Farmer | Pockets of predictability[END_REF], or Machine Learning methodologies [START_REF] Rapach | Industry Return Predictability: A Machine Learning Approach[END_REF][START_REF] Chinco | Sparse Signals in the Cross-Section of Returns[END_REF]. Although all these analyses have exposed in-sample or out-of-sample forecastability, debate remains about the drivers of return predictability over time. 
Some papers underline that, in line with Cochrane's view, predictability is a countercyclical phenomenon and is therefore elevated during economic downturns [START_REF] Rapach | Out-of-sample equity premium prediction: Combination forecasts and links to the real economy[END_REF], Henkel et al., 2011[START_REF] Dangl | Predictive regressions with time-varying coefficients[END_REF]. On the contrary, other studies argued that returns are especially predictable in bullish financial markets [START_REF] Farmer | Pockets of predictability[END_REF], while other identified specific periods of return predictability (e.g. surrounding the oil price shock of 1973, [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF]Goyal, 2008, Timmermann, 2008). Yet, return predictability is not the only available metric to gauge market inefficiencies. One intuitive way to do so is to estimate the informative content of stock prices [START_REF] Bai | Have financial markets become more informative?[END_REF]. In other words, are current prices useful to predict future cash flows? This recent work echoes older literature that evaluated to what extent stock returns were driven by future cash flows or by future returns (Campbell, 1991, Campbell andAmmer, 1993). Another method amounts to estimate a fundamental value for stock prices, and to define market inefficiency as the departure of observed prices from this estimate (Lee et al., 1999). Most of these metrics of inefficiency are based on aggregate data. However some papers extended the above methodologies for individual stocks or for subgroups of stocks (Vuolteenaho, 2002[START_REF] Cohen | The Value Spread[END_REF][START_REF] Davila | Identifying Price Informativeness[END_REF], sometimes with indicators that evolve over time [START_REF] Farboodi | Where Has All the Data Gone[END_REF]. Similarly, some studies evaluate return forecastability at the stock-level, but without reporting specifically micro-predictability [START_REF] Avramov | Predicting stock returns[END_REF], without time variation in the results (Rapach et al., 2011[START_REF] Kong | Predicting market components out of sample: asset allocation implications[END_REF] or without drawing a proper micro-macro analysis [START_REF] Guidolin | Time varying stock return predictability: Evidence from US sectors[END_REF][START_REF] Chinco | Sparse Signals in the Cross-Section of Returns[END_REF]. Compared to the aforementioned studies, the goal of the present paper is to compare, over time, macro-and micro-predictability so as to extract from this analysis a metric of market inefficiencies 1 . This question has, to our knowledge, never been addressed in the literature. Working Hypotheses We formalize in this section the three hypotheses outlined above. Following [START_REF] Avramov | Stock return predictability and asset pricing models[END_REF] and [START_REF] Rapach | Out-of-sample equity premium prediction: Combination forecasts and links to the real economy[END_REF] we express (excess) aggregate returns as: r t+1 = α(X t ) + β ′ t f t+1 + ϵ t+1 (1.1) Where α(X t ) represent the inefficient part of returns, f t+1 a vector of portfolio-based factors capturing systematic risk, β t the corresponding vector of factor loadings and ϵ t+1 a disturbance term of mean zero. Two sources of return predictability are potentially at play here. With time t variables, the econometrician is able to predict market inefficiencies α(X t ). 
Additionally, return predictability can emerge from the forecastability of risk factors if we further assume that they evolve as follows:

$$f_{t+1} = g(X_t) + u_{t+1} \qquad (1.2)$$

where g(X_t) is a vector of (forecastable) conditional expected returns for the risk factors and u_{t+1} a vector of mean-zero disturbance terms independent of ϵ_{t+1}. Besides, we consider that micro-returns r_{i,t+1} are affected by the aggregate factors α(X_t) and ϵ_{t+1}, but also by their individual counterparts α_i(X_t) and ϵ_{i,t+1} (that is, idiosyncratic inefficiencies and idiosyncratic unpredictable shocks). We assume that α_i(X_t) and ϵ_{i,t+1} are centered around 0 and are diversified away at the macro-level. More precisely, we write our system of macro- and micro-returns as:

$$\begin{cases} r_{i,t+1} = \alpha_i(X_t) + \omega_i \alpha(X_t) + \beta_{i,t}' f_{t+1} + \epsilon_{i,t+1} + \delta_i \epsilon_{t+1} \\ r_{t+1} = \alpha(X_t) + \beta_t' f_{t+1} + \epsilon_{t+1} \\ f_{t+1} = g(X_t) + u_{t+1} \end{cases} \qquad (1.3)$$

with ω_i and δ_i the exposures of r_{i,t+1} to the common factors α(X_t) and ϵ_{t+1}, respectively, and with ϵ_{i,t+1} independent of ϵ_{t+1} and of u_{t+1}. The system of equations (1.3) constitutes the basis for the three following hypotheses.

H1, Samuelson's view

Our first hypothesis is built upon Samuelson's intuition and entails several implications. First, we consider here that α_i(X_t) = 0 given that, in line with Samuelson, arbitrageurs should eradicate micro-mispricings. Second, at the time when Samuelson expressed this idea, the theory of return predictability driven by time-varying expected returns had not been formulated yet. Some studies even modelled expected returns as a constant [START_REF] Samuelson | Lifetime portfolio selection by dynamic stochastic programming[END_REF]. We therefore suppose here that g(X_t) = c, with c a constant vector, so that f_{t+1} = c + u_{t+1}. Third, as micro-inefficiencies are arbitraged away, and since efficient idiosyncratic news is averaged out in the aggregate, it is assumed here that micro-returns are more driven by unpredictable components than macro-returns. Consequently, return predictability should be higher in the aggregate than at the micro-level 2. Fourth, the common predictable factor α(X_t) should be especially forecastable in times of elevated market inefficiency. System (1.3) can therefore be rewritten for H1 as:

$$\begin{cases} r_{i,t+1} = \omega_i \alpha(X_t) + \beta_{i,t}' f_{t+1} + \epsilon_{i,t+1} + \delta_i \epsilon_{t+1} \\ r_{t+1} = \alpha(X_t) + \beta_t' f_{t+1} + \epsilon_{t+1} \\ f_{t+1} = c + u_{t+1} \end{cases} \qquad (1.4)$$

For illustrative purposes, we highlight in the top panel of Figure 1.1 how predictability should behave according to H1. In that setting, return predictability only comes from the inefficient component of returns, α(X_t). As α(X_t) is mixed with unpredictable news (ϵ_{i,t+1}) at the stock-level, micro-predictability (in blue) should be lower than macro-predictability (in red). Additionally, we consider here that markets are inefficient in times of irrational exuberance (Shiller, 2015) or during downturns, as the proportion of noise traders may be especially high in recessions [START_REF] Veldkamp | Slow boom, sudden crash[END_REF]. Accordingly, macro-predictability should peak during the late-90s dotcom bubble, or during the Great Financial Crisis of 2008 (grey bars indicate NBER US recessions).

H2, Cochrane's view

Our second hypothesis dwells on Cochrane (2008), and assumes return predictability in the absence of market inefficiencies. Consequently, we consider here that α(X_t) = α_i(X_t) = 0.
Conversely, return predictability stems from time variation in expected returns, that is, from the predictability of the risk factors: f_{t+1} = g(X_t) + u_{t+1}. In this setting, expected returns vary with risk aversion along the business cycle, for instance if investors fear falling short of their consumption targets during downturns [START_REF] Campbell | By force of habit: A consumption-based explanation of aggregate stock market behavior[END_REF]. If, at time t, a variable like the dividend yield is able to spot changes in contemporaneous risk aversion, and thus changes in expected returns, it can contain predictive content for future returns 3. We then have for H2 the following system:

$$\begin{cases} r_{i,t+1} = \beta_{i,t}' f_{t+1} + \epsilon_{i,t+1} + \delta_i \epsilon_{t+1} \\ r_{t+1} = \beta_t' f_{t+1} + \epsilon_{t+1} \\ f_{t+1} = g(X_t) + u_{t+1} \end{cases} \qquad (1.5)$$

Here, micro- and macro-predictability are influenced by the same phenomenon: the forecastability of f_{t+1}. As such, they should evolve in similar manners, although some differences may subsist depending on the values of β_{i,t} and β_t, and on the realizations of ϵ_{i,t+1} and ϵ_{t+1}. This point is illustrated by the common trend in micro- (blue lines) and macro-predictability (red line) in the middle panel of Figure 1.1. Additionally, current returns may be especially influenced by expected returns during downturns, since expected returns are more volatile in recessions (Henkel et al., 2011). Therefore, as underlined in Figure 1.1, both micro- and macro-predictability should behave in a counter-cyclical way, and rise in bad times.

H3, Third view

Eventually, between the two preceding polar cases, the third view assumes that micro-returns can be influenced both by aggregate inefficiencies and by idiosyncratic mispricing, e.g. localized bubbles or specific illiquidity issues. Leaning on the previous representation, predictability could therefore emerge from "alpha"-predictability (aggregate or individual inefficiencies, α_i(X_t) and α(X_t)), or from "beta"-predictability (due to time variation in expected returns, in line with H2). If we also assume that the α_i(X_t) and ϵ_{i,t+1} are diversified away at the aggregate level, H3 yields the exact same system of equations as System (1.3). This view entails several implications, illustrated in the bottom panel of Figure 1.1. First, depending notably on the relative importance of α_i(X_t) and ϵ_{i,t+1}, micro-predictability can be higher or lower than macro-predictability. Second, as these two variables are independently distributed, we would expect the average of micro-predictability indices across stocks to be similar to the macro-predictability series. Eventually, as macro-predictability can increase due to aggregate inefficiencies or to time variation in expected returns, it can peak both during speculative bubble periods and during downturns.

Data and Methodology

We assess the relevance of the three hypotheses with an out-of-sample methodology that tries to encompass the major modelling approaches in the literature. Our analysis is focused on postwar US monthly excess returns (from September 1945 to October 2020), but can easily be extended to other datasets.

Stock Return Data

Throughout this study we investigate the predictability of excess returns, i.e. total returns minus a risk-free rate. We extract monthly postwar US returns from Kenneth French's website. This implies that:
We evaluate stock return predictability over a market constituted by all CRSP firms incorporated in the US and listed on the NYSE, AMEX, or NASDAQ. We take a as a risk-free rate the one-month Treasury bill rate from the same source. 2. We label "aggregate returns" the excess returns of the overall stock market, and "individual returns" the excess returns of the 25 Fama-French portfolios formed on Size and Book-to-Market. Furthermore, we use supplementary variables as exogenous predictors in Section 1.4.2, or as covariates in the interpretative regressions of Section 1.5.2. Their collections and their constructions are more thoroughly detailed in Appendix 1.A.2. Constructing Raw Predictability Metrics We present here our methodology to gauge the "raw predictability" of stock returns. We call raw predictability our mere ability to forecast future returns compared to a benchmark. This estimate will then be disentangled between alpha-and beta-predictability in Section 1.4.3. As underlined in Section 1.2, an extensive number of models has been used in the return predictability literature. Besides, return-forecasting suffers from an elevated model instability as the popularity of performing approaches eradicates the predictive pattern they are based upon [START_REF] Timmermann | Forecasting methods in finance[END_REF]. We therefore adopt here an agnostic view, and centre our analysis on the estimation of K = 23 model types. These latter cover classic econometric models, forecast averaging methods, factor modelling approaches and Machine Learning tools. They are exhaustively described in Table 1.A.1. We evaluate return predictability with the out-of-sample R 2 of [START_REF] Campbell | Predicting excess stock returns out of sample: Can anything beat the historical average?[END_REF], a metric widely used in the literature [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF]Goyal, 2008, Moench andStein, 2021). This indicator documents how well a model performs compared with the prevailing mean as a benchmark. More formally, given rt the prevailing mean of aggregate or individual returns from t -L + 1 to t, r k t+1 the forecast of r t+1 of model k based on variables running from t -L + 1 to t, the out-of-sample R 2 for model k is defined as: R 2 os,k,t = 1 - t-1 ∑ i=t-n (r i+1 -r k i+1 ) 2 (r i+1 -ri ) 2 (1.6) In line with [START_REF] Timmermann | Elusive return predictability[END_REF], we use a rolling window estimation of length L = 120 months, and an averaging period for R 2 os,k,t of length n = 36 months. Our model-selection strategy proceeds as follow: First, given a specific series of aggregate or individual returns {r t+1 } T -1 t=0 , we evaluate the different K models on a rolling window of length L. For each model m, we thus obtain a series of out-of-sample forecast: {r m t+1 } T -1 t=L 4 . Second, again for each model k, we compute the corresponding R 2 os,k,t at each point in time from L + n + 1 to T . Eventually, as in pseudo-real time strategies, we choose the model with the best average outof-sample R 2 os,k,t over the previous estimation period to perform the next-period forecast 5 . We can therefore build a series of final out-of-sample predictions {r f t+1 } T -1 t=L+n , where, potentially, at each point in time a different model is chosen for the final forecast. From the latter series, we can then construct our final metric of raw R 2 for r t+1 : {R 2 os,t } T t=L+n+1 . 
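For concreteness, the rolling out-of-sample R2 of equation (1.6) and the pseudo-real-time model selection described above can be sketched in a few lines of Python. The array names (returns, model_forecasts, bench) are purely illustrative, and the snippet is a simplified sketch of the procedure rather than the code actually used for the estimations.

import numpy as np

def oos_r2(returns, forecasts, bench, n=36):
    """Rolling out-of-sample R2 (eq. 1.6) computed over the last n months.
    returns, forecasts, bench: aligned 1-d arrays of realised returns,
    model forecasts and prevailing-mean forecasts for the same periods.
    Entry t of the output is R2_os at date t, NaN until n errors are available."""
    e_model = (returns - forecasts) ** 2
    e_bench = (returns - bench) ** 2
    out = np.full(len(returns), np.nan)
    for t in range(n, len(returns) + 1):
        out[t - 1] = 1.0 - e_model[t - n:t].sum() / e_bench[t - n:t].sum()
    return out

def select_and_forecast(returns, model_forecasts, bench, n=36):
    """Pseudo-real-time selection: at each date, pick the model with the best
    average R2_os so far and use its next-period forecast as the final forecast."""
    K = model_forecasts.shape[1]
    r2 = np.column_stack([oos_r2(returns, model_forecasts[:, k], bench, n)
                          for k in range(K)])
    final = np.full(len(returns), np.nan)
    for t in range(n, len(returns) - 1):
        k_best = np.nanargmax(np.nanmean(r2[: t + 1], axis=0))
        final[t + 1] = model_forecasts[t + 1, k_best]
    return final

In this sketch, element t of each array refers to the same forecast target, and the selection made at date t only uses errors observed up to t, in keeping with the pseudo-real-time design.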
Disentangling the Sources of Predictability Following the different hypotheses outlined in Section 1.3, return predictability can emerge from two different phenomenons: the exposure to predictable risk factors (f t+1 ) or to market inefficiencies (α(X t ) and α i (X t )). For each portfolio returns r i,t+1 , we compute the series of raw return predictability R 2 i,os,t according to the methodology described in Section 1.4.2. In this section, to decompose this metric between the two sources of predictability, we extend the methodology proposed by Rapach et al. (2011). We first build a "beta-pricing restricted" forecast of r t+1 : r β t+1 . To that aim, we define as risk factors f t+1 the factors of the Fama-French three factor model, also extracted from Kenneth French website. We obtain the risk factors forecasts, f f t+1 , with the exact same prediction algorithm detailed in Section 1.4.2. Then, in line with Rapach et al. (2011), we estimate the risk loadings βt by regressing, over a rolling window and without constant, {r s } t t-L+1 on {f } t t-L+1 . We can eventually construct: r β t+1 = β′ t f f t+1 (1.7) In other words, all predictability stemming from the exposure to time varying risk factors should be incorporated in the beta-pricing restricted forecast r β t+1 . Any additional return predictability beyond this beta-predictability reflects the fact that α i (X t ) ̸ = 0 or that α(X t ) ̸ = 0, and is therefore called the alpha-predictability. We can thus represent the evolution over time of the beta-predictability and the alpha-predictability by decomposing the different R 2 i,os,t . To do so, we first compute the "beta-R 2 ": R 2 i,β,t . This metric documents the difference in predictive ability between the beta-pricing restricted forecast and the prevailing mean: R 2 i,β,t = 1 - t-1 ∑ i=t-n (r i+1 -r β i+1 ) 2 (r i+1 -ri ) 2 (1.8) We then gauge the performance of the unrestricted forecast (r f t+1 ) compared to the beta-pricing restricted forecast (r β t+1 ) by computing the "alpha-R 2 ": R 2 i,α,t . This latter assesses the extrapredictability that can be gained beyond the exposition to predictable risk factors: R 2 i,α,t = 1 - t-1 ∑ i=t-n (r i+1 -r f i+1 ) 2 (r i+1 -r β t+1 ) 2 (1.9) In line with Rapach et al. (2011), we can show that: R 2 i,os,t = R 2 i,α,t + R 2 i,β,t -R 2 i,α,t * R 2 i,β,t (1.10) Given that levels out-of-sample R 2 are particularly low in return forecasting exercises, we can therefore omit the cross-product and write: R 2 i,os,t ∼ R 2 i,α,t + R 2 i,β,t (1.11) In other words, looking at raw macro-and micro-predictability, R 2 os,t and R 2 i,os,t is helpful to discriminate between the three different hypotheses of Section 1.3. But analyzing more closely the behaviours of R 2 i,α,t and R 2 i,β,t enables to evaluate whether the two sources of predictability are indeed at play in the sample6 . Empirical Results This section first describes the raw predictability results over time, from both a micro-and a macro-perspective. It then outlines the decomposition of the raw predictability series between the alpha-and the beta-predictability, as well as the interpretation of the latter. Micro-and Macro-Raw Predictability We represent on Figure 1.2 the raw predictability metrics for portfolio-returns (R 2 i,os,t , in blue) and for aggregate returns (R 2 os,t , in red). The 25 R 2 i,os,t series are also plotted separately on Figure 1.A.1 of Appendix 1.A.3. Several findings emerge from Figure 1.2 that help to discriminate between the three hypotheses of Section 1.3. 
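Before turning to these findings, the decomposition of Section 1.4.3 can be summarized in code. The sketch below reuses the oos_r2 helper from the previous snippet; names such as factor_forecasts are hypothetical, and the snippet illustrates equations (1.7) to (1.10) rather than reproducing the exact implementation.

import numpy as np

def beta_restricted_forecast(returns, factors, factor_forecasts, L=120):
    """r_beta_{t+1} = beta_t' f_hat_{t+1}, where beta_t comes from a rolling
    no-constant OLS of the last L portfolio returns on the last L realised factors."""
    T, K = factors.shape
    r_beta = np.full(T, np.nan)
    for t in range(L - 1, T - 1):
        X = factors[t - L + 1:t + 1]          # realised factors up to t
        y = returns[t - L + 1:t + 1]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r_beta[t + 1] = factor_forecasts[t + 1] @ beta   # eq. (1.7)
    return r_beta

# Alpha- and beta-R2, reusing oos_r2 from the previous sketch:
# r2_beta  = oos_r2(returns, r_beta, prevailing_mean, n=36)    # eq. (1.8)
# r2_alpha = oos_r2(returns, r_final, r_beta, n=36)            # eq. (1.9)
# r2_total = r2_alpha + r2_beta - r2_alpha * r2_beta           # eq. (1.10)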
First micro-predictability is not structurally lower than macro-predictability. This result invalidates the main assumption of H 1 , Samuelson's view, that macro-returns are more affected by market inefficiencies compared to micro-returns. Second, micro-predictability does not seem to follow the exact same behaviour as the macro- predictability series. Although common factors are present in the micro-predictability series (as analyzed in Section 1.5.2), we notice that R 2 i,os,t is sometimes significantly lower or higher than R 2 os,t . This finding contradicts H 2 (Cochrane's view) according to which micro-and macropredictability should behave similarly. Eventually, two observations appear to corroborate the last hypothesis (H 3 , the "third view"). First, we remark on Figure 1.2 that the variances of R 2 i,os,t are considerably higher than for R 2 os,t . Second we plot on Figure 1.3 the average of the micro-predictability series over the I different portfolios (R 2 i,os,t ≡ I -1 ∑ 1≤i≤I R 2 i,os,t , in dark blue) along the macro-predictability series (R 2 os,t , in red). We observe that pooling the different R 2 i,os,t results in a time series that is significantly more correlated with R 2 os,t than the individual micro-predictability series. Both of these findings are in line with the implications of H 3 . In this setting, micro-predictability is affected upward by idiosyncratic inefficiencies (α i (X t )) and downward by idiosyncratic news (ϵ i,t+1 ). As these two components are centered around 0, they do not translate to macro-returns. Accordingly, macro-predictability should be less volatile than micro-predictability, whereas the average of the micro-predictability series should mimic the evolution of the macro-predictability series. We find both of these results on Figures 1.2 and 1.3. The results outlined in this section, regarding the means and the variances of the macro-and micro-predictability series, as well as regarding the strong correlation of R 2 i,os,t with respect to R 2 os,t , are detailed in Figure 1.A.2 of Appendix 1.A.47 8 . On this specific exercise, we underline two additional results. First, a natural question regarding Figures 1.2 and 1.3 is whether or not an investor would have been able to make money out of these forecasting exercises. Similar to [START_REF] Timmermann | Elusive return predictability[END_REF], we find that both R 2 os,t and R 2 i,os,t (for all portfolios) are negative on average. This means that, on the overall estimation period, an investor would not have been to build a profitable strategy based on our forecasts. However, in line with [START_REF] Timmermann | Elusive return predictability[END_REF] and [START_REF] Farmer | Pockets of predictability[END_REF], returns appear predictable at specific time periods. In our case, at the macro-level, R 2 os,t is positive on average during two decades: the 50s and the 80s (amounting to, respectively, 2.0% and 0.3%). Although relatively small, [START_REF] Campbell | Predicting excess stock returns out of sample: Can anything beat the historical average?[END_REF] showed that R 2 of small magnitudes may translate to a substantial gain improvements for an investor with mean-variance preferences. Following their rule of thumb for the macro-returns, we find that an investor with a coefficient of risk aversion of 3 could have improved the returns of his portfolio by 80 bp in the 50s and by 10 bp in the 80s. 
At the micro-level, evidence of return predictability appear more mixed, with most of R 2 i,os,t being negative on average on these two decades. However, for the portfolios with the highest R 2 i,os,t , the same calculations imply that a similar investor would have been able to increase his returns by 230 bp and 74 bp over these two periods. Second, we investigate whether the results of Figures 1.2 and 1.3 remain the same if we consider individual stock returns instead of portfolio returns to assess micro-predictability. To do so, we retrieve more than 100 individual stock returns from Refinitiv starting from January 1986 to October 2020 9 . The results of this exercise are depicted on Figure 1.A.4 in Appendix 1.A.6. It can be seen that the main results remain unchanged when we gauge micro-predictability with individual stock returns. Here also we find that the variances of R 2 i,os,t are considerably higher than for R 2 os,t , whereas pooling the different R 2 i,os,t results in a time series that is significantly We can notice on Figure 1.A.3 that the mean of R 2 os,t is similar to the means of R 2 i,os,t , while the standard deviation of R 2 os,t appears significantly lower than the standard deviations of R 2 i,os,t . 9 To select the stocks, based on Refinitiv data, we retrieve all the companies that belonged to the S&P 500 for at least a month, from January 2008 until October 2020. We then try to strike a balance between the number of individual stocks that we consider and the availability of their returns over a long period. Overall our samples of individual stock returns covers 110 companies from January 1986 to October 2020 more correlated with R 2 os,t than the individual micro-predictability series. Alpha-and Beta-Predictability Building Alpha-and Beta-Predictability The findings highlighted with Figures 1.2 and1.3 enabled to discard the first two hypotheses: Samuelson's and Cochrane's views. On the reverse, the third view, H 3 , seems to fit well with the behaviours of the micro-and macro-raw predictability series outlined above. However, H 3 has also implications regarding the time variation of macro-predictability. Since macropredictability is influenced by alpha-and beta-predictability, it should be significant both in times of elevated market inefficiencies and during economic downturns. As such, Figures 1.2 We therefore attempt in this section to better understand the sources of variation of macropredictability over time. To do so, we take as a starting point the individual portfolio returns (r i,t+1 ) that we use to estimate the individual series of alpha-predictability (R 2 i,α,t ) and beta-predictability (R 2 i,β,t ) with the methodology detailed in Section 1.4.3. Eventually, we represent on Figures 1.4 and 1.5 the behaviours of, respectively, the pooled series R 2 i,α,t ≡ I -1 ∑ 1≤i≤I R 2 i,α,t and R 2 i,β,t ≡ I -1 ∑ 1≤i≤I R 2 i,β,t 10 . We draw several conclusions from these figures. First remember that, in line with H 3 , we expect R 2 i,α,t to rise in periods of market exuberance, and R 2 i,β,t to increase during recessions. In order to better visualize their time variations, we plot along R 2 i,α,t and R 2 i,β,t the opposite of the "Excess CAPE yield" (ECY, built as the inverse of the CAPE ratio minus a risk-free rate) and 10 On Figures 1.4 and 1.5 we center the R 2 metrics around the mid-point of their estimation periods. In other words, whereas in Section 1.4. 
2 we had R 2 os,m,t = 1 - ∑ t-1 i=t-n (r i+1 -r m i+1 ) 2 (r i+1 -r i ) 2 , here we consider that R 2 os,m,t = 1 - ∑ t-1+n/2 i=t-n/2 (r i+1 -r m i+1 ) 2 (r i+1 -r i ) 2 , with n an even number. We do this as, for the out-of-sample predictive algorithm, we need all the previous forecasting errors to perform our model selection. However, for interpretative purposes, building the R 2 metrics with only past data will tend to artificially shift the series with respect to the other external variables. Figure (1.4) R 2 i,α,t and US ECY, over time On the graph are represented the average across portfolios of the alpha-predictability series (R 2 i,α,t , in red) and the US Excess CAPE yield multiplied by -1 (in blue). These monthly series have been standardized to fit in the same graph, and, for visual purposes, they have been smoothed over a 3-month period. Raw series of R 2 i,α,t are yet available in the Figure 1.A.7 of Appendix 1.A.7. The red area figures the cross-sectional dispersion around R 2 i,α,t (+/-1 standard deviation). The metric used is the out-of-sample alpha-predictability R the Unemployment rate. The former has been advocated to be a good metric of market effervescence 11 [START_REF] Shiller | CAPE and the COVID-19 Pandemic Effect[END_REF], while the latter stands as an intuitive variable to spot changes in the business cycle. Regarding the behaviour of R 2 i,α,t on Figure 1.4, the series appears positively correlated with the opposite of the US ECY. As expected, R 2 i,α,t is relatively high in periods of market booms. These periods include notably the "Kennedy-Johnson peak" (Shiller, 2015) around 1966, the dotcom bubble of the late 90s and finally the period preceding the Great Financial Crisis of 2007. As for R 2 i,β,t , the series appears also positively associated with the US 11 Adjusting likewise the CAPE ratio enables to take into account the role of the fall in risk-free rates for stock valuations in the recent years. In line with Chatelais and Stalla-Bourdillon (2020) we multiply the ECY by -1 throughout the rest of this paper, so that an increase in this metric reflects stronger stock valuations (with respect to bonds). Second, the red areas surrounding R 2 i,α,t and R 2 i,β,t figure the cross-sectional dispersion of alpha-and beta-predictability across portfolios. We thus notice that the series of R 2 i,α,t are way more dispersed than the series of R 2 i,β,t . This result is quite intuitive as well: in line with H 3 , alpha-predictability depends on the importance of both idiosyncratic and aggregate factors, α i (X t ) and α(X t ). On the reverse, beta-predictability should reflect a single phenomenon, the predictability of f t+1 . Therefore we should indeed observe more dispersion among the different R 2 i,α,t than for the different R 2 i,β,t . These two findings appear in accordance with the implications of H 3 regarding either the timing of alpha-predictability and beta-predictability peaks, or the dispersion among portoflio returns for these series. However, to better assess the drivers of R 2 i,α,t and R 2 i,β,t beyond pure visual examination, we turn to regression analysis in the next section. Interpreting Alpha-and Beta-Predictability According to the different implications of H 3 , three variable types may affect R 2 i,α,t and R 2 i,β,t . First, R 2 i,α,t is supposed to increase during periods of either elevated market frictions, or of irrational exuberance. Conversely, following Henkel et al. (2011), R 2 i,β,t should especially be high during economic downturns. 
Thus, let j ∈ {α, β}, we look at regressions of the form: R 2 i,j,t = c j + γ ′ IE,j X IE,j,t + γ ′ F C,j X F C,t + γ ′ RA,j X RA,t + ϵ j,t (1.12) With X IE,t spotting periods of irrational exuberance (valuation ratios or speculative bubble indicators), X F C,t indicating financial constraints which prevent arbitrageurs from exploiting potential mispricings (stock return volatility, financial intermediary leverage) and X RA,t following closely the business cycles (unemployment level). H 3 has several implications for the signs of the different coefficients. If we assume that increases in X IE,t , X F C,t and X RA,t reflect an increase in market effervescence, an aggravation of financial constraints and a strengthening of economic activity, respectively, we would expect, in line with Section 1.3, that γ IE,α > 0, γ F C,α > 0 and that γ RA,β < 0. Furthemore, we would also expect that a tightening of financial conditions leaves beta-predictability unaffected as the latter shouldn't be influenced by market frictions. Eventually, we remain agnostic regarding the link between economic expansions and alpha-predictability. Alpha-predictability can either be positively influenced by the former (if an improvement in macroeconomic conditions triggers investor's excessive enthusiasm) or negatively (if noise traders are especially present during recessions, [START_REF] Veldkamp | Slow boom, sudden crash[END_REF]. Therefore, we expect γ F C,β to be non-significant while we do not form any expectation regarding the sign of γ RA,α . To test these implications on the US stock market, we first use for X IE,t two different valuation ratios: the Excess CAPE yield, already described in Section 1.5. The regression results are presented in Tables 1.1 and 1.2. For the alpha-predictability, we notice in Table 1.1 that whatever the proxy for X IE,t , the associated coefficient γ IE,α is significantly positive in the nine specifications outlined here. This finding suggests that alpha-predictability is particularly at play in times of elevated market effervescence. As for the business cycles variables, we observe that the corresponding slopes γ RA,α are always significant and positive. This last result indicates that alpha-predictability tends to be especially high in times of bullish stock market combined with sound macroeconomic conditions. Conversely, the mechanism outlined by [START_REF] Veldkamp | Slow boom, sudden crash[END_REF] does not seem to play any role here. Eventually, for all the different regressions, the coefficients γ F C,α are either non-significant or (significantly) positive. Thus, although financial constraints' coefficients have most of the time the expected sign, these variables appear to have only a secondary importance in the drivers of alpha-predictability. On the table are represented the different regression results with R 2 i,α,t as a predicted variable. t-statistics have been computed using Newey-West standard errors. Variables are rearranged so that an increase in X IE,t , X F C,t and X RA,t reflects, respectively, a surge in market effervescence, an aggravation of financial constraints and a strengthening of economic activity. Regarding the beta-predictability, we remark in Table 1.2 that, as expected, a decrease in economic activity is related to an increase in beta-predictability (γ RA,β < 0) for all nine regressions, in line with Henkel et al. (2011). 
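For reference, the regressions of equation (1.12) behind Tables 1.1 and 1.2 can be estimated with Newey-West standard errors along the following lines; the variable names are illustrative and the snippet is a sketch rather than the exact code used for the tables.

import pandas as pd
import statsmodels.api as sm

def predictability_regression(r2_series, X_IE, X_FC, X_RA, maxlags=12):
    """Eq. (1.12): regress an alpha- or beta-predictability series on proxies of
    market exuberance (X_IE), financial constraints (X_FC) and economic activity
    (X_RA), with Newey-West (HAC) standard errors."""
    X = sm.add_constant(pd.concat([X_IE, X_FC, X_RA], axis=1))
    model = sm.OLS(r2_series, X, missing="drop")
    return model.fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})

# Example with hypothetical column names:
# res = predictability_regression(df["r2_alpha"], df[["neg_ecy"]],
#                                 df[["volatility"]], df[["employment"]])
# print(res.summary())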
Similarly, beta-predictability seems to coincide with bearish financial markets, as the coefficients γ IE,β are significantly negative irrespective of the cho-sen metric. Eventually, again as expected, financial constraints do not seem to play a role in determining the level of beta-predictability, as coefficients γ F C,β are non-significant across all specifications of Table 1.2. Table (1.2) Regression results for the Beta-Predictability Dependent variable: Beta-predictability: R 2 i,β,t (1) (2) (3) (4) (5) (6) (7) (8) (9) -ecyt -0.582 The results of Tables 1.1 and 1.2 bring new additional evidence in favor of H 3 : all the different coefficients exhibited the expected signs according to this hypothesis. The findings highlighted in this section as well as in Section 1.5.1 corroborate the two main ideas of this paper: First, 1.6. ROBUSTNESS CHECKS that there is indeed a diversification effect of efficient and inefficient individual factors when we compare micro-returns to macro-returns. Second, that regarding more specifically the drivers of macro-predictability, both types of return predictability, alpha-and beta-predictability, seem at play at the same time in our dataset. This last finding contrasts with the return predictability literature, where previous studies tended to oppose these two mechanisms. Robustness checks We provide here different robustness checks for the results outlined in Section 1.5.2. First, to build the alpha-and the beta-predictability indices, we relied on the 3 factor-model of [START_REF] Fama | Common risk factors in the returns on stocks and bonds[END_REF] as proxies for the risk factors f t+1 , namely the excess return on the market, the size factor and the value factor. On Figures 1.A.7 and 1.A.8 of Appendix 1.A.7, we also plotted the resulting R 2 i,α,t and R 2 i,β,t whether we rely on the 1-factor (in green), the 3-factor (in red) or the 5-factor (in blue) Fama-French models13 . We notice on both figures that, despite some discrepancies for the 1-factor model indices, the different metrics behave in a very similar way. These similarities are noticeable whether we look at the pooled series (R Conclusion Based on US postwar data, we manage in this paper to discriminate between three opposite hypotheses regarding the behaviours of micro-and macro-stock return predictability. Overall, by looking at raw predictability metrics, we find that our results are consistent with a model (H 3 ) that lies in-between Samuelson's and Cochrane's views (H 1 and H 2 ). Indeed, micropredictability series do not appear to be structurally higher or lower than macro-predictability indices, but tend to "bounce" around the latter. Furthermore, pooling micro-predictability series across portfolios yields an index that is significantly more correlated with the macro-predictability metric. All these observations corroborate an hypothesis where individual returns are mostly affected by idiosyncratic efficient and inefficient components, but also by common factors. If the former are diversified away at the index-level, we should indeed observe more variability in micro-predictability series, but also an averaged micro-predictability index that mimic the macro-predictability series. Additionally, by extending over time the framework by Rapach et al. (2011), we are able to disentangle the two sources of return predictability, the alpha-and the beta-predictability. Here again, our results underpin an intermediate view where return predictability is both affected by these two mechanisms. 
As a matter of fact, our two estimated indices match the expected theoretical patterns: alpha-predictability rises in period of market effervescence whereas betapredictability increases during downturns. This last finding enables to reconcile two opposite blocks of the literature: whereas previous papers tend to stress a specific source of predictability (Farmer et al., 2022, Dangl and[START_REF] Dangl | Predictive regressions with time-varying coefficients[END_REF], our results suggest that the two phenomenons are at play in our sample. Eventually, we argue that our estimated alpha-predictability index (R 2 i,α,t ) constitutes a theo-Model 1 Simple Exponential Smoothing Smooth Transition Autoregressive Model 1 p t+1 = αp t + (1 -α)r t With p 1 = r 1 Timmermann Model 2 Double Exponential Smoothing p t+1 = α(p t + λ t-1 ) + (1 -α)r t α t = β(p t+1 -p t ) + (1 -β)λ t-1 With p 1 = 0, f 2 = r t+1 = θ ′ 0 η t d t + θ ′ 1 η t + u t+1 d t = 1/(1 + exp(γ 0 + γ 1 (r t -r t-6 )) With η t = (1, r t ) ′ Timmermann Model 6 Smooth Transition Autoregressive Model 2 r t+1 = θ ′ 0 η t d t + θ ′ 1 η t + u t+1 d t = 1/(1 + exp(γ 0 + γ 1 r t-3 ) With η t = (1, r t ) ′ Timmermann Model 7 Neural net model 1 r t+1 = θ 0 + ∑ n i=1 θ i g(β ′ i η t ) + u t+1 With g the logistic function, η t = (1, r t , r t-1 , r t-2 ) ′ and n = 2 Timmermann Table (1.A.1) Estimated Models Name Model description References Model Neural net model 2 r t+1 = θ 0 + ∑ n1 i=1 θ i g( ∑ n2 j=1 β j g(α ′ j η t )) + u t+1 With g the logistic function, η t = (1, r t , r t-1 , r t-2 ) ′ , n 1 = 2 and n 2 = 1 Timmermann Model to Model Univariate regressions r t+1 = θ 0 + θ 1 x t + u t+1 With x t (univariate) exogenous regressors from the list detailed in Table 1.A.2 [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF] Model "Kitchen sink" regression r t+1 = θ 0 + θ ′ 1 X t + u t+1 With X t the exogenous regressors from the list detailed in Table 1.A.2 [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF] Model "Model selection" from Goyal and [START_REF] Welch | A comprehensive look at the empirical performance of equity premium prediction[END_REF] With all the potential combinations X i,t from the list detailed in Table 1.A.2, we evaluate: r t+1 = θ i,0 + θ ′ i,1 X i,t + u i,t+1 At each point in time, we choose the model with the smallest out-of-sample R 2 Welch and Goyal (2008) Model Factor model from Kelly and Pruitt (2013) Only for aggregate return predictions With bm it the book-to-market ratio of portfolio i and F t the estimated factor, we run the following three regressions: bm i,t = θ i,0 + θ i,1 r t+1 + e i,t (time series) bm i,t = c t + F t θi,1 + u i,t (cross section) r t+1 = γ 1 + γ 2 Ft + ϵ i,t+1 (time series) Kelly and Pruitt (2013) Model Forecast averaging -equally weighted Let p j,t+1 the forecasts from the J precedent models, we use a simple equally-weighted forecast averaging of the form: stands below the first quartile of R 2 i,os,t , eventually that pooling the different series R 2 i,os,t into R 2 i,os,t sharply increases the correlation with R 2 os,t .The metric used is the out-of-sample R 2 , also detailed in Section 1.4.2, that can take negative values. p t+1 = ∑ J j=1 p j,t+1 1.A.5 Standard errors, mean and standard deviations of raw predictability series To assess the difference in means and standard deviations of R 2 os,t with respect to R 2 i,os,t , we fit an ARMA(1,1) on each series. 
More precisely, with Y t being either R 2 os,t or R 2 i,os,t , we estimate: Y t = c + γY t-1 + θϵ t-1 + ϵ t and E(ϵ 2 t ) = σ 2 ϵ (1.13) For each series, we then compute their estimated unconditional means m as: m = ĉ 1 - γ (1.14) And their variances σ 2 as: σ2 = (1 + 2γ θ + γ2 ) σϵ 2 1 -γ2 (1.15) Standard errors for these two estimates are obtained with 500 bootstrap simulations. Mean and standard deviation for R 2 os,t are depicted in red in Figure 1.A.3, and in blue for the 25 R 2 i,os,t . Black error bands figure +/-1 standard error confidence intervals along the estimates. We thus notice on Figure 1.A.3, in line with Section 1.5.1, that, although the means of R i,os,t is 0.06 whereas the standard deviation of R 2 os,t is 0.04. The correlation of R 2 i,os,t with R 2 os,t is 0.3 whereas the average correlation of R 2 i,os,t with R 2 os,t is 0.09. Figure (1.A.5) Raw Predictability levels vs. Returns standard deviations On the scatter plot are represented, for r i,t+1 (in blue) and r t+1 (in red), the standard deviations of the returns series on the x-axis, and the mean of their raw predictability, R 2 i,os,t or R 2 os,t , on the y-axis. The metric used for the y-axis is the out-of-sample R 2 , detailed in Section 1.4.2, that can take negative values. Figure (1.A.6) Raw Predictability standard deviations vs. Returns standard deviations On the scatter plot are represented, for r i,t+1 (in blue) and r t+1 (in red), the standard deviations of the returns series on the x-axis, and the standard deviations of their raw predictability, R 2 i,os,t or R 2 os,t , on the y-axis. The metric used for the y-axis is the out-of-sample R 2 , detailed in Section 1.4.2, that can take negative values. On the graph are represented the average across portfolios of the beta-predictability series (R 2 i,α,t ) computed using the 1-factor (in green), the 3-factor (in red) or the 5-factor (in blue) Fama-French models. The coloured areas figure the corresponding cross-sectional dispersion around the different R 2 i,β,t (+/-0.5 standard deviation). The metric used is the out-of-sample beta-predictability R 2 i,β,t , detailed in Section 1.4.3, that can take negative values. The grey vertical bands figure the NBER US recession dates. 1.A.7 Robustness checks: alternative risk factors 1.A.8 Robustness checks: regression results Abstract Stock prices declined abruptly in the wake of the Covid-19, reflecting both the deterioration of investors' expectations of economic activity as well as the surge in risk aversion. In the following months however, economic activity remained sluggish while equity markets bounced back. This disconnect between equity values and macro-variables can be partially explained by other factors, namely the decline in risk-free interest rates, and -for the US-the strong profitability of the IT sector. As a result, an econometrician forecasting economic activity with aggregate stock market variables during the Covid-crisis is likely to get poor results. Our main contribution is thus to rely on sectorally disaggregated equity variables within a factor model in order to predict US economic activity. We find, first, that the factor model better predicts future economic activity compared to aggregate equity variables, or to conventional benchmarks used in the literature, both in-sample and out-of-sample. 
Second, we show that the strong performance of the factor model comes from the fact that it filters out the "expected returns" component of the sectoral equity variables as well as the foreign component of aggregate future cash flows. The constructed factor overweights upstream and "value" sectors that are found to be closely linked to the future state of the business cycle. A simple, but incomplete, explanation is that stock prices reflect not only expected future cash flows and investors' risk aversion, but also the level of risk-free interest rates. Focusing on the American example, US 10-year sovereign rates declined from March to August 2020 and can therefore explain part of the equity rebound (Chatelais and Stalla-Bourdillon, 2020). We think that this seeming disconnect between finance and the real economy can be more fully reconciled with the data by recognizing that relying on a given aggregate stock price index discards a lot of information that might be of particular importance, especially during business cycle turning points. Specifically, US aggregate stock indices can be influenced by other forces that do not entirely reflect the state of the US economy. For example, the S&P 500 was driven up in 2020 by IT sector companies whose valuations either largely depend on foreign activity or are orthogonal to US economic performance, as their profitability benefited tremendously from Covid-19 lockdown policies. As a result, an econometrician trying to forecast economic activity with aggregate stock variables during the Covid-crisis is likely to get poor results. In this paper, we address this problem by building a factor model constructed using sectorally disaggregated equity variables to predict future US economic activity. Hence, this study constitutes one of the rare instances where stock market variables specifically are used to perform macroeconomic forecasting. Furthermore, this study adds to a surprisingly small literature relying on sectoral equity variables. To our knowledge, this paper is the first to use factor models to extract the predictive content from disaggregated sectoral stock prices. Even papers employing factor models based on large sets of variables seldom go beyond using aggregate stock indices (Barhoumi et al., 2010, Jardet and[START_REF] Jardet | Nowcasting world gdp growth with high-frequency data[END_REF]). We obtain three main results, relating to forecasting performance and to the sectoral sources of forecasting power for business cycle activity. First, we find that a factor based on sectoral dividend yields (DYs) better predicts industrial production (IP) growth, as compared to the same variable measured as an aggregate. That factor model also typically outperforms conventional benchmark models, such as the term spread or the lagged IP growth, particularly during times of negative IP growth. In our baseline specification, we forecast future IP growth over a 12-month horizon, but these results hold at the 18-month and the 24-month horizons, both in-sample and out-of-sample. We also find that our factor model helps to improve the forecasting accuracy of a widely used factor model à la [START_REF] Stock | Macroeconomic forecasting using diffusion indexes[END_REF] that relies on a vast number of macro-financial variables (but not on sectoral equity indices). Interestingly, our finding generalizes to a number of other countries3 .
Second, relying upon the present value formula of Campbell and Shiller (1988), we find that our model improves forecasting accuracy because it filters out the expected returns/discount rate component of the sectoral equity variables, as well as the foreign component of aggregate future cash flows. We attribute the elevated outperformance of our factor model, especially during periods of negative IP growth such as during the Covid pandemic or during the Global Financial Crisis, to this filtering out of extraneous information. As expected returns are more volatile in recessionary states (Henkel et al., 2011) they tend to particularly affect the forecasting accuracy of the aggregate DY during these periods, but not of our factor model. Third, we are able to identify the specific sectors that provide additional forecasting power. Specifically, we find that our factor model overweights upstream sectors (primary industry and other industrial inputs) and "value" sectors, as the latter are found to be closely linked to the US business cycle [START_REF] Zhang | The value premium[END_REF][START_REF] Koijen | The cross-section and time series of stock and bond returns[END_REF][START_REF] Xu | Essays on the value effect in the time series and cross section of stock returns[END_REF]. For those who are particularly concerned about the trajectory of economic activity during economic downturn, our forecasting model should be of special interest, given the economically and statistically significant outperformance relative to conventional benchmarks. The identification of key sectoral indicators also provides an appealing economic intuition for our findings. In the following section, we present the basic theory placed in the context of the literature. In Section 2.3 we present the empirical model and detail the data used in the analysis. Section 2.4 provides a set of in-sample results, and Section 2.5 a corresponding set of out-of-sample results. We draw out the economic implications of those results in Section 2.6. Concluding remarks are contained in Section 2.7. Background Theoretical Framework When using aggregate financial measures to predict economic activity, one wants the factors influencing the financial variables to correspond to the appropriate macroeconomic variable. Since our objective is to forecast US economic activity, we want our financial predictor to reflect solely US activity. In order to extract the US component, we rely upon the present value formula of Campbell and Shiller (1988), a decomposition that has been widely used to model equity returns (see Campbell and Ammer, 1993, Vuolteenaho, 2002and Binsbergen and Koijen, 2010). More precisely, DYs (x t ) can be decomposed into two factors: expected returns (or discount rates) and expected cash flows growth likewise: x t = κ (1 -ρ) + ∑ j=1 ρ j-1 E t [r t+j -∆cf t+j ] (2.1) Where E t [r t+j ] represents expected returns and E t [∆cf t+j ] expected cash flows (κ and ρ are constant parameters). One could also decompose the cash flow component into two sub-components: one depending on the domestic activity of the firm, E t [∆cf D,t+j ], and the other one stemming from its foreign activity, E t [∆cf F,t+j ], such that we would get: x t = κ (1 -ρ) + ∑ j=1 ρ j-1 E t [r t+j -∆cf D,t+j -∆cf F,t+j ] (2.2) Note eventually that a similar decomposition can be applied to other equity variables, such as price-earnings or book-to-market ratios. 
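The mechanics of equation (2.1) can be checked numerically: on simulated prices and dividends, the log dividend yield is well approximated by the discounted sum of future returns minus future cash-flow growth. The snippet below is purely illustrative, is not part of the empirical work of this chapter, and uses arbitrary parameter values; the sign convention for the constant follows the usual log-linearisation.

import numpy as np

rng = np.random.default_rng(0)
T, J = 3000, 600                      # sample size, truncation of the infinite sum

# Simulated log dividends (random walk with drift) and log dividend yield (AR(1))
dd = 0.002 + 0.02 * rng.standard_normal(T)          # log dividend growth
d = np.cumsum(dd)
dp = np.empty(T)
dp[0] = -3.5
for t in range(1, T):
    dp[t] = -3.5 * (1 - 0.96) + 0.96 * dp[t - 1] + 0.05 * rng.standard_normal()
p = d - dp                                           # log price

# Exact log returns implied by prices and dividends
r = np.log(np.exp(p[1:]) + np.exp(d[1:])) - p[:-1]

# Campbell-Shiller linearisation constants around the sample mean of dp
rho = 1.0 / (1.0 + np.exp(dp.mean()))
kappa = -np.log(rho) - (1.0 - rho) * np.log(1.0 / rho - 1.0)

# Rebuild dp_t from the truncated present-value sum of future returns minus
# future dividend growth (ex post counterpart of eq. 2.1)
w = rho ** np.arange(J)
pv = np.array([-kappa / (1 - rho) + np.sum(w * (r[t:t + J] - dd[t + 1:t + 1 + J]))
               for t in range(T - J - 1)])

print(np.corrcoef(pv, dp[:T - J - 1])[0, 1])   # close to 1: the dividend yield is
print(np.abs(pv - dp[:T - J - 1]).max())        # approximately the discounted sum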
In order to forecast future aggregate returns, Kelly and Pruitt (2013) underline that the usual predictive regressions of aggregate future returns and aggregate dividend growth on aggregate DY: r t+h = α 1 + β 1 x t + u 1,t+h (2.3 ) ∆cf t+h = α 2 + β 2 x t + u 2,t+h (2.4) are misspecified, since the DY both reflects expected returns and expected cash flows, while they would like this variable only to reflect the former (when predicting aggregate returns), or the latter (when predicting aggregate dividend growth). Relying on disaggregated book-to-market ratios, which can also be decomposed with the Campbell and Shiller (1988) formula, Kelly and Pruitt ( 2013) estimate a factor model via Partial Least Squares on that appears to predict accurately future aggregate returns and future aggregate dividends. They explain the improved accuracy by the fact that the factor model, by overweighting or underweighting certain sectoral book-to-markets, filters out the expected cash flow component while predicting future aggregate returns (and vice versa when predicting future aggregate dividends). In an approach similar to theirs, we implement the same filtering to extract a factor to predict future economic activity. In our case we want the factor model to not only filter out the expected returns component, but also the foreign cash flow component. Implicitly, we assume that the domestic cash flow component represents a good proxy for domestic US economic activity. We also assume that this filtering is possible because sectoral DYs are informative about future aggregate cash flows. We return to this point more formally in Section 2.A.1 of the Appendix. Selected Literature Review There are three strands of the literature relevant to our contribution. The first is the literature using stock prices to predict economic activity. The second is the use of factor modeling for forecasting purposes. The third focuses on how expectations regarding future economic activity affect the cross section of returns. Turning to the first strand, the theoretical arguments underlining the predictive power of stock prices are twofold (Croux and Reusens, 2013). On one hand equity prices are inherently forward looking and should therefore reflect investors expectations of future economic activity. On the other hand, stock prices can have a causal effect on the business cycle: if stock prices go up, households should consume more through the induced wealth effect. Hence, stock prices should lead aggregate activity. Consequently, various papers try to predict future GDP or industrial production with equity variables, typically with aggregate stock indices (Binswanger, 2000, Ólan Henry et al., 2004, Croux and Reusens, 2013, McMillan, 2021, Chen and Ranciere, 2019, Lan et al., 2020). Some papers, however, rely on disaggregated stock price data and can be further divided into two subcategories. In the first subcategory are papers that first build an aggregate variable from sectoral equity data and then forecast future activity with the former. [START_REF] Loungani | Stock market dispersion and unemployment[END_REF] for example use industry-level equity prices to build a metric of price dispersion. They reason that if stock prices are increasing in some industries but declining in others, in subsequent years capital and labor will have to be reallocated from the contracting industries to the expanding ones, which will be costly in the aggregate. 
[START_REF] Liew | Can book-to-market, size and momentum be risk factors that predict economic growth[END_REF] rely on the Fama-French factors, built from disaggregated portfolio returns, to forecast future GDP. Their rationale is that, before a recession, investors should be able to anticipate that small stocks and value stocks will perform badly. Indeed, small-sized firms and value companies, i.e. firms with low price-earnings ratios and typically elevated fixed capital as in the automobile industry, are usually deemed as less resilient to strong negative shocks [START_REF] Zhang | The value premium[END_REF][START_REF] Xu | Essays on the value effect in the time series and cross section of stock returns[END_REF]. As a result, small minus big (SMB) returns and high minus low (HML) book-to-market returns should decrease ahead of recessions. In the second subcategory are other papers that directly use the sectoral equity variables in their estimation, most of the time by evaluating the predictive power of specific sector variables in isolation from the other [START_REF] Browne | Do equity index industry groups improve forecasts of inflation and production? a us analysis[END_REF], Andersson and Agostino, 2008[START_REF] Zalgiryte | Stock market and economic growth in the u.s. france: Evidence from stock market sector indices[END_REF]. We depart from the approach adopted in these papers first by estimating a factor model based on sectoral equity variables. We therefore make use of the entire cross section of stock market variables at the same time (in contrast to [START_REF] Browne | Do equity index industry groups improve forecasts of inflation and production? a us analysis[END_REF], Andersson and Agostino, 2008[START_REF] Zalgiryte | Stock market and economic growth in the u.s. france: Evidence from stock market sector indices[END_REF]. Moreover, we do not constrain the predictive content of disaggregated stock variables into a specific aggregate predictor, like the dispersion of stock prices or the Fama-French factors. Second, in contrast to all the papers cited above, we also investigate the over-and under-weights of the different sectors in our factor model. In the end, our approach comes closest to two papers that also rely on the Kelly and Pruitt (2013) factor model to predict macroeconomic activity on the basis of equity variables. However, unlike our approach, they either use aggregate and not sectoral -indices to build their factor, i.e., the number of IPOs or the share turnover in the US (Huang et al., 2015), or they only perform their analysis in-sample and do not analyze what is filtered out in their factor modelling [START_REF] Jagannathan | Price-dividend ratio factor proxies for long-run risks[END_REF]. Second, we also contribute to the literature on factor modelling that does not specifically focus on the predictive content of equity variables. 
Surprisingly enough, whereas disaggregated equity data is easily available and is accessible without lags, to our knowledge the literature on factor models for forecasting exercises rarely relies on sectoral stock data, even when using large datasets [START_REF] Bessec | Prévision à court terme de la croissance du pib français à laide de modèles à facteurs dynamiques[END_REF][START_REF] Hepenstrick | Forecasting with large unbalanced datasets: The mixed-frequency three-pass regression filter[END_REF][START_REF] Fan | Sufficient forecasting using factor models[END_REF][START_REF] Ferrara | Nowcasting global economic growth: A factor-augmented mixed-frequency approach[END_REF][START_REF] Jardet | Nowcasting world gdp growth with high-frequency data[END_REF] or when using other types of sectoral variables, like surveys [START_REF] Barhoumi | Are disaggregate data useful for factor analysis in forecasting french gdp?[END_REF]. Finally, we also contribute to the financial literature that takes perspective inverse of the standard, by evaluating how future economic activity affect cross-sectional stock returns [START_REF] Koijen | The cross-section and time series of stock and bond returns[END_REF][START_REF] Zhu | The role of future economic conditions in the crosssection of stock returns: Evidence from the us and uk[END_REF]. By analyzing how the factor model over/underweights certain equity sectors we shed a new light on the pro-and counter-cyclicality of specific portfolios. Model Specification and Data A Factor Model We follow Kelly and Pruitt (2013), who utilize the Partial Least Square (PLS) methodology estimated using disaggregated equity variables. The approach resembles Principal Components Analysis (PCA), but instead of reducing the dimensionality according to the covariance of the sectoral variables between themselves, we implement the reduction according to the covariance between the predicted variable and the sectoral variables. Starting with y t+h the predicted variable (in our case, the growth rate of Industrial Production) and x it the different sectoral equity variables (here the sectoral DYs), the PLS is estimated in three steps. First, for each sector i, a univariate time series regression is estimated: x it = ϕ i0 + ϕ i y t+h + e it (2.5) Second, for each time period t, the sectoral DYs x it are regressed on the coefficients φi estimated above. Note that this regression is a cross-sectional one, and that the estimated coefficient will be the value of the factor F t at time t: y t+h = β 0 + β 1 Ft + u t+h (2.6) Finally, we use the estimated factor in a (time series) predictive regression: y t+h = β 0 + β 1 Ft + u t+h (2.7) The estimated factor Ft can be seen as a weighted sum of the different x it since: φi = ∑ t (x it -xi )(y t+h -ȳ) ∑ t (y t+h -ȳ) 2 (2.8) With xi = 1 T ∑ t x it and ȳ = 1 T ∑ t y t+h . And since: F t = ∑ i (x it -xi )(ϕ i -φ) ∑ i (ϕ i -φ) 2 (2.9) With φ = 1 I ∑ i ϕ i and xi = 1 I ∑ i x it . We can therefore write: Ft = 1 C ∑ i x i t(ϕ i -φ) (2.10) With C = ∑ i (ϕ i -φ) 2 . In other words, the more x i t is correlated with y t+h the more it will influence Ft through the coefficients (ϕ i -φ). Data Throughout the paper we focus mainly on the United States. In our main specification, we predict future Industrial Production growth. Depending on the forecast horizon h, and with IP t the Industrial Production index, we forecast at time t the variable: The other macroeconomic and financial data are from sources detailed in Table 2.A.3 of the Appendix. 
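As a complement to the description above, the three passes reduce to a few lines of code. The sketch below, with hypothetical inputs and not the implementation used for the results, mirrors equations (2.5) to (2.10); demeaning x_it in the second pass leaves the factor unchanged since the centred loadings sum to zero.

import numpy as np

def pls_factor(y_fwd, X):
    """Three-pass PLS sketch.
    y_fwd : array (T,)   future IP growth y_{t+h}, aligned with the rows of X
    X     : array (T, I) sectoral dividend yields x_{it}
    Returns the estimated factor F_hat and the forecasting coefficients."""
    T, I = X.shape

    # Pass 1: time-series regression of each sectoral DY on y_{t+h}  (eq. 2.5, 2.8)
    phi = np.empty(I)
    for i in range(I):
        phi[i] = np.polyfit(y_fwd, X[:, i], 1)[0]

    # Pass 2: for each t, cross-sectional regression of x_{it} on phi_i;
    # the slope is F_hat_t  (eq. 2.9, 2.10)
    phi_c = phi - phi.mean()
    F_hat = (X - X.mean(axis=1, keepdims=True)) @ phi_c / (phi_c @ phi_c)

    # Pass 3: predictive time-series regression y_{t+h} = b0 + b1 F_hat_t  (eq. 2.7)
    b1, b0 = np.polyfit(F_hat, y_fwd, 1)
    return F_hat, b0, b1

The forecast of y_{t+h} is then b0 + b1 * F_hat evaluated at the end of the estimation window.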
The data is at a monthly frequency, spanning the period from 02-1973 to 05-2021. y t+h = IP t+h IP t -1 ( We define the term spread as the spread between the Treasury 10 year and 3 month yields, in line with Chinn and Kucko (2015). In-Sample Results In order to determine whether our disaggregated equity variable based factor model exhibits greater predictive power than models based on aggregate DY, or conventional benchmark models, we conduct both in-sample and out-of-sample analyses. In this section, we present the former set of results, reserving the latter for Section 2.4. To summarize the prediction results, in Several findings are readily apparent. First, irrespective of the horizon, the factor model constantly beats the conventional benchmarks, that is the lagged IP growth or the term spread, although the term spread appears as the second best performing model. Second, the factor model outperforms the simple predictive regression based on aggregate equity data (here the aggregate DY), thus highlighting the additional accuracy that can be gained from working with sectoral stock market variables. For this last result, it should however be borne in mind that, in an in-sample setting, our factor model should in any case outperform the aggregate DY given that it overweights the sectoral DYs which are the most correlated with future IP growth. Focusing on the 12-month horizon, we show on Figure 2.A.1 of the Appendix that the same in-sample results hold when we look at alternate proxies of economic activity, although the outperformance with respect to the term spread appears more mixed. We considered manufacturing sales, the number of house permits delivered, the OECD indicator of monthly US GDP, the US unemployment rate or total nonfarm payroll employment. We perform a second simple in-sample evaluation by determining whether or not the estimated factor brings additional information as compared to our main benchmark (here the aggregate DY, x t ). To do so, we run the following predictive regression: y t+h = β 0 + β 1 x t + β 2 Ft + u t+h (2.12) And evaluate the significance of the coefficient β 2 . Table 2.4.1 below reports the results of these in-sample regressions at horizon 12, 18 and 24 months. To account for the serial correlation of the error terms, we conduct our statistical inference using Newey-West standard errors. Notice in Table 2.4.1 that the coefficient associated with the factors built on sectoral equity variables is significant for all different horizons. This result thus suggests that the factor model has forecasting value even with the inclusion of the aggregate DY in the regression. Out-of-Sample Results Out-of-Sample Performance We conduct an out-of-sample forecasting exercise in order to guard against overfitting. Following the same procedure outlined in Section 2.4, we set the rolling window used for estimation to 36 months (3 years). This means that for a 12-month horizon, the first observation to be predicted is January 1977. Our results are robust to consideration of shorter or longer rolling windows. Note that for the out-of-sample exercise, we closely follow the procedure described in Kelly and Pruitt (2013), so that, when predicting IP growth at time t + h based with variables at time t, all the regressions outlined in Section 2.5 are based on training samples that exclude observations posterior to time t. -rather than on aggregate -equity variables strongly improves the forecasting accuracy of our model. 
Again, this improvement is noticeable through all the different considered horizons. Regarding the relative performance of the other benchmarks, here also the factor model appears to outperform the term spread or the lagged IP growth. Finally, we run the same robustness check as in the in-sample exercise and assess the predictive accuracy of the different models for the other proxies of economic activity. As shown in Figure 2.A.2 in the Appendix, the factor model strongly improves our forecasting accuracy for virtually all the different predicted variables, sometimes decreasing the out-of-Sample RMSE by close to 20%, relative to the best performing benchmark. We further assess the outperformance of the factor model with respect to the different benchmarks by conducting Diebold-Mariano tests for statistical significance (Diebold andMariano, 2002, West, 1996). Table 2. the factor model performs worse than the corresponding benchmarks. Overall, in line with Figure 2.5.1 and at the notable exception of the term spread at the 12-month horizon, we find that the factor model improves significantly the prediction of future IP growth compared to the three different benchmarks, and at the three different horizons4 . We eventually run two out-of-sample exercises to underline the performance of our factor model. First, we evaluate the accuracy of our model compared to forecasting regressions using different metrics of market volatility. Either we rely only on the volatility variables alone in univariate regressions, or we augment the models with the term spread given that recent papers underlined that market volatility may prove useful to extract the forecasting signal out of the term spread (Kumar et al., 2022, Venditti andVeronese, 2020). differences in RMSE between these benchmarks and our factor model. As could be seen on the Table, it appears that our model significantly outperforms the aforementioned benchmarks, at various horizons and for different proxies of market volatility. Second, we vet whether our results remain robust for other advanced economies. To do so, we collect data for 5 additional countries: Canada, France, Germany, Switzerland and the United Kingdom. We report on Table 2.A.2 of the Appendix the differences in RMSE, for each country, between the same benchmark models5 as in Figure 2.5.1 and our factor model for a 12-month horizon forecasting exercise. As can be seen on the Table, on the 15 different specifications considered here, our factor model appears to outperform the benchmarks in 12 cases. For France and the United Kingdom our factor model exhibits a lower RMSE compared to a univariate regression based on the lagged IP growth, but the difference does not appear significant. Only with respect to French term spread does our factor model display a higher RMSE when it comes to forecasting IP growth. Comparison with traditional factor models In addition, we investigate whether our factor, based on sectoral equity variables, can be used to improve more conventional factor models that rely on macroeconomic variables and on aggregated financial indicators. 
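The pairwise comparisons mentioned above rely on the Diebold-Mariano statistic; a minimal version for squared-error loss, with a Newey-West estimate of the long-run variance of the loss differential, could look as follows (an illustrative sketch, not the exact implementation behind the tables).

import numpy as np
from scipy import stats

def diebold_mariano(e_bench, e_model, lag=12):
    """DM test of equal predictive accuracy under squared-error loss.
    e_bench, e_model: forecast errors of the benchmark and of the factor model.
    A positive statistic indicates that the factor model has the lower loss."""
    d = e_bench ** 2 - e_model ** 2               # loss differential
    n = len(d)
    d_bar = d.mean()
    # Newey-West (Bartlett kernel) long-run variance of d
    lrv = np.mean((d - d_bar) ** 2)
    for k in range(1, lag + 1):
        cov = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2 * (1 - k / (lag + 1)) * cov
    dm = d_bar / np.sqrt(lrv / n)
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))      # two-sided p-value
    return dm, pval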
Indeed, whereas sectoral equity variables are easily available and published without lags, they seem to be rarely used in the forecasting literature relying on large datasets [START_REF] Barhoumi | Are disaggregate data useful for factor analysis in forecasting french gdp?[END_REF][START_REF] Hepenstrick | Forecasting with large unbalanced datasets: The mixed-frequency three-pass regression filter[END_REF][START_REF] Jardet | Nowcasting world gdp growth with high-frequency data[END_REF]. To do so, we build a large dataset of 147 variables that includes aggregate macroeconomic indicators (CPI, unemployment rates), disaggregated macroeconomic variables (sectoral retail sales, sectoral industrial production indices) and aggregate financial indicators (exchange rates, interest rates and equity variables). A detailed list of the variables used is available in Table 2.A.3 of the Appendix. In the spirit of [START_REF] Stock | Macroeconomic forecasting using diffusion indexes[END_REF], we then extract factors H t from this dataset with a simple Principal Component Analysis6 . The question is then whether our factor, based on disaggregated equity variables, F t , helps to improve the out-of-sample forecasts made with PCA-factors H t , without the use of these precise variables. To do so, based on the same rolling window length, we compare the forecasts made by estimating a model relying on the PCA-factors: y t+h = β 0 + β ′ 1 H t + u t+h (2.13) And a model relying on the PCA-factors along with the lag of the predicted variable: y t+h = β 0 + β ′ 1 H t + β 2 y t + u t+h (2.14) With the same models augmented with our factor, that is: y t+h = β 0 + β ′ 1 H t + β 2 F t + u t+h (2.15) And: y t+h = β 0 + β ′ 1 H t + β 2 y t + +β 3 F t + u t+h (2.16) We are agnostic regarding the number of relevant PCA-factors and therefore include in our regressions 1 to 3 PCA-factors. Performance by Sample Period In the Introduction, we outlined that the gains of relying on sectoral rather than on aggregate equity variables may especially be strong in times of negative economic growth, such as during the pandemic. This may be the case if, for example, in these periods aggregate DY is driven mostly by sectors which are only loosely linked to the future economic activity, or if variations in aggregate DY reflect more changes in investors discount rates/expected returns rather than changes in earnings expectations. Although we return to more formally discuss these economic mechanisms in Section 2.6, in this section, we investigate whether the forecasting performance of our factor model differs between periods of contraction and of expansion. In Table 2.5.3, we define periods of contraction as months during which the annual IP growth is negative (and the reverse for periods of expansion). In line with Moench and Stein (2021), the Table reports the difference in RMSE between our factor model based on sectoral equity variables and the same univariate model benchmarks outlined in Section 2.5.1 (along with the p-values of Diebold Mariano tests). Note that we segment here our estimation according to the dates in which the forecasts are made. In other words, if we consider here a forecast horizon of 12 months, the "Negative IP growth" period refers to predictions made when the annual IP growth was negative (and not predictions made 12 months before the contraction in economic activity). 
Note that in Table 2.5.3, although our factor model outperforms other benchmarks both in periods of negative and positive IP growth, the gain in forecast accuracy of our factor model appears to be strongly concentrated in negative IP growth period. The difference between the two periods can be substantial: looking at the 12-month horizon for example, relying on our factor based on sectoral DYs rather than on the aggregate DY can yield a RMSE-gain close to 4 times higher in negative IP growth period than in positive growth period. One potential interpretation is that expected returns/discount rates are more volatile during recessions (Henkel et al., 2011), and can therefore blur the forecasting ability of the aggregate DY in those times. In contrast, as outlined in next section, given that our factor model filters out the expected returns component of sectoral DYs, it can yield strong forecasting accuracy gains in periods of contracting economic activity. As an example, in 2009, close to end of the Great Recession, aggregate DY was still very high, notably because investors risk aversion, and thus investors discount rates, were very high as well. As a result, the 12-month ahead IP growth forecast from the aggregate DY was still very pessimistic (-29.1% in May 2009 for the next year IP growth). On the reverse, the forecast from the factor model was much closer to the realized IP growth at the same time (+6.2% against a realized value, in May 2010, of +7.9%), probably because the forecasting ability of our factor model was not blurred by this elevated discount rate component. Economic Interpretation 2.6.1 Filtering the "return" and the "foreign cash flow" components In some ways, it should be unsurprising that predictions based on factors extracted from the cross section of sectoral portfolio variables should outperform predictions based on an aggregate variable, given that aggregate measures average out important information, and at the same time include information not directly relevant to the variable being forecasted. The question is whether one can estimate the factors with sufficient precision that one outperforms a simple model using an aggregate index. In our case, the economically important information gleaned using our approach yields a substantial gain in prediction. In this section, we further investigate how the results can be interpreted in economic terms. Kelly and Pruitt (2013) show that, while trying to predict future aggregate returns with disaggregated book-to-market ratios, their factor model puts positive weights on all sectoral book-to-market ratios, especially for "growth" portfolios (i.e. portfolios with low book-to-market ratios) which are known to be very much affected by future aggregate returns. However, some of these sectoral book-to-market ratios are positively correlated with future aggregate dividends, whereas others are negatively correlated with future aggregate dividends. Consequently, the factor, which is a weighted sum of the sectoral portfolios book-to-market ratios, will be very positively correlated with future aggregate returns but little exposed to future aggregate dividends. Similarly, when they try to forecast future aggregate dividends, they show that their factor is very positively correlated with future aggregate dividends but little exposed to future aggregate returns. In our analysis, we replicate this exercise to identify what is filtered out in our factor model based on disaggregated DYs. 
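One way to make this replication concrete is to compute the sector weights and the implied factor directly. The sketch below assumes a Kelly-Pruitt-style first pass in which each sectoral DY is regressed on the forecast target, and then builds the factor as a cross-sectional average of the DYs weighted by the centered loadings; the exact passes, scaling constant and standardisation used in the chapter may differ.

```python
import numpy as np

def first_pass_weights(X, y_lead):
    """First-pass loadings: regress each sectoral DY on the forecast target.

    X      : (T, N) matrix of sectoral dividend yields x_it.
    y_lead : (T,) future IP growth y_{t+h}, aligned with X.
    Returns one slope coefficient phi_i per sector.
    """
    y_c = y_lead - y_lead.mean()
    X_c = X - X.mean(axis=0)
    return (X_c * y_c[:, None]).sum(axis=0) / (y_c**2).sum()

def factor_from_weights(X, phi):
    """F_t = (1/C) * sum_i x_it * (phi_i - mean(phi)); here C is taken to be the number of sectors."""
    w = phi - phi.mean()
    return X @ w / X.shape[1]
```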
To show how we do this, we display in Figure 2.6.1 three variables. In red are represented, for each sector, the weights (φ̂i - φ̄) that correspond to the relative importance of each sector in the factor estimation, as outlined in Section 2.3.1.7 In blue are represented the correlations of each sectoral DY with the predicted variable (IP growth, y_t+h), that is corr(y_t+h, x_it). Displayed in purple are the correlations of each sectoral DY with the aggregate equity returns compounded over the forecasting horizon, r_t+h, that is corr(r_t+h, x_it). As in Kelly and Pruitt (2013) and throughout Section 2.6, we perform this analysis by examining in-sample estimates of the weights (φ̂i - φ̄), while the different correlations are computed on the overall sample. We consider here, and for the remainder of Section 2.6, a forecasting exercise over a 12-month horizon. Finally, for visual purposes, we normalized the sector weights so that their cross-sectional standard deviation equals the standard deviation of the correlations between sectoral DYs and future IP growth.

7 Unlike Kelly and Pruitt (2013), for this analysis we rely on the centered weights (φ̂i - φ̄), whereas they rely on the uncentered weights φ̂i. Our approach seems more appropriate to us, given that the relationship between the sectoral DYs and the estimated factor is given precisely by the centered weights: F̂_t = (1/C) Σ_i x_it (φ̂i - φ̄).

This filtering can also be seen by plotting our factor, estimated in-sample, over time. We therefore depict in Figure 2.A.3 in the Appendix our factor along with the aggregate market DY and IP growth led by 12 months. The figure shows that, during the 1990s, our factor tracks future IP growth relatively well. By contrast, the (opposite of the) aggregate DY exhibits an upward trend over the period, probably linked to the fact that, amidst the so-called "irrational exuberance" (Shiller, 2015) of the dotcom bubble, investors were requiring very low discount rates, which tended to push stock prices up significantly. As our factor model purges the discount rate/expected return component of the aggregate DY, it is less affected by this trend, and therefore captures movements in future IP growth more accurately.

Additionally, we want our factor model not only to filter out the "expected returns" component of the sectoral DYs, but also to filter out the "foreign cash flow" component. In other words, relying on the notations of Section 2.2.1, we would like corr(F̂_t, ∆cf_D,t+j) to be high and corr(F̂_t, r_t+j) and corr(F̂_t, ∆cf_F,t+j) to be low. However, whereas we can directly observe the levels of future aggregate returns, we need to rely on a proxy to assess the correlation between our estimated factor and the aggregate foreign cash flow component. Since the latter theoretically represents the component of the sectoral DYs that reflects the foreign profitability of US firms, we rely on the foreign industrial production indices of Grossman et al. The index that we consider here, IP_F,t, corresponds to the level of industrial activity of advanced economies, excluding the US. Note that US IP and IP_F,t are of course strongly correlated.
Therefore, a direct assessment of whether the factor model adequately filters out the future foreign activity component of sectoral DYs, using IP_F,t, is likely to give biased results, precisely because the estimated factor is itself positively correlated with US IP growth. Rather, we would like our factor model to filter out the part of foreign activity that is orthogonal to US economic activity. To do so we first regress foreign IP growth (IP_F,t) on US IP growth (y_t):

IP_F,t = α + β y_t + u_t   (2.17)

and rely on the estimated error terms (û_t) to conduct our analysis. For comparison, we also consider the average sectoral correlations (1/I) Σ_i corr(x_it, y_t+h) and (1/I) Σ_i corr(x_it, r_t+h). In line with this comparison, by overweighting certain sectors the factor model increases the correlation with our predicted variable while filtering out the noisy components of the sectoral DYs.

Sector overweighting

We investigate the economics of our factor model's outperformance further by identifying more precisely which sectors are overweighted in this exercise. To do so, in Figure 2.6.3, we depict the absolute weights |φ̂i - φ̄| to understand which sectoral DYs affect the estimated factor the most. Here too, we conduct this analysis on an in-sample basis, with a forecast horizon of 12 months. Among the sectors receiving large weights are those related to real estate and property services, probably due to the strong link between property price dynamics and the business cycle (Leamer; Borio et al.). We further investigate which sectors have the most importance in our factor model by testing two additional hypotheses:

• Are "value" sectors, i.e. sectors that are little valued by equity investors and therefore exhibit low Price-Earnings Ratios (PER), overweighted compared to "growth" sectors, which, in contrast, have elevated PERs? Value sector equities, like the automobile sector, are sometimes deemed to be more closely linked to the future business cycle, as investors may estimate that these firms are less able to downsize their activity in case of an incoming recession (Koijen et al.; Xu).

• To what extent does our factor model overweight sectors whose DYs are correlated with future domestic IP growth compared to sectors with high exposure to foreign economic activity?

To do so, we estimate the following cross-sectional regression:

|φ̂i - φ̄| = α + β_1 |corr(y_t+h, x_it)| + β_2 PER_i + β_3 |corr(E_t, x_it)| + α_i + u_i   (2.18)

where |corr(y_t+h, x_it)| represents, for sector i, the absolute correlation of the sectoral DY with future IP growth, PER_i stands for the average PER of sector i over the whole period, and |corr(E_t, x_it)| represents the absolute correlation of the sectoral DY with either the US real effective exchange rate (REER), retrieved from the BIS website, or with our metric of future foreign IP growth that is orthogonal to future US IP growth (û_t+h). Finally, α_i stands for the industry fixed effects (the 40 sectors that we rely on are grouped into 11 different industries in the IBC classification). We can see first in Table 2.6.1 that, by construction and in absolute terms, factor weights are strongly and positively related to the correlation between sectoral DYs and future IP growth.
Second, Table 2.6.1 underlines that, in line with the hypothesis formulated above, the DYs from the value sectors seem to contain relatively more information regarding future IP growth given that lower PERs are positively associated with the factor weights in our regressions. Third, it appears that our factor significantly underweights sectors whose DYs are strongly correlated, in absolute terms, with the US REER or with our metric of foreign IP growth. This would mean that our estimated factor puts less weight on sectors with a strong exposure on foreign economic activity, so as to better spot changes in future domestic IP growth. Conclusion We show that a factor model based on sectorally disaggregated stock market variables can significantly outperform other extant macroeconomic forecasting models. Previous approaches either relied on aggregate equity variables, on disaggregated equity variables taken in isolation, or on indices built from disaggregated equity variables but in a constrained manner (for example by using the Fama-French factors). We show that our factor model outperforms -over several different horizons-both in-sample and out-of-sample, the usual macroeconomic benchmarks. We attribute this out-performance to two characteristics of our factor model. First, we show that our model over/underweights certain sectors so that the resulting factor is strongly associated with future IP growth, but is, conversely, relatively less associated with the noisy components of the sectoral DYs, namely expected returns and the foreign component of future cash flows. Second, we argue that the superior performance of our model is related to the fact that it overweights both upstream sectors (Oil and Gas, Industrial Materials etc.) and value sectors that are deemed relatively more informative regarding future IP growth. As a consequence, we are able to better predict activity overall, but also particularly during periods of negative growth. We use the sectoral DYs x it in a factor model instead of the aggregate DY x t to predict future IP growth. By doing so, we are implicitly assuming that the sectoral DYs are indicative of future aggregate domestic cash flows, which are themselves a proxy for the future US economic activity. We are also assuming that the factor model is able to isolate this information while filtering the remaining noisy components in sectoral DYs. More precisely, in line with Kelly and Pruitt (2013), we are assuming that the expectation of sectoral returns, sectoral domestic cash flow growth and sectoral foreign cash flow growth are linearly determined by a set of common factors F t : E t (r i,t+1 ) = α i,0 + α ′ i,1 F t + u i,t E t (∆cf D,i,t+1 ) = β i,0 + β ′ i,1 F t + e i,t E t (∆cf F,i,t+1 ) = γ i,0 + γ ′ i,1 F t + ϵ i,t Where u i,t , e i,t and ϵ i,t are idiosyncratic and independently distributed components with E t (u i,t+1 ) = E t (e i,t+1 ) = E t (ϵ i,t+1 ) = 0. 
The expectations of aggregate variables follow similar processes, that is:

E_t(r_t+1) = α_0 + α′_1 F_t + u_t
E_t(∆cf_D,t+1) = β_0 + β′_1 F_t + e_t
E_t(∆cf_F,t+1) = γ_0 + γ′_1 F_t + ϵ_t

Finally, we assume that the factors follow an autoregressive process:

F_t+1 = Θ F_t + ν_t+1

Therefore, in line with Section 2.2.1, we can use the Campbell and Shiller (1988) formula for sectoral DYs:

x_it = κ_i/(1 - ρ_i) + Σ_{j=1}^∞ ρ_i^{j-1} E_t[r_i,t+j - ∆cf_D,i,t+j - ∆cf_F,i,t+j]
     = κ_i/(1 - ρ_i) + Σ_{j=1}^∞ ρ_i^{j-1} E_t[(α_i,0 + α′_i,1 F_t+j-1 + u_i,t+j-1) - (β_i,0 + β′_i,1 F_t+j-1 + e_i,t+j-1) - (γ_i,0 + γ′_i,1 F_t+j-1 + ϵ_i,t+j-1)]
     = (κ_i + α_i,0 - β_i,0 - γ_i,0)/(1 - ρ_i) + Σ_{j=1}^∞ ρ_i^{j-1} E_t[i′ Γ′_i F_t+j-1 + u_i,t+j-1 - e_i,t+j-1 - ϵ_i,t+j-1]
     = (κ_i + α_i,0 - β_i,0 - γ_i,0)/(1 - ρ_i) + i′ Γ′_i (I - ρ_i Θ)^{-1} F_t + u_it - e_it - ϵ_it
     = ϕ_i,0 + ϕ′_i,1 F_t + ν_i,t

with ϕ_i,0 = (κ_i + α_i,0 - β_i,0 - γ_i,0)/(1 - ρ_i), ϕ′_i,1 = i′ Γ′_i (I - ρ_i Θ)^{-1}, ν_i,t = u_it - e_it - ϵ_it, i = (1, -1, -1)′ and Γ_i = (α_i,1, β_i,1, γ_i,1). In other words, the derivation above shows how, by assuming that common factors affect both the expectations of sectoral and aggregate returns and cash flows, sectoral DYs can be shown to be linearly related to these factors. Since the latter also linearly affect future aggregate domestic cash flows, it is therefore attractive, in this framework, to rely on the cross-section of sectoral DYs to extract a predictive signal for future domestic cash flows.

The predicted variables (Manufacturing sales, House permits etc.) are all defined as growth rates, similarly to IP growth, before conducting the forecasting exercise.

Note: The table reports the difference in RMSE of the factor model compared to the different benchmarks (a negative value means that the factor model outperforms the corresponding benchmark in terms of RMSE). In the same way as for our main specification (for the United States), we filter out of this exercise the IBC sectoral DY series that were incomplete over the time period. As a result, the number of sectors used in this analysis may differ between the different countries.

2.A.2 Additional forecasting results

Abstract

We propose a novel approach to quantify time-varying financial spillovers based on a structural version of the Diebold-Yilmaz framework. Key to our approach is a SVAR-GARCH that is statistically identified by heteroskedasticity, economically identified by maximum shock contribution, and that allows for time-varying FEVDs. We analyze spillovers between Euro Area sovereign and bank CDS. The spillovers estimated are a good fit for known spillover events and give more reactive signals compared to alternative models. We find spillovers to explain 37% of the variation in our sample, amid strong variations of the spillovers over time and in the cross section.

We propose a new approach to quantify contagion phenomena between financial markets, based on a structural version of the Diebold and Yilmaz (2009) model. We rely essentially on a SVAR-GARCH model that is statistically identified by heteroskedasticity, economically identified by the maximum contribution of the shocks, and that yields forecast error variance decompositions that vary over time. We analyse the propagation of credit risk shocks in the euro area between sovereign and bank CDS.
From a methodological point of view, we find that our model identifies credit shocks better than the other contagion models in the literature, and that it is moreover more reactive to events than models based on rolling-window estimations. From an economic point of view, we find that contagion phenomena explain only 37% of the variance of our variables, albeit with strong variations over time.

Introduction

Assessing credit risk spillovers on financial markets is challenging. The first challenge concerns shock identification: evaluating how a specific shock propagated from one market to another requires first identifying this shock. Yet, this task raises significant difficulties, as asset prices contemporaneously affect each other and thus co-move markedly. The second challenge concerns time variation: spillover episodes tend to be short-lived and to vary substantially over time. In this paper we propose a framework to estimate credit risk spillovers that combines an attractive identification approach for a set of endogenous variables with time variation in the spillover estimates. The approach relies on a Structural Vector Autoregression with a GARCH error structure (SVAR-GARCH) that is identified by heteroskedasticity. On the SVAR estimates we apply the framework of Diebold and Yilmaz (2009) and measure spillovers by the off-diagonal elements of the time-varying Forecast Error Variance Decomposition (FEVD). The approach allows a timely monitoring of spillovers and an up-to-date assessment of financial stability risks. We estimate the model on a sample of 16 banking sector and sovereign CDS series in the Eurozone (EZ), spanning 2008 to 2019. We estimate international spillovers between banking sectors and between sovereigns, and national spillovers between sovereigns and banks, in one mutually consistent framework. The seminal work by Diebold and Yilmaz (2009), as well as a large number of subsequent papers (for example Alter and Beyer, 2014; Claeys and Vašíček, 2014; Demirer et al., 2018; De Santis and Zimic, 2018), proposes to base spillover estimates on the off-diagonal entries of the FEVDs of rolling-window structural vector autoregressions. While the approach allows for the construction of mutually consistent spillovers, the literature faces the econometric challenge of identification (De Santis and Zimic, 2018). Earlier papers rely on short-run zero restrictions for the coefficients of the SVAR (for example Diebold and Yilmaz, 2009). However, this assumption is unlikely to hold with financial data that reacts almost instantaneously to news (see Alter and Beyer, 2014). Later papers sidestep any structural identification by using reduced-form shocks in the form of Generalized FEVD analysis (GFEVD, see Pesaran and Shin, 1998). Yet, reduced-form shocks have no economic interpretation and cannot be used for quantifying causal relationships in the data (Kilian and Lütkepohl, 2017). Other standard identification approaches are not appealing either: sign restrictions (Fry and Pagan, 2011), for example, are not exploitable, as we do not want to restrict the impacts of the shocks a priori.
De Santis and Zimic (2018) and De Santis and Zimic (2019) propose attractive identification schemes using magnitude restrictions. However, as most of the literature, they rely on rolling window estimations in order to generate time variation in their spillover estimates. Such rolling window estimations come with a significant drawback: at each point in time they deliver average spillover effects over large time horizons where new spillover estimates are averaged out with outdated estimates. Spillover estimates therefore do not represent up-to-date information. We propose a novel approach for estimating time-varying spillovers by exploiting a SVAR-GARCH model that is statistically identified by the heteroskedasticity in the data. Also Normandin and Phaneuf (2004); [START_REF] Bouakez | Fluctuations in the foreign exchange market: How important are monetary policy shocks[END_REF]; [START_REF] Lütkepohl | Testing for identification in SVAR-GARCH models[END_REF] and others take advantage of the conditional heteroskedasticity in a SVAR-GARCH to identify structural shocks. We show that beyond this property, the model is attractive as it yields timevarying FEVDs based on the conditional variances of estimated structural errors. To the best of our knowledge, we are the first to exploit the properties of the conditional variances in a SVAR-GARCH model to construct time-varying spillover estimates in financial networks. Moreover, we show that it is feasible to achieve economic identification between structural shocks and financial market variables in a nontrivial one-to-one relationship, even in a system of 16 variables. We label shocks with a maximum contribution to the forecast error variance of a variable as a shock of precisely that variable (similar to Grosse Steffen and [START_REF] Grosse Steffen | Ambiguity and Time-Varying Risk Aversion in Sovereign Debt Markets[END_REF][START_REF] Dungey | Unobservable shocks as carriers of contagion[END_REF]. Due to the GARCH component in our estimation, spillover estimates are up-todate instead of being averaged out in a moving window (as in Diebold and Yilmaz, 2009[START_REF] Thornton | The Dual Mandate: Has the Fed Changed Its Objective? Federal Reserve Bank of St[END_REF][START_REF]5 shows that the spillover estimates estimated on the full sample are robust to alternative model specifications. 8 We construct country banking variables as an unweighted[END_REF]. We investigate the properties of the SVAR-GARCH model estimated on the sample of Euro Area banking sector and sovereign CDS series. We show that the identification of the SVAR-GARCH model yields shock estimates that fit known economic and market events, supporting the choice of economic identification by maximum shock contribution. We manage to match major shocks to credit risk to 117 news events, either for bank or for sovereign CDS. In a second exercise, we compare the spillovers implied by the SVAR-GARCH with estimates stemming from other identification strategies used in the literature. For a range of established spillover events, either based on the events we identify, the events identified by [START_REF] Candelon | Sovereign Rating News and Financial Markets Spillovers: Evidence From the European Debt Crisis[END_REF] or the events identified by [START_REF] Alexandre | Crise de la dette souveraine dans l'Union Européenne: Transparence des banques et spreads de CDS[END_REF], we apply a horse race between the competing models. 
We find that for either event list, the SVAR-GARCH outperforms identification schemes used in [START_REF] Fengler | Measuring Spot Variance Spillovers when (Co)variances are Time-varying The Case of Multivariate GARCH Models[END_REF], Diebold and Yilmaz (2009) or Diebold and Yilmaz (2012). Overall, we find credit risk in the Euro Area to be less integrated than suggested by estimates based on traditional Diebold-Yilmaz approaches. We estimate that, on average, credit risk spillovers explain about 37% of the total variation in our sample. Yet, we show that the importance of spillover fluctuates distinctively, peaking at 61% during the Great Financial Crisis. Spillovers differ also largely in the cross-section. For example, we find that during the European debt crisis, spillovers from periphery sovereigns increased markedly, affecting the strongest credit risk of other periphery sovereigns and banking sectors. We also find strong credit risk spillovers from periphery banking sector shocks, for example at the beginning of 2013 when investor worries surfaced about the health of the Italian banking system amid high non performing loan ratios and excessive reliance on debt. In contrast, we find for the period of the Great Financial Crisis elevated spillovers from core Euro Area countries. We investigate the economic propagation channels underlying our spillover estimates. We find international credit risk spillovers between sovereigns to be higher when the countries have stronger ties in trade and portfolio investments, in line with the business cycle network literature [START_REF] Foerster | Sectoral versus aggregate shocks: A structural factor analysis of industrial production[END_REF]. We find international credit risk spillovers between banking systems to be higher when they exhibit more similar portfolios; yet we find spillovers not to be significantly associated with bank cross-holdings (similar to the findings in Brunetti et al., 2019). Concerning the national sovereign-bank nexus, we find that (i) a lower capital ratio and higher debt to GDP ratio increase domestic bank to sovereign spillovers in both low and high debt countries; while (ii) reliance of the non bank sector on domestic bank funding is significantly associated with domestic bank to sovereign spillovers only in low debt countries. In turn, we find domestic sovereign to bank spillovers to be higher for countries with a stronger bank exposure to domestic government debt. Moreover, we find that in high debt countries domestic sovereign to bank spillover are stronger when the domestic banking sector shows higher non-performing loan ratios and disposes of a lower share of liquid assets to short term liabilities. The rest of the paper is structured as follows: Section 3.2 discusses the related literature, Section 3.3 details the methodology, Section 3.4 introduces the data, Section 3.5 reports the results of the SVAR-GARCH and Section 3.6 concludes. Estimating Spillovers in the Literature Throughout this paper we define spillovers as the degree to which exogenous shocks to one CDS market drive the variation of CDS spreads in other markets, based on the off-diagonals of forecast error variance decompositions. However, the definition of spillovers may differ in the literature. De Santis and Zimic (2018) characterize spillovers as the impulse response of one shock to another variable while they label estimates based on FEVDs as "connectedness". 
Additionnally, [START_REF] Forbes | No Contagion, Only Interdependence: Measuring Stock Market Comovements[END_REF], [START_REF] Claeys | Measuring bilateral spillover and testing contagion on sovereign bond markets in Europe[END_REF] and [START_REF] Dungey | Endogenous crisis dating and contagion using smooth transition structural GARCH[END_REF] term contagion as significant changes in the propagation mechanism, not the propagation mechanism itself. Diebold and Yilmaz (2009[START_REF] Thornton | The Dual Mandate: Has the Fed Changed Its Objective? Federal Reserve Bank of St[END_REF][START_REF]5 shows that the spillover estimates estimated on the full sample are robust to alternative model specifications. 8 We construct country banking variables as an unweighted[END_REF] propose in a set of papers a prominent approach to quantify time-varying spillovers on financial markets. The model is widely reused in the literature (e.g. [START_REF] Claeys | Measuring bilateral spillover and testing contagion on sovereign bond markets in Europe[END_REF]Alter and Beyer, 2014;[START_REF] Adams | Spillover effects among financial institutions: A state-dependent sensitivity value-at-risk approach[END_REF][START_REF] Fengler | A variance spillover analysis without covariances: What do we miss[END_REF][START_REF] Diebold | Commodity Connectedness[END_REF][START_REF] Hale | Monitoring Banking System Fragility with Big Data[END_REF][START_REF] Greenwood-Nimmo | What's Mine Is Yours: Sovereign Risk Transmission during the European Debt Crisis[END_REF], 2019). The key challenge of the approach is the identification of shocks in the underlying SVARs. Three different strains of the spillover-literature do offer attractive identification strategies. First, De Santis and Zimic (2018) and De Santis and Zimic (2019) apply a methodology close to ours. They gauge the spillovers between sovereign debt markets and between medium-term interest rates with a Diebold-Yilmaz approach based on a SVAR that is identified by "magnitude restrictions". The approach relies on the assumption that a shock originating from one country impacts the strongest the financial market in that very same country. Second, [START_REF] Ando | Quantile Connectedness: Modelling Tail Behaviour in the Topology of Financial Networks[END_REF] add numerous exogenous variables to their vector autoregressions with the aim to purge their variables from common factors. Once this filtering is done, they obtain (quasi) orthogonal shocks. Finally, several papers focusing on financial spillovers [START_REF] Ehrmann | Stocks, bonds, money markets and exchange rates: Measuring international financial transmission[END_REF][START_REF] Dungey | Endogenous crisis dating and contagion using smooth transition structural GARCH[END_REF]Ehrmann and Fratzscher, 2017;[START_REF] Fratzscher | Monetary Policy, Bank Bailouts and the Sovereign-Bank Risk Nexus in the Euro Area*[END_REF] apply the idea of [START_REF] Rigobon | Identification Through Heteroskedasticity[END_REF] and rely on the identification by heteroskedasticity. The authors use the variations in the variance-covariance matrix of the reduced form shocks to identify the structural shocks. The time variation in the first two strains of the literature comes from a rolling window estimation. These papers use relatively long window length in order to have a sufficient accuracy in their parameter estimates. With this feature, models lack in responsiveness as past observations mitigate the effect of new ones. 
The third strain of the literature focuses on specific sub-periods (e.g. Ehrmann and Fratzscher, 2017, or Dungey et al., 2015) and does not provide a continuous estimation of their spillover indices. In contrast, a recent literature has exploited MGARCH models that are capable of generating up-to-date spillovers (Fengler and Gisler; Strohsal and Weber). However, these models lack attractive identification approaches for structural analysis.

3.3. Methodology

3.3.1 Measuring spillovers

We follow the key idea of Diebold and Yilmaz (2009, 2012, 2014) and base a set of mutually consistent spillover measures, from pairwise to system-wide, on FEVDs. Table 3.3.1 depicts a FEVD which is amended with an additional bottom row that captures the off-diagonal column sums, an additional column on the right that captures the off-diagonal row sums, and a bottom-right element that captures the grand average of either off-diagonal column or row sums.

Table (3.3.1) Diebold-Yilmaz Spillover Table

               y_1              y_2              ...   y_N              To Others
y_1            d^H_11           d^H_12           ...   d^H_1N           Σ_{j≠1} d^H_1j
y_2            d^H_21           d^H_22           ...   d^H_2N           Σ_{j≠2} d^H_2j
...            ...              ...              ...   ...              ...
y_N            d^H_N1           d^H_N2           ...   d^H_NN           Σ_{j≠N} d^H_Nj
From Others    Σ_{i≠1} d^H_i1   Σ_{i≠2} d^H_i2   ...   Σ_{i≠N} d^H_iN   (1/N) Σ_{i≠j} d^H_ij

The FEVD is populated by elements d^H_ij, which give the proportion of the H-step forecast error variance of variable y_j that is driven by an orthogonal shock to y_i. Following Diebold and Yilmaz (2009, 2012, 2014) we define d^H_ij as a pairwise directed spillover from i to j:

S^H_{i→j} = d^H_ij.   (3.1)

The pairwise spillovers allow us to construct more aggregated spillover indices. For example, the off-diagonal column sums indicate to which degree the H-step forecast error variation of variable y_j is driven by other variables in the system. Diebold and Yilmaz (2009, 2012, 2014) therefore define inward spillovers as:

S^H_{j←•} = Σ_{i=1, i≠j}^N d^H_ij.   (3.2)

Vice versa, the off-diagonal row sums indicate to what degree variable y_j drives the variation of all other variables in the system. Outward spillovers are therefore defined as:

S^H_{j→•} = Σ_{i=1, i≠j}^N d^H_ji.   (3.3)

Total spillovers in the system are finally defined as the average of inward or outward spillovers:

S^H = (1/N) Σ_{i,j=1, i≠j}^N d^H_ij.   (3.4)
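As a minimal illustration of equations (3.1)-(3.4), the sketch below computes the pairwise, inward, outward and total spillover measures from a given FEVD matrix; the matrix itself (and the convention that each column sums to one) is assumed to come from an already-estimated and identified model.

```python
import numpy as np

def spillover_measures(D):
    """Diebold-Yilmaz measures from a FEVD matrix D.

    D[i, j] = share of the H-step forecast error variance of variable j
    explained by an orthogonal shock to variable i (each column sums to 1).
    """
    N = D.shape[0]
    off = D - np.diag(np.diag(D))          # zero out own effects
    inward = off.sum(axis=0)               # S_{j<-.}: off-diagonal column sums
    outward = off.sum(axis=1)              # S_{j->.}: off-diagonal row sums
    total = off.sum() / N                  # S^H: grand average
    return inward, outward, total
```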
As underlined above, Diebold and Yilmaz (2009, 2012, 2014) estimate time-varying FEVDs based on moving-window estimations of vector autoregressions, and identify the SVARs with orthogonalization strategies that can be challenged. The remainder of the paper outlines an approach that allows for a structural estimation of VAR parameters as well as for time-varying FEVDs that do not rely on rolling-window estimations.

3.3.2 Description of the Model

For the development of a structural version of the Diebold-Yilmaz index, we rely on a SVAR model with a GARCH error structure and an identification by heteroskedasticity, similar in spirit to Normandin and Phaneuf (2004). We choose the model for the following reasons: first, a GARCH error structure appears to be a natural choice given that first differences of CDS, like many other financial variables, show clustering of volatility over time and are therefore well approximated by GARCH processes. Second, the model has the property of time-varying conditional volatility of the errors, given the GARCH structure of the model. This property is crucial for the identification of structural shocks (Rigobon, 2003). Third, still relying on this property, we can construct time-varying FEVDs. This last feature allows us to estimate the model over the whole period, thus enabling more responsiveness compared to a time-varying FEVD based on a rolling estimation.

SVAR identification through heteroskedasticity

We base the empirical model on a structural vector autoregression of order p, which allows our variables to be determined simultaneously:

B_0 Y_t = γ + B_1 Y_{t-1} + ... + B_p Y_{t-p} + ϵ_t   (3.5)

where Y_t is a vector containing the endogenous variables of interest, typically sovereign and bank sector CDS time series. The matrices B_i contain the contemporaneous and lagged effects of the endogenous variables. ϵ_t denotes the structural errors, with zero mean and an unconditional diagonal variance-covariance matrix λ_ϵ. As the SVAR cannot be estimated directly, we first estimate a reduced-form VAR:

Y_t = β + A_1 Y_{t-1} + ... + A_p Y_{t-p} + μ_t   (3.6)

where the reduced-form shocks μ_t have zero mean and a non-diagonal variance-covariance matrix Σ_μ. The structural errors ϵ_t are then defined through μ_t and the contemporaneous interaction matrix B_0:

ϵ_t = B_0 μ_t ⇔ μ_t = B_0^{-1} ϵ_t   (3.7)

The well-known VAR identification problem arises as we try to obtain estimates for the contemporaneous interaction matrix B_0 from the relationship Σ_μ = B_0^{-1} λ_ϵ B_0^{-1′}. Yet without further restrictions B_0 is not identified, since Σ_μ provides only N(N+1)/2 equations for N² unknowns if we normalize λ_ϵ = I. The SVAR-GARCH model we are using relies on Rigobon's (2003) identification scheme, which exploits the general heteroskedasticity in financial data. Suppose that the variances (or conditional variances) of μ_t vary over time (implying that the structural error variance does too) while B_0 is constant.4
This feature implies that there is more than one volatility regime in the data, each defined by a different reduced-form variance-covariance matrix Σ_μ(m). If there are M different volatility regimes, then we have:

Σ_μ(1) = B_0^{-1} B_0^{-1′},   Σ_μ(m) = B_0^{-1} λ_m B_0^{-1′},   m = 2, ..., M   (3.8)

where the λ_m are the diagonal variance matrices of the structural shocks (λ_1 is normalized to I). Lanne and Saikkonen show that B_0 is locally uniquely determined if, for all (k, l) ∈ {1, ..., K}² with k ≠ l, there is an index j ∈ {2, ..., M} such that λ_jk ≠ λ_jl, i.e. there is sufficient heterogeneity in the volatility changes.

SVAR-GARCH

Conditional heteroskedasticity can be modeled in different ways (see Lütkepohl and Netšunajev, 2017a). We rely on the methodology first proposed by Normandin and Phaneuf (2004) and assume that it is driven by GARCH processes. Similar models have been applied in Bouakez and Normandin (2010), Lütkepohl and Milunovich (2016) and Lütkepohl and Netšunajev (2017). We assume that the structural shocks are orthogonal and that their variances follow univariate GARCH(1,1) processes:

ϵ_k,t = σ_{k,t|t-1} e_k,t,   where e_t ∼ i.i.d. N(0, I_N),   (3.9)
σ²_{k,t|t-1} = (1 - γ_k - g_k) + γ_k ϵ²_{k,t-1} + g_k σ²_{k,t-1|t-2}   (3.10)

where γ_k > 0, g_k ≥ 0 and γ_k + g_k < 1 for 1 ≤ k ≤ N, so that the GARCH(1,1) processes are non-trivial. Then, we can express the reduced-form shocks as:

μ_t = B_0^{-1} λ_{t|t-1}^{1/2} e_t   (3.11)

where

λ_{t|t-1} = diag(σ²_{1,t|t-1}, ..., σ²_{N,t|t-1})   (3.12)

is the (N x N) diagonal matrix with the univariate GARCH processes on the diagonal. Therefore, the distribution of μ_t conditional on past information has mean zero and covariance matrix:

Σ_{μ,t|t-1} = B_0^{-1} λ_{t|t-1} B_0^{-1′}   (3.13)

Rigobon (2003) shows that for full (local) statistical identification, two different volatility regimes are enough. With a SVAR-GARCH we have T (the number of observations) different volatility "regimes". In this study, using daily CDS data between 2008 and 2019, this translates into more than 2800 regimes. We estimate the parameters of the SVAR-GARCH model by Maximum Likelihood, as in Lütkepohl and Milunovich (2016).

Forecasts for FEVD

Estimates for time-varying conditional variance-covariance matrices allow us to construct FEVDs for each time period, i.e. for each day. Note that for the computation of the FEVD in each period t, one cannot take the actual estimated structural variances λ̂_{t|t-1}. Instead, we need to compute, by definition of the FEVD, in-sample forecasts for the structural variances λ*_{t+h|t} conditional on the information set at t, as in Fengler and Gisler. Contrary to the approach in the latter, our matrix B_0 is constant over time, so that the only change between a classic SVAR-FEVD and our approach is the computation of future structural variances.
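Before turning to the variance forecasts, a minimal sketch of the conditional covariance construction in equations (3.10) and (3.13); it assumes that B_0 and the GARCH parameters have already been estimated and initialises the recursion at the unconditional unit variance.

```python
import numpy as np

def conditional_covariances(B0_inv, eps, gamma, g):
    """Sigma_{mu,t|t-1} = B0^{-1} diag(sigma^2_{k,t|t-1}) B0^{-1}'.

    B0_inv : (N, N) inverse contemporaneous interaction matrix.
    eps    : (T, N) estimated structural shocks.
    gamma, g : (N,) ARCH and GARCH coefficients of the univariate GARCH(1,1) processes.
    """
    T, N = eps.shape
    sig2 = np.ones((T, N))                          # sigma^2_{k,1|0} = 1 (unconditional variance)
    for t in range(1, T):
        sig2[t] = (1 - gamma - g) + gamma * eps[t - 1]**2 + g * sig2[t - 1]
    # one (N, N) conditional covariance matrix per day
    return np.array([B0_inv @ np.diag(s) @ B0_inv.T for s in sig2])
```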
We have with Equation (3.10): σ 2 k,t+h|t+h-1 = (1 -γ k -g k ) + γ k (ϵ k,t+h-1 ) 2 + g k σ 2 k,t+h-1|t+h-2 (3.14) Taking conditional expectation at time t, with h ≥ 2: E t σ 2 k,t+h|t+h-1 = (1 -γ k -g k ) + γ k σ 2 k,t+h-1|t + g k E t σ 2 k,t+h-1|t+h-2 (3.15) Using the law of iterated expectations, we get: E t σ 2 k,t+h|t = (1 -γ k -g k ) + γ k σ 2 k,t+h-1|t + g k E t σ 2 k,t+h-1|t (3.16) That is: σ 2 k,t+h|t = (1 -γ k -g k ) + (γ k + g k )σ 2 k,t+h-1|t (3.17) We thus obtain λ * t+h|t for each h as this matrix is diagonal and is only composed of the different σ 2 k,t+h|t . To build the FEVDs, we then first compute the MSPE. The Θ i matrices come from the Moving Average (MA) representation of the SVAR as detailed in [START_REF] Kilian | Structural vector autoregressive analysis[END_REF]: Y t+H -Y t+H|t = H-1 ∑ i=0 Θ i ϵ t+H-i (3.18) With the structural variances estimated, we get: M SP E t (H) = E t (Y t+H -Y t+H|t )(Y t+H -Y t+H|t ) ′ = H-1 ∑ i=0 Θ i λ * t+H-i|t Θ ′ i (3.19) We can then evaluate the contribution of shock j to MSPE of y kt with the usual MSPE-formula, the only difference with a classic SVAR is that variances of structural shocks are no longer normalized to 1. With θ kj,h the kj th element of Θ h : M SP E k j,t (H) = θ 2 kj,0 σ 2 j,t+H|t + ... + θ 2 kj,H-1 σ 2 j,t+1|t (3.20) With: M SP E k t (H) = K ∑ j=1 M SP E k j,t (H) (3.21) We get: F EV D k j,t (H) = M SP E k j,t (H) M SP E k t (H) (3.22) Eventually the time-varying FEVDs enable to build the time-varying spillover indices, as explained in Section 3.3.1. 3.4. Data and filtering for common shocks Data We focus on credit risk of major EZ sovereigns and banks. We attempt to strike a balance between a sufficiently high coverage of important CDS markets and the limited number of variables our empirical approach allows. As a result, we limit the sample to 9 countries (Greece, Ireland, Italy, Portugal, Spain, Germany, France, Belgium and Netherlands). For each country we include two variables in the sample, sovereign credit risk and credit risk in the banking sector, except for Ireland and Greece where we lack banking credit risk series due to data constraints. 5 This leaves us with 16 variables all together. As standard in this literature (see [START_REF] Greenwood-Nimmo | Financial sector bailouts, sovereign bailouts, and the transfer of credit risk[END_REF], we measure credit risk using CDS spreads on senior unsecured debt, modified-modified restructuring, mid spread and a maturity of 5 years. 6 We retrieve CDS spreads for non-US sovereigns and US banks denomi- In our main specification, we estimate the model on the full period. Because the identificiation approach relies on a B 0 matrix that is constant over time, this implies that we do not take into account potential regime changes in the transmission mechanism of credit risk spillovers. Annex 5 See [START_REF] Acharya | A Pyrrhic Victory? Bank Bailouts and Sovereign Credit Risk[END_REF] and [START_REF] Fratzscher | Monetary Policy, Bank Bailouts and the Sovereign-Bank Risk Nexus in the Euro Area*[END_REF]. 6 We combine data from three sources: we use principally Thomson Reuters Datastream and extend the sample backwards using growth rates extracted from CDS series from CMA. In case of missing values in the resulting data set, we retrieve growth rates on CDS spreads from Bloomberg. 
7 One shortcoming of using CDS spreads is that they are influenced not only by the probability of default of the corresponding bonds, but also by liquidity factors [START_REF] Fabozzi | Exploring the components of credit risk in credit default swaps[END_REF]. Therefore, our spillover estimates between CDS spreads may not reflect only risk transfers, but also illiquidity spillovers (see [START_REF] Cespa | Illiquidity contagion and liquidity crashes[END_REF]. However, CDS appear better suited for our analysis as they are easily available both for individual banks (contrary to bond spreads) and for sovereigns (contrary to equity returns). Additionally, sovereign CDS and bond spreads appear to send a similar signal, as the average correlation of the two variables for the same country in our sample amounts to 75%. in Section 3.3.2 on the obtained residuals, as in [START_REF] Dungey | Unobservable shocks as carriers of contagion[END_REF]. 10 That is, in a first step, we filter first differences bank and sovereign CDS series by the following OLS regression: ∆z jt = α j + ∆X ′ t β j + y jt (3.23) where ∆z jt represents the first difference of a CDS series j in the sample, α j is a constant and ∆X t is a vector of common factors in first differences. y jt contains the residuals of the regression and serves as input data for the SVAR-GARCH. Annex 3.A.5 reports robustness checks using a smaller set of exogenous variables. Results In this section we present the results for the SVAR-GARCH model outlined above. We estimate the model with 2 lags as indicated by the information criteria from a simple VAR estimated on the same dataset. Moreover, in line with Diebold and Yilmaz (2009[START_REF] Thornton | The Dual Mandate: Has the Fed Changed Its Objective? Federal Reserve Bank of St[END_REF][START_REF]5 shows that the spillover estimates estimated on the full sample are robust to alternative model specifications. 8 We construct country banking variables as an unweighted[END_REF], we choose a forecast horizon for the FEVD of 10 days. In Section 3.5.1 we present the results of our identification approach, that is the labeling of structural shocks, as well as comparisons of timeliness and of identification performances with traditional spillover models. In Section 3.5.2 we present the economic results of our application. Econometric results Statistical and economic identification Statistical identification is achieved when the number of univariate GARCH components underlying the GARCH structure are larger or equal to N-1. That means that for full local identification we may have at most one series that is not well approximated by a GARCH process in 10 This approach is similar to including a vector of exogenous variables directly into the SVAR. Alter and Beyer (2014) find similar results between the two approaches. In our case, a two-step approach is preferable as we found that including a vector of exogenous variables in the SVAR-GARCH significantly increases the time to convergence. order to have sufficient heteroskedasticity in the structural shocks. We follow the identification test proposed by [START_REF] Lanne | A Multivariate Generalized Orthogonal Factor GARCH Model[END_REF] and reject fewer than N-1 GARCH processes in our sample (see Annex 3.A.6). However, full local identification implies only statistical identification up to sign changes and ordering. 
To make the orthogonal shocks economically meaningful we need to label them, ideally in such a way that each orthogonal shock corresponds to a different variable. In line with Grosse Steffen and Podstawski and with Dungey et al., we label shocks with the maximum contribution to the forecast error variance of a variable as a shock from this particular variable (for example the German banking sector). Exact economic identification is obtained if for each CDS series there is only one structural shock with a maximum contribution to the forecast error variance of that specific CDS series. As we estimate one FEVD for each day, we focus on average shock contributions over time. However, the labelling would be exactly the same if we focused, at each point in time, on individual FEVDs. Table 3.5.1 reports a FEVD that is averaged over all time periods and for which shocks are labelled accordingly. It is clear from the diagonal of the table that each shock has a maximum contribution to a different CDS series, allowing a clear labelling of the orthogonal shocks.

[Table 3.5.1: time-averaged FEVD (in percent), with labelled shocks in rows and CDS series in columns; diagonal entries such as FR 61, GR 76, NL 68, ES 50, IT 58, PT 40 and IE 53 dominate each row.] This table represents the average over time of the FEVDs obtained with the SVAR-GARCH. We can see that the originating shocks (in rows) have their largest impact on their own variables (in columns).

Our economic identification approach receives further confirmation from the fact that the estimated structural time series of shocks (ϵ_t in Equation (3.5)) correspond to a large number of historical events. In the spirit of Antolín-Díaz and Rubio-Ramírez (2018), we compare major shocks with historical economic and market events.11 We define major shocks as those shocks that are larger than 6 times their own standard deviation. Of the 79 shocks that meet this criterion, we are able to match 62 events (covering 78% of major shocks).12 In Figure 3.5.1 we present the time series of the estimated structural shocks (in black) along with the timing of the matched events (red vertical lines). Figure 3.5.1 also shows isolated spillover events that fall short of the threshold for major events. Again, we are able to match a large number of such shocks to economic and financial events, extending the list of events to 117 items. The identified events are typically rating downgrades or political shocks (for sovereigns) or bank stress episodes (for banking sectors). Annex 3.A.7 reports the exhaustive list of events. This exercise suggests that our identification strategy based on maximum shock contribution is further supported by the event analysis of structural shocks in Figure 3.5.1, something which is rarely performed in the SVAR literature.

Figure (3.5.1) Structural Shocks and Events. The panels show the estimated structural shocks of the model (ϵ_t), with the identified historical events for each variable marked by vertical red lines. The list of events used is available in Annex 3.A.7.

Total Spillover Comparison

We now compare the total spillovers S^H defined in Equation (3.4) and estimated by the SVAR-GARCH model with total spillover estimates from alternative spillover models. The purpose of the exercise is to compare the magnitude and reactiveness of spillover estimates.
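Before turning to the competing models, the labelling and event-matching steps described above can be summarised in a short sketch; it assumes arrays of daily FEVDs and estimated structural shocks, and simply flags observations above the six-standard-deviation threshold used in the text.

```python
import numpy as np

def label_shocks(fevds):
    """Label each orthogonal shock by the variable whose forecast error
    variance it explains the most, on average over time.

    fevds : (T, N, N) daily FEVDs with fevds[t, i, j] = contribution of
            shock i to the forecast error variance of variable j at date t.
    """
    avg = fevds.mean(axis=0)
    return avg.argmax(axis=1)        # for shock i, index of its dominant variable

def major_shocks(eps, threshold=6.0):
    """Flag structural shocks larger than `threshold` standard deviations."""
    z = np.abs(eps) / eps.std(axis=0)
    dates, shocks = np.where(z > threshold)
    return list(zip(dates, shocks))
```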
The models we compare measure spillovers through elements of FEVDs based either on SVARs or on multivariate GARCH models:13

Model 1: A SVAR estimated on a rolling window and identified by a Cholesky decomposition (as in Diebold and Yilmaz, 2009). We label the model VAR Cholesky.

Model 2: A SVAR estimated on a rolling window and identified by GIRF/GFEVD (as in Diebold and Yilmaz, 2012), labeled here VAR GIRF.

Model 3: A DCC-GARCH identified by a Cholesky decomposition and estimated over the full sample (in the spirit of Elder and Serletis). More precisely, we estimate a DCC-GARCH as a reduced-form VAR, that is:

Y_t = β + A_1 Y_{t-1} + ... + A_p Y_{t-p} + μ_t,   with μ_t ∼ N(0, D_t R_t D_t) and D_t R_t D_t = H_t.

We then switch to the structural form with a Cholesky decomposition at each period t: B_{0t}^{-1} B_{0t}^{-1′} = H_t. We label the model DCC Cholesky.

Model 4: Similarly to Model 3, we estimate a VAR-GARCH based on a DCC-GARCH, but identify B_{0t}^{-1} = H_t^{1/2} (as in Fengler and Gisler).14 We label the model DCC Fengler.

De Santis and Zimic (2018) show that GIRF identification tends to overestimate total spillovers and that Cholesky identification imposes restrictions that are likely to be at odds with the data generating process, leading to spillover misspecification.15 Similarly, the identification by Fengler and Gisler puts heavy restrictions on the data generating process by assuming a symmetric structure between the variables. This may explain the lower spillover estimates of the DCC Fengler model compared to the SVAR-GARCH. We come back to these points more formally in Annex 3.A.8.

14 Here, as in Fengler and Gisler, the square root of a symmetric positive definite matrix H is defined as H_t^{1/2} = Γ Λ^{1/2} Γ′, where the columns of Γ contain the eigenvectors of H and Λ^{1/2} is diagonal with the positive square roots of the eigenvalues on its diagonal.

15 When contemporaneous interaction effects between variables are not equal to 0, the estimated standard errors of structural shocks obtained with GIRF are biased upwards, equally biasing upwards the spillover estimates based on FEVDs. The zero restrictions that the Cholesky identification introduces are likely to be at odds with the data generating process.

Second, we investigate whether spillover indices relying on rolling-window estimations are less responsive to new events compared with spillover estimates that time-vary due to a GARCH component. Intuitively, they ought to be, while a priori we do not expect a clear distinction in reactiveness between the different spillover models with GARCH components. We investigate these hypotheses using a Granger causality analysis between the different S^H. We report results in Table 3.A.2 in Annex 3.A.3. The analysis shows that the S^H estimated by the SVAR-GARCH model Granger-causes the S^H computed by moving-window approaches, but not the S^H computed by alternative GARCH models.
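The Granger causality analysis mentioned above can be reproduced along the following lines (a minimal sketch; the lag length and the exact test statistic reported in Table 3.A.2 may differ):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_check(s_garch, s_rolling, maxlag=5):
    """Does the SVAR-GARCH total spillover index Granger-cause the rolling-window index?

    Each input is a 1-D array of daily total spillovers S^H.
    grangercausalitytests checks whether the SECOND column helps predict
    the first, so the ordering of the columns matters.
    """
    data = np.column_stack([s_rolling, s_garch])   # test: s_garch -> s_rolling
    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    return {lag: out[0]["ssr_ftest"][1] for lag, out in res.items()}  # p-values per lag
```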
Spillover Comparison and Narrative Events To evaluate the relative performance of our identification strategy, we now compare how spillovers estimated by different approaches evolve along well-known narrative events. As CDS spreads tend to comove substantially [START_REF] Longstaff | How sovereign is sovereign credit risk?[END_REF], and especially between sovereign and bank CDS series from the same country, there is a high risk that a model confuses bank shocks with their corresponding sovereign shocks. Expressed differently, at the time of a sovereign event, outward spillovers from the country's banking sector should remain flat or decrease. Taken together, for a sovereign event to be correctly identified, not only the sovereign spillovers should increase, they should also increase by more than the corresponding bank spillovers. On the middle and lower parts of Figure 3.5.3 we display the outward spillovers from Italian banks as well as the difference between sovereign and bank outward spillovers ("net" spillovers). While most of the models exhibit flat or negative net spillovers, only the SVAR-GARCH manages well to identify this specific event on this measure. To evaluate on a more systematic basis the identification strategies of the different models, we replicate the analysis from Figure 3.5.3 over the set of our identified events available in Annex 3.A.7. We estimate that a sovereign (bank) event is well identified if, 5 days around the day of the event, the spillover estimate stemming from the sovereign (banking sector) increases more than the spillover estimate from the banking sector (sovereign) in the same country. We evaluate the identification performance of the models on different sets of events: (i) a subset of the least contestable sovereign events (i.e. only elections, sovereign rating downgrades, or political events) of the list identified in Section 3.5.1 and shown in Annex 3.A.7 covering 18 events, (ii) all sovereign events in Annex 3.A.7 covering 54 events and (iii) all events, bank and sovereign, in Annex 3.A.7 covering 117 events. As the sets of events (i), (ii) and (iii) are generated from our model, we corroborate the analysis with two exogenous lists of sovereign events, from (iv) [START_REF] Candelon | Sovereign Rating News and Financial Markets Spillovers: Evidence From the European Debt Crisis[END_REF] which covers 11 events and (v) [START_REF] Alexandre | Crise de la dette souveraine dans l'Union Européenne: Transparence des banques et spreads de CDS[END_REF] which includes 8 events. Table ?? suggests that the SVAR-GARCH outperforms or ties with, on every set of events, the other models in terms of identification. Note also that the competing models barely exceed the 50% threshold of identification, meaning that they tend to confuse more sovereign events with Note: This table reports the percentage of correct event identifications by each model. E.g. we consider a sovereign event to be correctly identified if, 5 days around the event, the outward spillover stemming from the sovereign increases more than the outward spillover stemming from the corresponding banking sector. The results are reported for (i) uncontroversial sovereign events (sovereign rating downgrades and votes) (ii) all the sovereign events previously identified (iii) all the sovereign and banking events identified (iv) the sovereign event list of Candelon et al. 
Economic results

Figure 3.5.2 on total spillover indices shows that we estimate credit risk to be less integrated than other models would suggest. According to our S^H estimates, on average about 37% of the variation in the filtered CDS rates can be explained by spillovers. Yet, we find substantial variation in this magnitude over time. To investigate the sources of heightened spillovers, this section first analyses the time variation of both bank and sovereign spillovers from the EZ countries, and then the economic channels behind the spillovers we estimate.

Group pairwise spillovers

In this section, we analyse credit risk spillovers in terms of (i) timing, (ii) magnitude and (iii) origin. Given that we estimate spillovers between 16 CDS series, presenting the resulting 240 pairwise spillovers is not feasible. We therefore focus on pairwise spillovers between different sets of countries and banking sectors. The "Peripheric" group includes the high-debt countries at the time of the EZ debt crisis: Italy, Spain, Portugal, Greece, Belgium and Ireland.18 The "Core" group, conversely, is constituted by Germany, France and the Netherlands. The "Peripheric banks" and "Core banks" groups include the corresponding banking sectors. However, as indicated in Section 3.4, due to data constraints the group "Peripheric banks" does not include the Greek and Irish banking sectors. In line with Section 3.3.1, we define the group pairwise spillover from group G_1 to group G_2 as the average outward spillover from G_1 restricted to the variables of G_2. More formally, we have:

\[ S^H_{G_1 \to G_2} = \frac{1}{N_{G_1} N_{G_2} - N_{G_1} \mathbf{1}_{\{G_1 = G_2\}}} \sum_{i \in G_1} \sum_{\substack{j \in G_2 \\ j \neq i}} d^H_{ij} \qquad (3.24) \]

with N_{G_1} and N_{G_2} the number of variables in G_1 and G_2.19

Each line of Figure 3.5.4 represents by how much shocks from one variable set drive, on average, the variation of the other variable sets. The analysis of time-varying spillovers here differs from the presentation of snapshot spillovers around narrative events in Section 3.5.1 (as we focus here on a much broader time period) and also from the presentation of the shocks in Section 3.5.1. This is because spillover estimates in our DY framework are functions not only of the time-varying variances of the structural shocks (λ_t), but also of the interaction matrix (B_0) and of the VAR coefficients (A_i). Therefore, a large structural shock does not necessarily translate into a large spillover if it is associated with low coefficients in the corresponding matrices or if the magnitude of the shock is low relative to the other shocks' variances.

We also find sizable spillovers from the periphery banking sector to the other blocks in the EZ. For example, Figure 3.5.4 shows elevated spillovers at the beginning of 2013, when investors worried about the health of the Italian banking sector (due to high NPL ratios amid excessive reliance on debt), as well as at the beginning of 2016, when concerns about NPLs and the lack of credibility of the Italian banking sector heightened again. We also find increased spillovers around dates between 2011 and mid-2012 when the Spanish banking sector signaled problems.
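As an illustration of Equation (3.24), the snippet below computes a group pairwise spillover S^H_{G1→G2} from a generic spillover matrix in which entry d[i, j] is the H-step spillover from variable i to variable j (the share of j's forecast error variance attributable to shocks in i). The matrix values and group indices are made up for the example.

```python
import numpy as np

def group_spillover(d, g1, g2):
    """Group pairwise spillover S^H_{G1->G2} of Equation (3.24).

    d  : (N, N) array, d[i, j] = spillover from variable i to variable j.
    g1 : list of indices of the shock-emitting group G1.
    g2 : list of indices of the receiving group G2.
    """
    total = sum(d[i, j] for i in g1 for j in g2 if j != i)
    # Number of (i, j) pairs: exclude the diagonal only when G1 = G2
    n_pairs = len(g1) * len(g2) - len(g1) * (g1 == g2)
    return total / n_pairs

# Toy 4-variable spillover matrix (rows: shock origin, columns: receiver)
d_H = np.array([[0.70, 0.10, 0.15, 0.05],
                [0.05, 0.65, 0.20, 0.10],
                [0.10, 0.15, 0.60, 0.15],
                [0.05, 0.10, 0.05, 0.80]])

periphery, core = [0, 1], [2, 3]                   # illustrative group indices
print(group_spillover(d_H, periphery, core))       # periphery -> core
print(group_spillover(d_H, periphery, periphery))  # within-periphery
```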
While we find spillovers from periphery EZ countries to increase with the beginning of the Euro debt crisis, we find spillovers from core EZ countries to be stronger during the 2008/09 financial crisis. Overall, compared to their periphery counterparts, we find sudden increases of spillovers from core countries to be less frequent. However, their size is much larger; spillovers from core eurozone countries appear to be an order of magnitude larger than those from periphery eurozone countries. This ought to be due to the fact that, on average, economies and banks tend to be larger in core eurozone countries than in their periphery counterparts. We investigate the channels underlying the estimated spillovers in more detail in the next section.

What economic channels explain spillovers?

While in the previous section we discussed the sources and time variation of outward and total spillovers, this section focuses on the economic channels underlying the pairwise spillovers we estimate. More specifically, given a shock to a sovereign or banking sector in our sample, we vet whether the resulting pairwise spillovers match the economic channels proposed by the theoretical and empirical literature. We focus on four different types of spillovers: (i) international sovereign-to-sovereign spillovers, (ii) international bank-to-bank spillovers, (iii) national bank-to-sovereign spillovers and finally (iv) national sovereign-to-bank spillovers.

International spillovers

First, we address the following question: given a sovereign shock in country i, what factors are the international spillovers to sovereign risk in country j associated with? We broadly follow the regression approach of De Santis and Zimic (2018) and regress the credit risk spillover of sovereign i on sovereign j in quarter t on a set of regressors that can be divided into two main groups: distance and exposure. We estimate:

\[ \hat{\omega}_{i \to j,t}(h) = \beta_i + \alpha_t + \beta_2 d^{GDP}_{ij,t} + \beta_3 d^{D/GDP}_{ij,t} + \beta_4\, exposure^k_{j \to i,t} + \epsilon_{ij,t} \qquad (3.25) \]

where \( \hat{\omega}_{i \to j,t}(h) \) is the average spillover from i to j at forecast horizon h over quarter t.

20 We multiply difference variables by -1, such that the indicators increase in similarity.

The results, reported in Table 3.5.3, suggest that similarity in business cycles cannot explain spillovers in sovereign risk. Instead, we find that similar credit risk, in terms of similar debt-to-GDP ratios, as well as both stronger trade and portfolio exposure, are significantly related to higher sovereign risk spillovers. This finding supports the business cycle network literature (such as [START_REF] Foerster | Sectoral versus aggregate shocks: A structural factor analysis of industrial production[END_REF]), which models propagation channels through exactly those two

21 We repeat the same exercise to investigate the determinants of a spillover from the banking sector in country i to the banking sector in country j. Similar to Equation (3.25), we regress pairwise banking spillovers on a fixed effect for the shocking banking sector, as well as on distance and exposure variables:

\[ \hat{\omega}_{i \to j,t}(h) = \beta_i + \alpha_t + \beta_2 d^{NPL}_{ij,t} + \beta_3 d^{Lev.R.}_{ij,t} + \beta_4\, exposure^k_{j \to i,t} + \epsilon_{ij,t} \qquad (3.26) \]

The distance variables include credit risk distances, which we estimate by the squared difference between country i's and country j's banking sectors' non-performing loan and capital ratios in period t.
In terms of exposures, we test for two economic channels that are frequently used to model financial institution linkages: cross asset holdings and similarities in portfolios across banking sectors (see [START_REF] Giudici | The Multivariate Nature of Systemic Risk: Direct and Common Exposures[END_REF], Brunetti et al., 2019, or Greenwood et al., 2015). We construct banking sector portfolios from BIS Consolidated Banking Statistics data and calculate squared differences of those portfolios for each time period t. Cross asset holdings between banking systems are measured as the share of bank claims of country j vis-à-vis country i.

The results, shown in Table 3.5.4, suggest that cross-asset holdings are not significantly linked to bank-to-bank spillovers. We do find, however, that portfolio similarities are significantly associated with bank-to-bank spillovers. Both these findings are in line with the literature (Brunetti et al., 2019). Similarly to the sovereign regressions, risk distances have some explanatory power: we find that international bank spillovers are significantly associated with similar capital ratios for pairs of banking systems. However, similar NPL ratios turn out not to be statistically significant.

Spillovers in the national sovereign-bank nexus

21 As the use of generated dependent variables in the regression can induce heteroskedasticity (see De Santis and Zimic, 2018), we report White heteroskedasticity-consistent standard errors.
22 Here again, we multiply difference variables by -1, such that the indicators increase in similarity.

While in the previous two regression sets we have focused on international spillovers, in the next two regressions we investigate the economic determinants of the national sovereign-bank nexus. In this section, we differentiate between high-debt (Belgium, Italy, Portugal and Spain) and low-debt (France, Germany and the Netherlands) countries, as during the European debt crisis periphery and core countries experienced substantially different degrees of sovereign-bank nexuses [START_REF] Podstawski | The state dependent impact of bank exposure on sovereign risk[END_REF].

We focus first on the economic transmission channels of domestic spillovers from banks to sovereigns. One reason for higher spillovers may simply be a more vulnerable economy. We therefore include in the regression measures of debt-to-GDP ratios, the current account and GDP growth as predictor variables. Dell'Ariccia et al. (2018) identify three additional propagation channels. First, bank risk may affect domestic sovereign risk through the "safety net channel": explicit or implicit public guarantees that take effect in case a banking sector is in distress (Alter and Schüler, 2012). To proxy this effect, we add the capital ratio of the banking sector as a predictor. Intuitively, the "safety net channel" should be significant if domestic banks are undercapitalized and in potential need of public support. The second channel is the "sovereign exposure channel": when a banking sector is in distress, it can trigger fire sales of the government bonds it holds, increasing in turn the credit risk of the sovereign issuer. The third channel is the "macroeconomic channel": bank distress risks decreasing (domestic) lending activity and increasing sovereign risk as economic growth slows [START_REF] Podstawski | The state dependent impact of bank exposure on sovereign risk[END_REF].
We proxy the second and third channels with two exposure variables in the regression: the share of domestic government bonds and the share of domestic non-bank assets that the banking sector holds. Denoting \( v^k_s \) the vulnerability variable k for sector s, Equation (3.27) restates the OLS regressions we estimate:

\[ \hat{\omega}_{bank_i \to sov_i,t}(h) = \beta_0 + \alpha_t + \beta_1 v^{Lev.R.}_{bank_i,t} + \beta_2 v^{D/GDP}_{sov_i,t} + \beta_3 v^{CA}_{sov_i,t} + \beta_4 v^{g\,GDP}_{sov_i,t} + \beta_5\, exposure^k_{sov_i \to bank_i,t} + \epsilon_{bank_i,sov_i,t} \qquad (3.27) \]

For the vulnerability variables, we use dummies instead of continuous variables, contrary to Equations (3.25) and (3.26).23 We define high and low realisations of the variables with regard to their overall sample mean.24 Since the sample is split according to debt levels, using the mean debt/GDP as the threshold for the construction of a debt dummy does not yield much variation in the high-debt subsample. We therefore use the subsample mean for the high-debt country group, and the overall sample mean for the low-debt country group.25

23 The underlying reason for using continuous variables in Equations (3.25) and (3.26) is that investors on the CDS markets may pass the shock of one sovereign (bank) to the price of another sovereign (bank) CDS if they judge them as similar. However, for bank-to-sovereign or sovereign-to-bank regressions we cannot rely on such similarity metrics, as the giving and the receiving variables are of different types. Therefore, for Equations (3.27) and (3.28) we consider that investors pass the shock of a bank (sovereign) to a sovereign (bank) CDS if they judge the receiving variable as not resilient enough. This kind of reasoning is discrete; we therefore turn to dummy variables so as to illustrate the threshold that investors may consider.
24 Defined by 15.2% for the capital ratio, 0.5% for the current account and 0.3% for real GDP growth.
25 We use the overall sample mean (86.5%) for the low-debt group, and not the subsample mean (70%), as crossing the latter is unlikely to appear as a warning signal for investors. Indeed, Germany crossed this threshold between 2009 and 2016 while keeping its status as a safe haven. Using the subsample mean for the low-debt group renders the coefficients for the debt ratio insignificant, while all other regression estimates are robust to the threshold change. The subsample mean is 101.7% for the high-debt group.

We find in Table 3.5.5 that low capital ratios and high debt-to-GDP ratios are significantly associated with stronger domestic spillovers from banks to sovereigns. This suggests that the "safety net channel" may indeed be important in explaining the sovereign-bank nexus. We find that neither the current account nor GDP growth is significantly associated with the spillovers we estimate. While the vulnerability variables yield similar results concerning the significance of the indicators across country groups, the results for the exposure variables differ. For high-debt countries, both the dependence of the domestic non-bank corporate sector and that of the government on domestic bank lending are not significant. In contrast, we find for low-debt countries that higher non-bank exposure to domestic lending is significantly associated with higher domestic bank-to-sovereign spillovers, suggesting that reduced lending activity in the case of a banking shock may indeed feed through the corporate sector into sovereign risk (see [START_REF] Pagano | The sovereign-bank nexus and the case for European safe bonds[END_REF]).
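As a purely illustrative sketch of how a regression such as Equation (3.27) can be run, the snippet below estimates an OLS with time fixed effects, dummy vulnerability variables and White heteroskedasticity-consistent standard errors using statsmodels. The data file and column names are hypothetical placeholders, not the actual dataset used in the chapter.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (country i, quarter t)
df = pd.read_csv("bank_to_sov_spillovers.csv")   # placeholder file name

# Dummy vulnerability variables, split at the sample means (cf. footnotes 23-25)
df["low_capital"] = (df["capital_ratio"] < df["capital_ratio"].mean()).astype(int)
df["high_debt"] = (df["debt_gdp"] > df["debt_gdp"].mean()).astype(int)

model = smf.ols(
    "spillover_bank_to_sov ~ low_capital + high_debt + current_account"
    " + gdp_growth + exposure_gov_bonds + C(quarter)",   # C(quarter): time fixed effects
    data=df,
)
# White (HC) standard errors, since generated regressands can induce
# heteroskedasticity (cf. footnote 21)
res = model.fit(cov_type="HC0")
print(res.summary())
```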
As for high-debt countries, we also find non-significant effects of sovereign debt exposure to domestic banks for low-debt countries.

Finally, we investigate the determinants of domestic credit risk spillovers from a country's sovereign to its banking sector (see Equation (3.28)). We test the following hypotheses. First, are domestic spillovers to banks stronger if the banking sector is more vulnerable? We proxy bank vulnerability with the capital ratio, liquidity (measured by liquid assets to short-term liabilities) and NPL ratios. Second, are spillovers stronger if the domestic banking sector holds more domestic government debt, expressed in % of total assets (the "sovereign exposure channel", see Angeloni and Wolff, 2012, and Buch et al., 2016)? Third, are spillovers stronger if the domestic banking sector holds more assets of domestic non-financial firms, expressed in % of total assets (the "macroeconomic channel")?26 Here again, we express vulnerability variables in terms of dummies, where the thresholds between high and low realisations are set to sample averages.27 We estimate Equation (3.28) accordingly.

26 The underlying rationale for this hypothesis is that a sovereign shock can feed into the real sector and then affect domestic banks, e.g. through increased taxes and lower consumer spending, or through a downgrade of non-financial companies. This last channel occurs because of the "rating channel": companies cannot have a better rating than their own sovereign, so when the sovereign is downgraded this also affects private companies [START_REF] Arezki | Bad news spreads[END_REF].
27 NPL ratios of 3.6%, 15.2% for capital ratios, 80.0% for liquid assets to short-term liabilities.

The results in Table 3.5.6 suggest that, in line with the literature, the "sovereign exposure channel" plays a major role for both high- and low-debt countries, as underlined by the positive and significant coefficients associated with government bond exposures. The "macroeconomic channel" seems to matter only for low-debt countries (positive and significant coefficient for NFC exposures). Concerning the role of bank vulnerability, we find mixed results across country groups. For high-debt countries, both higher NPL and lower liquidity ratios are significantly associated with higher domestic sovereign-to-bank spillovers, in contrast to the capital ratio, which we find not to be significantly linked to the latter. For low-debt countries, we find both NPL and capital ratios not to be significantly linked to spillovers, while we find lower liquidity ratios to be significantly associated with higher spillovers in only one out of three regressions.

Conclusion

We propose a novel approach to the popular Diebold-Yilmaz framework by exploiting a SVAR-GARCH model that is statistically identified by the heteroskedasticity in the data. We show that this identification approach is attractive as it yields time-varying FEVDs based on the conditional variances of estimated structural errors. Moreover, we show that it is feasible to achieve economic identification between structural shocks and financial market variables in a nontrivial bijective relationship, even in a system of 16 variables. We show the advantages of this methodological contribution by comparing the results with other common identification approaches used in the time-varying spillover literature. Overall, the identification scheme is supported by the fact that the results outperform other models in terms of timeliness and narrative fit.
Additionally, we show that the obtained pairwise spillovers match propagation channels suggested by the literature. This study has some limitations that could be addressed in future research. First, our identification approach relies on a constant B_0 matrix over the full sample period.28 In principle, this constraint can be relaxed by estimating the model on shorter subsamples, for example defined by dates at which the researcher expects a structural break in interdependencies. While in Annex 3.A.5 we allow for a single change in the B_0 matrix, we leave a more profound analysis of this avenue for future research. Second, by imposing fewer constraints than previous models, the SVAR-GARCH could be applied to investigate spillover patterns for time series that have been less considered in the literature, notably market liquidity data.

3.A.1 Derivation of Forecast Error Variance Decomposition

With the law of iterated expectations:

\[ E_t \sigma^2_{k,t+h|t} = (1 - \gamma_k - g_k) + \gamma_k \sigma^2_{k,t+h-1|t} + g_k E_t \sigma^2_{k,t+h-1|t} \qquad (3.32) \]

That is:

\[ \sigma^2_{k,t+h|t} = (1 - \gamma_k - g_k) + (\gamma_k + g_k)\sigma^2_{k,t+h-1|t} \qquad (3.33) \]

In vector form:

\[ \begin{pmatrix} \sigma^2_{1,t+h|t} \\ \vdots \\ \sigma^2_{N,t+h|t} \end{pmatrix} = \begin{pmatrix} 1-\gamma_1-g_1 \\ \vdots \\ 1-\gamma_N-g_N \end{pmatrix} + \begin{pmatrix} \gamma_1+g_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \gamma_N+g_N \end{pmatrix} \begin{pmatrix} \sigma^2_{1,t+h-1|t} \\ \vdots \\ \sigma^2_{N,t+h-1|t} \end{pmatrix} \qquad (3.34) \]

To build the FEVDs, we first compute the MSPE:

\[ Y_{t+H} - Y_{t+H|t} = \sum_{i=0}^{H-1} \Theta_i \epsilon_{t+H-i} \qquad (3.35) \]

and, with the use of the structural variances estimated in Section 3.3.2:

\[ MSPE_t(H) = E_t (Y_{t+H} - Y_{t+H|t})(Y_{t+H} - Y_{t+H|t})' = \sum_{i=0}^{H-1} \Theta_i \lambda^*_{t+H-i|t} \Theta_i' \qquad (3.36) \]

Then we can evaluate the contribution of shock j to the MSPE of y_{kt} with the usual MSPE formula; the only difference with a classic SVAR is that the variances of the structural shocks are no longer normalized to 1. With \( \theta_{kj,h} \) the kj-th element of \( \Theta_h \):

\[ MSPE^j_{k,t}(H) = \theta^2_{kj,0} \sigma^2_{j,t+H|t} + \dots + \theta^2_{kj,H-1} \sigma^2_{j,t+1|t} \qquad (3.37) \]

Thus, contrary to the SVAR-GARCH, these models fulfil the second condition of a good spillover model (responsiveness) but not the first one (good identification of the events).

• Exposure to domestic government debt: sovereign debt of country i held by banking system j / total assets of banking system j; Source: IMF IFS
• Exposure to domestic NFCs: non-bank assets of country i held by banking system j / total assets of banking system j; Source: IMF IFS
• Sovereign exposure: sovereign debt of country j held by i / total sovereign debt of country j; Source: BIS CBS
• Non-bank exposure: liabilities of country j's non-banks held by i / total liabilities of country j's non-banks; Source: BIS CBS
• Debt to GDP, current account, GDP growth in %; Source: Eurostat, OECD and IMF

3.A.5 Robustness checks

We perform several checks to assess the robustness of our results. First, with regard to the exogenous regressors: in our main specification we follow Alter and Beyer (2014) and include a significant number of exogenous variables to account for the strong comovement between CDS spreads. However, one might argue that some of them are endogenous to the sovereign and bank CDS. Therefore, we also estimate the model with a more parsimonious set of exogenous variables. In line with De Santis and Zimic (2018), we consider as alternative exogenous variables: oil prices, global macro news provided by Citibank, as well as US and UK CDS spreads. The upper part of Figure 3.A.2 represents the total spillover indices S^H from the different specifications, including our main one ("SVAR-GARCH").
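To make the recursion in Equations (3.32) to (3.37) of Annex 3.A.1 concrete, the sketch below forecasts the structural variances h steps ahead with the GARCH(1,1) recursion and then accumulates the contribution of each shock to the mean squared prediction error, from which the time-varying FEVD shares follow by normalisation. The MA matrices Θ_i and the GARCH parameters are illustrative inputs, not estimates from the chapter.

```python
import numpy as np

def variance_forecasts(sigma2_t1, gamma, g, H):
    """sigma2[h-1] = sigma^2_{k,t+h|t} for h = 1..H, per Equations (3.32)-(3.34)."""
    sigma2 = np.empty((H, len(sigma2_t1)))
    sigma2[0] = sigma2_t1                            # one-step-ahead variances
    for h in range(1, H):
        sigma2[h] = (1 - gamma - g) + (gamma + g) * sigma2[h - 1]
    return sigma2

def mspe_contributions(Theta, sigma2):
    """MSPE^j_k(H) of Equation (3.37): contribution of shock j to variable k."""
    H, N = sigma2.shape
    contrib = np.zeros((N, N))
    for i in range(H):                               # theta_{kj,i}^2 * sigma^2_{j,t+H-i|t}
        contrib += Theta[i] ** 2 * sigma2[H - 1 - i]
    return contrib

N, H = 3, 10
Theta = [np.eye(N)] + [0.5 ** i * np.ones((N, N)) / N for i in range(1, H)]  # toy MA matrices
gamma, g = np.full(N, 0.05), np.full(N, 0.90)        # toy GARCH(1,1) parameters
sigma2 = variance_forecasts(np.array([1.5, 0.8, 1.1]), gamma, g, H)

contrib = mspe_contributions(Theta, sigma2)
fevd = contrib / contrib.sum(axis=1, keepdims=True)  # row k: variance shares of variable k
print(np.round(fevd, 3))
```

The only difference from a standard SVAR FEVD is that the structural variances are not normalised to one but vary with the forecast origin t, which is what makes the resulting spillover indices time-varying.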
As can be seen in the upper part of Figure 3.A.2, some level changes are observable at the beginning of the estimation period, but overall the different indices evolve in a parallel manner.

Second, as explained in Section 3.3.2, we assume a constant B_0 in our study. Some authors argued that the increase in CDS correlation during the EZ debt crisis came mainly from changes in volatility and not in propagation mechanisms [START_REF] Caporin | Measuring sovereign contagion in Europe[END_REF], but this point is disputed (De Santis and Zimic, 2018). To evaluate the robustness of our results to changes in the estimation period, we estimate our model on different subsamples. In line with Ehrmann and Fratzscher (2017), we define two subperiods: a crisis period from the beginning of our sample until 01/10/2012, and a post-crisis period running from 01/10/2012 to the end of our sample.29 30 The bottom part of Figure 3.A.2 exhibits the total spillover indices S^H from the models estimated over the full sample or over the two subsamples. Here again, apart from level differences in 2013-2014, our results appear robust to changes in the estimation period.

29 Note that Ehrmann and Fratzscher (2017) use 3 smaller subperiods, with a "crisis period" for the Greek turmoil that starts in March 2010 and ends in March 2012, and a "post-crisis period" that starts in October 2012. However, to build the total spillover indices S^H, we need the exact identification underlined in Section 3.5.1, i.e. being able to assign each shock to a single variable. This identification is not granted for any subsample, and is hard to achieve on short time intervals. Thus, in Figure 3.A.2 we rely on a large "crisis period" so as to obtain this identification.
30 Corresponding to the announcements of the implementation details of the ECB OMT programme.

Figure (3.A.2) Robustness graphs. The different lines represent the total spillover indices S^H built from our main specification ("SVAR-GARCH") as well as from the other specifications outlined above. For the upper part of the graph, the different indices are named according to the exogenous variables included (oil price, macro news from Citibank, US and UK bank or sovereign CDS). For the bottom part of the graph, the different indices represent our main specification estimated on subsamples (before and after 01/10/2012, as outlined in Ehrmann and Fratzscher (2017)). For readability we show 10-day moving averages of the indices.

3.A.6 Test for identification and estimated coefficients

We rely on the original test proposed by [START_REF] Lanne | A Multivariate Generalized Orthogonal Factor GARCH Model[END_REF] to test for the identification of B_0. The recursive test applied here gives strong evidence for full identification of B_0; see Table 3.A.3. For a more thorough description of the test, see [START_REF] Lütkepohl | Testing for identification in SVAR-GARCH models[END_REF]. Note that this result can be obtained despite the reported low power of the test because (i) of the size of our dataset and (ii) the high persistence of our 16 GARCH processes (γ_k + g_k close to 0.9 for all k), which tends to increase the power of the test.

3.A.8 IRF assumptions

An additional advantage of our econometric framework is that it imposes fewer restrictions on the impulse response functions (IRFs) than other spillover models. More specifically, Cholesky-identified SVARs, as used in VAR Cholesky and DCC Cholesky, postulate a recursive structure of the IRFs.
Generalized impulse response functions, as used in VAR GIRF, impose that the IRF of a one-unit shock i on variable j has the same initial impact as the IRF of shock j on variable i. The same criticism ultimately applies to the orthogonalization of [START_REF] Fengler | Measuring Spot Variance Spillovers when (Co)variances are Time-varying The Case of Multivariate GARCH Models[END_REF] as used in the model DCC Fengler; see the demonstrations below. By contrast, the SVAR-GARCH framework does not impose such a structure (Lütkepohl and Netšunajev, 2017a). Therefore, the level differences observed in Figure 3.5.2 may come from the overly strong assumptions of the competing models, which over- or underestimate the spillovers.

Why do the identifications used in VAR GIRF and in DCC Fengler assume symmetrical IRFs? A usual VAR analysis begins with the reduced-form VAR (Equation (3.6)), with the objective of recovering the structural form of Equation (3.5). Under covariance stationarity, Equation (3.6) is equivalent to its moving average representation:

\[ Y_t = \sum_{i=0}^{\infty} \Phi_i \mu_{t-i} \qquad (3.40) \]

which can be rewritten in its structural form. Under the GIRF approach, the expected value of the reduced-form errors conditional on a shock to variable j is

\[ E(\mu_t \mid \mu_{jt} = \sigma_{jj}) = [(\sigma_{1j}, \dots, \sigma_{mj})' \sigma_{jj}^{-1}] \sigma_{jj} = \Sigma_\mu e_j \qquad (3.44) \]

so that the impact of a one-standard-deviation shock to j on variable i at horizon 0 is (with Equation (3.40)):

\[ GIRF_i(0, \sigma_{jj} e_j, \Omega_{t-1}) = e_i' \Phi_0 \Sigma_\mu e_j \qquad (3.45) \]

As \( \Phi_0 = I \), we get:

\[ GIRF_i(0, \sigma_{jj} e_j, \Omega_{t-1}) = e_i' \Sigma_\mu e_j = \sigma_{ij} = \sigma_{ji} = GIRF_j(0, \sigma_{ii} e_i, \Omega_{t-1}) \qquad (3.46) \]

Similarly, the identification strategy of [START_REF] Fengler | Measuring Spot Variance Spillovers when (Co)variances are Time-varying The Case of Multivariate GARCH Models[END_REF] used in DCC Fengler also yields symmetric IRFs on impact. This is because the time-varying matrices \( B^{-1}_{0,t} \) (and hence \( B_{0,t} \)) are symmetric since, for all t, knowing that \( \Lambda_t \) is diagonal and therefore symmetric:

\[ (B^{-1}_{0,t})' = (\Gamma_t \Lambda_t^{1/2} \Gamma_t')' = \Gamma_t (\Lambda_t^{1/2})' \Gamma_t' = \Gamma_t \Lambda_t^{1/2} \Gamma_t' = B^{-1}_{0,t} \qquad (3.47) \]

To conclude, identification with GIRFs or the identification of [START_REF] Fengler | Measuring Spot Variance Spillovers when (Co)variances are Time-varying The Case of Multivariate GARCH Models[END_REF] imposes a symmetric structure of impulse responses upon impact, while identification by Cholesky assumes a recursive one. These assumptions may be controversial when it comes to financial data, which tend to respond rapidly to shocks and where variables react asymmetrically to each other.

[...] the literature on financial contagion has often relied on country-level data, given the dimensionality problem raised by the inclusion of sectoral variables. This is notably the case for papers using structural VAR models (SVARs), whose identification strategies require a limited number of variables. Consequently, the pioneering papers in this field favoured variables aggregated at the index level over sectoral data (in the manner of Diebold and Yilmaz, 2009, for spillovers between equity returns, or Diebold and Yilmaz, 2012, for an extension of this model to the bond and foreign exchange markets).

Figure (1.1) Hypothetical Micro- and Macro-Predictability according to the different views
Figure (1.2) Micro- and Macro-Raw Predictability series, over time
Figure (1.3) Macro-Raw Predictability series and averaged Micro-Raw Predictability, over time

Figures 1.2 and 1.3 do not help disentangle these two potential factors, since drops in alpha-predictability may counterbalance rises in beta-predictability (and the reverse).

[Figure note fragment:] ... R²_{i,α,t}, detailed in Section 1.4.3, can take negative values. The grey vertical bands indicate the NBER US recession dates.

Unemployment rate: it rises during economic downturns, for example throughout the 1960-61 recession, around the 1973 oil shock, during the Great Financial Crisis and during the recent Covid crisis.12

Figure (1.5) R²_{i,β,t} and US Unemployment rate, over time

[Annex 1.A.1, list of estimated models (fragment):]
... r_2 and λ_2 = r_2 - r_1 (Timmermann)
Autoregressive Model (BIC): r_{t+1} = α + β(L) r_t + u_t, number of lags chosen with the Bayesian Information Criterion (Timmermann)
Autoregressive Model (AIC): r_{t+1} = α + β(L) r_t + u_t, number of lags chosen with the Akaike Information Criterion (Timmermann)

Figure (1.A.1) Individual Micro- and Macro-Raw Predictability series, over time
Figure (1.A.3) Mean and standard deviations of raw predictability series: confidence intervals
Figure (1.A.7) R²_{i,α,t} with different Factor Specifications
Figure (1.A.8) R²_{i,β,t} with different Factor Specifications

[...] of the 2020 Covid shock, stock prices fell sharply, reflecting the deterioration of the economic outlook and the rise in risk aversion. In the following months, however, while activity remained sluggish, equity markets rebounded quickly. This disconnect between stock market and macroeconomic variables can partly be explained by other factors, notably the decline in interest rates and, for the United States, the strong profitability of the technology sector. Consequently, an econometrician trying to predict economic activity during the Covid crisis using aggregate equity data would have obtained poor results. The main idea of this chapter is therefore to use sectoral equity data, within a factor model, to predict future industrial production. Regarding results, we find, first, that our model predicts economic activity better than aggregate equity variables and also better than the benchmark variables used in macroeconomic forecasting. Second, we show that the good performance of our model comes from the fact that it filters out, among the sectoral equity variables, the component linked to the discount rate as well as the component linked to dividends derived from firms' foreign activity. We also highlight that our model overweights sectors located upstream in value chains as well as value sectors, which appear closely linked to the future business cycle.

2.1 Introduction

Forecasting macroeconomic variables using financial indicators has proven a challenging task, a surprising outcome given the fact that financial variables like bond yields and stock prices should impound expectations of future economic activity. The recent divergence between developments in equity markets and economic activity has only highlighted the apparent disconnect between finance and the real economy.
After the Covid shock in March 2020, stock prices declined abruptly, reflecting both the deterioration of investors' expectations of future economic activity and the surge in aggregate risk aversion. In the following months however, and to the surprise of many, whereas economic activity remained relatively sluggish, equity markets bounced back sharply, as illustrated in Figure 2.1.1.

Figure (2.1.1) S&P 500 and US Industrial Production (100 = Dec. 2019)

The DYs are drawn from Refinitiv Datastream indices, collected either to reflect the overall US equity market or sectoral portfolios. The sectoral indices are based on the Industry Classification Benchmark (ICB) and are available at different levels of granularity: either 11, 20 or 44 sectors. We rely on the most detailed breakdown available (44 sectors), although we remove from it 4 sectors for which the DY series were incomplete: Alternative Energy, Closed End Investments, Precious Metals and Mining, and Mortgage Real Estate Investment Trusts. Thus, in our main exercise we forecast IP growth with a factor model based on 40 different DY series. In the paper we also consider the aggregate DY, which corresponds to the average DY of the US stock market, also collected by Refinitiv Datastream.

In Figure 2.4.1 we present the in-sample RMSE of different predictive models at various horizons. The light blue, purple and dark blue bars represent, respectively, simple forecasting models based either on the term spread, on the aggregate DY or on lagged IP growth. The in-sample RMSE based on the factor model is shown as the red bar.

Figure (2.4.1) In-Sample RMSE from the different estimated models

Figure 2.5.1 indicates, in a format similar to that of Figure 2.4.1, the out-of-sample RMSE estimated for the different models. In line with the in-sample analysis, relying on disaggregated sectoral DYs improves the forecasts.

Figure (2.5.1) Out-of-Sample RMSE from the different estimated models

Figure 2.6.1 clearly highlights the fact that positive weights tend to be associated with a positive correlation of the sectoral DYs with future IP growth, whereas negative portfolio weights tend to be associated with a negative correlation of the sectoral DYs with future IP growth. In contrast, both positive and negative portfolio weights are associated with positive correlations of the sectoral DYs with future aggregate returns. As a result, the estimated factor (which equals the weighted sum of the sectoral DYs) is strongly exposed to future IP growth, but little exposed to future aggregate returns, in a fashion similar to what Kelly and Pruitt (2013) found.

Figure (2.6.1) Factor weights and DY correlations with future IP growth and future aggregate returns (in-sample estimates, forecasting over a 12-month horizon)

Figure 2.6.2 summarizes the different filterings that we consider in this section. Again, the analysis is performed here on an in-sample basis and for the 12-month prediction exercise. In red are represented the correlations of the estimated factor (F̂_t) with future US IP growth (y_{t+h}), with future aggregate US returns (r_{t+h}) or with the component of future foreign IP growth that is orthogonal to future US IP growth (û_{t+h}). In light blue are represented the same quantities but for the aggregate DY instead of the estimated factor. Finally, in purple are pictured the average correlations of the sectoral DYs with the aforementioned variables.
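As a purely illustrative sketch of the kind of factor forecast discussed above (and not the exact estimator used in the chapter), the snippet below extracts a single predictive factor from a panel of sectoral dividend yields with partial least squares, a close cousin of the Kelly and Pruitt (2013) filter, and uses it to forecast 12-month-ahead IP growth. The file and column names are hypothetical.

```python
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

h = 12                                               # forecast horizon in months
data = pd.read_csv("sectoral_dy.csv", index_col=0, parse_dates=True)  # placeholder
dy = data.filter(like="dy_")                         # 40 sectoral dividend-yield columns
ip_growth = data["ip_growth"]                        # monthly US IP growth

# Align predictors observed at t with cumulated IP growth over t+1 ... t+h
target = ip_growth.rolling(h).sum().shift(-h)
sample = pd.concat([dy, target.rename("target")], axis=1).dropna()

# One PLS factor: a weighted sum of sectoral DYs chosen for covariance with the target
pls = PLSRegression(n_components=1)
pls.fit(sample.drop(columns="target"), sample["target"])

factor = pls.transform(sample.drop(columns="target"))[:, 0]   # estimated factor F_t
forecast = pls.predict(dy.loc[sample.index].tail(1))[0, 0]    # latest 12-month IP forecast
print(round(forecast, 3))
```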
Figure (2.6.2) Factor correlations along with Sectoral and Aggregate DY correlations (in-sample estimates, forecasting over a 12-month horizon)
Figure (2.6.3) Absolute factor weights (in-sample estimates, forecasting over a 12-month horizon)
Figure (2.A.1) Robustness check, In-Sample RMSE from the different estimated models
Figure (2.A.2) Robustness check, Out-of-Sample RMSE from the different estimated models
Figure (2.A.3) Estimated Factor, Market DY and Lead IP growth (in-sample estimates, forecasting over a 12-month horizon)

3.1 Introduction

Credit risk spillovers have been among the major challenges for financial stability in the Euro Area. Recent spillover episodes include the collapse of the investment bank Lehman Brothers in 2008, the emergence of a sovereign-bank nexus in multiple Euro Area countries during the Euro Area debt crisis and the Italian political turmoil in 2018. These examples highlight that spillovers can occur along multiple and interdependent dimensions: between banking systems internationally, between sovereigns internationally, and between banking systems and sovereigns in the same country.

CDS spreads for European sovereigns are denominated in USD, while CDS spreads for the US sovereign and European banks are denominated in EUR. Our sample covers daily data between January 2008 and March 2019, spanning the GFC, the European debt crisis and several episodes of sovereign and banking turbulence such as the Italian political turmoil of May 2018.7

Figure (3.5.1) Structural Shocks and Events

Figure 3.5.2 presents the model-implied S^H. Several results are worth noting. First, the SVAR-GARCH evaluates credit risk in the EZ to be significantly less integrated than suggested by the VAR Cholesky and VAR GIRF spillover estimates. This is not surprising, as De Santis and Zimic (2018) show that GIRF identification tends to overestimate total spillovers.

Figure (3.5.2) Total Spillover Indices from different Models

Figure 3.5.4 presents estimates of group pairwise spillovers for each of the variable sets defined above. Figure 3.5.4 can be read in two ways, either from a "shock to" perspective by rows, or from a "shock from" perspective by columns. In the following, we take the "shock to" perspective. Figure 3.5.4 shows significant variation across time and groups. We find, in line with conventional wisdom, sizeable effects of sovereign periphery shocks on the rest of the EZ clustered around the beginning of the debt crisis in 2010. For example, at the height of the EZ debt crisis around mid-December 2010, when Moody's put Spain's rating on review, single variables from periphery sovereign shocks explained on average 4% and 3.5% of the variation of single variables from the periphery sovereign and periphery banking groups respectively. Other major events of spillovers from periphery sovereigns include the Irish request for financial support to the EU's Financial Stability Facility and the IMF, the EU finance ministers' gathering to decide Greece's fate in 2015, and the 2018 Italian election crisis amid fears of new elections and voter support for Eurosceptics (see the subcaption of Figure 3.5.4 for exact dates). Figure 3.5.4 suggests that periphery sovereign shocks affect CDS rates most strongly in other periphery sovereign countries, followed by periphery banking sectors. Yet, core sovereigns and banks were also significantly affected by periphery sovereign shocks.
For example, we estimate strong sovereign core spillovers in January 2009 when the Dutch government announced plans to provide a backup facility to cover the risks of the ING's securitised mortgage portfolio. Moreover,Figure 3.5.4 shows increased sovereign core spillovers around dates that coincide with a downgrade of France by S&P as well as the second round presidential election stand-off between Emmanuel Macron and Marine Le Pen. Finally, we find strong core bank spillovers, for example around the dates when ING received 10bn EUR from the Dutch government or when BNP entered a liquidity crunch when the bank was no longer able to borrow in USD. Figure ( 3 3 Figure (3.5.4) Pairwise spillovers from EZ sovereigns and banks (%) Figure ( 3 . 3 Figure (3.A.1) CDS time series (basis points, first difference) i Only three Granger causality relationships appear significant: SVAR-GARCH on VAR Cholesky and VAR GIRF, and DCC Fengler on SVAR-GARCH. The marks *, **, *** indicate, respectively, the following significance levels: 0.1, 0.05 and 0.013.A.4 Data sources OLS regressions• Similar Business Cycle : the quarterly squared difference between country i and country j's GDP growth (multiplied by (-1) so that a higher number indicates more similar tendencies) [this is similar to De Santis and Zimic (2018), albeit De Santis and Zimic (2018) sum over over time as they focus on cross sectional effects]; Source: Eurostat • Similar D/GDP : Same approach for quarterly D/GDP ratios (multiplied by (-1) so that a higher number indicates more similar tendencies); : the quarterly squared difference between banking sector i and banking sector j's NPLs (multiplied by (-1) so that a higher number indicates more similar tendencies); Source: IMF Financial Soundness indicators and SNL • Same approach for capital ratios; Source: IMF Financial Soundness indicators and SNL • Similar portfolio: In a first step we construct from CBS data portfolio vectors per quarter for each banking sector as inGreenwood et al. (2015): a vector with the holdings of sovereign debt, non-bank financial institutions, households and non-financial corporations; for a large range of counterparty countries. We then express all portfolio items in % to total assets. And finally, we calculate the sum of the squared difference of those portfolio; Source: BIS CBS • Bank claim exposure: Bank claims of country j vis-a-vis country i ∑ Bank claims of country j vis-a-vis country i , Source: BIS CBS • NPLs, capital ratios (regulatory Capital to Risk-Weighted Assets) and liquidity ratios (Liquid Assets to Short Term Liabilities) of banking systems in percent; Source: IMF Financial Soundness indicators and SNL Deuxièmement, dans le cadre de leur mandat de stabilité financière, les banques centrales doivent surveiller les risques émanant des marchés. Cela implique, entre autre, de jauger du niveau xvi de valorisation des actifs afin de déterminer si une bulle est en formation[START_REF] Geis | Measuring and interpreting the cost of equity in the euro area[END_REF], d'estimer les interconnexions financières afin d'apprécier les potentiels phénomènes de contagion entre marchés(Alter and Beyer, 2014) ou d'évaluer la résilience des participants de marchés afin de déterminer si ces derniers sont en mesure de résister à de larges chocs négatifs.Troisièmement, les marchés financiers ont une utilité informationnelle compte tenu de leur capacité, comme mentionné ci-dessus, à agréger les différentes opinions des investisseurs. 
Cette fonction peut prendre différentes formes. Les banques centrales par exemple se reposent sur les inflation-linked swaps (ILS) ou sur la valorisation des actifs protégés de l'inflation pour estimer les anticipations d'inflation, sur le court terme ou sur le long terme, des investisseurs[START_REF] Bauer | Can we rely on market-based inflation forecasts?[END_REF]. Dans ce cas précis, les banques centrales exploitent ces variables de marché pour . Par conséquent, la relation entre marchés financiers et banques centrales est protéiforme et dépend de l'objectif de la banque centrale que l'on considère. Nous soulignons ici les trois principales dimensions, à nos yeux, au travers desquelles marchés financiers et banques centrales interagissent entre eux. Tout d'abord, le principal outil utilisé par les banques centrales pour maintenir la stabilité des prix reste le taux directeur, parfois combiné avec des mesures "non-conventionnelles" comme les programmes d'achats d'actifs ou les politiques de forward guidance. En d'autres termes, dans leur fonctionnement même, les banques centrales doivent interagir avec les marchés financiers, dans ce cas précis essentiellement avec les marchés obligataires, pour atteindre leurs objectifs. jauger de la bonne transmission de leur politique monétaire, reflétée ici par l'ancrage des anticipations d'inflation. Mais les banques centrales peuvent également utiliser les données de marché comme variables d'entrée dans le cadre de leurs prévisions macroéconomiques. En effet, la littérature empirique a démontré que différents indicateurs de marché avaient des capacités prédictives quant au niveau de la future activité économique, et qu'ils avaient ainsi leur place dans le diagnostic de conjoncture des banques centrales. C'est notamment le cas pour le term spread, ou prime de terme soulignent par exemple comment des ratios de valorisation très usités comme le dividend yield ou le price earnings ratio peuvent ne pas être fiables à l'échelle d'une entreprise en raison de la difficulté d'obtenir des estimations précises de données bilantielles (profits, niveaux des actifs etc.). Troisièmement, d'un point de vue plus méthodologique, certains modèles économétriques utilisés pour répondre à ces 'examen de cette question. Du côté du marché obligataire, un nombre conséquent de papiers ont souligné la bonne performance du term spread, la différence entre les taux d'obligations souveraines à court et long terme, pour prédire les futures récessions xviii principal outil pour l Le rôle informationnel des marchés financiers vis-à-vis du niveau de la future activité économique a été également largement étudié dans la littérature. Ici aussi, les données macros ont été le questions, comme les Vecteurs Autoregressif (VARs), ont souvent une borne supérieure quant au nombre de variables qui peuvent être considérées dans leur estimation. À ce titre, les données agrégées apparaissent comme des candidats naturels pour l'étude de ces sujets dans le cadre de modèles ne tolérant qu'un nombre restreint de variables. Pour ce qui est des problématiques de valorisation des actifs, et des potentielles divergences par rapport à leur valeur fondamentale dans le cadre de processus de bulles, la plupart des papiers pionniers sur la question se sont reposés sur des données agrégées. 
[START_REF] Shiller | Do stock prices move too much to be justified by subsequent changes in dividends?: Reply[END_REF] par exemple soulignent que, après avoir au préalable estimé la valeur fondamentale historique du Standard and Poor's Composite Stock Price Index et du the Dow Jones Industrial Average, les prix des actions au niveau indiciel étaient trop volatiles pour être seulement influencés par des facteurs fondamentaux. Par la suite, et de façon relativement similaire, d'autres papiers ont essayé d'évaluer la relation entre le prix des actifs et leur valeur fondamentale, mais à nouveau à l'aide de données macros. C'est le cas, entre autres, de Lee et al. (1999) qui cherchent à estimer la valeur fondamentale du Dow Jones à l'aide de techniques de cointégration. Une autre option pour jauger du niveau d'efficience des prix de marchés est d'évaluer dans quelle mesure les prix d'action sont influencés par les nouvelles concernant leurs futurs dividendes. Cette approche a notoirement été utilisée par Campbell (1991) et par Campbell and Ammer (1993), mais également à l'échelle indicielle. qui ont tendance, eux, à présenter un profil de futurs dividendes stable dans le temps. Par conséquent, les secteurs growth vont présenter une equity duration plus forte que les secteurs value. Dit autrement, leurs ratios de valorisation seront plus sensibles aux variations du taux d'actualisation. Cette différence de sensibilité peut en retour être utile pour filtrer les ratios de valorisation sectoriels de leur composante liée au taux d'actualisation afin de mieux prédire la croissance des dividendes agrégés (à la manière deKelly and Pruitt, 2013).Enfin, les séries temporelles sectorielles peuvent être bénéfiques dans l'analyse des phénomènes de contagion dans la mesure où elles peuvent permettre d'identifier la chaîne de causalité dans les processus de propagation de chocs. À titre d'exemple, différents papiers dans la littérature Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Return Predictability in the Literature . . . . . . . . . . . . . . . . . . . . . . 1.3 Working Hypotheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.1 H 1 , Samuelson's view . . . . . . . . . . . . . . . . . . . . . . . . . . List of estimated Models . . . . . . . . . . . . . . . . . . . . . . . . . 1.A.2 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.A.3 Raw Predictability series: individual graphs . . . . . . . . . . . . . . . Model Specification and Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 A Factor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 In-Sample Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimated factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Economic results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CONTENTS CONTENTS xx xxii xxiii xxix xxx aux différentes firmes. De la sorte, les facteurs affectant les rendements action peuvent forte-ment varier en fonction de l'échelle que l'on considère. 
Ces facteurs sont plus associés aux anticipations de profitabilité au niveau micro mais, en raison de processus de diversification, ils dépendent plus du taux d'actualisation (et donc par exemple du niveau d'aversion au risque des investisseurs) au niveau macro. Ce contraste de comportement entre rendements micros et macros peut être bénéfique pour le chercheur. Il est en effet possible de tirer profit de l'hétérogénéité en coupe transversale de ces facteurs liés aux futurs dividendes pour estimer le niveau du taux d'actualisation requis par les investisseurs. Kelly and Pruitt (2013) par exemple estiment un modèle à facteurs à partir d'un échantillon de book-to-market ratios sectoriels. L'utilisation de données sectorielles per-met de filtrer la composante-dividende du prix des actions et d'obtenir une évaluation du taux d'actualisation, ce dernier permettant en retour d'obtenir une prédiction relativement précise des futurs rendements au niveau indiciel. parce que les banques détiennent un montant signifi-catif d'obligations souveraines dans leurs comptes (le "canal bilantiel", Angeloni and Wolff, 2012, Buch et al., 2016). Par conséquent, omettre des variables sectorielles telles que les taux d'intérêt bancaires ou les CDS bancaires peut engendrer une vision biaisée du processus de contagion. Nous avons essayé, au travers des trois arguments mentionnés ci-dessus, de souligner l'importance des données de marché micros/sectorielles, y compris pour l'étude de phénomènes macroé-conomiques. Les trois chapitres de cette thèse de doctorat visent à mettre en exergue le bénéfice données action pour la prévision macroéconomique et, enfin, l'étude des processus de contagion entre les marchés de CDS souverains et bancaires. Résumé des trois chapitres de thèse 1.A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.A.3 Dataset -traditional factor model . . . . . . . . . . . . . . . . . . . . d'autres facteurs, notamment par le déclin des taux d'intérêt sans risque et, pour les États-Unis, Appendices 2.A.2 Additional forecasting results . . . . . . . . . . . . . . . . . . . . . . déconnexion entre variables boursières et variables macros peut en partie être expliquée par 2.A.1 The Factor model for sectoral and aggregate DYs . . . . . . . . . . . . niveau d'activité restait relativement morose, les marchés action rebondirent rapidement. Cette 1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . de l'aversion au risque des investisseurs. Dans les mois qui suivirent toutefois, alors que le 1.6 Robustness checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendices tiré de ces dernières dans trois domaines: la prédiction des rendements action, l'utilisation des Le premier chapitre de cette thèse de doctorat compare la prédictibilité des rendements action, c'est-à-dire notre capacité à prévoir les rendements boursiers dans le futur, au niveau macro par rapport au niveau micro. Il est à noter toutefois que la théorie économique identifie deux sources de prédictibilité: la variation dans le temps des rendements anticipés, la prédictibilité-bêta (Cochrane, 2008), ou les inefficiences de marché, la prédictibilité-alpha. 
Pour cette dernière, Samuelson avançait l'idée que les rendements macros étaient plus inefficients que les rende-ments micros, dans la mesure où les facteurs (efficients) idiosyncratiques ne se transposaient pas au niveau indiciel en raison de processus de diversification. En conséquence de quoi, les rendements des indices présenteraient plus d'inefficiences que les rendements micros (Jung and Shiller, 2005). Pour évaluer cette hypothèse, nous comparons les prédictibilités micros et macros sur données américaines afin d'identifier si les premières sont effectivement moins présentes que les secondes. De plus, nous reprenons la méthodologie de Rapach et al. (2011) en l'étendant à un cadre non-constant dans le temps afin de dénouer, au cours du temps, les deux sources de prédictibilité. Pour ce qui est des résultats, nous montrons que notre interprétation de l'intuition de Samuelson n'est pas valide dans la mesure où la prédictibilité micro n'est pas plus faible que la prédictibil-ité macro. Toutefois, nous montrons également que des phénomènes de diversification sont bien à l'oeuvre dans la mesure où l'agrégation des séries de prédictibilité micro au cours du temps donne un indice qui est très proche de notre série de prédictibilité macro. Deuxièmement, nous montrons que nos estimations des prédictibilités-alpha et -bêta sont cohérentes avec leurs impli-cations théoriques. Cela suggère notamment que les deux phénomènes jouent un rôle dans notre base de données. Le deuxième chapitre de cette thèse porte sur la prévision d'activité économique sur la base de données action sectorielles et dans le cadre d'un modèle à facteurs. L'idée originale de ce travail émergea lors du choc du Covid-19 en mars 2020 lorsque les prix d'actions chutèrent brutalement en lien avec la détérioration des perspectives d'activité économique et avec la hausse par la forte profitabilité du secteur du numérique. Par conséquent, un économètre essayant de prédire l'activité économique pendant la crise du Covid à l'aide de données agrégées du marché action obtiendrait des résultats médiocres. La principale idée de ce chapitre est ainsi d'utiliser des données action sectorielles, dans le cadre d'un modèle à facteurs, pour prédire la future production industrielle américaine. Pour ce qui est des résultats, nous trouvons, en premier lieu, que notre modèle à facteurs prédit mieux l'activité économique par rapport aux variables agrégées du marché action mais également par rapport aux variables de référence utilisées en prédiction macroéconomique (à la fois à l'intérieur et 1.1 1.3.2 H 2 , Cochrane's view . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.3 H 3 , Third view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Data and Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4.1 Stock Return Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4.2 Constructing Raw Predictability Metrics . . . . . . . . . . . . . . . . . 1.4.3 Disentangling the Sources of Predictability . . . . . . . . . . . . . . . 1.5 Empirical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5.1 Micro-and Macro-Raw Predictability . . . . . . . . . . . . . . . . . . 1.5.2 Alpha-and Beta-Predictability . . . . . . . . . . . . . . . . . . . . . . 3.A.8 IRF assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.A.7 List of Events . . . . . . . . . . . . . . . . . . . . . . . . 
à l'extérieur de l'échantillon). Deuxièmement, nous montrons que la bonne performance de notre modèle vient du fait qu'il filtre, parmi les variables action sectorielles, la composante liée au taux d'actualisation ainsi que la composante liée aux dividendes tirés de l'activité étrangère des firmes. Nous soulignons également que notre modèle surpondère les secteurs en amont des chaînes de valeur ainsi que les secteurs value qui apparaissent intimement liés au niveau du futur cycle des affaires américain.

1 Stock Return Predictability: comparing Macro- and Micro-Approaches

1.1 Introduction

Some forms of the Efficient Market Hypothesis (EMH) imply that stock returns are not pre-
Stock Return Predictability: comparing Macro-1.A.5 Standard errors, mean and standard deviations of raw predictability series 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dictable xxviii 1 par exemple soulignent que la relation positive entre la croissance des profits et les rendements au niveau micro devient négative au niveau macro. Kothari et al. (2006) fournissent des résultats similaires pour les surprises quant à la profitabilité d'une firme et aux rendements contemporains de cette dernière. Au niveau micro les rendements sont positivement liés à ces surprises, mais cette corrélation disparaît au niveau indiciel. De la même manière, Hirshleifer et al. (2009) relèvent le fait que des provisions élevées tendent à prédire des futurs rendements négatifs pour une entreprise, mais des rendements nuls ou positifs pour un indice. Toutes ces études partagent le même raisonnement: les dividendes reflètent des comportements idiosyncratiques tandis que les taux d'actualisation sont communs Deuxièmement, les données sectorielles peuvent être bénéfiques non seulement parce que les variables financières micros se comportent différemment des variables macros, mais aussi parce que les chercheurs peuvent exploiter l'hétérogénéité des comportements entre variables micros. À nos yeux, cette hétérogénéité peut être utile sur plusieurs aspects. Dans la mesure où les dividendes sectoriels répondent de manière différente à des chocs macroéconomiques, évaluer comment les valorisations action micros réagissent lors de dates spécifiques permet précisément d'identifier ces chocs. Ainsi, Venditti and Veronese (2020) utilisent le prix des actions des compagnies aériennes pour identifier les chocs d'offre de pétrole, notamment car l'augmentation des prix du pétrole correspondante est susceptible d'affecter plus les rendements des compagnies aériennes que celles des indices action agrégés. De la même manière, les dividendes sectoriels, et donc les ratios de valorisation sectoriels, peuvent répondre différemment à des variations du futur niveau d'activité économique. Par conséquent, surpondérer certains secteurs par rapport à d'autres peut s'avérer bénéfique dans le cadre de prévisions macroéconomiques. Andersson and Agostino (2008) ont ainsi montré par exemple comment les secteurs des hydrocarbures et des matériaux de base présentaient des rendements action qui étaient plus corrélés au futur PIB de la Zone Euro que les autres secteurs. Enfin, l'hétérogénéité entre les prix des actions au niveau micro importe au-delà des deux canaux liés aux dividendes sectoriels mentionnés ci-dessus. Un mécanisme plus subtil, décrit à nouveau par Kelly and Pruitt (2013) , consiste dans le fait que les secteurs growth, typiquement constitués d'entreprises du secteur numérique, ont des dividendes plus éloignés dans le temps que les xxi secteurs value, se sont penchés sur les dynamiques des taux d'intérêt ou des Credit Default Swaps (CDS) souverains durant la crise de la dette européenne en étudiant uniquement des variables souveraines à l'échelle des pays (Ehrmann and Fratzscher, 2017, De Santis and Zimic, 2018) . Pourtant, une part substantielle des dynamiques de CDS sur la période provient de phénomènes de cercles vicieux de propagation du risque de crédit entre les variables souveraines et les secteurs bancaires domestiques. De tels mécanismes de rétroaction peuvent avoir lieu pour plusieurs raisons. 
Bank risk can, among other things, affect sovereign risk through the "bailout channel", namely explicit or implicit public guarantees in the event of stress in the banking sector (Alter and Schüler, 2012), or [...]

[...] 2, and the S&P 500 Price Earning Ratio. Additionally, we also look at survey variables to gauge market exuberance, in the form of the U.S. One-Year Confidence Index of Yale University. Second, we consider for X_FC,t three different metrics to reflect funding constraints. The first one is stock return volatility, the second one the Baa-Aaa corporate bond spread and the third one the seasonally adjusted change in U.S. broker-dealer leverage (LF_t, Adrian, Etula and Muir, "Financial Intermediaries and the Cross-Section of Asset Returns"). Following Farmer et al. ("Pockets of predictability"), we take LF_t as a proxy of funding constraints, since lower leverage is associated with a reduced availability of arbitrage capital. Eventually, for the business cycle variables X_RA,t, we take as a main proxy the US unemployment rate, but we also use the Consumer Sentiment Index from the University of Michigan in the robustness checks of Section 1.6. These different covariates, as well as their originating sources, are more precisely detailed in Table 1.A.3 of Appendix 1.A.2.

Table (1.1) Regression results for the Alpha-Predictability
Dependent variable: alpha-predictability R²_{i,α,t}; specifications (1)-(9).
-unemp_t: 0.007***, 0.006**, 0.007***, 0.008***, 0.010***, 0.007***, 0.013***, 0.015***, 0.014*** (s.e. 0.002, 0.003, 0.002, 0.002, 0.002, 0.002, 0.004, 0.004, 0.004)
-ecy_t: 0.240***, 0.443***, 0.243*** (s.e. 0.068, 0.130, 0.069)
pe_t: 0.001***, 0.001***, 0.001*** (s.e. 0.0002, 0.0001, 0.0002)
Yale_t: 0.002***, 0.002***, 0.002*** (s.e. 0.0004, 0.0003, 0.0004)
vol_1,t: -0.00001, -0.001, 0.003*** (s.e. 0.001, 0.001, 0.001)
-LF_t: 0.0001, -0.00002, 0.001* (s.e. 0.0002, 0.0002, 0.001)
Baa_t: 0.003, -0.004, 0.021*** (s.e. 0.008, 0.008, 0.007)
Const.: 0.032***, 0.032***, 0.032***, 0.018*, 0.033***, 0.017*, -0.101***, -0.050*, -0.113*** (s.e. 0.010, 0.012, 0.010, 0.010, 0.011, 0.010, 0.033, 0.027, 0.034)
Obs.: 856, 597, 856, 856, 597, 856, 214, 214, 214
R²: 0.175, 0.265, 0.175, 0.166, 0.250, 0.166, 0.363, 0.427, 0.380
Adj. R²: 0.172, 0.261, 0.172, 0.163, 0.246, 0.163, 0.354, 0.418, 0.371
Note: * p<0.1; ** p<0.05; *** p<0.01

[...] (R²_{i,α,t} and R²_{i,β,t}) or at the dispersion around the latter (the shaded areas on Figures 1.A.7 and 1.A.8). Second, we provide in Table 1.A.4 of Appendix 1.A.8 additional regression results in line with our analysis of Section 1.5.2. We use as an alternative business cycle variable the Consumer Sentiment Index from the University of Michigan, and as a supplementary financial friction proxy a different metric of stock market volatility (computed by estimating a GARCH(1,1) on daily stock returns instead of taking the monthly average of squared returns). We thus notice in Table 1.A.4 that these modifications leave the main results unchanged: γ_IE,α and γ_RA,β are still significantly positive and negative respectively, γ_FC,α and γ_FC,β remain non-significant, and γ_RA,α remains positive (although not significantly).
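The two ingredients just mentioned — a GARCH(1,1) volatility proxy and predictability regressions with Newey-West standard errors — can be sketched in a few lines. This is only an illustration, not the authors' code: the series names (daily_ret, the regressand and regressors) and the 12-lag HAC window are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

def monthly_garch_vol(daily_ret: pd.Series) -> pd.Series:
    """Monthly volatility proxy: average of the fitted daily conditional
    volatility from a GARCH(1,1) (daily_ret assumed to be daily returns in %)."""
    res = arch_model(daily_ret.dropna(), vol="Garch", p=1, q=1).fit(disp="off")
    return res.conditional_volatility.resample("M").mean()

def hac_regression(y: pd.Series, x: pd.DataFrame, lags: int = 12):
    """OLS with Newey-West (HAC) standard errors, e.g. regressing the
    alpha-predictability on the unemployment rate and other covariates."""
    X = sm.add_constant(x)
    return sm.OLS(y, X, missing="drop").fit(cov_type="HAC", cov_kwds={"maxlags": lags})
```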
Table (1.A.2) External regressors used in Models 9 to 20 in Table 1.A.1 (variable — description — source)
- tms_t, Term spread: 10-Year Treasury rate minus the 3-Month T-Bill rate (Amit Goyal website before January 2020, FRED after January 2020)
- cape_1,t, Cyclically-adjusted PE (CAPE) ratio 1: real S&P 500 prices divided by the 10-year moving average of the corresponding real earnings (Robert Shiller website)
- cape_2,t, Cyclically-adjusted PE (CAPE) ratio 2: CAPE ratio with scaled earnings, i.e. adjusted to account for changes in corporate payout policy (Robert Shiller website)
- pe_t, PE ratio: nominal S&P 500 prices divided by corresponding nominal earnings (Robert Shiller website)
- bm_t, Book-to-Market ratio: median book-to-market ratio of the Fama-French 100 portfolios (Kenneth French website)
- ecy_t, Excess CAPE yield: inverse of cape_1,t minus the 10-year real sovereign rate (Robert Shiller website)
- dp_t, Dividend-Price ratio: log of S&P 500 nominal dividends minus log of contemporaneous S&P 500 nominal prices, as in Goyal and Welch (2008) (Robert Shiller website)
- dy_t, Dividend Yield: log of S&P 500 nominal dividends minus log of lagged S&P 500 nominal prices, as in Goyal and Welch (2008) (Robert Shiller website)
- vol_1,t, Return Volatility 1: monthly average of daily squared aggregate returns, as in Goyal and Welch (2008) (Kenneth French website)
- vol_2,t, Return Volatility 2: monthly average of daily aggregate return volatility estimated with a GARCH(1,1) (Kenneth French website)
- index_t, Index level: S&P 500 index level (Robert Shiller website)
- IP_t, Industrial Production: US Industrial Production (FRED)
- Model selection, in-sample (Timmermann): from the J preceding models (apart from Model 22), we evaluate the in-sample RMSE of each single model and take as a prediction the forecast of the model with the lowest RMSE.

1.A.2 Datasets
The variables listed above are available over the whole estimation period (September 1945 - October 2020).

Table (1.A.3) Additional external regressors used in Sections 1.5.2 and 1.6
Several variables also used in the regressions of Sections 1.5.2 and 1.6 are already detailed in Table 1.A.2: ecy_t, pe_t, vol_1,t and vol_2,t.
- Michigan_t, Consumer Sentiment: Consumer Sentiment Index from the University of Michigan (FRED)
- unemp_t, Unemployment rate: US unemployment rate (FRED)
- Baa_t, Baa-Aaa spread: Moody's Seasoned Baa Corporate Bond Yield minus Aaa Corporate Bond Yield (FRED)
- LF_t, U.S. broker-dealer leverage: seasonally adjusted changes in U.S. broker-dealer leverage, Adrian, Etula and Muir (2014) (Tyler Muir website)
- Yale_t, Confidence Index of Yale University: U.S. One-Year Confidence Index (Yale University website)

1.A.3 Raw Predictability series: individual graphs

Table (1.A.4) Additional Regression Results for the Alpha- and Beta-Predictability
The table reports the different regression results with R²_{i,α,t} and R²_{i,β,t} as predicted variables. t-statistics have been computed using Newey-West standard errors.
Variables are rearranged so that an increase in X_IE,t, X_FC,t and X_RA,t reflects, respectively, a surge in market effervescence, an aggravation of financial constraints and a strengthening of economic activity.
Dependent variables: alpha-predictability R²_{i,α,t} (columns (1)-(2)) and beta-predictability R²_{i,β,t} (columns (1)-(2)). Reported coefficients: pe_t 0.001** and 0.001*** (alpha), -0.001*** and -0.001*** (beta); Michigan_t 0.0004 (alpha), -0.001*** (beta); -unemp_t 0.008*** (alpha), -0.008*** (beta); vol_t -0.0004 (alpha), 0.0004 (beta); vol_2,t -0.002 (alpha), 0.002 (beta); Const. -0.074***, 0.017* (alpha), 0.063***, -0.041*** (beta). Obs.: 496, 856, 496, 856; R²: 0.077, 0.168, 0.086, 0.107; Adj. R²: 0.071, 0.165, 0.086, 0.107. Note: * p<0.1; ** p<0.05; *** p<0.01

2 Forecasting Real Activity using Cross-Sectoral Stock Market Information
(with Nicolas Chatelais and Menzie Chinn)

Table (2.4.1) Predictive coefficients of the estimated factor (in-sample estimates)
Dependent variable: IP growth over 12 months (1), 18 months (2) and 24 months (3).
Market DY: -0.014* (0.008), -0.015 (0.012), -0.014 (0.014)
Factor: 0.282** (0.125), 0.297*** (0.112), 0.313*** (0.116)
Constant: 0.038* (0.022), 0.039 (0.035), 0.035 (0.046)
Observations: 532, 526, 520. R²: 0.265, 0.270, 0.285. Adjusted R²: 0.262, 0.267, 0.282. Residual Std. Error: 0.038 (df = 529), 0.047 (df = 523), 0.054 (df = 517). F Statistic: 95.295*** (df = 2; 529), 96.723*** (df = 2; 523), 103.059*** (df = 2; 517).
Note: The reported regressions are made using Newey-West heteroskedasticity and serial correlation robust standard errors. * p<0.1; ** p<0.05; *** p<0.01

Table 2.A.1 in the Appendix reports the [...]

Table (2.5.1) Difference in RMSE with the main benchmark models (factor model minus corresponding benchmark, out-of-sample estimates)
Benchmark \ Horizon: 12 months, 18 months, 24 months
Market DY: -2.01*, -3.76*, -2.33***
Term spread: -1.68, -2.63*, -2.41***
Lagged IP growth: -4.62**, -4.32***, 8.67***
Note: The table reports the difference in RMSE of the factor model compared to the different benchmarks (a negative value means that the factor model outperforms the corresponding benchmark in terms of RMSE). Stars represent the Diebold-Mariano test p-values under the null hypothesis that the factor model performs worse than the benchmark models indicated in the first column. * p<0.1; ** p<0.05; *** p<0.01
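As a reading aid for the RMSE comparisons above, the sketch below shows one standard way to compute the RMSE difference between two forecast series and a Diebold-Mariano statistic with a HAC variance. It is illustrative only: the variable names (e_factor, e_bench), the squared-error loss and the 12-lag HAC window are assumptions, not the authors' exact implementation.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def rmse_diff(e_factor, e_bench):
    """RMSE(factor model) minus RMSE(benchmark); negative favours the factor model."""
    e_factor, e_bench = np.asarray(e_factor), np.asarray(e_bench)
    return float(np.sqrt(np.mean(e_factor**2)) - np.sqrt(np.mean(e_bench**2)))

def diebold_mariano(e_factor, e_bench, hac_lags: int = 12):
    """DM test on the loss differential d_t = e_factor_t^2 - e_bench_t^2.
    One-sided p-value for H0: the factor model does not forecast better."""
    d = np.asarray(e_factor)**2 - np.asarray(e_bench)**2
    ols = sm.OLS(d, np.ones_like(d)).fit(cov_type="HAC", cov_kwds={"maxlags": hac_lags})
    stat = float(ols.tvalues[0])
    return stat, float(norm.cdf(stat))  # small p-value => factor model better
```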
Table 2.5.2 below summarizes the differences in RMSE of the aforementioned models, augmented or not with our factor stemming from the sectoral equity variables. As the models that we compare are nested, the reported p-values in Table 2.5.2 stem from Clark and West (2007) tests.

Table (2.5.2) Difference in RMSE with alternative factor models (factor model minus corresponding PCA-factor benchmark, out-of-sample estimates)
Benchmark \ Horizon: 12 months, 18 months, 24 months
1 PCA-Factor: -5.72*, -13.31*, -8.16*
1 PCA-Factor & lag IP growth: -3.43**, -8.49, -7.79**
2 PCA-Factors: -0.17**, -0.16**, -0.58*
2 PCA-Factors & lag IP growth: -0.85, -1.29, -7.54
3 PCA-Factors: -0.18***, -0.36*, -0.76*
3 PCA-Factors & lag IP growth: -5.62**, -0.69, -6.55
Note: The table reports the difference in RMSE of the models indicated in the first column (augmented with the factor F_t stemming from the sectoral equity variables) with respect to the same models without this specific factor. A negative value means that augmenting the model with the factor F_t improves the RMSE. Stars represent the Clark and West (2007) test p-values under the null hypothesis of equal MSPE. * p<0.1; ** p<0.05; *** p<0.01

In Table 2.5.2, notice that augmenting the PCA-factors with the factor built with the sectoral DYs improves the RMSE in virtually all cases, with RMSE gains being significant in two thirds of the considered cases. This highlights the extra information that can be gained with disaggregated equity variables.

Table (2.5.3) Difference in RMSE by period (factor model minus corresponding benchmark, out-of-sample estimates)
Benchmark / Period \ Horizon: 12 months, 18 months, 24 months
Market DY / Negative IP growth: -3.8*, -7.06**, -6.82***
Market DY / Positive IP growth: -1.05**, -0.82***, -0.33
Term spread / Negative IP growth: -3.47*, -8.05**, -6.62***
Term spread / Positive IP growth: -0.67**, 0.17, -0.58
Lagged IP growth / Negative IP growth: -8.71**, -9.59***, -21.71***
Lagged IP growth / Positive IP growth: -2.31**, -3.21***, -1.5***
Note: The table reports the difference in RMSE of the factor model compared to the different benchmarks (a negative value means that the factor model outperforms the corresponding benchmark in terms of RMSE). Stars represent the Diebold-Mariano test p-values under the null hypothesis that the factor model performs worse than the benchmark models indicated in the first column. * p<0.1; ** p<0.05; *** p<0.01
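Because the factor-augmented models nest their benchmarks, the text relies on Clark-West (2007) MSPE-adjusted comparisons. The sketch below is a generic illustration of that adjustment (not the authors' code); y, f_small and f_large denote the realised target and the forecasts of the nested and larger models, and the HAC lag choice is an assumption.

```python
import numpy as np
import statsmodels.api as sm

def clark_west(y, f_small, f_large, hac_lags: int = 12):
    """Clark-West (2007) test for nested models.
    H0: equal MSPE; H1: the larger (nesting) model has the lower MSPE.
    Returns the t-statistic on the MSPE-adjusted loss differential."""
    y, f_small, f_large = map(np.asarray, (y, f_small, f_large))
    e_small, e_large = y - f_small, y - f_large
    f_adj = e_small**2 - (e_large**2 - (f_small - f_large)**2)
    res = sm.OLS(f_adj, np.ones_like(f_adj)).fit(cov_type="HAC",
                                                 cov_kwds={"maxlags": hac_lags})
    return float(res.tvalues[0])  # compare with one-sided normal critical values
```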
Table (2.A.1) Difference in RMSE with volatility models (factor model minus corresponding benchmark, out-of-sample estimates)
Note: The table reports the difference in RMSE of the factor model compared to the different benchmarks (a negative value means that the factor model outperforms the corresponding benchmark in terms of RMSE). The benchmarks used in this exercise are univariate or bivariate regressions relying on a market volatility variable, augmented with the term spread for the last four models. The volatility metrics are: the monthly variance of daily log returns on the US stock market (Volatility 1); the monthly sum of daily squared returns on the US stock market, à la Goyal and Welch (2008) (Volatility 2); the VIX; and the Merrill Lynch Option Volatility Expectations (MOVE), a metric of bond market volatility. Stars represent the Diebold-Mariano test p-values under the null hypothesis that the factor model performs worse than the benchmark models indicated in the first column.
Benchmark \ Horizon: 12 months, 18 months, 24 months
Volatility 1: -4.76*, -1.57*, -9.97*
Volatility 2: -4.82*, -2.22**, -7.28**
VIX: -2.17*, -2.09**, -0.98**
MOVE: -1.69*, -2.26*, -2.81**
Volatility 1 + Term spread: -3.69**, -3.27*, -10.46**
Volatility 2 + Term spread: -3.76**, -2.94*, -8.07**
VIX + Term spread: -3.67**, -3.27**, -2.89**
MOVE + Term spread: -2.74**, -4.06*, -5.09**
* p<0.1; ** p<0.05; *** p<0.01

Table (2.A.2) Difference in RMSE by country (factor model minus corresponding benchmark, out-of-sample estimates, 12-month horizon)
Benchmark \ Country: Canada, France, Germany, Switzerland, UK
Market DY: -2.13*, -3.14*, -1.27***, -5.17*, -1.5***
Term spread: -3.03*, 0.16, -1.69***, -1.46***, -1.26*
Lagged IP growth: -4.2**, -0.08, -7.62*, -4.44**, -0.11
Number of sectors: 21, 28, 24, 30, 38
* p<0.1; ** p<0.05; *** p<0.01

Table (3.5.1) Forecast error variance decomposition, average over time (%)
Columns: BE bk, FR bk, DE bk, IT bk, NL bk, ES bk, PT bk, DE, BE, FR, GR, NL, ES, IT, PT, IE.
BE bk: 81, 1, 3, 0, 2, 0, 0, 2, 1, 1, 0, 1, 0, 0, 0, 1
FR bk: 0, 54, 5, 2, 0, 6, 0, 2, 16, 11, 1, 2, 4, 6, 3, 5
DE bk: 3, 9, 78, 3, 2, 3, 1, 4, 11, 7, 2, 4, 11, 8, 10, 10
IT bk: 3, 5, 1, 83, 17, 16, 4, 0, 1, 0, 5, 0, 2, 2, 3, 0
NL bk: 1, 2, 0, 5, 61, 1, 1, 3, 3, 3, 3, 7, 19, 4, 9, 4
ES bk: 2, 23, 3, 1, 1, 56, 1, 10, 4, 3, 1, 1, 1, 1, 1, 4
PT bk: 2, 1, 0, 1, 0, 1, 84, 0, 0, 1, 2, 0, 1, 1, 2, 1
DE: 4, 1, 1, 0, 2, 2, 0, 70, 13, 8, 2, 10, 4, 5, 3, 6
BE: 0 [...]

Table (3.5.2) Percentage of good event-identification by model
Columns: DCC Cholesky, DCC Fengler, VAR Cholesky, VAR GIRF, SVAR-GARCH
(i) Subset of sovereign events: 11.0, 22.0, 44.0, 39.0, 78.0
(ii) Total sovereign events: 30.8, 33.3, 33.3, 38.5, 64.1
(iii) Total sovereign and bank events: 34.2, 36.8, 39.5, 47.4, 68.4
(iv) Candelon et al. (2011): 36.3, 45.4, 36.4, 44.6, 63.6
(v) Alexandre et al. (2016): 12.5, 25.0, 75.0, 0.5, 75.0

3.5 Results
The explanatory variables d are distance measures that include (i) the squared difference between country i's and country j's GDP growth in t and (ii) the squared difference between country i's and country j's government debt-to-GDP ratio in t. Exposure^k_{j→i,t} is the exposure of country j to country i in respect of either the share of exports or portfolio assets (equity and bonds). The choice of these exposure variables follows the empirical work by De Santis and Zimic (2018) and the theoretical work by Foerster et al. (2011) (see Annex 3.A.4 for data sources and construction of the explanatory variables). We use time fixed effects and, following the regression design in De Santis and Zimic (2018), fixed effects for the origin of the sovereign shock.

Table (3.5.3) Factors associated with spillovers from sovereigns to sovereigns
Columns (1)-(3):
Similar BC: -0.00005 (0.001), -0.002 (0.001), -0.001 (0.001)
Similar D/GDP: 0.021*** (0.002), 0.007*** (0.002), 0.014*** (0.002)
Trade exposure: 0.433*** (0.020)
Investment exposure: 0.236*** (0.028)
Time fixed effects? Yes, Yes, Yes. i fixed effects? Yes, Yes, Yes.
Observations: 3,240; 3,240; 3,171. R²: 0.448, 0.598, 0.490. Adjusted R²: 0.438, 0.591, 0.481.
* p<0.05; ** p<0.01; *** p<0.001
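A compact way to summarize the specification that the prose above describes, and that Tables 3.5.3-3.5.6 report, is the panel regression below. This is a reconstruction from the surrounding text: the exact symbols, the coefficient indexing and the error structure are assumptions rather than the thesis's own equation.

```latex
S_{j \to i,t}(h) = \beta_0 + \alpha_t + \alpha_j
 + \beta_1\, d^{BC}_{ij,t} + \beta_2\, d^{D/GDP}_{ij,t}
 + \beta_3\, \mathrm{Exposure}^{k}_{j \to i,t} + \varepsilon_{ij,t},
```

where S_{j→i,t}(h) is the estimated pairwise spillover from j to i at horizon h, d^{BC} and d^{D/GDP} are the two squared-distance measures, α_t are time fixed effects and α_j fixed effects for the origin of the shock.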
Table (3.5.4) Factors associated with spillovers from banks to banks
Columns (1)-(3):
Similar NPLs: -0.001 (0.003), -0.003 (0.003), -0.001 (0.003)
Sim. Capital ratios: 0.037*** (0.009), 0.029** (0.010), 0.040*** (0.010)
Similar portfolio: 7.304*** (1.355)
Bank claims: -0.013 (0.009)
Time fixed effects? Yes, Yes, Yes. i fixed effects? Yes, Yes, Yes.
Observations: 1,812 in each column. R²: 0.434, 0.439, 0.435. Adjusted R²: 0.417, 0.422, 0.417.
* p<0.05; ** p<0.01; *** p<0.001

Table (3.5.5) Factors associated with spillovers from banks to sovereigns in the same country
The corresponding specification is
ω_{sov_i→bank_i,t}(h) = β_0 + α_t + β_1 NPL_{bank_i,t} + β_2 Lev.R._{bank_i,t} + β_3 Liq.R._{bank_i,t} + β_5 exposure^k_{bank_i→sov_i,t} + ε_{sov_i,bank_i,t}   (3.28)

Table (3.5.6) Factors associated with spillovers from sovereigns to banks in the same country
Results are reported separately for high-debt countries (columns (1)-(3)) and low-debt countries (columns (4)-(6)). The regressors include NPLs, capital, debt to GDP, liquid assets, the current account, GDP growth and exposures to domestic government debt, domestic NFCs, sovereigns and non-banks; all specifications include time fixed effects. Observations: 174 per column for the high-debt group and 121 per column for the low-debt group; R² ranges from roughly 0.54 to 0.94 across specifications.
* p<0.05; ** p<0.01; *** p<0.001

3.A ANNEX
With
MSPE^k_t(H) = Σ_{j=1}^{K} MSPE^k_{j,t}(H)   (3.38)
we get
FEVD^k_{j,t}(H) = MSPE^k_{j,t}(H) / MSPE^k_t(H)   (3.39)

3.A.2 CDS Data
Table (3.A.1) List of banks used in bank sector CDS time series
BE: Dexia, KBC Bank
FR: BNP, Société Générale, Crédit Agricole
DE: Deutsche Bank, Commerzbank, DZ Bank, Landesbank Baden, Landesbank Hessen, HSH Nordbank, WESTLB
ES: BBVA, Banco Pastor, Santander, Sabadell, Banco Popular Español
NL: Rabobank, ING Bank, SNS Bank
IT: Intesa, Unicredit Spa, Banca Montepaschi, Banco PPO Italiana, Unione di Banche
PT: Banco Comercial Portugues, Banco BPI, Caixa Geral

Table (3.A.2) Granger causality between the models (first difference, lags = 2)
This table indicates the results from the Granger causality tests between the S^H of the different models.
H0: SVAR-GARCH does not Granger cause... (F-test, p-value)
Rolling-window estimated models: VAR Cholesky 14.91, 3.627e-07***; VAR GIRF 6.0492, 0.002391**
GARCH-related models: DCC Cholesky 1.0527, 0.3491; DCC Fengler 1.3641, 0.2558
H0: SVAR-GARCH is not Granger caused by...
F -test p-value Rolling window estimated models VAR Cholesky 0.5159 0.597 VAR GIRF 0.9483 0.3875 GARCH-related models DCC Cholesky 0.4206 0.6567 DCC Fengler 8.8071 0.0001539*** Table ( 3 ( .A.3) Test for identification in SVAR-GARCH h under H 0 Q 1 (1) df p-value 1 124.3405 1 < 10 -5 2 113.4685 1 < 10 -5 3 85.0733 1 < 10 -5 4 66.6269 1 < 10 -5 5 60.7231 1 < 10 -5 6 46.2298 1 < 10 -5 7 38.0658 1 < 10 -5 8 35.8007 1 < 10 -5 9 25.3033 1 < 10 -5 10 16.2284 1 5.615e-05 11 13.3168 1 0.000263 12 12.6034 1 0.000385 13 517.7083 1 < 10 -5 14 185.0355 1 < 10 -5 15 154.8558 1 < 10 -5 3.A.7 List of Events Table ( 3 ( .A.4) Historical Events European Central Bank has redoubled warnings that the state of EZ banks is a threat to the regions economic recovery, IT banks with biggest problem of sour loans Dexia, the Franco-Belgian bank whose borrowings have already had to be guaranteed, dismissed rumours that it faced impending nationalisation by the Belgian government Table (3.A.4) Historical Events Table (3.A.4) Historical Events Table (3.A.4) Historical Events Variable Variable Variable Date Date Date Events Events Events Source Source Source IT FR banks ES banks 15/04/2014 06/05/2010 26/09/2018 "On April 15th yields on ten-year Italian-government bonds fell to 3.11%, the lowest on record" BNP Paribas and Société Générale in suffering as the costs of insuring themselves against default rises Banco Santander changes its chief executive The Economist FT FT IT FR banks NL banks 30/05/2018 12/08/2011 30/09/2008 Italy election crisis spreads as central bank chief warns investor trust is fading French Short-selling ban brings relief for banks The Belgian-Dutch Fortis faces state rescue FT FT Reuters IT FR banks NL banks 19/12/2018 13/09/2011 19/10/2008 Italian bonds and stocks rally as government comes closer to EU pact BNP Bank executive says they can no longer borrow USD ING receives 10 billion from Dutch government FT WSJ NYT GR FR banks NL banks 09/03/2012 14/10/2011 26/03/2009 ISDA declares Greece in default (impact on CDS, restructuring) Big European CDS such as Frances BNP Paribas spiked to 291bp Fortis Bank Nederland posts 25.11 billion loss Reuters FT Reuters GR FR banks NL banks 09/04/2012 07/11/2011 16/01/2012 CDS decrease after Greece restructuring BNP stock price plunges compared to CAC 40p ING benefits from ING gains from Netherlands credit rating The Economist Les Echos FT GR FR banks NL banks 19/06/2012 30/11/2011 13/06/2012 EZ's Greek poll honeymoon short lived SocGen, UniCredit and BNP lose some of Mondays gains ING to pay USD 619m to settle sanctions case FT FT FT GR FR banks NL banks 21/05/2013 12/02/2016 09/07/2012 Significant decrease in Greek sovereign CDS series SocGen battles to hit targets amid low rates and volatility Former Rabobank traders fired in LIBOR-scandale FT Observer GR FR banks 11/07/2015 11/06/2018 EZ finance ministers prepare to decide Greece's fate France tells its banks to set aside more capital FT FT PT IT banks 27/04/2010 17/08/2011 Portugal rating downgraded Shares in Italys biggest retail bank Intesa Sanpaolo were at one point suspended for excessive losses "European CNN FT PT 29/03/2011 Portugal rating downgraded banks at centre of sell-off " FT PT IT banks 06/07/2011 30/11/2011 Portugal rating blow European banks junior debt under review, including a number of Italian banks "European banks junior debt under FT FT PT 18/01/2012 Moodys warns of second rescue for Portugal review" FT PT IT banks 07/02/2012 01/02/2013 Speculation on Portugal debt restructuring Monte 
dei Paschi di Siena asks for 3.9bn bailout amid scandal over loss-making derivatives contracts and alleged Les Echos the Guardian PT 09/03/2012 Renew speculation on Portugal debt fraud Reuters PT IT banks 02/07/2013 11/06/2016 Portuguese government at risk of collapse as foreign minister resigns Italian banking crisis, heightened by European financial stress tests Telegraph FT PT IT banks 10/11/2015 27/06/2016 Confidence vote againt government, potential left-wing coalition Italian banks struggling "Italy resurrects plans to rescue struggling banks" Business Insider FT PT IT banks 08/02/2016 04/05/2017 Portugal-Germany Yield Spread Widens to Most Since 2014 Monte Paschi CDS time series spike Bloomberg IE IT banks 11/02/2009 29/11/2017 Recapitalisation was carried out at Ireland's two largest banks, Allied Irish Bank (AIB) and Bank of Ireland (BoI) FT FT IE 28/04/2010 Marked increase in Irish 2-year bond yields The Irish Times IE IT banks 18/07/2011 31/08/2018 Record high of Irish CDS in our time series UniCredit and Intesa Sanpaolo fall on news of increased political uncertainty FT IE PT banks 06/07/2012 06/05/2010 Ireland comes back on sovereign debt markets Spectre of counterparty risk, focused attention on to smaller banks in Portugal and Spain FT FT FR PT banks 11/08/2011 19/07/2011 Focus of EZ crisis turns to France BCP fails Espírito Santo Financia almost fails EBA stress test FT EBA FR PT banks 10/11/2011 30/11/2011 Standard & Poors mistakenly announced the downgrade of Frances top credit rating on Thursday BCP's CDS arrive at record level after Fitch downgrade of covered bonds Reuters Bloomberg FR PT banks 14/01/2012 16/02/2012 SP downgrades France and Austria Moodys downgrades state guaranteed debt issued by BCP from Ba2 to Ba3 with negative outlook FT FT FR PT banks 22/02/2017 18/08/2016 Highest DE-FR spread since 2012 Portuguese bonds under pressure after rating agencys warning CNBC FT FR PT banks 28/04/2017 10/01/2017 France CDS bounce back after election Fosun to increase its stake in Millennium BCP to 30% IHS Markit FT Variable NL PT banks Date 09/10/2008 28/03/2017 Events Governement capital injections into banks CDS spread of BCP drops sharply Source BIS NL BE banks 26/01/2009 18/09/2008 Bank comprehensive rescue plans (asset insurance) Speculative rumors against Fortis BIS La Libre Belgique ES NL BE banks 12/01/2010 14/01/2012 30/09/2008 Spain rows back on measures to enforce economic co-operation SP puts Netherlands sovereign on negative outlook Dexia bailed out FT Reuters The Guardian ES NL BE banks 13/05/2010 23/04/2012 15/10/2008 Tough new austerity measures for Spain PM Rutte resigns after austerity talks Bank rally, FT FT The Guardian ES NL 15/12/2010 20/08/2013 Moodys puts Spains Aa1 ratings on review for possible downgrade Netherland's top rating is affirmed at Fitch amid debt warning FT Bloomberg ES DE BE banks 29/03/2011 07/05/2010 29/12/2008 Catalan leader Arturo Mas refuses to enforce austerity measures German Parliament approves Greek rescue KBC loses 1 billion on CDOs FT NYT La Libre Belgique ES DE BE banks 17/07/2011 29/11/2010 30/09/2011 Spain and Italy brace for bond market pressure German credit risk jumps to highest since may, debt swaps show Belgium market authority ends short selling ban on Belgian financial institutions FT Bloomberg Fed NY ES DE BE banks 03/01/2012 09/02/2016 05/10/2011 Warning over size of Spanish deficit Five-year sovereign German CDS rose to almost 22 bps due to hedging activity Dexia shares suspended as break-up takes shape FT 
Reuters FT ES DE banks BE banks 08/05/2012 28/04/2009 02/11/2012 Spain set to spend billions on bank rescue Profit-taking undermines Deutsche Bank Three banks Lloyds Banking Group, Commerzbank and Dexia were dropped from the GSifi list FT FT FT ES DE banks BE banks 29/08/2012 19/04/2010 29/12/2012 Catalonia set to call for 5bn bailout Bank dividend payments reach record low: Deutsche Bank, plans to pay a dividend of 0.75 for 2009, up from 0.50 European commission validates Dexia rescue plan FT FT Le Monde ES BE banks 09/11/2015 25/01/2017 Catalunya vote for independence in 2008, but still small compared with earnings per share of 7.59 Repricing of Dexia CDS BBC FT ES DE banks ES banks 27/10/2017 11/01/2011 14/01/2011 Catalan sparks Madrid showdown Concerns rise over German bank levy Spain seeks to show that it is not another Ireland FT FT FT ES DE banks ES banks 27/05/2018 10/03/2011 13/07/2011 Spain upheaval deepens Italy market jitters Sale of stake in Deutsche AM puzzles analysts Spanish bank IPOs under threat FT FT FT BE DE banks ES banks 14/07/2008 28/07/2011 09/08/2011 Belgium government resigns Deutsche Bank net revenues in its corporate and investment banking arm fell 27 per cent in the second quarter Investors turn agains spanish financials as they bet against the value of what they see as fragile institutions le Monde FT FT BE DE banks ES banks 07/06/2010 10/09/2011 26/09/2011 Uncertainty on Belgium debt (elections coming) Commerzbank hit by 760m Greek writedown News about bank rescue plan: ECB expected to boost bank liquidity. Spanish banks affected in particuliar FT FT FT BE DE banks ES banks 14/12/2010 24/11/2011 28/03/2012 SP downgrades Belgium perspective Deutsche bank needs 2 bln to meet EBA's conditions EU underlines that Spanish banks need to bailout CNBC EBA FT BE DE banks ES banks 22/11/2011 07/12/2011 09/07/2012 Belgian spreads enlarge SP placed the credit Deutsche Bank and Commerzban under review Spanish bank bailout talks. Spain to Accept Rescue From Europe for Its Ailing Banks FT Deutsche Welle NYT BE DE banks ES banks 25/11/2011 24/01/2012 16/10/2012 SP downgrades Belgium Commerzbank buoyant as investors back capital plan Spanish banks rally on hope Madrid ready to request aid FT FT FT BE DE banks ES banks 01/03/2012 10/03/2016 25/03/2013 Belgian State buys back Dexia Deutsche and UBS defeated in UK tax avoidance case over bankers bonuses Bankia leads falls across big lenders after EZ comment to toughen bank regime Les Echos BBC FT IT DE banks ES banks 01/12/2010 03/05/2017 21/01/2016 "Premiums that Italy pay hit fresh highs" HNA raises stake in Deutsche Bank to nearly 10% Market sentiment turns sharply against Spains banking sector FT FT FT IT DE banks ES banks 30/06/2012 01/06/2018 08/06/2017 Markets rebound following EU deal "the agreement allow()to buy Italian sovereign bonds" SP downgrades Deutsche Bank Emergency funds failed to save Banco Popular from death spiral FT FT FT IT DE banks ES banks 28/10/2013 04/12/2018 05/12/2017 Bond yields fall to four-month low as Italy sells 2-year debt Investor fear raids will hit DB turnaround Strong drop of CDS of Banco Popular Nasdaq FT Enfin, dans le troisième chapitre, nous proposons une nouvelle approche pour quantifier les phénomènes de contagion entre marchés financiers sur la base d'une version structurelle du modèle de(Diebold and Yilmaz, 2009). 
We essentially rely on an SVAR-GARCH model that is statistically identified through heteroskedasticity, economically identified through the maximum contribution of the shocks, and that yields forecast error variance decompositions that are not constant over time. We analyse the propagation of credit risk shocks in the euro area between sovereign and bank CDS. This implies that our SVAR model includes 16 endogenous variables, allowing us to consider both sovereign and bank CDS, whereas papers close to our study mostly focus on sovereign CDS only (De Santis and Zimic, 2018). In methodological terms, we find that our model identifies credit shocks better than the other contagion models in the literature, and that it is also more reactive to events than models based on rolling-window estimations. From an economic point of view, we find that contagion phenomena explain only 37% of the variance of our variables, albeit with strong variations over time.

For the latter, Paul Samuelson put forward the idea that macro returns were more inefficient than micro returns. Indeed, if the efficient factors that affect micro returns in an idiosyncratic way do not carry over to the index level because of diversification processes, then macro returns are essentially driven by market inefficiencies. To assess this hypothesis, we compare micro and macro predictability on US data in order to identify whether micro predictability is indeed less present than macro predictability. In addition, we take up the methodology of Rapach et al. (2011) and extend it to a time-varying framework in order to disentangle, over time, the two sources of predictability. Regarding the results, we show that our interpretation of Samuelson's intuition is not valid, since micro predictability is not weaker than macro predictability. However, we also show that diversification phenomena are indeed at work, insofar as aggregating the micro predictability series over time yields an index that is very close to our macro predictability series. Second, we show that our estimates of the alpha- and beta-predictability are consistent with their theoretical implications. This suggests in particular that both phenomena play a role in our dataset.

There are also many papers, outside the return predictability literature, that underline that stock markets behave differently at the stock level compared to the index level. Sadka and Sadka (2009) document that the positive relationship between earnings growth and returns at the micro level turns negative at the macro level. Kothari et al. (2006) report similar findings between earnings surprises and contemporaneous returns. Eventually, Hirshleifer et al. (2009) stress that elevated accruals predict negative future returns at the stock level, but null or positive future returns at the index level. As such, drivers of stock returns may differ greatly depending on the scale we are considering. In other words, micro returns are assumed to be essentially driven by "individual stories" (Jung and Shiller, 2005), whereas macro returns are more affected by aggregate inefficiencies.
More formally, it would mean that the variance of the unpredictable factors of micro returns (ε_{i,t+1} + δ_i ε_{t+1} + β'_{i,t} u_{t+1}) dominates the variance of the predictable part (ω_i α(X_t)). This is less true for the corresponding factors of macro returns, respectively β'_t u_{t+1} + ε_{t+1} and α(X_t). Note that, in that case, return predictability is not a "free lunch": investors have to take extra risk to benefit from it (Kelly and Pruitt, 2013).
In line with Timmermann ("Elusive return predictability"), we apply a "sanity filter" to our forecasts. If a forecast exceeds any previous return of the estimation period (in absolute value), it is then replaced with a "no change" forecast. This type of filtering is common in the return predictability literature (Elliott, Handbook of Economic Forecasting).
Note that for an estimation period running from t-L+1 to t, we need previous forecasts from t-L_n+2 to t-L+1 so as to build R²_{os,k,t-L+1}. This latter variable will then be used in the model selection to predict r_{t+1}.
Note that R²_{i,α,t} is not necessarily positive. Theory-driven forecasts (such as r^β_{t+1}) may perform better than unrestricted forecasts (here r^f_{t+1}) in out-of-sample comparisons (Rapach et al., 2011).
Note that all the aforementioned results concerning R²_{i,os,t} and R²_{os,t} cannot be explained by the variances of the input returns r_{i,t+1} and r_{t+1}. We plot on Figures 1.A.5 and 1.A.6 of Appendix 1.A.4 the standard deviations of stock returns against either the level or the variance of their corresponding raw predictability indices. For both graphs the relationships between these variables appear weak at best.
In Appendix 1.A.5, we take into account the uncertainty regarding the coefficients with bootstrapping techniques.
Note that, following the Covid shock, all predictability appears to stem from the beta-predictability. This finding is in line with other recent studies, such as Gormsen ("Coronavirus: Impact on stock prices and growth expectations"). The latter argues that the apparent disconnection between the macroeconomic situation and the US stock market was not due to irrational investor behaviour, but could be rationalized through the fall in long-term sovereign rates.
The two last factors, "Robust Minus Weak" and "Conservative Minus Aggressive", are also extracted from Kenneth French's website. Due to their limited availability, the R²_{i,α,t} and R²_{i,β,t} for the 5-factor model start later than for the 1-factor or 3-factor models.
The outperformance also extends to specifications including some measure of volatility, such as the VIX. This point, as well as the results regarding other countries' industrial production growth, are discussed in Section 2.5.
The performances of our factor model appear more mixed at shorter horizons. Compared to a univariate model based on the aggregate DY, our factor model does not improve the forecasting accuracy at the 1-month horizon, but exhibits a lower RMSE at the 3- and 6-month horizons, although the difference in RMSE is not significant in light of Diebold-Mariano tests.
For each country, the Market DY, the IP growth and the term spread are all collected from Refinitiv Datastream.
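The "sanity filter" and the out-of-sample R² used in these notes can be written down in a few lines. The sketch below only illustrates the logic described above; the names (forecasts, past_returns, the benchmark forecast) and the zero-return implementation of the "no change" forecast are assumptions, not the authors' code.

```python
import numpy as np

def sanity_filter(forecasts: np.ndarray, past_returns: np.ndarray) -> np.ndarray:
    """Replace any forecast larger (in absolute value) than every return of the
    estimation window with a 'no change' forecast (here taken as a zero return)."""
    bound = np.max(np.abs(past_returns))
    out = forecasts.copy()
    out[np.abs(out) > bound] = 0.0
    return out

def r2_os(realized: np.ndarray, model_fcst: np.ndarray, bench_fcst: np.ndarray) -> float:
    """Out-of-sample R^2: 1 - SSE(model)/SSE(benchmark).
    It can be negative when the benchmark beats the model, which is why the
    alpha-predictability index discussed above need not be positive."""
    sse_model = np.sum((realized - model_fcst) ** 2)
    sse_bench = np.sum((realized - bench_fcst) ** 2)
    return 1.0 - sse_model / sse_bench
```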
We applied Dickey-Fuller tests to all the variables and transformed them into growth rates in cases where we could not reject the null hypothesis of a unit root. We make several exceptions to that rule though, in the sense that we also include the benchmark variables of Section 2.5.1 in levels, and we also incorporate several financial variables in log returns.
For example, in Fengler and Gisler ("Measuring Spot Variance Spillovers when (Co)variances are Time-varying: The Case of Multivariate GARCH Models"), the orthogonalisation is based on the square root of the variance-covariance matrix of the reduced-form shocks. It thus does not rely on economic intuition and therefore makes the interpretation of the structural shocks difficult. The same drawback applies to Geraci and Gnabo ("Measuring Interconnectedness between Financial Institutions with Bayesian Time-Varying Vector Autoregressions"), who estimate a time-varying VAR to evaluate interconnectedness between financial institutions, although on the basis of reduced-form coefficients.
In Annex 3.A.5 we relax this assumption.
More precisely, for days with a large structural shock surge, we investigate the existence of major market events in the financial press. We consider that a peak is identified if we can match it with an event 5 days before or after the date of the peak. This ratio is robust to changes in the threshold value as well as to changes in the number of days considered.
The De Santis and Zimic (2018) model is computationally intensive to estimate and therefore does not yield daily spillover estimates. For this reason we cannot integrate it in this comparison.
Results reported in Table ?? are robust to a large number of specifications (by analysing the % change instead of absolute changes, with different window lengths, or with pairwise spillovers instead of outward spillovers).
Note that throughout this paper, we include Belgium in the Periphery group as the country exhibited a high public debt/GDP ratio. However, the results are very similar if we define Belgium as a Core country.
Contrary to Equations (3.2) and (3.3), we divide here the index by the number of pairwise directed spillovers considered. Likewise, the different indices of Figure 3.5.4 are expressed in the same unit, that is: by how much, on average, a single variable of G1 has an impact on a single different variable of G2.
That implies that while varying shock sizes may generate time variation in spillovers, elasticities between the variables stay constant.
Table (2.A.3) List of the variables used to estimate PCA-factors
Group: IP Index; Source: Federal Reserve Board. Industrial Production sub-indices: durable goods (of which steel); durable manufacturing (vehicles); mining (gold and silver); mining; consumer goods; durable consumer goods; non-durable manufacturing (food, alcohol, beverages); durable manufacturing (machinery); business equipment; non-durable manufacturing (chemistry); durable manufacturing (computers); materials; construction supplies; mining (oil & gas extraction); non-durable consumer goods; durable manufacturing (electrical equipment, appliances and components); durable manufacturing (aerospace); durable manufacturing; non-durable manufacturing; business supplies; IPI excluding energy (74%); durable materials; non-durable materials; industrial equipment; manufacturing excluding vehicles; SA equipment total; electric & gas utilities; Total Index.

3.A.1 Derivation of Forecast Error Variance Decomposition
For each univariate structural shock, its variance follows a GARCH(1,1) process of the form
λ_{k,t|t-1} = ω_k + γ_k ε²_{k,t-1} + g_k λ_{k,t-1|t-2}.
Unlike Fengler and Gisler (2015), our matrix B_0 is constant over time, so that the only change between a classic SVAR-FEVD and our approach is the computation of future structural variances. We have
E_t[λ_{k,t+1}] = λ_{k,t+1|t}.
Taking expectations conditional on t, with h ≥ 2:
E_t[λ_{k,t+h}] = ω_k + (γ_k + g_k) E_t[λ_{k,t+h-1}].
Generally speaking, the IRF of a vector shock δ = (δ_1, ..., δ_n) on Y_t is defined, at horizon h and with Ω_{t-1} the information set at t, as
IRF(h, δ, Ω_{t-1}) = E[Y_{t+h} | ε_t = δ, Ω_{t-1}] - E[Y_{t+h} | Ω_{t-1}].
Due to the orthogonality of the structural shocks, one uses δ = (0, ..., 0, δ_j, 0, ..., 0) in order to consider the impact of a single shock. In that case we get, with e_j a vertical vector full of zeros apart from its j-th element, which is equal to 1,
IRF_j(h, δ_j, Ω_{t-1}) = E[Y_{t+h} | ε_{j,t} = δ_j, Ω_{t-1}] - E[Y_{t+h} | Ω_{t-1}].
In our SVAR-GARCH setting, B_0 is identified by heteroskedasticity and by economic identification (with Σ_ε and Σ_µ evolving over time and being equal to, respectively, λ_{t|t-1} and Σ_{µ,t|t-1}, Equation (3.13)). This identification strategy does not impose any structure on the IRFs.
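Before turning to the alternative identification schemes, here is a minimal numpy sketch of how a time-varying FEVD of the kind derived above, and the spillover shares built from it, can be computed once the moving-average matrices, the impact matrix and the forecast variances of the structural shocks are available. The array names, shapes and the convention µ_t = B_0 ε_t are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def fevd(Psi, B0, lam):
    """Forecast error variance decomposition over H horizons.
    Psi : (H, n, n) reduced-form moving-average matrices Psi_0..Psi_{H-1}
    B0  : (n, n) structural impact matrix (assumed convention: mu_t = B0 eps_t)
    lam : (H, n) forecast variances of the structural shocks at t+1..t+H
    Returns an (n, n) matrix whose (k, j) entry is the share of the H-step
    forecast error variance of variable k attributed to structural shock j."""
    Theta = Psi @ B0                                     # responses to structural shocks
    mspe_kj = np.einsum("hkj,hj->kj", Theta**2, lam)     # MSPE^k_j(H), Eq. (3.38)
    return mspe_kj / mspe_kj.sum(axis=1, keepdims=True)  # FEVD^k_j(H), Eq. (3.39)

def total_spillover(fevd_mat):
    """Diebold-Yilmaz-type total spillover share: off-diagonal part of the FEVD."""
    n = fevd_mat.shape[0]
    return (fevd_mat.sum() - np.trace(fevd_mat)) / n
```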
Conversely, in Diebold and Yilmaz (2009), B_0 is identified by using the Cholesky decomposition of the covariance matrix of the reduced-form shocks. Identification by GIRF works differently since, instead of considering structural shocks, the GIRF looks at reduced-form shocks. Using the notation Σ_µ = (σ_ij)_{i,j ∈ {1,...,n}}, a one-standard-deviation shock j and the same remaining notations, the GIRF is defined as
GIRF_j(h, δ_j, Ω_{t-1}) = E[Y_{t+h} | µ_{j,t} = δ_j, Ω_{t-1}] - E[Y_{t+h} | Ω_{t-1}].
If one assumes that µ_t ∼ N(0, Σ_µ), then we can write (see Pesaran and Shin, 1998)
GIRF_j(h) = σ_jj^{-1/2} Ψ_h Σ_µ e_j,
where Ψ_h denotes the h-th coefficient matrix of the reduced-form moving-average representation.

Concluding remarks
We tried, throughout these three chapters, to highlight the importance of micro/sectoral financial data to answer macro-financial questions. These beneficial aspects are ventilated along three dimensions in our case: the contrasting behaviours between sector-level and index-level return predictability, the extra accuracy that can be gained with sectoral equity variables in macro-forecasting and, eventually, the new insights that can emerge with bank CDS data to better understand the dynamics in sovereign CDS series.
These research works have several policy implications from a central bank point of view. First, identifying financial bubbles, even ex post, is a difficult exercise and requires a diversified range of indicators, such as the cyclically adjusted price-to-earnings ratio (CAPE, Campbell and Shiller, 1988) or the number of IPO announcements (Huang et al., 2015). As a result, relying on the alpha-predictability index outlined in the first chapter can be useful in the financial stability toolkit to spot periods of elevated market inefficiencies. Second, the use of sectoral equity series can definitely help central banks in their macroeconomic diagnosis, given the predictive content that we illustrated in the second chapter of the thesis. In our case, the use of sectoral stock variables within our factor model improves the out-of-sample RMSE by close to 20% compared to the usual forecasting predictors. Eventually, the contagion framework based on the SVAR-GARCH model that we developed in the third chapter can be duplicated, beyond CDS-contagion analysis, to evaluate spillover patterns between virtually any financial time series. As a matter of fact, the model has been reused in Banque de France analyses to assess spillovers between equity returns or sovereign bond yields (see Banque de France, 2018).

ABSTRACT
The relationship between financial markets and central banks is deeply intertwined. On the one hand, central banks can for example rely on the signals stemming from the markets to assess investors' opinions regarding future economic activity. On the other hand, from a financial stability perspective, central banks also need to monitor the activity on financial markets to assess the build-up of risks for the real economy that can emerge from bubble processes or from the propagation of negative shocks. However, historically, the macro-financial literature dedicated to these issues mostly relies on aggregate data. This PhD thesis is centered on the fact that focusing on a macro perspective on these subjects can obscure a lot of information compared to using micro/sectoral data. The first chapter of the thesis thus investigates how stock return predictability can differ between the index level and the micro level. Through this exercise, we also manage to spot times where stock return predictability stems from market inefficiencies, enabling us to identify periods of "irrational exuberance" (Shiller, 2015).
The second chapter highlights how relying on sectoral, rather than aggregate, equity data within a factor model can improve our ability to forecast future economic activity. In the third chapter, with the use of an innovative econometric model, we show how incorporating both country-level and sector-level time series in our framework helps to better identify sovereign and bank credit shocks as well as the propagation of the latter.

MOTS CLÉS
Financial markets, market inefficiencies, financial econometrics

RÉSUMÉ
Financial markets and central banks are closely intertwined. On the one hand, central banks can take advantage of the signals sent by financial markets, for example to assess investors' views on the future level of economic activity. On the other hand, from a financial stability perspective, central banks must also monitor financial markets in order to gauge the risks the latter could pose to the real economy. Such risks can notably emerge from financial bubble processes or from the propagation of negative shocks from market to market. However, to study these questions, the macro-financial literature has historically relied on aggregate financial data. By contrast, this PhD thesis is centered on the idea that, on these subjects, disaggregated/sectoral data can open new avenues of understanding relative to macro data. The first chapter of this thesis thus assesses the extent to which stock return predictability can differ between the index level and the individual/sectoral level. We further show how this exercise makes it possible to delineate the phases during which return predictability is linked to market inefficiencies, and thus to identify periods of "irrational exuberance" (Shiller, 2015). The second chapter underlines how using sectoral, rather than aggregate, stock market data improves our ability to forecast economic activity within a factor model. In the third chapter, using an innovative econometric model, we show how incorporating time series at both the country level and the sector level makes it possible to better identify the shocks affecting the sovereign and banking sectors and to better estimate their propagation.

KEYWORDS
Financial Markets, Market Inefficiencies, Financial Econometrics
04098824
en
[ "sdv.bc", "sdv.mhep.phy" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098824/file/2022UCFAC077_CUSSONNEAU.pdf
Laura Cussonneau email: [email protected] Christian Boyer email: [email protected] Charlotte ; ; Brun email: [email protected] Christiane ; Deval email: [email protected] Emmanuelle Loizo email: [email protected] Emmanuelle Meugni email: [email protected] Elise Gueret email: [email protected] Emeric Dubois email: [email protected] Daniel Taillandier email: [email protected] Cécile Polge email: [email protected] Daniel Béchet email: [email protected] Guillemette Gauquel-Koch Ali L Evans email: [email protected] Jon Arnemo email: [email protected] Jon E Swens email: [email protected] Stéphane Blanc email: [email protected] Chantal Simon email: [email protected] Etienne Lefai email: [email protected] Fabrice Bertile email: [email protected] Lydie Combaret email: [email protected] E ; Loizon Manuela Malatesta Emerito Carlos Rodriguez-Merchan Cécile Coudy-Gandilhon Ghita Chaouki Mehdi Djelloul-Mazouz Yoann Delorme Julien Hermet Guillemette Gauquelinkoc email: [email protected] Cécile Pol Daniel Taillandi Julien Averous Alain Bruhat Céline Jousse Isabelle Papet Pierre Fafournoux Anne-Catherine Maurin kinase B 10 wingless/int1-frizzled 11 AMP-activated protein kinase 12 nuclear factor-kappa B 13 janus kinases/signal transducers and activators of transcription Concurrent BMP Signaling Maintenance and TGF-β Signaling Inhibition Is a Hallmark of Natural Resistance to Muscle Atrophy in the Hibernating Bear Keywords: congresses 2.3.1.1 Orals brown bear hibernation, mouse unloading, muscle atrophy, physical inactivity, RNA sequencing, TGF-β/BMP signaling Aux membres du jury, le Pr Jean-Paul Thissen, la Dr Audrey Bergouignan, la Dr Jennifer Rieusset et le Dr Cédric Chaveroux Je suis très honorée de vous compter parmi les membres de mon jury. Un immense merci pour votre […] Sans la curiosité de l'esprit, que serions-nous ? Telle est bien la beauté et la noblesse de la science: désir sans fin de repousser les frontières du savoir, de traquer les secrets de la matière et de la vie sans idée préconçue des conséquences éventuelles. Marie Skłodowska-Curie, 1938 A l'équipe Protéostasis Merci à tous les membres de l'équipe Protéostasis, on ne peut pas espérer un meilleur cadre de travail, joie et bonne humeur sont les maîtres mots de cette équipe. Je vous souhaite à tous le meilleur dans vos vies professionnelles et personnelles. J'aimerais remercier tout particulièrement certaines personnes. Merci à Pierre le chef d'équipe, alias Pr Dumbledore, pour tes conseils scientifiques toujours très avisés, et pour tes histoires de vie toujours très passionnantes. Merci à Christiane, pour ton aide technique, mais surtout pour ta générosité et ta gentillesse, j'ai beaucoup appris à tes côtés et je te souhaite la plus formidable des retraites. Merci à Cécile, ma partenaire de pilate et de stretching, pour ton aide en histologie mais aussi pour nos franches rigolades et confessions. Et merci à Lolo, pour ta bonne humeur et ton écoute, tu m'auras tellement fait rire, tu as été mon soleil du labo. Aux animaliers de l'INRAE de Theix Merci à toute l'équipe de l'animalerie de Theix qui fournit un travail formidable. Un merci tout spécial à Mehdi et Yoann, vous m'avez bien fait rire pendant ces trois ans, et également bien aidé avec mes souris suspendues, ce fût un réel plaisir de travailler avec vous. Vous me croyez maintenant, je n'étais pas une stagiaire ! A l'équipe du Brown Bear Project Merci à toute l'équipe du BBP, sans qui ce projet de thèse n'aurait jamais existé. 
Cette équipe de vétérinaires, de rangers et de scientifiques experimentés permet à ce consortium de vivre et de faire naître de beaux projets scientifiques. A Fabrice Bertile et Damien Freyssenet Merci à Fabrice Bertile et Damien Freyssenet pour les échanges scientifiques de qualité que l'on a pu avoir lors de mes comités scientifiques de thèse. Merci à toi Fabrice également pour ton implication dans le BBP, et l'inoubliable expérience que tu m'as offerte avec Etienne lors de notre voyage en Suède. A l'IUT Génie Biologique de Clermont-Ferrand Merci à l'IUT Génie Biologique de Clermont-Ferrand, et spécialement à Mathilde Bonnet et Jérémy Denizot pour leur confiance en m'accordant un poste de vacataire pendant deux ans, ce fût une belle expérience qui a confirmé mon souhait de devenir enseignante chercheuse. Aux étudiant.es de l'équipe de passage ou de plus longue durée Merci aux doctorants qui ont partagé pendant trois ans cette incroyable expérience. Merci à Christian, même si tu râles beaucoup tu es très attachant, il faudrait te créer si tu n'existais pas. Merci à Aurore pour ta bienveillance et surtout tes conseils de maman chat quand je le suis devenue également. Merci à Ghita, pour ta présence et ton écoute. On aura partagé de jolis moments, je regrette déjà ta tajine somptueuse, tes appels avec ton franco-arabe qui me faisaient tant rigoler, tu vas beaucoup me manquer. Merci à Guillaume, mon petit frère ours, pour ces moments de franche rigolade au labo et en dehors, j'espère que tu t'occuperas bien de papa ours en mon absence. Merci à Maelle, ça a été un plaisir de travailler avec toi pendant quelques mois, et de partager de jolis moments de vie ensemble dans notre immeuble. Et merci à Maxime, alias Ludovic, Pierre, Enguerrand Raucroy, merci pour tous ces moments de rire, d'avoir relu mon manuscrit, de m'avoir consolée quand je doutais, et merci de m'avoir donné un peu plus confiance en moi. A Dulce Il était nécessaire de faire un paragraphe à part pour toi ma Dutz. Merci d'avoir été la première personne que j'ai rencontré dans cette ville, tu as transformé mon expérience Clermontoise et mon expérience de thèse comme je ne l'aurais jamais imaginé. Merci d'avoir partagé ces rires, ces doutes, ces questionnements et ces confessions pendant trois ans dans notre bureau, dans nos appartements ou pendant notre traversée de la Corse à pied. Je suis extrêmement chanceuse d'avoir croisé ton chemin. Te quiero mucho como la trucha y el trucho. A mon fils A mon petit chat Bibi, auvergnat d'origine, qui me rappellera à vie ce souvenir indélébile de l'Auvergne. Merci pour tous ces câlins réconfortants et ta présence de tout instant. Aux KAPAK Ces trois ans n'auraient pas été les mêmes si je n'avais pas pu m'évader dans ma passion de toujours, la danse. Alors un immense merci à Yohann Sebileau, le meilleur professeur de danse contemporaine de la Terre, je pleure déjà la fin de tes cours, tu m'auras tellement fait progresser, merci pour tes chorées et ton engagement, merci pour ces moments sur scène gravés à tout jamais. Et merci à mes copines Tia et Claire pour ces jolis moments de danse partagés en cours, sur scène et en dehors. A mes ami.es clermontois.es Merci à Alexis, mon compère de soirées et de confidences, tes petits plats vont terriblement me manquer mais pas plus que toi. Merci à ma copine Melu, pour ton soutien, ton amitié, ces soirées à refaire le monde et à rêver ensemble. Et merci au groupe des Théseux pour la bonne humeur et les galères de thèse partagées ensemble. 
To my long-standing friends. To Chachou, Dédoule and Cass, my best friends for so many years now, your love and support are unfailing, you are my pillars in life, thank you for everything. Thank you to Ele, my precious friend, for our getaway and wine-tasting weekends. Thank you to my Caice for your joie de vivre and your comforting words. Thank you to my master's friends Juju, Roro, Fabi, Val and Mauricette for our weekends discovering France and your presence from Lyon. And thank you to my Burgundian friends, for our evenings at the Galopin or at the cocoloc; those moments of laughter helped me hold up for three years.
To my family. Thank you to my family, for your unconditional support and love for 26 years. To my dad and my mum, who always have the right words to soothe my anxieties and doubts. To my two little sisters Marine and Ema, who fill me with happiness. To my auntie Séverine, my uncle Pascal and my granny MC for your presence and your love, and to my grandpa, who passed away during my thesis and who I know would have been very proud of me.
State of the art
Preface
Muscle wasting results from a wide range of pathophysiological conditions such as cancer and renal failure, but also microgravity, bed rest or inactivity. Muscle wasting is associated with adverse health effects such as a decline in independence and an increased morbidity and mortality. With increasing physical inactivity and improved life expectancy, muscle wasting is a major public health problem. Muscle atrophy results from an imbalance between protein synthesis and degradation, and a variety of intracellular players are involved in this dysregulation, including the TGF-β superfamily and the ATF4 pathway. Their biological roles in muscle physiology are mainly described in humans or laboratory rodents. Despite the wealth of knowledge on this subject, an approved and readily available therapeutic or preventive treatment is still lacking. Hibernating brown bears are fascinating mammals because they naturally resist muscle atrophy although they remain completely inactive and starved for 5-7 months during the hibernation period. The main objective of this thesis was to find new underlying mechanisms that could become therapeutic targets to combat muscle atrophy in humans. To achieve this goal, we (1) used a comparative physiology approach in hibernating bears naturally resistant to atrophy, (2) investigated the role of the interaction between the ATF4 and the TGF-β/BMP pathways in unloaded mice susceptible to atrophy, and finally (3) initiated experiments in human muscle cells to validate hypotheses arising from the first two studies. This manuscript is therefore a compilation of all the work carried out during my 3-year thesis on the regulation of the TGF-β superfamily and ATF4 signalling pathways in the skeletal muscle of the hibernating brown bear and the unloaded mouse. It will be divided into 3 distinct parts. First, a state-of-the-art of (1) skeletal muscle physiology and muscle atrophy, (2) TGF-β superfamily and ATF4 signalling pathways, and their pivotal roles in muscle homeostasis, as well as (3) bear hibernation and the first clues to explain its resistance to muscle atrophy. In the second part, two studies will be presented. The first one is a published article on the transcriptomic analysis of muscles of brown bears during hibernation and of mice during unloading.
The second study is a paper currently under review on the effect of ATF4 induction on skeletal muscle in both healthy and unloaded mice, and also in hibernating brown bears. A discussion of perspectives and questions arising from the data follows both studies. Finally, the last part consists of the presentation and discussion of the preliminary results of the effect of bear serum on human muscle cells. This thesis is written in english, and therefore a substantial abstract in french requested by the doctoral school is included in the Appendix, together with 2 articles of which I am co-author: one is an original article related to tissue adapatations (muscle, adipose tissue and serum) in the hibernating brown bear and the other is a review on ubiquitin ligases and their role in muscle atrophy. The word "muscle" was first used by Middle French speakers in the 14 th century, from the existing Latin words mus meaning "mouse" and musculus which translates to both "little mouse" and "muscle." Ancient Romans thought that some muscles, especially the biceps, looked like little mice running under a person's skin. Our organism contains about 600 "little mice" accounting for approximately 40% of our total body weight, making skeletal muscle the most abundant tissue in the human body. Generalities. Two types of muscles coexist: (1) the so-called smooth muscles located in walls of hollow organs (e.g intestine, stomach) under involuntary control, and (2) the striated muscles divided into two types, the cardiac muscles which also contract spontaneously, and the skeletal muscles which cover our skeleton and allow movements under voluntary control. Skeletal muscle is a very dynamic and plastic tissue. It is essential for movement, gesture and posture for functional autonomy. It also acts as the main tissue for energy metabolism, with heat production and absorption, use and storage of energy substrates (i.e. glucose, lipids and amino acids). Skeletal muscle is composed of water (75%), proteins (20%) and other constituents such as carbohydrates, lipids and minerals. It accounts for approximately 50-75% of body proteins and 30-50% of whole-body protein turnover [1,2]. Organisation. Skeletal muscle is a highly organised tissue containing several bundles of myofibers with each layer successively surrounded by the extracellular matrix (Figure 1). Myofibers are multinucleated and post-mitotic cells and contain adult stem cells called satellite cells that contribute to muscle growth and repair. Each myofiber contains thousands of myofibrils which are composed of the basic cellular unit of muscle, the sarcomere. The sarcomere itself is composed of billions of myofilaments, both thick (myosin) and thin (actin), which are essential for muscle contraction requiring high ATP consumption (Figure 1). Myofilaments represent the main protein content of muscle (i.e. 70-80% of the total protein content of a single fibre) [1,2]. The size of the muscle is primarily determined by the number and the size (i.e. cross-sectional area, CSA) of each myofiber, although fat and extracellular matrix infiltration can also influence its size [1,2]. Typology. Myofibers are classified into different types, with different characteristics such as the sarcomeric myosin heavy chain (MYH) gene expression, strength-velocity, response to neural inputs, or metabolic properties [3] (Figure 1). 
Since the first half of the 19 th century, scientists have distinguished skeletal muscles based on their colours and contractile properties: (1) red muscles composed of slow-twitch fibres (i.e. type 1) rich in mitochondria with oxidative metabolism and (2) white muscles composed of fast-twitch fibres (i.e. type 2) poor in mitochondria with glycolytic metabolism [4]. Over the past 40 years, this oversimplified schema has evolved with the notion of diversity of muscle fibre types, and four major fibre types have been identified in adult mammalian skeletal muscle (i.e. types 1, 2A, 2X, and 2B) (Figure 1). Humans, however, lack type 2B fibres, and the proportion of MYH within the same muscle may differ between mammals [3,5]. Skeletal muscle fibre type and mitochondrial function are sometimes uncoupled, for example for fast type 2A fibres with abundant mitochondrial content [2,3,6] (Figure 1). Based on differential MYH expression, a muscle may also consist of hybrid fibres (i.e., 1/2A, 2A/2X, 2X/2B), which allow muscles to utilise ATP in a nearly continuous gradient and thus be endowed with a fast type 2B to slow type 1 muscle contraction rate [2,3]. The heterogeneity of muscle fibres is the basis for the flexibility to use the same muscle for a variety of tasks, from fast and intense contraction (e.g. jumping) to slow and low-intensity activity (e.g. posture). Mitochondria. Skeletal muscles are highly vascularised and innervated, and contain components of the metabolic machinery (e.g. mitochondria, sarcoplasmic reticulum), allowing efficient energy production. The precise coordination of activity between each of these components is essential to maintain muscle homeostasis and associated motor activity. The energy requirement during an intense contraction increases the normal ATP consumption in skeletal muscle by 100-fold [7]. To support this high energy demand, skeletal muscle relies in part on mitochondrial oxidative phosphorylation (OXPHOS) for ATP production. Adult myofibers exhibit specific subcellular localisation of distinct populations of mitochondria, namely subsarcolemmal (i.e. just below the plasma membrane) and intermyofibrillar. These two distinct populations of mitochondria are functionally highly interconnected but have a specific shape and exhibit differences in their biochemical and functional properties [8][9][10][11]. The morphology, arrangement, and connectivity of the mitochondrial network are adapted to the specific functional needs of each fibre type. For example, oxidative fibres have a gridlike organisation with elongated mitochondria oriented both parallel and perpendicular to the muscle contraction axis, in contrast to the mitochondrial network of glycolytic fibres, which is fragmented and oriented perpendicular to the muscle contraction axis [12]. Maintaining a functional mitochondrial network in skeletal muscle is fundamental to fulfilling the metabolic demands imposed by contraction, thereby regulating fuel utilisation, energy expenditure, and overall metabolism. Mitochondrial integrity and function are highly regulated by quality control systems (e.g., mitochondrial biogenesis, dynamics, and degradation) to maintain homeostasis [2,13]. Moreover, mitochondrial dysfunction has been linked to several human muscle diseases called mitochondrial myopathies [14][15][16][17][18][19]. Muscle-organ crosstalk Myokinome. 
Over the past decades, skeletal muscle has been extensively studied for its role as an endocrine organ, producing and secreting hundreds of cytokines and other peptides, i.e. myokines, with autocrine, paracrine, or endocrine effects [20][21][22][23] (Figure 2). The first myokine described was myostatin [24], followed by interleukin-6 (IL-6). The latter is increased 100-fold in the bloodstream during exercise and shows multiple metabolic effects at the whole-body level [25,26]. Given the broad physiological and metabolic effects of physical activity throughout the body, it was clear that there was more than one myokine [27,28]. The biological name myokinome provided a new concept for understanding how muscles communicate with the rest of the body, and more than 650 myokines have been identified [20][21][22][23] (Figure 2). Myokines (e.g. myostatin, cathepsin B, irisin) are synthesised and released from myofibers during muscle contraction and provide communication between skeletal muscles and other organs, including the brain, adipose tissue, bone, liver, intestine, pancreas, endothelial cells, and skin, as well as communication within the muscle itself [20][21][22] (Figure 2, Figure 3). The biological roles of myokines include widespread body functions such as cognition, lipid and glucose metabolism, white fat browning, bone formation, endothelial cell function, hypertrophy, and skin structure (Figure 2, Figure 3). For muscle itself, myokines play a role in mitochondrial biogenesis, fat oxidation and glucose metabolism, and act as signals for muscle hypertrophy or atrophy [20,21,29] (Figure 3). It should be noted that most of the myokines are not yet sufficiently well characterised with respect to their biological functions [20][21][22][23]. Establishing proper crosstalk between body organs and muscles is essential for whole-body homeostasis [20][21][22][23].
Amino acid reservoir. Another major role of skeletal muscle is to be a reservoir of amino acids. Muscle amino acids can be mobilised in the absence of an adequate nutrient supply or in situations of increased need in other tissues to maintain their protein mass [30][31][32]. For example, obese individuals maintain normal plasma amino acid concentrations even after 60 days of total fasting [33]. Studies conducted by Jewish physicians in concentration camps during World War 2 suggested that death by starvation (i.e. uncomplicated by severe disease) occurred when amino acids mobilised from muscle proteins became insufficient to maintain the precursors necessary for gluconeogenesis. Indeed, amino acids released from muscles serve as precursors for the maintenance of blood glucose levels through hepatic gluconeogenesis during starvation [34]. In the context of disease prevention and health maintenance, reduced muscle mass compromises the body's ability to respond to stress and chronic disease due to inappropriate crosstalk between muscles and organs. Therefore, severe loss of muscle mass is incompatible with life, and maintenance of muscle protein content through appropriate turnover is vital to maintain whole-body homeostasis (see section 4.1.3).
Muscle protein turnover
During embryonic and early postnatal development, muscle growth occurs primarily through myogenesis and fusion of satellite cells [35,36]. In adult organisms, regulation of muscle mass results from growth within existing myofibers primarily via cellular pathways that control protein turnover [37,38] (see Appendix 10.4). Muscle proteins are constantly renewed, i.e.
synthesised and degraded (Figure 4). The balance between the rates of muscle protein synthesis (MPS) and muscle protein breakdown (MPB), i.e. the net muscle protein balance, determines muscle protein content and homeostasis (Figure 4). MPS and MPB are sensitive to many factors, including nutritional status, hormonal balance, physical activity, injury or diseases. A decrease in muscle size in fully mature organisms, i.e. muscle atrophy (see section 4.1.3), results from a negative protein balance, whereas an increase in muscle size, i.e. hypertrophy, results from a positive protein balance. Muscle hypertrophy, in response to physical activity or a high-protein diet, is an interesting field of investigation that is of clinical interest in the search for treatments to limit or prevent muscle wasting. Muscle protein synthesis External stimuli. Amino acids (AA) provided by an appropriate diet act as extra-and intra-cellular anabolic molecules and are essential for inducing MPS [39,40] (Figure 4). High-protein diets do not enhance MPS as long as energy and protein requirements are met in the muscles and other organs. Amino acids bioavailability is strongly influenced by protein source, digestibility, and protein intake pattern, and is important for optimising MPS [40,41]. Mechanical cues are also considered anabolic stimuli, based on two basic lines of evidence: (1) muscles atrophy when mechanical load is reduced (e.g. bed rest) [42] and (2) muscle overload is sufficient for skeletal muscle hypertrophy [43,44]. Life on Earth has evolved in a 9.8m/s 2 environment that loads organisms. Therefore, cells have evolved with a plethora of sensors that detect mechanical stimuli. These mechanosensors help cells to adapt not only directly to the force produced by the contraction of a muscle fibre but also to more indirect mechanical signals, such as the stiffness of the extracellular matrix (ECM) that surrounds each cell [44,45]. However, these mechanical signals remain incompletely characterised. The increase in MPS after food intake is a systemic transient phenomenon, whereas physical activity stimulates a long-term AA: amino acid local adaptive response. Furthermore, adequate nutrition after physical activity can take advantage of anabolic pathways initiated by physical activity [41,46,47]. mTOR. One of the most recognized players in MPS is the mechanistic target of rapamycin (mTOR), which controls anabolic and catabolic signalling in skeletal muscle, resulting in the modulation of muscle hypertrophy and wasting [48,49] (Figure 5). mTOR inhibition by rapamycin or genetic invalidation respectively reduces the increase in MPS and/or muscle size after exercise in humans [50], or results in severe myopathy leading to premature death in mice [51]. mTOR is a serine/threonine kinase that (1) senses a variety of environmental and intracellular changes, including nutrient availability, energy status and mechanical stimulation, and (2) coordinates a variety of cellular processes, including cell growth and survival, differentiation and autophagy [48,49,52]. There are two biochemically and functionally distinct mTOR complexes, namely mTORC1 and mTORC2 [48,52] (Figure 5). Both complexes share the mTOR catalytic subunit and are distinguished by their accessory proteins, and their unique substrates and functions [48,52] (Figure 5). On one hand, mTORC2 regulates cell survival and cytoskeleton organisation [48,52]. 
On the other hand, mTORC1 controls protein synthesis by activating S6K1 2 , which promotes ribosome biogenesis, and by inhibiting 4E-BP1 3 leading to protein translation (Figure 5). mTORC1 also promotes muscle hypertrophy by phosphorylating and suppressing ULK1 4 activity resulting in the inhibition of autophagy, one of the major protein degradation processes in muscle [48,52] (Figure 5). Muscle protein breakdown Physical activity, nutritional interventions, hormonal balance or inflammation also influence muscle protein breakdown (MPB) (Figure 4). The mechanisms are much less understood than for MPS, mainly because MPB measurement is technically more challenging than for MPS [53,54]. The main systems that contribute to MPB in skeletal muscle are the autophagy-lysosomal system (ALS) and the ubiquitinproteasomal system (UPS). Autophagy-lysosomal system. The word autophagy is derived from two Greek words auto meaning "self", and phagy meaning "eating". Three different systems of autophagy have been described in mammals: macroautophagy, chaperone-mediated autophagy, and microautophagy. In this manuscript, ALS will refer to macro-autophagy, the most explored system in MPB. ALS involves the formation of a nascent membrane structure, i.e. the phagophore, surrounding bulk intracellular components, such as organelles, damaged proteins, or other target proteins (e.g. transporters, ion channels, receptors) (Figure 6). The origin of the membrane, i.e. endosomal, trans-golgi, nuclear or de novo synthesis, is unclear. After maturation of the autophagosome, it fuses with lysosomes to generate an autolysosome (Figure 6). Finally, activation of lysosomal proteases, i.e. cathepsins, or other enzymes such as DNAses or lipases, leads to the degradation of the autolysosome content and recycling of AAs [55] (Figure 6). It should be noted that ALS cannot degrade proteins in intact myofibrils; therefore, additional catabolic pathways are required [56,57]. Under normal conditions, ALS primarily prevents the accumulation of damaged organelles and misfolded proteins, but also degrades glycogen and lipid droplets, thus providing glucose, free fatty acids (FFAs), or AAs to the entire body to support basic cellular metabolism. In response to different stresses, such as starvation, ALS acts as a pro-survival mechanism in skeletal muscle, providing metabolic substrates [55]. Skeletal muscle is one of the organs with the highest rate of autophagy flux when nutrients are lacking [58]. In particular, basal autophagic flux is higher in glycolytic fibres than in oxidative fibres [59]. This is particularly important 2 ribosomal protein S6 kinase 1 3 eukaryotic translation initiation factor 4E binding protein 1 4 unc-51-like autophagy activating kinase because muscles may regulate ALS differently during specific stresses depending on their fibre type composition. Additionally, because muscle cells are highly sensitive to insulin [60], known to inhibit ALS [61], the autophagy flux fluctuates according to the food intake throughout the day and the level of physical activity [62,63]. While too much autophagy flux contributes to muscle wasting (see section 4.1.3), inhibition of ALS also leads to muscle atrophy [64]. In addition, inhibition of ALS leads to the accumulation of abnormal mitochondria, oxidative stress and protein aggregates, causing degeneration, weakness and premature death of myofibers [62][63][64][65]. Therefore, proper autophagic flux is required to maintain healthy muscle cells [62,63]. Mitophagy. 
In healthy skeletal muscles, damaged and depolarized mitochondria are selectively removed by the mitophagy process, which is a selective form of autophagy. The most studied mitophagy involves the ubiquitin-protein ligase Parkin and the mitochondrial kinase PINK1 5 [13,19,66]. Studies have demonstrated that mitochondrial dynamics and mitophagy are essential for skeletal muscle homeostasis [13,19,66]. There is a growing body of evidence that alterations in mitophagy or mitochondrial distribution and dynamics are present in muscles during wasting conditions (e.g. ageing, 5 PTEN induced kinase 1 AA: amino acids disuse or cancer cachexia) [66][67][68]. Importantly, enhancing mitophagy through genetic or nutritional approaches improves skeletal muscle function in aged rodents [13,19,66]. Thus, improving mitophagy in skeletal muscle appears to be a promising therapeutic target to prevent or even treat skeletal muscle dysfunction. Ubiquitin-proteasomal system. The UPS is perhaps the best-known cellular proteolytic system and is responsible for degrading the majority of misfolded or defective proteins in all cell types. The UPS plays a fundamental role in normal muscle physiology, including degrading myofibrillar proteins [69,70]. Most proteins undergo degradation by being targeted to the 26S proteasome through the covalent attachment of a multi-ubiquitin chain (Figure 7). Protein ubiquitination involves the action of 3 enzymes: ubiquitin-activating enzymes E1, ubiquitin-conjugating enzyme E2, and ubiquitin-ligase E3 (Figure 7). The ubiquitin-tagged proteins are then recognized by the 26S proteasome, which initiates the ATP-dependent degradation process within the catalytic core (Figure 7). Through this mechanism, the UPS performs substrate-specific proteolysis [71]. Protein ubiquitination is both dynamic and reversible. Deubiquitinase or deubiquitinating enzymes (DUB) catalyze the removal of ubiquitin from target proteins and are also involved in ubiquitin maturation, recycling and editing. Several reports have demonstrated a relationship between UPS and lifespan, with proteasome activity decreasing with age in skeletal muscle, causing dysfunction [72]. Moreover, inhibition of proteasome activity in skeletal muscle is associated with a defect in muscle growth and shortened lifespan in rodent models [70,73]. UPS and ALS crosstalk. UPS and ALS have long been considered independent. However, emerging evidence suggests that there is a crosstalk between both pathways in skeletal muscle. Although ALS had been thought to be a non-specific degradation system, it has also been reported to degrade ubiquitinated proteins [74]. Studies also suggest that ALS and UPS are complementary because proteasome-deficient mice exhibit increased autophagic flux [70,73]. Therefore, the UPS and ALS are compensatory mechanisms, both essentials to sustain muscle homeostasis and integrity. However, since they are important for the health of skeletal muscle, dysfunctions in both systems lead to muscle pathological conditions. Muscle atrophy Causes and consequences Causes. The loss of muscle mass and strength in the adult body is referred to as muscle atrophy. Muscle loss arises from inherited (congenital or genetic) or acquired conditions (pathological or physiological conditions) [75] (Figure 8). In addition, older adults exhibit age-induced muscle atrophy, primarily due to anabolic resistance, which may predispose this population to more pronounced muscle loss when exposed to periods of reduced physical activity [76]. 
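To make the net protein balance principle described above (Figure 4) concrete before turning to specific causes, here is a deliberately simple numerical sketch. It is not a model used in this thesis: the rate values are arbitrary illustrative numbers, and real turnover depends on the many regulators discussed in the following sections.

```python
# Illustrative toy model (not from the thesis): muscle protein mass driven by
# a constant synthesis flux (MPS) and first-order degradation (MPB).
# Rate values are arbitrary and chosen only to visualise the net-balance idea.

def simulate_protein_mass(mps_flux, mpb_rate, m0=100.0, days=60, dt=0.1):
    """Integrate dM/dt = MPS - MPB_rate * M with a simple Euler scheme."""
    m = m0
    steps = int(days / dt)
    for _ in range(steps):
        net_balance = mps_flux - mpb_rate * m  # positive -> hypertrophy, negative -> atrophy
        m += net_balance * dt
    return m

# Baseline: synthesis and degradation are matched, so mass stays stable.
baseline = simulate_protein_mass(mps_flux=5.0, mpb_rate=0.05)

# "Catabolic" scenario: blunted synthesis plus over-activated breakdown
# (the situation described for unloading or disease) -> mass drifts down.
atrophy = simulate_protein_mass(mps_flux=3.5, mpb_rate=0.07)

print(f"baseline mass after 60 d: {baseline:.1f} (start 100)")
print(f"atrophy  mass after 60 d: {atrophy:.1f} (start 100)")
```

The only point of this sketch is that the sign of the net balance, not synthesis or breakdown taken in isolation, sets the direction of the change in muscle protein content, which is why the pathways discussed below are presented in terms of how they shift this balance.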
Pathological conditions that cause muscle atrophy include cancer cachexia [77], chronic obstructive pulmonary disorders [78], diabetes and obesity [79], chronic kidney diseases [80], heart failure [81], sepsis [82], burns [83], and conditions associated with anorexia or malnutrition [84]. Physical inactivity also leads to muscle wasting, especially following leg fractures, immobilisation and bed rest [85][86][87][88][89][90] and even in those with sedentary lifestyles, as observed during COVID-19 home confinement [91]. It should be noted that 60-85% of people worldwide lead a sedentary lifestyle (World Health Organisation). Muscle atrophy results from an imbalance between MPS and MPB, with a negative balance in favour of protein breakdown [75] (Figure 8). Data suggest that both (1) decreased MPS and (2) increased MPB contribute to muscle loss and that the relative contribution of each process to muscle loss depends on the pathophysiological condition [92][93][94]. During this thesis, we were primarily interested in disuse-induced muscle atrophy. Consequences and treatments. The resulting muscle wasting is characterised by muscle alterations such as myofiber shrinkage, changes in fibre types or myosin isoforms, and net losses of cytoplasm, organelles, and total proteins. Loss of myofibers and/or a decrease in myofiber diameter are the most prominent histopathologic features of skeletal muscle atrophy (Figure 9). As mentioned above, skeletal muscle plays a central and major role in whole-body homeostasis. Lack of physical activity is associated with a wide network of diseases, including type 2 diabetes, cardiovascular diseases, cancer, dementia, and osteoporosis [22,95]. These adverse effects are likely, to some extent, mediated by a lack of release of myokines and/or resistance to their effects [20,23]. In addition, skeletal muscle is a major organ of insulin-induced glucose metabolism. Therefore, a loss of muscle mass is closely related to insulin resistance and metabolic syndrome [96]. Muscle atrophy limits daily activities, reduces quality AA: amino acid of life and lengthens recovery time after illness, while increasing morbidity and mortality (Figure 8). Given its adverse consequences, increasing sedentary lifestyle and life expectancy worldwide, muscle wasting affects millions of people and remains a major economic and social burden. Currently, strategies for treating skeletal muscle atrophy include physical exercise, nutritional interventions and some medications [75]. In addition, natural products have a wide range of effects on muscle function. However, their low bioavailability and low intestinal absorption limit their application [97]. To date, no drugs have been approved for clinical use, no effective remedies for muscle atrophy have been discovered, and exercise or nutritional interventions are strategies that are not suitable for all patients (e.g., immobilised or intensive care unit patients). Thus, although our understanding has improved considerably over the last two decades, mainly through the use of laboratory models inducing muscle atrophy, there is still a need to discover new targets and drugs to combat it. An interconnected network of cellular actors Signalling pathways. MPS and MPB are influenced by a wide range of external and internal molecular actors. External stimuli (1) include mechanical load, inflammatory factors such as cytokines (e.g. IL-6, TNF-α 6 ), endocrine factors such as growth factors (e.g. 
IGF-1, insulin), catecholamines and angiotensin, and (2) activate various intracellular pathways (Figure 10). This interconnected network of intracellular actors contributes to the regulation of muscle protein balance by working in synergy or in antagonism to promote anabolism or catabolism [2,37,38,75] (see Appendix 10.4). In the context of muscle atrophy, dysregulation of one or more of these actors results in a blunting of anabolic signalling in favour of catabolism leading to either MPS inhibition, UPS and ALS overactivation, or both [2,37,38,75] (see Appendix 10.4) (Figure 10). In brief, anabolic pathways suppressed in many atrophy conditions include signalling from PI3K-AKT-mTORC1, β2-adrenergic, WNT/FZD, calcineurin, hippo and bone morphogenetic protein (BMP). In contrast, catabolic pathways overactivated in many cases of atrophy include signalling from transforming growth factor-β (TGF-β), AMPK, NF-κB, glucocorticoid receptors, angiotensin, IL-6-JAK/STAT, kinin, sphingolipids, notch or activating transcription factor 4-endoplasmic reticulum stress (ATF4 and ER stress) [2,37,38,75] (see Appendix 10.4) (Figure 10). The precise interconnection and biological actions of these actors still need to be fully elucidated. A detailed description of their regulation is beyond the scope of this manuscript. Nevertheless, a review of which I am a co-author is appended for more details (see Appendix 10.4).
Atrogenes. Atrogenes (i.e. atrophy-related genes) refer to a set of genes whose expression changes in different catabolic situations associated with muscle wasting. Regulation at the protein level is sometimes more complex to elucidate [98,99]. The atrogenes belong to different cellular pathways, mainly the UPS and ALS proteolytic systems, and include the E3-ubiquitin ligases containing tripartite motif 63 (TRIM63)/muscle ring finger-1 (MuRF1) and F-Box protein 32 (FBXO32)/Atrogin-1, as well as some autophagy players such as cathepsin L and BCL2-interacting protein 3 (BNIP3) [99]. For example, TRIM63/MuRF1 targets myofibrillar proteins (i.e. thin and thick filaments), as well as sarcomere structural components such as telethonin, for UPS-dependent degradation (from Peris-Moreno et al., 2021, see Appendix 10.4 [38]). In this thesis project, we focused on the pivotal role in muscle homeostasis of the TGF-β superfamily (see section below) and the Integrated Stress Response signalling (see section 4.3).
4.2 A pivotal role for the TGF-β superfamily in skeletal muscle homeostasis
Overview
The transforming growth factor-β (TGF-β) superfamily is a ubiquitous family that regulates a multiplicity of biological actions including proliferation, differentiation and apoptosis. This superfamily is divided into two signalling pathways, named TGF-β and bone morphogenetic protein (BMP).
Ligands. More than 30 secretable ligands belong to this family including activins A and B (INHBA and INHBB genes), myostatin (MSTN gene), TGF-β1-3 and growth differentiation factors GDF1/10/11 for TGF-β signalling, and anti-müllerian hormone (AMH), BMP2-7 and GDF5/7 for the BMP pathway (Figure 11).
Signal transduction. Ligands bind to a type 2 receptor, which subsequently recruits the type 1 receptor to form a heteromeric complex. These receptors pair up in different combinations to mediate the response of each ligand and the subsequent intracellular response [105][106][107].
Once the receptors are complexed, the adaptor protein SARA 15 recruits the receptor-regulated SMAD family member (R-SMAD), i.e. SMAD2 and 3 for the TGF-β signalling and SMAD1,5 and 8 for the BMP signalling (Figure 11). Thereafter, the receptor complexes phosphorylate the R-SMAD making them recognizable by the common TGF-β and BMP mediator, SMAD4. Subsequently, SMAD4 forms an heteromeric complex with SMAD1/5/8 or SMAD2/3 that translocates into the nucleus and elicits a cell/environment/ligand specific transcriptional program [105-108] (Figure 11). 14 anti-müllerian hormone 15 SMAD anchor for receptor activation TAKE HOME MESSAGE Muscle atrophy is a social and economic burden that results from an imbalance in muscle protein synthesis in favour of muscle protein breakdown. However, despite a better understanding of the anabolic and catabolic signalling pathways dysregulated during atrophy, there is still no proven therapeutic or preventive treatment suitable for all patients. Regulation. Regarding the wide range of ubiquitous biological actions, the TGF-β superfamily is tightly regulated at multiple steps. First of all in the ECM, several mechanisms enable the activation of the secreted ligands from their latent inactive state. Second of all, the active ligands can be sequestrated by antagonists within the ECM, for instance by follistatin (FST gene) the best described for TGF-β signalling, and noggin (NOG gene) for BMP [109] (Figure 11). Moreover, each ligand can bind to several receptor subtypes which, themselves, can be post-translationally modified, adding layers of complexity [105][106][107]. In addition, the inhibitory SMADs (I-SMAD), SMAD6 and SMAD7 can antagonize the signal initiated by ligands by competing with R-SMAD for the binding to a given receptor. SMAD6 selectively inhibits BMP signalling whereas SMAD7 inhibits signalling for both TGF-β and BMP signalling (Figure 11). Moreover, both signalling are also tightly regulated by ubiquitination/deubiquitination processes from receptor and R-SMAD activation to induction of the transcriptional program [38,[105][106][107] (Figure 12). Finally, TGF-β signalling: The master regulator of skeletal muscle atrophy Pathophysiological conditions. TGF-β is a catabolic pathway of great interest within the field of skeletal muscle biology. In the late 1990's, the discovery that deletion of the myostatin gene (MSTN), one of its ligands, and inhibition of its receptor, caused a profound hyper-muscularity in mice, cattle, sheep, and dogs sparked the initial interest in its role in atrophy [24,[112][113][114]. Thereafter, MSTN was found to be elevated in muscles or blood in all type of catabolic situations characterised by muscle atrophy, such as in ageing subjects, in response to prolonged bed rest, in patients with acquired immune deficiency syndrome, renal failure or heart failure [115][116][117][118][119]. Serum levels of other TGF-β ligands such as activin A also rise in response to cancer, kidney failure and heart failure, all associated with muscle wasting [115,[120][121][122]. In addition, TGF-β1 is remarkably elevated in plasma of patients with muscular dystrophies [123]. Besides, microRNA positively controlling TGF-β signalling are increased in muscles of patients following 10 days of sustained bed rest or in patients in intensive care unit [124,125]. The binding of myostatin or activin A/B to activin receptor type-2B (i.e. 
to a lesser extent type 2A) ActR-2B/A (ACVR2A/2B genes) leads to the recruitment and phosphorylation of SMAD2/3 and is associated with muscle atrophy in a multiplicity of catabolic situations [110,114,118,[126][127][128]. TGF-β underlying mechanisms inducing muscle atrophy Activation of MPB. TGF-β signalling is involved in the transcription of the atrogenes FBXO32/Atrogin-1 and TRIM63/MuRF1 known to be involved in ubiquitin-proteasome proteolysis. Mice or cultured mice myotubes treated with TGF-β ligands (i.e. activin A/B, myostatin) show muscle atrophy through the activation of SMAD2/3 resulting in overexpression of Fbxo32/Atrogin-1 and/or Trim63/MuRF1 atrogenes [127,129] (Figure 13). Similarly, exposing healthy mice to exogenous GDF11 ligand results in muscle wasting through the activation of the SMAD2-ubiquitin-proteasome pathway and autophagy axis [130] (Figure 13). Furthermore, overexpression of the transforming growth factor beta receptor 1 (Tgfbr1) in mice muscles also increases the expression of the atrogene Fbxo32/Atrogin-1 and induces muscle fibre atrophy via a SMAD2/3-dependent mechanism [131] (Figure 13). Conversely, mice with muscle specific deletion of Smad2 or 3 are resistant to muscle atrophy induced by Tgfbr1 surepexression or denervation [131,132]. Additionally, inhibition of TGF-β signalling through musclespecific KO type 1 receptors (i.e Tgfbr1 and Acvr1b) or follistatin administration, induces muscle hypertrophy in mice by reducing Fbxo32/Atrogin- 1 and Trim63/MuRF1 expression [133,134]. Mechanistically, overexpression of Smad3 in mice muscles is sufficient to induce Fbxo32/Atrogin-1 expression and ultimately induces muscle fibres atrophy [135]. In cultured mice myotubes, SMAD3 synergises with the transcription factor FOXO3 16 to induce the expression of Trim63/MuRF1 [136] (Figure 13). Finally, myostatin treatment also inhibits the expression of MyoD and Pax3 17 myogenic genes in cultured myotubes [129]. Altogether, these data showed that the canonical TGF-β signalling (i.e. SMAD2/3) is required for muscle damage. 16 forkhead box-O 3 17 paired box 3 Inhibition of MPS. TGF-β catabolic action also involves inhibition of protein synthesis. Myostatin or activin A administration are sufficient to inhibit protein synthesis in mice muscles through inhibition of the AKT/mTORC1 signalling [127,131,137,138]. The same phenotype is observed by overexpressing Smad3 [133,135] (Figure 13). Additionally, inhibition of TGF-β signalling through muscle-specific KO type 1 receptors (i.e Tgfbr1 and Acvr1b) or follistatin administration, induces muscle hypertrophy in mice by increasing AKT phosphorylation [133,134]. Moreover, the hypertrophic effect of myostatin blockade is reduced when mTORC1 is genetically or pharmacologically inhibited [131,133,137]. The mechanisms linking SMAD2/3 to AKT/mTORC1 signalling in muscle atrophy conditions remain unclear. Treatment with insulin-like growth factor 1 (IGF-1) activates AKT and increases the interaction between AKT and SMAD3, leading to inhibition of TGF-β signalling in cultured myoblasts [139]. IGF-1/AKT The dotted lines correspond to the signalling impaired in numerous muscle wasting conditions, and the questions marks the unsolved questions. signalling is altered in many catabolic situations in muscles [140], and impairement of IGF-1 receptor during muscle immobilisation contributes to SMAD2/3 protein accumulation [132]. 
Accordingly, TGFβ signalling might be further amplified in these situations due to compromised interaction between AKT and SMAD3. In turn, this may reinforce the vicious circle between activation of TGF-β signalling and impairment of AKT-mTORC1 signalling (Figure 13). Other muscle detrimental actions. A proteomic analysis of mice muscles overexpressing follistatin has uncovered changes in energy metabolism, fibres type, insulin and calcium signalling, providing insight into the intracellular modifications sensitive to TGF-β signalling [141]. In addition, TGF-β signalling in mice represses mitochondrial biogenesis [142,143] and is associated with mitochondrial disruption in cancer cachexia-induced muscle wasting in mice [144] (Figure 13). TGF-β-induced muscle wasting is also linked to reactive species oxygen (ROS). Injection of TGF-β1 ligand into mice muscles increases ROS content and induces atrophy, both being reversed by administration of an antioxidant treatment, suggesting that TGF-β1-induced muscle atrophy was . Finally, the TGF-β pathway is also known to play a major role in fibrosis, promoting muscle mechanical changes and muscle injury in many muscular dystrophies in mice and humans [146,147] (Figure 13). TGF-β-targeted mediation in muscle atrophy: Panacea or smokescreen ? The establishment of myostatin and activins as robust negative regulators of skeletal muscle has designated these ligands and partners as attractive therapeutic targets for various musculoskeletal disorders. Promising results in vivo. Follistatin (FST gene) is a potent extracellular inhibitor of myostatin and of several other ligands of the TGF-β superfamily and its overexpression results in muscle hypertrophy in mice. This hypertrophy exceeded that observed in Mstn KO mice and was further exacerbated when overexpressing Fst in Mstn KO mice [133,148]. Inhibition of activin A by a specific antibody leads to muscle hypertrophy in mice and monkeys [128], and codelivery of specific activin A and myostatin inhibitors induces a synergistic response with an increase in muscle mass of up to 150% in mice [149]. Finally, the concomitant neutralization of both ACVR2A and ACVR2B receptors with BYM338 antibody results in a stronger skeletal muscle hypertrophy [150]. These observations led to the utilisation of such strategies during muscle atrophy situations: Pharmacological inhibition of myostatin alleviates muscle wasting in cachectic mice [151]. Inhibition of the receptor TGFBR1 by the LY364947 molecule abolishes diaphragm atrophy in rats undergoing sepsis [152]. Genetic or pharmacological blockade of the myostatin/activin A receptor ACVR2B improves muscle mass and function in mice models of cancer cachexia, spinal muscular atrophy, or microgravity [153][154][155]. Overexpression of Smad7, the intracellular TGF-β antagonist, prevents cancer-mediated muscle wasting in mice [156,157]. Other effective strategies using muscle-specific microRNAs have also been investigated. For instance, overexpression of miR-206 attenuates muscle atrophy during denervation in rat by inhibiting TGF-β-SMAD2/3 axis [158]. The system renin-angiotensin is involved in muscle loss. Interestingly, treatment with an angiotensin 2 inhibitor prevents muscle atrophy in mice through a blockade of the TGF-β-induced SMAD2 /3 activation [111,145]. 
Finally, the use of angiotensin 2 inhibitor, extracellular or receptor antagonists shows improvement in muscle function in different muscular dystrophies by inhibiting the TGF-β signalling and hence its consequences as a pro-fibrotic pathway [146]. Disillusion in human clinical trials. Based on the pre-clinical studies, numerous TGF-β-inhibiting pharmacologic agents have progressed in human trials or are still currently under evaluation [119]. Treating elderly patients requiring hip replacement with an anti-myostatin has proven to be safe, although preservation of muscle mass following surgery was minimal [159]. Another anti-myostatin molecule showed promising results with amelioration of muscle locomotor function in spinal muscular dystrophy patients [119]. Other phase 2 clinical trials showed that the use of an antibody blocking the activin type 2 receptor was safe but with little or no functional benefit in patients with muscle wasting (i.e. hip fracture surgery, sporadic inclusion body myositis, sarcopenic elderly, cachexia, chronic obstructive pulmonary disease) [160][161][162][163]. Therefore, although these molecules were promising in rodents, they have shown only a minimal effect in humans or have demonstrated important side effects [119,164]. Indeed, most myostatin inhibitors also repress the activities of other closely related TGF-β family members including GDF11, activins, and BMPs, increasing the potential off-targets. Consequently, a careful distinction between targets is required to evaluate the use of these medications in human clinical practice [119,164]. BMP signalling: The silver bullet for muscle atrophy ? The BMP signalling pathway was originally discovered for its ability to induce bone formation. BMP signalling is important in embryogenesis and development in all organ systems, and also in the maintenance of adult tissue homeostasis [165]. The role of BMP in the regulation of muscle mass was only discovered in 2013 [166,167] (Figure 14). For this reason, much less is known about this pathway and its underlying mechanistic in muscle homeostasis. Fundamental in healthy adult muscle. BMP signalling controls the mass of healthy adult muscles, since increasing the expression of the ligand Bmp7 or a constitutively active BMP receptor type 1A (caBmpr1a, ALK3 protein) promotes a SMAD1/5-dependent hypertrophy phenotype in mice [166,167]. Furthermore, inhibition of the BMP pathway by using inhibitors of ligand-receptor interaction (i.e. LDN-193189 or noggin), or invalidation of Smad1 or 5, leads to muscle atrophy in healthy adult mice muscles [166]. In addition, the profound increase in muscle mass observed in Mstn KO mice is mediated by the activation of BMP signalling via SMAD1/5, whereas overexpressing the selective BMP inhibitor Smad6, significantly reduces this hypertrophic phenotype [149]. Regulation in catabolic conditions. SMAD1/5 phosphorylation increased in rodent muscles exhibiting atrophy associated with motor nerve degeneration, intensive care disuse or with amyotrophic lateral sclerosis [166,167]. In addition, the expression of BMP-related components, i.e. BMP ligands Gdf5 and Gdf6 and the BMP receptor type 1B (Bmpr1b, ALK6 protein), increase in denervated mice muscles. Similarly, the DNA-binding protein inhibitor (ID1)-luciferase reporter, which mirrors BMP transcriptional activity also increases in this situation [166,167]. 
However, SMAD1/5/8 phosphorylation is down-regulated, whereas gene expression of the BMP inhibitor noggin is upregulated in muscles of tumour-bearing mice, and in muscles of pre-cachectic and cachectic patients [168]. Additionally, a decreased in gene expression of BMP-related components is also observed in the elderly with muscle atrophy following hip arthroplasty [169]. A central role to counteract muscle atrophy. Administration of tilorone, a molecule capable of inducing BMP signalling, restores BMP-mediated signalling in muscles, limits muscle wasting, and lengthens the survival of tumour-bearing mice [168]. These data showed the necessity of promoting/maintaining BMP signalling to limit cancer-induced muscle atrophy [168]. Overexpression of caBmpr1a/ALK3 or Bmp7 in mice blunts muscle atrophy induced by denervation or cancers [166,167], while Smad6 or Nog overexpression suppresses SMAD1/5 phosphorylation and exacerbates muscle atrophy during denervation and fasting [166,167]. Besides, the role of altered BMP signalling in muscle atrophy was confirmed by Sartori et al. who observed a severe aggravation of denervationinduced muscle atrophy in Gdf5 KO mice [166]. Moreover, Mstn KO mice, which are usually resistant to denervation-induced muscle atrophy, lose this ability when BMP signalling is concomitantly blunted [166]. Finally, a long non-coding RNA, Chronos, impairs muscle growth in ageing mice by repressing BMP signalling [170]. Altogether, these data strongly suggest that (1) activation of the BMP pathway in skeletal muscle during catabolic conditions is an adaptive response to counteract atrophy, and (2) a deficiency in this signalling plays a critical role in aggravation of muscle frailty [166,167]. Intracellular actions. In innervated mice muscles, hypertrophy induced by increased expression of Bmp7 or caBmpr1a/ALK3 is associated with increased phosphorylation of AKT and of two mTORC1 substrates (i.e RPS6 and 4E-BP1), which is blunted by rapamycin treatment. These data provided the first demonstration that the mTORC1 pathway is indispensable in the regulation of BMP signallinginduced muscle growth [167] (Figure 14). In denervated mice muscles, overexpression of Nog significantly enhances the expression of the Fbxo30. This gene encodes a protein identified as MUSA1 for muscle ubiquitin ligase of SCF complex in atrophy-1 [166]. The authors proved that BMP signalling acts as a positive regulator of muscle mass by repressing the transcription of Fbxo30/MUSA1, whose induction is required for denervation-induced atrophy [166] (Figure 14). Similarly, increased expression of the BMP inhibitor Smad6 also results in increased Fbxo30/MUSA1 expression in denervated muscles [167]. Inhibition of the BMP signalling is also associated with increased expression of Trim63/MuRF1 and Fbxo32/Atrogin-1 during muscle atrophy associated with denervation [167] and cancer [168] in mice. The repression of the expression of these atrogenes by the BMP pathway is believed to be through the suppression of the HDAC4 18 -Myogenin axis, which is involved in the transcription of TRIM63/MuRF1, FBXO32/Atrogin-1, and FBXO30/MUSA1 [167] (Figure 14). 18 histone deacetylase 4 The dotted lines correspond to the signalling impaired when BMP signalling is activated. A finely tuned balance between TGF-β and BMP signalling SMAD4 shared custody SMAD4 is the shared actor between the TGF-β and BMP signalling (Figure 15). 
Smad4 KO mice slightly lose muscle mass and are even more susceptible to muscle wasting during denervation or fasting [166]. Mstn KO mice exhibit a significant activation of the SMAD1/5/8 transcriptional activity with greater recruitment of SMAD4 on the promoter of BMP target genes. On the contrary, Gdf5 KO mice lead to an increased binding of the SMAD4-SMAD2/3 complex to the promoter of TGF-β target genes [166]. This has prompted the concept of a competition between SMAD 2/3 and SMAD 1/5/8 for SMAD4 recruitment. The authors speculated that inhibiting TGF-β signal would release SMAD4 from SMAD2/3 to be more available for SMAD1/5/8. In muscle wasting scenarios, TGF-β over activation is considered B. A. as a factor which reduces the availability of SMAD4 for BMP signalling (Figure 15). This article has strongly highlighted the need for a fine-tuning of the BMP/TGF-β balance to maintain muscle homeostasis [166] (Figure 15). This is consistent with the Myhre syndrome, a human rare autosomal dominant genetic condition characterised by muscle hypertrophy. This syndrome is explained by a missense mutation of SMAD4-leading to defects in ubiquitination, and hence Myhre syndrome patients have increased levels of SMAD4 protein rather than SMAD4 loss [171]. TGF-β/BMP non-canonical signalling and its dual role in muscle homeostasis Non-SMAD signalling. In addition to the canonical SMAD-mediated TGF-β superfamily signal transduction, activated receptors also transduce signals through non-SMAD signalling [172] (Figure 16). For example, the TGF-β-activated kinase 1 (TAK1) protein, originally identified as a member of the MAPK 19 family, is a major component of the non-canonical TGF-β superfamily signalling. TAK1 interacts with the TNF Receptor Associated Factor 6 (TRAF6), which is bound to a receptor type 1 of the canonical SMAD signalling [173,174] (Figure 16). Once the receptors type 1 and 2 are complexed , TRAF6 undergoes autoactivation and subsequently activates TAK1. Thereafter, TAK1 phosphorylates MAPK actors leading notably to the activation of the p38 MAPK [173,175] (Figure 16). In addition to the possible role of TAK1 in muscle atrophy, TAK1 has also been reported to be required for the maintenance of skeletal muscle mass in adult mice. Inducible skeletal muscle-specific Tak1-KO mice leads to severe muscle wasting which is accompanied by increased proteasome activity, elevated autophagy, redox imbalance and mitochondrial dysfunctions associated with decreased p38 19 mitogen-activated protein kinase phosphorylation [182,183]. Overexpression of Tak1 in mice induces muscle hypertrophy, increases protein synthesis, and attenuates denervation-induced muscle atrophy, while its genetic inactivation leads to neurogenic atrophy [184]. A very recent study has revealed promising findings. The authors identified a strong physical interaction between TAK1 and SMAD1 in denervated mice muscles. The authors assumed that TAK1 could regulate the spatial distribution of SMADs proteins by promoting (1) the nuclear localisation of SMAD1-SMAD4 to suppress FBXO30/MUSA1 transcription and ( 2) the cytosolic retention of the inhibitor SMAD6 in denervated muscles. The underlying mechanisms are however still completely unknown [184]. Whether such an interplay between TAK1 and SMADs also exists in other catabolic conditions has never been investigated (Figure 16). The red and green lines represent respectively the catabolic and anabolic signalling, and the questions marks the unsolved questions. MEF2 a pro-maintenance actor. 
Myogenic enhancer factor 2 A-D (MEF2A-D) proteins are key transcriptional regulators of skeletal muscle development, sarcomeric gene expression, fibre type control and glucose uptake and metabolism [185][186][187]. p38 MAPK directly phosphorylates MEF2, increases its transcriptional activity in myoblasts from mice or rats, and is required for a proper differentiation process [188,189]. In addition, Traf6 deletion in mouse myotubes inhibits MEF2 transcription [190]. Furthermore, SMAD3 interacts with MEF2C, reducing MEF2C transcriptional activity in mouse myoblast cells and thus disrupting differentiation [191]. Finally, activin A treatment in human muscle cells reduces MEF2C expression and activity and leads to myotube atrophy [192]. To our knowledge, no study has ever explored the TGF-β/BMP-TRAF6-TAK1-p38-MEF2 axis in skeletal muscle in vitro or in vivo (Figure 16). The pivotal role of TRAF6-TAK1-p38 in muscle homeostasis downstream of TGF-β and BMP signalling raises many unsolved questions. How does TAK1 act as a pro-atrophic actor through TGF-β-p38, and promote muscle gain through BMP? How do TGF-β/BMP ligands and receptors pair to activate SMAD and non-SMAD signalling? How do these signals interact? These unsolved questions warrant further investigation (Figure 16).
TAKE HOME MESSAGE
A major conceptual insight emerging from these studies is that the balance between TGF-β and BMP signalling pathways plays a key role in determining skeletal muscle fate. Therapies targeting TGF-β induce challenging side effects due to its pleiotropic role. At present, extensive effort should be directed toward a better understanding of the role of BMP signalling in muscle homeostasis.
The Integrated Stress Response signalling: Beneficial or harmful for skeletal muscle?
The Integrated Stress Response (ISR) signalling is another pathway involved in muscle homeostasis. First, an overview of the organisation of this signalling pathway will be presented and, second, its role in muscle homeostasis will be discussed.
Overview. The ISR is a well-conserved signalling pathway present in eukaryotic cells, which is activated in response to a range of physiological stresses [193,194]. Such stresses commonly include extracellular factors such as hypoxia, amino acid deprivation, glucose deprivation, heme deficiency and viral infection, and intracellular stresses such as ER stress. The core event of ISR activation is the phosphorylation of the alpha subunit of the eukaryotic translation initiation factor 2 (eIF2α) on its serine 51 [193,194] (Figure 17). To date, four kinases have been reported to phosphorylate eIF2α: PKR-like ER kinase (PERK), double-stranded RNA-dependent protein kinase (PKR), heme-regulated inhibitor (HRI), and general control nonderepressible 2 (GCN2) [193,194] (Figure 17). They all dimerise and autophosphorylate to be activated in response to distinct environmental stresses: amino acid deprivation for GCN2, ER stress for PERK, heme deficiency for HRI and viral infection for PKR [193,194] (Figure 17). Phosphorylation of eIF2α has two consequences: (1) a general inhibition of the translational machinery and (2) the translation of selected mRNAs, including ATF4.
Inhibition of protein translation. Under normal conditions, the GDP-bound form of eIF2α is exchanged for a GTP-bound form by the action of the guanine nucleotide exchange factor eIF2B (Figure 17).
This exchange converts eIF2α to its active form, which can recruit the translation initiation ternary complex and subsequently initiate the first step of protein translation. Under stress conditions, p-eIF2α inhibits eIF2B action, so that eIF2α remains in its GDP-bound form, preventing the formation of the ternary complex and leading to the global inhibition of protein translation [193,194] (Figure 17).
Translation of specific mRNAs. In parallel, p-eIF2α results in the translation of specific mRNAs, including the activating transcription factor 4 (ATF4) (Figure 18). The mechanism is highly conserved, from yeasts to mammals. The ATF4 transcript is constitutively expressed in many cells and has several small upstream open reading frames (uORF) at its 5′ end, out of frame with the main protein-coding sequence (CDS). These uORFs mediate basal repression of ATF4 translation (Figure 18). Under normal conditions, when the ternary complex is abundant (i.e. eIF2α non-phosphorylated), ribosomes initiate scanning at uORF1 and re-initiate at uORF2 overlapping with the ATF4 CDS, hence precluding ATF4 translation (Figure 18). Under stress conditions (i.e. p-eIF2α), a limited number of ternary complexes are formed. Ribosomes still initiate scanning at uORF1 but, due to low levels of the ternary complex, take longer to re-initiate translation. Hence, they re-initiate scanning at the ATF4 CDS (Figure 18). Therefore, upon eIF2α phosphorylation, ATF4 protein expression increases approximately fivefold [193][194][195].
Biological role of ATF4. ATF4 is a bZIP (basic leucine zipper) transcription factor that belongs to the ATF/CREB (cyclic AMP response element binding) protein family [196]. ATF4 is a key determinant of cellular fate in response to ISR activation, mainly acting as a transcriptional activator of a cohort of genes involved in cellular stress adaptation (Figure 17). ATF4 has several dimerisation partners that influence its regulation of gene transcription and govern cellular outcome. ATF4 produces distinct tailored responses, with the transcription of target genes being highly dependent on the cellular context and stresses [193,194]. For instance, upon nutritional stress, ATF4 stimulates the expression of genes involved in amino acid transport and biosynthesis, and in autophagy, to supply new amino acids for de novo protein synthesis [197][198][199] (see section 4.3.2). Moreover, ATF4 induces the transcription of the protein phosphatase 1 regulatory subunit 15A (PPP1R15A, GADD34 protein), the main eIF2α phosphatase, acting as an important negative feedback loop to restore protein synthesis once the stress is overcome [193,194,200] (Figure 17). It has been proposed that the relative duration and intensity of ISR signalling dictate the cellular outcome. Therefore, ATF4 may also facilitate the execution of a cell death transcriptional program when cellular homeostasis cannot be restored, by activating the transcription of apoptotic genes [201][202][203][204] (Figure 17).
Other regulations of ATF4 translation and transcription. ATF4 translation can also be stimulated by anabolic hormones and growth factors, including insulin or IGF-1 (Figure 17), which activate mTORC1 to increase ATF4 translation [205][206][207][208]. mTORC1 activation enables ribosomes to bypass the short uORFs in the 5′ end of the ATF4 mRNA in the same way as ISR activation does [195,207-209]. In contrast to p-eIF2α, mTORC1 activity increases both ATF4 translation and general protein synthesis [205][206][207][208].
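The uORF-based switch described above can be summarised as a small decision sketch. This is a conceptual illustration only, not a quantitative model of translation: the threshold and values below are invented for the example.

```python
# Conceptual sketch of the ATF4 uORF switch described above (illustrative only;
# the cut-off is invented for the example and has no biological meaning).

def atf4_translation(ternary_complex_level, mtorc1_active=False):
    """Return a qualitative readout of ATF4 and bulk translation.

    ternary_complex_level: abundance of the eIF2-GTP-Met-tRNAi ternary complex
    (high when eIF2alpha is not phosphorylated, low under ISR activation).
    """
    reinitiation_is_fast = ternary_complex_level > 0.5  # arbitrary cut-off

    if mtorc1_active:
        # mTORC1 lets ribosomes bypass the inhibitory uORF while bulk
        # translation stays high (anabolic context).
        return {"ATF4": "translated", "bulk_translation": "high"}

    if reinitiation_is_fast:
        # Abundant ternary complex: ribosomes re-initiate at the inhibitory
        # uORF overlapping the coding sequence, so ATF4 is not made.
        return {"ATF4": "repressed", "bulk_translation": "high"}

    # Scarce ternary complex (p-eIF2alpha): re-initiation is delayed and
    # occurs at the ATF4 coding sequence while global translation falls.
    return {"ATF4": "translated", "bulk_translation": "low"}

print(atf4_translation(0.9))                      # unstressed cell
print(atf4_translation(0.1))                      # ISR activation
print(atf4_translation(0.9, mtorc1_active=True))  # growth-factor/mTORC1 input
```

The last case corresponds to mTORC1-driven ATF4 induction, in which ATF4 rises together with, rather than at the expense of, global protein synthesis.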
In this context, ATF4 heterodimers primarily induce the transcription of genes that promote amino acid uptake and synthesis to facilitate anabolism (Figure 17). In addition to the translational regulation of ATF4, it can also be regulated at the transcript level. ATF4 mRNA levels are low under normal conditions but are induced in response to different stresses by different transcription factors (Figure 17). For example, ATF4 expression is induced in response to oxidative stress by NRF2 (NF-E2-related factor 2), to chemotherapeutic drugs by CLOCK (clock circadian regulator), or to ER stress and starvation by TFEB (transcription factor EB) and TFE3 (transcription factor binding to IGHM enhancer 3) [193,194]. Interestingly, in a positive feedback loop, ATF4 downstream gene targets, such as NUPR1 (nuclear protein 1, transcriptional regulator), can also elevate ATF4 mRNA levels [210] (Figure 17).

How ATF4 facilitates such diverse cellular adaptations, ranging from anabolism to growth arrest, is an important and unsolved question. One possibility is that different ATF4 heterodimers, or different combinations of ATF4 heterodimers, mediate the different effects of this signalling.

The ISR pathway involvement in autophagy and mitochondrial homeostasis
As written above, autophagy and mitochondrial quality control are cellular processes essential for muscle homeostasis, and deficiency in either is associated with muscle wasting [13,19,66]. The ISR is an important signalling pathway involved in these two processes in a wide range of tissues and cells.

Role in autophagy. During hypoxia, ER stress, amino acid deprivation, lipopolysaccharide treatment, or a low protein diet, ATF4 binds to the promoters of specific genes involved in autophagy to promote (1) a pro-survival response in vitro or in the liver, heart and skeletal muscle of rodents [197,199,211-217] or (2) a pro-lethal autophagy response in heart and kidney [218,219] (Figure 19). ER stress or hypoxia leads to the upregulation of certain autophagy actors in a PERK-dependent manner in vitro and in the heart of mice [197,211,212,214,216,218]. Of note, PERK regulates all stages of autophagy including induction, vesicle nucleation, phagophore elongation, and maturation [220]. GCN2 is also essential for the induction of autophagy-related genes upon amino acid starvation in vitro and in the mouse intestine, while a mutant form of eIF2α suppresses the autophagy process [197,221-223]. Moreover, there are direct interactions between eIF2α subunits and core autophagy proteins, although it is not yet known whether these interactions are biologically significant [224,225].

Role in mitochondrial quality control. Over the past decades, growing evidence has placed the ISR signalling as essential in mitochondrial quality control, through the mitochondrial unfolded protein response (UPRmt). UPRmt is a mitochondrial stress response induced by a loss of mitochondrial homeostasis. UPRmt activates a transcriptional program of mitochondrial chaperone proteins and proteases (encoded by nuclear DNA) to promote the recovery of mitochondrial proteostasis [226]. Nonetheless, if the UPRmt is unable to repair mitochondrial damage, it promotes the elimination of the entire mitochondrion by mitophagy. Finally, if the damage persists, cells undergo senescence and/or apoptosis [226].
UPRmt is evolutionarily conserved and there are three key regulatory proteins of UPRmt, including ATF4, which is often overexpressed upon mitochondrial damage [227]. Cells lacking a functional gene copy of ATF4 fail to upregulate several mitochondrial enzymes and exhibit a reduction in mitochondrial respiration. A global transcriptomic analysis has validated the presence of ATF4-binding motifs in many UPRmt genes [226-228]. In skeletal muscle, evidence of ATF4 activation by mitochondrial stresses is growing. For instance, ATF4 protein accumulation is linked to HRI or GCN2 activation following the loss of mitochondrial membrane potential in vitro [229,230], and is associated with PERK upon a genetic defect in mitochondrial fission in mouse muscles [231,232] (Figure 19). In addition, mitochondrial stresses, induced by a genetic deficiency in mitochondrial fusion or mitophagy in mouse muscles, increase p-eIF2α, ATF4 protein levels and the expression of an ATF4 target gene, the fibroblast growth factor 21 (FGF21) [233]. Of note, although the biological effects of FGF21 are largely unknown, it is massively induced by defects in mitochondrial homeostasis and, in turn, improves mitochondrial function [233].

There is evidence that the activation of UPRmt-ATF4 following mitochondrial disturbances in muscles can have protective or maladaptive effects. For instance, in a rare childhood mitochondrial myopathy (Reversible Infantile Respiratory Chain Deficiency), the induction of the ATF4-FGF21 axis and the subsequent induction of mitochondrial biogenesis-related genes precede the complete disease recovery phase in humans [231]. On the contrary, genetic deletion of a mitochondrial fusion protein in muscles leads to an accelerated ageing phenotype with increased muscle atrophy and inflammation through an ATF4-FGF21-dependent mechanism [234].

Taken together, these studies highlight the intricate involvement of the ISR in autophagy and in mitochondrial quality control. These studies support a dual role for ATF4 in mediating survival and cell death responses, depending on the duration and type of stress, the cell type and the pathophysiological context. Moreover, much remains to be explored in understanding ISR-ATF4 involvement in autophagy and mitochondrial quality control in muscle homeostasis.

The ISR pathway implication in muscle atrophy
Some of the ISR members have been associated with muscle weakness and atrophy in different catabolic conditions and will be discussed in the following part (Figure 19).

The ISR kinases and eIF2α
PERK kinase. A ligand-activatable PERK kinase induces p-eIF2α and the expression of ATF4 target genes, and leads to severe muscle atrophy within a few days after injection into mouse muscles [235]. However, genetic ablation or pharmacological inhibition of PERK reduces skeletal muscle mass and strength and increases gene expression of UPS and ALS components in healthy mouse muscles [236]. In addition, Perk is increased in a model of cancer cachexia-induced muscle atrophy in mice, but genetic ablation or pharmacological inhibition of PERK exacerbates muscle atrophy [237]. Of note, in this study, the authors observed that PERK increased (1) p-eIF2α and Atf4 expression and (2) gene expression of Unfolded Protein Response (UPR) components, the UPR being another PERK downstream signalling pathway [237]. Therefore, further studies are needed to decipher through which downstream pathways PERK acts as a negative or positive regulator of muscle mass (Figure 19).

GCN2 kinase.
Gcn2 deficiency protects mice from denervation-induced muscle atrophy while forced Gcn2 expression worsens denervation-induced atrophy [238]. The authors highlighted that GCN2 could promote FOXO3 nuclear accumulation, and the subsequent transcription of the atrogenes Trim63/MuRF1 and Fbxo32/Atrogin-1 [238] (Figure 19). Of note, whether the atrophic role of GCN2 is mediated by the downstream p-eIF2α-ATF4 signalling has not yet been demonstrated.

PKR kinase. Levels of phosphorylated PKR and eIF2α are increased in muscles of cancer cachectic patients [239] and mice [240]. In addition, the pharmacological inhibition of PKR attenuates muscle atrophy in cancer cachectic mice through a possible inhibition of NF-κB [240] (Figure 19). Increased levels of intracellular calcium might be the upstream stress activating PKR in skeletal muscle (Figure 19). Currently, as for the GCN2 kinase, the atrophic role of PKR has not been demonstrated to be linked to the downstream ATF4 signalling.

eIF2α. P-eIF2α is enhanced in muscles during cancer cachexia and amyotrophic lateral sclerosis in mice or humans [237,239,241]. In addition, fasting-induced muscle atrophy is hampered when mice express a phosphorylation-resistant form of eIF2α [242]. However, p-eIF2α is decreased in atrophic mouse muscles following food deprivation, while an upregulation of Atf4 and some of its target genes is observed [243]. In addition, p-eIF2α is also decreased in atrophying mouse muscles during disuse and spinal cord isolation [244,245]. Therefore, p-eIF2α is definitely not a common feature of muscle atrophy, and its augmentation is likely more a consequence of various extracellular stresses during muscle wasting conditions (Figure 19).

The atrogene ATF4
Like other cell types, skeletal muscle fibres do not significantly express the ATF4 protein in the absence of cellular stress, especially as ATF4 is non-essential for the normal development or maintenance of skeletal muscle mass and function [246-248]. Mice with a lifelong absence of ATF4 expression in skeletal muscle fibres undergo normal skeletal muscle development and exhibit normal muscle mass and function until late in life, at which time they begin to exhibit protection from age-related muscle atrophy and weakness [246,247]. ATF4 is considered an atrogene because its mRNA levels rise in muscle during many catabolic conditions that cause muscle atrophy in mice (i.e. denervation, spinal cord isolation, fasting, ageing, immobilisation, myopathies) [99,242,246,247,249]. In addition, deletion of Atf4 in mouse muscles limits muscle atrophy during starvation, immobilisation and ageing [242,246-248]. Additionally, a transcriptionally inactive ATF4 does not lead to a reduction of myofiber size during fasting in mice, suggesting that the ATF4-mediated transcriptional program is required to induce atrophy [242]. When ATF4 is expressed in skeletal muscle fibres, it interacts with several different bZIP family members, but only the heterodimerisation with C/EBPβ (CCAAT enhancer binding protein β) is currently known to be required for muscle atrophy caused by immobilisation [250] (Figure 19). The ATF4-C/EBPβ heterodimer induces the transcription of the growth arrest and DNA damage inducible alpha (GADD45A) gene in muscle fibres by binding a DNA sequence that is 100% conserved in all mammalian genomes [250] (Figure 19).
Of note, in most of the studies, the authors only measured ATF4 mRNA expression and expression of its target genes as evidence of its activity, because endogenous ATF4 protein cannot be reliably detected in skeletal muscle, presumably due to its low abundance, very short half-life, and the lack of high-quality antibodies.

(Figure 19: red and green lines represent, respectively, the demonstrated pro-atrophic and pro-maintenance signalling, and question marks indicate the unsolved questions.)

At present, the mechanisms by which ATF4 is transcriptionally and translationally activated in catabolic conditions are unclear but may involve different mechanisms or combinations of mechanisms. For example, contrary to anabolic conditions, mTORC1 activity is increased in muscle during advanced ageing and is thought to contribute to the pathogenesis of age-related skeletal muscle atrophy [207,251,252]. In this context, mTORC1 may be a driver of the ATF4 pathway in skeletal muscle. In contrast, many acute stress conditions repress mTORC1 activity in muscle fibres while inducing eIF2α kinase signalling [237,239,241,253]. Thus, in some situations, such as starvation, eIF2α signalling may be the driver of the ATF4 pathway in skeletal muscle fibres (Figure 19). Whether the canonical ISR signalling is always implicated as an atrophic inducer seems unlikely. Further investigation of these issues might uncover new, unrelated signalling.

ATF4 target genes: the atrogenes GADD45A, CDKN1A and EIF4EBP1
The mechanism by which ATF4 promotes muscle atrophy does not involve an increase in FBXO32/Atrogin-1 or TRIM63/MuRF1 gene expression [242]. Much remains to be discovered, but it is currently known that ATF4 contributes to muscle atrophy by modulating the transcription of three genes considered atrogenes: GADD45A, cyclin-dependent kinase inhibitor 1 (CDKN1A) and eukaryotic translation initiation factor 4E binding protein 1 (EIF4EBP1) [99] (Figure 19).

GADD45A. GADD45A is a myonuclear protein that induces widespread transcriptional changes in muscles. It represses genes involved in anabolic signalling and energy production, and it induces pro-atrophic genes [246]. The GADD45A transcript is weakly expressed in non-catabolic conditions in skeletal muscle fibres but strongly induced during muscle atrophy (e.g. ageing, amyotrophic lateral sclerosis, critically ill patients, fasting, immobilisation) in pigs, humans and mice [242,246,248,254-257]. For example, GADD45A is increased 22-fold in muscle biopsies from critically ill patients with severe generalized skeletal muscle atrophy compared to healthy controls [257]. Gadd45a is the earliest and most sustained gene shown to be increased in muscles after denervation in mice [258]. Forced expression of Gadd45a in mouse muscles or cultured mouse myotubes induces atrophy even in the absence of any catabolic stimuli. Additionally, ATF4 is necessary and sufficient to induce Gadd45a expression during fasting- and immobilisation-induced muscle atrophy in mice [242,246,259,260]. HDAC4 is required for the muscle atrophy induced by Gadd45a overexpression during denervation, and forced expression of Hdac4 is sufficient to induce muscle atrophy in healthy mouse muscles [260]. By forming a complex with MEKK4 (mitogen-activated protein kinase kinase kinase 4) in muscles, GADD45A increases MEKK4 protein kinase activity, which leads to GADD45A-MEKK4-mediated skeletal muscle atrophy in healthy mouse muscles [259] (Figure 19).
Despite the strong evidence that GADD45A is an atrogene, a recent study has shown that it is likely induced in the context of denervation for a protective effect, because mice lacking Gadd45a show accelerated and exacerbated neurogenic muscle atrophy [258].

CDKN1A. Another important ATF4-C/EBPβ target gene in muscles is CDKN1A (P21 protein). CDKN1A gene expression is strongly associated with muscle atrophy in pigs, rodents and humans [242,246,248,254-257,261,262]. Increased Cdkn1a expression in mouse muscles is sufficient to induce muscle fibre atrophy and is required for ATF4-mediated muscle atrophy during immobilisation [248]. However, to date, the cellular mechanisms by which P21 induces muscle atrophy are not defined. Although the P21 protein is a well-known cell cycle inhibitor, its mechanistic role in muscle fibres seems likely to be different, essentially because muscle fibres have exited the cell cycle. Its role in the control of skeletal muscle mass might involve the repression of spermine oxidase (SMOX) gene expression, a gene suggested to be anti-atrophic, although the mechanisms remain completely unknown [262,263] (Figure 19).

EIF4EBP1. ATF4 heterodimers also induce the expression of the EIF4EBP1 gene, encoding the well-established inhibitor of global protein synthesis [250,262] (Figure 5). EIF4EBP1 gene expression rises in numerous catabolic conditions and is therefore also considered an atrogene [99,248,249,261,264]. Accordingly, Eif4ebp1 is often induced alongside Gadd45a and Cdkn1a during skeletal muscle atrophy in mice [246,247]. However, it remains unclear which ATF4 heterodimers regulate EIF4EBP1 gene expression [264] (Figure 19).

TRIB3. Finally, another ATF4 target gene, the tribbles pseudokinase 3 (TRIB3), has been associated with muscle atrophy in numerous studies. Trib3-deficient mice show increased muscle mass and MPS rate, together with decreased expression of the atrogenes Trim63/MuRF1 and Fbxo32/Atrogin-1, in healthy muscles [265]. In addition, Trib3-deficient mice show attenuation of muscle fibre atrophy and fibrosis during ageing through increased autophagy flux [266]. Finally, these mice are also partially protected from food deprivation-induced muscle atrophy [267] (Figure 19).

TAKE HOME MESSAGE
A major conceptual insight emerging from these studies is that the ISR pathway is involved (1) in the maintenance of muscle homeostasis, likely through autophagy and mitochondrial quality control, but also (2) in the induction of muscle atrophy through ATF4 and its target genes. How such an interplay can occur with two different outcomes on skeletal muscle remains to be elucidated.

A natural model of muscle atrophy resistance: the hibernating brown bear
The first three chapters of this review show (1) the need to preserve muscle mass in order to remain healthy, but also (2) that, despite the huge amount of data acquired on the multiple signalling pathways involved in atrophy, there is still no approved treatment that can be used in the clinic. Most, if not all, of the mechanisms have been elucidated using classical laboratory models in rodents and humans. In this thesis project, we have chosen to combine classical and biomimetic approaches.

Getting inspired by the oldest research laboratory: Nature
As noted earlier, global health is being challenged by an ageing population and epidemics of lifestyle diseases such as type 2 diabetes, obesity, atherosclerosis, osteoporosis, or muscle wasting.
Rather than destroying and exploiting living beings, we should learn from and emulate ingenious evolutionary adaptations to solve current human challenges. Nature is the oldest of research and development laboratories, where failure becomes fossil and our environment holds the secrets of survival. Biomimicry, or bio-inspiration, is an approach that (1) seeks sustainable solutions to human challenges by mimicking nature's patterns and strategies and (2) has enabled significant human biomedical advances and progress [268]. For example, about one-third of the medicines we use today are derived from nature. In addition, marine organisms have inspired polymers for medical adhesives [269], and microscopically small mosquito needles have inspired the development of small, flexible microprobes to be implanted in the brain [270]. A particular species of Namibian desert beetle has a system for collecting water by condensing fog into water droplets on its exoskeleton and gradually channelling them to its head to drink. Inspired by this ingenious strategy, researchers have replicated this structure with glass and plastic, intending to cover existing objects to turn them into fog collectors, potentially ending the world's water shortage [271] (https://www.youtube.com/watch?v=IoflT3Uvels). The examples are vast and endless given the great diversity of the millions of species on Earth, which live in all types of environments, from extreme temperatures to total hypoxia, to the driest places on the planet [268]. Therefore, the development of future drugs, technologies, or biomedical advances depends on humans preserving the diversity of nature. Hibernation is a perfect example of seasonal variability that holds clues to diverse solutions for human pathologies.

Hibernation: a bioinspired approach for human challenges
Hibernation comes from the word hibernare, "the action of overwintering", and may date back 250 million years in the Antarctic Circle [272]. Hibernation is an adaptation used by some animals to cope with an episodic or seasonal lack of energy due to unfavourable environmental conditions (e.g., low food/water availability, high predation pressure) [273]. Torpor is at the heart of hibernation: it represents a period of metabolic suppression that can last from a few hours to several weeks. Hibernation is a more elaborate behaviour, structured into several long periods of torpor often separated by brief interbout arousals (IBA). IBAs last approximately 24 hours and are present in nearly all small hibernators (i.e. <10 kg) but not in hibernating bears (see section 4.4.3) [273]. The most typical hibernation season is the cold season (i.e. fall to spring), but hibernation is also found in mammals inhabiting temperate and tropical climates [273]. During hibernation, along with the sharp reduction in metabolic rate (MR), there is a strong reduction in respiratory and heart rates, followed by a decrease in body temperature (Tb) [274]. The decrease in Tb can be either extreme, as in small hibernators (e.g. Arctic ground squirrels) where Tb can drop below 0 °C [275], or mild, as in bears, where Tb rarely drops below 30 °C [276-280]. Despite the multiplicity of phenotypes and regulations of hibernation onset, there is likely a single underlying mechanism that reduces MR, although it is still unknown. However, hypotheses are emerging about the role of the hypothalamus, in particular the dorsomedial hypothalamus and a recently discovered set of neurons in the preoptic area [281-283].
A complete understanding of the molecular mechanisms triggering torpor would be of great value in developing a process to induce a hibernation/torpor state in humans, for example for a manned deep space expedition [284]. In the next sections, we will discuss hibernation in bears and how understanding its characteristics could provide insights into human health challenges, particularly muscle atrophy.

Hibernation in bears
Hundreds of years of myths and legends
The English word "bear" reflects the long history between bears and humans. Around 500 BC, in Northern Europe, the brown bear was the undisputed predator. In Proto-Germanic, an ancient language spoken by the Nordic tribes, the bear was known by the harsh name of hrktos. Hunters were so scared of hrktos that they came to believe that the mere mention of the bear was a cause for trouble. Linguists believe that the word became so taboo that tribes began to use the euphemism bero, "the brown one", instead. In other words, bears were the Voldemort of Northern Europe. As another example, among the Celtic islanders, the bear was more associated with power and sovereignty, as evidenced by the figure of King Arthur, the bear-king. The etymology of the name Arthur comes from the Celtic name of the bear, artos, which means both "bear" and "warrior". King Arthur's death occurs on All Saints' Day, when the bears begin to hibernate. Like them, Arthur does not die; he goes into dormancy. According to the history books, it is at Candlemas that Arthur takes the sword Excalibur out of the rock, a symbolic day since it corresponds roughly to the end of the hibernation of bears. Hundreds of myths and legends surround bears. In recent decades, contemporaries have become interested in the bear for the characteristics of its hibernation and the opportunity that this represents for medicine.

Features of bear hibernation
Ursidae family. Bears are mammals of the diverse family Ursidae: (1) they comprise eight species in three subfamilies, (2) they are geographically widespread in North and South America, Europe and Asia, and (3) they inhabit a wide range of ecological niches from Arctic ice to tropical rainforests [285]. Bears living in warm climates do not hibernate, nor do the giant panda or the polar bear. In this manuscript, we will only discuss bears that hibernate, i.e. brown bears (Ursus arctos), American black bears (Ursus americanus) and Asiatic black bears (Ursus thibetanus).

Winter in the dens. Bears enter dens in October-November and remain there until late April or early May (Figure 20); both periods are highly dependent on weather conditions (i.e. snow levels). Unfortunately, researchers have observed a decrease in hibernation duration due to global warming. They estimate that for every 1 °C increase in winter temperatures, bears hibernate six days less [286]. As mentioned before, hibernating bears do not exhibit IBA, and not only do they remain physically inactive inside their dens, but they also do not eat, defecate, drink or urinate [276-279,287-289]. Furthermore, in hibernating bears, there is a significant decrease, compared to active bears, in average heart rate (i.e. from 50-80 to 10-30 beats per minute) [279,292,293] and respiratory rate (i.e. from 10-12 to 5-7 breaths per minute) [294] (Figure 21). At the renal level, the glomerular filtration rate decreases during bear hibernation compared to summer (i.e.
from 117 ml to 37 ml per minute), resulting in the production of very small amounts of urine that are reabsorbed by the urothelium of the bladder [295]. MR and Tb are clearly linked. However, bears decrease their activity, heart rate and MR before their Tb declines, prior to entering the den (Figure 21). Furthermore, Tb is the first physiological parameter to change before den exit, whereas bears maintain a reduced MR up to 3 weeks after den exit (Figure 21). Therefore, the pronounced reduction and delayed recovery of MR in hibernating bears suggest that the majority of metabolic suppression during hibernation is independent of Tb decline [278,279].

Energy storage. Fat storage is increased before hibernation (i.e. hyperphagia). For example, Swedish brown bears achieve this by eating large amounts of carbohydrate-rich berries [296]. During the fall, bears more than double their daily energy intake, reaching a total weight gain of 40% [297]. During this period of hyperphagia, lipogenesis-related genes are upregulated in white adipose tissue (WAT) to promote fat storage [298]. Energy requirements in winter rely primarily on the mobilisation and oxidation of lipid fuels, with bears experiencing a loss of approximately 22-25% of their body mass during the hibernation season [299,300] and only a moderate loss of muscle protein (see section 4.4.4) [301,302]. The respiratory quotient (the ratio of the volume of carbon dioxide evolved to that of oxygen consumed by an organism or tissue in a given time) in bears decreases from 0.8 to nearly 0.7 in winter, reflecting pure fat burning [303,304]. In states of negative energy balance, such as during hibernation or starvation, triglycerides (TG) stored in the WAT are converted to glycerol and FFA, which are used for gluconeogenesis (glucose production from non-carbohydrate carbon substrates), ketogenesis (production of ketone bodies from fatty acid breakdown) and β-oxidation (fatty acid catabolism) in the liver. Consistently, genes related to these three biological processes are upregulated in the liver of hibernating bears [305-307]. This is also consistent with the maintenance of blood glucose levels via gluconeogenesis, and the increase in circulating TG, FFA and ketone bodies during hibernation. Of note, a decrease in circulating glycerol is observed in winter, probably due to greater uptake by the liver and its reaction with ammonia to form amino acids (see section 4.4.4) [298,303,308-312]. Altogether, these data highlight the amazing metabolic flexibility (the capacity to switch among energy substrates to generate ATP depending on the physiological circumstances) of hibernating bears [313,314].

(Figure adapted from Evans et al., 2016 [279].)

Model for human pathologies. Hibernating bears appear to be insulin resistant compared to active bears. Insulin resistance is normally observed in hyperinsulinemic diabetic humans [312]. Surprisingly, however, hibernating bears do not develop type 2 diabetes. Plasma cholesterol and TG levels are twice as high in hibernating bears as in healthy humans, but bears show no signs of developing atherosclerosis or cardiovascular damage [315]. In brief, bears readily emerge from their dens in spring and show no signs of organ damage [316] (Figure 22). Under similar conditions, humans would develop cardiovascular disease, obesity, muscle loss, osteoporosis and other deleterious health consequences.
Dozens of examples could be discussed regarding the extraordinary characteristics of bears during hibernation and the therapeutic possibilities they offer to treat human pathologies (Figure 22). The conservation of muscle mass during a long period of fasting and physical inactivity has attracted our attention and will be discussed in the next section.

Skeletal muscle features in hibernating bears
Over the six months of total physical inactivity and fasting, hibernating bears experience only a moderate loss of muscle protein content. Conversely, similar conditions over a shorter period of time lead to a significant reduction in muscle mass and function in humans [87,90]. Muscle protein content has been reported to decrease by 4-10% in the gastrocnemius and biceps femoris muscles [302,319], or by 15% in the vastus lateralis muscle [301]. Interestingly, in the latter study, the 15% muscle loss observed after 1 month of denning remained the same 4 months later [301] (Figure 23). Furthermore, the nitrogen content of the vastus lateralis muscle remained unchanged in winter compared to summer, indicating a moderate loss of proteins [301]. The limited decrease in muscle protein content is consistent with the slight increase in 3-methylhistidine observed in the serum of hibernating bears [309]. Some researchers have speculated that the small amount of atrophy exhibited by bears may simply be due to muscle dehydration [317]. However, other studies have not observed any change in muscle water composition during winter [302].

Muscle CSA. Remarkably, the number of muscle fibres as well as their CSA remain unchanged in hibernating bears in most studies [302,319,320]. A recent paper showed a 26% decrease in CSA in the sartorius muscle after 5 months of hibernation, but the authors considered this a minimal loss compared to what would happen in humans [321]. Indeed, a markedly greater muscle loss would be expected in humans under a similar period of disuse [322,323].

Fibre type composition. It should be noted that the proportion of slow and fast muscle fibre types in active bears is roughly the same as in rodents or humans [320,324], and, like humans, bears lack the type 2B fibre isoform [319,324]. Studies have reported conflicting changes in the proportion of fibre types between seasons, with (1) an increase in type 1 fibre content and a decrease in type 2A fibre content [324,325], (2) a moderate shift towards more type 2 muscle fibres [302], or even (3) no change [302,319,320]. These discrepancies can be explained by the biochemical techniques used and the timing of the muscle sampling during hibernation.

Muscle strength and neuromuscular activity. The loss of muscle strength is about 29% after 110 days of anorexia and physical inactivity during hibernation in bears. This is about half of what is observed in humans confined to bed for 90 days, who show a 54% loss in muscle strength while on a balanced diet [318,326]. Furthermore, hibernating bears show no or very limited changes in muscle contractile properties (e.g. contraction time, half relaxation time) [302,318,319,326]. Although bears do not exhibit as vigorous shivering thermogenesis as small hibernators, they do make occasional postural adjustments, wake up briefly and shiver. It has been suggested that this mild muscle activity may limit atrophy [278,318,327]. Finally, neural inputs cannot be considered as a mechanism limiting muscle atrophy.
Indeed, the denervation-induced decrease in muscle mass in active bears is comparable to that observed in other mammals, whereas hibernating bears are partly resistant to denervation-induced muscle atrophy [328].

Regulation of metabolism and signalling pathways
Muscle protein sparing. Lohuis et al. showed that muscle protein synthesis and degradation were lower in bears during hibernation than during the active period [301]. Furthermore, they observed that both phenomena remained unchanged between the beginning and the end of hibernation, indicating that protein balance is maintained throughout the hibernation period (Figure 24). Moreover, protein synthesis was greater than protein degradation in summer bear muscles, suggesting that bears accumulate muscle protein during the season when food is available in abundance [301] (Figure 24). A comprehensive transcriptomic analysis in skeletal muscle also showed that bear hibernation results in (1) increased expression of genes involved in protein biosynthesis (translation) and ribosome biogenesis and (2) decreased expression of genes related to proteolysis in skeletal muscle [304,307,329] (Figure 25). Of note, whole-body protein sparing is also supported by transcriptional downregulation of genes related to amino acid catabolism in the liver of hibernating bears [305-307] (Figure 25). Unchanged or decreased levels of circulating urea and decreased aminotransferase activities reflect low muscle protein mobilisation in bears during winter [289,299,309,330,331]. This is consistent with the coordinated downregulation of genes involved in urea production in skeletal muscle, but also in the liver, during hibernation in bears [304,307]. Furthermore, urea recycling is very efficient in hibernating bears, with 99.7% of the urea produced being recycled into protein, which probably limits muscle protein degradation [304,307]. The mechanisms remain to be clarified, but urea recycling would include a role for the gut microbiota in the hydrolysis of urea to ammonia, which would subsequently be used for the synthesis of amino acids, in particular glutamine [304,307] (Figure 25). Consistently, an increase in blood glutamine is observed in bears during hibernation [310]. Finally, hibernating bear muscles show an increase in a MEF2A-mediated transcriptomic signature, contributing to a decrease in the expression of TRIM63/MuRF1 and FBXO32/Atrogin-1 [332] (Figure 25).

Muscle energy metabolism. Glycolysis is preserved in the skeletal muscle of hibernating bears, as suggested by (1) an overall increase in the protein abundance of all glycolytic enzymes, (2) an increase in muscle lactate dehydrogenase activity and (3) maintenance or reduction of circulating lactate levels [303,310,331] (Figure 25). Bear muscles still oxidise glucose and produce lactate during hibernation. This could help maintain skeletal muscle functionality in unexpected situations, such as an emergency exit from the den that would require a rapid increase in ATP production [308,310]. Glycolysis could be fuelled by hepatic gluconeogenesis and mobilisation of muscle glycogen content, which is higher in bear muscles in winter compared to summer [310,320] (Figure 25). Together, these studies have led some authors to suggest that the Cori cycle (degradation of glucose into lactate in muscle, then transformation of lactate into glucose, and finally into glycogen, in the liver) may be active in bears during hibernation, thus contributing to muscle protein sparing [298,310] (Figure 25).
PDK4 (pyruvate dehydrogenase kinase isoenzyme 4) is a switch that enables the use of lipid substrates and thus limits the entry of glycolytic intermediates into the tricarboxylic acid (TCA) cycle. PDK4 protein is increased in hibernating bear muscles [298,310,333], whereas proteins involved in the TCA cycle and β-oxidation are predominantly downregulated in hibernating bear muscles [310,333] (Figure 25). Although lipids are the preferred fuels in winter, bear muscle metabolism is mainly characterised by reduced ATP turnover, so that the reduced β-oxidation in muscles during hibernation is due to and/or contributes to the depression of metabolic rate [307,310].

Hormonal/growth factor changes. Cortisol is a glucocorticoid hormone that reduces glucose uptake in peripheral tissues and stimulates lipolysis in adipose tissue and skeletal muscle. Elevated cortisol levels are observed during hibernation and are associated with reduced (1) protein levels of phosphorylated and total AMPK and (2) expression of PGC-1α/PPAR-α (peroxisome proliferator-activated receptor-gamma coactivator 1 / peroxisome proliferator-activated receptor) in skeletal muscle and adipose tissue [308] (Figure 25). Low plasma levels of IGF-1 and IGF-2 are also recorded in hibernating bears, but the authors suggest that they would be present in a different spatial conformation than in summer and therefore be more available to tissues such as skeletal muscle [334] (Figure 25). Serum from hibernating bears is also enriched in some specific n-3 polyunsaturated fatty acids, including docosahexaenoic acid (DHA) [310,335] (see Appendix 10.5). DHA has previously been associated with increased muscle glycogen stores and subsequent prevention of muscle atrophy in fasted mice [336]. Interestingly, a 3-fold increase in muscle glycogen content is recorded in hibernating bear muscles compared to active summer bear muscles, while muscle protein is preserved [310] (Figure 25).

(Figure 25 legend: the words in green, red and black represent, respectively, the functions/metabolites that are down-regulated, up-regulated or unchanged in winter compared to summer; two-coloured words (e.g. urea, lactate) represent discrepancies found in the literature; words in bold represent metabolic processes (e.g. glycolysis); words in italics represent biological processes regulated at the transcriptional level; dashed lines delineate altered intracellular mechanisms and pathways in skeletal muscle in winter versus summer.)

Antioxidant defences. Most subunits of the mitochondrial complexes are downregulated in hibernating bear muscles [310,337]. This may be due to reduced mitochondrial content in winter compared to summer in bear muscles. In addition, protein levels of UCP3 (uncoupling protein 3), which limits ROS production [338], are increased in bear muscles during hibernation [321,337]. Consistently, increased levels of proteins involved in cytosolic antioxidant systems, higher plasma antioxidant capacities and maintenance of the GSH/GSSG (reduced/oxidised glutathione) ratio are recorded in bear muscles during hibernation [337,339] (Figure 25). Overall, this suggests that bear muscles do not undergo significant oxidative damage in winter and/or that antioxidant defence systems remain effective [337].
Overall, metabolic adaptations in skeletal muscle (and other tissues such as liver and adipose tissue) promote (1) lipid catabolism during hibernation, (2) maintenance of glycolysis, and (3) regulation of intracellular pathways that contribute to the preservation of muscle proteins during this period of fasting and physical inactivity.

Circulating antiproteolytic compounds in hibernating bear serum
The preservation of the vital functions of most organs in bears during hibernation, including skeletal muscle, has led researchers to speculate that active circulating factors may be responsible for these characteristics. Rat muscles incubated ex vivo in the presence of hibernating bear serum exhibited a 40% decrease in net proteolytic rate compared to muscles incubated with active bear serum. This inhibition of proteolysis was accompanied by a decrease in gene expression of the lysosomal (e.g. cathepsin B) and ubiquitin-dependent (e.g. ubiquitin) proteolytic systems. These results showed for the first time that a compound present in bear serum during hibernation had an anti-proteolytic property on skeletal muscle [340]. Subsequently, our team showed that cultivating primary human myotubes with hibernating bear serum favoured an increase in myotube area compared to summer bear serum [341] (Figure 26). This was the first proof of concept that an active compound in bear serum could act on human biological material. We showed that protein turnover in human myotubes was overall reduced when incubated with winter bear serum, with both a dramatic inhibition of proteolysis (i.e. UPS and ALS) and an average reduction in the rate of protein synthesis [341]. Therefore, winter bear serum was able to reproduce in human muscle cells the regulation of protein turnover already described in hibernating bear muscles (i.e. lower rates of protein synthesis and degradation). Recently, another team confirmed our results with an increase in total protein content in myotubes cultured with hibernating bear serum, although they did not observe any alteration in protein anabolism [342].

(Figure 26: illustrative immunodetection and corresponding quantification of myosin heavy chain in cultured myotubes after winter bear serum (WBS) or summer bear serum (SBS) treatment (from Chanon et al., 2018 [341]).)

Overall, these few studies showed similar results, with hibernating circulating compounds being able to induce potent cross-species effects on human or rat muscle cells. It is therefore highly likely that one or more active compounds circulating in winter bear serum drive these effects, although they remain to be identified.

TGF-β signalling. In hibernating ground squirrels, myostatin and phosphorylated SMAD2 protein levels increase as squirrels emerge from torpor [343]. In this study, the authors also observed an increase in phosphorylated SMAD1/5 levels at the beginning of hibernation, which returned to normal levels when the squirrels emerged from torpor [343]. In the muscles of hibernating little brown bats (Myotis lucifugus), a decrease in myostatin protein expression and an increase in the TGF-β inhibitor SMAD7 protein are observed compared to the active animal [344]. Similarly, a decrease in Mstn expression is also recorded in the muscles of hibernating ground squirrels (Spermophilus lateralis) [345]. To date, only one study has explored the regulation of Mstn expression in hibernating bear muscles and showed that it was lower in muscles at den exit compared to summer [321].

ATF4 signalling.
In hibernating ground squirrels, ATF4 protein expression is strongly increased in skeletal muscle, and subcellular localisation studies have shown that ATF4 translocates into the nucleus during hibernation, as does its cofactor, the phosphorylated form of CREB-1 [346]. Furthermore, in the torpid muscles of Daurian ground squirrels (Spermophilus dauricus), the levels of phosphorylated PERK and eIF2α, as well as ATF4 protein, are increased during hibernation and normalised during IBA and the active period [347]. Overall, these few data suggest that upregulation of ATF4 and downregulation of TGF-β signalling may play a role in the coordination of muscle maintenance in small hibernating mammals. However, the ability of hibernating bears to preserve skeletal muscle biochemical and performance characteristics during prolonged hibernation still needs to be explored with respect to these two signalling pathways.

TAKE HOME MESSAGE
(1) Biomimicry is an approach that seeks sustainable solutions to human challenges by mimicking nature's patterns and strategies and has enabled significant human biomedical advances and progress.
(2) Hibernation, particularly in bears, is of great interest for understanding the mechanisms that allow these animals to cope with prolonged fasting and physical inactivity without deleterious effects on the whole body. The hibernating bear, being resistant to muscle atrophy, is an interesting model for identifying new biomolecular actors that could possibly be translated to human pathophysiology where muscle atrophy is present.
(3) The regulation of the TGF-β/BMP and ISR signalling pathways is important for muscle homeostasis in rodents and humans, and interesting clues have been found during hibernation in the muscles of small hibernators. However, no studies have yet explored these signalling pathways in the muscles of hibernating bears.

KEY POINTS FROM THE LITERATURE
(1) Muscle atrophy affects millions of people worldwide and, despite intensive efforts using laboratory rodent models, there is still no easily usable therapeutic or preventive treatment.
(2) The TGF-β signalling pathway has been extensively targeted for its pro-atrophic role. However, little is known about its BMP counterpart, which is promising for fighting muscle atrophy.
(3) ISR signalling has a dual and complex role in skeletal muscle homeostasis and needs to be further investigated.
(4) Hibernating bears resist muscle atrophy even when faced with long-term fasting and physical inactivity. Therefore, the hibernating bear is a promising model to find new avenues to fight muscle atrophy in humans.

Objectives and strategies
The main objective of this thesis was to identify new molecular players, and their mechanisms, that could become therapeutic targets to combat muscle atrophy in humans. For this purpose, we adopted a biomimetic approach using the brown bear, which is naturally resistant to muscle atrophy, and compared muscle adaptations to those observed in a classical model of sensitivity to atrophy. The project has been subdivided into three studies, as follows.

Study 1
The objectives were to (1) identify the underlying mechanisms implicated in the resistance to muscle atrophy despite prolonged physical inactivity and (2) determine whether these mechanisms were oppositely regulated in a model of susceptibility to atrophy. We performed a comparative transcriptomic analysis of the atrophy-resistant muscles of the hibernating brown bear and the atrophy-sensitive muscles of the hindlimb-suspended mouse.

Study 2
Induction of ATF4 atrogenes is uncoupled from disuse-induced muscle atrophy in halofuginone-treated mice and in hibernating brown bears (paper under review). The objective was to further explore the role of ATF4 signalling in skeletal muscle during basal and catabolic conditions.
We first developed an experimental protocol for inducing ATF4 signalling with the pharmacological molecule halofuginone (HF) in mice. We then (1) investigated the effect of ATF4 induction on mouse muscles under basal and hindlimb suspension-induced atrophy conditions and (2) deciphered the molecular mechanisms of HF in mouse muscles. We also investigated the regulation of this pathway in the atrophy-resistant muscles of the hibernating brown bear.

Study 3
Winter bear serum induces similar characteristics in human muscle cells as those found naturally in hibernating bear muscles (preliminary results). Our objective was to determine whether the molecular characteristics of the atrophy-resistant muscles of hibernating bears can be reproduced in human muscle cells. We first analysed microarray data from human muscle cells cultivated with winter bear serum to assess whether there is a TGF-β/BMP signalling transcriptomic signature. We then measured TGF-β/BMP signalling transcriptional activity with a luciferase reporter assay.

We sought to compare, at the gene expression level, the changes occurring in muscles between a natural model of muscle atrophy resistance, the hibernating brown bear, and a model of susceptibility to muscle atrophy, the unloaded mouse. We thus performed muscle RNA sequencing and analysed common and distinct features in these two models to uncover unexplored intracellular mechanisms (Figure 27).

Experimental protocol
Muscle atrophy-sensitive model. We studied disuse atrophy in the hindlimb-suspended (HS) mouse model [348]. HS is a method developed in the 1970s, used to mimic space flight and prolonged bed rest in humans. The mouse tail is attached to a device that elevates the hindlimbs into an unloaded position (Figure 28). Unlike cast immobilisation-induced muscle atrophy in rodents, the HS procedure does not cause inflammation or fibrosis in muscles [349,350], making it an interesting model to study the role of disuse in the induction of muscle atrophy independently of any other parameters.

Muscle atrophy-resistance model. The brown bear remains resistant to muscle atrophy during hibernation, although confronted with two strong atrophy inducers (i.e. starvation and physical inactivity) (see section 4.4.4). Our laboratory belongs to an international consortium, the Scandinavian Brown Bear Research Project (http://bearproject.info/). Our team travels twice a year to Northern Sweden to collect biological samples (e.g. muscle, blood) from bears during the hibernation period (February) and the active period (June). A team of experienced veterinarians cares for the bears under anaesthesia and monitors vital signs in the field (Figure 29). Bears are either male or female, 2 to 3 years old, just before sexual maturity. The same bears are sampled twice a year using GPS collars. One of the disadvantages of using a wild animal is the difficulty of collecting biological samples due to the lack of proximity to their living area. Therefore, some periods during the hibernation or the active seasons remain unexplored in our analysis. Moreover, commercialised biological reagents (e.g.
antibodies) do not necessarily react with the cellular components of the bear, and hence the study of some signalling pathways, for example, is challenging.

(Figure 28: in the cages, two rails with wheels are connected to the tails of the mice, allowing them to move. This system leaves the hindlimbs free to move without being able to grip; the mice can move with their front legs, and their food is placed underneath the grid.)

Introduction
Muscle atrophy is defined as a loss of muscle mass and strength and is associated with adverse health outcomes, such as a decline in autonomy and an increase in morbidity and mortality, in many catabolic conditions (e.g., cancer cachexia, heart and kidney failure, fasting, sepsis, injury, aging, or physical inactivity) [1-5]. Given the increase in sedentary behavior and the improvement in life expectancy, and with still no proven therapeutic or preventive treatment to date, muscle atrophy remains a major public health issue (World Health Organization data) [6]. Skeletal muscle tissue represents an important reservoir of amino acids, which are mobilized during catabolic situations to preserve vital functions, resulting in an imbalance of contractile protein turnover (i.e., proteolysis exceeding protein synthesis) [7,8]. Catabolic stimuli (e.g., oxidative stress, endoplasmic reticulum disturbances, nutrient shortage, mitochondrial disruptions, etc.) activate a complex network of intracellular modulators, which in turn lead to the activation of the ubiquitin-proteasome system (UPS) and autophagy [9,10]. These two main proteolytic systems in muscle tissue involve a set of genes, i.e., atrogenes, whose expression at the mRNA level is commonly altered during atrophy [11]. Cascades of events and players of muscle atrophy are well described and conserved in mammals [12,13] and include the transforming growth factor-β (TGF-β) superfamily, with TGF-β signaling acting as a negative regulator, and Bone Morphogenetic Protein (BMP) signaling as a positive regulator, of muscle mass [14]. The TGF-β pathway mediates muscle atrophy through the cytoplasmic and nuclear signaling molecules SMAD2/3, mainly leading to the expression of the atrogenes TRIM63 (MuRF1) and FBXO32 (atrogin-1) [15]. Constitutive expression of SMAD3 triggers muscle wasting, and inhibition of SMAD2 and SMAD3 is sufficient to induce muscle growth in vivo [16-19]. Conversely, the BMP pathway mediates muscle mass maintenance through the cytoplasmic and nuclear signaling molecules SMAD1/5/9, promoting a negative transcriptional regulation of a ubiquitin ligase required for muscle wasting, FBXO30 (MUSA1) [20]; moreover, increasing BMP receptor activity in muscles induces hypertrophy through SMAD1/5-mediated activation of mTOR signaling [21]. Hibernating bears (Ursidae family) are naturally resistant to muscle atrophy when facing the two major atrophic inducers, prolonged fasting and physical inactivity, for up to 5-7 months [22,23]. Conversely, and over a shorter period, a loss of muscle mass and volume prevails in rodent models [24-29] and humans [4,5]. As in rodent models, the muscles of the active bear are sensitive to disuse after denervation, whereas the muscles of the hibernating brown bear (Ursus arctos) are resistant [30]. The hibernating brown bear, therefore, appears as a suitable model to study the underlying mechanisms of muscle mass maintenance [31].
How it withstands muscle loss in conditions where muscle atrophy is expected in non-hibernating mammals remains to be fully elucidated. However, several hypotheses can be raised; our recent analysis of the muscle proteome in the hibernating brown bear revealed the maintenance of glycolysis and a reduction in ATP turnover [32]. In addition, we reported (i) a myogenic microRNA signature prone to promoting muscle regeneration and suppressing ubiquitin ligase expression in bear muscle during winter [33], as well as (ii) limited levels of oxidative stress [34]. To unravel the molecular basis of muscle maintenance at the mRNA level, the bear muscle transcriptome has already been explored using cDNA microarrays [35] or RNA sequencing [36,37]. These two transcriptomic studies suggested an overall reduction in energy and protein metabolism, consistent with metabolic suppression and lower energy demand in skeletal muscle during hibernation. Whereas these studies focused on the changes in the bear muscle transcriptome between the hibernating and active periods, our study aimed to compare them with those occurring in the muscle transcriptome during disuse-induced atrophy in a mouse model. The rationale for such a comparative analysis of two contrasted situations of muscle atrophy or maintenance lies in the identification of potential new candidates, beyond already reported metabolic factors [35-37], that may help the hibernating bear resist atrophy, thereby providing new targets for fighting muscle atrophy in humans. Among the transcription factors involved in the regulation of the differentially expressed genes highlighted in bear muscle between the hibernation and active periods, eight were involved in the regulation of the TGF-β superfamily. We therefore subsequently focused on an in-depth analysis of the TGF-β and BMP intracellular pathways.

Materials and Methods
Animal Experiments
Bear Sample Collection
Biopsies from the vastus lateralis muscle were collected from 17 free-ranging brown bears, 2-3 years old (Ursus arctos; 11 females and 6 males), from Dalarna and Gävleborg counties, Sweden, from 2014 to 2019 (Table S6). The samples were immediately frozen on dry ice until storage at -80 °C. In a given year, the same bears were captured during winter hibernation (February) and recaptured during their active period (June). The study was approved by the Swedish Ethical Committee on Animal Experiment (applications Dnr C3/2016 and Dnr C18/2015), the Swedish Environmental Protection Agency (NV-0741-18), and the Swedish Board of Agriculture. All procedures complied with Swedish laws and regulations. Capture, anesthesia, and sampling were carried out according to an established biomedical protocol [38].

Mouse Model of Hindlimb Unloading
Our objective was to compare the muscle transcriptome of the hibernating bear with that of a rodent model of long-term physical inactivity. We chose the 10-day mouse model of unloading as an established disuse-atrophy model, with atrophic pathways still activated [28,39]. All experiments were conducted with the approval of the regional ethics committee (agreement n° D6334515) following the European Directive 2010/63/EU on the protection of vertebrate animals used for experimental and scientific purposes. This study was performed with 12 C57BL6/J adult male mice purchased from Janvier Labs (Le Genest-Saint-Isle, France).
They were housed in pairs upon arrival in a polycarbonate cage in a controlled room (22 ± 2 °C, 60% ± 5% humidity, 12 h light/dark cycle, light period starting at 8:00), fed ad libitum, and given free access to water. After 10 days of acclimatization, the mice were either kept unsuspended (Control, n = 6) or subjected to hindlimb unloading through tail suspension (Unloaded, n = 6) for 10 days. Custom tail suspension cages were adapted from previous studies [40,41]. The cages (43 × 29 × 24 cm) have an overhead frame to which two suspension systems are fixed in parallel across the width of the cage. These suspension systems are widely spaced in a cage so that mice can always be housed in pairs without touching each other. Unloaded mice had a metal ring attached near the base of their tails using surgical adhesive tape. This ring was then attached to a swivel that allowed 360-degree rotation and was fixed to a rail that covered the upper width of the cage. The height of the swivel was adjusted to keep the mouse at a head-down angle of about 30° so that the hindlimbs could not touch the ground or the walls. During the 10 days of unloading, mice showed only a very small body weight loss (<7%) that occurred within the first 3 days, with no change in food intake. At the end of the experiment, soleus muscles were rapidly dissected out, immediately frozen in liquid nitrogen and stored at -80 °C until analyses. As for the data used from the RNA sequencing of Zhang et al. [42], the soleus muscle atrophied by 37% (from 7.24 ± 0.27 mg in control mice to 4.58 ± 0.31 mg in unloaded mice, p < 0.001 according to the unpaired Student's t-test).

RNA Sequencing of Brown Bear Muscle
RNA Isolation
Total RNA from bear muscles was isolated as described [43]. Briefly, muscle RNA from six bears (paired samples collected in summer and winter in a given year for the same individual) was extracted using TRIzol reagent (Invitrogen, Courtaboeuf, France).

Illumina RNA Sequencing, Data Assembly, Statistical Analysis
We constructed RNA-Seq libraries with the TruSeq stranded mRNA sample preparation kit from Illumina and sequenced them in two lanes on an Illumina HiSeq2500 (single-end, 50 bp, six libraries per lane). Image analyses and base calling were performed using the HiSeq Control Software (v2.2.70) and the associated base-calling software (Illumina, San Diego, CA, USA). Demultiplexing was performed using Illumina's conversion software (bcl2fastq 2.20). The quality of the raw data was assessed using FastQC from the Babraham Institute and the Illumina software SAV (Sequencing Analysis Viewer, Illumina, San Diego, CA, USA). A splice junction mapper, TopHat 2.1.1 [44] (using Bowtie 2.3.5.1 [45], Johns Hopkins University, MD, USA), was used to align the RNA-Seq reads to the Ursus arctos genome (GCA_003584765.1 ASM358476v1 assembly downloaded from NCBI) with a set of gene model annotations (GCF_003584765.1_ASM358476v1_genomic.gff, downloaded on 17 June 2019 from NCBI). Final read alignments having more than three mismatches were discarded. Samtools (v1.9) (http://samtools.sourceforge.net) was used to sort the alignment files. Then, gene quantification was performed with featureCounts 1.6.2 (http://subread.sourceforge.net/) [46]. As the data were from a strand-specific assay, reads had to be mapped to the opposite strand of the gene (-s 2 option). Before statistical analysis, genes with fewer than 30 reads (cumulated over all the analyzed samples) were filtered out.
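To make the quantification and filtering steps described above more concrete, the following minimal R sketch uses the Bioconductor Rsubread interface to featureCounts; the study itself used the standalone featureCounts binary, so this is an equivalent illustration rather than the exact command run. BAM file names are hypothetical, and the GFF attribute name may need adjusting to the annotation actually used.

```r
# Minimal sketch (illustrative, not the exact pipeline commands): gene-level
# quantification of sorted TopHat alignments, followed by the low-count filter.
library(Rsubread)

# Hypothetical sorted BAM files (paired summer/winter samples from six bears)
bam_files <- c("bear1_summer.bam", "bear1_winter.bam")

fc <- featureCounts(
  files               = bam_files,
  annot.ext           = "GCF_003584765.1_ASM358476v1_genomic.gff",  # NCBI gene models
  isGTFAnnotationFile = TRUE,
  GTF.featureType     = "exon",
  GTF.attrType        = "gene_id",   # adjust to the attribute name used in the GFF
  strandSpecific      = 2            # reversely stranded assay, equivalent to -s 2
)

counts <- fc$counts

# Filter out genes with fewer than 30 reads summed over all analyzed samples
keep   <- rowSums(counts) >= 30
counts <- counts[keep, ]
```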
Differentially expressed genes were identified using the Bioconductor (https://bioconductor.org/) [47] package DESeq2 1.26.0 [48] (R version 3.6.1, https://www.r-project.org/). Data were normalized using the DESeq2 (https://bioconductor.org/packages/release/bioc/html/DESeq2.html, accessed on 16 June 2021) normalization method. Genes with an adjusted p-value below 5% (according to the Benjamini-Hochberg procedure that controls the FDR) were declared differentially expressed.
Functional and Pathway Enrichment Analysis
Hierarchical clustering of the bear transcriptomic data (log-transformed) was performed using Cluster v3.0 software (University of Tokyo, Tokyo, Japan) from the 13531 transcripts [49]. Parameters were set as follows: median centering and normalization of genes for adjusting the data, and centroid linkage clustering for both genes and arrays. Dendrograms were generated and viewed using the Java Treeview v1.3.3 program (Alok Saldanha, Stanford University, Stanford, CA, USA) [50]. To identify the differentially expressed genes (DEGs), we selected a fold change (FC, Winter/Summer) > 1.3 or < 0.77 and an adjusted p-value < 0.01 as cut-offs for the up- and down-regulated genes, respectively. Visualization of functional enrichment was performed using Metascape [51], a web-based portal for visualizing the inference of enriched biological pathways among the DEGs. For the given DEG list, pathway and process enrichment analysis was carried out with the following ontology sources: KEGG Pathway, GO Biological Processes, Reactome Gene Sets, Canonical Pathways, CORUM, TRRUST, DisGeNET, PaGenBase, Transcription Factor Targets, WikiPathways, PANTHER Pathway, and COVID. All genes in the genome were used as the enrichment background. Terms with a p-value < 0.01, a minimum count of 3, and an enrichment factor > 1.5 (the enrichment factor is the ratio between the observed counts and the counts expected by chance) were collected and grouped into clusters based on their membership similarities. More specifically, p-values are calculated based on the cumulative hypergeometric distribution, and q-values are calculated using the Benjamini-Hochberg procedure to account for multiple testing. Kappa scores are used as the similarity metric when performing hierarchical clustering on the enriched terms, and sub-trees with a similarity of > 0.3 are considered a cluster. The most statistically significant term within a cluster is chosen to represent the cluster. The 10 top-scoring enrichment terms from that analysis are shown in Figure 1b,c (Table S1). The color code for the functional clusters is indicated in the respective graphs, and the bold numbers in the different bars represent the numbers of DEGs found in the enriched terms. (d) Graph representing the 10 top-scoring transcription factors involved in DEG regulation from the "Formation of a pool of free 40S subunits" and "Extracellular matrix organization" enriched terms; the bold TFs are involved in TGF-β superfamily regulation.
Transcriptomic Data Assembly and Statistical Analysis of Mouse Muscle
We used transcriptomic data from an already published study [42]. Briefly, in this study, C57BL6/J adult male mice were either kept unsuspended (Control, n = 4) or subjected to hindlimb unloading through tail suspension (Unloaded, n = 4) for 10 days. The fastq files of eight soleus muscles were downloaded from GEO (GSE102284).
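The DEG selection described above for the bear data (the same DESeq2-based approach is applied to the mouse data whose processing is described next) can be sketched as follows. The input file and column names are assumptions, and in practice the Benjamini-Hochberg-adjusted p-values come directly from the DESeq2 results table; this is only a minimal illustration of the cut-offs.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

# Hypothetical per-gene results table with a Winter/Summer fold change and a raw p-value
# (column names are illustrative; DESeq2 already reports BH-adjusted p-values as 'padj').
results = pd.read_csv("deseq2_results.csv")  # columns: gene, fold_change, pvalue

# Benjamini-Hochberg procedure controlling the FDR across all tested genes.
results["padj"] = multipletests(results["pvalue"], method="fdr_bh")[1]

# Cut-offs used above to define DEGs for the enrichment analysis:
# adjusted p-value < 0.01 and fold change > 1.3 (up-regulated) or < 0.77 (down-regulated).
significant = results["padj"] < 0.01
up_regulated = results[significant & (results["fold_change"] > 1.3)]
down_regulated = results[significant & (results["fold_change"] < 0.77)]

print(f"{len(up_regulated)} up-regulated and {len(down_regulated)} down-regulated DEGs")
```

This recomputation of adjusted p-values is only needed when thresholds are re-applied to an exported table; it does not replace the DESeq2 analysis itself.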
A splice junction mapper, TopHat 2.1.1 (Johns Hopkins University, MD, USA) [44], was used to align the RNA-Seq reads to the mouse genome (UCSC mm10) with a set of gene model annotations (genes.gtf downloaded from UCSC on 29 October 2019; GeneIDs come from the NCBI gene2refseq file). Final read alignments having more than three mismatches were discarded. Samtools (v1.9, http://www.htslib.org/) was used to sort the alignment files. Then, gene quantification was performed with Featurecounts 2.0.0 (http://subread.sourceforge.net/) [46]. As the data were from a strand-specific assay, reads had to be mapped to the opposite strand of the gene (-s 2 option). Before statistical analysis, genes with fewer than 20 reads (cumulated over all the analyzed samples) were filtered out. Differentially expressed genes were identified using the Bioconductor [47] package DESeq2 1.26.0 [48] as previously described (cf. 2.2.2).
Western Blot
Vastus lateralis muscles from eleven bears (paired samples collected in summer and winter of a given year for the same individual; Table S6) and soleus muscles from 10-day control or unloaded mice (n = 6/group) (~30 mg) were used. Samples were homogenized using a polytron in 1 mL of an ice-cold buffer (10 mM Tris pH 7.5, 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, 0.5% Igepal CA630) containing inhibitors of proteases (Protease Inhibitor Cocktail) and phosphatases (1 mM Na3VO3, 10 mM NaF) (Sigma, Saint-Quentin-Fallavier, France). The homogenates were stirred for 1 h at 4 °C and then centrifuged at 10,000 g for 15 min at 4 °C. The resulting supernatants were then stored at -80 °C until use. The concentration of proteins was determined using the Bradford Protein Assay Kit (Biorad, Marnes-la-Coquette, France). Proteins were then diluted in Laemmli buffer and stored at -80 °C until use. Protein extracts were subjected to SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) using TGX™ FastCast™ 10% acrylamide gels (Biorad, Marnes-la-Coquette, France) and transferred onto a PVDF membrane (Hybond P, Amersham, England). Blots were blocked for 1 h at room temperature with 5% bovine serum albumin in TBS buffer with 0.1% Tween-20 (TBS-T, pH = 7.8), then washed thrice in TBS-T and incubated (overnight, with stirring, at 4 °C) with appropriate primary antibodies against SMAD1/5 (PA5-80036, Thermo Fisher, Illkirch, France), SMAD2/3 (#8685, Cell Signaling Technology, Saint-Cyr-L'Ecole, France), SMAD4 (ab230815), CTGF (ab227180), and GDF5 (ab137698) (Abcam, Cambridge, United Kingdom). Blots were then washed and incubated for 1 h with an appropriate secondary horseradish peroxidase-conjugated antibody at room temperature. Signals were detected after incubation with Luminata Crescendo Western HRP substrate (Millipore, Burlington, MA, USA) and visualized using the G:BOX ChemiXT4 (XL1) imaging system (Syngene, Frederick, MD, USA). Signals were then quantified using the GeneTools software (Syngene, Cambridge, UK) and normalized against the total amount of protein determined from the TGX signals to correct for uneven loading. Protein data are presented as individual values. The two-sided ratio paired Student's t-test was used to compare the muscles of bears during summer and winter (S and W, respectively). For muscles of control and unloaded mice (C and U, respectively), statistical significance was determined using the two-sided unpaired Student's t-test. Statistical analysis was performed using GraphPad Prism 9 (San Diego, CA, USA).
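For illustration, the statistical comparisons described above can be sketched as follows, assuming a table of band intensities already normalized to the total-protein (TGX) signal. The file names, column names, and the log transformation used here to implement the ratio paired t-test are assumptions rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical quantification tables (column names are illustrative).
bears = pd.read_csv("bear_blot_quantification.csv")  # columns: bear_id, summer, winter
mice = pd.read_csv("mouse_blot_quantification.csv")  # columns: mouse_id, group, signal

# Bears: two-sided ratio paired t-test, implemented here as a paired t-test on
# log-transformed values (winter vs. summer of the same individual).
t_bear, p_bear = stats.ttest_rel(np.log(bears["winter"]), np.log(bears["summer"]))

# Mice: two-sided unpaired t-test between control and unloaded groups.
control = mice.loc[mice["group"] == "control", "signal"]
unloaded = mice.loc[mice["group"] == "unloaded", "signal"]
t_mouse, p_mouse = stats.ttest_ind(control, unloaded)

print(f"Bear winter vs. summer: p = {p_bear:.3f}; mouse unloaded vs. control: p = {p_mouse:.3f}")
```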
Results
Deep Changes in the Brown Bear Muscle Transcriptome during Hibernation
The brown bear transcriptome data set revealed that, from 13531 transcripts commonly identified in all individuals, gene expression differed markedly between summer and winter (Figure 1a). We identified 4115 differentially expressed genes (DEGs) between muscles of the active and hibernating bear, with mainly down-regulated genes (Table S1). The 10 top-scoring terms obtained from an annotation enrichment analysis performed on the down- and up-regulated DEGs highlighted several significantly enriched terms that were differentially regulated in bear muscles between the two seasons (Figure 1b,c). For instance, the protein metabolism functional cluster was the most up-regulated in bear muscles in winter compared to summer, with the "Formation of a pool of free 40S subunits", "Ribonucleoprotein complex biogenesis", or "Translation" enriched terms (Figure 1b). In addition, the tissue structure remodeling functional cluster was the most down-regulated in muscles of the hibernating versus active bear, with the "Extracellular matrix organization", "Cell-substrate adhesion", or "Supramolecular fiber organization" enriched terms (Figure 1c). We then ran a transcriptional regulatory network analysis to identify transcription factors (TFs) involved in the regulation of the DEGs from the two most differentially enriched terms between the two seasons (e.g., "Formation of a pool of free 40S subunits" and "Extracellular matrix organization") (Figure 1d). We found that SP1 was the TF most involved in the regulation of the DEGs, along with other TFs such as SP3, NFKB1, JUN, RELA, TFAP2A, ETS1, or SMAD4. Interestingly, the TFs cited above are all involved in the regulation or signal transduction of the TGF-β superfamily [52][53][54]. We therefore decided to focus on that superfamily, including (i) TGF-β signaling, which is a master regulator of extracellular matrix organization and is also involved in muscle mass loss, and (ii) BMP signaling, which has recently been discovered to be involved in muscle mass maintenance [20].
Hibernation Induces a Transcriptional Shift from the TGF-β to the BMP Pathway
From a thorough analysis of the literature [55][56][57], we drew up a list of the actors and regulators of the TGF-β superfamily. We then analyzed precisely how they were regulated at the mRNA level in the muscles of the hibernating bear between the winter and summer seasons. The expression levels of two main TGF-β ligands, INHBA and MSTN, were dramatically lower (fold change (FC) = 0.28 and 0.52, respectively) in winter, whereas INHBB expression was higher (FC = 1.85) (Figure 2, Tables S2 and S3). In BMP signaling, the main ligand described in muscle mass maintenance, GDF5, showed higher levels (FC = 2.5) in hibernating bear muscles compared to active muscles. Extracellular actors inhibiting (KCP, DCN, MGP, NOV, or CHRD) or promoting (CCN2 or BMPER) TGF-β and/or BMP signals were mainly down-regulated in winter compared to summer. Receptors of TGF-β signaling were differentially expressed during hibernation, with ACVR1C and TGFBR2 levels being considerably lower (FC = 0.28 and 0.80, respectively) in winter compared to summer, whereas TGFBR1 and ACVR1B were higher (FC = 1.42 and 1.47, respectively). The GDF5 receptor, BMPR1B, was higher in winter compared to summer (FC = 1.37).
The co-receptors that control the intensity and specificity of downstream TGF-β/BMP signaling were mainly down-regulated or unchanged in winter for both pathways, except for MUSK, an important BMP co-receptor in muscle cells, which was up-regulated (FC = 2.39). Overall, actors involved in the initiation of the TGF-β signal were mainly repressed, while those driving the BMP signal were increased.
(Tables S2 and S3). ECAg: Extracellular Agonist, ECAn: Extracellular Antagonist, L: Ligand, Co-R: Co-Receptor, RII: Receptor type II, RI: Receptor type I, TA: Transcriptional Activator, TR: Transcriptional Repressor, SBEs: SMAD Binding Elements. Created with BioRender.com.
For TGF-β signaling, intracellular inhibitors such as ERBIN, LDLRAD4, EIF3I, STK17B, and PP2CA were up-regulated in winter bear muscles (FC = 1.26, 1.50, 1.37, 1.86, and 1.56, respectively), whereas some of the actors promoting the signal were expressed at lower levels, i.e., DAB2 and TRAP1 (FC = 0.50 and 0.63, respectively). By contrast, for BMP signaling, intracellular inhibitors such as CTDNEP1 and FKBP1A were mainly expressed at lower levels in muscles of the hibernating bear (FC = 0.70 and 0.67, respectively). Regarding the intracellular actors triggering TGF-β/BMP signaling, SMAD3 (TGF-β signaling; FC = 0.76) was expressed at a lower level, whereas SMAD1 and SMAD5 (BMP signaling; FC = 1.99 and 1.30, respectively) and SMAD4 (common to TGF-β and BMP signaling; FC = 1.34) were expressed at higher levels in muscles of the hibernating bear compared to the active one. Thus, expression changes of the intracellular actors again suggest repression of TGF-β signaling but maintenance of BMP signaling. For nuclear components, transcriptional activators were either up-regulated (FOXO3 and SP3, FC = 1.68 and 1.70) or down-regulated (e.g., KAT2B, ATF3, and ETS1, FC = 0.55, 0.46, and 0.54) for TGF-β, whereas they were mainly up-regulated in winter for BMP, with YAP1, ZCCHC12, and HOXC8 (FC = 1.63, 2.21, and 2.46, respectively). Conversely, the transcriptional repressors were mainly up-regulated for the TGF-β pathway, i.e., TRIM33, YAP1, and SIRT1 (FC = 1.60, 1.63, and 1.82, respectively), whereas they were unchanged or expressed at lower levels for BMP during hibernation, e.g., TOB1 (FC = 0.78). Considering TGF-β target genes, an overall down-regulation was observed in winter compared to summer, as highlighted for the different collagen isoforms and several metalloproteinases: COL1A1/2 (FC = 0.04 and 0.07), COL3A1 (FC = 0.06), COL5A2 (FC = 0.28), COL6A1/3 (FC = 0.22 and 0.20), COL14A1 (FC = 0.14), and MMP2/14 (FC = 0.33 and 0.46) (Figure 2 and Table S2). For BMP target genes, the picture was less contrasted, with either unchanged (e.g., RUNX2 or ID4), down-regulated (e.g., ID1 or ID2, FC = 0.31 and 0.65), or up-regulated expression (RGS4 or KLF10, FC = 1.72 and 1.32) during hibernation, with RGS4 being a muscle-specific gene. Overall, this supports a general down-regulation of the transcriptional activity that drives TGF-β signaling, with possible maintenance of that for BMP signaling. Finally, TGF-β and BMP signaling also use a shared SMAD-independent pathway involving a branch of the MAPK (Mitogen-Activated Protein Kinase) pathway [58]. In this pathway, the expression of TRAF6 and of its downstream actor MAP3K7 was higher in muscles of the hibernating bear compared to the active one (FC = 1.78 and 1.60, respectively). Some of the TRAF6 downstream actors, including MEF2A and MEF2C, which are key muscle transcription factors, were also up-regulated (FC = 1.89 and 2.23) in winter compared to summer bear muscles (Figure 2).
TGF-β and BMP signaling pathways are tightly regulated by several UPS members, such as E3 ubiquitin ligases (E3s) and deubiquitinating enzymes (DUBs) [55,59]. TGF-β inhibitors located from the receptor to the nuclear level were up-regulated in winter, such as CUL1 (FC = 1.85), NEDD4L (FC = 1.63), and TRIM33 (FC = 1.60), the latter also being a positive regulator of BMP signal transduction (Figure 3 and Table S4). SMURF1, an E3 ligase inhibiting both TGF-β and BMP signaling, was also up-regulated (FC = 1.64) in muscles of the hibernating bear. For UPS activators of the TGF-β signaling pathway alone, several DUBs were up-regulated in winter, e.g., UCHL5, USP11, and USP4 (FC = 1.78, 1.77, and 1.37, respectively), the latter also promoting BMP signaling, while others were down-regulated, such as USP15 and TRAF4 (FC = 0.52 and 0.47). The TGF-β and BMP pathways regulate the transcription of some muscle-specific E3s (TRIM63, FBXO32, and FBXO30). None of them were differentially expressed in winter compared to summer (Figure 3 and Table S4). Overall, this wide transcriptomic analysis highlighted a winter transcriptional pattern in muscles of the brown bear that was prone to shutting down TGF-β signaling while maintaining or even over-activating the BMP pathway.
Divergent Regulation of TGF-β and BMP Pathways in Atrophy-Resistant Muscles of the Hibernating Brown Bear versus Atrophied Muscles of the Unloaded Mouse
We compared the above-described brown bear muscle transcriptome to published transcriptomic data from a model of long-term physical inactivity in mice induced by 10 days of unloading, where the soleus muscle mass decreased by ~30% [42]. Comparing the two models, the muscle transcriptomic profiles appeared very different for the expression of selected genes related to the TGF-β and BMP signaling pathways (Figure S1 and Tables S2-S4). Of note, the gene expression of TGF-β and BMP ligands was mainly differently regulated, as particularly evidenced for GDF5, which was up-regulated during bear hibernation but down-regulated during unloading in the mouse model (FC = 0.18), and for MSTN, which was strongly down-regulated during bear hibernation but unchanged in unloaded mouse muscles (FC = 1.57) (Figure 4a and Tables S2 and S3). Regarding receptor expression, the two models responded quite similarly (Figure 4b and Tables S2 and S3), with three notable exceptions. Firstly, the gene expression of ACVR1C did not change in muscles of the unloaded mouse, unlike in muscles of the hibernating bear, where a strong down-regulation occurred (FC = 1.11 and 0.28, respectively), as we also observed for the gene expression of TGFBR2 (FC = 1.08 and 0.80, respectively). Secondly, the gene expression of BMPR1B, the GDF5 receptor, was up-regulated in muscles of the hibernating bear but down-regulated in muscles of the unloaded mouse (FC = 0.66) (Figure 4b). The intracellular actors SMAD3, SMAD4, and SMAD1 were regulated similarly by bear hibernation and mouse unloading, although not to the same extent (FC = 0.73, 1.10, and 1.50, respectively). However, SMAD2 was up-regulated (FC = 1.24), and SMAD5 and SMAD9 remained unchanged, only in the muscles of the unloaded mouse (Figure 4c).
(Table S4). SBEs: SMAD Binding Elements. Created with BioRender.com.
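As a rough illustration of the cross-model comparison above, the sketch below merges per-gene fold-change tables from the two models and flags genes regulated in opposite directions; the input files, column names, and cut-offs are assumptions for illustration only, not the analysis actually run in the study.

```python
import pandas as pd

# Hypothetical per-gene fold-change tables (FC = Winter/Summer for the bear,
# Unloaded/Control for the mouse); file and column names are illustrative.
bear = pd.read_csv("bear_fold_changes.csv")    # columns: gene, fc
mouse = pd.read_csv("mouse_fold_changes.csv")  # columns: gene, fc

merged = bear.merge(mouse, on="gene", suffixes=("_bear", "_mouse"))

def direction(fc, up=1.3, down=0.77):
    """Classify a fold change as up, down, or unchanged using illustrative cut-offs."""
    if fc > up:
        return "up"
    if fc < down:
        return "down"
    return "unchanged"

merged["bear_dir"] = merged["fc_bear"].apply(direction)
merged["mouse_dir"] = merged["fc_mouse"].apply(direction)

# Genes regulated in opposite directions in the two models (e.g., GDF5 or BMPR1B above).
opposite = merged[
    ((merged["bear_dir"] == "up") & (merged["mouse_dir"] == "down"))
    | ((merged["bear_dir"] == "down") & (merged["mouse_dir"] == "up"))
]
print(opposite[["gene", "fc_bear", "fc_mouse"]])
```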
Regarding the E3s and DUB enzymes involved in the BMP pathway alone (Figure 5, upper panel), NEDD4 was similarly up-regulated in both models (FC = 1.63 and 1.28 in bear and mouse, respectively), whereas FBXO30 (FC = 0.82) was down-regulated only in muscles of the unloaded mouse (Figure 5 and Table S4). The enzymes involved in the regulation of both TGF-β and BMP signaling were mainly up- or down-regulated in muscles of the hibernating bear but mainly unaffected in unloaded mouse muscles (Figure 5, middle panel, and Table S4). Finally, half of the E3s/DUBs involved only in the regulation of TGF-β signaling were commonly unchanged or up-regulated, e.g., CUL1, UCHL5, or NEDD4L (FC = 1.17, 1.30, or 2.38, respectively), and the other half were oppositely regulated, e.g., SMURF2, CBLB, or TRIM62 (FC = 0.81, 1.50, and 1.97, respectively), in muscles of the unloaded mouse compared to muscles of the hibernating bear (Figure 5, lower panel, and Table S4). Overall, the TGF-β and BMP signaling pathways were differentially regulated between atrophy-resistant muscles of the hibernating bear and atrophy-sensitive muscles of the unloaded mouse.
Hibernation Induces Changes in TGF-β and BMP Pathway Components at the Protein Level
To further compare these models of resistance and vulnerability to atrophy, we explored the protein levels of the SMAD intracellular actors. In the brown bear muscle, we observed a tendency for SMAD2 protein levels to decrease (p = 0.09) in winter compared to summer, whereas SMAD3 levels remained quite similar between the two seasons (Figure 6a,c,d). Protein levels of SMAD4 were higher in muscles of the hibernating bear compared to the active one (Figure 6a,e), but the converse was observed for SMAD1/5 (Figure 6a,f). As for mRNA, these SMAD proteins did not follow the same regulation pattern in muscles of the unloaded mouse, where none changed at the protein level (Figure 6b-f). The protein levels of CCN2, a TGF-β target gene that is also an extracellular activator of the TGF-β pathway and an inhibitor of the BMP pathway, were strongly lower in winter bear muscles but unaffected in unloaded mouse muscles (Figure 6a,b,g). Finally, the protein levels of GDF5, a BMP ligand, remained unchanged in muscles from both the hibernating bear and the unloaded mouse (Figure 6a,b,h).
Discussion
Although basic knowledge regarding the underlying mechanisms of muscle atrophy is continuously growing, essentially from rodent models and clinical studies in humans, there are still no efficient therapeutic strategies for its prevention and treatment. To explore new avenues, we compared a model of muscle atrophy resistance, the hibernating brown bear [23], and a mouse model of disuse-induced muscle atrophy [42]. Therefore, we analyzed the bear muscle transcriptome and identified sweeping changes in gene expression between the summer-active period and the winter-hibernating period, and we compared them with transcriptomic data from muscles of the unloaded mouse. Whereas the loss of muscle mass during inactivity is often associated with a decrease in muscle protein synthesis [42,[61][62][63], we reported here that genes implicated in protein metabolism were mainly up-regulated in muscles of the hibernating bear. This is consistent with previous studies that linked muscle atrophy resistance during hibernation to induction of protein translation [35,36,64]. Under activation of FOXO transcription factors, atrogenes involved in both autophagy and UPS pathways are enhanced in rodents [65][66][67].
These atrogenes (e.g., MAP1LC3A, FBXO32, and ZFAND5) were indeed up-regulated or maintained in the unloaded mouse model but down-regulated or unchanged in muscles of the hibernating bear (Table S5). In agreement with previous studies, our data confirmed that proteolytic actors were up-regulated in disuse-induced muscle atrophy in rodents [11] but not in the bear model of muscle atrophy resistance [36]. Despite discrepancies between the bear and mouse models (e.g., fed status, torpor vs. hindlimb muscle disuse), both display similar pathways controlling skeletal muscle mass and protein balance. For instance, the TGF-β signaling pathway has been reported to be evolutionarily conserved among several species, from Caenorhabditis elegans and Drosophila melanogaster to Mus musculus [68]. In addition, both models are responsive to denervation-induced atrophy when in active conditions [20,30]. We performed our analyses on the fast-twitch vastus lateralis muscle for the brown bear and the slow-twitch soleus muscle for the mouse. Thus, we cannot exclude the possibility that the differences recorded here between the models may partly have resulted from the specific nature of these muscles with regard to their metabolic and contractile properties. However, it is noteworthy that, although fast-twitch muscles are not as sensitive to physical inactivity as slow-twitch muscles, both types of muscles atrophied in classical models of long-term physical inactivity in rodents [25,29,39,69,70] and humans [71][72][73]. Despite this difference in metabolic and contractile properties, a general down-regulation of genes involved in extracellular matrix (ECM) structure organization was observed in atrophied muscles from the unloaded mouse and in atrophy-resistant muscles from the hibernating brown bear (Figure 1 and Table S2). This ECM structure remodeling is a common feature of other atrophic models, reported in dystrophic diseases [74] or during muscle disuse in rodents or humans [26,69,[75][76][77]. TGF-β is currently of major interest within the field of skeletal muscle biology. Indeed, the gene disruption of two of its ligands, Myostatin (MSTN) or Activin A (INHBA), and the inhibition of their shared receptor ActRIIB (ACVR2B) promote a profound muscle hypertrophy phenotype in various conditions and species [78][79][80][81]. On the contrary, overexpression of MSTN or INHBA leads to the recruitment and phosphorylation of SMAD2-3, triggering an atrophic transcriptional program [16,17,[82][83][84]. Here, we reported that the expression levels of both ligands, MSTN and INHBA, were lower only in muscles of the hibernating bear. In addition, ACVR2B and SMAD2 mRNA levels were up-regulated in muscles of the unloaded mouse whereas they were maintained in muscles of the hibernating bear. At the protein level, SMAD2 was maintained in the muscles of the unloaded mouse and showed a tendency to decrease in muscles of the hibernating bear. However, despite extensive efforts and numerous antibodies tested, we could not characterize the SMAD phosphorylation status in bear muscles, which thus remains to be defined. One TGF-β target gene, CCN2 (also known as CTGF), is an ECM protein associated with fibrotic activity that is up-regulated in several chronic muscle disorders (i.e., Duchenne muscular dystrophy or amyotrophic lateral sclerosis) [85]. CCN2 is one of the main pro-fibrotic cytokines acting downstream of TGF-β signaling and can amplify its effects through enhancement of TGF-β ligand-receptor binding [85][86][87][88][89][90].
Inhibition of CCN2 gene expression reduced fibrosis and improved muscle and locomotor performance in a rodent model of amyotrophic lateral sclerosis [86]. During unloading, mRNA and protein levels of CCN2 were unchanged in the muscles of the unloaded mouse, but they were strongly lower in the muscles of the hibernating brown bear. Taken together, our data strongly suggest that TGF-β signaling is overall inhibited only in muscles resistant to atrophy during bear hibernation. The BMP pathway is a potent inducer of bone and cartilage formation [91]. BMP signaling was also recently discovered as a regulator of muscle mass, as its inhibition abolished the hypertrophic phenotype of the MSTN-KO mouse [20], and an increase in its receptor activity induced marked muscle hypertrophy [21]. Regarding BMP ligands, GDF5 is essential to muscle mass maintenance, binding preferentially to the type I receptor BMPR1B. GDF5 expression was strongly induced in denervated mouse muscles, and its inhibition worsened muscle atrophy, suggesting a role in counteracting denervation-induced atrophy [20]. We report here that the gene expression of both GDF5 and BMPR1B was strongly down-regulated in mouse muscles during unloading, whereas it was up-regulated in muscles of the hibernating bear. However, GDF5 protein levels were stable in the muscles of both the unloaded mouse and the hibernating bear. We recently demonstrated that circulating components of hibernating bear serum were able to induce trans-species effects on human myotubes, notably an inhibition of protein degradation. Therefore, we hypothesized that those components could be involved in the maintenance of muscle mass and strength in the hibernating bear [92]. Along with muscle mass maintenance, bear hibernation is also associated with bone mass maintenance [93], and thus GDF5 may constitute a possible target for muscle and bone protection during long periods of physical inactivity and/or fasting. Unfortunately, the exploration of GDF5 concentration in bear serum was hampered by species cross-reactivity concerns with commercially available ELISA kits and thus remains to be addressed. It has been proposed that SMAD4, the actor common to the BMP and TGF-β pathways, mainly engages with the TGF-β pathway but switches to the BMP pathway when TGF-β transduction is reduced, and thus could be the limiting factor between these two signaling pathways [20]. Moreover, denervation-induced muscle atrophy was exacerbated in the SMAD4-deficient mouse [20], and muscle mass was increased in humans with a mutation-associated gain of function in the SMAD4 gene [94]. We observed here higher mRNA and protein levels for SMAD4 in muscles of the hibernating bear, but only higher mRNA levels in muscles of the unloaded mouse. This is concomitant with the overall down-regulation highlighted for TGF-β signaling in the hibernating bear, which was not observed for the unloaded mouse. Thus, the inhibition of TGF-β signaling in muscles of the hibernating bear may have released SMAD4 from the TGF-β to the BMP pathway to maintain muscle mass over a long period of disuse (Graphical abstract). TGF-β and BMP share a SMAD-independent pathway that activates the E3 ubiquitin ligase TRAF6 [58]. In addition to its pro-atrophic role [95], TRAF6 is also required for myogenic differentiation and muscle regeneration via the MEF2 axis [96]. MEF2 is a conserved family of transcription factors involved in the control of muscle gene expression [97].
A recent muscle transcriptome analysis highlighted an inhibition of MEF2 transcription factors during human bed rest, leading to skeletal muscle alterations [98]. Here, only MEF2A was up-regulated during unloading in mouse muscles, whereas TRAF6, MEF2A, and MEF2C were up-regulated in the hibernating bear muscles. We already observed a myogenic microRNA signature mediated by MEF2A signaling in the muscles of the hibernating bear, promoting mechanisms of muscle regeneration, suppression of ubiquitin ligases, and resistance to muscle atrophy [33]. Further studies are required to address whether this MEF2 signature could be under the control of the TGF-β and/or the BMP pathway through TRAF6.
Conclusions
Resistance to muscle atrophy in hibernating brown bears has so far been linked to a reduction in protein and energy metabolism. Here, we show for the first time that the TGF-β pathway is down-regulated whereas the BMP pathway is concomitantly sustained or even up-regulated in atrophy-resistant muscles of the hibernating brown bear. Thus, beyond strengthening the previous hypothesis of a hypometabolism enabling this natural resistance to muscle atrophy, our study provides new insights regarding the underlying mechanisms. The originality of the current work lies in the choice to study the mechanisms involved in resistance to atrophy, and not solely, as in many studies, the mechanisms involved in the onset of atrophy. Our comparison of animal models of resistance and sensitivity to muscle atrophy suggested that the balance between the TGF-β and the BMP pathways is critical for preventing skeletal muscle atrophy over a long period of disuse. Many targeted therapies to counteract muscle atrophy already focus on TGF-β inhibition [99]. Our data open the way for further studies and clinical trials to test the effects of strategies to switch on (or sustain) the BMP pathway in combination with TGF-β inhibition to prevent disuse-induced muscle atrophy.
Discussion and perspectives
The main conclusion of our study is that the TGF-β/BMP balance appears to be crucial for the maintenance of muscle mass in hibernating brown bears (Figure 30). In addition to the points raised in the article, there are other issues worth discussing. First, we will discuss the transcriptomic regulation of genes related to protein synthesis in hibernating bear muscles. Second, we will address, in several sub-sections, the possible reasons why and how the TGF-β and BMP signalling pathways are differentially regulated in hibernating bear muscles compared to summer.
This was associated with a lower protein degradation rate, suggesting that overall muscle protein turnover was lowered in winter compared to summer in bears [301] (Figure 24).
Hibernation induces a transcriptomic reprogramming in genes related to protein synthesis and RNA metabolism in bear muscles
The decrease in the rate of protein degradation is consistent with the downregulation of genes related to protein degradation that we reported in the present transcriptomic study (see Paper 6.3) [351]. However, proteomic data from hibernating bear muscles revealed that proteins involved in the biological processes "protein biosynthesis" and "ribosome biogenesis" were not differentially regulated between winter and summer [310], whereas all were upregulated in winter bear muscles in our transcriptomic analysis (i.e. RPS2, RPS4X, RPS5, RPS7 and RPSA) (see Paper 6.3) [351].
Induction of RNA metabolism-related genes.
We identified RNA binding motif protein 3 (RBM3) as a gene that is highly up-regulated in winter compared to summer in bear muscles (see Paper 6.3) [351]. This appears to be a common feature of many hibernators in almost all tissues (e.g. brain, heart, liver, muscle) [304,306,337,352,353]. RBM3 has been described as (1) facilitating the processing of RNA molecules in the cold and protecting mRNA transcripts from degradation in hibernating ground squirrel organs [352,354,355] and (2) playing a role in RNA metabolism and mRNA-related post-transcriptional processes (e.g. trafficking, stability, translation initiation) that ultimately affect protein synthesis [356]. This is consistent with other terms related to RNA metabolism that were found to be enriched in our transcriptomic study (e.g. mRNA 3'-end processing, mRNA splicing, regulation of mRNA stability) (Figure 31) (see Paper 6.3) [351].
Figure 31. Detailed enriched terms from the biological processes "Protein metabolism" and "RNA metabolism" from the differentially expressed genes in Cussonneau et al., 2021 [351]. Terms written in the same colour represent closely related biological terms.
What does the muscle translatome of the hibernating bear look like?
In many cell types, transcript levels do not always predict protein levels [357], hence the emergence of the term translatome. For example, synaptic plasticity requires relatively rapid de novo protein synthesis, and specific mRNAs can be stored in neurons waiting for the precise moment to be translated [358]. To identify the level and type of mRNAs translated in hibernating versus active brown bear muscles, we could combine total RNA sequencing and ribosome profiling (ribosome sequencing). For the time being, many questions remain open, including which factors determine when and in which tissues these mRNAs should be translated. It is possible that the elevated mRNA levels of genes related to protein translation and ribosome biogenesis in winter bear muscles are waiting to be translated when needed. The transcriptional increase in protein biosynthesis-related genes detected in torpid squirrels facilitates the induction of translation in muscle during short bouts of arousal [354,355]. Bears, in contrast, do not undergo periods of arousal but maintain a state of alertness to potential dangers outside the den during hibernation [294]. In agreement, we recently reported that hibernating bears have higher plasma levels of the endocannabinoid-like compound N-oleoylethanolamide, which has been described to have wakefulness-promoting effects in rodents (see Appendix 10.5) [335]. Therefore, in hibernating bear muscles, the increase in protein translation-related mRNAs could serve as a rescue in case of an unexpected exit from the den and thus allow a rapid reactivation of general protein synthesis.
Hibernation induces a transcriptomic reprogramming in TGF-β superfamily-related genes in bear muscles
The main message from our study is that the TGF-β pathway is down-regulated whereas the BMP pathway is concomitantly maintained or even up-regulated at the transcriptomic level in atrophy-resistant muscles of the hibernating brown bear. Our data suggest that the balance between the TGF-β and the BMP pathways is crucial for preventing skeletal muscle atrophy during a long period of disuse [351] (Figure 30). In the following subsections we will discuss (1) the expression of target genes of BMP
signalling in skeletal muscle, (2) the regulation of the intracellular actor SMAD4, (3) the regulation of the extracellular actor CCN2, (4) the relationship between TGF-β/BMP signalling and changes in lipid membrane composition, (5) the relationship between TGF-β/BMP signalling and neuromuscular junction integrity, and (6) muscle-organ crosstalk and its connection with TGF-β/BMP signalling.
Identification of BMP target genes in skeletal muscle.
To our knowledge, no study has explored the transcriptomic signature of BMP signalling in skeletal muscle. Therefore, one of the final objectives of this thesis project was to draw up a list of genes induced upon activation of BMP signalling. We first looked for easy-to-use tools for the preliminary experiments and therefore chose the immortalised human muscle cell line CCL136 (a rhabdomyosarcoma cell line), as these are undifferentiated cells, which saved time and made the genetic manipulation more efficient. CCL136 cells were transfected with a dominant negative BMP type 1 receptor (BMPR1A/ALK3) in which the kinase is inactivated (K261R), thus blocking signal transduction [367], and treated with the GDF5 ligand to activate BMP signalling (see Appendix 10.1). We observed a lower induction of total and phosphorylated SMAD1/5 and SMAD4 protein contents following GDF5 treatment in CCL136 cells expressing ALK3-KD compared to non-transfected cells (Figure 32). This only slight decrease in SMAD protein content could be explained by compensatory mechanisms set up by the cells through other BMP receptors. Unfortunately, the transfection conditions were not optimal when tested with C2C12 myotubes and human primary myotubes. For these cell lines, the use of viral particles (e.g. lentivirus) to transduce the inactive receptor could improve the transfection efficiency. In addition, other methods can be used to attenuate BMP signalling and overcome the compensatory mechanism of the receptors, for example, the use of siRNAs against SMAD1 and/or SMAD5 or treatment with the BMP inhibitor noggin. Once the optimisation is done, we will perform RNA and chromatin immunoprecipitation (ChIP) sequencing and obtain a list of BMP-dependent genes in muscle. Then, further analysis of the transcriptome of hibernating bear muscles will confirm whether the identified genes are indeed upregulated in this model of natural resistance to muscle atrophy.
Immunoprecipitation protocols were optimised for SMAD4, SMAD1/5 and SMAD2/3. However, our conditions, unfortunately, did not enable the co-immunodetection of SMAD4 with either SMAD1/5 or SMAD2/3 (Figure 33). Co-IP is highly dependent on protein-protein interactions. As bear muscle samples are necessarily frozen when sampling in the field, protein-protein interactions may have been disrupted by a freeze-thaw cycle, which may explain the lack of co-immunodetection. Another antibody against SMAD4 may be used to avoid a possible overlap of the epitope of the SMAD4 antibody with the protein-protein interaction site. Cross-linking the binding partners could also help to stabilize physiological interactions throughout extraction procedures involving mechanical and chemical stresses and thus enhance protein-protein interactions. However, for now, whether the proportion of SMAD4 recruited to TGF-β versus BMP signalling may change in winter versus summer in bear muscles remains unresolved.
SMAD4 stability could be different in bears vs. other mammals.
The analysis of the SMAD4 protein sequence revealed two highly conserved domains separated by a proline-rich linker, which is a substrate for kinases and phosphatases. This linker serves as a binding platform for cofactors and ubiquitin ligases, which tag the SMAD4 protein for activation or degradation [368,369]. The SMAD4 protein sequence is highly conserved in metazoans. In mammals, the bear and the mouse share 98% sequence homology with the human SMAD4 protein. In the SMAD4 linker region, a threonine is found at position 272 (Thr272) and is well conserved from Drosophila to human (Figure 34). However, in all members of the Ursidae family (red box), this threonine is replaced by a serine (Figure 34). This characteristic is not shared by other small hibernators (black box) (Figure 34). An in vitro study revealed a putative regulatory site consisting of four threonines in the linker region of SMAD4, including Thr272 [370,371]. The authors showed that activation of the MAPK pathway induces phosphorylation of Thr276, which initiates three sequential phosphorylations by GSK3 (glycogen synthase kinase-3) on Thr272, Thr268, and Thr264. This generated the recognition of SMAD4 by the E3 ubiquitin ligase β-TrCP, leading to its proteasome-dependent degradation [370,371] (Figure 35). This sequential phosphorylation of SMAD4 may not be possible in the Ursidae due to the replacement of this threonine by a serine. This could thus stabilise SMAD4, explaining the slight increase in its protein content in hibernating bear muscles [351]. This change in sequence could also allow SMAD4 to interact differently with its partners. Whether this stabilisation and/or interaction with other proteins plays a role in regulating the TGF-β/BMP balance in muscles remains to be explored. Myotube culture experiments could be performed by mutating Thr272 within SMAD4 to explore the consequences on TGF-β or BMP transcriptional activities. In addition, using surface plasmon resonance, the real-time association and dissociation rates between wild-type or mutated SMAD4 and SMAD1/5 or SMAD2/3 could be accurately determined.
CCN2 is strongly reduced in hibernating bear muscles
We showed that the cellular communication network factor 2 (CCN2) mRNA and protein levels were strongly reduced in hibernating bear muscles compared to the summer counterpart (see Paper 6.3). CCN2 has been reported to be induced by mechanical stretch in vitro in a TGF-β signalling-dependent manner. Hibernating bear muscles do not experience fibrosis, as shown by the downregulation of most of the extracellular matrix organisation-related genes (see Paper 6.3) [351]. Therefore, our data suggest that, in addition to targeting CCN2 to reduce fibrosis in muscular dystrophies, this strategy could also be of interest in other muscle-related conditions without fibrosis, as in the hibernating bear muscles.
Is the regulation of the TGF-β/BMP balance linked to the modification of the lipid membrane composition?
In hibernating bear serum, we and others reported an increase in free circulating fatty acids and triglycerides arising from the lipolysis of adipose tissue [298,303,309,310,331,335].
NMJ in hibernating bear muscles
Amazingly, hibernating bears are partially resistant to denervation-induced muscle atrophy, whereas summer-active bears are susceptible to it like other mammals [328]. Whether the maintenance of BMP signalling in muscles is responsible for this resistance remains to be elucidated (Figure 36).
No study has yet explored in detail the structure and characteristics of the NMJ in active versus hibernating bears. For that purpose, histological studies of NMJ markers will soon be initiated in our laboratory.
6.4.2.6 The muscle-organ crosstalk and connection with TGF-β/BMP signalling.
Is GDF5 synthesised and released by adipose tissue?
GDF5 and its receptor BMPR1B are strongly upregulated in the hibernating bear muscles relative to their summer counterparts, whereas they are significantly downregulated in the unloaded mouse muscles (see Paper 6.3) [351]. We wondered whether the GDF5 ligand was increased in winter bear serum and could explain the induction of BMP-related gene transcription. Unfortunately, serum GDF5 concentration could not be measured because the only commercially available ELISA kit did not work in bears (see Appendix 10.1). GDF5 is primarily synthesised and secreted by the salivary glands, which are probably not a dynamic tissue during hibernation since bears do not drink or eat for 6 months. GDF5 is also present in adipose tissue (Human Protein Atlas data: https://www.proteinatlas.org/ENSG00000125965-GDF5/tissue). In mice, GDF5 promotes thermogenesis in subcutaneous white adipose tissue (sWAT) after cold exposure via non-SMAD p38 signalling [395]. In addition, GDF5 facilitates the development of brown fat-like cells in sWAT tissue via SMAD1/5 signalling in mice [396]. However, brown fat in hibernating brown bears has only been described in one study [397], which was subsequently refuted [398]. Shivering plays a role in active thermogenesis in muscles, but there is also non-shivering thermogenesis, which occurs primarily through the metabolism of brown fat and, to a lesser degree, white fat [399]. If GDF5 is increased in winter bear serum, it would not only contribute to the maintenance of muscle mass induced by BMP signalling but could also play a role in non-shivering thermogenesis in white adipose tissue (Figure 37). The concentration of GDF5 in bear serum could be assessed by mass spectrometry. Moreover, exploring the gene/protein expression of GDF5 and BMP-related components in adipose tissue could provide valuable information on whether GDF5 is synthesised and released from adipose tissue into the bloodstream during hibernation.
In green and red, the genes down- and upregulated, respectively, in hibernating bear muscles in Cussonneau et al., 2021 [351].
TGF-β signalling and muscle-bone
In addition, myostatin, produced in muscle, stimulates the production and the differentiation of osteoclasts responsible for bone resorption through SMAD-dependent signalling [406][407][408][409].
In this study, we aimed to explore the impacts of a controlled induction of ATF4 signalling on skeletal muscle. We selected the molecule halofuginone (HF), which (1) induces eIF2α-ATF4 signalling and (2) is already used and well tolerated in mouse dystrophic models. We designed an experimental protocol, choosing (1) the dose of HF, (2) the mode and frequency of administration, and (3) the duration of treatment (data not shown). Once the protocol was validated, it was tested in mice either in basal conditions or when they were subsequently subjected to muscle atrophy induced by hindlimb suspension (HS) (Figure 38). We also took advantage of a model of muscle atrophy resistance that we previously explored (see Paper 6.3) [351] and studied the regulation of the ATF4 atrogenes in the hibernating brown bear muscles.
We showed that induction of ATF4 signalling was not associated with atrophy in muscles from HF-treated mice or hibernating brown bears. We also investigated the molecular mechanisms of HF in skeletal muscle.
Introduction
Many unloading conditions (e.g., microgravity, bed rest, or physical inactivity) lead to a loss of muscle mass and strength. This muscle atrophy is associated with adverse health effects such as autonomy decline and increased morbidity and mortality [1][2][3]. Considering the lack of proven, easy-to-use therapeutic or preventive treatment, muscle atrophy remains a major public health issue (World Health Organisation data) [4]. The underlying molecular mechanisms of muscle atrophy involve the dysregulation of a complex network of intracellular pathways leading to an imbalance in protein turnover [5][6][7][8][9]. Activating transcription factor 4 (ATF4) is overexpressed in many conditions of muscle atrophy [10][11][12][13][14] and is, therefore, considered an atrogene, i.e., one of the genes whose mRNA expression is commonly altered during atrophy [14]. ATF4 belongs to the integrated stress response (ISR) pathway, a conserved intracellular network activated in response to various intrinsic and extrinsic stresses (e.g., amino acid (AA) depletion and endoplasmic reticulum (ER) stress) to restore cellular homeostasis [15]. Activation of the ISR involves phosphorylation of eukaryotic translation initiation factor 2 (eIF2α) by several kinases (i.e., general control nonderepressible 2 (GCN2), protein kinase RNA-like ER kinase (PERK), protein kinase R (PKR), heme-regulated inhibitor (HRI), and microtubule affinity-regulating kinase 2 (MARK2)), resulting in the global inhibition of protein synthesis but the increased translation of certain mRNAs, including ATF4 [15,16]. Inhibition of ATF4 in skeletal muscle limits starvation-, immobilisation-, and ageing-induced atrophy, whereas ATF4 induction results in muscle wasting [11][12][13][17]. ATF4 target genes include some atrogenes, such as GADD45A and CDKN1A, that are required for ATF4-mediated muscle atrophy [11][12][13][18], and TRIB3, which is involved in fasting- and ageing-induced muscle atrophy [19,20]. However, ATF4 targets also include genes that may be involved in the maintenance of muscle homeostasis. Indeed, ATF4 contributes to the transcription of autophagy-related genes [21][22][23][24] and is activated during mitochondrial perturbations (e.g., oxidative stress) to restore mitochondrial homeostasis [25,26]. In fact, activation of the eIF2α-ATF4 pathway by the pharmacological molecule halofuginone (HF), prior to stressful events (i.e., in ischemia-reperfusion injury models), has shown positive effects on the preservation of kidney and liver function [27]. Moreover, the maintenance of autophagy and mitochondrial homeostasis is essential for maintaining muscle mass [28][29][30]. Altogether, this led to the hypothesis that the ATF4 pathway may have a dual role in skeletal muscle. Interestingly, halofuginone also improved muscle performance during dystrophies, mainly through its antifibrotic properties [31][32][33][34]. Whether these beneficial effects involve a regulation of the ATF4 pathway has never been investigated. However, they mainly involved the inhibition of the transforming growth factor-β (TGF-β) pathway [35][36][37]. The TGF-β signalling pathway acts as a negative regulator of muscle mass, notably through the transcriptional induction of the atrogenes TRIM63/MuRF1 and FBXO32/Atrogin-1 [38][39][40].
When inhibited, it promotes a profound muscle hypertrophy phenotype in various conditions and species [41,42]. Members of TGF-β signalling belong to the TGF-β superfamily [38,39], as do the bone morphogenetic protein (BMP) signalling members. The BMP signalling pathway [39,43] instead acts as a positive regulator of muscle mass through the transcriptional repression of the atrogene FBXO30/Musa1, which is required for denervation-induced muscle loss [44,45]. When inhibited, it profoundly exacerbates denervation-induced muscle atrophy [44,45]. This study aimed to explore the impacts of HF-induced ATF4 signalling on skeletal muscle under basal conditions and during hindlimb suspension-induced atrophy in mice. We further deciphered the molecular mechanisms of HF by focusing on protein metabolism and TGF-β/BMP signalling. We also took advantage of a model of muscle atrophy resistance that we previously explored [46], and studied the regulation of the ATF4-induced atrogenes in hibernating brown bear muscle.
Results
Induction of ATF4-Regulated Atrogenes Does Not Affect Muscle Mass in Mice
We used halofuginone (HF) to induce ATF4 transcriptional activity in mouse muscles, and we investigated the effect on skeletal muscle mass. For that purpose, mice were treated with HF three times a week for up to 4 weeks. Six hours after the last HF administration at the end of each week, we measured the mRNA levels of some ATF4 target genes involved in muscle atrophy, i.e., Trib3, Cdkn1a, Gadd45a, and Eif4ebp1 (Figure 1A-F). Except for Atf4, for which mRNA levels were elevated during the first 2 weeks of HF treatment, Trib3, Cdkn1a, and Gadd45a were all overexpressed in muscles after 2 weeks of HF treatment compared to H2O-treated mice. Of note, Eif4ebp1 mRNA levels increased in mouse muscles after 4 weeks of HF treatment compared to H2O, with a noticeable trend after 3 weeks of treatment (Figure 1F). We also investigated the regulation of other ATF4 target genes and showed the overexpression of Asns over the 4 weeks of HF treatment, as well as a trend for Ddit3 and Ppp1r15a (Supplementary Figure S1). These data underline an activation of ATF4 transcriptional activity. However, despite the overexpression of the ATF4-regulated atrogenes, the mass of the gastrocnemius, soleus, tibialis anterior, and extensor digitorum longus (EDL) muscles remained unchanged during the 4 weeks of HF treatment (Figure 1G, Supplementary Figure S1). Altogether, these data suggest that long-term halofuginone administration induced ATF4-regulated atrogenes without leading to muscle atrophy.
Overexpression of ATF4-Regulated Atrogenes during Hindlimb Suspension Is Uncoupled from Muscle Atrophy in HF-Treated Mice
We then investigated the effect of the induction of the ATF4 pathway by HF treatment during the muscle atrophy induced by hindlimb suspension (HS). Briefly, mice received HF three times a week for 3 weeks and were then, 3 days after the last HF administration, hindlimb-suspended or not for 3 or 7 days (Figure 2A). Measurements were thus performed at least 6 days after the last HF administration. HF induces ATF4 activation through the phosphorylation of eIF2α [47]. We observed that HF treatment resulted in overall higher phosphorylated and total eIF2α protein levels compared to H2O-treated mice (Figure 2B-D). In addition, HS led to an overall decrease in phosphorylated eIF2α protein levels compared to the controls (Ctrls) (Figure 2B,C). Moreover, levels of the mRNA encoding the phosphatase GADD34 (Ppp1r15a) were higher at 3 days of hindlimb suspension (HS3) compared to the Ctrls in both H2O- and HF-treated mice (Supplementary Figure S2). The expression of ATF4-regulated genes was not different between the Ctrl-HF and Ctrl-H2O groups (Figure 2 and Supplementary Figure S2). The mRNA levels of Atf4 and of its target genes involved in muscle atrophy, i.e., Trib3, Cdkn1a, and Eif4ebp1, were higher during HS in both H2O- and HF-treated mice (Figure 2E-H), while the mRNA levels of the ATF4-regulated atrogene Gadd45a remained unchanged (Figure 2I). Of note, the mRNA expression of other ATF4 target genes remained unchanged for Asns or slightly decreased upon HS for Ddit3 (Supplementary Figure S2). Altogether, Figures 1 and 2 show that (i) HF administration induced ATF4-regulated atrogenes after 6 h but no longer after 6 days, indicating that this effect was rapid and transient, and (ii) HS induced overexpression of these atrogenes. We next investigated the outcomes on skeletal muscle. Gastrocnemius muscle had atrophied only in H2O-treated mice after 3 days of HS (H2O-HS3) and in both H2O- and HF-treated mice after 7 days of HS. Surprisingly, the average fibre cross-sectional area (CSA) did not change in H2O-HS3 mice compared to the Ctrls but was lower after HS7 regardless of the treatment (Figure 3B). We further analysed the distribution of gastrocnemius fibre CSA (Supplementary Figure S3A,B). We reported (i) a lower proportion of small fibres and (ii) a higher proportion of large fibres in HF-treated mice compared to the H2O group (Supplementary Figure S3A). This observation was restricted to fast-twitch fibres (i.e., 2X/2B) (Supplementary Figure S3B). Taken together, our data suggest that induction of ATF4-regulated atrogenes is not associated with muscle atrophy after 3 days of hindlimb suspension in HF-treated mice and, thus, that HF slightly preserves muscle mass during HS.
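The fibre-size distribution analysis mentioned above can be illustrated with a short sketch that bins per-fibre cross-sectional areas into size classes and compares their proportions between treatment groups; the input file, column names, and bin width are assumptions, not the study's actual analysis script.

```python
import numpy as np
import pandas as pd

# Hypothetical per-fibre measurements: one row per fibre with its CSA (in µm²),
# the treatment group (e.g., H2O-Ctrl, HF-Ctrl, H2O-HS3, ...) and the fibre type.
fibres = pd.read_csv("gastrocnemius_fibre_csa.csv")  # columns: group, fibre_type, csa

# Bin fibres into illustrative 500-µm² size classes covering the observed range.
bins = np.arange(0, fibres["csa"].max() + 500, 500)
fibres["size_class"] = pd.cut(fibres["csa"], bins)

# Proportion of fibres in each size class, per group (rows sum to 1).
proportions = pd.crosstab(fibres["group"], fibres["size_class"], normalize="index")
print(proportions)

# Mean CSA per group, as summarized in Figure 3B.
print(fibres.groupby("group")["csa"].mean())
```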
Halofuginone Treatment Inhibits TGF-β While Promoting BMP Signalling in Gastrocnemius Muscle
HF inhibits the TGF-β pathway [36,48]. We and others have recently reported that inhibition of TGF-β signalling is associated with the concomitant activation of BMP signalling [45,46]. Therefore, we investigated how HF treatment and subsequent HS affected these pathways in skeletal muscle. The nuclear localisation of SMADs mirrors the upstream activation of the TGF-β or BMP pathway [49]. We, thus, measured protein levels of the transcription factors SMAD2/3 (TGF-β signalling), SMAD1/5 (BMP signalling), and SMAD4 (TGF-β and BMP signalling) in nuclear and cytosolic fractions (Figure 4A-D and Supplementary Figure S4A-F). The ratio of nuclear SMAD2/3 to total SMAD2/3 was very low in HF-treated mice compared to H2O-treated mice and decreased upon HS only in H2O-treated mice (Figure 4A,B). Consistently, the overall mRNA levels of several collagens, which are well-known target genes of TGF-β signalling, decreased upon HS (Supplementary Figure S5). Moreover, the ratio of nuclear SMAD1/5 to total SMAD1/5 was higher in HF-Ctrl mice than in H2O-Ctrl mice. This ratio was reduced at HS7 compared to the Ctrl in HF-treated mice, while it was increased in H2O-treated mice (Figure 4A,C). Finally, the ratio of nuclear SMAD4 to total SMAD4 was overall lower in HF- vs. H2O-treated mice (Figure 4A-D), with a decrease in HF-treated mice at HS7 compared to the Ctrl. The catabolic action of TGF-β involves the inhibition of protein synthesis [40,50,51], whereas the anabolic action of BMP involves its promotion [44]. The overall protein synthesis rates measured by puromycin incorporation were reduced during hindlimb suspension. However, this decrease was only significant in H2O-HS7 compared to H2O-Ctrl mice (Figure 5A,B). TGF-β signalling also acts as a negative regulator of muscle mass through the induction of the atrogenes TRIM63/MuRF1 and FBXO32/Atrogin-1 [38,39], while BMP signalling acts as a positive regulator through the transcriptional repression of the atrogene FBXO30/Musa1 [44,45]. Trim63 and Fbxo32 mRNA levels were upregulated only at HS3 in both H2O- and HF-treated mice, while mRNA levels of the atrogene Fbxo30 remained unchanged (Figure 5C-E).
Our data suggest that while HF inhibits TGF-β signalling, it also promotes BMP signalling in the control gastrocnemius muscles. We also showed that HF partially attenuates the drop in protein synthesis during hindlimb suspension.
ATF4-Regulated Atrogenes Are Overexpressed in Atrophy-Resistant Hibernating Brown Bear Muscle
Our data strongly suggest that the induction of ATF4 signalling is not always associated with muscle atrophy, either in basal conditions or in HS-induced muscle atrophy. We took advantage of a natural model, i.e., the hibernating brown bear, which experiences only a moderate loss of muscle protein content while remaining completely inactive for up to 6 months [52-54]. As with HF treatment, we recently reported a concomitant TGF-β pathway inhibition and BMP pathway activation in hibernating brown bear muscle [46]. We, thus, explored whether ATF4-regulated atrogenes may also be induced in this model. Interestingly, as shown in Figure 6, Atf4 was upregulated in hibernating brown bear muscle compared to the active counterpart. In addition, the two main ATF4-regulated atrogenes (Gadd45a and Cdkn1a) and Trib3 were also induced in hibernating brown bear muscle. Other ATF4 target genes were either down- (Eif4ebp1, Ppp1r15a, and Asns) or upregulated (Ddit3). These data show that ATF4-regulated atrogenes are induced in hibernating brown bear muscle, even if they are resistant to atrophy.
Discussion
Several muscle-wasting conditions, including fasting or physical inactivity, are associated with eIF2α phosphorylation [55,56] and/or ATF4 overexpression, which trigger muscle atrophy [10-13,18,57]. Moreover, muscle atrophy is hampered during fasting or ageing in mice with reduced ATF4 expression or expressing a phosphorylation-resistant form of eIF2α [11-13]. Both CDKN1A and GADD45A are referred to as atrogenes and are required for ATF4-mediated muscle atrophy [11-13,18], and TRIB3 is another ATF4 target gene involved in fasting- and ageing-induced muscle atrophy [19,20]. We showed here that overexpression of ATF4-regulated atrogenes was dissociated from muscle wasting (1) in a basal condition, (2) during hindlimb suspension, and (3) in a natural model of muscle-atrophy resistance (Figure 7).
We first observed that the overexpression of the ATF4-regulated atrogenes Trib3, Cdkn1a, Gadd45a, and Eif4ebp1, as well as Atf4 itself, induced by halofuginone treatment for up to 4 weeks, did not coincide with atrophy in any of the hindlimb muscles, including the gastrocnemius. We then reported that pre-treatment with halofuginone mitigated the atrophy of the gastrocnemius muscle during hindlimb suspension.
These positive effects of halofuginone treatment are consistent with reports showing that this dose (i.e., 0.25 µg/g) and frequency of administration (i) were very well tolerated in mice for up to 3 months and (ii) improved muscle-cell survival, promoted membrane repair, and improved muscle performance in models of muscular dystrophies [31,33,58-60]. However, none of these studies explored whether the potential effect of HF would involve the ATF4 pathway. Here, we showed that induction of ATF4-regulated atrogenes was uncoupled from muscle atrophy during hindlimb suspension. Indeed, although ATF4-regulated atrogenes were overexpressed during hindlimb suspension, halofuginone-treated mice displayed a partial preservation of gastrocnemius muscle mass and CSA. In addition, we took advantage of a natural model of resistance to muscle atrophy to examine the expression of ATF4-regulated atrogenes. The brown bear remains completely inactive during hibernation for up to 6 months but, surprisingly, is not sensitive to muscle atrophy [52-54], which provides an interesting model for finding new molecular mechanisms to fight muscle atrophy in humans. We showed that the atrogenes CDKN1A, GADD45A, ATF4 itself, and TRIB3 were upregulated in atrophy-resistant hibernating brown bear muscle compared to active bear muscle. Of note, ATF4 is mainly regulated at the translational level [15]. However, in most of the previous studies on the topic, the authors only measured Atf4 mRNA expression and expression of its target genes as evidence of its activity in skeletal muscle. Indeed, endogenous ATF4 protein cannot be reliably detected in skeletal muscle, presumably due to its low abundance, very short half-life, and the lack of a high-quality antibody [11-13,17,18,61]. Altogether, these data strongly suggest that the induction of the ATF4 pathway can be dissociated from muscle atrophy. ATF4 target genes are highly dependent on the type and duration of stress stimuli [21,62], and the ability to restore homeostasis may be overwhelmed when the stress is too severe or sustained, resulting in cell death through the transcription of pro-apoptotic genes [63-66]. Therefore, to avoid both chronic and acute activation, halofuginone was administered periodically to activate the eIF2α-ATF4 pathway in mice. In our conditions, the ATF4 transcriptional program may, thus, (i) differ from the transcriptional program induced by a severe and sustained activation and (ii) include genes that might counteract the effect of ATF4-induced atrogenes. Halofuginone is also well described to target the TGF-β pathway [35,36]. The nuclear translocation of the TGF-β transcription factors SMAD2/3 requires the formation of a complex with SMAD4 [43]. Halofuginone-induced eIF2α phosphorylation has been reported to inhibit the nuclear translocation of this complex in intestinal porcine enterocyte cells in vitro [67]. Consistently, we reported here a concomitant overall (i) increase in phosphorylated eIF2α protein levels and (ii) reduction in SMAD2/3 and SMAD4 nuclear protein levels in HF-treated mice. This highlights in skeletal muscle, for the first time, the possible role of HF-induced eIF2α phosphorylation in TGF-β inhibition. Although the concomitant collagen downregulation and decrease in SMAD2/3 nuclear protein levels during hindlimb suspension in H2O-treated mice suggest a decrease in TGF-β signalling, we cannot exclude that these events are disconnected.
Indeed, the SMAD2/3 nuclear protein levels are consistently low in HF-treated mice and, thus, cannot explain the decreased collagen expression during hindlimb suspension. Much remains to be clarified about the mechanisms of action of HF. Indeed, HF is used for its antifibrotic properties mediated by TGF-β inhibition in situations already characterised by fibrosis [48]. This is, however, not the case in our study. In addition, the inhibition of TGF-β signalling in muscles of HF-treated mice could have led to transcriptional changes that remain to be explored. It is possible that TGF-β signalling is induced later during hindlimb suspension. In fact, the TGF-β signalling pathway has previously been reported to be either unchanged in skeletal muscle after 1-3 days or induced after 7-10 days of unloading [46,68,69]. We also reported an upregulation of Trim63 and Fbxo32 during hindlimb suspension. Although these atrogenes are targets of the TGF-β signalling activation, they are also regulated by other signalling pathways [5]. We and others reported that the balance between TGF-β and BMP signalling seems crucial for muscle-mass maintenance during catabolic situations [39,44-46,70]. Indeed, using the hibernating bear model, we recently reported that TGF-β signalling, i.e., a negative regulator of muscle mass, was downregulated at the transcriptomic level in muscles that are resistant to atrophy, while BMP signalling, i.e., a positive regulator of muscle mass, was maintained [46]. Previous data suggested that TGF-β inhibition would release SMAD4, i.e., the common actor in TGF-β and BMP signalling, which could, thus, be recruited to BMP signalling and promote hypertrophy and/or counteract atrophy [45]. Here, we reported an increase in the SMAD1/5 nuclear protein levels in the halofuginone-treated control mice, suggesting there was concomitant BMP signalling activation and TGF-β inhibition. In addition, BMP activation was reported to increase during denervation, intensive care disuse, and amyotrophic lateral sclerosis and was described as essential to counteract excessive muscle wasting [44,45]. In agreement, we reported here that BMP transcription factors SMAD1/5 accumulated in the nucleus in H2O-treated mice but, surprisingly, declined in HF-treated mice after 7 days of hindlimb suspension. Whether the higher basal pools of nuclear SMAD1/5 and their maintenance after 3 days of hindlimb suspension in HF-treated mice contributed to attenuating skeletal muscle atrophy during hindlimb suspension remains to be explored. Of note, BMP signalling has been reported to promote protein synthesis in muscle [44]. Maintenance of the BMP pathway after 3 days of hindlimb suspension may have contributed to the partial preservation of protein synthesis and muscle mass in HF-treated mice. Nuclear translocation of SMAD1/5 represses the transcription of FBXO30/Musa1 [44,45]. However, we did not observe any change in Fbxo30 expression. Mechanisms by which BMP signalling controls muscle mass are still very poorly understood and will require further studies, particularly with a comprehensive characterisation of the BMP target genes in skeletal muscle. We can also speculate that HF-induced BMP activation has helped to limit muscle atrophy induced by ATF4-regulated atrogenes (Figure 7).
In conclusion, halofuginone treatment reproduced the muscle features of hibernating bears in mouse gastrocnemius muscles, with (i) the activation of ATF4-regulated atrogenes and (ii) the concurrent inhibition of TGF-β signalling and promotion of BMP signalling, without resulting in muscle atrophy (Figure 7). These characteristics were associated with mitigated muscle atrophy during physical inactivity. To date, clinical trials have all attempted to inhibit the TGF-β pathway, mostly with side effects or minimal efficiency [71]. Our study suggests that halofuginone, a well-tolerated chemical compound already used in human clinical trials [36], was able to tune the TGF-β/BMP balance in vivo and likely sustained muscle mass. Moreover, our data open new avenues to further decipher by which precise mechanisms ATF4 induces atrophy and how BMP activation can interfere.
Materials and Methods
Ethics, animal housing, and experimental design. All experiments were conducted with the approval of the regional ethics committee (agreement no. D6334515) following the European Directive 2010/63/EU on the protection of vertebrate animals used for experimental and scientific purposes. This study was performed with 12-week-old C57BL6/J male mice (25-30 g), purchased from Janvier Labs (Le Genest-Saint-Isle, France). They were housed individually upon arrival for 10 days of acclimatisation in a controlled room (22 ± 2 °C, 60 ± 5% humidity, 12 h light/dark cycle, and light period starting at 8 h), fed ad libitum a standard rodent diet (pellets A03 from Safe, Augy, France), and given free access to water. Two distinct animal experiments were performed. To evaluate the effects of a periodic halofuginone (HF) (#32481, Sigma, Saint-Quentin-Fallavier, France) administration, we performed a first protocol where mice received either HF (0.25 µg/g) or water (H2O) by gavage 3 times a week for 1 to 4 weeks (n = 6 animals per group). This dose was reported as well tolerated over longer periods [31,33,60]. Gastrocnemius muscle was sampled 6 h after the last HF/H2O administration at the end of each week. Subsequently, we performed a second protocol to test whether HF administration before hindlimb unloading had a positive effect on muscle mass and function. For that purpose, we performed two separate animal experiments. In each experiment, mice received either HF (0.25 µg/g) or H2O by gavage 3 times a week for 3 weeks and were afterwards subjected either to hindlimb unloading through tail suspension (HS) or kept unsuspended (Ctrl) for 3 or 7 days, as previously described [46] (n = 8-19 animals per group). We did not record any difference between Ctrl mice at 3 or 7 days for all the measurements reported in this manuscript. We, therefore, pooled the two groups of Ctrl mice for further analysis and data representation. Food intake and body weight were recorded throughout the different protocols. Unloading in control mice resulted in only a small body weight loss (<10%) that occurred within the first 3 days concomitantly with a decrease in food intake, whereas HF treatment did not modify food intake or body weight (see Supplementary Figures S1 and S2). Tissue collection. At the end of the experiments, mice were euthanised by cervical dislocation. The soleus, gastrocnemius, tibialis anterior, and extensor digitorum longus (EDL) muscles were carefully collected and weighed prior to immediate freezing in liquid nitrogen and storage at -80 °C until analyses. Measurement of protein synthesis in gastrocnemius.
At the end of protocol 2, mice received an intraperitoneal injection of 0.040 µmol/g puromycin (#P8833, Sigma, Saint-Quentin-Fallavier, France) dissolved in 100 µL of a saline solution before euthanasia, as described previously [72]. At exactly 30 min post-puromycin injection, gastrocnemius muscle was dissected and frozen in liquid nitrogen for Western blot analysis, as follows. Histology and morphometric measurements. A part of the gastrocnemius muscle was collected at the end of protocol 2, frozen in isopentane chilled with liquid nitrogen, and stored at -80 °C until use. Serial muscle cross-sections (10 µm thick) were obtained using a cryostat (HM500M Microm International, Fisher Scientific, Illkirch, France) at -20 °C. Cross-sections were labelled with anti-laminin-α1 (L9393, Sigma, Saint-Quentin-Fallavier, France) to outline the fibre cross-sectional area (CSA) and with the BFF3 antibody (#AB_2266724, DSHB, Iowa City, IA, USA) to identify myosin heavy chain type 2B fibres. Both were subsequently hybridised with a corresponding secondary antibody conjugated to Alexa Fluor (Invitrogen, Cergy-Pontoise, France). Image acquisitions were performed with a high-resolution ORCA-Flash4.0 LT+ Digital CMOS camera coupled to an IX-73 microscope (Olympus, Münster, Germany) and Cell-Sens Dimension software (Olympus Soft Imaging Solutions, Münster, Germany). The CSA was determined for 1000-1500 fibres per animal, using ImageJ software 1.53f51 (http://rsb.info.nih.gov/ij/, accessed on 3 April 2018). Protein isolation. Gastrocnemius muscles were pulverised in liquid nitrogen. (1) For all targets, ~30 mg of the resulting powders were homogenised using a polytron in 1 mL of an ice-cold buffer (10 mM Tris pH 7.5, 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, and 0.5% Igepal CA630) containing inhibitors of proteases (Protease Inhibitor Cocktail) and phosphatases (1 mM Na3VO3 and 10 mM NaF) (Sigma, Saint-Quentin-Fallavier, France). The homogenates were stirred for 1 h at 4 °C and then centrifuged at 10,000× g for 15 min at 4 °C. The resulting supernatants containing total soluble proteins were then stored at -80 °C until use. (2) For SMAD protein level analysis, subcellular fractionation was performed. For that purpose, ~50 mg of gastrocnemius powder samples were homogenised for 1 min on ice using a polytron in 500 µL of ice-cold extraction buffer (10 mM HEPES, pH 7.5, 10 mM MgCl2, 5 mM KCl, 0.1 mM EDTA, pH 8.0, and 0.1% Triton X-100) [73]. The resulting homogenates were subjected to sequential fractionation steps to separate soluble cytosolic and nuclear proteins as described [74]. Pellets containing nuclear proteins were solubilised in nuclear extraction buffer (20 mM HEPES, pH 7.9, 25% glycerol, 500 mM NaCl, 1.5 mM MgCl2, and 0.2 mM EDTA, pH 8.0) [73]. For all protein extracts, protein concentration was determined using the Bradford Protein Assay Kit (Biorad, Marnes-la-Coquette, France). Proteins were then diluted in Laemmli buffer and stored at -80 °C until use. Western blots. Protein contents for (i) SMAD family members (anti-SMAD1-5, PA5-80036, Thermofisher, Illkirch, France; anti-SMAD2-3, #8685, Cell Signalling Technology, Saint-Cyr-L'Ecole, France; anti-SMAD4, ab230815, Abcam, Cambridge, UK), (ii) total and phosphorylated eukaryotic initiation factor 2 alpha (anti-eIF2α, #9722, Cell Signalling Technology; anti-p-Ser51-eIF2α, ab32157, Abcam), and (iii) incorporation of puromycin (anti-puromycin clone 12D10, MABE343, Millipore, Burlington, MA, USA) were assessed by immunoblotting.
Briefly, 20-40 µg of protein extracts were subjected to SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) using TGX™ FastCast™ 10% Acrylamide gels (Biorad, Marnes-la-Coquette, France) and transferred onto a PVDF membrane (Hybond P, Amersham, England) using the Trans-Blot® Turbo™ Transfer System standard protocol (Biorad, Marnes-la-Coquette, France). Western blots were blocked for 1 h at room temperature in TBS (Tris-Buffered Saline) buffer with 0.1% Tween-20 (TBS-T, pH = 7.8) with 5% bovine serum albumin (BSA) for all the targets, in accordance with the instructions of the manufacturer. They were then washed thrice in TBS-T and incubated (overnight, stirring, 4 °C) with appropriate primary antibodies diluted at 1:1000, except for anti-puromycin, diluted at 1:5000. Western blots were then washed and incubated for 1 h with an appropriate secondary antibody (HRP-conjugated anti-rabbit (#7074) or anti-mouse (#7076) IgGs) (Cell Signalling Technology, Saint-Cyr-L'Ecole, France). For the anti-puromycin antibody, an anti-mouse IgG2Ak (115-035-206, Jackson ImmunoResearch Laboratories, West Grove, PA, USA) was used. Signals were detected after incubation with Luminata Crescendo Western HRP substrate (Millipore, Burlington, MA, USA) and visualised using the G:BOX ChemiXT4 (XL1) imaging system (Syngene, Frederick, MD, USA). Signals were then quantified using ImageJ 1.53f51 software. Two samples from each group were loaded on each gel. The signal recorded within each lane of one Western blot was normalised to the overall signal of that blot, and then signals were normalised to the total amount of proteins determined by Biorad's stain-free system or Ponceau S to correct for uneven loading. The normalised values were then averaged by group and expressed as the fold change from the mean of all H2O-Ctrl samples. RT-qPCR. Total RNA from gastrocnemius muscle samples was extracted with the Macherey-Nagel™ NucleoSpin™ 96 RNA Kit and the KingFisher™ Duo Prime Purification System, in accordance with the instructions of the manufacturer (Macherey-Nagel, Hoerdt Cedex, France). RNA was quantified by measuring the absorbance at 260 nm on a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA). Then, 500 ng of RNA were treated with DNase I (Invitrogen, Cergy-Pontoise, France) prior to reverse transcription using random primers and SuperScript II (Invitrogen, Cergy-Pontoise, France), in accordance with the instructions of the manufacturer. Real-time PCR was carried out using the CFX96 Real-Time PCR detection system (Biorad, Marnes-la-Coquette, France). Primer sequences are provided in Supplementary Table S1. PCR reactions were performed using the IQ SYBR Green Supermix (Biorad, Marnes-la-Coquette, France), in accordance with the instructions of the manufacturer. The comparative threshold cycle (2^-ΔΔCT) method was used to compare the relative mRNA expression between each group, using TBP (TATA binding protein) as the reference gene for muscle. The relative mRNA abundance was arbitrarily set to 1 for the H2O-Ctrl group. Statistics. All data are means ± SEM and were analysed for normality of residuals using the Shapiro-Wilk test. No data set had to be transformed for non-normal distribution. For protocol 1 (n = 6/group), we performed multiple Welch t-tests within each week. For protocol 2 (n = 8-19/group), we performed a two-way ANOVA with the factors "Hindlimb suspension" and "Halofuginone" and corrected the data for multiple comparisons using Tukey's test. These analyses were performed using Prism 9 (GraphPad Prism 9, San Diego, CA, USA).
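For illustration only (the actual analyses were run in GraphPad Prism, as stated above), the following minimal Python sketch shows the arithmetic of the comparative threshold cycle (2^-ΔΔCT) calculation described in the RT-qPCR paragraph; the gene, the Ct values, and the function name are hypothetical examples rather than data from this study, with Tbp used as the reference gene as in the text.

    def relative_expression(ct_target, ct_ref, ct_target_ctrl_mean, ct_ref_ctrl_mean):
        """Fold change (2^-ΔΔCT) of a target gene in one sample versus the control group."""
        delta_ct = ct_target - ct_ref                            # normalise to the reference gene (Tbp)
        delta_ct_ctrl = ct_target_ctrl_mean - ct_ref_ctrl_mean   # same normalisation for the H2O-Ctrl group
        return 2.0 ** -(delta_ct - delta_ct_ctrl)                # equals 1.0 for the control group itself

    # Hypothetical Ct values for Trib3 in one HF-treated sample versus the H2O-Ctrl group means
    fold_change = relative_expression(ct_target=24.1, ct_ref=22.0,
                                      ct_target_ctrl_mean=26.0, ct_ref_ctrl_mean=22.1)
    print(f"Trib3 relative expression vs H2O-Ctrl: {fold_change:.2f}")  # ~3.5-fold

Per-sample values computed this way can then be averaged by group, mirroring the group comparison described above.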
Transcriptomic data. We used transcriptomic data from already published studies [46]. The transcriptomic bear data supporting Figure 6 of this study are openly available in the GEO repository database (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi, reference no. GSE144856, accessed on 1 September 2021). To identify the differentially expressed genes (DEGs) from this list, we selected a winter/summer fold change (FC) > 1.0 with an adjusted p-value < 0.05 as the cut-off for the up-regulated genes.
Supplementary Materials: The supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms24010621/s1.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and of the European Directive 2010/63/EU and was approved by the institutional review board (or ethics committee) of (1) the Swedish Ethical Committee on Animal Experiment (application nos. Dnr C3/2016 and Dnr C18/2015), the Swedish Environmental Protection Agency (no. NV0741-18), and the Swedish Board of Agriculture (no. Dnr 5.2.18-3060/17) and (2) the C2E2A (Comité d'Ethique pour l'Expérimentation Animale Auvergne) (no. D6334515).
Data Availability Statement: The analysed transcriptomic bear data that support the findings of Figure 6 are openly available in the GEO repository database at https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi, accessed on 16 June 2021, reference no. GSE144856. Other data that support the findings of this study are available in the supplementary materials of this article.
Primer sequences used for RT-qPCR (forward, reverse):
Asns        TACAACCACAAGGCGCTACA     AAGGGCCTGACTCCATAGGT
Cdkn1a      GTCTTGCACTCTGGTGTC       CTTGGAGTGATAGAAATCTG
Col1a1      CGTGGTGACAAGGGTGAGAC     AACCAGGAGAACCAGGAGGA
Col3a1      CTGCTGGTCCTTCTGGTGCT     AGCCACGTTCACCAGTTTCA
Col5a2      CCTGGTCCAAATGGTGAACA     CCAGGGTTTCCTTCCTTTCC
Col6a1      CCACAACCAGATGCAAGAGC     CACCAGCCATCCATTGTAGC
Ddit3       GCATGAAGGAGAAGGAGCAG     CTTCCGGAGAGACAGACAGG
Eif4ebp1    CAGGCGGTGAAGAGTCACAA     CCTTGGGGGACATAGAAGCA
Fbxo30      GTTGGGATTGCGTAGTGACC     CCCTCATTAGCCGGGATACA
Fbxo32      AGTGAGGACCGGCTACTGTG     GATCAAACGCTTGCGAATCT
Gadd45a     AGTCAACTTATTTGTTTTTGC    GCAATTTGGTTCAGTTATTT
Ppp1r15a    GACTCAAGCCAGAGTCCCTG     TAGAGGAATCTCGGGGTCCT
Tbp         TGGTGTGCACAGGAGCCAAG     TTCACATCACAGCTCCCCAC
Trib3       CCAGAGATACTCAGCTCCCG     GAGGAGACAGCGGATCAGAC
Trim63      ATGGAGAACCTGGAGAAGCA     AACGACCTCCAGACATGGAC
Atf4: activating transcription factor 4; Asns: asparagine synthetase; Cdkn1a: cyclin-dependent kinase inhibitor 1a; Col1a1: collagen type I alpha 1; Col3a1: collagen type III alpha 1; Col5a2: collagen type V alpha 2; Col6a1: collagen type VI alpha 1; Ddit3: DNA damage inducible transcript 3; Eif4ebp1: eukaryotic translation initiation factor 4E binding protein 1; Fbxo30/32: F-box protein 30/32; Gadd45a: growth arrest and DNA damage inducible alpha; Ppp1r15a: protein phosphatase 1 regulatory subunit 15a; Tbp: TATA binding protein; Trib3: tribbles pseudokinase 3; Trim63: tripartite motif containing 63.
Discussion and perspectives
In our study, we found that the induction of ATF4 atrogenes in skeletal muscle was not associated with atrophy (1) in healthy and (2) in catabolic conditions in halofuginone-treated mice, and also (3) in hibernating brown bears. Moreover, we found benefits for gastrocnemius muscle mass in halofuginone-treated mice subjected to hindlimb suspension (HS) compared to untreated mice.
Furthermore, we demonstrated that the molecular mechanisms of halofuginone involve the inhibition of TGF-β signalling while concomitantly promoting BMP signalling (Figure 39). In addition to the points discussed in the article, other points deserve to be discussed. First, we will discuss the halofuginone mechanisms of action, then its biological effects on muscle mass, and finally the dual role of ATF4 signalling in skeletal muscle.
Halofuginone mechanism of action in skeletal muscle
Halofuginone (HF) is a synthetic derivative of febrifugine, which has been isolated from the roots and leaves of Dichroa febrifuga and also from Hydrangea [415] (Figure 40). Febrifugine, and subsequently HF, have been used in traditional Chinese medicine for many years for their therapeutic benefit against malaria, cancer, fibrosis and inflammatory diseases [416]. Currently, two modes of action of HF have been described: (1) inhibition of prolyl-tRNA synthetase (ProRS) activity leading to ISR activation and (2) inhibition of type 1 collagen production via SMAD3 inhibition [416]. In addition, expression profiling of HF targets in epithelial cells revealed that this molecule can induce the expression of ATF4 target genes, including TRIB3, GADD45A and ATF4 itself [417]. Consistently, in our study, we found that Atf4 and its target genes were upregulated following HF treatment in mouse muscles. Does halofuginone induce GCN2 phosphorylation in skeletal muscle? HF binds and inhibits the activity of prolyl-tRNA synthetase (ProRS) [418]. This mimics the response to amino acid starvation by increasing GCN2 and subsequently eIF2α phosphorylation, and thus ATF4 translation [418]. Unfortunately, examining GCN2 phosphorylation in skeletal muscle is currently not feasible with the commercially available antibodies. Therefore, it is unclear in our study whether the increased transcriptional activity of ATF4 following HF treatment was mediated by the canonical HF-induced activation of GCN2. ATF4 binds to specific CCAAT/enhancer binding protein (C/EBP)-ATF response elements (CAREs) located in the promoters of its target genes. Our team has developed a CARE-driven luciferase mouse model (CARE-LUC) that enables the study of the activity of the eIF2α-ATF4 pathway in the whole organism, and at tissue and cellular levels, by combining imaging, luciferase assays and immunochemistry [419]. We could use wild-type CARE-LUC mice and CARE-LUC Gcn2 KO mice available in our laboratory to compare the intensity of luciferase in muscles during HF treatment and subsequent hindlimb suspension. This will enable determination of whether ATF4 transcriptional activity induced by HF treatment is GCN2-dependent. Furthermore, we showed for the first time that TGF-β inhibition by HF was concomitant with a promotion of BMP signalling with nuclear translocation of SMAD1/5 (see Paper 7.2) [423]. We suggested that this promotion, and the resulting transcriptional program, might help neutralise the atrophic actions of ATF4 in muscles subjected to HS but also in muscles of the hibernating bears. A study showed that BMP2 treatment induced an increase in ATF4 protein levels and its phosphorylation in chondrocyte cells [424]. Several post-translational modifications of ATF4, including phosphorylation, regulate its stability or enhance its transcriptional activity [194]. Whether BMP signalling in muscles can induce ATF4 phosphorylation and whether this post-translational modification can alter its stability and/or transcriptional activity has never been studied.
As it is not possible to analyse ATF4 at the protein level in muscles in vivo, in vitro experiments will be needed to explore this hypothesis.
Biological effects of halofuginone on skeletal muscle
Duration of HF pre-treatment. In this study, we reported that 3 weeks of HF pre-treatment slightly preserved the mass and CSA of the gastrocnemius muscle during hindlimb suspension (HS) in mice. [Figure legend: Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 2 weeks and were then subjected to hindlimb suspension for 3 days (HS3) or kept unsuspended (Ctrl). Gastrocnemius muscle mass (mg) represented with means ± SEM. Two-way ANOVA, * padj < 0.05; ns = non-significant.] In contrast, soleus muscle mass was not preserved in mice pre-treated with HF for 2 (Figure 43) or 3 weeks (data not shown). Consistently, the average CSA of soleus muscle fibres was similarly reduced in both untreated and HF-treated mice (Figure 43). We then analysed the distribution of fibre CSA in the soleus and compared it to the gastrocnemius muscle (Figure 43). Overall, HF reduced the proportion of small fibres and increased the proportion of large fibres in both the soleus and the gastrocnemius muscle in control mice (see Paper 7.2 and Figure 43) [423]. However, contrarily to the gastrocnemius muscle (see Paper 7.2) [423], this difference in CSA fibre distribution was not maintained during hindlimb suspension in the soleus muscle (Figure 43). This suggests that muscle fibre type may influence the effect of HF on skeletal muscle and could be specific to the pathophysiological condition and/or the nature of the muscle. It is therefore conceivable that HF treatment is even more successful in preserving muscle mass in glycolytic muscles under catabolic conditions where glycolytic fibres are more likely to atrophy, such as ageing, cancer or glucocorticoid treatment. Finally, to examine whether this slight preservation of gastrocnemius mass and CSA might have had functional muscle benefits, we performed locomotor experiments using the Rotarod and Catwalk devices. The Rotarod test is widely used to assess the effects of drugs on motor coordination and balance, while the Catwalk is used for the quantitative assessment of stepping and motor performance in rodents. These functional measures did not show much difference between control and HS mice with or without HF treatment. Other functional measures could be considered, such as electromyography.
The dual role of ATF4 signalling in skeletal muscle
As mentioned in the state of the art (see section 4.3.2), ATF4 target genes comprise atrogenes, but also genes that may be involved in muscle homeostasis, including autophagy. In our study, we showed that the induction of ATF4 atrogenes was not associated with muscle atrophy during disuse in halofuginone-treated mice and in hibernating bears. For these reasons, we hypothesised that ATF4 may play a dual role in skeletal muscle, either pro-atrophic or pro-homeostatic, and that this may depend on the frequency and duration of its activation. Does halofuginone-induced ATF4 signalling lead to the expression of autophagy-related genes? ATF4 is a transcription factor involved in the transcription of autophagy-related genes in response to various stresses (e.g. amino acid starvation, ER stress) [197,211-214,216,217]. We thus examined the expression of some autophagy-related genes known to be targets of ATF4 [217].
We observed no change in the muscle expression of Atg5, Atg12 or Atg16, whether the mice were treated with HF or hindlimb-suspended (Figure 44) (see Appendix 10.2). However, mRNA and/or protein levels for microtubule-associated protein 1 light chain 3 alpha (LC3) and BCL2-interacting protein 3 (BNIP3), both involved in autophagosome formation, were increased during HS in both H2O- and HF-treated mice [426,427] (Figure 44). Therefore, the uncoupling of ATF4 from atrophy observed in our study in halofuginone-treated mice does not seem to be dependent on autophagy induction. Interestingly, BNIP3, with the help of LC3, can sequester ATF4 into mitophagosomes, leading to ATF4 degradation by mitophagy in response to nutrient deprivation in cancer cells [428]. It would therefore be very interesting to study the cellular localisation of ATF4 upon HF treatment. In addition, RNA sequencing analysis of muscles from HF-treated mice would provide insight into the signalling pathways that may explain the uncoupling between ATF4 atrogene induction and muscle atrophy. [Figure 45 legend: (A) A list of 614 ATF4-related genes was drawn up from the GeneCards web-based portal and their expression was analysed in a model of atrophy resistance (the hibernating brown bear) as described before (Cussonneau et al., 2021 [351]). Biological processes represent the protein-protein enrichment analysis performed with Metascape on the respective down- (green) or up- (red) regulated genes. (B-C) Vastus lateralis relative protein levels were assessed by Western blotting, and the phosphorylated/total eIF2α ratio was calculated after quantification and normalisation using the TGX signal for uneven loading. A representative western blot is shown (see Appendix 10.2). Data are presented as individual values with mean bars (n = 12 bears/season; the same individuals were sampled and analysed in summer and winter). S: summer; W: winter.] Enrichment analysis revealed that ATF4-related genes involved in biological processes such as the heme deficiency response, oxidative stress, endoplasmic reticulum stress and amino acid deficiency were upregulated in atrophy-resistant muscles of the hibernating bear (Figure 45); a minimal sketch of the underlying up/down-regulation filtering step is given at the end of this subsection. In contrast, ATF4-related genes in the biological processes of the aerobic electron transport chain and amino acid biosynthesis were predominantly downregulated (Figure 45). These data suggest that the ATF4 transcription factor is transcriptionally active in hibernating bear muscles. We also observed that the ratio of phosphorylated eIF2α to total eIF2α remained unchanged between hibernating and active bear muscles (Figure 45) (see Appendix 10.2). ATF4 target genes are highly dependent on the intensity and duration of the stress [197,204,429]. When the stress is too severe and sustained, eIF2α is phosphorylated, leading to the ATF4-mediated induction of a pro-apoptotic transcriptional program [202-204]. The maintenance of a low level of phosphorylated eIF2α may avoid any death-like transcriptional response in bear muscles during hibernation, but also suggests an uncoupling between eIF2α phosphorylation and ATF4 transcriptional activity. This is consistent with the decrease in phosphorylated eIF2α protein levels that occurs alongside an increase in the expression of ATF4 target genes during HS in mice (see Paper 7.2) [423]. Therefore, the transcriptional activity of ATF4 may be independent of the level of phosphorylated eIF2α and may depend on other signals in certain situations such as disuse.
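The up- and down-regulation calls summarised above follow the cut-offs given in the Methods (winter/summer fold change > 1.0, adjusted p-value < 0.05). The short Python sketch below only illustrates that filtering step on a generic differential-expression table; the file name and column names (gene, log2FoldChange, padj) are hypothetical placeholders and not the actual layout of the published dataset.

    import pandas as pd

    # Hypothetical table with one row per gene from a winter-vs-summer comparison
    deg = pd.read_csv("winter_vs_summer_deg.csv")   # assumed columns: gene, log2FoldChange, padj

    significant = deg[deg["padj"] < 0.05]                            # adjusted p-value cut-off
    upregulated = significant[significant["log2FoldChange"] > 0]     # winter/summer FC > 1.0
    downregulated = significant[significant["log2FoldChange"] < 0]   # winter/summer FC < 1.0

    print(f"{len(upregulated)} up- and {len(downregulated)} down-regulated genes in winter")

Gene lists obtained in this way can then be submitted to an enrichment tool such as Metascape, as done for the biological processes shown in Figure 45.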
Halofuginone-like compound enriched in bear food
As mentioned above, HF is a synthetic derivative of febrifugine, which has been isolated from the roots and leaves of Dichroa febrifuga and also from Hydrangea [415]. Dichroa febrifuga is mainly found in Asia. However, a certain type of Hydrangea, named Hydrangea macrophylla, occurs in northern and southern Europe, as well as in southern China. Thus, the geographical distribution of this plant overlaps with some of the brown bear habitat areas (Figure 46). Interestingly, the fruiting of this plant occurs from April to September, and hyperphagia of the brown bear occurs before den entry in late October [296]. Given the biological similarity between the muscles of HF-treated mice and hibernating bears (i.e. ATF4 and BMP signalling induction and TGF-β inhibition), it could be envisaged that a halofuginone-like molecule is present in the brown bear's food before it enters hibernation. This molecule/compound could be stored in the adipose tissue and released into the bloodstream during the hibernation period. This hypothesis is part of a larger hypothesis that active circulating compounds may be present in the serum of the hibernating bear, which could explain the general hypometabolism present during hibernation and the consequent preservation of organ functions such as skeletal muscle.
Analysis of microarrays in human muscle cells cultivated with WBS
Prior to the start of this thesis project, human primary myotubes (HM) were cultured with SBS or WBS and a microarray experiment was performed. We analysed these data focusing on the TGF-β/BMP-related genes. Interestingly, we observed common features with hibernating bear muscles, with down-regulation of CCN2, ID1, ID3 and ID4, and up-regulation of MEF2A expression when HM were treated with WBS compared to SBS (Figure 49). Furthermore, we found that NOG (noggin protein), a well-known antagonist of BMP signalling, was also downregulated in HM after WBS treatment (Figure 49), although this was not the case in the hibernating bear muscles (see Paper 6.3) [351]. The downregulation of NOG and CCN2 is a key feature of TGF-β inhibition and promotion of BMP signalling. These data indicate that WBS can induce transcriptional changes in the TGF-β/BMP balance in HM similar to those occurring in hibernating bear muscles, supporting the existence of circulating active compounds in the winter bear serum.
Optimisation of tools for screening active compounds in WBS
We used luciferase reporters to visualise the induction/inhibition of BMP or TGF-β transcriptional activity in muscle cells cultivated with bear serum (see Appendix 10.3). To minimise the need for bear serum, we miniaturised the protocol using the immortalised CCL136 human muscle cell line (a rhabdomyosarcoma cell line). BMP signalling. We first used a BMP response element (BRE) luciferase reporter containing the mouse ID1 promoter responsive region for BMP [430]. We first validated that this pathway is active and that the machinery to transduce the BMP signal from the ligand to the target genes is operational in CCL136 cells (data not shown). Thereafter, we observed a decrease in luciferase intensity when CCL136 cells were treated with WBS for 6 to 24 h compared to SBS (Figure 50). This was confirmed on HM after 24 h of WBS treatment (Figure 50). These data suggest that the downregulation of ID1 observed in hibernating bear muscles in vivo is the result of a compound present in WBS.
Furthermore, this effect of WBS could be direct since it occurred after only 6 h of treatment (Figure 50). TGF-β signalling. We also used a SMAD binding element (SBE) luciferase reporter containing four copies of the SMAD binding site GTCTAGAC, corresponding to the main TGF-β responsive element, in cells treated with WBS or SBS (Figure 50). This preliminary result is very promising as this is the second proof of concept that a compound in the WBS can mimic what happens naturally in winter bear muscles in vivo. Further studies are needed to determine whether this MEF2 signature observed in hibernating bear muscle, in vivo and in human muscle cells cultured with WBS, could be under the control of the TGF-β and/or BMP pathway via a circulating compound.
Perspective on identifying the circulating active compound in WBS
The identification of compounds in hibernating bear serum that have the potential to control the TGF-β/BMP balance is the core of a project recently granted by the ANR, with a PhD student just starting. The prospects are promising. They will include confirmation of the results with the MEF2-Luc reporter in HM, and repetition with more individuals for the ID1-Luc reporter. Is ID1 disconnected from BMP signalling in skeletal muscle? The decrease in the luminescence of the ID1-Luc reporter observed in HM cultivated with WBS is consistent with what we showed in hibernating bear muscles (see Paper 6.3) [351]. However, ID1 is a well-known BMP target gene in bone and cartilage (see Discussion 6.4.2.1); therefore, its reduction raises questions about a possible disconnection between ID1 transcription and BMP transcriptional activity in skeletal muscle. Moreover, ID1 expression is also under the control of other signalling pathways [363]. Whether the decrease in ID1 expression in hibernating bear muscles or HM cultivated with WBS reflects inhibition of TGF-β signalling remains to be explored. Therefore, we will examine SMAD nuclear translocation, as performed in study 2, in HM cultivated with either WBS or SBS. Then, once the genetic signature of BMP signalling in muscle cells has been identified (see Discussion 6.4.2.1), we will design a BMP-Luc reporter with a BMP-dependent gene in muscle to address whether a circulating compound can activate the transcriptional activity of BMP signalling when HM are cultivated with WBS. Is BMP canonical signalling involved? Subsequently, we will decipher the molecular mechanisms capable of inducing BMP signalling when human muscle cells are exposed to winter bear serum. For example, we can examine whether BMP inhibition with silencing RNAs (e.g. siSMAD1) or pharmacological treatments (e.g. noggin) abolishes or limits ID1 down-regulation or MEF2A up-regulation in cells treated with SBS or WBS. This will allow us to find out whether the biological effect observed on muscle cells treated with WBS is mediated by canonical BMP signalling. What is the nature of the active compound in WBS? The project also includes serum fractionation processes (e.g. delipidation, dealbumination, heat denaturation) to determine the nature of the compounds that may be involved in the regulation of the TGF-β/BMP balance in muscle cells. Serum fractionation is already performed by one of our collaborators, Dr Fabrice Bertile, at the IPHC in Strasbourg. This is also why we have miniaturised the reporter assay experiments in 96-well plates using CCL136 cells to test the different fractions and easily read the luminescence of the TGF-β/BMP reporters. In these assays, the signal was measured for 12 seconds and the firefly values were then normalised to the renilla luciferase values.
Therefore, we will be able to investigate the nature of the compound (e.g. lipid, protein, ...) leading to transcriptional modifications of TGF-β/BMP signalling in vitro. Finally, the aim is also to reproduce in vitro and then in vivo the atrophy-resistance phenotype of hibernating bear muscles, using active fractions and/or compounds of bear serum to prevent or reverse atrophy under catabolic conditions (e.g. dexamethasone-induced atrophy of myotubes in vitro).
General conclusion
Some therapies to combat muscle wasting have been developed, including exercise, nutritional interventions and certain medications. However, no effective treatment has been found to completely and safely prevent muscle wasting. Furthermore, despite all its preclinical success, modulation of TGF-β signalling has not translated into the desired effects in humans. The promotion of BMP signalling has received very little attention to date, and the concept that simultaneous fine-tuning of the BMP and TGF-β pathways might be of interest against the development of muscle atrophy has been mentioned in very few papers. The use of a model of natural resistance to muscle atrophy, the hibernating brown bear, has great potential for the discovery of new therapeutic targets for the human clinic. In addition, our comparative physiology strategy between an induced atrophy model and an atrophy-resistant model unveiled new and promising avenues for future treatments. In this thesis project, we performed a transcriptomic analysis comparing hibernating versus active bear muscles to unloaded versus control mouse muscles.
Introduction
Cachexia is a multifactorial syndrome leading to serious clinical complications with high mortality rates and is present in almost all chronic diseases [1]. Besides inflammation and metabolic modifications, skeletal muscle loss is an important factor of cachexia, and limiting muscle wasting is a major challenge for maintaining the well-being of patients, the capacity of the organism to fight against diseases and the tolerance of patients towards challenging therapies like cancer chemotherapies [2]. Muscle homeostasis is mainly driven by the ubiquitin-proteasome system (UPS), which controls signaling pathways, contractile structure, cellular architecture, energy metabolism, protein translation, etc., thus allowing a fine-tuning of skeletal muscle metabolism [3-6]. The UPS is composed of hundreds of proteins and controls protein fate by ubiquitination, a post-translational modification carried out by the E1, E2, E3 enzymatic cascade (see [7] for a review). Ubiquitin (Ub) is covalently attached to the target proteins thanks to the interactions between Ub-conjugating E2 enzymes (35-40 members according to species) and E3 Ub ligases (>600 in humans). Another complexity of the UPS resides in the multitude of Ub signals that can be synthesized on the target proteins, from mono-Ub, multiple mono-Ub, or poly-Ub chains with at least eight different topologies. Each type of Ub modification is dedicated to a specific fate for the target protein, the role of some Ub linkages being still obscure. This Ub code can send the target protein for either proteasome or autophagy degradation or for non-proteolytic purposes (addressing, stabilization, activation, etc.) [7].
Furthermore, the multiple possible combinations between a given E3 and several E2s (and vice versa) further increase the potential of the UPS for controlling cellular metabolism. E3 ligases can be either monomeric or multi-protein complexes and are classified into three families according to their structure and mode of action (recently reviewed in [8]). The first class contains 28 members bearing a C-terminal Homologous to E6-Associated Protein C Terminus (HECT) domain that is necessary and sufficient to accept Ub from an E2 enzyme and to transfer it to the substrate, HECT E3 ligases having their own catalytic activity. Their N-terminal domain is involved in the recognition of the substrate. The second class comprises ≈90% of the E3 Ub ligases, which are known as the Really Interesting New Gene-finger (RING) type. RING domains are defined by eight cysteine and/or histidine residues coordinating four zinc atoms that allow interaction with E2 enzymes. RING-type E3s do not bind Ub, but they serve as a platform for the E2 and the substrate and promote the Ub transfer from the E2 to the substrate. Within multi-protein RING-E3 complexes, also named cullin-containing RING ligase E3s (CRLs), several families of proteins with motifs involved in protein-protein interactions (e.g., the F-box pattern) are responsible for substrate recognition [9]. The third class of E3 ubiquitin ligases is the RING-in-Between-RING (RBR) type, which combines properties of RING- and HECT-type E3s. They utilize an E2-binding RING domain and a second domain (called RING2) that binds Ub before transferring it to the substrate [10,11]. In muscle atrophy, numerous ubiquitinating enzymes have now been identified for their involvement in the regulation of both anabolic and catabolic pathways during the atrophy process, notably by being responsible for the degradation of the contractile proteins [12]. The E3 Ub ligases appear to be at the heart of these regulations, and some of them may prove to be efficient targets for therapeutic drug strategies, with roughly two main approaches: (i) indirect modulation of an E3 ligase by targeting the signals involved in its regulation [13-16] or (ii) direct inhibition of the E3 ligase [17-19]. However, the intertwinement between anabolic and catabolic processes (including the signaling pathways) often makes indirect modulation of E3 ligases difficult, while direct inhibition strategies are limited by the somewhat limited data available on E3 ligases. This review summarizes the signaling pathways implicated in muscle homeostasis and highlights the E3 ligases playing a role in the regulation of skeletal muscle mass and function, excluding the muscle regeneration process, where numerous E3 Ub ligases are also involved. We more specifically focus on the strategies that have already been used for modulating E3 ligase activity, including pharmaceutical drugs or natural compound-based approaches.
Signaling Pathways Regulating Skeletal Muscle Mass and Function
Skeletal muscle homeostasis is controlled by numerous signaling pathways (Figure 1) that act either as anabolic or catabolic factors. Depicting their regulation in detail is beyond the scope of this review, and we just briefly summarize their implication in muscle mass control.
Skeletal muscle hypertrophy via the PI3K/AKT (phosphatidylinositol 3-kinase/protein kinase B) pathway can be induced by nutrients (amino acids, glucose and fatty acids) [20], hormones (insulin) [20,21], growth factors (Insulin-like Growth Factor-1 (IGF-1)) [22,23], and mechanical stimuli (e.g., exercise) [24]. Upon ligand binding, the PI3K/AKT pathway activates mTORC1, which phosphorylates numerous substrates [25,26] that regulate the activation of translation, transcription, ribosome biogenesis, and autophagy [27,28]. AKT also phosphorylates and inactivates GSK3β (a negative regulator of protein translation) [29] and the pro-catabolic FOXO transcription factors (TFs), the latter being crucial inducers of muscle loss upon catabolic situations via the expression of numerous atrophy-related genes [30-33]. Moreover, mTORC1 also inhibits the autophagy induction complex [34]. Intriguingly, mTORC1 can also exhibit adverse effects on skeletal muscle homeostasis upon denervation [35] or ageing [36,37]. In these situations, a negative feedback loop from mTORC1 to AKT was involved, thus favoring FOXO activation and the subsequent expression of proteolytic genes like the atrophy-related E3 ligases MuRF1/TRIM63 and MAFbx/Atrogin-1.
G Protein-Coupled Receptors (GPCRs) and cAMP Signaling
ß2-Adrenergic Receptor Signaling Pathway
Upon stimulation by endogenous catecholamines or synthetic agonists, ß2-Adrenergic Receptors (ß2-ARs) lead to skeletal muscle hypertrophy (Figure 1) through: (i) PKA-mediated expression of genes containing cAMP response elements (follistatin, NR4A3, calpastatin) via CREB [38], (ii) PKA-mediated inhibition of FOXO activity in vivo [39], or (iii) the activation of PI3K/AKT/mTORC1 [40,41], or of both AKT and CaMKII/HDAC4 signaling [42].
WNT/FZD Signaling Pathway
The Wingless-type mouse mammary tumor virus integration site (Wnt) family of proteins induces hypertrophy via the Wnt/ß-catenin and PI3K/AKT/mTORC1 cascades [43,44] (Figure 1). The former controls the transcriptional regulation of growth-related genes (e.g., C-myc and Cyclin 1) via ß-catenin and T-cell factor/lymphoid enhancer factor (TCF/LEF) transcription factors [45,46], whereas the latter regulates the protein synthesis process. The PI3K/AKT/mTORC1 pathway is induced via the specific interaction of the WNT7a (ligand) and FZD7 (receptor) proteins [47-50]. Under mechanical stimulation, WNT is the only pathway able to stabilize ß-catenin and therefore to promote growth-related gene expression [51,52]. Accordingly, therapeutic stimulation of WNT7a/FZD7 by injection of recombinant Wnt7a resulted in a significant increase in muscle strength and reduced contractile damage in mdx mice (a Duchenne Muscular Dystrophy (DMD) model) [49]. By contrast, in dystrophic muscles WNT7a increased fibrosis by inducing transforming growth factor-β2 (TGFβ2) [53], and Wnt activation enhanced the fibrotic response in aged mice [54]. These data suggest that WNT7a has a context-dependent effect in skeletal muscle, thus complicating future therapeutic strategies.
Calcineurin Signaling Pathway
Different downstream effectors have been proposed for calcineurin (Cn) during skeletal muscle hypertrophy, such as NFAT [55], GAT-2 [55] and MEF-2 [56], which seem to be activated during skeletal muscle hypertrophy in a fiber-specific manner [57].
Cn can modulate these TFs and downstream effectors (including the E3 ligases MuRF1/TRIM63 and MAFbx/Atrogin-1) upon several conditions (dexamethasone [58], diabetes [56], exercise [59] or starvation [60]) (Figure 1).
Hippo Signaling Pathway
The Hippo signaling pathway consists of a cascade of kinases that inhibits the transcriptional co-activators YAP and TAZ (Figure 1) (for a review, see [61]). Upon exercise and myostatin/activin inhibition in mdx mice [62], mechanical overloading [63], and following injury or degeneration of motor nerves [64], the expression and phosphorylation of YAP increased [62,63] along with those of other pro-hypertrophy proteins [40]. Furthermore, YAP negatively regulated the myostatin/activin signaling pathway by inhibiting SMAD2/3 transduction and consequently blunted the SMAD-mediated MuRF1/TRIM63 E3-ligase expression [63].
Transforming Growth Factors (TGFs), Pro-Anabolic and Pro-Catabolic Pathways
The transforming growth factor (TGF) multifunctional cytokine family is divided into two subfamilies with opposite outcomes on muscle mass: myostatin/activin/TGF-β are negative regulators of muscle mass, whereas BMPs (Bone Morphogenetic Proteins)/GDFs (Growth and Differentiation Factors) are positive regulators [65]. Myostatin/activin/TGF-β activate the pro-catabolic SMADs 2-3, whereas BMP ligands recruit the pro-anabolic SMADs 1-5-8 and elicit an anabolic transcriptional program (Figure 1). SMAD4 is shared by both pro-anabolic and pro-catabolic SMADs and can be a limiting factor for SMAD downstream effects [45]. Upon myostatin binding, MAFbx/Atrogin-1 and genes involved in the degradation of several anabolic factors (ribosomal proteins, translation initiation factors, MyoD, desmin and vimentin) are up-regulated [49,66], and the AKT/mTORC1 pathway is inhibited [67]. TGF-β signaling also regulates MuRF1/Trim63 expression through the synergistic action of FOXO3a and SMAD3 [68,69] (see [12] for a recent review). Similarly, the Activin A ligand negatively regulates muscle mass by binding to the same receptor as myostatin and by activating the same intracellular pathway [70-72]. Interestingly, the non-canonical TGF-β pathway involving the TAK1-p38 MAP kinase can also be activated under Activin A treatment in cellulo and in vivo, with MAFbx-mediated myotube atrophy [73]. Moreover, TGF-β induces skeletal muscle atrophy through a mechanism dependent on NOX-derived ROS production in vivo [69]. The TGF-β pathway is also known for its master role in fibrosis, which promotes muscle mechanical constraints and injuries [74,75]. Recent reports showed that the canonical NF-κB and angiotensin pathways mediate the TGF-β effects in cellulo and in vivo [76]. Conversely, the BMP pathway regulates hypertrophy by repressing the E3 ligases MUSA1/Fbxo30 [77], MAFbx/Atrogin-1, and MuRF1/Trim63 [78,79], and through the positive modulation of mTORC1 and consequently protein synthesis [80]. Additionally, the long non-coding RNAs Myoparr and Chronos negatively modulate the BMP pathway (and muscle mass) by repressing Gdf5 [81] and Bmp7 [82], respectively. Altogether, a major conceptual idea is that the net balance between the TGF-β and BMP pathways plays a major role in determining skeletal muscle mass.
Catabolic Pathways
AMPK Signaling Pathway
The adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK) is an energy sensor that preserves energy by turning on catabolic pathways and turning off ATP-consuming anabolic pathways [83-85].
In skeletal muscle, AMPK inhibits protein synthesis through the reduction of mTORC1 signaling and favors contractile protein breakdown via the activation of the FOXO1 and FOXO3a TFs (Figure 1) [86]. Consequently, the MuRF1/TRIM63 and MAFbx/Atrogin-1 E3 ligases target different proteins involved in muscle contraction and protein synthesis initiation for UPS-dependent degradation [86,87]. Additionally, AMPK also promotes skeletal muscle autophagy [88]. The NF-κB Signaling Pathway NF-κB, a major pro-inflammatory transcription factor, is considered one of the main effectors of muscle atrophy via the regulation of UPS-related protein expression [89][90][91][92][93][94][95][96]. Indeed, the NF-κB pathway is consistently upregulated upon catabolic conditions, both in mouse models [89,97,98] and in patients suffering from chronic obstructive pulmonary disease (COPD) [99] or chronic heart failure (CHF) [100]. A hypertrophic response is also observed in myotubes when blunting NF-κB activation upon catabolic TNFα exposure [93]. In addition to TNFα induction of NF-κB signaling, other proinflammatory cytokines (such as IL6 and TWEAK), bacterial products, growth factors, ROS, genotoxic stress, and viruses can activate this pathway [101]. Interestingly, for controlling the proper signaling, the NF-κB pathway comprises several E3 ubiquitin ligases (TRAF6 [95,102,103], cIAP1 [19,104], LUBAC [95,105], SCF β-TRCP [105,106]) that represent several opportunities for future potential therapies (Figure 2). Glucocorticoid Receptor Signaling Pathway Glucocorticoids (GCs) are endogenous stress hormones involved in modulating inflammation [107]. GCs are well known for their catabolic effects on skeletal muscle [108] and can exert their action via different mechanisms (Figure 1). In skeletal muscles, GCs mainly operate through the glucocorticoid receptor (GR), which interacts with specific DNA sequences, DNA-bound TFs as well as transcriptional co-regulatory proteins, which modulate the transcription of numerous genes [108-110] like MuRF1/Trim63, MAFbx/Atrogin-1, Foxo transcription factors, the myokine Gdf8, Klf15, Redd1 and Sesn1 [110]. Intriguingly, the effect of GCs on muscle mass is dependent on the type of GC, fiber type composition, muscle type, sex and dose, but also on the type of catabolic situation (e.g., starvation, diabetes, sepsis, cancer cachexia, etc.) (for details, refer to [110,111]).
Recent works at least partly explained these differential effects by the capacity of GCs to use different signaling pathways, such as IGF-1/PI3K/AKT, MEK/ERK, Myostatin [112], NOTCH [114], or to depend on co-factors such as connexin-based hemichannels [115], high-fat diet [116], oxidative stress [111] or mechanical load [51,52,117]. Angiotensin Signaling Pathway Angiotensin (Ang) is a peptide hormone that upon enzymatic processing [118] renders different variants like Ang-II and Ang-(1-7) (Figure 1), which can either be linked to catabolic conditions (Ang-II) [118][119][120][121][122][123] or counteract muscle atrophy (Ang-(1-7)) [124][125][126][127][128]. However, Ang-II can also exhibit anticatabolic properties, but only in some circumstances [127,129]. High levels of Ang-II have been associated with skeletal muscle atrophy in CHF, CKD, and SARS-CoV-2 pathologies [120,130]. Ang-II-induced atrophy was also linked to increased proteasome activity [131], elevated polyubiquitinated protein conjugates [132], and early and transient accumulation of MuRF1/Trim63 and MAFbx/Atrogin-1 mRNA [119,123]. Therefore, the differential modulation of the enzymes processing Ang may be a promising approach for improving skeletal muscle atrophy.
JAK/STAT Signaling Pathway In skeletal muscle, the Janus Kinase/Signal Transducers and Activators of Transcription (JAK/STAT) pathway has been reported to be essential for transducing signals from growth factors and IL-6, among others (for a recent review, see [133,134]). STAT3, one of its effectors (Figure 1), is particularly implicated in skeletal muscle atrophy upon disease (recently reviewed elsewhere [135]), notably through the development of skeletal muscle insulin resistance in Type 2 diabetes mellitus [136,137], the induction of myostatin [138], caspase-3 [139] and the UPS [14], and increased mitochondrial ROS [140]. Kinin Signaling Pathway Kinins are a group of peptides that act via inducible (B1) or constitutive (B2) receptors [141]. Through B1 receptors, kinins participate in muscle atrophy by blunting the PI3K/AKT/mTORC1 axis and by stimulating the IKK/NF-κB pathway (Figure 1) [142]. Genetic or pharmacologic ablation of the B1 receptor protects skeletal muscles from atrophy in androgen-sensitive mice, mainly by blunting MuRF1/Trim63 expression [142]. The role of kinin B2 receptors is more controversial, as they may either be pro-catabolic via activation of myostatin signaling [143] or pro-anabolic [144]. Therefore, kinin receptors may regulate muscle mass, but more studies are clearly needed before they become potential targets to modulate muscle atrophy. Sphingolipid Signaling Pathway The sphingomyelin pathway plays a role in skeletal muscle mass through the hydrolysis of plasma membrane sphingomyelin (SM) and the subsequent formation of ceramide and sphingosine-1-phosphate (S1P) (Figure 1). Ceramide is linked to muscle atrophy through, among other mechanisms, the reduction of protein synthesis [145][146][147][148]. In NOTCH signaling, the Notch Intracellular Domain (NICD) translocates to the nucleus (Figure 1) and binds directly to the MuRF1/Trim63 promoter to activate its transcription, thereby establishing NOTCH signaling as a proteolysis inducer [161]. Oxidative Stress Is an Inducer of Skeletal Muscle Atrophy Oxidative stress is characterized by increased levels of reactive oxygen species (ROS) and/or reactive nitrogen species (RNS) and is a well-known mechanism of atrophy induction in skeletal muscle under several conditions and proteolytic mechanisms (reviewed elsewhere [162,163]) (Figure 1). Both ROS and RNS negatively impact muscle mass during COPD [164,165]. ROS induce a FOXO1-dependent MuRF1/Trim63 and MAFbx/Atrogin-1 overexpression in COPD peripheral muscle cells in cellulo [166]. NOS activation was suggested to occur through inflammation and hypoxia in COPD patients with low body weight via an activation of NF-κB and iNOS-generated RNS [99]. Besides increased protein breakdown, a decrease in protein synthesis via AKT/mTORC1 also contributes to the loss of muscle mass induced by ROS [162]. Importantly, depending on the type, duration and intensity of the imposed stress, specific signaling mechanisms are activated [162,163,166-169], indicating that the underlying mechanisms by which oxidative stress contributes to muscle wasting are context-dependent. One strategy to fight against atrophy may be to stimulate the anabolic pathways leading to skeletal muscle hypertrophy. Insulin-like growth factor 1 (IGF1) induces skeletal muscle hypertrophy by activating the IGF1R/PI3K/AKT pathway, a critical mediator and checkpoint being IRS1. Indeed, the effect of IGF1 is time-limited by the phosphorylation of IRS1 by IGF1R and its subsequent ubiquitination and proteasome-mediated degradation.
E3 Ligases Involved in the Regulation of IRS1 Different E3 ligases can target IRS1 in different tissues. For example, in embryonic fibroblasts, the CUL7 E3 ligase, containing FBXW8, has been shown to target IRS1 for ubiquitin-dependent degradation [170]. In skeletal muscle, Casitas B-lineage lymphoma-b (CBL-B), a RING E3 ligase, targets IRS1 for degradation and thus impairs muscular trophic signals in response to unloading conditions [171][172][173], which inhibits downstream IGF1 signaling [173] (Figure 2 and Table 1). Accordingly, mice deficient for CBL-B were partly resistant to unloading-induced skeletal muscle atrophy and dysfunction [173]. These results highlight the importance of CBL-B in the process of muscle atrophy in response to unloading. FBXO40 is a muscle-specific F-box protein [174] and a component of an SCF (Skp1-Cullin1-F-box protein) E3 ligase complex. Following IRS1 activation, IGF1R phosphorylates IRS1, leading to its ubiquitination by FBXO40 and its degradation by the 26S proteasome, in cultured myotubes and in mice [22,175]. FBXO40 expression is decreased in muscles from Limb-girdle muscular dystrophy (LGMD) patients, and up-regulated in mouse skeletal muscle following denervation and in a chronic kidney disease (CKD) mouse model, but not during starvation [174,175]. Accordingly, the knock-down of Fbxo40 resulted in thicker myotubes (20% to 50% increase in diameter) [22], and its deletion in mice also induced muscle hypertrophy during the growth phase, a phase associated with high IGF1 levels [22] (Figure 2 and Table 1). IRS1 is thus an important checkpoint of the IGF1/PI3K/AKT pathway controlled by at least two E3 ligases (CBL-B and FBXO40). Although IRS1 is an attractive target for fighting against muscle atrophy, the multiple ways of degrading it may complicate the development of drugs. 3.1.2. NEDD4-1 E3 Ubiquitin Ligase, Friend or Foe? In muscles undergoing atrophy, NEDD4-1 mRNA levels are elevated upon severe sepsis [191], denervation or unloading [178,192,193]. On the one hand, the NEDD4-1 E3 Ub ligase targets phosphatase and tensin homologue (PTEN). PTEN is a redox-sensitive phosphatase that negatively regulates the PI3K-AKT signaling pathway, thereby affecting metabolic and cell survival processes. The deletion of PTEN improves muscle mass and function in a mouse model of Duchenne muscular dystrophy [194]. PTEN inhibition may thus also represent a potential therapeutic strategy to maintain muscle function during catabolic situations. The over-expression of NEDD4-1 is sufficient for activating PI3K/AKT signaling in cardiac muscle following myocardial ischemia/reperfusion (I/R) [176]. However, the negative regulation of PTEN by NEDD4-1 remains to be confirmed in skeletal muscle, especially since NEDD4-1 has also been shown to promote skeletal muscle atrophy in a denervation model. Indeed, NEDD4-1-KO mice exhibited increased weights and type II muscle fiber cross-sectional areas in denervated gastrocnemius muscle [178]. Moreover, NEDD4-1 also negatively regulates the hypertrophic BMP signaling (Figures 1 and 2). Indeed, NEDD4-1 ubiquitinates phosphorylated SMAD1, leading to its proteasomal degradation and thereby silencing BMP signaling in C2C12 myoblasts; conversely, the knock-down of Nedd4-1 potentiates the BMP signal through upregulation of phospho-SMAD1 [195]. Altogether, the exact function of NEDD4-1 in skeletal muscle is still obscure and needs more work.
Among the E3s involved in the regulation of the NF-κB pathway, two promising candidates may be manipulated to limit muscle atrophy, namely cIAP and TRAF6 (Figures 1 and 2). cIAP1 is up-regulated in denervated gastrocnemius muscle, paralleling the upregulation of MAFbx/atrogin-1 and MuRF1/Trim63 mRNA [19]. Mice with genetic ablation of cIAP1 (cIAP1-KO mice) displayed limited denervation-induced atrophy in TA, gastrocnemius and EDL muscles. This was correlated with the blunting of the denervation-induced upregulation of MAFbx/Atrogin-1 and MuRF1/Trim63 [19]. The authors further demonstrated that cIAP1 induced atrophy through the up-regulation of the canonical NF-κB signaling. Conversely, cIAP1 overexpression in myotubes induced atrophy and the strong up-regulation of MAFbx/Atrogin-1 and MuRF1/Trim63 protein expression [19]. The E3 Ub ligase cIAP1 thus represents a potential therapeutic target, at least for fighting against denervation-induced muscle atrophy. TRAF6 E3 Ubiquitin Ligase TRAF6 is a RING-type Ub ligase that plays an important role during skeletal muscle atrophy. TRAF6 expression is enhanced during starvation or in age-induced muscle atrophy [179,196,197]. Traf6-KO mice are resistant to skeletal muscle loss (rescue of myofibril degradation, preservation of myofiber size and strength) induced by denervation, cancer cachexia, starvation or Dex, and a concomitant suppression of the expression of key regulators of muscle atrophy was observed, including MAFbx/Atrogin-1, MuRF1/TRIM63, p62, Lc3b, Beclin1, Atg12, and Fn14 [179,180,196-198]. Moreover, inhibition of Traf6 expression through miR-351 administration in C2C12 myotubes or in denervated mice attenuated Dex-induced muscle atrophy and concomitantly decreased the expression of MAFbx/Atrogin-1 and MuRF1/Trim63 [199,200]. Overexpression of miR-125b targeted Traf6 for degradation and protected skeletal muscle samples from atrophy in starved myotubes or in denervated rat tibialis muscle [201]. The implicated mechanisms involved both direct and indirect effects of TRAF6 on protein breakdown, with TRAF6-mediated ubiquitination being required for the optimal activation of the JNK, AMPK, FOXO3, and NF-κB catabolic pathways in muscle [202]. In humans, gastric cancer patients suffering from cachexia exhibited an upregulation of TRAF6 associated with an upregulation of ubiquitination in the rectus abdominis muscle [203]. Altogether, this highlights the relevance of targeting TRAF6 inhibition to counteract muscle atrophy. WWP1 in the Regulation of Muscle Atrophy WWP1 is a HECT E3 ligase that is involved in chicken muscular dystrophy. Indeed, a missense mutation in the gene encoding WWP1 was identified as the most promising candidate responsible for chicken muscular dystrophy (MD), potentially affecting the E3 function of the WWP1 protein [204]. WWP1 was also shown to target the transcription factor KLF15 [181]. In response to glucocorticoids, KLF15 is up-regulated at the mRNA level [205]. This induction leads to the up-regulation of MAFbx/Atrogin-1 and MuRF1/Trim63 expression, likely in cooperation with a FOXO transcription factor, while inhibiting the anabolic mTORC1 [205]. Likewise, exogenous KLF15 expression in myotubes and in TA muscle leads to myofiber atrophy [205]. It has recently been shown that KLF15 protein expression was upregulated in skeletal muscle of diabetic mice, without any change in its mRNA expression [181].
This increase correlated with an increase in MAFbx/Atrogin-1, Murf1/Trim63 and Foxo3 gene expression and, accordingly, the muscle-specific deletion of Klf15 in this model protected from diabetes-induced muscle atrophy [181]. The authors identified WWP1 as an E3 ligase targeting KLF15 and showed that knocking down WWP1 in both C2C12 myotubes and tibialis anterior muscles increased MuRF1/Trim63 and MAFbx/Atrogin-1 expression and induced atrophy [181] (Figure 2). The WWP1 E3 ligase is indeed induced by high-glucose conditions in myotubes [206]. Conversely, in high-glucose conditions, WWP1 has also been implicated in the down-regulation of AMPKα2 protein levels [206]. The authors have shown that WWP1 interacted with AMPKα2, leading to a proteasome-dependent decrease of AMPKα2 in myotubes; however, direct ubiquitination was not addressed [206]. WWP1 may thus control muscle mass through a direct action on AMPK, a known modulator of FOXO3a, MuRF1/TRIM63 and MAFbx/Atrogin-1 [88]. TRIM32 in the Regulation of Autophagy TRIM32 is a RING E3 Ub ligase whose mutation is responsible for the development of limb girdle dystrophy 2H (LGMD2H) [207]. Several substrates have been identified for TRIM32 in non-muscle cells, including cell cycle regulators (c-Myc, MYCN, p53), the cell growth and transformation factor ABI2 and PIASY (a SUMO E3 ligase). TRIM32 is also involved in the targeting of factors influencing myogenesis (NDRG2 and TRIM72) that regulate muscle satellite cell renewal and differentiation [208]. While initially postulated to promote muscle atrophy, TRIM32 is in fact a master regulator of myogenesis during recovery situations [208]. Indeed, the dystrophic phenotype of TRIM32 mutations appeared to be largely due to impaired myogenesis [208][209][210]. More recently, TRIM32 was implicated in the early events leading to autophagy. Indeed, TRIM32 targets ULK1, a Ser/Thr protein kinase (Figures 1 and 2). ULK1 is an upstream regulator of autophagy that is rapidly activated to ensure a prompt response to stress conditions [211]. The authors showed that TRIM32 deficiency was directly responsible for autophagy defects, both in cultured cells and in mice treated with Dex. The mechanism by which TRIM32 controls the activation of autophagy through ULK1 involves its binding to AMBRA1, a positive regulator of autophagy [211]. AMBRA1 is a pivotal factor able to bind several E3 ligases during the course of the autophagy process. In the presence of AMBRA1, TRIM32 binds to ULK1 and synthesizes unanchored K63 Ub chains that activate ULK1 kinase activity, thus promoting autophagy. The role of TRIM32 during the autophagy process is not limited to ULK1, as p62, an important autophagy receptor [212], is also a TRIM32 substrate. p62 activity is modulated by multiple mono-Ub events catalyzed by TRIM32, and loss of function of TRIM32 largely abolished autophagy [213]. Altogether, TRIM32 appears as a master regulator of muscle renewal through the initiation of autophagy. FOXO Transcription Factors Are Regulated by MDM2 and SKP2 E3 Ubiquitin Ligases In addition to phosphorylation, FOXO can be regulated by acetylation/deacetylation, methylation and ubiquitination, which modulate its activity, localization as well as degradation [214][215][216]. Ubiquitination modulates FOXO activity by either mono- or polyubiquitination through the MDM2 and SKP2 E3 Ub ligases (Figures 1 and 2). MDM2 is the enzyme responsible for the addition of a single ubiquitin moiety to FOXOs, specifically to FOXO4, thus allowing its nuclear localization and transcriptional activation [217,218].
Mono-Ub of FOXO4 is observed under oxidative stress conditions and can be counteracted by deubiquitinating enzymes such as ubiquitin-specific protease 7 (USP7). Importantly, ubiquitination mediated by MDM2 is context-specific and, upon growth factor stimulation, can induce FOXO1 and FOXO3 degradation [217]. In addition, the interaction between FOXOs and SKP2, a subunit of the SKP/cullin 1/F-box protein E3 ligase, leads to proteasomal degradation of FOXO1 in the cytosol [218]. Combined with the other posttranslational modifications, ubiquitination allows FOXOs to integrate information arising from insulin, growth factors, cytokines, and oxidative stress and to control downstream signaling. Interestingly, FOXO TFs have systematically been envisioned as crucial drivers of catabolic pathways during muscle wasting. Nonetheless, recent work showed that Muscle-specific RING finger protein 1 (MuRF1), also named TRIM63, is a RING-type E3 ligase and a founding member of the so-called "atrogenes" (see [6] for a recent review). MuRF1/TRIM63 is a master regulator of the development of skeletal muscle atrophy occurring in numerous catabolic conditions, and MuRF1/Trim63 mRNA appeared to be upregulated in more than 25 atrophying situations [6] (Figures 1 and 2). Mice deleted for MuRF1/TRIM63 (MuRF1-KO mice) were partially resistant (preservation of muscle mass and structure) to skeletal muscle atrophy induced by denervation [4], hindlimb suspension [4,223], glucocorticoids [224], amino acid deprivation [225], and acute lung injury [226]. MuRF1/TRIM63 is responsible for the coordinated breakdown of both thick and thin filaments occurring during catabolic states in skeletal muscle, targeting the main proteins of the contractile apparatus for degradation: myosin heavy chains (MHC) [227], alpha-actin [228], troponin I [229] and TCAP/telethonin [230]. During denervation and starvation, MuRF1/TRIM63 has also been involved in the degradation of the acetylcholine receptor (CHRN), the major postsynaptic ion channel of the neuromuscular junction. This degradation is mediated by the activation of selective autophagy and degradation of CHRN, likely via the degradation of BIF-1 (Bax interacting factor 1)/EndoB1 (EndophilinB1) and/or SQTM1/p62 (sequestosome-1) [231,232]. While numerous studies have promoted a major role of MuRF1/TRIM63 in the development of skeletal muscle atrophy during catabolic states, in the heart, the analyses of MuRF1 mutants have highlighted a beneficial cardioprotective role [233]. These opposite roles in the two types of muscle imply the development of skeletal muscle-specific drugs to inhibit MuRF1/TRIM63. Moreover, one should also take into account that MuRF1/TRIM63 has two homologs, MuRF2 and MuRF3, which share some redundant functions and could replace its role [12]. MAFbx/Atrogin-1/FBXO32 The multimeric E3 ligase MAFbx/atrogin-1/FBXO32 is another founding member of the atrogene family (see [6] for a recent review) crucial for the development of muscle atrophy. Interestingly, nearly all catabolic situations induce an overexpression of both MAFbx/Atrogin-1 and MuRF1/TRIM63, which are controlled by the same TFs (FOXO1/FOXO3a, NF-κB, C/EBPβ, Smad3, etc.) and the same signaling pathways [234] (Figures 1 and 2). In contrast with MuRF1/TRIM63, which directly targets contractile proteins such as MHC for degradation, MAFbx appears to target pro-anabolic factors like MyoD, myogenin or eIF3f [235][236][237]. MyoD is a muscle-specific transcription factor that plays crucial roles during the cell cycle and muscle differentiation [238].
The eukaryotic initiation factor 3 subunit f (eIF3f) is a pivotal element of protein synthesis, and its control by MAFbx allows the latter to master the anabolic processes [235]. While a putative role of MAFbx/Atrogin-1 on sarcomeric proteins was hypothesized using an indirect approach, this has never been confirmed [239]. By contrast, the authors found that desmin, a main component of the intermediate filaments, physically interacted with MAFbx and was degraded in myostatin-treated cultured C2C12 myotubes. As MAFbx/Atrogin-1 and MuRF1/TRIM63 are controlled by similar signaling pathways, the strategies for the upstream control of MuRF1/Trim63 expression are generally also valid for MAFbx/Atrogin-1 (Table 2). By contrast with MuRF1/TRIM63, no direct inhibitor of MAFbx/Atrogin-1 has been described so far, but general strategies, like targeting the interface responsible for substrate recognition or impeding the assembly of the F-box (i.e., the subunit recognizing the substrates) into the SCF complex, may prove to be efficient. Altogether, concomitantly controlling the MAFbx/Atrogin-1 and MuRF1/TRIM63 E3 ligases allows skeletal muscle cells both to increase the degradation of the contractile apparatus and to depress the protein synthesis machinery, which allows a tight regulation of protein homeostasis. PARKIN Controls Muscle Mass through the Maintenance of Mitochondrial Homeostasis PARKIN is an E3 ubiquitin ligase implicated in the regulation of mitophagy, a quality control process in which defective mitochondria are degraded. Mitochondrial quality control, through both mitochondrial turnover and dynamics, plays an essential role in the maintenance of muscle mass (see [240] for a review). During mitophagy, PARKIN ubiquitinates several outer mitochondrial membrane proteins, leading to subsequent autophagosomal engulfment and lysosomal degradation (Figures 1 and 2). This role of PARKIN has been emphasized in rodent models or in humans where a deregulation of PARKIN mRNA and/or protein expression prevailed in response to catabolic or anabolic situations. An accumulation of PARKIN protein prevailed during: (i) muscle wasting situations such as chronic kidney disease [241], chronic obstructive pulmonary disease (COPD) [242] and physical inactivity [243,244], and (ii) exercise training [245,246]. Conversely, PARKIN mRNA or protein levels decrease in skeletal muscles from some elderly populations, perhaps related to the loss of muscle mass and poor physical function, e.g., physically inactive frail older women [247,248] or gastric cancer patients with cachexia [249]. In the last two years, many studies using loss/gain-of-function models have provided insight into the role of PARKIN in skeletal muscle. Loss-of-function mouse models pointed out the essential role of PARKIN in basal conditions for the maintenance of (i) mitochondrial function [250,251] and (ii) skeletal muscle mass and normal contractile function [184,251]. Such studies also reported that PARKIN helps resist some drug-induced muscle damage [252] and is required for exercise-induced mitophagy flux and for the accumulation of functional mitochondria following muscle adaptations to training [250]. In addition, these loss-of-function studies also highlighted that PARKIN-mediated mitochondrial clearance contributes to proteasome activation during denervation in atrophied slow-twitch muscles [253].
On the flip side, gain-of-function studies showed that PARKIN overexpression in mice: (i) attenuates the ageing-related and the sepsis-induced muscle wasting and causes hypertrophy in adult skeletal muscle, (ii) increases mitochondrial content and enzymatic activities and (iii) protects from ageing-related increases of oxidative stress markers, fibrosis and apoptosis [185,186]. It is very likely that this role of PARKIN in controlling muscle mass has been evolutionarily conserved. Indeed, similar observations were also reported in the fruit fly model: Parkin deficiency in Drosophila leads to severe degeneration of the flight muscles with accumulation of swollen mitochondria [254], whereas Parkin overexpression promotes mitophagy in older muscles and extends lifespan. Together, these studies clearly indicate that PARKIN is an important player in the control of muscle mass through its role in the maintenance of mitochondrial homeostasis. This makes it a potential therapeutic target of interest for preserving muscle mass or fighting against atrophy. Nevertheless, the regulation of PARKIN can be very different according to the physiological or pathological situation or during ageing. Further investigations should enable defining how this actor could be a target of interest according to the population considered. MUSA1/FBXO30 FBXO30, also called muscle ubiquitin ligase of the SCF complex in atrophy-1 (MUSA1), is an F-box protein forming an SCF complex with SKP1, Cullin1 and ROC1 [77]. Proteins targeted by MUSA1 remain undefined, but its inhibition in denervated muscles remarkably reduces muscle atrophy and almost completely reverts the strong atrophic phenotype of Smad4-KO mice [77] (Figures 1 and 2). In muscle, Musa1 expression is upregulated in atrophic muscles of mice undergoing CKD [255] or sepsis [256]. FBXL21 Very recently, a new E3 ubiquitin ligase involved in muscle function control has emerged, FBXL21 [188]. FBXL21 forms an SCF E3 ligase complex and was first identified as a clock-controlled E3 ligase modulating circadian periodicity via subcellular cryptochrome degradation [257]. Accordingly, in mice, the Psttm mutation, corresponding to a hypomorphic mutation of FBXL21 with reduced FBXL21 activity, caused circadian period shortening [257]. Further studies of these mice revealed that they also displayed skeletal muscle deficiencies, with a decrease in fiber CSA (gastrocnemius) and impaired exercise tolerance and grip strength for both forelimbs and hindlimbs [188]. The authors nicely demonstrated the circadian degradation of the cytosolic TCAP/Telethonin by FBXL21 (Figure 2), under the control of GSK-3β. They reported that GSK-3β phosphorylated both FBXL21 and TCAP, leading to FBXL21-CULLIN1 complex formation and phosphodegron-dependent TCAP turnover. Ubiquitin Ring-Type E3 Ligases (UBR) Ubiquitin Ring-type (UBR, also referred to as E3α) proteins are RING finger E3 ligases that compose a seven-member family and that mainly recognize their substrates through the N-end rule pathway [258]. A first member, UBR2/E3alpha-II, has been shown to be significantly induced in skeletal muscle, in two different animal models of cancer cachexia, at the onset and during the progression of muscle wasting [259]. However, its exact function and importance in skeletal muscle maintenance during catabolic states have not been further studied. UBR4 is overexpressed in skeletal muscle from fasted mice, and genetic ablation of UBR4 preserves muscle mass in tumor-bearing mice [189] (Table 1).
Intriguingly, the protection of UBR4 knockout against tumor-induced atrophy was limited to type IIA fibers. In contrast, UBR5 has been implicated in muscle hypertrophy [260] and reported to be at least partially associated with the proteasome [261]. Recently, several members of the UPS have been described as UBR4 substrates, including an E2 (UBE2B, an abundant muscle E2), several E3 ligases, proteins involved in chromatin remodeling, etc. [189]. As the main UBR4 targets are positive regulators of muscle growth, the authors concluded that UBR4 acts as a negative regulator of muscle hypertrophy. 3.3.7. FBXO21/SMART FBXO21/SMART forms an SCF complex with Skp1, Cullin1 and Roc1 in skeletal muscle and has been shown to promote atrophy during denervation [187]. Indeed, the authors showed that FBXO21/SMART upregulation was required for atrophy, while its knockdown in TA muscle protected denervated muscles from atrophy (Table 1), probably due to a global reduction of protein ubiquitination [187]. FBXO21/SMART might therefore be a new critical E3 to target to limit skeletal muscle atrophy. Further work should determine whether this E3 is crucial for the development of atrophy in other catabolic conditions and what the mechanisms involved are. Promising E3 Ubiquitin Ligases Regulating Muscle Mass and Function Other E3 ubiquitin ligases are also promising putative targets for maintaining muscle mass and function, if we rely on what has been published in other organs or organisms. For example, the SIAH-1 RING E3 ligase has been identified in the same RNAi screen as UBR4, performed to identify ubiquitin-related enzymes that regulate myofiber size using the fruit fly Drosophila [189]. In Drosophila, SIAH1 knock-down led to muscular hypertrophy while its overexpression led to atrophy [189]. It is noteworthy that, in space-flown rats, SIAH1 mRNA expression has been shown to be upregulated, also suggesting a putative role in this process in mammals [172]. However, in mammals two isoforms, SIAH1 and SIAH2, are expressed in muscle and could share redundant functions [189]. SMURF1, a HECT ubiquitin ligase, interacts with SMAD1 and SMAD5 (BMP pathway) and, in a certain context, with SMAD4, leading them all to proteasomal degradation in vitro [262]. Moreover, it can degrade the main TGF-β receptor through an indirect recruitment to the receptor by SMAD7, leading to receptor degradation [263]. In COPD leading to muscle atrophy, TGF-β signaling is abnormally up-regulated, and this is negatively correlated with SMURF1 expression. This highlights that the inhibitory effect of SMURF1 over TGF-β is needed for muscle homeostasis [264]. The carboxyl terminus of Hsc70-interacting protein (STUB1/CHIP) serves as an E3 ubiquitin ligase. This E3 plays a dual role in BMP/TGF signaling. Overexpression of CHIP inhibits a TGF-β luciferase reporter through the ubiquitination and degradation of SMAD3, and conversely silencing it increases signal transduction in HEK293T cells [265]. In cellulo experiments showed that CHIP also mediates SMAD1-5 poly-ubiquitination and subsequent degradation to terminate BMP signaling [266]. In muscle, CHIP is highly expressed. For instance, Chip−/− mice at 6 months show muscle morphological changes consistent with increased sarcoplasmic reticulum compartments in quadriceps and gastrocnemius muscles, resulting in damage and a switch in fiber composition [267]. To our knowledge, no studies have shown the implication of CHIP in TGF/BMP signaling-mediated muscle atrophy.
TRIM62 belongs to the TRIM/RBCC family. This enzyme acts as a negative regulator of TGF-β signaling by binding to SMAD3 and promoting its ubiquitination and degradation, resulting in a decrease of TGF-β/SMAD3 target genes in HEK and human mammary epithelial cells [268]. TRIM62 is increased in the skeletal muscle of patients with intensive care unit-acquired weakness (ICUAW), a devastating illness characterized by loss of muscle mass [269]. In this context, the authors proposed that TRIM62 contributes to inflammation-induced muscle atrophy through the IL-6 pathway. Indeed, Trim62 knock-down inhibited LPS-induced IL-6 expression in C2C12 cells [269]. TRIM72/MG53 is a muscle-specific E3 ligase, also called mitsugumin 53, specifically expressed in the plasma membrane of skeletal muscle, and has a critical role in membrane repair. Membrane repair deficiency causes muscle cell death, injury, and dystrophy. Accordingly, the overexpression of human TRIM72 in a hamster model of genetic muscular dystrophy protects against skeletal muscle damage through enhancement of membrane repair [270]. Similarly, short-term TRIM72 injection ameliorates the underlying defects in dysferlin-deficient muscle by increasing sarcolemma membrane integrity [271], while Trim72−/− mice develop significant skeletal muscle myopathy and cardiovascular defects due to defective sarcolemma repair [272]. Current Treatments/Potential Modes of Action The importance of maintaining muscle mass, together with the discovery of several E3 ligases implicated in muscle homeostasis, has rapidly led to multiple approaches to chemically alter the expression of these enzymes. This includes chemical drugs but also several natural molecules that have been tested for their ability to modulate the UPS and more particularly the E3 ligases (Table 2). As E3 ligases are controlled by several signaling pathways, one possibility that was first addressed was to block these signals. The PI3K-AKT-mTORC1 axis is known to control muscle mass by directly acting on FOXO transcription factors, the latter being master regulators of several E3 ligases, like MAFbx/Atrogin-1, MuRF1/TRIM63, MUSA1, SMART and FBXO31, during several atrophy situations [187]. As such, clenbuterol (Table 2 and Figure 2), an activator of the AKT-mTORC1 pathway, is able to decrease MuRF1/Trim63 and MAFbx/Atrogin-1 expression in denervated or hindlimb-suspended rats and to partially preserve muscle mass [273]. Glucocorticoids Glucocorticoids are potent manipulators of muscle mass, and the glucocorticoid receptor antagonist RU486 proved to be efficient in rats for blocking the dexamethasone (Dex)-induced induction of MuRF1/Trim63 and MAFbx/Atrogin-1, the main regulators of muscle mass [13] (Table 2 and Figure 2). Similarly, the authors demonstrated that blocking TNFα with the TNF-binding protein (TNFBP) was efficient for blunting LPS-induced expression of MuRF1/Trim63 and MAFbx/Atrogin-1. However, when sepsis was induced by cecal ligation and puncture, neither RU486 nor TNFBP was able to counteract the overexpression of MuRF1/Trim63 and MAFbx/Atrogin-1, indicating that multiple signals were activated by sepsis. This points out the difficulty of treating complex catabolic signals in vivo. Infliximab is an anti-TNF-α agent able to lower the downstream NF-κB signaling. In patients suffering from Crohn's disease, treatment with infliximab was able to ameliorate muscle atrophy but, although hypothesized by the authors, the expression of MuRF1/Trim63 or any other E3 ligase was not addressed [274].
IL-6 is another inflammatory cytokine that can be implicated in muscle wasting conditions like muscle disuse [275]. Increased IL-6 in tail-suspended mice paralleled skeletal muscle atrophy and was accompanied by increased levels of MuRF1/Trim63 and MAFbx/Atrogin-1. The inhibition of the IL-6 receptor by hydroxymethyl butyrate (HMB, a metabolite of leucine) or vitamin D tended to decrease IL-6 levels and, when combined, HMB and vitamin D exhibited better efficiency for blunting IL-6 production [275] (Table 2 and Figure 2). By contrast, each molecule was sufficient for decreasing MuRF1/TRIM63 and MAFbx/atrogin-1 levels and attenuating muscle atrophy. While the authors attributed the beneficial effects of HMB and vitamin D to the IL-6 receptor, using a monoclonal antibody directed against the IL-6 receptor (MR16-1) proved to be inefficient, as only MuRF1/Trim63 expression was decreased with no amelioration of muscle mass. As for TNF-α, this work underscores the multiplicity of signaling during atrophy situations and the difficulty of efficiently blunting receptor-linked signaling. STAT-3 is a downstream effector of IL-6 signaling, and a specific inhibitor (C188-9) was investigated for its capacity to block muscle atrophy in a model of mice deficient for the vitamin D receptor (VDR) [14]. In these conditions, VDR−/− mice exhibited exacerbated MuRF1/Trim63 expression and increased muscle atrophy. While C188-9 was able to partially preserve muscle mass, its efficacy against MuRF1/TRIM63 was not addressed. NF-κB Inhibition of the NF-κB signaling pathway was also efficiently performed using high doses of salicylate (Table 2 and Figure 2), which allowed the reversion of MuRF1-induced muscle atrophy in tumor-bearing or denervated mice [89]. However, the high doses required for potent inhibition would be toxic when administered to humans. β2-AR agonists can exert both anabolic and anti-catabolic effects on skeletal muscles, either by decreasing catabolic signals or by promoting anabolic ones, or both. Formoterol (Table 2 and Figure 2) is one such β2-AR agonist. β2-AR-mediated reversion of E3 ligase expression and muscle sparing was also observed in a rat rheumatoid arthritis model and was attributed to modulation of both the AKT and the NF-κB pathways [293]. Other β2-AR agonists like espindolol have also been shown to ameliorate muscle loss and to blunt E3 ligase expression in aged rats. The authors found that both NF-κB and myostatin expression were reduced, with no effect on AKT and FOXO3a [292]. Altogether, this strongly suggests that the positive effects of β2-AR agonists on muscle mass are mediated through the modulation of different signaling pathways depending on the catabolic stimuli, which complicates future therapeutic strategies. p38α MAPK is known to play an important role in the development of muscle atrophy [295]. Inhibition of p38α MAPK by the selective inhibitor VX-745 (Table 2 and Figure 2) partially improved muscle weight in hindlimb-suspended rats with a modest inhibition of MuRF1 expression but no modification of MAFbx [292]. NOTCH The NOTCH pathway is mainly known for its implication in muscle development and regeneration upon injury. However, it has also been implicated in muscle atrophy linked to either cancer or amyotrophic lateral sclerosis (ALS) mouse models [161].
Using a tocopherol derivative (AGT251) (Table 2 and Figure 2), the authors found that this antioxidant molecule was protective against muscle atrophy and MuRF1/Trim63 expression, and that the effects may be mediated through NOTCH1 and 3 expression. Ion Channels Electrical stimulation is an important signal that controls muscle mass and ion exchange through specific channels, e.g., K+ channels [296]. Following nerve injury, improvement of muscle mass was observed by blocking K+ channels with 4-aminopyridine (4-AP) [278]. 4-AP (Table 2) was able to partially restore muscle fiber diameter with a concomitant decrease of MuRF1/Trim63 expression, accompanied by decreased Foxo1 and Foxo3a expression. 4.1.9. Acute-Phase Protein Serum Amyloid A1 (SAA1) Skeletal muscle loss in intensive care unit patients has been at least partially attributed to the acute-phase protein serum amyloid A1 (SAA1) [256] (Table 2). Recent work performed in cultured C2C12 myotubes and septic mice showed that SAA1 effects were mediated through TLR-dependent IL-6 expression and recruitment of the NF-κB pathway. This leads to muscle atrophy and an overactivation of the MuRF1/TRIM63, MAFbx/Atrogin-1 and MUSA1 E3 ligases. Using BMS-345541, an inhibitor of the IκB kinase, the authors found that the expression of the E3 ligases returned to basal levels and muscle sparing was observed, indicating that blocking the NF-κB pathway may be an efficient way for indirectly modulating E3 ligases [266]. 4.1.10. TGF-β TGF-β family ligands, including myostatin and activin, are potential effectors of muscle atrophy in several catabolic situations like cancer [16]. The injection of a truncated form (aa 7-100) of the TGF-β family receptor ActRIIB (Table 2 and Figure 2) in mice subjected to several models of cancer cachexia was sufficient for blocking MuRF1/Trim63 and MAFbx/Atrogin-1 expression, together with complete sparing of both skeletal muscle and heart mass [16]. 4.1.11. Reactive Oxygen Species (ROS) ROS are downstream modulators of muscle wasting and may also be potential levers for preserving muscle mass [162]. Several molecules have been tested for their ability to modulate E3 ligase expression and thus to preserve muscle mass. Dehydroepiandrosterone (DHEA) (Table 2 and Figure 2), a multifunctional steroid with antioxidant properties, was shown to decrease MuRF1/Trim63 expression (but not MAFbx/Atrogin-1) in tumor-bearing rats, which moderately helped preserve muscle mass [168]. Transforming growth factor type beta 1 (TGF-β1) regulates the function and pathological status of skeletal muscle and was found to modulate muscle mass by increasing the activity of NADPH oxidase (NOX), a major ROS producer [69]. This was accompanied by an increased expression of MuRF1. Interestingly, N-acetylcysteine (NAC, a clinically used anti-oxidant) and apocynin (a NOX inhibitor) were able to reverse both MuRF1 overexpression and muscle mass loss in cultured myotubes treated with TGF-β1. Similarly, NAC or pyrroloquinoline quinone (PQQ, a naturally occurring antioxidant) were able to decrease MuRF1/Trim63 and MAFbx/Atrogin-1 expression and to preserve muscle mass in denervated mice or in starved cultured myotubes [285]. SS-31 is a cell-permeable, mitochondria-targeted antioxidant tetrapeptide undergoing clinical trials [297]. This peptide is efficient for lowering ROS production, improving muscle atrophy and decreasing MuRF1/Trim63 and MAFbx/Atrogin-1 expression [287].
While ROS modulation seems to be efficient for protecting muscle mass, the mechanisms involved in the decrease of E3 ligase expression are far from understood. Vitamin E is another antioxidant that has been used in a rat model of muscle disuse (hindlimb suspension) [291]. Vitamin E supplementation was able to largely prevent the overexpression of several proteolytic enzymes, including MuRF1/TRIM63 and MAFbx/atrogin-1, but the impact on muscle fiber cross-section was moderate. Interestingly, the authors attributed the protective role of vitamin E to a direct action on gene expression and not to its antioxidant properties [291]. 4.1.12. Leucine and Its Derivative ß-Hydroxy-ß-Methylbutyrate (HMB) The essential amino acid leucine and its derivative HMB were described as modulators of protein synthesis through an action on the mTORC1 pathway [298,299]. The efficiency of HMB and leucine on MuRF1/Trim63 expression was addressed in Dex-treated rats [282] (Table 2). However, while HMB and leucine ameliorated muscle function and decreased MuRF1 expression, no effect of either HMB or leucine was observed on muscle weight. This might be due to a partial effect of the treatment on muscles. Interestingly, the modulation of FOXO1 nuclear translocation was the putative mechanism for MuRF1/Trim63 downregulation. Leucine was also implicated in the modulation of both MuRF1/TRIM63 and MAFbx/atrogin-1, with an improvement of myotube diameter in Dex-treated primary muscle cells [283,300]. The authors found that the effect of leucine on E3 ligase expression was mediated by FOXO3a cytoplasmic sequestration and concomitant vacuolar protein sorting 34 (VPS34) nuclear accumulation. Alternatively, supplementation with Vital01 (composed of high levels of BCAAs, an increased ratio of whey to casein proteins, vitamin D, and ursolic acid) in a calorically restricted mouse model of muscle atrophy preserved muscle mass both during and after the atrophic conditions were established. The catabolic phenotype was ameliorated by Vital01, notably through the modulation of the UPS (decreased expression of MuRF1/Trim63 and MAFbx/Atrogin-1) and the autophagy-lysosome pathways [301]. However, Leu and HMB exhibited no effect on E3 ligase expression (MuRF1/Trim63 and MAFbx/Atrogin-1) in humans during fasting [210], and the beneficial muscle sparing was attributed to a stimulation of the mTORC1 pathway [298]. On the whole, the potential beneficial effect of Leu and HMB is still controversial, both for their action on E3 ligases and for their muscle-preserving effect. Plant Derivatives Plant derivatives were also tested for their ability to protect against skeletal muscle atrophy. Ursolic acid (Table 2) was able to partially decrease muscle atrophy in mice subjected to chronic kidney disease, and a moderate effect on MuRF1/TRIM63, MAFbx/Atrogin-1 and MUSA1 expression was observed, which was attributed to decreased expression of myostatin and inflammatory cytokines [255]. However, ursolic acid was unable to modify E3 ligase expression in cultured myotubes treated with Dex, and ursolic acid was able to directly induce the expression of MuRF1/Trim63 and MAFbx/Atrogin-1 in C2C12 myotubes. More investigation is clearly needed before concluding on any potential therapy using ursolic acid. A polyphenol from green tea, epigallocatechin-3-gallate (EGCG), was also used as a countermeasure for fighting against cancer cachexia [279].
EGCG was able to reduce NF-κB expression and the downstream E3 ligases MuRF1/TRIM63 and MAFbx/Atrogin-1 (only a trend for MuRF1/TRIM63). However, the decrease in tumor volume makes the interpretation of the EGCG effect difficult, as its protective role on muscles might be indirect. Teaghrelin, an analog of human ghrelin, was efficient for decreasing the catabolic effect of Dex in cultured C2C12 myotubes, with depressed expression of MuRF1/Trim63 and MAFbx/Atrogin-1 [289]. The authors suggested that increased myogenin expression might be implicated in the beneficial effect of teaghrelin. In rats submitted to thermal injury, ghrelin blunted the expression of MuRF1/Trim63 and MAFbx/Atrogin-1 [302]. While the exact mechanism was not addressed, the authors found that TNFα and IL-6 mRNA levels were normalized upon ghrelin infusion. Interestingly, mice knocked out for ghrelin exhibit an increased expression of MuRF1/Trim63 and are less protected from fasting atrophy [290]. Sabinene is a terpene present in plant essential oils and was found to decrease muscle atrophy in starved rats through reversal of the MuRF1/Trim63 overexpression that is commonly observed upon fasting [286]. The mechanism proposed by the authors was the repression of ROS-mediated activation of ERK and p38 MAPK. Matrine (Table 2 and Figure 2) is a natural compound used in traditional medicine and approved for cancer therapy in China [284]. The authors demonstrated that this compound was able to partially reverse muscle atrophy in mice bearing Colon 26 adenocarcinoma, with a concomitant decrease of MuRF1/Trim63 and MAFbx/Atrogin-1 expression. Using cultured C2C12 myotubes, the authors found that the effect of matrine was mainly driven by the AKT/mTORC1/FOXO3a signaling pathway, with both a repression of the catabolic axis and an up-regulation of the anabolic one. E3 Ligase Inhibitors The main E3 ligase that has been investigated so far for the design of inhibitors is MuRF1/TRIM63. This can be explained by the fact that it is also the only E3 ligase known to target contractile proteins from both the thin and the thick filament [227,228,230,303]. In a first attempt, the screening of a small-molecule library for finding MuRF1/TRIM63 inhibitors identified a compound (P013222) (Table 2 and Figure 2) that was able to decrease MuRF1/TRIM63 autoubiquitylation [294]. The selectivity was within the µM range, with a 10-fold preference for MuRF1/TRIM63 compared to other E3 ligases, and P013222 was able to inhibit the degradation of MHC in Dex-treated C2C12 myotubes. More recently, the screening of a library identified another small-molecule compound (ID#704946/MyoMed-946) able to alter the MuRF1-titin interaction (IC50 around 25 µM), thus targeting the coiled-coil region of MuRF1/TRIM63 [293]. Compound ID#704946/MyoMed-946 was able to decrease MuRF1/TRIM63 self-ubiquitination in vitro and, surprisingly, was also able to decrease the mRNA levels of MuRF1/Trim63 in catabolic C2C12 myotubes [293]. This suggests that this compound may be interfering with several mechanisms modulating MuRF1/TRIM63 action. This compound was at least partially effective for preserving muscle mass in catabolic mice. The mechanism by which compound ID#704946/MyoMed-946 preserves muscle function needs further investigation, as the same laboratory found that it was also able to modulate MuRF2 expression [17,18]. The cellular inhibitor of apoptosis 1 (cIAP1) E3 ligase is a negative regulator of muscle mass by acting on TNFα-mediated NF-κB signaling.
cIAP1 is in fact an E3 ligase whose role is to blunt the non-canonical NF-κB signaling, and its genetic ablation was reported to improve muscle mass in mdx mice [91]. Recently, an inhibitor of cIAP1 (LCL161) was assessed for its capacity to improve skeletal muscle mass in denervated mice [19]. While genetic ablation of cIAP1 was able to preserve muscle mass in denervated mice, its inhibition by LCL161 was only moderately efficient, as only the EDL muscle was preserved, indicating either a poor inhibition efficiency of LCL161 or a compensation by other E3 ligases and/or signaling pathways. CBL-B is an E3 ligase involved in the targeting of the Insulin Receptor Substrate 1 (IRS1) that mediates IGF1 signaling, notably by activating the AKT-mTORC1 pathway. CBL-B is involved in spaceflight-induced muscle atrophy, and genetic ablation of CBL-B protects skeletal muscle from disuse atrophy [171]. CBL-B can be inhibited by a small pentapeptide mimetic of tyrosine-608-phosphorylated IRS1 that restores IGF1 signaling and protects from atrophy. Interestingly, IGF1 signaling restoration induced a concomitant decrease of MAFbx expression, while no variation in MuRF1/Trim63 mRNA levels was observed [171]. Another peptide, called cblin, was also reported to exhibit some protective action on skeletal muscle through the inhibition of CBL-B. Conclusions and Future Directions The discovery of molecules able to lower muscle loss during catabolic situations is a promising field of investigation, and numerous possibilities can be envisaged, from directly blunting the signals arriving at the cellular membrane level to more specifically inhibiting the E3 ligase(s) involved in the degradation of the muscle contractile apparatus. Each strategy has advantages and disadvantages. The first approaches are not specific and alter numerous metabolic pathways, which may end up with side effects at both the short- and long-term levels. For example, suppressing general protein breakdown by acting on the PI3K/AKT/FOXO pathway might be deleterious by accumulating misfolded proteins. On the other hand, receptor and metabolic pathways have been studied for decades and several inhibitors have been well characterized, which allows more straightforward investigations dedicated to muscle atrophy. Drugs directly targeting the E3 ligases, so far mostly focused on MuRF1/TRIM63, have the advantage of being more selective and should prove to be better tolerated by muscle cells and the whole organism. Indeed, MuRF1/TRIM63 (and some other ligases) is muscle-specific, which means that drugs will only affect muscles. This is an important advantage over metabolic pathways that are shared by several organs. More investigations are clearly needed to improve the first generation of molecules or to find new ones, which includes new strategies for modulating E3 ligase activity. Background To deal with seasonal cold and food shortage during winter, hibernating mammals show a combination of behavioral and physiological changes. To save energy during hibernation, hibernating animals use periods of torpor characterized by decreased metabolic rate and body temperature, reduction in respiratory and heart rates, and physical inactivity [1,2]. Brown bears (Ursus arctos) exhibit unique features, as they hibernate at mild hypothermia (32-35 °C) and can stay inside their dens for up to 7 months, without drinking, eating, defecating or urinating, and with no arousal episodes [3][4][5][6].
While denning, they reduce their metabolic rate by about 75% [7], and rely primarily on mobilization of fat stores, which is reflected by increased circulating fatty acid concentration and body fat store depletion during winter [8][9][10]. Beyond energy substrates, lipids also have pleiotropic actions in the regulation of metabolism, and changes in membrane fatty acid composition have already been described in hibernating animals [11][12][13][14], including the brown bear [9]. Membrane phospholipids can also provide long-chain fatty acids for the synthesis of bioactive lipid mediators, such as endocannabinoids [15][16][17]. The endocannabinoid system (ECS) was originally described as being composed of G-protein coupled receptors (CB1 and CB2) and their endogenous ligands, of which the main ones are derived from arachidonic acid 20:4n-6 (AA) esterified into phospholipids, and called 2-arachidonoyl glycerol (2-AG) and anandamide (AEA) [15][16][17][18][19][20]. These two well-characterized compounds clearly show varying affinity for CB1 and CB2 receptors. Indeed, AEA is considered a high-affinity CB1 partial agonist (and weak CB2 agonist), whereas 2-AG is described as a low-to-moderate-affinity CB1 and CB2 full agonist [21,22]. 2-AG and AEA belong to the large families of 2-acylglycerols (2-AcGs) and N-acylethanolamines (NAEs), respectively [17,19]. N-acylphosphatidylethanolamine-hydrolyzing phospholipase D (NAPEPLD) and sn-1-specific diacylglycerol lipase-α and β (DAGLA and DAGLB) are the main enzymes involved in the biosynthesis of NAEs and 2-AcGs, respectively [17,19]. Fatty acid amide hydrolase (FAAH) is responsible for NAE catabolism (and to a lesser extent for that of 2-AG) [23], and monoacylglycerol lipase (MGLL) specifically catabolizes 2-AcGs [17,19]. eCBs can also be metabolized by lipoxygenases (LOXs) and by cyclooxygenase-2 (COX-2), an alternative pathway for eCB catabolism [17]. The ECS includes structurally related compounds like N-oleoylethanolamine (OEA), called "endocannabinoid-like compounds" (eCBs-like). The latter are metabolized by the same biosynthetic and catabolic enzymes as eCBs [17]. Although eCBs-like compounds are not able to bind to CB1 and CB2 receptors, they can bind to other G-protein coupled receptors (e.g. GPR119 and GPR55) or nuclear receptors, like peroxisome proliferator-activated receptor α (PPARA) [17]. Endogenous cannabinoids are involved in the regulation of many physiological processes, including neuronal signaling [24], stress response [25], metabolism [25][26][27], feeding behavior and energy storage [25,28]. Evidence supports the idea that the ECS could be involved in sleep cycles [29], as well as circadian and potentially circannual rhythms [30]. At the central level (e.g. hypothalamus), CB1 is able to promote food intake and reduce energy expenditure [25,31]. In addition, CB1 activation in adipose tissue leads to fatty acid and glucose uptake, and to upregulation of lipogenesis [25]. In liver, CB1 signaling leads to increased expression of genes involved in the synthesis of fatty acids [32], and in skeletal muscle tissue, CB1 activation triggers a decrease in glucose uptake and insulin sensitivity [25]. The CB2 receptor is well known to be widely expressed in immune cells and to have numerous immunomodulatory roles [33].
CB2 has also been detected in metabolic tissues, like adipose tissue and skeletal muscle [34,35], and pharmacological or genetic inactivation of CB2 in murine obesity models promotes insulin-mediated glucose uptake in skeletal muscle, reduces adipose tissue inflammation, and thus improves insulin sensitivity [36,37]. Finally, the eCB-like OEA promotes lipolysis and fatty acid oxidation in skeletal muscle and liver, and triggers an anorexigenic signal, notably through the nuclear receptor PPARA [38,39]. Considering the pleiotropic roles of the ECS in neuronal signaling, regulation of feeding behavior, energy metabolism and circannual rhythms, important changes are expected during hibernation. Several circulating ECS compounds have been quantified in hibernating black bears, during and around the torpor phase [40], with no major changes observed except a slight increase in 2-AG in the period of metabolic drop before torpor. Although a decrease in ECS tone has been observed in hibernating marmots (Marmota monax and flaviventris) and ground squirrels (Spermophilus richardsonii) [30,41,42], we hypothesize that a similar decrease should occur in hibernating bears, not excluding specific changes due to their unique features during hibernation (mild hypothermia, no periodic arousal, and maintenance of alertness). Therefore, we explored here seasonal variations in fatty acid composition and ECS tone, in both the circulating compartment and in muscle and adipose tissues, in winter-hibernating and summer-active brown bears.
Results
Seasonal differences in serum lipids
We explored the fatty acid (FA) composition of winter-hibernating (WBS) and summer-active (SBS) bear serum (see Supplementary Table S1). From the lipidomic data, we compared both the summer and winter concentrations and proportions of fatty acids (see Supplementary Tables S2 and S3 for detailed lipidomic results). As shown in Fig. 1a, the total concentration of FAs was about twofold higher in WBS relative to SBS (28.82 ± 1.71 vs. 15.99 ± 1.09 mmol/L). All but two quantified lipid species were higher in concentration in hibernating bears, i.e. saturated fatty acids (SFAs), monounsaturated fatty acids (MUFAs), and n-6 polyunsaturated fatty acids (PUFAs) (Supplementary Table S2). Only the concentrations of alpha-linolenic acid C18:3 n-3 (ALA) (0.49-fold, non-significant) and eicosapentaenoic acid C20:5 n-3 (EPA) (0.26-fold) were lower in WBS (Supplementary Table S2). Meanwhile, the molar percent of total n-6 species was found to be lower in WBS compared to SBS (Fig. 1b). Lipid species with the highest molar percent are presented in Fig. 1c (see Supplementary Table S3). Among SFAs, palmitic acid C16:0 (PA) was found in higher proportion, whereas stearic acid C18:0 (SA) was in lower proportion in winter serum. Similar proportions of oleic acid 18:1n-9 (OA), belonging to the n-9 MUFAs, were found in winter and summer bear serum. Concerning n-6 PUFAs, the proportion of arachidonic acid C20:4 n-6 (AA) was lower during winter, whereas the proportion of linoleic acid C18:2 n-6 (LA) remained unchanged (Fig. 1c). For individual species of the n-3 family (Fig. 1d and Supplementary Table S3), docosapentaenoic acid C22:5 n-3 (DPA, 1.5-fold) and docosahexaenoic acid C22:6 n-3 (DHA, 2.2-fold) were found in higher proportions. The proportion of C20:5 n-3 (EPA) was found to be much lower (0.15-fold) in winter serum, as was alpha-linolenic acid C18:3 n-3 (ALA, 0.27-fold), a precursor of the EPA, DPA and DHA species.
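As an aside for readers who want to reproduce this kind of winter-versus-summer comparison from the tabulated values, the sketch below illustrates the computations reported in this section: seasonal fold changes, paired Student t-tests and Benjamini-Hochberg adjustment for multiple comparisons. It is only a minimal illustration in Python on hypothetical paired serum values (the study's own analyses were run in R, see Methods); the species names and numbers below are placeholders, not bear data.

# Minimal sketch (not the study's actual R pipeline): paired winter/summer
# comparison of lipid species with fold changes and Benjamini-Hochberg correction.
# All numbers below are made-up placeholders, not measured bear data.
import numpy as np
from scipy.stats import ttest_rel, shapiro
from statsmodels.stats.multitest import multipletests

# Hypothetical paired serum concentrations (mmol/L); each array holds one value
# per serum mix, with summer and winter mixes matched by year.
species = {
    "DHA_22_6_n3": (np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0]),      # summer
                    np.array([2.0, 2.3, 2.1, 1.9, 2.4, 2.2])),     # winter
    "EPA_20_5_n3": (np.array([0.40, 0.38, 0.42, 0.36, 0.41, 0.39]),
                    np.array([0.06, 0.07, 0.05, 0.06, 0.07, 0.06])),
}

pvals, names = [], []
for name, (summer, winter) in species.items():
    # Normality check on the paired differences (Shapiro-Wilk), as in the Methods.
    _, p_norm = shapiro(winter - summer)
    # Fold change of winter relative to summer (ratio of means).
    fold = winter.mean() / summer.mean()
    # Paired Student t-test, since winter and summer mixes are matched.
    _, p = ttest_rel(winter, summer)
    pvals.append(p)
    names.append(name)
    print(f"{name}: fold={fold:.2f}, raw p={p:.4f}, Shapiro p={p_norm:.2f}")

# Benjamini-Hochberg correction across all tested species.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, pa, rej in zip(names, p_adj, reject):
    print(f"{name}: BH-adjusted p={pa:.4f}, significant={rej}")

The same structure (normality check, paired test, BH adjustment) applies to the endocannabinoid and mRNA comparisons reported later, with an unpaired test for the adipose tissue mRNA data.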
From molar percent values, the DHA/AA ratio was 3.2-fold higher in winter (Fig. 1e).
Changes in plasma endocannabinoids and endocannabinoid-like compounds
We next assessed circulating eCBs and eCB-like compounds in bear plasma. Paired samples were collected in winter and in summer from eight bears (Supplementary Table S1), and quantifications of AEA, 2-AG and OEA are presented in Fig. 2 and Supplementary Table S5. Lower concentrations were observed for AEA (0.63-fold) in winter compared to summer, whereas the reverse was observed for OEA (3.3-fold). No difference was found for 2-AG plasma concentration.
Changes in endocannabinoid concentrations in muscle and adipose tissues
Quantification of endocannabinoids was then performed in bear muscle and adipose tissues. Paired tissue samples were collected from bears in winter and in summer (Supplementary Table S1), and quantifications of AEA, 2-AG and OEA are presented in Fig. 3 and Supplementary Table S5. AEA concentration was lower in both muscle and adipose tissues during winter versus summer, close to the statistical threshold (p = 0.064 and p = 0.069, respectively). 2-AG concentration was significantly lower in muscle and adipose tissue samples during winter, by about 1.6- and 9-fold, respectively. By contrast, no seasonal changes were found in OEA concentrations in either muscle or adipose tissue.
Changes in endocannabinoid pathway-related gene expression in muscle and adipose tissues
To explore tissue metabolism of endocannabinoids, we quantified gene expression in muscle and adipose tissue of the eCB membrane receptors CB1 and CB2, and of several enzymes involved in the synthesis and catabolism of eCBs. For muscle tissue, paired samples were obtained from 8 bears at the two time points, while for adipose tissue, data come from 5 bears in summer and 13 bears in winter (Supplementary Table S1).
Fig. 1 Lipidomics of summer and winter brown bear serum. The winter and summer bear serum mixes were prepared as described (Supplementary Table S1). a: Total fatty acid (FA) concentration. b: Total n-6 and n-3 FA relative proportions of total lipids. c: Highest molar percent lipid species. d: Molar percent of the n-3 family lipid species. e: Molar ratios of DHA/AA in summer and winter serum. Detailed lipidomic results are presented in Supplementary Tables S2 and S3. Data are expressed in mmol/L for total FA concentration, or as molar percentages of total lipids, and are represented as mean ± SEM of separate extractions and quantifications from the twelve mixes (six summer and six winter serum mixes, except for EPA with data from only three summer and three winter mixes). Paired Student t-tests were used to compare summer and winter data and the Benjamini-Hochberg correction was applied for multiple comparisons. * indicates BH-adjusted p value < 0.05 when comparing seasons, ** for p < 0.01, *** for p < 0.001, NS: non significant. AA: arachidonic acid, ALA: alpha-linolenic acid, DHA: docosahexaenoic acid, DPA: docosapentaenoic acid, EPA: eicosapentaenoic acid, LA: linoleic acid, OA: oleic acid, PA: palmitic acid, SA: stearic acid, SBS: summer bear serum, WBS: winter bear serum.
Fig. 2 Circulating endocannabinoid concentration in brown bear plasma. Concentration of three major endocannabinoid compounds in bear plasma. Plasma was collected from bears at both winter-hibernating and summer-active time points (Supplementary Table S1). Data are expressed in ng/mL and are represented as mean ± SEM of separate extractions and quantifications (n = 8).
Paired Student t-tests were used to compare summer and winter data. * indicates p value < 0.05 when comparing seasons, *** for p < 0.001, NS: non significant. AEA: anandamide, 2-AG: 2-arachidonoylglycerol, eCBs: endocannabinoids, OEA: N-oleoylethanolamine, SBP: summer bear plasma, WBP: winter bear plasma.
Data are presented in Fig. 4 and Supplementary Table S5. For the genes that encode the membrane receptors CB1 and CB2 in muscle tissue, CNR1 mRNA level, but not CNR2, was decreased (0.63-fold) in winter (Fig. 4). Concerning enzymes that catabolize AEA and 2-AG, the mRNA level of FAAH was induced (2.3-fold) in winter, but MGLL gene expression did not change. For genes encoding enzymes of the biosynthetic pathway, DAGLA mRNA level was strongly reduced in muscle tissue during winter (0.40-fold), whereas DAGLB mRNA level was increased (1.53-fold). Finally, gene expression of NAPEPLD did not change in muscle (Fig. 4). Conversely, in adipose tissue (Fig. 5), no significant changes in CNR1 gene expression were reported, whereas CNR2 expression was strongly decreased in winter (0.42-fold). Gene expression of the catabolic enzymes FAAH and MGLL did not change in adipose tissue between seasons. Finally, for genes encoding biosynthetic enzymes, mRNA levels of DAGLB and NAPEPLD were found to be respectively higher (1.44-fold) and lower (0.75-fold) in winter.
Discussion
Thanks to repeated capture sessions, we were able to gather samples of serum, plasma and tissues from a high number of free-living brown bears (Ursus arctos). From the 28 bears included in this study, samples were collected both in February during winter hibernation and in June during the summer-active period. Due to the limited amount of available biological material, the analyses were performed on samples coming from different subsets of the 28 bears. In all but adipose tissue, analyses were performed on winter and summer paired samples (Supplementary Table S1). We examined circulating lipid and ECS compounds in both summer-active and winter-hibernating brown bears to explore the extent to which regulation of the ECS reflects bear hibernation peculiarities, including survival due to lipid oxidation, maintenance of muscle glycolysis, and maintained alertness during dormancy. The seasonal shift we highlighted in serum FA composition, together with a decrease in tissue AEA and 2-AG and a three-fold increase in circulating OEA during winter, could contribute to the behavioral and metabolic changes that occur in hibernating bears. Hibernators experience extended periods of food shortage during hibernation and primarily rely on mobilization of fat stores from white adipose tissue [1]. Accordingly, we found that the concentration of total circulating fatty acids was elevated in hibernating bears, a finding in line with previous studies [5,43].
Fig. 3 Endocannabinoid concentration in brown bear muscle and adipose tissue. Concentration of three major endocannabinoid compounds in bear muscle and adipose tissue. Tissues were collected from bears at both winter-hibernating and summer-active time points (Supplementary Table S1). Data are expressed in pg/mg and are represented as mean ± SEM of separate extractions and quantifications (n = 5 for muscle tissue and n = 6 for adipose tissue). Paired Student t-tests were used to compare summer and winter data. *** indicates p value < 0.0001 when comparing seasons, NS: non significant.
AEA: anandamide, 2-AG: 2-arachidonoylglycerol, OEA: N-oleoylethanolamine, SBA: summer bear adipose tissue, SBM: summer bear muscle, WBA: winter bear adipose tissue, WBM: winter bear muscle.
Considering both the amount and relative proportions of circulating lipids, our results are consistent with previously published changes in serum and plasma lipid profiles during hibernation [5,9,10], notably an enrichment in DHA C22:6 n-3 and depletions in ALA C18:3 n-3 and EPA C20:5 n-3 during winter compared to summer. Whether the depletion in the ALA and EPA precursor species could be directly linked to the observed DHA increase remains to be elucidated. Here, the DHA serum enrichment that we observed in hibernating bears does not come from dietary FA intake but rather results from the mobilization of lipid stores. The health benefits that have been attributed to n-3 PUFAs (e.g. DHA), essentially established by DHA dietary intervention studies, could potentially be transposed to the context of hibernation. Indeed, it has already been hypothesized that DHA could be involved in the bear's resistance to muscle atrophy during hibernation [10]. DHA appears to prevent muscle atrophy in fasting mice and increases muscle glycogen stores [44]. Strikingly, in parallel with DHA serum enrichment, hibernating bears have a more than 3-fold higher muscle glycogen content compared to summer-active animals [10]. In addition to its anti-inflammatory effects, DHA is also known to exert a positive effect on protein balance by decreasing the expression of factors involved in protein breakdown [45] and enhancing protein synthesis, notably by promoting mammalian Target Of Rapamycin (mTOR) activation [46]. Concomitantly with serum DHA enrichment, we observed a drop in AA proportion, leading to a sharp increase in the DHA/AA ratio. The omega-3/omega-6 ratio is known to have an impact on global health [47], and the balance of this ratio could also impact the endocannabinoid system [48], notably because AA is a precursor of the two main eCBs, 2-AG and AEA. Indeed, n-6 PUFA-enriched diets have been shown to increase the level of 2-AG or AEA in the brain, plasma, and peripheral tissues in non-hibernating animal models [49][50][51][52]. It is noteworthy that, in response to DHA supplementation, an enrichment of this fatty acid in phospholipids of cell membranes occurs in parallel with a decrease in AA content [38,49,53,54]. By remodeling the amount of AA-containing phospholipids, DHA is able to reduce the synthesis of AEA and 2-AG [49,54]. Further studies on bears, focusing on fatty acid membrane composition in tissues at different time points, will be helpful to characterize the remodeling of membrane lipids that could affect the availability of FA precursors for eCB biosynthesis.
Fig. 4 Fold change in gene expression of target genes involved in endocannabinoid biosynthesis and catabolism in brown bear muscle tissue. Muscle tissues were collected from bears at both winter-hibernating and summer-active time points (Supplementary Table S1), total RNA was extracted and expression levels were measured by RT-qPCR. Data are normalized against TBP mRNA levels and expressed as a fold change relative to the summer condition, represented as mean ± SEM of separate extractions and quantifications (n = 8). Paired Student t-tests were used to compare summer and winter data. * indicates p value < 0.05 when comparing seasons, NS: non significant.
CNR1: cannabinoid receptor 1, CNR2: cannabinoid receptor 2, DAGLA: diacylglycerol lipase alpha, DAGLB: diacylglycerol lipase beta, FAAH: fatty acid amide hydrolase, MGLL: monoacylglycerol lipase, NAPEPLD: N-acyl phosphatidylethanolamine phospholipase D, SBM: summer bear muscle, WBM: winter bear muscle.
Data on eCB compounds from experimental short fasting in non-hibernating mammals are very divergent, depending on the tissue considered (e.g. brain or peripheral tissues) and the duration of food deprivation, but tissue levels of eCBs are mainly regulated by the availability of their membrane phospholipid precursors and by the activity of biosynthetic and catabolic enzymes [28,49,55,56]. We hypothesized that the drastic reduction in metabolic activity, the lack of intake of dietary PUFAs, the significant increase in the serum DHA/AA ratio, and perhaps a reduction in tissue AA-phospholipid concentration could lead to a global reduction in ECS tone during the hibernation period. A reduction in ECS tone has already been documented in hibernating marmots [30,41], but had not been confirmed in large-bodied hibernators. Comparing active and hibernation states in brown bears, we report here a decrease in the plasma concentration of AEA, and an unexpected 3-fold increase in circulating OEA levels in hibernating bears. In both muscle and adipose tissues, 2-AG and AEA (close to the statistical threshold) were found to be lower in winter, while OEA did not change. Quantification of winter serum eCBs was previously reported in black bears during and around the torpor phase, but summer-active bears were not investigated [40]. The nutritional status of the captured animals and their diet were not specified. These elements strongly limit comparison between the two studies. Taken together, our data allowed us to make several hypotheses about possible mechanisms by which the ECS could contribute to the metabolic and behavioral changes that occur in bears during hibernation. First, considering that the CB1 agonists AEA and 2-AG favor food intake and stimulate lipogenesis [25], CB1 signaling is expected to be upregulated during the active summer period in order to promote energy storage, and downregulated during winter hibernation to stimulate lipolysis and FA oxidation. The drops in tissue 2-AG and AEA concentrations observed during winter could be due to a decrease in tissue AA-phospholipid concentration, as we hypothesized above. The degradation of AEA could also be increased in muscle tissue during hibernation, as reflected in the higher mRNA levels of FAAH, the main hydrolase that degrades AEA [19,23]. In adipose tissue, the lower NAPEPLD mRNA level during hibernation may support a decrease in AEA synthesis, and ultimately in AEA content. The tissue content of 2-AG is decreased in winter with no changes in the mRNA levels of the catabolic enzyme MGLL.
Fig. 5 Fold change in gene expression of target genes involved in endocannabinoid biosynthesis and catabolism in brown bear adipose tissue. Adipose tissues were collected from bears at both winter-hibernating and summer-active time points (Supplementary Table S1), total RNA was extracted and expression levels were measured by RT-qPCR. Data are normalized against TBP mRNA levels and expressed as a fold change relative to the summer condition, represented as mean ± SEM of separate extractions and quantifications (n = 5 for summer and n = 13 for winter samples).
Unpaired Student t-tests were used to compare summer and winter data. * indicates p value < 0.05 when comparing seasons. CNR1: cannabinoid receptor 1, CNR2: cannabinoid receptor 2, DAGLA: diacylglycerol lipase alpha, DAGLB: diacylglycerol lipase beta, FAAH: fatty acid amide hydrolase, MGLL: monoacylglycerol lipase, NAPEPLD: N-acyl phosphatidylethanolamine phospholipase D, SBA: summer bear adipose tissue, WBA: winter bear adipose tissue.
Furthermore, opposite changes in DAGLA and DAGLB gene expression do not allow us to speculate on the biosynthesis/degradation balance. One limitation of our study is that gene expression may not reflect biological activity. Moreover, we only focused on the main biosynthetic and catabolic enzymes involved in eCB metabolism, and investigation of alternative degradation routes, such as endocannabinoid oxygenation by cyclooxygenases and lipoxygenases, would bring new insights. During hibernation, the lower 2-AG (and AEA, close to the statistical threshold) tissue content and the reduction of CNR1 and CNR2 mRNA levels in muscle and adipose tissue, respectively, strongly support reduced ECS tone in both tissues. In non-hibernating mammals, pharmacological inhibition of CB1 leads to a decrease in PDK4 expression [25,57]. PDK4 is a major negative regulator of PDH activity, which in turn regulates whole-body oxidative carbohydrate metabolism. In hibernating bear muscle, recent studies have shown that PDK4 is upregulated compared to the summer-active state [10,58], and the expression of PDK4 during hibernation thus appears to be disconnected from direct regulation by CB1. CB1 receptor antagonism also leads to an increased uptake of glucose in muscle via PI3K signaling [59], and glycolysis appears preserved in bear skeletal muscle during hibernation, as suggested by an overall increase in the protein abundance of all glycolytic enzymes [10]. As proposed by Chazarin et al. and Vella et al., bears still oxidize glucose and produce lactate in skeletal muscle during hibernation [10,60]. Overactivation of the ECS is a hallmark of obesity [61,62], and 2-AG is predominantly found in higher concentration in tissues of obese people [61,63]. Interestingly, in murine models of obesity, gain of adipose tissue often leads to increased fat inflammation [36,37]. Genetic or pharmacological inactivation of the CB2 receptor contributes to reducing adipose tissue inflammation and increasing insulin sensitivity and skeletal muscle glucose uptake [36,37]. Strikingly, insulin resistance has been described in hibernating bear adipocytes [64]. As bears do not experience health consequences of circannually high body fat storage [65], reduced CB2 signaling in adipose tissue could dampen adipose tissue inflammation. Lower amounts of 2-AG and AEA could also reduce CB1 signaling in adipose tissue, thus limiting lipogenesis and promoting lipolysis during hibernation in bears, as also suggested for hibernating marmots [30]. OEA is a high-affinity agonist of peroxisome proliferator-activated receptor α (PPARA), regulating food intake and stimulating fat catabolism [38,39,53,66,67]. The eCB-like OEA is generally synthesized in response to dietary oleic acid intake by enterocytes of the small intestine [49,54], and inhibits food intake. It has already been shown in rodents that food deprivation inhibits OEA synthesis in the small intestine, but stimulates its synthesis in the liver [38,53,68,69].
Therefore, during bear hibernation, circulating OEA could originate from tissue synthesis (probably hepatic) and be released into the bloodstream. The high OEA level that we found in hibernating bears, not triggered by food intake, could participate in a sustained anorexigenic signal during the hibernation state. The consequences of high levels of circulating OEA have been studied in non-hibernating rodents. Intraperitoneal OEA administration in rats notably impairs locomotor activity, as supported by a decrease in ambulation, an increase in the time spent inactive, and the presence of signs of catalepsy [66,70]. We can thus hypothesize that a higher amount of plasma OEA during bear hibernation can participate in the maintenance of prolonged physical inactivity. It has also been shown that intracerebroventricular injections of OEA promote alertness, with the observation of enhanced dopamine and c-Fos expression in wake-related brain areas [71]. Bears are known to remain sensitive to disturbance during hibernation [72][73][74]. High circulating amounts of OEA might thus participate in alertness to external stimuli from the environment in hibernating bears. OEA during winter possibly also favors body fat mobilization for energy needs, with stimulation of FA and glycerol release from adipocytes [38,39]. Finally, a potential role for OEA in the promotion of fasting-induced ketogenesis during hibernation could also be considered, as OEA has been demonstrated to increase 3-hydroxybutyrate production in in vivo rodent models [38,39].
Conclusions
In conclusion, our results show a reduction in ECS tone in hibernating bears and suggest a coordinated downregulation of CB1 and CB2 signaling in skeletal muscle and adipose tissue. As summarized in Fig. 6, these features could favor energy mobilization through lipolysis and optimization of glucose uptake by skeletal muscles. Despite high fat stores in winter, bears do not exhibit features of ECS overactivation, and a decrease in CB2 signaling could dampen adipose tissue inflammation. The observed increase in circulating OEA level may participate in the behavioral and physiological adaptations of the bear hibernation state, such as the maintenance of an anorexigenic signaling pathway and the promotion of lipolysis and fatty acid β-oxidation. We also speculated about OEA involvement in torpor maintenance and in motor activity reduction, as well as a role in the conservation of alertness at the level of the central nervous system.
Methods
Bear sample collection
A total of 28 free-ranging subadult brown bears (Ursus arctos) from Dalarna and Gävleborg counties, Sweden, were included in this study, including 4 bears captured in two consecutive years. All samples and data were collected under protocols approved by the Swedish Ethical Committee on Animal Experiment (applications Dnr C3/2016 and Dnr C18/2015), the Swedish Environmental Protection Agency (NV-00741-18), and the Swedish Board of Agriculture. All procedures complied with Swedish laws and regulations. As described previously [10,75], blood, subcutaneous adipose tissue, and muscle tissue (vastus lateralis) samples were collected at two time points, in February during winter hibernation (W) and in June during the summer-active period (S). Blood samples were collected from the jugular vein into 8 ml dry tubes for serum (Vacuette® Z Serum Sep Clot Activator, Greiner Bio-One GmbH, Kremsmünster, Austria) or into 10 ml EDTA-coated tubes (BD Vacutainer®, Fisher Scientific, Illkirch, France) for plasma.
The analyses were performed on samples coming from different subsets of bears, as described in Supplementary Table S1.
Lipid extraction and analysis
To perform the serum lipidomic analysis, serum mixes were prepared as follows: for a given year, 50 μl of summer serum from each bear of that year was pooled to obtain the summer mix. In parallel, 50 μl of winter serum from the same bears was pooled to obtain the winter mix. A total of six paired summer and winter mixes were obtained (Supplementary Table S1). Lipids were extracted and analyzed as previously described [76]. After addition of an internal standard (tri-17:0 triacylglycerol), total lipids were extracted twice from the bear serum mixes with ethanol/chloroform (1:2, v/v). The organic phases were dried under nitrogen and lipids were transmethylated. Briefly, samples were treated with toluene-methanol (1:1, v/v) and boron trifluoride in methanol (14%). Transmethylation was carried out at 100 °C for 90 min in screw-capped tubes. Then 1.5 mL of K2CO3 in 10% water was added, and the resulting fatty acid methyl esters were extracted with 2 mL of isooctane and analyzed by gas chromatography (GC) with an HP6890 instrument equipped with a fused silica capillary BPX70 SGE column (60 × 0.25 mm). The carrier gas was hydrogen. Temperatures of the Ross injector and the flame ionization detector were set to 230 °C and 250 °C, respectively. Data were expressed in mmol/L for total or individual fatty acid (FA) concentrations, or as molar percentages of total lipids for individual FAs. Detailed lipidomic results are presented in Supplementary Tables S2 (serum fatty acid concentrations) and S3 (serum fatty acid relative proportions).
Endocannabinoid quantification
For quantification of circulating endocannabinoids, analysis was performed on 500 μl of plasma collected at the two time points (S and W) from 8 individual animals (see Supplementary Table S1). Standard endocannabinoids (eCBs), i.e. PEA, PEA-d5, OEA, OEA-d4, AEA, AEA-d4, 2AG, and 2AG-d5, were purchased from Cayman (Bertin BioReagent, Saint-Quentin-en-Yvelines, France). Mass spectrometry quality grade solvents were purchased from Fisher Scientific (Illkirch, France). Tissue samples (adipose and muscle tissues; ca. 100 mg) were crushed in an Omni Bead Ruptor 24 apparatus (Omni International, Kennesaw, USA) with circa twenty 1.4 mm OD zirconium oxide beads (S = 6.95 m/s, T = 30 s, C = 3, D = 10 s) and 900 μl of methanol/Tris buffer (50 mM, pH 8) (1:1) containing 20 ng of PEA-d5, 2 ng OEA-d4, 10 ng AEA-d4, and 20 ng 2AG-d5. Then, 2 mL of CHCl3/MeOH (1:1, v/v) and 500 μL of Tris (50 mM, pH 8) were added to each homogenate, which was vortexed and centrifuged for 10 min at 3000 g. The organic layer was recovered and the upper aqueous phase was extracted twice with chloroform (1 mL). Finally, the organic phases were pooled and evaporated under vacuum. Plasma (500 μL) was mixed with 500 μL of cold methanol containing 11 ng AEA. After protein precipitation at -20 °C for 2 h, endocannabinoids were extracted with methanol/chloroform (1:1, v/v) (5 ml) and saline (1.25 mL). The organic phase was recovered and the aqueous phase was extracted twice with chloroform (3 mL). The organic phases were finally pooled and evaporated under vacuum. Dried extracts were solubilized in methanol (200 μL) and centrifuged for 5 min at 20,000 g. Four microliters of the supernatant were injected into a 1200 LC system coupled to a 6460-QqQ MS/MS system equipped with an ESI source (Agilent Technologies).
Separation was achieved on a Zorbax SB-C18 2.1 × 50 mm, 1.8 μm column (Agilent Technologies) at a flow rate of 0.4 mL/min at 40 °C, with a linear gradient of (solvent A) water containing 0.1% formic acid and (solvent B) methanol containing 0.1% formic acid as follows: 10% of B for 1 min, up to 85% of B in 8 min, and then 100% B for 4.5 min. Acquisition was performed in positive Selected Reaction Monitoring (SRM) mode (source temperature: 350 °C, nebulizer gas flow rate: 10 L/min, 40 psi, sheath gas flow 10 L/min, sheath gas temperature 350 °C, capillary 4000 V, nozzle 1000 V). Transitions used were: 2AG-d5 384.3 → 91.1 (frag 120 V, CE 62 V), 2AG 379.1 → 91 (frag 120 V, CE 62 V), AEA-d4 352.2 → 66.1 (frag 115 V, CE 14 V), AEA 348.2 → 62 (frag 120 V, CE 14 V), OEA-d4 330.2 → 66.1 (frag 120 V, CE 14 V), OEA 326.2 → 62 (frag 115 V, CE 14 V), PEA-d5 305.2 → 62 (frag 124 V, CE 14 V), and PEA 300.2 → 62 (frag 124 V, CE 14 V). Endocannabinoid quantification in tissues was performed on tissue samples collected at the two time points (S and W) from 5 (muscle tissue) and 6 (adipose tissue) bears (Supplementary Table S1). eCBs from tissues were quantified according to the isotope dilution method. Results are expressed as pg per mg of wet weight of tissue. eCBs from plasma were quantified using calibration curves obtained with authentic standards extracted by the same method used for plasma samples. Linear regression was applied for the calculations. Results are expressed as ng of endocannabinoid per mL of plasma.
Quantification of mRNAs by real-time RT-PCR
For mRNA quantification using RT-qPCR, total RNAs were obtained from muscle and adipose tissues collected at the two time points (S and W). For muscle tissue, RNAs were extracted from 8 bears in summer and winter, while for adipose tissue, RNAs were extracted from 5 bears in summer and 13 bears in winter (Supplementary Table S1). Muscle and adipose tissue total RNA was isolated using the TRIzol reagent (Invitrogen, Courtaboeuf, France) according to the manufacturer's instructions. First-strand cDNAs were synthesized from 1 μg of total RNA using the PrimeScript RT kit (Ozyme, Saint-Quentin-en-Yvelines, France) with a mixture of random hexamers and oligo(dT) primers, and treated with 60 units of RNase H (Ozyme). Real-time PCR assays were performed with a Rotor-Gene 6000 (Qiagen, Courtaboeuf, France). The primers and real-time PCR assay conditions are listed in Supplementary Table S4. The results were normalized using the TBP (TATA box binding protein) mRNA concentration, measured as reference gene in each sample.
Statistical analysis
Statistical analysis was performed using the R software environment v3.0.2 [77]. For each set of values, the distribution of the data was tested using the Shapiro-Wilk normality test, and using a p = 0.01 threshold a normal distribution was considered in all cases. Differences between summer and winter data were tested using paired Student t-tests for lipidomics, endocannabinoid quantification in plasma and tissues, and mRNA levels in muscle tissue. For mRNA levels in adipose tissue, differences between summer and winter data were tested using unpaired Student t-tests. For multiple comparisons (lipidomic data), the Benjamini-Hochberg correction was applied.

[…] mitochondria are present in myofibers, the subsarcolemmal and the intermyofibrillar ones [10]. Maintaining a functional mitochondrial network in muscle is fundamental to support the metabolic demands imposed by contraction.
Mitochondrial integrity and function are tightly regulated by quality-control systems (mitochondrial biogenesis, dynamics and degradation) in order to maintain homeostasis. However, mitochondrial dysfunction can result in several human muscle diseases called mitochondrial myopathies [19].
Amino acid reservoir. One of the major roles of skeletal muscle is to be the body's main reservoir of amino acids (AA). Muscle AA can be mobilized in the absence of an adequate nutritional intake to support numerous functions at the whole-body level [30]. For example, AA released by muscles serve as precursors for the maintenance of blood glucose levels through hepatic gluconeogenesis during fasting. However, the reduction of muscle mass compromises the body's capacity to respond to various stresses because of altered interactions between muscles and organs.
Muscle protein balance. In adult organisms, the regulation of muscle mass results from the growth of existing myofibers through intracellular signaling pathways that control protein balance [38]. The balance between the rates of muscle protein synthesis (MPS) and muscle protein breakdown (MPB) determines protein content and therefore muscle homeostasis. MPS and MPB are sensitive to numerous factors, notably nutritional status, hormonal balance, physical activity and disease. The decrease in muscle size in adults, i.e. muscle atrophy, results from a negative protein balance, whereas the increase in muscle size, i.e. hypertrophy, results from a positive balance. AA provided by an appropriate diet act as substrates and signals and are essential to induce MPS. One of the best-recognized actors of MPS is the mTORC1 complex, which plays a central role in the regulation of protein synthesis and ribosomal biogenesis [48]. For example, muscle-specific deletion of mTOR (the kinase of the complex) in mice induces a severe myopathy leading to premature death. Regarding MPB, the main systems that contribute to it are the autophagy-lysosome system (ALS) and the ubiquitin-proteasome system (UPS). The ALS involves the formation of the phagophore, which engulfs surrounding intracellular components such as damaged proteins and fuses with lysosomes, leading to the degradation of the protein content and the recycling of AA [55]. Under normal conditions, the ALS mainly prevents the accumulation of damaged organelles and misfolded proteins. In response to stress, such as fasting, the ALS acts primarily as a pro-survival mechanism in muscle, providing metabolic substrates [55]. Whereas an excessive autophagic flux contributes to muscle atrophy, inhibition of the ALS also leads to muscle atrophy [62]. In healthy muscles, damaged and depolarized mitochondria are selectively eliminated by mitophagy, a specific form of autophagy. Studies have demonstrated that mitophagy is essential for the maintenance of skeletal muscle homeostasis [10].
There is abundant evidence that alterations of mitophagy are present in muscle during many catabolic conditions leading to muscle atrophy [10]. The UPS also plays a fundamental role in muscle physiology, notably by degrading myofibrillar proteins [70]. Most proteins undergo degradation by being targeted to the 26S proteasome through the covalent attachment of an ubiquitin chain. These tagged proteins are then recognized by the 26S proteasome, which initiates an ATP-dependent degradation process. Through this mechanism, the UPS degrades its substrates specifically. Inhibition of proteasome activity in muscle is associated with defective muscle growth and a reduced lifespan in rodents [70].
10.6.1.2 Muscle atrophy
Causes. The loss of muscle mass and strength is called muscle atrophy. Its causes can be diverse, for example congenital or genetic, or acquired as a result of certain pathophysiological conditions [75] (Figure 51). Pathological conditions that lead to muscle atrophy include cancer cachexia, chronic obstructive pulmonary disease, diabetes and obesity, as well as conditions associated with anorexia or malnutrition [75] (Figure 51). Physical inactivity also leads to muscle wasting, as in the case of fractures, immobilization and prolonged bed rest, and even in people with a sedentary lifestyle, as observed during the COVID-19 lockdown [75] (Figure 51). Muscle atrophy results from an imbalance between MPS and MPB, in favor of MPB [75] (Figure 51). During my thesis, I was mainly interested in muscle atrophy induced by physical inactivity.
Consequences. Muscle is a major organ of glucose metabolism; as a result, muscle atrophy is closely linked to insulin resistance and the metabolic syndrome. In addition, muscle atrophy limits daily activities, reduces quality of life and prolongs recovery times after illness, while increasing morbidity and mortality (Figure 51). Given its harmful consequences, the rise in sedentary behavior and the lengthening of life expectancy worldwide, muscle wasting affects millions of people and remains a major social and economic burden. Currently, therapeutic strategies to limit muscle atrophy essentially comprise physical exercise and nutritional strategies, which are however not applicable to everyone (for example to immobilized patients or patients in intensive care units) [75]. To date, no drug has been approved for clinical use and no effective cure for muscle atrophy has been discovered. It is therefore necessary to better understand the underlying mechanisms and to discover new potential therapeutic targets. Our understanding has improved considerably over recent decades, mainly through the use of laboratory rodent models.
Molecular actors. MPS and MPB are influenced by a wide range of extra- and intracellular molecular actors. Extracellular stimuli include inflammatory factors such as cytokines or endocrine factors such as growth factors, which activate various intracellular pathways.
These interconnected intracellular actors contribute to the regulation of muscle protein balance by working in synergy or in opposition, favoring either anabolism or catabolism [38]. In the context of muscle atrophy, the dysregulation of one or several of these actors leads to an attenuation of anabolic signaling in favor of catabolism, which results either in the inhibition of MPS, in the overactivation of the UPS and ALS systems, or in both [38]. Atrogenes are designated as a set of genes whose expression […] How ATF4 facilitates such diverse cellular adaptations, ranging from anabolism to growth arrest, is an important and unresolved question. One possibility could come from the different ATF4 heterodimers, or different combinations of heterodimers, which would mediate the different effects of this signaling.
The role of the ISR in autophagy and mitochondrial homeostasis. As indicated above, autophagy and mitochondrial quality control are cellular processes essential for muscle homeostasis, and a deficiency in either one is associated with muscle atrophy [13]. The ISR is involved in both processes in a wide range of tissues and cells. Upon various stresses, ATF4 binds to the specific promoter of genes involved in autophagy to promote a pro-survival or a pro-lethal response, in a PERK- or GCN2-dependent manner (Figure 55). ISR signaling is also essential for mitochondrial quality control, through the UPRmt.
Muscle energy metabolism. Glycolysis is preserved in the muscles of hibernating bears [310]. This could help maintain muscle functionality in unexpected situations, such as an urgent exit from the den that would require a rapid increase in the production […]
Abstract (Résumé). Muscle atrophy affects millions of people around the world, including the elderly, people suffering from diseases, and people immobilized for long periods. The loss of muscle mass leads to a decline in autonomy, promotes the onset of diseases, increases resistance to the treatments put in place, and is associated with increased mortality. Muscle atrophy therefore constitutes a major public health problem. A great many biomolecular mechanisms that could explain the onset of muscle atrophy have been documented, mainly through the use of laboratory rodent models. Yet no treatment is truly effective and/or suitable for everyone today. The main objective of this thesis was to find new underlying mechanisms that could become therapeutic targets to fight muscle atrophy in humans. We chose an approach based on biomimicry. Our strategy was (1) to carry out a comparative physiology study between the brown bear model, naturally resistant to atrophy during hibernation, and the hindlimb-suspended mouse, susceptible to muscle atrophy, (2) to study the role of the ATF4 and TGF-β/BMP signaling pathways in these two models, and finally (3) to initiate studies on human muscle cells to validate the hypotheses arising from the first two studies.
In our first study, the strategy was to identify the genes differentially regulated in brown bear muscles between the hibernation period and the active period. We then compared them with those differentially regulated in the muscles of the hindlimb-suspended mouse relative to the control mouse. We showed that the concomitant inhibition of TGF-β signaling and induction of BMP signaling appeared to be crucial for the maintenance of muscle mass under prolonged physical inactivity. In our second study, we showed that the induction of the ATF4 signaling pathway in muscle was uncoupled from muscle atrophy in healthy mice or in mice subjected to physical inactivity when they were pre-treated with the molecule halofuginone, and also in the hibernating bear. In these three situations, the maintenance of muscle mass was associated with both the induction of ATF4 and BMP signaling and the inhibition of TGF-β. Finally, preliminary results obtained by culturing human muscle cells with serum from hibernating brown bears suggest the presence of a circulating active compound able to reproduce some of the features observed in the atrophy-resistant muscle of the hibernating brown bear. In conclusion, this work opens many perspectives for modulating the balance of the TGF-β and BMP signaling pathways in situations of prolonged physical inactivity. Moreover, it opens new research into the identification of active compounds in bear serum that could be usable in human clinical practice to limit or prevent the onset of muscle atrophy during immobilization or in other pathophysiological conditions.
List of figures:
Figure 1. Skeletal muscle organisation and contractile apparatus structure*.
Figure 2. Myokinome overview of muscle-organ crosstalk*.
Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*.
Figure 4. Muscle protein balance in physiological conditions*.
Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*.
Figure 6. Autophagy-lysosomal system*.
Figure 7. Ubiquitin-proteasomal system*.
Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*.
Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*.
Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis.
Figure 11. TGF-β superfamily organisation and signal transduction*.
Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*.
Figure 13. TGF-β signalling involvement in muscle atrophy*.
..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 2 . 2 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. 
Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 8 .Figure 9 . 89 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. 
............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 10 . 10 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. 
Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 11 . 11 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. 
Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 12 . 12 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. 
............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 13 . 13 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. 
Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 14 . 14 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. 
mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 15 . 15 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. 
...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 16 . 16 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. 
Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 17 . 17 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. 
.........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 18 . 18 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. 
Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 19 . 19 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. Myokinome overview of muscle-organ crosstalk*. 
...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 20 . 20 Figure 1. Skeletal muscle organisation and contractile apparatus structure* 1 . ...................................Figure 2. 
Myokinome overview of muscle-organ crosstalk*. ...............................................................Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*. .........................................Figure 4. Muscle protein balance in physiological conditions*. ...........................................................Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*. ...................Figure 6. Autophagy-lysosomal system*...............................................................................................Figure 7. Ubiquitin-proteasomal system*. ............................................................................................Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*. ... Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*. ............................................................................... Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis. ..........................................................................................................................................Figure 11. TGF-β superfamily organisation and signal transduction*. .................................................Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*. .........Figure 13. TGF-β signalling involvement in muscle atrophy*. ..............................................................Figure 14. BMP signalling involvement in muscle hypertrophy*. .........................................................Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*. ..................Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*. .......Figure 17. The Integrated Stress Response pathway organisation and signal transduction*. .............Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition.* ...............................................................................................................................................................Figure 19. The Integrated Stress Response involvement in muscle homeostasis*. .............................Figure 20. Pictures of hibernating brown bear dens in North Sweden. ...............................................Figure 21. Main physiological characteristics of hibernating bears. ..................................................... Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*. ..............................................................................................................................Figure 23. Protein content in bear muscles in summer, early denning and late denning. ................... Figure 24. Protein turnover in bear muscles in summer, early denning and late denning. ................. Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*. .......................................................................................Figure 26. Winter bear serum promotes hypertrophy in human muscle cells. .................................... Figure 21 .Figure 22 . 2122 Figure 1. 
Figure 1. Skeletal muscle organisation and contractile apparatus structure*1.
Figure 2. Myokinome overview of muscle-organ crosstalk*.
Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk*.
Figure 4. Muscle protein balance in physiological conditions*.
Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects*.
Figure 6. Autophagy-lysosomal system*.
Figure 7. Ubiquitin-proteasomal system*.
Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy*.
Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA)*.
Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis.
Figure 11. TGF-β superfamily organisation and signal transduction*.
Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes*.
Figure 13. TGF-β signalling involvement in muscle atrophy*.
Figure 14. BMP signalling involvement in muscle hypertrophy*.
Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions*.
Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis*.
Figure 17. The Integrated Stress Response pathway organisation and signal transduction*.
Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition*.
Figure 19. The Integrated Stress Response involvement in muscle homeostasis*.
Figure 20. Pictures of hibernating brown bear dens in North Sweden.
Figure 21. Main physiological characteristics of hibernating bears.
Figure 22. Overview of the spectacular characteristics of bears resistance to physiological damage during hibernation*.
Figure 23. Protein content in bear muscles in summer, early denning and late denning.
Figure 24. Protein turnover in bear muscles in summer, early denning and late denning.
Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation*.
Figure 26. Winter bear serum promotes hypertrophy in human muscle cells.

Study 1
Figure 27. Schema of the experimental strategy of study 1*.
Figure 28. Hindlimb suspension model in laboratory mice.
Figure 29. Pictures from the collection of free-ranging bear samples in the forest of Northern Sweden as part of the Brown Bear Research Project.
Figure 30. Graphical abstract of the study 1*.
Figure 31. Detailed enriched terms from the biological processes "Protein metabolism" and "RNA metabolism" from the differentially expressed genes in Cussonneau et al., 2021.
Figure 32. Western blots of total and phosphorylated SMAD1/5, and SMAD4 in CCL136 human muscle cells expressing an inactive BMP receptor.
Figure 33. Co-immunoprecipitation of SMAD4 with SMAD1/5 or SMAD2/3 in bear muscles.
Figure 34. SMAD4 linker protein sequences in different species.
Figure 35. SMAD4 degradation involving MAPK, GSK3 and β-TrCP proteins*.
Figure 36. Hypothetical schema to explain the denervation-induced muscle atrophy resistance in the hibernating brown bears muscles*.
Figure 37. Hypothetical schema of the origin of muscle transcriptomic changes of TGF-β superfamily*.

Study 2
Figure 38. Schema of the experimental strategy of study 2*.
Figure 39. Graphical abstract of the study 2*.
Figure 40. Chemical structures of febrifugine and halofuginone.
Figure 41. Effect of halofuginone treatment prior to hindlimb suspension on extracellular matrix components expression in gastrocnemius muscle in mice.
Figure 42. Halofuginone treatment for 2 weeks prior to hindlimb suspension mitigates atrophy in gastrocnemius muscle in mice.
Figure 43. Effect of halofuginone treatment prior to hindlimb suspension in soleus muscle.
Figure 44. Effect of halofuginone treatment prior to hindlimb suspension on autophagy-lysosomal system in gastrocnemius muscle.
Figure 45. ATF4 signalling regulation in atrophy-resistant muscle of the hibernating bear.
Figure 46. Worldwide geographic repartition of (A) Hydrangea macrophylla and (B) brown bear living area (Ursus arctos).
Figure 47. Hypothetical halofuginone and halofuginone-like molecular mechanisms in skeletal muscle*.

Study 3
Figure 48. Schema of the experimental strategy of study 3*.
Figure 49. Human myotubes cultivated with winter bear serum induces similar transcriptional changes than those occurring in hibernating bears muscles.
Figure 50. Winter bear serum mimics in human muscle cells what occurs naturally in the muscles of hibernating bears.
Figure 1. Skeletal muscle organisation and contractile apparatus structure.
Figure 2. Myokinome overview of muscle-organ crosstalk.
Figure 3. Myokinome overview of muscle autocrine/paracrine crosstalk.
Figure 4. Muscle protein balance in physiological conditions.
Figure 5. mTORC1 and mTORC2 intracellular organisation and their biological effects.
Figure 6. Autophagy-lysosomal system.
Figure 7. Ubiquitin-proteasomal system.
Figure 8. Muscle protein imbalance in pathophysiological conditions leading to muscle atrophy.
Figure 9. Histopathological characteristics of healthy muscle (A) versus atrophied muscle (B) with a decrease of myofiber cross sectional area (CSA).

… [100]. FBXO32/Atrogin-1 is involved in the degradation of ribosomal proteins and translation initiation factors, as well as several other proteins such as myoblast determination protein (MyoD), desmin, and vimentin (i.e. the intermediate filament in muscle). Thus, overexpression of FBXO32/Atrogin-1 in the context of muscle atrophy may reduce MPS and regeneration and thus lead to muscle wasting [101]. Atrogenes are markers of atrophy, but their involvement as active inducers of atrophy remains an open question. Furthermore, whether rodent atrogenes are shared with humans remains to be established for most of them.

Drugs targeting. Several actors in this interconnected network have proven effective when targeted to limit or counteract skeletal muscle atrophy in rodent models.
For instance, targeting the myostatin ligand or its activin receptor type 2B (TGF-β signalling) has shown beneficial effects in preserving muscle mass in different catabolic conditions (see section 4.2.2.2 below). Moreover, the stimulation of β2-adrenoceptors prevents or even reverses muscle wasting and weakness in several catabolic conditions, including cancer cachexia [102], ageing [103] and muscular dystrophies [104]. Yet, so far, no effective drug has reached clinical practice.

Figure 10. Overview of anabolic and catabolic signalling pathways involved in muscle protein homeostasis (from Peris-Moreno et al., 2021, see Appendix 10.4 [38]).

Figure 11. TGF-β superfamily organisation and signal transduction.

… ribosylation and linker-domain phosphorylation, all of which change the fate of the intracellular response [105]. The TGF-β superfamily is a master regulator of adult muscle mass, with (1) TGF-β signalling as a negative regulator and (2) BMP signalling as a positive regulator [110,111].

Figure 12. TGF-β superfamily regulation by E3-ubiquitin ligase and deubiquitinase enzymes. Green and blue circles respectively represent deubiquitinase and E3-ubiquitin ligase enzymes. The schema is adapted from Cussonneau et al., 2021. Full and dotted lines respectively represent the SMAD and non-SMAD signalling. SBE: SMAD Binding Element.

Figure 13. TGF-β signalling involvement in muscle atrophy.

Figure 14. BMP signalling involvement in muscle hypertrophy.

Figure 15. SMAD4 recruitment in muscle in basal (A) or muscle wasting (B) conditions.

TRAF6, a pro-atrophic actor. TRAF6 mediates the activation of p38 and induces the expression of the atrogenes Trim63/MuRF1 and Fbxo32/Atrogin-1, as well as autophagy-related actors, in atrophying muscles during denervation and starvation in mice [176-178]. In addition, TRAF6 protein levels increase in muscles of gastric cancer patients [179]. Conversely, Traf6 deletion suppresses the increased expression of Fbxo32/Atrogin-1 and Trim63/MuRF1, improves AKT phosphorylation, and limits muscle atrophy in mice during ageing, starvation, denervation, cancer cachexia or dexamethasone treatment [176-179].

A dual role for TAK1. TAK1-p38 signalling is activated by activin A treatment in mouse myotubes and in vivo, leading to up-regulation of Fbxo32/Atrogin-1 and muscle atrophy. Interestingly, the catabolic effect of activin A was abolished by administration of a p38 inhibitor [180]. Moreover, muscle damage was alleviated through pharmacological inhibition of the TGF-β1-TAK1 axis by the neuroprotective molecule catalpol in a model of Duchenne muscular dystrophy [181].

Figure 16. Non-SMAD TGF-β/BMP signalling and its dual involvement in muscle homeostasis.

Figure 17. The Integrated Stress Response pathway organisation and signal transduction.

Figure 18. ATF4 mRNA sequence and its translation upon (A) basal condition or (B) stress condition. uORF: upstream open reading frame; CDS: coding sequence.

Figure 19. The Integrated Stress Response involvement in muscle homeostasis.

MR and Tb. Hibernating bears show a 75-85% decline in MR, and their Tb only decreases by a few degrees Celsius compared to the values of the active summer season, remaining around 32-33°C [276-280] (Figure 21).
Thermoregulatory mechanisms could explain the maintenance of a relatively high Tb, but so could body insulation due to high fur coverage, accumulation of subcutaneous fat, and den …

Figure 20. Pictures of hibernating brown bear dens in North Sweden.

Figure 21. Main physiological characteristics of hibernating bears. Average of the daily mean values for ambient temperature (A), bear body temperature (B), heart rate (C) and activity level in accelerometry units (D) for 14 individual free-ranging brown bears in central Sweden collected over 3 years. The X-axis indicates the time of year. Green vertical bars indicate the den entry and exit periods (from Evans et al., 2016 [279]).

Figure 22. Overview of the spectacular characteristics of bears' resistance to physiological damage during hibernation.

Figure 23. Protein content in bear muscles in summer, early denning and late denning (from Lohuis et al., 2007 [301]).

Figure 24. Protein turnover in bear muscles in summer, early denning and late denning (from Lohuis et al., 2007 [301]).

Figure 25. A complex and non-exhaustive overview of the mechanisms identified in bears to save muscle protein content during hibernation.

… maintenance of muscle mass during hibernation in bears involves one or more circulating factors. The identification of these factors will undoubtedly open up a new field of study that will lead to new solutions for preventing and/or reversing muscle atrophy in humans.

4.4.6 The ISR and TGF-β superfamily signalling regulation in hibernating mammal muscles

Very few papers have explored the regulation of ISR or TGF-β/BMP signalling in the skeletal muscle of hibernating mammals, and only one in hibernating bears.

TGF-β/BMP signalling. Studies have shown that (1) myostatin protein expression does not change during early or late torpor in muscles of thirteen-lined ground squirrels (Spermophilus tridecemlineatus), but …

Figure 26. Winter bear serum promotes hypertrophy in human muscle cells.

6. Study 1: Concurrent BMP Signaling Maintenance and TGF-β Signaling Inhibition Is a Hallmark of Natural Resistance to Muscle Atrophy in the Hibernating Bear (published paper)

6.1 Objective and strategy

Figure 27. Schema of the experimental strategy of study 1.

Figure 28. Hindlimb suspension model in laboratory mice.

Figure 29. Pictures from the collection of free-ranging bear samples in the forest of Northern Sweden as part of the Brown Bear Research Project.

…, and the 10 top-scoring enriched transcription factors regulating the DEGs are shown in Figure 1d. The heat map representing the expression changes of the TGF-β/BMP gene sets in bear versus mouse muscles was made using the Pheatmap package (R 1.4.1106, University of Tartu, Tartu, Estonia). Briefly, the gene hierarchical clustering is based on the Euclidean distance calculated from the log2FC values (Winter/Summer and Unloaded/Control for bear and mouse muscles, respectively).
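As an illustration only, the clustering step described above can be sketched in a few lines of R. The principle (hierarchical clustering of genes on Euclidean distances between log2FC values with the pheatmap package) is taken from the text, but the object name, column labels, colour scale and linkage method below are assumptions made for the example and do not reproduce the original analysis script.

```r
# Minimal sketch, assuming 'tgfb_bmp_log2fc' is a data frame with one row per
# TGF-beta/BMP gene and two columns: the bear Winter/Summer log2FC and the
# mouse Unloaded/Control log2FC (column names are illustrative).
library(pheatmap)

mat <- as.matrix(tgfb_bmp_log2fc[, c("bear_W_vs_S", "mouse_U_vs_C")])

pheatmap(mat,
         clustering_distance_rows = "euclidean",  # gene clustering on log2FC values
         clustering_method        = "complete",   # linkage not stated in the text; assumed
         cluster_cols             = FALSE,        # keep the two comparisons side by side
         color = colorRampPalette(c("green", "white", "red"))(100),
         show_rownames = TRUE)
```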
Figure 1. Deep changes in brown bear muscle transcriptome during hibernation. (a) Heatmap from brown bear muscle (vastus lateralis) transcripts (n = 6 bears/season, the same individuals were sampled and analyzed in summer and winter); red indicates high and green indicates low expression level of the 13531 genes. (b) Graph representing the 10 top-score of significantly enriched …

Figure 2. Deep transcriptomic reprogramming of TGF-β and BMP pathways in muscle during brown bear hibernation. Scheme showing brown bear vastus lateralis muscle transcripts involved in TGF-β and BMP signaling and depicting (i) their relationships [55-57] and (ii) the difference in their expression levels between hibernation and activity periods. Red and green boxes indicate, respectively, up- and down-regulated genes during hibernation compared to the summer season, and white boxes indicate unchanged genes. Target genes of the TGF-β and BMP pathways are indicated in italic and are in green when down-regulated, in red when up-regulated, and in black when unchanged. Dashed lines show the SMAD-independent pathway and full lines the canonical signaling pathway. Arrows indicate activation, and ⊥ bars indicate inhibition. (n = 6 bears/season, the same individuals were sampled and analyzed in summer and winter, padj < |0.05|; Tables S2 and S3). ECAg: Extracellular Agonist, ECAn: Extracellular Antagonist, L: Ligand, Co-R: Co-Receptor, RII: Receptor type II, RI: Receptor type I, TA: Transcriptional Activator, TR: Transcriptional Repressor, SBEs: SMAD Binding Elements. Created with BioRender.com.

Figure 3. A transcriptomic reprogramming of UPS components involved in TGF-β and BMP regulation prevails in muscle during brown bear hibernation. Scheme showing the E3s/DUBs enzymes regulation of brown bear vastus lateralis muscle transcripts involved in TGF-β and BMP signaling pathways and depicting (i) their relationships [12,60] and (ii) the difference in their expression levels between hibernation and activity periods. Red and green boxes indicate, respectively, up- and down-regulated genes during hibernation compared to the summer season, and white boxes indicate unchanged genes. Dashed lines show the SMAD-independent pathway and full lines the canonical signaling. Arrows indicate activation and ⊥ bars inhibition. (n = 6 bears/season, the same individuals were sampled and analyzed in summer and winter, padj < |0.05|; Table S4). SBEs: SMAD Binding Elements. Created with BioRender.com.

Figure 4. The gene expression pattern of TGF-β and BMP components is different in brown bear muscle resistant to atrophy during hibernation compared to atrophied muscles of the unloaded mouse. Genes expression level in vastus lateralis muscle of …

Figure 5. The gene expression pattern of muscle E3s/DUB enzymes involved in TGF-β and BMP pathway regulation differed in the atrophy-resistant brown bear muscle during hibernation compared to the atrophied muscle of the unloaded mouse. Genes expression level (a) in vastus lateralis muscle of active and hibernating brown bears (n = 6 bears/season, the same individuals were sampled and analyzed in summer and winter, log2FC Winter/Summer, dotted white bars), and (b) in soleus muscle of control and unloaded mice (n = 4 mice per condition, log2FC Unloaded/Control, gray bars). Data are expressed as log2FC ± lfcSE. Statistical significance is shown (* padj < |0.05|; ** padj < |0.01|; *** padj < |0.001|).

Figure 6. Hibernation induces changes in TGF-β and BMP components at protein level. Protein levels for total SMAD2/3, SMAD1/5, SMAD4, CCN2, and GDF5 were assessed by Western blots (a) in the vastus lateralis muscle of brown bears during summer (S) and winter (W) and (b) in the soleus muscle of control (C) and unloaded mice (U).
Representative western blots are shown for three couples of bears and mice. (c-h) Data are presented as individual values with mean bars (n = 11 bears/season, the same individuals were sampled and analyzed in summer and winter, and n = 6 mice per condition). Gray and black dots are for muscles of bears, in summer and winter, respectively, and gray and black triangles are for control and unloaded muscle of mice, respectively.

Supplementary Figure 1. The gene expression pattern of TGF-β and BMP components is different in brown bear muscle resistant to atrophy during hibernation compared to atrophied muscles of the unloaded mouse. Heatmap from vastus lateralis muscle of active and hibernating brown bears (n = 6 bears/season, the same individuals were sampled and analyzed in summer and winter, log2FC Winter/Summer), and soleus muscle of control and unloaded mice (n = 4 mice per condition, log2FC Unloaded/Control) of 171 TGF-β/BMP related genes. The green and red colours indicate that the gene expression decreased or increased, respectively, and each line represents one gene.

Figure 30. Graphical abstract of the study 1.

Figure 32. Western blots of total and phosphorylated SMAD1/5, and SMAD4 in CCL136 human muscle cells expressing an inactive BMP receptor. CCL136 human muscle cells were transfected with the plasmid ALK3-KD (kinase dead) receptor, followed by 6 h treatment with GDF5 ligand. BMP intracellular actors SMAD1/5 and SMAD4 were measured by Western blotting (see Appendix 10.1).

Figure 33. Co-immunoprecipitation of SMAD4 with SMAD1/5 or SMAD2/3 in bear muscles. SMAD4 immunoprecipitation (IP) in vastus lateralis muscle from hibernating versus active brown bears was followed by immunodetection of SMAD1/5, SMAD4 and SMAD2/3 by Western blotting (IB). IgGs were used as negative control. W: Winter; S: Summer (see Appendix 10.1).

Figure 34. SMAD4 linker protein sequences in different species. The bold black T represents threonine 272, while the bold blue S represents the serine replacing it at position 272. The red box represents the Ursidae family members, while the black box represents examples of small hibernators.

… (see Paper 6.3) [351]. CCN2 is a secreted matricellular protein predominantly expressed during development in almost all tissues, during numerous pathological conditions that involve enhanced fibrogenesis, and during several cancers [372].

CCN2 and TGF-β signalling. CCN2 expression is regulated by growth factors, cytokines and hormones, including TGF-β1 [372]. TGF-β1 induces CCN2 gene expression and plays an important role in fibrosis, especially during dystrophies [372-375]. In turn, once secreted, CCN2 directly interacts with TGF-β ligands and thereby facilitates signal transduction [373,376].

Figure 35. SMAD4 degradation involving MAPK, GSK3 and β-TrCP proteins.

Interestingly, CCN2 gene expression … [377]. The strong reduction in CCN2 mRNA and protein expression in winter bear muscles is consistent with the very limited mechanical demand in hibernating bear muscles, and the general downregulation of TGF-β signalling [351]. CCN2 and TGF-β1 proteins are significantly overexpressed in muscles from Duchenne muscular dystrophy patients, and both are positively correlated with the degree of pathology and clinical severity [378].
Furthermore, TGF-β signalling also induces the gene expression of other CCN family members with common biological actions, such as CCN4 (also known as WISP1) [379], which is strongly downregulated in hibernating bear muscles compared to summer in our study (see Paper 6.3) [351]. It remains to be determined whether the downregulation of CCN2 at the transcriptomic and proteomic level in the hibernating bear muscles is a cause and/or a consequence of the TGF-β signalling inhibition. However, CCN2 inhibition is likely to be a consequence of reduced TGF-β signalling, given the large number of downregulated TGF-β target genes in winter bear muscles (see Paper 6.3) [351].

CCN2 and BMP signalling. In addition, CCN2 can antagonize the activity of BMP4 and BMP7 ligands by preventing their binding to BMP receptors in Xenopus embryos and mouse kidneys, respectively, resulting in reduced SMAD1/5 signal transduction [376,380]. Moreover, surface plasmon resonance spectroscopy shows that the CCN2 and GDF5 proteins interact [381]. Whether CCN2 inhibits the transduction of BMP signalling in muscles remains an open question. However, our data show that CCN2 downregulation is correlated with the maintenance of BMP signalling in winter bear muscles.

From a clinical perspective. CCN2 is considered a therapeutic target in combating fibrosis and related disorders in a variety of organs and tissues. Muscle function is improved by an anti-CCN2 antibody in a mouse model of Duchenne muscular dystrophy [382]. In addition, phase 2 and phase 3 clinical trials are currently testing another fully human monoclonal antibody that interferes with the action of CCN2 during Duchenne muscular dystrophy (ClinicalTrials.gov Identifier: NCT02606136 and NCT04632940).

… profound changes in circulating lipids may have altered the composition of membrane lipids and therefore the membrane fluidity of organs, including muscle. The plasma membrane is a critical hub for signalling proteins. Membrane lipids are organised into different microdomains rich in specific lipid species, which attract different types of proteins [383,384]. A change in membrane lipid composition may have altered the heterodimerisation of TGF-β superfamily receptors, which is known to be a dynamic process [107]. TGF-β receptors are distributed in both lipid rafts/caveolae and non-raft membrane microdomains (i.e. clathrin-coated pits). The internalisation of TGF-β receptors via clathrin-coated pits enhances TGF-β signalling, whereas lipid raft-mediated endocytosis of TGF-β receptors facilitates receptor degradation and thus the turn-off of signalling [385-387]. Cholesterol has been suggested to inhibit SMAD2 activation, promote TGF-β receptor degradation, and therefore inhibit TGF-β signalling. This effect may result from the shifted localisation of TGF-β receptors from non-raft to lipid-raft microdomains in the plasma membrane [386-388]. On the contrary, BMP receptors have been suggested to undergo lipid raft-mediated endocytosis, and a decrease in cholesterol level specifically blocks BMP receptor-mediated intracellular signalling [389]. To test the dynamics of TGF-β superfamily receptors, we could treat myotubes with winter or summer bear serum and analyse the localisation of the receptors by confocal microscopy. We could also perform lipidomic analysis of membrane phospholipids in myotubes or biopsied bear muscles by functional two-photon microscopy.

6.4.2.5 Is the resistance to denervation-induced muscle atrophy in hibernating bears related to BMP signalling?
BMP signalling in NMJ organisation. BMP signalling is important for the conservation of muscle mass when the neuromuscular junction (NMJ) is compromised, as reported in a model of denervation-induced muscle atrophy in mice [166-168]. In addition, disruption of presynaptic architecture and NMJ degeneration, concomitant with BMP signalling perturbation, are observed in muscles of tumour-bearing mice before muscle loss occurs [168]. The same feature was also observed in muscles from pre-cachectic cancer patients [168]. Promoting BMP signalling using genetic or drug-based interventions (tilorone) preserves NMJ function during the development of cachexia and therefore counteracts muscle atrophy [168]. On the contrary, overexpression of the BMP inhibitor noggin in muscles of healthy mice induced muscle atrophy, mimicked the loss of presynaptic motor neuron terminals and increased the presence of denervation markers [168]. The BMP pathway regulates peripheral synaptic development and plasticity in Drosophila [390,391] and is essential for proper axon elongation in motor neurons during development in mice [392]. However, the role of BMP in controlling postnatal NMJ remodelling in adult mammals, particularly in pathological contexts, remains largely unexplored. TAK1 is involved in non-SMAD TGF-β/BMP signalling (see section 4.2.4.2) and the …

Figure 36. Hypothetical schema to explain the denervation-induced muscle atrophy resistance in the hibernating brown bears muscles.

… communication. A new concept has emerged that bone also acts as an endocrine tissue targeting other organs such as muscle. Therefore, muscle and bone communicate via soluble factors [400]. Both bone and muscle volumes are sensitive to mechanical loading, which regulates many of their secreted factors. Therefore, muscle and bone mass are both reduced during immobilisation [401]. Bone resorption releases TGF-β1 into the bloodstream in pediatric burn patients and tumour-bearing mice [402,403]. The use of antiresorptive drugs protects bone and muscle mass, demonstrating that a factor released by bone contributes to muscle wasting in these conditions. TGF-β1 released from bone suppresses activation of the AKT/mTOR anabolic pathway and promotes expression of UPS players in myoblasts in vitro [403]. TGF-β ligands are produced by osteoblasts, stored in the extracellular matrix of bone and released by osteoclastic proteolysis during bone resorption [404,405].

Figure 37. Hypothetical schema of the origin of muscle transcriptomic changes of TGF-β superfamily.

7. Study 2: Induction of ATF4 atrogenes is uncoupled from disuse-induced muscle atrophy in halofuginone-treated mice and in hibernating brown bear

7.1 Objective and strategy

Figure 38. Schema of the experimental strategy of study 2.

Figure 1. Halofuginone activates the expression of ATF4-regulated atrogenes in muscle without leading to atrophy. (A) Schematic representation of the experimental protocol, where mice received H2O (white bars) or HF (0.25 µg/g, grey bars) 3 times a week for up to 4 weeks (WK). Muscles were collected 6 h after the last HF administration at the end of each week (dotted arrows). (B-F) Relative mRNA levels in gastrocnemius for Atf4, Trib3, Cdkn1a, Gadd45a, and Eif4ebp1 were measured by RT-qPCR. Data were normalised using Tbp. Data are expressed as fold change vs. H2O within each week and are presented as individual values with mean bars ± SEM. (G) Gastrocnemius muscle mass per gram of body weight (BW). Data are expressed as a percentage from H2O within each week and presented as individual values with mean bars ± SEM. Statistics are described in Section 4. * padj < 0.05; ** padj < 0.01; *** padj < 0.001; **** padj < 0.0001.
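Several captions in this chapter report transcript levels "normalised using Tbp" and expressed as fold change versus the control group. The exact computation is not spelled out in the text; a standard formulation for this kind of reference-gene normalisation of RT-qPCR data, given here only as a hedged reminder and not as the authors' exact procedure, is the 2^(-ΔΔCt) method:

\[
\Delta C_t = C_t^{\text{target}} - C_t^{Tbp}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{sample}} - \overline{\Delta C_t}^{\,\text{control group}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_t}.
\]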
Figure 2. Hindlimb suspension induces ATF4 pathway. (A) Schematic representation of the experimental protocol, where mice received H2O or halofuginone (HF) oral administration (0.25 µg/g) 3 times a week for 3 weeks (WK) (black arrows) and were then subjected to hindlimb suspension …

Figure 3. Halofuginone treatment prior to hindlimb suspension mitigates atrophy in gastrocnemius muscle. Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 (HS3, light grey bars) or 7 (HS7, white bars) days or kept unsuspended (Ctrl, dark grey bars), as described in Figure 2A. (A) Gastrocnemius muscle mass per gram of body weight (BW). Data are expressed as a percentage from H2O-Ctrl and presented as individual values with mean bars ± SEM. (B) Mean fibre cross-sectional area in gastrocnemius muscle. Data are presented as individual values with mean bars ± SEM. Statistics are described in Section 4. ** padj < 0.01; *** padj < 0.001; **** padj < 0.0001; ns = non-significant.

Figure 4. Halofuginone treatment inhibits TGF-β while promoting BMP signalling in gastrocnemius muscle. Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 (HS3, light grey bars) or 7 (HS7, white bars) days or kept unsuspended (Ctrl, dark grey bars), as described in Figure 2A. (A-D) The ratio of protein levels in gastrocnemius for the transcription factors SMAD2/3 (TGF-β signalling), SMAD1/5 (BMP signalling), and SMAD4 (TGF-β and BMP signalling) was assessed in the nuclear and cytosolic subcellular fractions, quantified, and normalised to the total protein content. Representative Western blots are shown. The ratio of nuclear SMAD content to the total (cytosolic and nuclear) SMAD content was calculated. Data are expressed as fold change vs. H2O-Ctrl and presented as individual values with mean bars ± SEM. Statistics are described in Section 4. * padj < 0.05; **** padj < 0.0001.
Figure 5. Halofuginone treatment prior to hindlimb suspension partially prevents the decrease in protein synthesis in gastrocnemius muscle. Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 (HS3, light grey bars) or 7 (HS7, white bars) days or kept unsuspended (Ctrl, dark grey bars), as described in Figure 2A. (A,B) Relative puromycin incorporation into gastrocnemius muscle was assessed by Western blotting, quantified, and normalised to the total protein content. A representative Western blot is shown. (C-E) Relative mRNA levels in gastrocnemius for Trim63, Fbxo32, and Fbxo30 were measured by RT-qPCR. Data were normalised using Tbp. Data are expressed as fold change vs. H2O-Ctrl and presented as individual values with mean bars ± SEM. Statistics are described in Section 4. * padj < 0.05; **** padj < 0.0001, or ns = non-significant.

Figure 6. ATF4-regulated atrogenes are induced in atrophy-resistant hibernating brown bear muscles.
Gene expression levels for ATF4, GADD45A, CDKN1A, TRIB3, EIF4EBP1, PPP1R15A, ASNS, and DDIT3 in vastus lateralis muscle of active and hibernating brown bears (n = 6 bears/season, the same individuals were sampled and analysed in summer and winter, log2FC winter/summer). Data are presented as individual values as log2FC with mean bars ± lfcSE (log2 fold change standard error). Statistics are described in [46]. * padj < 0.05; ** padj < 0.01; **** padj < 0.0001. FC: fold change; W: winter (hibernating season); S: summer (active season).

Figure 7. Graphical abstract. The red and green lines represent catabolic and anabolic effects, respectively. Dotted lines represent hypothetical connections. The arrows/T bars above the ATF4 atrogenes, SMAD2/3, and SMAD1/5 boxes represent the induction/inhibition by halofuginone or by an as-yet-unknown mechanism in mouse or bear muscle, respectively. Created with BioRender.com.

Author Contributions: Conceptualisation, L.C. (Laura Cussonneau), A.-C.M. and L.C. (Lydie Combaret); methodology, L.C. (Laura Cussonneau), L.C. (Lydie Combaret), C.D., C.C.-G., G.C., J.H., M.D.-M. and Y.D.; analysis, L.C. (Laura Cussonneau), L.C. (Lydie Combaret), C.D. and C.C.-G.; writing-original draft preparation, L.C. (Laura Cussonneau) and L.C. (Laura Cussonneau); writing-review and editing, all authors; project administration, L.C. (Laura Cussonneau); funding acquisition, E.L., F.B. and L.C. (Laura Cussonneau). All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the Center National d'Etudes Spatiales (CNES, #865, #974, #1006, and #1905), the iSITE Challenge 3 Mobility program (Université Clermont Auvergne), and the Agence Nationale de la Recherche (B-STRONG ANR-22-CE14-0018). L.C. (Laura Cussonneau) was supported by a grant from the Institut National de la Recherche Agronomique et Environnement and Clermont Métropole. The long-term funding of the Scandinavian Brown Bear Research Project (SBBRP) comes primarily from the Swedish Environmental Protection Agency, the Norwegian Environment Agency, the Austrian Science Fund, and the Swedish Association for Hunting and Wildlife Management.

Figure 39. Graphical abstract of the study 2.

Figure 40. Chemical structures of febrifugine and halofuginone (from Pines and Spector, 2015 [416]).

Figure 41. Effect of halofuginone treatment prior to hindlimb suspension on extracellular matrix components expression in gastrocnemius muscle in mice. Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 (HS3) or 7 (HS7) days or kept unsuspended (Ctrl). (A-D) Gastrocnemius relative mRNA levels for Col1a1, Col3a1, Col5a2 and Col6a1 by RT-qPCR (see Appendix 10.2). Data were normalised using the Tbp gene. Data are means ± SEM (expressed as fold change vs. H2O-Ctrl). Two-way ANOVA: * padj < 0.05; ** padj < 0.01; *** padj < 0.001; **** padj < 0.0001.

… (see Paper 7.2) [423].
We also recorded slight preservation of muscle mass during HS in mice that were pretreated with HF for a shorter period (i.e. 2 weeks) (Figure 42). This slight preservation of muscle mass occurred even though ATF4 atrogenes were overexpressed during HS both in mice pre-treated for 2 or 3 weeks with HF (see Paper 7.2 and data not shown) [423]. The effect of HF may depend on the nature of the muscle. The slight preservation of gastrocnemius CSA observed in HF-treated mice subjected to HS was mainly observed in glycolytic fibres (i.e. type 2X/2B) (see Paper 7.2) [423]. Oxidative fibres (i.e. type 1/2A) are well documented to be more susceptible to disuse-induced atrophy than glycolytic fibres [425]. Unlike the glycolytic gastrocnemius muscle, the mass of the oxidative soleus muscle was not protected after 3 days of HS when mice were …

Figure 42. Halofuginone treatment for 2 weeks prior to hindlimb suspension mitigates atrophy in gastrocnemius muscle in mice.

Figure 43. Effect of halofuginone treatment prior to hindlimb suspension in soleus muscle. Mice were treated with H2O or halofuginone (HF, 0.25 µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 (HS3) or 7 (HS7) days or kept unsuspended (Ctrl). (A) Soleus muscle mass (mg) represented with means ± SEM. (B) Mean fibre cross-sectional area (CSA) of soleus muscle in Ctrl, HS3 and HS7. Data are means ± SEM. Two-way ANOVA. * padj < 0.05; ** padj < 0.01; **** padj < 0.0001. (C) Frequency distribution of fibre CSA of soleus muscle in Ctrl, HS3 or HS7 of mice treated with H2O or HF, for all fibre types. Data are means ± SEM.

7.3.3.2 ATF4 signalling in hibernating bear muscles

Induction of ATF4 atrogenes is associated with moderate muscle atrophy in hindlimb-suspended mice treated with HF or in hibernating brown bear muscles (see Paper 7.2) [423]. We further explored the transcriptome of the muscle of the hibernating brown bear from study 1 [351] to analyse the ATF4 gene signature. Based on an extensive literature review and the use of databases (i.e. GeneCards), we established a list of ATF4-related genes (see Appendix 10.2). Using the same strategy as in study 1, we performed an enrichment analysis from the down- (161) and up-regulated (105) ATF4-related genes.

Figure 45. ATF4 signalling regulation in atrophy-resistant muscle of the hibernating bear.

8. Study 3: Winter bear serum induces similar characteristics in human muscle cells as those found naturally in hibernating bear muscle

8.1 Objective and strategy

This last part contains preliminary results. We sought to replicate the molecular characteristics of atrophy-resistant muscles of hibernating bears in human muscle cells. Our team previously reported an increase in total protein content in human myotubes (HM) cultured with hibernating bear serum. This result proved for the first time that a circulating compound in bear serum could transfer biological properties to human muscle cells [341]. In this thesis project, we showed concurrent TGF-β inhibition and BMP activation in atrophy-resistant muscles of the hibernating brown bear (see Paper 6.3) [351], which we replicated in muscles of hindlimb-suspended mice treated with HF (see Paper 7.2) [423]. We aimed at determining whether a compound in bear serum during hibernation could reproduce these changes in TGF-β/BMP balance in human muscle cells.
Our strategy was first to analyse microarray data from human muscle cells cultivated with winter bear serum (WBS) or summer bear serum (SBS), to assess whether there is a transcriptomic signature of TGF-β/BMP signalling. Subsequently, we optimised tools to measure TGF-β/BMP transcriptional activity through their canonical or non-canonical signalling, using luciferase reporter assays in human muscle cells cultivated with SBS or WBS (Figure 48).

Figure 48. Schema of the experimental strategy of study 3.

Figure 49. Human myotubes cultivated with winter bear serum induce similar transcriptional changes to those occurring in hibernating bear muscles. Human myotubes were cultured with winter bear serum (W) or summer bear serum (S) for 48 hours. Gene expression assessed by DNA microarrays was analysed focusing on TGF-β and BMP signalling components. Data are expressed as log2FoldChange (FC) W/S ± lfcSE of 3 independent experiments (different cell preparations and bear serum mixes). Statistical significance is shown: * padj < 0.05.

Figure 50. Winter bear serum mimics in human muscle cells what occurs naturally in the muscles of hibernating bears. CCL136 cells were transfected with (A) ID1-Luc, (C) SBE-Luc and (D) MEF2-Luc and cultivated with summer bear serum (SBS) or winter bear serum (WBS) for (A) 6, 12 or 24 hours or (B and C) 24 hours. Cells were then lysed and luciferase activity was measured. Data are presented as individual values with mean bars (n = 8-12 bear sera/season; the same individuals were sampled and analysed in summer and winter). (B) Human primary myotubes (HM) were transfected with ID1-Luc, cultivated with SBS or WBS for 24 hours, and then lysed before measuring luciferase activity (see Appendix 10.3). Data are presented as individual values with mean bars (n = 4 bear sera/season). Statistical significance is shown (ratio paired t-test): * pvalue < 0.05; ** pvalue < 0.01; ns: non-significant.
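For readers unfamiliar with the "ratio paired t-test" reported in the caption above, it is commonly implemented as a paired t-test on log-transformed values, i.e. a test of whether the mean winter/summer ratio differs from 1. The short R sketch below illustrates this under that assumption; the data frame 'luc' and its columns are hypothetical placeholders, not the actual dataset.

```r
# Minimal sketch of a ratio paired t-test (assumption: paired t-test on logs).
# 'luc' is an illustrative data frame with one row per bear and the luciferase
# activity measured with that bear's summer (sbs) and winter (wbs) serum.
ratio_paired_t <- function(wbs, sbs) {
  stopifnot(length(wbs) == length(sbs))      # paired design: same bears, same order
  t.test(log(wbs), log(sbs), paired = TRUE)  # H0: mean log-ratio = 0 (ratio = 1)
}

res <- ratio_paired_t(luc$wbs, luc$sbs)
res$p.value
```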
This pcDNA3 back bone contains the DNA sequence of the BMP receptor BMPR1A/ALK3 carrying a K261R mutation leading to an inactive kinase. Transfection. CCL136 cells were transfected using the ViaFect Transfection Reagent (Promega, E4981). The ratio of ViaFect™ Transfection Reagent volume (µL) to DNA amount (µg) was optimised with green fluorescent protein (GFP) transfection assays prior to the actual experiments. We chose the 3:1 ratio (3 µl reagent: 1 µg DNA) as increasing the quantity of DNA or reagent did not increase the efficacy of the transfection. On the day of transfection, DNA and Viafect transfection reagent were mixed at the 1:3 ratio described above in a serum-free medium and incubated for 15 minutes at room temperature to form the ViaFect™ Transfection Reagent:DNA complex. Cells were transfected with 1 µg of DNA/well by adding 100 µl of the transfection mixture to the wells. GDF5 treatment. After one night of transfection, the media have been replaced by fresh media. Half of the wells were treated with recombinant human GDF5 protein (Accession # P43026, BioLegend) at 0.1 µg /mL for 6 hours. Cells were then washed with phosphate-buffered saline (PBS), harvested for protein extraction in the same protein extraction buffer as described in study 2 (see Paper 7.2) [423], and then stored at -80 until use. Western blots experiments have been performed as described in study 2 (see Paper 7.2) [423] with the same SMAD4 and SMAD1/5 antibodies, and Phospho-SMAD1/Ser463/465) (#9516, Cell Signalling Technology, Saint-Cyr-L'Ecole, France) diluted 1:1000 in 5% bovine serum albumin (BSA). 10. 1 . 2 12 Is SMAD4 recruited more by TGF-β or BMP signalling: Coimmunoprecipitation Protein extraction. 5 mg of bear muscle powder were lysed on ice using a polytron with 500 µL NP-40 buffer (10mM Tris pH 7.5, 150 mM NaCl, 1 mM EDTA, 1.0% Nonidet P-40, 20mM betaglycerophosphate) containing inhibitors of proteases (Protease Inhibitor Cocktail) and phosphatases (1 mM Na3VO3, 10 mM NaF) (Sigma, Saint-Quentin-Fallavier, France). The homogenates were then centrifuged at 10,000 g for 10min at 4°C and the concentration was determined using the Bradford Protein Assay Kit (Biorad, Marnes-la-Coquette, France). An aliquot was taken for protein expression analyses.Immunoprecipitation. The remaining lysates, containing equivalent amounts of 1 mg of total protein, were pre-cleaned for 1 h with 40 µL of Protein A-Agarose beads(sc-2001, Santa Cruz Biotechnology) and 1 µg of IgG isotype control antibody (sc-2027, Santa Cruz Biotechnology, Nanterra, France) with gentle rotation at 4°C. The samples were then centrifuged at 3200g for 30 seconds at 4°C, and the supernatants were stored. Immunoprecipitation was performed by the addition of 1.8 µg of SMAD4 antibody (ab230815, Abcam, Cambridge, UK) and protein A-Agarose, followed by incubation at 4°C overnight with gentle rotation. The immune complex was isolated by centrifugation at 3500g for 5 minutes. The resulting pellet was then washed with 350 µl of wash buffer (PBS pH 7.4, 5mM EDTA, 10mM NaF) and centrifuged at 3500g for 5 minutes without vortexing. This last step was repeated twice.Immunodetection. The resulting pellet was eluted in 80 µL Laemmli 1X. 
Proteins were then denatured at 95°C for 5min, and 25 µL of the eluate was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) using tris-glycine eXtended (TGX)™ FastCast™ 7,5% Acrylamide gels (Biorad, Marnes-la-Coquette, France) and transferred onto a polyvinylidene difluoride (PVDF) membrane (Hybond P, Amersham, England) using Trans-Blot® Turbo™ Transfer System standard protocol (Biorad, Marnes-la-Coquette, France). Blots were blocked for 1 h at room temperature in Tris-Buffered Saline (TBS) buffer with 0.1% Tween-20 (TBS-T, pH = 7.8) and 5% BSA for all the targets according to the manufacturer's instructions. They were then washed thrice in TBS-T and incubated (overnight, stirring, 4°C) with appropriate primary antibodies: (1) for validating the immunoprecipitation, SMAD4 (ab230815, Abcam, Cambridge, United Kingdom) antibody was used diluted 1:1000 in 5% BSA and (2) for checking the protein partners, membranes were hybridised with either SMAD1/5 (PA5-80036, Thermo Fisher, Illkirch, France) or SMAD2/3 (#8685, Cell Signalling Technology, Saint-Cyr-L'Ecole, France) both diluted 1:1000 in 5% BSA overnight at 4°C. Blots were then washed and incubated for 1 h at room temperature with VeriBlot for IP Detection Reagent (HRP) (ab131366, Abcam) diluted 1:2000 in 5% non-fat dried milk. Signals were detected after incubation with Luminata Crescendo Western HRP substrate (Millipore, Burlington, MA, USA) and visualized using G: BOX ChemiXT4 (XL1) imaging system (Syngene, Frederick, MD, USA). 10.1.3 Is GDF5 synthesised and released by adipose tissue: GDF5 ELISA We used the only commercially available GDF5 ELISA kit (Catalog Number: DY853-05 and DuoSet Ancillary Reagent Kit2 Catalog number: DY008; R&D Systems Europe) and performed GDF5 immunodetection in serum from hibernating and active bears following the supplier protocol. Figure 1 . 1 Figure 1. Signaling pathways regulating skeletal muscle mass and function. Myofiber representation of the different signaling pathways controlling skeletal muscle mass and function during atrophy conditions. Ligands and arrows (both with head or perpendicular line) in green denote those signaling pathways and interactions with an anabolic effect whereas the red ones represent catabolic signaling. Orange ligands and arrows stand for pathways with a dual role (context-dependent). ß2-AR: ß-2 Adrenergic Receptor; 𝛾-sec: 𝛾-secretase; Ang: Angiotensin; AT1R: Angiotensin II Type 1 Receptor; AT2R: Angiotensin II Type 2 Receptor; BCAAs: Branched-chain amino acids; BMP R: Bone Morphogenetic Receptor; Calp: Calpain; CSL: CBF1, Suppressor of Hairless, Lag-1; Dsh: Dishevelled; Fzd: Frizzled; GR: Glucocorticoid Receptor; IL-6: Interleukin-6; NCID: Notch Intracellular domain; NOX: NADPH oxidase activator; P: Phosphorylation; S1P: Sphingosine-1phosphate; SLC T: Solute Carrier Transporter; STAPs: Signal Transducing Adaptor Proteins; TF: Transcription Factors; TGF-ß R: Transforming Growth Factor ß Receptor; TKR: Tyrosine-protein Kinase Receptor; TLR: Toll-like Receptor; TNF R: Tumor Necrosis Factor Receptor; Transl. Fact.: Translational Factors. Figure 1 . 1 Figure 1. Signaling pathways regulating skeletal muscle mass and function. Myofiber representation of the different signaling pathways controlling skeletal muscle mass and function during atrophy conditions. Ligands and arrows (both with head or perpendicular line) in green denote those signaling pathways and interactions with an anabolic effect whereas the red ones represent catabolic signaling. 
Orange ligands and arrows stand for pathways with a dual role (contextdependent). ß2-AR: ß-2 Adrenergic Receptor; γ-sec: γ-secretase; Ang: Angiotensin; AT1R: Angiotensin II Type 1 Receptor; AT2R: Angiotensin II Type 2 Receptor; BCAAs: Branched-chain amino acids; BMP R: Bone Morphogenetic Receptor; Calp: Calpain; CSL: CBF1, Suppressor of Hairless, Lag-1; Dsh: Dishevelled; Fzd: Frizzled; GR: Glucocorticoid Receptor; IL-6: Interleukin-6; NCID: Notch Intracellular domain; NOX: NADPH oxidase activator; P: Phosphorylation; S1P: Sphingosine-1-phosphate; SLC T: Solute Carrier Transporter; STAPs: Signal Transducing Adaptor Proteins; TF: Transcription Factors; TGF-ß R: Transforming Growth Factor ß Receptor; TKR: Tyrosine-protein Kinase Receptor; TLR: Toll-like Receptor; TNF R: Tumor Necrosis Factor Receptor; Transl. Fact.: Translational Factors. Figure 2 . 2 Figure 2. Cont. Figure 2 . 2 Figure 2. E3 ubiquitin ligases regulating skeletal muscle mass and molecules developed to modulate their activity and expression. Myofiber representation of the different E3-ligases and molecules targeting the signaling pathways controlling skeletal muscle mass and function during atrophy conditions. Ligands and arrows (both with head or perpendicular line) in green denote those signaling pathways and interactions with an anabolic effect whereas the red ones indicate catabolic signaling. ß2-AR: ß-2 Adrenergic Receptor; BCAAs: Branched-chain amino acids; BMP R: Bone Morphogenetic Receptor; Calp: Calpain; CSL: CBF1, Suppressor of Hairless, Lag-1; GR: Glucocorticoid Receptor; IL-6: Interleukin-6; NCID: Notch Intracellular domain; NOX: NADPH oxidase activator; P: Phosphorylation; STAPs: Signal Transducing Adaptor Proteins; TF: Transcription Factors; TGF-ß R: Transforming Growth Factor ß Receptor; TKR: Tyrosine-protein Kinase Receptor; TLR: Toll-like Receptor; TNF R: Tumor Necrosis Factor Receptor; Transl. Fact.: Translational Factors. Figure 2 . 2 Figure 2. E3 ubiquitin ligases regulating skeletal muscle mass and molecules developed to modulate their activity and expression. Myofiber representation of the different E3-ligases and molecules targeting the signaling pathways controlling skeletal muscle mass and function during atrophy conditions. Ligands and arrows (both with head or perpendicular line) in green denote those signaling pathways and interactions with an anabolic effect whereas the red ones indicate catabolic signaling. ß2-AR: ß-2 Adrenergic Receptor; BCAAs: Branched-chain amino acids; BMP R: Bone Morphogenetic Receptor; Calp: Calpain; CSL: CBF1, Suppressor of Hairless, Lag-1; GR: Glucocorticoid Receptor; IL-6: Interleukin-6; NCID: Notch Intracellular domain; NOX: NADPH oxidase activator; P: Phosphorylation; STAPs: Signal Transducing Adaptor Proteins; TF: Transcription Factors; TGF-ß R: Transforming Growth Factor ß Receptor; TKR: Tyrosine-protein Kinase Receptor; TLR: Toll-like Receptor; TNF R: Tumor Necrosis Factor Receptor; Transl. Fact.: Translational Factors. and (ii) the activation of NF-κB [149-151]. Oppositely, S1P can promote skeletal muscle mass in denervated mice [152] although the downstream signaling depends on the context and the S1P-receptor type [153]. 2.3.8. NOTCH Signaling Pathway Hyperactivation of NOTCH leads to atrophy during cancer cachexia [154], denervation [155-157], chronic alcohol consumption [158], hypovitaminosis D [159], and glucocorticoid treatment [114]. Upon cleavage of the NOTCH receptor by secretases [160], Regulation of Muscle Atrophy 3.1. 
E3 Ligases Involved in the Regulation of Anabolic Pathways 3.1.1. The CBL-B and FBXO40 E3 Ubiquitin Ligases Target IRS1 to Degradation in Skeletal Muscle Involved in the Regulation of Catabolic Pathways 3.2.1. Regulating the Canonical NF-κB Pathway via the Manipulation of cIAP and TRAF6 E3 Ligases FOXO1 and 3a participate to skeletal muscle adaptation upon exercise thus adding a new of FOXOs in the control of muscle cell homeostasis [219-222]. 3. 3 . 3 E3 Ubiquitin Ligases Involved in the Regulation of Muscle Mass and Function 3.3.1. MuRF1/TRIM63 4. 1 . 1 Indirect Action on E3 Ligases 4.1.1. PI3K-AKT-mTORC1 ), a β2-AR agonist, was shown to reverse MuRF1/Trim63 and MAFbx/Atrogin-1 overexpression with a concomitant muscle sparing in tumor-bearing mice[276]. Intriguingly, neither a repression of FOXO1 and FOXO3a transcription factors nor an activation of AKT-mTORC1 pathway explained the positive effect of formoterol. By contrast, formoterol was able to blunt MuRF1/Trim63 and MAFbx/Atrogin-1 expression in LPS-induced muscle atrophy through restoration of the AKT-mTORC1 pathway and reversal of P-FOXO/FOXO1 ratio[277]. Fig. 1 ( 1 Fig. 1 (See legend on next page.) Fig. 6 6 Fig. 6 Hypothetical consequences of changes in circulating lipids and endocannabinoid system tone during hibernation in brown bear. Black arrows represent possible behavior and metabolic outcomes adjust function (Package stats version 4.0.0 of R studio) was applied. Data are presented as means ± SEM and individual values are plotted as grey and black dots for respectively summer and winter values. Means, SEM, fold change and associated p-values are reported in supplementary TablesS2 to S5. Statistical significance was considered with p values or adjusted p values lower than 0.05. TableS5: Endocannabinoids (eCBs) and mRNA quantification in plasma and tissues in winter hibernating (W) and summer active (S) bears. 10.6 Résumé de la thèse en français 10.6.1 Introduction bibliographique 10.6.1.1 Le muscle squelettique Physiologie. Il existe 3 différents types de muscle, et dans ce projet de thèse nous nous sommes intéressés au muscle squelettique. Les muscles squelettiques recouvrent notre squelette et son essentiellement responsables des mouvements volontaires et de la posture. Il s'agit également d'un tissu très dynamique et plastique qui agit comme principal tissu du métabolisme énergétique avec la production de chaleur, l'absorption, l'utilisation et le stockage de substrats énergétiques tels que le glucose et les acides aminés (AA). Le muscle est essentiellement composé d'eau (75%) et de protéines (20%) [1]. C'est un tissu hautement organisé contenant plusieurs faisceaux de myofibres dont chaque couche est successivement encapsulée par la matrice extracellulaire. Les myofibres sont des cellules multinucléées et post-mitotiques. Chaque myofibre contient des milliers de myofibrilles qui sont composées de l'unité cellulaire de base du muscle, le sarcomère. Le sarcomère lui-même est composé de milliards de myofilaments, à la fois épais (myosine) et fins (actine) qui sont essentiels à la contraction musculaire nécessitant une forte consommation d'ATP. Les myofilaments représentent le principal contenu protéique du muscle (c'est-à-dire 70-80% du contenu protéique total d'une seule fibre) [1]. Mitochondries. Les muscles sont hautement vascularisés et innervés. Les besoins énergétiques pendant une contraction multiplient par 100 la consommation normale d'ATP dans le muscle. 
Pour répondre à cette demande énergétique élevée, le muscle dépend en partie de la phosphorylation oxydative mitochondriale (OXPHOS) pour la production d'ATP. Deux populations distinctes de Figure 51 . 51 Figure 51. Déséquilibre dans la balance protéique dans des conditions physiopathologiques conduisant à l'atrophie musculaire. AA : acides aminé Figure 52 . 52 Figure 52. Organisation et transduction du signal de la superfamille du TGF-β. SMAD4 le facteur contrôlant la balance protéique. SMAD4 est l'acteur commun entre la signalisation du TGF-β et du BMP (Figure 53). Les muscles des souris déficients pour Smad4 s'atrophient fortement et présentent une protéolyse excessive après un mois de dénervation chez la souris. Les souris déficientes pour la Mstn présentent un recrutement plus important de SMAD4 sur le promoteur des gènes cibles de la signalisation BMP. Au contraire, les souris déficientes pour Gdf5 présentent une augmentation de la liaison du complexe SMAD4-SMAD2/3 sur le promoteur des gènes cibles du TGFβ. De ces résultats a émergé le concept d'une compétition entre SMAD2/3 et SMAD 1/5/8 pour le recrutement de SMAD4. L'inhibition du signal TGF-β libérerait SMAD4 de SMAD2/3 pour être plus disponible pour SMAD1/5/8. Ainsi, la suractivation du TGF-β lors de situations cataboliques est considérée comme un facteur qui réduit la disponibilité de SMAD4 pour la signalisation BMP. Il y a donc une vraie nécessité d'un équilibre finement régulé de la balance BMP/TGF-β afin de maintenir l'homéostasie musculaire [110] (Figure 53). . L'UPRmt est une réponse à divers stress mitochondriaux. Il active un programme transcriptionnel codé par l'ADN nucléaire pour favoriser le retour à une homéostasie mitochondriale [226]. Néanmoins, si l'UPRmt est incapable de réparer les dommages mitochondriaux, l'élimination de la mitochondrie entière par mitophagie est favorisée. Il existe trois protéines régulatrices clés de l'UPRmt, dont la protéine ATF4 qui est souvent surexprimée suite à des dommages mitochondriaux. Une analyse transcriptomique globale a validé la présence de motifs de liaison d'ATF4 dans de nombreux gènes UPRmt [226] (Figure 55). Dans les muscles de souris, des stress mitochondriaux causés par une mauvaise fusion mitochondriale ou mitophagie, augmentent p-eIF2α et les niveaux protéiques d'ATF4. Cependant, des preuves montrent que l'activation de l'UPRmt-ATF4 suite à des perturbations mitochondriales dans les muscles peuvent avoir à la fois des effets protecteurs ou néfastes pour le muscle [226]. Ainsi, ces études mettent en évidence l'implication complexe de l'ISR dans l'autophagie et dans le contrôle de la qualité mitochondriale. En outre, il reste beaucoup à explorer pour comprendre l'implication de l'ISR-ATF4 dans l'autophagie et le contrôle de la qualité mitochondriale dans l'homéostasie musculaire. 227 Le role de l'ISR dans l'atrophie musculaire. Certains membres de l'ISR ont été associés de près ou de loin à l'atrophie musculaire dans différentes conditions cataboliques. Pour la kinase PERK et p-eIF2α, certaines études les associent à l'atrophie musculaire, pendant que d'autres leurs confèrent une action anti-atrophique. Pour les kinases GCN2 et PKR, il semblerait que leurs activations soient associées à l'atrophie musculaire, bien qu'aucune connection avec ATF4 n'ait été faite. Le gène ATF4 est quant à lui considéré comme un atrogène car ses niveaux d'ARNm augmentent dans de nombreuses conditions cataboliques induisant l'atrophie musculaire [99,207]. 
Les souris déficientes pour Atf4 dans les fibres musculaires se développent normalement et présentent une masse et une fonction musculaire normale jusqu'à un âge avancé. Par la suite, elles commencent à présenter une protection contre l'atrophie musculaire liée à l'âge [207]. Une délétion d'Atf4 dans les muscles de souris limite l'atrophie musculaire induite par le jeûne, l'immobilisation ou encore le vieillissement [207]. Le rôle catabolique d'ATF4 passe par la transcription des atrogènes Gadd45a 61 , Cdkn1a 62 and Eif4ebp1 63 dans les muscles 61 growth arrest and DNA damage inducible alpha 62 cyclin-dependent kinase inhibitor 1 63 eukaryotic translation initiation factor 4E binding protein 1 Figure 55 . 55 Figure 55. Le rôle double de l'ISR dans l'homéostasie musculaire. Les flèches vertes représentent des conséquences cellulaires anaboliques, les flèches rouges des effets cataboliques, et noires des conséquences encore mal comprises. Figure 56 . 56 Figure 56. Caractéristiques physiologiques fascinantes de l'ours brun hibernant. Figure 58 .Etude 3 583 Figure 58. Schéma expérimental de l'étude 2. Figure 59 . 59 Figure 59. Schéma expérimental de l'étude 3. Les cellules musculaires humaines sont transfectées avec un plasmide rapporteur luciférase pour le gène ID1 avant d'être traitées par du sérum d'ours d'été (SBS) ou d'hiver (WBS), avant de lire la luminescence. Figure 61 . 61 Figure 61. Expression des gènes associés aux signalisations TGF-β (à gauche) et BMP (à droite) dans les muscles de l'ours hibernants. Figure Figure simplifiée extraite de la publication Cussonneau et al., 2021 [351]. Schéma montrant les transcriptions des gènes impliqués dans les signalisations TGF-β et BMP du muscle vastus lateralis de l'ours brun et décrivant (1) leurs relations et (2) la différence de leurs niveaux d'expression entre les périodes d'hibernation et d'activité. Les cases rouges et vertes indiquent, respectivement, les gènes régulés à la hausse et à la baisse pendant l'hibernation par rapport à la saison d'été, et les cases blanches indiquent les gènes inchangés. Les gènes cibles des signalisations TGF-β et BMP sont indiqués en italique et sont en vert lorsqu'ils sont régulés à la baisse, en rouge lorsqu'ils sont régulés à la hausse, et en noir lorsqu'ils sont inchangés. Les flèches indiquent l'activation, et les barres ⊥ l'inhibition (n = 6 ours/saison, les mêmes individus ont été échantillonnés et analysés en été et en hiver). SBEs : SMAD Binding Element. Créé avec BioRender.com. Figure 60 . 60 Figure 60. Les niveaux protéiques de SMAD4 sont augmentés dans le muscle de l'ours brun hibernant. Les niveaux de la protéine SMAD4 ont été évalués par Western blots dans le muscle vastus latéralis des ours bruns pendant l'été (S) et l'hiver (W), et des Westerns blots représentatifs sont présentés pour trois couples d'ours. Les données sont représentées sous forme de valeurs individuelles avec des barres moyennes (n=11 ours/saison, les mêmes individus ont été échantillonnés et analysés en été et en hiver). Les points gris et bleus représentent les muscles des ours en été et en hiver respectivement. Figure 63 . 63 Figure 63. Le traitement à l'halofuginone inhibe la signalisation du TGF-β tout en favorisant la signalisation du BMP dans le muscle gastrocnémien chez les souris. Le même protocole expérimental qu'expliqué dans la légende de la figure 63 a été réalisé. 
Les niveaux protéiques relatifs du gastrocnémien pour les facteurs de transcription (B) SMAD2-3 (TGF-β), (A) SMAD1-5 (BMP) et (C) SMAD4 (TGF-/BMP) ont été évalués dans la fraction subcellulaire nucléaire, quantifiés et normalisés en utilisant le signal TGX. Des Western blots représentatifs pour les fractions subcellulaires nucléaires et cytosoliques sont présentés. Les données sont des moyennes ± SEM (exprimées par rapport aux souris H2O-Ctrl). Test statistique d'ANOVA à deux facteurs : ** padj <0,01 ; **** padj <0,0001. # padj (effet de l'HF) <0,05. Figure 62 . 62 Figure 62. Le traitement à l'halofuginone avant la suspension par le train arrière atténu l'atrophie du muscle gastrocnémien chez les souris.Les souris ont été traitées avec de l'H2O ou de l'halofuginone (HF, 0.25µg/g) 3 fois par semaine pendant 3 semaines et ont ensuite été soumises à une suspension du train arrière pendant 3 (HS3, barres hachurées) ou 7 (HS7, barres noires pointillées) jours ou non suspendues (Ctrl, barres grises). Masse du muscle gastrocnémien par gramme de poids corporel, les données sont des moyennes ± SEM (exprimées par rapport aux souris H2O-Ctrl). Test statistique d'ANOVA à deux facteurs : ** padj <0.01; **** padj <0.0001; ns= non-significatif ; # padj (effet de l'HF) <0.05. Figure 64 . 64 Figure 64. Le sérum d'ours d'hiver réprime la transcription du gène ID1 dans les cellules musculaires humaines. Des cellules CCL136 ont été transfectées avec le plasmide ID1-Luc et cultivées avec du sérum d'ours d'été (SBS, points gris) ou du sérum d'ours d'hiver (WBS, points noirs) pendant 6, 12 ou 24 heures, puis les cellules ont été lysées et l'activité de la luciférase a été mesurée. Les données sont présentées sous forme de valeurs individuelles avec des barres moyennes (n = 12 sérum d'ours/saison, les mêmes individus ont été échantillonnés et analysés en été et en hiver). La signification statistique est indiquée (test t apparié de ratio) * pvalue<0,05 ; ** pvalue<0,01 ; ns : non significatif. and Strasbourg University (H2E project; MyoBears project of the PEPS ExoMod program). MGX acknowledges financial support from France Génomique National infrastructure, funded as part of "Investissement d'Avenir" program managed by the Agence Nationale pour la Recherche (contract ANR-10-INBS-09). L.C. (Laura Cussonneau) was supported by a grant from the Institut National de la Recherche Agronomique et Environnement and Clermont Métropole. C.B. (Christian Boyerand) and C.B. (Charlotte Brun) were supported by grants from the Ministère Français de l'Enseignement Supérieur, de la Recherche et de l'Innovation. The long-term funding of Scandinavian Brown Bear Research Project (SBBRP) has come primarily from the Swedish Environmental Protection Agency, the Norwegian Environment Agency, the Austrian Science Fund, and the Swedish Association for Hunting and Wildlife Management. The study was conducted according to the guidelines of the Declaration of Helsinki and of the European Directive 2010/63/EU and approved by the Institutional Other data that support the findings of this study are available in the supplementary material of this article. (GSE144856). 
Institutional Review Board Statement: Review Board (or Ethics Committee) of (1) the Swedish Ethical Committee on Animal Experiment (applications Dnr C3/2016 and Dnr C18/2015), the Swedish Environmental Protection Agency (NV- 0741-18), and the Swedish Board of Agriculture (Dnr 5.2.18-3060/17), and (2) the C2E2A (Comité d'Ethique pour l'Expérimentation Animale Auvergne) (D6334515-08/2018). Informed Consent Statement: Not applicable. Table 6 . Bears features 6 Supplementary ID_number Year of collection Age (year) Gender Experiments w1305 2 F w1316 2014 2 M w1317 2 M RNA w1509 2 F sequencing w1511 2016 2 F w1512 2 F w1509 w1610 2017 3 2 F M w1709 w1710 2018 2 2 F F w1707 3 F w1802 2 M Western Blot w1803 2 F w1806 2019 2 F w1812 2 M w1813 2 F w1814 2 M N = 17 2.117647059 Sex ratio : 11F/6M . Inhibition of myostatin in muscles can result in either increased bone formation under physiological conditions or decreased bone resorption under pathological conditions[410][411][412]. Interestingly, TGFB1 and MSTN are downregulated in hibernating bear muscles in our study (seePaper 6.3) [351] and previous studies have clearly shown that bears do not suffer from osteoporosis despite long-term inactivity, lack of food, and cold exposure during hibernation[413]. Furthermore, peripheral blood mononuclear cells collected from hibernating Japanese black bears and cultured with hibernating bear serum do not differentiate into osteoclasts, unlike cells cultured with active bear serum[414]. This study implies the presence of circulating compounds during hibernation that protect bone from resorption. Altogether, these observations raise questions about whether or not soluble bone and muscle factors are secreted during hibernation in bears. Because TGF-β/BMP pathways are ubiquitous throughout the body, their regulation in one place is likely to imply consequences in another place. Is it because bone resorption does not occur in bears during winter that muscle atrophy does not occur either, or is it the other way around (Figure37)? Transcriptomic Data and Functional Pathway Enrichment Analysis for ATF4 pathway. Transcriptomicdata from the already published study 1 (seePaper 6.3) [351] are openly available in the GEO repository databases (bear: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi, reference number Plasmid. pGL3 BRE Luciferase was a gift from Martine Roussel & Peter ten Dijke (Addgene plasmid # 45126; http://n2t.net/addgene:45126; RRID: Addgene_45126) [430]. Two copies of the SMAD binding element present in the ID1 promoter are cloned into pGL3-MLP-luc minimal promoter vector.Transfection. CCL136 were transfected with 50 ng of DNA/well the day after seeding and HM cells with 1 µg of DNA/well at the end of the differentiation process, by adding respectively 5 or 100 µl of the transfection mixture described above (seeAppendix 10.1). We added a renilla luciferase plasmid (i.e. Pools 10.2.2 ATF4 signalling in hibernating bear muscles Bear ID_number Year of collection Age (year) Gender Experiments 1813 2020 3 F P1 1509 1701 2017 2018 2 2 F F SBE4-Luc was a gift from Bert Vogelstein (Addgene plasmid # 16495; http://n2t.net/addgene:16495; 1814 2019 2 M (GSE144856)). Genecards web-based portal was used to draw up an ATF4 gene set list, selecting genes with a score > 1.3. To identify the differentially expressed genes (DEGs) from this list, we selected a Winter/Summer fold change (FC) >|1.0| or <|1.0| with an adjusted p-value < |0.05| as cut-off RRID: Addgene_16495) [431]. 
This vector contains four copies of the SMAD binding site (GTCTAGAC) which are cloned into pBV-Luc. 1604 2018 3 F HM cell line, BRE plasmid transfection 1610 2017 2 M P2 1710 2018 2 F 3XMEF2-luc was a gift from Ron Prywes (Addgene plasmid # 32967 ; http://n2t.net/addgene:32967 ; 1803 2019 2 F standards, for the up-and down-regulated genes, respectively. Visualization of functional enrichment was performed using Metascape [432], a web-based portal for visualizing the inference of enriched biological pathways and protein-protein interaction among the DEGs as described in study 1 (see Paper 1707 2019 3 F RRID:Addgene_32967). This plasmid was cloned from pFOS WT-GL3 (Addgene #11983), where the human c-fos promoter was removed and replaced with 4 MEF2 sites. 1608 2017 2 F P4 1806 2019 2 F 10.2 Materials and Methods discussion 2 6.3) [351]. 1812 2019 2 M 10.2.1 Complementary RT-qPCR and Western blots of study 2 Immunodetection of total and phosphorylated eIF2α protein. The total and phosphorylated eIF2α protein content has been assessed by Western blots in vastus lateralis muscle of bears, as previously described in study 1 (see Paper 6.3) [351] using eIF2α (#9722S, Cell Signalling) and p-Ser51eIF2α (#ab32157) antibodies diluted 1:1000 in 5% BSA. Gene Forward Reverse Atg5 Atg12 TCAACCGGAAACTCATGGAA CGGAACAGCTTCTGGATGAA TAAACTGGTGGCCTCGGAAC CCATCACTGCCAAAACACTCA 10.3 Materials and Methods discussion 3 Atg16 TCCCGTGATGACCTGCTAAA CAGTCAGAGCCGCATTTGAA 10.3.1 Optimisation of tools for screening active compounds in WBS Map1lc3a Cell culture. CCL136 cells were cultured as described in Materials and Methods discussion 1, and GAGCGAGTTGGTCAAGATCA GGAGGCGTAGACCATGTAG Bnip3 seeded in 96-wells plates at a density of 2.0 × 10 4 cells/well. TCACTGTGACAGCCCACCTC GCTGTTTTTCTCGCCAAAGC Col1a1 Human myotubes (HM) were derived from vastus lateralis muscle biopsies obtained from healthy CGTGGTGACAAGGGTGAGAC AACCAGGAGAACCAGGAGGA Col3a1 Col5a2 control donors (Diomede experimental protocol). All procedures were approved by the French Ethical CTGCTGGTCCTTCTGGTGCT AGCCACGTTCACCAGTTTCA CCTGGTCCAAATGGTGAACA CCAGGGTTTCCTTCCTTTCC Committee SUD EST IV (Agreement #12/111A 13-02) and performed according to French legislation Bear ID_number Year of collection Age (year) Gender Experiments Col6a1 (Huriet's law). All patients gave their written consent after being informed of the nature, purpose, and CCACAACCAGATGCAAGAGC CACCAGCCATCCATTGTAGC w1601 2 M 2017 w1610 2 M possible risks of the study. The myoblasts were thawed from liquid nitrogen, directly platted into 6-w1604 3 F 2018 w1701 2 F well plates coated with collagen (Corning® BioCoat® Collagen I 6-well Clear Flat Bottom TC-treated Immunodetection of LC3 protein. The lipidation of the LC3 protein has been assessed by Western blots w1707 3 F in gastrocnemius muscle, as previously described in study 2 (see Paper 7.2) [423]. As LC3 protein has a pHi > 8.0, a CAPS (3-(cyclohexylamino)-1-propanesulfonic acid)-ethanol buffer (10 mM CAPS, 10% Multiwell Plate, Product Number 356400), and maintained in medium Ham's F10 (1g/L glucose, Dutscher) containing 1% penicillin-streptomycin (Gibco, USA) and 10% FBS (Gibco, USA). The cells were CCL136 cell line, BRE, SBE w1709 3 F or MEF2 plasmid transfection w1803 2 F 2019 w1806 2 F seeded in 12-wells plates at a density of 4 × 10 4 cells/well and cultured at 37 °C under 5% CO2. After ethanol, pH = 11) was used to optimise protein transfer. Blots were blocked for 1 h at room w1812 2 M w1814 2 M w1813 w1909 2020 3 2 F F RT-qPCR. 
Reverse transcription and quantitative polymerase chain reaction (RT-qPCR) of gastrocnemius muscle have been performed as previously described in study 2 (see Paper 7.2) [423] and the primers used are described in the following table : temperature in TBS buffer with 0.1% Tween-20 (TBS-T, pH = 7.8) with 5% non-fat dried milk according to the manufacturer's instructions. They were then washed thrice in TBS-T and incubated (overnight, stirring, 4°C) with LC3 antibody (#ab48394, Abcam) diluted 1:1000 with 5% non-fat dried milk. reaching 80% confluence, differentiation was triggered by replacing the previous medium with DMEM 1g/L glucose containing 2% FBS for 5 days. The differentiation media were changed every other day. 0,25 ng for CCL136 and 5 ng for HM) in the transfection mixture for intra-well luminescence normalisation Bear serum treatment. After one night of transfection, the media have been replaced by fresh media containing 5% winter or summer bear serum instead of 5% FBS for 6, 12 or 24 hours. CCL136 cells were treated with the serum of 12 individuals bears for each season. HM cells were treated with 3 mixes of bear serum for each season. Bears characteristics for individuals sera or for pools of sera are described in the table below. Cells were then washed with PBS and lysed with the passive lysis buffer from Dual-Luciferase® Reporter Assay System 100 assays E1910 (Promega, Charbonnières-les-Bains, France) by adding 20 µl for CCL136 or 250 µl for HM cells. Luminescence reading. In the Dual-Luciferase® Reporter (DLR™) Assay System, the activities of firefly (from BRE-Luc, SBE-Luc or MEF2-Luc plasmids) and renilla luciferases are measured sequentially from a single sample. 10 µl of the lysed cells were added in each well of a 96-well flat-white plate. Thereafter, the plate was placed into the plate-reading Synergy™ 2 luminometer (Biotek, Colmar, France) equipped with two reagent injectors. The firefly luciferase reporter was measured first by adding 50 µl of Luciferase Assay Reagent II (LAR II) in each well. After quantifying the firefly luminescence for 12 seconds, this reaction was quenched, and the renilla luciferase reaction was simultaneously initiated by adding 50 µl of Stop & Glo® Reagent to the same well. The Stop & Glo® Reagent also produces a stabilized signal from the renilla luciferase, which decays slowly over the course of the measurement. Table 1 . 1 Phenotypes of transgenic mice for genes encoding ubiquitin ligases involved in the control of muscle mass and function. 
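To make the downstream quantification explicit, here is a minimal analysis sketch (the file name, column names and season labels are hypothetical, not taken from the thesis): firefly counts are normalised to the renilla signal of the same well, replicate wells are averaged per bear and season, and winter is compared to summer with a ratio paired t-test, i.e. a paired t-test on log-transformed values, mirroring the statistics reported in the figure legends.

```python
# Minimal sketch (hypothetical CSV layout: columns bear_id, season, firefly, renilla).
import numpy as np
import pandas as pd
from scipy import stats

wells = pd.read_csv("luciferase_wells.csv")

# Intra-well normalisation: firefly reporter signal over renilla control signal.
wells["norm"] = wells["firefly"] / wells["renilla"]

# Average replicate wells per bear and season, then pair the two seasons per bear.
per_bear = (wells.groupby(["bear_id", "season"])["norm"].mean()
                 .unstack("season")          # assumed season labels: "summer", "winter"
                 .dropna())

# Ratio paired t-test = paired t-test on log-transformed values (same bears in both seasons).
t, p = stats.ttest_rel(np.log(per_bear["winter"]), np.log(per_bear["summer"]))
print(f"ratio paired t-test: t = {t:.2f}, p = {p:.3f}")
```

Log-transforming first means the paired comparison is made on fold-changes rather than absolute luminescence, which is what the "ratio paired" test is designed for.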
Gene Product E3 Family Mouse Model Phenotype References E3 ligases regulating the anabolic pathways CBL-B RING KO Protection from unloading-induced muscle atrophy and dysfunction [171] FBXO40 RING KD KO Myofibers hypertrophy Muscle hypertrophy [22] NEDD4-1 HECT OX Myocardial activation of AKT during I/R [176,177] KO Partially resistant to denervation-induced skeletal muscle atrophy [178] E3 ligases regulating the catabolic pathways TRAF6 RING m.KO m.KO Resistance to starvation induced muscle atrophy Resistance to denervation-induced loss of muscle mass and function [179] [180] cIAP1 RING KO Limitation of denervation-induced muscle atrophy [19] OX Myotube atrophy WWP1 HECT KD Muscle fiber atrophy [181] TRIM32 RING KO DN Muscular dystrophy Muscular dystrophy [182] [183] Other E3 ligases involved in the control of muscle mass and function MuRF1 RING KO Resistance to catabolic-induced muscle atrophy [4] MAFbx RING KO Resistance to catabolic-induced muscle atrophy [4] PARKIN RBR KO Impaired mitochondrial function and muscle atrophy [184] OX Increased muscle mass and function in young and old mice [185] OX Prevention of sepsis-induced muscle atrophy [186] SMART/FBXO21 RING KD Resistance to denervation-induced muscle atrophy [187] MUSA1/FBXO30 RING KD Resistance to denervation-induced muscle atrophy [77] FBXL21 RING HM Impaired muscle functions [188] UBR4 HECT KD Muscle hypertrophy [189] UBR5 HECT KD Muscle atrophy [190] DN, Dominant Negative mutation; HM, Hypomorphic Mutation; I/R, Ischemia/Reperfusion; KD, knock-down mutant; KO, Knock-out mutant; m.KO, skeletal muscle-specific KO mice; OX, overexpressing mutant; PTEN, Phosphatase and tensin homologue. Table 2 . 2 Treatments influencing E3 ligases expression and/or activity. E3 Ligases Inhibited Molecule Mode of Inhibition Signal inhib-ited/Activated Efficiency on E3 Ligases Efficiency on Muscle Mass References Indirect inhibition of E3 ligases MuRF1/MAFbx Expression 4-aminopyridine (4-AP) K+-channels blockade K+-channels blocking Yes Yes [278] Notch1, Notch3 MuRF1 Expression AGT251 expression NOTCH Yes Yes [161] inhibition MuRF1/MAFbx/ MuSA1 Expression Anti-TLR2 IKK2 (NF-κB) TLRs Serum Amyloib A1 Yes Yes [256] MuRF1 Expression Anti-TLR4 IKK2 (NF-κB) TLRs Serum Amyloib A1 Yes Yes [256] MuRF1/MAFbx/ MuSA1 Expression BMS-345541 IKK2 (NF-κB) TLRs Serum Amyloib A1 Yes Yes [256] MuRF1 expression C188-9 STAT3 inhibition STAT3 signaling ND Partially [14] MuRF1/MAFbx Expression Clenbuterol AKT-FOXO axis Activation of PI3K-AKT Yes Yes [15] MuRF1 not MAFbx Dehydroepiandros-terone (DHEA) ND ND Yes Yes [168] MuRF1 Expression Epigallocatechin-3-gallate/EGCG ND NF-κB Yes Yes [279] MuRF1 Expression Espindolol ND Myostatin and NF-κB Yes Yes [280] MuRF1/MAFbx Expression Formoterol ND ND Yes Yes [276] MuRF1/MAFbx Expression Formoterol AKT/mTORC1/ FOXO1 ß2 Adrenergic receptor? Yes Yes [277] MuRF1/MAFbx Expression Formoterol ND AKT and NF-κB Yes Yes [281] MuRF1/MAFbx Expression HMB IL-6 receptor inhibition NF-κB Yes Partially [275] MuRF1 expression HMB or Leucine FOXO1 nuclear translocation Glucocorticoid Yes No [282] Table 2 . 2 Cont. 
E3 Ligases Inhibited Molecule Mode of Inhibition Signal inhib-ited/Activated Efficiency on E3 Ligases Efficiency on Muscle Mass References Cbl-b activity IRS1 peptide mimetic Cbl-b targeting Activation of PI3K-AKT Yes Yes [171] MuRF1/MAFbx Expression Leucine ND FOXO3a and VPS34 nuclear translocation Yes Yes, myotube diameter [283] MuRF1/MAFbx Expression Matrine AKT/mTORC1/ FOXO3α FOXO3a and translocation VPS34 nuclear Yes Yes [284] MuRF1 expression MR16-1 Anti-IL-6 receptor NF-κB Mitigated No [275] MuRF1 expression N-acetyl cysteine ROS TGF-ß Yes Yes [69] MuRF1/MAFbx Expression Pyrroloquinoline quinone (PQQ) ROS ND Yes Yes [285] MuRF1/MAFbx Expression RU486 GR Glucocorticoid Yes ND [13] MuRF1 Expression Sabinene ROS ERK, p38 MAPK Yes Yes [286] MuRF1/MAFbx Expression sActRIIB ActRIIB antagonist SMADs Yes Yes [16] MuRF1/MAFbx/ MuSA1 Expression Salicylate IKK2 (NF-κB) NF-κB Yes Yes but toxic [89] MuRF1/MAFbx Expression SS-31 ROS No Yes Yes [287] MuRF1/MAFbx Expression Teaghrelin ND Myogenin Yes Moderate [288-290] MuRF1/MAFbx Expression TNF-BP TNF binding TNF Yes ND [13] MuRF1/MAFbx/ MuSA1 Expression Ursolic acid ND Myostatin and cytokines inflammatory Yes Moderate [255] MuRF1/MAFbx Expression Vitamin E ND but seems ROS independent Unknown Yes Moderate [291] MuRF1/MAFbx Expression Vitamin-D IL-6 receptor inhibition NF-κB Yes Partially [275] MuRF1 expression VX-745/Neflamapimod p38α MAPK p38α MAPK Partially Moderate [292] Direct inhibition of E3 ligases MuRF1 expression ID#704946/MyoMed-946 ND MuRF1 Expression Yes Partially [293] MuRF1 Expression ID#704946/MyoMed-946 ND MuRF1 and Expression MuRF2 Yes Partially [17,18] MuRF1 and MuRF2 Expression MyoMed-205 ND MuRF1 expression [17] MuRF1 activity P013222 MuRF1 targeting - Yes ND [294] cIAP1 (activity??) LCL161 cIAP1 NF-κB Yes Very moderate [19] * schemas created with BioRender.com glutathione/oxidized glutathione ratio Int. J. Mol. Sci. 2023, 24, 621. https://doi.org/10.3390/ijms24010621 https://www.mdpi.com/journal/ijms Molecules 2021, 26, 407. https://doi.org/10.3390/molecules26020407 https://www.mdpi.com/journal/molecules 71 6.3 Paper Acknowledgments: The authors thank the field capture team (D. Ahlqvist, A. Friebe, H. Nordin, H. Blomgren, and S. Persson) and the IEN (INRAE Clermont-Ferrand-Theix, France) for excellent assistance with mouse care (A. Cissoire, Y. Delorme, and M. Djelloul-Mazouz), and L. Parry for help during animal slaughtering. The authors are also grateful to T. Brioche (Unité Mixte de Recherche INRAe, Dynamique Musculaire et Métabolisme, Université de Montpellier, France) for his help in implementing the unloading model in mouse and F. Rocher for helpful discussions. This is paper N • 317 of the Scandinavian Brown Bear Research Project. Acknowledgments: The authors thank the IEN (INRA Clermont-Ferrand-Theix, France) for the technical assistance with animal care. We are also grateful to Bogdan Vulpescu (Laboratoire de Physique de Clermont, UMR6533) for the helpful discussions regarding the statistical treatment of fibre CSA distribution. Acknowledgments: This work was supported by the French Institut National de Recherche pour l'Agriculture, l'Alimentation et l'Environnement (INRAE). Laura Cussonneau is supported by Clermont-Auvergne-Métropole and Dulce Peris-Moreno by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 813599. 
Acknowledgments The authors wish to thank the field capture team (D Ahlqvist, A Friebe, H Nordin, H Blomgren, S Persson), and are grateful to Hélène Choubley and Victoria Bergas from the lipidomic platform of the university of Bourgogne-Franche-Comté for their valuable technical assistance. This is scientific paper no. 296 from the SBBRP. Funding: This work was supported by the Center National d'Etudes Spatiales (CNES, #865, #974, #1006, #1905), the iSITE Challenge 3 Mobility program (Université Clermont Auvergne), and the Funding This work was supported by the French Space Agency (CNES), iSITE Challenge 3 Mobility program (UCA), CNRS and Strasbourg University (H2E project; MyoBears project of the PEPS ExoMod program), French Proteomic Infrastructure (ProFI; ANR-10-INSB-08-03, and MetaHUB (French infrastructure in metabolomics & fluxomics; ANR-11-INBS-0010). CBo was supported by a grant from the MESRI, LCu by grants from the INRAE and Clermont Metropole and CBr by a grant from French space agency (CNES). The long-term funding of Scandinavian Brown Bear Research Project (SBBRP) has come primarily from the Swedish Environmental Protection Agency, the Norwegian Environment Agency, the Austrian Science Fund, and the Swedish Association for Hunting and Wildlife Management. Data Availability Statement: The analyzed transcriptomic mouse data that support the findings of this study are openly available in the GEO repository database at https://doi.org/10.1093/gerona/ gly051, accessed on 16 June 2021, reference number (GSE102284). The generated transcriptomic bear data that support the findings of this study are openly available in the GEO repository database at https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi, accessed on 16 June 2021, reference number Cells 2021, 10, 1873 Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article. Availability of data and materials The datasets generated during and/or analyzed during the current study available from the corresponding author on reasonable request. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/cells10081873/s1, Figure S1, Tables S1-S6, and Blots quantification. Conflicts of Interest: The authors declare no conflict of interest. Paper Conflicts of Interest: The authors declare no conflict of interest. Supplementary Figure S1 Supplementary Figure S1. Effect of halofuginone treatment on muscle mass. Mice were treated with H2O (white bars) or HF (0.25 µg/g, grey bars) 3 times a week up to 4 weeks (WK) as described in Supplementary Figure S2 Supplementary Figure S2. ATF4-regulated alternative target genes expression in muscle during hindlimb suspension. Mice were treated with H2O or halofuginone (HF) oral administration (0.25µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 or 7 days (HS3 and HS7, light grey and white bars, respectively) or kept unsuspended (Ctrl, dark grey bars). Supplementary Figure S3 Supplementary Figure S3. Effects of halofuginone treatment prior to hindlimb suspension on skeletal muscle. Mice were treated with H2O or halofuginone (HF, 0.25µg/g) 3 times a week for 3 weeks and were then subjected to hindlimb suspension for 3 or 7 days (HS3 and HS7, respectively) or kept unsuspended (Ctrl) as described in Figure 2A Supplementary Figure S5 Supplementary Table S1: Primers Gene Forward Reverse Atf4 E. A. B. C. D. F. G. TGX A. B. Figure 46. 
Worldwide geographic repartition of (A) Hydrangea macrophylla and (B) brown bear living area (Ursus arctos). (A) Map of Hydrangea macrophylla geographic repartition found on https://identify.plantnet.org/fr/the-plant-list/species.(B) Map of brown bear living area repartition found on https://databayou.com/bear/habitat.html. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the writing of the manuscript. Abbreviations Paper Supplementary Information The online version contains supplementary material available at https://doi. org/10.1186/s12983-020-00380-y. Additional file 1 Table S1. Characteristics of brown bears included in the study. Table S2. Serum fatty acid concentrations (mmol/L) in winter hibernating (WBS) and summer active (SBS) bears. Table S3. Serum fatty acid relative proportions (mol %) in winter hibernating (WBS) and summer active (SBS) bears. Table S4. List of primers used for RT-qPCR. Ethics approval All samples and data were collected under protocols approved by the Swedish Ethical Committee on Animal Experiment (applications Dnr C3/2016 and Dnr C18/2015), the Swedish Environmental Protection Agency (NV-00741-18), and the Swedish Board of Agriculture . All procedures complied with Swedish laws and regulations. Competing interests The authors declare no competing interests. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. [193] (Figure 54). A ce jour, quatre kinases sont connues pour phosphoryler eIF2α, à savoir PERK 54 , 53 integrated stress response 54 PKR-like ER kinase B. A. Figure 53. Compétition pour le recrutement de SMAD4 en conditions physiologiques (A) ou cataboliques (B). DPM : dégradation protéique musculaire ; SPM : synthèse protéique musculaire. 225 PKR 55 , HRI 56 , et GCN2 57 [193] (Figure 54). P-eIF2α conduit à deux conséquences : (1) une inhibition générale de la machinerie traductionnelle et (2) la traduction d'ARNm spécifiques dont ATF4 58 [193] (Figure 54). ATF4 est un facteur de transcription qui agit principalement comme un activateur transcriptionnel d'une cohorte de gènes impliqués dans l'adaptation au stress cellulaire en réponse à l'activation de l'ISR [193] (Figure 54). ATF4 produit des réponses distinctes en fonction du stress cellulaire. Par exemple, lors d'un stress nutritionnel, ATF4 stimule l'expression de gènes impliqués dans le transport et la biosynthèse des AA, et l'autophagie pour fournir de nouveaux AA pour la synthèse de novo des protéines. De plus, ATF4 induit la transcription de PPP1R15A 59 (la protéine GADD34), la principale phosphatase d'eIF2α, agissant comme une importante boucle de rétrocontrôle négatif pour restaurer la synthèse protéique une fois le stress résolu [193] (Figure 54). Il a été proposé que la durée et l'intensité de la signalisation de l'ISR dicteraient la résultante biologique cellulaire. Par conséquent, ATF4 peut également faciliter l'exécution d'un programme transcriptionnel de mort cellulaire par apoptose lorsque l'homéostasie cellulaire ne peut être restaurée, en induisant la transcription de gènes apoptotiques [193] (Figure 54). 55 double-stranded RNA-dependent protein kinase 56 heme-regulated inhibitor 57 general control nonderepressible 2 58 activating transcription factor 4 59 protein phosphatase 1 regulatory subunit 15A de souris [99,207] (Figure 55). 
GADD45A est une protéine myonucléaire qui réprime des gènes impliqués dans l'anabolisme et induit des gènes impliqués dans le catabolisme (Figure 55). ATF4 est nécessaire et suffisante pour induire l'expression de Gadd45a en condition d'atrophie musculaire induite par le jeûne et l'immobilisation [207]. L'augmentation de Cdkn1a (protéine P21) dans les muscles de souris, est nécessaire et suffisante pour induire une atrophie par immobilisation médiée par ATF4 [207]. ATF4 induit également l'expression du gène Eif4ebp1, codant pour l'inhibiteur de la synthèse protéique 4E-BP1 [207] (Figure 55). Enfin, un autre gène cible d'ATF4, TRIB3 64 , a été associé à l'atrophie musculaire dans de nombreuses études. Par exemple, les souris déficientes pour Trib3 sont en partie résistante à l'induction de l'atrophie par le jeûne. À l'heure actuelle, les mécanismes par lesquels ATF4 est activé sur le plan transcriptionnel et traductionnel dans des conditions cataboliques ne sont pas clairs, mais peuvent impliquer différents mécanismes ou combinaisons de mécanismes. Le modèle de résistance naturelle à l'atrophie musculaire de l'ours brun hibernant Le biomimétisme est une approche qui cherche des solutions durables aux défis humains en imitant les modèles et les stratégies de la nature, ce qui a déjà permis des avancées et des progrès biomédicaux humains significatifs. L'hibernation est un parfait exemple de variabilité saisonnière pouvant receler des indices et solutions diverses pour les pathologies humaines. Généralités de l'hibernation chez les ours. L'hibernation est une adaptation utilisée par certains animaux pour faire face à un manque épisodique ou saisonnier d'énergie dû à des conditions environnementales défavorables (par exemple, faible disponibilité de nourriture/eau, forte pression de prédation) [273]. La torpeur est au coeur de l'hibernation, elle représente une période de suppression métabolique qui peut durer de quelques heures à plusieurs semaines. L'hibernation est un comportement plus élaboré, structuré en plusieurs longues périodes de torpeur souvent séparées par de brèves périodes d'éveil (IBA 65 ). Les IBA durent environ 24 heures et sont présents chez presque tous les petits hibernants (c'est-à-dire <10kg) mais pas chez les ours hibernants [273]. Les ours sont des mammifères de la famille des Ursidae. Les ours des climats chauds n'entrent pas en hibernation, pas plus que le panda géant ou l'ours polaire. Dans ce projet de thèse, nous nous sommes concentrés sur les ours hibernants, c'est-à-dire les ours bruns (Ursus arctos), les ours noirs américains 64 Abstract Muscle wasting affects millions of people around the world, including the elderly, people with illnesses, and people who are immobilised for long periods of time. The loss of muscle mass leads to a decline in independence, promotes disease, increases resistance to treatment, and is associated with increased mortality. As a result, muscle wasting is a major public health problem. Many biomolecular mechanisms have been documented to explain the occurrence of muscle wasting, mainly through the use of laboratory rodent models. However, no treatment is really effective and/or adaptable for all today. The main objective of this thesis was to find new underlying mechanisms that could become therapeutic targets to combat muscle atrophy in humans. We chose an approach based on biomimicry. 
Our strategy was (1) to perform a comparative physiology study between the brown bear model naturally resistant to atrophy during hibernation and the unloading mouse sensitive to muscle atrophy, (2) to study the role of the ATF4 and TGF-β/BMP signalling pathways in these two models, and finally (3) to initiate studies on human muscle cells to validate the hypotheses from the first two studies. In our first study, the strategy was to identify genes differentially regulated in brown bear muscle between the hibernation and active periods. Then we compared them to those differentially regulated in the muscles of the unloading mouse versus the control mouse. We showed that the concomitance of inhibition of TGF-β signalling and induction of BMP signalling appeared to be crucial for the maintenance of muscle mass under conditions of prolonged physical inactivity. In our second study, we showed that the induction of the ATF4 signalling pathway in muscle was uncoupled from muscle atrophy in healthy and physically inactive mice when previously treated with the halofuginone molecule, and also in hibernating bears. In all three situations, the maintenance of muscle mass was associated with both the induction of ATF4 and BMP signalling and the inhibition of TGF-β. Finally, preliminary results obtained by cultivating human muscle cells with hibernating brown bear serum suggest the presence of a circulating active compound that may mimic some of the characteristics observed in atrophy-resistant hibernating brown bear muscle. In conclusion, this work provides numerous perspectives in the modulation of the balance of TGF-β and BMP signalling pathways in situations of prolonged physical inactivity. In addition, it opens up new research on the identification of active compounds in bear serum that could be used in the human clinic to limit or prevent the onset of muscle atrophy during immobilisation or in other pathophysiological conditions.
04098847
en
[ "info" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04098847/file/Hal___Test_de_Mutation___2023%20%281%29.pdf
Moez Krichen email: [email protected]
Une enquête sur le test de mutation: principes, avantages, limites et orientations futures
En raison de sa fiabilité et de son efficacité, l'approche du test de mutation (Mutation Testing -MT) pour le test de logiciels a gagné en popularité ces dernières années. Cette revue de la littérature pertinente fournit un résumé de MT, y compris son histoire, son état actuel, ses applications, ses avantages, ses inconvénients et ses perspectives d'avenir. Au départ, l'article définit MT et explique sa pertinence dans le test de logiciels. L'article se poursuit ensuite en définissant MT et en expliquant en quoi il diffère des autres méthodes de test de logiciels. Certaines des techniques discutées ci-dessus comprennent le statement MT, le decision MT, le condition MT, le data MT et le object-oriented MT. Ensuite, l'étude se penche sur les avantages et les inconvénients de l'intégration de MT dans le processus de test de logiciels, ainsi que sur l'état actuel des méthodologies et de la technologie de MT. Nous analysons également les avantages et les inconvénients de MT par rapport aux autres méthodes de test de logiciels. Dans ses dernières sections, l'article met en évidence les difficultés du MT existant et propose des solutions potentielles pour la recherche future, telles que l'intégration de MT avec d'autres stratégies de test de logiciels.

Introduction
Le logiciel étant présent dans presque tous les aspects de notre vie quotidienne, sa qualité et sa fiabilité sont devenues une préoccupation majeure pour les organisations et les utilisateurs. Les exemples d'applications logicielles dans les secteurs tels que les transports (13; 14), la santé (25; 24), la blockchain et d'autres, soulignent l'importance de la qualité et de la fiabilité du logiciel dans la vie quotidienne. Les erreurs ou les défauts dans le logiciel peuvent avoir des conséquences graves, allant de pertes financières aux risques pour la sécurité et la vie privée. C'est pourquoi le test logiciel est devenu une étape cruciale dans le développement des applications logicielles. Les tests de logiciels, qui visent à assurer la qualité et la fiabilité des systèmes logiciels, constituent un élément crucial du développement de logiciels (6; 12; 30). Au fil des ans, de nombreuses méthodes de tests de logiciels ont été créées, chacune présentant des avantages et des inconvénients uniques [START_REF] Lahami | A survey on runtime testing of dynamically adaptable and distributed systems[END_REF]. Le test de mutation (Mutation Testing -MT) est l'une de ces méthodes qui a récemment gagné en popularité en tant que moyen fiable d'évaluer la qualité des cas de test (43; 4). MT est une technique de test basée sur les défauts qui consiste à créer un ensemble de mutants, qui sont des versions du programme contenant un ou plusieurs défauts (49; 47; 44). L'objectif de MT est de mesurer l'efficacité des cas de test en calculant le score de mutation, qui est le pourcentage de mutants tués par la suite de tests (35; 18; 28). Si un mutant n'est tué par aucun cas de test, il est considéré comme équivalent à un défaut réel qui n'a pas été détecté, et le score de mutation est réduit [START_REF] Jefferson Offutt | Mutation testing of software using mimd computer[END_REF]. Les mutants sont créés en apportant de petites modifications au code du programme d'origine.
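À titre d'illustration, l'esquisse suivante (exemple hypothétique qui ne figure pas dans l'article : la fonction, les mutants et les cas de test sont inventés) montre un petit programme Python, trois mutants obtenus à la main par de telles modifications, puis le calcul du score de mutation entendu comme le pourcentage de mutants tués par la suite de tests.

```python
# Esquisse hypothétique : programme original, trois mutants écrits à la main,
# et calcul du score de mutation (pourcentage de mutants tués par la suite de tests).

def remise(montant):
    """Programme original P : applique une remise de 10 % au-delà de 100 euros."""
    if montant > 100:
        return montant * 0.9
    return montant

def mutant_1(montant):        # opérateur relationnel : ">" remplacé par ">="
    if montant >= 100:
        return montant * 0.9
    return montant

def mutant_2(montant):        # opérateur arithmétique : "*" remplacé par "/"
    if montant > 100:
        return montant / 0.9
    return montant

def mutant_3(montant):        # instruction supprimée : la remise n'est plus appliquée
    return montant

# Suite de tests T : couples (entrée, sortie attendue).
tests = [(50, 50), (200, 180)]

def est_tue(mutant):
    """Un mutant est tué si au moins un cas de test produit une sortie inattendue."""
    return any(mutant(entree) != attendu for entree, attendu in tests)

mutants = [mutant_1, mutant_2, mutant_3]
nb_tues = sum(est_tue(m) for m in mutants)
print(f"Score de mutation : {100 * nb_tues / len(mutants):.0f} %")  # 2 tués sur 3, soit environ 67 %
```

Dans cet exemple, le mutant obtenu en remplaçant > par >= survit : aucun cas de test n'exerce la valeur limite 100. C'est précisément ce type de lacune de la suite de tests que la MT permet de mettre en évidence.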
L'objectif est d'introduire des défauts dans le programme qui sont représentatifs des erreurs de programmation courantes, telles que :
- Changer un opérateur conditionnel (par exemple, remplacer "==" par "!=").
- Changer un opérateur arithmétique (par exemple, remplacer "+" par "-").
- Supprimer une instruction ou un bloc de code.
- Échanger deux instructions ou blocs de code.
- Ajouter une instruction ou un bloc de code.
- Remplacer les opérateurs relationnels par leur complément (par exemple, < par >=).
Cette étude de revue fournit un aperçu de la MT, en couvrant ses principes, ses types, ses innovations, ses applications industrielles, ses avantages, ses limites, ses défis ouverts et ses orientations futures. Le document commence par définir la MT et souligner son importance dans les tests de logiciels (15; 46). Il explique ensuite les principes de base de la MT et la compare à d'autres techniques de tests de logiciels (11; 9; 3; 33; 48). Le document couvre également différentes classes et types de MT, notamment la MT de déclaration, de décision, de condition, de données et orientée objet. Il discute des avancées récentes dans les techniques et les outils de MT, ainsi que des défis et des avantages de la mise en œuvre de la MT dans l'industrie. Les avantages et les limites de la MT sont analysés, ainsi qu'une comparaison avec d'autres techniques de tests de logiciels. Vers la fin du document, les défis liés à la MT sont identifiés, et des suggestions pour des recherches futures sont proposées. Celles-ci comprennent l'intégration de la MT avec d'autres méthodologies de tests de logiciels pour surmonter ses limites. Dans l'ensemble, cette étude de revue fournit un aperçu complet de la MT, qui sera précieux à la fois pour les praticiens et les chercheurs en tests de logiciels. Elle met en évidence le potentiel de la MT en tant que technique de test utile et identifie des domaines pour des améliorations futures.

Principe de MT
MT est une technique de test logiciel puissante qui consiste à créer un ensemble de mutants, ou variations du programme original, qui contiennent une ou plusieurs anomalies. Le principe de base de MT est d'évaluer l'efficacité des cas de test en mesurant leur capacité à identifier ces anomalies. Le processus de création de mutants consiste à appliquer un ou plusieurs opérateurs de mutation au code du programme, ce qui entraîne la création de nouvelles versions du programme contenant des anomalies. Les opérateurs de mutation peuvent être basés sur une variété de critères, tels que le changement de la valeur d'une variable, l'échange d'opérateurs ou la modification de déclarations conditionnelles. MT est basé sur l'hypothèse que les anomalies dans un programme sont similaires aux mutations dans le matériel génétique, dans le sens où elles peuvent être considérées comme des modifications aléatoires qui affectent le comportement du programme. Les mutations peuvent être utilisées pour simuler différents types d'anomalies, telles que des erreurs de syntaxe, des erreurs logiques et des erreurs de données. En introduisant ces anomalies dans le programme, MT fournit une évaluation plus réaliste de l'efficacité des cas de test que d'autres techniques de test.

Idée de base
Définition formelle
Soit P un programme, et soit T un ensemble de cas de test conçus pour tester P. Soit M un ensemble de mutants qui sont créés en appliquant un ou plusieurs opérateurs de mutation au code de P.
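(Esquisse hypothétique, hors article.) En pratique, l'application d'un opérateur de mutation au code de P est le plus souvent automatisée, par exemple en transformant l'arbre syntaxique du programme. Le fragment Python suivant illustre un opérateur arithmétique « + » → « - » de ce type ; les noms de classes et de fonctions sont inventés pour l'illustration.

```python
# Esquisse hypothétique : application automatique d'un opérateur de mutation
# arithmétique ("+" -> "-") sur l'arbre syntaxique d'un programme Python.
import ast

class PlusVersMoins(ast.NodeTransformer):
    """Opérateur de mutation : remplace chaque addition par une soustraction."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

source_P = "def somme(a, b):\n    return a + b\n"

arbre = ast.parse(source_P)
arbre_mutant = PlusVersMoins().visit(arbre)
ast.fix_missing_locations(arbre_mutant)
print(ast.unparse(arbre_mutant))  # nécessite Python >= 3.9 ; affiche "return a - b"
```

Des outils de MT existants, par exemple PIT pour Java ou mutmut pour Python, industrialisent ce principe en appliquant systématiquement un catalogue d'opérateurs de ce type.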
Chaque mutant m dans M est une variation de P qui contient une ou plusieurs anomalies. L'objectif de MT est d'évaluer l'efficacité de T en mesurant sa capacité à détecter les anomalies dans les mutants de M. Pour chaque mutant m dans M, soit f(m) une fonction qui représente le comportement de m. La fonction f(m) prend en entrée les mêmes données que P et produit les sorties correspondantes de m. Le score de mutation, SM(T), est défini comme : SM(T) = (|K(T, M)| / |M|) × 100%, où |M| est le nombre total de mutants dans M, et |K(T, M)| est le nombre de mutants dans M qui sont tués par T. Un mutant m est considéré comme étant tué par T s'il existe au moins un cas de test dans T qui produit une sortie différente pour m et pour P. Intuitivement, le score de mutation mesure la capacité de T à détecter les anomalies dans le programme. Si T est efficace, il devrait être capable de détecter la plupart des anomalies dans les mutants, ce qui se traduit par un score de mutation élevé. Si T est inefficace, il ne réussira pas à détecter de nombreuses anomalies, ce qui se traduira par un score de mutation faible. Étapes pour la génération et l'exécution de mutants Il existe plusieurs étapes impliquées dans la génération de mutants pour MT, notamment l'identification des opérateurs de mutation, l'application de ces opérateurs au programme original, la compilation et l'exécution des mutants, ainsi que l'évaluation de l'efficacité de la suite de test pour détecter les anomalies. Ces étapes fournissent un cadre général pour la génération de mutants en MT et peuvent être adaptées en fonction de l'environnement de test et du programme à tester ; elles sont détaillées plus loin dans ce document. En fonction de l'environnement de test et du programme à tester, des étapes supplémentaires ou des modifications de ces étapes peuvent être nécessaires. Différentes classes de MT Il existe différentes classes de MT, telles que les MT forts et faibles ainsi que les MT conventionnels. Dans les MT conventionnels, un seul opérateur de mutation est utilisé à la fois pour créer des mutants. Le processus de création de mutants pour les MT faibles nécessite la combinaison de différents opérateurs de mutation. La création de mutations qui sont fonctionnellement identiques au programme original mais avec une implémentation différente fait partie des MT forts. Avantages et Limitations Comme toute technique de test, le test de mutation (Mutation Testing -MT) a ses avantages et ses limites. Avantages L'un des principaux avantages de la MT est sa capacité à identifier les faiblesses et les lacunes de la suite de test. En introduisant de petits changements dans le code, la MT peut évaluer l'efficacité de la suite de test dans la détection de défauts et identifier les zones où la suite de test peut être améliorée. Cela fait de la MT un outil précieux pour améliorer la qualité globale du logiciel. De plus, la MT peut détecter des défauts qui ne peuvent pas être détectés par d'autres techniques de test. En introduisant de petits changements dans le code, la MT peut révéler des défauts que d'autres techniques ne peuvent pas détecter, tels que les conditions aux limites ou les défauts de logique complexes.
Cela fait de la MT un complément précieux aux autres techniques de test, améliorant l'efficacité globale du processus de test. Enfin, la MT peut évaluer la qualité de la suite de test elle-même. En générant et en exécutant un grand nombre de mutants, la MT peut déterminer l'efficacité et la fiabilité de la suite de test dans la détection de défauts. Cela fait de la MT un outil précieux pour garantir la qualité et la fiabilité du processus de test. Limitations Cependant, il y a aussi des limites à l'utilisation de la MT. L'une des principales limites est les exigences computationnelles et le temps nécessaire pour exécuter le grand nombre de mutants générés. Bien que la MT soit un moyen efficace d'évaluer l'adéquation d'une suite de test, elle comporte des coûts significatifs en termes d'argent et de ressources. L'un des coûts principaux de la MT est les ressources computationnelles nécessaires pour exécuter les tests. Cette technique implique la génération d'un grand nombre de mutants, chacun représentant une version légèrement modifiée du logiciel original, et l'exécution de la suite de test contre chacun de ces mutants. À mesure que le nombre de mutants augmente, la quantité de puissance de calcul et le temps d'exécution requis pour le processus de test augmentent de manière exponentielle. Cela peut entraîner des coûts supplémentaires en termes de matériel et de consommation d'énergie. En plus des exigences computationnelles, la MT peut également être coûteuse en termes de ressources humaines. Le processus de génération et d'analyse de mutants nécessite souvent des professionnels qualifiés du test de logiciel, qui sont très demandés et qui commandent des salaires élevés. De plus, lorsque la suite de test est jugée insuffisante pour détecter des mutants, les développeurs peuvent être amenés à consacrer du temps et des efforts supplémentaires à l'amélioration de la suite de test, ce qui peut encore augmenter le coût du projet. De plus, le coût des outils et des licences de MT ne doit pas être négligé. Bien qu'il existe des frameworks de MT open source disponibles, de nombreuses organisations choisissent d'investir dans des outils commerciaux offrant des fonctionnalités plus avancées et un meilleur support. Ces outils peuvent être coûteux, et le coût peut être prohibitif pour les petites organisations ou les projets ayant des budgets serrés. De plus, la nature chronophage de la MT peut entraîner des retards dans le cycle de développement de logiciel. Le processus de génération, d'exécution et d'analyse de mutants peut prendre une quantité considérable de temps, surtout s'il y a un grand nombre de mutants et des suites de test complexes. Cela peut entraîner des coûts supplémentaires car le temps de mise sur le marché d'un produit logiciel est prolongé, ce qui peut entraîner une perte de revenus et de parts de marché. En outre, les mutations qui n'introduisent pas réellement de défauts dans le code peuvent être difficiles à distinguer de celles qui le font, ce qui entraîne un grand nombre de faux positifs. Cela peut rendre difficile l'interprétation des résultats du test et peut entraîner un gaspillage d'efforts dans l'investigation des faux positifs. Enfin, la MT peut M. Krichen ne pas être efficace dans la détection de certains types de défauts, tels que ceux liés aux conditions aux limites ou à la performance. Dans de tels cas, d'autres techniques de test peuvent être plus appropriées. 
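Le croquis Python ci-dessous (exemple jouet, purement hypothétique) illustre la notion de mutant « équivalent », qui explique une partie des faux positifs évoqués plus haut : la mutation ne modifie pas le comportement observable du programme, de sorte qu'aucun cas de test ne peut la détecter.

```python
# Programme original : somme des éléments d'une liste.
def somme(xs):
    total, i = 0, 0
    while i < len(xs):       # condition d'arrêt originale
        total += xs[i]
        i += 1
    return total

# Mutant : "<" remplacé par "!=".
def somme_mutant(xs):
    total, i = 0, 0
    while i != len(xs):      # i part de 0 et augmente de 1 : comportement identique
        total += xs[i]
        i += 1
    return total

# Aucune entrée ne distingue les deux versions : ce mutant équivalent survivra
# à toute suite de tests et fera baisser artificiellement le score de mutation.
assert somme([1, 2, 3]) == somme_mutant([1, 2, 3]) == 6
```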
En résumé, la MT est une technique puissante pour évaluer la qualité d'une suite de test de logiciel, mais elle comporte des coûts importants en termes de ressources computationnelles, humaines, d'outillage et de temps. En tant que tel, les organisations doivent peser soigneusement les avantages de la MT par rapport à ces coûts pour déterminer si la technique est un investissement rentable pour leur projet spécifique. Directions Futures Pour répondre à certaines des limitations précédemment mentionnées, les chercheurs ont proposé diverses techniques telles que la mutation sélective, où seul un sous-ensemble de mutants est généré et exécuté, et la mutation d'ordre supérieur, qui introduit des mutations plus complexes qui peuvent être plus efficaces pour détecter les défauts. De plus, les progrès récents en matière de puissance de calcul et de techniques de parallélisation ont rendu la MT plus faisable pour une utilisation dans des systèmes plus importants. Ci-dessous, nous fournissons une brève explication de ces techniques et de la façon dont elles ont contribué à rendre la MT plus faisable pour les systèmes plus importants. -Mutation sélective : La mutation sélective vise à réduire le nombre de mutants générés et exécutés en se concentrant sur un sous-ensemble de mutations considérées comme plus efficaces pour détecter les défauts. En sélectionnant seulement un ensemble plus petit de mutants représentatifs, le coût computationnel de la MT peut être considérablement réduit. Certaines stratégies pour la sélection de mutants comprennent : - Échantillonnage aléatoire : Sélection d'un sous-ensemble aléatoire de mutants. -Techniques basées sur le regroupement : Regroupement des mutants en fonction de leur similarité et sélection d'un représentant de chaque groupe. -Sélection basée sur l'opérateur : Priorisation de certains opérateurs de mutation qui ont été démontrés comme étant plus efficaces pour détecter les défauts. -Mutation d'ordre supérieur : La mutation d'ordre supérieur implique la création de mutants avec des modifications multiples et simultanées. Ces mutations complexes peuvent conduire à une détection de défauts plus réaliste et efficace, car elles peuvent mieux représenter les types de défauts qui se produisent dans le développement de logiciels en temps réel. Bien que les mutants d'ordre supérieur puissent être plus coûteux en termes de calcul à générer et à analyser, ils ont le potentiel d'améliorer considérablement les taux de détection des défauts. -Progrès en matière de puissance de calcul et de parallélisation : À mesure que la puissance de calcul s'est améliorée, il est devenu plus faisable d'exécuter la MT sur des systèmes plus importants. De plus, les techniques de parallélisation peuvent également aider à répartir la charge de travail de la MT Une enquête sur le test de mutation: principes, avantages, limites et orientations futures sur plusieurs processeurs ou machines, accélérant ainsi le processus. Certaines stratégies de parallélisation comprennent : -Parallélisme au niveau de la tâche : Division de l'ensemble du processus MT en tâches plus petites qui peuvent être exécutées simultanément. -Parallélisme au niveau des données : Partitionnement des données d'entrée (par exemple, la suite de tests ou la base de code) à traiter de manière indépendante par différents processeurs ou machines. -Parallélisme hybride : Combinaison du parallélisme au niveau de la tâche et du parallélisme au niveau des données pour optimiser des ressources et accélérer la MT. 
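Pour illustrer de façon très schématique la mutation sélective décrite ci-dessus, le croquis Python suivant (purement hypothétique ; les sigles AOR, ROR et SDL désignent ici, à titre d'exemple, des opérateurs arithmétiques, relationnels et de suppression d'instruction) montre un échantillonnage aléatoire de mutants et une sélection basée sur l'opérateur. La liste réduite ainsi obtenue peut ensuite être découpée en sous-ensembles traités par des processus distincts, dans l'esprit du parallélisme au niveau de la tâche évoqué plus haut.

```python
import random

# Mutants représentés ici de façon schématique par un couple (opérateur, emplacement).
MUTANTS = ([("AOR", i) for i in range(40)]
           + [("ROR", i) for i in range(40)]
           + [("SDL", i) for i in range(20)])

def echantillonnage_aleatoire(mutants, taux=0.25, graine=1):
    """Mutation sélective : ne garder qu'un sous-ensemble aléatoire des mutants."""
    rng = random.Random(graine)
    k = max(1, int(len(mutants) * taux))
    return rng.sample(mutants, k)

def selection_par_operateur(mutants, operateurs_prioritaires=("ROR", "AOR")):
    """Mutation sélective : prioriser certains opérateurs jugés plus efficaces."""
    return [m for m in mutants if m[0] in operateurs_prioritaires]

print(len(echantillonnage_aleatoire(MUTANTS)))  # 25 mutants sur 100
print(len(selection_par_operateur(MUTANTS)))    # 80 mutants ROR/AOR
```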
En conclusion, la mutation sélective, la mutation d'ordre supérieur et les progrès en matière de puissance de calcul et de techniques de parallélisation ont rendu la MT plus faisable pour une utilisation dans des systèmes plus importants. Par conséquent, les développeurs et les testeurs peuvent mieux évaluer la qualité de leur logiciel et identifier les défauts plus efficacement. L'objectif de MT est d'évaluer la qualité des cas de test en mesurant le score de mutation, qui est le pourcentage de mutants détectés par la suite de test. Un score de mutation élevé indique que la suite de test est efficace pour détecter les anomalies du programme, tandis qu'un score de mutation faible indique que la suite de test peut être insuffisante et nécessite une amélioration. MT peut être utilisé pour compléter d'autres techniques de test, telles que le test fonctionnel et le test de couverture de code, afin d'améliorer la qualité et la fiabilité des systèmes logiciels. 1. Identifier les opérateurs de mutation : Identifier les opérateurs de mutation qui seront utilisés pour générer les mutants. Différents opérateurs de mutation peuvent être utilisés pour générer différents types de mutants, tels que des mu- tants faibles ou forts. 2. Appliquer les opérateurs de mutation : Appliquer les opérateurs de muta- tion au programme original pour générer un ensemble de mutants. Pour le MT traditionnel, un seul opérateur de mutation est appliqué au programme origi- nal à la fois pour générer des mutants. Pour d'autres types de MT, tels que le MT faible ou fort, plusieurs opérateurs de mutation peuvent être appliqués au programme original pour générer un plus grand ensemble de mutants. 3. Compiler et exécuter les mutants : Compiler et exécuter les mutants pour déterminer leur comportement. Les mutants doivent être compilés et exécu- tés dans le même environnement que le programme original, en utilisant les mêmes données d'entrée et les mêmes conditions d'exécution. Le comportement de chaque mutant doit être comparé au comportement du programme original pour déterminer si le mutant introduit une anomalie. 4. Évaluer l'efficacité de la suite de test : Évaluer l'efficacité de la suite de test pour détecter les anomalies introduites par les mutants. La suite de test doit être exécutée sur chaque mutant, et le nombre de mutants détectés par la suite de test doit être enregistré. L'efficacité de la suite de test peut être mesurée à l'aide de métriques telles que le score de mutation, qui est le pourcentage de mutants détectés par la suite de test. basés sur les schémas de mutants : Les Il existe d'autres types de MT, moins populaires, tels que les MT comparables, les tests basés sur les schémas de mutants et M. Krichen les tests basés sur le comportement. Nous donnons ci-dessous plus d'informations sur ces différentes classifications : -MT traditionnel : Le MT traditionnel est la classe de MT la plus largement utilisée, où un seul opérateur de mutation est appliqué au programme original à la fois pour générer un ensemble de mutants. Un opérateur de mutation est une règle ou un algorithme qui modifie le programme original en introduisant une erreur (par exemple, en changeant un opérateur relationnel ou en supprimant une instruction conditionnelle). L'objectif du MT traditionnel est d'évaluer l'efficacité d'un ensemble de tests donné pour détecter les erreurs introduites par les opérateurs de mutation. 
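Le croquis Python ci-dessous (illustration jouet, indépendante de tout outil existant) met bout à bout les étapes décrites ci-dessus pour le MT traditionnel : génération de mutants en appliquant un opérateur textuel à la fois, exécution de la suite de tests sur chaque mutant, puis calcul du score de mutation comme pourcentage de mutants tués.

```python
# Programme original conservé sous forme de source, afin de pouvoir le muter.
SOURCE = "def abs_val(x):\n    if x < 0:\n        return -x\n    return x\n"

def generer_mutants(source, remplacements=(("<", "<="), ("<", ">"), ("-x", "x"))):
    """Applique un opérateur de mutation à la fois (une substitution par mutant)."""
    return [source.replace(avant, apres, 1)
            for avant, apres in remplacements if avant in source]

def charger(source):
    espace = {}
    exec(source, espace)          # compile la version originale ou mutée
    return espace["abs_val"]

# Suite de tests T : le programme original passe tous ces cas.
TESTS = [(-3, 3), (0, 0), (5, 5)]

def est_tue(mutant_source):
    """Un mutant est tué si au moins un cas de test produit une sortie différente."""
    f = charger(mutant_source)
    return any(f(entree) != attendu for entree, attendu in TESTS)

mutants = generer_mutants(SOURCE)
tues = sum(est_tue(m) for m in mutants)
print(f"{tues}/{len(mutants)} mutants tués, "
      f"score de mutation = {100.0 * tues / len(mutants):.0f}%")
# Ici, le mutant "x <= 0" survit : c'est en fait un mutant équivalent (-0 == 0).
```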
Le MT traditionnel est informatiquement coûteux, car il nécessite la génération et l'exécution d'un grand nombre de mutants. Le MT basé sur le comportement est une classe plus récente de MT qui implique la génération de mutants en modifiant le comportement du programme, plutôt que sa syntaxe. L'objectif du MT basé sur le comportement est de générer des mutants qui sont plus représentatifs du comportement réel du programme, et de réduire le nombre de mutants non pertinents qui doivent être générés. En fonction des objectifs du processus de test et des caractéristiques du logiciel testé, chaque classe de MT présente des avantages et des inconvénients distincts et peut être utilisée. Une enquête sur le test de mutation: principes, avantages, limites et orientations futures -MT faible : Le MT faible est une variation du MT traditionnel, où plusieurs opérateurs de mutation sont appliqués au programme original pour générer des mutants. L'idée derrière le MT faible est qu'il peut augmenter le nombre de mutants générés et réduire le nombre de tests requis pour évaluer l'ensemble de tests, tout en restant informatiquement faisable. Le MT faible peut également aider à détecter des erreurs qui sont manquées par le MT traditionnel. -MT fort : Le MT fort est une autre variation du MT traditionnel, où les mutants générés sont fonctionnellement équivalents au programme original, mais avec une implémentation différente. L'objectif du MT fort est d'évaluer la capacité d'un ensemble de tests à détecter des erreurs dans l'implémentation du programme, plutôt que simplement dans sa syntaxe. Le MT fort est plus difficile à implémenter que le MT traditionnel ou faible, car il nécessite la génération de mutants qui sont fonctionnellement équivalents au programme original. -MT équivalent : Le MT équivalent est une classe moins courante de MT qui implique la génération de mutants qui sont sémantiquement équivalents au programme original, mais avec une syntaxe différente. L'objectif du MT équivalent est d'évaluer la capacité d'un ensemble de tests à détecter des erreurs liées à la structure et à la syntaxe du programme. -Tests tests basés sur les schémas de mutants sont une autre classe moins courante de MT qui implique la génération de mutants basés sur un schéma qui définit la structure et la syntaxe des mutants. L'objectif des tests basés sur les schémas de mutants est de générer des mutants qui sont plus pertinents pour la structure et la syntaxe du programme, et de réduire le nombre de mutants non pertinents qui doivent être générés. -MT basé sur le comportement : 3 Comparaison avec d'autres types de tests En plus du MT, il existe plusieurs autres types de tests que les développeurs peuvent utiliser pour assurer la qualité et la fiabilité de leur logiciel. Chaque type de test a ses propres forces et faiblesses, et les développeurs doivent choisir le type de test approprié en fonction des exigences spécifiques du logiciel. Dans cette comparaison, nous nous concentrerons sur le MT et le comparerons à d'autres types de tests pour mettre en évidence ses avantages et limites uniques. -Tests unitaires (17) : Ils consistent à tester les composants individuels du code pour s'assurer qu'ils fonctionnent correctement. L'accent est mis sur l'identification des bugs dans de petits morceaux de code isolés. Bien que les tests unitaires fassent partie intégrante de toute stratégie de test, ils peuvent ne pas être suffisants pour détecter toutes les erreurs dans le code. 
Contrairement au MT, qui évalue l'efficacité des cas de test eux-mêmes en introduisant des erreurs dans le code, les tests unitaires ne vérifient que le bon fonctionnement des composants individuels. -Tests d'intégration (10) : Ils consistent à tester les interactions entre différents composants pour s'assurer qu'ils fonctionnent ensemble correctement. L'accent est mis sur l'identification des bugs qui se produisent lorsque les composants sont combinés. Bien que les tests d'intégration soient importants, ils peuvent ne pas détecter toutes les erreurs qui découlent des composants individuels. Le MT peut identifier les erreurs à la fois dans les composants individuels et dans leurs interactions avec d'autres composants. -Tests système (50) : Ils consistent à tester l'ensemble du système pour s'assurer qu'il répond aux exigences et fonctionne comme prévu. L'accent est mis sur l'identification des bugs qui découlent du système dans son ensemble. Bien que les tests système soient importants, ils peuvent ne pas détecter toutes les erreurs qui découlent des composants individuels ou de leurs interactions. Le MT peut compléter les tests système en identifiant les faiblesses des cas de test eux-mêmes. -Tests d'acceptation (7) : Ils consistent à tester le logiciel pour s'assurer qu'il répond aux exigences et satisfait les besoins des utilisateurs. L'accent est mis sur l'évaluation du logiciel du point de vue des utilisateurs finaux. Bien que les tests d'acceptation soient importants, ils peuvent ne pas détecter toutes les erreurs qui découlent des composants individuels ou de leurs interactions. Le MT peut compléter les tests d'acceptation en identifiant les faiblesses des cas de test eux-mêmes. -Tests d'injection de fautes (34) : Ils consistent à introduire intentionnellement des erreurs dans le code ou l'environnement pour observer le comportement du système dans ces conditions. L'accent est mis sur l'évaluation de la résilience du système aux erreurs imprévues. Bien que les tests d'injection de fautes puissent être utiles pour identifier les faiblesses du système, ils peuvent ne pas être aussi efficaces que le MT pour évaluer l'efficacité des cas de test. M. Krichen -Tests Les tests d'injection de fautes sont plus axés sur l'identification des erreurs déjà connues dans le système, tandis que le MT est conçu pour identifier les faiblesses des cas de test eux-mêmes. basés sur des modèles (22; 21; 19; 26; 20; 27) : Ils consistent à créer des modèles du système et à utiliser ces modèles pour générer des cas de test. Les tests basés sur des modèles peuvent être utiles pour générer un grand nombre de tests et vérifier que le système répond aux exigences. Bien que les tests basés sur des modèles puissent être efficaces, ils peuvent ne pas détecter toutes les erreurs qui se produisent dans le code réel. Le MT peut compléter les tests basés sur des modèles en identifiant les faiblesses des cas de test euxmêmes. Cependant, le développement de modèles précis peut être chronophage et peut nécessiter des compétences et des outils spécialisés. -Tests de sécurité (23) : Ils visent à identifier les vulnérabilités et à garantir que le logiciel est protégé contre les attaques malveillantes. Les tests de sécurité sont essentiels pour garantir que le logiciel est sûr et protégé contre les menaces externes. Bien que le MT puisse aider à identifier les erreurs de sécurité dans les cas de test, il ne remplace pas les tests de sécurité spécifiques qui sont nécessaires pour garantir la sécurité globale du logiciel. 
- Tests de charge (39; 38; 40; 41) : Ils visent à évaluer la performance du logiciel sous différentes charges de travail et à identifier les limites du système. Les tests de charge sont essentiels pour garantir que le logiciel peut fonctionner de manière fiable dans des conditions de charge élevée ou variable. Bien que le MT puisse aider à identifier les erreurs dans les cas de test liées à la charge, il ne remplace pas les tests de charge spécifiques qui sont nécessaires pour garantir la performance globale du logiciel.-Tests de régression (31) Ils visent à garantir que les modifications apportées au logiciel n'ont pas introduit de nouveaux bugs ou altéré le comportement du logiciel existant. Les tests de régression sont essentiels pour garantir que le logiciel reste fiable et conforme aux exigences initiales. Bien que le MT puisse aider à identifier les erreurs dans les cas de test liées à la régression, il ne remplace pas les tests de régression spécifiques qui sont nécessaires pour garantir la fiabilité globale du logiciel. En conclusion, le MT est un type de test unique qui complète les autres types de tests en identifiant les faiblesses des cas de test eux-mêmes. Bien que le MT ne remplace pas les autres types de tests, il peut améliorer leur efficacité en identifiant les faiblesses des cas de test et en améliorant leur qualité globale. Le MT peut être utilisé dans le cadre d'une stratégie de test complète pour garantir la qualité et la fiabilité du logiciel. 4 Différents types de MT Les MT peuvent être classés en différents types en fonction des critères utilisés pour générer les mutants :-MT de déclaration : Les MT de déclaration impliquent la modification de déclarations individuelles dans le programme en introduisant des défauts. Cela peut être fait en modifiant les valeurs des constantes, des variables ou des opérateurs, ou en supprimant ou en insérant des déclarations. L'objectif des MT de déclaration est d'évaluer l'efficacité des cas de test pour détecter des défauts dans les déclarations individuelles. Dans l'ensemble, chaque type de MT offre une perspective différente sur l'efficacité des cas de test et peut être utile pour identifier différents types de défauts dans le programme. Le choix du type dépend des objectifs des tests et des caractéristiques du programme. En utilisant une combinaison de différents types de MT, une évaluation complète de la qualité des cas de test et de la fiabilité des systèmes logiciels peut être obtenue. Une enquête sur le test de mutation: principes, avantages, limites et orientations futures -MT de décision : Les MT de décision impliquent la modification du flux de contrôle du programme en modifiant les conditions de branchement des instruc- tions if et des boucles. Cela peut être fait en modifiant les opérateurs relationnels, logiques ou les expressions booléennes utilisées dans les conditions. L'objectif des MT de décision est d'évaluer l'efficacité des cas de test pour détecter des défauts dans le flux de contrôle du programme. -MT de condition : Les MT de condition sont une version plus fine des MT de décision qui impliquent la modification des expressions booléennes utilisées dans les conditions des instructions if et des boucles. Les mutations peuvent être créées en modifiant les opérateurs booléens, les opérandes des opérateurs ou la négation des opérateurs. L'objectif des MT de condition est d'évaluer l'efficacité des cas de test pour détecter des défauts dans les expressions booléennes utilisées dans les conditions. 
-MT de données : Les MT de données impliquent la modification des données d'entrée utilisées pour exécuter le programme. Cela peut être fait en modifiant les valeurs des constantes, des variables ou des structures de données. L'objectif des MT de données est d'évaluer l'efficacité des cas de test pour détecter des défauts dans les données d'entrée du programme. -MT orienté objet : Les MT orienté objet impliquent la modification de la structure du programme, telle que la modification de la hiérarchie d'héritage ou des appels de méthodes entre objets. Les mutations peuvent être créées en modifiant les appels de méthodes, en modifiant les modificateurs d'accès des méthodes et des champs, ou en modifiant les relations d'héritage. L'objectif des MT orienté objet est d'évaluer l'efficacité des cas de test pour détecter des défauts dans la conception et la mise en oeuvre orientées objet du programme. 5 Applications industrielles de MT MT a été appliqué avec succès dans diverses applications industrielles et réelles pour améliorer la qualité des logiciels. Voici quelques exemples : -Kernel Linux : La principale contribution de l'étude présentée dans (2) a été de démontrer l'utilité de l'analyse de mutation dans les grands systèmes, notamment sur le module RCU du noyau Linux. Les auteurs ont adapté des techniques existantes pour répondre aux exigences de calcul et aux nombreux faux positifs généralement associés à l'analyse de mutation. L'étude a montré que l'analyse de mutation était capable d'identifier des lacunes dans le harnais de test RCU et a mis au jour deux bogues masqués dans le module RCU. Les auteurs soutiennent que MT devrait être utilisé plus largement dans la pratique car il peut être un outil précieux pour mesurer l'exhaustivité et la qualité d'un jeu de tests et identifier les lacunes même dans les modules bien testés. -Programmes Java : Il existe plusieurs systèmes de MT différents disponibles pour Java (1; 16), notamment : -MuJava (µJava) : MuJava est un outil conçu spécifiquement pour la MT d'applications Java (37; 36). L'outil crée des mutants en utilisant une variété d'opérateurs de mutation sur le code source d'origine, notamment en modifiant les appels de méthode, en changeant les opérateurs arithmétiques ou logiques et en supprimant des déclarations. MuJava prend en charge une variété de techniques de MT, telles que la MT forte, faible et conventionnelle. En plus de supporter la génération automatisée de cas de test et l'analyse de couverture, MuJava offre également un ensemble complet d'outils pour la génération, l'exécution et l'analyse de mutants. -Jester : Jester est un outil de MT populaire pour Java, C et C++. Pour créer des mutants, il modifie les opérateurs relationnels ou supprime les instructions conditionnelles. Jester exécute ensuite le jeu de tests sur chaque mutant et calcule un score de mutation, pourcentage de mutations découvertes. Jester est extrêmement adaptable, permettant aux utilisateurs de définir des opérateurs de mutation et des niveaux de mutation de code. Il est rapide et évolutif, ce qui permet aux utilisateurs de tester des bases de code énormes de manière efficace.
Jester a été utilisé dans plusieurs projets de développement de logiciels pour trouver des bugs ignorés par les tests standard. -Simple Jester : Simple Jester est un outil de MT développé pour être une alternative plus légère à l'outil Jester original. Comme Jester, Simple Jester est conçu pour fonctionner avec du code Java et applique des opérateurs de mutation pour générer un ensemble de mutants. Il exécute ensuite le jeu de tests sur chaque mutant et calcule un score de mutation. Simple Jester est conçu pour être facile à utiliser et pour fournir une rétroaction rapide sur l'efficacité d'un jeu de tests. Il dispose d'une interface de ligne de commande simple et peut être intégré à des outils d'intégration continue comme Jenkins. Bien qu'il ne soit peut-être pas aussi riche en fonctionnalités que d'autres outils de MT, Simple Jester peut être un bon choix pour les projets qui nécessitent une solution de MT légère et efficace. -Jumble : Jumble est un outil de MT conçu pour être hautement personnalisable et extensible. Il peut être utilisé avec des bases de code Java, .NET et Ruby et offre une grande variété d'opérateurs de mutation. Jumble permet également aux utilisateurs de créer leurs propres opérateurs de mutation et de configurer l'outil pour appliquer des mutations de manière sélective en fonction de critères spécifiques, tels que la complexité du code ou la couverture de test. Jumble dispose d'une interface en ligne de commande et M. Krichen peut être intégré à des outils de construction tels qu'Ant, Maven ou Gradle. Jumble a été utilisé dans une variété de projets de développement de logiciels et s'est avéré efficace pour identifier des défauts qui sont ignorés par d'autres méthodes de test. -Javalanche : Javalanche est un outil de MT conçu pour être hautement parallélisable et évolutif. Il applique des opérateurs de mutation au code Java et génère un ensemble de mutants, puis exécute le jeu de tests sur chaque mutant et calcule un score de mutation. Javalanche est conçu pour bien fonctionner avec de grandes bases de code et peut être exécuté en parallèle sur plusieurs machines. Il fournit également des rapports détaillés sur les résultats du processus de test, y compris des informations sur les mutants qui ont été tués par le jeu de tests et ceux qui ne l'ont pas été. Javalanche peut être intégré à des outils de construction tels qu'Ant et Maven et fournit également une interface web pour visualiser les résultats de la MT. Javalanche a été utilisé dans une variété de projets de développement de logiciels et s'est avéré efficace pour identifier des défauts qui sont ignorés par d'autres méthodes de test. -PIT : PIT (abréviation de "Pitest") est un outil de MT Java populaire. Il génère une collection de mutations, exécute le jeu de tests sur chaque mutant et calcule un score de mutation. PIT permet aux utilisateurs de choisir des opérateurs de mutation, des classes à tester et d'autres fonctionnalités. Les utilisateurs peuvent également utiliser les rapports et visualisations de PIT pour comprendre les résultats des tests. La "MT incrémentale", qui aide à accélérer la MT sur de grandes bases de code, est l'une des fonctionnalités de PIT. PIT est généralement connecté à des technologies de construction telles que Maven ou Gradle et peut être exécuté automatiquement dans un flux de travail d'intégration continue. PIT a été utilisé dans de nombreux projets de développement de logiciels pour trouver des défauts ignorés par les méthodes de test conventionnelles. 
-Applications Web : MUTTA, un outil automatisé pour MT des applications Web en test de bout en bout (E2E), a été proposé par les auteurs de[START_REF] Leotta | Mutta : a novel tool for e2e web mutation testing[END_REF]. Les fichiers sources du serveur de l'application Web cible sont modifiés par MUTTA, qui exécute ensuite la suite de tests E2E contre les applications Web modifiées et recueille les résultats des tests. Le programme génère un grand nombre de mutants, utilise des données de couverture pour prendre en compte uniquement les mutants exécutés, et enregistre les résultats de la suite de tests de manière acceptable. Le processus MT des suites de tests E2E peut être automatisé à l'aide de MUTTA, et il peut être utilisé pour évaluer les suites de tests E2E, selon une étude de cas que les auteurs ont réalisée pour comparer deux méthodologies de test E2E Web. L'article présente une nouvelle technique pour évaluer automatiquement l'efficacité des suites de tests pour améliorer la qualité des applications Web. -Projets logiciels à grande échelle : L'étude (45) a proposé HadoopMutator, une plateforme MT basée sur le cloud qui utilise MapReduce pour accélérer la création et le test de mutants. L'article propose une solution pour améliorer la MT automatisée et fournir une solution évolutive pour les projets logiciels à grande échelle. Dans deux situations d'application, le système suggéré améliore la MT automatisée d'un facteur de 10. En traitant le code source comme des données, les frameworks centrés sur les données plus récents offrent des options de réutilisation supplémentaires, selon le rapport. L'article présente un cadre MT qui aide à augmenter la qualité des projets logiciels en évaluant les méthodologies de test plus rapidement et de manière plus évolutive. -Programmes Spark : L'étude présentée dans [START_REF] Batista De Souza Neto | Transmut-spark : Transformation mutation for apache spark[END_REF] a proposé TRANSMUT-Spark, un outil pour automatiser la MT du code de traitement de données volumineuses des programmes Spark. L'étude met en évidence la complexité de la programmation parallèle de données volumineuses et la nécessité de combiner intelligemment les fonctions intégrées de Spark dans les programmes pour exploiter les ressources de calcul nécessaires au traitement de données volumineuses et éviter des pertes de sortie importantes. MT est utilisée pour tester les applications Spark, et TRANSMUT-Spark automatise la génération de mutants, l'exécution des tests et l'analyse d'adéquation. La publication couvre également l'étendue et les limites de l'outil sur la base d'études de validation. L'article présente une nouvelle méthode pour évaluer les ensembles de tests et trouver des erreurs de code dans les programmes Spark. -Applications distribuées : MuTomVo, un cadre MT pour les applications distribuées dans des environnements simulés, a été présenté dans [START_REF] Pablo C Cañizares | Mutomvo : Mutation testing framework for simulated cloud and hpc environments[END_REF].
L'étude souligne les contraintes de test des applications distribuées dans des systèmes fortement distribués pendant le développement et propose d'utiliser des plates-formes de simulation pour modéliser une large gamme de configurations de systèmes distribués et exécuter ces applications dans le système modélisé. Une suite de tests peut être exécutée contre l'ensemble de modèles modifiés pour évaluer sa capacité de détection d'erreurs à l'aide du cadre proposé. L'article explique la mise en oeuvre du cadre dans MuTomVo et donne une étude de cas de trois applications fonctionnant dans divers systèmes distribués pour démontrer sa praticité. Dans l'ensemble, la recherche propose un cadre unique qui peut améliorer la qualité des applications distribuées en offrant un mécanisme fiable et automatisé pour évaluer la détection d'erreurs des suites de tests dans des situations simulées. Ces exemples démontrent l'efficacité de la MT pour identifier les faiblesses de la suite de tests et améliorer la qualité, la fiabilité et les performances globales du logiciel dans diverses applications industrielles. Conclusion Pour conclure, cet article a fourni un aperçu complet de la MT, une technique qui s'est avérée efficace pour améliorer la qualité des applications logicielles. Nous avons discuté de l'idée de base et de la définition formelle de la MT, comparé cette technique à d'autres types de tests, et exploré différents types de MT et leurs applications industrielles. Nous avons également examiné le coût de la MT et identifié ses avantages, limitations et futures directions. En particulier, nous avons souligné le potentiel de recherche et de développement supplémentaire dans le domaine de la MT, y compris l'automatisation et l'intégration de cette technique avec d'autres méthodes de test. En résumé, la MT est un ajout précieux à toute stratégie de test, offrant des avantages en termes d'identification de défauts difficiles à détecter et d'augmentation de la confiance dans la fiabilité du logiciel. Ses limites, telles que le coût computationnel élevé et la nécessité de testeurs expérimentés, peuvent être surmontées grâce à une planification et une mise en oeuvre rigoureuses. En fin de compte, l'utilisation de la MT peut conduire à un logiciel de meilleure qualité, une plus grande satisfaction des clients et une plus grande réussite commerciale. En regardant vers l'avenir, il y a beaucoup de potentiel pour une recherche et un développement supplémentaires dans le domaine de la MT. Une zone prometteuse d'investigation est l'automatisation de la MT, ce qui réduirait le temps et l'effort nécessaires pour générer et analyser les mutants. Une autre direction pour les travaux futurs est l'intégration de la MT avec d'autres techniques de test, comme le fuzz testing, pour améliorer l'efficacité de la stratégie de test globale. De plus, il est nécessaire de mener davantage de recherches sur la scalabilité de la MT, en particulier pour les applications logicielles complexes et volumineuses. Enfin, le développement de meilleurs outils et techniques pour l'analyse des résultats de la MT peut aider à maximiser les avantages de cette technique. En abordant ces défis et autres, le domaine de la MT peut continuer à croître et à évoluer, offrant une valeur encore plus grande aux développeurs et aux utilisateurs de logiciels. En fin de compte, la MT est un ajout précieux à toute stratégie de test et devrait être considérée par les développeurs et les testeurs de logiciels cherchant à améliorer la fiabilité et la qualité de leurs applications. Avec une planification et une mise en oeuvre rigoureuses, la MT peut aider à identifier les défauts et à garantir que les applications logicielles répondent aux normes les plus élevées de qualité et de fiabilité.
04097848
en
[ "info.info-tt" ]
2024/03/04 16:41:20
2023
https://hal.inrae.fr/hal-04097848v2/file/2023_GT_AQNA_Bilan_seminaire_Qualite_20mars%20%281%29.pdf
Estelle Morel Céline Housseau Alain Label Q T Représentant Dmq Avril Validation Dmq Bilan du groupe de travail national AQNA Accueil Qualité Nouveaux Arrivants : Mémento + Plaquette validée DMQ et proposition de traduire en anglais… 4, 24 février et 22 mars 2022 Réalisation de la plaquette « des acteurs qualité à vos côtés » le GT → livrables répondent aux attentes de DMQ Légende Alain Label -Fondamentaux du management qualité Bilan du groupe de travail national Accueil Qualité Nouveaux Arrivants -Estelle MOREL / Céline HOUSSEAU -20 Mars 2023 -Paris 4 Estelle Morel, CQC PACA depuis 2013 avec besoin de documents qualité harmonisés pour RQU Alain Label, QT avec connaissance feuille de route DMQ Gabrielle Boffredo, RQU RECOVER et pilote du projet passeport accueil ex-Irstea Florian Duperret, nouveau CQC avec une mission /DSA dans l'accueil au côté de la DRH et DSI François Varamo, nouveau RQU d'une unité DSI connaissance des besoins informatiques Céline Housseau, CQC + chargée de mission communication DMQ Emilie Poirel, CQC expérience dans l'appui et missionnée par le DSA pour rédiger livret d'accueil Nathalie Bosselut-Benoit, RQU d'une TGU avec expérience dans les processus d'appui Amandine Etayo, nouvelle QT et pilote projet MOOC « Qualité en recherche » /continuité du kit AQNA Emmanuel Lemoine, pilote du processus formation qualité au sein de la plateforme PACIFIQ Julien Dublon, représentant département AgroEcoSystem et du GT bien être au travail (accueil, suivi, accompagnement) Membres Alain Label -Fondamentaux du management qualité Bilan du groupe de travail national Accueil Qualité Nouveaux Arrivants -Estelle MOREL / Céline HOUSSEAU -20 Mars 2023 -Paris 5 Enjeux ➢ DMQ souhaitait améliorer sa visibilité sur les centres en se positionnant comme le font la prévention, l'informatique… présents lors des accueils de nouveaux arrivant organisées sur les centres ➢ DMQ souhaitait améliorer la visibilité et la reconnaissance des CQC sur les centres en les positionnant comme « référence » du réseau qualité dans l'accueil des nouveaux arrivants. ➢ Des Référents Qualité d'Unités (RQU) ont exprimé, depuis longtemps, le besoin de disposer de supports nationaux permettant de communiquer sur la démarche qualité INRAE aux nouveaux arrivants et même auprès des agents déjà présents. ➢ Parallèlement une demande spécifique du centre PACA avait été formulée pour travailler sur ce sujet de l'accueil des nouveaux arrivants Le groupe national AQNA a été créé en 2020 afin de : ➢ Mener une réflexion en direction de l'accueil des nouveaux arrivants pour : -Accroitre la visibilité du réseau au niveau des centres -Informer sur l'organisation qualité au niveau national et au niveau d'un centre -Donner les contacts des acteurs du réseau national qualité, des centres et de l'unité concernée -Expliquer simplement ce qu'est une démarche qualité, son intérêt et à quels enjeux elle répond -Toucher tous les nouveaux agents quelque soit leur durée de présence dans l'institut -Sensibiliser et présenter simplement les outils du management par la qualité et leur intérêt dans leur travail (PDCA, 8M, 5 Pourquoi, Méthode EureQUA, Plan d'action…) ➢ Créer les supports nécessaires à cet accueil (Livrables) ➢ Pour la commande du centre PACA : -Anticiper l'organisation administratives pour accueillir convenablement le nouvel arrivant et l'agent quittant l'unité. 
un diaporama reprenant les informations de la politique qualité INRAE + présentation d'outils organisationnels de base Cette présentation pourra être utilisée par les acteurs qualité dans les centres : CQC et RQU Diaporama clair, qui va à l'essentiel et donne envie d'aller plus loin dans la démarche qualité INRAE Il sera facilement appropriable par les acteurs qualité et disponible en format CANVA et POWERPOINT. Les outils et méthodes présentés sont à utiliser et combiner sans modération. Pour un collectif ou votre propre organisation professionnelle et/ou personnelle Et après ? Communiquer auprès des acteurs qualité du réseau : -La newsletter QualitAE → QT, CQC et CQD -Sites intranet DMQ → tous les acteurs qualité INRAE -Livret ou guide d'accueil des nouveaux arrivants → nouveaux arrivants dans les unités ou sur un centre -Supports AQNA diffusés en réunions → à tous les agents d'une unité -Journée des nouveaux arrivants → Tous les agents d'un centre -… Juillet 2022 Réflexion support pédagogique avec outils de base Rencontre FPN + consultant externe afin de structurer le projet d'un support pédagogique Livrables Tous nos remerciements aux testeurs : Anne-Laure Loiseau, Sylvie Picard, Mireille Cambert, Fabrice Egido, Jean-Noël Thibault, Nelly Rouet, Véronique Mathe, Rita Rebollo, Catherine Chaumontet, Lysiane Dherret, Stéphanie Oriol, Nathalie Bosselut-Benoit, Christelle Margoum, Nicolas Proix, Amandine Daval, Olivier Delumeau, Audrey Prezelin, Martine Letheule, Caria Giovanni, Estelle Carteret, François Varamo, Florian Duperret, Florence Borderes, Agnes Vallier, Monique Estienne, Laurent Leclere … Et d'autres !
04098901
en
[ "shs.eco" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098901/file/2022UPASI011.pdf
Aurélien Eyquem Bertrand Candelon Christoph Trebesch Gauthier Keywords: Défaut souverain et dynamique de la dette publique Dette publique, Défaut souverain, Espace fiscal, Changement climatique, Désastres naturels Public debt, Sovereign default, Fiscal space, Climate change, Natural disasters Cette thèse étudie trois problématiques liées à la soutenabilité de la dette publique, le défaut souverain, et leur lien avec le changement climatique et l'occurrence des désastres naturels. Pour étudier ces questions, je combine modélisation théorique, méthodes économétriques et empiriques. La thèse est composée de trois chapitres. Le premier chapitre est le fruit d'une collaboration avec Michel Guillard et Hubert Kempf. Nous étudions la relation entre la dynamique de la dette publique d'une part, et le taux de recouvrement de dette applicable en cas de défaut de l'Etat d'autre part. Pour cela, nous développons un modèle stochastique de défaut souverain comportant une règle de recouvrement de dette en cas de défaut de l'Etat. Cette règle dépend d'un paramètre qui permet un recouvrement partiel ou total de la dette après un défaut. Nous montrons que le ratio de dette limite, c'est-à-dire le ratio de dette publique en part de PIB maximum qui peut être soutenu sans faire défaut, est une fonction décroissante et non-linéaire du taux de recouvrement. Avant le défaut, un taux recouvrement élevé se traduit par un espace fiscal plus important, mais cela dégrade la situation financière de l'Etat en cas de défaut. Nous montrons l'importance de prendre en compte ce mécanisme pour une estimation empirique plus précise de l'espace fiscal des pays. Le deuxième chapitre repose sur un travail commun avec Adham Jaber. Nous estimons l'effet des anomalies de température sur le risque de défaut souverain et explore les canaux de transmission de cet effet. Pour cela, nous utilisons des données de panel portant sur 76 pays durant la période 1999-2017. Nous montrons qu'une augmentation de la température se traduit par une augmentation de la prime de défaut, mesurée par le spread de taux des swaps de défaut (CDS). Partant d'une équation d'évaluation des titres obligataires dérivée des modèles de défaut souverain, nous montrons l'existence d'un canal de dette limite à travers laquelle la température affecte le risque de défaut : un niveau de température plus élevé impacte négativement le taux de croissance du PIB, ce qui diminue le ratio de dette limite. Par conséquent, la probabilité de défaut augmente, ce qui se traduit par une augmentation de la prime de défaut. Le troisième chapitre s'intéresse à la relation entre le risque de défaut d'une part, et le risque d'occurrence des désastres naturels d'autre part, en particulier ceux qui sont liées au changement climatique. Pour comprendre ce lien, je développe un modèle de défaut souverain comportant une probabilité de désastre qui varie dans le temps. En premier lieu, je montre que la dette limite est une fonction décroissante et non-linéaire de la probabilité de désastre. Ensuite, j'étudie le rôle des anticipations des créanciers par rapport d'éventuels désastres dans le futur. Plus précisément, je compare trois types d'anticipations : anticipations constantes, naïves et rationnelles. Je montre que si les anticipations sont constantes, le ratio de dette limite est également constant. 
Dans le cas avec anticipations naïves, où les créanciers révisent la probabilité de désastre à chaque période sans tenir compte des variations futures de celle-ci, la dette limite varie dans le temps. Cependant, les créanciers sous-estiment considérablement le risque de défaut comparativement au cas avec anticipations rationnelles. Enfin, je montre qu'en présence du risque de désastre, le défaut peut survenir même dans un contexte très favorable où le taux d'intérêt sans risque reste à un niveau très bas inférieur au taux de croissance du PIB. Chapter 3 "Sovereign Defaults in a World of Climatic Disasters: The Expectations Channel" analyzes the expectations channel linking the increasing risk of climatic disasters and the prospects of sovereign defaults. I build a tractable model of sovereign default that allows for time-varying probability of climatic disasters and analyze the role of creditors' expectations on disaster risk. First, I show that the maximum debt-to-GDP ratio that a country can sustain without defaulting is decreasing and nonlinear in the probability of disasters. Second, I compare three types of expectations on disaster risk: the cases of constant, naive, and forward-looking expectations of disaster risk. I show that constant expectations of disaster risk lead to a constant maximum debt ratio. On the other hand, the case with naive expectations of disaster risk-creditors revising the disaster probability in each period while disregarding any future changes-leads to a time dependent maximum debt ratio, but it relatively underestimates default risk compared to the case with forward looking, rational expectations of disasters. Finally, I show that, in the presence of disaster risk, sovereign defaults can occur even in a very favorable environment with low real risk free rate, possibly below the growth rate of output. Résumé detaillé de la thèse La crise financière de 2008 et la crise de la dette européenne qui s'en est suivie à remis au centre des débats publics et académiques la question de la soutenabilité de la dette publique des Etats. La grande récession causée par la crise a profondément affecté l'équilibre des finances publiques, aussi bien dans les pays avancés qu'en développement. En reponse à cette recession, la pluapart des banques centrales ont méné une politique monétaire d'assouplissement, en ayant souvent recours à des instruments qui étaient jusque là non conventionnels et communiment connus sous l'expression « Quantitative Easing ». Cela a conduit, à l'époque, à une baisse généralisée des taux d'intérêt et un écrasement de la prime de risque sur la dette deténue par les Etats. La disparition des primes de risque avait temporairement fait de la queston de la soutenabilité de la dette des Etats un problème de second ordre sans réel conséquence à court et moyen terme. Ce point de vue était d'ailleurs largement repandu dans les débats publics, et souvent repris dans les discussions académiques (voir Blanchard, 2019, par exemple) . Après plusieurs années dans cet environmment de taux bas et d'absence de primes de risque, ce point de vue optimiste est maintenant discutable, du fait notamment des déficits fiscaux records causés par les plans de relance pour faire face aux effets de la Covid 19 et de la crise énergitiqe et alimentaire due au conflit en Ukraine. Ces chocs multiples et simultanés ont conduit à des taux d'inflation records dans un grand nombre de pays et font planer un risque de recession plus ou moins durable sur leurs économies. 
Du fait de l'inflation, nous assistons à une remontée graduelle des taux d'intérêt par les banques centrales et à la réapparition des primes de risque sur la dette des Etats. Par ailleurs, il y a urgence pour les Etats d'agir et d'investir pour la transition écologique et de faire face aux effets macroéconomiques potentiels du changement climatique et des désastres naturels de plus en plus fréquents. Tout cela met des pressions supplémentaires sur les finances publiques des pays, qui ne s'étaient pas, jusque-là, complètement remises des effets de la crise de 2008. Cette thèse a pour objectif d'apporter un nouveau regard sur la question de la soutenabilité de la dette publique et du défaut souverain, dans un contexte de changement climatique et d'augmentation de la fréquence des désastres naturels. La question du défaut souverain n'étant plus seulement un problème des pays en développement, du moins depuis la crise de la dette européenne, nous nous intéressons au risque de défaut souverain aussi bien dans les pays en développement que dans les pays avancés. Contrairement à la majorité des analyses théoriques du défaut souverain qui abordent la question en termes de décision stratégique, cette thèse privilégie l'hypothèse d'un risque de défaut « excusable » au sens de Grossman et Van Huyck (1988). Ce type de défaut apparaît lorsque les Etats ne sont pas en mesure d'obtenir suffisamment de revenus fiscaux et/ou de financements sur les marchés financiers pour rembourser leur dette. La thèse est structurée en trois chapitres et utilise une approche combinant la modélisation théorique avec l'analyse économétrique et empirique. Le premier chapitre part d'une observation empirique. En analysant les données historiques sur les défauts souverains, on constate que la plupart des défauts sont partiels dans le sens où l'Etat rembourse toujours une partie de la dette en cas de défaut. Ce constat est en contraste avec l'hypothèse standard dans les analyses théoriques selon laquelle, en cas de défaut, l'Etat ne rembourse rien à ses créanciers et sa dette est complètement effacée. Nous relâchons cette hypothèse « simpliste » dans le premier chapitre au profit du cas où le défaut souverain peut être partiel. Pour cela, nous développons un modèle stochastique de défaut souverain comportant une règle de recouvrement de dette en cas de défaut. Dans ce modèle, l'Etat, représentant un pays, emprunte sur les marchés financiers auprès de créanciers neutres au risque pour combler son déficit budgétaire et/ou faire face à ses obligations financières. L'Etat peut toutefois faire défaut sur la dette contractée, du fait notamment d'un choc de productivité négatif, de l'incapacité d'augmenter ses revenus fiscaux ou d'obtenir de nouveaux financements sur les marchés financiers. Nous proposons une règle de recouvrement de la dette qui s'applique en cas de défaut. Cette règle dépend d'un paramètre qui permet un recouvrement partiel ou total de la dette par les créanciers après un défaut. Ce paramètre peut prendre des valeurs allant de 0 (aucun recouvrement par les créanciers) à 1 (recouvrement complet). Nous résolvons le modèle de façon analytique et mettons en évidence plusieurs résultats sur le lien entre la dynamique de la dette publique et le taux de recouvrement. En premier lieu, nous clarifions les concepts de soutenabilité de la dette publique, de défaut souverain et de solvabilité, qui sont souvent confondus dans la littérature académique sur le sujet.
Nous proposons une définition plus précise et une mesure opérationnelle de ces concepts. Nous montrons que le ratio de dette soutenable est généralement plus faible que le ratio de dette limite, c'est-à-dire le ratio de dette publique en part de PIB maximum qui peut être soutenu sans faire défaut. Ce dernier est également toujours plus faible que le ratio de solvabilité, sauf dans le cas peu réaliste où l'on suppose un recouvrement complet en cas de défaut. Ensuite, nous montrons que le ratio de dette limite est une fonction décroissante et non-linéaire du taux de recouvrement. Avant le défaut, un taux de recouvrement élevé se traduit par un espace fiscal plus important, mais cela dégrade la situation financière de l'Etat en cas de défaut. Le message clé de ce chapitre est que l'analyse de la soutenabilité de la dette publique dépend énormément du paramètre définissant le taux de recouvrement. Une petite variation de ce paramètre peut avoir des effets très importants sur l'espace fiscal d'un pays et peut donc conduire à des conclusions très différentes quant à la soutenabilité de sa dette publique. Le deuxième chapitre analyse de façon empirique le lien entre le changement climatique et le risque de défaut souverain. En effet, il y a un intérêt grandissant des économistes et des décideurs de politique publique concernant les effets économiques potentiels du changement climatique. De plus, l'importance du rôle des Etats dans le financement de l'adaptation au changement climatique est largement admise dans les débats publics et scientifiques. Etant donné le niveau d'endettement relativement élevé des pays, il est naturel de s'intéresser au lien entre le changement climatique et le risque de défaut souverain. Pour analyser ce lien, nous estimons l'effet des anomalies de température sur le risque de défaut souverain et explorons les canaux de transmission de cet effet. Pour cela, nous utilisons des données de panel portant sur 76 pays durant la période 1999-2017. Nous utilisons le spread de taux des swaps de défaut (CDS) sur les titres d'obligations d'Etat comme une mesure proxy du risque de défaut. Nous considérons quatre maturités différentes de CDS, à savoir les CDS à 1, 3, 5 et 10 ans. Sur le plan économétrique, nous utilisons différentes méthodes d'estimation, notamment la méthode de régression en panel avec effets fixes. Nous mettons en évidence plusieurs résultats. D'abord, nous montrons qu'une augmentation de la température se traduit par une augmentation de la prime de défaut, mesurée par les CDS à 3, 5 et 10 ans : une augmentation d'un degré Celsius de la température par rapport à sa moyenne de long terme conduit à une augmentation du CDS de 15.61 à 31.09 points de base. Plus la maturité du CDS est élevée, plus l'effet de la température sur le CDS est important. Ce résultat suggère que les créanciers tiennent compte du risque climatique aussi bien à court qu'à moyen et long terme. Ensuite, nous examinons les mécanismes de transmission des effets de la température sur le CDS. Pour cela, nous utilisons l'équation d'évaluation des titres obligataires dérivée des modèles de défaut souverain. Nous montrons l'existence d'un canal de dette limite à travers lequel la température affecte le risque de défaut : un niveau de température plus élevé impacte négativement le taux de croissance du PIB, ce qui diminue le ratio de dette limite. Par conséquent, la probabilité de défaut augmente, ce qui se traduit par une augmentation de la prime de défaut.
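À titre purement illustratif, le croquis Python ci-dessous esquisse une régression en panel avec effets fixes pays et année sur des données simulées ; il ne reproduit ni les données, ni les variables de contrôle, ni la spécification économétrique exacte du chapitre, et les noms de variables sont hypothétiques.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Données simulées : spread de CDS (en points de base) et anomalie de température,
# par pays et par année, sans rapport avec l'échantillon réel de 76 pays.
rng = np.random.default_rng(0)
pays = [f"P{i}" for i in range(10)]
annees = list(range(1999, 2018))
df = pd.DataFrame([(p, a) for p in pays for a in annees], columns=["pays", "annee"])
df["temp"] = rng.normal(0.0, 1.0, len(df))                      # anomalie de température
df["cds"] = 20.0 * df["temp"] + rng.normal(0.0, 30.0, len(df))  # effet fictif : 20 pb par degré

# Effets fixes pays et année introduits par variables indicatrices.
X = pd.get_dummies(df[["temp", "pays", "annee"]], columns=["pays", "annee"],
                   drop_first=True, dtype=float)
modele = sm.OLS(df["cds"], sm.add_constant(X)).fit()
print(modele.params["temp"])  # coefficient estimé de la température sur le spread
```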
The key message of this chapter is that there is indeed a link between global warming and sovereign default risk, and that this risk is priced by creditors. The third chapter studies the relationship between default risk on the one hand and the risk of occurrence of natural disasters on the other, in particular climate-related ones. To understand this link, I extend the sovereign default model developed in the first chapter by introducing a disaster probability that varies over time. I solve the model analytically and then run simulations to illustrate the key mechanisms. First, I show that the debt limit ratio is a decreasing and nonlinear function of the disaster probability. Next, I study the role of creditors' expectations about possible future disasters. More precisely, I compare three types of expectations: constant, naive and rational. I show that if expectations are constant, the debt limit ratio is also constant. In the case of naive expectations, where creditors revise the disaster probability each period without taking its future variations into account, the debt limit varies over time; however, creditors then considerably underestimate default risk compared with the case of rational expectations. Finally, I show that in the presence of disaster risk, default can occur even in a very favorable environment where the risk-free interest rate remains very low, below the GDP growth rate. This chapter shows how important it is to take the risk associated with the increasing frequency of natural disasters into account in the analysis of public debt sustainability. Keywords: Public debt, Sovereign default, Fiscal space, Climate change, Natural disasters. Collaborations within the thesis: Adham Jaber (Ph.D. candidate, Université Paris 1 Panthéon-Sorbonne), Hubert Kempf (Professor, École Normale Supérieure Paris-Saclay), Michel Guillard (Professor, Université d'Évry-Val-d'Essonne). Acknowledgments I am deeply indebted to Michel Guillard for his constant guidance, support, and advice since the inception of this doctoral project, when I first met him during my first year of master's studies at the Université Paris-Saclay. I still remember the exciting discussions we had when crafting this thesis project at a coffee place or restaurant near Place Denfert-Rochereau in the south of Paris. At that time, I knew very little about what it means to be a great researcher. I learned that from him. His enthusiastic passion for research, rigor, and intellectual openness has been an invaluable example throughout my Ph.D. experience. Collaborating with him has been a fascinating experience. Above all, I think I learned from him some great principles about economic research which will stay with me forever: always listen to the model, never settle for half-baked intuitions, always reach for parsimony, and most importantly, always keep pushing. I cannot thank him enough for his generosity with his time and support. More than a supervisor, Michel is a mentor for me. I cannot overstate my debt to Hubert Kempf. Hubert is one of the humblest and most caring people I met during my Ph.D. He is the kind of person who, as a Ph.D. student, challenges you to take a broader perspective to better understand the world.
Most importantly, he always reminds you to stay kind and generous, and to keep your feet on the ground. I think every Ph.D. student needs someone like Hubert, and I was very lucky to meet him. In addition to that, Hubert is an exceptional researcher. Our collaboration has been one of the great intellectual experiences of my life, and it has also grown into a warm friendship. A special thanks to my colleague and friend Adham Jaber. I will fondly remember our amazing discussions and arguments late at night over Zoom and at the blackboard at Université d'Evry or Université Paris 1 Panthéon-Sorbonne. Thank you for sparking my interest in the applied and empirical universe of economics. I learned an immense deal from you, without ever taking a class from you. This learning experience has evolved into a fruitful collaboration, and also a warm friendship. I owe special thanks to Fabien Tripier and Thai Hahuy for animated discussions, from which sprang many ideas. I will always remember this fondly. I am also very grateful to Fabrice Pansard, who advised me during my master's studies and who first encouraged me to pursue doctoral studies. His constant encouragement and support were invaluable. I am grateful to the Centre d'Etude des Politiques Economiques of Université d'Evry (EPEE) for granting me a three-year doctoral contract, without which this research would not have been possible. I would also like to thank the EPEE team, including its directors Gregory Verdugo and Jean Debeir, and its incredible researchers, Ph.D. students, and administrative staff for all the support granted over these years. I enjoyed all the time spent in the dynamic and joyful environment they promote. I also acknowledge the financial support granted by the Labex MME-DII for the period 2016-2020. This was a great deal of support during my master's studies and the first years of this Ph.D. thesis. Thank you very much for making my life easier during this arduous and challenging journey. Introduction The Great Financial Crisis of 2007-08 and the ensuing European debt crisis have revived the long-standing issue of public debt sustainability and sovereign default. The moderate growth rates following the financial crisis, combined with the functioning of automatic stabilizers (reduction of tax revenues), have prevented many economies, both developing and developed, from reducing the high levels of public debt inherited from the financial crisis (Figure 1). For most developed countries, the implementation of unconventional monetary policies and the resulting incredibly low and even negative interest rates on their public debt had temporarily made the sustainability of public debt a second-order issue with "no real concerns in the short run". This view was regularly present both in the public debate and among some academics (see, e.g., Blanchard 2019). After a few years in that favorable environment, this optimistic view is now challenged on several grounds. A notable one is the surge in fiscal deficits due, in particular, to fiscal stimulus plans in response to the Covid-19 pandemic (see Figure 1). This wiped out previous efforts made to reduce the level of public debt in most countries. Moreover, the rapid increase in interest rates by most central banks around the world to fight surging inflation will increase the burden of public debt for most countries.
This situation could be accompanied by a re-emergence of risk premia, particularly for the most heavily indebted countries. Finally, there is an urgent need to mitigate the macroeconomic consequences of climate change and of the increasing frequency of climate-related disasters, and to finance the transition towards a more sustainable and environmentally friendly economy. All these factors put additional pressure on the fiscal stance of countries and may lead to a resurgence of sovereign default premia over the coming years. In fact, at the time of writing this thesis, some countries, especially developing ones, are showing signs of debt crisis, as reflected in the high yield spreads on their public debt (Figure 2). Worse still, some of these countries are already in default1 or on the brink of it. In this context, the issue of public debt sustainability, already topical ten years ago, is even more topical today. Defining sovereign default. Throughout the thesis, I focus on "excusable sovereign defaults" in the sense of [START_REF] Grossman | Sovereign Debt as a Contingent Claim: Excusable Default, Repudiation, and Reputation[END_REF]. This type of default is associated with identifiable bad states of the world. Such defaults occur only when, following a negative shock, the government is unable to raise sufficient fiscal and debt-issuance revenue to repay due debt. This definition of default differs from the one used in the literature on strategic default à la [START_REF] Eaton | Debt with potential repudiation: Theoretical and empirical analysis[END_REF]. According to the latter, the government optimally decides whether to repay or to default on due debt regardless of the realized state of nature, even if it has the ability to meet its financial obligations. Put bluntly, strategic defaults occur because the government is unwilling to repay, while excusable defaults occur because it is unable to repay. While the strategic default approach is the most standard framework, recent empirical studies document that countries are in general reluctant to default (see Yeyati and Panizza 2011). Moreover, the issue of public debt sustainability, which is the focus of this thesis, has little relevance when the government can strategically default each time it finds this optimal. A key stylized fact in historical sovereign defaults data is that sovereign default is almost always partial, in the sense that creditors are able to recover a fraction of the defaulted debt after default. Although it is well supported by recent empirical studies (Sturzenegger and Zettelmeyer 2008; [START_REF] Cruces | Sovereign defaults: The price of haircuts[END_REF]), this fact is in contrast with the standard assumption of zero debt recovery found in most theoretical models of sovereign default. The first chapter of the thesis, based on joint work with Michel Guillard and Hubert Kempf, relaxes this unrealistic assumption. We propose a tractable stochastic model of sovereign default that allows for partial debt recovery after a default. In this model, a country, represented by its government, borrows from risk-neutral creditors on international financial markets to balance its budget constraint, namely to obtain the funding it needs to finance its fiscal deficit and/or to repay debt contracted in the past.
The government may however default on its debt obligations due, in particular, to bad productivity shocks, the inability to raise sufficient fiscal revenue, either by increasing taxes or lowering deficits, or to get sufficient funding from creditors. We propose a simple debt-recovery rule that applies following a default. It depends on a unique parameter, which we refer to as the debt-recovery parameter. This parameter is equal to one minus the "haircut"the fraction of debt-to-GDP ratio lost by creditors following a sovereign default. It can take any value from 0 to 1. The case with 0 corresponds to a full repudiation of the defaulted debt, while the case with 1 is equivalent to full repayment of public debt. We solve the model explicitly and analyze its numerical properties and empirical relevance. We find several novel results, which have direct echos with the related literature and profound policy implications. First, we clarify the notions of the sustainability of public debt, sovereign default, and solvency, which are quite often overlooked by economists and in public debate. We provide more precise definition to these concepts, discuss the importance to distinguish them, and provide operational measure to each of them. We show that the sustainable debt-to-GDP ratio is in general lower than the default ratio-the maximum debt-to-GDP ratio that a country can sustain without defaulting. This latter is itself always lower than the solvency ratio, which obtains under the standard transversality condition on public debt, except in an unrealistic case where the debt-recovery parameter is set to one, its upper limit. In this particular case, the two ratios are equivalent. We find that sustainable and default ratios are both increasing, nonlinear, and sensitive to the debt-recovery parameter: even a small change in the debt recovery parameter can have substantial effects on the sustainable and default ratios. The full dynamics of public debt is also shown to depend on the debt recovery parameter, as well as on the realizations of the growth shock. To better understand this dynamics, we resort to the notion of risky steady state (RSS) recently used by [START_REF] Coeurdacier | The Risky Steady State[END_REF]. This allows us to analyze the impact of a productivity shock when agents form their expectations of relevant variables and take decisions based on the probability distribution of future shocks whereas the realizations of these shocks are equal to their mean values. We show that a RSS debt level does not always exist in this framework. It exists only for sufficiently high values of the debt recovery parameter. In particular, there is no RSS under the no debt recovery assumption found in most models of sovereign default. Building on these results, we introduce a new definition of debt unsustainability: public debt is unsustainable when its trajectory leads to the default ratio at some finite date, assuming that there is no realization of the growth shock higher than the mean. This allows us to revisit the concept of "fiscal space" introduced by [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF]. A fiscal space measures the capacity of a country to secure additional borrowing to face bad shocks without defaulting. Here we precisely define the fiscal space as the difference between the actual and RSS debt ratios when this latter exists, or the actual and default ratios when there is no RSS. 
Since both default and RSS ratios depend positively on the debt recovery parameter, it plays a critical role in the assessment of country fiscal spaces. Finally, we study the post-default dynamics of public debt. We show that there exists a critical value of the debt recovery parameter such that the post-default debt ratio is "sustainable" if this parameter is below the critical value. Otherwise, the after-default public debt is unsustainable and the defaulting country is exposed to what is known in the literature as "serial defaults", that is repetitive defaults. A key message of this chapter is that the assessment of the sustainability of public debt depends crucially on the value of the debt recovery parameter. A small change in this parameter can have substantial effects on the dynamics of public debt, and therefore lead to very different conclusions in terms of debt sustainability analysis. We illustrate the role of this parameter trough calibrations, simulations and estimations of the model using historical data on emerging and advanced countries. We find that the debt-recovery parameter that is implicit in sovereign yield spreads is relatively lower for emerging countries than for advanced ones. Since the fiscal space is positively related to the debt-recovery parameter, this result partly explains the paradox of "debt intolerance": compared with advanced countries, emerging countries experience both lower default ratios, that is a lower debt tolerance by markets, and higher risk premia. Accelerating climate change and the increase in the frequency of extreme climate shocks, such as heatwaves, droughts, hurricanes and coastal flooding, have recently received particular attention, both in academia, public debate and the media. A large strand of the literature documents the impacts of climate change on economic growth [START_REF] Nordhaus | Geography and macroeconomics: New data and new findings[END_REF][START_REF] De Jong | Temperature Shocks and Economic Growth: Evidence from the Last Half Century[END_REF][START_REF] Burke | Global non-linear effect of temperature on economic production[END_REF] and various economic outcomes (see [START_REF] Dell | What Do We Learn from the Weather? The New Climate-Economy Literature[END_REF] and Kolstad and Moore 2019 for a survey). Yet, there is little evidence on the link, and the nature of the link, between climate change and sovereign risk, and whether financial markets effectively price climate-related risk. Chapter 2 of the thesis, which is based on a joint work with Adham Jaber, contributes to the growing literature on the impacts of climate change on the economy. The goal of this chapter is to empirically assess the implications of climate change for sovereign default risk. To address this issue, we use temperature anomalies-temperature's deviation from its long-run mean-as a proxy for climate change. As for sovereign default risk, we use sovereign Credit Default Swap (CDS) spread as a proxy. The chapter is organized around its two main contributions. The first part documents the relationship between temperature anomalies and sovereign CDS spreads. We consider sovereign CDS spread at several maturities-one, three, five and ten-year maturities. Our key hypothesis is that financial markets account for climate risk when lending to countries. Econometrically, we address this issue using a large panel dataset that covers 76 developing and advanced countries from 1999 to 2017. The countries are selected based on data availability only. 
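To make the empirical setup concrete, the sketch below sets up a panel regression of sovereign CDS spreads on temperature anomalies with country and year fixed effects, the benchmark specification discussed next. It is purely illustrative: the data are synthetic, the variable names (temp_anomaly, cds_5y, the controls) are hypothetical placeholders, and the use of the linearmodels package is an assumption, not a description of the authors' actual code.

```python
# Hedged sketch of a two-way fixed-effects panel regression of CDS spreads on
# temperature anomalies. Synthetic data; dimensions (76 countries, 1999-2017)
# follow the text, everything else is a placeholder.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
countries = [f"C{i:02d}" for i in range(76)]
years = list(range(1999, 2018))
idx = pd.MultiIndex.from_product([countries, years], names=["country", "year"])

df = pd.DataFrame({
    "temp_anomaly": rng.normal(0.0, 0.5, len(idx)),    # deviation from long-run mean (deg C)
    "gdp_growth":   rng.normal(0.02, 0.02, len(idx)),
    "debt_gdp":     rng.normal(0.60, 0.20, len(idx)),
}, index=idx)
df["cds_5y"] = 25 * df["temp_anomaly"] + rng.normal(0, 50, len(idx))  # synthetic spread, bps

res = PanelOLS(df["cds_5y"], df[["temp_anomaly", "gdp_growth", "debt_gdp"]],
               entity_effects=True, time_effects=True).fit(
                   cov_type="clustered", cluster_entity=True)
print(res.params["temp_anomaly"])   # estimated effect of a 1 deg C anomaly, in basis points
```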
Using a standard two-way fixed-effects estimation method, we document a strong positive impact of temperature on sovereign CDS spreads for the three, five and ten-year maturities, but not for the one-year maturity: a one degree Celsius increase in temperature, relative to the long-run mean, increases CDS spreads by 15.61 to 31.09 basis points. This effect is statistically and economically significant and robust to alternative measures of temperature anomalies. Moreover, we find that the longer the maturity of the CDS, the larger the impact of temperature on spreads. This finding suggests that sovereign creditors price climate risk not only when investing over a short horizon but also over medium and long ones. In the second part of our analysis, we investigate the key channels through which the estimated positive effect of temperature on CDS spreads may occur. To isolate these channels, we build on an equilibrium bond pricing equation found in most theoretical models of sovereign default, including the one proposed in Chapter 1. This equation relates the spread to its key underlying macroeconomic fundamentals. We document the existence of a "debt limit channel" of temperature: a higher temperature has a negative impact on the future growth rate of output, which lowers the country's debt limit-the maximum debt-to-GDP ratio it can sustain without defaulting. As a result, the probability of default increases, leading to a higher CDS spread. We find that this debt limit channel accounts for the bulk of the estimated effect of temperature on the CDS spread. Interestingly, we find that the debt-to-GDP ratio and the primary balance, which are two macroeconomic determinants present in the basic pricing equation, do not play any role in the transmission of the effect of temperature to CDS spreads. Our findings have interesting implications for the policy responses to climate change in a context of high public debt-to-GDP ratios and limited fiscal space. Our identification of the key mechanisms suggests that climate risk must be taken into account in the assessment of public debt sustainability. There is growing interest among academics and policymakers in the economic impacts of large macroeconomic shocks and the appropriate policy responses. This interest has been strongly revived recently by the increase in the frequency and intensity of natural disasters, in particular climate- and weather-related ones. In its latest Atlas of Mortality and Economic Losses from Weather, Climate and Water Extremes (August 2021), the World Meteorological Organization shows that the number of climate-related disasters, such as floods and extreme temperatures, increased five-fold over 1970-2019, killing more than 2 million people and causing $3.64 trillion in total losses. Recent empirical studies show that climate-related disasters have been particularly salient in some recent sovereign default and debt restructuring episodes. Notable examples are the Dominican Republic in 1998, Grenada in 2004, and Antigua & Barbuda in 2004 and 2009 (see International Monetary Fund (1999a) and [START_REF] Asonuma | Sovereign Debt Restructurings in Grenada; Causes, Processes, Outcomes, andLessons Learned[END_REF]; other default episodes related to climatic disasters are Moldova and Suriname, which defaulted respectively in 1992 and 1998 following severe droughts (International Monetary Fund, September 1999; [START_REF] De Jong | Temperature Shocks and Economic Growth: Evidence from the Last Half Century[END_REF]), and Ecuador, which defaulted in 1997 just a few months after floods caused major power shortages ([START_REF] Sturzenegger | Debt defaults and lessons from a decade of crises[END_REF])). Since countries across the world rely heavily on borrowing from international financial markets, an increase in the frequency of disasters and the associated losses, as predicted by climate scientists, may reinforce their fiscal vulnerabilities and the risk premium on their public debt. Chapter 3 of the thesis investigates the link between sovereign default risk and the risk of natural disasters, in particular climate-related ones.
To address this issue, I expand the framework developed in Chapter 1 and introduce a time-varying probability of disasters. I do so in a tractable way, so that the key mechanisms of the model can be analyzed analytically. In the baseline model, the probability of disaster is a deterministic function of time with a linear trend. Next, I estimate the probability of disaster based on historical disaster occurrences, and calibrate the model to better understand its key properties. Two main findings come out of this analysis. First, I show that the maximum debt-to-GDP ratio that a country can sustain is decreasing and nonlinear in the probability of disasters. Second, in the presence of disaster risk, a sovereign default can occur even in a very favorable environment with a low risk-free rate or a high growth rate. I show how these findings may change according to the type of expectations that creditors hold about disaster risk. Specifically, I compare three cases: i) constant disaster risk; ii) "naive" expectations, corresponding to a situation where creditors are short-sighted and revise the disaster probability in each period while ignoring any future changes to this probability; and iii) forward-looking, rational expectations about disaster risk. I show that when disaster expectations are constant over time, the maximum debt ratio is also constant. On the other hand, the case with naive expectations of disaster risk leads to a time-dependent maximum debt ratio; however, this naive approach underestimates sovereign default risk relative to the case where creditors have forward-looking, rational expectations about disaster risk. In the last part of the chapter, I provide an extension of the model in which there is uncertainty about the disaster probability and creditors engage in Bayesian learning about this probability. A nice feature of this model is its relative simplicity. Although it focuses on extreme climate events, the setup that I develop can readily be applied to other types of extreme events not necessarily related to climate, such as major conflicts, the Great Recession, the Covid-19 pandemic, the war in Ukraine, or any event that can have a severe effect on output growth. I plan to adopt this more general approach in a future version of this work.
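As a rough illustration of how a rising trend in the disaster probability could be estimated from historical occurrence data, as in the calibration step mentioned above, the following hedged sketch fits a logistic trend to a synthetic 0/1 series of annual disaster hits. The data and the logistic specification are assumptions made for illustration only, not the estimation used in Chapter 3.

```python
# Hedged sketch: fitting a time trend in the annual disaster probability from a
# synthetic 0/1 series of disaster occurrences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1970, 2020)
true_p = 0.05 + 0.004 * (years - years[0])   # slowly rising "true" probability (synthetic)
hits = rng.binomial(1, true_p)               # synthetic disaster indicator

X = sm.add_constant(years - years[0])
fit = sm.Logit(hits, X).fit(disp=0)
p_hat = fit.predict(X)                       # fitted, time-varying disaster probability
print(p_hat[0], p_hat[-1])                   # estimated probability at the start and end of the sample
```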
A stylized fact in historical sovereign defaults data is that default is almost always partial, that is, creditors are able to recover a fraction of the defaulted debt after default.1 This suggests the existence of a "debt recovery channel", which we define as the link between sovereign defaults, public debt sustainability and the fraction of due debt recovered by lenders after a sovereign default.2 In this paper we investigate such a channel and show how it affects the dynamics of public debt, its sustainability, and the occurrence of sovereign defaults. Specifically, we analyze how lenders' expectations of debt recovery after a potential default contribute to the "snowball effect" related to the default premium included in the interest rate on public debt. Relying on the concept of "excusable default" (see the seminal paper of [START_REF] Grossman | Sovereign Debt as a Contingent Claim: Excusable Default, Repudiation, and Reputation[END_REF]), we set up a tractable stochastic model of sovereign default with a "debt recovery rule" that allows for partial debt haircuts. We use a simple specification of such a rule which hinges on a unique parameter defined as the expected maximum debt recovery rate. We solve the model explicitly and uncover the following main findings. First, we show that a country's default ratio-the maximum debt-to-GDP ratio that can be sustained without default-is increasing, nonlinear and very sensitive to the debt recovery parameter. We show that the default ratio is different from the solvency ratio, which obtains under the standard transversality condition on public debt. Second, we resort to the concept of risky steady state (RSS) to analyze the dynamics of debt and introduce a new definition of debt unsustainability: public debt is unsustainable when its trajectory leads to the default ratio at some finite date, assuming that there is no realization of the growth shock higher than the mean. The whole dynamics of public debt is shown to depend on the recovery parameter. Third, we use historical data on both advanced and emerging countries and provide estimates of the recovery parameter for both groups. We find that these estimated parameters are markedly lower for emerging countries than for advanced countries. This result sheds light on the evidence of debt intolerance, that is, the fact that countries face different default ratios and experience default at very different debt-to-GDP ratios, consistent with observed risk premia.3 Finally, we reassess the issue of sustainability when the real risk-free interest rate is low, possibly lower than the growth rate. We show that even for high values of the debt recovery parameter, a sovereign default cannot be ruled out as the default ratio is finite, although the solvency ratio-which corresponds to a more classical definition of sustainability-is infinite in this case. The paper is organized as follows. Section 1.2 provides a brief review of the literature. Section 1.3 presents the model. Section 1.4 addresses the valuation of public debt and its link with the debt recovery rule. Section 1.5 analyzes the dynamics of public debt in the presence of stochastic shocks and addresses the issue of unsustainability. Section 1.6 provides estimations of the debt recovery parameter and computes the country default ratios and fiscal spaces associated with these estimates. Section 1.7 concludes.
Related literature Willems and Zettelmeyer (2021) provide a recent and up-to-date survey on sovereign debt sustainability, which is a useful introduction to the topic. [START_REF] Sturzenegger | Debt defaults and lessons from a decade of crises[END_REF], [START_REF] Reinhart | This time is different: A panoramic view of eight centuries of financial crises[END_REF] and [START_REF] Das | Sovereign debt restructurings 1950-2010: Literature survey, data, and stylized facts[END_REF] provide comprehensive surveys of historical sovereign defaults and restructurings. In a pioneering work, Sturzenegger and Zettelmeyer (2008) introduce a methodology to compute haircuts on defaulted debt. The haircut is defined as the percentage difference between the present value of old and new debt instruments issued during debt restructuring. Using data for 14 debt restructurings in 1998-2005, they document average haircuts ranging from 13% to 73%. [START_REF] Cruces | Sovereign defaults: The price of haircuts[END_REF] and, more recently, [START_REF] Meyer | Sovereign Bonds since Waterloo[END_REF] use a similar approach to compute haircuts using data on sovereign default events in a larger number of countries and over a time period going back to 1815. They find that debt repudiation and debt cancellations (haircuts of, or close to, 100%) are the exception rather than the rule. Following [START_REF] Eaton | Debt with potential repudiation: Theoretical and empirical analysis[END_REF], the bulk of theoretical studies on sovereign default address the issue in a strategic framework. [START_REF] Aguiar | Sovereign debt[END_REF] and [START_REF] Mitchener | Sovereign Debt in the 21st Century: Looking Backward, Looking Forward[END_REF] provide useful surveys on this topic. This literature focuses on solving the puzzle of the existence of sovereign debt contracts between fully rational agents when there is no or limited enforcement capacity. The issue is the design of efficient contracts taking into account the sovereign's incentive to default. Important references on the subject are [START_REF] Calvo | Servicing the public debt: The role of expectations[END_REF], [START_REF] Cole | Self-fulfilling debt crises[END_REF], [START_REF] Aguiar | Defaultable debt, interest rates and the current account[END_REF] and [START_REF] Arellano | Default risk and income fluctuations in emerging economies[END_REF]. The standard assumption in these papers is a full discharge of public debt after default and a sanction by lenders in the form of complete exclusion from financial markets. These assumptions are in contrast with the empirical studies mentioned above and with our work.4 In particular, we allow for a partial haircut on the defaulted debt and for the possibility that the government re-enters the markets after default. A few recent papers depart from the complete default assumption of early papers in the strategic default paradigm. Yue (2010) develops a model of debt renegotiation with Nash bargaining and complete information. In her setting, the government and creditors bargain over a debt haircut that maximizes the total renegotiation surplus. She shows that the renegotiation outcome affects the expected duration of financial exclusion, and therefore the country's incentive to default. In the same spirit, [START_REF] Benjamin | Recovery Before Redemption: A Theory Of Delays In Sovereign Debt Renegotiations[END_REF] and [START_REF] Ghosal | Waiting for a haircut? A bargaining perspective on sovereign debt restructuring[END_REF]
consider a model of debt renegotiation with a dynamic alternating-offers framework to analyze the delays observed in some historical debt restructurings.5 Arellano, Mateos-Planas and Rios-Rull (2019) emphasize the role of missed payments on debt service preceding sovereign default events. In their setting, each period the sovereign strategically decides whether to fully honor its debt payment or to miss a fraction of it. The missed payments accumulate as arrears and add to future debt. In their model, the government uses missed payments to transfer resources intertemporally and to smooth consumption. Following the seminal paper of [START_REF] Grossman | Sovereign Debt as a Contingent Claim: Excusable Default, Repudiation, and Reputation[END_REF], a growing strand of the literature takes a different approach and models sovereign defaults as "excusable". Our paper clearly adopts this approach. An "excusable default" excludes any strategic decision by the sovereign to default and is solely associated with identifiable "bad states of the world".6 Such defaults occur when the government is unable to obtain the necessary funds to refinance its outstanding debt, either by issuing new debt, by decreasing public spending or by raising taxes.7 (The empirical literature documents that, while the average length of market exclusion was 4 years in the 1980s, it dropped to 2 years during the 1990s; [START_REF] Meyer | Sovereign Bonds since Waterloo[END_REF] note that, in the recent period, defaulting countries have managed to place bonds quickly post-default, a notable example being Argentina in 2016, which re-accessed international markets only months after its 7th default.) In a model of excusable default, [START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF] shows that the existence of fiscal limits drastically modifies the conditions for the sustainability of debt and contributes to defaults. [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF] relate fiscal fatigue to public default and endogenously derive the "debt limit". Assuming that default may occur in one period only, [START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF] investigate the gradual worsening of the public debt position due to the presence of long-term debt. Assuming zero debt recovery (a haircut of 100%) by investors in case of a sovereign default, [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF] propose a measure of maximum borrowing for advanced economies. This assumption is at odds with the observations on historical sovereign defaults mentioned before. As we shall see below, it substantially underestimates a country's maximum borrowing, which we find to be a highly non-linear function of the (expected) haircut. Finally, the issue of public debt sustainability has recently been re-examined, taking into account the low risk-free interest rate relative to the growth rate. [START_REF] Blanchard | Public debt and low interest rates[END_REF], [START_REF] Sergeyev | Debt Sustainability in a Low Interest Rate World[END_REF] and [START_REF] Mauro | r minus g negative: Can We Sleep More Soundly?[END_REF] suggest that negative r-g differentials8 are quite common over the past 200 years and characterize recent years.
The authors of these two last papers and [START_REF] Blanchard | Redesigning EU fiscal rules: From rules to standards[END_REF] nevertheless point to the possibility of abrupt bond yield reversals and subsequent reappearances of public debt sustainability issues. The model. We consider a small open economy with international financial markets and perfect diversification of risks. Time is discrete t = 0, 1, 2 . . .. In each period t, a quantity Y t of goods is available and represents the country's GDP. Let a t ≡ Y t /Y t-1 be the gross rate of growth of output between t -1 and t. 9 We assume that a t evolves randomly across time and follows a probability law with the following characteristics: Assumption 1. 1. a t is an i.i.d. random variable with a density function g (a) , denoting by G (a) its cumulative distribution function, both defined on the interval [0, +∞), and E (a) ≡ ā < β -1 where β -1 = 1 + r is the risk-free real gross interest rate; 2. the hazard function z (a) = g (a) 1-G(a) is monotone and non-decreasing. Assumption 1.1 makes clear that the productivity follows a random walk and the condition E (a) < β -1 will guarantee that the long run growth rate is inferior to the risk-free interest rate for this economy. We will relax this assumption in Section 1.6. Assumptions 1.2 is a regularity assumption which allows us to exclude the possibility of multiple equilibria as it will be made explicit in Section 1.4. Private sector. We assume that international financial markets allow perfect coverage against risk and therefore investors behave as risk-neutral agents. Consider a one-period maturity security offering -in the absence of default -a promise of one unit of goods in t + 1. The price at date t, denoted q t , of such a security satisfies rational expectations if q t = βE t h t+1 , (1.1) where h t+1 is the fraction of the end-of-period value that will be repaid in a given state of nature in period t + 1, with h t+1 = 1 if there is no default and h t+1 < 1 in case of default. 1.3.2 Government. Fiscal rule and fiscal constraint. The government generates a sequence of primary fiscal surpluses as fractions of output {s t }, representing total taxes collected minus total outlays on government purchases and transfers. A negative value of s t corresponds to a primary deficit. The government balances its budget by issuing one-period maturity Treasury bonds of facial value 1 at price q t . The level of debt (which is also the number of bonds emitted in t) is denoted by B t . In case of default at t, it reimburses a fraction h t < 1 of its debt contracted at t-1, B t-1 . The instantaneous government budget constraint writes: q t B t = h t B t-1 -s t Y t , (1.2) with h t ∈ [0, 1] . This parameter takes the value of 1 if there is no default in t and a lower value, given by a debt recovery rule, when the government is unable to meet its financial obligations in t and thus defaults. Following [START_REF] Davig | Inflation and the fiscal limit[END_REF], [START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF] and [START_REF] Daniel | Pushing the limit? Fiscal policy in the European Monetary Union[END_REF], we assume that the primary surplus s t increases with the actually redeemed debt-to-GDP ratio, up to a limit denoted by ŝ: s t = min s + θ • h t B t-1 Y t -ω ; ŝ , (1.3) where ω ≥ 0 is the long run target for the outstanding debt-to-GDP ratio in period t : B t-1 /Y t . 
Such a limit to the primary surplus can be justified by the coexistence of tax distortions (leading to a Laffer curve) and inelastic public expenditures. We make the following assumption: Assumption 2. The parameters θ, s and ŝ satisfy: θ > 1 -βā, and ŝ > s ≡ (1 -βā) ω. The presence of the upper bound ŝ captures the maximum fiscal effort the government is able to make in order to repay its debt. When the primary surplus has reached its maximum value ŝ, we refer to this situation as fiscally constrained and we will say that the economy is in a constrained fiscal regime. Default and the debt recovery rule. Default occurs only when the government does not obtain the necessary funds to refinance its outstanding debt. Let us denote by Ω def t the maximum (face value of) debt which can be redeemed by the Treasury in t: default occurs when B t-1 > Ω def t . We refer to Ω def t as the "default threshold" for period t. As we will see later, this threshold obtains in equilibrium on the financial markets. We abstract from specifically studying the bargaining process between the defaulting public borrower and its lenders and consider that it is captured by a simple debt recovery rule, contingent on the level of contractual debt B t-1 and on the default threshold Ω def t , is applied. We use the following specification: h t =        h • Ω def t /B t-1 if B t-1 > Ω def t 1 else (1.4) with 0 ≤ h ≤ 1. 10 According to this rule, any realization of the (stochastic) default threshold Ω def t below the contractual level of debt triggers default and a rescheduling of public debt. This rescheduling is such that the after-default (redeemed) debt level is a fraction of Ω def t , i.e. h t B t-1 = hΩ def t . By considering the limit case where the overrun is negligible (B t-1 → Ω def+ t ), h can be interpreted as the maximum debt recovery rate in a default episode. By extension, 1 -h is the minimal rate of default, or equivalently and loosely speaking, the lowest possible "haircut". This rule displays two important features: 1. This debt recovery rule has the property of ensuring that the government is immediately able to re-enter the bond market as its post-default initial debt is below Ω def t and the economy functions again according to the set of equations characterizing its dynamics. 2. The possibility of future defaults is not ruled out. Nevertheless the rule allows the defaulting government to withstand adverse shocks in the future. The lower is h, the more room there is to accommodate future adverse shocks. 1. is meant to simplify the analysis of the dynamics and could be relaxed at the cost of cumbersome analytical complexities. 2. is important as it captures the fact that a debt rescheduling is a temporary arrangement. It does not necessarily provide a definitive solution to a country's fiscal situation which may worsen due to adverse shocks. Cross-country evidence shows that the ratio of recovered debt to due debt h t is not unique and markedly differs across countries and circumstances.11 This evidence is consistent with (1.4) when considering country-specific values of h. Moreover the realized values of h t are affected by macroeconomic shocks. The no-Ponzi condition and the solvency ratio. The government's budget constraint is subject to a no-Ponzi condition: lim T →∞ E t β T h t+T B t+T -1 ≤ 0. (1.5) Using (1.1) in (1.2), one gets: βE t h t+1 B t = h t B t-1 -s t Y t . 
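As a minimal illustration before turning to the no-Ponzi condition, the two policy objects just introduced, the capped fiscal rule (1.3) and the debt-recovery rule (1.4), can be coded directly. The parameter values below are illustrative placeholders, not the calibration used later in the chapter.

```python
# Minimal sketch of the fiscal rule (1.3) and the debt-recovery rule (1.4).
# Parameter values are placeholders chosen for illustration only.

def primary_surplus(omega_t, s_bar=0.01, theta=0.05, omega_target=0.6, s_hat=0.05):
    """Primary surplus (share of GDP) as a function of the redeemed debt ratio omega_t."""
    return min(s_bar + theta * (omega_t - omega_target), s_hat)

def recovery_fraction(b_prev, a_t, omega_def, h=0.9):
    """Fraction h_t of the face value actually repaid in period t, eq. (1.4)/(1.15)."""
    threshold = a_t * omega_def          # default threshold Omega_t^def in face-value units
    if b_prev > threshold:               # default: creditors recover h * Omega_t^def
        return h * threshold / b_prev
    return 1.0                           # no default: full repayment

print(primary_surplus(0.7), primary_surplus(2.0))              # interior vs. capped fiscal regime
print(recovery_fraction(1.2, 1.0, 1.3), recovery_fraction(1.5, 1.0, 1.3))  # no default vs. default
```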
Defining ω t ≡ h t B t-1 /Y t , and remembering that a t+1 = Y t+1 /Y t , we obtain: βE t a t+1 ω t+1 = ω t -s t , (1.6) and the no-Ponzi condition (1.5) is equivalent to: lim T →∞ E t β T T n=1 a t+n ω t+T ≤ 0. (1.7) The no-Ponzi solution is consistent with individual rationality and therefore standard in macro models. In models where the possibility of defaults is a priori excluded, this condition corresponds to a debt sustainability condition. As we shall see below, when taking into account the possibility of defaults and therefore of debt rescheduling, this equivalence does not hold anymore. Note that ω t is a stochastic variable which may "jump" in each period according to the growth rate innovation and the possibility of a sovereign default. Using the definition of ω t , the fiscal rule (1.3) rewrites: s t = min (s + θ • (ω t -ω) ; ŝ) . (1.8) Using (1.8) and the definition of s given in Assumption 2, we obtain from (1.6) the following dynamic equation for the expected redeemed debt-to-output ratio: E t βa t+1 ω t+1 =        (1 -θ) (ω t -ω) + βāω for ω t < ω ω t -ŝ for ω t ≥ ω (1.9) with ω ≡ ω + ŝ - s θ > ω, (1.10) where the last inequality comes from Assumption 2. Equation (1.9) makes clear the consequence of a maximum fiscal surplus ŝ. It creates a kink in the dynamics of expected debt-to-output ratio. If the actually redeemed debt-to-output ratio ω t is sufficiently low (below ω), an increase in the public debt ratio can be partially offset by an increase in the primary surplus ratio s t . Let us consider a deterministic version of this equation by assuming a t+1 = ā. The expected debt ratio is obtained from a linear equation. Its slope, equal to (1θ) /βā, is from Assumption 2 less than one. When ω t is above the debt-to-output ratio ω at which the primary surplus ratio reaches its maximum ŝ, the expected actually redeemed debt ratio is obtained from a linear equation the slope of which, (βā) -1 , is more than one. Hence the kink at ω creates two (deterministic) steady states, the first of which is ω1 = ω, and the second: ω2 = ω sup , with ω sup ≡ ŝ 1 -βā . (1.11) Note that ω sup is equal to the sum of the present and expected discounted primary surpluses (relative to the actual GDP) when they are set at their maximum value. Hence it defines the conventional solvency limit of public debt-to-output ratio in a deterministic environment. It does not depend on the debt recovery parameter. As we will see below this is an important difference with the (equilibrium) default ratio which we find to be very sensitive to the (expected) debt recovery parameter. When ω t ≥ ω we obtain from (1.9): ω t = ŝ + E t βa t+1 ω t+1 = ŝ 1 -βā + lim T →∞ E t β T T n=1 a t+n ω t+T . Using this last result, the no-Ponzi condition (1.7) implies: ω t ≤ ω sup , (1.12) where ω sup is given by (1.11). This inequality is the solvency condition on government debt in this stochastic environment. In the sequel, we will refer to ω sup as the solvency ratio of sovereign debt. Market equilibrium. Let us denote by b t ≡ B t /Y t the level of contractual government debt emitted today relative to GDP at t, and (1.13) the "default threshold" for period t as a percentage of GDP. Using these notations and according to (1.4) default occurs when b t-1 > a t ω def t . The market equilibrium is given by the following equations: is endogenous and ultimately needs to be obtained. We will see below that this sequence is actually deterministic in this setting. 
ω def t ≡ Ω def t /Y t , q t b t = h t b t-1 a t -min s + θ • h t b t-1 a t -ω ; ŝ (1.14) h t =        h atω def t b t-1 if b t-1 > a t ω def t 1 else (1.15) q t = βE t h t+1 , ( 1 Sovereign default and debt recovery. In this section, we focus on the study of the functioning of this economy in the fiscal constraint regime.12 Specifically, we suppose that the economy was in a constrained tax regime in t -1, remains in this regime in t and will be there in t + 1. The budget constraint is then written in the following simpler form: q t b t = h t b t-1 a t -ŝ. (1.17) Debt valuation. Assuming that ω def t+1 is known in t and using (1.15) the price of public debt (1.16) rewrites as: q t = β     1 -G b t ω def t+1 + h ω def t+1 b t b t /ω def t+1 adG (a)     . (1.18) Notice that the price of bond is a decreasing function of b t . Lenders include in the price a risk premium linked to the probabilities of expected future defaults, based on the ratio b t /ω def t+1 , on the probability law of a t and the debt recovery parameter in case of default. The market value of public debt in t is denoted by v t ≡ q t b t . From (1.18), it is a function of b t , parameterized by ω def t+1 and h: v t = β        1 -G b t ω def t+1 b t + hω def t+1 b t /ω def t+1 adG (a)        ≡ v b t ; ω def t+1 , h . (1.19) The function v (•) is potentially non-monotone. The following proposition formalizes the existence of a unique maximum to this function: Proposition 1. Given ω def t+1 , under Assumption 1, the market value of debt v t reaches a unique maximum v max t for a quantity of debt b t = b max t . Both v max t and b max t are linearly increasing in ω def t+1 : v max t = βx h ω def t+1 and b max t = δ h ω def t+1 where δ h is such that [1 -G (δ h )] [1 -(1 -h) δ h z (δ h )] = 0, (1.20) z (δ) = g(δ) 1-G(δ) being the hazard function and x h given by x h = [1 -G (δ h )] δ h + h δ h adG (a) . (1.21) δ h and x h are increasing functions of h, with 0 < x h ≤ ā and 0 < δ h ≤ +∞ for 0 ≤ h ≤ 1. According to this proposition, the maximum value of public debt v max t and the ratio ω def t+1 and the debt recovery parameter h. The higher the debt recovery parameter h, the higher the maximal market value: Lenders are ready to lend more as they receive more in case of default. Even in the extreme case of no debt recovery (h = 0), lenders are potentially willing to lend to the government, despite complete loss in case of default, because they are compensated by a positive risk premium. In the extreme case of the highest debt recovery parameter (h = 1), the maximum public debt value is equal to the discounted default ratio, that is: v max t = βāω def t+1 . 13 Figure 1.1 illustrates this relation for a given value of h verifying 0 < h < 1. For values of b t below b max t , the market value of public debt v t = q t b t is increasing in b t . Above b max t , the decreasing effect of bond price overcomes the direct effect of increasing debt and makes the public debt value starting to decrease. Because of its "bell"-shaped form, the function υ (•) is referred to as the "debt Laffer curve" in the literature (see D'Erasmo, Mendoza and Zhang 2016, and Lorenzoni and Werning 2019). An equilibrium debt ratio b t without default in t is such that (1.17) holds with h t = 1. The equilibrium displayed in Figure 1.1 corresponds to the no-default case. For financing needs b t-1 /a tŝ between βhāω def t+1 and v max t , there are two values of b t which meet this request (as shown in Figure 1.1). 
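The valuation function (1.19) and the objects of Proposition 1 are straightforward to evaluate numerically. The sketch below does so for a log-normal growth factor, using the parameter values quoted in the calibration footnote of the next subsection (mu = 0.0281, sigma = 0.0263, r = 2.93%). It is an illustration of the formulas, not the authors' code, and delta_h is obtained here by direct maximization of the valuation function rather than from the first-order condition (1.20).

```python
# Numerical sketch of the debt valuation (1.19) and of Proposition 1 for a
# log-normal growth factor. Illustrative only.
import numpy as np
from scipy.stats import norm, lognorm
from scipy.optimize import minimize_scalar

mu, sigma, r = 0.0281, 0.0263, 0.0293
beta = 1.0 / (1.0 + r)
G = lognorm(s=sigma, scale=np.exp(mu))        # distribution of the growth factor a

def partial_mean(c):
    """E[a * 1{a <= c}] for a log-normal a (closed form)."""
    return np.exp(mu + sigma**2 / 2) * norm.cdf((np.log(c) - mu - sigma**2) / sigma)

def debt_value(b, omega_def, h):
    """v(b; omega_def, h) = beta * ([1 - G(b/w)] b + h w E[a 1{a <= b/w}]), eq. (1.19)."""
    c = b / omega_def
    return beta * (G.sf(c) * b + h * omega_def * partial_mean(c))

def delta_x(h):
    """delta_h and x_h of Proposition 1, via direct maximization of v with omega_def = 1."""
    res = minimize_scalar(lambda b: -debt_value(b, 1.0, h),
                          bounds=(1e-3, 20.0), method="bounded")
    return res.x, -res.fun / beta             # b_max = delta_h * w,  v_max = beta * x_h * w
```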
Notice that the equilibrium situated on the decreasing side of the valuation function is "unstable" in the Walrasian sense. In the neighborhood of the high debt equilibrium, in the case of an excess demand a higher bond price increases the gap between demand and supply; the reverse is true in the case of an excess supply. ), the equilibrium debt-to-output ratio is given by: b t = min b v b; ω def t+1 , h = -ŝ + b t-1 /a t . (1.22) Equilibrium default ratio. Figure 1.1 helps us to graphically understand default as a market event. There is default in t when a sufficiently negative shock heightens the horizontal line above the v b t ; ω def t+1 , h curve, that is, above v max t . Formally the condition corresponding to default can be written as: b t-1 a t -ŝ > v max t . (1.23) The default condition used in (1.15) has been defined as: b t-1 > a t ω def t . Thus the default ratio ω def t is necessarily equal to: ω def t = v max t + ŝ. (1.24) It is defined as the sum of the maximum value that the government can obtain from the market and the primary surplus of the period. Since from Proposition 1 we have: v max t = βx h ω def t+1 , using (1.24), we get a dynamic expression for ω def t : ω def t = βx h ω def t+1 + ŝ. (1.25) It is a forward-looking equation: how much can at most be redeemed today depends on how much can at most be redeemed tomorrow, because this last one directly determines the opportunities for public funding. Denoting by ω h the stationary solution of (1.25),the following proposition obtains: Proposition 2. The equilibrium default ratio is locally unique and equal to: ω def t = ŝ 1 -βx h ≡ ω h , ∀t. (1.26) ω h is a strictly increasing function of ŝ and h, with ω h ≤ ω sup for h ≤ 1. Strikingly, even though we reason in a stochastic environment, the default ratio ratio is a constant, ω def t = ω h ∀t, independent from the dynamics of public debt and thus from the history of shocks. We can deduce from Proposition 1 that b max t = δ h ω h ≡ b max h , ∀t, (1.27) and v max t = βx h ω h ≡ v max h , ∀t, (1.28) which denote respectively the maximum quantity of public bonds in percentage of output that can be emitted and the associated maximum public debt value15 again in terms of output -where δ h and x h are given by (1.20) and (1.21). From equation (1.26), we note that, unless x h is equal to its upper limit ā corresponding to the case h = 1, the default ratio is lower than the solvency ratio ω sup . [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF] already highlighted the same kind of result in the particular case h = 0. Taking into account a positive recovery parameter allows us to generalize their findings while showing the sensitivity of the default ratio to the recovery parameter h. Figure 1.2 shows the default ratio ω h as a function of the (expected) debt recovery parameter h,using a baseline calibration proposed in section 1.6. 16 The default ratio (blue curve) is an increasing, highly nonlinear function of the debt recovery parameter. Recall that when h = 1, the default ratio is equal to the solvency ratio ω sup , which is evaluated to 238% of GDP (horizontal dash line) with our baseline calibration. As h moves from 1 to 0.98 the default ratio falls to 197% of GDP and amounts only to 135% at h = 0.5, and 129% when h = 0. 
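Reusing the helpers from the previous sketch, the default ratio (1.26) and the solvency ratio (1.11) can be computed for a grid of recovery parameters. The exact figures reported in the text (238%, 197%, 135%, 129%) depend on calibration details that are not fully reproduced here, so this sketch illustrates the formulas and the qualitative sensitivity to h rather than the reported numbers.

```python
# Default ratio (1.26) and solvency ratio (1.11), reusing beta, mu, sigma and
# delta_x() from the previous sketch; s_hat = 5% as in the text.
s_hat = 0.05
a_bar = np.exp(mu + sigma**2 / 2)             # mean gross growth rate
omega_sup = s_hat / (1 - beta * a_bar)        # solvency ratio, eq. (1.11)

for h in (0.0, 0.5, 0.98, 1.0):
    x_h = delta_x(h)[1]
    omega_h = s_hat / (1 - beta * x_h)        # default ratio, eq. (1.26)
    print(f"h = {h:4.2f}: default ratio omega_h = {omega_h:.2f} x GDP")
print(f"solvency ratio omega_sup = {omega_sup:.2f} x GDP")
```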
The increasing sensitivity of ω h to the debt recovery parameter is due to the effect of sovereign risk on debt price: The default premium is decreasing in the debt recovery parameter, the higher h the lower the prospect of post-default losses and the higher the price of emitted debt. This increases the maximum debt value v max h and the implied default ratio: ω h = v max h + ŝ. The effect of the debt recovery parameter on the default ratio illuminates the debt recovery channel and shows the limitation of assuming no debt recovery, as it is the case in most 16 To construct Figure 1.2, we set ŝ, the maximum primary surplus, to 5%, β = (1 + r) -1 with a risk free rate r equal to 2.93%, and a log-normal distribution for the gross rate of growth, that is: lna ∼ N µ, σ 2 with µ = 0.0281, and σ = 0.0263. Section 1.6 provides more details on the choice of parameter values. sovereign default models. It is clear from Figure 1.2 that such an assumption would substantially underestimate a country's default ratio. Since this ratio is constant we simplify the notation of the valuation function v (b t ; ω h , h) ≡ υ (b t ; h) . Equation (1.19) becomes: υ (b t ; h) = β    1 -G b t ω h b t + hω h b t /ω h adG (a)    . (1.29) The property of this function is given in the following proposition: Proposition 3. The market value of public debt is a strictly increasing function of the debt recovery parameter h. This proposition confirms the intuition that lenders expect to be better covered in case of default when the debt recovery parameter increases and thus value more a given amount of public debt. Public debt dynamics and unsustainability. In this section we address the public debt dynamics when it is subject to market pricing and dependent on the debt recovery rule as explained in the previous section. This dynamics is made complex because it actually depends on many factors: the capacity to proceed to fiscal adjustments, the recurring shocks hitting the economy and, last but not least, the prospects of haircuts to be applied in case of default. This is true even in the constrained fiscal regime. To overcome this difficulty, we exploit the notion of "Risky Steady State" and offer an new approach to the notion of public debt unsustainability in the presence of default. This allows us to reformulate the definition of fiscal space, originally introduced by [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF]. This notion is central in the management of public debt as it points to the fact that the prospect of default is more or less acute, depending on the capacity of a government to modify its fiscal policy or buffer negative shocks given the probability law governing the relevant random variables. Intuitively, the larger the fiscal space in a given period, the lower the probability of default in the Figure 1.3: The no-default case dynamics next period. We highlight the impact of the debt recovery rule on the dynamics of public debt and its impact on the fiscal space. The dynamics of the public debt. The debt dynamic process can be formally obtained in our model. Consider a period t where the random variable realization a t and the debt ratio to be redeemed b t-1 are such that no default occurs, that is: b t-1 /a t < ω h , implying h t = 1. 
The dynamics of public debt defined by the government budget constraint (1.17) expressed in the constrained fiscal regime can be written as: b t = min (b |v (b; h) = -ŝ + b t-1 /a t ) , (1.30) where the function v (b; h) is given by (1.29). This formula makes clear that the debt dynamics is stochastic and shifts with the realizations of the productivity shock. Figure (1.3) illustrates the dynamics of the public debt for two possible values of the realized rate of growth a 1 t and a 2 t , for which there is no default in t, satisfying: b t-1 < a 1 t ω h < a 2 t ω h . For an initial public debt-to-output ratio b t-1 , the straight lines (b t-1 /a 1 t -ŝ) and (b t-1 /a 2 t -ŝ) give the government's refinancing requirements in each scenario corresponding to the two states of nature considered. By projecting these values onto the curve υ (b t ; h), we get two possible debt-to-output ratios of period t: b 1 t and b 2 t . For the higher growth rate, a 2 t , the service of the maturing debt b t-1 /a 2 t is low, leading to a reduction of the new emitted debt: b 2 t < b t-1 . However this is not so for the lower growth rate a 1 t and the debt ratio increases: b 1 t > b t-1 . Interestingly, even if the growth rate a 1 t is not low enough to lead to an immediate default, it nevertheless leads to a serious deterioration in the government's financial situation which contributes to a higher default risk premium included in the price of debt. A "snowball effect" comes into play. The increase in a given period t of the amount of emitted debt increases the probability of default and thus the default risk premium. This in turn lowers the price of public bond which increases the quantity of debt to be emitted in the next period for the refinancing of the outstanding debt. This results in a gradual worsening of the financial position of the government. If the same macroeconomic situation is repeated in period t + 1, i.e. a t+1 = a 1 t , it leads to a sovereign default since the financial needs in t + 1 now exceed the maximum availability of funds v max h . The Risky Steady State and the debt recovery rule. In order to shed more light on the debt dynamics in this stochastic environment, we resort to the concept of "Risky Steady State" (RSS), introduced by Juillard (2011) and [START_REF] Coeurdacier | The Risky Steady State[END_REF]. 17 This concept makes it possible to study the dynamics of public debt by disregarding the realization of shocks but without eliminating the effect of risk on the debt valuation. Let us consider the following Definition 1. A Risky Steady State (RSS) is a stationary equilibrium of the dynamic system when the realization of these shocks are equal to their mean value 17 An early reference on this notion is [START_REF] Juillard | Solving SDGE Models: Approximation About The Stochastic Steady State[END_REF]. and agents form their expectations of relevant variables and make decisions on the basis of the probability distribution of future shocks. Applying this definition to our problem, the Risky Steady State level of debt is the stationary level of the debt-to-output ratio b t = b t-1 in equation (1.30) with a t = ā. More precisely, denoting by b rss h the RSS-debt-to-output ratio, it is such that: υ (b rss h ; h) = b rss h ā -ŝ. (1.31) The left hand side of ( ≤ āω h ≤ b max h , if and only if h ≥ h = 1 -1 āz(ā) , with strict equalities for h = h. When h > h, b rss h and the difference b max h b rss h are both increasing in h. 
Figure (1.4) represents the potential existence and determination of the RSS for different values for the recovery parameter: 0, h, 1 and a value h such that h < h < 1. A notable result from Proposition 4 is that a RSS does not always exist in this model. Its existence depends on the debt recovery parameter and this parameter must be sufficiently large. In particular a RSS does not exist when h = 0, the case considered for instance by [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF]. In this case, and more generally when h < h, defining b max h as the debt limit18 seems to be a good a RSS always exists and it is generally below b max h . We will propose in the section 1.5.3 to consider b rss h as a relevant alternative candidate to define the debt limit ratio in this case. Given that the value of an emitted public bond is increasing in the debt recovery parameter, the amount of debt which can be rolled over consistent with the RSS is also increasing in h. This explains point 2. of Proposition 4. When h = 0, a turning point of the curve corresponding to the maximum quantity of public debt exists and is below the 45°degree line. Thus the intersection with the 45°degree line does not define a RSS as the part of the curve above the turning point corresponds to the wrong side of the debt Laffer curve and is discarded. In the other limit case, h = 1, considered for instance by Uribe (2006) and [START_REF] Juessen | Default risk premia on government bonds in a quantitative macroeconomic model[END_REF], there is a RSS but no turning point. The curve is asymptotically vertical and the default ratio is the solvency ratio. In such a configuration, a default makes the post-default indebtedness equal to the solvency ratio. If the post-default value of a t is at most equal to its mean, this necessarily leads to a renewed default. This captures an extreme case of the feature of serial default. 20 There is a value of the debt recovery parameter, denoted by h, such that the turning point of the curve is exactly on the 45°line. It is the lowest value of h for which there exists a RSS. For values of h higher than h but lower than 1, there exists a RSS which is below the solvency ratio. The level of public debt consistent with the RSS is below the maximum debt level b max h . Lastly, notice that when it exists, a RSS is unstable as the dynamics of public debt is diverging as long as b t > b max h and a t+τ ≤ ā (for τ ≥ 0). This makes apparent a striking paradox with respect to the snowball effect (as defined above). The intuition is that the snowball effect, understood as the buildup of public debt possibly leading to default, is large when the risk supported by the lenders is high, that is when the post-default recovered debt is low (due to a low recovery parameter or, loosely speaking, a high haircut). Actually, it happens only when h is above h and the level of debt is above the RSS: the subsequent debt level is increased and closer to the default ratio (again as long as a t+τ ≤ ā). On the other hand, when h is below h, there is no snowball effect at all: if the due debt level is higher than the level corresponding to the turning point, default is immediate. 20 See [START_REF] Reinhart | Serial default and the" paradox" of rich-to-poor capital flows[END_REF], for instance. 
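The threshold h = 1 - 1/(āz(ā)) below which no RSS exists depends only on the growth distribution. As an illustration, the short sketch below computes it under the footnote 16 calibration; country-level calibrations of µ and σ would of course give different thresholds.

```python
# Illustrative sketch: existence threshold h_low = 1 - 1 / (a_bar * z(a_bar)) for the RSS,
# with z the hazard rate of the log-normal growth distribution (footnote 16 calibration).
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0281, 0.0263
a_bar = np.exp(mu + sigma**2 / 2)
u = (np.log(a_bar) - mu) / sigma
g_abar = norm.pdf(u) / (a_bar * sigma)        # log-normal density evaluated at a_bar
G_abar = norm.cdf(u)                          # log-normal CDF evaluated at a_bar
z_abar = g_abar / (1.0 - G_abar)              # hazard rate z(a_bar)
h_low = 1.0 - 1.0 / (a_bar * z_abar)
print(f"a_bar = {a_bar:.4f}   z(a_bar) = {z_abar:.2f}   h_low = {h_low:.3f}")
```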
[START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF] define the fiscal space at time t as the difference between the "debt limit", which corresponds to the maximum level of debt b max h in the context of our model, and the current debt ratio b t . Therefore it depends on the minimum debt recovery parameter h. This notion is critical for the management of public debt as it points to the fact that the prospect of default is more or less acute, depending on the capacity of a government to modify its fiscal policy21 or the capacity to buffer negative shocks given the probability law governing the relevant random variables. The larger the fiscal space, the lower the probability of future default. However, in line with our discussion in the previous subsection, defining the fiscal space as the difference between b max h and the current debt ratio, especially for using it as a criterion of debt sustainability, is of little value when the debt recovery parameter is high and thus a RSS exists. In this case, it may be relevant to more precisely define the fiscal space as the difference between the RSS debtto-output ratio b rss h and the contemporary debt-to-output ratio b t . This allows to distinguish two very different situations, depending on whether b t is below or above b rss h . In the former case, the fiscal situation can be perilous, especially if the debt level is close to b rss h , but it is "not critical" in the following sense: if the growth rate is not strictly below its average, the share of debt in GDP should decrease over time. In the latter case, the public debt situation is "critical" given the instability of the RSS: the debt sustainability cannot be taken for granted and default looms in even if the growth rate is equal to its mean. Reassessing unsustainability In order to shed some light on this intuition, we first give an original definition of the (un-)sustainability of public debt: Definition 2. A public debt is said to be "unsustainable" at date t when its trajectory reaches the default ratio at some finite date, assuming that there is no realization of the (gross) rate of output growth a t+s higher than ā. The case of unsustainability refers to the following "non-optimistic" scenario: no future realizations of the shock will be higher than ā. The period t public debt is "unsustainable" since, under this scenario, a market-triggered default will unavoidably occur in the future. 22This calls for the redefinition of the notion of "debt limit". When there exists a RSS (h above h), trespassing this level implies that public debt is unsustainable and leads to future default (assuming that a t = ā). Thus the RSS should be considered as the debt limit. When it does not exist (h below h), the debt limit is logically the maximum level of debt. Thus we propose the following Definition 3. The debt limit and the fiscal space denoted by F S t in period t are respectively defined as: b lim h = min (b max h , b rss h ) and F S t = b lim h -b t . As we have just shown that the maximum debt-to-GDP ratio b max h , and the risky steady state b rss h are both increasing functions of the recovery parameter h, so is the fiscal space F S t . This comes directly from Proposition 3 and the fact that the value of public debt is increasing in h. We shall see in the next section how this dual definition of the debt limit can be used in empirical analyses to shed light on the public finance positions of different countries, both advanced and emerging. 
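Definition 3 translates into a one-line rule. The sketch below spells it out; the numbers are placeholders standing in for b max h and b rss h (as computed in the earlier sketches) and for an observed debt ratio b t , not values taken from the paper.

```python
# Illustrative sketch of Definition 3: debt limit b_lim = min(b_max, b_rss) and fiscal
# space FS_t = b_lim - b_t. Input numbers are hypothetical placeholders.
def debt_limit(b_max, b_rss):
    # When no RSS exists (b_rss is None, i.e. h < h_low), the debt limit is b_max.
    return b_max if b_rss is None else min(b_max, b_rss)

def fiscal_space(b_t, b_max, b_rss):
    return debt_limit(b_max, b_rss) - b_t

print(fiscal_space(b_t=0.90, b_max=1.60, b_rss=1.25))   # positive: debt below the limit
print(fiscal_space(b_t=1.40, b_max=1.60, b_rss=1.25))   # negative: unsustainable position
```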
In line with Definition 2, a worrisome case is when, in the event of a default, the post-default debt ratio is unsustainable. The following proposition establishes that this outcome is possible when the recovery parameter is sufficiently high: Proposition 5. When a public default has occurred, the post-default debt-to-GDP ratio hω h is unsustainable if the debt recovery parameter h is above a critical value H : h > H > h, where H is implicitly defined by: Hω H = b rss H ā . When h > H, the post-default debt ratio hω h is superior to the level b rss h /ā that makes it possible to maintain the debt ratio at its RSS level at the next period when the realization of the shock a t+1 is equal to its mean value ā. In other words, according to Definition 2, public debt is unsustainable. In such a situation, except in the case where a very favorable macroeconomic shock allows the economy to leave the zone of unsustainability, the economy could suffer a series of repeated defaults, i.e. serial defaults. Post default, a higher value of the recovery parameter h increases the debt burden. Above the threshold value H, this burden is so high that public debt becomes unsustainable. This is in stark contrast with the ex ante perspective adopted in the previous sub-sections where a high value of h was viewed as favorable. Numerical / Empirical analysis. The previous analysis provided a better understanding of the dynamics of public debt in a stochastic environment where default is not a priori excluded. It highlighted the role played by the debt recovery rule on the dynamics of public debt, both before and after default has occurred. This allows us to offer new instruments so as to assess the soundness of the financial position of a country at a given date, by redefining the debt limit and the fiscal space. In this section, we show how these notions can be put in practical use to empirically investigate the link between public default and the debt recovery parameter. Data. We use a dataset that covers two groups of countries over the period 1980-2018. The first one ("Advanced") contains 31 advanced economies. The second one ("Emerging") contains 13 emerging economies. We restrict the sample of countries to those with sufficient historical observations for our variables of interest. 23Appendix 1.8.2 presents descriptive statistics of the data, the definition of the variables, and data sources.24 Baseline calibration. We consider a log-normal distribution for the growth rate a t : lna ∼ N µ, σ 2 . Table 1.1 presents the baseline parameter values used in the calibration exercises to follow. Growth mean and volatility are computed over the whole country-time sample. The risk-free rate r is set to the average real yield on German Treasury bond.25 The length of one period in the model is set to 4 years. 26 The maximum primary surplus ŝ is calibrated following IMF (2011, 2018), with two possible values capturing different degrees of fiscal effort. Conditional estimates of h. We provide conditional estimates of the debt recovery parameter h for both groups of countries. For this purpose, we examine the relation between a country i's actual sovereign yield spread in year t, denoted s i,t , and its theoretical spread in that same year. Because this theoretical spread is conditional to the assumption concerning the maximum primary surplus, our estimates are conditional to this assumption. 
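The sketch below outlines this estimation; the exact definitions of the actual and theoretical spreads and the least-squares objective (1.32) are stated just below in the text. The file 'spreads.csv' and its columns are hypothetical placeholders, the bond price is recovered from υ(b; h) = q b, and the block is an illustration rather than the code behind Table 1.2.

```python
# Illustrative sketch of the nonlinear least squares estimation of the recovery parameter h
# (objective (1.32) below). 'spreads.csv' and its columns are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import norm
from scipy.optimize import minimize_scalar

s_hat, r = 0.05, 0.0293
beta = 1.0 / (1.0 + r)

def upsilon(b, h, mu, sigma):
    """Market value of debt, equation (1.29), with the default ratio omega_h built in."""
    a_bar = np.exp(mu + sigma**2 / 2)
    G = lambda d: norm.cdf((np.log(d) - mu) / sigma)
    pmean = lambda d: a_bar * norm.cdf((np.log(d) - mu - sigma**2) / sigma)
    chi = lambda d: (1.0 - G(d)) * d + h * pmean(d)
    x_h = -minimize_scalar(lambda d: -chi(d), bounds=(1e-6, 10.0), method="bounded").fun
    omega_h = s_hat / (1.0 - beta * x_h)
    return beta * chi(b / omega_h) * omega_h

def theoretical_spread(b, h, mu, sigma):
    q = upsilon(b, h, mu, sigma) / b          # bond price, since upsilon(b; h) = q * b
    return 1.0 / q - 1.0 / beta

def ssr(h, panel):
    # Sum of squared deviations of theoretical spreads from actual spreads.
    dev = [theoretical_spread(row.b, h, row.mu, row.sigma) - row.spread
           for row in panel.itertuples()]
    return float(np.sum(np.square(dev)))

panel = pd.read_csv("spreads.csv")            # hypothetical columns: b, spread, mu, sigma
res = minimize_scalar(lambda h: ssr(h, panel), bounds=(0.0, 0.999), method="bounded")
print("estimated h:", res.x)
```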
Nevertheless, we will see in the next sub-section that the fiscal spaces that we can compute are much less sensitive than the estimates of h to this assumption. We compute the actual spread as the difference between the country's long-term real interest rate and the German long-term real rate, s i,t = r i,t - r G,t . The theoretical spread is defined as S (b i,t ; µ i , σ i , ŝ, h) ≡ 1/q i,t - 1/β , where q i,t is the country's bond price, defined in equation (1.18). The term 1/q i,t is the gross interest rate on government bonds and 1/β is the gross risk-free rate, common to all countries. b i,t is the debt-to-GDP ratio observed for country i at date t. µ i and σ i are the mean and volatility of the growth rate, respectively, and are calibrated to their sample values at country level. ŝ is the maximum primary surplus, which is calibrated according to IMF (2011, 2018). We estimate the recovery parameter h by nonlinear least squares, minimizing the sum of squared deviations of theoretical yield spreads from actual spreads.27 That is, our estimated parameter, denoted ĥ, solves: min h Σ t Σ i [S (b i,t ; µ i , σ i , ŝ, h) - s i,t ] 2 . (1.32) We estimate equation (1.32) for both country groups separately. Notice that the dataset for each country group is an unbalanced panel because sovereign yields and debt-to-GDP ratios are not available for all countries over the time period considered, 1980-2018. Table 1.2 reports the obtained values for ĥ for each group of countries, considering the two different values for the primary surplus used in Table 1.1. The last column of Table 1.2 shows the mean absolute deviation of theoretical spreads from actual spreads in percentage points. [Notes to Table 1.2: ĥ is the value of h that solves (1.32); a: average (absolute) difference between theoretical spreads and actual spreads when h = ĥ.] Overall, the average deviations are small. We obtain a higher ĥ for advanced countries than for emerging countries, assuming either a high primary surplus or a low one. Notice that the estimated values for both country groups are positive and well above zero, suggesting that ex ante lenders do expect a partial debt recovery should a sovereign default actually occur. This finding is in line with historical estimates of post-default debt haircuts documented in the empirical studies mentioned in Section 1.2. Notice that, in the case of the group of emerging countries, the estimated value of h is sensitive to the calibration of the maximum primary surplus ŝ: ĥ is equal to 0.70 or 0.42 for ŝ equal to 4% or 3% of GDP, respectively. An alternative strategy could be to fix the value of h and estimate the maximum primary surplus, but the same type of sensitivity of the obtained estimates would probably be found. Sustainability and the debt recovery rule. Section 1.5.3 introduced a more precise measure of the debt limit than the one proposed by [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF]. We showed that this measure depends crucially on the debt recovery rule. In this section, we further illustrate the role of the debt recovery parameter h by computing debt limits for the Advanced and Emerging groups of countries in our dataset. More precisely, for each country i, we calibrate the mean µ i and volatility σ i of the log growth rate of GDP to their historical values while setting the risk-free rate r and the primary surplus ŝ to their baseline values defined in Table 1.1.
We solve the model numerically and compute the debt limit for four different values of h: the case of no debt recovery, h = 0 (haircut of 100%), the case of maximum debt recovery, h = 1 (haircut of 0%), an intermediate case, h = 0.5, and the conditionally estimated values h = ĥ (one for each group of countries). Tables 1.3 and 1.4 present the results of this exercise for advanced and emerging countries, respectively. For comparison, we also report the debt-to-GDP ratio of each country in 2018, the last year in our dataset. [Table 1.3 reports, for each advanced country, b 2018 in column (1) and b lim h for h = 0, 0.5, ĥ = 0.88 and 1 with ŝ = 5% in columns (2)-(5), and for h = 0, 0.5, ĥ = 0.93 and 1 with ŝ = 4% in columns (6)-(9). Table 1.4 reports, for each emerging country, b 2018 in column (1) and b lim h for h = 0, 0.5, ĥ = 0.42 and 1 with ŝ = 3% in columns (2)-(5), and for h = 0, 0.5, ĥ = 0.70 and 1 with ŝ = 2% in columns (6)-(9). Notes to both tables: an entry of ∞ indicates that b max h = ∞ and no positive value exists for b rss h ; ĥ is the estimated value for h; for each country, the mean µ and volatility σ of the growth rate are calibrated to their historical values; the other parameters (r and ŝ) are set to their baseline values in Table 1.1.] First, consider the group of advanced countries. Assume, as in [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF], zero debt recovery by creditors in case of a sovereign default (that is h = 0) and a primary surplus of 5%. This case corresponds to Column 2 of Table 1.3. Under this assumption, Greece has the lowest debt limit at 82% of GDP, followed by the Czech Republic with 98%, and by Latvia with 106%. On the other hand, Singapore presents the highest debt limit at 512% of GDP, followed by Korea and Israel with a debt limit of 395% and 311%, respectively.28 Moving from h = 0 to h = 0.5, again setting ŝ = 5%, the debt limit increases by 118% in Singapore, followed by Korea with an increase of 77%. At the same time, the debt limits for Greece, the Czech Republic and Latvia increase only by 11%, 15%, and 28%, respectively. Assuming a maximum debt recovery parameter (h = 1), one would conclude that default is not a concern for any advanced country (including Greece), given their fairly large debt limits. Ten countries are even characterized by an infinite debt limit. 29 A similar pattern occurs when we set ŝ = 4%. When h is equal to its estimated value 0.88, debt limits in advanced countries are reduced by 58% on average with respect to the case h = 1. Only two countries, Singapore and Korea, benefit from an infinite debt limit. Comparing the debt limit of each country to its debt-to-GDP ratio in 2018 (Table 1.3 Column 1), so as to have a measure of its fiscal space at this date, Greece and Japan are associated with a negative fiscal space when h = 0.88 whereas they benefit from a positive fiscal space when h = 1. This again illustrates the sensitivity of the assessment of public debt sustainability to the debt recovery parameter and the need to improve the estimation of this rate. In the case of Japan, which has not defaulted and does not appear to be on the verge of default, this may be due to a value of h embedded in the market risk premium higher than 0.88. In the case of Greece, its negative fiscal space may suggest that its recent default is not completely resolved.
Turning to emerging countries 30 (Table 1.4 ), we observe a pattern similar to advanced countries. Setting ŝ = 3%, the debt limit increases on average from 137% when h = 0 to 490% when h = 1. Under the latter assumption, one would conclude that default is not an issue for any emerging country, even if we take into account the debt-to-GDP ratios in 2018 (Column 1) to have a measure of Comparing the two country groups, although emerging countries have relatively low debt-to-GDP ratios (50% on average) than advanced countries (71% on average), they also have overall lower fiscal spaces. [START_REF] Reinhart | Debt intolerance[END_REF] refer to this phenomenon as "debt intolerance", highlighting the fact that developing countries default at relatively low debt levels than what is conventionally considered as prudent. Here we exhibit the link between the unsustainability of public debt and the debt recovery parameter h, and show that this parameter varies across (groups of) countries, possibly explaining debt intolerance. Finally, we note that despite the difference between the two conditional estimations of h, especially for the group of emerging countries, the two computed values for the debt limit are sufficiently close to provide a fairly good approximation or, at least, a reasonable range for this financial sustainability indicator. Figure 1.7 allows us to compare the two evaluations of the debt limit for advanced countries 31 according to the case h = .88 and ŝ = 5%, or h = .93 and ŝ = 4%. This simple numerical/empirical exercise shows how the assessment of public debt sustainability depends crucially on the assumption that one makes about the debt recovery parameter. While the assumption of zero debt recovery parameter (h = 0) prevalent in previous studies appears at odds with historical evidence, assuming a maximum debt recovery by creditors in case of a sovereign default (h = 1) is not realistic and may overestimate a country's fiscal space 1.6.5 Sovereign default and debt sustainability when r < g. In his presidential lecture to the American Economic Association, Blanchard (2019) argued that "public debt may have no fiscal cost" if interest rates remain below the rate of growth. With close to zero interest rates, governments 31 Only countries with computed debt limits below 350% are included here. The difference between the two evaluation is greater for countries with a computed debt limit above 400% but the risks associated with these cases are negligible. In this sub-section, we relax the condition ā < β -1 of Assumption 1 in order to reassess the question of the sustainability of public debt when the risk-free interest rate is lower than the growth rate. To illustrate this point empirically, we focus on the situation of the Eurozone countries 33 in recent years, just before the Covid-19 outbreak. More specifically, we consider the average growth rate for each country and the 4-Year German (risk-free) bond rate for the period 2009-2018. We check that the average growth rate is higher than the risk-free interest rate for each country by evaluating the terms βā i . 34 Results are reported in Table 1.9 in Appendix 1.8.2. Except for Greece and Italy, for which βā is equal to 0.97 and to 0.99, respectively, this term is higher than 1 and the solvency ratio is infinite for all other countries over the considered period. Nevertheless, the default ratio ratio, given by ω h = ŝ 1-βx h , can take a positive and finite value as long as x h verifies x h < β -1 . 
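For a country with βā > 1, the critical value solving x h = β -1 can be found by a one-dimensional root search, since x h is increasing in h. The sketch below illustrates this; the country-level µ and σ used here are hypothetical placeholders, and r is the 2009-2018 German rate quoted in the text.

```python
# Illustrative sketch: critical recovery parameter h_tilde solving x_h = 1/beta when
# beta * a_bar > 1 (r < g). The growth calibration below is a hypothetical placeholder.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar, brentq

r = 0.0031                        # German risk-free rate over 2009-2018, as in the text
beta = 1.0 / (1.0 + r)
mu, sigma = 0.012, 0.015          # hypothetical country-level log-growth mean and volatility
a_bar = np.exp(mu + sigma**2 / 2)
assert beta * a_bar > 1.0         # r < g: no finite solvency ratio for this country

G = lambda d: norm.cdf((np.log(d) - mu) / sigma)
pmean = lambda d: a_bar * norm.cdf((np.log(d) - mu - sigma**2) / sigma)

def x_of_h(h):
    # x_h = max over delta of chi(delta, h); increasing in h.
    chi = lambda d: (1.0 - G(d)) * d + h * pmean(d)
    return -minimize_scalar(lambda d: -chi(d), bounds=(1e-6, 10.0), method="bounded").fun

h_tilde = brentq(lambda h: x_of_h(h) - 1.0 / beta, 0.0, 1.0 - 1e-9)
print(f"h_tilde = {h_tilde:.3f}  (the default ratio stays finite only for h < h_tilde)")
```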
In the same Table 1.9, we compute for each country the critical value of h, denoted hi , which verifies x hi = β -1 . These critical values are reported on Figure 1.8. For countries with a value hi higher than their own maximum recovery pa- 33 We consider only the 15 countries of the euro zone present in our database of advanced countries which excludes Cyprus, Estonia, Malta and Slovenia for data limitations for these countries. 34 Here, the value for β is given by (1 + r G ) -1 where r G = 0.31% is the annualized German interest rate for the period (2009)(2010)(2011)(2012)(2013)(2014)(2015)(2016)(2017)(2018). rameter h i , there is a positive and finite default ratio. Using the value of the maximum recovery parameter estimated (over the longer period 1980-2018) for the group of advanced countries, ĥ = 0.88, as the best proxy35 for any h i , and represented by the vertical (red) line on Figure 1.8, it cannot be excluded that 87% of the countries considered, including Germany, admit a finite default ratio despite the non-existence of a finite solvency ratio. Nevertheless, as Table 1.5 shows, the values calculated for the debt limit (in terms of GDP) with a risk-free interest rate of 0.31% (the German rate over the period 2009-2018) are rather high, with the exception of Greece whose debt limit is then 89% of GDP. Columns (4) and ( 5) give the values of these debt limits for risk-free interest rates of 1% and 1.5% respectively. In the latter case, Italy is also in a particularly dangerous situation, giving reason to [START_REF] Sergeyev | Debt Sustainability in a Low Interest Rate World[END_REF] and [START_REF] Mauro | r minus g negative: Can We Sleep More Soundly?[END_REF] warning about the danger that a rise in riskfree rates would represent in the years to come. Conclusion. We have developed a tractable stochastic model of sovereign default allowing us to highlight the macroeconomic impact of the debt recovery channel, roughly speaking the impact of "haircuts" being applied to the due debt owed by a defaulting country on the whole dynamics of public debt, and in particular on its sustainability and the prospects of defaults. We use a simple specification of such a rule, consistent with empirical evidence, which depends on a single parameter h, the (maximum) debt recovery rate. We show that the default ratio, namely the maximum debt-to-GDP ratio that a country can reach without defaulting, depends on this debt recovery parameter. It differs from the solvency ratio which corresponds to the transversality condition obtained when the possibility of default is neglected. The two quantities are equal only under the extreme, non realistic assumption of debt recovery parameter equal to one. We provide a new definition of debt unsustainability and a new measure of fiscal space. We show that the assessment of the unsustainability of public debt depends crucially on the debt recovery rule that is applied following a sovereign default. This finding provides some insights on the current debate on the sustainability of public debt in the context of low real interest rates. Our findings are consistent with the paradox of "debt intolerance": compared with advanced countries, emerging countries experience both lower default ratios, that is a lower debt tolerance by markets, and higher risk premia. We illustrate these findings by means of several empirical analyses based on a dataset covering advanced and emerging countries. 
First we provide some (admittedly rough given the paucity of data) evaluations of the debt recovery parameter. It appears that its magnitude is higher for advanced countries than for emerging ones. Second we assess the extent of fiscal spaces for the various countries of the dataset. Fiscal spaces for advanced economies are fairly large. Greece and Italy (to a lesser extent and in the event of a future increase in the risk-free interest rate) are notable exceptions. The estimated values of the fiscal spaces for emerging countries are much lower. The sensitivity of the estimated fiscal spaces to the debt recovery parameter shows clearly that it plays a major role in the assessment of the financial position of a country. These analyses illustrate the necessity to take into account the debt recovery channel when studying public debts and their dynamics. A final word of caution: h itself is not a "deep" parameter. It is the result of a negotiation between lenders and borrowers who carefully take into account the capacity of a defaulting country to correct its fiscal stance, its default record, its expected growth process. Our results show that it is an important factor for understanding various phenomena linked to sovereign default and the capacity to issue sovereign debt. It implies that a better understanding of the debt recovery rules being applied to defaulting countries and their determinants is needed. This is beyond the scope of this paper. = δ h ω def t+1 where δ h is such that [1 -G (δ h )] [1 -(1 -h) δ h z (δ h )] = 0, where z (δ) = g(δ) 1-G(δ) is the hazard function, and x h is given by x h = [1 -G (δ h )] δ h + h δ h adG (a) . These two coefficients are increasing functions of h, with 0 < x h ≤ ā and 0 < δ h ≤ +∞ for 0 ≤ h ≤ 1. Proof. By denoting δ t = b t /ω def t+1 , from (1.19) we can rewrite v t as: v t = βχ (δ t , h) ω def t+1 , (1.33) where χ (δ, h) is a non-monotonic function defined by: χ (δ, h) ≡ [1 -G (δ)] δ + h δ a • dG (a) . (1.34) Let us define Φ (δ, h) ≡ ∂χ (δ, h) /∂δ, the derivative of χ (δ, h) with respect to δ, we get: Φ (δ, h) = [1 -G (δ)] [1 -(1 -h) δz (δ)] , (1.35) where the function z (δ) is the hazard function: z (δ) ≡ g (δ) 1 -G (δ) . Assuming that there exists a positive value δ h such that: Φ (δ h , h) = 0, (1.36) we can then define x h ≡ χ (δ h , h) . (1.37) By denoting Φ z (δ, h) ≡ ∂Φ (δ, h) /∂z, the partial derivatives of Φ (δ, h) for z = δ, h, we get, for any h ∈ [0, 1): Φ h (δ h , h) = δ h g (δ h ) > 0, (1.38) Φ δ (δ h , h) = -[1 -G (δ h )] (1 -h) [z (δ h ) + δ h z ′ (δ h )] < 0, (1.39) where the last inequality is implied by Assumption 1. Hence, from (1.33), (1.35), (1.36) and (1.39), v max t = βχ (δ h , h) ω def t+1 = βx h ω def t+1 is a maximum reached for b max t = δ h ω def t+1 . From the definition of δ h , implicitly given by (1.35) and (1.36), and using (1.34), (1.38) and (1.39), we find that: ∂δ h ∂h = - Φ h (δ h , h) Φ δ (δ h , h) > 0. (1.40) ∂χ h ∂h = ∂χ (δ, h) ∂h δ=δ h = δ h a • dG (a) > 0. (1.41) Furthermore, from (1.34) we compute: x 0 = χ (δ 0 , 0) = [1 -G (δ 0 )] δ 0 where, from (1.35) and (1.36), δ 0 is such that: δ 0 z (δ 0 ) = 1. From the same equations (1.34), (1.35), and (1.36), where Φ (δ, h) is given by we get δ 1 = +∞ and x 1 = χ (δ 1 , 1) = a • dG (a) = ā, which ends the proof of Proposition 1. Proof of Proposition 2 Proposition 2: The equilibrium default ratio ω max t is locally unique and equal to: ω def t = ŝ 1 -βx h ≡ ω h , ∀t. (1.42) ω h is a strictly increasing function of ŝ and h, with ω h ≤ ω sup for h ≤ 1. Proof. 
Using (1.25) ω def t = βx h ω def t+1 + ŝ, (1.43) we obtain the stationary value for ω def t that we denote ω h . It is given by: ω h = ŝ 1 -βx h . (1.44) From Proposition 1, we know that x h is an increasing function of h with a maximum x h = ā for h = 1. It immediately follows that ω h is a growing function of h with a maximum ω 1 = ŝ 1 -βā ≡ ω sup . From Assumption 1, we have ā < 1 + r with 1 + r = β -1 and hence, from Proposition 1, βx h ≤ βx 1 = βā < 1. This implies that, by rewriting the dynamics of equation (1.43) in a more conventional backward-looking form, it is unstable around the unique stationary equilibrium, ω h . Since ω def t is not predetermined, ω h is a determinate, i.e. locally unique, equilibrium. Proof of Proposition 3 Proposition 3: The market value of public debt is a strictly increasing function of the debt recovery parameter h. Proof. Using equation (1.33) the market value of public debt (1.29) can be rewritten υ (b t ; h) = βχ b t ω h , h ω h , (1.45) where χ (δ t , h) is given by (1.34). We compute ∂υ (b t ; h) ∂h = βω h δ h a • dG (a) + β χ b t ω h , h - b t ω h Φ b t ω h , h ∂ω h ∂h , where is Φ bt ω h , h given by (1.35) is the derivative of χ (δ, h) with respect to δ. Since χ bt ω h , h is strictly concave, with χ (0, h) = 0, the term in square brackets is strictly positive, as is ∂ω h ∂h from Proposition 2, which makes it possible to conclude that ∂υ(bt;h) ∂h > 0 ∀b t . Proof of Proposition 4 Proposition 4: In the constrained fiscal regime, there exists a unique risky-steady-state-debt ratio, b rss h = b * h , satisfying (1.31) and b rss h ≤ āω h ≤ b max h , if and only if h ≥ h = 1 -1 āz(ā) , with strict equality for h = h. when h > h, we have: (a) b rss h is increasing in h, (b) b max h -b rss h is increasing in h. Proof. 1. Recalling equation (1.31) for convenience: sufficient to prove the existence of a RSS for h = h 2 , and its non-existence for h = h 1 . In the first case, we observe that b max h 1 < b * h 1 and b max h 1 < āω h 1 , and we also simply check that: b * h /āŝ < ω h 1ŝ. This can be summarized as follows: υ (b * h ; h) = b * h ā -ŝ, (1.46) b max h 1 < b * h 1 < āω h 1 . In the other case, we obtain: b rss h 2 = b * h 2 < āω h 2 < b max h 2 . It remains to be shown that h 1 < h 2 and that there exists h, verifying h 1 < h < h 2 , and such that b * h = āω h = b max h . Note that, from Proposition 1, we can express the difference b max h āω h as: b max h -āω h = (δ h -ā) ω h . (1.47) This difference is positive for h = h 2 , and negative for h = h 1 . From Proposition 1, we know that δ h is an increasing function of h, which is sufficient to conclude that h 1 < h 2 . Furthermore, when δ h = ā, we necessarily have: b max h = b * h = āω h , or equivalently δ h = δ * h = ā, with δ * h ≡ b * h /ω h . Thus, there is a value h such that δ h = δ * h = ā. From (1.35) and (1.36), δ h is implicitly given by: (1 -h) δ h z (δ h ) = 1, which implies, when δ h = δ * h = ā : h = 1 - 1 āz (ā) . A necessary and sufficient condition to have 0 < h < 1 is therefore āz (ā) > 1. a. We now seek to show that, for h > h, the RSS debt ratio, b rss h = b * h , is an increasing function of h. By looking for the derivative ∂b * h ∂h from equation (1.46), one find: ∂b * h ∂h = ā ∂υ(b * h ;h) ∂h 1 - ā∂υ(b * h ;h) ∂b . Remembering that υ (b t ; h) = q t b t with ∂q ∂b < 0, and q t < β, we necessarily have ∂υ(b * h ;h) ∂b = q * h + ∂q ∂b b * h < β which implies 1 - ā∂υ(b * h ;h) ∂b > 1 -βā > 0, b. Finally, we have to prove that b max h -b rss h is increasing in h when h ≥ h. 
Note first that: b max h -b rss h = (b max h -āω h ) + (āω h -b rss h ) . From (1.47), the first term of the right-hand side of this equality can be written: b max h -āω h = (δ h -ā) ω h , where, δ h and ω h are both increasing in h, from Propositions 1 and 2 and δ h -ā > 0 when h ≥ h, Next, υ (b; h) - b ā -ŝ = ŝ ā φ (δ; ; h) 1 -βx h , (1.48) where the function φ (δ; h) is defined by φ (δ; h) ≡ ā -δ -βā [x h -χ (δ, h)] , ( 1 φ δ (δ, h) = -[1 -βāΦ (δ, h)] < 0, (1.51) φ h (δ, h) = βā δ a • dG (a) - δ h a • dG (a) ⩾ 0 iff δ ≥ δ h . (1.52) The first derivative is negative since Φ (δ, h), given by (1.35), is such that Φ (δ, h) ≤ 1 for δ ≥ 0, and βā < 1 by assumption 1. The second one is negative (respect. positive) if δ > δ h (respect. δ < δ h ) From (1.50) , (1.51), and (1.52) we then obtain: ∂δ * h ∂h = - φ h (δ * h , h) φ δ (δ * h , h) ≤ 0 iff δ * h ≤ δ h , i.e. iff h ≥ h, (1.53) which ends the proof. Proof of Proposition 5 Proposition 5: In case of default, the post-default debt-to-GDP ratio hω h is unsustainable when h > H > h, where H is implicitly defined by: Hω H = b rss H ā . Proof. Let us introduce the function ∆ (h) implicitly defined by the condition The two functions intersect for h = H which unambiguously satisfies: h < H < 1. According to the latest Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, February 2022), 2 global surface mean temperature will reach 1.5°C above pre-industrial levels by 2040, even under an optimistic scenario of very low greenhouse gas emissions. 3 Above this critical threshold, there will be little room for adaptation and the socioeconomic impacts of climate change and related mitigation costs will increase exponentially. Glaciers will melt at unprece- Climate change is defined as a change in the distribution of the climate (Figure 2.1), i.e., changes in the mean and/or the variability of weather variables that persist for an extended period, typically decades or longer. In our case, we focus on changes in the distribution of surface temperature and use temperature anomaliestemperature's deviation from its long-run mean-as a proxy for climate change. When h > H, δ * h = ∆ (h) < āh, As for sovereign default risk, we use sovereign Credit Default Swap (CDS) spread on foreign sovereign debt as a proxy for default risk. We consider sovereign CDS spread at one, three, five, and ten-year maturities. Contribution. In this paper, we estimate the effects of temperature anomaliestemperature's deviation from its long-run mean-on sovereign credit default swap (CDS) premium and explore the transmission channels. We use a panel dataset covering 76 developing and advanced countries from 1999 to 2017. We consider sovereign CDS spread at several maturities-one, three, five and ten-year maturities-and estimate the effects of temperature on CDS spread at each one of them. We build on an equilibrium bond pricing equation found in theoretical sovereign default models to isolate the key transmission mechanisms through which temperature can affect CDS spreads. Succinctly, we find that an increase of temperature leads to an increase of sovereign CDS premium. We document the existence of a "debt limit channel" of temperature: a higher temperature has a negative impact on future growth rate of output, which lowers the country's debt limit-the maximum debt-to-GDP ratio it can sustain without defaulting. As a result, the probability of default increases, leading to a higher CDS spread. 
In contrast, we find no evidence of transmission through the primary balance nor the public debt-to-GDP ratio, which are two main determinants of CDS premium. The paper is structured as follows. Section 2.2 provides a brief review of the literature related to our analysis Section 2.3 presents the dataset we use. Section 2.4 assesses the link between temperature and sovereign CDS spreads and investigates the key transmission mechanisms. Section 2.5 concludes. Related literature The paper combines two strands of literature: the sovereign default risk and the economic impacts of climate change. The economic impact of climate change literature is clustered into two groups: growth and fiscal impacts. Important references on the growth-climate relationship are Dell, Jones and Olken (2012), [START_REF] Burke | Global non-linear effect of temperature on economic production[END_REF], [START_REF] Burke | Large potential reduction in economic damages under UN mitigation targets[END_REF] and [START_REF] Kalkuhl | The impact of climate conditions on economic production. Evidence from a global panel of regions[END_REF]. 6 Dell, Jones and Olken (2012) use a cross country panel data and find evidence for a negative effect of temperature on per capita GDP. Using a similar approach, Burke, [START_REF] Burke | Global non-linear effect of temperature on economic production[END_REF] document the existence of a nonlinear relationship between temperature and GDP per capita. This non-linear relationship is captured considering the square of temperature level as a regressor, along with the level one. Their findings suggest that countries with low or mild temperature initially experience a positive impact of temperature on output, up to a threshold above which this impact is reversed to a negative one. [START_REF] Kalkuhl | The impact of climate conditions on economic production. Evidence from a global panel of regions[END_REF] use long-difference and cross-sectional regressions to analyze the productivity-temperature relationship at sub-national level and find a negative effect of temperature on the level and growth rate of productivity. In our investigation of key transmission channels through which temperature can affect CDS spreads, we (re)assess the impact of temperature on the growth rate of GDP. A main distinction of our approach is that we use the deviation of temperature from its long-run mean, instead of temperature level, as suggested by recent studies (see [START_REF] Kahn | Long-term macroeconomic effects of climate change: A cross-country analysis[END_REF] for a discussion). Regarding the fiscal impact of climate change, [START_REF] Jones | Fiscal Implications of Climate Change[END_REF] and [START_REF] Baur | Climate Change and Long-term Fiscal Sustainability[END_REF] vik and Nanda (2021) and [START_REF] Khadan | Fiscal sustainability in the Caribbean: Aneconometric analysis[END_REF] reassess the fiscal sustainability in the Caribbean, a region that is highly exposed to climate change. These studies find that fiscal policy in Caribbean is "weakly"' sustainable in the sense that the government primary balance is positively related to the debt-to-GDP ratio. We take a more direct approach, in terms of climate change, by estimating the impacts of temperature anomalies on primary balance and find no significant effect. As for the sovereign default risk literature, only few studies address its link to climate change. 
[START_REF] Kling | Climate Vulnerability and the Cost of Debt[END_REF] and [START_REF] Cevik | This changes everything: Climate shocks and sovereign bonds[END_REF] simulation approach to estimate the potential impacts of various climate scenarios on sovereign credit ratings and find a similar conclusion. Here we assess the link between sovereign CDS spread-which we believe is a better proxy for creditors' perception of sovereign default risk than the yield spread or credit rating-and temperature anomalies, which are purely exogenous and directly related to climate change. Our identification of the key mechanisms has direct echoes with the broad literature of fiscal space, public debt sustainability, and sovereign default [START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF][START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF][START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF][START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF][START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF]. By identifying the link between temperature, growth and the debt limit, we show that climate risk must be taken into account when assessing public debt sustainability. Finally, by highlighting the connection between temperature and sovereign CDS spread, our analysis expands the large literature on the determinants of sovereign default risk premia (see [START_REF] Edwards | LDC Foreign Borrowing and Default Risk: An Empirical Investigation, 1976-80[END_REF][START_REF] Eichengreen | What Explains Changing Spreads on Emerging-Market Debt: Fundamentals or Market Sentiment[END_REF]. Data CDS spreads on sovereign bonds We measure the market perception of sovereign default risk by the spread on sovereign credit default swaps (CDS). CDS are financial instruments that are mainly traded in over-the-counter (OTC) derivative markets. The spread represents the periodic payment that the buyer of CDS must pay to the seller for the contingent claim in the case of a credit event, namely a default or restructuring of sovereign debt. Hence, the CDS spread is theoretically related to the probability that a country defaults: the greater this probability the higher the CDS spread. Therefore, we consider it as a good proxy for market-based default risk pricing. We consider sovereign CDS spreads at four maturities-one, three, five and ten-years, across a sample of 76 advanced countries and developing markets. The spreads are denominated in US dollars and expressed in basis point for all countries and maturities. The original data is taken from Macrobond and is reported at a daily frequency over the time period 1999-2017. We collapse daily data to yearly frequency by taking the yearly country average.7 Table 2.1 provides summary statistics of the CDS spread at the four different maturities. We do not use interest rate spreads as a proxy of sovereign default risk. There are several reasons why we believe that CDS spreads are more adequate for our analysis than interest rates spreads. First, CDS spreads are a more precise measure of default risk premia than interest rates spreads as they are specifically designed to hedge against the prospects of a sovereign default. 
Second, CDS spreads are not subject to time-to-maturity issues that occur when using interest rates spreads.8 Third, interest rates spreads are subject to a large cross-country heterogeneity due, for example, to differences in the underlying risk-free rates, the level of financial development of the countries, inflation expectations, exchange rate differences, and supply/demand of credit dynamics. All these issues blur the default risk component in interest rate spreads. Temperature We use temperature anomalies expressed in degree Celsius as a proxy for climate change. We rely on temperature data from the Terrestrial Air Temperature dataset by [START_REF] Matsuura | Terrestrial Air Temperature and Precipitation: Monthly and Annual Time Series[END_REF]. This dataset contains 0.5 degree gridded monthly mean temperature time series over the period 1900-2017. We use a geocoding procedure to aggregate gridded monthly data to a country-year scale. Our proxy of climate change defines as the deviation of temperature from its long run mean. Specifically, for each county i and year t we compute T i,t = T i,t -T i,1900-1950 , (2.1) Sovereign CDS spreads and temperature To estimate the impacts of temperature on sovereign CDS spreads, we consider a standard two-way fixed effects model9 that accounts for fluctuations in temperature. Specifically, we estimate the following equation: S i,t = α 0 + α 1 T i,t + β ′ X i,t + ω i + η t + ε i,t , (2.2) where S i,t is the CDS spread in basis point observed for country i in year t, and T i,t is the deviation of temperature from its long run mean. X i,t is a vector of economic and political control variables. ω i and η t are country and time fixed effects, respectively, which account for unobserved country specific and time-varying factors. ε i,t is an error term. Table 2.2 presents estimation results of equation (2.2) using each of the four CDS maturities and our baseline measure of temperature deviation defined in equation (2.1). We find a positive and statistically significant impact of temperature on CDS spreads for the three, five and ten-year maturities but not for the one-year one: a degree Celsius increase in temperature relative to the long run mean increases CDS spreads by 15.61 to 31.09 basis points. This effect is statistically and economically significant and the longer the maturity of the CDS spreads the larger the impact of temperature on spreads. This suggests that sovereign creditors price climate risk, as measured by increases in temperature, not only when investing over a short horizon but also the medium and long horizons. Regarding our control variables, they have the expected signs. Unsurprisingly, the ratio of public debt-to-GDP, the growth rate of output, and credit ratings are important determinants of sovereign CDS spreads. In contrast, the other macroeconomic variables, in particular the primary balance and reserves, do not contribute to the CDS spread. We next investigate whether the geographic location of countries plays a role for the impact of temperature on sovereign CDS spreads. For this purpose, we We do not find any significant effect of temperature for the other regions, except the Sub-Saharan Africa region where we find a negative impact on spreads. One possible explanation of this finding is that countries in this region are exposed to others exogenous shocks and factors that dominate the effect of climate change on CDS spreads. 
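A compact sketch of equations (2.1) and (2.2): the anomaly is the difference between the year's temperature and the country's 1900-1950 mean, and the spread is regressed on it with country and year effects. File names, column names and the reduced set of controls are hypothetical placeholders; country and year dummies with country-clustered standard errors stand in for the two-way fixed effects.

```python
# Illustrative sketch of equations (2.1)-(2.2). Input files, column names and the set of
# controls are hypothetical placeholders, not the authors' dataset or specification.
import pandas as pd
import statsmodels.formula.api as smf

temp = pd.read_csv("temperature_country_year.csv")      # columns: country, year, temp_c
base = (temp[temp["year"].between(1900, 1950)]
        .groupby("country")["temp_c"].mean()
        .rename("temp_longrun"))
temp = temp.merge(base, on="country")
temp["temp_anom"] = temp["temp_c"] - temp["temp_longrun"]        # equation (2.1)

panel = pd.read_csv("cds_panel.csv")     # columns: country, year, spread_bp, debt_gdp, growth
panel = panel.merge(temp[["country", "year", "temp_anom"]], on=["country", "year"]).dropna()

# Equation (2.2): spread on the temperature anomaly, controls, country and year effects.
fe_model = smf.ols("spread_bp ~ temp_anom + debt_gdp + growth + C(country) + C(year)",
                   data=panel)
fe_res = fe_model.fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(fe_res.params["temp_anom"], fe_res.bse["temp_anom"])
```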
Robustness to alternative measures of temperature We conduct two sets of robustness analyses to check the sensitivity of our results regarding the measurement of temperature. In the first set, we consider four alternatives to our baseline measure of temperature deviation defined in equation (2.1). First, we change the reference period used to compute the long run mean of temperature in the baseline specification in equation (2.1) from 1900-1950 to 1900-1998. This corresponds to the time period before the start of our sample (i.e., 1999-2017). Second, we compute the deviation of temperature with respect to the long run mean observed in the geographical region of the country, that is, we redefine equation (2.1) as: T region,1900-1950 . Third, we explicitly account for the volatility of temperature by dividing our baseline measure in (2.1) by its country standard deviation computed over the period 1999-2017, that is: T i,t /σ ( T i,1999-2017 ). Finally, we consider the standard temperature level, T i,t , used in most previous studies. T i,t = T i,t - Table 2.4 reports estimations results using the four alternative measures. We find a significant positive impact of temperature on CDS spreads for all alternative measures, with estimated coefficients comparable to those from the baseline regression (Table 2.2). Table 2.5 reports estimation results using each measure. We find a significant positive effect of temperature on CDS spreads for all time windows, except the for the shortest window (i.e., 10-year). We also notice that the effect of temperature on spreads is mainly observed for larger windows, in particular above 30 years, which is the standard time span used to define and measure climate change in the literature (see [START_REF] Dell | What Do We Learn from the Weather? The New Climate-Economy Literature[END_REF]. This result reinforces the relevance of our approach to consider the deviation of temperature from its long run mean. In Table 2.5, we also report results using the cyclical and trend components of temperature in level.11 We find significant effect only for the latter, which is in line with the results presented above. Mechanisms In the previous section, we estimated the impacts of temperature anomalies on sovereign default risk, proxied by CDS spreads. We have documented robust positive impact of temperature on sovereign default risk. A natural question therefore is: what are the transmission mechanisms of this effect ? We address this question in this section. Our main goal is to isolate the key channels through which temperature anomalies can lead to an increase in sovereign CDS spreads. As a starting point of our investigation, we build on the basic equilibrium pricing equation of public debt that is found in most theoretical models of sovereign default [START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF][START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF][START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF][START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF]. This equation relates a country's spread to its key macroeconomic fundamentals that are relevant for the pricing of sovereign debt on financial markets. 
According to this framework, the CDS spread of country i in year t can be defined as follows S i,t = f b i,t , b lim i,t , (2.3) where f (•) is a potentially non-linear function, S i,t is the CDS spread observed for country i in period t and b i,t is the debt-to-GDP ratio. b lim i,t is the "debt limit" of the country, that is the maximum debt-to-GDP ratio that the country can sustain without defaulting. Notice that, in equation (2.3), the debt limit of the country is time dependent. This is not the case in most models of sovereign default, where the debt limit is constant over time: b lim i,t = b lim i ∀t. This is because these models make simplifying assumptions on the maximum primary surplus (as a fraction of out-put) and the growth rate of output, which are the main determinants of the debt limit. Specifically, these models assume that the maximum primary surplus is constant over time and that the growth rate is i.i.d [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF] or constant [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF].12 However, one can have a time-varying debt limit either by assuming a time-varying maximum primary surplus and/or an autocorrelated growth rate. Since there is no particular economic justification for the simplifying assumptions mentioned above, we do not impose them here. Instead, we consider a more general formulation that allows for time-varying maximum primary surplus and autocorrelated growth rate. Let a i,t denote the real GDP growth rate of country i in period t, which may be autocorrelated, and s max i,t the maximum primary surplus. We consider the following reduced form for the debt limit: b lim i,t = g a i,t , s max i,t , (2.4) where the function g (•) depends positively on a i,t and s max i,t .13 This positive effect can be transmitted through two main channels. First, it can be related to the increase in public debt-to-GDP ratio following a temperature shock, which in turn increases the probability of default and thus the CDS spread. This potential channel is described in the upper branch of the figure. Second, temperature can also lowers the country's debt limit due to its negative effects on the growth rate and/or the maximum primary surplus, which in turn increases the country's probability of default and the spread. This second potential channel is described in the lower branch of the figure. Discussion The CDS spread defined in equation (2.3) is a reduced form that emerges under rational expectations in theoretical models of sovereign default. It relates the spread to key macroeconomic fundamentals of the country, namely the debt-to-GDP ratio and the debt limit, which are the two transmission mechanisms in our analysis. One may ask whether these are the only possible channels trough which temperature can affect the CDS spread. Another potential transmission channel is the (expected) "haircut", that is the fraction of debt that creditors lose in case of a sovereign default, which is found to be an important determinant of the debt limit [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF] and the spreads [START_REF] Cruces | Sovereign defaults: The price of haircuts[END_REF]. Therefore, the CDS spread could vary across countries and over time if creditors believe that temperature can affect the haircut. This is not however an issue in our investigation of key mechanisms surplus. 
Our empirical investigation will later illuminate the sign of theses effects. because theoretical models of sovereign default suggest that the haircut is itself a function of the defaulting country's fundamentals, in particular the debt-to-GDP ratio, the maximum primary surplus and the growth rate of output (Yue 2010; Lorenzoni and Werning 2019). Therefore, if temperature has any effect on the haircut, this effect should transmit trough the underlying macroeconomic variables that we consider. For this reason, we do not explicitly explore this (potential) haircut channel here. In addition, since equation (2.3) arises under rational expectations it does not account for potential behavioral mechanisms of creditors. While behavioral arguments are often used in the media and by some academics to rationalize fluctuations in country spreads, in particular during periods of debt crisis,15 recent studies show that these fluctuations are actually in accordance with standard default models with rational expectations (see [START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF]Traum 2012, 2014). This is the view that we adopt here. Identification of the mechanisms In this section, we investigate the transmission mechanisms through which temperature affects sovereign default risk, as presented in Figure 2.3. Only two channels are involved: the debt-to-GDP channel and the debt limit channel. Debt-to-GDP channel The first part of our identification exercise aims to isolate the potential increase of the CDS spread due to increased debt-to-GDP ratio following a temperature shock. Specifically, we ask whether the statistically significant impacts of temperature on CDS spreads found in Table 2.2 can be attributed to the (potential) increase in the debt-to-GDP ratio following a temperature shock, or if the debt limit also plays an important role. To address this matter, we follow a two-step estimation approach. In the first step, we take advantage of the orthogonality property between residuals and regressors in an OLS estimation in order to separate the two channels. Precisely, we first estimate the following equation: S i,t = γ 0 + γ 1 b i,t + ε i,t , (2.5) where S i,t is the CDS spread in basis point observed for country i in year t, b i,t is the debt-to-GDP ratio observed in the same period and ε i,t is an error term. From the OLS estimation of equation (2.5), we define Ŝi,t ≡ γ0 + γ1 b i,t (2.6) as the part of the CDS spread that is predicted by the debt-to-GDP ratio. Let εi,t denote the residual from the OLS estimation. Therefore, from (2.5) and (2.6), we have, by definition, S i,t ≡ Ŝi,t + εi,t . (2.7) Notice that Ŝi,t and εi,t are orthogonal. Therefore, equation (2.7) decomposes the CDS spread in two components: the predicted spread, Ŝi,t , which is related to the debt-to-GDP ratio, and the residual, εi,t , which is not related to debt-to-GDP. Recall that, as illustrated in Figure 2.3, temperature can affect the CDS spreads either trough the debt-to-GDP ratio or trough the debt limit. Therefore, since εi,t is orthogonal to Ŝi,t , and thus to b i,t , one can interpret εi,t as the variation in CDS spreads that is explained by the debt limit. Our second step estimation regresses Ŝi,t and εi,t on temperature, while accounting for the standard control variables included in the baseline specification defined in (2.2). 
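Before the exact second-step specifications, which are stated in equations (2.8) and (2.9) just below, here is a minimal sketch of the full two-step procedure; the input file, column names and the omission of further controls are simplifying assumptions made for illustration.

```python
# Illustrative sketch of the two-step identification: first step (2.5)-(2.7) splits the
# spread into a debt-to-GDP component and an orthogonal residual; the second step regresses
# each component on the temperature anomaly. Inputs are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("cds_panel.csv").dropna(subset=["spread_bp", "debt_gdp", "temp_anom"])

# First step, equation (2.5): spread on the debt-to-GDP ratio only.
step1 = smf.ols("spread_bp ~ debt_gdp", data=panel).fit()
panel["S_hat"]   = step1.fittedvalues      # predicted spread, equation (2.6)
panel["eps_hat"] = step1.resid             # orthogonal residual, equation (2.7)

# Second step: each component on the temperature anomaly with country and year effects.
for dep in ("S_hat", "eps_hat"):
    step2 = smf.ols(f"{dep} ~ temp_anom + C(country) + C(year)", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["country"]})
    print(dep, round(step2.params["temp_anom"], 2), round(step2.pvalues["temp_anom"], 3))
```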
Specifically, we separately estimate the following two equations: Ŝi,t = θ 0 + θ 1 T i,t + β ′ X i,t + ω i + η t + ν i,t , (2.8) εi,t = ϕ 0 + ϕ 1 Ti,t + β ′ X i,t + ω i + η t + u i,t , (2.9) where T i,t is the deviation of temperature from its long run mean (see equation 2.1) and X i,t is the vector of economic and political control variables included in equation (2.2). 16 The terms ω i and η t are country and time fixed effects, respectively. ν i,t and u i,t are error terms. The intuition behind this two-step identification approach is related to the basic equation of the CDS spread defined in equation(2.3): If temperature affects sovereign CDS spreads through the debt-to-GDP channel, then the coefficient θ 1 in equation (2.8) should be statistically significant. On the other hand, if the debt limit plays a role in the transmission of the effects of temperature to spreads, then the coefficient ϕ 1 in equation (2.9) should be statistically significant. Of course, both θ 1 and ϕ 1 can be statistically significant, in which case both the debt-to-GDP ratio and the debt limit will be important transmission channels. Tables 2.6 and 2.7 report estimation results of the first step and second step regressions, respectively. In the first step regression (Table 2.6), we find a positive In the second step regression (Table 2.7 Column 1), we find no significant estimated value for the coefficient θ 1 (i.e., -1.69). This indicates that the estimated effect of debt-to-GDP ratio on CDS spread obtained in the first step regression (Table 2.6) is not related to temperature. Therefore, there must exists a debt limit channel through which temperature affects CDS spreads. The regression of the spread residual (Table 2.7 Column 2) shows that this is indeed the case. We find a positive and significant effect of temperature anomalies on the spread residual. A degree Celsius upward shift in temperature, relatively to the long run mean, increases the CDS spread residual by 21.27 basis points, which is quite close to the estimated effect of temperature on spreads reported in Table 2.2 (i.e., 22.71 basis points). Two differences must be reminded: Table 2.7 excludes Japan and does not include neither the Debt-to-GDP ratio nor its squared value as it is the case in Table 2.2 leading to a difference in the estimated coefficients. These findings show that increases in temperature lead to increased sovereign CDS spreads through the debt limit channel but not the debt-to-GDP one. In the next section, we investigate how the debt limit channel is impacted following a temperature shock. Debt limit channel The identification of the debt limit channel of the effects of temperature on CDS spreads is quite challenging because the debt limit is a theoretical construct that is not observed in the data. However, from equation (2.4), we know that the debt limit, b lim i,t , depends positively on both the growth rate, a i,t , and the maximum primary surplus, s max i,t . Therefore, we can assess the effects of temperature on these two variables. The debt limit channel of temperature can thus be decomposed in two parts, as indicated in Figure 2.3: a growth effect and a maximum surplus effect. Growth effects of temperature We are interested in the (potential) effects of temperature on the growth rate of real GDP. To address this issue, we do not make any theoretical assumption on the (aggregate) output function à priori, nor how this function is related to changes in temperature. 
Debt limit channel

The identification of the debt limit channel of the effects of temperature on CDS spreads is quite challenging because the debt limit is a theoretical construct that is not observed in the data. However, from equation (2.4), we know that the debt limit, b^lim_{i,t}, depends positively on both the growth rate, a_{i,t}, and the maximum primary surplus, s^max_{i,t}. Therefore, we can assess the effects of temperature on these two variables. The debt limit channel of temperature can thus be decomposed in two parts, as indicated in Figure 2.3: a growth effect and a maximum surplus effect.

Growth effects of temperature

We are interested in the (potential) effects of temperature on the growth rate of real GDP. To address this issue, we do not make any theoretical assumption a priori on the (aggregate) output function, nor on how this function is related to changes in temperature. Instead, we take an agnostic approach, guided by econometric methods. Theoretically, temperature can have various effects on output, which can be positive or negative, transitory or permanent.17 We start our analysis by testing the presence of unit roots in log GDP. We use the Levin-Lin-Chu test to detect unit roots at the panel level.18 The tests suggest that log GDP is integrated of order one, that is, I(1). Therefore, we take the first difference, which corresponds to the growth rate, and find evidence for stationarity of the latter. We proceed and estimate a dynamic growth model:

a_{i,t} = ρ_0 + ρ_1 a_{i,t-1} + ρ_2 T_{i,t} + ϕ′ X_{i,t} + ω_i + ε_{i,t},    (2.10)

where a_{i,t} ≡ Δ ln(real GDP_{i,t}) × 100 is the percentage growth rate of real GDP of country i in year t, T_{i,t} is the deviation of temperature from its long-run mean defined in equation (2.1), and X_{i,t} is a vector of economic and political control variables. ω_i is a country fixed effect and ε_{i,t} is an error term. Equation (2.10) is a flexible way to account for potential autocorrelation in the growth process. As discussed in Section 2.4.3, one can have a time-varying debt limit if the growth rate is autocorrelated, that is, if the coefficient ρ_1 in (2.10) is statistically significant.

Table 2.8 presents estimation results of equation (2.10). We find a strong negative effect of temperature on the growth rate for all specifications: a degree Celsius increase of temperature relative to its long-run mean decreases the current period growth rate by 0.437-0.489 percent on an annual basis. This effect is statistically and economically significant. Interestingly, we note that the estimated value for the coefficient ρ_1 is also statistically significant for all specifications. Since growth is positively autocorrelated, temperature affects not only the current period growth rate but also future growth rates. Moreover, the fact that (log) GDP is I(1) implies that temperature has a permanent effect on the level of GDP. This finding confirms the existence of a debt limit channel of temperature through the growth rate: an increase in temperature has a negative effect on current and future growth rates, which lowers the country's debt limit. A decrease in the debt limit increases a country's probability of default and thus its CDS spread.
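A minimal sketch of the structure of the dynamic growth regression (2.10) on synthetic data follows: compute the growth rate as the log-difference of GDP, lag it within each country, and regress it on the temperature anomaly with country fixed effects. This plain within-OLS version is for illustration only; the estimates reported in Table 2.8 rely on the Arellano-Bond estimator, which addresses the bias induced by the lagged dependent variable in a fixed-effects panel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries = [f"c{i}" for i in range(30)]
years = list(range(1999, 2018))

df = pd.DataFrame([(c, y) for c in countries for y in years], columns=["country", "year"])
df["temp_anom"] = rng.normal(0, 0.6, len(df))
df["log_gdp"] = 10 + rng.normal(0.02, 0.02, len(df)).cumsum()   # placeholder I(1)-like series

df = df.sort_values(["country", "year"])
# Growth rate in percent: a_{i,t} = 100 * delta ln(real GDP).
df["growth"] = df.groupby("country")["log_gdp"].diff() * 100
# Lagged growth within each country, for the AR(1) term rho_1 * a_{i,t-1}.
df["growth_lag"] = df.groupby("country")["growth"].shift(1)

model = smf.ols("growth ~ growth_lag + temp_anom + C(country)",
                data=df.dropna(subset=["growth", "growth_lag"])).fit()
print(model.params[["growth_lag", "temp_anom"]])
```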
(Maximum) Surplus effects of temperature

We now focus on the effects of temperature anomalies on the maximum surplus. A main challenge of this exercise is that the maximum primary surplus is not observed in the data.19 However, although the maximum primary surplus is not observable, we have historical data on countries' primary surplus. We therefore use the primary surplus and relate this variable to temperature anomalies. The underlying assumption of this procedure is that the maximum primary surplus of a country is related to its historical fiscal behavior. This assumption is in line with the practice in the literature and in debt sustainability analyses by institutions such as the IMF and the World Bank, which consists in using countries' historical primary surpluses to estimate the maximum primary surplus.20

In line with the literature on debt sustainability and sovereign default ([START_REF] Bohn | The Behavior of U. S. Public Debt and Deficits[END_REF][START_REF] Davig | Inflation and the fiscal limit[END_REF][START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF][START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF][START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF]), we consider a specification where the primary surplus reacts positively to past debt, augmented with temperature anomalies:

s_{i,t} = δ_0 + δ_1 T_{i,t} + δ_2 b_{i,t-1} + Λ′ X_{i,t} + ω_i + η_t + ε_{i,t},    (2.11)

where s_{i,t} is the primary surplus in percentage of GDP observed for country i in year t, T_{i,t} is the deviation of temperature from its long-run mean, and b_{i,t-1} is the ratio of debt-to-GDP in the previous year. X_{i,t} is a vector of economic and political control variables used in previous studies (see [START_REF] Bohn | The Behavior of U. S. Public Debt and Deficits[END_REF][START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF]). To account for potential non-linearity between the primary surplus and the debt-to-GDP ratio, as suggested by [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF], we include in the control vector X_{i,t} the square and cubic terms of past debt, that is, b²_{i,t-1} and b³_{i,t-1}. The other control variables are standard in the literature. Appendix 2.6 presents a detailed definition of these variables.

Table 2.9 reports estimation results of equation (2.11). We do not find any significant effect of temperature anomalies on the primary surplus. Since temperature anomalies do not affect the primary surplus, we conclude that the maximum primary surplus is not affected either. As a result, temperature does not affect the debt limit through the maximum primary surplus. Thus the (maximum) primary surplus does not play any role in the transmission of the estimated effects of temperature to CDS spreads found in Section 2.4.1.

20 See, for example, [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF][START_REF] Imf | Modernizing the Framework for Fiscal Policy and Public Debt Sustainability Analysis[END_REF], 2018).

Conclusion

We estimate the effects of temperature anomalies, the deviation of temperature from its long-run mean, on sovereign default risk, proxied by the sovereign CDS spread. We use a large panel dataset covering 76 developing and advanced countries over the period 1999-2017. We consider CDS spreads at one, three, five and ten-year maturities and find that a higher temperature, relative to the long-run mean, increases sovereign CDS spreads. Moreover, the longer the maturity of the CDS spreads, the larger the impact of temperature on spreads, suggesting that sovereign creditors price climate risk when investing over short, medium and long horizons. To better understand this finding, we build on an equilibrium pricing equation of sovereign debt present in most theoretical models of sovereign default to isolate the key transmission channels through which temperature can affect CDS spreads. We document the existence of a "debt limit channel" of temperature: a higher temperature has a negative effect on future growth rates, which lowers the country's debt limit, the maximum debt-to-GDP ratio it can sustain without defaulting. As a result, the probability of default increases, leading to higher CDS spreads.
This debt limit channel can explain roughly all of the estimated effect of temperature anomalies on sovereign CDS spreads. We show that the growth rate plays a crucial role in this mechanism. In particular, the debt-to-GDP ratio and the primary surplus do not play any role in the transmission of the effects of temperature anomalies to spreads. Our findings have interesting implications for the policy responses to climate change in the context of high public debt-to-GDP ratios and limited fiscal space available for countries. Our identification of the key mechanisms suggests that climate risk must be taken into account in the assessment of public debt sustainability.

Chapter 3
Sovereign Defaults in a World of Climatic Disasters: The Expectations Channel

3.1 Introduction.

The potential for large economic shocks in explaining asset prices and risk premia has received a great deal of attention since the seminal work of [START_REF] Barro | Rare Disasters and Asset Markets in the Twentieth Century[END_REF]. This work sparked a large literature that emphasized the crucial role of the prospects of rare events, such as wars, major conflicts or economic crises, to explain several macro-finance puzzles.1 In this regard, natural disasters may be equally important for risk premia, although their economic impacts are often less severe than the major conflicts and crises considered by [START_REF] Barro | Rare Disasters and Asset Markets in the Twentieth Century[END_REF].2

This paper analyses the link between sovereign default risk and the prospects of natural disasters, especially climate-related ones. It focuses on the expectations channel linking sovereign default and the risk of climatic disasters. This topic is important in at least three regards. First, ex post, climatic disasters appear especially salient in light of some recent sovereign default and debt restructuring episodes.3

To address this issue, I develop a tractable model of sovereign default that allows for a time-varying probability of climatic disasters. In the model, the probability of disaster is a deterministic function of time with a linear trend (Figure 3.1).

1 See [START_REF] Gabaix | Variable Rare Disasters: A Tractable Theory of Ten Puzzles in Macro-Finance[END_REF], [START_REF] Gourio | Disaster Risk and Business Cycles[END_REF], and Wachter (2013).
2 [START_REF] Barro | Rare Disasters and Asset Markets in the Twentieth Century[END_REF] define rare disasters as major political or economic events that cause at least 15 percent of economic collapse.
3 See International Monetary Fund (1999a) and [START_REF] Asonuma | Sovereign Debt Restructurings in Grenada; Causes, Processes, Outcomes, andLessons Learned[END_REF]. Other default episodes related to climatic disasters are Moldova and Suriname, which defaulted respectively in 1992 and 1998 following severe droughts (International Monetary Fund, September 1999; [START_REF] De Jong | Temperature Shocks and Economic Growth: Evidence from the Last Half Century[END_REF]). Ecuador defaulted in 1997 just a few months after floods caused major power shortages [START_REF] Sturzenegger | Debt defaults and lessons from a decade of crises[END_REF].
4 See Intergovernmental Panel on Climate Change (2007; 2021).
5 Using a recent large-scale survey for the United States, Dietrich, Müller and Schoenle (2022) confirm this intuition. The authors find that respondents to the survey do expect future increases in the probability of costly climate-related disasters, with a median probability of 5%. They also find that the expected probability of disasters varies markedly with various individual characteristics of respondents, the degree of media consumption, and exposure to past disasters.
The model is purposely kept tractable, so that its main mechanisms can be characterized analytically. Succinctly, I find the following. First, the default ratio (the maximum debt-to-GDP ratio that a country can sustain without defaulting) is decreasing and nonlinear in the probability of disasters. Second, I show that different expectations of creditors on disaster risk can have very different effects on a country's default ratio. Finally, I show that, in the presence of disaster risk, sovereign defaults can occur even in a very favorable environment with a low real risk-free rate below the growth rate.

The paper is organized as follows. Section 3.2 provides a brief review of the relevant literature. Section 3.3 presents the model. Section 3.4 characterizes the equilibrium default ratio. Section 3.5 conducts calibration and simulations to illustrate the role of creditors' expectations on disasters. Section 3.6 extends the analysis to situations with a low real risk-free rate below the growth rate and uncertainty about the disaster probability. Section 3.7 concludes.

Related literature.

This paper combines the literature on sovereign default and a growing literature that considers the macroeconomic impacts of climate-related disasters. I briefly review important references that are relevant for my analysis. [START_REF] Noy | The macroeconomic consequences of disasters[END_REF], [START_REF] Lis | The impact of extreme weather events on budget balances[END_REF], [START_REF] Loayza | Natural Disasters and Growth: Going Beyond the Averages[END_REF] and [START_REF] Botzen | The Economic Impacts of Natural Disasters: A Review of Models and Empirical Studies[END_REF] provide comprehensive reviews of the literature on the macroeconomic impacts of natural disasters, including those that are not directly related to climate change such as geophysical (earthquakes, mass movements, volcanic activity) and biological (epidemics) disasters. This paper focuses on climatic disasters (floods, storms, droughts, landslides, wildfires and extreme temperature) whose frequency is projected to increase with climate change.

The effects of climatic disasters on sovereign default risk have received little attention so far. A few exceptions are [START_REF] Mallucci | Natural Disasters, Climate Change, and Sovereign Risk[END_REF] and [START_REF] Phan | Climate Defaults and Financial Adaptation[END_REF], who introduce hurricane shocks in the strategic default framework à la [START_REF] Eaton | Debt with potential repudiation: Theoretical and empirical analysis[END_REF].7 In their framework, the optimizing government can choose each period to repay its outstanding debt or to default, in which case it will face an exogenous cost. Surprisingly, [START_REF] Mallucci | Natural Disasters, Climate Change, and Sovereign Risk[END_REF] finds that the introduction of hurricane shocks in the strategic default model leads to lower debt levels. At the same time, interest rate spreads increase because the optimizing government chooses to default more often, despite the issuance of low debt levels.
[START_REF] Phan | Climate Defaults and Financial Adaptation[END_REF] emphasize the role of disaster insurance and disaster-indexed bonds in the post-disaster recovery path and the government's incentive to default. In contrast to these papers, I abstract from strategic features and model sovereign defaults as "excusable events", following [START_REF] Grossman | Sovereign Debt as a Contingent Claim: Excusable Default, Repudiation, and Reputation[END_REF]. This type of default occurs only when the government is unable to get sufficient fiscal and debt issuance revenue to repay due debt.8 Second, in contrast to [START_REF] Mallucci | Natural Disasters, Climate Change, and Sovereign Risk[END_REF] and [START_REF] Phan | Climate Defaults and Financial Adaptation[END_REF], who assume a constant disaster probability, my framework allows for a time-varying disaster probability and it emphasizes the role of creditors' expectations of disaster risk, independently from the realizations of disaster shocks.

The expectations channel of climatic disasters has been less studied so far. An exception is a parallel paper by [START_REF] Dietrich | The Expectations Channel of Climate Change: Implications for Monetary Policy[END_REF]. The authors conduct a survey of a representative sample of U.S. households and find that respondents expect a high probability of costly disasters due to climate change. Incorporating this insight in a New Keynesian model, they find that disaster risk lowers the natural interest rate and contributes to business cycle fluctuations. My paper extends the role of disaster expectations to the literature on sovereign default, contrasting different types of expectations about disaster risk.

This paper is also related to the literature on rare disasters that emerged following the seminal works of [START_REF] Rietz | The equity risk premium a solution[END_REF] and [START_REF] Barro | Rare Disasters and Asset Markets in the Twentieth Century[END_REF]. This literature emphasizes that the prospects of growth collapses following unlikely events such as wars and severe economic crises can explain several puzzles in macro-finance, in particular the high risk premia observed in stock markets. I extend this literature to the case of climate-related disasters and the linkage with countries' default risk premia.

The model.

The model builds on our previous work, Diarra [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF]. I expand this framework by introducing a time-varying probability of disaster in a tractable way, so that the key mechanisms of the model can be analyzed analytically.

Environment.

Consider a small open economy with an infinite horizon. Time is discrete with periods t = 0, 1, 2, . . ..9 The economy has access to international financial markets, which allow perfect coverage against risk, and investors behave as risk-neutral agents. Let Y_t denote the country's GDP at date t and ã_t ≡ Y_t / Y_{t-1} the (gross) growth rate of GDP at this date. Each period the country may experience a climate-related disaster, which occurs with probability p_t. When a disaster occurs in period t, the growth rate is negatively affected. Assumption 3 formalizes the stochastic shocks in the economy.

Assumption 3.
1. ã_t evolves randomly according to:

ã_t = a_t with probability 1 − p_t,
ã_t = u a_t with probability p_t,    (3.1)

9 The discrete time framework is the standard framework in macroeconomic models.
where u is the severity of the disaster shock, which is assumed to be constant and satisfies 0 < u < 1; a_t is an i.i.d. random variable with a cumulative distribution function G(a) and a density function g(a), both defined on the interval [0, +∞), and E(a) ≡ ā < β^{-1}, where β^{-1} = 1 + r is the risk-free real gross interest rate;

2. the density function g(a) satisfies ∫_0^x a g(a) da > x² g(x);

3. the probability of climatic disasters is a deterministic function of time:

p_t = min(α_0 + α_1 t ; 1),    (3.2)

where α_0 ≥ 0 and α_1 ≥ 0 are two parameters.

According to Assumption 3.1, the growth rate of output ã_t is i.i.d. This implies that the severity of disaster u has a permanent effect on output.10 Assumption 3.2 is a regularity condition on the density function of a_t and it is introduced for analytical purposes only. This assumption essentially states that the density function of the growth rate decreases rapidly.11 Assumption 3.3 is a tractable way to capture the increasing trend in the probability of climatic disasters, as illustrated in Figure 3.1. In Section 3.6, I consider a specification where the probability of disasters is random and creditors learn about it through Bayesian learning.

Creditors.

Creditors are risk-neutral12 and have access to a risk-free asset that pays an interest rate r. They price the government's debt taking into account the possibility that the government is unable to meet its financial obligations at t and thus defaults. The government's budget constraint is subject to a no-Ponzi condition ([START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF]):

lim_{T→∞} E_t [ β^T h_{t+T} B_{t+T−1} ] ≤ 0.

In case of default at t, the government reimburses a fraction h_t < 1 of its due debt. Therefore, the after-default (redeemed) debt level is h_t B_{t−1}. Following Diarra [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF], I assume that the fraction of repaid debt h_t is defined according to a default rule with the following specification:

h_t = h · Ω_t / B_{t−1} if B_{t−1} > Ω_t, and h_t = 1 otherwise,    (3.6)

where h ∈ [0, 1] is a parameter. The parameter h can be interpreted as the maximum recovery rate that creditors can expect in case of a sovereign default. Inversely, we can interpret 1 − h as the minimum "haircut" on public debt, i.e. the fraction of the debt-to-GDP ratio that is written off in case of default. The higher h, the lower the haircut.

Equilibrium default ratio and expectations of disaster risk.

In this section, I characterize the equilibrium default ratio and analyze how it is related to creditors' expectations about disaster risk. I consider three different cases: a simple case with a constant disaster probability; a naive expectation of disaster risk, where creditors revise the probability of disaster each period but assume that it has no trend; and a third situation where creditors are forward-looking and assume an increasing trend in the disaster probability.

Debt valuation and the default ratio.

Let us first address the valuation of public debt in the presence of disaster risk. For convenience, it is useful to express all quantities as fractions of output. Define b_t = B_t / Y_t as the debt-to-GDP ratio and ω_t = Ω_t / Y_t as the default ratio. Using these definitions, equations (3.5) and (3.6) rewrite respectively:
q_t b_t = h_t b_{t−1} / ã_t − ŝ,    (3.7)

h_t = h ã_t ω_t / b_{t−1} if b_{t−1} > ã_t ω_t, and h_t = 1 otherwise.    (3.8)

Taking the sequence {ω_t} as given, equations (3.7) and (3.8), together with (3.3) and (3.5), are sufficient to analyze the valuation of public debt and the dynamics of the emitted debt-to-output ratio b_t. Notice that the sequence of default ratios {ω_t} is endogenous and ultimately needs to be obtained. We will see below how this sequence obtains in equilibrium.

Market value of debt.

Take the sequence {ω_{t+1}} as given. Then, using (3.1) and (3.8), the price of public debt defined in (3.3) rewrites

q_t = β (1 − E_t p_{t+1}) [ (1 − G(b_t / ω_{t+1})) + (h ω_{t+1} / b_t) ∫_0^{b_t/ω_{t+1}} a dG(a) ]
    + β E_t p_{t+1} [ (1 − G(b_t / (u ω_{t+1}))) + (h u ω_{t+1} / b_t) ∫_0^{b_t/(u ω_{t+1})} a dG(a) ].    (3.9)

It is a function of the ratio of public debt emitted on financial markets b_t, a risk premium linked to the probabilities of expected defaults in the future, based on the ratio ω_{t+1}, the probability law of a_t, the probability of disaster p_{t+1}, and the debt recovery rate to be applied in case of a sovereign default. Let v_t = q_t b_t denote the market value of debt, again as a fraction of GDP. From (3.9), it rewrites as follows:

v_t = β (1 − E_t p_{t+1}) [ (1 − G(b_t / ω_{t+1})) b_t + h ω_{t+1} ∫_0^{b_t/ω_{t+1}} a dG(a) ]
    + β E_t p_{t+1} [ (1 − G(b_t / (u ω_{t+1}))) b_t + h u ω_{t+1} ∫_0^{b_t/(u ω_{t+1})} a dG(a) ].    (3.10)

The term on the first line of (3.10) is the market value of public debt if there is no disaster at date t + 1. The term on the second line is the market value of debt if there is a disaster at t + 1.

Let us define

δ_t ≡ b_t / ω_{t+1}.    (3.11)

From the default rule (3.8) and, again, taking ω_{t+1} as given, the ratio δ_t can be interpreted as the minimum growth rate necessary to avoid default at t + 1. Since the ratio ω_{t+1} is subject to the prospect of future disasters, so is δ_t. The higher δ_t, the higher the probability of default. Using definition (3.11), equation (3.10) can be rewritten as

v_t = β χ(δ_t ; E_t p_{t+1}) · ω_{t+1},    (3.12)

where

χ(δ_t ; E_t p_{t+1}) ≡ (1 − E_t p_{t+1}) [ (1 − G(δ_t)) δ_t + h ∫_0^{δ_t} a dG(a) ]
    + E_t p_{t+1} [ (1 − G(δ_t / u)) δ_t + h u ∫_0^{δ_t/u} a dG(a) ].    (3.13)

According to (3.12), the market value of public debt is linearly increasing in the next-period default ratio ω_{t+1}. Notice that χ(δ_t ; E_t p_{t+1}) is a potentially non-monotone function. We will see in the next subsection how this function can be used to derive the equilibrium default ratio.

Equilibrium default ratio.

We can now characterize the default ratio ω_t. By definition, it is the sum of the maximum primary surplus (as a fraction of GDP) and the maximum borrowing proceeds obtained on financial markets, that is, from (3.12) and (3.13):

ω_t = max_{δ_t} β χ(δ_t ; E_t p_{t+1}) · ω_{t+1} + ŝ.    (3.14)

The optimal ratio δ_t satisfies the first-order condition

(1 − E_t p_{t+1}) [ 1 − (1 − h) δ_t g(δ_t) − G(δ_t) ] + E_t p_{t+1} [ 1 − (1 − h) (δ_t / u) g(δ_t / u) − G(δ_t / u) ] = 0.

Constant expectations of disaster risk.

Suppose first that creditors expect a constant probability of disaster. The following result characterizes the default ratio in this case.

Proposition 2. Suppose that the probability of disaster is constant, p_t = p for all t. Then the default ratio is constant and given by

ω_t = ŝ / (1 − β χ(p)) ≡ ω*_p, ∀t,    (3.19)

where χ(p) denotes the maximized value of χ(δ_t ; p) over δ_t. The ratio ω*_p is strictly increasing in ŝ and strictly decreasing in p.

Proof. See Appendix 3.8.1.

Despite the stochastic nature of this environment, the default ratio is a constant, ω_t = ω*_p ∀t, independent from the history of shocks, in particular the realizations of disaster shocks. Notice, however, that it is affected by the prospect of these latter shocks or, loosely speaking, by their "frequency", which is assumed to be constant here. The constant property of the default ratio is related to the fact that here we assume a constant disaster probability, p_t = p ∀t.15 We will see below that this is no longer the case once we allow the probability of disaster to vary over time.
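To fix ideas, here is a minimal numerical sketch of the pricing function χ in (3.13) and of the constant-probability default ratio (3.19). The distribution G, the parameter values and the numerical routines are illustrative assumptions, not the calibration of Table 3.1.

```python
from scipy import integrate, optimize, stats

# Illustrative placeholder parameters (not the chapter's calibration).
beta = 1 / 1.03          # discount factor, risk-free rate r = 3%
h = 0.9                  # maximum debt recovery rate
s_hat = 0.05             # maximum primary surplus (fraction of GDP)
u = 0.95                 # disaster severity (growth is scaled by u)
G = stats.lognorm(s=0.03, scale=1.02)   # assumed distribution of the no-disaster growth rate a

def chi(delta, p):
    """chi(delta; p) from eq. (3.13): expected repayment per unit of omega_{t+1}."""
    i1, _ = integrate.quad(lambda a: a * G.pdf(a), 0.0, delta)
    i2, _ = integrate.quad(lambda a: a * G.pdf(a), 0.0, delta / u)
    no_disaster = (1.0 - G.cdf(delta)) * delta + h * i1
    disaster = (1.0 - G.cdf(delta / u)) * delta + h * u * i2
    return (1.0 - p) * no_disaster + p * disaster

def chi_max(p):
    """Maximize chi over delta, as in eq. (3.14); returns (delta*, chi(delta*; p))."""
    res = optimize.minimize_scalar(lambda d: -chi(d, p), bounds=(1e-6, 3.0), method="bounded")
    return res.x, -res.fun

# Constant disaster probability: omega* = s_hat / (1 - beta * chi(p)) from eq. (3.19),
# which is finite only when beta * chi(p) < 1.
for p in (0.0, 0.05, 0.10):
    delta_star, chi_star = chi_max(p)
    if beta * chi_star < 1.0:
        omega_star = s_hat / (1.0 - beta * chi_star)
        print(f"p={p:.2f}: delta*={delta_star:.3f}, omega*={omega_star:.2f} (fraction of GDP)")
    else:
        print(f"p={p:.2f}: beta*chi >= 1, the default ratio is unbounded")
```

The explicit check on β·χ(p) < 1 mirrors the finiteness condition that reappears in Section 3.6.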
Naive expectations of disaster risk.

Suppose that creditors have "naive" expectations about disaster risk in the following sense: they assume that there is no particular trend in the probability of disaster, but each period they revise the probability of disaster, disregarding any possible changes in the future. That is, creditors' expectations about future disaster probabilities are such that

E_t p_{t+1} = p_t, ∀t.

Forward looking expectations of disaster risk.

After date T, the probability of disaster is constant and equal to one: p_T = p_{T+1} = 1. Therefore, the default ratio is defined by (3.19), that is, ω_T = ω_{T+1} = ω*_1. Then, from (3.25) and setting ω_T = ω*_1, we can compute the default ratio at date T − 1:

ω_{T−1} = ŝ + β χ(T) · ω*_1 ≡ ω^forward_{T−1}.

Numerical illustration.

In this section, I calibrate and simulate the model to illustrate the link between the default ratio and creditors' expectations about disaster risk. Building on the historical trend in the frequency of climatic disasters, I assess the potential effects of a gradual increase in the probability of disaster on the default ratio.

Estimating α_0 and α_1.

A starting point of the numerical analysis is the estimation of the parameters of the disaster probability, α_0 and α_1. A natural strategy to estimate these parameters is to use the historical frequency of climatic disasters. For this purpose, I use the Emergency Events Database (EM-DAT).17 EM-DAT is a global database that records the occurrence of natural events, their monetary damage, the number of deaths they cause and other information such as the location. The database covers all countries in the world and goes back to 1900. It includes both climate-related events (e.g., storms, floods, droughts, landslides, wildfires, extreme temperature) and those that are not directly related to climate (e.g., earthquakes, biological and technological events). I consider the period 1960-2019 to avoid recording issues in earlier periods and focus on events that are directly related to climate change, since their frequency is predicted to increase over the next decades.

Calibration.

The last column of Table 3.1 presents the parameter values for the numerical exercises in the next subsection. The length of one period in the model is set to 4 years.19 The risk-free rate is set to the average annualized real yield on 4-year maturity German bonds in 1980-2019. The debt recovery parameter h is calibrated following [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF]; the maximum primary surplus ŝ is calibrated according to estimates by IMF (2018). The disaster shock u is calibrated in accordance with estimates found in the empirical literature on the impacts of disasters on growth.20 The parameters of the disaster probability are calibrated to their estimated values obtained previously. The mean and volatility of the log growth rate are sample averages in 1980-2019.

Simulation results.

I simulate the model over a 50-year period. Figure 3.2 illustrates the evolution of the ratios ω^forward_t and ω^naive_t over 50 periods. We notice that both ratios are decreasing in the probability of disaster. However, ω^forward_t is much lower than ω^naive_t, especially in the early periods. This is because, in contrast to the case of naive expectations of disaster risk, when creditors have a forward-looking view they anticipate future increases in the disaster probability. Higher probabilities of disasters imply lower expected growth rates, a higher risk premium on public debt and thus a lower default ratio. In contrast, under naive expectations the negative feedback of disaster risk to the default ratio is weaker, as creditors ignore future increases in the probability of disasters. Nonetheless, the two ratios converge over time as creditors with naive expectations become more "aware" of the increasing frequency of disasters, although they do not anticipate this trend. The figure also highlights the sensitivity of the default ratio to the specifications that underlie creditors' expectations about disaster risk. A sudden change in creditors' expectations of disaster risk, from naive to forward-looking ones, can cause a sharp fall in the default ratio, which can lead to a rapid increase in the default risk.

Figure 3.2: Simulated paths of the default ratio (% of GDP).
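The logic behind Figure 3.2 can be sketched as follows, reusing beta, s_hat and the chi_max helper from the previous snippet. The trend parameters and the horizon are placeholders; the naive ratio is computed, as the text suggests, by applying the constant-probability formula (3.19) at the current p_t, while the forward-looking ratio is obtained by iterating the recursion backward from the first date T at which p_t reaches one.

```python
# Continues the previous sketch: assumes beta, s_hat and chi_max() are already defined.
alpha0, alpha1 = 0.035, 0.02     # placeholder trend parameters of eq. (3.2)
horizon = 50
p = [min(alpha0 + alpha1 * t, 1.0) for t in range(horizon + 1)]

# Naive expectations: treat the current probability as permanent, eq. (3.19) at p_t.
omega_naive = []
for t in range(horizon + 1):
    _, chi_t = chi_max(p[t])
    omega_naive.append(s_hat / (1.0 - beta * chi_t) if beta * chi_t < 1.0 else float("inf"))

# Forward-looking expectations: backward recursion omega_t = s_hat + beta*chi(p_{t+1})*omega_{t+1},
# starting from the first date T at which p_t = 1, where omega_T = omega*_1.
T = next((t for t, prob in enumerate(p) if prob >= 1.0), horizon)
_, chi_one = chi_max(1.0)
omega_forward = [None] * (T + 1)
omega_forward[T] = s_hat / (1.0 - beta * chi_one)
for t in range(T - 1, -1, -1):
    _, chi_next = chi_max(p[t + 1])
    omega_forward[t] = s_hat + beta * chi_next * omega_forward[t + 1]

print("omega_naive[0]   =", round(omega_naive[0], 2))
print("omega_forward[0] =", round(omega_forward[0], 2))
```

With an increasing p_t, the forward-looking path starts below the naive one and the two converge as t approaches T, which is the qualitative pattern described in the text.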
3.6 Extensions.

Sovereign default and disaster risk when r < g.

There has been a large debate recently on the sustainability of public debt and the possibility of government default in an environment where the real risk-free rate is below the rate of growth, that is, when the differential r − g is negative.21 This situation, while it is not new ([START_REF] Sergeyev | Debt Sustainability in a Low Interest Rate World[END_REF] Mehrotra 2020, Mauro and [START_REF] Mauro | r minus g negative: Can We Sleep More Soundly?[END_REF]), has led some researchers to conclude that the sustainability of public debt is not a real concern, in particular in the near future (see [START_REF] Blanchard | Public debt and low interest rates[END_REF]). In this section, I relax the assumption ā < β^{-1} = 1 + r and analyze how the introduction of disaster risk may contribute to the emergence of sovereign default risk, further emphasizing the crucial role of creditors' expectations about disaster risk. To set the intuition, let us consider the simple case with constant expectations of disaster risk of Section 3.4.2, that is, p_t = p ∀t. In this case, the default ratio is defined by (3.19).

21 Here g refers to the (net) real growth rate of output, and r is the real risk-free rate as before.

Proposition 7. Suppose that the probability of disasters is constant, p_t = p. Then the default ratio defined by (3.19) is infinite if and only if p is sufficiently low and the risk-free rate and the mean growth rate are such that βχ(p) ≥ 1.

Proof. The proof of Proposition 7 is immediate from (3.17) and Proposition 2.

According to Proposition 7, the possibility of sovereign defaults cannot always be ruled out in the presence of disaster risk, even if the risk-free rate remains below the growth rate, that is, r < g. Following the same reasoning, this finding generalizes to the case with naive expectations of disaster risk.22 We also obtain a similar result for the case with forward-looking expectations of disaster risk by inspecting (3.26). From this equation, we notice that the ratio ω^forward_t is finite if and only if ω^forward_T = ω*_1 < ∞. This inequality is verified if and only if the risk-free and growth rates are such that the function χ(p_t) satisfies βχ(1) < 1 for any t ≥ T.

Figure 3.3 illustrates this result, considering the cases of naive and forward-looking expectations of disaster risk as in Section 3.4. To construct Figure 3.3, I set r = 1%, in contrast to 2.9% in Figure 3.2, while keeping the other parameters the same as before. The vertical line on the figure indicates the date after which the default ratio under naive expectations of disaster risk becomes finite, that is, ω^naive_t < ∞. After this date, a public default cannot be ruled out. Before this date, the ratio ω^naive_t is infinite as the probability of disaster is not high enough to have βχ(p_t) < 1. In this case, there are no concerns about sovereign default. The picture is, however, different in the case with forward-looking expectations of disaster risk. In this case the default ratio ω^forward_t is always finite and, for a sufficiently high initial public debt-to-GDP ratio, a sovereign default can occur at any date t. Compared to Figure 3.2, the ratio ω^forward_t is now markedly lower than ω^naive_t. This finding shows that creditors' expectations about disaster risk play a crucial role even in an environment with low interest rates.

22 See equations (3.22) and (3.23).

Figure 3.3: Simulated paths of the default ratio (% of GDP) when r < g.

Learning from disasters.

The analysis in the previous sections assumed that the probability of disaster is deterministic and observed by creditors.
In this section, I relax this assumption and consider a situation where there is uncertainty about the disaster probability and creditors engage in Bayesian learning: they do not observe the probability of disaster p_{t+1}, but each period they form a prior belief on the distribution of p_{t+1} and update their belief based on the history of disaster realizations up to the current period. Let z_t denote a disaster indicator that equals one if a disaster occurs at t, and zero otherwise. Then creditors' posterior expectation of p_{t+1}, given the observed sequence of disaster realizations, is E_t(p_{t+1} | z^t), where z^t is the disaster history up to date t. Using this notation, the expected disaster probability entering (3.13) and (3.15) is replaced by E_t(p̃_{t+1} | z^t). Disasters occur according to a Bernoulli distribution:23,24

p̃_{t+1} = p̃ ≡ Pr(z_{t+1} = 1) and 1 − p̃ ≡ Pr(z_{t+1} = 0),    (3.29)

where I use the tilde to highlight the random feature of the disaster probability. Given the distribution that underlies the occurrence of disasters, the next step is to decide on creditors' prior belief on the distribution of p̃, that is, the distribution that they have in mind before observing any disaster (i.e., in the initial period t = 0). I assume that creditors' prior belief on the distribution of p̃ at the initial date t = 0 is a beta distribution with shape parameters α > 0 and γ > 0.25 Then, for each period t = 1, 2, . . ., creditors observe whether a disaster occurs (i.e., z_t = 1) or not (i.e., z_t = 0) and update their prior distribution of p̃. Therefore, using Bayes' rule, it can be shown that creditors' posterior distribution of p̃ given the disaster history z^t is also a beta distribution with parameters α + n and γ + t − n, where n is the total number of periods in which a disaster occurs up to date t, that is, n = Σ_{s=1}^{t} z_s.26 Symmetrically, t − n is the number of periods with no disaster occurrence. Creditors' posterior expectation of p̃ is

E_t(p̃ | z^t) = (α + n) / (α + γ + t).    (3.30)

23 The assumption of i.i.d. disaster occurrences is made for simplicity. One could consider a more elaborate, non-i.i.d. process, but at the cost of cumbersome complexities in the model.
24 The Bernoulli distribution is a standard probability distribution used for random variables with a binary outcome, as is the case here.
25 The beta distribution is a commonly used distribution for a continuous random variable that can only take values on the interval [0, 1], as is the case for p̃.
26 Inversely, the term t − n represents the number of periods in which no disaster occurs.

Notice that, given α and γ, the posterior expectation is increasing in the number of disaster occurrences n and a decreasing function of time or, more precisely, of the number of non-disaster periods t − n. Intuitively, as the number of disaster occurrences increases, creditors will expect a higher probability of disaster in the future. On the contrary, if the number of non-disaster periods increases, creditors will expect a lower probability of disaster. Inserting (3.30) into the default ratio recursion gives the path of the default ratio under Bayesian learning, ω^bayesian_t, where δ*_t is defined by (3.16) and (3.30).

Numerical illustration.

Here is a numerical example of this Bayesian perception of disaster risk. Suppose that the true probability is 0.035.27 Suppose also that in period t = 0 creditors' prior distribution of p̃ is a beta distribution with parameters α = 1 and γ = 27.57, so that the prior expectation of p̃, defined as α/(α + γ), equals the true probability.28 The other parameter values are set as in Figure 3.2.29

27 This corresponds to the historical estimated value for α_0, the intercept of (3.2), which is reported in Table 3.1.
28 Alternatively, one could use any values of α and γ that put most of the prior probability at small values of p̃. It does not matter very much which values one chooses; the resulting posteriors given the data would be very similar.
29 See Table 3.1.
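A minimal sketch of the posterior updating rule (3.30) on a simulated disaster history is given below; the prior parameters and the simulation length are placeholders. Feeding the resulting posterior mean into the default-ratio recursion of the previous sketches would reproduce the qualitative pattern of Figure 3.4: a jump in the posterior after a disaster, followed by a slow decline during quiet periods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta-Bernoulli updating of the disaster probability, eq. (3.30).
# Placeholder values: prior Beta(alpha, gamma) with mean equal to the assumed true probability.
alpha, gamma = 1.0, 27.57
p_true = alpha / (alpha + gamma)         # roughly 0.035
periods = 50

post_mean = []
n = 0                                     # number of disasters observed so far
for t in range(1, periods + 1):
    z = rng.random() < p_true             # disaster indicator z_t
    n += int(z)
    post_mean.append((alpha + n) / (alpha + gamma + t))   # E_t(p | z^t) = (alpha+n)/(alpha+gamma+t)

print("posterior mean after", periods, "periods:", round(post_mean[-1], 4))
```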
Figure 3.4: Simulated paths of the default ratio (% of GDP) with Bayesian learning.

Figure 3.4 presents a simulated default ratio ω^bayesian_t. We notice a sharp fall in the default ratio after the realization of a disaster at date t = 1. This is because, after observing the disaster, there is an upward shift in creditors' posterior expectations of disaster risk, leading to a sudden fall of the default ratio. However, if no disaster occurs in the subsequent periods, the default ratio recovers gradually towards its pre-disaster level as creditors' posterior expectations about disaster risk fade over time.

Conclusion.

I have developed a tractable stochastic model to analyze the potential effects of a gradual increase in the probability of climatic disasters on public debt sustainability and sovereign default risk. In the baseline model, the probability of disaster is modeled as a deterministic function with a linear trend. I show that when a

Country analysis.

their country-specific values.31 Tables 3.2 and 3.3 report simulated maximum debt ratios, b̄_t, for advanced and emerging countries, respectively.32 In each table, I report simulation results for both the naive and forward-looking expectations of disaster risk over three time horizons: 2019 (the last year in my sample), 2030 (a decade ahead), and 2050 (mid-century). For comparison, I also report results for the standard case with no disaster risk, along with the actual debt-to-GDP ratio of each country in 2019, so as to have a measure of their fiscal space.33

Consider the group of advanced countries (Table 3.2). Assuming no disaster risk, as in [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF], the maximum debt ratio in advanced countries amounts to 375% of GDP on average (Column 2). This value is well above the average debt-to-GDP ratio of 95% observed for advanced countries in 2019 (Column 1). There is, however, some heterogeneity across countries, reflecting differences in their growth rates. For instance, some countries such as Hong Kong and Lithuania present fairly large maximum debt ratios evaluated at 3050% and 1090% of GDP, respectively.34 In contrast, other countries such as Italy and Greece present debt ratios amounting to 150% and 125% of GDP, respectively. Comparing maximum debt ratios to actual debt-to-GDP ratios observed in each country in 2019 to have a measure of their fiscal space, one would conclude that sovereign default is not an immediate concern in advanced countries, except for Italy and Greece.35 Assuming naive expectations of disaster risk, the maximum debt ratios in 2019 (Column 3) of Hong Kong and Lithuania are reduced by 63% and 35% of GDP, respectively, relative to the case with no disaster risk (Column 2). The corresponding

31 See Appendix ?? Table 3.4.
32 I focus on the ratio b̄_t, instead of the default ratio ω_t, as the former is directly comparable to actual debt-to-GDP ratios.
33 The fiscal space is defined as the difference between the maximum debt ratio that a country can sustain and its actual debt ratio, that is, b̄_t − b_t. See [START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF] and [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF].
34 The relatively large maximum debt ratios of these countries are partly due to their high growth rates.
Turning to emerging countries (Table 3.3), we observe a pattern similar to that of advanced countries. In the absence of disaster risk (Column 2), China presents an infinite maximum debt ratio,36 followed by Pakistan with a maximum debt ratio of 683% of GDP. Brazil and Russia present the lowest maximum debt ratios, evaluated at 62% and 32% of GDP, respectively. Comparing maximum debt ratios to actual debt-to-GDP ratios observed in each country in 2019, we notice that Brazil and Hungary have exhausted their fiscal spaces in 2019 and thus should already be in default. Assuming naive expectations of disaster risk, the maximum debt ratio of emerging countries in 2019 (Column 3) is reduced by 42% on average relative to the case with no disaster risk (Column 2). At the debt-to-GDP ratios observed in 2019, four of the twelve emerging countries (Brazil, Hungary, Mexico and South Africa) would be in default by 2030. This pattern is further accelerated if creditors have a forward-looking view about disaster risk and anticipate future increases in this risk. In this case, Brazil, Hungary, Mexico and South Africa would already be in default in 2019. This simple calibration exercise further illustrates the role of creditors' expectations about disaster risk. It shows how a gradual increase in the probability of disasters, due to climate change, may affect public debt sustainability and (potentially) lead to more sovereign defaults across countries.

Conclusion

In this doctoral thesis, I have studied three different issues related to the topic of public debt sustainability, sovereign default, and their interplay with climate change and the risk of climate-related disasters. To address these issues, I combine theoretical modeling with econometric and simulation methods. Throughout the thesis, I adopt a perspective that considers sovereign defaults as "excusable events" in the sense of [START_REF] Grossman | Sovereign Debt as a Contingent Claim: Excusable Default, Repudiation, and Reputation[END_REF]. This type of default is associated with identifiable bad states of nature. Such defaults occur only when, following a negative shock, the government is unable to get sufficient fiscal and debt issuance proceeds to repay due debt. This definition of default is in contrast with the one used in the literature of strategic default à la [START_REF] Eaton | Debt with potential repudiation: Theoretical and empirical analysis[END_REF], which emphasizes the willingness of the government to repay due debt. The thesis is structured around three chapters.

Chapter 1 revisits the old issue of public debt sustainability in a stochastic environment. We have challenged the standard assumption of zero debt recovery in the literature and developed a stochastic model that allows for partial and, potentially, repetitive defaults. This model incorporates a debt recovery rule that depends on a unique parameter, which is roughly equal to one minus the "haircut", the fraction of the debt-to-GDP ratio lost by creditors following a sovereign default. It can take any value from 0 to 1. The case with 0 corresponds to a full repudiation of the defaulted debt, while the case with 1 is equivalent to full repayment of public debt. We have solved the model explicitly and have clarified the notions of public debt sustainability, sovereign default, and solvency, which are quite often overlooked by economists and in the public debate. We have provided more precise definitions of these concepts, discussed the importance of distinguishing them, and provided an operational measure for each of them. A key message of this chapter is that the assessment of the sustainability of public debt, in particular the estimation of country fiscal spaces, depends crucially on the value of the debt recovery parameter. A small change in this parameter can have substantial effects on the dynamics of public debt and, therefore, lead to very different conclusions in terms of debt sustainability analysis. We illustrate the role of this parameter through calibrations, simulations and estimations of the model using historical data on a well-defined sample of emerging and advanced countries.

Chapter 2 has embarked on the growing literature on climate change and the link with sovereign default risk. Indeed, although climate change is a hot topic for academics, policy makers, and investors, there is little evidence on the link, and the nature of the link, between climate change and sovereign risk, and whether financial markets effectively price climate-related risks.
To address this issue, we have estimated the effects of temperature anomalies (temperature's deviation from its long-run mean) on sovereign default risk and explored the transmission channels. As a proxy for sovereign default risk, we have used the sovereign credit default swap (CDS) spread. We considered sovereign CDS spreads at several maturities: one, three, five and ten years. Econometrically, we addressed this issue using cross-country panel data covering 76 developing and advanced countries with sufficient data over the period 1999-2017. Using a standard two-way fixed effects estimation method, we have documented a strong positive impact of temperature on sovereign CDS spreads. Interestingly, we found that the longer the maturity of the CDS spreads, the larger the impact of temperature on spreads. An implication of this finding is that sovereign creditors price climate risk, not only when investing over a short horizon but also over medium and long ones. Regarding the transmission channels of the effects of temperature to CDS spreads, we have used the equilibrium bond pricing equation found in theoretical sovereign default models to isolate the key transmission mechanisms. We have shown that the effect of temperature on CDS spreads is mostly driven by the negative effect of the former on the growth rate of GDP. Interestingly, we find that the debt-to-GDP ratio and the primary balance, which are two macroeconomic determinants present in the basic pricing equation, do not play any role in the transmission of the effect of temperature to CDS spreads. These findings emphasize the need to take climate risk into account in the assessment of public debt sustainability. They have important implications for the policy responses to climate change in the context of high public debt-to-GDP ratios and limited fiscal space available for countries.
We formalize the existence of the RSS-debt-to-output ratio in the following Proposition 4. In the constrained fiscal regime, 1. there exists a unique risky-steady-state-debt ratio, b rss h , satisfying (1.31) and b rss h Table 1 . 1 1: Baseline calibration (annual basis). Emerging Advanced c : Historical (1980-2018), d : US debt duration (2010). Table 1 . 1 2: Debt recovery parameter estimation (Nonlinear Least Squares Method). ŝ ĥ Mean absolute yield spread error (%) a Advanced economies 0.05 0.88 0.47 0.04 0.93 0.46 Emerging economies 0.03 0.42 0.40 0.02 0.70 0.48 Notes: ŝ is calibrated following IMF Table 1 . 1 3: Debt limit, b lim h = min (b max h , b rss h ): Advanced countries. Table 1 . 1 4: Debt limit, b lim h = min (b max h , b rss h ) : Emerging countries. 30 See Table1.8 in Appendix 1.8.2 for the mean growth of the list of emerging countries. Table 1 . 1 5: Debt limit, b lim h = min b max h , b rss h , and low interest rates: Euro zone. and no positive value exists for b rss h . For each country, the mean µ and volatility σ of the growth rate are calibrated to their historical values in 2009-2018. Debt-to-GDP (b t ) b lim h (ŝ = 5%, h = 0.88) 2018 2022 † r = r = 1% r = 0.31% 1.5% (1) (2) (3) (4) (5) Country Austria 73.75 85.74 624.01 344.27 261.72 Belgium 102.03 116.23 1751.15 529.98 355.54 Finland 59.26 69.15 235.24 181.46 156.32 France 98.39 114.27 760.44 380.68 281.70 Germany 61.69 67.28 554.10 324.15 250.89 Greece 184.85 200.49 89.78 80.79 75.48 Ireland 63.65 63.16 ∞ ∞ ∞ Italy 132.16 155.51 212.85 166.94 144.95 Latvia 35.93 45.33 249.12 191.90 165.20 Lithuania 34.17 47.67 383.00 262.59 215.03 Luxembourg 21.43 27.30 ∞ 983.17 521.71 Netherlands 52.39 56.10 517.42 309.25 241.02 Portugal 120.13 125.55 272.72 201.92 170.74 Slovak Republic 48.94 64.29 3094.43 626.79 401.51 Spain 97.09 117.32 301.11 217.16 181.54 Mean 79.06 90.36 695.80 342.93 244.53 Notes. † : Projections from IMF World Economic Outlook (April 2021). ∞: Cases where b max h = ∞ Using again δ = b/ω h , (1.33) and (1.44), we can express the difference between the value function υ (b; h) and the refinancing needs, b ā -ŝ, as follows: we have to prove that āω hb rss h is increasing in h. As b * h = δ * h ω h , and knowing, from Proposition 2, that ω h is a strictly increasing function of h, we only have to show that δ * h is decreasing in h for h > h. Table1.7: Data : Advanced countries. The time period for the growth rate ā and the risk-free rate r is 2009-2018. Min, Max and Std are the minimum, maximum and standard deviation. The time period of the sample is 1980-2018. Sources: Debt-to-GDP ratios correspond to general government gross debt from the IMF World Economic Outlook database (October 2019). Yield spreads are the difference between the country's long-term real government interest rates and the German rates. See Table1.6 for data sources. Country Australia Austria Belgium Canada Czech Republic Denmark Finland France Germany Greece Hong Kong Iceland Ireland Israel Italy Japan Korea Latvia Lithuania Luxembourg Netherlands New Zealand Norway Portugal Singapore Slovak Republic Spain Sweden Table 1.8: Data : Emerging countries (1980-2018). 
µ σ h 3.08 1.45 0.964 1.98 1.47 0.964 1.91 1.42 0.965 2.74 1.95 0.952 1.98 3.84 0.907 1.76 1.88 0.954 2.16 3.06 0.925 1.78 1.34 0.967 1.70 1.90 0.953 0.79 3.54 0.914 4.53 3.58 0.913 3.47 3.55 0.914 4.76 4.66 0.888 3.48 1.78 0.956 1.21 1.84 0.955 1.90 2.23 0.945 5.95 3.80 0.908 3.90 5.73 0.863 4.10 5.19 0.875 3.78 3.04 0.926 2.07 1.79 0.956 2.59 1.82 0.955 2.43 1.69 0.958 1.96 2.58 0.937 6.21 3.72 0.91 3.87 3.05 0.926 2.25 2.14 0.947 2.14 2.08 0.949 Country µ σ h Brazil 2.37 3.27 0.920 Chile 4.21 3.95 0.904 China 9.07 2.47 0.939 Colombia 3.40 2.05 0.950 Hungary 2.14 2.76 0.932 Malaysia 5.63 3.45 0.916 Mexico 2.48 3.22 0.922 Nigeria 3.01 5.45 0.869 Pakistan 4.80 1.93 0.953 Philippines 3.75 3.34 0.919 Poland 3.66 2.66 0.935 Russia 0.61 6.55 0.844 South Africa 2.23 2.22 0.946 Sample 3.64 3.33 0.919 Table 1.9: Sustainability and low interest interest rate: Euro zone (r = 0.31%). ā (1 + r) -1 h (1) (2) Country Austria 1.007 0.970 Belgium 1.009 0.946 Finland 1.000 1.000 France 1.006 0.973 Germany 1.010 0.959 Greece * 0.970 1.000 Ireland 1.049 0.754 Italy * 0.994 1.000 Latvia 1.006 0.976 Lithuania 1.014 0.947 Luxembourg 1.021 0.859 Netherlands 1.006 0.977 Portugal 1.000 1.000 Slovak Republic 1.019 0.900 Spain 1.002 0.992 (1980-2018). Debt-to-GDP ratio (bt) Yield spread (st) Country Mean Std Min Max Mean Std Min Max Slovak Republic 41.59 9.52 21.67 54.74 0.31 1.44 -2.27 3.72 Notes. Table 1.10: Debt-to-GDP ratios and real yield spreads (%): Advanced countries Australia 23.71 9.68 9.69 41.37 1.36 1.90 -1.03 7.00 Austria 69.1 9.32 55.93 84.4 0.08 0.29 -0.33 1.02 Belgium 110.73 15.37 76.36 138.14 0.38 0.90 -0.79 2.58 Canada 78.24 14.12 44.91 100.25 0.67 1.00 -0.59 3.19 Czech Republic 28.63 10.47 11.65 44.91 0.18 0.96 -1.67 1.46 Denmark 48.9 14.13 27.35 78.63 0.61 1.09 -0.85 4.16 Finland 38.33 17.38 10.89 63.45 1.02 1.87 -1.34 5.50 France 58.83 24.52 20.83 98.42 0.54 0.78 -0.75 2.48 Germany 63.26 11.27 38.99 82.31 ----Greece 101.97 48.44 22.53 184.85 4.10 6.67 -1.78 22.90 Hong Kong 0.97 1.02 0.05 3.52 0.20 3.24 -4.34 7.13 Iceland 47.61 19.64 24.48 92.03 1.46 2.51 -3.77 4.91 Ireland 61.15 30.82 23.62 120.04 1.25 2.35 -2.80 7.60 Israel 74.39 10.74 60.41 92.89 1.86 2.13 -3.70 5.68 Italy 112.52 12.31 92.91 132.16 1.25 1.68 -1.13 4.34 Japan 136.42 66.77 48.81 237.13 -0.72 1.05 -3.46 0.94 Korea 22.64 10.19 7.98 37.92 1.06 1.13 -0.95 3.04 Latvia 26.12 14.09 8.12 46.91 -0.04 4.50 -8.04 8.60 Lithuania 28.59 9.74 14.57 42.58 0.71 3.39 -4.90 9.54 Luxembourg 13.46 6.83 6.49 23.69 -0.58 0.87 -2.39 0.54 Netherlands 60.76 10.56 41.97 76.78 -0.01 0.95 -2.28 1.89 New Zealand 38 14.66 16.3 68.58 1.88 1.60 -0.48 6.74 Norway 36.85 8.09 22.94 52.56 0.60 1.34 -1.27 3.98 Portugal 78.89 30.42 50.34 130.61 1.58 2.90 -2.16 9.56 Singapore 90.48 13.31 69.82 113.63 -0.09 2.23 -3.45 3.55 Table 1.11: Debt-to-GDP ratios and real yield spreads (%) : Emerging countries (1980-2018). 
Debt-to-GDP ratio (bt) Yield spread (st) Country Mean Std Min Max Mean Std Min Max Brazil 69.32 8.07 60.2 87.89 4.39 3.83 -2.31 13.31 Chile 15.31 8.17 3.88 37.37 1.60 1.54 -1.33 3.65 China 30.59 8.87 20.45 50.64 -0.10 1.92 -3.07 3.16 Colombia 38.63 7.99 23.36 52.16 4.34 2.23 0.19 8.15 Hungary 68.43 9.75 51.58 84.06 1.53 2.31 -1.33 6.84 Malaysia 46.82 11.03 29.62 74.13 0.66 1.87 -1.37 3.96 Mexico 44.21 5.46 37.21 56.76 2.58 1.46 -0.02 6.19 Nigeria 33.64 21.97 7.28 74.96 0.40 3.75 -5.65 5.21 Pakistan 65.62 7.3 52.44 81.23 3.98 2.43 -0.44 7.12 Philippines 55.39 11.14 38.92 76.08 3.04 2.85 -1.74 9.90 Poland 46.85 5.63 36.38 55.69 2.10 1.58 -0.80 5.00 Russia 29.15 31.45 7.44 135.06 0.60 5.97 -6.55 15.84 South Africa 39.67 8.87 26.51 56.71 2.41 2.41 -2.57 7.62 Sample 44.32 20.47 3.88 135.06 2.22 2.84 -5.65 13.31 Sovereign Default Risk and Climate Change: Is it Hot Enough ? Notes: Chapter 2 2.1 Introduction Spain Switzerland 55.84 22.88 1.80 16.58 100.37 1.56 1.03 0.962 2.02 -1.88 5.18 Sweden United Kingdom 49.68 11.49 2.13 37.24 69.15 1.91 0.93 0.953 1.29 -1.07 4.16 United States Switzerland 47.69 7 2.60 34.35 59.16 1.81 -0.48 0.955 1.08 -3.21 1.23 Sample United Kingdom 50.4 19.93 2.81 28.57 87.91 2.63 0.55 0.936 0.93 -1.10 2.81 United States 84.29 20.85 53.15 106.82 -0.05 0.93 -1.66 1.96 Sample Notes: µ and σ are the mean and standard deviation of the log gross growth rate of GDP 60.14 38.13 0.05 237.13 0.68 2.19 -8.04 22.90 per capita expressed in percentage. h is the minimum value of h above which a risky steady state exists (see Proposition 4). Growth data are from the World Bank database and cover Notes: Min, Max and Std are the minimum, maximum and standard deviation. The time period of the the period 1980-2018. 68 or equivalently b rss h ā < hω h . Figure 1.10: h < H < 1 Notes: µ and σ are the mean and standard deviation of the log gross growth rate of GDP per capita. h is the minimum value of h above which a risky steady state exists (see Proposition 4). Growth data are from the World Bank database and cover the period 1980-2018. * : Countries where ā < 1 + r. sample is 1980-2018. Sources: Debt-to-GDP ratios correspond to general government gross debt from the IMF World Economic Outlook database (October 2019). Yield spreads are the difference between the country's long-term government real interest rates and the German rates, hence the dash (-) in the table for the German spread. Sources: see Table 1 .6. There is growing interest of academics, investors, and policymakers in the economic impacts of climate change and the appropriate policy responses. This interest has been strongly revived recently due to accelerating increase in global surface temperature (Figure 2 .1) and the frequency of extreme climate shocks, such as heatwaves, droughts, wildfires, tropical cyclones, and coastal flooding. 1 Table 2 . 2 1: Data: Summary statistics(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014)(2015)(2016)(2017) Variable Obs. Mean Std. dev. 
Min Max Year 1444 2008 5.48 1999 2017 CDS 1-year (basis point) 1145 141.75 242.30 0.50 3240.94 CDS 3-year (basis point) 1145 175.45 258.76 0.50 2595.08 CDS 5-year (basis point) 1145 206.96 283.09 0.52 3687.56 CDS 10-year (basis point) 1145 249.40 323.32 0.83 4893.56 T i,t -T i,1900-1950 (°Celsius) 1444 0.76 0.57 -1.58 2.91 T i,t -T region,1900-1950 (°Celsius) 1444 -1.20 11.43 -57.07 9.68 T i,t -T i,1900-1998 (°Celsius) 1444 0.73 0.56 -1.54 2.79 T i,t -T i,10 year moving average (°Celsius) 1444 0.17 0.51 -2.32 2.00 T i,t -T i,20 year moving average (°Celsius) 1444 0.31 0.51 -2.14 2.22 T i,t -T i,30 year moving average (°Celsius) 1444 0.44 0.52 -1.85 2.28 T i,t -T i,40 year moving average (°Celsius) 1444 0.53 0.53 -1.80 2.40 T i,t -T i,50 year moving average (°Celsius) 1444 0.58 0.54 -1.68 2.46 T i,t /sd i,1999-2017 1444 -4.07 26.00 -103.39 53.99 T i,t (°Celsius) 1444 12.96 12.47 -34.10 28.40 Log of real GDP per capita 1444 9.81 0.92 6.56 11.54 Real GDP Growth rate (%) 1444 2.31 4.01 -48.48 40.20 Debt-to-GDP (%) 1406 53.78 37.52 1.56 344.32 Primary balance (% of GDP) 1429 0.18 4.69 -34.91 28.57 Output gap 1444 -0.25 3.82 -34.23 28.88 Log of consumption 1409 27.84 2.87 22.44 36.38 Inflation rate (%) 1438 6.16 16.76 -8.24 325.03 Reserves 1436 857*10 8 301*10 9 174*10 6 39*10 11 Credit ratings 1356 13.88 4.91 2.48 21.00 Trade openness (% of GDP) 1429 81.16 37.83 18.35 226.04 FDI inflow (% of GDP) 1435 5.19 14.89 -28.31 280.13 Gross capital formation (% of GDP) 1441 23.58 6.30 0.00 46.66 Population(t-1) 1444 713*10 5 202*10 6 274047 138*10 7 Political stability index 1444 0.06 0.96 -3.18 1.76 Governance effectiveness index 1444 0.47 0.92 -1.98 2.35 Current account (% of GDP) 1428 -0.85 7.55 -28.84 45.46 Table 2 2 Regression results of unbalanced panel data. Robust, country clustered standard errors in parentheses. The dependent variable is the yearly CDS spread measured in basis point. The key explanatory variables is temperature deviation from its long mean in 1900-1950. Temperature is measured in °C. .2: CDS spreads and Temperature (baseline, 1999-2017) Dependent variable: CDS spreads in basis point with maturity: 1-year 3-year 5-year 10-year Climate Temperature 11.68 15.61 * 22.71 * * 31.09 * * (7.21) (8.22) (10.52) (13.86) Controls variables Log of GDP 204.77 * * * 147.35 * * 76.06 48.42 (70.16) (65.42) (66.30) (74.24) Real GDP growth rate -13.31 * * * -14.21 * * * -14.91 * * * -16.83 * * * (4.91) (3.80) (3.93) (5.22) Inflation rate 13.07 * * * 15.00 * * * 16.59 * * * 17.87 * * * (2.38) (2.41) (3.28) (4.68) Public debt/GDP 1.13 1.87 * * 2.58 * * * 3.60 * * * (0.87) (0.81) (0.91) (1.03) Public debt/GDP squared -0.00 -0.00 -0.01 * * -0.01 * * * (0.00) (0.00) (0.00) (0.00) Primary balance/GDP -0.35 0.15 0.80 1.28 (1.29) (1.42) (1.97) (2.73) Reserves/GDP -0.00 -0.00 0.00 0.00 (0.00) (0.00) (0.00) (0.00) Credit ratings -42.53 * * * -49.03 * * * -53.91 * * * -56.21 * * * (5.12) (5.22) (5.83) (6.16) Constant -1607.14 * * * -1013.16 * -310.51 226.57 (594.95) (554.51) (557.03) (623.46) Observations 1116 1116 1116 1116 Countries 76 76 76 76 Country fixed effects Yes Yes Yes Yes Time fixed effects Yes Yes Yes Yes R 2 0.652 0.723 0.732 0.710 Adjusted R 2 0.617 0.696 0.705 0.681 AIC 14,408 14,296 14,462 14,854 BIC 14,895 14,782 14,949 15,336 Notes: * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Table 2 . 2 3: Five-year CDS spreads and Temperature (regions, 1999-2017) Regression results of unbalanced panel data. Robust, country clustered standard errors in parentheses. 
The dependent variable is the yearly CDS spread measured in basis point. The key explanatory variables are temperature deviation from its long mean in 1900-1950 and the interaction with region dummies. Temperature is measured in °C. Control variables used in Table2.2 are included but not reported. * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Dependent variable: 5-year CDS spreads in basis point (1) (2) (3) (4) (5) (6) (7) Climate Temperature 19.90 * 19.54 23.65 * * 23.50 * * 23.09 * * 22.37 * * 26.06 * * (10.98) (14.39) (10.36) (11.61) (11.29) (10.48) (11.36) Interaction terms East Asia & Pacific × Temp. 45.38 * * (19.19) Europe & Central Asia × Temp. 5.60 (20.74) Latin America & Caribbean × Temp. -12.87 (50.76) Middle East & North Africa × Temp. -5.09 (27.23) North America × Temp. -6.02 (18.58) South Asia × Temp. 32.30 (98.95) Sub-Saharan Africa × Temp. -46.88 * * (20.65) Observations 1116 1116 1116 1116 1116 1116 1116 Countries 76 76 76 76 76 76 76 Country fixed effects Yes Yes Yes Yes Yes Yes Yes Time fixed effects Yes Yes Yes Yes Yes Yes Yes Control and Constant Yes Yes Yes Yes Yes Yes Yes R 2 0.732 0.732 0.732 0.732 0.732 0.732 0.732 Adjusted R 2 0.705 0.704 0.704 0.704 0.704 0.704 0.705 Notes: Table 2 . 2 3 presents estimations results for the 5-year CDS spreads, which is the most maturity used in empirical studies on sovereign CDS spreads.10 We find a positive and significant impact of temperature for countries in the East Asia and Pacific region only: one degree Celsius increase in temperature (relative to the long run mean) leads to an increase of spreads by 45.38 basis points if the country is located in the East Asia and Pacific region. This result is in line with the fact that the East Asia and Pacific is one of the regions that are highly exposed to climate change and experienced the fastest temperature increases over the past few decades. Table 2 . 2 4: CDS estimation (5 years) -Temperature robustness (1) Dependent variable: 5-year CDS spreads in basis point (1) (2) (3) (4) Climate T i,t -T i,1900-1950 22.71 * * (10.52) T i,t -T region,1900-1950 22.71 * * (10.52) T i,t /sd i,1999-2017 12.90 * * (5.81) T i,t 22.71 * * (10.52) Observations 1116 1116 1116 1116 Countries 76 76 76 76 Country fixed effects Yes Yes Yes Yes Time fixed effects Yes Yes Yes Yes Control and Constant Yes Yes Yes Yes R 2 0.732 0.732 0.732 0.732 Adjusted R 2 0.705 0.705 0.705 0.705 AIC 14,460 14,462 14,462 14,459 BIC 14,942 14,949 14,949 14,940 Notes: Regression results of unbalanced panel data. Robust, country clustered standard errors in parentheses. The dependent variable is the yearly CDS spread measured in basis point. Temperature measures are in °C. Control variables used in Table 2 .2. * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Table 2 2 Regression results of unbalanced panel data. Robust, country clustered standard errors in parentheses. The dependent variable is the yearly CDS spread measured in basis point. Temperature measures are in °C. Control variables used in Table 2.2. * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. 
.5: CDS estimation (5 years) -Temperature robustness (2) Dependent variable: 5-year CDS spreads in basis point (1) (2) (3) (4) (5) (6) Climate Temperature (10-y mov av) 10.03 (8.70) Temperature (20-y mov av) 18.53 * (9.93) Temperature (30-y mov av) 20.71 * * (10.19) Temperature (40-y mov av) 21.93 * * (10.22) Temperature (50-y mov av) 21.23 * * (10.13) Temperature (Trend) 90.37 * * (38.68) Temperature (cycle) 12.80 (9.57) Observations 1116 1116 1116 1116 1116 1116 Countries 76 76 76 76 76 76 Country fixed effects Yes Yes Yes Yes Yes Yes Time fixed effects Yes Yes Yes Yes Yes Yes Control and Constant Yes Yes Yes Yes Yes Yes R 2 0.731 0.731 0.732 0.732 0.732 0.733 Adjusted R 2 0.704 0.704 0.705 0.705 0.705 0.706 AIC 14,464 14,464 14,465 14,462 14,461 14,457 BIC 14,946 14,950 14,957 14,949 14,942 14,944 Notes: Table 2 2 The table reports OLS estimates of equation (2.5). The dependent variable is the yearly CDS spread measured in basis point. Japan is excluded from this regression. Standard errors in parentheses. .6: Debt channel: First step regression Dependent variable: 5-year CDS spread (basis point) Public debt/GDP -0.87 (0.88) Public debt/GDP square 0.02 * * * (0.01) Constant 183.79 * * * (23.76) Observations 1097 Countries 75 R 2 0.041 Adjusted R 2 0.039 Notes: * * * p < 0.01. Source: Authors' estimates. Table 2 . 2 7: Debt channel: Second step regression The table reports estimation results of equations (2.8) and (2.9). Robust, country clustered standard errors in parentheses. The key explanatory variables is temperature deviation from its long mean in 1900-1950. Temperature is measured in °C. Japan is excluded from this regression. We do not use the debt-to-GDP ratio neither its squared value as controls to avoid reverse causality bias. * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Dependent variable: Predicted CDS spread CDS residual (ε i,t ) ( S i,t ) (1) (2) Climate Temperature -1.69 21.27 * * (1.21) (10.66) Controls Log of GDP -40.98 * * * 82.32 (9.38) (65.40) Real GDP growth rate -0.35 -15.23 * * * (0.34) (3.96) Inflation rate -0.27 * 17.23 * * * (0.14) (3.39) Primary balance/GDP 0.39 0.14 (0.24) (2.17) Reserves/GDP 0.00 * * * 0.00 (0.00) (0.00) Credit ratings -9.25 * * * -49.37 * * * (0.75) (5.28) Constant 607.82 * * * -477.94 (77.97) (538.78) Observations 1097 1097 Countries 75 75 Country fixed effects Yes Yes Time fixed effects Yes Yes R 2 0.897 0.713 Adjusted R 2 0.887 0.685 Notes: Table 2 . 2 8: GDP growth rate and Temperature(Arellano-Bond) Regression results of unbalanced panel data. Robust, standard errors in parentheses. The dependent variable is the yearly real growth rate of GDP in percentage point. Growth is measured as the first difference of log GDP. The key explanatory variable is the deviation of temperature from its long run mean in1900-1950. Temperature is measured in °C. 
Dependent variable: Real GDP growth rate ( %) (1) (2) (3) (4) Climate Temperature -0.488 * * * -0.489 * * * -0.475 * * * -0.437 * * (0.160) (0.164) (0.165) (0.180) Controls Growth rate (t-1) 0.174 * * 0.174 * * 0.169 * * 0.178 * * (0.068) (0.069) (0.070) (0.069) Growth rate (t-2) -0.011 -0.011 -0.011 (0.054) (0.054) (0.057) Growth rate (t-3) -0.003 0.007 (0.024) (0.025) Growth rate (t-4) -0.025 (0.031) Inflation -0.051 * * -0.051 * * -0.053 * * -0.050 * * (0.022) (0.022) (0.023) (0.021) Trade openness 0.033 * * 0.033 * * 0.033 * * 0.033 * * (0.013) (0.013) (0.014) (0.014) Foreign direct -0.003 -0.003 -0.003 -0.004 investment (0.013) (0.013) (0.012) (0.012) Gross capital 0.297 * * * 0.302 * * * 0.300 * * * 0.311 * * * formation (0.068) (0.077) (0.077) (0.085) Population (t-1) -0.000 -0.000 -0.000 -0.000 * (0.000) (0.000) (0.000) (0.000) Primary balance 0.226 * * * 0.227 * * * 0.230 * * * 0.217 * * * (0.055) (0.058) (0.058) (0.058) Political stability 0.187 0.192 0.187 0.013 (0.659) (0.662) (0.661) (0.682) Current account 0.030 0.030 0.032 0.040 (0.054) (0.055) (0.055) (0.059) Constant -6.289 * * * -6.299 * * * -6.169 * * * -5.823 * * * (1.974) (2.003) (2.029) (2.039) Observations 1379 1378 1369 1305 Countries 76 76 76 76 Notes: * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Table 2 2 Regression results of unbalanced panel data. Robust, country clustered standard errors in parentheses. The dependent variable is the primary surplus in percentage of GDP. The key explanatory variable is the deviation of temperature from its long run mean in1900-1950. Temperature is measured in °C. .9: Primary surplus and Temperature (1999-2017) Dependent variable: Primary surplus (% of GDP) (1) (2) (3) Climate Temperature -0.12 -0.12 -0.12 (0.20) (0.20) (0.20) Controls Debt-to-GDP (t-1) 0.05 * * * 0.06 * * * 0.08 * (0.01) (0.02) (0.04) Debt-to-GDP squared (t-1) -0.00 -0.00 (0.00) (0.00) Debt-to-GDP cubic (t-1) 0.00 (0.000) Output gap 0.19 * * * 0.20 * * * 0.20 * * * (0.04) (0.04) (0.04) Expenditure (t-1) -4.30 * * * -4.49 * * * -4.48 * * * (1.45) (1.48) (1.48) Inflation 0.01 0.01 0.00 (0.03) (0.03) (0.03) Governance effectiveness 1.47 * 1.58 * 1.54 * (0.81) (0.81) (0.83) Ratings 0.10 0.08 0.07 Trade openness 0.01 0.01 0.01 (0.01) (0.01) (0.01) Constant 118.44 * * * 123.55 * * * 123.51 * * * (42.62) (43.50) (43.54) Observations 1289 1289 1289 Countries 76 76 76 Country fixed effects Yes Yes Yes Time fixed effects Yes Yes Yes R 2 0.465 0.466 0.466 R 2 -adjusted 0.419 0.420 0.419 AIC 6,836 6,836 6,838 BIC 7,363 7,368 7,375 Notes: * p < 0.1, * * p < 0.05, * * * p < 0.01. Source: Authors' estimates. Let Ω t denote the maximum revenue that the Treasury can collect at date t to repay debt due at this date, B t-1 . It defines as the sum of the maximum primary surplus in period t and the maximum funding that the government can obtain in the same period by emitting new debt on financial markets.Default occurs at date t if and only if B t-1 > Ω t . In this regard, we can refer to Ω t as the period t default threshold, as in .5) 3.3.3.1 Default and debt recovery rule. This equation also makes it clear that the default ratio ω t depends on the prospect of future disasters. I turn to the investigation of this relationship. 3.4.2 Constant expectations of disaster risk. Let us first consider the situation where the probability of disaster is constant over time, that is p t = p ∀t. Under this assumption, equation (3.17) rewrites ω t = ŝ + βχ (p) • ω t+1 . 
(3.18) Iterating (3.18) forward, we obtain a stationary solution if βχ (p) < 1, and an infinite solution otherwise. The following proposition formalizes the existence of a unique stationary solution: .15) Let δ * t denote the value of δ t that solves (3.15). Note that δ * t depends on the expected probability of disaster, that is δ * t ≡ δ * (E t p t+1 ) . (3.16) Inserting (3.16) into (3.13), equation (3.14) becomes ω t = ŝ + βχ (E t p t+1 ) • ω t+1 . (3.17) Equation (3.17) is a forward looking dynamic equation. How much funding the government can obtain in the current period depends on how much it can obtain in the next period as the latter determines the opportunities for public funding. Proposition 6. Under Assumption 3, if p t = p, the function χ (p) satisfies βχ (p) < 1. The equilibrium default ratio defined by (3.18) is locally unique and equal to: Forward looking expectations of disaster risk. is a strictly decreasing function of the probability of disaster. Let bnaive t denote the maximum quantity of debt (-to-GDP) that can be emitted under naive expectations of disaster risk. From (3.21) and (3.23), recalling the definition in (3.11), it defines as bnaive t = δ * t ω naive t . (3.24) .20) 3.4.4 Let us now consider the case with forward looking expectations of disaster risk where I use the "tilde" notation to distinguish naive expectations from rational where, as in the naive case, the probability of disaster varies over time, but cred- expectations, denoted by E t . itors now anticipate the increasing trend in disaster probability, as specified in (3.2). (and time-varying) mean frequency of disasters, based on (3.2), while ignoring the time-varying nature of this mean. Inserting (3.20) into (3.15), equation (3.16) redefines δ * t ≡ δ (3.21) Then, using (3.13) and (3.21), equation (3.17) becomes ω t = ŝ + βχ (p t ) • ω t+1 . (3.22) Iterating on (3.22) as before, we obtain the following solution ω t = ŝ 1 -βχ (p t ) ≡ ω naive t . (3.23) As for the case of constant disaster probability in Section 3.4.2, the ratio ω naive t The probability of disaster p t in (3.20 ) can be interpreted as the frequency of disasters observed in period t. Given the relatively low number of disasters that can be observed in single period, an elaborate way to model p t would be to consider this later as a random variable. I will consider this possibility in Section 3.6. Here I simply assume that naive creditors set p t to the deterministic * (p t ) . Using (3.2), (3.13) , and (3.16), equation (3.17) now becomes ω t = ŝ + βχ (E t p t+1 ) • ω t+1 = ŝ + βχ (t + 1) • ω t+1 . (3.25) As it is clear from (3.25) , χ (•) is now a deterministic function of time. The definition of the disaster probability in (3.2) implies that there exists a finite date T < ∞ such that p t = 1 for any date t ≥ T . Using this property, we can solve equation (3.25 ) by backward induction, starting from date T . 16 Table 3 . 3 1: Baseline parameter values (annual basis). Emerging Advanced Notes : a : Annualized rate on 4-year-maturity German bonds (1980-2019) b : Diarra, Guillard and Kempf (2022), c : IMF(2018), d : Historical (1980-2019), e : Hsiang and Jina (2014) f : Estimates based on historical frequency of large climatic disasters, g: US debt duration (2010). , and in particular the term βχ (p) of this equation, which depends on the distribution of the growth rate of output (see equation 3.13). Consider a situation with a low value of the disaster probability p. 
Therefore, from equation (3.22), the default ratio is finite if only if the risk free rate and the (mean) growth rate are such that βχ (p) < 1, and infinite otherwise. However, since χ (p) is a decreasing function of p according to Proposition 7, for sufficiently high values of p a finite default ratio may exist even if β is close to one, i.e., if the risk free rate is close to zero and/or the growth rate is very high. The following proposition formalizes this finding: and (3.16), equation(3.17) becomesω t = ŝ + βχ E t p t+1 |z t • ω t+1 .(3.28)To solve (3.28), I consider the case where the sequence {z t } are i.i.d. random variables, 23 each following a Bernoulli distribution with the same unknown, random parameter 24 in(3.28), we obtainω t = ŝ + βχ (t, n; α, γ) • ω t+1 ,(3.31) where χ (•) is a decreasing function of n and an increasing of t. Iterating(3.31) forward, we obtain the equilibrium default ratio under Bayesian learning:From(3.16) and(3.32), recalling the definition in(3.11), we deduce the maximum quantity of debt-to-GDP under Bayesian learning: ω t = ŝ 1 -βχ (t, n; α, γ) ≡ ω bayesian t . (3.32) bbayesian t = δ * t ω bayesian t , (3.33) Table 3 . 3 2: Simulated maximum debt-to-GDP ratio, bt (%): Advanced economies.figures for Italy and Greece are 10% , and 7% of GDP, respectively. Overall, maximum debt ratios in advanced countries fall on average by 24% of GDP in 2019, 42% in 2030 (Column 4) and 56%in 2050 (Column 5) relative to the case with no disaster risk. At debt-to-GDP ratios observed in 2019, Italy and Greece will be in default by 2030, and Portugal will be on the verge of default by 2050.With forward looking expectations of disaster risk, the decreasing pattern in maximum debt ratios is even more accelerated. Maximum debt ratios in advanced countries now fall on average by 42% of GDP in 2019, 50% by the 2030 and 58% by 2050. Naive expectations Forward expectations b 2019 bnodisas b2019 b2030 b2050 b2019 b2030 b2050 Country (1) (2) (3) (4) (5) (6) (7) (8) Australia 47.47 489.80 357.07 260.47 188.18 257.27 215.60 175.51 Austria 70.51 220.87 189.17 158.09 128.05 164.12 144.19 123.05 Belgium 98.06 216.80 185.89 155.53 126.19 161.63 142.05 121.37 Table 3 . 3 4: Mean and volatility of the growth rate(1980-2019, %) Advanced countries µ σ Emerging countries µ σ Australia 3.06 1.44 Brazil 2.35 3.23 Austria 1.96 1.45 Chile 4.13 3.93 Belgium 1.90 1.40 China 8.99 2.50 Czech Republic 1.99 3.77 Colombia 3.39 2.03 France 1.78 1.32 Hungary 2.23 2.75 Germany 1.68 1.89 Malaysia 5.59 3.41 Greece 0.82 3.49 Mexico 2.42 3.20 Hong Kong 4.37 3.67 Pakistan 4.70 2.00 Iceland 3.43 3.48 Philippines 3.80 3.31 Israel 3.48 1.74 Poland 3.69 2.62 Italy 1.19 1.82 Russia 0.66 6.44 Latvia 3.82 5.62 South Africa 2.18 2.21 Lithuania 4.11 5.08 Sample 3.68 3.14 Luxembourg 3.74 3.01 Netherlands 2.06 1.77 New Zealand 2.57 1.81 Portugal 1.97 2.55 Slovak Republic 3.82 3.00 Spain 2.24 2.11 Switzerland 1.78 1.54 United Kingdom 2.11 1.89 United States 2.59 1.79 Sample 2.57 2.53 This is the case, for example, of Lebanon, Sri Lanka, Suriname and Zambia. See Sturzenegger and Zettelmeyer (2008),[START_REF] Cruces | Sovereign defaults: The price of haircuts[END_REF], and Arellano, Mateos-Planas and Rios-Rull (2019). This refers to what is commonly known as an "haircut". The haircut rate is equal to one minus the fraction of debt-to-GDP recovered by creditors following a sovereign default. The seminal paper on debt intolerance is[START_REF] Reinhart | Debt intolerance[END_REF]. 
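As an aside on the Bayesian-learning case of equations (3.28)-(3.32) above, where the disaster indicator z t is a Bernoulli draw with an unknown parameter: the sketch below assumes a Beta(α, γ) prior, the standard conjugate choice, which may differ in detail from the specification behind χ (t, n; α, γ); it returns the posterior mean that creditors would use for E t [p t+1 | z t ]. The function name and the numbers are illustrative only.

def posterior_disaster_probability(n_disasters, n_periods, alpha, gamma):
    """Posterior mean of the disaster probability under a Beta(alpha, gamma) prior.

    After observing n_disasters climatic disasters over n_periods i.i.d. Bernoulli
    periods, the posterior is Beta(alpha + n_disasters, gamma + n_periods - n_disasters),
    whose mean is returned here.
    """
    return (alpha + n_disasters) / (alpha + gamma + n_periods)

# Example: a diffuse Beta(1, 1) prior updated after 3 disasters observed in 40 periods.
print(posterior_disaster_probability(n_disasters=3, n_periods=40, alpha=1.0, gamma=1.0))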
On the assumption of exclusion from financial markets,[START_REF] Gelos | Sovereign borrowing by developing countries: What determines market access?[END_REF] See alsoSunder-Plassmann (2018),[START_REF] Asonuma | Sovereign Debt Restructurings: Delays in Renegotiations and Risk Averse Creditors[END_REF],[START_REF] Dvorkin | Sovereign Debt Restructurings[END_REF] and[START_REF] Amador | Reputation and Partial Default[END_REF]. See Grossman and Huyck (1988), p.1088. Note that sovereign "excusable defaults" are different from "rollover crises" à la[START_REF] Cole | Self-fulfilling debt crises[END_REF], which are driven by sunspot shocks. g refers to the real growth rate of GDP, and r is the real risk-free interest rate. We will often refer to a t simply as the growth rate and be more precise when necessary. Note that, although we use bold notation, h is a scalar parameter not a vector. See the empirical studies mentioned in Section 1.2. Formally, this leads in particular to neglecting the probability of a shock favorable enough to exit from this regime. Treating this hypothesis more rigorously would require restricting the distribution support of shocks, which would considerably and unnecessarily complicate the analysis (see[START_REF] Guillard | Public Debt Sustainability and Defaults[END_REF]. Note that, since both v max t and ω def t+1 are expressed in terms of output, the discount rate used is the risk-free real interest rate net of the expected growth rate of output. [START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF] develop the same argument and give other reasons justifying the discarding of the "unstable" equilibrium. What CHR calls, respectively, the maximum sustainable debt (MSD) and the maximum sustainable borrowing (MSB). We prefer to keep the term "sustainable" for another use, proposed in the next section. That is, using the definition of[START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF]: "the maximum debt level at which the government can rollover its maturing debt and finance the primary deficit at a finite interest rate". We use the same calibration described in the footnote 16. We limit the scale of the axes for ease of display. This is no longer possible in our economy, under the assumption of a constrained fiscal regime. Symmetrically we could said that a public debt is "sustainable" at date t when its trajectory does not reach the default ratio at any future date, assuming that there is no realization of the (gross) rate of output growth a t+s lower than ā. It is a very weak definition of sustainability given the very optimistic nature of the considered scenario. We discard this view. We limit our analysis to countries with at least ten consecutive years of observations. We use the IMF's World Economic Outlook definition to classify countries between emerging and advanced groups. Table 1.6 presents the definition of the variables, and data sources. Tables 1.7 to 1.11 presents descriptive statistics of the data. Calibration results that we shall report below are similar when we use the US Government rate as the risk-free rate. We prefer the German rate as it appears to be a fairly better benchmark over the past few decades than the US rate (see also[START_REF] Mitchener | Sovereign Debt in the 21st Century: Looking Backward, Looking Forward[END_REF]. Although the length of one period in the model is 4 years, calibration results reported in the next section are on annual basis. 
Parameter values in Tables 1.1 are also on annual basis. To obtain parameter values on a 4-year period basis, we follow[START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF]. The authors assume that the GDP observed in each year corresponds to one fourth of the next (rolling) 4 years. The corresponding growth rate, the mean and the volatility of this growth rate are then computed. [START_REF] Arellano | Default and the Maturity Structure in Sovereign Bonds[END_REF] use a similar method to estimate the parameters of the yield curve of long-term Government debt for four emerging countries. The large variation of the debt limit across countries reflects differences in theirs economic fundamentals, in particular the mean growth rate which is positively related to the debt limit. For instance, over the period 1980-2018, Greece presents an annual growth rate of 0.7% on average while that of Singapore is 8 times larger (6.21% on average). See Table1.7 in Appendix 1.8.2 for the mean growth of the list of advanced countries. For the considered countries, the growth rate is higher than the risk-free interest rate and the solvency ratio is infinite. Here g refers to the (net) real growth rate of GDP, and r is the real risk-free rate as before. The considered time period(2009)(2010)(2011)(2012)(2013)(2014)(2015)(2016)(2017)(2018) and the small sample size of countries do not allow for a new statistically estimate of the parameter h. See Stern (2007), IPCC (2007; 2012; 2021; 2022). 2 See https://www.ipcc.ch/report/ar6/wg2/ 3 Green gas emissions have been identified and considered in the scientific community as the main drivers of anthropogenic climate change. SeeIPCC (2022) A few exception are[START_REF] Kling | Climate Vulnerability and the Cost of Debt[END_REF],[START_REF] Agarwala | Rising Temperatures, Falling Ratings: The-Effect of Climate Change on Sovereign Creditworthiness[END_REF] and[START_REF] Cevik | This changes everything: Climate shocks and sovereign bonds[END_REF]. The literature distinguishes between physical risk of climate change, i.e the actual impact of the climate on the economy, and the transition risk, i.e the cost of transition to a green economy. We focus on the first one. [START_REF] Dell | What Do We Learn from the Weather? The New Climate-Economy Literature[END_REF],[START_REF] Hsiang | Climate Econometrics[END_REF] and[START_REF] Kolstad | Estimating the Economic Impacts of Climate Change Using Weather Observations[END_REF] We are currently working on a new version of the paper that uses monthly CDS spreads as a robustness check of our findings. See, e.g., Bank for International Settlements (2010). One may argue that it is possible to compute the spread for a given maturity using the yield curve. However, this is not feasible for many (developing) countries where there is a paucity of data on yield at some maturities to construct a reliable yield curve. See, for example,[START_REF] Eichengreen | What Explains Changing Spreads on Emerging-Market Debt: Fundamentals or Market Sentiment[END_REF] and[START_REF] Cruces | Sovereign defaults: The price of haircuts[END_REF]. We find similar results for the other maturities but do not report them for parsimony. For the decomposition into trend and cycle, we use the standard Hodrick-Prescott filter. We set the smoothing parameter λ = 100. 
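For that trend/cycle robustness check, a minimal sketch of the Hodrick-Prescott decomposition of a country's yearly temperature series with λ = 100 might look as follows, using statsmodels' hpfilter; the series shown is an illustrative placeholder, not the paper's data.

import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Yearly mean temperatures for one country, indexed by year (toy input).
temperature = pd.Series(
    [12.1, 12.3, 11.9, 12.6, 12.8, 13.0, 12.7, 13.2],
    index=range(2010, 2018),
)

# lamb=100 is the smoothing parameter quoted in the footnote for annual data.
cycle, trend = hpfilter(temperature, lamb=100)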
One exception is[START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF], who develops a framework that generates a time-varying maximum primary surplus from endogenous Laffer-curves. 13 This is in line with the positive link between the debt limit, the maximum primary surplus and output growth present in the papers mentioned previously.14 Note that we use +/-on the lower branch of the figure to indicate the fact that, à priori, temperature can have both negative and positive effects on growth and the maximum primary See DeGrauwe and Ji (2012). We exclude the debt-to-GDP ratio from the vector of controls to avoid reverse causality bias in Table2.7 Column 1. We also do not include it in Column 2 for homogeneity purposes. See Burke, Hsiang and Miguel (2015) for the possibility of positive and negative effects of temperature on output. Results available upon request. This is one reason why the debt limit is not observed in the data. However, even if one had empirical data on the maximum primary surplus, the debt limit remains an unobserved theoretical object. SeeDiarra, Guillard and Kempf, In an extension, I also consider a situation where there is uncertainty about the disaster probability and creditors engage in Bayesian learning. The empirical literature on this topic is also limited.[START_REF] Klomp | Sovereign Risk and Natural Disasters in Emerging Markets[END_REF] is among the rare papers in this literature. Important references in this line are[START_REF] Bi | Sovereign default risk premia, fiscal limits, and fiscal policy[END_REF],[START_REF] Ghosh | Fiscal fatigue, fiscal space and debt sustainability in advanced economies[END_REF],[START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF],[START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF][START_REF] Lorenzoni | Slow Moving Debt Crises[END_REF][START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF][START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF]. [START_REF] Barro | Rare Disasters and Asset Markets in the Twentieth Century[END_REF] also considers an i.i.d growth rate with constant disaster severity. In an earlier version of the paper, I consider the case where u is a random variable and the findings are qualitatively similar. Natural candidates for this condition are probability distributions in the exponential family, such as the exponential distribution with density g (a) = λe -λa . As we will see in Section 3.8.2, the widely used log-normal distribution also satisfies this condition under some parameter choices. The assumption of risk-neutral creditors is standard in theoretical models of sovereign default. One could of course assume risk-averse creditors. This would lower the discount factor [START_REF] Collard | Sovereign debt sustainability in advanced economies[END_REF] and[START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF] obtain a similar result for the default ratio, but they abstract from disaster risk, that is p = 0 in their framework. I generalize their findings to the case with disaster risk. One could also iterate and solve(3.25) forward. This alternative approach would lead to the same path for ω t , but it is slightly more complicated than the backward induction method pursued here. 
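As an illustration of that backward-induction step for equation (3.25), here is a minimal sketch. It treats χ(·) as a user-supplied function of the disaster probability, since the exact expression for χ depends on the growth distribution and on δ* (equations (3.13)-(3.16)), which are only partially reproduced here; all names and numbers below are illustrative, not the chapter's calibration.

import numpy as np

def default_ratios_backward(chi, p_of_t, s_max, beta, T):
    """Solve omega_t = s_max + beta * chi(p_{t+1}) * omega_{t+1} backward from date T.

    p_of_t(t) returns the disaster probability at date t (p_of_t(t) == 1.0 for t >= T,
    as implied by the linear trend in (3.2)); chi(p) is the model's discounting factor.
    The terminal value is the stationary solution at p = 1, requiring beta * chi(1) < 1.
    """
    omega = np.empty(T + 1)
    omega[T] = s_max / (1.0 - beta * chi(1.0))
    for t in range(T - 1, -1, -1):
        omega[t] = s_max + beta * chi(p_of_t(t + 1)) * omega[t + 1]
    return omega

# Illustrative use: linear probability trend p_t = a0 + a1 * t and a toy chi.
a0, a1 = 0.035, 0.002
p_of_t = lambda t: min(a0 + a1 * t, 1.0)
chi = lambda p: 1.02 * (1.0 - 0.05 * p)   # placeholder, not the model's chi
omega_path = default_ratios_backward(chi, p_of_t, s_max=0.05, beta=1 / 1.029, T=483)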
https://www.emdat.be/ Although the length of one period in the model is 4 years, I will report all results of the numerical exercises on annual basis. See, for example,[START_REF] Hsiang | Climate Econometrics[END_REF][START_REF] Hsiang | The Causal Effect of Environmental Catastrophe on Long-Run Economic Growth: Evidence From 6,700 Cyclones[END_REF][START_REF] Felbermayr | Naturally negative: The growth effects of natural disasters[END_REF][START_REF] Felbermayr | Naturally negative: The growth effects of natural disasters[END_REF]. The fiscal space given by the difference ( bnodisasb 2019 ) indicates that Greece should be in default in 2019, while Italy is on the verge of default. This situation is not very surprising given the recent debt sustainability issues experienced by these countries. This is because, in this case, the growth rate is higher than the risk-free rate and thus the default ratio, as well as the maximum debt ratio bt , tend to infinity Acknowledgments 1.8.2 Data and supplementary results. where T i,t is the mean temperature in year t and T i, is the long run mean of temperature for country i computed over the period 1900-1950. The computation choice in (2.1) follows [START_REF] Stock | Climate Change, Climate Policy, and Economic Growth[END_REF] and [START_REF] Kahn | Long-term macroeconomic effects of climate change: A cross-country analysis[END_REF]. The authors suggest that the deviation of temperature from the long run mean is a more adequate measure when analyzing the economic impacts of climate change than the standard mean temperature since the latter presents a strong upward trend that may potentially lead to spurious regression. In our robustness analyses in Section 2.4, we also consider five alternative measures to the one defined in equation (2.1). Table 2.1 summarizes descriptive statistics of the baseline temperature deviation T i,t , as defined in (2.1), as well as for the alternative measures. Macroeconomic variables To explore the key mechanisms underlying our estimation results in Section 2.4, we use data on country GDP, debt-to-GDP, and primary surplus. We also use others macroeconomic and political variables as control variables in our estimations. Estimation and mechanisms In this section, we assess the link between temperature anomalies and sovereign default risk. Section 2.4.1 studies the direct impacts of temperature anomalies on sovereign CDS spreads. Section 2.4.3 investigates the key transmission channels of these effects. Let β ≡ (1 + r) -1 be the discount factor of the economy. Assuming perfect international diversification of risks and rational expectations, the sovereign bond price at date t, denoted by q t , writes as Appendix: Data where h t+1 is the fraction of the end-of-period value that will be repaid in a given state of nature in period t + 1, with h t+1 = 1 if there is no default, and h t+1 < 1 in case of default. Government. The government has access to international financial markets where it can issue one-period maturity debt of facial value B t , which is reimbursed at date t + 1. Let S t denote the government's maximum primary surplus. It is assumed to evolve proportionally to output: S t = ŝY t , where ŝ is the maximum surplus-to-output ratio. 13 Note however that even though ŝ is constant the primary surplus in level S t is not, as this latter depends on output. Since output is affected by climatic disasters, so is S t . 14 The instantaneous government budget constraint writes: . 
This parameter takes the value of 1 if there is no default in t and a lower value, given by a debt recovery rule, when the government is unable to β, and thus amplify the expectations effects of disaster risk. Therefore, the results uncovered in the risk-neutral framework considered here can be seen as a lower bound with respect to a setting with risk-averse creditors. 13 he constant character of ŝ is for simplicity only. 14 One may wonder about the role of fiscal policy, in particular for the post-disaster recovery. I do not explore this mechanism here but instead focus on the expectations channel of disasters risk. Note however that since ŝ is, by definition, the maximum primary surplus that can be reached and this maximum is unlikely to increase following a disaster shock, my results can be seen as a lower bound with respect to the case where fiscal policy plays a role. An issue with EM-DAT is that the majority of recorded events cannot be considered as "disasters" as they have negligible economic impacts. A standard approach in the empirical literature on the impacts of natural disasters on economic activity is to set a threshold of monetary damage or a number of deaths above which an event is classified as a disaster. Here I consider a threshold of 95th percentile and classify as a disaster any climatic event that causes a monetary damage or a number of deaths (per population) that is above this threshold. That is, an event is defined as a disaster if it is among the top 5% extremes. The blue curve in Figure 3.1 (left axis) presents the yearly number of disasters observed according to this definition. Next, I compute the probability of disaster based on the number of disasters observed in each year. Specifically, for a given year, I compute the probability of disaster as the fraction of countries that experience a disaster in that year. The dashed orange curve in Figure 3.1 plots the obtained disaster probabilities. Finally, I estimate the parameters α 0 and α 1 by fitting a linear trend to probabilities of disaster obtained in the previous step. The associated estimates of α 0 and α 1 are 0.035 and 0.002, respectively. Note that these estimates are common to all countries. Given the relatively small frequency of disaster events, reliable country estimations of the parameters α 0 and α 1 cannot be conducted. Calibration. To illustrate the role of disaster expectations, I calibrate the model using data on advanced economies that have experienced climatic disasters in 1960-2019. 18 Appendix 3.8.2 extends the analysis to emerging countries and simulates country specific debt ratios according to the naive and forward looking expectations of disasters. I consider a log-normal distribution for the (potential) growth rate a t : lna ∼ N µ, σ 2 . 18 See Table 3.4 in Appendix 3.8.2 for the full list of advanced countries considered. country moves from an environment with low disaster risk to one with high disaster risk it will face a sharp fall in the maximum debt-to-GDP ratio that can be sustained sustain without defaulting. The model emphasizes the crucial role of creditors' expectations about disaster risk. Different expectations of disaster risk have very different implications for debt sustainability and sovereign default risk. 
Calibration and simulation exercises based on the historical frequency of climatic disasters show that a gradual increase in the probability of disasters over the coming years, if anticipated by creditors, can contribute to the reemergence of sovereign defaults, even when the risk free rate is lower than the growth rate of output. I show that this result also applies when there is uncertainty about the probability of disasters and creditors learn this probability through Bayesian learning. From a policy perspective, my findings call for more attention and caution about public debt sustainability in the context of climate change and the related increasing probability of extreme events.

Appendix: Proof of Proposition 2. Proposition 2: Under Assumption 3, with p t = p, the equilibrium default ratio is locally unique and equal to ω * p = ŝ / (1 - βχ(p)). The ratio ω * p is strictly increasing in ŝ and strictly decreasing in p. Proof. First, assume that βχ (p) < 1. We will see below that this inequality is verified under Assumption 3. Then, iterating (3.17) forward yields equation (3.8.4). Using this definition, we have (3.8.5), where the last inequality is implied by Assumption 3.2. From (3.8.5), and recalling u < 1, we obtain an inequality which is a sufficient condition to have ∂χ(p)/∂p < 0.

Country analysis. This section further investigates the link between the default ratio and creditors' expectations about disaster risk by resorting to country calibration and simulations. I calibrate and simulate the model for a sample of 12 emerging and 22 advanced countries that have experienced at least one climatic disaster in 1960-2019. 30 The table presents parameter values for the risk-free rate r, the debt recovery parameter h, the maximum primary surplus ŝ, and the disaster parameters u, α 0 and α 1 . The mean µ and volatility σ of the (log) gross growth rate are set to their sample values in 1980-2019. 30 I follow the definition of the IMF to classify the countries in the Advanced and Emerging groups. In line with the assumption of access to international financial markets in the model, and following [START_REF] Diarra | Sovereign Defaults and Debt Sustainability: the Debt Recovery Channel[END_REF], I require the selected countries to have at least ten consecutive years of observations of sovereign bond yields. Sovereign yields used for this selection are from Reuters.
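To make the country exercise concrete, the sketch below reproduces, under simplifying assumptions, two computational steps described above: estimating the linear trend of the disaster probability from yearly disaster frequencies, and evaluating the stationary default ratio ω * p = ŝ / (1 - βχ(p)) of Proposition 2 for a given value of χ(p). Function names and inputs are illustrative, not taken from the paper's code.

import numpy as np

def disaster_probabilities(yearly_disaster_counts, n_countries):
    """Yearly disaster probability = share of countries hit by a large climatic event."""
    return np.asarray(yearly_disaster_counts, dtype=float) / n_countries

def fit_probability_trend(years, probabilities):
    """Fit p_t = alpha0 + alpha1 * t by least squares (the text reports roughly 0.035 and 0.002)."""
    years = np.asarray(years, dtype=float)
    alpha1, alpha0 = np.polyfit(years - years[0], probabilities, 1)
    return alpha0, alpha1

def stationary_default_ratio(s_max, beta, chi_p):
    """omega*_p = s_max / (1 - beta * chi(p)); finite only when beta * chi(p) < 1."""
    if beta * chi_p >= 1.0:
        return np.inf
    return s_max / (1.0 - beta * chi_p)

# Toy example: 34 countries, rising disaster counts over six years.
probs = disaster_probabilities([1, 1, 2, 2, 3, 3], n_countries=34)
alpha0, alpha1 = fit_probability_trend(range(1960, 1966), probs)
omega_star = stationary_default_ratio(s_max=0.05, beta=1 / 1.029, chi_p=1.0)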
04098948
en
[ "info.info-oh" ]
2024/03/04 16:41:20
2022
https://theses.hal.science/tel-04098948/file/2022UCFAC079_MBOUOPDA.pdf
Keywords: Time series, classification, uncertainty, explainability, shapelet, astrophysics, transient.

Time series classification is one of the most studied and applied time series analysis tasks. Several methods have been proposed to perform this task accurately, efficiently and sometimes in an explainable way. However, situations where the time series are made of uncertain values are still under-explored, although any physical measurement is subject to uncertainty. The existing works in this field are based on uncertain similarity measures such as DUST, MUNICH, and FOTS, which have the same main limitation of not propagating uncertainty to the next step of the classification process. This behavior causes the last parts of the process to treat the data as if they were certain while they are not, leading to untrustworthy predictions. This thesis tackles this limitation by proposing efficient, robust and explainable methods for uncertain time series classification (uTSC). We start by proposing a general framework for uncertain time series classification which propagates uncertainty from the beginning to the end of the process. Then, we instantiate this framework using uncertainty propagation arithmetic to propose the UST model, which outperformed existing uTSC models while being explainable. We continue by improving the scalability of UST by proposing the SAST and uSAST models. SAST is a novel accurate, scalable and interpretable method that we propose for time series classification. uSAST is the extension of SAST to uTSC. We show the effectiveness of our methods on simulated datasets, on state-of-the-art datasets, and on a real-world uncertain time series dataset from the astrophysics domain. The source codes and the data used in this work are all publicly available.

Résumé

Time series classification is one of the most studied and applied time series analysis tasks. Several accurate and sometimes interpretable methods have been proposed to perform this task. However, the cases where the time series are made of uncertain values remain under-explored, even though any physical measurement is subject to uncertainty. Existing works in this field are based on uncertain similarity measures such as DUST, MUNICH and FOTS, which share the main limitation of not propagating uncertainty to the next step of the classification. Consequently, the last steps of the classification process are not aware of the fact that the data are uncertain and therefore treat them as if they were certain, leading to unreliable predictions. The goal of this thesis is to address this limitation by proposing efficient, robust and interpretable methods for uncertain time series classification. We start by proposing a general framework for uncertain time series classification that propagates uncertainty from the beginning to the end of the process. We then instantiate this framework using uncertainty propagation arithmetic to propose the UST method, which gave better results than those given by existing uncertain time series classification methods while being interpretable. Afterwards, we improve the computation time required by UST by proposing the SAST and uSAST methods.
SAST is a new accurate, fast and interpretable approach that we propose for time series classification, and uSAST is its extension to uncertain time series. We evaluate our methods on simulated datasets, on state-of-the-art datasets and on a real dataset from the astrophysics domain. The source codes and the data used in this work are made available on the internet.

Mots clés: Time series, classification, uncertainty, interpretability, shapelet, astrophysics, transient object.

Introduction

This chapter gives the motivation of this thesis and summarizes its main contributions. Whether it is in the field of transportation [Lopez Conde & Twinn 2019, Zheng et al. 2021], medicine [START_REF] May | Eight ways machine learning is assisting medicine[END_REF][START_REF] Miller | Radar-based monitoring system for medication tampering using data augmentation and multivariate time series classification[END_REF], industry or physics [Boone 2019, Leoni et al. 2021], ML is used not only to assist the corresponding domain experts, but also to improve how the task is done and the service quality. ML is used to make predictions and to discover new knowledge from raw data. This is made possible by the data collection capabilities that are available today and the proliferation of powerful techniques to analyze different types of data, including tabular, image, video, audio and time series data. Most of the time however, and specifically for time series, the data are required to be precise for ML algorithms to perform well. This is not the case in every application, as the data are sometimes uncertain because of many factors including noise, sensor precision, privacy preservation, and collection methods [Mazzi et al. 2019[START_REF] Abdar | A review of uncertainty quantification in deep learning: Techniques, applications and challenges[END_REF]]. Being able to analyze uncertain data is at least as important as being able to analyze certain data. The goal of this thesis is to build ML methods for the analysis of uncertain time series data. Although there are many types of uncertainty, in this work we focus on imprecision, a special type of uncertainty. There are three key properties that we would like our methods to have, namely:

Efficiency: the methods should produce accurate results, use as few resources as possible, and be competitive with state-of-the-art methods.

Robustness: the methods should be resilient to the variation of data uncertainty.

Explainability: the methods should be inherently explainable or explainable by other means.

Let us discuss these three properties in detail and understand why they are important to have. Efficiency is the ability of a machine learning model to achieve good performance while being trained using a small quantity of memory and computation time [START_REF] Hernandez & Brown | [END_REF]. The magnitude of this "small" quantity is application dependent and may vary a lot. For instance, a model that detects anomalous heartbeats should report anomalies as soon as they appear and not many hours after the patient cannot be saved anymore [Lu et al. 2022]. On the contrary, a model trained to detect plagiarism could take three hours to run. For Internet of Things systems, which are governed by limited memory and computation power, efficiency is a must [START_REF] Sliwa | LIM-ITS: Lightweight machine learning for IoT systems with resource limitations[END_REF].
In any case, it is always good to have models that are efficient, as inefficient ones have a higher carbon footprint [START_REF] Patterson | [END_REF]. The robustness property is another important qualifier of machine learning models and reflects their capability of achieving similar performance on both training and new data. This property is even more important nowadays, as it has been proved that machine learning models can be fooled by malicious individuals (adversaries) and by noisy and uncertain data [Fawaz et al. 2019a, Yang et al. 2020]. In this work, the importance of this property is emphasized by the fact that we are dealing with uncertain data. The explainability of a machine learning model is the ability to explain its decisions, to describe its weaknesses and strengths, and to convey an understanding of how it will behave in the future [DARPA]. Beyond understanding the model and increasing its faithfulness and adoption by humans, explainability helps in debugging machine learning models by revealing the features used by the model to make its predictions [START_REF] Ribeiro | [END_REF]. In the case where the wrong features are used, the model can be modified correspondingly. With these three characteristics, we would like to ensure, on the one hand, that the proposed methods work correctly and do what they have been built for using an acceptable amount of resources. On the other hand, we would like the proposed methods to be adopted with confidence by domain experts as well as any end users.

0.2 Uncertainty, not a bad thing!

Uncertainty is ubiquitous in real life; the majority of the decisions we take every day are based on uncertain knowledge. For instance, we choose how we dress based on an uncertain and changing weather forecast [START_REF] Slingo | Uncertainty in weather and climate prediction[END_REF]; we plan our future without having the certainty that we are going to live up to that future; we learn a lot of things at school hoping that they will be useful someday in some way [Kauman et al. 2022]. There are many such examples. Similarly, any collected data is associated with an uncertainty coming from the precision of the sensor used, the environmental conditions of the measurement, the source of the data, and other application-dependent constraints. It is sometimes possible to reduce the uncertainty, but it cannot be completely suppressed [Taylor 1996]. Therefore, uncertainty cannot be avoided and needs to be properly taken into account in machine learning algorithms, as learning from uncertain data may lead to uncertain and inaccurate predictions. Uncertainty is usually seen as a problem, a hindrance to learning from data. This is why it is usually handled during the preprocessing step, during which some assumptions are made in order to get rid of uncertainty. The preprocessing of uncertainty requires domain knowledge, limiting the usability of this approach to uncertainty handling. Furthermore, assumptions made to get rid of uncertainty actually add more uncertainty to the process, as they are based on an incomplete knowledge of the system that generated the data. Uncertainty comes with challenges for decision-making systems [START_REF] Stanton | [END_REF], but it also brings some advantages.
In particular, uncertain data can be used to model situations about which we have incomplete knowledge and over which we do not have total control, like modeling climate change; they are more expressive and less prone to assumptions that may not hold in practice. Learning from uncertain data has gained a lot of interest recently. For instance, there is a community working on imprecise probabilities (SIPTA [SIPTA]), and there are international workshops organized around uncertain machine learning (Workshop on Uncertain Machine Learning [WUML 2020], Online Learning from Uncertain Data Streams [START_REF] Olud | OLUD. Workshop on Online Learning from Uncertain Data Streams[END_REF]). For these communities, uncertainty is not a problem, but an additional input that should be taken into account in order to build more robust and trustworthy machine learning systems. These systems are expected to be aware of uncertainty, should be at least as effective as if there were no uncertainty, and should be usable without requiring domain knowledge. This thesis focuses on the classification of uncertain time series. Unlike regular/certain time series, which are generated from a stochastic process assumed to be completely known, uncertain time series are generated by processes that are only partially known. Some authors have worked on uncertain time series classification (uTSC) and have proposed different methods to perform this task. The main component of all these works is a similarity measure for uncertain data. These measures are named uncertain similarity measures [Dallachiesa et al. 2012]. They take as input two uncertain time series, and output a real number representing the similarity between the two objects. We claim in this work that modeling the similarity with a real number is enough for certain data, but is not sufficient for uncertain data. In fact, as the compared objects are uncertain, their similarity should also be uncertain. Existing works have never discussed how uncertain the similarity computed by their uncertain measures is, how this uncertainty might influence the final prediction, and even less how this uncertainty could be computed. In this work, we address these limitations.

Main contributions

We mentioned in the previous section that existing uncertain time series classification methods share the same limitation of not providing the similarity uncertainty. In addition to this limitation, we identified three other gaps in the state of the art of uncertain time series classification, namely the absence of published applications on real uncertain datasets and the difficulty of reproducing existing works. The four main contributions that we made throughout this thesis are guided by these limitations and the key properties presented in Section 0.1. These contributions are described in the following subsections:

Contribution 1: Uncertain shapelet transform

We first observed that with the existing uTSC approaches, uncertainty is not handled throughout the whole classification process. This is because the proposed uncertain similarities give the similarity without any information about the uncertainty on that similarity. This behavior is firstly not natural, as the compared objects are uncertain. Secondly, it is misleading, as the end user may think that the similarity between two uncertain objects is certain even though this is not guaranteed. We tackled this limitation by proposing an explainable and accurate method for uTSC named UST, for Uncertain Shapelet Transform. UST is described in detail in Chapter 2.
Contribution 2: Scalable subsequence transform

Shapelet approaches are known to be accurate and interpretable. However, they are computationally expensive. In this thesis, we proposed a new design of shapelet-based TSC, allowing us to significantly improve the scalability of shapelet approaches while slightly increasing the classification performance. Our method is named SAST, for Scalable and Accurate Subsequence Transform, and it is presented in Chapter 3. In Chapter 4, we extend SAST to uncertain time series classification.

Contribution 3: Real-world application

The authors of existing uTSC methods have limited their experiments to datasets with simulated uncertainties. This observation may question the usability of these methods in the real world. In this thesis, we applied our method to a real uncertain time series dataset. As described in Chapter 4, our method achieves good classification performance while being interpretable by astrophysicists.

Contribution 4: Reproducibility

The last contribution of this thesis is that all the datasets used, including the datasets with the simulated uncertainties that we created, are publicly available. Moreover, the source code of our experiments is accessible on public repositories. We wanted to make this work easily reproducible by anyone in order to facilitate subsequent contributions to the field of uncertain time series classification in particular, and to the field of uncertain time series analysis in general.

Report structure

In the previous section, we gave our main contributions while specifying in which part of this report each contribution is detailed; however, we find it clearer and more informative to present the organization of this report in a dedicated section. We organized this report in 6 chapters: Chapter 0, this one, is actually the first and gives this work's context, its motivations and goals, summarizes the main contributions and presents how this report is organized. A detailed background and related works of time series classification in the absence and presence of uncertainty is given in Chapter 1. For readers who are not familiar with time series classification or with uncertainty, we strongly suggest reading this chapter before the following ones, as subsequent chapters use concepts described in Chapter 1. Next comes Chapter 2, which describes our first main contribution, UST. The second main contribution, SAST, is described in Chapter 3 as our proposed solution to improve UST's time complexity. Chapter 4 extends SAST to uncertain time series and details its performance on a real uncertain time series dataset. Finally, a general conclusion and some possible future directions of this work are given in Chapter 5.

Chapter 1 Background and related work

In this chapter, we give the in-depth background required for understanding this work.

What is a time series

A time series is a type of data that models the evolution of a phenomenon through time. Differently said, a time series is used to see how an object changes with time. A time series is formally defined as follows:

Definition 1.1 (Time series). A time series (TS) is a finite sequence of objects ordered in time.

$T = (t_{d_1}, t_{d_2}, \dots, t_{d_m}), \quad \forall j \in [1, m],\; d_j \in D,\; t_{d_j} \in \Omega,\; m \in \mathbb{N},\; m \ge 1 \qquad (1.1)$

In the previous definition, D is a totally ordered set, and for any pair of integers $j_1$ and $j_2$ such that $j_1 < j_2$, we have $d_{j_1} < d_{j_2}$. The set Ω is the domain of the objects whose evolution in time is tracked.
The objects in Ω are generally of the same nature; for instance, it could be a set of numbers, images, videos, audio, text, etc. Finally, m is the length of the time series. When Ω is an ordered set, the time series can be represented as a line plot in a two-dimensional space where the x-axis is labeled by the sorted objects from the set D and the y-axis is labeled by objects from Ω. In this thesis, we consider only the case where the set Ω is the set of real numbers. Therefore, we reduce the definition of a time series to a finite sequence of numbers ordered in time.

Notion of uncertainty

Any measurement is subject to uncertainty, and unlike error, which can be avoided by being careful, uncertainty cannot be avoided [Taylor 1996]. It can be reduced to a certain level, but it cannot be eliminated. Many factors can lead to uncertain measurements, including the sensitivity and the precision of the sensor used to make the measurement, the environmental conditions in which the measurement is done, and privacy-preservation constraints. To make this clear, let us assume we want to know the height of a person who is 500 meters away: we could estimate that he measures between 150 centimeters and 160 centimeters with some level of confidence, given our experience. The uncertainty here is due to the fact that the person is far away, and hence it is difficult to make a more precise estimation of his height. This uncertainty can be reduced by getting closer to the person. A more precise height can be obtained using a measuring tape, but the precision of this measurement will still be limited by the graduation of the tape: a tape graduated in centimeters will give less precise measurements than one graduated in millimeters. It is difficult, and even impossible in some applications, to obtain the required level of certainty. Therefore, it is necessary to build ML tools that work well despite the uncertainty in the data.

There exist two types of uncertainty: aleatoric and epistemic. Also called statistical, aleatoric uncertainty is due to unknowns that differ each time a measurement is made. A typical source of aleatoric uncertainty is the random seed used for random number generation in computer science. Epistemic uncertainty, also called systematic uncertainty, is due to things that should be known in principle, but are not in practice. The uncertainty in the measurement of a person's height as described in the previous paragraph is epistemic, as it is possible to measure a person's height precisely using an appropriate tool. Some subtypes of epistemic uncertainty are imprecision, incompletion and unreliability. Imprecision is when there are many possible outputs for the same measurement and the exact output is unknown; an example is giving an interval in which a person's height is known to be, instead of a crisp real number. Incompletion is when some measurements are unknown or missing; a typical example is missing values in datasets. Unreliability is when it is not guaranteed at 100% that the output of a measurement is correct. Figure 1.4 summarizes the categorization of the different types of uncertainty that have just been described.

[Figure 1.4: Categorization of uncertainty types]

Definition 1.2 (PDF-based uncertain value). An uncertain value is given as a best estimate together with a deviation, where the best estimate is the most probable value and the deviation is the maximal possible error on that estimate.

$x = \hat{x} \pm \delta x, \quad \hat{x} \in \mathbb{R},\; \delta x \in \mathbb{R}^{+} \qquad (1.2)$

Therefore, the exact value is somewhere in the interval $[\hat{x} - \delta x, \hat{x} + \delta x]$.

Definition 1.3 (Multiset uncertain value).
An uncertain value can be given as the set of all its possible exact values. These values can be equiprobable or not.

$x = \{x_1, x_2, \dots, x_s\}, \quad \forall i,\; x_i \in \mathbb{R},\; s \in \mathbb{N}^{+} \qquad (1.3)$

The exact value is either one of the values in the set or a value close to one of the values in the set.

Definition 1.5 (Time series classification). Let $D = \{(T_1, c_1), (T_2, c_2), \dots, (T_n, c_n)\}$ be a dataset, where each $T_i$ is a time series and $c_i$ the associated class label taken from a finite set of discrete classes $C = \{c_1, c_2, \dots, c_{n_c}\}$ ($n_c$ is the number of classes and $C \subset \mathbb{N}$). The classification task for this dataset consists of learning a function $f$ (also called a classifier) such that:

$f(T_i) = c_i \qquad (1.4)$

Once the function is learned, it can be applied to new time series to automatically predict their class labels. In practice, learning the exact function $f$ is generally a difficult problem, depending on the complexity of the relationship between the time series and the classes. Instead, an approximated function $\hat{f}$, as close as possible to $f$, is learned. The quality of the approximation is measured using a loss function $l$, which computes how far the approximation is from the exact function. One of the most used loss functions is the categorical cross entropy, defined as follows:

Definition 1.6 (Cross entropy). The cross entropy loss of a classifier $\hat{f}$ on a given time series $T_i$ is defined as follows:

$l(T_i, c_i) = -\,c_i \log \hat{f}(T_i) \qquad (1.5)$

The overall loss is obtained by averaging the losses over every time series in the dataset. Definitions 1.5 and 1.6 remain valid when the time series are uncertain.

Regarding the discriminative features used to classify the data, the existing methods have been grouped in five categories, which are: whole series, interval, shapelet, dictionary and spectral approaches [START_REF] Bagnall | [END_REF]. In order to include the methods proposed subsequently, we add three other categories, namely: subsequence, hybrid and deep learning approaches.

Whole series

Whole series approaches classify time series using the k-Nearest Neighbor (k-NN) classifier. In particular, a new time series is assigned to the class of its first nearest neighbor. The neighborhood is defined using a distance that measures the dissimilarity between two time series. The Euclidean distance is one of the most used distances in machine learning, and time series classification is not an exception.

Definition 1.7 (Euclidean distance (ED)). Given two time series $T_1$ and $T_2$ of the same length m, the Euclidean distance between them is defined as follows:

$ED(T_1, T_2) = \sqrt{\sum_{i=1}^{m} (t_{1i} - t_{2i})^2} \qquad (1.6)$

where $t_{1i}$ and $t_{2i}$ are the respective values of $T_1$ and $T_2$. In practice, the square root can be omitted to save computation since its effect is only to change the dissimilarity scale.

The main limitation of ED appears when the compared time series are not aligned, for instance when there is a time shift. To illustrate this situation, let us consider the time series $T_1 = (0, 0, 0, 2, 2, 2, 0, 0, 0, 0)$ and $T_2 = (0, 0, 0, 0, 2, 2, 2, 0, 0, 0)$. It can be seen that $T_2$ is obtained by shifting $T_1$ one time step to the right. However, the ED between them is not 0, meaning that these time series are not considered similar. The reason behind this behavior is that ED is computed assuming that the time series are naturally aligned, as depicted in the accompanying figure. Dynamic Time Warping (DTW) was proposed to find the optimal alignment between two time series and to realign one speech signal to perfectly match another one [START_REF] Sakoe | Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition[END_REF].
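To make this limitation concrete, the following minimal NumPy sketch (illustrative function names, not the thesis code) computes the lock-step Euclidean distance of Definition 1.7 and a textbook dynamic-programming DTW on the two series above: ED reports a non-zero dissimilarity, while the elastic alignment absorbs the one-step shift.

```python
import numpy as np

def euclidean_distance(t1, t2):
    """Lock-step distance of Definition 1.7; assumes equal lengths."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    return np.sqrt(np.sum((t1 - t2) ** 2))

def dtw_distance(t1, t2):
    """Textbook O(m^2) dynamic-programming DTW (no warping window)."""
    n, m = len(t1), len(t2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (t1[i - 1] - t2[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

T1 = [0, 0, 0, 2, 2, 2, 0, 0, 0, 0]
T2 = [0, 0, 0, 0, 2, 2, 2, 0, 0, 0]
print(euclidean_distance(T1, T2))  # about 2.83: the lock-step alignment misses the shift
print(dtw_distance(T1, T2))        # 0.0: the elastic alignment absorbs the shift
```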
The optimal alignment found using DTW is illustrated on Figure 1.8. The Euclidean distance computed with respect to this alignment is 0, meaning that these two time series are exactly the same. Combining DT W and the 1-NN has been the state-of-the-art approach to classify time series [START_REF] Bagnall | [END_REF] a parameter called the warping window in order to reduce the computation time complexity [START_REF] Sakoe | Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition[END_REF][START_REF] Ratanamahatana | Making time-series classication more accurate using learned constraints[END_REF]]. Beside DTW, there exists other elastic distances that has been used in time series analysis, but DTW performs better. Unlike ED which has a linear time complexity, elastic distances generally have a quadratic time complexity as the optimal alignment is obtained using dynamic programming. Some pruning and early abandon techniques has been proposed to accelerate this computation [START_REF] Herrmann | Early abandoning and pruning for elastic distances including dynamic time warping[END_REF]. Given the success of ensemble techniques in machine learning, and the fact that dierent distances computes dierent dissimilarities, whole series approaches has been combined in order to improve the classication. Elastic Ensemble (EE), proposed in [Lines et al. 2018] is one of these ensemble methods. EE's time complexity has been signicantly reduced in [Tan et al. 2020] to obtain the Fast Elastic Ensemble (FastEE). Interval Whole series approaches assume that the whole time series is important to achieve the classication. It is said that these approaches perform classication regarding the global features of the time series. By doing so, the computational time is high. On the contrary, interval approaches hypothesize that only a limited portion of the time series are relevant for classication. For instance, consider the time series dataset on Figure 1.9. It can be realized that the classication could be done regarding only the subsequences that are inside the gray rectangles, the remaining parts of the time series do not bring more information as these parts are similar regardless of the time series' class. Interval approaches are suitable for these type of datasets. An interval approach for time series classication works in tree steps: intervals selection, features computation, and nally, classication. Relevant intervals identication is generally done randomly or using an heuristic. The next step after the selection of relevant intervals is the computation of a set of statistics on these intervals. In particular the mean, the standard deviation, the slope, the median, etc are computed for each interval and for each time series in the dataset. The last step is the actual classication using any supervised classier on the computed statistics. This process in summarized on Figure 1.10. State-of-the-art methods that follow the interval based approach are the time Shapelet A Shapelets is a time series primitive proposed by [Ye & Keogh 2009b] as an eective and interpretable feature for time series classication. Simply said, a shapelet is subsequence (a set of consecutive values in a time series) that is characteristic of a class in a dataset. This notion is illustrated on Figure 1.11 on which the shapelets are the subsequences located in the rectangles. 
It can be seen that these subsequences are enough to perform the classication eectively as the green shapelets are only present in the green time series, while the blue shapelets are only present in the blue ones. Therefore, similarly to interval approaches, shapelet methods assume that only a subpart of the time series are relevant for the classication. However, unlike intervals whose locations are xed once for all, shapelets can appear at any location on the time series: shapelet are phase-independent, while interval are phase-Chapter 1. Background and related work dependent. The classication of a time series using a shapelet approach is done with respect to the similarity between the time series and the shapelets. More specically, if a time series contains a subsequence that is similar to a shapelet, then it is consider as being from the same class as the shapelet. The similarity is computed using a distance function, generally the Euclidean distance, but any distance can be also used. A shapelet approach performs classication in three steps: shapelets extraction, shapelets transformation, and nally, the actual classication. Extracting shapelets consists in nding the top most relevant shapelets from the training set. This is done by generating all the subsequences from the dataset, then computing the information gain obtained by splitting the dataset with respect to each subsequence (the split is done by creating two groups: one containing time series that are the most similar to the subsequence, and the other containing time series that are the less similar to the subsequence). Finally, the subsequences with the highest information gain are selected as shapelets. Then comes the second step which consists of transforming the original time series dataset to a tabular dataset. This is achieved by replacing each time series in the dataset by the vector of its distances to the selected shapelets. The nal step is the training of any supervised classier on the the transformed dataset. Figure 1.12 summarizes how time series classication with shapelets is performed. This approach is said to be interpretable since the selected shapelet are mean- Dictionary Shapelet, interval and whole series approaches work in the time domain as they perform classication based directly on the observed values of the time series. In some cases, this could be ineective. The dataset shown on Figure 1.13 illustrates a situation where these approaches may not be suitable. This is because the time series are made of similar subsequences with dierent frequencies depending on the time series class. Fourier Approximation (SFA [Schäfer & Högqvist 2012]) and the Symbolic Aggregate approXimation (SAX [Lin et al. 2007]). We illustrate the discretization using the example of SAX on Figure 1.14. This kind of datasets are preponderant in the eld of speech analysis. features. The eectiveness of using Fourier coecients, autocorrelation and power spectrum has been demonstrated in [START_REF] Bagnall | [END_REF]. [START_REF] Corduas | Marcella Corduas and Domenico Piccolo. Time series clustering and classication by the autoregressive metric[END_REF] used dierent supervised classiers on autoregressive features in order to perform time series clustering and classication. [Fawaz et al. 2019b]. This study includes recent advances in time series classication, but also many methods that were not considered in [START_REF] Wang | [END_REF]]. 
Furthermore, with the availability of more open time series datasets, this review also considered more diverse application domains. This time, ResNet signicantly outperformed the others deep learning methods, followed by the FCN. Additionally, the state-of-the-art handcrafted features method at this time, HIVE-COTE [Lines et al. 2018], achieved better classication performance compared to ResNet. HIVE-COTE will be described in the next section. Subsequently, the deep learning method InceptionTime has been proposed [Fawaz et al. 2020]. It is an ensemble of convolutional neural network models that include residual connections as in ResNet. The state-of-the-art deep learning method for time series classication is ROCKET [START_REF] Dempster | ROCKET: Exceptionally fast and accurate time series classication using random convolutional kernels[END_REF]. It is a special kind of convolutional neural network as it uses random lters. In fact, instead of learning convolutional kernels by optimization as it is generally done in deep learning, ROCKET uses random kernels sampled from a uniform distribution. As there is no learning, this method is extremely fast. Additionally, ROCKET uses a new feature called the proportion of positive values (PPV) computed after applying the convolutions. PPV allows ROCKET to achieve comparable performance to state-of-the-art methods. An almost deterministic variant of ROCKET has been developed recently under the name MiniRocket [Dempster et al. 2021]. Another variant of ROCKET is MultiROKET which extract features from the rst order derivative of the raw time series in addition to features extracted on the raw time series [Tan et al. Hybrid approaches Some time series datasets simultaneously contains whole series, shapelet, interval, dictionary and spectral features. In this situation, using a single type of feature to perform the classication leads to poor performance. Moreover, it would be benecial to have a method than could perform well on any dataset. Hybrid approaches has been proposed to tackle these challenges. The main idea is to extract every type of features, then use a strategy to merge the predictions from each feature type to obtain the nal prediction. For instance, the Random Interval Spectral Ensemble (RISE [Lines et al. 2018 STC, TDE, ROCKET and CIF. The most important limitation of HIVE-COTE is the computational time which is in the order of O(n 2 m 4 ) for a dataset of n time series of length m. This limitation is solved in practice by searching shapelet for a given amount of time only. Although this works well in practice, there is no guarantee that some interesting shapelets will not be missed by the algorithm. The Time Series Combination of Heterogeneous and Integrated Embedding Forest (TS-CHIEF [START_REF] Shifaz | [END_REF]) combines whole series, interval and dictionary features in a forest of trees fashion in order to reduce the variance, increase the classication performance, while reducing the computational time. The authors did not include shapelet features because they are computationally expensive. TS-CHIEF achieves performance comparable to the rst HIVE-COTE version while being much more scalable. However, the second HIVE-COTE version is signicantly more accurate than TS-CHIEF. Summary of time series classication approaches It can be realized by looking carefully at the existing time series classication approaches that they follow the same pattern to perform classication. 
In fact, each approach works in three main steps: feature extraction, feature transformation, and finally classification. This process is summarized in Figure 1.16 and is described as follows: given a dataset of time series with their class labels, a feature extractor is used to extract relevant features (shapelets, intervals, words, artificial neural network (ANN) weights, etc.), then the input dataset is transformed with respect to the extracted features (shapelet transformation, interval transformation, histogram, forward pass for ANN, etc.), and finally, a supervised classifier is trained on the obtained tabular dataset. The feature transformation is performed using the function v(T, f), which computes the value of the feature f for the time series T. This function is the distance function in shapelet and whole series approaches, the statistic functions (mean, standard deviation, median, etc.) for interval approaches, the count for dictionary approaches, the correlation for spectral approaches, and the forward pass for deep learning approaches.

We synthesized the existing methods for time series classification in Table 1.1. The first two columns respectively indicate the category and the features that are used for classification. The last column gives some state-of-the-art methods in the corresponding category. The third column indicates the type of explainability, which is either absent when the method is not explainable, by design when the method is explainable right after training without using any additional tool, and post hoc when the method is explainable after training by using additional techniques such as LIME [START_REF] Ribeiro | [END_REF] and SHAP [Lundberg & Lee 2017]. Note that although post hoc explainability methods such as LIME and SHAP are model-agnostic, and thus applicable to any method, there is no theoretical guarantee that the result will be meaningful to the end users.

Uncertain time series classification approaches

Existing uncertain time series classification (uTSC) approaches are inspired from regular time series classification (TSC) approaches. The general idea is to take uncertainty into account in an existing TSC approach by means of some adaptation of the similarity measure. DUST is one of these measures: it does not compute a probability on the similarity, but the similarity itself. DUST assumes that the imprecision in the time series follows a normal distribution. N_DUST is a DUST variant that assumes the best estimate to be normally distributed, while U_DUST considers a uniform distribution.

Definition 1.8 (DUST). The DUST similarity is defined as follows:

$U\_DUST(T_1, T_2) = \sqrt{\sum_{i=1}^{l} \left(\frac{t_{1,i} - t_{2,i}}{2\sigma_i}\right)^2} \qquad (1.7)$

$N\_DUST(T_1, T_2) = \sqrt{\sum_{i=1}^{l} \left(\frac{t_{1,i} - t_{2,i}}{2\sigma_i (1 + \sigma_i^2)}\right)^2} \qquad (1.8)$

where $\sigma_i$ is the uncertainty at time step i. Since DUST requires the compared values to have the same uncertainty, we consider $\sigma_i = \max(\delta t_{1,i}, \delta t_{2,i})$ in this work.

Definition 1.9 (FOTS). The FOTS similarity is defined as follows:

$FOTS(T_1, T_2) = \sqrt{\sum_{i=1}^{l} \sum_{j=1}^{k} (U_1 - U_2)_{ij}^2} \qquad (1.9)$

where $U_1$ and $U_2$ are the k first eigenvector matrices of the local auto-covariance matrices of $T_1$ and $T_2$ respectively. The local auto-covariance at timestamp t of a time series T is computed using M sliding windows of size w as follows:

$\Gamma_t(T, w, M) = \sum_{\tau = t - M + 1}^{t} T_{\tau, w} \otimes T_{\tau, w} \qquad (1.10)$

where $T_{\tau, w}$ is the subsequence of length w of T which starts at timestamp τ, and ⊗ is the outer product operator.
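As an illustration, here is a minimal NumPy sketch of the measures defined in Equations (1.7)-(1.10). It follows the formulas as stated above, so the exact normalisations, the single reference timestamp used for FOTS and the helper names are assumptions rather than reference implementations.

```python
import numpy as np

def u_dust(t1, t2, sigma):
    """U_DUST (Eq. 1.7): point-wise differences scaled by the uncertainty (sigma_i > 0 assumed)."""
    z = (np.asarray(t1, float) - np.asarray(t2, float)) / (2 * np.asarray(sigma, float))
    return np.sqrt(np.sum(z ** 2))

def n_dust(t1, t2, sigma):
    """N_DUST (Eq. 1.8): same idea with the normal-distribution scaling term."""
    s = np.asarray(sigma, float)
    z = (np.asarray(t1, float) - np.asarray(t2, float)) / (2 * s * (1 + s ** 2))
    return np.sqrt(np.sum(z ** 2))

def local_autocovariance(t, w, M, tau):
    """Gamma_tau(T, w, M) (Eq. 1.10): sum of outer products of M sliding windows of size w."""
    G = np.zeros((w, w))
    for s in range(tau - M + 1, tau + 1):
        win = t[s:s + w]
        G += np.outer(win, win)
    return G

def fots(t1, t2, w=4, M=4, k=2):
    """FOTS (Eq. 1.9): Frobenius distance between the k leading eigenvectors of the
    local auto-covariance matrices, evaluated here at a single reference timestamp."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    tau = len(t1) - w  # choice of reference timestamp is an assumption of this sketch
    U1 = np.linalg.eigh(local_autocovariance(t1, w, M, tau))[1][:, -k:]
    U2 = np.linalg.eigh(local_autocovariance(t2, w, M, tau))[1][:, -k:]
    return np.sqrt(np.sum((U1 - U2) ** 2))
```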
DUST, MUNICH, PROUD have been compared against each other on some time series analysis tasks such as querying and classication [Dallachiesa et al. 2012]. In the same work, the authors proposed the uncertain moving average (UMA), a moving average strategy for uncertain time series. We have noted four main limitations in the existing works on uncertain time series: 1. First, existing similarity measures give the similarity between two uncertain time series as an exact similarity without any uncertainty information. Since the compared objects are uncertain, it is natural to expect the similarity between them to have some uncertainty. Elsewhere, since the existing measures do not give the uncertainty on the similarity, the classier cannot be aware that the input data are uncertain. Likewise, the result of applying UMA is provided without the associated uncertainty, 2. second, except FOTS which has been proved to be more robust for clustering uncertain time series than Euclidean distance, the other uncertain measures have not been proved to be more eective compared to any exact distance that ignores uncertainty. This opens a question: are the existing uncertain similarity measures really necessary/useful? 3. third, the existing uncertain similarity measures have never been applied on real uncertain time series datasets, but always on exact time series on which random uncertainty have been added, 4. Finally, the synthetic uncertain time series data on which the existing methods have been tested are never provided, making it dicult to reproduce or verify the published results. The Uncertain Euclidean Distance (UED) that we are proposing in Section 2.2 is a solution for the rst aforementioned limitations. This thesis mitigates the second limitation by giving a comparison of DUST, which is to our knowledge the stateof-the-art uncertain measure, to Euclidean distance. In addition, we also consider FOTS and UED in this comparison. We apply these measures on synthetic datasets, but also on PLAsTiCC, an astronomical dataset of time series with real uncertainty [START_REF] Allam | The photometric LSST astronomical time-series classication challenge (PLAsTiCC): Data set[END_REF]. Finally, we share the source code, datasets and results of our experiment on a public repository to encourage reproducibility and re-usability. As mentioned earlier, the existing methods are based on the combination of a similarity measure with a 1-NN classier and it has been shown that this approach is signicantly less eective than approaches that extract local and/or global features on which classication is then performed [START_REF] Bagnall | [END_REF]. The UST and uSAST methods, which we respectively propose in Chapter 2 and Chapter 4 are based on shapelet features which are known to be very competitive for exact time series classication when tested on the UCR archive [Dau et al. 2019]. Beside giving accurate classication, shapelet features provide an inherent and easy explanation of the predictions [START_REF] Hills | Classication of time series by shapelet transformation[END_REF]. Figure 1.17 summarizes existing uncertain time series classication methods. Because the existing uncertain similarity measures do not provide the similarity uncertainty, the brutal disappearance of the uncertainty at the very beginning of the classication process can be observed, misleading the remaining part of the process to wrongly believes that the input data is certain. 
In the next chapter, we follow our proposed design by propagating uncertainty in the euclidean distance in order to obtain the uncertain euclidean distance (UED) which allows us not only to have the similarity, but also the quantication of the uncertainty on that similarity. Subsequently, UED is used to perform uTSC classication eciently. Chapter 2 Uncertain Time Series Classication With Shapelet Transform Time series classication is a task that aims at classifying chronological data. It is used in a diverse range of domains such as meteorology, medicine and physics. In the last decade, many algorithms have been built to perform this task with very appreciable accuracy. Introduction The last decade has been characterized by the availability of measurements in a large and variate set of domains such as meteorology, astronomy, medicine and object tracking. Generally, these measurements are represented as time series Chapter 2. Uncertain Time Series Classication With Shapelet Transform [Dallachiesa et al. 2012], that means a sequence of data ordered in time. Time series classication is used in many applications such as astronomy, land cover classication and human activity recognition. Meanwhile, there has been an increase of the number of methods for time series classication [START_REF] Shifaz | [END_REF], Bagnall et al. 2017]. However, to the best of our knowledge, these methods do not take data uncertainty into account. Any measurement is subject to uncertainty that can be due to the environment, the mean of measurement, privacy constraints and other factors. Furthermore, even if uncertainty can be reduced, it cannot be eliminated [Taylor 1996]. In some applications, uncertainty cannot be neglected and has to be explicitly handled [START_REF] Sarangi | [END_REF]. Shapelet based methods are one of the best approaches that have been developed for time series classication. A shapelet is a subseries that is representative for a class of time series. These methods are especially appreciated for their interpretability, their robustness and their classication speed [Ye & Keogh 2009b]. Almost every time series classication methods are built by coupling a similarity measure with a supervised classier. We follow this pattern in this chapter to build an uncertain time series classier. We are not aware of any existing method in the literature for the classication of uncertain time series. Our contribution is as follows, we rst propose an uncertain dissimilarity measure based on Euclidean distance. Secondly we use it to build the uncertain shapelet transform algorithm, which is the shapelet transform algorithm adapted to the classication of time series with available uncertainty information. The rest of this chapter is organized as follows: In Section 2.2, we present a new uncertain dissimilarity measure called UED, and in Section 2.3, we present the uncertain shapelet transform algorithm (UST). Section 2.4 is about experiments and Section 2.5 nally concludes this chapter. UED: a new uncertain dissimilarity measure As stated by [Taylor 1996], uncertainty is dierent from error since it cannot be eliminated, but it can be reduced up to a certain magnitude. Regardless of the measurement method, there is always an uncertainty and uncertain measures cannot be compared with a 100% reliability: the result of the comparison of uncertain values should also be uncertain. From now on, we consider only PDF-based representation of uncertain values. 
Let x be an uncertain value; we have $x = \hat{x} \pm \delta x$, where the exact value of x follows a probability distribution and lies in the interval $[\hat{x} - \delta x, \hat{x} + \delta x]$, and $\hat{x}$ is the best guess of the exact value of x. Let y be another uncertain value; any mathematical operator applied to x and y produces a new uncertain value. We have the following uncertainty propagation properties [Taylor 1996]:

$z = x + y = (\hat{x} + \hat{y}) \pm (\delta x + \delta y)$
$z = x - y = (\hat{x} - \hat{y}) \pm (\delta x + \delta y)$
$z = x \times y = (\hat{x} \times \hat{y}) \pm \left(\frac{\delta x}{|\hat{x}|} + \frac{\delta y}{|\hat{y}|}\right) \times |\hat{x} \times \hat{y}|$
$z = \frac{x}{y} = \frac{\hat{x}}{\hat{y}} \pm \left(\frac{\delta x}{|\hat{x}|} + \frac{\delta y}{|\hat{y}|}\right) \times \left|\frac{\hat{x}}{\hat{y}}\right|$

Applying these propagation rules to the Euclidean distance (ED) gives the Uncertain Euclidean Distance (UED):

$UED(T_1, T_2) = \sum_{i=1}^{n} (\hat{t}_{1i} - \hat{t}_{2i})^2 \pm 2 \sum_{i=1}^{n} |\hat{t}_{1i} - \hat{t}_{2i}| \times (\delta t_{1i} + \delta t_{2i}) \qquad (2.1)$

where $\hat{T}_i$ is the time series of the best guesses of $T_i$. The output of UED is an uncertain measure representing the similarity between the two uncertain time series given as inputs. In order to use UED to classify time series, especially with a shapelet algorithm, an ordering relation on the set of uncertain measures is needed. We propose three ways to compare uncertain measures: the first one is the simplest and is based on confidence, the second one is a stochastic order, and the last one is an interval number ordering.

Simple ordering for uncertain measures

This ordering is based on two simple properties. Let x and y be two PDF-based uncertain measures. The first property is the property of equality and states that two uncertain measures are equal if their best guesses and their uncertainties are equal.

$x = y \iff \hat{x} = \hat{y} \;\wedge\; \delta x = \delta y \qquad (2.2)$

The property of inferiority is the second one and states that the uncertain measure x is smaller than the uncertain measure y if and only if the best guess of x is smaller than the best guess of y. In the case where x and y have the same best guesses, the smaller is the one with the smallest uncertainty.

$x < y \iff (\hat{x} < \hat{y}) \;\vee\; ((\hat{x} = \hat{y}) \wedge (\delta x < \delta y)) \qquad (2.3)$

Unlike the property of equality, which is straightforward, the property of inferiority needs some explanation. Unfortunately, we do not have a mathematical justification of this property, but it is guided by two points: firstly, we are in some way confident about the best guess since it must have been given by an expert, and secondly, we are more confident with smaller uncertainties. Of course, these properties do not always give a correct ordering; in fact, if $x = 2 \pm 0.5$ and $y = 2 \pm 0.1$, then the inferiority property says that $y < x$. Now, if there were an oracle able to compute the exact value of any uncertain measure, it might say that $x = 1.8$ and $y = 2$, thus invalidating our ordering. This observation also holds for the property of equality.

Stochastic ordering of uncertain measures

Viewing two uncertain measures as random variables X and Y taking their values in a common interval I, X is said to be stochastically less than or equal to Y when:

$X \le_{st} Y \iff \Pr[X > t] \le \Pr[Y > t] \quad \forall t \in I$
$\iff 1 - \Pr[X > t] \ge 1 - \Pr[Y > t] \quad \forall t \in I$
$\iff \Pr[X \le t] \ge \Pr[Y \le t] \quad \forall t \in I$
$\iff CDF_X(t) \ge CDF_Y(t) \quad \forall t \in I \qquad (2.4)$

$CDF_X(t)$ is the cumulative distribution function of the random variable X evaluated at t. Because the cardinality of I is infinite, we discretize it as the set of the following values:

$\left\{ \min(I) + i \times \frac{\max(I) - \min(I)}{k} \;\middle|\; 0 \le i \le k \right\}$, where k is a whole number to be defined. \qquad (2.5)

Unlike the simple ordering, which is a total order, the stochastic ordering is a partial order. That means the relation "stochastically less than or equal to" is not defined for every pair of random variables, as the condition may not hold for every t in I; thus, sorting some uncertain measures using the stochastic ordering is impossible.
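Before moving on, the propagation rules and Equation (2.1) can be illustrated with a minimal sketch (hypothetical function names, not the UST implementation): UED returns the best guess together with its propagated uncertainty, and the simple ordering of Equation (2.3) can then be applied to such pairs.

```python
import numpy as np

def ued(T1_hat, dT1, T2_hat, dT2):
    """Uncertain Euclidean Distance (Eq. 2.1): returns the pair (best guess, uncertainty).
    T1_hat, T2_hat are the best-guess series; dT1, dT2 are the point-wise uncertainties."""
    T1_hat, dT1 = np.asarray(T1_hat, float), np.asarray(dT1, float)
    T2_hat, dT2 = np.asarray(T2_hat, float), np.asarray(dT2, float)
    diff = T1_hat - T2_hat
    best = np.sum(diff ** 2)                        # squared ED of the best guesses
    delta = 2 * np.sum(np.abs(diff) * (dT1 + dT2))  # propagated uncertainty
    return best, delta

def simple_less_than(x_hat, dx, y_hat, dy):
    """Simple ordering (Eq. 2.3): compare best guesses, break ties with uncertainties."""
    return x_hat < y_hat or (x_hat == y_hat and dx < dy)

best, delta = ued([1.0, 2.0, 0.5], [0.1, 0.2, 0.1], [1.2, 1.8, 0.7], [0.1, 0.1, 0.2])
print(f"{best:.2f} +/- {delta:.2f}")  # an uncertain similarity, here 0.12 +/- 0.32
```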
The partial nature of the stochastic ordering is clearly a limitation, but we did not find a total stochastic ordering in the literature.

Interval numbers ordering

Definition 2.1 (Interval number). An interval number $i_n$ is a number represented as an interval, that is $i_n = [i_n^l, i_n^u]$, where $i_n^l$ and $i_n^u$ are respectively the lowest and highest possible values of the number.

A PDF-based uncertain value is by definition an interval number, enhanced with a probability distribution on that interval. Given two uncertain values x and y, the interval number-based ordering can be estimated using the following probability [START_REF] Xu | [END_REF], Yue 2011]:

$\Pr[x \ge y] = \max\left(1 - \max\left(\frac{\hat{y} + \delta y - \hat{x} + \delta x}{2\delta x + 2\delta y},\ 0\right),\ 0\right) \qquad (2.6)$

Unlike the stochastic ordering, the simple ordering and the interval number-based ordering do not exploit the uncertainty distribution. Moreover, the simple ordering requires the best guess to be known, and this is not always the case in practice. If the compared uncertain values do not overlap at all, that is, if they do not have any possible exact value in common, all three orderings give the same order. Now that we know how to sort uncertain measures, let us see how to use UED to classify uncertain time series.

UST: The uncertain shapelet transform classification

In this section, we describe how to classify uncertain time series using shapelets. Uncertain observations are represented using the probability density function model (or simply PDF model). We start by defining the concepts that are used in our algorithm, then we describe the algorithm itself.

Definition of concepts

Definition 2.2 (Uncertain subsequence). An uncertain subsequence S of an uncertain time series T is a series of l consecutive uncertain values in T.

$S = \hat{S} \pm \delta S = (\hat{t}_{i+1} \pm \delta t_{i+1}, \dots, \hat{t}_{i+l} \pm \delta t_{i+l}), \quad 1 \le i \le m - l,\; 1 \le l \le m,\; m = |T| \qquad (2.7)$

Definition 2.3 (Distance). The distance between a subsequence S of length l and a time series T of length m is defined as follows:

$Dist(S, T) = \min_{P \in T^l} dist(S, P)$, where $T^l = \{(t_i, t_{i+1}, \dots, t_{i+l-1}) \mid 1 \le i \le m - l + 1\}$

The $dist(\cdot, \cdot)$ function in Definition 2.3 can be any distance metric; in practice, the Euclidean distance (ED) and Dynamic Time Warping (DTW) are generally used. The definition is also applicable between an uncertain time series and an uncertain subsequence, either by ignoring the uncertainty or by taking it into account using an uncertain distance, namely the UED distance.

Let $D = \{(T_i, c_i) \mid 1 \le i \le n\}$ be a dataset of n time series $T_i$ (respectively uncertain time series) with their class labels $c_i$ taken from a discrete finite set C such that the cardinality of C is much smaller than n. We can define the notions of separator and shapelet for this dataset.

Definition 2.4 (Separator). A separator (respectively uncertain separator) is a pair of a subsequence S (respectively uncertain subsequence) and a threshold ε that divides the dataset into two groups $D_L$ and $D_R$ such that:

$D_L = \{(T_i, c_i) \mid Dist(S, T_i) < \varepsilon,\ 1 \le i \le n\}$
$D_R = \{(T_i, c_i) \mid Dist(S, T_i) \ge \varepsilon,\ 1 \le i \le n\}$

Definition 2.5 (Shapelet). Given a dataset $D = \{(T_1, c_1), (T_2, c_2), \dots, (T_n, c_n)\}$ of time series with their class labels $c_i$ taken from a finite set of classes C, a shapelet $S^\star$ is a separator that maximizes the information gain.

$S^\star = \arg\max_{S \in W} IG(D, S)$, with W being the set of all subsequences in D. \qquad (2.8)

Definition 2.6 (Information gain (IG)). Let D be a time series dataset and S a shapelet.
Let $D_L = \{T \in D \mid dist(T, S) \le \varepsilon\}$ and $D_R = \{T \in D \mid dist(T, S) > \varepsilon\}$, then

$IG(D, S) = \max_{\varepsilon \in SP \subset \mathbb{R}} \left( H(D) - \frac{|D_L|}{|D|} H(D_L) - \frac{|D_R|}{|D|} H(D_R) \right)$, with $H(D) = - \sum_{c \in C} p_c \log p_c \qquad (2.9)$

$H(\cdot)$ is the entropy, $p_c$ is the probability of having the class c in the dataset D, C is the set of classes in D, and SP is the set of possible split points.

Shapelets have been introduced as primitives for time series classification by [Ye & Keogh 2009b]. The authors proposed a shapelet-based decision tree in which each node is a subsequence, and the time series arriving at a node are split into two groups such that one group contains the data that are similar to the subsequence at that node, and the other group is the set of data that are not similar to the subsequence. This training is done in a top-down approach, as in a classical decision tree, using the information gain (IG) at each node to find the best split.

Uncertain shapelet transform classification

Our algorithm for uncertain time series classification is an extension of the shapelet transform algorithm [START_REF] Hills | Classication of time series by shapelet transformation[END_REF]. Given a dataset D of uncertain time series, the first step is to select the top k best uncertain shapelets from the dataset. This step is achieved using the procedure described by Algo. 1, which takes as input the dataset D, the maximum number of uncertain shapelets to be extracted k, and the minimum and maximum candidate lengths MIN and MAX.

The next step after the top-k uncertain shapelet selection is the uncertain shapelet transformation. This step is done using Algo. 2, which takes as input the dataset D, the set of the top-k uncertain shapelets S and the number of uncertain shapelets k. For each uncertain time series in the dataset, its uncertain feature vector of length k is computed using UED. The i-th element of the vector is the UED between the uncertain time series and the uncertain shapelet i. Because the uncertainties add up during the transformation, the uncertain feature vectors are such that the scale of the best guesses is smaller than the scale of the uncertainties. It is very important to have everything on the same scale. The second for loop of Algo. 2 performs the standard normalization of the transformed dataset. We use $\hat{D}_{:j}$ to represent the list of the best guesses of the uncertain dissimilarities between every uncertain time series and the j-th uncertain shapelet, and $\delta D_{:j}$ is the list of the corresponding uncertainties.

If, instead of UED, one of the existing metrics from the state of the art (DUST, MUNICH, PROUD or FOTS) is used, the classifier is not able to learn while being aware of the uncertainty in the input, since the outputs of these metrics are apparently 100% reliable; most importantly, it would not be possible to take advantage of an uncertain classifier.

Algorithm 1: Top-K Uncertain Shapelet Selection
Input: D, k, MIN, MAX
begin
    C ← ∅; Q ← ∅
    for i ← 1 to n do
        cands ← GenCand(T_i, MIN, MAX)
        qualities ← AssessCand(cands, D)
        C ← C + cands
        Q ← Q + qualities
    end
    S ← ExtractBest(C, Q, k)
    return S
end

Compared models

We have compared different models, which are different configurations of the UST model. In particular, our models are built regarding the following attributes: uncertain similarity, ordering strategy, and supervised classifier.

Uncertain similarity. This is how the dissimilarity between uncertain subsequences is computed by UST. This attribute has five possible values, which are ED, UED, FOTS, U_DUST, and N_DUST.
We set the parameters of FOTS following the specifications in its original paper.

Ordering strategy. This is the method used to sort uncertain measures, that is, the simple, stochastic or interval ordering. When measures are not uncertain (when using FOTS or DUST), we use the natural order. For the stochastic ordering, we consider an uncertain measure x to be normally distributed. Given this assumption, the following cumulative distribution function can be used:

$CDF_X(t) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{t - \hat{x}}{\delta x \sqrt{2}}\right)\right) \qquad (2.10)$

where $\operatorname{erf}(\cdot)$ is the Gauss error function. To discretize I (using Eq. 2.5), we fixed the value of k to 100. For the interval ordering, we say that x ≤ y if Pr[x ≤ y] > 0.5.

Supervised classifier. This is the model used to classify the transformed dataset in the last step of UST. We used the classical Gaussian Naive Bayes (GNB) and the Uncertain Gaussian Naive Bayes (UGNB) models. We implemented UGNB following [START_REF] Qin | [END_REF]. We chose these classifiers for their simplicity, in order to evaluate UED and the importance of propagating uncertainty, followed by the use of a classifier that takes uncertainty into account during its training phase.

For each model, the parameters MIN and MAX are set to 3 and m − 1 respectively, where m is the length of the uncertain time series in the dataset being processed. Because of the high time complexity of the algorithm, we used a time contract to limit the execution time of each model. After the evaluation of an uncertain shapelet candidate, the next candidate is evaluated only if there is time remaining in the contract; otherwise the shapelet search is ended. Because FOTS is more time consuming than ED, UED and DUST, we set FOTS's time contract 12 times higher than the time limit of the other measures. Tab. 2.1 gives a summary of the different models that are evaluated and compared throughout our experiments. Since the datasets in the repository we use are provided without uncertainty, we manually add uncertainty to the datasets listed in Tab. 2.2. Given a dataset, the standard deviation σ_i of each timestep is computed. For each time series in the dataset, the added uncertainty for the observation at timestep i follows a normal distribution with mean 0 and standard deviation c × σ, where σ is randomly chosen from a normal distribution with mean 0 and standard deviation σ_i. We used different values of c ranging from 0.1 to 2.

Results

For each of the models we compared, we recorded the obtained accuracy, the training duration and the testing duration. These values are recorded for each level of uncertainty.

Accuracy analysis. In this analysis, UST(UED, GNB) and UST(UED, UGNB) use the interval ordering only. UED-based models are better than the others. They are even better when the uncertain naive Bayes is used as the classifier, that is, UST(UED, UGNB). We also observe that the accuracy of each model decreases when the uncertainty level increases.

Uncertainty is unpredictable, and because we are dealing with it, it is difficult to identify in which uncertain situations our approach will work well. For this reason, we use different levels of uncertainty in our experiments, expecting to cover as many possible situations as we can. The uncertainty levels from c = 1 to c = 2 are likely to be extreme and may not be found in a real application, but it is important to see how the models behave as the uncertainty becomes very large. We manually added uncertainty in our datasets.
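For reproducibility, the uncertainty-injection protocol described above can be sketched as follows. The routine is only an illustration of the c × σ scheme (in particular, taking the absolute value of the drawn σ so that it can act as a scale is our assumption), not the exact script used in the experiments.

```python
import numpy as np

def add_uncertainty(X, c, seed=None):
    """Simulate uncertainty on a dataset X of shape (n_series, length).
    sigma_i is the per-timestep standard deviation of the data; for each series,
    sigma is drawn from N(0, sigma_i) (absolute value taken here), and the injected
    noise follows N(0, c * sigma). Returns the noisy series and the reported deltas."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    sigma_i = X.std(axis=0)
    noisy, delta = X.copy(), np.zeros_like(X)
    for i, s_i in enumerate(sigma_i):
        sigma = np.abs(rng.normal(0.0, s_i, size=X.shape[0]))
        delta[:, i] = c * sigma                    # reported uncertainty delta_{t_i}
        noisy[:, i] += rng.normal(0.0, c * sigma)  # injected noise
    return noisy, delta
```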
Applying our model on a real uncertain dataset will strengthen our contribution. Nevertheless, by using dierent levels of uncertainty in our datasets, we expect to cover any real situation. The uncertain classier we used is the uncertain naive Bayes. We choose it for its simplicity. There are other uncertain classiers [Li et al. 2020a, Aggarwal & Yu 2009], and they can be used in UST, but we did not try them because our goal was to show how important it is to correctly handle uncertainty in the context of uncertain time series classication. We highly recommend to try other uncertain classiers when in real application. Finally, the time contract we set during our experiments limits in some ways the discovery of more, and why not better uncertain shapelets. In fact, maybe new uncertain shapelets might have been discovered with a larger time contract. Conclusion The goal of this chapter was to classify uncertain time series using the shapelet transform approach. To achieve this goal, we use uncertainty propagation techniques to dene an uncertain dissimilarity measure called U ED. Similarly to shapelet transform, UST's high computation time for identifying shapelets is one the of the greatest limitations of the approach. In the next chapter, we will describe a novel design of shapelet-based classication that overcomes this limitation while keeping at least the same level of accuracy. Key points We proposed the Uncertain Euclidean Distance (UED) which provides uncertain similarities between uncertain time series. We used UED to take uncertainty into account in the shapelet-based classication and proposed the Uncertain Shapelet Transform (UST). We shown that using UED leads to better classication than using existing uncertain measures in the state-of-the-art. Communications Michael F. Time series classication using phase-independent subsequences called shapelets is one of the best approaches in the state of the art. This approach is especially characterized by its interpretable property and its fast prediction time. However, given a dataset of n time series of length at most m, learning shapelets requires a computation time of O(n 2 m 4 ) which is too high for practical datasets. In this chapter, we exploit the fact that shapelets are shared by the members of the same class to propose the SAST (Scalable and Accurate Subsequence Transform) algorithm which has a time complexity of O(nm 3 ). SAST is accurate, interpretable and does not learn redundant shapelets. The experiments we conducted on the UCR archive datasets shown that SAST is more accurate than the state of the art Shapelet Transform algorithm while being signicantly more scalable. Introduction Time series classication with shapelets is accurate, robust to noise and interpretable [Ye & Keogh 2009b]. In particular, the shapelet transform algorithm is known to be among the most eective when tested on the UEA & UCR archive [START_REF] Bagnall | [END_REF]. Shapelet have also been proved to be eective in time series clustering [START_REF] Fotso | [END_REF], showing how useful shapelets are. The interpretability of a shapelet method is obtained by visualizing the subsequences that triggered the class label of a given instance. Since the introduction of time series classication using shapelets, one of the major limitations of the developed algorithms is their time complexity. 
In fact, the state-of-the-art time complexity of shapelet based methods is n 2 m 4 where n is the number of time series in the dataset and m is the length of the longest time series. This high time complexity is due to the large number of shapelet candidates that need to be evaluated in order to nd the top best shapelets. A human brain is able to recognize a lot of variations of an object after seeing a single variant. For instance, we are able to recognize any model of car after seeing one of them, we can recognize many species of dog if we have ever seen a dog. This ability is called core object recognition [START_REF] Dicarlo | [END_REF]]. Inspired by this amazing behavior of our brain, we claim that a shapelet model should be able to recognize any variant of a shapelet if it knows one or a few number of its variants. Simply dened, a shapelet is a pattern that is shared by the time series that belong to the same class. Therefore, any single instance of a class should contain all the shapelet or at least a variant of each shapelet for that class. Guided by this observation, we propose the Scalable and Accurate Subsequence Transform (SAST) algorithm, a time series classication algorithm that is accurate, scalable and whose predictions are interpretable. Existing shapelet based methods use the whole dataset to generate shapelet candidates, then use information gain to select the top best shapelets before doing the classication using a supervised classier. We claim that it is not necessary to generate the shapelet candidates from the whole dataset, only one or few instances per class is enough. We also claim that pruning shapelet candidates without taking into account the classier can lead to inaccurate classication. We propose the SAST model to support our claims ; it uses only a single instance per class in order to generate shapelet candidates. Furthermore, shapelet candidates are not assessed beforehand of classication. The supervised classier automatically identies the top best shapelets during its training phase. The key points of our contribution are the following: We introduce the core shapelet recognition task which aims to recognize any variant of a shapelet from one or few variants of that shapelet. We claim that time series classication by shapelets is a core shapelet recognition task and therefore the size of the shapelet space is considerably reduced without losing crucial information. SAST: Scalable and Accurate Subsequence Transform 49 We propose the SAST method, which successfully performs the core shapelet recognition task in order to accurately classify time series. SAST is also more scalable than the state of the art shapelet methods. In particular, SAST took 1 second to classify the Chinatown dataset with an accuracy of 96%, while the state of the art shapelet based algorithm STC took 51 seconds and achieved an accuracy of 97% on the same computer. Furthermore our proposed method can successfully classify some datasets on which STC fails. The rest of this chapter is organized as follows: In Section 3.2 we describe our proposed method SAST, which is inspired by the core object recognition capability of human brain. In Section 3.3, we assess SAST on various datasets and compare it to state of the art shapelet and non-shapelet based methods. Section 3.4 summarizes this work and presents future direction. 
SAST: Scalable and Accurate Subsequence Transform In time series classication, a shapelet is ideally a pattern that is shared by every instances of the same class, and that instances of other classes do not have, they are called discriminative patterns or subsequences. The number of patterns in a dataset of n time series of length m is O(nm 2 ), and state of the art shapelet algorithms evaluate each of them by computing their information gain for a set of similarity thresholds before keeping the patterns and their corresponding similarity thresholds that give the highest information gain. Reducing the number of patterns to be assessed will make shapelet models faster to train. In this section we propose a way to reduce the number of shapelet candidates. Then we show that there is no need to select the top best shapelets beforehand. Finally we present a novel method for shapelet based time series classication. Reducing the number of shapelet candidates Human brain eortlessly performs core object recognition, the ability to recognize objects despite substantial appearance variations [START_REF] Dicarlo | [END_REF]. This gives human the capability to recognize a vast number of objects that have the same name just by seeing a few of them. [START_REF] Heeger | [END_REF]] used Figure 3.1 in his lecture notes on Perception to illustrate the notion of invariance in recognition. This gure shows dierent ducks. Some are in water while others are not, some ducks are photographs and other are drawings. Furthermore, the ducks have dierent sizes, colors, etc. Despite all these variabilities, a human brain that has already seen a duck is able to recognize that each object on this gure is a duck. A shapelet is a pattern, a shape that is common to time series that have the same class label. By common, we do not mean that these time series have exactly that shapelet, but they have a pattern that is very similar to the shapelet. Any pattern that is similar to a shapelet can be considered as a variant of that shapelet. Proof. Let's assume that classes in D are distinguishable using shapelets and that there exists a shapelet shp for the dataset D that is not similar to any time series in the set D c . Since D c contains at least a time series of each class in D, any shapelet for the dataset D must be similar to at least one time series in D c . It follows from there that assuming shp to be a shapelet is wrong. Therefore the statement is true. From the previous proposition, any shapelet shp of D is always similar to a pattern in D c . Therefore, a shapelet algorithm that generated shapelet candidates from D c can achieve the same accuracy as if D was used. We run the shapelet transform algorithm (STC) [START_REF] Hills | Classication of time series by shapelet transformation[END_REF] on the Chinatown dataset and plotted the top 5 shapelets that have been selected for each class on Figure 3.4. The shapelets on the rst row clearly identify the valley at the beginning of time series in class 1. Although they are coming from dierent time series, they are very similar in shape. Likewise, the shapelets on the last row identify the at starting of instances in class 2. Generating shapelet candidates from the whole dataset makes STC learns dierent variants of the same patterns. The variations are in terms of starting position, length and shape. 
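The candidate-reduction idea discussed above can be sketched as follows; the function name and defaults are illustrative and only mirror, rather than reproduce, the STC-k/SAST implementation. With k reference series per class, only O(kcm^2) candidates are enumerated instead of O(nm^2).

```python
import numpy as np

def candidates_from_references(X, y, k=1, min_len=3, seed=None):
    """Generate shapelet candidates from only k randomly chosen reference series per
    class, instead of enumerating the subsequences of the whole dataset."""
    rng = np.random.default_rng(seed)
    candidates = []
    for c in np.unique(y):
        refs = rng.choice(np.where(y == c)[0], size=k, replace=False)
        for i in refs:
            series = np.asarray(X[i], float)
            for l in range(min_len, len(series) + 1):       # every admissible length
                for start in range(len(series) - l + 1):    # every starting position
                    candidates.append(series[start:start + l])
    return candidates
```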
By applying proposition 3.2.1 in the STC algorithm, we introduce STC-k, a variation of the STC algorithm that uses at most k time series from each class to generate the shapelet candidates. Hence, for a dataset with c classes, the number of shapelet candidates to be evaluated in STC-k is O(ckm 2 ) unlike STC in which O(nm 2 ) need to be evaluated. Algorithm 3 is an outline of the STC-k algorithm. The only dierence with STC is that the size of the shapelet space can be controlled by the parameter k. For a more detailed description of the algorithm, the reader should refer to the original STC's paper [START_REF] Hills | Classication of time series by shapelet transformation[END_REF]. By default, the length_list parameter is the set {3, 4, ..., m}, where m is the length of the time series in the dataset. min_ig is set to 0.05. In practice, two other parameters are used in STC: the maximum number of shapelets to keep per class and the time contract. The number of shapelets to keep per class is by default set to 200. The time contract is the maximum time allocated to the algorithm to search shapelets on the given dataset. [Middlehurst et al. 2020a] stated that in one hour of searching per dataset, the result is not signicantly worse than the full search. Let D f = {(x 1 , c 1 ), (x 2 , c 2 ), ..., (x n , c n )} be a dataset such that x i = [x i:1 , x i:2 , ..., x i:|S| ], where x i:j = dist(T i , S j ). If the j th feature is an important feature given by the analysis of feature importance for the dataset D f , then S j is a shapelet for the dataset D. Proof. Let's suppose the j th feature is an important feature, and that S j is not a shapelet for the dataset. By denition 2.5, not being a shapelet means that the information gain of S j is not high enough, and whether a time series T is similar or not to S j does not give any clue about the class of T . Therefore, knowing dist(T, S j ) doesn't help to classify T . In other words, the j th feature is not correlated to 54 Chapter 3. Scalable and Accurate Subsequence Transform for TSC the target variable. Hence, it cannot be an important feature. This proves the statement. The importance of a feature in a tree based algorithm determines how much it reduces the variance of the data compared to the parent node [START_REF] Dash | Feature selection for classication[END_REF], Molnar 2020]. This corresponds exactly to the denition of a shapelet (see Denition 2.5). In a linear model, the absolute value of the weight of an important feature will be greater than the one of a less important feature [Molnar 2020]. Classiers such as decision trees and linear models are said to be inherently interpretable since a post hoc analysis is not required to interpret their predictions. More generally, when a classier is tted, a post hoc explainer can be used to nd most important features [START_REF] Murdoch | [END_REF] The classication block: this block is actually the SAST algorithm and begins with the random selection of reference time series from which subsequences are then generated. Thereafter, the dataset is transformed by replacing each time series with a vector of its distances to each subsequence. Finally a supervised classier (illustrated here by a decision tree) is trained on the transformed dataset. The interpretability block: The role of this block is to explain the SAST algorithm by identifying shapelet candidates associated with the most important features learned by the classier. 
For inherently interpretable classiers such as decision trees, the importance of each feature is computed while tting the classier. For other classiers, eventually not inherently interpretable, an existing post hoc explainer such as LIME [START_REF] Ribeiro | [END_REF]] can be used to nd the importance of each feature. A pseudo code of the SAST algorithm is given by Algorithm 4. SAST takes as input the time series dataset D, the number k of instances to randomly select from each class in order to create the shapelet candidates, the list of lengths to use to generate shapelet candidates, and nally the supervised classier C that is going to be trained on the transformed dataset. SAST time complexity Each step of the SAST algorithm runs in a nite amount of time, therefore the algorithm always terminates. Selecting k reference time series is done in O(c) time complexity, c is the number of classes in the dataset. There are m -l + 1 subsequences of length l in a time series of length m. The total number of subsequences for a time series is m(m+1) 2 . Since there are kc reference time series in a dataset with c classes, generating all shapelet candidates is done in O(kcm 2 ). The transformation step requires O(nm 2 ) distance computations, each of which requires O(l) (l is the length of the subsequence) point wise operations. As the maximum subsequence length is m, the time complexity of the transformation step is O(nm 3 ). Therefore, to total time complexity of SAST is O(c) + O(kcm 2 ) + O(nm 3 ) + O(classif ier), where O(classif ier) is the time complexity of the classier used. The overall asymptotic time complexity of the SAST algorithm is therefore O(nm 3 ) + O(classif ier). SAST is much faster than the state of the art shapelet transform algorithm (STC) 56 Chapter 3. Scalable and Accurate Subsequence Transform for TSC [START_REF] Hills | Classication of time series by shapelet transformation[END_REF]] which time complexity is O(n 2 m 4 ) + O(classif ier). Ensemble of SAST models SAST accuracy is highly dependent on the randomly selected reference series. If a reference time series is noisy or not representative of its class, then it could be dicult for SAST to learn the best shapelets for the dataset. Furthermore, the random selection of reference time series could lead to a variance in performance. We use Bagging [Breiman 1996] to leverage these possible issues and we call the obtained model SASTEnsemble (or SASTEN in reduced form). SASTEN is obtained by ensembling r SAST models. Each individual model in the ensemble uses randomly selected reference time series and may also have dierent parameters, especially the parameters controlling the length of shapelet candidates (that is length_list in Algorithm 4). The nal prediction is obtained by averaging the predictions of every SAST models in the ensemble. The time complexity of SASTEN is r times the time complexity of SAST if run sequentially. But this can be reduced using parallelization. SASTEN uses r times more memory than a regular SAST. Experiments We have implemented STC-k, SAST and SASTEN in Python. Our implementation is based on the scikit-learn machine learning library [Pedregosa et al. 2011]. We have also followed scikit-learn design principles so that our models are compatible with any scikit-learn pipeline. We have used the implementation of STC (Shapelet Transform Classier) from the sktime library [Löning et al. 2019]. The source code of our experiments and all the results we discuss here are publicly available here 1 . 
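Before turning to the experiments, the following self-contained sketch summarizes the whole pipeline on toy data: candidates are taken from one reference series per class, every series is replaced by its vector of distances to the candidates, and a Ridge classifier is trained on the transformed table. Names and data are illustrative only; the released implementation should be used for actual experiments.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def dist(series, pattern):
    """Min Euclidean distance of `pattern` to all windows of `series`."""
    l = len(pattern)
    return min(np.linalg.norm(series[s:s + l] - pattern)
               for s in range(len(series) - l + 1))

def sast_transform(X, candidates):
    """Replace each series by its vector of distances to every candidate."""
    return np.array([[dist(x, c) for c in candidates] for x in X])

# Toy example: 2 classes, candidates taken from one reference series per class.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(6, 20))
y_train = np.array([0, 0, 0, 1, 1, 1])
X_train[y_train == 1, 5:10] += 3.0            # class 1 carries a bump pattern
refs = [X_train[0], X_train[3]]               # one reference per class (k = 1)
candidates = [r[s:s + 7] for r in refs for s in range(len(r) - 7 + 1)]

clf = RidgeClassifierCV()
clf.fit(sast_transform(X_train, candidates), y_train)
print(clf.score(sast_transform(X_train, candidates), y_train))
# SASTEN would simply average the outputs of r such models, each built from
# different random references and different candidate lengths.
```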
In all our experiments, the number of reference time series per class (that is the parameter k in Algorithm 4) is always set to one. The supervised classier used in STC-k, STC and SAST is the Ridge classier with Leave-One-Out (LOO) cross validation. This classier is available in the scikit-learn library. The LOO cross validation is used to nd the best regularization parameter among 10 log spaced values ranging from -3 to 3 (these values are inspired from [START_REF] Dempster | ROCKET: Exceptionally fast and accurate time series classication using random convolutional kernels[END_REF]). The other parameters are left to their default values and are not ne tuned. We have also used the Random Forest classier in SAST. For this classier all features are evaluated at each node to nd the best split and a split is selected if the impurity decreases by about 0.05, the minimal information gain for a shapelet like in STC. Although it is generally better to evaluate only a subset of the feature space in Random Forest in order to reduce the correlation between the trees, we have not followed this guideline in our work because we want the model to always select the best possible split (that is the best shapelet). However, each tree in the 1 https://github.com/frankl1/sast/tree/master 3.3. Experiments 57 ensemble is trained on a random subset of the training set. This classier is also available in the scikit-learn library. We make use of the Wilcoxon signicance test with a p-value of 0.05 to compare our models. We give the result of this test as a critical dierence diagram on which models that are not signicantly dierent from each other are linked with a bold line. The code used for this test and to draw critical dierence diagrams is from [Fawaz et al. 2019b]. Table 3.1 describes the models that we use in our experiments. We experiment using 72 randomly selected datasets from the UEA & UCR repository [Anthony [START_REF] Bagnall | The UEA & UCR Time Series Classication Repository. www.timeseriesclassification.com[END_REF]. The datasets in the repository are dierent in terms of series length, number of series, number of classes and application domain. For each dataset, the repository provides a training set and a test set. Since searching shapelets for one hour is not signicantly worse than the full search on the UEA & UCR archive [Middlehurst et al. 2020a], we used a time contract of one hour for each STC-k models as well as for STC. Accuracy In this subsection, we compare the models in terms of accuracy and we use scatter plots and critical dierence diagrams to summarized the results. However, the exact accuracy of SAST, STC and STC-k, which are the core models of this work are given in Table A.2. STC-k results We have evaluated STC-k on 72 datasets with dierent value of the parameters k. We have considered STC-1, STC-0.25, STC-0.5, STC-0.75 and STC. These models are described in Table 3.1. Figure 3.7 shows pairwise comparisons of these model accuracies on the test set of each dataset. STC is better than any STC-k on almost every datasets. This is because an STC-k model does not search the whole shapelet space, and therefore the shapelets obtained using the minimum information gain are not good enough to classify the dataset. The critical dierence diagram on Figure 3.8 shows that STC-0.75 is not signicantly more accurate than STC-0.5, which is signicantly more accurate than STC-0.25, which is in turn signicantly more accurate than STC-1. 
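For reference, the classifier settings described at the beginning of this section can be approximated with scikit-learn as follows. This is our reading of the configuration, not an extract of the released code, and the exact values used in the experiments are those of the public repository.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.ensemble import RandomForestClassifier

# Ridge with leave-one-out cross-validation over 10 log-spaced
# regularization values between 1e-3 and 1e3.
ridge = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))

# Random Forest evaluating every feature at each node and keeping a split
# only if the impurity decreases by at least 0.05 (the minimal information
# gain used for shapelets); each tree still sees a bootstrap sample.
forest = RandomForestClassifier(
    n_estimators=100,
    max_features=None,
    min_impurity_decrease=0.05,
    bootstrap=True,
    random_state=0,
)
print(ridge, forest)
```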
Therefore, STC-k accuracy increases with the value of the parameter k. All STC-k models are considerably less accurate than STC. We have observed that, STC generally fails at classifying datasets that have few time series in the training set. In particular, STC failed to nd shapelets on the Fungi datasets. This dataset has 18 classes with one instance per class in the training set. In this particular case, STC is exactly the same as STC-1. 3.1 on the 39 datasets marked with a star in Table A.2. The rst thing to note is that SAST-Ridge is generally more accurate than SAST-RF on our datasets (Figure 3.10a). There are many parameters in RF that can be optimized in order to improve SAST-RF, but we did not perform parameter tuning in this work and we consider SAST-Ridge as the best model for our experiment. This is why we use SAST-Ridge as the default SAST model and as the pivot in our comparison. We tried several length_list for the approximated SAST model, and we are presenting here only the four that achieved the best accuracy on our datasets. The critical dierence diagram between these four models is given in Figure 3.9. There is no signicant dierence between the models, however the model using length_list = {7, 11, 15} is the best of all. When not clearly precised in the SAST vs STC We now compare SAST (i.e SAST-Ridge) to STC, the state of the art shapelet method to our knowledge. This experiment is performed on the same 72 datasets and a pairwise comparison of SAST, STC and STC-1 is shown of Figure 3.12. SAST is more accurate than STC on 43 datasets, worse on 27 and there are two draws. STC-1 is more accurate than SAST on only 5 datasets among 72, although the only dierence between these two models is that the Ridge classier in SAST is trained using the whole shapelet space while only a subset of the shapelet space is used in STC-1. STC-1, STC and SAST respectively achieve an average accuracy of 0.68 ± 0.21, 0.79 ± 0.20 and 0.84 ± 0.12 on the 72 datasets. The standard deviation of STC and STC-1 models is higher due to the zero score obtained on one dataset (Fungi). There are datasets on which STC and STC-1 hardly achieve 50% accuracy, while SAST performs signicantly better. This is the case for the datasets Crop, ElectricDevices and Fungi. These datasets contain respectively 24, 7 and 18 classes. It is dicult to nd a subsequence in these datasets that is present in one class and not in the others. A subsequence is generally shared among multiple classes, and therefore is not highly discriminative in terms of information gain by itself. Subsequences need to be combined in other to dierentiate classes, and since all the subsequences are available in SAST, this combination is automatically learned by the classier. Elsewhere SAST achieves 90% accuracy on the dataset Fungi, while The critical dierence diagram on Figure 3.13 reveals that SAST is generally more accurate that STC, but the dierence is not highly signicant. We also compare our proposal to methods that learn shapelets, namely Learning time series Shapelets or LS [START_REF] Grabocka | [END_REF] and ELIS++ [Zhang et al. 2021]. The accuracy of ELIS++, FS and LS are taken from the ELIS++ paper and we considered the same 35 datasets they used (marked with a plus sign in Table A.2). The average accuracies of these models on the 35 datasets are 0.78±0.14, 0.81±0.14, 0.83 ± 0.13 and 0.85 ± 0.14 for FS, LS, SAST and ELIS++ respectively. Pedestrian; so we excluded these 5 datasets from this comparison. 
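The significance statements in this section rely on the Wilcoxon test with a p-value threshold of 0.05, applied to per-dataset accuracies. A minimal sketch with made-up numbers (not our results) is given below.

```python
from scipy.stats import wilcoxon

# Toy per-dataset accuracies for two classifiers (NOT our actual results).
acc_model_a = [0.81, 0.79, 0.93, 0.66, 0.88, 0.74, 0.90, 0.72]
acc_model_b = [0.78, 0.80, 0.90, 0.60, 0.85, 0.70, 0.91, 0.69]

stat, p_value = wilcoxon(acc_model_a, acc_model_b)
alpha = 0.05
if p_value < alpha:
    print(f"significant difference (p = {p_value:.3f})")
else:
    print(f"no significant difference at alpha = {alpha} (p = {p_value:.3f})")
```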
Elsewhere, we believe that the comparison we are doing here is not fair since these methods are not based on only shapelet features. However, considering the no free lunch theorem [START_REF] Wolpert | [END_REF], SAST could outperform these models on some datasets and the goal of this experiment is to see how SAST stands w.r.t to these methods that are based on combination of features. Although our model uses only shapelet features, it manages to outperform ROCKET on 5 among the 67 with 4 draws (Figure 3.16a). Elsewhere, SAST respectively outperforms HIVE-COTE and TS-CHIEF on 10 and 9 datasets among the 67 with 4 and 3 draws. Since SAST can perform better than HIVE-COTE on some datasets, replacing the shapelet module in HIVE-COTE with a SAST based model The Wilcoxon statistical test failed to reject the null hypothesis with a p-value of 0.05, meaning that these four models are not signicantly dierent on the considered 67 datasets. In fact, SAST, ROCKET, HIVE-COTE and TS-CHIEF respectively achieve an average accuracy of 0.84 ± 0.12, 0.88 ± 0.11, .88 ± 0.11 and 0.88 ± 0.12. These average scores clearly show that SAST is comparable to ROCKET and HIVE-COTE in terms of accuracy, and in addition SAST is more interpretable as it is a shapelet based method [Ye & Keogh 2009b, Bagnall et al. 2017]. Model accuracies per dataset type The datasets on the UEA & UCR archive are categorized in problem types. Among the 72 datasets we have experimented on, there is 1 electric device problem, 4 ECG problems, 1 High Resolution Melt (HRM) problem, 25 image problems, 9 motion recognition problems, 1 power consumption problem, 16 sensor reading problems, 7 simulated dataset problems, 6 spectrograph problems and 2 trac problems. We would like to see the method that is more appropriate for each problem type. However, be careful drawing too much conclusions because the number of datasets per problem type is relatively small to be representative. We compute these statistics among three groups of methods as in the previous subsections: the rst group is SAST, STC-1 and STC; the second group is SAST, ELIS++, FS and LS; and the last group is SAST, ROCKET, TS-CHIEF and HIVE-COTE. For each group and for each problem type, the percentage of times each method achieves the highest accuracy is computed. These statistics are shown as stacked bar plots with problem types on the x-axis and the number of times the highest accuracy is achieved on the y-axis. Above each bar, the number of datasets in the corresponding problem type 64 Chapter 3. Scalable and Accurate Subsequence Transform for TSC is displayed. Since more than one model can achieve the highest accuracy on the same dataset, summing the percentage in a bar could be greater than 100% and the value above a bar can be less than the bar height. Figure 3.17 shows the percentage of times SAST, STC-1 and STC achieve the highest accuracy per problem type. STC-1 achieves the highest accuracy on the image dataset MiddlePhalanxOutlineAgeGroup and on the sensor dataset Earthquake. STC is the only method that achieves the highest accuracy for ECG and Power. Elsewhere STC seems more appropriate for simulated datasets. SAST tends to be generally the best choice for electric device, HRM, image, motion recognition, sensor and is always the best for spectrograph problems compared to STC approaches. 
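The per-problem-type statistics discussed in the next subsection can be computed as sketched below: a method "wins" on a dataset when it reaches the highest accuracy there, ties counting for every tied method, and the win rates are then aggregated per problem type. The table is toy data, not our actual accuracies.

```python
import pandas as pd

# Toy accuracy table: one row per dataset (NOT our actual results).
df = pd.DataFrame({
    "dataset": ["D1", "D2", "D3", "D4", "D5"],
    "type":    ["image", "image", "sensor", "sensor", "ECG"],
    "SAST":    [0.90, 0.85, 0.70, 0.88, 0.95],
    "STC":     [0.92, 0.80, 0.70, 0.85, 0.94],
    "STC-1":   [0.60, 0.75, 0.65, 0.80, 0.90],
})
methods = ["SAST", "STC", "STC-1"]

# A method "wins" on a dataset when it reaches the highest accuracy there;
# ties are counted for every tied method, hence percentages may exceed 100%.
best = df[methods].max(axis=1)
wins = df[methods].eq(best, axis=0)
pct_wins = wins.groupby(df["type"]).mean() * 100
print(pct_wins.round(1))
```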
Figure 3.17: SAST, STC-1 and STC percentage of wins per dataset type When comparing SAST to other shapelet methods (ELIS++, FS and LS), we can see on Figure 3.18 that SAST always achieves the highest accuracy on spectrograph problems and is therefore a good choice for this problem type. Elsewhere it achieves the highest accuracy on more than 25% of image and sensor datasets. ELIS++ is more suitable for ECG, image, motion, and sensor problem types. LS is a good choice for simulated datasets. Finally, Figure 3.19 reveals that ROCKET, TS-CHIEF and HIVE-COTE win on more datasets than SAST, but with a relatively small dierence in accuracy. ROCKET seems to be the most promising method for ECG, motion, sensor, simulated , spectrograph and trac datasets while HIVE-COTE is a good choice for image and power datasets. TS-CHIEF is a fair option for device. Although SAST achieves the highest accuracy than ROCKET, HIVE-COTE and TS-CHIEF on some datasets, it sometimes obtains the same average accuracy as these methods. In fact, Table 3.2 gives the mean and standard deviation of each model accuracy per dataset type. We can see that SAST achieves the same average accuracy as the state of the art methods on spectrograph and is on average relatively closed on many other data types, except device (but there is only one dataset of that type). This results emphasize the fact that SAST can achieve accuracy equal to or closed to the state of the art method accuracy while oering easier interpretability. Scalability The scalability of SAST based models and STC is assessed regarding two criteria: the time series length and the number of time series in the dataset. In this experiment, the time contract is not used for STC, and therefore the full search is performed. Elsewhere, the training set and the test set are exactly the same. For each model, the time taken to t the model on the training set and then predict the test set is recorded. Time series length Here we use the dataset HouseTwenty from the UEA & UCR repository [Anthony Bagnall & Keogh 2018]. It is a binary dataset of electricity usage in houses. The training set has 34 time series and of length 3000 each. We vary the series length starting at 32 and only the rst time steps up to the current length are used to train our models. More precisely, we consider the HouseTwenty dataset with time series truncated at length 2 5 , 2 6 , 2 7 and nally 2 8 . The running time of each model is given in Figure 3.20a. For each of the four models, the running time increases with the length of time series in the dataset. However, SAST models are much more scalable than STC, and SASTEN-A is the most scalable of all, since it uses a xed number of shapelet 66 Chapter 3. Scalable and Accurate Subsequence Transform for TSC minutes to train on a dataset of 34 time series of length 64, while SAST, SASTEN and SASTEN-A take about 13 seconds, 27 seconds and 8 seconds respectively. For the same number of time series but now of length 256, STC takes a bit more than a day, while SAST, SASTEN and SASTEN-A take about 14 minutes, 26 minutes and 2 minute respectively. Therefore, even our slowest method SASTEN is 55 times faster than STC. SASTEN-A and SAST are respectively 1440 times and 102 times faster than STC. Interpretability The predictions of a SAST model trained on a dataset are explained by identifying and visualizing the shapelets that have been learned for that dataset. This is how the explanation of shapelet methods is given in the litterature [Ye & Keogh 2009b, Wang et al. 
2020]. This is done using feature importance analysis (see Proposition 3.2.2). Each feature is related to a shapelet candidate extracted from a time series whose class label is known. Shapelet candidates related to the most important features are the top best shapelets. We say that any shapelet candidate is from the class of the time series from which it has been extracted. Therefore, the class label of a time series can be interpreted by looking at the class labels of the shapelet candidates to which it is the most similar. Let us interpret the predictions of SAST-RF and SAST-Ridge trained on the Chinatown dataset. We consider this dataset because it has only two classes and time series of length 24, it is therefore easy to visualize this dataset. However, what we are doing here is applicable to any dataset. Since SAST-RF uses a tree based classier, information gain is used as feature importance. With SAST-Ridge, the importance of feature is given by the absolute value of the corresponding learned weight. Although feature importance is computed dierently for both models, we show that their predictions are interpretable in the same manner. In order to predict the class label of a test time series, SAST identies the most important features similar to the time series. In other words, SAST checks if the time series contains subsequences that are similar to the most important features. Figure 3.23 shows the matches between the top 5 most important features learned by SAST-Ridge and two randomly selected test time series. We can note that the model correctly predicts the class labels. Since the top 5 shapelets learned by SAST-Ridge are from class 1, there are near perfect matches with the test instance from class 1 (see Figure 3.23 top). A near perfect match between a subsequence and shapelet candidate means that the subsequence is a variant of that shapelet candidate. No good match is found with the test instance from class 2 (see Figure 3.23 bottom). Therefore, we have an explanation (i.e the most important features that triggered the predicted class label) of why the rst instance is predicted as coming from class 3.3. Experiments 69 1, while the second one is predicted as coming from class 2. The same analysis is shown for SAST-RF in Figure 3.24. Like SAST-Ridge, SAST-RF also predicted the class labels correctly. The rst test time series has a near perfect match with the second top best shapelet candidate (see Figure 3.24 top) which is a shapelet candidate of class 1. The other top best shapelet candidates, which are all from class 2 do not match with the rst time series. This explains why the predicted class label for the rst time series is class 1 and not class 2. The rst, third, fourth and fth top best shapelet candidates, which are all from class 2 have near perfect matches with the second time series, while the second top best shapelet candidate, which is from class 1 does not match (see Figure 3.24 bottom). Hence, we can interpret why the class label of the second instance is predicted as class 2 and not class 1. Conclusion In this work, we shown that the number of shapelet candidates in a shapelet algorithm can be reduced considerably without losing accuracy. We also shown that it is not always necessary to learn shapelets beforehand of classication. We introduced the Scalable and Accurate Subsequence Transform (SAST) algorithm which is interpretable, accurate and a more scalable alternative to the Shapelet Transform algorithm. 
Furthermore, SAST is comparable in terms of accuracy to the state-of-the-art methods ROCKET, HIVE-COTE and TS-CHIEF, especially for the spectrograph dataset type, while offering easier interpretability. Our experiments revealed that a good trade-off between accuracy and scalability can be found by ensembling different SAST models, each one focusing on different lengths of shapelet candidates. We have also introduced the core shapelet recognition task, which consists of learning a shapelet model using only a few variants of each shapelet candidate. SAST achieves this task accurately and we hope future shapelet methods will follow the design we introduced. We plan several improvements to the SAST algorithm in the future. In particular, distance computation could be sped up using lower bounding and early abandon techniques. Different variants of the same shapelet can be present in the same time series; similar subsequences can therefore be pruned in order to further reduce the number of shapelet candidates. We also plan to explore how core shapelet recognition can be applied in TS-CHIEF in order to take shapelet features into account. Although we focused on time series classification in this chapter, we hope that the same idea can be used in the near future to increase the scalability of shapelet-based time series clustering [START_REF] Fotso | [END_REF]]. In the next chapter, we will improve SAST by removing duplicate subsequences and extend it to uncertain time series using uncertainty propagation.

Key points
We introduced the core shapelet recognition task, consisting of building a ML model able to recognize any shapelet after seeing one or a few of its variants.
We proposed the Scalable and Accurate Subsequence Transform (SAST), a novel design of subsequence-based time series classification which is orders of magnitude more scalable than shapelet transform while being more accurate.
We demonstrated SAST's effectiveness against 8 state-of-the-art methods on 72 datasets.
We demonstrated that SAST is an interpretable-by-design method.

Communications
Michael F.

For astronomical transients, which appear in the sky for a limited period of time and then disappear forever, there is only a small time window during which such measurements can be taken. Alternatively, we can also associate different classes of astronomical transients with the respective shapes of their light curves (brightness variation as a function of time). In this case, we need to repeatedly measure the brightness of the source in a relatively broad region of the wavelength spectrum. This process, called photometry, is less expensive and imposes more manageable constraints on observation conditions. However, the measurements are more prone to uncertainties (due to moonlight, twilight, clouds, etc.) in the flux determination, and the distinction between light curves from different classes is subtle, resulting in less accurate classification. Nevertheless, since there are not enough spectroscopic resources to provide a definite label for every photometrically observed object, being able to effectively analyze uncertain photometric light curves means that a wider range of the universe can be understood quickly and at a lower cost. The Vera C.
Rubin Observatory 1 is a ground-based observatory, currently under construction in Chile, whose goal is to conduct the 10-year Legacy Survey of Space and Time (LSST) in order to produce the deepest and widest images of the universe. The observatory is expected to start producing data in early 2024, and in order to prepare the community for the arrival of its data, one important data challenge was put in place: the Photometric LSST Astronomical Time-Series Classication Challenge or simply PLAsTiCC [START_REF] Allam | The photometric LSST astronomical time-series classication challenge (PLAsTiCC): Data set[END_REF]. The goal was to identify machine learning models able to classify 14 types of transients in simulated data, represented by uncertain time series, or light curves. The ultimate goal behind the challenge was to understand which methods are expected to perform better in LSST-like data, thus preparing the community to the arrival of its data and help understanding the universe's expansion history. Therefore, using interpretable approaches was very important. However, contributors focused on minimizing the classication loss by employing techniques such as mixture of classiers and data augmentation [Hloºek et al. 2020] Given a time series dataset, SAST follows four steps: i), one instance is randomly selected from each class: these are called reference time series; ii) a set containing every subsequences from the selected time series is created; iii) each instance in the dataset is replaced by the vector of its distances to each subsequence obtained in the second step; iv) a supervised classier is trained on the transform dataset. Performing classication following the SAST steps could be inecient because of the redundancy in the set of subsequences obtained at the second step. The redundancy is particularly high for small length subsequences and in datasets such as electrocardiogram (ECG) and PLAsTiCC, in which repetitive patterns occur very often. Furthermore, the third step is based on the application of Denition 2.3 using the Euclidean distance and, therefore, only the most similar subsequence is considered; however, taking into account the number of occurrences of the best match is important in some contexts. To overcome these limitations, we dene the notion of ε-similarity as follows: Denition 4.1 (ε-similarity). Two subsequences (respectively uncertain subsequences) S 1 and S 2 of same length l are ε-similar if the distance between them is less than or equal to a user-dened threshold ε ≥ 0. ε-similar(S 1 , S 2 ) = T rue, if dist(S 1 , S 2 ) ≤ ε F alse, otherwise Theorem 4.2.1. The ε-similar relationship is not transitive. Proof. Let X, Y , and Z be three subsequences of same length l such that ε-similar(X, Y ) = T rue and ε-similar(Y, Z) = T rue. Let us assume that the transitivity property is veried, that is ε-similar(X, Z) = T rue. A counterexample is built by considering X, Y , and Z as points in a high dimensional space (R l ) such that dist(X, Y ) = dist(Y, Z) = ε, and XY ⊥ XZ. The following derivation proves the theorem: Chapter 4. Explainable Classication of Astronomical uTS and the number of occurrences of the subsequence S j in Similarly to the Uncertain Shapelet Transform, the uncertain SAST+ ( uSAST+) is obtained by using UED as the distance metric in Algorithm 5; allowing uncertainties to be propagated to the classier which then uses these uncertainties to learn robust decision boundaries. 
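For concreteness, the sketch below implements Definition 4.1 together with the distAndCount routine used by Algorithm 5, which returns both the best sliding-window distance of a subsequence to a series and the number of windows that are ε-similar to it (its frequency). The plain Euclidean distance is used here; in uSAST+ it is replaced by UED. Function names and toy values are illustrative, not the released implementation. The derivation of Theorem 4.2.1 announced above follows right after this sketch.

```python
import numpy as np

def epsilon_similar(s1: np.ndarray, s2: np.ndarray, eps: float) -> bool:
    """Definition 4.1: two same-length subsequences are eps-similar when
    their (Euclidean) distance does not exceed the threshold eps."""
    assert len(s1) == len(s2)
    return float(np.linalg.norm(s1 - s2)) <= eps

def dist_and_count(series: np.ndarray, pattern: np.ndarray, eps: float):
    """Best sliding-window distance of `pattern` to `series`, together with
    the number of windows that are eps-similar to `pattern` (its frequency)."""
    l = len(pattern)
    dists = np.array([np.linalg.norm(series[s:s + l] - pattern)
                      for s in range(len(series) - l + 1)])
    return float(dists.min()), int((dists <= eps).sum())

# Toy usage: the pattern occurs (approximately) twice in the series.
series = np.array([0.0, 1.0, 0.0, 5.0, 0.1, 1.1, 0.1, 5.0])
pattern = np.array([0.0, 1.0, 0.0])
print(dist_and_count(series, pattern, eps=0.25))  # (0.0, 2)
```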
dist(X, Z) = √(dist(X, Y)² + dist(Y, Z)²) = √(ε² + ε²) = ε√2 > ε  ⟹  ε-similar(X, Z) = False

Experiment

The PLAsTiCC dataset

As far as we know, existing methods published on uTS classification have never been evaluated on real uncertain time series datasets, but solely on simulated datasets. The corresponding simulated datasets have never been made publicly accessible, either for reproducibility purposes or to facilitate research on uTS. In this work, we evaluate our method on a realistic, publicly available uncertain time series dataset from the astrophysics domain. The Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) dataset contains uncertain time series representing the brightness evolution of astronomical transients including supernovae, kilonovae, active galactic nuclei and eclipsing binary systems [START_REF] Allam | The photometric LSST astronomical time-series classication challenge (PLAsTiCC): Data set[END_REF], among others. The uncertainty in this dataset is modeled by the probability density-based model: for each measurement, the astrophysicists provide a best estimate and the maximal possible deviation from that estimate. Each object is represented as a multivariate uncertain time series with 6 dimensions named u, g, r, i, z, y, each corresponding to a particular broadband wavelength filter. After the challenge was finished, the organizers made available an updated version of the data through Zenodo [PLASTICC Team and PLASTICC Modelers 2019] with some bug fixes and the classification answers for both the training and test sets. In this work, we demonstrate our method using only uncertain time series from the training set, but the methodology is general enough to be extended to the test set. There are 7848 transients in the dataset, grouped into 14 different classes, and the number of objects per class is highly imbalanced. More specifically, the most underpopulated class contains only 0.3% of the objects, whereas the most populated one contains 29% of the objects. Furthermore, the dataset contains a lot of missing observations. We handled this with the help of astrophysicists, who suggested filling missing data using a rolling average with a window of length 5: missing values and the corresponding error bars are replaced by the mean and standard deviation of the window. This procedure turned the original dataset into a homogeneously sampled uncertain time series dataset. The preprocessed dataset is made public 2 . Our implementation uses the Python programming language and is based on the Scikit-learn machine learning library [Pedregosa et al. 2011] and the Sktime time series machine learning library [Löning et al. 2019]. The experiment is run on a computing node equipped with 1 Gb of RAM and an AMD EPYC 7452 processor containing 64 logical cores running at 2.35 GHz. The source code of our experiments and all the results we discuss in this chapter are publicly available on GitHub 3 .

Results and discussion

Since PLAsTiCC is a multivariate uncertain time series dataset, the subsequence transformation is performed on each dimension independently. The transformations from each dimension are then concatenated to build a large matrix which is subsequently fed to the supervised classifier. We used 80% of the data for training and the remaining 20% for testing.
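For concreteness, the gap-filling procedure described above (a rolling window of length 5, with the missing value replaced by the window mean and the missing error bar by the window standard deviation) can be sketched with pandas as follows. This is one possible reading of the procedure, using a trailing window; column names and values are illustrative, not the PLAsTiCC schema.

```python
import numpy as np
import pandas as pd

def fill_missing(flux: pd.Series, flux_err: pd.Series, window: int = 5):
    """Replace missing fluxes by the rolling mean and the corresponding
    error bars by the rolling standard deviation of the same window."""
    roll = flux.rolling(window, min_periods=1)
    flux_filled = flux.fillna(roll.mean())
    err_filled = flux_err.where(flux.notna(), roll.std().fillna(0.0))
    return flux_filled, err_filled

# Toy light curve with two missing observations (illustrative values only).
flux = pd.Series([10.0, 11.0, np.nan, 12.0, 13.0, np.nan, 14.0])
flux_err = pd.Series([0.5, 0.4, np.nan, 0.6, 0.5, np.nan, 0.4])
f, e = fill_missing(flux, flux_err)
print(pd.DataFrame({"flux": f, "flux_err": e}))
```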
Shapelet-based methods results: Shapelet-based classication is a special case of subsequence-based classication which consider only shapelets as relevant subsequences. We considered two shapeletbased methods STC [START_REF] Hills | Classication of time series by shapelet transformation[END_REF]] and UST for their interpretability. For both methods, we kept every parameters to their default values except the minimum information gain parameter which is the threshold used to decide if a separator is a valid shapelet. We tried dierent values for this parameter without success, none of these methods were able to nd a single valid shapelet in the dataset. Since feature extraction was not successful, classication was not possible. This result is due to the dataset being highly imbalanced and the uncertain time series from dierent classes being too similar in shape. The same dimension of two randomly selected samples from two dierent classes is shown on Figure 4.1. The left gure which is a Supernova Type Ia-x (SNIax) looks like a left-shifted version of the right gure which is a Supernova Type Ia-91bg (SNIa-91bg). SNIax and SNIa-91bg are known to be dicult to distinguish by astrophysicists. This observation holds, with dierent magnitude, for other classes in the PLAsTiCC dataset and therefore, any shapelet-based methods might struggle to nd shapelets in this dataset. SAST-based methods results: For this experiment we considered dierent SAST+ congurations in order to measure the eect of taking uncertainty into account, dropping duplicates and counting the number of occurrences of patterns (i.e patterns frequency). We named congurations that ignore uncertainty as SAST<X> and those which take uncertainty into account as uSAST<X>, where <X> is either : i) an empty string to specify (a) Supernova Type Ia-x (b) Supernova Type Ia-91bg that duplicate subsequences are not removed and the patterns frequency is ignored; ii) the character d, meaning that duplicate patterns are removed; iii) the string dc, meaning that duplicate patterns are removed and the frequency of patterns is taken into account. We use three dierent supervised classiers, namely Random Forest (RF), eXtreme Gradient Boosting (XGBoost) and the Ridge regression with Leave-One-Out cross-validation (RidgeCV). The cross-validation procedure is used to nd the best regularization parameter. We set the minimum and maximum subsequence lengths to 20 and 60 respectively, with a step of 10. Compared to a step of 1, a step of 10 reduces the chance of having similar subsequences while reducing the number of subsequences to be used. We observed that the classication performance is better with this setup as can be seen in Appendix B. The ε-similarity is computed with ε = 0.25 as experiments shown that too much relevant subsequences are discarded with higher values. The parameters of the classiers are left to their default values, except for the regularization parameter in RidgeCV which is selected using cross-validation. As the reference time series are chosen randomly, we run each experiment 3 times and we report the average precision, recall, F1 score, cross entropy loss and the time taken for training and inference (in hours). As PLAsTiCC is an imbalanced multiclass dataset, we use a weighted average to compute the precision, recall and F1 score; the weights being the percentage of each class in the dataset. The data set includes 6 classes with overall similar behavior (42,52,62,67,90,95). 
Among these, astronomers are specially interested in type 90 (SNIa), which is used as distance indicator in cosmological analysis [START_REF] Ishida | [END_REF]]. Reporting our results as a binary problem with class 90 against all others, we achieve 85% precision, 81% recall and 82% F1 score. Therefore, our method is able to correctly classify a high proportion of SNIa despite its similar behavior to other classes. Ablation study: Here, we study the impact of taking uncertainty into account. In particular, we compare the results obtained when uncertainty is ignored (Table 4.3) to the results obtained when uncertainty is taken into account (Table 4.1). Explainability: One of the best properties of subsequence-based classication is its interpretability. The explanation could be done either locally, when it concerns only a single instance, or globally when it concerns the whole model. In any case, this is generally done by inspecting the model in order to extract the most discriminative subsequences [Ye & Keogh 2009a]. These subsequences could also be found using a posthoc method such as LIME [START_REF] Ribeiro | [END_REF] or SHAP [Lundberg & Lee 2017], but since our approach is explainable-by-design, inspecting the model is sucient. More specically, since the classier used in our model is tree-based, the information gain can be used as a measure of the discriminative power of the subsequences similarly to what is done in shapelet-based methods. The local explainability of our method is obtained by inspecting the subsequence on which the model focused the most in order to make the prediction for a single instance, these are the subsequences which led to the highest information gain (see Denitions 2.4 and 2.5). . This conrms that our model focuses on the relevant regions and dimensions of the time series to make the classication. Being able to correctly learn the dimension's relevance is crucial as the discriminative subsequence may appear only in a subset of the dimensions. Furthermore, the location of the discriminative subsequence may not be the same on every dimension. In PLAsTiCC in fact, depending how far is the object, the light may be visible only on some wavelengths (i.e. dimension). Due to the accelerated expansion of the universe, objects which are further away are also moving with a higher velocity. Thus, there is a Doppler eect in the observed light which shifts it to higher wavelengths. Thus, closer (galactic) objects will generally have higher signals in lower wavelengths than further away (extragalactic) ones. Our method perfectly captures the Doppler effect unlike XEM which cannot identify from which dimensions the discriminative subsequences is located. It is observed that the discriminative power is generally due to the value, but sometimes it is due to the uncertainty (for example subsequences #18 and #19). Seeing that some subsequences are important because of their uncertainty emphasizes the fact that taking uncertainty into account is important and improves the classication performance. There are also some subsequences that are too similar despite the fact that duplicate subsequences have been dropped; for instance, the subsequences #3 and #7. This is because the similarity between subsequences is computed using the Uncertain Euclidean Distance (UED) which considers the subsequences to be perfectly aligned. 
This problem can be resolved by using an elastic distance such as the DTW distance at the cost of more computational time since such distances generally have at least quadratic time complexity while UED is linear. From the domain knowledge point of view, these discriminative subsequences are able to grasp the important shapes commonly associated with their respective class of astronomical transients. Subsequences #1 and #6 were taken from class 16 (eclipsing binary) and clearly show the expected light curve from a well measured binary system where one star eclipses the other exactly in the line of sight, thus leading to a decrease in brightness. Subsequence #19 is also associated to the eclipsing binary class, but in this case the signal is less clear, corresponding to an object which is further away thus leading to low signal and large uncertainties. We also call attention to the supernova-like behavior exhibited by subsequences #4 and #9 one single burst events whose brightness are only visible for weeks to Conclusion and future directions The classication of time series with available uncertainty measures is an underexplored and challenging task. In this work, we proposed an approach to perform this task with a global F1 score of 70%, without using techniques such as data augmentation nor oversampling. The explainability of the proposed approach allows domain experts to not only understand individual predictions, but also to characterized each class by a set of subsequences with high discriminative power, which can then be used to perform other important tasks in astrophysics such as novel astronomical transients detection and anomaly detection. The ablation study shown the positive impact of taking uncertainty into account. A limitation of the approach is the time complexity, which could be considerably high for datasets with relatively long uncertain time series. A future direction would consist of further reducing the number of subsequences to be used and optimizing the computation time of the method. Another future direction would consist of nding a better way of managing uncertainty during the classication step in order to improve the performances. Nevertheless, the results presented in this work illustrate how our approach is eective in identifying meaningful subsequences which, beyond the classication performance, can provide important information to the expert. The approach is exible enough to be applied to other scientic domains where uncertain time series are the common, thus enabling future advances in multiple subject areas. Key points We reduced SAST's computational time and proposed the uncertain SAST (uSAST), an extension of SAST to uncertain time series classication. We applied uSAST to a realistic uncertain time series dataset and demonstrated its classication eectiveness as well as explainability. Communications Michael F. Chapter 5 General conclusion and future directions In this chapter, we summarize our contributions, discuss the limitations of this work and give some future directions. We also give the list of publications that we did throughout this thesis. Finally we proposed uSAST, an extension of SAST to uncertain time series classication. We assessed uSAST not only on simulated datasets, but also on a real uncertain time series dataset from the astrophysics domain named PLAsTiCC. As far as we know, this is the rst open-sourced experiment on real uncertain data. 
uSAST shown good classication performance on PLAsTiCC as well as great local and global explanations. Scientic valorization We valorized and shared our contributions at national and international conferences. In particular, the rst UST version has been accepted and presented at the French Limitations and perspectives We proposed to use uncertainty propagation to take uncertainty into account in uncertain time series classication. We shown that this approach lead to models that are more robust and accurate, but also more natural to users and therefore, more trustable. Both UST and uSAST have three steps: feature selection, feature transformation and classication. We successfully took uncertainty into account in the three steps, but we strongly believe that uncertainty handling could be improved in the classication step. In fact, we used regular supervised classiers (Random Forest, XGBoost, etc) for this step, these classiers consider uncertainties as regular features although they should be considered as meta features. This step would be more eective if the classier used was aware of uncertainties and dierentiated them from regular features. Few works exist in the literature to achieve that, namely the Decision Tree for Uncertain data [START_REF] Qin | [END_REF], Qin et al. 2011]. Therefore, it would be interesting to explore how the work started in this thesis could be improved using uncertain supervised classiers. Given that preprocessing generally makes learning easier, it would be legitimate to ask why we did not mention it in this work. In fact, preprocessing is generally application and data-dependent, and therefore not straightforward to be integrated in an end-to-end approach. Instead, we wanted our models to have good performance on the raw data. Nevertheless, it would be a good future direction to see if the performance of our models could be improved by applying some preprocessing of the time series rst. One of this techniques could be the uncertain moving average (UMA) and the uncertain exponential moving average (UEMA) [Dallachiesa et al. 2012]. In order to ensure the explainability of our methods, we used only the shapelet features, which are acknowledge in the literature for their natural explainability. However, it has been shown in the literature that time series classication is more eective when dierent features (shapelet, interval, word, etc) are combined together. Hence, another future direction will be to take uncertainty into account in interval, dictionary, spectral, hybrid and deep learning methods for time series classication. We modeled uncertainty in this work using two values which are the best estimate (or guess) and the standard deviation from that estimate, however, there exist other representations: random sets, possibility distributions, probability intervals, etc [START_REF] Destercke | Unifying practical uncertainty representationsI: Generalized p-boxes[END_REF] which have particular properties. It is worthy to analyze how these representation could be used with time series data. For instance, using imprecise probability modeling will ease the adaptation of existing works to time series data [START_REF] Destercke | Ranking of fuzzy intervals seen through the imprecise probabilistic lens[END_REF], Carranza Alarcon & Destercke 2019]. Contents 0.1 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 0.2 Uncertainty, not a bad thing ! . . . . . . . . . . . . . . . . . . 2 0.3 Main contributions . . . . . . . . . . . . . . . . . . . . . . . . 
4 0.4 Report structure . . . . . . . . . . . . . . . . . . . . . . . . . . 5 0.1 Context In this era of Big Data, machine learning (ML) has become ubiquitous in almost any aspect of human life. Contents 1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.1.1 What is a time series . . . . . . . . . . . . . . . . . . . . . . . 7 1.1.2 Notion of uncertainty . . . . . . . . . . . . . . . . . . . . . . 8 1.1.3 Time series classication . . . . . . . . . . . . . . . . . . . . . 12 1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 1.2.1 Certain time series classication approaches . . . . . . . . . . 14 1.2.2 Uncertain time series classication approaches . . . . . . . . 24 1.3 Our proposed general approach for uTSC . . . . . . . . . . . 28 1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ω. In order to better understand this general denition of time series, concrete examples are shown on Figures 1.1 and 1.2. Figure 1 1 Figure 1.1: A time series modeling the the evolution of daily new covid-19 cases in Africa Figure 1 1 Figure 1.2: A time series modeling the groundwater level in the french department Puy-de-dôme Figure 1 . 1 Figure 1.5 illustrates some uncertain time series. The rst two gures are from [Siyou Fotso et al. 2020] and show a multiset uncertain time series (Figure 1.5a) and a probability density function uncertain time series (Figure 1.5b). The last two gures use the PDF representation, the blue lines are obtained by connecting the best estimates, these lines can be seen as the best estimate of the uncertain time series. The red bars are the possible deviation from the best estimates. Unlike Figure 1.5b, the probability distributions are unknown on the last two gures. 1. 7 .Figure 1 71 Figure 1.7: Natural alignment of both time series. ED(T 1 , T 2 ) = 2.82 Figure 1 . 8 : 18 Figure 1.8: Elastic alignment of both time series. DT W (T 1 , T 2 ) = 0 Figure 1 . 9 : 19 Figure 1.9: Relevant intervals in a dataset (image obtained by annotating an original image from [Lines et al. 2018]) Figure 1 . 1 Figure 1.10: Overview of time series classication based on intervals Figure 1 . 1 Figure 1.11: Illustration of shapelets (image obtained by annotating an original image from [Lines et al. 2018]) Figure 1 1 Figure 1.12: Overview of time series classication based on shapelets Figure 1 . 1 Figure 1.13: Illustration of a dictionary dataset [Lines et al. 2018] Figure 1.14: SAX illustration [Lin et al. 2007]. The string representing the time series is baabccbc. Figure 1 . 1 Figure 1.15: Illustration of spectral dataset [Lines et al. 2018] 2022]. In addition to PPV, MultiROCKET considers the Mean of Positive Values (MPV), the Mean of Indices of Positive Values (MIPV) and the Longest Stretch of Positive Values (LSPV). Figure 1 1 Figure 1.16: Overview of existing time classication approaches the fact that any measure is uncertain (either for epistemic, or for aleatoric reasons), researchers have not focused on uTSC as much as they did for TSC.The few methods that exist for uncertain time series classication work in the same way: combining an uncertain similarity measure with the 1-Nearest Neighbor (1-NN) classier. The development has been therefore focused on the building of similarity measures for uncertain time series. al. 2009] and can be used when the uncertainty is represented by a set of possible observations at each time step. 
It is used to compute the probability that the similarity between two uncertain time series is below a user-dened threshold. Therefore, MUNICH does not actually compute the uncertain similarity and has never been used in a classication context. The uncertain similarity measure PROUD [Yeh et al. 2009] also computes the probability of the similarity being below a threshold, but unlike MUNICH, uses PDF representation of uncertain time series. The uncertain similarity distance DUST [Sarangi & Murthy 2010] has been been proposed as a generalized notion of distance between uncertain time series to overcome the limitation of PROUD and MUNICH. It makes fewer assumptions on the uncertainty compared to PROUD, is computationally less expensive compared to MUNICH and degenerates to the Euclidean distance when the uncertainty is very this work. About 10 years after DUST, the uncertain similarity measure FOTS has been proposed [Siyou Fotso et al. 2020]. Compared to DUST, MUNICH and PROUD, FOTS does not explicitly model the uncertainty, but assumes the time series are noisy and there is no information nor assumptions made on that noise. FOTS uses Eigenvalues decomposition to keep only the most important components of the uTS and to reduce the noise. Figure 1 Figure 1 11 Figure 1.17: Overview of existing uncertain time classication approaches Figure 2 . 2 Figure2.1 illustrates a node in the proposed decision tree. The blue time series contain the subsequence in the node (i.e they are similar to the subsequence at the node), so they follow the branch labeled with yes. The red time series does not contain the subsequence in the node (i.e they are not similar to the subsequence at the node), therefore the follow the branch labeled with no. Figure 2 2 Figure 2.1: An illustration of a node in a shapelet decision tree for a binary time series classication the minimum and the maximum length of an uncertain shapelet M IN and M AX. This algorithm uses three subprocedures: GenCand(T, M IN, M AX) which generates every possible uncertain shapelet candidates from the input uncertain time series T . These candidates are uncertain subsequences of T , with length at least M IN and at most M AX. AssessCand(cands, D) which computes the quality of each candidate in the list of candidates cands. The quality of a candidate is the information gain it produces when used as a separator for the dataset. ExtractBest(C, Q, k) which takes the list of uncertain shapelet candidates C, their associated qualities Q and returns rst k uncertain shapelets with highest qualities. In summary, Algo. 1 generates every uncertain subsequences of length at least M IN and at most M AX from the dataset, assesses the quality of each one by computing the information gain obtained when it is used as a separator for the dataset and nally returns the k subsequences that produce the highest information gain. The parameters M IN and M AX should be optimized to reduce the execution time of the algorithm. With the knowledge of the domain, the length of a typical shapelet can be estimated and used to set M IN and M AX in order to reduce the number of shapelet candidates. By default M IN is set to 3 and M AX is set to m -1, where m is the length of the time series. 10 return S // Top k uncertain shapelets 11 end corresponding uncertainties. The scaled and transformed dataset is nally returned by the algorithm. The third and last step is the eective classication. 
A supervised classier is trained on the uncertain transformed dataset, such that, given the feature vector of an unseen uncertain time series, it can predict its class label. Since the uncertainty have been propagated, the training process can be aware of uncertainty by taking it as part of the input. More specically, best guesses are features and uncertainties are features of best guesses, and thus are meta-features.There exists many supervised classiers in the literature for the classication of uncertain tabular data[Li et al. 2020a, Aggarwal & Yu 2009]. We have decision tree-based methods[START_REF] Tsang | [END_REF], Qin et al. 2009], SVM-based methods[Bi & Zhang 2005, Yang & Gunn 2007, Li et al. 2020b] and Naive Bayes-based methods[START_REF] Qin | [END_REF], Qin et al. 2010]. Since the transformed data is an uncertain tabular data, uncertain supervised classiers can be used for the classication step. Furthermore, any supervised classier can be used as soon as the transformed dataset is formatted in a way that is accepted by that classier. Fig. 2 2 Fig. 2.2 gives an overview of the classication process. During the training step, top-k uncertain shapelets are selected and an uncertain supervised model (illustrated here by a decision tree for simplicity) is trained on the uncertain transformed dataset. During the test step, the uncertain shapelets extracted during the training step are used to transform the test set, and the trained model is used to predict the class labels of the test set according to the result of the transformation. We call this model UST for Uncertain Shapelet Transform classication. [ [START_REF] Fotso | [END_REF]. Hence, the number of windows (m) is equal to the length of a window (w) which is equal to half the length of the time series. The time index (t) used to compute the auto-covariance matrices is equal to half the length of the time series, and nally the number of eigenvectors (k) is set to 4. The uncertainty result for an instance from the Chinatown dataset is shown by Fig.2.3 for c = 0.6. The orange line is the original time series, and the blue one is the obtained uncertain time series. Sometimes, the original time series does not cross the uncertainty interval (vertical red bars), these cases are there to represent situations where the uncertainty has not been well estimated, maybe because the expert has been too optimistic. Situations where the expert had been too pessimistic are represented by very large uncertainty bars. During the training phase, original time series are not used, only the uncertain time series are used. We implemented UST in the Python programming language, and we used the open source package sktime [Löning et al. 2019]. The code and the data used for our experiment are publicly available 1 . Figure 2 2 Figure 2.3: Illustration of uncertainty for an instance from the Chinatown dataset. the uncertainty level is c = 0.6 Figure 2.4: Critical dierence diagrams of UED-based models regarding the ordering strategy for some levels of uncertainty Then we adapt the well known shapelet algorithm to the context of uncertain time series using Chapter 2. Uncertain Time Series Classication With Shapelet Transform U ED and propose the uncertain shapelet transform algorithm (UST). We have run experiments on state of the art datasets. The results show that propagating uncertainty during the shapelet transformation and then using an uncertain classier lead to a more accurate model for uncertain time series classication. 
The idea of uncertainty propagation can be used with any dissimilarity measure, and any uncertain supervised classier can be used in the classication phase. Contents 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 3.2 SAST: Scalable and Accurate Subsequence Transform . . . 49 3.2.1 Reducing the number of shapelet candidates . . . . . . . . . . 49 3.2.2 Identify shapelets using feature importance analysis . . . . . 53 3.2.3 Time series classication with SAST . . . . . . . . . . . . . . 54 3.2.4 SAST time complexity . . . . . . . . . . . . . . . . . . . . . . 55 3.2.5 Ensemble of SAST models . . . . . . . . . . . . . . . . . . . . 56 3.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.3.1 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.3.2 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 3.3.3 Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . 67 3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 48 Chapter 3. Scalable and Accurate Subsequence Transform for TSC 50 Figure 3.1: Illustration of invariance in recognition. Figure 3 . 3 Figure 3.3 shows 4 randomly selected instances for each class. Instances of the same class are superposed in order to expose global patterns. The gure emphasizes the previous observation that class 1 contains instances that start by a deep valley while class 2 are instances that are more at at the beginning.Based on this observation, we propose the following statement: 52 Figure 3.4: Top 5 shapelets extracted for each class of the Chinatown dataset by the shapelet transform algorithm. Following Figure 3.5. Dierent variants of the same shapelet are not learned anymore. Forthis dataset, exactly one shapelet has been selected for each class. As we will show in Section 3.3, STC-k is signicantly less accurate than STC, even when k is equal Figure 3 3 Figure 3.5: Shapelets extracted by STC on the Chinatown dataset using a single randomly selected instance per class to generate shapelet candidates. Figure 3 . 6 : 36 Figure 3.6: Overview of the SAST method 58 Figure 3.7: Comparison of STC-k to STC in terms of accuracy chapter, SAST-Ridge-A is the approximated SAST-Ridge model with length_list = {7, 11, 15}. Figure 3 . 9 : 39 Figure 3.9: Critical dierence diagram between approximated SAST models Figure 3 . 3 Figure 3.11: Critical dierence diagram between SAST models -1 fails to nd any shapelet on it. These results conrm our though that pruning shapelet candidates, without taking into account the classier can lead to very inaccurate classication. Figure 3.12: Pairwise comparison of SAST, STC and STC-1 Figure 3 . 3 Figure 3.13: Critical dierence diagram between SAST, STC and STC-1 Figure 3 . 3 Figure3.14 shows a pairwise comparison of these method and the critical different diagram on Figure3.15 shows how signicant is each model compared to others in terms of accuracy. LS, ELIS++ and SAST are not signicantly dierent in terms of accuracy, however they outperform FS. It is important to note that LS and ELIS++ do not select shapelets from the training set, but learn them through an optimization process. Therefore the shapelet space is unlimited, the learned shapelets are unpredictable as well as the time required for convergence. Furthermore, nding the hyper-parameters and the appropriate shapelet initialization for -COTE accuracy and could reduce its time complexity since the shapelet module is the most time consuming one in HIVE-COTE. 
When TS-CHIEF was proposed, its authors decided not to exploit shapelet features because of their computation time. With the core shapelet recognition task we introduce in this work, we believe that shapelet features can be added to TS-CHIEF at low cost and that this could increase the accuracy of this model.

Figure 3.16: SAST vs SOTA.

Figure 3.18: SAST, ELIS++, LS and FS percentage of wins per dataset type, considering the 35 datasets used in the ELIS++ paper.

Figure 3.19: SAST, HIVE-COTE, TS-CHIEF and ROCKET percentage of wins per dataset type.

Figures 3.21 and 3.22 show the top 5 best shapelets plotted on the reference time series for the Chinatown dataset with respect to SAST-Ridge and SAST-RF respectively. The top rows of the figures are the reference time series selected from class 1, while the second rows are the reference time series selected from class 2. A perfect match between a shapelet candidate and a reference time series means that the shapelet has been extracted from that reference time series. Hence, the top 5 shapelets can be located on the reference time series they were extracted from.

Figure 3.21: Top 5 shapelets learned by SAST-Ridge on Chinatown.

Figure 3.22: Top 5 shapelets learned by SAST-RF on Chinatown.

Figure 3.23: Explanation of SAST-Ridge predictions on two random test instances.

Algorithm 4: SAST
Input: D = {(T1, c1), (T2, c2), ..., (Tn, cn)}; k: the number of instances to use per class; length_list: the list of subsequence lengths; C: the classifier to use.
begin
  Dc <- randomlySelectInstancesPerClass(D, k)        // randomly select k instances per class from the dataset
  S <- generateShapeletCandidates(Dc, length_list)   // generate every pattern of length in length_list from Dc
  Df <- {}                                           // transform the dataset using every pattern in S
  for i <- 1 to n do
    xi <- []
    for j <- 1 to |S| do
      xi[j] <- dist(Ti, Sj)
    end
    Df <- Df U {(xi, ci)}
  end
  clf <- trainClassifier(C, Df)                      // train the classifier on the transformed dataset
  return (clf, S)                                    // the trained classifier and the shapelet candidates
end

Using Definition 4.1, we can reduce redundancies and count subsequence frequencies in SAST. The updated SAST method, hereafter SAST+, is detailed in Algorithm 5.

Algorithm 5: SAST+. Its inputs are those of Algorithm 4 plus the ε-similarity parameter ε. It follows Algorithm 4 with two changes: candidate generation over Dc keeps a single representative among ε-similar subsequences, and the transform uses the procedure distAndCount(Ti, Sj, ε), which returns Dist(Ti, Sj) together with the count of ε-similar matches. The algorithm returns the trained classifier and the subsequences.

The time complexity of the SAST method is O(Nc) + O(k Nc m^2) + O(n m^3) + O(classifier), where Nc is the number of classes, n the number of time series, m the length of the time series and k the number of reference time series per class. In practice, it is not necessary to have k greater than one. Removing redundancies in SAST is done only once (during the training phase) with a theoretical time complexity of O(k m^4); counting frequencies is done while computing the distance, in constant time. Therefore, the SAST+ time complexity is O(Nc) + O(k Nc m^2) + O(n m^3) + O(classifier) + O(k m^4), which is asymptotically equivalent to O(classifier) + O(k m^4).
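To make the transform in Algorithm 4 concrete, the following is a minimal Python sketch of the SAST training step: pick k random reference series per class, take every subsequence of those references as a candidate, build one best-match-distance feature per candidate, and fit a Ridge classifier. It is an illustrative re-implementation rather than the thesis code; the plain (non-normalized) Euclidean window distance, the helper names and the toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def subsequences(series, length):
    # All contiguous windows of the given length from a 1-D series.
    return np.array([series[i:i + length] for i in range(len(series) - length + 1)])

def min_distance(series, pattern):
    # Distance between `pattern` and its best-matching window in `series`.
    windows = subsequences(series, len(pattern))
    return np.min(np.linalg.norm(windows - pattern, axis=1))

def sast_fit(X, y, lengths=(7, 11, 15), k=1, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Keep only k random reference series per class (candidate pruning).
    refs = []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], size=k, replace=False)
        refs.extend(X[idx])
    # 2. Every subsequence of every reference series is a candidate feature.
    candidates = [s for ref in refs for L in lengths for s in subsequences(ref, L)]
    # 3. Transform: one column per candidate, holding its best-match distance.
    features = np.array([[min_distance(x, s) for s in candidates] for x in X])
    # 4. Let the classifier decide which candidates matter (feature importance).
    clf = RidgeClassifierCV().fit(features, y)
    return clf, candidates

# Toy usage on random data.
X = np.random.randn(20, 40)
y = np.repeat([0, 1], 10)
clf, candidates = sast_fit(X, y)
```

At prediction time the same candidates are reused to transform new series before calling clf.predict, which is what makes the transform reusable and the learned weights interpretable as candidate importances.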
Removing redundancies makes SAST+ much faster than SAST during inference.

Figure 4.1: Two supernovae from PLAsTiCC. They look similar in terms of shape although they are from distinct classes.

Figure 4.2 shows local explanations for a Supernova Type Ia (SNIa) and a Core-collapse Supernova Type II-P (SNII-P) correctly classified by the model. The P in the denomination of the latter refers to the plateau phase observed in its time series just after maximum brightness. This feature is clearly shown in the bottom panel of Figure 4.2. This confirms that our model focuses on the relevant regions of the time series.

A global explanation is obtained by building a subsequence-based profile of each class. The top 20 most discriminative subsequences from the uSASTd model are shown in Figure 4.3. Subsequences from the same class label are plotted with the same color; the rank, class label and type of each subsequence are given at the top of its corresponding plot. The type is either Value, if the discriminative power comes from the value itself, or Uncertainty, if the discriminative power comes from the uncertainty.

Figure 4.2: Local explainability of a Supernova Type Ia (top) and a Core-collapse Supernova Type II (bottom).

Figure 4.3: The top 20 most discriminative subsequences in the PLAsTiCC dataset.

The fact that such characteristic behaviors are easily spotted in the list of most important subsequences certifies that our final classification results are in line with the expert definition of such classes and hence shows that our model is safe and trustworthy. Moreover, further investigation of a more extensive list of important subsequences has the potential to reveal unexpected time series shapes and promote the development of more detailed theoretical models for such astrophysical sources.

Contents: 5.1 Conclusion (5.1.1 Main contributions; 5.1.2 Scientific valorization; 5.1.3 Open sourced codes and data); 5.2 Limitations and perspectives; 5.3 Recommendations; 5.4 List of publications (5.4.1 First-authored publications; 5.4.2 Non first-authored publications).

5.1 Conclusion

In this thesis, we addressed the task of classifying uncertain time series. The uncertainty in the data makes this task more challenging than a regular time series classification task. We defined what uncertainty and uncertain time series are, then we showed that the current state of the art on the classification of uncertain time series is limited and difficult to adopt by end-users because of an abrupt disappearance of the uncertainty in the classification process. This work advances the state of the art of uncertain time series classification by proposing explainable, robust and efficient methods. Specifically, we proposed a novel general design of uncertain time series classification which propagates uncertainty throughout the whole classification process. The propagation is achieved using uncertainty propagation techniques, widely used in physics.
General conclusion and future directions

5.1.1 Main contributions

From the novel design, we derived the Uncertain Shapelet Transform (UST), which is a shapelet-based, hence explainable, method for the classification of uncertain time series. UST internally uses the Uncertain Euclidean Distance (UED) that we proposed as a novel similarity measure for uncertain time series, with the ability to give not only the similarity but also the uncertainty on that similarity. We demonstrated the performance of UST on synthetic datasets. Given the high time complexity of existing shapelet-based methods (including UST), we proposed the Scalable And Accurate Subsequence Transform (SAST), a subsequence-based method that significantly reduces the computation time without losing accuracy or interpretability. Concretely, while the state-of-the-art shapelet-based method STC has time complexity O(n^2 m^4), SAST's time complexity is O(n m^3), where n and m are respectively the number and the length of the time series. This corresponds to a reduction by a factor of nm. We also showed that SAST is competitive with the state-of-the-art methods for regular time series classification, HIVE-COTE and ROCKET. We have also demonstrated SAST explainability on the Chinatown dataset from the UCR & UEA archive.

5.1.2 Scientific valorization

An early version of UST was presented at the French national conference on artificial intelligence (CNIA 2020) and at the Workshop on Uncertainty in Machine Learning (WUML 2020), which was hosted by the European Conference on Machine Learning (ECML/PKDD 2020). Later on, the final UST version was presented at the workshop on Large-Scale Industrial Time Series Analysis (LITSA 2021) hosted by the IEEE International Conference on Data Mining (IEEE ICDM 2021). SAST has been accepted for a long presentation at the French conference on machine learning (CAp 2021) and is currently under review at the Elsevier journal Pattern Recognition. A working paper on uSAST is available on the open archive HAL.

5.1.3 Open sourced codes and data

We open sourced the codes and data used throughout this thesis in order to facilitate research and applications in uncertain time series classification in particular, and in uncertain time series analysis in general. Access links are given below:
UST: https://github.com/frankl1/ustc
SAST: https://github.com/frankl1/sast
uSAST: https://github.com/frankl1/usast

Figure A.1 shows a zoom on Figure 3.20b. We can now clearly see that SASTEN is slower than SAST and SASTEN-A whatever the number of time series in the dataset. SASTEN-A running time is quite linear because the shapelet space is constant and only the transform time increases with the number of series.

Figure A.1: Running time in seconds of SAST, SASTEN and SASTEN-A regarding the number of time series.

Other time series analysis tasks are forecasting [Hewamalage et al. 2022, Mbouopda 2022, Mbouopda et al. 2022], novelty detection [Ma & Perkins 2003], anomaly detection [Nakamura et al. 2020, Audibert et al. 2020], motif discovery [Yeh et al. 2018] and querying [Ding et al. 2008, Yagoubi et al. 2018]. These tasks can be classified as either supervised or unsupervised, as shown in Figure 1.6.
1.2 Related work

1.2.1 Certain time series classification approaches

This subsection describes the existing approaches to perform time series classification when uncertainty is assumed to be absent or negligible: this is what we call certain or regular time series classification.

Definition 1.5 (Time series classification)

Figure 1.6: Time series analysis tasks hierarchy (classification, regression, dimensionality reduction, clustering, forecasting, motif discovery, novelty/anomaly detection and querying, grouped into supervised and unsupervised tasks).

This thesis focuses on the classification task of time series whose values are uncertain. The presence of uncertainty makes this task more complicated since, even for humans, it is not easy to take decisions when the available information is inaccurate. Because the existing methods for the classification of uncertain time series are inspired from certain time series classification methods, we present the latter methods first. Time series analysis includes several tasks, among them the four classical machine learning tasks, which are classification [Ye & Keogh 2009b, Bagnall et al. 2017], clustering [Ulanova et al. 2015, Siyou Fotso et al. 2020], regression [Tan et al. 2021], and dimensionality reduction [Lewandowski et al. 2010, Lin et al. 2007].

The Random Interval Spectral Ensemble (RISE) [Flynn et al. 2019] combines features from the frequency and interval domains, and uses ensemble techniques to reduce the variance. RISE is significantly more accurate than any other spectral method. The Hierarchical Vote Collective of Transformation-Based Ensembles (HIVE-COTE) [Lines et al. 2018], inspired from the Collective Of Transformation-based Ensembles (COTE), is the state-of-the-art method for time series classification. It combines 35 time series classifiers grouped in 5 modules: 1 shapelet module, 1 whole-series module, 1 dictionary module, 1 interval module and 1 spectral module. Each module is made of ensemble classifiers. The final classification is obtained by a majority vote. The second version of HIVE-COTE has been developed exploiting the recent advances in time series classification [Middlehurst et al. 2021]. This version is made of four modules of ensemble classifiers.

Table 1.1: Summary of the current time series classification state of the art (SOTA), organized by category (whole series, subsequence-based, ...), with the features used, the explainability (absent or by design) and representative methods (1NN-DTW, EE, FastEE, XEM, ...).

The Euclidean distance (ED) is widely used in the literature to measure the dissimilarity between time series. It is particularly used in shapelet-based approaches [Ye & Keogh 2009b, Hills et al. 2014, Bagnall et al. 2017]. Using the uncertainty propagation properties, an uncertain dissimilarity measure based on ED can be computed for two uncertain time series T1 and T2 by propagating uncertainty through the ED formula. We name the obtained measure UED, for Uncertain Euclidean Distance; it is obtained by applying the propagation rules to each term of the ED formula.

An uncertain measure can be considered as a random variable with mean equal to the best guess and standard deviation equal to the uncertainty. Given this consideration, a stochastic order can be defined on the set of uncertain measures. A random variable X is stochastically less than or equal to another random variable Y (noted X ≤st Y) if and only if Pr[X > t] ≤ Pr[Y > t] for all t in I, where I is the union of the domains of X and Y [START_REF] Marshall | [END_REF]. The stochastic order can be rewritten and evaluated on a finite set of k points covering I. Larger values of k lead to a better approximation of I, but slow down the classification process.
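The extracted text omits the UED equation itself, so the following Python sketch only illustrates the propagation idea behind UED: a first-order (linear) propagation of per-timestep uncertainties through the Euclidean distance, assuming independent errors. The exact definition used in the thesis may differ; the function name and the toy values are illustrative.

```python
import numpy as np

def ued(x, dx, y, dy):
    """Best-guess Euclidean distance plus a first-order propagated uncertainty.
    x, y: best-guess series; dx, dy: per-timestep uncertainties (same shape)."""
    diff = x - y
    s = np.sum(diff ** 2)                        # squared distance
    ds = np.sum(2 * np.abs(diff) * (dx + dy))    # linear propagation through each (x_i - y_i)^2
    d = np.sqrt(s)
    dd = ds / (2 * d) if d > 0 else np.sqrt(ds)  # propagation through the square root (crude fallback at d = 0)
    return d, dd

# Toy usage.
x, y = np.array([1.0, 2.0, 3.0]), np.array([1.5, 1.5, 2.0])
dx = dy = np.full(3, 0.1)
print(ued(x, dx, y, dy))  # (distance, uncertainty on the distance)
```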
We tried several values of k, and k = 100 worked best. We have also used a relaxed version of the stochastic ordering: given two random variables X and Y, we have X ≤st Y if the number of values t in I such that CDF_X(t) > CDF_Y(t) is greater than the number of values t in I such that CDF_X(t) ≤ CDF_Y(t).

Table 2.1: Summary of the models that are compared in our experiments.
ST: measure ED, natural ordering, GNB classifier, 10-minute time contract.
UST(DUST_NORMAL): measure DUST NORMAL, natural ordering, GNB classifier, 10-minute time contract.
UST(DUST_UNIFORM): measure DUST UNIFORM, natural ordering, GNB classifier, 10-minute time contract.
UST(FOTS): measure FOTS, natural ordering, GNB classifier, 120-minute time contract.
UST(UED, GNB): measure UED, simple, stochastic and interval orderings, GNB classifier, 10-minute time contract.
UST(UED, UGNB): measure UED, simple, stochastic and interval orderings, UGNB classifier, 10-minute time contract.

Each uncertain feature vector is flattened such that the first half contains the best guesses and the second half contains the uncertainty deviations. This is required because the Gaussian naive Bayes (GNB) classifier does not take uncertainty into account.

2.4.2 Datasets

We used datasets from the well-known UCR repository [Dau et al. 2019]. Instead of running our experiments on the whole repository, we use only datasets on which shapelet approaches are known to work well. According to [START_REF] Bagnall | [END_REF], shapelet approaches are more suitable for electric device, ECG, sensor and simulated datasets. Table 2.2 gives a summary of the 15 shapelet datasets on which we conducted our experiments. The first column is the name of the dataset, the second is the number of instances in the training/test set, the third is the length of the time series and the fourth and last column is the number of different classes in the dataset. Each dataset is already split into the training and the test sets on the repository.

Michael F. Mbouopda, Engelbert Mephu Nguifo. Classification des Séries Temporelles Incertaines Par Transformation Shapelet. In Conférence Nationale en Intelligence Artificielle (CNIA), pp. 14-21, Jun. 2020.
Michael F. Mbouopda, Engelbert Mephu Nguifo. Classification of Uncertain Time Series by Propagating Uncertainty in Shapelet Transform. In ECML/PKDD Workshop on Uncertainty in Machine Learning (WUML), pp. 1-12, Sep. 2020.
Michael F. Mbouopda, Engelbert Mephu Nguifo. Uncertain time series classification with shapelet transform. In International Conference on Data Mining Workshops (ICDMW), pp. 259-266, Nov. 2020.

Post hoc feature analysis can be performed after training in order to interpret predictions. Two examples of these post hoc explainers are LIME [START_REF] Ribeiro | [END_REF] and SmoothGrad [START_REF] Smilkov | [END_REF] for saliency maps. More methods can be found in the review of [Samek et al. 2020]. Hence, selecting shapelets before classification using information gain can be skipped, since the classifier can automatically learn the top best shapelets during its training iterations, and feature analysis can be used after training to get the learned shapelets.

3.2.3 Time series classification with SAST

Time series classification with SAST (Scalable and Accurate Subsequence Transform) is designed with respect to Proposition 3.2.1 and Proposition 3.2.2. A visual overview of the method is shown in Figure 3.6. There are two main blocks: the subsequence transform and the classifier. The Chinatown dataset is used here. It is a binary dataset with time series of length 24. There are 20 instances in the training set, and we use random oversampling to create bigger versions of this dataset.
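Returning to the relaxed stochastic ordering defined above, here is a small illustrative Python sketch of it, assuming each uncertain measure is modelled as a Gaussian (mean = best guess, standard deviation = uncertainty) and that I is discretized into k evenly spaced points spanning both distributions; the Gaussian model and the 3-sigma span are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm

def stochastically_leq(mu_x, sd_x, mu_y, sd_y, k=100):
    """Relaxed X <=st Y: more grid points where CDF_X(t) > CDF_Y(t)
    than points where CDF_X(t) <= CDF_Y(t)."""
    sd_x, sd_y = max(sd_x, 1e-12), max(sd_y, 1e-12)   # avoid degenerate scales
    lo = min(mu_x - 3 * sd_x, mu_y - 3 * sd_y)
    hi = max(mu_x + 3 * sd_x, mu_y + 3 * sd_y)
    t = np.linspace(lo, hi, k)
    cdf_x, cdf_y = norm.cdf(t, mu_x, sd_x), norm.cdf(t, mu_y, sd_y)
    return np.sum(cdf_x > cdf_y) > np.sum(cdf_x <= cdf_y)

print(stochastically_leq(1.0, 0.2, 1.5, 0.3))  # True: X tends to be smaller than Y
```

Note that X ≤st Y is equivalent to CDF_X(t) ≥ CDF_Y(t) for all t, which is why the comparison is carried out on the cumulative distribution functions.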
Figure 3.20b shows the running time of each model. The running time of each model increases nearly linearly with the number of time series in the dataset. The STC running time starts higher and increases much faster compared to the other models. This is not surprising, since the training time of shapelet methods is strongly related to the number of shapelet candidates, and the number of shapelet candidates in STC increases with the number of time series, while the number of shapelet candidates in a SAST model increases with the number of classes. More precisely, STC takes about 12 minutes on a dataset of 64 time series of length 24, while SAST takes only 2 seconds, SASTEN requires 10 seconds and SASTEN-A needs about 6 seconds. For a dataset with 1024 time series of length 24, SASTEN, SAST and SASTEN-A are respectively about 5000 times, 8000 times and 9000 times faster than STC.

3.3.2.2 Training set size

Michael F. Mbouopda, Engelbert Mephu Nguifo. Scalable and Accurate Subsequence Transform for Time Series Classification. In Conférence sur l'Apprentissage automatique (CAp), pp. 1-27, Jun. 2021.
Michael F. Mbouopda, Engelbert Mephu Nguifo. Scalable and Accurate Subsequence Transform for Time Series Classification. Pattern Recognition, pp. 1-35, submitted in May 2021.

Algorithm 3: ShapeletTransformK
Input: D = {(T1, c1), (T2, c2), ..., (Tn, cn)}: the training dataset; k: the number of instances to use per class; length_list: the list of subsequence lengths; C: the classifier to use; min_ig: the minimum information gain to consider a subsequence as a shapelet.
begin
  Dc <- randomlySelectInstancesPerClass(D, k)        // randomly select k instances per class from the dataset
  S <- generateShapeletCandidates(Dc, length_list)   // generate every subsequence of length in length_list from Dc
  S <- extractShapelet(S, D, min_ig)                 // keep the subsequences whose information gain is at least min_ig
  Df <- {}                                           // transform the dataset using every pattern in S
end
Table 3.1: List of models used in our experiments.
STC-1: Ridge classifier, length_list = {3, 4, ..., m}. STC-k with k = 1, meaning that shapelets are selected from one randomly selected time series per class.
STC-0.25: Ridge classifier, length_list = {3, 4, ..., m}. STC-k that selects shapelets from 25% of the time series of each class, randomly selected.
STC-0.5: Ridge classifier, length_list = {3, 4, ..., m}. STC-k that selects shapelets from 50% of the time series of each class, randomly selected.
STC-0.75: Ridge classifier, length_list = {3, 4, ..., m}. STC-k that selects shapelets from 75% of the time series of each class, randomly selected.
STC: Ridge classifier, length_list = {3, 4, ..., m}. STC-k that selects shapelets from every time series in the dataset.
SAST-RF: Random Forest, length_list = {3, 4, ..., m}. SAST model using a Random Forest classifier.
SAST-Ridge: Ridge classifier, length_list = {3, 4, ..., m}. SAST model using a Ridge classifier with LOO.
SAST-Ridge-A: Ridge classifier, length_list = {9, 13, 15}, {7, 11, 15}, {7, 9, 15} or {9, 11, 15}. Approximated SAST-Ridge, that is, a SAST-Ridge which considers only some subsequence lengths.
SASTEN-Ridge: Ridge classifier. Ensemble of 3 SAST-Ridge.
SASTEN-Ridge-A: Ridge classifier. Ensemble of 3 approximated SAST-Ridge with length_list {3, 4, ..., 9}, {10, 11, ..., 16} and {17, 18, ..., 23} respectively.

Table 3.2: Average accuracy of each model per problem type (HIVE-COTE, ROCKET, SAST, TS-CHIEF; last value: number of datasets).
Device: 0.75 ± 0.0, 0.73 ± 0.0, 0.62 ± 0.0, 0.76 ± 0.0 (1 dataset)
ECG: 0.95 ± 0.06, 0.96 ± 0.05, 0.93 ± 0.07, 0.94 ± 0.07 (4 datasets)
Image: 0.82 ± 0.12, 0.82 ± 0.12, 0.78 ± 0.12, 0.83 ± 0.12 (25 datasets)
Motion: 0.93 ± 0.09, 0.93 ± 0.07, 0.88 ± 0.1, 0.93 ± 0.09 (9 datasets)
Power: 1.0 ± 0.0, 0.94 ± 0.0, 0.91 ± 0.0, 0.99 ± 0.0 (1 dataset)
Sensor: 0.89 ± 0.12, 0.9 ± 0.11, 0.85 ± 0.14, 0.89 ± 0.13 (13 datasets)
Simulated: 0.99 ± 0.02, 1.0 ± 0.01, 0.95 ± 0.04, 1.0 ± 0.01 (7 datasets)
Spectro: 0.87 ± 0.11, 0.87 ± 0.12, 0.87 ± 0.11, 0.87 ± 0.11 (6 datasets)
Traffic: 0.98 ± 0.0, 0.98 ± 0.0, 0.96 ± 0.0, 0.97 ± 0.0 (1 dataset)
Average: 0.88 ± 0.11, 0.88 ± 0.11, 0.84 ± 0.12, 0.88 ± 0.12

Existing methods address this task while neglecting explainability. In this chapter, we address this problem with explainability in mind. By using a single random instance from each class, SAST is more scalable and at least as accurate as STC, while keeping STC's interpretability capabilities. The rest of this chapter is organized as follows: we start by describing the uSAST method (Section 4.2). Then, we detail our experiments and the obtained results in Section 4.3, before concluding this chapter in Section 4.4.

4.2 Uncertain Subsequence Transform Classification

In this section, we describe a new uncertain time series classification method based on uncertainty propagation, as in UST, and subsequence transform, as in SAST. In fact, uncertainty propagation is an effective approach to analyze uncertain data [START_REF] Gruber | [END_REF], Liu et al. 2021], and particularly uncertain time series. We consider two approaches to classify uTS in an explainable manner: the first one ignores uncertainty and uses only the best estimates, while the second one takes uncertainty into account. Ignoring uncertainty makes the task a regular time series classification task, allowing the usage of Shapelet Transform Classification, or simply STC [Hills et al. 2014], an effective and explainable regular time series classification algorithm. This model failed to find a valid shapelet on PLAsTiCC, and therefore could not perform the classification task. We performed extensive hyper-parameter tuning tests, but the result was the same.
We also tried to take uncertainty into account by using the Uncertain Shapelet Transform algorithm but, as expected, this method also failed, since it is an extension of STC for uncertain time series. In this chapter, we propose the Uncertain Scalable and Accurate Subsequence Transform (or uSAST for short) method, which is able to achieve an F1-score of 70% while providing faithful explanations, similarly to STC.

Table 4.1 shows the results using the XGBoost classifier only, as it has led to the best classification performance. However, detailed results are available in Appendix B.

Table 4.1: Results on PLAsTiCC averaged over 3 runs (precision, recall, F1 score, log loss, time in hours).
uSAST: 0.72 ± 0.01, 0.72 ± 0.00, 0.69 ± 0.01, 0.96 ± 0.01, 51.03 ± 0.12
uSASTd: 0.72 ± 0.00, 0.73 ± 0.00, 0.70 ± 0.01, 0.97 ± 0.01, 43.49 ± 0.27
uSASTdc: 0.71 ± 0.01, 0.72 ± 0.01, 0.69 ± 0.01, 0.96 ± 0.01, 43.52 ± 0.72

The first observation is that any variant of our proposed method is able to achieve around 70% precision, recall and F1 score, unlike shapelet-based methods, which completely failed on the PLAsTiCC dataset. This result corroborates the claim that pruning subsequences before the effective classification could sometimes lead to poor performance. Dropping duplicates, counting pattern frequencies or doing both does not have a significant impact on the classification performance. However, dropping duplicates makes the models faster. In particular, uSASTd is about 12 hours faster than uSAST. Counting pattern frequency does not add a computation overhead because it is done while computing the distance, in O(1) time. Choosing the right subsequence lengths to consider is challenging, and assessing all possible values is computationally expensive; however, domain knowledge could guide the setting of this parameter, as it is application-dependent.

PLAsTiCC contains objects that are either galactic or extra-galactic, and whose light curves were obtained following a Deep Drilling Fields (DDF) or Wide Fast Deep (WFD) observation strategy. Extra-galactic objects are further away than galactic ones; they are fainter and more difficult to observe. DDF light curves contain more frequent observation points than WFD ones. Thus, DDF light curves provide a more certain determination of the time series properties than their WFD counterparts, which have more uncertainties. Table 4.2 gives the performance of the uSASTd model according to whether the objects are galactic or not, DDF or WFD. The model is considerably better at classifying galactic objects than extra-galactic ones, and a little better at classifying DDF objects than WFD ones. While the model achieves an F1 score of 94% for galactic objects in DDF, it achieves an F1 score of only 67% for extra-galactic objects in WFD. This is directly related to the astrophysical nature of these objects.

Table 4.3: Results on PLAsTiCC averaged over 3 runs when uncertainty is ignored.

Taking uncertainty into account increases the classification performance in terms of precision, recall, F1 score and cross-entropy loss. In fact, from SASTd to uSASTd, there is a gain of 6% in precision, 5% in recall and 6% in F1 score. It can also be seen that the model is more confident in its predictions, as the loss has decreased. However, this gain in performance requires almost four times more computation. In fact, ROCKET uses the proportion of positive values obtained after applying random convolutions. MUSE uses bags of words obtained after applying some transformations to the time series. These features have no particular meaning for domain experts.
Our method does not have this limitation, as it is based on features that are intelligible to domain experts.

Table 4.3 data (precision, recall, F1 score, log loss, time in hours):
SAST: 0.65 ± 0.01, 0.67 ± 0.00, 0.63 ± 0.00, 1.16 ± 0.01, 16.41 ± 0.52
SASTd: 0.66 ± 0.02, 0.68 ± 0.00, 0.64 ± 0.00, 1.14 ± 0.00, 12.79 ± 0.84
SASTdc: 0.66 ± 0.01, 0.68 ± 0.00, 0.64 ± 0.01, 1.14 ± 0.01, 12.99 ± 0.30

4.3.2.4 Comparison to SOTA

In this subsection, we compare our proposed method to the state-of-the-art multivariate time series classification methods ROCKET [Dempster et al. 2020], MUSE [Schäfer & Leser 2017b] and XEM [Fauvel et al. 2022], which have been shown to be among the most accurate methods for this task [Ruiz et al. 2021]. Results are shown in Table 4.4.

Table 4.4: uSASTd vs SOTA results (precision, recall, F1 score, time in hours).
uSASTd: 0.72 ± 0.00, 0.73 ± 0.00, 0.70 ± 0.01, 43.49 ± 0.27
MUSE: 0.71 ± 0.01, 0.73 ± 0.01, 0.71 ± 0.01, 3.36 ± 0.04
ROCKET: 0.77 ± 0.00, 0.77 ± 0.00, 0.75 ± 0.00, 0.05 ± 0.00
XEM: 0.69 ± 0.01, 0.71 ± 0.00, 0.69 ± 0.00, 12.24 ± 0.46

The classification performance of our method is comparable to that of the SOTA methods. In particular, uSASTd achieves better precision, recall and F1 score compared to XEM on PLAsTiCC. uSASTd and MUSE have similar classification performance. ROCKET achieves the best classification performance. SOTA methods are faster than our proposal. Except for XEM, which is explainable-by-design, SOTA methods are not explainable.

Michael F. Mbouopda, Emille E. O. Ishida, Engelbert Mephu Nguifo, Emmanuel Gangler. Explainable Classification of Astronomical Uncertain Time Series. HAL preprint, pp. 1-8, 2022.

Table A.2: Accuracy of models on 72 UEA & UCR datasets (average over 5 runs). The numbers are rounded to 2 decimals.
Table A.2 data: per-dataset accuracies (mean ± standard deviation over 5 runs) of STC-1, STC-0.25, STC-0.5, STC-0.75, STC and SAST on the 72 UEA & UCR datasets.

A.2 Scalability of SAST, SASTEN and SASTEN-A regarding the dataset size

https://github.com/frankl1/ustc/releases/tag/litsa
https://lsst.org/
Cleaned dataset: https://drive.uca.fr/f/f0741be3fb77402f8e82/
Source code: https://anonymous.4open.science/r/usast-FBC0/

Thanks for the constructive feedback on this work, to my family for their inestimable support, and to my friends. This work has been funded by the French Ministry of Higher Education, Research and Innovation, the LabEx IMobS3 and the CNRS PEPS project TransiXplore. Thanks to the UEA & UCR Time Series Classification Repository, which provides the datasets used in this work.

Explainable Classification of Astronomical Uncertain Time Series

Exploring the expansion history of the universe, understanding its evolutionary stages, and predicting its future evolution are important goals in astrophysics. Today, machine learning tools are used to help achieve these goals by analyzing transient sources, which are modeled as uncertain time series. In this chapter, we propose an uncertainty-aware subsequence-based model which achieves a classification performance comparable to that of state-of-the-art methods. Unlike conformal learning, which estimates model uncertainty on predictions, our method takes data uncertainty as additional input. Moreover, our approach is explainable-by-design, giving domain experts the ability to inspect the model and explain its predictions. The explainability of the proposed method also has the potential to inspire new developments in theoretical astrophysics modeling by suggesting important subsequences which depict details of light curve shapes. In this work, we considered the case where input time series were uncertain while the labels were certain.
In their current form, our proposed methods cannot be applied in scenarios where the labels are also uncertain. Given that this scenario is likely to arise in practice, it is important to adapt or extend UST and uSAST to this case. A possible way of tackling this issue is to use belief function theory, as proposed in [START_REF] Quost | Learning from data with uncertain labels by boosting credal classifiers[END_REF]. Furthermore, it would be interesting to evaluate the confidence of our methods in their predictions using conformal learning techniques. We expect our methods' confidence to be correlated with the uncertainty level, that is, the more uncertain the data, the less confident our models would be. Combining conformal learning with the explainability of our methods would increase their trustworthiness and adoption by end-users. The last future direction, but not the least, is to explore other time series analysis tasks. In particular, it would be worthwhile to see how our proposed uncertain similarity measure UED compares to FOTS for the task of uncertain time series clustering. Beyond that, assessing the performance of uncertainty propagation in the tasks of uncertain time series anomaly detection and forecasting would be a good step to advance the state of the art of uncertain time series analysis in general.

Recommendations

Uncertain time series classification, and more generally the classification of uncertain data, is challenging, yet under-explored. We showed in this work that uncertainty is not necessarily a problem, but an additional input that can be used to improve the classification performance and the robustness of the decision boundaries. We also hope that uncertainty quantification will be integrated in the data collection process so that the final data is released with the associated uncertainty.

List of publications

During this thesis, we made several publications on the topic of uncertain time series classification.
04099079
en
[ "math.math-na", "info.info-mo", "phys.meca.biom", "phys.meca.mefl", "sdv.mhep.psr" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04099079/file/PosterMS05-Poncet-Interpore2023-Edimbourg-V3.pdf
Jean-Matthieu Etancelin, Marlène Murris-Espin, Philippe Poncet

PARTICLE METHODS FOR THE DYNAMICS OF POROUS BIOFILMS WITH HETEROGENEOUS RHEOLOGY AND ITS INTERACTION WITH LUNG EPITHELIUM

CONTEXT: UPSCALING FLOW AROUND LUNG CELLS

Multiscale description of the human lung from the trachea (about 2 cm) to its epithelium at the 6th-7th generations (about 10 µm high), one cilium being 100 nm wide (sketch inspired from [START_REF] Chatelin | A Hybrid Grid-Particle Method for Moving Bodies in 3D Stokes Flow with Variable Viscosity[END_REF]).

FROM DNS OF THE PORE SCALE REACTIVE FLOWS

The superficial velocity u follows the Darcy-Brinkman-Stokes equation in a porosity field ε, which defines a pure liquid phase where ε = 1 and a solid matrix where ε is small:

ε^{-1} ∂_t(ρu) + ε^{-1} div(ε^{-1} ρ u ⊗ u) − ε^{-1} div(2µ D(u)) + µ* K^{-1} u = f − ∇p    (1)

along with div u = 0, using the usual notations. The permeability K of the lower scale is driven by the Kozeny-Carman relation K(ε) = K_0 ε^3 (1 − ε)^{-2}. Given the scales, equation (1) reduces to

−div(2µ D(u)) + µ* K_0^{-1} (1 − ε)^2 ε^{-2} u = ε (f − ∇p)    (2)

An acid of concentration C dissolves a solid of molar volume m at rate r following

∂_t C + div(ε^{-1} u C) − div(d_m ε^{1+β} ∇(ε^{-1} C)) = −r(C, ε)  and  ∂_t ε = −m r(C, ε)    (3)

where d_m is the molecular diffusion of the acid and β is the index of tortuosity within the solid. Using a particle method for (3) means that we describe the fluid by a set of volumes v_p, concentrations c_p and locations ξ_p. One then gets the particle formulation of (3):

dC_p/dt = [div(d_m ε^{1+β} ∇(ε^{-1} C)) − C u · ∇ε^{-1}](ξ_p),   dξ_p/dt = ε^{-1} u(ξ_p),   dv_p/dt = [div(ε^{-1} u)](ξ_p) v_p    (4)

with back and forth high-order interpolations from the grid used to solve (2).

Dissolution of a 5% porous calcite body (1.43 mm wide) at pH = 2, shown at t = 10.5 min, t = 12.3 h and t = 42.6 h, following model (2, 3). See [START_REF] Etancelin | Improvement of remeshed Lagrangian methods for the simulation of dissolution processes at pore-scale[END_REF] for more details and the validation of the numerical method (4) in [START_REF] Molins | Simulation of mineral dissolution at the pore scale with evolving fluid-solid interfaces: review of approaches and benchmark problem set[END_REF]. The resulting porosity/permeability is displayed to the right.

TO THE DNS OF LUNG CELLS WITH PCL/MUCUS BIOFILM

The lung epithelium (the ciliated cells B(t)) is surrounded by a Newtonian periciliary liquid (PCL) of viscosity µ∞ and a miscible shear-thinning mucus composed of mucin proteins at concentration C. Model (2, 3) is still valid, but in this case the fluid is not reacting (r = 0) and K(ε)^{-1} = 1_{B(t)}/ε. Here (2) and (3) are coupled by means of the Carreau relationship defining the viscosity of the heterogeneous non-Newtonian fluid (see [START_REF] Chatelin | Numerical and experimental investigation of mucociliary clearance breakdown in cystic fibrosis[END_REF] for instance):

µ(α, D) = µ∞ + µ∞ [ (B/µ∞)^α d_0^{1−N} − 1 ] [ 1 + 2 D:D / d_0^2 ]^{α(N−1)/2}    (5)

where D is the shear rate and α = C/C_0 the renormalized concentration of mucin (d_0 is the cut-off frequency and C_0 the maximum admissible concentration of mucins). In this model, only two "patient specific" parameters remain: the bulk viscosity B and the shear-thinning index N (see the clinical context thereafter).
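To make the particle formulation (4) above concrete, here is a small illustrative Python sketch of one explicit time step for the particle quantities, assuming the grid fields (velocity, porosity, the diffusion-advection right-hand side and div(ε^{-1}u)) have already been interpolated to the particle locations. The forward-Euler scheme and the helper names are assumptions made for the example; the actual method relies on high-order remeshed interpolations between grid and particles.

```python
import numpy as np

def particle_step(xi, c, v, dt, u_at, eps_at, rhs_at, div_at):
    """One explicit (forward-Euler) update of particle locations xi, concentrations c
    and volumes v, following the structure of equation (4)."""
    eps = eps_at(xi)
    xi_new = xi + dt * u_at(xi) / eps[:, None]   # d(xi_p)/dt = eps^-1 u(xi_p)
    c_new = c + dt * rhs_at(xi)                  # dC_p/dt evaluated at the particle location
    v_new = v * (1.0 + dt * div_at(xi))          # dv_p/dt = div(eps^-1 u) v_p
    return xi_new, c_new, v_new

# Toy usage with uniform, time-independent fields.
n = 4
xi = np.random.rand(n, 3)
c = np.ones(n)
v = np.full(n, 1e-6)
xi, c, v = particle_step(
    xi, c, v, dt=1e-3,
    u_at=lambda x: np.tile([1.0, 0.0, 0.0], (len(x), 1)),
    eps_at=lambda x: np.full(len(x), 0.5),
    rhs_at=lambda x: np.zeros(len(x)),
    div_at=lambda x: np.zeros(len(x)),
)
```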
Cilia B(t) and fluid domain Ω(t) in motion (picture to the left, from [START_REF] Chatelin | Analysis of the penalized 3D variable viscosity stokes equations coupled to diffusion and transport[END_REF]), driving the mucus (isosurfaces of mucin concentration on the picture to the right, see [START_REF] Sanchez | Analysis of 3D non-linear Stokes problem coupled to transport-diffusion for shear-thinning heterogeneous microscale flows, applications to digital rock physics and mucociliary clearance[END_REF] for more details).

FROM THE UPSCALING OF THE PORE SCALE DIFFUSION

Diffusion in this sandstone, which is 21% porous, leads to a tortuosity index of β = 0.68. This means that the upscaled diffusion coefficient in a homogeneous medium is ε^β times the molecular diffusion (taken from [START_REF] Hume | A velocity-vorticity method for highly viscous 3D flows with application to digital rock physics[END_REF]).

TO THE UPSCALING OF THE DIFFUSION WITHIN LUNG CELLS

Diffusion within a ciliated cell with time-dependent geometry is upscaled and computed using the same technique as above. It is required in order to estimate the mucin slip-through dynamics. The resulting tortuosity index β(t) is shown to be relatively time independent.

Cilia carpet with time-dependent geometry (left picture, from [START_REF] Chatelin | Hybrid grid-particle methods and Penalization: A Sherman-Morrison-Woodbury approach to compute 3D viscous flows using FFT[END_REF]) and resulting time-dependent effective diffusion and porosity (right picture). This leads to a tortuosity index of β ≃ 0.4 when not in the flat shape configuration.

CLINICAL CONTEXT: MONITORING OF THERAPIES FOR CYSTIC FIBROSIS

The rheology coefficients B and N have been measured from expectoration samples of cystic fibrosis patients. Experiments were carried out on a Mars III rheometer equipped with a cone-and-plate geometry (angle 1°, diameter 35 mm), which required a small volume of fluid (0.2 ml). It has been shown in [START_REF] Chatelin | Numerical and experimental investigation of mucociliary clearance breakdown in cystic fibrosis[END_REF] that numerical simulations of mucus dynamics are consistent with clinical data: slow mucus motion coincides with CF patients, while the highest mucus motions correspond to healthy people.

(B, N) phase diagram with clinical data from [5] (figure to the left). Examples of robust estimation of the coefficients B and N by rheometry, using aliquoting of mucus samples (figure to the right).
04099116
en
[ "sdv" ]
2024/03/04 16:41:20
2022
https://dumas.ccsd.cnrs.fr/dumas-04099116/file/Berguigua_Hamza_39008280.pdf
Abbreviations: ARF = acute respiratory failure, ICU = intensive care unit, SARS-CoV-2 = severe acute respiratory syndrome coronavirus 2, VV-ECMO = veno-venous extracorporeal membrane oxygenation. Keywords: air evacuation, COVID-19, Mayotte, Reunion Island, SARS-CoV-2, transportation.

Conclusion: Emergency air evacuation of patients with ARF due to SARS-CoV-2 was associated with severe hypoxemia but remained feasible. In cases of ARF due to SARS-CoV-2 requiring emergency air evacuation, sedated patients receiving invasive mechanical ventilation and curare should be prioritized over non-intubated patients. It is noteworthy that patients with SARS-CoV-2 pneumonia related to the 501Y.V2 variant were very severe despite their young age.

Introduction

Medical evacuation is the extraction, by an air (helicopter, plane), land (ambulance) or naval unit, of a person suffering from a health problem. Medical aviation represents a stage in the history of medicine and, more particularly, of aeronautical medicine. During wars, the speed with which patients are evacuated in order to apply early treatment has always been considered a determining factor in their survival. This is how medical aviation was born and developed during the wars of the 20th century, benefiting both from improved aircraft performance and from progress in hospital medicine. The knowledge acquired on the battlefields was then put at the service of the entire civilian and military population. It has been of great help in managing epidemic peaks due to the international spread of SARS-CoV-2, including in the Indian Ocean region, where our study was conducted. Mayotte (270,000 inhabitants) and Reunion Island (845,000 inhabitants) are 2 French overseas departments located in the Indian Ocean, at a distance of 10,000 km from Paris. Until January 2021, Mayotte was relatively spared by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. On January 6, 2021, the incidence of deaths caused by SARS-CoV-2 was 20.7 per 100,000 inhabitants on the island (compared to 116.4 per 100,000 inhabitants in metropolitan France). [START_REF]COVID-19: point épidémiologique à Mayotte[END_REF][START_REF]Coronavirus (COVID-19) -Santé publique France[END_REF] However, the incidence of infection with SARS-CoV-2 rose rapidly from 268 per 100,000 inhabitants in week 3 to >800 per 100,000 inhabitants in weeks 5 and 6, an increase that has been attributed to the SARS-CoV-2 501Y.V2 variant. [START_REF] Tegally | Detection of a SARS-CoV-2 variant of concern in South Africa[END_REF] In February 2021, the only hospital of the island, Mayotte Hospital, [START_REF]COVID-19: point épidémiologique à Mayotte[END_REF] was overwhelmed by the explosion in the number of cases, as it is equipped with only 16 intensive care unit (ICU) beds for a population of 270,000 inhabitants. All evaluated patients had a positive confirmatory respiratory sample for SARS-CoV-2 by real-time reverse transcription polymerase chain reaction, using a kit targeting the IP2 and IP4 regions and the N gene. Samples from nasopharyngeal swabs or patients' respiratory specimens were extracted using the NucliSens easyMAG system (BioMérieux). The genomes were then obtained by sequencing using the Oxford Nanopore technology and the Artic Network's overlapping amplicon protocol.
[START_REF]Protocol: Real-Time RT-PCR Assays for the Detection of SARS-CoV-2[END_REF][START_REF] Corman | Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR[END_REF]

Therapeutic management

In accordance with our protocol, all patients with ARF due to SARS-CoV-2 were treated with: a third-generation cephalosporin or piperacillin-tazobactam after collection of a respiratory sample; dexamethasone at a dosage of 6 mg/day for 10 days; systematic deworming with ivermectin or albendazole; enhanced systematic anticoagulation according to the guidelines of the French Society of Hemostasis and the French Society of Anesthesia and Intensive Care. [START_REF] Susen | Traitement anticoagulant pour la prévention du risque thrombotique chez un patient hospitalisé avec COVID-19 et surveillance de l'hémostase. Proposition du groupe GIHP et du GFHT 3 avril[END_REF] High-flow nasal cannula oxygenation was initiated in patients who needed standard oxygen ≥9 L/min to maintain a peripheral arterial oxygen saturation ≥92%. The timing of intubation and mechanical ventilation was not protocolized but was determined by the ICU team.

Details of air evacuation

The protocol of air evacuation had been established by the prehospital emergency medical systems of La Reunion and Mayotte, and there was no specific training for these transports. In order to be transported, patients had to meet the following criteria: FiO2 <0.6, PEEP <15, and norepinephrine <1 mg/h. The use of curares in patients receiving invasive mechanical ventilation was not protocolized and was left to the discretion of the medical team. As there was no patient isolation unit, additional decontamination against SARS-CoV-2 was required between air transports.

Data collection

Clinical and biological data, along with information on comorbidities, were collected on admission to the ICU. Complications occurring during air transport and in the ICU were recorded. Survival at hospital discharge was also recorded.

Outcomes

The primary outcome was the occurrence of severe hypoxemia during air transport (ie, oxygen saturation (SpO2) <88% during >15 minutes). The secondary outcomes were the need for veno-venous extracorporeal membrane oxygenation (VV-ECMO), duration of invasive mechanical ventilation, in-ICU length of stay and hospital mortality. Other complications during air transport were evaluated: oxygen desaturation requiring an increase of oxygen requirements or introduction of curares, hypotension (mean arterial pressure <65 mmHg) requiring positive inotropic therapy or a fluid challenge >500 mL of crystalloid, endotracheal tube or vascular catheter dislodgement, arrhythmia, cardiac arrest, and agitation requiring modification of sedation.

Statistical analysis

Categorical variables were expressed as total number (percentage) and continuous variables as median (25th-75th percentiles). Categorical variables were compared using the Fisher exact test. Continuous variables were compared using the nonparametric Wilcoxon test or Mann-Whitney U test, as appropriate. A P value <.05 was considered significant. Statistical analyses were conducted using SPSS 15.0 (SPSS Inc, Chicago, IL). For all patients, the indication for emergency air evacuation was to free up ICU beds in the hospital of Mayotte.

Results

The characteristics of patients are shown in Tables 1 and 2. The median age of patients was 55 (46-65) years. Of the 38 evacuated patients, 44.7% had arterial hypertension, 36.8% were diabetic, and 34.2% had a body mass index >30 kg/m².
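For readers who want to reproduce the kind of univariate comparison described in the statistical analysis above outside SPSS, here is a small illustrative Python sketch using SciPy: a Fisher exact test for a categorical factor and a Mann-Whitney U test for a continuous one. The counts and values below are made up for the example and are not the study data.

```python
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: curare use (columns) vs severe hypoxemia during transport (rows).
table = np.array([[2, 11],   # hypoxemia: with curare / without curare
                  [19, 6]])  # no hypoxemia: with curare / without curare
odds_ratio, p_categorical = fisher_exact(table)

# Hypothetical paO2/FiO2 ratios in the two groups (non-parametric comparison).
hypoxemia_group = np.array([110.0, 95.0, 130.0, 120.0, 105.0])
no_hypoxemia_group = np.array([180.0, 160.0, 200.0, 150.0, 175.0])
statistic, p_continuous = mannwhitneyu(hypoxemia_group, no_hypoxemia_group, alternative="two-sided")

print(f"Fisher exact test: p = {p_categorical:.3f}; Mann-Whitney U test: p = {p_continuous:.3f}")
```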
Fifteen patients (39.5%) were screened for the SARS-CoV-2 501Y.V2 variant, all of whom tested positive. The median time from onset of symptoms to air evacuation was 12 (9-14) days. For patients receiving invasive mechanical ventilation, the median time from orotracheal intubation to air evacuation was 4 (2-7) days. Eight patients (21.1%) were treated with noradrenaline during air transport. A total of 13 patients (34.2%) developed an episode of severe hypoxemia during air transport. The delay between onset of symptoms and air transfer, and the delay between orotracheal intubation and air transfer in mechanically ventilated patients, were not associated with severe hypoxemia during air transport (P > .9) (Table 1). In univariate analysis, the only factors predicting the occurrence of severe hypoxemia during air transport were lack of treatment with curare (P = .012) and lack of invasive mechanical ventilation (P = .003) (Table 1). The median paO2/FiO2 ratio was significantly lower on admission to our ICU (140 [102-192] mmHg) than on departure from Mayotte Hospital (165 [150-200] mmHg, P = .022) (Table 2). Three patients developed hypotension requiring positive inotropic therapy (mean arterial pressure <65 mmHg) and a fluid challenge >500 mL of crystalloid during air transport. Agitation requiring modification of sedation occurred in 2 patients (5.3%), and oxygen desaturation requiring an increase of the fraction of inspired oxygen, sedation enhancement and introduction of neuromuscular blocking agents occurred in 8 patients (21%). No cardiac arrest, arrhythmia, endotracheal tube or vascular catheter dislodgement occurred during air transport. The median plasma lactate concentration on admission to the ICU was 1.3 (1.1-1.6) mmol/L (Table 2).

Evolution in ICU

One patient had a fatal cardiac arrest due to pulmonary embolism in our ICU within 24 hours of departure from Mayotte Hospital. One patient who presented with acute coronary syndrome in Mayotte underwent coronary angiography in our ICU. The intervention revealed severe 3-vessel disease requiring revascularization. Of the 35 patients who were not on VV-ECMO support on departure from Mayotte, 6 (17.1%) required VV-ECMO support for refractory acute respiratory distress syndrome in our ICU. Overall, 9 (23.7%) of 38 patients received VV-ECMO support for refractory acute respiratory distress syndrome (Table 3). Pulmonary embolism and hospital-acquired pneumonia occurred in 8 patients (21.1%) and 19 patients (50%), respectively. The median length of stay in the ICU was 21 (13-46) days, with no significant difference between patients with and without an episode of severe hypoxemia during air transport (P = .584) (Table 3). After a median follow-up of 56 (49-66) days, 7 deaths (18.4%) occurred, with no significant difference between patients with and without an episode of severe hypoxemia during air transport (P = .385) (Table 3).

Discussion

In our study, 34.2% of patients with ARF due to SARS-CoV-2 developed an episode of severe hypoxemia during air transport. The respiratory status of patients deteriorated considerably between departure and arrival, as attested by the significant decrease in the median paO2/FiO2 ratio. The only factors significantly associated with severe hypoxemia during air transport were lack of treatment with curare and lack of invasive mechanical ventilation.
At the beginning of the SARS-CoV-2 pandemic, international guidelines recommended the early intubation of infected patients with respiratory distress [START_REF] Poston | Management of critically ill adults with COVID-19[END_REF] because noninvasive ventilation was believed to cause secondary contamination of caregivers via aerosolization. [START_REF] Tran | Aerosol generating procedures and risk of transmission of acute respiratory infections to healthcare workers: a systematic review[END_REF] This risk has since been shown to be limited, and noninvasive ventilation is now recommended over invasive ventilation. [START_REF] Li | High-flow nasal cannula for COVID-19 patients: low risk of bio-aerosol dispersion[END_REF][START_REF] Wang | The experience of high-flow nasal cannula in hospitalized patients with 2019 novel coronavirus-infected pneumonia in two hospitals of Chongqing, China[END_REF] However, in our study, almost all patients were intubated in Mayotte before being evacuated, as noninvasive mechanical ventilation proved ineffective due to the severity of their condition. Note that this procedure was in accordance with international guidelines for air evacuation, which recommend stabilizing patients at risk of respiratory deterioration through preventive orotracheal intubation, whether or not they are infected with SARS-CoV-2. [START_REF] Teichman | International aeromedical evacuation[END_REF] Studies have shown that the transfer of critically ill patients receiving mechanical ventilation can increase ventilation duration and patient mortality. [START_REF] Barratt | Effect of non-clinical interhospital critical care unit to unit transfer of critically ill patients: a propensitymatched cohort analysis[END_REF][START_REF] Parmentier-Decrucq | Adverse events during intrahospital transport of critically ill patients: incidence and risk factors[END_REF] However, the long-distance transfer of patients with ARF appears to be safe when performed by specialized and dedicated medical teams. [START_REF] Uusaro | Safe long-distance interhospital ground transfer of critically ill patients with acute severe unstable respiratory and circulatory failure[END_REF] The study by Parenmark and Walther reported a large proportion of ICU-to-ICU transfers and increased odds of dying for patients transferred for reasons other than repatriation. [START_REF] Parenmark | [END_REF] Another example is the Larosa study, [START_REF][END_REF] which reports the Chilean Air Force experience and shows that the air transport of mechanically ventilated patients with COVID-19 infection is a safe mode of transport, with no in-flight deaths and an in-hospital mortality of 12%, which compares favorably with the in-hospital mortality of similar patients who did not undergo air transport. In the study by Painvin et al, the inter-hospital transfer of patients with SARS-CoV-2 pneumonia receiving mechanical ventilation was not associated with an increase in mortality. [START_REF] Painvin | Inter-hospital transport of critically ill patients to manage the intensive care unit surge during the COVID-19 pandemic in France[END_REF] In our study, no excess mortality was found in patients who developed an episode of severe hypoxemia during air transport, even though a significant deterioration of patients' respiratory status was observed on arrival. In our practice, the use of curares was not protocolized and was left to the discretion of the medical team.
Nevertheless, lack of treatment with curare was associated with severe hypoxemia during air transport. From a medical perspective, several patients in our study received a personal benefit from air evacuation. Thus, 9 patients with refractory acute respiratory distress syndrome (23.7%) were able to receive VV-ECMO support in Reunion Island, where 16 machines are presently available compared to 2 machines in Mayotte. Moreover, 1 patient who presented with acute coronary syndrome in Mayotte was able to undergo coronary angiography in Reunion Island; the intervention revealed three-vessel disease, prompting treatment with coronary revascularization. From an ethical perspective, however, the benefits of emergency air evacuation existed only at the collective level (eg, reduction in the need for triage). [START_REF] Painvin | Inter-hospital transport of critically ill patients to manage the intensive care unit surge during the COVID-19 pandemic in France[END_REF][START_REF] Turc | Collective aeromedical transport of COVID-19 critically ill patients in Europe: a retrospective study. Collective aeromedical transport of COVID-19 critically ill patients in Europe[END_REF] The lack of ethical benefit at the personal level explains why many patients and/or their families refused the evacuation. [24] Lastly, it should be stressed that patients in our case series were critically ill, with almost a quarter of them requiring VV-ECMO support despite their young age. This could be due to the increased virulence of the variant of concern, which has been highlighted in several studies. [START_REF] Challen | Risk of mortality in patients infected with SARS-CoV-2 variant of concern 202012/1: matched cohort study[END_REF][START_REF] Pearson | Estimates of Severity and Transmissibility of Novel SARS-CoV-2 Variant 501Y.V2 in South Africa[END_REF] Other issues to be highlighted are the numerous transfers required to reach the ICU of the Reunion Island hospital (ICU to ambulance, ambulance to airplane, etc.) and the discomfort of the protective equipment in a tropical zone where temperature and hygrometry are very high at that season, which corresponded to the austral summer. Our study has many limitations. Biases may have been introduced due to the retrospective nature of the study. In addition, the number of evaluated patients was relatively small. Yet, in their study of patients with SARS-CoV-2 pneumonia evacuated by air, Painvin et al [START_REF] Painvin | Inter-hospital transport of critically ill patients to manage the intensive care unit surge during the COVID-19 pandemic in France[END_REF] examined only 13 patients, none of whom were on VV-ECMO support; moreover, travel time was shorter than in our study. Similarly, the study by Turc et al [START_REF] Turc | Collective aeromedical transport of COVID-19 critically ill patients in Europe: a retrospective study. Collective aeromedical transport of COVID-19 critically ill patients in Europe[END_REF] evaluated only 36 patients (with mechanical ventilation but without VV-ECMO support) who traveled for a much shorter period of time (70 minutes) than in our study. In conclusion, emergency air evacuation of patients with ARF due to the SARS-CoV-2 501Y.V2 variant was associated with respiratory complications but remained feasible. Total flight time was 2 hours and total travel time between the 2 hospitals was 6 hours. Patients were evacuated on board an Embraer 135 aircraft specially equipped for medical evacuation missions (12 seats and 3 stretchers).
The vast majority of flights carried 3 patients, 2 of whom received invasive mechanical ventilation and 1 received conventional oxygen therapy. The medical crew, was composed of 3 doctors and 3 nurses, experienced in inter-hospital transport. Both the cockpit and a delimited area at the bottom of the aircraft were considered "clean"; the rest of the aircraft was deemed "contaminated." The medical team was fully dressed in protective clothing for the duration of the flight including examination gloves, filtering facepiece class 2 mask (FFP2) and goggles for eye protection. Each patient was assigned 1 doctor, 1 nurse, and 1 emergency kit composed of a vacuum mattress, an aspirator, a defibrillator, a transport ventilator, and suitcases with ventilation material and emergency medication (Fig.1). the study period, 43 patients with SARS-CoV-2 pneumonia were evacuated by air from Mayotte Hospital in Mamoudzou to Félix Guyon University Hospital in Reunion Island. Of these, 38 patients (84.4%) presenting with ARF were transferred directly to our ICU and included in the study cohort. During air transport, 4 patients (10.5%) received conventional oxygen therapy (between 6 L/min and 9 L/min) before departure and 34 patients (89.5%) received invasive mechanical ventilation; 3 (8.8%) of the ventilated patients also received VV-ECMO support. cases of ARF due to SARS-CoV-2 requiring emergency air evacuation, sedated patients receiving invasive mechanical ventilation and curare should be prioritized over non-intubated patients. It is notewor-thy that patients with SARS-CoV-2 pneumonia related to the 501Y.V2 variant were very severe despite their young age. Évacuations sanitaires d'urgence de patients atteints d'une défaillance respiratoire due au SARS-CoV-2 depuis Mayotte vers l'île de la Réunion Résumé Introduction : En février 2021, une explosion des cas de syndrome de détresse respiratoire aiguë due au coronavirus 2 (SARS-CoV-2) a submergé le seul hôpital à Mayotte. Nous avons rapporté une série de cas de patients atteints d'insuffisance respiratoire aiguë (IRA) due au SRAS-CoV-2 qui ont été évacués par voie aérienne de Mayotte vers l'île de La Réunion. Méthode : Cette étude observationnelle rétrospective a évalué tous les patients consécutifs atteints d'IRA due au SARS-CoV-2 qui ont été évacués par voie aérienne du CH de Mayotte vers l'unité de réanimation du CHU Félix Guyon à La Réunion entre le 2 février et le 5 mars 2021. Résultats : Au total, 43 patients atteints de pneumonie à SRAS-CoV-2 ont été évacués par voie aérienne, pour un temps de vol total de 2 heures et un temps de trajet total de 6 heures. Parmi ceuxci, 38 patients (88,4 %) avec un âge médian de 55 (46-65) ans ont présenté une IRA et ont été hospitalisés dans notre unité de soins intensifs. Quinze patients ont été dépistés pour le variant SARS-CoV-2 501Y.V2, tous testés positifs. Treize patients (34,2 %) ont développé un épisode d'hypoxémie sévère pendant le transport aérien, et le rapport médian paO2/FiO2 était plus faible à l'admission en réanimation (140 [102-192] mmHg) qu'au départ (165 [150-200], P= .022). Les facteurs associés à une hypoxémie sévère pendant le transport aérien étaient l'absence de traitement par curare (P = 0,012) et l'absence de ventilation mécanique invasive (P = 0,003). Neuf patients (23,7 %) ont reçu une assistance à l'oxygénation par membrane extracorporelle veino-veineuse dans notre unité de soins intensifs. Sept décès (18,4 %) sont survenus à l'hôpital. 
Conclusion : L'évacuation aérienne d'urgence des patients atteints d'IRA due au SRAS-CoV-2 était associée à une hypoxémie sévère mais restait faisable. En cas d'IRA due au SRAS-CoV-2 nécessitant une évacuation aérienne d'urgence, les patients sous sédation sous ventilation mécanique invasive et curare doivent être prioritaires sur les patients non intubés. Il est à noter que les patients atteints de pneumonie SARS-CoV-2 liée au variant 501Y.V2 étaient très sévères malgré leur jeune âge. Discipline : Médecine d'urgence, réanimation médicale Mots-Clés : évacuation aérienne, COVID-19, Mayotte, La Réunion, SARS-CoV-2, transport TABLE DES MATIERES : DES SOMMAIRE DES TABLEAUX ET DES FIGURES : 1. Introduction Figure 1: Air evacuation of patients p.4 p.8 2. Methods p.5 2.1 Study Design p.5 2.2 Therapeutic management p.6 2.3 Details of air evacuation p.6 2.4 Data collection p.7 2.5 Outcomes p.8 2.6 Statistical analysis p.9 3. Results p.9 3.1 Evolution in ICU p.13 4. Discussion p.14 Table 1 : 1 Characteristics at hospital departure of the 38 patients with acute respiratory failure due to SARS-CoV-2 p.11 Table 2 : 2 Characteristics at hospital admission (Reunion Island) of the 38 patients with acute respiratory failure due to SARS-CoV-2 p.12 Table 3 : 3 Evolution during the stay in intensive care unit p.[START_REF] Von Elm | STROBE InitiativeStrengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies[END_REF] 2. Methods 2.1. Study design located in the Indian Ocean. As a result, patients needing heart, lung or liver transplantation must be transferred to Paris, over a distance of 10,000 km. It is to be noted that medical teams in both Reunion Island and Mayotte have experience with emergency air evacuations, including for patients receiving extracorporeal membrane oxygenation support. [5,6] It was necessary to urgently transfer patients with acute respiratory failure (ARF) due to SARS-CoV-2 from Mayotte to the only ICU in Indian Ocean equipped with many VV-ECMO supports. To date, few studies have examined the outcome of patients with SARS-CoV-2 pneumonia evacuated by air, [7-9] and only a handful of them have focused on intensive care patients. [10,11] Here, we report a case series of patients with ARF due to SARS-CoV-2 who were evacuated by air from Mayotte to Reunion Island. This retrospective observational study evaluated all consecutive patients with ARF due to SARS-CoV-2 who were evacuated by air from Mayotte Hospital in Mamoudzou to the ICU of Félix Guyon University Hospital in Reunion Island between February 2 and March 5, 2021. All patients or their legally authorized representative were verbally informed about the study and could refuse to participate; moreover, they received a written information notice about the process of data collection. The study was approved by the Ethics Committee of the French Society of Infectious Disease and Tropical Medicine (CER-MIT), and was declared to the Commission nationale de l'informatique et des libertés (French Data Protection Agency MR004, # 2206739). This study complies with the Strengthening the Reporting of Observational studies in Epidemiology recommendations statement. [12] The medical infrastructure in Reunion Island meets European standards (P3 laboratories, 16 extracorporeal membrane oxygenation machines, coronary angiography, surgeons trained to perform all types of surgeries, among others). 
[4] When Mayotte Hospital became overwhelmed with cases of SARS-CoV-2 pneumonia, several patients were transferred to Félix Guyon University Hospital in Reunion Island, in a context of solidarity among islands of the Indian Ocean region. All patients had to be evacuated by air due to the emergency situation. There is no transplantation center (except kidney) on the French overseas territories of Reunion Island and Mayotte which are Emergency air evacuation of patients with acute respiratory failure due to SARS-CoV-2 from Mayotte to Reunion Island Abstract Background: In February 2021, an explosion of cases of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pneumonia overwhelmed the only hospital in Mayotte. To report a case series of patients with acute respiratory failure (ARF) due to SARS-CoV-2 who were evacuated by air from Mayotte to Reunion Island. Method
04099146
en
[ "info.info-lo" ]
2024/03/04 16:41:20
2005
https://inria.hal.science/hal-04099146/file/whatdoweknow.pdf
What do we know when we know that a theory is consistent? Gilles Dowek École polytechnique and INRIA LIX, École polytechnique, 91128 Palaiseau Cedex, France. [email protected], http://www.lix.polytechnique.fr/~dowek Abstract. Given a first-order theory and a proof that it is consistent, can we design a proof-search method for this theory that fails in finite time when it attempts to prove the formula ⊥? 1 Searching for proofs in a theory 1.1 Who knows that higher-order logic is consistent? It is well known that higher-order logic can be presented as a first-order theory, i.e. that there exists a first-order theory H and a function Φ translating closed formulas of higher-order logic to closed formulas of the language of H such that the sequent ⊢ A is provable in higher-order logic if and only if the sequent H ⊢ ΦA is provable in first-order logic (see, for instance, [?]). Thus, instead of using a proof-search method specially designed for higher-order logic, such as higher-order resolution [?,?], it is possible to use a first-order method, such as resolution, to search for proofs in higher-order logic. However, this reduction is inefficient. Indeed, if we attempt to prove the formula ⊥ with higher-order resolution, the clausal form of the formula ⊥ is the empty set of clauses; from these clauses, we can apply neither the higher-order resolution rule, which requires two clauses to be applied, nor any other rule of higher-order resolution, as they all require at least one clause. Thus, this attempt to prove the formula ⊥ fails immediately. In contrast, the axioms of H give an infinite number of opportunities to apply the resolution rule, and thus when searching for a proof of H ⊢ ⊥, the search space is infinite. Thus, we can say that higher-order resolution "knows" that higher-order logic is consistent, because an attempt to prove the formula ⊥ fails in finite time, while first-order resolution does not. 1.2 A proof-search method for the theory H There is a link between higher-order logic and higher-order resolution and another link between higher-order logic and the first-order theory H. But can we establish a direct link between higher-order resolution and the theory H, without referring to higher-order logic? The answer is positive because the translation Φ can be inverted: there exists a function Ψ translating closed formulas of the language of H to closed formulas of higher-order logic such that the sequent H ⊢ B is provable in first-order logic if and only if the sequent ⊢ ΨB is provable in higher-order logic. Thus, a way to search for a proof of a sequent H ⊢ B is to apply higher-order resolution to the sequent ⊢ ΨB. Thus, independently of higher-order logic, higher-order resolution can be seen as a special proof-search method for the first-order theory H. As Ψ⊥ = ⊥, this first-order proof-search method immediately fails when attempting to prove the sequent H ⊢ ⊥. Thus, this method is much more efficient than applying first-order resolution to the sequent H ⊢ ⊥, as it "knows" that the theory H is consistent. 1.3 A proof-search method for a theory T Can we generalize this to other theories than H? Given an arbitrary first-order theory T and a proof that T is consistent, can we always build in the theory T, i.e. exploit the consistency of T to design a proof-search method that fails in finite time when required to prove the formula ⊥?
Of course, as we are not interested in the trivial solution that first tests if the formula to be proven is ⊥ and then applies any method when it is not, we have to restrict to proof-search methods that do not mention the formula ⊥. It is clear that the consistency of the theory T is a necessary condition for such a method to exist: if T is inconsistent, a complete proof-search method should succeed, and not fail, when attempting to prove the formula ⊥. The main problem is to know if this hypothesis is sufficient. Resolution modulo Resolution modulo Resolution modulo is a proof-search method for first-order logic that generalizes higher-order resolution to other theories than the theory H. Some axioms of the theory H are equational axioms. How to build in equational axioms is well-known: we drop equational axioms and we replace unification by equational unification modulo these axioms (see, for instance, [?,?]). Equational unification modulo the equational axioms of H is called higher-order unification. From a proof-theoretical point of view, this amounts to define a congruence on formulas generated by the equational axioms and to identify congruent formulas in proofs. For instance, if we identify the terms 2 + 2 and 4, we do not need the axiom 2+2 = 4 that is congruent to 4 = 4, but when we substitute the term 2 for the variable x in the term x + 2, we obtain the term 4. We have called deduction modulo the system obtained by identifying congruent formulas in proofs. But not all axioms can be expressed as equational axioms. For instance, if the axiom of arithmetic S(x) = S(y) ⇒ x = y can be replaced by the equivalent equational axiom P red(S(x)) = x, the axiom ¬0 = S(x), that has no one-point model, cannot be replaced by an equational axiom. Thus, we have extended deduction modulo by identifying some atomic formulas with not atomic ones. For instance, identifying formulas with the congruence generated by the rewrite rules N ull(0) -→ ⊤ and N ull(S(x)) -→ ⊥ is equivalent to having the axiom ¬0 = S(x). When we have such rewrite rules operating directly on formulas, equational resolution has to be extended. Besides the resolution rule, we need to add another rule called Extended narrowing. For instance, if we identify the formula P (1) with ¬P (0), we can refute the set of clauses {¬P (x)}, but to do so, we have to be able to substitute the term 1 for the variable x in the clause ¬P (x), deduce the clause P (0) and conclude with the resolution rule. More generally, the Extended narrowing rule allows to narrow any atom in a clause with a propositional rewrite rule. The proposition obtained this way must then be put back in clausal form. Equational resolution extended with this rule is called ENAR -Extended Narrowing and Resolution -or resolution modulo for short. When we orient the axioms of H as rewrite rules and use resolution modulo, we obtain exactly higher-order resolution. Proving completeness Proving the completeness of higher-order resolution, and more generally of resolution modulo, is not very easy. Indeed higher-order resolution knows that higher-order logic is consistent, i.e. it fails in finite time when attempting to prove the formula ⊥. Thus, a finitary argument shows that the completeness of higher-order resolution implies the consistency of higher-order logic, and by Gödel's second incompleteness theorem, the completeness of higher-order resolution cannot be proved in higher-order logic itself. 
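As a concrete illustration of rewriting atoms and terms modulo such rules, the short sketch below applies the rules Pred(S(x)) −→ x, Null(0) −→ ⊤ and Null(S(x)) −→ ⊥ to ground terms. The term representation and the function names are ours, chosen only for this example; this is not the machinery of an actual implementation of deduction modulo.

```python
# Minimal ground rewriting of the terms and atoms mentioned above.
def rewrite_term(t):
    """Normalize a ground term built from '0', ('S', t) and ('Pred', t)."""
    if isinstance(t, tuple):
        head, arg = t[0], rewrite_term(t[1])
        if head == 'Pred' and isinstance(arg, tuple) and arg[0] == 'S':
            return arg[1]                         # Pred(S(x)) -> x
        return (head, arg)
    return t

def rewrite_atom(a):
    """Rewrite an atomic formula ('Null', t) to 'TOP'/'BOT' when possible."""
    pred, t = a[0], rewrite_term(a[1])
    if pred == 'Null':
        if t == '0':
            return 'TOP'                          # Null(0) -> TOP
        if isinstance(t, tuple) and t[0] == 'S':
            return 'BOT'                          # Null(S(x)) -> BOT
    return (pred, t)

two = ('S', ('S', '0'))
print(rewrite_term(('Pred', ('S', two))))             # S(S(0)), i.e. 2
print(rewrite_atom(('Null', ('S', '0'))))             # BOT
print(rewrite_atom(('Null', ('Pred', ('S', '0')))))   # Pred(S(0)) -> 0, so TOP
```

Such rewriting is computationally harmless; the delicate point, raised just above, is that a complete proof-search method which fails on ⊥ in finite time carries a consistency argument with it.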
This explains that some strong proof-theoretical results are needed to prove the completeness of higher-order resolution, at least the consistency of higher-order logic. The completeness proof given by Andrews and Huet [?,?,?] uses a result stronger than consistency: the cut elimination theorem for higher-order logic. In the same way, the completeness of resolution modulo rests upon the fact that deduction modulo the considered congruence has the cut elimination property. Indeed, when the congruence is defined by rules rewriting atomic formulas to non-atomic ones, deduction modulo this congruence may have the cut elimination property or not. For instance, deduction modulo the rule P -→ Q∧R has the cut elimination property, but not deduction modulo the rule P -→ Q ∧ ¬P [?] and resolution modulo this second rule is incomplete. Is it possible to weaken this cut elimination hypothesis and require, for instance only consistency? The answer is negative: the rule P -→ Q ∧ ¬P is consistent, but resolution modulo this rule is incomplete. More generally, Hermant [?] has proved that the completeness of resolution modulo a congruence implies cut elimination for deduction modulo this congruence. A resolution strategy At least in the propositional case, resolution modulo can be seen as a strategy of resolution [?]. For instance, consider the rule P -→ Q ∧ R. The Extended narrowing rule allows to replace an atom P by Q ∧ R and to put the formula obtained this way back in clausal form. With this rule, from a clause of the form C ∨ P we can derive the clauses C ∨ Q and C ∨ R and from a clause of the form C ∨ ¬P we can derive the clause C ∨ ¬Q ∨ ¬R. We can mimic this rule by adding three clauses ¬P ∨Q, ¬P ∨R, P ∨¬Q∨¬R and restricting the application of the resolution rules as follows: (1) we cannot apply the resolution rule using two clauses of the this set (2) when we apply the resolution rule using one clause of this set the eliminated atom must be the underlined atom. Notice that this set of clauses is exactly the clausal form of the formula P ⇔ (Q ∧ R). This strategy is in the same spirit as hyper-resolution, but the details are different. If we apply the same method with the formula P ⇔ (Q ∧ ¬P ), we obtain the three clauses ¬P ∨ Q, ¬P ∨ ¬P , P ∨ ¬Q ∨ P with the same restriction and, like resolution modulo, this strategy is incomplete: it does not refute the formula Q. The fact that this strategy is complete for one system but not for the other is a consequence of the fact that deduction modulo the rule P -→ Q ∧ R has the cut elimination property, but not deduction modulo the rule P -→ Q ∧ ¬P . Understanding resolution modulo as a resolution strategy seems to be more difficult when we have quantifiers. Indeed, after narrowing an atom with a rewrite rule, we have to put the formula back in clausal form and this involves skolemization. From consistency to cut elimination We have seen in section ?? that the theory T = {P ⇔ (Q ∧ ¬P )} is consistent, but that resolution modulo the rule P -→ (Q ∧ ¬P ) is incomplete. Thus, it seems that the consistency hypothesis is not sufficient to design a complete proof-search method that knows that the theory is consistent. However the rule P -→ (Q ∧ ¬P ) is only one among the many rewrite systems that allow to express the theory T in deduction modulo. Indeed, the formula P ⇔ (Q ∧ ¬P ) is equivalent to ¬P ∧ ¬Q and another solution is to take the rules P -→ ⊥ and Q -→ ⊥. 
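This clausal simulation and its two restrictions can be experimented with directly. The sketch below uses a set-of-literals encoding of our own (it is not taken from the cited work); under the stated restrictions it reproduces the behaviour described above: the strategy derives the empty clause from Q with the theory clauses ¬P and ¬Q coming from the rules P −→ ⊥ and Q −→ ⊥, but saturates without refuting Q with the clauses coming from P ⇔ (Q ∧ ¬P).

```python
# Propositional resolution with the restricted strategy described above.
def resolve(c1, c2, atom):
    """Resolve clauses c1 and c2 on 'atom'; literals are ('+', a) or ('-', a)."""
    if ('+', atom) in c1 and ('-', atom) in c2:
        return frozenset((c1 - {('+', atom)}) | (c2 - {('-', atom)}))
    if ('-', atom) in c1 and ('+', atom) in c2:
        return frozenset((c1 - {('-', atom)}) | (c2 - {('+', atom)}))
    return None

def saturate(goal_clauses, theory_clauses):
    """theory_clauses: list of (clause, underlined_atom) pairs.  Returns True
    when the empty clause is derived, False when saturation stops without it."""
    derived = {frozenset(c) for c in goal_clauses}
    while True:
        new = set()
        for c in derived:
            for d in derived:                     # ordinary resolution steps
                for (_, a) in c:
                    r = resolve(c, d, a)
                    if r is not None:
                        new.add(r)
            for (t, ua) in theory_clauses:        # a step using a theory clause
                r = resolve(c, frozenset(t), ua)  # must eliminate its
                if r is not None:                 # underlined atom ua
                    new.add(r)
        if frozenset() in new:
            return True
        if new <= derived:
            return False
        derived |= new

# Rule P -> Q /\ ~P, simulated by the clausal form of P <=> (Q /\ ~P); the
# underlined atom of each theory clause is P (duplicated literals such as
# ~P \/ ~P collapse in the set encoding).
incomplete = [({('-', 'P'), ('+', 'Q')}, 'P'),
              ({('-', 'P')}, 'P'),
              ({('+', 'P'), ('-', 'Q')}, 'P')]
# Rules P -> BOT and Q -> BOT, i.e. theory clauses ~P and ~Q.
oriented = [({('-', 'P')}, 'P'), ({('-', 'Q')}, 'Q')]

goal = [{('+', 'Q')}]                    # try to refute the formula Q
print(saturate(goal, incomplete))        # False: the refutation is missed
print(saturate(goal, oriented))          # True: Q and ~Q give the empty clause
```

With the second set of rules the strategy behaves, on this example, as a complete one.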
Deduction modulo this rewrite system has the cut elimination property and hence resolution modulo this rewrite system is complete. In other words, the resolution strategy above with the clauses ¬P , ¬Q is complete and knows that the theory is consistent. Thus, the goal should not be to prove that if deduction modulo a congruence is consistent then it has the cut elimination property, because this is obviously false, but to prove that a consistent set of axioms can be transformed into a congruence in such a way that deduction modulo this congruence has the cut elimination property. To stress the link with the project of Knuth and Bendix [?], we call this transformation an orientation of the axioms. A first step in this direction has been made in [?] following an idea of [?]. Any consistent theory in propositional logic can be transformed into a polarized rewrite system such that deduction modulo this rewrite system has the cut elimination property. To do so, we first put the theory T in clausal form and consider a model ν of this theory (i.e. a line of a truth table ). We pick a clause. In this clause there is either a literal of the form P such that ν(P ) = 1 or a literal of the form ¬Q such that ν(Q) = 0. In the first case, we pick all the clauses where P occurs positively P ∨ A 1 , ..., P ∨ A n and replace these clauses by the formula (¬A 1 ∨ ... ∨ ¬A n ) ⇒ P . In the second, we pick all the clauses where Q occurs negatively ¬Q∨B 1 , ..., ¬Q∨B n and replace these clauses by the formula Q ⇒ (B 1 ∧ ... ∧ B n ). We repeat this process until there are no clauses left. We obtain this way a set of axioms of the form A i ⇒ P i and Q j ⇒ B j such that the atomic formulas P i 's and Q j 's are disjoint. The next step is to transform these formulas into rewrite rules and this is difficult because they are implications and not equivalences. But, this is possible if we extend deduction modulo allowing some rules to apply only to positive atoms and others to apply only to negative atoms. This extension of deduction modulo is called polarized deduction modulo. We get the rules P i -→ + A i and Q j -→ -B j . Using the fact that the P i 's and the Q j 's are disjoint, it is not difficult to prove cut elimination for deduction modulo these rules [?]. So, this result is only a partial success because resolution modulo is defined for non-polarized rewrite systems and orientation yields a polarized rewrite system. There may be two ways to bridge the gap: the first is to extend resolution modulo to polarized rewrite systems. There is no reason why this should not be possible, but this requires some work. A more ambitious goal is to produce a non-polarized rewrite system when orienting the axioms. Indeed, the axiom P ⇒ A can be oriented either as the polarized rewrite rule P -→ -A or as the non-polarized rule P -→ (P ∧ A), and similarly the axiom A ⇒ P can be oriented as the rule P -→ (P ∨ A). But the difficulty here is to prove that deduction modulo the rewrite system obtained this way has the cut elimination property. Bridging this gap would solve our initial problem for the propositional case. Starting from a consistent theory, we would build a model of this theory, orient it using this model, i.e. define a congruence and resolution modulo this congruence would be a complete proof search method for this theory that knows that the theory is consistent. But, this would solve only the propositional case and for full first-order logic, everything remains to be done. 
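The orientation step for the propositional case can also be sketched in a few lines. The encoding below (clauses as sets of signed atoms, a model as a dictionary) is our own, and the output is only a structural description of the produced axioms, not the polarized rewrite system itself.

```python
# Orientation of a consistent propositional theory given in clausal form,
# guided by a model nu, following the procedure described above.
def orient(clauses, nu):
    """Return a list of (atom, sign, rests).  For sign '+', the clauses
    P ∨ A_i were replaced by (¬A_1 ∨ ... ∨ ¬A_n) ⇒ P; for sign '-',
    the clauses ¬Q ∨ B_i were replaced by Q ⇒ (B_1 ∧ ... ∧ B_n)."""
    clauses = [frozenset(c) for c in clauses]
    rules = []
    while clauses:
        # pick, in the first remaining clause, a literal made true by nu
        sign, atom = next((s, a) for (s, a) in clauses[0] if (s == '+') == nu[a])
        chosen = [c for c in clauses if (sign, atom) in c]
        rests = [c - {(sign, atom)} for c in chosen]
        rules.append((atom, sign, rests))
        clauses = [c for c in clauses if c not in chosen]
    return rules

# Clausal form of P <=> (Q /\ ~P) and its only model (P and Q both false).
clauses = [{('-', 'P'), ('+', 'Q')}, {('-', 'P')}, {('+', 'P'), ('-', 'Q')}]
nu = {'P': False, 'Q': False}
for atom, sign, rests in orient(clauses, nu):
    print(f"{atom} ->{sign}", [sorted(r) for r in rests])
```

On this example the procedure yields the axioms P ⇒ (Q ∧ ⊥) and Q ⇒ P (an empty rest reads as ⊥), one of the possible outcomes of the construction; the choice of clause and of the satisfied literal determines which orientation is obtained.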
We have started this note with a problem in automated deduction: given a theory T and a proof that it is consistent, can we design a complete proof-search method for T that knows that T is consistent? We have seen that this problem boils down to a problem in proof theory: given a theory T and a proof that it is consistent, can we orient the theory into a congruence such that deduction modulo this congruence has the cut elimination property? We seem to be quite far from a full solution to this problem, although the solution for the propositional case seems to be quite close. Some arguments however lead to conjecture a positive answer to this problem: first the fact that the problem seems almost solved for propositional logic, then the fact that several theories such as arithmetic, higher-order logic, and some version of set theory have been oriented. Finally, we do not have examples of theories that can be proved to be non orientable (although some counter examples exist for intuitionistic logic). However, some theories still resist to being oriented, for instance higher-order logic with extensionality or set theory with the replacement scheme. A positive answer to this problem could have some impact on automated theorem proving, as in automated theorem proving, like in logic in general, axioms are often a burden.
04099331
en
[ "phys", "spi" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04099331/file/DAAA23056.1680000356.pdf
B Franc ¸ois F Gand HYBRID RANS/LES SIMULATIONS OF SECONDARY FLOWS IN A LINEAR CASCADE OF TURBINE BLADES WITH EXTERNAL TURBULENCE Keywords: TURBINE, SECONDARY FLOW, HYBRID RANS/LES, TURBULENCE Turbine blades are highly aerodynamically loaded, which leads to secondary flows in the vicinity of the hub/shroud walls. These secondary flows are the cause of significant aerodynamic losses in the turbine row, that need to be accurately predicted by the design engineer. Also, turbine rows operate in a flow with a high intensity of free-stream turbulence. Predicting such flows is quite challenging for RANS models that may lack accuracy. In the present work, a hybrid RANS/LES approach, named Zonal Detached Eddy Simulation, is assessed on a linear cascade of turbine blades submitted to a significant external turbulence. The selected configuration is the MAGPI configuration (Re=560 000) for which measurements are available downstream of the blades and on the blade walls. In this work, both the ZDES approach and the methodology for injecting turbulence are presented. The physical consistency of ZDES solutions is analysed and discussed. Finally, ZDES results are compared to the measurements and with a RANS solution: pressure losses maps downstream of the blade simulated with ZDES show a fair agreement with experiments both in shape and magnitude, while the RANS solution overestimates the pressure losses. The comparison of the pressure distribution on the blade wall near the lower wall (where secondary flows develop) exhibits also a better matching of the ZDES results with the measurements. INTRODUCTION Turbine blades are submitted to strong aerodynamic loads. This leads to strong pressure gradients at the vicinity of shroud walls generating secondary flows responsible for the main losses in a turbine row. These flows are three-dimensional, they generate a strong mixing of momentum with the main flow [START_REF] Cui | Numerical Investigation of Contrasting Flow Physics in Different Zones of a High-Lift Low-Pressure Turbine Blade[END_REF]). The prediction of the aerodynamic losses is a concern for the engineer who needs accurate methods to design high-performance and efficient machines. RANS methods are widely used in design cycle in industry, because they offer the best compromise between computational cost and accuracy. For such complex flows, the choice of the turbulence model remains delicate because of its strong influence on the results. Also, some turbulence models face difficulties for modeling large external turbulence rates. If not modelled, the turbulence can be partly resolved for a higher level of accuracy (Large Eddy Simulations approaches) but at the expense of a much larger computational cost. In turbine flows, Wall-Resolved LES (WRLES) is often prefered because the turbulence in the boundary layer is essentially resolved [START_REF] Fiore | Description of the flow in a linear cascade with an upstream cavity part 1: Influence of turbulence[END_REF]) and not modelled as in Wall-Modeled LES approaches. However, the numerical cost of WRLES is high because of the mesh required in the boundary layer. An alternative way is to resort to hybrid RANS/LES approaches, which solve the attached boundary layer with RANS while the rest is solved with LES. The mesh effort is reduced compared to WRLES, so does the numerical effort [START_REF] Tucker | Hybrid les approach for practical turbomachinery flows part i: Hierarchy and example simulations[END_REF]). 
This strategy is followed here using the Zonal Detached Eddy Simulation of [START_REF] Deck | Recent Improvements in the Zonal Detached Eddy Simulation (ZDES) Formulation[END_REF]. Yet, this approach has never been assessed on turbine configurations in which secondary flows and external turbulence are present. The configuration studied in this work is a linear cascade of a turbine from the EU project MAGPI. Measurements downstream of the cascade and on the blade walls are available and will be compared to the numerical results. The present paper is organized as follows. First, the configuration is presented as well as the secondary flow topology. Second, the numerical methods are described: the Zonal Detached Eddy Simulation and the approach for injecting turbulent fluctuations to simulate the external turbulence in the computational domain. Third, the flow solutions are analysed to verify their consistency and the distribution of RANS/LES solving areas. Finally, the numerical results are compared to the measurements and discussed. The influence of the mesh on the numerical results is also discussed. CONFIGURATION Geometry and main flow features The MAGPI (Main Annulus Gas Path Interactions) project is a European Commission funded project (2006-2011). The project addressed the interactions between the main gas path and secondary flow systems in commercial gas turbines. Experiments were performed on turbine disk rim and compressor manifold cavity heat transfer, hot gas ingestion, and spoiling effects of cooling air flow and their impact on turbine and compressor performance, as well as a reduction of secondary air losses. The experimental data was used for a better understanding of the complex flow phenomena and improvements of platform and cavity design. The MAGPI configuration investigated in the present work is a linear cascade composed of five nozzle guide vanes (NGV) installed at Karlsruhe University, Germany [START_REF] Schuler | Investigation of the Influence of Different Rim Seal Geometries in a Low-Pressure Turbine[END_REF]. The rig is set in an open circuit, which includes an upstream honeycomb settling chamber, a centrifugal blower, and a Venturi pipe to target the desired inflow conditions. Upstream of the blade leading edge, the rim seal is included in a cavity module linked to the test section, allowing different rim seal designs to be set. Purge flows of various mass flow rates are supplied to the cavity with a tangential component with respect to the span direction that mimics the driving effect of the rotor disk on the sealing flow in the cavity. The blade design represents a typical low-pressure turbine. In the present work, only the configuration without cavity is studied. Figure 1 gives an illustration of the test rig with the linear cascade and a summary of the main geometrical and flow features. Flow topology In order to illustrate and better understand the general flow topology observed in the MAGPI configuration prior to analyzing the simulations in detail, Figure 2 presents the main secondary flow structures in the inter-blade channel, obtained from a RANS simulation (the secondary flow topology does not vary much according to the turbulence model). For the sake of clarity, only half of the channel height is displayed. Firstly, the incoming lower wall boundary layer impinges the blade leading-edge (LE), creating a horseshoe vortex. On the suction side, this vortex is convected along the blade wall, labeled as 1b in Fig. 2-a.
On the pressure side, this vortex becomes a passage vortex crossing the whole inter-blade channel, labeled as 1a in Fig. 2-a. This passage vortex is driven by the strong azimuthal pressure gradient due to the large turning of the flow in the channel and the low momentum of the flow at the lower wall. This passage vortex impacts the adjacent blade suction side and is then convected along the suction side, labelled as 2 in Fig. 2-b. Inspection of the friction map (not shown here for conciseness) shows that the interaction of the passage vortex with the suction side of the blade does not cause flow separation (independently of the turbulence model). Near the suction side of the blade, the passage vortex then causes, in its vicinity, the creation of another vortex rotating in the opposite direction (colored in red), visible in Fig. 2-b and Fig. 2-d. On the first half of the pressure side, a flow separation occurs near the lower wall, downstream of the start of the passage vortex. This flow separation is labelled as 3 in Fig. 2-c. Downstream of this flow separation, a second passage vortex is created and is convected again across the whole channel because of the azimuthal pressure gradient. Nevertheless, it does not interact with the adjacent blade, as observed in Fig. 2-d and labelled as 3, but with its wake. Figure 2-d illustrates the two main vortices responsible for the main aerodynamic losses of the linear cascade: the vortex colored in blue is the passage vortex (coming from the horseshoe vortex and labelled as 1a in Fig. 2-a) while the vortex colored in red is induced by the passage vortex (1a) (see Fig. 2-b), usually called the "wall vortex". HYBRID RANS/LES APPROACH The hybrid RANS/LES method used in this work is the Zonal Detached Eddy Simulation (ZDES) developed at ONERA. The reader is referred to [START_REF] Deck | Recent Improvements in the Zonal Detached Eddy Simulation (ZDES) Formulation[END_REF] for the complete formulation of ZDES, whose main ideas are recalled hereafter. This approach has been used with success to simulate a wide range of applications of industrial interest (aircraft, space launchers, jet flows and turbomachinery). ZDES has three operating modes. It is important to bear in mind that modes 1 and 2 of ZDES belong to the so-called "natural Detached Eddy Simulation" approaches, in which the whole attached boundary layers are treated in RANS and the free-shear flows are treated in LES. Conversely, mode 3 of ZDES is a Wall-Modelled LES approach in which the outer part of attached boundary layers is resolved in LES while the inner part is treated in RANS. In the present paper, the formulation ZDES mode 2 is applied to the whole computational domain. ZDES mode 2 provides an "automatic" operating option (referred to as mode 2 in the following) for which the switch between RANS and LES regions is dynamically set by the model. Indeed, ZDES mode 2 was developed to deal with separations over smooth surfaces and works as a global RANS/LES approach where the model itself defines the RANS and DES areas, which makes the treatment of complex geometries straightforward for the user. To ensure that attached boundary layers are treated in RANS independently of the grid density, a shielding function f_d is used.
This function was defined in the framework of Delayed-DES (DDES) [START_REF] Spalart | A New Version of Detached-Eddy Simulation, Resistant to Ambiguous Grid Densities[END_REF] to be equal to 0 in the boundary layer and 1 elsewhere: f_d = 1 − tanh[(8 r_d)^3], with r_d = (ν_t + ν) / (√(U_{i,j} U_{i,j}) κ^2 d_w^2) (1), where ν_t is the eddy viscosity, ν is the kinematic viscosity, U_{i,j} is the velocity gradient, d_w is the wall distance and κ is the Karman constant. The hybrid length scale for ZDES mode 2 then reads: d_ZDES^II = d_w − f_d × max(0, d_w − C_DES ∆_ZDES^II) (2), which ensures that d_ZDES^II = d_w in attached boundary layers where f_d = 0, i.e. the underlying RANS model (the SA model is used in this work) is forced in these areas. Unlike DDES, mode 2 of ZDES provides a specific definition of the subgrid length scale ∆_ZDES^II according to the flow resolution and a threshold value f_d0 = 0.8 determined in [START_REF] Deck | Recent Improvements in the Zonal Detached Eddy Simulation (ZDES) Formulation[END_REF]. When f_d < f_d0, ∆_ZDES^II = ∆_max (∆_max being the characteristic mesh length necessary to ensure the correct behaviour of f_d). When f_d > f_d0, the subgrid length scale reverts to ∆_ZDES^II = ∆_vol = (∆x ∆y ∆z)^(1/3). MESH AND NUMERICAL PARAMETERS Mesh Figure 3: Mesh used in the simulations. Only every second point is shown in the illustration. Figure 3 presents the mesh used in the simulations and the main mesh features. The computational domain contains a single blade channel. It extends to one axial chord upstream of the blade and three axial chords downstream of the blade. These choices allow the convection of the secondary flows up to at least two axial chords. Near the outlet boundary condition, a buffer area with large mesh cells was set. The largest mesh size is 11 mm. At the inlet boundary condition, flow quantities measured experimentally are imposed (stagnation pressure, stagnation temperature, velocity direction). All walls are treated with a no-slip adiabatic boundary condition. The total number of mesh cells is 13 million. The RANS and ZDES simulations were performed on the same mesh. Flow Solver and Numerical Parameters The flow solver is the elsA software [START_REF] Cambier | The Onera elsA CFD Software: Input from Research and Feedback from Industry[END_REF], owned by ONERA and Safran. This compressible flow solver is based on a cell-centered finite volume technique and structured multiblock meshes. For the steady RANS computations, the time scheme used is a standard first-order implicit backward Euler scheme, and for the unsteady ZDES simulations, time integration is achieved by an implicit second-order scheme based on a Newton algorithm. The timestep has been set to 3 × 10^-7 s. The choice was mainly driven by acoustic CFL number considerations and computational cost (the selected timestep is small enough to capture the turbulent structures of interest). It corresponds to a convective CFL number of around 36 in the smallest cells in the boundary layer and of 0.1 for cells with a length of 1 mm. This timestep samples the convection time of a particle from the leading edge to the trailing edge (TE) of the blade with 7000 steps. For each timestep, five sub-iterations are used in the inner loop of the time scheme and the residuals decrease by one order of magnitude. The convective flux discretization is treated with the AUSM (Advection Upstream Splitting Method) scheme with a third-order limiter. The turbulence model used in the RANS part of ZDES is the turbulence model from [START_REF] Spalart | A One-Equation Turbulence Model for Aerodynamic Flows[END_REF].
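To make Eqs. (1) and (2) concrete, here is a small numerical sketch of the shielding function and of the mode 2 hybrid length scale. The variable names, the values C_DES = 0.65 and κ = 0.41, and the sample inputs are our own assumptions for the example (they are not quoted above), and this is of course not the elsA implementation.

```python
# Sketch of Eqs. (1)-(2): DDES shielding function and ZDES mode 2 length scale.
import numpy as np

KAPPA = 0.41  # Karman constant (assumed value)

def shielding_fd(nu, nu_t, grad_u, d_w):
    """f_d = 1 - tanh((8 r_d)^3); grad_u is the 3x3 velocity-gradient tensor."""
    mag = np.sqrt(np.sum(grad_u * grad_u))            # sqrt(U_ij U_ij)
    r_d = (nu_t + nu) / (max(mag, 1e-16) * KAPPA**2 * d_w**2)
    return 1.0 - np.tanh((8.0 * r_d)**3)

def zdes_mode2_length(d_w, f_d, dx, dy, dz, c_des=0.65, f_d0=0.8):
    """Hybrid length scale of Eq. (2) with the mode 2 subgrid length."""
    delta = max(dx, dy, dz) if f_d < f_d0 else (dx * dy * dz)**(1.0 / 3.0)
    return d_w - f_d * max(0.0, d_w - c_des * delta)

# A point deep inside an attached boundary layer: a large velocity gradient
# and a small wall distance give f_d close to 0, so the RANS length scale
# d_w is kept, which is the shielding behaviour described above.
grad_u = np.array([[0.0, 2.0e4, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
f_d = shielding_fd(nu=1.5e-5, nu_t=1.0e-4, grad_u=grad_u, d_w=2.0e-4)
print(f_d, zdes_mode2_length(2.0e-4, f_d, 1e-3, 1e-4, 1e-3))
```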
METHODS TO ACCOUNT FOR THE EXTERNAL TURBULENCE IN ZDES Setting the correct inlet turbulence rate To account for the freestream turbulence, perturbations modeling this turbulence need to be injected at the inlet of the computational domain. Measurements indicate that the turbulence rate is roughly equal to 6% at the leading edge position. Upstream of the blade, at a distance of seven chords, the turbulence is generated by a grid composed of bars of 8 mm with an aperture of 25 mm. The turbulence impinging the blade is assumed to be homogeneous and isotropic. According to [START_REF] Sagaut | Homogeneous Turbulence Dynamics[END_REF], as used by [START_REF] Fiore | Description of the flow in a linear cascade with an upstream cavity part 1: Influence of turbulence[END_REF], the time evolution of the turbulent kinetic energy can be written as: k(t) = k(t_0) [1 + (C_ε − 1) t/τ_0]^(−1/(C_ε−1)) (3), with τ_0 = k(t_0)/ε(t_0) and C_ε = 1.92. The Taylor assumption is used to relate time and spatial evolution. Using the relation x = u_0 t, where u_0 is the mean velocity, it follows that: k(x) = k(x_0) [1 + (C_ε − 1) x/(u_0 τ_0)]^(−1/(C_ε−1)) (4), with τ_0 = k(x_0)/ε(x_0). The dissipation rate can be related to a turbulent length scale L_t with the relation L_t = C_Lt k^(3/2)/ε, where C_Lt = 0.4 [START_REF] Pope | Turbulent Flows[END_REF]. For the present configuration, the velocity is equal to u_0 = 41.4 m/s and the flow angle is α = 38°. The distance between the inlet and the leading edge of the blade (following the flowpath) is roughly 95 mm. Given the uncertainties regarding the turbulent length scale of the upstream turbulence, two length scales were tested in the simulations: 8 mm and 4 mm. To meet the measured turbulence rate at the leading-edge position, the estimated turbulence rates to impose at the inlet based on Eq. 4 for turbulent length scales L = 8 mm and L = 4 mm are 7% and 8.5%, respectively. Methodology for injecting external turbulence The methodology to account for external turbulence is based on the approach of [START_REF] Jarrin | Reconstruction of Turbulent Fluctuations for Hybrid RANS/LES Simulations using a Synthetic-Eddy Method[END_REF], named the Synthetic Eddy Method (SEM). In order to generate an unsteady and turbulent velocity field at the inlet of the computational domain, the principle of SEM is to superimpose a large number of turbulent structures, defined analytically by their velocity field, onto a mean velocity field provided by the user. Thus, the velocity field at each point of the inlet boundary condition is defined as follows: u_i(x, y, z, t) = U_i + (1/√N) Σ_{k=1..N} ε_j^k A_ij √(V_B σ^-3) T((x − x^k(t))/σ) T((y − y^k(t))/σ) T((z − z^k(t))/σ), for i = 1, 2, 3, where: • the data provided by the user are the mean velocity field U_i, the Cholesky decomposition A_ij of the targeted Reynolds tensor, and the length scale σ of the coherent structures; • N and V_B are respectively the number of structures and a control volume (computed automatically); • ε_j^k is a random value equal to 1 or −1; • (x^k(t), y^k(t), z^k(t)) are the coordinates of the turbulent structure k, following an assumption of frozen turbulence; • T is a shape function for the coherent structures, defined as T(x) = √(3/2) (1 − |x|) if |x| < 1, and 0 otherwise. For the present study, the Reynolds stress is assumed homogeneous and isotropic, such that R_ij = (Tu × U_0)^2 δ_ij, where Tu is the turbulence rate, U_0 a reference velocity and δ_ij the Kronecker symbol.
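As a quick cross-check of the inlet values quoted above (7% for L = 8 mm and 8.5% for L = 4 mm), Eq. (4) can be evaluated numerically. The short script below is our own back-of-the-envelope estimate, assuming isotropic turbulence (k = 1.5 (Tu u_0)^2) and the constants given in the text; it is not the procedure actually used to set up the simulations.

```python
# Decay of the inlet turbulence rate down to the leading edge, Eq. (4).
C_EPS, C_LT = 1.92, 0.4
U0, X_LE = 41.4, 0.095           # mean velocity (m/s), inlet-to-LE distance (m)

def tu_at_leading_edge(tu_inlet, l_t):
    k0 = 1.5 * (tu_inlet * U0) ** 2                # isotropic turbulence
    eps0 = C_LT * k0 ** 1.5 / l_t                  # from L_t = C_Lt k^(3/2) / eps
    tau0 = k0 / eps0
    k = k0 * (1.0 + (C_EPS - 1.0) * X_LE / (U0 * tau0)) ** (-1.0 / (C_EPS - 1.0))
    return (2.0 * k / 3.0) ** 0.5 / U0

for tu_in, l_t in [(0.07, 0.008), (0.085, 0.004)]:
    print(f"L = {l_t * 1e3:.0f} mm, Tu_inlet = {tu_in:.1%} "
          f"-> Tu_LE = {tu_at_leading_edge(tu_in, l_t):.1%}")
# Both inlet settings decay to roughly 6% at the leading edge, consistent
# with the measured target quoted above.
```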
RESULTS AND DISCUSSION OF THE ZDES SIMULATION Convergence After 15 time passages from LE to TE (i.e. 10^5 iterations), the mean mass flow rate and the mean forces are converged. Figure 4 presents the evolution of the magnitude of the instantaneous velocity over time on one numerical probe located at mid-span, at the center of the inter-blade channel and at the same axial position as the leading edge of the blades. For conciseness, only the signal for ZDES with L = 8 mm is shown. The red curve depicts the instantaneous velocity modulus signal whereas the green curve is the time-averaged quantity over a sliding window. The sliding window width is equal to ten time passages of a particle convected from the leading edge to the trailing edge of the blade. Both signals are normalized by the final time-averaged value: this gives direct access to the variation in percent of the signals. The time in abscissa is normalized by this time passage. Figure 4 shows that the mean values are converged and vary by less than a few percent over these periods. Convection of the injected turbulent structures Figure 5 illustrates the injection in the inlet plane and the convection of the turbulent structures for two ZDES simulations with prescribed external turbulence. It makes it possible to verify that the shape and size of the turbulent structures at the inlet and their evolution through the computational domain are qualitatively correct. The purpose of injecting external turbulence is to reproduce the experimental conditions in which a turbulence rate of 6% was measured on the leading-edge plane of the cascade. Calculations of this turbulence rate from the simulations show that ZDES L = 8 mm and ZDES L = 4 mm actually reached a turbulence rate at the LE of 5.5% and 4.5%, respectively. With the decaying turbulence law, the turbulence rate is slightly underestimated in the simulations compared to the measurements. Nevertheless, it remains quite significant and it is believed to have a noticeable influence on the results. Behavior of the ZDES shielding function and analysis of the flow Analysis without external turbulence The present ZDES formulation had never been applied to a turbine configuration with secondary flows. As a validation step of the approach, the shielding function maps (indicating RANS or LES solving) are inspected. Figure 6-a presents a flow visualisation with secondary flows colored by the helicity (blue denotes counterclockwise rotation while red denotes clockwise rotation). Close-up views are shown in subfigures within Fig. 6-a. The same passage vortex labelled 1a and described in Fig. 2-a and a flow separation (also present in RANS) on the pressure side of the blade can be observed. On all subfigures in Fig. 6-a, a second color map is superimposed on a plane located at x = 0.2C_x, in which the shielding function field f_d is displayed. Dark magenta color indicates that the area is solved with RANS modeling; otherwise it is solved with LES. The dark magenta area near the wall indicates the presence of the boundary layer, which is, as expected, treated in RANS; the rest of the domain is treated in LES. Locally, Fig. 6-b shows the presence of the passage vortex near the lower wall, interacting with its boundary layer: this implies that the boundary layer is no longer a canonical attached boundary layer in its upper part, and for this reason the method switches to LES solving there. This is a consistent behavior of the approach, which automatically detects this kind of situation. In Eq.
1, the additional vorticity brought by the main vortex decreases the variable r d and modifies the value of the shielding function to 1 (LES solving). Same behavior was observed by Franc ¸ois et al. [2020] with tip vortices nearby a shroud wall on a fan. Fig. 6-a,b (bottom-right) shows that the flow separation is well solved in LES mode near the wall. This is also an expected feature of the approach. For conciseness, the colormap of the shielding function is shown only on one plane but additional analysis have demonstrated a relevant repartition of the RANS and LES solving area in the whole computational domain. Effect of the external turbulence The validation work is continued with the analysis of the effect of external turbulence on the RANS/LES areas. Using the same pattern as Fig. 6-b for shielding function visualisation, Fig. 6c,d presents ZDES with external turbulence for two turbulent length scales. Observations of the boundary layer on the lower wall show that the RANS area reduces due to the presence of the external turbulence. The presence of eddies give rise to strong velocity gradient at the border of the boundary layer. Compared to simulations without external turbulence, the variable r d becomes smaller and the model switches to LES in these part. This explains why the RANS area is smaller with external turbulence in ZDES simulations. This expected behavior is observed in the whole computational domain. COMPARISON TO THE MEASUREMENTS AND RANS RESULTS Pressure losses downstream of the blades Figure 7 presents stagnation pressure maps at a quarter-chord downstream of the cascade for the experimental data, a RANS simulation using the Spalart-Allmaras turbulence model and two ZDES simulations accounting for external turbulence with two turbulent length scales. In the same figure, another prediction using ZDES with a mesh of higher density (27 millions points) than the one presented in Fig. 3 is added. The convection of the passage vortex, described in Fig. 2, generates stagnation pressure losses at positions z/H ≈ 0.2 and z/H ≈ 0.8 on the channel height. The pressure losses due to the wake are also visible on the whole channel height at y/s ≈ 0. In the simulations, the RANS results with the SA turbulence model predicts a correct shape of the maps of losses but not their magnitude. Pressure losses are overestimated in both the wakes and the passage vortex where the ZDES results reproduces quite accurately the pressure losses maps, both in shape and magnitude. In a companion paper, Uncu et al. [2023] show similar overestimations of pressure losses with other turbulence models (relying on the Boussinesq approximation and a Reynolds Stress Model), although these discrepancies with measurements are somehow compensated when comparing the radial profiles of pressure losses due to the azimuthal average. In Fig. 7, a slight overestimation of the pressure losses can be observed compared to the measurements near the lower wall. Comparison of Fig. 7-c and Fig. 7-e show the predicted maps of pressure losses are almost identical with two different meshes. First, the RANS and ZDES results on mesh with 13 millions points (named M13) are compared. At mid-span, the results are identical between the three ZDES and match quite well with the measurements: the same under-estimation of the blade loading as RANS results is observed. This is an expected behavior of the method because at this channel height, the boundary layer remains attached and solved in RANS with ZDES. 
Also, the presence of external turbulence has no effect on the C p distribution for this blade height. At blade height of 4% and 6%, both ZDES results with external turbulence are very similar. Both show a significantly better agreement of the results with the measurements compared to RANS results and to ZDES without external turbulence. The results of the latter are quite similar to the RANS results. At blade height of 6%, the bump observed at a position of 60% chord in the measurements on the suction side is reproduced with the same intensity with ZDES but at a slightly more upstream position than in the measurements whereas the tested RANS model over-estimates this bump (such as the ZDES without external turbulence). Among the two ZDES accounting for external turbulence, the simulation using the turbulent length scale of 8mm (and having the closest turbulence rate to the measurements when meeting the blade leading-edge) is in closest agreement with experiments. At blade height of 4%, a bump is observed on the C p distribution with the RANS model while it is absent with the ZDES results with mesh M13. Figure 9 shows that the main reason of this discrepancy is the position of the low pressure area which is more distant (and of higher intensity) to the hub for the RANS model than for the ZDES results. Wall pressure on blade The results of a ZDES simulation on a refined mesh M27 are also plotted in Fig. 8 and Fig. 9, which provides some insights on the mesh sensitivity of the results. At h/H = 6% and h/H = 50%, the discrepancies between the two meshes are quite low as observed in Fig. 8. At h/H = 4%, the discrepancies between the two ZDES with different densities is as large as the ones with RANS results. Inspection of the pressure distribution on the whole suction side surfaces in Fig. 9 shows that the reason for this is again that the trajectory of the passage vortex along the suction side (identified by the low pressure area) is slightly different. The low pressure area is located closer to the hub surface for the simulation using the mesh of 27 millions points and intersect the isosurface at h/H = 4% whereas for the ZDES using the mesh of 13 millions points, it does not intersect this iso-surface. CONCLUSIONS The present paper proposes an assessment of a hybrid RANS/LES approach, the Zonal Detached Eddy Simulation (ZDES), on the aerodynamic simulation of turbine flow with external turbulence on a linear cascade. The flow topology around turbine blades is quite complex because secondary flows are present and generate additional losses. Their prediction is important in an industrial design process. The ZDES results have shown that first, the approach works correctly on such flows with a consistent distribution between RANS and LES solving areas. Accounting for external turbulence with this approach for a turbomachinery flow was also new in this work and the flow analysis shows that the method behaves as expected. These observations increase the maturity of the approach for turbomachinery flows. Furthermore, the ZDES results presents a very good agreement with the experimental results on both the pressure losses map and on the pressure distribution on the blades. Regarding the last point, the comparison with the C p measurements must be done with care because the numerical predictions are strongly dependent of the trajectories of the secondary flows. Three major conclusions can be drawn from this work. 
First, accounting for the external turbulence in this configuration significantly improves the results, especially for the secondary flows. Second, the ZDES results provide better predictions than RANS simulations using the Spalart-Allmaras model. Third, the mesh influence study has revealed a low sensitivity of the flow solution to the mesh for the two meshes tested. Future work will be dedicated to the simulation of this configuration with a cavity flow interacting with the main flow. Figure 1: Test rig of the MAGPI configuration from Schuler et al. [2011] (left) and main geometrical and flow features (right). Figure 2: Flow topology of the secondary flows from a RANS simulation illustrating the main flow physics of the MAGPI configuration. Blue and red colors indicate respectively counterclockwise and clockwise vortices with respect to the X-axis. Legend from (a) applies to figures b, c, d. Figure 4: Instantaneous and time-averaged (10 periods) signal of the magnitude of the velocity over time iteration for the ZDES with turbulent length scale L = 8 mm. Figure 5: Illustration of the turbulent structures (Q criterion colored by the axial momentum) for the ZDES for length scales (left) L = 8 mm and (right) L = 4 mm. Fig. 6-b is identical to Fig. 6-a except that the secondary flow isocontours have been removed for a better visualisation of the shielding function colormap. Figure 6: Plots of secondary flows with shielding function contour maps at plane x = 20% without external turbulence (a & b), and with external turbulence (Tu = 6%) and two turbulence length scales: (c) L = 8 mm, (d) L = 4 mm. Same legend applies for all the figures. Figure 7: Comparison of stagnation pressure map (P_i − P_i,amb) at a quarter axial chord downstream of the cascade. P_i,amb refers to the stagnation pressure at one axial chord upstream of the leading edge of the cascade. MXX indicates the number of million cells in the mesh used. Figure 8: Pressure coefficients at h/H = 4% (left), h/H = 6% (center) and h/H = 50% (right) from ZDES. Figure 9: Non-dimensioned time-averaged pressure on the suction side of the blade from RANS and ZDES simulations (Tu = 6%, L = 8 mm) using two mesh densities. F. Uncu, B. François, R. Barrier, N. Buffaz, and S. Le Guyader. RANS Prediction of the Aerodynamic Losses in a Linear Turbine Cascade with an Upstream Cavity and a Purge Flow. In 15th European Turbomachinery Conference, 2023. ACKNOWLEDGEMENTS The authors wish to thank Fatih Uncu for the fruitful discussions on the experimental database of the MAGPI configuration and on the RANS modeling and Safran Helicopter Engines for providing this experimental database in the framework of the PhD thesis of Fatih Uncu. The authors are grateful to the EU commission which funded the experiments in the framework of the MAGPI project (Grant number 30874).
04059426
en
[ "spi" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04059426v2/file/TOUIL%20IET%20journal%20%281%29.pdf
Touil Abderrahim email: [email protected] Babaa Fatima Kratz Frédéric Bennis Ouafae
An Early Diagnosis of Outer Race Bearing Defects In Induction Machines Based On Line Neutral Voltage Analytical Calculation
This paper is devoted to the diagnosis of outer race bearing faults in induction motors through the analytical development of the line neutral voltage. The study is based on the analysis of the frequency content of the neutral voltage after the appearance of the fault. Despite the minimal cost of bearing maintenance, most machine failures are caused by bearings. The investigation of outer ring bearing faults is therefore essential because of the financial losses and downtime resulting from unexpected events caused by this fault. Early detection of bearing faults becomes necessary in order to increase the yield of the machine as well as its lifespan. Reliable monitoring can only be achieved if the chosen diagnosis method is in adequacy with the nature of the fault, and the chosen method must facilitate the diagnosis as much as possible. In fact, this mechanical defect gives only a weakly significant defect index when the MCSA method is used. The main contribution of this paper is to propose an accurate indicator of the outer race bearing fault that is different from the signature engendered by the eccentricity fault. The line neutral voltage calculation taking the outer race bearing fault into account is developed and simulated. To support and confirm the simulation results, an experimental study is also presented in this paper.
Introduction
Today, maintenance activities are massively oriented towards "FMDS" type services (Reliability, Maintainability, Availability, and Security). These activities meet overall performance objectives in order to guarantee continuity of service of the electrical systems while minimizing the maintenance costs. The new industrial objective is to switch to Industry 4.0, which consists of monitoring and controlling machinery and equipment in real time. For that, it is necessary to propose an optimal monitoring method able to process the data provided by sensors at each stage of the production process. To achieve this fourth industrial revolution and obtain a connected factory, it is necessary to study separately and efficiently the different elements of the sub-systems of an industrial electrical installation. One of the most important constituents in the process is the induction motor. In fact, the asynchronous machine occupies an ideal position in industrial processes because of the advantages that distinguish it from other electric machines.
Its accuracy, efficiency, simplicity focus on a low maintenance cost [1][START_REF] Silva | Multi-domain model of faulty stator core for thermal effects and losses evaluation[END_REF][START_REF] Fong | An intelligent online machine fault diagnosis system[END_REF][START_REF] Mendes | Fault diagnostic algorithm for three-level neutral point clamped AC motor drives, based on the average current Park's vector[END_REF][START_REF] Alalibo | Bi-Signature optical spectroscopy for online fault detection in electrical machines[END_REF][START_REF] Rahnama | Machine-learning approach for fault detection in brushless synchronous generator using vibration signals[END_REF][START_REF] Vinayak | Wavelet-based real-time stator fault detection of inverter-fed induction motor[END_REF][START_REF] Akbari | Calculation of inductances of induction machines under axial non uniformity conditions[END_REF][START_REF] Khezzar | Induction motor diagnosis using line neutral voltage signatures[END_REF][START_REF] Khezzar | Accurate modelling of cage induction machine with analytical evaluation of inductances[END_REF].During his life, the machine is exposed to various malfunctions. For that, many extensively researchesare proposed to predict and deal well the inappropriate stop of industrial process.Induction machine malfunction can be caused by two principal kinds of failures. These failures can be outside or inside of the machine. The most important outside failure is the unbalanced supply which affects the power supply source [START_REF] Babaa | Combined Electrical Faults Detection and Diagnosis Using Current Signature Analysis[END_REF][START_REF] Morsy | Sensorless speed control of a five-phase induction machine under open-phase condition[END_REF][START_REF] Xu | SVD filtering and TLS-ESPRIT algorithm based on stator fault characteristic detection of doubly-fed induction generator[END_REF][START_REF] Babaa | Steady State Analytical Study of Stator Current Harmonic Spectrum Components on Three-Phase Induction Motor under Unbalanced Supply Voltage[END_REF][START_REF] Moosavi | Comparison of rotor electrical fault indices owing to inter-turn short circuit and unbalanced resistance in doublyfed induction generator[END_REF]. In a long way, this fault provokes a many other faults. For the inside failures, we distinguish the electrical and mechanical faults. 
Thestator faults in the machine such as an inter-turn short-circuits which threatens the insulating part in the stator [START_REF] Touil | Analytical calculation of inductances under stator inter-turn short circuits fault condition in operating squirrel cage induction motors[END_REF][START_REF] Han | Modelling, diagnosis, and tolerant control of phase-to-phase fault in switched reluctance machine[END_REF][START_REF] Zeng | Inter-turn fault diagnosis of permanent magnet synchronous machine based on tooth magnetic flux analysis[END_REF][START_REF] Foitzik | Fault tolerant control of a three-phase PMSM by limiting the heat of an inter-turn fault[END_REF][START_REF] Nazemi | Stator and rotor turn-to-turn fault detection in wound rotor induction machines based on the air-gap magnetic field distortion[END_REF][START_REF] Sun | Compound frequency modulation strategy of DFIG-based wind turbines dealing with stator winding inter-turn short-circuit fault[END_REF][START_REF] Chen | Electromagneticthermal coupled modelling and analysis of inter-turn shortcircuit faults of a permanent magnet alternator[END_REF][START_REF] Alvarez-Gonzalez | Permanent magnet synchronous machine stator windings fault detection by Hilbert-Huang transform[END_REF][START_REF] Liu | Stator inter-turn fault detection in closed-loop controlled drive based on switching sideband harmonics in CMV[END_REF][START_REF] Abdallah | Stator winding inter-turn short-circuit detection in induction motors by parameter identification[END_REF][START_REF] Touil | Fault Tolerance Control and Diagnosis of Induction Machine under Stator Fault Conditions[END_REF], or a crack or rupture of the rotor bar [START_REF] Bossio | Rotor fault diagnosis in permanent magnet synchronous machine using the midpoint voltage of windings[END_REF][START_REF] Marzebali | Manipulation of stator current signature for rotor asymmetries fault diagnosis of wound rotor induction machine[END_REF][START_REF] Panagiotou | FEM approach for diagnosis of induction machines nonadjacent broken rotor bars by short-time Fourier transform spectrogram[END_REF][START_REF] Fu | Multiple coupled circuit modelling approach for squirrel cage induction machine under single-broken-bar fault with stator winding functions decomposed in d-q rotor reference frame[END_REF][START_REF] Ramu | Broken rotor bar fault detection using Hilbert transform and neural networks applied to direct torque control of induction motor drive[END_REF][START_REF] Zaggout | Detection of rotor electrical asymmetry in wind turbine doubly-fed induction generators[END_REF][START_REF] Hou | Quantitative broken rotor bar fault detection for closed-loop controlled induction motors[END_REF][START_REF] Kaikaa | Effects of the simultaneous presence of static eccentricity and broken rotor bars on the stator current of induction machine[END_REF][START_REF] Nandi | Detection of rotor slot and other eccentricity related harmonics in a threephase induction motor with different rotor cages[END_REF], or the asymmetry error due to the manufacturing. 
As it concerns the mechanical faults, the rolling elements fault like the bearing fault, the ball defect and the cage of ball defect [START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF][START_REF] Jafarian | Spectral analysis for diagnosis of bearing defects in induction machine drives[END_REF][START_REF] Vilchis-Rodriguez | Wound rotor induction generator bearing fault modelling and detection using stator current analysis[END_REF][START_REF] Mcfadden | Model for the vibration produced by a single point defect in a rolling element bearing[END_REF][START_REF] Blodt | Models for bearing damage detection in induction motors using stator current monitoring[END_REF][START_REF] Schoen | Motor bearing damage detection using stator current monitoring[END_REF], are the defects that appears the most. This fault is definedas an eccentricityproduced by the axial overload on the axes of rotation [START_REF] Dorrell | Analysis of airgap flux, current, and vibration signals as a function of the combination of static and dynamic airgap eccentricity in 3-phase induction motors[END_REF][START_REF] Hadjami | Analytical model of cage induction machine dedicated tothe study of axial non-uniformities[END_REF][START_REF] Faiz | Eccentricity fault diagnosis indices for permanent magnet machines: state-of-the-art[END_REF][START_REF] Mazaheri-Tehrani | Airgap and stray magnetic flux monitoring techniques for fault diagnosis of electrical machines: An overview[END_REF][START_REF] Faiz | Detection of mixed eccentricity fault in doubly-fed induction generator based on reactive power spectrum[END_REF][START_REF] Afshar | Eccentricity fault detection in brushless doubly fed induction machines[END_REF][START_REF] Ehya | Static and dynamic eccentricity fault diagnosis of large salient pole synchronous generators by means of external magnetic field[END_REF][START_REF] Nasiri-Gheidari | New design solution for static eccentricity in single stator-single rotor axial flux induction motors[END_REF][START_REF] Faiz | Dynamic air gap asymmetry fault detection in single-sided linear induction motors[END_REF][START_REF] Oumaamar | Static air-gap eccentricity fault diagnosis using rotor slot harmonics in line neutral voltage of threephase squirrel cage induction motor[END_REF], that are divided in the outer and the inner bearing ring defects.These latterare known as a temporary static and dynamic eccentricity defects respectively [START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF][START_REF] Mcfadden | Model for the vibration produced by a single point defect in a rolling element bearing[END_REF]. Bearings are a very most important part of the machine because they are the force transmission elements between the moving and static components. As such, they contribute considerably to the proper functioning of equipment. Bearings deteriorate as a result of manufacturing errors, incorrect assembly, static and dynamic loads, deficient lubrication or because of the harshness of the environment. 
However, even if a bearing is perfectly manufactured, assembled, etc., it will eventually present a defect caused simply by normal wear of the elements that compose it.Studying bearings permits to understand their shaping in order to predict their operation and make a distinction between theimpacts on the behaviour of the machine and also understand the signatures allowing going back to the main cause of the failure. The one of the first works to have studied this defect is McFadden and al. [START_REF] Mcfadden | Model for the vibration produced by a single point defect in a rolling element bearing[END_REF]. This paper is considered as a key reference in bearing fault detection. In fact, bearing faults provoke the misalignment and eccentricity in the rotating part that can be divided into several parts, static eccentricity, dynamic eccentricity, but in the reality, we have the co-existence of the both faults in the same time which called by the mixed eccentricity. There are various diagnostic methods to monitor the conditions of the working system and to prevent the industrial system from being interrupted [START_REF] Touil | Analytical calculation of inductances under stator inter-turn short circuits fault condition in operating squirrel cage induction motors[END_REF][START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF]. Depending on the measurable quantities, the diagnostic methods know is: Stator current analysis, vibration analysis, Flux analysis, Electromagnetic torque analysis and neutral to neutral analysis [START_REF] Khezzar | Induction motor diagnosis using line neutral voltage signatures[END_REF]. Line neutral voltage is defined as the voltage present between the neutral of the power source and the neutral of theasynchronous motor. In this paper we study the outer ring bearing fault by the line neutral analysis method. Through an analytical development of line neutral voltage, the paper proposes also to separate and dissociate the indices of the apparition of this fault with the others related to the eccentricity fault. We notice that, in previous works that, line neutral voltage has been previously used and that it gave better results in comparison with the Motor Current Signature Analysis method (MCSA) [START_REF] Oumaamar | Static air-gap eccentricity fault diagnosis using rotor slot harmonics in line neutral voltage of threephase squirrel cage induction motor[END_REF].The development of our model is based on the sensitivity of the magnetic field in the air gap which separates the stator from the rotor. In fact, the variation of this air gap as a function of time will be changed in presence of fault and stay unchanged in healthy conditions. In some fault condition, the minimum thickness of the air gap will vary over time. This variation influences considerably the values of the different inductances of the asynchronous machine [START_REF] Touil | Analytical calculation of inductances under stator inter-turn short circuits fault condition in operating squirrel cage induction motors[END_REF][START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF], in particular the mutual inductances which depends on the permeability of the air gap. 
This analytical study takes into account the variations that appear in the mutual inductances and proposes an accurate formula used for the inductance calculation. To support our study and the proposed analytical development, experimental results are also presented in the paper. Analytical multi-loop model for squirrel cage induction motor. Electrical Equations In our squirrel-cage induction motor modelization, we assume that the distribution of the windings, in the stator phases and in the rotor cage, is identical. We also suppose that the bars are electrically connected with the end-rings so as to produce identically spaced multi-loops. Under these assumptions, the system of equations in vector matrix form, including the line neutral voltage, is: Where: , and are the stator current, voltage and flux vectors respectively; , and are the rotor current, voltage and flux vectors respectively; and , and are the corresponding end-ring voltage, current and flux. , are the stator and rotor resistance matrices respectively. , are the stator and rotor inductances respectively. For the neutral voltage, is placed between the supply neutral point and the stator neutral point. As there is no direct way to quantify the behaviour of the line neutral voltage, we put the previous equation in line-to-line voltage form: Where: , and From with . Mechanical Equations According to the principle of energy conversion and considering the change of co-energy of the system, the torque variation corresponding to a slight change in the rotor's position, when the currents are held constant, is: The equations describing the electromagnetic torque, the rotor's position and the rotor's angular speed are as follows: The angular speed of the rotor can be calculated from the variation of the electromagnetic torque in the presence of a load as in equation [START_REF] Babaa | Combined Electrical Faults Detection and Diagnosis Using Current Signature Analysis[END_REF]. The minimum length position of the radial air gap assumption Starting from the common assumption that the air gap is constant in the construction of the asynchronous machine, the rotor moves inside the machine in alignment with the rotor axis. In healthy monitoring conditions, in the proposed model, the chosen reference is the axis of rotation. In the case of any abnormal factor such as poor manufacturing, a non-uniformly circular stator core or a bearing fault, we remark the existence of a permanent static eccentricity. Most referenced research on static eccentricity raises the idea of a misalignment between the rotor centre and the axis of rotation. Leaving aside mixed eccentricity, which is the best known among eccentricity faults, the pure static eccentricity is defined as a fixed position of the minimal radial air-gap length in space (see Fig. 1), expressed with the following formula: When the outer race bearing fault occurs in the presence of a small gap or slit as in Fig. 1, the rolling elements pass over the irregularity and produce a temporary mismatch between the rotor centre and the axis of rotation. This phenomenon leads to a vibration considered as a temporary static eccentricity as in [START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF]. This vibration signal appears with a periodic frequency called the Ball Pass Frequency of the Outer race (BPFO) (see Fig. 2) and is described as a train of impulses. This signal is expressed by: Fig.
3 shows the shape of the length of the air gap as a function of the angle of rotation in different states, healthy condition which is constant, in the static eccentricity condition referenced to equation ( 13)and in the outer ring bearing fault with equation ( 15). Inductances calculation Dimensioning inductors theoretically consists in the study of the winding function theory which is the development of mutual inductance between two windings in the electric machines. -a- -b- In healthy state the air gap is uniform, sothe winding stator functions is calculated as follows: To get a reality of the machine, it is necessary to considering the non-uniform of the air gapwhen we have a faulty condition. Then it is necessary to modify the winding function.By using the formula of the winding function theory [START_REF] Khezzar | Accurate modelling of cage induction machine with analytical evaluation of inductances[END_REF][START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF].The different inductance calculations (stator, rotor, mutual stator-rotor) depend h val f "p" mb f p l pa W our case the condition p Inprevious research we calculate the difference inductors under healthy state and both of the faulty parts (static eccentricity and outer race bearing fault) [START_REF] Abderrahim | Analytical Model for Separated Frequency Signature of Outer Race Bearing Fault from Static Eccentricity[END_REF]. The choice to work with the presence of static eccentricity is due to the proposed approach that the condition for the occurrence of bearing outer ring faults is a temporary fault which affects the performance of the machine instead of a permanent fault when a static eccentricity occurs. The changes that occur in inductance values led to the conclusion that there is a small change in stator coils values and of air gap length change affects ' l .On the other hand, the air gap length variation affects the distribution of the mutual induction by causing an imbalance in the distribution of the magnetic field around the rotor. So, the mutual inductance between stator coils and rotor loops with the consideration of space harmonics is given by: Healthy condition: Static air gap eccentricity condition: Where: Outer race bearing fault condition: Fig. 4 shows different presentations of mutual inductance in healthy and faulty conditions. The influence of the outer bearing fault is considered as a mp a y a y ma k by a fa l a l γ and a static eccentricity level . Line neutral voltage Healthy Condition As known, the multi-loop model of induction machine supplied by an asymmetrical three-phase network voltage produces a rotating field and causes the creation of current in the rotor loops resulting the rotation of the shortcircuit cage. The multi loops currents expressionis: Where: The expression of line neutral voltage is a two-part formula, withshape of a sinusoid that is the sum of an N b series of equations shifted by This expression equals zero except when the space harmonics on the part of the rotor cage have a common point with . This assumption leads to a specified setwhich represents the order of a high frequencies component in the induction machine line neutral voltage. Under this condition sets, the harmonics components that appear in the line neutral voltage are a consequence of rotor loops distribution called the RSHs. These harmonics and each related orders are shown in Table 1. Fig. 
5 represents the spectral content in for line neutral voltage for a healthy machine.The simulation was made for an induction machine whichwere made for a 1.1-kW motor with 24 stator and22rotor slots, 2 pair poles. The RSHs which can be seen onthe spectrum curve at related to the order h b with and , the second one at related to the orderh b with . It is evident that for the case of or theRSHs harmonics doesn't appear because of the correspondent harmonic order does not belong to 'G'. Under Static Eccentricity Action Permanent misalignment of the axis of rotation caused by an unequal distribution of the air gap between the stator and the rotor leads to the existence of field variation that crosses the air gap cause by mismatch of the minimal space. Also, a permanent misalignment of rotation axis caused by a small eccentricity affects the inductances values which are dependent on the winding and the permeance functions (the inverse air gap function).In this case the change in the flux which crosses the rotor introduces a change in currents of rotor loops. The resulting currents are defined by the interaction of the stator current with the new part of the mutual inductance. The rotor loops current become: The expression for line neutral voltage: Where: is the additional part of the mutual induction that defines the eccentricity appearance. The first term represents the interaction of healthy mutual inductance with the healthy rotor loops current.The first additional part of line neutral voltage term is related to the interaction of healthy mutual inductance with the additional rotor loops current that is caused by the eccentricity appearance describe in [START_REF] Hadjami | Analytical model of cage induction machine dedicated tothe study of axial non-uniformities[END_REF]. Where: Thisexpression is a sumof multiple waves shifted either by or by . Equation(37)equals zero except when the space harmonicsof the rotor cagepart have a common point with . Undid, in this casethe assumption leads to a specified order of harmonics due to static air gap eccentricity related to space harmonics with the following set: The second additional part of line neutral voltage term is related to the interaction of the faulty part of the mutual inductance with the healthy part of rotor loops current (the term is described in [START_REF] Mazaheri-Tehrani | Airgap and stray magnetic flux monitoring techniques for fault diagnosis of electrical machines: An overview[END_REF]). Where: hp hp This expression is a sumof multiple waves shifted either by hp or by hp . Equation(39)equals zero except when the space harmonicsof the rotor cagepart have a common point with . In this casethe assumption leads to a specified order of harmonics due to static air gap eccentricity related to space harmonics with the following set: The third additional part of line neutral voltage term is related to the interaction of faulty part of the mutual inductance with the additional rotor loops current caused by the eccentricity appearance, describe in [START_REF] Afshar | Eccentricity fault detection in brushless doubly fed induction machines[END_REF]. Where: hp hp hp hp This expression is a sumof multiple waves shifted either by hp or by hp 1 . Equation ( 41)equals zero except when the space harmonicsof the rotor cagepart have a common point with . 
In this casethe assumption leads to a specified order of harmonics due to static air gap eccentricity related to space harmonics presentedin the following set: hp hp hp hp Under this condition sets, the harmonics components that appears in the line neutral voltage under static eccentricity fault are shown in Table 2. The spectral content for line neutral voltage of faulty machine (see Fig. 6), and the RSHs which can be seen additional to the ones in healthy condition is according to assumption the ones in the third additional term. So, the spectrum curve at related to the order h b with and , the second onesat related to the order h b and related to the orderh b . For the case of =2 or =4 the RSH harmonics appear in the faulty case with the same corresponding harmonic order 'G' as in healthy condition. Presence of outer race bearing fault Since theouter race bearing fault is the fault that appears by any cracks or deformation or breakage, or irregularity in the outer race of the ball bearing, each contact between the ball and the outer ring produces a temporary vibration can be considered as temporary static eccentricity. The change of the flux which crosses the rotor provokes a change in currents of rotor loops.Then, anew resulting current defined by: The line neutral voltage expression is: Where: In line neutral voltage expression two terms are exist. The first is related of healthy state, the second related to the static eccentricity. However, another term appears. This latter is related to the bearing fault of the outer ring from the interaction of healthy mutual inductance with the additional rotor loop current. It is also from the interactionof the additional part of the mutual inductance caused by the occurrence of the fault with the healthy currenttaking into account the interaction with the static eccentricity expressions. The first additional part of line neutral voltage term is describedin [START_REF] Jafarian | Spectral analysis for diagnosis of bearing defects in induction machine drives[END_REF]. Where: The expression is a sumof multiple waves shifted either by or by . This equation equals zero except when the space harmonicsof the rotor cagepart have a common point with . The assumption made above leads to define the order of harmonics generated by the bearing faults: The second additional part of line neutral voltage term is related to the interaction of the faulty part of the mutual inductance caused by bearing fault appearance with the healthy part of rotor loops current which is describe in [START_REF] Mcfadden | Model for the vibration produced by a single point defect in a rolling element bearing[END_REF]. Where: hp hp The expression is a sumof multiple waves shifted either by hp p or by hp p . This equation ( 49) equals zero except when the space harmonicsof the rotor cagepart have a common point with . So, theorder of harmonics due to bearing fault appearance related to space harmonics is: The third additional part of line neutral voltage term is related to the interaction of faulty part of the mutual inductance caused by the eccentricity appearance with the additional rotor loops current caused by bearing fault appearance which is describe in [START_REF] Schoen | Motor bearing damage detection using stator current monitoring[END_REF]. The expression is a sumof multiple waves shifted either by hp or by hp ±1 . This equation ( 51) equals zero except when the space harmonicsof the rotor cagepart have a common point with . 
In this case, the orders of harmonics are: As it concerns the forth additional part of line neutral voltage term, is define as the interaction of faulty part of the mutual inductance caused by bearing fault appearance with the additional rotor loops current caused by the eccentricity appearance which is describe in (53). The expression is a sumof multiple waves shifted either by hp or by hp ±1 . This equation equals zero except when the space harmonicsof the rotor cagepart have a common point with . In this case theorder of harmonics is define as: The last additional part of line neutral voltage term is related to the interaction of faulty part of the mutual inductance with the additional rotor loops current caused by bearing fault appearance which is described in (55). Each part of this expression is a sum of multiple waves shifted either by hp or by hp . The equation (55) equals zero except when the space harmonicsof the rotor cagepart or have a common point with . We put theorder of harmonics in the following equation. The harmonics components that appear in the line neutral voltage under outer race bearing fault are shown in Table 3. The RSH orders for the frequencies engender by outer race bearing fault indicate that only the third and the forth terms of line neutral voltage contribute to the appearance of some harmonics related to RSH components with the formula f The fifth term contribute with the appearance of the RSH components define by the formula f . Simulation results For simulation, we use a machine with 2 pairspoles , 3 phases, 50Hz with 22 rotor bars.The spectrum results were computed with Fast Fourier Transform, all results were obtained with MATLAB software.The line neutral voltage spectrum is simulated in static eccentricity fault condition. We confirm the appearance of several RSHs related to the fault. Another RSHs harmonicsare also evident and are related tothe healthy condition.By using [START_REF] Panagiotou | FEM approach for diagnosis of induction machines nonadjacent broken rotor bars by short-time Fourier transform spectrogram[END_REF]healthy condition contributes to the emergence of RSHs with the orderh b p andh b p respectively when and with , While and do not appear, they indicate that the harmonic order does not belong to "G". The RSHs exist for and due to the presence of SE.The harmonics with the order h b p andh b p belong to " ". Table 2 summarize the frequencies shown on line neutral voltage spectrum in the case of SE existence.Under the same testing conditions with the slip the RSHs that can be seen additionally to the othersare related to the orderh b with , the second oneat related to the orderh b and related to the orderh b . In bearing fault condition, the sum of the frequencies which appear can be divided into three parts. The first one when harmonic order belongs to "G", that is the harmonics of the healthy part. These frequencies appear also in spectrum curve of outer race bearing faultTable 1. The second part is related to the RSHs with and these latter are appeareddue to the presence of a temporary SE when the ball passes by the irregularity and the harmonic order belong to " "Table 2. The third part, harmonics represent the RSHs related to bearing fault exclusively.The harmonic order belongs to " " or " "with an offset of , and with when harmonic order belongs tothe association of" " and " " Table 3. h f h f h f h f h f h f h f h f h f f h f f h f f h f f 7. 
Experimental results and discussion A test bench has been set up at the Constantine Electrical Engineering Laboratory "LEC", University Constantine 1, 25000, Constantine, Algeria. In order to support the suggested model for outer ring bearing fault diagnosis using the line neutral voltage, we acquired data from experimental tests with a 5-kW squirrel-cage induction motor. The motor is loaded with a magnetic powder brake. The tested motor is a 5 kW three-phase induction motor with 28 rotor bars and 2 pole pairs, manufactured by FIMET, with an accessible neutral point (see Fig. 8). The line neutral voltage acquisition was accomplished using dSPACE with a 50 kHz sample rate and a data signal length of 5 s. The outer ring defect was created by grooving the outer ring of the bearing (see Fig. 9). The results for the outer ring bearing fault were obtained after several tests at load speeds of 1421 rpm and 1422 rpm for a defective bearing and 1420 rpm for a healthy bearing, with a series 6208 bearing (Bd = 10.49 mm, Pd = 61 mm, N = 9). This bearing series features low friction and is designed to generate low noise and vibration levels at high rotational speeds. These bearings support radial and axial loads in both directions, are easy to mount, and require little maintenance. We made a diagnosis by analyzing the line neutral voltage. The line neutral voltage spectrum was studied and analyzed for the healthy and the defective cases. One can already notice spectral components at f s ± f r in the healthy state because of the original eccentricity. In the faulty case, the component f s + f r shows a small growth in value, especially for the first harmonic, as shown in Fig. 10. As described in much previous research, these components are related to the mixed eccentricity and appear as sidebands at low and high frequencies with the relation f s ± f r. They appear in the healthy state as a consequence of a small vibration caused by the asymmetry in the rotor construction and load torque oscillation. In the outer race bearing fault condition, we clearly notice the increase of these components. The machine being directly connected to the power source, an unbalance exists from the beginning. Consequently, some specific harmonics exist in both the healthy state and the outer race bearing fault condition. These harmonics exist in all induction motors because of a small unbalance provided by the mains supply (see Fig. 11). The spectral analysis of the line neutral voltage always shows distinct frequencies linked to the distribution structure of the rotor bars (harmonics related to RSHs). Indeed, these frequencies still exist even with a barely perceptible eccentricity (see Fig. 12), which explains their existence in the outer race bearing fault condition, as shown in [START_REF] Ehya | Static and dynamic eccentricity fault diagnosis of large salient pole synchronous generators by means of external magnetic field[END_REF]. We remark that the same harmonics appear in the healthy state because of the static eccentricity due to a small asymmetry in the induction machine construction. To find the different outer ring bearing fault signatures, it is necessary to identify the harmonics using the formulas in Table 3. Fig. 13 shows that the amplitudes of some specific harmonics related to the outer race bearing fault have increased due to the small eccentricity resulting from the ball movement and the irregularity in the outer ring, in the fully loaded machine condition (see Appendix 1 for the table of results).
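As a quick cross-check of the characteristic frequency f 0 discussed below, the outer race ball pass frequency can be recomputed from the 6208 bearing geometry quoted above with the standard kinematic formula (its explicit expression is not restated in the extracted equations). The following Python sketch is ours; the zero contact angle and the function names are assumptions made for illustration.

```python
import numpy as np

def bpfo(n_balls, ball_d, pitch_d, f_rot, contact_angle=0.0):
    """Ball pass frequency of the outer race (standard kinematic formula)."""
    return 0.5 * n_balls * f_rot * (1.0 - (ball_d / pitch_d) * np.cos(contact_angle))

f_r = 1421.0 / 60.0                       # shaft frequency for the 1421 rpm tests
f_0 = bpfo(n_balls=9, ball_d=10.49, pitch_d=61.0, f_rot=f_r)
print(f_0)                                # about 88 Hz, of the order of the 87.36 Hz of Table 4
harmonics = [k * f_0 for k in range(1, 6)]  # first multiples, to compare with Table 4
```

The small difference with the 87.36 Hz value of Table 4 comes from the exact shaft speed and contact angle used in the experiments.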
Fig. 14 represents some of the harmonics predicted in the literature in relation to f 0 , the characteristic frequency of the bearing outer ring failure (Table 4). The tests are run several times and the induction motor is tested under the same load conditions, with different fault levels. The specific harmonics of the outer ring fault increase with the fault level. This increase is explained by the increase in the degree of eccentricity. Fig. 15 shows some examples illustrating the differences.
Conclusion
To make a good diagnosis, several methods have been explored and tested in the past, with varying degrees of effectiveness. All methods aim to provide a reliable and accurate fault indicator that can detect, at an early stage, any fault that occurs in the machine. A sophisticated monitoring method is vitally important to ensure an optimal maintenance operation of the machine. Thus, it is necessary to provide a reliable and easily interpretable diagnosis to detect the appearance and evolution of most mechanical and electrical faults. This early detection makes it possible to avoid unplanned production stoppages. The study of the outer race bearing fault occupies an important place in recent research. It is necessary to have strong information on the defective part of the machine in order to propose an adequate diagnosis method. The MCSA method is very poor in information for the outer bearing faults: the proposed signature is not clear and is difficult to interpret. In this paper an analytical development of the line neutral voltage is proposed. The developed model takes into account the healthy and outer bearing fault conditions. The simulation results of our equations show that the neutral voltage contains very rich information about this fault. Following the analytical results obtained, a reliable and specific bearing defect index has been proposed. The proposed index also permits separating the outer race bearing fault from the other eccentricity faults. To confirm our analytical development, a series of experimental results are presented and discussed.
Fig. 3. Air gap length variation in different conditions.
Fig. 1. Air gap length in different conditions: (a) Healthy (left) - static eccentricity SE (right), (b) Outer race bearing fault
Fig. 2. Vibration signal (train of impulses).
Fig. 4. Mutual inductance Msr, for the healthy situation, for static eccentricity and for the outer race bearing fault.
Fig. 5. Simulated FFT spectrum of the line neutral voltage in the healthy induction machine state.
Fig. 6. Simulated FFT spectrum of the line neutral voltage under the static eccentricity induction machine state.
Fig. 7. Normalised FFT spectrum of the line neutral voltage for different states of induction machine performance. (Top) Healthy, static eccentricity SE (middle), (bottom) Outer race bearing fault ORBF.
Fig. 8. Test bench dedicated to the study of outer ring bearing fault diagnosis using line neutral voltage
Fig. 12. Experimental FFT spectrum of line neutral voltage (Harmonics related to RSHs).
Fig. 11. Experimental FFT spectrum of line neutral voltage (Harmonics related to the small unbalance provided by the mains supply).
Fig. 10. Experimental FFT spectrum of line neutral voltage (Harmonics related to the component f s + f r).
Fig. 9. Some bearings for the experiments
[1] Khan, F., Sulaiman, E., & Ahmad, M. Z.: 'Review of switched flux wound-field machines technology', IETE Technical Review, 2017, 34, (4), pp. 343-352
Fig. 15. Different angles of the defective outer ring (A1 > A2).
Fig. 14. Experimental FFT spectrum of line neutral voltage (Harmonics related to f 0)
Table 1. Rotor slot harmonics for the healthy induction machine in the line neutral voltage (columns: RSH orders, RSH frequencies)
Table 2. The frequencies shown on the line neutral voltage spectrum in the case of SE existence (columns: neutral voltage, RSH orders, RSH frequencies)
Table 3. The frequencies shown on the line neutral voltage spectrum in the case of ORBF existence (columns: neutral voltage, RSH orders, RSH related frequencies)
Table 4. Harmonics related to f 0, the characteristic frequency of bearing outer ring failure (Hz): 87.36, 174.71, 262.07, 349.43, 436.78, 786.21, 1135.64, 1222.99
Table 5. Table of results
Table 6. Table of results
00403627
en
[ "math.math-ap" ]
2024/03/04 16:41:20
2008
https://hal.science/hal-00403627v2/file/HeartSegmentation.pdf
Olivier Rousseau Yves Bourgault HEART SEGMENTATION WITH AN ITERATIVE CHAN-VESE ALGORITHM This paper presents 2D and 3D applications of the Chan-Vese model to heart and trachea segmentation. We improved the multi-phase Chan-Vese model by introducing an iterative method, by choosing an appropriate L 1 fidelity term as well as an efficient and prior free initial condition. For 3D applications, the algorithm is parallelized in order to speed up the computations. We provide extensive information on computational details, on the convergence times and on the quality of segmentations obtained. The results of the segmentations are then meshed to be used for finite element simulations. Introduction The recent increase of computers capacities now allows for realistic simulations of human organ physiology. Mathematical modeling of organ functions opens a wide range of research, allowing diagnostics and understanding of malfunctions and diseases. Such simulations usually require a mesh of the given organ. However, most computations are made on meshes of idealized geometries and there is a real lack of accurate 3D models. Realistic geometries should be extracted from medical images, this is known as the segmentation process. The goal of this work is to segment the heart muscle from high resolution CT scans of the thorax and to produce meshes that are adequate for numerical simulations in electro-physiology. Most existing methods for heart segmentation involve a prior knowledge of the heart's shape (see for example [START_REF] Zhukov | Dynamic deformable models for 3D MRI heart segmentation[END_REF][START_REF] Ecabert | Towards automatic full heart segmentation in computed-tomography images[END_REF]). We intend to use a modified version of the Chan-Vese model [START_REF] Tf Chan | Active contours without edges[END_REF], also known as Active Contours without Edges to segment the heart, since it has no geometrical or topological a priori. This method has been successfully used for brain segmentation [START_REF] Vese | A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model[END_REF][START_REF] Chung | Energy minimization based segmentation and denoising using a multilayer level set approach[END_REF]. Brain images usually have nice contrasts between gray and white matter. To our knowledge, this method has not been attempted yet for segmenting 3D scans of the heart and lungs. This is a real challenge since the images are more diversified. They contain many objects of similar grey levels that the method should separate. The main contributions of this paper are: (1) Application of the Chan-Vese algorithm to trachea and heart segmentation. (4) Analysis of parallelization of the Chan-Vese algorithm to fit needs of large applications. (5) Creation of meshes from the segmentation results. In the first section, the Mumford-Shah functional and the Chan-Vese model are presented as well as the numerical methods for solving the latter. In section 2, the improvements made to the original Chan-Vese model are explained and justified. Sections 3 and 4 respectively present 2D and 3D applications of this modified Chan-Vese algorithm. where Ω is some region of R n , typically a square or a cube. In the segmentation process, the goal is to split the image g into its constituting objects. Mumford and Shah proposed to minimize the following functional (1) E MS (u, K) = Ω |∇u| 2 dx + λ Ω\K |g -u| 2 dx + µH N -1 (K) over pairs (u, K). 
K is a compact subset of Ω representing edges of objects in g, and u ∈ H 1 (Ω \ K) is the intensity of the image. This intensity varies smoothly inside the connected components of Ω \ K [START_REF] Mumford | Optimal approximations by piecewise smooth functions and associated variational problems[END_REF]. The middle term in the equation is called the fidelity term, while the combination of the two others form the regularity terms that do not depend on the underlying image g. The weights µ and λ should be adjusted in accordance with the noise level of the image to be segmented. It is a well known result that minimizers of the Mumford-Shah energy exist (see for example [START_REF] Ambrosio | Functions of Bounded Variation and Free Discontinuity Problems[END_REF]). It is a hard task to directly find the actual minima of the functional. To achieve this, one can approximate the Mumford-Shah functional by a sequence of elliptic functionals easier to solve [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. An other approach is to minimize E M S over a restricted domain. One example is the Chan-Vese model. It seeks for a minimum over functions that take only two values c 1 and c 2 [START_REF] Tf Chan | Active contours without edges[END_REF]. Such functions can be written as u = c 1 1 F + c 2 (1 -1 F ) for some F ⊆ Ω. The Mumford-Shah energy then rewrites as (2) E CV (F, c 1 , c 2 ) = F |g -c 1 | 2 dx + Ω\F |g -c 2 | 2 dx + H N -1 (∂F ∩ Ω). The set F can be described via a level set function φ : Ω→ R, that is F = {φ ≤ 0}. Then the Level Set formulation [START_REF] Osher | Fronts propagating with curvature-dependent speed-Algorithms based on Hamilton-Jacobi formulations[END_REF][START_REF] Tf Chan | Active contours without edges[END_REF] of the problem 2 is to find φ that minimizes (3) E CV (φ, c 1 , c 2 ) = λ Ω |g-c 1 | 2 (1-H(φ)) dx+λ Ω |g-c 2 | 2 H(φ) dx+µ Ω |DH(φ)| dx, where H(•) stands for the Heaviside function. |DH(φ)| = δ(φ)|∇φ| is the derivative of H(φ) in the sense of distributions. This is a Dirac measure with support on the discontinuity set {x : φ(x) = 0} of u. The two-phase image then rewrites as u = c 1 (1 -H(φ)) + c 2 H(φ). We can deduce that if (φ, c 1 , c 2 ) is a minimum of E CV , then it is one of the many solutions of the Euler-Lagrange equation. To avoid difficulties related to the non-uniqueness of φ, a time dependence is added and the following initial problem is solved: (4)        ∂φ ∂t = δ(φ) µdiv ∇φ |∇φ| + λ (g -c 1 ) 2 -(g -c 2 ) 2 φ(x, 0) = φ 0 (x), ∂φ ∂n = 0 on ∂Ω, for some initial condition φ 0 which describes an initial curve C 0 = {φ 0 = 0}. The values c 1 and c 2 are also given by ( 5) c 1 = Ω g(1 -H(φ)) dx Ω (1 -H(φ)) dx , c 2 = Ω gH(φ) dx Ω H(φ) dx , That is, c 1 and c 2 are mean values of g inside the regions {φ < 0} and {φ ≥ 0} respectively. While solved with a time-stepping scheme, the problem 4 can be interpreted as a gradient method applied to the minimization of the functional in 3. Any steady state of 4 will satisfy the Euler-Lagrange equation. In order to get a splitting of the image into more than 2 phases (multi-phase segmentation), Chan and Vese proposed to have several level set functions φ i [START_REF] Vese | A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model[END_REF]. 
A set of n level set functions can describe up to 2 n different regions of Ω. In this case n coupled PDE's must be solved simultaneously. An other approach is to use several level curves of the level set function [START_REF] Chung | Energy minimization based segmentation and denoising using a multilayer level set approach[END_REF]. In this case the number of regions has to be chosen in advance. This method does not allow to have junctions of three different regions as level curves are parallel to each other. 1.2. Numerics. The equation 4 can be solved via standard finite difference scheme ( [START_REF] Tf Chan | Active contours without edges[END_REF], [START_REF] Osher | Level Set Methods and Dynamic Implicit Surfaces[END_REF]) using a C ∞ regularization δ of δ. For example, in 2D the first term of div ∇φ |∇φ| can be discretized as (6)   ∂ ∂x φ x φ 2 x + φ 2 y   i,j ≈ D - x     D + x φ i,j (D + x φ i,j ) 2 + D 0 y φi,j +D 0 y φi+1,j 2 2 + ε 2     . D + x φ, D 0 x , D - x respectively stand for forward, centered and backward finite difference approximations. Hence an explicit discretization of equation 4 can be written as φ n+1 i,j = φ n i,j + ∆tδ (φ i,j )[d 1 (φ n i+1,j -φ n i,j ) + d 2 (φ n i,j -φ n i-1,j ) + d 3 (φ n i,j+1 -φ n i,j ) + d 4 (φ n i,j -φ n i,j-1 ) (7) + λ (g -c 1 ) 2 -(g -c 2 ) 2 ]. This is the form of a non linear diffusion with coefficients d 1 = 1 h 2 x µ (D + x φ i,j ) 2 + D 0 y φi,j +D 0 y φi+1,j 2 2 + ε 2 d 2 = 1 h 2 x µ (D - x φ i,j ) 2 + D 0 y φi,j +D 0 y φi-1,j 2 2 + ε 2 (8) d 3 = 1 h 2 y µ D 0 x φi,j +D 0 x φi,j+1 2 2 + (D + y φ i,j ) 2 + ε 2 d 4 = 1 h 2 y µ D 0 x φi,j +D 0 x φi,j-1 2 2 + (D - y φ i,j ) 2 + ε 2 However, for an explicit time discretization, the time step constraint for stability may be quite restrictive, leading to a large number of time steps to reach a steady solution. It has been noticed that replacing δ(φ) by |∇φ| increases the stability of the PDE [START_REF] Marquina | Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal[END_REF], yielding [START_REF] Tf Chan | Active contours without edges[END_REF] φ t = |∇φ| µdiv ∇φ |∇φ| + λ (g -c 1 ) 2 -(g -c 2 ) 2 , which can be interpreted as a motion of the interface in the normal direction with speed V n = -µdiv ∇φ |∇φ| -λ (g -c 1 ) 2 + (g -c 2 ) 2 . Note that equation 9 does not admit any steady state, although the interface {φ = 0} converges to the zero level set of the steady state of equation 4. In fact when equation 9 evolves over time, the level set function φ will converge to -∞ inside the curve and to +∞ outside. To avoid dealing with too large values, φ can be truncated, for example restricted to [-100, 100]. Althought, on must be careful as we remarked that if φ is truncated to a small value (eg. to [-1, 1]), it may lead to numerical artifacts. It is also possible to use a semi-implicit time discretization, which also increases the stability [START_REF] Smereka | Semi-Implicit Level Set Methods for Curvature and Surface Diffusion Motion[END_REF]. In this scheme, the value φ i,j of the central pixel is taken at time n + 1 for the right hand side of equation 7. This yields φ n+1 i,j = φ i,j + ∆t|∇φ| i,j [d 1 φ n i+1,j + d 2 φ n i-1,j + d 3 φ n i,j+1 + d 4 φ n i,j-1 + (g -c 1 ) 2 -(g -c 2 ) 2 ] 1 + ∆t|∇φ|(d 1 + d 2 + d 3 + d 4 ) . ( ) 10 We will analyze the convergence of equation 9 to a steady state using the semiimplicit time discretization described in equation 10 under several factors. 
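To make the time stepping concrete, here is a minimal NumPy sketch of one semi-implicit update in the spirit of equations (8) and (10). It is only an illustration, not the authors' code: unit grid spacings are assumed, the averaged centred differences of (8) are simplified to centred differences at the current pixel, Neumann boundary conditions are imposed by edge padding, and all names are ours.

```python
import numpy as np

def chan_vese_step(phi, g, mu=0.2, lam=1.0, dt=0.5, eps=1e-8):
    """One semi-implicit update of the level set function (sketch of eq. (10))."""
    inside = phi < 0
    c1 = g[inside].mean() if inside.any() else g.mean()      # eq. (5)
    c2 = g[~inside].mean() if (~inside).any() else g.mean()

    p = np.pad(phi, 1, mode='edge')                          # Neumann BC
    e, w = p[1:-1, 2:], p[1:-1, :-2]                         # x+1 / x-1 neighbours
    s, n = p[2:, 1:-1], p[:-2, 1:-1]                         # y+1 / y-1 neighbours

    dxp, dxm = e - phi, phi - w                              # one-sided differences
    dyp, dym = s - phi, phi - n
    dx0, dy0 = 0.5 * (e - w), 0.5 * (s - n)                  # centred differences

    d1 = mu / np.sqrt(dxp**2 + dy0**2 + eps**2)              # coefficients of eq. (8)
    d2 = mu / np.sqrt(dxm**2 + dy0**2 + eps**2)
    d3 = mu / np.sqrt(dx0**2 + dyp**2 + eps**2)
    d4 = mu / np.sqrt(dx0**2 + dym**2 + eps**2)

    grad = np.sqrt(dx0**2 + dy0**2)                          # |grad phi| replaces delta(phi)
    fid = lam * ((g - c1)**2 - (g - c2)**2)                   # fidelity term of eq. (9)

    num = phi + dt * grad * (d1 * e + d2 * w + d3 * s + d4 * n + fid)
    den = 1.0 + dt * grad * (d1 + d2 + d3 + d4)
    return np.clip(num / den, -100, 100)                     # truncation mentioned above
```

Iterating this update until the zero level set stops moving reproduces the behaviour analysed below.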
We will study the influence of the initial condition and of the fidelity term chosen. We will also propose an iterative method to obtain multi-phase segmentation. In [START_REF] Song | A fast algorithm for level set based optimization[END_REF], Song and Chan decoupled equation 9 in order to simplify the resolution. They first smooth the image g to obtain a new image g * , and then solve the simplified equation ( 11) φ t = (g * -c 1 ) 2 -(g * -c 2 ) 2 . They proposed a fast algorithm for solving equation 11. In fact, their algorithm corresponds closely to clustering the set {g * (x) : x ∈ Ω} of values of g * into two clusters using a k-means procedure [START_REF] Hartigan | Clustering Algorithms[END_REF]. Osher and He introduced a similar algorithm to solve the multi-phase problem [START_REF] He | Solving the Chan-Vese Model by a Multiphase Level Set Algorithm Based on the Topological Derivative[END_REF]. Again, this is closely related to applying a k-means procedure to the set {g * (x) : x ∈ Ω} with the right number of clusters k. Independently, Gibou and Fedkiw took advantage of this analogy to propose an hybrid algorithm that alternates between smoothing and k-means [START_REF] Gibou | A fast hybrid k-means level set algorithm for segmentation[END_REF]. However, it is not known if any of these fast algorithms lead to minimums of the Chan-Vese energy for some value of the parameters µ and λ. Improvements We focus on the resolution of the Chan-Vese model via equation 9 in order to extract the heart shape from a 3D CT scan of a human thorax. We propose improvements on several aspects of the resolution process in order to fit the needs of large 3D applications. These aspects are: (1) Iterative segmentation instead of multi-phase segmentation. (2) Choice of a fidelity term in equation 9. (3) Choice of an initial condition in equation 9. 2.1. Iterative Segmentation. Medical images usually contain many regions of different pixel intensity value. In this case a two-phase segmentation will generally not be able to extract the region of interest. A solution to the multi-region problem is the algorithm proposed by Chan and Vese for multi-phase segmentation that simultaneously evolves several level set functions, that is several curves [START_REF] Vese | A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model[END_REF]. There are drawbacks to this method. One is that instability may appear from solving these PDE's simultaneously. Also it may happen that the two curves need to coincide at some places. In this case, if the curvature term is dominant, it may lead to miss-classification in this region, since the curves may not superimpose correctly. Figure 1 shows an example of this phenomena. The other approach is to look at several level curves of the level set function φ [START_REF] Chung | Energy minimization based segmentation and denoising using a multilayer level set approach[END_REF]. If n different values are chosen, it splits the domain into n+1 different regions. One possible issue is that it can not recover triple junctions, as level curve are naturally parallel to each other. Instead of these strategies, we will segment by solving the simple two-phase Chan-Vese problem iteratively. The process is as follows: (1) Split the domain Ω into Ω + and Ω -using the 2-phase Chan-Vese model. (2) Stop if the object is extracted, that is if the object is either Ω + or Ω -. 
Otherwise, decide which of Ω + or Ω -contains the object of interest, and pick this region as a new domain for step 1. Figure 2 shows the results of an iterative segmentation process on a slice of a CT scan of the heart. There are many advantages to proceed in this manner. First, if we are interested in a single object in the image, the computations need not to be done on the whole domain after the first step. This clearly saves on CPU time. We can thus focus on the region of interest, rejecting at each step the part of the image that does not contain the given object. However a full 2 n -phase segmentation of the image can also be done with this iterative method. The first step splits the domain into two different sub-domains Ω + and Ω -. In the second step, each of these sub-domains will be split in 2 parts, yielding four sub-domains Ω ++ , Ω +-, Ω -+ and Ω --. This can be done until the requested number of sub-domains is reached. It takes n such steps to get 2 n regions. In the multi-phase case, n different level sets functions need to be evolved. One step of the iterative method has the same computational cost as evolving a single level set function in the multi-phase method. However in practice the multi-phase algorithm requires smaller time step to ensure stability, which makes it slower. Another advantage of the iterative method is that the smoothing parameters can be varied from steps to steps. This saves time since the equation can be solved with larger time steps when the smoothing term is not dominant. Another benefit of our iterative method is that triple junctions are accurately obtained since the boundary conditions force the curve to be normal to the domains boundary. Figure 3 shows how our method performs on the synthetic image of Figure 1. It is clear that the the miss-classification problem is avoided with the iterative method. 2.2. L 1 fidelity term. Replacing the classical L 2 fidelity term by an L 1 fidelity term is an idea that has first been introduced in signal processing [START_REF] Alliney | Digital filters as absolute norm regularizers[END_REF][START_REF] Alliney | Recursive median filters of increasing order: a variational approach[END_REF][START_REF] Alliney | A property of the minimum vectors of a regularizing functional defined by means of the absolute norm[END_REF] and later in image processing [START_REF] Nikolova | Minimizers of Cost-Functions Involving Nonsmooth Data-Fidelity Terms. Application to the Processing of Outliers[END_REF][START_REF] Nikolova | A Variational Approach to Remove Outliers and Impulse Noise[END_REF]. Using L 1 fidelity term makes the problem more robust to noise and outliers in the signal or image. It has been remarked that for TV denoising, a L 1 fidelity is more natural [START_REF] Chan | Aspects of total variation regularized L1 function approximation[END_REF], since in this case the problem is scale-invariant. Recently, it has been shown that the Chan-Vese model benefits from the same properties when L 2 fidelity is replaced by L 1 fidelity. Indeed if (φ, c 1 , c 2 ) minimizes ( 12) [START_REF] Darbon | A Note on the Discrete Binary Mumford-Shah Model[END_REF]. This seems to be a desirable property since the scaling of the image should not affect the geometry of the segmented objects. 
The L 1 Chan-Vese energy appearing in (12) reads (12) E L 1 CV,g (φ, c 1 , c 2 ) = ∫ Ω |g -c 1 |H(φ) dx + ∫ Ω |g -c 2 |(1 -H(φ)) dx + ∫ Ω |DH(φ)| dx, and the scale invariance means that if (φ, c 1 , c 2 ) minimizes E L 1 CV,g , then (φ, λc 1 , λc 2 ) minimizes E L 1 CV,λg . Computing the Euler-Lagrange equation of (12) yields the gradient descent equation ( 13) φ t = |∇φ| ( div( ∇φ / |∇φ| ) + |g -c 1 | -|g -c 2 | ), where δ(φ) has again been replaced by |∇φ|. Now c 1 and c 2 are the median values of g in {φ ≤ 0} and {φ > 0} respectively, instead of the mean values. These values are updated at each time step. Computing the median is more demanding than computing the mean, but this cost is negligible compared to the time required to update the level set function at each iteration. How does one decide which fidelity term to choose? The L 2 -fidelity is more sensitive to noise, as can be seen in Figure 4. This comes from the fact that the mean value is affected by large values whereas the median is affected by frequent values. Hence, if the object to be segmented has a very distinct color, the L 2 -fidelity term should be preferred. If the object has a color close to the rest of the image, or if the image is very noisy, the L 1 -fidelity will be more efficient. The CT scan of the thorax falls in the second category. We illustrate this by doing an iterative segmentation of a 2D slice using the L 1 -fidelity, see Figure 5. Note that the number of steps needed for the heart segmentation drops from 5 to 3. 2.3. Initial condition. The Chan-Vese energy may admit local minima, depending on the image g and on the scale of the parameter µ. The choice of a good initial condition is then crucial for the curve not to get stuck in a local minimum which is not the global minimum. The convergence to the steady state may also be slowed down by a bad choice of initial condition. We tried four different strategies and compared the results obtained. They are summarized in Figure 6. The first strategy consists of taking a circle as initial curve and its signed distance function as level set function. Many level set functions may describe the curve, but the signed distance function has the advantage of being the most regular. The second strategy is to take the union of many circles spread out over the image as initial curve. Again the signed distance function to this curve is taken as level set function. The idea is that if the curve is spread out, it should be close to features of interest. The first two strategies are used in the seminal paper of Chan and Vese [START_REF] Tf Chan | Active contours without edges[END_REF]. In the same direction, we propose to take as initial condition a level set function that takes random values in [-1, 1] over the domain Ω. Then there are points inside and outside the curve almost everywhere. The level set function is not regular, but this does not seem to be a problem since the level curves get smooth very fast as time evolves. The last strategy comes from the fact that when µ = 0, the Chan-Vese problem is very easy to solve via a k-means procedure. This splits Ω into two regions Ω 1 and Ω 2 . One can take as initial condition φ = χ Ω1 -χ Ω2 . The signed distance function to ∂Ω 1 ∩ ∂Ω 2 would be another option, but numerical experiments suggest that it is better that all points are initially relatively close to 0. We tried the different strategies on the well-known cameraman picture. On Figures 7 and 8 we compare the decay of the energy and the L 2 -convergence to the absolute minimum.
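To make the update of equation (13) concrete, a minimal Python sketch of one explicit time step is given below. The finite-difference stencil, the weight mu kept on the curvature term and the function name are illustrative choices, not the authors' implementation, which was written in C++.

```python
import numpy as np

def l1_chan_vese_step(phi, g, dt=0.1, mu=1.0, eps=1e-8):
    """One explicit update of equation (13); mu weights the curvature term
    as in the original energy (illustrative sketch, not the paper's code)."""
    inside = phi <= 0
    c1 = np.median(g[inside]) if inside.any() else 0.0    # median of g in {phi <= 0}
    c2 = np.median(g[~inside]) if (~inside).any() else 0.0  # median of g in {phi > 0}

    # |grad(phi)| and curvature div(grad(phi)/|grad(phi)|) with central differences
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    curvature = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

    # phi_t = |grad(phi)| * ( mu * curvature + |g - c1| - |g - c2| )
    phi_t = norm * (mu * curvature + np.abs(g - c1) - np.abs(g - c2))
    return phi + dt * phi_t, c1, c2

# A random initial condition, as advocated below:
# phi0 = np.random.uniform(-1.0, 1.0, size=g.shape)
```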
The energy is approximated using a discretization of the equation 3, the term DH(φ) is simply discretized using a forward Euler scheme. If a regularized version H (φ) of H(φ) is used, the energy becomes less accurate as the function φ evolves since φ develops sharp gradients. Figure 9 shows the semi-log plot of the L 2 distance to the absolute minimizer. On these figures, it is possible to see two different behaviors. At the beginning of the iteration process the fidelity term is dominant in equation 9. In this case, the random initial curve and the (µ = 0) solution converge very quickly towards the steady state, as seen in Figure 8. As the curves get closer to the steady state, the curvature term gain importance and then every curve approaches the steady state at a similar rate as shown on the semi-log plot in Figure 9. The same kind of phenomena happens for the energy. It is not easy to determine which initial condition is better than the other. It seems that the random curve and the (µ = 0) solution converge quickly to a state that is close to the steady state, which is certainly a nice property. However in practice we favor the use of the random curve since it has no a priori on the position of the interface and the points inside and outside the curve are evenly spread over the image. In this case, there is a lower chance to get stuck in a local minimum. 2D Application As an illustration of the method, we have presented in Figure 5 the results of the iterative segmentation with L 1 norm on a 2D CT slice. In order to use the results of the segmentation process as a base for numerical simulations of the organ functions, it is usually required to generate a mesh of the given organ. To do so, we used the simple and very efficient 2D/3D mesh generator DistMesh [START_REF] Persson | A Simple Mesh Generator in MATLAB[END_REF]17]. Distmesh is a Matlab tool designed for meshing domains implicitly defined through a level set function. Most mesh generators start by meshing the boundary of the domain (a curve for 2D domains, a surface for 3D domains) and then march to mesh the interior. DistMesh has a different approach: it triangulates the whole domain and then moves vertices to well fit the boundary, with a control on the size and quality of the elements. It can also be modified to accommodate subdomains. For each sub-domain, we have computed the signed distance function to the boundary. The signed distance function is computed using the reinitialization equation ( 14) where S(φ) is a regularization of the sign function [START_REF] Osher | Level Set Methods and Dynamic Implicit Surfaces[END_REF]. This signed distance function is used to project on the boundary nearby vertices. It is then sufficient to compute the signed distance function only in a neighborhood of the zero level set. With DistMesh, an element diameter has to be specified. This one may differ spatially. We remarked that nice boundaries can be obtain if the element size is decreased continuously when approaching sub-domains boundaries. In practice, the element size is usually decreased to half the size it has in the sub-domain. Figure 10 shows an example of a 2D mesh of the thorax, built from the previous segmentation in Figure 5. φ t = S(φ)(1 -|∇φ|), 3D Application We will apply our framework to 3D segmentation of a full CT scan. This CT scan is by courtesy of the Heart Institute of the University of Ottawa. It is of size 512×512×199 and the voxels have a 0.48mm resolution in the transverse plane (the x -y plane) and a 1.25mm. 
resolution in the direction of the transverse axis (z direction). This image has over 52 million voxels; it is thus a challenge to solve the segmentation problem in an efficient way. However, since the equation is solved with an explicit time scheme, it is possible to parallelize the solver. The idea is to split the image into pieces that are distributed over computing nodes. A simple choice is to split the z direction into as many blocks as we want. At each iteration, the computations can be made independently on each block. Then only the information about the shared boundaries of the blocks needs to be exchanged. In the case where the image is split in only one direction, only the top slice and the bottom slice of each block have to be exchanged, which can be done very quickly. If we apply the Chan-Vese model with L 2 fidelity, we need to compute the average of a function in a given region. This is easy to do in parallel. However, if we are to use the L 1 fidelity, we need to compute the median in parallel, which is trickier. To find the median in the serial case, the C++ template nth_element is used [18], which is a linear algorithm for partial sorting (it is more efficient than a complete sort, which is of order n log(n), where n is the number of pixels). Hence for preliminary tests, we used the L 2 fidelity. For all computations, a random initial curve has been chosen. The computations then require fewer time steps. The parallel code is implemented in C++ using the openMPI library [START_REF] Gropp | Using MPI: Portable Parallel Programming with the Message Passing Interface[END_REF]. The amount of RAM required to solve the Chan-Vese problem is then divided among the different nodes. The speedup of this parallelization is nearly perfect: using n processors divides the time per iteration by n, see Figure 11. Most computations are made on a 16-processor SUN cluster with distributed memory. Running the code on 6 processors divides the CPU time by almost 6. As the number of processors increases, the time lost in data transmission becomes more important. Note that many computations have been made successfully on a simple dual-core laptop, cutting the CPU time in 2, although the same amount of RAM is required. Figure 12 shows the 3D CT image as well as the result of the first segmentation step. Following this first segmentation, one side of the surface must be chosen as the new segmentation domain. The interior is the region that contains the trachea, the exterior contains the heart. To obtain the trachea we segmented the interior region with a high curvature term, since there is much noise and many irregularities in the lungs. Once the segmentation process is complete, we just need to choose the connected component that corresponds to the trachea. Figure 13 shows the resulting segmentation. The trachea is then meshed with DistMesh. Figure 14 shows 2 different meshes of the trachea: a coarse one of about 40 000 tetrahedra, and a finer one of about 500 000 tetrahedra. Now, if we want to segment the heart, we have to take the result of the first segmentation and choose the exterior of the surface. To get to the heart, four more segmentation steps are required (five steps in total). If this were to be done with several level set functions, it would take 5 level set functions to get this level of detail, which is hard to solve on images of this size.
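The reductions involved can be sketched as follows. This illustrative Python/mpi4py fragment is not the authors' C++/openMPI code, and the function names are hypothetical; it only shows that the L 2 averages reduce to four scalars per iteration, whereas the medians of the L 1 model do not split into per-block partial results and are gathered here on a single rank.

```python
import numpy as np
from mpi4py import MPI  # assumed available; the paper's solver uses openMPI from C++

def region_means(local_g, local_phi, comm):
    """L2 fidelity: c1, c2 are global means over {phi <= 0} and {phi > 0}."""
    inside = local_phi <= 0
    sums = np.array([local_g[inside].sum(), inside.sum(),
                     local_g[~inside].sum(), (~inside).sum()], dtype=float)
    total = np.empty_like(sums)
    comm.Allreduce(sums, total, op=MPI.SUM)   # one small reduction per iteration
    c1 = total[0] / max(total[1], 1.0)
    c2 = total[2] / max(total[3], 1.0)
    return c1, c2

def region_medians(local_g, local_phi, comm):
    """L1 fidelity: medians are not decomposable, so gather the values on rank 0."""
    inside = local_phi <= 0
    all_in = comm.gather(local_g[inside], root=0)
    all_out = comm.gather(local_g[~inside], root=0)
    if comm.Get_rank() == 0:
        c = (np.median(np.concatenate(all_in)), np.median(np.concatenate(all_out)))
    else:
        c = None
    return comm.bcast(c, root=0)
```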
From the second stage, we have decided to do the segmentation not on the exact image but on a blurred version of it, due to the high level of noise. To blur the image, we apply the linear heat equation to the image for a few time steps with an explicit scheme. This is also easily done in parallel. Figure 15 shows the exterior surface of the heart in the final segmentation and Figure 16 shows the segmented heart cavities. The cavities are well segmented, especially that of the left ventricle, which is the most important part in many applications. The position of the mitral valve is also precisely captured, as shown in Figure 17. The general shape of the heart is well captured; ventricles and atria are extracted, as well as the aorta. There are some imperfections in the segmentation of the heart surface near the epicardium, as the surface leaks in some areas. This is due to the fact that the heart touches the liver at that place in the image and that the two organs are of the same grey level. Further work has to be done to fully extract the heart. Conclusion In this paper, the efficiency of the iterative version of the Chan-Vese model has been shown on test cases and successfully applied to large 2D and 3D applications to the human trachea and heart. We also conclude that the L 1 fidelity is more efficient for this kind of segmentation. It is less sensitive to noise and to small regions of different color intensity. Analysis of convergence under various initial conditions showed that our new approach, which consists of taking a random initial curve, is very efficient. The curve quickly converges to a state close to the steady state and has no shape prior. It has been successfully applied in 2D and 3D for CT scan segmentation. We also explained how the algorithm is implemented in parallel and the computing gain obtained this way. Finally, high quality fine 2D and 3D meshes have been produced. These meshes are actually used for Finite Element and Finite Volume simulations in electro-physiology. Figure 1. An example of an image for which segmentation with two level set functions may lead to incorrect segmentation. (a) is the synthetic image to be segmented. (b) shows a correct segmentation of the image, when the curvature term is not dominant (µ = λ = 1). (c) is the result if the curvature term is more important (µ = 10000, λ = 1): there is a misclassification of some pixels. (d) shows a close-up on the curves where there is misclassification: it comes from the fact that level curves are only nearly superposed in these regions. Figure 2. 2D iterative segmentation of the heart from a CT scan image. (a) The image to be segmented; (b) the result of the first application of the 2-phase Chan-Vese model, the blue part is the region of interest; (c) second step: in green is the region that has been ignored in the segmentation, the red region will be chosen; (d) red is chosen; (e) blue is chosen; (f) the blue region is the heart muscle, we stop. Figure 4. Segmentation of the cameraman image using L2-fidelity (left) and L1-fidelity (right), with the same weights (µ = 1 and λ = 0.1).
Figure 5. Iterative segmentation of the heart from a 2D slice of a CT scan using the L 1 -fidelity. 3 steps are required instead of 5 for the L 2 -fidelity. Figure 6. The different initial curves: (a) one circle, (b) 100 equidistant circles, (c) random values in [-1, 1] and (d) the solution of the problem when µ = 0 and λ = 1. Figure 7. The energy decay under various initial conditions. Figure 8. The L2-convergence under various initial conditions. Figure 9. The semi-log plot of the L2-convergence under various initial conditions. Figure 11. The speed-up for the parallel algorithm: the time for doing an iteration with one processor divided by the time for doing an iteration with x processors. It is just below the perfect speedup, that is the line y = x. Figure 12. The CT scan to be segmented (a) and the result of the first segmentation step (b). Figure 14. Two meshes of the trachea generated with DistMesh: a coarse mesh of about 40 000 tetrahedra (a) and a finer one of about 500 000 tetrahedra. Figure 15. 2 views of the exterior surface of the heart in the final segmentation. The general shape is well recovered. Figure 16. 2 views of the interior surfaces of the heart in the final segmentation. The pillars in the left ventricle are well segmented. Acknowledgment The authors would like to thank the Ottawa Heart Institute for providing them with quality CT scans of the thorax. The authors acknowledge the support of NSERC under a Discovery Grant and a CGSD (Canada Graduate Scholarship for Doctoral studies) postgraduate scholarship for the first author.
00157563
en
[ "spi.meca.mema", "phys.meca.mema" ]
2024/03/04 16:41:20
2005
https://hal.science/hal-00157563/file/JLS2005.pdf
J.-L Seichepine S Albertazzi P Nardin Belfort/F C Coddet D Sporer Stockport / Gb Topological characterization of wear tracks in thermally sprayed abradable coatings Abradable coatings are located on the stationary parts of gas turbines, in front of blades, which cut a track in them. This has to be achieved with minimum wear of the blades, in order to control the over-tip leakage. These coatings are generally deposited by thermal spraying of composite powders comprising a metal base, a polymer filler generating porosities and a dislocator such as hBN. The very demanding properties are nowadays adjusted using rig tests, where samples are rubbed by the contact of a dummy, simulating actual working conditions in an aircraft engine. Several types of behaviour are usually described, but few numerical data are produced from these tests. Only the blade wear (or metal transfer) is generally measured. As the understanding of contact phenomena is fundamental for the development of predictive models allowing the design of more performing materials, a comprehensive characterization process of the rub path was developed. The study was based on a topological survey made by laser profilometry, giving three-dimensional maps. These maps were then processed by image analysis and several parameters were computed, like surface roughness and parameters giving information on the shape and orientation of the holes or grooves in the rubbed surface of the samples. 1 Introduction Air seals in aircraft jet engines represent one of the most challenging and key technologies world-wide. Well functioning seal systems allow to keep rated pressure ratios, the best efficiency and an optimized specific fuel consumption. For instance, abradable coatings are used to control the over-tip leakage of air in the compressor between rotor blades and casings, labyrinth seals as well as drums and stators. Increasing performances of engines induce more severe working conditions for the coating , which have to be continuously improved. That is why it is important to gain a better understanding of the complex phenomena occurring when a blade contacts an abradable coating. Efforts are presently done in two complementary directions: modelling the thermo-mechanical behaviour of the seal and characterizing it experimentally. This paper presents a route allowing to go further in the analysis of the results given by the industrially used abradability test. It provides additional information useful for understanding and then simulating the abradability. 2 Abradable coatings Abradable seals allow the blade to cut a track in the coating thickness but this has to be achieved with minimum wear of the blade. Furthermore, the abradable material must not erode due to the hot gas flow otherwise sealing effectiveness is lost. Many such gap-sealing products are available for use in the compressor section of gas turbines [START_REF] Schmid | An Overview of Compressor Abradables[END_REF]. Our study was focused on materials already used or being presently developed to gain the combination of properties required in the high pressure compressor. These abradable coatings are deposited by combustion or plasma spraying of composite powders, comprising a metal base, a polymer filler generating porosities and a dislocator. This combination results in deposits which are very porous, easily removed by the contact of the blade, but able to resist to the severe environmental conditions. 
The reference materials for this study, already industrially used, were AlSi-BN (powder supplied by Sulzer Metco under the reference SM 320) and NiCrAl-Bentonite (powder supplied by Sulzer Metco under the reference SM 314). The new material presently being developed is NiCrAl-BN, with different rates of BN and sprayed with polyester, in order to increase the porosity. 3 Abradability test The performance of the coatings is usually evaluated by abradability rig testing. Tests are conducted in a dynamic abradability rig, Fig. 1. The contact phenomena occurring in an engine are reproduced between a dummy blade and an abradable specimen. The rig is equipped with two independent drive systems, one for the rotor and the other for moving the seal specimen radially into the rotor. The speed of the specimen radial motion, or incursion rate, and the rotor speed are adjusted to obtain a realistic simulation of different typical working conditions. Furthermore, a heating system allows tests to be made at increased temperatures [START_REF]Sulzer lnnotec Abradable Facility: Standard Abradability Test Description[END_REF] [3] [START_REF] Schmid | New High Temperature Abradables for Gas Turbines[END_REF]. The main results of these tests are measurements of the blade wear (or metal transfer from the coating to it), records of temperatures, and observations of the rubbed zones on both the blade and the coating. Several typical behaviours have been identified, like pure cutting, smearing, grooving or melting. This information is classically summarized in wear maps [1] [4]. Meanwhile, only a few numerical data are usually produced from the analysis of the rubbed surfaces. 4 Profilometry A comprehensive characterization process of the rub path cut by the blade in a tested coating was developed. The study was based on a topological survey made by laser profilometry. By this technique, the distance from a reference plane to the scanned surface was measured with a resolution of 50 points/mm in both directions, giving a three-dimensional map [START_REF] Keyence | Specification of Laser Displacement Sensor LK-031[END_REF]. The general shape of the rubbed zone is cylindrical, because of the circular motion of the blade. This mean shape is removed numerically, giving a table of the distances of the surface points to the mean surface, which can be plotted as a map of grey levels, Fig. 2. From this data, different parameters can be computed, like the surface roughness Sa: Sa = (1/(n×m)) Σ_{x=0}^{n-1} Σ_{y=0}^{m-1} |d(x,y)|, where: -d(x,y) is the distance between the (x,y) point and the mean plane, -n is the number of points sampled in the x direction, -m is the number of points sampled in the y direction. 5 Image analysis The analysis of some wear tracks showed different topographies, with craters isotropically distributed or with grooves in the direction of the blade motion, or even with grooves perpendicular to the blade motion. To get more information, the maps obtained by laser profilometry were processed by image analysis. In a first step, the grey-level maps were binarized, giving black-and-white pictures where the areas above the mean surface are white and the areas below the mean surface are black. Then the shapes were simplified with the "despeckle" filter, which replaces each pixel by the mean value of its 5 neighbours [6]. This filter was run several times, until the stabilization of the contours, Fig. 2. To go further in the characterization of the peak shapes, the local dimensions of the white areas of these pictures were examined extensively. The whole pictures were scanned in both directions x and y, detecting the white sets of pixels encountered and measuring their lengths.
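A minimal Python sketch of these two operations is given below. The array orientation (which image axis corresponds to x) and the function names are assumptions for illustration; the original processing was carried out with the authors' own image-analysis tools.

```python
import numpy as np

def surface_roughness_sa(d):
    """Sa = (1/(n*m)) * sum |d(x, y)|, d being the deviation from the mean surface."""
    return np.mean(np.abs(d))

def runs_along_rows(binary):
    """Lengths of consecutive runs of white (True) pixels along each row."""
    lengths = []
    for row in binary:
        padded = np.concatenate(([0], row.astype(int), [0]))
        edges = np.flatnonzero(np.diff(padded) != 0)   # run starts and run ends
        lengths.extend((edges[1::2] - edges[::2]).tolist())
    return np.array(lengths)

# White areas above the mean surface, after binarization and despeckling:
# binary = despeckled_map > 0
# hx = runs_along_rows(binary)      # runs measured along one image direction
# hy = runs_along_rows(binary.T)    # runs measured along the other direction
```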
The results are tables of lengths hx and hy, Fig. 3. (1) / (2): segments of white area in the x / y direction, having a length of hx / hy. 6 An original parameter The tables of values hx and hy were used to compute a new parameter called "anisotropy coefficient", giving numerical information about the shape and orientation of the grooves or holes in the coating. The idea was to have it equal to -100 for a square picture perfectly banded in the horizontal x direction, equal to 0 for a perfectly isotropic picture and equal to +100 for a picture perfectly banded in the vertical y direction. This coefficient was calculated in several steps. First the mean values hx0 and hy0 of the lengths of the whole sets of segments are determined. Then, 2 ranges of segments are arbitrarily defined. Those whose length is lower than the mean values are arbitrarily considered as belonging to small insignificant areas. Those whose length is higher than the mean values are considered as describing the main significant white areas, or main peaks, and the mean values of their lengths are calculated: -hx1 is the mean value of the lengths hx higher than their mean value hx0, -hy1 is the mean value of the lengths hy higher than their mean value hy0. The anisotropy coefficient c is finally defined by an equation combining hx1 and hy1 (Fig. 4). 7 Analysis of a set of wear tracks 26 wear tracks belonging to 9 abradable specimens were characterized by laser profilometry. For each of them the roughness Sa and the anisotropy coefficient were computed. Table 1 summarizes the test conditions (blade tip speed, incursion rate), the measured blade wear (as a percentage of the total incursion) and the results obtained from laser profilometry and image analysis (roughness Sa and anisotropy coefficient c). An original coefficient was defined to give numerical information about the anisotropy of the wear track surfaces. It appeared that this process gives a useful tool to go deeper into the understanding of the complex abradability phenomena. Fig. 1. Schematic of Standard Test Rig. Fig. 2. Typical wear track scan. y is the direction of blade motion. Left: grey levels. Right: binarized and filtered. Fig. 3. Image analysis on a binarized wear track scan. Fig. 5. Roughness versus blade wear for the 26 wear tracks analysed. In this table, as usual, a negative blade wear means that metal has been transferred from the coating to the blade tip. The given anisotropy coefficients are mean values of 5 values computed from square areas along each wear track. The computed surface roughnesses Sa have been compared to the blade wears. Some consistency appeared: the surface roughnesses are rather high in case of severe blade wear and rather low in case of metal transfer, Fig. 5. Table 1. Test conditions and results for 26 wear tracks (blade tip speed in m/s, incursion rate in µm/s, blade wear in % of the total incursion, roughness Sa in µm, anisotropy coefficient c).
Acknowledgements This study was funded by the European Commission under the FP5 Growth Program. It was completed within the on-going Seal-Coat project, which involves the following partners: ESIL, Dublin/IE; Rolls-Royce plc, Derby/GB; MTU Aero Engines GmbH, München/DE; Euromat GmbH, Heinsberg/DE; RWTH -Thermal Spraying Group, Aachen/DE; Neomet Limited, Manchester/GB; Institute of Plasma Physics, Praha/CZ; RALSA, Langreo/ES; UTBM -LERMPS, Belfort/FR.
01007979
en
[ "spi.meca", "spi.mat" ]
2024/03/04 16:41:20
2003
https://hal.science/hal-01007979/file/CG.pdf
Pascal Casari Laurent Gornet Characterization of residual stresses in a composite curved sandwich beam Keywords: Sandwich structure, B. Residual stress, C. Finite element analysis, D. Mechanical testing à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction The aim of this study is to investigate the thermomechanical response of a sandwich composite shell, which exhibits a double curvature. In this kind of heterogeneous structure, the skins and the core play very different roles. The skins are stiff and their coefficient of thermal expansion (noted CTE) is low. The foam core has stiffness 1000 times lower than the stiffness of the layers in the direction of the fibers and expands 10 times more. Residual stress control is required to ensure the geometrical stability of the structure, and for this reason it is important to measure the order of magnitude of these stresses. For single laminates, it has been shown in [START_REF] Bogetti | Process-induced stress and deformation in thick-section thermoset composite laminates[END_REF][START_REF] White | Process modeling of composite materials: residual stress development during cure. Part II. Experimental validation[END_REF][START_REF] Ridgard | Accuracy and distortion of composite parts and tools: causes and solutions[END_REF][START_REF] Radford | Measurement of manufacturing distortion in flat composite laminates[END_REF][START_REF] Johnston | A plane strain model for process induced deformation of laminated composite structures[END_REF][START_REF] Fernlund | Experimental and numerical study of the effect of cure cycle, tool surface, aspect ratio, and the lay-up on the dimensional stability of autoclave-processed composite parts[END_REF][START_REF] Gigliotti | Development of curvature during the cure of AS4/8552 [0/90] unsymmetric composite plates[END_REF] that the significant phenomena responsible for residual strain are chemical and thermal shrinkage, and frictional effects in combination with moulds whose constitutive materials have different coefficients of thermal expansion to the manufactured composites. In those published studies, modelling of residual stresses has progressed, but they concern mostly laminates. Only a few studies have been performed on sandwich structures, and essentially on flat panels [START_REF] Matsunaga | Assessment of a global higher-order deformation theory for laminated composite and sandwich plates[END_REF][START_REF] Matsunaga | Interlaminar stress analysis of laminated composite and sandwich circular arches subjected to thermal/mechanical loading[END_REF][START_REF] Matsunaga | A comparison between 2-D single-layer and 3-D layerwise theories for computing interlaminar stresses of laminated composite and sandwich plates subjected to thermal loadings[END_REF]. In the present work, a set of experiments has been performed going from the scale of the structure down to the scale of its constitutive parts. First, tests carried out on the sandwich structure provide an order of magnitude of residual stresses. Then curved beams, easier to test, are cut from the sandwich shell, and are subjected to the removal of edges and outer skin. Thus, materials or structures can be characterized in the range of thermal and mechanical loadings encountered in the structure's life. 
Once material properties have been identified (or collected from datasheets), a thermo-mechanical finite element model (noted FEMo) has been built and is used to simulate the heating of a beam simply supported at the ends. Measurement of residual stresses and materials properties Sandwich panel The sandwich shell under consideration is part of a radar antenna (see Fig. 1), It has been designed mostly with process constraints, and no calculations were run before manufacture. It has a double curvature and is approximately 2 m wide, 0.9 m high and 50 mm thick. After manufacturing, this structure shows a slight twist, and a residual deflection, which tends to hollow out the panel. This effect can be attributed to the thermal shrinkage of both skins and core during polymerization and to the difference of coefficients of thermal expansion of skins and core when cooling down from 80 8C to room temperature. The cure cycle of the structure is 8 h at 80 8C. The initial state of the structure can be considered as the non-stressed state at curing temperature. From then on, residual stresses remain in the constituents. The main idea proposed in this study is to give an order of magnitude of residual stresses by cutting a piece from the sandwich panel (Fig. 2). Strains released from the cutting are measured by means of triaxial strain gages bonded on both faces. Under elastic hypotheses, the corresponding stresses are given in Fig. 3 (Table 1 for the description of the skin stacking sequence). The first remark is that strains remain low but significant. On both sides, the plies are predominantly in tension in the width direction of the panel. An opposite global shear deformation appears between the two skins, which leads to the confirmation of a global twist which has been observed on the shell. Regarding stresses in each ply, they are low and match the global equilibrium, so that the core is consequently in compression in the width direction. Shear in the plies is much lower than tensions or compressions in the fiber directions. This means that the stacking sequence seems convenient for storing energy in the principal directions of the plies and not in shear. In fact, the symmetry of the structure associated with the viscoelasticity of the matrix tends to reduce the residual stresses. In addition to this first step of characterization, a finite element model has been constructed with NISAe. It consists in modelling the panel with linear rectangular shell elements for the skins and three-dimensional elements for the core as detailed in Fig. 4. The inputs are both mechanical and thermal properties, measured on coupons and presented later. The predicted maximum deflection of the panel is 2.6 mm but the geometrical report on the shape of a manufactured panel gave a value of 0.6 mm. This measurement of deflection had been completed several days after curing. Numerical and experimental results were not comparable: a significant mismatch was appearing in the strains, stresses and deflection results. This led to the need of accurate modelling and characterization of each component of the sandwich material in order to explain this gap. Sandwich beam A second set of experiments has been conducted on a sandwich beam extracted from the structure. The aim was to find out which components of the sandwich are responsible for the residual stresses. The removal operations are illustrated in Fig. 5. The beam is 50 mm wide. 
The displacement field is then measured by means of a testing device, which simply supports the beam (Fig. 6) on both ends. The first step consists of cutting the beam from the panel. The residual stress field then becomes unidirectional. Through the second step, the effect of the edges on the shape of the beam is analyzed. This stems from a study conducted in [START_REF] Cheng | Reduction of the concentrated residual stress on the edges of a sandwich plate[END_REF] in which some sharp corners in the sandwich have been found to cause very high stress concentrations at the beam's tips. Here, the removal of the edges has no significant effect in terms of deflection of the beam. The last step is the removal of the outer skin in order to release residual stresses included in this component. Deflection variations (Fig. 7) are taken in a regular 300 mm-long part of the beam (Fig. 6). The graph in Fig. 7 shows that the most important effect is due to the skin removal: the beam tends to flatten out. This leads to the conclusion that internal stresses probably remain in tension and in compression in the skins and in shear in the foam core. A possible interpretation of the curvature is to calculate the corresponding bending moment that must be applied in order to return to the initial shape (before cutting). The result is presented in Table 2. In addition, it may be noted that the sandwich structure is not symmetrical through the thickness. The beam deflection due to heating will be compared to the Fig. 5. Removal operations on a sandwich beam in order to release residual stresses. response of FEMo in the 'finite element modelling' section below. Material properties Thermal expansion and mechanical tests have been conducted on the constitutive parts of the sandwich in order to quantify the properties required for the FEMo analysis. The identification has been made on the two different plies used in the lamination of the sandwich shell. These are a twill woven carbon fabric and a stitched [G45] made of two unidirectional plies. The properties of the core have been taken from the manufacturer's datasheet. The estimated material properties required for modelling are listed in Table 3. The 08 direction of the panel and of the laminates is the x axis plotted in Fig. 5. Skins In order to characterize the constitutive plies of the skins, two specific plates (called twill and stitched) have been manufactured with the same process as that used for the sandwich shell. The twill plate is made of a twill balanced fabric and has a [0,G30] S stacking sequence. This ensures a mirror symmetry and produces a quasi-isotropic composite laminate. In the stitched plate, six layers have been stacked in the 08 direction. A mirror symmetry cannot be exactly used here, and the direction of maximum stiffness is 458. From each plate, seven coupons have been cut every 158 from 0 to 908 (Fig. 8). From the stress-strain curves obtained, the elastic modulus, the Poisson's ratios and the coefficients of thermal expansion could be measured. Results confirm that the twill laminate can be considered as a quasiisotropic material. For the stitched laminate, these properties are plotted in Fig. 9 and show that the behaviour cannot be considered as quasi-isotropic. Moreover, classical laminate theory cannot match completely these results because, due to its constitution, the laminate should have the same properties in the directions (q), (Kq), and (p/2Kq). 
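As a reminder of what classical laminate theory predicts for a single orthotropic layer loaded off-axis, a short sketch is given below; with equal principal moduli the expression is identical at θ, -θ and 90°-θ, which is the symmetry invoked just above. The numerical inputs are simply the stitched-laminate values of Table 3, reused for illustration, and this single-layer formula is a simplification of the full laminate calculation, not a fit of the measured curves.

```python
import numpy as np

def off_axis_modulus(E1, E2, G12, nu12, theta_deg):
    """Apparent Young's modulus of an orthotropic ply loaded at an angle
    theta to its principal axes (classical laminate theory, single layer)."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    inv_Ex = c**4 / E1 + s**4 / E2 + (1.0 / G12 - 2.0 * nu12 / E1) * (c * s)**2
    return 1.0 / inv_Ex

# Stitched-ply values of Table 3 (GPa), evaluated every 15 degrees:
angles = np.arange(0, 91, 15)
Ex = [off_axis_modulus(56.0, 56.0, 3.3, 0.04, a) for a in angles]
```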
A compromise has been found in the choice of the properties of this layer: the evolution of elastic and thermal parameters versus angle is plotted in addition to experimental results. The discrepancy is probably due to movement of the stitched layers during lamination: dry layers are very deformable and the specific angle of 458 can vary rapidly by 5-108 under a stretching action. Nevertheless, elastic properties are well simulated. Only the evolution of the coefficient of thermal expansion is not satisfactory because the variations around the average vary from negative to positive values. Core The foam core is a very soft material compared to the skins. Its properties are recalled in Table 3 from the manufacturer's datasheet. A special feature of the core properties is the difference in stiffness from tension to compression. This particularity could not be taken into account in modelling, but it may have a significant effect on the stress and strain fields in the sandwich shell and beam. The core is considered here as a linear elastic and isotropic material, and its behaviour has to be included as three-dimensional in modelling with tensile Young's modulus. Indeed, because of the sandwich curvature, the thermal expansion of the core through the thickness leads to a variation of this curvature. Modelling of the sandwich beam Modelling is now focused on the behaviour of the beam extracted from the shell in order to simulate a heating experiment applied on the beam. In this test, the beam is simply supported at the ends and heated with a temperature-controlled airflow. The upper and lower displacements in the mid-section are recorded. Deflection and thickness variation of the beam can be deduced a f t e ra ni n c r e a s eo f6 08C, representing the heating of the structure from room to curing temperature. Thermo-mechanical analysis After heating, the core has forced the curved composite shell to expand through the thickness and the skins have also expanded; this leads to a new internal equilibrium and the existence of internal stresses. This effect can be taken into account in the finite element calculations if the core is modelled with three-dimensional elements. The skins are considered in a plane stress state with the same stacking sequence as for the panel (Table 1). The finite element code Cast3M capability for coupled thermo-mechanical heat transfer analysis is used in order to model heat conduction in the multi-layer composite sandwich structure and to determine the elastic strain and stress induced by temperature variations. Both steady-state and transient heat transfer finite element capabilities have been used for computations, but here only the first one is taken into consideration. Predictions are made with both composite shell and second order tetrahedral and full Lagrange heat conduction brick elements (Fig. 10). Classical thermal boundary conditions such as the surface heat flux or the properties of the surface convection must be specified. In order to model the experimental facilities, only surface convection boundary conditions are used. Results At first, deflection results have been calculated versus the CTE value of the stitched layers, varying from K1!10 K6 to 7!10 K6 8C K1 as mentioned in Fig. 9. Resulting deflection and thickness variation of the beam are plotted in Fig. 11 and compared to experimental values. The calculated deflection is sensitive to this parameter, and the value of CTE for the stitched layers which fits the experiment is close to 4.7!10 K6 8C K1 . 
This choice of a constant CTE must be done because its evolution shown in Fig. 9 cannot be reproduced in numerical simulations. Thickness variation from the test and the simulation are very close, which indicates that t h eC T Eo ft h ec o r ei ss a t i s f a c t o r y .R e s u l t si nt e r m so f stresses in the skins are calculated from the model with the above-mentioned CTE value and given in Table 4. A significant stress level (maximum of 28.1 MPa in a twill layer of the outer skin) is created by the internal equilibrium and shows that the highest stresses are located in the outer skin and due to the difference of CTE between the four constitutive layers. Compared to the stress level measured on the panel from strain analysis (maximum of 7 MPa in one of the stitched plies), it seems that this increase of 60 8C generates more stresses than the ones remaining in the structure after manufacturing. These stress levels are not completely comparable, but this order of magnitude is a typical one that may create creep deformations and affect the geometrical stability of the structure. Conclusion The approach presented in this article is based on a procedure of characterization of residual stresses that can be applied to any sandwich structure for which destructive tests are possible. Data collected among displacements, curvatures or strains have been selected and consistently compared with calculations. These specific results have given some orders of magnitude of residual stresses, but have also underlined the need for accurate mechanical and thermal material properties. Indeed, some parameters like the evolution of the CTE of the stitched layers are not explained by classical laminate theory, and require more accurate investigations. This approach is not comprehensive yet, because only a few data and measurements are available for the core. Moreover, there is no up-to-date measurement technique for stress analysis and thermal characterization and only the thickness variation of the beam with temperature could be validated. Work on the thermomechanical and viscous properties of the core are probably the next aspects to study. Fig. 1 . 1 Fig. 1. Sandwich panel and mismatch between theoretical and manufactured shapes. Fig. 2 .Fig. 3 . 23 Fig. 2. Piece of sandwich cut out from the structure, and direction of the strain gages. Fig. 4 . 4 Fig. 4. FEMo of the panel and transverse displacement field. Fig. 6 . 6 Fig. 6. Test device for curvature evolution on sandwich beams. Fig. 7 . 7 Fig. 7. Deflection of the beam after part removal. Fig. 8 . 8 Fig. 8. Geometry and direction of the tensile coupons. Fig. 10 . 10 Fig. 10. FEMo of the sandwich beam. Fig. 11 . 11 Fig. 11. Deflection of the beam and thickness variation vs CTE of the stitched layer. Table 1 1 Properties of the sandwich panel Component Thickness (mm) Stacking sequence Outer skin 1.6 1 Twill fabric K308 2 Stitched fabrics 08 1 Twill fabric C308 Inner skin 0.7 1 Twill fabric K308 1 Twill fabric 08 1 Twill fabric C308 Core 50 Table 3 3 Thickness 0.23 mm E x ZE y Z40 GPa, n xy Z0.1, G xy Z3 GPa 3 Stitched ply Carbon T300, 2 layers at [G45] 440 g/m 2 , Thickness 0.45 mm E x ZE y Z56 GPa, n xy Z0.04, G xy Z3.3 GPa Material properties Constitutive materials properties of the sandwich Geometric Mechanical CTE!10 6 m/m Twill fabric Carbon T300, [0;90] 220 g/m 2 , 3 Resin Aralditee LY5052-HY5052 Fiber volume fraction: 40!V f !46 Core Klegecelle R55 55 kg/m 3 E compression Z77 MPa, E tension Z51 MPa, GZ27 MPa, nZ0.32 35 Fig. 9. 
Mechanical and thermal behaviour of the stitched layers. Table 4. FEMo results of the stress state in the plies of the beam after 60 °C heating (stresses in MPa, in the coordinate system of each ply). Inner skin, thickness 0.7 mm, stiffness matrix [[3.28, 1.16, 0], [1.16, 3.28, 0], [0, 0, 1.06]] × 10^4: Twill [-30]: σ xx = 7.3, σ yy = 4.4, σ xy = 0.1; Twill [0]: σ xx = 6.1, σ yy = 0.7, σ xy = -0.1; Twill [30]: σ xx = 4.6, σ yy = -2.9, σ xy = 0.1. Outer skin, thickness 1.6 mm, stiffness matrix [[3.06, 2.16, 0], [2.16, 3.06, 0], [0, 0, 2.20]] × 10^4: Twill [-30]: σ xx = 26.2, σ yy = 28.1, σ xy = -1.2; Stitched [45]: σ xx = -5.5, σ yy = -6.5, σ xy = 0.1; Stitched [45]: σ xx = -6.4, σ yy = -8.6, σ xy = 0.1; Twill [-30]: σ xx = 23.1, σ yy = 19.6, σ xy = 1.0.
04099589
en
[ "phys.cond.cm-ms" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04099589/file/Ultra-low_lattice_thermal_conductivity_in_tungsten-based_scheelite_ceramics.pdf
Hicham Ait Laasri email: [email protected] Eliane Bsaibess Fabian Delorme Guillaume F Nataf Fabien Giovannelli email: [email protected] Ultra-low lattice thermal conductivity in tungsten-based scheelite ceramics Keywords: Tungsten-based scheelites, Defect scheelite-type, Low thermal conductivity, Solidstate synthesis BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 polycrystalline ceramics were synthesized by conventional solid-state reaction route. The effect of cation-deficiency on the crystallographic structure, microstructure and thermal properties of these scheelite-type compounds were investigated. X-ray diffraction was used to identify the single-phase scheelite structure of the studied ceramics. Scanning Electron Microscopy technique has revealed a homogenous and dense microstructure with a few micro-cracks. The thermal conductivity of BaWO4 scheelite decreases from 1.3±0.2 to 1.0±0.1 W m -1 K -1 in the range 373 K -673 K. The cation-deficient scheelites Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics display an ultra-low thermal conductivity of 0.3±0.04 W m -1 K -1 and 0.2±0.03 W m -1 K -1 at 673 K, respectively. These materials exhibit among the lowest known values of thermal conductivity in crystalline oxides, in this temperature range. Therefore, they appear as very attractive for thermal barrier coating and thermoelectric applications. Introduction The discovery of materials with low thermal conductivity offers great opportunities for many applications [START_REF] Zheng | Advances in thermal conductivity for energy applications: A review[END_REF] such as thermal barrier coatings (TBCs) [START_REF] Clarke | Thermal-barrier coatings for more efficient gasturbine engines[END_REF][START_REF] Padture | Advanced structural ceramics in aerospace propulsion[END_REF][START_REF] Fergus | Zirconia and pyrochlore oxides for thermal barrier coatings in gas turbine engines[END_REF][START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF] and solid-state thermoelectric converters [START_REF] Poudel | High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys[END_REF][START_REF] Tan | Rationally Designing High-performance bulk thermoelectric materials[END_REF]. With the extensive demand for more efficient and powerful aero-engines, the development of advanced thermal barrier coatings has attracted impressive attention to find suitable thin oxide-ceramics (1µm -5µm), particularly useful for gas turbine blades to thermally insulate air-cooled metallic components from hot gases in the engine, enhancing their combustion efficiency, performance, and longevity [START_REF] Clarke | Thermal-barrier coatings for more efficient gasturbine engines[END_REF]. Nowadays, 7-8 wt. % yttria-stabilized zirconia (YSZ) is the main successful ceramic oxide used as thermal barrier coating due to its impact resistance, low thermal conductivity (∼1.5-3 W m -1 K -1 ) [START_REF] Fergus | Zirconia and pyrochlore oxides for thermal barrier coatings in gas turbine engines[END_REF], high thermal expansion, high melting point and excellent chemical stability [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF]. 
Nevertheless, its structural phase transition at temperatures above 1473 K increases the risk of crack propagation [START_REF] Vagge Atul | Synthesis and processing of thermal barrier coatings with the use of YSZ, LTA and LTA/YSZ[END_REF][START_REF] Shen | Thermal shock life and failure behaviors of La2Zr2O7/YSZ, La2Ce2O7/YSZ and Gd2Zr2O7/YSZ DCL TBCs by EB-PVD[END_REF]. To improve the performances of TBCs, several scientific and technical approaches have been adopted to discover material with low thermal conductivity. Previous studies have reported that some inorganic materials, such as rare earth zirconates [START_REF] Cheng | Prolong the durability of La2Zr2O7/YSZ TBCs by decreasing the cracking driving force in ceramic coatings[END_REF], pyrochlore oxides [START_REF] Feng | Thermal expansion and conductivity of RE2Sn2O7 (RE = La, Nd, Sm, Gd, Er and Yb) pyrochlores[END_REF] or La2Mo2O9-based compounds [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF][START_REF] Sabarthes | Reducing the thermal conductivity of La2Mo2O9 with a trivalent praseodymium substitution for its potential use as a thermal barrier coating[END_REF] exhibit a lower thermal conductivity than YSZ [START_REF] Fergus | Zirconia and pyrochlore oxides for thermal barrier coatings in gas turbine engines[END_REF]. Recently, low thermal conductivity has been reported in high entropy oxides [START_REF] Wright | Size disorder as a descriptor for predicting reduced thermal conductivity in medium-and high-entropy pyrochlore oxides[END_REF][START_REF] Song | Glass-like thermal conductivity in mass-disordered high-entropy (Y, Yb)2(Ti, Zr, Hf)2O7 for thermal barrier material[END_REF][START_REF] Vayer | New entropy-stabilized oxide with pyrochlore structure: Dy2(Ti0.2Zr0.2Hf0[END_REF] or guided by probe structure and machine learning [START_REF] Collins | Discovery of a low thermal conductivity oxide guided by probe structure prediction and machine learning-Angew[END_REF]. Besides, due to the development of global economy, the demand for energy is considerably increasing for industrial production and human life. Thermoelectric materials, which can directly convert waste heat into electricity, are the subject of numerous studies [START_REF] Wu | Thermoelectric converter: Strategies from materials to device application[END_REF]. High thermoelectric performance can be achieved by increasing the power factor and reducing the thermal conductivity of materials [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF][START_REF] Poudel | High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys[END_REF][START_REF] Qian | Phonon-engineered extreme thermal conductivity materials[END_REF][START_REF] Kanatzidis | Nanostructured thermoelectrics: the new paradigm[END_REF]. 
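For reference, the dimensionless figure of merit commonly used to quantify this trade-off is zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, S²σ the power factor and κ the total thermal conductivity; at a fixed power factor, any reduction of κ therefore translates directly into a higher zT.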
In this research field, many different routes have been explored to decrease the thermal conductivity of promising materials such as alloying, nano-structuring, composites, microcracking approaches or improving electrical properties of intrinsic low thermal conductivity materials [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF][START_REF] Poudel | High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys[END_REF][START_REF] Qian | Phonon-engineered extreme thermal conductivity materials[END_REF][START_REF] Kanatzidis | Nanostructured thermoelectrics: the new paradigm[END_REF][START_REF] Giovannelli | Thermal conductivity and stability of Al-doped ZnO nanostructured ceramics[END_REF][START_REF] Chen | Thermopower and chemical stability of Na0.77CoO2/Ca3Co4O9 composites[END_REF][START_REF] Delorme | Thermoelectric properties of Ca3Co4O9-Co3O4 composites[END_REF][START_REF] Delorme | Nanostructuring of dense SnO2 ceramics by Spark Plasma Sintering[END_REF][START_REF] Chen | Thermoelectric properties of Fe2(Ti1-xNbx)O5 pseudobrookite ceramics with low thermal conductivity[END_REF][START_REF] Yin | A Review of Strategies for Developing Promising Thermoelectric Materials by Controlling Thermal Conduction[END_REF][START_REF] Zhao | Ultralow thermal conductivity and high thermoelectric figure of merit in SnSe crystals[END_REF]. Increasing crystal complexity is one of these approaches to obtain ultra-low lattice thermal conductivity in inorganic materials [START_REF] Sabarthes | Reducing the thermal conductivity of La2Mo2O9 with a trivalent praseodymium substitution for its potential use as a thermal barrier coating[END_REF][START_REF] Qian | Phonon-engineered extreme thermal conductivity materials[END_REF][START_REF] Li | Crystal structure induced ultralow lattice thermal conductivity in thermoelectric Ag9AlSe6[END_REF][START_REF] Cherniushok | Crystal structure and thermoelectric properties of novel quaternary Cu2MHf3S8 (M-Mn, Fe, Co, and Ni) thiospinels with low thermal conductivity[END_REF]. Previous studies focused on complex crystal structures and materials that have a large unit cell volume and a large molecular weight, due to the large population of diffuson-like modes [START_REF] Sabarthes | Reducing the thermal conductivity of La2Mo2O9 with a trivalent praseodymium substitution for its potential use as a thermal barrier coating[END_REF][START_REF] Qian | Phonon-engineered extreme thermal conductivity materials[END_REF][START_REF] Li | Crystal structure induced ultralow lattice thermal conductivity in thermoelectric Ag9AlSe6[END_REF][START_REF] Cherniushok | Crystal structure and thermoelectric properties of novel quaternary Cu2MHf3S8 (M-Mn, Fe, Co, and Ni) thiospinels with low thermal conductivity[END_REF]. 
Another promising approach to reduce the thermal conductivity is to introduce cation vacancies [START_REF] Popuri | Glass-like thermal conductivity in SrTiO3 thermoelectrics induced by A-site vacancies[END_REF][START_REF] Lu | High-Figure-of-Merit Thermoelectric La-Doped A-Site-Deficient SrTiO3 Ceramics[END_REF][START_REF] Kovalevsky | Effect of A-Site Cation Deficiency on the Thermoelectric Performance of Donor-Substituted Strontium Titanate[END_REF][START_REF] Delorme | Low intrinsic thermal conductivity of Spark Plasma Sintered dense KNbO3 and NaNbO3 perovskite ceramics[END_REF][START_REF] Azough | Concurrent La and A-Site Vacancy Doping Modulates the Thermoelectric Response of SrTiO3: Experimental and Computational Evidence[END_REF][START_REF] Wang | Large thermal conductivity reduction induced by La/O vacancies in the thermoelectric LaCoO3 system[END_REF]. For example, in perovskites (ABO3), the introduction of La-site vacancies in SrTiO3 decreases the thermal conductivity from 10 W m -1 K -1 to ~2 W m -1 K -1 and results in a glass-like behavior [START_REF] Popuri | Glass-like thermal conductivity in SrTiO3 thermoelectrics induced by A-site vacancies[END_REF][START_REF] Azough | Concurrent La and A-Site Vacancy Doping Modulates the Thermoelectric Response of SrTiO3: Experimental and Computational Evidence[END_REF]. Searching for novel thermally insulating materials, first principles calculations and highthroughput calculations have predicted an ultra-low thermal conductivity of ABO4 scheelitetype structure, composed into the stacking of weak units [AO8] and strong units [BO4] [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF][START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF]. In addition, the strong bonds favour a high temperature stability, while the weak bonds lead to large thermal expansion coefficients [START_REF] Ge | The thermal and optical properties of BaWO4 single crystal[END_REF] and good damage tolerance [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF][START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF][START_REF] Liu | Advances on strategies for searching for next generation thermal barrier coating materials[END_REF]. In particular, the low Pugh's ratio of scheelites indicate that they are quasi-ductiles [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF][START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF]. They are thus good candidates as TBCs. Recently, the predictions of ultra-low thermal conductivities have been confirmed by Bsaibess et al. [START_REF] Bsaibess | Ultra-low thermal conductivity in scheelite and A-deficient scheelite ceramics[END_REF], who have reported a value around 1 W m -1 K -1 from 400 to 1000 K in BaMoO4. 
They have also demonstrated that the presence of vacancies on the A-site, in the La2/3□1/3MoO4 molybdate scheelite, leads to even lower thermal conductivity values of about 0.6 W m-1 K-1 over the entire temperature range [400 - 1000 K] [START_REF] Bsaibess | Ultra-low thermal conductivity in scheelite and A-deficient scheelite ceramics[END_REF], in agreement with theoretical findings [START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF]. Based on the same calculation approach, BaWO4 should exhibit a lower thermal conductivity (0.8 W m-1 K-1) than BaMoO4 (1 W m-1 K-1) [START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF]. Therefore, in this study, we investigate the sintering and thermal properties of BaWO4 ceramics and of the A-site deficient scheelites Ce2/3□1/3WO4 and La2/3□1/3WO4.
Experimental procedure
BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics were synthesized by the conventional solid-state reaction method using stoichiometric amounts of high-purity precursors: BaCO3 (99.9%), WO3 (99.9%) and CeO2 (99.9%) from Sigma Aldrich, and La2O3 (99.9%) from ChemPur. La2O3 was pre-heated at 1000 °C for 10 hours to remove water from this very hygroscopic precursor. Appropriate amounts of these precursors were then weighed and ground in a planetary tungsten carbide ball mill at 300 rpm for 1 hour (Retsch PM 100). The obtained powders were subsequently calcined in air in alumina crucibles in a muffle furnace, at 1173 K for 4 h for BaWO4 and Ce2/3□1/3WO4, and at 1273 K for 10 h for La2/3□1/3WO4. The calcined powders were then ground with a few drops of polyvinyl alcohol (PVA) solution (2 wt.% in water) and pressed into pellets of 10 mm in diameter and about 2 mm in thickness under a pressure of 125 MPa. The obtained pellets were thereafter heated on zirconia beads in an alumina crucible at a heating rate of 3 K min-1 up to 773 K, and then at a heating rate of 1 K min-1 up to 873 K to drive off the PVA, and sintered for 4 hours at 1573 K, 1273 K and 1323 K for BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4, respectively. Relative densities were calculated for the pellets by both geometrical measurements (mass and dimensions) and the Archimedes method, using a MS204TS/00 analytical balance (Mettler Toledo) and the theoretical density values obtained by structural refinement. X-ray diffraction (XRD) analysis was performed at room temperature on the ceramic pellets using a Bruker D8 Advance diffractometer system, with λ = 1.5418 Å, powered at 40 kV × 40 mA, and a scanning step of 0.02° (2θ) per second. The obtained XRD patterns were refined using the JANA software in order to confirm the single-phase scheelite structure. Microstructures were inspected by Scanning Electron Microscopy (TESCAN MIRA 3 SEM), employing secondary electrons (SE) with an acceleration voltage of 5 kV, on fracture surfaces of the ceramics coated with an ultra-thin layer of gold to avoid charging effects. The average grain size of the ceramics was estimated by the linear intercept method using the ImageJ software. Elemental compositions were analysed by Energy Dispersive X-ray (EDX) spectroscopy, using the same scanning electron microscope (Tescan Mira 3 SEM). The homogeneity of the ceramics was confirmed by back-scattered electron (BSE) analyses. The specific heat capacity (Cp) of all samples was measured by the Differential Scanning Calorimetry (DSC) technique (Netzsch STA 449 F3 Jupiter).
The crushed ceramics (about 50 mg) were placed in a platinum crucible and heated continuously up to 900 K under a nitrogen atmosphere with a heating rate of 20 K min-1. Thermal diffusivity measurements of the ceramics were performed by the Laser Flash method (Netzsch LFA 457 instrument), under vacuum (10-2 mbar). All samples were previously coated with a thin layer of graphite to improve the absorption of the laser light and avoid emissivity errors.
Results and discussion
The room-temperature XRD patterns of the BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics are presented in Fig. 1a. All the observed peaks can be attributed to the pure scheelite-type phase of the tungsten oxides. The diffraction peaks of BaWO4 reveal a tetragonal symmetry with the space group I41/a, indexed according to the ICDD file (PDF card 01-072-0746). The Ce2/3□1/3WO4 and La2/3□1/3WO4 compounds crystallize with a monoclinic symmetry and the space group C2/c [START_REF] Dos Passos | Structural and electrical properties of cerium tungstate: Application to methane conversion[END_REF][START_REF] Jeong | Photoluminescence features of greenemitting sol-gel synthesized La2W3O12 doped with Tb 3+ phosphor for PDP applications[END_REF]. The collected data for both compounds are in good agreement with the ICDD files (PDF cards 01-085-0143 and 01-082-2068, respectively). Fig. 1b, 1c and 1d show the experimental and calculated XRD patterns of BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4, respectively, as well as their difference. The lattice parameters, volumes of the unit cell, and theoretical and experimental densities are presented in Table 1. The Archimedes relative densities of the sintered ceramics are ≈97% for BaWO4 and ≈93% for Ce2/3□1/3WO4 and La2/3□1/3WO4. The relative densities obtained from the mass and dimensions of the sintered ceramics are ≈93% for BaWO4, and ≈90% for Ce2/3□1/3WO4 and La2/3□1/3WO4. The difference between the two methods arises from the open porosity of the ceramics. The microstructures and grain size distributions of fractured surfaces of the BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramic pellets, imaged by scanning electron microscopy, are shown in Fig. 2. The micrographs show a homogeneous microstructure and a well-developed grain morphology for all specimens. This is consistent with the measured relative densities of about 97% for BaWO4 and 93% for Ce2/3□1/3WO4 and La2/3□1/3WO4. Distinct grain boundaries are observed in all samples. From the histograms, the grain size of the scheelite ceramics is distributed in the range of ~5-50 µm. SEM images reveal average grain sizes of ~24 µm for the BaWO4 ceramic and ~20 µm for the Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics. Moreover, both pores and cracks are observed for the three ceramics. The EDX analysis gives cation composition ratios of 0.98 (Ba/W) in BaWO4 and 0.68 (Ce/W and La/W) in Ce2/3□1/3WO4 and La2/3□1/3WO4. EDX measurements thus confirm the expected compositions for the three studied ceramics, within a 2% accuracy. Back-scattered electron images show no chemical contrast for the three ceramics, confirming the good homogeneity and distribution of the elements. Fig. 3 shows the specific heat capacity measured for BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 crushed ceramics as a function of temperature, from 330 to 900 K. The specific heat of BaWO4 increases with increasing temperature, from 0.34 J g-1 K-1 at 330 K to 0.46 J g-1 K-1 at 900 K.
Ran et al. reported a similar specific heat capacity for a BaWO4 single crystal: from 0.32±0.01 J g-1 K-1 to 0.36±0.01 J g-1 K-1 between 336 K and 573 K [START_REF] Ran | Thermal conductivity of BaWO4 single crystal[END_REF]. The specific heat capacities of Ce2/3□1/3WO4 and La2/3□1/3WO4 have been measured for the first time. Cp of Ce2/3□1/3WO4 varies from 0.34±0.01 J g-1 K-1 at 330 K to 0.46±0.02 J g-1 K-1 at 900 K. Cp of La2/3□1/3WO4 varies from 0.32±0.01 J g-1 K-1 at 330 K to 0.42±0.02 J g-1 K-1 at 900 K. They are thus similar to the values found for BaWO4, within the ±4% precision of the heat capacity measurements. The thermal diffusivity (λ) measurements of the BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics are shown in Fig. 4a. The thermal diffusivity of the three samples decreases with increasing temperature, from 0.588, 0.166 and 0.109 mm2 s-1 at 373 K to 0.369, 0.106 and 0.081 mm2 s-1 at 773 K for BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4, respectively. This decrease is due to Umklapp scattering [START_REF] Ziman | Electrons and Phonons[END_REF]. The thermal diffusivity slightly increases at 873 K, by 18% for Ce2/3□1/3WO4 and 15% for La2/3□1/3WO4, while it is constant for BaWO4. This is attributed to an increase of the radiative contribution at high temperatures, as reported in previous studies using the laser flash method [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF]. Overall, it can still be seen that the A-deficient scheelites Ce2/3□1/3WO4 and La2/3□1/3WO4 exhibit very low thermal diffusivity values over the whole temperature range. The thermal conductivities (κ) of the BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics are shown in Fig. 4b. The thermal conductivity is determined from the measurements of thermal diffusivity (λ), specific heat (Cp) and density (ρ) through the following formula:
κ = λ ρ Cp (1)
The measurement of thermal conductivity was carried out three times on each studied ceramic to confirm the repeatability of the results. The calculated standard deviation is about 0.02 W m-1 K-1. The thermal conductivity of all samples diminishes with increasing temperature between 373 K and 673 K. It then slightly increases, most likely because of an enhanced radiative contribution that is known to play a role in laser flash measurements [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF]. The BaWO4 thermal conductivity decreases from 1.3±0.2 W m-1 K-1 to 1.0±0.1 W m-1 K-1, which corresponds to the theoretical prediction of Liu et al. [START_REF] Liu | Discovery of ABO4 scheelites with the extra low thermal conductivity through high-throughput calculations[END_REF]. However, these values are lower than those reported by Ran et al. for a BaWO4 single crystal, which are 2.1 and 1.6 W m-1 K-1 at 373 and 550 K, respectively [START_REF] Ran | Thermal conductivity of BaWO4 single crystal[END_REF]. This difference could be linked to the porosity and to the presence of microcracks and grain boundaries in the sample. To account for the impact of porosity, the thermal conductivity values were normalized to represent values for 100% dense materials (κdense) using Maxwell's relation [START_REF] Winter | Oxide Materials with Low Thermal Conductivity[END_REF] as follows:
κdense = κmeasured × 1 / (1 - 1.5 Φ) (2)
where Φ is the porosity. In Fig. 4b, these corrected values are plotted for the BaWO4 ceramic and still show a decrease of about 30% as compared to the single crystal.
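As a purely illustrative check of Eqs. (1) and (2), the short Python sketch below combines a diffusivity, a density and a heat capacity into a thermal conductivity and then applies Maxwell's porosity correction; the numerical values are rounded values close to those reported above for BaWO4 and are used here only as an example, not as the exact measured data.

```python
def thermal_conductivity(diffusivity_mm2_s, density_g_cm3, cp_J_gK):
    """Eq. (1): kappa = lambda * rho * Cp, converted to W m^-1 K^-1."""
    diffusivity_m2_s = diffusivity_mm2_s * 1e-6
    density_kg_m3 = density_g_cm3 * 1e3
    cp_J_kgK = cp_J_gK * 1e3
    return diffusivity_m2_s * density_kg_m3 * cp_J_kgK

def dense_conductivity(kappa_measured, porosity):
    """Eq. (2): Maxwell correction to a fully dense value, kappa / (1 - 1.5*phi)."""
    return kappa_measured / (1.0 - 1.5 * porosity)

# Rounded BaWO4 values near 373 K, for illustration only
# (bulk density ~97% of the theoretical 6.3 g cm-3).
kappa = thermal_conductivity(diffusivity_mm2_s=0.588, density_g_cm3=6.1, cp_J_gK=0.35)
print(f"kappa(373 K)        ~ {kappa:.2f} W/m/K")
print(f"kappa (100% dense)  ~ {dense_conductivity(kappa, porosity=0.03):.2f} W/m/K")
```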
The thermal conductivity is 1.4±0.2 and 1.1±0.1 W m-1 K-1 at 373 and 550 K, respectively. This difference is attributed to the presence of grain boundaries, which introduce an interface thermal resistance. The thermal conductivity of Ce2/3□1/3WO4 decreases from 0.37±0.05 W m-1 K-1 to 0.30±0.04 W m-1 K-1, and that of La2/3□1/3WO4 is almost constant at about 0.2±0.03 W m-1 K-1 between 373 K and 673 K. These values are lower than in BaWO4 and even lower than the calculated minimum thermal conductivity (κmin) of BaWO4 [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF]. The minimum thermal conductivity can be expressed as a function of the number density of atoms and the average sound velocity, with three different models that relate to each other as follows:
κdiff = 0.72 κClarke = 0.63 κCahill (3)
The main difference between these three models is that Clarke and Cahill assume propagating phonon-like vibrations, while κdiff is based on diffuson-like vibrations [START_REF] Agne | Minimum thermal conductivity in the context of: Diffuson-mediated thermal transport[END_REF]. For BaWO4, κClarke and κCahill have already been calculated from first-principles calculations in ref. [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF]: κClarke = 0.61 W m-1 K-1 and κCahill = 0.69 W m-1 K-1. These values lead to κdiff = 0.44 W m-1 K-1. The thermal conductivities experimentally measured for the Ce2/3□1/3WO4 and La2/3□1/3WO4 scheelites are lower than the κmin calculated for BaWO4, whatever the model used. However, a major difference with respect to BaWO4 is the presence of a large amount of cation vacancies in these compounds, which can strongly reduce the thermal conductivity, as shown previously for the La2/3□1/3MoO4 deficient scheelite [START_REF] Bsaibess | Ultra-low thermal conductivity in scheelite and A-deficient scheelite ceramics[END_REF] and for A-deficient ABO3 perovskites [START_REF] Popuri | Glass-like thermal conductivity in SrTiO3 thermoelectrics induced by A-site vacancies[END_REF][START_REF] Delorme | Low intrinsic thermal conductivity of Spark Plasma Sintered dense KNbO3 and NaNbO3 perovskite ceramics[END_REF][START_REF] Wang | Large thermal conductivity reduction induced by La/O vacancies in the thermoelectric LaCoO3 system[END_REF]. Another approach thus consists in modelling the impact of vacancies. Indeed, the influence of vacancies on the thermal conductivity can be estimated by considering mass-difference scattering [START_REF] Gurunathan | Analytical Models of Phonon-Point-Defect Scattering[END_REF][START_REF] Gurunathan | Alloy scattering of phonons[END_REF][START_REF] Hanus | Thermal transport in defective and disordered materials[END_REF]. We consider here a fictive material Ba2/3□1/3WO4 and calculate the change in mass on the A-site induced by the vacancies, such that:
⟨ΔM_A^2⟩ = (2/3) (M_Ba - ⟨M_A⟩)^2 + (1/3) (M_Ba + 2⟨M⟩)^2 = 22306 g^2 mol^-2 (4)
where ⟨M⟩ is the average atomic mass of the compound, ⟨M_A⟩ the stoichiometry-weighted average A-site mass, and M_Ba the molar mass of barium. This fictive material is close to La2/3□1/3WO4 and Ce2/3□1/3WO4, as the molar masses of Ba, La and Ce are 137.3, 138.9 and 140.1 g mol-1, respectively. This leads to an average mass variance in the system ⟨ΔM^2⟩, normalized by the squared average atomic mass ⟨M⟩^2, equal to 1.16.
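The numbers quoted above can be recovered with the minimal Python transcription of Eq. (4) sketched below; the molar masses are standard atomic weights, and the weighting of the A-site variance by one perturbed site out of the six atoms of the formula unit is our reading of the normalization, so this sketch is an illustration rather than the authors' own script.

```python
# Molar masses in g/mol (standard atomic weights).
M_Ba, M_W, M_O = 137.33, 183.84, 16.00

# Fictive Ba2/3[vac]1/3WO4: average A-site mass and average atomic mass.
M_A_bar = (2.0 / 3.0) * M_Ba                 # vacancy counted with zero mass
M_bar = (M_A_bar + M_W + 4 * M_O) / 6.0      # 6 atomic sites per formula unit

# Eq. (4): A-site mass variance, with the vacancy term weighted as (M_Ba + 2<M>)^2.
dM2_A = (2.0 / 3.0) * (M_Ba - M_A_bar) ** 2 + (1.0 / 3.0) * (M_Ba + 2 * M_bar) ** 2
print(f"A-site mass variance ~ {dM2_A:.0f} g^2/mol^2")   # ~22306

# Normalized variance <dM^2>/<M>^2, one perturbed A-site out of six atoms (assumption).
dM2_norm = (dM2_A / 6.0) / M_bar ** 2
print(f"normalized variance  ~ {dM2_norm:.2f}")          # ~1.16
```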
It is then possible to calculate the ratio of the thermal conductivity of the defective solid (Ba2/3□1/3WO4) to that of a reference pure solid (BaWO4) through the Klemens model [START_REF] Klemens | The scattering of low-frequency lattice waves by static imperfections[END_REF][START_REF] Klemens | Thermal resistance due to point defects at high temperatures[END_REF]. In our case, using values from first-principles calculations for the speed of sound and the volume per atom [START_REF] Najafvandzadeh | Firstprinciples study of elastic and thermal properties of scheelite-type molybdates and tungstates[END_REF], and taking as a reference the thermal conductivity of BaWO4 that we measured at room temperature (1.3 W m-1 K-1), we find that the thermal conductivity of the compound with vacancies would be 0.50 W m-1 K-1, i.e. about half the thermal conductivity of the sample without vacancies. Following the same reasoning, we expect the thermal conductivity of the Ce2/3□1/3WO4 and La2/3□1/3WO4 scheelites to be ultra-low. In this mass-difference scattering model, only the kinetic perturbation due to the mass difference is considered in the scattering parameter. However, vacancies also lead to a force-constant difference and a radius difference. In addition, in some cation-deficient oxides, it has been shown that the experimental thermal conductivity is lower than the one predicted by mass-difference scattering alone, and that the influence of the force-constant and radius differences must be taken into account as well [START_REF] Gurunathan | Analytical Models of Phonon-Point-Defect Scattering[END_REF]. In the case of La1-xCoO3-x, a value of ~0.6 W m-1 K-1 is measured for x = 0.1 [START_REF] Wang | Large thermal conductivity reduction induced by La/O vacancies in the thermoelectric LaCoO3 system[END_REF]. The model with mass-difference scattering only gives a thermal conductivity of 1.6 W m-1 K-1, whereas the model combining mass-difference scattering with a virial-theorem treatment of the broken bonds gives a thermal conductivity of 0.7 W m-1 K-1, which is very close to the measured one. If we assume that the rate of decrease is similar in scheelites, this leads to a thermal conductivity of 0.2 W m-1 K-1, which is very close to the measured thermal conductivity. The thermal conductivity that we report for La2/3□1/3WO4 is among the lowest lattice thermal conductivity values reported for a dense crystalline oxide ceramic. As already mentioned, in the case of BaWO4, the influence of the microstructure accounts for about 30% of the decrease in thermal conductivity between the single crystal and the ceramic. If we assume a similar impact of the microstructure in La2/3□1/3WO4, this leads to an ultra-low thermal conductivity of around 0.3 W m-1 K-1 for a single crystal.
Conclusions
BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 dense crystalline ceramics were prepared using the conventional solid-state reaction route. Dense ceramics were obtained (97% for BaWO4 and 93% for Ce2/3□1/3WO4 and La2/3□1/3WO4). XRD analysis revealed a single-phase scheelite-type structure for all ceramics. The crystal structure changes from tetragonal for BaWO4 to monoclinic for Ce2/3□1/3WO4 and La2/3□1/3WO4. The thermal measurements showed a low thermal conductivity for BaWO4, below 1 W m-1 K-1 in the investigated temperature range [373 - 900 K]. In this work, ultra-low thermal conductivity values of 0.3±0.04 and 0.2±0.03 W m-1 K-1 at 673 K are reported for Ce2/3□1/3WO4 and La2/3□1/3WO4, respectively. These values are among the lowest thermal conductivities reported for crystalline oxides. The results of this study confirm that such A-site deficiency is an efficient approach for lowering the thermal conductivity in scheelites. Combined with chemical doping to increase the electrical conductivity, it could lead to high-performance thermoelectric materials.
Figure 1. a) XRD patterns of BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics at room temperature. Experimental and calculated XRD patterns of b) BaWO4, c) Ce2/3□1/3WO4 and d) La2/3□1/3WO4.
Figure 2. SEM micrographs and grain size distribution of a) BaWO4, b) Ce2/3□1/3WO4 and c) La2/3□1/3WO4.
Figure 3. Temperature dependence of the specific heat capacity Cp of BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4. Dashed lines indicate the ±4% precision of the heat capacity measurements.
Figure 4. Temperature dependence of a) thermal diffusivity and b) thermal conductivity of BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 ceramics. Values for 100% dense material are corrected for the influence of the porosity. Error bars indicate the ±8% precision of the thermal diffusivity measurements and the ±14% precision on the thermal conductivity data.
Table 1. Refined lattice parameters, volume of the unit cell, theoretical and relative densities of sintered BaWO4, Ce2/3□1/3WO4 and La2/3□1/3WO4 scheelite ceramics.
Compound: BaWO4 / Ce2/3□1/3WO4 / La2/3□1/3WO4
a (Å): 5.6288 / 7.8298 / 7.8757
b (Å): 5.6288 / 11.7408 / 11.8359
c (Å): 12.7281 / 11.6043 / 11.6539
Cell volume (Å3): 403 / 1006 / 1025
Theoretical density (g cm-3): 6.3 / 6.75 / 6.61
Geometrical relative density (%): 93 / 90 / 90
Archimedes relative density (%): 97 / 93 / 93
Acknowledgement
The authors are grateful to Tatiana Chartier for technical support.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
01245988
en
[ "shs.hist", "shs.anthro-se", "shs.hisphilso", "shs.relig" ]
2024/03/04 16:41:20
2011
https://shs.hal.science/halshs-01245988/file/Importation%20des%20ouvrages%20chinois%20en%20Cor%C3%A9e%20par%20le%20roi%20Chongjo_110217.pdf
Daeyeol Kim email: [email protected] d'une nouvelle société basée sur l'idéologie néo-confucéenne et d'un nouveau gouvernement, sur le modèle chinois à un grand besoin d'importation • En dehors de commerce diplomatique, l'interdiction générale par les Ming de l'exportation des ouvrages chinois La politique d'importation des ouvrages en chinois du roi Chŏngjo (r. 1776-1800) Yŏngjo 英祖 (r. 1724-1776) Chŏngjo 正祖 (r. 1776-1800) La 1 ère moitié du 16 e siècle • Les conflits intenses entre les lettrés-fonctionnaires • La retraite des lettrés dans des régions • Le lecture des 四書 mise en valeur avec la priorité donnée à la quête de la Voie et de la culture de soi 17/05/2023 La 1 ère moitié du 17 e siècle • Les invasions manchous -Nouvelle dynastie de Qing • Une nouvelle identité : pro-Ming et anti-Qing, Chosŏn héritier de la civilisation chinoise • L'orthodoxie coréenne de l'école de Zhu Xi renforcée • L'importation croissante des ouvrages occidentaux en chinois 17/05/2023 Diversité de courants intellectuels La 2 ème moitié du 16 e siècle • Les « lettrés des forêts » devenus une force majeure de la scène politique • Le division en diverses factions des lettrés • Les guerres avec le Japon -un rapprochement avec la Chine de Ming La 2 ème moitié du 17 e siècle • Après 1683, la cour impériale Qing rassurée et confiante • Un dégel relatif entre Qing et Chosŏn au 17 e siècle • « L'idéologie était mal appliquée » à • « Il y a un problème, une révision est nécessaire » à • « Il faut aller voir ailleurs » à Evolution politique et culturelle Differences de deux rois Une circulation Les facteurs principaux Moyens/réseaux d'acquisition de la institutionnelle déterminant la nature des cour royale • L'idéologie devenue plus radicale et rigide • Le dogmatisme de 性理學 mise en cause et le retour aux 五經 et à Zhu Xi non-dogmatisé • L'Intérêt porté en dehors du confucianisme La 1 ère moitié du 18 e siècle • Les conséquences palpables des réformes et les changements dans les domaines économiques • La politique d'impartialité de roi Yŏngjo (r. 1724-1776) La 2 ème moitié du 18 e siècle • La politique d'impartialité de roi Chŏngjo (r. 1776-1800) • L'apparition des communautés chrétiennes • De nouvelles tendances dans la lecture et l'écriture littéraire confucianisme primitif 17/05/2023 7 17/05/2023 17/05/2023 • Chercher à construire dominer les discours sur les Classiques • Critiquer certaines interprétations néo-confucéennes et privilégier la vision du monde du personnelle des fonctionnaires • Rester fidèle à l'idéologie néo-fonctionnaires l'école de Zhu Xi jeunes • Les dons des individus confucénne de des belles lettres de formation des • Les dons des missionnaires occidentaux Programme spécial 燕行使 ü 抄啓文臣 • Les achats par les envoyés à Pékin hauts ü 弘文館 Collège • Les dons de la cour impériale chinoise académique et à reconnaissance royal un environnement • Chercher à obtenir ü 奎章閣 Institut ouvrages importés Daeyeol KIM, [email protected] UMR 8173 Chine Corée Japon (EHESS/CNRS), EA 4512 ASIEs (INaLCO) • Besoins spécifiques à la cour Chosŏn • Offres/approvisionnements de la Chine 17/05/2023 Quelques caractéristiques La 1 ère période de son règne (1776~ 1785) • Les importations massives et systématiques 17/05/2023 17/05/2023 『奎章總目』 La 2 ème période de son règne (1785~1800) • Les acquisitions très faibles et occasionnelles • Les interdictions de certaines catégories d'ouvrage • En 1781, à l'ordre du roi, une catalogue établie comprenant les 華本 + 東本. • La catalogue qui nous est parvenue comprend seulement les 華本. 
• Le titre + une notice bibliographique • Un classement selon le système de 四部 • 697 titres, 20.179 volumes 17/05/2023 • Un fonds relativement faible en 經 • Dans la section 經 : 種數 種比率 本數 本比率 經部 78 11 % 2.222 10,7 % 史部 134 18,9 % 4.201 20,2 % 子部 182 25,7 % 8.841 42,4 % 集部 315 44,4 % 5.573 26,7 % 計 709 20.837 17/05/2023 部 類 種數(%) 本數(%) 經 總經 7 (9) 966 (43,5) 易 7 (9) 73 (3,3) 書 (6,4) 66 (3) 詩 (6,4) 63 (2,9) 春秋 7 (9) 58 (2,6) 禮 (12,8) 397 (17,9) 樂 (2,6) 16 (0,7) 四書 (19,2) 243 (10,9) 小學 (25,6) 340 (15,3) 計 9 78 2.222 史 正史 (20,2) 1215 (28,3) 編年 (11,9) 556 (12,9) 別史 (35,8) 915 (21,3) 掌故 (11,2) 867 (20,2) 地理 (16,4) 553 (12,9) 抄史 (1,5) 117 (2,7) 譜系 (0,8) 36 (0,8) 總目 (2,2) 42 (1) 計 8 4.201 部 類 種數(%) 本數(%) 子 儒家 47 (25,8) 685 (7,6) 天文 1 (0,6) 10 (0,1) 曆籌 9 (5) 159 (1,8) 卜筮 1 (0,6) 4 (0,1) 農家 2 (1,1) 27 (0,3) 醫家 9 (5) 135 (1,5) 兵家 9 (5) 136 (1,5) 刑法 3 (1,7) 46 (0,5) 道家 6 (3,3) 45 (0,5) 釋家 3 (1,7) 49 (0,6) 雜家 13 (7,1) 63 (0,7) 說家 23 (12,6) 238 (3) 藝玩 15 (8,2) 155 (17,5) 類事 32 (17,6) 6.409 (72,5) 叢書 9 (5) 680 (7,7) 計 15 182 8.841 集 總集 80 (26,4) 2.125 (43,2) 別集 223 (73,6) 2.790 (56,8) 計 2 315 5.573 17/05/2023 『內閣訪書錄』 • Une catalogue établie en vue d'une acquisition par l'Institut royal 奎章閣 • Difficile de dater. Mais probablement achevée juste après le 『奎章總目』(1781) • Un classement selon le système de 四部 • 385 titres, 10.357 volumes 17/05/2023 • 『浙江採集遺書總錄』 , 鍾音(淸) 等受命編, impression Choson 乾隆39年 (1774) • 『直齋書錄解題』 陳振孫(宋) 撰, impression Choson 乾隆 39年 (1774) • Probablement aussi 『 四庫全書簡明目錄 』 17/05/2023 • Diversité et quantité augmentée en 經 • Dans la section 經 : -L'importance en 類事, 醫家, 兵家 -L'existence de fonds de 道家 et 釋家 17/05/2023 『內閣訪書錄』 收錄書籍分析表 種數 種比率 本數 本比率 經部 134 34,8 % 2,263 21,8 % 史部 63 16,6 % 2,646 25,5 % 子部 124 32,2 % 2.943 28,4 % 集部 63 16,4 % 2,505 24,2 % 計 385 10,357 17/05/2023 部 類 種數(%) 本數(%) 經 易 23 (17,2) 403 (17,8) 書 19 (14,2) 248 (11) 詩 24 (17,9) 512 (22,6) 禮 24 (17,9) 593 (26,2) 春秋 17 (12,7) 401 (17,7) 四書 6 (4,5) 79 (3,5) 孝經 1 (0,7 4 (0,2) 總經 4 (3) 191 (8,4) 樂 6 (4,5) 208 (9,2) 小學 10 (7) 191 (8,4) 計 10 134 2.263 史 正史 1 (1,5) 172 (6,5) 編年 4 (6) 237 (9) 別史 2 (3) 330 (12,5) 雜史 2 (3) 41 (1,5) 掌故 28 (41,8) 1.014 (38,3) 地理 22 (32,8) 764 (28,9) 譜系 8 (11,9) 88 (3,3) 計 7 67 2.646 部 類 種數(%) 本數(%) 子 儒家 (11,6) 202 (6,7) 天文 (1,7) 29 (1) 曆籌 (2,5) 102 (3,5) 農家 (1,7) 109 (3,7) 醫家 (14,9) 446 (15,2) 兵家 (10,7) 177 (6) 道家 (2,5) 128 (4,3) 釋家 (2,5) 20 (1,7) 雜家 6 (5) 229 (7,8) 說家 (4,1) 140 (4,8) 藝玩 (12,4) 272 (9,2) 類事 (17,4) 740 (25,1) 叢書 10 (8,3) 349 (11,9) 計 13 2.943 集 總集 12 (19) 944 (37,7) 別集 51 (81) 1561 (62,3) 計 2 63 2,505 『內閣訪書錄』 收錄書籍分析表 17/05/2023 28 部類 類屬 計 總計 部類 類屬 計 總計 經部 134 子部 124 易 23 儒家 14 書 19 雜家 28 詩 24 類書 禮 24 小說 春秋 17 藝術 四書 6 譜錄 11 孝經 1 術數 總經 4 天文算法 樂 6 史部:地理 小學 10 兵家 10 史部 63 農家 編年 3 醫家 17 別史 2 道家 載記 2 釋家 政書 23 集部 63 職官 12 總集 12 地理 14 別集 51 譜系 7 Evolution de 奎章閣 所藏 華本 • Après 1780, relativement peu d'importation par rapport à l'acquisition/publication des ouvrages coréens 17/05/2023 29 華本 1781 1870 傾 差 題目數 697 756 +59 冊數 20.179 22.600 +2421 華本 + 東本 1780 1795 冊數 30.000 80.000 奎章總目 (1781) 內閣訪書錄 閱古觀書目 (1870?) 種數 比率 種數 比率 種數 比率 經 78 11,2 % 134 34,8 % 111 14,6 % 史 134 19,2 % 63 16,6 % 144 18,6 % 子 182 26,1 % 124 32,2 % 189 24,8 % 集 303 43,5 % 63 16,4 % 317 41,7 % 計 697 100,0 % 385 100,0 % 761 100,0 % 17/05/2023 Les titres acquis ou en commun entre 『內閣訪書錄』 et 『閱古觀書目』 經 部 『古周易』, 『 春秋辨疑』, 『 漢隸字源』, 『 廣金石韻府』 子 部 『 廣博物誌』, 『 昨非菴日纂』, 『 香乘』, 『 翰苑新書全集』, 『 昭代叢書』, 『 檀机叢書』, 『 玉機微意』 集 部 『 誠齋集』 17/05/2023 Une très faible importation, pourquoi ? • Epuisement ou interdiction en Chine? • Remplacement ? 
• Changement de politique de la cour ? 17/05/2023 Les évènements en 1785 et 1791 Les tensions dues à l'introduction du catholicisme • La découverte à Séoul d'un mouvement catholique -1785 • Le procès de Chinsan -1791 : l'incinération par un yangban catholique de la tablette ancestrale de sa mère La campagne du « retour à la littérature correcte » -1971 • Les styles d'administration 科文 • Les histoires romancées 稗官雜記 • Le style de « notes au fil du pinceau » 筆記 17/05/2023 L'acquisition après le 『內閣訪書錄』 1794 • Ouvrages sur Confucius -『闕里志』 -『 闕里文獻考』 -『 聖廟圖』 -『 孔氏碑本』 1799 • Environs 200 volumes de 方書 ainsi que 『律曆淵源』nécessaires à 觀象監 -(5/30) • Autres ouvrages -(11/17) -『 朱子大同集 』 -『 朱子實紀 』 -『 後漢書』 17/05/2023 34 1796 • 66 volumes de 方書 acquis et offerts à la cour par les fonctionnaires hémérologues Les projets d'importation 1798 • L'ordre d'acquisition : ouvrages nécessaires au bureau d'observatoire 觀象 監, ex. 『律曆淵源』, 『協 紀辨方書』-10/12 1799 • L'ordre d'acquisition : 朱子諸書 -/07/16 • L'intention d'acquérir des ouvrages concernant les principes de 數理 et 曆象 17/05/2023 Les interdictions d'importation • 西洋之書 (正祖 9年-1785, 4月 9日) • 不經書籍 (正祖 10年-1786, 1月 22日) • 天主妖術 (正祖 11年-1787, 4月 27日) • Une discussion à la cour sur l'interdiction des « 稗官雜記等册 » (1791/10/25, 日省錄) • 稗官小記姑無論, 雖經書、史記, 凡係唐 板者 (正祖 16年-1792, 10月 19日) 17/05/2023 Les destructions des ouvrages occidentaux • 1791 -« 弘文館所藏西洋諸書… 令館中燒火 » -27 titres de 外奎章閣 brûlés dont la plupart sont liés au catholicisme • Parmi les ouvrages occidentaux dans le 『奎章總目』, les ouvrages disparus dans la catalogue ultérieure : -『記法』 -『西儒耳目資』 Ouvrages occidentaux détruits en 1791/12 de 外奎章閣 滌罪正規 悔罪要指小引 玫瑰十五端 渡海苦績紀 聖記百言 天主降生言行紀略 齋克 主制群徵小引 達道紀言 天主聖敎四末論 西學修身 聖水紀言 西學齊家 進呈書像 勵學古言 仁會約 斐錄答彙(匯答?) 西洋統領公沙效忠記 童幼教育 靈魂道體說 泰西人身槪說 淸凉山志 主敎緣起總論 寰宇始末 畏天愛人極論 眞福訓詮總論 譬學 悔罪要指小引 17/05/2023 Conclusion • Peu d'ouvrages chinois contemporains importés • Sauf quelques éditions rares sur Confucius et Zhu Xi • Peu d'ouvrages occidentaux importés • Sauf les ouvrages sur le calcul, l'astronomie et l'hémérologie 17/05/2023 Quelques explications • : l'ordre d'acquisition de 『 四庫全書 』 • : l'acquisition de 『古今圖書集成』 • : l'acquisition de plusieurs dizaines de titres rares dont -『通志堂經解』de 徐乾學, 477 volumes -『繹史』de 馬繡 , 36 volumes Un plan d'acquisition en 1781 ? • La catalogue 奎章總目 • La liste d'ouvrages de 『古今圖書集成』 • La catalogue 內閣訪書錄 • Un rassemblement des fonds de : -弘文館 藏本 -江華府 行宮 所藏 圖書 -『古今圖書集成』 -Acquisitions par 奎章閣 閣臣 • 30.000 volumes env. (華本 + 東本) en 1780 • 80.000 volumes env. (華本 + 東本) en 1795 -L'importance de 四書 et 小學 • Dans la section 子 : -L'importance de 類事 (圖書集成 5.022 volumes) -Le fonds faible en 兵家 -L'existence de fonds de 道家 et 釋家 『 奎章總目 』 收錄書籍分析表 -L'importance en 五經 -La faiblesse en 四書 et 小學 • Dans la section 史 : -L'importance en 掌故 • Dans la section 子 : • Grandes collections, encyclopédies et ouvrages scientifiques et techniques déjà en possession • Publications et rééditions coréennes en projet • La politique de « restauration de la tradition correcte » • La politique culturelle et scientifique indépendantiste vis-à-vis de la Chine de Qing • La crise due à l'introduction du catholicisme • Une conjonction du marché chinois de publication ? 17/05/2023 17/05/2023
04099802
en
[ "phys" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04099802/file/David_Papier_FSP_submitted.pdf
David Marti email: [email protected] Pierre-Henri Cocquet Delphine Ramalingom Alain Bastide
Topology optimization of fluid flow using local thermal objective function based on field synergy principle
Keywords: Topology optimization, natural convection, objective function, adjoint sensitivity analysis, field synergy principle
Topology optimization is widely used to design heat exchangers and may involve different expressions of objective functions to increase heat exchange. This work proposes a new thermal objective function based on the local orientation of the velocity and temperature gradient fields. The latter is defined as the cosine of the angle between these two vector fields, and thus our approach has connections with the field synergy principle. The cosine objective function is compared with a more classical one in a multi-objective optimization framework whose resolution is done with the adjoint method. Our results reveal that the cosine objective function leads to results that are comparable with those obtained with the classical cost function and may be used by designers to look for optimized designs that take into account the synergy of the fields. A study of the field synergy principle reveals that it is an accurate indicator of heat exchange only in some of the cases discussed in this article.
Introduction
Thermal systems involving energy transfer and fluid flow are essential components in numerous industries such as civil engineering, aeronautics, space, transport, the chemical industry or the mechanical industry. These thermal systems comprise specific components like pumps, compressors, heat exchangers, ducts and related devices. If a thermal system meets all the requirements and performs as expected during the design process, it can be manufactured efficiently. However, due to the need for ever more efficient thermal systems, optimizing existing systems to find an "optimal" design in terms of some predefined criteria has become essential. From a physics perspective, the ideas to increase the heat exchanged through a domain are [START_REF] Webb | Principles of Enhanced Heat Transfer[END_REF]: (a) mixing the main flow and the near-wall flow; (b) reducing the thermal boundary layer thickness; (c) generating vortices or secondary flows; (d) raising the turbulence intensity. This heat exchange enhancement often decreases the mechanical performance of the flow through the domain. These principles have been (and are still) considered and applied for designing more efficient thermal systems. Over the years, different computational optimization design techniques have been proposed to optimize flow and thermal components. Among them, from the world of structural mechanics, came parametric optimization, shape optimization, and then topology optimization. Topology optimization aims to go beyond the designer's intuition and provide the shape that best suits the physical goals the designer prescribes. The resulting design can take any shape or topology from a geometrical point of view.
Topology optimization in fluid mechanics started with [START_REF] Borrvall | Topology optimization of fluids in stokes flow[END_REF] and has since been applied to many types of flow (for extensive reviews, see [START_REF] Alexandersen | A review of topology optimisation for fluid-based problems[END_REF][START_REF] Fawaz | Topology optimization of heat exchangers: A review[END_REF]), such as steady laminar flow, unsteady flow, turbulent flow and non-Newtonian fluids, and to different kinds of physics such as conjugate heat transfer, fluid-structure interaction and microstructures in porous media. Different modes of heat transfer have been studied: forced convection by [START_REF] Dede | Multiphysics topology optimization of heat transfer and fluid flow systems[END_REF][START_REF] Yoon | Topological design of heat dissipating structure with forced convective heat transfer[END_REF] and natural convection by [START_REF] Alexandersen | Topology optimisation for natural convection problems[END_REF]. In the field of topology optimization, several objective functions related to thermal exchanges are used to suit the needs of the designer. The thermal performance of a thermal system can be evaluated by considering the following objective functions: maximize the thermal energy exchange of a fluid system using quantities defined at the boundaries [START_REF] Marck | Topology optimization of heat and mass transfer problems: Laminar flow[END_REF][START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF][START_REF] Pietropaoli | Three-dimensional fluid topology optimization for heat transfer[END_REF][START_REF] Høghøj | Topology optimization of two fluid heat exchangers[END_REF], maximize the recoverable thermal power [START_REF] Ramalingom | A multi-objective optimization problem in mixed and natural convection for a vertical channel asymmetrically heated[END_REF], minimize the thermal compliance [START_REF] Alexandersen | Topology optimisation for natural convection problems[END_REF][START_REF] Gersborg-Hansen | Topology optimization of heat conduction problems using the finite volume method[END_REF][START_REF] Bendsøe | Topology Optimization[END_REF][START_REF] Yoon | Topological design of heat dissipating structure with forced convective heat transfer[END_REF], maximize the average temperature on a boundary [START_REF] Tian | Bionic topology optimization of fins for rapid latent heat thermal energy storage[END_REF], minimize the mean temperature [START_REF] Zhao | A "poor man's approach" to topology optimization of cooling channels based on a Darcy flow model[END_REF][START_REF] Zhang | Design of nanofluid-cooled heat sink using topology optimization[END_REF][START_REF] Sun | 3D topology optimization of heat sinks for liquid cooling[END_REF][START_REF] Dede | Multiphysics topology optimization of heat transfer and fluid flow systems[END_REF][START_REF] Zhao | A "poor man's approach" to topology optimization of cooling channels based on a Darcy flow model[END_REF], minimize the capacity dissipation [START_REF] Tian | Bionic topology optimization of fins for rapid latent heat thermal energy storage[END_REF], maximize the heat generation [START_REF] Kobayashi | Freeform winglet design of fin-and-tube heat exchangers guided by topology optimization[END_REF][START_REF] Li | Optimal design and thermal modelling for liquid-cooled heat sink based on multi-objective topology optimization: An experimental and numerical study[END_REF], minimize the thermal resistance
[START_REF] Van Oevelen | Application of topology optimization in a conjugate heat transfer problem[END_REF], and minimize the entropy generation [22,[START_REF] Li | A novel optimization approach to convective heat transfer enhancement for solar receiver[END_REF]. Each of the previous objective functions arises from quantities derived from different insights of continuum mechanics and thermodynamics. In order to enhance thermal exchange, the current paper proposes a new objective function based on local observations of the physical fields inside the studied domain. Based on a physical justification, the local angle between the local velocity vector and the local temperature gradient vector will be used to create an objective function involving their cosine cos(∠[u, ∇θ]), so as to encourage the whole flow to be in the same direction as the heat flux. Mathematically, this means that the cosine will be close to 1 in the main flow. This approach can be linked to the field synergy principle (FSP) proposed by [START_REF] Guo | A novel concept for convective heat transfer enhancement[END_REF], where it is argued that the heat exchange in convection is increased when the velocity field of the flow tends to be in the same direction as the vector field of the temperature gradient. Since the field synergy is directly measured by the mean angle formed by the velocity field and the temperature gradient over the domain [START_REF] Guo | The field synergy (coordination) principle and its applications in enhancing single phase convective heat transfer[END_REF], the constructed cosine objective function, which involves the summation of the cosine cos(∠[u, ∇θ]) over the domain, complies with the FSP. Bejan criticized the FSP concept in [START_REF] Bejan | Heatlines (1983) versus synergy (1998)[END_REF], arguing that this concept cannot be useful in a whole flow domain since heat exchanges arise mainly near heated boundaries. This has been acknowledged by [START_REF] Zhu | Improvement in field synergy principle: More rigorous application, better results[END_REF], which shows that, for better results, the field synergy should be evaluated only within the thermal boundary layer. The FSP also shows some limitations for turbulent flow [START_REF] Tao | A comprehensive review and comparison on heatline concept and field synergy principle[END_REF]. It is therefore worth asking about the limits of our objective function and considering them in the light of the literature on the subject. Still, the working group of the original authors produced answers, reviews and explanations [START_REF] Tao | A comprehensive review and comparison on heatline concept and field synergy principle[END_REF][START_REF] Zhao | A review on heat enhancement in thermal energy conversion and management using field synergy principle[END_REF] citing various applications of this concept to the successful optimization of various thermal devices. The applications studied with an analysis of the FSP cover multiple domains such as heat exchangers, fuel cells, porous media, friction resistance reduction, solar energy receivers, vortex generators and diesel particulate filters. Most articles use classical optimization techniques and then compare the field synergy before and after optimization, with the conclusion that enhanced thermal performance led to a better synergy between flow and heat transfer.
For example, in [START_REF] Tian | Bionic topology optimization of fins for rapid latent heat thermal energy storage[END_REF], fins added to a latent heat storage device by means of the topology optimization technique led to a performance gain of 80%, and the synergy angle (∠[u, ∇θ]) reached values near 0 to 10°, while it was near 175° before optimization. For finned tube-bank heat exchangers, the articles [START_REF] He | Three-dimensional numerical study of heat transfer characteristics of plain plate fin-and-tube heat exchangers from view point of field synergy principle[END_REF][START_REF] Tao | Three-dimensional numerical study of wavy fin-and-tube heat exchangers and field synergy principle analysis[END_REF][START_REF] Tao | Three-dimensional numerical study and field synergy principle analysis of wavy fin heat exchangers with elliptic tubes[END_REF] show that the increase in heat transfer performance is linked to the increase in field synergy. In [START_REF] Hamid | Application of field synergy principle for optimization fluid flow and convective heat transfer in a tube bundle of a pre-heater[END_REF], the authors showed that optimizing the performance of a pre-heater in a solar-assisted desalination unit by 23% boosts the field synergy by 36%. In [START_REF] Zhai | Analysis of field synergy principle and the relationship between secondary flow and heat transfer in double-layered microchannels with cavities and ribs[END_REF], it is shown that the best heat exchanger design with double-layered micro-channels with cavities and ribs also has a better field synergy. In the current paper, we influence the synergy field directly by means of a topology optimization problem with an objective function linked to the FSP. To the best of our knowledge, this is the first attempt to apply the FSP concept to topology optimization. The current paper is organized as follows: Section 2 presents the studied physical problem and its modeling. Section 3 explains the construction of the cosine objective function. Section 4 details the derivation of the adjoint solver and its numerical implementation. Section 5 illustrates the newly defined objective function on two cases studied in the literature (the single pipe and the bend pipe), with a comparison with the classical approach and a confrontation with the FSP. Lastly, Section 6 offers the conclusion.
Modelling
The topology optimization method used herein is based on the classical density approach first proposed for computational fluid mechanics by Borrvall and Petersson [START_REF] Borrvall | Topology optimization of fluids in stokes flow[END_REF]. The working domain is characterized by a scalar field α (called the design variable) representing the pseudo-permeability of the fluid. This field penalizes the flow equations through a term h_τ(α)u, which makes them similar to a Brinkman-type model of Darcy's law for flow through a porous medium [START_REF] Alexandersen | A review of topology optimisation for fluid-based problems[END_REF]. Equations (2)-(3) given below then describe a fluid if h_τ(α) is zero and a solid if h_τ(α) is large enough since, in that case, the velocity of the fluid in these zones is close to zero. The conductivity of the matter (fluid or solid) is also adjusted via the term k_τ.
This formalism allows studying heat exchange between fluid and solid with a single unified approach, unlike the classical conjugate heat transfer approach, where different physics are solved for the solid and fluid parts. The flow considered in this paper is incompressible, Newtonian, laminar and steady. The physics of the flow is described by the non-dimensional equations (1)-(4) given below, with the notation ∇_n = n • ∇ :
∇ • u = 0, (1)
(u • ∇) u = -∇p + Re^-1 ∆u - h_τ(α) u + Ri θ e_y, (2)
∇ • (uθ) = Re^-1 Pr^-1 ∇ • (k_τ(α) ∇θ), (3)
u = 0, ∇_n p = 0, θ = 1 on Γ_1,
u = 0, ∇_n p = 0, ∇_n θ = 0 on Γ_2,
u = u_i e_x, ∇_n p = 0, θ = 0 on Γ_i,
∇_n u = 0, p = 0, ∇_n θ = 0 on Γ_o. (4)
The flow enters the domain Ω through the inlet boundary Γ_i and leaves through the outlet Γ_o. The domain is partially enclosed by hot walls Γ_1 and adiabatic walls Γ_2, as presented in Figure 2. The non-dimensional velocity u is defined by u = u⋆/U, where U (set to 1) is related to the inlet velocity. The non-dimensional pressure p is defined as p = p⋆/(0.5 ρ U^2), where ρ is the density of the fluid. The fields u⋆ and p⋆ are the dimensional velocity and pressure fields. The non-dimensional temperature θ is defined as θ = (T - T_ref)/(T_wall - T_ref), where T is the dimensional temperature field, T_wall is the temperature imposed on the heated boundary Γ_1 and T_ref is a reference temperature. Note that the non-dimensional temperature on the hot wall is thus θ_wall = 1. The above system involves several non-dimensional numbers, defined below. The Reynolds number is Re = UL/ν, where ν is the kinematic viscosity of the fluid. The Richardson number is Ri = Gr_b/Re^2 and represents the balance between the gravitational energy and the kinetic energy of the flow. The modified Grashof number Gr_b = gβ∆T L^3/ν^2, where ∆T = T_wall - T_ref, represents the balance between the magnitude of the buoyancy force and the magnitude of the viscous force. The Prandtl number Pr = ν/k is the ratio between the momentum and thermal diffusivities.
Interpolation functions
Note that h_τ(α) ideally takes only binary values h_τ(α) ∈ {0, +∞}, where h_τ(α) = 0 indicates the fluid zones and h_τ(α) = +∞ the solid zones. Since such a constraint can hardly be used in an optimization algorithm, it is relaxed first by considering 0 ≤ α ≤ α_max for a large enough α_max, and then with so-called interpolation functions, which are smooth regularizations of step-like functions. Since the pseudo-inverse permeability α is a scalar field with α ∈ [0, α_max], using it directly to penalize the physical equations may lead to parts of the domain Ω where the matter has an intermediate physical behavior between solid and fluid. This may lead to optimization results that are not physical. To overcome this issue, several penalization techniques are used in the literature. One common approach is the RAMP function [START_REF] Borrvall | Topology optimization of fluids in stokes flow[END_REF][START_REF] Alexandersen | Topology optimisation for natural convection problems[END_REF][START_REF] Marck | Topology optimization of heat and mass transfer problems: Laminar flow[END_REF], but the approach used herein is the one proposed by Ramalingom et al. [START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF], based on a sigmoid function, which exhibits a smaller transition area between the fluid and solid behaviors than the RAMP approach.
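To make the behaviour of this penalization concrete, a minimal Python sketch of the sigmoid interpolation functions (whose explicit expressions are recalled in Eqs. (5)-(6) below) is given here; the numerical values of α_max, α_0, τ and k_s/k_f are placeholders chosen for illustration and are not the parameters used in the computations of this paper.

```python
import numpy as np

def h_tau(alpha, alpha_max, alpha_0, tau):
    """Sigmoid interpolation of the inverse permeability, Eq. (5)."""
    return alpha_max * (1.0 / (1.0 + np.exp(-tau * (alpha - alpha_0)))
                        - 1.0 / (1.0 + np.exp(tau * alpha_0)))

def k_tau(alpha, ks_over_kf, alpha_0, tau):
    """Sigmoid interpolation of the thermal diffusivity ratio, Eq. (6)."""
    return (ks_over_kf - 1.0) * (1.0 / (1.0 + np.exp(-tau * (alpha - alpha_0)))
                                 - 1.0 / (1.0 + np.exp(tau * alpha_0))) + 1.0

# Placeholder parameters, for illustration only.
alpha_max, alpha_0, tau, ks_over_kf = 1e4, 50.0, 0.2, 10.0
alpha = np.linspace(0.0, alpha_max, 5)

print("h_tau:", h_tau(alpha, alpha_max, alpha_0, tau))   # ~0 for alpha -> 0, ~alpha_max otherwise
print("k_tau:", k_tau(alpha, ks_over_kf, alpha_0, tau))  # ~1 for alpha -> 0, ~ks/kf otherwise
```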
Some comparisons between the RAMP and sigmoid interpolation functions have been done in [START_REF] Ramalingom | A multi-objective optimization problem in mixed and natural convection for a vertical channel asymmetrically heated[END_REF], and it has been shown that either choice gives similar results. The pseudo-inverse permeability can then be interpolated with the expression (5), where α_0 and τ are, respectively, the pseudo-inverse permeability threshold and the slope of the sigmoid function, and α_max is the maximum value that can be taken by the design parameter α. The thermal diffusivity is interpolated with the sigmoid function (6), where k_f and k_s are, respectively, the fluid and solid thermal diffusivities.
h_τ(α) = α_max [ 1/(1 + e^(-τ(α - α_0))) - 1/(1 + e^(τ α_0)) ], (5)
k_τ(α) = (k_s/k_f - 1) [ 1/(1 + e^(-τ(α - α_0))) - 1/(1 + e^(τ α_0)) ] + 1. (6)
It is worth noting that lim_(α→0) h_τ(α) = 0, lim_(α→0) k_τ(α) = 1, lim_(α→α_max) h_τ(α) = α_max and lim_(α→α_max) k_τ(α) = k_s/k_f.
Definition of the cosine cost function
The objective function we propose is closely related to the relative orientation, within the domain, of the velocity and heat flux fields. As pictured on the left part of Figure 1, if the velocity field is collinear with the heat flux, it can be assumed that the heat is well convected along the velocity direction. The right part of Figure 1 represents a heat flux vector field that is not collinear with the velocity field. The heat will be less effectively convected along the direction of the velocity field than in the first case. The angle between these two vector fields is directly linked to the definition of the cosine. Hence, it is natural to define an objective function based on the cosine field between the velocity and the heat flux. The idea is that obtaining a cosine of 1 (collinear vectors) leads to better heat transfer than other values. Therefore, it should be interesting to maximize the following cost function:
J_cos(u, θ) = ∫_Ω (u • ∇θ)/(∥u∥ ∥∇θ∥) dΩ, (7)
where the fraction in the integral is the cosine of the angle between the two vector fields u and ∇θ. This angle is commonly called β in the FSP framework. In his critique of the FSP, Bejan stated in [START_REF] Bejan | Heatlines (1983) versus synergy (1998)[END_REF]: "The angle β is not a degree of freedom, a knob to be turned by the designer. There is an infinity of angles β distributed throughout the flow field, and each β depends on its neighbors (β is a field). The distribution of β is one, and it is fixed, just like the distribution of T and (u, v) in the specified flow configuration." This paper precisely tries to influence the flow field and the heat exchange performance by building an objective function on the β field, in order to try to "turn the knob". Going further than the above considerations, it can be shown that our approach is related to the classical cost function that aims at maximizing the thermal power,
J_2(u, θ) = ∫_∂Ω u • n θ dΓ. (8)
Such functionals, which involve the values of physical quantities at the inlet and outlet of the system, are referred to as global cost functions. Local cost functions are then functions defined on the whole computational domain Ω. The idea behind local cost functions is to use them to define correlations between local flow parameters and their global impact.
Using Green's formula and the incompressibility condition, one gets J_2(u, θ) = ∫_Ω ∇ • (uθ) dΩ = ∫_Ω u • ∇θ dΩ. From the Cauchy-Schwarz inequality, we have
-∥u∥ ∥∇θ∥ ≤ u • ∇θ ≤ ∥u∥ ∥∇θ∥.
To highlight further the effect of the angle between u and ∇θ on heat transfer, we now estimate the heat flow across some surface ∂U, where U is a subset of Ω. We consider two sets of velocity fields and temperatures (u_1, θ_1) and (u_2, θ_2) such that the velocities, respectively the temperature gradients, have the same length, but the angle between them varies (see e.g. what is shown in Figure 1). Hence:
∀x ∈ U : ∥u_1(x)∥ = ∥u_2(x)∥, ∥∇θ_1(x)∥ = ∥∇θ_2(x)∥, cos(∠[u_1(x), ∇θ_1(x)]) < cos(∠[u_2(x), ∇θ_2(x)]).
Integrating the energy equation (3) over U and using Green's formula gives
∫_∂U Re^-1 Pr^-1 k_τ(α) ∇θ • n_s dΓ = ∫_U ∥u∥ ∥∇θ∥ cos(∠[u, ∇θ]) dx,
where n_s is the outward unitary normal to ∂U. Introducing the heat flux density j(θ) = k_τ(α) ∇θ and using the previous computation, we obtain
Φ(θ, u) = ∫_∂U j(θ) • n_s dΓ = Re Pr ∫_U ∥u∥ ∥∇θ∥ cos(∠[u, ∇θ]) dx, (9)
where Φ(θ, u) is the heat flow. From (9), it follows that the heat flows associated with (u_1, ∇θ_1) and (u_2, ∇θ_2) satisfy Φ(θ_1, u_1) < Φ(θ_2, u_2). From this inequality, we see that, for velocities and temperature gradients of the same length, the heat flow increases as the angle between u and ∇θ decreases, namely as the value of cos(∠[u, ∇θ]) increases (see also [25, Eq. (4)] for a similar conclusion). Therefore, it is expected that maximizing cos(∠[u, ∇θ]) will increase the heat flux, which motivates using J_cos as a cost function.
Remark 1. Similarly, if we consider velocities u_i and temperature gradients ∇θ_i having the same angle but different norms, namely satisfying
∀x ∈ U : ∥u_1(x)∥ ∥∇θ_1(x)∥ ≤ ∥u_2(x)∥ ∥∇θ_2(x)∥, 0 < cos(∠[u_1(x), ∇θ_1(x)]) = cos(∠[u_2(x), ∇θ_2(x)]),
then one can see from (9) that Φ(θ_1, u_1) < Φ(θ_2, u_2). As a result, the heat flux can also be increased by increasing the norms of the velocity field and of the temperature gradient. Nevertheless, as indicated in the introduction, since the field synergy angle is widely used in the literature [START_REF] Tao | A comprehensive review and comparison on heatline concept and field synergy principle[END_REF][START_REF] Zhao | A review on heat enhancement in thermal energy conversion and management using field synergy principle[END_REF], we are going to use the cosine as a cost function in our topology optimization problem defined in the next section (see (11)).
To end this section, we emphasize that in the solid zones of the computational domain, the velocity is penalized so that u ≈ 0. This could yield indeterminate values (e.g. 0/0) in the cosine cost function (7). To avoid such behavior, we will consider the following modification of the cosine cost function:
J_cos(u, θ) = ∫_Ω (u • ∇θ)/(∥u∥ ∥∇θ∥ + s) dΩ, (10)
where a small scalar s, fixed to 10^-6, has been added to the denominator to regularize the function near zero. The function is then zero where the velocity or heat flux magnitude is zero.
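As an illustration, a minimal discrete transcription of the regularized cosine field of Eq. (10) is sketched below in Python; the cell-wise velocity vectors, temperature gradients and cell volumes are assumed to be already available (for instance exported from the flow solver), and the small arrays used here are placeholders, not data from the computations of this paper.

```python
import numpy as np

def synergy_cosine(u, grad_theta, s=1e-6):
    """Cell-wise regularized cosine between velocity and temperature gradient, Eq. (10)."""
    dot = np.sum(u * grad_theta, axis=1)
    norms = np.linalg.norm(u, axis=1) * np.linalg.norm(grad_theta, axis=1)
    return dot / (norms + s)          # ~0 where either field vanishes (penalized solid)

def J_cos(u, grad_theta, cell_volumes, s=1e-6):
    """Discrete evaluation of the cosine objective function over the cells."""
    return np.sum(synergy_cosine(u, grad_theta, s) * cell_volumes)

# Placeholder cell data (3 cells, 2D vectors), for illustration only.
u = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 0.0]])        # last cell: solid, u ~ 0
grad_theta = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
vol = np.array([1.0, 1.0, 1.0])

print(synergy_cosine(u, grad_theta))   # approx [1.0, 0.707, 0.0]
print(J_cos(u, grad_theta, vol))
```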
Adjoint-based solver for the topology optimization problem
We now want to minimize the following multi-objective functional:
min J(u, p, θ) = c_1 J_1(u, p, θ) - c_2 J_2(u, p, θ) - c_3 J_cos(u, p, θ), where (u, p, θ) are subject to (1)-(4), (11)
where the first cost function J_1 aims at minimizing the pressure losses and is defined by
J_1(u, p) = -∫_∂Ω u • n (p + (1/2)∥u∥^2) dΓ. (12)
The cost function J_2 is defined by (8) and will be maximised. We emphasize that J_2 is a classical objective function used in the literature, aiming to improve the thermal power gain through the domain. The constants c_1, c_2 and c_3 are the so-called weight factors of the objective functions in the framework of multi-objective optimization. Since the J_2 and J_cos objective functions are meant to be maximized, it is equivalent to minimize their opposites. This explains the minus signs in front of these objective functions in (11). We solve the topology optimization problem (11) with a gradient-descent algorithm. The gradient of the cost function with respect to the design variable α is computed with the continuous adjoint method (see e.g. [START_REF] Gunzburger | Perspectives in Flow Control and Optimization[END_REF]). The adjoint system is defined as the critical point of the following Lagrangian with respect to the so-called primal variables (u, p, θ):
L(u, θ, p, u*, θ*, p*, α) = J(u, θ, p) + ∫_Ω p* ∇ • u dΩ + ∫_Ω u* • [(u • ∇) u + ∇p - A∆u + h_τ(α)u - Bθe_y] dΩ + ∫_Ω θ* [∇ • (uθ) - ∇ • (Ck_τ(α)∇θ)] dΩ, (13)
where (u*, p*, θ*) are the adjoint variables. In the sequel, we first compute the derivative of J with respect to (u, p, θ) and then give the adjoint problem. We end this section with the algorithm used to solve the topology optimization problem (11) involving the local cost function.
Derivatives of the cost functions
For any function F : X → R, where X is a normed space, the Gateaux (directional) derivative is defined as
∂F/∂x [δx] = lim_(t→0) (F(x + t δx) - F(x))/t.
Assuming also that X is a Hilbert space equipped with an inner product (•, •)_X, we can define the gradient of F, denoted here by ∂F/∂x, thanks to the identification ∂F/∂x [δx] = (∂F/∂x, δx)_X. Below, we identify the gradients of the functionals using the L^2 inner product (u, v) := ∫_Ω u • v dΩ, which is defined accordingly for scalar-valued functions. For the cost function considered in this paper, we have
∂J/∂(u, p, θ) [δu, δp, δθ] = c_1 ∂J_1/∂(u, p, θ) [δu, δp, δθ] - c_2 ∂J_2/∂(u, p, θ) [δu, δp, δθ] - c_3 ∂J_cos/∂(u, p, θ) [δu, δp, δθ],
and we now compute each term. For the pressure losses functional, we have
∂J_1/∂(u, p, θ) [δu, δp, δθ] = -∫_Γ u • n δp dΓ - ∫_Γ (p_t n + (u • n) u) • δu dΓ,
where p_t = p + ∥u∥^2/2 is the total pressure. For the thermal power cost function, one has
∂J_2/∂(u, p, θ) [δu, δp, δθ] = ∫_Γ u • n δθ dΓ + ∫_Γ θ n • δu dΓ.
Before computing the derivative of the proposed local cost function, we recall that the derivative of the Euclidean norm N : x ∈ R^N → ∥x∥ ∈ R is
∂N/∂x [δx] = (x/∥x∥) • δx.
Using this, the derivative of the local (cosine) cost function is ∂J cos ∂(u, p, θ) [δu, δp, δθ] = ∂J cos ∂u [δu] + ∂J cos ∂θ [δθ] = Ω δu • 1 (∥u∥ ∥∇θ∥ + s) 2 ∇θ (∥u∥ ∥∇θ∥ + s) -(u • ∇θ) u ∥u∥ ∥∇θ∥ dΩ + Ω ∇δθ • 1 (∥u∥ ∥∇θ∥ + s) 2 u (∥u∥ ∥∇θ∥ + s) -(u • ∇θ) ∇θ ∥∇θ∥ ∥u∥ dΩ = Ω δu • (ψ(u, θ)) dΩ + Γ δθ ϕ(u, θ) • n dΓ - Ω δθ ∇ • (ϕ(u, θ)) dΩ, where ψ(u, θ) = 1 (∥u∥ ∥∇θ∥ + s) 2 ∇θ (∥u∥ ∥∇θ∥ + s) -(u • ∇θ) u ∥u∥ ∥∇θ∥ , ϕ(u, θ) = 1 (∥u∥ ∥∇θ∥ + s) 2 u (∥u∥ ∥∇θ∥ + s) -(u • ∇θ) ∇θ ∥∇θ∥ ∥u∥ . Using the previous computations, the derivatives of the cost function J can be written as ∂J ∂(u, p, θ) [δu, δp, δθ] = Ω ∂J Ω ∂(u, p, θ) [δu, δp, δθ] dΩ + Γ ∂J Γ ∂(u, p, θ) [δu, δp, δθ] dΓ, with ∂J Γ ∂u [δu] = c 3 Ω δu • ψ(u, θ) dΩ, ∂J Γ ∂θ [δθ] = -c 3 Ω δθ ∇ • (ϕ(u, θ)) dΩ, ∂J Γ ∂p [δp] = 0, and: ∂J Ω ∂u [δu] = -c 1 Γ (p t n + (u • n) u) • δu dΓ + c 2 Γ θ n • δu dΓ, ∂J Ω ∂θ [δθ] = +c 2 Γ u • n δθ dΓ + c 3 Γ ϕ(u, θ) • n δθ dΓ, ∂J Ω ∂p [δp] = -c 1 Γ u • n δp dΓ. Adjoint equations The continuous adjoint system is defined as the critical points of the Lagrangian, defined in [START_REF] Gersborg-Hansen | Topology optimization of heat conduction problems using the finite volume method[END_REF], with respect to the primal variables (u, θ, p). The latter has already been computed for instance in [START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF] and reads as ∇ • u * = ∂J Ω ∂p , Re -1 ∆u * + (u • ∇) u * + (∇u * ) T • u + ∇p * + θ∇θ * -h τ (α)u * = ∂J Ω ∂u , u • ∇θ * + Re -1 Pr -1 ∇ • (k (α) ∇θ * ) = ∂J Ω ∂θ , (14) supplemented with the following boundary conditions on Γ 1 and Γ i : on Γ 1 ∪ Γ i : -u * • n = ∂J ∂p , -Re -1 u * • t = ∂J ∂u • t, Ri k τ (α)θ * = ∂J ∂θ on Γ 2 : -u * • n = ∂J ∂p , -Re -1 u * • t = ∂J ∂u • t, -Rik τ (α)∇ n θ * -(u • n)θ * = ∂J ∂θ , on Γ o : -Re -1 ∇ n u * -(u • n)u * -np * -nθθ * = ∂J ∂u , -Rik τ (α)∇ n θ * -(u • n)θ * = ∂J ∂θ . (15) The full adjoint system is therefore given by ( 14),(15). Algorithm to update conception variable Given some design variable α k , the sensibility ∇J k := ∂J ∂α (α k ) is defined with the following equation: ∇J k = ∂J ∂α (α) = - ∂h τ ∂α u • u * -Ri ∂k τ ∂α ∇θ • ∇θ * on Ω. ( 16 ) Once the adjoint set of equation is solved and the sensibility is computed, the design field α k is then updated across the k-th update step with the gradient method using α k+1 = α k + λ k d k , (17) where d k is the descent direction and λ k the step which will be constant. The design variables are evaluated by using the conjugated-gradient descent direction method associated to Polack-Ribiere method. The descent direction d k+1 is given by : d k+1 = ∇J k+1 + β k+1 d k where : β P R k+1 = ∇J T k+1 (∇J k+1 -∇J k ) ∇J T k ∇J k . The implementation follows the Algorithm 1. The systems of equations have been solved with the finite volume library OpenFoam [START_REF] Weller | A tensorial approach to computational continuum mechanics using object-oriented techniques[END_REF]. The algorithm follows the SIMPLE algorithm philosophy. The primal equations (1) to (4) and adjoint equations ( 14), [START_REF] Tian | Bionic topology optimization of fins for rapid latent heat thermal energy storage[END_REF] are solved, the sensitivity is computed with equation ( 16) and the design field α is updated with equation [START_REF] Zhang | Design of nanofluid-cooled heat sink using topology optimization[END_REF]. 
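As a minimal sketch of the design update (16)-(17) with the Polak-Ribiere direction described above, one step could be written as follows; the arrays are assumed to be flat NumPy fields with one value per cell, the step length is the constant λ k of the text, and the sign conventions follow the formulas as written (all names are illustrative).

import numpy as np

def polak_ribiere_update(alpha, grad_new, grad_prev, d_prev, lam):
    """One conjugate-gradient update of the design field, following (16)-(17)."""
    denom = max(float(np.dot(grad_prev, grad_prev)), 1e-30)        # guard against division by zero
    beta = float(np.dot(grad_new, grad_new - grad_prev)) / denom   # Polak-Ribiere coefficient
    d_new = grad_new + beta * d_prev                               # new direction d_{k+1}
    alpha_new = alpha + lam * d_new                                # design update (17)
    return alpha_new, d_new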
The generalized Algorithm 1 Optimization algorithm 1: initialization of constant Re, Ri, Pr, α 0 , τ 0 , s 2: evaluation of cost function J 0 3: while ϵ ≥ 0.01 over 1000 iterations do 4: solve primal equation ( 2) to (4) with a SIMPLE algorithm loop. 5: solve adjoint equations ( 14), [START_REF] Tian | Bionic topology optimization of fins for rapid latent heat thermal energy storage[END_REF] with a SIMPLE algorithm loop. 6: update sensitivity according to [START_REF] Zhao | A "poor man's approach" to topology optimization of cooling channels based on a Darcy flow model[END_REF] 7: update design field α with (17), along with h τ (α), k τ (α) 8: evaluation of J k ; ϵ ← J k J k-1 9: J k-1 ← J k 10: k ← k + 1 11: end while Geometric-Algebraic Multi-Grid (GAMG) solver with a cell-centered colocalized finite volume approach is used. First order numerical schemes are used to discretize the convective terms. The optimization process is stopped when the evaluation of function remains stable at 1% over 1000 iterations of the optimization process. The current algorithm has been used in [START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF][START_REF] Ramalingom | A multi-objective optimization problem in mixed and natural convection for a vertical channel asymmetrically heated[END_REF] and will be used to get all our numerical results. Numerical results In this section, we present first the result of the CFD analysis without optimization on two geometries (single pipe and bend pipe). After that, the optimized results considering the classical multi-optimization approach (J 1 , J 2 ), denominated by SP1 for the single pipe and BP1 for the bend pipe, will be quantitatively and qualitatively compared to the approach proposed by this paper, namely using cost functions (J 1 , J cos ) in a multi-objective optimization framework for the single pipe and the bend pipe (denominated SP2 and BP2). It will be demonstrated that the results are very similar. The FSP principle will be finally confronted with the thermal power gain. Figure 2 represents the studied geometries, their design domain Ω and their boundaries for the single pipe and the bend pipe. These configurations are some common shapes studied in the literature [START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF][START_REF] Marck | Topology optimization of heat and mass transfer problems: Laminar flow[END_REF][START_REF] Subramaniam | Topology optimization of conjugate heat transfer systems: A competition between heat transfer enhancement and pressure drop reduction[END_REF] to develop some new approaches to topology optimization. Both geometries are squares of side L, regularly meshed with 40,000 cuboid cells. The single pipe is a straightforward symmetrical geometry with hot walls on the top and lower walls. The inlet is located on the middle of the left wall, and the outlet on the middle of the right wall. The bend pipe has its inlet on the upper part of the left wall and its outlet on the left of the bottom wall, forcing the flow to curve its trajectory through the domain. The hot wall is on the left part of the lower wall. The inlet and outlet sizes for both geometries are L 5 . The hot walls have a fixed temperature, and the other walls are adiabatic. 
For the two representative cases studied in this section, the thermal diffusivity ratio used in the energy equation (3) with (6) is fixed to k s /k f = 4.4, representing, for example, aluminum for the solid and air for the fluid. For the sigmoid interpolation factor in (5) and (6), we set α 0 = 20, τ = 0.7 and α max = 200. The Prandtl number is set to Pr = 0.71. The Reynolds and Richardson numbers investigated in this paper are Re = 100, 200, 400 and Ri = 0, 0.3, 3. Finally, the inlet non-dimensional velocity is set to u i e x with u i = 1. Results for the single pipe without optimization Figure 3 shows the ∥u∥, θ, ∥∇θ∥ and cos(∠[u, ∇θ]) fields without optimization for the single pipe with Re=200 and Ri=0.3. The flow is directed straightforwardly from left to right, with some slight recirculation areas appearing outside the central part of the flow. The temperature field from the hot walls is diffused through the domain and is then convected by the flow through the outlet. The heat flux is pronounced near the interfaces of the main flow path. Figure 3-(d) illustrates the cosine field, which is the integrand of the objective function J cos we experiment with in this article. Maximizing J cos will bring the average value of this field closer to one. With the exception of the outlet area, it shows that the main flow has positive cosine values (shades of red on the figure) and is outlined by a negative cosine field (blue). Areas with a zero cosine are in white. We clearly see a horizontal line with cos(∠[u, ∇θ]) ≈ 1, which reveals a very high synergy. According to the FSP, this high synergy should lead to a high local contribution of the flow to heat transfer. However, it can be seen in Figure 3-(c) that the magnitude of the heat flux on this line is near zero. This observation is precisely the criticism raised by Bejan in [START_REF] Bejan | Heatlines (1983) versus synergy (1998)[END_REF]. It is worth mentioning that the fields are not strictly symmetric because of the Richardson number and gravity, which apply a force in the vertical direction. Results for the bend pipe without optimization Figure 4 presents the fields for the bend pipe with Re=200 and Ri=0. Figure 4-(d) shows the cosine field. As for the single pipe, the main part of the flow leads to a positive cosine. In the lower left part of the domain, where a recirculating flow appears, the cosine alternately takes positive and negative values. The upper part of the flow contour shows an area with cos(∠[u, ∇θ]) ≈ 1, while the heat flux shown in Figure 4-(c) has minimal values near this area. This reveals some limitations of the FSP in areas distant from the heat-exchange interfaces, where local heat exchanges are low, as stated previously for the single pipe. Comparison of Local VS global approaches: Selected cases The shapes and flows obtained with optimization using the local approach are qualitatively comparable with the ones in [START_REF] Subramaniam | Topology optimization of conjugate heat transfer systems: A competition between heat transfer enhancement and pressure drop reduction[END_REF][START_REF] Ramalingom | A new interpolation technique to deal with fluid-porous media interfaces for topology optimization of heat transfer[END_REF] and bring new designs of topology optimization for the single and bend pipe cases.
Among all the configurations tested for each set of Reynolds and Richardson numbers and for each geometry, the representative cases for Re = 200 and Ri = 0.3, denoted SP1, SP2, BP1 and BP2 and whose parameters are shown in Table 1, will be presented. These couples have been chosen to have nearly the same performance, as will be outlined. First of all, one can see that our algorithm succeeds in minimizing or maximizing either cost function for each investigated geometry. Table 1 gives the J values before and after optimization. We observe that the objective function decreases by up to a factor 6.0 in the bend pipe, for example, and up to a factor 7.6 in the single pipe. Hence our algorithm converges to an optimized solution for these studied cases. Figure 5 presents the fields resulting from the multi-objective optimization of (J 1 , J 2 ) for the single pipe case SP1, in a configuration where the heat exchange is favored over the mechanical power. The optimized shape leads the flow to be split through the upper and lower parts of the domain, near the heated walls, to gain heat and convect it through the outlet. This example shows that the optimization process serves its purpose, since the more the fluid travels near the hot wall, the more heat it gains and transfers through the domain. Figure 6 displays the optimized single pipe for the case SP2. The h τ (α) field shows roughly the same shape as the case SP1 shown in Figure 5, except that some solid is put in a part of the outlet. The solid in the middle of the domain also contains some fluid, meaning that fluid is trapped inside the solid region. This phenomenon is a known drawback of density-based topology optimization approaches and may appear in multi-physics problems [START_REF] Alexandersen | A review of topology optimisation for fluid-based problems[END_REF]. For the single pipe cases SP1 and SP2, even if the shapes differ a little, the plot of the velocity magnitude shows that in both cases the flow is split in two by an internal solid shape, with comparable velocity amplitude. One part of the flux is directed toward the heating wall to gain heat, while the other goes more directly to the outlet. We can see on the h τ (α) field for (J 1 , J cos ) (case SP2) that even if the solid part is less closed than with the classical approach (case SP1), the flow is still well channelled, as can be seen on the velocity magnitude figures. Looking at the cos(∠[u, ∇θ]) field, we can see that the case SP2 shows less negative cosine (blue areas) than the SP1 case. This was expected since, in the SP2 case, the objective function involves the maximization of J cos , which tends to increase the mean value of the cos(∠[u, ∇θ]) field. It can be concluded that even if the density field h τ (α) may look different, nearly the same velocity and temperature fields are obtained for both cases. This correspondence qualitatively confirms that the cosine objective function J cos is close to the classical J 2 objective function in this case. Figures 7 and 8 display the optimized bend pipe cases BP1 and BP2. For both cases, the optimized shape presents a large solid region in most of the middle to right part of the domain. This shape pushes the flow toward the hot wall in order to gain heat that is then convected to the outlet, leading to increased heat exchanges. For BP2, the h τ (α) field shows bulb-shaped structures near the outlet.
This makes the temperature field spread a little further away from the hot wall. The velocity magnitudes and the velocity fields are comparable, even if a "leak" occurs inside the upper middle part of the solid BP2 shape. For the BP2 case, optimized with (J 1 , J cos ), the solid area is less sharply delimited than with (J 1 , J 2 ), as was already observed for SP2. As for the single pipe cases SP1 and SP2, and for the same reason, the cosine field for BP2 shows almost only positive values (in red on the figure), whereas more negative values are present for BP1. Nevertheless, from a qualitative point of view, the cases BP1 and BP2 are very similar. The thermal power gain of each pair of cases is also comparable from a quantitative point of view. Table 2 shows the values obtained for each case. Comparing SP1 and SP2, the mechanical power loss is lower by 0.5% for SP2, and the thermal power gain is nearly the same, close to 0.06%. Between BP1 and BP2, the mechanical power loss is lower by 5% for BP2, and the thermal power gain is nearly the same, close to 0.006%. This means that BP2 shows better mechanical performance for the same heat exchanged. Note that these points have been deliberately chosen to be close in performance. A panorama of all the tested configurations is presented in the following subsection. Local VS global approaches: Pareto fronts Since we wish to study the effect of the optimization of the local cost function on the heat transfer, we consider the following multi-objective cases in the frame of optimization problem [START_REF] Høghøj | Topology optimization of two fluid heat exchangers[END_REF]: c 1 ∈ [0, 1], c 2 ∈ [0, 1], c 3 = 0 for J = c 1 J 1 -c 2 J 2 (classical approach); c 1 = 1, c 2 = 0, c 3 ∈ [0, 50] for J = c 1 J 1 -c 3 J cos (local approach). Several weight factors c 1 , c 2 and c 3 have been chosen for each objective in order to sweep the space between favoring the flow [START_REF] Ramalingom | A multi-objective optimization problem in mixed and natural convection for a vertical channel asymmetrically heated[END_REF] and enhancing the thermal exchange ((8) or [START_REF] Pietropaoli | Three-dimensional fluid topology optimization for heat transfer[END_REF]). The performance of each numerical experiment is then reported on a Pareto front, plotting the thermal power gain against the mechanical power loss (a small illustrative sketch of the selection of non-dominated points is given below). For each couple of Reynolds and Richardson numbers, a Pareto front showing the performance of the whole set of weight factors has been computed. The Pareto front including the cases SP1 and SP2 for the single pipe is shown on Figure 9, while the one including BP1 and BP2 is shown on Figure 10 for the bend pipe. The couples SP1, SP2 and BP1, BP2 are each time represented with cross markers instead of circle markers. As stated above, they are nearly superposed because they were chosen to give the same performance. The abscissa is the mechanical power loss between the inlet and the outlet; this power loss corresponds to the objective function J 1 defined in [START_REF] Ramalingom | A multi-objective optimization problem in mixed and natural convection for a vertical channel asymmetrically heated[END_REF]. The ordinate represents the thermal power gain between the inlet and the outlet, corresponding to the objective function J 2 defined in [START_REF] Marck | Topology optimization of heat and mass transfer problems: Laminar flow[END_REF].
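A Pareto front keeps, among all tested weight combinations, the points that are not dominated in the (mechanical power loss, thermal power gain) plane, lower loss and higher gain both being preferred. The following minimal sketch illustrates this selection as a post-processing step; it is purely illustrative and not part of the solver, and the sample values are simply the SP1 and SP2 entries of Table 2.

def pareto_front(points):
    """Return the non-dominated (loss, gain) pairs: lower loss and higher gain are better."""
    front = []
    for i, (loss, gain) in enumerate(points):
        dominated = any(l2 <= loss and g2 >= gain and (l2, g2) != (loss, gain)
                        for j, (l2, g2) in enumerate(points) if j != i)
        if not dominated:
            front.append((loss, gain))
    return sorted(front)

# Example with the single pipe cases of Table 2:
print(pareto_front([(0.1592, 0.02537), (0.1584, 0.02549)]))   # -> [(0.1584, 0.02549)]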
For each Pareto front, the "Utopia" point is defined as the unreachable point corresponding to the best performance in thermal power gain and in power loss among the results obtained for a given geometry, Reynolds number and Richardson number. The blue, yellow and gray points represent the performance of the objective functions J 1 , J 2 and J cos optimized alone. The purple point gives the performance without optimization. The green points represent the performance for each tested couple (c 1 , c 3 ) for (J 1 , J cos ), while the red points represent the performance for each tested couple (c 1 , c 2 ) for (J 1 , J 2 ). The non-dominated points are linked by a blue dashed line representing the Pareto front. For the single pipe, Figure 9 highlights that, in the tested range of weighting factors, the (J 1 , J 2 ) and (J 1 , J cos ) points are mostly superposed, with (J 1 , J cos ) sometimes dominating the Pareto front and (J 1 , J 2 ) sometimes dominating it. It should be noted that when approaching the mechanical power loss of J 2 optimized alone (yellow point), the limits of the current methodology are reached, since the domain is almost entirely filled with solid and the solid regions show many leaks, with no clear flow path from the inlet to the outlet. The objective functions are nevertheless still minimized. For the bend pipe, the Pareto front of Figure 10 shows that most of the time the (J 1 , J 2 ) points dominate the other points, except in an area where the mechanical power loss ranges from 0.1 to 0.2, where (J 1 , J cos ) leads to better performance. This supports the previous assertion that the J cos objective function leads to optimized results comparable to those obtained with the classical approach, meaning that a designer could perform this multi-objective optimization of thermal power gain and mechanical power reduction while also taking the field synergy principle into account. Thermal power gain VS field synergy angle The field synergy principle is now confronted with the thermal power gain. To achieve this, we rely on the mean field synergy angle, defined in e.g. [START_REF] Guo | The field synergy (coordination) principle and its applications in enhancing single phase convective heat transfer[END_REF] by (1/V tot ) Σ i V i arccos( u i • ∇θ i / (∥u i ∥ ∥∇θ i ∥) ), where the V i are the cell volumes and V tot the total volume. Figures 11 and 12 depict the mean synergy angle versus the thermal power gain for the cases represented on the Pareto fronts of Figures 9 and 10. Two statements can be made from these plots. First, even though optimizing (J 1 , J 2 ) (red points) increases the field synergy, the mean synergy angle tends to decrease when the thermal power gain increases for those red points. Secondly, a better field synergy does not automatically lead to a better heat transfer. Some green points on these plots show a better synergy than other points but with less heat exchanged. For example, for the single pipe, SP2 (green cross) has a mean synergy of 80% while SP1 (red cross) has a mean synergy of 55%, although both cases have nearly the same thermal power gain (see Table 2).
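For completeness, the mean field synergy angle plotted in Figures 11 and 12 can be evaluated from the discrete fields as in the minimal volume-weighted sketch below; the array names and the conversion to degrees are illustrative choices, only the averaged quantity follows the definition above.

import numpy as np

def mean_synergy_angle(u, grad_theta, cell_vol, eps=1e-30):
    """Volume-weighted mean of arccos(u . grad(theta) / (|u| |grad(theta)|)), in degrees."""
    dot = np.sum(u * grad_theta, axis=1)
    norms = np.linalg.norm(u, axis=1) * np.linalg.norm(grad_theta, axis=1)
    cos_angle = np.clip(dot / np.maximum(norms, eps), -1.0, 1.0)   # local cosine, clipped for safety
    angle = np.arccos(cos_angle)                                   # local synergy angle in radians
    return float(np.degrees(np.sum(angle * cell_vol) / np.sum(cell_vol)))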
This confirms the criticism of Bejan [START_REF] Bejan | Heatlines (1983) versus synergy (1998)[END_REF] that a good synergy field does not automatically translate into good heat transfer. Even if these results are not presented, the same conclusion can be established for the other tested couples of Reynolds and Richardson numbers. Figure 12: Mean synergy versus thermal power gain for the bend pipe - Re=200, Ri=0.3 Conclusions and perspectives This paper used a topology optimization method for Newtonian laminar flow involving heat transfer. A new objective function involving the local angle between the velocity field and the temperature gradient has been proposed. This approach connects with the so-called field synergy principle (FSP). This objective function is tested alongside and compared with more classical ones in a multi-objective optimization process based on an adjoint solver. The single pipe and bend pipe configurations are studied for several Reynolds and Richardson numbers. The performance of the cosine objective function has been compared with the classical thermal power objective function with the use of Pareto fronts. The field synergy principle has been analyzed by comparing the mean field synergy angle to the thermal power gain between the inlet and outlet of the geometry. The research findings of the present paper can be summarized as follows: 1. The proposed cosine objective function enhances the thermal transfer. 2. Compared to the classical approach, the cosine objective function leads to comparable performance; sometimes better performance is obtained for the studied cases. 3. Improvement of the field synergy generally leads to better heat transfer. 4. A high field synergy sometimes leads to less heat transfer than a low field synergy, meaning that the field synergy principle is not always the best indicator of heat exchange through a domain. Since a criticism of the FSP is that the principle is mainly advantageous near the heat exchange interfaces, further studies should design a cosine objective function involving only areas with a high temperature gradient, to see whether this could lead to a further increase of the heat exchange through a domain. Acknowledgements All the authors are supported by the "Agence Nationale de la Recherche" (ANR), Project O-TO-TT-FU number ANR-19-CE40-0011. Figure 1: Orientation of the heat flux field (red arrows) with respect to the velocity field (blue arrows). The dashed lines represent isotherms of hot (red dashed line) and cold (yellow dashed line) temperature. Left: the fields are collinear, heat is well convected. Right: there is an angle between the fields, heat is less convected toward the right of the domain. Figure 2: Right: single pipe geometry. Left: bend pipe geometry.
Figure 3: Fields of the single pipe (∥u∥, θ, ∥∇θ∥, cos(∠[u, ∇θ])) - no optimization - Re = 200 and Ri = 0.3.
Figure 4: Fields of the bend pipe - no optimization - Re = 200 and Ri = 0.
Figure 5: Fields of the optimized single pipe, case SP1.
Figure 6: Fields of the optimized single pipe, case SP2.
Figure 7: Fields of the optimized bend pipe, case BP1.
Figure 8: Fields of the optimized bend pipe, case BP2.
Figure 9: Pareto front for the single pipe - Re = 200 and Ri = 0.3.
Figure 10: Pareto front for the bend pipe - Re = 200 and Ri = 0.3.
Figure 11: Mean synergy versus thermal power gain for the single pipe - Re=200, Ri=0.3.
Table 1: Parameters and objective function values before and after optimization for the selected cases SP1, SP2, BP1 and BP2.
Table 2: Numerical values obtained for the different optimized cases.
Case | Mechanical power loss | Thermal power gain
SP1  | 0.1592 | 0.02537
SP2  | 0.1584 | 0.02549
BP1  | 0.1724 | 0.00795
BP2  | 0.1649 | 0.00796
03546297
en
[ "math.math-ra" ]
2024/03/04 16:41:20
2022
https://hal.science/hal-03546297v5/file/double.pdf
Loïc Foissy email: [email protected] Bialgebras in cointeraction, the antipode and the eulerian idempotent Keywords: Double bialgebra, monoid of characters, eulerian idempotent. AMS classication. 16T05 16T30 68R15 05A05 05C15 Bialgebras in cointeraction, the antipode and the eulerian idempotent Introduction Quite recently, various combinatorial Hopf algebras equipped with a second coproduct appear in the literature: some based on trees [START_REF] Calaque | Two interacting Hopf algebras of trees: a Hopf-algebraic approach to composition and substitution of B-series[END_REF], on several families of graphs [START_REF] Manchon | On bialgebras and Hopf algebras or oriented graphs[END_REF][START_REF]Chromatic polynomials and bialgebras of graphs[END_REF], on nite topologies or posets [START_REF] Mohamed | Doubling bialgebras of nite topologies[END_REF][START_REF] Foissy | Commutative and non-commutative bialgebras of quasi-posets and applications to Ehrhart polynomials[END_REF][START_REF] Fauvet | The Hopf algebra of nite topologies and mould composition[END_REF], noncrossing partitions [START_REF] Ebrahimi-Fard | Operads of (noncrossing) partitions, interacting bialgebras, and moment-cumulant relations[END_REF], or on words related to Ecalle's mould calculus [START_REF] Ebrahimi-Fard | A comodule-bialgebra structure for word-series substitution and mould composition[END_REF], for example. These objects play an important role in Bruned, Hairer and Zambotti's study of stochastic PDEs [START_REF] Bruned | Singular KPZ Type Equations[END_REF][START_REF] Bruned | Algebraic renormalisation of regularity structures[END_REF]. Let us give some common properties of these objects. These are families pB, m, ∆, δq such that: pB, m, ∆q is a bialgebra. In most cases, it is a graded and connected Hopf algebra. pB, m, δq is a bialgebra, sharing the same product as pB, m, ∆q. It is generally not a connected coalgebra, as it contains non trivial group-like elements. Moreover, as these elements are not invertible, this is generally not a Hopf algebra. It turns out that pB, m, ∆q is a bialgebra in the category of right comodules of pB, m, δq, with the right coaction given by δ itself: the product m, the coproduct ∆, the unit map ν and the counit ε ∆ of ∆ are comodule morphisms. It is rather trivial for m and ν, but gives the two interesting following relations for ∆ and its counit ε ∆ : p∆ Idq m 1,3,24 ¥ pδ δq ¥ ∆, pε ∆ Idq ¥ δ ν ¥ ε ∆ , where m 1,3,24 : B 4 ÝÑ B 3 send the tensor a 1 a 2 a 3 a 4 to a 1 a 3 a 2 a 4 . In other words, for such an object, pB, m, ∆q is a right-comodule bialgebra over pB, m, δq, that is to say a bialgebra in the symmetric monoidal category of right comodules over pB, m, δq. For the sake of simplicity, these objects will be called double bialgebras in this text. Considering the associated characters monoids, we obtain two products ¦ and on the same set CharpBq, coming respectively from ∆ and δ, such that: pCharpBq, ¦q is a monoid (in most cases a group). pCharpBq, q is a monoid. pCharpBq, q acts (on the right) on pCharpBq, ¦q by monoid endomorphisms: for any λ 1 , λ 2 and µ CharpBq, pλ 1 ¦ λ 2 q µ pλ 1 µq ¦ pλ 2 µq. In the particular case where ∆ and δ are cocommutative, we obtain that pCharpBq, ¦, q is in fact a ring. Our aim in this text is a review of the theoretical consequences of this setting, illustrated by examples based on words with quasishue products and on graphs, with an unexpected application to the eulerian idempotent. 
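In standard notation, the two compatibility relations and the character identity stated above read as follows, writing * for the convolution product coming from ∆ and ⋆ for the one coming from δ (this only restates the displayed formulas with the tensor products made explicit):
(∆ ⊗ Id) ∘ δ = m 1,3,24 ∘ (δ ⊗ δ) ∘ ∆,   (ε ∆ ⊗ Id) ∘ δ = ν ∘ ε ∆ ,
where m 1,3,24 (a 1 ⊗ a 2 ⊗ a 3 ⊗ a 4 ) = a 1 ⊗ a 3 ⊗ a 2 a 4 , and, for all λ 1 , λ 2 , µ ∈ Char(B),
(λ 1 * λ 2 ) ⋆ µ = (λ 1 ⋆ µ) * (λ 2 ⋆ µ).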
We start with general results, with no particular hypothesis on the structure of pB, m, ∆q. We show that, as mentioned before, the monoid of characters pCharpBq, q of pB, m, δq acts on the monoid of characters pCharpBq, ¦q of pB, m, ∆q, but also on the space HompB, V q of linear homomorphisms from B to any vector space V (Proposition 2.5). If V is an algebra (respectively a bialgebra or a coalgebra), the subset of algebra (respectively bialgebra or coalgebra) morphisms is stable under this action. We also prove that, in the case where pB, m, ∆q is a Hopf algebra, then its antipode S is automatically a comodule morphism (Proposition 2.1), that is to say: δ ¥ S pS Idq ¥ δ. We also introduce an important tool, the map Θ, which sends a linear form λ on B to the endomorphism Θpλq pλ Idq ¥ δ. We prove in Proposition 2.2 that this map is compatible with both ¦ and : for any λ, µ B ¦ , Θpλ ¦ µq Θpλq ¦ Θpµq, Θpλ µq Θpµq ¥ Θpλq. As an example of consequence, we give in Corollary 2.3 a criterion for the existence of the antipode for pB, m, ∆q: this is a Hopf algebra, if and only if, the counit ϵ δ of the coproduct δ is invertible for the convolution product ¦ dual of ∆, and then the antipode is Θpϵ ¦¡1 δ q. An immediate consequence is that S is an algebra morphism and an algebra antimorphism, by a very classical result. Consequently, we obtain that pH, mq is commutative. By the way, this explains why no non commutative example of double bialgebra was known. We then add the assumption that pB, ∆q is a connected coalgebra. This gives the existence of an increasing ltration pB ¤n q nN (the coradical ltration) of B such that for any k, l, n N, mpB ¤k B ¤l q B ¤k l , ∆pB ¤n q n p0 B ¤p B ¤n¡p , and such that B ¤0 K1 B . In this case, for any vector space V , EndpB, V q inherits a distance, making it a complete hypermetric space. When V is an algebra, then EndpB, V q inherits a convolution product, which makes it a complete hypermetric algebra. Moreover, if f EndpB, V q satises f p1 B q 0, we obtain a continuous algebra map from the algebra of formal series KrrT ss to EndpB, V q, which sends T to f (Proposition 3.3): this allows to dene exponential, logarithm, or non integral powers of elements of EndpB, V q. This formalism can be used to prove Takeuchi's formula for the antipode, a universal property for shue coalgebras (Proposition 3.5), or the well-known exp-ln bijection between the Lie algebra of innitesimal characters to the group of characters of pB, m, ∆q (Proposition 3.6). One of the simplest examples of double bialgebra is the polynomial algebra KrXs, with its two coproducts dened by ∆pXq X 1 1 X, δpXq X X. We prove in Theorem 3.9 that it is a terminal object in the category of connected double bialgebras: in other words, for any connected double bialgebra pB, m, ∆, δq, there exists a unique double bialgebra morphism from B to KrXs. Moreover, this morphism is Φ ϵ X δ p1 pϵ δ ¡ ε ∆ qq X V ņ0 XpX ¡ 1q . . . pX ¡ n 1q n! ϵ n δ ¥ ∆pn¡1q , with the use of formal series described earlier, and where the maps ∆pn¡1q are the iterated of the reduced coproduct ∆. We also prove that this morphism Φ allows to construct all bialgebra morphisms from pB, m, ∆q to pKrXs, m, ∆q, thanks to the action of the monoid of characters pCharpBq, q, see Corollary 3.12. When applied to the double bialgebra of graphs, this gives the chromatic polynomial (Theorem 3.13). When applied to a quasishue double bialgebra, this gives a morphism involving Hilbert polynomials (Proposition 3.14). 
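For later reference, the coproducts of the polynomial algebra K[X] and the morphism Φ just described can be written, with the tensor products made explicit and H n (X) = X(X-1)...(X-n+1)/n! denoting the Hilbert polynomials, as:
∆(X) = X ⊗ 1 + 1 ⊗ X,   δ(X) = X ⊗ X,
Φ(1 B ) = 1,   Φ(x) = Σ n≥1 H n (X) ϵ δ ⊗n ∘ ∆̃ (n-1) (x) for x in the augmentation ideal Ker(ε ∆ ),
where ∆̃ (n-1) denotes the (n-1)-st iterated reduced coproduct; this is only a restatement of the formula displayed above.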
When one works with the enveloping algebra Upgq of a Lie algebra g, the eulerian idempotent is a useful projector on g, see [START_REF] Loday | Série de Hausdor, idempotents eulériens et algèbres de Hopf[END_REF][START_REF] Burgunder | Eulerian idempotent and Kashiwara-Vergne conjecture[END_REF][START_REF] Bandiera | Eulerian idempotent, pre-Lie logarithm and combinatorics of trees[END_REF] for several applications. It is originally dened on the enveloping algebra of a free Lie algebra, in terms of descents of permutations [START_REF] Solomon | On the Poincaré-Birkho-Witt theorem[END_REF]. This can be generalized without any problem to any connected bialgebra, by the formula ϖ lnpIdq V ķ1 p¡1q k 1 k m pk¡1q ¥ ∆pk¡1q , where the m pk¡1q are the iterated products. It is generally not a projector. If B is cocommutative, it is well-known that it is a projector on the Lie algebra of the primitive elements of B. The case of a commutative connected bialgebra is not so well known. We here consider the case of a connected double bialgebra B. An especially interesting innitesimal character is given by the logarithm ϕ of the counit ϵ δ . We prove that: ϕ is closely related to the double bialgebra morphism Φ, see Proposition 4.1: for any x B, ϕpxq Φpxq I p0q. for any innitesimal character µ of B, ϕ µ µ, see Lemma 4.3. Consequently, ϕ ϕ ϕ, which implies that ϖ is a projector, that its kernel is K1 B B 2 (Proposition 4.4), and its image contains the Lie algebra PrimpBq of primitive elements of B (but is not equal, except if B is cocommutative). This result can be extended to any commutative connected bialgebra (Proposition 4.13). In the case of the graph bialgebra, this innitesimal character admits a combinatorial interpretation in term of acyclic orientations with a single xed source (Theorem 4.9). For quasishue bialgebras, the eulerian idempotent is given in Corollary 4.17, in term of descents of surjections. Applications of this projector include that any commutative connected bialgebra can be seen as a subbialgebra of a shue bialgebra (Corollary 4.14), which in turns implies Homan's result [START_REF] Michael | Quasi-shue products[END_REF] that any commutative quasishue bialgebra is isomorphic to a shue algebra and that any commutative connected bialgebra can be embedded in a double bialgebra. We then make precise the hypothesis on pB, m, ∆q and assume that it is connected and graded: there exists a family of subspaces pB n q nN of B such that B V à n0 B n , and such that for any k, l, n N, mpB k B l q B k l , ∆pB n q n p0 B p B n¡p , and with B 0 K1 B . A natural question is the description of homogeneous morphisms from pB, m, ∆q to pKrXs, m, ∆q (noting that the unique double bialgebra morphism is usually not homogeneous). we obtain that these morphisms are in bijection with the space B ¦ 1 (Proposition 5.2), with explicit formulas (Corollary 5.3). In the case of graphs, taking λ B ¦ 1 dened by λp q 1, we obtain the bialgebra morphism sending any graph G of degree n to X n , and the action of pCharpBq, q allows to recover the interpretation of the coecients of the chromatic polynomials in terms of acyclic orientations of [START_REF] Greene | On the interpretation of Whitney numbers through arrangements of hyperplanes, zonotopes, non-Radon partitions, and orientations of graphs[END_REF][START_REF] Bishal | Chromatic polynomial and heaps of pieces[END_REF]. 
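Since the polynomial invariant attached to the graph double bialgebra is the chromatic polynomial, a small computational companion may help fix ideas; the sketch below evaluates the chromatic polynomial by the classical deletion-contraction recursion P(G) = P(G-e) - P(G/e). This is a standard textbook algorithm given purely for illustration (the encoding of a graph by vertex and edge sets is ours); it is not the algebraic formula for Φ used in the paper.

import sympy as sp

X = sp.symbols('X')

def chromatic_polynomial(vertices, edges):
    """Chromatic polynomial of a simple graph, via deletion-contraction.
    vertices: set of hashable labels; edges: set of frozensets {a, b} with a != b."""
    if not edges:
        return X ** len(vertices)                       # edgeless graph: X^n proper colourings
    e = next(iter(edges))
    a, b = tuple(e)
    deleted = edges - {e}                               # edge set of G - e
    merged_vertices = vertices - {b}                    # in G / e, the vertex b is merged into a
    merged_edges = set()
    for f in deleted:
        f2 = frozenset(a if v == b else v for v in f)
        if len(f2) == 2:                                # keep only genuine edges after merging
            merged_edges.add(f2)
    return sp.expand(chromatic_polynomial(vertices, deleted)
                     - chromatic_polynomial(merged_vertices, merged_edges))

# Example: the triangle, whose chromatic polynomial is X(X-1)(X-2).
triangle_v = {1, 2, 3}
triangle_e = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
print(chromatic_polynomial(triangle_v, triangle_e))     # X**3 - 3*X**2 + 2*X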
Finally, we also consider, following Aguiar, Bergeron and Sottile's result [START_REF] Aguiar | Combinatorial Hopf algebras and generalized Dehn-Sommerville relations[END_REF], that under a homogeneity condition, there exists a unique homogeneous double bialgebra morphism from pB, m, ∆, δq to the double bialgebra of quasisymmetric functions QSym, which is a special case of a quasishue double bialgebra based on a semigroup [START_REF] Ebrahimi-Fard | A comodule-bialgebra structure for word-series substitution and mould composition[END_REF]. This paper is organized as follows: the rst section recalls the denition of double bialgebras, the examples of graphs and of quasishue bialgebras. The second section gives general results on double bialgebras, including the properties of the map Θ and the actions of the monoid of characters. The third part concentrates on the particular case of connected double bialgebras, with the exp-ln bijection between innitesimal characters and characters and the polynomial invariants. The eulerian projector, its properties and their consequences, are studied in the next section, and the last section gives results in the more specic case of graded double bialgebras. Acknowledgements. The author acknowledges support from the grant ANR-20-CE40-0007 Combinatoire Algébrique, Résurgence, Probabilités Libres et Opérades. Notations 0.1. K is a commutative eld of characteristic zero. All the vector spaces in this text will be taken over K. For any k N, we put rks t1, . . . , ku. In particular, r0s r. We denote by KrrT ss the algebra of formal series with coecients in K. If P pTq °an T n KrrT ss, the valuation of P pTq is valpP pTqq mintn N | a n $ 0u. By convention, valp0q V. This induces a distance d on KrrT ss, dened by dpP pTq, QpT qq 2 ¡valpPpTq¡QpTqq , with the convention 2 ¡V 0. Then pKrrTss, dq is a complete metric space. If P pTq °an T n and QpT q KrrT ss with Qp0q 0, the composition of P and Q is P ¥ QpT q V ņ0 a n QpT q n . We shall use the classical formal series e T V ņ0 T n n! , lnp1 T q V ņ1 p¡1q n 1 n T n , p1 T q x V ņ0 xpx ¡ 1q . . . px ¡ n 1q n! T n , for x K. We recall the classical results: e T ¡ 1 ¨¥ lnp1 T q lnp1 T q ¥ e T ¡ 1 ¨ T, p1 T q x e x lnp1 T q . Moreover, for any formal series P pTq and QpT q with no constant terms, e T ¥ pPpTq QpT qq e T ¥ P pTq ¨ e T ¥ QpT q ¨, lnp1 T q ¥ pPpTq QpT q P pTqQpTqq lnp1 T q ¥ P pTq lnp1 T q ¥ QpT q, which implies that for any x, y K, p1 T q x y p1 T q x p1 T q y . 1 Cointeracting bialgebras Denition We refer to the references [START_REF] Abe | Hopf algebras[END_REF][START_REF] Cartier | Classical Hopf algebras and their applications[END_REF][START_REF] Sweedler | Hopf algebras[END_REF] for the main denitions on bialgebras and Hopf algebras. Let pB, m, δq be a bialgebra. Its counit will be denoted by ϵ δ . It is well-known that its category of (right) comodules is a monoidal category: If pM 1 , ρ 1 q and pM 2 , ρ 2 q are two comodules over B, then M 1 M 2 is also a comodule, with the coaction m 1,3,24 ¥ pρ 1 ρ 2 q, with m 1,3,24 : 4 M 1 B M 2 B ÝÑ M 1 M 2 B m 1 b 1 m 2 b 2 ÝÑ m 1 m 2 b 1 b 2 . If f 1 : M 1 ÝÑ M I 1 and f 2 : M 2 ÝÑ M I 2 are comodule morphisms, then f 1 f 2 : M 1 M 2 ÝÑ M I 1 M I 2 is a comodule morphism. The associativity of m implies that if pM 1 , ρ 1 q, pM 2 , ρ 2 q and pM 3 , ρ 3 q are three comodules over B, then pM 1 M 2 q M 3 and M 1 pM 2 M 3 q are the same comodule. The unit comodule is K with the coaction dened by dx K, ρpxq x 1 B . 
The canonical identications of K M and M K with M are comodules isomorphims for any comodule M . In particular, B is a comodule over itself with the coaction δ. Hence, for any n N, B n is a comodule over B, with the coaction m 1,3,...,2n¡1,24...2n ¥ δ n , where m 1,3,...,2n¡1,24...2n : 4 B 2n ÝÑ B n b 1 . . . b 2n 1 ÝÑ b 1 b 3 . . . b 2n¡1 b 2 b 4 . . . b 2n . Note that m : BB ÝÑ B is always a comodule morphism, as well as the unit map ν B : K ÝÑ B, which sends x K to x1 B . A double bialgebra is given by a coproduct ∆ on B, making pB, m, ∆q a bialgebra in the category of comodules over pB, m, δq with the coaction δ. In more details: Denition 1.1. A double bialgebra is a family pB, m, ∆, δq such that: pB, m, δq is a bialgebra. Its counit is denoted by ϵ δ . pB, m, ∆q is a bialgebra. Its counit is denoted by ε ∆ . ∆ : B ÝÑ B B is a comodule morphism: p∆ Id B q ¥ δ m 1,3,24 ¥ pδ δq ¥ ∆. ε ∆ : B ÝÑ K is a comodule morphism: pε ∆ Idq ¥ δ ν B ¥ ε ∆ . Remark 1.1. Let pB, m, ∆, δq be a double bialgebra. Then, as pB, m, δq is a bialgebra, δ ¥ m m 13,24 ¥ pδ δq pm Idq ¥ m 1,3,24 ¥ pδ δq, with the obvious notation m 13,24 . Therefore, m is a comodule morphism from B B to B. Moreover, as δp1 B q 1 B 1 B , the map ν B : K ÝÑ B is a comodule morphism. Example 1.1. The algebra KrXs is a double bialgebra, with the two multiplicative coproducts dened by ∆pXq X 1 1 X, δpXq X X. In other terms, identifying KrXs KrXs with KrX, Y s through the algebra map 4 KrXs KrXs ÝÑ KrX, Y s P pXq QpXq ÝÑ P pXqQpY q, for any P KrXs, ∆pP pXqq P pX Y q, δpP pXqq P pXY q. The counit ε ∆ sends P KrXs to P p0q and the counit ϵ δ sends it to P p1q. The example of graphs We refer to [START_REF] Harary | Graph theory[END_REF] for classical denitions and notations on graphs. In the context of this article, a graph will be a pair G pV pGq, EpGqq, where V pGq is a nite set (maybe empty), called the set of vertices, and EpGq a sets of 2-elements sets of elements of V pGq, called the set of edges of G. The degree of G is the cardinality of V pGq. If G and H are two graphs, an isomorphism from G to H is a bijection f : V pGq ÝÑ V pHq such that for any x $ y V pGq, tx, yu EpGq if, and only if, tfpxq, f pyqu EpHq. We shall denote by G the set of isoclasses of graphs, and for any n N, by Gpnq the set of isoclasses of graphs of degree n. The vector space generated by G will be denoted by H G . Example 1.2. Gp0q t1u, Gp1q t u, Gp2q t , u, Gp3q t , , , u, Gp4q t , , , , , , , , , , u. If G and H are two graphs, their disjoint union is the graph GH dened by V pGHq V pGq V pHq, EpGHq EpGq EpHq. This induces a commutative and associative product m on H G , whose units is the empty graph 1. Let G be a graph and I V pGq. The subgraph G |I is dened by V pG |I q I, EpG |I q ttx, yu EpGq | x, y Iu. This notion induces a commutative and coassociative coproduct ∆ on H G given by dG G, ∆pGq IV pGq G |I G |V pGqzI . Its counit ε ∆ is given by dG G, ε ∆ pGq δ G,1 . Example 1.3. ∆p q 1 1 , ∆p q 1 1 2 , ∆p q 1 1 3 3 , ∆p q 1 1 2 2 , ∆p q 1 1 4 6 4 , ∆p q 1 1 2 2 4 2 2 , ∆p q 1 1 2 2 2 2 2 , ∆p q 1 1 4 4 2 4 , ∆p q 1 1 3 3 3 3 , ∆p q 1 1 2 2 2 2 2 2 . Let G be a graph and let be an equivalence relation on V pGq. We dene the contracted graph G{ by V pG{ q V pGq{ , EpG{ q ttπ pxq, π pyqu | tx, yu EpGq, π pxq $ π pyqu, where π : V pGq ÝÑ V pGq{ is the canonical surjection. δp q , δp q , δp q 3 , δp q 2 , δp q 6 p6 4 q, δp q p4 q p2 2 2 q, δp q p 3 q p 2 q, δp q 4 p2 4 q, δp q 3 3 , δp q 3 p 2 q. Proposition 1.2. 
[START_REF]Chromatic polynomials and bialgebras of graphs[END_REF] pH G , m, ∆, δq is a double bialgebra. Let pV, ¤q be a commutative algebra (not necessarily unitary). The quasishue bialgebra associated to V is pTpV q, , ∆q, where dv 1 , . . . , v k l V, v 1 . . . v k v k 1 . . . v k l σQShpk,lq ¤ ¥ ¤ ¹ σpiq1 v i . . . ¤ ¥ ¤ ¹ σpiqmaxpσq v i , where the symbol ¤ ¹ means that the products are taken in pV, ¤q. For example, if v 1 , v 2 , v 3 , v 4 V , v 1 v 2 v 3 v 4 v 1 v 2 v 3 v 4 v 2 v 1 v 3 v 4 v 2 v 3 v 1 v 4 v 2 v 3 v 4 v 1 pv 1 ¤ v 2 qv 3 v 4 v 2 pv 1 ¤ v 3 qv 4 v 2 v 3 pv 1 ¤ v 4 q, v 1 v 2 v 3 v 4 v 1 v 2 v 3 v 4 v 1 v 3 v 2 v 4 v 1 v 3 v 4 v 2 v 3 v 1 v 2 v 4 v 3 v 1 v 4 v 2 v 3 v 4 v 1 v 2 pv 1 ¤ v 3 qv 2 v 4 pv 1 ¤ v 3 qv 2 v 4 v 3 pv 1 ¤ v 4 qv 2 v 1 pv 2 ¤ v 3 qv 4 v 1 v 3 pv 2 ¤ v 4 q v 3 v 1 pv 2 ¤ v 4 q pv 1 ¤ v 3 qpv 2 ¤ v 4 q. The coproduct is the deconcatenation coproduct: dv 1 , . . . , v n V, ∆pv 1 . . . v n q n ķ0 v 1 . . . v k v k 1 . . . v n . In the particular case where ¤ 0, we obtain the quasishue algebra pTpV q, ¡,∆q. When pV, ¤, δ V q is a commutative (not necessarily unitary) bialgebra, then pTpV q, , ∆q inherits a second coproduct δ: dv 1 , . . . , v n V, δpv 1 . . . v n q 1¤i 1 ... i k n £ ¤ ¹ 0 i¤i 1 v I i . . . £ ¤ ¹ i k i¤n v I i v P 1 . . . v P i 1 . . . v P i k 1 . . . v P n , with Sweedler's notation δ V pvq v I v P for any v V . For example, if v 1 , v 2 , v 3 V , δpv 1 q v I 1 v P 1 , δpv 1 v 2 q v I 1 v I 2 v P 1 v P 2 v I 1 ¤ v I 2 v P 1 v P 2 , δpv 1 v 2 v 3 q v I 1 v I 2 v I 3 v P 1 v P 2 v P 3 pv I 1 ¤ v I 2 qv I 3 v P 1 v P 2 v P 3 v I 1 pv I 2 ¤ v I 3 q v P 1 v P 2 v P 3 pv I 1 ¤ v I 2 ¤ v I 3 q v P 1 v P 2 v P 3 . Proposition 1.3. If pV, ¤, δ V q is a commutative (not necessarily unitary) bialgebra, then pTpV q, , ∆, δq is a double bialgebra. Proof. It is quite well-known that pTpV q, , ∆q is a bialgebra [START_REF] Michael | Quasi-shue products[END_REF][START_REF]Quasi-shue algebras and applications[END_REF]. We shall use the following notation: for any w v 1 . . . v n V n , with n ¥ 1, |w| v 1 ¤ . . . ¤ v n , w I w P v I 1 . . . v I n v P 1 . . . v P n , where we used Sweedler's notation δ V pvq v I v P for any v V . Let w V n , with n ¥ 1. Second step. Let us prove that for any x V k , y V l , 13,24 ¥ pδ δqpx yq δ ¥ px yq. Then We proceed by induction on n k l. If k 0, we can assume that x 1 and then 13,24 ¥ pδ δqp1 yq δpyq δ ¥ px yq. The result also holds if l 0: these observations give the cases n 0 and n 1. Let us now assume that k, l ¥ 1 and the result at all ranks n. p∆ Idq ¥ 13,24 ¥ pδ δqpx yq 14,25,36 ¥ p∆ Id ∆ Idq ¥ pδ δqpx yq 14,25,36 ¥ 1,3,24,5,7,68 ¥ pδ δ δ δq ¥ p∆ ∆qpx yq 15,37,2468 ¥ pδ δ δ δq ¥ p∆ ∆qpx yq, whereas, with Sweedler's notation δpzq z p1q z p2q for any z T pV q, p∆ Idq ¥ δ ¥ px yq 1,3,24 ¥ pδ δq ¥ ∆ ¥ px yq 1,3,24 ¥ pδ δq ¥ 13,24 ¥ p∆ ∆qpx yq 1,3,24 ¥ pδ δq ¥ 13,24 p∆ ∆px yq ¡ x 1 y 1 ¡ 1 x 1 yq px yq p1q 1 px yq p2q 1 px yq p1q px yq p2q 1,3,24 ¥ 15,24,37,68 ¥ pδ δ δ δq p∆ ∆px yq ¡ x 1 y 1 ¡ 1 x 1 yq px yq p1q 1 px yq p2q 1 px yq p1q px yq p2q 15,37,2468 ¥ pδ δ δ δq ¥ p∆ ∆qpx yq ¡ x p1q y p1q 1 x p2q y p2q ¡ 1 x p1q y p1q x p2q y p2q px yq p1q 1 px yq p2q 1 px yq p1q px yq p2q p∆ Idq ¥ 13,24 ¥ pδ δqpx yq ¡ x p1q y p1q 1 x p2q y p2q ¡ 1 x p1q y p1q x p2q y p2q px yq p1q 1 px yq p2q 1 px yq p1q px yq p2q . We use the induction hypothesis for the fourth equality. We obtain that p ∆ Id Idq ¥ δ ¥ px yq p ∆ Idq ¥ 13,24 ¥ pδ δqpx yq, so δ ¥ px yq ¡ 13,24 ¥ pδ δqpx yq V T pV q. 
Let π be the canonical projection from T pV q onto V . For any w V n , with n ¥ 1, pπ Idq ¥ δpwq |w I | w P . Hence, as V is a commutative bialgebra, pπ Idq ¥ δ ¥ px yq |px yq I | px yq P |x I | ¤ |y I | px P y P q pπ Idq ¥ 13,24 ¥ δpx yq. We obtain that δ ¥ px yq 13,24 ¥ pδ δqpx yq. Third step. Let us prove that pId δq ¥ δpxq pδ Idq ¥ δpxq for any x V n by induction on n. It is obvious if n 0, taking then x 1. Let us assume the result at all ranks n. The x p2q © P pπ Idq ¥ pId δq ¥ δpxq. rst step implies that p ∆ Idq ¥ δ 1,3,24 ¥ pδ δq ¥ ∆, so p ∆ Id Idq ¥ pδ Idq ¥ δpxq 1,3,24,5 ¥ pδ δ Idq ¥ p ∆ Idq ¥ δpxq 1,3,24,5 ¥ pδ δ Idq ¥ 1, Finally, pδ Idq ¥ δpxq pId δq ¥ δpxq. Final step. It is immediate that ε ∆ is a comodule morphism. Let us prove now that δ has a counit. We put, for any v 1 , . . . , v n V , with n ¥ 1, ϵ δ pv 1 . . . v n q 5 ϵ V pv 1 q if n 1, 0 otherwise. Then, if w v 1 . . . v n , pϵ δ Idq ¥ δpwq ϵ δ p|w I |qw P 0 ϵ V pv I 1 ¤ . . . ¤ v I n qv P 1 . . . v P n ϵ V pv I 1 q . . . ϵ V pv I n qv P 1 . . . v P n v 1 . . . v n , whereas pId ϵ δ q ¥ δpwq v I 1 . . . v I n ϵ V pv P 1 ¤ . . . ¤ v P n q 0 v I 1 . . . v I n ϵ V pv P Taking Ω pN ¡0 , q, we recover the double bialgebra of quasisymmetric functions QSym, A basis of QSym is given by words in strictly positive integers, which are called compositions. The second coproduct δ is often called the internal coproduct, and is dual of the Kronecker product of noncommutative symmetric functions. Characters Notations 1.2. Let pB, m, ∆q be a bialgebra. B ¦ inherits an algebra structure, with the convolution product ¦ induced by ∆: dλ, µ B ¦ , λ ¦ µ pλ µq ¥ ∆. The unit is ε ∆ . The set of the characters of B, that is to say algebra morphisms from B to K, is denoted by CharpBq. It is a monoid for the convolution product ¦. In the case of a double bialgebra pB, m, ∆, δq, B ¦ inherits a second convolution product, denoted by and coming from δ: dλ, µ B ¦ , λ µ pλ µq ¥ δ. The unit is ϵ δ . Moreover, CharpBq is also a monoid for the convolution product . The space of innitesimal characters of B, that is to say ε ∆ -derivations from B to K, is denoted by InfCharpBq. In other words, a linear map λ : B ÝÑ K is an innitesimal character of B if for any x, y B, λpxyq ε ∆ pxqλpyq λpxqε ∆ pyq. In other terms, for any λ B ¦ , λ InfCharpBq if and only if λpK1 B B 2 q p0q, where B Kerpε ∆ q is the augmentation ideal of B. If pB, m, ∆q is a bialgebra, we can consider the transpose m ¦ : B ¦ ÝÑ pB Bq ¦ of the product m. Note that B ¦ B ¦ is considered as a subspace of pB Bq ¦ , through the canonical Proof. Immediate. injection 6 8 7 B ¦ B ¦ ÝÑ pB Bq ¦ λ µ ÝÑ 4 B B ÝÑ K x y ÝÑ λpxqµpyq. (This is not an isomorphism except if B is nite-dimensional). As m is a coalgebra morphism, dually m ¦ : B ¦ ÝÑ Lemma 1.5. Let pB, m, ∆, δq be a double bialgebra. For any µ B ¦ , ε ∆ µ µp1 B qε ∆ . Proof. As ε ∆ is a comodule morphism, ε ∆ µ pε ∆ µq ¥ δ µ ¥ pε ∆ Idq ¥ δ µ ¥ ν B ¥ ε ∆ µp1 B qε ∆ . Proposition 1.6. Let pB, m, ∆, δq be a double bialgebra. Then InfCharpBq B ¦ InfCharpBq. Proof. Let λ InfCharpBq and µ B ¦ . For any x, y B, using Sweedler's notation δpzq °zI z P for δ, λ µpxyq ¸λppxyq I qµppxyq P q ¸¸λpx I y I qµpx P y P q ¸¸λpx I qε ∆ py I qµpx P y P q ¸¸ε ∆ px I qλpy I qµpx P y P q ¸λpx I qε ∆ pyqµpx P 1 B q ¸ε∆ pxqλpy I qµp1 B y P q λ µpxqε ∆ pyq ε ∆ pxqλ µpyq. Therefore, λ µ InfCharpBq. 2 General results Compatibility of the antipode with the coaction Proposition 2.1. Let pB, m, ∆, δq be a double bialgebra, such that pB, m, ∆q is a Hopf algebra of antipode S. 
Then S is a comodule morphism: δ ¥ S pS Idq ¥ δ. Proof. We consider the space HompB, BBq of linear maps from B to BB, with the convolution product ¦ dened by f ¦ g m 13,24 ¥ pf gq ¥ ∆. The unit ι sends any b B to ε ∆ pbq1 B 1 B . Let us show that δ has an inverse in this algebra. ppS Idq ¥ δq ¦ δ m 13,24 ¥ pS Id Id Idq ¥ pδ δq ¥ ∆ pm Idq ¥ pS Id Idq ¥ m 1,3,24 ¥ pδ δq ¥ ∆ pm Idq ¥ pS Id Idq ¥ p∆ Idq ¥ δ ppm ¥ pS Idq ¥ ∆q Idq ¥ δ ppν B ¥ ε ∆ q Idq ¥ δ ι, so pS Idq ¥ δ is a left inverse of δ for the convolution product ¦. δ ¦ pδ Sq m 13,24 ¥ pδ δq ¥ pId Sq ¥ ∆ δ ¥ m ¥ pId Sq ¥ ∆ δ ¥ ν B ¥ ε ∆ ι, so δ ¥ S is a right inverse of δ for the convolution product ¦. As ¦ is associative, δ is invertible and its inverse is pS Idq ¥ δ δ ¥ S. From linear forms to endomorphisms Notations 2.1. Let pB, m, ∆, δq be a double bialgebra and let pA, m A q be an algebra. Then the space HompB, Aq of linear maps from B to A is given two convolution products ¦ and : for any f, g HompB, Aq, f ¦ g m A ¥ pf gq ¥ ∆, f g m A ¥ pf gq ¥ δ. The unit of ¦ is ν A ¥ ε ∆ whereas the unit of is ν A ¥ ϵ δ . Two particular examples are given by A B, which denes ¦ and for EndpBq, and A K, giving back the products ¦ and on B ¦ . Proposition 2.2. Let pB, m, ∆, δq be a double bialgebra. We consider the linear map Θ : 4 B ¦ ÝÑ EndpBq λ ÝÑ pλ Idq ¥ δ. For any λ, µ B ¦ , Θpλ ¦ µq Θpλq ¦ Θpµq, Θpλ µq Θpµq ¥ Θpλq. Moreover, Θpε ∆ q ν B ¥ ε ∆ and Θpϵ δ q Id B . The map Θ is injective, with a left inverse given by Θ I : 4 EndpBq ÝÑ B ¦ f ÝÑ ϵ δ ¥ f. Proof. Let λ, µ B ¦ . Θpλ ¦ µq pλ µ Idq ¥ p∆ Idq ¥ δ pλ µ Idq ¥ m 1,3,24 ¥ pδ δq ¥ ∆ m ¥ pλ Id µ Idq ¥ pδ δq ¥ ∆ m ¥ pΘpλq Θpµqq ¥ ∆ Θpλq ¦ Θpµq. Θpλ µq pλ µ Idq ¥ pδ Idq ¥ δ pλ µ Idq ¥ pId δq ¥ δ pµ Idq ¥ δ ¥ pλ Idq ¥ δ Θpµq ¥ Θpλq. By denition of the counit, Θpϵ δ q Id B . As ε ∆ is a comodule morphism, Θpε ∆ q ν B ¥ ε ∆ . Let λ B ¦ . Θ I ¥ Θpλq ϵ δ ¥ pλ Idq ¥ δ pλ ϵ δ q ¥ δ λ ϵ δ λ. So Θ I ¥ Θ Id B ¦. Corollary 2.3. Let pB, m, ∆, δq be a double bialgebra. Then pB, m, ∆q is a Hopf algebra if, and only if, ϵ δ has an inverse in the algebra pB ¦ , ¦q. If this holds, the antipode of pB, m, ∆q is S pϵ ¦¡1 δ Idq ¥ δ. Proof. ùñ. If pB, m, ∆q is a Hopf algebra, denoting by S its antipode, the inverse of ϵ δ in pB ¦ , ¦q is ϵ δ ¥ S. ðù. If so, putting S Θpϵ ¦¡1 δ q, we obtain S ¦ Id B Θpϵ ¦¡1 δ q ¦ Θpϵ δ q Θpϵ ¦¡1 δ ¦ ϵ δ q Θpε ∆ q ν B ¥ ε ∆ . Similarly, Id B ¦ S ν B ¥ ε ∆ , so pB, m, pϵ δ ¦ ϵ ¦¡1 δ q ϵ ¦¡1 δ ε ∆ ϵ ¦¡1 δ ϵ ¦¡1 δ p1 B qε ∆ ε ∆ , and pϵ δ ¦ ϵ ¦¡1 δ q ϵ ¦¡1 δ pϵ δ ϵ ¦¡1 δ q ¦ pϵ ¦¡1 δ ϵ ¦¡1 δ q ϵ ¦¡1 δ ¦ pϵ ¦¡1 δ ϵ ¦¡1 δ q. Hence, ϵ ¦¡1 δ ¦ pϵ ¦¡1 δ ϵ ¦¡1 δ q ε ∆ , which implies that ϵ ¦¡1 δ ϵ ¦¡1 δ ϵ δ . Applying Θ, we obtain that Θpϵ ¦¡1 δ ϵ ¦¡1 δ q Θpϵ ¦¡1 δ q ¥ Θpϵ ¦¡1 δ q S ¥ S Θpϵ δ q Id, so S is involutive and therefore, surjective. Actions of the groups of characters Proposition 2.5. Let pB, m, ∆, δq be a double bialgebra and V be a vector space. The following map denes a (right) action of the monoid pCharpBq, q on the space HompB, V q of linear maps from B to V : 4 HompB, V q ¢ CharpBq ÝÑ HompB, V q pf, λq ÝÑ f øλpf λq ¥ δ. Moreover: 1. If A is an algebra, λ CharpBq and f : B ÝÑ A is an algebra morphism, then f øλ is an algebra morphism. 2. If C is a coalgebra, λ CharpBq and f : B ÝÑ C is a coalgebra morphism, then f ø λ is a coalgebra morphism. 3. If B I is a bialgebra, λ CharpBq and f : B ÝÑ B I is a bialgebra morphism, then f øλ is a bialgebra morphism. Proof. The fact that this is an action comes from the coassociativity of δ. 1. 
By composition, pId λq ¥ δ is an algebra morphism. 2. We obtain, as ∆ is a comodule morphism, ∆ ¥ pf øλq∆¥pf λq ¥ δ pf f λq ¥ p∆ Idq ¥ δ pf f λq ¥ m 1,3,24 ¥ pδ δq ¥ ∆ pf λ f λq ¥ pδ δq ¥ ∆ ppf øλqpf øλqq¥∆. As ε ∆ is a comodule morphism, ε ∆ ¥ pf øλqppε ∆ ¥ f q λq ¥ δ λ ¥ ν B ¥ ε ∆ ε ∆ . Therefore, f øλ is a coalgebra morphism. 3. Direct consequence of 1. and 2. Remark 2.1. Consequently, if V is an algebra (respectively a bialgebra or a coalgebra), then ø denes an action of the monoid pCharpAq, q on the set Hom a pB, V q (respectively Hom b pB, V q or Hom c pB, V q) of morphisms of algebras (respectively bialgebras or coalgebras), from B to V . Proposition 2.6. Let pB, m, ∆, δq be a double bialgebra, V and W be two spaces and f : B ÝÑ V , g : V ÝÑ W be two linear maps. Then pf ¥ gq øλf ¥ pg øλq. Proof. Indeed, pf ¥ gq øλppf ¥ gq λq ¥ δ f ¥ ppg Idq ¥ δq f ¥ pg øλq. Proposition 2.7. Let pA, m A q be an algebra. For any f, g HompB, Aq, for any λ CharpBq, pf ¦ gq øλpf øλq¦pg øλq. Proof. Indeed, pf ¦ gq øλm A ¥ pf g λq ¥ p∆ Idq ¥ δ m A ¥ pf g λq m 1,3,24 ¥ pδ δq ¥ ∆ m A ¥ pf λ g λq ¥ pδ δq ¥ ∆ m A ppf øλqpg øλqq¥∆ pf øλq¦pg øλq. Remark 2.2. In the particular case where V K, then ø. We obtain that for any λ 1 , λ 2 B ¦ , for any µ CharpBq, pλ 1 ¦ λ 2 q µ pλ 1 µq ¦ pλ 2 µq. So pCharpBq, q acts on pB, ¦q by algebra endomorphisms. By restriction, pCharpBq, q acts on pCharpBq, ¦q by monoid endomorphisms. 3 Connected double bialgebras 3.1 Reminders on connected bialgebras Notations 3.1. Let pB, m, ∆q be a bialgebra. We denote by B its augmentation ideal, that is to say the kernel of its counit ε ∆ . We dene a coassociative (non counitary) coproduct ∆ : B ÝÑ B B by dx B , ∆pxq ∆pxq ¡ x 1 ¡ 1 x. We may extend ∆ to B by putting ∆p1 B q 0. The iterated reduced coproducts ∆pnq : B ÝÑ B pn 1q are inductively dened by ∆pnq 5 Id B if n 0, ¡ ∆pn¡1q Id © ¥ ∆ if n ¥ 1. In particular, ∆p1q ∆. Recall that a bialgebra pB, m, ∆q is connected if its coradical is reduced to K. This is equivalent to the fact that ∆ is locally nilpotent: for any x B , there exists n N such that ∆pnq pxq 0. In this case, we obtain a ltration of B dened by B ¤n Ker ¡ ∆pnq © K1 B . This is called the coradical ltration. In particular, B ¤0 K1 B and B ¤1 B Kerp ∆q PrimpBq, the space of primitive elements of B. The degree associated to this ltration is denoted by deg p dx B, deg p pxq minpn N, x B ¤n q. The coassociativity of ∆ implies that for all n N, ∆pB ¤n q n ķ0 B ¤k B ¤n¡k . Combined with the connectivity of B, this gives that for any n N, ∆pB ¤n q n¡1 ķ1 B ¤k B ¤n¡k . The compatibility of ∆ and m implies that for any k, l N, mpB ¤k B ¤l q B ¤k l . Conversely, if B has an increasing ltration pB ¤n q nN (which may not be the coradical ltration), such that for any k, l, n N, mpB ¤k B ¤l q B ¤k l , ∆pB ¤n q n ķ0 B ¤k B ¤n¡k , and such that B ¤0 K1 B , then B is connected, as the coradical of B is necessarily included in B ¤0 . In particular, if k ¡ |V pGq|, ∆pk¡1q pGq 0, so H G is connected. Let V be a vector space. For any map f : B ÝÑ V , we dene its valuation by valpf q mintn N, f pB ¤n q $ p0qu, with the convention that valp0q V. Note that for any f, g HompB, V q, valpf gq ¥ minpvalpf q, valpgqq. We therefore obtain a distance on HompB, V q dened by dpf, gq 2 ¡valpf¡gq , with the convention 2 ¡V 0. Note that for any f, g, h HompB, V q, dpf, hq ¤ maxpdpf, gq, dpg, hqq ¤ dpf, gq dpg, hq. Lemma 3.1. For any vector space V , pHompB, V q, dq is a complete metric space. Proof. Let pf n q nN be a Cauchy sequence of HompB, V q. 
For any n N, there exists N pnq such that if k, l ¥ N pnq, then pf k q |B¤n pg k q |B¤n . Let us x for any n, a complement B n of B ¤n¡1 in B ¤n . Then for any n N, B ¤n n à k0 B k , and consequently B V à k0 B k . Let n N. We dene g pnq : B n ÝÑ V by g pnq pf N pnq q |Bn , and we consider the map g V à k0 g pkq . If k ¥ maxpN p0q, . . . , N pnqq, then pf k q |B¤n g |B¤n , so dpf k , gq ¤ 2 ¡n . Hence, pf n q nN converges to g. Proposition 3.2. Let pB, m, ∆q be a connected bialgebra and let pA, m A q be an algebra. For any f, g HompB, Aq, valpf ¦ gq ¥ valpf q valpgq. Consequently, ¦ : HompB B, Aq ÝÑ HompB, Aq is continuous. Proof. Let n valpf q valpgq. Then f ¦ gpB ¤n q m A ¥ pf gq ¥ ∆pB ¤n q n ķ0 m A pfpB ¤k q gpB ¤n¡k qq p0q, as either k valpf q or n ¡ k valpgq. Hence, valpf ¦ gq ¥ valpf q valpgq. Consequently, if A is an algebra and f : B ÝÑ A is a map such that valpf q ¥ 1, for any n N, valpf ¦n q ¥ n. Let pa n q nN be a sequence of scalars. For any n, p N, val £ n p ķn a k f ¦k ¥ minpvalpa k f ¦k q, k tn, . . . , n puq ¥ n. Hence, as pHompB, Aq, dq is complete, the series °ak f ¦k converge in HompB, Aq. We obtain: Proposition 3.3. Let pB, m, ∆q be a connected bialgebra and let A be an algebra. For any f HompB, Aq such that f p1 B q 0, we obtain a continous algebra morphism ev f : 6 9 8 9 7 KrrT ss ÝÑ HompB, Aq V ķ0 a k T k ÝÑ V ķ0 a k f ¦k . Moreover, for any P pTq V ķ0 a k T k KrrT ss, ev f pPpTqqp1 B q gp0q1 A and for any x B , ev f pPpTqqpxq valpxq ķ1 a k m pk¡1q A ¥ f k ¥ ∆pk¡1q pxq. Proof. As f p1 B q 0, valpf q ¥ 1: ev f is well-dened. For any k, l N, ev f pT k T l q ev f pT k l q f ¦pk lq f ¦k ¦ f ¦l ev f pT k q ¦ ev f pT l q. By linearity, if P pTq, QpT q KrXs, ev f pPpTqQpTqq ev f pPpTqq ¦ ev f pQpTqq. By continuity and density of KrT s in KrrT ss, this is still true if P pTq, QpT q KrrT ss. As f p1 B q 0, for any x B , f ¦k pxq m pk¡1q A ¥ f k ¥ ∆ pk¡1q pxq m pk¡1q A ¥ f k ¥ ∆pk¡1q pxq, which implies the announced formula for ev f pPpTqqpxq. Notations 3.2. We shall write, for any P pTq KrrT ss and f HompB, Aq such that f p1 B q 0, P pfq ev f pPpTqq. Note that for any P pTq, QpT q KrrT ss, P Qpf q P pfq ¦ Qpf q. In particular, taking A B and ρ the canonical projection on B which vanishes on K1 B , we can consider S ev ρ ¢ 1 1 X 1 1 ρ V ķ0 p¡1q k ρ ¦k . Then S is the inverse of ν B ¥ ε ∆ ρ Id for the convolution product: we proved that pB, m, ∆q is a Hopf algebra and recovered Takeuchi's formula [START_REF] Takeuchi | Free Hopf algebras generated by coalgebras[END_REF]: for any x B , Spxq V ķ1 p¡1q k m pk¡1q ¥ ∆pk¡1q pxq. Lemma 3.4. Let pB, m, ∆q be a connected bialgebra and let A be an algebra. For any f HompB, Aq such that f pA B q 0 and for any formal series P, Q KrrT ss, such that Qp0q 0, ev f pP ¥ QpT qq ev ev f pQpTqq pPpTqq. In other words, pP ¥ Qqpf q P pQpfqq. Proof. As Q has no constant term, if valpf q ¥ 1, then valpev f pQpTqqq ¥ 1 and ev ev f pQpTqq exists. We start with the particular case P X n , for a certain n N. As ev f is an algebra morphism, ev f pX n ¥ Qq ev f pQ n q ev f pQq ¦n ev f pQq pX n q. By linearity of ev f pP ¥Qq and of ev ev f pQq pPq, the equality is still true if P KrXs. By continuity of P ÝÑ ev f pP ¥ Qq and of P ÝÑ ev ev f pQq pPq, as KrT s is dense in KrrT ss, this remains true for any P KrrT ss. Applications to shue and quasishue bialgebras Proposition 3.5 (Universal property of shue bialgebras). Let pB, m, ∆q be a connected bialgebra, V be a vector space, ϕ : B ÝÑ V be a linear map such that ϕp1 B q 0. 
We consider the shue bialgebra pTpV q, ¡,∆q or the quasishue bialgebra pTpV q, , ∆q if V is a (non necessarily unitary) algebra. We equip the tensor coalgebra T pV q with the concatenation product, and the associated convolution on hompB, T pV qq is denoted by ¦. Then Φ 1 1 ¡ ϕ is the unique coalgebra map making the following diagram commuting: pB, ∆q ϕ & & N N N N N N N N N N N N Φ / / ppTpV q, ∆q π V where π is the canonical projection onto V . Moreover: 1. Φ is injective if, and only if, ϕ |PrimpBq is injective. 2. Φ is a bialgebra morphism from pB, m, ∆q to pTpV q, ¡,∆q if, and only if, ϕpB 2 q 0, where B is the augmentation ideal of B. 3. If pV, ¤q is an algebra (not necessarily unitary), then Φ is a bialgebra morphism from pB, m, ∆q to pTpV q, , ∆q if, and only if, for any x, y B , ϕpxyq ϕpxq ¤ ϕpyq. Proof. Firstly, observe that as ϕp1 B q 0, valpϕq ¥ 1 and Φ exists. Let us prove that Φ is a coalgebra morphism. Firstly, as Φp1 B q 1 is a group-like, ∆ ¥ Φp1 B q pΦ Φq ¥ ∆p1 B q 1 1. Let x B . ∆ ¥Φpxq ∆ £ V ķ1 f k ¥ ∆pk¡1q pxq V ķ1 k¡1 i1 ¡ f i f pk¡iq © ¥ ∆pk¡1q pxq V ķ1 k¡1 i1 ¡ f i f pk¡iq © ¥ ¡ ∆pi¡1q ∆pk¡i¡1q © ¥ ∆pxq V i,j1 f i f j ¨¥ ¡ ∆pi¡1q ∆pj¡1q © ¥ ∆pxq pΦ Φq ¥ ∆pxq, so Φ is indeed a coalgebra morphism. Moreover, for any x B , ϖ ¥ Φpxq ϕpxq 0 ϕpxq. As π ¥ Φp1 B q πp1q 0 ϕp1 B q, π ¥ Φ ϕ. Let Ψ : pB, ∆q ÝÑ pTpV q, ∆q be another coalgebra morphism, such that π ¥ Ψ π ¥ Φ ϕ. As 1 is the unique group-like element of T pV q, Φp1 B q Ψp1 B q 1. Let us assume that Φ $ Ψ. There exists x B , such that Φpxq $ Ψpxq. Let us choose such an x, with deg p pxq n minimal. As ∆pxq B 2 ¤n¡1 , by denition of n, ∆ ¥Φpxq pΦ Φq ¥ ∆pxq pΨ Ψq ¥ ∆pxq ∆ ¥Ψpxq, so Φpxq ¡ Ψpxq Kerp ∆q V . Hence, Φpxq ¡ Ψpxq π ¥ Φpxq ¡ π ¥ Ψpxq 0: this is a contradiction, so Φ Ψ. pΦ Φq ¥ ∆pxq ∆ ¥Φpxq 0. By denition of n, Φ |B ¤n¡1 is injective. As ∆pxq B ¤n¡1 B ¤n¡1 , we obtain that ∆pxq 0, so x PrimpBq. Then Φpxq ϕpxq 0, so ϕ |PrimpBq is not injective. morphism from pB , mq to pV, ¤q. ðù. Let us consider Φ 1 ¥pΦΦq and Φ 2 Φ¥m. As m and are coalgebra morphisms, by composition both Φ 1 and Φ 2 are coalgebra morphisms. In order to prove that Φ 1 Φ 2 , it is enough to prove that π ¥ Φ 1 π ¥ Φ 2 . Let x, y B . π ¥ Φ 1 p1 B yq πp1 Φpyqq π ¥ Φpyq ϕpyq, π ¥ Φ 2 p1 B yq π ¥ Φ 2 pyq ϕpyq, so π ¥ Φ 1 p1 B yq π ¥ Φ 2 p1 B yq. Similarly, π ¥ Φ 1 px 1 B q π ¥ Φ 2 px 1 B q. π ¥ Φ 1 px yq πpΦpxq Φpyqq π ¥ Φpxq ¤ π ¥ Φpyq ϕpxq ¤ ϕpyq, π ¥ Φ 2 px yq π ¥ Φpxyq ϕpxyq. By hypothesis, π ¥ Φ 1 px yq π ¥ Φ 2 px yq, which gives π ¥ Φ 1 π ¥ Φ 2 and nally Φ 1 Φ 2 : Φ is an algebra morphism. 2. From the second point, with ¤ 0. Innitesimal characters and characters Proposition 3.6. Let pB, m, ∆q be a connected bialgebra. The following maps are bijections, inverse one from the other: exp : CharpBq ÝÑ InfCharpBq λ ÝÑ lnp1 pλ ¡ ε ∆ qq lnpλq V ķ0 p¡1q k 1 k pλ ¡ ε ∆ q ¦k . Proof. We consider the two subsets B ¦ 0 tλ B ¦ | λp1 B q 0u, B ¦ 1 tλ B ¦ | λp1 B q 1u, and the maps exp : 4 B ¦ 0 ÝÑ B ¦ 1 λ ÝÑ e λ ev λ pexppTqq, ln : 4 B ¦ 1 ÝÑ B ¦ 0 λ ÝÑ lnpλq ev λ¡ε ∆ plnp1 T qq. If λ B ¦ 0 , then valpλq ¥ 1, so ev λ pexppTqq is well-dened. Moreover, for any λ B ¦ 0 , exppλqp1 B q 1, so exp is well-dened. If λ B ¦ 1 , then pλ ¡ ε ∆ qp1 B q 0, so ev λ¡ε ∆ plnp1 T qq is well-dened. Moreover, for any λ B ¦ 0 , lnpλqp1 B q 0, so ln is well-dened. By Lemma 3.4, for any λ B ¦ 0 , ln ¥ exppλq ev ev λ pexppTqq¡ε ∆ plnp1 T qq ev ev λ pexppTq¡1q plnp1 T qq ev λ plnp1 T q ¥ pexppTq ¡ 1qq ev λ pTq λ. 
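As a clarifying aside (added here, not part of the original proof): the chain of equalities just above is Lemma 3.4 applied to the classical identity of formal series
\[
\ln(1+T)\circ\bigl(e^{T}-1\bigr)=T \quad\text{in } K[[T]],
\]
which can be checked order by order:
\[
e^{T}-1=T+\tfrac{T^{2}}{2}+\tfrac{T^{3}}{6}+\cdots,\qquad
\ln\bigl(1+(e^{T}-1)\bigr)=\Bigl(T+\tfrac{T^{2}}{2}+\cdots\Bigr)-\tfrac12\Bigl(T+\cdots\Bigr)^{2}+\cdots=T .
\]
The next step applies the symmetric identity $\exp(T)\circ\ln(1+T)=1+T$ in exactly the same way.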
Similarly, if λ B ¦ 1 , exp ¥ lnpλq ev ev λ¡ε ∆ plnp1 T qq pexppTqq ev λ¡ε ∆ pexppTq ¥ lnp1 T qq ev λ¡ε ∆ p1 T q ε ∆ λ ¡ ε ∆ λ, so exp and ln are bijections, inverse one from the other. Let λ InfCharpBq. Then λp1 B q 0, so InfCharpBq B ¦ 0 . By denition, CharpBq B ¦ 1 . It remains to prove that for any λ B ¦ 0 , exppλq CharpBq if, and only if, λ InfCharpBq. We shall use the transpose m ¦ of the product. As m is a coalgebra morphism, dually, m ¦ is an algebra morphism for the product ¦. Let f B ¦ , of valuation equal to N . Let n N and let x y pB Bq ¤n . We can assume that x B ¤k and y B ¤n¡k , with 0 ¤ k ¤ n. m ¦ pfqpx yq f pxyq f pB ¤k B ¤n¡k q f pB ¤n q p0q, so valpm ¦ pfqq ¥ N : we deduce that m ¦ is continuous. Hence, for any formal series P pTq KrrT ss, m ¦ pPpλqq m ¦ pev λ pPpTqqq ev m ¦ pλq pPpTqq P pm ¦ pλqq. Let us assume that λ InfCharpBq. Then m ¦ pexppλqq m ¦ pe λ q e m ¦ pλq e λλ e pλε ∆ q¦pε ∆ λq e λε ∆ ¦ e ε ∆ λ pe λ ε ∆ q ¦ pε ∆ e λ q e λ e λ exppλq exppλq, as ε ∆ λ and λ ε ∆ commute for the product ¦, ε ∆ being its unit. So exppλq is indeed in CharpBq. Let us assume that exppλq µ CharpBq. Then m ¦ pλq lnp1 m ¦ pµ ¡ ε ∆ qq lnp1 µ µ ¡ ε ∆ ε ∆ q lnp1 pµ ¡ ε ∆ q ε ∆ ε ∆ pµ ¡ ε ∆ q pµ ¡ ε ∆ q pµ ¡ ε ∆ qq lnp1 pµ ¡ ε ∆ q ε ∆ q lnp1 ε ∆ pµ ¡ ε ∆ qq lnp1 µ ¡ ε ∆ q ε ∆ ε ∆ lnp1 µ ¡ ε ∆ q lnpµq ε ∆ ε ∆ lnpµq λ ε ∆ ε ∆ λ, so λ InfCharpBq. Lemma 3.7. Let pB, m, ∆, δq be a connected double bialgebra. For any n N, ∆pB ¤n q B ¤n B. Proof. For any x B, we put ρ L pxq x 1 B and ρ R pxq 1 B x. Then, putting δpxq x I x P , m 1,3,24 ¥ pδ δq ¥ ρ L pxq m 1,3,24 px I x P 1 B 1 B q x I 1 B x P pρ L Idq ¥ δpxq, so ρ L : B ÝÑ B B is a comodule morphism. Similarly, ρ R is a comodule morphism. Hence, ∆ ¡ ρ L ¡ ρ R is a comodule morphism. For any x B , ∆pxq ∆pxq ¡ ρ L pxq ¡ ρ R pxq, so ∆ : B ÝÑ B B is a comodule morphism. By composition, for any n N, ∆pnq is a comodule morphism. So its kernel is a sub-comodule of B: for any n N, ∆ ¡ Ker ¡ ∆pnq ©© Ker ¡ ∆pnq © B. The result then follows immediately. Proposition 3.8. Let pB, m, ∆, δq be a connected double bialgebra, A an algebra, f : B ÝÑ A a map such that f p1 B q 0 and λ CharpBq. For any P pTq KrrT ss, P pfq øλPpf øλq. Proof. Firstly, f øλp1 B q f p1 B qλp1 B q 0, so ev f øλ pPpTqq is well-dened. Let us rst consider the case where P pTq T n , with n N. Then ev f pT n q øλpT ¦n q øλpT øλq ¦n ev f øλ pT n q. By linearity in P pTq, for any P pTq KrT s, the announced equality is satised. Let V be a vector space, f HompB, V q and let us denote by N its valuation. By Lemma 3.7, if n N , f øλpB ¤n q pf λq ¥ δpB ¤n q f pB ¤n q λpBq p0q, so valpf øλq¤val pfq. In other words, the following map is continuous: 4 HompB, V q ÝÑ HompB, V q f ÝÑ f øλ. Therefore, by density of KrT s in KrrT ss, the announced equality is true for any P KrrT ss. Polynomial invariants Theorem 3.9. Let pB, m, ∆q be a connected bialgebra and let λ B ¦ , such that λp1 B q 1. 1. There exists a unique coalgebra morphism Φ λ : pB, m, ∆q ÝÑ pKrXs, m, ∆q such that ϵ δ ¥ Φ λ λ. Moreover, Φ λ λ X is given by Φ λ p1 B q 1 and dx B , Φ λ pxq λ X pxq V ķ1 λ k ¥ ∆pk¡1q pxqH k pXq, where for any k N, H k pXq is the k-th Hilbert polynomial H k pXq XpX ¡ 1q . . . pX ¡ k 1q k! . 2. Φ λ is a bialgebra morphism from pB, m, ∆q to pKrXs, m, ∆q if and only if λ CharpBq. 3. Φ λ is a double bialgebra morphism from pB, m, ∆, δq to pKrXs, m, ∆, δq if and only if λ ϵ δ . Proof. 1. Existence. We extend the scalars to the eld KppXqq of fractions of KrrXss. Then KppXqq B is a double bialgebra over KppXqq. 
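Before the existence argument continues, recall (an explanatory aside, not part of the original text) that the Hilbert polynomials of Theorem 3.9 are precisely the coefficients of the generalized binomial series, which is what makes the element $\lambda^{X}$ introduced below behave well:
\[
(1+T)^{X}=\sum_{k\geqslant 0}\binom{X}{k}T^{k}=\sum_{k\geqslant 0}H_{k}(X)\,T^{k},
\qquad
H_{0}=1,\quad H_{1}(X)=X,\quad H_{2}(X)=\frac{X(X-1)}{2},\ \dots
\]
In particular $H_{k}(n)=\binom{n}{k}$ for $n\in\mathbb{N}$, and Vandermonde's identity $H_{k}(X+Y)=\sum_{i+j=k}H_{i}(X)H_{j}(Y)$ is the algebraic form of $(1+T)^{X+Y}=(1+T)^{X}(1+T)^{Y}$, the relation used below to prove that $\lambda^{X}$ is a coalgebra morphism.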
The map λ is extended as a KppXqq-linear map from KppXqq B to KppXqq, which we denote by λ. As λp1 B q ¡ ε ∆ p1 B q 1 ¡ 1 0, we can consider λ X ev λ¡ε ∆ pp1 T q X q V ķ0 pλ ¡ ε ∆ q k ¥ ∆ pk¡1q pxqH k pXq. Therefore, for any x B K B KppXqq B , λ X pxq V ķ1 λ k ¥ ∆pk¡1q pxqH k pXq V ķ1 λ k ¥ ∆pk¡1q pxqH k pXq KrXs. Hence, λ X |B λ X takes its values in KrXs. Identifying KrXs KrXs and KrX, Y s, as p1 T q X Y p1 T q X p1 T q Y , ∆ ¥ λ X λ X Y λ X ¦ λ Y λ X ¦ λ Y pλ X λ X q ¥ ∆, so λ X is a coalgebra morphism. Moreover, ϵ δ ¥ λ X pλ X q |X1 λ. Unicity. Let Λ : B ÝÑ K be a coalgebra morphism. We put ϵ δ ¥ Λ λ. We consider Λ as an element of B ¦ rrXss, putting ΛpXq V ņ0 f n X n , where for any n ¥ 0, for any x B, f n pxq is the coecient of X n in Λpxq. As ∆ ¥ Λ pΛ Λq, still identifying KrXs KrXs and KrX, Y s, in B ¦ rrX, Y ss, ΛpX Y q V ņ0 f n pX Y q n ∆ ¥ ΛpXq pΛpXq ΛpXqq ¥ ∆ ΛpXq ¦ ΛpY q. Derivating according to Y and taking Y 0, we obtain ΛpXq Λ I p0q ¦ ΛpXq. So ΛpXq C ¦ e Λ I p0qX , for a certain constant C B ¦ . As Λp0q ε ∆ ¥ Λ ε ∆ , Λp0q C ε ∆ , so ΛpXq e Λ I p0qX . We put µ e Λ I p0q B ¦ , then ΛpXq µ X . Moreover, µ ϵ δ ¥ ΛpXq λ, so nally Λ λ X . 2. ùñ. By composition, if λ X is an algebra morphism, then ϵ δ ¥ λ X λ is an algebra morphism, so λ is a character. ðù. Let us assume that λ is a character. We put µ lnpλq. Then µ is an innitesimal character, so Xµ is also an innitesimal character of KppXqq B. As λ X exppXµq, λ X is a character of KppXqq B, so its restriction to B is an algebra mophism from B to KrXs. 3. ùñ. If λ X is a double bialgebra morphism, then λ ϵ δ ¥ λ X ϵ δ . ðù. By the second point, as ϵ δ is a character, ϵ X δ is a bialgebra morphism from pB, m, ∆q to pKrXs, m, ∆q. We still identify KrXs KrXs and KrX, Y s. For any λ CharpBq, by Proposition 3.8, as ø for B ¦ , λ X λ X ¨¥ δ λ X λ Y pλ λ Y q X . In the particular case λ ϵ δ , unit of the product , pϵ X δ ϵ X δ q ¥ δ pϵ X δ q Y ϵ XY δ δ ¥ ϵ X δ . So ϵ X δ is a double bialgebra morphism. Using the exp and ln bijections, we obtain: Proposition 3.10. Let pB, m, ∆q be a connected bialgebra and let µ B ¦ , such that µp1 B q 0. 1. There exists a unique coalgebra morphism Ψ µ : pB, m, ∆q ÝÑ pKrXs, m, ∆q such that for any x B, Ψ µ pxq I p0q µpxq. Moreover, Ψ µ e µX is given on any x B by Ψ µ pxq e µX pxq V ķ1 µ k ¥ ∆pk¡1q pxq X k k! . ( 1 ) 2. Ψ µ is a bialgebra morphism from pB, m, ∆q to pKrXs, m, ∆q if and only if µ InfCharpBq. 3. Ψ µ is a double bialgebra morphism from pB, m, ∆, δq to pKrXs, m, ∆, δq if and only if µ lnpϵ δ q. Proof. All can be proved by taking λ exppµq and Ψ µ Φ exppµq . Let us now prove (1). Ψ µ ev exppµq¡ε ∆ pp1 T q X q ev µ ppe T q X q ev µ pe T X q e µX . Therefore, for any x B , as µp1 B q 0, Ψ µ pxq V ķ0 µ ¦k X k k! V ķ1 µ k ¥ ∆pk¡1q X k k! . Moreover, Ψ µ pxq I p0q µ 1 pxq 0 µpxq, so Ψ µ pxq I p0q µpxq. Corollary 3.11. Let pB, m, ∆, δq be a connected double bialgebra. There exists a unique double bialgebra morphism Φ from pB, m, ∆, δq to pKrXs, m, ∆, δq. For any x B , Φpxq V ķ1 ϵ k δ ¥ ∆pk¡1q pxqH k pXq. Moreover, for any λ CharpBq, the unique bialgebra morphism Φ λ from pB, m, ∆q to pKrXs, m, ∆q such that ϵ δ ¥ Φ λ λ is Φ λ Φ øλpΦλq¥δ. Proof. The rst point is a direct reformulation of Theorem 3.9. By Proposition 2.5, Φ ø λ is a bialgebra morphism. Moreover, by Proposition 2.6, ϵ δ ¥ pΦ øλqpϵ δ ¥ Φq λ ϵ δ λ λ. So Φ øλΦ λ . Corollary 3.12. Let pB, m, ∆, δq be a connected double bialgebra and let Φ : B ÝÑ KrXs be the unique double bialgebra morphism. 
We denote by Hom b pB, KrXsq the set of bialgebra morphisms from pB, m, ∆q to pKrXs, m, ∆q. The following maps are bijections, inverse one from the other: Φ chr p q X, Φ chr p q XpX ¡ 1q, Φ chr p q XpX ¡ 1qpX ¡ 2q, Φ chr p q XpX ¡ 1q 2 , Φ chr p q XpX ¡ 1qpX ¡ 2qpX ¡ 3q, Φ chr p q XpX ¡ 1qpX ¡ 2q 2 , Φ chr p q XpX ¡ 1q 2 pX ¡ 2q, Φ chr p q XpX ¡ 1qpX 2 ¡ 3X 3q, Φ chr p q XpX ¡ 1q 3 , Φ chr p q XpX ¡ 1q 3 . Let us now consider the case of quasishue algebras. Let pV, ¤, δ V q be a commutative (not necessarily unitary) bialgebra. We denote by Φ the unique double bialgebra morphism from pTpV q, , ∆, δq to pKrXs, , ∆, δq. For any v 1 , . . . , v n V , with n ¥ 1, Φpv 1 . . . v n q v1 ...vnw 1 ...w k , w 1 ,...,w k $r ϵ δ pw 1 q . . . ϵ δ pw k qH k pXq ϵ V pv 1 q . . . ϵ V pv n qH n pXq 0. Therefore: Proposition 3.14. Let pV, ¤, δ V q be a commutative (not necessarily unitary) bialgebra. The unique double bialgebra morphism Φ from pTpV q, , ∆, δq to pKrXs, , ∆, δq sends any word v 1 . . . v n T pV q of length n ¥ 1 to Φpv 1 . . . v n q ϵ V pv 1 q . . . ϵ V pv n qH n pXq. Remark 3.1. In the particular case of QSym, the unique double bialgebra morphism from pQSym, , ∆, δq to pKrXs, m, ∆, δq sends the composition pk 1 . . . k n q to H n pXq for any n. This morphism is denoted by Φ QSym . [START_REF] Bruned | Singular KPZ Type Equations[END_REF] The eulerian idempotent Notations 4.1. Let pB, m, ∆q be a connected bialgebra. Its eulerian idempotent is ϖ ev ρ plnp1 T qq lnp1 ρq V ķ1 p¡1q k 1 k ρ ¦k . Logarithm of the counit and the eulerian idempotent Let us go back to the map Θ of Proposition 2.2, with V B. By Lemma 3.7, it is a continuous algebra map from B ¦ to EndpBq, as it sends B ¦ ¤n to EndpBq ¤n for any n. Proposition 4.1. Let pB, m, ∆, δq be a connected double bialgebra. Let us denote by Φ the unique double bialgebra morphism from B to KrXs. We dene an innitesimal character ϕ B ¦ by dx B, ϕpxq Φpxq I p0q, that is to say ϕpxq is the coecient of X in Φpxq. Then ϕ lnpϵ δ q and the eulerian idempotent ϖ of B is ϖ Θpϕq pϕ Idq ¥ δ. Proof. By the proof of Proposition 3.10, for any λ CharpBq, Ψ lnpλq Φ λ , and for any x B, Ψ lnpλq pxq I p0q Φ λ pxq I p0q lnpλqpxq. In the particular case where λ ϵ δ , then Φ Φ ϵ δ and we obtain that ϕ lnpϵ δ q. As Θ is a continuous algebra morphism, Θpϕq Θplnpϵ δ qq lnpIdq ϖ. Example 4.1. In the case of H G , this character is denoted by ϕ chr . ϕ chr p q 1, ϕ chr p q ¡1, ϕ chr p q 2, ϕ chr p q 1, ϕ chr p q ¡6, ϕ chr p q ¡4, ϕ chr p q ¡2, ϕ chr p q ¡3, ϕ chr p q ¡1, ϕ chr p q ¡1. Proposition 4.2. Let pB, m, ∆, δq be a connected double bialgebra and let λ CharpBq. Then lnpλq ϕ λ. Proof. By Proposition 3.8 with V K (and then ø), ϕ λ lnpϵ δ q λ lnpϵ δ λq lnpλq. Lemma 4.3. Let µ InfCharpBq. Then ϕ µ µ. Proof. Let λ 1 , λ 2 B ¦ and µ InfCharpBq. pλ 1 ¦ λ 2 q µ pλ 1 λ 2 µq ¥ p∆ Idq ¥ δ pλ 1 λ 2 µq ¥ m 1,3,24 ¥ pδ δq ¥ ∆ pλ 1 ε ∆ λ 2 µ λ 1 µ λ 2 ε ∆ q ¥ pδ δq ¥ ∆ pλ 1 ε ∆ q ¦ pλ 2 µq pλ 1 µq ¦ pλ 2 ε ∆ q. Hence, for any n ¥ 1, if λ B ¦ , λ ¦n µ n ķ1 pλ ε ∆ q ¦pk¡1q ¦ pλ µq ¦ pλ ε ∆ q ¦pn¡kq . For λ ϵ δ ¡ ε ∆ , pϵ δ ¡ ε ∆ q µ ϵ δ µ ¡ ε ∆ µ ¡ µp1 B qε ∆ µ, whereas pϵ δ ¡ ε ∆ q ε ∆ ϵ δ ε ∆ ¡ ε ∆ ε ∆ ε ∆ ¡ ε ∆ p1qε ∆ 0. Therefore, for any n ¥ 1, pϵ δ ¡ ε ∆ q ¦n µ 5 µ if n 1, 0 otherwise. We nally obtain that ϕ µ lnp1 ε ∆ ¡ ϵ δ q µ V ķ1 p¡1q k 1 k pε ∆ ¡ ϵ δ q ¦ µ µ. Proposition 4.4. Let pB, m, ∆, δq be a connected double bialgebra. Then ϖ is a projector, which kernel is B 2 K1 B . Proof. Indeed, by Lemma 4.3 with µ ϕ, ϖ ¥ ϖ Θpµq ¥ Θpµq Θpµ µq Θpµq ϖ. So ϖ is a projector. 
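As a quick sanity check of this projector property (a worked example added here, not taken from the text), one can evaluate $\varpi=\sum_{k\geqslant 1}\frac{(-1)^{k+1}}{k}\rho^{\ast k}$ in the polynomial bialgebra $K[X]$, where $\rho$ is the projection vanishing on $K1$:
\[
\varpi(1)=0,\qquad
\varpi(X)=\rho(X)=X,\qquad
\varpi(X^{2})=\rho(X^{2})-\tfrac12\,\rho^{\ast 2}(X^{2})=X^{2}-\tfrac12\,(2X^{2})=0,
\]
using that the reduced coproduct gives $\tilde\Delta(X^{2})=2\,X\otimes X$. This agrees with the description of the kernel of $\varpi$ established in the rest of the proof: here the primitive part is $KX$, while $1$ and $X^{2}$ lie in the span of $1$ and of the square of the augmentation ideal.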
As ϕ is an innitesimal character, ϕpB 2 K1 B q p0q. Moreover, as ε ∆ is a comodule morphism, δpB q B B, which implies that δpB 2 K1 B q pB 2 K1 B q B. Therefore, as ϖ pϕ Idq ¥ δ, B 2 K1 B Kerpϖq. Let x B . Then ϖpxq V ķ1 p¡1q k 1 k ρ ¦k pxq x V ķ2 p¡1q k 1 k m 12...k ¥ ∆pk¡1q pxq looooooooooooooooooomooooooooooooooooooon B 2 , so x ¡ ϖpxq B 2 K1 B . In particular, if x Kerpϖq, then x B 2 K1 B . Hence, Kerpϖq B 2 K1 B . If x PrimpBq, then ϖpxq x, so PrimpBq Impϖq. In the cocommutative case, it is an equality: Corollary 4.5. Let pB, m, ∆, δq be a connected double bialgebra, such that ∆ is cocommutative. Then the eulerian idempotent ϖ is the projector on PrimpBq which vanishes on B If G is not connected, then ϖpGq 0. Remark 4.1. If pB, m, ∆q is neither a commutative or a cocommutative bialgebra, then ϖ is generally not a projector, and does not vanishes B 2 . To illustrate this, let us consider the bialgebra freely generated by three generators x 1 , x 2 , and y, with the coproduct dened by ∆px 1 q x 1 1 1 x 1 , ∆px 2 q x 2 1 1 x 2 , ∆pyq y 1 1 y x 1 x 2 . Then ϖpx 1 x 2 q rx 1 , x 2 s 2 $ 0 and ϖpyq y ¡ x 1 x 2 2 . Therefore, ϖ 2 pyq y ¡ x 1 x 2 2 ¡ rx 1 , x 2 s 4 y ¡ 3x 1 x 2 ¡ x 2 x 1 4 $ ϖpyq. Chromatic innitesimal character In the case of graphs, for any innitesimal character µ, if Ψ µ is the associated bialgebra morphism, for any graph G, Ψ µ pGq V ķ1 V pGqI 1 ...I k µpG |I 1 q . . . µpG |I k q X k k! . As µ is an innitesimal character, it vanishes on nonconnected graphs, so this is in fact a sum over E c pGq: Ψ µ pGq EcpGq ¹ CV pGq{ µpG |C qX clpq , where clpq is the number of classes of . Denoting by µ the character dened dG G, µpGq ¹ H connected component of G µpHq, we obtain Ψ µ pGq EcpGq µpG |qX clpq , (2) Let ϕ chr be the innitesimal character associated to the morphism Φ chr from H G to KrXs: for any graph G, ϕ chr pGq Φ chr pGq I p0q lnpϵ δ qpGq. We obtain from (2) that for any graph G, Φ chr pGq EcpGq ϕ chr pG |qX clpq . Notations 4.2. We shall use here the notion of acyclic orientation of G. Recall that: An oriented graph is a pair G pV pGq, ApGqq, where V pGq is a nite set called the set of vertices of G and ApGq is a set of couples of distinct elements of G, such that for any x, y V pGq, distinct, px, yq ApGq ùñ py, xq ApGq. A walk in G is a sequence of vertices px 1 , . . . , x k q such that for any i rk ¡ 1s, px i , x i 1 q ApGq. A cycle px 1 , . . . , x k q is a walk if k ¥ 2 and if x 1 x k . The oriented graph is acyclic if it does not contain any cycle. If G is an oriented graph, its support is the graph supppGq dened by V psupppGqq V pGq, EpsupppGqq ttx, yu | px, yq ApGqu. If G is an oriented graph, a source of G is a vertex y V pGq such that for any x V pGq, px, yq ApGq. The set of sources of G is denoted by spGq. It is not dicult to show that any non empty acyclic oriented graph has at least one source. Consequently, if G is a non empty acyclic oriented graph, then any of its connected component is also an acyclic oriented graph and so contains at least one source. Therefore, if G is not connected, |spGq| $ 1. If G is a graph, we denote by OpGq the set of acyclic oriented graphs H such that supppHq G. If x V pGq, we denote by OpG, xq the set of acyclic oriented graphs H OpGq such that spHq txu. Let us start by a combinatorial lemma. Proposition 4.6. Let G be a graph and x, y V pGq. Then OpG, xq and OpG, yq are in bijection. Proof. We assume that x $ y. Let H OpGq. 
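For a concrete instance of these notions and of the bijection about to be constructed (an illustrative example added here, not from the original): take for $G$ the path graph with vertices $a,b,c$ and edges $\{a,b\},\{b,c\}$. All four of its orientations are acyclic. The orientation $a\to b\to c$ has $\{a\}$ as its set of sources; $b\to a$, $b\to c$ has $\{b\}$; $b\to a$, $c\to b$ has $\{c\}$; while $a\to b$, $c\to b$ has the two sources $a$ and $c$. Hence $|O(G,a)|=|O(G,b)|=|O(G,c)|=1$, as Proposition 4.6 predicts, and the invariant $\varphi(G)$ of Definition 4.7 below equals $1$. The map $f_{a,c}$ of the proof sends $a\to b\to c$ (for which $[a,c]_{H}=\{a,b,c\}$) to the fully reversed orientation $b\to a$, $c\to b$, whose unique source is indeed $c$. This is also consistent with Theorem 4.9 below: the chromatic polynomial of the path is $X(X-1)^{2}=X^{3}-2X^{2}+X$, so $\phi_{\mathrm{chr}}(G)=1=(-1)^{|V(G)|+1}\varphi(G)$.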
We dene a partial order ¤ H on V pGq such that for any x, y V pGq, x ¤ H y if there exists an oriented path px x 1 , x 2 , . . . , x k yq in H. As H is acyclic, this relation is antisymmetric. It is obviously reexive and transitive, so it is an order on V pGq. The set of minimal elements of pV pGq, ¤ H q is spHq. Let H OpG, xq. As spHq txu, x is the unique minimal element of pV pGq, ¤ H q, so x ¤ H y. We consider rx, ys H tz V pGq | x ¤ H z ¤ H yu. This is non empty. Let H I be the oriented graph obtained by changing the orientations of all the edges between two vertices of rx, ys H . Let px 1 , . . . , x k x 1 q be a cycle in H I . If none of the vertices of this cycle belongs to rx, ys H , then it is a cycle in H: this is contradiction, as H is acyclic. Let us assume that at least one of the vertices of this cycle belongs to rx, ys H : up to a permutation, we assume that x 1 x k rx, ys H . Let us prove by induction on i that x ¤ H x i for any i. It is obvious if i 1. Let us assume that x i¡1 ¤ y. Two cases are possible: If px i¡1 , x i q ApHq, then x ¤ H x i¡1 ¤ H x i , so x ¤ H x i . If px i , x i¡1 q ApHq, by denition of H I , this implies that x i , x i¡1 rx, ys H , so x ¤ H x i . Let us now prove by induction on i that x k¡1 ¤ H y for any i. It is obvious for i 0, as x k x 1 . Let us assume that x k 1¡i ¤ H y. Two cases are possible. If px k¡i , x k 1¡i q ApHq, then x k¡i ¤ H x k 1¡i ¤ H y, so x k¡i ¤ H y. If px k 1¡i , x k¡i q ApHq, by denition of H I , this implies that x k 1¡i , x k¡i rx, ys H , so x k¡i ¤ H y. We obtain that x 1 , . . . , x k rx, ys H , so px k , x k¡1 , . . . , x 1 q is a cycle of H: this is a contradiction, as H is acyclic. As a conclusion, H I is acyclic. Let z V pH I q. If z rx, ys H , then it is not a source of H (as the unique source of H is x,) so there exists t V pHq, such that pt, zq ApHq. Then pt, zq ApH I q and z spH I q. Let z rx, ys H , dierent from y. As z H y, there exists a walk in H from z to y, so there exists t rx, ys H such that pz, tq ApHq. Then pt, zq ApH I q, so z spH I q. Finally, spH I q tyu and, as spH I q $ r, spH I q tyu. We proved that H I OpG, yq. This dene a map f x,y : If px i¡1 , x i q ApHq, then x ¤ H x i¡1 ¤ H x i , so x ¤ H x i . If px i , x i¡1 q ApHq, then by denition of H I , x i¡1 , x i rx, ys H , so x ¤ H x i . Let us now prove by induction on i that x k¡i ¤ H y for any i. It is obvious if k 0, as x ¤ H y. Let us assume that x k 1¡i ¤ H y. Two cases are possible. If px k¡i , x k 1¡i q ApHq, then x k¡i ¤ H x k 1¡i ¤ H y, so x k¡i ¤ H y. If px k 1¡i , x k¡i q ApHq, then by denition of H I , x k 1¡i , x k¡i rx, ys H , so x k¡i ¤ H y. We proved that any vertex of px 0 , . . . , x k q belongs to rx, ys H . In particular, z rx, ys H . Therefore, rx, ys H ry, xs H I. As a consequence, f y,x ¥ f x,y Id OpG,xq . So f x,y is a bijection for any x $ y V pGq, of inverse f y,x . Consequently, we dene: Denition 4.7. For any graph G, choosing any vertex x V pGq, we denote by φpGq the number of acyclic orientations of G, such that spGq txu. By convention, φp1q 0. This denes an innitesimal character of G. Proof. By the preceding lemma, this does not depend on the choice of x. As any non connected graph G has at least two sources, if G is not connected, then φpGq 0. So φ is an innitesimal character. Here is a second combinatorial lemma: Lemma 4.8. Let G be a graph and e EpGq. We denote by G{e the graph obtained by contraction of e (and so identifying the two extremities of e) and by Gze the graph obtained by deletion of e. 
Then Φ chr pGq Φ chr pGzeq ¡ Φ chr pG{eq, ϕ chr pGq ϕ chr pGzeq ¡ ϕ chr pG{eq, φpGq φpGzeq φpG{eq. Proof. We put e tx, yu. 1. Let us give a proof of this classical result. Let n N. Then Φ chr pGzeqpnq is the numbers of colorations f of G such that for any e I tx I , y I u V pGq, e I $ e, f px I q $ f py I q. Moreover, Φ chr pG{eqpnq is the numbers of colorations f of G such that for any e I tx I , y I u V pGq, e I $ e, f px I q $ f py I q, and such that f pxq f pyq. Taking the dierence, Φ chr pGzeqpnq¡Φ chr pG{eqpnq Φ chr pGqpnq for any n N, which gives the rst equality. Lemma 4.10. Let pA, m, ∆q be a commutative or cocommutative bialgebra. The induced convolution product on EndpAq is denoted by ¦. The canonical projection on the augmentation ideal of A is denoted by ρ. There exists a family of scalars pλpk, l, pqq k,l,pN , which does not depend on A, such that for any k, l N, ρ ¦k ¥ ρ ¦l ρ ¦l ¥ ρ ¦k kl p0 λpk, l, pqρ ¦p . Proof. We shall use Sweedler's notation for ∆pxq x p1q x p2q for any x A. Note that λpk, l, pq 0 if p ¡ kl. As for any k, l, p N, λpk, l, pq λpl, k, pq, ρ ¦k ¥ ρ ¦l ρ ¦l ¥ ρ ¦k . Lemma 4.11. Let f pTq KrrT ss and let ρ be the projection on the augmentation ideal of KrXs. If f pρq 0, then f 0. Proof. For any k, n N, ρ ¦k pX n q ¤ ¦ ¦ ¥ i1 ... i k n, i 1 ,...,i k ¥1 n! i 1 ! . . . i k ! X n . In particular, ρ ¦k pX k q k!X k $ 0 and ρ ¦k pX n q 0 if n k. Let f KrrT ss, nonzero, and let k valpf q. Then f pρqpX k q a k ρ ¦k pX k q 0 a k k!X k $ 0, so f pρq $ 0. Lemma 4.12. Let p, k, l N. If p k or p l, then λpk, l, pq 0. Proof. We work in the bialgebra pKrXs, m, ∆q. If p l, then ρ ¦k ¥ ρ ¦l pX p q 0, as ρ ¦l pX p q 0. We consider the formal series f pTq kl p0 λpk, l, pqT p KrrT ss. Then f pρq T ¦k ¥ T ¦l . Let q valpf q. Then f pρqpT q q λpk, l, qqq!X q $ 0, so q ¥ l. By symmetry in k, l of the coecients λpk, l, pq, valpf q ¥ k. Proposition 4.13. Let A be a connected commutative bialgebra. We put ϖ lnpIdq V ķ1 p¡1q k 1 k ρ ¦k . Then ϖ is a projection. Its kernel is A 2 K1 A and its image contains PrimpAq. Proof. By denition of the coecients λpk, l, pq and by the preceding lemma, for any formal series f °ak T k and g °bk T k in KrrT ss, f pρq ¥ gpρq V p0 ¤ ¥ ķ ,l¤p λpk, l, pqa k b l ρ ¦p . We consider the case where A pKrXs, m, ∆q. As it is a double bialgebra, in this case, by , so x A 2 . We obtain that Kerpϖq A 2 K1 A . Note that ϖp1 A q 0. Moreover, π ¥ m m ¦ pπq m ¦ plnpIdqq lnpm ¦ pIdqq lnpId Idq lnppId ιq ¦ pι Idqq lnpId ιq lnpι Idq lnpIdq ι ι lnpIdq ϖ ι ι ϖ. Therefore, if x, y A , ϖpxyq ϖpxqεpyq εpxqϖpyq 0. So A 2 K1 A Kerpϖq. Corollary 4.14. Let pA, m, ∆q be a connected and commutative bialgebra. Then pA, m, ∆q is isomorphic to a subbialgebra of the shue algebra pTpPrimpAqq, ¡,∆q. Proof. By the universal property of pTpPrimpAqq, ¡,∆q, (Proposition 3.5), for any linear map ϕ : A ÝÑ PrimpAq such that ϕp1 A q 0, there exists a unique coalgebra morphism Φ : A ÝÑ T pPrimpAqq such that π ¥Φ ϕ, where Φ : T pPrimpAqq ÝÑ PrimpAq is the canonical projection. By Proposition 4.13, PrimpAq A 2 p0q. Let us choose ϕ such that ϕ |PrimpAqq Id PrimpAq and ϕpA 2 q p0q. We denote by Φ the corresponding coalgebra morphism from A to T pPrimpAqq. As Φ |PrimpAq is injective, by Proposition 3.5, Φ is injective. As ϕpA 2 q p0q, still by Proposition 3.5, Φ is a bialgebra morphism from pA, m, ∆q to pTpPrimpAqq, ¡,∆q. Corollary 4.15. Let pA, m, ∆q be a connected commutative bialgebra. Then it can be embedded in a double bialgebra pB, m, ∆, δq, with PrimpBq PrimpAq. 
If A is cofree or if A is cocommutative, then there exists a second coproduct δ on A making it a double bialgebra. Proof. First step. Let pV, ¤q be a commutative algebra. We can consider the quasishue algebra pTpV q, , ∆q. By Corollary 4.15 and its proof, choosing a convenient ϕ, there exists an injective bialgebra morphism Φ : pTpV q, , ∆q ÝÑ pTpV q, ¡,∆q, such that Φpvq v for any v V . Moreover, for any v 1 , . . . , v n V , with n ¥ 1, Φpv 1 . . . v n q 1 1 ¡ ϕ pv 1 . . . v n q n ķ1 v1 ...vnw 1 ...w k , w 1 ,...,w i $1 ϕpw 1 q . . . ϕpw k q loooooooomoooooooon V k v 1 . . . v n words of length n. An easy triangularity argument proves then that Φ is bijective. Hence, pTpV q, , ∆q and pTpV q, ¡,∆q are isomorphic. In the particular case where V is a commutative bialgebra, we obtain a second coproduct δ on T pV q, making it a double bialgebra. Second step. Let pA, m, ∆q be a connected bialgebra. Let us choose any commutative bialgebra structure on PrimpAq. As pTpPrimpAqq, ¡,∆q and pTpPrimpAqq, , ∆q are isomorphic, from Corollary 4.14, there exists an injective bialgebra morphism from A to pTpPrimpAqq, , ∆q, which proves the rst point, as pTpPrimpAqq, , ∆q is a double bialgebra. Last step. If A is cofree, then the injection from A to T pPrimpAqq is a bijection. If A is cocommutative, as it is connected it is primitively generated by Cartier-Quillen-Milnor-Moore's theorem: hence, its image is the subalgebra A I of pTpV q, q generated by PrimpT pV qq V . As δpV q V V by construction of V , δpA I q A I A I so A I is a double subbialgebra of pTpPrimpAqq, , ∆, δq. a k T k , N ķ0 b k X k y N ķ0 a k b k . 2. Let g : rns ↠ rls be a surjective map. We put dpgq Uti rn ¡ 1s | gpiq ¥ gpi 1qu, P g pXq X dpgq 1 p1 Xq n¡1¡dpgq KrXs. The letter d is for descents. Proposition 4.16. Let pV, ¤, δ V q be a commutative, non necessarily unitary bialgebra. We consider the double quasishue bialgebra pTpV q, , ∆, δq. Let QpT q KrrT ss and let λ Qpϵ δ ¡ ε ∆ q T pV q ¦ . For any word v 1 . . . v n V n , with n ¥ 1, Θpλqpv 1 . . . v n q n ļ1 ģ:rns↠rls xQpTq, P g pXqy ¤ ¥ ¤ ¹ gpiq1 v i . . . ¤ ¥ ¤ ¹ gpiql v i . Proof. For any v 1 . . . v n V n , with n ¥ 1, λpv 1 . . . v n q Qpϵ δ ¡ ε ∆ qpv 1 . . . v n q V ķ1 a k ϵ k δ ¥ ∆pk¡1q pv 1 . . . v n q a n ϵ V pv 1 q . . . ϵ V pv n q, as ϵ δ vanishes on any word of length ¥ 2. By denition of the coproduct δ, δpv 1 . . . v n q ķ ,l¥1, f :rns↠rks,increasing g:rns↠rls, di,jrns, pi j and f piqfpjqqùñgpiq gpjq ¤ ¥ ¹ f piq1 v I i . . . ¤ ¥ ¹ f piqk v I i ¤ ¥ ¹ gpiq1 v P i . . . ¤ ¥ ¹ gpiql v P i ķ ,l¥1, g:rns↠rls, f :rns↠rks,increasing, di,jrns, pi j and gpiq¥gpjqqùñf piq f pjq ¤ ¥ ¹ f piq1 v I i . . . ¤ ¥ ¹ f piqk v I i ¤ ¥ ¹ gpiq1 v P i . . . ¤ ¥ ¹ gpiql v P i . For any g : rns ↠ rls, let us put Apgq tf : rns ↠ rks, increasing | di, j rns, pi j and gpiq ¥ gpjqq ùñ f piq f pjqu. Then, putting QpT q °ak T k , Θpλqpv 1 . . . v n q pλ Idq ¥ δpv 1 . . . v n q ļ¥1 ģ:rns↠rls fApgq λ ¤ ¥ ¤ ¥ ¹ f piq1 v I i . . . ¤ ¥ ¹ f piqmaxpfq v I i ¤ ¥ ¹ gpiq1 v P i . . . ¤ ¥ ¹ gpiql v P i ļ¥1 ģ:rns↠rls fApgq a maxpf q n ¹ i1 ϵ V pv I i q ¤ ¥ ¹ gpiq1 v P i . . . ¤ ¥ ¹ gpiql v P i ļ¥1 ģ:rns↠rls ¤ ¥ fApgq a maxpf q ¤ ¥ ¹ gpiq1 v i . . . ¤ ¥ ¹ gpiql v i . For any k N ¡0 , we put A k pgq tf Apgq | maxpf q ku and we put R g pXq ķ¥1 |A k pgq|X k . With this denition, we obtain that The result R g pXq P g pXq then comes from an easy induction on n. Let us apply this formula for QpT q 1 1 T and QpT q lnp1 T q: Corollary 4.17. Let pV, ¤q be a commutative (non necessarily unitary) algebra. 
In the Hopf algebra pTpV q, , ∆q, the antipode S is given on any non empty word v 1 . . . v n by Spv 1 . . . v n q p¡1q n ļ¥1 ģ:rns↠rls,decreasing ¤ ¥ ¤ ¹ gpiq1 v i . . . ¤ ¥ ¤ ¹ gpiql v i . The eulerian idempotent is given on any non empty word v 1 . . . v n by ϖpv 1 . . . v n q ļ¥1 ģ:rns↠rls p¡1q dpgq dpgq!pn ¡ 1 ¡ dpgqq! n! ¤ ¥ ¤ ¹ gpiq1 v i . . . ¤ ¥ ¤ ¹ gpiql v i . Proof. By functoriality, as any commutative algebra is the quotient of a commutative bialgebra, it is enough to prove it for a commutative bialgebra. For the antipode, we use Proposition 4.16 with QpT q 1 1 T . For any n N, xQpTq, X n y p¡1q n , so xQpTq, P pXqy P p¡1q for any P pXq KrXs. Therefore, if g : rns ↠ rls, xQpTq, P g pXqy P g p¡1q 5 p¡1q n if dpgq n ¡ 1, 0 otherwise, 5 p¡1q n if g is decreasing, 0 otherwise. For the eulerian idempotent, we use QpT q lnp1 T q. For any n N ¡0 , xQpTq, X n y p¡1q An easy induction on p, based on an integration by part, proves that for any p, q N, » 0 ¡1 t p p1 tq q dt p¡1q k k!l! pk l 1q! . The result immediately follows, with p dpgq and q n ¡ 1 ¡ dpgq. 5 Graded double bialgebras 5.1 Reminders Denition 5.1. Let pB, m, ∆q be a bialgebra. We shall say that it is graded if there exists a graduation pB n q nN of B such that: For any k, l N, mpB k B l q B k l . For any n N, ∆pB n q n ķ0 B k B n¡k . We shall say that the graduation is connected if B 0 K1 B . Example 5.1. This is the case of KrXs, with KrXs n KX n for any n. The bialgebra QSym is also graded and connected, putting any composition pk 1 . . . k n q homogeneous of degree k 1 . . . k n . The bialgebra of graphs pH G , m, ∆q is also graded by the number of vertices of the graphs. Proof. By Theorem 3.12, Ψ µ Φ øλ, so Ψ µ øλ ¡1 Φ øpλλ ¡1 q Φ. For any x B, ϕpxq, coecient of X in Φpxq, is given by ϕpxq ϵ δ ¥ ϖ 1 ¥ Φ, where ϖ 1 : KrXs ÝÑ KX is the canonical projection. By homogeneity of Ψ µ , ϕ ϵ δ ¥ ϖ 1 ¥ pΨ µ λ ¡1 q ¥ δ ϵ δ ¥ pΨ µ λ ¡1 q ¥ pϖ 1 Idq ¥ δ pλ λ ¡1 q ¥ pπ 1 Idq ¥ δ ppλ ¥ π 1 q λ ¡1 q ¥ δ pµ λ ¡1 q ¥ δ µ λ ¡1 . Example 5.2. Let µ pH G q 1 dened by µp q 1. Then exppµqp q µp1q 1. In the same way as in [START_REF] Foissy | Commutative and non-commutative bialgebras of quasi-posets and applications to Ehrhart polynomials[END_REF]Proposition 11], it is possible to prove that λ exppµq is invertible for the product. Moreover, for any graph G with n vertices, λpGq 1 n! V pGqI 1 ...In, |I 1 |...|In|1 n ¹ i1 µpG |I i q n! n! 1. Let G be a graph, and let E c pGq. If G is not connected, then G{ has at least two vertices, so pπ 1 Idq ¥ δpGq 0, which implies (again) that ϕ chr pGq 0. Let us now assume that G is connected. The unique 0 E c pGq such that |G{ 0 | 1 is the equivalence with only one class, which indeed belongs to E c pGq as G is connected. Moreover, G | 0 G. We obtain ϕ chr pGq µ λ ¡1 pGq µp qλ ¡1 pGq λ ¡1 pGq. Therefore, for any connected graph G, λ ¡1 pGq ϕ chr pGq, which entirely determines the character λ ¡1 . For any graph G, Φ chr pGq EcpGq Ψ µ pG{ qλ ¡1 pG |q EcpGq ¤ ¥ ¹ CV pGq{ ϕ chr pG |C q X clpq , where clpq is the number of classes of . Morphisms to QSym Let us recall the following result, due to Aguiar, Bergeron and Sottile [2]: Proposition 5.5. Let pB, m, ∆q be a graded and connected bialgebra, and let λ CharpBq. There exists a unique homogeneous bialgebra morphism Φ λ : pB, m, ∆q ÝÑ pQSym, , ∆q such that ϵ δ ¥ Φ λ λ. For any x B , Φ λ pxq ņ¥1 ķ1 ,...,knN ¡0 λ k ¥ pπ k 1 . . . π kn q ¥ ∆pn¡1q pxqpk 1 . . . k n q. Proposition 5.6. 
Let pB, m, ∆, δq be a double bialgebra, such that pB, m, ∆q is a graded and connected bialgebra in the sense of Denition 5.1. 1. If dn N, δpB n q B n B KerpΦ ϵ δ Φ ϵ δ q, then the unique homogeneous double bialgebra morphism from pB, m, ∆, δq to pQSym, , ∆, δq is Φ ϵ δ . Otherwise, there is no homogeneous double bialgebra morphism from B to QSym. 2. For any λ CharpBq, the unique homogeneous bialgebra morphism Φ λ : pB, m, ∆q ÝÑ pQSym, , ∆q such that ϵ δ ¥ Φ λ λ is Φ øλ. Proof. 1. Unicity. Let Φ be such a morphism. Then ϵ δ ¥ Φ ϵ δ . By the unicity in Aguiar, Bergeron and Sottile's theorem, Φ Φ ϵ δ . 1. Existence. Let us rst assume that (3) holds, and let us prove that Φ Φ ϵ δ is a double bialgebra morphism. Let us prove that for any x B n , δ ¥ Φpxq pΦ Φq ¥ δpxq by induction on n. If n 0, we can assume that x 1 B , and then δ ¥ Φp1 B q 1 1 pΦ Φq ¥ δp1 B q. Let us assume the result at all ranks n. For any x B n , as Φ : pB, m, ∆q ÝÑ pQSym, , ∆q is a bialgebra morphism, p ∆ Idq ¥ δ ¥ Φpxq Therefore, if k $ n, x k,l KerpΦ Φq and we nally obtain that x B n B KerpΦ Φq. 2. Let λ CharpBq. Then Φ øλ is a bialgebra morphism. For any n N, Φ øλpB n q pΦ λq ¥ δpB n q pΦ λqpB n Bq ΦpB n q QSym n , so Φ øλ is homogeneous. Moreover, ϵ δ ¥ pΦ øλqpϵ δ ¥ Φq λ ϵ δ λ λ. Hence, Φ øλΦ λ . Remark 5.2. Let pB, m, ∆, δq be a connected double bialgebra and let Φ be a double bialgebra morphism from pB, m, ∆, δq to pQSym, m, ∆, δq. Then the unique double bialgebra morphism from pB, m, ∆, δq to pKrXs, m, ∆, δq is Φ QSym ¥ Φ, where Φ QSym is described in Remark 3.1. Remark 5.3. Non homogeneous double bialgebra morphisms from B to QSym may exist. For example, the algebra morphism Ψ : KrXs ÝÑ QSym sending X to p1q is a double bialgebra morphism. By composition with the double bialgebra morphism from B to KrXs, we obtain non homogeneous double bialgebra morphisms from B to QSym. Remark 5.4. The hypothesis (3) does not hold if B H G . For example, pΦ ϵ δ Φ ϵ δ q ¥ δp q pΦ ϵ δ Φ ϵ δ qp q p1q 2p11q 2p11q p2p11q p2qq, δ ¥ Φ ϵ δ p q δp2p11qq p2q 2p11q 2p11q p2p11q p2qq. A way to correct this is to work with decorated graphs, see [START_REF]Chromatic polynomials and bialgebras of graphs[END_REF] for more details. Example 3 . 1 . 31 In pH G , m, ∆q, for any graph G, ∆pk¡1q pGq I1 ...I k V pGq, I 1 ,...,I k $r G |I 1 . . . G |I k . Let x A. Then Id ¦k pxq x p1q . . . x pkq . Therefore, for any x A, Id ¦k ¥ Id ¦l pxq Id ¦k ¡ x p1q . . . x plq © ¡ x p1q . . . x plq © p1q . . . ¡ x p1q . . . x plq © pkqx p1q x pl 1q . . . x ppk¡1ql 1q . . . x pkq x p2kq . . . x pklq x p1q . . . x pklq Id ¦kl pxq.We use that A is commutative or cocommutative for the fourth equality. Hence, Id ¦k ¥ Id ¦lId ¦kl .Let ι be the unit of ¦. Then ρ Id ¡ ι, and ρ ¦k ¥ ρ ¦l pId ¡ ιq ¦k ¥ pId ¡ ιq ¦l Proposition 4 . 4 ,p 1 pρ 441 ϖ is a projection. Hence, ¦p ϖ, so ϖ is a projection. Let x PrimpAq. Then ϖpxq ρpxq 0 x, so x Impϖq. Let x Kerpϖq A . Then 4. 4 4 Antipode and eulerian idempotent for quasishue algebras Notations 4.3. 1. We identify KrrT ss and the dual of KrXs, with the pairing dened by x V ķ0 the eulerian idempotent of B is Θpϕq pϕ Idq ¥ δ, see Proposition 4.1. for any character λ of B, lnpλq ϕ λ, see Proposition 4.2. We dene the restricted graph G | by V pG |q V pGq, EpG |q ttx, yu EpGq | π pxq π pyqu. In other words , G | is the disjoint union of the subgraphs G |C , with C V pGq{ . We shall say that E c pGq if for any class C V pGq{ , G |C is a connected graph. We then dene a Example 1.4. second coproduct δ on H G by dG G, δpGq EcpGq G{ G | . 
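To illustrate the coproduct $\delta$ on graphs recalled above, $\delta(G)=\sum_{\sim\in E_{c}(G)}G/\!\sim\ \otimes\ G|_{\sim}$ (a worked example added for clarity, not from the original): write $E_{2}$ for the graph with two vertices joined by a single edge, $\bullet$ for the one-vertex graph and $\bullet\,\bullet$ for the edgeless graph on two vertices. The set $E_{c}(E_{2})$ has two elements, the discrete relation (two singleton classes) and the relation with a single class, so
\[
\delta(E_{2})\;=\;E_{2}\otimes(\bullet\,\bullet)\;+\;\bullet\otimes E_{2}.
\]
Since $\epsilon_{\delta}$ takes the value $1$ on edgeless graphs and $0$ otherwise, one checks $(\mathrm{Id}\otimes\epsilon_{\delta})\circ\delta(E_{2})=E_{2}$ and $(\epsilon_{\delta}\otimes\mathrm{Id})\circ\delta(E_{2})=E_{2}$, as the counit axioms require.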
This coproduct is coassociative, but not cocommutative. Its counit ϵ δ is given by dG G, ϵ δ pGq 5 1 if EpGq r, 0 otherwise. 1.3 Quasishue algebras Notations 1.1. Let k, l N. A pk, lq-quasishue is a surjective map σ : rk ls ÝÑ rmaxpσqs such that σp1q . . . σpkq and σpk 1q . . . σpk lq. The set of pk, lq-quasishues is denoted by QShpk, lq. First step. Let w V n , with n ¥ 1. δpwq ww 1 ...wn, |w I 1 | . . . |w I n | w P 1 . . . w P w 1 ,...,wn$1 p∆ Idq ¥ δpwq ww |w I 1 | . . . |w I k | |w I k 1 | . . . |w P k l | w P 1 . . . w P n 1 ...w k l , w 1 ,...,w k l $1 w k 1 ,...,w k l $1 w p2q w k 1 ...w k l , w 1 ,...,w k $1, w p1q w 1 ...w k , ww p1q w p2q , |w I 1 | . . . |w I k | |w I k 1 | . . . |w P k l | w P 1 . . . w P n 1,3,24 ¥ pδ δq ¥ ∆pwq. n . 3,24 ¥ pδ δq ¥ ∆pxq 1,4,25,36 ¥ pδ Id δ Idq ¥ pδ δq ¥ ∆pxq, By the induction hypothesis, p ∆ Id Idq ¥ pδ Idq ¥ δpxq p ∆ Id Idq ¥ pId δq ¥ δpxq, so pδ Idq ¥ δpxq ¡ pId δq ¥ δpxq V. Moreover, pπ Idq ¥ pδ Idq ¥ δpxq whereas p ∆ Id Idq ¥ pId δq ¥ δpxq pId Id ∆q ¥ p ∆ Idq ¥ δpxq pId Id ∆q ¥ 1,3,24 ¥ pδ δq ¥ ∆pxq 1,4,25,36 ¥ pId δ Id δq ¥ pδ δq ¥ ∆pxq. ¡ x p1q © I δ ¢ ¡ ∆q is a Hopf algebra of antipode S. Corollary 2.4. Let pB, m, ∆, δq be a double bialgebra, such that pB, m, ∆q is a Hopf algebra. Then pB, mq is commutative. Proof. As pB, m, ∆q is a Hopf algebra, ϵ δ has an inverse for the convolution product ¦, and the antipode of pB, m, ∆q is S pϵ ¦¡1 δ Idq ¥ δ by Corollary 2.3. As ϵ δ is a character of pB, m, ∆q, its inverse ϵ ¦¡1 δ is also a character. By composition, S is an algebra endomorphism of B. By the classical result on the antipode [START_REF] Abe | Hopf algebras[END_REF][START_REF] Sweedler | Hopf algebras[END_REF] , it is also a algebra anti-endomorphism. Hence, SpBq is a commutative subalgebra of B. It is enough to prove that S is surjective. By Lemma 1.5, 1. ùñ. If Φ is injective, by restriction Φ |PrimpBq is injective. If x PrimpBq, then Φpxq PrimpT pV qq V , so Φpxq π ¥ Φpxq ϕpxq: we obtain that ϕ |PrimpBq is injective. ðù. Let us assume that Φ is not injective. Let x KerpΦq B , nonzero, with deg p pxq n minimal. Then 4 CharpBq ÝÑ Hom b pB, KrXsq λ ÝÑ Φ øλ, Hom b pB, KrXsq ÝÑ CharpBq Ψ ÝÑ ϵ δ ¥ Ψ. pG |f ¡1 p1q q . . . ϵ δ pG |f ¡1 pkq qH k pXq. Moreover, by denition of ϵ δ , ϵ δ pG |f ¡1 p1q q . . . ϵ δ pG |f ¡1 pkq q $ 0 if, and only if, for any i, G |f ¡1 piq has no edge. This gives us the well-known concept of a valid coloration: a k-coloration is a map f : V pGq ÝÑ rks; it is packed if f is surjective and it is valid if for any tx, yu EpGq, f pxq $ f pyq. Hence, denoting by PVCpGq the set of packed valid coloration of G, ΦpGq Proof. Immediate. Example 3.2. Let us consider the case of H G . For any non empty graph G, ∆pk¡1q pGq f:V pGq↠rks G |f ¡1 p1q . . . G |f ¡1 pkq , therefore V ķ1 f:V ΦpGq ϵ δ pGq↠rks 4 fPVCpGq H maxpf q . Consequently, for any k N, ΦpGqpnq is the number of valid n-colorations of G: in other words, ΦpGq is the chromatic polynomial of G [17]. Theorem 3.13. The unique double bialgebra morphism Φ chr from pH G , m, ∆, δq to pKrXs, m, ∆, δq sends any graph G to its chromatic polynomial. Example 3.3. 2 K1 B . Proof. As pB, m, ∆q is a cocommutative bialgebra, it is primitively generated by Cartier-Quillen-Milnor-Moore's theorem. Hence, B PrimpBq B 2 K1 B . As PrimpBq Impϖq and ϖ vanishes on B 2 K1 B , PrimpBq Impϖq. Example 4.2. Let G be a connected graph. For any EpGq, G{ is connected. Hence, as H G is cocommutative, ϖpGq EcpGq ϕ chr pG{ qG | PrimpH G q. 4 OpG, xq ÝÑ OpG, yq H ÝÑ H I . Let us consider ry, xs H I. 
By construction of H I , rx, ys H ry, xs H I. Let z ry, xs H I. There exists a walk px 0 y, . . . , x j z, . . . , x k xq in H I . Let us prove by induction on i that x ¤ H x i for any i. It is obvious if i 0, as x ¤ H y. Let us assume that x ¤ H x i¡1 . Two cases are possible. It remains to prove that R g pXq P g pXq. For this, let us now study Apgq for g : rns ↠ rls. For any f : rn ¡ 1s ÝÑ rks, increasing, we put υ 0 pfq : Denoting by g I the standardization of the restriction of g to rn¡1s (that is to say the composition of g |rn¡1s with the unique increasing bijection from gprn¡1sq to rl I s for a well-chosen l I ), we obtain that Apgq 5 tυ pfq | f Apg I qu if gpn ¡ 1q ¥ gpnq, tυ pfq, υ 0 pfq | f Apg I qu if gpn ¡ 1q gpnq.As υ 0 does not change the maximum and υ 1 increases it by 1:If gpn ¡ 1q ¥ gpnq, then dpgq dpg I q 1 and |A k pgq| |A k¡1 pg I q|. Hence, R g pXq XR g IpXq.If gpn ¡ 1q ¡ gpnq, then dpgq dpg I q and |A k pgq| |A k pg I q| |A k¡1 pg I q|. Hence, R g pXq pX 1qR g IpXq. Θpv 1 . . . v n q ļ¥1 ģ:rns↠rls xQpTq, R g pXqy ¤ ¥ ¤ ¹ v i . . . ¤ ¥ ¤ ¹ v i . gpiq1 gpiql 6 8 rns ÝÑ rks 6 8 rns ÝÑ rks 7 7 i rn ¡ 1s ÝÑ f piq, n ÝÑ f pn ¡ 1q, υ pfq : i rn ¡ 1s ÝÑ f piq, n ÝÑ f pn ¡ 1q 1. In particular, if g : rns ↠ rls, xQpTq, P g pXqy n 1 n » 0 ¡1 t n¡1 dt, so for any P pXq XKrXs, xQpTq, P pXqy » 0 ¡1 P ptq t dt. » 0 ¡1 t dpgq p1 tq n¡1¡dpgq dt. 1,3,24 ¥ pδ δq ¥ ∆ ¥Φpxq 1,3,24 ¥ pδ δq ¥ pΦ Φq ¥ ∆pxq 1,3,24 ¥ pΦ Φ Φ Φq ¥ pδ δq ¥ ∆pxq pΦ Φ Φq ¥ m 1,3,24 ¥ pδ δq ¥ ∆pxq pΦ Φ Φq ¥ p ∆ Idq ¥ δpxq p ∆ Idq ¥ pΦ Φq ¥ δpxq. We use the induction hypothesis for the third equality, as Hence, δ ¥ Φpxq ¡ pΦ Φq ¥ δpxq PrimpQSymq QSym. Moreover, by homogeneity of Φ, δ ¥ Φpxq δpQSym n q QSym n QSym. Φq ¥ δpxq pΦ ΦqpB n B KerpΦ Φqq ΦpB n q ΦpBq QSym n QSym, so nally δ ¥ Φpxq ¡ pΦ Φq ¥ δpxq PrimpQSymq QSym n QSym Kpnq QSym, and we put δ ¥ Φpxq ¡ pΦ Φq ¥ δpxq pnq y. As ϵ δ ppnqq 1, y pϵ δ Idq ¥ δ ¥ Φpxq ¡ pϵ δ Idq ¥ pΦ Φq ¥ δpxq Φpxq ¡ pϵ δ Φq ¥ δpxq Φpxq ¡ Φpxq 0, so nally δ ¥ Φpxq pΦ Φq ¥ δpxq. Let us now assume that Φ is a double bialgebra morphism. Let n N. For any x B n , as Φ is homogeneous, pΦ Φq ¥ δpxq δ ¥ Φpxq δpQSym n q QSym n QSym. where x k,l B k B l for any k, l. Then, by homogeneity of Φ, pΦ Φq ¥ δpxq Let us put δpxq ķ ,l¥0 x k,l , ķ ,l¥0 pΦ Φqpx k,l q ∆pxq n¡1 à By (3), i1 B i B n¡i . pΦ looooooomooooooon QSym k QSym l QSym n QSym. q . . . ϵ V pv P n q v 1 . . . v n .The fact that ϵ V is an algebra morphism is left to the reader.A particular case is obtained when V is the bialgebra of a semigroup pΩ, q. In this case, a basis of the quasishue algebra is given by words in Ω. 
This construction is established in[START_REF] Ebrahimi-Fard | A comodule-bialgebra structure for word-series substitution and mould composition[END_REF], where it is related to Ecalle's mould calculus (product and composition of symmetrel moulds).For example, if k 1 , k , k , k Ω, in this quasishue double bialgebra, pk 1 q pk 2 k 3 k 4 q pk 1 k2 k 3 k 4 q pk 2 k 1 k 3 k 4 q pk 2 k 3 k 1 k 4 k 2 k 3 k 4 k 1 q ppk 1 k 2 qk 3 k 4 q pk 2 pk 1 k 3 qk 4 q k 2 k 3 pk 1 k 4 qq, pk 1 k 2 q pk 3 k 4 q pk 1 k 2 k 3 k 4 q pk 1 k 3 k 2 k 4 q pk 1 k 3 k 4 k 2 q pk 3 k 1 k 2 k 4 q pk 3 k 1 k 4 k 2 q pk 3 k 4 k 1 k 2 q ppk 1 k 3 qk 2 k 4 q ppk 1 k 3 qk 2 k 4 q pk 3 pk 1 k 4 qk 2 q pk 1 pk 2 k 3 qk 4 q pk 1 k 3 pk 2 k 4 qq pk 3 k 1 pk 2 k 4 qq ppk 1 k 3 qpk 2 k 4 qq, ∆ppk 1 k 2 k 3 k 4 qq pk 1 k 2 k 3 k 4 q 1 pk 1 k 2 k 3 q pk 4 q pk 1 k 2 q pk 3 k 4 q pk 1 q pk 2 k 3 k 4 q 1 pk 1 k 2 k 3 k 4 q,δppk 1 qq pk 1 q pk 1 q, δppk 1 k 2 qq pk 1 k 2 q pk 1 q pk 2 q pk 1 k 2 q pk 1 k 2 q, δppk 1 k 2 k 3 qq pk 1 k 2 k 3 q pk 1 q pk 2 q pk 3 q ppk 1 k 2 qk 3 q pk 1 k 2 q pk 3 q pk 1 pk 2 k 3 qq pk 1 q pk 2 k 3 q pk 1 k 2 k 3 q pk 1 k 2 k 3 q. ùñ. Let us assume that Φ : pB, m, ∆q ÝÑ pTpV q, , ∆q is a bialgebra morphism. As π : pTpV q , q ÝÑ pV, ¤q is an algebra morphism, by composition π ¥ Φ |B ϕ |B is an algebra Acknowledgments: the author acknowledges support from the grant ANR-20-CE40-0007 Combinatoire Algébrique, Résurgence, Probabilités Libres et Opérades. The author also warmly thanks Darij Grinberg for his careful reading and his helpful comments. 2. Direct consequence of the rst point, as ϕ chr pHq Φ chr pHq I p0q for any graph H. Let H be an orientation of G{e and let H 1 and H 2 be the two orientations of G inducing H: in H 1 , e is oriented from x to y whereas in H 2 , it is oriented from y to x. We assume that H OpG{e, xq. As H is acyclic, H 1 and H 2 are acyclic (as the contraction of a cycle is a cycle). Let z be a source of H 1 or of H 2 . If z $ x, y, it is also a vertex of G{e and is not a source of G{e, so it is not a source of H Summing, this gives the announced formula. The following result is rstly due to Greene and Zaslavsky [START_REF] Greene | On the interpretation of Whitney numbers through arrangements of hyperplanes, zonotopes, non-Radon partitions, and orientations of graphs[END_REF], see [START_REF] Gebhard | Sinks in acyclic orientations of graphs[END_REF] for several proofs of dierent natures: Theorem 4.9. For any graph G, ϕ chr pGq p¡1q |V pGq| 1 φpGq. Proof. We proceed by induction on the number n of edges of G. If EpGq r, then G n for a certain n N. Then Φ chr pGq X n , so Let us assume the result at all ranks n. Let us choose any edge e of G. As G{e and Gze has strictly less than n edges, ϕ chr pGq ϕ chr pGzeq ¡ ϕ chr pG{eq p¡1q |V pGq| 1 ϕpGzeq ¡ p¡1q |V pGq| ϕpG{eq p¡1q |V pGq| 1 pϕpGzeq ϕpG{eqq p¡1q |V pGq| 1 ϕpGq. Generalization to commutative connected bialgebras We proved in Proposition 4.4 that in the case of a connected double bialgebra, the eulerian idempotent ϖ is a projector. We now extend this result to any commutative connected bialgebra. Remark 5.1. If pB, m, ∆q is a graded and connected bialgebra, then for any n ¥ 1, Inductively, for any n, k ¥ 1, In particular, if x B n , for any k ¡ n, ∆pk¡1q pxq 0: B is connected in the sense of the preceding section. Homogeneous polynomial invariants If pB, m, ∆, δq is a graded connected bialgebra, a natural question is to nd all the homogeneous bialgebra morphisms from B to KrXs. 
For this, we identify B ¦ Note that as B is connected, B ¦ 1 InfCharpBq. We obtain: Proposition 5.2. Let µ InfCharpAq and let Ψ µ : pB, m, ∆q ÝÑ pKrXs, m, ∆q associated to µ by Proposition 3.10. Then Ψ µ is homogeneous if, and only if, µ B ¦ 1 . Proof. ðù. Let us assume that µ B ¦ 1 . Let n ¥ 1 and x B n . Then ùñ. Let us assume that µ B ¦ 1 . As µp1 B q 0, there exists n ¥ 2 and x B n such that µpxq $ 0. Then the coecient of X in Ψ µ pxq is µpxq $ 0, so Ψ µ pxq is not homogeneous of degree n and Ψ µ is not homogeneous. The second formula comes immediately. Proposition 5.4. Let µ B ¦ 1 , such that λ exppµq is invertible for the product . Then Φ Ψ µ øλ ¡1 , ϕ µ λ ¡1 .
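As a concluding, concrete complement to the graph-theoretic results above (a small illustrative script added here; it is not part of the original paper, and the function names and the two test graphs are chosen for this illustration only): the following self-contained Python code, using only the standard library, computes chromatic polynomials by the deletion-contraction recursion of Lemma 4.8, counts the acyclic orientations with a prescribed unique source of Definition 4.7, and checks the Greene-Zaslavsky identity of Theorem 4.9, namely that the coefficient of X in the chromatic polynomial equals (-1)^{|V(G)|+1} times the number of acyclic orientations with a fixed unique source.

from itertools import product

def chromatic_poly(vertices, edges):
    # Coefficient list [c0, c1, ...] of the chromatic polynomial of a simple graph,
    # computed by deletion-contraction (Lemma 4.8): P(G) = P(G - e) - P(G / e).
    vertices = frozenset(vertices)
    edges = frozenset(frozenset(f) for f in edges)
    if not edges:
        coeffs = [0] * (len(vertices) + 1)
        coeffs[len(vertices)] = 1                  # edgeless graph on n vertices: X**n
        return coeffs
    e = next(iter(edges))
    u, v = tuple(e)
    p_del = chromatic_poly(vertices, edges - {e})  # delete e
    contracted = set()
    for f in edges - {e}:                          # contract e: merge v into u
        g = frozenset(u if w == v else w for w in f)
        if len(g) == 2:                            # drop loops, merge parallel edges
            contracted.add(g)
    p_con = chromatic_poly(vertices - {v}, contracted)
    n = max(len(p_del), len(p_con))
    p_del = p_del + [0] * (n - len(p_del))
    p_con = p_con + [0] * (n - len(p_con))
    return [a - b for a, b in zip(p_del, p_con)]

def phi(vertices, edges, x):
    # Number of acyclic orientations whose unique source is x (Definition 4.7).
    vertices = sorted(vertices)
    edges = [tuple(f) for f in edges]
    count = 0
    for choice in product((0, 1), repeat=len(edges)):
        arcs = [(a, b) if c == 0 else (b, a) for (a, b), c in zip(edges, choice)]
        remaining, arcset = set(vertices), set(arcs)
        while remaining:                           # peel off sources (Kahn's algorithm)
            srcs = {w for w in remaining if all(t != w for (_, t) in arcset)}
            if not srcs:
                break                              # a directed cycle remains
            remaining -= srcs
            arcset = {(s, t) for (s, t) in arcset if s in remaining and t in remaining}
        if remaining:
            continue                               # orientation was not acyclic
        sources = {w for w in vertices if all(t != w for (_, t) in arcs)}
        if sources == {x}:
            count += 1
    return count

if __name__ == "__main__":
    tests = {
        "path a-b-c": ({"a", "b", "c"}, [("a", "b"), ("b", "c")]),
        "triangle":   ({"a", "b", "c"}, [("a", "b"), ("b", "c"), ("a", "c")]),
    }
    for name, (V, E) in tests.items():
        P = chromatic_poly(V, E)                   # e.g. [0, 1, -2, 1] for the path
        lhs = P[1]                                 # phi_chr(G): coefficient of X
        rhs = (-1) ** (len(V) + 1) * phi(V, E, "a")
        print(name, P, lhs, rhs)
        assert lhs == rhs                          # Theorem 4.9 (Greene-Zaslavsky)

Running it prints the coefficient lists [0, 1, -2, 1] and [0, 2, -3, 1], that is X(X-1)^2 and X(X-1)(X-2), whose linear coefficients 1 and 2 match the acyclic-orientation counts, as Theorem 4.9 asserts.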
00409999
en
[ "shs.langue" ]
2024/03/04 16:41:20
2009
https://hal.science/hal-00409999/file/communication_11ICAL_final.pdf
Laurent Sagart PAN MORPHOLOGY IN PHYLOGENETIC PERSPECTIVE A recurrent problem when building phylogenetic trees from sound changes is that while they are relatively easy to identify2 and their directionality is usually clear, the objects these changes take for their targets -sounds-are not highly language-specific: neighboring languages tend to have much overlap in their sound inventories, so that when a sound change reaches a language boundary, the risk is high that it will cross it, provided that there are bilinguals and that the target sound exists on the other side. For this reason, shared phonological innovations often do not reflect shared inheritance. Blust's sound-changebased phylogeny for Formosan languages (1999) assumes a ten-branch star shape because the phylogenetic signal is hard to extract from the phonological isoglosses of a group of related languages in contact. Lexical innovations take words as their targets. As associations of a string of phonemes and a meaning, words are far more language-specific than sounds. A lexical innovation affects the meaning of a particular word. When a lexical innovation reaches a sharp language boundary, the target word will usually not be present on the other side: the change cannot cross the boundary, even if there are bilinguals. True, a word having undergone a semantic change can be borrowed by a neighboring language, giving the appearance of lexical change in the borrowing language. The confounding effect of borrowing can be minimized by excluding cultural vocabulary from consideration. In my ( 2004), based on a pattern of phylogenetic compatibility among six innovative characters in the An numerals from 5 to 10, I presented a new higher phylogeny of An. This was slightly modified in my communication to 10ICAL and in my (2008). I reproduce the tree in Figure 1. The story told by this tree is that of a founder group crossing from northern Fujian, where the straits are narrowest and a mountain top in NW Taiwan can be seen with the naked eye: expanding from their first settlements in NW Taiwan southward along the west coast, reaching southern Taiwan and continuing their expansion along the east coast: from where a second founder group left to establish Austronesian-speaking colonies ancestral to PMP and Tai-Kadai in the Philippines and on the south China coast. Panel on Reconstruction of PAn morpho-syntax and implications for the An settlement on Taiwan 11ICAL -June 22-25, Aussois, France In this paper I begin to examine the contribution of morphological innovations to early Austronesian (Formosan) phylogeny, using the same compatibility approach as in my 2004: I will discuss three nested characters, each of which is clearly innovative. The Austronesian languages which show these characters form are distributed in three nested isoglosses, forming a clique (sensu Meacham and Estabrook 1985), like the six numerals in Sagart 2004. When a node dominates another node in a phylogenetic tree, the innovations which define those nodes should show a nesting structure in geographical space. 1. Three early Austronesian morphological innovations. In terms of their readiness to spread across language boundaries, morphological changes are probably intermediate between sound changes and lexical changes: on the one hand they are associations of a string of phonemes and a morphological meaning, making them more language-specific than sounds; on the other they are typically shorter than words, making them more like phonemes. 
Moreover, at least when closely related languages are in geographical contact, affixal inventories are likely to be shared to a significant extent: this increases the risk that a morphological change spreading from across the boundary will find its target on this side, and cross. At the same time morphological changes are reputed to be relatively resistant to borrowing. 3 In this, morphological changes are like changes affecting the basic vocabulary. Starosta (1994, 1995) accepted that phylogenies should be built from innovations. He had the insight that because of their resistance to borrowing, morphological innovations are choice phylogenetic material. He built a phylogeny for Formosan languages based exclusively on morphological arguments. He accepted that Taiwan is the Austronesian homeland: he also accepted that MP subgroups with Amis (Harvey; Reid). He regarded Philippine morphosyntax as highly elaborate and assumed the history of morphosyntax between PAn and the Philippine subgroup to have been one of gradual complexification. He insisted that if Rukai, the Formosan language with the simplest morphosyntax, had lost morphological processes found in other languages of Taiwan or the Philippines, lexicalized traces of these processes should be found in Rukai. The directionality of morphological change is not easy to establish on general principles. Starosta took as his guiding idea the view that in the absence of lexicalized vestiges, the Formosan language with the simplest morphology -Rukai-must be the first to branch off. My approach is different. I accept that morphological complexity can decrease over time in a language, leaving only faint traces in the lexicon. Arguments from 'absence of vestiges' are a variant of argumenta e silentio (arguments from silence) and must be used with caution: absence of evidence is not evidence of absence. My guiding principle is the idea, established in my 2004, that the PAn homeland was in NW Taiwan and that the first languages to branch off were Pazeh and Saisiat (whether as a single subgroup or not). This is because they show none of the early An innovations in the numeral system, but have the long additive expressions that gave rise to the shortened numerals *pitu 'seven', *walu 'eight' and *siwa 'nine'. In general I regard morphological characters shared by Pazeh or Saisiat and at least one other Austronesian language not in contact with them as PAn. When applied to neutral focus markers, this yields the system of PAn focus markers in Table 1: non-actor focus markers patient V-en location V-an instrumental, beneficiary etc. Si-V actor focus marker mu-V/<um>V On the same grounds, PAn perfective aspect was marked by <in> infixed in the verb. and the second set as an innovation at the Pituish node of the tree, as Ross notes. The path by which the second set arose out of the first can be described as follows: first, the genitive singular *ni-Cia was reduced to *nCia: *s and *C would be hard to distinguish following *n, and *nCia was reinterpreted as *nsia. In an analogically-motivated change, the base was *Cia was levelled to *sia. Finally *nsia was reduced to *nia. This sequence of changes was completed in proto-Pituish. In Pituish languages outside of Atayal and Malayo-Polynesian, the history of third-person pronouns is one of replacement of the new set by demonstratives. These replacements took place independently in each branch, as shown by the fact that the third-person pronouns in these languages are non-cognate. 1.2. 
Loss of -en in perfective patient focus forms. There is no principle reason why we should expect an asymmetry in the way aspect and focus marking affixes combine in early An verb forms. Yet in the An world the perfective aspect marker (PERF) *<in> and patient focus marker (PF) *-en are found attached to the same verb stem only in four West coast Formosan languages: Saisiat, Pazeh, Thao and Siraya. Elsewhere such forms are not found. Here is a Saisiat example (Zeitoun et al. 1996): For Pazeh, Li and Tsuchida (2001: 28; 38 n. 17) state that "In most Formosan languages and western Austronesian languages, the perfective form of the patient-focus does not bear the focus affix -en, but the infix <in>. Pazih may bear only the the affix <in> (e.g. b<in>aket 'to have been beaten'), -en (e.g. 'to be beaten or to have been beaten'), or both affixes (e.g. b<in>aked-en 'to have been beaten') to indicate the perfective aspect of the Patient-focus." They do not provide sentence examples. hiza Verb forms with both <in> and -in (< *-en) are grammatical in Thao (Blust 2003:238). Examples: in-dahip-in 'was helped', in-fari-n 'was blown by the wind', lh-in-irik-in 'was poked or pierced ', sh-in-umshun-in 'was Siraya, an extinct language of the SW coast of Taiwan known through 17th-century missionary materials, has been studied by Adelaar (1997). Adelaar gives the past tense of undergoer-oriented verbs ('PF') as either ni-V or ni-V-ən, where past tense ni-is clearly the reflex of the PA *<in> (although it is prefixed to the verb stem, not infixed in it) and -ən is the reflex of *-ən. Examples of ni-V-ən from Adelaar (1997): ni-sulat-ən da PAST-write-PF ? 'It has been written...' ni-patimxa-ən tin ta vare vaung-appa PAST-punish-PF (by) him TM wind sea-also 'he rebuked the winds and the sea' Are we dealing with a retention from PAn or with an innovation by four western Formosan languages ? Under Blust's ten-branch phylogeny, an innovation is strongly indicated, since the four languages fall into two primary branches of An: a retention would take eight independent losses in order to account for the form's absence outside of the Western plains and Northwest Formosan branches. Predictably, Blust (1998), to whom the Siraya, Pazeh and Saisiat facts were moreover not available at the time, came strongly on the side of the innovation view, arguing that in Thao PF -in (< *-en) could have been added to PF-PERF forms originally without it, in order to avoid homonymic clash with AF-PERF forms where the AF marker had a zero allomorph: if it was not for -in, he argued, nothing would distinguish AF-PERF forms like d<in>uruk 'stabbed' and f<in>ariw 'bought' from the corresponding PF-PERF verbs. He noted that among Thao verb stems in perfective patient focus, the PF marker -in (< *-en) is optional in some and obligatory in others; he suggested there is a tendence for those in which it is obligatory to have a zero allomorph of the AF marker <um> (like d<in>uruk 'stabbed' and f<in>ariw 'bought'), and to be optional in verbs where the allomorph of the AF marker is not zero (like s-m-iraq 'to kiss', s-m-in-iraq 'kissed' whose PF-PERF form is either s-in-iraq or s-in-iraq-in); he warned that the correlation is not perfect. But the homonymic clash problem he outlined only exists under the view that -en is not original in Thao PERF-PF verbs. If loss of *-en is the innovation, Thao never lost it: it could have permitted zero allomorphs of <um> precisely because *-en was still there. 
Blust's discussion of this issue does not provide evidence that PF-PERF verb forms were without *-en in PAn; rather, it is a speculative account of why *-en could have been added to PF-PERF verbs in Thao assuming PF-PERF verbs were without that suffix in PAn. In fact, consideration of the Saisiat, Pazeh and Siraya facts clearly shows that avoidance of homonymic clash is not what accounts for the presence of *-en in Pazeh b<in>aked-en 'to have been hit', Saisiat in-tani-in 'stopped' and Siraya ni-kita-(ə)n 'was seen', for these verbs require full allomorphs of the AF marker in their respective languages: Pazeh m<in>u-baket 'to have hit' (AF-PERF); Saisiat m-in-tani 'stopped' (AF-PERF), Siraya ni-k<m>ita 'saw' (AF-Panel on Reconstruction of PAn morpho-syntax and implications for the An settlement on Taiwan 11ICAL -June 22-25, Aussois, France PAST). Because <in>V-en verbs are found in Saisiat and Pazeh as well as in two other languages -Thao and Siraya-, I regard them as part of PAn, and treat their absence in the rest of the An world as innovative. Judging from their low frequency of occurrence in Pazeh and Saisiat and their optionality in Siraya, it is likely that *-en was already optional in PAn when *<in> was present, in other words that loss of *-en in PF-PRF verbs was already in progress in PAn. The motivation for this change appears to have been simplification. Removing *-en allowed to decrease morphological marking on verbs forms which were presumably very frequent in discourse. Since the *<in> perfective marker only occurred in verbs marked for verbal focus, removal of *-en could be achieved without losing a distinction, by taking advantage of a gap in the system. 1.3. Extension of ki-prefixation to verb stems. In the following discussion, I rely in part on a draft paper by Stacy Teng and Elisabeth Zeitoun4 . There attention is drawn to a prefix *ki-that attaches to noun stems, deriving verbs with the broad meaning of "get N" (or "collect N", "harvest N", "cut N" etc.) in Saisiat, Kavalan, Rukai, Kanakanabu, Saaroa, Paiwan and Puyuma (as well as in Bunun where it assumes the specialized meaning of 'take off, remove'). Thao has a prefix kin-which serves the same function; it is unclear how it relates to *ki-. No Amis examples can be found in Fey (1986), and Teng and Zeitoun accordingly do not mention Amis among the languages having such forms; yet Zeng (1991) and Pourrias and Poinsot (ms) list some. Here are some illustrative word pairs: Saisiat kaehoey 'tree, wood, brushwood' vs. ki-kaehoey 'gather brushwood', Kavalan tamun 'vegetable' vs. qi-tamun 'pick vegetables', Kanakanavu tamemi 'sweet potato' vs. ki-tamemi 'gather sweet potatoes', Rukai (Mantauran) paiso 'money' vs. i-paiso 'earn money'5 Paiwan sudju 'sweetheart', ki-sudju 'go courting, look for a sweetheart', Puyuma daqiŋ 'a share' vs. ki-daqiŋ 'claim one's share' Amis6 runaŋ 'mud pool' vs. ki-runaŋ 'wallow in mud' (of water buffaloes). As expected of verbs which incorporate their object, denominal ki-verbs are one-argument intransitives: their unique argument is a nominative agent. Most cannot cooccur with any voice affixes. Teng and Zeitoun argue that derivation of denominative verbs by *ki-is a good candidate for PAn ancestry. This is likely from my point of view because this process is seen in Saisiat. The affix may have grammaticalized out of a verb meaning 'get, collect, gather'. A denominal verb *ki-pañay 'harvest rice' was probably part of PAn (Saisiat ki-pazay 'harvest rice', Paiwan ki-paday 'harvest rice'). 
Teng and Zeitoun describe a second construction involving the prefix *ki-, reflected in a more limited collection of languages. There the *ki- prefix attaches to a verb root to derive another verb which can be described as 'get V-ed': having reflexive or middle voice. This process is seen, with semantic variation, in Paiwan, Puyuma, Rukai, Bunun, Kavalan and, again, Amis.8 It is not seen in Saisiat, Pazeh, Siraya, Thao, Tsou, Kanakanabu and Saaroa. In Paiwan, Puyuma and Rukai one sees a further development to passive verbs derived from verbs by *ki-. Teng and Zeitoun warn that this development may involve contact or parallel innovation. The following examples are reproduced from Teng and Zeitoun:

Pass-beat Nom Kivi
'Kivi got beaten.'

As Teng and Zeitoun noted, the nominative argument or subject in these constructions is the verb's patient. This is in contrast to verbs derived by ki- out of nouns (see above). Moreover Teng and Zeitoun contrasted ki-passives in Puyuma and Paiwan with patient-focus constructions in the same languages, observing that the ki-passives (where X gets V-ed according to his/her wishes) are more volitional than the Patient-focus construction. In a 'tentative conclusion' they further suggested that the rise of ki-passives is what triggered the realignment of verbal morphology around an active/passive distinction in Rukai: "In Rukai, ki-V expanded so much that it came to replace the N[on-]A[ctor]V[oice] affixes while losing its volitional feature; the earlier NAV affixes were preserved in their nominalizing functions". While this conclusion was offered tentatively, it is attractive. Several Walu-Siwaish languages aside from Rukai have undergone wide-ranging morphosyntactic realignments: competition between verbs with ki- and NAF (Non-Actor Focus) constructions has the potential to explain why Puyuma ki-passives have replaced the old neutral NAF constructions with the affixes *-en, *Si-, *-an,9 while the NAF "projective" constructions with *-aw, *-ay and *-anay, and with them the entire NAF category, were able to maintain themselves. Ross (this panel) argues that Puyuma never had verbal uses of *-en, *Si- and *-an because no vestiges of verbs carrying these affixes can be found in that language. Here is a possible counter-example. Several languages of the northern Philippines (Isneg, Agta, Casiguran Dumagat etc.) have a verb reflecting *[qʔ]unik 'to climb'.10 Puyuma (Cauquelin) has a stem qunkun 'to jump over',11 eligible for agent and non-agent focus marking: munkun or muqunkun (AF), unkun-ai (PF), unkun-aw (LF).12 If the semantics are not judged too divergent,13 Puyuma unkun can derive from *qunik-en, the PF form of *qunik. I suggest that when neutral patient-focus constructions were abandoned for ki-verbs, the PF suffix of *qunik-en was incorporated into the stem, making it eligible for all focus constructions and moving stress to the right. *quniken then underwent unstressed vowel syncope to *qunken and eventually (q)unkun, the modern form. I speculate that vowel syncope helped the stem to survive in recognizable form. Verb stems having incorporated *-en normally had three syllables: this made them prone either to be discarded as too cumbersome, or to have their first syllable pruned off to make them disyllabic. Pruning would however have a side-effect: to make them hard to etymologize.14 Here we see how morphological simplification can fail to leave conspicuous vestiges in the lexicon.
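The proposed derivation can be laid out step by step. The short Python sketch below is only an illustration added here, not part of the argument: it replays the stages named in the text, and the labels for each step are mine.

```python
# Illustrative only: replay of the derivation proposed above for Puyuma unkun.
# The forms are those cited in the text; the step labels are my own glosses.
derivation = [
    ("*qunik-en", "PF form of *qunik 'to climb', with the PF suffix *-en"),
    ("*quniken",  "suffix incorporated into the stem; stress moves to the right"),
    ("*qunken",   "unstressed vowel syncope"),
    ("(q)unkun",  "modern Puyuma form (initial *q- lost in Cauquelin's variety, as noted in the text)"),
]
for form, step in derivation:
    print(f"{form:<10} {step}")
```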
Like Puyuma, Paiwan developed a ki-passive construction, but competition between NAF constructions and ki-passives did not lead to the loss of certain NAF markers as in Puyuma, or of the entire NAF category as in Rukai. Likewise, in Amis and Bunun, the entire NAF category is maintained. Amis, Bunun and PMP settled the competition in favour of the NAF constructions by adding prefixes like ma-, pa-, pi- and mi- before ki-verbs, thereby giving those verbs the grammatical properties attached to these prefixes. A gradation can be seen among the prefix-adding languages: Bunun retains many unprefixed ki-verbs, Amis only a few; the MP languages keep none.

Figure 2: Higher An phylogeny based on three morphological characters. (1): replacement of 3rd-person pronouns; (2) loss of *-en in perfective PF; (3) extension of ki-prefixation to verb roots.

This tree conflicts with the tree in Figure 1 in that the *enem isogloss in the numerals-based tree overlaps with the Loss of *-en isogloss in the morphology-based tree: thus Atayalic is above Thao and Siraya in the lexical-based tree, but below them in the morphology-based tree.15 The problem can be fixed by allowing *-en to be lost twice: once in Atayalic and another time in a node corresponding to Enemish in the numerals-based tree. This is not unrealistic for a character that was already optional in PAn. It is not surprising, in any case, that the two trees are not entirely compatible, given the likelihood of contact effects and of independent innovations. What matters is that they tell very similar stories: a PAn homeland on the northwest coast of Taiwan, full settlement of the west coast before southern Taiwan and finally the east coast are settled, and an origin of the MP migration in the East coast languages. This view, now supported by seven16 lexical and three morphological innovations, is consistent with archaeological dates -earlier on the west coast than on the east coast of Taiwan-. Population genetics (Sanchez-Mazas, this panel) give a similar picture.

Why, then, do Gray, Drummond and Greenhill (2009), who work with the basic vocabulary, find a star-like phylogeny -although, unlike in Blust's and like in mine, PMP appears as coordinate with Paiwan-? This is undoubtedly because of their choice of both Old Chinese and Tai-Kadai as outgroups. Probably no Austronesianist thinks that both Tai-Kadai and Chinese are simultaneously related to An and exterior to it. Yet this is the basic assumption that underlies the analysis of Gray and his colleagues. By forcing Tai-Kadai, really a subgroup of An, to assume the role of an outgroup (that is, a language family outside of Austronesian but related to it), they are forcing their Bayesian statistics algorithm to treat every item shared by Tai-Kadai and any Austronesian language as a retention from PAn, thus preventing it from making use of post-PAn innovations like the 'standard' numerals between 5 and 10 or the shifts in 1st- and 2nd-person pronouns.17 I predict that if Gray and his colleagues give up Tai-Kadai as an outgroup and allow it to place itself in the tree rooted in Old Chinese, Tai-Kadai will branch off the Austronesian tree near where Malayo-Polynesian branches off, and the phylogeny they will find for Formosan languages will be very similar to those in Figure 1 and Figure 2.
This paper is dedicated to the memory of Stanley Starosta.

1.1. Third-person pronouns replaced. Ross (2006:536-537) described two sets of third-person pronouns in Formosan languages: one, reflected in Pazeh and Saisiat, is formed on a base *Cia. The nominative third-person pronouns in Pazeh and Saisiat, sia and sia (singular and plural not distinguished), reflect that base. Singular and plural forms are distinguished in the genitive: singular ni-sia, plural n-asia in both languages, reflecting *ni-Cia (singular) and *ni-a-Cia (plural). The second set is reflected in proto-Atayal and in PMP: only the singular forms are cognate. The base form is *sia, the genitive *nia. In terms of the tree in Figure 1, the first set can be regarded as PAn

Table 1: PAn neutral focus markers
Notes

Thanks go to John Wolff and Lawrence Reid for useful discussion and information.
All phonological mergers are innovations.
I make a distinction between transfer of a feature by spread and by borrowing: I speak of borrowing when a feature which is already fixed in a language -not in the process of spreading- is transferred from that language into another.
This example shows that the process was still productive during the Spanish occupation of Taiwan.
4 The passive ki- in Rukai, Paiwan and Puyuma: borrowing, shared innovation or parallel development?
5 Shown me by E. Zeitoun in August 2005.
6 This example is from Zeng Siqi, Taiwan Ameisi-yu yufa [A Grammar of Amis of Taiwan].
7 See Sagart (2004) on the reconstruction of the PAn phoneme traditionally referred to as *j.
8 Zeng (1991:29) cites the pair ʔadiŋ 'keep off, fend off, shelter from' (遮擋) vs. ki-ʔadiŋ 'to be protected from the outside by some object' ('中間有東西擋住').
9 *-an was maintained in its nominalizing function.
10 Isneg ʔumuneʔ, Casiguran Dumagat ʔunek 'climb up a tree', Agta ʔimunek (Reid 1971). The last vowels in those forms reflect *i (Lawrence Reid, p.c., June 2009).
11 Listed by Cauquelin under unkun.
12 Examples from Cauquelin's dictionary: ku unkun-aw na gung (1S jump-over-LF the ox) 'I jump over the ox'; tu unkun-ai ku Da suan (3S jump-over-PF 1S the dog) 'The dog jumps over me'.
13 The variety of Puyuma investigated by Cauquelin loses *q word-initially. Initial q- in this word is preserved in one of the AF forms thanks to the mu- prefix. The other AF form munkun is analogically motivated. Semantically the main difference between 'jump over' and 'climb' is the ballistic character of the former.
14 Puyuma has a number of disyllabic verbs ending in -un and without cognates in other languages, which are candidates for incorporation of *-en and loss of the first syllable: repun 'to assemble, get together', compare Puy. reprep 'to swarm with, be infested with' (insects); kuLun 'to roll', compare root -kul 'curl, bend'; LuDun 'to sink', compare Isneg allad 'sink', Tiruray eled 'sink' (forms from ACD, under *qeled 'sink'; Blust mentions a root -led 'sink').
15 It is impossible to be certain that extinct Favorlang did not allow *-en in at least some perfective patient-focus verb forms.
16 The Enemish node is also supported by the displacement of *kawaS by *CawiN as 'year' (Sagart 2004).
17 Note that the choice of TK as an outgroup distorts the tree only above the point where TK really branches off: the non-Formosan part of the phylogeny in Gray, Drummond and Greenhill (2009) does not suffer from that problem.
00410000
en
[ "shs.langue" ]
2024/03/04 16:41:20
1999
https://hal.science/hal-00410000/file/NANCMELB_repair.pdf
NOTES ON NANCHANG DIALECT (draft)
Laurent Sagart
Melbourne

Table of Contents
1.1 Nanchang phonology and orthography for speaker XYZ
1.1.1 Tones
1.1.2 initials
1.1.3 medials
1.1.4 rhymes
1.2 glossary of function words and discourse markers in the texts
1.3 five Nanchang texts (speaker XYZ; recorded May-June 1999 in Nanchang)
1.3.1 Nanchang text No. 1: Nanchang weather
1.3.2 Nanchang text No. 2: Scholar and Maiden
1.4 notes on the lexicon
1.4.1 copula
1.4.2 pronouns
1.4.3 classifiers
1.4.4 etymologies
1.5 notes on the grammar: constructions in the 5 texts
1.5.1 passive
1.5.2 disposal
1.5.3 comparative
1.5.4 attributive/relative construction
1.5.5 complementation
1.5.6 question types
1.6 refs

speaker XYZ
Speaker XYZ was in his early fifties at the time of recording. He was born in Jiangxi and raised in Nanchang from childhood. He speaks Nanchang dialect, Mandarin and English. He is highly educated. These notes are entirely based on five texts recorded in Nanchang; see the transcriptions below.

Nanchang phonology and orthography for speaker XYZ
Speaker XYZ's pronunciation is close to that of the Nanchang dictionary.

1.1.1 Tones

Tone   speaker XYZ   Nanchang dictionary
T1     [42]          42
T2     [24]          24
T3     [213]         213
T5     [45]          45
T6     [31~21]       11
T7     [45ʔ]         5
T8     [2ʔ]          2

In my transcriptions tones are marked by a sub/super-script numeral at the right of the syllable: xxxx1, xxxx2, xxxx3, etc. In addition there is a qingsheng (T0), unmarked in the orthography, for destressed or unstressed syllables, whose rhyme is also typically reduced (VV or VC > V). These syllables are marked without a tone symbol in the transcription. According to Wei Gangqiang and Chen Changyi (1998), the actual contour of T0 depends on the contour of the preceding tonal syllable: it is mid-high [4] if the preceding syllable ends in a rise (T2, T3, T5) and mid-low [2] otherwise. I haven't checked this against speaker XYZ's pronunciation.
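The conditioning of the T0 contour just described is a simple two-way rule. The minimal Python sketch below is my own illustration, not part of the original notes: it only encodes Wei Gangqiang and Chen Changyi's generalization as stated above.

```python
# Minimal sketch of the reported T0 (qingsheng) contour rule, for illustration only:
# after a syllable whose tone ends in a rise (T2 [24], T3 [213], T5 [45]) a T0
# syllable is realized mid-high [4]; after any other tone it is realized mid-low [2].

RISE_FINAL_TONES = {"T2", "T3", "T5"}   # contours ending in a rise

def t0_contour(preceding_tone: str) -> str:
    """Surface pitch of a T0 syllable, given the tone of the preceding syllable."""
    return "[4] mid-high" if preceding_tone in RISE_FINAL_TONES else "[2] mid-low"

if __name__ == "__main__":
    for tone in ("T1", "T2", "T3", "T5", "T6", "T7", "T8"):
        print(tone, "->", t0_contour(tone))
```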
Destressing can be due to speech environment but some morphemes (typically grammatical morphemes such as KO NOM ) have lexicalized T0. When a lexicalized T0 open-vowelled word undergoes restressing, the outcome is T7, thus 到 tao5 > të > [restressing] teq7 ; 箇 下 ko3 ha6 'this moment, now' > ko3 ha (destressing) > ka (fusion) > 隔 kaq7 (restressing). initials These initials occur in the transcription : p ph/B m f t th/D l ts tsh/DZ s c ch/J ny x k kh/G ng h Aspirated stops are unstable and often show special variants in connected speech. In a few cases (IV :104), the closure part of an aspirated stops disappears in connected speech, and only the aspiration part remains. Much more common variants are the sounds written by voiced/Capitalized stop symbols. These are phonetically unaspirated, lenis, and often voiced variants of the aspirated stops in connected speech, typically : B = B, D= D (tongue flap), DZ= dz or z, J= Ô G= ƒ. They are very common, except in utterance-initial position, and following tone 7 (unsure about the rare tone 8). In these environments aspirated stops occur. Isolation syllables are utterances in themselves and therefore do not show voicing of aspirates, hence most descriptions of Nanchang do not mention any voiced stops in phonetic inventory. In fact these phonemes might be underlyingly voiced with voiceless aspirates as conditioned variants (utterance-initial ; perhaps stress ; following -q or -t). Correspondences [5.89 zhu dao shenmo shihou a ?]. Note : all three functions may be basically one, i.e. #2. 4. A DUR 啊 serves to derive verbs in durative aspect: V a V (ex : II :12 ; IV :123-124). This is probably from an earlier V 下 V, as in Yangjiang (Huang Borong : 205). Guangzhou has V 下 V 下 , with 下 ha 35 . 5. CHILAI INCH . 起來. V+CHILAI. Inchoative marker. 6. CL : classifier, see section crossref 7. HA 哈 sentence-final or after a pause, indicates resolve : refers to immediately preceding discourse, meaning something like, 'as for me, that's what I think/will do' IV :3, IV :69, IV :75 ; IV :147. Note : the Nanchang dictionary writes 口 +夏, sentence-final ; particle, expressing order. 8. KAQ NEW 隔 kaq7 utterance-initial, introduces a new development in a situation that has been previously described, a development which does not follow naturally from that situation, couldn't be deduced from it (many examples, e.g. II.13). 9. KAQ EPI 隔 Epistemic uses of the preceding (= Eng. 'now', argumentative : 'the foregoing being granted, now, a new question arises ; for instance in 'now, can we be sure that…') 10. KO ATT . 嘅 Attributive use of KO. Links a property to a head noun. The property is often expressed by a xingrongci (ie an intransitive verbs which can be preceded by an adv of degree) ; but it can also be a noun phrase denoting substance (tsiï3 tsaq7 kë ten1 lung5 'a lantern made of paper'), age (liong3 san1 suïii5 kë tsai3 'a child of 2 or 3'), subject matter (2.2.yiq 7 ko ^xiu 5 DZai 2 kë xieu 5 fa 6 'a story about a scholar'), shape (yiq kien yiq kien kë mifïn 'noodles in sticks') etc. 11. KO GEN 嘅 . With all kinds of NP dependents, whether or not expressing possession or part/whole relationship: includes in particular place (Ngonyi kë nyin 'people from Anyi'), and time (ku3tai5 kë thuq su nyin 'literati of old times'). 12. KO NOM 嘅 . Derives nominals. 2 :14. 'yiq7 ko nyu3 KO' « a woman », Also 5.162, 5.163 13. KO REL 嘅. VP+KO+NP. Attaches a VP to a NP. 
Wwith all kinds of VPs except those with bare xingrongci's (plus possible adv of degree), treated here as KO for one head noun with several attached VPs. 14. KO ADV . 嘅 . Derives adverbials. Equivalent to Mand DE1 地. 1.66.pïq 7 kien 1 he 3 ko peq 7 cin 1 yiq 7 yong 6 ko yiu 3 lon 3 chi 3 . 15. KO ASST . . 嘅 Clause+ KO ASST . Assertive use of KO. Equivalent to Mand. (shi)…de. Sometimes used with 是 sï 6 . Ex : I :15, IV :32, IV :65. 16. KO XIE 嘅 : yiu3+KO+N. Equiv of Mandarin de 0 in the sense of xie1 'some', between yiu3 and count noun (si5 kan1 'times', nyin5 'people') ; also of Cantonese di1 (yau4 di1 yan2). 'there are people'. Nanchang : 1.10 : yiu 3 ko sï 5 kan ; 5.186 yiu3 ke nyin5 a. The noun can be omitted, 3.81. 17. KO MIR 個 . 'Mirative' use of KO. Equivalent to Mand. ge 0 個 in one of its uses. 'as much/many as+quantification ; to the extent that as a result+result'. Typically used after TEQ DEG or TEQ RES (but I.11 without TEQ). Degree is quantified in III.135, but result (not quantified) : 'to the extent that+result' in I :15 ; I :45 ; III :121. KO MIR seems obligatory after TEQ DEG when the complement of degree is not a verbal expression (III.127,134,135) ; but also occurs when the comp. of degree is a verbal exp (I.15, 45) 2.28 ; 3.10 ; 3.37 ; 3.94 ; ; 3.117 ; 4.26 ; 4.150. 44 (1.25, 1.30, 1.47, 1.56, 1.58, 3.7, 3,35) 49. TEQ EXT 得 introduces complements of extent (1.13, 1.15, 1.26, 1.45, 5.191) 50. TEQ MAN 得 introduces complements of manner (2.17, 3.24, 3.35, 3.36, 3.61, 3.69, 4.43, 4.83, 5.155) 51. TEQ PREP 得 . A preposition, equivalent to Mand. zai 4 (maybe not different from TAO PREP , as both are phonetically të or tëq. III.77) 52. TSHAI PROG . 在. TSHAI+V. Marks verbs in progressive aspect. Equivalent to Mand zai 4 . 53. YEU CHR 要. YEU+V 'characteristic behaviour' , as Eng. 'will V', 'can be predicted to V' in « water will boil at 100 degrees Celsius ». See Bybee, Revere and Pagliuca (1994 :157). Ex : 1.56, 3.25, 3.27, 3.102. 4 :91 ; 5 :136 five Nanchang texts (speaker XYZ ; recorded May-June 1999 in Nanchang) • Line 1 : as much as possible, the usage of the Nanchang Dictionary is followed for Chinese characters. 2 new characters were created : A3C1 一; A3C2 乙 • Line 2 : follows 'outline of discourse transcription' of Du Bois, Schuetze-Coburn, Cumming and paolino (1993). Orthography is mine (see above for details). Symbols in bold are unexpected/irregular (for instance initial n in Nanchang in text 1, line 1 below is irreg., as Nanchang has merged n and l). • Line 3 : for glossing of grammatical words and discourse markers, see 'glossary' above. 現 在 大 家 生 活 要 好 得 多 7. xien 6 Dzai 6 Dai 6 ka 1 sen 1 foq 7 yew 5 hao 3 teq 7 to 1 . now everybody life want good TEQ DEG much 但 是 我 覺 得 窮 嘅 時 間 8. ... than 6 sï 6 ngo 3 cioq 7 teq 7 ^chiung 2 kë sï 5 kan 1 but 1SG feel poor KO REL time 許 個 味 道 啊 he 3 ko wïi 6 Dao 6 a that CL taste A 比 現 在 味 道 還 更 足 pi 3 xien 6 Dzai 6 wïi 6 Dao 6 hai 2 kien 5 ciuq 7 . compare now taste even more full 更 有 意 思 就 是 話 9. ... kien 5 yiu 3 yi 5 sï chiu 6 x[ï 6 ] wa 6 . more interesting that:is:to:say 你 看 我 們 小 時 間 吧 10. ... n 3 Gon 3 ngo 3 mïn xieu 3 sï 5 kan 1 ba [pa], 2SG look 2PL small time PA PRSS 也 冒 有 電 視 看 11. ... ya 3 mao 6 yiu 3 ^Dien 6 sï 6 Gon 3 , also NEG have television watch 也 冒 有 什 哩 卡 拉 OK 12. ... ya 3 m[ao 6 ] yiu 3 xiq 7 li ^Ga Gon 3 tao 5 nyuq 7 a=@@@@@@. 
NEG-PRFV see ------------------------------ ---------------------- ---------------------------- ------------------------------------- ------------------------- -------------------------------------------------------But a different construction, with LAQ7 (< 搦 MC nak 'hold' ) also found (unique ex.), with verb PA3 'give' : - ATT ------------------------------------------- 把 佢 煮 軟 了 就 可 以 5. ----------------- 把 佢 煮 5.85. pa3 cie 3 tsu 3 . PA DISP 3SG boil ---------------------- 隔 又 把 水 就 完 全 濾 乾 了 Local LAQ7 N PA3 X V 人 家 就  搦 糖 把 你 人 喫 --------------------- 但 是 我 覺 得 窮 嘅 時 間 . . ... than 6 sï 6 ngo 3 cioq 7 teq 7 ^chiung 2 kë sï 5 kan 1 but 1SG feel poor KO REL time                     細 伢 子 啊 平 時 穿 到 . . .                        你 就 話 玩 嘅 東 西 吧 . . ... n 3 chiu 6 wa 6 ^wan 5 kë tung 1 xi pa. 2SG then say play KO REL thing PA SUGG                         可 以 插 小  蠟 燭 嘅 地 方 . . kho 3 yi 3 tshaq 7 xieu 3 laq 7 tsuq 7 kë [ko] Di 6 fong. can insert small candle KO REL place                               我 小 嘅 時 間 啊 . . .. ngo 3 xieu 3 kë [ko] sï 5 kan 1 a, 1SG small KO REL time A                      跟 外 頭 嘅 大 人 玩 嘅 燈 籠 差 不 多 .1 1.                  箇 個 有 文 化 有 修 養 嘅 人 . . ko 3 ko yiu                   你 人 是 好 有 學 問 嘅 人 啊 . . n 3 len sï 6 = hao 3 yiu 3 = ^hoq 7 wïn 6 kë [ko] nyin 5 a, 2SG be very have knowledge KO REL person A                     冷 粉 就 是 拿 剛 才 話 嘅 . -------------------- 就 是 涼 拌 嘅 冷 粉 啊 . ----------------- 隔  成 了 許 個 又 濕 又 軟 .  現 在 味 道 還 更 足 pi 3 xien 6 Dzai 6 wïi 6 Dao 6 hai 2 kien 5 ciuq 7 . -------------------- - 你 人 許 個 米 呢 . ... n 3 [l]e[n h]e 3 ko mi 3 le, 2SG that CL rice LE ---------------------- 箇 個 舞  正 了 . 你  提 嘅 許 個 籃 子 好 累 人 家 吧 . . n 3 Dia 2 ko he 3 ko lan 5 tsï hao 3 luïi 6 nyin 5 ka pa, 2SG carry KO REL that CL basket very tire people PA PRSS 1.5.5 complementation complement of degree V-得 -Adv deg hao3-teq7-to1, hao3 teq7 hen, (1.25, 1.30, 1.47, 1.56, 1.58, 3.7, 3,35) 1.5.5.2 complement of extent V-得-NP let teq7 yeu5 sï3 loq7 teq7 nyin5 tu1 tshuq7 puq7 lieu3 mïn5 loq7 teq7 ko fong5 kan1 li tu1…sït7 let teq7 chi3 tu1 theu puq7 tshut7 lai5 kon1 teq7 ko tsuïi3pa …te teq7 ha6 lai chiaq7 teq7 sïn1 song pi3 kao leq7fo 1.5.5.3 complement of manner V-得-NP tsong3 teq7 chiu6 man5 kieq7 chi3 tshon1 teq7 hao3 so tshon1 teq7 hao3 tshon1 teq7 man5 hao3 wan5 teq7 hao3 kao1 xin wan5 teq7 man5 khuai3 fat long teq7 hao3 pao kuan teq7 hao3 kao3 teq7 hao3 tiaq7 tsï 1.5.5.4 potential No ex. for positive Negative ex : puq7 teq7 thin2 'cannot stop, won't stop'. Teq7 is probably full V here : 'get, get to' tshuq7 puq7 lieu3 mïn5 'cannot go out' khon3 puq7 tao5 nyuq7 'cannot see meat' 1.5.5.5 Resultative complements : 1.5.5.5.1 positive : VV khon3 tao5, tsaq7 tao5, chiaq7 won5, yen si3, tshao3 hao3, wu3 tsang5, tshao3 suq7 2.13, 2.19, 2.31, 3.53, 4.63, 4.142, 4.111. 4.160, 5. 144, 5.146, 5. 148 , 5.165, 5.171, 5.195, 5.199; 1.5.5.5.2 Mrs Xie (elicited materials) data collected Oct-December 1999 in Ormond, Melbourne. Donnees dans classeur noir. Mrs Xie's Nanchang phonology No systematic analysis of the phonology was attempted. The data were collected primarily for syntax (October-December 1999, in Ormond, Melbourne) Yet Mrs Xie's pronunciation differs significantly from 'standard' Nanchang (as reflected in the NCFYZD by Xiong). ; neg : mao6 V A-1-1 cie3 chiaq7 lë fan6, n3len chiaq7 lë mao6 ? 
ngo3 chiaq7 lieu (~lë)/ngo3 mao6 chiaq7 A-1-2 cie2 kong1 chiaq7 lieu yoq7 A-1-3 cin1nyen2, ngo3 chiu6 tso5 lieu liong3 chien1 fïn1 A-2-1 ngo3 chie3 lë liong3 fïi A-5-1 miang2 nyiq7 ko3 sïkan, cie3 chiu6 tao6 lë pïq7cin1 A-7-5 cie3 fïi lë hao3 thai6 kë cin, tshai2 pha2 song chie3 2.2.2.2 affirmative : same as perfective B-1-1 Q… ; A : chie3 lieu/ mao6 chie3 B-1-2 same (V is khun) B-2-1 same (V is yong) progressive 在 V Mandarin uses zai4 V or zhengzai4 V. Cant. V-kan3; Xiamen teq-V ; Nanchang also uses tshai6 V (or ts [h] Mandarin is -zhe (Anne says p. 72, Mandarin has the same markers for progressive and durative but that seems an error) ; SHT -kin3 (la khai pu miangkin ; phuikin chut mun) ; other Hakka dialects have V dao 到 ~倒 tao3 or tao5 cf. LZ p. 443 ('eat sitting'). Nanchang and basically all Gan dialects have tao 3/5 (some have T3, others T5, in both dialects) D-1-3 cie3 tshon tao ko t [h]ai yi, yiq7 tiaq7 tu1 puq7 lang3 D-2-1 cie3 xi3fon chi3 teq7/tao chiaq7 D-2-2 he3 tsaq7 nyin2 cin3 ts [h] F-1-1 xingatsï tshai khunkao, thoq7 len khuq7 chi3 lai2 (note : apparently tshai6 used here for a durative) F.2.1. cie3 wa6 chi3 sï6 lai2, chiu6 mao6 won2 mao6 lieu3 (冒 完 冒 了 ) F-3-1 cie3 kao1xin teq7 tshong1 chi3 ko1 lai2 2.2.2.7 instantive : 一 V (就) .... G-1-1 cie3 yiq7 chiaq7 chiu6 thu G-2-1 cie3 yiq7 tshuo tshon chiu6 theu2 fïn E-2-5 ko3 tsaq7 tung1xi thia2 puq7 chi3 lai2 can one say puq7 teq7 thia2 chilai? no ! 2.2.3.5.3 Adj : puq7 teq7 Adj/puq7 wei6 Adj E-5-1 miang nyit tsao3song puq7 teq7 kon1 E-5-2 fong5 tao5 li3theu2, puq7 teq7 set7/puq7 wei6 set7 2.2.3.5.4 optative E-7-1 miang nyiq7 ya3 puq7 wei6 xiong3 chiaq7 E-7-3 cie3 miang nyit puq7 yiq7Tin lai2 2.2.3.6 other aspects F-1-1 ngo3 hai2 mao6 chie3 kuo5 peq7cin F-1-3 n3len hai2 mao6 pha2 kuo5 tshao3 F-2-1 ngo3 mao6 chie3 kuo5 peq7cin F-2-5 he3 tsong1 pao5 n3len mao6 khon3kuo5 F-3-1 cie3 xienTsai mao6 ta3 maoyi1 F-3-3 cie3 mao6 tshai6 su theu2 Resume : mao6 used to negate verbs in perfective, experiential, progressive aspects ; verb yiu 3 'to have' ; passive passive 等 ten3 Yan Sen p. 77 of black notebook contributes a sentence : 車 子 等 斷 到 xx xx ten3 ton3 tao 'the car was prevented from passing' More examples in Nanchang Fangyan cidian, with construction patient ten3 agent VP. Seems to be the normal passive construction in town. ten3 grammaticalized from 'X waits for Y to V' to agent marker in passive constructions, then to passive marker (when patient is omitted, as in the above sentence). Unknown whether non-detrimental examples occur, as 'Clinton was elected president of the USA' passive te1 te1 cited in NCFYCD as a full verb meaning 'pull ; drag ; tear' ; one ex., in which te1 has a resultative complement pho 'broken', looks quasi-passive : yi1song te1 pho le 'the garment was torn from pulling on it'. te1 occurs as agent marker in passive constructions in Yangzizhou (Mrs Xie), always detrimental ('inflictive'). An other morpheme, te1 (not felt to be the same as the passive marker), has the meanings 'pull, drag, tear'. Perhaps no connection : te1 'passive marker' perhaps connects with teq7 得 as in a few Gan-Hakka dials, cf LZ :438 ; however loss of rusheng is unexplained and the fact that te1 is always inflictive argues against this ; alternatively a denasalized form of 等 ten3 ? 
A-2-4 ngo3 wa6 teq7 puq7 hao3, ya3 (也) te1 n3koli xieu5 'I spoke badly and made you laugh' A-3-5 kong1 mai3 kë nye2 te1 mao2 tho1 tseu3 le 'the fish I just bought was dragged away by the cat' C-5-2 tsu5 fai sï6, puq7 tho3 tsoq7 tshai2 kuai le 'it will be strange if one is not arrested doing bad things' A-10-2 n3 puq7 tho2 cie3 ma tshai2 kuai 'It's only strange that you weren't scolded by him' ngo3 tho2 lin3tao3 kuaq7 le yiq7 ha6 'I was criticized by the lingdao' C-5-1 cie3 to3 phoq7 le, mau6 tho2 faq7 xien 'he wasn't discovered' ngo3 na yong sï tao5 n3len tso5 'I'll give you a task to do' A-1-3 cie3 yeu5 wan yiq7 pi chien2 tao5 nyin2ka1 'he wanted to return a sum of money to someone' A-1-4 ngo3 yeu5 mai yiq7 tung fang2 tsï tao tshoq nyiq7 lai2 kë he3 tsaq7 nyin2. 'I want to sell a house to that person who came yesterday' A-1-5 cie3 su le liong3 phon chi tao ngo 'he lost two games of chess to me' A-1-7-2 n3 cia tiaq7 tsï chien2 tao ngo3 mai yen 'lend me a few dollars to buy cigarettes' A-4-1 n3 phao pei tsha2 tao kheq7 nyin2 'he steeped a cup of tea for the guest' C-1-1 ngo3 sung yiq7 pïn3 su1 tao cie 'I'll give a book to him (as a gift)' [slightly better than other version below : X sung Y N] C-2-3 n3 liu ci3 kë wïi tsï tao cie3 mïn 'leave a few seats for them' C-3-1 cie3 theu1 le yiq7 khuai3 chien2 tao5 ngo3 yung 'he stole a dollar for me to use' construction X cia Y N A-1-7-1 n3len cia le cie3 san1 tsaq7 wo 'you lent him three pots' A-1-6 cie3 kau le ngo tsai3 tiaq7 pïn3 sï 'he taught my son some skills' A-5-2 cieu5 cie3 fïitaq n3len yiq7 kë wïn thi 'make him answer you a question' C-1-1 ngo3 sung cie yiq7 pïn3 su1 'I'll give a book to him (as a gift)' [slightlly less coll than other version as X sung N tao Y] construction X sung tao Y N A-1-2 n3 sung tao5 na3 ko yiq7 phin ciong1 yiu5 'who did you give a bottle of soy sauce to ?' 2.2.6 causative nyong6 讓 permissive The main construction with Mrs Xie is X nyong6 Y V, but nyong 讓 still a full verb meaning 'to let, allow'. The NCFYCD even allows some passive uses : chien2 nyong6 nyin2ka1 theu1 phoq7 le.'the money was stolen by someone' A1-san-c lao3 wong2 ko 'Lao Wang's' No haplology, no double kë : A1san-e2, A1-san-e5 modifying clause Mod kë N can be dropped with inalienable possession B1-san-a54 but not with alienable B1-san-a11 PHONOLOGY AND ORTHOGRAPHY FOR SPEAKER XYZ............................................... GLOSSARY OF FUNCTION WORDS AND DISCOURSE MARKERS IN THE TEXTS .................................. 1.3 FIVE NANCHANG TEXTS (SPEAKER XYZ ; RECORDED MAY-JUNE 1999 IN NANCHANG) ............. 2.: Scholar and Maiden ....................................................................... 1.3.3 Nanchang text No. : New Year......................................................................................... 1.3.4 Nanchang text No. 4: Turnips ............................................................................................. 1.3.5 Nanchang text No. 5: Noodles ........................................................................................... 1.4 NOTES ON THE LEXICON ................................................................................................................. NOTES ON THE GRAMMAR : CONSTRUCTIONS IN THE 5 TEXTS...................................................... REFS............................................................................................................................................... 
tao D-4-3 tshong2 song khun tao kë xi5ngatsïThis shared innovation with Hakka probably on the ground of a former V-dao construction indicating successful outcome (frozen in zhidao 'to know'), evolving to 'resulting state' (for example : 拿 到 'you catch this' > 'you hold this' Lamarre cites a Yue dialect, Xinyi (W of Guangdong) where 倒 tou 35 (upper shang) serves as extent/manner complement marker, but also as resultative/perfective/durative marker. She cites a Tang 1986 Lamarre also cites a description of Daye 大 冶 by a Wang 1994 where ta~tç < 到 serves for result complements ; ta is also a durative marker chie3 kuo peq7cin1 mao6Cannot say : yiu3 mao6 chie3 kuo5 peq7cin ? E-1-3 cie3 tao5 kuo hao3 to1 thi6fong E-1-5 cie3 khon3 kuo ko3 tshong xi E-1-6 ngo3 chiaq7 kuo ko3 tsung3 nyen2 kao, hao3 thien2 2.2.2.6 inchoative : V-起 來 . X laq7/wan/mai N pa3 Y VP used by YS, see text. Not accepted by Mrs Xie. 2.2.5.2 construction X laq7/wan/mai N tao Y VP na/wan/mai is full verb ; tao is prep. Examples : mma na piang3 tao cie3 mïn chiaq7 'mother gives them cookies to eat' A-1-1-1 ngo3 na song1 khuai3 tsï tao n3len 'I'll give you a pair of chopsticks' A-1-1-4 glossary of function words and discourse markers in the texts In addition a vowel 'ë' serves to note a reduced central vowel in my transcription. Correspondences with NC dictionary : XYZ NC dict. a a ai ai ao au an an ang aN aq/t a/, at with Nanchang Dictionary are e e XYZ eu NC dict. Eu p en p En ph/B eq/t p' E/, Et m oe P m f o o f t on t on th/D ong t' çN l oq l ç/ ts ï tsh/DZ ts' ts ˆ; apical vowel ïi ˆi s c ïu ïn s ˆu t˛ ˆn ch/J ny ïq/ït x i iu t˛' ˆt, ut ȵ i ˛ iu k in k in kh/G ng iq/it ü k' i/, it N y h ün h yn üt 1.1.3 medials yt u u Speaker XYZ : NC dictionary : un un ung uN i I ü y u u (word-initially : y yü w) ut/uq ut, u/ symbols between brackets are word-initial variants of the first three. 1.1.4 rhymes a ai 1.2 1. A SUPP 啊. CLAUSE1+A+CLAUSE2. Indicates supposition : translatable as ao an ang aq/t e eu en « suppose that CLAUSE1, then CLAUSE2 », « when CLAUSE1,… » or eq/t oe « if CLAUSE1,…» Ex : esp. in I :18,33,69. o 2. A on ong oq ï ïi ïu ïn ïq/ït i iu in iq/it ü ün üt u un ung uq SUSP 啊 X+A : suspensive : « wait for predicate or main clause». Ex. : [suoyi a], 3. A TOP 啊 X+A. Topic Marker : Ex. : [ko3 ko nan5 Dzong1 sï6 a], . 18. KUO EXP . 過.Verb suffix, Marks verbs in experiential aspect. Similar to Mand guo 4 . 19. LA CHG 啦. Probably a fusion of LIEU CHG +A, q.v. Ex. : 4.50 'they start to eat'. 20. LA OBV 啦 ( low qingsheng). YS says : 'it's obvious, of course, no doubt, one can only agree with it'. This kind of obviousness is obvious to everyone, unlike MA OBV , qv. Ex. : IV:10. IV :25, IV :27. 5.66 (but pitch is mid). 21. LA UNS 啦 (high qingsheng). Utterance final, gives preceding clause a meaning of uncertainty. Apparently a fusion of LA OBV + A 'question marker in yes-no questions' (no occurrence in our 5 texts, but common in Nanchang, cf. elicited sentences, crossref). The resulting meaning is 'is CLAUSE really obvious ? I doubt it'. 'it ought to be so, not certain ; unsure ; consulting ; = right ? ». Only one ex. IV:44. 22. LA LOOK . 喇. Utterance-initial : attracts the hearer's attention to an element in the situation, similar to Eng. 'look !'. One ex., 2.46. 23. LAI INC 來. change into the state, in V-lai5 : 5.52 : sai5 kon1 lai5 yi3heu6 'after it has become dry'. Only ex. Perhaps an error for LIEU PRFV . 24. LAQ DISP 搦. Marker of disposal construction. Only one ex. 
main verb is 'give' (pa 3 ), and direct objects ('sweets') is indefinite. See PA DISP . 25. LAQ INST 搦. introduces an instrumental complement. 3 :48 ; 5 :21 ; 26. LE 呢 . X+LE. Speaker presents preceding discourse as premise for following discourse. Similar to Mand ne 0 呢. 27. LIEU CHG 了: sentence final, marks change of state : IV :9 28. LIEU PRFV 了: verb-final, marks perfective aspect. . 32. LO INST 咯. Phrase-final : P+LO, P+LO, P+LO…P+LO. marks each of a series of consecutive, parallel phrases. Text 1 ; also 5.139-141. 33. LO CONSULT . Sentence-final, presses hearer to agree. 2.29 ; 2.38. 34. MA sentence-final, after 郎 long5 'how ?'. This MA is equivalent to 呢 . ADV 哦 sentence-final, low qingsheng. Strong advice. IV :70 40. O REG 哦. IV :158. Sentence final : indicates regret on the part of the speaker : 'alas' !. Only one ex. See also LO OBV . 41. ORD. 第 thi 6 . Ordinal marker. 42. P 'particle' 43. PA PRSS 吧. Sentence-final, pressing hearer to agree, comply, accept a new topic, an estimate or approximation, Ex:5.151. 35. MA OBV 嘛 Sentence-final, presents the situation as obvious to the speaker, though not necessarily to everybody else = patronizing (unlike LA OBV , qv). Only one ex. : III.112. 36. NEG : the general negation for verbs not in experiential or prefective aspect :mao 6 29. LIEU PROG 了: verb-final, marks progressive aspect : IV : 43, 95 (on path of development perfective > progressive: Mand zhe) 30. LIEU POT . 了: in negative potential constructions. One ex. : 1.14. 31. LO OBV 咯. sentence final, sentence meaning is 'regrettably obvious'. Perhaps a fusion of LA OBV + O REG . IV :120, 121,136冒 for 'have', 不 pïq 7 or puq 7 for all other verbs. 37. NEG-EXP. 冒 mao 6. Negation for verbs in experiential aspect. V :133 38. NEG-PRFV. 冒 mao 6 . Negation for verbs in perfective aspect. IV :102 39. O . PA DISP . 把. The main disposal marker. Equivalent to mandarin ba 3 把. 45. PREP. preposition 46. TAO DUR 到 verb suffix, marks durative aspect. III.51 ; III.41 ; IV :64 ; IV :162 47. TAO RES 到, a resultative complement, marks successful outcome. II.13, II.19 ; 2.31 ; III.53; IV.63, IV. 111, 4.142, IV :160 ; 48. TEQ DEG 得 introduces complements of degree Translation 1. [I'll] tell a funny story, ha. 2. A scholar story. 3. 'Scholar'. 4. You know [what] a scholar [is], right ? 5. Eh. 6. A scho-, a literate person in ancient times, 7. that's [what] a scholar [is]. 8. Once upon a time there was a scholar, 9. Who carried a basket. 10. Carried a very big basket, 11. [and] went down the streets. 12. He was walking. 13. At that point he saw, ahead of him, 14. A female person. 15. A young female person. 16. also carrying a basket. 17. That female person, well, she was quite pretty. 18. This sai-, 19. This scholar see[ing] that the person was quite pretty, 20. then, 21. With hidden intentions, 22. He then, 23. started talking with the person, 24. He chats her up. 25. Eh. ((HIGH PITCH, PUTS SELF IN ROLE OF SCHOLAR)) 26. He=, 27. this young lady, 28. This bag you're carrying is very tiring, isn't it, 29. I'll carry it for you, OK ? ((LOWER PITCH, RESUMES NARRATOR ROLE)). 30. This, 31. This young female person, seeing he [acted] thus, 32. Did not answer him. 33. Now, [as] she was not answering, 34. He stuck to her [side and went on] talking. 35. Eh. 36. He said. 37. I'll carry it for you, yes ? 38. I'll carry it for you, OK ? 39. Eh, 40. I say, 41. You, 42. You, 43. I am carrying a big basket, 44. You carry a small basket, 45. Let's put them together [one into the other], 46. Look, 47, I, 48. I ca-, 49. 
I can carry it with one hand. 50. 1.3.1 Nanchang text No. 1.: Nanchang weather (Told by XYZ. Recorded by Laurent Sagart. Nanchang, June 1999) 南 昌 箇 個 地 方 啊 1. na=n 5 DZong 1 ko 3 ko Di 6 fong a. Nanchang this CL place A TOP 別 什 哩 都 好 2. phieq7 xiq 7 li tu 1 hao 3 . other-things all good 就 是 箇 個 天 氣 3. chiu 6 x[ï 6 ] ko 3 ko ^thie=n 1 chi, then be this CL weather 硬 是 不 好 . 4. nga=ng 6 x[ï 6 ] pïq 7 hao 3 . decidedly be NEG good 難 得 一 年 到 頭 5. lan 5 teq 7 !yit 7 nyen 5 tao 5 Deu 2 . seldom in:one:year 難 得 有 幾 日 子 好 6. lan 5 teq 7 yiu 3 ci 3 nyiq 7 tsï hao 3 thien 1 chi. 天 氣 seldom have several day good weather 首 先 春 天 咯 7. sïu 3 xien 1 DZun 1 thien 1 lo. first springtime LO INST 箇 春 天 呢 8. ko 3 tshun 1 thien 1 le, this springtime LE 老 是 落 雨 9. lao 3 sï 6 loq 8 yu 3 . always rain (v.) 硬 有 嘅 時 間 10. ngang 6 yiu 3 ko sï 5 kan, really have KO XIE times 一 落 就 落 個 三 日 四 11. yit 7 loq 7 chiu 6 loq 7 ko san 1 nyit sï 5 as :soon:as fall then fall KO MIR three day four day 日 nyit 五 日 七 日 八 日 十 來 日 ng 3 nyit 7 chit 7 nyit 7 paq 7 nyit 7 sïq 8 lai 5 !nyit 7 . five day seven day eight day ten and:more day 日 日 落 12. ^nyit 7 nyit 7 loq 7 , everyday fall 不 得 停 13. pïq 7 teq 7 thin 2 . NEG get stop 落 得 人 都 出 不 了 門 14. loq 7 teq 7 nyin 5 tu 1 DZuq 7 pïq 7 lieu 3 mïn 5 , fall TEQ EXT people all go:out NEG LIEU POT 落 得 個 房 間 裏 呢 到 處 15. loq 7 teq 7 ko fo=ng 5 kan li le tao 5 DZu fall TEQ EXT KO MIR apartment inside LE everywhere 是 潮 潮 濕 嘅 xi 6 DZeu 2 -DZeu 2 sït 7 ko. be [TRUNC. W.] wet KO ASST 摸 到 箇 個 桌 子 辣 濕 16. mo 1 të [tao 5 ] ko 3 ko tsoq 7 tsï lat 8 sït 7 , feel TAO DUR this CL table very wet 摸 到 凳 子 辣 濕 17. mo 1 të [tao 5 ] ten 5 tsï lat 8 sït 7 . feel TAO DUR stool very wet 你 打 開 箱 子 看 啊 18. n 3 ta 3 Gai 1 xiong 1 tsï khon 5 na, 2SG open trunk see A SUPP 你 人 嘅 被 窩 也 發 了 霉 19. n 3 len ko Bïi 6 wo ya 3 ^faq 7 lë [lieu 3 ] ^mïi 5 =, 2SG KO GEN quilt also grow LIEU PRFV mould 棉 襖 也 發 了 霉 20. mien 5 ngao 3 ya 3 ^faq 7 lë [lieu 3 ] mïi 5 , padded:jacket also grow LIEU PRFV mould 什 哩 都 發 了 霉 21. xiq 7 li tu 1 faq 7 lë [lieu 3 ] mïi 5 , everything all grow LIEU PRFV mould 長 了 毛 22. tsong 3 lë [lieu 3 ] mao 1 . 熱 天 就 熱 得 要 死 25. let 7 thien 1 chiu 6 let 7 teq 7 yeu 5 sï 3 . summer then hot TEQ DEG want die 硬 熱 得 氣 都 敨 不 出 來 26. ngang 6 let 7 teq 7 chi 5 tu 1 Deu 3 pïq 7 tshuq 7 lai. indeed hot TEQ RES breath all breathe NEG come:out 三 十 七 度 三 十 八 度 27. san 1 sïq 8 chiq 7 Du 6 san 1 sïq 8 paq 7 Du 6 thirty-seven degrees thirty-eight degrees 算 好 個 son 5 hao 3 ko. count:as good KO ASST 有 嘅 時 間 就 三 十 九 度 28. yiu 3 ko sï 5 kan 1 chiu 6 san 1 sïq 8 ciu 3 Du 6 have KO XIE times then thirty-nine degrees 四 十 度 ^sï 5 sïq 8 Du 6 . forty degrees 你 像 舊 年 子 29. n 3 [ch]iong 6 chiu 6 nyen 5 tsï, like last:year 就 熱 得 要 死 30. chiu 6 let 7 teq 7 yeu 5 sï 3 . then hot TEQ DEG want die 舊 年 子 熱 到 四 十 度 31. chiu 6 nyen 5 tsï let 7 tao 5 sï 5 sïq Du 6 . last:year hot reach 40 degrees 漲 脫 大 嘅 水 34. ... tsong 3 thoq 7 thai 6 ko suïi 3 , swell very big KO ATT water 你 像 舊 年 子 咯 35. n 3 [ch]iong 6 chiu 6 nyen 5 tsï lo. like last:year LO INST 江 西 漲 水 全 國 有 名 36. ciong 1 xi 1 tsong 3 suïi 3 .. chyon 2 kueq 7 yiu 3 miang 5 , Jiangxi floods countrywide have fame 全 世 界 都 有 了 名 37. chyon 2 sï 5 kai 5 tu 1 yiu 3 lë [lieu 3 ] miang 5 . all world even have LIEU PRFV fame 舊 年 子 差 滴 子 出 大 問 題 38. chiu 6 nyen 5 tsï .. tsha 1 tiaq 7 tsï tshuq 7 thai 6 wïn 6 thi 2 , last:year almost happen big problem 啊 39. a, A SUSP 防 洪 搶 險 40. fong 5 fung 2 chiong 3 xien 3 guard:against flood take:emergency:measures:against danger ((A SLOGAN)) [1 :23] 秋 天 裏 吧 41. 
^chiu 1 thien 1 li pa, autumn inside PA 巴 乾 44. ^pa 1 kon 1 . ((pa 1 SPECIAL VOICE QUALITY VENTRICULAR ? PHARYNGEALIZED ? )) very dry 有 嘅 時 間 乾 得 呢 45. yiu 3 ko sï 5 kan 1 kon 1 teq 7 le have KO XIE times dry TEQ RES LE 乾 得 個 嘴 巴皮 子 嘅 皮 kon 1 teq ko tsuïi 3 pa bi 2 tsï ko phi 2 dry TEQ RES KO MIR lips KO GEN skin 都 可 以 □ 得 下 來 tu 1 kho 3 yi 3 ^te 1 teq 7 ha 6 lai 5 . even can tear TEQ RES fall:off 許 乾 燥 46. he 3 ^kon 1 tsao 5 . that dry (that's how dry it gets) 冬 天 吧 就 冷 得 要 死 47. tu=ng 1 dien 1 pa chiu 6 lang 3 teq 7 yeu 5 sï 3 . winter PA then cold TEQ DEG want die 又 冷 又 潮 濕 48. yiu 6 lang 3 yiu 6 dzeu 2 sït 7 . and cold and wet 我 們 箇 裏 話 起 來 49. ngo 3 mïn ko 3 li wa 6 chi 3 lai ((TRUNCATED)) 1PL here say CHILAI INCH □ fu X 52. pi 3 lu 5 liang 5 ha 6 ng 3 Du 6 lo for:instance below:zero five degrees LO INST 六 度 咯 liuq 8 Du 6 lo. Six degrees LO INST 主 要 是 什 哩 呢 53. tsu 3 yeu 5 sï 6 xiq 7 li le principally be what LE 潮 濕 tsheu 2 sït 7 . wet 所 以 北 京 啊 54. so 3 yi peq 7 cin 1 na, that:is:why Beijing A TOP 佢 零 下 十 多 度 55. cie 3 liang 5 ha 6 sïq 8 to 1 du 6 . 3SG below:zero ten more degrees 佢 比 我 們 箇 裏 感 覺 56. cie 3 pi 3 ngo 3 mïn ko 3 li kon 3 cioq 7 3SG compare:with 1PL here feel 要 好 得 多 yeu 5 hao 3 teq 7 to 1 . YEU CHR better TEQ DEG much 第 一 佢 乾 燥 57. thi 6 yiq 7 cie 3 kon 1 tsao 5 . ORD one 3SG dry 再 一 個 呢 60. tsai 5 yiq 7 kë [ko] le again one CL LE 佢 房 間 裏 有 暖 氣 61. cie 3 fong 5 kan 1 li yiu 3 non 3 chi 3 . 3SG apartment inside have heating 我 們 箇 裏 呢 62. ngo 3 mïn ko 3 li le, 1PL here LE 又 冰 冷 辣 濕 落 雨 落 雪 63. yiu 6 pin 1 lang 3 lat 8 sït 7 loq 8 yu 3 loq 8 xuet 7 . and cold very wet fall rain fall snow 房 間 裏 又 是 冒 有 64. fong 5 kan 1 li yiu 6 sï 6 mao 6 yiu 3= = = =, apartment inside again be NEG have 也 冒 有 冒 有 65. ya 3 ma[w] 6 y[iu 3 ] ma[w] 6 y[iu 3 ] also NEG have NEG have 不 跟 許 個 北 京 一 樣 嘅 有 暖 氣 66. pïq 7 kien 1 he 3 ko peq 7 cin 1 yiq 7 yong 6 ko yiu 3 lon 3 chi 3 . NEG like that CL Beijing same KO ADV ? have heating 所 以 呢 67. so 3 yi 3 le, that:is:why LE 冬 天 裏 啊 68. ^tung 1 thien 1 li a, 還 更 熱 和 些 子 hai 2 kien 5 leq 7 fo xiet 7 tsï. still more warm a:little 在 房 間 裏 啊 70. tshai 6 fong 5 kan 1 li a, at apartment inside A TOP 冰 冷 71. pin 1 lang 3 , ice-cold 更 冷 72. kien 5 lang 3 more cold 所 以 啊 73. so 3 yi 3 a. 箇 個 南 昌 市 啊 74. ko 3 ko nan 5 DZong 1 sï 6 a. this CL Nanchang city A TOP 一 年 到 頭 冒 什 哩 好 天 氣 75. yiq 7 nyen 5 tao 5 Deu 2 mao 6 xiq 7 li hao 3 Dien 1 chi in:one:year NEG any good weather cold, 72. colder. 73. That is why, 74. here in Nanchang city, 75. All through the year, there no [period of] good weather. (Told by XYZ. Recorded by Laurent Sagart. Nanchang, June 1999) 話 一 個 笑 話 哈 1. wa 6 yiq 7 kë [ko] xieu 5 fa 6 say one CL funny:story HA 一 個 秀 才 嘅 笑 話 2. yiq 7 ko ^xiu 5 DZai 2 kë [ko] one CL scholar KO ATT funny:story 秀 才 啊 3. xiu 5 DZai 2 a. 一 個 女 嘅 14. yiq 7 kë [ko] nyü 3 kë [ko]. one CL female KO NOM 就 跟 人 家 □ 啊 24. chiu 6 kien 1 nyin 5 ka 1 tse 3 a. then with person flirts A ? ? □ 35. eh. P 喇 46. la 6 , LA LOOK 調 戲 婦 女 57. Dieu 2 xi 5 bothering woman 就 兩 才 并 一 才 1SG in:my:heart think A fu 6 nyü 3 , 68. kuon 1 DZai 2 tsong 1 coffin 4. ngo 3 xin 1 li xiong 3 a, hold:within scholar 我 心 理 想 啊 xiu 1 DZai 2 , xieu 5 fa 6 . 隔 看 到 前 頭 呢 13. kaq 7 Gon 3 të [tao] Jien 2 Dë [theu 2 ] le, KAQ NEW see TAO RES ahead LE 跟 人 家 搭 腔 23. kien 1 nyin 5 ka 1 taq 7 chio=ng 1 , with person start a conversation □ □ 佢 就 粘 到 人 家 話 事 34. <X X> cie 3 chiu 6 nyen 5 tao 5 nyin 5 ka wa 6 sï 6 . 3SG then stick PREP person talk 我 們 裝 得 一 起 45. ngo 3 mïn tsong 1 teq 7 yiq 7 chi 3 . 
1PL install PREP together 實 際 上 就 是 56. in:fact then be 管 材 裝 秀 才 1SG sït 7 ci song 6 chiu 6 xi 6 =, 67. ^xiu 5 DZai 2 ya 3 xi 6 scholar also be 3. ngo 3 , cai 2 我 tshai 2 , ha. 走 啊 走 12. tseu 3 a tseu 3 walk A PROG walk 就 , 22. chiu= 6 , then 隔 □ □ 不 答 佢 呢 33. kaq 7 <XX> puq 7 taq 7 cie 3 le, KAQ NEW NEG answer 3SG LE 妳 搦 隻 小 籃 子 44. n 3 laq 7 tsaq 7 xieu 3 lan 5 tsï, 2SG take CL small basket 箇 箇 箇 就 是 55. ko 3 ko 3 ko 3 this this this then be 秀 才 也 是 才 1SG chiu 6 x[ï 6 ], 66. ^kuon 1 DZai 2 ya 3 xi 6 coffin also be 2. ... ngo 3 , cai 2 我 tshai 2 , 1.3.2 Nanchang text No. 2.: Scholar and Maiden 到 街 上 去 11. tao 5 ^kai 1 song chie to street on go 不 懷 好 意 21. puq 7 fai 5 hao 3 yi 5 , NEG hold good:intentions 不 答 佢 32. puq 7 taq 7 cie 3 . NEG answer 3SG 我 搦 了 一 隻 大 籃 子 43. ngo 3 laq 7 lieu yiq 7 tsaq 7 Dai 6 lan 5 tsï, 1SG take LIEU PROG one CL big basket 佢 就 講 話 嘅 54. cie 3 chiu 6 kong 3 wa 6 3SG then this:way say KO ASST 管 材 也 是 才 this:way if speak:about new:year A ko 65. cie 3 chiu 6 wa 6 kë= [ko], 3SG then say 1. ko 3 yeu 5 wa 6 të [tao 5 ] kuo 5 nyen 5 a. KO ASST 箇 要 話 到 過 年 啊 that:is:why A SUSP mouldy, 22. has grown hairy. 23. In the springtime. 24. Let's now consider summer. 25. In the summer it's hot to die of. 26. Indeed so hot that [one] cannot breathe. 27. Thirty-seven degrees, thirty-eight degrees counts as pleasant, 28. At times it's reached forty degrees. 32. In the summer in addition there often are floods. 33. [when there are] inundations, 34. they are very big inundations, 35. Like, last year, 36. The floods in Jiangxi were publicized everywhere in China, 37. They were even publicized everywhere in the world. 38. Last year there almost happened a big problem, 39. [what], 40. [during the] fight against the flood. 41. Let's now consider the automn. 42. [In] the automn [it] is reasonably good. 43. But then it's very dry. 44. Extremely dry. 45. At times so dry that [one] can tear the skin off [one's] lips. 46. That dry. 47. Let's now consider the winter, then it's bitterly cold. 48. Both cold and wet. 49. Here we say [PART MISSING…fu]. 50. The temperature in the winter .51. cannot be considered very low, 52. For instance five below zero, six below zero. 53. The worst is the, what, the humidity. 54. That is why [in] Beijing, 55. [when the temperature there is] ten below zero, 56. It still feels much better there than [for] us here. 57. The first thing is, there it's dry. 58. The winter weather is very good. 59. It's all sunny days. 60. Another thing is, 61. There the apartments have heating. 62. [With] us here, 63. It's bitterly cold, very wet, rainy and snowy. 64. In the apartments there is no…65. There is no…66. It's not heated like in Beijing . 67. That is why, 68. In the winter, 69. [if] you're staying indoors, you might just as well go outside, it's a little warmer. 70. Inside the apartment, 71. It's ice-就 是 7. then be 從 前 有 個 秀 才 呢 8. in the past have CL scholar 提 隻 籃 子 9. carry CL 提 隻 脫 大 嘅 籃 子 10. thia 2 tsaq 7 ^Doq 7 Dai 6 kë [ko] lan 5 tsï carry CL very big KO ATTR basket ((RESUMES NARRATOR ROLE)) 佢 就 話 嘅 A then this:way 2SG a, 20. chiu= 6 , this CL young KO ATTR woman KO NOM see TAO RES 3SG 42. n 3 len, then two basket combine one basket 3SG then point:to that CL coffin say KO ASST 啊 就 箇 隻 年 輕 嘅 女 嘅 看 到 佢 31. ko 3 tsaq 7 nyen 5 chiang 1 kë [ko] nyü 3 kë [ko] Gon 3 të [tao 5 ] cie 3 kong 3 , 你 人 53. chiu 6 liong 3 lan 5 phin 6 yiq 7 lan 5 . 64. cie 3 chiu 6 tsï 3 të [tao] he 3 kë [ko]. 
kë [ko] kuon 1 DZai 2 wa 6 講 就 兩 籃 并 一 籃 佢 就 指 到 許 個 管 材 話 嘅 basket that CL scholar look TAO RES person quite pretty LE 2SG thia 2 tsaq 7 lan 5 tsï. 箇 隻 秀 才 看 到 人 家 蠻 可 氣 呢 19. ko 3 tsaq 7 xiu 5 DZai 2 Gon 3 të [tao] nyin 5 ka 1 man 5 Gieq 7 chi le, 箇 隻 30. ((LOWER PITCH, RESUMES NARRATOR ROLE)) ko 3 tsaq 7 this CL 你 人 41. n 3 len, 大 籃 裝 小 52. big basket install small basket then suddenly has an idea thai 6 lan 5 tsong 1 xieu 3 lan 5 , 63. chiu 6 lin 5 ci 1 yiq 7 Dung 6 籃 就 靈 機 一 動 LE that CL [TRUNC. W.] I say tshung 2 Jien 2 yiu 3 kë [ko] ^xiu 5 DZai 2 le, 箇 隻 □ 18. ko 3 tsaq 7 sai-我 來 幫 你 人 提 咯 29. ngo 3 lai 5 pong 1 n 3 len dia 2 lo ? 1SG come help 2SG carry LO CONSULT 我 話 40. ngo 3 wa 6 , 小 籃 也 是 51. xieu 3 lan 5 ya 3 xi 6 small basket also be basket that CL female KO NOM LE lan 5 . 62. he 3 tsaq 7 nyü 3 kë [ko] le, 籃 許 隻 女 嘅 呢 scholar grow TEQ MAN then quite pretty P chiu 6 s(ï 6 ) ^xiu 5 DZai 2 . tsong 3 teq 7 chiu 6 man 5 Gieq 7 chi. 2SG carry KO REL that CL basket very tire people PA PRSS 39. eh, big basket also be basket Just walk:toone CL coffin:shop door A 秀 才 長 得 就 蠻 可 氣 28. n 3 Dia 2 ko he 3 ko lan 5 tsï hao 3 luïi 6 nyin 5 ka pa, □ 50. ((RECITING))thai 6 lan 5 ya 3 xi 6 lan 5 . 61. tsïn 5 hao 3 tseu 3 ta[w] yiq 7 tsaq 7 kuon 1 [D]Zai 2 Bu 3 mïn 5 Gieu 3 wa. thirty-nine degrees, forty degrees. 29. Like, last year, 30 it was hot to die of. 31. Last year the heat 古 代 個 秀 讀 書 6. ku 3 tai 5 kë [ko] xiu 5 -^thuq 7 su 1 ancient:times KO GEN [TRUNC. W.] literate 你 提 嘅 許 個 籃 子 好 累 人 家 吧 大 籃 也 是 籃 正 好 走 到 一 隻 管 材 鋪 門 口 哇 person that CL female KO NOM LE 1SG for 2SG carry LO CONSULT nyin 5 , 17. he 3 tsaq 7 nyü 3 ko le this young:lady a 38. ... ngo 3 pong 1 n 3 len Dia 2 lo, 1SG one CL hand then able:to carry KAQ NEW walk a walk LE 人 許 隻 女 嘅 呢 27. ke=i 3 xieu 3 nyong 5 ts[ï] a=, 我 幫 你 人 提 咯 49. ngo 3 yiq 7 tsaq 7 xiu 3 chiu 6 Go 3 yi 3 Dia 2 . 60. kaq 7 tseu 3 a tseu 3 le, door grow LIEU PRFV hairy 箇 春 天 裏 23. ko 3 DZun 1 thien 1 li. this springtime inside 熱 天 裏 吧 24. let 7 thien 1 li pa, summer inside PA 熱 天 裏 還 往 往 有 洪 水 32. let 7 thien 1 li hai 2 wong 3 wong 3 yiu 3 fung 5 suïi 3 , summer inside in:addition often have flood 漲 水 啊 33. ... tsong 3 suïi 3 a. swelling:of:waters A SUPP 秋 天 還 好 42. chiu 1 thien 1 hai 2 hao 3 . autumn still good 不 過 呢 就 是 好 乾 燥 43. puq 7 kuo 5 le chiu 6 x[ï 6 ] ^hao 3 kon 1 tsao 5 but LE then be very dry 冬 天 嘅 許 個 氣 溫 啊 50. tung 1 thien 1 ko he 3 ko chi 3 wïn a, winter KO GEN that CL temperature A TOP 也 不 算 好 低 51. ya 3 pïq 7 son 5 hao 3 ti 1 . also NEG count:as very low 比 如 零 下 五 度 咯 冬 天 嘅 天 氣 好 得 很 58. tung 1 dien 1 kë [ko] ^thien 1 chi hao 3 teq 7 hen 3 , winter KO GEN weather good TEQ DEG very 都 是 晴 天 59. tu 1 sï 6 chiang 2 dien 1 . all be clear day winter inside A TOP 你 坐 得 箇 個 房 間 裏 啊 69. n 3 tsho 6 = teq 7 ko 3 ko fong 5 kan 1 li a, 2SG sit TEQ PREP this CL apartment inside A SUPP 還 不 如 走 到 外 頭 去 hai 2 puq 7 lu 5 tseu 3 tao 5 wai 6 deu 2 chie 3 , still not:as:good:as walk to outside go Translation 1. Here in Nanchang, 2. everything (else) is good, 3. only the weather 4. is really bad. 5. Seldom in one year 6. Seldom are there several days good weather [in a row]. scholar a 你 曉 得 秀 才 一 個 年 輕 嘅 女 嘅 15. yiq 7 kë [ko] nyen 5 c[h]iang 1 kë [ko] nyü 3 kë [ko], □ 25. ë 佢 話 嘅 我 47. ngo 3 , 就 想 跟 人 家 許 隻 女 嘅 □ 58. chiu 6 xiong 3 kien 1 nyin 5 ka he 3 tsaq 7 nyü 3 ko ^tse 3 . 69. Jiu 6 liong 3 tshai 2 phin 6 yiq 7 DZai 2 . then two cai 2 combine one 以 前 啊 比 現 在 更 有 味 道 cai 2 5. ... yi 3 Jien 2 a pi 3 xien 6 DZai 6 kien 5 yiu 3 wïi 6 Dau 6 . 吧 4. 
n 3 xieu 3 teq 7 xiu 5 DZai 2 pa. ((TALKING TO ME, LS)) one CL young KO ATTR female KO NOM eh 36. cie 3 wa 6 kë [ko]. 1SG then want with person that CL in:the:past A compare now more have taste female KO NOM flirt Note deaspiration but no voicing in c[h]iang 1 3SG say KO ASST 7. First, spring. 8. In the spring, 9. it always rains. 10. Really there are times, 11. When it starts to 2SG know scholar PA PRSS 嘿 我 一 箇 隻 女 嘅 心 理 聽 到 就 好 起 火 以 前 窮 rain, it rains for three days, four days, five days, seven days, eight days, over ten days. 12. Everyday it rains. 13. It won't stop. 14 it rains [so much] that as a result people cannot go out of their homes. 15. It rains [so much] that everything in the apartment is wet. 16. When feeling with one's hand the table top it's all wet, 17 when feeling the stool with the hand, it's all wet, 18, [when] you open the trunk to look, 19. Your quilt is all mouldy, 20. your padded jacket is all mouldy, 21. Everything is all 也 提 隻 籃 子 26. ((HIGH PITCH, PUTS SELF IN ROLE OF SCHOLAR)) e=, 我 幫 你 人 提 嘛 48. ngo 3 yiq 7 , 59. ko 3 tsaq 7 nyü 3 kë [ko] xin 1 li Diang 1 të chiu 6 hao 3 ^chi 3 fo 3 . 6. yi 3 Jien 2 ^chiung 2 , XX 5. 箇 一 小 娘 子 啊 我 一 隻 手 就 可 以 提 隔 走 啊 走 呢 eh also carry CL basket 1SG for 2SG carry MA PRESSING eh. 16. ya 3 Dia 2 tsaq 7 lan 5 tsï. hey 37. ... ngo 3 pong 1 n 3 len Dia 2 ma, 1SG one this CL female KO NOM heart hear then very angry in:the:past poor ((RECITING, SING-SONG)) « A big basket is a also a basket, 51. A small basket is also a basket. 52. [if one] puts the small basket into the large basket, 53, then the two baskets become one ». 54. ((END OF RECITING. RESUMES NARRATOR ROLE)). That is what he said. 55. This, this, this [thing that he was doing], was. 56. Was in fact. 57. Bothering/harassing the woman. 58. He wanted to flirt with her, with that female person. 59. This female person, on hearing this, was very angry. 60. Now, [as] they were walking, 61. They just walked past the door of a coffin shop. 62. That female person, well 63. She suddenly had an idea. 64. She pointed to those coffins, and said. 65. She said. 66. ((RECITING, SING-SONG)) « A coffin is also a [kind of] cai2. 67. A scholar is also a [kind of] cai2. 68. [If one] puts the scholar into the coffin, 69. Then the two cai2's become one. » Notes 吧 : I assume you know (what xiucai means) (debut) 1.3.3 Nanchang text No. 3: New Year (Told by XYZ. Recorded by Laurent Sagart. Nanchang, June 1999) If one [has to] speak of New Year, 2. I, 3. I, 4. I believe 5. That in the old days it was more fun than now. 6. In the old days[we] were poor. 7. Now everybody's life has improved a lot. 8. But I feel that the time[we] were poor was more fun than now. 9. More interesting, that is to say. 10. Look, when we were little, 11. There was no TV, 12. Neither was there any of that Karaoke, 13. Neither were there electronic games. 14. And yet, 15. New Year then,16. Was a lot of fun.17. In these days everybody at New Year's time.18. like, roasted peanuts, 19. Or roasted beans, 20. Killed chickens or ducks, 21. Bought pork meat or fish. 22. Ah, 23.-24. The clothes that children wore everyday were of very poor quality. 25. But come new year, 26. The adults at home 27. Would always find a way 28. Then one did not have any money, 69. But one played very happily. 70. I, I remember 71. That, 72. I, 73. In those days, 74. When I was small, 75. Eh, seven or eight year-old, 76. Ten-something year-old, 77. [I] lived in a big large mixed courtyard, 78. In that large mixed courtyard 79. [lived] many children 80. 
About the same age as I. 81. Some were a bit older than I, and some were a bit smaller than I. 82. That…there were two brothers. 83. Of these two brothers, 84. The elder was called Suïi3-kien1, 85. And the younger one, the younger brother, he was called Mao5-kuïi-tsï. 86. Those two brothers were very able with their hands. 87. Later in life they became industry workers. 88. Eh, 89. they they really knew how to make things to play with. 90. These two brothers, one year at the time of New Year 91. They managed to procure a lot of bamboo laths and sticks, 92. And made a very long lantern, 93. With many sections. 94. From memory there were something like five, six, seven, or Each of us holding one section, we played inside. 106. We played inside the courtyard. 107. Eh, 108. You can't imagine how much fun it was. 109. [and for] that, one did not have to spend money. 110. But nowadays, new year. 111. Really is no fun at all. 112. Nowadays [during] New Year, obviously, 113. The main activity 114. 115. Is, on New Year's eve, to watch the, the, the, the 116. New Year party program on the Central TV channel. 117. [right ?] 118. Apart from that, 119. All the rest is eating. 120. A meal at this house, 121. A meal at that house, 122. A meal at one's mother and father's, 123. A meal at one's mother-in-law's, 124. A meal at one's elder brother's, 125. A meal at one's younger brother's, 126. Eh, 127. One eats so much that ((UNAUTHORIZED BY INFORMANT)). 128. A meal at this friend's, 129. A meal at that old schoolmate's, 130. Aside from that, what else is there ? 131. There is nothing else. 132. Those who can play mahjong, 133. Play mahjong from morning till night. 134. They play so much that ((UNAUTHORIZED BY INFORMANT)). 135. They play for days and nights on end.136. They do not leave the [mahjong] table. 137. That is why, 138. Although nowadays, 139. Material life has improved, 140. Today's, 141. As far as being interesting goes, 142. It is still the time when we were little143. Which was more interesting, 144. More fun. 3 la 1 o 1 Ge 1 , Karaoke 什 哩 電 子 also NEG have any 也 冒 有 遊 戲 ya 3 m[ao 6 ] yiu 3 xiq 7 li Dien 6 tsï ^yiu 5 xi, also NEG have any electronic game and yet (???) ... □ Eh, 許 時 間 過 年 啊 ... ^he= 3 s[ï 2 k]an 1 kuo 5 nyen 5 a, that time new:year A 好 有 味 道 hao 3 yiu 3 wïi 6 Dao 6 very have taste 許 時 間 家 家 戶 戶 過 年 he 3 sï 5 kan 1 ka 1 ka 1 fu 6 fu 6 kuo 5 nyen 5 that time every:household new:year 炒 花 生 咯 tshao 3 fa 1 sen 1 lo, roast peanuts LO INST 炒 豆 子 咯 tshao 3 Deu 6 tsï lo, roast beans LO INST 殺 雞 殺 鴨 咯 ... saq 7 ci 1 saq 7 !ngaq 7 lo, kill chicken kill duck LO INST 買 肉 買 魚 咯 mai 3 nyuq 7 mai 3 ^nye 5 lo. buy pork:meat buy fish LO INST 啊 a, A 細 伢 子 啊 平 時 穿 到 ... xi 5 nga tsï a phin 2 sï 5 Dzon 1 të= [tao 5 ]--children A usually wear TAO DUR 嘅 衣 裳 都 穿 得 好 kë [ko] yi 1 song tu 1 Dzon 1 teq 7 hao 3 so 5 . 啊 a, A 一 KO REL clothes all wear TEQ MAN very low:quality 隔 要 過 年 啊 25. ... 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. kaq 7 yew 5 kuo 5 nyen 5 a, KAQ NEW YEU CHR spend:new:year A 屋 里 嘅 大 人 啊 26. ... wuq 7 li kë [ko] thai 6 nyin 5 a, house in KO GEN adults A 郎 樣 都 要 想 辦 法 27. ... long 5 yong 6 tu 1 yeu 5 xiong 3 phan 6 faq 7 anyway all YEU CHR think method 跟 細 伢 子 做 件 衣 裳 28. kien 1 xi 5 nga tsï tsu 5 chien 6 yi 1 song. for children make CL clothes □ 29. eh, eh 許 時 間 30. he 3 sï 5 kan 1 , that time 所 以 就 普 普 通 通 嘅 新 衣 裳 31. so 3 yi 3 chiu 6 phu 3 phu 3 thung 1 thung 1 kë [ko] xin 1 yi 1 song that:is:why even ordinary KO ATTR new clothes 許 都 覺 得 好 快 活 32. 
he 3 tu 1 cioq 7 teq 7 hao 3 Guai 5 foq 7 . that all feel very happy 心 理 好 高 興 33. xin 1 li hao 3 kao 1 xin. heart in very contented 許 哪 跟 現 在 一 樣 嘅 34. he 3 la 3 kien 1 ^xien 6 Dzai 6 yiq 7 yong 6 kë [ko]. that where with now same KO ASST 現 在 □ □ □ □ 穿 穿 得 好 得 35. ... xien 6 Dzai 6 g,g,g,g, Dzon, Dzon teq 7 hao 3 teq 7 now dress dress TEQ MAN well TEQ DEG 一 年 到 頭 都 穿 得 蠻 好 36. yiq 7 nyen 5 tao 5 Deu 2 tu 1 Dzon 1 teq 7 man 5 hao 3 . all:year:round all dress TEQ MAN very well 你 就 話 玩 嘅 東 西 吧 37. ... n 3 chiu 6 wa 6 ^wan 5 kë tung 1 xi pa. 2SG then say play KO REL thing PA PRSS 玩 嘅 東 西 我 記 得 38. ... wan 5 kë [ko] tung 1 xi ngo 3 ci 1 teq 7 play KO REL thing 1SG remember 許 個 小 時 間 啊 he 3 ko xieu 3 sï 5 kan 1 a, that CL small time A 五 分 錢 39. ... ng 3 fïn 1 Jien 2 , five cents money 箇 五 分 錢 硬 太 不 抵 錢 40. ... ko 3 ng 3 fïn 1 Jien 2 ngang 6 thai 6 pïq 7 ti 3 Jien 2 , this five cents money really too NEG worth money 現 在 坐 個 汽 車 41. xien 6 Dzai 6 Dzo 6 kë chi 5 Dza 1 now sit:on CL bus 坐 一 站 汽 車 42. Dzo 6 yiq 7 tsan 5 chi 5 Dza 1 , sit:on one station bus 都 要 五 角 錢 43. tu 1 yeu 5 ng 3 ^koq 7 Jien 2 . all need five dime money 許 時 間 啊 44. ... he 3 si 5 kan 1 a, that time A 五 分 錢 啊 45. ^ng 3 fïn 1 Jien 2 a, five cents money A 就 可 以 買 只 紙 紮 嘅 燈 籠 46. chiu 6 Go 3 yi 3 mai 3 t[s]aq 7 tsï 3 tsaq 7 kë [ko] ten 1 lung 5 啊 a, A 很 hen 3 very then can buy CL paper:and:bamboo KO ATTR lantern 紙 □ 47. ... tsï 3 ts--paper 搦 紙 紮 嘅 燈 籠 啊 48. ... laq 7 tsï 3 tsaq 7 kë [ko] ten 1 lung 5 a. LAQ INST paper:and:bamboo KO ATTR lantern A 幾 根 篾 子 49. ... ci 3 kien 1 mieq 7 tsï, several CL bamboo laths 幾 根 篾 拍 子 50. ... ci 3 kien 1 mieq 7 phang 1 tsï, several CL bamboo laths 外 頭 糊 到 許 個 紅 嘅 紙 51. ... wai 6 Deu 2 fu 5 të [tao 5 ] he 3 ko fung 5 kë [ko] tsï 3 outside glue TAO DUR that CL red KO ATTR paper 綠 嘅 紙 52. liuq 7 kë [ko] tsï 3 , green KO ATTR paper 紮 到 一 個 籠 頭 嘅 樣 子 53. tsaq 7 të [tao 5 ] yiq 7 kë [ko] lung 5 Deu 3 kë [ko] yong 6 tsï. assemble TAO RES one CL dragon:head KO ATT aspect 里 頭 呢 54. ... li 3 Deu 2 le, inside LE 還 做 了 一 嘅 55. hai 2 tsu 5 lë [lieu 3 ] yiq 7 kë [ko], 3SG then this:way say KO ASST 一 個 個 個 個 個 個 56. yiq 7 kë k k k k k, one CL CL: CL CL CL CL 一 個 可 以 插 一 個 蠟 燭 57. yiq 7 kë kho 3 yi 3 tshaq 7 yiq 7 kë [ko] !laq 7 tsuq 7 , one CL can insert one CL candle 可 以 插 小 蠟 燭 嘅 地 方 58. kho 3 yi 3 tshaq 7 xieu 3 laq 7 tsuq 7 kë [ko] Di 6 fong. can insert small candle KO REL place □ 許 個 五 五 五 分 錢 啊 59. eh=, he 3 kë [ko] ng 3 ng 3 ng 3 [fï]n 1 Jien 2 a, eh that CL five five five cents money A 買 隻 許 個 許 個 龍 燈 啊 60. mai 3 tsaq 7 he 3 kë [ko] he 3 kë [ko] lung 5 ten 1 a, buy CL that CL he3 CL lantern A 就 玩 得 好 高 興 61. ... chiu 6 wan 5 teq 7 hao 3 kao 1 xin 5 . then play TEQ MAN very happy 夜 晚 啊 62. ... ya 6 wan 3 a, evening A 插 根 小 蠟 燭 到 裏 頭 63. tshaq 7 kien 1 xieu 3 !laq 7 tsuq 7 tao 5 li 3 theu, insert CL small candle PREP inside □ 箇 家 走 到 許 家 64. .. eh ko 3 ka 1 tseu 3 të [tao 5 ] he 3 ka 1 Eh that family walk to that family 去 跟 人 家 拜年 chie 3 kien 1 nyin 5 ka 1 pai 5 nyen 5 . go with people present:new:year:wishes 人 家 就 搦 糖 把 你 人 喫 65. nyin 5 ka 1 chiu 6 laq 7 Dong 2 pa 3 2 n 3 len c[h]iaq 7 , people then LAQ DISP sweets give 2SG eat 搦 搦 花 生 豆 子 把 你 人 喫 66. ... laq 7 laq 7 fa 1 sen 1 Deu 6 [ts]ï paq 7 n 3 len c[h]iaq 7 . take take peanuts beans give you eat □ 67. ... eee= eh, 許 時 間 冒 有 什 哩 錢 啊 68. he 3 [s]ï 5 [k]an 1 mao 6 yiu 3 xiq 7 li c[h]ien 2 a, that time NEG have whatever money A 但 是 玩 得 蠻 快 活 69. t[h]an 6 sï wan 5 teq 7 man 5 Guai 3 foq 7 . but play TEQ MAN very happy 我 我 記 得 70. ... 
ngo 3 ngo 3 ci 5 teq 7 , 1SG 1SG remember 許 個 71. he 3 kë=, that CL 我 72. ngo 3 , 1SG 許 時 間 啊 73. he 3 sï 5 kan 1 a, that time A 我 小 嘅 時 間 啊 74. .. ngo 3 xieu 3 kë [ko] sï 5 kan 1 a, 1SG small KO REL time A 許 個 個 個 七 八 歲 子 啊 75. he 3 kë [ko] kë kë= chiq 7 paq 7 suïi 5 tsï a, that CL CL CL seven eight year:old A 十 來 歲 子 啊 76. ... sïq 7 lai 5 suïi 5 tsï a, ten:odd year:old A 住 得 一 個 大 雜 院 里 頭 77. ... tshu 6 teq 7 yiq 7 kë [ko] thai 6 ^tsaq 7 yüon 6 li 3 theu. dwell PREP one CL big mixed:courtyard inside 許 隻 大 雜 院 里 頭 啊 78. (TSK) he 3 tsaq 7 thai 6 ^tsaq 7 yüon 6 li 3 Deu a, that CL big mixed:courtyard inside A, 好 多 嘅 細 伢 子  79. ... hao 3 to 1 kë [ko] xi 5 nga tsï kë--very many KO ATTR child KO 跟 我 差 不 多 大 嘅 80. kien 1 ngo 3 tsha 1 pïq 7 to 1 Dai 6 kë [ko] with 1SG approximately big KO REL 有 嘅 大 滴 子 81. ... yiu 3 kë [ko] Dai 6 tiaq 7 tsï have KO XIE big a little 有 嘅 小 滴 子 嘅 yiu 3 kë [ko] xieu 3 tiaq 7 tsï kë [ko]. have KO XIE small a little KO NOM 許 個 有 兩 兄 弟 82. ... he 3 kë yiu 3 liong 3 xiang 1 thi 6 . that CL have two brother 許 兩 兄 弟 啊 83. ... he 3 liong 3 xiang 1 Di 6 a, that two brother A 許 個 大 嘅 叫 做 水 根 84. he 3 kë [ko] thai 6 kë [ko] cieu 5 tsu 5 = suïi 3 kien 1 . that CL big KO NOM be called suïi3 kien1 小 嘅 呢 箇 老 弟 叫 做 毛 鬼 子 85. xieu 3 kë [ko] le= ko 3 lao 3 Di 6 cieu 5 tsu 5 mao 5 kuïi tsi. small KO NOM LE that younger brother be called mao1 kuïi tsï 許 兩 兄 弟 手 都 好 巧 86. he 3 liong 3 xiang 1 Di 6 sïu 3 tu 1 hao 3 chieu 3 . that two brother hand all very able 箇 大 了 以 後 呢 都 都 當 了 工 人 啊 87. ko 3 thai 6 lë [lieu 3 ] yi 3 heu 6 le tu 1 tu 1 tong 1 lë kung 1 len a. that big LIEU PRFV after LE all all work asLIEU PRFV worker A □ 88. ... eh. eh 佢 門 好 會 做 東 西 玩 89. cie 3 mïn hao 3 fïi 6 tsu 5 tung 1 xi wan 5 . 3PL very know:how make things play 佢 門 兩 兄 弟 呢 有 一 年 過 年 啊 90. ... cie 3 mïn liong 3 xiang 1 Di le yiu 3 yiq 7 nyen 5 kuo 5 nyen 5 a, 3PL two brother LE have one year New:year A 搦 舞 了 好 多 箇 個 篾 子 棍 子 啊 91. ... laq 7 --wu 3 lë [lieu 3 ] hao 3 to 1 ko 3 kë [ko] miet 7 tsï kun 5 tsï a, take procure PRFV very many this CL bamboo:lath stick A 做 了 一 個 好 長 嘅 燈 籠 92. tsu 5 lë [lieu 3 ] yiq 7 kë [ko] hao 3 tshong 2 kë [ko] ten 1 lung 5 . make LIEU PRFV one CL very long KO ATTR lantern 好 多 節 93. ... hao 3 to 1 ciet 7 very many sections 好 像 有 五 六 節 七 八 節 吧 94. ... hao 3 chiong 6 yiu 3 ng 3 liuq 7 ciet 7 chiq 7 paq 7 ciet 7 pa. from:memory have five six section seven eight section PA PRSS 有 龍 頭 95. yiu 3 lung 5 Deu 2 , have dragon head 有 龍 尾 96. ... yiu 3 lung 5 wïi 3 , have dragon tail 中 間 還 有 好 幾 節 97. tsung 1 kan 1 hai 2 yiu 3 hao 3 ci 3 ciet 7 . middle in:addition have very many section □ 98. e/ 做 得 個 99. tsu 5 teq 7 kë [ko] --make TEQ ?? CL 搦 個 紙 糊 起 來 100. laq 7 kë [ko] tsï 3 fu 5 chi 3 lai 5 , take CL paper glue CHILAI INCH 跟 外 頭 嘅 大 人 玩 嘅 燈 籠 差 不 多 101. kien 1 wai 6 theu 2 kë [ko] ^thai 6 nyin 5 wan 5 kë [ko] ten 1 lung 5 Dza 1 pït 7 to 1 , with outside KO GEN adult play KO REL lantern almost 當 然 要 □ □ 小 □ □ 好 多 啦 102. tong 1 len 5 yeu 5 <XX> xieu 3 <XX> hao 3 to 1 la. of:course YEU CHR small very much LA OBV 做 了 許 個 七 八 節 燈 籠 啊 103. tsu 5 lë [lieu 3 ] he 3 kë [ko] chiq 7 paq 7 ciet 7 ten 1 lung 5 a, make LIEU PRFV that CL seven eight section lantern A 我 我 我 我 們 許 個 院 子 裏 嘅 細 伢 子 啊 104. ngo 3 -ngo 3 -ngo 3 -ngo 3 mïn he 3 kë [ko] yüon 5 tsï li kë [ko] xi 5 nga ts(ï) a 1PL that CL yard inside KO GEN children A 一 個 人 搦 一 節 就 在 里 頭 玩 105. yiq 7 kë [ko] nyin 5 laq 7 yiq 7 cieq 7 chiu 6 Dzai 6 li 3 Deu wan 5 , one CL person hold one section then at inside play 就 在 箇 院 子 里 頭 玩 106. 
chiu 6 Dzai 6 kë [ko 3 ] yüon 5 tsï li 3 Deu wan 5 . then at this yard inside play □ 107. e/ eh 不 曉 得 幾 有 味 道 108. puq 7 xieu 3 teq 7 ci 3 yiu 3 wïi 6 Dao 6 not know how:much fun 許 又 不 要 化 錢 嘅 109. he 3 yiu 6 puq 7 yeu 5 fa 5 chien 2 kë [ko]. that again 3 NEG need spend money KO ASST 隔 現 在 過 年 啊 110. kaq 7 xien 6 Dzai 6 kuo 5 nyen 5 a, KAQ NEW nowadays New:year A 真 冒 有 什 哩 味 道 111. tsïn 1 m[ao] 6 yi[u] 3 xiq 7 li wïi 6 Dao 6 . really NEG have whatever fun 現 在 過 年 嘛 112. xien 6 Dzai 6 kuo 5 nyen 5 ma, nowadays New:year MA OBV 最 主 要 嘅 節 目 113. tsuïi 5 tsu 3 yeu 5 kë [ko] cieq 7 muq 7 most important KO ATT activity 就 是 114. chiu 6 xi 6 =. then be 三 十 夜 晚 看 許 個 個 個 個 115. san 1 xiq 8 ya 6 wan 3 Gon 3 he 3 ko ko ko ko New:year's:eve watch that CL CL CL CL 中 央 電 視 台 嘅 春 節 聯 歡 晚 會 116. tsung 1 yong 1 Dien 6 sï 6 Dai 2 kë [ko] Dzun 1 cieq 7 lien 5 fon 1 wan 3 fïi 6 . Central:TV:channel KO GEN New:year get-together party 是 吧 117. <X xi 6 Ba X> be PA PRSS 除 了 箇 個 呢 118. DZu 2 lieu 3 ko 3 ko le, apart:from that CL LE 隔 都 是 喫 119. ... kaq 7 tu 1 xi 6 <WH ^chiaq 7 WH> KAQ NEW all be eat 箇 家 喫 一 餐 120. ko 3 ka 1 chiaq 7 yiq 7 DZan, this house eat one CL 許 家 喫 一 餐 121. he 3 ka 1 chiaq 7 yiq 7 DZan 1 , that house eat one CL 箇 自 己 爺 娘 許 裏 喫 一 餐 122. ko 3 DZï 6 ci 3 ya 5 nyong 5 he 3 li chiaq 7 yiq 7 DZan 1 , this oneself parents there eat one CL 丈 母 娘 屋 裏 喫 一 餐 123. tshong 6 mu 3 nyong 5 wuq 7 li <@ chiaq 7 yiq 7 DZan 1 @>, mother:in:law house eat one CL 箇 個 哥 哥 屋 裏 喫 一 餐 124. ko 3 kë= [ko] ko 1 ko 1 wuq 7 li chiaq 7 yiq 7 DZan, this CL elder:brother house eat one CL 許 個 老 弟 屋 裏 喫 一 餐 125. he 3 kë [ko] lao 3 Di 6 wuq 7 li chiaq 7 yiq 7 DZan 1 , that CL younger:brother house eat one CL 126. eh, 喫 得 個 (........) 4 127. ... ^chiaq 7 teq 7 kë [ko] ((UNAUTHORIZED BY INFORMANT)), eat TEQ RES KO MIR 箇 個 朋 友 屋 裏 喫 一 餐 128. ko 3 ko phung 2 yiu 3 wuq 7 li chiaq 7 yiq 7 DZan, this CL friend house eat one CL 許 個 老 同 學 屋 裏 喫 一 餐 129. he 3 ko lao 3 thung 2 hoq 7 wuq 7 li chiaq 7 yiq 7 DZan. that CL old schoolmate house eat one CL 還 有 什 哩 呢 130. ... hai 2 yiu 3 xiq 7 li le ? in:addition have what LE 隔 就 冒 有 什 哩 131. kaq 7 chiu 6 mao 6 yiu 3 xiq 7 li. KAQ NEW then NEG have anything 隔 會 打 麻 將 嘅 132. kaq 7 fïi 6 ta 3 ma 5 ciong 5 ko KAQ NEW know:how play mahjong KO NOM 就 一 日 到 夜 打 麻 將 133. chiu 6 yiq 7 nyiq 7 tao 5 ya 6 ta 3 ma 5 ciong 5 . then all:day:long play mahjong 打 得 個 (.....) 134. ... da 3 teq 7 kë [ko] ((UNAUTHORIZED BY INFORMANT)), eat TEQ RES KO MIR 打 得 個 幾 日 幾 夜 135. .. ta 3 teq 7 ko ci 3 nyiq 7 ci 3 ya 6 , play TEQ DEG KO MIR several:days:several:nights:in:a:row 不 下 桌 136. puq 7 ha 6 tsoq 7 NEG leave table 所 以 啊 137. ... so 3 yi 3 a, that:is:why A 現 在 雖 然 啊 138. xie=n 6 Dzai 6 suïi 1 len a, nowadays although A 物 質 生 活 改 善 了 啊 139. ... wuq 7 tsït 7 sen 1 foq 7 kai 3 sen 6 l[i]eu 3 a, material life improve LIEU PRFV A 現 在 嘅 140. xien 6 Dzai 6 kë [ko], nowadays KO GEN 你 話 有 意 思 來 話 141. ... n 3 wa 6 yiu 3 yi 5 sï lai 5 wa 6 . 2SG say interesting from:the:point:of:view:of 還 是 我 們 小 時 間 142. hai 2 x[ï 6 ] ngo 3 mïn xieu 3 sï 5 kan 1 , still:be 1PL small time 更 有 意 思 143. kien 1 yiu 3 yi 5 sï, more interesting 更 好 玩 144. kien 1 hao 3 wan 5 . More fun Translation 1. 1.3.4 Nanchang text No. 4: Turnips (Told by XYZ. Recorded by Laurent Sagart. Nanchang, June 1999) 我 來 1. ngo 3 lai 5 =, 1SG come 話 個 故 事 2. wa 6 kë [ko] ^ku 5 sï 6 , say CL story 哈 3. ha. HA 從 前 啊 4. ... tshung 2 c[h]ien 2 a, formerly A SUSP 有 個 財 主 5. yiu 3 kë [ko] tshai 2 tsu 3 . have CL rich:man 財 主 呢 6. ... tshai 2 tsu 3 le, rich:man LE 就 有 個 崽 7. ... 
chiu 6 yiu 3 kë [ko] tsai 3 . then have CL son 箇 崽 呢 8. ko 3 tsai 3 le, this child LE 隔 有 好 幾 歲 了 9. ... kaq 7 yiu 3 hao 3 ci 3 suïi 5 l[i]eu 3 . KAQ NEW have a:good:number year LIEU CHG 隔 要 讀 書 啦 10. kaq 7 yeu 5 thuq 7 su 1 la. KAQ NEW must study LA OBV 佢 就 想 請 個 先 生 11. ... cie 3 chiu 6 xiong 3 chiang 3 kë [ko] ^xien 1 = sen 1 =, 3SG then want invite CL teacher 到 佢 屋 裏 來 教 佢 嘅 崽 讀 書 12. ... tao 5 cie 3 wuq 7 li lai 5 kau 5 cie 3 kë [ko] tsai 3 thuq 7 su 1 . to 3SG house come teach 3SG KO GEN son study □ 13. ... eah. eh 就 請 個 私 塾 先 生 啊 14. ... chiu 6 chiang 3 kë [ko] sï 1 suq 7 xien 1 sen 1 a. then invite CL private teacherA SUSP 到 佢 屋 裏 □ 來 讀 書 15. ... tao 5 cie 3 wuq 7 li zai-lai 5 = Duq 7 su 1 . to 3SG house come study □ 16. ... e, e 來 教 □ 17. lai 5 kao 5 k--come teach 到 佢 屋 裏 來 教 佢 嘅 崽 讀 書 18. tao 5 cie 3 wuq 7 li lai 5 kau 5 cie 3 kë [ko] tsai 3 thuq 7 su 1 . to 3SG home come teach 3SG KO GEN son study □ 19. ah. 隔 先 生 來 了 呢 20. kaq 7 xien 1 sen 1 lai 5 lë [lieu 3 ] le, KAQ NEW teachercome LIEU PRFV LE 照 箇 規 矩 啊 21. tseu 5 ko 3 kuïi 1 cy a. follow this rule A SUSP 箇 個 第 一 日 來 啊 22. ... ko 3 ko--... t[h]i 6 yiq 7 nyiq 7 lai 5 a, that CL ORD one day come A □ □ □ □ 23. ggggggggg 箇 主 人 啊 24. ko 3 tsu 3 len 5 a, this host A 就 要 請 箇 個 先 生 喫 餐 飯 啦 25. chiu 6 yeu 5 chiang 3 ko 3 ko xien 1 sen 1 chiaq 7 tshan 1 fan 6 la. then must invite this CL teachereat CL meal LA OBV 是 吧 26. sï 6 pa. be:so PA PRSS 箇 是 禮 帽 啦 27. ko 3 sï 6 li 3 mao 6 la. this be politeness LA OBV □ □ □ □ □ □ 老 師 來 了 28. ... ggggegggg lao 3 sï 1 lai 5 lë [lieu 3 ], gggggggg teachercome LIEU PRFV 要 請 佢 喫 餐 飯 啦 29. yeu 5 chiang 3 cie 3 chiaq 7 Dzan fan 6 la. must invite 3SG eat CL meal LA OBV □ 30. ... e. e 喫 飯 呢 31. ... chiaq 7 fan 6 le, eat meal LE 本 來 是 正 常 嘅 32. ... pïn 3 lai 5 sï 6 tsïn 5 tshong 2 ko. originally be normal KO ASST 但 是 呢 箇 隻 □ □ □ 地 主 啊 33. ... than 6 sï 6 le ko 3 tsaq 7 ggggggg Di 6 tsu 3 a, but LE this CL gggggg landlord A 箇 隻 財 主 啊 34. ko 3 tsaq 7 tshai 2 tsu 3 a, this CL rich:man A 就 好 小 氣 35. chiu 6 ^hao 3 ^xieu 3 chi 3 . then very measly 就 是 好 嗇 36. chiu 6 sï 6 hao 3 set 7 . then be very miserly 南 昌 人 話 嘅 好 嗇 啦 37. nan 5 tshong 1 nyin 5 wa 6 ko hao 3 set 7 la, Nanchang people say KO ASST very miserly LA OBV 好 小 氣 38. hao 3 xieu 3 chi 3 . very measly □ 39. eh 就 請 箇 個 個 個 個 先 生 喫 什 哩 呢 40. ... chiu 6 chiang 3 kë [ko] gggggg xien 1 sen 1 chiaq 7 xiq 7 li le, then invite this gggggg teacher eat what LE 就 喫 蘿 卜 燒 肉 41. ... chiu 6 chiaq 7 lo 5 Bo seu 1 nyuq 7 . then eat turnip braised pork:meat 弄 了 一 個 菜 叫 蘿 卜 燒 肉 42. nong 6 lë [lieu 3 ] yiq 7 kë tshai 3 cieu 5 lo 5 Bo seu 1 nyuq 7 . make LIEU PRFV one CL dish call turnip braised pork:meat 蘿 卜 燒 肉 本 來 弄 得 好 43. ... lo 5 Bo seu 1 nyuq 7 pïn 3 lai 5 nong 6 teq 7 hao 3 , turnip braised pork:meat originally 也 蠻 好 喫 啦 44. ya 3 man 5 hao 3 chiaq 7 la. then very tasty 但 是 箇 隻 一 個 個 個 財 主 就 好 小 氣 啊 45. ... than 6 sï ko 3 tsaq 7 yiq 7 gggg tshai 2 tsu 3 chiu 6 ^hao 3 xieu 3 Ji 3 a, but this CL ggggggg rich:man then very measly A 就 舍 不 得 多 擱 肉 46. chiu 6 ^sa 3 pïq 7 teq 5 to 1 koq 7 nyuq 7 . then could:not:bearmuch add pork:meat 就 是 一 大 缽 盡 是 蘿 卜 47. .. chiu 6 sï 6 yiq 7 thai 6 ^pït 7 chin 6 x 6 ^lo 5 Bo. then be one big pot entirely be turnip 就 冒 有 兩 塊 子 肉 48. ... chiu 6 mao 6 yiu 3 liong 3 Guai 3 tsï nyuq 7 . then NEG have two pieces meat 就 差 不 多 就 看 不 到 肉 49. ... chiu 6 [t]sha 1 pït 7 to 1 chiu 6 Gon 3 pïq 7 tao 5 nyuq 7 . then almost then see NEG TAO RES meat 隔 喫 啦 50. ... kaq 7 chiaq 7 la, KAQ NEW eat LA CHG 箇 個 先 生 51. ... 
ko 3 ko ^xien 1 sen 1 , this CL teacher 箇 個 有 文 化 有 修 養 嘅 人 52. ko 3 ko yiu 3 wïn 5 fa yiu 3 xiu 1 yong 3 kë nyin 5 , this CL have culture have education KO REL person □ 56. ... ëh 喫 完 啦 57. ... chiaq 7 won 5 la. eat finish LA CHG □ 58. ëh, 抹 抹 嘴 巴 59. maq 7 maq 7 tsuïi 3 pa. wipe wipe mouth □ 60. ... ëh. 喫 茶 啊 61. chiaq 7 tsha 2 a. drink tea A 隔 箇 個 個 財 主 呢 62. ... kaq 7 ko 3 kë kë tshai 2 tsu 3 le=, KAQ NEW this CL CL rich:man LE 喫 完 了 63. chiaq 7 won 5 lieu 3 , eat finish LIEU PRFV 陪 到 先 生 話 話 事 啦 64. Bïi 2 tao 5 xien 1 sen 1 wa 6 wa 6 sï 6 la. 先 生 啊 67. xien 1 sen 1 a, teacherA 你 人 是 好 有 學 問 嘅 人 啊 68. n 3 len sï 6 = hao 3 yiu 3 = ^hoq 7 wïn 6 kë [ko] nyin 5 a, 2SG be very have knowledge KO REL person A 哈 69. ... ha. HA 你 跟 我 們 講 個 子 故 事 哦 70. .. n 3 kien 1 ngo 3 mïn kong 3 ko 5 tsï ^ku 5 sï 6 o. 2SG for 1PL tell one:or:two story O ADV □ 71. ... ah, ah 講 故 事 啊 72. kong 3 ^ku 5 sï 6 a. tell story A 箇 先 生 想 了 一 下 73. ko 3 xien 1 sen 1 xiong 3 lieu 3 yiq 7 ha 6 . this teacherthink LIEU PRFV one:moment 好 啊 74. ... hao 3 a. good A 有 一 個 女 嘅 78. yiu 3 yiq 7 kë nyü 3 kë [ko]. have one CL 79. A 一 個 年 輕 嘅 女 嘅 80. yiq 7 kë [ko] nyen 5 c[h]iang 1 one CL young KO ATTR female KO NOM 佢 生 了 一 個 崽 81. cie 3 ^sang 1 lieu 3 3SG give:birth LIEU PRFV one CL son 許 個 崽 啊 82. .. he 3 that CL son A 長 得 又 客 氣 又 聰 明 83. ... tsong 3 teq 7 … yiu 6 kheq 7 chi … yiu 6 tshung 2 min 5 , grow TEQ MAN and goodlooking and intelligent 不 曉 得 幾 好 玩 84. pïq 7 xiao 3 teq 7 ci 3 hao 3 wan 5 . NEG know how cute 啊 85. ... a. A 隔 到 了 兩 三 歲 洗 蘿 卜 94. ... xi 3 ^lo 5 p[h]o, wash turnip 提 了 一 籃 子 蘿 卜 啊 95. thia 2 lieu yiq 7 lan 5 tsï lo 5 Bo a, carry LIEU PROG one basket turnip A 就 到 河 邊 上 去 洗 96. ... chiu 6 tao 5 ho 2 pien 1 song 6 chie 3 xi 3 , then to river:side on go wash 洗 箇 蘿 卜 啊 97. xi 3 ko 3 lo 5 Bo a. 跌 到 水 里 頭 去 啦 105. ... tiet 7 tao 5 suïi 3 li Deu [c]hie 3 la, fall to water inside go LA CHG 隔 箇 箇 箇 箇 106. kaq 7 ko ko ko ko--(H) 嗷 107. ao, 叮 咚 一 下 跌 下 去 了 108. tin 6 tung 3 yiq 7 ha 6 tieq 7 ha 6 c[h]ie 3 lieu 3 <H>, [sound of heavy fall into water] one instant fall go:down LIEU CHG 隔 箇 箇 個 女 個 就 嚇 一 跳 趕 快 救 命 啊 117. kon 3 Guai 3 ciu 5 miang 6 a= hurry help A 哦 118. ... o, O 隔 就 來 了 好 幾 個 人 啊 119. kaq 7 chiu 6 lai 5 lë [lieu 3 ] hao 3 ci 3 kë [ko] nyin 5 a, KAQ NEW then come LIEU CHG a:good:many CL people A 幫 佢 撈 咯 120. pong 1 cie 3 = ^lao 1 lo. □ 128. e=. 箇 就 冒 看 129. ... ko 3 chiu 6 mao 6 Gon 3 <BR tao 5 = BR>. this then NEG-PRFV see TAO RES 隔 冒 看 到 130. ... kaq 7 mao 6 Gon 3 tao 5 , KAQ NEW NEG-PRFV see TAO RES 冒 有 辦 法 佢 話 164. chiu 6 tsït 7 Gon 3 tao 3 lo 5 Bo, NEG have way in:one:instant then NEG-PRFV see TAO RES LIEU CHG 就 只 看 到 蘿 卜 131. mao 6 yiu 3 Ban 6 faq 7 , yiq 7 kë liong 3 san 1 suïi 5 kë one CL two □ 又 聰 明 又 可 氣 個 崽 140. <X> yiu 6 (D)zung 1 min 5 yiu 6 Gjeq 7 chi 3 kë and intelligent  一 下 子 就 冒 看 到 了 141. (H) <X> yiq 7 ha 6 tsï chiu 6 mao 6 ^Gon 3 = lieu, 3SG how cry KO ASST MA tao 5 佢 郎 哭 152. cie 3 long 5 Guq 7 kë ma. KAQ NEW then come cry 嘅 嘛 163. kaq 7 chiu 6 lai 5 Guq 7 . and goodlooking KO ATTR son 3SG say 隔 就 來 哭 tsai 3 , 佢 151. c[ie 3 ] wa 6 , 3SG then look TAO DUR that one basket turnip 話 162. @@@ cie 3 chiu 6 wong 6 të [tao 5 ] he 3 yiq 7 lan 5 tsï lo 5 Bo. three year KO ATTR 2SG say upset PA PRSS 佢 就 望 到 許 一 籃 子 蘿 卜 到 139. ko 3 kë this CL a:moment:ago still full:of:life 一 個 兩 三 歲 嘅 你 話 傷 心 150. n 3 wa 6 song 1 xin 1 pa. NEG-PRFV see TAO RES flesh A 吧 161. mao 6 Gon 3 tao 5 nyuq 7 a. KO REL 3SG say 冒 看 到 肉 啊 kong 1 kong 1 [h]ai 2 fïq 7 phïq 7 xin 1 xien 1 kë kë [ko] tsai 3 a=, 就 到 河 邊 上 去 93. ... 
chiu 6 tao 5 ho 2 pien 1 song 6 chie 3 , then to river:side on go 跌 到 水 里 頭 去 了 104. tiet 7 tao 5 suïi 3 li Deu [c]hie 3 l[I]e[u 3 ]. fall to water inside go LIEU CHG 細 人 子 跌 到 水 里 頭 去 了 啊 116. xi 5 nyin 5 tsï tie 7 tao 5 suïi 3 li theu chie 3 lieu a= child fall to water inside go LIEU CHG A 就 冒 撈 到 127. ... chiu 6 mao 6 lao 1 <BR tao 5 = BR>. then NEG-PRFV take:out TAO RES 138. ... ko 3 this son 箇 個 剛 剛 還 活 潑 新 鮮 嘅 佢 149. cie 3 wa 6 , then only see TAO RES this turnip 話 160. (H) chiu 6 tsïq 7 Gon 3 tao 5 ko 3 lo 5 Bo, LE very upset HA 就 只 看 到 箇 蘿 卜 ^tsai 3 le, yiq 7 kë [ko] tsai 3 =. 隔 有 一 日 呢 92. ... kaq 7 yiu 3 yiq 7 nyiq 7 le, KAQ NEW have one day LE 許 隻 崽 一 不 小 心 呢 103. he 3 tsaq 7 tsai 3 yiq 7 pïq 7 xieu 3 xin 1 le, that CL son accidentally LE 救 命 啊 115. ciu 5 miang 6 a= help A □ 126. ... e. 137. ... 箇 崽 呢 好 傷 心 148. ... hao 3 song 1 xin 1 ha. 2SG to where g LIEU PRFV O REG 哈 159. n 3 o=, tao 5 la 3 li [c]hie 3 lieu 3 right ? wow ! 你 到 哪 里 去 了 哦 sï 6 pa. kë [ko] nyü 3 kë [ko]. 都 要 帶 箇 箇 個 崽 到 身 邊 91. ... tu 1 yeu 5 tai 5 ko 3 ko ko tsai 3 tao 5 sïn 1 pjen 1 . all YEU CHR take:along this this CL son PREP her:side 箇 裏 □ □ 在 洗 蘿 卜 冒 注 意 呢 102. ko 3 li g g ts[h]ai 6 xi 3 lo 5 Bo mao 6 tsu 5 yi 5 le, there TSHAI PROG wash turnip NEG-PRF notice LE 救 命 啊 114. ciu 5 miang 6 a= help A 撈 了 半 日 125. lao 1 lë pon 5 nyit 7 . take:out LIEU PRFV a:long:time 是 吧 147. ... ie= flesh a 嘩 158. nyuq 7 a=, then cannot:bear leave LO OBV that CL rich:man say KO ASST 肉 啊 136. chiu 6 ^sa 3 pïq 7 teq 7 tseu 3 lo, a. walk to anywhere Ah ! 就 舍 不 得 走 咯 146. he 3 ko tshai 2 tsu 3 wa 6 ko. flesh a 啊 走 到 哪 里 90. .. tseu 3 tao 5 ^la 3 li, 101. ... ko 3 ko wan 5 a this CL play A play A 113. !ai ya=, take:out A PROG take:out 許 個 財 主 話 嘅 157. nyuq 7 a=, wan 5 a, 哎 呀 124. lao 1 a lao 1 , then this CL female KO NOM LE 肉 啊 female KO NOM 89. ko 3 ko nyü 3 ko le, this CL female KO NOM LE also then at side on play 箇 個 玩 啊 玩 啊 112. kon 3 Guai 3 chiu 6 , hurry then then take:out A PROG take:out 撈 啊 撈 隔 箇 隻 女 嘅 呢 135. kaq 7 ko tsaq 7 ^nyü 3 kë [ko] le, 145. ... eh 3SG say KO ASST □ 156. ... cie 3 kë, wa 6 LA UNS 嘴 巴 裏 就 不 話 55. .. tsuïi 3 pa li chiu 6 pïq 7 wa 6 . mouth in then NEG say □ 66. ... ea, ah 以 前 啊 77. yi 3 c[h]ien 2 a, once A 88. .. ko 3 tsaq 7 = this CL 箇 個 女 嘅 呢 this CL two three year KO ATTR son LE 也 就 在 邊 上 玩 100. ya 3 chiu 6 tshai 6 pien 1 song wan 5 . 111. Gon 3 tao 5 tsï 6 ci 3 kë tsai 3 tieq 7 ha 6 chie 3 , see own KO GEN son fall go:down 趕 快 就 Alas 就 撈 啊 撈 123. ... chiu 6 lao 1 a lao 1 , 隔 最 後 慢 慢 子 都 走 了 134. kaq 7 tsuïi 5 [h]eu 6 man 6 man 6 tzï tu 1 KAQ NEW finally gradually all leave LIEU PRFV OR CHG KAQ NEW then bitterly cry 佢 話 嘅 A ^tseu 3 lëu. 隔 就 攢 勁 裏 哭 啊 144. kaq 7 chiu 6 tsan 3 cin 5 li khuq 7 a. then be thus KO ASST MA OBV .. kë ma. 155. chiu 6 x[i 6 ] ^kong 3 make TEQ MAN good 又 不 好 意 思 乙 氣 53. .. yiu 6 pïq 7 hao 3 yi 5 sï 1 tshoq 8 chi 3 . again embarrassed get:angry 心 里 頭 不 高 興 呢 54. .. xin 1 li theu pïq 7 kao 1 xin 1 le, heart inside NEG happy LE keep:company TAO DUR teachertalk talk things LA OBV Attention the vowel in 啦 is acoustically intermediate between /a/ and /o/ 財 主 話 嘅 65. ... tshai 2 tsu 3 wa 6 kë, rich:man say KO ASST 講 就 講 一 個 哈 75. kong 3 chiu 6 kong 3 yiq 7 kë [ko] ha. tell then tell one CL HA 佢 話 嘅 76. cie 3 wa 6 kë [ko]. 3SG say KO ASST 86. kaq 7 tao 5 lieu 3 liong 3 san 1 suïi 5 , KAQ NEW reach LIEU PRFV two three year 哪 個 都 喜 歡 87. la 3 kë [ko] tu 1 xi 3 fon. everyone all like 箇 隻 wash these turnips A 箇 個 崽 呢 98. ... 
ko 3 kë [ko] tsai 3 le, this CL child LE 箇 個 兩 三 歲 嘅 崽 呢 , 99. ko 3 ko liong 3 san 1 suïi 5 kë [ko] tsai 3 le, 109. kaq 7 ko 3 ko kë nyü 3 ko chiu 6 haq 7 yiq 7 Djeu 3 , KAQ NEW this this CL female KO NOM then scare one jump 哎 呀 110. !ai ya. Ah ! 看 到 自己 嘅 崽 跌 下 去 for 3SG take:out:of:water LO OBV 幫 佢 救 救 救 佢 嘅 崽 咯 121. pong 1 cie 3 ciu 5 --ciu 5 --ciu 5 cie 3 kë [ko] tsai 3 lo. for 3SG rescue rescue rescue 3SG KO GEN son LO OBV 嗐 呀 122. <P<SGH hai ya SGH>P>. 隔 人 家 箇 箇 開 始 132. kaq 7 nyin 5 ka 1 k[o 3 ] k[o 3 ] Gai 1 sï 3 --有 一 些 人 圍 到 看 啊 133. yiu 3 yiq 7 xieq 7 nyin 5 wïi 5 tao have some people surround TAO DUR watch A NEG-PRFV see TAO RES 就 是 講 嘅 嘛 LIEU CHG Gon 3 a, 冒 看 到 143. .. mao 6 Gon 3 tao 5 l[i]eu. how cry KO ASST A 了 a, 154. long 5 Guq 7 kë KAQ NEW people begin 就 淹 死 在 河 里 頭 了 153. c[ie 3 ] wa 6 , then only look TAO RES turnip □ □ 142. chiu 6 yen 1 sï 3 DZai 6 ho 2 3SG say li theu lieu <XX>, then drown die in 冒 看 到 肉 啊 river inside LIEU CHG 郎 哭 嘅 啊 165. mao 6 1 The Nanchang Fangyan Cidian gives a glottal stop instead of -ng in this word: To make a [set of] clothes for the children 29. Eh, 30. [that's how it was] then. 31. That is why, [it took] no more than a very ordinary new suit, 32. To make one feel very happy. 33. Very contented. 34. Hey, it's not at all like that today. 35. Nowadays, one dresses very well. 36. All year round one dresses very well. 37. Things to play with, you said. 38. Things to play with, I remember when I was a child 39. [with] five cents 40. Nowadays five cents is really worth very little. 41. Nowadays to sit on a bus 42. Sit on the bus for one stop 43. One needs fifty cents. 44. At that time, 45. For five cents, 46. One could buy a lantern of paper and bamboo. 47. Paper and b…48. A lantern made of paper and bamboo. 49. [With] several bamboo laths, 50. Several bamboo laths, 51. [And with] that red paper glued all around. 52. Or green paper, 53. One could assemble the likeness of a dragon. 54. Inside it, 55. One also made a , 56. A, a, a, a., a, 57. A [] for inserting a candle, 58. A place for inserting a small candle. 59. Eh, for those five, five, five cents, 60. [You] bought a, one of those lanterns, 61. And then [you could] play very happily. 62. In the evening, 63. [you] inserted a small candle inside [it] 64. Eh, families would pay each other New Year visits 65. People would give you sweets 66. Peanuts and beans, 67. Eh, 68. eight sections. 95. There was the dragon's head, 96. The tail, 97. And many sections in between.98. Eh, 99. Make…100. With the paper glued on it, 101. it was not very different from the lanterns that the adults outside used,102. [Except] of course, that it would have been much smaller. 103. Having built that seven-or eight-section lantern, 104. W-, w-, we, the children of the courtyard, 105. will now tell 2. of the rice noodles that we Nanchang people like to eat 3. These rice noodles, well 4. they are a specialty of Jiangxi. 5. A lot of counties have them. 6. And moreover, they all say their own rice noodles are the best. 7. The people in Fenyi 8. say that Fenyi rice noodles are the best. 9. Xiajiang people 10. say Xiajiang rice noodles are the best. 11. Those in Anyi say Anyi [noodles] are the best. 12. Those in Nancheng say Nancheng [noodles] are the best. 13. They all say that their own rice noodles are the tastiest 14. We Nanchang people like rice noodles too. 15. These rice noodles, what kind of thing are they ? 16. The raw material [in them] 17. is, 18. is rice. 19. That is, the grains of the rice plant. 20. 
It is not wheat. 21. It is not made of wheat. 22. It is-it is rice. 23. Those rice grains, 24. are first ground into a paste. 25. ground into a paste. 26. Then [one] strains it dry, 27. using a strainer to remove the water. 28. Then it becomes… 29. becomes a wet noodle paste. 30. a very wet noodle paste. 31. Afterwards, 32. then, 33. afterwards, 34. [one] once again uses the… 35. that… 36. that… 37. in… 38. in the workshop 39. in the noodle factory, 40. then, 41. one presses it [the paste] to obtain very fine and very long stick-like 42. noodles 43. we call those « noodles » 44. [we] call them « rice noodles ». 45. Those stick-like, very long 46. noodles 47. are called « rice noodles » 48. Now, th-th-th- 49. those are wet, of course 50. as they come out. 51. Because they come out wet, they must be exposed to the sun to dry. 52. After they have dried in the sun, 53. these stick-like [noodles] 54. are hard and dry, 55. and easy to preserve. 56. If you preserve them well 57. in a dry and dark place 58. those 59. one or two years is no problem at all, 60. they can still be eaten. 61. Now how does one eat them ? 62. there are two ways of eating them. 63. One is, 64. ah, 65. of course there are many ways of eating them, 66. not just two ways, of course. 67. For instance, stir-fried rice noodles. 68. This is something we Nanchang people all like to eat. 69. People from everywhere-- 70. Jiangxi-- 71. People from anywhere in Jiangxi all like to eat [these]. 72. eh. 73. These stir-fried noodles, how-- 74. how-how are they eaten ? 75. [one] takes those… 76. those… those hard, 77. those dry rice noodles, 78. it is… 79. [one] takes them to boil. 80. First boil water. 81. When the water boils, 82. half a pan of water, 83. then, [depending on] the amount of noodles you want to eat, 84. you put that amount of noodles into [the water]. 85. Boil it. 86. After it has boiled, 87. [you] still have to boil it one more time. 88. boil it one more time. 89. How long [should you] boil it for, well, 90. [you should not] boil it too long. 91. If you boil it for too long, then 92. it gets all mushy. 93. When it's mushy it does not taste good. 94. eh. 95. But [if you] cook it for too short a time, it's not… 96. if you cook… 97. if you cook it for too short a time, it's not good either. 98. Inside 99. The noodles are hard inside. 100. Boiling it… 101. Boiling.. 102. Cook till soft, then it will be OK. 103. That is, boil for a few minutes after the water has started to boil. 104. Then it's done. 105. eh. 106. Then they are soft. 107. After they are soft, 108. now you use cold water 109. to rinse them. 110. wash them. 111. You change them 112. That is, you change them back to that wet rice noodle state. 113. It again, 114. it again 115. the dry noodles have become wet again. 116. That's how it is. 117. Now, 118. Now after they have become wet, soft, well-cooked rice noodles, 119. [one] again pours out all the water. 120. Pour out all the water 121. After that you can stir-fry them. 122. To stir-fry rice noodles, well 123. in general you always begin by 124. begin by taking oil to stir-fry a bit of these ingredients, obviously, 125. such as, 126. we Nanchang people like to eat fried noodles with beef. 127. this [dish], fried noodles with beef, 128. there is in Nanchang city a restaurant especially 129. it's [called] the Chin-tsïn-wan-fa-leu, 130. [there at] the Chin-tsïn-wan-fa-leu, 131. the fried noodles with beef are famous. 132. I used to like to eat there in the past. 133. Now it's a long time since I ate there. 134.
If [you] do not use beef, you can use 135. pork, that is also fine. 136. Also in general one will add some chili pepper, 137. We Nanchang people very… most people 138. most Nanchang people like to eat chili peppers. 139. The meat, 140. chili peppers, 141. raw ginger, garlic, 142. eh, 143. kkkk 144. these ingredients [should be] stir-fried until done. 145. The ingredients 146. these things, when they are done, 147. Then [you] again [take] [the noodles] that [you] have just 148. prepared 149. those wet noodles, 150. pour them down [into the stir-fried ingredients] 151. add som…152. add soy sauce, 153. salt, 154. whatever. 155. There are also [those] who make it a little better. 156. They add dried mushroom, mu'er mushroom, and such like. 157. It's extremely good. 158. Ah. 159. I like to eat this thing most. 160. These stir-fried rice noodles. 161. Or alternatively you eat… 162. [if] you don't want to eat them stir-fried, 163. [you] can also eat them in a soup. 164. That is… 165. having prepared a broth, 166. take that… 167. that another pan, 168. again bring to ebullition another panful of water, 169, half a pan of boiling water, that's [your] broth. 170.Eh. 171. After it has reached ebullition, 172. [take] the wet noodles 173. and put them in. 174. Add the ingredients. 175. Add meat, 176. That sort of thing. 177. Eh. 178. Salt, 179, Monosodium glutamate, 180 chopped stalks of chives, 181. [and] eat the soup-182. the noodle soup. 183. It is called « noodle soup ». 184. That's not stir-fired noodles, 185. that's noodle soup. 186. There are people, 187. who like to eat noodle soup. 188. Especially. 189. Especially in the winter. 190. Eating this relatively--. 191. Eating it keeps you relatively warm. 192. Also, in the summer, 193. in the summer we like to eat cold noodles. 194. [For] cold noodles [you] take [what] I described a moment ago, 195. those already cooked through, 196. dry--197. cooked through, 198. wet, 199. those wet noodles after pouring out the water, 200. eh. 201. [You] add. 202. put in a little sesame oil, 203. a few… 204. dried chili mash, 205. chopped stalks of chives, 206. chopped garlic, 207. Things like that. 208. Stir up and eat. 209. It's very tasty. 210. This, 211. Like, now is summer, 212, in the summer-213. in the morning, 214. everybody likes to eat this thing. 215. Eat a bowl of 216. cold noodles. 217. eah, 218. It's cheap. 219. Those cold… 220. eh, 221. that is, cold-stirred noodles, well, 222. cost only two Yuan, more or less. 223. Stir-fried noodles, 224. in those small shops, 225. they cost only three Yuan, more or less. 226. It's convenient, 227. it's good, 228. and it's cheap. 1.4 notes on the lexicon This son, well. 9. had now a good number of years of age. 10. Now he must study. 11. He then wanted to hire a teacher, 12. to come to his house teach his son.13. eh. 14. He then invited a private teacher 15. to come teach his ho--16. eh, 17. to come and teach, 18. to come to his house and teach his son.19. ah. 20. Now when the teacher comes, 21. there is a rule. 22. this …on the first day, 23. ggggggggg 24. the master of the house 25. must invite the teacher to eat a meal, of course. 26. isn't it ?. 27. that is politeness, obviously. 28. When the teacher comes, 29. one must invite him for a meal. 30. eh. 31. [inviting the teacher to] eat a meal, 32. is a normal thing. 33. But, this rich man, well, 34. this rich man, 35. was very measly, 36. that is he was very miserly. 37. the people in Nanchang say 'set' for 'miserly'. 38. or 'measly' 39. eh. 40. 
She] carried a basket of turnips 96. and went to the river side 97. to wash these turnips. 98. Her son, 99. Her two-or-three year-old son 100. was [playing on the side of the river 101. As he was playing, 102. [that woman] who was washing turnips was not paying attention, 103. that son of hers accidentally 104. fell into the water. 105. As he had fallen into the water, 106. now th--107. aoh 108. at the heavy splash of him falling into the water, 109. the woman was suddenly alerted 110. « ai ya !» 111. seeing that her own son had fallen in, Eh. 146. The rich man said, 147. « Wow, 148. she was very upset, I suppose ! » . 149. He [the teacher] said 150. « you bet she was ». 151. He [the rich man] said : 152. « And how did she cry ? ». 153 He [teacher] said : 154. « how did she cry ? 155. she cried thus. 156. She said : 157. « my [own] flesh [=meat], 158. my flesh, 159. where have you gone to ? 160. [One] can only see the turnips, 161. [but] my flesh has disappeared ». 162. The Tang northern innov. 你 has not reached Nanchang. This set shows analogical leveling : all tone 3 (regular for 1sg and 2sg, but T2 expected for cie3). Lack of aspiration in cie3 is also irreg., and shows tonal leveling to T3 is older than devoicing (the word would have evolved to ch-had leveling not occurred). Some speakers say chie3. Origin uncertain. There is a +polite 2sg pronoun n3 len, probably earlier 你 人 , apparently a 2pl. plural ? with politeness shift to singular (Eng. you 2pl > polite 2sg). TAO RES 1. I will now … 2. tell a story 3. that's what I'll do. 4. Once, 5. there was a rich man.. 6. [that] rich flesh A Translation man 7. had a son. 8. 112. she then quickly [cried] 113. « Aiya ! 114. Help ! 115. Help ! 116. A child has fallen into the water ! 117. Qick, help ! »118. Oh. 119. At that point a number of people came 120. to get the child out of the water for her. 121. to rescue, rescue, rescue her son for her. 122. Alas, 123. for all their attempts to take the child out, 124. their continuous attempts, 125. they tried for a long time, 126. eh, 127. but they did not succeed to get the child from the water, 128. eh, 129. they did not find him.130. As they had not found him, 131. nothing could be done, 132. people now began--133. She was looking at the basket of turnips, 163, and crying, 164. « [One] can only see turnips, 165. But my flesh has disappeared » 1.3.5 Nanchang text No. 5: Noodles (Told by XYZ. Recorded by Laurent Sagart. Nanchang, June 1999) 我 來 話 一 下 子 啊 1. ngo 3 lai 5 wa 6 yiq 7 ha 6 tsï a 1SG come tell one time a 我 們 江 西  人 喜 歡 喫  嘅 米 粉 2. ngo 3 mïn ^kong 1 xi 1 nyin 5 xi 3 fon 1 chiaq 7 kë [ko] mi 3 fïn 3 . 1PL Jiangxi people like eat KO REL rice:noodles 箇 米 粉 啊 3. ... ko 3 mi 3 fïn 3 a, this rice:noodle A 是 江 西  嘅 特 產 4. sï 6 kong 1 xi 1 kë theq 7 ts[h]an. be Jiangxi KO GEN specialty 好 多 縣 里 頭 都 有 5. ... hao 3 to 1 xien 6 li 3 Deu tu 1 yiu 3 . very many counties inside all have 而 且 都 話 佢 自 簡  嘅 好 6. ... oe 5 chie 3 tu 1 wa 6 cie 3 ts[h]ï 6 kan 3 kë [ko] hao 3 . moreover all say 3SG self KO GEN good □ 分 宜 人  啊 7. ... <X> ^fïn 1 nyi 5 len 5 a, Fenyi people A 就 話  分 宜  嘅 米 粉 最 好 8. chiu 6 wa 6 fïn 1 nyi 5 kë [ko] mi 3 fïn 3 tsuïi 5 hao 3 . then say Fenyi KO GEN rice:noodles most good 峽 江 人 就 話 9. ... ^haq 7 kong 1 nyin 5 chiu 6 wa 6 , Xiajiang people then say 峽 江  嘅 米 粉 最 好 10. haq 7 kong 1 kë mi 3 fïn 3 tsuïi 5 hao 3 . Xiajiang KO GEN rice:noodles most good 安 義  嘅 就 話 安 義  嘅 最 好 11. ... ngon 1 nyi 6 kë chiu 6 wa 6 ngon 1 nyi 6 kë tsuïi 5 hao 3 . 
Anyi KO GEN then say Anyi KO GEN most good 南 城  嘅 就 話 南 城  嘅 最  好 12. ... lan 5 Dzïn 2 kë chiu 6 wa 6 lan 5 Dzïn 2 kë tsuïi 5 hao 3 . Nancheng KO GEN then say Nancheng KO GEN most good 都 話 自 簡  嘅  米 粉 好 喫 13. tu 1 wa 6 Dzï 6 kan 3 kë mi 3 fïn 3 hao 3 c[h]iaq 7 . all say oneself KO GEN rice:noodles tasty 我 們 南 昌  人 也 喜 歡 喫 米 粉 14. ... ngo 3 mïn lan 5 ts[h]ong 1 nyin 5 ya 3 xi 3 fon 1 chiaq 7 mi 3 fïn 3 . 1PL Nanchang people also like eat rice:noodles 箇 米 粉 是 郎  回 事 呢 15. ... ko 3 mi 3 fïn 3 sï 6 long 5 fïi 2 sï 6 le, this rice :noodles be how type business LE 佢  嘅 原 材 料 啊 16. cie 3 kë= nyüon 5 tshai 2 lieu 6 a, 3SG KO GEN raw:material A 是 17. sï 6 =, be 是 大 米 18. sï 6 thai 6 mi 3 . be rice:grains 就 是 稻 米 啊 19. ... chiu 6 sï thao 6 mi 3 a. that is rice:from:rice:plant A 不 是 小 麥 20. ... puq 7 sï xieu 3 maq 7 . NEG be wheat 不 是 拿 麥 子 做 . 21. ... puq 7 sï 6 laq 7 maq 7 tsï tsu 5 NEG be LAQ INST wheat make 是 是 大 米 22. sï 6 --sï 6 thai 6 mi 3 . be be rice:grain 你 人 許 個 米 呢 23. ... n 3 [l]e[n h]e 3 ko mi 3 le, 2SG that CL rice LE 就 先 磨 成 米 漿 24. ... chiu 6 xien 1 mo 6 Dzïn 2 mi 3 ^ciong 1 . then first grind make:into rice:paste 磨 成 米 漿 啊 25. ... mo 6 Dzïn 2 mi 3 ciong 1 a. grind make:into rice:paste A 然 後 濾 乾 啊 26. ... len 5 heu 6 li 6 kon 1 a, afterwards strain dry A 把 許 個 水 濾 掉 去 27. pa 3 he 3 kë suïi 3 ^li 6 tieu 5 chie 3 . PA DISP that CL water strain throw:out go 就 成 啊 箇 箇 28. ... chiu 6 Dzïn 2 a kkkk, then become A 成 啊 箇 箇 濕  嘅 粉 粉 子 29. Dzïn 2 a gggg sït 7 ko fïn 3 fïn 3 tsï. become a gggg wet KO ATT noodle paste 辣 濕  嘅 粉 粉 子 30. lat 8 sït 7 kë fïn 3 fïn 3 tsï. very wet KO ATT noodle paste 然 後  嘅 31. ... len 5 heu 6 kë=, afterwards KO ? ? 就 32. chiu 6 , then 然 後 呢 33. len 5 heu 6 le, afterwards LE 再 用 箇 個 個 個 34. tsai 5 yung 6 ko 3 kë kë kë, again use this CL CL CL 許 個 35. ... he 3 kë=, that CL 許 個 個 個 個 36. ... he 3 kë kë kë kë, that CL CL CL CL 在 37. tshai 6 --at 在 作 坊 里 啊 38. tshai 6 tsoq 7 fong 5 li a, at workshop in A 在 箇 米 粉 廠 啊 39. tshai 6 ko 3 mi 3 fïn 3 Dzong 3 a, at this noodle factory A 就 40. ... chiu 6 = then 把 佢 榨 成 箇 41. pa 3 cie 3 ^tsa 5 Dzïn 2 ko PA DISP 3SG press make:into this 一 根  一 根 好 細 好 長  嘅 yiq 7 kien 1 yiq 7 kien 1 hao 3 xi 5 hao 3 Dzong 2 kë=, one CL one CL very fine very long KO ATT 箇 個 粉 條 42. ... ko 3 kë fïn 3 thieu 2 . this CL noodle 我 們 把 許 個 就 叫 粉 43. ... ngo 3 mïn pa 3 he 3 kë chiu 6 cieu 5 fïn 3 . 1PL PA DISP that CL then call noodle 就 叫 米 粉 44. chiu 6 cieu 5 mi 3 fïn 3 . then call rice:noodle 許 個 一 根 一 根 好 長  嘅 45. .. he 3 kë yiq 7 kien 1 yiq 7 kien 1 hao 3 Dzong 2 kë=, that CL one CL one CL very long KO ATT 粉 條 46. .. fïn 3 thieu 2 , noodle 就 叫 米 粉 47. .. chiu 6 cieu 5 mi 3 fïn 3 . then call rice:noodle 隔 箇 個 個 個 48. ... kaq 7 ko 3 kë kë kë=, KAQ NEW this CL CL CL 許 個 是 濕  嘅 啦 49. he 3 kë sï 6 sït 7 kë la, that CL be wet KO ASS LA OBV 做 出 來 50. .. tsu 5 Dzut 7 lai 5 . make come:out 出 來 濕  嘅 呢 要 晒 乾 51. .. tshut 7 lai 5 sït 7 kë le yeu 5 sai 5 kon 1 . come:out wet KO LE must expose:to:sun dry 晒 乾 來 以 後 52. ... sai 5 kon 1 lai 5 yi 3 heu 6 , expose:to:sun dry LAI INC after 隔 箇 一 根 一 根  嘅 53. kaq 7 ko 3 yiq 7 kien 1 yiq 7 kien 1 kë KAQ NEW this one CL one CL KO ATT 就 是 硬  嘅 乾 燥  嘅 54. chiu 6 sï 6 ngang 6 kë kon 1 tsao 5 kë then be hard KO ASS dry KO ASS 就 好 保 管  啊 55. chiu 6 hao 3 pao 3 kwon 3 a. then good preserve A 你 要 是 保 管  得 好 啊 56. ... n 3 yeu 5 sï 6 pau 3 kuon 3 teq 7 hao 3 a, 2SG if preserve TEQ MAN good A 在 箇 個 乾 燥 陰 涼  嘅 地 方 啊 57. ... 
tshai 6 ko 3 kë kon 1 tsao 5 yin 1 liong 5 kë Di 6 fong a, at this CL dry dark KO ATT place A 許 個 58. ... he 3 kë=, that CL 一 兩 年 都 冒 有 什 哩 問 題 59. yiq 7 liong 3 nyen 5 tu 1 ma[o 6 ] y[iu] xiq 7 li wïn 6 thi 2 . one two year all NEG have whatever problem 都 還 是 可 以 喫 60. tu 1 hai 2 sï 6 kho 3 yi 3 chiaq 7 . all still be can eat 隔 喫  嘅 時 間 郎 喫 呢 61. ... kaq 7 chiaq 7 kë si 5 kan 1 long 5 chiaq 7 le ? KAQ EPI eat KO REL time how eat LE 有 兩 種 喫 法 62. ... yiu 3 liong 3 tsung 3 chiaq 7 faq 7 . have two kind eat method 一 種 是 63. ... yiq 7 tsung 3 sï 6 =, one kind be □ 64. ë=\, ah 當 然 好 多 種 喫 法 65. ... tong 1 l[e]n 5 hao 3 to 1 tsung 3 chiaq 7 faq 7 . of:course very many kind eat method 還 不 是 兩 種 啦 66. hai 2 puq 7 xi 6 liong 3 tsung 3 la. still NEG be two kind LA OBV 比 如 炒 米 粉 67. ... pi 3 lu 5 tshao 3 mi 3 fïn 3 . for:instance stir:fried:rice:noodles 箇 是 我 們 南 昌 人 都 喜 歡 喫  嘅 68. ko 3 xi ngo 3 mïn lan 5 Dzong 1 nyin 5 tu 1 xi 3 fon 1 chiaq 7 kë. this be 1PL Nanchang people all like eat KO ASS 哪 里  嘅 人 都 69. la 3 li kë nyin 5 tu 1 --everywhere KO GEN people all 江 西 70. .. ciong 1 xi 1 --Jiangxi 江 西 哪  嘅 地 方  嘅 人 都 喜 歡 喫 71. ... kong 1 xi 1 la 3 kë Di 6 fong kë nyin 5 tu 1 xi 3 fon 1 c[h]iaq 7 . Jiangxi any KO GEN place KO GEN people all like eat □ 72. ... ea. eh 箇 炒 米 粉 郎 73. .. ko 3 tshao 3 mi 3 fïn 3 long 5 --this stir:fry noodle how 郎 郎 喫 呢 74. .. long 5 --long 5 --chiaq 7 le ? how how eat LE 就 拿 許 個 75. ... chiu 6 laq 7 he 3 kë=, then take that CL 許 個 許 個 硬  嘅 76. he 3 kë he 3 kë ngang 6 kë, that CL that CL hard KO ATT 許 個 乾  嘅 米 粉 啊 77. he 3 kë kon 1 kë mi 3 fïn 3 a. that CL dry KO ATT noodle A 是 78. ... sï 6 = be 拿 去 煮 79. laq 7 chie 3 tsu 3 . take go boil 先 燒 開 水 80. xien 1 seu 1 khai 1 suïi 3 . first prepare:by:applying:fire boiling water 燒 好 □ 81. ... seu 1 hao 3 yi--prepare successfully 半 鍋 子 開 水 82. pon 5 wo 1 tsï khai 1 suïi 3 . half pan boiling:water 隔 你 要 喫 幾 多 米 粉 83. kaq 7 n 3 yeu 3 chiaq 7 ci 3 to 1 mi 3 fïn 3 , KAQ NEW 2SG want eat how:much noodle 就 擱 幾 多 許 個 米 粉  下 去 84. chiu 6 koq 7 ci 3 to 1 he 3 kë mi 3 fïn 3 ha 6 chie. then put how:much that CL noodle down go 把 佢 煮 85. pa 3 cie 3 tsu 3 . PA DISP 3SG boil 煮 呢  煮 開 以 後 86. ... tsu 3 le tsu 3 khai 1 yi 3 heu 6 , boil LE boil bubble after 還 要 再 煮 一 下 子 湊 87. hai 2 yeu 5 tsai 5 tsu 3 yiq 7 ha 6 tsï tsheu 3 . still must again boil one time in:addition 多 煮 一  下 子 88. ... to 1 tsu 3 yiq 7 ha 6 tsï. more boil one time 煮 得 什 哩 時 間 啊 89. ... tsu 3 teq7 xiq 7 li sï 5 kan 1 a, boil TEQ EXT what time A TOP 不 要 煮 得 太   久 90. puq 7 yeu 5 tsu 3 teq 7 ^thai 3 ciu 3. NEG must boil TEQ EXT too long 煮 得  太 久 個  就 91. tsu 3 teq 7 ^thai 3 ciu 3 kë chiu 6 --boil TEQ EXT too long KO ASS then 就 爛 掉 了 92. .. chiu 6 ^lan 6 tieu 5 lieu. then mushy off LIEU PRFV 許 個 爛  掉 了 也 不 好 喫 93. he 3 ko lan 6 tieu 5 lieu ya 3 puq 7 hao 3 chiaq 7 . that CL mushy off LIEU PRFV then NEG good eat □ 94. ... ë. eh 但 是 煮 短 許 個 時 間 不 95. ... than 6 sï 6 tsu 3 ton 3 he 3 kë sï 5 kan 1 puq 7 --but boil short that CL time NEG 煮 得 96. tsu 3 teq 7 --, boil TEQ 煮 少 了 時 間 也 不 行 . 97. tsu 3 seu 5 lieu sï 5 kan 1 ya 3 puq 7 xin 5 boil little LIEU PRFV time also NEG allright 許 個 里 頭 98. he 3 kë li 3 Deu that CL inside 心 子 還 冒 煮 熟 99. xin 1 ts[ï h]ai 2 mao 6 tsu 3 suq 7 . heart still NEG-PRFV boil cooked 就 煮 得 100. chiu 6 tsu 3 teq 7 =, then boil TEQ 煮 101. tsu 3 --boil 把 佢 煮 軟 了 就 可 以 102. pa 3 cie 3 tsu 3 nyüon 3 lieu chiu 6 kho 3 yi 3 . PA DISP 3SG boil soft LIEU PRFV then OK 就 是 XX 水 煮 開 了 以 後 103. 
chiu 6 sï 6 ts--suïii 3 tsu 3 khai 1 lieu yi 3 heu 6 then be water boil open LIEU PRFV after 再 煮 幾 分 鐘 tsai 5 tsu 3 ci 3 fïn 1 tsung 1 . again boil several minute 就 可 以 了 104. ... chiu 6 kho 3 yi 3 lieu. then OK LIEU PRFV □ 105. ... ë. eh 箇 就 把 佢 煮 軟 了 106. .. ko 3 chiu 6 pa 3 cie 3 tsu 3 nyüon 3 lieu. that then PA DISP 3SG boil soft LIEU PRFV 煮 軟 了 以 後 呢 107. ... tsu 3 nyüon 3 lieu yi 3 heu 6 le, boil soft LIEU PRFV after LE 隔 就 拿 許 個 冷 水 啊 108. kaq 7 chiu 6 laq 7 he 3 kë lang 3 suïi 3 a, KAQ NEW then take that CL cold water A 拿 去 沖 109. laq 7 chie 3 ^tshung 1 . take go rinse 拿 去 洗 110. ... laq 7 chie 3 ^xi 3 . take go wash 把 佢 變 成 111. ... pa 3 cie 3 pien 5 tshïn 2 , PA DISP 3SG change:into 就 是 把 佢 重 新 又 變 成 112. chiu 6 sï 6 pa 3 cie 3 tshung 2 xin 1 yiu 6 pien 5 tshïn 2 that:is PA DISP 3SG anew again change:into 許 個 濕  嘅 米 粉 狀 態 he 3 kë sït 7 kë mi 3 fïn 3 Dzong 6 Dhai 6 . that CL wet KO ATT rice:noodle state 又 把 113. ... yiu 6 pa 3 --, again PA DISP 又 把 114. yiu 6 pa 3 , again PA DISP 箇 個 乾  嘅 米 粉 又 變 成 了 115. ko 3 kë kon 1 kë mi 3 fïn 3 yiu 6 pien 5 tshïn 2 lieu this CL dry KO ATT rice:noodle again change:into LIEU PRFV 濕  嘅 米 粉 sït 7 kë mi 3 fïn 3 , wet KO ATT rice:noodle 就 是 116. chiu 6 sï. that be 隔 個 117. ... kaq 7 kë--KAQ NEW KO ? ? ? 隔  成 了 許 個 又 濕 又 軟 118. kaq 7 Dzïn 2 lieu he 3 kë yiu 6 sït 7 yiu 6 nyüon 3 KAQ NEW become LIEU PRFV that CL and wet and soft 又 煮 熟 了  嘅 米 粉  啊 yiu 6 tsu 3 suq 7 lieu kë mi 3 fïn 3 a, and well-cooked LIEU PRFV KO REL rice:noodle A 隔 又 把 水 就 完 全 濾  乾 了 119. ... kaq 7 yiu 6 pa 3 suïi 3 tu 1 won 5 chüon 2 li 6 kon 1 lieu. KAQ NEW again PA DISP water all completely filter dry LIEU PRFV 把 水 都 濾 掉 去 120. ... pa 3 suïi 3 tu 1 li 6 tieu 6 chie 3 . PA DISP water all filter off go 隔 然 後 你 就 可 以 炒 了 121. ... kaq 7 len 5 heu 6 n 3 chiu 6 kho 3 yi 3 tshao 3 lieu. KAQ NEW afterwards 2SG then can stir:fry LIEU CHG 炒 米 粉 啊 122. ... tshao 3 mi 3 fïn 3 a, stir:fry rice:noodles A 一 般  都 是 先 拿 123. yiq 7 pon 1 tu 1 sï 6 xien 1 laq 7 --in:general all be first take  先 拿 油 來 炒 滴 子 箇 個 作 料 啦 124. xien 1 laq 7 yiu 5 lai 5 tshao 3 tiaq 7 tsï ko 3 kë ^tsoq 7 lieu 6 la. first take oil to stir:fry a:little this CL ingredients LA OBV 比 如 125. ... pi 3 lu 5 , for:instance 我 們 南 昌 人 喜 歡 喫 牛 肉 炒 粉 126. ... ngo 3 mïn lan 5 Dzong 1 nyin 5 xi 3 fon 1 chiaq 7 nyu 5 nyuq 7 Dzao 3 fïn 3 . 1PL Nanchang people like eat fried:noodles:with:beef 箇 牛 肉 炒 粉 127. ... ko 3 nyu 5 nyuq 7 tshao 3 fï--, this fried:noodles:with:beef 南 昌 市 還 專 門 有 一 家 館 子 店 128. lan 5 tshong 1 sï 6 hai 2 tson 1 mïn 5 yiu 3 yiq 7 ka 1 kuon 3 tsï tien 5 , Nanchang city in:addition especially have one CL restaurant 是 許 個 個 個 清 真 萬 花 樓 129. sï 6 he 3 kë kë kë chin 1 tsïn 1 wan 6 fa 1 leu 5 . be that CL CL CL Chin-tsïn-wan-fa:restaurant 清 真 萬 花 樓 130. ... chin 1 tsïn 1 wan 6 fa 1 leu 5 , Chin-tsïn-wan-fa:restaurant 許 個 個 個  牛 肉 炒 粉 是 好 有 名  嘅 131. he 3 kë k k nyu 5 nyuq 7 tshao 3 fïn 3 sï 6 hao 3 yiu 3 miang 5 kë. that CL CL CL fried:noodles:with:beef be very famous KO ASS 我 以 前 都 好 喜 歡 喫 132. ngo 3 yi 3 chien 2 tu 1 hao 3 xi 3 fon 1 chiaq 7 . 1SG previously all very like eat 隔 好 久 冒 喫 過 了 133. ... kaq 7 hao 3 ciu 3 mao 6 chiaq 7 kuo 5 lieu. KAQ NEW very long:time NEG-EXP eat KUO EXP LIEU CHG 不 拿 牛 肉 炒 就 拿 134. ... puq 7 laq 7 nyu 5 nyuq 7 tshao 3 chiu 6 laq 7 =, NEG take beef stir:fry then take 豬 肉 炒 也 可 以 135. tsu 1 nyu 5 tshao 3 ya 3 kho 3 yi 3 . pork stir:fry also possible 還 一 般 啊 都 要 擱 辣 椒 136. ... 
hai 2 yiq 7 pon 1 a tu 1 yeu 5 koq 7 lat 7 cieu 1 , in:addition in:general A all YEU CHR put chili:pepper 我 們 南 昌 人 好 大 部 分 人 137. ngo 3 mïn 3 lan 5 tshong 1 nyin 5 hao 3 --thai 6 phu 6 fïn 1 nyin 5 , 1PL Nanchang people very most people 大 部 分 南 昌  人 都 喜 歡 喫 138. ... t[h]ai 6 p[h]u 6 fïn 1 lan 5 tshong 1 nyin 5 tu 1 xi 3 fon 1 chiaq 7 most Nanchang people all like eat 辣 椒 lat 7 cieu 1 . chili:pepper 箇 個 肉 咯 139. ... ko 3 kë= nyuq 7 lo, this CL meat LO INST  辣 椒 咯 140. lat 7 cieu 1 lo, chili:pepper LO INST 生 姜 大 蒜  咯 141. sen 1 ciong 1 thai 6 son 5 lo, raw:ginger garlic LO INST □ 142. .. e eh 隔 隔 隔 隔 143. kaq 7 kaq 7 kaq 7 kaq 7 , kkkkkkkkkkk 把 箇 個 作 料 炒 好 144. .. pa 3 ko 3 kë tsoq 7 lieu 6 tshao 3 hao 3 . PA DISP this CL ingredient stir:fry done 作 料 個 個 145. ... tsoq 7 lieu 6 kë kë ingredient k k 箇 些 東 西 炒 熟 了 呢 146. ko 3 xiet 7 tung 1 xi 1 tshao 3 suq 7 lieu le, this CL things stir:fry cooked LIEU PRFV LE 再 把 剛 才 箇 個 XX 147. ... tsai 5 pa 3 kong 1 Dzai 2 ko 3 kë li--again PA DISP a:moment:ago this CL 箇 個 舞  正 了 148. ko 3 ko wu 3 tsang 5 lieu this CL make done LIEU PRFV 箇 個 濕 米 粉 啊 149. ko 3 kë sït 7 mi 3 fïn 3 a, this CL wet rice:noodle a 倒 下 去 150. tao 5 ha 6 chie 3 . pour down go 擱 XX 151. ... koq 7 yi--put 擱 擱 擱 醬 油 咯 152. koq 7 koq 7 koq 7 ciong 5 yiu 5 lo, put put put soy:sauce LO INST 鹽 啊 153. yen 5 a, salt A 什 哩 154. xiq 7 li. what(ever) 還 有 搞 得 好 滴 子 個 155. hai 2 yiu 3 kao 3 teq 7 hao 3 tiaq 7 tsï kë, also have make TEQ MAN good a:little KO ? ? ? 還 有 加 上 香 菇 木 耳 什 哩 東 西 156. hai 2 yiu 3 ka 1 song 6 xiong 1 ku 1 muq 7 oe 3 xiq 7 li tung 1 xi. still have add dried:mushroom mu'er:mushroom and:such:like 非 常 好 喫 157. ... fïi 1 tshong 2 hao 3 chiaq 7 extremely tasty 啊 158. ... a, A 我  就 最 喜 歡 喫 箇 個  東 西 159. ngo 3 = chiu 6 tsuïi 5 xi 3 fon 1 chiaq 7 ko 3 ko tung 1 xi. 1SG then most like eat this CL thing 箇 是 炒 米 粉 160. ... ko 3 kë tshao 3 mi 3 fïn 3 . this CL stir-fry rice:noodle 或 者 是 你 喫 161. ... fït 7 tsa 3 sï 6 n 3 chiaq 7 --alternatively be 2SG eat 你 不 願 意 喫 炒  嘅 162. n 3 puq 7 yüon 6 yi 5 chiaq 7 tshao 3 kë, 2SG NEG wish eat stir-fry KO NOM 喫 煮  嘅 也 行 163. chiaq 7 tsu 3 kë ya 3 xin 5 . eat boil KO NOM also OK 就 是 164. ... chiu 6 sï 6 = then be 箇 個 燒  正 了  嘅 湯 165. ko 3 kë seu 1 tsang 5 lieu 3 kë thong 1 . this CL cook right LIEU PRFV KO REL broth 把 許 個 個 個 166. ... pa 3 he 3 kë kë kë, PA DISP that CL CL CL 許 個  另 外 一 鍋 子 167. he 3 kë lin 6 wai 6 yiq 7 wo 1 tsï, that CL anotherone pan 再 煮 一 鍋 子 新 鮮  嘅 開 水 啊 168. tsai 5 tsu 3 yiq 7 wo 1 tsï xin 1 xien 1 kë khai 1 suïi 3 a. again boil one pan fresh KO ATT boiling:water A 半 鍋 子 開 水 就 是 湯 啊 169. ... pon 5 wo 1 tsï khai 1 suïi 3 chiu 6 sï 6 ^thong 1 a. half pan boiling:water then be broth A □ 170. .. ë. eh  煮 開 了 以 後 171. ... tsu 3 khai 1 lieu yi 3 heu 6 , boil open LIEU PRFV afterwards 把 箇 個 濕  嘅 米 粉 172. pa 3 ko 3 ko sït 7 kë mi 3 fïn 3 , PA DISP this CL wet KO ATT rice:noodles 擱 進 去 173. ... koq 7 chin 5 c[h]ie 3 . put enter go 加 上 作 料 174. ... ka 1 song 6 tsoq 7 lieu 6 . add ingredient 加 上 箇 個 肉 啊 175. ka 1 song 6 ko 3 kë nyuq 7 a, add this CL meat A 箇 些 東 西 176. ko 3 xiet 7 tung 1 xi 1 . this CL things □ 177. ... ë. eh 鹽 啊 178. yen 5 a, salt A 箇 個 味 精 啊 179. ... ko 3 ko wei 6 cin 1 a, this CL glutamate A 蔥 花 呀 180. ... tshung 1 fa 1 ya, chopped:chives:stalks A 箇 個 喫 箇 個 湯 181. ko 3 ko chiaq 7 ko 3 kë ^thong 1 --This CL eat this CL broth 湯 粉 182. .. thong 1 fïn 3 . noodles:in:broth 許 就 叫 湯 粉 183. ... he 3 chiu 6 cieu 5 ^thong 1 fïn 3 . that then be:called noodle:soup 許 就 不 是 炒 米 粉 184. ... 
he 3 chiu 6 puq 7 sï 6 tshao 3 mi 3 fïn 3 . that then NEG be stir-fry noodles 許 就 湯 粉 185. he 3 chiu 6 ^thong 1 fïn 3 . that then noodle:soup 有  嘅 人 啊 186. ... yiu 3 ko 3 nyin 5 a have KO XIE people A 喜 歡 喫 湯 粉 187. xi 3 fon 1 chiaq 7 thong 1 fïn 3 . like eat noodle:soup 特 別 188. .. theq 7 p[h]ieq 7 --especially 特 別 是 冷 天 里 啦 189. theq 7 p[h]ieq 7 sï 6 lang 3 Dien 1 li la, especially winter in LA OBV 喫 箇 個 比 較 190. chiaq 7 ko 3 ko pi 3 kao 5 --eat this CL relatively 喫 得 身 上 比 較 熱 和 191. ... chiaq 7 teq 7 sïn 1 song pi 3 kao 5 leq 7 fo. eat TEQ EXT body on relatively warm 還 有 熱 天 里 呢 192. ... hai 2 yiu 3 let 7 t[h]ien 1 li le, in:addition have summer in LE 熱 天  我 們 喜 歡 喫 冷 粉 193. let 7 t[h]ien 1 ngo 3 mïn xi 3 fon 1 chiaq 7 lang 3 fïn 3 . summer 1PL like eat cold:noodle 冷 粉 就 是 拿 剛 才 話  嘅 194. ... lang 3 fïn 3 chiu 6 sï 6 laq 7 kong 1 ts[h]ai 2 wa 6 kë, cold:noodle then be use a:moment:ago say KO REL 許 個 已 經 煮 熟 了  嘅 195. he 3 ko yi 3 cin 1 tsu 3 suq 7 lieu kë, that CL already boiled:through LIEU PRFV KO REL 乾 □ 196. ... kon 1 --ee, dry eh 煮 熟 了  嘅 197. ... tsu 3 suq 7 lieu kë, boiled:through LIEU PRFV KO REL 許 個 濕  嘅 198. he 3 kë sït 7 kë, that CL wet KO REL 許 個 濾 掉 了 水 個 濕 米 粉 啊 199. ... he 3 kë li 3 tieu 5 lë [lieu] suïi 3 kë sït 7 mi 3 fïn 3 a. that CL filter off LIEU PRFV water KO REL wet rice:noodle A □ 200. ... ë. eh 加 上 201. ka 1 song--add 擱 滴 子 麻 油 202. koq 7 tiaq 7 tsï ^ma 5 yiu, put a:little sesame:oil 擱 些 子 箇 個 203. ... koq 7 xiet 7 tsï ko 3 kë= put a:few this CL 辣 椒 末 子 204. lat 7 cieu 1 moq 7 tsï, eat one bowl this CL 8 7 15 23 21 74 1.5 notes on the grammar : constructions in the 5 texts 個 ko chili:pepper dried:chili:mash 蔥 花 冷 粉 216. ... lang 3 fïn 3 , 又 好 喫 227. yiu 6 hao 3 chiaq 7 . at:once tasty 隻 tsaq7 0 15 3 5 0 就 想 跟 人 家 許 隻 女 個 了 23 根 kien1 0 0 3 0 6 3.58. chiu 6 xiong 3 kien 1 nyin 5 ka he 3 tsaq7 nyü 3 ko 1.5.1 passive ^tse 3 . 9 then want with person that CL No ex female KO NOM flirt 205. tshung 1 fa 1 , chopped:chives:stalks 箇 個 大 蒜 □ 子 206. ko 3 kë thai 6 son 5 mïi 3 tsï, this CL chopped:garlic 箇 些 東 西 啊 207. ... ko 3 xiet 7 tung 1 xi 1 a this CL thing A 拌 起 來 喫 208. ... phon 6 chi 3 lai 5 chiaq 7 . stir CHILAI INCH eat 也 好 喫 209. ... ya 3 hao 3 chiaq 7 . then tasty 箇 個 210. ... ko 3 kë, this CL 你 像 現 在 是 熱 天  啦 211. n 3 chiong 6 xien 6 ts[h]ai 6 sï 6 let 7 t[h]ien 1 la, like now be summer LA OBV 熱 天 里 早 212. let 7 t[h]ien 1 li ts--summer in ear-許 個 早 上 啊 213. he 3 kë tsao 3 song a, that CL morning A 大 家 都 喜 歡 喫 箇 個 東 西 214. thai 6 ka 1 tu 1 xi 3 fon 1 chiaq 7 ko 3 kë tung 1 xi. everyone all like eat this CL thing 喫 一 碗 箇 個 215. ... chiaq 7 yiq 7 won 3 ko 3 kë, cold:noodle □ 217. ... ea, eah 又 偏 宜 218. yiu 6 p[h]ien 2 yi. at:oncecheap 箇 冷 □ □ 219. ... ko 3 lang 3 wïn 3 la--(SPEECH ERROR: woo(dles) for noo(dles)) this cold woo □ □ 個 個 220. ëë= kë kë, eh gggggg 就 是 涼 拌  嘅 冷 粉 啊 221. chiu 6 sï 6 liong 5 phon 6 kë lang 3 fïn 3 a, that be cold-stirred KO REL cold:noodle A  只 要 兩 塊 子  錢 222. ... tsï[t 7 y]eu 5 liong 3 Guai 3 tsï chien 2 only need two approximately:RMB money 箇 炒 米 粉 啊 223. ... ko 3 tshao 3 mi 3 fïn 3 a, this stir:fry rice:noodle A 箇 些 小 店 里 頭 224. ... ko 3 xiet 7 xieu 3 tien 5 li Deu, this CL small shop in 就 只 要 三 塊 子 錢 225. chiu 6 tsï[t 7 y]eu 5 san 1 Guai 3 tsï chien 2 . then only need three approximately:RMB money  又 方 便 226. ... yiu 6 fong 1 p[h]ien 6 . at:once convenient 又 偏 宜 228. ... yiu 6 p[h]ien 2 yi. at:once cheap Translation 1. I 1.4.1 copula 是 oscillates between sï6, xi6 and even x (unstressed). 
1.4.2 pronouns

1.4.2.1 personal
• the singular pronouns are ngo3 n3 cie3. Etymologically 我 ngaX 爾 njeX 佢 gjo. The 3sg cie3 is sometimes used to refer to non-animated nouns : I.55, 56, 56, 61 : 3SG cie3 used to refer to Beijing.
• The plural pronouns exist in two sets :
(a) local (not represented in the 5 texts) ngo3 ko li, n3 ko li, cie3 ko li
(b) Mandarin (common in C and in these texts) ngo3 mïn, (n3 mïn ???), cie3 mïn.

1.4.2.2 demonstratives
Only two degrees : proximal ko3 and distal he3.
he3 許 has cognates in Wu and Min, see Hakvoc. First occurs as a far demonstrative in EMC texts, see ZGYW 1999:6 p. 442. Cognate with 處.
ko3 developed out of the general classifier, see ko.doc.
he3 occurs once as « so very adj, that Adj.. » : I.46. he3 'that' in « that dry ! »

1.4.2.3 manner
kong3 this way (perhaps from ko3-yong > kong3)
hen3 that way (perhaps from he3-yong > heng3 > hen3)

1.4.2.4 interrogative
la3 which ?
la3 kë who ? whoever, anybody, everybody
la3 li where ? wherever, anywhere
long5 how ? (perhaps from *na3 yong > nong > long ; 哪 樣, as still in Yiyang. MX Hakka nyong 56 closer than NC to the original)
xiq7 li sï5 kan1 when ?
xiq7 li what ? whatever, anything

1.4.3 classifiers
There are 124 occurrences of noun classifiers in the 5 texts (17 mns of monologue), excluding numerous cases of phatic he3 ko, ko3 ko. Altogether 7 different classifiers. By decreasing order of frequency :

Text No      1    2    3    4    5   total
個 ko        8    7   15   23   21    74
隻 tsaq7     0   15    3    5    0    23
根 kien1     0    0    3    0    6     9
餐 tshan1    0    0    8    2    0    10
些 xiet7     0    0    0    0    4     4
件 chien6    0    0    3    0    0     3
家 ka1       0    0    0    0    1     1
total        8   22   32   30   32   124

ko : places/apartments/houses/courtyards/tables/weather/temperature/taste/paper/story/joke/basket/coffin/lantern/dragon head/bus/dish/person/woman/son/scholar/rich man/teacher/brother
tsaq7 : basket/woman/scholar/hand/lantern/courtyard/landlord/rich man/son/coffin
kien1 : sticks/noodles (long and thin objects)
tshan1 : meals
xiet7 : plural objects
chien6 : clothes
ka1 : restaurants

Many nouns classified by tsaq7 can also be classified by ko. In blue, nouns classified by both ko and tsaq7. They include human and artefact. Non concrete nouns only classified by ko : place ; story ; joke ; weather, temperature, taste.

Examples of tsaq7 :

許 隻 女 個 呢
2.17. ... he3 tsaq7 nyü3 ko le
that CL female KO NOM LE

箇 隻 秀 才 看 到 人 家 蠻 可 氣 呢
2.19. ko3 tsaq7 xiu5 DZai2 Gon3 të [tao] nyin5 ka1 man5 Gieq7 chi le,
that CL scholar look TAO RES person quite pretty LE

箇 隻 年 輕 個 女 個 看 到 佢 講
2.31. ko3 tsaq7 nyen5 chiang1 kë [ko] nyü3 kë [ko] Gon3 të [tao5] cie3 kong3,
this CL young KO ATTR woman KO NOM see TAO RES 3SG this:way

我 一 隻 手 就 可 以 提
2.49. ... ngo3 yiq7 tsaq7 xiu3 chiu6 Go3 yi3 Dia2.
1SG one CL hand then able:to carry

正 好 走 到 一 隻 管 材 鋪 門 口 哇
3.61. ... tsïn5 hao3 tseu3 ta[w] yiq7 tsaq7 kuon1 [D]Zai2 Bu3 mïn5 Gieu3 wa.
Just walk:to one CL coffin:shop door P

許 隻 女 個 呢
3.62. ... he3 tsaq7 nyü3 kë [ko] le,
that CL female KO NOM LE

提 隻 籃 子
3.9. ... thia2 tsaq7 lan5 tsï.
carry CL basket

提 隻 脫 大 個 籃 子 啊
2.10. ... thia2 tsaq7 ^Doq7 Dai6 kë [ko] lan5 tsï a,
carry CL very big KO ATTR basket P

但 是 呢 箇 隻 了 了 了 地 主 啊
4.33. ... than6 sï6 le ko3 tsaq7 ggggggg Di6 tsu3 a,
but LE this CL gggggg landlord A

箇 隻 財 主 啊
4.34. ko3 tsaq7 tshai2 tsu3 a,
this CL rich:man A

許 隻 崽 一 不 小 心 呢
4.103. he3 tsaq7 tsai3 yiq7 pïq7 xieu3 xin1 le,
that CL son accidentally LE

隔 箇 隻 女 個 呢
4.135. kaq7 ko tsaq7 ^nyü3 kë [ko] le,
then this CL female KO NOM LE

箇 隻 女 個 心 理 聽 到 就 好 起 火
3.59. ... ko3 tsaq7 nyü3 kë [ko] xin1 li Diang1 të chiu6 hao3 ^chi3 fo3.
this CL female KO NOM heart hear then very angry

1.4.4 etymologies
kaq7 < ko3 ha6 'this moment, now' (etym. from NC dictionary)
laq7 搦 MC *nak (GY, JY) 'to hold'. Cognate w/ 拿 = 挐 nra 'hold'

1.5.2 disposal
ko 3 tsaq 7 nyü 3 kë [ko] xin 1 li Diang 1 të The Mandarin PA3-construction : PA3 NP V used in Nanchang (text 5 only) chiu 6 hao 3 ^chi 3 fo 3 . 3 家 ka1 0 0 0 0 1 this CL female KO NOM heart hear then very angry 把 許 個 水 濾 掉 去 1 total 8 22 32 30 32 124 正 好 走 到 一 隻 管 材 鋪  門 口 5.27. pa3 he 3 kë suïi 3 ^li 6 tieu 5 chie 3 . 哇 3.61. ... tsïn 5 hao 3 tseu 3 ta[w] yiq 7 tsaq 7 kuon 1 [D]Zai 2 Bu 3 mïn 5 Gieu 3 wa. PA DISP that CL water strain throw:out go And what did he invite the teacher to eat ? 41. [he invited the teacher to] eat braised pork meat with turnips. 42. He [had] a dish made called braised pork meat with turnips. 43. Braised pork with turnips, normally, if it has been made well, 44. it is very tasty. 45. but this rich man who was very measly, well 46. he could not bear to put enough pork meat in it, 47. and it was all turnip in the large pot 48. there weren't two pieces of meat. 49. you almost could not see any meat. 50. The moment to begin eating came. 51. the teacher, 52. this cultivated and educated person, 53. did not think it proper to get angry. 54 In his heart he was not pleased,. 55. but he refrained from saying anything. 56. eh. 57. They finished eating. 58. eh. 59. They wiped their mouths. 60. eh. 61. and they drank tea. 62. Now the rich man, 63. having finished eating, 64. chats with the teacher to keep him company. 65. The rich man says, 66. « ah, 67. teacher, 68. you are a very erudite person 69. I believe. 70. Why don't you tell us a story or two » ? 71. « ah, 72. tell a story » 73. The teacher thought for a moment. 74. « all right. 75. I'll tell one if I have to ». 76. He said. 77. « Once, 78. there was a woman. 79. well. 80. A young woman. 81. She had given birth to a son. 82. Her son, well 83. he was both goodlooking and intelligent. 84. You cannot imagine how cute he was. 85. ah. 86. When he was two or three, 87. everyone liked him. 88. The 89. The woman 90. wherever she went, 91. always would take her son along with her. 92. One day, 93. she went to the river side 94. to wash turnips. 95. [some people started making a circle, 134. and finally left. 135. The woman, 136. She could not bear to leave, 137. right ? 138. her son, 139. this two or three year-old [boy who] moments before was still full of yound and fresh life, 140. that intelligent and good-looking boy 141. had disappeared in one instant 142. and drowned in the river. 143. and disappeared 144. She was crying bitterly ». 145. .4.1 NC uses Dependent -ko -Head : This ko 嘅 is same as general classifier 個 , although different characters are used. 3.65. nyin 5 ka 1 people Many other Gan-Ke dialects have disposal laq7 in that « give A to B » construction, see chiu 6 laq 7 Dong 2 pa 3 5 n 3 len c[h]iaq 7 , then LAQ DISP sweets give 2SG eat have KO XIE big a little 有 個 小 佢 比 我 們 箇 裏 感 覺 一 個 年 輕 嘅 女 . . la 3 li kë nyin 5 tu 1 --嘅 .1 . ... yiq 7 kë [ko] nyen 5 c[h]iang 1 everywhere KO ATT people all kë [ko] nyü 3 kë [ko], 滴 子  個 yiu 3 kë [ko] xieu 3 tiaq 7 tsï kë [ko]. 1. . ... cie 3 pi 3 ngo 3 mïn ko 3 li kon 3 cioq 7 3SG compare:with 1PL here feel one CL young KO ATTR female KO NOM 許 個 乾 嘅 米 粉 啊 KGFYDCBG p. 451 : Anyi, Yifeng, Yugan, Jishui, Yongxin, Duchang ; Dayu, Ganxian, Ningdu, Changting, Sandu, Shaowu. Only few have laq7 where Mand has standard ba3 constructions, see KGFYDCBG p. 441, 455 : Anyi, have KO XIE small a little KO NOM yeu 5 hao 3 teq 7 to 1 . house in KO ATTR adults P • double-marking : kien5 reinforced by 些 子 after the Adj : 要  好  得  多 屋 里 嘅 大 人 . . he 3 kë kon 1 kë mi 3 fïn 3 a. 啊 . . 
... wuq 7 li kë [ko] thai 6 nyin 5 a, that CL dry KO ATT noodle A Yifeng, Sandu, Changting, Shaowu. LAQ7 Losing ground to Mandarin BA3. YEU IRR better TEQ DEG much 1.5.4.1.2 Relatives : 你 1.5.3 comparative 所 以 1.5.3.1 In Nanchang 1. . ... n 3 2SG sit 坐 就 tsho 6 = teq 7 得 普 普 通 通 TEQ PREP . 1. so 3 yi 3 chiu 6 phu 3 phu 3 thung 1 thung 1 箇 個 ko 3 ko this CL 1.5.3.2 simple marking elsewhere in Gan-Hakka Simple comparative with guo4-adj in SHT Hakka : that:is:why even ordinary 房 間 嘅 fong 5 kan 1 apartment kë [ko] KO ATTR 裏 新 li inside A SUPP  啊 衣 裳 a, xin 1 yi 1 song new clothes 啊 a, P 1.5.3.1.1 simple comparative (A more Adj) 還 不 如 hai 2 puq 7 lu 5 han2 tu1 he4 ni2 kan3 sen1 la ! ni2 ko4 se4 ! (you choose first ! you are smaller/smallest) 走 到 外 頭 去 tseu 3 tao 5 wai 6 deu 2 chie 3 , 就 可 以 買 只 紙 紮 嘅 kia1 han2 ko4 lao1 (they were even more angry) . . ... chiu 6 Go 3 yi 3 mai 3 t[s]aq 7 tsï 3 tsaq 7 kë [ko] • zero-marking, with absence of degree adverb hao3 indicating comparative : still not:as:good:as walk to outside go 1.5.3.3 double marking elsewhere in Gan-Hakka : then can buy CL paper:and:bamboo KO ATTR 燈 籠 ten 1 lung 5 lantern 以 前 KGFYDCBG p. 450 gives same double marking A bi B geng Adj in Hakka : Dayu, Ningdu, 窮 . . ... yi 3 Jien 2 還 更 熱 和 些 子 Changting Wuping. 綠 嘅 紙 ^chiung 2 , in:the:past hai 2 kien 5 leq 7 fo xiet 7 tsï. . . liuq 7 kë [ko] tsï 3 , poor still more warm a:little SHT surpass type : green KO ATTR paper 而 且 so3yi3 tu1 he4 jong1 lai4tsu1 hau3 ko4 yong1 moi4tsu1 la1 都 話 佢 自 簡 . . ... oe 5 chie 3 tu 1 wa 6 cie 3 ts[h]ï 6 kan 3 moreover all say 3SG self 1.5.3.1.2 double comparative (A more Adj than B) • No ex. of simple marking A bi B adj. inb the texts 好 多 嘅 細 伢 子 1.5.3.4 double marking elsewhere in Chinese . . ... hao 3 to 1 kë [ko] xi 5 nga tsï kë--Ansaldo 1999 says double marking also in Cantonese, Taiwanese Minnan, Medieval Chinese : 個 好 kë [ko] hao 3 . KO GEN good A bi B gengjia Adj, A bi B zonggeng Adj in Cantonese (p. 129) very many KO ATTR child KO ?? • simple marking, with kien5 更 -Adj : 'Adj-er' 更 A bi B khaq Adj. in Minnan A bi B jiao Adj in medieval Chinese (p.133) ; also examples in Hunan Hakka (Ansaldo) 做 了 一 個 好 長 個 燈 籠 • double marking : A bi B kien5 Adj. : 'A Adj-er than B' . . tsu 5 lë [lieu 3 ] yiq 7 kë [ko] hao 3 tshong 2 kë [ko] ten 1 lung 5 . 冷 . kien 5 lang 3 more cold • Simple marking, Adj-de-duo 'much Adj-er': 以 前 啊 比 現 在 更 有 Double marking probably earlier than simple marking : A bi B geng Adj simplifies to A bi B adj in make PRFV one CL very long KO ATTR lantern 味 道 . . ... yi 3 Jien 2 a pi 3 Mandarin (Ansaldo supposes first A bi B adj, reinforced to A bi B geng Adj independently in xien 6 DZai 6 kien 5 yiu 3 wïi 6 Dau 6 . in:the:past P compare now different places) 我 們 許 個 院 子 裏 嘅 細 伢 子 啊 more have taste .1 . ngo 3 mïn he 3 kë [ko] yüon 5 tsï li kë [ko] xi 5 nga ts(ï) a 1.5.4 attributive/relative construction 1PL that CL yard inside KO ATTR children A 現 在 xien 6 Dzai 6 Dai 6 ka 1 大 家 now everybody 但 是 我 覺 得 than 6 sï 6 ngo 3 cioq 7 teq 7 生 活 sen 1 foq 7 life 窮 ^chiung 2 要 yew 5 hao 3 teq 7 好 得 want good TEQ DEG 個 時 間 kë sï 5 kan 1 多 to 1 . much but 1SG feel poor KO ATTR time 最 主 要 嘅 節 目 ... tsuïi 5 tsu 3 yeu 5 kë [ko] cieq 7 muq 7 1.51.5.4.1.1 Attributives . . ... 3.8. ... .11 . most important KO ATT activity • Simple marking : Adj-tiaq7 tsï 'a little adj-er'  還 有 搞 得 .1 . hai 2 yiu 3 kao 3 teq 7 also have make TEQ MAN  許 個 味 道 he 3 ko wïi 6 Dao 6 that CL taste 漲 脫 大 嘅 箇 個 兩 三 歲 嘅 水 1. . ... tsong 3 thoq 7 thai 6 ko . . 
ko 3 ko liong 3 san 1 suïi 5 kë [ko] tsai 3 le, 好 滴 子 hao 3 tiaq 7 tsï good a:little 啊 a P 崽 呢 , sui 3 , swell very big this CL two three year KO ATTR son LE KO ATT water 有 個 大 滴 子 . 1. ... yiu 3 kë [ko] Dai 6 tiaq 7 tsï 比  現 在 味 道 還 pi 3 xien 6 Dzai 6 wïi 6 Dao 6 hai 2 kien 5 ciuq 7 . 個 kë, KO ? ? ? 更 足 compare now taste even more full 提 隻 脫 大 嘅 籃 子 成 啊 箇 箇 濕 嘅 粉 粉 子 啊 .1 . ... thia 2 tsaq 7 ^Doq 7 Dai 6 kë [ko] lan 5 tsï . . Dzïn 2 a gggg sït 7 ko fïn 3 fïn 3 tsï. a, carry CL very big KO ATTR basket P become a gggg wet KO ATT noodle paste • double marking : A bi B yeu5 Adj teq7 to1 : 'A much more Adj than B' 哪 里 嘅 人 都 .4.2 Alternative construction P Dem CL N This the same pattern as Kaiping : « this/that N which Vs». Examples : yiu 6 tsu 3 suq 7 lieu kë mi 3 fïn 3 a, and well-cooked PRFV KO REL rice:noodle A 1.5但 是 我 覺 得 窮 嘅 時 間 3.8. ... than 6 sï 6 ngo 3 cioq 7 teq 7 ^chiung 2 kë sï 5 kan 1 but 1SG feel poor KO ATTR time  許 個 味 道 啊 he 3 ko wïi 6 Dao 6 a that CL taste P 比 11 . kaq 7 Dzïn 2 lieu he 3 kë yiu 6 sït 7 yiu 6 nyüon 3 KAQ NEW become LIEU PRFV that CL and wet and soft 又 煮 熟 了 嘅 米 粉  啊 .4.3 third pattern P ko Dem CL N This is a combination of the preceding 2 : relative marked with KO REL , while noun is preceded by Dem-CL : 1 . ko 3 ko wu 3 tsang 5 lieu this CL make done LIEU PRFV 箇 個 濕 米 粉 啊 .1 . ko 3 kë sït 7 mi 3 fïn 3 a, this CL wet rice:noodle a 1.5 Wei Gangqiang and Chen Changyi (1998) Nanchang Hua Yin Dang [Nanchang sound archives]. In: Hou Jingyi (ed.) Xiandai Hanyu Fangyin Yinku. Shanghai: Shanghai Jiaoyu Chubanshe. 1.6 REFS negative : mao6 VV mao6 lao1 tao5 ; mao6 khon3 tao5 4.127, 4.129, 4.141, 4.143. 1.5.5.5.3 serial verb constructions but resultative cpl only a special case of a larger V1-V2 serial verb construction category where V2 does not express result of V1 action : 5.95 tsu3 ton3 or 5.97 tsu3 seu3 'boil not long enough'. In these 2 examples, the object is si5kan1 'time' 1.5.6 question types .2.1 perfective V-lieu~lë (reduced form of lieu3) Mrs Xie distinguishes n and l !The aspirates have lenis/breathy variants only in the low (even-numbered) tones, ie when corresponding to OC QZ initials ; however Mrs Xie had a shanghainese mother who spoke shanghainese at home. ? cannot get pair ; but 不=北=pˆ/ 7 (same vowel, lower, close to schwa) tïn/ten 蹲、燈 Mrs xie les lit : tun [u barre] /tïn she has kïn instead of kien < ken 城 =層 tshïn2 針 蒸 =砧 甑 tsïn1 2.2.1.3 question words na3 ko 'who' Likewise -e and -ït are merged xiq7 li 'what' 十=舌=s´q 7 (same vowel as above) na3 li 'where' na3 'which' ci3 to1 然 normally len (dictionary), is lµïn 'how much~many' ci3 'how-Adj' (how deep, how long ; 'how many' a special case of this) lung2 what of 'how-Verb' (asking about manner : 'how to say…'). Pronunciation checked. Identical with lung2 dragon. Not long5 as in Yan Sen's speech. 2.1.1 Tones (Mrs Xie) T1 [42] lung2 pan6 'how to do ?' cf Hakka liong pen ti) lung2 yong 'what kind of…' xiq7li sï2kan1 'when ?' wïi xiq7li, tsoq7 xiq7li 'why' pïn/pen 2.1.3.1.2 -on pronounced -uon 2.2.2 aspect T2 T3 2.1.3.1.3 -ieu [˙24] , [35] [213] 2.2 T5 pronounced -iao [5544] T6 [˙21] T7 see if ïu=eu ? [45  ] T8 no, Nanchang Zidian -eu = Mrs Xie -ao 招 超 燒 [˙2  ] Low-series tones (odd-numbred) are breathy after obstruents. and Nanchang Zidian -ïu = Mrs Xie -ïu 州 抽 收 T2 has 2 allophones, 24/breathy w/ obstruents, 35 clear w/ sonorants. The higher allophone has not merged with T5, unlike 'standard' NC. 
2.1.3.2 codas 2.1.2 initials p ph/B no final -t, all > -q m f t th/D n l ts tsh/DZ s c ch/J ny x k kh/G ng h 2.1.3 rhymes 2.1.3.1 main vowels 2. 1.3.1.1 -en and -ïn Mrs Xie merges -en and -ïn, into -ïn : 深 =生 sïn1 (vowel lower ; close to schwa) Isolation form is pa , merges with T 7 through destressing 'again' here means 'an additional advantage is that…' Complement of degree, informant asked not to publish Isolation form is pa3 , merges with T 7 through destressing constructions Numbering of examples follows Anne's book 2.2.1 interrogatives neutral question : Two types are in concurrence : the Clause-po and V-neg-V type 2.2.1.1.1 V-Neg-V type with 'go' A-1-1 he3 ko thifong, n3 chie3 puq7 chie3 ? with Adj A-2-1 ko3 phin ciu3 xiong1 puq7 xiong1 ? If the VP has an object, then V-puq7-VP : A-3-1 n3 cin5 puq7 cin5 tshen2 Copula : xi6-puq7-xi6 A-4-1 lao3 wang sï6 puq7 sï6 san1tung1 nyin2 ? Existential : yiu3 mao6 yiu3 A-5-1 nlen yiu mao yiu chien2 ? If the VP includes an aux, both aux-puq7-aux V and aux-V-puq7-aux are accepted A-6-3-1 n3 wa6 cie3 wïi puq wïi tseu3 n3 wa6 cie3 wïi tseu3 puq wïi Potential : V1-teq7-V2 V-puq7-V2 A-7-4 ko3 mo yen, n3 khon3 teq7 cien5 khon3 puq7 cien5 ? ko3 mo yen, n3 khon3 teq7 tao5 khon3 puq7 tao5 ? Emphatic : sï6 puq7 sï6 VP ? A-8-1 n3 sï6 puq7 sï6 tsoq7 ngo3 kë chi3 ? Aspects perfective. V (lieu3) mao6 V, 'cocme' A-9-jia-1 cie3 lai2 mao6 lai2 cie3 lai2 lieu/lë mao6 lai2 cie3 lai2 lieu/lë mao with Adj : A-9-jia-8 thien1 heq7 le mao6 ? cannot say : *thien1 heq7 le mao6 heq7 Like V-neg-V Indicates no presupposition. Cannot use it in contexts that indicate a presupposition on the part of the speaker, like E-1-1 *tsiq7 yiu3 cie3 yiu3 chien2 po ? 'is he the only one who is rich ?' Perhaps -po. is a phonetically reduced form of the negation puq7, if so it is relatable to the V-neg type of neutral question, Cf. Yue 1993 : 42. However, it does not change to -mao6 in perfective or experiential aspects. Used for neutral questions alternatively with the V-neg-V type. For instance A-1-1 given above as V-neg-V, can also be said as : he3 ko thi6 fong, n3 chie3 po ? Both ways are exactly equivalent. A-2-1 ko3 phin ciu3 xiong1 po ? A-3-2 n3len chiaq7 ci1than po ? A-4-1 lao wang sï santung nyin po ? A-4-5 n3 wa6 ke sï6 pïn thi wa6 po ? A-5-2-3 cie3 tshai6 wuq7 li po ? A-6-1-1 n3len chie3 teq7 po ? 'are you able to go ?' A-6-1-2 nlen nen pong kë mong po ? A-6-3-1 n3 wa6 cie3 wïi6 tseu3 po ? A-7-1 n3koli chiaq7 teq7 lieu3 po ? A-7-3 cie3 thia2 teq7 chi3 po ? perfective : A-9-jia-1-1 cie3 lai2 lë po ? equivalent with : cie3 lai2 lieu/lë mao ? A-9-jia-1-2 cie3 tshoq8 nyiq7 lai2 lë po ? A-9-jia-1-3 cie3 xientsai lai2 lë po ? A-9-jia-1-8 n3len thiang1tung3 lë po ? experiential A-9-yi-1 he3 kë tifong n3 chie3 kuo5 po ? habitual A-9-yi-10 cie3 cin1 tshong lai2 po ? Yes-no questions (speaker has preformed idea of the answer) Used to express disbelief, speaker expects 'no' as answer : A-6-3-1 n3 wa6 cie3 wïi6 tseu3 a ? A-9-jia-6 n3 wuq7li yiu3 tshai Dien a ? (you really have a color TV at home ?) 
A-1-9 kuo5 le ha tsï yiq7 nyit pi3 yiq7 nyit7 ton3 'After the summer solstice, the days become shorter' A-1-10 cin1-nyit pi3 ts [h]oq8-nyit7 lang3 tiaq7tsï 'today is colder than yesterday' A-2-1 mi2 nyin2 pi3 la3 ko tu1 wei6 wa6 sï6 'the matchmaker has more of a way with words than anyone' A-2-2 cie3 pi3 ngo3 xi3 fon chiaq7 han2 tshai3 'he likes to eat pickled vegetables more than I do' A-2-4 xieu3 nyü3 pi3 xieu3 tsai3 pha3 tsu1 tsu ''the youngest daughter is more afraid of spiders than the youngest son is' A-3-1 (Mrs) cie3 pi3 ngo3 tsao3 tao5 'he arrived earlier than I did' (Mr) cie3 pi3 ngo3 lei2 teq7 tsao3 A-4-1 ci1 pi3 ngaq7 tsï phao3 teq7 khuai3 'chickens run faster than ducks' A-4-2 cie3 chiaq7 fan6 pi3 ngo3 man6 'he eats more slowly than I do' A-5-1-1 ko3 tshong2 p [h]i x tsï pi3 he3 tshong2 heu6 'this covering is still thicker than that one' ask this one again A-5-1-3 xien ts [h]ai6 n3-len sao1 [ ! seu1] ko tshai3 pi3 yi3 chien2 han2 'now the food you cook is even saltier than before' ask this one again A-5-3 cin1 nyen2 kë ku3 kheq7 pi3 chyü3 nyen2 (expect chiu6 nyen2) kan3 sao3 [expect seu3] le hao3 to1 A-5-6 ko3 li pi3 he3 li sït7 tiaq7tsï 'It is a bit wetter here than there' A-8-1 cie3 puq7 pi3 ngo3 phong3 'He's not fatter than me' A-8-3 xieu3 tao1 puq7 pi3 cien3 tsï khuai3 'the knife isn't sharper than the scissors' A-8-5 sïn3 sïn nyen2 lin2 puq7 pi3 suq7 suq7 thai6 'Auntie is not older than uncle is' A-8-7 cin1 nyiq7 puq7 pi3 chien2 nyiq7 leq7 ci3 to1 'today is not much warmer than yesterday' A-8-13-2 cie3 ko t[]sï6 xia3 teq7 puq7 pi3 ngo3 so1 'his handwriting is not worse than mine' Anne's equal degree : Anne's equalling degree Anne's equalling/other types Potential constructions also used to express equalling degree semantics : E-1-1 A wïi6 pi3 teq7 song6 B (A can compare with B) A wa6 teq7 yang B (A can beat B at talking) A wa6 B puq7 yang (A shuo bu quo B) E-2-3 ma3 tseu3 puq7 yang nyu3. Anne's superlative degree E53 ko3 tao sui3, thieu teq7 chie3 'he can carry this bucket of water over on a shoulder pole' If there is an object, the Nanchang dictionary says the structure is Mrs Xie accepts these, but she also accepts : cie3 chiaq7 teq7 cin5 fan6~cie3 chiaq7 puq7 cin5 fan6 Likewise, Mrs Xie in E16 gives 2 structures : n3 len2 ta3 teq7 yang2 cie3 打 得 贏 佢 ; and (same as dictionary) : n3 len2 ta3 teq7 cie3 yang2 打 得 佢 贏 . • negative form of these constructions : If there is an object, Mrs Xie puts it after the ResComp : Nanchang syntax notes L. Sagart 108 E-2-3 ko3 xiet7 wa6, phien teq7 lieu n3len, phien puq7 lieu ngo3 but she also accepts structures where the object is before puq7, cf. the pair of examples : Of course you can also say A puq7 nen2 pa3 B V ResComp (E-3-3-2) 2.2.9.4.2 without resultative cpl Negative is puq teq V cie3 puq7 teq7 song6 cie3 puq7 teq7 fïi2 chie3 cie3 puq7 teq7 fong5 ha6 2.2.9.5 resultative complement 2.2.9.5.1 V1-V2 If there is an object it can come after the resultative compound : E23
04100048
en
[ "spi.opti" ]
2024/03/04 16:41:20
2022
https://hal.science/hal-04100048/file/DOTA22106.1648474976.pdf
Ugo Tricoli email: [email protected], Leon Schlemmer

Using optical potentials with gain-loss to generate structural colors

We study the possibility of obtaining structural colors through the use of supersymmetric transformations in optics, such as the Darboux transform. Structural colors were originally discovered by studying the interference of light with natural photonic structures giving rise to vivid and spectacular tonalities. They differ fundamentally from ordinary colors based on light absorption at particular wavelengths, as they result from light interference only. To treat interference analytically, we make use of the Darboux transform to define materials with continuously varying spatial distributions of the refractive index that are exactly solvable for the electric field. Consequently, it is possible to calculate analytically the Transfer Matrix linked to the definition of the transmission and reflection coefficients. Interestingly, by using gain, anomalous transmission/reflection (i.e. larger than one) can be obtained, the physical system being open towards the external environment (the system uses external energy in order to increase both transmission and reflection). The generated active optical cavity can thus be used to amplify the incoming light in the desired spectral-angular region. The calculated refractive index distributions can be realized in practice as 1D multi-layered structures corresponding to optical filters in the visible.

Introduction

Structural colors were originally discovered by studying the interference of light with natural photonic structures giving rise to vivid and spectacular tonalities [START_REF] Kinoshita | Physics of structural colors[END_REF]. They differ fundamentally from ordinary colors based on light absorption at particular wavelengths, as in pigments, dyes and metals, since they are a consequence of light interference only. Many studies were dedicated to the understanding of natural photonic structures as they are found in animals and insects [START_REF] Anderson | An electron microscope study of some structural colors of insects[END_REF][START_REF] Kinoshita | Structural colors in nature: the role of regularity and irregularity in the structure[END_REF][START_REF] Noh | How noniridescent colors are generated by quasi-ordered structures of bird feathers[END_REF][START_REF] Vignolini | Pointillist structural color in pollia fruit[END_REF][START_REF] Burresi | Bright-white beetle scales optimise multiple scattering of light[END_REF]. On the other hand, numerous other studies were dedicated to the attempt to produce structural colors through artificial structures based mainly on photonics [START_REF] Forster | Biomimetic isotropic nanostructures for structural coloration[END_REF][START_REF] Wiersma | Disordered photonics[END_REF] and plasmonics [START_REF] Yu | Transmissive/reflective structural color filters: theory and applications[END_REF]. Surprisingly, no study has yet explored the possibility of obtaining structural colors through supersymmetry (SUSY) [START_REF] Cooper | Supersymmetry and quantum mechanics[END_REF]. We propose refractive index distributions that are used to produce iridescent structural colors, i.e. colors strongly dependent on the incident direction. Supersymmetry, implemented through the Darboux transform (DT) [START_REF] Darboux | On a proposition relative to linear equations[END_REF], is explicitly broken by truncating the refractive index distribution in space.
Nevertheless, the results remain analytical [START_REF] Krapez | Sequences of exact analytical solutions for plane waves in graded media[END_REF], being calculated through the Transfer Matrix [START_REF] Lekner | Light in periodically stratified media[END_REF][START_REF] Mostafazadeh | Transfer matrix in scattering theory: A survey of basic properties and recent developments[END_REF]. Due to the presence of loss and gain regions, the system behaves as an active cavity able to amplify the incoming light. We demonstrate how, by controlling the incoming light direction, many different intense colors can be generated by amplifying the radiation at the selected frequency. Only the spatial distribution of the refractive index is controlled, while, for a given point in space, the refractive index is assumed to obey spectrally the Drude dispersion model, i.e. no absorption resonance is treated. As a consequence, the observed colors result from geometric interference only and not from spectral absorption. Interestingly, almost the same dependence of colors on direction is predicted for the transmitted and reflected light. In this work we consider scattering of visible light from a sample with a continuously varying (in space) refractive index with a large gain region and very small regions with absorption. We calculate transmission and reflection spectra through Transfer Matrix theory [START_REF] Lekner | Light in periodically stratified media[END_REF][START_REF] Mostafazadeh | Transfer matrix in scattering theory: A survey of basic properties and recent developments[END_REF]. When SUSY is broken, anomalous transmission/reflection is observed, i.e. the transmission/reflection coefficients become larger than one. This can be seen as a manifestation of the fact that the system is open towards the external environment, i.e. external (electromagnetic) energy is used to generate the anomalous transmission/reflection. Indeed, a spectral-angular filter can be obtained for generating structural colors that are strongly direction dependent.

From the optical potential to the refractive index spatial distribution: Dispersion

We consider scattering of a monochromatic electromagnetic wave by an inhomogeneous dielectric slab of finite thickness along x but infinite extension in the yz-plane, with a varying refractive index n(x) in the x direction. The slab is surrounded by a non-absorbing background medium with refractive index n_b. Only a scalar electric field corresponding to a TE-polarized wave is considered. The DT is used to define optical potentials V(x) that are continuously varying in space and are analytically solvable for the electric field in the corresponding Schrödinger-like equation [START_REF] Krapez | Sequences of exact analytical solutions for plane waves in graded media[END_REF]. Here the Darboux transformation energy is fixed to ε = 0.00005i (in units of the square of a reference wavenumber used for scaling) and n_b = 1, i.e. the optical filter is immersed in air (see [START_REF] Tricoli | Susy designed broken pt-symmetric optical filters[END_REF] for details about the analytical DT). From the optical potential we can calculate the spatial refractive index distribution as n(x) = sqrt(n_b^2 - V(x)/k^2), where k is the vacuum wavenumber of the incoming wave (and V(x) has the same units as ε). This assures a dispersion relation compatible with the Drude model for a sufficiently small imaginary part of the refractive index.
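As a concrete illustration of the two ingredients just described — reconstructing n(x) from the optical potential via n(x) = sqrt(n_b^2 - V(x)/k^2), and evaluating transmission/reflection through a transfer-matrix product — the following minimal sketch discretizes a sampled potential into thin homogeneous layers and applies the standard TE characteristic-matrix method. It is a numerical sketch under stated assumptions, not the analytical Transfer Matrix of the references above: the sampled potential array V, the grid step dx and the unit scaling of k are hypothetical inputs.

```python
# Minimal numerical sketch (assumptions: sampled potential `V`, grid step `dx`,
# wavelengths/lengths expressed in one consistent unit system).
import numpy as np

def index_profile(V, k, n_b=1.0):
    """n(x) = sqrt(n_b^2 - V(x)/k^2); the complex sqrt keeps the gain/loss parts."""
    return np.sqrt(n_b**2 - V / k**2 + 0j)

def te_slab_rt(n_layers, dx, wavelength, theta, n_b=1.0):
    """Complex reflection/transmission amplitudes of the discretized slab (TE wave)."""
    k0 = 2.0 * np.pi / wavelength
    sin_i = n_b * np.sin(theta)                  # conserved by Snell's law
    M = np.eye(2, dtype=complex)
    for n in n_layers:                           # characteristic matrix of each thin layer
        cos_j = np.sqrt(1.0 - (sin_i / n) ** 2 + 0j)
        delta = k0 * n * cos_j * dx              # phase thickness
        p = n * cos_j                            # TE admittance
        M = M @ np.array([[np.cos(delta), -1j * np.sin(delta) / p],
                          [-1j * p * np.sin(delta), np.cos(delta)]])
    p_in = p_out = n_b * np.cos(theta)           # same ambient medium (air) on both sides
    a = (M[0, 0] + M[0, 1] * p_out) * p_in
    b = M[1, 0] + M[1, 1] * p_out
    return (a - b) / (a + b), 2.0 * p_in / (a + b)   # r, t

# Usage idea (sweep angle and wavelength to build a spectral-angular map):
# k0 = 2 * np.pi / wavelength                    # expressed in the same scaled units as V
# r, t = te_slab_rt(index_profile(V, k0), dx, wavelength, np.radians(30.0))
# With gain (Im n < 0 over most of the slab), |r|**2 and |t|**2 may exceed one.
```

Discretizing the continuous profile into thin homogeneous layers recovers the graded-medium result as dx becomes small, and the same multilayer picture underlies the practical 1D multi-layered realization mentioned above.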
In order to break SUSY we truncate the refractive index distribution at some points in space (represented as dots in Fig. 1). Selecting almost only the gain portion of the index (where the imaginary part of the index is negative), we define an active cavity. Importantly, the Kramers-Kronig relations [START_REF] Altarelli | Superconvergence and sum rules for the optical constants[END_REF] are not violated since the imaginary part of the index is sufficiently small. We note that for purely supersymmetric index profiles we would expect reflectionless materials with unit transmission for all wavelengths and incoming directions (this is the case for real and negative transformation energy), since the superpartner of the generated index is void. Indeed, when two index distributions are superpartners, they share the same scattering spectra [START_REF] Miri | Susy-inspired one-dimensional transformation optics[END_REF]. However, here a purely imaginary transformation energy is taken, thus SUSY is broken by construction. In addition, we cut the spatial distribution at some points in order to get angular scattering spectra that are interesting for the production of iridescent structural colors.

Results: Iridescent structural colors in transmission and reflection

We analytically evaluate the transmission and reflection spectra in the visible for all angles of incidence. For the definition of the optical Transfer Matrix method for analytically solvable refractive index spatial distributions see [START_REF] Mostafazadeh | Transfer matrix in scattering theory: A survey of basic properties and recent developments[END_REF][START_REF] Born | Principles of optics: electromagnetic theory of propagation, interference and diffraction of light[END_REF]. Interestingly, for the refractive index distribution of Fig. 1 the net energy balance is positive, indicating a gain effect (the absorption coefficient becomes negative). This can be seen as evidence of the opening of the system toward the external environment (as is known for quantum open systems [START_REF] Persson | Observation of resonance trapping in an open microwave cavity[END_REF][START_REF] Hatano | Some properties of the resonant state in quantum mechanics and its computation[END_REF][START_REF] Garmon | Bound states, scattering states, and resonant states in pt-symmetric open quantum systems[END_REF]). The dielectric system uses external energy in order to increase both transmission and reflection. We show in Fig. 2 the transmission angular spectrum for incoming visible light (in the range [350, 700] nm) associated with the refractive index distribution of Fig. 1. Notably, the optical potential is capable of operating spectral-angular filtering and is particularly efficient in selecting a narrow high-intensity peak which is strongly dependent on the incoming direction. Remarkably, the spectral-angular filter has almost the same performance in the reflection configuration (not shown). We also calculate the RGB colors associated with different spectra for a particular incoming direction (we used a standard computer graphics RGB procedure as described in [START_REF]Javascript code to convert light wavelength to color[END_REF]).
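As an aside on this color-rendering step: a common, generic way to map a narrow transmission peak to a display color is the piecewise-linear wavelength-to-RGB approximation sketched below. It is an illustrative approximation only, with standard textbook break points; it is not necessarily the exact procedure of the reference cited above.

```python
# Generic piecewise-linear wavelength (nm) -> RGB approximation (values in [0, 1]).
# Simplified: no gamma correction and no intensity falloff near the spectrum edges.
def wavelength_to_rgb(wl):
    if 380 <= wl < 440:
        return (440 - wl) / 60.0, 0.0, 1.0
    if 440 <= wl < 490:
        return 0.0, (wl - 440) / 50.0, 1.0
    if 490 <= wl < 510:
        return 0.0, 1.0, (510 - wl) / 20.0
    if 510 <= wl < 580:
        return (wl - 510) / 70.0, 1.0, 0.0
    if 580 <= wl < 645:
        return 1.0, (645 - wl) / 65.0, 0.0
    if 645 <= wl <= 780:
        return 1.0, 0.0, 0.0
    return 0.0, 0.0, 0.0

# e.g. weight each wavelength's RGB by the computed transmission and sum over the
# spectrum to obtain the perceived color for a given incidence angle.
```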
Interestingly, the proposed active optical device is able to generate structural colors from red to purple by changing the incoming direction (or, alternatively, for isotropic illumination, by changing the observation direction). However, for large angles the colors are less intense due to the absence of large radiation amplification.

Conclusions

We investigated the possibility of generating structural colors strongly dependent on direction (thus giving rise to iridescence) by using broken-SUSY 1D spatial distributions of the refractive index. Interestingly, the same device can be used for both transmission and reflection configurations. These results are only obtainable through the use of gain, i.e. an active cavity is defined which can efficiently accumulate the external electromagnetic energy in order to amplify the desired color channels. Since large surfaces with a 1D multi-layered structure can be produced with nanometer precision in the profile, a direct application of the reported results could be the design of (strongly direction dependent) active color filters for the generation of colors with high intensity and bright tonalities.

Fig. 1. The refractive index distribution n(x). (a) Scattering geometry and a physical realization of n(x), a graded refractive index distribution with a large gain region and tiny absorption parts. (b) The refractive index distribution variation ∆n(x) = n(x) - n_b. The blue line represents Re[∆n(x)] while the yellow line represents Im[∆n(x)]. The region between the red dots is used for the analytical calculation of the spectra.

Fig. 2. Transmission angular spectrum of the refractive index distribution shown in Fig. 1. The corresponding structural colors (in RGB scheme) obtained for incidence angles from 0° to 89° with a step of 5° are shown on the left side of the panel, while on the right side is the transmitted normalized intensity (much greater than one due to amplification).
04100062
en
[ "math" ]
2024/03/04 16:41:20
2023
https://hal.science/hal-04100062/file/hal2023may.pdf
Leif Arkeryd email: [email protected] The paper proves existence of stationary solutions to the Boltzmann equation in a bounded set of R 2 for given indata, hard forces and truncation in the collision kernel for small velocities and close to parallel colliding velocities. It does not use any averaging in velocity lemma and uses techniques from the discrete velocity stationary case recently developped in [7]-[8]-[9], where the averaging in velocity lemmas are not valid. Stationary solutions to the Boltzmann equation in the plane. 1 Introduction. Consider the stationary Boltzmann equation in Ω ⊂ R 2 , v • ∇ z f (z, v) = Q(f, f ), z ∈ Ω, v ∈ R 2 , (1.1) where Ω is a strictly convex domain with C 1 boundary. The nonnegative function f represents the density of a rarefied gas with z the position and v the velocity. The operator Q is the nonlinear Boltzmann collision operator with angular cut-off and a truncation for small velocities and close to parallel colliding velocities, From now on denote χ η B by B. The inward and outward boundaries in phase space are Q(f, f )(z, v) = R 2 S 1 χ η (v, v * , ω) B(v -v * , ω) f (z, v )f (z, v * ) -f (z, v)f (z, v * ) dv * dω. (1.2) S 1 is the unit circle in R 2 , v = v+v * 2 + |v-v * | 2 ω, v * = v+v * 2 -|v-v * | 2 ω. For η ∈]0, ∂Ω + = {(z, v) ∈ ∂Ω × R 2 ; v • n(z) > 0}, ∂Ω -= {(z, v) ∈ ∂Ω × R 2 ; v • n(z) < 0}, where n(z) denotes the inward normal on ∂Ω. Given a function f b defined on ∂Ω + , solutions f to (1.1) are sought with f (z, v) = f b (z, v), (z, v) ∈ ∂Ω + . (1.5) For any (z, v) ∈ Ω × R 2 , denote by s + (z, v) = inf{s > 0; (z -sv, v) ∈ ∂Ω + }, s -(z, v) = inf{s > 0; (z + sv, v) ∈ ∂Ω -}, z + (z, v) = z -s + (z, v)v, z -(z, v) = z + s -(z, v)v. (1.6) Solutions are understood in mild form, i.e. f (z, v) = f b (z + (z, v), v) + s + (z,v) 0 Q(f, f )(z + (z, v) + sv, v)ds, a.a. (z, v) ∈ Ω × R 2 . (1.7) The main result of the paper is the following. Theorem 1.1 Let f b be a non negative measurable function such that ∂Ω + v • n(z) 1 + v 2 + ln f b f b (z, v)dσ(z)dv < ∞. (1.8) Then equation (1.1) has a non negative solution satisfying the boundary condition (1.5). For the stationary Boltzmann equation the control of mass and entropy is not straightforward, contray to the evolutionary case. Existence results in the slab, i.e. one-dimensional spatial and three-dimensional velocity frame, together with an invariance with respect to the two remaining space variables of the distribution function were first established for the nonlinear Boltzmann equation. Stationary integrable solutions to the Boltzmann equation in a slab have been proven in [START_REF] Arkeryd | L 1 solutions to the stationary Boltzmann equation in a slab[END_REF], [START_REF] Arkeryd | The stationary Boltzmann equation in a slab, with given weighted mass for hard and soft forces[END_REF] and [START_REF] Arkeryd | A compactness result related to the stationary Boltzmann equation in a slab, with applications to the existence theory[END_REF], for different boundary conditions, bounds on the entropy production term and a weighted moment of the distribution function giving control of the entropy. For higher space dimension, stationary unsigned solutions close to Maxwellians were constructed in convex domains [START_REF] Guiraud | Problème aux limites intérieur pour l'équation de Boltzmann linéaire[END_REF], [START_REF] Guiraud | Problème aux limites intérieur pour l'équation de Boltzmann en régime stationnaire, faiblement non linéaire[END_REF]. 
In [START_REF] Esposito | Non-Isothermal Boundary in the Boltzmann Theory and Fourier Law[END_REF], the existence and uniqueness of the stationary solution to the Boltzmann equation close to a uniform Maxwellian have been proven in a bounded domain of R n , 1 ≤ n ≤ 3, for diffuse reflection boundary conditions. Its hydrodynamic limit to a solution to the steady incompressible Navier-Stokes-Fourier system has been performed in [START_REF] Esposito | Stationary solutions to the Boltzmann equation in the hydrodynamic limit[END_REF]. Existence of stationary solutions to the Boltzmann equation in a bounded domain of R n , n ≥ 1, and given indata has been proven in [START_REF] Arkeryd | The stationary Boltzmann equation in R n with given indata[END_REF]. There, scaling arguments from [START_REF] Arkeryd | On the stationary Boltzmann equation in R n[END_REF] were used. In this paper, we prove existence of solutions to the stationary Boltzmann equation in the plane with the help of the entropy production term and the construction of 'good' characteristics where the distribution function is bounded and 'bad' characteristics of arbitrarily small measure. This is inspired by recent results for discrete velocity models for the Boltzmann equation where averaging lemmas do not hold and new arguments are required. In [START_REF] Arkeryd | Stationary solutions to the two-dimensional Broadwell model[END_REF], [START_REF] Arkeryd | On stationary solutions to normal, coplanar discrete Boltzmann equation models[END_REF], [START_REF] Arkeryd | Discrete velocity Boltzmann equations in the plane: Stationary solutions[END_REF] a weaker property than L 1 compactness of averages in velocity, i.e. the L 1 compactness of the integrated collision frequencies of a sequence of approximations is proven. It strongly depends on the two-dimensional spatial dimension. In this paper we use the tools developped in [START_REF] Arkeryd | Stationary solutions to the two-dimensional Broadwell model[END_REF], [START_REF] Arkeryd | On stationary solutions to normal, coplanar discrete Boltzmann equation models[END_REF], [START_REF] Arkeryd | Discrete velocity Boltzmann equations in the plane: Stationary solutions[END_REF] and do not use any averaging lemma. Work is in progress to fill a gap in the proof of Lemma 4.1 of [START_REF] Arkeryd | On the evolutionary velocity-discrete Boltzmann equation[END_REF] that uses these techniques in the discrete velocity evolutionary case. The construction of a first sequence of approximations with damping and convolutions is performed in Section 2. In Section 3, the damping and convolutions are removed, leading to a more involved sequence of approximations. In Section 3, the phase space is split into 'good' characteristics where the approximations are uniformly bounded and 'bad' characteristics of arbitrarily small measure. In Section 4, the L 1 compactness of the integrated collision frequency sequence is proven. The passage to the limit in the mild form satisfied by the approximations is performed in Section 5. 2 First approximations. In the paper we denote by c constants that do not depend on approximations. To emphasize their dependence on the indatum f b , we sometimes denote them by c b . We use the following approximation scheme. Let (B α ) α∈]0,1[ , be a family of C ∞ regularizations of B. Let (ϕ α ) α∈]0,1[ be mollifiers defined from ϕ ∈ C ∞ 0 (R 4 ) such that ϕ(z, v) = 0 for |z| ≥ 1 or |v| ≥ 1, ϕ(z, v)dzdv = 1, by ϕ α (z, v) = 1 α 4 ϕ( z α , v α ). 
Outside the boundary the function to be convolved with µ α , is continued in the normal direction by its boundary value. Let μk be a smooth mollifier on ∂Ω × R 2 in a ball of radius 1 k . Denote by f bk = min{f b , k} * μk . Lemma 2.1 For every (α, k) ∈]0, 1[×N * , there is a non negative solution F to αF + v • ∇ z F = B α F 1 + F k (z, v ) F * ϕ α 1 + F * ϕα k (z, v * ) - F 1 + F k (z, v) F * ϕ α 1 + F * ϕα k (z, v * ) dv * dω, (z, v) ∈ Ω × R 2 , (2.1) F (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + . (2.2) Proof of Lemma 2.1. It follows the lines of the proofs in Section 2 in [START_REF] Arkeryd | On stationary solutions to normal, coplanar discrete Boltzmann equation models[END_REF] that we refer to for details. Let (α, k) ∈]0, 1[×N * be given. Let K be the closed and convex subset of L 1 (Ω × R 2 ) defined by K = {f ∈ L 1 + (Ω × R 2 ); f (z, v)dzdv ≤ 1 α ∂Ω + v • n(z)f b (z, v)dσ(z)dv}. Define the map T from K into K by T (f ) = F , where F is the solution to αF (z, v) + v • ∇ z F (z, v) = R 2 ×S 1 B α F 1 + F k (z, v ) f * ϕ α 1 + f * ϕα k (z, v * ) - F 1 + F k (z, v) f * ϕ α 1 + f * ϕα k (z, v * ) dv * dω, (z, v) ∈ Ω × R 2 , (2.3) F (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + . (2.4) F = T (f ) can be obtained as the limit in L 1 + (Ω × R 2 ) of the sequence (F q ) q∈N defined by F 0 = 0 and αF q+1 + v • ∇ z F q+1 = R 2 ×S 1 B α F q 1 + F q k (z, v ) f * µ α 1 + f * µα k (z, v * ) - F q+1 1 + F q k (z, v) f * µ α 1 + f * µα k (z, v * ) dv * dω , (2.5) F q+1 (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + , q ∈ N. (2.6) In exponential form F q+1 can be written as F q+1 (z, v) = f bk (z + (z, v), v)e -αs + (z,v) e -0 -s + (z,v) 1 1+ F q k (z+sv,v) Bα f * µα 1+ f * µα k (z+sv,u * )du * dγ ds + 0 -s + (z,v) B α F q 1 + F q k (z + sv, v ) f * µ α 1 + f * µα k (z + sv, v * )e αs e s 0 1 1+ F q k (z+rv,v) Bα f * µα 1+ f * µα k (z+rv,u * )du * dγ dr . (2.7) The sequence (F q ) q∈N is monotone. Indeed, F 0 ≤ F 1 , by the exponential form of F 1 . If F q ≤ F q+1 , then it follows from the exponential forms of F q+1 and F q+2 that F q+1 ≤ F q+2 . Moreover, α F q+1 (z, v)dzdv ≤ ∂Ω + v • n(z)f bk (z, v)dσ(z)dv + B α F q -F q+1 1 + F q k (z, v) f * µ α 1 + f * µα k (z, v * )dzdvdv * dω, so that F q+1 (z, v)dzdv ≤ 1 α ∂Ω + v • n(z)f b (z, v)dσ(z)dv, q ∈ N. (2.8) By the monotone convergence theorem, (F q ) q∈N converges in L 1 (Ω × R 2 ) to a solution F of (2.3)-(2.4). The solution of (2.3)-(2.4) is unique in the set of non negative functions. Indeed, let G be a non negative solution of (2.3)-(2.4). It follows by induction that F q ≤ G, q ∈ N. (2.9) Indeed, (2.9) holds for q = 0, since G ≥ 0. Assume (2.9) holds for q. Using the exponential form of F q+1 implies F q+1 ≤ G. Consequently, F ≤ G. (2.10) Moreover, subtracting the equation satisfied by G from the equation satisfied by F , and integrating the resulting equation on Ω × R 2 leads to α Ω×R 2 (G -F )(z, v)dzdv + ∂Ω - |v • n(z)|(G -F )(z, v)dσ(z)dv = 0. (2.11) It results from (2.10)-(2.11) that G = F . The map T is continuous in the L 1 -norm topology (cf [6] pages 124-5). Namely, let a sequence (f q ) q∈N in K converge in L 1 (Ω × R 2 ) to f ∈ K. Set F q = T (f q ). Because of the uniqueness of the solution to (2.3)-(2.4), it is enough to prove that there is a subsequence of (F q ) converging to F = T (f ). Now there is a subsequence of (f q ), still denoted (f q ), such that decreasingly (resp. increasingly) (G q ) = (sup r≥q f r ) (resp. (g q ) = (inf r≥q f r )) converges to f in L 1 . Let (S q ) (resp. 
(s q )) be the sequence of solutions to αS q + v • ∇ z S q = B α S q 1 + S q k (z, v ) G q * µ α 1 + G q * µα k (z, v * ) - S q 1 + S q k (z, v) g q * µ α 1 + g q * µα k (z, v * ) dv * dω , (2.12 ) S q (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + , (2.13 ) αs q + v • ∇ z s q = B α s q 1 + s q k (z, v ) g q * µ α 1 + g q * µα k (z, v * ) - s q 1 + s q k (z, v) G q * µ α 1 + G q * µα k (z, v * ) dv * dω , (2.14) s q (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + . (2.15) (S q ) is a non-increasing sequence, since that holds for the successive iterates defining the sequence. Then (S q ) decreasingly converges in L 1 to some S. Similarly (s q ) increasingly converges in L 1 to some s. The limits S and s satisfy (2.3)-(2.4). It follows by uniqueness that s = F = S, hence that (F q ) converges in L 1 to F . The map T is compact in the L 1 -norm topology. In [START_REF] Arkeryd | On stationary solutions to normal, coplanar discrete Boltzmann equation models[END_REF] an averaging lemma was used. Here we replace it by the following argument. Let (f l ) l∈N be a bounded sequence of L 1 and F l = T (f l ). Denote by Fl = F l 1+ F l k , fl = f l * ϕ j 1+ f l * ϕ j k , by B(0, V ) the open ball in R 2 centered at the origin and of radius V > 0, and by I(α, β) the interval with end points (α, β) ∈ R 2 . (F l ) l∈N is the sum of f bk (z + (z, v), v) + 0 -s + (z,v) B α Fl (z + sv, v ) fl (z + sv, v * )dv * dωds l∈N * , (2.16) and - 0 -s + (z,v) B α Fl (z + sv, v) fl (z + sv, v * )dv * dωds l∈N * . (2.17) In order to prove that (2.16) is compact in L 1 , it is sufficient to prove that for any V > 0 and µ ∈]0, 1[, 0 -s + (z,v) B(0,V )×Aµ B α Fl (z + sv, v ) fl (z + sv, v * )dv * dωds l∈N * , (2.18) is compact in L 1 (Ω × B(0, V )) where A µ = {ω ∈ S 1 ; | (v, ω)| ≥ µ and |π -(v, ω)| ≥ µ}. (2.19) Indeed, noticing that v is parallel to v if and only if ω = ± v |v| , and expressing ω by its angle with the vector v |v| , the integral over Ω × B(0, V ) of 0 -s + (z,v) B(0,V )×A c µ B j Fl (z + sv, v ) fl (z + sv, v * )dv * dωdsdzdv is smaller than ck 2 V 2 µ. The sequence (2.18) is uniformly bounded in L 1 (Ω×B(0, V )). Let us prove that it is uniformly equiintegrable with respect to the z variable. By the restriction to A µ in (2.18), any (v, v ) ∈ R 2 × R 2 considered when integrating the absolute value of (2.18) over Ω × B(0, V ) forms a basis in R 2 . For any h ∈ R 2 , denote by (a(h), b(h)) its coordinates in this basis. Split the difference of (2.18) between (z, v) and (z + h, v) into the three following terms, I(-s + (z,v),-s + (z+h,v)) Aµ B α Fl (z + h + sv, v ) fl (z + h + sv, v * )dv * dωds, 0 -s + (z,v) Aµ B α Fl (z + h + sv, v ) fl (z + h + sv, v * )dv * dω - Aµ B α Fl (z + b(h)v + sv, v ) fl (z + b(h)v + sv, v * )dv * dω ds = I(-s + (z,v)-a(h),-s + (z,v))∪I(0,a(h)) Aµ B α Fl (z + h + sv, v ) fl (z + h + sv, v * )dv * dω ds, 0 -s + (z,v) Aµ B α Fl (z + b(h)v + sv, v ) fl (z + b(h)v + sv, v * ) -fl (z + sv, v * ) dv * dω ds, that tend to zero h → 0 when integrated over Ω × {v ∈ R 2 ; |v| < V }, and 0 -s + (z,v) Aµ B α Fl (z + b(h)v + sv, v ) -Fl (z + sv, v ) fl (z + sv, v * )dv * dωds. (2.20) Notice that the integrand in the first line of (2.20) is a directional derivative in the direction v . 
Consequently, (2.20) is equal to 0 -s + (z,v) Aµ B α Fl (z + sv, v ) fl (z + sv, v * )e -αb(h) exp(- b(h) 0 B α fl (z + sv + rv , u * )du * dγ (1 + F l k )(z + sv + rv , v ) dr) -1 dv * dωds + 0 -s + (z,v) Aµ B α fl (z + sv, v * ) b(h) 0 B α Fl (z + sv + rv , V ) fl (z + sv + rv , V * )du * dγ e α(r-b(h)) exp - b(h) r B α fl (z + sv + tv , u * )du * dγ (1 + F l k )(z + sv + tv , v ) dt dr dv * dωds. (2.21) Here, V (resp V * ) denotes v -(v -u * , γ)γ (resp. u * + (v -u * , γ)γ). (2.21) tends to zero when h → 0 when integrated over Ω × B(0, V ) in absolute value since all integrands are uniformly bounded and the domains of integration are of order h. This ends the proof of the uniform equiintegrability of (2.18) w.r.t. the z variable. The proof of its uniform equiintegrability w.r.t. the v variable is analogous. The L 1 compactness of (2.17) can be proven analogously. Hence the sequence (F l ) l∈N * is compact in L 1 . And so, the Schauder fixed point theorem applies to T , leading to a solution F of (2.1)-(2.2). 3 Removal of the damping and convolutions. For any k ∈ N * , denote by Q + k (resp. Q k , resp. ν k (F ), resp. D k ) the approximate gain term (resp. collision operator, resp. collision frequency, resp. entropy production term) defined by Q + k (F, F )(v) = R 2 ×S 1 B F 1 + F k (v ) F 1 + F k (v * )dv * dω, (3.1) ν k (F )(v) = 1 1 + F (v) k B F 1 + F k (v * )dv * dω, (3.2) Q k (F, F ) = Q + k (F, F ) -F ν k (F ), D k (v) = R 2 ×S 1 B F k 1 + F k k (v ) F k 1 + F k k (v * ) (v * ) - F k 1 + F k k (v) F k 1 + F k k (v * ) ln F k 1 + F k k (v ) F k 1 + F k k (v * ) (v * ) 1 + F k k F k (v) 1 + F k k F k (v * ) dv * dω . (3.3) For any (α, k) ∈]0, 1[×N * , denote by F α,k the solution to (2.1)-(2.2) obtained in the previous section. (F α,k ) α∈]0,1[ is weakly compact in L 1 loc (Ω × R 2 ) since it is bounded by a multiple of k 2 . Denote by F k the limit of a converging subsequence when α → 0. In the next lemma we prove that for a subsequence, the convergence is strong in L 1 (Ω × R 2 ). Lemma 3.1 There is a sequence (α q ) q∈N tending to zero when q → +∞, such that when q → +∞, (F αq,k ) q∈N strongly converges in L 1 (Ω × R 2 ) to F k . Moreover, F k is a solution to v • ∇ z F k = Q k (F k , F k ), (z, v) ∈ Ω × R 2 , (3.4) F k (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + , (3.5) and (1 + v 2 )F k (z, v)dzdv ≤ c b , (3.6) D k (z, v)dzdv ≤ c b , (3.7 ) ∂Ω - F k (z, v) | v • n(z) | dσ(z)dv + ∂Ω -,F k ≤k F k ln F k (z, v) | v • n(z) | dσ(z)dv + ln k 2 ∂Ω -,F k ≥k F k | v • n(z) | dσ(z)dv ≤ c b , k ∈ N * . (3.8) Proof of Lemma 3.1 Consider the approximation scheme (f α,ρ ) ρ∈N of F α,k , f α,0 = 0, (3.9) αf α,ρ+1 (z, v) + v • ∇ z f α,ρ+1 (z, v) = B α F α,k 1 + F α,k k (z, v ) F α,k * µ α 1 + F α,k * µα k (z, v * ) - f α,ρ+1 1 + f α,ρ+1 k (z, v) f α,ρ * µ α 1 + f α,ρ * µα k (z, v * ) dv * dω, (3.10) f α,ρ+1 (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + , ρ ∈ N. (3.11) f α,1 is explicitly given in terms of F α,k . It follows from the exponential forms of F α,k and f α,1 that F α,k ≤ f α,1 , α ∈]0, 1[. The sequence (f α,ρ ) ρ≥2 is constructed as follows. Denote by S the map from (L 1 (Ω × R 2 )) 2 mapping (X, Z) into W = S(X, Z) ∈ L 1 (Ω × R 2 ) solution to αW + v • ∇ z W = B α F α,k 1 + F α,k k (z, v ) F α,k * µ α 1 + F α,k * µα k (z, v * ) - W 1 + X k (z, v) Z * µ α 1 + Z * µα k (z, v * ) dv * dω , W (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + . Denote by f α,1,0 = S(0, f α,1 ), f α,1,r = S(f α,1,r-1 , f α,1 ), F α,k,0 = S(0, F α,k ), F α,k,r = S(F α,k,r-1 , F α,k ), r ∈ N * . First, f α,1,0 ≤ F α,k,0 . 
Then the sequence (f α,1,r ) r∈N (resp. (F α,k,r ) r∈N ) is increasing with limit f α,2 (resp. F α,k ). It follows from f α,1,r ≤ F α,k,r , r ∈ N, that f α,2 ≤ F α,k . (3.12) Let f α,2,0 := S(0, f α,2 ), f α,2,r := S(f α,2,r-1 , f α,2 ), r ∈ N * . It follows from (3.12) that f α,2,0 ≥ F α,k,0 . The sequence (f α,2,r ) r∈N is also increasing with limit f α,3 and with f α,2,r ≥ F α,k,r . Hence f α,3 ≥ F α,k . From here by induction on ρ, it holds that f α,2ρ ≤ f α,2ρ+2 ≤ F α,k ≤ f α,2ρ+3 ≤ f α,2ρ+1 , α ∈]0, 1[, ρ ∈ N. (3.13) By induction on r, for each r the sequence (f α,1,r ) α∈]0,1[ is translationally equicontinuous in α. The limit sequence (f α,2 ) α∈]0,1[ is also translationally equicontinuous. This is so, since given > 0, r and then h 0 can be taken so that (f α,2 -f α,1,r )(z, v)dzdv < and |f α,1,r (z + h, v + h) -f α,1,r (z, v)|dzdv < , |h| < h 0 , | h| < h 0 . It can analogously be proven that for each ρ ∈ N, (f α,ρ ) α∈]0,1[ is translationally equicontinuous in α. Let (α q ) q∈N be a sequence tending to zero. Take a subsequence in (α q ) q∈N , still denoted by (α q ) q∈N , such that (f αq,2 ) q∈N converges in L 1 to some f 0,2 when q → +∞. Continuing by induction gives a sequence (f 0,ρ ) ρ∈N satisfying f 0,2ρ ≤ f 0,2ρ+2 ≤ F k ≤ f 0,2ρ+3 ≤ f 0,2ρ+1 , ρ ∈ N, (3.14) v • ∇ z f 0,ρ+1 (z, v) = G(z, v) -B f 0,ρ+1 1 + f 0,ρ+1 k (z, v) f 0,ρ 1 + f 0ρ k (z, v * )dv * dω, f 0,ρ+1 (z, v) = f bk (z, v), (z, v) ∈ ∂Ω + . Here, G is the weak L 1 limit when α → 0 of the gain term B α F α,k 1 + F α,k k (z, v ) F α,k * µ α 1 + F α,k * µα k (z, v * )dv * dω . In particular, (f 0,2ρ ) ρ∈N (resp. (f 0,2ρ+1 ) ρ∈N ) non decreasingly (resp. non increasingly) converges in L 1 to some g (resp. h) when ρ → +∞. The limits satisfy 0 ≤ g ≤ F k ≤ h, (3.15) v • ∇ z h = G -B h 1 + h k (z, v) g 1 + g k (z, v * )dv * dω, (3.16) v • ∇ z g = G -B g 1 + g k (z, v) h 1 + h k (z, v * )dv * dω, (3.17) (h -g)(z, v) = 0, (z, v) ∈ ∂Ω + . Subtracting (3.17) from (3.16) and integrating the resulting equation on Ω × R 2 gives that ∂Ω - | v • n(Z) | (h -g)(Z, v)dσ(Z)dv = 0, so that h -g = 0 also on ∂Ω -. Hence, s -(z,v) -s + (z,v) h(z + sv, v)e s 0 Bh(z+rv,u)dudγdr B(h -g)(z + sv, v * )dv * dω ds = 0, (z, v) ∈ Ω × R 2 . (3.18) (3.15) and (3.18) imply that g = h and is equal to F k . (F αq,k ) q∈N converges to F k in L 1 (Ω × R 2 ) when q → +∞. Indeed, given η > 0, choose ρ 0 big enough so that f 0,2ρ 0 +1 -f 0,2ρ 0 L 1 < η and f 0,2ρ 0 -F k L 1 < η, then q 0 big enough, so that f αq,2ρ 0 +1 -f 0,2ρ 0 +1 L 1 ≤ η and f αq,2ρ 0 -f 0,2ρ 0 L 1 ≤ η, q ≥ q 0 . Then split F αq,k -F k L 1 as follows, F αq,k -F k L 1 ≤ F αq,k -f α,2ρ 0 L 1 + f α,2ρ 0 -f 0,2ρ 0 L 1 + f 0,2ρ 0 -F k L 1 ≤ f α,2ρ 0 +1 -f α,2ρ 0 L 1 +2η by (3.13) ≤ f α,2ρ 0 +1 -f 0,2ρ 0 +1 L 1 + f 0,2ρ 0 +1 -f 0,2ρ 0 L 1 + f 0,2ρ 0 -f α,2ρ 0 L 1 +2η ≤ 5η, q ≥ q 0 . It remains to prove (3.6)-(3.8). Multiplying (3.4) by 1 + v 2 and integrating over Ω × R 2 , leads to ∂Ω - |v • n(z)|(1 + v 2 )F k (z, v)dσ(z)dv ≤ ∂Ω + v • n(z)(1 + v 2 )f b (z, v)dσ(z)dv. (3.19) Denote by (v 1 , v 2 ) the components of v. Multiply (3.4) by v 1 and integrate it over Ω a × R 2 , where Ω a is the part of Ω with z 1 < a. Set S a = Ω ∩ {z 1 = a} and ∂Ω a = ∂Ω ∩ Ωa . This gives Sa×R 2 v 2 1 F k (a, z 2 , v)dz 2 dv = - ∂Ωa×R 2 v 1 v • n(z)F k (z, v)dzdv. Ω×R 2 v 2 1 F k (z, v)dzdv ≤ c b . (3.21) ( v 2 2 F k (z, v)dzdv) k∈N * is analogously bounded from above. Thus the boundedness of energy holds. Recalling the small velocity cut-off χ η defined in (1.3), this in turn implies the boundedness of mass. 
The boundedness of the mass outflow in (3.8) follows from an integration of (3.4) on Ω × R 2 . Finally, Green's formula for F k ln F k 1+ F k k implies that for some c b > 0, ∂Ω - |v • n(z)| F k ln F k -(k + F k ) ln(1 + F k k ) (z, v)dσ(z)dv + Ω×R 2 D k (z, v)dzdv ≤ c b , k ∈ N * . Moreover, k ∂Ω - ln(1 + F k k )(z, v) | v • n(z) | dσ(z)dv ≤ ∂Ω - F k (z, v) | v • n(z) | dσ(z)dv ≤ c b . Hence ∂Ω - F k ln F k 1 + F k k (z, v) | v • n(z) | dσ(z)dv + Ω×R 2 D k (z, v)dzdv ≤ c b . (3.22) It holds that ∂Ω - F k ln(1 + F k k )(z, v) | v • n(z) | dσ(z)dv ≤ ∂Ω -,F k ≤k F k ln(1 + F k k )(z, v) | v • n(z) | dσ(z)dv + ∂Ω -,F k ≥k F k ln(1 + F k k )(z, v) | v • n(z) | dσ(z)dv ≤ ln 2 ∂Ω - F k (z, v) | v • n(z) | dσ(z)dv + ∂Ω -,F k ≥k F k ln 2F k k (z, v) | v • n(z) | dσ(z)dv ≤ c b + ∂Ω -,F k ≥k F k ln F k (z, v) | v • n(z) | dσ(z)dv -ln k 2 ∂Ω -,F k ≥k F k (z, v) | v • n(z) | dσ(z)dv. Together with (3.22), this implies that ∂Ω -,F k ≤k F k ln F k (z, v) | v • n(z) | dσ(z)dv + ln k 2 ∂Ω -,F k ≥k F k | v • n(z) | dσ(z)dv ≤ c b . This ends the proof of Lemma 3.1. Compactness of the integrated collision frequency. Denote by Q + (f, f ) (resp. ν(f )) the gain term (resp. the collision frequency) of the nonlinear Boltzmann collision operator, Q + (f, f )(z, v) = Bf (z, v )f (z, v * )dv * dω, ν(f )(z, v) = Bf (z, v * )dv * dω, (4.1) so that Q(f, f ) = Q + (f, f ) -f ν(f ). Lemma 4.1 For any V > 1, the sequence s -(z,v) -s + (z,v) ν(F k )(z + sv, v)ds k∈N * is uniformly bounded by cV 2 on Ω × {v ∈ R 2 ; η < |v| < V }. Proof of Lemma 4.1. For any (Z, v) ∈ ∂Ω + with η < |v| < V , the truncation χ η in B implies that ν(F k )(Z + sv, v) = |v * |>η B(v, v * , ω)F k (Z + sv, v * )dv * dω ≤ c χ η (v, v * , ω)(v 2 + v 2 * )F k (Z + sv, v * )dv * ≤ c V 2 + η 2 η 4 v 2 * sin 2 (v, v * )F k (Z + sv, v * )dv * . (4.2) Let Ω Z,v be one of the two subsets of Ω split by the segment [Z, Z +s -(Z, v)v] and ∂Ω Z,v = ∂Ω ∩ ΩZ,v . Let v ⊥ be ) ∈ Ω Z,v × R 2 . This gives s -(Z,v) 0 (v * • v ⊥ ) 2 |v| 2 F k (Z + sv, v * )dv * ds ≤ ∂Ω Z,v ×R 2 v * • v ⊥ |v| v * • n(z) F k (z, v * )dσ(z)dv * ≤ c b , (Z, v) ∈ ∂Ω + . (4.3) Together with (4.2) this ends the proof of the lemma. Lemma 4.2 The sequence (F k ) k∈N * is weakly compact in L 1 . Proof of Lemma 4.2. By (3.6) it is sufficient to prove that for any V > 1 the sequence (F k /Ω×{v∈R 2 ;|v|≤V } ) k∈N * is weakly compact in L 1 (Ω × {v ∈ R 2 ; η ≤ |v| ≤ V }). It follows from the exponential form of F k (z, v) from the outgoing boundary ∂Ω -, F k (z, v) = F k (z -(z, v), v) exp s -(z,v) 0 ν(F k )(z + sv, v)ds - s -(z,v) 0 Q + (F k , F k )(z + sv, v) exp s 0 ν(F k )(z + rv, v)dr ds, and Lemma 4.1 that F k (z, v) ≤ e cV 2 F k (z -(z, v), v), z ∈ Ω, η ≤ |v| ≤ V. (4.4) By (3.8), (F k / ∂Ω -) k∈N * is weakly compact in L 1 |v•n(Z)| (∂Ω -). This completes the proof of the lemma. Lemma 4.3 For k ∈ N * and ∈]0, 1[, there is a subset Ω k, of characteristics of Ω × {v ∈ R 2 ; η < |v| < 1 }, with measure smaller than c , such that F k (z, v) ≤ c 3 e c 2 , (z, v) ∈ Ω × {v ∈ R 2 ; η < |v| < 1 } \ Ω k, . (4.5) Proof of Lemma 4.3. Let ∈]0, 1[ be given. By the strict convexity of Ω and its C 1 regularity, the set ω 1 := {(Z, v) ∈ ∂Ω × R 2 ; η < |v| < 1 and -2 < v • n(Z) < 0} (4.6) is of measure smaller than c for some constant c > 0 and small enough. It follows from (3.8) that the measure of the subset of ∂Ω -where η < |v| < 1 , v • n(Z) < -2 and F k (Z, v) > 1 3 , is smaller than c b . Denote this set by ω k, 2 . Define Ω k, as Ω k, = {(Z + sv, v); (Z, v) ∈ ω 1 ∪ ω k, 2 , s ∈ [-s + (Z, v), 0]}. 
(4.7) Together with (4.4), this ends the proof of the lemma. For any (k, ) ∈ N * ×]0, +∞[, denote by χ k, the characteristic function of (Ω k, ) c . Lemma 4.4 For any V > 0 the sequence ( s -(z,v) -s + (z,v) ν(F k )(z + sv, v)ds) k∈N * is compact in L 1 (Ω × {v ∈ R 2 ; |v| < V }). Proof of Lemma 4.4. Let V > 0 be given. By (3.6) and Lemma 4.2, it is sufficient to prove that for any > 0 and W > 0, the sequence s -(z,v) -s + (z,v) |v * |<W Bχ k, F k (z + sv, v * )dv * ds k∈N * (4.8) is compact in L 1 (Ω × {v ∈ R 2 ; |v| < V }). By (3.6) this sequence is bounded in L 1 . Let us prove that it is uniformly equiintegrable w.r.t. the z variable. For any (α, β) ∈ R 2 , denote by I(α, β) the interval with end points α and β. For any h ∈ R 2 , split s -(z+h,v) -s + (z+h,v) Bχ k, F k (z + h + sv, v * )dv * ds - s -(z,v) -s + (z,v) Bχ k, F k (z + sv, v * )dv * ds (4.9) into I(-s + (z,v),-s + (z+h,v))∪I(s -(z,v),s -(z+h,v)) Bχ k, F k (z + h + sv, v * )dv * ds, (4.10) which absolute value tends to zero when integrated over Ω × R 2 and h → 0 by the continuity of (s + , s -) on Ω × {v; |v| ≤ V }, and s -(z,v) -s + (z,v) Bχ k, F k (z + h + sv, v * ) -F k (z + sv, v * ) dv * ds. (4.11) Almost every (v, v * ) ∈ R 2 × R 2 considered when integrating the absolute value of (4.11) over Ω × {v ∈ R 2 ; |v| < V } forms a basis in R 2 . Denote by (a(h), b(h)) the coordinates of h in this basis. Split (4.11) into s -(z,v) -s + (z,v) Bχ k, F k (z + h + sv, v * ) -F k (z + b(h)v * + sv, v * ) dv * ds = s -(z,v) -s + (z,v) Bχ k, F k (z + h + sv, v * ) -F k (z + h + (s -a(h))v, v * ) dv * ds = I(-s + (z,v)-a(h),-s + (z,v))∪I(s -(z,v),s -(z,v)-a(h)) Bχ k, F k (z + h + sv, v * )dv * ds, (4.12) and s -(z,v) -s + (z,v) Bχ k, F k (z + b(h)v * + sv, v * ) -F k (z + sv, v * ) dv * ds = s -(z,v) -s + (z,v) Bχ k, f bk (z + (z + b(h)v * + sv, v * ), v * ) -f bk (z + (z + sv, v * ), v * ) dv * ds + s -(z,v) -s + (z,v) B I(0,b(h)) χ k, Q k (F k , F k )(z + sv + rv * , v * )drdv * ds. (4.13) When integrated over Ω × {v ∈ R 2 ; |v| < V }, the limit when h → 0 of the first term of (4.13) is zero by the integrability of f b . Notice that (χ k, Q k (F k , F k )) k∈N * is weakly compact in L 1 (Ω × {v ∈ R 2 ; |v| < V }), since χ k, Q k (F k , F k ) ≤ 1 ln Λ D k + cΛ 3 e c 2 ν(F k ), Λ > 1. When integrated over Ω×{v ∈ R 2 ; |v| < V }, the limit when h → 0 of the second term of (4.13) is zero by the weak L 1 compactness of (χ k, Q k (F k , F k )) k∈N * and lim h→0 b(h) = 0, (4.14) uniformly on Ω × {v ∈ R 2 ; |v| < V }. The uniform equiintegrability w.r.t. the v variable of s -(z,v) -s + (z,v) |v * |<V Bχ k, F k (z + sv, v * )dv * ds k∈N * follows from similar arguments. Passage to the limit in the approximations For each > 0, let F be the weak L 1 limit of a subsequence of (χ k, F k ) k∈N * . (F ) ∈]0,1[ is non increasing with respect to decreasing and bounded in L 1 . Let f be its strong L 1 limit when → 0. Notice that f is also the weak L 1 limit of (F k ) k∈N when k → +∞. For proving that f is a mild solution of (1.1)-(1.5), it is sufficient to prove that for any β > 0, there is a set X β of characteristics with complementary set of measure smaller than cβ, such that if χ β denotes the corresponding characteristic function, (χ β f )(z, v) = (χ β f b )(z + (z, v), v) + 0 -s + (z,v) (χ β Q(f, f ))(z + sv, v)ds, (z, v) ∈ Ω × R 2 . 
(5.1) This in turn is satisfied if for any test function ϕ ∈ L ∞ ( Ω×R 2 ), continuously differentiable along characteristics, with v • ∇ z ϕ ∈ L ∞ (Ω × R 2 ) , compact support and vanishing on ∂Ω -, Ω×R 2 ϕχ β f (z, v)dzdv = Ω×R 2 ϕχ β f b (z + (z, v), v)dzdv + Ω×R 2 0 -s + (z,v) χ β f v • ∇ z ϕ + ϕχ β Q(f, f ) (z + sv, v)ds dzdv. (5.2) Let 0 > 0 be such that the support of ϕ is included in Ω×{v ∈ R 2 ; |v| ≤ 1 0 }. Define the set X β as follows. Using the weak L 1 (Ω × R 2 ) compactness of (F k ) k∈N * and the weak L 1 (∂Ω -) compactness of (F k /∂Ω -) k∈N * , pass to the limit when k → +∞ in F k (z, v) ≤ e c 2 0 F k (z -(z, v), v), a.a. z ∈ Ω, η < |v| ≤ 1 0 , k ∈ N * . (5.3) It implies that f (z, v) ≤ e c 2 0 f (z -(z, v), v), a.a. z ∈ Ω, η < |v| ≤ 1 0 . From here the proof follows the lines of the proof of Lemma 4.3, so that given β > 0, X β can be defined as a set of characteristics, with complementary set of measure smaller than cβ, such that f (z, v) ≤ e c 2 0 β 3 , a.a. (z, v) ∈ X β . (5.4) The following lemma is a preliminary step to pass to the limit when k → +∞ in quadratic terms along the 'good characteristics' (z + sv, v), (z, v) / ∈ Ω k, . Lemma 5.1 For any test function ϕ ∈ L ∞ ( Ω × R 2 ), continuously differentiable along characteristics, with v • ∇ z ϕ ∈ L ∞ (Ω × R 2 ), compact support and vanishing on ∂Ω -, lim k→+∞ Ω×R 2 0 -s + (z,v) ϕχ β χ k, F k ν(F k )(z + sv, v)ds dzdv = Ω×R 2 0 -s + (z,v) ϕχ β F ν(f )(z + sv, v)ds dzdv. (5.5) Proof of Lemma 5.1. Since ϕ has compact support, one can restrict to the passage to the limit when k → +∞ and V fixed of Ω×{v∈R 2 ;|v|<V } 0 -s + (z,v) ϕχ β χ k, F k ν(F k )(z + sv, v)ds dzdv. (5.6) By an integration by parts, Ω×{v∈R 2 ;|v|<V } 0 -s + (z,v) ϕχ β χ k, F k ν(F k )(z + sv, v)ds dzdv = Ω×{v∈R 2 ;|v|<V } 0 -s + (z,v) ϕχ β χ k, f bk (z + (z, v), v) 0 -s + (z,v) ν(F k )(z + sv, v)ds(z + sv, v)ds dzdv + Ω×{v∈R 2 ;|v|<V } 0 -s + (z,v) χ β χ k, v • ∇ z ϕ F k + ϕQ k (F k , F k ) (z + sv, v) 0 s ν(F k )(z + rv, v)dr ds dzdv. (5.7) Proof of Lemma 5.2. For ∈]0, 0 [, write the mild form of ϕχ β χ k, F k and integrate it on Ω × R 2 . It results Ω×R 2 ϕχ β χ k, F k (z, v)dzdv = Ω×R 2 ϕχ β χ k, f bk (z + (z, v), v)dzdv + Ω×R 2 0 -s + (z,v) χ β χ k, F k v • ∇ z ϕ(z + sv, v)ds dzdv + Ω×R 2 0 -s + (z,v) ϕχ β χ k, Q + (F k , F k ) -F k ν(F k ) (z + sv, v)ds dzdv. (5.11) By the weak L 1 compactness of (F k ) k∈N * and the linearity with respect to χ k, F k of the first two lines of (5.11), their passage to the limit when k → +∞ is straightforward. The passage to the limit when k → +∞ in the last term of (5.11) follows from Lemma 5.1. Finally, using monotonicity arguments together with the L ∞ boundedness of χ β f allows to pass to the limit when → 0 in Ω×R 2 ϕχ β F (z, v)dzdv - Ω×R 2 ϕχ β f b (z + (z, v), v)dzdv - Ω×R 2 0 -s + (z,v) χ β F v • ∇ z ϕ(z + sv, v)ds dzdv + Ω×R 2 0 -s + (z,v) ϕχ β F ν(f )(z + sv, v)ds dzdv, (5.12) and obtain the limit ϕχ β χ k, (z + sv , v )B Ω×R 2 ϕχ β f (z, v)dzdv - Ω×R 2 ϕχ β f b (z + (z, v), v)dzdv - Ω×R 2 0 -s + (z,v) χ β f v • ∇ϕ(z + sv, v)ds dzdv + Ω×R 2 0 -s + (z,v) ϕχ β f ν(f )(z + sv, v)ds dzdv. (5.13) Let us prove that lim →0 lim k→+∞ Ω×R 2 0 -s + (z,v) ϕχ β χ k, Q + (F k , F k )(z + sv, v)ds dzdv = Ω×R 2 0 -s + (z,v) F k 1 + F k k (z + sv , v) F k 1 + F k k (z + sv , v * )dsdz dvdv * dω = Ω×R 2 ϕχ β χ k, s -(Z, v )B F k 1 + F k k (Z, v * )dv * dω F k 1 + F k k (Z, v)dZdv. (5.16) One can restrict to the study of the limit of (5.21) Passing to the limit when (α, 1 , µ) → (0, 0, 0) in (5.21) leads to (5.14). 
1[ and fixed,χ η (v, v * , ω) = 0 if |v| ≤ η or |v * | ≤ η or |v | ≤ η or |v * | ≤ η or (v, v * ) < η or (v , v * ) < η, χ η (v, v * , ω) = 1, else. (1.3)Here (u, v) denotes the angle between vectors u and v. The function B is the kernel of the classical nonlinear Boltzmann operator for hard forces,|v -v * | β b(ω) with β ∈ [0, 2[, b ∈ L 1 + (S 1), b(ω) ≥ c > 0 a.e.(1.4) 20) on [l, L], where l = inf{a; S a = ∅}, L = sup{a; S a = ∅}, and using (3.19) leads to one of the vectors orthogonal to v such that |v ⊥ | = |v|. Multiply (2.1) written in the variables (z, v * ) by v * • v ⊥ |v| and integrate the resulting equation over (z, v * 2 0 2 ϕχ β Q + (f, f )(z + sv, v)ds dzdv.(5.14) For any (v, v * , ω), the change of variables (z, s) → (Z, s) = (z + sv , s), (5.15) moves the domain Ω × ] -s + (z, v ), 0[ into the domain {(Z, s) ; Z -sv ∈ Ω and s < 0}, i.e. (Z, -s) ∈ Ω×]0, s -(Z, v )[. Hence, Ω×R -s + (z,v)ϕχ β χ k, Q + (F k , F k )(z + sv, v)ds dzdv = Ω 0 -s + (z,v ) Ω×R 2 e c 2 F 2 s 2 s 2 s 22222 {v * ∈R 2 ;|sin( v,v * )|>µ}×S 1ϕχ β χ k, s -(Z, v )B F k 1 + F k k (Z, v * )dv * dω F k 1 + F k k (Z, v)dZdv, µ ∈]0, 1[,(5.17)sinceΩ×R 2 {v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 ϕχ β χ k, s -(Z, v )B F k v) → F k (Z, v) {v * ∈R 2 ;|sin( v,v * )|>µ}×S 1 ϕχ β χ k, s -(Z, v )BF k (Z, v * )dv * dω k∈N *is weakly compact in L 1 . Indeed, using the change of variables v * → v * for every (v, ω), which holds since|sin( v, v * )| > µ, F k (Z, v) ϕχ β χ k, s -(Z, v )BF k (Z, v * )dv * dω ≤ c ln Λ D k (Z, v) + c Λ 3 k (Z, v * )dv * , Λ > 1,where (D k ) k∈N * is defined in (3.1) and uniformly bounded in L 1 by (3.7).Consequently one can restrict to the passage to the limit when k → +∞ in(Z,v)∈Ω×R 2 ;s -(Z,v)>α {v * ∈R 2 ;|sin( v,v * )|>µ}×S 1 ϕχ β χ k, s -(Z, v )BF k (Z, v * )dv * dω χ k, 1 F k (Z, v)dZdv, (α, 1 ) ∈]0, 1[ 2 . (5.19) Moreover, (Z,v);s -(Z,v)>α {v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 ϕχ β χ k, s -(Z, v )BF k (Z, v * )dv * dω χ k, 1 F k (Z, v)dZdv = {v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 Z;s -(Z,v)>α s -(Z,v) 0 ϕχ β χ k, s -(Z, v ) χ k, 1 F k s -(Z, v) F k (Z, v * )dsdZ Bdvdv * dω = {v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 Ω s -(z,v) max{0,α-s -(z,v)} ϕχ β χ k, s -(z + sv, v ) χ k, 1 F k s -(z + sv, v) F k (z + sv, v * )dsdz Bdvdv * dω = Ω×R -(z,v) max{0,α-s -(z,v)} χ k, 1 F k s -(z + sv, v) Bϕχ β χ k, s -(z + sv, v )F k (z + sv, v * )dv * ds dzdv. -(z,v) max{0,α-s -(z,v)} χ k, 1 F k s -(z + sv, v) Bϕχ β χ k, s -(z + sv, v )F k (z + sv, v * )dv * ds dzdv = Ω×R -(z,v) max{0,α-s -(z,v)} F 1 s -(z + sv, v){v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 Bϕχ β s -(z + sv, v )f (z + sv, v * )dv * ds dzdv = (Z,v);s -(Z,v)>α {v * ∈R 2 ;|sin( v,v * )|<µ}×S 1 ϕχ β s -(Z, v )Bf (Z, v * )dv * dω F 1 (Z, v)dZdv. The change of variables (z, s) → (Z, s) = (z + sv, s), (5.8) moves the domain Ω × ] -s + (z, v), 0[ into the domain Hence, Lemma 5.1 follows from the passage to the limit when k → +∞ in (5.9). It uses the weak L 1 compactness of (χ k, F k ) and (χ k, Q k (F k , F k )), the strong L 1 compactness of dr . An integration by parts back in the s variable is finally performed, taking into account that Lemma 5.2 f is a solution of (1.1)-(1.5), i.e. for test functions ϕ defined as in Lemma 5.1, (5.10)
04100103
en
[ "sdv.ee" ]
2024/03/04 16:41:20
2022
https://hal.science/hal-04100103/file/Gardner_et_al_ms_for_GEB.R3_RF_TB.pdf
Janet Gardner Mark Clayton Richard Allen John Stein Timothée Bonnet The effects of temperature extremes on survival in two semi-arid Australian bird communities over three decades, with predictions to 2104 Keywords: semi-arid eastern Australia arid zone, Australian birds, avian guilds, climate change, energy and water budgets, population projections, species persistence, survival, temperature extremes, thermoregulation published or not. The documents may come The effects of temperature extremes on survival in two semi-arid Australian bird communities over three decades, with predictions to 2104 INTRODUCTION A signature of contemporary climate change is the changing frequency, intensity and duration of extreme weather events [START_REF] Ummenhofer | Extreme weather and climate events with ecological relevance: a review[END_REF][START_REF] Stillman | Heat waves, the new normal: summertime temperature extremes will impact animals, ecosystems, and human communities[END_REF]. Birds in the arid and semi-arid regions of the world are particularly vulnerable to temperature extremes, in part, because their high body temperatures are close to the lethal limit for endotherms, and small changes in temperature can cause thermal stress at both the hot and cold ends of the temperature range. Exposure to weather extremes can disrupt energy and water budgets with consequences for fitness that affect both survival and reproduction [START_REF] Mitchell | Revisiting concepts of thermal physiology: predicting responses of mammals to climate change[END_REF][START_REF] Huey | Predicting organismal vulnerability to climate warming: roles of behaviour, physiology, and adaptation[END_REF][START_REF] Boyles | Adaptive thermoregulation in endotherms may alter responses to climate change[END_REF]. Mortality associated with exposure to summer extremes can be a direct result of succumbing to acute heat stress via hyperthermia or lethal dehydration, exemplified by mass die-offs of budgerigars (Melopsittacus undulatus), zebra finches (Taeniopygia guttata) and Carnaby's cockatoos (Calyptorhynchus latirostris) during heatwave events in Australia [START_REF] Mckechnie | Feeling the heat: Australian land birds and climate change[END_REF][START_REF] Saunders | The impact of two extreme weather events and other causes of death on Carnaby's Black Cockatoo: a promise of things to come for a threatened species[END_REF]. In such cases, avian body temperature exceeding about 46°C is lethal [START_REF] Boyles | Adaptive thermoregulation in endotherms may alter responses to climate change[END_REF]. Fitness in terrestrial animals may also be affected by high temperatures well below those causing die-offs events. Growing evidence suggests that air temperatures in the mid to high 30 o Cs are harmful for endotherms [START_REF] Cunningham | Missed-opportunity costs and the response of birds and mammals to climate change[END_REF]. For example, prolonged exposure to temperatures >38 o C has been shown to increase the cost of avian thermoregulation by increasing the need for evaporative cooling to maintain body temperatures below lethal limits [START_REF] Mckechnie | Avian thermoregulation in the heat: evaporative cooling in five Australian passerines reveals within-order biogeographic variation in heat tolerance[END_REF]. 
Increasing exposure to these sub-lethal conditions increases the risk of mortality, when exposure is prolonged over hours, days and weeks [START_REF] Sharpe | Weighing the cost: the impact of serial heatwaves on body mass in a small Australian passerine[END_REF][START_REF] Sharpe | Too hot to handle? Behavioural plasticity during incubation in a small, Australian passerine[END_REF]. Indeed, recent work on Australian desert birds in the laboratory showed that chronic heat stress can cause organ damage (e.g. liver or kidneys) which can contribute to avian mortality over time [START_REF] Xie | Organ histopathology and hematological changes associated with heat exposure in Australian desert birds[END_REF]. High temperatures are also associated with changes to foraging patterns and microhabitat use (du Plessis et al., 2012;[START_REF] Edwards | The impact of high temperatures on foraging behaviour and body condition in the western Australian Magpie Cracticus tibicen dorsalis[END_REF][START_REF] Cooper | The field metabolic rate, water turnover, and feeding and drinking behaviour of a small avian desert granivore during a summer heatwave[END_REF]Fungi et al., 2019) which can lead to a reduction in food intake and loss of body condition. Southern pied babblers, Turdoides bicolor and southern yellow-billed hornbills (Tockus leucomelas), for example, were unable to maintain body mass on days when temperatures exceeded thresholds in the mid 30s°C (van de [START_REF] Van De Ven | The costs of keeping cool: behavioural trade-offs between foraging and thermoregulation are associated with significant mass losses in an arid-zone bird[END_REF]du Plessis et al., 2012) and Jacky Winters Microeca fascinans lost 2% of their mass on days ≥ 42 o C [START_REF] Sharpe | Weighing the cost: the impact of serial heatwaves on body mass in a small Australian passerine[END_REF]. Given that body condition is closely linked to survival and reproductive success [START_REF] Paniw | Life history responses of meerkats to seasonal changes in extreme environments[END_REF], increasing exposure to high summer temperatures is likely to have major fitness consequences as the climate warms. Exposure to cold extremes can also impose thermoregulatory and energetic challenges for endotherms, although we have much less understanding of how birds respond to reductions in cold extremes that are expected to be beneficial for fitness as the climate warms. Energetic constraints and cold stress can drive reductions in body condition and impose carry-over costs on subsequent survival and reproduction [START_REF] Williams | Cold truths: how winter drives responses of terrestrial organisms to climate change[END_REF]. In Australia, changes in survival were associated with temperature extremes for two coexisting bird species monitored over 37 years, red-winged fairy-wrens Malurus elegans and white-browed scrubwrens Sericornis frontalis [START_REF] Gardner | Effects of extreme weather on two sympatric Australian passerine bird species[END_REF]. In scrubwrens, winter temperatures <5 o C were associated with lower survival within the same season while in both species, survival was associated with body size, and there was evidence that size-dependent mortality was mediated by carry-over effects of climate in the previous season [START_REF] Gardner | Effects of extreme weather on two sympatric Australian passerine bird species[END_REF]. 
Moreover, energetic costs of thermoregulation in the cold during periods of low food availability can be greater than those in warmer conditions during the breeding season [START_REF] Harrison | Carry-over effects as drivers of fitness differences in animals[END_REF][START_REF] Williams | Cold truths: how winter drives responses of terrestrial organisms to climate change[END_REF]. For example, energetic stress on cool days was greater than that associated with exposure to high temperatures between 40 and 45 o C in an arid zone population of zebra finch [START_REF] Cooper | The field metabolic rate, water turnover, and feeding and drinking behaviour of a small avian desert granivore during a summer heatwave[END_REF]. Warming winters can, or are predicted to, reduce winter mortality and lead to improvement in body condition [START_REF] Robinson | Weather-dependent survival: implications of climate change for passerine population processes[END_REF][START_REF] Ozgul | Coupled dynamics of body mass and population growth in response to environmental change[END_REF]. For example, warming winters led to longer growing seasons with concomitant improvement in the body condition of yellow-bellied marmots, Marmota flaviventris and this was associated with changing patterns of survival and reproduction [START_REF] Ozgul | Coupled dynamics of body mass and population growth in response to environmental change[END_REF]. Benefits of warming conditions in winter may therefore offset negative effects of warming summers on survival. Studies of how changes in seasonal weather influence vital rates are rare in arid zone species, largely because of a paucity of long-term demographic data. This limits our ability to project population trends in coming decades. Here we investigate variation in survival associated with exposure to high summer and low winter temperatures for 37 avian species captured over a 30year period at two sites in a semi-arid landscape in western NSW, Australia. Using mark recapture time-dependent Cormack-Jolly-Seber models we first test for the effects on 6 monthly survival of exposure to: (1) summer temperatures above 38 o C, and (2) winter temperatures below 0 o C. We classify these temperatures as extreme because chronic effects which re-duce survival appear to begin at these thresholds in Australian bird species [START_REF] Bailey | Using different body size measures can lead to different conclusions about the effects of climate change[END_REF][START_REF] Gardner | Individual and demographic consequences of reduced body condition following repeated exposure to high temperatures[END_REF]2017;[START_REF] Mckechnie | Avian thermoregulation in the heat: evaporative cooling in five Australian passerines reveals within-order biogeographic variation in heat tolerance[END_REF]; obviously more extreme temperatures can have acute effects via immediate mortality. At our study sites these thresholds are currently reached on around 5% of days: cold extreme in winter 8% at Charcoal Tank Nature Reserve, 9% at Weddin Mt National Park; hot extreme in summer 5% at Charcoal Tank, 3% at Weddin Mt. We then test whether the changes in 6 monthly survival associated with warming winters will be sufficient to offset the negative effects on survival of rising summer temperatures based on climate projections for the region to the year 2104. Current climate projections for Australia indicate more hot summer and fewer cold winter temperature extremes (BOM & CSIRO, 2020). 
METHODS AND MATERIALS Study sites and species The study was carried out at two sites: The Charcoal Tank Nature Reserve (hereafter Charcoal; -33.9831 o S, 147.1575 o E) and Weddin Mountains National Park (hereafter Weddin; -33.9386 o S, 147.9872 o E) in western New South Wales in south-east Australia (Fig. S1). Both sites have been the subject of long-term (>30 years) ringing programs, with data collected in each year since 1986. The study sites are located 75 km apart within a fragmented agricultural landscape (see Appendix S1 in Supporting Information for site description). Birds were captured in mist nets and sampled 2-7 times annually [START_REF] Gardner | Individual and demographic consequences of reduced body condition following repeated exposure to high temperatures[END_REF]. At each capture, we recorded body mass using a Pesola balance (± 0.5 g). Wing length was measured as the length of the flattened wing chord from the carpal joint to the tip of the longest primary feather using a butt-ended ruler (± 1.0 mm), and primaries were scored for moult. Data collection was overseen by Mark Clayton (Charcoal) and Richard Allen (Weddin) ensuring consistency in methods over time and between sites. We obtained weather data from the Australian Government Bureau of Meteorology, from the Grenfell (Manganese Rd) station for the Weddin site, and from the West Wyalong Airport station for the Charcoal site. We defined 6-month periods from 1 st September to 29 th February as "Summer" or from 1 st March to 31 st August as "Winter". For each 6-month period between 1970 to 2016 we extracted (1) mean daily extreme temperatures for each season (mean daily maximum in summer, mean daily minimum in winter), (2) the sum of precipitation for each 6-month period, (3) the number of days with temperatures beyond an extreme threshold for each season (days above 38°C in summer, days below 0°C in winter). Future climate predictions To predict the number of temperature extremes until the end of the century under difference scenarios of greenhouse gas concentration we chose the model CESM1-CAM5 (Community Earth System Model version 1 that includes the Community Atmospheric Model version 5, [START_REF] Neale | Description of the NCAR Community Atmosphere Model (CAM5.0)[END_REF]. The model CESM1-CAM5 performs very well for short-term predictions of climate in Eastern and Southern Australia (Moise et al., 2015). We extracted daily predicted minimum and maximum temperatures for the years 2016 to 2104 from the West Wyalong Airport weather station, located 10.5km from Charcoal, under three Representative Concentration Pathway (RCP) scenarios: RCP 2.6 corresponding to a very stringent scenario of reductions in greenhouse gases emissions, RCP 4.5 an intermediate scenario and RCP 8.5 the worst-case climate change scenario [START_REF] Meinshausen | The RCP greenhouse gas concentrations and their extensions from 1765 to 2300[END_REF]. We combined climate projections to observed data (1986 to 2015) and for each 6-month period calculated the number of days below 0°C in winter and above 38°C in summer. For both seasons and for each of the three RCP scenarios we fitted generalised additive models of the number of extreme temperature days as a function of year. Year was modeled as a thin-plate regression spline [START_REF] Wood | Thin plate regression splines[END_REF]. We used the predictions from each of these generalised additive models to project survival in the future. 
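As a rough illustration of the bookkeeping described above (one count of extreme days per site and per 6-month period), the following Python sketch derives those counts from a table of daily minima and maxima. The column names and the strict inequality at the thresholds are assumptions made for the example; the original analysis used R and Stan (via Rstan) for model fitting, so this fragment only mirrors the counting step and does not reproduce the generalised additive model smoothing of the counts.

```python
import pandas as pd

def seasonal_extreme_days(daily, hot=38.0, cold=0.0):
    """Count days above `hot` in summer (1 Sep to end of Feb) and below `cold`
    in winter (1 Mar to 31 Aug). `daily` must hold columns date, tmin, tmax
    (assumed names, not the Bureau of Meteorology format)."""
    df = daily.copy()
    df["date"] = pd.to_datetime(df["date"])
    month, year = df["date"].dt.month, df["date"].dt.year
    summer = (month >= 9) | (month <= 2)
    df["season"] = summer.map({True: "summer", False: "winter"})
    # A summer is labelled by the year in which it starts, so January and
    # February belong to the previous year's summer.
    df["season_year"] = year.where(~(summer & (month <= 2)), year - 1)
    df["extreme"] = (summer & (df["tmax"] > hot)) | (~summer & (df["tmin"] < cold))
    return (df.groupby(["season_year", "season"])["extreme"]
              .sum()
              .rename("n_extreme_days")
              .reset_index())
```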
Mark-recapture data The full data set contains captures for 39 species for which > 100 captures were recorded between 1986 and 2016 (Table S1) with a total of 33550 captures for 21621 individuals. Those data included only adults which were identified on the basis of plumage. We classified birds as belonging to one of four feeding guilds: nectarivore, insectivore, omnivore or granivore. Most individuals are captured only once, presumably due to movements of non-resident birds or postfledging dispersal. Our study populations, typical for much of the Australian arid zone, contain both resident and migratory species, including partial migrants and local nomads, making it difficult to classify movement patterns for individual species (see [START_REF] Chan | Partial migration in Australian landbirds: a review[END_REF]. This means that apparent local mortality represents a mixture of true mortality and permanent emigration. To reduce the influence of emigration we removed the first capture of each individual to focus our analysis on resident birds, increase the proportion of apparent survival (or mortality) related to local survival (or mortality) rather than emigration and decrease computation time. In two species, not a single individual was recaptured so we were left with only 37 species totalling 4136 individuals. We present results derived from the full data set in Fig. S8. We used capture data to calculate a temporal proxy of bird density within each guild (see Appendix S2). To create regular time intervals corresponding to seasons and to decrease the number of variables in models we aggregated capture sessions by 6-month periods from 1 st September to 29 th February ("Summer") or from 1 st March to 31 st August ("Winter"). Thus, we obtained 62 virtual capture occasions instead of the initial 748. This procedure violates the assumption of mark-recapture models that survival intervals are long relative to the marking intervals but was essential to obtain a manageable number of capture occasions and reasonably high recapture probabilities. The noise introduced by this violation is expected to only blur the statistical signal and not bias it (see Appendix S3, Fig. S2). Statistical analyses Mark-recapture model We modelled apparent survival probability across species and time periods, as a function of summer and winter weather. We used time-dependent Cormack-Jolly-Seber models which describe capture histories as the product of a survival probability and a recapture probability. We modeled variation among species, among time periods and among locations for survival and recapture probabilities (see Appendix S4). For the linear predictor corresponding to recapture probability, the fixed effects were a global intercept, the effect of site (Weddin vs. Charcoal), the effect of season (the 6-month capture period being the non-reproductive or the reproductive season), and the effect of a bird being caught on a previous occasion to control for trap-dependence. For the linear predictor corresponding to survival probability, the fixed effects were a global intercept for the effect of site (Weddin vs. Charcoal), a linear effect of time (i.e. 
6-months periods fitted as a continuous covariate) to de-trend the response and avoid over-estimating the effect of climate (Grobois et al., 2008), the effect of the 6-month survival period being in winter (as opposed to summer), a dummy variable indicating whether an individual was captured for the first time to account for transience effects, a proxy for within-guild-site bird density fitted as a continuous covariate (see Appendix S2, Fig. S3, S4), and the effect of weather variables, for which we tried different combinations involving our three weather variables, measured for each site and each 6-month period: (1) the number of days with temperature reaching a threshold (either above 38°C for summer survival periods or below 0°C for winter survival periods); (2) the sum of precipitation because extreme temperatures are correlated with the sum of precipitation, and precipitation is known to affect bird survival [START_REF] Kennedy | Direct effects of rain on birds: a review[END_REF][START_REF] Robinson | Weather-dependent survival: implications of climate change for passerine population processes[END_REF][START_REF] Gardner | Individual and demographic consequences of reduced body condition following repeated exposure to high temperatures[END_REF]; (3) the average daily temperature (average winter minimum, average summer maximum). Weather variables were standardized to a mean of zero and a standard deviation of one. In a first model (model 1 hereafter), we fitted the linear effects of the number of days with temperatures beyond thresholds in each season as these were the main predictors of interest, and a linear effect of the sum of precipitation in each season. That model is at the center of our work because it estimates the independent effect of extreme temperatures when accounting for the confounding effect of precipitation. For both seasons, separately, we calculated the proportion of temporal variance in mean survival among years explained by extreme temperatures (see Appendix S5 for details). To predict the net effect of changes in the balance of deleterious winter versus deleterious summer temperatures, we predicted seasonal-survival and year-survival based on model 1 and on predictions of temperature extremes based on the CESM1-CAM5 model for the three RCP scenarios, from 1986 to 2104. Survival was predicted while integrating the random effects of year and species. Beyond model 1, we fitted four extra models to address secondary questions regarding the effect of weather on survival probability. In a second model (model 2 hereafter) we added the interaction between sum of precipitation and the numbers of days >38 o C or <0 o C in each season. In wetter summers the negative effect of extreme high temperatures may be reduced because water availability helps birds cope with heat, while in wetter winters the negative effect of extreme cold may be exacerbated because the insulating capacity of wet feathers is poor [START_REF] Kennedy | Direct effects of rain on birds: a review[END_REF][START_REF] Robinson | Weather-dependent survival: implications of climate change for passerine population processes[END_REF]. These expectations, however, assume that extreme temperatures co-occur with increased precipitation or water availability, within each time period, which may not be the case. Next we fitted model 3, an extension of model 2, in which we also included a linear effect of mean maximum or minimum daily temperature in each season. 
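Before turning to the remaining model variants, it may help to spell out the Cormack-Jolly-Seber structure that all of these models share: conditional on first capture, each capture history contributes a product of occasion-to-occasion survival terms and detection or non-detection terms, closed by the probability of never being seen again. The minimal Python sketch below shows that likelihood for a single history with occasion-specific φ and p; it is illustrative only, since the fitted models place the fixed and random effects described above on the logit scales of φ and p and were implemented in Stan rather than Python.

```python
import numpy as np

def cjs_loglik(y, phi, p):
    """Log-likelihood of one capture history under a time-dependent
    Cormack-Jolly-Seber model, conditioning on the first capture.

    y   : 0/1 detections over T occasions (at least one 1)
    phi : survival probabilities, length T-1, phi[t] = P(alive at t+1 | alive at t)
    p   : recapture probabilities, length T (p[0] is never used)
    """
    y = np.asarray(y)
    T = len(y)
    f = int(np.argmax(y))                    # first occasion with a capture
    l = T - 1 - int(np.argmax(y[::-1]))      # last occasion with a capture
    ll = 0.0
    for t in range(f + 1, l + 1):            # bird is known to be alive up to l
        ll += np.log(phi[t - 1])
        ll += np.log(p[t]) if y[t] else np.log(1.0 - p[t])
    chi = 1.0                                 # P(never seen again after occasion l)
    for t in range(T - 2, l - 1, -1):
        chi = (1.0 - phi[t]) + phi[t] * (1.0 - p[t + 1]) * chi
    return ll + np.log(chi)

# Example: a bird seen at occasions 0, 2 and 3 of a 6-occasion study.
print(cjs_loglik([1, 0, 1, 1, 0, 0], phi=[0.8] * 5, p=[0.3] * 6))
```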
With the model we test whether it is possible to disentangle the effects of the number of days of extreme temperatures from the effect of the average season temperature. We first fitted models 1 to 3 for all species together. However, dietary guilds are expected to respond differently to weather conditions because food availability varies with temperature and rainfall, and because water content varies between diets, so we fitted models 1 to 3 separately for each of the 4 guilds. Model 4 included random slopes of species for the effects of the number of days of extreme temperatures to test for differences among species in their responses (see Appendix S4, results in main text). Model 5 included species average size and its interaction with the number of days of extreme temperatures (see Appendix S4, results Fig. S7). With model 6 we tested whether our results were robust to the exclusion of the first capture: model 6 is identical to model 1 but is fitted on the full data set, which includes all captures of all individuals, and includes a second transience fixed effect (see Appendix S4, results Fig. S8). We fitted all models in the Bayesian probabilistic programming language Stan [START_REF] Carpenter | Stan: A probabilistic programming language[END_REF], using No-U-Turn Hamiltonian Monte Carlo sampling with Rstan (Stan Development Team, 2020). For fixed effects we used normalising Gaussian priors with a mean of 0 and a standard deviation of 3. For random effect standard deviations we used half-Cauchy priors of location 0 and scale 0.1. We used 4 chains each with 2000 warm-up iterations and 2000 samples. We assessed model convergence using visual diagnostics (trace plots, pairwise posterior plots), in particular using Bayesplot [START_REF] Gabry | Visualization in Bayesian workflow[END_REF] and checked that all R-hat values were below 1.05 and effective sample sizes were above 200. We report posterior means and 95% highest posterior density credible intervals. All derived parameters and model predictions were calculated by integrating over the full posterior distribution. The code to run all models is provided (see Appendix Code S1). RESULTS Unless mentioned otherwise, parameter estimates are from model 1, fitted to all species together. Survival probability was slightly higher at Weddin compared to Charcoal (β=0.25, 95% highest posterior density credible interval [0.02; 0.50]) and did not clearly differ between summer and winter (β=-0.04 [-0.68; 0.59]) (Table S2). The effect of transience was negative (β=-0.78 [-0.998;-0.55]), meaning that birds have a lower probability of apparent survival just after being captured for the first time. This effect may correspond to non-resident birds that permanently leave the study site after first capture. There was heterogeneity in survival probabilities both among occasions, even after accounting for weather variables (random standard deviation, σ= 1.09 [0.77;1.54]) and among species (σ= 0.28 [0.14; 0.49], Fig. 1). Species mean 6-month survival probabilities varied from 0.88 [0.83; 0.92] in the white-throated treecreeper, to 0.76 [0.67; 0.83] in the superb fairy-wren (Fig. 1). Recapture probability was consistently lower at Weddin compared with Charcoal and did not clearly differ between seasons (Table S2). Trap dependence was positive (β=0.33 [0.23;0.43]), meaning that a living bird is more likely to be recaptured just after having been captured. 
There was heterogeneity in recapture probabilities both among occasions (σ= 0.55 [0.46;0.66]) and among species (σ= 0.68 [0.44; 1.05]). Species recapture probabilities varied from 0.34 [0.27; 0.42] in the white-throated treecreeper, to 0.074 [0.038; 0.12] in the spiny-cheeked honeyeater (Fig. 1). Effects of number of days of extreme temperatures The effect of the number of extreme summer days was strongly negative (β=-0.57 [-1.01; -0.08]), corresponding to a decline in survival probability from 0.86 [0.78;0.95] when there are no days above 38°C to 0.59 [0.45; 0.77] for 30 days above 38°C (Fig. 2). The effect of the number of extreme winter days was marginally negative (β=-0.39 [-1.01; 0.04]), corresponding to a decline in survival probability from 0.84 [0.76;0.90] for 0 days below 0°C to 0.53 [0.28;0.78] for 44 days below 0°C (Fig. 2; Table S2). The effect of extreme temperatures explained 43% [16%; 70%] of temporal variation in survival among years in summer and 13% [0%; 36%] in winter. The effects of the number of extreme summer and winter days were also apparent when including transients, in Model 6 (summer β=-0.33 [-0.57; -0.10], winter β=-0.37 [-0.62; -0.15]), with the effect in summer being a bit weaker and the effect in winter being of similar strength but more precisely estimated (Fig. S8). Model 4 revealed that species differed significantly in their responses to extreme summer temperatures (σ= 0.14 [0.02;0.32]) and to extreme winter temperatures (σ= 0.12 [0.02;0.27]), but those differences were biologically small so that the pattern of survival was qualitatively consistent across species (Fig. 2, Fig. S5, Table S8). Moreover, as is often the case in mixed models [START_REF] Houslay | Avoiding the misuse of BLUP in behavioural ecology[END_REF], despite significant variance among species the uncertainty in species-specific slopes was large so that it is not possible to identify which species responded more or less strongly (Fig. S5). In particular, those differences were not significantly explained by differences in the mean body size of species (Model 5, Table S9). Guild-specific analyses confirm the consistency of the responses among species. The negative effect of extreme temperatures in summer was independently found in insectivore, nectarivore and omnivore species (Table S3, Fig. S6). On the other hand, the effect of extreme temperatures in winter was present only in nectarivore species, where it was significant and strong (β=-0.66 [-1.56;-0.43]), with a predicted decline in survival probability from 0.89 [0.78;0.97] with 0 days below 0°C to 0.41 [0.05;0.81] with 44 days below 0°C. For the granivore guild, a small sample size prevented a precise estimation of any parameter for weather effects. Winter precipitation had a negative effect on survival (β=-0.52 [-0.94; -0.17]). The effect was statistically supported in the nectarivore guild model (β=-0.92 [-1.56;-0.43]) but not found in other guilds (Table S3). Summer precipitation had no clear effect on survival across species (β=-0.24 [-0.57; -0.57]), or in any of the guilds (Table S3). When fitting model 2 to all species together (Table S4), there was no evidence of interactions between number of days beyond extreme temperatures and precipitation, in summer (β=0.04 [-0.4; 0.52]) or in winter (β=0.13 [-0.44; 0.13]). In guild models, there was support for an interaction only in omnivorous species, for summer, with increased survival to extreme temperatures at higher precipitation(β=0.67 [0.03; 1.42]) (Table S5). 
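To make the logit-scale coefficients above easier to read, the short sketch below converts a standardised effect of extreme days into survival probabilities via the inverse-logit transform and combines the two seasonal survivals into an annual figure. The intercept and the standardisation constants used here are placeholders chosen only so that the output lands in the right ballpark; they are not values extracted from the fitted model, which also integrates over species and occasion random effects.

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def six_month_survival(n_days, beta, intercept, days_mean, days_sd):
    """Survival for one season with `n_days` extreme days, given a logit-scale
    slope `beta` per standard deviation of the standardised day count."""
    return inv_logit(intercept + beta * (n_days - days_mean) / days_sd)

# Placeholder constants (assumptions, not model output):
intercept, hot_mean, hot_sd = 1.8, 8.0, 7.0
phi_0 = six_month_survival(0, beta=-0.57, intercept=intercept,
                           days_mean=hot_mean, days_sd=hot_sd)
phi_30 = six_month_survival(30, beta=-0.57, intercept=intercept,
                            days_mean=hot_mean, days_sd=hot_sd)
phi_winter = six_month_survival(5, beta=-0.39, intercept=intercept,
                                days_mean=15.0, days_sd=10.0)
annual = phi_30 * phi_winter   # annual survival taken as the product of the two seasons
print(phi_0, phi_30, annual)
```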
Fitting model 3 to all species together allowed us to disentangle the effects of mean temperatures from the number of days with extreme temperatures. For summer, there was sufficient evidence that the number of days above 38°C had a negative effect on survival probability after accounting for effects of mean temperature (β=-0.32 [-0.55; -0.03]) (Table S6). In winter the trend for the effect of the number of days beyond 0°C was marginally negative but the uncertainty was large (β=-0.28 [-0.68; 0.04]). In summary, winter extremes had a weak effect on 6-monthly survival, except in the case of nectarivores which are predicted to have low survival (ca 41%) in the coldest winters with > 40 days below freezing. In contrast, the negative effect of summer extremes was strong and consistent across species (Model 4), showing little variation among species (Fig. S5) and all three guild-models showing similar trends (Fig. S6). Our results are robust to the inclusion of the correlated effects of mean extreme temperatures (Model 3, Table S6), to the inclusion or exclusion of transient individuals (Model 6, Fig. S8, Table S10), and in accounting for species (Model 4, Fig. S5) and guild differences (Fig. S6). Climate predictions: At Charcoal the number of winter days below 0°C is projected to decrease from 12.0 in the year 2016 to almost 0 in 2104 (2.4 RCP 2.6, 1.4 RCP 4.5, 0.17 RCP 8.5), while the number of summer days above 38°C is projected to increase from 13.8 in 2016 to 28.7 (RCP 2.6), 36.4 (RCP 4.5) and 67.4 (RCP 8.5) in 2104 (Fig. 3). Based on these climate projections, model predictions for survival to 2104 showed broad-scale declines in annual survival across species (Fig. 4). Assuming median precipitation, in winter, 6-month survival is projected to increase slightly as winters warm,whereas summer survival and annual survival were predicted to decline substantially, with the severity of the decline depending on the RCP scenario (Fig. 4). DISCUSSION We quantified patterns of adult survival for two bird communities in semi-arid south-eastern Australia as a function of exposure to temperature extremes in winter and summer, using data from 37 species that have been the subject of continuous ringing programs over 30 years. Survival probability did not differ between summer and winter, but survival declined at a higher rate with increasing exposure to summer days with thermal maxima >38 o C, than with increasing exposure to winter days with minima <0 o C. The effect of hot days in summer was consistent across species and guilds whereas the effect of cold days in winter was consistent across species but only statistically clear in nectarivores. Model projections to 2104 suggest that survival may be reduced by a third from current levels; the increase in survival associated with milder, warming winters will be insufficient to offset the negative effects on survival of more frequent extreme days above >38 o C in summer. Survival Our study does not elucidate the causes of temperature-related summer mortality or the specific timeframe over which such mortality occurs, but several mechanisms might be at play. 
Mortality might be a direct result of hyperthermia or lethal dehydration during heatwave events as previously reported in arid Australia [START_REF] Mckechnie | Feeling the heat: Australian land birds and climate change[END_REF][START_REF] Saunders | The impact of two extreme weather events and other causes of death on Carnaby's Black Cockatoo: a promise of things to come for a threatened species[END_REF]. However, increasing exposure to high temperatures >38 o C is also likely to increase mortality over broader timeframes via increasing the need for evaporative cooling to maintain body temperatures below lethal limits or reducing food intake with consequences for water and energy budgets [START_REF] Mitchell | Revisiting concepts of thermal physiology: predicting responses of mammals to climate change[END_REF]. Recent work by [START_REF] Mckechnie | Avian thermoregulation in the heat: evaporative cooling in five Australian passerines reveals within-order biogeographic variation in heat tolerance[END_REF] examined the evaporative cooling capacity and heat tolerance of five insectivorous and nectarivorous passerine species in Australia's arid zone, among them the spiny-cheeked honeyeater (Acanthagenys rufogularis) included in this study. They confirmed the reliance of these species on evaporative heat loss via panting at high air temperatures. When air temperatures exceeded 38°C, rates of evaporative water loss increased rapidly and linearly to at least sevenfold (670-860%) above basal rates under field laboratory conditions [START_REF] Mckechnie | Avian thermoregulation in the heat: evaporative cooling in five Australian passerines reveals within-order biogeographic variation in heat tolerance[END_REF]. Previous work at both Charcoal and Weddin showed that prolonged exposure to days with air temperature >38°C was associated with reductions in body condition of white-plumed honeyeaters (Ptilotula penicillatus) [START_REF] Bailey | Using different body size measures can lead to different conclusions about the effects of climate change[END_REF], and at Charcoal, heat-exposed individuals in poor condition were less likely to be recaptured the following spring [START_REF] Gardner | Individual and demographic consequences of reduced body condition following repeated exposure to high temperatures[END_REF]. This apparent mortality may have been a result of thermoregulatory constraints but could also result from heat-related changes in foraging patterns and microhabitat use (du Plessis et al., 2012;[START_REF] Edwards | The impact of high temperatures on foraging behaviour and body condition in the western Australian Magpie Cracticus tibicen dorsalis[END_REF]Fungi et al., 2019) that lead to a reduction in food intake and loss of body condition, although we have no direct evidence in our study species. Diet is an important factor influencing the way birds respond to high temperatures [START_REF] Albright | Mapping evaporative water loss in desert passerines reveals an expanding threat of lethal dehydration[END_REF][START_REF] Smit | Differences in the use of surface water resources by desert birds are revealed using isotopic tracers[END_REF]. Diets vary in water content, potentially making some dietary guilds more vulnerable to rising temperatures than others. 
For example, the dry seeds eaten by granivores have the lowest water content of any food, leading to a high dependency on surface water [START_REF] Fisher | Drinking patterns and behavior of Australian desert birds in relation to their ecology and abundance[END_REF]. Nectarivores are similarly dependent on surface water because they need water to process diets high in sugar [START_REF] Fleming | Nectar concentration affects sugar preferences in two Australian honeyeaters and a lorikeet[END_REF]. Both groups drink more frequently as ambient temperature increases [START_REF] Fisher | Drinking patterns and behavior of Australian desert birds in relation to their ecology and abundance[END_REF]. By contrast, omnivores and insectivores have animal-based diets or a combination of animal and plant-based (omnivores) diets, drink infrequently from freestanding water, and rely on water from their prey during the hottest, driest times of year [START_REF] Smit | Differences in the use of surface water resources by desert birds are revealed using isotopic tracers[END_REF]. This has led to the suggestion that drinkers would be more vulnerable to mortality in hot conditions when surface water is unavailable [START_REF] Smit | Differences in the use of surface water resources by desert birds are revealed using isotopic tracers[END_REF]. [START_REF] Czenze | The costs of keeping cool in a warming world: implications of high temperatures for foraging, thermoregulation and body condition of an arid-zone bird[END_REF] found that drinkers had greater heat tolerance than non-drinkers and could withstand air temperatures of ≥52°C based on metabolic chamber measurements for wild-caught individuals of 12 passerine species in arid South Africa that had ad libitum access to water prior to testing. However, in the absence of surface water or during long periods of inactivity in hot conditions, they concluded that drinking species are likely to exceed their limits of dehydration tolerance at lower temperatures than non-drinkers, and thus may be more vulnerable to mortality in dry, hot conditions. In an attempt to understand the causes of collapse of bird communities in the Mojave Desert USA over the last century [START_REF] Iknayan | Collapse of a desert bird community over the past century driven by climate change[END_REF], [START_REF] Riddell | Cooling requirements fueled the collapse of a desert bird community from climate change[END_REF] modelled changing water requirements for thermoregulation using biophysical models and found that species declines were associated with increasing water requirements for body cooling in hot conditions. They found that birds inhabiting the hottest, driest sites without surface water had the lowest probability of persistence. Increasing water requirements were correlated with declines in insectivores and carnivores (non-drinkers with animal-based diets) but not herbivores and omnivores (drinkers with plant-based diets), in conditions where surface water was present. In contrast to these studies, we found that dietary guilds displayed similar patterns of summer survival with increasing exposure to daily maxima >38°C. We found no evidence for an interaction between rainfall and increasing exposure to extremes >38°C across species, although the negative effect of summer extremes was reduced in higher rainfall conditions in the omnivorous guild. 
Despite this result, our study design is unlikely to detect the effects of rainfall-related water availability on summer survival because we calculated rainfall as the sum of rain for each 6month period and is not temporally related to the occurrence of extreme days >38°C. The structure of our analyses does not permit finer-scale resolution of the co-occurrence of rainfall and temperature >38°C, which seems critical for understanding responses to heat. Indeed, previous study of mass loss in white-plumed honeyeaters at Charcoal found that apparent mortality was associated with high summer temperatures but only in low rainfall conditions, characterized as rainfall in the 4 weeks prior to capture [START_REF] Gardner | Individual and demographic consequences of reduced body condition following repeated exposure to high temperatures[END_REF]. This suggests that water availability in hot conditions may be critical for drinking species, but we have no comparable data for non-drinking species. Although these studies suggest that drinkers might fare worse in the absence of surface water in hot conditions, the broad-scale ecological significance of such findings are unclear. Behavioural field studies are needed to understand the trade-offs individuals make while foraging in the heat in conditions of varying water availability, and the influence dietary guild has on mortality risk, to complement lab-based studies of physiology. Climate Projections and survival Under all RCP scenarios the number of hot days >38°C is predicted to increase and the number of cold days <0°C is predicted to decrease until the end of the century. Given that increasing exposure to cold winter days is associated with decreased survival, warmer winters are projected to increase 6-monthly survival. Conversely, hot summer days are deleterious to survival and projected to become more common reducing 6-monthly survival by 2104. The increase in winter survival will be more than offset by decreasing survival in summer. Indeed, annual survival is projected to decrease from 0.631 to 0.430 at best, 0.11 at worst, by 2104. We note that these calculations come with assumptions and should not be taken as a prophecy. First, we predicted survival based only on the predicted changes in extreme temperatures, while ignoring other changes that may affect bird survival in the next 80 years. Second, in the case of hot days the calculation is based on an extrapolation since a summer with more than 25 days above 38°C has never been observed at our study sites, whereas climate models predict yearly averages of 30 to 70 days above 38°C. It is possible that the effect of days above 38°C is not consistent for the range of values observed at our study sites and for the larger values in the range predicted by climate models. Third, our calculations assume no significant adaptation of the bird populations to the changing climate (see below). Nevertheless, the prediction of a substantial decrease in annual survival for the bird community currently present in our study sites is likely robust. Indeed, the deleterious effects of temperature extremes we identified statistically are consistent with physiological mechanisms (see next paragraph). 
Even if the shape of the relationship between survival and number of hot days is not accurately represented by our extrapolation (for instance, being optimistic there may be a plateau beyond which survival does not decline), we know of no plausible mechanism that would reverse the trend of decreasing survival across a broader range of hot days. Therefore, more hot days on average will certainly decrease mean survival compared to the current situation. In addition, cold days are already quite rare in our study sites and are predicted to almost disappear within a few decades under all RCP scenarios, whereas hot days will become more and more common until the end of the century (Fig. 3). Trading very few cold days for a dozen or more hot days logically decreases overall survival. Our results add to concerns about the persistence of avian populations in arid and semi-arid regions of Australia and on other continents. Modelling avian water requirements for temperatures predicted for the 2080s, [START_REF] Mckechnie | Climate change increases the likelihood of catastrophic avian mortality events during extreme heat waves[END_REF] concluded that desert birds in Australia will experience reduced survival times more frequently during summer due to large increases in thermoregulatory water requirements, regardless of water availability. Similarly, [START_REF] Conradie | Avian mortality risk during heat waves will increase greatly in arid Australia during the 21st century[END_REF] modelled the risks of lethal hyperthermia and dehydration for 10 Australian arid zone bird species to 2104 based on thermal physiology data. They concluded that increased water demands for evaporative cooling will greatly increase mortality risk and several species will also be vulnerable to lethal hyperthermia, particularly smaller species. In combination, increasing temperatures and unpredictable water availability pose a significant risk to the avifauna of the Australian interior, particularly in the far western parts of the continent [START_REF] Conradie | Avian mortality risk during heat waves will increase greatly in arid Australia during the 21st century[END_REF]. Our analyses, based on demographic data, suggest that climate change also poses a threat to birds in semi-arid regions of eastern Australia. Elsewhere, Albright et al. (2019) modelled the consequences of 4°C warming to the year 2104 for desert passerines in the United States, and concluded warming would greatly increase the ex-tent, frequency and intensity of dehydration risk over large parts of the south-western United States, affecting water balance, daily activity and geographic distribution of desert birds in these regions. Building on that work, simulation models of heat flux showed that observed declines over the last century in the Mojave desert were positively associated with climate-driven increases in water requirements for evaporative cooling: predictions suggest an increase in water requirements of 50 to 78% under future climate scenarios, all else held equal [START_REF] Riddell | Cooling requirements fueled the collapse of a desert bird community from climate change[END_REF]. Finally, Conradie et al. (2019) synthesized physiological and behavioural data to evaluate the risks of lethal extreme heat events versus sublethal costs of chronic exposure to hot weather for birds in the Kalahari Desert. 
They concluded that the risk of acute heat exposure will remain relatively low to the end of the century compared with Australia and south-western USA due to lower rates of warming. By contrast, sublethal costs of chronic heat exposure are likely to drive large declines in avian diversity in the southern African arid zone by the end of the century. These laboratory-based studies provide a physiological basis for climate-driven local extinctions that are likely to apply to the current study. Birds in our study will inevitably be exposed to greater risk of acute hyperthermia and dehydration as summers warm in these semi-arid environments. Model projections for the region indicate a 50% increase in the frequency of days with air temperatures >38°C, a temperature threshold known to affect white-plumed honeyeaters at both sites [START_REF] Gardner | Dynamic size responses to climate change: prevailing effects of rising temperature drive long-term body size increases in a semi-arid passerine[END_REF], 2016[START_REF] Bailey | Using different body size measures can lead to different conclusions about the effects of climate change[END_REF]. Moreover, recent heatwaves in semiarid South Australia, provide some insights into the consequences of a 4°C rise for the survival of small passerines. In January 2019 the mean daily maximum temperature was 37.5°C, 4.2°C above average, identical to the temperature increase predicted for that region in 2090. Birds at the site were exposed to up to 10 h per day of air temperatures above body temperature (40 o C) [START_REF] Sharpe | Weighing the cost: the impact of serial heatwaves on body mass in a small Australian passerine[END_REF], the threshold at which evaporative cooling is required to maintain body temperatures below lethal limits [START_REF] Boyles | Adaptive thermoregulation in endotherms may alter responses to climate change[END_REF][START_REF] Albright | Mapping evaporative water loss in desert passerines reveals an expanding threat of lethal dehydration[END_REF]. Apparent mortality among the colour-ringed population of insectivorous jacky winters, Microeca fascinans, increased almost three-fold during the two-month heatwave period (20% of the population died) and all breeding attempts were abandoned. Birds were never observed to forage, provision nestlings or incubate at air temperatures > 38°C [START_REF] Sharpe | Too hot to handle? Behavioural plasticity during incubation in a small, Australian passerine[END_REF]. More recently, in December 2019, 30% of the ringed population disappeared, and presumably died, within 24hr of air temperatures reaching 49°C [START_REF] Sharpe | Too hot to handle? Behavioural plasticity during incubation in a small, Australian passerine[END_REF]. Body size will inevitably affect avian responses to high temperatures, with smaller birds at particular risk of mortality. Balancing energy and water budgets against the risk of fatal hyperther-mia is a challenge for smaller birds because they have smaller energy stores and body water pools but higher rates of heat gain and higher mass-specific rates of evaporative water and energy use [START_REF] Smit | Can behaviour provide a basis for rapid assessment of relative vulnerability of desert birds to climate change?[END_REF][START_REF] Albright | Mapping evaporative water loss in desert passerines reveals an expanding threat of lethal dehydration[END_REF]. Consequently, smaller birds have shorter survival windows during extreme heat events than larger species. 
[START_REF] Albright | Mapping evaporative water loss in desert passerines reveals an expanding threat of lethal dehydration[END_REF] estimate that a 9.7 g Lesser Goldfinch (Spinus psaltria) will lose body mass at about twice the rate of a 71 g Curve-billed Thrasher (Toxostoma curvirostre) (3.4% versus 1.8% body mass per hour). An increase in air temperature from 40 to 44°C will result in a 150% increase in rates of evaporative water loss, but time to death through dehydration differs greatly because of differences in mass-specific rates of evaporative water loss [START_REF] Albright | Mapping evaporative water loss in desert passerines reveals an expanding threat of lethal dehydration[END_REF]. However, we found no effect of body size on responses to temperature extremes, although the response to temperatures >38°C was consistent with poorer survival of smaller individuals. Our data may be too limited to detect strong effects, as we analysed only 37 species with a limited range of sizes (from 48 to 133 mm). Our study looks at just one component of fitness: longitudinal adult survival. Clearly, estimates of reproduction are necessary to model population viability per se, but such data are not available for our study species. Nevertheless, our estimates of survival are likely to correlate with population decline, regardless of recruitment. Even if populations maintain recruitment at current levels by advancing the start of seasonal breeding as winters warm, increasing exposure to extreme heat represents a clear threat to survival, affecting all individuals, suggesting a downward trend in population size is likely. Assuming the decrease in annual survival of around 30% (RCP 2.6), recruitment would have to increase by ca. 40% to offset the decline in survival (1/(1-0.3) = 1.43), which seems unrealistic. In general, adaptive micro-evolution may be substantial over a few generations [START_REF] Bonnet | Genetic variance in fitness indicates rapid contemporary adaptive evolution in wild animals[END_REF], but current evidence suggests that adaptive responses are most likely insufficient to offset the current rate of climate change [START_REF] Radchuk | Adaptive responses of animals to climate change are most likely insufficient[END_REF]. Moreover, the potential for genetic adaptation to higher temperatures that could reduce mortality is unclear given the lack of evidence for such effects in birds (Gienapp et al., 2008; [START_REF] Chevin | Evolution of phenotypic plasticity in extreme environments[END_REF]). In Australia, there is evidence for microevolution in genes that are associated with heat-related desiccation in Drosophila melanogaster [START_REF] Umina | A rapid shift in a classic clinal pattern in Drosophila reflecting climate change[END_REF], but we are unaware of similar examples in birds. This reflects the rarity of examples of microevolution in response to climate change in general, and heat tolerance in particular (Gienapp et al., 2008; [START_REF] Chevin | Evolution of phenotypic plasticity in extreme environments[END_REF]). In addition to microevolutionary changes in the mean response to extreme temperatures, a genetic response to natural selection may mitigate the negative impact of climate change on survival by reducing the temporal variation of vital rates, via demographic buffering [START_REF] Hilde | The Demographic Buffering Hypothesis: Evidence and Challenges[END_REF].
CONCLUSIONS Our study highlights the significance of temperature extremes for species' persistence in arid and semi-arid regions, comprising some 70% of the terrestrial land mass of Australia, and about 40% globally. We used long-term ringing data and a mark-recapture framework to estimate survival as a function of temperature extremes, rather than make projections based on the physiological costs of heat exposure. Despite differences in methodology and approach, our conclusions are similar to physiological-based projections, and suggest that rising summer temperatures pose a risk to population persistence for birds in arid and semi-arid regions.
Figure 1. Species-specific 6-monthly mean survival and mean recapture probabilities as estimated from a random intercept model (Model 1). Species are arranged by mean survival on the y-axis and color-coded by guild. Survival and recapture probabilities are calculated conditional on a median number of days with temperature extremes, median rainfall, and integrating sites, seasons and temporal variance. Points represent posterior means and horizontal bars represent 95% highest posterior density credible intervals.
Figure 2. Predicted probability of 6-monthly survival as a function of the number of days of extreme summer temperatures recording daily maxima >38°C and extreme winter temperatures recording minima <0°C. Average predictions, over all species, are shown in thick lines with shaded areas representing 95% credible intervals, and were derived from model 1. Thin lines represent species-specific predictions derived from model 4. Predictions were estimated for a median rainfall and population density, but integrated over temporal variation, and in the case of the average predictions, over species variation.
Figure 3. Predictions of daily temperature extremes under three emission scenarios from the model CESM1-CAM5. (A) Number of days on which temperatures are below 0°C in winter; (B) Number of days on which temperatures are above 38°C in summer. Numbers are observed from 1986 to 2015 (black line) and simulated according to the model from 2016 to 2104. Smoothed lines and 95% confidence intervals are produced with negative-binomial generalised additive models using thin-plate smooths of year from 1986 to 2104.
Figure 4. Projected 6-monthly survival as a function of predicted change in the number of (a) winter and (b) summer days with temperature extremes <0°C or >38°C, respectively; (c) annual survival to 2104 (averaged over all species, sites and time periods), for three scenarios of emissions and associated climate change. For winter the predictions from the three scenarios largely overlap. Dashed lines indicate the initial predicted survival probability to provide context for trends in predicted survival. Shaded areas represent 95% credible intervals.
01443735
en
[ "math.math-oc", "phys.phys.phys-comp-ph" ]
2024/03/04 16:41:20
2017
https://hal.science/hal-01443735/file/Bur2017.pdf
N Bur email: [email protected] P Joyot email: [email protected] On the Use of PGD for Optimal Control Applied to Automated Fibre Placement Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity it involves many complexities related to the necessity of melting the thermoplastic at the interface tape-substrate, ensuring the consolidation that needs the diffusion of molecules and control the residual stresses installation responsible of the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way since it requires a plethora of trials/errors or numerical simulations, because there are many parameters involved in the characterisation of the material and the process. Using reduced order modelling such as the so called Proper Generalised Decomposition method, allows the construction of multi-parametric solution taking into account many parameters. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem from heat equation and optimality criteria. To circumvent numerical issue due to ill-conditioned system, we propose an algorithm based on Uzawa's method. That way, we are able to solve the dual problem, setting the desired state as an extra-coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone. INTRODUCTION Automated Fibre Placement (AFP) is one the main technologies employed today to manufacture advanced composite laminates from unidirectional prepregs. This technique consists in laying and welding tapes of prepregs, building a laminate having the geometry and the desired mechanical characteristics. This process has been widely studied [START_REF] Sonmez | [END_REF]2,3,4] with numerical models becoming more accurate, robust and complex. New techniques of computation [5,6], based on variables' separation, have enabled models' enrichment by adding parameters as extra-coordinates [7]. Thus, the Proper Generalised Decomposition (PGD) method leads to multi-parametric virtual charts providing a whole set of solutions for each combination of the considered parameters [8,9,10,11,12]. Then, the computational vademecum can be exploited on-line for process control or process optimisation purposes. Indeed, within the AFP we want to efficiently control the heating power: tapes have to be heated enough to ensure the melting of the matrix coating the fibres and the cohesion with the previously laid tapes, while not exceeding a threshold from which material burns. Therefore, we took advantage of the PGD to build off-line virtual charts in order to determine the best power associated to a draping velocity profile [13]. However, in those simulations, solutions were computed from equations provided by underlying physics of the studied phenomena, the optimisation being carried out as post-process. 
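As a rough illustration of how such a virtual chart is exploited on-line, the sketch below evaluates a solution stored in separated (PGD) form for one choice of process parameters. The grids, the number of modes and the modes themselves are synthetic placeholders (in practice the modes come from the off-line PGD computation); only the structure of the evaluation, an interpolation of the parametric modes followed by a sum of products, is the point.

```python
import numpy as np

# Off-line, a PGD "vademecum" stores the solution in separated form,
#   u(x, z, v, P) ~ sum_i X_i(x) * Z_i(z) * V_i(v) * Q_i(P),
# i.e. one 1-D mode per coordinate and per enrichment term.  The modes below
# are random placeholders standing in for a real off-line computation.
x = np.linspace(0.0, 1.0, 101)          # space
z = np.linspace(0.0, 0.1, 51)
v = np.logspace(-3, 0, 40)              # draping velocity [m/s], extra-coordinate
P = np.linspace(100.0, 8000.0, 80)      # laser power [W], extra-coordinate
rng = np.random.default_rng(0)
n_modes = 5
modes = [(rng.standard_normal(x.size), rng.standard_normal(z.size),
          rng.standard_normal(v.size), rng.standard_normal(P.size))
         for _ in range(n_modes)]

def evaluate(v_query, P_query):
    """On-line particularisation of the chart: interpolate the parametric
    modes at the query values and sum the products; the cost is negligible,
    which is what makes real-time process control possible."""
    u = np.zeros((x.size, z.size))
    for X, Z, V, Q in modes:
        coef = np.interp(v_query, v, V) * np.interp(P_query, P, Q)
        u += coef * np.outer(X, Z)
    return u

field = evaluate(v_query=0.01, P_query=1900.0)   # temperature field for one process point
print(field.shape)
```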
We propose here to compute directly the solution of an optimisation problem in order to get the separated representations of both the field and the control to obtain it. That is to say the optimisation is made directly off-line, reducing the cost of the post-process and improving the real-time control of the AFP. Within the next section, we present the equations governing the phenomenon under consideration. In order to have a reference solution the system is solved by standard finite element method (FEM). Thereafter, section b) focuses on the writing and solving of the optimal system. We improve the obtained results by applying Uzawa's method in section b). Lastly section b) addresses few conclusions and perspectives. PROCESS MODELLING AFP process can be modelled with an heat equation associated with the next boundary conditions. Heat source applies on boundary Γ P (see Figure 1); considering a wide enough domain Ω, we set a Dirichlet's condition taking into account a fixed temperature on Γ R ; then, for the sake of simplicity, an homogeneous Neumann's condition is applied on others boundaries. Γ B Γ L Γ T Γ P Φ Ω FIGURE 1. Domain of study. This leads to the following advection-diffusion equation 1 ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ -div (K∇u) + ρC p V∇u = 0i n Ω; u = 0o n Γ R ; -K∂ n u = 0o n Γ B ∪ Γ L ∪ Γ T ; -K∂ n u = -Φ on Γ P ; (1) with K = k 0 0 k ⊥ et V = v 0 T . This system can obviously be solved in a classical way with standard FEM. However, since we want a reference solution to latter compare with results from other methods, we have to implement comparable systems. Consequently, in order to compare solution in separated representation, we take advantage of the tensor product method introduced by R. E. Lynche, J. R. Rice and D. H. Thomas [14] [15]. The writing of discrete form of Equation ( 1) separating the two space coordinates and using tensor forms of operators allows the handling of shape functions in the separated representation, the system remaining solved with a global, non separated FEM. We can thus obtain the temperature field from the input Φ modelling the laser heat flux. For some couples power/speed we compute the corresponding reference solutions. From these results we extract the temperature profiles on boundary Γ P where the source is applied. Indeed, we want to control the temperature on this same boundary during the process, to reach an optimum, providing the ideal heat flux. We compute also P S ray , the power seen by Γ P , as the integrand of the flux on this boundary multiplied by the width of the shone surface S ray . We gather in Table 1 some key values for these four reference solutions, for the purpose of latter comparison. Due to velocity values, we increase the number of nodes to contain the Péclet number, avoiding the requirement of a stabilisation technique v = 0.001 m • s -1 v = 0.01 m • s -1 v = 0.1m• s -1 v = 1m• s -1 Pw = 600 W Pw = 1897 W Pw = 6000 W Pw = SETTING UP THE OPTIMAL SYSTEM As announced in the Introduction, The PGD method provides multi-parametric virtual charts that can be used to control the AFP process [13]. Another way to go on is to use the optimal control theory. Within the AFP process, a heat flux provided by a laser melt the thermoplastic. The difficulty consists in determining the best power of the laser to reach an optimal temperature to melt the thermoplastic enough but not too much. 
Thus we consider the following cost-function to be minimised, since the flux is applied only on Γ P , part of the boundary J(u, Φ) = 1 2 Γ P (u -u d ) 2 + αΦ 2 dγ, (2) subject to the state advection-diffusion equation ( 1), with α cost parameter of the command. That way Φ is used as control and we want to reach u d on boundary Γ P . The domain of study is depicted on Figure 1. The corresponding Lagrangian writes L (u, Φ, p) = J(u, Φ) + Ω p -div (K∇u) + ρC p V∇u dx. ( 3 ) To find a stationary point of the Lagrangian we set ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ ∇ u L (u, Φ, p) = 0; ∇ Φ L (u, Φ, p) = 0; ∇ p L (u, Φ, p) = 0. ( 4 ) Expanding these equations leads to the non-linear optimality system (see [START_REF] Lions | Optimal Control of Systems Governed by Partial Differential Equations[END_REF]), whom weak form writes, with test functions u ⋆ et p ⋆ , ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ Ω K∇u ⋆ •∇u + Ω ρC p V∇uu ⋆ -Γ P 1 α pu ⋆ dγ = 0 ∀u ⋆ ; Ω K∇p ⋆ •∇p + Ω ρC p Vp∇p ⋆ + Γ P up ⋆ dγ = Γ P u d p ⋆ dγ ∀p ⋆ . (5) In Equation ( 5), fields u and p are coupled. The computation of such a problem can be achieved using mixed formulation, within a standard FEM as well as in the PGD framework as described thereafter. The discrete form of the state variable u (x, z) and the adjoint parameter p (x, z) are expressed in tensor product form U = ∞ i U i x ⊗ U i z and P = ∞ i P i x ⊗ P i z . (6) The vector Ψ which brings together nodal values of u and p takes the form Ψ= U P = ∞ i U i x ⊗ U i z P i x ⊗ P i z . (7) The discretised weak form of Equation ( 5) is written Ψ ⋆T AΨ=Ψ ⋆T B were A = 8 i=1 A j x ⊗ A j z and B = B x ⊗ B z will be expressed in a tensor form, and with the test function defined by Ψ ⋆ = U ⋆ x ⊗ U z + U x ⊗ U ⋆ z P ⋆ x ⊗ P z + P x ⊗ P ⋆ z . (8) This discretised problem can then be solved with the PGD method, for different values of the velocity, the desired state u d coming from the corresponding reference solution. Since the goal of optimal control is to minimise the distance between the unknown u and the desired state u d on the boundary Γ P , we retrieve the wanted temperature profile on this boundary. However, to reach it, the flux, computed by PGD, to be applied take substantially away compared to the FEM input, as speed increases (see Figure 2). v = 0.001 ms -1 v = 0.1ms -1 v = 0.01 ms -1 v = 1ms -1 UN-MIXING THE OPTIMALITY SYSTEM Applying the PGD framework on the optimality system (5) can produce, in our case, weird results. Instead of searching a workaround within numerical stabilisation techniques, we propose to solve this system without mixed elements. For this purpose, we use Uzawa technique [17] [18]. Given an initial guess p (0) for p, Uzawa's method consists in our case of the following coupled iteration: ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ A u u (k+1) = A pu p (k) p (k+1) = p (k) + ω A pu u (k+1) -u d + A p p (k) ( 9 ) where ω>0 is a relaxation parameter. Tensors are defined by A u = k K u x ⊗ M u z + M u x ⊗ k ⊥ K u z + ρC p vH u x ⊗ M u z A pu = - 1 α M pu x ⊗ M pu z A p = k K p x ⊗ M p z + M p x ⊗ k ⊥ K p z -ρC p vH p x ⊗ M p z A up = M up x ⊗ M up z This coupled iteration is computed within a fixed-point loop, each field u and p being solved by PGD. As previously, Figure 3 shows the computed flux to reach the desired temperature u d on the boundary Γ P . v = 0.001 ms -1 v = 0.1ms -1 v = 0.01 ms -1 v = 1ms -1 FIGURE 3. Flux on Γ P with Uzawa's method TABLE 2. Summary of key values v = 0.001 m • s -1 v = 0.01 m • s -1 v = 0.1m• s -1 v = 1m• FIGURE 2 . 2 FIGURE 2. Flux on Γ P for different speeds. TABLE 1 . 
Key values for reference solutions. CONCLUSION In order to control the AFP process, we proposed a method based on the PGD technique allowing the inclusion of parameters as extra-coordinates. Due to numerical instabilities, we transformed a mixed formulation into a coupled problem, improving the results significantly. The next step consists in considering the desired state u d as a new coordinate. Thus we will be able to build the process control directly as a multi-parametric virtual chart. Table 2 summarises the previous simulations, collecting some key values. Thus, solving the optimality system with the PGD algorithm, but without mixed elements, using Uzawa's method leads to consistent results, since both the temperature and the heat flux remain similar to those expected (using standard FEM).
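For readers unfamiliar with the coupled iteration of Eq. (9), the sketch below runs the generic Uzawa iteration on a small dense saddle-point system. These are not the paper's operators (which are Kronecker products of 1-D FEM matrices, with each solve performed by PGD); the sketch only shows the primal-solve / dual-update structure, with a step size taken from the Schur complement so that convergence is guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 10
# Generic saddle-point system  [A  B^T; B  0] [u; p] = [f; g]  standing in for
# the coupled optimality system: A plays the role of the state operator, p of
# the adjoint/multiplier.  All matrices here are illustrative placeholders.
A = 4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
B = 0.3 * rng.standard_normal((m, n))
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Relaxation parameter chosen from the Schur complement spectrum (0 < omega < 2/lambda_max).
S = B @ np.linalg.solve(A, B.T)
omega = 1.0 / np.linalg.eigvalsh(S).max()

p = np.zeros(m)
for k in range(2000):
    u = np.linalg.solve(A, f - B.T @ p)     # primal solve (done by PGD in the paper)
    r = B @ u - g                           # constraint residual
    p = p + omega * r                       # dual (Uzawa) update
    if np.linalg.norm(r) < 1e-10:
        break

print(k, np.linalg.norm(B @ u - g))         # residual ~ 0 at convergence
```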
04100209
en
[ "phys", "spi" ]
2024/03/04 16:41:20
2022
https://hal.science/hal-04100209/file/DPHY23049.1683708769.pdf
Jeanne Bernard Dphy Malo Cadoret email: [email protected] Yannick Bidel email: [email protected] Clément Salducci email: [email protected] Nassim Zahzam email: [email protected] Sylvain Schwartz email: [email protected] Alexis Bonnin email: [email protected] Cédric Blanchard email: cé[email protected] Alexandre Bresson email: [email protected] Progress towards the development of a cold-atom inertial measurement unit for onboard applications with performances at the level of the best classical inertial sensors [START_REF] Liu | Sensitive quantum tiltmeter with nanoradian resolution[END_REF]- [START_REF] Mcguirk | Sensitive absolute-gravity gradiometry using atom interferometry[END_REF]. There was only one demonstration of a complete atom interferometer IMU [START_REF] Canuel | Six-Axis Inertial Sensor Using Cold-Atom Interferometry[END_REF]. But it was a laboratory experiment which can work only in static condition and with rotation measurements not at the state of art level. There are still several blocking points to address in order to develop a compact atom IMU working in onboard environment. Here, we will give our progress toward a cold atom inertial measurement unit. First, we will present the hybridization strategy that we choose for the IMU. Then, we will present our work on the development of an onboard horizontal accelerometer. Finally, we will conclude and present the remaining developments. II. PRINCIPLE OF THE HYBRIDIZED ATOM INTERFEROMETER INERTIAL MEASUREMENT UNIT The hybridized cold-atom inertial measurement unit relies on the periodic calibration of 3 classical accelerometers and 3 classical gyroscopes by a multi-axis atom interferometer sensor (see Fig. 1). This is an extension of the method developed for atom accelerometer [START_REF] Bidel | Absolute marine gravimetry with matter-wave interferometry[END_REF], [START_REF] Lautier | Hybridizing matter-wave and classical accelerometers[END_REF]. At each measurement cycle, the atom interferometer is configured to measure one of the six inertial components. This absolute measurement is then used to estimate the bias and the scale factor of the corresponding classical inertial sensor. On the other hand, the measurements Abstract-In this paper we present our progress towards the development of a complete cold-atom Inertial Measurement Unit (IMU). Our goal is to have a single atomic sensor that will alternatively measure each inertial component (3 accelerations and 3 rotations). Every atomic measurement will be hybridized with a set of classical inertial sensors. Hybridization enables to combine the advantages of both technologies and to provide accurate continuous measurements with high dynamic range, in the scope of onboard inertial navigation compatibility. Index Terms-inertial sensor, atom interferometry, cold atoms, hybridization I. INTRODUCTION Atom interferometer sensor is a promising technology for inertial measurement. It has the main advantage compared to classical technology to provide absolute measurements (no calibration needed) and high long term stability (no drift). An Inertial Measurement Unit (IMU) i.e 3 accelerometers and 3 gyroscopes using this technology could thus provide a breakthrough in navigation application in which the biases of the sensors lead to position errors. 
Vertical accelerometers i.e gravimeters based on atom interferometry have already demonstrated better performances than classical sensors for static [START_REF] Karcher | Improving the accuracy of atom interferometers with ultracold sources[END_REF] [2] and onboard measurements [START_REF] Bidel | Absolute marine gravimetry with matter-wave interferometry[END_REF] and start to be commercialized [START_REF] Ménoret | Gravity measurements below 10-9 g with a transportable absolute quantum gravimeter[END_REF]. Concerning other inertial sensors (horizontal accelerometer, gyroscope, gradiometer), the level of maturity is much lower with only few laboratory instruments of the classical sensors are necessary to process the atom interferometer measurements in onboard environments. The output signal of a Mach Zehnder type atom interferometer consisting of three laser pulses is given by the following simplified expression: P = P m - C 2 cos k eff • ( a + 2 v × Ω) T 2 (1) where P m and C are the offset and the contrast of the fringe, a is the acceleration, v is the atom velocity, Ω is the rotation vector, k eff is the effective wave vector of the laser interrogating the atom and T is the time interval between the laser pulses. The measured inertial component is selected with the direction of laser k eff and the atom velocity. By imposing a zero velocity for the atom, we measure only the acceleration in the direction of the laser k eff . By imposing a non-zero velocity, we measure a combination of a and Ω. To retrieve only the rotation, the acceleration is estimated by the calibrated classical accelerometer. Due to the periodic aspect of the signal of the atom interferometer, many values of the inertial quantities are possible for a given value of the atom signal. This ambiguity is lifted by using an estimated value of the atom phase provided by the classical inertial sensor. Finally by using this strategy, the six classical inertial sensors are regularly calibrated providing thus an absolute and continuous measurement of 3 accelerations and 3 rotations. A multi-axes atom interferometer sensor has thus to be developed. The vertical acceleration measurement is mature with the demonstration of airborne [START_REF] Bidel | Absolute airborne gravimetry with a cold atom sensor[END_REF] and shipborne measurement [START_REF] Bidel | Absolute marine gravimetry with matter-wave interferometry[END_REF]. The next steps of development are the horizontal accelerometer and the gyroscopes. III. HORIZONTAL ACCELERATION MEASUREMENTS A. Zero-velocity atom interferometry In a light-pulse atom interferometer, laser pulses are used to coherently split, deflect, and recombine matter waves in order to create atom interference. Inertial sensors usually use stimulated two-photon (Raman or Bragg) transitions to create the beamsplitters and mirrors required for the interferometer. Stimulated two-photon transitions consist of an atom coherently absorbing and then emitting a photon from a pair of counterpropagating laser beams with different frequencies. This process results in a transfer between two internal ground states of the atom, accompanied with a momentum change of ± k eff , where k eff is the effective wave vector. For high precision measurements, the two counterpropagating Raman lasers are usually implemented in a retroreflected geometry, where a single laser beam with two laser frequencies is retroreflected off a mirror. 
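The hybridization principle of section II can be made concrete with a scalar toy version of Eq. (1), assuming zero mean atomic velocity so that the rotation term drops out. The numerical values below (k eff for a ~780 nm Raman pair, T = 20 ms, fringe offset and contrast) are representative orders of magnitude rather than the instrument's actual parameters; the point is how the classical accelerometer estimate lifts the fringe ambiguity.

```python
import numpy as np

# Scalar version of Eq. (1): P = P_m - (C/2) * cos(k_eff * a * T^2).
# All numbers are assumed, typical orders of magnitude only.
k_eff = 1.6e7        # rad/m, two-photon Raman wave vector (~2 * 2*pi / 780 nm)
T     = 20e-3        # s, pulse separation
P_m, C = 0.5, 0.8    # fringe offset and contrast
S = k_eff * T**2     # interferometer scale factor [rad / (m/s^2)]

def atomic_signal(a, noise=0.0):
    return P_m - (C / 2) * np.cos(S * a) + noise

def hybrid_acceleration(P_meas, a_classical):
    """Invert the fringe and lift the sign / 2*pi ambiguity using the
    (calibrated) classical accelerometer estimate a_classical."""
    c = np.clip(2 * (P_m - P_meas) / C, -1.0, 1.0)
    phi0 = np.arccos(c)                      # phase known modulo sign and 2*pi
    n = np.arange(-5, 6)                     # candidate fringe indices
    candidates = np.concatenate([(phi0 + 2 * np.pi * n) / S,
                                 (-phi0 + 2 * np.pi * n) / S])
    return candidates[np.argmin(np.abs(candidates - a_classical))]

a_true = 3.2e-4                              # m/s^2
P_meas = atomic_signal(a_true)
a_est  = hybrid_acceleration(P_meas, a_classical=3.0e-4)
print(a_true, a_est)                         # classical estimate selects the correct fringe
```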
In an atom interferometer, the direction of the acceleration measurement is given by the Raman (or Bragg) lasers direction. Therefore, in order to measure horizontal accelerations, the Raman lasers must be horizontal. This configuration is more challenging than a vertical accelerometer where the Raman beams are vertical. First, the atoms fall under gravity perpendicular to the laser beam, therefore limiting the falling distance to the beam diameter for an instrument using a single laser beam. The use of several laser beams is possible, but complicates the setup. Second, the atoms are interrogated with zero velocity in the Raman beam direction. In a retroreflected configuration, two pairs of Raman beams with opposite effective wave vector ±k eff can perform the twophoton transitions (see Fig. 2). When the atoms have a non zero velocity in the Raman beam direction, the Doppler effect naturally lifts the degeneracy between the two transitions ± k eff , and one can easily address a single transition. This is not the case in a horizontal accelerometer : lifting the degeneracy requires to give a velocity to the atoms thanks to a moving molasses technique for instance. The best horizontal accelerometers [START_REF] Liu | Sensitive quantum tiltmeter with nanoradian resolution[END_REF], [START_REF] Canuel | Six-Axis Inertial Sensor Using Cold-Atom Interferometry[END_REF] based on atom interferometry both use this technique. However, it leads to more complex and larger instruments. Another solution is to exploit the double diffraction scheme, but this only works if the two Raman transitions are perfectly degenerated. For onboard applications, where the atoms acquire an uncontrolled velocity under the effect of the vehicle accelerations, it is impossible to address the double diffraction scheme with high efficiency. This method is thus not appropriate to build an embedded horizontal accelerometer. Finally, a last method consists in selecting atoms with nonzero velocity within the atomic velocity distribution [START_REF] Geiger | Detecting inertial effects with airborne matter-wave interferometry[END_REF]. This technique leads to an important loss of atoms which is not satifying. In this paper we present two new techniques using single-diffraction two-photon Raman transitions despite a zero Doppler shift in a retroreflected geometry. B. Retroreflected frequency-chirped laser The first technique demonstrated in 2019 [START_REF] Perrin | Zero-velocity atom interferometry using a retroreflected frequency-chirped laser[END_REF] consists in applying a frequency chirp β = 1 2π dω1 dt = dω2 dt to the Raman laser frequencies ω 1 and ω 2 (see Fig. 3). Since the reflected beams are delayed with respect to the incoming ones by a time duration t d = 2L c (with L the distance between the atoms and the mirror), the incoming laser frequencies are shifted by δω = 2πβt d at the position of the atoms. Therefore, one transition is detuned from the other by 2δω : such a frequency-chirped laser method lifts the degeneracy between the two transitions ± k eff , allowing one to selectively address Using this method, we demonstrated a Mach-Zehnder type atom interferometer hybridized with a force balanced accelerometer providing horizontal acceleration measurements with a short-term sensitivity of 3.2 × 10 -5 m.s -2 / √ Hz (see Fig. 4). 
This is comparable to state of the art for horizontal configurations [START_REF] Xu | Quantum tiltmeter with atom interferometry[END_REF], despite shorter interrogation times (32 versus 226 ms). We cannot draw conclusions regarding the longterm stability of the hybridized acelerometer because of the fluctuations of the Raman mirror angle. Future improvements of the experimental setup could include the implementation of an auxiliary tilt sensor monitoring the angle between the Raman beam and the horizontal plane during the measurement. C. σ + -σ -Raman transitions between magnetically sensitive states For the second technique, we implemented a horizontal accelerometer based on atom interferometry using counterpropagative Raman transitions between the hyperfine states |F = 1, m F = ∓1 and |F = 2, m F = ±1 (see Fig. 5) [START_REF] Bernard | Atom interferometry using σ + -σ -Raman transitions between |F = 1, m F = ∓1 and |F = 2, m F = ±1[END_REF]. Atom interferometry usually exploits the |F = 1, m F = 0 ↔ |F = 2, m F = 0 transition, but our scheme presents several advantages. First, due to selection rules, only a single counterpropagating transition is allowed even if the Raman laser beams are retroreflected (see Fig. 5). Secondly, it uses Raman beams with the same polarization configuration as the magneto-optical trap. Finally, it allows the control of the atom trajectory with magnetic forces. With this method, we performed the measurement of the horizontal component of acceleration with a hybridized accelerometer and reached a short-term sensitivity of 25 × 10 -5 m.s -2 / √ Hz (see Fig. 6). Specific features of our technique, in comparison with usual magnetically-insensitive Raman-based atom interferometers, are the following : higher spontaneous emission rate (as the Raman lasers require to be tuned closer to atomic transitions), additional sensitivity to magnetic field inhomogeneity and light shifts (the differential light shift can not be cancelled out by tuning the intensity ratio between the Raman pair). The bias due to the magnetic field gradient is the main limitation to this hybridized atom accelerometer accuracy (2×10 -4 m.s -2 ). Using the D 1 line (instead of the D 2 line) for the Raman excitation scheme could enhance the short-term sensitivity of this sensor, because it would drastically reduce spontaneous emission. In addition, the bias could be reduced by using a high-performance magnetic shielding. Moreover, that acceleration bias could be canceled out by implementing a fountain geometry. IV. CONCLUSION AND PERSPECTIVES We have demonstrated two new methods for measuring horizontal accelerations in a close-to-zero velocity regime. Both methods can be implemented in an embedded IMU. Next step of development are the gyroscopes. Here the challenge is be to perform interferometric measurements of rotations in a compact device but with state-of-the-art performance. Large momentum transfer beamsplitters will be necessary since the sensitivity of the interferometric sensor is proportional to its enclosed space-time area. Fig. 1 . 1 Fig. 1. Possible implementation of the hybridized cold atom IMU (SF: Scale Factor). Fig. 2 . 2 Fig. 2. Left : Schematic setup of two-photon Raman transitions in the commonly used retroreflecting geometry. Right: Scheme of the Raman transition between the two hyperfine ground states of an alkaline atom in the absence of Doppler shift. Both pairs ± k eff are simultaneously resonant. Fig. 3. 
Left: The frequency chirp β applied to the Raman laser frequencies shifts the incident laser frequency by δω = 2πβt d, where t d = 2L/c. Right: Energy diagram showing that the frequency chirp β allows one to lift the degeneracy between the two resonant transitions by an amount 2δω. Fig. 4. The blue line represents the Allan standard deviation (ADEV) of the hybridized atomic accelerometer. The τ -1/2 scaling is represented by the dashed line, whereas the τ scaling appears in the green line. Fig. 5. (a) Energy diagram of the two hyperfine ground states of 87 Rb when a magnetic field is applied. The degeneracy between the Zeeman sublevels is lifted and one can perform σ + -σ - Raman transitions. The circularly polarized beams which perform the |F = 1, m F = -1 ↔ |F = 2, m F = 1 transition appear in solid lines, whereas the beams performing the |F = 1, m F = 1 ↔ |F = 2, m F = -1 transition are represented in dashed lines. (b) Usual implementation of two-photon Raman transitions in a retroreflected configuration. The atom is a two-level system which interacts with two pairs of counterpropagating light fields with σ + -σ - polarizations. Such a polarization scheme allows only one pair of counterpropagating light beams to drive the Raman coupling. This results in a single diffraction process even if there is no Doppler shift.
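The short-term sensitivities quoted above are read off Allan-deviation plots such as Fig. 4. The sketch below is a plain (non-overlapping) estimator of that statistic, applied to synthetic white noise so that the expected τ^-1/2 behaviour can be checked; the noise amplitude and sampling interval are arbitrary assumptions, not the instrument's values.

```python
import numpy as np

def allan_deviation(y, tau0, m_list):
    """Non-overlapping Allan deviation of a regularly sampled series y
    (sample interval tau0) for the averaging factors in m_list."""
    y = np.asarray(y, dtype=float)
    taus, adev = [], []
    for m in m_list:
        n_blocks = y.size // m
        if n_blocks < 2:
            break
        means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        taus.append(m * tau0)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)

# White-noise acceleration samples: the ADEV should fall as tau^(-1/2),
# the short-term behaviour shown in Fig. 4 (amplitude here is arbitrary).
rng = np.random.default_rng(2)
tau0 = 1.0                                   # s, one interferometer cycle
y = 3e-5 * rng.standard_normal(200_000)      # m/s^2 per sample
taus, adev = allan_deviation(y, tau0, m_list=2 ** np.arange(0, 15))
print(np.round(adev[:5] * np.sqrt(taus[:5]), 7))   # roughly constant -> tau^(-1/2) slope
```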
00436229
en
[ "spi.meca.mema" ]
2024/03/04 16:41:20
2006
https://hal.science/hal-00436229/file/Buffiere2006.pdf
Jean-Yves Buffiere Emilie Ferrié Wolfgang Ludwig Anthony Gravouil Jean-Yves Buffière Characterisation and modelling of the three dimensional propagation of short fatigue cracks Keywords: fatigue, micro-tomography, cast alloy, fine grain, 3D Introduction In the last twenty years, a lot of experimental and theoretical efforts have been devoted to the characterisation and modelling of small fatigue crack propagation [START_REF] Newman | The merging of fatigue and fracture mechanics concepts : a historical perspective[END_REF], Despite of this intense activity, the growth rate predictions for this kind of defects are far from being completely satisfactory for the moment. Small fatigue cracks have a complex 3D shape and their propagation at the surface of a fatigue sample, where they are readily observed, is not necessarily representative of the bulk behaviour unlike long cracks (typically more than several millimetres in length). Although this problem has been recognised for at least twenty years [START_REF] Miller | The short crack problem[END_REF] it has hardly been taken into consideration in practice. From a characterisation point of view, the resolution of experimental techniques such as beach marking or heat tinting conventionally used to visualise the 3D shape of long cracks is not sufficient for small cracks. Thus, up to now, experimental characterisation has remained quite limited. For such a characterisation, X-ray micro-tomography is a very attractive technique which enables the visualisation of internal features in opaque samples. Being a non destructive technique, it also enables, in principle, in situ visualisation of damage during loading and hence the chronology of damage initiation and growth. In the last ten years, significant progress has been made in terms of resolution with both the availability of new third generation synchrotron X-ray sources and new detectors [START_REF] Baruchel | Tomography in Material Science[END_REF]. A spatial resolution close to that of an optical microscope can now be achieved in 3D which opens (or re-opens) wide areas of research. This paper presents some recent experiments which show the 3D development of fatigue cracks inside two aluminium alloys with very different grain sizes. The crack morphologies are studied in relation to the microstructure along the crack front. Examples of the use of 3D growth data for modelling the propagation of fatigue cracks are given. Experiments and methods Materials Two aluminium alloys have been studied. One is a model AS7GO3 cast Al alloy (Al-7Si-0.3Mg, in wt%) containing artificial pores introduced during the casting process. This was used in the T6 conditions (yield stress 200 MPa). The pores located at the surface or just below it act as stress raisers in this material and promote local cyclic plasticity which leads to crack initiation. Because of the casting process, the size of the grains is large (∼ 300 µm) and their 3D shape is irregular. This can be visualised by X ray tomography via a Gallium infiltration technique. Although this method is destructive (Gallium renders the material extremely brittle) it can be used at the end of a fatigue test to reveal the shape of the grains surrounding a fatigue crack. The 3D shape of a crack can therefore be studied as a function of the grain boundaries encountered locally by the crack front [START_REF] Ferrié | 3D characterisation of the nucleation of a short fatigue crack at a pore in a cast Al alloy using high resolution synchrotron microtomography[END_REF]. 
More detailed descriptions of the microstructure and fatigue deformation mechanisms of this model material and of the infiltration technique can be found elsewhere [START_REF] Buffière | Experimental study of porosity and its relation to fatigue mechanisms of model Al-Si7-Mg0.3 cast Al alloys[END_REF], [START_REF] Ludwig | Study of the interaction of a short fatigue crack with grain boundaries in a cast Al alloy using X-ray microtomography[END_REF]. The second material studied is a powder metallurgy Al-Li alloy based on AA5091 alloy (Al-1.2Li-4.0Mg-1.0C-0.5O in wt%). The material was selected because of its grain size which is of the order of 1 µm. The ultrafine grains promote homogeneous deformation and prevent crystallographic cracking so that the crack shape appears very regular with the spatial resolution of the 3D imaging technique employed in this study (∼ 1 µm). The material was produced from powders using a mechanical alloying process, followed by hot isostatic pressing (hipping) and forging. It was used in the as-forged (T1) condition (Yield stress 450 MPa) [START_REF] Ferrié | Fatigue crack propagation: In situ visualization using X-ray microtomography and 3D simulation using the extended finite element method[END_REF]. Micro-tomography All the experiments described in this paper have been performed at the European Radiation Synchrotron Facility (ESRF) in Grenoble (France) on beam line ID19. The details of the tomography experimental set up of ID19 can be found in reference [START_REF] Ludwig | Study of the interaction of a short fatigue crack with grain boundaries in a cast Al alloy using X-ray microtomography[END_REF], In the following paragraphs, only the main experimental features are presented. Obtaining a tomographic image consists, firstly, of recording a series of radiographs (called scans) of a sample which is rotated around one axis. These radiographs are further used by a reconstruction algorithm to obtain a 3D numerical image of the sample which is, in its classical form, a 3D map of the attenuation coefficient in the sample. The two dimensional (2D) radiographs are recorded on a Charge Coupled Device (CCD) camera with a square array of 2048*2048 elements. This detector is coupled with a fluorescent screen via optical lenses. The white beam coming from the synchrotron ring is rendered monochromatic by a multilayer monochromator. The energy of the beam is set to 20 keV. The monochromatic beam is parallel so no geometric enlargement is possible, instead, the voxel size is a result of the camera optics used. The size of the isotropic voxels in the reconstructed images is 0.7µm, a value which gave a good compromise between the specimen size and the spatial resolution. The specimen section to be imaged is hence restricted to 1 × 1mm 2 . In situ fatigue tests A dedicated fatigue machine with a reduced size and a low vibration level has been designed in order to monitor crack development in situ. The machine was directly installed on the rotation stage of the micro-tomography setup. The samples have been cycled in air with a constant stress amplitude (R=0.1). Crack initiation was first monitored by radiography. Once a crack was initiated, full 3D tomographic scans of the crack were recorded at regular intervals during the (interrupted) fatigue test which was stopped before the final fracture of the sample. During image acquisition the sample was maintained under maximum load to improve crack detection. The cyclic conditions were different for the two materials. 
For the cast alloy a maximum cyclic frequency of 5Hz was attainable. Therefore a maximum stress of 180 MPa was used in order to make the fatigue life "fit" within a 24 hour experiment. This rather high value somewhat restricts the probability of observing crack arrests, which are typical of the short crack regime below the macroscopic crack propagation threshold. For the Al-Li alloy a cyclic frequency of 40 Hz could be achieved by changing the loading system from pneumatic to mechanical. The maximum load value was set at 220 MPa. Similar hour glass specimens were used for both materials (section 1*1 mm 2 ) but in the case of Al-Li a thin (2 µm) rectangular notch, 100 µm wide and 20 µm deep, was produced in the sample using Focused Ion Beam (FIB) machining. This notch was located at the centre of one of the specimen faces and acted as a crack initiation site [START_REF] Ferrié | Fatigue crack propagation: In situ visualization using X-ray microtomography and 3D simulation using the extended finite element method[END_REF] Results and analysis Cast alloy The development of a fatigue crack initiated on a sub-surface pore in the corner of a sample of the cast material is illustrated in Fig. 1. This figure shows that, for the experimental conditions investigated, the front of a microstructurally short crack was not smooth but exhibited protruding and retreating parts (see arrows labeled 1, 2 and 3 on the figure). Although the crack propagation plane was on average perpendicular to the uniaxial fatigue stress, local deviations from this plane did occur. These appear as shadows on the image. The examination of several similar images revealed that these deviations were usually more pronounced in the interior of the specimen that at the surface where the crack remained, on average, more planar and perpendicular to the stress direction. Using the Gallium infiltration technique, it was possible to image the 3D shape of the grains surrounding the crack. The results of this analysis revealed that the irregularities observed on the crack front were always correlated with the presence of a grain boundary. Hence, in spite of the rather high stress level used for the tests, the local crystallography appeared to either promote or impede the crack propagation. It has been suggested [START_REF] Cox | Monte Carlo Simulations of the Growth of Small Fatigue Cracks[END_REF] that this effect of local crystallography could in fact be balanced by a local modification of the stress intensity factor K along the crack front: a protruding part of the front leading to an increase in K and vice versa. In order to check this assumption, K calculations along the experimental crack fronts have been performed using the eXtended Finite Elements Method (XFEM). The authors are well aware that for such short fatigue cracks the use of the K parameter is probably not valid. However the calculation was performed to estimate, as a first approximation, the magnitude of variation of K along an experimental crack front compared with a crack of similar shape but showing a regular crack front. To perform the calculation the 3D crack shape was first obtained by thresholding the grey level tomographic images. The crack was then transformed into a 3D FE mesh. The results of the calculation are shown in Fig. 2. They confirm that in the parts of the crack front which are propagating faster (eg for the part of the crack front comprised between the letters A and B on Fig. 
2, a decrease in K is observed3 with respect to the crack with a regular shape. More precisely, the largest variations in K are induced by small values of the local radius of curvature (m 1 and m 2 on Fig. 2). The K values calculated at the surface of the sample (θ = 0 degrees and θ = 90 degrees) note were higher than those calculated for a crack with a regular shape indicating that sub-surface irregularities along the crack front (invisible from the surface) do affect the values of the stress intensity factor at the sample surface. Al-Li alloy In the Al-Li material a crack initiated from the notch after 2000 cycles. Its propagation was monitored every 1000 cycles up to 24000 cycles. The observed crack shapes are shown in Fig. 3 for three different numbers of fatigue cycles. The crack fronts were very smooth and the plane of propagation was almost perfectly perpendicular to the cyclic stress direction. This is a direct consequence of the small grain size encountered in this material. The crack aspect ratio a/c (see definitions of a and c on Fig. 3 remained larger than 1 throughout the propagation, indicating a higher crack growth rate in the bulk. Calculation of the K I values along the crack front was carried out using the same thresholding/meshing method described previously for the cast alloy. Fig. 4 shows examples of the calculated K I values. At the sample surface the K I values were obtained by extrapolating the values obtained immediately below the surface. As expected from the crack aspect ratio the K I values were found to be lower in the bulk of the sample. It is worth mentioning out that analytical values obtained with the classical Newman & Raju [9] formulas were in very good agreement with the FE values (see Fig. 4). From the K=f(θ) curves, and the experimental values of the crack size at different number of cycles, da/dN=f(∆K) and dc/dN=f(∆K) curves could be obtained. They confirm that the crack growth rate is higher in the bulk of the material than at the surface, for a given ∆K value. The measured surface crack growth rate was very similar ( for ∆K larger than 3.5 MPa.m 1 2 ) to that measured on macroscopic CT samples. The 3D propagation of the crack was simulated using the C and m parameters of the Paris law determined from the experimental da/dN=f(∆K) curve. This curve was chosen rather that the dc/dN curve because the calculation of K I values was assumed to be less problematic in the bulk (no extrapolation needed). An example of the simulation results is shown in Fig. 5. This figure shows that the experimental crack front cannot be simulated using a single propagation law; this was expected given the fact that, as mentioned before, different crack growth rates were observed in the bulk and at the surface. The lower crack growth rate observed at the surface was assumed to be due to crack closure. In this material, Plasticity Induced Crack Closure (PICC) is considered to be the main mechanism responsible for closure. At the surface of the sample where plane stress prevails, a larger plastic zone is expected, leading to a closure level higher than in the bulk. This 3D aspect of closure has been reported various times in the literature (eg in reference [10]). A constant ratio of ∆K ef f /∆K of 0.4 was assumed at the sample surface. This value is typical of the closure mechanism in this material. The closure level was then assumed to vary from 0.4 at the surface to 0 in the bulk linearly. 
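A crude sketch of the simulation just described is given below: the deepest point and the surface point of a semi-elliptical crack are advanced cycle by cycle with a Paris law, the surface point seeing only an effective ΔK reduced by closure. The stress-intensity estimates (a constant geometry factor times the stress range times √(πx)) stand in for the XFEM / Newman & Raju values, and the Paris constants, geometry factors and closure ratio are placeholders, so only the qualitative surface-versus-bulk behaviour is meaningful.

```python
import numpy as np

# Cycle-by-cycle Paris-law integration of a semi-elliptical crack, tracked by
# its depth a (bulk point) and half surface length c (surface point).  All
# numerical constants below are assumed for illustration only.
C_paris, m_paris = 3e-10, 3.0     # da/dN in m/cycle with dK in MPa*sqrt(m)
sigma = 220.0                     # MPa, maximum applied stress (R = 0.1 -> delta_sigma ~ 0.9*sigma)
Y_a, Y_c = 0.7, 0.65              # geometry factors, taken constant
U_surface, U_bulk = 0.4, 1.0      # effective delta-K ratio: strong closure at the surface only

a, c = 20e-6, 50e-6               # initial crack sized like the FIB notch (m)
dN = 100                          # cycle block
history = []
for n in range(0, 25000, dN):
    dK_a = Y_a * 0.9 * sigma * np.sqrt(np.pi * a)   # deepest point
    dK_c = Y_c * 0.9 * sigma * np.sqrt(np.pi * c)   # surface point
    a += dN * C_paris * (U_bulk    * dK_a) ** m_paris
    c += dN * C_paris * (U_surface * dK_c) ** m_paris
    history.append((n + dN, a, c, a / c))

for n, a_, c_, ratio in history[::50]:
    print(f"N={n:6d}  a={a_*1e6:7.1f} um  c={c_*1e6:7.1f} um  a/c={ratio:4.2f}")
```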
With this simple assumption, a very good correlation was found between the calculated and experimental crack fronts all through the fatigue life of the sample. An example is shown in Fig. 5. More details on the simulation of crack propagation can be found in reference [START_REF] Ferrié | Fatigue crack propagation: In situ visualization using X-ray microtomography and 3D simulation using the extended finite element method[END_REF]. Conclusion The availability of powerful synchrotron X-ray sources coupled with the development of new detectors has enabled the development of high resolution X-ray microtomography setups which deliver images of internal features in optically opaque materials with a spatial resolution close to that of optical microscopy. In the work reported here, this technique has been applied to the field of fatigue. The 3D shape of fatigue cracks grown in situ can be determined, providing unique experimental data. For Al alloys, it has been shown that fatigue cracks with a size comparable to the grain size exhibit an irregular crack front, which is caused by the presence of grains which impede or promote crack propagation. Those irregularities on the crack front induce variations in the stress intensity factor which tend to balance the effect of the microstructure. When the crack size is much larger than the grain size, the crack shape becomes very planar and the crack front is very smooth. For the experimental conditions investigated, the crack growth rate was found to be higher in the bulk than at the surface. This anisotropy has been attributed to a 3D closure effect: the closure level being higher at the surface (plane stress) than in the bulk (plane strain). A good agreement between experimental and simulated crack fronts is found when the 3D correction of crack closure between bulk and surface is taken into account. Fig. 1: 3D rendition of a small fatigue crack growing inside a cast Al alloy. Fig. 5: Comparison between experimental and simulated crack fronts with and without correction for 3D crack closure. (The crack being planar, only the K I (Mode I) factor is calculated.) Acknowledgement The authors are grateful to Mr. Bultreys from FEI Europe for the FIB machining, to Dr. Ian Sinclair (Southampton University) for fruitful discussions on the 3D nature of the closure phenomenon, and to Prof. L. Edwards (Open University) for providing the Al-Li alloy. The ESRF and its staff are acknowledged for their help and assistance during the tomographic imaging experiment.
04100280
en
[ "sdv.gen" ]
2024/03/04 16:41:20
2021
https://theses.hal.science/tel-04100280/file/va_Xie_Juanjuan.pdf
Dr Damien Hermand Dr Julie Soutourina Dr Alain Jacquier Dr Pascale Lesage Dr André Sentenac Keywords: RNA polymerase III, transcription termination, T-tract, helicase Sen1, RNA secondary structure v the time to read, comment and evaluate my work. I am so iii Abstract Transcription is the process whereby RNA molecules are synthetized using DNA as the template. In eukaryotic cells, this process is carried out by three different kinds of RNA polymerases (RNAP). RNAPI is mainly responsible for the production of rRNA, RNAPII transcribes protein-coding genes and several classes of non-coding genes and RNAPIII is mostly dedicated to the synthesis of tRNAs, 5S rRNA as well as a few short non-coding RNAs. The genomic distribution of different polymerases needs to be tightly controlled to avoid disruptive interferences between adjacent transcription events, which largely depends on the process of transcription termination. The current model posits that RNAPIII termination relies solely on a stretch of thymines (T-tract) in the non-template DNA strand that are presumably sufficient for both RNAPIII pausing and its release from the DNA. However, previous results from our group identified an interaction between RNAPIII and a well-characterized RNAPII transcription termination factor, the helicase Sen1, which prompted us to investigate a possible role for Sen1 in the termination of RNAPIII transcription in budding yeast. In this study, to address specifically the function of Sen1 in RNAPIII transcription termination I have employed a Sen1 mutant (sen1-3) containing three point mutations in a conserved region of the Sen1 N-terminal domain, which are sufficient to abolish the interaction with RNAPIII without affecting RNAPII transcription termination. By generating high-resolution maps of transcribing RNAPIII, I have observed that a significant fraction of RNAPIIIs normally read through the primary terminator (i.e. the first T-tract downstream of the 3' end of genes). Importantly, I have shown that the mutations in sen1-3 induce transcription termination defects at most RNAPIII-dependent genes, indicating that Sen1 is globally required for efficient termination of RNAPIII transcription in vivo, and that this function relies on the interaction of its N-terminal domain with RNAPIII. In addition, I have shown that Sen1 acts mainly at read-through regions as a fail-safe mechanism to promote termination of RNAPIIIs that override the primary terminator. In order to explore whether Sen1 can directly induce the release of RNAPIII from the DNA, I have performed in vitro transcription termination assays with purified proteins. First, I have tested the termination efficiency at T-tracts of different lengths and I have shown that six consecutive thymines are sufficient for RNAPIII transcription termination in vitro. Furthermore, I have demonstrated that Sen1 can promote termination at weak terminators (i.e. containing 4 or 5 Ts), as well as at other kinds of pausing sequences. By analysing the function of several Sen1 variants, I have shown that the helicase domain of Sen1 alone can induce transcription termination of RNAPIII in vitro as the full-length protein. Moreover, I plus, j'ai découvert que la terminaison médiée par Sen1 nécessite la liaison de Sen1 à l'ARN et l'activité ATPasique de Sen1, comme montré précédemment pour la terminaison de la transcription de l'ARNpol II. 
Enfin, j'ai découvert que les structures secondaires qui se trouvent généralement sur les ARN transcrits par l'ARNpol III peuvent aussi complémenter la fonction des terminateurs faibles. J'ai également montré que la présence de ces structures empêche l'interaction de Sen1 avec l'ARN, ce qui indique que Sen1 et les structures de l'ARN fonctionnent d'une manière mutuellement exclusive. Alors que Sen1 peut favoriser le relâchement de l'ARNpol III de différentes types de séquences, les structures d'ARN ne peuvent fonctionner qu'avec une séquence de terminaison canonique. Ensemble, nos données offrent un modèle détaillé pour la terminaison de la transcription de ARNpol III qui implique la coopération des suites de Ts, des structures secondaires d'ARN et de l'hélicase Sen1. Table 1- Table of Contents List of Figures List of Tables INTRODUCTION Chapter 1 General introduction of RNA polymerase and transcription Chapter 1 General introduction of RNA polymerase and transcription It has been more than 50 years since the Central Dogma of molecular biology was enunciated by Francis Crick [START_REF] Crick | On protein synthesis[END_REF]. The general idea of the Central Dogma is that the genetic information is copied from DNA into RNA, and then used to make a functional protein. RNA molecules that can encode proteins are known as messenger RNAs (mRNAs). The process by which the genetic information is converted into functional products is called gene expression, which typically contains two key stages: transcription and translation. However, this notion does not apply to a large group of RNA molecules which are transcribed but not destined to become proteins, and thus are termed non-coding RNAs (ncRNAs). Among them, transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs) are the most abundant ones and function as either carriers of amino acids, in the case of tRNAs, or as structural components of the ribosome, in the case of rRNAs, which are both indispensable for protein synthesis. Other classes of noncoding RNAs (such as small nuclear RNAs, snRNAs) also perform many diverse and important functions in the regulation of gene expression. Moreover, prokaryotic and eukaryotic genomes are pervasively transcribed to generate a large ensemble of different RNA molecules, most of them without any known function [START_REF] Jensen | Dealing with Pervasive Transcription[END_REF]. Overall, RNA is a central Abbreviations: rDNA, ribosomal DNA; tDNA or transfer RNA gene; ncDNA, non-protein coding gene; RNAPI, RNA polymerase I; RNAPII, RNA polymerase II; RNAPIII, RNA polymerase III; mRNA, messenger RNA; rRNA, ribosomal RNA; tRNA, transfer RNA; ncRNA, non-coding RNA; lncRNA, long non-coding RNA; snoRNA, small nucleolar RNA; snRNA, small nuclear RNA; miRNA, microRNA; siRNA, small interfering RNA; piRNA, Piwi-interacting RNA; CUT, cryptic unstable transcript; SUT, stable unannotated transcript; XUT, Xrn1-sensitive unstable transcript; RUT, Reb1-dependent unstable transcript. For pervasive transcripts, see Box 2-3 in section 2.5.2. *U6 snRNA is transcribed by RNAPIII. component in the flow of genetic information, and therefore plays a fundamental role in all biological processes (Figure 1-1). Discovery of the RNA polymerase The biosynthesis of RNA is carried out by a complex molecular machine, the DNA-dependent RNA polymerase (RNAP), which exists in all living organisms. 
The RNA polymerase was first discovered in 1959 by Samuel Weiss and Leonard Gladstone when they found that rat liver nuclei support the synthesis of RNA in reaction mixtures containing the four types of nucleotides [START_REF] Weiss | A mammalian system for the incorporation of cytidine triphosphate into ribonucleic Acid1[END_REF]. One year after, the RNA polymerase was simultaneously discovered in Escherichia coli extracts by several independent studies [START_REF] Huang | Enzymatic synthesis of RNA[END_REF][START_REF] Hurwitz | The enzymic incorporation of ribonucleotides into polyribonucleotides and the effect of DNA[END_REF][START_REF] Stevens | Incorporation of the adenine ribonucleotide into RNA by cell fractions from E. coli B[END_REF]. Bacteria RNAP was then purified to homogeneity and found to require an additional subunit, the sigma factor, in order to perform specific transcription initiation [START_REF] Burgess | Factor Stimulating Transcription by RNA Polymerase[END_REF]. Whereas bacteria contain a single type of RNA polymerase, multiple forms of DNA-dependent RNA polymerases were found in eukaryotic cells by Robert Roeder and William Rutter in 1969. In developing sea urchin embryos, they identified three chromatographically distinct RNA polymerases that later proved to contain distinct subunit composition and to produce distinct subsets of RNAs in nuclei [START_REF] Reeder | Ribosomal RNA synthesis in isolated nuclei[END_REF][START_REF] Roeder | Multiple Forms of DNA-dependent RNA Polymerase in Eukaryotic Organisms[END_REF][START_REF] Sklar | Distinct molecular structures of nuclear class I, II, and III DNAdependent RNA polymerases[END_REF]Weinmann et al., 1974;Weinmann & Roeder, 1974;reviewed in Roeder, 2019). Only in the early 1990s the polypeptide sequences of subunits of the three RNAPs were initially revealed for yeast [START_REF] Sentenac | Yeast RNA polymerase subunits and genes[END_REF]. Owing to the substantial progress of biochemistry, molecular biology, genomics and structural biology, all forms of nuclear RNAPs in bacteria, archaea and eukaryotes have so far been very well described at the structural and functional level. Even though the subunit composition of these RNAPs vary among the three domains of life, they have a common structural framework and operate by closely related molecular mechanisms, indicating that the last universal common ancestor (LUCA) of all life forms on earth had an RNAP very similar to the simplest form of contemporary RNAPs found in bacteria [START_REF] Werner | Evolution of multisubunit RNA polymerases in the three domains of life[END_REF]. The composition of the different RNAPs is summarized in Figure 1-2 andTable 1-1, and their detailed mechanisms during transcription will be discussed in the following chapters. Bacterial RNA polymerase Bacteria have the simplest form of RNA polymerase comprising five subunits, which are two copies of a, b, b', and w, encoded by four genes. These five subunits (a 2 bb'w) constitute the bacterial RNAP core enzyme, with a total molecular mass of around 400 kDa (Figure 12). All RNAPs in archaea and eukaryotes contain homologues of the bacterial core RNAP subunits (Table 1-1). The bacterial core RNAP can bind DNA in a non-sequence-specific manner and initiate transcription from DNA ends or nicks. For initiating transcription at promoter DNA, the core polymerase must bind to a single regulatory subunit known as sigma (s) factor, which confers specificity and partakes in promoter DNA opening. 
The core enzyme associated with s factor is referred to as the holoenzyme. Bacteria express various kinds of s factors which direct the polymerase to specific promoters in response to environmental cues (reviewed in [START_REF] Sutherland | An Introduction to the Structure and Function of the Catalytic Core Enzyme of Escherichia coli RNA Polymerase[END_REF]. The simplicity of subunit composition and evolutionary conservation among all organisms make bacterial RNAP an ideal model for studying the mechanisms of transcription. Eukaryotic RNA polymerases In eukaryotic cells transcription of nuclear DNA is carried out by three kinds of multi-subunit RNA polymerases, namely RNA polymerase I (RNAPI), RNA polymerase II (RNAPII) and RNA (A) Subunit composition of eukaryotic RNAPI (left), RNAPII (middle) and RNAPIII (right). Subunits are labelled according to Table 1-1. (B) Subunit composition of the bacterial core RNA polymerase. (A) and (B) are adapted from [START_REF] Wild | Biogenesis of multisubunit RNA polymerases[END_REF]. (C) Subunit composition of the archaeal RNA polymerase. Scheme adapted from [START_REF] Hirata | Archaeal RNA polymerase[END_REF]. Subunits in (B) and (C) are coloured according to their eukaryotic homologs in (A), except for Rpo13 which is only found in the domain of Archaea. polymerase III (RNAPIII). Specifically, RNAPI mainly produces rRNAs, RNAPII synthetises mRNAs and several classes of ncRNAs, and RNAPIII is mostly responsible for the production of tRNAs, the 5S rRNA and a few additional short and abundant ncRNAs. Although RNAPII transcribes the largest number of genes, its transcripts only constitute 5-10% of all cellular RNAs due to their low expression level and high turn-over rates. The vast majority of RNA molecules are rRNAs transcribed by RNAPI which take up to ~75% of total RNAs. RNAPIII transcripts (tRNAs and 5S rRNA) are the second most abundant RNA species and constitute around 15% of all RNAs in a cell [START_REF] Khatter | RNA polymerase I and III: Similar yet unique[END_REF]. RNAPI, RNAPII and RNAPIII contain 14, 12 and 17 subunits, respectively (Figure 12; Table 1-1). These polymerases share a 10-subunit catalytic core that consists of the two largest subunits (A190-A135 in RNAPI, Rpb1-Rpb2 in RNAPII, and C160-C128 in RNAPIII) related to bacterial RNAP b and b' subunits; five subunits shared by the three RNAPs (Rpb5-Rpb6-Rpb8-Rpb10-Rpb12); two subunits common to RNAPI and RNAPIII (AC40-AC19) and equivalent to RNAPII subunits Rpb3-Rpb11; and one eukaryote-specific subunit (A12. [START_REF]Sen1 est nécessaire pour une terminaison efficace de la transcription de ARNpol III in vivo Le modèle le plus largement accepté pour la terminaison de la transcription de l'ARNpol III postule que les polymérases reconnaissent un élément agissant en cis composé d'une suite de thymidines sur l'ADN non-matrice et qu'il est libéré sans qu'il soit nécessaire d'avoir recours à des facteurs supplémentaires[END_REF]Rpb9 and C11 in RNAPI,RNAPII and RNAPIII,respectively). They also share a heterodimeric stalk composed of A43-A14 in RNAPI, Rpb4-Rpb7 in RNAPII and C25-C17 in RNAPIII. This subdomain is conserved in archaea but not in bacteria, it mediates the interaction with exiting RNA, and has multiple roles in transcription initiation, elongation and termination. 
RNAPI and RNAPIII share an additional peripheral heterodimeric subcomplex (A49-A34.5 in RNAPI and C53-C37 in RNAPIII) that is related to the RNAPII transcription factor TFIIF (see section 2.1) and is involved in transcription initiation and termination. Moreover, RNAPIII contains a specific peripheral trimeric subcomplex (C82-C34-C31) partially resembling TFIIE (see section 2.1) and contributing to transcription initiation (reviewed in [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF][START_REF] Wild | Biogenesis of multisubunit RNA polymerases[END_REF]).
Archaeal RNA polymerase
Archaea are prokaryotic microorganisms that were once considered to be closely related to bacteria, as their cellular organization resembles that of bacteria. However, they are now believed to be more closely related to eukaryotic cells at the molecular level than to bacteria. Structural evidence shows that the archaeal transcription machinery, including RNAP and some general transcription factors (such as TBP and TFB), is similar to that of eukaryotes, while archaeal transcription regulatory factors, such as activators and repressors, are closely related to bacterial factors. Thus, transcription in archaea appears to combine a eukaryotic-type transcription apparatus with bacterial-like regulatory mechanisms [START_REF] Jun | Archaeal RNA polymerase and transcription regulation[END_REF]. Like bacteria, archaea possess only one kind of RNAP to transcribe all genes, and its subunit composition and architecture are very similar to those of eukaryotic RNAPII, including a catalytic core and a heterodimeric stalk (Figure 1-2; Table 1-1). Archaeal RNAP consists of 10-12 subunits depending on the species, most of which are conserved in eukaryotes except for Rpo13, a subunit with unknown function that is only present in some groups of archaea (reviewed in [START_REF] Jun | Archaeal RNA polymerase and transcription regulation[END_REF][START_REF] Fouqueau | The cutting edge of archaeal transcription[END_REF]).
RNA polymerase IV and V
Plants have evolved two additional DNA-dependent RNA polymerases, RNAPIV and RNAPV. These two polymerases are thought to have evolved from eukaryotic RNAPII because of striking similarities in subunit composition and architecture. RNAPIV and RNAPV are composed of 12 subunits, like RNAPII, with half of them identical to those of RNAPII and the remaining subunits being encoded by paralogues of genes encoding RNAPII subunits (Table 1-1). RNAPIV and RNAPV are not essential for viability in plants, but they cooperate to play an important role in RNA-mediated gene silencing and heterochromatin formation. These processes are involved in development, transposon control, genome defence against viruses and allelic crosstalk (reviewed in [START_REF] Haag | Multisubunit RNA polymerases IV and V: Purveyors of non-coding RNA for plant gene silencing[END_REF]).
Table 1-1: Subunit composition of RNA polymerases in the three domains of life.
Transcription cycle
As mentioned above, transcription is the first step in gene expression and leads to the production of RNA molecules from DNA templates. The transcription cycle includes the following events: (1) Initiation, which involves the recruitment of transcription initiation factors and RNAP to gene promoter regions and the melting of promoter sequences to permit RNAP to launch RNA synthesis.
(2) Elongation, during which RNAP escapes from the promoter and processively adds nucleotides to the growing RNA chain. (3) Termination, that is, the cessation of RNA synthesis and the disassembly of the transcription elongation complex (EC) composed of the RNA polymerase, the nascent RNA transcript and the DNA template. Every step in the transcription cycle is highly controlled by a multitude of proteins called transcription factors (TFs). Some transcription factors bind to the promoter sequence to help form the transcription initiation complex and are thus known as basal transcription factors or general transcription factors (GTFs). Other transcription factors bind to DNA regulatory sequences, for instance the enhancers in metazoans or the upstream activation sequences (UAS) in yeast, to regulate the transcription of the related gene. These factors are thus termed regulators, and include transcription activators (TAs) and repressors (TRs). Activators and repressors, collectively known as specific transcription factors, act, in part, by recruiting the transcription machinery or repressive complexes to gene regulatory regions. Their activities are further modulated by transcription co-regulators (co-activators or co-repressors), which, in general, function by transmitting the signals from the specific transcription factors to the transcription preinitiation complex (PIC) [START_REF] Hahn | Transcriptional Regulation in Saccharomyces cerevisiae: Transcription Factor Regulation and Function, Mechanisms of Initiation, and Roles of Activators and Coactivators[END_REF][START_REF] Soutourina | Transcription regulation by the Mediator complex[END_REF]. In contrast to the high degree of homology between RNAPs from different organisms, the evolutionary conservation of TFs is very limited. Furthermore, even though the overall process of transcription is similar in the three domains of life, significant divergences and variabilities can be found between different lineages and different kinds of polymerases [START_REF] Werner | Evolution of multisubunit RNA polymerases in the three domains of life[END_REF]. In the following chapters, I will introduce the mechanisms of transcription by eukaryotic RNAPII and RNAPIII, respectively, including the processes of transcription initiation, elongation and termination. For RNAPI and bacterial RNAP, I will only briefly describe the current models governing transcription termination, for comparison. The activities of archaeal RNAP and plant-specific RNAPs will not be covered by this manuscript.
Yeast as a model organism
Our laboratory works on the budding yeast Saccharomyces cerevisiae, the simplest and best-characterized eukaryotic cellular model. S. cerevisiae has been cultured and extensively studied in the laboratory for many decades and has been exploited for understanding all kinds of basic biological processes. The main advantages of this model are that it is a unicellular organism with a short generation time (~100 min); it is easy to grow and to manipulate genetically; it was the first eukaryotic organism to have its entire genome sequenced [START_REF] Goffeau | Life with 6000 Genes[END_REF]; and many of the essential cellular processes, including the mechanisms of transcription, are highly conserved from yeast to humans. In this manuscript, I will focus on the mechanisms governing transcription in S. cerevisiae, but comparisons with other organisms will also be provided when necessary.
Chapter 2 Transcription by RNA polymerase II
Eukaryotic RNA polymerase II (RNAPII) is a 12-subunit protein complex dedicated to the transcription of all protein-coding genes and many non-coding regions in eukaryotic genomes. The structure of the 10-subunit core RNAPII from S. cerevisiae, lacking the Rpb4-Rpb7 subcomplex, was first solved by crystallography [START_REF] Cramer | Architecture of RNA polymerase II and implications for the transcription mechanism[END_REF]. Twenty years later, atomic structures of the RNAPII holoenzyme from different species and in different transcriptional states have become available (reviewed in [START_REF] Osman | Structural Biology of RNA Polymerase II Transcription: 20 Years On[END_REF]), which have contributed greatly to our understanding of the mechanisms of eukaryotic transcription. Transcription by RNAPII is so far the best characterized. RNAPII has the simplest subunit composition among the three eukaryotic RNAPs (Figure 1-2). Nevertheless, RNAPII transcription is clearly the most highly organized and tightly controlled process, which requires regulation at multiple steps by a large number of transcription factors [START_REF] Woychik | The RNA Polymerase II Machinery: Structure Illuminates Function[END_REF].
Transcription initiation
Transcription initiation of RNAPII-dependent genes (class II genes) is mediated by the basal transcription machinery, which consists of RNAPII and several GTFs including TFIIA, TFIIB, TFIID, TFIIE, TFIIF and TFIIH. GTFs cooperate with RNAPII to recognize and open the promoter DNA, to nucleate the synthesis of RNA and to stimulate the escape of RNAPII from the promoter region [START_REF] Sainsbury | Structural basis of transcription initiation by RNA polymerase II[END_REF][START_REF] Schier | Structure and mechanism of the RNA polymerase II transcription machinery[END_REF].
2.1.1 The RNAPII core promoter
The RNAPII core promoter is generally defined as the minimal stretch of DNA that is sufficient to direct the accurate initiation of RNAPII transcription. A core promoter comprises several specific DNA sequence motifs, which include the TATA box, the initiator (Inr), the TFIIB recognition element (BRE), the motif ten element (MTE), and the downstream core promoter element (DPE) (Figure 2-1). RNAPII core promoters are structurally and functionally diverse, as there are no universal elements found in all promoters (reviewed in [START_REF] Smale | The RNA polymerase II core promoter[END_REF][START_REF] Juven-Gershon | Regulation of gene expression via the core promoter and the basal transcriptional machinery[END_REF][START_REF] Kadonaga | Perspectives on the RNA polymerase II core promoter[END_REF]). The TATA box or TATA-related sequence was the first core promoter motif discovered [START_REF] Goldberg | Sequence analysis of Drosophila histone genes[END_REF][START_REF] Breathnach | Organization and Expression of Eucaryotic Split Genes Coding for Proteins[END_REF] and is so far the best-known core promoter element. It is a consensus sequence of ~8 nt localized ~30 bp and 40-120 bp upstream of the transcription start site (TSS) in metazoans and yeast, respectively, and is recognized and bound by the TATA-binding protein (TBP).
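As a toy illustration of how such positional information can be used, the sketch below scans the region upstream of a TSS for a TATA-like match within the distance windows quoted above. The search windows reflect the positions given in the text, whereas the simplified consensus pattern (TATAWAW, with W = A or T) and the example sequence are assumptions made purely for this sketch and should not be taken as the actual yeast or metazoan consensus.

```python
import re

# Illustrative only: scan the region upstream of a TSS for a TATA-like motif.
# The windows mirror the positions quoted in the text (~30 bp upstream in
# metazoans, 40-120 bp upstream in yeast); the pattern TATAWAW is a
# simplified assumption made for this sketch.
TATA_LIKE = re.compile(r"TATA[AT]A[AT]")

def find_tata(promoter: str, tss_index: int, window: tuple) -> list:
    """Return positions (relative to the TSS, negative = upstream) of TATA-like
    hits inside the given upstream window (min_dist, max_dist)."""
    min_dist, max_dist = window
    start = max(0, tss_index - max_dist)
    end = max(0, tss_index - min_dist)
    region = promoter[start:end]
    return [start + m.start() - tss_index for m in TATA_LIKE.finditer(region)]

if __name__ == "__main__":
    # Hypothetical 160-bp promoter fragment with the TSS at index 150.
    promoter = "GC" * 35 + "TATAAATA" + "GC" * 36 + "A" * 10
    print("yeast-like window (40-120 bp):", find_tata(promoter, 150, (40, 120)))
    print("metazoan-like window (~25-35 bp):", find_tata(promoter, 150, (25, 35)))
```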
The TATA box and TBP are both conserved from archaea to humans [START_REF] Reeve | Archaeal chromatin and transcription[END_REF]; however, only 10-20% of yeast and human core promoters are TATA-containing [START_REF] Basehoar | Identification and distinct regulation of yeast TATA boxcontaining genes[END_REF][START_REF] Yang | Prevalence of the initiator over the TATA box in human and yeast genes and identification of DNA motifs enriched in human TATA-less core promoters[END_REF]. There are two BRE motifs, located upstream (BREu) or downstream (BREd) of a TATA box, both of which function in conjunction with a TATA box in transcriptional regulation (Juven-Gershon and Kadonaga, 2010). The sequence encompassing the TSS is called the initiator (Inr), which is possibly the most commonly occurring core promoter motif. The Inr element is recognized by TFIID; it is functionally similar to the TATA box and can function independently of it (Smale and Kadonaga, 2003). The DPE is a recognition site for TFIID downstream of the Inr. The MTE element is localized immediately upstream of the DPE and stimulates the binding of TFIID. The MTE functions in cooperation with the Inr but can act independently of the DPE and the TATA box. BRE, MTE and DPE elements are only found in metazoans (Juven-Gershon and Kadonaga, 2010).
PIC assembly
According to in vitro experiments, transcription initiation commences with the assembly of GTFs and RNAPII on the core promoter to form a large complex known as the transcription preinitiation complex (PIC). To nucleate PIC assembly, TBP, a subunit of TFIID, binds first to the TATA box. TFIID is a multisubunit complex composed of TBP and, in yeast, 14 TBP-associated factors (TAFs) that are also involved in promoter recognition [START_REF] Huisinga | A genome-wide housekeeping role for TFIID and a highly regulated stressrelated role for SAGA in Saccharomyces cerevisiae[END_REF]. TBP binding to the TATA box induces a ~90-degree bend in the DNA [START_REF] Geiger | Crystal structure of the yeast TFIIA/TBP/DNA complex[END_REF][START_REF] Tan | Crystal structure of a yeast TFIIA/TBP/DNA complex[END_REF], which is subsequently stabilized by the joining of TFIIB and TFIIA. TFIIA is a two-subunit auxiliary factor that is not strictly required for basal transcription but can stabilize the TBP-DNA complex [START_REF] Imbalzano | Facilitated binding of TATA-binding protein to nucleosomal DNA[END_REF]. TFIIB is composed of a single polypeptide chain; it is required for the recruitment of RNAPII to the promoter and facilitates TBP binding to DNA and DNA bending [START_REF] Malik | Sequence of general transcription factor TFIIB and relationships to other initiation factors[END_REF][START_REF] Zhao | A regulated two-step mechanism of TBP binding to DNA: A solvent-exposed surface of TBP inhibits TATA box recognition[END_REF]. The complex containing these GTFs bound to the upstream promoter recruits the RNAPII-TFIIF complex, leading to the formation of a stable complex called the core PIC (cPIC) [START_REF] Buratowski | Five intermediate complexes in transcription initiation by RNA polymerase II[END_REF]. Mammalian TFIIF consists of the TFIIFα/β heterodimer, corresponding to Tfg1/2 in yeast. However, yeast TFIIF additionally contains a third subunit, Tfg3, which is not essential for transcription [START_REF] Henry | Purification and characterization of yeast RNA polymerase II general initiation factor g[END_REF].
TFIIF plays an important role in PIC stabilization, TSS selection and early RNA synthesis [START_REF] Sainsbury | Structural basis of transcription initiation by RNA polymerase II[END_REF]. Finally, the heterodimeric factor TFIIE is recruited to the cPIC to assemble an intermediate PIC (mPIC), followed by the 10-subunit TFIIH complex to form a complete holo PIC (hPIC) [START_REF] Zawel | Initiation of transcription by RNA polymerase II: A multi-step process[END_REF]. TFIIH is an essential and multifunctional factor, which provides the ATPase/helicase activity that is necessary for promoter opening, and the kinase activity that promotes the phosphorylation of RNAPII to initiate transcription [START_REF] Svejstrup | The multiple roles of transcription/repair factor TFIIH[END_REF]. TFIIE binds to RNAPII, facilitates the recruitment of TFIIH and stimulates the ATPase and kinase activities of TFIIH (Figure 2-2; Table 2-1; [START_REF] Sainsbury | Structural basis of transcription initiation by RNA polymerase II[END_REF]). The aforementioned stepwise PIC assembly is dependent on the TBP-TATA system. However, a vast majority (up to 85%) of core promoters lack a consensus TATA box motif or TATA-like sequence and are commonly called TATA-less promoters [START_REF] Basehoar | Identification and distinct regulation of yeast TATA boxcontaining genes[END_REF][START_REF] Yang | Prevalence of the initiator over the TATA box in human and yeast genes and identification of DNA motifs enriched in human TATA-less core promoters[END_REF]. Nevertheless, TFIID is required for the transcription of almost all class II genes [START_REF] Warfield | Transcription of Nearly All Yeast RNA Polymerase II-Transcribed Genes Is Dependent on Transcription Factor TFIID[END_REF], and it was not clear until recently how TFIID can recognize highly diversified RNAPII promoters. A recent structural study of the human TFIID-containing PIC revealed that TATA box and TATA-less promoters employ a shared TFIID-binding pattern and mode of TBP loading, and that TBP similarly bends TATA box and TATA-less promoters, providing structural insights into how TFIID can support PIC assembly on TATA-less promoters [START_REF] Chen | Structural insights into preinitiation complex assembly on core promoters[END_REF]. In general, a complete PIC consists of RNAPII and six basal transcription factors (TFIIA, TFIIB, TFIID, TFIIE, TFIIF and TFIIH) as well as the closed, double-stranded DNA (Table 2-1). It is also worth mentioning that PIC assembly is normally preceded and activated by the binding of activators to DNA regions located at various distances from the core promoter, and often requires co-activators, such as the SAGA complex, which is necessary for all regulated transcription and is conserved among eukaryotes, and the Mediator complex, which plays an important role in the assembly and stabilization of the PIC by interacting with GTFs and is now widely considered a part of the PIC [START_REF] Nguyen | Spatiotemporal coordination of transcription preinitiation complex assembly in live cells[END_REF] (Box 2-1).
Box 2-1: The SAGA and the Mediator complex
SAGA (Spt-Ada-Gcn5 acetyltransferase) is a multi-subunit transcriptional co-activator complex conserved between yeast and humans that controls transcription by modifying histones.
Yeast SAGA contains 19 subunits, with a total molecular mass of 1.8 MDa, which are organized into four modules with distinct functions, namely: 1) the histone acetyltransferase (HAT) module, composed of Gcn5, Ada2, Ada3 and Sgf29; 2) the histone deubiquitinase (DUB) module, composed of Ubp8, Sgf11, Sgf73 and Sus1; 3) the Tra1 module, containing only the large Tra1 protein, which serves as a docking platform for transcription factor binding; 4) the 10-subunit core module, composed of the TBP-associated factors (TAFs) Taf5/6/9/10/12 as well as Spt3/7/8/20 and Ada1. Although not considered a PIC factor, SAGA can be recruited to promoters by gene-specific transcription factors, can bind TBP, and contains activities to acetylate and to deubiquitylate histones. Reviewed in [START_REF] Schier | Structure and mechanism of the RNA polymerase II transcription machinery[END_REF][START_REF] Osman | Structural Biology of RNA Polymerase II Transcription: 20 Years On[END_REF][START_REF] Wang | Structure of the transcription coactivator SAGA[END_REF][START_REF] Papai | Structure of SAGA and mechanism of TBP deposition on gene promoters[END_REF]. The mediator of RNAPII transcription (Mediator) complex is a transcription co-activator conserved from yeast to metazoans. Budding yeast Mediator comprises 25 subunits that are organized into four distinct modules: the head, middle and tail modules, and the CDK8 kinase module, which is transiently associated with the complex. The main function of Mediator is to transduce signals from the activators to the preinitiation complex to assist in PIC assembly on core promoters. The Mediator complex can also stimulate the phosphorylation of the carboxy-terminal domain (CTD) of the largest RNAPII subunit, which in turn triggers RNAPII release from the promoter. Reviewed in [START_REF] Poss | The Mediator complex and transcription regulation[END_REF][START_REF] Soutourina | Transcription regulation by the Mediator complex[END_REF].
Transcription elongation
Promoter clearance
The transition from transcription initiation to productive elongation must go through a stage known as promoter clearance, during which the contact with initiation factors is lost and stable association with the nascent transcript and elongation factors is established [START_REF] Luse | Promoter clearance by RNA polymerase II[END_REF]. Once the PIC assembles, the closed promoter DNA needs to be opened so that RNA synthesis can commence at the TSS in the presence of NTPs [START_REF] Sainsbury | Structural basis of transcription initiation by RNA polymerase II[END_REF]. Nevertheless, in the very early stage of RNA synthesis, the RNA:DNA hybrid within the transcription bubble is too short to be stable and the nascent transcript can be released from the elongation complex (EC), resulting in abortive initiation. Abortive initiation is considerably reduced when the length of the RNA:DNA hybrid reaches 8-9 nt [START_REF] Sims | Elongation by RNA polymerase II: The short and long of it[END_REF]. As elongation continues, the 5' end of the nascent RNA is released from the template DNA and enters the RNA exit channel of the polymerase. After synthesis of ~30 nt of RNA, RNAPII is thought to lose contact with the core promoter and the rest of the transcription machinery, and promoter clearance is complete [START_REF] Luse | Promoter clearance by RNA polymerase II[END_REF].
At this stage, a subset of GTFs remain at the core promoter, serving as a scaffold for the assembly of the next transcription initiation complex, which is believed to be much faster relative to the initial round. Of all the GTFs, only TFIIF and TFIIB need to be re-assembled for a new cycle of transcription [START_REF] Hahn | Structure and mechanism of the RNA Polymerase II transcription machinery[END_REF].
Elongation in the body of genes
RNAPII transcription elongation is not a smooth, continuous process. During transcription elongation, RNA polymerase may encounter obstacles that can slow down or stall the transcribing polymerase. These obstacles can be caused by the positioning of nucleosomes, DNA-binding factors, DNA damage/mismatches, depletion of NTPs, and DNA sequences that are intrinsically difficult to transcribe. Many factors are required for normal transcription elongation in the body of genes, such as TFIIF, Spt4 and Spt5 (DSIF in humans), and the PAF complex [START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF]. During elongation RNAPII might stall and backtrack, i.e. move in the reverse direction along the DNA template. Polymerase backtracking results in the displacement of the newly synthesized RNA 3' end from the catalytic site, which renders the enzyme transcriptionally inactive [START_REF] Wilson | Ubiquitylation and degradation of elongating RNA polymerase II: The last resort[END_REF]. Arrested RNAPII can be reactivated by the recruitment of TFIIS (Dst1 in yeast), which acts by stimulating the intrinsic RNA endonuclease activity of RNAPII to cleave the displaced portion of the transcript, so that the RNA 3' end is located again in the polymerase active center [START_REF] Fish | Promoting elongation with transcript cleavage stimulatory factors[END_REF][START_REF] Cheung | Structural basis of RNA polymerase II backtracking, arrest and reactivation[END_REF]. However, transcribing RNAPII can become permanently stalled or arrested under a wide variety of conditions. If the arrested RNAPII cannot be restarted, it becomes poly-ubiquitylated by ubiquitin ligases, and is then disassembled and degraded by the proteasome [START_REF] Wilson | Ubiquitylation and degradation of elongating RNA polymerase II: The last resort[END_REF]. Because transcription occurs on a chromatin template, factors that affect chromatin dynamics are important for elongation. These will be briefly discussed in section 2.4.
The RNAPII carboxy-terminal domain (CTD)
The largest RNAPII subunit, Rpb1, carries a carboxy-terminal domain (CTD) composed of tandem repeats of the consensus heptapeptide Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7 (26 repeats in yeast) (Figure 2-3). A full-length CTD is not required for the function of RNAPII or viability, as genetic studies showed that the minimal viable CTD in yeast contains eight repeats [START_REF] West | Construction and Analysis of Yeast RNA Polymerase II Ctd Deletion and Substitution Mutations[END_REF]. The CTD is subject to extensive post-translational modifications (PTMs), most notably phosphorylation, which define a "CTD code" based on the combination of modified residues. CTD post-translational modifications are crucial for the regulation of the transcription process, from initiation to termination, as well as for cotranscriptional processes, such as pre-mRNA capping, splicing and chromatin modification (reviewed in [START_REF] Harlen | The code and beyond: Transcription regulation by the RNA polymerase II carboxy-terminal domain[END_REF]).
The phosphorylation cycle of the RNAPII CTD
Of the seven residues in a CTD repeat, Tyr1, Ser2, Thr4, Ser5 and Ser7 are dynamically phosphorylated and dephosphorylated by several CTD kinases and phosphatases throughout the transcription cycle (Figure 2-4; [START_REF] Harlen | The code and beyond: Transcription regulation by the RNA polymerase II carboxy-terminal domain[END_REF]). Among them, the Ser5 phosphorylation (Ser5P) and Ser2P patterns are the best studied and characterized. Initially, RNAPII assembled in the PIC has an unmodified CTD, which has high affinity for the Mediator complex [START_REF] Lu | The nonphosphorylated form of RNA polymerase II preferentially associates with the preinitiation complex[END_REF][START_REF] Myers | The Med proteins of yeast and their function through the RNA polymerase II carboxy-terminal domain[END_REF]. Cyclin-dependent kinase 7 (CDK7; known as Kin28 in S. cerevisiae), the kinase subunit of TFIIH, phosphorylates Ser5 and Ser7 early in the transcription cycle, and these phosphorylations are believed to favour promoter escape by decreasing the affinity of RNAPII for the Mediator complex [START_REF] Wong | TFIIH phosphorylation of the Pol II CTD stimulates mediator dissociation from the preinitiation complex and promoter escape[END_REF][START_REF] Jeronimo | Kin28 regulates the transient association of Mediator with core promoters[END_REF]. Ser5P also promotes the recruitment of capping and splicing factors [START_REF] Perales | Cotranscriptionality": The transcription elongation complex as a nexus for nuclear transactions[END_REF], of the COMPASS complex (Complex of Proteins Associated with Set1, see section 2.4.1) as well as of the NNS complex (Nrd1-Nab3-Sen1, see section 2.5.2). During early elongation, Ser5P levels drop due to dephosphorylation by the phosphatase Rtr1 [START_REF] Mosley | Rtr1 is a CTD phosphatase that regulates RNA polymerase II during the transition from serine 5 to serine 2 phosphorylation[END_REF]. The remaining Ser5P is subsequently dephosphorylated by the phosphatase Ssu72, a subunit of the CPF-CF (Cleavage and Polyadenylation Factor and Cleavage Factor I, see section 2.5.1) complex in yeast (Bataille et al., 2012). Ssu72 has also been shown to dephosphorylate Ser7 [START_REF] Bataille | A universal RNA polymerase II CTD cycle is orchestrated by complex interplays between kinase, phosphatase, and isomerase enzymes along genes[END_REF]. In contrast to Ser5P, Ser2P appears later in gene bodies, with its levels increasing toward the 3' end of genes due to the action of the Ser2 kinases Bur1 and Ctk1, its peak coinciding with the polyadenylation site (PAS) [START_REF] Qiu | Phosphorylation of the Pol II CTD by KIN28 enhances BUR1/BUR2 recruitment and Ser2 CTD phosphorylation near promoters[END_REF][START_REF] Bataille | A universal RNA polymerase II CTD cycle is orchestrated by complex interplays between kinase, phosphatase, and isomerase enzymes along genes[END_REF]. Ser2P is removed by the phosphatase Fcp1, which favours the recycling of RNAPII at the promoter after termination (Bataille et al., 2012; [START_REF] Egloff | Updating the RNA polymerase CTD code: Adding gene-specific layers[END_REF]). Ser2P is important for the recruitment of RNA processing and termination factors, while phosphorylation at Ser5 has been implicated in early elongation (promoter clearance) and early termination [START_REF] Harlen | The code and beyond: Transcription regulation by the RNA polymerase II carboxy-terminal domain[END_REF].
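The pattern just described can be summarized with the minimal sketch below, which models the phospho-marks on a single CTD repeat as the kinases and phosphatases named above act in turn. The heptad consensus (Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7) and the enzyme-to-residue assignments follow the text; the stage comments and the simple set-based toggling are only an illustrative abstraction, not a quantitative model of CTD dynamics.

```python
# Illustrative abstraction of the CTD phosphorylation cycle described above.
# One consensus heptad repeat: Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7.
HEPTAD = ("Tyr1", "Ser2", "Pro3", "Thr4", "Ser5", "Pro6", "Ser7")

# Enzyme -> (action, residues) assignments, as stated in the text.
KINASES = {
    "Kin28/CDK7 (TFIIH)": ("add", {"Ser5", "Ser7"}),   # early, during initiation
    "Bur1, Ctk1":         ("add", {"Ser2"}),           # later, toward gene 3' ends
}
PHOSPHATASES = {
    "Rtr1":  ("remove", {"Ser5"}),
    "Ssu72": ("remove", {"Ser5", "Ser7"}),
    "Fcp1":  ("remove", {"Ser2"}),
}

def act(marks: set, enzyme_table: dict, enzyme: str) -> set:
    """Return the phospho-marks on one repeat after the given enzyme acts."""
    action, residues = enzyme_table[enzyme]
    return marks | residues if action == "add" else marks - residues

if __name__ == "__main__":
    marks = set()                                    # unmodified CTD in the PIC
    marks = act(marks, KINASES, "Kin28/CDK7 (TFIIH)")
    print("early elongation:", sorted(marks))        # Ser5P + Ser7P
    marks = act(marks, KINASES, "Bur1, Ctk1")
    marks = act(marks, PHOSPHATASES, "Rtr1")
    print("toward the 3' end:", sorted(marks))       # Ser2P (+ Ser7P until Ssu72)
    marks = act(marks, PHOSPHATASES, "Ssu72")
    marks = act(marks, PHOSPHATASES, "Fcp1")
    print("after termination:", sorted(marks))       # back to unmodified
```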
Chromatin dynamics during transcription elongation
Transcription by RNAPII occurs on a DNA template that is organized in chromatin fibers, the elementary unit of which is the nucleosome. A nucleosome consists of ~147 bp of DNA wrapped 1.65 turns around a set of eight histone proteins known as the histone octamer. A histone octamer is composed of two copies each of the histone proteins H2A, H2B, H3 and H4. There are multiple interactions between the histones and the DNA, making the nucleosome one of the most stable protein-DNA complexes, which thus serves as a strong physical barrier to RNAPII movement [START_REF] Luger | Crystal structure of the nucleosome core particle at 2.8 Å resolution[END_REF]. However, the nucleosome is not a static but rather a dynamic unit, which is under the control of various protein complexes that favour the passage of RNAPII during transcription. The best characterized of these factors include the histone modifiers, the histone chaperones and the ATP-dependent chromatin remodelers [START_REF] Sims | Elongation by RNA polymerase II: The short and long of it[END_REF][START_REF] Li | The Role of Chromatin during Transcription[END_REF][START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF].
Histone modifiers
During transcription, histones are subject to a vast array of post-translational modifications including methylation of arginine residues; methylation, acetylation, ubiquitination, ADP-ribosylation, and sumoylation of lysines; and phosphorylation of serines and threonines [START_REF] Li | The Role of Chromatin during Transcription[END_REF][START_REF] Kouzarides | Chromatin Modifications and Their Function[END_REF]. These modifications are found primarily in the unstructured, amino-terminal segments of histones that protrude from the nucleosome (known as histone tails) [START_REF] Luger | Crystal structure of the nucleosome core particle at 2.8 Å resolution[END_REF]. Histone modifications play an important role in transcription elongation, acting by affecting internucleosomal contacts or changing electrostatic charges to alter the packaging of chromatin [START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF]. Histone modifications can also serve as a binding surface for elongation-associated effector complexes [START_REF] Dechassa | SWI/SNF has intrinsic nucleosome disassembly activity that is dependent on adjacent nucleosomes[END_REF][START_REF] Hassan | Histone acetyltransferase complexes stabilize swi/snf binding to promoter nucleosomes[END_REF][START_REF] Spain | The RSC Complex localizes to coding sequences to regulate Pol II and histone occupancy[END_REF]. Histone acetylation is mostly associated with activation of transcription. It occurs at multiple lysine residues and is carried out by histone acetyltransferases (HATs). Acetylation decreases the positive charge on the histone tails, thereby affecting the interactions of the histone octamer with the negatively charged phosphate groups of the DNA. As a consequence, the chromatin is more relaxed and the DNA more accessible to transcription factors. This effect can be reversed by deacetylation catalysed by histone deacetylases (HDACs), which normally correlates with transcriptional repression [START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF][START_REF] Zentner | Regulation of nucleosome dynamics by histone modifications[END_REF].
As an example, the SAGA complex (Box 2-1), which contains the HAT Gcn5, can stimulate acetylation of histones and thus cause the eviction of nucleosomes in transcribed coding sequences and promote RNAPII elongation [START_REF] Govind | Gcn5 Promotes Acetylation, Eviction, and Methylation of Nucleosomes in Transcribed Coding Regions[END_REF]. Also, NuA4, the major H4 lysine acetyltransferase (KAT) complex in S. cerevisiae, is recruited through the interaction with the phosphorylated RNAPII CTD to acetylate H4 and promote histone eviction [START_REF] Ginsburg | NuA4 Lysine Acetyltransferase Esa1 Is Targeted to Coding Regions and Stimulates Transcription Elongation with Gcn5[END_REF]. Unlike acetylation, histone methylation does not change the net charge of nucleosomes, but rather acts as a tag for effector proteins containing methyl-binding domains. Lysine (K) residues of H3 and H4 can be modified by one, two, or three methyl groups (me), and these different methylation sites can have distinct functions. Methylation of histone H3 lysine 4 (H3K4me), H3K36me and H3K79me are implicated in activation of transcription and are commonly referred to as euchromatin modifications, while H3K9me, H4K20me and H3K27me are localized to inactive genes or regions and are often termed heterochromatin modifications [START_REF] Kouzarides | Chromatin Modifications and Their Function[END_REF]. In yeast, H3K4me and H3K36me are carried out by Set1, which is part of the COMPASS complex (see below), and Set2, respectively. The recruitment of Set1 and Set2 depends on the phosphorylation state of the RNAPII CTD [START_REF] Sims | Elongation by RNA polymerase II: The short and long of it[END_REF][START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF]. COMPASS (Complex of Proteins Associated with Set1) was the first H3K4 methylase identified in S. cerevisiae and is capable of catalysing the mono-, di-, and trimethylation of histone H3K4 [START_REF] Miller | COMPASS: A complex of proteins associated with a trithorax-related SET domain protein[END_REF]. This complex consists of eight subunits, including Set1, Cps60, Cps50, Cps40, Cps35, Cps30, Cps25, and Cps15. Among them, Cps35 (also known as Swd2) is the only essential subunit of the complex in yeast, and it is shared with other complexes, such as the cleavage and polyadenylation factor complex. Set1 is the catalytic subunit that possesses histone (H) or lysine (K) methyltransferase (HMTase or KMTase) activity. However, Set1 alone is not active as a KMTase, as Set1 within COMPASS is the active form of the enzyme [START_REF] Shilatifard | The COMPASS family of histone H3K4 methylases: Mechanisms of regulation in development and disease pathogenesis[END_REF].
Histone chaperones
Histone chaperones are histone-binding proteins involved in intracellular histone dynamics, as well as histone storage and replication-associated chromatin assembly [START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF]. The FACT (facilitates chromatin transcription) complex and Spt6 are two histone chaperones that affect the chromatin structure during transcription elongation. The FACT complex is a heterodimer composed of Spt16 and Pob3 in yeast (SPT16 and SSRP1 in humans).
This complex is highly conserved among eukaryotes, and functions to destabilize the nucleosome by selectively displacing the H2A/H2B histone dimer and to reassemble nucleosomes after RNAPII passage [START_REF] Orphanides | The chromatin-specific transcription elongation factor FACT comprises human SPT16 and SSRP1 proteins[END_REF][START_REF] Hsieh | Histone chaperone FACT action during transcription through chromatin by RNA polymerase II[END_REF]. Spt6 (suppressor of Ty6) is also well conserved throughout eukaryotes and is involved in the maintenance of chromatin structure during elongation [START_REF] Selth | Transcript Elongation by RNA Polymerase II[END_REF].
ATP-dependent chromatin remodelers
ATP-dependent chromatin remodeling complexes (remodelers) utilize the energy of ATP hydrolysis to modify the structure of chromatin. The outcomes of their activities include transient unwrapping of DNA from the histone octamer, ejection of histones (histone ejection), or movement of nucleosomes to different positions (histone sliding), all of which change the accessibility of nucleosomal DNA to transcription factors [START_REF] Li | The Role of Chromatin during Transcription[END_REF]. Chromatin remodelers can be classified into four subfamilies: SWI-SNF (switch/sucrose non-fermentable), ISWI (imitation switch), CHD (chromodomain helicase DNA-binding), and INO80, on the basis of the similarities in their catalytic ATPases. Each subfamily employs unrelated enzymatic mechanisms to alter the nucleosome organization of chromatin. However, they all use ATP-dependent DNA translocation to break histone-DNA contacts and to propel DNA along the histone surface (for review, see [START_REF] Clapier | Mechanisms of action and regulation of ATPdependent chromatin-remodelling complexes[END_REF]).
Transcription termination
The elongation complex is removed from the DNA template at the end of genes to allow the release of the transcript and the recycling of the polymerase and to prevent interference with neighbouring transcription units. In budding yeast, there are two main pathways for termination of RNAPII transcription (Porrua et al., 2016). The first one depends on the Cleavage and Polyadenylation Factor-Cleavage Factor (CPF-CF) complex and is responsible for the termination of protein-coding genes; the second one depends on the Nrd1-Nab3-Sen1 (NNS) complex and is dedicated to termination of genes coding for snRNAs, snoRNAs, and cryptic unstable transcripts (CUTs), which constitute the major product of pervasive transcription (for pervasive transcription, see Box 2-3).
2.5.1 The CPF-CF pathway
RNAPII termination at most mRNA genes is functionally connected with the 3' end processing of the nascent transcript, which requires a set of different proteins that bind to the polymerase as well as to specific sequences in the nascent RNA. This event in yeast is dependent on a large, multisubunit complex composed of three subcomplexes: CPF (cleavage and polyadenylation factor), CFIA (cleavage factor IA) and CFIB (cleavage factor IB), hereafter CPF-CF. CFIA consists of Rna15, Rna14, Clp1 and Pcf11, whereas CFIB includes only one component, the RNA-binding protein Hrp1.
The enzymatic activities for cleavage and polyadenylation reside in the 14-subunit CPF complex, which is further organized into three prominent modules: the nuclease module, comprising three subunits (Ysh1, Cft2 and Mpe1); the phosphatase module, consisting of six subunits (Pta1, Ref2, Pti1, Swd2, Glc7, Ssu72); and the poly(A) polymerase module, containing five subunits (Cft1, Pfs2, Pap1, Fip1 and Yth1) (Casanal et al., 2017). The six subunits of the phosphatase module, together with an additional factor, Syc1, form a distinct complex called APT (associated with Pta1), with partially distinct functions in termination of sn/snoRNA genes [START_REF] Nedea | Organization and Function of APT, a Subcomplex of the Yeast Cleavage and Polyadenylation Factor Involved in the Formation of mRNA and Small Nucleolar RNA 3ʹ-Ends *[END_REF][START_REF] Lidschreiber | The APT complex is involved in non-coding RNA transcription and is distinct from CPF[END_REF]. The subunit composition of CPF-CF and the roles of the individual subunits are presented in Table 2-2. Termination by the CPF-CF pathway involves three successive steps: 1) recruitment of CPF-CF through interaction with RNAPII and the recognition of cis-regulatory elements on the nascent RNA; 2) cleavage of the nascent RNA at the poly(A) site, followed by the polyadenylation of the upstream cleavage product and degradation of the downstream cleavage fragment; 3) dismantling of the EC (Figure 2-6A). For most S. cerevisiae protein-coding genes, a poly(A) signal (PAS) serves as a 3' end processing site and a transcription termination signal (TTS). The PAS contains five different elements: the AU-rich efficiency element (EE), responsible for polyadenylation efficiency; the A-rich positioning element (PE), critical for precise 3' end processing; the poly(A) cleavage site; and the upstream and downstream U-rich enhancer elements (UUE and DUE) ([START_REF] Mischo | Disengaging polymerase: Terminating RNA polymerase II transcription in budding yeast[END_REF]; Figure 2-5). Once RNAPII transcribes the PAS, CTD phosphorylation at Ser2 promotes the recruitment of the CPF-CF complex to these elements in the nascent pre-mRNA. Among the various CPF-CF proteins that interact with the phosphorylated CTD, a key component of CFIA called Pcf11 contains a CTD-interacting domain (CID) and can specifically interact with the Ser2P CTD, thus enhancing its recruitment at the 3' end of genes [START_REF] Komarnitsky | Different phosphorylated forms of RNA polymerase II and associated mRNA processing factors during transcription[END_REF][START_REF] Lunde | Cooperative interaction of transcription termination factors with the RNA polymerase II C-terminal domain[END_REF][START_REF] Mayer | CTD Tyrosine Phosphorylation Impairs Termination Factor Recruitment to RNA Polymerase II[END_REF]. In addition, the EE element is loosely bound by the Hrp1 subunit of the CFIB subcomplex, and the PE element is bound by the Rna15 component of the CFIA subcomplex, which together define the RNA cleavage site [START_REF] Valentini | Arginine methylation and binding of Hrp1p to the efficiency element for mRNA 3ʹ-end formation[END_REF][START_REF] Gross | Five subunits are required for reconstitution of the cleavage and polyadenylation activities of Saccharomyces cerevisiae cleavage factor I[END_REF].
Then, the endonuclease Ysh1, one of the catalytic subunits of CPF, cleaves the RNA at the polyadenylation site [START_REF] Jenny | Sequence Similarity Between the 73-Kilodalton Protein of Mammalian CPSF and a Subunit of Yeast Polyadenylation Factor I[END_REF]. The cotranscriptional cleavage of the nascent transcript results in the formation of an uncapped 5' end in the downstream cleavage fragment attached to the EC, which is subsequently degraded by the nuclear 5'-3' exonuclease Rat1 in complex with Rai1 (Rat1 interacting protein) and Rtt103, which also contains a CID (Kim et al., 2004). Rtt103 recognizes the Ser2P and the Thr4P forms of the CTD [START_REF] Harlen | Comprehensive RNA Polymerase II Interactomes Reveal Distinct and Varied Roles for Each Phospho-CTD Residue[END_REF]. On the other hand, the resulting upstream cleavage product, containing a 3'-OH, is recognized by Pap1, the CPF poly(A) polymerase subunit that catalyses the addition of a poly(A) tail [START_REF] Lingner | Cloning and expression of the essential gene for poly(A) polymerase from S. cerevisiae[END_REF]. The activity of Pap1 is regulated by Nab2, a protein that interacts with the CPF-CF complex. This protein binds to poly(A) tails and limits the addition of adenine residues to the 3' end of the RNA [START_REF] Hector | Dual requirement for yeast hnRNP Nab2p in mRNA poly(A) tail length control and nuclear export[END_REF]. It is generally accepted that the cleavage of the nascent RNA by Ysh1 precedes the release of the EC from the DNA template, which occurs ~200 bp downstream of the poly(A) site (Baejen et al., 2017; [START_REF] Schaughency | Genome-wide mapping of yeast RNA polymerase II termination[END_REF]). With regard to the mechanisms provoking RNAPII dissociation, two non-mutually exclusive models have been proposed: the allosteric model and the torpedo model (Figure 2-6A; reviewed in [START_REF] Porrua | Transcription termination and the control of the transcriptome: Why, where and how to stop[END_REF][START_REF] Richard | Transcription termination by nuclear RNA polymerases[END_REF]). The allosteric model posits that transcription through the poly(A) site causes a conformational change in the EC due to the loss of elongation factors and/or the association of termination factors, which is followed by dissociation of RNAPII from the DNA template [START_REF] Logan | A poly(A) addition site and a downstream termination region are required for efficient cessation of transcription by RNA polymerase II in the mouse beta maj-globin gene[END_REF]. The torpedo model proposes that progressive degradation by Rat1 of the downstream RNA product after cleavage leads to destabilization of the EC and promotes RNAPII release upon Rat1 "catching up" with the polymerase (Kim et al., 2004; West et al., 2004; Park et al., 2015; Pearson and Moore, 2013; Baejen et al., 2017). However, several lines of evidence suggest that these two mechanisms may act in concert to efficiently terminate transcription at protein-coding genes [START_REF] Luo | The role of Rat1 in coupling mRNA 3'-end processing to transcription termination: Implications for a unified allosteric-torpedo model[END_REF][START_REF] Kaneko | The multifunctional protein p54nrb/PSF recruits the exonuclease XRN2 to facilitate pre-mRNA 3' processing and transcription termination[END_REF][START_REF] West | Molecular dissection of mammalian RNA polymerase II transcriptional termination[END_REF].
2.5.2 The NNS-dependent pathway
Transcription termination at most non-coding genes in S.
cerevisiae is dependent on the NNS complex, which consists of two RNA-binding proteins, Nrd1 and Nab3, and a DNA/RNA helicase called Sen1 (reviewed in Arndt and Reines, 2015). All three NNS subunits are essential for growth in S. cerevisiae [START_REF] Steinmetz | Repression of gene expression by an exogenous sequence element acting in concert with a heterogeneous nuclear ribonucleoprotein-like protein, Nrd1, and the putative helicase Sen1[END_REF][START_REF] Wilson | Characterization of nuclear polyadenylated RNA-binding proteins in Saccharomyces cerevisiae[END_REF][START_REF] Winey | Mutations affecting the tRNA-splicing endonuclease activity of Saccharomyces cerevisiae[END_REF].
The NNS components
Nrd1 (Nuclear pre-mRNA Down-regulation) is a 63 kDa protein containing an N-terminal RNAPII CTD interaction domain (CID), a central RNA recognition motif (RRM), a Nab3 interaction domain (NabID) and a C-terminal proline- and glutamine-rich (Q/P) region (Figure 2-7). Nrd1 interacts genetically and physically with RNAPII [START_REF] Conrad | A yeast heterogeneous nuclear ribonucleoprotein complex associated with RNA polymerase II[END_REF] and preferentially binds the Ser5P CTD, which is a mark of early elongation [START_REF] Vasiljeva | The Nrd1-Nab3-Sen1 termination complex interacts with the Ser5-phosphorylated RNA polymerase II C-terminal domain[END_REF][START_REF] Kubicek | Serine phosphorylation and proline isomerization in RNAP II CTD control recruitment of Nrd1[END_REF]. The Nrd1 CID also interacts with Trf4, a component of the TRAMP complex (see Box 2-2) involved in RNA degradation and processing [START_REF] Tudek | Molecular Basis for Coordinating Transcription Termination with Noncoding RNA Degradation[END_REF]. Nrd1 is recruited to the nascent transcript by recognizing the GUAA/G motif ([START_REF] Carroll | Identification of cis Elements Directing Termination of Yeast Nonpolyadenylated snoRNA Transcripts[END_REF]; Wlotzka et al., 2011; Porrua et al., 2012). Nab3 (Nuclear polyAdenylated RNA-Binding) is a 90 kDa protein that contains an N-terminal glutamate/aspartate (D/E)-rich domain, a central RRM, an Nrd1 interaction domain, as well as a Q/P-rich domain at its C-terminus (Figure 2-7). Nab3 interacts directly with Nrd1, forming a stable heterodimer, and specifically recognizes UCUUG motifs present on the nascent RNA ([START_REF] Carroll | Identification of cis Elements Directing Termination of Yeast Nonpolyadenylated snoRNA Transcripts[END_REF]; Wlotzka et al., 2011; Porrua et al., 2012). Moreover, AU-rich sequences frequently present close to Nab3 binding sites are found to be important for efficient termination (Porrua et al., 2012). Sen1 (Splicing ENdonuclease) belongs to the superfamily 1B (SF1B) Upf1-like family of helicases. It is a large (252 kDa), low-abundance nuclear protein (63-498 molecules/cell according to [START_REF] Ghaemmaghami | Global analysis of protein expression in yeast[END_REF][START_REF] Newman | Single-cell proteomic analysis of S. cerevisiae reveals the architecture of biological noise[END_REF][START_REF] Kulak | Minimal, encapsulated proteomic-sample processing applied to copy-number estimation in eukaryotic cells[END_REF][START_REF] Chong | Yeast Proteome Dynamics from Single Cell Imaging and Automated Analysis[END_REF]) that consists of a central helicase domain, an N-terminal domain (NTD) and a C-terminal unstructured region (Figure 2-8A).
Sen1 interacts with the RNA without any known sequence specificity ([START_REF] Creamer | Transcriptome-Wide Binding Sites for Components of the Saccharomyces cerevisiae Non-Poly(A) Termination Pathway: Nrd1, Nab3, and Sen1[END_REF]; Porrua and Libri, 2013), and both the N- and C-terminal domains are involved in protein-protein interactions. Specifically, the N-terminal domain mediates the interaction with the CTD of RNAPII ([START_REF] Chinchilla | Interactions of Sen1, Nrd1, and Nab3 with Multiple Phosphorylated Forms of the Rpb1 C-Terminal Domain in Saccharomyces cerevisiae[END_REF]; Han et al., 2020), while the C-terminal domain contains sequences that are important for its nuclear localisation and for the interaction with the phosphatase Glc7 and Nrd1 ([START_REF] Nedea | The Glc7 Phosphatase Subunit of the Cleavage and Polyadenylation Factor Is Essential for Transcription Termination on snoRNA Genes[END_REF][START_REF] Ursic | Multiple protein/protein and protein/RNA interactions suggest roles for yeast DNA/RNA helicase Sen1p in transcription, transcription-coupled DNA repair and RNA processing[END_REF][START_REF] Chen | Saccharomyces cerevisiae Sen1 as a Model for the Study of Mutations in Human Senataxin That Elicit Cerebellar Ataxia[END_REF]; Han et al., 2020; see Figure 2-8). Sen1 is the only subunit of the NNS complex that possesses catalytic activity. It is an ATP-dependent helicase, which can unwind both DNA and RNA substrates (Han et al., 2017; [START_REF] Martin-Tumasz | Saccharomyces cerevisiae Sen1 Helicase Domain Exhibits 5'-to 3'-Helicase Activity with a Preference for Translocation on DNA Rather than RNA[END_REF]). Deletion of its N-terminal domain or mutation of the helicase domain provokes transcription termination defects in vivo ([START_REF] Chen | Saccharomyces cerevisiae Sen1 as a Model for the Study of Mutations in Human Senataxin That Elicit Cerebellar Ataxia[END_REF][START_REF] Finkel | Sen1p Performs Two Genetically Separable Functions in Transcription and Processing of U5 Small Nuclear RNA in Saccharomyces cerevisiae[END_REF][START_REF] Steinmetz | la terminaison ainsi que son mécanisme d'action n'ont cependant pas été abordés dans cette étude. Ainsi, une grande incertitude demeure quant à la contribution relative des éléments de séquence, des structures d'ARN et des facteurs de transcription à l'efficacité de la terminaison de la transcription par l'ARNpol III[END_REF]; Han et al., 2020). Besides its role in transcription termination, Sen1 was also shown to function in the resolution of R-loops, which are structures that typically form during transcription when the nascent RNA invades and anneals with the DNA template (Mischo et al., 2011). Sen1 was also proposed to play an important role in the resolution of transcription-replication conflicts (Alzu et al., 2012), and in the repair of DNA damage (Li et al., 2016).
Mechanisms of NNS-dependent termination
Unlike the CPF-CF termination pathway, NNS-mediated termination is not linked to the endonucleolytic cleavage of the nascent RNA, but rather relies on the translocase activity of the Sen1 helicase and is coupled to the nuclear RNA degradation pathway. The current model proposes that the NNS complex is recruited through the interaction of the Nrd1 CID with the RNAPII CTD, as well as through the binding of Nrd1 and Nab3 to the nascent RNA, which facilitates the recruitment of Sen1, although this is not a strict requirement for Sen1 binding to its targets (Han et al., 2020).
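As a simple illustration of the sequence-specific recruitment just mentioned, the sketch below scans an RNA sequence for the Nrd1 (GUAA/G) and Nab3 (UCUUG) motifs quoted earlier in this section. The example transcript and the bare motif count are assumptions made only for illustration; they are not meant to reflect how NNS binding or termination efficiency is actually measured.

```python
import re

# Motifs reported in the text: Nrd1 recognizes GUAA/G, Nab3 recognizes UCUUG.
MOTIFS = {
    "Nrd1": re.compile(r"GUA[AG]"),
    "Nab3": re.compile(r"UCUUG"),
}

def scan_nascent_rna(rna: str) -> dict:
    """Return, for each factor, the positions of its motif in the RNA (5'->3')."""
    rna = rna.upper().replace("T", "U")  # accept DNA-style input for convenience
    return {name: [m.start() for m in motif.finditer(rna)]
            for name, motif in MOTIFS.items()}

if __name__ == "__main__":
    # Hypothetical short nascent transcript, for illustration only.
    nascent = "AUGGUAAGCUCUUGAAUUCUUGGGUAGCC"
    for factor, positions in scan_nascent_rna(nascent).items():
        print(f"{factor}: {len(positions)} site(s) at {positions}")
```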
Following recruitment, Sen1 can translocate along the nascent RNA and dismantle the EC upon "catching up" with RNAPII (Figure 2-6B). Once the EC is dissociated, Nrd1 and Nab3 remain bound to the nascent RNA and promote the recruitment of the TRAMP complex and the exosome (Box 2-2) to trim or degrade the nascent transcript [START_REF] Tudek | Molecular Basis for Coordinating Transcription Termination with Noncoding RNA Degradation[END_REF]. RNA degradation allows the release of Nrd1 and Nab3 for further cycles of termination [START_REF] Villa | Degradation of Non-coding RNAs Promotes Recycling of Termination Factors at Sites of Transcription[END_REF]. Our team has a long-standing interest in the mechanism of transcription termination by the NNS pathway. During the past years, the team has characterized in detail the function of the NNS complex. By using a highly purified in vitro system, it was shown that Sen1 alone is sufficient to terminate RNAPII transcription, which requires its interaction with the nascent RNA and the hydrolysis of ATP to dismantle the paused EC. Importantly, the CTD of RNAPII was shown not to be necessary for Sen1-dependent termination in vitro. The ability to terminate transcription is specific to Sen1, as Upf1, a related helicase involved in the Nonsense-Mediated mRNA Decay (NMD) pathway for RNA quality control, cannot terminate RNAPII transcription in vitro (Porrua and Libri, 2013a). A subsequent study from our team established that the helicase domain of Sen1 is sufficient for transcription termination by RNAPII in vitro, indicating that this domain retains the essential features and activities involved in termination (Han et al., 2017). This study showed that Sen1 is a low-processivity helicase that can translocate along both single-stranded (ss) RNA and ssDNA in the 5' to 3' direction. Sen1 translocation along the RNA allows dismantling of the EC in a reaction that depends on the activity of its helicase domain. Furthermore, Sen1 can also promote forward translocation of stalled RNAPII, possibly by acting on the nascent RNA to exert a mechanical force on the EC (Han et al., 2017). The observation that Sen1 can alternatively promote either forward translocation or dissociation of paused RNAPII suggests that termination might require a particular state of the polymerase, possibly a persistent pause. The mechanism employed by Sen1 closely resembles that of the bacterial termination factor Rho, which is the key actor of one of the major pathways for transcription termination in bacteria.
Box 2-2: The exosome and the TRAMP complex
The exosome is a highly conserved RNA-processing protein complex that plays a key role in RNA surveillance. It is localized in the nucleus and the cytoplasm, degrades aberrant non-coding and coding RNAs, and catalyses the 3' end maturation of rRNAs, snRNAs and snoRNAs ([START_REF] Butler | The yin and yang of the exosome[END_REF]; Houseley and Tollervey, 2006). In S. cerevisiae, the exosome is composed of a nine-subunit core (Rrp4, Rrp40, Rrp41, Rrp42, Rrp43, Rrp45, Rrp46, Mtr3, and Csl4) and two catalytic subunits, the 3'-5' exonucleases Dis3 and Rrp6. However, Rrp6 is only present in the nuclear form of the exosome, thus distinguishing the nuclear exosome from the cytoplasmic exosome [START_REF] Synowsky | Comparative multiplexed mass spectrometric analyses of endogenously expressed yeast nuclear and cytoplasmic exosomes[END_REF][START_REF] Zhang | Exosomes: Biogenesis, biologic function and clinical potential[END_REF].
The TRAMP complex is one of the best-characterized nuclear exosome cofactors. In S. cerevisiae, TRAMP is a heterotrimeric complex composed of a poly(A) polymerase (Trf4 or Trf5), a zinc-knuckle RNA-binding protein (Air1 or Air2), and the RNA helicase Mtr4 ([START_REF] Lacava | RNA Degradation by the Exosome Is Promoted by a Nuclear Polyadenylation Complex[END_REF]; Wyers et al., 2005; [START_REF] Houseley | Yeast Trf5p is a nuclear poly(A) polymerase[END_REF]; Vanacova et al., 2005). TRAMP interacts with the exosome in the nucleus, and polyadenylates RNAs destined for Rrp6 and the core exosome, assisting in transcript recognition and exosome activation [START_REF] Schmidt | Nuclear RNA Surveillance: Role of TRAMP in Controlling Exosome Specificity[END_REF].
A single-molecule study performed in collaboration with Terence Strick's laboratory (IJM, Paris) provided further insight into the dynamics of Sen1-mediated termination (Wang et al., 2019). In addition, the crystal structure of the Sen1 helicase domain (Sen1 Hel), which comprises the two RecA-like subdomains (RecA1 and RecA2) typical of this family of helicases, has been determined (Leonaite et al., 2017). Like Upf1 and other similar helicases, Sen1 Hel also contains two SF1B-specific accessory domains that extend on the surface of RecA1: subdomain 1B (the "barrel") and subdomain 1C (the "prong"). Subdomain 1B is flanked by two antiparallel helices that pack against each other, forming the so-called "stalk" (Figure 2-8B). Common to Upf1-like helicases is that they bind nucleic acids in the same orientation, with the 3' end at RecA1 and the 5' end at RecA2, and unwind duplexes processively in the 5'-3' direction. However, Sen1 Hel shows several distinct features in the accessory subdomains when compared with the helicase domain of other Upf1-like helicases: the "barrel" has a more elaborate topology; the ordered portion of the "prong" is shorter; and, most importantly, Sen1 has a distinct and evolutionarily conserved domain localized at the N-terminal end of Sen1 Hel called the "brace" (aa 1097-1149). The "brace" connects the "stalk" and the "barrel" and appears to stabilize the overall fold of the protein. The structural and biochemical data suggested that the "brace" helps pull the "barrel" toward the "prong", thus shaping a favourable conformation for RNA binding and unwinding. More importantly, it was found that the "prong" is an essential element for 5' to 3' unwinding and for Sen1-mediated transcription termination both in vivo and in vitro (Leonaite et al., 2017). Based on these observations, it was speculated that once Sen1 encounters RNAPII, the "prong" inserts into the RNA exit channel, which would lead to conformational changes and destabilization of the EC (discussed in [START_REF] Han | Helicases as transcription termination factors: Different solutions for a common problem[END_REF]). In conclusion, the particular conformation determined by the "brace" and the distinctive characteristics of the "prong" are likely important for the specific function of Sen1 in termination of non-coding transcription. The structure of the full-length Sen1 protein is still lacking, as Sen1 has a large N-terminal domain (aa 1-975) and a C-terminal intrinsically disordered region (aa 1930-2231). Although the N-terminal domain of Sen1 is not required for termination in vitro (Han et al., 2017), it is essential for cell growth as well as for RNAPII transcription termination in vivo (Han et al., 2020). However, the detailed function of these domains of Sen1 is still not completely understood. The precise role of Nrd1 and Nab3 in RNAPII termination also remains unclear.
As mentioned before, Nrd1 and Nab3 can recognize specific motifs on the nascent RNA, which is required for termination and is thought to provide the necessary specificity to the NNS complex (Wlotzka et al., 2011; Porrua et al., 2012; Schulz et al., 2013). Nrd1-Nab3 also recruits the Trf4 subunit of TRAMP, thus coupling termination by the NNS complex to RNA degradation [START_REF] Tudek | Molecular Basis for Coordinating Transcription Termination with Noncoding RNA Degradation[END_REF]. It was proposed that Nrd1 and Nab3 also function as adaptors to position Sen1 for timely and specific termination. However, our team identified the regions responsible for the interactions between Sen1 and Nrd1-Nab3 and found that removal of these regions does not significantly affect the efficiency of termination and only partially reduces the association of Sen1 with non-coding RNAs, indicating that the main role of Nrd1 and Nab3 in termination is not to promote Sen1 recruitment (Han et al., 2020).
Box 2-3: Pervasive transcription
The notion of "pervasive" or "hidden" transcription refers to the generation of a large ensemble of different RNAs distinct from those encoding proteins and from those with established functions such as tRNAs, rRNAs, snRNAs and snoRNAs [START_REF] Neil | Widespread bidirectional promoters are the major source of cryptic transcripts in yeast[END_REF][START_REF] Jensen | Dealing with Pervasive Transcription[END_REF]. Because these transcripts are rapidly degraded, they can generally be revealed only in cells defective for specific RNA degradation pathways. For instance, the absence of Rrp6, a catalytic subunit of the nuclear exosome, exposes a layer of RNAs known as "cryptic unstable transcripts" (CUTs) [START_REF] Davis | Accumulation of unstable promoter-associated transcripts upon loss of the nuclear exosome subunit Rrp6p in Saccharomycescerevisiae[END_REF][START_REF] Houalla | Microarray detection of novel nuclear RNA substrates for the exosome[END_REF][START_REF] Wyers | Cryptic pol II transcripts are degraded by a nuclear quality control pathway involving a new poly(A) polymerase[END_REF]. Pervasive transcripts that are less sensitive to Rrp6 are instead named "stable unannotated transcripts" (SUTs) [START_REF] Xu | Bidirectional promoters generate pervasive transcription in yeast[END_REF]. The absence of the cytoplasmic 5'-3' exonuclease Xrn1 revealed another group of RNAs referred to as "Xrn1-sensitive unstable transcripts" (XUTs) [START_REF] Van Dijk | XUTs are a class of Xrn1sensitive antisense regulatory non-coding RNA in yeast[END_REF]. In addition, RNAs that are generated from Reb1-dependent termination events (see Figure 2-9B) are unstable and called "Reb1-dependent unstable transcripts" (RUTs) [START_REF] Colin | Roadblock Termination by Reb1p Restricts Cryptic and Readthrough Transcription[END_REF]. CUTs, SUTs, XUTs and RUTs are all transcribed by RNAPII. CUTs and SUTs almost exclusively originate from nucleosome-depleted regions (NDRs) at the 5' and 3' ends of genes, and often appear to result from divergent transcription from gene promoters (reviewed in [START_REF] Jensen | Dealing with Pervasive Transcription[END_REF]). Pervasive transcription is a common phenomenon conserved in prokaryotes and eukaryotes. It is potentially harmful for cell homeostasis, as it can interfere with transcription of canonical genes and generate toxic non-coding RNA molecules.
Therefore, pervasive transcription needs to be controlled, which often depends on transcription termination and RNA degradation [START_REF] Jensen | Dealing with Pervasive Transcription[END_REF]. Some SUTs and XUTs probably rely on CPF-CF for termination [START_REF] Marquardt | Distinct RNA degradation pathways and 3' extensions of yeast non-coding RNA species[END_REF][START_REF] Van Dijk | XUTs are a class of Xrn1sensitive antisense regulatory non-coding RNA in yeast[END_REF], while for others, such as CUTs and a fraction of SUTs, termination and RNA degradation depend on the NNS pathway ([START_REF] Arigo | Regulation of yeast NRD1 expression by premature transcription termination[END_REF][START_REF] Thiebaut | Transcription termination and nuclear degradation of cryptic unstable transcripts: A role for the nrd1-nab3 pathway in genome surveillance[END_REF]; Schulz et al., 2013).
Alternative pathways for RNAPII termination
Apart from the CPF-CF and NNS termination pathways, RNAPII can also be terminated through other mechanisms. In S. cerevisiae, two alternative pathways have been revealed: one is dependent on the Rnt1 endonuclease, while the other relies on various DNA-binding proteins that function as a roadblock for RNAPII (Figure 2-9).
Rnt1-dependent termination
Rnt1 is a nuclear dsRNA-specific ribonuclease, a homolog of bacterial RNase III, that plays a role in rDNA transcription, rRNA processing and snRNA 3' end maturation (Abou [START_REF] Elela | RNase III cleaves eukaryotic preribosomal RNA at a U3 snoRNP-dependent site[END_REF][START_REF] Abou Elela | Depletion of yeast RNase III blocks correct U2 3' end formation and results in polyadenylated but functional U2 snRNA[END_REF][START_REF] Catala | Deletion of Rnt1p alters the proportion of open versus closed rRNA gene repeats in yeast[END_REF]). Rnt1 is also involved in transcription termination in the absence of polyadenylation signals and mainly serves as a fail-safe termination pathway for protein-coding genes [START_REF] Ghazal | Yeast RNase III Triggers Polyadenylation-Independent Transcription Termination[END_REF][START_REF] Rondón | Fail-safe transcriptional termination for protein-coding genes in S. cerevisiae[END_REF]. The model posits that Rnt1 recognizes and cleaves a stem-loop structure in the nascent transcript, generating a free 5'-OH end on the downstream cleavage product that is subsequently targeted by Rat1 for degradation (Figure 2-9A).
The roadblock termination pathway
A roadblock mechanism of termination was first studied for the transcription factor Reb1 by our laboratory [START_REF] Colin | Roadblock Termination by Reb1p Restricts Cryptic and Readthrough Transcription[END_REF]. Reb1 is a sequence-specific DNA-binding protein localized in the nucleus that was originally described as an activator of RNAPII and RNAPI transcription [START_REF] Brandl | A nucleosome-positioning sequence is required for GCN4 to activate transcription in the absence of a TATA element[END_REF][START_REF] Kulkens | A system to study transcription by yeast RNA polymerase I within the chromosomal context: Functional analysis of the ribosomal DNA enhancer and the RBP1/REB1 binding sites[END_REF]. It also plays an important role in the positioning and protection of nucleosome-free regions (NFRs) [START_REF] Hartley | Mechanisms that specify promoter nucleosome location and identity[END_REF]. Our team has shown that in yeast Reb1 can promote termination of RNAPII transcription.
Reb1 bound to DNA is a roadblock for RNAPII as it induces pausing of the polymerase. The stalled polymerase is then ubiquitylated and degraded by the proteasome, thus provoking termination (Figure 2-9B). Reb1-dependent termination generates a class of unstable transcripts that are degraded in the nucleus by the TRAMP and exosome complexes, which were dubbed Reb1-dependent unstable transcripts, or RUTs [START_REF] Colin | Roadblock Termination by Reb1p Restricts Cryptic and Readthrough Transcription[END_REF]. It was further demonstrated that roadblock termination can extend to various DNA-binding proteins including general regulatory factors (GRFs) such as Rap1, centromere-binding proteins, and RNAPIII transcription factors like TFIIIB. Roadblock termination occurs genome-wide and functions as a fail-safe mechanism to neutralize transcriptional leakage from canonical termination pathways, which is a significant source of pervasive transcription (Candelli et al., 2018).
Box 2-4: Transcription termination by bacterial RNAP
Bacteria possess two main pathways for transcription termination: intrinsic (Rho-independent) termination and Rho-dependent termination (reviewed in Porrua et al., 2016; [START_REF] Roberts | Mechanisms of Bacterial Transcription Termination[END_REF]). Intrinsic termination relies solely on a DNA sequence that, when transcribed, forms a GC-rich hairpin followed by a U-rich region. Upon pausing of RNAP at the U-tract, the hairpin folds within the RNA exit channel of RNAP, which induces dissociation of the EC. Three alternative mechanistic models have been proposed. The hyper-translocation model ([START_REF] Yarnell | Mechanism of intrinsic transcription termination and antitermination[END_REF][START_REF] Santangelo | Forward translocation is the natural pathway of RNA release at an intrinsic terminator[END_REF]) posits that a steric clash of the hairpin with the RNA exit channel pushes RNAP forward without addition of nucleotides to the RNA 3' end, resulting in shortening of the RNA:DNA hybrid in the RNAP main channel and destabilization of the EC. The hybrid shearing model ([START_REF] Larson | Applied force reveals mechanistic and energetic details of transcription termination[END_REF][START_REF] Molodtsov | The Presence of an RNA:DNA Hybrid That Is Prone to Slippage Promotes Termination by T7 RNA Polymerase[END_REF]) proposes that formation of the hairpin rather generates a shearing force that pulls the transcript out of the complex, resulting in hybrid shortening. Finally, the allosteric model ([START_REF] Gusarov | The Mechanism of Intrinsic Transcription Termination[END_REF][START_REF] Toulokhonov | Allosteric Control of RNA Polymerase by a Site That Contacts Nascent RNA Hairpins[END_REF]) proposes that the hairpin contacts RNAP and induces conformational changes that destabilize the EC.
Box 2-4: Transcription termination by bacterial RNAP (continued)
The other main pathway for termination depends on the protein Rho, a highly conserved homohexameric RNA helicase [START_REF] Brennan | Transcription termination factor rho is an RNA-DNA helicase[END_REF]. Rho interacts with the nascent RNA at the so-called Rho utilization (rut) sites, which are 85-100 nt long regions, rich in cytosine and poor in guanine, with little propensity to form secondary structures [START_REF] Ciampi | Rho-dependent terminators and transcription termination[END_REF]. Once bound, Rho translocates along the nascent RNA in the 5' to 3' direction to catch up with the RNAP and promote its release from the DNA.
The same models as for intrinsic termination have been proposed to explain the mechanism by which Rho dissociates the EC.
Subsequently, Rat1 progressively degrades the RNAPI-bound transcript and promotes the release of RNAPI (Kawauchi et al., 2008; [START_REF] Hage | Efficient termination of transcription by RNA polymerase I requires the 5ʹ exonuclease Rat1 in yeast[END_REF]). It was proposed that Sen1 aids Rat1 function by removing possible RNA secondary structures that might impair Rat1 progression (Kawauchi et al., 2008).
Chapter 3
Transcription by RNA polymerase III
RNA polymerase III (RNAPIII) is the largest of the three classical eukaryotic RNA polymerases (Figure 1-2; Table 1-1). It is specialized in the synthesis of short, abundant and structured non-coding RNAs (ncRNAs), such as nuclear tRNAs, the 5S rRNA and the spliceosomal U6 snRNA, many of which are involved in protein biosynthesis. Studies of the genomic occupancy of RNAPIII and its associated transcription factors have expanded the transcriptome of RNAPIII to include a number of additional non-coding RNAs, which have been found to actively regulate various essential processes in the cell (Table 3-1). RNAPIII transcriptional activity is highly regulated in response to multiple environmental cues, thus making it a key target for the regulation of cell growth, proliferation and differentiation (Hoffmann et al., 2016a; [START_REF] Willis | Signaling to and from the RNA Polymerase III Transcription and Processing Machinery[END_REF]).
The RNAPIII structure
RNAPIII is a multi-protein complex composed of 17 subunits with an overall molecular mass of approximately 700 kDa (Figure 1-2; Table 1-1; and Figure 3-1A). The subunit composition and architecture of RNAPIII have been summarized in a previous review [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF]. Earlier insight into the RNAPIII topology was based on several low-resolution electron cryo-microscopy (cryo-EM) studies (Fernandez-Tornero et al., 2007; Fernandez-Tornero et al., 2010; [START_REF] Vannini | Molecular Basis of RNA Polymerase III Transcription Repression by Maf1[END_REF]). In 2015 the first atomic structure of yeast elongating RNAPIII (holo RNAPIII, with a DNA:RNA hybrid in the active centre and downstream double-stranded DNA in the cleft) was solved at 3.9 Å resolution, together with two different conformations of the unbound RNAPIII (apo RNAPIII) at 4.6 and 4.7 Å resolution (Hoffmann et al., 2015). Three years later, atomic structures of the yeast RNAPIII preinitiation complex (PIC) comprising promoter-bound RNAPIII and TFIIIB were published by two independent groups ([START_REF] Abascal-Palacios | Structural basis of RNA polymerase III transcription initiation[END_REF]; Vorländer et al., 2018). Finally, very recently, cryo-EM structures of human RNAPIII have also been solved by the same laboratories [START_REF] Ramsay | Structure of human RNA polymerase III[END_REF][START_REF] Girbig | Cryo-EM structures of human RNA polymerase III in its unbound and transcribing states[END_REF]. These structural studies allow a better understanding of RNAPIII subunit positioning and of its transcriptional mechanisms. All multi-subunit RNAPs resemble a "crab claw" and comprise several functional domains (Figure 3-1D). The two largest subunits of RNAPIII, C160 and C128, form the active site and the DNA-binding cleft. These two subunits, together with ABC27, ABC23, ABC14.5, ABC10a, ABC10b, AC40, AC19 and C11, constitute the core enzyme.
C11 is involved in transcription termination and RNA cleavage, with its C-terminal domain being structurally and functionally related to the C-terminal zinc-ribbon domain of the RNAPII elongation factor TFIIS (Arimbasseri and Maraia, 2015; Chédin et al., 1998). RNAPIII also contains three distinct subcomplexes on the periphery of the core enzyme (C25-C17, C53-C37, and C82-C34-C31). The C25-C17 subcomplex forms the RNAPIII stalk that protrudes from the polymerase core on the C160 side and is involved in transcription initiation [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF] and in the binding of the single-stranded exiting RNA [START_REF] Jasiak | Structural Biology of RNA Polymerase III: Subcomplex C17/25 X-Ray Structure and 11 Subunit Enzyme Model[END_REF]. The ten-subunit core and the heterodimeric stalk are structurally conserved among the three RNAPs in eukaryotes. The second subcomplex, the C53-C37 heterodimer, is situated on the C128 lobe and extends into the DNA-binding channel. This subcomplex is involved in RNAPIII transcription initiation and termination, and participates with C11 in the process of facilitated reinitiation (Arimbasseri and Maraia, 2015; Kassavetis et al., 2010; Landrieux et al., 2006; see below). C37 and C53 are distantly related to RNAPI A49 and A34.5, respectively, and to the RNAPII transcription factor subunits TFIIFa and TFIIFb, respectively [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF][START_REF] Carter | The increase in the number of subunits in eukaryotic RNA polymerase III relative to RNA polymerase II is due to the permanent recruitment of general transcription factors[END_REF][START_REF] Geiger | RNA polymerase I contains a TFIIF-related DNA-binding subcomplex[END_REF]. The last subcomplex, the C82-C34-C31 heterotrimer, is placed on the C160 clamp head in close proximity to the stalk and is involved in transcription initiation (Vannini and Cramer, 2012; Thuillier et al., 1995) and in the recruitment of RNAPIII by TFIIIB [START_REF] Brun | Dual role of the C34 subunit of RNA polymerase III in transcription initiation[END_REF][START_REF] Khoo | Mapping the Protein Interaction Network for TFIIB-Related Factor Brf1 in the RNA Polymerase III Preinitiation Complex[END_REF]. The C82-C34-C31 subcomplex is RNAPIII-specific, but C82 and C34 are distantly related to TFIIEa and TFIIEb, respectively [START_REF] Carter | The increase in the number of subunits in eukaryotic RNA polymerase III relative to RNA polymerase II is due to the permanent recruitment of general transcription factors[END_REF].
The RNAPIII transcriptome
In eukaryotes, the genes transcribed by RNAPIII are normally referred to as class III genes. RNAPIII transcripts were previously thought to be restricted to only a small set of infrastructural non-coding RNAs. However, extensive work on RNAPIII transcriptomes in yeast and metazoans has discovered many new RNAPIII-dependent genes. The known set of yeast RNAPIII-synthesized RNAs is provided in Table 3-1. RNAPIII in mammals produces additional RNAs such as the vault RNAs, Y RNAs and many short interspersed nuclear elements (SINEs). The biological functions of some of these RNAs have been well characterized, for instance those of tRNAs, the 5S rRNA, U6, the RNA component of RNase P, 7SL and snR52. However, the cellular functions of many others remain unclear.
tRNAs
Transfer RNAs are RNAPIII transcripts that serve as adaptor molecules in the biosynthesis of proteins. Mature tRNAs have a cloverleaf-shaped secondary structure including a D-loop (also known as the DHU loop because it contains the dihydrouridine base), a T-loop (also named the TψC loop due to the presence of thymidine, pseudouridine and cytidine bases), an anticodon loop (which specifies the amino acid), a variable loop and an acceptor stem. The cloverleaf structure further folds into a tertiary, "L-shaped" structure, which is maintained by hydrogen bonds (Figure 3-2A; [START_REF] Kirchner | Emerging roles of tRNA in adaptive translation, signalling dynamics and disease[END_REF]). In S. cerevisiae, there are 275 nuclear tRNA genes, including tX(XXX)D, which has unknown specificity but is very similar to a serine tRNA gene. The length of tRNA genes is 71-133 nt and 61 of them have an intron (Lesniewska and Boguta, 2017). tRNAs that bear the same anticodon belong to the same isoacceptor family, and isoacceptors that translate to the same amino acid are grouped into the same isotype. In theory, the genetic code allows 64 isoacceptors; however, most eukaryotic genomes encode only ~42 isoacceptors, and some isoacceptors are universally missing [START_REF] Chan | GtRNAdb 2.0: An expanded database of transfer RNA genes identified in complete and draft genomes[END_REF]. On the other hand, tRNA genes show substantial redundancy both in prokaryotes and in eukaryotes, so that most isoacceptors are produced by more than one gene in the genome. A summary of isoacceptor tRNAs in S. cerevisiae is provided in Figure 3-2B.
5S rRNA
5S ribosomal RNA is a component of the large (60S) ribosomal subunit that in S. cerevisiae is encoded by 6 repeated transcription units known as RDN5 genes (RDN5-1 to RDN5-6). RDN5-1 and RDN5-2 are placed within the RDN1 locus, while RDN5-3 through RDN5-6 are positioned at sites distal to RDN1, in a 3.6 kb repeated region [START_REF] Mcmahon | Tandemly arranged variant 5S ribosomal RNA genes In the yeast Saccharomyces cerevisiae[END_REF]. A scheme of the RDN1 locus is shown in Figure 2-11A. These RNAs, together with the 5.8S, 18S and 25S rRNAs (28S in mammals) that are processed from the 35S (45S in mammals) rRNA precursor produced by RNAPI, are fundamental parts of the protein synthesis machinery.
U6 snRNA
The SNR6-encoded U6 spliceosomal RNA (U6 snRNA) is the RNA component of the U6 small nuclear ribonucleoprotein (snRNP), an RNA-protein complex that combines with various other proteins to form a large ribonucleoprotein complex, namely the spliceosome, which removes introns from precursor mRNAs (pre-mRNAs) before their translation into protein. U6 snRNA is the most highly conserved spliceosomal snRNA across species [START_REF] Brow | Spliceosomal RNA U6 is remarkably conserved from yeast to mammals[END_REF] and it can directly mediate the catalysis of pre-mRNA splicing by the spliceosome [START_REF] Fica | RNA catalyses nuclear pre-mRNA splicing[END_REF].
RNase P RNA and human RNase MRP RNA
The yeast RPR1 gene (RPPH1 in humans, for ribonuclease P RNA component H1) encodes the RNA component of the nuclear ribonuclease P (RNase P), an enzyme mainly involved in the maturation of the 5' ends of tRNA precursors [START_REF] Baer | Characterization in vitro of the defect in a temperaturesensitive mutant of the protein subunit of RNase P from Escherichia coli[END_REF][START_REF] Lee | Characterization of RPR1, an essential gene encoding the RNA component of Saccharomyces cerevisiae nuclear RNase P[END_REF]. Another related gene is the human RMRP, which encodes the RNA subunit of the RNase MRP complex (MRP stands for mitochondrial RNA processing). RNase MRP is a ribonucleoprotein complex evolutionarily linked to RNase P [START_REF] Zhu | Sequence analysis of RNase MRP RNA reveals its origination from eukaryotic RNase P RNA[END_REF] that plays a role in the initiation of mitochondrial DNA replication and in the processing of rRNA precursors in the nucleus ([START_REF] Chang | Mouse RNAase MRP RNA is encoded by a nuclear gene and contains a decamer sequence complementary to a conserved region of mitochondrial RNA substrate[END_REF]; [START_REF] Schmitt | Nuclear RNase MRP is required for correct processing of pre-5.8S rRNA in Saccharomyces cerevisiae[END_REF]).
7SL RNA
The 7SL RNA, also called SRP RNA, encoded by SCR1, is the longest RNAPIII transcript in S. cerevisiae with a length of 522 nt. It forms the scaffold of the signal recognition particle (SRP), a ribonucleoprotein involved in targeting proteins to the endoplasmic reticulum membrane [START_REF] Hann | The signal recognition particle in S. cerevisiae[END_REF]. The SCR1 gene is in general well conserved among eukaryotes, but the type of promoter is different in yeast and metazoans [START_REF] Dieci | Intragenic promoter adaptation and facilitated RNA polymerase III recycling in the transcription of SCR1, the 7SL RNA gene of Saccharomyces cerevisiae[END_REF][START_REF] Dieci | The expanding RNA polymerase III transcriptome[END_REF].
snR52 snoRNA
The small nucleolar RNA snR52 is the only snoRNA transcribed by RNAPIII [START_REF] Harismendy | Genome-wide location of yeast RNA polymerase III transcription machinery[END_REF][START_REF] Roberts | The RNA polymerase III transcriptome revealed by genome-wide localization and activity-occupancy relationships[END_REF][START_REF] Marck | The RNA polymerase III-dependent family of genes in hemiascomycetes: Comparative RNomics, decoding strategies, transcription and evolutionary implications[END_REF]. snoRNAs are also often referred to as guide RNAs because they direct chemical modifications in other RNAs. In particular, snR52 belongs to the box C/D class of snoRNAs, which contain the conserved C box (UGAUGA) and D box (CUGA) and function in directing site-specific 2'-O-methylation of other RNAs [START_REF] Galardi | Purified Box C/D snoRNPs Are Able To Reproduce Site-Specific 2ʹ-O-Methylation of Target RNA In Vitro[END_REF].
Other RNAPIII-transcribed ncRNAs
Yeast RNAPIII synthesizes several additional short ncRNAs whose function is yet to be determined, such as the RNA170 and ZOD1 RNAs, as shown in Table 3-1. Human RNAPIII is also responsible for producing SINEs, 7SK RNA, vault RNAs, Y RNAs, etc. [START_REF] Dieci | The expanding RNA polymerase III transcriptome[END_REF]. Short interspersed nuclear elements (SINEs) are non-autonomous retrotransposons evolutionarily derived from other RNAPIII-transcribed genes.
Eukaryotic genomes can harbour more than a million SINE copies, the bulk of which (66%) have a length of 150 to 300 bp [START_REF] Kramerov | Origin and evolution of SINEs in eukaryotic genomes[END_REF]. 7SK RNA is an abundant non-coding RNA found in a small nuclear ribonucleoprotein complex (snRNP) that is involved in the regulation of RNAPII transcription by controlling the positive transcription elongation factor P-TEFb ([START_REF] Diribarne | 7SK RNA, a non-coding RNA regulating P-TEFb, a general transcription factor[END_REF]; Peterlin et al., 2012). Vault RNAs, or vtRNAs, are small RNAs (~100 nt) present in a very large cytoplasmic ribonucleoprotein particle (RNP) known as the vault [START_REF] Stadler | Evolution of Vault RNAs[END_REF]. They have been implicated in a broad range of cellular functions including multidrug resistance of human tumors and are thought to partake in intracellular and nucleocytoplasmic transport [START_REF] Berger | Vaults and the major vault protein: Novel roles in signal pathway regulation and immunity[END_REF][START_REF] Van Zon | Vault mobility depends in part on microtubules and vaults can be recruited to the nuclear envelope[END_REF]. Y RNAs are a family of small RNAs (~100 nt) that show high conservation in metazoans, and appear to be required for mediating the initiation of chromosomal DNA replication, regulating the autoimmune protein Ro60 and generating smaller RNA fragments following cellular stress [START_REF] Hall | Y RNAs: Recent developments[END_REF].
Table 3-1: The RNAPIII transcriptome.
Transcription initiation
RNAPIII transcription involves three general transcription factors (GTFs): TFIIIA, TFIIIB and TFIIIC. Metazoans additionally require SNAPc to transcribe a specific group of class III genes (Table 3-2). Transcription initiation follows several key conserved steps. First, GTFs are recruited sequentially to the promoter, followed by the association of RNAPIII around the transcription start site (TSS) to form the pre-initiation complex (PIC). Afterwards, the DNA double strand is melted and the transcription bubble is formed, followed by the initiation of RNA synthesis [START_REF] Hoffmann | Specialization versus conservation: How Pol I and Pol III use the conserved architecture of the pre-initiation complex for specialized transcription[END_REF].
Basal RNAPIII transcription factors
TFIIIA
Transcription Factor III A (TFIIIA) is a single protein that plays a major role in the synthesis of the 5S rRNA by binding to the internal control regions (ICR) of the 5S rRNA genes and then serving as a platform for TFIIIC recruitment. It contains nine conserved zinc fingers in its N-terminal domain, which carry out sequence-specific DNA and RNA binding. The C-terminal domain of TFIIIA is involved in the transactivation process, possibly by interacting with other general factors. In addition to recognizing the promoter sequence in DNA, TFIIIA can also bind the 5S rRNA to form the 7S ribonucleoprotein particle (RNP) and the 42S RNP complex [START_REF] Layat | Structure, function and regulation of Transcription Factor IIIA: From Xenopus to Arabidopsi[END_REF].
TFIIIB
Transcription Factor III B (TFIIIB) is composed of three subunits: the TATA-binding protein (TBP), the TFIIB-related factor 1 (Brf1) and B" (Bdp1, for B double prime). Early work showed that Brf1 interacts with TBP forming the B' subcomplex, which is able to bind the Bdp1 subunit (Kassavetis et al., 1992a).
In vertebrates, Brf1 is replaced by the TFIIB-related factor 2 (Brf2) at type 3 promoters of class III genes. TBP interacts tightly with Brf1 to form the so-called B' fraction, which binds a specific sequence upstream of the transcription start site. Bdp1 is only weakly associated with the other components of the TFIIIB complex in the absence of DNA, but it is required for the formation of a stable TFIIIB-DNA complex. As a transcription initiation factor, TFIIIB is involved in the recruitment of RNAPIII to the promoter and promotes the transition from a closed to an open RNAPIII preinitiation complex. Moreover, TFIIIB has been shown to act as a genomic roadblock to induce the termination of neighboring transcribing RNAPIIs and the dissociation of the replisome ([START_REF] Gouge | Molecular mechanisms of Bdp1 in TFIIIB assembly and RNA polymerase III transcription initiation[END_REF][START_REF] Roy | Common genomic elements promote transcriptional and DNA replication roadblocks[END_REF]; Candelli et al., 2018).
TFIIIC
Transcription Factor III C (TFIIIC) is a large protein complex composed of 6 subunits, which are organized in two DNA-binding subcomplexes called tA and tB. In yeast, tA is assembled from three subunits, namely t131 (Tfc4), t95 (Tfc1) and t55 (Tfc7), while tB is composed of t138 (Tfc3), t91 (Tfc6) and t60 (Tfc8); together they form a complex with a molecular mass of about 500 kDa. TFIIIC is required for the transcription of class III genes that are controlled by type 1 and 2 promoters (see below). It recognizes the highly conserved promoter elements, thus allowing the recruitment of TFIIIB upstream of the transcription start site, which subsequently leads to the recruitment of RNAPIII and formation of the preinitiation complex [START_REF] Male | Architecture of TFIIIC and its role in RNA polymerase III pre-initiation complex assembly[END_REF]. It has also been shown in yeast that TFIIIC binds some chromosomal locations called ETC loci (for extra-TFIIIC) without the rest of the RNAPIII transcription apparatus, which appears to mediate some additional functions, for example creating nucleosome-free genomic landmarks and acting as boundary elements that separate chromatin domains (reviewed in Donze, 2011).
SNAPc
The small nuclear RNA (snRNA)-activating protein complex (SNAPc), also called proximal sequence element (PSE)-binding transcription factor (PTF), is a sequence-specific DNA-binding complex composed of 5 subunits: SNAP190, SNAP50, SNAP45, SNAP43 and SNAP19 [START_REF] Mittal | SNAPc: A core promoter factor with a built-in DNA-binding damper that is deactivated by the Oct-1 POU domain[END_REF]. SNAPc is required for transcription of RNAPII-dependent snRNA genes and of RNAPIII-dependent genes with type 3 promoters (see below) in metazoans. SNAPc is involved in a number of well-defined functions, for instance, specific binding to the PSE, nucleation of the assembly of RNAPII and RNAPIII transcription initiation complexes, and cooperative binding with TBP and with its corresponding activators.
Promoters of class III genes
Based on the organization of transcriptional control elements and transcription factor dependence, the promoters of known RNAPIII-transcribed genes are divided into three categories, as shown in Figure 3-3. In type 2 promoters, which are typical of tRNA genes and contain the intragenic A box and B box elements, the distance between the A box and the B box is highly variable, because TFIIIC exhibits a naturally elastic structure and therefore does not require a specific spacing between its recognition sites [START_REF] Nagarajavel | Global 'bootprinting' reveals the elastic architecture of the yeast TFIIIB-TFIIIC transcription complex in vivo[END_REF]. Sometimes an upstream TATA box is also present in type 2 promoters, which can help recruit TFIIIB.
Recruitment of TFIIIC to the promoter
The first step in transcription of tRNA genes is the binding of TFIIIC to the internal promoter sequences, the A box and the B box, which does not require any additional transcription factor [START_REF] Lassar | Transcription of class III genes: Formation of preinitiation complexes[END_REF][START_REF] Ruet | Isolation of a class C transcription factor which forms a stable complex with tRNA genes[END_REF]. For 5S rRNA genes, which lack the B box, the recruitment of TFIIIC requires the preceding binding of TFIIIA to the ICR [START_REF] Orioli | RNA polymerase III transcription control elements: Themes and variations[END_REF]. Photochemical crosslinking and antibody-based interference experiments revealed that t95 is responsible for A box binding and t138 for B box binding, with both interactions being essential for RNAPIII transcription ([START_REF] Gabrielsen | Two polypeptide chains in yeast transcription factor tau interact with DNA[END_REF]; Bartholomew et al., 1990). It was previously shown that binding of tB to the B box dominates over binding of tA to the A box (Stillman and Geiduschek, 1984; [START_REF] Schultz | The two DNA-binding domains of yeast transcription factor tau as observed by scanning transmission electron microscopy[END_REF]). The low affinity of tA for the A box was supported by a recent structural study [START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF] in which the authors observed that t95 binding to the A box is auto-inhibited by its C-terminal tail (Figure 3-4). The negatively charged acidic tail transiently associates with the positively charged DNA-binding domain (DBD) of t95, thereby competing with DNA and reducing the affinity of t95 for the promoter. The auto-inhibition by the acidic tail of t95 is thought to increase the specificity of the interaction with the A box by outcompeting suboptimal DNA sequences [START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF]. tRNA genes share a similar organization, but they differ in length either because of the variable arm of the tRNA or because of the presence of an intron. Thus, the distance between the A and the B box can also be very different. However, the tA and tB modules of TFIIIC can bind the A and B boxes regardless of the distance between them, which was believed to be largely due to the remarkable structural elasticity of TFIIIC [START_REF] Schultz | The two DNA-binding domains of yeast transcription factor tau as observed by scanning transmission electron microscopy[END_REF][START_REF] Nagarajavel | Global 'bootprinting' reveals the elastic architecture of the yeast TFIIIB-TFIIIC transcription complex in vivo[END_REF]. How TFIIIC achieves this elasticity, however, was not clear.
Based on earlier low-resolution scanning transmission electron microscopy (STEM) analyses, TFIIIC was observed as a "dumb-bell"-shaped molecule with a flexible linker connecting two subdomains. Thus, this linker was proposed to underlie the structural elasticity of TFIIIC [START_REF] Marzouki | Selective proteolysis defines two DNA binding domains in yeast transcription factor τ[END_REF][START_REF] Schultz | The two DNA-binding domains of yeast transcription factor tau as observed by scanning transmission electron microscopy[END_REF]. Chemical cross-linking coupled to mass spectrometry (XL-MS) and crystal structure analyses further mapped the region of this linker [START_REF] Male | Architecture of TFIIIC and its role in RNA polymerase III pre-initiation complex assembly[END_REF]. The authors found that t131 contains tetra-trico peptide repeat (TPR) arrays in both its N-terminal and C-terminal halves (Figure 3-4). Importantly, they found that the N-terminal TPR arrays of t131 establish contacts with an unstructured, central region of t138 (in the tB module) termed the t-Interacting Region (tIR). It was therefore suggested that the tIR establishes the main link between tA and tB, as its disordered nature may provide the flexibility needed to accommodate variable spacing between the A and B boxes [START_REF] Male | Architecture of TFIIIC and its role in RNA polymerase III pre-initiation complex assembly[END_REF].
Recruitment of TFIIIB by TFIIIC
At type 1 and type 2 promoters, TFIIIB is recruited by TFIIIC to a region upstream of the transcription unit [START_REF] Kassavetis | S. cerevisiae TFIIIB is the transcription initiation factor proper of RNA polymerase III, while TFIIIA and TFIIIC are assembly factors[END_REF]. Previous studies suggested that t131 binds to the TFIIIB subunits Brf1 and Bdp1 in a stepwise manner to help TFIIIB assembly, using overlapping sites on its N-terminal TPR arrays [START_REF] Dumay-Odelot | Multiple roles of the tau131 subunit of yeast transcription factor IIIC (TFIIIC) in TFIIIB assembly[END_REF][START_REF] Liao | The Brf1 and Bdp1 Subunits of Transcription Factor TFIIIB Bind to Overlapping Sites in the Tetratricopeptide Repeats of Tfc4*[END_REF][START_REF] Moir | A tetratricopeptide repeat mutation in yeast transcription factor IIIC131 (TFIIIC131) facilitates recruitment of TFIIB-related factor TFIIIB70[END_REF], and that conformational changes occur within t131 upon binding of Brf1 and Bdp1 ([START_REF] Moir | Interactions between the tetratricopeptide repeat-containing transcription factor TFIIIC131 and its ligand, TFIIIB70. Evidence for a conformational change in the complex[END_REF]; Kassavetis et al., 1992b). However, the biochemical and structural data obtained by [START_REF] Male | Architecture of TFIIIC and its role in RNA polymerase III pre-initiation complex assembly[END_REF] showed that Brf1 and Bdp1 bind to distinct sites on t131, but that the Bdp1-t131 interaction sites overlap with the t138-t131 interaction region. Thus, it was proposed that the interaction of t131 with Bdp1 could cause a conformational change in TFIIIC leading to the displacement of the tB module, which would be a regulatory mechanism essential for the initial round of RNAPIII transcription [START_REF] Male | Architecture of TFIIIC and its role in RNA polymerase III pre-initiation complex assembly[END_REF]. These results support the notion that the t131 TPR arrays play a major role in linking tA, tB and TFIIIB during PIC assembly.
Several lines of evidence also suggested that the binding of tA to the A box mediates TFIIIB assembly and is important for transcription activation and TSS selection [START_REF] Baker | Gene size differentially affects the binding of yeast transcription factor tau to two intragenic regions[END_REF][START_REF] Gerlach | TFIIIB placement on a yeast U6 RNA gene in vivo is directed primarily by TFIIIC rather than by sequence-specific DNA contacts[END_REF][START_REF] Joazeiro | Alternative outcomes in assembly of promoter complexes: The roles of TBP and a flexible linker in placing TFIIIB on tRNA genes[END_REF]. However, it is not clear how TSS selection is achieved. It has been shown that TFIIIB is placed preferentially ~30 bp upstream of the TSS, and its positioning is determined by TFIIIC and by the direct interaction of TBP with the upstream DNA sequence [START_REF] Kassavetis | Transcription factor IIIB generates extended DNA interactions in RNA polymerase III transcription complexes on tRNA genes[END_REF][START_REF] Joazeiro | Alternative outcomes in assembly of promoter complexes: The roles of TBP and a flexible linker in placing TFIIIB on tRNA genes[END_REF]. A recent study has redefined the model for TFIIIB sequential assembly [START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF]. Initially, Brf1 is bound to the N-terminal region of t131. TBP is then recruited via interactions with the C-terminus of Brf1 and with the tB subunit t60 [START_REF] Mylona | Structure of the τ60/Δτ91 Subcomplex of Yeast Transcription Factor IIIC: Insights into Preinitiation Complex Assembly[END_REF][START_REF] Deprez | A subunit of yeast TFIIIC participates in the recruitment of TATA-binding protein[END_REF]. TBP subsequently binds and bends the upstream DNA sequence, and is further stabilized through the incorporation of Bdp1. This process would allow TFIIIB to assemble on a suitable DNA sequence by using a proofreading mechanism, in which the lifetime of the initial TBP-DNA complex helps select the correct sequence around which TFIIIB assembles, and therefore the correct TSS. Importantly, the position of the DBD of t95 and of the Brf1-t131 TPR array is such that the distance between them might serve as a molecular ruler to place TFIIIB at a relatively constant position upstream of the TSS ([START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF]; Figure 3-5). A previous cryo-EM structure revealed that Brf1, TBP and Bdp1 form a positively charged ring around the TATA box, which may explain why the binding of TFIIIB to DNA is so unusually stable and why TFIIIB can serve as a "roadblock" for the transcription and replication machineries (Vorländer et al., 2018).
Recruitment of RNAPIII and assembly of the PIC
Once bound to the promoter region, TFIIIC and TFIIIB recruit RNAPIII to form the pre-initiation complex (PIC). Assembly of the RNAPIII PIC at tRNA genes in yeast involves numerous interactions between the polymerase and the GTFs.
For instance, yeast two-hybrid experiments showed that C34 interacts with Brf1 [START_REF] Werner | Interaction between a complex of RNA polymerase III subunits and the 70-kDa component of transcription factor IIIB[END_REF]; this interaction was subsequently shown to be essential for RNAPIII recruitment and open complex formation, based on a mutagenic analysis of C34 [START_REF] Brun | Dual role of the C34 subunit of RNA polymerase III in transcription initiation[END_REF]. Photo-cross-linking studies found that C34 is located immediately downstream from Brf1 in the assembled RNAPIII transcription complex, supporting its role in TFIIIB recognition [START_REF] Bartholomew | Orientation and topography of RNA polymerase III in transcription complexes[END_REF]. Additional cross-links between Brf1 and the RNAPIII subunits C160 and C128 were also detected [START_REF] Khoo | Mapping the Protein Interaction Network for TFIIB-Related Factor Brf1 in the RNA Polymerase III Preinitiation Complex[END_REF]. Furthermore, the stalk subunit C17 was found to interact with the N-terminal cyclin repeats of Brf1 (Figure 3-6), suggesting a role for C17 in the recruitment of RNAPIII (Ferri et al., 2000), and Bdp1 was found to interact with C37 [START_REF] Wu | The TFIIF-like Rpc37/53 dimer lies at the center of a protein network to connect TFIIIC, Bdp1, and the RNA polymerase III active center[END_REF]. Besides these TFIIIB interactions, C53 and ABC10a were also found to contact the TFIIIC subunit t131 [START_REF] Wu | The TFIIF-like Rpc37/53 dimer lies at the center of a protein network to connect TFIIIC, Bdp1, and the RNA polymerase III active center[END_REF][START_REF] Dumay | Interaction between Yeast RNA Polymerase III and Transcription Factor TFIIIC via ABC10α and τ131 Subunits *[END_REF]. A role for C31 in preinitiation complex recognition was also characterized, since a small deletion of the C-terminal end of C31 impaired RNA chain initiation [START_REF] Thuillier | A mutation in the C31 subunit of Saccharomyces cerevisiae RNA polymerase III affects transcription initiation[END_REF]. Several studies revealed a structurally and functionally conserved core transcription initiation complex in all eukaryotic RNAPs, which contains promoter DNA, RNAP, TBP, a TFIIB-like factor, a TFIIF-like factor and a TFIIE-related factor. In the RNAPIII transcription system, the C53-C37 heterodimer is considered to be homologous to TFIIF, while the C82-C34-C31 heterotrimer is regarded as the homolog of TFIIE. Among the three subunits of TFIIIB, TBP is common to the three eukaryotic RNAPs; the N-terminal half of Brf1 shares a high degree of sequence similarity with TFIIB; but Bdp1 has no counterpart in the RNAPI and RNAPII transcription machineries [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF]. It was also suggested that TFIIIC acts as an assembly factor but is not a bona fide component of the PIC. Recruitment of RNAPIII and promoter opening may displace tA from the A box to free the transcription unit, as the tA module was found to elute separately from the TFIIIB-RNAPIII-DNA complex during size-exclusion chromatography [START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF].
This idea is in agreement with previous in vitro data showing that TFIIIC is only required for TFIIIB assembly but is dispensable for RNAPIII transcription [START_REF] Bardeleben | Encounters of Saccharomyces cererisiae RNA Polymerase III with its Transcription Factors during RNA Chain Elongation[END_REF]. However, other in vitro studies showed that TFIIIC is not released from the DNA template once it is bound to it [START_REF] Ruet | Isolation of a class C transcription factor which forms a stable complex with tRNA genes[END_REF] and, moreover, that TFIIIC is required to support re-initiation at genes as long as 300 bp [START_REF] Ferrari | Distinct roles of transcription factors TFIIIB and TFIIIC in RNA polymerase III transcription reinitiation[END_REF]. Thus, it still remains unclear whether TFIIIC is indeed disassembled during transcription initiation in vivo.
Promoter opening
Upon RNAPIII PIC assembly on the promoter, the transcription machinery forms a closed complex (CC) in which the duplex DNA located in the DNA-binding cleft of the polymerase remains double-stranded. The promoter DNA is then melted into single strands and the template strand is engaged by the polymerase active site, leading to the formation of an open complex (OC). After synthesis of a short piece of RNA transcript, the initially transcribing complex (ITC) forms, and the polymerase subsequently escapes the promoter and transitions to the phase of transcription elongation [START_REF] Ramsay | Structural rearrangements of the RNA polymerase III machinery during tRNA transcription initiation[END_REF]. An early study showed that RNAPIII does not open its promoter uniformly, with the upstream segment (bp -9 to -5, relative to the TSS as +1) opening at a lower temperature than the downstream segment (bp -3 to +7), suggesting that promoter opening by RNAPIII may nucleate at the upstream end of the transcription bubble (Kassavetis et al., 1992b). TFIIIB was found to participate in promoter opening in addition to its role in RNAPIII recruitment [START_REF] Kassavetis | The role of the TATA-binding protein in the assembly and function of the multisubunit yeast RNA polymerase III transcription factor[END_REF]. Further analyses proposed that TFIIIB participates in two steps of promoter opening: the N-terminal domain of Bdp1 is involved in initiating duplex DNA separation at the upstream edge of the transcription bubble, and the zinc-ribbon domain in the N-terminus of Brf1 then functions by extending the initial transcription bubble towards and beyond the TSS (Figure 3-6; [START_REF] Kassavetis | The RNA polymerase III transcription initiation factor TFIIIB participates in two steps of promoter opening[END_REF]). It was suggested that both Bdp1 and Brf1 drive promoter opening by inducing conformational changes in RNAPIII, particularly by altering the arrangement of the C82-C34-C31 heterotrimer, which has long been implicated in promoter opening [START_REF] Ramsay | Structural rearrangements of the RNA polymerase III machinery during tRNA transcription initiation[END_REF]. Atomic models of the yeast RNAPIII PIC in different functional states have been built. Different clamp conformations of unbound RNAPIII have also been reported in yeast, where it was suggested that a moving stalk comprising subunits C25 and C17 can mediate this conformational change (Hoffmann et al., 2015).
The open-clamp conformation corresponds to a state in which the DNA-binding cleft is widened.
Box 3-1: The TPR, WH, WD40 and SANT protein domains
The tetra-trico peptide repeat (TPR) is a degenerate 34-amino-acid sequence arranged into two antiparallel a-helices, present in tandem arrays of 3-16 motifs, which can serve as a platform for protein-protein interactions and for the assembly of multiprotein complexes [START_REF] Das | The structure of the tetratricopeptide repeats of protein phosphatase 5: Implications for TPR-mediated protein-protein interactions[END_REF]. (Present in TFIIIC subunit t131)
The winged helix (WH) DNA-binding domain is a structural motif belonging to the helix-turn-helix (HTH) family. The classical WH fold consists of two wings (W1 and W2), three a-helices (H1, H2 and H3) and three b-strands (S1, S2 and S3) arranged in the canonical order H1-S1-H2-H3-S2-W1-S3-W2. It is found in core components of transcription systems in eukaryotes and prokaryotes, participating in the establishment of protein-DNA and protein-protein interactions [START_REF] Gajiwala | Winged helix proteins[END_REF][START_REF] Teichmann | Structural and functional aspects of winged-helix domains at the core of transcription initiation complexes[END_REF]. The "extended" winged helix (eWH) is a WH domain extended by specific a-helices at the N- and C-termini [START_REF] Meinhart | An Extended Winged Helix Domain in General Transcription Factor E/IIEα *[END_REF]. (Present in TFIIIC subunit t138, C34 and C82)
WD40 is named after the conserved WD dipeptide and the length of approximately 40 amino acid residues of a single repeat. Each WD40 repeat comprises a four-stranded antiparallel b-sheet. The repeats in a protein fold into a b-propeller architecture, often comprising seven blades. Proteins containing WD40 domains are very abundant in eukaryotic organisms and are rarely present in prokaryotes. This domain is among the top ten most abundant domains in eukaryotic genomes [START_REF] Stirnimann | WD40 proteins propel cellular networks[END_REF]. WD40 domain proteins are involved in a large variety of cellular processes, in which WD40 domains function as a protein-protein or protein-DNA interaction platform. No enzymatic activity has been detected so far for this domain [START_REF] Xu | Structure and function of WD40 domain proteins[END_REF]. (Present in TFIIIC subunits t91 and t60)
The SANT domain, for "switching-defective protein 3 (Swi3), adaptor 2 (Ada2), nuclear receptor co-repressor (N-CoR), transcription factor (TF)IIIB", is a protein domain found in many chromatin-remodelling proteins, where it functions as a unique histone-interaction module coupling histone binding to enzyme catalysis. It has high sequence similarity with the DNA-binding domain of Myb-related proteins [START_REF] Boyer | The SANT domain: A unique histone-tail-binding module?[END_REF]. (Present in TFIIIB subunit Bdp1)
Transcription elongation
Transcription elongation is the process following initiation by which an RNA chain is processively synthesized as the polymerase moves along the template DNA. This step of transcription is the least well characterized for RNAPIII. Genome-wide mapping of RNAPIII transcription in yeast by the CRAC method (UV crosslinking and analysis of cDNA; see Box 3-2 for details) revealed an unequal distribution of RNAPIII along tRNA genes (Turowski et al., 2016). According to this study, RNAPIII was enriched in two regions close to the 5' and the 3' ends of tRNA genes, respectively, with the 5' end peak being much higher than the 3' end peak (the sketch below illustrates how such a metagene occupancy profile can be computed).
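To make the notion of a metagene occupancy profile more concrete, the following minimal sketch (my own illustration, not the actual analysis pipeline used by Turowski et al., 2016; the input format, gene names and function name are assumptions) averages per-nucleotide RNAPIII CRAC coverage over tRNA genes aligned at their transcription start sites; enrichments near the 5' and 3' ends of the averaged profile would correspond to the peaks described above.

```python
import numpy as np

def metagene_profile(coverage_by_gene, window=150):
    """Average per-nucleotide CRAC coverage over genes aligned at their 5' ends.

    coverage_by_gene: dict mapping gene name -> per-nucleotide read counts over
                      the transcribed region (5' to 3').
    window: number of nucleotides downstream of the TSS to include.
    Returns an array of length `window` with the mean normalized coverage.
    """
    profiles = []
    for counts in coverage_by_gene.values():
        counts = np.asarray(counts, dtype=float)
        if counts.size < window or counts.sum() == 0:
            continue  # skip genes that are too short or have no signal
        # Normalize per gene so that highly expressed tRNAs do not dominate the average.
        profiles.append(counts[:window] / counts.sum())
    return np.mean(profiles, axis=0)

# Toy example with made-up coverage vectors (hypothetical gene names).
toy_coverage = {
    "tL(CAA)A": np.concatenate([np.full(30, 50.0), np.full(90, 10.0), np.full(30, 25.0)]),
    "tS(AGA)B": np.concatenate([np.full(30, 80.0), np.full(90, 15.0), np.full(30, 30.0)]),
}
profile = metagene_profile(toy_coverage, window=150)
print(profile.argmax())  # position of the maximum; here it falls near the 5' end
```

In a real analysis, the per-gene coverage would come from CRAC reads mapped to the genome, and genes would typically also be aligned at their 3' ends (or length-scaled) to resolve the 3' peak.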
These peaks could represent regions with a decreased elongation rate and/or RNAPIII pausing. Interestingly, the 5' and 3' peaks coincided with the beginning of the A and B boxes bound by TFIIIC, respectively. Thus, Turowski et al. proposed that TFIIIC binding to the A and B boxes forms a physical barrier that interferes with RNAPIII elongation, therefore supporting the idea that TFIIIC remains associated with the DNA during transcription elongation. RNAPIII would displace the tA module from the A box and then the tB module from the B box sequentially, and thus one TFIIIC module would always remain in contact with the transcribed gene. With regard to the prominent 5' peak, they proposed that it could be due to slow clearance from the initiation site, perhaps a delay in the dissociation of the polymerase from the transcription factors (Turowski et al., 2016; Turowski and Tollervey, 2016), which, however, needs to be further investigated.
Box 3-2: UV crosslinking and analysis of the cDNA (CRAC)
The UV crosslinking and analysis of cDNA (CRAC) method was originally developed by the Tollervey group and used for the identification of binding sites of RNA-interacting proteins (Granneman et al., 2009; [START_REF] Bohnsack | Identification of RNA helicase target sites by UV crosslinking and analysis of cDNA[END_REF]). Briefly, proteins of interest containing a bipartite tag are crosslinked to their target RNAs and isolated under highly denaturing conditions, ensuring that only direct interactions are detected. Then, RNA fragments are recovered and deep-sequenced after linker ligation and cDNA synthesis. This method allows a genome-wide analysis of the interactome of RNA-binding proteins with nucleotide resolution.
Transcription termination
The final step of a transcription cycle is the process of transcription termination, which allows the release of the transcript and the recycling of the polymerase for the next round of transcription. Proper termination is critical for maintaining genome stability by avoiding interference between neighboring transcription units as well as conflicts between transcribing RNAPs and other DNA-associated machineries such as the replisome (reviewed in [START_REF] Porrua | Transcription termination and the control of the transcriptome: Why, where and how to stop[END_REF]). The three eukaryotic RNAPs employ different strategies to terminate transcription and, among them, RNAPIII seems to adopt the simplest mechanism.
T-tract-dependent termination
Unlike RNAPI and RNAPII, which require both cis-acting sequences and trans-acting factors for transcription termination, the most widely accepted model posits that RNAPIII terminates autonomously and efficiently at a stretch of thymidines (T-tract) of variable length on the non-template strand, without the need for ancillary factors (Geiduschek and Tocchini-Valentini, 1988; Arimbasseri et al., 2013; Porrua et al., 2016). Early studies revealed that a cluster of four or more consecutive T residues on the non-coding strand could terminate the synthesis of the Xenopus 5S RNA and tRNA Lys in vitro, in the absence of additional factors [START_REF] Bogenhagen | Nucleotide sequences in Xenopus 5S DNA required for transcription termination[END_REF][START_REF] Mazabraud | Structure and transcription termination of a lysine tRNA gene from Xenopus laevis[END_REF][START_REF] Cozzarelli | Purified RNA polymerase III accurately and efficiently terminates transcription of 5s RNA genes[END_REF].
Introducing a T-tract within the Xenopus tRNA Tyr coding region caused premature termination [START_REF] Koski | Mutations of the yeast SUP4 tRNATyr Locus: Transcription of the mutant genes in vitro[END_REF], while deletions of natural oligo(dT) signals resulted in transcription readthrough in yeast and human tRNA genes [START_REF] Allison | Effects of alterations in the 3' flanking sequence on in vivo and in vitro expression of the yeast SUP4-o tRNATyr gene[END_REF][START_REF] Adeniyi-Jones | Generation of long read-through transcripts in vivo and in vitro by deletion of 3ʹ termination and processing sequences in the human tRNA imet gene[END_REF]. Further analyses showed that the minimum length of the T-tract required for RNAPIII termination is species-specific. For Xenopus and mammals, as few as 4 Ts can support efficient termination [START_REF] Bogenhagen | Nucleotide sequences in Xenopus 5S DNA required for transcription termination[END_REF][START_REF] Cozzarelli | Purified RNA polymerase III accurately and efficiently terminates transcription of 5s RNA genes[END_REF][START_REF] Hamada | Transcription termination by RNA polymerase III in fission yeast-A genetic and biochemically tractable model system[END_REF] and, indeed, most tRNA genes harbour a stretch of 4 Ts (i.e. a T4 terminator) in these organisms [START_REF] Allison | Effects of alterations in the 3' flanking sequence on in vivo and in vitro expression of the yeast SUP4-o tRNATyr gene[END_REF][START_REF] Braglia | Sequence context effects on oligo(dT) termination signal recognition by Saccharomyces cerevisiae RNA polymerase III[END_REF]. The fission yeast Schizosaccharomyces pombe requires at least 5 Ts for efficient termination [START_REF] Hamada | Transcription termination by RNA polymerase III in fission yeast-A genetic and biochemically tractable model system[END_REF], while in S. cerevisiae 5 Ts support only very moderate levels of termination and at least 6 Ts are necessary for relatively efficient termination ([START_REF] Allison | Effects of alterations in the 3' flanking sequence on in vivo and in vitro expression of the yeast SUP4-o tRNATyr gene[END_REF]; Arimbasseri and Maraia, 2015; Mishra and Maraia, 2019). These observations are in agreement with the fact that T5/T6 and T6/T7 are the most frequent terminators in S. pombe and S. cerevisiae, respectively [START_REF] Braglia | Sequence context effects on oligo(dT) termination signal recognition by Saccharomyces cerevisiae RNA polymerase III[END_REF]. In the case of human RNAPIII, in vitro and in vivo data have demonstrated that termination can occur at non-canonical terminators consisting of interrupted long T-tracts, generally composed of several portions of 1, 2 or 3 consecutive Ts separated by another nucleotide [START_REF] Orioli | Widespread occurrence of non-canonical transcription termination by human RNA polymerase III[END_REF].
Effect of the T-tract sequence context on termination
Several pieces of evidence indicate that the nucleotide sequence surrounding the T-tract can also influence the efficiency of transcription termination, especially for short T-tracts. For the Xenopus 5S RNA, efficient transcription termination was observed whenever GC-rich sequences surrounded the T4 terminator.
In contrast, the presence of two or more consecutive A nucleotides within the three nucleotides preceding or following the T4 terminator significantly reduced the efficiency of termination [START_REF] Bogenhagen | Nucleotide sequences in Xenopus 5S DNA required for transcription termination[END_REF]. Similarly, human RNAPIIIs normally readthrough in vitro a B1-Alu gene that contains a T4 terminator flanked by AA, but replacing AA with GC increased the termination efficiency dramatically. Moreover, 4 Ts flanked by GC can be as efficient as a T5 terminator (Goodier and Maraia, 1998). In the case of S. cerevisiae, for a model tRNA gene, a CT dinucleotide placed immediately downstream of a T5 can significantly weaken the T5 termination potential. In contrast, termination was highly efficient when T5 was followed by an A or G residue [START_REF] Braglia | Sequence context effects on oligo(dT) termination signal recognition by Saccharomyces cerevisiae RNA polymerase III[END_REF]. Although an impact of the sequence context on termination at short Ttracts has been observed both in yeast and in metazoans, there is so far no universal rule with regard to the effect of the flanking sequence. Molecular mechanisms of termination at T-tracts It has been shown that the 8-9 nt RNA:DNA hybrid in the polymerase catalytic center is a major determinant of the stability of the elongation complex [START_REF] Korzheva | Mechanistic model of the elongation complex of Escherichia coli RNA polymerase[END_REF][START_REF] Sidorenkov | Crucial role of the RNA:DNA hybrid in the processivity of transcription[END_REF]Kireeva et al., 2000). Stability analyses have revealed that oligo (rU:dA) sequences are exceptionally unstable, as a DNA:RNA duplex containing a (rU:dA) 5 is at least 200 times less stable at room temperature than the corresponding duplex containing an (rA:dT) 5 [START_REF] Martin | DNA-RNA hybrid duplexes containing oligo(dA:rU) sequences are exceptionally unstable and may facilitate termination of transcription[END_REF], providing a strong rationale for the biological selection of a short run of U residues at transcription termination sites. An early biochemical study showed that a S. cerevisiae RNAPIII variant lacking the three subunits C53-C37-C11, dubbed RNAPIIID (core enzyme) failed to terminate transcription efficiently. C53-C37 forms a stable heterodimer related to TFIIFa/b [START_REF] Vannini | Conservation between the RNA Polymerase I, II, and III Transcription Initiation Machineries[END_REF]. The efficient association of C53-C37 with RNAPIII is dependent on C11, a 11 kDa protein containing two zinc ribbon domains, with its NTD similar to Rpb9 and CTD similar to TFIIS exhibiting RNA 3' cleavage activity [START_REF] Chédin | The RNA cleavage activity of RNA polymerase III is mediated by an essential TFIIS-like subunit and is important for transcription termination[END_REF]. Adding back recombinant C53-C37 to RNAPIIID restored the recognition of the termination signal, while adding C11 only restored the RNA cleavage activity of RNAPIII but not termination (Landrieux et al., 2006). So it is possible that the addition of high amounts of recombinant C53-C37 could be enough to support some interactions with the rest of the RNAPIII even in the absence of the C11 subunit. 
Kinetic analyses showed that RNAPIIID elongated faster than the wild-type enzyme, suggesting that C53-C37 promotes termination by reducing the elongation rate, therefore increasing the dwelling time of the polymerase at the terminator (Landrieux et al., 2006). Arimbasseri and Maraia (2013) further compared the properties of RNAPIIID and the wild-type RNAPIII and proposed relatively distinct termination mechanisms for each form of RNAPIII. They observed that the 17-subunit holoenzyme can terminate on T-tracts composed of as few as 5 or 6 Ts, or on the proximal part of a 9 Ts tract, whereas termination by the 14-subunit RNAPIIID requires longer T-tracts (≥ 9 Ts) and indeed occurs mainly at the distal part of the terminator. The requirement for a longer T-tract by RNAPIIID could reflect its lower sensitivity to the instability of the oligo(rU:dA) hybrid, as it would require longer, and therefore less stable, rU:dA hybrids. Accordingly, the C53-C37-C11 subcomplex might play a role in increasing the sensitivity to the instability of the oligo(rU:dA) hybrid (Arimbasseri and Maraia, 2013). These hypotheses were supported by a subsequent study, in which the authors showed that RNAPIIID and the holoenzyme differ in their sensitivity to the rU:dA hybrid and that weakening the short rU:dA hybrid by using 4-thio-UTP, a uridine-triphosphate analog with decreased base-pairing strength, can compensate for the lack of C53-C37-C11 (Mishra and Maraia, 2019). Former structural data provided a possible explanation for the higher sensitivity of RNAPIII to the stability of the RNA:DNA duplex at the active site, compared to the other eukaryotic RNAPs (Hoffmann et al., 2015). Indeed, RNAPIII appears to bind the RNA:DNA hybrid less tightly than the other polymerases, although more extensive interactions were observed with the downstream DNA duplex, which were suggested to compensate for the loose grip on the DNA:RNA hybrid during elongation. In addition, a region of the C37 subunit was found to be important for eliciting RNAPIII release, and it was proposed that this C37 region would mediate specific interactions with the fifth T nucleotide of the terminator (Arimbasseri and Maraia, 2015). This region of C37 was previously identified by cross-linking experiments as binding to C128 and localizing to the active center [START_REF] Wu | The TFIIF-like Rpc37/53 dimer lies at the center of a protein network to connect TFIIIC, Bdp1, and the RNA polymerase III active center[END_REF], and by a cryo-EM study as positioned in close proximity to the non-template strand (Hoffmann et al., 2015). In conclusion, termination by RNAPIII is driven by an interplay between the DNA template, the RNA transcript and the polymerase itself, mainly mediated by the subunits C53-C37, C11 and C128. The T-tract is a bipartite termination signal, as both its template and non-template strands carry distinct information to direct different stages of RNAPIII termination. Specifically, the oligo(dT) sequence in the non-template strand promotes both RNAPIII pausing and release, while the oligo(dA) in the template strand leads to formation of a weak rU:dA hybrid in the polymerase active center that acts as a destabilizing signal (Figure 3-10; [START_REF] Arimbasseri | A high density of cis-information terminates RNA Polymerase III on a 2-rail track[END_REF]; Porrua et al., 2016; Mishra and Maraia, 2019).

The role of RNA secondary structures in termination
A new model for RNAPIII transcription termination was proposed by Nielsen et al.
(2013), according to which a hairpin structure formed by the nascent RNA is an absolute requirement for RNAPIII termination, somewhat similar to the intrinsic termination pathway of bacterial RNAP (see Box 2-4). Using purified yeast RNAPIII and in vitro transcription termination assays, the authors of that study observed that these polymerases paused on stretches of 7, 8 or 12 Ts but failed to dissociate from the DNA. However, introducing a hairpin-like structure immediately upstream of the poly-U sequence significantly facilitated RNAPIII release. They further found that a distance as long as ~12 nt between the poly-U stretch and the upstream RNA structure still allowed efficient termination, while longer spacers prevented the RNA hairpin from promoting termination. The fact that hairpin-mediated termination could occur even when the hairpin was placed at a certain distance from the poly-U sequence suggested that, upon pausing within the T-tract, RNAPIII can move backward to some extent to approach the hairpin. Importantly, their data challenged the idea that RNAPIII requires a weak RNA:DNA hybrid in the active center, as efficient hairpin-dependent termination could also be achieved in the absence of such a weak hybrid. However, this model remains very controversial as it is in contradiction with data from several groups. Indeed, prior in vitro transcription assays showed that a T-tract alone, without the presence of an upstream hairpin, is sufficient for efficient termination by RNAPIII ([START_REF] Bogenhagen | Nucleotide sequences in Xenopus 5S DNA required for transcription termination[END_REF]; [START_REF] Wang | Termination of transcription by RNA polymerase III from wheat germ[END_REF]; Arimbasseri et al., 2013). In addition, it was shown both in vivo and in vitro that RNA 3' end cleavage is active during RNAPIII termination ([START_REF] Huang | Mutations in the RNA Polymerase III Subunit Rpc11p That Decrease RNA 3ʹ Cleavage Activity Increase 3ʹ-Terminal Oligo(U) Length and La-Dependent tRNA Processing[END_REF]; [START_REF] Bobkova | Mutational analysis of the hydrolytic activity of yeast RNA polymerase III[END_REF]; Rijal and Maraia, 2013). Arimbasseri et al. (2014) compared different RNAPIII preparations and found that Nielsen's protein was far more active than the other two proteins and, indeed, it was more prone to read through the terminator. Arimbasseri et al. also showed that, unlike Nielsen's protein, the other two proteins can terminate efficiently at a T-tract in the absence of an upstream hairpin. They further showed that RNA structures are not required for RNAPIII termination in vivo in a particular yeast reporter system (Arimbasseri et al., 2014). A very recent study using a reporter system in human cells showed that an RNA hairpin placed just upstream of a run of 4 Ts can enhance transcription termination of RNAPIIIs initiating from a type 3 promoter. However, the RNA structure by itself could not induce termination in the absence of an oligo(dT) sequence. These findings indicate that RNA structures can also partake in RNAPIII termination (Verosloff et al., 2021). Nonetheless, whether these structures are functional only in particular sequence contexts or in specific organisms is still unclear.

Extrinsic termination factors
Several extrinsic factors have been shown to promote RNAPIII termination in vitro. These include the La protein (Lhp1 in budding yeast), TFIIIC, topoisomerase-1 (topo-1) and PC4 (reviewed in Arimbasseri et al., 2013).
TFIIIC were found to be cross-linked to the terminators of RNAPIIItranscribed genes, suggesting a role in termination (Bartholomew et al., 1990;[START_REF] Bartholomew | Two components of Saccharomyces cerevisiae transcription factor IIIB (TFIIIB) are stereospecifically located upstream of a tRNA gene and interact with the second-largest subunit of TFIIIC[END_REF]. Both topo-1 and PC4 were found to copurify with TFIIIC and can promote both termination and reinitiation by RNAPIII [START_REF] Wang | DNA topoisomerase I and PC4 can interact with human TFIIIC to promote both accurate termination and transcription reinitiation by RNA polymerase III[END_REF]. The La protein is an RNA binding protein that associates with all newly synthesized RNAPIII transcripts and protects the 3' end of these RNAs from exonuclease digestion. La association is required for the maturation of pre-tRNAs and the assembly of RNP, and contributes to nuclear retention of certain small RNAs (reviewed in [START_REF] Wolin | The La Protein[END_REF]. A couple of evidence obtained from in vitro transcription assays showed that La protein is required for proper termination and reinitiation of RNAPIII, as well as for transcript release (Goodier and Maraia, 1998;[START_REF] Maraia | Eukaryotic transcription termination factor La mediates transcript release and facilitates reinitiation by RNA polymerase III[END_REF][START_REF] Gottlieb | The RNA binding protein La influences both the accuracy and the efficiency of RNA polymerase III transcription in vitro[END_REF][START_REF] Fan | Phosphorylation of the human La antigen on serine 366 can regulate recycling of RNA polymerase III transcription complexes[END_REF][START_REF] Goodier | A carboxy-terminal basic region controls RNA polymerase III transcription factor activity of human La protein[END_REF], which, however, has been contested by other studies [START_REF] Lin-Marq | Efficient synthesis, termination and release of RNA polymerase III transcripts in Xenopus extracts depleted of La protein[END_REF][START_REF] Hu | A minimal RNA polymerase III transcription system from human cells reveals positive and negative regulatory roles for CK2[END_REF]Schramm and Hernandez, 2002). Whether or not these factors are directly involved in RNAPIII transcription termination needs to be verified. During the course of my PhD, a recent study showed that one of the homologs of the S. cerevisiae helicase Sen1 in S. pombe is required for robust transcription termination by RNAPIII in vivo (Rivosecchi et al., 2019). Fission yeast S. pombe contains two Sen1 homologs: Sen1 (hereafter Sp Sen1) and Dbl8. Unlike S. cerevisiae Sen1, none of the two S. pombe Sen1 homologs is essential for viability or is involved in RNAPII transcription termination (Larochelle et al., 2018). A previous study indicated that Sp Sen1 interacts with RNAPIII but not with RNAPI or RNAPII [START_REF] Legros | RNA Processing Factors Swd2.2 and Sen1 Antagonize RNA Pol III-Dependent Transcription and the Localization of Condensin at Pol III Genes[END_REF]. Facilitated reinitiation by RNAPIII After transcription termination, polymerases are available to initiate a new transcription cycle. In the case of RNAPIII, transcription reinitiation occurs preferentially on the same gene, a phenomenon that is referred to as facilitated recycling. This process was discovered by [START_REF] Dieci | Facilitated Recycling Pathway for RNA Polymerase III[END_REF], when performing in vitro transcription assays with yeast RNAPIII. 
They found that RNAPIIIs completed each new cycle 5-10 times faster than the first round of transcription and were quite resistant to the presence of heparin. The polyanion heparin mimics the structure of nucleic acids and is used as preinitiation inhibitor because it can sequester RNAPIIIs not engaged in elongation, for instance unassembled or terminating RNAPIIIs [START_REF] Kassavetis | Transcription factor IIIB generates extended DNA interactions in RNA polymerase III transcription complexes on tRNA genes[END_REF][START_REF] Dieci | Investigating transcription reinitiation through in vitro approaches[END_REF]. From these observations they concluded that RNAPIIIs were somehow retained on the same template for multiple rounds of transcription, implying that RNAPIIIs are not fully released from the DNA after termination but instead are committed to reinitiation on the same gene. In addition, they noticed that the RNAPIII canonical termination signal was required for efficient reinitiation, as RNAPIIIs that read-through the termination site exhibited reduced recycling ability, probably due to the loss of physical proximity between terminating RNAPIII and transcription factors, indicating an important link between the termination and the reinitiation steps (Figure 3-11A). Studies in both yeast and humans revealed that a substantial amount of RNAPIIIs escape the canonical terminators [START_REF] Canella | Defining the RNA polymerase III transcriptome: Genome-wide localization of the RNA polymerase III transcription machinery in human cells[END_REF][START_REF] Orioli | Widespread occurrence of non-canonical transcription termination by human RNA polymerase III[END_REF]Turowski et al., 2016), and then preferentially terminate at U-rich tracts downstream of the canonical terminators [START_REF] Orioli | Widespread occurrence of non-canonical transcription termination by human RNA polymerase III[END_REF]Turowski et al., 2016), with readthrough transcripts targeted for degradation by the exosome (Figure 3-11C; Turowski et al., 2016;Rivosecchi et al., 2019). This ability of RNAPIII to be reloaded on the same transcription unit to achieve high transcription rates [START_REF] Dieci | Facilitated Recycling Pathway for RNA Polymerase III[END_REF] was later found to be conserved from yeast to humans [START_REF] Cabart | Facilitated recycling protects human RNA polymerase III from repression by Maf1 in vitro[END_REF]. However, the molecular mechanism of termination-coupled facilitated reinitiation by RNAPIII remains poorly understood. An in vitro study on RNAPIII reinitiation properties suggested that facilitated recycling is not a stochastic process, but relies on a specific polymerase recapture pathway involving the promoter-bound transcription factors TFIIIB and TFIIIC. Particularly, TFIIIB was capable of directing recycling on a short template (~100 bp) in the absence of TFIIIC. However, on long genes (> 300 bp) such as SCR1, TFIIIC was further required to support a high reinitiation rate [START_REF] Ferrari | Distinct roles of transcription factors TFIIIB and TFIIIC in RNA polymerase III transcription reinitiation[END_REF]. Biochemical evidence showed that the termination-linked reinitiation also involved the action of the RNAPIII subunits C53, C37 and C11 (Landrieux et al., 2006). Among them, C11 was critical for transcription reinitiation independently of its RNA cleavage stimulation activity. 
The interaction of C11 with the C53-C37 heterodimer would induce a conformational change in RNAPIII needed for the recapture pathway. The C53-C37 heterodimer was proposed to slow down the elongation rate and increase the duration of pausing either during elongation or at the terminator, thus giving time for the correct recognition of the termination element and thereby also favoring facilitated reinitiation (Landrieux et al., 2006). However, how the interactions between the terminating RNAPIII and the initiation factors assembled on the promoter are established remains unclear, and structural evidence for the "reinitiation complex" is greatly needed.

Figure 3-11 (legend, partial): [START_REF] Dieci | Facilitated Recycling Pathway for RNA Polymerase III[END_REF]. (d) Secondary termination: a substantial fraction of RNAPIIIs overrides the canonical terminator and then terminates at regions downstream of the 3' end of the tRNA, thus producing aberrant readthrough transcripts. (B) Pre-tRNA processing. The primary transcripts of tRNA genes must undergo maturation at both ends to generate mature tRNAs. The 5'-end leader of the pre-tRNA is generally removed by the RNase P endonuclease, and the 3'-end trailer is thought to be cleaved by the tRNase Z endonuclease or trimmed by the exonucleases Rex1 and Rrp6 [START_REF] Skowronek | tRNA 3' processing in yeast involves tRNase Z, Rex1, and Rrp6[END_REF]. (C) Readthrough transcript processing. Readthrough tRNAs can be degraded by nuclear surveillance machineries such as the exosome, or can be processed through other mechanisms. Adapted from Turowski and Tollervey, 2016.

Research outline and main objectives
The Nrd1-Nab3-Sen1 (NNS) complex in budding yeast is dedicated to the termination of non-coding RNAs transcribed by RNAPII. Previously, studies in our group identified a major role for the helicase Sen1 in dismantling the elongation complex, which requires its RNA-binding and ATP-hydrolysis activities (Porrua and Libri, 2013b; Han et al., 2017). The team showed that the helicase domain of Sen1 alone is sufficient to induce RNAPII transcription termination in vitro (Han et al., 2017). However, the N-terminal domain (NTD) of Sen1 is essential for viability as well as for RNAPII transcription termination in vivo, likely by recognizing the CTD of RNAPII (Han et al., 2020). In order to investigate other functional interactions mediated by the Sen1 NTD, we studied the protein interactomes of Sen1 and several Sen1 variants. Surprisingly, we found that deletion of the NTD of Sen1 leads to the loss of its association with replication factors and RNAPIII, whereas the NTD alone could interact with the replisome and RNAPIII, suggesting a direct role for the Sen1 NTD in binding the replisome and RNAPIII (data obtained by Nouhou Haidara). However, little was known about the biological function of these interactions. In a collaborative study with the De Piccoli group (Coventry, UK), it was found that budding yeast Sen1 binds to the replisome via the replication factors Ctf4 and Mrc1. Our collaborators identified a region of the Sen1 NTD that is involved in these interactions and generated a mutant version of Sen1 containing substitutions at three conserved residues (W773, E774, W777), referred to as sen1-3, that lost the capacity to bind the replisome but was fully competent for cell growth and RNAPII transcription termination (Appanah et al., 2020). Further analyses of the interactome of the Sen1-3 protein demonstrated that it was impaired in the interaction with RNAPIII as well (data obtained by Umberto Aiello).
Therefore, the sen1-3 allele is a separation-of-function mutant that allows us to study the role of Sen1 in RNAPIII transcription and replication without affecting the efficiency of RNAPII transcription termination. This was the moment when I joined the team and started my PhD project. My study aimed at characterizing in more detail the interaction of Sen1 with RNAPIII and exploring the function of this interaction in RNAPIII transcription in S. cerevisiae. To this end:
1) I analysed RNAPIII interacting partners by co-immunoprecipitation followed by mass-spectrometry analysis to understand whether the interaction of Sen1 with RNAPIII could be mediated by the replisome.
2) I generated high-resolution maps of transcribing RNAPIII using the CRAC technique (Box 3-2) in a wild-type, sen1-3, or a Sen1-depleted strain, to determine the role of Sen1 and the Sen1-RNAPIII interaction in RNAPIII transcription. I also performed RNAPIII CRAC under additional conditions to assess whether the role of Sen1 in RNAPIII transcription requires other components of the NNS complex or the presence of the replisome.
3) I performed in vitro transcription termination assays using purified RNAPIII and Sen1, to characterize the mechanisms of Sen1-mediated RNAPIII termination.
4) I also tested the role of RNA secondary structures in RNAPIII transcription termination using similar in vitro systems.
By these approaches, we expected to unveil a new function for the highly conserved helicase Sen1 and revisit the current model for RNAPIII transcription termination.

MANUSCRIPT
An integrated model for termination of RNA polymerase III transcription
Juanjuan Xie1, Umberto Aiello1°, Yves Clement1°, Nouhou Haidara1°, Mathias Girbig

Introduction
Transcription termination is an essential process that sets the borders between genes, therefore avoiding interference between neighboring transcription units. Furthermore, transcription termination plays an important role in the maintenance of genome integrity by limiting the possible conflicts between transcribing RNA polymerases (RNAPs) and other cellular machineries involved in DNA replication or repair (reviewed in Porrua and Libri, 2015a). Transcription termination can be envisioned as a multi-step process consisting in the recruitment of termination factors, the recognition of sequence motifs, RNAP pausing, and finally the release of the RNAP and the transcript from the DNA. This last step involves a remodeling of an intricate network of interactions between the RNAP, the nascent RNA and the DNA template (reviewed in Porrua et al., 2016). Within this network, the interactions between the polymerase and the RNA:DNA hybrid are considered the main determinant of the stability of the elongation complex (EC) (Kireeva et al., 2000). Most eukaryotic organisms possess three different RNAPs that are specialized in producing different classes of transcripts and seem to adopt different strategies to efficiently terminate transcription. RNAPI is responsible for the synthesis of ribosomal RNAs; RNAPII transcribes all protein-coding genes and several classes of non-coding genes; and RNAPIII synthesizes short and abundant transcripts, among which all tRNAs, the 5S rRNA, and several additional non-coding RNAs.
The mechanisms of transcription termination of the three polymerases have been extensively characterized in the eukaryotic model Saccharomyces cerevisiae and many of the principles uncovered in this organism seem to be highly conserved from yeast to humans (reviewed in Porrua et al., 2016). RNAPI and RNAPII require extrinsic protein factors to terminate transcription. RNAPI pauses when it encounters a Myb-like factor bound to the DNA downstream of each rRNA gene (Merkl et al., 2014;Reiter et al., 2012). The release of the paused RNAPI is then mediated by additional proteins, specifically the Rat1 exonuclease and the helicase Sen1 (El [START_REF] Hage | Efficient termination of transcription by RNA polymerase I requires the 5ʹ exonuclease Rat1 in yeast[END_REF]Kawauchi et al., 2008), which are also major termination factors for RNAPII (see below). The mechanism of RNAPII transcription termination is more complex and involves the action of a larger number of proteins. There are two major termination pathways for RNAPII (reviewed in Porrua and Libri, 2015a). Transcription termination at protein-coding genes relies on a multi-subunit complex that is responsible for the co-transcriptional cleavage of the pre-mRNA at the poly(A) site and the addition of a poly(A) tail. The downstream portion of the nascent transcript is then targeted by Rat1 (XRN2 in humans), which degrades the RNA molecule until it encounters RNAPII and promotes its release from the DNA (Baejen et al., 2017;Kim et al., 2004;Park et al., 2015;Pearson and Moore, 2013;West et al., 2004). The second pathway is devoted to termination of non-coding transcription and plays an essential role in the control of pervasive transcription as well as in the biogenesis of snoRNAs (Arndt and Reines, 2015;Porrua and Libri, 2015a). This pathway depends on a complex composed of two RNA-binding proteins, Nrd1 and Nab3, and the aforementioned helicase Sen1 (i.e. the NNS complex). Whereas Nrd1 and Nab3 recognize specific sequence motifs that are enriched in the target non-coding RNAs, the helicase Sen1 induces the dissociation of the EC (Porrua and Libri, 2013b;Porrua et al., 2012;Schulz et al., 2013;Steinmetz et al., 2006;Wlotzka et al., 2011). The mechanisms of action of Sen1 in RNAPII transcription have been extensively characterized at the molecular level by our group and others (Han et al., 2017;[START_REF] Hazelbaker | Kinetic competition between RNA Polymerase II and Sen1-dependent transcription termination[END_REF]Leonaitė et al., 2017;Porrua and Libri, 2013b;Wang et al., 2019). Briefly, Sen1 uses the energy of ATP hydrolysis to translocate along the nascent RNA towards the transcribing RNAPII and, upon transcriptional pausing, it collides with the polymerase and induces its dissociation from the DNA. A large body of evidence supports the notion that, in contrast to the other RNAPs, RNAPIII can terminate precisely and efficiently at a particular DNA sequence without the need for accessory proteins (reviewed in Arimbasseri et al., 2013 andPorrua et al., 2016). A typical RNAPIII terminator consists in a stretch of thymidines (T) of variable length in the nontemplate DNA strand that, according to the current model, is sufficient to promote both pausing and release of RNAPIII. Upon transcription of a T-tract, the weakness of the resulting rU:dA hybrid is thought to be central to the destabilization of the RNAPIII EC (Mishra and Maraia, 2019). 
The particular sensitivity of RNAPIII to weak rU:dA hybrids, compared to other RNAPs that do not sense T-tracts as terminators, is believed to depend on the less-extensive interactions between RNAPIII and the RNA:DNA hybrid (Hoffmann et al., 2015). The Ts in the non-template strand play an additional critical role in transcription termination (Arimbasseri and Maraia, 2015), as they have been proposed to be recognized by the C37 and C53 subunits of RNAPIII that also contribute to termination (Landrieux et al., 2006; Rijal and Maraia, 2013). An alternative model proposed by Nielsen and coauthors (Nielsen et al., 2013) posits that T-tracts are required for RNAPIII pausing but are not sufficient for its release from the DNA. These authors have proposed that the folding of the nascent RNA into a hairpin-like structure in the vicinity of the paused RNAPIII is an absolute requirement for termination. The hairpin would invade the RNA exit channel of the polymerase, thus provoking its dissociation from the DNA. The proposed mechanism is reminiscent of the so-called intrinsic termination pathway described for bacterial RNAP. This hairpin-dependent model remains, however, highly controversial since it is seemingly in disagreement with a large body of former experimental evidence (Arimbasseri et al., 2014). The model according to which sequence signals are the sole determinant of RNAPIII termination has also been challenged in the fission yeast Schizosaccharomyces pombe by a recent report showing that one of the homologues of S. cerevisiae Sen1 (hereafter designated Sp Sen1) is involved in RNAPIII termination in vivo (Rivosecchi et al., 2019). Deletion of this gene, which is non-essential in S. pombe, leads to a global shift of RNAPIII occupancy downstream of tRNA genes, consistent with the notion that Sp Sen1, in addition to T-tracts, is required for RNAPIII termination in this organism. The precise role of Sp Sen1 in termination as well as its mechanism of action were, however, not addressed in this study. In the present study we combine high-resolution genome-wide approaches with in vitro transcription termination assays using highly-purified components to dissect the mechanism of RNAPIII transcription termination in S. cerevisiae. We observe that termination at the primary terminator of RNAPIII-dependent genes (i.e. the first T-tract after the gene) is only partially efficient and, thus, a considerable fraction of polymerases terminate in the downstream region. We provide in vivo and in vitro evidence that the helicase Sen1 plays a global role in RNAPIII transcription termination and that this function relies on the interaction of its N-terminal domain with RNAPIII. However, we find that Sen1 contributes very little to the efficiency of primary termination and that it mainly functions as a fail-safe mechanism to remove polymerases that escape termination at the primary terminator.

Results

The N-terminal domain of Sen1 interacts with RNAPIII
S. cerevisiae Sen1 is a modular protein composed of a large N-terminal domain (aa 1-975), a central helicase domain (aa 1095-1867) and a C-terminal disordered region (aa 1930-2231, see figure 1A). We have recently shown that the N-terminal domain (NTD) is essential for viability and for termination of RNAPII transcription and that it recognizes the CTD of RNAPII, although it is not the only RNAPII-interacting region in Sen1 (Han et al., 2020).
In a quest for other functional interactions mediated by the Sen1 Nter, we performed coimmunoprecipitation (co-IP) experiments followed by mass spectrometry (MS) analyses using either a full-length or a DNTD version of Sen1 as a bait (table 1 andS1). We expressed both SEN1 variants from the GAL1 promoter (pGAL1) because only overexpression of the sen1DNTD allele supports viability (Han et al., 2020). In agreement with previous reports (Appanah et al., 2020;Yüce and West, 2013), we detected an RNase-resistant interaction of Sen1 with its partners within the NNS-complex Nrd1 and Nab3, several replication and transcription-related factors, as well as with the three RNAPs. Strikingly, deletion of the NTD abolished the association of Sen1 with RNAPIII and most replication factors without markedly affecting other interactions. Additional co-IP/MS experiments using the isolated NTD as a bait confirmed the interaction with replication factors (e.g. Ctf4) and RNAPIII subunits, strongly suggesting direct protein-protein interactions between the NTD and these factors (table 1 andS2). The interaction of the N-terminal domain of Sen1 with the replisome was found to depend on the replication factors Ctf4 and Mrc1 in a parallel, collaborative study (Appanah et al., 2020). In that work, we found that combination of three point mutations in a conserved region of the Sen1 NTD (W773A, E774A, W777A; defining the Sen1-3 variant) abolishes the interaction with these proteins. Importantly, we showed that Sen1-3 is expressed at similar levels as WT Sen1 and is fully proficient for terminating transcription of NNS target genes (Appanah et al., 2020). To assess whether these mutations also affect the association with RNAPIII, we analysed the protein interactome of Sen1-3 by co-IP/MS (figure 1A, tables 1 andS3). The interaction with RNAPII was not significantly altered in this mutant, in agreement with its proficiency in RNAPII transcription termination (Appanah et al., 2020). Interestingly, we observed that the mutations introduced in Sen1-3 strongly affect the interaction with RNAPIII subunits. These results are compatible with the notion that the same surface of Sen1 mediates mutually exclusive interactions with the replisome and RNAPIII. Alternatively, the interaction between Sen1 NTD and RNAPIII could be mediated by the replisome. To distinguish between these possibilities, we conducted quantitative MS and western blot analyses on RNAPIII coimmunoprecipitates from WT and sen1-3 cells (figure 1B-D and table S4). We observed a clear association of RNAPIII with protein components of the Ty1 transposon, which was previously reported and validates our experimental conditions (figure 1C, [START_REF] Bridier-Nahmias | Retrotransposons. An RNA polymerase III subunit determines sites of retrotransposon integration[END_REF]. Importantly, while Sen1 was among the most enriched RNAPIII interactors, we did not detect the two replisome anchoring factors, Ctf4 and Mrc1, indicating that Sen1 interacts in a mutually exclusive manner with RNAPIII and the replisome. RNase A treatment induced a ~2-fold decrease in the level of RNAPIII-bound Sen1, indicating that this interaction is also partially mediated or stabilized by the RNA (figure 1E). As expected, the association of Sen1-3 with RNAPIII was strongly reduced compared to WT Sen1 (figure 1D), even in the absence of RNase treatment, suggesting that the protein-protein interaction mediated by Sen1 NTD is a major pre-requisite for the association of Sen1 with RNAPIII transcripts. 
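As an illustration of the kind of comparison underlying the co-IP/MS analyses described here, the sketch below computes a per-protein log2 enrichment between two purifications from label-free intensity values. The protein names correspond to factors discussed in the text, but the intensity values and the pseudocount are invented for the example and do not reproduce the actual quantification pipeline used in this study.

import math

def log2_enrichment(ip_a, ip_b, pseudo=1e5):
    """log2 of (intensity in purification A / intensity in purification B),
    with a pseudocount to handle proteins absent from one of the two."""
    return {p: math.log2((ip_a.get(p, 0) + pseudo) / (ip_b.get(p, 0) + pseudo))
            for p in set(ip_a) | set(ip_b)}

# Hypothetical label-free intensities from an RNAPIII co-IP in WT versus
# sen1-3 cells (arbitrary, invented units).
wt_ip     = {"Rpc160": 5e9, "Sen1": 2e7, "Ctf4": 1e5, "Nrd1": 3e5}
sen1_3_ip = {"Rpc160": 5e9, "Sen1": 1e6, "Ctf4": 1e5, "Nrd1": 3e5}

for prot, lfc in sorted(log2_enrichment(wt_ip, sen1_3_ip).items()):
    print(f"{prot:8s} log2(WT / sen1-3) = {lfc:+.2f}")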
Strikingly, the Sen1 NNS partners Nrd1 and Nab3 were very poorly enriched in RNAPIII coimmunoprecipitates (figure 1C-E), strongly suggesting that Sen1 plays a role in RNAPIII transcription termination independently of its function within the NNS-complex. Taken together, our results support the notion that Sen1 associates with RNAPIII and the replisome within two alternative complexes that are also distinct from the NNS-complex and likely exert different functions.

Sen1 is required for efficient termination of RNAPIII transcription in vivo
The most widely accepted model for RNAPIII transcription termination posits that the polymerase recognizes a cis-acting element composed of a stretch of thymidines on the non-template DNA and is released without the need for additional trans- or cis-acting factors (reviewed in Arimbasseri et al., 2013 and Porrua et al., 2016). However, the evidence supporting a direct interaction between RNAPIII and Sen1 prompted us to investigate a possible role for the latter in terminating RNAPIII transcription. To this end, we generated high-resolution maps of transcribing RNAPIII by CRAC (crosslinking analysis of cDNAs) (Candelli et al., 2018; Granneman et al., 2009). Briefly, the nascent RNAs are UV-crosslinked to RNAPIIIs in vivo and the RNAPIII-RNA complexes are purified under stringent conditions. The extracted RNAs are then used to generate cDNAs that are deep-sequenced, providing the position of RNAPIIIs with nucleotide resolution. We performed these experiments in WT or sen1-3 cells as well as in a Sen1-AID (auxin-inducible degron) strain, which allowed assessing the effect of Sen1 depletion (figure 2). We obtained very specific and reproducible RNAPIII occupancy signals in the crosslinked samples relative to un-crosslinked controls, with most reads mapping at RNAPIII-dependent genes (figure S1A-C). Consistent with a former genome-wide study (Turowski et al., 2016), our metagene analyses revealed significant RNAPIII signals downstream of the first T-tract after the 3' end of tRNA genes (hereafter referred to as the primary terminator), indicating that termination at this sequence element is only partially efficient in vivo (figure 2A-C). Importantly, we observed a clear increase in the RNAPIII signal downstream of the primary terminator in the sen1-3 mutant, indicating that the interaction with Sen1 promotes termination of RNAPIII, either at the primary terminator or downstream of it. Read-through (RT) transcription was also increased in the Sen1-AID strain even under non-depletion conditions, most likely because the presence of the tag affects the amount or the function of Sen1 even in the absence of auxin, as observed for other proteins. Transcriptional readthrough was further exacerbated when Sen1 was depleted by the addition of the auxin analogue 3-indoleacetic acid (IAA). The stronger effect of Sen1 depletion relative to the sen1-3 mutation might imply either that Sen1-3 can still interact weakly with RNAPIII in vivo, or that Sen1 functions in RNAPIII termination to some extent in the absence of interaction with the polymerase. Nevertheless, because full depletion of Sen1 also affects termination of many RNAPII non-coding RNA genes, we focused on the more specific sen1-3 mutant for the rest of our study. Heatmap analyses of the RNAPIII differential signal (log2 ratio) in the sen1-3 mutant relative to the WT showed that an increase in the signal downstream of the primary terminator could be observed for the vast majority of tRNA genes (figure 2D). Furthermore, inspection of other RNAPIII-dependent genes such as the 5S and U6 genes revealed similar transcription termination defects, indicating that the role of Sen1 in favouring RNAPIII transcription termination is not restricted to tRNA genes (figure 2E-F). Taken together, our results indicate that Sen1 is globally required for fully efficient termination of RNAPIII transcription in vivo and that this Sen1 function relies to a large extent on its interaction with RNAPIII.

Figure 2 (legend, panels G and H): G) and H) Northern blot analysis of transcripts derived from two different tRNA genes in the indicated backgrounds. Schemes on the top indicate the approximate position of the probes (P1 and P2) used for the detection of the different RNA species (RT, for read-through, and mature tRNA). The RNA probe is indicated in red, while DNA oligonucleotide probes are indicated in black (more details in table S5). pTet-NRD1 denotes strains expressing NRD1 from a Tet-Off promoter. Depletion of Nrd1 in those strains was achieved by incubation with the tetracycline analogue doxycycline for 10.5 h. This system was employed instead of the Nrd1-AID system because RT species are only detectable in a Δrrp6 background and the Nrd1-AID, Δrrp6 strain is not viable even in the absence of IAA. The 5.8S rRNA is used as a loading control. Note that Rrp6 is responsible for the processing of the 5.8S rRNA and thus 5.8S precursors are detected in the Δrrp6 background. The asterisk in panel H indicates that the signal of mature tH(GUG)G2 corresponds to the same samples loaded in the blot in panel G.

Sen1 functions in RNAPIII transcription independently of the NNS-complex
Nrd1 and Nab3 have been found to bind the precursors of several tRNAs in vivo (Steinmetz et al., 2006), and it remains possible that these proteins also partake in RNAPIII termination although they did not appear significantly associated with RNAPIII in our MS analyses (Figure 1E). To address this possibility we conducted RNAPIII CRAC experiments in a Nrd1-AID strain. Depletion of Nrd1 upon treatment with IAA for 1 h was sufficiently efficient to provoke clear termination defects at two well-characterized non-coding genes transcribed by RNAPII (i.e. NEL025c and SNR13, see figure S2C). However, neither the metagene analyses of RNAPIII distribution around tRNAs (figure 3A) nor the inspection of individual RNAPIII-dependent genes (figure 3B-C) revealed any significant effect on RNAPIII transcription termination efficiency. We conclude that, unlike Sen1, Nrd1 is not required for efficient termination of RNAPIII transcription. Because Nab3 is not known to function separately from Nrd1, our results indicate that Sen1 plays a role in RNAPIII transcription independently of the NNS-complex.

The function of Sen1 in RNAPIII transcription termination is not mediated by the replisome
Our analyses of the Sen1 and RNAPIII protein interaction networks support a model whereby Sen1 interacts with RNAPIII and the replisome in a mutually exclusive manner.
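The differential heatmap described above (figure 2D) essentially amounts to a per-gene log2 ratio of normalized RNAPIII coverage in sen1-3 over WT, at positions aligned on the primary terminator. A minimal sketch of such a computation is given below; the toy arrays, the pseudocount and the simple metagene averaging are illustrative assumptions rather than the exact procedure used for the figures.

import numpy as np

def log2_ratio_matrix(mutant, wild_type, pseudo=0.1):
    """Per-gene log2(mutant / WT) coverage (genes x positions).
    Rows are assumed to be aligned on the primary terminator and
    normalized for sequencing depth beforehand."""
    return np.log2((mutant + pseudo) / (wild_type + pseudo))

def metagene(profile_matrix):
    """Average profile across genes (one value per position)."""
    return profile_matrix.mean(axis=0)

# Toy data: 3 'genes' x 10 positions downstream of the primary terminator.
rng = np.random.default_rng(0)
wt  = rng.poisson(2.0, size=(3, 10)).astype(float)
mut = rng.poisson(5.0, size=(3, 10)).astype(float)   # more read-through signal

ratio = log2_ratio_matrix(mut, wt)
print(ratio.shape)       # (3, 10) -> one row per gene, as in the heatmap
print(metagene(ratio))   # average log2 ratio per position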
However, they do not exclude the possibility that the replisome mediates the loading of Sen1 onto RNAPIII, for instance when a collision between these complexes occurs (e.g. Sen1 could interact sequentially with the replisome and RNAPIII). RNAPIII transcription units are indeed hot spots of conflicts between the transcription and the replication machineries [START_REF] Osmundson | Pif1-family helicases cooperatively suppress widespread replication-fork arrest at tRNA genes[END_REF]. Therefore, we considered the possibility that Sen1 might only function in RNAPIII transcription termination in the presence of ongoing replication. To explore this possibility, we performed parallel RNAPIII CRAC experiments in asynchronous cells and in cells arrested in the G1 phase by treatment with α-factor, in a WT and a sen1-3 background (G1 arrest was verified by FACS analysis, figure S1D-E). Importantly, we observed a very similar RNAPIII pattern in G1-arrested and asynchronously growing cells (figures 2A-C and 3D-E), namely prominent RNAPIII termination defects in sen1-3. The finding that abolishing the interaction between Sen1 and RNAPIII reduces the efficiency of termination even in the absence of the replisome (i.e. in G1-arrested cells) indicates that Sen1 plays a role in termination of RNAPIII transcription independently of its association with the replisome.

Sen1 operates in a fail-safe transcription termination pathway
Our genome-wide data indicate that the association of Sen1 with RNAPIII globally increases the efficiency of transcription termination. However, these results are consistent with both a function for Sen1 in promoting termination at the primary terminator and/or a role in removing polymerases that constitutively escape primary termination. To distinguish between these possibilities, we first analysed the distribution of the RNAPIII CRAC signal in WT and sen1-3 cells. The total transcription levels, inferred from the RNAPIII signal within the gene body, were virtually identical in WT and sen1-3 cells, indicating that the mutations in Sen1-3 do not impact transcription initiation or elongation (figure S3A-B). We then computed for each tRNA gene both the RT index (i.e. the ratio of the RNAPIII signal downstream versus upstream of the primary terminator) and the RT length (i.e. the distance between the primary terminator and the 3' end of the RT signal) in the WT and in sen1-3 (figure 4A). For most genes, we observed an increase in the RT index in sen1-3 cells compared to WT cells (figure 4B-E), which is compatible with Sen1 functioning in primary or in secondary termination, since failure in either one of these processes alone would result in the accumulation of RNAPIIIs within RT regions. However, the heatmap analyses shown in figure 2D revealed that for most tRNA genes, very little or no RNAPIII accumulation could be observed immediately after the primary terminator in the mutant, with the largest increase of RNAPIII signal occurring further downstream, arguing against a major role for Sen1 at the primary termination site. Consistent with this notion, we observed a clear increase in the RT length in the mutant (figure 4B-E), indicating that polymerases that have escaped primary termination transcribe for longer because downstream termination is defective in the Sen1 mutant.
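For clarity, the sketch below illustrates one straightforward way to compute the two read-through metrics defined above from a single gene's coverage vector: the RT index as the ratio of the mean signal downstream versus upstream of the primary terminator, and the RT length as the distance from the terminator to the last downstream position above a background threshold. The window sizes, the threshold and the toy coverage profile are illustrative choices, not the parameters used in the study.

import numpy as np

def rt_index(cov, term, up=200, down=200):
    """Mean coverage downstream / upstream of the primary terminator (index `term`)."""
    upstream   = cov[max(0, term - up):term].mean()
    downstream = cov[term:term + down].mean()
    return downstream / upstream if upstream > 0 else np.nan

def rt_length(cov, term, background=1.0):
    """Distance from the terminator to the last downstream position above background."""
    above = np.flatnonzero(cov[term:] > background)
    return int(above[-1]) + 1 if above.size else 0

# Toy coverage: strong signal over the gene body, decaying read-through after it.
cov = np.concatenate([np.full(200, 50.0), 50.0 * np.exp(-np.arange(300) / 60.0)])
term = 200   # index of the primary terminator in this toy vector
print(round(rt_index(cov, term), 2), rt_length(cov, term), "nt of read-through")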
Because termination defects would lead to the production of different RNA species from tRNA genes depending on whether they occur at the primary terminator or at readthrough regions, we set out to analyse these RNAs by northern blot (figure 2G-H). Mature tRNAs are generated by termination at the primary terminator and eventually by the processing of short 5' and 3'-extentions. Therefore, defects in primary termination are expected to result in lower amounts of mature tRNAs with a concomitant increase in the amount of RT transcripts. We could only detect RT RNAs for the tRNA genes tK(UUU)O and tH(GUG)G2 in the absence of the exosome-associated exonuclease RRP6 (figure 2G-H), consistent with former data indicating that RT species are degraded by the RNA exosome (Turowski et al., 2016). In the case of tG(GCC)F2, simultaneous deletion of RRP6 and depletion of the tRNase Z endonuclease Trz1, involved in the processing of tRNA precursors [START_REF] Skowronek | tRNA 3' processing in yeast involves tRNase Z, Rex1, and Rrp6[END_REF], was required for the strong detection of RT transcripts (figure S2A), indicating that RT transcripts can also be targeted by Trz1. Importantly, in all these cases we did not observe a significant decrease in the abundance of mature tRNAs in sen1-3, not even upon depletion of Trz1, excluding the possibility that RT transcripts are recognized as tRNA precursors by this endonuclease and cleaved to generate mature tRNAs (figure S2A and data not shown). Accordingly, the overall abundance of RT RNAs was similar in the WT and in sen1-3, but these species were globally longer in sen1-3 cells, confirming CRAC data suggesting that they result from defective Sen1dependent termination occurring downstream of tRNA primary terminators (figures 2G-H and S2A). This increase in size was not observed, as expected, when the NNS subunit Nrd1 was depleted, consistent with the Nrd1-AID RNAPIII CRAC data (figures 2G and 3A-C). To further support the notion that Sen1 functions mainly on RNAPIIIs that have escaped the primary termination site, we performed more detailed analyses of our CRAC data. If Sen1 does not function in primary termination, its failure to interact with RNAPIII should affect similarly genes with weak or strong primary terminators. Based on in vitro data, the minimal length for a functional terminator is 5 Ts (Arimbasseri and Maraia, 2015;Mishra and Maraia, 2019) but 6 Ts are required for relatively efficient termination and it is generally assumed that the termination efficiency is higher as the T-tract length increases. In partial agreement with these notions, we observed that i) the first T-tract rarely contains 4 Ts, ii) 6 Ts and 7 Ts are the most frequent terminators at this position and iii) tracts longer than 8 Ts are rarely found as primary terminators (figure 4F). We analysed the RT index of tRNAs clustered according to the length of their primary terminator and, as expected, we found that the RT index in these clusters tends to decrease as the T-tract length increases (figure 4G) in inverse correlation with the termination efficiency. Importantly, in sen1-3 cells the RT index increases similarly for all clusters suggesting that having an inefficient primary terminator does not make termination more sensitive to Sen1, arguing against a role of Sen1 at these sites. F) Histogram representing the number of tRNA genes that possess a primary terminator of each indicated length. 
Only consecutive thymidines are considered when computing the length of the primary terminator. G) Analysis of the RT index of tRNA genes grouped according to the length of their primary terminator in either the WT or the sen1-3 mutant. Data points correspond to the average value whereas error bars denote the standard error. H) and I) Analysis of the number of either "weak" (H) or "strong" (I) terminators located in the 700 bp region downstream of the primary terminator for tRNA genes grouped according to the extent of termination defects in the sen1-3 mutant (i.e. dependency on Sen1 for efficient transcription termination). Groups correspond to quartiles (Q) defined by the tRNA gene ranking obtained in the heatmap analyses in figure 2D, where Q1 includes the 25% of genes with the highest impairment in transcription termination in the sen1-3 mutant.

The region downstream of tRNA genes contains T-stretches that were previously proposed to play a role as secondary termination sites (Turowski et al., 2016). We considered the possibility that Sen1 might be preferentially required for tRNA genes having a lower number of secondary termination sites or less efficient ones. To address this possibility, we ranked the different tRNAs according to the extent of the RNAPIII accumulation in sen1-3 relative to WT cells, thus defining a hierarchy of Sen1-dependency. For each tRNA, we computed the number of weak (4 or 5 Ts) or strong (≥ 6 Ts) terminators in regions of secondary termination and we compared the average number of terminators of each kind in the different quartiles (a small counting sketch is shown below). Interestingly, we found that the tRNA genes that are more dependent on Sen1 for termination (i.e. Q1) tend to have a lower number of efficient terminators compared to those that are less dependent (i.e. Q3 and Q4). In contrast, the number of weak terminators, which have a lower impact on RNAPIII progression, was similar in all groups of tRNAs (figures 4H-I and S3). These results strongly suggest that Sen1 compensates for the lack of efficient terminators in regions of secondary termination. This could imply that Sen1 improves termination at weak terminators or that it promotes termination at other sequences. A careful analysis of the RNAPIII CRAC signal at individual tRNA genes provided evidence supporting both possibilities (figures 4J and S4). Mapping only the 3' end of the nascent RNA allows obtaining a precise readout of RNAPIII position with single-nucleotide resolution. We observed very little if any effect of the sen1-3 mutation at positions around the primary terminator, while RNAPIII was clearly found to accumulate preferentially around T-tracts, but also at other sequences, in the downstream regions. Together our results support the notion that Sen1 does not play a prominent role in primary termination and rather promotes the release of RNAPIIIs that pause within regions of secondary termination.

Sen1 can promote termination of RNAPIII transcription in vitro
We have previously demonstrated that Sen1 can directly promote termination of RNAPII transcription in a sequence-independent manner (Porrua and Libri, 2013b). To assess whether Sen1 can also directly induce RNAPIII transcription termination and whether it requires the presence of canonical termination signals, we employed an in vitro transcription termination system containing purified proteins (i.e. RNAPIII and full-length Sen1), transcription templates and nascent RNA (figures 5A and B).
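As a companion to the quartile analysis above, the counting of "weak" (4-5 Ts) and "strong" (≥ 6 Ts) tracts in read-through regions and their comparison across groups of Sen1-dependency can be sketched as follows. The downstream sequences and quartile assignments below are invented for illustration; only the classification thresholds come from the text.

import re
from statistics import mean

def count_terminators(seq):
    """Count weak (4-5 Ts) and strong (>= 6 Ts) T-tracts in a downstream sequence."""
    runs = [len(m.group()) for m in re.finditer(r"T{4,}", seq.upper())]
    weak   = sum(1 for r in runs if r in (4, 5))
    strong = sum(1 for r in runs if r >= 6)
    return weak, strong

# Hypothetical 'read-through region' sequences keyed by tRNA gene name,
# together with a made-up Sen1-dependency quartile (Q1 = most dependent).
genes = {
    "tK(UUU)O":  ("GCTTTTACGTTTTTAGC" * 3, "Q1"),
    "tH(GUG)G2": ("GCTTTTTTACGTTTTAG" * 3, "Q4"),
}

by_quartile = {}
for name, (seq, quartile) in genes.items():
    by_quartile.setdefault(quartile, []).append(count_terminators(seq))

for q, counts in sorted(by_quartile.items()):
    weak_counts, strong_counts = zip(*counts)
    print(q, "mean weak:", mean(weak_counts), " mean strong:", mean(strong_counts))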
We first analysed the capacity of canonical terminator sequences to induce RNAPIII transcription termination by comparing the behaviour of RNAPIII on transcription templates containing T-tracts of variable lengths (i.e. from 4 to 12 Ts, see figures 5C-D and S5). We observed that RNAPIII paused efficiently at the T-tracts and that the efficiency of release increased with the length of the T-tract, consistent with former data (Arimbasseri and Maraia, 2015; Mishra and Maraia, 2019) supporting the idea that RNAPIII can sense downstream untranscribed T-tracts. Importantly, the presence of Sen1 in the reaction provoked a substantial increase in the levels of transcription termination at the T4 terminator, and, to a lesser extent, at the T5 terminator, while no significant effect was observed for the more efficient T6 terminator. This result indicates that Sen1 can enhance RNAPIII release at weak terminators. Interestingly, we found that Sen1 could also promote the release of RNAPIIIs that are paused at sites other than the T-tract (figures 5C-D and S5).

Figure 5 (legend, panel A): A) Scheme of an in vitro transcription termination (IVTT) assay. Ternary elongation complexes (ECs) composed of purified RNAPIII, nascent RNA, and DNA transcription templates are assembled by step-wise addition of the different components (see methods) and associated with streptavidin beads via the 5ʹ biotin of the non-template strand to allow subsequent separation of beads-associated (B) and supernatant (S) fractions. The RNA is radioactively labeled at the 5' end to allow its detection (indicated by an asterisk). Each transcription template contains a T-tract of a particular length on the non-template strand. After addition of nucleotides, RNAPIII transcribes and can pause at different positions, including the T-tract. RNAPIIIs that pause at a T-tract can either dissociate from the DNA to the supernatant (i.e. undergo transcription termination) or remain paused, and thus associated with the beads, or resume transcription. Polymerases that read through the T-tract and reach the end of the template can either run off, with concomitant release of full-length transcripts into the supernatant, or perform iterative synthesis. For transcripts associated with paused RNAPIIIs, the comparison of the fraction that is retained in the beads with the fraction that is released provides an estimate of the efficiency of termination at each site.

Sen1 employs a similar mechanism to terminate transcription of RNAPII and RNAPIII
According to previous studies, canonical terminators contain signals that induce both RNAPIII pausing and release from the DNA (reviewed in Arimbasseri et al., 2013 and Porrua et al., 2016). The above results indicate that Sen1 requires polymerase pausing but not necessarily the presence of a T-tract for terminating RNAPIII. To further explore this idea, we performed in vitro transcription assays with modified templates containing a G-less cassette followed by a run of Gs to force the stalling of RNAPIII at the G-stretch in the absence of GTP (figure 6A). In these conditions, and similarly to what was observed for RNAPII, Sen1 could induce the release of roughly 50% of paused RNAPIIIs, demonstrating that it can terminate transcription at pausing sites other than T-tracts. We next set out to investigate the mechanism by which Sen1 induces RNAPIII transcription termination. We have previously shown that in order to dissociate the RNAPII elongation complex, Sen1 needs to load on the nascent RNA and translocate towards the RNAPII using the energy of ATP hydrolysis. Upon colliding with RNAPII, Sen1 can induce forward translocation of the polymerase, which, in the appropriate context, results in its release from the DNA (Han et al., 2017).
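The figure 5 legend above defines how termination efficiency is estimated in these assays: transcripts bound to polymerases that remain paused stay in the beads (B) fraction, whereas transcripts from released polymerases appear in the supernatant (S). A minimal sketch of this quantification from band intensities is given below; the intensity values are invented, and the calculation is simply released / (released + retained) at a given position.

# Hypothetical band intensities (arbitrary units) quantified from the gel
# at a given pausing position (e.g. within a T4 tract), with and without Sen1.
quantified = {
    #             beads (B), supernatant (S)
    "no_Sen1":    (900.0,     100.0),
    "plus_Sen1":  (400.0,     600.0),
}

def termination_efficiency(beads, supernatant):
    """Fraction of paused polymerases released at this position."""
    return supernatant / (beads + supernatant)

for condition, (b, s) in quantified.items():
    print(f"{condition:10s} termination efficiency = {termination_efficiency(b, s):.0%}")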
Our in vitro assays support a similar mechanism for RNAPIII release (figure 5C), with evidence for "pushing" the elongation complex at sites of pausing. This is for instance manifest at the T5 terminator (figure 5C, compare lanes 5-6 to 7-8) where a decrease in the pausing signal at position -2 is not due to release at this position but rather by forward translocation and release at a downstream site. This is best illustrated in a time-course experiment performed with the T4 template, in which we quantified the total signal in the presence and absence of Sen1 (figure 6B,right). If a decrease in the signal from a paused polymerase is due to its release, the total signal (i.e. beads + supernatant) at that position should not change. On the contrary, if polymerases are "pushed" by Sen1 and eventually released at a later stage the signal distribution should be shifted downward. Indeed, upon addition of Sen1 we observe such a signal shift as well as the accumulation of RNA signal over time at positions where Sen1 induces its release. These findings support the notion that Sen1 promotes both RNAPIII translocation and its release from the DNA, similarly to what we previously showed for RNAPII. A) Analysis of the capacity of Sen1 to promote the release of RNAPIII paused at a sequence other than a T-tract. The transcription templates contain a G-less cassette followed by a stretch of Gs so that in the absence of G in the nucleotide mixture RNAPs are stalled at the first G. Experiments were performed in parallel with purified RNAPII as a positive control for Sen1 activity. The efficiency of transcription termination observed for RNAPII is similar as in our former studies (Han et al., 2017;Leonaitė et al., 2017). B) Time-course IVTT assay performed on transcription templates containing a T4 terminator. All the reactions were performed in parallel but were migrated on different gels. Left: Representative gel of one out of two independent experiments. Right: Profile of the total signal (beads and supernatant) over the region of interest. C) Analysis of the role of different protein regions and activities of Sen1 in RNAPIII transcription termination. Left: Scheme of the different proteins used in IVTT assays and summary of the relevant phenotypes. The different variants of Sen1 helicase domain (HD) were purified and characterized within the frame of a previous study (Leonaitė et al., 2017). A constitutively active version of the Upf1 HD that possess helicase activity but cannot induce RNAPII transcription termination in vitro is used as a negative control (see Porrua and Libri, 2013). The Dprong version of Sen1 HD contains a deletion of amino acids 1461-1554, which corresponds to most of this subdomain. Right: Representative gel of one out of two independent experiments. All the reactions were performed in parallel but were migrated on different gels. To further explore the mechanisms of RNAPIII termination by Sen1, we first assessed whether the interaction of Sen1 with RNAPIII, mediated by its N-terminal domain, is required for the actual step of polymerase release. To this end we first analysed the capacity of the helicase domain of Sen1 alone to induce termination in vitro. We have previously shown that this domain is sufficient for inducing termination of RNAPII transcription (Han et al, 2017, Leonaite et al, 2017). 
Strikingly, we found that the helicase domain of Sen1 could induce termination of RNAPIII transcription in vitro as efficiently as the full-length protein (figure 6C), suggesting that the association of Sen1 with RNAPIII via its N-terminal domain is not a strict requirement for termination but might rather play a role in the recruitment of Sen1 to RNAPIII in vivo. As a negative control, we assessed a catalytically active version of the closely-related helicase Upf1 [START_REF] Chakrabarti | Molecular mechanisms for the RNA-dependent ATPase activity of Upf1 and its regulation by Upf2[END_REF], which could not provoke termination of RNAPIII transcription, indicating that termination is not induced unspecifically by any active helicase but rather requires specific Sen1 activities or features (figure 6C). Finally, we analysed several mutant variants of the Sen1 helicase domain that are deficient for RNA binding (N1413S T1568A) or ATP hydrolysis (R1820Q), or a mutant that retains the catalytic activity but lacks the "prong" domain (i.e. D1461-1554), which is essential for viability and for RNAPII transcription termination (Leonaitė et al., 2017). Importantly, none of these mutants could promote RNAPIII transcription termination in vitro, indicating that Sen1 employs the same structural features and activities to induce transcription termination of RNAPII and RNAPIII. RNA structures upstream of T-tracts can promote the release of paused RNAPIIIs The above results indicate that, akin to the RNAPII system, Sen1-mediated termination of RNAPIII transcription involves Sen1 translocation along the nascent transcript, and our former structural and biochemical data showed that Sen1 can only interact with single-stranded RNA (Porrua andLibri, 2013b, Leonaite et al, 2017). tRNAs are highly structured RNA molecules and for a vast majority of them (i.e. 251 out of 270 tRNAs) the spacer between the 3' end of the mature tRNA and the primary terminator is at most 7 nt. We envisioned that a possible reason for which Sen1 does not function at sites of primary termination is that its binding to the nascent RNA is hindered by the co-transcriptional formation of stable structures in the vicinity of the primary terminator. Conversely, less structured RNAs in the read-through region would allow Sen1 loading and function. To explore these possibilities we performed in vitro transcription assays with modified transcription templates containing a natural hairpin from the 5S RNA, an RNAPIII-dependent transcript, upstream of T-tracts of different lengths. As a control we used a mutated hairpin with substitutions in the left arm preventing stem formation (figure 7A-C and S5). Surprisingly, the presence of a hairpin in the transcribed RNA could significantly stimulate transcription termination at a T4 terminator, similarly to the addition of Sen1 to the unstructured version of the same RNA (figure 7A). This result indicates that not only Sen1 but also RNA secondary structures can improve the function of weak terminators. In agreement with this idea, the presence of the hairpin did not enhance termination at the T6 terminator, since this sequence already supports full release of paused RNAPIIIs (figure 7B). However, the RNA structure could induce the release of polymerases paused at the proximal part of the T12 terminator, even more efficiently than Sen1 (figure 7C). These observations support the notion that, similarly to Sen1, RNA hairpins have the capacity to promote RNAPIII release. 
Because RNA structures naturally form close to the primary terminator of RNAPIII-dependent genes, we next assessed to what extent secondary structures need to be in proximity to T-stretches to function in termination. To this end, we compared the efficiency of termination on templates containing a T4 or a T12 terminator when the hairpin was located immediately upstream (figure 7A-C), 7 nt or 18 nt upstream of the corresponding T-tract (figure 7D-F). We observed a clear enhancement of RNAPIII release at both T4- and T12-containing templates when the hairpin was located immediately upstream or 7 nt away from the T-tract, but not in the presence of an 18 nt spacer. These results indicate that RNA structures can enhance transcription termination only when they are in close proximity to T-tracts. The results with the 18 nt spacer provided a tool to address the impact of secondary structures of the RNA on the function of Sen1 in termination. Indeed, in this case, formation of a hairpin 18 nt upstream of the T-tract only allows exposing a segment of roughly 7 nt of single-stranded RNA outside of the RNAPIII, which is not sufficient for the loading of Sen1 (Leonaitė et al., 2017). Importantly, Sen1 could not release RNAPIII at the T4 terminator in this construct, likely because of the insufficient single-stranded RNA span between the structure and the polymerase. We observed, however, an increase in the amount of released run-off transcripts in the presence of Sen1, indicating that at further downstream positions Sen1 can load onto the nascent RNA and promote the release of polymerases at the end of the template (figure 7E, lanes 5-8). Taken together, our results strongly suggest that RNA secondary structures forming in the vicinity of weak primary terminators can markedly improve their function. However, they can also hamper the recruitment of Sen1 to the nascent RNA and, thus, would likely prevent Sen1 from functioning at primary terminators, regardless of their strength.

A-C) Analysis of the role of RNA structures in transcription termination at a T4 (A), T6 (B) or a T12 (C) terminator. IVTT assays as in figure 5 but with modified templates to introduce a hairpin (HP) in the transcribed RNA immediately upstream of the T-tract. The control template (HPmut) harbors several mutations at the left arm that disrupt hairpin formation (see figure S4 for sequence details and structure prediction of the resulting RNAs). D-F) Analysis of the impact of the positioning of the hairpin relative to the T-tract on its capacity to stimulate RNAPIII release. Experiments performed as in A) and C) but with modified templates to introduce a spacer (S) of the indicated lengths between the hairpin and the T-tract (see figure S4 for details). D) Scheme showing the predicted position of the hairpin relative to RNAPIII in the presence of the indicated spacers. IVTT assays with templates containing either a T4 terminator (E) or a T12 terminator (F). G) Functional dissection of a transcription terminator composed of an RNA hairpin and a stretch of 4 Ts (T4). IVTT assays performed as in A) but with modified templates (see figure S4 for details) to include an A-less cassette followed by a stretch of As so that, in the absence of A in the nucleotide mixture, RNAPs are stalled at the first A. In the HP template the T-tract is mutated to CTCT. Experiments were performed in parallel with purified RNAPII to compare the sensitivity of both RNAPs to termination signals. H) Model for the role of canonical termination signals, RNA structures and Sen1 in termination of RNAPIII transcription (shown for tRNA genes). At the primary terminator, termination typically involves the action of a T-tract and the secondary structure of the nascent tRNA. RNA structures are required only for T-tracts of sub-optimal length. In downstream regions transcription by RNAPIII is typically terminated either by "strong" secondary terminators, without the aid of Sen1, or by "weak" termination signals if Sen1 can access and load onto the nascent RNA. Sen1 can in principle also promote termination at sites of pausing other than T-tracts.

RNA secondary structures can form within RNAPIII

A previously proposed model (Nielsen et al., 2013) posits that T-tracts are sufficient for RNAPIII pausing but not for its release from the DNA, for which RNA secondary structures would be strictly required. This model opposes the most widely accepted one, which points to an exclusive role for the T-tract in termination. Our data indicate that secondary structures can promote RNAPIII release but only at defective terminators. We therefore decided to further investigate the mechanism of action of RNA structures in termination. The model of Nielsen et al. postulates that one of the main functions of T-tracts would be to promote RNAPIII backtracking to bring the polymerase in contact with the nearest upstream structure, which would invade the RNA exit channel of RNAPIII, thus destabilizing the elongation complex. Nonetheless, we have observed that the RNA hairpin is functional when located immediately upstream of or very close to a T4 or a T12 sequence, which implies either that i) RNAPIII transcribes beyond the T-tract to allow formation of the hairpin, pauses at downstream sequences and undergoes subsequent backtracking; or that ii) the RNA folds at least partially within the polymerase to induce its release. The former possibility appears to be hardly compatible with the case of the T12 terminator, for which we did not find any evidence of RNAPIII transcribing through the terminator (figure 5D). To address these possibilities, we conducted in vitro transcription assays with modified templates where the RNA hairpin is encoded in an A-less cassette and is followed by a T4 sequence and three As (figure 7G and S5). By performing reactions with these templates in the absence of ATP, polymerases cannot transcribe through the terminator and stall at the fourth T of the T4. In these conditions, the hairpin cannot form outside of the polymerase, its downstream arm still being within the polymerase. Interestingly, we observed that stalled RNAPIIIs were released in the presence of the hairpin but not of the corresponding mutant version, indicating that transcription through the terminator is not required for the folding of the hairpin, which must occur in the RNA exit channel of RNAPIII. Importantly, very little, if any, polymerase release was observed when the T4 sequence was mutated, even in the presence of the hairpin, indicating that the hairpin is not an autonomous termination signal and can only function together with a canonical termination sequence. Finally, the concomitant presence of an RNA hairpin and a T-tract induced poor release of RNAPIIs, indicating that these nucleic acid elements function specifically as RNAPIII termination signals. These findings strongly support the notion that the RNA can fold within the RNAPIII and that backtracking is not required to promote RNAPIII termination.
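The geometric argument behind the spacer experiments above (figure 7D-F) can be made explicit with a small back-of-the-envelope calculation. It assumes that RNAPIII sequesters roughly 11 nt of nascent RNA, a value inferred here only from the statement that an 18 nt spacer exposes about 7 nt of single-stranded RNA, and it takes the minimal single-stranded stretch required for Sen1 loading to be somewhat longer than 7 nt; both numbers are illustrative assumptions rather than measured constants from this study.

```python
# Back-of-the-envelope sketch of the hairpin/spacer geometry discussed above.
# ASSUMPTION: RNAPIII buries ~11 nt of nascent RNA (inferred from the text:
# an 18 nt spacer leaves ~7 nt of exposed single-stranded RNA). The minimal
# single-stranded stretch needed for Sen1 loading is taken as >7 nt, since
# Sen1 is inactive on the 18 nt spacer template; both values are illustrative.

RNAPIII_RNA_FOOTPRINT = 11   # nt of nascent RNA sequestered by the polymerase (assumed)
SEN1_MIN_LOADING_SITE = 8    # nt of exposed ssRNA assumed necessary for Sen1 loading

def exposed_ssrna(spacer_len: int) -> int:
    """Single-stranded RNA between the hairpin base and the polymerase."""
    return max(0, spacer_len - RNAPIII_RNA_FOOTPRINT)

for spacer in (0, 7, 18):
    exposed = exposed_ssrna(spacer)
    sen1_ok = exposed >= SEN1_MIN_LOADING_SITE
    print(f"spacer {spacer:>2} nt: ~{exposed} nt exposed ssRNA; "
          f"Sen1 loading upstream of the T-tract "
          f"{'possible' if sen1_ok else 'unlikely'}")
```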
Together, our data comfort the notion that RNA secondary structures are not absolutely required for RNAPIII termination, but can nevertheless function as auxiliary elements that work in concert with weak or defective termination signals. Discussion RNAPIII synthetises short ncRNAs like tRNAs and the 5S rRNA that are absolutely essential for mRNA translation and, therefore, for cell growth and survival. Timely termination of RNAPIII transcription is critical not only for the correct synthesis of these RNAs but also for preventing RNAPIIIs from invading neighbouring genomic regions and interfering with the function of other machineries that operate in these regions. This is even more relevant considering the high expression levels of RNAPIII-dependent genes. The traditional model for termination of RNAPIII transcription posits that termination exclusively relies on the presence of T-tracts at the 3' end of RNAPIII-dependent genes that are specifically recognized by the polymerase as termination signals. The implication of additional cis-acting factors, such as RNA secondary structures has been previously proposed (Nielsen et al., 2013) but has remained hitherto controversial (Arimbasseri et al., 2014). Here we show that the mechanisms governing RNAPIII transcription termination in vivo are considerably more complex than those represented in former models, involving the interplay between distinct cis-acting elements and the extrinsic termination factor Sen1. We propose an integrated model whereby T-tracts and RNA secondary structures function in concert at primary terminators (and possibly other sites) while Sen1 concurs to release polymerases that have escaped "intrinsic" termination. S. cerevisiae Sen1 is a fail-safe transcription termination factor for RNAPIII One of the important findings of this study is the demonstration that Sen1 can directly promote termination of RNAPIII transcription, a conclusion that is supported both by genome-wide data and by in vitro transcription termination assays with purified components. However, multiple lines of evidence support the notion that Sen1 functions to remove polymerases that have escaped primary termination at the very 3'-end of RNAPIII-dependent genes. These include the high-resolution detection of RNAPIII occupancy by CRAC as well as the analysis of the different RNA species produced from model tRNA genes (figures 2, 4, S2 and S4). A mechanistic explanation for our observation that Sen1 cannot operate at primary terminators is provided by our finding that in vitro Sen1 function in termination is hindered by RNA secondary structures, which are typically present in RNAPIII transcripts close to the first terminator. A recent study has reported that one of the two Schizosaccharomyces pombe homologues of Sen1, Sp Sen1, also interacts with RNAPIII and its deletion leads to global defects in RNAPIII transcription termination (Rivosecchi et al., 2019). However, some important conclusions of this study are substantially different from the ones supported by our results. It was shown that deletion of Sp Sen1 leads to a global downstream shift of the RNAPIII occupancy peak at tRNA genes, as determined by ChIP-seq, and a reduction in the levels of mature tRNAs, which we did not observe in S. cerevisiae. These findings have been interpreted in support of a model whereby efficient primary termination in S. pombe relies on Sp Sen1 and would be only partially dependent on intrinsic termination signals. In contrast, in S. 
cerevisiae primary termination mainly depends on cis-signals (T-tracts and secondary structures) and Sen1 operates in downstream regions to remove read-through polymerases. Therefore, in S. cerevisiae Sen1 rather plays an important genome safeguarding role in preventing inappropriate extension of RNAPIII transcription. The divergency between these models might be due to the different resolution of the techniques employed in the two studies (e.g. CRAC and ChIP-seq) but might also reflect mechanistic differences between two organisms. For instance, differences in the biochemical properties of the two Sen1 proteins or in the mode they are loaded onto the nascent transcript might be at stake. Indeed, substantial sequence homology between the two proteins can be found only in their helicase domains and, contrary to S. cerevisiae Sen1, none of the S. pombe Sen1 homologues is essential for viability, interacts with RNAPII or partakes in RNAPII transcription termination (Larochelle et al., 2018;Rivosecchi et al., 2019). Importantly, despite the possible mechanistic differences, the fact that the two Sen1 orthologues play a role on RNAPIII transcription opens up the possibility that this function is conserved in other organisms. The mechanism of Sen1-dependent RNAPIII transcription termination Although its best-characterized function is the termination of non-coding transcription by RNAPII within the NNS-complex, S. cerevisiae Sen1 is also implicated in other processes like the control of R-loop formation, the resolution of transcription-replication conflicts and DNArepair (Alzu et al., 2012;Appanah et al., 2020;Li et al., 2016;Mischo et al., 2011). The Nterminal domain of Sen1 is an important hub for protein-protein interactions that might modulate these different functions of Sen1 (Appanah et al., 2020;Han et al., 2020). The function of Sen1 in RNAPIII termination in vivo strongly depends on the interaction with RNAPIII, which is mediated by a region in the Sen1 N-terminal domain containing the amino acids mutated in Sen1-3 (W773, E774 and W777). This region is not required for termination in vitro indicating that it is not a critical molecular determinant of the process of RNAPIII release, and therefore we suggest it drives the recruitment of Sen1 to RNAPIII, which might be a limiting step for termination in vivo. Mutation of the same amino acids in Sen1-3 also prevents the interaction with the replisome components Ctf4 and Mrc1 (Appanah et al., 2020), yet we show that the interactions of Sen1 with the replisome and RNAPIII are not interdependent but rather mutually exclusive. This suggests either that the same surface mediates the interaction with RNAPIII and the replication fork, or that these mutations alter the conformation of two distinct regions of interaction. The observation that Sen1 promotes RNAPIII transcription termination even in the absence of replication (i.e. in G1) indicates that the action of Sen1 is not restricted to situations of transcription-replication conflicts. However, we cannot exclude that such conflicts might trigger Sen1-dependent termination in some circumstances. Our in vitro data strongly support the notion that Sen1 terminates RNAPIII transcription essentially by the same mechanism employed to induce RNAPII release, for which we have previously provided a detailed molecular model (Han et al., 2017;Leonaitė et al., 2017;Porrua and Libri, 2013b). 
We have shown that Sen1 translocates along the nascent RNA and induces a forward motion of paused RNAPII that results in its release from the DNA. The helicase domain of Sen1 retains all the properties that are necessary for transcription termination, and we proposed that a particular subdomain protruding from the helicase core (the "prong") enters the RNA exit channel, provoking destabilizing conformational changes in the elongation complex (Leonaitė et al., 2017; discussed in Han and Porrua, 2017). We show here that the helicase domain is also sufficient for RNAPIII transcription termination and that the essential activities involved in translocation (RNA binding and ATP hydrolysis) as well as the "prong" are required. Akin to what was shown for RNAPII, Sen1 "pushes" paused RNAPIII, which either promotes elongation resumption or results in its release from the DNA (figure 6). Whether the outcome of "pushing" (elongation or termination) is determined by alternative, pre-existing conformations of paused RNAPIII or is stochastic remains to be determined.

A former study reported transcription termination defects at RNAPI-dependent genes upon inactivation of Sen1 in vivo (Kawauchi et al., 2008). The interpretation of these data is blurred by the fact that Sen1 inhibition can have multiple indirect effects due to its widespread role in termination of RNAPII transcription. However, we have found that Sen1 associates with RNAPI in vivo (table 1) and can also promote the release of paused RNAPI in vitro (figure S6). Therefore, altogether these data could point to a common mechanism of transcription termination operating at the three eukaryotic RNAPs and relying on the helicase Sen1.

RNA structures are enhancers of canonical termination signals

The two fundamental steps in RNAPIII transcription termination are RNAPIII pausing and release from the DNA. The most widely accepted model (Arimbasseri et al., 2013) posits that a stretch of Ts in the non-template DNA strand is sufficient for both pausing and release of RNAPIII. An alternative model was proposed by Nielsen and co-authors (Nielsen et al., 2013), according to which, while T-tracts can promote RNAPIII pausing, an RNA hairpin in the vicinity of the paused RNAPIII is the main determinant for the dissociation of the polymerase from the DNA. These fundamental disparities were attributed to differences in the purity of the RNAPIII preparations employed in the studies supporting these models (Arimbasseri et al., 2014; [START_REF] Nielsen | Transcription. Response to Comment on "Mechanism of eukaryotic RNA polymerase III transcription termination[END_REF]). Here we use a high-purity preparation of the RNAPIII holoenzyme validated in structural and functional analyses (Hoffmann et al., 2015; Vorländer et al., 2018) to investigate the different mechanisms involved in RNAPIII transcription termination. We find that the capacity of T-tracts to promote RNAPIII pausing is directly linked to the T-tract length, with T4 terminators supporting very little pausing and T≥9 terminators inducing a complete block of RNAPIII elongation (figure 5C-D). Our results show that T6 terminators, which are the most frequently found in vivo (figure 4F), are not fully efficient in supporting pausing but can induce RNAPIII release in the absence of any adjacent RNA structure (figures 5C and 7B), indicating that RNA secondary structures are not always required for termination.
In contrast, in the case of T4 terminators, which are essentially nonfunctional in vitro, we find that an adjacent RNA secondary structure can convert these sequences into moderately efficient terminators (figure 7A). This behaviour likely phenocopies the situation in vivo where the tRNA acceptor stem typically folds very close to the primary terminator and might explain why T4 terminators can be found as primary terminators (figure 4F). Although some tracts of 4 Ts are separated by only 1-2 nt from downstream T-tracts and, thus, could be part of longer interrupted termination signals, more isolated T4 terminators appear to function independently (figure S7) and likely in concert with native RNA secondary structures. Strikingly, in our assays, very long T-tracts (T≥9) are defective in promoting RNAPIII release, and these defects are more pronounced as the length of the T-tract increases. More precisely, we observe that a fraction of RNAPIIIs stall at the proximal portion of these long Ttracts after "reading" only the first 3-6 nt of the T-tract and fail to dissociate from the DNA. Our interpretation of these observations is that RNAPIII can recognize the T-tract in the downstream duplex region, either because of its sequence or because of the particular structure T-tracts impose to the DNA helix [START_REF] Stefl | DNA A-tract bending in three dimensions: solving the dA4T4 vs. dT4A4 conundrum[END_REF]. Such interactions would stabilize the EC, thus compensating for the destabilizing effect of the weak rU-dA hybrid and the interaction with the unpaired thymidines in the non-template strand within the transcription bubble. Interestingly, we find that an RNA hairpin forming in the vicinity of these long T-tracts can promote full release of stalled RNAPIIIs (figure 7C), suggesting that in vivo long T-tracts might require the concomitant presence of an adjacent secondary structure to be fully proficient in transcription termination. We provide mechanistic evidence that RNA hairpins can form within the RNA exit channel of RNAPIII (figure 7G) to promote termination. Consistent with this finding, a recent structural study has provided evidence that an RNA hairpin can fold within the RNA exit channel of a bacterial polymerase, leading to a rearrangement of the EC [START_REF] Kang | RNA Polymerase Accommodates a Pause RNA Hairpin by Global Conformational Rearrangements that Prolong Pausing[END_REF]. Interestingly, structural comparisons indicate that eukaryotic polymerases can also accommodate such RNA secondary structures within their RNA exit channels [START_REF] Kang | RNA Polymerase Accommodates a Pause RNA Hairpin by Global Conformational Rearrangements that Prolong Pausing[END_REF]. Indeed, a very recent structural study on human RNAPI has provided evidence for the presence of double-stranded RNA in the RNA exit channel (Misiaszek et al, https://doi.org /10.1101/2021.05.31.446457). Therefore, we propose that, in the case of RNAPIII, the formation of an RNA hairpin can induce destabilizing conformational changes in the RNAPIII that would contribute to the dissociation of the EC. Importantly, unlike Nielsen and co-authors (Nielsen et al., 2013), we find that an RNA hairpin can only promote efficient release of paused RNAPIIIs when a T-tract resides in the polymerase main channel (figure 7G). 
While this work was in progress, a study using a reporter system in human cells provided evidence that an RNA hairpin located close to a short T-tract (T4) can enhance RNAPIII transcription termination in cellulo (Verosloff et al., 2021), pointing to an evolutionarily conserved role for RNA structures in termination. Taken together, our results allow proposing a revisited model for autonomous RNAPIII transcription termination that can partially reconcile former contradictory findings. According to our model, T-tracts are strictly required for termination, but adjacent RNA structures are important auxiliary elements when the length of the T-tract falls outside of the optimal range. Thus, the protein-independent mechanism of termination of RNAPIII transcription has more commonalities with the so-called intrinsic termination pathway for bacterial RNAP than previously appreciated. Multiple mechanisms partake in RNAPIII transcription termination We and others have observed that RNAPIIIs read through the primary terminator quite frequently and termination at downstream regions was proposed to rely on secondary canonical terminators (Turowski et al., 2016). Indeed, tRNA read-through regions contain Ttracts that are more frequent in the sense orientation than in the antisense orientation (figure S8A-B), suggesting they are under positive selection. However, long T-tracts (T>5) are scarce in these regions (figure 4I), suggesting that alternative evolutionary routes have been undertaken for ensuring efficient termination. We have shown that both RNA structures and the helicase Sen1 can complement the function of short termination signals. These two factors act in a mutually exclusive manner because: i) both employ a similar mechanism likely involving a conformational change initiated at the level of the RNA exit channel of RNAPIII; and ii) the presence of secondary structures in the nascent transcript prevents Sen1 loading. Our data support the idea that Sen1 would play a more prominent role in termination at read-through regions than RNA structures. A possible explanation is that Sen1 can function both at weak terminators and at other pausing sites, while RNA structures can only work when located sufficiently close to a T-tract. We have observed that the transcripts encoded in the ~250 bp region immediately downstream of the primary terminator have a lower propensity to fold into secondary structures than the genomic average (figure S8C-E). While this could be partially due to the higher frequency of T-tracts in this region, which lowers the GC content, it might also be a consequence of Sen1 involvement in fail-safe termination. We suggest that "repurposing" the RNAPII transcription termination factor Sen1 for terminating RNAPIII might have a lower evolutionary cost than generating the appropriate arrangements of T-tracts and RNA structures in tRNA read-through regions. These considerations do not exclude the possibility that more than one mechanism operate in secondary termination for the same gene. This is for instance illustrated by the tH(GUG)G2 gene (figure S2B), where a secondary T8 terminator is present 60 bp downstream of the primary terminator. Termination at this site is independent of Sen1 most likely because a strong secondary structure forms immediately before T8. The fraction of RNAPIII that escape termination at this site terminates at downstream sites, in the apparent absence of strong secondary structures, in a Sen1-dependent manner. 
Taken together, our findings reveal the existence of multiple mechanisms cooperating to promote termination of RNAPIII transcription. We propose that RNA structures contribute to the efficiency of primary termination in some instances (i.e. genes with suboptimal terminators), thanks to the natural proximity of the tRNA acceptor stem to the first T-tract, whereas Sen1 would preferentially function at downstream regions (figure 7H). Efficient termination is important for the rapid recycling of RNAPIII for new cycles of transcription and, thus, for maintaining robust expression of tRNAs and other RNAPIII-dependent transcripts that are essential to sustain cell proliferation. It is also crucial to prevent or minimize conflicts with other transcribing polymerases as well as with other DNA-associated machineries.

The RNA probe is indicated in red, while DNA oligonucleotide probes are indicated in black (more details in table S5). The 5.8S rRNA is used as a loading control. B) [Figure panels: RNAPIII CRAC signal (whole reads and 3' ends) over the tI(AAU)D and tV(CAC)H genes in WT and sen1-3 strains, with T-tract positions (T4, T5, T6) and pause positions relative to the terminator indicated; signal ranges 0-600 and 0-12,000.] [Schemes of the transcription templates used in IVTT assays: 1) no hairpin, with T4, T5, T6, T9 or T12 terminators (figures 5 and 6); 2) hairpin (HP) with a spacer (S7 or S18) and T4, T6 or T12 terminators (figure 7E-F); 3) hairpin (HP), A-less cassette, T4 terminator and A-tract (figure 7E). Datasets correspond to the WT strain.] E) Combined representation of the GC content and the ΔG of the different regions coloured according to the distance from the primary terminator. Closer regions (blue) tend to be less GC-rich and less structured while further regions (red) tend to be more GC-rich and structured.

Methods

Construction of yeast strains and plasmids

All the strains used in this paper are listed in table S6.
Tagging of RPC160 with the HTP-tag was performed with standard procedures [START_REF] Longtine | Additional modules for versatile and economical PCR-based gene deletion and modification in Saccharomyces cerevisiae[END_REF][START_REF] Rigaut | A generic protein purification method for protein complex characterization and proteome exploration[END_REF] using plasmid pDL599. Plasmid pDL995 for expression of recombinant Sen1 in insect cells was constructed using the SLIC (sequence and ligation-independent cloning) method [START_REF] Li | SLIC: a method for sequence-and ligation-independent cloning[END_REF]. Co-immunoprecipitation (Co-IP) For immunoprecipitation of proteins expressed under their own promoter cells were grown on YPD medium. For proteins expressed under the control of the GAL1 promoter (i.e. fulllength Sen1, Sen1 DNTD and the N-terminal domain of Sen1), cells were grown on rich medium containing 20 g/L of galactose instead of glucose as the carbon source. Cultures (typically 250 mL) were grown to OD 600 ~1 and then cells were collected by centrifugation and resuspended in 1.5 mL of lysis buffer (10 mM sodium phosphate pH 7.5, 200 mM sodium acetate, 0.25% NP-40, 2 mM EDTA, 1 mM EGTA, 5% glycerol) containing protease inhibitors. Suspensions were frozen in liquid nitrogen and lysed using a Retsch MM301 Ball Mill (5 cycles of 3 minutes at 15 Hz). Lysates were clarified by centrifugation at 13 krpm for 30 min at 4°C and, unless otherwise indicated, treated with 20 µg/mL RNase A for 20 min at 25°C prior to immunoprecipitation. For HTP-tagged proteins, the extracts were then incubated with 2.5 mg of IgG-coupled M-280 tosylactivated dynabeads (Thermo Fisher) for 2 h at 4°C with rotation. After incubation, beads were washed three times with lysis buffer and once with H 2 O and used directly in mass spectrometry analyses. For proteins overexpressed from the GAL1 promoter IgG sepharose (GE HEathcare) was used instead. After washes with lysis buffer, beads were washed with TEV cleavage buffer Oxidation (M), Phosphorylation (STY). Spectra were filtered using a 1% FDR using the percolator node. UV crosslinking and analysis of cDNA (CRAC) The CRAC protocol used in this study is derived from Granneman et al. (Granneman et al., 2009) with several modifications as previously described (Candelli et al., 2018) The cDNAs were amplified by PCR using LA Taq polymerase (Takara), and then, the PCR reactions were treated with 200 U/mL of Exonuclease I (NEB) for 1 h at 37°C. Finally, the DNA was purified using NucleoSpin columns (Macherey-Nagel) and sequenced on a NextSeq 500 Illumina sequencer. Synchronization of cells in G1 and analysis by flow cytometry 2L of cells were synchronized in the G1 phase of the cell cycle by adding 4 mg of a-factor. To maintain cells in G1, 8 mg and 4 mg of a-factor were subsequently added to the culture after 1h and 2h of incubation at 30°C, respectively. Cells were collected and processed 1h after the last addition of a-factor. To analyse the DNA content of synchronized cells, 2 mL of culture were collected at different time points and cells were harvested by centrifugation. Cell pellets were resuspended in 50 mM sodium citrate buffer and treated with RNase A (QIAGEN) for 2 hours at 50°C, followed by proteinase K (Sigma) treatment for 2 hours at 50°C. Cell aggregates were then dissociated by sonication and 40 µL of cell suspension were incubated with 170 µL of 50 mM sodium citrate buffer containing 0.5 µM Sytox Green (Invitrogen). 
Data were acquired on a MASCQuant Analyzer (Miltenyi Biotec) and analyzed with FlowJo Software. Dataset processing CRAC reads were demultiplexed using the pyBarcodeFilter script from the pyCRACutility suite [START_REF] Webb | PAR-CLIP data indicate that Nrd1-Nab3-dependent transcription termination regulates expression of hundreds of protein coding genes in yeast[END_REF]. Next, the 5ʹ adaptor was clipped with Cutadapt and the resulting insert quality-trimmed from the 3ʹ end using Trimmomatic rolling mean clipping [START_REF] Bolger | Trimmomatic: a flexible trimmer for Illumina sequence data[END_REF]. We used the pyCRAC script pyFastqDuplicateRemover to collapse PCR duplicates using a 6nucleotide random tag included in the 3ʹ adaptor. The resulting sequences were reverse complemented with the Fastx reverse complement that is part of the fastx toolkit (http://hannonlab.cshl.edu/fastx_toolkit/) and mapped to the R64 genome with bowtie2 using "-N 1" option. Reads shorter than 20 nt were filtered out after mapping and coverage files were generated and normalized to counts per million (CPM) using the bamCoverage tool from the deepTools package [START_REF] Ramírez | deepTools2: a next generation web server for deep-sequencing data analysis[END_REF] using a bin size of 1. Bioinformatic analyses All sequence files and annotations were obtained from Saccharomyces Genome Database (S. cerevisiae genome version R64-2-1). T-tracts were annotated by first searching for sequences containing at least 4 consecutive thymines (for the plus strand) or adenines (for the minus strand) using the unix command line tool grep and then generating coordinate files by the awk command. The resulting files were then combined into a single BED file (table S7) using BEDOPS suite [START_REF] Neph | BEDOPS: high-performance genomic feature operations[END_REF] with the "everything" option. For each tRNA gene, the primary terminator was defined as the 1 st T-tract after the 3' end of the mature tRNA. Such primary terminators were identified by comparing the mentioned T-tract annotations and the tRNAs annotations with the closest tool from BEDTools [START_REF] Quinlan | BEDTools: a flexible suite of utilities for comparing genomic features[END_REF]. T-tracts falling within the 700 bp region immediately downstream of the primary terminator of each tRNA gene were identified with the BEDTools intersect tool and defined as secondary terminators (table S8). Reads mapped to different classes of RNAs were summarized by BEDTools coverage. Metagene analyses of RNAPIII occupancy were performed with deepTools suite [START_REF] Ramírez | deepTools2: a next generation web server for deep-sequencing data analysis[END_REF]. Strand-specific coverage bigwig files and modified tRNA coordinates (from the 5' end to the end of 1 st T-tract) were used as inputs for the computeMatrix tool using a bin size of 1 and the scale-regions mode. The matrices generated for each strand were subsequently combined by the computeMatrixOperations tool with the rbind option and used as inputs for the plotProfile tool to create a summary plot. For heatmap analyses the log2 ratio of the RNAPIII signal in the sen1-3 mutant relative to the WT was calculated by the bigwigCompare tool using a bin size of 1 and the corresponding bigwig coverage files as inputs. Matrices were generated and combined as for metagene analyses and the final matrix was used as the input for the plotHeatmap tool. 
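As an illustration of the T-tract annotation logic described above, the following minimal Python sketch scans a genome FASTA for runs of at least four Ts (plus strand) or four As (minus strand) and writes them as BED intervals. File names are placeholders, and the published analysis relied on grep/awk together with BEDOPS and BEDTools rather than on this code.

```python
# Minimal sketch of the T-tract annotation step described above:
# find runs of >=4 Ts (plus strand) or >=4 As (minus strand) in a genome FASTA
# and emit BED intervals. Paths are placeholders; the actual analysis used
# grep/awk together with BEDOPS and BEDTools.

import re

def read_fasta(path):
    """Yield (chromosome_name, sequence) pairs from a FASTA file."""
    name, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(chunks)
                name, chunks = line[1:].split()[0], []
            else:
                chunks.append(line.upper())
    if name is not None:
        yield name, "".join(chunks)

def annotate_t_tracts(fasta_path, bed_path, min_len=4):
    """Write T-tract coordinates as BED (0-based, half-open), one per strand."""
    patterns = [("+", re.compile(r"T{%d,}" % min_len)),
                ("-", re.compile(r"A{%d,}" % min_len))]
    with open(bed_path, "w") as out:
        for chrom, seq in read_fasta(fasta_path):
            for strand, pattern in patterns:
                for match in pattern.finditer(seq):
                    length = match.end() - match.start()
                    # Name each tract by its length (e.g. T6), as in the text.
                    out.write(f"{chrom}\t{match.start()}\t{match.end()}"
                              f"\tT{length}\t{length}\t{strand}\n")

# annotate_t_tracts("R64-2-1.fasta", "t_tracts.bed")  # placeholder file names
```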
To analyze the correlation between two replicates, the average 123 RNAPIII signal over regions comprising tRNA genes and 500 bp upstream and downstream regions was computed using the multiBigwigSummary tool. The resulting tables were used as inputs for the plotCorrelation tool to generate scatter plots and calculate the correlation coefficients using the Spearman method. To annotate tRNA genes read-through regions in the WT and the sen1-3 mutant, we first determined a threshold below which CRAC signal was considered as background signal. To do so, genomic regions corresponding to protein-coding genes, which are transcribed by RNAPII, were divided into 20 bp non-overlapping windows and the total signal was computed for each of them using normalized coverage files. The value corresponding to the 95% quantile (value below which 95% of windows values fall) was set as threshold. For each tRNA gene, the 1 kb region immediately downstream of the primary terminator was then divided into 20 bp windows with 1 bp overlap and the RNAPIII CRAC signal was computed for all of them. Contiguous windows with values above the threshold were merged and, most often, this resulted in a single read-through region for each gene. When this was not the case, we manually merged the fragmented regions that were separated by small gaps. Final annotations are provided as BED files in table S9. The efficiency of transcription termination in the WT and the sen1-3 mutant was estimated by calculating the read-through index defined as the percentage of RNAPIII signal over the read-through regions relative to the signal over tRNA gene regions. The total RNAPIII signal at each region was computed with the UCSC bigWigAverageOverBed package (http://genome.ucsc.edu) using the aforementioned annotations. Data representation and statistical analyses were performed with R using the ggplot2 and plyr (https://cran.r-project.org/web/packages/plyr/index.html) packages. RNA analyses Yeast cells were grown on 30 mL of YPD medium containing the appropriate additives, depending on the experiment, at 30°C to OD 600 0.3 to 0.6. Cells were harvested by centrifugation and RNAs were prepared using standard methods. 4 µg of total RNA were reverse-transcribed by the M-MLV reverse transcriptase (New England BioLabs) following the manufacturer specifications and using oligo d(T) and a mixture of random hexamers at 37°C for 45 min. The resulting cDNAs were analysed by quantitative PCR using the LightCycler 480 SYBR Green I Master reagent (Roche) and LightCycler 480 instrument (Roche) using primers specific for the regions to detect (table S5). For northern blot assays, typically 10 µg of total RNA were loaded onto a 10% denaturing polyacrylamide gel and separated by electrophoresis at 20 W for 2 h. RNAs were then transferred to a nitrocellulose membrane (GE Amersham Hybond TM -N + ) using a wet transfer system (Trans-Blot cell, Bio-Rad) at 100 V for 2 h at 4°C. Membranes were UV crosslinked and hybridized with the corresponding radioactively labeled probe in commercial buffer (Ultrahyb, Ambion) overnight. For abundant RNA species we employed 5' end labeled DNA oligonucleotides as probes and hybridizations and subsequent washes were performed at 42°C. For RNA species that were very poorly detected using DNA oligonucleotide probes, we employed RNA probes generated by in vitro transcription in the presence of a 32 P-UTP using the MAXIscript Kit (Ambion). 
Hybridization was then performed at 68°C overnight, and the membrane was washed two times for 15 min at 42°C with 2x SSC buffer (30 mM sodium citrate pH 7.0, 300 mM NaCl) containing 0.5% SDS and two times for 15 min at 60°C with 0.1x SSC buffer containing 0.1% SDS. After washes, blots were exposed on a phosphorimager screen and finally scanned using a Typhoon scanner (GE Healthcare). Images were analysed using the ImageQuant software (GE Healthcare).

Protein purification

RNAPIII and RNAPI were purified from Saccharomyces cerevisiae by heparin chromatography, followed by IgG-affinity chromatography and finally anion-exchange chromatography, following a previously described procedure. Purified RNAPIII and RNAPI were buffer-exchanged to storage buffer (15 mM HEPES pH 7.5, 150 mM ammonium sulfate, 10 mM DTT), concentrated to 14.9 mg/mL (RNAPIII) and 10.4 mg/mL (RNAPI), snap-frozen in liquid nitrogen and stored at -80°C. Full-length Sen1 was overexpressed from a pFL vector in insect cells (Trichoplusia ni) using an optimized synthetic gene (GeneArt, Life Technologies) and the baculovirus expression system [START_REF] Berger | Baculovirus expression system for heterologous multiprotein complexes[END_REF]. After purification, Sen1-containing fractions were concentrated using an Amicon Ultra-100 centrifugal filter (Millipore), aliquoted, flash-frozen in liquid nitrogen and stored at -80°C.

In vitro transcription termination assays

RNAPIII transcription termination assays were performed using essentially the previously described method for RNAPII [START_REF] Porrua | Transcription termination and the control of the transcriptome: Why, where and how to stop[END_REF]. Assembled elongation complexes were immobilized on streptavidin beads (from Invitrogen, 10 µL of slurry per reaction) pre-washed 4 times with TRB buffer containing 0.1% triton X-100 and then incubated at 20°C for 30 min with gentle shaking. After binding, the beads were washed with 1 volume of TRB containing 0.1% triton X-100, then with 1 volume of TRB containing 250 mM (NH4)2SO4, and finally with 1 volume of TRB. After washes, beads were resuspended in 13 µL of TRB buffer. The reaction was started by adding 7 µL of nucleotide mixture (1 mM each in TRB buffer) and incubating at 28°C for 10 min, and then stopped by the addition of 1 µL of 0.5 M EDTA. Beads and supernatant fractions were then collected separately. RNAs in the supernatant were ethanol-precipitated, resuspended in 10 µL of loading buffer containing 1x TBE and 8 M urea, and incubated at 95°C for 3 min before loading onto a 10% denaturing polyacrylamide gel. To isolate RNAs from the beads, 10 µL of loading buffer was added to the beads, the samples were boiled at 95°C for 3 min, and the recovered supernatants were kept as "beads fractions". Finally, samples were subjected to 10% denaturing PAGE, running for 1 h at 40 W in 1x TBE buffer. Gels were exposed on a phosphorimager screen overnight at -80°C and screens were scanned using a Typhoon scanner (GE Healthcare). Images were analysed using the ImageQuant software (GE Healthcare).

In this study, we provided evidence that Sen1 interacts directly with RNAPIII and that this interaction is essential for efficient RNAPIII termination in vivo. Our in vitro results showed that Sen1 can directly induce the release of RNAPIII from the DNA by the same mechanism employed in termination of RNAPII transcription. We also showed that the presence of RNA secondary structures can promote RNAPIII termination in the appropriate context in vitro. The function of Sen1 and RNA structures in terminating RNAPIII has been discussed in the manuscript.
Here I will mention some additional data and discuss some questions that are not included in the manuscript. What is the Sen1-RNAPIII interaction surface? Although the W773, E774 and W777 residues of Sen1 are essential for the interaction of Sen1 with RNAPIII, the specific surface of RNAPIII that interacts with Sen1 is unknown. Identifying this surface could provide important insights into the mechanisms of Sen1 recruitment and its function in termination of RNAPIII transcription. Furthermore, this would allow us to generate RNAPIII mutants specifically affected in the interaction of Sen1 with RNAPIII to further validate the results obtained with the sen1-3 mutant. Therefore, we tried to map the Sen1-RNAPIII interaction regions by yeast two hybrid assays. Briefly, components of the RNAPIII transcription machinery were fused to the DNA binding domain (BD) of the Gal4 transcription factor [START_REF] Flores | A protein-protein interaction map of yeast RNA polymerase III[END_REF] and transformed into a yeast strain containing a lacZ reporter gene, while Sen1 protein or its NTD were fused with the activation domain (AD) of Gal4 and transformed into a strain carrying a HIS3 reporter gene. These two strains were mated and then tested for b-galactosidase activity as well as for their resistance to the 3-AT drug (inhibitor of the HIS3 reporter gene). As shown in Figure S9, the interaction of C82 with C34, C31 with C34 and TFIIIB70 with C34 were clearly observed by both assays, consistent with former results obtained by a similar two hybrid system [START_REF] Flores | A protein-protein interaction map of yeast RNA polymerase III[END_REF]. Note that one of the two replicates for interactions with C34 did not give the expected result, possibly because of a problem with this particular clone. If the experiment were to be pursued other replicates would be obviously needed. However, we didn't detect the interaction of Sen1 or the NTD with any of the RNAPIII subunits used, or with the replisome component Ctf4 (Figure S9). The ABC23 subunit appeared to interact with Sen1 NTD only in the assay scoring for HIS3 expression but not for b-galactosidase activity, which casts a doubt on this result. Also, we know that only the interaction with RNAPIII is lost in a sen1-3 mutant, which suggests that the mediator of this interaction is unlikely to be a common RNAP subunit. While these experiments were ongoing, we learned that our collaborator, G. De Piccoli had committed a large-scale two-hybrid screen to a company that is a leader in this field, Hybrigenix, which failed to identify any interaction of Sen1 with any subunit of RNAPIII or with the replisome, which might suggest that the Sen1-Gal4 fusion protein does not fold properly or is not stable enough for these assays. Alternatively, Sen1 might associate with a composite surface of RNAPIII involving several polymerase subunits and, therefore, the interaction with separate subunits could not be efficiently detected. It is also possible that the presence of RNA might be important to stabilize this interaction as we observed a decreased level of RNAPIII-bound Sen1 upon RNase A treatment in coIP experiments (Figure 1E). Crosslinking of reconstituted Sen1-RNAPIII complexes coupled to mass spectrometry analysis might be an alternative approach to identify the precise interacting peptides of RNAPIII and Sen1. 
However, those experiments require large amounts of purified proteins, and the protein preparations we possessed for this study were only sufficient for the experiments shown in the research manuscript, which we decided to prioritize. Our collaborators from the group of C. Muller will perform in the future a structural analysis of RNAPIII-Sen1 complex, which, if successful, will inform about the interaction surfaces. How and when is Sen1 recruited to RNAPIII? As introduced before, Sen1 functions in the termination of RNAPII transcription in the context of the NNS pathway. In short, Nrd1 and Nab3 within the NNS complex are initially recruited via both the interaction with the Ser5P CTD of RNAPII and specific sequence motifs on the target RNA, which would enhance the recruitment of Sen1 (see details in section 2.5.2). Sen1 itself can also recognize the Ser5P CTD, which is important for efficient termination (Han et al., 2020). Unlike in RNAPII termination, our data showed that the action of Sen1 in termination of RNAPIII transcription is independent of the NNS complex. Moreover, RNAPIII does not have a CTD that is dynamically phosphorylated and recognized by various of factors. So far, it is not clear what kind of signal triggers the recruitment of Sen1 to the RNAPIII transcription machinery. Our genome-wide studies of RNAPIII transcription together with northern blot analyses of different tRNA species suggest that the moment when Sen1 acts in RNAPIII termination is when RNAPIIIs have read through the primary terminator, and Sen1 acts as a fail-safe transcription termination factor. Our data suggest that the activity of Sen1 in primary termination is hindered by the presence of tRNA secondary structures. However, our data do not distinguish at which moment Sen1 associates with RNAPIII. It is possible that Sen1 binds to RNAPIIIs that have escaped primary termination and paused at downstream regions. Alternatively, Sen1 could associate with RNAPIII at the very beginning of the transcription, travel along with RNAPIII and load onto the nascent RNA only when there is a sufficiently long portion of unstructured RNA available, which tends to happen at readthrough regions. In our Sen1 CRAC data (Han et al, 2020), we observe significant Sen1 signal downstream of the 3' end of RNAPIII-dependent genes but also within the body of RNAPIIIdependent genes. The latter signal is not fully reliable because highly abundant RNAs are common contaminants in this kind of experiments, though the signal in the bodies remained much higher than in the uncrosslinked controls, indicating that part of the signal observed could correspond to bona fide RNAPIII signal. Thus, a different method would be needed to map more reliably the position of Sen1 within and around class III genes, which would help us to understand if the association of Sen1 with RNAPIII occurs at early stages of transcription. Structural differences between S. cerevisiae and S. pombe Sen1 proteins: an implication in functional divergence? One of the main discoveries of our work is that S. cerevisiae Sen1 (ScSen1) can directly induce RNAPIII termination. During the course of my PhD, another study has shown that S. pombe Sen1 (SpSen1) also interacts with RNAPIII and is required for efficient termination of RNAPIII transcription (Rivosecchi et al., 2019). There are several features shared by ScSen1 and SpSen1. 
For instance, both are constituted by a large NTD and a helicase domain, though significant sequence homology can be found only in their helicase domains (Figure S10). Both proteins were shown to translocate along single-stranded nucleic acids in the 5' to 3' direction and to be able to unwind DNA:DNA and DNA:RNA duplexes in vitro (Han et al., 2017; [START_REF] Kim | The sen1+ Gene of Schizosaccharomyces pombe, a Homologue of Budding Yeast SEN1, Encodes an RNA and DNA Helicase[END_REF]). In addition, the ATP hydrolysis activity of both ScSen1 and SpSen1 was found to be essential for their function in RNAPIII termination (this thesis and Rivosecchi thesis, 2019). However, as discussed in the research manuscript, significant functional differences do exist between these two proteins. Contrary to ScSen1, neither SpSen1 nor the other Sen1 homologue in fission yeast (Dbl8) is essential for viability or for RNAPII transcription termination (Larochelle et al., 2018). Furthermore, none of the S. pombe Sen1 homologues forms a stable complex with the homologues of S. cerevisiae Nrd1 (Seb1) and Nab3 (Larochelle et al., 2018; [START_REF] Legros | RNA Processing Factors Swd2.2 and Sen1 Antagonize RNA Pol III-Dependent Transcription and the Localization of Condensin at Pol III Genes[END_REF]; [START_REF] Lemay | The Nrd1-like protein Seb1 coordinates cotranscriptional 3' end processing and polyadenylation site selection[END_REF]). With regard to their function in RNAPIII transcription termination, SpSen1 was implicated in primary termination (Rivosecchi et al., 2019), whereas our data strongly support the notion that ScSen1 functions mainly in secondary termination. What different properties of the two Sen1 proteins underlie the functional divergences between the two organisms? As mentioned before, very little if any sequence similarity can be found in the NTDs of ScSen1 and SpSen1, which appear to be protein-protein interaction hubs. Moreover, ScSen1 has an additional disordered CTD, which was shown to interact with Nrd1 and Glc7 (Figure S10; Han et al., 2020; [START_REF] Nedea | The Glc7 Phosphatase Subunit of the Cleavage and Polyadenylation Factor Is Essential for Transcription Termination on snoRNA Genes[END_REF]). Thus, do the differences in these regions outside the helicase domain underlie the functional divergence between the two proteins?

To approach these questions, it could be useful to compare the shape of the two proteins, which is closely linked with their function; so far, however, only the structure of the helicase domain of ScSen1 is available (Leonaitė et al., 2017). Very recently, an artificial intelligence (AI) program, AlphaFold, developed by Google's DeepMind, has been released, which is able to predict quite accurately a protein's structure from its amino acid sequence [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF]. An AlphaFold database has been simultaneously built containing predicted structures for the human proteome and those of 20 other key organisms. When comparing the predicted structures (Figure S11A) with Sen1 HD (Figure 2-8B), they all show an organization similar to that of Sen1 HD in the helicase core, including two RecA domains that are positioned side by side. One accessory subdomain, the "barrel", extends on the surface of RecA1. The "stalk", containing two antiparallel helices, is also readily recognizable. Another accessory subdomain, the "prong", is formed by two long antiparallel α-helices protruding from the helicase core.
Importantly, structural comparisons of ScSen1 with SpSen1 reveal distinct features in the orientations of the "stalk" and "prong", especially for the latter. In the structure of ScSen1, the "prong" is extending towards the NTD. However, in SpSen1, the "prong" is rotated about 90° toward the RecA2, which is opposite to the face encountering the polymerase. This different positioning of the "prong" in SpSen1 relative to ScSen1 should be confirmed by bona fide structural analyses, since the model provided by AlphaFold remains a prediction. However, it opens up the possibility that the different orientation of this subdomain provides different properties with respect to termination. Indeed, in S. cerevisiae the "prong" is essential for trancription termination by RNAPII (Leonaite et al., 2017) and RNAPIII (this study), and its position suggests that, upon Sen1 collision with RNAPIII, the "prong" could enter the RNA exit channel of the polymerase thus inducing conformational changes and termination. Leonaite et al. have also identified that another subdomain called "brace" (Figure 2-8 and Figure S10) is important for shaping a favourable conformation for RNA binding and unwinding by pulling the "barrel" towards the "prong". Another pronounced difference is the conformation of the NTD. As shown in Figure S11A, the NTD of ScSen1 is more extended and might be pushed by the "prong" as well. As introduced before, the NTD and CTD of ScSen1 mediate various protein-protein interactions (see section 2.5.2). On the contrary, in SpSen1, the NTD is well organized into a "clamp" shape. The difference in the NTD might result in a different way of recruitment by the RNAPIII and different mode of loading onto the nascent RNA. The information we get from these predicted structures needs, of course, to be verified. In the future, it would be interesting to test if the "prong" and NTD of SpSen1 are required for its activity in RNAPIII termination. It will also be interesting to understand in more detail the function of SpSen1 unstructured regions. The structure of HsSETX is poorly modeled, but the overall fold of the helicase domain is more similar to ScSen1, especially for the orientation of the "prong", which might indicate a conserved function between the two proteins. This result is in agreement with another model made by our collaborators Marek Sebesta and Richard Stefl (CEITEC, Czech Republic), in which the two structures fit very well when overimposed, as show in Figure S11C. Senataxin was shown to plays a role in transcription termination of at least a subset of RNAPII-transcribed genes [START_REF] Suraweera | Functional role for senataxin, defective in ataxia oculomotor apraxia type 2, in transcriptional regulation[END_REF][START_REF] Skourti-Stathaki | Human Senataxin Resolves RNA/DNA Hybrids Formed at Transcriptional Pause Sites to Promote Xrn2-Dependent Termination[END_REF][START_REF] Wagschal | Microprocessor, Setx, Xrn2, and Rrp6 Co-operate to Induce Premature Termination of Transcription by RNAPII[END_REF][START_REF] Zhao | SMN and symmetric arginine dimethylation of RNA polymerase II C-terminal domain control termination[END_REF]. 
It has been proposed that, like Sen1, Senataxin is involved in the resolution of R-loops and that this activity is important for genome stability [START_REF] Becherel | Senataxin plays an essential role with DNA damage response proteins in meiotic recombination and gene silencing[END_REF][START_REF] Skourti-Stathaki | Human Senataxin Resolves RNA/DNA Hybrids Formed at Transcriptional Pause Sites to Promote Xrn2-Dependent Termination[END_REF]Yüce and West, 2013). Senataxin has also been implicated in DNA repair [START_REF] Andrews | A senataxin-associated exonuclease SAN1 is required for resistance to DNA interstrand cross-links[END_REF][START_REF] Cohen | Senataxin resolves RNA:DNA hybrids forming at DNA double-strand breaks to prevent translocations[END_REF] and in the resolution of transcription-replication conflicts [START_REF] Richard | A SUMO-dependent interaction between Senataxin and the exosome, disrupted in the neurodegenerative disease AOA2, targets the exosome to sites of transcription-induced DNA damage[END_REF]Yüce and West, 2013). Importantly, mutations in the most conserved regions of Senataxin, namely the N-terminal and the helicase domains, are linked to two neurodegenerative disorders: amyotrophic lateral sclerosis type 4 (ALS4) and ataxia with oculomotor apraxia type 2 (AOA2) [START_REF] Chen | DNA/RNA helicase gene mutations in a form of juvenile amyotrophic lateral sclerosis (ALS4)[END_REF][START_REF] Moreira | Senataxin, the ortholog of a yeast RNA helicase, is mutant in ataxiaocular apraxia 2[END_REF][START_REF] Bennett | Unwinding the role of senataxin in neurodegeneration[END_REF][START_REF] Groh | Senataxin: Genome Guardian at the Interface of Transcription and Neurodegeneration[END_REF]. How distinct Senataxin mutations cause these diseases remains unclear. The introduction of AOA2-associated mutations at the equivalent residues of the budding yeast Sen1 provokes transcription termination defects both in vivo (Chen et al., 2014) and in vitro (Leonaitė et al., 2017), suggesting that the development of this disorder could be linked, to some extent, to termination defects. In the future, it would be interesting to understand whether Senataxin also plays an important role in termination of RNAPIII transcription and, possibly, whether a dysfunction in this process is linked to Senataxin-associated neurodegenerative disorders. The positions of the mutations in sen1-3 are also indicated in Figure S11B. The three mutated amino acids reside in a helix that is modelled with very high confidence. More importantly, this helix lies at the base of the NTD, where it could be important for sustaining the conformation of this domain. Thus, mutating these residues might provoke conformational changes in the NTD, which in turn could affect RNAPIII and replisome binding. However, it is also possible that these residues directly mediate contacts with RNAPIII and the replisome. More work is needed in the future to distinguish between these possibilities.

Uneven distribution of RNAPIII along tRNAs: bias in data analysis?

The last question I would like to discuss is the distribution of RNAPIII during transcription. Turowski et al. (2016) observed that RNAPIIIs were enriched at two regions corresponding to the beginning of the A box and the B box, respectively, which they proposed to be caused by the binding of TFIIIC at promoters. They also detected much higher RNAPIII signals at the first peak (i.e.
at the A box) and they suggested that it could be due to a delay in the dissociation of RNAPIII from the transcription initiation factors (Figure 3-9). However, we do not observe the same pattern in our CRAC data (Figure 2A; Figure S12). Metagene analysis using whole reads did not show two well-separated peaks; rather, we observed an increased signal towards the 3' end of tRNA genes (Figure 2A). We also analysed RNAPIII occupancy by mapping only the 3' end of each read, which more precisely reflects the position of individual RNAPIIIs with single-nucleotide resolution. Although the RNAPIII distribution seems to vary according to the tRNA isotype, in general we observe an enrichment of RNAPIIIs around the 3' end of the mature tRNAs, in other words near the primary terminator of tRNA genes (Figure S12). This would be consistent with RNAPIII undergoing strong and relatively long-lived pausing at the primary terminator, as expected. Indeed, correct recognition of the termination signal and reinitiation by RNAPIII require slowing down and/or pausing of the polymerase mediated by the C53-C37 subunits. With regard to the 5' end bias observed by Turowski et al. (2016), it could actually be due to the way they processed their data. In their study, the sequencing procedure generated 50-bp single-end reads and the cDNA fragments were sequenced from the 5' end. Only reads including both the 5' and 3' linkers, which flank the cDNA, were analysed to ensure that bona fide 3' ends were mapped, which means that a fraction of long reads containing real 3' end signals was discarded. Furthermore, reads corresponding to RNAPIIIs transcribing the first 50 bp would be enriched, as they would meet the criteria to be selected and used in their analyses. Therefore, the different RNAPIII distribution patterns obtained in the two studies might be due to the different ways the data were processed, which still needs to be verified.

Transcription termination can be viewed as a multi-step process consisting of the recruitment of termination factors, the recognition of sequence motifs, the pausing of the RNAP and, finally, the release of the RNAP and of the transcript from the DNA. This last step involves the remodelling of a complex network of interactions between the RNAP, the nascent RNA and the DNA template (reviewed in Porrua et al., 2016). Within this network, the interactions between the polymerase and the RNA:DNA hybrid are […] molecular level by our group as well as by other groups (Han et al., 2017; [START_REF] Hazelbaker | Kinetic competition between RNA Polymerase II and Sen1-dependent transcription termination[END_REF]Leonaitė et al., 2017; Porrua and Libri, 2013b; Wang et al., 2019). In brief, Sen1 uses the energy of ATP hydrolysis to translocate along the nascent RNA towards the transcribing RNAPII and, upon transcriptional pausing, it collides with the polymerase and induces its dissociation from the DNA. A large body of data supports the idea that, unlike the other RNAPs, RNAPIII can terminate transcription precisely and efficiently at a particular DNA sequence without the need for accessory proteins (see Arimbasseri et al., 2013 and Porrua et al., 2016). A typical RNAPIII terminator consists of a stretch of thymidines (Ts) of variable length in the non-template DNA strand which, according to the current model, is sufficient to promote both pausing and release of RNAPIII.
Upon transcription of a T-tract, the weakness of the resulting rU:dA hybrid is thought to play a central role in the destabilization of the RNAPIII EC (Mishra and Maraia, 2019). The particular sensitivity of RNAPIII to weak rU:dA hybrids, compared with the other RNAPs, which do not detect T-tracts as terminators, would depend on the less extensive interactions between RNAPIII and the RNA:DNA hybrid (Hoffmann et al., 2015). The Ts in the non-template strand play an additional critical role in transcription termination (Arimbasseri and Maraia, 2015), as it has been proposed that they are recognized by the C37 and C53 subunits of RNAPIII, which also contribute to termination (Landrieux et al., 2006; Rijal and Maraia, 2013). An alternative model proposed by Nielsen and co-authors (Nielsen et al., 2013) posits that T-tracts are necessary for RNAPIII pausing but are not sufficient for its release from the DNA. These authors proposed that the folding of the nascent RNA into a hairpin structure in the vicinity of the paused RNAPIII is an absolute requirement for termination. This structure would invade the RNA exit channel of the polymerase, thereby provoking its dissociation from the DNA. The proposed mechanism is reminiscent of the so-called intrinsic termination pathway described for the bacterial RNAP. This hairpin-dependent model remains, however, highly controversial, as it seems to be in disagreement with a large body of earlier experimental data (Arimbasseri et al., 2014). The model according to which sequence elements at the RNA and DNA levels are the sole determinants of RNAPIII termination has also been challenged in the fission yeast Schizosaccharomyces pombe. Indeed, a recent report showed that one of the homologues of S. cerevisiae Sen1 (hereafter referred to as SpSen1) is involved in RNAPIII termination in vivo (Rivosecchi et al., 2019). The deletion of this gene, which in […]

Keywords: RNAPIII, transcription termination, T-tract, Sen1 helicase, RNA secondary structures

Figure 1-1: Overview of the main classes of RNA in eukaryotes and their role in gene expression.
Figure 1-2: Scheme of the RNA polymerase subunits composition in the three domains of life.
Figure 2-1: The core promoter elements for transcription by RNAPII.
Figure 2-2: In vitro model of RNAPII transcription initiation.
Figure 2-3: The composition and conservation of the CTD.
Figure 2-4: The CTD phosphorylation patterns across protein-coding genes in humans and yeast.
Figure 2-5: Cleavage and polyadenylation complex and its binding sequences.
Figure 2-6: Major pathways for RNAPII transcription termination.
Figure 2-7: The domain structure of the Nab3 and Nrd1 proteins from S. cerevisiae.
Figure 2-8: The domain organization and structural features of Sen1.
Figure 2-9: Alternative pathways of RNAPII transcription termination.
Figure 2-10: Transcription termination in bacteria.
Figure 2-11: Transcription termination by RNAPI in yeast.
Figure 3-1: Structures of eukaryotic RNA polymerases.
Figure 3-2: tRNA structure, isotypes and isoacceptors.
Figure 3-3: Promoter architecture of class III genes.
Figure 3-4: Architecture of TFIIIC subunits.
Figure 3-5: Model of TFIIIB assembly by tA/TFIIIC.
Figure 3-6: Architecture of TFIIIB subunits.
Figure 3-7: Domain architecture of RNAPIII key subunits involved in PIC assembly.
Figure 3-8: Schematic of the mechanism of promoter opening and DNA melting by RNAPIII.
Figure 3-9: Uneven distribution of RNAPIII on transcription units.
Figure 3-10: Models of Transcription termination by RNAPIII.
Figure 3-11: The RNAPIII transcription cycle.

Table 1-1: Subunit composition of RNA polymerases in the three domains of life.
Table 2-1: Subunit composition of RNAPII and its general transcription factors.
Table 2-2: Factors involved in RNAPII transcription termination.
Table 3-1: The RNAPIII transcriptome.
Table 3-2: Subunit composition of RNAPIII and its transcription factors.

Figure 1-1: Overview of the main classes of RNA in eukaryotes and their role in gene expression.

Figure 1-2: Scheme of the RNA polymerase subunits composition in the three domains of life.

Figure 2-1: The core promoter elements for transcription by RNAPII. This diagram is roughly to scale. -40 and +40 represent the distance in base pairs from the transcription start site (black arrow).
Abbreviations: BRE u , upstream TFIIB recognition element; BRE d , downstream TFIIB recognition element; TATA, TATA box; Inr, initiator; MTE, motif ten element; DPE, downstream core promoter element. From Juven-Gershon and Kadonage, 2010. Figure 2 - 2 : 22 Figure 2-2: In intro model of RNAPII transcription initiation. Representation of the canonical model for stepwise pre-initiation complex (PIC) assembly from general transcription factors and RNAPII on promoter DNA. The names of the intermediate PIC complexes are provided in the boxes. Details described in the text. Adapted from Sainsbury et al., 2015. transcription. Unlike other DNA-dependent RNAPs, RNAPII cannot open the promoter DNA by itself. Promoter opening requires ATP and the Ssl2 (XPB in human) subunit of TFIIH, which possesses ATP-dependent DNA-translocase activity. Following formation of the open complex, Figure 2 - 3 : 23 Figure 2-3: The composition and conservation of the CTD. The core and the carboxy-terminal domain (CTD) of S. cerevisiae and human RNAPII are depicted. Each rectangle represents one CTD repeat and the repeat sequence consensus is shown above. Repeats with identical sequence in yeast and human CTDs are highlighted in green. From Harlen and Churchman, 2017. Figure 2 - 4 : 24 Figure 2-4: The CTD phosphorylation patterns across protein-coding genes in humans and yeast. Figure 2 - 5 : 25 Figure 2-5: Cleavage and polyadenylation complex and its binding sequences. Depicted is the subunit composition of the CPF-CF complex in yeast (upper panel). Mammals homologues are also presented here for comparison (lower panel). Conserved subunits are indicated in similar colors. Details in the text. Adapted from https://gsbs.tufts.edu/facultyResearch/faculty/moore-claire/research . Figure 2 - 6 : 26 Figure 2-6: Major pathwaysfor RNAPII transcription termination. Transcription termination by the CPF-CF pathway. The CPF-CF is recruited via the interaction with the CTD of RNAPII and with sequence signals present on the nascent transcript (yellow box). The Ysh1 subunit of the CPF-CF complex cleaves the RNA, generating an uncapped 5' end that serves as an entry point for the Rat1 exonuclease. Two models are proposed for the dissociation of the elongation complex: the torpedo mechanism and the allosteric mechanism (details in the text). (B) Transcription termination by the NNS pathway. Nrd1 and Nab3 are recruited by recognition of Ser5P CTD and specific motifs on the nascent transcript. Sen1 is then recruited, translocates along the RNA and releases RNAPII. The resulting RNA transcript is subsequently degraded by the nuclear exosome and the TRAMP complex. Adapted from Porrua et al, 2016. Figure 2 - 7 : 27 Figure 2-7: The domain structure of the Nab3 and Nrd1 proteins from S. cerevisiae. The lengths of the proteins are indicated on the right (in amino acids). The Nrd1 interaction domain of Nab3 (NrdID; amino acids 204-248), the RNAPII CTD interaction domain of Nrd1 (CID; amino acids 6-151); the Nab3 interaction domain of Nrd1 (NabID; amino acids 151-214), the RNA recognition motifs (RRMs), and the Q/P rich and D/E rich domains are indicated in boxes with different colors, repectively. From Arndt and Reines, 2015. Figure 2 - 8 : 28 Figure 2-8: The domain organization and structural features of Sen1. (A) Scheme of Sen1 protein. Globular domains are denoted by solid bars, and intrinsically disordered regions are indicated by a line. The disorder prediction was obtained by IUPred (Dosztányi et al., 2005). 
The sequence of the Nrd1-nteraction motif (NIM) is shown on top. From Han et al., 2020. (B) Structure of the Sen1 helicase domain determined in the absence of RNA. Dashed lines indicate disordered loops not resolved in the structure. A scheme with the domain organization is shown on top. (C) Simplified diagram of the main structural features according to (B). (B) and (C) from Leonaité et al., 2017. -9A). RNAPII would be destabilized and released from the DNA by Rat1 as the torpedo model proposed for the CPF-CF termination pathway. The upstream cleavage transcript is subjected to trimming or degradation by the TRAMP and exosome complexes, similarly to what occurs for transcripts produced by NNS-dependent termination (Figure 2-6B). Overall, Rnt1-depedent termination of RNAPII transcription is similar to that of RNAPI (see Box 2-5). Figure 2 - 9 : 29 Figure 2-9: Alternative pathways of RNAPII transcription termination.(A) Rnt1-dependent termination pathway. (B) Reb1-mediated roadblock termination pathway. Details see in the text. Rsp5, ubiquitin ligase; Elc1, elongin C that forms a complex with Cul3 that polyubiquitylates monoubiquitylated RNAPII to trigger its proteolysis. From Challal thesis, 2019. proposes that formation of the hairpin triggers a conformational change within the RNAP which destabilizes the EC (Figure 2-10). Figure 2 - 10 : 210 Figure 2-10: Transcription termination in bacteria.A paused intermediate is common to all termination models. In the shearing model, RNA is pulled (by the action of Rho or hairpin folding) from an otherwise immobile EC, whereas, in the hyper-translocation model, hybrid shortening results from forward movement of RNAP. In the allosteric model, the hairpin or Rho invades the RNA exit channel triggering catalytic inactivation and conformational destabilization of the EC. Adapted fromPorrua et al., 2016. mechanisms of EC dissociation by Rho (Figure2-10; reviewed in[START_REF] Ray-Soni | Mechanisms of Bacterial Transcription Termination: All Good Things Must End[END_REF] Porrua et al., 2016).Box 2-5: Transcription termination by RNAPIIn yeast, ribosomal DNA (rDNA) is located in chromosome XII and consists in an approximately 1-2 Mb region composed of 150-200 tandem copies of a 9.1 kb region.Each repeat contains the genes encoding for the 18S, 5.8S and 25S rRNAs, which are transcribed by RNAPI, and the 5S rRNA, which is transcribed by RNAPIII. The different genes are separated by various spacer regions (Figure2-11) and the tandem repeats are separated by an intergenic sequence (IGS), where termination of RNAPI transcription occurs[START_REF] Venema | Ribosome Synthesis in Saccharomyces cerevisiae[END_REF]. Termination has been proposed to involve a roadblock mechanism, similar to the one previously mentioned in Figure2-9B. The IGS contains one or several recognition sequences for NTS1 silencing protein (Nsi1), a DNAbinding protein of the Myb-family(Reiter et al., 2012; Merkl et al., 2014) preceded by an Rnt1 cleavage site. RNAPI is roadblocked by Nsi1 bound to the IGS DNA and Rnt1 cleaves a stem-loop in the pre-rRNA generating an entry point for the exonuclease Rat1. Figure 2 - 2 Figure 2-11: Transcription termination by RNAPI in yeast.(A) Structure of the rDNA locus. (B) Model for RNAPI transcription termination. More details in the text. Abbreviations: ITS, internal transcribed spacer; ETS, external transcribed spacer; IGS, intergenic sequence; RFB, replication fork barrier; ARS: autonomously replicating sequence. Adapted fromPorrua et al., 2016. 
Figure 3 - 1 : 31 Figure 3-1: Structures of eukaryotic RNA polymerases. (A) Surface view of the elongating RNAPIII compared to (B) RNAPII and (C) RNAPI. Homologous subunits in RNAPII and RNAPI are coloured based on RNAPIII as indicated in the legend. Three peripheral subcomplexes of RNAPIII (C25-C17, C53-C37, and C82-C34-C31) are indicated by black circles. (A-C) Adapted from Hoffmann et al., 2015. (D) Overall architecture of RNA polymerase based on the 12-subunit RNAPII (PDB 1Y1W). This simplified diagram of RNA polymerase shows important structural and functional features including the assembly platform, the active site, the DNA-binding channel, the jaws and the wall, clamp and stalk domains. The two active-site Mg 2+ ions are indicated as magenta spheres. Adapted from Werner and Grohmann, 2011. Figure 3 - 2 : 32 Figure 3-2: tRNA structure, isotypes and isoacceptors.(A) Classical cloverleaf structure of tRNA. The conventional IUB/IUPAC degenerate DNA alphabet is used in this figure: R (purine), A or G; N (any), A, C, G or U; Y (pyrimidine), C or T. Main interactions supporting the L-shape of tRNAs are shown in the right dashed box. Adapted from Ehrlich et al., 2021. (B) Anticodon chart showing gene copy number of tRNA isotypes and isoacceptors in S. cerevisiae (genome version R64-2-1). 274 of nuclear tRNA genes are analysed (the 'tX(XXX)D' tRNA gene with unknown function is excluded). Number of genes for each isotype or isoacceptor is indicated within brackets. The coloured boxes correspond to missing tRNA isoacceptors in the different kingdoms, according to Ehrlich et al., 2021. *The 'GAG' isoacceptor tRNA is missing for most eukaryotes but there is one copy in S. cerevisiae. - 3 . 3 Type 1 promoters contain the A box and the internal control region (ICR) recognized by transcription factors TFIIIC and TFIIIA, respectively. Type 2 promoters are composed of an A box and a B box bound by TFIIIC. The distance between the TFIIIB recruitment and influence transcription start site selection. Type 3 promoters, present in metazoans, lack control elements within the transcribed region and, instead, harbor an upstream proximal sequence element (PSE) which is bound by SNAPc. The majority of SNAPcdependent promoters include an external enhancer region, the distal sequence element (DSE), occupied by activators Staf and/or Oct1. Moreover, in type 3 RNAPIII promoters, there is also a TATA box bound by TFIIIA, which determines their RNAPIII specificity (Figure3-3). 5S rRNA genes are the only set of class III genes that rely on type 1 promoters. All tRNA genes (with the exception of the selenocysteine tRNA gene) employ a type 2 promoter and the two conserved A box and B box correspond to the universally conserved D-loop and Tloop of mature tRNAs. In the structure of mature tRNAs, the A box normally starts from the end of the acceptor stem and extends to the D-loop (i.e. N8 to N19) and the B-box is positioned from N52 to N62 (Figure 3-2A). The synthesis of U6 and other RNAPIII-dependent transcripts from metazoans like the 7SK and Y RNAs is dependent on type 3 upstream promoters (Figure 3-3). Figure 3 - 3 : 33 Figure 3-3: Promoter architecture of class III genes.Three types of class III gene promoters are shown. Detail described in the text. RNA species relying on distinct promoter are indicated on the right site. 
Primary transcripts originating from type 1 and type 3 promoters do not undergo removal of a 5'-leader and, thus, the 5'-terminal nucleotide of the mature RNA coincides with the transcription start site, as indicated by the bent arrows. In contrast, primary transcripts originated from type 2 promoters generally carry a 5'-leader portion that must be processed. Abbreviations: TSS, transcription start site; Sc, S. cerevisiae; Hs, Homo sapiens. Adapted from[START_REF] Dieci | Identification of RNA polymerase III-transcribed genes in eukaryotic genomes[END_REF] Figure 3 - 4 : 34 Figure 3-4: Architecture of TFIIIC subunits. Figure 3 - 5 : 35 Figure 3-5: Model of TFIIIB assembly by tA/TFIIIC. (a) Initially, Brf1 and TBP are recruited to the N-terminal TPR array of t131. The distance between the t95 DBD and the t131 TPR array serves as a molecular ruler that places TFIIIB within a certain distance upstream of the TSS. (b) TBP binds and bends the upstream DNA sequence. The lifetime of this complex depends on the upstream DNA sequence engaged. (c) Bdp1 then enters the complex and stabilizes the bent state. (d) Recruitment of RNAPIII and promoter opening displaces tA, freeing the transcribed region. From[START_REF] Vorländer | Structure of the TFIIIC subcomplex τA provides insights into RNA polymerase III pre-initiation complex formation[END_REF] Figure 3 - 6 : 36 Figure 3-6: Architecture of TFIIIB subunits. Domain architecture of TFIIIB involved in PIC assembly. Protein regions are depicted according to their presence (solid color boxes) or absence (empty boxes) in the open complex of the preinitiation complex (OC-PIC) structure built by Abascal-Palacios et al., 2018. Abbreviations: ER, essential region. For SANT domain see Box 3-1. simultaneously by two laboratories, providing structural insights into the mechanisms of promoter opening[START_REF] Abascal-Palacios | Structural basis of RNA polymerase III transcription initiation[END_REF] Vorländer et al., 2018). The cryo-EM structures of open RNAPIII PIC (OC-PIC) revealed an intricate interaction network between TFIIIB and several RNAPIII subunits. The architecture of the subunits involved in promoter opening are shown in Figure 3-7. TFIIIB was observed to completely enclose the DNA around the TATA box. The overall topology of the PICs is highly similar in RNAPIII and RNAPII. As mentioned before, the C82-C34-C31 heterotrimer is regarded as a TFIIE-like subcomplex. Indeed, the unwound non-template DNA strand is stabilized by the C82 cleft loop (Figure 3-7) resembling the mechanism of action of TFIIE. In addition, C31 physically bridges the stalk and the clamp, which is functionally conserved in TFIIE. However, based on the structures obtained, the upstream region of the transcription bubble is stabilized by the WH domains of C34 (Figure 3-7) as by the TFIIF subunit Tfg2 in the case of the RNAPII PIC. Thus, the heterotrimer combines the roles of both TFIIF and TFIIE. Through the comparison between the RNAPIII CC and OC structures, Vorländer et al. proposed a model for the structural rearrangements occurring during promoter opening (Figure 3-8). In the early closed DNA complex, the C34 WH domains and the Bdp1 tether are disordered, the upstream DNA is kinked away by the clamp head and C82 cleft loops, leading to a 30-degree bend introduced around position -15. The transition from the closed to the open promoter complex proceeds via an open-clamp intermediate. 
The open and closed Figure 3 - 7 : 37 Figure 3-7: Domain architecture of RNAPIII key subunits involved in PIC assembly. Protein regions are depicted according to their presence (solid color boxes) or absence (empty boxes) in the open complex of the pre-initiation complex (OC-PIC) structure built by Abascal-Palacios et al., 2018. Abbreviations: WHD, winged helix domain. More details see in the text. clamp and the C82 cleft loop move upwards and lie on top of the closed DNA, allows a better association of the polymerase with the target DNA. Subsequently, C34 WH domains become ordered followed by clamp closing. The transition to the closed clamp position enforces DNA melting by a steric clash of the C82 cleft loop and the DNA duplex, and then the template strand is loaded into the active site (Figure3-8). This model is similar to that of RNAPII, since both employ a movement of the clamp, an extended loop (C82 or TFIIE), and WH domains (heterotrimer/TFIIE-TFIIF) for promoter opening and transcription bubble stabilization(Vorländer et al., 2018). But unlike RNAPII, for which DNA opening generally requires ATP hydrolysis, in the case of RNAPIII (as well as RNAPI) the DNA is open only with the aid of the binding energy generated by interactions newly established during the assembly of PIC[START_REF] Cramer | Organization and regulation of gene transcription[END_REF]. Figure 3 - 8 : 38 Figure 3-8: Schematic of the mechanism of promoter opening and DNA melting by RNAPIII. Figure 3 - 9 : 39 Figure 3-9: Uneven distribution of RNAPIII on transcription units. (a) RNAPIII distribution detected by the CRAC method, across most genes. The profile presents a high peak of nascent transcript density over the 5' end of the transcription unit and a weaker pear before the 3' end of mature tRNA (intron-less tRNA gene is shown). Read-through (RT) of termination signal is observed on many tRNAs, typically extending 50-200 nt beyond the expected canonical termination site. (b) Localization of A-and B-boxes of the bipartite internal promoter, and termination site (T n ) in a tRNA-encoding gene (tDNA). (c) The tA and tB modules of TFIIIC factor binding the A and B boxes. Regions of postulated transient pausing of RNAPIII correspond to the TFIIIC binding sites. From Lesniewska and Boguta, 2017. Using purified S. cerevisiae RNAPIII and various nucleic acid scaffolds,Arimbasseri and Maraia (2015) revealed an unexpected role of the non-template DNA strand in RNAPIII termination. The authors observed that a majority of ECs paused upon transcribing the first four Ts of a terminator and entered into a metastable, yet catalytically active, intermediate that they called the pre-termination complex (PTC). The data suggested that both the first four Ts in the non-template strand and the C53-C37-C11 subunits are required for the formation of a PTC. Substitution of the first four Ts abolished polymerase pausing and RNA release, whereas a mutation of the fifth T retained pausing but abrogated RNA release, indicating that a stretch of 4 Ts is only sufficient for polymerase pausing but transcript release requires the presence of a fifth T in the non-template strand. On the other hand, they found that the A-tract in the template strand is a strict requirement for termination, further supporting the idea that the weak oligo(rU:dA) hybrid is an essential determinant for termination. 
Based on these and former data, Arimbasseri and Maraia proposed that the template strand promotes destabilization of the EC through an unstable oligo(rU:dA) hybrid, and the non-template strand provides distinct signals for RNAPIII pausing and release.With regard to the recognition of the non-template strand signals,Arimbasseri and Maraia observed that five amino acids(226)(227)(228)(229)(230) in the C-terminal domain of C37 were analyses of the yeast RNAPIII PTC have shown that the recognition of the Ts in the non-template strand does not involve C37, but is actually mediated by several residues of the second largest subunit of RNAPIII, C128 (Mathias Girbig and Christoph W. Muller, personal communication). Although direct engagement between C53-C37 and the non-template strand could not be observed, the RNAPIIID construct fails to stabilize the non-template strand as shown by cryo-EM (Mathias Girbig and Christoph W. Muller, personal communication). Taken all together, the precise role of C53-C37-C11 in termination is more complicated than expected, probably by aiding C128 to sense the termination signals, which remains to be further elucidated. Figure 3 - 10 : 310 Figure 3-10: Models of Transcription termination by RNAPIII.Transcription termination occurs after pausing upon synthesis of a U-tract. Two mechanisms are proposed for the dissociation of the elongation complex. The T-tract dependent model posits that RNAPIII is released by the weakness of the rU:dA hybrid in the catalytic sites and some specific interactions between several subunits of RNAPIII and the T-tract on the non-template strand. While the RNA-structure-dependent model suggests that invasion of the RNA structure induces RNAPIII conformational change, which destabilizes and releases the elongation complex. when RNAPIII paused at a G-tract. Thus,Nielsen et al. suggested that the function of the Ttract would be to induce the pausing and catalytic inactivation of RNAPIII by switching off the RNA cleavage activity mediated by C11. RNAPIII termination would then be triggered by the formation of RNA secondary structures within the RNA exit channel or by the invasion of this channel by the RNA hairpin upon RNAPIII backtracking, which would destabilize the EC via an allosteric mechanism(Nielsen et al. 2013). A follow-up study performed by Arimbasseri et al. (2014) implicated that the discrepant results obtained by Nielsen et al. could be due to the specific polymerase preparation used rather than variables of the different assays. By comparing the RNAPIII preparation assessed by Nielsen and co-authors with two other RNAPIII preparations from distinct groups, Arimbasseri et al. Rivosecchi et al. further showed Sp Sen1 but not Dbl8 is involved in RNAPIII termination in fission yeast. Specifically, ChIP-seq analyses performed in a Dsen1 strain showed that RNAPIII accumulated downstream of class III genes, indicating transcription termination defects. Previous data had revealed that unstable R-loops form at tRNA loci in fission yeast and that Sp Sen1 was recruited to these loci, raising the question whether Sp Sen1 recruitment and action on RNAPIII transcription would involve the resolution of R-loops, which was proposed to be an important function of S. cerevisiae Sen1[START_REF] Legros | RNA Processing Factors Swd2.2 and Sen1 Antagonize RNA Pol III-Dependent Transcription and the Localization of Condensin at Pol III Genes[END_REF]. However, Rivosecchi et al. 
showed that the function of Sen1 in RNAPIII transcription termination was independent on the presence of R-loops at tRNA regions. The precise mechanisms by which Sp Sen1 would stimulate termination have not been explored, but based on its homology with S. cerevisiae Sen1, it was proposed that it would employ the same mode of action as S. cerevisiae Sen1 on RNAPII-dependent transcription units. Whether the function of Sp Sen1 in RNAPIII transcription termination is conserved in other organisms remains an open question. Figure 3 - 3 Figure 3-11: The RNAPIII transcription cycle (next page).(A) RNAPIII Transcription cycle on tRNA genes. (a) Transcription initiation: the internal promoters (A box and B box) of tRNA genes are bound by the transcription factor TFIIIC, which recruits TFIIIB to the DNA. RNAPIII is then recruited and transcription initiates at the transcription start site (TSS). (b) Transcription elongation: during elongation, TFIIIC remains associated with the promoter elements, possibly because RNAPIII displaces TFIIIC from A box and subsequently from B box during elongation, but not from both sites simultaneously. (c) Primary (canonical) termination: RNAPIII terminates at a canonical terminator sequence and releases the pre-tRNA transcript. The released RNAPIII can rapidly re-associate with the same transcription unit with the help of promoter-bound TFIIIC and TFIIIB and start a new transcription cycle through a process known as "facilitated reinitiation"[START_REF] Dieci | Facilitated Recycling Pathway for RNA Polymerase III[END_REF]. (d) Secondary termination: a substantial fraction of RNAPIIIs overrides the canonical terminator and then terminates at regions downstream of 3'-end of tRNA, thus producing aberrant readthrough transcripts. (B) Pre-tRNA processing. The primary transcripts of tRNA genes must undergo maturation at both ends to generate mature tRNAs. The 5'-end leader of the pre-tRNA is generally removed by the RNase P endonuclease, and the 3'-end trailer is thought to be cleaved by the tRNase Z endonuclease or trimmed by the exonucleases Rex1 and Rrp6[START_REF] Skowronek | tRNA 3' processing in yeast involves tRNase Z, Rex1, and Rrp6[END_REF]. (C) Readthrough transcripts processing. Readthrough tRNAs can be degraded by the nuclear surveillance machineries such as the exosome, or can be processed through other mechanisms. Adapted fromTurowski and Tollervey, 2016. Thus, much uncertainty remains about the relative contribution of sequence elements, RNA structures and trans-acting factors to the efficiency of RNAPIII transcription termination. Also, to what extent the different termination mechanisms are evolutionary conserved remains an open question. promote termination of RNAPIIIs that override the first termination signal. Our data indicate that only T-tracts within a particular length range are sufficient to promote autonomous termination by RNAPIII. Nevertheless, we show that tRNA genes often contain suboptimal termination signals and that their capacity to induce termination can be complemented by Sen1 as well as by secondary structures of the nascent RNA. These two factors act in a mutually exclusive manner since the presence of RNA structures prevent the loading of Sen1 onto the transcript, which is strictly required for Sen1-mediated termination. 
While Sen1 can also promote the release of RNAPIII at pausing sites other than T-tracts, we find that RNA structures can only function in association with canonical termination signals.Together, our data allow revisiting former models for RNAPIII transcription termination and offer a novel and detailed view of how intrinsic components of the EC (i.e. T-tracts and RNA structures) and the extrinsic factor Sen1 concur to promote efficient termination of RNAPIII transcription. Figure 1 : 1 Figure 1: The N-terminal domain of Sen1 interacts with RNAPIII.A) Summary of the results of coimmunoprecipitation-MS experiments using different versions of TAP-tagged Sen1 as baits that are included in table1. A scheme of Sen1 protein indicating the different functional domains as well as the position of the mutations introduced in the sen1-3 strain is shown on the top. Globular domains are denoted by solid bars while protein regions predicted to be intrinsically disordered are indicated by a line. B) Western blot analysis of a representative coimmunoprecipitation experiment using a Cterminally His 6 -TEV-Protein A (HTP)-tagged version of the largest subunit of RNAPIII (Rpc160) as the bait.C) and D) Label-free quantitative MS analysis of coimmunoprecipitation assays using Rpc160-HTP as the bait. Data correspond to experiments performed in the absence of RNase A treatment. C) Volcano plot representing the enrichment of the detected proteins in the HTP-tagged strain relative to the untagged control in a WT (SEN1) background. D) Quantitative comparison of the proteins that are associated with tagged RNAPIII in a sen1-3 mutant relative to the WT. Only proteins with a fold change ³ 2 relative to the control and p-value < 0.05 are considered as significantly changed among the compared conditions. E) Comparison of the abundance (arbitrary units) of the different NNS-components in Rpc160-HTP coimmunoprecipitates with or without treatment with RNase A. Figure 2 : 2 Figure 2: The interaction of Sen1 with RNAPIII is globally required for efficient transcription termination at RNAPIII-dependent genes.A) Metagene analysis of the RNAPIII distribution around tRNA genes. The signal covering the region between the 5' and the primary terminator (i.e. the 1 st T-tract after the 3' end of the mature tRNA) is scaled to 100 bp. Values on the y-axis correspond to the mean coverage expressed as counts per million (CPM) multiplied by 10. Sen1-AID denotes the strain expression an Auxin Inducible Degron version of Sen1. IAA: indole-3-acetic acid, an analogue of auxin. B) and C) Integrative Genomics Viewer (IGV) screenshots of examples of tRNA genes displaying termination defects upon mutation or depletion of Sen1. "w" and "c" denote the Watson and the Crick strands, respectively. The values under brackets correspond to the scale of the RNAPIII signal expressed in 10xCPM. D) Heatmap analysis representing the log2 of the fold change (FC) of the RNAPIII signal around tRNA genes in the sen1-3 mutant relative to the WT. The summary plot on the top was calculated using the average values for each position. E) and F) Examples of RNAPIII-dependent genes other than tRNA genes that present termination defects upon mutation or depletion of Sen1. Figure 3 : 3 Figure 3: The function of Sen1 in RNAPIII transcription termination does not rely on Sen1 interaction with its partners Nrd1 and Nab3 or with the replisome. 
A) Metagene analysis of the RNAPIII distribution around tRNA genes as in figure 2 in a WT or in a Nrd1 Auxin-Inducible Degron (AID) strain in the absence or in the presence of IAA. Additional experiments validating the efficiency of Nrd1 depletion on well-characterized NNS target RNAs are included in figure S2. B) and C) Individual examples of tRNA genes that exhibit clear termination defects in a sen1-3 mutant (see figure 2B-C) but not in Nrd1-depleted conditions. D) Metagene analysis as in figure 2A but in cells blocked in the G1 phase of the cell cycle. E) and F) Individual examples of tRNA genes that display termination defects in the sen1-3 mutant in cells blocked in G1 as well as in asynchronous cells (compare with figure 2B-C). Figure 4 : 4 Figure 4: Sen1 functions mainly on secondary termination. A) Scheme of tRNA transcription units indicating the relevant elements and parameters used for the assessment of the transcription termination efficiency in the WT and the sen1-3 mutant. B) and C) Comparison the RT length for the different tRNA genes in the mutant relative to the WT. B) Correlation plot. The grey zone corresponds to the confidence interval whereas R is the Pearson's correlation coefficient. p is the p-value associated with Pearson's correlation. C) Violin plot showing the distribution of RT lengths in the WT and in sen1-3. The p-value (p) was calculated with the Wilcoxon test. D) and E) Comparison of the RT index measured in the indicated strains for each tRNA gene. D) Correlation plot generated as in B). Three outliers in sen1-3 are not shown. E) Violin plots as in C) but with RT index values. Three outliers in sen1-3 are excluded. Note that both RT length and the RT index are inversely proportional to the termination efficiency (e.g. higher RT index indicates lower termination efficiency). p corresponds to the p-value for the global comparison of the four groups according to the Kruskal-Wallis test. Asterisks denote the p-values of pairwise comparisons (*: p ≤ 0.05 ; **: p ≤ 0.01; ***: p ≤ 0.001; ****: p ≤ 0.0001). J) IGV screenshot of an individual tRNA showing the distribution of RNAPIII CRAC signal in the WT and the sen1-3 mutant. The 3' ends datasets provide the position of individual RNAPIIIs with single-nucleotide resolution. Insets are zoom in views of the main regions where RNAPIII accumulates in the mutant. Coordinates in insets correspond to the position relative to the beginning of the nearest downstream T-tract. , we observed only very weak polymerase pausing at the T4 terminator and no detectable RNAPIII release. The T5 terminator induced stronger pausing and intermediate levels of RNAPIII release, while the T6 terminator promoted very efficient release. Stretches of 9 or 12 Ts induced very strong RNAPIII pausing as virtually no transcription signal could be detected downstream of these terminators but a substantial proportion of RNAPIIIs remained associated with the proximal part of these long T-tracts (~ 50% for the T9 and ~ 80% for the T12). This might be due to the recognition of the distal portion of the T-tract in the downstream DNA by RNAPIII, which might induce strong pausing and disfavour release (see Discussion). Indeed, for shorter T-tracts we also observed several prominent pausing sites a few nt upstream of these sequences, both in vitro (figure5C) and in vivo(figures 4J and S5) Figure 5 : 5 Figure 5: Sen1 can induce termination of RNAPIII transcription in vitro. B) SDS-PAGE analyses of the protein preparations used in IVTT assays. 
C) Analyses performed on templates containing T-tracts composed of 4 (T4), 5 (T5) or 6 (T6) consecutive Ts. Left: Denaturing PAGE analysis of transcripts from an IVTT assay testing the capacity of Sen1 and T-tracts of different lengths to induce RNAPIII transcription termination. "B" corresponds to the beads fraction while "S" denotes the supernatant. Representative gel of one out of three independent experiments. Right: Profile of the signal over the region of interest for each gel lane. D) Analysis of IVTT reactions performed on templates containing stretches of 9 (T9) or 12 (T12) Ts. These reactions were performed in parallel with those in panel C) but migrated on different gels. Left: Representative gel of one out of three independent experiments. Right: Profile of the signal over the region of interest for each gel lane. The position of the nucleotides of interest was determined by migrating in parallel a radioactively-labelled ladder (not shown). Figure 6 : 6 Figure 6: Analysis of the mechanisms of Sen1-mediated termination of RNAPIII transcription.A) Analysis of the capacity of Sen1 to promote the release of RNAPIII paused at a sequence other than a T-tract. The transcription templates contain a G-less cassette followed by a stretch of Gs so that in the absence of G in the nucleotide mixture RNAPs are stalled at the first G. Experiments were performed in parallel with purified RNAPII as a positive control for Sen1 activity. The efficiency of transcription termination observed for RNAPII is similar as in our former studies(Han et al., 2017; Leonaitė et al., 2017). Figure 7 : 7 Figure 7: Hairpin-like structures forming in the nascent RNA can complement the function of canonical termination signals. Figure S1: Complementary analyses validating CRAC experiments in figures 2 and 3.A) Comparison of the reads distribution among different genomic regions in crosslinked samples (WT and sen1-3) relative to the un-crosslinked control (Ctrl) in a typical RNAPIII CRAC experiment. The "others" category corresponds to RNAPII genes and intergenic regions. Note that tRNA read-through regions are included in this category and the larger proportion of reads in this group in the sen1-3 mutant could be due to the observed increased RNAPIII presence at those regions.B)Plot representing the number of mapped reads obtained in a typical CRAC experiment in the different samples. Note that the number of reads in cross-linked samples is at least one order of magnitude higher than in the un-crosslinked control (Ctrl). "M" denotes millions.C) Scatter plots showing the high correlation between the two biological replicates of each condition/strain for the CRAC experiments showed in figure2.D)Analysis of DNA copy-number for samples in figure 3D-F by flow cytometry. 1C and 2C corresponds to 1 and 2 copies of the genome, respectively. Cultures were used for CRAC analyses after 3h of treatment with a-factor (see methods).E)Correlation plots for the two biological replicates of samples used in experiments in figure3D-F. Figure Figure S2: Experiments related to figures 2 and 3.A) Northern blot analysis of transcripts derived from two different tRNA genes in the indicated backgrounds. In pGal-TRZ1 strains, the essential gene TRZ1 is expressed from the GAL1 promoter and the different strains are either grown on galactose for the whole experiment (t=0) or shifted to medium containing glucose for 24h to repress TRZ1. 
Experiments performed with 12h-incubation in glucose-containing medium provided similar results (data not shown). Schemes on the left indicate the approximate position of the probes (P1 and P2) used for the detection of the different RNA species (RT, for read-through, and mature tRNA). IGV screenshot of the region around the tH(GUG)G2 gene indicating the position of different T-tracts found and the two major groups of RT transcripts detected by northern blot (A, bottom blot). Note that the fact that we detect multiple bands for each termination region could be due to the existence of several termination sites and/or the presence of heterogeneous poly(A) tails. The structure of the RNA between the T6 and the T8 T-tract predicted by the mFold software of the UNAFold package (http://www.unafold.org/) is shown on the bottom. C) Analysis of transcription termination defects at two well-characterized NNS-targets (the CUT NEL025c and the snoRNA gene SNR13) in the indicated strains. RNAs where prepared from the same cultures used for CRAC experiments and northern blot analyses in figures 2 and 3. Typical read-through transcripts resulting from inefficient termination by the NNScomplex where detected by RT-qPCR with oligonucleotides listed in table S5. Values are normalized relative to the levels of the ACT1 mRNA. D) Western blot analysis of Nrd1 depletion by incubation of pTet-NRD1 strains (2 biological replicates) with doxycycline (Dox) for the indicated times. Tubulin detection was used as a loading control. E) Native agarose gels showing total RNA levels in the indicated samples stained with ethidium bromide. Figure S3 : S3 Figure S3: Complementary analyses related to figure 4. A) and B) Comparison of the transcription signal at tRNA genes in the WT and in the sen1-3 mutant measured by the total RNAPIII CRAC signal at the region from the 5' end of the mature tRNA to the first T-tract. A) Dispersion plot where "R" corresponds to Pearson's correlation coefficient and p is the associated p-value. B) Violin plot where p corresponds to the p-value calculated by the Wilcoxon test. C) and D) Representation of the number of T-tracts of indicated lengths located in the 700 bp region downstream of the primary terminator of each tRNA gene. Data points are coloured according to the quartile (Q) they belong to. Quartiles are defined according to the log2 FC of the RNAPIII signal in the sen1-3 relative to the WT at the same region, which provides an estimation of the dependency on Sen1 for termination. Thus, Q1 includes the most Sen1dependent tRNA genes. Figure S4 : S4 Figure S4: Examples of tRNA genes illustrating the role of Sen1 in enhancing secondary termination. IGV screenshots showing the distribution of RNAPIII CRAC signal in the WT and the sen1-3 mutant with zoom in views of the main regions where RNAPIII accumulates in the mutant. The 3' ends datasets provide the position of individual RNAPIIIs with singlenucleotide resolution. Most accumulation is observed just upstream of or at the first Ts of secondary weak terminators, suggesting impaired RNAPIII release by Sen1 at these sites. The indicated coordinates correspond to the position relative to the beginning of the nearest downstream T-tract. Figure S5 : S5 Figure S5: Sequence of transcription templates and predicted structure of the different transcribed RNAs in in vitro transcription termination assays. The sequence of the wild-type version of each template are indicated in the schemes. 
The mutant version of the transcribed RNAs is shown together with the wild-type version under the corresponding scheme. The sequence of the spacers (S) correspond to the non-template strand. RNA structure predictions and DG calculation for each structure were performed with the mFold software. Figure S6 : S6 Figure S6: Sen1 can promote the release of paused RNAPIs in vitro.A) SDS-PAGE analysis of the RNAPI preparation used in these assays.B) IVTT assay performed on templates containing a G-less cassette followed by a run of Gs to promote stalling of RNAPI at the first G in the absence of guanine in the reaction. Top: scheme of the transcription templates. Bottom: Denaturing PAGE analysis of transcripts from one out of two independent IVTT assays, which produced very similar results. Figure S7 : S7 Figure S7: Examples of tRNA genes harbouring a T4 primary terminator. A) and B) Cases where the T4 terminator seems to function autonomously to promote moderate levels of termination. The 3' end datasets provide accurately the position of individual RNAPIIIs, and therefore the location of pausing sites. The decrease in the whole reads RNAPIII signal within the T4 sequence despite the presence of very strong pausing at close downstream T-tracts supports the idea that a fraction of RNAPIIIs terminate at the T4 terminator. C) Example where the T4 sequence seems to function in combination with the downstream T5 T-tract, as suggested by the strong termination observed at these sequences compared to A) and B), where the T4 sequence is further from other T-tracts. The pausing pattern, with accumulation of RNAPIIIs at the first 3 thymidines of the T4 sequence rather resembles the pattern observed for long T-tracts (e.g. T9 terminators) in vitro. Figure S8 : S8 Figure S8: Analysis of T-tracts and RNA structures at regions of secondary termination. A) Analysis of the frequency of A and T nucleotides in the 200 bp region downstream of the primary terminator of tRNA genes. B) Comparison of the number of weak (T ≤ 5) or strong (T ≥ 6) terminators at the 200 bp region downstream of the primary terminator of tRNA genes in the sense orientation versus to the antisense orientation relative to transcription. Statistical significance was calculated using a Wilcoxon rank sum test. n.s indicates no significant difference between the compared groups whereas * denotes a p-value ≤ 0.05 and *** a p-value ≤ 0.001. C) Analysis of the GC content (fraction of G and C nucleotides) of regions downstream of the primary terminator of tRNA genes. Values were calculated for 60 bp sliding windows. The black line corresponds to the average value whereas the grey zone represents the 95% confidence interval of the average value. A dashed line indicates the average value for the whole genome. D) Analysis of the Gibbs free energy (DG) as a proxy for the propensity of the transcribed regions to form secondary structures. We plotted values calculated for 65 bp sliding windows published in Turowski et al. 2020. A dashed line indicates the average DG value for the whole genome. 
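For readers who wish to reproduce the flavour of the sliding-window analysis in Figure S8C, the short Python sketch below computes the GC fraction in 60-bp windows from a FASTA file of the regions downstream of the primary terminators. It is only an illustrative sketch: the input file name and record naming are hypothetical, and the folding energies (DG, Figure S8D) are not computed here since they require a dedicated folding tool such as mFold/UNAFold or RNAfold.

```python
# Minimal sketch (hypothetical input file, not the actual pipeline used in this work):
# GC fraction in 60-bp sliding windows over sequences downstream of tRNA genes.
def read_fasta(path):
    """Return a dict {name: sequence} from a simple FASTA file."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line.upper())
    return {n: "".join(parts) for n, parts in seqs.items()}

def gc_windows(seq, window=60, step=1):
    """GC fraction for each sliding window along the sequence."""
    values = []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        values.append(sum(win.count(base) for base in "GC") / window)
    return values

if __name__ == "__main__":
    # Hypothetical input: one FASTA record per tRNA gene, covering the 200 bp
    # downstream of the primary terminator.
    regions = read_fasta("downstream_regions.fa")
    for gene, seq in regions.items():
        profile = gc_windows(seq)
        if profile:
            print(f"{gene}\tmean GC = {sum(profile) / len(profile):.3f}")
```

Averaging the per-window values across genes after aligning them on the primary terminator would then yield a profile comparable to the one shown in the figure.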
HA-Trz1, Δrrp6, sen1-3 GAL-HA-Trz1::KAN, rrp6::URA3, sen1W773A,E774A,W777A This work ( 10 mM 10 Tris pH 8, 150 mM NaCl, 0.1 % NP-40, 0.5 mM EDTA, 1 mM DTT, 5% glycerol) and proteins were then eluted by cleaving the protein A moiety with the TEV protease in TEV cleavage overnight at 4°C.Mass spectrometry analysis and label-free quantificationAnalysis of Sen1 and RNAPIII coimmunoprecipitates by mass spectrometry was conducted by the proteomics core facility of the Institut Jacques Monod. Proteins were digested by adding 0.2 µg of trypsin (Promega, Madison, WI, USA) per sample followed by incubation in 25 mM NH 4 HCO 3 at 37°C overnight. The resulting peptides were desalted using ZipTip μ-C18 Pipette Tips (Pierce Biotechnology, Rockford, IL, USA) and analyzed using an Orbitrap Fusion equipped with an easy spray ion source and coupled to a nano-LC Proxeon 1200 (Thermo Scientific, Waltham, MA, USA). Peptides were loaded with an online pre-concentration method and separated by chromatography using a Pepmap-RSLC C18 column (0.75 x 750 mm, 2 μm, 100 Å) from Thermo Scientific, equilibrated at 50°C and operating at a flow rate of 300 nl/min.Peptides were eluted by a gradient of solvent A (H 2 O, 0.1 % FA) and solvent B (ACN/H 2 O 80/20, 0.1% FA), the column was first equilibrated 5 min with 95 % of A, then B was raised to 28 % in 105 min and to 40% in 15 min. Finally, the column was washed with 95% of B during 20 min and re-equilibrated with 95% of A during 10 min. Peptide masses were analyzed in the Orbitrap cell in full ion scan mode, at a resolution of 120,000, a mass range of m/z 350-1550 and an AGC target of 4.10 5 . MS/MS were performed in the top speed 3 s mode. Peptides were selected for fragmentation by Higher-energy C-trap Dissociation (HCD) with a Normalized Collisional Energy of 27% and a dynamic exclusion of 60 seconds. Fragment masses were measured in an Ion trap in the rapid mode, with and an AGC target of 1.10 4 . Monocharged peptides and unassigned charge states were excluded from the MS/MS acquisition. The maximum ion accumulation times were set to 100 ms for MS and 35 ms for MS/MS acquisitions respectively. Label-free quantification was done on Progenesis QI for Proteomics (Waters, Milford, MA, USA) in Hi-3 mode for protein abundance calculation. MGF peak files from Progenesis were processed by Proteome Discoverer 2.4 with the Mascot search engine. The Swissprot protein database was typically used for interrogation. A maximum of 2 missed cleavages was authorized. Precursor and fragment mass tolerances were set to 7 ppm and 0.5 Da, respectively. The following post-translational modifications were included as variable: . Briefly, 2 L of cells expressing an HTP-tagged version of Rpc160 (the largest subunit of RNAPIII) at the endogenous locus were grown at 30°C to OD 600 ~ 0.6 in CSM-TRP medium. Cells were crosslinked for 50 seconds using a W5 UV crosslinking unit (UVO3 Ltd) and harvested by centrifugation. Cell pellets were washed once with ice-cold 1x PBS and resuspended in 2.4 mL of TN150 buffer (50 mM Tris pH 7.8, 150 mM NaCl, 0.1% NP-40 and 5 mM b-mercaptoethanol) per gram of cells in the presence of protease inhibitors (Complete™ EDTA-free Protease Inhibitor Cocktail, Roche). Suspensions were flash frozen in droplets and cells subjected to cryogenic grinding using a Ball Mill MM 400 (5 cycles of 3 minutes at 20 Hz). 
The resulting frozen lysates were thawed on ice and digested with DNase I (165 units per gram of cells) at 25°C for 1h to solubilize chromatin and then clarified by centrifugation at 16 krpm for 30 min at 4°C. RNA-protein complexes were immobilized on M-280 tosylactivated dynabeads coupled with rabbit IgGs (10 mg of beads per sample), washed with TN1000 buffer (50 mM Tris pH 7.8, 1 M NaCl, 0.1% NP-40 and 5 mM β-mercaptoethanol), and eluted by digestion with the TEV protease. RNAs were subjected to partial degradation to reduce their size by adding with 0.2 U of RNase cocktail (RNace-IT, Agilent) and the reaction was stopped by the addition of guanidine-HCl to a final concentration of 6 M. RNA-protein complexes were then incubated with Ni-NTA sepharose (Qiagen, 100 μl of slurry per sample) overnight at 4°C and extensively washed. Sequencing adaptors were ligated to the RNA molecules as described in the original procedure. RNA-protein complexes were eluted with elution buffer containing 50 mM Tris pH 7.8, 50 mM NaCl, 150 mM imidazole, 0.1% NP-40 and 5 mM βmercaptoethanol fractionated using a Gel Elution Liquid Fraction Entrapment Electrophoresis (GelFree) system (Expedeon) following manufacturer's specifications. The fractions containing Rpc160 were treated with 100 μg of proteinase K, and RNAs were purified and reverse-transcribed using reverse transcriptase Superscript IV (Invitrogen). described procedure[START_REF] Moreno-Morcillo | Solving the RNA polymerase I structural puzzle[END_REF] with the following modifications: for cells lysis and equilibration of the heparin column (GE Healthcare), we used instead a buffer containing 250 mM Tris-HCl pH 8, 250 mM ammonium sulfate, 20% glycerol, 1 mM EDTA, 10 mM MgCl 2 ,10 µM ZnCl 2 , 12 mM b-mercaptoethanol and a protease inhibitor cocktail (0.3 µg/mL leupeptin, 1.4 µg/mL pepstatin A, 170 µg/mL PMSF and 330 µg/mL benzamidin). Figure S9 : S9 Figure S9: Yeast two hybrid assays to identify RNAPIII components interacting with Sen1. (A) Scheme showing the strategies of two hybrid screening. Plasmids containing RNAPIII subunits were from Flores et al., 1999. Plasmids containing Sen1 and Ctf4 (indicated in red color) were constructed in this study. Only the CTD of Ctf4, which contains the Sen1-interaction region was used as putative positive control (Appanah et al., 2020) (B) Assays of the two hybrid screens. Positive results from both methods are highlighted in boxes. Figure S10 : S10 Figure S10: Alignment of the amino acid sequence of Sen1 proteins from different organisms. Sc, S. cerevisiae; Sp, S. pombe; Hs, H. sapiens. Structural features indicated above the sequence are based on S. cerevisiae Sen1, including the N-terminal domain (NTD, aa 1-975), the helicase domain (aa 1095-1876), the "brace" (aa 1097-1149), and the "prong" (aa 1461-1554) (Leonaite et al., 2017). The Glc7-interacting motif (aa 1999-2003) (Nedea et al., 2008) and the Nrd1interacting motif (NIM, aa 2052-2063) (Han et al., 2020) are also indicated. The domain organization of the helicase domain of S. cerevisiae Sen1 is presented in Figure 2-8. Sequence alignment performed by the mafft program (https://mafft.cbrc.jp/alignment/software/). ( https://alphafold.ebi.ac.uk). Thanks to this scientific advance, the 3D models of the structure of full-length Sen1 from S. cerevisiae and S. pombe are now available, as shown in FigureS11.The structure of the human homologue of Sen1, Senataxin (HsSETX), is also included for comparison. 
The experimentally-generated structure of ScSen1 helicase domain (HD)(Leonaite et al., 2017) has already been shown in Figure2-8 (herein Sen1 HD ). Figure S11 : S11 Figure S11: Structure of various Sen1 proteins predicted with AlphaFold . (A) AlphaFold-predicted structures of Sen1 in S. cerevisiae (left, ID: AF-Q00416-F1), S. pombe (middle, ID: AF-Q92355-F1) and H. sapiens (right, ID: AF-Q7Z333-F1). (B) Zoom-in view (with the corresponding overall view) showing the three amino acids mutated in S. cerevisiae sen1-3 variant. (Jumper et al., 2021). AlphaFold produces a per-residue confidence score (pLDDT) between 0 and 100. Some regions below 50 pLDDT may be unstructured in isolation. (C) A model of the structure of HsSTEX helicase domain (HD) (green) based on ScSen1 HD (Leonaité et al., 2017) (magenta). Generated by Richard Stefl and Marek Sebesta (CEITEC, Czech Republic) with HHPRED suite. Figure S12 : S12 Figure S12: Heatmap analysis of the distribution of RNAPIII along tRNA genes.Only 3' ends of the reads are analysed in the heatmap. All nuclear tRNA genes are aligned to the 3' end of the mature tRNAs, with 150 nt upstream and 50 nt downstream, and clustered by the amino acid isotype. S . pombe est non-essentiel, entraîne un déplacement global de la distribution de l'ARNpol III en aval des gènes d'ARNt, ce qui est cohérent avec l'idée que Sp Sen1, en plus des suites de avons également effectué une experience de CRAC d'ARNpol III sur des cellules de type sauvage et sen1-3 arrêtées en phase G1, lorsque le replisome est absent des cellules. De manière importante, nous avons observé des défauts de terminaison similaires chez le mutant dans les cellules arrêtées en G1 et dans les cellules asynchrones, ce qui indique que la fonction de Sen1 dans la transcription par l'ARNpol III est indépendante de la présence du replisome.3) Sen1 peut induire la terminaison de la transcription de l'ARNpol III in vitroNous avons précédemment démontré que Sen1 peut promouvoir directement la terminaison de la transcription de l'ARN pol II d'une manière indépendante de la séquence de l'ARN(Porrua et Libri, 2013b). Pour tester si Sen1 peut également induire directement la terminaison de la transcription de l'ARNpol III et si cela nécessite la présence de signaux de terminaison canoniques, nous avons utilisé un système de terminaison de la transcription in vitro contenant des protéines purifiées (c'est-à-dire ARNpol III et Sen1), des matrices de transcription et de l'ARN naissant. Nous avons constaté que Sen1 peut à la fois renforcer la terminaison au niveau des terminateurs inefficaces et favoriser la terminaison au niveau des séquences non liées. Nous proposons que Sen1 assiste ARNpol III pour la terminaison au niveau des terminateurs sous-optimaux et fonctionne donc comme une voie de terminaison de sécours. Nous avons également étudié le mécanisme par lequel Sen1 induit la terminaison de la transcription par l'ARNpol III. Plus précisément, nous avons montré que le domaine hélicase de Sen1 peut déclencher la libération de l'ARNpol III in vitro avec une efficacité similaire à celle de la version complète de Sen1. Ceci suggère fortement que l'interaction entre Sen1 et l'ARNpol III via le domaine N-terminal est importante pour le recrutement de Sen1 aux transcrits ARNpol III in vivo. De plus, la terminaison de ARNpol III médiée par Sen1 est ATPdépendante et nécessite l'interaction de Sen1 avec l'ARN naissant. 
Ces résultats indiquent que Sen1 utilise un mécanisme similaire pour induire la terminaison de la transcription de l'ARNpol II et l'ARNpol III. 4) Les structures d'ARN peuvent favoriser la terminaison de la transcription de l'ARNpol III in vitro Les résultats ci-dessus indiquent que, comme pour l'ARNpol II, la terminaison de la transcription de l'ARNpol III médiée par Sen1 implique la translocation de Sen1 le long du transcrit naissant, et nos données structurales et biochimiques antérieures ont montré que Sen1 ne peut interagir qu'avec un ARN simple brin (Porrua et Libri, 2013b, Leonaite et al, 2017). Les ARNt sont des molécules d'ARN hautement structurées et pour une grande majorité d'entre elles, la region entre l'extrémité 3' de l'ARNt mature et le terminateur primaire est d'au plus 7 nt. Nous avons imaginé qu'une raison possible pour laquelle Sen1 ne fonctionne pas aux sites de terminaison primaire est que sa liaison à l'ARN naissant est entravée par la formation co-transcriptionnelle de structures stables à proximité du terminateur primaire. À l'inverse, des ARN moins structurés permettraient le chargement et la fonction de Sen1. Pour explorer ces possibilités, nous avons réalisé des essais de transcription in vitro avec des matrices de transcription modifiés pour former une structure du type épingle à cheveux qui se trouve naturellement dans l'ARN 5S, un transcrit dépendant de ARNpol III, en amont de suites de T de différentes longueurs. De façon surprenante, nous avons constaté que la présence de cette structure dans l'ARN transcrit pouvait augmenter de façon significative l'efficacité de terminaison de la transcription au niveau d'un terminateur sousoptimal. L'effet observé est similaire au résultat de l'addition de Sen1 à la version non structurée du même ARN. Ceci indique que non seulement Sen1 mais aussi les structures secondaires de l'ARN peuvent améliorer la fonction des terminateurs inefficaces. Nous avons également montré que les structures d'ARN peuvent améliorer la terminaison de la transcription uniquement lorsqu'elles se trouvent à proximité des segments T. Nous avons également montré, avec des approches similaires, que les structures d'ARN entravent le recrutement de Sen1 à l'ARN naissant et, par conséquent, empêcheraient Sen1 de fonctionner au niveau des terminateurs primaires. Conclusion Dans la présente étude, nous combinons des approches à haute résolution à l'échelle du génome avec des essais de terminaison de transcription in vitro utilisant des composants hautement purifiés afin de disséquer le mécanisme de terminaison de transcription de l'ARNpol III chez S. cerevisiae. Nous observons que la terminaison au niveau du terminateur primaire des gènes dépendant de l'ARNpol III (c'est-à-dire la premiere suite de T après le gène), n'est que partiellement efficace et, ainsi, une fraction considérable de polymérases se terminent dans la région en aval. Nous fournissons des données in vivo et in vitro prouvant que l'hélicase Sen1 joue un rôle global dans la terminaison de la transcription par l'ARNpol III et que cette fonction repose sur l'interaction de son domaine N-terminal avecl' ARNpol III. Cependant, nous constatons que Sen1 contribue très peu à l'efficacité de la terminaison primaire et qu'il fonctionne principalement comme un mécanisme de sécurité pour promouvoir la terminaison des ARNpol III qui outrepassent le premier signal de terminaison. 
Nos données indiquent que seuls les suites de T dans une gamme de longueur particulière sont suffisants pour promouvoir la terminaison autonome par l'ARNpol III. Néanmoins, nous montrons que les gènes ARNt contiennent souvent des signaux de terminaison sous-optimaux et que leur capacité à induire la terminaison peut être complémentée par Sen1 ainsi que par les structures secondaires de l'ARN naissant. Ces deux facteurs agissent de manière mutuellement exclusive puisque la présence de structures d'ARN empêche l'association de Sen1 avec le transcrit, ce qui est strictement requis pour que Sen1 puisse induire la terminaison de la transcription. Bien que Sen1 puisse également promouvoir la terminaison des ARNpol IIIs qui sont arrêtées dans des sites autres que les suites de T, nous constatons que les structures d'ARN ne peuvent fonctionner qu'en association avec des signaux de terminaison canoniques, c'est à dire des suites de T. Ensemble, nos données permettent de revoir les anciens modèles de terminaison de la transcription par l'ARNpol III et offrent une vision nouvelle et détaillée de la façon dont les composants intrinsèques du CE (c'est-à-dire les suites de T et les structures d'ARN) et le facteur extrinsèque Sen1 coopèrent pour promouvoir une terminaison efficace de la transcription par l'ARNpol III. Nous proposons que les structures de l'ARN contribuent à l'efficacité de la terminaison primaire dans certains cas (c'est-à-dire les gènes avec des terminateurs sous-optimaux), grâce à la proximité naturelle de la tige acceptrice de l'ARNt au premier T-tract, alors que Sen1 fonctionnerait préférentiellement dans les régions en aval. Une terminaison efficace est importante pour le recyclage rapide des polymerases pour de nouveaux cycles de transcription et, ainsi, pour le maintien d'une expression robuste des ARNt et d'autres transcrits dépendants de l'ARNpol III qui sont essentiels pour soutenir la prolifération cellulaire. De plus, il est crucial de prévenir ou de minimiser les conflits avec les autres polymérases en train de transcrire ainsi qu'avec les autres machineries associées à l'ADN. Table 2 -1: Subunit composition of RNAPII and its general transcription factors. 2 Phosphorylation of the carboxy-terminal domain (CTD) of Rpb1, the largest subunit of RNAPII is an important event in the transcription cycle. The RNAPII CTD is a long unstructured domain composed of multiple heptapeptide repeats with the consensus sequence Tyr 1 -Ser 2 -Pro 3 -Thr 4 -Ser 5 -Pro 6 -Ser 7 (YSPTSPS). The number of these repeats varies between species: 26 in S. cerevisiae, 29 in S. pombe, 32 in C. elegans, 37 in D. melanogaster and A. thaliana, and 52 in H. sapiens. Most of the repeats in yeast are identical to the consensus, while in human, only the first 26 are highly conserved (Figure Table 2 -2: Factors involved in RNAPII transcription termination. 2 Table 1 : mass spectrometry analyses of coimmunoprecipitation experiments using different versions of Sen1 as bait. 1 Values correspond to the mascot score. ND, not detected. Note that the mascot score depends on the size of the protein and therefore, truncated versions of Sen1 have lower values. ND, not detected. The full datasets and the results of additional replicates are included in tables S1, S2 and S3. 
TAP-Sen1 vs TAP-Sen1DNTD TAP-Sen1 NTD Sen1-TAP vs Sen1-3-TAP Protein Complex Ctrl Sen1 Sen1DNTD Ctrl Sen1 NTD Ctrl Sen1 Sen1-3 Sen1 NNS 50 24819 14182 153 4749 0 7299 Nrd1 NNS 0 439 665 ND ND 0 240 Nab3 NNS 0 417 575 ND ND 0 89 Ctf4 Replisome 0 2465 0 19 1343 0 0 Mrc1 Replisome 0 40 0 ND ND 0 21 Rpa190 RNAPI 126 463 1326 31 0 492 1812 Rpa135 RNAPI 87 122 664 29 0 400 1060 Rpb1 RNAPII 0 2770 3003 81 102 113 2494 Rpb2 RNAPII 0 2302 2328 86 55 34 1684 Rpc160 RNAPIII 0 7479 0 0 358 25 59 Rpc128 RNAPIII 0 4731 0 0 125 42 204 Rpc82 RNAPIII 0 2735 0 0 233 0 34 Rpc53 RNAPIII 68 1255 183 0 56 0 34 Rpc37 RNAPIII 0 1982 0 0 132 0 50 Rpc34 RNAPIII 0 2022 0 0 28 0 174 Rpc31 RNAPIII 0 1212 0 0 19 0 0 Rpc25 RNAPIII 0 410 0 0 0 0 0 Rpc17 RNAPIII 0 422 0 0 91 0 0 Rpc11 RNAPIII 0 191 0 15 37 33 62 Table S6 : S6 Yeast strains used in this study. Number Name Genotype Source DLY671 BMA as W303, ∆trp1 F. Lacroute DLY1152 Δrrp6 as W303, rrp6::URA3 F. Lacroute DLY1605 pTet-NRD1 as BMA, pTet::FLAG::NRD1 J. Colin DLY1626 pTet-NRD1, Δrrp6 as BMA, pTet::FLAG::NRD1, rrp6::KAN This work DLY1656 P GAL1-TAP-SEN1 as BMA, TRP1::Pgal::TAP::SEN1 Porrua et al, 2012 DLY2692 P GAL1-TAP-sen1ΔNter as BMA, TRP1::Pgal::TAP::sen1∆Nter Han et al, (∆1-975) 2020 DLY3171 Sen1-TAP as W303, SEN1::TAP::KAN Appanah et al, 2020 DLY3173 sen1-3-TAP as W303, Appanah et sen1W773A, E774A, W777A::TAP::KAN al, 2020 DLY3197 sen1-3 as W303, sen1W773A, E774A,W777A This work DLY3246 sen1-3, Δrrp6 as W303, sen1W773A, E774A,W777A rrp6::URA3 de la transcription. Selon le modèle actuel, la terminaison de l'ARNpol III repose uniquement sur une suite de thymines (T) dans le brin d'ADN non-matrice qui serait suffisant pour induire une pause de l'ARNpol III et sa dissociation de l'ADN. Cependant, mon groupe a précédemment trouvé une interaction entre l'ARNpol III et l'hélicase Sen1, un facteur de terminaison de la transcription de l'ARNpol II bien caractérisé, ce qui nous a incité à étudier un rôle possible de Sen1 dans la terminaison de la transcription de l'ARNpol III chez la levure bourgeonnante.Pour élucider la fonction spécifique de Sen1 dans la terminaison de l'ARNpol III, j'ai utilisé un mutant (sen1-3) contenant trois mutations ponctuelles dans le domaine N-terminal de Sen1, qui sont suffisantes pour abolir son interaction avec l'ARNpol III sans affecter la terminaison de la transcription de l'ARNpol II. En générant des cartes à haute résolution de la transcription par l'ARNpol III, j'ai observé qu'une fraction significative d'ARNpol III lit normalement à travers le terminateur primaire (c'est-à-dire la première suite de Ts en aval de l'extrémité 3' des gènes). J'ai montré que les mutations dans sen1-3 induisent des défauts de terminaison dans la plupart des gènes dépendants de l'ARNpol III, ce qui indique que l'interaction de Sen1 avec l'ARNpol III est globalement requise pour une terminaison efficace de la transcription de l'ARNpol III in vivo. De plus, j'ai montré que Sen1 agit principalement comme un mécanisme de "fail-safe" pour promouvoir la terminaison des ARNpol IIIs qui dépassent le terminateur primaire. Afin d'explorer si Sen1 peut induire directement la dissociation de l'ARNpol III de l'ADN, j'ai réalisé des essais de terminaison de la transcription in vitro avec des protéines purifiées. Tout d'abord, j'ai montré que six Ts consécutives sont nécessaires pour une terminaison efficace de la transcription par l'ARNpol III in vitro. 
De plus, j'ai démontré que Sen1 peut promouvoir la terminaison au niveau des terminateurs faibles contenant 4 ou 5 Ts, ainsi que d'autres types de séquences de pause. Ensuite, j'ai montré que le domaine hélicase de Sen1 peut induire la terminaison de l'ARNpol III de façon similaire à la protéine entière in vitro. De Table 3-2: Subunit composition of RNAPIII and its transcription factors. (Candidate for) Joint PhD degree from EMBL and Heidelberg University, Faculty of Biosciences. Max Planck Institute for Biophysical Chemistry, Macromolecular Crystallography, Am Fassberg 11, 37077, Goettingen, Germany Present address: Max Planck Institute for Biophysical Chemistry, Department of Molecular Biology Am Fassberg 11, 37077 Göttingen, Germany. The Institute of Cancer Research, Structural Biology Division, Fulham Road, SW7 3RP, London, UK. [Manuscript] An integrated model for termination of RNA polymerase III transcription DISCUSSION & PERSPECTIVES Modèle pour le rôle des signaux de terminaison canoniques, des structures d'ARN et de Sen1 dans la terminaison de la transcription par l'ARNpol III (montré pour les gènes d'ARNt). Les mécanismes régissant la terminaison de la transcription du ARNpol III in vivo sont considérablement plus complexes que ceux représentés dans les modèles précédents et impliquent l'interaction entre des éléments distincts agissant en cis et le facteur de terminaison extrinsèque Sen1. Au niveau du terminateur primaire, la terminaison implique généralement l'action d'une suite de T et la structure secondaire de l'ARNt naissant. Les structures d'ARN ne sont requises que pour les terminateurs de longueur non optimale. Dans les régions en aval, la transcription par l'ARNpol III est typiquement terminée soit par des terminateurs secondaires "forts", sans l'aide de Sen1, soit par des signaux de terminaison "faibles" si Sen1 peut accéder et se charger sur l'ARN naissant. D'après nos données, Sen1 peut également favoriser la terminaison au niveau des sites de pause autres que les suites de T. Acknowledgements Acknowledgements ................................................................................................. i Abstract ................................................................................................................ iii Acknowledgements We thank G. Wentzinger for technical assistance and other members of the Libri lab for fruitful discussions. We thank F. Fiorini and Hervé Le Hir for sharing the Upf1 protein. We thank the Roscoff Bioinformatics platform ABiMS (http://abims.sb-roscoff.fr) for providing computational resources and support. This work has benefited from the facilities and expertise of the high throughput sequencing core facility of I2BC (http://www.i2bc.parissaclay.fr/). We thank the proteomics facility of the Institut Jacques Monod, supported by the Region Ile-de-France (SESAME), Université de Paris and the CNRS, for their technical assistance. thanks to my country, for the support from China Scholarship Council, for the timely help and consideration from the Educational Office of Chinese Embassy in Paris. Funding This study was supported by the Centre National de la Recherche Scientifique (CNRS), the Agence National pour la Recherche (ANR-16-CE12-0001-01 to O.P. and ANR-16-CE12-0022-01 to D.L) and the Fondation pour la Recherche Medical (F.R.M., programme Equipes 2019). J.X. was supported by the China Scholarship Council, by the FRM (FDT202012010433) and the LabEx "Who Am I?" 
(ANR-11-LABX-0071 and the Université de Paris IdEx ANR-18-IDEX-0001) funded by the French Government through its "Investments for the Future". U.A. was supported by the French Ministry of Education and Research and by the Fondation ARC pour la recherche sur le cancer. M.G. was supported by a Boehringer Ingelheim Fonds PhD fellowship and the EMBL International PhD program. C.W.M. was supported by the EMBL. J.S. and V.P. were supported by the DFG grant PE 2079/2-2. Data availability The RNAPIII CRAC data have been deposited in NCBI's Gene Expression Omnibus (GEO) and are accessible through GEO Series accession number GSE174738. List of Boxes Supplementary material List of supplementary material: -Figures S1: Complementary -Table S6: Yeast strains used in this study. Material provided as separate xls files and accessible by the following link: https://www.dropbox.com/sh/th60zut0jz3bvj8/AAAUV1Df5ub6Hli7qqdMten2a?dl=0 -Table S1: mass spectrometry analyses of TAP-Sen1 and TAP-Sen1 DNTD coimmunoprecipitates. -Table S2: mass spectrometry analyses of TAP-Sen1 NTD coimmunoprecipitates. -Table S3: mass spectrometry analyses of Sen1-TAP and Sen1-3-TAP coimmunoprecipitates. -Table S4: label-free quantitative mass spectrometry analyses of Rpc160-HTP coimmunoprecipitates in a WT and a sen1-3 background. -Table S5: list of oligonucleotides used in this study. -Table S7: annotations of tRNA genes from the 5' end to the mature tRNA to the primary terminator in bed format. -Table S8: annotations of potential secondary terminators of tRNA genes in bed format. -Table S9: annotations of tRNA genes readthrough regions in a WT and a sen1-3 mutant in bed format. Authors contributions
04100319
en
[ "phys", "spi" ]
2024/03/04 16:41:20
2022
https://hal.science/hal-04100319/file/DTIS23082.1655979511.pdf
Gilles Ferreres Thierry Le Moing Clément Toussaint Simulation and comparison of control laws for the landing of an automonous parafoil under adverse conditions 1 Simulation and Comparison of Control Laws for the Landing of an autonomous Parafoil under Adverse Conditions Gilles Ferreres1 , Thierry Le Moing2 and Clément Toussaint 3ONERA, Toulouse, France, 31055 A nonlinear simulation model of a parafoil was developed. Then, two pairs of flight control and guidance laws were designed. In the first approach, an MPC flight controller controls the yaw angle, and the guidance law computes a trajectory and the associated yaw angle reference signal to be followed. In the second simplified approach, the flight controller controls the yaw rate, and the guidance law computes the yaw rate reference value that minimizes the distance to the target. Each pair of flight control and guidance laws is tested in the non-linear closed loop simulation tool, first without and then with turbulence. The performance appears quite satisfactory, not only as regards the main criterion which is the distance to the target, but also relating to the magnitude of closed loop signals, such as yaw rates and asymmetric brake deflections, which remain within realistic levels. I. Nomenclature 𝑉 ℎ = parafoil horizontal airspeed 𝑉 𝑣 = parafoil vertical airspeed 𝑇 𝑎𝑝𝑝 = final approach time 𝑇 𝑡𝑢𝑟𝑛 = final turn time 𝑊 𝑥 , 𝑊 𝑦 , 𝑊 𝑧 = wind components in the inertial frame 𝑉 𝑤 = wind speed 𝜓 𝑤 = wind heading II. Introduction A. Context Airdrop Aerial Delivery represents an efficient way to deliver equipment onto sites with no infrastructure, and without exposing cargo aircraft. However, landing accuracy of autonomous parafoil delivery systems remains an operational limitation when the mission requires a final dispersion smaller than 50m. Designing a guidance and control system which ensures precision landing, especially in disturbed aerological environment, is a highly methodological challenge: First, parafoils are under-actuated systems mainly controlled with asymmetric brake deflection, symmetric deflection having a very limited effect on glide slope. Then, the low airspeed of parafoils compared to conventional aircraft makes the flight path very sensitive to changes in wind speed. Most notably during final manoeuvre, turbulence-induced disturbances significantly impact landing accuracy. Finally, airdrop flight control systems are equipped with a limited instrumentation usually excluding air data devices. Achieving very stringent landing precision requirements for a wide range of weather conditions requires the implementation of innovative control and guidance algorithms efficiently exploiting the system dynamics. This objective has motivated the launch of a common pluri-annual research program between ONERA and DGA (the French Defense Agency). The first step concerned the development of models and tools apt to represent adequately the behavior of payload/parafoil systems. A state-of-the-art 9-DoF (degrees of freedom) dynamics flight model was developed [START_REF] Toussaint | Flight dynamic modeling of the PBO parafoil using sparse preliminary flight test data[END_REF], which takes into account not only the relative rotations of the canopy and of the payload [START_REF] Barrows | Apparent mass of parafoil with spanwise camber[END_REF], but also added mass effects [START_REF] Lissaman | Apparent mass effects on parafoil design[END_REF] that are significant contributors to the canopy dynamics. 
Modellings found in the literature [START_REF] Barrows | Multibody parafoil model[END_REF] could provide an initial parameter set in order to represent the envisioned system, equipped with the PBO parachute used by the French army. In a second step, instrumented flight tests were carried out by DGA, and the 9-DoF model was updated to match the measured trajectories and flight parameter evolutions during piloted glides. The updated model was then implemented in simulation tools aimed at assessing the factors limiting landing accuracy of autonomous parafoil landing systems, through realistic scenarios (mostly weather scenarios). Preliminary control and guidance laws were developed, and were confirmed to be critical to the achieved performance and robustness. Additionally, with the aim of gaining experience in the field of parafoil control systems, ONERA engaged in experimental studies with a mini-paramotor. This test-bed is currently used to try and validate new landing guidance and control concepts. The most common approaches encountered in the literature address the problem of terminal guidance in a twostep strategy: 1) A terminal path planner delivers an optimal flyable path given a known wind profile 2) The optimal trajectory is tracked by a specific controller which commands canopy brake deflections. The trajectory planning performance is limited by the capability to predict wind along the final trajectory, and by the approximations of the model used in the path planner, which is generally a kinematic model. Moreover, the low control capacity limits the opportunity to correct for guidance errors and wind disturbances. Therefore, the optimal trajectory needs to be periodically updated during the final manoeuvre, accounting for the current state and wind estimation. Numerous authors have proposed a wide range of optimal parafoil guidance strategies: 1) The band limited method presented in [START_REF] Carter | Band limited guidance and control of large parafoils[END_REF] encompasses the actuator limitation in the optimization problem 2) Modified Dubins paths are proposed in [START_REF] Rademacher | In-flight trajectory planning and guidance for autonomous parafoils[END_REF][START_REF] Van Der Kolf | Flight Control system for an autonomous parafoil[END_REF] for terminal guidance phase planning with a concept of altitude margin 3) Flight path parameterization using single or connected cubic Bézier curves was shown to be an attractive path-planning scheme, offering geometrically flexible trajectory shapes that maintain landing accuracy in the presence of terrain obstacles [8] 4) An optimal terminal guidance strategy based on the inverse dynamics in the virtual domain is proposed in [START_REF] Yakimenko | Using Direct Methods for terminal guidance of autonomous aerial delivery systems[END_REF][START_REF] Slegers | Optimal Control for Terminal Guidance of Autonomous Parafoils[END_REF][START_REF] Slegers | Terminal Guidance of Autonomous Parafoils in High Wind-to-Airspeed Ratios[END_REF]. This efficient algorithm which can be used in real time accounts for varying wind and disturbances by periodically updating the trajectory during the final turn. All these techniques provide a reference signal for the heading rate or heading angle. The aim of the flight controller is to generate brake deflections that track this reference signal, i.e. the true heading rate or angle should (exactly) follow this reference signal. 
Among the variety of tracking controllers proposed in the literature, Model Predictive Control (MPC) seems to be an efficient way to control the yaw angle of the canopy over the terminal trajectory horizon [START_REF] Slegers | Model Predictive Control of a Parafoil and Payload System[END_REF] [START_REF] Slegers | Optimal Control for Terminal Guidance of Autonomous Parafoils[END_REF][START_REF] Slegers | Terminal Guidance of Autonomous Parafoils in High Wind-to-Airspeed Ratios[END_REF][START_REF] Alaniz | Model predictive control with application to real time hardware and a guided parafoil[END_REF]. The MPC controller uses an internal model of the turn rate dynamics to determine an optimal set of brake deflection commands, given a heading reference path. A simple two DoF model of the roll and yaw dynamics was used to develop a yaw controller [START_REF] Slegers | Optimal Control for Terminal Guidance of Autonomous Parafoils[END_REF][START_REF] Slegers | Terminal Guidance of Autonomous Parafoils in High Wind-to-Airspeed Ratios[END_REF]. This fourth order linear model was obtained by linearizing the 6-DoF model assuming a constant air velocity. Ward and Fowler showed that a single degree-of-freedom linear model including a turn rate first order response may be used quite effectively for parafoil inner-loop tracking [START_REF] Ward | Adaptive Glide Slope Control for Parafoil and Payload Aircraft[END_REF]. In short, the rationale for focusing on the terminal guidance phase is that the sought landing accuracy relies chiefly on its performance, in particular on its capacity to compensate efficiently for initial position errors and unknown wind perturbations until touchdown. B. Content This paper focuses on the terminal approach phase, which consists in a final turn. The flight controller and the guidance law are designed to ensure that the parafoil lands as close as possible to the landing target in the presence of turbulence. Realistic simulation models and tools are a pre-requisite to convincingly address this topic. Based on previous studies [START_REF] Toussaint | Flight dynamic modeling of the PBO parafoil using sparse preliminary flight test data[END_REF] funded by DGA, in which specific instrumented piloted parafoil flight tests were performed, various models have been developed in order to gain a fine knowledge of parafoil flight dynamics, and to support the design of flight controllers. Although the reference was shown to be the 9 DOF model, simpler 6-DOF models generally turn out accurate enough to assess guidance and control laws. The results presented in this paper concern a mini-paramotor employed in experimental studies. First, a representative 6-DOF model has been developed and integrated in a non-linear simulation tool. Second, two pairs of flight control and guidance laws have been developed. In the first approach, an MPC flight controller controls the yaw angle, and the guidance law computes a trajectory and the associated yaw angle reference signal to be followed. This guidance law makes use of wind-fixed frame [START_REF] Rademacher | In-flight trajectory planning and guidance for autonomous parafoils[END_REF], which greatly simplifies the trajectory planning formulation. In the second simplified approach, the flight controller controls the yaw rate, and the guidance law computes the yaw rate reference value that minimizes the distance to the target. 
Both guidance laws exploit a kinematic model, instead of a dedicated model matching exactly the dynamics of the inner closed loop corresponding to the flight controller. Last, each pair of flight control and guidance laws is tested in the non-linear closed loop simulation tool, first without and then with turbulence. The performance appears quite satisfactory, not only as regards the main criterion which is the distance to the target, but also relating to the magnitude of closed loop signals, such as yaw rates and asymmetric brake deflections, which stay within realistic levels. III. Experimental setup A research testbed was developed on the basis of a mini-paramotor designed by the company Opale Paramodels. It aims at studying the dynamic behavior of a parafoil-load system, and at investigating Guidance, Navigation and Control (GNC) concepts. It is employed in an on-going ONERA research project, whose aim is to improve the accuracy of terminal guidance of autonomous parafoil systems and to develop experimental skills. The mini-paramotor consists of a trike, a canopy, and a brushless motor connected to a 15" propeller (only used to make the mini-motor gain altitude), and two servomotors to activate the canopy brake lines (Fig. 1). With the canopy area of about 2.4m², the system is capable of carrying a total mass of about 6-7 kg. The forward airspeed is around 8m/s and the descent rate around 2.4 m/s. A specific avionics system has been specified and implemented for this project, which integrates off-the-shelf components, including: 1) On board computer 2) Inertial measurement unit 3) GNSS 4) Barometer 5) Anemometer 6) Servomotors 7) Potentiometers for copying positions of brake lines. Fig. 1 Mini-paramotor and payload system. An initial avionics had been based upon the 3DR Pixhawk, which facilitates the integration of all sensors allowing the design of a flight controller. A new type of avionics has been developed, based on a Raspberry Pi 3 and a ROS architecture. Fig. 2 GNC hardware. This single-board computer offers higher cpu performance and memory capacity, necessary to implement advanced path planning algorithms. On the other hand, the hardware capabilities of the Raspberry Pi's interface are not adequate for building a flight controller, so that an EMLID Navio 2 shield was used to facilitate the development of a flight controller. This shield is equipped with double IMU, Ublox GNSS, barometer, ADC interface, PWM output to control servomotors and SBUS input to decode RC signals. Internal IMU and GNSS have been replaced by external devices for better accuracy. The avionics is currently fitted with a SBG Ellipse-A IMU and a Drotek RTK GNSS (Fig. 2). In its standard configuration, the mini-paramotor is remotely controlled by the pilot using a radio transmitter. In this case, the radio receiver directly controls the propeller engine and the brake line servomotors. An Sbus signal is also sent to Raspberry Pi through the Navio 2 shield, to handle flight control modes and to control data logging. An electromechanical switch, controlled by radio transmitter, allows the servomotor control from either the radio receiver, or from the flight control laws implemented in the Raspberry Pi. The mini-paramotor is also equipped with a remote identification device, required by national regulations for all unmanned aircraft weighing more than 800g. A ground station, based on the open source software QGroundControl running on Linux PC, provides a bidirectional radio modem link to the vehicle. 
The MAVLink communication protocol is used to monitor key flight parameters, and also to send base station RTK corrections to the mini-paramotor. The ROS flight control software performs the following functions: 1) Acquisition of sensor data 2) Sbus signal decoding 3) Kinematic state estimation 4) Wind estimation 5) Terminal path planning 6) Tracking control 7) Servomotor commanding (Hitec HS-785 HB sail winches) 8) Communication with the ground station 9) Data logging to a USB memory stick. The kinematic state estimator designed by ONERA makes use of invariant Kalman filtering. An additional estimator was developed to provide a wind velocity estimate from measurement of GNSS ground velocities and measurement of an anemometer used during the non-propelled flight phases. This configuration does not allow simultaneous estimation of the three components of the wind velocity, and remaining uncertainties in the wind estimation must be compensated by an adaptive trajectory planning. On-line estimation of key parameters of the parafoil (air speed, descent rate, glide ratio) during the flight phases before the final turn can also improve the accuracy of final path planning. Sensor drivers are implemented as separate ROS nodes, and a specific ROS architecture has been designed to provide a strict task scheduling. Thus, all flight control system functions are synchronized by the acquisition of IMU data at a rate of 50 Hz. ROS messages including sensors measurements and computed data are logged to binary files on a USB memory stick. These files can be directly loaded to Matlab workspace for data analysis. IV. Parafoil model A. Identification of the parafoil model Most flight dynamics simulations of parafoil systems rely on 6 or 9-DoF models, depending on whether relative payload motion is considered or not. A 6-DoF model was retained for this design and for the assessment of terminal guidance and control laws, since literature and our experience show that a 6-DoF model yields good enough agreement with experimental data and is well-suited for control law design. The structure of the 6 DoF parafoil dynamic model adopted in this work is similar to that of an aircraft. The parafoil payload system is assumed to be a rigid body. The equations of motion of the parachute-payload system are written in the body reference frame, by summing up forces and moment contributions about the system center of gravity (CG). Aerodynamic modelling consists of static and dynamic derivatives, which are constant values. Unlike in aircraft flight dynamics modelling, the air velocity considered in the aerodynamic model must be evaluated at the aerodynamic center of the canopy, and not at the center of gravity. Aerodynamic derivatives have been estimated from flight test experiments, remotely conducted for flight data acquisition with the avionics system. Flight tests are performed in restricted areas reserved for ONERA flight experiments. Gliding phase duration is limited due to the maximum flying altitude permitted by the UAS regulations, 120m above ground level. In a first step, processing of steady state flights performed with different symmetric brake inputs led to estimate the parameters of lift and drag models. In a second step, analysis of steady-state turns with asymmetric brake deflections enabled estimation of the main lateral aerodynamic derivatives. 
Last, a system identification technique based on an output error method was applied to update the model parameters, using flight tests performed with dynamic brake inputs. Figure 3 presents an illustrative example of a comparison between outputs of the identified model and flight test measurements. This result highlights the limitation of the 6-DoF parafoil model: yaw rate time histories exhibit oscillations with a period around 1 s, due to the yawing motion of the trike supporting the avionics. The chosen model structure does not allow restitution of this yawing mode. A specific open-loop process was set up to identify a dynamic model of the servomotors installed in the mini-paramotor. In order to have no unexplained and spurious delays, as is often the case with applications such as ROS, a dedicated ROS node was developed to perform the generation of a chirp signal, the sending of commands to the PWM servo driver, the acquisition of the servomotor position and the recording of all data. It was checked that the delay introduced by the real-time execution on the Raspberry Pi 3 is around 1 ms, and thus negligible compared to the sampling frequency fs = 50 Hz. Tests were carried out with 4 magnitudes, PWM pulses of 50, 100, 200 and 400 µs respectively, to check the dependency of the results on magnitude. With a simple system like a servomotor, using identification methodologies available in toolboxes is suitable. The CAPTAIN toolbox [START_REF] Young | Recursive estimation and time-series analysis: an introduction for the student and practitioner[END_REF] was chosen to identify a continuous-time (CT) model of the servomotors, since this toolbox was utilized to identify electromechanical systems [START_REF] Young | Recursive estimation and time-series analysis: an introduction for the student and practitioner[END_REF][START_REF] Janot | Identification and control of electro-mechanical systems using state-dependent parameter estimation[END_REF][START_REF] Brunot | An improved instrumental variable method for industrial robot model identification[END_REF]. The time domain refined instrumental variable (RIV) technique implemented in CAPTAIN was used to identify the structure and parameters of the following hybrid Box-Jenkins (BJ) model:

y(t_k) = e^(−n_d·T_e·s) [B(s)/A(s)] u(t_k) + [D(z^−1)/C(z^−1)] ε(t_k)  (1)

where:

A(s) = s^(n_a) + a_(n_a−1) s^(n_a−1) + ⋯ + a_1 s + a_0
B(s) = b_(n_b) s^(n_b) + b_(n_b−1) s^(n_b−1) + ⋯ + b_1 s + b_0  (2)
C(z^−1) = c_(n_c) z^(−n_c) + c_(n_c−1) z^(−n_c+1) + ⋯ + c_1 z^−1 + c_0
D(z^−1) = d_(n_d) z^(−n_d) + d_(n_d−1) z^(−n_d+1) + ⋯ + d_1 z^−1 + d_0

n_a and n_b are the degrees of the polynomials A(s) and B(s), n_c and n_d are the degrees of the polynomials C(z^−1) and D(z^−1), n_d is the number of delays and T_e is the sampling period. Results show that a hybrid BJ model with n_a = 3, n_b = 2, n_c = 3 and D(z^−1) = 1 gives responses in good agreement with the experiment, and that the CT model is weakly magnitude dependent. Figures 4 and 5 display the Bode diagrams of the four CT models identified for the left and right servomotors. To assess the quality of the simulation model, a comparison is shown in Figures 6 and 7 between the time-domain model response and the measured position. V. Path planning A. Introduction The objective of the terminal path planning is to find the optimal turn that satisfies boundary conditions defined by the current and final states at touchdown.
This phase is the last chance to manoeuvre the vehicle, and the terminal guidance scheme must compensate for initial position and other errors. The remaining flight time is unknown, and depends on initial altitude, descent rate and vertical wind velocity. Correct assumptions about the wind are crucial for the landing precision, as wind drift is the result of the integration of the wind profile along the remaining trajectory. The determination of the trajectory is approached as a parametric optimization problem, including a kinematic model of the parafoil to reduce computational burden and to allow on-line optimization. The methodology used considers the following assumptions: 1) Initial state is defined by the cartesian coordinates (𝑥0, y0, z0) and heading 2) Landing is into the wind to reduce relative speed to ground and to improve longitudinal landing accuracy 3) The trajectory ends with a final straight line into the wind to stabilize payload motion. 4) Wind speed 𝑉 𝑤 is unknown, but assumed to be constant during this manoeuvre 5) In first approximation, horizontal airspeed 𝑉 ℎ and descent rate 𝑉 𝑣 are also assumed to be constant. The final guidance setup depicted in Fig. 8 is similar to those considered in Yakimenko and Slegers [START_REF] Yakimenko | Using Direct Methods for terminal guidance of autonomous aerial delivery systems[END_REF]. It starts with a turn, followed by a straight section into the wind. The final approach time 𝑇 𝑎𝑝𝑝 is chosen to stabilize the payload swinging motion and to prepare for the flare manoeuvre. The nominal turn start position is determined on the basis of the wind velocity, assuming a constant turn rate. First, the lateral distance 𝑦0 to the target is determined by the desired constant turn rate. The turn duration is deduced as: 𝑇 𝑡𝑢𝑟𝑛 = 𝜋 𝑦0 2 /𝑉 ℎ (3) Second, the longitudinal turn start position is determined on the basis of the drift due to wind velocity, in order to ensure that the constant turn will end at the desired point: 𝑥0 = -(𝑇 𝑡𝑢𝑟𝑛 + 𝑇 𝑎𝑝𝑝 )𝑉 𝑤 + 𝑇 𝑎𝑝𝑝 𝑉 ℎ (4) Third, assuming a zero mean velocity of the vertical wind, the sum (𝑇 𝑎𝑝𝑝 + 𝑇 𝑡𝑢𝑟𝑛 ) of the final approach time and of the turn time determines the altitude at the start of the final turn: ℎ0 = (𝑇 𝑡𝑢𝑟𝑛 + 𝑇 𝑎𝑝𝑝 )𝑉 𝑣 [START_REF] Carter | Band limited guidance and control of large parafoils[END_REF] Fig. 8 Final turn and approach. B. Computation of the optimal trajectory The kinematic model equations require the knowledge of the remaining time value, which is treated as a parameter of the optimization problem. The problem equations are formulated in a wind-fixed coordinate frame, which removes wind induced time variations and simplifies the computation of a reference course angle. During a steady turn with a constant bank angle and a constant wind speed, parafoil course rate is constant in the wind-fixed frame while it is time-varying in the inertial frame. The origin of this wind-fixed frame coincides with the origin of the inertial frame at the initial state. 
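As a numerical illustration of the nominal turn-initiation geometry of Eqs. (3)-(5), the sketch below computes the turn duration, the longitudinal start position and the required start altitude. It assumes that Eq. (3) corresponds to a half-turn flown at radius y0/2, i.e. T_turn = π·y0/(2·V_h); the numerical values are only indicative, loosely based on the mini-paramotor figures quoted in Section III, and are not taken from the paper.

```python
import math

def turn_start_geometry(y0, V_h, V_v, V_w, T_app):
    """Nominal start point of the final half-turn (Eqs. 3-5).

    y0    : lateral offset to the target [m] (turn diameter)
    V_h   : horizontal airspeed [m/s]
    V_v   : descent rate [m/s]
    V_w   : horizontal wind speed [m/s]
    T_app : duration of the final straight approach [s]
    """
    T_turn = math.pi * y0 / (2.0 * V_h)          # Eq. (3), half-turn at radius y0/2
    x0 = -(T_turn + T_app) * V_w + T_app * V_h   # Eq. (4), wind-drift compensated start
    h0 = (T_turn + T_app) * V_v                  # Eq. (5), altitude needed at turn start
    return T_turn, x0, h0

# Illustrative values only
T_turn, x0, h0 = turn_start_geometry(y0=40.0, V_h=8.0, V_v=2.4, V_w=2.0, T_app=10.0)
```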
The target position is shifted by the wind-induced drift during the remaining trajectory:

[x ; y]_(τ=1) = [x_T + T_app V_h cos ψ_w − T_t W_x ; y_T + T_app V_h sin ψ_w − T_t W_y]  (6)

h_(τ=1) = h_T + T_app V_v + T_t W_z  (7)

Remember that T_app is the duration of the final straight-line approach, T_turn is the final turn time, and:

T_t = T_turn + T_app  (8)

The coordinates of the horizontal trajectory are represented by polynomial functions of a virtual time τ:

x(τ) = a_5^1 τ^5 + a_4^1 τ^4 + a_3^1 τ^3 + a_2^1 τ^2 + a_1^1 τ + a_0^1  (9)
y(τ) = a_5^2 τ^5 + a_4^2 τ^4 + a_3^2 τ^3 + a_2^2 τ^2 + a_1^2 τ + a_0^2  (10)

The polynomial order is fixed to 5, to satisfy boundary conditions that avoid discontinuities in the bank angle. The virtual time τ varies between 0 and an unknown value, which is the second parameter of the optimization problem. Coefficients a_i^j of the above polynomials are determined by the following boundary conditions:

[x ; y]_(t=0) = [x_0 ; y_0]  (11)
[ẋ ; ẏ]_(t=0) = [V_h cos ψ_0 ; V_h sin ψ_0]  (12)
[ẋ ; ẏ]_(t=T_turn) = [−V_h cos ψ_w ; −V_h sin ψ_w]  (14)
[ẍ ; ÿ]_(t=T_turn) = [V_h sin ψ_w ; −V_h cos ψ_w]  (15)-(16)

On the other hand, first and second differentiations of the polynomial functions result in:

x(τ) = a_5^1 τ^5 + a_4^1 τ^4 + a_3^1 τ^3 + a_2^1 τ^2 + a_1^1 τ + a_0^1  (17)
dx(τ)/dτ = 5 a_5^1 τ^4 + 4 a_4^1 τ^3 + 3 a_3^1 τ^2 + 2 a_2^1 τ + a_1^1  (18)
d²x(τ)/dτ² = 20 a_5^1 τ^3 + 12 a_4^1 τ^2 + 6 a_3^1 τ + 2 a_2^1  (19)
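These boundary conditions, combined with the polynomial derivatives of Eqs. (17)-(19), define a linear system for the polynomial coefficients, as developed in the paragraph that follows. A minimal sketch of that solve for one axis is given below; it assumes six conditions per axis (position, velocity and acceleration at τ = 0 and at the final virtual time), with the initial acceleration supplied by the caller. That sixth condition is a choice made here only for illustration, since the paper does not list it explicitly, and all numerical values are made up.

```python
import numpy as np

def quintic_coefficients(tau_f, p0, v0, a0, pf, vf, af):
    """Coefficients a_0..a_5 of x(tau) = a_0 + a_1*tau + ... + a_5*tau^5
    matching position/velocity/acceleration at tau = 0 and tau = tau_f
    (one horizontal axis; the other axis is solved the same way)."""
    def rows(t):
        return np.array([
            [1, t, t**2,   t**3,    t**4,    t**5],   # position, Eq. (17)
            [0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4],   # first derivative, Eq. (18)
            [0, 0, 2,    6*t,    12*t**2, 20*t**3],   # second derivative, Eq. (19)
        ])
    A = np.vstack([rows(0.0), rows(tau_f)])            # 6 x 6 linear system
    b = np.array([p0, v0, a0, pf, vf, af])
    return np.linalg.solve(A, b)

# Illustrative call for the x axis, with made-up boundary values:
coeffs_x = quintic_coefficients(tau_f=1.0, p0=0.0, v0=8.0, a0=0.0,
                                pf=-60.0, vf=-8.0, af=0.5)
```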
The turn rate of the trajectory relative to the wind (in blue) indicates two maxima close to 25°/s, whereas the ground turn rate (in red) is much higher during the last quarter turn. Fig. 10 Example of optimal flight path in air frame and inertial frame. The path planning algorithm can generate reference trajectories for a range of initial conditions, as shown in Fig. 11. Ten trajectories are computed with a random selection of initial position (𝜎 𝑥 =𝜎 𝑦 =7m, 𝜎 𝑧 =2m) and heading (𝜎  =10°). It can be observed that these trajectories still converge to the target. When the altitude is a little too low with respect to the horizontal distance, the optimal trajectory requires turn rates reaching 40°/s. MPC control could have difficulties to produce an accurate tracking for these high turn rates, so that it may be necessary to optimize the trajectory with a turn rate boundary. Figure 12 shows new trajectories generated with the same initial points, but with the introduction of a turn rate limit of 30°/s. In some cases, no optimal trajectory reaches the target. As depicted by the first plot, the landing position error still remains on the longitudinal axis. VI. Model Predictive Control A. Principle Model predictive control (MPC) is a well-established approach for advanced system control in many industrial applications. MPC is efficient, simple in tuning and robust for tracking a known reference trajectory considering constraints and disturbances. This method is often used in parafoil guidance, navigation and control systems, to track a reference yaw or course angle provided by the path planning. It was successfully demonstrated by Slegers and Costello [START_REF] Slegers | Model Predictive Control of a Parafoil and Payload System[END_REF]. The MPC methodology attempts to solve an on-line open loop finite horizon optimal control problem subject to input, state, and/or output constraints. As shown in Fig. 13, at time t, the system model is used to predict the future behavior of the controlled plant over the prediction horizon Np. Fig. 13 Example of optimal flight path in air frame and inertial frame. The method delivers an optimal control input signal u(t) over a control horizon Nc (usually Nc ≤ Np), which minimizes the difference between the predicted output and the reference trajectory. Only the first element of the optimal signal is applied to the plant, and all other elements are discarded. Then, at the next sampling instance, the whole procedure is repeated. MPC allows operation within constraints, and it can be extended to multi-variable and nonlinear systems. However, the method has some limitations. As the system complexity increases or with non-linear models, the on-line calculation burden is substantially increased. It could also be difficult to predict the controller performance and robustness with the real system in closed loop. With a linear model and without constraints, the control law can be computed analytically as the solution of a cost optimization problem. Consider the following discrete-time linear system in state space form: 𝑥 𝑘+1 = 𝐹𝑥 𝑘 + 𝐺𝑢 𝑘 𝑦 𝑘 = 𝐶𝑥 𝑘 ( 26 ) The finite horizon optimal control problem to be solved at each sampling instance can be expressed as the following unconstrained optimization problem. The optimization criterion J is defined as: 𝐽 𝑘 = (𝑌 𝑘+1 𝑟 -𝑌 𝑘+1 ) 𝑡 𝑄(𝑌 𝑘+1 𝑟 -𝑌 𝑘+1 ) + 𝑈 𝑘 𝑡 𝑅𝑈 𝑘 ( 27 ) The input signal to be optimized is: 𝑈 𝑘 = [𝑢 𝑘 , 𝑢 𝑘+1 , … , 𝑢 𝑘+𝐻-1 ] 𝑡 ( 28 ) where H is the prediction horizon. 
The obtained output signal is:

Y_(k+1) = [y_(k+1), y_(k+2), …, y_(k+H)]^t  (29)

while the reference trajectory is:

Y^r_(k+1) = [y^r_(k+1), y^r_(k+2), …, y^r_(k+H)]^t

Matrices Q and R are used to balance the respective minimization of the output error and control input costs. The discrete-time linear model can be used to estimate the future output over the prediction horizon:

Y_(k+1) = K_CF x_k + K_CFG U_k  (30)

where:

K_CF = [CF ; CF² ; ⋮ ; CF^H]  (31)

K_CFG = [CG 0 … 0 0 ; CFG CG … 0 0 ; CF²G CFG … 0 0 ; ⋮ ⋮ ⋱ ⋮ ⋮ ; CF^(H−1)G CF^(H−2)G … CFG CG]  (32)

The optimal control signal U_k over the prediction horizon, which minimizes the cost J_k, is:

U_k = (K_CFG^t Q K_CFG + R)^(−1) K_CFG^t Q (Y^r_(k+1) − K_CF x_k)

B. Application to the parafoil

The prediction model used in the above MPC algorithm was identified from the yaw response of the 6-DoF parafoil model. Fig. 14 shows that a good approximation can be obtained with a reduced third order model in the following form:
Simulation of energy management and homing phases are not implemented, but it is assumed that they are performed imperfectly (due to wind shifts or other factors), and thus initialization of terminal guidance may not be performed in an ideal geometry. In this case, the parafoil begins terminal guidance heading downwind, where the wind blows to the north at 2 m/s. Path planning requires wind knowledge along the remaining trajectory. The deterministic wind component was assumed constant and the estimation algorithm was not modeled: the wind velocity used by trajectory optimization was delivered by low pass filtering of the instantaneous wind speed. B. A result without turbulence Figure 17 shows an example of a reference path tracking with a constant wind of 2m/s and without turbulence. The parafoil initially tracks the optimal trajectory. However, a deviation of a few meters appears after a couple of seconds and the impact point is 3m away from the target. This final error can be explained by the air velocity variations due to the low damping of the phugoid mode, as depicted in Fig. 18. The average value of the oscillations is also different from the equilibrium value used to compute the reference trajectory. The implementation of an output feedback stabilization law which acts on symmetric deflection of the canopy trailing-edge strongly reduces airspeed oscillations and thus the impact error, as shown in Fig. 19 and20. C. Results in the presence of turbulence To illustrate the effect of wind variation, Fig. 21 shows the results of one simulation performed with turbulence. The Dryden turbulence model described above was used with a horizontal wind standard deviation of 0.5 m/s. Impact error is greater than 10 meters. This landing error can be reduced by updating the reference trajectory en route. Fig. 22 shows the new trajectory obtained with the same turbulence and a replanning each second, enabling the parafoil to land 3m from the target. A series of Monte Carlo simulations were performed to assess the overall effect of turbulence. Figure 23 presents the landing dispersion obtained without trajectory replanning. Longitudinal impact errors vary between -15m and +15m, and lateral impact errors are approximately two times lower. Dispersion can be characterized by the elliptical error probability (EEP), which defines the radius of the ellipse including a given percentage of the impacts. EEP encompasses a more general definition of circular error probability (CEP) when variance is unequal in the dimensional axes. Dispersion results are shown in Fig. 23. The resulting circular error probability (CEP) shown by the circle is 16.8m and is defined by the radius which includes 50 per cent of the impacts. Activation of reference trajectory updating each second reduces the ellipse radii by a factor of about 3, as shown in Fig. 24. Figure 25 shows results of a new Monte Carlo simulation, carried out without the longitudinal control law. It generates an impact offset of 4m in the opposite direction of the wind, and landing dispersion increases by approximately 25%. VIII. A simplified guidance and heading rate control approach In this section the flight controller controls the yaw rate, instead of the yaw angle above, and the guidance law computes the yaw rate reference value that minimizes the distance to the target on the kinematic model, instead of a yaw angle reference signal above. 
As this yaw rate reference value is constant, although it may be re-adjusted each second, the flight controller need not be predictive, its aim is only to impose a constant value for the yaw rate output. Figure 26 illustrates the principle of this method. The position of the starting point is defined to have an initial trajectory (in dotted line) passing through this point and the target point. Trajectory updating provides a new circular arc (in solid line) starting at the current position, its radius is chosen to minimize the distance to the target. A. Longitudinal airspeed control As a preliminary, a longitudinal flight control law is synthesized, which uses the symmetric brake deflection input to control the airspeed 𝑉. When using the kinematic model for the design of the guidance law below, the horizontal and vertical speeds are assumed to be constant, which is not true in practice even without turbulence and when maintaining the symmetric brake deflection input to a constant value. Indeed, a high value of the asymmetric brake deflection input (used by the lateral flight control law) induces a strong coupling on the longitudinal variables, and thus a significant variation of the speeds. Since it is not possible to regulate both the horizontal and vertical speeds, a realistic choice is to regulate the airspeed. Since there are 4 states in the longitudinal model, obtained by linearizing the nonlinear 6 DOF model of the parafoil at a trim point, 4 longitudinal outputs 𝑎𝑧 (vertical acceleration), 𝑞 (pitch rate), 𝜃 (pitch angle), 𝑉 and an integrator on 𝑉 are used to place the 5 main closed loop poles, namely the integrator pole and the short period and phugoid modes. The unplaced actuator closed loop poles are checked a posteriori. Classically, the frequencies of the closed loop short period and phugoid modes are chosen to be the same as the open loop ones, and the damping ratio is fixed to 0.7. One does not accelerate the longitudinal closed loop, to reduce the magnitude of the symmetric brake deflection input signal, since there is a limited authority on this signal. B. Lateral heading rate control The architecture of the flight controller is: 𝑢 = -𝐾[∫(𝜓 ̇-𝜓 ̇𝑟𝑒𝑓 ) 𝜓 ̇𝑝] + 𝐻𝜓 ̇𝑟𝑒𝑓 ( 35 ) Where u is the plant input (asymmetric brake deflection), 𝜓 ̇ is the yaw rate, 𝜓 ̇𝑟𝑒𝑓 is the yaw rate reference value and p is the roll rate. K is the feedback gain, and H a static feedforward term. K need not be computed here, since it can be extracted from the MPC flight controller above, which contains a feedback part. Otherwise, different classical methods could be used to design this state or output feedback controller, e.g. a modal technique which places the main closed loop poles on the basis of the open loop plant model (see the previous subsection). This one can be the 3rd order simplified model used by the MPC flight controller, or a higher order model obtained by linearizing the nonlinear 6 DOF model of the parafoil at a trim point. The feedforward term H is a tuning parameter that will be chosen in the following to minimize the distance to the target. C. Computation of the heading rate reference value The equations of the kinematic model are recalled first: 𝑥̇= 𝑉 ℎ cos(𝜓) + 𝑉 𝑊 cos (𝜓 𝑊 ) 𝑦̇= 𝑉 ℎ sin(𝜓) + 𝑉 𝑊 sin (𝜓 𝑊 ) 𝑧̇= 𝑉 𝑣 (38) The horizontal and vertical speeds 𝑉 ℎ and 𝑉 𝑣 are assumed to be known and constant, as well the constant wind defined by 𝑉 𝑊 and 𝜓 𝑊 . Let 𝑥 0 ,𝑦 0 ,𝑧 0 ,𝜓 0 be the initial values of x,y,z,𝜓. 
If psi = omega*t + psi_0, the solution of the differential equations above is:

x(t) = x_0 + (V_h/omega) (sin(omega*t + psi_0) - sin(psi_0)) + V_W cos(psi_W) t   (39)

y(t) = y_0 - (V_h/omega) (cos(omega*t + psi_0) - cos(psi_0)) + V_W sin(psi_W) t   (40)

The issue is to minimize the distance to the target with respect to omega between -omega_max and omega_max, where omega_max is a tuning parameter. This approach is suboptimal in the sense that it may not be possible to reduce this distance to zero. Remark: there is no constraint on the final value of the heading angle in this simplified approach, whereas this final value must be psi_W + pi in the first approach.
D. Comparison of simulation results with the first and simplified approaches
A preliminary step is to tune the value of the static feedforward term H. To this aim, the distance to the target is calculated, as a function of H, on the nonlinear simulator without turbulence. Table 13 gives the results: the first line is the value of H, the second the distance to the target, and the third the normalized magnitude of the input signal u (asymmetric brake deflection), constrained between -1 and 1. With H = -3, in the presence of turbulence, the average value of the distance to the target is 4.77 m and the standard deviation is 3.20 m. A better result is obtained with the first approach: the distance to the target is 0.17 m without turbulence. In the presence of turbulence, the average value is 3.24 m and the standard deviation is 2.68 m. Nevertheless, the results of the simplified approach are not far behind. As for the magnitude of the heading rate in the presence of turbulence, its average value (standard deviation) is 31.75 deg/s (3.84 deg/s) with the first approach and 33.60 deg/s (3.27 deg/s) with the second one. The results are close, and reasonable. Concerning the magnitude of the input signal u in the presence of turbulence, its average value (standard deviation) is 0.61 (0.04) with the first approach. The values are significantly higher with the second approach, the average value (standard deviation) being 0.89 (0.04), which means a stronger solicitation of the actuators. Last, the final value of the bank angle is studied. In the presence of turbulence, its average value (standard deviation) is -2.35 deg (6.11 deg) with the first approach and 1.52 deg (5.57 deg) with the second one. Thus, the result is slightly better with the simplified approach. Figures 27 and 28 (resp. 29 and 30) display the Monte Carlo landing dispersion (resp. the trajectories) with the two approaches in the presence of turbulence.
IX. Conclusion
After setting the context, the paper has presented a thorough investigation into flight control algorithms for parafoil automatic precision landing in turbulent weather. The flight models were based on previous studies and experiments involving the DGA, and were then tuned to ONERA's experimental vehicle: a paramotor equipped with suitable avionics, employed as a test-bed for control laws. Two approaches for terminal guidance and control were presented in this paper and tested on a realistic nonlinear simulator. Satisfactory results were obtained both without turbulence and in the presence of turbulence. The reference approach, presented first, relies on regularly updated optimized trajectories followed by an MPC flight controller. The trajectories have a polynomial parametrization simple enough for real-time implementation, with degrees of freedom allowing desirable landing conditions to be enforced (straight flight into the wind). The trajectory is provided to the flight controller as a series of yaw angles.
This approach demonstrates very small landing position errors with constant wind and varying initial positions. On a realistic scenario with turbulence, the mean errors remain well below the 50m mark typical of operational air-delivery systems. Longitudinal control on top of the MPC controller was also shown to provide enhanced accuracy thanks to smoother vertical trajectories (phugoid damping). The second approach is a simplified algorithm based on the yaw rate control. Though it is theoretically less performing than the reference, both for following a preset trajectory and for fixing landing conditions, it demonstrated almost similar performance in terms of final accuracy. This good result can be partly explained by the fact that the simplified approach is more respectful of the parafoil dynamics since its aim is ideally to make a single turn. Being also easier to implement, this suboptimal approach would be worth further styduing. New approaches are also being developed to provide a reference trajectory that respects the parafoil dynamics at the beginning and the end of the final manoeuvre while minimizing yaw rate variations. All these simulation studies must be followed by the implementation of these guidance and control laws in the paramotor avionics which was designed to perform on-line optimization. Hardware-in-the-loop simulations will be conducted to validate these real-time algorithms before flight tests. Fig. 3 3 Fig. 3 Responses of identified lateral model. Fig. 4 Fig. 6 46 Fig. 4 Bode diagram of the left models. Fig. 5 Bode diagram of the right models. Fig. 9 9 Fig. 9 descent rate as function of turn rate. Fig. 11 Fig. 12 1112 Fig. 11 Optimal trajectories with initial condition errors. Fig. 12 Optimal trajectories with initial condition errors and turn rate limitation. Fig. 14 14 Fig. 14 Responses of prediction model. Fig. 15 Comparison between heading and course angles. Fig. 16 16 Fig. 16 Response of 6-DoF model with MPC and turn rate limitation. Fig. 17 Fig. 18 1718 Fig. 17 Simulated approach in ideal conditions (Wind 2 m/s). Fig. 20 20 Fig. 20 Air velocity. Fig. 21 21 Fig. 21 Example of simulated approach with turbulence (turb=0.5 m/s). Fig. 23 23 Fig. 23 Monte Carlo landing dispersion without trajectory updating. Fig. 26 26 Fig. 26 Geometry of the reference trajectory. Fig. 27 27 Fig. 27 Monte Carlo landing dispersionwith the first approach. Fig. 28 28 Fig. 28 Monte Carlo landing dispersionwith the second simplified approach. Fig. 29 29 Fig. 29 trajectories with the first approach. Fig. 30 Trajectories with the second simplified approach. 𝑇 𝑎𝑝𝑝 𝑉 ℎ 𝑐𝑜𝑠𝜓 𝑤 -𝑇 𝑡 𝑊 𝑥 𝑦 𝑇 + 𝑇 𝑎𝑝𝑝 𝑉 ℎ 𝑠𝑖𝑛𝜓 𝑤 -𝑇 𝑡 𝑊 𝑦 ] [ 𝑥ÿ ]𝑡=0 = [ -𝑉 ℎ 𝑠𝑖𝑛𝜓 0 𝑉 ℎ 𝑐𝑜𝑠𝜓 0 ] 𝜓 0 ̇(13) [ 𝑦 ] 𝑥 𝑡=𝑇 𝑡𝑢𝑟𝑛 = [ 𝑥 𝑇 + 𝑇 1 = 𝑇 -𝑇 𝑎𝑝𝑝 , one chooses 𝜓 = 𝜔𝑡 + 𝜓 0 , where 𝑇 𝑎𝑝𝑝 and 𝜔 are fixed. For 𝑇 1 < 𝑡 ≤ 𝑇, the heading rate reference value is fixed to zero, to minimize the final (absolute) value of the bank angle at t = T. Thus, the distance to the target is √𝑥 2 (𝑇) + 𝑦 2 (𝑇), where: 𝜔𝑇 1 + 𝜓 0 ) -sin (𝜓 0 )) + 𝑉 𝑊 cos(𝜓 𝑊 ) 𝑇 1 + 𝑇 𝑎𝑝𝑝 (𝑉 ℎ cos(𝜔𝑇 1 + 𝜓 0 ) + 𝑉 𝑊 cos (𝜓 𝑊 )) 𝜔𝑇 1 + 𝜓 0 ) -cos (𝜓 0 )) + 𝑉 𝑊 sin(𝜓 𝑊 ) 𝑇 1 + 𝑇 𝑎𝑝𝑝 (𝑉 ℎ sin(𝜔𝑇 1 + 𝜓 0 ) + 𝑉 𝑊 sin (𝜓 𝑊 )) 39) 𝑦(𝑡) = 𝑦 0 - 𝑉 ℎ 𝜔 (𝑐𝑜𝑠(𝜔𝑡 + 𝜓 0 ) -𝑐𝑜𝑠 (𝜓 0 )) + 𝑉 𝑊 𝑠𝑖𝑛(𝜓 𝑊 ) 𝑡 (40) Let 𝑇 = . For 𝑡 ≤ 𝑥(𝑇) = 𝑥 0 + 𝑧 0 𝑉 𝑣 𝑉 ℎ 𝜔 (sin(𝑦(𝑇) = 𝑦 0 -𝑉 ℎ 𝜔 (cos( Table . 13 Distance to the target as a function of feedforward gain H. . 
H                                    1        0       -1       -2       -3       -4
Distance to the target (m)       22.80    16.02    10.23     5.27     0.94     3.19
Normalized input magnitude |u|    0.34     0.27     0.36     0.61     0.87     1.00
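As an illustration of the guidance computation of Sec. VIII.C, the sketch below evaluates the closed-form position of Eqs. (39)-(40), switches to a zero turn-rate reference for the final approach segment of duration T_app, and retains the omega that minimizes the predicted miss distance. It is a minimal sketch, not the flight code: the target is taken as the origin and all flight, wind and geometry values are assumed for the example.

import numpy as np

# Assumed flight and wind parameters (illustrative values only).
V_h, V_v = 7.0, 3.5            # horizontal and vertical speeds (m/s)
V_w, psi_w = 2.0, np.pi / 2    # wind speed (m/s) and wind direction (rad)
T_app = 5.0                    # straight final-approach duration (s)
omega_max = np.radians(20.0)   # turn-rate bound (rad/s)

def impact_point(x0, y0, z0, psi0, omega):
    """Touchdown position (relative to the target): turn at rate omega until T1, then wings level."""
    T = z0 / V_v                       # time to ground
    T1 = max(T - T_app, 0.0)
    if abs(omega) < 1e-6:              # straight-flight limit of Eqs. (39)-(40)
        x1 = x0 + V_h * np.cos(psi0) * T1
        y1 = y0 + V_h * np.sin(psi0) * T1
        psi1 = psi0
    else:
        x1 = x0 + V_h / omega * (np.sin(omega * T1 + psi0) - np.sin(psi0))
        y1 = y0 - V_h / omega * (np.cos(omega * T1 + psi0) - np.cos(psi0))
        psi1 = omega * T1 + psi0
    x1 += V_w * np.cos(psi_w) * T1
    y1 += V_w * np.sin(psi_w) * T1
    # Final approach at constant heading psi1.
    xT = x1 + (V_h * np.cos(psi1) + V_w * np.cos(psi_w)) * (T - T1)
    yT = y1 + (V_h * np.sin(psi1) + V_w * np.sin(psi_w)) * (T - T1)
    return xT, yT

def best_turn_rate(x0, y0, z0, psi0, n=401):
    """Grid search of the heading-rate reference minimizing the miss distance."""
    omegas = np.linspace(-omega_max, omega_max, n)
    dists = [np.hypot(*impact_point(x0, y0, z0, psi0, w)) for w in omegas]
    return omegas[int(np.argmin(dists))]

omega_ref = best_turn_rate(x0=-150.0, y0=40.0, z0=120.0, psi0=np.pi / 2)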
00475367
en
[ "chim.mate" ]
2024/03/04 16:41:20
2001
https://hal.science/hal-00475367/file/Amroune2001.pdf
Ali Amroune Gilbert Fantozzi Synthesis of Al203-SiC from kyanite precursor à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. I. INTRODUCTION Due to their highly ordered structure, chemical stability, and high strength approaching the theoretical value, ceramic whiskers have been attracting interest for use as reinforcements in ceramic matrices for thermostructural applications. Among these, silicon carbide whiskers (SiC W ) have been extensively studied as reinforcing materials in ceramic matrices. [1][2][3][4] Three main methods have been utilized to synthesize SiC W : (i) carbothermal reduction of silica in the presence of metal catalysts including rice-hulls as starting material, [5][6][7][8][START_REF] Shalek | Proc. Int. Conf. On Whisker-and-Fiber Toughened Ceramics[END_REF][START_REF] Wang | Ceramic Powder Science[END_REF][START_REF] Shimada | [END_REF] (ii) thermal decomposition of organic silicon compounds, 12,13 and (iii) reaction between halides in presence of hydrogen. 14,15 Because of its simplicity, the first method is the most utilized. However, the toxicity (mainly by inhalation) related to whisker handling and the high cost of their production have limited their application. To reduce these disadvantages, an in situ SiC W synthesis approach has been attempted. Partial carbothermal reduction of Si 3 N 4 has been utilized 16,17 to obtain Si 3 N 4 /SiC W powders. Synthetic or natural aluminosilicates nAl 2 O 3 и mSiO 2 may give Al 2 O 3 and SiC W as final products by carbothermal reduction at high temperatures in an inert atmosphere. 18,19 It has been reported 18,19 that the fracture toughness K IC values of Al 2 O 3 -SiC W composites prepared by hot pressing from in situ synthesized powders range between 5 and 7.6 MPa m 1/2 . Similar composites prepared by conventional physical mixing of the components and sintering generally present a fracture toughness ranging between 6 and 9.5 MPa m 1/2 . These values indicate the possibility of obtaining in situ composites having mechanical properties comparable to those of similar conventional composites prepared from separately synthesized SiC W and Al 2 O 3 . In this study, carbothermal reduction of kyanite, a large availability aluminosilicate mineral with high alumina content, was used as a way to in situ obtain ␤-SiC whisker and ␣-Al 2 O 3 particles. This route would simplify the conventional costly way to prepare powder for elaborating Al 2 O 3 -SiC W composites. Some reaction conditions were studied to synthesize a powder with a favorable morphology for elaborating such composites. II. EXPERIMENTAL Table I lists chemical compositions of the utilized kyanite (Virginia, kyanite supplied by SOGEMET, Paris, France) as aluminosilicate precursor and indicates the content of impurities in the mineral, primarily iron and titanium oxides. X-ray analysis showed the good crystallinity of this mineral. Scanning electron microscopy (SEM) observation of the powder reveals that the particle size ranges between 5 and 53 m. As carbon sources, two types of carbon black have been utilized (supplied by DEGUSSA, Paris, France) with specific surface areas of 330 and 996 m 2 /g. XRD analysis of these two types of carbon reveals their amorphous states. Kyanite powder was mixed (in ethylic alcohol) with carbon black in a molar ratio of C/SiO 2 ≅ 5.5 by ball milling. 
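Since kyanite is only partly silica, the C/SiO2 molar ratio of about 5.5 translates into a carbon-to-kyanite mass ratio that depends on the SiO2 content of the mineral. The short calculation below is a sketch using the mid-range composition of Table I; it is not taken from the paper.

# Molar masses (g/mol).
M_C, M_SiO2 = 12.011, 60.084

# Mid-range SiO2 content of the Virginia kyanite (Table I: 37.64-43.70 wt%).
w_SiO2 = 0.5 * (0.3764 + 0.4370)

def carbon_per_gram_kyanite(molar_ratio=5.5, silica_fraction=w_SiO2):
    """Mass of carbon black to add per gram of kyanite for a given C/SiO2 molar ratio."""
    n_SiO2 = silica_fraction / M_SiO2      # mol SiO2 per gram of kyanite
    return molar_ratio * n_SiO2 * M_C      # grams of carbon per gram of kyanite

print(f"{carbon_per_gram_kyanite():.2f} g C per g kyanite")
# -> roughly 0.45 g of carbon black per gram of kyanite for C/SiO2 = 5.5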
The slurries were then dried and the powder mixtures were sieved to less 500 m. A vertical tubular furnace (Pyrox, Paris, France) at 1260-1550 °C was used for carbothermal reduction of kyanite. A bed of the mixed starting powder (10 mm in height and 50 mm in diameter) was prepared in a graphite crucible and heat treated for2ha teach reaction temperature under argon. The heating and cooling rates were both 300 °C / h. After each heat treatment, unreacted carbon was removed by oxidation in air at 625 °C for 5 h before characterization. X-ray diffraction (XRD; Cu K ␣ radiation with a graphite monochromator) was used for phase analysis. XRD quantitative analysis was been performed on the synthesized powders, too. Mixtures of Al 2 O 3 and SiC-whiskers in different proportions have been prepared and a calibration curve established. The morphology was investigated by SEM (JEOL 840, Tokyo, Japan) with an energy dispersive x-ray spectroscopy (EDS) system (EDAX TRACOR, Paris, France). III. RESULTS AND DISCUSSION A. Phase evolution XRD phase analysis (Fig. 1) shows that the carbothermal reaction goes through many steps leading to ␤-SiC and ␣-Al 2 O 3 as final products. Mullite and a small amount of an amorphous phase were detected as intermediate solid phases. The formation of ␤-SiC and ␣-Al 2 O 3 began at about 1320 and 1370 °C, respectively. Figure 1 shows that the rate of reduction of kyanite started very slowly at 1320 °C and increased with increasing temperature. On the basis of XRD analysis the following reaction scheme is proposed. The thermal decomposition of kyanite to mullite and amorphous silica occurs below 1370 °C. 3͑Al 2 O 3 и SiO 2 ͒ kyanite → Ͻ1370 °C 3Al 2 O 3 и 2SiO 2 + SiO 2 , (1) The silica produced by this decomposition is reduced to SiC above 1320 °C. SiO 2 + 3C → Ͼ1320 °C SiC + 2CO , (2) For the last reaction, several mechanisms have been proposed, all of which involve two important steps 5,7,8,20,21 The first step is related to the formation of silicon monoxide by the reaction: SiO 2 + C → SiO + CO . ( 2a ) This reaction is followed by SiO + 2C → SiC + CO . (2b) Combination of reactions (2a) and (2b) gives reaction (2). Above 1370 °C (after consumption of silica), the mullite is reduced. This reduction forms ␣-alumina (corundum). 3Al 2 O 3 и 2SiO 2 + 6C mullite → Ͼ1370 °C 3Al 2 O 3 + 2SiC + 4CO . ( 3 ) The combination of reactions ( 2) and (3) gives 3Al 2 O 3 и 2SiO 2 + SiO 2 + 9C → 3Al 2 O 3 + 3SiC + 6CO . The reaction in the system is complete at 1550 °C. B. Morphology of the ␣-Al 2 O 3 /␤-SiC powder SEM observations of the powder synthesized at 1550 °C revealed that whiskers coexist with spherical particles. The synthesized powder has a woolly aspect. . Whiskers from the upper layer have a higher aspect ratio. This is likely due to the lower supersaturation degree of silicon and carbon monoxides at the upper layer. A higher supersaturation degree is not favorable to SiC whisker growth. [START_REF] Shalek | Proc. Int. Conf. On Whisker-and-Fiber Toughened Ceramics[END_REF] The mean dimensions of SiC whiskers and Al 2 O 3 particles in different parts of the reaction zone are presented in Table II. Globules were observed at the tip of some whiskers (Fig. 3) and EDS analysis reveals that these globules are iron-rich regions with the presence of silicon. As in the case of separate synthesis of SiC whiskers, these globules are solidified droplets which form at high temperatures during the vapor-liquid-solid (VLS) 6,7 growth mechanism of SiC whiskers. 
The impurities, such as transition metals, constitute energetically favourable sites for SiC nucleation. The presence of whiskers having a conical tip instead of balls indicates that there was another mechanism of crystal growth. Although the nature of this mechanism is uncertain, vapor-solid (VS) growth mechanism of whiskers has been proposed, [START_REF] Campbell | Whisker Technology[END_REF] and carbon black may constitute nucleation sites for SiC. [START_REF] Kennedy | Proc. of the British Ceram. Society[END_REF] In the VS mechanism, SiC crystallisation occurs directly from the vapor phases SiO and CO formed at high temperatures. The tip growth is postulated to occur on an axial screw dislocation, but such dislocations are rarely seen. [START_REF] Hurley | Advanced Composites Proceedings[END_REF] XRD quantitative analysis of the phases present in the powder after completion of the synthesis reaction gave approximately 28 wt% SiC in Al 2 O 3 /SiC powder. Optimal strength values for densified Al 2 O 3 /SiC W composites prepared by conventional mixing of alumina and SiC whiskers are obtained with 25 wt% SiC. [START_REF] Lio | [END_REF] This value may be achieved by adding alumina after synthesis. Adding alumina before synthesis to the raw materials does not contribute to a good morphology of the whiskers. 19 Some too-long, bent, or twisted whiskers and agglomerates are also encountered in the synthesized powders. These characteristics are also observed in the case of separate synthesized SiC whiskers. 26,27 This needs further operation of milling-deagglomeration (for example by ball milling) before sintering the powder to form composite materials with good dispersion of the phases. 28 C. Effect of carbon specific surface area Figure 4 shows XRD patterns of heat-treated mixtures of kyanite and carbon black at the same temperature 1500 °C with two different values of Brunauer-Emmett-Teller (BET) specific surface area of carbon: 330 and 996 m 2 /g. Higher specific surface area of carbon lead to completion of the carbothermal reduction reaction, whereas a significant amount of mullite is detected in the sample with lower specific surface area at the same temperature of 1500 °C (the reaction in this latter case is completed between 1500 and 1550 °C). Figure 5 shows the evolution of the most intense peak (111) of ␤-SiC formed from carbothermal reduction of kyanite in the temperature range 1370-1550 °C. A higher specific surface area of carbon black leads to an increase in the conversion rate to SiC. Quantitative analysis shows that the amount of SiC phase in Al 2 O 3 / SiC powder (about 28 wt%) does not change with increasing the temperature from 1500 to 1550 °C in the case of the higher surface area. A similar effect of specific surface area of carbon has been observed in the case of the carbonitriding of silica. 29,30 Higher specific surface area of carbon does not influence favorably the morphology of the synthesized powder. The SEM micrograph (Fig. 6) shows that the synthesized powder is mainly composed of agglomerates, and the fibrous phase SiC is not morphologically regular. Whiskers are bent, twisted, and very thin. This morphology is not favorable for Al 2 O 3 -SiC W composites, since these characteristics lead to large flaws, porosity, and inhomogeneous microstructure in the final composite. A carbon black with a relatively high surface area would lead to a large number of sites for the nucleation and growth of SiC during synthesis. 
These conditions favor irregular morphology of the whiskers. IV. CONCLUSION Carbothermal reduction of kyanite leads to in situ Al 2 O 3 particles and SiC-whisker formation with 28 wt% SiC. The completion of the reaction is obtained at 1550 °C when using a carbon with a specific surface area 330 m 2 /g and a heating time of 2 h. With a higher value of 996 m 2 /g, the completion of the reaction occurred at 1500 °C. During carbothermal reduction of kyanite, this mineral decomposes to free silica and mullite as intermediate phases. The reduction of mullite occurs at higher temperature than that of free silica. Mullite reduction does not occur before full consumption of the free silica. The aspect ratio of the synthesized SiC in the upper layer of the samples is relatively higher than that in the lower one. Both VLS and VS crystal growth mechanisms for SiC W are suggested. The morphology of the synthesized powder is not with a too-high specific surface area of carbon. The principal advantages of this in situ route are reducing the health hazard related to the handling of whiskers, the use of raw materials with large availability such as kyanite, and the simplification of the process of Al 2 O 3 /SiC W composites. The principal disadvantage of this method is the difficulty of morphological control of the phases because they are not synthesized separately. FIG. 1 . 1 FIG. 1. X-ray diffraction patterns on heat treated powders from kyanite-carbon system in the temperature range 1260 -1550 °C under argon. (A) ␣-Al 2 O 3 ; (B) ␤-SiC; (K) kyanite; (M) mullite. FIG. 5 . 5 FIG.5. Evolution of the most intense peak (111) of ␤-SiC formed from carbothermal reduction of kyanite in the temperature range 1370-1550 °C. With the higher value of specific surface area of carbon black the conversion to SiC is accelerated. TABLE I . I Chemical composition of kyanite Oxydes wt % SiO 2 37.64-43.70 Al 2 O 3 54.00-60.06 Fe 2 O 3 0.40-1.16 TiO 2 0.52-1.65 CaO 0.03 K 2 O 0.42 Na 2 O 0.42 Weight loss 0.21 TABLE II . II Dimensions (m) of the synthesised SiC-whiskers and Al 2 O 3 -particles at 1550°C under argon atmosphere. FIG. 3. SEM micrograph showing a globule at the tip of a whisker. The globule is a solidified droplet which forms at high temperatures during the VLS growth mechanism of the SiC-whisker. FIG. 2. SEM micrographs showing difference in morphology between reacted powders from upper and lower layer of the sample. Whiskers from (a) the upper layer have relatively higher aspect ratio than from (b)the lower one. Synthesised powder from the lower layer of the sample Synthesised powder from the upper layer of the sample SiC whiskers Al 2 O 3 particles SiC whiskers Al 2 O 3 particles Length Diameter Size Length Diameter Size 10 0.5 6 20 0.6 4 FIG. 4. XRD patterns of heat treated kyanite-carbon powder at 1500 °C. The reaction for the formation of SiC and Al 2 O 3 is completed with the higher specific surface area carbon. (A) ␣-Al 2 O 3 ; (B) ␤-SiC; (M) mullite.
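As a consistency check on the approximately 28 wt% SiC obtained by quantitative XRD (Sec. III.B), the phase fractions expected from the overall reaction, written per kyanite formula unit as Al2O3.SiO2 + 3C -> Al2O3 + SiC + 2CO, can be computed from molar masses alone. This is a back-of-the-envelope sketch assuming complete reaction of a pure kyanite precursor.

# Molar masses (g/mol).
M_Al2O3, M_SiC = 101.961, 40.096

def theoretical_sic_fraction(n_Al2O3=1.0, n_SiC=1.0):
    """SiC weight fraction in the Al2O3/SiC product of the overall carbothermal reaction."""
    m_SiC = n_SiC * M_SiC
    return m_SiC / (m_SiC + n_Al2O3 * M_Al2O3)

# One mole of Al2O3 and one mole of SiC per mole of kyanite (Al2O3.SiO2).
print(f"{100 * theoretical_sic_fraction():.1f} wt% SiC")   # -> 28.2 wt%, close to the XRD value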
00475154
en
[ "spi.meca.mema" ]
2024/03/04 16:41:20
2003
https://hal.science/hal-00475154/file/Ludwig2003.pdf
W Ludwig email: [email protected] J-Y Buffie S Savelli P Cloetens Study of the interaction of a short fatigue crack with grain boundaries in a cast Al alloy using X-ray microtomography Keywords: zusätzliche Analyse der Kornorientierungen mit Hilfe der Ru¨ckstreuelektronenbeugung (EBSD), ermöglicht die Diskussion und Interpretation der Ergebnisse hinsichtlich verschiedener Rissausbreitungsmechanismen Short fatigue crack, Microtomography, Grain boundary wetting, Gallium, PACS: Zusammenfassung Mit Hilfe von Synchrotronstrahlung-Röntgenmikrotomographie gelingt es die Kornstruktur einer Aluminium Guss-Legierung in der unmittelbaren Nachbarschaft eines Ermüdungsrisses dreidimensional abzubilden. Die Abbildung der Kornstruktur beruht auf der selektiven Benetzung der Aluminium Korngrenzen mit fl¨ussigem Gallium, welches als Kontrastmittel verwendet wird. Die komplizierte Form des Ermüdungsrisses im Inneren der Probe, sowie die an der Probenoberfläche beobachteten Verzögerungen in der Rissausbreitung ko¨nnen mit der Kornstruktur Synchrotron Radiation X-ray microtomography is used to visualize and analyze simultaneously the three-dimensional shape of crystallographic grains containing a short fatigue crack in a cast Al alloy. The visualization of the grains is based on the decoration of Al grain boundaries by liquid Ga which serves as a selective contrast agent. The intricate three-dimensional shape of the fatigue crack, as well as the crack stops observed on the sample surface, are correlated to the grain structure of the material. Complementary measurements of the grain orientation on the sample surface by electron backscattering diffraction (EBSD) allow us to discuss and interpret the observations in terms of possible crack propagation mechanisms. Re ´sume Ĺa micro-tomographie utilisant le rayonnement X synchrotron est utilisée pour visualiser et analyser simultanément la forme tridimensionnelle de grains cristallographiques contenant une fissure courte de fatigue dans un alliage d'aluminium de moulage. La visualisation des grains repose sur la décoration des joints de grains de l'alliage d'aluminium par le gallium liquide qui sert de marqueur sélectif. La forme compliquée de la fissure de fatigue au sein du matériau ainsi que les arrêts de la fissure observées à la surface du matériau apparaissent corrélés à la structure des grains sous la surface de l'échantillon. La determination de l'orientation cristallographique des grains par diffraction des électrons rétro-diffusés (EBSD) permet de discuter et interpreter ces re´sultats en terme de mécanismes de propagation des fissures. Introduction The Paris law which relates the growth rate da/dN of a fatigue crack to the stress intensity factor ⌬K is extensively used to predict the fatigue life of engineering components. However, in many engineering metallic alloys, it has been observed that small cracks (with a size typically smaller than a few grains [START_REF] Miller | The short crack problem[END_REF]), do not show a "regular" growth with ⌬K (as described by the Paris law) but some accelerations and retardations due to their interactions with microstructural features [START_REF] Miller | The behaviour of short fatigue cracks[END_REF]. If neglected, this so called short crack phenomenon can lead to a serious over estimation of the fatigue life [START_REF] Suresh | Fatigue of materials 1st ed[END_REF]. It is therefore a crucial problem to tackle. 
Among all the microstructural features which can alter the propagation of short cracks, grain boundaries have been observed to play a key role [START_REF] Miller | The behaviour of short fatigue cracks[END_REF][START_REF] Zurek | The effect of grain size on fatigue growth of short cracks[END_REF][START_REF] Lankford | The growth of small fatigue cracks in 7075-T6 aluminium[END_REF][START_REF] Lankford | The influence of microstructure on the growth of small fatigue cracks[END_REF][START_REF] Zhang | Measurement of plastic zones associated with small fatigue cracks by selected area channeling patterns[END_REF][START_REF] Turnbull | The effect of grain size on the fatigue of commercially pure aluminium[END_REF][START_REF] Turnbull | The effect of grain size on fatigue crack growth in aluminium magnesium alloy[END_REF]. However, the experimental study of the interaction of short fatigue cracks with grain boundaries is greatly complicated by the fact that it is a three dimensional problem. A few studies have shown that the abnormal growth of fatigue cracks as observed from the surface of a sample can be accounted for by the presence of some sub surface microstructural features [START_REF] Clement | Short crack behaviour in nodular cast iron[END_REF][START_REF] Tokaji | Limitations of linear elastic fracture mechanics in respect of small fatigue cracks and microstructure[END_REF]. Those studies are based on serial polishing and optical observations of fatigue samples, a quite tedious and destructive experimental technique which strongly limits the number of observations and which is not free of artifacts (e.g. crack blurring in ductile materials). In the last five years, however, the availability of new powerful synchrotron X-ray sources has opened the way for three dimensional (3D) imaging of the interior of materials through the use of high resolution microtomography. This technique which gives direct images of internal defects in the micrometer range has been used, for example, to study fatigue crack closure in Al-Li alloys [START_REF] Guvelenir | Direct observation of crack opening as a function of applied load in the interior of a notched tensile sample of Al-Li 2090[END_REF] but also damage development in various heterogeneous materials by repeated inspection of the specimens at different deformation stages [START_REF] Buffie | Characterisation of internal damage in a MMC p using X-ray synchrotron phase contrast microtomography[END_REF][START_REF] Savelli | Analyse des me ´canismes et mode ´lisation de la dure ´e de vie en fatigue d'alliages de moulage AS7G03[END_REF][START_REF] Martin | Characterisation by X-ray micro-tomography of cavity coalescence during superplastic deformation[END_REF][START_REF] Babout | Characterization by X-ray computed tomography of decohesion, porosity growth and coalescence in model metal matrix composites[END_REF][START_REF] Owen | A synchrotron X-ray study of a TiSiC f composite during in situ straining[END_REF][START_REF] Everett | Spatial distri-bution of voids in HY-100 steel by X-ray tomography[END_REF]. The use of coherent X-rays has also been shown to improve the detection of cracks and microstructural features in strained metal matrix composites [START_REF] Cloetens | Observation of microstructure and damage in materials by phase sensitive radiography[END_REF]. In this work, high resolution microtomography has been used to visualize simultaneously the 3D shape of a short fatigue crack in a cast aluminium alloy and of the surrounding grains. 
The grain visualisation is achieved by applying liquid Ga to the sample surface. Ga penetrates along the grain boundaries into the bulk of the material and leads to the formation of microscopic grain boundary wetting layers which can be detected by X-ray absorption imaging [START_REF] Ludwig | Penetration of liquid gallium into the grain boundaries of aluminium: a synchrotron radiation microtomographic investigation[END_REF]. The 3D images obtained with this technique clearly show that the surface growth of short cracks is strongly correlated to sub surface microstructural features. Besides, the analysis of the crack shape with respect to the local grain boundaries also reveals the importance of the crack-plane tilt and twist for the propagation rate of stage I cracks as suggested by other authors [START_REF] Zhai | A crystallographic mechanism for fatigue crack propagation through grain boundaries[END_REF]. Experimental methods A model AS7G03 cast alloy containing artificial pores has been used in this study. A complete description of the microstructure and fatigue deformation mechanisms of this material can be found elsewhere [START_REF] Buffie | Experimental study of porosity and its relation to fatigue mechanisms of model Al-Si7-MgO.3 cast Al alloys[END_REF][START_REF] Savelli | Analyse des me ´canismes et mode ´lisation de la dure ´e de vie en fatigue d'alliages de moulage AS7G03[END_REF], only the main details are given in what follows. The alloy, which chemical composition is given in Table 1, was chill mould cast in a silicon carbide mould. The artificial pores were obtained by introducing at 1093 K a mixture of hydrogen (H 2 ) and argon (Ar) gases in the liquid metal through a propeller. After cooling, the alloy was solution treated during 10 h at 813 K, quenched in cold water, and aged during 6 h at 433 K (T6 heat treatment). The resulting microstructure is shown in Fig. 2a andb. It consists in large grains with an average size of 300 µm (2D measurements), containing a mixture of primary α aluminium dendrites separated by the eutectic phase containing small silicon particles with an average size of 3 µm. The fatigue behaviour of the material has been studied during constant stress amplitude tests (120Յ⌬ σ Յ 240 MPa) at room temperature in air with a load ratio R ϭ 0.1 and a frequency of 10 Hz. One fatigue sample has been prepared for tomographic investigation according to the sequence described below: 1. An interrupted fatigue test with a stress amplitude ⌬σ ϭ 160 MPa was carried out and the surface growth of a crack initiated on one sample's corner was monitored each 25.000 cycles during 375.000 cycles by optical microscopy. 2. After the test, the grain orientation at the surface around the crack was determined by the use of electron backscattering diffraction (EBSD, surface XZ defined in Fig. 1). 3. One needle shape tomography sample with a square cross section of 0.7 * 0.7 mm 2 and a length of 5 mm was carefully cut with a wire saw around the fatigue crack (see Fig. 1). 4. A first tomographic scan was performed in order to study the 3D shape of the crack in the bulk of the sample. 5. The sample was then exposed to liquid gallium and annealed for 1 h at 100 °C. 6. A second tomographic scan, revealing the grain boundary structure was recorded. 7. The (now fragile) sample has been embedded into epoxy resin, cut along the XY plane and polished in order to determine the crystallographic orientation of sub-surface grains via EBSD measurements. 
Microtomographic imaging The tomographic imaging experiments were carried out at the ID19 beamline of the European Synchrotron Radiation Facility (ESRF), Grenoble. The dedicated microtomographic set-up consists of a precision mechanic sample stage combined with a fast, high resolution detector system. The detector system itself is based on a 26 µm thick transparent fluorescent screen (LAG: Eu) [START_REF] Koch | X-ray camera for computed microtomography of biological samples with micrometer resolution using Lu 3 Al 5 O 12 and Y 3 Al 5 O 12 scintillators[END_REF] which transforms the X-rays into visible light and microscope optics to project the image on the Peltier cooled 1024 2 CCD camera. The latter has a dynamic range of 13 bits, fast readout (0.06 s/frame) and a low dark current (3 e-/s) [START_REF] Labiche | FRELON camera: Fast REadout LOw Noise[END_REF]. The spatial resolution was determined with the knifeedge method to be 1.7 µm (full width half maximum of the line spread function) for the optical magnification used in our experiments (effective pixel size: 0.95 µm). The need for ultimate spatial resolution in order to resolve cracks with micron and sub-micron widths restricts the field of view and consequently the sample cross-section 2 to approximately 1 mm. In order to obtain high quality 3D images the incoming 'white' synchrotron radiation was monochromatized to 15 keV using a synthetic Rb-B 4 C multilayer with large energy bandwidth (⌬λ / λ ϭ 2 × 10 Ϫ 2 ) [START_REF] Morawe | Design and performance of graded multilayers as focusing elements for X-ray optics[END_REF]. Typical scan times (1000 projections over 180 degrees, 13 bit dynamic range, 1 µm pixelsize) are of the order of 20 min. After some basic image processing (subtraction of dark current, flat field correction), the 3D distribution of the linear attenuation coefficient µ is calculated using the filtered backprojection algorithm [START_REF] Kak | Principles of computerized tomographic imaging[END_REF]. The resulting threedimensional dataset consists in a set of isotropic voxels 3 and can be visualised and analysed using dedicated 3D image processing software packages. It should be noted that the coherence properties of third generation synchrotron beams can be used to perform phase sensitive X-ray imaging. By increasing the sample to detector distance to a few centimeters, sample defects and heterogeneities give rise to pronounced interference effects (Fresnel diffraction, in-line holography). The use of this phenomenon leads to a contrast enhancement which improves the sensitivity of the tomography technique for detecting cracks in materials by at least one order of magnitude [START_REF] Cloetens | Observation of microstructure and damage in materials by phase sensitive radiography[END_REF]. Part of the work presented in this article is based on this phase sensitive imaging technique. 
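The projection preprocessing and reconstruction steps described above (dark-current subtraction, flat-field correction, filtered backprojection) can be sketched as follows. This is a generic illustration relying on scikit-image's iradon routine (recent versions) for slice-by-slice backprojection, not the beamline reconstruction code; the array names and the filter choice are assumptions.

import numpy as np
from skimage.transform import iradon

def preprocess(raw, dark, flat):
    """Dark-current subtraction and flat-field normalization, then -log to get line integrals."""
    trans = (raw - dark) / np.clip(flat - dark, 1e-6, None)
    return -np.log(np.clip(trans, 1e-6, None))

def reconstruct_slice(sinogram, angles_deg):
    """Filtered backprojection of one sinogram (detector_pixels x n_projections)."""
    return iradon(sinogram, theta=angles_deg, filter_name='ramp', circle=True)

# 1000 projections over 180 degrees, as in the experiment.
angles = np.linspace(0.0, 180.0, 1000, endpoint=False)
# With proj a stack of corrected projections of shape (n_angles, n_rows, n_cols),
# the sinogram of one slice is sino = proj[:, row, :].T and the reconstructed
# attenuation map is mu_slice = reconstruct_slice(sino, angles).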
Grain boundary imaging Currently only a few experimental techniques allow us to characterize the three-dimensional arrangement of grains in the bulk of a polycrystalline material: serial sectioning, either mechanically or by the focussed ion beam technique (limited to grain sizes below 20 µm [START_REF] Inkson | 3D determination of grain shape in a FeAl-based nanocomposite by 3D FIB tomography[END_REF]), and X-ray tracking [START_REF] Poulsen | Three-dimensional maps of grain boundaries and the stress-state of individual grains[END_REF][START_REF] Lauridsen | Tracking: a method for structural characterization of grains in powders or polycrystals[END_REF]. The sectioning techniques have the disadvantage of being extremely laborious and destructive. X-ray tracking on the other hand is a nondestructive technique based on Laue diffraction and provides the shape and crystallographic orientation of the individual grains in the bulk of a polycrystalline material. However, the current spatial resolution of the latter technique does not exceed 20-40 µm and it does not allow to detect the existence of cracks. Due to their small lateral width, natural grain boundaries can not be detected by the change in X-ray attenuation. However, for the case of Al alloys we recently pointed out the possibility to visualize the 3D grain structure by using liquid Ga as a contrast agent for X-ray absorption tomography [START_REF] Ludwig | Penetration of liquid gallium into the grain boundaries of aluminium: a synchrotron radiation microtomographic investigation[END_REF]. When applied to the surface of a polycrystalline Al sample, liquid Ga (T m ϭ 29.7 °C) penetrates rapidly (Х 10 µm/s) [START_REF] Ludwig | Development and Applications of synchrotron radiation microtomography[END_REF] along the grain boundaries where it tends to form liquid layers of up to one micrometer thickness. Due to the fact that Ga has a considerably higher X-ray attenuation coefficient than Al, these layers can be easily detected by means of high resolution absorption microtomography (see Fig. 5). In order to improve the visibility of the wetting layer, the sample is annealed for about one hour at 100 °C before the acquisition of the tomographic dataset. This annealing leads to volume diffusion of Ga into the neighboring grains and increases the width and the continuity of the wetting layers in the tomographic reconstruction. Note that only grain boundaries which fulfill the wetting condition ν gb Ͼ 2ν sl , where ν gb and ν sl denote the grain boundary and solid-liquid interface tensions respectively, are prone to penetration. Special grain boundaries and low angle grain boundaries are known to have lower surface tensions and might therefore not be detected by this technique. It should also be noted, that the penetration process leads to the complete loss of the mechanical strength of the material (grain decohesion by the liquid metal) and this technique has, therefore, to be considered as a destructive one which can only be applied at the end of the fatigue experiment. In order to be able to characterize and visualize the individual grains in a convenient way, the tomographic dataset has to be segmented and labeled in order to extract the grain structure. The transformation of the tomographic raw data, containing the three-dimensional arrangement of the grain boundaries, into its segmented representation is shown in Fig. 5. 
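A possible implementation of the grain extraction outlined here, whose steps (thresholding of the Ga-rich boundary network, morphological cleaning, marker-based 3D watershed, labeling) are detailed in the next paragraphs, is sketched below with scipy/scikit-image primitives. The threshold value and structuring-element sizes are placeholders that would need tuning on the real data.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.morphology import binary_closing, ball

def label_grains(volume, boundary_threshold, min_seed_distance=15):
    """Segment a Ga-decorated tomographic volume into labeled grains (3D watershed)."""
    # 1. Binarise: bright voxels belong to the Ga-wetted boundaries (and the crack).
    boundaries = volume > boundary_threshold
    # 2. Morphological cleaning to close small gaps in the wetting layers.
    boundaries = binary_closing(boundaries, ball(2))
    # 3. Distance transform of the grain interiors; its maxima seed the watershed.
    interior = ~boundaries
    dist = ndi.distance_transform_edt(interior)
    seeds = dist > min_seed_distance           # crude marker selection
    markers, n_grains = ndi.label(seeds)
    # 4. Watershed flooding from the markers, constrained to the grain interiors.
    labels = watershed(-dist, markers, mask=interior)
    return labels, n_grains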
All voxels belonging to the same crystallographic grain (a grain is defined by the closed grain boundary surface which delimits it) have been attributed the same colour label. Despite its apparent simplicity for the human eye, the computer implementation of this procedure is not straightforward and requires a considerable amount of image processing. In a first step the data set is binarised by applying an intensity threshold. In order to keep all voxels belonging to the grain boundaries, this threshold has to be chosen rather low and results in a relatively noisy representation of the grain boundary structure. This binarised dataset in turn has to be cleaned by a series of morphological filter operations before it can be passed to the actual segmentation routine (3D watershed algorithm) [START_REF] Gratin | Mathematical morphology in three dimensions[END_REF][START_REF] Meyer | Mathematical morphology: from two dimensions to three dimensions[END_REF]. The segmented volume is finally labeled and can now be visualized with adapted volume rendering software packages. Results Observations of the crack at the surface of the sample: the validation of the method The results of the crack propagation experiment are summarized in Fig. 2. The optical micrograph on the left side (Fig. 2a) shows the crack path at the sample surface (plane XZ on Fig. 1). As already observed for other samples of this material, crack initiation occurred on a Si particle/matrix interface at the immediate vicinity of a pore. Fig. 2b) shows an EBSD map of the same zone and reveals the position and orientation of the grains along the crack path. The crack, as well as the silicon rich particles cannot be indexed in the map and appear as black points. Note, that the Si particles are not only located at the grain boundaries, but also inside grains. The crack growth chronology, as determined from surface observations, is reported in Fig. 2e. From this figure, it can be seen that the crack stopped at the surface for Х 50000 and Х 25000 cycles on two grain boundaries visible on the EBSD surface map (points 'X' and 'Z'). A shorter arrest of the crack (Х 20000 cycles) was also observed (point 'Y') but does not seem to be correlated with the presence of a grain boundary at the surface. The tomographic reconstructions of the sample surface before and after application of gallium are shown on Fig. 2c andd respectively. Although the presence of the sample surface deteriorates the quality of those images slightly 4it appears that the crack shape observed on reconstructed images is very close to that observed by optical microscopy. The white contrast in Fig. 2d is due to the enhanced absorption caused by the presence of liquid Ga which has penetrated the grain boundaries [START_REF] Ludwig | Penetration of liquid gallium into the grain boundaries of aluminium: a synchrotron radiation microtomographic investigation[END_REF]. Note that the gallium also wets the crack surface. 5 The rather large width of the Ga rich zone (about 10 µm) is a result of the volume diffusion from the wetted grain boundary into the adjacent grains during the mentioned heat treatment at 100°C. Again, a good match is obtained between the reconstructed gallium layers and the grain boundaries as observed by EBSD (Fig. 2b). From those comparisons it is concluded that both the crack and the grain shapes in the interior of the sample can be studied with a good accuracy from the tomographic images. 
Observations of the crack in the bulk of the sample The crack path in the interior of the sample is strikingly more complex than the relatively linear propagation observed at the surface. This is illustrated on Fig. 3a andc which show two tomographic slices parallel to the XY plane defined in Fig. 1. Three marked changes in the direction of the crack can be seen on Fig. 3a. In addition, when going further in the material along the same direction, small crack segments appear, apparently disconnected from the main crack front (Fig. 3d). For the post-mortem EBSD analysis of the sample (# 7 in the experimental procedure) the sample was cut along the plane depicted in Fig. 3a in order to analyse the grain orientations in this particular section of the sample. This study revealed that the planar portions of the crack are close (within 10°) to crystallographic {111} planes. A better understanding of the crack morphology can be obtained when looking directly at a three dimensional representation of the crack surface. However because of the dual phase microstructure of the material (aluminium matrix plus silicon particles) this 3D representation is not straightforward. Indeed, in the reconstructed images, the gray levels of voxels corresponding to the crack and to the Si particles are both lower than the gray levels of voxels corresponding to the Al matrix; a mere segmentation based on gray levels would therefore extract the crack and the particles simultaneously. To avoid this problem, a sequence of 3D morphological filter operations [START_REF] Gratin | Mathematical morphology in three dimensions[END_REF] has been used and allowed the selective extraction of the crack surface from the 3D tomographic data. The result of this operation is illustrated on Fig. 3b andd for the two slices described previously. It can be seen that the binary image, although slightly less accurate that the initial one, still provides a good description of the crack. It is thus possible to obtain a 3D rendering of the binary data set representing the crack. Such a rendering is shown on Fig. 4 where the crack surface intersection, examined in EBSD and optical microscopy is indicated by an arrow. A close examination of this figure reveals that, in fact, the apparently unconnected crack segments described in Fig. 3c do belong to the crack front. 6This illustrates clearly the difficulty of studying crack propagation from simple 2D observations. In the central part of the crack front (label #2), one can also distinguish the bifurcation number 3 shown previously in Fig. 3a. One striking feature of this 3D view is that the crack front does not appear as a regular circular line. It is rather formed by three independent main segments (labelled 1,2 and 3) around 200 µm in length separated by uncracked Al matrix invisible in this rendering. This suggests that the crack front propagates first in "easy" zones of the crystal which might be grains showing favourable crystallographic orientations. In order to check this assumption we can now use the information about the 3D shape of the grains along the crack path. Correlation between the crack shape and the internal grain distribution along the crack front A first approach for studying the relation between the grain distribution and the crack morphology is to look at 2D reconstructed slices obtained after the wetting of the metal by gallium. Fig. 5a shows the identical section like Fig. 3a after application of the liquid metal. 
It appears that two of the abrupt changes in the crack propagation (1& 3) are related to the passage of the crack into a new grain. Close to label 1 the crack deviates by Ϸ 45°when crossing the grain boundary between grains A and B. Then, for unknown reasons, the crack deviates by Ϸ 90°in the center of grain B. Finally, when crossing the grain boundary between grains B and C the crack deviates again by Ϸ75°. Thanks to the 3D nature of our observations, it is possible to study how the crack goes from grain B into grain C all along the crack front by simply looking at the different slices parallel to the XY planes. However, as discussed previously, this is better achieved by considering directly 3D images of the grains. 3D grain images have been obtained from the initial 3D data by morphological segmentation of the data set (see section 2.2). Fig. 5 illustrates the transformation of the tomographic reconstruction into the segmented and labelled representation. It can be seen that the grain boundary structure after segmentation (Fig. 5b) is in good agreement with the tomographic raw data: all grains have been detected and only a few inaccuracies can be observed (e.g. the additional small grain in the lower part, arising from over-segmentation). As already mentioned, gallium also wets the crack surface. Thus, when the crack is transgranular, the corresponding grain is split in two distinct cells by the segmentation algorithm. In that case, parts belonging to the same grain were attributed identical colours (in the 3D images), but still exist as distinct cells (see for example cells A 1 and A 2 in Fig. 5b) in the numerical representation of the data set. Both data sets, the one containing only the crack and the labelled one containing the grains, were carefully aligned with respect to each other and finally merged for a simultaneous 3D visualization of both the crack and the grains. An example of such a visualization is shown in Fig. 6 which represents the 3D shape of two of the grains observed by EBSD in Fig. 2b. For a direct comparison the grains have been assigned the same colors in the EBSD map and in the 3D image. Part of grain G2 has been set transparent in the visualization: the smooth surface on the right side corresponds to a tomographic section close to the sample surface observed by optical microscopy and the rather rough surface (nearly parallel to the YZ plane) corresponds to one crack face in the grain. It is interesting to note that the point Y7 corresponds well to the position, where the sub-surface crack front (illustrated schematically by the black lines in Fig. 6a had to cross a section of the grain boundary lying parallel to the crack front. A possible interpretation of this observation is given at the end of section 4.1. The bifurcation number 3 on Fig. 5a can also be studied with the help of the 3D rendering of the grains. The results of the analysis are shown in Figs. 7 and8. The first figure shows the 3D shape of the grain which is responsible for the crack bifurcation. It appears (Fig. 8a andb) that the crack enters a new grain (the one depicted in Fig. 7) along the curved lined A A". This line can be further divided in two sections: a first section AA' corresponding to a large advance (Х 100 µm) of the crack in the grain by tilting around an axis which is roughly the intersection of the crack front with the boundary of the grain; and a second section A'A" corresponding to a much smaller advance of the crack (Х 10-20 µm) in the new grain by twisting of the crack front. 
A possible explanation of the preferential advance of the crack along A-A' might be given in terms of a continuity condition for the crack surface, as discussed in section 4.2.
4. Discussion
To the best of our knowledge, this is the first time that a simultaneous 3D image of a short fatigue crack and of the surrounding crystallographic grains has been obtained. The combination of the 3D tomographic measurements with conventional 2D surface observations (crack growth history and EBSD mapping) provides interesting information about various aspects of the interaction of short fatigue cracks with the grain structure in polycrystalline material.
4.1. Crack arrest at grain boundaries
One may distinguish two types of models describing the influence of the grain structure on fatigue crack propagation in polycrystals:
1. Models based on the varying elastic properties between neighboring grains.
2. Models based on the dislocation mechanisms and crystal plasticity ahead of the crack tip.
As representative of the first class of models, we consider here the recent work by Ravichandran and Li [33], which accounts for the influence of the varying elastic modulus in the grains due to the change in grain orientation. Assuming isostrain conditions, these authors have established a method to calculate the local values of the stress intensity factor K along a crack front in heterogeneously stressed material, and their model predicts strong variations of the local K values in the case of microstructurally short cracks. In order to test the applicability of this model, we calculated the elastic modulus of the surface grains along the loading direction, using the EBSD mapping and the relationship [START_REF] Nye | Physical properties of crystals[END_REF]

E = [ s_11 - 2 ( s_11 - s_12 - s_44/2 ) ( l_1^2 l_2^2 + l_2^2 l_3^2 + l_3^2 l_1^2 ) ]^{-1}   (1)

where the terms s_ij correspond to the elastic compliances of Al and the l_i to the coordinates of the loading direction, expressed in terms of the vectors of the unit cell. The results are summarized in Table 2. Due to the weak coefficient of anisotropy of Al (1.2 compared to 2.5 in the case of Fe [START_REF] Cottrell | The mechanical properties of matter[END_REF]), the elastic moduli show only moderate variations (7% with respect to the mean value) and the above-mentioned mechanism is not expected to play a dominant role. Indeed, two of the crack stops observed at the sample surface (points X and Z) occur at the transition from a grain of low modulus into a grain of high elastic modulus, whereas one would expect the opposite behavior (acceleration rather than retardation) of the crack propagation rate in this situation. This, together with the experimental evidence of localized plasticity ahead of the crack tip [START_REF] Savelli | Analyse des mécanismes et modélisation de la durée de vie en fatigue d'alliages de moulage AS7G03[END_REF], seems to indicate that the second class of mechanisms plays a preponderant role in the present case.
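The moderate modulus variation quoted in Table 2 can be checked against equation (1) using literature single-crystal compliances of aluminium; the s_11, s_12, s_44 values below are textbook figures, not data from this study, and orientations are given as direction cosines in the crystal frame.

import numpy as np

# Single-crystal compliances of Al (1/GPa), literature values (assumed).
s11, s12, s44 = 1.57e-2, -0.57e-2, 3.51e-2

def young_modulus(direction):
    """Directional Young's modulus of a cubic crystal, Eq. (1)."""
    l = np.asarray(direction, dtype=float)
    l /= np.linalg.norm(l)
    J = (l[0] * l[1])**2 + (l[1] * l[2])**2 + (l[2] * l[0])**2
    return 1.0 / (s11 - 2.0 * (s11 - s12 - 0.5 * s44) * J)

for d in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
    print(d, f"{young_modulus(d):.1f} GPa")
# -> roughly 64 GPa along <100> and 76 GPa along <111>, i.e. a full spread of about
#    +/-9% around the mean, which bounds the ~7% grain-to-grain variation quoted above.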
The common point of these mechanisms is the postulate that some plasticity has to be initiated in the new grain before the crack can enter it [START_REF] Zhang | Measurement of plastic zones associated with small fatigue cracks by selected area channeling patterns[END_REF][START_REF] Turnbull | The effect of grain size on the fatigue of commercially pure aluminium[END_REF][START_REF] Lankford | The growth of small fatigue cracks in 7075-T6 aluminium[END_REF]. As a result the "waiting time" in the grain boundary should be a function of the misorientation between the two grains. Such stops are likely to happen frequently along the crack front, when the crack is small. The quantitative analysis of the crack propagation data (i.e. the observed crack arrests at grain boundaries) in terms of such a dislocation based model, taking into account the local differences in the activation of the crystallographic slip systems, is under progress and will be reported elsewhere. Irrespective of the underlying mechanism, the restriction of the observation of crack propagation on the sample surface may lead to erroneous Table 2 Elastic moduli along the loading direction, determined from the EBSD mapping (Fig. 2b interpretation of the growth kinetics as the threedimensional nature of the crack is neglected. There is experimental evidence that significant changes in the aspect ratio 2c/a (c surface length of the crack, a bulk length) may occur during the growth of short cracks [START_REF] Ravichandran | Three-dimensional crack-shape effects during the growth of small surface cracks under fatigue in a titanium-base alloy[END_REF]. The ideal characterization procedure should therefore include the continuous monitoring of the 3D crack shape during the fatigue experiment and the determination of the grain orientations in the bulk of the polycrystalline sample. The former task can be achieved by repeated microtomographic inspection during an interrupted fatigue test and the latter task can be performed with the help of a new non-destructive, synchrotron based diffraction technique [START_REF] Poulsen | Three-dimensional maps of grain boundaries and the stress-state of individual grains[END_REF][START_REF] Lauridsen | Tracking: a method for structural characterization of grains in powders or polycrystals[END_REF]. For the current study we do not dispose over such detailed information on the crack growth history in the bulk of the sample. Nevertheless it is interesting to note that the crack retardation observed at point Y on the sample surface corresponds well to the position where the sub-surface segment of the crack has to pass a section of the grain boundary which lies parallel to the crack front. A possible interpretation of the retardation at point Y can therefore be given in terms of a crack arrest below the sample surface, which in turn influences the crack growth at the surface. The dashed lines in Fig. 6 have been added to illustrate the possible evolution of the crack front according to the above mentioned scenario. Tilt and twist The 3D tomographic reconstruction gives direct access to geometrical and morphological parameters of the crack surface in the bulk of the sample. This allows us, for example, to determine in a quantitative way the mixed-mode character of short cracks which often consist of several segments growing in different directions with respect to the loading axis. 
Coupled with in situ monitoring of the crack growth during the fatigue experiment, this information could be used in the future to assess the growth kinetics in a full fracture-mechanical context, for example by including the effect of crack deflections on the stress intensity factor [START_REF] Suresh | Crack deflection: Implications for the growth of long and short fatigue cracks[END_REF]. However, interesting information can also be obtained from the analysis of the 3D crack morphology at a given state during the fatigue life, i.e. without knowledge of the crack history. The investigated crack consists of three segments with preferred growth directions. The analysis of the strongly inclined central segment with respect to the surrounding grain structure clearly revealed that the deflection of the crack is related to the passage of the crack into a new grain (Figs. 7 and 8). Two factors are to be taken into account for the propagation of the crack. The first one is the activation of some plasticity on a well-oriented crystallographic plane in the next grain. Our observations show that in the studied case the crack plane is close to a {111} plane. The second factor that has to be considered is the geometry of the crack surface itself. It has been shown recently [START_REF] Zhai | A crystallographic mechanism for fatigue crack propagation through grain boundaries[END_REF], by EBSD surface observations of an Al-Li alloy, that short fatigue cracks growing on crystallographic planes tend to propagate by tilting rather than twisting of the crack front. The reason for this is illustrated schematically in Fig. 9. The propagation of the crack by simple tilt of the crack front is more favorable, in terms of energy, than propagation by twist: in the second case the fracture surface created is larger. Our observations of the crack penetrating the grain depicted in Fig. 7 are in favor of this mechanism: the propagation of the crack in this grain is larger in the region where it can be accommodated by simple tilt (line AA' in Fig. 8). On the contrary, the grain resists propagation when it has to be accompanied by a twist of the crack front (between A' and A"). The observation of significant changes in the growth directions of cracks in the same material by conventional post mortem fractographic analysis (Fig. 10) gives additional support to this mechanism: rather than propagating homogeneously into all grains close to the crack tip, the crack seems to follow the perimeter of some of the grains until propagation into these grains starts in a lateral direction. It should be emphasized that the analysis of a larger number of fatigue cracks, including the crystallographic characterization of the adjacent grains, is necessary to corroborate the validity of this tilting/twisting mechanism.

Conclusions

This paper presents a new technique for the simultaneous three-dimensional visualization of fatigue cracks together with the surrounding grain structure in the bulk of polycrystalline Al samples. Penetration of liquid Ga along the Al grain boundaries leads to the formation of microscopic wetting layers which in turn can be detected by high-resolution X-ray absorption tomography. The studied short fatigue crack appears to consist of three crack segments corresponding to crack growth in different grains. The stop-and-go motion of the crack, observed by optical microscopy at the sample surface, is clearly related to the grain structure directly at and close to the sample surface.
The tentative interpretation of these crack stops in terms of an elastic model supposing isostrain conditions fails, presumably due to the weak coefficient of anisotropy of Al. The three-dimensional analysis of the crack with respect to the grain structure indicates that the passage of the crack into a new grain occurs preferentially from regions on the grain perimeter, where the growth can be accommodated by tilting of the crack plane. On the contrary, the crack has difficulties in entering the new grain from regions where twisting of the crack plane is required and tends to grow along the grain perimeter. Future experiments should include repeated microtomographic characterization of the 3D crack shape during (interrupted) fatigue tests. This, together with the characterization of the grain orientation in the bulk of the material via modern synchrotron diffraction techniques will provide the necessary information to assess the problem of fatigue crack propagation in terms of a full threedimensional fracture mechanical context and to analyse the physical mechanisms governing crack propagation. Fig. 1 . 1 Fig. 1. Schematic representation of the geometry of the fatigue sample. The indicated small parallelepiped (0.7×0.7×5 mm) containing the short fatigue crack was wire-cut from the original specimen and characterized by synchrotron radiation microtomography. Fig. 2 . 2 Fig. 2. a) Optical micrograph of the fatigue crack at the sample surface b) EBSD mapping of the same zone c) tomographic reconstruction of the sample surface before application of Gallium (energy: 15 keV, distance: 5 mm) d) tomographic reconstruction after application of Gallium e) plot of crack length versus number of cycles as observed by optical microscopy during the interrupted fatigue test. Fig. 3 3 Fig. 3. a,c): Tomographic slices perpendicular to the surface of observation revealing the complicated morphology of the crack in the bulk of the sample (energy: 15 keV, distance 42 mm). b,d): corresponding slices after segmentation by 3D morphologic reconstruction. Fig. 4 . 4 Fig. 4. Three dimensional rendering of the crack surface. Note the abrupt change in inclination of the central arc-shaped crack segment (#2). The finger-like crack extensions (see arrows) correspond to the crack fragments depicted in Fig. 3d. Fig. 5 . 5 Fig. 5. a) Tomographic reconstruction after application of Ga. The depicted slice corresponds to the same section shown in Fig. 3a (energy: 15 keV, distance: 5 mm). b) Result of 3D morphologic segmentation: each cell detected by the segmentation algorithm receives a distinct label, represented here by different colours. Cells belonging to the same grain (A 1 &A 2 and B 1 &B 2 ) have been attributed identical colours. c) The superposition of the tomographic and the segmented data set shows a good agreement between both representations. Fig. 6 . 6 Fig. 6. a) 3D rendering of the local grain structure close to the crack stops observed at the sample surface at points X and Y. Both stops might be attributed to the pinning of the crack front at the grain boundary between grains G1 and G2. b) EBSD mapping of the corresponding sample area. Fig. 7 . 7 Fig. 7. The abrupt deviation in the crack propagation direction coincides with the transition into a new grain. a) Volume rendering of the crack surface as in Fig. 3 but from a different viewpoint. b) Rendering of the crack surface and the grain that caused the sudden deviation. Fig. 8 . 8 Fig.8. a) 3D rendering of the crack inside the new grain (set to transparent). 
b) same scene with the surrounding grains. The crack entered the new grain between points A and A' by tilting. Between points A' and A" the crack followed the contour of the grain. It is likely that the preferential attack along A-A' is governed by compatibility with favorable crystallographic orientations inside the grain.

8 Defined as the ratio of the elastic constants 2c_44/(c_11 - c_12).

Fig. 9. The adaptation of a crack to a new plane may involve tilting and twisting. Note that twisting requires the creation of additional surfaces (black) in order to preserve the connectivity of the crack surface.

Fig. 10. SEM micrograph of the fracture surface of a sample tested at 120 MPa (N_R = 2075200). The characteristic crack growth patterns indicate significant changes in the propagation direction of the crack.

Table 1. Chemical composition (mass %) of the studied model cast Al alloy: Al Si Fe Mg Ti Sb Cr: rest, 92.5, 6.6-7, 0.09-0.14, 0.29-0.34, 0.11-0.14, 0.12-0.14, 0.2, ≤0.01.

Current address: Lab GEMPPM, INSA-Lyon, 69621 Villeurbanne Cedex, France.
For tomographic reconstruction the sample has to fit laterally entirely into the field of view.
3D equivalent of the 2D pixel; represents the smallest digital volume of the material from which the X-ray attenuation is determined.
On reconstructed slices in the bulk of the material, individual Si particles can be more clearly visualized, see for example Fig. 3a.
It is always possible, however, to distinguish the crack from a grain boundary by comparing with the tomographic scan obtained before the application of gallium.
Details labeled "finger" in Fig. 4.
Retardation of surface crack propagation for about 20,000 cycles.

Acknowledgements
We would like to thank P.H. Jouneau (INSA Lyon) and P. Buffat (CIME, Lausanne) for the EBSD characterization. We are also much indebted to D. Jeulin and S. Bouchet (Ecole des Mines, Paris) who provided and implemented the 3D image segmentation routines. D. Bellet and Y. Brechet (INPG, Grenoble) are acknowledged for useful discussions on various aspects of the work.
04100359
en
[ "phys.phys.phys-comp-ph" ]
2024/03/04 16:41:20
2023
https://inria.hal.science/hal-04100359/file/main.pdf
Ivan Duchemin Antoine Levitt Computing photoionization spectra in Gaussian basis sets We present a method to compute the photoionization spectra of atoms and molecules in linear response time-dependent density functional theory. The electronic orbital variations corresponding to ionized electrons are expanded on a basis set of delocalized functions obtained as the solution of the inhomogeneous Helmholtz equation with gaussian basis set functions as right-hand side. The resulting scheme is able to reproduce photoionization spectra without any need for artificial regularization or localization. We demonstrate that it is able to produce accurate spectra for semilocal exchange-correlation functionals even using relatively small standard gaussian basis sets. Introduction Time-dependent functional theory in the linear response regime (LR-TDDFT) is widely used to compute excitation energies of molecules. The traditional way to compute these properties involves solving the Casida equations [START_REF] Mark | Time-dependent density functional response theory for molecules[END_REF], which are the discretized LR-TDDFT equations in a basis set of localized functions (typically, gaussian basis sets). This is very convenient for the excitations lying below the ionization potential, for which the orbital variations are localized functions. This however fails to reproduce the photoionization spectrum, turning a continuous function into a series of infinitely sharp peaks (Dirac deltas). These peaks approximate the function in a weak sense (when integrated against smooth functions) in the limit of complete basis sets [START_REF] Dupuy | Finite-size effects in response functions of molecular systems[END_REF], but pointwise values are not easily obtained. Various techniques have been used to remedy this. The simplest and most general technique is to use an artificial dissipation parameter η. This effectively adds a constant imaginary part to the Hamiltonian, broadening the infinitely sharp peaks and producing a continuous function. Another related technique is the use of complex absorbing potentials, where the imaginary part is only made to act away from the molecule, effectively attempting to broaden only the states that correspond to ionized electrons [START_REF] Jg Muga | Complex absorbing potentials[END_REF]. Another variation on the same idea is to add the imaginary part to the molecular orbitals energies directly [START_REF] Coccia | Ab initio lifetime correction to scattering states for time-dependent electronic-structure calculations with incomplete basis sets[END_REF]. When implemented in the time domain, these techniques can be understood as adding an artificial dissipation, resulting in states with finite lifetimes (complex energies). These schemes shift the poles of the response function away from the real axis into the lower complex plane, resulting in response functions that can mathematically be expressed as sums of Lorentzians. Howerver, this does not respect the mathematical structure of photoionization spectra (infinitely sharp peaks before the ionization potential, continuous function with sharp changes around the ionization potentials as well as possible resonances). As a result, when the basis sets are small (as is the case for typical gaussian basis sets), spectra are often unsatisfactory. 
Another class of methods attempts to recover the continuous photoionization spectrum from the discrete peaks by fitting rational functions to quantities that are computable on localized basis sets, such as moments of the oscillator strengths or values of the polarizability in the complex plane: examples include Stieljes imaging or Padé extrapolation (see [START_REF] Nunes | Molecular photoionization and photodetachment cross sections based on l 2 basis sets: Theory and selected examples[END_REF] for a review). These methods are intrinsically numerically unstable and need a non-trivial manual parameter selection. More sophisticated schemes like the complex scaling and exterior complex scaling methods use the analytic continuation of the solutions to transform oscillatory tails into decaying ones, resulting in more accurate 1 spectra [START_REF] Cerioni | Accurate complex scaling of three dimensional numerical potentials[END_REF]. These schemes are however non-trivial to implement, and require significant fine-tuning of parameters (as do the dissipation-based schemes). A more principled class of methods involve solving an approximate form of the equation exactly outside of the computational domain, resulting in a Dirichlet-to-Neumann map that acts as an effective boundary condition [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]. These schemes are however hard to implement in systems without spherical symmetry. In this paper, inspired by a method we recently developed to compute resonances in locally perturbed periodic media [START_REF] Duchemin | Efficient extraction of resonant states in systems with defects[END_REF], we propose a new scheme that does not rely on artificial dissipation or localization methods, and works for arbitrary molecules without symmetries. The method can be summarized as follows. The LR-TDDFT equations for the orbital variations δψ ± i are of the Helmholtz form (±ω + i0 + + ε i + 1 2 ∆)δψ ± i = f where f is a localized function (that depends self-consistently on all the δψ ± i ) and ε i < 0 are the Kohn-Sham one-particle energies. The i0 + in this equation means that the equation is to be solved for finite positive values iη, and η is to be taken to tend to zero after the calculation. From the form of the Green function of the operator ±ω + i0 + + ε i + 1 2 ∆, it can be seen that, in the limit η → 0 + , for ω > -ε i , δψ + i will be delocalized. Physically, δψ + i represents an electron ionization; the positive values of η correspond to imposing an outgoing wave boundary condition. It is not appropriate to discretize the delocalized δψ + i on a localized basis set; in fact, doing so results in a singular photoionization limit that, in the limit η → 0 + , tends to a finite sum of Dirac masses located at the Casida excitation energies. These then have to be regularized to obtain a continuous spectrum. Instead, we perform the change of variable δψ + i = (ω + i0 + + ε i + 1 2 ∆) -1 δφ + i , and discretize φ + i (which is localized) on a standard gaussian basis set. Equivalently, we discretize δψ + i on a basis set consisting of solutions of the Helmholtz equation with gaussian basis set functions as right-hand side. By respecting the structure of the equations (localized excitation or delocalized ionization, depending on the value of ±ω + ε i ), this method allows us to compute photoionization spectra directly. 
Compared to the standard method of solving the Casida equations, it is much more accurate, yielding reliable spectra with very moderate basis sets. It can easily be adapted to compute resonances, and it is fully general, being suited to molecules as well as atoms. The flip side is an added computational cost which, although formally of the same cubic scaling as usual TDDFT methods, has a higher prefactor, mostly due to the need for frequency-dependent integrals on a real-space grid. These are however highly parallelizable.

Methods

Model

We consider a molecule with N spin-paired electrons modeled using time-dependent adiabatic Kohn-Sham density functional theory with a semilocal functional, and an electrostatic potential $V_{\rm nucl}$ originating from the nuclei. The ground-state orbitals $\psi_i$ satisfy, for all $i \in \{1, \dots, N/2\}$,

$H[\rho]\,\psi_i = \varepsilon_i\,\psi_i$

where, in atomic units,

$H[\rho] = -\tfrac{1}{2}\Delta + V_{\rm tot}[\rho], \qquad V_{\rm tot} = V_{\rm nucl} + V_{\rm Hxc}[\rho], \qquad V_{\rm Hxc}[\rho](r) = \int \frac{\rho(r')}{|r-r'|}\,dr' + v_{\rm xc}(\rho(r)).$

The total density is $\rho(r) = 2\sum_{i=1}^{N/2} |\psi_i(r)|^2$. We choose the ground-state orbitals to be real for simplicity, but the scheme extends trivially to complex orbitals. Consider now a perturbing potential $\delta V_P$. In the time-harmonic regime, the first-order response can be described by the Sternheimer equations [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]

$(\pm\omega + i\eta + \varepsilon_i - H[\rho])\,\delta\psi_i^{\pm} - (f_{\rm HXC}\,\delta\rho)\,\psi_i = \delta V_P\,\psi_i.$

The variation in the Hartree-exchange-correlation kernel is given by

$(f_{\rm HXC}\,\delta\rho)(r) = \int \frac{\delta\rho(r')}{|r-r'|}\,dr' + v_{\rm xc}'(\rho(r))\,\delta\rho(r),$

and the variation in density by

$\delta\rho(r) = 2\sum_{i=1}^{N/2} \psi_i(r)\,\big(\delta\psi_i^{+}(r) + \delta\psi_i^{-}(r)\big).$

This defines the total variation in polarization $\delta P = \int r\,\delta\rho(r)\,dr$. When $\delta V_P(r) = -e \cdot r$, the linear relationship $\delta P = \alpha(\omega + i\eta)\,e$ defines the polarizability tensor $\alpha$, a $3 \times 3$ matrix. Finally, the photoionization cross-section is given by

$\sigma(\omega) = \lim_{\eta \to 0^+} \frac{4\pi\omega}{c}\,\frac{1}{3}\,\mathrm{Im}\,\mathrm{Tr}\,\alpha(\omega + i\eta),$

with c the speed of light, and will be our main observable of interest.

Integral form and delocalization

The standard approach to discretizing these equations is to expand the $\delta\psi_i$ in a basis, which leads to the usual Casida equations. This is however inefficient when $\omega$ is greater than $-\varepsilon_i$, at which point the electron becomes ionized and $\delta\psi_i^{+}$ is delocalized. To see this, we write the Sternheimer equations as

$(\pm\omega + i\eta + \varepsilon_i + \tfrac{1}{2}\Delta)\,\delta\psi_i^{\pm} = V_{\rm tot}\,\delta\psi_i^{\pm} + (f_{\rm HXC}\,\delta\rho)\,\psi_i + \delta V_P\,\psi_i. \qquad (1)$

Since $V_{\rm tot}$ and $\psi_i$ are localized, the right-hand side is localized. Introduce the free Green's function $G_0(r, r'; z)$, the kernel of the inverse of the operator $z + \tfrac{1}{2}\Delta$, well-defined when $\eta > 0$. We can then reformulate the Sternheimer equations in integral form

$\delta\psi_i^{\pm} = G_0(\pm\omega + i\eta + \varepsilon_i)\,\big[\,V_{\rm tot}\,\delta\psi_i^{\pm} + (f_{\rm HXC}\,\delta\rho)\,\psi_i + \delta V_P\,\psi_i\,\big].$

An explicit computation shows that the kernel of the Green function is given by

$G_0(r, r', z) = -\frac{1}{2\pi}\,\frac{e^{i k(z)\,|r - r'|}}{|r - r'|},$

where $k(z)$ is the square root of $2z$ with positive imaginary part (so that $G_0$ is localized when $\eta > 0$). When $z = \pm\omega + i\eta + \varepsilon_i$ approaches the real axis, this Green function is oscillatory when $\mathrm{Re}(z) > 0$, and decaying when $\mathrm{Re}(z) < 0$. In the region $\omega > 0$, $-\omega + \varepsilon_i$ is always negative, and therefore $\delta\psi_i^{-}$ will be localized. However, the behavior of $\delta\psi_i^{+}$ depends on whether $\omega < -\varepsilon_i$ or $\omega > -\varepsilon_i$ (below or above the ionization threshold). In the case $\omega < -\varepsilon_i$, $\delta\psi_i^{+}$ will be localized; in the case $\omega > -\varepsilon_i$, it will be oscillatory.
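To make the two regimes concrete, the following sketch (not from the paper; the numerical values are arbitrary) evaluates the kernel G0 given above for a z with positive and with negative real part, showing the oscillatory, slowly decaying behavior above threshold versus the exponential decay below threshold.

```python
import numpy as np

def k_of_z(z):
    """Square root of 2z with positive imaginary part."""
    k = np.sqrt(complex(2 * z))
    return k if k.imag >= 0 else -k

def G0(d, z):
    """Free Green's function kernel G0(r, r'; z) = -exp(i*k(z)*d) / (2*pi*d), d = |r - r'|."""
    return -np.exp(1j * k_of_z(z) * d) / (2 * np.pi * d)

d = np.linspace(0.5, 30.0, 4)
eta = 1e-3
# Above threshold (Re z > 0): oscillatory, with a slowly decaying 1/d envelope.
print(np.abs(G0(d, 0.3 + 1j * eta)))
# Below threshold (Re z < 0): exponential decay, roughly exp(-sqrt(2*|Re z|)*d) / d.
print(np.abs(G0(d, -0.3 + 1j * eta)))
```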
This explains why the usual methods, based on the discretization of the $\delta\psi_i$, have trouble reproducing the ionization region $\omega > -\varepsilon_i$.

Our method

Our approach is to use the change of variables $\delta\psi_i^{+} = G_0(\omega + i\eta + \varepsilon_i)\,\delta\phi_i^{+}$ when $\omega + \varepsilon_i > 0$, and to discretize $\delta\phi_i^{+}$ in a Gaussian basis set instead of $\delta\psi_i^{+}$ directly. This automatically ensures the correct asymptotic behavior for $\delta\psi_i^{+}$. The $\delta\phi_i^{+}$ satisfy the integral equation

$\delta\phi_i^{+} - V_{\rm tot}\,G_0(\omega + i\eta + \varepsilon_i)\,\delta\phi_i^{+} - (f_{\rm HXC}\,\delta\rho)\,\psi_i = \delta V_P\,\psi_i. \qquad (2)$

When $\pm\omega + \varepsilon_i \le 0$, we discretize $\delta\psi_i^{\pm}$ in the usual way on a basis of Gaussian-type orbitals $(\chi_\alpha)_{\alpha = 1, \dots, N_b}$. When $\pm\omega + \varepsilon_i > 0$, we discretize $\delta\phi_i^{\pm}$ on the same basis:

$\delta\phi_i^{+}(r) = \sum_{\alpha=1}^{N_b} a_{i\alpha}^{+}\,\chi_\alpha(r)$ when $\omega + \varepsilon_i > 0$, and $\delta\psi_i^{\pm}(r) = \sum_{\alpha=1}^{N_b} b_{i\alpha}^{\pm}\,\chi_\alpha(r)$ otherwise.

This sets up a linear system in the coefficients (a, b). Note that from (2) it follows that $\delta\phi_i^{+}$ is as localized as $V_{\rm tot}\,\delta\psi_i^{+}$. Therefore, the decay of the total mean-field potential determines the localization of $\delta\phi_i^{+}$, and hence the effectiveness of the numerical method. When using semilocal density functionals (with exponentially decaying exchange-correlation potentials), this decay is determined by the electrostatic potential. We can therefore rank systems by decreasing order of localization: atoms (exponentially decaying potential), nonpolar molecules (potential decaying as $1/r^3$), polar molecules ($1/r^2$), charged systems ($1/r$). When using hybrid functionals incorporating Hartree-Fock exchange, the effective potential seen by ionized electrons behaves as $1/r$ [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF], and we expect our method to have difficulties.

Solution of the linear system

We project the Sternheimer equations (1) and (2) on the basis to obtain

$(S - K_i^{+})\,a_i^{+} - g_i[a, b] = h_i$ when $\omega + \varepsilon_i > 0$,
$\big((\pm\omega + \varepsilon_i + i\eta)\,S - H\big)\,b_i^{\pm} - g_i[a, b] = h_i$ otherwise, $\qquad (3)$

where

$S_{\alpha\beta} = \langle \chi_\alpha | \chi_\beta \rangle$,
$H_{\alpha\beta} = \langle \chi_\alpha | -\tfrac{1}{2}\Delta + V_{\rm tot} | \chi_\beta \rangle$,
$K^{+}_{i,\alpha\beta} = \langle \chi_\alpha | V_{\rm tot}\,G_0(\omega + i\eta + \varepsilon_i) | \chi_\beta \rangle$,
$g_{i\alpha}[a, b] = \langle \chi_\alpha | f_{\rm HXC}\,\delta\rho[a, b] | \psi_i \rangle$,
$h_{i\alpha} = \langle \chi_\alpha | \delta V_P | \psi_i \rangle$.

The computation of S, H and $h_i$ is standard. For K, the matrix elements are not analytic. However, the values $(G_0(\pm\omega + i\eta + \varepsilon_i)\chi_\beta)(r)$ can be computed analytically for all r (see Appendix). Therefore, we introduce an integration grid with points $r_l$ and weights $w_l$, and approximate $K_{\alpha\beta}$ as

$K_{\alpha\beta} \approx \sum_l w_l\,\chi_\alpha(r_l)\,V_{\rm tot}(r_l)\,(G_0(\pm\omega + i\eta + \varepsilon_i)\chi_\beta)(r_l).$

The values of $V_{\rm tot}(r_l)$ arising from the nuclei and exchange-correlation terms are computed exactly, as is usual in DFT. Note that since the delocalizing operator $G_0$ is then multiplied by localized quantities, the grid only needs to be sufficient to integrate localized functions. In practice, we found that a coarse exchange-correlation grid was often adequate. The computation of g, assuming that all the (a, b) are known, is more conveniently reformulated as $g_{i\alpha} = \langle \delta\rho[a, b] | f_{\rm HXC}\,\psi_i\,\chi_\alpha \rangle$. The values of $f_{\rm HXC}\,\psi_i\,\chi_\alpha$ on the grid are precomputed, using the same technique as for the computation of $V_{\rm tot}$. Then, $\delta\rho[a, b](r) = 2\sum_{j=1}^{N/2} \psi_j(r)\,(\delta\psi_j^{+}(r) + \delta\psi_j^{-}(r))$ is formed on the grid, using the values of $G_0(\pm\omega + i\eta + \varepsilon_i)\chi_\alpha$ on the grid and the (a, b) coefficients. The linear system (3) is solved with the GMRES iterative solver, preconditioned by $S - K_i^{+}$ (for the a block) and $(\pm\omega + \varepsilon_i)\,S - H$ (for the b block). The additional computational cost compared to standard iterative TDDFT computations is summarized in Table 1.
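A schematic sketch of how the solve of (3) can be organized is given below. The matrices are random stand-ins for S, H and K_i^+, the coupling term is a crude placeholder for g_i[a, b], and there is a single ionized and a single bound channel; none of this reflects the actual implementation, only the block structure and the block-diagonal preconditioning described above.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 40                                   # number of basis functions (placeholder)
omega, eta = 0.8, 1e-2                   # frequency and broadening (placeholder)
eps_bound = -1.5                         # one bound channel: omega + eps < 0

# Dummy stand-ins for S, H, K and the Hxc coupling; in the real method these
# come from Gaussian integrals and the real-space grid described above.
X = rng.standard_normal((n, n))
S = np.eye(n) + 0.005 * (X + X.T)        # overlap-like, symmetric, well conditioned
Y = rng.standard_normal((n, n))
H = 0.5 * (Y + Y.T)                      # Hamiltonian-like, symmetric
K = 0.05 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
C = 0.02 * rng.standard_normal((n, n))   # schematic placeholder for g_i[a, b]

A_a = S - K                              # ionized block, cf. Eqs. (2)/(3)
A_b = (omega + eps_bound + 1j * eta) * S - H   # bound block, cf. Eqs. (1)/(3)

def matvec(x):
    a, b = x[:n], x[n:]
    g = C @ (a + b)                      # crude stand-in for the self-consistent coupling
    return np.concatenate([A_a @ a - g, A_b @ b - g])

A = LinearOperator((2 * n, 2 * n), matvec=matvec, dtype=complex)

# Block-diagonal preconditioner: solve with the uncoupled blocks, as in the text.
lu_a, lu_b = lu_factor(A_a), lu_factor(A_b)
M = LinearOperator((2 * n, 2 * n), dtype=complex,
                   matvec=lambda x: np.concatenate([lu_solve(lu_a, x[:n]),
                                                    lu_solve(lu_b, x[n:])]))

rhs = rng.standard_normal(2 * n).astype(complex)   # placeholder right-hand side (h_i)
sol, info = gmres(A, rhs, M=M)
print("GMRES converged:", info == 0)
```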
Note that these steps are cubic scaling (and, for low-lying excitations where the number of ionized electrons is of the order of unity, quadratic scaling), except for the steps involving $f_{\rm HXC}$. The scaling can be improved using techniques such as the resolution of the identity. However, in our tests, the step involving the computation of the values of $G_0 \chi_\alpha$ on the grid dominated the overall computational time, and therefore we did not optimize the other steps.

Table 1: Dominant scaling of the main operations compared to standard iterative TDDFT. $N_b$ is the number of basis functions, $N_g$ the number of grid points, $N_{\rm occ}$ the number of occupied orbitals, $N_{\rm ionized}$ the number of ionized orbitals. Typically, $N_{\rm ionized} \le N_{\rm occ} \ll N_b \ll N_g$. Additionally, $N_\omega$ is the number of frequencies desired, and $N_{\rm iter}$ is the number of iterations of the iterative solver (typically, ≤ 10).
Operation | Cost
Precomputation of $f_{\rm HXC}\,\psi_i\,\chi_\alpha$ | $N_g N_{\rm occ} N_b + N_b^2 N_g N_{\rm occ}$
Computation of $G_0(\pm\omega + i\eta + \varepsilon_i)\chi_\alpha$ on the grid | $N_\omega N_{\rm ionized} N_g N_b$
Matrix-vector products with K | $N_\omega N_{\rm ionized} N_{\rm iter} N_g N_b$

Results

We implemented the method in the Julia programming language [START_REF] Bezanson | Julia: A fresh approach to numerical computing[END_REF], interfacing with the PySCF package [START_REF] Sun | Recent developments in the pyscf program package[END_REF] to perform the initial setup and DFT run. The integrals described in the Appendix were implemented in the GaIn Fortran library. The code is freely available at https://github.com/antoine-levitt/PhotoionizationGTO.jl and https://gitlab.maisondelasimulation.fr/beDeft/GaIn. All results presented below use the LDA exchange-correlation functional, with no spin polarization. The use of the LDA functional is inadequate to obtain accurate photoionization spectra for the systems studied here, but the emphasis in this paper is on the methodology rather than on the particular results. Unless explicitly mentioned, we used a very coarse exchange-correlation grid (PySCF setting 1), which yielded reasonably accurate results at minimal computational cost. We used the Dunning augmented basis sets [START_REF] Thom | Gaussian basis sets for use in correlated molecular calculations. i. the atoms boron through neon and hydrogen[END_REF][START_REF] Kendall | Electron affinities of the first-row atoms revisited. systematic basis sets and wave functions[END_REF], as provided by the basis set exchange [START_REF] Benjamin P Pritchard | New basis set exchange: An open, up-to-date resource for the molecular sciences community[END_REF]. These basis sets are designed for converging post-Hartree-Fock methods rather than TDDFT properties, and are very suboptimal here. We simply use them to demonstrate that acceptable convergence can be obtained in basis sets not specifically designed for that purpose.

Atoms

Atoms have an exponentially decaying total potential; therefore, we expect the $\delta\phi$ to be exponentially localized, making atoms an ideal case for our method. On atoms, we are able to compare the results to a reference (black line) computed using the atom-specific method of [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]. We focus in this section on the He and Be atoms. Helium is the simplest system, with only one occupied orbital (1s). Accordingly, its TDLDA photoionization spectrum has only one threshold, plotted as a dashed vertical line.
We see that the results are already qualitatively consistent even at extremely coarse basis sets: note for instance that the aug-cc-pvdz basis set only has two basis functions for the p channel relevant here. To check systematic convergence, we used a basis of even-tempered gaussians with N gaussian exponents logarithmically spaced from 0.01 to 10 (these values are chosen somewhat arbitrarily and not optimized). To appreciate how inadequate the Casida method is for computing photoionization spectra on this system, note that, in the largest basis set used here, d-aug-cc-pvqz, and in the frequency range displayed here, there are only two relevant (1s → 2p) excitations, at ω = 0.64 and ω = 0.91. This is clearly not sufficient to reconstruct a full spectrum. The same goes for other approaches such as complex absorbing potentials; these methods, even with optimal parameters, will only move these two poles in the complex plane, which is not sufficient to reproduce the full structure of the spectrum. To illustrate, we plot in Figure 2, for ω = 1, the orbital variation δψ+, as well as the localized δφ+ defined by δψ+ = G0(ω + i0+)δφ+. It is clearly not possible to represent the delocalized variation δψ+ by localized orbitals, but δφ+ is relatively short-range and can therefore reasonably be expanded on a gaussian basis set. Beryllium has two occupied orbitals (1s and 2s), yielding two different thresholds. Here, small standard basis sets are inaccurate, showing a displaced peak after the 2s ionization and unphysical oscillations after the 1s ionization. Using an even-tempered basis set (10 gaussians with exponents logarithmically spaced between 0.01 and 10) yields an almost perfect ionization spectrum. The ionization threshold of the 2s orbital lies below the 1s→2p excitation energy, which turns into a resonance. This resonance, extremely hard to capture with standard damping methods, is present and relatively accurate even with very coarse basis sets. Note that, since we are able to compute analytic continuations of the matrix elements of the free Green function as z crosses the positive real axis, we could compute this resonance directly [START_REF] Duchemin | Efficient extraction of resonant states in systems with defects[END_REF], but we do not pursue this direction in this paper.

Nonpolar molecules

We next try our method on nonpolar molecules, for which the asymptotic decay of the total potential is dictated by the quadrupole moment (thus decaying as 1/r^3). The test was conducted for H2 and CH4, demonstrating a rapid convergence with respect to the basis set used: in both cases the spectrum is already almost converged at the aug-cc-pvdz level. In particular, we find that moving from aug to d-aug basis sets turns out to be much more efficient, convergence-wise, than increasing the zeta level of the basis. This confirms again that the relevant components lie in the asymptotic part of the wave function, supported by the delocalized atomic orbitals.

Polar molecule

We test the method on the strongly polar LiH molecule. The strong dipole moment makes it hard for the method to converge, especially after the second ionization.
Spurious oscillations after the peaks appear, which are consistent with what has been observed using approximate boundary conditions in radial methods [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]. However, we remark that these difficulties do not hinder the capture of both ionization thresholds. Charged systems We use Li + as a benchmark in this case. As any other charged systems, ions have a long-range total potential decaying in 1/r. In such a case, the asymptotic form of the δψ is modified from a plane wave to a Coulomb wave [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]Appendix]. Therefore, the functions δφ become much more delocalized, and our method is inadequate. Accordingly, there are strong oscillations after the ionization threshold, which disappear very slowly when increasing the basis set size; this is consistent with Figure 5 of [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF]. For this example, the coarse integration grids we used before were not sufficient to resolve the fine oscillations, and we used a Gauss-Chebyshev grid with 100 points in the radial direction produced by PySCF. Conclusion We have presented a new method to compute photoionization spectra for TDDFT with semilocal exchangecorrelation functionals in gaussian basis sets, and tested it on atoms and small molecules. The method appears to be very efficient on atoms and nonpolar molecules; it struggles on polar molecules and charged systems. We note the following possible improvements to our methodology • For simplicity, we used standard Dunning basis sets, with exchange-correlation integration grids. Both of these were designed for a different problem (converging ground states orbitals and exchangecorrelation integrals) than the one addressed here, and it is likely that computational efficiency can be significantly improved by tailoring these to our problem rather than using off-the-shelf technology. • The current methodology scales formally as the fourth power of the number of electrons, with the possibility of cubic scaling at the cost of a higher prefactor. It should be possible to reduce this scaling even further using techniques like the resolution of the identity. • Although we tested our scheme on semilocal density functionals, it is in theory straightforwardly adaptable to hybrid functionals. However, since the Sternheimer equation is effectively long-range in these cases [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF], we expect to face the same difficulties as we did in the ionic case. An efficient treatment of the Coulomb potential is, as far as we know, an open problem in the non-radial case. A partial solution could be to truncate artificially the potential; this would yield better basis set convergence and eliminate the oscillations, at the price of a regularization of the problem (particularly important near ionization thresholds). • We only explored linear response TDDFT in the frequency domain; it would be interesting to generalize this methodology to the non-perturbative TDDFT equations in the time domain. 
Recent progress has been made in this direction in [START_REF] Kaye | Eliminating artificial boundary conditions in time-dependent density functional theory using fourier contour deformation[END_REF], using a similar decoupling between the kinetic and potential operators. Finally, we note that we have focused here on atoms and small molecules. Experience suggests that it might be harder to converge photoionization spectra of small molecules than of large systems, because the relatively fine features computed here average out over a molecule, and because the basis sets used for larger molecules cover a larger region of space. Exploring this further is an interesting topic for future research.

Acknowledgments

We are grateful to Karno Schwinn and Julien Toulouse for the reference data on atoms and ions.

Appendix: matrix elements of the Helmholtz kernel in Gaussian basis sets

Our goal is the computation of integrals of the form

$I = \langle g_1, G_0(\omega + i0^+)\,g_2 \rangle = -\frac{1}{2\pi} \int_{\mathbb{R}^6} g_1(r)\,g_2(r')\,\frac{e^{-\lambda |r - r'|}}{|r - r'|}\,dr\,dr'$

when $g_1$ and $g_2$ are gaussian-type orbitals, for complex values of $\lambda$. The case of interest in the present paper is $\lambda = -i\sqrt{2\omega}$ (as well as possible analytic continuations of this). This also includes pointwise values of $G_0(\omega + i0^+)\,g_2$, obtained by taking for $g_1$ the limit of a zero-width gaussian. Explicit formulas have been obtained in the context of range-separated hybrids with Yukawa potentials [START_REF] Ten-No | Initiation of explicitly correlated slater-type geminal theory[END_REF][START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF][START_REF] Akinaga | Range-separation by the yukawa potential in long-range corrected density functional theory with gaussian-type basis functions[END_REF]; we recall here the method of computation. As is standard, it suffices to compute the integral for the primitive radial gaussians $g_1(r) = e^{-\alpha |r - R_1|^2}$, $g_2(r) = e^{-\beta |r - R_2|^2}$. Expressions for integrals involving higher angular momenta can be obtained by differentiating the integrals with respect to the centers $R_1$ and $R_2$. We use the integral representation

$\frac{e^{-\lambda x}}{x} = \sqrt{\frac{2}{\pi}} \int_0^{\infty} dt\; e^{-\frac{1}{2}\left(x^2 t^2 + \lambda^2 / t^2\right)}$

and relatively straightforward but tedious changes of variables to obtain the closed-form result. The computation of derivatives requires special care for numerical stability. We refer to [START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF] for a discussion of the issues and robust methods. We have implemented a similar method to that of [START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF] and generalized it to complex values of $\lambda$.
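As a quick numerical sanity check of the integral representation above (not part of the original derivation; the values of x and λ are arbitrary, chosen with Re(λ²) > 0 so that the integral converges on the real axis):

```python
import numpy as np
from scipy.integrate import quad

x = 1.3                     # arbitrary positive distance
lam = 0.9 + 0.3j            # arbitrary complex lambda with Re(lam**2) > 0

def integrand(t):
    return np.exp(-0.5 * (x**2 * t**2 + lam**2 / t**2))

re, _ = quad(lambda t: integrand(t).real, 0.0, np.inf)
im, _ = quad(lambda t: integrand(t).imag, 0.0, np.inf)
rhs = np.sqrt(2.0 / np.pi) * (re + 1j * im)
lhs = np.exp(-lam * x) / x
print(lhs, rhs)             # the two sides agree to quadrature accuracy
```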
Since the method of [START_REF] Molin | Multi-precision computation of the complex error function[END_REF][START_REF] Al | Computation of the complex error function using modified trapezoidal rules[END_REF] is very fast for the computation of error functions, we found that some of the complexity caused by the avoidance of these computations in [START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF] was not needed, and we use a slightly simplified implementation, using an upward recurrence relation (s1 in [START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF]) for large values of ar, and a Taylor expansion in ar (s3 in [START_REF] Ten-No | New implementation of second-order møller-plesset perturbation theory with an analytic slater-type geminal[END_REF]) for small values. In the resulting closed-form expressions, $a = \alpha\beta/(\alpha + \beta)$ is the combined Gaussian exponent, $r = |R_t - R_s|$, and erfcx denotes the scaled complementary error function, $\mathrm{erfcx}(z) = e^{z^2}\,\mathrm{erfc}(z)$.

Figure 1: Photoionization of Helium, with standard basis sets (left) and even-tempered basis sets (right). The result of the standard Casida method with damping (η = 0.5) in the d-aug-cc-pvqz basis set is shown for comparison. Reference data (black line) from the method of [START_REF] Schwinn | Photoionization and core resonances from range-separated density-functional theory: General formalism and example of the beryllium atom[END_REF].
Figure 2: Orbital variations δψ+ and δφ+ (see text) for Helium at ω = 1.
Figure 3: Photoionization of Beryllium (left), with zoom on the 1s→2p resonance (right).
Figure 4: H2
Figure 5: CH4
Figure 6: LiH
Figure 7: Li+
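For completeness, the scaled complementary error function can be evaluated for the complex arguments needed here through the Faddeeva function w(z) = e^{-z²} erfc(-iz), using the identity erfcx(z) = w(iz). A minimal sketch (the test values are arbitrary; this is not the GaIn implementation):

```python
import numpy as np
from scipy.special import erfc, wofz

def erfcx_complex(z):
    """erfcx(z) = exp(z**2) * erfc(z) for complex z, via the Faddeeva function w(i*z)."""
    return wofz(1j * np.asarray(z, dtype=complex))

# Consistency check against the direct definition at a moderate real argument ...
z0 = 0.7
print(erfcx_complex(z0), np.exp(z0**2) * erfc(z0))
# ... and an evaluation at a complex argument of the kind arising for lambda = -i*sqrt(2*omega).
print(erfcx_complex(1.2 - 0.8j))
```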
00410052
en
[ "shs.psy", "shs.info" ]
2024/03/04 16:41:20
2009
https://shs.hal.science/halshs-00410052/file/Journal_of_Applied_Social_Psychology.pdf
Keywords: HIV/AIDS, Prevention, Media Campaigns In this study we start by recalling some hypotheses on how the media construct an event. Then we focus especially on a linguistic process, the presence of some particular adverbs, that we identified in French daily news media informing about the prevalence and incidence of HIV/AIDS. We claim that this linguistic process is part of the construction of the event and we experimentally test its reception effects. We show that this "adverbial marking" influences one's risk perception, the preventive intentions, the perceived seriousness of the epidemiological situation, and attitudes towards fighting the epidemic. The final discussion addresses the persuasiveness and efficacy of preventive mass media campaigns. . . Indeed, mass media are a leading source of information about important health issues and therefore are targeted by those who aim to influence perceptions and behaviours. Concerning HIV/AIDS, experimental research has focused on how best to compose media mass media draw the public's attention to some questions and contribute to imbue those questions with a sense of urgency. They direct the public's preferences and define which problems are important and how events must be understood, telling what to think about and how to think about it [START_REF] Mccombs | The Agenda-setting Function of Mass-media[END_REF]Combs & Slovic, 1979;[START_REF] Iyengar | News that matters[END_REF][START_REF] Mazur | Risk Perception and News Coverage across Nations[END_REF][START_REF] Weaver | Thoughts on Agenda Setting, Framing, and Priming[END_REF]. The second one refers to the research about the (important) role of the media in the shaping of people's risk perception and claims that the mass media tend to exaggerate some risks and ignore others, sacrificing objectivity for sensationalism, hazards with associated deaths or injuries being longer and more prominently featured than other hazard stories [START_REF] Johnson | Agenda-setting, group conflict, and the social construction of risk[END_REF]Kristiansen, 1983;[START_REF] Karpowicz-Lazreg | Societal risk as seen by French public[END_REF]Af Wåhlberg & Sjöberg, 2000). The third one, referring to the Lippmann's "pseudo environment" idea (1922) claims that the mass media stage the world into representations which are considered by the public as "The real" (McCombs & Gilbert, 1986;[START_REF] Kinder | Public opinion and political action[END_REF] and to some extent matches with the main tenet of the Bandura's Social Learning Theory (1977) according to which the picture the people are given by the media distorts their worldview and makes them unrealistically worried. There is also a large consensus in the human and social sciences that language is an essential aspect of the social construction of reality. The structural power of language is asserted by Berger and Luckmann in their "constructionist theory": "The common objectifications of everyday life are maintained primarily by linguistic signification" (1967, p.67) and also asserted by some social psychologists, supporting the thesis that language is not simply an inert vehicle for communication but also a means for the control of the world around us and its social orders [START_REF] Blakar | Language as a means of social power[END_REF]. 
This notion is also shared by researchers on mass media who highlight the inevitability of constructing meaning through the use of language, considering that factual reality must necessarily be put into words and this phrasing journalistic criteria and was also biased towards the dramatic and sensationalistic aspects [START_REF] Herzlich | Une maladie dans l'espace public. Le sida dans six quotidiens français[END_REF][START_REF] Herzlich | La construction d'un phénomène social : le sida dans la presse française[END_REF][START_REF] Masseran | La mise en scène médiatique du sida. De la peur au fatalisme[END_REF]. Apprehending AIDS as an event constructed through the media means that it is in media discourse and through media discourse that a fact is transformed into its representation. Traditionally, studies concerning linguistics aspects questioned above all the use of metaphors, demonstrating that in Western press these have been widely used to make sense of HIV/AIDS during its construction as a new disease in the public consciousness [START_REF] Sontag | Illness as Metaphor and Aids and its Metaphors[END_REF][START_REF] Pepper | Book review: AIDS and its metaphor[END_REF][START_REF] Lupton | Moral threats and dangerous desires: AIDS in the news media[END_REF][START_REF] Aroni | Looking at the media[END_REF][START_REF] Manning | News and News sources[END_REF][START_REF] Herzlich | Une maladie dans l'espace public. Le sida dans six quotidiens français[END_REF][START_REF] Herzlich | La construction d'un phénomène social : le sida dans la presse française[END_REF]. Our study is to some extent also concerned by linguistic aspects, focusing on some particular linguistic forms frequently identified in journalistic statements, namely some adverbs, and questioning their status. Let's consider the following statements: "AIDS has already killed 300 patients", "In December only, 91 deaths were reported", "Since its emergence, the virus has infected more than 60 million people worldwide, killing more than a third", "In Europe, their number has again increased by 60% from the first to the second half-year", "The infection is spreading at the rate of more than 14000 new cases per day"2 . The figures quoted are indeed raw data and are likely to be associated to scientificity in a collective subjective way of thinking, according to a certain representation of objectivity and the way in which it is revealed through discourse. But it should not hide the fact that these information items have been worked upon and constructed; hence we argue that information is a construct and thereby constructs the reality referred to in the message. It is from this perspective that these particular statements are F o r P e e r R e v i e w investigated, considering them as efficient formatting of reality but also as an insidious and subtle construction of a properly mediatised world endowed with ideological characteristics. The study examined if such statements did not quantify the number of people deceased, ill and/or infected in such a way that this number is seen as having exceeded an acceptable and tolerable threshold, precisely because of the presence of adverbs such as "already", "only" and/or "again"? They seem to put the indicated quantity into relief. It was then assessed if such statements are likely to influence the readers' judgment and appreciation of the epidemiological situation referred to in the message. 
For instance, according to [START_REF] Charaudeau | Grammaire du sens et de l'expression[END_REF], the use of the adverb "already" shows that "the moment the event occurs is deemed premature compared to its expected occurrence" and signals that "a certain reference point, considered as a maximum not to be exceeded, has been overshot" (p.483). This manner of phrasing the epidemiological fact could be seen as a way of influencing its "visibility", its "salient character" and its "character as event", in other words as a means of affecting the way the fact is perceived as regards its importance and seriousness. Method Independent Variables Given that a first objective of this experimental study was to assess the impact of this particular phrasing on perceptions and judgments, we elaborated a text presented as an epidemiological information message regarding HIV/AIDS in which we manipulated the adverbial marking. Concretely, one version was characterized by the presence of adverbs in some statements ("high marking" condition) and another version not ("low marking" F o r P e e r R e v i e w condition). A second objective was to compare the effects of this phrasing (i.e. adverbial marking factor) according to whether the disease to which the experimental message referred was known or not. One version referred to a sexually transmitted infection whose existence the participants knew of, namely HIV/AIDS ("known disease" condition) and a second one referred to a sexually transmitted infection whose existence the participants did not know, namely Paramyxoviridae infection3 ("unknown disease" condition). Dependent Variables The self risk perception. Participants indicated whether they considered that, in comparison with other persons of same age, status and sexual orientation, the risk to be personally infected in the future was 1 "more weaker" -2 "weaker" -3 "the same" -4 "higher" -5 "more higher"4 . The preventive intentions. Participants were asked to indicate on a scale from 1 "not at all" to 7 "absolutely" to what extent they were willing to use a condom during the next sexual intercourse and to what extent they were willing to undertake a screening test over the next six months. The perceived seriousness of the (epidemiological) situation. Participants were asked to rate seven societal problems on a scale from 1 "the most important problem" to 7 "the least important problem the public powers have to solve". Besides HIV/Paramyxoviridae infection, the list referred to road safety, juvenile delinquency, unemployment, pollution, terrorism and urban cleanliness. Then they were asked to rate five public health problems on a scale from 1 "the most important" to 5 "the least important problem the public powers have to solve". Besides HIV/Paramyxoviridae infection, the list referred to cancer, cardiovascular diseases, tuberculosis and obesity. The attitude on how to fight the epidemic. 
Participants were asked to indicate their agreement on a scale from 1 "totally agree" to 7 "totally disagree", considering three "coercive" opinions: "It is time a serologic test is made compulsory for every person one might reasonably think he/she might have been infected the virus", "Anyone having tested positive to a test should be registered in a file that registers the names of anyone newly infected", "It would be useful to set up a customs check for travellers coming from countries where the epidemic is persistent, asking them their serologic status" ; and three "tolerant" opinions: "It is unfair to blame a person infecting his/her partner not knowing he/she is carrying the virus", "Anybody infected by the virus has the right to keep his serologic status secret", "Only the person concerned by the screening test should have access to the result". Participants Ninety nine subjects were involved in this study, randomly assigned to the four experimental conditions defined before. They were all male students from 18 to 22 years old and all sexually active, ninety subjects declaring more than four sexual relations during the last six months. As regards these sexual relations, 35,4% systematically used a condom, 50,5% occasionally and 14,1% never. None has undergone a HIV test over the last 12 months and none knows someone suffering from HIV/AIDS. Results The self risk perception Table 1 near here An ANOVA revealed that the comparative optimism (see Table 1) expressed by the participants who received the version characterized by the presence of adverbs (i.e. "adverbial high marking" condition) was significantly weaker (M = 2.86 versus 2.27, F(1, 97) = 10.06, p <.01], with a more detailed analysis showing however that the difference only approached significance in the "known disease" condition (Gr.3 versus Gr.1, F(1, 95) = 2.92, p = .09 / Gr.4 versus Gr.2, F(1, 95) = 8.39, p <.01). The comparative optimism was also significantly weaker in the "unknown disease" condition (M = 2.83 versus 2.30, F(1, 97) = 8.04, p <.01), with a more detailed analysis showing however that this difference did not approach significance in the "adverbial low marking" condition (Gr.2 versus Gr.1, F(1, 95) = 2.11, p = .13 / Gr.4 versus Gr.3, F(1, 95) = 7.19, p <.01). No significant interaction effect appeared (F<1). The preventive intentions Table 2 near here Regarding the preventive intentions (see Table 2), the same ANOVA revealed that the participants' intention to use a condom during the next sexual intercourse was significantly more important in the "adverbial high marking" condition (M = 4.98 versus 4.23, F(1, 97) = 9.06, p <.01), whatever their knowledge of the disease (Gr.3 versus Gr.1, F(1, 95) = 3.85, p <.06 / Gr.4 versus Gr.2, F(1, 95) = 5.51, p <.03). This intention was also significantly stronger in the "unknown disease" condition (M = 4.91 versus 4.30, F(1, 97) = 5.87, p <.02), a more detailed analysis showing however that this difference did not reach significance in the "adverbial low marking" condition (Gr.2 versus Gr.1, F(1, 95) = 2.38, p = .12 / Gr.4 versus Gr.3, F(1, 95) = 3.85, p <.06). No significant interaction effect appeared (F<1). 
Regardless of the experimental conditions, the intention to practice a screening test was rather weak but the statistical analysis showed that this reluctance for the test was significantly less important in the "adverbial high marking" condition (M = 3.55 versus 2.79, F(1, 97) = 11.61, p <.001), whatever the knowledge of the disease (Gr.3 versus Gr.1, F(1, 95) = 7.88, p <.01 / Gr.4 versus Gr.2, F(1, 95) = 3.89, p <.06). On the other hand, this particular intention did not differ significantly between the "known disease" and "unknown disease" conditions (M = 3.24 versus 3.12, F<1). The perceived seriousness of the (epidemiological) situation Table 3 near here The statistical analysis revealed that when they compared the disease with other societal problems (see Table 3, 1 st ranking), the subjects significantly assigned a greater importance to it as a problem to solve by the public powers in the "adverbial high marking" condition (M = 3.63 versus 4.71, F(1, 97) = 34.24, p <.0001), whatever the knowledge of the disease (Gr.3 versus Gr.1, F(1, 95) = 15.60, p <.001 / Gr.4 versus Gr.2, F(1, 95) = 18.74, p <.0001). On the other hand, this judgment did not differ significantly between the participants informed about the HIV/AIDS (i.e. known disease) and the participants informed about Paramyxoviridae (i.e. unknown disease) (M = 4.31 versus 4.02, F(1, 97) = 2.07, p = .15). The same pattern of results appeared when the disease is compared with other public health problems (see Table 3, 2 nd ranking), the statistical analysis showing that the judgment of importance was significantly greater in the "adverbial high marking "condition (M = 1.61 versus 1.98, F(1, 97) = 7.02, p <.001). However a more detailed analysis revealed that this difference did not reach significance in the "unknown disease" condition (Gr.3 versus Gr.1, F(1, 95) = 6.83, p <.02 / Gr.4 versus Gr.2, F(1, 95) = 1.38). Again the "disease knowledge" factor did not yield a significant effect on this judgement (M = 1.88 versus 1.70, F(1, 97) = 1.67). The attitude as regards the way to fight against the epidemic Table 4.1 near here Whatever the coercive opinion considered, agreement with it was more important in the "adverbial high marking" condition (see Table 4.1). The ANOVA revealed that in this condition, participants supported significantly the compulsory test measure (M = 3.35 versus 3.98, F(1, 97) = 7.93, p <.01), especially when it was a matter of unknown disease (Gr.4 versus Gr.2, F(1, 95) = 5.95, p <.02 / Gr.3 versus Gr.1, F(1, 95) = 2.26, p = .13) ; the identification / registration measure (M = 3.65 versus 4.48, F(1, 97) = 13.43, p <.001), whatever the knowledge of the disease (Gr.3 versus Gr.1, F(1, 95) = 4.16, p <.05 / Gr.4 versus Gr.2, F(1, 95) = 9.67, p <.01) ; the customs check measure (M = 3.92 versus 4.37, F(1, 97) = 5.95, p <.02), with a more detailed analysis showing however that the difference was significant only in the "unknown disease" condition (Gr.4 versus Gr.2, F(1, 95) = 6.08, p <.02 / Gr.3 versus Gr.1, F<1). As regards the "disease knowledge" factor, the same ANOVA showed that it only produced a marginally significant effect in the "adverbial high marking" condition as regard the customs check measure (Gr.4 versus Gr.3, F(1, 95) = 3.31, p = .07). Although the effect of the "adverbial marking" factor was descriptively greater in the "unknown disease" than in the "known disease" condition, no significant interaction effect appeared (F<1). 
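For readers who want to reproduce this kind of analysis, the sketch below runs a 2 (adverbial marking) × 2 (disease knowledge) between-subjects ANOVA of the type reported here on simulated ratings; the data, cell sizes and effect sizes are entirely hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n_cell = 25                                   # hypothetical cell size
marking = np.repeat(["low", "high"], 2 * n_cell)
disease = np.tile(np.repeat(["known", "unknown"], n_cell), 2)
# Hypothetical 1-7 intention ratings with a main effect of adverbial marking only.
score = rng.normal(4.2, 1.2, 4 * n_cell) + np.where(marking == "high", 0.7, 0.0)

df = pd.DataFrame({"marking": marking, "disease": disease, "score": score})
model = smf.ols("score ~ C(marking) * C(disease)", data=df).fit()
print(anova_lm(model, typ=2))                 # F tests for both main effects and the interaction
```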
Table 4.2 near here Regarding the tolerant opinions (see Table 4.2), their agreement was weaker in the "adverbial high marking" condition. The statistical analysis revealed that the difference was As regards the opinion according to which "it is unfair to blame a person for infecting the partner when not knowing her seropositivity", the difference was only marginal in the "unknown disease" condition (Gr.4 versus Gr.2, F(1, 95) = 3.30, p = .07). The same analysis did not reveal a significant effect of the "disease knowledge" factor for any of these tolerant opinions (F<1), nor an interaction effect (F<1). Table 4.3 near here The ANOVA revealed that participants who received the version characterized by an adverbial high marking envisaged a manner to fight against the epidemic which was significantly more coercive (M = 3.38 versus 4.16, F(1, 97) = 19.61, p <.0001), whatever their knowledge of the disease (Gr.3 versus Gr.1, F(1, 95) = 6.99, p <.01 / Gr.4 versus Gr.2, F(1, 95) = 12.71, p <.001). On the other hand, the "disease knowledge" factor had no effect and no interaction effect appeared (F<1). Discussion First, this experiment shows that the introduction of some linguistic adverbs in an epidemiological information message regarding a sexually transmitted infection, strengthens the subjects' self risk perception, reinforces their preventive intentions, in this case, the intention to use a condom and to realize a screening test, and lastly enhances their perception as regards the seriousness and gravity of the problem. Second it also shows that the effect of this linguistic manipulation on these perceptions and intentions is approximately the same, whether the subjects know the disease or not. We will refer to the "informative" and/or "descriptive" dimension of these particular linguistic forms (i.e. adverbs). Indeed, to talk about the "information value" of these adverbs amounts to saying that, due to their linguistic meaning, they contribute to represent the reality, giving to the depicted situation a certain image in which the notions of "excessiveness", "unpredictability", "inexpectability", "exceptionality" and "uncontrollability" are salient. In other words, such a phrasing/wording of the prevalence and the incidence of the disease reinforces the perceived "danger" and "uncontrollability" of the situation. It is worth recalling that for the linguist [START_REF] Charaudeau | Grammaire du sens et de l'expression[END_REF], the use of the adverb "already" shows that "the moment the event occurs is deemed premature compared to its expected occurrence" and signals that "a certain reference point, considered as a maximum not to be exceeded, has been overshot". Similarly, according to the researchers on "Risk perception" [START_REF] Slovic | Perception of risk[END_REF][START_REF] Slovic | Why study risk perception[END_REF], these notions are precisely some of the dimensions a subject uses to judge the riskiness of a given situation or activity and to decide how much risk he is willing to accept (i.e. risk tolerance). Maybe our participants mobilized these particular semantic categories while they processed the message. This study has evident implications for the designers of mass media public health campaigns oriented toward the prevention of sexually transmitted diseases, as it shows that a simple linguistic manipulation is enough to increase the intention to adopt preventive behaviour. But to some extent, its efficacy is limited. 
Indeed, if this discursive strategy acts favourably on prevention, it also reinforces support for coercive measures when it comes to the way of fighting against the spread of the disease. It also illustrates the unwanted consequences a mass media campaign can sometimes produce, in this case discriminatory reactions. Numerous authors have pointed out that designers of HIV/AIDS preventive campaigns have to manage a paradox: how to involve people in the adoption of healthy behaviours without provoking discriminatory reactions (Berrenger, Rosnik, & Kravcisin, 1991; Ménard, La double contrainte de la communication publique sur le Sida; Peretti-Watel, Un risque, ça va ! Trois risques, bonjour les dégâts).

Referring to the "second-order victim-blaming" process (Dressel, Carter, & Balachandran, 1995), we may wonder whether some negative affects mediated the effect on the attitude. Our participants may have considered that the epidemiological situation described in the message was due to the lack of responsibility of certain people, newly infected in spite of the numerous and large preventive campaigns organized until now, and may therefore have judged them badly (i.e., depreciation and blame), considering that "all has been done to prevent them" or "to give them the means to avoid the disease". We can thus hypothesize that the "adverbial high marking" version reinforced these judgments and feelings. However, this attribution-of-blame process implies that subjects are convinced that the health-related topic has been widely covered by mass media campaigns, and hence that people are sufficiently aware of the problem. In other words, it depends on the perception the subjects have of the (past) mass mediatization of the disease. So did this process operate in our study when the communication was about an "unknown disease"? Let us recall that the increased support for coercive policy obtained by reinforcing the adverbial marking of the message was also observed when the message referred to a disease whose existence was unknown to the participants. We can therefore think that, in the "unknown disease" condition, participants did not consider that this particular disease had received wide media coverage, and we can hypothesize that the underlying process is different. Future research should address this question.

The efficacy of the discursive strategy manipulated in this study has to be questioned for another reason. Indeed, we have to question its "real" efficacy, that is, its capacity to convert these preventive intentions into effective behaviours. We will refer now to a main tenet of the Elaboration Likelihood Model, according to which the degree of cognitive elaboration of a message determines the strength of the new attitude: the greater the cognitive elaboration, the more persistent, resistant and predictive of conforming behaviour the resulting attitude is (Petty, The elaboration likelihood model of persuasion; Petty, Elaboration as a determinant of attitude strength: Creating attitudes that are persistent, resistant and predictive of behaviour).
But we have good reasons to think that the cognitive elaboration of the message content was weaker in the "adverbial high marking" condition. These reasons are theoretical and relate to pragmatics and the "intentionalist paradigm" of communication (Grice, 1975; Sperber & Wilson, Relevance: Communication and Cognition; Krauss, Language and social behavior). Indeed, the adverbs we manipulated in this study also have an "argumentative function", that is, they confer on the statements a measure of "argumentative force" or "argumentative orientation" as defined in the theory of argumentation in language: "The presence of some morphemes (nearly for instance) in some sentences gives an intrinsic argumentative orientation to these sentences, predisposing them to be used in some types of conclusions rather than others" (Ducrot, 1980, p. 27), and "The statement's argumentative force or orientation can be defined as the type of conclusions suggested to the recipient, the conclusions that the statement offers as one of the discursive aims" (Anscombre & Ducrot, 1983, p. 149). On the basis of these considerations, we can hypothesize that the presence of these linguistic markers in the message also generated inferences centred on the "speaker's meaning", and that these went together with a less "systematic information processing" of the informative content. Let us recall here that stronger preventive intentions were obtained in the "adverbial high marking" condition. So we may wonder to what extent a change towards more preventive intentions, yielded by this discursive strategy, can lead to effective and lasting behaviour.

So are the mass media a useful tool or an obstacle for those who aim to change behaviour towards more precaution and security? First, it depends on the type of change that such a media discourse is able to produce. Public health campaign designers ideally seek to produce significant long-term effects, and so they have to be certain that the messages they compose are highly processed, a required condition to reinforce the attitude-behaviour link. To this end, they have to develop ingenious and subtle strategies to facilitate the cognitive elaboration of the message, the key to success. The strategy we tested here enhanced preventive intentions. However, although we did not measure the cognitive elaboration of the message, we hypothesized that this strategy could impact it negatively. Generally speaking, we may wonder to what extent mass media messages, given their characteristics and the criteria that direct their construction, are likely to be highly processed. Second, it depends precisely on the objectives that journalists assign to the message. As we said before, there is a large consensus among mass media researchers that journalists are primarily concerned with their audience, so that mass media information is driven first and foremost by criteria that have much to do with drama and sensationalism. Recent studies on the media treatment of emerging diseases such as SARS, mad cow disease or avian flu have shown how the information continues to be directed by these criteria (Washer, Representations of SARS in the British newspapers; Wallis, Disease metaphors in new epidemics: The UK media framing of the 2003 SARS epidemic; Dudo, Dahlstrom, & Brossard, 2007).
We may wonder to what extent messages based on such criteria trigger predominantly affective, stereotyped and automatic reactions rather than rational ones. In her book AIDS and its Metaphors, Susan Sontag wrote that "the most honest attitude which we can have towards the disease consists in purifying it of the metaphor". We will add that the mass media actor must act honestly towards his or her audience. May this be the case for present and future emerging diseases.

Note. Examples of the four versions of the message: "The HIV contaminates in France 6000 new persons per year (…) 3500 HIV infections have been registered from January to June" (low marking/known disease condition) versus "The HIV contaminates in France up to 6000 new persons per year (…) Already 3500 HIV infections have been registered just for the period of January to June" (high marking/known disease condition) versus "The Paramyxoviridae contaminates in France 6000 new persons per year (…) 3500 Paramyxoviridae infections have been registered from January to June" (low marking/unknown disease condition) versus "The Paramyxoviridae contaminates in France up to 6000 new persons per year (…) Already 3500 Paramyxoviridae infections have been registered just for the period of January to June" (high marking/unknown disease condition).

References

Berger, P. L., & Luckmann, T. (1967). The social construction of reality: A treatise in the sociology of knowledge. Garden City, New York: Anchor Books.

Berrenger, J. L., Rosnik, D., & Kravcisin, N. J. (1991). Blaming the victim: When disease-prevention programs misfire. Current Psychology: Research and Reviews, 9(4), 415-420.

Clarke, J. N. (1992). Cancer, heart disease, and AIDS: What do the media tell us about these diseases? Health Communication, 4(2), 105-120.

Combs, B., & Slovic, P. (1979). Newspaper coverage of causes of death. Journalism Quarterly, 56, 837-843.

Dressel, P. L., Carter, V., & Balachandran, A. (1995). Second-order victim-blaming. Journal of Sociology and Social Welfare, 22(2), 107-123.

Ducrot, O. (1980). Les mots du discours. Paris: Editions de Minuit.

Dudo, A. D., Dahlstrom, M. F., & Brossard, D. (2007). Reporting a potential pandemic: A risk-related assessment of avian influenza coverage in U.S. newspapers. Science Communication, 28(4), 429-454.

Entman, R. M. (2004). Projections of power: Framing news, public opinion, and U.S. foreign policy. Chicago: University of Chicago Press.

Fielding, S. L. (1997). Comment: The discursive shaping of reality and mass media. Sociological Imagination, 34(2), 148-154.

Grice, H. P. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and semantics 3: Speech acts (pp. 41-58). New York: Academic Press.
[Table captions (tables not reproduced here): mean (M) and standard deviation (SD) regarding comparative optimism as a function of the adverbial marking and the knowledge of the disease; mean (M) and standard deviation (SD) regarding the approbation of the tolerant opinions as a function of the adverbial marking and the knowledge of the disease.]

Notes. This study only focused on some of the numerous statements collected in a study devoted to the media treatment of AIDS in the French press (Coppola & Camus, 2006). The "unknown disease" was in fact a fictitious disease whose so-called origin was the "Paramyxoviridae virus"; we checked that the participants declared they did not know it. To some extent, this "self risk perception" measure refers to "comparative optimism"; accordingly, when presenting the results, we talk about "comparative optimism".

Table notes. The lower the mean, the higher the priority given to the disease as a problem to be solved by the public authorities (Table 3). The lower the mean, the higher the approbation of the opinion (Tables 4.1 and 4.2). The lower the mean, the more coercive the general attitude, after inversion of the polarity of the scales for the tolerant items (Table 4.3).
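The last table note indicates that the general attitude score is obtained after inverting the polarity of the scales for the tolerant items, so that a lower score always means a more coercive stance. Purely as an illustration of that scoring step, and not as the authors' actual procedure, the sketch below assumes hypothetical item names and 7-point agreement scales:

```python
# Illustrative sketch of the reverse-scoring step described in the table
# note; the item names and the 7-point scale are assumptions.
import pandas as pd

SCALE_MAX = 7  # assumed scale length (1 = full agreement ... 7 = full disagreement)

def reverse_score(item: pd.Series, scale_max: int = SCALE_MAX) -> pd.Series:
    """Invert the polarity of a 1..scale_max rating (1 <-> scale_max, etc.)."""
    return (scale_max + 1) - item

# Hypothetical coercive and tolerant opinion items.
coercive_items = ["compulsory_test", "identification", "customs_check"]
tolerant_items = ["secrecy_right", "sole_recipient", "no_blame"]

def general_attitude(df: pd.DataFrame) -> pd.Series:
    """Average of coercive items and reverse-scored tolerant items.

    Lower values = more coercive general attitude, as in the table note.
    """
    reversed_tolerant = df[tolerant_items].apply(reverse_score)
    return pd.concat([df[coercive_items], reversed_tolerant], axis=1).mean(axis=1)

# Two fictitious respondents: the first strongly pro-coercion, the second not.
df = pd.DataFrame({
    "compulsory_test": [2, 6], "identification": [3, 5], "customs_check": [2, 6],
    "secrecy_right":   [6, 2], "sole_recipient":  [5, 2], "no_blame":       [6, 3],
})
print(general_attitude(df))
```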